Here are the results of the test you suggested on my system (r293722), nvidia-driver-304-304.128 -- two runs with a break of 40 minutes:

active	inactive	wire	cache	free	total
85441	282221	280649	0	100455	748766
85488	282235	280655	0	100391	748769
85500	282240	280657	0	100372	748769
83226	283338	280692	0	101513	748769
82816	282439	280687	0	102827	748769
[14:01 - 1.52]

[kostya_at_notebook2 9] ~ $ >sudo sh test.sh
active	inactive	wire	cache	free	total
82280	302769	304025	0	58081	747155
82273	302783	304021	0	58081	747158
82247	302809	304021	0	58081	747158
82239	302816	304009	0	58094	747158
82076	302995	304010	0	58077	747158
82080	303002	304010	0	58066	747158
[15:44 - 1.52]

Hope this helps and that you can see the tendency you're after.

With kindest regards,
Kostya Berger

On Thursday, 4 February 2016, 3:56, Ultima <ultima1252_at_gmail.com> wrote:

Just tested your script; there is definitely a memory leak. I also ran into really weird behavior. After starting and stopping an xorg session a few times, tmux completely froze in the session where the script was running. Creating a new window in the session also appeared completely frozen, but only visually -- commands still worked, it just showed a blank black screen. Also, unloading the kernel modules for nvidia and nvidia-modeset (new as of 358.16ish) did not free the memory.

On Wed, Feb 3, 2016 at 8:24 PM, Ultima <ultima1252_at_gmail.com> wrote:
> Apologies, this should have been in my initial reply.
>
> https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=201340
> or here for the attachment:
> https://bz-attachments.freebsd.org/attachment.cgi?id=165694
>
> I haven't actually had a chance to do anything after upgrading
> from stable other than see the corrupted console for myself.
> Lack of time =/
>
> On Wed, Feb 3, 2016 at 2:41 PM, Eric van Gyzen <vangyzen_at_freebsd.org> wrote:
>
>> On 02/03/2016 10:54, Eric van Gyzen wrote:
>> > I just set up a new desktop running head with x11/nvidia-driver.
>> > I've discovered a memory leak where pages disappear from the queues,
>> > never to return. Specifically, the total of
>> >     v_active_count
>> >     v_inactive_count
>> >     v_wire_count
>> >     v_cache_count
>> >     v_free_count
>> > drops, eventually becoming /much/ less than v_page_count.
>>
>> Here is a script to log the data:
>>
>> #!/bin/sh
>>
>> readonly QUEUES="active inactive wire cache free total"
>> readonly FORMAT="%s\t%s\t%s\t%s\t%s\t%s\n"
>>
>> vm_page_counts() {
>>     for queue in $QUEUES; do
>>         if [ "$queue" != "total" ]; then
>>             sysctl -n vm.stats.vm.v_${queue}_count
>>         fi
>>     done
>> }
>>
>> sum() {
>>     s=0
>>     while [ $# -gt 0 ]; do
>>         s=$((s + $1))
>>         shift
>>     done
>>     echo $s
>> }
>>
>> print_counts() {
>>     counts="`vm_page_counts`"
>>     printf "$FORMAT" $counts `sum $counts`
>> }
>>
>> printf "$FORMAT" $QUEUES
>> print_counts
>> while sleep 60; do
>>     print_counts
>> done
>>
>> _______________________________________________
>> freebsd-current_at_freebsd.org mailing list
>> https://lists.freebsd.org/mailman/listinfo/freebsd-current
>> To unsubscribe, send any mail to "freebsd-current-unsubscribe_at_freebsd.org"

Received on Thu Feb 04 2016 - 11:52:41 UTC
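[Editor's note, not part of the original thread: the drop the thread describes can be pulled out of the script's log mechanically. Below is a minimal sketch; the log file name (/tmp/vmlog.txt) is an assumption, and the two data rows are simply the first sample of each of Kostya's two runs copied from above. It flags any drop in the "total" column against the first sample and converts pages to KiB assuming the common 4096-byte page size.]

```shell
# Sample log in the tab-separated format Eric's script prints (header row
# plus one line per minute). These two rows are the first samples of
# Kostya's two runs, taken from the output quoted above.
cat > /tmp/vmlog.txt <<'EOF'
active	inactive	wire	cache	free	total
85441	282221	280649	0	100455	748766
82280	302769	304025	0	58081	747155
EOF

# NR == 1 is the header; NR == 2 sets the baseline total; every later row
# whose "total" (field 6) is below the baseline is reported as leakage.
# Assumes a 4096-byte page, so pages * 4 gives KiB.
awk 'NR == 2 { base = $6; next }
     NR > 2 && $6 < base {
         printf "leaked %d pages (~%d KiB) since start\n", base - $6, (base - $6) * 4
     }' /tmp/vmlog.txt
```

With the two samples above this reports a loss of 1611 pages (about 6.3 MiB) between the runs, matching the shrinking "total" column in Kostya's output.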
This archive was generated by hypermail 2.4.0 : Wed May 19 2021 - 11:41:02 UTC