Re: Kernel memory leak with x11/nvidia-driver

From: Gary Jennejohn <gljennjohn@gmail.com>
Date: Fri, 5 Feb 2016 10:02:12 +0100
On Thu, 4 Feb 2016 18:05:43 -0800
Mark Johnston <markj@FreeBSD.org> wrote:

> On Thu, Feb 04, 2016 at 05:37:24PM -0600, Eric van Gyzen wrote:
> > On 02/ 3/16 10:54 AM, Eric van Gyzen wrote:  
> > > I just set up a new desktop running head with x11/nvidia-driver.  I've
> > > discovered a memory leak where pages disappear from the queues, never to
> > > return.  Specifically, the total of
> > >      v_active_count
> > >      v_inactive_count
> > >      v_wire_count
> > >      v_cache_count
> > >      v_free_count
> > > drops, eventually becoming /much/ less than v_page_count.  After leaving
> > > xscreensaver running overnight, cycling the saver every 10 minutes, the
> > > system was unusable because only a few MB of memory remained.  (It has
> > > 8 GB physical.)  
> > 
> > In case anyone is curious, /usr/local/bin/xscreensaver-hacks/glmatrix 
> > triggers a fairly fast leak--around 600 pages per second.  
> 
> I'm able to repro this on my workstation. With DTrace I can see that
> glmatrix is allocating pages for an SG object at roughly the rate
> they're being leaked. I took a look at r292373 (based on the history of
> sg_pager.c) and noticed a vm_page_free() call was lost when
> sg_pager_getpages() was simplified.
> 
> The patch below seems to do the trick for me. Could you give it a try
> and confirm that it fixes the problem? I run current+nvidia-driver on
> multiple workstations but hadn't observed a leak until now, so maybe
> there's something additional going on in your case. Then again, I just
> use i3lock. :)
> 
> diff --git a/sys/vm/sg_pager.c b/sys/vm/sg_pager.c
> index 84bfa49..2cccb7ea 100644
> --- a/sys/vm/sg_pager.c
> +++ b/sys/vm/sg_pager.c
> @@ -189,6 +189,9 @@ sg_pager_getpages(vm_object_t object, vm_page_t *m, int count, int *rbehind,
>  	VM_OBJECT_WLOCK(object);
>  	TAILQ_INSERT_TAIL(&object->un_pager.sgp.sgp_pglist, page, plinks.q);
>  	vm_page_replace_checked(page, object, offset, m[0]);
> +	vm_page_lock(m[0]);
> +	vm_page_free(m[0]);
> +	vm_page_unlock(m[0]);
>  	m[0] = page;
>  	page->valid = VM_PAGE_BITS_ALL;
>  

I started looking at this yesterday after seeing the OP and verified
that I was also losing pages.
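
For anyone who wants to watch the counters Eric listed, a userland check
along these lines works (a minimal sketch of my own, not part of the
patch; the sysctl names are the standard vm.stats.vm ones, but the
get_counter() helper is just something I made up for this):

/*
 * Hypothetical checker, not from the patch: sums the page-queue
 * counters under vm.stats.vm and compares the total against
 * v_page_count.  On a leaking kernel the difference keeps growing.
 */
#include <sys/types.h>
#include <sys/sysctl.h>

#include <err.h>
#include <stdio.h>

static u_int
get_counter(const char *name)
{
	u_int val;
	size_t len = sizeof(val);

	if (sysctlbyname(name, &val, &len, NULL, 0) != 0)
		err(1, "sysctlbyname(%s)", name);
	return (val);
}

int
main(void)
{
	u_int total, pages;

	total = get_counter("vm.stats.vm.v_active_count") +
	    get_counter("vm.stats.vm.v_inactive_count") +
	    get_counter("vm.stats.vm.v_wire_count") +
	    get_counter("vm.stats.vm.v_cache_count") +
	    get_counter("vm.stats.vm.v_free_count");
	pages = get_counter("vm.stats.vm.v_page_count");

	/* Pages that sit in no queue at all have leaked. */
	printf("accounted for %u of %u pages, %d missing\n",
	    total, pages, (int)(pages - total));
	return (0);
}

Run it every few seconds while glmatrix is on screen; on an unpatched
kernel the missing figure climbs at roughly the rate Eric reported.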

With this patch applied, no more pages are lost.

Good work!

-- 
Gary Jennejohn