Re: r273165. ZFS ARC: possible memory leak to Inact

From: Steven Hartland <killing_at_multiplay.co.uk>
Date: Tue, 04 Nov 2014 12:55:18 +0000
This is likely spikes in the UMA zones used by the ARC.

The VM never cleans UMA zones unless it hits a low-memory
condition, which explains why your little script helps.

Check the output of vmstat -z to confirm.
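
As a rough illustration, a sketch along these lines tallies how much memory
is sitting on the UMA zone free lists (the "name: size, limit, used, free,
..." column layout is assumed from a typical vmstat -z run, so adjust the
field indices if your output differs):

#!/usr/local/bin/python2.7
# Sum the memory parked on UMA zone free lists, as reported by vmstat -z.
import subprocess

out = subprocess.check_output(["vmstat", "-z"])
total = 0
for line in out.splitlines():
    if ":" not in line:
        continue                          # skip header and blank lines
    name, fields = line.split(":", 1)
    cols = [c.strip() for c in fields.split(",")]
    try:
        size, free = int(cols[0]), int(cols[3])
    except (IndexError, ValueError):
        continue                          # not a zone line
    if size * free:
        total += size * free
        print "%-30s %10d KiB free" % (name.strip(), size * free / 1024)
print "total cached on UMA free lists: %d MiB" % (total / 1024 / 1024)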

On 04/11/2014 11:47, Dmitriy Makarov wrote:
> Hi Current,
>
> It seems there is a constant flow (leak) of memory from the ARC to Inact on FreeBSD 11.0-CURRENT #0 r273165.
>
> Normally, our system (FreeBSD 11.0-CURRENT #5 r260625) keeps the ARC size very close to vfs.zfs.arc_max:
>
> Mem: 16G Active, 324M Inact, 105G Wired, 1612M Cache, 3308M Buf, 1094M Free
> ARC: 88G Total, 2100M MFU, 78G MRU, 39M Anon, 2283M Header, 6162M Other
>
>
> But after an upgrade to FreeBSD 11.0-CURRENT #0 r273165 we observe an enormous amount of Inact memory in top:
>
> Mem: 21G Active, 45G Inact, 56G Wired, 357M Cache, 3308M Buf, 1654M Free
> ARC: 42G Total, 6025M MFU, 30G MRU, 30M Anon, 819M Header, 5214M Other
>
> The funny thing is that when we manually allocate and release memory using a simple Python script:
>
> #!/usr/local/bin/python2.7
>
> import sys
>
> if len(sys.argv) != 2:
>     print "usage: fillmem <number-of-megabytes>"
>     sys.exit(1)
>
> count = int(sys.argv[1])
>
> # a tuple of 128K 8-byte slots: roughly one megabyte on a 64-bit system
> megabyte = (0,) * (1024 * 1024 / 8)
>
> # touch 'count' megabytes; the memory is released when the script exits
> data = megabyte * count
>
> invoked as:
>
> # ./simple_script 10000
>
> all those allocated megabytes 'migrate' from Inact to Free, and afterwards they are 'eaten' by the ARC with no problem.
> Then Inact slowly grows back to the level it was at before we ran the script.
>
> Our current workaround is to invoke this Python script periodically from cron.
> This is an ugly workaround, and we really don't like running it on our production systems.
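>
> For illustration, the /etc/crontab entry looks roughly like this (the path, argument and half-hour interval are examples, not our exact setup):
>
> */30  *  *  *  *  root  /usr/local/scripts/fillmem 10000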
>
>
> To answer possible questions about ARC efficiency:
> Cache efficiency drops dramatically with every GiB pushed off the ARC.
>
> Before upgrade:
>      Cache Hit Ratio:                99.38%
>
> After upgrade:
>      Cache Hit Ratio:                81.95%
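>
> (These ratios come from the ARC statistics; just as a sketch, the same number can be recomputed straight from the raw counters, assuming only the kstat.zfs.misc.arcstats.hits and .misses sysctls:)
>
> import subprocess
>
> def arcstat(name):
>     # read one kstat.zfs.misc.arcstats counter via sysctl(8)
>     out = subprocess.check_output(
>         ["sysctl", "-n", "kstat.zfs.misc.arcstats." + name])
>     return int(out.strip())
>
> hits, misses = arcstat("hits"), arcstat("misses")
> print "Cache Hit Ratio: %.2f%%" % (100.0 * hits / (hits + misses))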
>
> We believe the ARC is misbehaving, and we ask for your assistance.
>
>
> ----------------------------------
>
> Some values from configs.
>
> HW: 128GB RAM, LSI HBA controller with 36 disks (stripe of mirrors).
>
> In /boot/loader.conf:
> vm.kmem_size="110G"
> vfs.zfs.arc_max="90G"
> vfs.zfs.arc_min="42G"
> vfs.zfs.txg.timeout="10"
>
> -----------------------------------
>
> Thanks.
>
> Regards,
> Dmitriy