Re: r273165. ZFS ARC: possible memory leak to Inact

From: James R. Van Artsdalen <james-freebsd-current@jrv.org>
Date: Wed, 05 Nov 2014 06:36:26 -0600
On 11/4/2014 5:47 AM, Dmitriy Makarov wrote:
> Funny thing is that when we manually allocate and release memory using a simple Python script:
...
>
> The current workaround is to invoke this Python script periodically from cron.
>
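Something like the following is my guess at the shape of that script
(the chunk size and total are made up, not Dmitriy's actual numbers):

#!/usr/bin/env python
# Dirty a large block of userland memory, then free it, so the
# kernel is pressured into reclaiming Inact pages.  Sizes here are
# illustrative only -- tune them to the machine.
CHUNK = 1024 * 1024            # 1 MB per allocation
COUNT = 8 * 1024               # ~8 GB total

blocks = []
for _ in range(COUNT):
    blocks.append(bytearray(CHUNK))   # bytearray actually touches the pages
del blocks                            # release everything at once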

I wonder if this is related to PR 194513:

https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=194513

That PR is against "zfs recv" hanging in process state "kmem arena",
but it also has a workaround of allocating lots of memory in userland.

But I do not see a lot of inactive memory with that PR.

"zpool history" also hangs sometimes in "kmem arena" but I do not have
a  workaround for that.
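For what it's worth, here is a rough sketch (mine, not from the PR) of
how to list processes sitting on that wait channel; top(1) truncates it
to "kmem a", so I just match on "kmem":

#!/usr/bin/env python
# List processes whose kernel wait channel mentions "kmem",
# using ps(1) on FreeBSD.
import subprocess

out = subprocess.check_output(["ps", "-ax", "-o", "pid,wchan,comm"])
for line in out.decode().splitlines()[1:]:
    if "kmem" in line:
        print(line)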

PR 194513 is filed against 10-STABLE but has been confirmed against CURRENT too.

SUPERTEX:/root# uname -a
FreeBSD SUPERTEX.housenet.jrv 10.1-PRERELEASE FreeBSD 10.1-PRERELEASE #3 r273476M: Wed Oct 22 15:05:29 CDT 2014     root@SUPERTEX.housenet.jrv:/usr/obj/usr/src/sys/GENERIC  amd64
SUPERTEX:/root# top
last pid: 37286;  load averages:  0.03,  0.05,  0.05      up 11+11:24:34  06:25:46
39 processes:  1 running, 38 sleeping
CPU:  0.0% user,  0.0% nice,  0.0% system,  0.0% interrupt,  100% idle
Mem: 6444K Active, 57M Inact, 6475M Wired, 25G Free
ARC: 4753M Total, 862M MFU, 2765M MRU, 52K Anon, 139M Header, 986M Other
Swap: 31G Total, 21M Used, 31G Free

  PID USERNAME    THR PRI NICE   SIZE    RES STATE   C   TIME    WCPU COMMAND
  676 root          1  20    0 25456K  1048K select  8   0:22   0.00% ntpd
  723 root          1  20    0 24112K  1472K select 13   0:09   0.00% sendmail
12105 root          1  20    0   103M 35984K kmem a 11   0:04   0.00% zpool
  693 root          1  20    0 30676K  1684K nanslp 10   0:03   0.00% smartd
  519 root          1  20    0 14508K   684K select  5   0:02   0.00% syslogd