On Mon, Apr 09, 2007 at 08:42:33PM -0500, Craig Boston wrote:
> On Mon, Apr 09, 2007 at 08:30:35PM -0500, Craig Boston wrote:
> > Even the vm.zone breakdown seems to be gone in current so apparently my
> > knowledge of such things is becoming obsolete :)
>
> But vmstat -m still works
>
> ...
> solaris 145806 122884K - 15319671 16,32,64,128,256,512,1024,2048,4096
> ...
>
> Whoa! That's a lot of kernel memory. Meanwhile...
>
> kstat.zfs.misc.arcstats.size: 33554944
> (which is just barely above vfs.zfs.arc_min)
>
> So I don't think it's the arc cache (yeah I know that's redundant) that
> is the problem. Seems like something elsewhere in zfs is allocating
> large amounts of memory and not letting it go, and even the cache is
> having to shrink to its minimum size due to the memory pressure.
>
> It didn't panic this time, so when the tar finished I tried a "zfs
> unmount /usr/ports". This caused the "solaris" entry to drop down to
> about 64MB, so it's not a leak. It could just be that ZFS needs lots of
> memory to operate if it keeps a lot of metadata for each file in memory.
>
> The sheer # of allocations still seems excessive though. It was well
> over 20 million by the time the tar process exited.

That is a lifetime count of the number of operations, not the current
number allocated ("InUse").

It does look like there is something else using a significant amount of
memory apart from the ARC, but the ARC might at least be the major one
due to its extremely greedy default allocation policy.

Kris
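[For readers decoding the quoted malloc-type line: FreeBSD's vmstat -m
columns are Type, InUse, MemUse, HighUse, Requests, and Size(s), so the
20-million figure is the Requests counter, not current allocations. A
small sketch, assuming that column layout (the sample line is the one
quoted in the mail):]

```shell
# Sample line in the format of FreeBSD's `vmstat -m` output.
sample='solaris 145806 122884K - 15319671 16,32,64,128,256,512,1024,2048,4096'

# Requests ($5) is a lifetime allocation counter; InUse ($2) is the
# number of allocations currently outstanding -- the distinction Kris
# draws above.
printf '%s\n' "$sample" |
    awk '{ printf "InUse=%s MemUse=%s Requests=%s\n", $2, $3, $5 }'
# -> InUse=145806 MemUse=122884K Requests=15319671
```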
This archive was generated by hypermail 2.4.0 : Wed May 19 2021 - 11:39:08 UTC