On May 4, 2009, at 6:17 PM, Jeff Roberson wrote:

> On Sat, 2 May 2009, Ben Kelly wrote:
>> Hello all,
>>
>> Lately I've been looking into the "kmem too small" panics that
>> often occur with zfs if you don't restrict the arc. What I found
>> in my test environment was that everything works well until the
>> kmem usage hits the 75% limit set in arc.c. At this point the arc
>> is shrunk and slabs are reclaimed from uma. Unfortunately, every
>> time this reclamation process runs the kmem space becomes more
>> fragmented. The vast majority of the time my machine hits the
>> "kmem too small" panic it has over 200MB of kmem space available,
>> but the largest fragment is less than 128KB.
>
> What consumers make requests of kmem for 128KB and over? What
> ultimately trips the panic?

ZFS buffers range from 512 bytes to 128KB. I don't know of any
allocations above 128KB at the moment. In my workload the panic is
usually caused by zfs attempting to allocate a 128KB buffer, although
sometimes it's only a 64KB buffer.

At one point I hacked in some instrumentation to print the kmem_map
vm_map_entry list when I touched a sysctl mib. Here's a capture I
made during my load test as the fragmentation was occurring:

http://www.wanderview.com/svn/public/misc/zfs/fragmentation.txt

I also added some debugging later to show the consumers of the
allocations. The vast majority of them were from the opensolaris
subsystem. Unfortunately I don't have a capture of that output handy.
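For illustration, instrumentation along those lines boils down to
walking kmem_map and reporting the largest gap between map entries.
Below is a minimal, untested sketch of such a sysctl, assuming the
pre-vmem kmem_map of that era, where free space is simply the gaps
between vm_map entries; the sysctl name and handler are made up here
and are not the code that produced the capture above:

#include <sys/param.h>
#include <sys/systm.h>
#include <sys/kernel.h>
#include <sys/sysctl.h>

#include <vm/vm.h>
#include <vm/vm_param.h>
#include <vm/vm_kern.h>
#include <vm/vm_map.h>

/*
 * Report the largest contiguous free range in kmem_map.  Free ranges
 * are the gaps between allocated vm_map entries, plus the tail gap
 * before the end of the map.
 */
static int
sysctl_kmem_largest_free(SYSCTL_HANDLER_ARGS)
{
    vm_map_t map = kmem_map;
    vm_map_entry_t entry;
    vm_offset_t prev_end;
    u_long gap, largest = 0;

    vm_map_lock_read(map);
    prev_end = vm_map_min(map);
    for (entry = map->header.next; entry != &map->header;
        entry = entry->next) {
        gap = entry->start - prev_end;
        if (gap > largest)
            largest = gap;
        prev_end = entry->end;
    }
    /* Don't forget the gap after the last allocation. */
    gap = vm_map_max(map) - prev_end;
    if (gap > largest)
        largest = gap;
    vm_map_unlock_read(map);

    return (sysctl_handle_long(oidp, &largest, 0, req));
}
SYSCTL_PROC(_vm, OID_AUTO, kmem_map_largest_free,
    CTLTYPE_ULONG | CTLFLAG_RD, NULL, 0,
    sysctl_kmem_largest_free, "LU",
    "Largest contiguous free range in kmem_map");

A counter like this, watched alongside the total free space, is what
makes the problem visible: plenty of free kmem overall, but no single
chunk large enough for a 128KB buffer.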
>> Ideally things would be arranged to free memory without
>> fragmentation. I have tried a few things along those lines, but
>> none of them have been successful so far. I'm going to continue
>> that work, but in the meantime I've put together a patch that tries
>> to avoid fragmentation by slowing kmem growth before the aggressive
>> reclamation process is required:
>>
>> http://www.wanderview.com/svn/public/misc/zfs/zfs_kmem_limit.diff
>>
>> It uses the following heuristics to do this:
>>
>> - Start arc_c at arc_c_min instead of arc_c_max. This causes the
>> system to warm up more slowly.
>> - Halve the rate at which arc_c grows when kmem usage exceeds
>> kmem_slow_growth_thresh
>> - Stop arc_c growth when kmem usage exceeds kmem_target
>> - Evict arc data when kmem usage exceeds kmem_target
>> - If kmem usage exceeds kmem_target then ask the pagedaemon to
>> reclaim pages
>> - If the largest kmem fragment is less than kmem_fragment_target
>> then ask the pagedaemon to reclaim pages
>> - If the largest kmem fragment is less than kmem_fragment_thresh
>> then force the aggressive kmem/arc reclamation process
>>
>> The defaults for the various targets and thresholds are:
>>
>> kmem_reclaim_threshold = 7/8 kmem
>> kmem_target = 3/4 kmem
>> kmem_slow_growth_threshold = 5/8 kmem
>> kmem_fragment_target = 1/8 kmem
>> kmem_fragment_thresh = 1/16 kmem
>>
>> With this patch I've been able to run my load tests with the
>> default arc size and kmem values of 512MB to 700MB. I tried one
>> loaded run with a 300MB kmem, but it panicked due to legitimate,
>> non-fragmented kmem exhaustion.
>
> May I suggest an alternate approach: have you considered placing
> zfs in its own kernel submap? If all of its allocations are of a
> like size, fragmentation won't be an issue and it can be constrained
> to a fixed size without placing pressure on other kmem_map
> consumers. This is the approach taken for the buffer cache. It
> makes a good deal of sense. If arc can be taught to handle
> allocation failures we could eliminate the panic entirely by simply
> causing arc to run out of space and flush more buffers.
>
> Do you believe this would also address the problem?

Using a separate submap might help. It seems that the fragmentation
is occurring due to the interaction of the smaller and larger buffers
within zfs. I believe that in opensolaris, data buffers and meta-data
buffers are allocated from separate arenas. We don't do this
currently, and it may be the cause of some of the fragmentation.
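To make the submap idea concrete, here is a rough sketch of what a
dedicated ZFS submap could look like, modeled on the way buffer_map is
carved out of the kernel map. The names (zfs_map, zfs_buf_alloc, the
512MB size) are made up for illustration, and the kmem_suballoc() /
kmem_malloc() calls are from memory of the VM interfaces of that era,
so treat this as a sketch rather than a proposed patch:

#include <sys/param.h>
#include <sys/systm.h>
#include <sys/kernel.h>
#include <sys/malloc.h>

#include <vm/vm.h>
#include <vm/vm_param.h>
#include <vm/vm_kern.h>
#include <vm/vm_extern.h>
#include <vm/vm_map.h>

/* Hypothetical submap holding all ZFS data buffers. */
static vm_map_t zfs_map;
static vm_offset_t zfs_map_min, zfs_map_max;
static vm_size_t zfs_map_size = 512 * 1024 * 1024; /* would be tunable */

static void
zfs_map_init(void *dummy __unused)
{
    /*
     * Reserve a fixed-size chunk of KVA for ZFS, the same way
     * buffer_map is created.  Any fragmentation stays inside this
     * map instead of eating into kmem_map.
     */
    zfs_map = kmem_suballoc(kernel_map, &zfs_map_min, &zfs_map_max,
        zfs_map_size, FALSE);
}
SYSINIT(zfs_map, SI_SUB_KMEM, SI_ORDER_ANY, zfs_map_init, NULL);

static void *
zfs_buf_alloc(vm_size_t size)
{
    /*
     * M_NOWAIT so a full submap shows up as a NULL return that the
     * caller can handle by evicting arc buffers, rather than a panic.
     */
    return ((void *)kmem_malloc(zfs_map, round_page(size), M_NOWAIT));
}

static void
zfs_buf_free(void *buf, vm_size_t size)
{
    kmem_free(zfs_map, (vm_offset_t)buf, round_page(size));
}

If the arc allocated its data buffers this way, an allocation failure
in the fixed-size submap would come back to the arc as something it
can react to by flushing buffers, which is exactly the "teach arc to
handle allocation failures" part of the suggestion above.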
It also occurred to me that it might be nice if the arc could somehow
share the buffer cache directly.

Unfortunately I am moving this Friday and will probably be unable to
really look at this for the next couple of weeks.

Thanks.

- Ben