Re: UMA cache back pressure

From: Jeff Roberson <jroberson_at_jroberson.net>
Date: Mon, 18 Nov 2013 09:11:10 -1000 (HST)
On Mon, 18 Nov 2013, Alexander Motin wrote:

> Hi.
>
> I've created a patch, based on earlier work by avg_at_, to add back pressure 
> to the UMA allocation caches. The problem of physical memory or KVA 
> exhaustion has existed there for many years, and it is now quite critical 
> for improving system performance while keeping stability. Changes made to 
> memory allocation in recent years have improved the situation, but haven't 
> fixed it completely. My patch addresses the remaining problems from two 
> sides: a) by reducing bucket sizes every time the system detects a low 
> memory condition; and b) as a last-resort mechanism for very low memory 
> conditions, by cycling over all CPUs to purge their per-CPU UMA caches. The 
> benefit of this approach is the absence of any additional hard-coded limits 
> on cache sizes -- they are self-tuned, based on load and memory pressure.
>
> With this change I believe it should be safe enough to enable UMA allocation 
> caches in ZFS via the vfs.zfs.zio.use_uma tunable (at least for amd64). I 
> ran many tests on a machine with 24 logical cores (and, as a result, strong 
> allocation cache effects), and with 40GB of RAM the UMA caches allowed by 
> this change doubled the results of the SPEC NFS benchmark on a ZFS pool of 
> several SSDs. To test system stability I ran the same test with physical 
> memory limited to just 2GB; the system successfully survived that, and even 
> showed results 1.5 times better than with just the last-resort measures of 
> b). In both cases tools/umastat no longer shows unbounded UMA cache growth, 
> which makes me believe this approach is viable for longer runs.
>
> I would like to hear some comments about that:
> http://people.freebsd.org/~mav/uma_pressure.patch

Hey Mav,

This is a great start and great results.  I think it could probably even 
go in as-is, but I have a few suggestions.

First, let's test this with something that is really allocator-heavy and 
doesn't benefit much from bucket sizing.  For example, a network forwarding 
test.  Or maybe you could get someone like Netflix, who is using it to push 
a lot of bits with less filesystem cost than zfs and spec.

Second, the CPU binding is a very costly, high-latency operation.  It would 
make sense to do CPU_FOREACH and then ZONE_FOREACH.  You're also biasing the 
first zones in the list: the low memory condition will more often clear 
after you check those first zones, so you might just check it once and 
penalize all zones equally.  I'm also concerned that doing CPU_FOREACH for 
every zone will slow the pagedaemon further.  We have also been working 
towards per-domain pagedaemons, so perhaps we should have a uma-reclaim 
taskqueue that we wake up to do the work?
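
Roughly what I have in mind, as a completely untested sketch.  Note that 
uma_zone_drain_pcpu_all() is a made-up stand-in for whatever per-CPU drain 
hook we would actually add in uma_core.c, and the SYSINIT point and thread 
priority are arbitrary:

#include <sys/param.h>
#include <sys/systm.h>
#include <sys/kernel.h>
#include <sys/lock.h>
#include <sys/malloc.h>
#include <sys/mutex.h>
#include <sys/proc.h>
#include <sys/sched.h>
#include <sys/smp.h>
#include <sys/taskqueue.h>

/* Hypothetical hook: flush the current CPU's buckets for every zone. */
void uma_zone_drain_pcpu_all(void);

static struct taskqueue *uma_reclaim_tq;
static struct task uma_reclaim_task;

static void
uma_reclaim_worker(void *arg __unused, int pending __unused)
{
	int cpu;

	/* Bind once per CPU and drain every zone's cache while we're there. */
	CPU_FOREACH(cpu) {
		thread_lock(curthread);
		sched_bind(curthread, cpu);
		thread_unlock(curthread);
		uma_zone_drain_pcpu_all();
	}
	thread_lock(curthread);
	sched_unbind(curthread);
	thread_unlock(curthread);
}

static void
uma_reclaim_tq_init(void *arg __unused)
{

	TASK_INIT(&uma_reclaim_task, 0, uma_reclaim_worker, NULL);
	uma_reclaim_tq = taskqueue_create("uma_reclaim", M_WAITOK,
	    taskqueue_thread_enqueue, &uma_reclaim_tq);
	taskqueue_start_threads(&uma_reclaim_tq, 1, PVM, "uma reclaim");
}
SYSINIT(umareclaimtq, SI_SUB_VM_CONF, SI_ORDER_ANY, uma_reclaim_tq_init, NULL);

That way we pay the bind/unbind cost once per CPU per pass, and the 
pagedaemon only has to taskqueue_enqueue() the task instead of doing the 
migration itself.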

Third, using vm_page_count_min() will only trigger when the pageout daemon 
can't keep up with the free target.  Typically this should only happen 
with a lot of dirty mmap'd pages or incredibly high system load coupled 
with frequent allocations.  So there may be many cases where reclaiming 
the extra UMA memory is helpful but the pagedaemon can still keep up while 
pushing out file pages that we'd prefer to keep.
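
For illustration, the kind of two-level check I mean, using the existing 
watermark helpers from vm_page.h -- which levels to key off of is of course 
up for debate:

#include <sys/param.h>
#include <sys/systm.h>
#include <sys/vmmeter.h>

#include <vm/vm.h>
#include <vm/vm_param.h>
#include <vm/vm_page.h>

/*
 * Sketch: trim the UMA caches once pages start getting scarce, and only
 * fall back to the expensive per-CPU drain when we are truly short.
 */
static int
uma_reclaim_level(void)
{

	if (vm_page_count_min())	/* pagedaemon can't keep up */
		return (2);		/* full drain, incl. per-CPU caches */
	if (vm_page_count_severe())	/* getting short, but still coping */
		return (1);		/* just shrink the buckets */
	return (0);
}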

I think the perfect heuristic would have some idea of how likely the UMA 
pages are to be re-used immediately, so we can more effectively trade off 
between file pages and kernel memory caches.  As it is now we limit the 
uma_reclaim() calls to every 10 seconds when there is memory pressure. 
Perhaps we could keep a timestamp for when the last slab was allocated to 
a zone and do the more expensive reclaim on zones whose timestamps exceed 
some threshold?  Then have a lower threshold for reclaiming at all? 
Again, it doesn't need to be perfect, but I believe we can catch a wider 
set of cases by carefully scheduling this.
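
Something along these lines, purely as a sketch; uz_lastalloc is a 
hypothetical field and the thresholds are numbers I made up:

#include <sys/param.h>
#include <sys/systm.h>
#include <sys/kernel.h>			/* ticks, hz */

#define	UMA_TRIM_TICKS	(10 * hz)	/* idle a while: cheap reclaim */
#define	UMA_DRAIN_TICKS	(60 * hz)	/* idle a long time: full drain */

struct uma_zone_sketch {
	int	uz_lastalloc;		/* value of ticks at last slab alloc */
	/* ... the rest of the zone ... */
};

/*
 * Decide how aggressively to reclaim from a zone based on how long it has
 * been since the zone last needed a new slab.
 */
static int
zone_reclaim_level(const struct uma_zone_sketch *zone)
{
	int idle;

	idle = ticks - zone->uz_lastalloc;
	if (idle > UMA_DRAIN_TICKS)
		return (2);	/* expensive: free cached slabs and buckets */
	if (idle > UMA_TRIM_TICKS)
		return (1);	/* cheap: just shrink the bucket sizes */
	return (0);		/* recently active: leave it alone */
}

The lower threshold would gate the cheap bucket trimming and the higher one 
the full drain, so zones that are allocating steadily never get touched.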

Thanks,
Jeff

>
> Thank you.
>
> -- 
> Alexander Motin
>
Received on Mon Nov 18 2013 - 18:15:07 UTC
