Re: UMA cache back pressure

From: Alexander Motin <mav_at_FreeBSD.org>
Date: Mon, 18 Nov 2013 11:59:34 +0200
On 18.11.2013 11:45, Luigi Rizzo wrote:
>
> On Mon, Nov 18, 2013 at 10:20 AM, Alexander Motin <mav_at_freebsd.org> wrote:
>
>     On 18.11.2013 10:41, Adrian Chadd wrote:
>
>         Your patch does three things:
>
>         * adds a couple new buckets;
>
>
>     These new buckets make bucket-size self-tuning softer and more
>     precise. Without them there are buckets for 1, 5, 13, 29, ... items.
>     While at the bigger sizes a difference of about 2x is fine, at the
>     smallest ones it is 5x and 2.6x respectively. The new buckets make
>     that sequence look like 1, 3, 5, 9, 13, 29, reducing the jumps
>     between steps, so the algorithm works more smoothly, allocating and
>     freeing memory in better-fitting chunks. Otherwise there is quite a
>     big gap between allocating 128K and 5x128K of RAM at once.
>
>
> Just curious (and I do not understand whether the "1, 5 ..." are object
> sizes in bytes or what),

Buckets include a header (~3 pointers) plus a number of item pointers. 
So on amd64, 1, 5 and 13 items mean 32, 64 and 128 bytes per bucket. It 
is not really about saving memory on the buckets themselves, since they 
are very small compared to the stored items. We could use a bigger 
(say, 16-item) bucket zone for allocating all the smaller ones, 
overriding just their item limit. But more zones also potentially mean 
lower zone-lock contention, so why not?
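
To make the arithmetic concrete, here is a minimal userland sketch (not 
the actual kernel code) that assumes amd64 with 8-byte pointers and a 
bucket header of ~3 pointers (24 bytes), per the description above, and 
shows how the item counts fall out of the allocation sizes:

    #include <stdio.h>

    /* Assumption: amd64, bucket header of ~3 pointers (24 bytes). */
    #define BUCKET_HEADER       (3 * sizeof(void *))
    /* Item pointers that fit in an allocation of 'bytes' bytes. */
    #define BUCKET_ITEMS(bytes) (((bytes) - BUCKET_HEADER) / sizeof(void *))

    int
    main(void)
    {
            size_t sizes[] = { 32, 48, 64, 96, 128, 256 };
            size_t i;

            for (i = 0; i < sizeof(sizes) / sizeof(sizes[0]); i++)
                    printf("%zu-byte bucket holds %zu items\n",
                        sizes[i], BUCKET_ITEMS(sizes[i]));
            return (0);
    }

This prints 1, 3, 5, 9, 13 and 29 items for 32, 48, 64, 96, 128 and 256 
bytes: the 48- and 96-byte allocations are exactly where the new 3- and 
9-item buckets come from.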

> would it make sense to add some instrumentation
> code (a small array of counters i presume) to track the actual number
> of requests for exact object sizes, and perhaps at runtime create buckets
> trying to reduce waste ?

Since 10.0, buckets are themselves allocated from UMA cache zones, so 
all the stats, garbage collection, etc. follow the same rules, which 
you can see in `vmstat -z`.
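
For example (the exact zone names and counters vary between versions, 
so treat this only as one way to slice the output):

    # list only the bucket zones and their per-zone statistics
    vmstat -z | grep -i bucket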

> Following your reasoning, there still seems to be a big gap between
> some of the numbers you quote in the sequence.

Big (2x) gaps between the big numbers are less important: once we get 
there, it means there is not much memory pressure, so a few extra frees 
should not hurt. At the lower numbers it may be more important.
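
To put rough, purely hypothetical numbers on that: for a zone caching 
4 KB items, stepping from a 13-item bucket to a 29-item one moves the 
per-bucket cache from 52 KB to 116 KB, a ~2.2x jump, while at the small 
end the old 1-to-5 step was a 5x jump that the new 3-item bucket splits 
into 3x and ~1.7x steps, which is where the smoothing matters most.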

-- 
Alexander Motin