Re: UMA cache back pressure

From: Adrian Chadd <adrian_at_freebsd.org>
Date: Mon, 18 Nov 2013 04:10:19 -0800
On 18 November 2013 01:20, Alexander Motin <mav_at_freebsd.org> wrote:
> On 18.11.2013 10:41, Adrian Chadd wrote:
>>
>> Your patch does three things:
>>
>> * adds a couple new buckets;
>
>
> These new buckets make bucket size self-tuning softer and more precise.
> Without them there are buckets for 1, 5, 13, 29, ... items. While at the
> bigger sizes a difference of about 2x is fine, at the smallest ones it is 5x
> and 2.6x respectively. The new buckets make that series look like 1, 3, 5, 9,
> 13, 29, reducing the jumps between steps, making the algorithm behave more
> smoothly, and allocating and freeing memory in better-fitting chunks.
> Otherwise there is quite a big gap between allocating 128K and 5x128K of RAM
> at once.
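
To put numbers on those step ratios, here is a small standalone C sketch
(the size lists are taken from the description above; this is not the
actual bucket-size table in uma_core.c):

#include <stdio.h>

/* Print the growth ratio between consecutive bucket sizes. */
static void
print_ratios(const char *label, const int *sizes, int n)
{
	int i;

	printf("%s:", label);
	for (i = 1; i < n; i++)
		printf(" %.1fx", (double)sizes[i] / sizes[i - 1]);
	printf("\n");
}

int
main(void)
{
	int old_sizes[] = { 1, 5, 13, 29 };		/* existing buckets */
	int new_sizes[] = { 1, 3, 5, 9, 13, 29 };	/* with 3 and 9 added */

	print_ratios("old steps", old_sizes, 4);	/* 5.0x 2.6x 2.2x */
	print_ratios("new steps", new_sizes, 6);	/* 3.0x 1.7x 1.8x 1.4x 2.2x */
	return (0);
}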

Right. That makes sense, but your initial email didn't say "oh, I'm
adding more buckets." :-)

>
>> * reduces some lock contention
>
>
> More precisely, the patch adds a check for congestion on free so that bucket
> sizes grow the same way they do on allocation. As a consequence that should
> indeed reduce lock contention, but I don't have specific numbers. All I can
> say is that the VM and UMA mutexes no longer appear at the top of the
> profiling output after all these changes.
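
A rough userland illustration of that pattern (hypothetical structure and
function names, not the actual uma_core.c change): detect contention on
the free path with a trylock and grow the per-CPU bucket size, just as
the allocation path already does.

#include <pthread.h>

/* Hypothetical stand-in for per-zone cache state. */
struct zone_cache {
	pthread_mutex_t	lock;
	int		bucket_size;		/* target items per bucket */
	int		bucket_size_max;
};

/*
 * Free path: if the zone lock turns out to be contended, bump the
 * bucket size so later frees hand back more items per lock
 * acquisition.  The unlocked update is deliberately approximate; an
 * occasional lost increment does not matter.
 */
static void
zone_free_batch(struct zone_cache *zc)
{
	if (pthread_mutex_trylock(&zc->lock) != 0) {
		if (zc->bucket_size < zc->bucket_size_max)
			zc->bucket_size++;
		pthread_mutex_lock(&zc->lock);
	}
	/* ... return the bucket's items to the zone here ... */
	pthread_mutex_unlock(&zc->lock);
}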

Sure. But again, you don't say that in your commit message. :)

>> * does soft back pressure
>
> In this list you have missed a small but major point of the patch -- we
> should prevent problems, not just solve them. As I wrote in the original
> email, this specific change showed me a 1.5x performance improvement in
> low-memory conditions. As I understand it, that happened because the VM no
> longer has to repeatedly allocate and free hugely oversized buckets of
> 10-15 * 128K.
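
A minimal sketch of that preventive idea (all names here are
hypothetical; the real change adjusts UMA's internal per-zone bucket
sizing): when the system reports memory shortage, shrink the target
bucket size up front instead of letting oversized buckets bounce
through the VM.

/* Hypothetical stand-in for the kernel's low-memory signal. */
static int
memory_is_low(void)
{

	return (0);	/* stub: plug in the real check here */
}

/*
 * Soft back pressure: halve the target bucket size while memory is
 * short, so per-CPU caches stop asking the VM for 10-15 item buckets
 * of a 128K zone that would have to be reclaimed again right away.
 */
static void
zone_apply_soft_pressure(int *bucket_size)
{
	if (memory_is_low() && *bucket_size > 1)
		*bucket_size = (*bucket_size + 1) / 2;
}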

Yup, sorry, I missed this. It's a sneaky two lines. :)

>
>> * does the aggressive backpressure.
>
>
> After all of the above, that is mostly just a safety belt. With 40GB of RAM
> that code was triggered only a couple of times during a full hour of testing
> with debug logging inserted there. On a machine with 2GB of RAM it is
> triggered quite regularly, and that is probably unavoidable, since even with
> the lowest bucket size of one item, 24 CPUs mean 48 cache buckets, i.e. up to
> 6MB of otherwise unreleasable memory for a single 128K zone.
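
For reference, the arithmetic behind that 6MB figure as a standalone
sketch (the 128K item size and the two cache buckets per CPU follow
from the 48-bucket count quoted above):

#include <stdio.h>

int
main(void)
{
	size_t ncpus = 24;
	size_t buckets_per_cpu = 2;	/* one allocation bucket + one free bucket */
	size_t items_per_bucket = 1;	/* smallest possible bucket size */
	size_t item_size = 128 * 1024;	/* a 128K zone */
	size_t cached;

	cached = ncpus * buckets_per_cpu * items_per_bucket * item_size;
	printf("%zu bytes (%zu MB) held in per-CPU caches\n",
	    cached, cached / (1024 * 1024));	/* 6291456 bytes, 6 MB */
	return (0);
}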
>
>
>> So, do you get any benefits from just the first one, or first two?
>
>
> I don't see much reason to handle that in pieces. As I described above, each
> part has its own goal, but they work much better together.

Well, with changes like this, having them broken up and committed in
small pieces makes it easier for people to do regression testing.

If you introduce some regression in a particular workload, then the
user or developer is only going to find that it's this patch, and won't
necessarily know how to break it down into pieces to see which piece
actually introduced the regression in their specific workload.

I totally agree that this should be done! It just seems to be something
that could quite easily be committed in smaller pieces, so as to make
potential debugging later on down the road much easier. Each commit
builds on the previous one.

So, something like (in order):

* add two new buckets, here's why
* fix locking, here's why
* soft back pressure
* aggressive backpressure

Did you get profiling traces from the VM free paths? Is it because
it's churning the physical pages through the VM physical allocator, or
something else?



-adrian
Received on Mon Nov 18 2013 - 11:10:22 UTC
