Re: UMA cache back pressure

From: Alexander Motin <mav_at_FreeBSD.org>
Date: Mon, 18 Nov 2013 14:57:04 +0200
On 18.11.2013 14:10, Adrian Chadd wrote:
> On 18 November 2013 01:20, Alexander Motin <mav_at_freebsd.org> wrote:
>> On 18.11.2013 10:41, Adrian Chadd wrote:
>>> So, do you get any benefits from just the first one, or first two?
>>
>> I don't see much reason to handle that in pieces. As I have described
>> above, each part has its own goal, but they work much better together.
>
> Well, with changes like this, having them broken up and committed in
> small pieces makes it easier for people to do regression testing.
>
> If you introduce some regression in a particular workload, then the
> user or developer will only find that it's this patch, and won't
> necessarily know how to break it down into pieces to see which piece
> actually introduced the regression in their specific workload.

I can't argue with that, but too many small pieces turn later merging into 
a headache. This patch is not so big that it can't be reviewed as one 
piece. As for a better commit message -- your hint is accepted. :)

> I totally agree that this should be done! It just seems to be
> something that could be committed in smaller pieces quite easily, so
> as to make potential debugging later on down the road much easier.
> Each commit builds on the previous commit.
>
> So, something like (in order):
>
> * add two new buckets, here's why
> * fix locking, here's why
> * soft back pressure
> * aggressive backpressure

I can do that if you insist; I would just use a different order 
(3, 1, 4, 2). 2 without 3 will make buckets grow faster, which may be bad 
without back pressure.
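
To give a feel for that interaction, here is a rough userspace sketch 
(nothing below is taken from the actual patch; all names, sizes and 
thresholds are invented for illustration): larger buckets are only safe 
if something caps their growth once memory gets tight.

/*
 * Toy model: a zone bucket that grows on cache refills, capped by a
 * "soft back pressure" level the VM raises when memory runs short.
 */
#include <stdio.h>

#define BUCKET_MAX      200     /* invented largest bucket size */
#define BUCKET_MIN      16

static int bucket_size = BUCKET_MIN;    /* current bucket size of a zone */
static int pressure;                    /* soft back pressure level */

/* A cache miss lets the bucket grow, but only up to the pressured limit. */
static void
cache_refill(void)
{
        if (bucket_size < (BUCKET_MAX >> pressure))
                bucket_size += bucket_size / 4 + 1;
}

/* Periodic VM callback: raise or lower the pressure level. */
static void
soft_backpressure(int shortage)
{
        if (shortage && pressure < 3)
                pressure++;
        else if (!shortage && pressure > 0)
                pressure--;
        if (bucket_size > (BUCKET_MAX >> pressure))
                bucket_size = BUCKET_MAX >> pressure;
}

int
main(void)
{
        int i;

        for (i = 0; i < 20; i++) {
                cache_refill();
                soft_backpressure(i > 10);      /* memory gets tight later */
                printf("step %2d: bucket_size=%3d pressure=%d\n",
                    i, bucket_size, pressure);
        }
        return (0);
}

Running it shows the bucket growing freely at first and then being 
clamped down as the pressure level rises. That clamping is why I'd 
rather land 3 before (or together with) 1 and 2.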

> Did you get profiling traces from the VM free paths? Is it because
> it's churning the physical pages through the VM physical allocator?
> or?

Yes. Without use_uma enabled I've seen up to 50% of CPU time burned on 
locks held around expensive VM magic such as TLB shootdowns, etc. With 
use_uma enabled the situation improved a lot, but I've seen periodic 
bursts, which I guess happened when the system was getting low on memory 
and started aggressively purging gigabytes of oversized caches. With this 
patch I haven't noticed such behavior at all so far, though that may be 
subjective, since each test runs for quite some time and the load is not 
very stationary.
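
For what it's worth, here is a toy model of that burst behavior (all 
numbers are invented, this is not the VM code): if oversized caches are 
only purged once free memory hits the low watermark, the whole cache is 
released in one expensive pass, while back pressure trims it in small 
slices long before that point.

#include <stdio.h>

#define TOTAL_MB        1000    /* invented machine size */
#define LOW_WATERMARK   100     /* VM considers itself short below this */

static void
run(int gradual)
{
        int cache = 400;        /* MB sitting in oversized caches */
        int used = 300;         /* MB of real demand */
        int freemem, t;

        printf("%s:\n", gradual ? "gradual back pressure" : "purge at watermark");
        for (t = 0; t < 10; t++) {
                used += 50;     /* demand keeps growing */
                freemem = TOTAL_MB - used - cache;
                if (gradual && freemem < 3 * LOW_WATERMARK && cache > 0) {
                        cache -= 50;    /* trim one slice early */
                        printf("  t=%d: trimmed 50 MB, cache now %d MB\n",
                            t, cache);
                } else if (!gradual && freemem < LOW_WATERMARK && cache > 0) {
                        printf("  t=%d: burst: purged %d MB at once\n",
                            t, cache);
                        cache = 0;
                }
        }
}

int
main(void)
{
        run(0);
        run(1);
        return (0);
}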

-- 
Alexander Motin