Re: [PATCH] microoptimize by trying to avoid locking a locked mutex

From: Mateusz Guzik <mjguzik_at_gmail.com>
Date: Thu, 5 Nov 2015 20:26:23 +0100
On Thu, Nov 05, 2015 at 11:04:13AM -0800, John Baldwin wrote:
> On Thursday, November 05, 2015 04:26:28 PM Konstantin Belousov wrote:
> > On Thu, Nov 05, 2015 at 12:32:18AM +0100, Mateusz Guzik wrote:
> > > mtx_lock will unconditionally try to grab the lock and if that fails,
> > > will call __mtx_lock_sleep which will immediately try to do the same
> > > atomic op again.
> > > 
> > > So, the obvious microoptimization is to check the state in
> > > __mtx_lock_sleep and avoid the operation if the lock is not free.
> > > 
> > > This gives me ~40% speedup in a microbenchmark of 40 find processes
> > > traversing tmpfs and contending on mount mtx (only used as an easy
> > > benchmark, I have WIP patches to get rid of it).
> > > 
> > > Second part of the patch is optional and just checks the state of the
> > > lock prior to doing any atomic operations, but it gives a very modest
> > > speed up when applied on top of the __mtx_lock_sleep change. As such,
> > > I'm not going to defend this part.
> > Shouldn't the same consideration be applied to all spinning loops, i.e.
> > also to the spin/thread mutexes, and to the spinning parts of sx and
> > lockmgr?
> 
> I agree.  I think both changes are good and worth doing in our other
> primitives.
> 

I glanced over e.g. rw_rlock and it did not have the issue, but now that
I see _sx_xlock_hard, it would indeed use the same fix.

Expect a patch in a few hours covering all the primitives I find. I'll
stress test the kernel, but it is unlikely I'll do microbenchmarks for
the remaining primitives.
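
To illustrate the pattern being discussed, here is a minimal standalone
sketch (C11 atomics, made-up names such as toy_mtx, not the actual
FreeBSD code) of peeking at the lock word before attempting the atomic
op in the slow path:

/*
 * Illustrative sketch only, not the kernel code: the general
 * "check before the atomic op" idea using C11 atomics.  All names
 * here (toy_mtx, etc.) are made up for the example.
 */
#include <stdatomic.h>
#include <stdbool.h>

#define TOY_MTX_UNOWNED	0UL

typedef struct {
	_Atomic(unsigned long) state;	/* TOY_MTX_UNOWNED or owner id */
} toy_mtx;

static bool
toy_mtx_try_acquire(toy_mtx *m, unsigned long tid)
{
	unsigned long v = TOY_MTX_UNOWNED;

	/* Single CAS attempt; fails if the lock is already owned. */
	return (atomic_compare_exchange_strong_explicit(&m->state, &v,
	    tid, memory_order_acquire, memory_order_relaxed));
}

static void
toy_mtx_lock_slow(toy_mtx *m, unsigned long tid)
{
	for (;;) {
		/*
		 * The micro-optimization: peek at the lock word first
		 * and only attempt the atomic op when it looks free,
		 * instead of retrying a CAS that is known to fail and
		 * bouncing the cache line between CPUs.
		 */
		if (atomic_load_explicit(&m->state,
		    memory_order_relaxed) != TOY_MTX_UNOWNED)
			continue;	/* real code would spin/adapt/block */
		if (toy_mtx_try_acquire(m, tid))
			return;
	}
}

static void
toy_mtx_unlock(toy_mtx *m)
{
	atomic_store_explicit(&m->state, TOY_MTX_UNOWNED,
	    memory_order_release);
}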

-- 
Mateusz Guzik <mjguzik gmail.com>
