Re: [PATCH] microoptimize by trying to avoid locking a locked mutex

From: Ian Lepore <ian_at_freebsd.org>
Date: Thu, 05 Nov 2015 16:35:22 -0700
On Thu, 2015-11-05 at 14:19 -0800, John Baldwin wrote:
> On Thursday, November 05, 2015 01:45:19 PM Adrian Chadd wrote:
> > On 5 November 2015 at 11:26, Mateusz Guzik <mjguzik_at_gmail.com> wrote:
> > > On Thu, Nov 05, 2015 at 11:04:13AM -0800, John Baldwin wrote:
> > > > On Thursday, November 05, 2015 04:26:28 PM Konstantin Belousov wrote:
> > > > > On Thu, Nov 05, 2015 at 12:32:18AM +0100, Mateusz Guzik wrote:
> > > > > > mtx_lock will unconditionally try to grab the lock and if
> > > > > > that fails, will call __mtx_lock_sleep which will
> > > > > > immediately try to do the same atomic op again.
> > > > > > 
> > > > > > So, the obvious microoptimization is to check the state in
> > > > > > __mtx_lock_sleep and avoid the operation if the lock is
> > > > > > not free.
> > > > > > 
> > > > > > This gives me ~40% speedup in a microbenchmark of 40 find
> > > > > > processes traversing tmpfs and contending on mount mtx
> > > > > > (only used as an easy benchmark, I have WIP patches to get
> > > > > > rid of it).
> > > > > > 
> > > > > > The second part of the patch is optional and just checks
> > > > > > the state of the lock prior to doing any atomic operations,
> > > > > > but it gives a very modest speedup when applied on top of
> > > > > > the __mtx_lock_sleep change.  As such, I'm not going to
> > > > > > defend this part.
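
A minimal sketch of the first change, using C11 atomics in userspace
rather than the kernel's own mtx code; every name below is
hypothetical, not the actual sys/mutex.h implementation:

#include <stdatomic.h>

#define LOCK_FREE 0UL

/*
 * Slow path: while the lock word is observed non-free, spin on
 * relaxed loads instead of retrying the compare-and-swap; attempt
 * the CAS only once the lock looks free again.
 */
static void
lock_sleep_sketch(atomic_ulong *lockp, unsigned long tid)
{
	unsigned long v;

	for (;;) {
		v = atomic_load_explicit(lockp, memory_order_relaxed);
		if (v != LOCK_FREE)
			continue;	/* real code would spin adaptively or block */
		if (atomic_compare_exchange_weak_explicit(lockp, &v, tid,
		    memory_order_acquire, memory_order_relaxed))
			return;		/* lock acquired */
	}
}
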
> > > > > Shouldn't the same consideration be applied to all spinning
> > > > > loops, i.e. also to the spin/thread mutexes, and to the
> > > > > spinning parts of sx and lockmgr?
> > > > 
> > > > I agree.  I think both changes are good and worth doing in our
> > > > other primitives.
> > > > 
> > > 
> > > I glanced over e.g. rw_rlock and it did not have the issue, but
> > > now that I see _sx_xlock_hard, it could indeed use fixing.
> > > 
> > > Expect a patch in a few hours for all the primitives I find.
> > > I'll stress test the kernel, but it is unlikely I'll do
> > > microbenchmarks for the remaining primitives.
> > 
> > Is this stuff you're proposing still valid for non-x86 platforms?
> 
> Yes.  It just does a read before trying the atomic compare and swap,
> and falls through to the hard case as if the atomic op had failed
> whenever the value read would make the compare fail.
> 
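
Sketched with the same hypothetical C11 names as above, the fast path
being described would look something like this; note that nothing in
it is x86-specific:

/*
 * Fast path with the optional pre-read: if the lock is observed held,
 * fall through to the slow path exactly as if the CAS had failed, so
 * ordering on the success path still comes from the acquire CAS.
 */
static void
lock_sketch(atomic_ulong *lockp, unsigned long tid)
{
	unsigned long v = LOCK_FREE;

	if (atomic_load_explicit(lockp, memory_order_relaxed) != LOCK_FREE ||
	    !atomic_compare_exchange_strong_explicit(lockp, &v, tid,
	    memory_order_acquire, memory_order_relaxed))
		lock_sleep_sketch(lockp, tid);
}
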

The atomic ops include barriers; the new pre-read of the variable
doesn't.  Will that cause problems, especially for code inside a loop
where the compiler may decide to shuffle things around?
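
To make the concern concrete (again hypothetical userspace code, not
the kernel's): a plain, non-atomic read may legally be hoisted out of
a spin loop entirely, while a relaxed atomic load is re-issued each
iteration in practice but still carries no barrier:

/* A plain load: the compiler may read once, cache the value, and turn
 * this into an infinite loop (it is also a data race in C11 terms). */
static void
spin_plain(unsigned long *lockp)
{
	while (*lockp != 0UL)
		;
}

/* A relaxed atomic load is re-issued each iteration in practice, but
 * provides no ordering; in the sketches above, acquire semantics come
 * only from the successful CAS. */
static void
spin_relaxed(atomic_ulong *lockp)
{
	while (atomic_load_explicit(lockp, memory_order_relaxed) != 0UL)
		;
}
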

I suspect the performance gain will be biggest on the platforms where
atomic ops are expensive (I gather from various code comments that's
the case on x86).

-- Ian