Re: proposed smp_rendezvous change

From: Max Laier <max@love2party.net>
Date: Tue, 17 May 2011 09:20:40 -0700
On 05/17/2011 05:16 AM, John Baldwin wrote:
...
> Index: kern/kern_switch.c
> ===================================================================
> --- kern/kern_switch.c (revision 221536)
> +++ kern/kern_switch.c (working copy)
> @@ -192,15 +192,22 @@
> critical_exit(void)
> {
> struct thread *td;
> - int flags;
> + int flags, owepreempt;
>
> td = curthread;
> KASSERT(td->td_critnest != 0,
> ("critical_exit: td_critnest == 0"));
>
> if (td->td_critnest == 1) {
> + owepreempt = td->td_owepreempt;
> + td->td_owepreempt = 0;
> + /*
> + * XXX: Should move compiler_memory_barrier() from
> + * rmlock to a header.
> + */

XXX: If we get an interrupt at this point and td_owepreempt was zero, 
the new interrupt will set it again, because td_critnest is still 
non-zero.

So we can still end up with a thread that leaks an owepreempt *and* 
loses a preemption.

We really need an atomic_readandclear() that gives us a local copy of 
td_owepreempt *and* clears td_critnest in the same operation.  Sadly, 
that is rather expensive.  It is possible to implement with a flag for 
owepreempt, but that means all writes to td_critnest must then be 
atomic: either because we know we have interrupts disabled (i.e. 
setting owepreempt can be a plain read-modify-write), or with a proper 
atomic_add/set/...

I'm not sure what the performance impact of this will be.  One would 
hope that an atomic_add without a memory barrier isn't much more 
expensive than a compiler-generated read-modify-write, though.  
Especially since this cacheline should be local and exclusive to us 
anyway.

> + __asm __volatile("":::"memory");
> td->td_critnest = 0;
> - if (td->td_owepreempt) {
> + if (owepreempt) {
> td->td_critnest = 1;
> thread_lock(td);
> td->td_critnest--;
Received on Tue May 17 2011 - 14:20:46 UTC
