On Fri, Jan 14, 2005 at 04:24:34PM -0500, John Baldwin wrote:
> Ok, in the process of updating my tree that held the earlier version of the
> critical section vs. spin mutexes patch I think I have found and fixed the
> bug that may have caused the lockups a few people reported. As such, I'd
> like folks to test the updated patch. Details and such of what the patch
> does:
>
> - spin locks and critical sections are divorced. Specifically, the sole
>   purpose of a critical section is to keep the current thread from being
>   preempted until it exits the section. Nothing requires that the critical
>   section actually disable interrupts during the section as any interrupt
>   threads scheduled would simply not preempt either because they would be
>   picked up by another CPU or preempt the current thread when it exited the
>   critical section. However, spin locks do need to prevent themselves from
>   being interrupted by any code that can try to acquire a spin lock.
>   Strictly speaking, only spin mutexes used in interrupt context
>   (sched_lock, icu_lock, locks in INTR_FAST handlers, sleepq locks, etc.)
>   need to block interrupts, but if you have a mutex that is only used in
>   top half code, you should probably be using a normal mutex anyway, so the
>   set of spin mutexes not used in interrupt context tends to be small to
>   empty. So far in SMPng, almost all critical sections have been inside of
>   spin mutexes (since spin mutexes also need to block preemptions in
>   addition to interrupts). Thus, for the sake of simplicity, critical
>   sections also included the interrupt blocking behavior. (Keep in mind
>   that this was an evolutionary process. :) However, as SMPng progresses,
>   it has now become useful to divorce the two concepts, especially as some
>   folks are working on locking schemes which just use critical sections to
>   protect per-CPU resources that are not accessed from interrupt context.
>   What this change does is to move the interrupt blocking/deferment bits
>   that spin mutexes need into a separate spinlock_enter()/spinlock_exit()
>   API completely implemented in MD code. Critical sections, on the other
>   hand, are now reduced to a simple per-thread nesting count and are now
>   completely MI.
>
> - The MI code that creates idle threads for each of the CPUs no longer
>   tries to set curthread up for the APs and no longer messes with the
>   critnest count for the idlethreads. Instead, the MD code now explicitly
>   borrows the idlethread context for the APs when it needs it and is
>   responsible for adjusting the critical section and spinlock nesting
>   counts to account for the weirdness of borrowing the context for the
>   first context switch.
>
> I've tested this on SMP i386, SMP sparc64, and UP alpha. Testing on other
> archs and on SMP would be greatly appreciated. Patch is at
> http://www.FreeBSD.org/~jhb/patches/spinlock.patch

After a non-contextual review and reading the patch description, I am very
pleased that the evolutionary process has led us this way. This makes me much
more confident that different synchronization paradigms will now be usable
(and therefore considered) in FreeBSD.

For those unaware of the impact of this work: it's big. FreeBSD now tends
more and more toward a flexible middle ground when it comes to
synchronization. We can now build much faster per-CPU structures without
completely succumbing to the latency associated with global locking schemes
(such as mutexes). But, in addition to that, we can still continue to use
mutexes where appropriate.

Yay! Thank you.

--
Bosko Milekic
bmilekic_at_technokratis.com
bmilekic_at_FreeBSD.org

Received on Fri Jan 14 2005 - 20:45:46 UTC
This archive was generated by hypermail 2.4.0 : Wed May 19 2021 - 11:38:26 UTC