On Monday, May 16, 2011 4:30:44 pm Max Laier wrote:
> On Monday 16 May 2011 14:21:27 John Baldwin wrote:
> > Yes, we need to fix that.  Humm, it doesn't preempt when you do a
> > critical_exit() though?  Or do you use a hand-rolled critical exit
> > that doesn't do a deferred preemption?
>
> Right now I just did a manual td_critnest++/--, but I guess ...

Ah, ok, so you would "lose" a preemption.  That's not really ideal.

> > Actually, I'm curious how the spin unlock inside the IPI could yield
> > the CPU.  Oh, is rmlock doing a wakeup inside the IPI handler?  I
> > guess that is ok as long as the critical_exit() just defers the
> > preemption to the end of the IPI handler.
>
> ... the earliest point where it is safe to preempt is after doing the
>
>   atomic_add_int(&smp_rv_waiters[2], 1);
>
> so that we can start other IPIs again.  However, since we don't accept
> new IPIs until we signal EOI in the MD code (on amd64), this might
> still not be a good place to do the yield?!?

Hmm, yeah, you would want to do the EOI before you yield.  However, we
could actually move the EOI up before calling the MI code so long as we
leave interrupts disabled for the duration of the handler (which we do).

> The spin unlock boils down to a critical_exit() and unless we did a
> critical_enter() at some point during the rendezvous setup, we will
> yield() if we owepreempt.  I'm not quite sure how that can happen, but
> it seems like there is a path that allows the scheduler to set it from
> a foreign CPU.

No, it is only set on curthread by curthread.  This is actually my main
question.  I've no idea how this could happen unless the rmlock code is
actually triggering a wakeup or sched_add() in its rendezvous handler.
I don't see anything in rm_cleanIPI() that would do that however.

I wonder if your original issue was really fixed just by the first
patch you had, which fixed the race in smp_rendezvous()?

--
John Baldwin
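To make the "lost preemption" concrete, here is a minimal userspace
sketch (an illustration, not the actual kernel source): the struct
fields mirror FreeBSD's struct thread, the *_sketch functions are
hypothetical stand-ins, and the switch itself is reduced to a printf.
A hand-rolled td_critnest++/-- never checks td_owepreempt, while the
critical_exit() logic honors a preemption requested during the
critical section:

    #include <stdio.h>

    struct thread {
            int td_critnest;        /* critical section nesting level */
            int td_owepreempt;      /* preemption requested while critical */
    };

    static struct thread td0 = { 0, 0 };
    static struct thread *curthread = &td0;

    /* Stand-in for the scheduler switching away from curthread. */
    static void
    mi_switch_sketch(void)
    {
            printf("mi_switch(): deferred preemption happens here\n");
    }

    static void
    critical_enter_sketch(void)
    {
            curthread->td_critnest++;
    }

    /*
     * Simplified model of critical_exit(): when the outermost critical
     * section ends and a preemption was requested in the meantime,
     * switch now instead of dropping the request.
     */
    static void
    critical_exit_sketch(void)
    {
            if (curthread->td_critnest == 1 && curthread->td_owepreempt) {
                    curthread->td_owepreempt = 0;
                    mi_switch_sketch();
            }
            curthread->td_critnest--;
    }

    int
    main(void)
    {
            /* Hand-rolled version: the pending preemption is lost. */
            curthread->td_critnest++;
            curthread->td_owepreempt = 1;   /* set during the section */
            curthread->td_critnest--;       /* no owepreempt check */
            printf("hand-rolled: owepreempt=%d, nobody will act on it\n",
                curthread->td_owepreempt);

            /* Real interface: the preemption is honored on exit. */
            curthread->td_owepreempt = 0;
            critical_enter_sketch();
            curthread->td_owepreempt = 1;   /* set during the section */
            critical_exit_sketch();
            return (0);
    }

The first half leaves td_owepreempt set with no remaining exit path
that will ever look at it; the second performs the deferred switch as
the nesting count drops, which is the behavior the rendezvous/IPI path
discussed above needs to preserve.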