I've done that in my tree, but there it was more to reduce pmc noise
(critical_{enter,exit} would account for 5-8% of cycles).

-Kip

On 3/10/08, Jeff Roberson <jroberson_at_chesapeake.net> wrote:
> On Mon, 10 Mar 2008, Julian Elischer wrote:
>
> > Stephan Uphoff wrote:
> >> Julian Elischer wrote:
> >>>
> >>> Why would the following:
> >>>
> >>> void
> >>> critical_exit(void)
> >>> {
> >>>         struct thread *td;
> >>>
> >>>         td = curthread;
> >>>         KASSERT(td->td_critnest != 0,
> >>>             ("critical_exit: td_critnest == 0"));
> >>>
> >>>         if (td->td_critnest == 1) {
> >>>                 td->td_critnest = 0;
> >>>                 if (td->td_owepreempt) {
> >>>                         td->td_critnest = 1;
> >>>                         thread_lock(td);
> >>>                         td->td_critnest--;
> >>>                         SCHED_STAT_INC(switch_owepreempt);
> >>>                         mi_switch(SW_INVOL|SW_PREEMPT, NULL);
> >>>                         thread_unlock(td);
> >>>                 }
> >>>         } else
> >>>                 td->td_critnest--;
> >>>
> >>>         CTR4(KTR_CRITICAL, "critical_exit by thread %p (%ld, %s) to %d",
> >>>             td, (long)td->td_proc->p_pid, td->td_name, td->td_critnest);
> >>> }
> >>>
> >>> not be expressed:
> >>>
> >>> void
> >>> critical_exit(void)
> >>> {
> >>>         struct thread *td;
> >>>
> >>>         td = curthread;
> >>>         KASSERT(td->td_critnest != 0,
> >>>             ("critical_exit: td_critnest == 0"));
> >>>
> >>>         if (td->td_critnest == 1) {
> >>>                 if (td->td_owepreempt) {
> >>>                         thread_lock(td);
> >>>                         td->td_critnest = 0;
> >>>                         SCHED_STAT_INC(switch_owepreempt);
> >>>                         mi_switch(SW_INVOL|SW_PREEMPT, NULL);
> >>>                         thread_unlock(td);
> >>>                 } else {
> >> XXXXX If preemption happens here, td_owepreempt will be set to preempt
> >> XXXXX the current thread, since td_critnest != 0.  However, td_owepreempt
> >> XXXXX is not checked again, so we will not preempt on td_critnest = 0;
> >
> > jeff's comment was that it could be expressed as:
> >
> > if (--(td->td_critnest) == 0) {
> >         if (td->td_owepreempt) {
> >                 thread_lock(td);
> >                 td->td_critnest = 0;
> >                 SCHED_STAT_INC(switch_owepreempt);
> >                 mi_switch(SW_INVOL|SW_PREEMPT, NULL);
> >                 thread_unlock(td);
> >         }
> > }
>
> if (--(td->td_critnest) == 0) {
>         if (td->td_owepreempt) {
>                 thread_lock(td);
>                 if (td->td_owepreempt) {
>                         SCHED_STAT_INC(switch_owepreempt);
>                         mi_switch(SW_INVOL|SW_PREEMPT, NULL);
>                 }
>                 thread_unlock(td);
>         }
> }
>
> Wouldn't that do just fine?  If a preemption occurred before you disabled
> interrupts in thread_lock you'd skip the switch after acquiring it.
>
> I don't see a way that we could miss owepreempt with the above code.  I'd
> also like to make critical_enter/critical_exit inlines with a
> _critical_exit() that does the switch.  With thread_lock we now do a lot
> more nested spinlocking etc.  All of these non-contiguous instruction
> pointers add up.
>
> > This has the same race.. but as you say, it probably doesn't matter.
> > In fact the race is probably required to ensure that pre-emption Does
> > occur one way or another.
> >
> >>>                         td->td_critnest = 0;
> >>>                 }
> >>>         } else
> >>>                 td->td_critnest--;
> >>>
> >>>         CTR4(KTR_CRITICAL, "critical_exit by thread %p (%ld, %s) to %d",
> >>>             td, (long)td->td_proc->p_pid, td->td_name, td->td_critnest);
> >>> }
> >>>
> >>> It seems to me there is a race in the current version, where the
> >>> critical count is temporarily 0, where the thread could be pre-empted
> >>> when it shouldn't be..
> >>
> >> Yes - there is a race where the thread could be preempted twice.
> >> However, this is fairly harmless in comparison to not being preempted at all.
> >> This being said it may be worthwhile to see if that race can be fixed > now > >> after > >> the thread lock changes. > >> > >>> > >>> (prompted by a comment by jeffr that made me go look at this code).. > >>> > >> > >> Stephan > > > > > _______________________________________________ > freebsd-current_at_freebsd.org mailing list > http://lists.freebsd.org/mailman/listinfo/freebsd-current > To unsubscribe, send any mail to "freebsd-current-unsubscribe_at_freebsd.org" >Received on Mon Mar 10 2008 - 21:57:38 UTC