Re: Bug about sched_4bsd?

From: Attilio Rao <attilio_at_freebsd.org>
Date: Mon, 18 Jan 2010 08:06:51 +0100
2010/1/18 Kohji Okuno <okuno.kohji_at_jp.panasonic.com>:
> Hello,
>
> Thank you, Attilio.
> I checked your patch. I think that your patch is better.
> I tested the patch quickly, and I think it's OK.
> # This problem does not occur easily :-<
>
>
> What do you think about maybe_resched()?
> I have never hit a problem with maybe_resched(), but I think that a
> race condition may occur.
>
> <<Back Trace>>
> sched_4bsd.c:     maybe_resched()
> sched_4bsd.c:     resetpriority_thread()
> sched_4bsd.c:     sched_nice()                  get thread_lock(td)
> kern_resource.c:  donice()
> kern_resource.c:  setpriority()                 get PROC_LOCK()
>
> static void
> maybe_resched(struct thread *td)
> {
>        THREAD_LOCK_ASSERT(td, MA_OWNED);
>        if (td->td_priority < curthread->td_priority)
>                curthread->td_flags |= TDF_NEEDRESCHED;
> }
>
> I think that, when td->td_lock is not &sched_lock, curthread->td_lock
> is not held in maybe_resched().

I didn't look closely at the maybe_resched() callers, but I think it is
OK. The thread_lock() function works in a way that callers don't need
to know which container lock is installed at any given moment; there is
always a guarantee that contenders will spin if the lock on the struct
can't be acquired.
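
Roughly, the idea is the loop below. This is only a sketch of the
pattern (the real code lives in kern_mutex.c and also deals with
spinlock_enter(), priority propagation and the blocked_lock
placeholder); the function name here is made up:

#include <sys/param.h>
#include <sys/lock.h>
#include <sys/mutex.h>
#include <sys/proc.h>

/*
 * Sketch of the container-lock pattern behind thread_lock(): the
 * caller never names a specific lock, it just follows whatever
 * td_lock points at until the pointer is stable.
 */
static void
thread_lock_sketch(struct thread *td)
{
	struct mtx *m;

	for (;;) {
		m = td->td_lock;	/* snapshot the current container lock */
		mtx_lock_spin(m);	/* acquire that lock */
		if (m == td->td_lock)	/* still the container? we own td */
			break;
		mtx_unlock_spin(m);	/* the lock migrated; drop and retry */
	}
}

So whoever calls thread_lock(td) always ends up serializing on the lock
that is actually installed in td_lock at that moment.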
In the case you outlined, something quite particular was happening:
we were taking &sched_lock directly, but sched_lock was not the lock
currently installed in td_lock. That meant all the other paths wanting
to access td_lock for that thread (via thread_lock()) were allowed to
proceed even though we wanted to keep the critical path closed.
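
To make the failure mode concrete, here is a hypothetical sketch (it
assumes it sits in sched_4bsd.c where sched_lock is visible; the
function name and the elided bodies are placeholders, not code from
the tree):

#include <sys/param.h>
#include <sys/lock.h>
#include <sys/mutex.h>
#include <sys/proc.h>

static void
td_lock_example(struct thread *td)
{
	/*
	 * Broken: we hold sched_lock, but td->td_lock may currently
	 * point at a different container lock (e.g. a sleepqueue or
	 * turnstile chain lock), so another CPU doing thread_lock(td)
	 * serializes on *that* lock and walks straight in.
	 */
	mtx_lock_spin(&sched_lock);
	/* ... touch td's scheduling state ... */
	mtx_unlock_spin(&sched_lock);

	/*
	 * Safe: thread_lock() follows whatever lock td_lock points at
	 * right now, so every contender serializes on the same lock.
	 */
	thread_lock(td);
	/* ... touch td's scheduling state ... */
	thread_unlock(td);
}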

Attilio


-- 
Peace can only be achieved by understanding - A. Einstein
Received on Mon Jan 18 2010 - 06:06:52 UTC
