Re: Bug about sched_4bsd?

From: Jeff Roberson <jroberson_at_jroberson.net>
Date: Mon, 18 Jan 2010 15:47:33 -1000 (HST)
On Tue, 19 Jan 2010, Kohji Okuno wrote:

> Hello, Attilio,
>
> I think setpriority() can set priority to sleeping threads.
> Is it really safe?

I agree; in this code path maybe_resched is not properly locking 
curthread.  curthread's lock will be sched_lock while the sleeping thread's 
will be a sleepq lock.  I believe that, since &sched_lock is ordered after 
container locks, it would be sufficient to compare the two td_lock pointers 
and acquire sched_lock if they are not equal.  Someone should look at the 
other maybe_resched callers, or add an assert that curthread->td_lock is 
always &sched_lock in this function.
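
[Editor's note: the check Jeff suggests can be sketched as below.  This is a
user-space mock, not FreeBSD kernel code -- the struct and lock names
(mock_thread, mock_lock, sched_lock) are simplified stand-ins -- but it
illustrates the idea: compare td_lock pointers and take sched_lock before
touching curthread when the two threads are protected by different
container locks.]

```c
#include <assert.h>
#include <pthread.h>
#include <stddef.h>

/* Simplified stand-ins for the kernel structures (assumption: names
 * and layout are illustrative only, not the real struct thread). */
struct mock_lock { pthread_mutex_t m; };

struct mock_thread {
    struct mock_lock *td_lock;   /* current container lock */
    int td_priority;
    int td_flags;
};

#define TDF_NEEDRESCHED 0x1

static struct mock_lock sched_lock;  /* stands in for &sched_lock */

/*
 * Sketch of the suggested fix: the caller holds td's container lock
 * (e.g. a sleepq lock).  If curthread is protected by a different
 * lock, take sched_lock before setting TDF_NEEDRESCHED.  Since
 * sched_lock is ordered after container locks, acquiring it here
 * while holding the sleepq lock is safe.
 */
static void
maybe_resched_fixed(struct mock_thread *td, struct mock_thread *curthread)
{
    int need_lock = (td->td_lock != curthread->td_lock);

    if (need_lock)
        pthread_mutex_lock(&sched_lock.m);
    if (td->td_priority < curthread->td_priority)
        curthread->td_flags |= TDF_NEEDRESCHED;
    if (need_lock)
        pthread_mutex_unlock(&sched_lock.m);
}
```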

Thanks,
Jeff

>
> Thank you,
> Kohji Okuno
>
>> 2010/1/18 Kohji Okuno <okuno.kohji_at_jp.panasonic.com>:
>>> Hello,
>>>
>>> Thank you, Attilio.
>>> I checked your patch. I think that your patch is better.
>>> I tested the patch quickly, and I think it's OK.
>>> # This problem does not occur easily :-<
>>>
>>>
>>> What do you think about maybe_resched()?
>>> I have never hit a problem with maybe_resched() myself, but I think
>>> that a race condition may occur.
>>>
>>> <<Back Trace>>
>>> sched_4bsd.c:     maybe_resched()
>>> sched_4bsd.c:     resetpriority_thread()
>>> sched_4bsd.c:     sched_nice()                  get thread_lock(td)
>>> kern_resource.c:  donice()
>>> kern_resource.c:  setpriority()                 get PROC_LOCK()
>>>
>>> static void
>>> maybe_resched(struct thread *td)
>>> {
>>>        THREAD_LOCK_ASSERT(td, MA_OWNED);
>>>        if (td->td_priority < curthread->td_priority)
>>>                curthread->td_flags |= TDF_NEEDRESCHED;
>>> }
>>>
>>> I think that when td->td_lock is not &sched_lock, curthread->td_lock
>>> is not held in maybe_resched().
>>
>> I didn't look closely at the maybe_resched() callers, but I think it
>> is OK.  The thread_lock() function works so that callers don't need
>> to know which container lock is current at a particular moment; there
>> is always a guarantee that contenders will spin if the lock on the
>> struct can't be acquired.
>> In the case you outlined, something very particular was happening.
>> Basically, we took &sched_lock, but sched_lock was not the lock
>> present on td_lock.  That meant all the other paths wanting to access
>> td_lock for that thread (via thread_lock()) were allowed to do so
>> even though we wanted to keep the critical path closed.
>>
>> Attilio
>>
>>
>> --
>> Peace can only be achieved by understanding - A. Einstein
>
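
[Editor's note: the container-lock guarantee Attilio describes -- that
thread_lock() callers need not know which lock currently protects a
thread -- can be illustrated with the simplified user-space mock below.
This is not the real kernel implementation (which uses spin locks and
atomics); mock_lock, mock_thread, and thread_lock_mock are hypothetical
names for illustration only.]

```c
#include <assert.h>
#include <pthread.h>
#include <stddef.h>

/* td_lock points at whichever container lock (sched_lock, a sleepq
 * lock, ...) protects the thread right now; it may be repointed while
 * the thread migrates between containers. */
struct mock_lock { pthread_mutex_t m; };
struct mock_thread { struct mock_lock * volatile td_lock; };

/*
 * Chase the container lock: snapshot td_lock, acquire it, and retry if
 * the pointer changed in the meantime.  Contenders effectively spin
 * until they hold the lock that is actually installed in td_lock, so
 * the caller never needs to know which container lock is current.
 */
static struct mock_lock *
thread_lock_mock(struct mock_thread *td)
{
    struct mock_lock *l;

    for (;;) {
        l = td->td_lock;             /* snapshot the container lock */
        pthread_mutex_lock(&l->m);
        if (l == td->td_lock)
            return l;                /* still current: thread is locked */
        pthread_mutex_unlock(&l->m); /* lock migrated meanwhile; retry */
    }
}
```

This also shows why the bug above matters: if a path sets curthread's
flags while holding a lock other than the one installed in td_lock,
other threads going through the retry loop can still succeed and enter
the supposedly closed critical path.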
Received on Tue Jan 19 2010 - 00:43:55 UTC

This archive was generated by hypermail 2.4.0 : Wed May 19 2021 - 11:40:00 UTC