On Tue, Aug 31, 2010 at 10:16 AM, <mdf@freebsd.org> wrote:
> I recorded the stack any time ts->ts_cpu was set and when a thread was
> migrated by sched_switch() I printed out the recorded info. Here's
> what I found:
>
> XXX bug 67957: moving 0xffffff003ff9b800 from 3 to 1
> [1]: pin 0 state 4 move 3 -> 1 done by 0xffffff000cc44000:
> #0 0xffffffff802b36b4 at bug67957+0x84
> #1 0xffffffff802b5dd4 at sched_affinity+0xd4
> #2 0xffffffff8024a707 at cpuset_setthread+0x137
> #3 0xffffffff8024aeae at cpuset_setaffinity+0x21e
> #4 0xffffffff804a82df at freebsd32_cpuset_setaffinity+0x4f
> #5 0xffffffff80295f49 at isi_syscall+0x99
> #6 0xffffffff804a630e at ia32_syscall+0x1ce
> #7 0xffffffff8046dc60 at Xint0x80_syscall+0x60
> [0]: pin 0 state 2 move 0 -> 3 done by 0xffffff000cc44000:
> #0 0xffffffff802b36b4 at bug67957+0x84
> #1 0xffffffff802b4ad8 at sched_add+0xe8
> #2 0xffffffff8029b96a at create_thread+0x34a
> #3 0xffffffff8029badc at kern_thr_new+0x8c
> #4 0xffffffff804a8912 at freebsd32_thr_new+0x122
> #5 0xffffffff80295f49 at isi_syscall+0x99
> #6 0xffffffff804a630e at ia32_syscall+0x1ce
> #7 0xffffffff8046dc60 at Xint0x80_syscall+0x60
>
> So one thread in the process called cpuset_setaffinity(2), and another
> thread in the process was forcibly migrated by the IPI while returning
> from a syscall, while it had td_pinned set.
>
> Given this path, it seems reasonable to me to skip the migrate if we
> notice THREAD_CAN_MIGRATE is false.
>
> Opinions?  My debug code is below.  I'll try to write a short test case
> that exhibits this bug.

Just a few more thoughts on this.  The check in sched_affinity() for
THREAD_CAN_MIGRATE() is racy, and since WITNESS uses sched_pin(), it's
not simple to take the thread lock around an increment of td_pinned.
So I'm looking for suggestions on the best way to fix this issue.  My
thoughts:

1) Add a check in sched_switch() for THREAD_CAN_MIGRATE().

2) Have WITNESS not use sched_pin(), and take the thread lock when
   modifying td_pinned.

3) Have the IPI_PREEMPT handler notice that the thread is pinned (and
   not trying to bind) and postpone the mi_switch(), just like it
   postpones when a thread is in a critical section.

Except for the potential complexity of implementation, I think (2) is
the best solution (a rough sketch is in the P.S. below).

For those who want to play at home, I have a small test program that
exhibits this behavior at
http://people.freebsd.org/~mdf/cpu_affinity_test.c.  It seems to
require 4 or more CPUs to hit the assert.  You'll also need to patch
the kernel to assert when migrating a pinned thread:

Index: kern/sched_ule.c
===================================================================
--- kern/sched_ule.c	(revision 158580)
+++ kern/sched_ule.c	(working copy)
@@ -1888,11 +1889,26 @@ sched_switch(struct thread *td, struct t
 		srqflag = (flags & SW_PREEMPT) ?
 		    SRQ_OURSELF|SRQ_YIELDING|SRQ_PREEMPTED :
 		    SRQ_OURSELF|SRQ_YIELDING;
 		if (ts->ts_cpu == cpuid)
 			tdq_add(tdq, td, srqflag);
-		else
+		else {
+			KASSERT(THREAD_CAN_MIGRATE(td) ||
+			    (ts->ts_flags & TSF_BOUND) != 0,
+			    ("Thread %p shouldn't migrate!", td));
 			mtx = sched_switch_migrate(tdq, td, srqflag);
+		}
 	} else {
 		/* This thread must be going to sleep. */
 		TDQ_LOCK(tdq);
 		mtx = thread_lock_block(td);

Thanks,
matthew
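
P.S. To make option (2) concrete, here is a rough, untested sketch of
what a locked pin/unpin pair could look like.  The names
sched_pin_locked()/sched_unpin_locked() are hypothetical, and the whole
thing assumes WITNESS has first been moved off sched_pin() onto some
private mechanism -- otherwise the spin lock taken by thread_lock()
would recurse back through WITNESS into the pin path:

	#include <sys/param.h>
	#include <sys/systm.h>
	#include <sys/lock.h>
	#include <sys/mutex.h>
	#include <sys/proc.h>

	/*
	 * Hypothetical locked variants of sched_pin()/sched_unpin().
	 * Today sched_pin() is just "curthread->td_pinned++" with no
	 * lock held, so sched_affinity() running on another CPU can
	 * evaluate THREAD_CAN_MIGRATE() (td_pinned == 0) while
	 * td_pinned is changing -- that is the race described above.
	 * Holding the thread lock here serializes the update against
	 * that check.
	 */
	static __inline void
	sched_pin_locked(void)
	{
		struct thread *td = curthread;

		thread_lock(td);
		td->td_pinned++;
		thread_unlock(td);
	}

	static __inline void
	sched_unpin_locked(void)
	{
		struct thread *td = curthread;

		thread_lock(td);
		MPASS(td->td_pinned > 0);
		td->td_pinned--;
		thread_unlock(td);
	}

The cost is a spin lock round-trip on every pin/unpin, so (2) is only
attractive if pin/unpin stays off the hottest paths once WITNESS stops
using it.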