Qemu 0.11.1, installed from ports with -CURRENT as host and emulating 8 CPUs on an 8-way box, makes my FreeBSD -CURRENT guest kernel panic at boot with this backtrace:

panic: sched_priority: invalid priority 230: nice 0, ticks 2289712 ftick 353 ltick 1363 tick pri 50
cpuid = 7
KDB: stack backtrace:
db_trace_self_wrapper() at db_trace_self_wrapper+0x2a
kdb_backtrace() at kdb_backtrace+0x37
panic() at panic+0x182
sched_priority() at sched_priority+0x1f8
sched_clock() at sched_clock+0x136
statclock() at statclock+0xc6
handleevents() at handleevents+0xda
timercb() at timercb+0x1cb
lapic_handle_timer() at lapic_handle_timer+0xb2
Xtimerint() at Xtimerint+0x8d

The panic is due to a KASSERT in sched_priority() (sched_ule.c):

	KASSERT(pri >= PRI_MIN_TIMESHARE && pri <= PRI_MAX_TIMESHARE,
	    ("sched_priority: invalid priority %d: nice %d, "
	    "ticks %d ftick %d ltick %d tick pri %d",
	    pri, td->td_proc->p_nice, td->td_sched->ts_ticks,
	    td->td_sched->ts_ftick, td->td_sched->ts_ltick,
	    SCHED_PRI_TICKS(td->td_sched)));

ts->ts_ticks is far higher than one would expect. I figured out that sched_tick() is being passed a huge number of elapsed ticks for the CPU at startup by hardclock_anycpu() (kern_clock.c). I assume that QEMU is not doing a proper job of distributing run-time amongst the cores.

My hack, below, ensures that a thread is never charged with more than 5 seconds of solid run-time even when a huge tick count is passed in to sched_tick(), which is something ULE can still handle. ((hz*10)/2 reduces to 5*hz, i.e. five seconds' worth of hardclock ticks.) I don't think the hack is worth having in the tree for now; I'm just posting it FYI.

--
Gianni

diff -r d16464301129 sys/kern/kern_clock.c
--- a/sys/kern/kern_clock.c	Thu Sep 23 11:56:35 2010 -0400
+++ b/sys/kern/kern_clock.c	Sun Oct 03 17:53:39 2010 -0400
@@ -525,7 +525,7 @@ hardclock_anycpu(int cnt, int usermode)
 		PROC_SUNLOCK(p);
 	}
 	thread_lock(td);
-	sched_tick(cnt);
+	sched_tick((cnt < (hz*10)/2) ? cnt : (hz*10)/2);
 	td->td_flags |= flags;
 	thread_unlock(td);
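
For illustration only, a minimal standalone C sketch of the clamp above. This is not kernel code: the hz value, the clamp_ticks() helper, and the sample numbers are stand-ins of mine; it just demonstrates that the expression caps the tick count at five seconds' worth of hardclock ticks while leaving sane values untouched.

#include <stdio.h>

static int hz = 1000;			/* stand-in for the kernel's hz */

static int
clamp_ticks(int cnt)
{
	int max = (hz * 10) / 2;	/* == 5 * hz, i.e. 5 seconds */

	return (cnt < max ? cnt : max);
}

int
main(void)
{
	/* A sane tick count passes through unchanged... */
	printf("cnt=10      -> %d\n", clamp_ticks(10));
	/* ...while a huge backlog, like QEMU hands over at boot, is capped. */
	printf("cnt=1000000 -> %d\n", clamp_ticks(1000000));
	return (0);
}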