Re: panic: sched_priority: invalid priority 2906: nice 0, ticks 122865664 ftick 516947 ltick 517947 tick pri 2726

From: Giovanni Trematerra <giovanni.trematerra@gmail.com>
Date: Mon, 29 Nov 2010 22:47:15 +0100
On Mon, Nov 29, 2010 at 9:56 PM, Attilio Rao <attilio@freebsd.org> wrote:
> 2010/11/29 Alexander Motin <mav@freebsd.org>:
>> On 29.11.2010 17:07, John Baldwin wrote:
>>>
>>> On Friday, November 26, 2010 4:38:49 pm David Rhodus wrote:
>>>>
>>>> I hit this panic on my NFS server.
>>>>
>>>> -DR
>>>>
>>>> coke.fun dumped core - see /var/crash/vmcore.2
>>>>
>>>> Fri Nov 26 14:50:48 UTC 2010
>>>>
>>>> FreeBSD coke.fun 9.0-CURRENT FreeBSD 9.0-CURRENT #14 r215800: Wed Nov
>>>> 24 12:35:30 UTC 2010     root@coke.fun:/usr/obj/usr/src/sys/GENERIC
>>>> i386
>>>>
>>>> panic: sched_priority: invalid priority 2906: nice 0, ticks 122865664
>>>> ftick 516947 ltick 517947 tick pri 2726
>>>
>>> I ran the numbers and, assuming an hz of 1000, this requires you to have
>>> a very large value for ts_ticks (about (2726 * 24) << 10).  I suspect
>>> this is due to sched_tick() being invoked after a long idle sleep,
>>> combined with the eventtimer changes.  Can you go to frame 10 and
>>> 'p td->td_sched->ts_ticks'?
>>
>> As far as I can see, this is a VirtualBox virtual machine. So it is still
>> a question which side creates such a large gap in sched_tick() on some
>> CPUs. It could be interesting to get a ktr(4) dump with the KTR_SPARE2
>> mask:
>>
>> options         KTR
>> options         ALQ
>> options         KTR_ALQ
>> options         KTR_ENTRIES=131072
>> options         KTR_COMPILE=(KTR_SPARE2)
>> options         KTR_MASK=(KTR_SPARE2)
>
> I'm sure gianni (CC'ed) hit this bug before
> and drew some conclusions on it
> (maybe he also has a patch).
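
(On the ktr(4) suggestion: if the trace stays in the in-memory buffer
rather than going out through ALQ, it should be recoverable from the crash
dump with ktrdump(8), something like "ktrdump -e /boot/kernel/kernel -m
/var/crash/vmcore.2 -t"; that invocation is from memory, so double-check
the man page.)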

I hit it on QEMU and assumed that QEMU was not doing a proper job of
distributing run time among the cores (so VirtualBox too?).
I figured out that sched_tick() is being passed a huge number of elapsed
ticks for the CPU, in my case at startup, by hardclock_anycpu() in
kern_clock.c.
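
For scale, here is a rough userland sketch of jhb's arithmetic above. It is
only a sketch, not the real sched_ule.c macros: hz and the priority range
of ~24 are assumptions, and the other numbers are read off the panic string.

#include <stdio.h>

int
main(void)
{
	int hz = 1000;			/* assumed kern.hz */
	int ticks = 122865664;		/* ts_ticks from the panic string */
	int ftick = 516947, ltick = 517947;
	int range = 24;			/* assumed timeshare priority range */
	int window = ltick - ftick;	/* sampling window, == hz here */

	/* tick pri ~ (ts_ticks >> SCHED_TICK_SHIFT) / (window / range) */
	int tick_pri = (ticks >> 10) / (window / range);

	printf("tick pri ~%d\n", tick_pri);	/* ~2900 vs. 2726 in the panic */
	return (0);
}

That ts_ticks amounts to roughly 120 s of CPU time charged against a
one-second (ltick - ftick == hz) window, which is consistent with a single
sched_tick() call being handed a huge cnt.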

I don't have a patch, only a dirty hack: if sched_tick() gets a huge tick
count as input, clamp it so that we never account for more than 5 s of
solid running ((hz*10)/2 == 5*hz ticks), which is something ULE can still
handle.

Hope this helps.

diff -r d16464301129 sys/kern/kern_clock.c
--- a/sys/kern/kern_clock.c     Thu Sep 23 11:56:35 2010 -0400
+++ b/sys/kern/kern_clock.c     Sun Oct 03 17:53:39 2010 -0400
@@ -525,7 +525,7 @@ hardclock_anycpu(int cnt, int usermode)
              PROC_SUNLOCK(p);
      }
      thread_lock(td);
-       sched_tick(cnt);
+       sched_tick((cnt < (hz*10)/2) ? cnt : (hz*10)/2);
      td->td_flags |= flags;
      thread_unlock(td);
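
FWIW, a slightly tidier spelling of the same clamp, if you prefer it, would
be sched_tick(imin(cnt, 5 * hz)), using imin() from libkern. Either way it
only papers over whatever makes cnt so large in the first place.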

--
Giovanni Trematerra