Re: panic: softclock_call_cc: act 0xfffffe0003d36958 0

From: Hans Petter Selasky <hps@selasky.org>
Date: Wed, 30 Dec 2015 18:41:09 +0100
On 12/30/15 18:16, Bjoern A. Zeeb wrote:
> Hi,
>
> I am at SVN r292843 and I just got this panic:
>
> panic: softclock_call_cc: act 0xfffffe0003d36958 0
> cpuid = 0
> KDB: stack backtrace:
> db_trace_self_wrapper() at db_trace_self_wrapper+0x2b/frame 0xfffffe0839d897f0
> vpanic() at vpanic+0x182/frame 0xfffffe0839d89870
> kassert_panic() at kassert_panic+0x126/frame 0xfffffe0839d898e0
> softclock_call_cc() at softclock_call_cc+0x4d4/frame 0xfffffe0839d899c0
> softclock() at softclock+0x47/frame 0xfffffe0839d899e0
> intr_event_execute_handlers() at intr_event_execute_handlers+0x96/frame 0xfffffe0839d89a20
> ithread_loop() at ithread_loop+0xa6/frame 0xfffffe0839d89a70
> fork_exit() at fork_exit+0x84/frame 0xfffffe0839d89ab0
> fork_trampoline() at fork_trampoline+0xe/frame 0xfffffe0839d89ab0
> --- trap 0, rip = 0, rsp = 0, rbp = 0 ---
> KDB: enter: panic
> [ thread pid 12 tid 100013 ]
> Stopped at      kdb_enter+0x3b: movq    $0,kdb_why
> db> show alllocks
> Process 75753 (ping) thread 0xfffff80016f6a9a0 (100110)
> exclusive lockmgr nfs (nfs) r = 0 (0xfffff80016a4b068) locked @ /tank/users/bz/projects_vnet.svn/sys/kern/vfs_subr.c:2473
> shared lockmgr nfs (nfs) r = 0 (0xfffff80016b925f0) locked @ /tank/users/bz/projects_vnet.svn/sys/kern/vfs_subr.c:2473
> Process 75752 (sh) thread 0xfffff8019f63c4d0 (100254)
> exclusive rw tcpinp (tcpinp) r = 0 (0xfffff80015307300) locked @ /tank/users/bz/projects_vnet.svn/sys/netinet/tcp_usrreq.c:872
> exclusive sx so_snd_sx (so_snd_sx) r = 0 (0xfffff800153037c8) locked @ /tank/users/bz/projects_vnet.svn/sys/kern/uipc_sockbuf.c:265
> exclusive lockmgr nfs (nfs) r = 0 (0xfffff80016b92418) locked @ /tank/users/bz/projects_vnet.svn/sys/kern/vfs_subr.c:2473
> shared lockmgr nfs (nfs) r = 0 (0xfffff80016b925f0) locked @ /tank/users/bz/projects_vnet.svn/sys/kern/vfs_subr.c:2473
> Process 75751 (jexec) thread 0xfffff805254b14d0 (100370)
> exclusive lockmgr nfs (nfs) r = 0 (0xfffff80016a4a5f0) locked @ /tank/users/bz/projects_vnet.svn/sys/kern/vfs_subr.c:2473
> shared lockmgr nfs (nfs) r = 0 (0xfffff80016b925f0) locked @ /tank/users/bz/projects_vnet.svn/sys/kern/vfs_subr.c:2473
> Process 75749 (jail) thread 0xfffff8001503f4d0 (100150)
> exclusive sleep mutex maxsockets_change (eventhandler list) r = 0 (0xfffff8000bd99b90) locked @ /tank/users/bz/projects_vnet.svn/sys/kern/subr_eventhandler.c:126
> shared sx vnet_sysinit_sxlock (vnet_sysinit_sxlock) r = 0 (0xffffffff81d23cd8) locked @ /tank/users/bz/projects_vnet.svn/sys/net/vnet.c:573
> exclusive sx allprison (allprison) r = 0 (0xffffffff81cfe238) locked @ /tank/users/bz/projects_vnet.svn/sys/kern/kern_jail.c:1020
>
>
>
> I cannot make a dump so if anyone wants any further information please let me know.  I’ll need the machine again but I’ll wait an hour or two (at most).

Hi,

From past experience, these panics reflect use-after-free scenarios 
involving callouts. Maybe dump the backtraces of all CPUs?

Are you able to make a crash dump?

Further, there are some callout-specific structures in 
sys/kern/kern_timeout.c which you might want to dump.

--HPS
Received on Wed Dec 30 2015 - 16:39:14 UTC
