Re: Panic on kern_event.c

From: Sylvain GALLIANO <sg_at_efficientip.com>
Date: Thu, 8 Nov 2018 17:05:03 +0100
Hi,

I replaced
<< printf("XXX knote %p already in tailq  status:%x kq_count:%d  [%p %p] %u\n", kn, kn->kn_status, kq->kq_count, kn->kn_tqe.tqe_next, kn->kn_tqe.tqe_prev, __LINE__);
by
>> panic("XXX knote %p already in tailq  status:%x kq_count:%d  [%p %p] %u\n", kn, kn->kn_status, kq->kq_count, kn->kn_tqe.tqe_next, kn->kn_tqe.tqe_prev, __LINE__);

Here is the stack during panic:
panic: XXX knote 0xfffff801e1c6ddc0 already in tailq  status:1 kq_count:2
[0 0xfffff8000957a978]  2671

cpuid = 0
time = 1541688832
KDB: stack backtrace:
db_trace_self_wrapper() at db_trace_self_wrapper+0x2c/frame
0xfffffe0412258fd0
kdb_backtrace() at kdb_backtrace+0x53/frame 0xfffffe04122590a0
vpanic() at vpanic+0x277/frame 0xfffffe0412259170
doadump() at doadump/frame 0xfffffe04122591d0
knote_enqueue() at knote_enqueue+0xf2/frame 0xfffffe0412259210
kqueue_register() at kqueue_register+0xaed/frame 0xfffffe0412259340
kqueue_kevent() at kqueue_kevent+0x13c/frame 0xfffffe04122595b0
kern_kevent_fp() at kern_kevent_fp+0x66/frame 0xfffffe0412259610
kern_kevent() at kern_kevent+0x17f/frame 0xfffffe0412259700
kern_kevent_generic() at kern_kevent_generic+0xfe/frame 0xfffffe0412259780
sys_kevent() at sys_kevent+0xaa/frame 0xfffffe0412259810
syscallenter() at syscallenter+0x4e3/frame 0xfffffe04122598f0
amd64_syscall() at amd64_syscall+0x1b/frame 0xfffffe04122599b0
fast_syscall_common() at fast_syscall_common+0x101/frame 0xfffffe04122599b0
--- syscall (560, FreeBSD ELF64, sys_kevent), rip = 0x406e3bfa, rsp =
0x7fffdf7e9db8, rbp = 0x7fffdf7e9e00 ---
KDB: enter: panic


You can get kernel.debug + the vmcore at:
https://drive.google.com/drive/folders/1MbqJQm12-KOYDbb4-9uNRTnAdsNqLaIP?usp=sharing


On Wed, Nov 7, 2018 at 05:35, Mark Johnston <markj_at_freebsd.org> wrote:

> On Tue, Nov 06, 2018 at 10:50:06AM +0100, Sylvain GALLIANO wrote:
> > Hi,
> >
> > I got random panics on CURRENT & 11.2-STABLE in kern_event.c
> >
> > The panics occur in syslog-ng (logging at a high rate) with the following lines:
> >
> >   Panic String: Bad tailq NEXT(0xfffff80039ae7a38->tqh_last) != NULL
> >   Panic String: Bad tailq head 0xfffff80039f1a238 first->prev != head
> >
> > It looks like knote_enqueue tries to add an existing knote to the TAILQ
> > (confirmed by the following patch).
> >
> > logs after apply patch:
> > XXX knote 0xfffff8012e3d33c0 already in tailq  status:1 kq_count:1  [0
> > 0xfffff800327d3538]  2671
> > XXX knote 0xfffff80032861780 already in tailq  status:1 kq_count:1  [0
> > 0xfffff80032457938]  2671
>
> Can you grab the stack when this happens as well, with kdb_backtrace()?
> Or better, convert the print into a panic so that we can examine the
> kernel dump.
>
Received on Thu Nov 08 2018 - 15:05:17 UTC
