Re: netgraph calling VFS from network swi (was: Re: 5.2-RC fatal trap 12)

From: Alexander Motin <mav_at_alkar.net>
Date: Fri, 12 Dec 2003 20:25:45 +0200
Robert Watson wrote:
>>>Same problem on other hardware but on system booted from same HDD: 
>>
>>This is a really scary stack trace -- it looks like netgraph is calling
>>into the kernel linker from the network swi, and that in turn is hitting
>>VFS.  I may have missed earlier messages in this thread, but do you have a
>>precise list of userland activities you're performing to trigger this?  It
>>looks like you're doing something that causes netgraph to load additional
>>modules...  Which would probably not be such a bad thing if it happened in
>>a different thread context.

This happens when I try to use the mpd daemon as a PPPoE server. It works 
fine on 4.9, but crashes on 5.2-RC.
At that time I really did not have all of the netgraph modules loaded; on 
4.9 all of the required modules were loaded automatically.
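Just to illustrate: on 5.2 one can check what is resident and load the 
missing modules by hand before starting mpd. This is only a sketch, and 
the exact set depends on the mpd configuration, but a PPPoE setup usually 
involves ng_ether, ng_pppoe and ng_socket:

  # list the netgraph modules currently loaded
  kldstat -v | grep ng_
  # load the ones a PPPoE server usually needs (this list is a guess)
  kldload ng_ether
  kldload ng_pppoe
  kldload ng_socket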

> FYI, you can probably work around the panic by preloading whatever module
> it's trying to load, such that the module is already available when the
> trigger event happens and it doesn't try to load the module in that
> context.

If I preload at least ng_tee, the system doesn't crash, but mpd still 
doesn't work. :( But that is another question.
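For anybody else hitting this, a minimal workaround sketch would be to 
preload the modules from /boot/loader.conf, so that nothing has to be 
kldloaded from the network swi at run time (the module list below is a 
guess based on a typical PPPoE setup; adjust it to whatever mpd actually 
needs):

  # /boot/loader.conf
  ng_socket_load="YES"
  ng_ether_load="YES"
  ng_pppoe_load="YES"
  ng_tee_load="YES"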

PS: Once, when I tried to unload ng_socket, I got another kernel trap:

Fatal trap 12: page fault while in kernel mode
cpuid = 0; apic id = 00
fault virtual address   = 0xc417b2f4
fault code              = supervisor read, page not present
instruction pointer     = 0x8:0xc05b3bb0
stack pointer           = 0x10:0xd2a38c98
frame pointer           = 0x10:0xd2a38cb0
code segment            = base 0x0, limit 0xfffff, type 0x1b
                         = DPL 0, pres 1, def32 1, gran 1
processor eflags        = interrupt enabled, resume, IOPL = 0
current process         = 27 (swi8: tty:sio clock)
trap number             = 12
panic: page fault
cpuid = 0;

syncing disks, buffers remaining... 3020 3020 3020 3020 3020 3020 3020 
3020 3020 3020 3020 3020 3020 3020 3020 3020 3020 3020 3020 3020
giving up on 176 buffers
Uptime: 1h18m15s
Dumping 383 MB
  16 32 48 64 80 96 112 128 144 160 176 192 208 224 240 256 272 288 304 
320 336 352 368
---
#0  doadump () at ../../../kern/kern_shutdown.c:240
240             dumping++;
Ready to go.  Enter 'tr' to connect to remote target
and 'getsyms' after connection to load kld symbols.
(kgdb) bt
#0  doadump () at ../../../kern/kern_shutdown.c:240
#1  0xc0576661 in boot (howto=0x100) at ../../../kern/kern_shutdown.c:372
#2  0xc0576a3e in panic () at ../../../kern/kern_shutdown.c:550
#3  0xc06d32cc in trap_fatal (frame=0xd2a38c58, eva=0x0) at 
../../../i386/i386/trap.c:821
#4  0xc06d2f72 in trap_pfault (frame=0xd2a38c58, usermode=0x0, 
eva=0xc417b2f4) at ../../../i386/i386/trap.c:735
#5  0xc06d2b83 in trap (frame=
       {tf_fs = 0xc07b0018, tf_es = 0xd2a30010, tf_ds = 0xc0590010, 
tf_edi = 0xc05b3b90, tf_esi = 0xc417b2e0, tf_ebp = 0xd2a38cb0, tf_isp = 
0xd2a38c84, tf_ebx = 0x6, tf_edx = 0x0, tf_ecx = 0xd8, tf_eax = 0x10000, 
tf_trapno = 0xc, tf_err = 0x0, tf_eip = 0xc05b3bb0, tf_cs = 0x8, 
tf_eflags = 0x10282, tf_esp = 0x8, tf_ss = 0xc071a1a5})
     at ../../../i386/i386/trap.c:420
#6  0xc06bfd78 in calltrap () at {standard input}:94
#7  0xc0587018 in softclock (dummy=0x0) at ../../../kern/kern_timeout.c:225
#8  0xc05631f2 in ithread_loop (arg=0xc39de380) at 
../../../kern/kern_intr.c:544
#9  0xc05621a4 in fork_exit (callout=0xc0563060 <ithread_loop>, arg=0x0, 
frame=0x0) at ../../../kern/kern_fork.c:793

-- 
Alexander Motin mav_at_alkar.net
ISP "Alkar-Teleport"