panic (kmem_map too small) with smbfs

From: Ivan Voras <ivoras_at_fer.hr>
Date: Sun, 15 May 2005 02:30:10 +0200 (CEST)
I get regular and repeatable panics when using smbfs for a long time.
In that workload I'm usually playing video from a Windows XP network
share, and after a few hours of constant usage (approx. 2-4 hours, very
irregular, but it always happens; there are no other significant
processes on the system), the machine panics. It's a Celeron M laptop
with 256MB RAM, and otherwise very stable.
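
A way to narrow this down is presumably to sample the kernel allocator
statistics every few minutes while the video is playing and watch which
malloc(9) type or UMA zone keeps growing; nothing here is smbfs-specific,
just standard vmstat(8) and sysctl(8) output:

  # sample while the workload runs; repeat and compare
  vmstat -m | head -30                  # malloc(9) types and memory in use
  vmstat -z | head -30                  # UMA zone usage
  sysctl vfs.numvnodes debug.numcache   # vnode and namecache counts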
#0  doadump () at pcpu.h:159
#1  0xc0541e3e in boot (howto=260) at /usr/src/sys/kern/kern_shutdown.c:410
#2  0xc0542189 in panic (fmt=0xc071a72b "kmem_malloc(%ld): kmem_map too small: %ld total allocated")
     at /usr/src/sys/kern/kern_shutdown.c:566

(with the arguments filled in: "kmem_malloc(4096): kmem_map too small:
79298560 total allocated"; see the notes after the trace)

#3  0xc067c70b in kmem_malloc (map=0xc103b0c0, size=4096, flags=258) at /usr/src/sys/vm/vm_kern.c:299
#4  0xc068fce7 in page_alloc (zone=0xc1045e40, bytes=0, pflag=0x0, wait=0) at /usr/src/sys/vm/uma_core.c:957
#5  0xc068f710 in slab_zalloc (zone=0xc1045e40, wait=258) at /usr/src/sys/vm/uma_core.c:827
#6  0xc06914c7 in uma_zone_slab (zone=0xc1045e40, flags=2) at /usr/src/sys/vm/uma_core.c:1994
#7  0xc069173f in uma_zalloc_bucket (zone=0xc1045e40, flags=2) at /usr/src/sys/vm/uma_core.c:2103
#8  0xc06912f2 in uma_zalloc_arg (zone=0xc1045e40, udata=0x0, flags=2) at /usr/src/sys/vm/uma_core.c:1911
#9  0xc059b1fe in cache_enter (dvp=0xc1cdf630, vp=0xc1d6c528, cnp=0xd4842a2c) at uma.h:276
#10 0xc1d2ac92 in ?? ()
#11 0xc1cdf630 in ?? ()
#12 0xc1d6c528 in ?? ()
#13 0xd4842a2c in ?? ()
#14 0x0000003a in ?? ()
#15 0xc1d01204 in ?? ()
#16 0xd4842a1c in ?? ()
#17 0xd4842a54 in ?? ()
#18 0xc1ce9980 in ?? ()
#19 0xc1d6c528 in ?? ()
#20 0xc1d01200 in ?? ()
#21 0xc1918d80 in ?? ()
#22 0xc1bcc500 in ?? ()
#23 0xd4842a5c in ?? ()
#24 0xc0533a6b in lockmgr (lkp=0xc1cdf630, flags=0, interlkp=0x25, td=0x8) at /usr/src/sys/kern/kern_lock.c:396
#25 0xc1d2adb3 in ?? ()
#26 0xc1cdf630 in ?? ()
#27 0xd4842cb4 in ?? ()
#28 0xc1bcc500 in ?? ()
#29 0xc1918d80 in ?? ()
#30 0x00000000 in ?? ()
#31 0xc1918d80 in ?? ()
#32 0xc1d0e000 in ?? ()
#33 0x00000001 in ?? ()
#34 0x000181ed in ?? ()
#35 0x00000000 in ?? ()
#36 0x00000000 in ?? ()
#37 0x0800ff03 in ?? ()
#38 0xb19d80dd in ?? ()
#39 0x15dfe000 in ?? ()
#40 0x00000000 in ?? ()
#41 0x00001104 in ?? ()
#42 0x01030002 in ?? ()
#43 0x01000040 in ?? ()
#44 0xc1918d80 in ?? ()
#45 0xd4842bec in ?? ()
#46 0xc0533a6b in lockmgr (lkp=0xc1cdf630, flags=3251500592, interlkp=0xd4842cb4, td=0xc1ce9980)
     at /usr/src/sys/kern/kern_lock.c:396
#47 0xc1d2cd61 in ?? ()
#48 0xc1cdf630 in ?? ()
#49 0xd4842cb4 in ?? ()
#50 0xc1bcc500 in ?? ()
#51 0xc1cdf630 in ?? ()
#52 0xd4842ce0 in ?? ()
#53 0xc05b4252 in getdirentries (td=0xc1918d80, uap=0x20002) at vnode_if.h:894
Previous frame identical to this frame (corrupt stack?)
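
Two notes on the trace, as far as I can read it:

Frame #9 is cache_enter() pulling a new namecache entry out of its UMA
zone, so the allocation that finally overflows kmem_map comes from the
VFS name cache. The "??" frames above it (return addresses around
0xc1d2xxxx) are outside the kernel text and are most likely in a loaded
module, presumably smbfs.ko, whose symbols gdb hasn't loaded. A sketch
of how to resolve them, assuming the module still has its debug info
(the 0xc1d2a000 text address below is made up; compute the real one
from the kldstat load address plus the .text offset from objdump -h):

  kldstat                          # find the load address of smbfs.ko
  (gdb) add-symbol-file /boot/kernel/smbfs.ko 0xc1d2a000   # address is an example

As a workaround until the leak itself is found, kmem_map can be made
larger at boot. On i386 it is sized from physical RAM (roughly a third
of it, if I have the default scale right, i.e. about 85MB with 256MB of
RAM), so ~79MB allocated at panic time means the map was simply full.
The vm.kmem_size loader tunable overrides that; the value below is only
an example (128MB, given in bytes):

  # /boot/loader.conf -- example value, 128MB in bytes
  vm.kmem_size="134217728"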


-- 
Every sufficiently advanced magic is indistinguishable from technology
    - Arthur C Anticlarke