Re: tmpfs panic

From: Kostik Belousov <kostikbel@gmail.com>
Date: Sun, 15 Jun 2008 13:19:18 +0300
On Sun, Jun 15, 2008 at 01:06:24PM +0400, Dmitry Morozovsky wrote:
> Hi there,
> 
> at contemporary RELENG_7/amd64
> 
> panic at umount phase (shutdown -r in progress):
> 
> (kgdb) bt
> #0  doadump () at pcpu.h:194
> #1  0x0000000000000010 in ?? ()
> #2  0xffffffff8021f530 in boot (howto=260) at 
> /usr/src/sys/kern/kern_shutdown.c:418
> #3  0xffffffff8021f94d in panic (fmt=0x104 <Address 0x104 out of bounds>) at 
> /usr/src/sys/kern/kern_shutdown.c:572
> #4  0xffffffff80394a64 in trap_fatal (frame=0xffffff00012129c0, 
> eva=18446742974216866000) at /usr/src/sys/amd64/amd64/trap.c:724
> #5  0xffffffff80394e35 in trap_pfault (frame=0xffffffffd517e8d0, usermode=0) at 
> /usr/src/sys/amd64/amd64/trap.c:641
> #6  0xffffffff803957db in trap (frame=0xffffffffd517e8d0) at 
> /usr/src/sys/amd64/amd64/trap.c:410
> #7  0xffffffff8037b54e in calltrap () at 
> /usr/src/sys/amd64/amd64/exception.S:169
> #8  0xffffffff802138dd in _mtx_lock_sleep (m=0xffffff00b41cbe78, 
> tid=18446742974216874432, opts=Variable "opts" is not available.
> ) at /usr/src/sys/kern/kern_mutex.c:335
> #9  0xffffffff80297b25 in vgone (vp=0xffffff00b41cbd90) at 
> /usr/src/sys/kern/vfs_subr.c:2471
> #10 0xffffffff8074d10e in tmpfs_alloc_vp (mp=0xffffff000933c978, 
> node=0xffffff00cef55000, lkflag=4098, vpp=0xffffffffd517ea98, 
> td=0xffffff00012129c0)
>     at /usr/src/sys/modules/tmpfs/../../fs/tmpfs/tmpfs_subr.c:396
> #11 0xffffffff8074c868 in tmpfs_root (mp=Variable "mp" is not available.
> ) at /usr/src/sys/modules/tmpfs/../../fs/tmpfs/tmpfs_vfsops.c:388
> #12 0xffffffff80294b27 in dounmount (mp=0xffffff000933c978, flags=524288, 
> td=0xffffff00012129c0) at /usr/src/sys/kern/vfs_mount.c:1273
> #13 0xffffffff80297ecc in vfs_unmountall () at 
> /usr/src/sys/kern/vfs_subr.c:2936
> #14 0xffffffff8021f7c9 in boot (howto=0) at 
> /usr/src/sys/kern/kern_shutdown.c:400
> #15 0xffffffff8021fab9 in reboot (td=Variable "td" is not available.
> ) at /usr/src/sys/kern/kern_shutdown.c:172
> #16 0xffffffff803950ba in syscall (frame=0xffffffffd517ec70) at 
> /usr/src/sys/amd64/amd64/trap.c:852
> #17 0xffffffff8037b75b in Xfast_syscall () at 
> /usr/src/sys/amd64/amd64/exception.S:290
> #18 0x00000000004084ec in ?? ()
> Previous frame inner to this frame (corrupt stack?)
I suspect this may be my mistake.
In case you can reproduce it, please try the patch below.

diff --git a/sys/fs/tmpfs/tmpfs_subr.c b/sys/fs/tmpfs/tmpfs_subr.c
index cc1b75f..0c537c4 100644
--- a/sys/fs/tmpfs/tmpfs_subr.c
+++ b/sys/fs/tmpfs/tmpfs_subr.c
@@ -391,11 +391,8 @@ loop:
 
 	vnode_pager_setsize(vp, node->tn_size);
 	error = insmntque(vp, mp);
-	if (error) {
-		vgone(vp);
-		vput(vp);
+	if (error)
 		vp = NULL;
-	}
 
 unlock:
 	TMPFS_NODE_LOCK(node);
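
For context, my guess at what goes wrong (a sketch of the assumed behaviour, not the actual code): insmntque() now reclaims and drops the vnode itself when it cannot insert it into the mount point, roughly like this:

/*
 * Simplified sketch of the assumed insmntque() failure path; the
 * function name, the unmount check and the error value are
 * illustrative only.
 */
static int
insmntque_failure_sketch(struct vnode *vp, struct mount *mp)
{
	if (mp == NULL || (mp->mnt_kern_flag & MNTK_UNMOUNT) != 0) {
		vgone(vp);	/* reclaim the half-constructed vnode */
		vput(vp);	/* drop the reference and the vnode lock */
		return (EBUSY);
	}
	/* ... otherwise put vp on mp's vnode list and return 0 ... */
	return (0);
}

If that is right, the extra vgone()/vput() in tmpfs_alloc_vp() operates on a vnode that insmntque() has already reclaimed and unlocked, which would explain the page fault in _mtx_lock_sleep() under vgone() in the backtrace above.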
> 
> Also, active tmpfs usage easy leads to "swap zone exhausted, increase
> kern.maxswzone", even with 2G RAM + 4G swap and approx 2-3G of tmpfs
> in use -- any hints?

I think the message is pretty much self-explanatory. The kernel tried to
allocate metadata to track swap allocations, and the zone appears to be
exhausted. It is not a shortage of swap space; rather, the kernel zone
used to track swap allocations has run out.

It is quite non-obvious how to tune this limit automatically, since the
zone is allocated before swap is configured.
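
Because of that, the only workaround I can suggest is to bump the loader tunable by hand, e.g. in /boot/loader.conf (the value below is purely illustrative and would need to be sized to the expected tmpfs/swap usage):

# /boot/loader.conf
kern.maxswzone="33554432"	# bytes of KVA reserved for swap metadata

It cannot be changed with sysctl(8) at runtime, for exactly the reason above: the zone is created early in boot, before any swap device is configured.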

Received on Sun Jun 15 2008 - 08:48:12 UTC