After upgrading to current as of about 24 hours ago, I now consistently
get panics when I try to unmount filesystems with pending I/O. If I
just mount and unmount a filesystem, it works, but if I try to unmount
a filesystem that has been active, I get:

handle_workitem_remove: vget: got error 16 while accessing filesystem
softdep_waitidle: Failed to flush worklist for 0xc2c6429c
panic: vfs_allocate_syncvnode: insmntque failed

I have a crashdump and the backtrace looks like:

#8  0xc055c545 in panic (fmt=0xc07309f1 "vfs_allocate_syncvnode: insmntque failed")
    at /usr/src/sys/kern/kern_shutdown.c:547
#9  0xc05d6b9a in vfs_allocate_syncvnode (mp=0xc2c6429c)
    at /usr/src/sys/kern/vfs_subr.c:3111
#10 0xc05d1222 in dounmount (mp=0xc2c6429c, flags=0x8000000, td=0xc2d9c600)
    at /usr/src/sys/kern/vfs_mount.c:1289
#11 0xc05d16ff in unmount (td=0xc2d9c600, uap=0xd6218cfc)
    at /usr/src/sys/kern/vfs_mount.c:1170
#12 0xc06de9ea in syscall (frame=0xd6218d38)
    at /usr/src/sys/i386/i386/trap.c:1008
#13 0xc06cc5c0 in Xint0x80_syscall ()
    at /usr/src/sys/i386/i386/exception.s:196

According to kgdb, insmntque() returned both EXDEV and EBUSY. The
former is impossible, so I suspect kgdb is confused and the latter is
correct. The mountpoint shows as being in the process of unmounting and
has mnt_nvnodelistsize == 0, so insmntque() failing makes sense at the
micro level. Having the system panic as a result does not make sense.

The softdep_waitidle() error looks suspicious: at a quick glance, it
appears to wait only 10 ticks (10 msec) for the dependency chain to
empty. This seems unreasonably short for an operation that probably
includes physical I/O.

Is my reasoning correct? Does anyone have suggestions on where to look
next?

-- 
Peter Jeremy
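P.S. For reference, here is the wait loop as I read it, paraphrased
from softdep_waitidle() in ffs_softdep.c. This is a sketch only; the
exact names and locking details are approximate, but note that the
printf matches the "Failed to flush worklist" message above:

static int
softdep_waitidle(struct mount *mp)
{
	struct ufsmount *ump = VFSTOUFS(mp);
	int i;

	ACQUIRE_LOCK(&lk);
	/*
	 * Retry at most 10 times, sleeping 1 tick per iteration, so the
	 * total wait is bounded at ~10 ticks no matter how much physical
	 * I/O the dependency chain still needs to complete.
	 */
	for (i = 0; i < 10 && ump->softdep_deps; i++)
		msleep(&ump->softdep_deps, &lk, PVM, "softdeps", 1);
	FREE_LOCK(&lk);
	if (i == 10) {
		printf("softdep_waitidle: Failed to flush worklist for %p\n",
		    mp);
		return (EBUSY);
	}
	return (0);
}

If that reading is right, softdep_waitidle() gives up with EBUSY after
~10 ticks, the unmount attempt fails, and the error-recovery path in
dounmount() then tries to re-create the syncer vnode via
vfs_allocate_syncvnode() while the mount is still marked as
unmounting, so insmntque() refuses with EBUSY and we hit the panic in
the backtrace above.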