On Fri, Sep 11, 2009 at 07:05:59PM +0100, Kris Kennaway wrote:
> Kris Kennaway wrote:
> > 9.0 doing I/O to a zfs:
> >
> > panic: sx_xlock() of destroyed sx @
> > /zoo/kris/src8/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_rlock.c:535
> >
> > db> wh
> > Tracing pid 14 tid 100047 td 0xffffff000357c720
> > kdb_enter() at kdb_enter+0x3d
> > panic() at panic+0x17b
> > _sx_xlock() at _sx_xlock+0xe9
> > zfs_range_unlock() at zfs_range_unlock+0x38
> > zfs_get_data() at zfs_get_data+0xd7
> > zil_commit() at zil_commit+0x532
> > zfs_sync() at zfs_sync+0xa6
> > sync_fsync() at sync_fsync+0x13a
> > VOP_FSYNC_APV() at VOP_FSYNC_APV+0xb7
> > sync_vnode() at sync_vnode+0x157
> > sched_sync() at sched_sync+0x1d1
> > fork_exit() at fork_exit+0x12a
> > fork_trampoline() at fork_trampoline+0xe
> > --- trap 0, rip = 0, rsp = 0xffffff8125da0d30, rbp = 0 ---
> >
> > This was essentially just doing make world + cvs update + tar creation
> > in a loop and failed after about a week.
>
> Any ideas?  Machine is still in DDB.

I was trying to reproduce it by doing much more frequent syncs and
lowering the vnode limit, so vnodes are inactivated more often, but I
wasn't able to reproduce it.

The problem here is that we lock a range for the given znode, but the
znode is destroyed before we unlock the range. If you compile ZFS with
debugging (you have to uncomment CFLAGS+=-DDEBUG=1 in
sys/modules/zfs/Makefile and recompile), we should be able to catch who
is killing the znode, because then avl_destroy(&zp->z_range_avl) should
trigger a panic that the tree isn't empty.

--
Pawel Jakub Dawidek                       http://www.wheel.pl
pjd@FreeBSD.org                           http://www.FreeBSD.org
FreeBSD committer                         Am I Evil? Yes, I Am!
This archive was generated by hypermail 2.4.0 : Wed May 19 2021 - 11:39:55 UTC