On Fri, Aug 03, 2007 at 12:20:19PM +0200, Pawel Jakub Dawidek wrote:
> On Thu, Aug 02, 2007 at 03:58:26PM +0400, Dmitry Morozovsky wrote:
> >
> > Hi there colleagues,
> >
> > FreeBSD/i386 on Athlon X2, HEAD without WITNESS. 4G of RAM. tmpfs used
> > for 'make release'.
> >
> > panic: lockmgr: locking against myself
> > cpuid = 0
> > KDB: enter: panic
> > [thread pid 19396 tid 100245 ]
> > Stopped at kdb_enter+0x32: leave
> >
> > db> tr
> > Tracing pid 19396 tid 100245 td 0xce194220
> > kdb_enter(c066f664,0,c066dca9,e92799cc,0,...) at kdb_enter+0x32
> > panic(c066dca9,e92799dc,c0559cc7,e9279ac0,ca2f7770,...) at panic+0x124
> > _lockmgr(ca2f77c8,3002,ca2f77f8,ce194220,c0675afc,...) at _lockmgr+0x401
> > vop_stdlock(e9279a5c,ce194220,3002,ca2f7770,e9279a80,...) at vop_stdlock+0x40
> > VOP_LOCK1_APV(d06417e0,e9279a5c,e9279bc0,0,c8d00330,...) at VOP_LOCK1_APV+0x46
> > _vn_lock(ca2f7770,3002,ce194220,c0675afc,7f3,...) at _vn_lock+0x166
> > vget(ca2f7770,1000,ce194220,0,e9279b98,...) at vget+0x114
> > vm_object_reference(d1c70348,e9279b30,c063f81d,c0c71000,e381d000,...) at vm_object_reference+0x12a
> > kern_execve(ce194220,e9279c5c,0,28204548,282045d8,e381d000,e381d000,e381d015,e381d4dc,e385d000,3fb24,3,20) at kern_execve+0x31a
> > execve(ce194220,e9279cfc,c,ce194220,e9279d2c,...) at execve+0x4c
> > syscall(e9279d38) at syscall+0x345
> > Xint0x80_syscall() at Xint0x80_syscall+0x20
> > --- syscall (59, FreeBSD ELF32, execve), eip = 0x28146a47, esp = 0xbfbfe4cc, ebp = 0xbfbfe4e8 ---
> >
> > db> show lockedvnods
> > Locked vnodes
> >
> > 0xca2f7770: tag tmpfs, type VREG
> >     usecount 1, writecount 0, refcount 4 mountedhere 0
> >     flags ()
> >      v_object 0xd1c70348 ref 1 pages 19
> >      lock type tmpfs: EXCL (count 1) by thread 0xce194220 (pid 19396) with 1 pending
> >         tag VT_TMPFS, tmpfs_node 0xd177f9d4, flags 0x0, links 9
> >         mode 0555, owner 0, group 0, size 76648, status 0x0
> >
> > It seems there is some locking problem in tmpfs.
> > What other info should I provide to help resolve the problem?
>
> Here you can find two patches, which may or may not fix your problem.
> The first one is actually only to improve debugging.
>
> This patch adds all vnode flags to the output, because I believe you
> have VI_OWEINACT set, but not printed:
>
> http://people.freebsd.org/~pjd/patches/vfs_subr.c.4.patch
>
> The problem here is that vm_object_reference() calls vget() without any
> lock flag, and vget() locks the vnode exclusively when the VI_OWEINACT
> flag is set. vget() should probably be fixed too, but jeff_at_'s opinion
> is that it shouldn't happen in this case, so this may be a tmpfs bug.
>
> The patch below fixes some locking problems in tmpfs:
>
> http://people.freebsd.org/~pjd/patches/tmpfs.patch
>
> The problems are:
> - tmpfs_root() should honour its 'flags' argument, and not always lock
>   the vnode exclusively,
> - tmpfs_lookup() should lock the vnode using cnp->cn_lkflags, and not
>   always do it exclusively,
> - in the ".." case, when we unlock the directory vnode to avoid a
>   deadlock, we should relock it with the same lock type it was held
>   with before, and not always relock it exclusively.
>
> Note that this patch wasn't even compile-tested.

Perhaps vget() should check whether the vnode is already locked by the
current thread. On the other hand, I do not see how this scenario could
occur (note that the usecount is already > 0). tmpfs may operate on
random vnodes due to the lack of synchronization between reclamation and
vnode attachment to the tmpfs node. I already discussed this with
delphij_at_.
This archive was generated by hypermail 2.4.0 : Wed May 19 2021 - 11:39:15 UTC