On 8/19/2014 8:53 AM, Larry Rosenman wrote:
> On 2014-08-19 08:42, Eric van Gyzen wrote:
>> On 08/18/2014 16:45, Ryan Stone wrote:
>>> The first thing that I'd like to see is (in kgdb):
>>>
>>> set $td=(struct thread)0xfffff8002abeb000
>>> tid $td->td_tid
>>> bt
>>>
>>> That will show us the backtrace of the thread that was blocked for
>>> so long.
>>
>> Make that:
>>
>> set $td=(struct thread *)0xfffff8002abeb000
>> tid $td->td_tid
>> bt
>>
>> Eric
>
> #0  doadump (textdump=1) at pcpu.h:219
> 219     pcpu.h: No such file or directory.
>         in pcpu.h
> (kgdb) set $td=(struct thread *)0xfffff8002abeb000
> Current language:  auto; currently minimal
> (kgdb) tid $td->td_tid
> [Switching to thread 469 (Thread 100681)]#0  sched_switch (
>     td=0xfffff8002abeb000, newtd=<value optimized out>,
>     flags=<value optimized out>) at /usr/src/sys/kern/sched_ule.c:1931
> 1931            cpuid = PCPU_GET(cpuid);
> (kgdb) bt
> #0  sched_switch (td=0xfffff8002abeb000, newtd=<value optimized out>,
>     flags=<value optimized out>) at /usr/src/sys/kern/sched_ule.c:1931
> #1  0xffffffff80a107d9 in mi_switch (flags=260, newtd=0x0)
>     at /usr/src/sys/kern/kern_synch.c:493
> #2  0xffffffff80a4c442 in sleepq_switch (wchan=<value optimized out>,
>     pri=<value optimized out>) at /usr/src/sys/kern/subr_sleepqueue.c:552
> #3  0xffffffff80a4c2a3 in sleepq_wait (wchan=0xfffff80070a4dd50, pri=96)
>     at /usr/src/sys/kern/subr_sleepqueue.c:631
> #4  0xffffffff809eb1fa in sleeplk (lk=<value optimized out>,
>     flags=<value optimized out>, ilk=<value optimized out>,
>     wmesg=<value optimized out>, pri=<value optimized out>,
>     timo=<value optimized out>) at /usr/src/sys/kern/kern_lock.c:225
> #5  0xffffffff809eaa06 in __lockmgr_args (lk=0xfffff80070a4dd50,
>     flags=<value optimized out>, ilk=0xfffff80070a4dd80,
>     wmesg=<value optimized out>, pri=<value optimized out>,
>     timo=<value optimized out>) at /usr/src/sys/kern/kern_lock.c:931
> #6  0xffffffff8092e092 in nfs_lock1 (ap=<value optimized out>)
>     at lockmgr.h:97
> #7  0xffffffff80f2d57c in VOP_LOCK1_APV (vop=<value optimized out>,
>     a=<value optimized out>) at vnode_if.c:2082
> #8  0xffffffff80abd22a in _vn_lock (vp=0xfffff80070a4dce8,
>     flags=<value optimized out>,
>     file=0xffffffff8110db88 "/usr/src/sys/kern/vfs_subr.c", line=2137)
>     at vnode_if.h:859
> #9  0xffffffff80aad4e7 in vget (vp=0xfffff80070a4dce8, flags=524544,
>     td=0xfffff8002abeb000) at /usr/src/sys/kern/vfs_subr.c:2137
> #10 0xffffffff80aa1491 in vfs_hash_get (mp=0xfffff8002aa1e990,
>     hash=1741450670, flags=<value optimized out>, td=0xfffff8002abeb000,
>     vpp=0xfffffe100c75c670, fn=0xffffffff80935820 <newnfs_vncmpf>)
>     at /usr/src/sys/kern/vfs_hash.c:88
> #11 0xffffffff809314bd in ncl_nget (mntp=0xfffff8002aa1e990,
>     fhp=0xfffff80070ccf4a4 "\001", fhsize=12, npp=0xfffffe100c75c6e0,
>     lkflags=<value optimized out>)
>     at /usr/src/sys/fs/nfsclient/nfs_clnode.c:114
> #12 0xffffffff809340fd in nfs_statfs (mp=0xfffff8002aa1e990,
>     sbp=0xfffff8002aa1ea48) at /usr/src/sys/fs/nfsclient/nfs_clvfsops.c:288
> #13 0xffffffff80aa7ade in __vfs_statfs (mp=0x0, sbp=0xfffff8002aa1ea48)
>     at /usr/src/sys/kern/vfs_mount.c:1706
> #14 0xffffffff80ab4f5e in kern_getfsstat (td=0xfffff8002abeb000,
>     buf=<value optimized out>, bufsize=<value optimized out>,
>     bufseg=UIO_USERSPACE, flags=<value optimized out>)
>     at /usr/src/sys/kern/vfs_syscalls.c:511
> #15 0xffffffff80e1625a in amd64_syscall (td=0xfffff8002abeb000, traced=0)
>     at subr_syscall.c:133
> #16 0xffffffff80df760b in Xfast_syscall ()
>     at /usr/src/sys/amd64/amd64/exception.S:390
> #17 0x00000008010fc83a in ?? ()
> Previous frame inner to this frame (corrupt stack?)
> (kgdb)

Looks like the one I hit recently as well:
http://lists.freebsd.org/pipermail/freebsd-fs/2014-July/019843.html

This lock should probably be excluded from DEADLKRES. I have not had time
to follow up on it, but it is a trivial thing to add it to the list.

-- 
Regards,
Bryan Drewery
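
For context, the list Bryan refers to is presumably the "blessed" array of
wait-message strings in sys/kern/kern_clock.c: the deadlkres() kernel
thread compares td_wmesg of each thread that has slept past the threshold
against these strings and skips any match. Below is a minimal sketch of
such an exclusion, assuming the wait message of the NFS client vnode lock
is "newnfs"; that string is an assumption here and would need to be
confirmed against the blocked thread's actual td_wmesg before committing
anything.

    /*
     * sys/kern/kern_clock.c, compiled under "options DEADLKRES".
     * Sketch only, not a committed patch: wait-message strings in this
     * NULL-terminated array are skipped by the deadlock resolver when
     * it scans for threads sleeping too long on a lock.
     */
    static const char *blessed[] = {
            "getblk",
            "so_snd_sx",
            "so_rcv_sx",
            "newnfs",       /* assumed wmesg of the NFS client vnode lock */
            NULL
    };

With such an entry, a thread stuck in vget() waiting on an unresponsive
NFS server (as in the backtrace above) would simply keep sleeping rather
than trigger a DEADLKRES panic, which matches the behavior Bryan suggests.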