Fabian Keil <freebsd-listen_at_fabiankeil.de> wrote:

> Xin Li <delphij_at_delphij.net> wrote:
> 
> > On 9/7/14 11:23 PM, Fabian Keil wrote:
> > > Xin Li <delphij_at_delphij.net> wrote:
> > > 
> > >> On 9/7/14 9:02 PM, Fabian Keil wrote:
> > >>> Using a kernel built from FreeBSD 11.0-CURRENT r271182 I got
> > >>> the following panic yesterday:
> > >>>
> > >>> [...]
> > >>> Unread portion of the kernel message buffer:
> > >>> [6880] panic: deadlkres: possible deadlock detected for
> > >>> 0xfffff80015289490, blocked for 1800503 ticks
> > >>
> > >> Any chance to get all backtraces (e.g. thread apply all bt full
> > >> 16)? I think a different thread that held the lock have been
> > >> blocked, probably related to your disconnected vdev.
> > > 
> > > Output of "thread apply all bt full 16" is available at:
> > > http://www.fabiankeil.de/tmp/freebsd/kgdb-output-spa_errlog_lock-deadlock.txt
> > > 
> > > A lot of the backtraces prematurely end with "Cannot access memory
> > > at address", therefore I also added "thread apply all bt" output.
> > > 
> > > Apparently there are at least two additional threads blocking below
> > > spa_get_stats():
[...]
> > Yes, thread 1182 owned the lock and is waiting for the zio be done.
> > Other threads that wanted the lock would have to wait.
> > 
> > I don't have much clue why the system entered this state, however, as
> > the operations should have errored out (the GELI device is gone on
> > 21:44:56 based on your log, which suggests all references were closed)
> > instead of waiting.
> 
> I finally found the time to analyse the problem which seems
> to be that spa_sync() requires at least one writeable vdev to
> complete, but holds the lock(s) required to remove or bring back
> vdevs.
> 
> Letting spa_sync() drop the lock and wait for at least one vdev
> to become writeable again seems to make the problem unreproducible
> for me, but probably merely shrinks the race window and thus is not
> a complete solution.
> 
> For details see:
> https://www.fabiankeil.de/sourcecode/electrobsd/ZFS-Optionally-let-spa_sync-wait-for-writable-vdev.diff

Patch updated to unbreak the userspace build and to note that the
deadlock can still be reproduced with an artificial test case like:

Shell 1:
mdconfig -u 0 -f /dpool/scratch/test-vdev.img
zpool create test /dev/md0
while sleep 1; do mdconfig -d -u 0 -o force && mdconfig -f /dpool/scratch/test-vdev.img && zpool clear test; done

Shell 2:
# Cause writes to the pool from another shell, for example
# by creating datasets.

Log excerpt (from test start to deadlock):

Oct 29 12:34:28 kendra ZFS: vdev state changed, pool_guid=16039353738236808887 vdev_guid=3080051161477470469
[...]
Oct 29 12:46:57 kendra ZFS: vdev state changed, pool_guid=16039353738236808887 vdev_guid=3080051161477470469
Oct 29 12:46:59 kendra ZFS: vdev is removed, pool_guid=16039353738236808887 vdev_guid=3080051161477470469
Oct 29 12:46:59 kendra ZFS: vdev state changed, pool_guid=16039353738236808887 vdev_guid=3080051161477470469
Oct 29 12:47:00 kendra kernel: g_dev_taste: make_dev_p() failed (gp->name=md0, error=17)

With the deadman enabled, this will also cause:

panic: I/O to pool 'test' appears to be hung on vdev guid 3080051161477470469 at '/dev/md0'.
cpuid = 0
KDB: stack backtrace:
db_trace_self_wrapper() at db_trace_self_wrapper+0x2b/frame 0xfffffe01136af870
vpanic() at vpanic+0x182/frame 0xfffffe01136af8f0
panic() at panic+0x43/frame 0xfffffe01136af950
vdev_deadman() at vdev_deadman+0x127/frame 0xfffffe01136af9a0
vdev_deadman() at vdev_deadman+0x40/frame 0xfffffe01136af9f0
spa_deadman() at spa_deadman+0x86/frame 0xfffffe01136afa20
softclock_call_cc() at softclock_call_cc+0x1a3/frame 0xfffffe01136afaf0
softclock() at softclock+0x94/frame 0xfffffe01136afb20
intr_event_execute_handlers() at intr_event_execute_handlers+0x1b6/frame 0xfffffe01136afb60
ithread_loop() at ithread_loop+0xa6/frame 0xfffffe01136afbb0
fork_exit() at fork_exit+0x9c/frame 0xfffffe01136afbf0
fork_trampoline() at fork_trampoline+0xe/frame 0xfffffe01136afbf0

With test's txg_sync_thread being the offender:

(kgdb) tid 101874
[Switching to thread 819 (Thread 101874)]#0  sched_switch (td=0xfffff800513649a0, newtd=<value optimized out>, flags=<value optimized out>) at /usr/src/sys/kern/sched_ule.c:1969
1969                    cpuid = PCPU_GET(cpuid);
(kgdb) where
#0  sched_switch (td=0xfffff800513649a0, newtd=<value optimized out>, flags=<value optimized out>) at /usr/src/sys/kern/sched_ule.c:1969
#1  0xffffffff805a3a18 in mi_switch (flags=260, newtd=0x0) at /usr/src/sys/kern/kern_synch.c:470
#2  0xffffffff805ea15a in sleepq_wait (wchan=0x0, pri=0) at /usr/src/sys/kern/subr_sleepqueue.c:631
#3  0xffffffff80530509 in _cv_wait (cvp=0xfffff8002678ea98, lock=0xfffff8002678ea78) at /usr/src/sys/kern/kern_condvar.c:139
#4  0xffffffff81930bbb in zio_wait (zio=<value optimized out>) at /usr/src/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/zio.c:1535
#5  0xffffffff818e4871 in dsl_pool_sync (dp=0xfffff80047dfd000, txg=76) at /usr/src/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/dsl_pool.c:540
#6  0xffffffff81903653 in spa_sync (spa=0xfffff8009dfe2000, txg=76) at /usr/src/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/spa.c:6734
#7  0xffffffff8190ccfa in txg_sync_thread (arg=0xfffff80047dfd000) at /usr/src/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/txg.c:517
#8  0xffffffff80556edc in fork_exit (callout=0xffffffff8190c970 <txg_sync_thread>, arg=0xfffff80047dfd000, frame=0xfffffe011c27bc00) at /usr/src/sys/kern/kern_fork.c:1011
#9  0xffffffff8085b91e in fork_trampoline () at /usr/src/sys/amd64/amd64/exception.S:609
#10 0x0000000000000000 in ?? ()
(kgdb) f 6
#6  0xffffffff81903653 in spa_sync (spa=0xfffff8009dfe2000, txg=76) at /usr/src/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/spa.c:6734
(kgdb) p spa->spa_name
$3 = 0xfffff8009dfe2000 "test"
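
In case it saves someone a click, the idea behind the patch is roughly
the following (just a sketch, not the actual diff: the helper name is
made up, and the dropping and re-taking of the locks around the wait
is left out):

/*
 * Hypothetical helper: block until at least one top-level vdev of
 * the pool accepts writes again, instead of letting spa_sync()
 * block in zio_wait() on I/O that can never complete.
 */
static void
spa_wait_for_writable_vdev(spa_t *spa)
{
	vdev_t *rvd = spa->spa_root_vdev;
	uint64_t c;

	for (;;) {
		for (c = 0; c < rvd->vdev_children; c++) {
			if (vdev_writeable(rvd->vdev_child[c]))
				return;
		}
		delay(hz);	/* back off for a second and re-check */
	}
}

spa_sync() would call something like this before issuing the writes
it otherwise ends up waiting for in zio_wait() indefinitely.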
Fabian