On Sat, Jan 30, 2010 at 12:58:26AM +0200, Alexander Motin wrote:
> Hi.
>
> Experimenting with SATA hot-plug I've found a quite repeatable deadlock
> case. The problem is observed when several SATA devices, opened via devfs,
> disappear at exactly the same time. In my case, at the time of unplugging
> a SATA Port Multiplier with several disks behind it. All I have to do is
> run several `dd if=/dev/adaX of=/dev/null bs=1m &` commands and unplug the
> multiplier. That causes the expected I/O errors and device destruction.
> But with high probability several dd processes get stuck in the kernel.
[...]

I observed the same thing yesterday while stress-testing HAST:

 3659  2504  3659     0  DE+     GEOM top  0x8079a348  dd
 3658  2102  2102     0  DE+     GEOM top  0x8079a348  hastd
    2     0     0     0  DL      devdrn    0x85b1bc68  [g_event]

Both dd(1) and hastd(8) wait for the GEOM topology lock in the exit path,
which is already held by the g_event thread.

Interesting backtraces:

db> bt 2
[...]
_sleep(85b1bc68,8079aab8,4c,80711ab3,64,...) at _sleep+0x339
destroy_devl(5,0,80711c53,85b1bcb0,804945cd,...) at destroy_devl+0x20f
destroy_dev(86a10a00,8070ea93,86a09800,860888e0,0,...) at destroy_dev+0x2f
g_dev_orphan(86a09800,8070f424,871038d8,90,6,...) at g_dev_orphan+0x6d
g_run_events(8079a378,0,4c,8070c221,64,...) at g_run_events+0x1c0
g_event_procbody(0,85b1bd38,80713228,343,85d0b7f8,...) at g_event_procbody+0x8a
[...]

db> bt 3658
[...]
sleepq_wait(8079a348,0,8070f822,3,0,...) at sleepq_wait+0x63
_sx_xlock_hard(8079a348,86974240,0,8070ea66,c8,...) at _sx_xlock_hard+0x496
_sx_xlock(8079a348,0,8070ea66,c8,2000,...) at _sx_xlock+0xc0
g_dev_close(85f8ee00,4003,2000,86974240,86974240,...) at g_dev_close+0xbd
devfs_close(dc49eaac,80745707,80000,80000,868be984,...) at devfs_close+0x2b2
VOP_CLOSE_APV(80753ac0,dc49eaac,80726500,128,2,...) at VOP_CLOSE_APV+0xc5
vn_close(868be984,4003,85fd5500,86974240,0,...) at vn_close+0x190
vn_closefile(86a20968,86974240,86a20968,0,dc49eb5c,...) at vn_closefile+0xe4
devfs_close_f(86a20968,86974240,0,0,86a20968,...) at devfs_close_f+0x2b
_fdrop(86a20968,86974240,14,80719d1a,0,dc49eb98,1,86975000,8635c22c,8635c22c,721,8071264b,dc49ebb8,804f87d0,8635c22c,8,8071264b,721) at _fdrop+0x43
closef(86a20968,86974240,721,71e,869742e4,...) at closef+0x290
fdfree(86974240,0,80712fdd,107,864c4330,...) at fdfree+0x3ea
exit1(86974240,0,dc49ed2c,806d830a,86974240,...) at exit1+0x513
sys_exit(86974240,dc49ecf8,86974240,dc49ed2c,202,...) at sys_exit+0x1d
[...]

db> bt 3659
[...]
sleepq_wait(8079a348,0,8070f822,3,0,...) at sleepq_wait+0x63
_sx_xlock_hard(8079a348,863e06c0,0,8070ea66,c8,...) at _sx_xlock_hard+0x496
_sx_xlock(8079a348,0,8070ea66,c8,2000,...) at _sx_xlock+0xc0
g_dev_close(86a10a00,3,2000,863e06c0,863e06c0,...) at g_dev_close+0xbd
devfs_close(dc4f6aac,80745707,80000,80000,86aa6c3c,...) at devfs_close+0x2b2
VOP_CLOSE_APV(80753ac0,dc4f6aac,80726500,128,2,...) at VOP_CLOSE_APV+0xc5
vn_close(86aa6c3c,3,870d4080,863e06c0,80cbac08,...) at vn_close+0x190
vn_closefile(871028f8,863e06c0,871028f8,0,dc4f6b5c,...) at vn_closefile+0xe4
devfs_close_f(871028f8,863e06c0,0,0,871028f8,...) at devfs_close_f+0x2b
_fdrop(871028f8,863e06c0,8071809c,40e,0,805354ab,8071809c,8071df19,8635d42c,8635d42c,721,8071264b,dc4f6bb8,804f87d0,8635d42c,8,8071264b,721) at _fdrop+0x43
closef(871028f8,863e06c0,721,71e,863e0764,...) at closef+0x290
fdfree(863e06c0,0,80712fdd,107,86153088,...) at fdfree+0x3ea
exit1(863e06c0,100,dc4f6d2c,806d830a,863e06c0,...) at exit1+0x513
sys_exit(863e06c0,dc4f6cf8,863e06c0,dc4f6d2c,202,...) at sys_exit+0x1d
[...]

db> show lock 0x8079a348
 class: sx
 name: GEOM topology
 state: XLOCK: 0x85d0d000 (tid 100008, pid 2, "g_event")
 waiters: exclusive
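So pid 2 (g_event) sleeps in destroy_devl() with the topology lock held,
waiting for the outstanding opens of the dying devices to drain, while the
exiting processes need that same topology lock in g_dev_close() before they
can finish closing. For illustration only, here is a minimal userland sketch
of that hold-and-wait cycle; it uses POSIX threads instead of the kernel's
sx(9)/condvar(9) primitives, and every name in it (topology, opencount,
event_thread, closer_thread) is made up for the example, not a real kernel
symbol:

/*
 * Hypothetical sketch of the cycle seen above (not FreeBSD kernel code):
 *  - the "event" thread takes the topology lock and then waits for
 *    opencount to reach zero (roughly what g_event/destroy_devl() does
 *    while the topology lock is held);
 *  - the "closer" threads must take the topology lock before they can
 *    drop their open reference (roughly what g_dev_close() does in the
 *    exit path).
 * Nobody can make progress: classic hold-and-wait.
 */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t topology = PTHREAD_MUTEX_INITIALIZER; /* "GEOM topology" */
static pthread_mutex_t cntmtx = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t cntcv = PTHREAD_COND_INITIALIZER;
static int opencount = 2;	/* two consumers: "dd" and "hastd" */

static void *
event_thread(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&topology);		/* g_event: xlock topology */
	pthread_mutex_lock(&cntmtx);
	while (opencount > 0)			/* destroy_devl(): wait for closes */
		pthread_cond_wait(&cntcv, &cntmtx);
	pthread_mutex_unlock(&cntmtx);
	pthread_mutex_unlock(&topology);
	return (NULL);
}

static void *
closer_thread(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&topology);		/* g_dev_close(): needs topology... */
	pthread_mutex_lock(&cntmtx);		/* ...so we never get this far */
	opencount--;
	pthread_cond_signal(&cntcv);
	pthread_mutex_unlock(&cntmtx);
	pthread_mutex_unlock(&topology);
	return (NULL);
}

int
main(void)
{
	pthread_t ev, cl[2];

	pthread_create(&ev, NULL, event_thread, NULL);
	sleep(1);				/* let "g_event" grab the lock first */
	pthread_create(&cl[0], NULL, closer_thread, NULL);
	pthread_create(&cl[1], NULL, closer_thread, NULL);
	sleep(2);
	/* Racy read, fine for a demo: both closes are still pending. */
	printf("still %d opens pending; everyone is stuck\n", opencount);
	return (0);
}

After two seconds the main thread reports that both "closers" are still
blocked on the "topology" mutex while the "event" thread sleeps on the open
count, which is exactly the shape of the backtraces above.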
--
Pawel Jakub Dawidek                       http://www.wheel.pl
pjd@FreeBSD.org                           http://www.FreeBSD.org
FreeBSD committer                         Am I Evil? Yes, I Am!