Re: Panic @r207433: "System call fork returning with the following locks held"

From: Andrew Reilly <areilly@bigpond.net.au>
Date: Sat, 1 May 2010 09:12:41 +1000
Hi all,

I'm not sure if it's related (I get my src via csup, so I don't
have svn revision numbers), but I upgraded about 16 hours ago
and again a few hours after that, and my two-core AMD64 system
has been (seemingly) quite unstable.  I've had a few boot cycles
that failed and dumped me out into kdb; rebooting through
single-user mode (to check the file systems) seems to get me
"up", but my logs are completely full of:

Calling uiomove() with the following non-sleepable locks held:
exclusive sleep mutex vm page queue mutex (vm page queue mutex) r = 0 (0xffffffff80e60a00) locked @ /nb/src/sys/vm/vm_pageout.c:452
exclusive sleep mutex page lock (page lock) r = 0 (0xffffffff80e59e00) locked @ /nb/src/sys/vm/vm_pageout.c:451
KDB: stack backtrace:
db_trace_self_wrapper() at db_trace_self_wrapper+0x2a
_witness_debugger() at _witness_debugger+0x2e
witness_warn() at witness_warn+0x2c2
uiomove() at uiomove+0x52
ffs_write() at ffs_write+0x32d
VOP_WRITE_APV() at VOP_WRITE_APV+0x103
vnode_pager_generic_putpages() at vnode_pager_generic_putpages+0x1c5
vnode_pager_putpages() at vnode_pager_putpages+0x97
vm_pageout_flush() at vm_pageout_flush+0x1ad
vm_object_page_collect_flush() at vm_object_page_collect_flush+0x470
vm_object_page_clean() at vm_object_page_clean+0x408
vfs_msync() at vfs_msync+0xef
sync_fsync() at sync_fsync+0x12a
sync_vnode() at sync_vnode+0x157
sched_sync() at sched_sync+0x1d1
fork_exit() at fork_exit+0x12a
fork_trampoline() at fork_trampoline+0xe
--- trap 0, rip = 0, rsp = 0xffffff803ebbad30, rbp = 0 ---
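
For anyone who hasn't decoded one of these before: WITNESS prints
this warning when something that may sleep gets called while a
non-sleepable lock is held.  The shape of the problem is roughly
this (just a sketch with made-up names, not the actual pageout
code):

#include <sys/param.h>
#include <sys/systm.h>
#include <sys/lock.h>
#include <sys/mutex.h>

/* Hypothetical mutex, standing in for the vm page queue mutex. */
static struct mtx example_mtx;
MTX_SYSINIT(example_mtx_init, &example_mtx, "example mtx", MTX_DEF);

static void
example_bad_pattern(void)
{
        mtx_lock(&example_mtx);         /* non-sleepable context begins */
        /*
         * Any call in here that is allowed to sleep trips the
         * check that uiomove() performs on entry:
         *
         *      WITNESS_WARN(WARN_GIANTOK | WARN_SLEEPOK, NULL,
         *          "Calling uiomove()");
         *
         * and that is what prints the backtrace above.
         */
        mtx_unlock(&example_mtx);
}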

The logs also show this slightly different version:

uma_zalloc_arg: zone "g_bio" with the following non-sleepable locks held:
exclusive sleep mutex vm page queue mutex (vm page queue mutex) r = 0 (0xffffffff80e60a00) locked @ /nb/src/sys/kern/vfs_bio.c:3571
exclusive sleep mutex page lock (page lock) r = 0 (0xffffffff80e5fb80) locked @ /nb/src/sys/vm/vm_pageout.c:451
KDB: stack backtrace:
db_trace_self_wrapper() at db_trace_self_wrapper+0x2a
_witness_debugger() at _witness_debugger+0x2e
witness_warn() at witness_warn+0x2c2
uma_zalloc_arg() at uma_zalloc_arg+0x335
g_vfs_strategy() at g_vfs_strategy+0x28
ufs_strategy() at ufs_strategy+0x45
bufstrategy() at bufstrategy+0x43
bufwrite() at bufwrite+0x108
cluster_wbuild() at cluster_wbuild+0x1cd
cluster_write() at cluster_write+0x2f5
ffs_write() at ffs_write+0x66b
VOP_WRITE_APV() at VOP_WRITE_APV+0x103
vnode_pager_generic_putpages() at vnode_pager_generic_putpages+0x1c5
vnode_pager_putpages() at vnode_pager_putpages+0x97
vm_pageout_flush() at vm_pageout_flush+0x1ad
vm_object_page_collect_flush() at vm_object_page_collect_flush+0x470
vm_object_page_clean() at vm_object_page_clean+0x19d
vfs_msync() at vfs_msync+0xef
sync_fsync() at sync_fsync+0x12a
sync_vnode() at sync_vnode+0x157
sched_sync() at sched_sync+0x1d1
fork_exit() at fork_exit+0x12a
fork_trampoline() at fork_trampoline+0xe
--- trap 0, rip = 0, rsp = 0xffffff803ebbad30, rbp = 0 ---
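
The second trace is the same story one layer down: by the time
the pageout path reaches g_vfs_strategy() it needs a bio from the
"g_bio" UMA zone, and as far as I know uma_zalloc_arg() only makes
that witness check for allocations that are allowed to sleep, i.e.
M_WAITOK ones.  Again just a sketch, with a hypothetical zone
standing in for the real "g_bio" zone:

#include <sys/param.h>
#include <sys/malloc.h>
#include <vm/uma.h>

/* Hypothetical zone, in place of the real "g_bio" zone. */
static uma_zone_t example_zone;

static void *
example_alloc(int can_sleep)
{
        /*
         * Under a non-sleepable lock only M_NOWAIT is safe: it may
         * return NULL, but it never sleeps.  M_WAITOK permits
         * uma_zalloc() to sleep waiting for memory, and that
         * possibility alone is enough to trigger the warning above.
         */
        return (uma_zalloc(example_zone, can_sleep ? M_WAITOK : M_NOWAIT));
}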

I'll be doing another csup/rebuild ASAP, in the hope of picking
up the fixes mentioned here.  Just thought I'd add another "me
too" and a bit of data.

Cheers,

-- 
Andrew