Re: r244036 kernel hangs under load.

From: Rick Macklem <rmacklem@uoguelph.ca>
Date: Tue, 11 Dec 2012 20:58:47 -0500 (EST)
Konstantin Belousov wrote:
> On Tue, Dec 11, 2012 at 05:30:24PM -0500, Rick Macklem wrote:
> > Konstantin Belousov wrote:
> > > On Tue, Dec 11, 2012 at 04:55:52PM -0500, Rick Macklem wrote:
> > > > Konstantin Belousov wrote:
> > > > > On Mon, Dec 10, 2012 at 07:11:59PM -0500, Rick Macklem wrote:
> > > > > > Konstantin Belousov wrote:
> > > > > > > On Mon, Dec 10, 2012 at 01:38:21PM -0500, Rick Macklem
> > > > > > > wrote:
> > > > > > > > Adrian Chadd wrote:
> > > > > > > > > .. what was the previous kernel version?
> > > > > > > > >
> > > > > > > > Hopefully Tim has it narrowed down more, but I don't see
> > > > > > > > the hangs on a Sept. 7 kernel from head and I do see
> > > > > > > > them
> > > > > > > > on a Dec. 3 kernel from head. (Don't know the exact
> > > > > > > > rNNNNNN.)
> > > > > > > >
> > > > > > > > It seems to predate my commit (r244008), which was my
> > > > > > > > first
> > > > > > > > concern.
> > > > > > > >
> > > > > > > > I use old single core i386 hardware and can fairly
> > > > > > > > reliably
> > > > > > > > reproduce it by doing a kernel build and a "svn
> > > > > > > > checkout"
> > > > > > > > concurrently. No NFS activity. These are running on a
> > > > > > > > local
> > > > > > > > disk (UFS/FFS). (The kernel I reproduce it on is built
> > > > > > > > via
> > > > > > > > GENERIC for i386. If you want me to start a "binary
> > > > > > > > search"
> > > > > > > > for which rNNNNNN, I can do that, but it will take a
> > > > > > > > while.:-)
> > > > > > > >
> > > > > > > > I can get out into DDB, but I'll admit I don't know
> > > > > > > > enough
> > > > > > > > about it to know where to look;-)
> > > > > > > > Here's some lines from "db> ps", in case they give
> > > > > > > > someone
> > > > > > > > useful information. (I can leave this box sitting in DB
> > > > > > > > for
> > > > > > > > the rest of to-day, in case someone can suggest what I
> > > > > > > > should
> > > > > > > > look for on it.)
> > > > > > > >
> > > > > > > > Just snippets...
> > > > > > > >    Ss pause adjkerntz
> > > > > > > >    DL sdflush [softdepflush]
> > > > > > > >    RL [syncer]
> > > > > > > >    DL vlruwt [vnlru]
> > > > > > > >    DL psleep [bufdaemon]
> > > > > > > >    RL [pagezero]
> > > > > > > >    DL psleep [vmdaemon]
> > > > > > > >    DL psleep [pagedaemon]
> > > > > > > >    DL ccb_scan [xpt_thrd]
> > > > > > > >    DL waiting_ [sctp_iterator]
> > > > > > > >    DL ctl_work [ctl_thrd]
> > > > > > > >    DL cooling [acpi_cooling0]
> > > > > > > >    DL tzpoll [acpi_thermal]
> > > > > > > >    DL (threaded) [usb]
> > > > > > > >    ...
> > > > > > > >    DL - [yarrow]
> > > > > > > >    DL (threaded) [geom]
> > > > > > > >    D - [g_down]
> > > > > > > >    D - [g_up]
> > > > > > > >    D - [g_event]
> > > > > > > >    RL (threaded) [intr]
> > > > > > > >    I [irq15: ata1]
> > > > > > > >    ...
> > > > > > > >    Run CPU0 [swi6: Giant taskq]
> > > > > > > > --> does this one indicate the CPU is actually running
> > > > > > > > this?
> > > > > > > >    (after a db> cont, wait a while <ctrl><alt><esc> db>
> > > > > > > >    ps
> > > > > > > >     it is still the same)
> > > > > > > >    I [swi4: clock]
> > > > > > > >    I [swi1: netisr 0]
> > > > > > > >    I [swi3: vm]
> > > > > > > >    RL [idle: cpu0]
> > > > > > > >    SLs wait [init]
> > > > > > > >    DL audit_wo [audit]
> > > > > > > >    DLs (threaded) [kernel]
> > > > > > > >    D - [deadlkres]
> > > > > > > >    ...
> > > > > > > >    D sched [swapper]
> > > > > > > >
> > > > > > > > I have no idea if this "ps" output helps, unless it
> > > > > > > > indicates
> > > > > > > > that it is looping on the Giant taskq?
> > > > > > > Might be. You could do 'bt <pid>' for the process to see
> > > > > > > where it loops.
> > > > > > > Another good set of hints is at
> > > > > > > http://www.freebsd.org/doc/en_US.ISO8859-1/books/developers-handbook/kerneldebug-deadlocks.html
> > > > > >
> > > > > > Kostik, you must be clairvoyant;-)
> > > > > >
> > > > > > When I did "show alllocks", I found that the syncer process
> > > > > > held
> > > > > > - exclusive sleep mutex mount mtx locked @
> > > > > > kern/vfs_subr.c:4720
> > > > > > - exclusive lockmgr syncer locked @ kern/vfs_subr.c:1780
> > > > > > The trace for this process goes like:
> > > > > >  spinlock_exit
> > > > > >  mtx_unlock_spin_flags
> > > > > >  kern_yield
> > > > > >  _mnt_vnode_next_active
> > > > > >  vnode_next_active
> > > > > >  vfs_msync()
> > > > > >
> > > > > > So, it seems like your r244095 commit might have fixed this?
> > > > > > (I'm not good at this stuff, but from your description, it looks
> > > > > > like it did the kern_yield() with the mutex held and "maybe"
> > > > > > got into trouble trying to acquire Giant?)
> > > > > >
> > > > > > Anyhow, I'm going to test a kernel with r244095 in it and see
> > > > > > if I can still reproduce the hang.
> > > > > > (There wasn't much else in the "show alllocks", except a
> > > > > > process that held the exclusive vnode interlock mutex plus
> > > > > > a ufs vnode lock, but it's just doing a witness_unlock.)
> > > > > There must be a thread blocked on the mount interlock for the
> > > > > loop in mnt_vnode_next_active to cause a livelock.
> > > > >
> > > > Yes. I am getting hangs with the -current kernel and they seem
> > > > easier for me to reproduce.
> > > >
> > > > For the one I just did, the "syncer" seems to be blocked at
> > > >  VI_TRYLOCK() in _mnt_vnode_next_active().
> > > trylock cannot block.
> > >
> > > > The vnode interlock mutex is exclusively locked by a "sh"
> > > > process (11627). Now, here is where it gets weird...
> > > > When I do a "db> trace 11627" I get the following:
> > > > witness_unlock+0x1f3 (subr_witness.c:1563)
> > > > mtx_unlock_flags+0x9f (kern_mutex.c:250)
> > > > vdropl+0x63 (vfs_subr.c:2405)
> > > > vputx+0x130 (vfs_subr.c:2116)
> > > > vput+0x10 (vfs_subr.c:2319)
> > > > vm_mmap+0x52e (vm_mmap.c:1341)
> > > > sys_mmap
> > > >
> > > > So, it seems this process is stuck while trying to unlock
> > > > the mutex, if that makes any sense...
> > > It's probably not stuck; you just caught it at that moment.
> > >
> > > The issue sounds more like a livelock. Can you obtain _all_ the
> > > information listed in the deadlock debugging page I sent earlier,
> > > and provide it to me?
> > Well, this is a laptop and when it hangs (doesn't do anything,
> > except
> > sometimes echo characters on the console screen) I <ctrl><alt><esc>
> > to get to DB. How can I capture the stuff? (I don't even have a
> > digital
> > camera. Sorry, but I'm not into that sort of thing.)
> >
> > When I do a "db> cont" and then another <ctrl><alt><esc>, what I
> > get looks the same, so I don't think I'm just getting what is
> > happening "at that moment".
> It could be that it happens in rapid succession.
> 
> >
> > I'll start a binary search on kernel revision #s and try to
> > narrow it down to a commit. It will take a while, but...
> It is not useful. I already know that it is a consequence of
> r243599+r243835, but I expected that r244095 would help. Still,
> if you have a single-core machine, then it is possible that it is
> a livelock, or rather, a crawl.
> 
Ok, I'll test r243598 and then r243599 and r243835, just to
see if it really is this.

I'll email when I have done this.

> >
> >  Also, do you use the post-r244095 kernel ?
> >
> > Before and after. The most recent tests were post-r244095.
> > (If anything the more recent kernels hang more easily.)
> >
> >
> > >
> > > Is your machine SMP ?
> >
> > Old, slow single core i386.
> 
> Try this. Please note that this is mostly a debugging facility.
> 
It seemed to help, but didn't stop the hangs completely.
r244125 without the patch would hang somewhere in a kernel
build. r244125 plus this patch ran almost 2 kernel builds
before it got hung.

> diff --git a/sys/kern/vfs_subr.c b/sys/kern/vfs_subr.c
> index 67e078d..0905eec 100644
> --- a/sys/kern/vfs_subr.c
> +++ b/sys/kern/vfs_subr.c
> @@ -4727,7 +4727,7 @@ restart:
> 			continue;
> 		}
> 		if (!VI_TRYLOCK(vp)) {
> -			if (should_yield()) {
> +			if (1 || should_yield()) {
> 				mtx_unlock(&vnode_free_list_mtx);
> 				kern_yield(PRI_UNCHANGED);
> 				mtx_lock(&vnode_free_list_mtx);
> @@ -4778,7 +4778,7 @@ restart:
> 			continue;
> 		}
> 		if (!VI_TRYLOCK(vp)) {
> -			if (should_yield()) {
> +			if (1 || should_yield()) {
> 				mtx_unlock(&vnode_free_list_mtx);
> 				kern_yield(PRI_UNCHANGED);
> 				mtx_lock(&vnode_free_list_mtx);
Received on Wed Dec 12 2012 - 00:58:50 UTC

This archive was generated by hypermail 2.4.0 : Wed May 19 2021 - 11:40:33 UTC