On Fri, Mar 17, 2006 at 01:26:13PM -0500, Rong-En Fan wrote:
> On 3/17/06, Kostik Belousov <kostikbel_at_gmail.com> wrote:
> > Just out of curiosity:
> >
> > could you, please, test this little patch:
> >
> > Index: sys/nfsclient/nfs_bio.c
>
> I have been playing around with intr/nointr using this patch plus some
> changes in nfs_bio.c and nfs_vnops.c.
> Here are the results (note that the revision # below means the changes
> made in that revision), tested on RELENG_6 as of yesterday, i386, SMP.
> The kernel is built with INVARIANTS on.
>
> * nfs_vnops.c 1.262, nfs_bio.c 1.154
>
> - intr
>
> $ dd if=/dev/zero of=b bs=1m count=50
> 50+0 records in
> 50+0 records out
> 52428800 bytes transferred in 4.410424 secs (11887474 bytes/sec)
> $ dd if=/dev/zero of=b bs=1m count=50
> ^C^C^C^C^C^C^C^C^C^C7+0 records in
> 6+0 records out
> 6291456 bytes transferred in 291.017236 secs (21619 bytes/sec)
> (stuck in nfsaio)
>
> - nointr
>
> $ dd if=/dev/zero of=b bs=1m count=50
> 50+0 records in
> 50+0 records out
> 52428800 bytes transferred in 4.264193 secs (12295128 bytes/sec)
> $ dd if=/dev/zero of=b bs=1m count=50
> ^C12+0 records in
> 11+0 records out
> 11534336 bytes transferred in 0.990210 secs (11648373 bytes/sec)
>
> * nfs_vnops.c 1.262
>
> - intr
>
> $ dd if=/dev/zero of=b bs=1m count=50
> 50+0 records in
> 50+0 records out
> 52428800 bytes transferred in 4.238704 secs (12369064 bytes/sec)
> $ dd if=/dev/zero of=b bs=1m count=50
> ^C^C^C^C^C^C^C^C^C^C^C^C^C^C^C^C^C^C^C^C^C^C^C^C^C^C^C^C^C
> ^C^C^C^C^C^C^C^C^C^C^C^C^C^C^C^C^C^C^C^C^C^C^C^C^Ccc^C^C^C^C15+0 records in
> 14+0 records out
> 14680064 bytes transferred in 677.578696 secs (21665 bytes/sec)
> (stuck in nfsaio)
>
> - nointr
>
> $ dd if=/dev/zero of=b bs=1m count=50
> 50+0 records in
> 50+0 records out
> 52428800 bytes transferred in 4.255155 secs (12321244 bytes/sec)
> $ dd if=/dev/zero of=b bs=1m count=50
> ^C11+0 records in
> 10+0 records out
> 10485760 bytes transferred in 0.899381 secs (11658864 bytes/sec)
>
> * nfs_vnops.c 1.262, nfs_bio.c (remove slpflag = 0)
>
> - intr
>
> $ dd if=/dev/zero of=b bs=1m count=50
> 50+0 records in
> 50+0 records out
> 52428800 bytes transferred in 4.245185 secs (12350181 bytes/sec)
> $ dd if=/dev/zero of=b bs=1m count=50
> ^C^C^C^C^C^C^C^C^C^C^C^C^C^C^C^C^C^C^C^C^C^C^C^C^C^C^C^C
> (top's state is changing between CPU0, CPU1, RUN, *Giant)
> (11 minutes passed, I rebooted this box)
>
> - nointr
>
> $ dd if=/dev/zero of=b bs=1m count=50
> 50+0 records in
> 50+0 records out
> 52428800 bytes transferred in 4.454680 secs (11769375 bytes/sec)
> $ dd if=/dev/zero of=b bs=1m count=50
> ^C17+0 records in
> 16+0 records out
> 16777216 bytes transferred in 1.458180 secs (11505587 bytes/sec)
>
> * nfs_vnops.c 1.262, nfs_bio.c 1.154 (remove slpflag = 0)
>
> - intr
>
> $ dd if=/dev/zero of=b bs=1m count=50
> 50+0 records in
> 50+0 records out
> 52428800 bytes transferred in 4.386083 secs (11953445 bytes/sec)
> $ dd if=/dev/zero of=b bs=1m count=50
> ^C^C^C^C^C^C^C^C^C^C^C^C^C^C^C^C^C^C^C^C^C^C^C^C
> (top's state is changing between CPU0, CPU1, RUN, *Giant)
> (44 minutes passed, I rebooted)
>
> - nointr
>
> $ dd if=/dev/zero of=b bs=1m count=50
> 50+0 records in
> 50+0 records out
> 52428800 bytes transferred in 4.370959 secs (11994805 bytes/sec)
> $ dd if=/dev/zero of=b bs=1m count=50
> ^C25+0 records in
> 24+0 records out
> 25165824 bytes transferred in 2.122789 secs (11855076 bytes/sec)
>
> It looks like that, with the changes made to nfs_vnops.c last Nov by ps_at_,
> ^C can really stop the process, but it takes too much time :(
>
> Hope this helps,
> Rong-En Fan
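
(For reference, the behaviour the intr option is supposed to provide in the
tests above: a write(2) blocked on an unreachable server should be woken up
by a signal and fail with EINTR rather than sleep uninterruptibly.  Below is
a minimal userland sketch of that expectation; the program and the
/mnt/nfs/b path are purely illustrative and not part of the original report.)

/*
 * Illustrative only: write 50 1m blocks, as in the dd runs above, and
 * report whether SIGINT interrupts a blocked write with EINTR.  Point
 * the (made-up) path at a file on a mount done with -o intr.
 */
#include <err.h>
#include <errno.h>
#include <fcntl.h>
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static void
on_sigint(int sig)
{
	(void)sig;		/* handler body intentionally empty */
}

int
main(void)
{
	static char buf[1024 * 1024];	/* one 1m block */
	struct sigaction sa;
	ssize_t n;
	int fd, i;

	memset(&sa, 0, sizeof(sa));
	sa.sa_handler = on_sigint;	/* no SA_RESTART: let syscalls fail with EINTR */
	if (sigaction(SIGINT, &sa, NULL) == -1)
		err(1, "sigaction");

	fd = open("/mnt/nfs/b", O_WRONLY | O_CREAT | O_TRUNC, 0644);
	if (fd == -1)
		err(1, "open");

	for (i = 0; i < 50; i++) {
		n = write(fd, buf, sizeof(buf));
		if (n == -1 && errno == EINTR) {
			fprintf(stderr, "interrupted after %d full blocks\n", i);
			return (1);
		}
		if (n == -1)
			err(1, "write");
		/* a short write after a signal is also possible; ignored here */
	}
	close(fd);
	return (0);
}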
Sure, removing the setting of slpflag to 0 was wrong.

It seems I found the reason(s) for the process getting stuck in the nfsaio
state:

1. It is wrong to call nfs_asyncio() with a NULL td.  That function assumes
it is running in the context of the real process and checks for signals
delivered to it; passing a NULL td effectively ignores any signals.  Patch:

Index: sys/nfsclient/nfs_vnops.c
===================================================================
RCS file: /usr/local/arch/ncvs/src/sys/nfsclient/nfs_vnops.c,v
retrieving revision 1.264
diff -u -r1.264 nfs_vnops.c
--- sys/nfsclient/nfs_vnops.c	8 Mar 2006 01:43:01 -0000	1.264
+++ sys/nfsclient/nfs_vnops.c	21 Mar 2006 15:23:29 -0000
@@ -2588,7 +2588,7 @@
 	 * otherwise just do it ourselves.
 	 */
 	if ((bp->b_flags & B_ASYNC) == 0 ||
-	    nfs_asyncio(VFSTONFS(ap->a_vp->v_mount), bp, NOCRED, td))
+	    nfs_asyncio(VFSTONFS(ap->a_vp->v_mount), bp, NOCRED, curthread))
 		(void)nfs_doio(ap->a_vp, bp, cr, td);
 	return (0);
 }

2. Signals delivered to the process may actually be put onto the thread's
signal list, while nfs_sigintr() checked only the global process signal
list.  Patch:

Index: sys/nfsclient/nfs_socket.c
===================================================================
RCS file: /usr/local/arch/ncvs/src/sys/nfsclient/nfs_socket.c,v
retrieving revision 1.135
diff -u -r1.135 nfs_socket.c
--- sys/nfsclient/nfs_socket.c	20 Jan 2006 15:07:18 -0000	1.135
+++ sys/nfsclient/nfs_socket.c	21 Mar 2006 15:23:29 -0000
@@ -1513,11 +1513,13 @@
 	p = td->td_proc;
 	PROC_LOCK(p);
 	tmpset = p->p_siglist;
+	SIGSETOR(tmpset, td->td_siglist);
 	SIGSETNAND(tmpset, td->td_sigmask);
 	mtx_lock(&p->p_sigacts->ps_mtx);
 	SIGSETNAND(tmpset, p->p_sigacts->ps_sigignore);
 	mtx_unlock(&p->p_sigacts->ps_mtx);
-	if (SIGNOTEMPTY(p->p_siglist) && nfs_sig_pending(tmpset)) {
+	if ((SIGNOTEMPTY(p->p_siglist) || SIGNOTEMPTY(td->td_siglist))
+	    && nfs_sig_pending(tmpset)) {
 		PROC_UNLOCK(p);
 		return (EINTR);
 	}

But this just reveals (at least) two other problems, for which I do not have
patches.  I will continue the numbering.  [The situations below were obtained
by running dd as described in the quoted text and, after some time, making
the nfs server unreachable with firewall rules.]

3. Sometimes the system goes into a livelock, with processes hung in the
flswai state (i.e., sleeping in the bwillwrite() function).

4. Sometimes the dd process cannot exit.  It gets stuck in the exit() code,
trying to close the file descriptor for the nfs-located file.  That nfs vnode
is exclusively locked by bufdaemon, and the close() path tries to take an
exclusive lock on it, so the two deadlock.

So it seems that the combination of intr mounts and large writes currently
causes nothing but problems. :(
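
(To see why fix 2 matters, here is a simplified, self-contained model of the
check.  It is not kernel code: a plain unsigned int stands in for sigset_t,
the SIGSETOR/SIGSETNAND macros are written out as bit operations,
nfs_sig_pending() is reduced to "any bit set", and all struct and function
names below are made up for the sketch.  With per-thread signal queues, a
check that consults only p_siglist never notices a SIGINT sitting on
td_siglist, which is exactly the ^C that was being ignored.)

/*
 * Simplified model of the nfs_sigintr() change; illustrative only.
 */
#include <stdio.h>

#define SIGBIT(n)	(1u << (n))	/* signal 2 == SIGINT */

struct model_thread {
	unsigned int td_siglist;	/* signals queued to this thread */
	unsigned int td_sigmask;	/* signals this thread has blocked */
};

struct model_proc {
	unsigned int p_siglist;		/* signals queued to the whole process */
	unsigned int ps_sigignore;	/* signals the process ignores */
};

/* Old behaviour: only the process-wide list is consulted. */
static int
sigintr_old(struct model_proc *p, struct model_thread *td)
{
	unsigned int tmpset;

	tmpset = p->p_siglist;
	tmpset &= ~td->td_sigmask;	/* SIGSETNAND(tmpset, td_sigmask) */
	tmpset &= ~p->ps_sigignore;	/* SIGSETNAND(tmpset, ps_sigignore) */
	return (p->p_siglist != 0 && tmpset != 0);
}

/* New behaviour: per-thread signals are merged in as well. */
static int
sigintr_new(struct model_proc *p, struct model_thread *td)
{
	unsigned int tmpset;

	tmpset = p->p_siglist;
	tmpset |= td->td_siglist;	/* SIGSETOR(tmpset, td_siglist) */
	tmpset &= ~td->td_sigmask;
	tmpset &= ~p->ps_sigignore;
	return ((p->p_siglist != 0 || td->td_siglist != 0) && tmpset != 0);
}

int
main(void)
{
	/* SIGINT delivered to the thread, not to the process list. */
	struct model_proc p = { .p_siglist = 0, .ps_sigignore = 0 };
	struct model_thread td = { .td_siglist = SIGBIT(2), .td_sigmask = 0 };

	printf("old check sees pending signal: %d\n", sigintr_old(&p, &td));
	printf("new check sees pending signal: %d\n", sigintr_new(&p, &td));
	return (0);
}

(Running this prints 0 for the old check and 1 for the new one when the
signal is queued only on the thread.)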