Re: Why does sleep(1) end up blocked in bwillwrite()?

From: Konstantin Belousov <kostikbel@gmail.com>
Date: Sun, 23 Dec 2012 21:37:26 +0200
On Sun, Dec 23, 2012 at 11:55:15AM -0700, Ian Lepore wrote:
> Background:  I'm trying to get nandfs working on a low-end small-memory
> embedded system.  I'm debugging performance problems that manifest as
> the system (or large portions of it) becoming unresponsive for many
> seconds at a time.  It appears that sometimes the nandfs background
> garbage collector does things that lead to dirtying lots of buffers way
> faster than they can be written.  When that happens it seems to take too
> long (many seconds) for the problem to clear.  That's the basic
> situation I'm investigating, but NOT what this mail is about, that's
> just the background.
> 
> When this situation happens, some of the threads in my application keep
> running fine.  Others get blocked unexpectedly even though they do no
> disk IO at all, they're working with sockets and serial (uart) devices.
> I discovered by accident that I can see a form of the problem happening
> just using sleep(1) and hitting ^T while the buffer starvation is in
> progress...
> 
>   guava# sleep 999999
> [ hit ^T]
>   load: 1.03  cmd: sleep 472 [nanslp] 2.03r 0.01u 0.02s 0% 1372k
>   sleep: about 999997 second(s) left out of the original 999999
> [ hit ^T]
>   load: 1.27  cmd: sleep 472 [nanslp] 9.32r 0.01u 0.02s 0% 1376k
>   sleep: about 999989 second(s) left out of the original 999999
> [ hit ^T]
>   load: 1.49  cmd: sleep 472 [nanslp] 11.53r 0.01u 0.02s 0% 1376k
> [ note no output from sleep(1) here, repeated ^T now gives...]
>   load: 1.49  cmd: sleep 472 [flswai] 12.01r 0.01u 0.03s 0% 1376k
>   load: 1.49  cmd: sleep 472 [flswai] 12.27r 0.01u 0.03s 0% 1376k
>   load: 1.49  cmd: sleep 472 [flswai] 12.76r 0.01u 0.03s 0% 1376k
>   load: 1.49  cmd: sleep 472 [flswai] 13.06r 0.01u 0.03s 0% 1376k
>   load: 1.49  cmd: sleep 472 [flswai] 13.26r 0.01u 0.03s 0% 1376k
>   load: 1.61  cmd: sleep 472 [flswai] 20.03r 0.02u 0.07s 0% 1376k
>   load: 1.64  cmd: sleep 472 [flswai] 20.49r 0.02u 0.07s 0% 1376k
>   load: 1.64  cmd: sleep 472 [flswai] 20.68r 0.02u 0.08s 0% 1376k
>   sleep: about 999987 second(s) left out of the original 999999
> 
> So here sleep(1) was blocked in bwillwrite() for about 9 seconds on a
> write to stderr (which is an ssh xterm connection).
> 
> The call to bwillwrite() is in kern/sys_generic.c in dofilewrite():
> 
> 	if (fp->f_type == DTYPE_VNODE)
> 		bwillwrite();
> 
> I just noticed the checkin message that added the DTYPE_VNODE check
> specifically mentions not penalizing devices and pipes and such.  I
> think maybe things have evolved since then (Dec 2000) and this check is
> no longer sufficient.  Maybe it needs to be something more like
> 
> 	if (fp->f_type == DTYPE_VNODE && fp->f_vnode->v_type == VREG)
> 
> but I have a gut feeling it needs to be more complex than that (can
> f_vnode be NULL, what sort of locking is required to peek into f_vnode
> at this point, etc), so I can't really propose a patch for this.  In
> fact, I can't even say for sure it's a bug, but it sure feels like one
> to the application-developer part of me.

The patch below would do what you want. But my opinion is that this is
more a bug in the filesystem than in the VFS. Anyway, try this and
report how it works for you.

diff --git a/sys/fs/devfs/devfs_vnops.c b/sys/fs/devfs/devfs_vnops.c
index 97a1bcf..9851229 100644
--- a/sys/fs/devfs/devfs_vnops.c
+++ b/sys/fs/devfs/devfs_vnops.c
@@ -1049,6 +1049,7 @@ devfs_open(struct vop_open_args *ap)
 	int error, ref, vlocked;
 	struct cdevsw *dsw;
 	struct file *fpop;
+	struct mtx *mtxp;
 
 	if (vp->v_type == VBLK)
 		return (ENXIO);
@@ -1099,6 +1100,16 @@ devfs_open(struct vop_open_args *ap)
 #endif
 	if (fp->f_ops == &badfileops)
 		finit(fp, fp->f_flag, DTYPE_VNODE, dev, &devfs_ops_f);
+	mtxp = mtx_pool_find(mtxpool_sleep, fp);
+
+	/*
+	 * Hint to the dofilewrite() to not force the buffer draining
+	 * on the writer to the file.  Most likely, the write would
+	 * not need normal buffers.
+	 */
+	mtx_lock(mtxp);
+	fp->f_vnread_flags |= FDEVFS_VNODE;
+	mtx_unlock(mtxp);
 	return (error);
 }
 
diff --git a/sys/kern/sys_generic.c b/sys/kern/sys_generic.c
index f47cb03..b97ff7f 100644
--- a/sys/kern/sys_generic.c
+++ b/sys/kern/sys_generic.c
@@ -536,7 +536,8 @@ dofilewrite(td, fd, fp, auio, offset, flags)
 		ktruio = cloneuio(auio);
 #endif
 	cnt = auio->uio_resid;
-	if (fp->f_type == DTYPE_VNODE)
+	if (fp->f_type == DTYPE_VNODE &&
+	    (fp->f_vnread_flags & FDEVFS_VNODE) == 0)
 		bwillwrite();
 	if ((error = fo_write(fp, auio, td->td_ucred, flags, td))) {
 		if (auio->uio_resid != cnt && (error == ERESTART ||
diff --git a/sys/sys/file.h b/sys/sys/file.h
index dc49895..cf5f1ea 100644
--- a/sys/sys/file.h
+++ b/sys/sys/file.h
@@ -178,7 +178,8 @@ struct file {
 #define	f_advice	f_vnun.fvn_advice
 
 #define	FOFFSET_LOCKED       0x1
-#define	FOFFSET_LOCK_WAITING 0x2		 
+#define	FOFFSET_LOCK_WAITING 0x2
+#define	FDEVFS_VNODE	     0x4
 
 #endif /* _KERNEL || _WANT_FILE */
 

Received on Sun Dec 23 2012 - 18:37:43 UTC