Re: panic: No NCF_TS

From: Kostik Belousov <kostikbel@gmail.com>
Date: Wed, 25 Jan 2012 14:15:28 +0200
On Mon, Jan 23, 2012 at 06:54:49AM -0800, John Baldwin wrote:
> On 1/22/12 7:05 PM, Kostik Belousov wrote:
> > On Mon, Jan 23, 2012 at 05:36:42AM +0400, Yuri Pankov wrote:
> >> Seems to be reproducible here running r230467 as the NFS client and
> >> r230135 as NFS server. NFSv4 not enabled.
> >>
> >> # mount
> >> [...]
> >> sirius:/data/distfiles on /usr/ports/distfiles (nfs)
> >>
> >> # /usr/bin/env /usr/bin/fetch -AFpr -S 4682084 -o /usr/ports/distfiles/sqlite-src-3071000.zip http://www.sqlite.org/sqlite-src-3071000.zip
> >> /usr/ports/distfiles/sqlite-src-3071000.zip   100% of 4572 kB  379 kBps 00m00s
> >> # rm /usr/ports/distfiles/sqlite-src-3071000.zip
> >>
> >> immediately followed by:
> >>
> >> panic: No NCF_TS
> >> cpuid = 2
> >> KDB: enter: panic
> >> [ thread pid 1603 tid 100494 ]
> >> Stopped at	kdb_enter+0x3e:	movq	$0,kdb_why
> >> db> bt
> >> Tracing pid 1603 tid 100494 td 0xfffffe0089585460
> >> kdb_enter() at kdb_enter+0x3e
> >> panic() at panic+0x245
> >> cache_lookup_times() at cache_lookup_times+0x6b5
> >> nfs_lookup() at nfs_lookup+0x190
> >> VOP_LOOKUP_APV() at VOP_LOOKUP_APV+0x8b
> >> lookup() at lookup+0x7e9
> >> namei() at namei+0x74c
> >> kern_statat_vnhook() at kern_statat_vnhook+0x90
> >> sys_lstat() at sys_lstat+0x30
> >> amd64_syscall() at amd64_syscall+0x221
> >> Xfast_syscall() at Xfast_syscall+0xfb
> >> --- syscall (190, FreeBSD ELF64, sys_lstat), rip = 0x80093ff3c, rsp = 0x7fffffffd8d8, rbp = 0x7fffffffd979 ---
> >>
> > 
> > Yes, my bad. I wrote r230441 on the assumption that filesystems
> > are consistent in their use of cache_enter_time(). And my net-booting
> > test box did not catch this, which is at least interesting.
> > 
> > Please try the change below. Actually, it should be enough to only apply
> > the changes for fs/nfsclient (or nfsclient/ if you use old nfs). If this
> > does not help, then please try the whole patch.
> 
> I think we should keep the existing assertion.  If cache_lookup_times()
> is called with a timestamp requested, then returning a name cache entry
> without a timestamp is just going to result in that name cache entry not
> being used.  Panicking in that case is correct.
> 
> With regard to the NFS changes below, all of these are bugs that didn't
> really work right before.  Specifically, adding a positive entry without
> setting n_ctime previously would have just resulted in the name cache
> entry being purged on the next lookup anyway.  Keep in mind that the
> timestamp for a given name cache entry in NFS needs to match an actual
> timestamp returned by the NFS server as post-op attributes in some RPC.
> Using the timestamp from vfs_timestamp() is completely bogus.  Instead,
> the timestamp for a negative entry needs to be the mtime of the parent
> directory.  If we don't have that timestamp handy, then we should just
> not add a namecache entry at all.  Also, the vap->va_ctime used below
> for mknod() and create() is not a timestamp from the server, so it is
> also suspect.  I can look at this in more detail on Wednesday, but my
> best guess is that nearly all (if not all) of these cache_enter() calls
> will simply need to be removed.
> 
> Note that other filesystems like UFS don't bother creating name cache
> entries for things like create or mknod.  It's debatable if the NFS
> client should even be creating any name cache entries outside of lookup
> and when handling a READDIR+ reply.

Ok, next try. I removed the cache_enter calls for the old nfs client,
but it seems that the calls can be kept for the new client.
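The invariant at issue can be modeled in a few lines of userspace C
(simplified stand-in types, not the real sys/kern/vfs_cache.c structures):
an entry created without NCF_TS must never be handed back to a caller
that asked for a timestamp. In the kernel the check fires as a panic; here
it is an assert.

```c
/*
 * Userspace model of the NCF_TS invariant.  The names (NCF_TS,
 * cache_out_ts) follow the kernel, but these structures are
 * simplified stand-ins for illustration only.
 */
#include <assert.h>
#include <stddef.h>
#include <string.h>
#include <time.h>

#define NCF_TS	0x01		/* entry carries a timestamp */

struct namecache {
	int nc_flag;
};

struct namecache_ts {
	struct namecache nc_nc;	/* must be first, mirroring the kernel layout */
	struct timespec nc_time;
	int nc_ticks;
};

/*
 * If the caller asks for a timestamp (tsp/ticksp non-NULL), the entry
 * must have been created with a timestamp; otherwise assert, as the
 * kernel KASSERT does.
 */
static void
cache_out_ts(struct namecache *ncp, struct timespec *tsp, int *ticksp)
{
	assert((ncp->nc_flag & NCF_TS) != 0 ||
	    (tsp == NULL && ticksp == NULL));

	if (tsp != NULL)
		*tsp = ((struct namecache_ts *)ncp)->nc_time;
	if (ticksp != NULL)
		*ticksp = ((struct namecache_ts *)ncp)->nc_ticks;
}
```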


diff --git a/sys/fs/nfsclient/nfs_clvnops.c b/sys/fs/nfsclient/nfs_clvnops.c
index 2747191..709cd8d 100644
--- a/sys/fs/nfsclient/nfs_clvnops.c
+++ b/sys/fs/nfsclient/nfs_clvnops.c
@@ -1431,8 +1431,6 @@ nfs_mknodrpc(struct vnode *dvp, struct vnode **vpp, struct componentname *cnp,
 		}
 	}
 	if (!error) {
-		if ((cnp->cn_flags & MAKEENTRY))
-			cache_enter(dvp, newvp, cnp);
 		*vpp = newvp;
 	} else if (NFS_ISV4(dvp)) {
 		error = nfscl_maperr(cnp->cn_thread, error, vap->va_uid,
@@ -1591,7 +1589,7 @@ again:
 	}
 	if (!error) {
 		if (cnp->cn_flags & MAKEENTRY)
-			cache_enter(dvp, newvp, cnp);
+			cache_enter_time(dvp, newvp, cnp, &vap->va_ctime);
 		*ap->a_vpp = newvp;
 	} else if (NFS_ISV4(dvp)) {
 		error = nfscl_maperr(cnp->cn_thread, error, vap->va_uid,
@@ -1966,8 +1964,9 @@ nfs_link(struct vop_link_args *ap)
 	 * must care about lookup caching hit rate, so...
 	 */
 	if (VFSTONFS(vp->v_mount)->nm_negnametimeo != 0 &&
-	    (cnp->cn_flags & MAKEENTRY))
-		cache_enter(tdvp, vp, cnp);
+	    (cnp->cn_flags & MAKEENTRY) && dattrflag) {
+		cache_enter_time(tdvp, vp, cnp, &dnfsva.na_mtime);
+	}
 	if (error && NFS_ISV4(vp))
 		error = nfscl_maperr(cnp->cn_thread, error, (uid_t)0,
 		    (gid_t)0);
@@ -2023,15 +2022,6 @@ nfs_symlink(struct vop_symlink_args *ap)
 			error = nfscl_maperr(cnp->cn_thread, error,
 			    vap->va_uid, vap->va_gid);
 	} else {
-		/*
-		 * If negative lookup caching is enabled, I might as well
-		 * add an entry for this node. Not necessary for correctness,
-		 * but if negative caching is enabled, then the system
-		 * must care about lookup caching hit rate, so...
-		 */
-		if (VFSTONFS(dvp->v_mount)->nm_negnametimeo != 0 &&
-		    (cnp->cn_flags & MAKEENTRY))
-			cache_enter(dvp, newvp, cnp);
 		*ap->a_vpp = newvp;
 	}
 
@@ -2041,6 +2031,16 @@ nfs_symlink(struct vop_symlink_args *ap)
 	if (dattrflag != 0) {
 		mtx_unlock(&dnp->n_mtx);
 		(void) nfscl_loadattrcache(&dvp, &dnfsva, NULL, NULL, 0, 1);
+		/*
+		 * If negative lookup caching is enabled, I might as well
+		 * add an entry for this node. Not necessary for correctness,
+		 * but if negative caching is enabled, then the system
+		 * must care about lookup caching hit rate, so...
+		 */
+		if (VFSTONFS(dvp->v_mount)->nm_negnametimeo != 0 &&
+		    (cnp->cn_flags & MAKEENTRY)) {
+			cache_enter_time(dvp, newvp, cnp, &dnfsva.na_mtime);
+		}
 	} else {
 		dnp->n_attrstamp = 0;
 		mtx_unlock(&dnp->n_mtx);
@@ -2116,8 +2116,9 @@ nfs_mkdir(struct vop_mkdir_args *ap)
 		 * must care about lookup caching hit rate, so...
 		 */
 		if (VFSTONFS(dvp->v_mount)->nm_negnametimeo != 0 &&
-		    (cnp->cn_flags & MAKEENTRY))
-			cache_enter(dvp, newvp, cnp);
+		    (cnp->cn_flags & MAKEENTRY) && dattrflag) {
+			cache_enter_time(dvp, newvp, cnp, &dnfsva.na_mtime);
+		}
 		*ap->a_vpp = newvp;
 	}
 	return (error);
diff --git a/sys/kern/vfs_cache.c b/sys/kern/vfs_cache.c
index 647dcac..4562ebc 100644
--- a/sys/kern/vfs_cache.c
+++ b/sys/kern/vfs_cache.c
@@ -237,13 +237,9 @@ static void
 cache_out_ts(struct namecache *ncp, struct timespec *tsp, int *ticksp)
 {
 
-	if ((ncp->nc_flag & NCF_TS) == 0) {
-		if (tsp != NULL)
-			bzero(tsp, sizeof(*tsp));
-		if (ticksp != NULL)
-			*ticksp = 0;
-		return;
-	}
+	KASSERT((ncp->nc_flag & NCF_TS) != 0 ||
+	    (tsp == NULL && ticksp == NULL),
+	    ("No NCF_TS"));
 
 	if (tsp != NULL)
 		*tsp = ((struct namecache_ts *)ncp)->nc_time;
@@ -791,8 +787,8 @@ cache_enter_time(dvp, vp, cnp, tsp)
 		    n2->nc_nlen == cnp->cn_namelen &&
 		    !bcmp(nc_get_name(n2), cnp->cn_nameptr, n2->nc_nlen)) {
 			if (tsp != NULL) {
-				if ((n2->nc_flag & NCF_TS) == 0)
-					continue;
+				KASSERT((n2->nc_flag & NCF_TS) != 0,
+				    ("no NCF_TS"));
 				n3 = (struct namecache_ts *)n2;
 				n3->nc_time =
 				    ((struct namecache_ts *)ncp)->nc_time;
diff --git a/sys/nfsclient/nfs_vnops.c b/sys/nfsclient/nfs_vnops.c
index a39b29b..c2dfd97 100644
--- a/sys/nfsclient/nfs_vnops.c
+++ b/sys/nfsclient/nfs_vnops.c
@@ -1530,8 +1530,6 @@ nfsmout:
 		if (newvp)
 			vput(newvp);
 	} else {
-		if (cnp->cn_flags & MAKEENTRY)
-			cache_enter(dvp, newvp, cnp);
 		*vpp = newvp;
 	}
 	mtx_lock(&(VTONFS(dvp))->n_mtx);
@@ -1670,8 +1668,6 @@ nfsmout:
 			vput(newvp);
 	}
 	if (!error) {
-		if (cnp->cn_flags & MAKEENTRY)
-			cache_enter(dvp, newvp, cnp);
 		*ap->a_vpp = newvp;
 	}
 	mtx_lock(&(VTONFS(dvp))->n_mtx);
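
Condensed, the rule the hunks above apply is: only enter a name cache
entry when the lookup asked for one and the RPC reply actually carried
post-op directory attributes, so the entry's timestamp comes from the
server. A standalone sketch (the function name and the MAKEENTRY value
here are illustrative, not the kernel's):

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative value only, not the kernel's namei flag. */
#define MAKEENTRY	0x4000

/*
 * Enter a name cache entry only if negative caching is enabled
 * (negnametimeo != 0), the lookup requested an entry (MAKEENTRY),
 * and the RPC reply carried post-op attributes for the directory
 * (dattrflag != 0), so a server timestamp is available.
 */
static bool
should_cache_entry(int cn_flags, int dattrflag, int negnametimeo)
{
	return (negnametimeo != 0 && (cn_flags & MAKEENTRY) != 0 &&
	    dattrflag != 0);
}
```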

Received on Wed Jan 25 2012 - 11:15:34 UTC