On Thu, 16 Jul 2009, Anonymous wrote:

> Let's create 335 empty files in /blah and try to list them over nfsv3.
>
> # uname -vm
> FreeBSD 8.0-BETA1 #0: Sat Jul 4 03:55:14 UTC 2009 root@almeida.cse.buffalo.edu:/usr/obj/usr/src/sys/GENERIC i386
>
> # mkdir /blah
> # (while [ $((i+=1)) -le 334 ]; do : >/blah/foo_$i; done)
> # echo / -alldirs >/etc/exports
> # /etc/rc.d/nfsd onestart
> # mount -t newnfs -o nfsv3 0:/blah /mnt

Well, this turns out more interesting than I expected. The problem occurs
only when the large directory is at the mount point. If you:

  # mount -t newnfs -o nfsv3 0:/ /mnt
  # cd /mnt/blah
  # ls

it works.

When the large directory is at the mount point, it reads the first block
normally, but it then thinks all subsequent blocks are already in the
buffer cache; i.e. they come back from getblk() with B_CACHE already set???
(It then just loops getting blocks forever, since it won't see the eof if
it doesn't try to read from the server.)

Anyone happen to have a clue why that would happen? Why would blocks on a
mount point vnode behave differently than others?

Well, at least it's easy to reproduce, so I can keep poking around with it,

rick
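To make the failure mode above concrete, here is a minimal user-space C model of
the block loop being described. It is only a sketch under stated assumptions:
getblk_model() and readdir_rpc_model() are made-up stand-ins for the real
getblk() and the NFSv3 READDIR RPC, and this is not the actual sys/fs/nfsclient
code. It just shows that if every block after the first comes back with B_CACHE
set, the client never reads from the server and so never sees the eof flag.

/*
 * Hypothetical model of the readdir block loop -- NOT the real kernel code.
 * B_CACHE is the real buffer flag mentioned above; everything else here is
 * invented for illustration.
 */
#include <stdbool.h>
#include <stdio.h>

#define B_CACHE 0x1

struct buf {
	int flags;		/* B_CACHE set => contents assumed valid */
};

/*
 * Stand-in for getblk(): hand back a buffer for logical block lbn.
 * 'buggy' mimics the mount-point case where every block after the
 * first is wrongly marked B_CACHE.
 */
static struct buf
getblk_model(int lbn, bool buggy)
{
	struct buf bp = { .flags = 0 };

	if (buggy && lbn > 0)
		bp.flags |= B_CACHE;
	return (bp);
}

/* Stand-in for the READDIR RPC: the server reports eof at block 2. */
static bool
readdir_rpc_model(int lbn)
{
	return (lbn >= 2);
}

static void
list_directory(bool buggy)
{
	bool eof = false;
	int lbn;

	for (lbn = 0; !eof && lbn < 10; lbn++) {	/* cap for the demo */
		struct buf bp = getblk_model(lbn, buggy);

		if (bp.flags & B_CACHE) {
			/* Believed cached: no RPC, so eof is never learned. */
			printf("lbn %d: cached, no read from server\n", lbn);
		} else {
			eof = readdir_rpc_model(lbn);
			printf("lbn %d: read from server, eof=%d\n", lbn, eof);
		}
	}
	printf("%s", eof ? "finished at eof\n" :
	    "still looping (would never stop)\n");
}

int
main(void)
{
	printf("-- non-mount-point case --\n");
	list_directory(false);
	printf("-- mount-point case --\n");
	list_directory(true);
	return (0);
}

Running it, the non-mount-point case stops as soon as the modelled RPC reports
eof, while the "buggy" case keeps taking the cached path and only stops because
of the artificial 10-block cap, which mirrors the endless loop described above.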