On Wed, 11 Nov 2009, Ivan Voras wrote:

> I think NFS uses sync disk IO access by default, this may be your
> problem if you are write-heavy. Try setting vfs.nfsrv.async to 1 to
> see if this is the cause of your problems.

Just fyi, I took a quick look and I don't think this will be a good idea
for NFSv3. (It allows the server to cheat for NFSv2 and avoid synchronous
writes, which was contrary to the standard but became fashionable for
performance reasons before NFSv3 came out.)

For NFSv3, writes are normally done asynchronously, followed by a Commit
RPC to force them to disk, issued by the client when it is flushing its
buffer cache. When you set this sysctl, all the NFSv3 server
(sys/nfsserver, not the experimental one) does is reply to the write RPC
claiming the data has already been committed. This can have one of two
effects, depending upon the client:

1 - The client may choose not to bother with a Commit RPC.
    --> This shouldn't help performance much, because the data will
        normally have made it to disk by then.
    *** This might be an interesting experiment to try on a ZFS server,
        though: if it does make a significant difference, it suggests
        that ZFS does a lot of work figuring out that the blocks are
        already on stable storage, or something like that. (ie. It might
        hint at where to look for a ZFS-related performance problem.)

OR

2 - Nothing changes, because the client doesn't notice that it no longer
    needs to commit and does the Commit RPC anyhow.

Also, it is potentially dangerous: if the server crashes after the client
has done the write, but before the server has written the data to disk,
the data may be lost. (ie. The client might have flushed the dirty blocks
out of its buffer cache, because it didn't think it needed to do a Commit
RPC.)

rick
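[For reference, here is a minimal sketch of the client-side decision the
two cases above hinge on, using the stable_how values defined by RFC 1813
(section 3.3.7). The struct and function names are hypothetical
illustrations, not the actual FreeBSD client code.]

/*
 * Sketch of the NFSv3 client's commit decision after a WRITE reply.
 * Struct and function names are hypothetical; the stable_how values
 * are the real ones from RFC 1813.
 */
#include <stdbool.h>

enum stable_how {               /* RFC 1813, section 3.3.7 */
	NFSV3_UNSTABLE  = 0,    /* server may still be caching the write */
	NFSV3_DATA_SYNC = 1,    /* data on stable storage, metadata maybe not */
	NFSV3_FILE_SYNC = 2     /* data and metadata both on stable storage */
};

struct write3_reply {
	enum stable_how committed;  /* what the server claims it did */
	/* ... count, write verifier, wcc data ... */
};

/*
 * Case 1 above: a client that trusts this field skips the Commit RPC
 * when the server claims FILE_SYNC -- which is exactly what a server
 * with vfs.nfsrv.async=1 claims, whether or not the data is on disk.
 * Case 2: a client that ignores the field does the Commit RPC anyhow.
 */
static bool
needs_commit(const struct write3_reply *rep)
{
	return (rep->committed != NFSV3_FILE_SYNC);
}

[The danger noted above follows directly from this: once needs_commit()
returns false, the client may discard its dirty buffers, so a server
crash before the data actually reaches disk loses it silently.]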