RE: Heads Up: default NFS server changing to the new one

From: Chris Forgeron <cforgeron_at_acsi.ca>
Date: Mon, 13 Jun 2011 15:40:19 -0300
>From: Rick Macklem
>
>Well, I doubt you'll find much difference performance-wise. An NFS server can be looked at as a protocol translator, converting the NFS RPCs into VFS/VOP calls. Performance is largely defined by how well the network stack and/or file system perform.
>
>When you set up a server, there are a few things that may help:
[..snip..]

Yes, I'm seeing little difference performance-wise so far (maybe a slight boost on the new code), although I haven't had time to run all the tests that I'd like, so I can't tell if it's significant. However, that's good - as long as we're not regressing, I'm happy.

I run ZFS exclusively for my SANs, and I'm familiar with the various tweaks to make it go faster. ZFS and NFS don't play well under ESX because the ESX client forces O_SYNC, as I've detailed before, but a quick snip of a few lines in nfs_nfsdport.c to force ioflags to be what I want helps in my situation. I was the guy who was bugging you for a switch for that a month ago.. :-) I'm seeing around a 10% improvement when I do that, since it doesn't flog the ZIL as much (I use multiple hardware RAM drives for a ZIL, so they won't get much faster than that).
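For anyone curious, the snip is along these lines. This is a sketch of my local hack, not a proper patch, and the surrounding code is from memory of the 9-CURRENT source, so check it against your own tree:

    /*
     * Stock nfsvno_write() picks ioflags from the client's stable_how
     * argument (from memory, verify against your tree):
     */
    if (stable == NFSWRITE_UNSTABLE)
            ioflags = IO_NODELOCKED;
    else
            ioflags = (IO_SYNC | IO_NODELOCKED);

    /*
     * Local hack, replacing the above: always take the async path, so
     * ESX's forced O_SYNC writes don't hit the ZIL on every RPC.  This
     * gives up the NFS stable-write guarantee, so only do it if you
     * trust your ZIL and power situation.
     */
    ioflags = IO_NODELOCKED;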

I'm also PXE booting over NFS, and that's working well, both from iPXE and gPXE. My Linux clients running parted or Clonezilla also don't seem to have any issues with the new NFS server.

There's a bit of error/warning chatter between the old FreeBSD NFS servers I haven't upgraded yet and the new NFS clients, but it all still seems to work, and I plan on upgrading everything across the board to my newer 2011.05.28.15.00.00 build of 9-CURRENT by the end of this week or next. I'm also going to build a clang/llvm version of the new systems for testing on that front, as it may be time for me to switch.

>As for things the nfsd server code can do, I plan on looking at a couple of things, but I don't think those will be in 9.0:
>- Making MAXBSIZE larger. Even if it is larger than the largest block
>  size supported for the underlying fs, this may help, because it can
>  reduce the # of I/O RPCs.

That's interesting. I wonder what size would be good for ZFS? Possibly 128K, to match the default ZFS recordsize. I see your definition in nfsport.h. I may fiddle a bit with this myself.
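For reference, this is the kind of change I'd fiddle with (the stock value is 64K in sys/sys/param.h, from memory; treat this as a sketch, not a tested patch):

    /*
     * Hypothetical experiment: raise MAXBSIZE from the stock 64K to
     * ZFS's default 128K recordsize, so a single READ/WRITE RPC can
     * cover one whole ZFS record.  Other buffer-cache consumers size
     * themselves off MAXBSIZE, so this needs real testing, not just
     * a recompile.
     */
    #undef  MAXBSIZE
    #define MAXBSIZE        (128 * 1024)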

I've also been wondering about the performance effect of the mallocs in the nfsvno_write() function. Would it be more efficient to malloc further up and pass a pointer down, so we're not creating and releasing memory on every write? Possibly malloc the max size at startup and reuse the memory area. I haven't been that in-depth with kernel code in a while, however, so I don't recall how easy this would be, or whether the cost of passing the pointer around would eat the savings.
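To make the idea concrete, here's a userland-style sketch of the trade-off. Everything in it (the names, MAX_IOVS, the fallback) is hypothetical illustration, not the actual nfsd code:

    #include <stdlib.h>
    #include <sys/uio.h>

    #define MAX_IOVS 64     /* assumed worst-case iovecs per write RPC */

    /* Roughly what nfsvno_write() does today: a malloc/free pair on
     * every single write RPC. */
    static int
    write_percall(int cnt)
    {
            struct iovec *ivp;

            ivp = malloc(cnt * sizeof(struct iovec));
            if (ivp == NULL)
                    return (-1);
            /* ... fill iovecs from the mbuf chain, do the write ... */
            free(ivp);
            return (0);
    }

    /* The suggestion: the caller (say, each nfsd thread at startup)
     * owns a worst-case array and passes it down, so the hot path
     * never touches the allocator. */
    static int
    write_prealloc(struct iovec *ivp, int cnt)
    {
            if (cnt > MAX_IOVS)
                    return (-1);    /* real code would fall back to malloc */
            (void)ivp;
            /* ... fill iovecs from the mbuf chain, do the write ... */
            return (0);
    }

    int
    main(void)
    {
            static struct iovec iovs[MAX_IOVS];     /* allocated once */

            write_percall(8);
            return (write_prealloc(iovs, 8));
    }

The cost is memory pinned per thread even when idle, plus needing the fallback path for oversized requests, but it would keep malloc/free out of the write path entirely.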
Received on Mon Jun 13 2011 - 16:40:25 UTC
