Re: asymmetric NFS transfer rates

From: Emanuel Strobl <Emanuel.Strobl_at_gmx.net>
Date: Mon, 8 Nov 2004 04:29:11 +0100
On Tuesday, 2 November 2004 19:56, Doug White wrote:
> On Tue, 2 Nov 2004, Robert Watson wrote:
> > On Tue, 2 Nov 2004, Emanuel Strobl wrote:
> > > It's an IDE RAID controller (3ware 7506-4, a real one) and the file is
> > > indeed huge, but not abnormally so. I have a hard disk video recorder,
> > > so I have lots of 700MB files. Also, if I copy my photo collection from
> > > the server it takes 5 minutes, but copying _to_ the server takes almost
> > > 15 minutes, and the average file size is 5 MB. Fast Ethernet isn't
> > > really suitable for my needs, but at least 10MB/s should be reached.
> > > I can't imagine I'll get better speeds when I upgrade to GbE (which the
> > > important boxes already are, just not the switch), because NFS in its
> > > current state isn't able to saturate a 100baseTX line, at least in one
> > > direction. That's the really astonishing thing for me: why does reading
> > > saturate 100baseTX while writing reaches only a third of that?
> >
> > Have you tried using tcpdump/ethereal to see if there's any significant
> > packet loss (for good reasons or not) going on?  Lots of RPC retransmits
> > would certainly explain the lower performance, and if that's not it, it
> > would be good to rule out.  The traces might also provide some insight
> > into the specific I/O operations, letting you see what block sizes are in
> > use, etc.  I've found that dumping to a file with tcpdump and reading
> > with ethereal is a really good way to get a picture of what's going on
> > with NFS: ethereal does a very nice job decoding the RPCs, as well as
> > figuring out what packets are related to each other, etc.
>
> It'd also be nice to know the mount options (NFS block sizes in
> particular).

I haven't done intensive wire dumps yet, but I've found some oddities.
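
When I do dump the wire, I'll probably capture along these lines (the
interface name and filter are just examples, not my exact setup):

  # capture full NFS packets to a file for offline analysis
  tcpdump -i fxp0 -s 0 -w nfs.pcap port 2049
  # decode the RPCs with ethereal afterwards
  ethereal -r nfs.pcap
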
My main problem seems to be the 3ware controller in combination with NFS. If I
create a malloc-backed md0, I can push more than 9MB/s to it with UDP and more
than 10MB/s with TCP (both without modifying the read/write sizes).
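
For reference, the md test was set up roughly like this (size, unit and
mount point are from memory, not exact):

  # create a malloc-backed memory disk and put a filesystem on it
  mdconfig -a -t malloc -s 256m -u 0
  newfs /dev/md0
  mkdir -p /mnt/mdtest
  mount /dev/md0 /mnt/mdtest
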
I can also copy a 100MB file from twed0s1d to twed0s1e (i.e., from and to the
same RAID5 array, which is the worst case) at 15MB/s, so the array can't be
the bottleneck.
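
That copy was just a plain dd between two partitions of the same array,
something like (mount points are placeholders):

  # copy a 100MB file within the same RAID5 array (worst case)
  dd if=/mnt/d/bigfile of=/mnt/e/bigfile bs=64k
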
Only when I push to the RAID5 array via NFS do I get a mere 4MB/s, no matter
whether I use UDP, TCP, or nonstandard read/write sizes.

The next thing I found is that if I tune -w to anything higher than the
default 8192, the average transfer rate for one big file degrades with UDP
but increases with TCP (as I would expect).
UDP transfers seem to hiccup with -w tuned: rates peak at 8MB/s, but the next
second they sit at 0-2MB/s (watched with systat -vm 1). With TCP everything
runs smoothly, regardless of the -w value.
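
For the record, the mounts look roughly like this (server name and paths
are placeholders):

  # UDP mount with a larger write size (this is the one that hiccups)
  mount_nfs -w 16384 server:/export /mnt/nfs
  # the same over TCP (-T), which runs smoothly
  mount_nfs -T -w 16384 server:/export /mnt/nfs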

Now back to my real problem: could it be that NFS and twe are blocking each
other, or something like that? Why do I get such bad transfer rates when both
parts are in use, while each part on its own seems to work fine?

Thanks for any help,

-Harry

Received on Mon Nov 08 2004 - 02:29:15 UTC
