Re: Re[4]: serious networking (em) performance (ggate and NFS) problem

From: Matthew Dillon <dillon_at_apollo.backplane.com>
Date: Sun, 21 Nov 2004 20:42:39 -0800 (PST)
: Yes, I knew that adjusting the TCP window size is important to fill up a link.
: However, I wanted to show that adjusting the parameters of Interrupt
: Moderation affects network performance.
:
: And I think packet loss occurred when Interrupt Moderation was enabled.
: The mechanism of the packet loss in this case is not clear, but I think
: an inappropriate TCP window size is not the only reason.

    Packet loss is not likely, at least not for the contrived tests we
    are doing, because GiGE links have hardware flow control (I'm fairly
    sure).

    One could calculate the worst-case small-packet build-up in the receive
    ring.  I'm not sure what the minimum pad for GiGE is, but let's say it's
    64 bytes.  Then the packet rate would be around 1.9M pps, or 244 packets
    per interrupt at a moderation frequency of 8000 Hz.  The ring is 256
    packets.  But don't forget the hardware flow control!  The switch
    has some buffering too.
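
    As a quick Python sketch of that arithmetic (same assumptions as
    above: 64-byte minimum frames, preamble and inter-frame gap ignored,
    so the true rate is somewhat lower):

        # Worst-case receive-ring build-up per interrupt at GigE line
        # rate, under the assumptions above (64-byte minimum frames,
        # preamble/inter-frame gap ignored).
        LINK_BPS = 1_000_000_000       # GigE line rate
        MIN_FRAME_BYTES = 64           # assumed minimum frame size
        MODERATION_HZ = 8000           # default moderation frequency
        RING_SLOTS = 256               # receive ring size, per above

        pps = LINK_BPS / (MIN_FRAME_BYTES * 8)     # ~1.95M packets/sec
        per_intr = pps / MODERATION_HZ             # ~244 packets/interrupt
        print(f"{pps/1e6:.2f}M pps -> {per_intr:.0f} packets/interrupt "
              f"(ring holds {RING_SLOTS})")

    That leaves only a dozen spare ring slots per interrupt in the worst
    case, hence the flow control and switch buffering mattering.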

    hmm... me thinks I now understand why 8000 was chosen as the default :-)

    I would say that this means packet loss due to the interrupt moderation
    is highly unlikely, at least in theory, but if one were paranoid one
    might want to use a higher moderation frequency, say 16000 Hz, to be sure.

: I found that the TCP throughput improvement with Interrupt Moderation
: disabled is related to the congestion avoidance phase of TCP, because the
: standard deviations decrease when Interrupt Moderation is disabled.
:
: The following two results are outputs of `iperf -P 10', also without TCP
: window size adjustment.  I think the difference between the throughputs
: within the same measurement shows that congestion avoidance worked.
:
:o with the default setting of Interrupt Moderation.
:> [ ID] Interval       Transfer     Bandwidth
:> [ 13]  0.0-10.0 sec  80.1 MBytes  67.2 Mbits/sec
:> [ 11]  0.0-10.0 sec   121 MBytes   102 Mbits/sec
:> [ 12]  0.0-10.0 sec  98.9 MBytes  83.0 Mbits/sec
:> [  4]  0.0-10.0 sec  91.8 MBytes  76.9 Mbits/sec
:> [  7]  0.0-10.0 sec   127 MBytes   106 Mbits/sec
:> [  5]  0.0-10.0 sec   106 MBytes  88.8 Mbits/sec
:> [  6]  0.0-10.0 sec   113 MBytes  94.4 Mbits/sec
:> [ 10]  0.0-10.0 sec   117 MBytes  98.2 Mbits/sec
:> [  9]  0.0-10.0 sec   113 MBytes  95.0 Mbits/sec
:> [  8]  0.0-10.0 sec  93.0 MBytes  78.0 Mbits/sec
:> [SUM]  0.0-10.0 sec  1.04 GBytes   889 Mbits/sec

    Certainly overall send/response latency will be affected by up to 1/freq,
    e.g. 1/8000 = 125 µs (x2 hosts == 250 µs worst case), which is readily
    observable by running ping:

    [interrupt moderation frequency, set on both boxes]

    max:	64 bytes from 216.240.41.62: icmp_seq=2 ttl=64 time=0.057 ms
    100000:	64 bytes from 216.240.41.62: icmp_seq=8 ttl=64 time=0.061 ms
    30000:	64 bytes from 216.240.41.62: icmp_seq=5 ttl=64 time=0.078 ms
    8000:	64 bytes from 216.240.41.62: icmp_seq=3 ttl=64 time=0.176 ms
		(large stddev too, e.g. 0.188, 0.166, etc).
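
    A crude model reproduces those numbers: each host can hold a packet
    for up to one moderation period, so the round trip gains up to 2/freq
    in the worst case and about 1/freq on average if arrivals land
    uniformly within the period.  Sketched in Python against the measured
    57 µs base:

        # Added round-trip latency from interrupt moderation: up to one
        # period per host (worst case 2/freq); on average ~1/(2*freq)
        # per host if packets arrive uniformly within the period.
        BASE_RTT_US = 57.0        # measured best-case RTT, moderation off

        for hz in (100000, 30000, 8000):
            mean_us = 1e6 / hz            # two hosts * 1/(2*freq)
            worst_us = 2 * 1e6 / hz       # two hosts * 1/freq
            print(f"{hz:>6} Hz: avg ~{BASE_RTT_US + mean_us:.0f} µs, "
                  f"worst ~{BASE_RTT_US + worst_us:.0f} µs")

    At 8000 Hz the predicted ~182 µs average is close to the observed
    0.176 ms; the model is rougher at the higher frequencies.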

    But this is only relevant for applications that require that sort of
    response time == not very many applications.  Note that a large packet
    will turn the best case 57 µs round trip into a 140 µs round trip with
    the EM card.
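
    Serialization alone explains a good chunk of that jump, assuming
    MTU-sized frames and a store-and-forward switch that serializes each
    frame twice per direction:

        # Wire-serialization cost of a large frame at 1 Gbit/s, crossing
        # a store-and-forward switch (host->switch, switch->host) in
        # each direction of the round trip.
        FRAME_BITS = 1500 * 8                 # assumed MTU-sized frame
        WIRE_US = FRAME_BITS / 1e9 * 1e6      # 12 µs per hop
        HOPS_PER_DIR = 2                      # host->switch, switch->host
        added = 2 * HOPS_PER_DIR * WIRE_US    # both directions
        print(f"~{added:.0f} µs of serialization on the round trip")

    That is ~48 µs of the ~83 µs difference; the remainder is presumably
    per-byte copy and DMA overhead on the hosts.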

    It might be interesting to see how interrupt moderation affects a
    buildworld over NFS, as that certainly results in a huge amount of
    synchronous transactional traffic.

: Measuring TCP throughput was not an appropriate way to indicate the effect
: of Interrupt Moderation clearly.  It's my mistake.  TCP is too
: complicated. :)
:
:-- 
:Shunsuke SHINOMIYA <shino_at_fornext.org>

    It really just comes down to how sensitive a production system is to
    round trip times within the range of effect of the moderation frequency.
    Usually the answer is: not very.  That is, the benefit is not sufficient
    to warrant the additional interrupt load that turning moderation off
    would create.  And even if low latency is desired it is not actually
    necessary to turn off moderation.  It could be set fairly high,
    e.g. 20000, to reap most of the benefit.

    Processing overheads are also important.  If the network is loaded down
    you will wind up eating a significant chunk of cpu with moderation turned
    off.  This is readily observable by running vmstat during an iperf test.

    The iperf test reported ~700 MBits/sec for all tested moderation
    frequencies, using iperf -w 63.5K on DragonFly.  I would be interested
    in knowing how FreeBSD fares, though SMP might skew the reality too
    much to be meaningful.

	moderation	cpu
	frequency (Hz)	%idle

	100000		2%
	30000		7%
	20000		35%
	10000		60%
	8000		66%
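
    A rough way to see where the idle time goes (assuming the ~700
    MBits/sec stream is MTU-sized 1500-byte frames, and at most one
    interrupt per packet when the ceiling exceeds the packet rate):

        # Interrupt batching bought by each moderation ceiling during
        # the iperf run.  Assumes ~700 Mbit/s of 1500-byte frames and
        # at most one interrupt per packet.
        THROUGHPUT_BPS = 700e6
        FRAME_BYTES = 1500                 # assumed MTU-sized frames

        pps = THROUGHPUT_BPS / (FRAME_BYTES * 8)   # ~58k packets/sec
        for ceiling_hz in (100000, 30000, 20000, 10000, 8000):
            intr_rate = min(pps, ceiling_hz)       # capped by packet rate
            print(f"ceiling {ceiling_hz:>6} Hz: ~{intr_rate:6.0f} intr/sec, "
                  f"~{pps/intr_rate:4.1f} packets/interrupt")

    At 8000 Hz each interrupt services ~7 packets instead of one, which
    lines up with the idle figures above.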

    In other words, if you are doing more than just shoving bits around the
    network, for example if you need to read or write the disk or do some
    sort of computation or other activity that requires cpu, turning off
    moderation could wind up being a very, very bad idea.

    In fact, even if you are just routing packets I would argue that turning
    off moderation might not be a good choice... it might make more sense
    to set it to some high frequency like 40000 Hz.  But, of course, it
    depends on what other things the machine might be running and what sort
    of processing (e.g. firewall lists) the machine has to do on the packets.

					-Matt
					Matthew Dillon 
					<dillon_at_backplane.com>