Re: quick summary results with ixgbe (was Re: datapoints on 10G throughput with TCP ?)

From: Andre Oppermann <andre@freebsd.org>
Date: Fri, 09 Dec 2011 01:10:30 +0100
On 08.12.2011 14:11, Lawrence Stewart wrote:
> On 12/08/11 05:08, Luigi Rizzo wrote:
>> On Wed, Dec 07, 2011 at 11:59:43AM +0100, Andre Oppermann wrote:
>>> On 06.12.2011 22:06, Luigi Rizzo wrote:
>> ...
>>>> Even in my experiments there is a lot of instability in the results.
>>>> I don't know exactly where the problem is, but the high number of
>>>> read syscalls, and the huge impact of setting interrupt_rate=0
>>>> (the default is 16us on ixgbe), make me think that something in the
>>>> protocol stack needs investigation.
>>>>
>>>> Of course we don't want to optimize specifically for the one-flow-at-10G
>>>> case, but devising something that makes the system less affected
>>>> by short timing variations, and able to tolerate upstream interrupt
>>>> mitigation delays, would help.
>>>
>>> I'm not sure the variance is coming only from the network card and
>>> driver side of things. The TCP processing and the interactions with
>>> the scheduler and locking probably play a big role as well. There have
>>> been many changes to TCP recently and maybe an inefficiency that
>>> affects high-speed single-session throughput has crept in. That's
>>> difficult to debug though.
>>
>> I ran a bunch of tests on the ixgbe (82599) using RELENG_8 (which
>> seems slightly faster than HEAD), with MTU=1500 and various
>> combinations of card capabilities (hwcsum, tso, lro), different window
>> sizes and interrupt mitigation configurations.
>>
>> The default interrupt mitigation latency is 16us; l=0 means no interrupt
>> mitigation. "lro" is the software implementation of LRO (tcp_lro.c),
>> "hwlro" the hardware one (on the 82599). Using a window of 100 Kbytes
>> seems to give the best results.
>>
>> Summary:
>
> [snip]
>
>> - enabling software lro on the transmit side actually slows
>> down the throughput (4-5 Gbit/s instead of 8.0).
>> I am not sure why (perhaps acks are delayed too much?).
>> Adding a couple of lines in tcp_lro to reject
>> pure acks seems to have a much better effect.
>>
>> The tcp_lro patch below might actually be useful also for
>> other cards.
>>
>> --- tcp_lro.c (revision 228284)
>> +++ tcp_lro.c (working copy)
>> @@ -245,6 +250,8 @@
>>
>>  	ip_len = ntohs(ip->ip_len);
>>  	tcp_data_len = ip_len - (tcp->th_off << 2) - sizeof (*ip);
>> +	if (tcp_data_len == 0)
>> +		return -1;	/* not on ack */
>>
>>
>> /*
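
For reference, the check reduces to the following standalone sketch.  Only
the tcp_data_len computation is taken from tcp_lro.c; the function name and
the userland scaffolding around it are made up for illustration:

/*
 * Illustrative userland sketch of the test the patch adds: a segment
 * with no TCP payload is a pure ack and should be handed to the stack
 * right away instead of being held back for coalescing.
 */
#include <sys/types.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <netinet/ip.h>
#include <netinet/tcp.h>

static int
is_pure_ack(const struct ip *ip, const struct tcphdr *tcp)
{
	int ip_len, tcp_data_len;

	ip_len = ntohs(ip->ip_len);
	/* payload = total IP length - TCP header - IP header (no options) */
	tcp_data_len = ip_len - (tcp->th_off << 2) - sizeof (*ip);
	return (tcp_data_len == 0);
}
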
>
> There is a bug with our LRO implementation (first noticed by Jeff Roberson) that I started fixing
> some time back but dropped the ball on. The crux of the problem is that we currently send only one
> ACK for the entire LRO chunk instead of one for each of the segments contained therein. Given that
> most stacks rely on the ACK clock to keep things ticking over, the current behaviour kills
> performance. It may well be the cause of the performance loss you have observed. A WIP patch is at:
>
> http://people.freebsd.org/~lstewart/patches/misctcp/tcplro_multiack_9.x.r219723.patch
>
> Jeff tested the WIP patch and it *doesn't* fix the issue. I don't have LRO-capable hardware set up
> locally to figure out what I've missed. Most of the machines in my lab have em(4) NICs, which
> don't support LRO, but I'll see if I can find something that does and perhaps resurrect this patch.
>
> If anyone has any ideas what I'm missing in the patch to make it work, please let me know.
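
To put the ACK clock point in concrete terms, here is a hypothetical
illustration of the problem (not of the WIP patch; the struct and helper
are invented for the example):

/*
 * If LRO merges nsegs wire segments into one chunk before tcp_input()
 * sees it, the receiver emits at most one ACK for the whole chunk,
 * where the sender, with delayed ACKs, would otherwise have received
 * about nsegs/2.  Fewer ACKs per RTT means a slower ACK clock and
 * slower congestion window growth.
 */
struct lro_chunk_sketch {
	int	nsegs;		/* wire segments merged into this chunk */
};

/* ACKs the sender would have seen without LRO (delayed ACK: 1 per 2 segments) */
static int
acks_without_lro(const struct lro_chunk_sketch *c)
{
	return ((c->nsegs + 1) / 2);	/* vs. exactly 1 with the current LRO */
}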

On low RTTs the accumulated ACKing probably doesn't make any difference;
the congestion window will grow very fast anyway.  On longer RTTs it
certainly will make a difference.  Unless you have a 10Gig path with more
than 50ms of RTT or so, though, it's difficult to test empirically.

-- 
Andre
Received on Thu Dec 08 2011 - 23:10:37 UTC
