Re: 8.0-RC3 network performance regression

From: Robert Watson <rwatson_at_FreeBSD.org>
Date: Thu, 19 Nov 2009 09:11:19 +0000 (GMT)
On Wed, 18 Nov 2009, Elliot Finley wrote:

> I have several boxes running 8.0-RC3 with pretty dismal network performance. 
> I also have some 7.2 boxes with great performance. Using iperf I did some 
> tests:
>
> server(8.0) <- client (8.0) == 420Mbps
> server(7.2) <- client (7.2) == 950Mbps
> server(7.2) <- client (8.0) == 920Mbps
> server(8.0) <- client (7.2) == 420Mbps
>
> so when the server is 7.2, I have good performance regardless of whether the 
> client is 8.0 or 7.2. when the server is 8.0, I have poor performance 
> regardless of whether the client is 8.0 or 7.2.
>
> Has anyone else noticed this?  Am I missing something simple?

I've generally not measured regressions along these lines, but TCP performance 
can be quite sensitive to specific driver version and hardware configuration. 
So far, I've generally measured significant TCP scalability improvements in 8, 
and moderate raw TCP performance improvements over real interfaces.  On the 
other hand, I've seen decreased TCP performance on the loopback due to 
scheduling interactions with ULE on some systems (but not all -- and disabling 
checksum generation/verification has improved loopback performance on others).

The first thing to establish is whether other similar benchmarks give the same 
result, which might help us narrow the issue down a bit.  Could you try using 
netperf+netserver with the TCP_STREAM test and see if that differs using the 
otherwise identical configuration?
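
If netperf isn't already installed, it should be available from the ports tree 
(benchmarks/netperf, if memory serves).  A minimal run looks roughly like the 
following -- the host address and test length below are just placeholders:

     # On the receiving box:
     netserver

     # On the sending box, a 60-second TCP stream test toward it:
     netperf -H <server-ip> -t TCP_STREAM -l 60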

Could you compare the ifconfig link configuration of 7.2 and 8.0 to make sure 
there's not a problem with the driver negotiating, for example, half duplex 
instead of full duplex?  Also confirm that the same blend of LRO/TSO/checksum 
offloading/etc is present.
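
For example, assuming an em(4) interface named em0 (substitute your own driver 
and unit number), something along these lines would help rule out a negotiation 
or offload difference:

     ifconfig em0                            # compare the media: and options= lines
     ifconfig em0 -tso -lro -txcsum -rxcsum  # temporarily disable offloads to test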

Could you do "procstat -at | grep ifname" (where ifname is your interface 
name) and send that to me?

Another thing to keep an eye on is interrupt rates and pin sharing, both of 
which are sensitive to driver and ACPI changes.  It wouldn't hurt to compare 
vmstat -i rates not just on your network interface, but also on other devices, 
to make sure there's not new aliasing.  With a new USB stack and plenty of 
other changes, additional driver code running when your NIC interrupt fires 
would be highly measurable.
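
For example (em0 below is again just a placeholder for your interface name):

     vmstat -i               # compare the per-device "rate" column on 7.2 vs 8.0
     vmstat -i | grep em0    # just the NIC's interrupt line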

Finally, two TCP tweaks to try:

(1) Try disabling in-flight bandwidth estimation by setting
     net.inet.tcp.inflight.enable to 0.  Bandwidth estimation often hurts
     low-latency, high-bandwidth local ethernet links, and is sensitive to many
     other issues, including time-keeping.  It may not be the "cause", but it's a
     useful thing to try.

(2) Try setting net.inet.tcp.read_locking to 0, which disables the read-write
     locking strategy on global TCP locks.  This setting, when enabled,
     significantly improves TCP scalability when dealing with multiple NICs or
     input queues, but is one of the non-trivial functional changes in TCP.
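
For reference, both can be flipped at runtime with sysctl(8), roughly as 
follows; setting them back to 1 afterwards should restore the stock defaults if 
neither makes a difference:

     sysctl net.inet.tcp.inflight.enable=0 net.inet.tcp.read_locking=0
     sysctl net.inet.tcp.inflight.enable=1 net.inet.tcp.read_locking=1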

Robert N M Watson
Computer Laboratory
University of Cambridge