Re: [HEADS UP] Significant TCP work committed to head - CUBIC & H-TCP committed

From: Lawrence Stewart <lstewart_at_freebsd.org>
Date: Thu, 02 Dec 2010 22:53:59 +1100
On 11/12/10 20:35, Lawrence Stewart wrote:
> Hi All,
> 
> A quick note that this evening, I made the first in a series of upcoming
> commits to head that modify the TCP stack fairly significantly. I have
> no reason to believe you'll notice any issues, but TCP is a complex
> beast and it's possible things might crop up. The changes are mostly
> related to congestion control, so the sorts of issues that are likely to
> crop up if any will most probably be subtle and difficult to even
> detect. The first svn revision in question is r215166. The next few
> commits I plan to make will be basically zero impact and then another
> significant patch will follow in a few weeks.
> 
> If you bump into an issue that you think might be related to this work,
> please roll back r215166 from your tree and attempt to reproduce before
> reporting the problem. Please CC me directly with your problem report
> and post to freebsd-current_at_ or freebsd-net_at_ as well.
> 
> Lots more information about what all this does and how to use it will be
> following in the coming weeks, but in the meantime, just keep this note
> in the back of your mind. For the curious, some information about the
> project is available at [1,2].
> 
> Cheers,
> Lawrence
> 
> [1] http://caia.swin.edu.au/freebsd/5cc/
> [2]
> http://www.freebsd.org/news/status/report-2010-07-2010-09.html#Five-New-TCP-Congestion-Control-Algorithms-for-FreeBSD

After a rather arduous couple of weeks grappling with VIMAGE-related
bugs, intermittently failing testbed hardware, and various algorithm
ambiguities, the next chunk of work has finally landed in head. Kernel
modules implementing the CUBIC and H-TCP congestion control algorithms
are now built/installed during a "make kernel".
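
For the impatient, a quick way to confirm the modules are present and
loadable after installing the new kernel (assuming the default
/boot/kernel install location) is something like:

ls /boot/kernel/cc_cubic.ko /boot/kernel/cc_htcp.ko
kldload cc_cubic cc_htcp
sysctl net.inet.tcp.cc.available

The last command should list cubic and htcp alongside newreno once the
modules are loaded.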

I should stress that everything other than NewReno is considered
experimental at this stage in an IRTF/IETF specification sense, and as
such I would strongly advise against setting the system default
algorithm to anything other than NewReno. The TCP_CONGESTION setsockopt
call (used by e.g. iperf -Z) is the appropriate way to test an algorithm
on an individual connection.
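
For anyone wanting to wire this into their own test tool rather than use
iperf, per-connection selection boils down to something like the
following (an untested sketch; the algorithm name and buffer size are
arbitrary examples, and error handling is minimal):

/*
 * Minimal sketch: request a specific congestion control algorithm on a
 * single socket via TCP_CONGESTION. The corresponding module (e.g.
 * cc_cubic) must already be loaded, i.e. the name must appear in
 * net.inet.tcp.cc.available.
 */
#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <err.h>
#include <stdio.h>
#include <string.h>

int
main(void)
{
    const char *algo = "cubic";   /* example name only */
    char inuse[80];
    socklen_t len = sizeof(inuse);
    int s;

    if ((s = socket(AF_INET, SOCK_STREAM, 0)) == -1)
        err(1, "socket");
    if (setsockopt(s, IPPROTO_TCP, TCP_CONGESTION, algo,
        strlen(algo)) == -1)
        err(1, "setsockopt(TCP_CONGESTION)");
    if (getsockopt(s, IPPROTO_TCP, TCP_CONGESTION, inuse, &len) == -1)
        err(1, "getsockopt(TCP_CONGESTION)");
    printf("socket will use %s for congestion control\n", inuse);
    /* connect() and transfer data as usual from here. */
    return (0);
}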

For those interested in taking the algorithms for a spin, the easiest
way is probably to install benchmarks/iperf from ports on a pair of
source and sink machines and do the following:

- On the data sink (receiver)
cd /usr/ports/benchmarks/iperf
fetch http://caia.swin.edu.au/urp/newtcp/tools/caia_iperf204_1.1.patch
mv caia_iperf204_1.1.patch files/patch-caiaiperf
make install clean
sysctl kern.ipc.maxsockbuf=1048576
iperf -s -j 256k -k 256k

- On the data source (sender)
cd /usr/ports/benchmarks/iperf
fetch http://caia.swin.edu.au/urp/newtcp/tools/caia_iperf204_1.1.patch
mv caia_iperf204_1.1.patch files/patch-caiaiperf
make install clean
kldload cc_cubic cc_htcp
sysctl kern.ipc.maxsockbuf=1048576
iperf -c <data_sink_ip> -j 256k -k 256k -Z <algo>
(where <algo> is one from the list reported by "sysctl net.inet.tcp.cc.available")
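
Note that -Z only changes the algorithm for the test connection (it uses
the TCP_CONGESTION socket option mentioned above). The system-wide
default is controlled separately by the net.inet.tcp.cc.algorithm
sysctl, which per the advice above is best left at newreno; you can
check it with:

sysctl net.inet.tcp.cc.algorithm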

You may need to fiddle with the above parameters a bit depending on your
setup. You will want decent bandwidth (5+Mbps should be ok) and a
moderate to large RTT (50+ms) between both hosts if you want to see
these algorithms really shine. You can use dummynet on the data source
machine to easily introduce artificial bw/delay/queuing e.g.

ipfw pipe 1 config noerror bw 10Mbps delay 20ms queue 100Kbytes
ipfw add 10 pipe 1 ip from me to <data_sink_ip> dst-port 5001

Be careful to do the above via console access or stick "options
IPFIREWALL" and "options IPFIREWALL_DEFAULT_TO_ACCEPT" in your kernel
config to avoid locking yourself out (dummynet needs IPFW to work).
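
For what it's worth, a custom kernel isn't strictly required; loading
the modules at run time should also work, something along these lines
(again from the console, and the rule number is just an example):

kldload dummynet
ipfw add 65000 allow ip from any to any

kldload'ing dummynet pulls in ipfw as a dependency, and ipfw's
compiled-in default rule is to deny everything, hence the need to add
the catch-all allow rule (or set the net.inet.ip.fw.default_to_accept
loader tunable) straight away.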

For the really interested (by now I suspect my audience is down to 0,
but still), you might want to load siftr and enable/disable it during
each test run and make your very own plot of cwnd vs time to see what's
really going on behind the scenes.
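
In case it saves someone a trip to the man page, a SIFTR run looks
roughly like this (/var/log/siftr.log is the default log location):

kldload siftr
sysctl net.inet.siftr.enabled=1
<run the iperf test>
sysctl net.inet.siftr.enabled=0

The resulting log is a comma-separated record of per-packet TCP state,
including the cwnd, so pulling out a single flow and plotting cwnd
against the timestamp field with gnuplot or similar is straightforward.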

Ok that's enough for now, but much more is on the way. Please let me
know if you have any feedback or run into any problems related to this work.

Cheers,
Lawrence