TCP RACK performance

From: Chenyang Zhong <zhongcy95@gmail.com>
Date: Tue, 11 Sep 2018 17:41:20 +0800

Hi,

I am really excited to see that @rrs from Netflix is adding TCP RACK
and the High Precision Timer System (HPTS) to the kernel, so I built a
kernel (r338543) and ran some tests.

I used the following kernel configuration options, as suggested in
commit rS334804.

makeoptions WITH_EXTRA_TCP_STACKS=1
options TCPHPTS
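
For anyone reproducing this, the usual rebuild cycle should work (the
KERNCONF name below is a placeholder for your own config):

# cd /usr/src
# make -j`sysctl -n hw.ncpu` buildkernel KERNCONF=RACK
# make installkernel KERNCONF=RACK
# shutdown -r now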

After booting the new kernel, I loaded tcp_rack.ko:
# kldload tcp_rack
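
The module should also load automatically at boot via the standard
loader.conf mechanism, e.g.

# echo 'tcp_rack_load="YES"' >> /boot/loader.conf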

Then I checked the sysctl to make sure the rack stack was registered:
# sysctl net.inet.tcp.functions_available
net.inet.tcp.functions_available:
Stack                           D Alias                            PCB count
freebsd                         * freebsd                          3
rack                              rack                             0
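
The asterisk under the D column marks the current default stack; the
default can also be queried directly:

# sysctl net.inet.tcp.functions_default
net.inet.tcp.functions_default: freebsd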

I ran the first test with the default stack, running iperf3 over a
wireless network where the RTT fluctuated but there was no packet
loss. Here is a ping result summary; the average and standard
deviation of the RTT are relatively high.

39 packets transmitted, 39 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 1.920/40.206/124.094/39.093 ms
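
For reference, this came from a plain ping against the iperf3 server
(the address below is just a placeholder for my server):

# ping -c 39 192.168.1.10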

Here is the iperf3 result of a 30-second benchmark.

[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-30.00  sec   328 MBytes  91.8 Mbits/sec   62             sender
[  5]   0.00-30.31  sec   328 MBytes  90.9 Mbits/sec                  receiver
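
The benchmark itself was a standard 30-second run along these lines
(server address is again a placeholder):

On the server:
# iperf3 -s

On the client:
# iperf3 -c 192.168.1.10 -t 30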

Then I switched the default to the new RACK stack:
# sysctl net.inet.tcp.functions_default=rack
net.inet.tcp.functions_default: freebsd -> rack
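
As far as I can tell, only connections opened after this point pick up
the new default; switching back for another run is just the inverse:

# sysctl net.inet.tcp.functions_default=freebsd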

Running the same iperf3 benchmark showed a 10-15% throughput loss,
and the number of retransmissions increased dramatically.

[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-30.00  sec   286 MBytes  79.9 Mbits/sec  271             sender
[  5]   0.00-30.30  sec   286 MBytes  79.0 Mbits/sec                  receiver

I then ran iperf3 on a Linux machine with kernel 4.15, where RACK is
enabled by default. I verified that through sysctl:

# sysctl net.ipv4.tcp_recovery
net.ipv4.tcp_recovery = 1
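
net.ipv4.tcp_recovery is a bitmask, and bit 0x1 enables RACK loss
detection, so a control run with RACK disabled would be:

# sysctl -w net.ipv4.tcp_recovery=0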

The iperf3 run showed the same throughput as the default freebsd
stack, and the retransmission count matched the RACK stack on
FreeBSD.

[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-30.00  sec   330 MBytes  92.3 Mbits/sec  270             sender
[  4]   0.00-30.00  sec   329 MBytes  92.1 Mbits/sec                  receiver

I am not sure whether the performance issue comes from my
configuration or from the new RACK implementation on FreeBSD. I am
happy to provide more information if anyone is interested. Thanks
again for all the hard work. I cannot wait to see TCP BBR on FreeBSD.

Best,
Chenyang