Rewritten TCP reassembly

From: Andre Oppermann <andre_at_freebsd.org>
Date: Fri, 10 Dec 2004 21:01:12 +0100
I've completely rewritten the TCP reassembly function to be a lot more
efficient.  In tests with normal bw*delay products, packet loss, and
severe reordering I've measured a performance improvement of at least
30%.  For high and very high bw*delay product links the improvement is
most likely much higher.

The main property of the new code is O(1) insert for 95% of all normal
reassembly cases.  If there is more than one hole the insert time is
O(holes).  If a packet arrives that closes a hole, the chains to the left
and right of it are merged.  The artificially constructed worst case is
O(n).  No mallocs are done for new segments.  The old code was O(n) in
all cases, plus one malloc per segment for a descriptor structure.
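Roughly, the idea looks like this (a made-up userland sketch, not the code
in the patch; unlike the patch it mallocs a small descriptor per block to
keep the example short, and all names here are invented for illustration):
segments are kept as a sorted list of contiguous byte ranges, in-order data
only touches the last block, and a segment that closes a hole merges the
blocks to its left and right.

/*
 * Illustration only -- not tcp_reass() from the patch.
 */
#include <stdio.h>
#include <stdlib.h>

struct block {
	unsigned long	start;	/* first byte covered by this block */
	unsigned long	end;	/* one past the last byte covered */
	struct block	*next;
};

static struct block *head;	/* sorted, non-overlapping blocks */

static void
reass_insert(unsigned long start, unsigned long end)
{
	struct block *b, *prev, *n;

	/* Find the first block that ends at or after our start. */
	prev = NULL;
	for (b = head; b != NULL && b->end < start; prev = b, b = b->next)
		;

	if (b != NULL && b->start <= end) {
		/* Overlaps or abuts b: extend it in place. */
		if (start < b->start)
			b->start = start;
		if (end > b->end)
			b->end = end;
		/* Closing a hole: swallow any following blocks we now reach. */
		while ((n = b->next) != NULL && n->start <= b->end) {
			if (n->end > b->end)
				b->end = n->end;
			b->next = n->next;
			free(n);
		}
		return;
	}

	/* No overlap: link a new block in between prev and b. */
	n = malloc(sizeof(*n));
	n->start = start;
	n->end = end;
	n->next = b;
	if (prev == NULL)
		head = n;
	else
		prev->next = n;
}

int
main(void)
{
	struct block *b;

	reass_insert(0, 100);	/* in-order data */
	reass_insert(200, 300);	/* creates a hole from 100 to 200 */
	reass_insert(100, 200);	/* closes the hole, merges both blocks */

	for (b = head; b != NULL; b = b->next)
		printf("[%lu, %lu)\n", b->start, b->end);
	return (0);
}

With only one block on the list (the normal case without loss) the search
loop terminates immediately, which is where the O(1) behaviour comes from;
with several holes the walk is proportional to the number of holes.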

There are some problems with the new code that I will fix before committing
it to the tree.  One is that it can't handle non-writable mbufs yet, and
the other is too little leading space in the mbuf (seen only on the
loopback interface, but there we don't have packet loss anyway).  Once
these two are dealt with it is ready to go in.

Nothing is perfect, and this code is only a first significant step over
what we currently have in the tree, especially for transfers over lossy
(wireless) and high-speed links with and without packet reordering.
I already have the next steps in the works, which will optimize further
(worst case O(windowsize/mclusters) instead of O(n)) and simplify the
code a bit more.

The patch can be found here:

  http://www.nrg4u.com/freebsd/tcp_reass-20041210.patch

Please test and report good and bad news back.

-- 
Andre
Received on Fri Dec 10 2004 - 19:01:21 UTC
