iflib.tx_abdicate: very strange behavior on incoming IPsec traffic (regression?)

From: Lev Serebryakov <lev_at_FreeBSD.org>
Date: Fri, 7 Dec 2018 16:40:19 +0300
 (I'm not sure that this is exactly a "bug" or a "defect", so I want to
discuss it here first.)

 I've found very strange behavior on a 13-CURRENT system with I210 (igb)
interfaces and "dev.igb.X.iflib.tx_abdicate" enabled.
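
 For reference, enabling it is just a sysctl toggle (the unit numbers
here match my NICs, of course):

  # per-port iflib knob, off (0) by default
  sysctl dev.igb.0.iflib.tx_abdicate=1
  sysctl dev.igb.1.iflib.tx_abdicate=1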

 I'm measuring "router" performance with BSDRP's "equilibrium" script
(thank you, Olivier, for this great tool!). It generates traffic to route
with pkt-gen and tries to find the maximum packet rate / bandwidth with a
binary search.
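
 Under the hood the script drives netmap's pkt-gen on the load generator,
roughly like this (the rate, addresses and MAC below are placeholders,
not my exact setup):

  # send fixed-size UDP frames at a given pps rate towards the router;
  # "equilibrium" bisects on the -R value between runs
  pkt-gen -i igb0 -f tx -l 60 -R 744000 \
      -s 198.18.0.2:2000 -d 198.19.0.1:2000 \
      -D 00:1b:21:00:00:01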

 I'm testing simple UDP traffic over a physical connection, without any
GIF/GRE or other pseudo-interfaces.

 The router passes UDP traffic from igb1 to igb0, and this traffic is for
ONLY ONE IP:PORT pair, as I'm imitating an edge router for a small
network where only one host receives huge amounts of traffic (e.g. a
torrent box).

 When I enable "dev.igb.X.iflib.tx_abdicate" on both the igb1 (inbound)
and igb0 (outbound) interfaces, the packet rate gets a little better. So
far so good.

 Now I'm throwing IPsec into the mix. All incoming traffic is tunneled
under an IPsec policy with aes-128-gcm encryption, and with IPsec,
tx_abdicate makes things much worse and much more unstable.
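
 The policy itself is nothing special; a minimal sketch of the kind of
setkey(8) configuration I mean, with made-up addresses and keys, would
be:

  # manual SAs, AES-128-GCM (16-byte key + 4-byte salt = 20 bytes)
  add 192.0.2.1 192.0.2.2 esp 0x1000 -E aes-gcm-16
      0x000102030405060708090a0b0c0d0e0f10111213;
  add 192.0.2.2 192.0.2.1 esp 0x1001 -E aes-gcm-16
      0x101112131415161718191a1b1c1d1e1f20212223;

  # tunnel all traffic from the generator towards the sink host
  spdadd 198.18.0.0/24 198.19.0.1/32 any -P in ipsec
      esp/tunnel/192.0.2.2-192.0.2.1/require;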

 These are the results without tx_abdicate:

480Mbit/s, 182Kpps

 And these are the results with tx_abdicate:

352Mbit/s, 85Kpps

 And what is worse, the "equilibrium" script starts to see an unstable
packet rate. Without tx_abdicate, or without IPsec, the search for the
"maximum" packet rate is very stable: each measurement in the binary
search looks like the previous one, there are no big jumps, the final
"equilibrium" rate is very close to the "maximum seen", and an
overloaded router shows a rate smaller than the equilibrium one. But
with both tx_abdicate and IPsec it looks like this (please note that the
overloaded router shows a much better rate than the non-overloaded one):

Benchmark tool using equilibrium throughput method
- Benchmark mode: Throughput (pps) for Router
- UDP load = 18B, IPv4 packet size=46B, Ethernet frame size=60B
- Link rate = 1488 Kpps
- Tolerance = 0.01
Iteration 1
  - Offering load = 744 Kpps
  - Step = 372 Kpps
  - Measured forwarding rate = 120 Kpps
  - Forwared rate too low, forcing OLOAD=FWRATE and STEP=FWRATE/2
Iteration 2
  - Offering load = 120 Kpps
  - Step = 60 Kpps
  - Trend = decreasing
  - Measured forwarding rate = 81 Kpps
Iteration 3
  - Offering load = 60 Kpps
  - Step = 60 Kpps
  - Trend = decreasing
  - Measured forwarding rate = 60 Kpps
Iteration 4
  - Offering load = 90 Kpps
  - Step = 30 Kpps
  - Trend = increasing
  - Measured forwarding rate = 84 Kpps
Iteration 5
  - Offering load = 75 Kpps
  - Step = 15 Kpps
  - Trend = decreasing
  - Measured forwarding rate = 75 Kpps
Iteration 6
  - Offering load = 82 Kpps
  - Step = 7 Kpps
  - Trend = increasing
  - Measured forwarding rate = 81 Kpps
Iteration 7
  - Offering load = 85 Kpps
  - Step = 3 Kpps
  - Trend = increasing
  - Measured forwarding rate = 85 Kpps
Iteration 8
  - Offering load = 86 Kpps
  - Step = 1 Kpps
  - Trend = increasing
  - Measured forwarding rate = 86 Kpps
Estimated Equilibrium Ethernet throughput= 86 Kpps (maximum value seen:
120 Kpps)
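
 (For reference on those numbers: 18B UDP payload + 8B UDP header + 20B
IPv4 header = 46B IP packet, plus 14B Ethernet header = 60B frame; and
1488 Kpps is the usual GigE line rate for minimum-size frames.)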


-- 
// Lev Serebryakov

