Re: Packet passing performance study on exotic hardware.

From: Guy Helmer <ghelmer_at_palisadesys.com>
Date: Fri, 08 Oct 2004 10:47:53 -0500
David Gilbert wrote:

>The opportunity presented itself for me to test packet-passing ability
>on some fairly exotic hardware.  The motherboard I really wanted to
>test not only had separate memory busses for each CPU, but also had
>two separate PCI-X busses (one slot each).  To this, I added two
>Intel PRO/1000 gigabit ethernet cards (PCI-X versions).
>
>I had two sets of processors to test: two 246s and two 240s.
>
>The packets in this test are all minimal 64-byte UDP packets.
>
>My first goal was to determine the DDoS stability of FreeBSD 5.3 and
>Linux on this hardware.  I was using amd64 binaries for both FreeBSD
>and Linux.
>
>Right out of the box (with polling), Linux passed 550 kpps (kilopackets
>per second).  Full data rate would be 1.9 mpps.  On Linux, the 240
>processors passed only 450 kpps (which is somewhat expected).
>
>Right out of the box, FreeBSD 5.3 (with polling) passed about 200
>kpps.  Setting net.isr.enable=1 increased that to about 220 kpps
>without polling (although livelock ensued without polling as the
>packet load increased).  With excessive tuning, we got FreeBSD 5.3 to
>pass 270 kpps.  This included polling, nmbclusters, net.isr, and some
>em patches.  I can't see where to get more performance.
>
>To compare, we loaded FreeBSD-5.3 ia32 and achieved almost identical
>performance.
>
>Then, also to compare, we loaded FreeBSD-4.10 ia32, and it promptly
>passed 550 kpps with polling (almost identical to the Linux
>performance).
>
>Some interesting things about 5.3(-BETA4) in this environment:
>
>  - without polling, it definitely livelocks.
>
>  - with polling and excessive packets, it doesn't "receive" the full
>    load of packets.  In netstat -w, they show up as input "errors",
>    although the number of "errors" isn't strictly related to the
>    number of dropped packets.  It's just some large number that
>    generally increases with the number of dropped packets.
>
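
For the input "errors" you describe above, it might help to watch the
per-interface counters while the load is applied, e.g.

    netstat -I em0 -w 1

so you can see whether the error count tracks the dropped packets in
real time.
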
Have you used "sysctl hw.em0.stats=1" and/or "sysctl hw.em1.stats=1" 
before and after running the test to obtain snapshots of the detailed 
error statistics (they're logged by the kernel to /var/log/messages)?  
Perhaps those would be enlightening.
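
For instance, something along these lines (the interface names follow
your em0/em1 setup; the exact dump format depends on the driver version):

    # dump counters before the test
    sysctl hw.em0.stats=1 ; sysctl hw.em1.stats=1
    # ... run the packet test ...
    # dump counters again afterwards and compare the two dumps
    sysctl hw.em0.stats=1 ; sysctl hw.em1.stats=1
    grep em /var/log/messages | tail -n 100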

The em driver fix that went into BETA7 may help significantly (see 
Scott Long's response prior to mine).

If you try BETA7 without polling but with SMP, do you get better results 
if you increase hw.em0.rx_int_delay and hw.em1.rx_int_delay above 0?
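
Something along these lines, for example (the values are placeholders
to experiment with, not tested recommendations):

    sysctl hw.em0.rx_int_delay=100
    sysctl hw.em1.rx_int_delay=100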

Have you set sysctls kern.random.sys.harvest.ethernet=0 and 
kern.random.sys.harvest.interrupt=0?
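
That is:

    sysctl kern.random.sys.harvest.ethernet=0
    sysctl kern.random.sys.harvest.interrupt=0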

I don't know if it will have any effect in your situation, but have you 
increased net.inet.ip.intr_queue_maxlen?
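
For example (the value here is only an illustrative increase, not a
tuned figure):

    sysctl net.inet.ip.intr_queue_maxlen=1024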

Hope this helps,
Guy

-- 
Guy Helmer, Ph.D., Principal System Architect, Palisade Systems, Inc.
ghelmer_at_palisadesys.com
http://www.palisadesys.com/~ghelmer