Re: Packet passing performance study on exotic hardware.

From: Scott Long <scottl_at_FreeBSD.org>
Date: Fri, 08 Oct 2004 09:18:56 -0600
David Gilbert wrote:
> The opportunity presented itself for me to test packet-passing ability
> on some fairly exotic hardware.  The motherboard I really wanted to
> test not only had separate memory buses for each CPU, but also had
> two separate PCI-X buses (one slot each).  To this, I added two
> Intel PRO/1000 gigabit Ethernet cards (PCI-X versions).
> 
> I had two sets of processors to test: two Opteron 246s and two 240s.
> 
> The packets in this test are all minimal 64-byte UDP packets.
> 
> My first goal was to determine the DDoS stability of FreeBSD 5.3 and
> Linux on this hardware.  I was using amd64 binaries for both FreeBSD
> and Linux.
> 
> Right out of the box (with polling), Linux passed 550 kpps
> (kilopackets per second); full data rate would be 1.9 Mpps.  On
> Linux, the 240 processors passed only 450 kpps (which is somewhat
> expected).
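> 
> (For scale, the standard line-rate arithmetic: a minimum-size frame
> occupies 64 bytes plus the 8-byte preamble and 12-byte inter-frame
> gap, i.e. 84 bytes or 672 bits on the wire, so a single gigabit port
> tops out at 10^9 / 672 ~= 1.488 Mpps in one direction.)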
> 
> Right out of the box, FreeBSD 5.3 (with polling) passed about 200
> kpps.  Setting net.isr.enable=1 (without polling) increased that to
> about 220 kpps, although livelock still ensued as the packet load
> increased.  With extensive tuning, we got FreeBSD 5.3 to pass 270
> kpps.  This included polling, nmbclusters, net.isr, and some em
> patches.  I can't see where to get more performance.
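> 
> For reference, the knobs involved are roughly as follows (values are
> illustrative, not necessarily the exact ones we settled on):
> 
>     # kernel config: polling on 5.3 wants these options
>     options         DEVICE_POLLING
>     options         HZ=1000
> 
>     # /boot/loader.conf: raise the mbuf cluster limit
>     kern.ipc.nmbclusters="65536"
> 
>     # runtime sysctls
>     sysctl kern.polling.enable=1
>     sysctl net.isr.enable=1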
> 
> To compare, we loaded the i386 build of FreeBSD 5.3 and achieved
> almost identical performance.
> 
> Then, also to compare, we loaded FreeBSD 4.10 (i386), and it promptly
> passed 550 kpps with polling, almost identical to the Linux
> performance.
> 
> Some interesting things about 5.3(-BETA4) in this environment:
> 
>   - without polling, it definitely livelocks.
> 
>   - with polling and an excessive packet load, it doesn't "receive"
>     the full load of packets.  In netstat -w output, they show up as
>     input "errors", although the number of "errors" isn't strictly
>     equal to the number of dropped packets; it's just some large
>     number that generally grows with the number of dropped packets
>     (see the sample netstat output after this list).
> 
>   - With net.isr enabled and no polling, both CPUs are used (220 kpps).
> 
>   - With net.isr and polling, only one CPU is used (270 kpps, leaving
>     one CPU free for other tasks).
> 
>   - It's worth noting that only FreeBSD 5.3 ever used two CPUs to
>     pass packets.  Neither Linux nor 4.10 used the other CPU.
> 
>   - The hz and polling tuning options didn't change the packet rate
>     significantly.
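> 
>   For concreteness, the netstat output mentioned above looks roughly
>   like this (the numbers are made up for illustration; the input
>   "errs" column is the one that climbs under overload):
> 
>       # netstat -w 1 -I em0
>                   input          (em0)           output
>          packets  errs      bytes    packets  errs      bytes colls
>           270312 83214   17299968     270112     0   17287168     0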
> 
> During the next week, I will continue testing with full simulated
> routing tables, random packets, and packets between 350 and 550 bytes
> (average ISP out/in packet sizes).  I will add to this report then.
> If anyone has tuning advice for FreeBSD 5.3, I'd like to hear it.
> 
> Dave.
> 

Interesting results.  One thing to note is that a severe bug in the 
if_em driver was fixed for BETA7.  The symptoms of this bug include
apparent livelock of the machine during heavy xmit load.  You might
want to update and re-run your tests.
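
If you want the quick route to BETA7: roughly, sync /usr/src against
the RELENG_5 branch with cvsup (the stock example supfile lives in
/usr/share/examples/cvsup/standard-supfile) and rebuild; the kernel
config name below is a placeholder:

    cvsup -g -L 2 /root/standard-supfile   # supfile edited to tag=RELENG_5
    cd /usr/src
    make buildworld && make buildkernel KERNCONF=YOURKERNEL
    make installkernel KERNCONF=YOURKERNEL && make installworld
    reboot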

Scott