Re: dev.bce.X.com_no_buffers increasing and packet loss

From: Pyun YongHyeon <pyunyh_at_gmail.com>
Date: Fri, 5 Mar 2010 13:55:39 -0800
On Fri, Mar 05, 2010 at 11:16:41PM +0200, Ian FREISLICH wrote:
> Pyun YongHyeon wrote:
> > Thanks for the info. Frankly, I have no idea how to explain the
> > issue given that you have no heavy load.
> 
> How many cores would be involved in handling the traffic and running
> PF rules on this machine?  There are four of these in the server:
> CPU: Quad-Core AMD Opteron(tm) Processor 8354 (2194.51-MHz K8-class CPU)
> I'm also using carp extensively.
> 

pf(4) uses a single lock for processing, so the number of cores would
not provide much benefit.
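One quick way to see this on the box (a sketch; exact kernel thread
names vary by FreeBSD release) is to check whether forwarding load
piles up on a single kernel thread rather than spreading across the
four packages:

```shell
# Per-thread CPU usage: with a single pf lock, packet processing tends
# to serialize on one interrupt/netisr thread under load.
top -aSH

# Interrupt distribution across the bce(4) ports.
vmstat -i | grep bce
```

Both commands are interactive/diagnostic only and need the live
hardware, so treat the output as a rough indicator.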

> > I have a bce(4) patch which fixes a couple of bus_dma(9) issues as
> > well as fixing some minor bugs. However, I don't know whether the
> > patch fixes the RX issue you're suffering from. Anyway, would you
> > try the patch at the following URL?
> > http://people.freebsd.org/~yongari/bce/bce.20100305.diff
> > The patch was generated against CURRENT and you may see a message
> > like "Disabling COAL_NOW timedout!" during interface up. You can
> > ignore that message.
> 
> Thanks.  I'll give the patch a go on Monday when there are people
> nearby in case something goes wrong during the boot.  I don't want to
> lose the redundancy over the weekend.
> 

From my testing on a quad-port BCM5709 controller, it was stable, but
I agree that your plan is better.
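For reference, a sketch of applying the diff as a module rebuild
(paths assume a stock /usr/src checkout of CURRENT; a full kernel
rebuild works just as well):

```shell
# Fetch the patch and apply it against a CURRENT source tree.
cd /usr/src
fetch http://people.freebsd.org/~yongari/bce/bce.20100305.diff
patch < bce.20100305.diff    # adjust the -p level to match the diff's paths

# Rebuild just the bce(4) module; load it at the next boot rather than
# kldunload'ing the driver out from under a live firewall.
cd sys/modules/bce
make clean all install
```

Afterwards the counter from the subject line can be watched directly,
e.g. `sysctl dev.bce.0.com_no_buffers` (substitute the unit number for
each port).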

> Otherwise, is there another interface chip we can try?  It's got

I guess bce(4) and igb(4) are among the best controllers.

> an igb(4) quad port in there as well, but the performance is worse
> on that chip than the bce(4) interface.  It's also riddled with

Yeah, I also noticed that. bce(4) seems to give better performance
numbers than igb(4).

> vlan and other hardware offload bugs.  I had good success in the
> past with em(4), but it looks like igb is the PCI-e version.
> 

It may depend on the specific workload. The last time I tried igb(4),
the driver had a couple of bugs; after patching them, igb(4) also
seemed to work well, even though the performance was slightly slower
than I initially expected. One thing I saw was that using LRO on
igb(4) gave slightly worse performance. Another thing in igb(4)'s
case is that it has begun to support multiple TX queues as well as
RSS. In theory the current multi-TX-queue implementation can reorder
packets, which can have negative effects.
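If anyone wants to compare, LRO can be toggled at run time (a sketch;
igb0 stands in for the actual interface name, and on a box that
forwards packets LRO generally needs to be off anyway):

```shell
# Show which offload capabilities are currently enabled on the port.
ifconfig igb0 | grep -i options

# Disable LRO for the comparison run; re-enable later with
# `ifconfig igb0 lro`.
ifconfig igb0 -lro
```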

bce(4) still lacks multi-TX-queue and RSS support. The controllers
themselves do support MSI-X as well as RSS, so I plan to implement
that in the future, but it's hard to tell when I'll find the time.
Received on Fri Mar 05 2010 - 20:56:04 UTC