Re: dev.bce.X.com_no_buffers increasing and packet loss

From: Ian FREISLICH <ianf_at_clue.co.za>
Date: Tue, 09 Mar 2010 15:31:55 +0200
Pyun YongHyeon wrote:
> On Mon, Mar 08, 2010 at 04:45:20PM +0200, Ian FREISLICH wrote:
> > Pyun YongHyeon wrote:
> > > On Fri, Mar 05, 2010 at 11:16:41PM +0200, Ian FREISLICH wrote:
> > > > Pyun YongHyeon wrote:
> > > > > Thanks for the info. Frankly, I have no idea how to explain the
> > > > > issue given that you have no heavy load.
> > > > 
> > > > How many cores would be involved in handling the traffic and running
> > > > PF rules on this machine?  There are four of these in this server:
> > > > CPU: Quad-Core AMD Opteron(tm) Processor 8354 (2194.51-MHz K8-class CPU)
> > > > I'm also using carp extensively.
> > > > 
> > > 
> > > pf(4) uses a single lock for processing, so additional cores
> > > would not help much.
> > 
> > What's interesting is the effect on CPU utilisation and interrupt
> > generation that net.inet.ip.fastforwarding has:
> > 
> > net.inet.ip.fastforwarding=1
> > interrupt rate is around 10000/s per bce interface
> > cpu 8.0% interrupt
> > 
> 
> Yes, this is one of the intentional changes in the patch. Stock
> bce(4) seems to generate too many interrupts on BCM5709, so I
> rewrote the interrupt handling with David's help. sysctl nodes are
> also exported to control interrupt moderation, so you can change
> them if you want. The default values were tuned to keep the
> interrupt rate below 10k per second while trying to minimize latency.
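
For reference (this is just how I'd sample it, not necessarily how
the numbers quoted above were taken): systat -vmstat shows a live
per-second interrupt count for each bce IRQ, and

    vmstat -i | grep bce

prints the cumulative count plus an average rate since boot.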

Can you explain the tunables, please? I'm guessing it's these (rough
usage sketch after the list):

dev.bce.$i.tx_quick_cons_trip_int
dev.bce.$i.tx_quick_cons_trip
dev.bce.$i.tx_ticks_int
dev.bce.$i.tx_ticks
dev.bce.$i.rx_quick_cons_trip_int
dev.bce.$i.rx_quick_cons_trip
dev.bce.$i.rx_ticks_int
dev.bce.$i.rx_ticks
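
For example (values invented for illustration, and I'm guessing at
the semantics, which is exactly why I'm asking), I'd expect something
like this to trade a little latency for fewer RX interrupts:

    # read the current coalescing settings on bce0
    sysctl dev.bce.0.rx_ticks dev.bce.0.rx_quick_cons_trip

    # presumably: hold off the RX interrupt until ~18 ticks pass or
    # ~80 completions accumulate, whichever comes first
    sysctl dev.bce.0.rx_ticks=18
    sysctl dev.bce.0.rx_quick_cons_trip=80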


--
Ian Freislich