Re: problems with em(4) since update to driver 7.2.2

From: Arnaud Lacombe <lacombar_at_gmail.com>
Date: Thu, 5 May 2011 10:40:57 -0400
Hi,

On Wed, May 4, 2011 at 3:00 AM, Alastair Hogge <agh_at_fastmail.fm> wrote:
> [.]
> I also tried 2x, & 4x 25600 for max mbuf clusters via kern.ipc.nmbclusters.
> This didn't help.
>
For the record, I did the math yesterday and checked the code. By
default, a machine with 6 82574L-backed em(4) interfaces, of which only
3 are used (i.e. brought up), initializes and works just fine with as
few as 3076 mbuf clusters (just above the 1024*3 + 2 = 3074 floor). It
has been transferring about 28k pps, or 20Mbps, of traffic (an ICMP
ping flood) for the last 10 hours.
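
For anyone who wants to redo the arithmetic, here is a minimal shell
sketch. It assumes, as the numbers above suggest, one mbuf cluster per
RX descriptor and the default em(4) RX ring of 1024 descriptors per
interface; I have not verified that constant for every chip revision:

  # 3 active interfaces * 1024 RX descriptors each, plus 2 spare
  $ echo $((1024 * 3 + 2))
  3074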
Here is the `netstat -m' output:

# netstat -m
2879/916/3795 mbufs in use (current/cache/total)
2877/199/3076/3076 mbuf clusters in use (current/cache/total/max)
2877/199 mbuf+clusters out of packet secondary zone in use (current/cache)
0/2/2/1537 4k (page size) jumbo clusters in use (current/cache/total/max)
0/0/0/768 9k jumbo clusters in use (current/cache/total/max)
0/0/0/384 16k jumbo clusters in use (current/cache/total/max)
6473K/635K/7108K bytes allocated to network (current/cache/total)
0/540580029/268859859 requests for mbufs denied (mbufs/clusters/mbuf+clusters)
0/0/0 requests for jumbo clusters denied (4k/9k/16k)
0/5/6656 sfbufs in use (current/peak/max)
0 requests for sfbufs denied
0 requests for sfbufs delayed
0 requests for I/O initiated by sendfile
0 calls to protocol drain routines

and, yes, the allocation denial counters have skyrocketed, but besides
that the driver is stable. In this case, the initialization failure
reported in this thread did not occur when the system booted.

The complete machine (all 6 interfaces up) should be able to initialize
properly with 6146 clusters (1024*6 + 2).
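
A rough recipe for applying such a figure by hand; the tunable name is
real, the value is simply the estimate above:

  # persistent, in /boot/loader.conf, effective at next boot:
  kern.ipc.nmbclusters="6146"

  # or at runtime (the limit can usually be raised live via sysctl):
  $ sysctl kern.ipc.nmbclusters=6146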

 - Arnaud