If you get "cannot setup receive structures" you cannot go on and try to use the thing :) It means you have inadequate mbuf clusters to setup your receive side, you simply have to increase it and reload the driver. Jack On Wed, Apr 27, 2011 at 5:39 AM, Olivier Smedts <olivier_at_gid0.org> wrote: > 2011/3/31 Jack Vogel <jfvogel_at_gmail.com>: > > This problem happens for only one reason, you have insufficient mbufs to > > fill your rx ring. Its odd that it would differ when its static versus a > > loadable > > module though! > > > > With the 7.2.2 driver you also will use different mbuf pools depending on > > the MTU you are using. If you use jumbo frames it will use 4K clusters, > > if you go to 9K jumbos it will use 9K mbuf clusters. The number of these > > allocated by default is small (like 6400 small :). > > > > I would use 'netstat -m' to see what the pools look like. Now that I > think > > about it, the reason it might fail as loaded while not as built in is you > > get > > allocation of the mbufs first when static, and something else is taking > them > > before you can load when loadable?? > > Sorry to be quite late on this, > > Here is what gives me netstat -m with my new 9-CURRENT kernel but with > old (working, after some time of computer use) if_em.ko : > 1027/3458/4485 mbufs in use (current/cache/total) > 1024/2066/3090/25600 mbuf clusters in use (current/cache/total/max) > 1024/1792 mbuf+clusters out of packet secondary zone in use (current/cache) > 0/367/367/12800 4k (page size) jumbo clusters in use > (current/cache/total/max) > 0/0/0/6400 9k jumbo clusters in use (current/cache/total/max) > 0/0/0/3200 16k jumbo clusters in use (current/cache/total/max) > 2304K/6464K/8769K bytes allocated to network (current/cache/total) > 0/0/0 requests for mbufs denied (mbufs/clusters/mbuf+clusters) > 0/0/0 requests for jumbo clusters denied (4k/9k/16k) > 0/0/0 sfbufs in use (current/peak/max) > 0 requests for sfbufs denied > 0 requests for sfbufs delayed > 0 requests for I/O initiated by sendfile > 0 calls to protocol drain routines > > And here is the output with the new (non-working) if_em.ko : > 1029/3456/4485 mbufs in use (current/cache/total) > 1023/2067/3090/25600 mbuf clusters in use (current/cache/total/max) > 1023/1793 mbuf+clusters out of packet secondary zone in use (current/cache) > 0/367/367/12800 4k (page size) jumbo clusters in use > (current/cache/total/max) > 0/0/0/6400 9k jumbo clusters in use (current/cache/total/max) > 0/0/0/3200 16k jumbo clusters in use (current/cache/total/max) > 2303K/6466K/8769K bytes allocated to network (current/cache/total) > 0/0/0 requests for mbufs denied (mbufs/clusters/mbuf+clusters) > 0/0/0 requests for jumbo clusters denied (4k/9k/16k) > 0/0/0 sfbufs in use (current/peak/max) > 0 requests for sfbufs denied > 0 requests for sfbufs delayed > 0 requests for I/O initiated by sendfile > 0 calls to protocol drain routines > > I've got the "em0: Could not setup receive structures" messages with > the new if_em.ko even in single user mode. No network connectivity. I > tried removing all other network-related modules (vboxnet, ipfw...) > and still have this problem (again, even when booting in single-user > mode). > My network card is "em0_at_pci0:0:25:0: class=0x020000 > card=0x304b103c chip=0x10ef8086 rev=0x05 hdr=0x00". I'm using a > stripped-down GENERIC amd64 kernel (no network, no scsi, no raid...), > a nearly empty sysctl.conf and loader.conf (except module loading). 
> I saw at the time of the commit that an MFC to 8-STABLE was planned,
> but I don't think it should happen so soon. Given that my network
> adapter was working well before the em driver update, can't this be
> considered a serious regression?
>
> Thanks,
> Olivier
>
> --
> Olivier Smedts _
> ASCII ribbon campaign ( )
> e-mail: olivier_at_gid0.org - against HTML email & vCards X
> www: http://www.gid0.org - against proprietary attachments / \
>
> "There are only 10 kinds of people in the world:
> those who understand binary,
> and those who don't."
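For anyone landing here with the same message, a minimal sketch of how to
check the pools Jack refers to. These are the stock FreeBSD mbuf tunables,
not something taken from this thread, and the names should be verified
against your own kernel version:

    # Per-pool limits that bound what a driver can allocate for its rx
    # ring; which pool em(4) draws from depends on the configured MTU.
    sysctl kern.ipc.nmbclusters     # 2k clusters (standard 1500 MTU)
    sysctl kern.ipc.nmbjumbop       # page-size (4k) jumbo clusters
    sysctl kern.ipc.nmbjumbo9       # 9k jumbo clusters
    sysctl kern.ipc.nmbjumbo16     # 16k jumbo clusters

    # Compare the limits against live usage while reproducing the failure.
    netstat -m

Note that both of Olivier's dumps show the pools far below their limits
(1024 of 25600 2k clusters in use, 0 of 6400 9k clusters), though the
snapshots were taken after the setup attempt had already failed, so a
transient allocation spike would not show here.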
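And a sketch of the remedy Jack describes, raising the limit and reloading
the driver. The values below are illustrative only; on older kernels the
jumbo pool limits may be boot-time tunables rather than run-time sysctls:

    # Raise the pool the driver is exhausting, e.g. the 9k jumbo pool...
    sysctl kern.ipc.nmbjumbo9=16384
    # ...or persistently, by setting the tunable in /boot/loader.conf:
    #   kern.ipc.nmbjumbo9="16384"
    # (for a standard MTU the knob would be kern.ipc.nmbclusters instead)

    # Reload the driver so it can set up its receive ring again.
    kldunload if_em && kldload if_em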