IFLIB: em0/igb0 broken: No buffer space available/TX(0) desc avail = 1024, pidx = 0

From: O. Hartmann <ohartmann_at_walstatt.org>
Date: Tue, 16 May 2017 06:56:23 +0200

Since the introduction of IFLIB, I have had big trouble with one type of NIC in
particular, namely the devices formerly known as igb and em.

The worst device is an Intel NIC known as the i217-LM:

em0_at_pci0:0:25:0:        class=0x020000 card=0x11ed1734 chip=0x153a8086 rev=0x05 hdr=0x00
    vendor     = 'Intel Corporation'
    device     = 'Ethernet Connection I217-LM'
    class      = network
    subclass   = ethernet
    bar   [10] = type Memory, range 32, base 0xfb300000, size 131072, enabled
    bar   [14] = type Memory, range 32, base 0xfb339000, size 4096, enabled
    bar   [18] = type I/O Port, range 32, base 0xf020, size 32, enabled

This NIC is widely used in Fujitsu CELSIUS M740 workstations, and as fate would
have it, I have to use one of these.

When syncing data over the network from the workstation to an older C2D/bce(4)-based
server via NFSv4, the connection to the NFS server gets stuck (ever since the
introduction of IFLIB) and I receive console messages like

em0: TX(0) desc avail = 1024, pidx = 0
em0: TX(0) desc avail = 42, pidx = 985
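
For reference, the transfer in question is roughly the following (the mount
point and the local source path below are only placeholders, not my exact
invocation):

mount -t nfs -o nfsv4 192.168.0.31:/pool/packages /mnt
rsync -a /path/to/local/data/ /mnt/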

Hitting "Ctrl-T" on the terminal doing the sync via "rsync", I see then this
message:

load: 0.01  cmd: rsync 68868 [nfsaio] 395.68r 4.68u 4.64s 0% 3288k (just for
the record)

Server and client(s) are on 12-CURRENT (FreeBSD 12.0-CURRENT #38 r318285: Mon
May 15 12:27:29 CEST 2017, amd64), with customised kernels and "netmap" enabled
(just for the record, if that matters).

In the past, I was able to revive the connection by simply putting the NIC down
and then up again; while I had a ping running as an indicator of the state of
the NIC, I very often got

ping: sendto: No buffer space available
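
For the record, the revive sequence is essentially the following sketch (em0
and the server address as above):

ping 192.168.0.31 &     # keep a ping running as an indicator of the NIC's state
ifconfig em0 down
ifconfig em0 up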

Well, today I checked the dmesg output to gather those messages again and
realised that dmesg is garbled:

[...]
nfs nfs servnnfs servefs r server19 2.19162n.fs snerver fs1 s9nfs s2er.nfs
server er192.168.0.31:/pool/packages: not responding v
er 192.168.0.31ver :/po1ol/packages9: 2.168.0.31:/pool/packagesn: noot
responding t
<6>n fs serverespondinngf
s
 server 192.168.1rn nfs server 192.168.0.31:/pool/packages: not1 responding
 9
 2.168.1f7s 0.31:/pool/packagenfs sesrver 19serv2er .168.0.31:/poo: not
respolnding /
 packages: not responding
 nfs server 19192.168.0.31:/pool/pa2c.k168.0.31:a/gpserver
ne1s92.168.0.31:/pool/pac: knot respaof1s68 gs.e17rve8r.2
3192.168.0.31:/pool/packa1:/pool/packages: not responding o goes: nl/packages:
not responding o
 t responding
 nfs server 192.168.0.31:/poes: ol/packages: nfns server
192.168.0.31:/pool/paot responding c
 kages: not respondinnfs server n192.1f68.0.31:/pool/packagess: ndi server
 192.168.0.31:/pool/packages: not responding
[...]

Earlier this year, after the introduction of IFLIB, I checked out servers
equipped with Intel's very popular i350-T2 V2 NIC and had similar problems when
dd'ing large files over NFSv4 (ZFS backed) from a client (em0, an older
client/consumer grade NIC from 2010, I forgot its exact ID) towards a server
with the i350; there, the server side got stuck with messages similar to those
reported for the i217-LM above. Since my department uses lots of those server
grade NICs, I will swap the i217 for an i350-T2 and check again.
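
To be precise about what "dd'ing large files" means here, the test was roughly
of this kind (file name and size are only examples, the mount point is a
placeholder for the NFSv4 mount):

dd if=/dev/zero of=/mnt/nfs/testfile bs=1m count=10240   # write ~10 GB over the NFSv4 mount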

Nevertheless, the situation is very uncomfortable! 

Kind regards,
Oliver
Received on Tue May 16 2017 - 02:56:38 UTC
