Re: igb is broken, even across reboots, at r312294

From: Sean Bruno <sbruno_at_freebsd.org>
Date: Mon, 16 Jan 2017 14:19:11 -0700
On 01/16/17 13:52, Alan Somers wrote:
> Today I updated my machine from 311787 to 312294.  After the update,
> my igb ports can pass no traffic.  If I reboot into kernel.old, they
> still can't pass any traffic.  They won't even work in the PXE ROM.  I
> have to power off, pull the power cables, then boot into kernel.old
> before they'll work.  This behavior is repeatable.
> 
> $ pciconf -lv
> ...
> igb0_at_pci0:1:0:0:        class=0x020000 card=0x34dc8086 chip=0x10a78086 rev=0x02 hdr=0x00
>     vendor     = 'Intel Corporation'
>     device     = '82575EB Gigabit Network Connection'
>     class      = network
>     subclass   = ethernet
> igb1_at_pci0:1:0:1:        class=0x020000 card=0x34dc8086 chip=0x10a78086 rev=0x02 hdr=0x00
>     vendor     = 'Intel Corporation'
>     device     = '82575EB Gigabit Network Connection'
>     class      = network
>     subclass   = ethernet
> ...
> 
> $ dmesg # on 311787, it's identical whether or not the igb ports are working
> ...
> igb0: <Intel(R) PRO/1000 Network Connection, Version - 2.5.3-k> port 0x2020-0x203f mem 0xb1b20000-0xb1b3ffff,0xb1b44000-0xb1b47fff irq 40 at device 0.0 on pci1
> igb0: Using MSIX interrupts with 5 vectors
> igb0: Ethernet address: 00:1e:67:25:71:bc
> igb0: Bound queue 0 to cpu 0
> igb0: Bound queue 1 to cpu 1
> igb0: Bound queue 2 to cpu 2
> igb0: Bound queue 3 to cpu 3
> igb0: netmap queues/slots: TX 4/1024, RX 4/1024
> igb1: <Intel(R) PRO/1000 Network Connection, Version - 2.5.3-k> port 0x2000-0x201f mem 0xb1b00000-0xb1b1ffff,0xb1b40000-0xb1b43fff irq 28 at device 0.1 on pci1
> igb1: Using MSIX interrupts with 5 vectors
> igb1: Ethernet address: 00:1e:67:25:71:bd
> igb1: Bound queue 0 to cpu 4
> igb1: Bound queue 1 to cpu 5
> igb1: Bound queue 2 to cpu 6
> igb1: Bound queue 3 to cpu 7
> igb1: netmap queues/slots: TX 4/1024, RX 4/1024
> ...
> 
> $ dmesg # on 312294, when the igb ports are not working
> ...
> igb0: <Intel(R) PRO/1000 PCI-Express Network Driver> port 0x2020-0x203f mem 0xb1b20000-0xb1b3ffff,0xb1b44000-0xb1b47fff irq 40 at device 0.0 on pci1
> igb0: attach_pre capping queues at 4
> igb0: using 1024 tx descriptors and 1024 rx descriptors
> igb0: msix_init qsets capped at 4
> igb0: pxm cpus: 8 queue msgs: 9 admincnt: 1
> igb0: using 4 rx queues 4 tx queues
> igb0: Using MSIX interrupts with 5 vectors
> igb0: allocated for 4 tx_queues
> igb0: allocated for 4 rx_queues
> igb0: Ethernet address: 00:1e:67:25:71:bc
> igb0: netmap queues/slots: TX 4/1024, RX 4/1024
> igb1: <Intel(R) PRO/1000 PCI-Express Network Driver> port 0x2000-0x201f mem 0xb1b00000-0xb1b1ffff,0xb1b40000-0xb1b43fff irq 28 at device 0.1 on pci1
> igb1: attach_pre capping queues at 4
> igb1: using 1024 tx descriptors and 1024 rx descriptors
> igb1: msix_init qsets capped at 4
> igb1: pxm cpus: 8 queue msgs: 9 admincnt: 1
> igb1: using 4 rx queues 4 tx queues
> igb1: Using MSIX interrupts with 5 vectors
> igb1: allocated for 4 tx_queues
> igb1: allocated for 4 rx_queues
> igb1: Ethernet address: 00:1e:67:25:71:bd
> igb1: netmap queues/slots: TX 4/1024, RX 4/1024
> ...
> 
> Any ideas?
> 
> -Alan
> 

Yeah, I'm fighting with EARLY_AP_STARTUP with regard to initialization
of the interfaces.  em(4) seems to be OK with my change today, but that
change makes igb(4) *very* angry.

I'm aware and trying to find a happy medium.
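[For anyone who wants to test the EARLY_AP_STARTUP theory on their own
box, one hypothetical check (not something prescribed in this thread) is
to rebuild the kernel with that option removed and see whether igb(4)
comes up normally.  The config name NOEARLYAP below is invented for
illustration:]

```
# Hypothetical kernel config for testing only; the ident "NOEARLYAP"
# is an invented name, not from this thread.
include GENERIC
ident   NOEARLYAP
nooptions EARLY_AP_STARTUP
```

Build and install it with the usual targets, e.g.
`make buildkernel installkernel KERNCONF=NOEARLYAP`, then reboot.  If
the ports pass traffic again, that points at the startup-ordering
interaction rather than the driver rework itself.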

sean


Received on Mon Jan 16 2017 - 20:19:16 UTC
