Re: Call for e1000phy(4) testers.

From: Pyun YongHyeon <pyunyh_at_gmail.com>
Date: Fri, 1 Dec 2006 21:43:15 +0900
On Fri, Dec 01, 2006 at 03:46:36PM +0800, Tai-hwa Liang wrote:
 > On Tue, 28 Nov 2006, Pyun YongHyeon wrote:
 > >Hi,
 > >
 > >I have been writing msk(4) for FreeBSD and realized that e1000phy(4)
 > >is buggy on newer Marvell PHYs. For example, manual media selection
 > >didn't work at all and I had to stick to autoselection of the media
 > >type. Marvell PHYs are widely used on various NICs, including
 > >em(4), stge(4), sk(4), msk(4) and nfe(4). Except for em(4), which does
 > >not use the MII layer, correct operation of e1000phy(4) is very
 > >important for getting a good link with the link partner and for
 > >reporting link state changes to upper layers (e.g. dhclient(8)).
 > 
 >   Thank you for working on this.
 > 
 > >With this patch you should be able to set a media type without
 > >relying on autoselection, and it should support automatic crossover
 > >for all known Marvell PHYs. I've tried hard not to break existing
 > >behaviour (e.g. fiber transceivers), but I can't verify that, as I
 > >don't have any NICs with Marvell fiber transceivers. The patch
 > >is somewhat ugly in that it has to read a PHY ID register in several
 > >places. It seems there is no easy way to avoid these reads until
 > >we have PHY model/revision numbers in the mii softc.
 > >
 > >If you use stge(4), sk(4), msk(4) or nfe(4), please test and
 > >report any strange behaviour not observed with the stock
 > >version.
 > >
 > >Note for nfe(4) users:
 > >It seems that nfe(4) has a bug where it can't send packets on
 > >half-duplex media (I've got "tx v1 error 0x6004"). I guess this
 > >comes from a mismatch between the PHY and the MAC. So you may have
 > >to force full-duplex on nfe(4) until we have a fix for the issue.
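
[The full-duplex workaround above would be applied with ifconfig(8);
the interface name nfe0 and the 100baseTX media type below are
assumptions, so substitute whatever matches your hardware:]

```shell
# Force full-duplex until the half-duplex transmit bug is fixed.
# "nfe0" and "100baseTX" are example values; adjust to your system.
ifconfig nfe0 media 100baseTX mediaopt full-duplex
```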
 > >
 > >You can get the latest e1000phy(4) driver from the following URLs:
 > >http://people.freebsd.org/~yongari/msk/e1000phy.c
 > >http://people.freebsd.org/~yongari/msk/e1000phyreg.h
 > >http://people.freebsd.org/~yongari/msk/miidevs
 > >
 > >Or get a jumbo patch against CURRENT:
 > >http://people.freebsd.org/~yongari/msk/e1000phy.patch
 > 
 >   I have tried your e1000phy patch as well as msk.diff.HEAD on an Acer
 > Aspire 5583 WXMi laptop:
 > 
 > mskc0_at_pci2:0:0:	class=0x020000 card=0x01101025 chip=0x435211ab 
 > rev=0x14 hdr=0x00
 >     vendor   = 'Marvell Semiconductor (Was: Galileo Technology Ltd)'
 >     class    = network
 >     subclass = ethernet
 > 
 >   It seems that device_attach always returns 6 regardless of whether
 > hw.pci.enable_msi[x] is set to 1 or 0:
 > 

[...]

 > found->	vendor=0x11ab, dev=0x4352, revid=0x14
 > 	bus=2, slot=0, func=0
 > 	class=02-00-00, hdrtype=0x00, mfdev=0
 > 	cmdreg=0x0000, statreg=0x4010, cachelnsz=16 (dwords)
 > 	lattimer=0x00 (0 ns), mingnt=0x00 (0 ns), maxlat=0x00 (0 ns)
 > 	intpin=a, irq=10
 > 	powerspec 2  supports D0 D1 D2 D3  current D0
 > 	VPD Ident: Marvell Yukon 88E8038 Fast Ethernet Controller
 > 	PN: Yukon 88E8038
 > 	EC: Rev. 1.4
 > 	MN: Marvell
 > 	SN: AbCdEfG85BCA0
 > 	CP: id 1, BAR16, off 0x3cc
 > 	RV: 0x7d
 > 	MSI supports 2 messages, 64 bit
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Since Scott said this is a resource allocation problem, I'd like to
point out what I don't understand in this message.

The motherboard I have also reports 2 MSI messages, but I'm pretty
sure the Yukon II hardware supports only 1. If I force pci_alloc_msi()
to be called with 1 message, it works without problems. At first I
thought my motherboard had a chipset bug, but I see the same 2 MSI
message from your system.

Does the Yukon II really support 2 MSI messages?

 > pci2:0:0: reprobing on driver added
 > mskc0: <Marvell Yukon 88E8038 Gigabit Ethernet> irq 10 at device 0.0 on pci2
 > mskc0: MSI count : 2
 > pcib2: mskc0 requested unsupported memory range 0-0xffffffff (decoding 0-0, 0-0)
 > mskc0: 0x4000 bytes of rid 0x10 res 3 failed (0, 0xffffffff).
 > mskc0: Lazy allocation of 0x4 bytes rid 0x14 type 4 at 0x1000
 > mskc0: unknown device: id=0xff, rev=0x0f
 > device_attach: mskc0 attach returned 6

-- 
Regards,
Pyun YongHyeon
Received on Fri Dec 01 2006 - 11:40:10 UTC
