Re: Call for Marvell/SysKonnect Yukon II Gigabit Ethernet testers.

From: Pyun YongHyeon <pyunyh@gmail.com>
Date: Tue, 12 Dec 2006 11:00:23 +0900
On Mon, Dec 11, 2006 at 11:56:01PM +0000, Bruce M. Simpson wrote:
 > Hi,
 > 
 > I successfully tested this driver under 7-CURRENT as of today on an ASUS 
 > Vintage AH-1 based system.
 > 
 > lspci has the following to say about it:
 > 
 > 02:00.0 Ethernet controller: Marvell Technology Group Ltd. 88E8053 PCI-E 
 > Gigabit Ethernet Controller (rev 19)
 >        Subsystem: ASUSTeK Computer Inc. Marvell 88E8053 Gigabit 
 > Ethernet controller PCIe (Asus)
 >        Flags: bus master, fast devsel, latency 0, IRQ 18
 >        Memory at fe4fc000 (64-bit, non-prefetchable)
 >        I/O ports at c800
 >        Expansion ROM at fe4c0000 [disabled]
 >        Capabilities: [48] Power Management version 2
 >        Capabilities: [50] Vital Product Data
 >        Capabilities: [5c] Message Signalled Interrupts: 64bit+ 
 > Queue=0/1 Enable-
 >        Capabilities: [e0] Express Legacy Endpoint IRQ 0
 > 
 > A few sample netperf runs between this system (AMD Athlon64 3000+) and 
 > an Intel Dothan 1.8GHz-based Lenovo T43 with bge(4), interconnected via a 
 > 3Com 5-port gigabit workgroup switch, reveal the following.
 > 
 > anglepoise# netperf -t UDP_STREAM -H 192.168.123.18
 > UDP UNIDIRECTIONAL SEND TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 
 > 192.168.123.18 (192.168.123.18) port 0 AF_INET
 > Socket  Message  Elapsed      Messages
 > Size    Size     Time         Okay Errors   Throughput
 > bytes   bytes    secs            #      #   10^6bits/sec
 > 
 >  9216    9216   10.00      129443 1135712     954.30
 > 42080           10.00      128739            949.11
 > 
 > 
 > anglepoise# netperf -t UDP_RR -H 192.168.123.18
 > UDP REQUEST/RESPONSE TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 
 > 192.168.123.18 (192.168.123.18) port 0 AF_INET
 > Local /Remote
 > Socket Size   Request  Resp.   Elapsed  Trans.
 > Send   Recv   Size     Size    Time     Rate
 > bytes  Bytes  bytes    bytes   secs.    per sec
 > 
 > 9216   42080  1        1       10.00    2804.30
 > 9216   42080
 > 
 > anglepoise# netperf -t TCP_STREAM -H 192.168.123.18
 > TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 192.168.123.18 
 > (192.168.123.18) port 0 AF_INET
 > Recv   Send    Send
 > Socket Socket  Message  Elapsed
 > Size   Size    Size     Time     Throughput
 > bytes  bytes   bytes    secs.    10^6bits/sec
 > 
 > 65536  32768  32768    10.00     642.80
 > 
 > anglepoise# netperf -t TCP_RR -H 192.168.123.18
 > TCP REQUEST/RESPONSE TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 
 > 192.168.123.18 (192.168.123.18) port 0 AF_INET
 > Local /Remote
 > Socket Size   Request  Resp.   Elapsed  Trans.
 > Send   Recv   Size     Size    Time     Rate
 > bytes  Bytes  bytes    bytes   secs.    per sec
 > 
 > 32768  65536  1        1       10.00    2790.69
 > 32768  65536
 > 
 > 
 > anglepoise# netperf -t TCP_CRR -H 192.168.123.18
 > TCP Connect/Request/Response TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET 
 > to 192.168.123.18 (192.168.123.18) port 0 AF_INET
 > Local /Remote
 > Socket Size   Request  Resp.   Elapsed  Trans.
 > Send   Recv   Size     Size    Time     Rate
 > bytes  Bytes  bytes    bytes   secs.    per sec
 > 
 > 32768  65536  1        1       10.00    1350.91
 > 32768  65536
 > 
 > anglepoise# netperf -t TCP_MAERTS -H 192.168.123.18
 > TCP MAERTS TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 192.168.123.18 
 > (192.168.123.18) port 0 AF_INET
 > Recv   Send    Send
 > Socket Socket  Message  Elapsed
 > Size   Size    Size     Time     Throughput
 > bytes  bytes   bytes    secs.    10^6bits/sec
 > 
 > 65536  32768  32768    10.00     407.51
 > 
 > And a statistically tighter test:
 > 
 > anglepoise# netperf -t TCP_STREAM -I 99 -i 30,10 -H 192.168.123.18
 > TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 192.168.123.18 
 > (192.168.123.18) port 0 AF_INET : +/-49.5% @ 99% conf.
 > Recv   Send    Send
 > Socket Socket  Message  Elapsed
 > Size   Size    Size     Time     Throughput
 > bytes  bytes   bytes    secs.    10^6bits/sec
 > 
 > 65536  32768  32768    10.00     644.98
 > 
 > Another test, this time between the same box and a dual PIII 933MHz 
 > running 6.1-RELEASE with an em(4) card and a 66MHz, 64-bit PCI data path 
 > on the same workgroup switch:
 > 
 > anglepoise# netperf -t TCP_STREAM -I 99 -i 30,10 -H 192.168.123.6
 > TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 192.168.123.6 
 > (192.168.123.6) port 0 AF_INET : +/-49.5% @ 99% conf.
 > Recv   Send    Send
 > Socket Socket  Message  Elapsed
 > Size   Size    Size     Time     Throughput
 > bytes  bytes   bytes    secs.    10^6bits/sec
 > 
 > 262144  32768  32768    10.00     161.17
 > 
 > ...this is probably harder on the old machine than anything else; it is 
 > very unlikely it could saturate the link at line rate.
 > 
 > Thank you for all the excellent work on this driver; I hope this data is 
 > useful.
 > 

Thanks for testing. The main focus for msk(4) was getting a working
native driver; performance was not heavily tested and is very likely
lower than optimal. It seems that myk(4) has several workarounds for
better performance, but that magic code is hard to verify without
errata information from the vendor. :-(
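
If anyone wants to repeat the same battery of tests against another
host, a rough sketch along these lines should do. It only reuses the
commands from the quoted results; it assumes netperf is installed on
the sending box, netserver is already running on the target, and the
address below (taken from the quoted runs) is replaced as appropriate.

#!/bin/sh
# Rough sketch: rerun the netperf battery from the quoted results.
# Assumes netserver is already running on the target host.
HOST=${1:-192.168.123.18}

for t in UDP_STREAM UDP_RR TCP_STREAM TCP_RR TCP_CRR TCP_MAERTS; do
        echo "=== $t ==="
        netperf -t "$t" -H "$HOST"
done

# Statistically tighter TCP_STREAM run: 99% confidence level,
# between 10 and 30 iterations, as in the quoted message.
netperf -t TCP_STREAM -I 99 -i 30,10 -H "$HOST"
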

Btw, I'll commit msk(4) in two days if there is no breakage report
for e1000phy(4).

 > regards,
 > BMS
-- 
Regards,
Pyun YongHyeon