Re: mfi driver performance

From: matt <sendtomatt_at_gmail.com>
Date: Mon, 10 Sep 2012 19:15:12 -0700
On 09/10/12 11:35, Andrey Zonov wrote:
> On 9/10/12 9:14 PM, matt wrote:
>> On 09/10/12 05:38, Achim Patzner wrote:
>>> Hi!
>>>
>>> We’re testing a new Intel S2600GL-based server with their recommended RAID adapter ("Intel(R) Integrated RAID Module RMS25CB080”) which is identified as
>>>
>>> mfi0: <ThunderBolt> port 0x2000-0x20ff mem 0xd0c60000-0xd0c63fff,0xd0c00000-0xd0c3ffff irq 34 at device 0.0 on pci5
>>> mfi0: Using MSI
>>> mfi0: Megaraid SAS driver Ver 4.23
>>> mfi0: MaxCmd = 3f0 MaxSgl = 46 state = b75003f0
>>>
>>> or
>>>
>>> mfi0_at_pci0:5:0:0:        class=0x010400 card=0x35138086 chip=0x005b1000 rev=0x03 hdr=0x00
>>>      vendor     = 'LSI Logic / Symbios Logic'
>>>      device     = 'MegaRAID SAS 2208 [Thunderbolt]'
>>>      class      = mass storage
>>>      subclass   = RAID
>>>
>>> and seems to be doing quite well.
>>>
>>> As long as it isn’t used…
>>>
>>> When the system comes under a bit more I/O load it gets close to unusable as soon as there are a few writes (independent of configuration; it performs poorly even as a glorified S-ATA controller). Equipping it with an older (unsupported) controller like an SRCSASRB
>>> (mfi0_at_pci0:10:0:0:       class=0x010400 card=0x100a8086 chip=0x00601000 rev=0x04 hdr=0x00
>>>      vendor     = 'LSI Logic / Symbios Logic'
>>>      device     = 'MegaRAID SAS 1078'
>>>      class      = mass storage
>>>      subclass   = RAID) solves the problem but won’t make Intel’s support happy.
>>>
>>> Does anybody have similar experiences with the mfi driver? Any good ideas besides running an unsupported configuration?
>>>
>>>
>>> Achim
>>>
>>> _______________________________________________
>>> freebsd-current_at_freebsd.org mailing list
>>> http://lists.freebsd.org/mailman/listinfo/freebsd-current
>>> To unsubscribe, send any mail to "freebsd-current-unsubscribe_at_freebsd.org"
>> I just set up an IBM m1015 (aka LSI 9240lite aka Drake Skinny) with mfi.
>> Performance was excellent for mfisyspd volumes; I compared using the
>> same hardware but with firmware (2108it.bin) that attaches under mps.
>> Bonnie++ results on random disks were very close, if not identical,
>> between mfi and mps. ZFS performance was also identical between an
>> mfisyspd JBOD volume and an mps "da" raw volume. It was also quite clear
>> that mfisyspd volumes are true sector-for-sector pass-through devices.
>>
>> However, I could not get smartctl to see an mfisyspd volume (it claimed
>> there was no such file...?), so I flashed the controller back to mps
>> for now. A shame, because I really like the mfi driver better, and
>> mfiutil worked great (even for flashing firmware updates).
>>
> Did you get /dev/pass* devices when the controller runs under the mfi
> driver?  If so, try to run smartctl on them.  If not, add 'device mfip'
> to your kernel config file.
mfip was necessary, and it allowed smartctl to work with '-d sat'.
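
For anyone trying the same thing, the sequence looks roughly like this (a sketch; the pass device number depends on how many pass-through devices your system already has, so /dev/pass0 here is illustrative):

```shell
# Enable the mfip CAM pass-through so disks behind mfi(4) show up as
# /dev/pass* devices.  Either build it into the kernel:
#     device mfip
# or load it as a module at boot via /boot/loader.conf:
echo 'mfip_load="YES"' >> /boot/loader.conf

# After a reboot (or a kldload mfip), SMART data is reachable through
# the SAT layer using smartctl's '-d sat' device type:
smartctl -a -d sat /dev/pass0
```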

bonnie++ comparison, run with no options immediately after system boot.
In both cases the same disks were used: two Seagate Barracuda 1TB 3Gb/s
(twin platter) and a Barracuda 500GB 3Gb/s (single platter) in a ZFS
triple mirror that the system was booted from. All are 7200 RPM drives
with 32 MB cache, and mediocre performers compared to my Hitachi 7K3000s
or the 15k SAS Cheetahs at work, etc. Firmwares were the latest 2108it
vs. the latest imr_fw that work on the 9240/9220/m1015/Drake Skinny. I
wish I had some 6Gb/s SSDs to try!

MPS:
Version  1.96       ------Sequential Output------ --Sequential Input- --Random-
Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
flatline.local  32G   122  99 71588  24 53293  20   284  90 222157 33 252.6  49
Latency               542ms     356ms     914ms     991ms     337ms     271ms
Version  1.96       ------Sequential Create------ --------Random Create--------
flatline.local      -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 22197  93  9367  27 16821  99 23555  99 +++++ +++ 23717  99
Latency             31650us     290ms     869us   23036us      66us     131us
1.96,1.96,flatline.local,1,1347322810,32G,,122,99,71588,24,53293,20,284,90,222157,33,252.6,49,16,,,,,22197,93,9367,27,16821,99,23555,99,+++++,+++,23717,99,542ms,356ms,914ms,991ms,337ms,271ms,31650us,290ms,869us,23036us,66us,131us

MFI:
Version  1.96       ------Sequential Output------ --Sequential Input- --Random-
Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
flatline.local  32G   125  99 71443  24 53177  21   317  99 220280 33 255.3  52
Latency               533ms     566ms    1134ms   86565us     357ms     252ms
Version  1.96       ------Sequential Create------ --------Random Create--------
flatline.local      -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 22347  94 12389  30 16804 100 18729  99 27798  99  5317  99
Latency             33818us     233ms     558us   26581us      75us   12319us
1.96,1.96,flatline.local,1,1347329123,32G,,125,99,71443,24,53177,21,317,99,220280,33,255.3,52,16,,,,,22347,94,12389,30,16804,100,18729,99,27798,99,5317,99,533ms,566ms,1134ms,86565us,357ms,252ms,33818us,233ms,558us,26581us,75us,12319us

A close race, with some wins for each. Latency on sequential input and
deletes per second look like the interesting outliers. Most of the other
numbers go back and forth and are probably not statistically significant
(although this isn't much of a sample set :) ).
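
Since bonnie++ also emits the machine-readable CSV lines quoted above, a quick awk pass can pull the headline numbers out for side-by-side comparison. Field positions follow the 1.96 CSV layout; the two lines are pasted verbatim from the runs above.

```shell
#!/bin/sh
# Extract sequential block write/read (K/sec) and random seeks (/sec)
# from the bonnie++ 1.96 CSV result lines quoted in this mail.
mps='1.96,1.96,flatline.local,1,1347322810,32G,,122,99,71588,24,53293,20,284,90,222157,33,252.6,49,16,,,,,22197,93,9367,27,16821,99,23555,99,+++++,+++,23717,99,542ms,356ms,914ms,991ms,337ms,271ms,31650us,290ms,869us,23036us,66us,131us'
mfi='1.96,1.96,flatline.local,1,1347329123,32G,,125,99,71443,24,53177,21,317,99,220280,33,255.3,52,16,,,,,22347,94,12389,30,16804,100,18729,99,27798,99,5317,99,533ms,566ms,1134ms,86565us,357ms,252ms,33818us,233ms,558us,26581us,75us,12319us'

for drv in mps mfi; do
  eval line=\$$drv
  # Fields 10/16/18: seq. block write K/s, seq. block read K/s, seeks /s
  echo "$line" | awk -F, -v d="$drv" \
    '{printf "%s: write %s K/s, read %s K/s, seeks %s/s\n", d, $10, $16, $18}'
done
```

Running it prints one summary line per driver, e.g. `mps: write 71588 K/s, read 222157 K/s, seeks 252.6/s`.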

I tried to control as many variables as possible, but obviously this is
one controller in one configuration; your mileage may vary.

Matt
Received on Tue Sep 11 2012 - 00:15:26 UTC

This archive was generated by hypermail 2.4.0 : Wed May 19 2021 - 11:40:30 UTC