Re: mfi driver performance

From: matt <sendtomatt@gmail.com>
Date: Mon, 10 Sep 2012 12:43:12 -0700
On 09/10/12 11:35, Andrey Zonov wrote:
> On 9/10/12 9:14 PM, matt wrote:
>> On 09/10/12 05:38, Achim Patzner wrote:
>>> Hi!
>>>
>>> We’re testing a new Intel S2600GL-based server with their recommended RAID adapter ("Intel(R) Integrated RAID Module RMS25CB080"), which is identified as
>>>
>>> mfi0: <ThunderBolt> port 0x2000-0x20ff mem 0xd0c60000-0xd0c63fff,0xd0c00000-0xd0c3ffff irq 34 at device 0.0 on pci5
>>> mfi0: Using MSI
>>> mfi0: Megaraid SAS driver Ver 4.23 
>>> mfi0: MaxCmd = 3f0 MaxSgl = 46 state = b75003f0 
>>>
>>> or
>>>
>>> mfi0@pci0:5:0:0:        class=0x010400 card=0x35138086 chip=0x005b1000 rev=0x03 hdr=0x00
>>>     vendor     = 'LSI Logic / Symbios Logic'
>>>     device     = 'MegaRAID SAS 2208 [Thunderbolt]'
>>>     class      = mass storage
>>>     subclass   = RAID
>>>
>>> and seems to be doing quite well.
>>>
>>> As long as it isn’t used…
>>>
>>> When the system comes under a bit more I/O load it gets close to unusable as soon as there are a few writes (independent of the configuration; it performs poorly even as a glorified SATA controller). Equipping it with an older (unsupported) controller like an SRCSASRB
>>> (mfi0@pci0:10:0:0:       class=0x010400 card=0x100a8086 chip=0x00601000 rev=0x04 hdr=0x00
>>>     vendor     = 'LSI Logic / Symbios Logic'
>>>     device     = 'MegaRAID SAS 1078'
>>>     class      = mass storage
>>>     subclass   = RAID) solves the problem but won’t make Intel’s support happy.
>>>
>>> Has anybody had similar experiences with the mfi driver? Any good ideas besides running an unsupported configuration?
>>>
>>>
>>> Achim
>>>
>> I just set up an IBM M1015 (aka LSI 9240-lite, aka Drake Skinny) with
>> mfi. Performance was excellent for mfisyspd volumes; I compared against
>> the same hardware flashed with firmware (2108it.bin) that attaches
>> under mps. Bonnie++ results on random disks were very close, if not
>> identical, between mfi and mps. ZFS performance was also identical
>> between an mfisyspd JBOD volume and an mps "da" raw volume. It was also
>> quite clear that mfisyspd volumes are true sector-for-sector
>> pass-through devices.
>>
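A minimal sketch of that comparison, for anyone who wants to repeat it (pool and device names are just examples; the same bonnie++ invocation is repeated after reflashing):

    # under mfi(4): scratch pool on the pass-through device
    zpool create testpool /dev/mfisyspd0
    bonnie++ -d /testpool -u root -s 8192
    # under mps(4): the same disk attaches as a raw da(4) device
    zpool create testpool /dev/da0
    bonnie++ -d /testpool -u root -s 8192
    # sanity check that mfisyspd really is sector-for-sector pass-through
    diskinfo -v /dev/mfisyspd0
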
>> However, I could not get smartctl to see an mfisyspd volume (it claimed
>> there was no such file...?), so I flashed the controller back to mps
>> for now. A shame, because I like the mfi driver better, and mfiutil
>> worked great (even for flashing firmware updates).
>>
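On the mfiutil point: the whole firmware dance is short (the image filename below is only a placeholder):

    mfiutil show adapter        # identify the controller and current firmware revision
    mfiutil show volumes        # list the configured volumes
    mfiutil flash fw_image.rom  # flash a new image through the running driver
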
> Do you get /dev/pass* devices when the controller runs under the mfi
> driver?  If so, try running smartctl on them.  If not, add 'device
> mfip' to your kernel config file.
>
I will try mfi firmware again tonight. ZFS seemed happy whether the
pool was on /dev/da* or /dev/mfisyspd*. Is the mfisyspd device name set
in stone? It's quite long!
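
If the mfip route works, I'd guess the steps look something like this (a sketch; the pass device number is a placeholder until the drives actually show up):

    kldload mfip            # or compile 'device mfip' into the kernel
    camcontrol devlist      # drives behind mfi(4) should now appear as pass(4) devices
    smartctl -a /dev/pass0  # point smartctl at the pass-through node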


Matt
Received on Mon Sep 10 2012 - 17:44:45 UTC
