Re: mfi driver performance too bad on LSI MegaRAID SAS 9260-8i

From: Ultima <ultima1252_at_gmail.com>
Date: Mon, 1 Aug 2016 23:22:37 -0400
If anyone is interested, following up on what Michelle Sullivan just
mentioned: one problem I found when looking for an HBA is that they are not
so easy to find. While scouring the internet for a backup HBA I came across
these -
http://www.avagotech.com/products/server-storage/host-bus-adapters/#tab-12Gb1

I can only speak for the SAS 9305-24i. All 24 bays are occupied and I am
quite pleased with the performance compared to its predecessor. It was
originally going to be a backup unit, however that changed after running a
scrub and seeing the time to complete cut in half (from around 30-ish hours
to 15 for 35T). And of course, the reason for this post: it replaced a RAID
card running in passthrough mode.
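
For anyone who wants to reproduce the comparison, the times above are simply
what 'zpool status' reports once a scrub finishes ('tank' below is just a
placeholder pool name):

-----
zpool scrub tank
zpool status tank    # the "scan:" line reports the elapsed time once done
-----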

Another note, because it is an HBA, the ability to flash firmware is once
again possible! (yay!)

+1 to HBAs + ZFS; if possible, replace the RAID card with an HBA.

On Mon, Aug 1, 2016 at 1:30 PM, Michelle Sullivan <michelle_at_sorbs.net>
wrote:

> Borja Marcos wrote:
>
>> On 01 Aug 2016, at 15:12, O. Hartmann <ohartman_at_zedat.fu-berlin.de>
>> wrote:
>>>
>>> First, thanks for responding so quickly.
>>>
>>>> - The third option is to make the driver expose the SAS devices like an
>>>> HBA would do, so that they are visible to the CAM layer, and disks are
>>>> handled by the stock “da” driver, which is the ideal solution.
>>>>
>>> I didn't find any switch which offers me the opportunity to put the PRAID
>>> CP400i into a simple HBA mode.
>>>
>> The switch is in the FreeBSD mfi driver, the loader tunable I mentioned,
>> regardless of what the card
>> firmware does or pretends to do.
>>
>> It’s not visible in “sysctl -a” output, but it exists, and it’s even
>> unique. It’s defined here:
>>
>>
>> https://svnweb.freebsd.org/base/stable/10/sys/dev/mfi/mfi_cam.c?revision=267084&view=markup
>> (line 93)
>>
>>>> In order to do it you need a couple of things. You need to set the
>>>> variable hw.mfi.allow_cam_disk_passthrough=1 and to load the mfip.ko
>>>> module.
>>>>
>>>> When booting installation media, enter command mode and use these
>>>> commands:
>>>>
>>>> -----
>>>> set hw.mfi.allow_cam_disk_passthrough=1
>>>> load mfip
>>>> boot
>>>> -----
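>>>>
>>>> Once installed, the same thing can be made persistent; this is just the
>>>> standard loader.conf(5) way of expressing the two commands above, not
>>>> anything specific to this card:
>>>>
>>>> -----
>>>> # /boot/loader.conf
>>>> hw.mfi.allow_cam_disk_passthrough="1"
>>>> mfip_load="YES"
>>>> -----
>>>>
>>>> Afterwards "camcontrol devlist" should show the individual disks as da
>>>> devices, and "kenv hw.mfi.allow_cam_disk_passthrough" confirms the
>>>> tunable actually reached the kernel.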
>>>>
>>> Well, I'm now well aware of this problem and its solution, but I run
>>> into a chicken-and-egg problem, literally. As long as I can boot off the
>>> installation medium, I have a kernel which deals with the setting. But
>>> the boot medium is supposed to be an SSD attached to the PRAID CP400i
>>> controller itself! So I will never be able to boot the system without
>>> crippling the ability to have a full-speed ZFS configuration, which I
>>> expect to have with HBA mode but not with any of the forced RAID modes
>>> offered by the controller.
>>>
>> Been there plenty of times, even argued quite strongly about the
>> advantages of ZFS against hardware based RAID
>> 5 cards. :) I remember when the Dell salesmen couldn’t possibly
>> understand why I wanted a “software based RAID rather than a
>> robust, hardware based solution” :D
>>
>
> There are reasons for using either...
>
> Nowadays it seems the conversations have degenerated into those like
> Windows vs Linux vs Mac, where everyone thinks their answer is the right
> one (just as you suggested you (Borja Marcos) did with the Dell salesman),
> when in reality each has its own advantages and disadvantages.  E.g.: I'm
> running 2 ZFS servers on 'LSI 9260-16i's... big mistake! (the ZFS, not the
> LSIs)... one is a 'movie server', the other a 'postgresql database'
> server...  Most would agree the latter is a bad use of ZFS; the die-hards
> won't, but then they don't understand database servers and how they work
> on disk.  The former gets mixed views; some argue that ZFS is the only way
> to ensure the movies will always work, but personally I think of all the
> years before ZFS when my data on disk worked without failure until the
> disks themselves failed... and RAID stopped that happening...  What
> suddenly changed - are disks and RAM suddenly not reliable at transferring
> data?  Anyhow, back to the issue: there is another part of this particular
> hardware that people just throw away...
>
> The LSI 9260-* controllers have been designed to provide on-card hardware
> RAID.  The caching, whether using the CacheCade SSD or just the onboard
> ECC memory, is *ONLY* used when running some sort of RAID set and LVs...
> this is why LSI recommend 'MegaCli -CfgEachDskRaid0': because it does
> enable caching.  A good read on how to set up something similar is here:
> https://calomel.org/megacli_lsi_commands.html (disclaimer: I haven't
> parsed it all, so the author could be clueless, but it seems to give
> generally good advice.)  Going the way of 'JBOD' is a bad thing to do,
> just don't; performance sucks.  As for the recommended command above, I
> can't comment because I don't currently use it, nor will I need to in the
> near future... but...
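>
> For reference, the general form of that command as documented for MegaCli
> (treat this purely as an illustration and check the write-cache/BBU policy
> flags against your own card before using them) is something like:
>
> -----
> # one RAID0 logical drive per physical disk, so the on-card cache gets used
> MegaCli -CfgEachDskRaid0 WB RA Direct CachedBadBBU -strpsz64 -aALL
> # verify the resulting logical drives and their cache policies
> MegaCli -LDInfo -Lall -aALL
> -----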
>
> If you (O. Hartmann) want or need to use ZFS with any OS, including
> FreeBSD, don't go with the LSI 92xx series controllers; it's just the
> wrong thing to do.  Pick an HBA that is designed to give you direct access
> to the drives, not one you have to kludge and cajole.  That can include
> LSI controllers with caches that use the mfi driver, just not those that
> are not designed to work in a non-RAID mode (with or without the
> passthrough command/mode above).
>
>
>
>
>> At worst, you can set up a simple boot from a thumb drive or, even
>> better, a SATADOM installed inside the server. I guess it will have SATA
>> ports on the mainboard. That’s what I usually do. FreeNAS uses a similar
>> approach as well. And some modern servers can also boot from an SD card,
>> which you can use just to load the kernel.
>>
>> Depending on the number of disks you have, you can also sacrifice two to
>> set up a mirror with a “normal” boot system, and use the rest of the
>> disks for ZFS. Actually I’ve got an old server I set up in 2012. It has
>> 16 disks, and I created a logical volume (mirror) with 2 disks for boot,
>> with the other 14 disks left for ZFS.
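>>
>> Purely as an illustration (the device names and vdev layout below are
>> hypothetical, not necessarily what that 2012 box actually uses), that
>> kind of split looks roughly like this:
>>
>> -----
>> # 2 disks -> mirrored logical volume for boot/system; the remaining 14
>> # disks are exposed to CAM as da0..da13 and handed to ZFS, for example
>> # as two 7-disk raidz2 vdevs
>> zpool create tank \
>>     raidz2 da0 da1 da2 da3 da4 da5 da6 \
>>     raidz2 da7 da8 da9 da10 da11 da12 da13
>> -----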
>>
>> If I installed this server now I would do it differently, booting off a
>> thumb drive. But I was younger and more naive :)
>>
>>
>>
> If I installed mine now I would do them differently as well... neither
> would run ZFS; both would use their on-card RAID and UFS on top of it...
> ZFS would be reserved for the multi-user NFS file servers.  (And trust me
> here, when it comes to media servers - where the media is just stored, not
> changed/updated/edited - the 16i with a good high-speed SSD as 'CacheCade'
> really performs well... and on a moderately powerful MB/CPU combo with
> good RAM and several gigabit interfaces it's surprising how many unicast
> transcoded media streams it can handle... read: my twin fibres are
> saturated before the machine reaches anywhere near full load, and I can
> still write at 13MBps from my old Mac Mini over NFS... which is about all
> it can do even without any load.)
>
> So, the moral of the story/choices: don't go with ZFS because people tell
> you it's best, because it isn't; go with ZFS if it suits your hardware and
> application, and if ZFS suits your application, get hardware for it.
>
> Regards,
>
> --
> Michelle Sullivan
> http://www.mhix.org/
>
> _______________________________________________
> freebsd-stable_at_freebsd.org mailing list
> https://lists.freebsd.org/mailman/listinfo/freebsd-stable
> To unsubscribe, send any mail to "freebsd-stable-unsubscribe_at_freebsd.org"
Received on Tue Aug 02 2016 - 01:22:39 UTC
