Re: Dell Perc 5/i Performance issues

From: Scott Long <scottl_at_samsco.org>
Date: Sun, 20 Jun 2010 13:27:43 -0600
Yeah, there's no value in using the /dev/random devices for testing disk i/o; use /dev/zero instead.  I've known of hardware RAID engines in the past that recognize certain repeating i/o benchmark patterns and optimize for them, but I have no idea whether the LSI controllers do this.  Based on your results, though, it's probably safe to say that they don't.
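
Something along these lines would be a more meaningful sequential write test; the device name, block size and count here are just placeholders, and of course it overwrites whatever is on the volume:

dd if=/dev/zero of=/dev/mfid0 bs=1M count=2048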

Scott

On Jun 20, 2010, at 1:09 PM, Artem Belevich wrote:

> /dev/random and /dev/urandom are relatively slow and are not suitable
> as the source of data for testing modern hard drives' sequential
> throughput.
> 
> On my 3GHz dual-core amd64 box, both /dev/random and /dev/urandom max
> out at ~80MB/s while consuming 100% CPU time on one of the processor
> cores.  That is not enough to saturate even a single disk with
> sequential writes.
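> 
> A quick way to confirm that on a given box is to time the random source
> by itself; the count is arbitrary, and /dev/null just discards the data:
> 
> dd if=/dev/random of=/dev/null bs=64k count=20000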
> 
> --Artem
> 
> 
> 
> On Sun, Jun 20, 2010 at 9:51 AM, oizs <oizs_at_freemail.hu> wrote:
>> I've tried almost everything now.
>> The battery is probably fine:
>> mfiutil show battery
>> mfi0: Battery State:
>>  Manufacture Date: 7/25/2009
>>    Serial Number: 3716
>>     Manufacturer: SMP-PA1.9
>>            Model: DLFR463
>>        Chemistry: LION
>>  Design Capacity: 1800 mAh
>>   Design Voltage: 3700 mV
>>   Current Charge: 99%
>> 
>> My results:
>> Settings:
>> Raid5:
>> Stripe: 64k
>> mfiutil cache 0
>> mfi0 volume mfid0 cache settings:
>>      I/O caching: writes
>>    write caching: write-back
>>       read ahead: none
>> drive write cache: default
>> Raid0:
>> Stripe: 64k
>> mfiutil cache 0
>> mfi0 volume mfid0 cache settings:
>>      I/O caching: writes
>>    write caching: write-back
>>       read ahead: none
>> drive write cache: default
>> 
>> I tried playing around with these cache settings as well, with almost no difference.
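>> 
>> These are changed per volume with mfiutil, roughly along the lines
>> below; the exact setting names are in mfiutil(8), and "0" is the
>> volume shown above:
>> mfiutil cache 0 read-ahead always
>> mfiutil cache 0 write-back
>> mfiutil cache 0 write-cache enable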
>> 
>> Raid5
>> read:
>> dd if=/dev/mfid0 of=/dev/null bs=10M
>> 1143+0 records in
>> 1143+0 records out
>> 11985223680 bytes transferred in 139.104134 secs (86160083 bytes/sec)
>> write:
>> dd if=/dev/random of=/dev/mfid0 bs=64K
>> 22747+0 records in
>> 22747+0 records out
>> 1490747392 bytes transferred in 23.921103 secs (62319342 bytes/sec)
>> 
>> Raid0
>> read:
>> dd if=/dev/mfid0 of=/dev/null bs=64K
>> 92470+0 records in
>> 92470+0 records out
>> 6060113920 bytes transferred in 47.926007 secs (126447294 bytes/sec)
>> write:
>> dd if=/dev/random of=/dev/mfid0 bs=64K
>> 16441+0 records in
>> 16441+0 records out
>> 1077477376 bytes transferred in 17.232486 secs (62525939 bytes/sec)
>> 
>> I'm writing directly to the device, so I don't think any slice-alignment
>> issues could be causing the problem.
>> 
>> -zsozso
>> On 2010.06.20. 4:53, Scott Long wrote:
>>> 
>>> Two big things can affect RAID-5 performance:
>>> 
>>> 1. Battery backup.  If you don't have a working battery attached to the
>>> card, it will turn off the write-back cache, no matter what you do.  Check
>>> this.  If you're unsure, use the mfiutil tool that I added to FreeBSD a few
>>> months ago and send me the output.
>>> 
>>> 2. Partition alignment.  If you're using classic MBR slices, everything
>>> gets misaligned by 63 sectors, making it impossible for the controller to
>>> optimize both reads and writes.  If the array is used for secondary storage,
>>> simply don't use an MBR scheme.  If it's used for primary storage, try using
>>> GPT instead and setting up your partitions so that they are aligned to large
>>> power-of-2 boundaries.
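>>> 
>>> A rough sketch of both of the above; the device name, start offset,
>>> and partition size are only placeholders:
>>> 
>>> mfiutil show battery
>>> mfiutil cache 0
>>> 
>>> gpart create -s gpt mfid0
>>> gpart add -b 2048 -s 500G -t freebsd-ufs mfid0
>>> 
>>> Starting the first partition at block 2048 puts it on a 1MB boundary
>>> (2048 512-byte sectors), a reasonably large power-of-2 alignment.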
>>> 
>>> Scott
>>> 
>>> On Jun 18, 2010, at 6:27 PM, oizs wrote:
>>> 
>> 