Re: FreeBSD 5.3b7 and poor ata performance

From: Scott Long <scottl_at_freebsd.org>
Date: Mon, 25 Oct 2004 16:41:51 -0600
Charles Swiger wrote:
> On Oct 25, 2004, at 5:39 PM, Brad Knowles wrote:
> 
>> At 3:25 PM -0600 2004-10-25, Scott Long wrote:
>>
>>> But as was said, there is always
>>> a performance vs. reliability tradeoff.
>>
>>
>>     Well, more like "Pick two: performance, reliability, price"  ;)
> 
> 
> That sounds familiar.  :-)
> 
> If you prefer...            ...consider using:
> ----------------------------------------------
> performance, reliability:    RAID-1 mirroring
> performance, cost:           RAID-0 striping
> reliability, performance:    RAID-1 mirroring (+ hot spare, if possible)
> reliability, cost:           RAID-5 (+ hot spare)
> cost, reliability:           RAID-5
> cost, performance:           RAID-0 striping

It's more complex than that.  Are you talking software RAID, PCI RAID,
or external RAID?  That affects all three quite a bit.  Also, how do
you define reliability?  Do you verify reads on RAID-1 and RAID-5?  And
what about error recovery?

> 
>>> And when you are talking about RAID-10 with a bunch of disks, you 
>>> will indeed start seeing bottlenecks in the bus.
>>
>>
>>     When you're talking about using a lot of disks, that's going to be 
>> true for any disk subsystem that you're trying to get a lot of 
>> performance out of.
> 
> 
> That depends on your hardware, of course.  :-)
> 
> There's a Sun E450 with ten disks over 5 SCSI channels in the room next 
> door: one UW channel native on the MB, and two U160 channels apiece from 
> two dual-channel cards which come with each 8-drive-bay extender kit.  
> It's running Solaris and DiskSuite (ODS) now, but it would be 
> interesting to put FreeBSD on it and see how that does, if I ever get 
> the chance.
> 
>>     The old rule was that if you had more than four disks per channel, 
>> you were probably hitting saturation.  I don't know if that specific 
>> rule-of-thumb is still valid, but I'd be surprised if disk controller 
>> performance hasn't roughly kept up with disk performance over time.
> 
> 
> That rule dates back to the early days of SCSI-2, where you could fit 
> about four drives worth of aggregate throughput over a 40 MB/s ultra-wide 
> bus.  The idea behind it is still sound, although the number of drives 
> you can fit obviously changes whether you talk about ATA-100 or SATA-150.
> 

The formula here is simple:

ATA: 2 drives per channel
SATA: 1 drive per channel

So the channel transport starts becoming irrelevant now (except when you
talk about SAS and having bonded channels going to switches).  The
limiting factor again becomes PCI.  An easy example is the software
RAID cards that are based on the Marvell 8-channel SATA chip.  It can
drive all 8 drives at max platter speed if you have enough PCI bandwidth
(and I've tested this recently with FreeBSD 5.3, getting >200 MB/s
across 4 drives).  However, you're talking about PCI-X-100 bandwidth at
that point, which is not what most people have in their desktop systems.
And for reasons of reliability, I wouldn't consider software RAID to
be something that you would base your server-class storage on, other than
to mirror the boot drive so that a failure there doesn't immediately bring
you down.
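
To put rough numbers on the channel-vs-bus question, here's a trivial
back-of-the-envelope calculation.  The per-drive and per-bus throughput
figures below are rough era-typical assumptions, not measurements:

/* drives_per_bus.c - back-of-the-envelope channel/bus saturation math.
 * All throughput figures are rough assumptions for illustration.
 * Compile: cc -o drives_per_bus drives_per_bus.c
 */
#include <stdio.h>

static void
report(const char *bus, double bus_mbs, const char *drive, double drive_mbs)
{
	printf("%-10s (%4.0f MB/s): ~%.1f x %s (%.0f MB/s each)\n",
	    bus, bus_mbs, bus_mbs / drive_mbs, drive, drive_mbs);
}

int
main(void)
{
	/* The old SCSI-2-era rule: ~10 MB/s drives on a 40 MB/s UW bus,
	 * hence "about four drives per channel". */
	report("UW SCSI", 40.0, "SCSI-2-era drive", 10.0);

	/* 2004-era drives sustain ~55 MB/s off the platter, so the
	 * bottleneck moves from the channel to the host bus. */
	report("32/33 PCI", 133.0, "SATA drive", 55.0);
	report("PCI-X-100", 800.0, "SATA drive", 55.0);
	return (0);
}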

Anyway, it sounds like the original poster found that at least part of
the problem was due to local ATA problems.  In the longer term, I'd
like to see people who care about performance focus on things like
I/Os per second, not raw bandwidth.  As I mentioned above, I've seen
that a software RAID driver on FreeBSD can sustain line rate with the
drives on large transfers.  That makes sense because the overhead of
setting up the DMA is dwarfed by the time the DMA itself takes: a few
microseconds of setup is noise next to the tens of milliseconds a
large transfer spends moving data.  I'd also like to see more
'apples-to-apples' comparisons.  It doesn't mean a whole lot to say,
for example, that software RAID on SCSI doesn't perform as well as a
single ATA drive, regardless of how 'common sense' that argument might
sound.  The performance characteristics of ATA and SCSI really are
quite different.  With SCSI you get the ability to do lots of parallel
requests via tagged queueing, and ATA just can't touch that.  With ATA
you tend to get large caches and aggressive read-ahead, so sequential
performance is always good.  In my opinion these qualities can have a
detrimental impact on reliability, but again my focus has always been
on reliability first.

What is interesting is measuring how many single-sector transfers can be
done per second and how much CPU that consumes.  I used to be able
to get about 11,000 io/s on an aac card on a 5.2-CURRENT system from
last winter.  Now I can only get about 7,000.  I'm not sure where the
problem is yet, unfortunately.  I'm using KSE pthreads to generate a
lot of parallel requests with as little overhead as possible, so maybe
something there has changed, or maybe something in the I/O path above
the driver has changed, or maybe something in interrupt handling or
scheduling has changed.  It would be interesting to figure this out,
since this definitely shows a problem.
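
The general shape of that test looks something like this minimal
sketch: N threads each issue 512-byte reads at random offsets against
a raw device, and you count completions per second.  The device path
and the thread/offset constants here are just placeholders, not my
actual harness:

/* iops.c - crude single-sector IOPS microbenchmark sketch.
 * NTHREADS workers issue 512-byte pread()s at random sector-aligned
 * offsets; main() totals the completions over a fixed interval.
 * Assumes the device is at least 4GB.
 * Compile: cc -o iops iops.c -lpthread
 */
#include <sys/types.h>
#include <fcntl.h>
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define DEVICE   "/dev/da0"	/* placeholder; point at a real disk */
#define NTHREADS 16
#define SECTOR   512
#define SECONDS  10

static volatile int done;
static unsigned long counts[NTHREADS];	/* one slot per worker */

static void *
worker(void *arg)
{
	long id = (long)arg;
	char buf[SECTOR];
	int fd;

	if ((fd = open(DEVICE, O_RDONLY)) < 0) {
		perror("open");
		return (NULL);
	}
	while (!done) {
		/* Random sector in the first ~4GB to defeat read-ahead. */
		off_t off = (off_t)(random() % (8 * 1024 * 1024)) * SECTOR;
		if (pread(fd, buf, SECTOR, off) == SECTOR)
			counts[id]++;
	}
	close(fd);
	return (NULL);
}

int
main(void)
{
	pthread_t tids[NTHREADS];
	unsigned long total = 0;
	long i;

	for (i = 0; i < NTHREADS; i++)
		pthread_create(&tids[i], NULL, worker, (void *)i);
	sleep(SECONDS);
	done = 1;
	for (i = 0; i < NTHREADS; i++) {
		pthread_join(tids[i], NULL);
		total += counts[i];
	}
	printf("%lu io/s across %d threads\n", total / SECONDS, NTHREADS);
	return (0);
}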

Scott