Re: Gvinum RAID5 performance

From: Alastair D'Silva <freebsd_at_newmillennium.net.au>
Date: Mon, 1 Nov 2004 16:22:48 +1100
Quoting Brad Knowles <brad_at_stop.mail-abuse.org>:

> At 9:55 AM +1100 2004-11-01, <freebsd_at_newmillennium.net.au> wrote:
>
> >  Now, running a dd from a plex gives me less performance than running a
> >  dd from one of the subdisks, even though the array is not running in
> >  degraded mode.
>
> 	Right.  This is RAID-5.  It is used for reliability, not
> performance.  The entire stripe has to be read at once and written at
> once, for any operation involving that stripe.

Granted; however, some RAID5 implementations read only the necessary data
blocks and forgo reading the parity when not operating in degraded mode
(3Ware springs to mind as an immediate example). The upshot is that, to
ensure data integrity, a background process must be run periodically to
verify the parity.
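
Roughly what I have in mind, sketched in C below. This is not gvinum's
actual code; the rotating-parity layout and every name in it are made up
purely to show the idea. The point is that a read on a healthy plex maps
to exactly one subdisk, so the parity drive is never touched; only a
degraded read needs the XOR pass:

    #include <stddef.h>

    struct r5_loc {
        int    disk;    /* subdisk holding the data block */
        size_t offset;  /* byte offset within that subdisk */
    };

    /* Healthy-plex read: one logical offset -> one subdisk, no parity I/O. */
    static struct r5_loc
    r5_map(size_t lofs, size_t stripesize, int ndisks)
    {
        size_t datalen = stripesize * (ndisks - 1); /* data bytes per stripe */
        size_t stripe  = lofs / datalen;
        size_t inside  = lofs % datalen;
        int    pdisk   = stripe % ndisks;           /* parity disk rotates */
        int    dcol    = inside / stripesize;       /* data column in stripe */
        struct r5_loc loc;

        loc.disk   = (dcol < pdisk) ? dcol : dcol + 1;  /* skip parity disk */
        loc.offset = stripe * stripesize + inside % stripesize;
        return (loc);
    }

    /* Degraded read: rebuild the lost block by XORing the survivors
     * (the remaining data blocks plus the parity block). */
    static void
    r5_reconstruct(unsigned char *dst, unsigned char *const *sur,
        int nsur, size_t len)
    {
        size_t i;
        int d;

        for (i = 0; i < len; i++) {
            dst[i] = 0;
            for (d = 0; d < nsur; d++)
                dst[i] ^= sur[d][i];
        }
    }

Since parity is never read on the healthy path, a background scrubber has
to walk the stripes and check that each one still XORs to zero.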

Alternatively, simply buffering the whole stripe in memory may be enough,
as subsequent reads from the same stripe would be served from memory
rather than resulting in another disk I/O. (Why didn't the on-disk cache
satisfy those requests? I did notice that reading from a single subdisk
kept that drive's access light locked on solid, while reading from the
plex made all the drives flicker rather than stay solid.)
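
Even a one-entry buffer is enough to show what I mean (again just a
sketch with invented names; read_stripe() stands in for whatever issues
the real disk I/O, and the caller is assumed to have allocated buf):

    #include <stddef.h>
    #include <string.h>

    struct stripe_cache {
        size_t         stripeno; /* cached stripe, or (size_t)-1 if empty */
        unsigned char *buf;      /* stripesize * (ndisks - 1) bytes */
    };

    /* The first read of a stripe pays for the disk I/O; later reads
     * from the same stripe are served straight from memory. */
    static int
    cached_read(struct stripe_cache *sc, size_t stripeno, size_t off,
        unsigned char *dst, size_t len,
        int (*read_stripe)(size_t, unsigned char *))
    {
        if (sc->stripeno != stripeno) {
            if (read_stripe(stripeno, sc->buf) != 0)
                return (-1);                /* miss, and the I/O failed */
            sc->stripeno = stripeno;
        }
        memcpy(dst, sc->buf + off, len);    /* hit: no disk I/O at all */
        return (0);
    }

A real version would keep a small LRU of stripes per plex and invalidate
an entry whenever its stripe is written, but even this degenerate case
would turn a sequential dd into one disk I/O per stripe rather than one
per request.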


Perhaps a sysctl, or a parameter to vinum's 'org' directive, could be
introduced to toggle between these two modes of operation and the current
one.
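
For the sysctl flavour, I imagine something like the following. To be
clear, neither the node nor the variable exists today; this only shows
the shape such a knob could take:

    #include <sys/param.h>
    #include <sys/kernel.h>
    #include <sys/sysctl.h>

    /* Hypothetical tunable -- nothing like this is in the tree today. */
    static int gv_raid5_readmode = 0;  /* 0 = read full stripe (current)
                                        * 1 = data blocks only + bg verify
                                        * 2 = full stripe, kept buffered */

    SYSCTL_DECL(_kern_geom_vinum);
    SYSCTL_INT(_kern_geom_vinum, OID_AUTO, raid5_readmode, CTLFLAG_RW,
        &gv_raid5_readmode, 0, "RAID5 read strategy");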

I think both approaches could increase overall reliability as well as
improve performance, since the drives would not be worked as hard.

--
Alastair D'Silva
Networking Consultant
New Millennium Networking
Received on Mon Nov 01 2004 - 04:21:19 UTC
