Re: geom_raid5 inclusion in HEAD?

From: Oliver Fromme <olli_at_lurza.secnetix.de>
Date: Thu, 15 Nov 2007 16:14:15 +0100 (CET)
Arne Wörner wrote:
 > Oliver Fromme wrote:
 > > Just a small question:  I noticed that the new gvinum
 > > raid5 implementation (in P4) allows adding disks to an
 > > existing RAID5, even while it is running.  Does geom_raid5
 > > support that, too?  (ZFS doesn't, unfortunately.)
 > 
 > Nope... graid5 doesn't do such things... I found no way that could do
 > it without hurting the disks too much (I was afraid that a power
 > failure could destroy the necessary knowledge about the size of the
 > new-config-area; and I didn't know how to handle the beginning: it
 > seemed like the first few blocks need special treatment, because there
 > the new-config-area and the old-config-area overlap)...

OK.  I don't know the inner workings of geom_raid5, so I
can't tell how difficult it would be to implement it there.

Here's a little description with some ASCII graphics that
explain how growing RAID5 was implemented in the new
gvinum:

http://lists.freebsd.org/pipermail/p4-projects/2007-July/020082.html
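
To make the overlap Arne mentions a bit more concrete, here is a
toy model (purely illustrative, not the actual gvinum or graid5
code) that ignores parity rotation and simply tracks which stripe
a logical block lands in before and after adding a data disk:

#!/bin/sh
# Toy model of RAID5 data placement, ignoring parity rotation:
# with D data disks per stripe, logical block B lives in stripe
# B / D.  Growing from 3 to 4 data disks, some of the first
# blocks map to the same stripe in both layouts; those are the
# blocks that need special treatment when restriping in place.
d_old=3
d_new=4
b=0
while [ "$b" -lt 16 ]; do
	s_old=$((b / d_old))
	s_new=$((b / d_new))
	note=""
	[ "$s_old" -eq "$s_new" ] && note="  <-- overlap"
	printf "block %2d: old stripe %d, new stripe %d%s\n" \
	    "$b" "$s_old" "$s_new" "$note"
	b=$((b + 1))
done

After the first few stripes the new location always falls at
least one full stripe behind the old one, so a forward-moving
restriper has some slack; only at the very beginning do the
read and write regions coincide.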

 > But Veronica is developing a tool that can do it in offline mode...
 > with service interruption...
 > 
 > But growfs induces a service interruption anyway, and it is buggy if
 > you do not zero the new area... Veronica filed a bug report about
 > this...

Hm.  I used growfs only once, and it worked fine.  Was
there a regression introduced at some point?  It should
certainly be fixed, because growfs seems to be very
useful.
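
By the way, until that bug is fixed, zeroing the added space
before growing should work around it.  A rough sketch (device
name, mount point, and the old size of 102400 MB are all made
up; adjust them to the actual setup):

# umount /mnt/data
# dd if=/dev/zero of=/dev/raid5/data bs=1m seek=102400
# growfs /dev/raid5/data
# mount /dev/raid5/data /mnt/data

The seek makes dd skip the first 102400 MB (the old provider
size), so only the newly added area gets zeroed; dd will stop
with an error when it reaches the end of the provider, which
is expected here.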

About service interruption:  growfs only takes a few
seconds, which might be acceptable in most cases.
But taking a whole RAID5 down to add disks and then
rebuilding it takes a _lot_ longer.  Therefore I think
the ability to add disks to a live RAID5 would be very
valuable.

 > Nowadays it is common practice to have two or more hosts that can
 > substitute for each other (hot-standby, or whatever they call it
 > today), so that it doesn't matter if a box is damaged or in
 > maintenance mode or... isn't it?

It depends.  Building a fail-over cluster with FreeBSD
is not trivial if you need a synchronized, consistent and
reliable file system on all of the nodes.

Of course you can use third-party black boxes such as
a cluster of NetApp Filers or whatever.  That would work
(I've put such setups into production myself), but it
costs a non-negligible amount of money, and it's
certainly not suitable for everyone.

YMMV, of course.

Best regards
   Oliver

-- 
Oliver Fromme, secnetix GmbH & Co. KG, Marktplatz 29, 85567 Grafing b. M.
Handelsregister: Registergericht München, HRA 74606,  Geschäftsführung:
secnetix Verwaltungsgesellsch. mbH, Handelsregister: Registergericht Mün-
chen, HRB 125758,  Geschäftsführer: Maik Bachmann, Olaf Erb, Ralf Gebhart

FreeBSD-Dienstleistungen, -Produkte und mehr:  http://www.secnetix.de/bsd

One Unix to rule them all, One Resolver to find them,
One IP to bring them all and in the zone to bind them.