Re: ZFS panic under extreme circumstances (2/3 disks corrupted)

From: Holger Kipp <hk_at_alogis.com>
Date: Tue, 26 May 2009 11:17:57 +0200
On Mon, May 25, 2009 at 09:19:21AM -0700, Freddie Cash wrote:
> On Mon, May 25, 2009 at 9:12 AM, Thomas Backman <serenity_at_exscape.org> wrote:
> > On May 25, 2009, at 05:39 PM, Freddie Cash wrote:
> >> On Mon, May 25, 2009 at 2:13 AM, Thomas Backman <serenity_at_exscape.org>
> >> wrote:
> >>> On May 24, 2009, at 09:02 PM, Thomas Backman wrote:
> >>>
> >>>> So, I was playing around with RAID-Z and self-healing...
> >>>
> >> On our storage server that was initially configured using 1 large
> >> 24-drive raidz2 vdev (don't do that, by the way), we had 1 drive go
> >> south.  "zpool status" was full of errors.  And the error counts
> >> survived reboots.  Either that, or the drive was so bad that the error
> >> counts started increasing right away after a boot.  After a week of
> >> fighting with it to get the new drive to resilver and get added to the
> >> vdev, we nuked it and re-created it using 3 raidz2 vdevs each
> >> comprised of 8 drives.
> >>
> >> (Un)fortunately, that was the only failure we've had so far, so can't
> >> really confirm/deny the "error counts reset after reboot".
> >
> > Was this on FreeBSD?
> 
> 64-bit FreeBSD 7.1 using ZFS v6.  SATA drives connected to 3Ware RAID
> controllers, but configured as "Single Drive" arrays not using
> hardware RAID in any way.

Not sure if this is related, but we have a 16-disk Fibre Channel RAID
enclosure with each disk configured as a single-disk RAID0 container on the
controller (it seems explicit JBOD, without at least a logical RAID0
container per disk, is not possible on most or even all of these RAID
systems). On top of that, ZFS is of course configured as raidz2.
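
For illustration, the pool layout was roughly the following (a sketch only;
the pool name 'tank' and the da0..da15 device names are placeholders, the
real names depend on the controller):

  # one raidz2 vdev across all 16 single-disk RAID0 containers
  zpool create tank raidz2 da0 da1 da2 da3 da4 da5 da6 da7 \
      da8 da9 da10 da11 da12 da13 da14 da15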

As a test, we pulled one disk while the system was up and reinserted it
after some time - boom. We couldn't get it to resilver. The reason is quite
simple: the RAID system recognized the disk, but because a RAID0 container
cannot be repaired by the underlying RAID system, the status reported to
the host was always 'drive broken'. Inserting a new disk would not have
helped either, because the RAID system cannot rebuild a RAID0 onto a new
disk. Obvious if you think about it.
What we had to do was remove the RAID0 container from the RAID configuration
and create a new one with the same name on the RAID device. As soon as the
controller was happy and considered the disk OK, ZFS was able to access the
drive again and started resilvering without problems (that was with ZFS
version 6).
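
On the ZFS side the recovery was then straightforward; roughly this (pool
and device names are placeholders again, and details may differ on ZFS v6):

  zpool status tank       # the affected disk shows up as UNAVAIL/FAULTED
  zpool online tank da5   # tell ZFS the device is available again
  zpool status tank       # the resilver should now be in progress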

Regards,
Holger