ZPOOL import failure due to multiple pool IDs?

From: O. Hartmann <ohartman_at_zedat.fu-berlin.de>
Date: Wed, 24 Jul 2013 08:22:14 +0200
I have trouble with a ZFS pool after an interrupted scrub on FreeBSD
10-CURRENT (10.0-CURRENT #1 r253579: Tue Jul 23 20:31:59 CEST 2013
amd64).

I shut down the box while the pool in question was still scrubbing;
after the reboot, the system marked that pool as faulted. I tried to
repair the reported data corruption by adding the -F flag to the
import, but surprisingly, the pool shows up with ambiguous IDs,
confusing the system (and me):

   pool: BACKUP00
     id: 257822624560506537
  state: FAULTED
 status: The pool metadata is corrupted.
 action: The pool cannot be imported due to damaged devices or data.
        The pool may be active on another system, but can be imported
using the '-f' flag.
   see: http://illumos.org/msg/ZFS-8000-72
 config:

        BACKUP00    FAULTED  corrupted data
          ada3p1    ONLINE

   pool: BACKUP00
     id: 9337833315545958689
  state: FAULTED
 status: One or more devices contains corrupted data.
 action: The pool cannot be imported due to damaged devices or data.
        The pool may be active on another system, but can be imported
   using the '-f' flag.
   see: http://illumos.org/msg/ZFS-8000-5E
 config:

        BACKUP00               FAULTED  corrupted data
          8544670861382329237  UNAVAIL  corrupted data

I do not know what is happening here. The pool has been upgraded
twice, as far as I remember; the disk/device serves as a compressed
backup device and is used only for that purpose. But for a while now,
on FreeBSD 10, it starts to fail when a scrub is interrupted by a
shutdown. I remember that scrubbing of pools used to resume after the
next reboot - but this seems to be a problem now for some reason on
FreeBSD 10. I had a situation like this earlier this year with the
same device, along with another pool, after a scrub didn't resume as
expected.

The import of the pool in question works by using the very first id,
257822624560506537.
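For the record, the recovery that works can be sketched as the
following shell session (pool name and numeric id taken from the
listings above; the -F rewind flag is the one I mentioned, and of
course this needs root and the actual disk attached):

```shell
# Import the pool by its numeric id rather than its name, so ZFS does
# not pick the stale label that carries the same name "BACKUP00".
# -F attempts rewind/recovery of damaged metadata, discarding the last
# few transactions if necessary.
zpool import -F 257822624560506537

# Afterwards, check the pool state and the resumed/restarted scrub:
zpool status BACKUP00
```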

But what about the other ID? What are those extra IDs and labels doing
here? Is it possible that ZFS has a bug that reveals older
labels/GUIDs of the device, from an earlier configuration than the
last one configured and prepared for?

How can I get rid of those fake/phantom IDs?
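Inspecting the on-disk labels might at least show where the phantom id
comes from. Something like the following should be safe, as zdb -l
only reads (the device name ada3p1 is taken from the first listing
above):

```shell
# Dump the four ZFS labels stored on the partition. Each label records
# the pool name, pool_guid, and txg, so a stale guid from an earlier
# pool configuration should show up here.
zdb -l /dev/ada3p1
```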

Regards,

Oliver



Received on Wed Jul 24 2013 - 04:22:29 UTC
