Re: problem booting to multi-vdev root pool [Was: kern/150503: [zfs] ZFS disks are UNAVAIL and corrupted after reboot]

From: Niclas Zeising <zeising@daemonic.se>
Date: Fri, 16 Nov 2012 17:13:09 +0100
On 11/16/12 16:45, Andriy Gapon wrote:
> on 13/11/2012 18:16 Guido Falsi said the following:
>> My idea, though it is just speculation and I could be very wrong, is that the
>> geom tasting code has some problem with multi-vdev root pools.
>
> Guido,
>
> you are absolutely correct.  The code for reconstructing/tasting a root pool
> configuration is modified upstream code, so it inherited a limitation from
> upstream: support for only a single top-level vdev in a root pool.
> I have an idea of how to add the missing support, but it turned out not to be
> something I could hack together in a couple of hours.
>
> So, instead I wrote the following patch that should fall back to using a root pool
> configuration from zpool.cache (if it's present there) for a multi-vdev root pool:
> http://people.freebsd.org/~avg/zfs-spa-multi_vdev_root_fallback.diff
>
> The patch also fixes a minor (one-time) memory leak.
>
> Guido, Bartosz,
> could you please test the patch?
>
> Apologies for the breakage.
>

Just to confirm, since I am holding back an update pending this fix.
If I have a raidz root pool with three disks, like this:
         NAME           STATE     READ WRITE CKSUM
         zroot          ONLINE       0     0     0
           raidz1-0     ONLINE       0     0     0
             gpt/disk0  ONLINE       0     0     0
             gpt/disk1  ONLINE       0     0     0
             gpt/disk2  ONLINE       0     0     0

Then I'm fine to update without issues.  The problem arises only if, for 
example, you have a mirror with striped disks, or a stripe with mirrored 
disks, that is, a pool with more than one top-level vdev, which it seems to 
me the original poster had.
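
For example (a hypothetical layout, just to illustrate what I mean by more 
than one top-level vdev), a pool striped across two mirrors would show two 
top-level entries in zpool status:

         NAME             STATE     READ WRITE CKSUM
         zroot            ONLINE       0     0     0
           mirror-0       ONLINE       0     0     0
             gpt/disk0    ONLINE       0     0     0
             gpt/disk1    ONLINE       0     0     0
           mirror-1       ONLINE       0     0     0
             gpt/disk2    ONLINE       0     0     0
             gpt/disk3    ONLINE       0     0     0
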
Am I correct, and is it therefore OK to update?
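
Also, since the patch falls back to the configuration stored in zpool.cache, 
I assume the fallback only helps if the root pool is actually recorded there.  
On a stock install the cache file is /boot/zfs/zpool.cache, and running zdb 
with no arguments should dump the cached configuration of each pool, so one 
can check that the pool shows up before rebooting:

         # zdb
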
Regards!
-- 
Niclas Zeising
Received on Fri Nov 16 2012 - 15:13:19 UTC