On Thu, Sep 19, 2019 at 12:01 PM Andreas Nilsson <andrnils_at_gmail.com> wrote:

> Seems like more of a BIOS enumeration issue. You should be able to set a
> boot order better suited for your setup there. And if that does not work,
> just moving the SATA cables around seems like the most straightforward
> solution.
>
> Although I think I've heard it is bad practice to use raw devices for ZFS,
> especially if you need to replace a drive, which one day happens to be a
> different revision with a few fewer blocks available. Then you will not be
> able to do the replace.

Back in the good ol' days of ZFS versions, when everyone was compatible with
Solaris, this was an issue. However, the on-disk format (or ZFS label setup?)
was changed to leave 1 MB of free space at the end of the drive to allow for
this. With ZFSv6, for example, if you used a raw device for the vdev, and
that disk died, and the replacement was 1 sector smaller, the replace would
fail. Today, with OpenZFS, the replace would succeed.

There's also 1 MB or so of reserved space in the pool, such that if you fill
the pool "100%", you can still do a "zfs destroy" of a dataset to free up
space. Previously, this would fail, as you need space in the pool to write
the metadata for the destroy before actually doing the destroy.

ZFS of today is much more resilient to these kinds of niggles that bit us
all back in the day. :D

-- 
Freddie Cash
fjwcash_at_gmail.com

Received on Thu Sep 19 2019 - 19:46:18 UTC
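To make the trailing-reserve idea concrete, here is a minimal arithmetic sketch of why leaving ~1 MB unused at the end of the disk makes a slightly smaller replacement drive acceptable. The sector size, reserve size, and helper functions are illustrative assumptions for this sketch, not OpenZFS's actual partitioning code.

```python
# Sketch: why reserving space at the end of the disk lets a replacement
# drive that is a few sectors smaller still work.  Numbers are illustrative.

SECTOR = 512                    # assumed bytes per sector
RESERVE = 1 * 1024 * 1024       # ~1 MB left unused at the end of the disk

def usable_size(disk_bytes):
    """Bytes the vdev actually claims: the whole disk minus the tail reserve."""
    return disk_bytes - RESERVE

def can_replace(old_disk_bytes, new_disk_bytes):
    """A replacement works if the new disk can hold the old vdev's data."""
    return new_disk_bytes >= usable_size(old_disk_bytes)

one_tb = 1000 ** 4
# Replacement drive is one sector smaller than the original:
smaller = one_tb - SECTOR

# Without the reserve, a 1-sector-smaller disk would be rejected outright;
# with the reserve, the replacement still fits with room to spare.
print(can_replace(one_tb, smaller))   # True
```

A drive more than the full reserve smaller would still be rejected, which is why the reserve only papers over small revision-to-revision capacity differences, not a genuinely smaller disk.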
This archive was generated by hypermail 2.4.0 : Wed May 19 2021 - 11:41:21 UTC