Re: ZFS: i/o error all block copies unavailable Invalid format

From: Peter Maloney <peter.maloney_at_brockmann-consult.de>
Date: Tue, 06 Dec 2011 08:36:46 +0100
On 06.12.2011 07:14, KOT MATPOCKuH wrote:
> Hello all!
>
> On 24 nov I updated sources via csup to RELENG_9 (9.0-PRERELEASE).
> After make installboot I successfully booted to single user.
> But after make installworld the system failed to boot with this message:
> ZFS: i/o error all block copies unavailable
> Invalid format
>
> The status command shows the status of all pools properly.
> root filesystem is not compressed.
>
> # zfsboottest /dev/gpt/rootdisk /dev/gpt/rootmirr
>   pool: sunway
> config:
>
>         NAME STATE
>         sunway ONLINE
>           mirror ONLINE
>             gpt/rootdisk ONLINE
>             gpt/rootmirr ONLINE
>
> Restoring the old /boot/zfsloader solved the issue.
> Before this I successfully updated 4 other systems at the same source
> level without any problems.
>
> My sys/boot/zfs/zfsimpl.c's version: 1.17.2.2 2011/11/19 10:49:03
>
> Where may the root cause of the problem be? And how can I debug it?
>

"Invalid format" sounds like the software doesn't understand the disks.

Check your pool (software) version with:
# zpool upgrade -v

Check your pool (on disk) version with (I forget the exact command):
# zpool get version sunway

My guess is that you upgraded the pool to the latest zfs version, but
left the old version of the bootloader in place.
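The comparison amounts to checking that the on-disk pool version is not newer than what the boot code supports. A minimal sketch in shell; the two version numbers below are hypothetical placeholders for the outputs of the commands above:

```shell
# Hypothetical values; substitute the real outputs:
#   software_ver: highest version printed by "zpool upgrade -v"
#   pool_ver:     output of "zpool get -H -o value version sunway"
software_ver=28
pool_ver=28
if [ "$pool_ver" -gt "$software_ver" ]; then
    echo "pool is newer than the boot code understands"
else
    echo "versions look compatible"
fi
```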

-------------

To fix an unbootable zfs root where the disks themselves are fine or
merely degraded, this is the general procedure. I don't know whether it
applies to your particular problem, but I am optimistic.


In this example, I copied a usb disk called zrootusb to one called
zrootusbcopy.

Import the pool using altroot and cachefile.
# zpool import -o altroot=/z -o cachefile=/tmp/zpool.cache zrootusbcopy

Set mount points (/ is fine; you don't need legacy... legacy is a
hassle, since you have to set it to / and back after umount every time
you repair things).
Since altroot is /z, the root will appear at /z/; do not prepend /z to
the mountpoint.
# zfs list | grep zrootusbcopy
# zfs set mountpoint=/ zrootusbcopy

(if you were copying a disk and wanted the copy to be bootable, this is
the point at which you would snapshot and zfs send; here the above is
the newly created bootable copy)

Make sure bootfs is set.
# zfs get bootfs zrootusbcopy
# zfs set bootfs=zrootusbcopy zrootusbcopy

Copy the cache file to the new pool's /boot/zfs:
# cp /tmp/zpool.cache /z/boot/zfs/zpool.cache

Verify that /boot/loader.conf is correct (pool name) and that zfs_load
is there:
vfs.root.mountfrom="zfs:zrootusbcopy"
zfs_load="YES"
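A quick way to verify that both lines are present (a sketch; the demo below greps a temporary sample file, but pointing conf at /z/boot/loader.conf is the real use):

```shell
# Demo against a temporary sample file; set conf=/z/boot/loader.conf for real use.
conf=$(mktemp)
cat > "$conf" <<'EOF'
zfs_load="YES"
vfs.root.mountfrom="zfs:zrootusbcopy"
EOF
grep -q '^zfs_load="YES"' "$conf" && echo "zfs_load OK"
grep -q '^vfs.root.mountfrom="zfs:' "$conf" && echo "mountfrom OK"
rm -f "$conf"
```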

If this is your only zfs:
# zfs umount -a

Otherwise, one at a time:
# zfs umount zrootusbcopy/var/empty
# zfs umount zrootusbcopy/usr/
...
or a script (bash, untested):
#begin script
# unmount children before their parents, so reverse the list order
for name in $(zfs list -H -o name | grep -E "^zrootusbcopy/" | sort -r); do
    zfs umount $name
done
zfs umount zrootusbcopy
#end script

Install the bootloader (possibly the only step you actually needed).
1. Figure out which disks and which partition number to put it on... I use:
# gpart show

2. Install. If it is a mirror, run this command twice, once for each
disk device.
# gpart bootcode -b /z/boot/pmbr -p /z/boot/gptzfsboot -i <partitionnumber> <diskdevice>
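For a mirror, the two invocations can be scripted. This is a dry run with hypothetical device names (ada0, ada1) and partition index 1; check gpart show for your real layout, and remove the echo to actually write the boot code:

```shell
# Dry run: prints the two gpart commands for a hypothetical ada0/ada1 mirror.
# Remove "echo" (and verify -i against "gpart show") before running for real.
for disk in ada0 ada1; do
    echo gpart bootcode -b /z/boot/pmbr -p /z/boot/gptzfsboot -i 1 "$disk"
done
```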


Then do not export the pool.
Reboot and try to boot your previously unbootable zfs root system.


Here is a thread where I suggested this method to someone and it worked
for him, although his error message was different.
http://forums.freebsd.org/showthread.php?t=26789
Received on Tue Dec 06 2011 - 06:36:55 UTC
