In the last episode (Sep 19), Axel said:
> Adam Jacob Muller <freebsd-current_at_adam.gs> writes:
> > On Sep 19, 2007, at 6:44 PM, Axel wrote:
> >> There is a file called /boot/zfs/zpool.cache that is kept in sync
> >> and loaded at boot time.
> >>
> >> If that's not there, e.g. because of where your /boot points,
> >> you're hosed.
> >
> > The file is there. Of note is that some of the prior reboots were
> > "unintentional" reboots, so it is possible that the file was
> > corrupt; however, it does not seem correct for ZFS to come up in a
> > state that shows drives as corrupted and/or unavailable. I believe
> > I have corrected the crashing issue, but it still does not seem
> > that this is the correct behavior.
>
> If you have a working root outside of ZFS, I'd do the following:
>
> 1) Rename the zpool.cache to something else, to be safe.
> 2) Reboot, make sure that /boot/zfs points to the right location,
>    and reimport the pools.
> 3) Should be fine from there on.
>
> I had sort of the same issue; the zpool.cache isn't documented too
> well yet. I only stumbled over it by doing a "lsmod" at the loader
> prompt; it's one reason root can be on ZFS before the hostid is set.
> If you set up ZFS and don't have the future /boot/zfs set right, it
> won't work because the information gets lost. With / on ZFS it's
> crucial to have /boot point to the actual UFS boot partition and not
> be somewhere in your ZFS /, because that gets ignored until it's
> mounted.
>
> It's a good idea to keep the actual old UFS / directory around,
> although only /boot in it gets used if you mount / from ZFS.

What I do is populate my UFS /.boot filesystem with /etc, /lib,
/libexec, /bin, and /sbin from my root filesystem, so if ZFS fails to
load it's easy to recover.

-- 
Dan Nelson
dnelson_at_allantgroup.com

Received on Thu Sep 20 2007 - 19:16:25 UTC
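As a rough sketch of the recovery sequence Axel outlines, assuming a
working UFS root and a pool named "tank" (the pool name is a
placeholder; substitute your own):

    # 1) Set the stale cache aside rather than deleting it outright.
    mv /boot/zfs/zpool.cache /boot/zfs/zpool.cache.bad

    # 2) Reboot so the kernel doesn't pick up the stale configuration.
    shutdown -r now

    # 3) After the reboot, re-import the pool(s); the import scans the
    #    attached devices and rewrites /boot/zfs/zpool.cache.
    zpool import tank        # a single, known pool
    zpool import -a          # or import every pool found on the devices

Checking "zpool status" afterwards should show the vdevs ONLINE rather
than corrupted or unavailable.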
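Dan's /.boot trick can be scripted along these lines; this is only a
sketch, and the /.boot mount point is his own layout rather than
anything standard:

    # Copy a minimal userland onto the UFS recovery partition (assumed
    # to be mounted on /.boot) so the box stays usable if ZFS won't load.
    for dir in /etc /lib /libexec /bin /sbin; do
        cp -Rp "$dir" /.boot/
    done

Rerunning the copy after upgrades keeps the recovery environment in
step with the live system.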