On Sep 19, 2007, at 6:44 PM, Axel wrote:
> Adam Jacob Muller <freebsd-current_at_adam.gs> writes:
>
>> Hello,
>> I have a server with two ZFS pools: one is an internal raid0 using 2
>> drives connected via ahc, the other is an external storage array with
>> 11 drives, also connected via ahc, using raidz. (This is a Dell 1650
>> and PV220S.)
>> On reboot, the pools do not come online on their own. Both pools
>> consistently show as failed.
>>
>> The exact symptoms vary, but I have seen many of the drives marked
>> variously as "corrupt" or "unavailable", and most zpool operations
>> fail with "pool is unavailable" errors.
>>
>> Here is the interesting part.
>> Consistently, 100% of the time, a zpool export followed by a zpool
>> import restores the arrays to an ONLINE status. Once the array is
>> online, it's quite stable (I'm loving ZFS, by the way; thank you to
>> everyone for the hard work on this, ZFS is fantastic) and works great.
>>
>> Does anyone have any ideas why this might occur and what the
>> solution is, if any?
>>
>> Any additional information can be provided on request. I am running
>> -CURRENT from approximately 1 week ago.
>>
>> -Adam
>>
>
> There is a file called /boot/zfs/zpool.cache that is kept in sync
> and loaded at boot time.
>
> If that's not there, e.g. because your /boot does not point to it,
> you're hosed.
>

The file is there. Of note, some of the prior reboots were "unintentional"
reboots, so it is possible that the file was corrupt; however, it does not
seem correct for ZFS to come up in a state that shows drives as corrupted
and/or unavailable. I believe I have corrected the crashing issue, but this
still does not seem like the correct behavior.

- Adam

Received on Wed Sep 19 2007 - 22:06:27 UTC
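
For anyone hitting the same symptom, the export/import workaround Adam
describes boils down to a couple of zpool commands. A minimal sketch,
assuming a pool named "tank" (the actual pool names are not given in the
thread):

    # Inspect the current pool state; after the problem reboot the pools
    # show up as failed, with member drives corrupt/unavailable.
    zpool status

    # Export the pool, removing it from the running system's view.
    zpool export tank

    # Re-import it; this rescans the member devices and updates the
    # system cache file (/boot/zfs/zpool.cache on FreeBSD).
    zpool import tank

    # With no pool name, "zpool import" just lists pools available
    # for import, which is useful if you are unsure of the name.
    zpool import

    # Confirm the cache file the loader reads at boot is present.
    ls -l /boot/zfs/zpool.cache

Since importing a pool rewrites zpool.cache, and that file is what gets
loaded at boot time (as Axel notes above), a stale or corrupted cache file
left behind by an unclean reboot would be consistent with the symptoms
described, and with export/import consistently fixing them.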