Re: ZFS panic under extreme circumstances (2/3 disks corrupted)

From: Ivan Voras <ivoras_at_freebsd.org>
Date: Mon, 25 May 2009 01:24:09 +0200
Thomas Backman wrote:
> 
> On May 24, 2009, at 09:02 PM, Thomas Backman wrote:
> 
>> So, I was playing around with RAID-Z and self-healing, when I decided
>> to take it another step and corrupt the data on *two* disks (well,
>> files via ggate) and see what happened. I obviously expected the pool
>> to go offline, but I didn't expect a kernel panic to follow!
>>
>> What I did was something resembling:
>> 1) create three 100MB files, then use ggatel create to turn them into
>> GEOM providers
>> 2) zpool create test raidz ggate{1..3}
>> 3) create a 100MB file inside the pool, md5 the file
>> 4) overwrite 10~20MB (IIRC) of disk2 with /dev/random, with dd
>> if=/dev/random of=./disk2 bs=1000k count=20 skip=40, or so (I now know
>> that I wanted *seek*, not *skip*, but it still shouldn't panic!)
>> 5) Check the md5 of the file: everything OK; zpool status shows a
>> degraded pool.
>> 6) Repeat step #4, but with disk 3.
>> 7) zpool scrub test
>> 8) Panic!
>> [...]
> FWIW, I couldn't replicate this when using seek (i.e., corrupting the
> middle of the "disk" rather than the beginning):

Did you account for the time factor? Between your steps 5 and 6,
wouldn't ZFS automatically begin data repair?
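One way to check: run "zpool status -v test" between the two overwrites;
non-zero CKSUM counters (or a resilver in progress) would mean ZFS had
already rewritten the damaged blocks it read back in step 5.

For reference, a rough sketch of the whole sequence with seek instead of
skip (untested; names and sizes are illustrative, and conv=notrunc keeps
dd from truncating the backing file):

  # create three 100 MB backing files and expose them as /dev/ggate{1..3}
  for i in 1 2 3; do
      truncate -s 100M disk$i
      ggatel create -u $i disk$i
  done
  zpool create test raidz ggate1 ggate2 ggate3

  # write a 100 MB test file and record its checksum
  dd if=/dev/random of=/test/bigfile bs=1m count=100
  md5 /test/bigfile

  # corrupt ~20 MB in the middle of the second member, in place
  dd if=/dev/random of=./disk2 bs=1000k count=20 seek=40 conv=notrunc
  md5 /test/bigfile   # should still match: raidz reconstructs on read
  zpool status test   # CKSUM errors reported against ggate2

  # corrupt the third member the same way, then scrub
  dd if=/dev/random of=./disk3 bs=1000k count=20 seek=40 conv=notrunc
  zpool scrub test    # the panic was seen with the original skip
                      # variant (start-of-file damage), not this one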



