Re: gpart failing with no such geom after gpt corruption

From: Bartosz Stec <bartosz.stec_at_it4pro.pl>
Date: Thu, 01 Apr 2010 21:42:21 +0200
On 2010-04-01 21:02, Robert Noland wrote:
>>>
>>> After a while I've noticed some SMART errors on ad1, so I've booted
>>> the machine with SeaTools for DOS and ran a long test. One bad sector
>>> was found and reallocated, nothing to worry about.
>>> As I was in SeaTools already, I've decided to adjust the LBA size on
>>> that disk (SeaTools can do that), because it was about 30MB larger
>>> than the other two, and because of that I had to adjust the size of
>>> the freebsd-zfs partition on that disk to match the exact size of the
>>> others (otherwise 'zpool create' will complain).
>>> So the LBA was adjusted and the system rebooted.
>>
>> I don't understand why you adjusted the LBA. You're using GPT
>> partitions, so couldn't you just make the zfs partition the same size
>> on all disks by sizing it to the smallest disk, and leave free space
>> at the end of the bigger ones?
>

Well yes, I could indeed, and that's exactly what I did the first time 
(before adjusting the LBA count). But since I was already using software 
that could adjust the LBA to make all the HDDs appear to be the same 
size, I decided to do it so I'd never have to remember about it while 
partitioning ;) At least 'gpart show' isn't showing any unused (wasted) 
space now ;) :

# gpart show
=>      34  78165293  ad0  GPT  (37G)
         34       128    1  freebsd-boot  (64K)
        162   2097152    2  freebsd-swap  (1.0G)
    2097314  76068013    3  freebsd-zfs  (36G)

=>      34  78165293  ad1  GPT  (37G)
         34       128    1  freebsd-boot  (64K)
        162   2097152    2  freebsd-swap  (1.0G)
    2097314  76068013    3  freebsd-zfs  (36G)

=>      34  78165293  ad2  GPT  (37G)
         34       128    1  freebsd-boot  (64K)
        162   2097152    2  freebsd-swap  (1.0G)
    2097314  76068013    3  freebsd-zfs  (36G)

>
> For that matter, my understanding is that ZFS just doesn't care.  If
> you have disks of different sizes in a raidz, the pool size will be
> limited by the size of the smallest device.  If those devices are
> replaced with larger ones, then the pool just grows to take advantage
> of the additional available space.
>
> robert.
>
Well, here's what man zpool says about zpool create:

    "(...) The use of differently sized  devices within  a  single raidz
    or mirror group is also flagged as an error unless -f is specified."

I know I could force it, I just didn't know if I should.

After all, it's just easier to type three times:

    gpart add -t freebsd-zfs -l diskN adN

to use all the free space on a device than to check the numbers on the 
other disks and type

    gpart add -b 2097314 -s 76068013 -t freebsd-zfs -l diskN adN

and that's how this whole story began :)
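For what it's worth, the -b/-s numbers don't have to be read off by eye; 
they can be derived from the fixed sizes of the preceding partitions. A 
minimal sketch (the sector counts are taken straight from the 'gpart 
show' output above; the gpart invocation itself is only echoed, since 
running it for real rewrites the partition table):

```shell
#!/bin/sh
# Sector counts from the 'gpart show' output earlier in this message.
total=78165293   # usable GPT sectors on each 37G disk
first=34         # first usable LBA reported by gpart
boot=128         # freebsd-boot partition size
swap=2097152     # freebsd-swap partition size

start=$((first + boot + swap))   # first sector after swap
size=$((total - boot - swap))    # everything that remains for freebsd-zfs

# Echo only -- the real command would modify the disk's partition table.
echo "gpart add -b $start -s $size -t freebsd-zfs -l diskN adN"
```

(Newer versions of gpart(8) also grow backup/restore subcommands, so the 
whole table of one disk can be replayed onto the others, but the arithmetic 
above is enough for this layout.)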

-- 
Bartosz Stec
Received on Thu Apr 01 2010 - 17:42:35 UTC
