Re: Resizing a zpool as a VMware ESXi guest ...

From: Edward Tomasz Napierała <trasz@FreeBSD.org>
Date: Thu, 16 Oct 2014 10:10:16 +0200
On 1010T1529, Matthew Grooms wrote:
> All,
> 
> I am a long-time user and advocate of FreeBSD and manage several 
> deployments of FreeBSD in a few data centers. Now that these 
> environments are almost always virtual, it makes sense for FreeBSD 
> to support basic features such as dynamic disk resizing. It looks like 
> most of the parts are intended to work. Kudos to the FreeBSD Foundation 
> for seeing the need and sponsoring dynamic growth of online UFS 
> filesystems via growfs. Unfortunately, it would appear that there are 
> still problems in this area, such as ...
> 
> a) cam/geom recognizing when a drive's size has increased
> b) zpool recognizing when a gpt partition size has increased
> 
> For example, if I do an install of FreeBSD 10 on VMware using ZFS, I see 
> the following ...
> 
> root@zpool-test:~ # gpart show
> =>      34  16777149  da0  GPT  (8.0G)
>          34      1024    1  freebsd-boot  (512K)
>        1058   4194304    2  freebsd-swap  (2.0G)
>     4195362  12581821    3  freebsd-zfs  (6.0G)
> 
> If I increase the VM disk size using VMware to 16G and rescan using 
> camcontrol, this is what I see ...
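
For reference, the rescan mentioned above is typically issued as
below; "all" can be narrowed down to a specific bus:target:lun
if desired:

  root@zpool-test:~ # camcontrol rescan all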

"camcontrol rescan" does not force fetching the updated disk size.
AFAIK there is no way to do that.  However, this should happen
automatically, if the "other side" properly sends proper Unit Attention
after resizing.  No idea why this doesn't happen with VMWare.
Reboot obviously clears things up.
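
To check what media size the kernel currently believes the disk to
have, without touching the partition table, diskinfo(8) can be used:

  root@zpool-test:~ # diskinfo -v da0

If the mediasize reported there is still the old one, nothing layered
on top of da0 will see the new space either.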

[..]
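
As a side note for readers following along: even once the kernel sees
the larger disk, the freebsd-zfs partition itself has to be grown
before the pool can use the space. With the layout shown earlier, that
would look roughly like this ("gpart recover" first moves the backup
GPT header to the new end of the disk; without an explicit -s,
"gpart resize" grows the partition to the maximum available):

  root@zpool-test:~ # gpart recover da0
  root@zpool-test:~ # gpart resize -i 3 da0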

> Now I want to claim the additional space for my zpool ...
> 
> root@zpool-test:~ # zpool status
>   pool: zroot
>  state: ONLINE
>   scan: none requested
> config:
> 
>         NAME                                          STATE     READ WRITE CKSUM
>         zroot                                         ONLINE       0     0     0
>           gptid/352086bd-50b5-11e4-95b8-0050569b2a04  ONLINE       0     0     0
> 
> root@zpool-test:~ # zpool set autoexpand=on zroot
> root@zpool-test:~ # zpool online -e zroot gptid/352086bd-50b5-11e4-95b8-0050569b2a04
> root@zpool-test:~ # zpool list
> NAME    SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
> zroot  5.97G   876M  5.11G    14%  1.00x  ONLINE  -
> 
> The zpool still appears to have only 5.11G free. Let's reboot and try 
> again ...

Interesting.  This used to work; actually either of those (autoexpand or
online -e) should do the trick.
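
One quick way to see whether ZFS has at least noticed the larger
vdev is the read-only expandsize property, assuming the zpool(8)
in use is new enough to have it:

  root@zpool-test:~ # zpool get expandsize zroot

A non-zero EXPANDSZ combined with an unchanged SIZE would point at
the expansion step failing, rather than the size detection.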

> root@zpool-test:~ # zpool set autoexpand=on zroot
> root@zpool-test:~ # zpool online -e zroot gptid/352086bd-50b5-11e4-95b8-0050569b2a04
> root@zpool-test:~ # zpool list
> NAME    SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
> zroot  14.0G   876M  13.1G     6%  1.00x  ONLINE  -
> 
> Now I have 13.1G free. I can add this space to any of my zfs volumes and 
> it picks the change up immediately. So the question remains, why do I 
> need to reboot the OS twice to allocate new disk space to a volume? 
> FreeBSD is first and foremost a server operating system. Servers are 
> commonly deployed in data centers. Virtual environments are now 
> commonplace in data centers, not the exception to the rule. VMware still 
> has the vast majority of the private virtual environment market. I 
> assume that most would expect things like this to work out of the box. 
> Did I miss a required step or is this fixed in CURRENT?

Looks like genuine bugs (or rather, one missing feature and one bug).
Filing PRs for those might be a good idea.
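
To summarize the flow that is supposed to work: once the size change
is actually visible to the kernel, the whole operation should reduce
to something like the following, with no reboots involved:

  root@zpool-test:~ # gpart recover da0
  root@zpool-test:~ # gpart resize -i 3 da0
  root@zpool-test:~ # zpool online -e zroot gptid/352086bd-50b5-11e4-95b8-0050569b2a04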
Received on Thu Oct 16 2014 - 06:10:22 UTC
