Resizing a zpool as a VMware ESXi guest ...

From: Matthew Grooms <mgrooms@shrew.net>
Date: Fri, 10 Oct 2014 15:29:44 -0500
All,

I am a long time user and advocate of FreeBSD and manage several 
deployments of FreeBSD in a few data centers. Now that these 
environments are almost always virtual, it would make sense for FreeBSD 
to support basic features such as dynamic disk resizing, and it looks 
like most of the parts are intended to work. Kudos to the FreeBSD 
Foundation for seeing the need and sponsoring dynamic growth of mounted 
UFS filesystems via growfs (a quick sketch of that workflow follows the 
list below). Unfortunately, it would appear that there are still 
problems in this area, such as ...

a) cam/geom recognizing when a drive's size has increased
b) zpool recognizing when a gpt partition size has increased
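
For reference, the UFS half of this does work well now: once the 
underlying partition has been grown, growfs can expand the mounted 
filesystem in place. A minimal sketch, with /dev/da0p2 standing in for 
a hypothetical UFS root partition at gpart index 2:

    gpart resize -i 2 da0       # grow the partition under the filesystem
    growfs -y /dev/da0p2        # grow the mounted UFS filesystem in place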

For example, if I do an install of FreeBSD 10 on VMware using ZFS, I see 
the following ...

root@zpool-test:~ # gpart show
=>      34  16777149  da0  GPT  (8.0G)
         34      1024    1  freebsd-boot  (512K)
       1058   4194304    2  freebsd-swap  (2.0G)
    4195362  12581821    3  freebsd-zfs  (6.0G)

If I increase the VM disk size using VMware to 16G and rescan using 
camcontrol, this is what I see ...

root@zpool-test:~ # camcontrol rescan all
Re-scan of bus 0 was successful
Re-scan of bus 1 was successful
Re-scan of bus 2 was successful
root@zpool-test:~ # gpart show
=>      34  16777149  da0  GPT  (8.0G)
         34      1024    1  freebsd-boot  (512K)
       1058   4194304    2  freebsd-swap  (2.0G)
    4195362  12581821    3  freebsd-zfs  (6.0G)
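
A diskinfo check should confirm whether CAM/GEOM noticed the new media 
size at all; I would expect it to still report the old 8G here ...

    diskinfo -v da0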

Everything still shows 8.0G, so CAM/GEOM never picked up the new disk 
size. If I reboot the VM, it does pick up the correct size ...

root@zpool-test:~ # gpart show
=>      34  16777149  da0  GPT  (16G) [CORRUPT]
         34      1024    1  freebsd-boot  (512K)
       1058   4194304    2  freebsd-swap  (2.0G)
    4195362  12581821    3  freebsd-zfs  (6.0G)

Now I have 16G to play with. The [CORRUPT] flag just means the backup 
GPT header is no longer at the last sector of the (now larger) disk, so 
I run gpart recover to rewrite it and then expand the freebsd-zfs 
partition to claim the additional space ...

root@zpool-test:~ # gpart recover da0
da0 recovered

root@zpool-test:~ # gpart show
=>      34  33554365  da0  GPT  (16G)
         34      1024    1  freebsd-boot  (512K)
       1058   4194304    2  freebsd-swap  (2.0G)
    4195362  12581821    3  freebsd-zfs  (6.0G)
   16777183  16777216       - free -  (8.0G)

root@zpool-test:~ # gpart resize -i 3 da0

root@zpool-test:~ # gpart show
=>      34  33554365  da0  GPT  (16G)
         34      1024    1  freebsd-boot  (512K)
       1058   4194304    2  freebsd-swap  (2.0G)
    4195362  29359037    3  freebsd-zfs  (14G)
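
This only works because the free space sits directly after the ZFS 
partition; with swap at the end of the disk I would have had to delete 
and re-create that partition first. gpart resize can also take an 
explicit size if you want to leave some of the new space unclaimed, 
e.g. something like ...

    gpart resize -i 3 -s 10G da0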

Now I want to claim the additional space for my zpool ...

root@zpool-test:~ # zpool status
   pool: zroot
  state: ONLINE
   scan: none requested
config:

         NAME                                          STATE     READ WRITE CKSUM
         zroot                                         ONLINE       0     0     0
           gptid/352086bd-50b5-11e4-95b8-0050569b2a04  ONLINE       0     0     0

root@zpool-test:~ # zpool set autoexpand=on zroot
root@zpool-test:~ # zpool online -e zroot gptid/352086bd-50b5-11e4-95b8-0050569b2a04
root@zpool-test:~ # zpool list
NAME    SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
zroot  5.97G   876M  5.11G    14%  1.00x  ONLINE  -
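
(If the ZFS version on this box supports it, the read-only expandsize 
property is worth checking at this point; it should say whether the 
pool even sees the extra vdev space as expandable ...

    zpool get expandsize zroot
)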

The zpool still only shows 5.11G free. Let's reboot and try again ...

root@zpool-test:~ # zpool set autoexpand=on zroot
root@zpool-test:~ # zpool online -e zroot gptid/352086bd-50b5-11e4-95b8-0050569b2a04
root@zpool-test:~ # zpool list
NAME    SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
zroot  14.0G   876M  13.1G     6%  1.00x  ONLINE  -

Now I have 13.1G free. I can add this space to any of my ZFS volumes 
and it picks up the change immediately. So the question remains: why do 
I need to reboot the OS twice to allocate new disk space to a volume? 
FreeBSD is first and foremost a server operating system. Servers are 
commonly deployed in data centers, and virtual environments are now the 
rule in those data centers, not the exception. VMware still has the 
vast majority of the private virtual environment market, so I assume 
most people would expect things like this to work out of the box. Did I 
miss a required step, or is this fixed in CURRENT?
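
For reference, the reboot-free sequence I would expect to work, built 
entirely from the commands above (step 1 is the part that does not do 
its job), is roughly ...

    camcontrol rescan all   # 1) should make CAM/GEOM pick up the new capacity
    gpart recover da0       # 2) move the backup GPT header to the new end of the disk
    gpart resize -i 3 da0   # 3) grow the freebsd-zfs partition into the free space
    zpool online -e zroot gptid/352086bd-50b5-11e4-95b8-0050569b2a04
                            # 4) expand the vdev to the new partition size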

Thanks,

-Matthew
