On 10/16/2014 3:10 AM, Edward Tomasz Napierała wrote:
> On 1010T1529, Matthew Grooms wrote:
>> All,
>>
>> I am a long time user and advocate of FreeBSD and manage several
>> deployments of FreeBSD in a few data centers. Now that these
>> environments are almost always virtual, it would make sense for
>> FreeBSD to support basic features such as dynamic disk resizing. It
>> looks like most of the parts are intended to work. Kudos to the
>> FreeBSD Foundation for seeing the need and sponsoring dynamic increase
>> of online UFS filesystems via growfs. Unfortunately, it would appear
>> that there are still problems in this area, such as ...
>>
>> a) cam/geom recognizing when a drive's size has increased
>> b) zpool recognizing when a gpt partition size has increased
>>
>> For example, if I do an install of FreeBSD 10 on VMware using ZFS, I
>> see the following ...
>>
>> root@zpool-test:~ # gpart show
>> =>      34  16777149  da0  GPT  (8.0G)
>>         34      1024    1  freebsd-boot  (512K)
>>       1058   4194304    2  freebsd-swap  (2.0G)
>>    4195362  12581821    3  freebsd-zfs   (6.0G)
>>
>> If I increase the VM disk size using VMware to 16G and rescan using
>> camcontrol, this is what I see ...
>
> "camcontrol rescan" does not force fetching the updated disk size.
> AFAIK there is no way to do that. However, this should happen
> automatically, if the "other side" properly sends a Unit Attention
> after resizing. No idea why this doesn't happen with VMware.
> Reboot obviously clears things up.
>
> [..]
>
>> Now I want to claim the additional 14 gigs of space for my zpool ...
>>
>> root@zpool-test:~ # zpool status
>>   pool: zroot
>>  state: ONLINE
>>   scan: none requested
>> config:
>>
>>       NAME                                         STATE   READ WRITE CKSUM
>>       zroot                                        ONLINE     0     0     0
>>         gptid/352086bd-50b5-11e4-95b8-0050569b2a04 ONLINE     0     0     0
>>
>> root@zpool-test:~ # zpool set autoexpand=on zroot
>> root@zpool-test:~ # zpool online -e zroot gptid/352086bd-50b5-11e4-95b8-0050569b2a04
>> root@zpool-test:~ # zpool list
>> NAME    SIZE  ALLOC  FREE   CAP  DEDUP  HEALTH  ALTROOT
>> zroot  5.97G   876M  5.11G  14%  1.00x  ONLINE  -
>>
>> The zpool appears to still only have 5.11G free. Let's reboot and try
>> again ...
>
> Interesting. This used to work; actually either of those (autoexpand or
> online -e) should do the trick.
>
>> root@zpool-test:~ # zpool set autoexpand=on zroot
>> root@zpool-test:~ # zpool online -e zroot gptid/352086bd-50b5-11e4-95b8-0050569b2a04
>> root@zpool-test:~ # zpool list
>> NAME    SIZE  ALLOC  FREE   CAP  DEDUP  HEALTH  ALTROOT
>> zroot  14.0G   876M  13.1G   6%  1.00x  ONLINE  -
>>
>> Now I have 13.1G free. I can add this space to any of my zfs volumes
>> and it picks the change up immediately. So the question remains: why
>> do I need to reboot the OS twice to allocate new disk space to a
>> volume? FreeBSD is first and foremost a server operating system.
>> Servers are commonly deployed in data centers. Virtual environments
>> are now commonplace in data centers, not the exception to the rule.
>> VMware still has the vast majority of the private virtual environment
>> market. I assume that most would expect things like this to work out
>> of the box. Did I miss a required step or is this fixed in CURRENT?
>
> Looks like genuine bugs (or rather, one missing feature and one bug).
> Filing PRs for those might be a good idea.

All,

I know this is a very late follow-up, but I spent some more time looking
at this today and found some additional information that I found quite
interesting.
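To recap, here is the sequence that, per the exchange above, should grow
the pool without any reboot, collected in one place. This is a sketch
only: the device name, partition index, pool name, and gptid come from
the quoted example, and the gpart steps are my own addition, since a
grown GPT disk also needs its backup table moved before the partition
can be resized:

    camcontrol readcap da0 -h    # CAM's view of the (hopefully updated) capacity
    gpart recover da0            # move the GPT backup table to the new end of disk
    gpart resize -i 3 da0        # grow the freebsd-zfs partition (index 3 above)
    zpool set autoexpand=on zroot
    zpool online -e zroot gptid/352086bd-50b5-11e4-95b8-0050569b2a04
    zpool list                   # SIZE should now reflect the added space

Of course, the first steps are moot until CAM itself notices the new
size, which is exactly what doesn't seem to happen here.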
I set up two VMs, one that acts as an iSCSI initiator (CURRENT) and
another that acts as a target (10.2-RELEASE). Both are running under
ESXi v5.5. There are two block devices on the initiator, da1 and da2,
that I used for resize testing ...

[root@iscsi-i /home/mgrooms]# camcontrol devlist
<NECVMWar VMware IDE CDR10 1.00>   at scbus1 target 0 lun 0 (cd0,pass0)
<VMware Virtual disk 1.0>          at scbus2 target 0 lun 0 (pass1,da0)
<VMware Virtual disk 1.0>          at scbus2 target 1 lun 0 (pass2,da1)
<FREEBSD CTLDISK 0001>             at scbus3 target 0 lun 0 (da2,pass3)

The da1 device is a virtual disk hanging off a VMware virtual SAS
controller ...

[root@iscsi-i /home/mgrooms]# pciconf ...
mpt0@pci0:3:0:0: class=0x010700 card=0x197615ad chip=0x00541000 rev=0x01 hdr=0x00
    vendor   = 'LSI Logic / Symbios Logic'
    device   = 'SAS1068 PCI-X Fusion-MPT SAS'
    class    = mass storage
    subclass = SAS

[root@iscsi-i /home/mgrooms]# camcontrol readcap da1 -h
Device Size: 10 G, Block Length: 512 bytes

[root@iscsi-i /home/mgrooms]# gpart show da1
=>      40  20971440  da1  GPT  (10G)
        40  20971440    1  freebsd-ufs  (10G)

The da2 device is an iSCSI LUN mounted from my FreeBSD 10.2 VM running
ctld ...

[root@iscsi-i /home/mgrooms]# iscsictl
Target name                    Target portal      State
iqn.2015-01.lab.shrew:target0  iscsi-t.shrew.lab  Connected: da2

[root@iscsi-i /home/mgrooms]# camcontrol readcap da2 -h
Device Size: 10 G, Block Length: 512 bytes

[root@iscsi-i /home/mgrooms]# gpart show da2
=>      40  20971440  da2  GPT  (10G)
        40        24       - free -  (12K)
        64  20971392    1  freebsd-ufs  (10G)
  20971456        24       - free -  (12K)

When I increased the size of da1 (the VMDK) and then re-ran 'camcontrol
readcap' without a reboot, it clearly showed that the disk size had
increased. However, geom failed to recognize the additional capacity ...

[root@iscsi-i /home/mgrooms]# camcontrol readcap da1 -h
Device Size: 16 G, Block Length: 512 bytes

[root@iscsi-i /home/mgrooms]# gpart show da1
=>      40  20971440  da1  GPT  (10G)
        40  20971440    1  freebsd-ufs  (10G)

Here is the interesting bit. I increased the size of da2 by modifying
the lun size in ctld.conf on the target and then issued an
/etc/rc.d/ctld reload.
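For reference, the target-side change amounts to bumping the LUN's size
in /etc/ctl.conf and reloading ctld. A sketch of what that stanza might
look like — the target name matches the one above, but the auth group,
portal group, and backing path are made-up placeholders, and a
file-backed LUN would also need the file itself grown first (e.g. with
truncate(1)):

    target iqn.2015-01.lab.shrew:target0 {
            auth-group no-authentication
            portal-group pg0

            lun 0 {
                    path /data/target0.img    # hypothetical backing file
                    size 16G                  # raised from 10G for this test
            }
    }

After the reload, the initiator sees the larger capacity, as the next
transcript shows.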
When I re-ran 'camcontrol readcap' on the initiator without a reboot, it
also showed that the disk size had increased, but this time geom
recognized the additional capacity as well ...

[root@iscsi-i /home/mgrooms]# camcontrol readcap da2 -h
Device Size: 16 G, Block Length: 512 bytes

[root@iscsi-i /home/mgrooms]# gpart show da2
=>      40  33554352  da2  GPT  (16G)
        40        24       - free -  (12K)
        64  20971392    1  freebsd-ufs  (10G)
  20971456  12582936       - free -  (6.0G)

I was then able to resize the partition and then grow the UFS
filesystem, all without rebooting the VM ...

[root@iscsi-i /home/mgrooms]# gpart resize -i 1 da2
da2p1 resized

[root@iscsi-i /home/mgrooms]# gpart show da2
=>      40  33554352  da2  GPT  (16G)
        40        24       - free -  (12K)
        64  33554304    1  freebsd-ufs  (16G)
  33554368        24       - free -  (12K)

[root@iscsi-i /home/mgrooms]# growfs da2p1
Device is mounted read-write; resizing will result in temporary write
suspension for /var/data2.
It's strongly recommended to make a backup before growing the file system.
OK to grow filesystem on /dev/da2p1, mounted on /var/data2, from 10GB to 16GB? [Yes/No] Yes
super-block backups (for fsck_ffs -b #) at:
 21798272, 23080512, 24362752, 25644992, 26927232, 28209472, 29491712,
 30773952, 32056192, 33338432

[root@iscsi-i /home/mgrooms]# df -h
Filesystem     Size    Used   Avail Capacity  Mounted on
/dev/da0p3      15G    1.2G     12G     9%    /
devfs          1.0K    1.0K      0B   100%    /dev
/dev/da1p1     9.7G     32M    8.9G     0%    /var/data1
/dev/da2p1      15G     32M     14G     0%    /var/data2

It's also worth noting that the additional space was not recognized by
gpart/geom on the initiator until after the 'camcontrol readcap da2'
command was run. In other words, I'm skeptical that it was a Unit
Attention notification that made the right thing happen, since it still
took manual prodding of cam to get the new disk geometry up into the
geom layer.

So what's the difference between the virtual SAS block device and the
iSCSI block device? I'm sure I have no idea. But in my mind this
invalidates two previous notions that were floated when I brought this
problem up late last year ...

1) There is in fact a command that can be manually run to force cam to
   read new disk geometry. And when that new geometry is read, it is, at
   least in some cases, passed on to geom.

2) While ESXi may or may not be issuing the correct SCSI notifications
   to help the OS pick up the new disk geometry automatically, it surely
   reports the new size when asked. Additionally, all the plumbing is in
   place to allow the entire disk, geom, and fs resize process to work
   without a reboot.

I'm just not sure why it seems to work with iSCSI but doesn't with the
virtualized SAS controller. Any thoughts?

Thanks,

-Matthew
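P.S. For anyone trying to reproduce this, a quick way to see which layer
is holding the stale size is to compare CAM's view against GEOM's for
the same device (a sketch; da1 is the SAS-attached disk from above):

    # CAM's view, straight from a SCSI READ CAPACITY
    camcontrol readcap da1 -h

    # GEOM's view of the same provider: check the mediasize line
    diskinfo -v da1

    # and what the partitioning code sees
    gpart show da1

In the failing case above, the first command reports 16G while the other
two should still say 10G.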