Mark Millard via freebsd-current wrote:
> Context:
>
> # gpart show -pl da0
> =>        40  468862048    da0  GPT  (224G)
>           40     532480  da0p1  efiboot0  (260M)
>       532520       2008         - free -  (1.0M)
>       534528   25165824  da0p2  swp12a  (12G)
>     25700352   25165824  da0p4  swp12b  (12G)
>     50866176  417994752  da0p3  zfs0  (199G)
>    468860928       1160         - free -  (580K)
>
> There is just one pool: zroot and it is on zfs0 above.
>
> # zpool list -p
> NAME           SIZE        ALLOC          FREE  CKPOINT  EXPANDSZ  FRAG  CAP  DEDUP  HEALTH  ALTROOT
> zroot  213674622976  71075655680  142598967296        -         -    28   33   1.00  ONLINE  -
>
> So FREE: 142_598_967_296
> (using _ to make it more readable)
>
> # zfs list -p zroot
> NAME          USED         AVAIL  REFER  MOUNTPOINT
> zroot  71073697792  135923593216  98304  /zroot
>
> So AVAIL: 135_923_593_216
>
> FREE-AVAIL == 6_675_374_080
>
>
> The questions:
>
> Is this sort of unavailable pool-free-space normal?
> Is this some sort of expected overhead that just is
> not explicitly reported? Possibly a "FRAG"
> consequence?

From zpoolprops(8):

     free    The amount of free space available in the pool.  By contrast,
             the zfs(8) available property describes how much new data can
             be written to ZFS filesystems/volumes.  The zpool free property
             is not generally useful for this purpose, and can be
             substantially more than the zfs available space.  This
             discrepancy is due to several factors, including raidz parity;
             zfs reservation, quota, refreservation, and refquota
             properties; and space set aside by spa_slop_shift (see
             zfs-module-parameters(5) for more information).

Received on Wed May 05 2021 - 22:01:25 UTC
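As a rough sanity check of the spa_slop_shift explanation quoted above: with the default spa_slop_shift of 5, ZFS sets aside about 1/32 of the pool as slop space (ignoring the clamping the module applies at the extremes). The numbers below are the zpool/zfs figures from the message; the comparison is only approximate, since other small overheads also contribute to the FREE-AVAIL gap.

```python
# Sketch, assuming the default spa_slop_shift = 5 (slop ~= pool size / 32).
size = 213674622976            # zpool SIZE for zroot, from zpool list -p
free = 142598967296            # zpool FREE
avail = 135923593216           # zfs AVAIL

slop = size >> 5               # 1/32 of the pool reserved as slop space
gap = free - avail             # the unexplained FREE-AVAIL difference

print(slop)                    # 6677331968
print(gap)                     # 6675374080 -- within ~2 MB of the slop estimate
```

So in this single-disk, no-raidz, no-reservation pool, essentially the whole discrepancy is the slop-space reservation.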