zpool list -p's FREE vs. zfs list -p's AVAIL? FREE-AVAIL == 6_675_374_080 (199G zroot pool)

From: Mark Millard <marklmi_at_yahoo.com>
Date: Wed, 5 May 2021 16:40:01 -0700
Context:

# gpart show -pl da0
=>       40  468862048    da0  GPT  (224G)
         40     532480  da0p1  efiboot0  (260M)
     532520       2008         - free -  (1.0M)
     534528   25165824  da0p2  swp12a  (12G)
   25700352   25165824  da0p4  swp12b  (12G)
   50866176  417994752  da0p3  zfs0  (199G)
  468860928       1160         - free -  (580K)

There is just one pool, zroot, and it is on zfs0 above.

# zpool list -p
NAME           SIZE        ALLOC          FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
zroot  213674622976  71075655680  142598967296        -         -     28     33   1.00    ONLINE  -

So FREE: 142_598_967_296
(using _ to make it more readable)
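
(As an aside, not part of the original session: the same figure can
also be read as a single parsable property; command only, output
omitted:)

# zpool get -p free zroot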

# zfs list -p zroot 
NAME          USED         AVAIL     REFER  MOUNTPOINT
zroot  71073697792  135923593216     98304  /zroot

So AVAIL: 135_923_593_216

FREE-AVAIL == 6_675_374_080
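
(A quick cross-check of that subtraction, done with sh arithmetic on
the two figures above; this command was not part of the original
session:)

# echo $(( 142598967296 - 135923593216 ))
6675374080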



The questions:

Is this sort of unavailable pool free space normal?
Is it some sort of expected overhead that is simply
not reported explicitly? Possibly a "FRAG"
consequence?
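
(One scale comparison, based on my assumption that the default
spa_slop_shift of 5 applies, i.e. that ZFS holds back roughly 1/32
of the pool size as slop space, which zfs's AVAIL subtracts but
zpool's FREE does not; sh arithmetic on the SIZE figure above:)

# echo $(( 213674622976 / 32 ))
6677331968

That is within about 2 MB of the 6_675_374_080 gap, which would
point at a fixed reservation rather than a FRAG effect, though that
is only a guess here.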


For reference:

# zpool status
  pool: zroot
 state: ONLINE
  scan: scrub repaired 0B in 00:31:48 with 0 errors on Sun May  2 19:52:14 2021
config:

        NAME        STATE     READ WRITE CKSUM
        zroot       ONLINE       0     0     0
          da0p3     ONLINE       0     0     0

errors: No known data errors


===
Mark Millard
marklmi at yahoo.com
( dsl-only.net went
away in early 2018-Mar)