Re: OpenZFS port updated

From: Maurizio Vairani <maurizio1018@gmail.com>
Date: Wed, 29 Apr 2020 17:44:44 +0200
On Fri, Apr 17, 2020 at 8:36 PM Ryan Moeller <freqlabs@freebsd.org>
wrote:

> FreeBSD support has been merged into the master branch of the openzfs/zfs
> repository, and the FreeBSD ports have been switched to this branch.
>
> OpenZFS brings many exciting features to FreeBSD, including:
>  * native encryption
>  * improved TRIM implementation
>  * most recently, persistent L2ARC
>
> Of course, avoid upgrading your pools if you want to keep the option to go
> back to the base ZFS.
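
As an aside, running zpool upgrade with no arguments only reports which
pools are not yet using all supported features; it changes nothing, so it
is a safe way to check pool status before deciding:

   zpool upgrade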
>
> OpenZFS can be installed alongside the base ZFS. Change your loader.conf
> entry to openzfs_load="YES" to load the OpenZFS module at boot, and set
> PATH to find the tools in /usr/local/sbin before /sbin. The base zfs tools
> are still basically functional with the OpenZFS module, so changing PATH in
> rc is not strictly necessary.
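
As a concrete sketch of that setup (where exactly to set PATH system-wide
is left open in the mail above; a shell profile is one assumption):

   # /boot/loader.conf: load the OpenZFS module instead of the base one
   openzfs_load="YES"

   # Shell profile: prefer the port's tools in /usr/local/sbin over /sbin
   export PATH="/usr/local/sbin:$PATH"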
>
> The FreeBSD loader can boot from pools with the encryption feature
> enabled, but the root/bootenv datasets must not be encrypted themselves.
>
> The FreeBSD platform support in OpenZFS does not yet include all features
> present in FreeBSD’s ZFS. Some notable changes/missing features include:
>  * many sysctl names have changed (legacy compat sysctls should be added
> at some point)
>  * zfs send progress reporting in process title via setproctitle
>  * extended 'zfs holds -r' (
> https://svnweb.freebsd.org/base?view=revision&revision=290015)
>  * vdev ashift optimizations (
> https://svnweb.freebsd.org/base?view=revision&revision=254591)
>  * pre-mountroot zpool.cache loading (for automatic pool imports)
>
> To the last point, this mainly affects the case where / is on ZFS and
> /boot is not or is on a different pool. OpenZFS cannot handle this case
> yet, but work is in progress to cover that use case. Booting directly from
> ZFS does work.
>
> If there are pools that need to be imported at boot other than the boot
> pool, OpenZFS does not automatically import yet, and it uses
> /etc/zfs/zpool.cache rather than /boot/zfs/zpool.cache to keep track of
> imported pools.  To ensure all pool imports occur automatically, a simple
> edit to /etc/rc.d/zfs will suffice:
>
> diff --git a/libexec/rc/rc.d/zfs b/libexec/rc/rc.d/zfs
> index 2d35f9b5464..8e4aef0b1b3 100755
> --- a/libexec/rc/rc.d/zfs
> +++ b/libexec/rc/rc.d/zfs
> @@ -25,6 +25,13 @@ zfs_start_jail()
>
>  zfs_start_main()
>  {
> +       local cachefile
> +
> +       for cachefile in /boot/zfs/zpool.cache /etc/zfs/zpool.cache; do
> +               if [ -f $cachefile ]; then
> +                       zpool import -c $cachefile -a
> +               fi
> +       done
>         zfs mount -va
>         zfs share -a
>         if [ ! -r /etc/zfs/exports ]; then
>
> This will probably not be needed long-term. It is not necessary if the
> boot pool is the only pool.
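
The same import can also be exercised by hand to confirm a cache file is
being honored; zpool import's -c (cache file) and -a (all pools) flags are
standard, and either cache location from the loop above can be given:

   # Import every pool recorded in the OpenZFS cache file
   zpool import -c /etc/zfs/zpool.cache -a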
>
> Happy testing :)
>
> - Ryan

I am testing the new OpenZFS on my laptop. I am running:

> uname -a
FreeBSD NomadBSD 12.1-RELEASE-p3 FreeBSD 12.1-RELEASE-p3 GENERIC  amd64

> freebsd-version -ku
12.1-RELEASE-p3
12.1-RELEASE-p4

I want to let ZFS write to the laptop's SSD only once every 1800 seconds,
so I have disabled synchronous writes:

> sudo zfs set sync=disabled zroot
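
Note that with sync=disabled, synchronous writes are committed only with
the regular transaction groups, so up to txg_timeout seconds of data can be
lost on a power failure. The property can be verified with zfs get:

   zfs get sync zroot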

and I have added these lines to /etc/sysctl.conf:

# Write to the SSD every 30 minutes.
# 19/04/20 Added support for OpenZFS.
# Force a Transaction Group (TXG) commit every 1800 seconds to aggregate
# more data per commit (default: 5 seconds).
# vfs.zfs.txg.timeout for ZFS, vfs.zfs.txg_timeout for OpenZFS.
vfs.zfs.txg.timeout=1800
vfs.zfs.txg_timeout=1800
# Throttle writes when dirty ("modified") data reaches 98% of
# dirty_data_max (default: 60%).
vfs.zfs.delay_min_dirty_percent=98
# Force a TXG commit when dirty data reaches 95% of dirty_data_max
# (default: 20%).
# vfs.zfs.dirty_data_sync_pct for ZFS, vfs.zfs.dirty_data_sync_percent
# for OpenZFS.
vfs.zfs.dirty_data_sync_pct=95
vfs.zfs.dirty_data_sync_percent=95
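
The same values can be applied to the running system with sysctl(8);
setting an OID that does not exist under the loaded module simply fails,
which also reveals which ZFS is active (a sketch, assuming a root shell):

   sysctl vfs.zfs.txg.timeout=1800 2>/dev/null || sysctl vfs.zfs.txg_timeout=1800
   sysctl vfs.zfs.delay_min_dirty_percent=98
   sysctl vfs.zfs.dirty_data_sync_pct=95 2>/dev/null || sysctl vfs.zfs.dirty_data_sync_percent=95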

To test the above settings I use the command 'zpool iostat -v -Td zroot
600', which prints per-vdev statistics every 600 seconds, each sample
preceded by a date stamp.

On the classic FreeBSD ZFS the output of the above command is similar to:

Tue Apr 28 14:44:08 CEST 2020
                                 capacity     operations    bandwidth
pool                          alloc   free   read  write   read  write
----------------------------  -----  -----  -----  -----  -----  -----
zroot                         31.9G  61.1G    206     38  5.52M   360K
  diskid/DISK-185156448914p2  31.9G  61.1G    206     38  5.52M   360K
----------------------------  -----  -----  -----  -----  -----  -----

Tue Apr 28 14:54:08 CEST 2020
                                 capacity     operations    bandwidth
pool                          alloc   free   read  write   read  write
----------------------------  -----  -----  -----  -----  -----  -----
zroot                         31.9G  61.1G      8      0   297K      0
  diskid/DISK-185156448914p2  31.9G  61.1G      8      0   297K      0
----------------------------  -----  -----  -----  -----  -----  -----

Tue Apr 28 15:04:08 CEST 2020
                                 capacity     operations    bandwidth
pool                          alloc   free   read  write   read  write
----------------------------  -----  -----  -----  -----  -----  -----
zroot                         31.9G  61.1G      0      0  14.4K      0
  diskid/DISK-185156448914p2  31.9G  61.1G      0      0  14.4K      0
----------------------------  -----  -----  -----  -----  -----  -----

Tue Apr 28 15:14:08 CEST 2020
                                 capacity     operations    bandwidth
pool                          alloc   free   read  write   read  write
----------------------------  -----  -----  -----  -----  -----  -----
zroot                         31.9G  61.1G      0      0  2.89K  18.4K
  diskid/DISK-185156448914p2  31.9G  61.1G      0      0  2.89K  18.4K
----------------------------  -----  -----  -----  -----  -----  -----

Tue Apr 28 15:24:08 CEST 2020
                                 capacity     operations    bandwidth
pool                          alloc   free   read  write   read  write
----------------------------  -----  -----  -----  -----  -----  -----
zroot                         31.9G  61.1G      0      0    798      0
  diskid/DISK-185156448914p2  31.9G  61.1G      0      0    798      0
----------------------------  -----  -----  -----  -----  -----  -----

Tue Apr 28 15:34:08 CEST 2020
                                 capacity     operations    bandwidth
pool                          alloc   free   read  write   read  write
----------------------------  -----  -----  -----  -----  -----  -----
zroot                         31.9G  61.1G      0      0  2.43K      0
  diskid/DISK-185156448914p2  31.9G  61.1G      0      0  2.43K      0
----------------------------  -----  -----  -----  -----  -----  -----

Tue Apr 28 15:44:08 CEST 2020
                                 capacity     operations    bandwidth
pool                          alloc   free   read  write   read  write
----------------------------  -----  -----  -----  -----  -----  -----
zroot                         31.9G  61.1G      0      0    587  14.2K
  diskid/DISK-185156448914p2  31.9G  61.1G      0      0    587  14.2K
----------------------------  -----  -----  -----  -----  -----  -----

where the SSD is written only once every 1800 seconds, as intended.

On the new OpenZFS the output is:

Tue Apr 28 15:58:09 2020
                                capacity     operations     bandwidth
pool                          alloc   free   read  write   read  write
----------------------------  -----  -----  -----  -----  -----  -----
zroot                         31.9G  61.1G    203     24  5.18M   236K
  diskid/DISK-185156448914p2  31.9G  61.1G    203     24  5.18M   236K
----------------------------  -----  -----  -----  -----  -----  -----

Tue Apr 28 16:08:09 2020
                                capacity     operations     bandwidth
pool                          alloc   free   read  write   read  write
----------------------------  -----  -----  -----  -----  -----  -----
zroot                         31.9G  61.1G      8      0   287K  9.52K
  diskid/DISK-185156448914p2  31.9G  61.1G      8      0   287K  9.52K
----------------------------  -----  -----  -----  -----  -----  -----

Tue Apr 28 16:18:09 2020
                                capacity     operations     bandwidth
pool                          alloc   free   read  write   read  write
----------------------------  -----  -----  -----  -----  -----  -----
zroot                         31.9G  61.1G      0      0  15.6K  10.0K
  diskid/DISK-185156448914p2  31.9G  61.1G      0      0  15.6K  10.0K
----------------------------  -----  -----  -----  -----  -----  -----

Tue Apr 28 16:28:09 2020
                                capacity     operations     bandwidth
pool                          alloc   free   read  write   read  write
----------------------------  -----  -----  -----  -----  -----  -----
zroot                         31.9G  61.1G      0      0  3.07K  12.2K
  diskid/DISK-185156448914p2  31.9G  61.1G      0      0  3.07K  12.2K
----------------------------  -----  -----  -----  -----  -----  -----

Tue Apr 28 16:38:09 2020
                                capacity     operations     bandwidth
pool                          alloc   free   read  write   read  write
----------------------------  -----  -----  -----  -----  -----  -----
zroot                         31.9G  61.1G      0      0    573  11.1K
  diskid/DISK-185156448914p2  31.9G  61.1G      0      0    573  11.1K
----------------------------  -----  -----  -----  -----  -----  -----

Tue Apr 28 16:48:09 2020
                                capacity     operations     bandwidth
pool                          alloc   free   read  write   read  write
----------------------------  -----  -----  -----  -----  -----  -----
zroot                         31.9G  61.1G      0      0  1.96K  10.6K
  diskid/DISK-185156448914p2  31.9G  61.1G      0      0  1.96K  10.6K
----------------------------  -----  -----  -----  -----  -----  -----

where the SSD is written in every 600-second sample.

What am I missing?

Thanks in advance.

--
Maurizio