Re: ZFS: destroying snapshots without compromising boot environments

From: Allan Jude <allanjude@freebsd.org>
Date: Sat, 28 Mar 2020 11:19:21 -0400
On 2020-03-28 03:24, Graham Perrin wrote:
> I imagine that some of the 2019 snapshots below are redundant.
> 
> Can I safely destroy any of them?
> 
> $ zfs list -t snapshot
> NAME                                                    USED  AVAIL  REFER  MOUNTPOINT
> copperbowl/ROOT/Waterfox@2020-03-20-06:19:45           67.0M      -  59.2G  -
> copperbowl/ROOT/r359249b@2019-08-18-04:04:53           5.82G      -  40.9G  -
> copperbowl/ROOT/r359249b@2019-08-18-11:28:31           4.32G      -  40.7G  -
> copperbowl/ROOT/r359249b@2019-09-13-18:45:27-0         9.43G      -  43.4G  -
> copperbowl/ROOT/r359249b@2019-09-19-20:03:26           5.13G      -  43.3G  -
> copperbowl/ROOT/r359249b@2019-09-24-20:45:59-0         7.67G      -  44.6G  -
> copperbowl/ROOT/r359249b@2020-01-09-17:05:57-0         7.66G      -  55.2G  -
> copperbowl/ROOT/r359249b@2020-01-11-14:15:47           7.41G      -  56.2G  -
> copperbowl/ROOT/r359249b@2020-03-17-21:57:17           12.0G      -  59.2G  -
> copperbowl/iocage/releases/12.0-RELEASE/root@jbrowsers    8K      -  1.24G  -
> copperbowl/poudriere/jails/head@clean                   328K      -  1.89G  -
> $ beadm list
> BE       Active Mountpoint  Space Created
> Waterfox -      -           12.2G 2020-03-10 18:24
> r357746f -      -            1.3G 2020-03-20 06:19
> r359249b NR     /          148.9G 2020-03-28 01:19
> $ beadm list -aDs
> BE/Dataset/Snapshot                              Active Mountpoint   Space Created
> 
> Waterfox
>   copperbowl/ROOT/Waterfox                       -      -           137.0M 2020-03-10 18:24
>     r359249b@2020-03-17-21:57:17                 -      -            59.2G 2020-03-17 21:57
>   copperbowl/ROOT/Waterfox@2020-03-20-06:19:45   -      -            67.0M 2020-03-20 06:19
> 
> r357746f
>   copperbowl/ROOT/r357746f                       -      -             1.2G 2020-03-20 06:19
>     Waterfox@2020-03-20-06:19:45                 -      -            59.2G 2020-03-20 06:19
> 
> r359249b
>   copperbowl/ROOT/r359249b@2019-08-18-04:04:53   -      -             5.8G 2019-08-18 04:04
>   copperbowl/ROOT/r359249b@2019-08-18-11:28:31   -      -             4.3G 2019-08-18 11:28
>   copperbowl/ROOT/r359249b@2019-09-13-18:45:27-0 -      -             9.4G 2019-09-13 18:45
>   copperbowl/ROOT/r359249b@2019-09-19-20:03:26   -      -             5.1G 2019-09-19 20:03
>   copperbowl/ROOT/r359249b@2019-09-24-20:45:59-0 -      -             7.7G 2019-09-24 20:45
>   copperbowl/ROOT/r359249b@2020-01-09-17:05:57-0 -      -             7.7G 2020-01-09 17:05
>   copperbowl/ROOT/r359249b@2020-01-11-14:15:47   -      -             7.4G 2020-01-11 14:15
>   copperbowl/ROOT/r359249b@2020-03-17-21:57:17   -      -            12.0G 2020-03-17 21:57
>   copperbowl/ROOT/r359249b                       NR     /             59.0G 2020-03-28 01:19
> $
> 


You can try to destroy the snapshot. If it is the basis of a clone, you
will get an error telling you that you would need to destroy the BE
first, and you might then decide to keep that snapshot. As long as you
don't pass the -R flag to 'zfs destroy dataset@snapshot', it will not
destroy the clones.
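
A minimal sketch of how that plays out, using the datasets from the
listing above (the exact error wording can vary between ZFS versions):

$ zfs destroy copperbowl/ROOT/r359249b@2020-03-17-21:57:17
cannot destroy 'copperbowl/ROOT/r359249b@2020-03-17-21:57:17':
snapshot has dependent clones
use '-R' to destroy the following datasets:
copperbowl/ROOT/Waterfox

You can also check ahead of time which snapshot a BE was cloned from by
reading its 'origin' property:

$ zfs get -H -o value origin copperbowl/ROOT/Waterfox
copperbowl/ROOT/r359249b@2020-03-17-21:57:17

If 'origin' reports '-', the dataset is not a clone and its snapshots
have no dependents.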

You can also use 'zfs promote' to turn the clone into the parent, which
makes the original parent the clone. This allows you to destroy that
original dataset and the snapshot while keeping the clone.
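
A sketch of the mechanics, again with the names above (illustrative
only; here copperbowl/ROOT/r359249b is the active BE, so you would not
actually destroy it):

$ zfs promote copperbowl/ROOT/Waterfox

The shared snapshot migrates to the promoted dataset, becoming
copperbowl/ROOT/Waterfox@2020-03-17-21:57:17, and the former parent is
now the clone:

$ zfs get -H -o value origin copperbowl/ROOT/r359249b
copperbowl/ROOT/Waterfox@2020-03-17-21:57:17

At that point the old parent and its snapshots could be destroyed
without touching the promoted dataset.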


-- 
Allan Jude


Received on Sat Mar 28 2020 - 14:19:36 UTC
