Re: Preparing ZFS drives

From: Freddie Cash <fjwcash_at_gmail.com>
Date: Tue, 12 Jan 2021 10:32:07 -0800
On Tue, Jan 12, 2021 at 10:10 AM joe mcguckin <joe_at_via.net> wrote:

> Folks,
>
> I want to buy some 16TB drives and raid them up
>
> How should I label and prepare the drives for ZFS?  Someone ought to write
> a ‘cookbook’ on that!
>

If these drives will be strictly data drives (not booting from them),
partitioning them is fairly easy.  You will want to determine a labelling
system for them.  Personally, I like to label the drives using a grid
system (columns are letters, rows are numbers).  For systems with multiple
JBODs attached, I include which JBOD chassis they're in as well.  For
example, a 24-bay chassis would use disk-a1, disk-a2, disk-a3 ... disk-d4,
disk-d5, disk-d6.  A system with 2 24-bay JBODs would use jbod1-a1,
jbod1-a2, jbod1-a3 ... jbod2-d4, jbod2-d5, jbod2-d6.  So you label the GPT
partition on each disk, and build the pool using the GPT partition labels.

gpart create -s gpt da0
gpart add -t freebsd-zfs -a 1M -l disk-a1 da0

gpart create -s gpt da1
gpart add -t freebsd-zfs -a 1M -l disk-a2 da1

And so on.  Add 1 disk, partition/label it based on its location.  Then add
the next disk.  And so on.
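
If you have a whole shelf of blank disks to do, a small sh loop can save
some typing.  This is only a sketch: the da0-da5 device names and the
a1-a6 bay mapping are assumptions, so verify which physical bay each daN
device actually sits in before labelling it.

i=1
for dev in da0 da1 da2 da3 da4 da5; do
    # Assumed mapping: daN in order == bays a1..a6; confirm before running.
    gpart create -s gpt ${dev}
    gpart add -t freebsd-zfs -a 1M -l disk-a${i} ${dev}
    i=$((i + 1))
done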

Then use the GPT labels to create the pool (they show up as devices under
the /dev/gpt/ directory):

zpool create mypool mirror gpt/disk-a1 gpt/disk-a2 mirror gpt/disk-a3 \
    gpt/disk-a4 mirror gpt/disk-a5 gpt/disk-a6
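
Once the pool exists, zpool status lists each vdev member by its gpt/
label instead of the raw daN device, which makes it much easier to tell
which physical disk is which when one fails:

zpool status mypool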


If you need to boot from these drives (make a root pool), then things get
more complicated.  Personally, I'd recommend using the 16 TB drives
strictly for a data pool, and then use some smaller SSDs for a root pool,
in a simple mirror vdev setup.  Separate the OS from the data.  :)
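
For completeness, here's a rough sketch of what a bootable (legacy BIOS)
mirrored root pool on two small SSDs might look like.  The ada0/ada1
device names, the labels, and the "zroot" pool name are just my
placeholders, and in practice the FreeBSD installer's root-on-ZFS option
does all of this (plus the datasets) for you:

gpart create -s gpt ada0
gpart add -t freebsd-boot -s 512k -l boot0 ada0
gpart add -t freebsd-zfs -a 1M -l ssd0 ada0
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada0
# repeat for ada1 (labels boot1/ssd1), then:
zpool create zroot mirror gpt/ssd0 gpt/ssd1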

> Do I need to start the volume on a particular sector boundary?
>

The "-a 1M" argument to gpart handles that for you.  It aligns the start
of the partition to a 1 MB boundary, and works out which sector that
corresponds to based on the sector size of the disk (512B or 4K).
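
You can double-check the resulting offsets and labels afterwards with
gpart show (the -l flag prints the GPT labels instead of partition types):

gpart show -l da0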


> Are the 4096 byte sector drives usable?
>

Yeah, they work without issues.  Try not to mix 512B and 4K drives within a
single vdev (it'll work, but may cause performance issues).  Mixing them in
a pool (a vdev using 512B drives, another vdev using 4K drives) is okay, so
long as you set the vfs.zfs.min_auto_ashift sysctl to 12 (force the minimum
block size used by ZFS to be 4K).  That way, in the future, you can replace
the 512B drives with 4K drives without any performance issues.
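
Setting it for the running system, plus making it stick across reboots,
looks like this:

sysctl vfs.zfs.min_auto_ashift=12
echo 'vfs.zfs.min_auto_ashift=12' >> /etc/sysctl.conf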

You can check the ashift value of each vdev with:

zdb | grep -B5 ashift

If it shows ashift=9 anywhere, then destroy the pool, change the sysctl
value, and recreate the pool.  Check to make sure it shows ashift=12 in zdb
output.
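
So the whole fix, assuming the pool is still empty and the same example
layout as above, would look roughly like:

zpool destroy mypool
sysctl vfs.zfs.min_auto_ashift=12
zpool create mypool mirror gpt/disk-a1 gpt/disk-a2 mirror gpt/disk-a3 \
    gpt/disk-a4 mirror gpt/disk-a5 gpt/disk-a6
zdb | grep -B5 ashift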

-- 
Freddie Cash
fjwcash_at_gmail.com