Re: ZFS: alignment/boundary for partition type freebsd-zfs

From: O. Hartmann <o.hartmann_at_walstatt.org>
Date: Tue, 26 Dec 2017 18:31:05 +0100
On Tue, 26 Dec 2017 10:13:09 -0700
Alan Somers <asomers_at_freebsd.org> wrote:

> On Tue, Dec 26, 2017 at 10:04 AM, O. Hartmann <ohartmann_at_walstatt.org>
> wrote:
> 
> > On Tue, 26 Dec 2017 11:44:29 -0500
> > Allan Jude <allanjude_at_freebsd.org> wrote:
> >  
> > > On 2017-12-26 11:24, O. Hartmann wrote:  
> > > > Running recent CURRENT on most of our lab's boxes, I needed to
> > > > replace and restore a ZFS RAIDZ pool. Doing so, I had to partition
> > > > the disks I was about to replace. The drives in question are 4k
> > > > block size drives with 512b emulation - as most drives are today.
> > > > I created the single, sole partition on each 4 TB drive via the
> > > > command sequence
> > > >
> > > > gpart create -s GPT adaX
> > > > gpart add -t freebsd-zfs -a 4k -l nameXX adaX
> > > >
> > > > After doing this on all the drives I was about to replace, something
> > > > drove me to check on the net, and I found a lot of websites giving
> > > > advice on how to prepare large, modern drives for ZFS. I think the
> > > > GNOP trick is not necessary any more, but many blogs recommend
> > > > performing
> > > >
> > > > gpart add -t freebsd-zfs -b 1m -a 4k -l nameXX adaX
> > > >
> > > > to put the partition start at the 1 MB boundary. I didn't do that;
> > > > my partitions all start at block 40.
> > > >
> > > > My question is: will this have severe performance consequences, or
> > > > is it negligible?
> > > >
> > > > Since most of the websites I found via "zfs freebsd alignment" are
> > > > from years ago, I'm a bit confused now, and the thought of repeating
> > > > this days-long resilvering process would cost me more hair than the
> > > > usual "fallout" ...
> > > >
> > > > Thanks in advance,
> > > >
> > > > Oliver
> > > >
> > >
> > > The 1mb alignment is not required. It is just what I do to leave room
> > > for the other partition types before the ZFS partition.
> > >
> > > However, the replacement for the GNOP hack is separate. In addition to
> > > aligning the partitions to 4k, you have to tell ZFS that the drive is 4k:
> > >
> > > sysctl vfs.zfs.min_auto_ashift=12
> > >
> > > (2^12 = 4096)
> > >
> > > Do this before you create the pool or add additional vdevs.
> > >  
> >
> > I didn't do the sysctl vfs.zfs.min_auto_ashift=12 :-(( when I created
> > the vdev. What is the consequence of that for the pool? I was under
> > the impression that this is necessary for "native 4k" drives.
> >
> > How can I check what ashift is in effect for a specific vdev?
> >  
> 
> It's only necessary if your drive stupidly fails to report its physical
> sector size correctly, and no other FreeBSD developer has already written a
> quirk for that drive.  Do "zdb -l /dev/adaXXXpY" for any one of the
> partitions in the ZFS raid group in question.  It should print either
> "ashift: 12" or "ashift: 9".
> 
> -Alan
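
As for the alignment question itself: a partition starting at LBA 40 on a
512b-emulated drive begins at 40 x 512 = 20480 bytes, which is a multiple of
4096, so the data area is 4k-aligned even without the 1m offset; the 1m start
mainly leaves room for other partitions, as Allan says. The layout can be
double-checked with gpart (a sketch - the device name and sizes below are
only illustrative):

gpart show ada0
=>        40  7814037088  ada0  GPT  (3.6T)
          40  7814037088     1  freebsd-zfs  (3.6T)

The start column of the freebsd-zfs line only has to be divisible by 8
(8 x 512b = 4096b) for the partition to be 4k-aligned.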

I checked as suggested and all partitions report ashift: 12.
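
For the archives, the check looks roughly like this (a sketch - the device
name and the abbreviated output are illustrative; zdb -l prints the label
contents, including the ashift, for the given partition):

zdb -l /dev/ada0p1 | grep ashift
        ashift: 12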

So I guess I'm safe and sound and do not need to rebuild the pools ...?
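
For the next pool I create or vdev I add, I'll set the sysctl Allan mentioned
beforehand and make it persistent across reboots - a sketch, assuming
4k-sector drives:

sysctl vfs.zfs.min_auto_ashift=12
echo 'vfs.zfs.min_auto_ashift=12' >> /etc/sysctl.conf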

-- 
O. Hartmann

I object to the use or transfer of my data for advertising purposes or for
market or opinion research (§ 28 Abs. 4 BDSG).

Received on Tue Dec 26 2017 - 16:31:47 UTC
