Re: ZFS questions

From: Dmitry Morozovsky <marck_at_rinet.ru>
Date: Mon, 25 Jun 2007 20:00:05 +0400 (MSD)
On Mon, 25 Jun 2007, Pawel Jakub Dawidek wrote:

PJD> On Mon, Jun 25, 2007 at 11:53:43AM +0400, Dmitry Morozovsky wrote:
PJD> > Dear colleagues,
PJD> > 
PJD> > I'm playing with ZFS, thinking about a future storage server.
PJD> > I have two questions about it (currently, at least ;-) 
PJD> > 
PJD> > 1. How can one determine which portion of a provider a zpool uses? It would 
PJD> > seem logical to me for `zpool status -v' to display this info.
PJD> 
PJD> I'm sorry, but I don't understand the question. If you use only one
PJD> partition of a disk, zpool status will show you which partition it is.
PJD> (but I don't think this was your question)

I was not clear enough.  In my experiments I used disks of different sizes and 
got resulting sizes that look strange to me, such as:

marck_at_woozlie:~# diskinfo ad{6,8,10,12,14}
ad6     512     80026361856     156301488       155061  16      63
ad8     512     80026361856     156301488       155061  16      63
ad10    512     400088457216    781422768       775221  16      63
ad12    512     250059350016    488397168       484521  16      63
ad14    512     320072933376    625142448       620181  16      63

marck_at_woozlie:~# zpool create tank raidz ad6 ad8 ad12
marck_at_woozlie:~# zpool list
NAME                    SIZE    USED   AVAIL    CAP  HEALTH     ALTROOT
tank                    222G    189K    222G     0%  ONLINE     -
marck_at_woozlie:~# zpool status
  pool: tank
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          raidz1    ONLINE       0     0     0
            ad6     ONLINE       0     0     0
            ad8     ONLINE       0     0     0
            ad12    ONLINE       0     0     0

errors: No known data errors
marck_at_woozlie:~# zfs list
NAME   USED  AVAIL  REFER  MOUNTPOINT
tank   120K   146G  24.0K  /tank

Why does zpool show 222G? zfs seems to show a more reasonable size of 146G 
(roughly the sum of the two 80G disks).

Next, replacing the first 80G disk with the 320G one:

marck_at_woozlie:~# zpool replace tank ad6 ad14

[wait]
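
(While waiting I simply polled the status; something like the loop below is 
enough, assuming the pool is named tank and that `zpool status' keeps reporting 
"resilver in progress" until it finishes:)

# poll until the resilver line disappears from zpool status
while zpool status tank | grep -q 'resilver in progress'; do
        sleep 60
done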

marck_at_woozlie:~# zpool status
  pool: tank
 state: ONLINE
 scrub: resilver completed with 0 errors on Mon Jun 25 19:52:09 2007
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          raidz1    ONLINE       0     0     0
            ad14    ONLINE       0     0     0
            ad8     ONLINE       0     0     0
            ad12    ONLINE       0     0     0

errors: No known data errors
marck_at_woozlie:~# zpool list
NAME                    SIZE    USED   AVAIL    CAP  HEALTH     ALTROOT
tank                    222G    363K    222G     0%  ONLINE     -
marck_at_woozlie:~# zfs list
NAME   USED  AVAIL  REFER  MOUNTPOINT
tank   120K   146G  24.0K  /tank

Still the same sizes...

And the last 80G component (80G -> 400G):

marck_at_woozlie:~# zpool replace tank ad8 ad10
marck_at_woozlie:~# zpool status
  pool: tank
 state: ONLINE
 scrub: resilver completed with 0 errors on Mon Jun 25 19:54:42 2007
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          raidz1    ONLINE       0     0     0
            ad14    ONLINE       0     0     0
            ad10    ONLINE       0     0     0
            ad12    ONLINE       0     0     0

errors: No known data errors
marck_at_woozlie:~# zpool list
NAME                    SIZE    USED   AVAIL    CAP  HEALTH     ALTROOT
tank                    222G    354K    222G     0%  ONLINE     -
marck_at_woozlie:~# zfs list
NAME   USED  AVAIL  REFER  MOUNTPOINT
tank   120K   146G  24.0K  /tank

Still the same. [the final numbers are at the end]

PJD> 
PJD> > 2. It is also possible to expand the array by iteratively swapping each drive 
PJD> > in the array with a bigger drive and waiting for ZFS to heal itself - the heal 
PJD> > time will depend on the amount of stored information, not the disk size. 
PJD> > [ http://en.wikipedia.org/wiki/ZFS#Limitations ]
PJD> > 
PJD> > My experiments do not show that the zpool size increases after a set of 
PJD> > `zpool replace' operations. Where did I go wrong?
PJD> 
PJD> Works here after zpool export/zpool import.

Well, I will check this.  However, this means that growing the pool requires 
stopping it (i.e. some downtime).  Still, that is much better than a 
backup/restore or even copying a multi-terabyte filesystem...
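
So, if I understand it correctly, the whole grow-in-place procedure would look 
roughly like this (just a sketch, with the disk names from my test box; each 
resilver must finish before the next replace):

# swap each member for a bigger disk, one at a time
zpool replace tank ad6 ad14
zpool status tank            # wait until the resilver is reported as completed
zpool replace tank ad8 ad10
zpool status tank            # again, wait for the resilver to finish
# the new capacity only shows up after the pool is re-imported
zpool export tank
zpool import tank
zpool list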

In my case, though, the numbers are still strange:

marck_at_woozlie:~# zpool export tank
marck_at_woozlie:~# zpool list
no pools available
marck_at_woozlie:~# zpool import
  pool: tank
    id: 5628286796211617744
 state: ONLINE
action: The pool can be imported using its name or numeric identifier.
config:

        tank        ONLINE
          raidz1    ONLINE
            ad14    ONLINE
            ad10    ONLINE
            ad12    ONLINE
marck_at_woozlie:~# zpool import tank
marck_at_woozlie:~# zpool status
  pool: tank
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          raidz1    ONLINE       0     0     0
            ad14    ONLINE       0     0     0
            ad10    ONLINE       0     0     0
            ad12    ONLINE       0     0     0

errors: No known data errors
marck_at_woozlie:~# zpool list
NAME                    SIZE    USED   AVAIL    CAP  HEALTH     ALTROOT
tank                    698G    207K    698G     0%  ONLINE     -
marck_at_woozlie:~# zfs list
NAME   USED  AVAIL  REFER  MOUNTPOINT
tank   128K   458G  24.0K  /tank

Strange: 458G or 698G? 320+250 does not match either number...
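
The only way I can make the numbers add up is to assume again that raidz1 sizes 
every member down to the smallest remaining disk, which is now the 250G ad12:

# smallest member is now ad12, 250059350016 bytes
echo '250059350016 / 1024^3' | bc -l      # ~232.9 GiB per member
echo '3 * 250059350016 / 1024^3' | bc -l  # ~698.7 GiB raw    -- essentially the 698G from zpool list
echo '2 * 250059350016 / 1024^3' | bc -l  # ~465.8 GiB usable -- the 458G from zfs list, less some reservation

So the extra space on the 320G and 400G disks would simply stay unused until 
ad12 is replaced as well -- is that correct?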

Thanks!

Sincerely,
D.Marck                                     [DM5020, MCK-RIPE, DM3-RIPN]
------------------------------------------------------------------------
*** Dmitry Morozovsky --- D.Marck --- Wild Woozle --- marck_at_rinet.ru ***
------------------------------------------------------------------------