Re: Uneven load on drives in ZFS RAIDZ1

From: Stefan Esser <se@freebsd.org>
Date: Mon, 19 Dec 2011 21:36:46 +0100
On 19.12.2011 17:22, Dan Nelson wrote:
> In the last episode (Dec 19), Stefan Esser said:
>> For quite some time I have observed an uneven distribution of load between
>> drives in a 4 * 2TB RAIDZ1 pool.  The following is an excerpt of a longer
>> log of 10 second averages logged with gstat:
>>
>> dT: 10.001s  w: 10.000s  filter: ^a?da?.$
>>  L(q)  ops/s    r/s   kBps   ms/r    w/s   kBps   ms/w   %busy Name
>>     0    130    106   4134    4.5     23   1033    5.2   48.8| ada0
>>     0    131    111   3784    4.2     19   1007    4.0   47.6| ada1
>>     0     90     66   2219    4.5     24   1031    5.1   31.7| ada2
>>     1     81     58   2007    4.6     22   1023    2.3   28.1| ada3
> [...]
>> zpool status -v
>>   pool: raid1
>>  state: ONLINE
>>   scan: none requested
>> config:
>>
>>         NAME        STATE     READ WRITE CKSUM
>>         raid1       ONLINE       0     0     0
>>           raidz1-0  ONLINE       0     0     0
>>             ada0p2  ONLINE       0     0     0
>>             ada1p2  ONLINE       0     0     0
>>             ada2p2  ONLINE       0     0     0
>>             ada3p2  ONLINE       0     0     0
> 
> Any read from your raidz device will hit three disks (the checksum is
> applied across the stripe, not on each block, so a full stripe is always
> read), so I think your extra I/Os are coming from somewhere else.
> 
> What's on p1 on these disks?  Could that be the cause of your extra I/Os? 
> Does "zpool iostat -v 10" give you even numbers across all disks?

This is a ZFS-only system. The first partition on each drive holds just
the gptzfsloader.
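For reference, the partition layout on each drive can be checked with
something like:

  gpart show ada0

which should list p1 as the small boot partition (holding just the boot
code) and p2 as the freebsd-zfs partition that belongs to the pool.

Here are a few consecutive 10 second intervals of "zpool iostat -v 10",
the command you suggested: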

               capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
raid1       4.41T  2.21T    139     72  12.3M   818K
  raidz1    4.41T  2.21T    139     72  12.3M   818K
    ada0p2      -      -    114     17  4.24M   332K
    ada1p2      -      -    106     15  3.82M   305K
    ada2p2      -      -     65     20  2.09M   337K
    ada3p2      -      -     58     18  2.18M   329K
----------  -----  -----  -----  -----  -----  -----

               capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
raid1       4.41T  2.21T    150     45  12.8M   751K
  raidz1    4.41T  2.21T    150     45  12.8M   751K
    ada0p2      -      -    113     14  4.34M   294K
    ada1p2      -      -    111     14  3.94M   277K
    ada2p2      -      -     62     16  2.23M   294K
    ada3p2      -      -     68     14  2.32M   277K
----------  -----  -----  -----  -----  -----  -----

               capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
raid1       4.41T  2.21T    157     86  12.3M  6.41M
  raidz1    4.41T  2.21T    157     86  12.3M  6.41M
    ada0p2      -      -    119     39  4.21M  2.24M
    ada1p2      -      -    106     31  3.78M  2.21M
    ada2p2      -      -     81     59  2.23M  2.23M
    ada3p2      -      -     57     39  2.06M  2.22M
----------  -----  -----  -----  -----  -----  -----

               capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
raid1       4.41T  2.21T    187     45  14.2M  1.04M
  raidz1    4.41T  2.21T    187     45  14.2M  1.04M
    ada0p2      -      -    117     13  4.27M   398K
    ada1p2      -      -    120     12  4.01M   384K
    ada2p2      -      -     89     12  2.97M   403K
    ada3p2      -      -     85     13  2.91M   386K
----------  -----  -----  -----  -----  -----  -----

The same difference in read operations per second as already shown by gstat ...
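Averaging the read operation columns over the four intervals shown above
gives approximately:

  ada0p2: (114+113+119+117)/4 = ~116 r/s
  ada1p2: (106+111+106+120)/4 = ~111 r/s
  ada2p2: ( 65+ 62+ 81+ 89)/4 =  ~74 r/s
  ada3p2: ( 58+ 68+ 57+ 85)/4 =  ~67 r/s

i.e. ada0/ada1 serve roughly 60% of all read operations and ada2/ada3 only
about 40%, while the write bandwidth is nearly identical on all four drives.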

Regards, STefan