Re: CURRENT: slow like crap! ZFS scrubbing and ports update > 25 min

From: O. Hartmann <ohartmann_at_walstatt.org>
Date: Thu, 23 Mar 2017 16:39:12 +0100
On Thu, 23 Mar 2017 15:38:05 +0300,
Slawa Olhovchenkov <slw_at_zxy.spb.ru> wrote:

> On Wed, Mar 22, 2017 at 10:25:24PM +0100, O. Hartmann wrote:
> 
> > On Wed, 22 Mar 2017 21:10:51 +0100,
> > Michael Gmelin <freebsd_at_grem.de> wrote:
> >   
> > > > On 22 Mar 2017, at 21:02, O. Hartmann <ohartmann_at_walstatt.org> wrote:
> > > > 
> > > > CURRENT (FreeBSD 12.0-CURRENT #82 r315720: Wed Mar 22 18:49:28 CET 2017 amd64) is
> > > > annoyingly slow! While a scrub is running on my 12 TB ZFS volume,
> > > > updating /usr/ports takes >25 min(!). That is an absolute record now.
> > > > 
> > > > I update world and the ports tree almost daily, and periodic(8) scrubs the ZFS
> > > > volumes every 35 days, as defined in /etc/defaults/periodic.conf. The ports tree
> > > > hasn't grown much, the content of the ZFS volume hasn't changed much (~ 100 GB
> > > > of churn; its fill is about 4 TB now), and this has been constant for ~ 2 years.
> > > > 
> > > > I've seen before that while the ZFS volume is being scrubbed, some operations,
> > > > even the update of /usr/ports, which resides on that ZFS RAIDZ volume, take a
> > > > bit longer than usual - but never as long as now!
> > > > 
> > > > Another box is nearly unusable while it is scrubbing, although it remained
> > > > usable during scrubs before. The change is dramatic ...
> > > >     
> > > 
> > > What do "zpool list", "gstat" and "zpool status" show?
> > > 
> > > 
> > >   
> > zpool list:
> > 
> > NAME       SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
> > TANK00  10.9T  5.45T  5.42T         -     7%    50%  1.58x  ONLINE  -
> > 
> > Deduplication is off right now; I used to have one ZFS filesystem with dedup enabled.
> > 
> > gstat: not shown here, but the drives comprising the volume (4x 3 TB) each show
> > 100% busy, with one drive always a bit lower (by ~10%), and that role rotates
> > through all four drives ada2, ada3, ada4 and ada5. Nothing unusual in that
> > situation. But the throughput is incredibly low, for example ada4:
> > 
> >  L(q)  ops/s    r/s   kBps   ms/r    w/s   kBps   ms/w   %busy Name
> >  2    174    174   1307   11.4      0      0    0.0   99.4| ada4
> > 
> > kBps (kilo Bits per second I presume) are peaking at ~ 4800 - 5000. On another box,
> > this is ~ 20x higher! Most of the time, read and write kBps stay at ~ 500 - 600.
> 
> kilo Bytes.
> 174 ops/s is normal for a typical 7200 RPM disk. The transfer size per
> request is too low: about 1307/174 = ~8 KB. I don't know the root cause
> of this. I see a raidz of 4 disks, so 8 KB * 3 data disks = ~24 KB per
> record. Maybe compression is enabled and ZFS uses a 128 KB record size?
> In that case this is the expected performance. Use a 1 MB or larger
> record size.
> 
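
For reference, the recordsize/compression theory above can be checked with stock
zfs(8) commands - a minimal sketch, where TANK00/ports is a hypothetical dataset
name to be adjusted to the real layout:

  # zfs get recordsize,compression TANK00
  # zfs get -r recordsize TANK00

Raising the record size only affects blocks written afterwards, and
recordsize=1M additionally requires the large_blocks pool feature:

  # zpool set feature@large_blocks=enabled TANK00
  # zfs set recordsize=1M TANK00/ports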

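One more thing visible in the quoted zpool list output: the pool still reports
DEDUP 1.58x although dedup is off now, so the dedup table (DDT) of the formerly
dedup'ed filesystem is still on disk and is traversed by the scrub, which can be
one source of scrub slowdown. Its size can be inspected with stock commands,
again as a sketch:

  # zpool status -D TANK00
  # zdb -DD TANK00

Both print a DDT histogram; a table of many millions of entries would be a
plausible reason for a scrub that turns the pool this sluggish.
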
I shut down the box overnight and rebooted this morning. After checking the output of
"zpool status" remotely, the scrub throughput was at ~229 MBytes/s - a value I'd expect -
peaking again at ~300 MBytes/s. I assume my cheap home hardware doesn't deliver more,
but at this point everything is as expected. The load, as observed via top -S, showed
~75 - 85% idle. On the other home box with a ZFS scrub active, the drives showed a
throughput of ~110 MBytes/s and 129 MBytes/s - also values I'd expect. But that system
was really jumpy even though the load showed ~80% idle (two cores / 4 threads, 8 GB RAM;
the first box mentioned, with the larger array, has 4 cores / 8 threads and 16 GB).
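
If the smaller box stays jumpy during scrubs, the scrub rate can be traded
against foreground I/O. A sketch, assuming the pre-scan-rewrite ZFS code in a
CURRENT of this vintage - sysctl names and defaults from memory, so verify them
with sysctl -d before relying on this:

  # sysctl vfs.zfs.scrub_delay        (ticks of delay per scrub I/O, default 4)
  # sysctl vfs.zfs.scan_idle          (ms without other I/O before the pool counts as idle, default 50)
  # sysctl vfs.zfs.top_maxinflight    (max in-flight scrub I/Os per top-level vdev, default 32)

  # sysctl vfs.zfs.scrub_delay=8

Raising vfs.zfs.scrub_delay makes the scrub yield more to interactive I/O at
the cost of a longer scrub; running "zpool iostat -v TANK00 1" next to gstat
shows whether the change has the intended effect.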

-- 
O. Hartmann

I object to the use or transfer of my data for advertising purposes or
for market or opinion research (§ 28 Abs. 4 BDSG).
