On Wed, Mar 22, 2017 at 10:25:24PM +0100, O. Hartmann wrote:
> Am Wed, 22 Mar 2017 21:10:51 +0100
> Michael Gmelin <freebsd_at_grem.de> schrieb:
>
> > > On 22 Mar 2017, at 21:02, O. Hartmann <ohartmann_at_walstatt.org> wrote:
> > >
> > > CURRENT (FreeBSD 12.0-CURRENT #82 r315720: Wed Mar 22 18:49:28 CET 2017 amd64) is
> > > annoyingly slow! While scrubbing is working on my 12 GB ZFS volume,
> > > updating /usr/ports takes >25 min(!). That is an absolute record now.
> > >
> > > I do an almost daily update of world and the ports tree, and have periodic scrubbing
> > > of the ZFS volumes every 35 days, as defined in /etc/defaults. The ports tree hasn't
> > > grown much, the content of the ZFS volume hasn't changed much (~100 GB; its fill is
> > > about 4 TB now), and this has been constant for ~2 years.
> > >
> > > I've experienced before that while scrubbing the ZFS volume, some operations, even
> > > the update of /usr/ports, which resides on that ZFS RAIDZ volume, take a bit longer
> > > than usual - but never as long as now!
> > >
> > > Another box is quite unusable while it is scrubbing, and it has been usable at such
> > > times before. The change is dramatic ...
> >
> > What do "zpool list", "gstat" and "zpool status" show?
> >
>
> zpool list:
>
> NAME     SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
> TANK00  10.9T  5.45T  5.42T         -     7%    50%  1.58x  ONLINE  -
>
> Deduplication is off right now; I had one ZFS filesystem with dedup enabled.
>
> gstat: not shown here, but the drives comprising the volume (4x 3 TB) show 100% busy
> each, though one drive is always a bit off (by 10% lower), and that one walks through
> all four drives ada2, ada3, ada4 and ada5. Nothing unusual in that situation. But the
> throughput is incredibly low, for example ada4:
>
> L(q)  ops/s    r/s   kBps   ms/r    w/s   kBps   ms/w   %busy Name
>    2    174    174   1307   11.4      0      0    0.0   99.4| ada4
>
> kBps (kilo Bits per second, I presume) are peaking at ~4800-5000. On another box, this
> is ~20x higher!
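A note on units: in gstat output, kBps is kilobytes per second, not kilobits. Dividing the read kBps by r/s gives the average transfer size per read request, which is the quantity discussed below. A minimal sketch using awk on a simplified copy of the ada4 line above (field positions assumed from the gstat header: L(q), ops/s, r/s, kBps, ...):

```shell
# Average KB per read request = read kBps / reads per second.
# Fields: L(q) ops/s r/s kBps ms/r w/s kBps ms/w %busy name
echo " 2 174 174 1307 11.4 0 0 0.0 99.4 ada4" |
awk '{ printf "%.1f KB/request\n", $4 / $3 }'
```

With the numbers above this works out to roughly 7.5 KB per request, i.e. the disk is saturated by many tiny reads rather than by bandwidth.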
Most of the time, the r and w kBps figures stay at ~500-600 - kilobytes, that is. 174 r/s is normal for a typical 7200 RPM disk. The transfer size per request is too low: about 1307/174 = ~8 KB. I don't know the root cause of this. I see a raidz of 4 disks, so that is 8*3 = ~24 KB per record. Maybe compression is enabled and ZFS uses the 128 KB record size? In that case this is the expected performance. Use a 1 MB or larger record size.

Received on Thu Mar 23 2017 - 11:38:09 UTC
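The arithmetic in the reply above can be sanity-checked. The assumption (mine, not stated explicitly in the thread) is a raidz1 layout over the 4 disks, so each logical record is striped across 3 data disks; multiplying the observed per-disk request size by 3 then estimates the logical record size actually being read:

```python
# Sanity check of the reply's arithmetic (assumes raidz1: 4 disks,
# 3 of which carry data for any given record).
per_disk_kb = 1307 / 174        # observed KB per read request on ada4
data_disks = 3                  # raidz1 over 4 disks -> 3 data disks
record_kb = per_disk_kb * data_disks

print(f"{per_disk_kb:.1f} KB per request per disk")
print(f"{record_kb:.1f} KB per logical record")
```

That lands in the ~22-24 KB range, well under the 128 KB default recordsize, which is what motivates the speculation about compression shrinking the on-disk records. Whether compression and the default recordsize are actually in effect can be checked with `zfs get recordsize,compression` on the affected datasets.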
This archive was generated by hypermail 2.4.0 : Wed May 19 2021 - 11:41:10 UTC