Re: ZFS performance regression in FreeBSD 12 ALPHA3->ALPHA4

From: Allan Jude <allanjude_at_freebsd.org>
Date: Wed, 5 Sep 2018 10:54:36 -0400
On 2018-09-05 10:04, Subbsd wrote:
> Hi,
> 
> I'm seeing a huge ZFS performance loss after upgrading FreeBSD 12
> to the latest revision (r338466 at the moment), and it is related to the ARC.
> 
> I cannot say which revision I was on before, except that newvers.sh
> pointed to ALPHA3.
> 
> Problems are observed if you try to limit the ARC. In my case:
> 
> vfs.zfs.arc_max="128M"
> 
> I know that this is very small. However, it worked without problems
> for two years.
> 
> When I send SIGINFO to a process that is currently working with ZFS, I
> see "arc_reclaim_waiters_cv":
> 
> e.g. when I type:
> 
> /bin/csh
> 
> I have time (~5 seconds) to press 'ctrl+t' several times before csh starts:
> 
> load: 0.70  cmd: csh 5935 [arc_reclaim_waiters_cv] 1.41r 0.00u 0.00s 0% 3512k
> load: 0.70  cmd: csh 5935 [zio->io_cv] 1.69r 0.00u 0.00s 0% 3512k
> load: 0.70  cmd: csh 5935 [arc_reclaim_waiters_cv] 1.98r 0.00u 0.01s 0% 3512k
> load: 0.73  cmd: csh 5935 [arc_reclaim_waiters_cv] 2.19r 0.00u 0.01s 0% 4156k
> 
> The same story with find or any other command:
> 
> load: 0.34  cmd: find 5993 [zio->io_cv] 0.99r 0.00u 0.00s 0% 2676k
> load: 0.34  cmd: find 5993 [arc_reclaim_waiters_cv] 1.13r 0.00u 0.00s 0% 2676k
> load: 0.34  cmd: find 5993 [arc_reclaim_waiters_cv] 1.25r 0.00u 0.00s 0% 2680k
> load: 0.34  cmd: find 5993 [arc_reclaim_waiters_cv] 1.38r 0.00u 0.00s 0% 2684k
> load: 0.34  cmd: find 5993 [arc_reclaim_waiters_cv] 1.51r 0.00u 0.00s 0% 2704k
> load: 0.34  cmd: find 5993 [arc_reclaim_waiters_cv] 1.64r 0.00u 0.00s 0% 2716k
> load: 0.34  cmd: find 5993 [arc_reclaim_waiters_cv] 1.78r 0.00u 0.00s 0% 2760k
> 
> This problem goes away after increasing vfs.zfs.arc_max.
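
For reference, vfs.zfs.arc_max is a loader tunable (on recent FreeBSD
it can also be adjusted at runtime via sysctl). A minimal sketch of
setting and verifying the cap from the report, assuming a stock
FreeBSD 12 base system:

  # /boot/loader.conf -- cap the ARC at 128MB (takes effect at boot)
  vfs.zfs.arc_max="128M"

  # Verify the configured cap and the current ARC size at runtime:
  sysctl vfs.zfs.arc_max
  sysctl kstat.zfs.misc.arcstats.c_max
  sysctl kstat.zfs.misc.arcstats.size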
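
The 'ctrl+t' status lines above are produced by SIGINFO; the same
status line can be requested from another terminal by signalling the
blocked process directly (5935 is the csh PID from the capture above):

  # Send SIGINFO to the stuck process; its status line, including the
  # wait channel (e.g. arc_reclaim_waiters_cv), is printed on its tty.
  kill -INFO 5935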

Previously, ZFS was not actually able to evict enough dnodes to keep
your arc_max under 128MB; it would have been much higher, based on the
number of open files you had. A recent improvement from upstream ZFS
(r337653 and r337660) was pulled in that fixed this, so setting an
arc_max of 128MB is much more effective now. That is causing the side
effect of "actually doing what you asked it to do", and in this case
what you are asking for is a bit silly: if you have a working set that
is greater than 128MB and you ask ZFS to use less than that, it will
have to constantly try to reclaim memory to stay under that very low bar.

-- 
Allan Jude
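
One way to observe the behaviour described above is to watch the ARC
sit pinned at its cap while the workload runs; a rough sketch using
the base system's arcstats sysctls (names as on FreeBSD 12):

  # If 'size' stays at 'c_max' while commands block in
  # arc_reclaim_waiters_cv, the working set exceeds the configured
  # cap and ZFS must reclaim continuously to stay under it.
  while true; do
      sysctl -n kstat.zfs.misc.arcstats.size kstat.zfs.misc.arcstats.c_max
      sleep 1
  done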

