On 7/9/07, Pawel Jakub Dawidek <pjd_at_freebsd.org> wrote:
> On Sat, Jul 07, 2007 at 02:26:17PM +0100, Doug Rabson wrote:
> > I've been testing ZFS recently and I noticed some performance issues
> > while doing large-scale port builds on a ZFS-mounted /usr/ports tree.
> > Eventually I realised that virtually nothing ever ended up on the vnode
> > free list. This meant that when the system reached its maximum vnode
> > limit, it had to resort to reclaiming vnodes from the various
> > filesystems' active vnode lists (via vlrureclaim). Since those lists
> > are not sorted in LRU order, this led to pessimal cache performance
> > after the system got into that state.
> >
> > I looked a bit closer at the ZFS code and poked around with DDB, and I
> > think the problem was caused by a couple of extraneous calls to vhold
> > when creating a new ZFS vnode. On FreeBSD, getnewvnode returns a vnode
> > which is already held (not on the free list), so there is no need to
> > call vhold again.
>
> Whoa! Nice catch... The patch works here - I did some pretty heavy
> tests, so please commit it ASAP.
>
> I also wonder if this can help with some of those 'kmem_map too small'
> panics. I was observing that the ARC cannot reclaim memory, and this may
> be because all vnodes, and thus their associated data, are being held.
>
> To ZFS users having problems with performance and/or stability of ZFS:
> can you test the patch and see if it helps?

I recompiled my system after Doug committed this patch 3 days ago, and I can
still panic my machine unless I set kern.maxvnodes to 50000, by doing an
ls -R after a recursive chown on some thousands of files and directories:

panic: kmem_malloc(16384): kmem_map too small: 326066176 total allocated

Before this patch the system panicked very easily, early in the chown run.
Now it completes the chown on the thousands of files and directories, and
only panics later, during the ls -R.
It's an improvement, but something else is still there...

--
Joao Barros

Received on Thu Jul 12 2007 - 21:39:49 UTC
This archive was generated by hypermail 2.4.0 : Wed May 19 2021 - 11:39:14 UTC