5.x w/auto-maxusers has insane kern.maxvnodes

From: Brian Fundakowski Feldman <green_at_FreeBSD.org>
Date: Tue, 04 May 2004 02:32:29 -0400
I have a 512MB system and had to adjust kern.maxvnodes (desiredvnodes) down 
to something reasonable after discovering that it was the sole cause of too 
much paging for my workstation.  The target number of vnodes was set to 
33000, which would not be so bad if it did not also leave so many more 
UFS, VM and VFS objects -- and the VM objects' associated inactive cache 
pages -- lying around.  I ended up saving a good 100MB of memory just by 
adjusting kern.maxvnodes back down to something reasonable.  Here are the 
current allocations (and some of the peak values):

ITEM            SIZE     LIMIT     USED    FREE  REQUESTS
FFS2 dinode:     256,        0,  12340,     95,  1298936
FFS1 dinode:     128,        0,    315,   3901,  2570969
FFS inode:       140,        0,  12655,  14589,  3869905
L VFS Cache:     291,        0,      5,    892,    51835
S VFS Cache:      68,        0,  13043,  23301,  4076311
VNODE:           260,        0,  32339,     16,    32339
VM OBJECT:       132,        0,  10834,  24806,  2681863
(The number of VM pages allocated specifically to vnodes is not easy to 
determine directly; all I can say is that I saved that much memory even 
though the objects themselves were never reclaimed after uma_zfree().)

We really need to look into making the desiredvnodes default target more 
sane before 5.x is -STABLE or people are going to be very surprised 
switching from 4.x and seeing paging increase substantially.  One more 
surprising thing is how many of these objects cannot be reclaimed because 
they are UMA_ZONE_NOFREE or have no zfree function.  If they were 
reclaimable, I'd have an extra 10MB back right now in my specific case, 
having just reduced the kern.maxvnodes setting and performed a failed 
umount on every partition to force the vnodes to be flushed.
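Until the default is tuned, the workaround is simply to cap the target by hand. A hypothetical tuning (16384 is an example value for illustration, not a recommendation -- the right number depends on RAM and workload):

```shell
# One-off, at runtime:
sysctl kern.maxvnodes=16384

# Or persistently, by adding this line to /etc/sysctl.conf:
# kern.maxvnodes=16384
```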

The vnodes are always kept on the free vnode list after free because they 
might still be used again without having flushed out all of their associated 
VFS information -- but they should always be in a state in which the list can 
be rescanned so they can actually be reclaimed by UMA if it asks for them.  All 
of the rest should need very little in the way of supporting uma_reclaim(),
so why are they not already like that?  One last good example I personally 
see of wastage for want of a zfree function is the pmap PV entries on i386:
PV ENTRY:         28,   938280,  59170, 120590, 199482221
Once again, why do those actually need to be non-reclaimable?
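To put a rough number on that one case, multiplying the zone's item size by its FREE column gives the memory sitting idle in the PV ENTRY zone that a reclaim cannot currently get back (figures taken from the line above; this counts only the free items, not any slab overhead):

```shell
# Idle memory held on the PV ENTRY zone's free list, per the stats above:
# 28-byte entries, 120590 of them currently free but unreclaimable.
size=28
free_items=120590
echo "idle PV entries: $((size * free_items / 1024)) KB"
```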

I hope you guys can shed some light on this, and hopefully some of you have 
ideas on how to make the maxusers auto-scaling more sane.

-- 
Brian Fundakowski Feldman                           \'[ FreeBSD ]''''''''''\
  <> green_at_FreeBSD.org                               \  The Power to Serve! \
 Opinions expressed are my own.                       \,,,,,,,,,,,,,,,,,,,,,,\
Received on Mon May 03 2004 - 21:32:30 UTC

This archive was generated by hypermail 2.4.0 : Wed May 19 2021 - 11:37:53 UTC