Re: vnode leak in FFS code ... ?

From: Marc G. Fournier <scrappy_at_hub.org>
Date: Sat, 4 Sep 2004 13:29:50 -0300 (ADT)
On Fri, 3 Sep 2004, Julian Elischer wrote:

> Marc G. Fournier wrote:
>> 
>> Just as a followup to this ... the server crashed on Thursday night around 
>> 22:00ADT, only just came back up after a very long fsck ... with all 62 VMs 
>> started up, and 1008 processes running, vnodes currently look like:
>
> are you using nullfs at all on your vms?

No, I stopped using that over a year ago, figuring that it was exacerbating
the problems we were having back then ... the only thing we used nullfs
for at that time was to 'identify' which files were specific to a VM vs. a
file on the template ... we moved to using NFS to do the same thing ...

The only things we use are unionfs and NFS ...

Basically, we do a 'mount_union -b <template> <vm>', where template is a
shared file system containing common applications, in order to reduce the
overall disk space used by each client.  So, for instance, on one of our
servers we have a 'template' VM that, when we need to add/upgrade an
application, we start up, log into and install from ports ... then we
rsync that template to the live server(s) so that those apps are available
within all VMs ...
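
Roughly, that part of the setup looks like the following (the paths and
hostname here are just examples, not our real layout):

    # the template is mounted *below* each VM's private tree, so the VM's
    # own files shadow the shared ones (example paths only)
    mount_union -b /vm/template /vm/client1

    # after installing/upgrading ports inside the template VM,
    # push the template out to the live server(s)
    rsync -a --delete /vm/template/ liveserver:/vm/template/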

We then use NFS to mount the 'base' file system for each VM, which
contains only the changed files specific to that VM (i.e. config files,
any apps the client happens to have installed, etc.), and use that to
determine storage usage ...

There is only one NFS mount point, covering the whole file system; we
don't do a mount per VM or anything like that ...
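
The accounting side amounts to something like this (the hostname, paths
and direction of the mount are made up for illustration):

    # a single NFS mount covering the area that holds every VM's 'base'
    # directory (one mount total, not one per VM)
    mount -t nfs vmhost:/vm /mnt/vmbase

    # per-VM storage usage: only files that differ from the template
    # actually live in each base directory
    du -sk /mnt/vmbase/client1 /mnt/vmbase/client2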

So, in the case of the system whose vnode count has risen quite high,
with 60 VMs ... there would be:

5 UFS mounts
 	- /, /var, /tmp, /usr and /vm
 	- /vm is where the virtual machines run off of
1 NFS mount
60 UNIONFS mount points
 	- one for each VM
60 procfs mount points
 	- one for each VM
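
If it helps picture it, bringing the per-VM mounts up amounts to a loop
along these lines (names are illustrative):

    # one unionfs mount and one procfs mount per VM
    for vm in client1 client2 client3; do      # ... through client60
        mount_union -b /vm/template /vm/$vm    # template below, VM files on top
        mount -t procfs proc /vm/$vm/proc      # the VM's /proc
    done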

Thanks to the work that David and Tor put in last summer on vnlru, this
works quite well, with the occasional crash when a 'fringe bug' gets
tweaked ... our record uptime on a server in this configuration, so far,
is 106 days ...
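
(The vnode numbers I keep quoting are just the standard sysctl counters,
watched with something like:)

    # current, free and maximum vnode counts
    sysctl vfs.numvnodes vfs.freevnodes kern.maxvnodes

    # crude way to watch the count creep up over time
    while :; do sysctl -n vfs.numvnodes; sleep 300; done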

The part that hurts the most is that the longer the server is up and
running, the greater the chance of a 12+ hour fsck run due to all the
'ZERO LENGTH DIRECTORY' errors :(

Whenever I get a good core dump, I try to post a report to GNATS, but
between everyone focusing on 5.x and those crying "unionfs is broken",
they tend to sit in limbo ... although most of the bugs I am able to find
most likely exist in the 5.x code as well, and fixing them would go one
more step towards improving unionfs ...

----
Marc G. Fournier           Hub.Org Networking Services (http://www.hub.org)
Email: scrappy_at_hub.org           Yahoo!: yscrappy              ICQ: 7615664
Received on Sat Sep 04 2004 - 14:56:23 UTC
