On Wed, Sep 01, 2004 at 05:12:06PM -0600, Scott Long wrote:
> Marc G. Fournier wrote:
> >On Wed, 1 Sep 2004, Allan Fields wrote:
> >
> >>On Wed, Sep 01, 2004 at 03:19:27PM -0300, Marc G. Fournier wrote:
> >>
> >>>I don't know if this is applicable to -current as well, but so far,
> >>>anything like this I've uncovered in 4.x has needed an equivalent fix
> >>>in 5.x, so I figured it can't hurt to ask, especially with everyone
> >>>working towards a STABLE 5.x branch ... I do not have a 5.x machine
> >>>running this sort of load at the moment, so I can't test or provide
> >>>feedback there ... all my 5.x machines are more or less desktops ...
> >>>
> >>>On Saturday, I'm going to try an unmount of the bigger file system,
> >>>to see if it frees everything up without a reboot ... but if someone
> >>>can suggest something to check to see if it is a) a leak and
> >>>b) fixable between now and then, please let me know ... again, this
> >>>is a 4.10 system, but most of the work that Tor and David have done
> >>>(re: vnodes) in the past relating to my servers has been applied to
> >>>5.x first and MFC'd afterwards, so I suspect that this too may be
> >>>something that applies to both branches ...
> >>
> >>Unmounting the filesystems will call vflush() and should flush all
> >>vnodes from under that mount point.  I'm not entirely sure if this
> >>is the best you can do w/o rebooting.
> >
> >Understood, and agreed ... *but* ... is there a way, before I do that,
> >of determining if this is something that needs to be fixed at the OS
> >level?  Is there a leak here that I can somehow identify while it's in
> >this state?
> >
> >The server has *only* been up 25 days
>
> It's really hard to tell if there is a vnode leak here.  The vnode pool
> is fairly fluid and has nothing to do with the number of files that are
> actually 'open'.  Vnodes get created when the VFS layer wants to access
> an object that isn't already in the cache, and only get destroyed when
> the object is destroyed.  A vnode that represents a file that was opened
> will stay 'active' in the system long after the file has been closed,
> because it's cheaper to keep it active in the cache than it is to
> discard it and then risk having to go through the pain of a namei()
> and VOP_LOOKUP() again later.  Only if the maxvnodes limit is hit will
> old vnodes start getting recycled to represent other objects.
[...]
> So you've obviously bumped up kern.maxvnodes well above the limits that
> are normally generated by the auto-tuner.  Why did you do that, if not
> because you knew that you'd have a large working set of referenced (but
> maybe not open all at once) filesystem objects?
[...]

There was a previous thread I found which also helps explain this further:

http://lists.freebsd.org/pipermail/freebsd-stable/2003-May/001266.html

Is it really the same issue now as then?

> If unmounting the filesystem doesn't result in numvnodes decreasing,
> then there definitely might be a leak.  Unfortunately, you haven't
> provided that kind of information yet.
>
> Scott

-- 
Allan Fields, AFRSL - http://afields.ca
 2D4F 6806 D307 0889 6125 C31D F745 0D72 39B4 5541
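P.S.  For the "does numvnodes actually drop after the unmount" question,
here is a minimal C sketch of the kind of before/after check I have in
mind; it's only an illustration, not something I've run on that box.
kern.maxvnodes should exist on both branches, but the name of the live
counter is an assumption on my part (vfs.numvnodes on 5.x, possibly
debug.numvnodes and debug.freevnodes on 4.x), so it simply tries each
name and skips whatever isn't readable.

/*
 * Print the vnode counters so they can be compared before and after
 * the unmount.  The sysctl names marked "assumed" and the int width
 * of the counters are guesses; adjust them to whatever
 * sysctl -a | grep vnode actually shows on the machine.
 */
#include <sys/types.h>
#include <sys/sysctl.h>

#include <stdio.h>

static void
show(const char *name)
{
	int val;
	size_t len = sizeof(val);

	if (sysctlbyname(name, &val, &len, NULL, 0) == 0)
		printf("%-18s %d\n", name, val);
	else
		printf("%-18s (not readable here)\n", name);
}

int
main(void)
{
	show("kern.maxvnodes");
	show("vfs.numvnodes");		/* 5.x name (assumed) */
	show("debug.numvnodes");	/* 4.x name (assumed) */
	show("debug.freevnodes");	/* 4.x name (assumed) */
	return (0);
}

Run it once, do the umount, run it again and compare.  The same numbers
can of course be pulled straight from the shell with sysctl(8); the
point is just to snapshot maxvnodes and the in-use count either side of
the unmount.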