Kris Kennaway wrote:
> On Tue, Apr 12, 2005 at 05:01:49AM -0700, Don Lewis wrote:
>
>> On 11 Apr, Kris Kennaway wrote:
>>
>>> On Mon, Apr 11, 2005 at 06:43:17PM -0700, Don Lewis wrote:
>>>
>>>> On 11 Apr, Kris Kennaway wrote:
>>>>
>>>>> I'm seeing the following problem: on 6.0 machines which have had a lot
>>>>> of FS activity in the past but are currently quiet, an unclean reboot
>>>>> will require an hour or more of fscking and will end up clearing
>>>>> thousands of inodes:
>>>>>
>>>>> [...]
>>>>> /dev/da0s1e: UNREF FILE I=269731 OWNER=root MODE=100644
>>>>> /dev/da0s1e: SIZE=8555 MTIME=Apr 18 02:29 2002 (CLEARED)
>>>>>
>>>>> /dev/da0s1e: UNREF FILE I=269741 OWNER=root MODE=100644
>>>>> [...]
>>>>>
>>>>> It's as if dirty buffers aren't being written out properly, or
>>>>> something. Has anyone else seen this?
>>>>
>>>> This looks a lot like it could be a vnode refcnt leak. Files won't get
>>>> removed from the disk while they are still in use (the old unlink while
>>>> open trick). Could nullfs be a factor?
>>>
>>> Yes, I make extensive use of read-only nullfs.
>>>
>>> Kris (fsck still running)
>>
>> It would also be interesting to find out why fsck is taking so long to
>> run. I don't see anything obvious in the code.
>
> I can take a transcript of the entire fsck next time if you like :-)
> (it ran for more than 5 hours on the 24G drive and was still going
> after I went to bed)
>
> Kris

Don might not know that your workload involves creating and deleting full
ports/ trees repeatedly, and those trees contain hundreds of thousands of
inodes each. If there is a reference count leak and those deletions aren't
ever being finalized, then there would be a whole lot of work for fsck to
do =-)

Might also explain why disks have been unexpectedly filling up on package
machines (like mine).

Scott

Received on Tue Apr 12 2005 - 13:01:48 UTC
This archive was generated by hypermail 2.4.0 : Wed May 19 2021 - 11:38:32 UTC