Michael Nottebrock wrote:
> I recently had a filesystem go bad on me in such a way that it was
> recognized as way bigger than it actually was, causing fsck to fail
> while trying to allocate an equally astronomical amount of memory (and
> my machine already had 1 gig of mem + 2 gigs of swap available).
> I just newfs'd and I'm now in the process of restoring data. However,
> I googled a bit on this and it seems that this kind of fs corruption
> occurs quite often, in particular due to power failures.

Yes, very troubling. You said that the alternate superblocks didn't help?

> Is there really no way that fsck could be made smarter about dealing
> with seemingly huge filesystems? Also, what kind of memory would be
> required to fsck a _real_ 11TB filesystem?

More than you can address in 32 bits. Reducing the RAM footprint of
fsck_ufs is something that desperately needs to be done, especially
since it's now easy to trash crashdumps that are saved in swap because
fsck is consuming so much memory.
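To put rough numbers on that, here is a back-of-envelope sketch (not
fsck's actual allocation code). The classic BSD fsck keeps roughly one
bit of block map per fragment, plus a one-byte state entry and a 16-bit
link count per inode, so its core tables scale with the size the
superblock claims. The fragment size and inode density below are assumed
newfs-style defaults, not values read from a real superblock:

	#include <stdio.h>
	#include <stdint.h>

	/*
	 * Rough estimate of fsck's core table sizes for a filesystem
	 * of a given size.  Assumes ~1 bit of block map per fragment,
	 * 1 byte of state map per inode, and an int16_t link count per
	 * inode.  Fragment size and inode density are assumptions.
	 */
	int
	main(void)
	{
		uint64_t fssize = 11ULL << 40;	/* 11TB filesystem */
		uint64_t fragsize = 2048;	/* assumed fragment size */
		uint64_t inodensity = 8192;	/* assumed bytes per inode */

		uint64_t nfrags = fssize / fragsize;
		uint64_t ninodes = fssize / inodensity;

		uint64_t blockmap = nfrags / 8;	/* 1 bit per fragment */
		uint64_t statemap = ninodes;	/* 1 byte per inode */
		uint64_t lncntp = ninodes * 2;	/* int16_t per inode */
		uint64_t total = blockmap + statemap + lncntp;

		printf("fragments: %ju, inodes: %ju\n",
		    (uintmax_t)nfrags, (uintmax_t)ninodes);
		printf("block map:   %ju MB\n", (uintmax_t)(blockmap >> 20));
		printf("state map:   %ju MB\n", (uintmax_t)(statemap >> 20));
		printf("link counts: %ju MB\n", (uintmax_t)(lncntp >> 20));
		printf("total:      ~%ju MB\n", (uintmax_t)(total >> 20));
		return (0);
	}

With those assumptions the tables alone come to roughly 5 GB, already
past the 4 GB a 32-bit address space can cover, and that's before
duplicate-block lists and directory tracking are counted. A corrupted
superblock that inflates the claimed size inflates these allocations in
direct proportion, which is why fsck falls over the way it does.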
Scott

Received on Mon Nov 29 2004 - 02:14:57 UTC