I recently had a filesystem go bad on me in such a way that it was recognized as way bigger than it actually was, causing fsck to fail while trying to allocate an equally astronomical amount of memory (and my machine already had 1 Gig of mem + 2 Gig of swap available). I just newfs'd and I'm now in the process of restoring data.

However, I googled a bit on this, and it seems that this kind of fs corruption occurs quite often, in particular after power failures. Is there really no way that fsck could be made smarter about dealing with seemingly huge filesystems? Also, what kind of memory would be required to fsck a _real_ 11TB filesystem?

-- 
   ,_,   | Michael Nottebrock               | lofi_at_freebsd.org
 (/^ ^\) | FreeBSD - The Power to Serve     | http://www.freebsd.org
   \u/   | K Desktop Environment on FreeBSD | http://freebsd.kde.org

Received on Mon Nov 29 2004 - 01:48:46 UTC
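[For scale, here is a rough back-of-envelope sketch of the memory question. All numbers in it are assumptions for illustration, not measured fsck_ffs figures: it assumes 16 KB blocks, one inode per 32 KB of space, one bit of used/free block-map state per block, and a few bytes of link-count state per inode.]

```python
# Hypothetical estimate of fsck working-set size for an 11 TB filesystem.
# Block size, inode density and per-object costs are assumed, not measured.

TB = 1000 ** 4          # disk-vendor terabyte
fs_size = 11 * TB       # 11 TB filesystem
block_size = 16 * 1024  # assumed 16 KB blocks

nblocks = fs_size // block_size
# fsck needs at least a used/free map: one bit per block
block_map_bytes = nblocks // 8

# assume one inode per 32 KB of space, ~4 bytes of state per inode
ninodes = fs_size // (32 * 1024)
inode_state_bytes = ninodes * 4

total_mb = (block_map_bytes + inode_state_bytes) / 2 ** 20
print(f"blocks: {nblocks:,}, inodes: {ninodes:,}")
print(f"approx. fsck state: {total_mb:.0f} MB")
```

Under these assumptions the per-inode state dominates, and the total already lands in the low gigabytes, which would be consistent with a corrupted (inflated) size field pushing the allocation far past available RAM + swap.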
This archive was generated by hypermail 2.4.0 : Wed May 19 2021 - 11:38:23 UTC