David Schultz wrote:
> Thus spake Terry Lambert <tlambert2_at_mindspring.com>:
> > o Put a counter in the first superblock; it would be
> > incremented when the BG fsck is started, and reset
> > to zero when it completes.  If the counter reaches
> > 3 (or some command line specified number), then the
> > BG flagging is ignored, and a full FG fsck is then
> > performed instead.  I like this idea because it will
> > always work, and it's not actually a hack, it's a
> > correct solution.
>
> I'm glad you like it because AFAIK, it is already implemented.  ;-)

Nope.  What's implemented is the FS_NEEDSFSCK flag.  But that flag is not set in the superblock flags field as *the very first thing done*.  Thus a failure that results in a panic will not set the flag in pfatal(), since fsck never gets that far.  Probably the correct thing to do is to set the flag as the very first operation; then it would work as expected.  FWIW, it looks like the code in pfatal() wanted to be in main(), since it complains about not being able to run in the background, the same way main() does.

However, this still leaves a race window.  The reason the panic happens is that FreeBSD is running processes on a corrupt FS.  Even in the best case, the panic may occur whenever anything is loaded off the FS, so it could happen on init, or on fsck itself, etc.  So really, the only solution is a counter that the FS kernel code counts up, and which is reset to zero when a BG fsck completes successfully -- say, grabbing the first byte of fs_sparecon32[].

BTW, this still leaves a failure case: the BG fsck has to be able to complete successfully... and even that is not enough to stave off a future panic from an undetected error that the fsck didn't see, because it was only pruning CG bitmaps.  So the correct place to zero the counter is, once again, in the kernel, as a result of a successful unmount from a non-panic shutdown.

This does mean that three (or "count") consecutive power failures get you a FG fsck, but that's probably livable (if you were that certain there was no corruption, you could boot to a shell and override the "count" parameter to the FG fsck trigger threshold).
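Purely to pin down the bookkeeping I have in mind (none of this is real FreeBSD code -- the structure, field, and function names are made up, and the real counter would just be a byte of spare superblock space such as fs_sparecon32[0]):

#include <stdio.h>

/*
 * Toy stand-in for the on-disk superblock.  In FFS this would be struct fs
 * from <ufs/ffs/fs.h>, with the counter stashed in spare space.  Names and
 * numbers here are invented for illustration.
 */
struct toy_superblock {
	int	sb_bgfsck_failures;	/* bumped by the kernel, not by fsck */
};

#define	BG_FSCK_LIMIT	3		/* the "count" threshold; command-line overridable */

/* Kernel side: called when mounting a dirty filesystem read-write. */
static void
mount_dirty(struct toy_superblock *sb)
{
	sb->sb_bgfsck_failures++;
	/* ...push the superblock to disk before any user I/O proceeds... */
}

/* Kernel side: called on a successful unmount from a non-panic shutdown. */
static void
unmount_clean(struct toy_superblock *sb)
{
	sb->sb_bgfsck_failures = 0;	/* only a clean shutdown earns a reset */
	/* ...push the superblock to disk... */
}

/* fsck side: is background fsck still trustworthy, or do we go foreground? */
static int
background_fsck_ok(const struct toy_superblock *sb, int limit)
{
	return (sb->sb_bgfsck_failures < limit);
}

int
main(void)
{
	struct toy_superblock sb = { 0 };
	int crash;

	for (crash = 1; crash <= 4; crash++) {
		mount_dirty(&sb);
		printf("crash %d: %s\n", crash,
		    background_fsck_ok(&sb, BG_FSCK_LIMIT) ?
		    "BG fsck allowed" : "BG flagging ignored, full FG fsck");
	}
	unmount_clean(&sb);		/* the counter only resets here */
	return (0);
}

The point of putting both the increment and the reset in the kernel is that fsck can panic the box (or never run at all) and the count still ends up reflecting reality.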
> > o Implement "soft read-only".  The place that most of
> > the complaints are coming from is desktop users, with
> > relatively quiescent machines.  Though swap is used,
> > it does not occur in an FS partition.  As a result,
> > the FS could be marked "read-only" for long periods of
> > time.  This marking would be in memory.  The clean bit
> > would be set on the superblock.  When a write occurs,
> > the clean bit would be reset to "dirty", and committed
> > to disk prior to the write operation being permitted
> > to proceed (a stall barrier).  I like this idea because,
> > for the most part, it eliminates fsck, both BG and FG,
> > on systems that crash while it's in effect.  The net
> > result is a system that is statistically much more
> > tolerant of failures, but which still requires another
> > safety net, such as the previous solution.
>
> I was thinking of doing something like this myself as part of an
> ``idle timeout'' for disks.  (Marking the filesystem clean after a
> period of quiescence would actually interfere with ATA disks'
> built-in mechanism for spinning down after a timeout, which is
> important for laptops, so the OS would have to track the true
> amount of idle time.)  Annoyingly, I can never get the disk
> containing /var to remain quiescent for long while cron is running
> (even without any crontabs), and I hope this can be solved without
> disabling cron or adding a nontrivial hack to bio.

We implemented this when we implemented soft updates in FFS under Windows at Artisoft.  That was back before ATX power supplies were widespread, and we needed to be tolerant of users who simply turned off the power switch without running the Windows 95 shutdown sequence.

I dunno about cron.  I think its "noticing" crontab changes "automatically" has maybe made it too smart for its own good.  Cron updates the "access" time on the crontab file every time it runs, which is once a second.  If you disabled this for fstat, the problem would go away.  I'm not sure the semantics are OK, though.

The old, pre-"smarter" cron would not have this problem: it ran on intervals, slept for long periods (until the next job was scheduled to run), and you had to hit it over the head with "kill -HUP" to tell it the file changed.  Probably the correct thing to do is to use old-style long delta intervals, and register a kevent interest in file modifications (a rough sketch is at the end of this message).

The cruddy thing is, if the filesystem were really read-only, then the access time update wouldn't happen.  Catch-22.  I think it's useful to distinguish the POSIX semantics here: "shall be scheduled for update" is not the same thing, really, as "shall be updated".  So, in practice, you could cache the access time update for long periods, as long as the correct time was marked in memory and the write was scheduled to occur "eventually".  So it's possible there is an "out", without having to worry about fixing cron so it's not so darn aggressive.

Gotta wonder how much rewriting of one area of the disk with great frequency you can handle before it becomes a cause of enough disk wear to shorten the MTBF.  8-(.
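Here is a rough sketch of the kevent idea mentioned above (the path and the timeout are illustrative, and this is not code from any cron; it just shows the kqueue(2)/kevent(2) interface used the way I'm describing):

#include <sys/types.h>
#include <sys/event.h>
#include <sys/time.h>

#include <err.h>
#include <fcntl.h>
#include <stdio.h>

int
main(void)
{
	const char *crontab = "/var/cron/tabs/root";	/* illustrative path */
	struct kevent change, event;
	struct timespec next_job = { 60, 0 };	/* stand-in for "time until the next scheduled job" */
	int fd, kq, n;

	if ((fd = open(crontab, O_RDONLY)) == -1)
		err(1, "open %s", crontab);
	if ((kq = kqueue()) == -1)
		err(1, "kqueue");

	/*
	 * Register interest in modifications to the crontab, instead of
	 * polling it: writes, attribute changes, renames, and deletes.
	 */
	EV_SET(&change, fd, EVFILT_VNODE, EV_ADD | EV_CLEAR,
	    NOTE_WRITE | NOTE_ATTRIB | NOTE_RENAME | NOTE_DELETE, 0, NULL);

	for (;;) {
		/* Sleep until the next job is due, or the file changes. */
		n = kevent(kq, &change, 1, &event, 1, &next_job);
		if (n == -1)
			err(1, "kevent");
		if (n == 0) {
			printf("timeout: run whatever jobs are now due\n");
			continue;
		}
		printf("crontab changed (fflags 0x%x): reload it\n",
		    (unsigned)event.fflags);
		/*
		 * If the file was replaced outright (NOTE_RENAME/NOTE_DELETE),
		 * this descriptor still points at the old vnode and would need
		 * to be reopened; omitted to keep the sketch short.
		 */
	}
}

The nice property is that the daemon only touches the disk when a crontab actually changes or a job is actually due, which is exactly what the soft read-only scheme wants.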
-- 
Terry