Re: [autofs] problems with "dirty" UFS2 partitions

From: O. Hartmann <o.hartmann_at_walstatt.org>
Date: Tue, 8 Aug 2017 13:26:21 +0200
On Mon, 07 Aug 2017 23:48:15 -0700
Cy Schubert <Cy.Schubert_at_komquats.com> wrote:


Just for convenience, I "glued" Warner Losh's messages below and reply inline
as usual.

> In message <20170808071758.6a815d59_at_freyja.zeit4.iv.bundesimmobilien.de>,
> "O. Hartmann" writes:
> > Hello,
> > 
> > we're running a NanoBSD-based appliance which resides on a small SoC and
> > utilises an mSATA SSD for logging, database storage and mail folders. The
> > operating system is recent CURRENT, as it is still under development.
> > 
> > The problem is that from time to time, without knowing or seeing the
> > reason, the automounted partitions become "dirty" (UFS2 partitions, no
> > ZFS due to memory and performance limitations). Journaling is enabled.
> > 
> > When the partitions on the SSD become "dirty", logging or accessing them
> > isn't possible anymore, and for some reason I do not see any log entries
> > reporting this (maybe because all logs also go to that disk, since the
> > logs would pollute the serial console/console and the console is used for
> > maintenance purposes/ssh terminal).
> > 
> > Is it possible to check the filesystem automatically on bootup? On
> > ordinary FreeBSD systems with fstab-based filesystems this happens via
> > the rc init infrastructure, but autofs filesystems seem to stand somewhat
> > aside from this procedure.  
> 
> I'd be interested in finding out whether your system either panicked or simply 
> failed to unmount the filesystems in question during a boot or shutdown. Is 
> the SSD being removed before FreeBSD has the chance to unmount it? I 
> think if we answer these questions we're more than halfway there.

The system in question logs onto this mSATA SSD, and the filesystem is
mounted/unmounted via autofs. I do not see any system/core faults when doing a
reboot, and the cases where the filesystem is unclean after a reboot are rare.
But even a rare occurrence is deadly when the system is required to log (it is
a routing/PBX/DNS/firewalling system with FAX and answering machine/recording
facilities).

The only clue I have is that the unmount attempt by autounmountd, while logging
data is still being written, leaves the filesystem in an unclean condition. But
the question then is what causes this condition in the first place.
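
One thing I will probably try is to give autounmountd(8) a much longer idle
timeout, so it does not attempt the unmount while the loggers are still busy.
If I read the man page right, something along these lines in /etc/rc.conf
should do it (the timeout value is only a guess for illustration):

    autofs_enable="YES"
    # let a filesystem sit idle for an hour before autounmountd
    # tries to unmount it (the default is 600 seconds)
    autounmountd_flags="-t 3600"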

> 
> Warner has a good suggestion worth considering.
> 
> Another option might be to use amd program maps. The program map being a 
> program or script that would run fsck prior to issuing a mount. Having said 
> that, this treats the symptom rather than addressing the cause. It's 
> preferred to discover the cause so that autofs (or amd) can mount a clean 
> filesystem.

Is this also possible with the in-kernel autofs facility? I replaced the amd
daemon with the more modern autofs feature and - sorry - I didn't look into the
man page while writing this mail. I'll check that out.
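
If the executable maps described in auto_master(5) behave like amd's program
maps, I could imagine a small wrapper that runs fsck before handing the entry
to automountd. Completely untested, and the device node and mount prefix below
are invented; I am also not certain about the exact output format automountd
expects. In /etc/auto_master:

    /media/ssd    /usr/local/etc/auto_ssd

and /usr/local/etc/auto_ssd made executable:

    #!/bin/sh
    # executable autofs map: automountd calls it with the key as $1
    # and parses whatever it prints as the map entry
    key="$1"
    dev="/dev/ada1p1"    # invented device node

    # preen-check the filesystem before it gets mounted
    fsck -p -t ufs "${dev}" >/dev/null 2>&1

    # emit the map entry (options, then location); I am not sure
    # whether the key has to be prepended here
    echo "-fstype=ufs,rw :${dev}"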

The main question is whether the condition described above - writing log data
and unmounting at the same time - results in an unresolvable race condition,
and whether I should then simply mount the SSD filesystem via /etc/fstab. The
box boots off an SD card; the mSATA SSD is for logging/data only, and I want
to make the setup as robust as possible, with the main aim of keeping the
firewall/router online so that traffic keeps traversing instead of being
blocked when the system fails to mount a filesystem that is not necessary for
survival. To have logs or to have traffic passing is the essential question to
answer here ...
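
If I go the /etc/fstab route after all, I would probably add the "failok"
option, so that a failed mount of the log SSD does not stop the boot and the
router/firewall part keeps running. Roughly like this (device node and mount
point invented for the example):

    # /etc/fstab on the appliance (sketch)
    /dev/ada1p1    /var/log/ssd    ufs    rw,noatime,failok    2    2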


> 
> 

[from Warner Losh]
> Can't you just list them in /etc/fstab with the noauto option, but with a
> non-zero number listed in the 'pass' number column? I know nanobsd doesn't
> generate things this way, but maybe it should....
>
> Warner

I had never thought of this - will it force a check of a dirty filesystem
even when it is mounted via autofs? I considered /etc/fstab and autofs to be
mutually exclusive - in my naive view ...
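
Just to make sure I understand the suggestion: the filesystem would still be
mounted by autofs, but the boot-time fsck would pick it up anyway because of
the non-zero pass number, while "noauto" keeps mount -a from touching it.
Something like this, with an invented device node and mount point:

    # checked by the boot-time fsck (pass 2), but not mounted by mount -a
    /dev/ada1p1    /var/log/ssd    ufs    rw,noauto    0    2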

Thank you very much,

Oliver 
Received on Tue Aug 08 2017 - 09:26:32 UTC
