Re: jails, ZFS, deprecated jail variables and poudriere problems

From: O. Hartmann <ohartmann_at_walstatt.org>
Date: Thu, 29 Aug 2019 14:26:38 +0200
On Wed, 28 Aug 2019 13:57:00 +0200
Alexander Leidinger <Alexander_at_leidinger.net> wrote:

> Quoting "O. Hartmann" <ohartmann_at_walstatt.org> (from Tue, 27 Aug 2019
> 10:11:54 +0200):
>
> > We have a single ZFS pool (raidz), call it pool00, and this pool00 contains a
> > ZFS dataset pool00/poudriere which we want to attach exclusively to a jail.
> > pool00/poudriere contains a complete clone of a former, now decommissioned
> > machine and is usable by the host bearing the jails. The jail, named
> > poudriere,
> > has these config parameters set in /etc/jail.conf as recommended:
> >
> >         enforce_statfs=         "0";

now set to
	enforce_statfs=		"1";

> >
> >         allow.raw_sockets=      "1";
> >
> >         allow.mount=            "1";
> >         allow.mount.zfs=        "1";
>
> The line above is what is needed, and what is replacing the sysctl
> you've found.
>
> >         allow.mount.devfs=      "1";
> >         allow.mount.fdescfs=    "1";
> >         allow.mount.procfs=     "1";
> >         allow.mount.nullfs=     "1";
> >         allow.mount.fusefs=     "1";

... and those are now extended with these:

	allow.mount.tmpfs=	"1";
	allow.mount.linprocfs=	"1";

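For completeness, the per-jail section of /etc/jail.conf now looks roughly
like this (a sketch only; the parameter values are the ones listed above):

```
pulverfass {
	enforce_statfs=		"1";
	allow.raw_sockets=	"1";
	allow.mount=		"1";
	allow.mount.zfs=	"1";
	allow.mount.devfs=	"1";
	allow.mount.fdescfs=	"1";
	allow.mount.procfs=	"1";
	allow.mount.nullfs=	"1";
	allow.mount.fusefs=	"1";
	allow.mount.tmpfs=	"1";
	allow.mount.linprocfs=	"1";
}
```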
> >
> > Here I find the first confusing observation. I can't interact with
> > the dataset
> > and its content within the jail. I've set the "jailed" property of
> > pool00/poudriere via "zfs set jailed=on pool00/poudriere" and I also have to
> > attach the jailed dataset manually via "zfs jail poudriere
> > pool00/poudriere" to
> > the (running) jail. But within the jail, listing ZFS's mountpoints reveals:
> >
> > NAME                USED  AVAIL  REFER  MOUNTPOINT
> > pool00             124G  8.62T  34.9K  /pool00
> > pool00/poudriere   34.9K  8.62T  34.9K  /pool/poudriere
> >
> > but nothing below /pool/poudriere is visible to the jail. Being confused I

Since we use ezjail-admin for rudimentary jail administration (just
creating and/or deleting jails; maintenance is done manually), the jails are
rooted at

pool00				/pool00
pool00/ezjail/			/pool/jails
pool00/ezjail/pulverfass	/pool/jails/pulverfass

"pulverfass" is the jail supposed to do the poudriere's job.

Since I had got the orientation of the "directory tree" wrong - the root is
at the top, not at the bottom - I corrected the ZFS dataset holding the
poudriere data accordingly:

pool00/ezjail/poudriere		/pool/poudriere

The jail "pulverfass" is now supposed to mount the dataset at

	/pool/jails/pulverfass/pool/poudriere
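For reference, the property changes described above amount to something like
the following, run on the host (dataset names as above; a sketch, not an
exact transcript):

```
# point the dataset's mountpoint at the path the jail expects to see
zfs set mountpoint=/pool/poudriere pool00/ezjail/poudriere

# mark the dataset as manageable from within a jail
zfs set jailed=on pool00/ezjail/poudriere
```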

>
> Please be more verbose what you mean by "interact" and "is visible".
>
> Do zfs commands on the dataset work?

After I corrected my mistake of not respecting the mountpoint as reported by
statfs, with the changes explained above I am able to mount /pool/poudriere
within the jail "pulverfass", but I still have problems with the way I have
to mount this dataset. When it is zfs-mounted (zfs mount -a), I am able to use
the dataset with poudriere as expected! But after rebooting the host, and after
all jails have been restarted as well, I first have to make the dataset
/pool/poudriere available to the jail via the command "zfs jail pulverfass
pool00/ezjail/pulverfass" - which does not seem to be done automatically by the
startup process - and only then, from within the jail "pulverfass", can I mount
the dataset as described above. This seems like a big step ahead for me.
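So, as far as I can tell, the manual sequence after every reboot boils down
to this (run on the host; names as above, sketch untested in this exact form):

```
# attach the jailed dataset to the already running jail (host side)
zfs jail pulverfass pool00/ezjail/pulverfass

# then mount it from inside the jail
jexec pulverfass zfs mount -a
```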

>
> Note, I don't remember if you can manage the root of the jail, but at
> least subsequent jails should be possible to manage. I don't have a
> jail where the root is managed in the jail, just additional ones.
> Those need to have set a mountpoint after the initial jailing and then
> maybe even be mounted for the first time.
>
> Please also check /etc/defaults/devfs.rules if the jail rule contains
> an unhide entry for zfs.

Within /etc/jail.conf

	devfs_ruleset=          "4";

is configured as a common ruleset for all jails (in the common portion of
/etc/jail.conf).
There is no custom devfs.rules in /etc/, so /etc/defaults/devfs.rules should
apply, and as far as I can see, there is an "unhide" applied to zfs:

[... /etc/defaults/devfs.rules ...]

# Devices usually found in a jail.
#
[devfsrules_jail=4]
add include $devfsrules_hide_all
add include $devfsrules_unhide_basic
add include $devfsrules_unhide_login
add path fuse unhide
add path zfs unhide
[...]

So, I guess everything is all right from this perspective, isn't it?
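A quick way to double-check that the ruleset really exposes /dev/zfs to the
jail might be the following (host-side commands, just a sketch):

```
# on the host: show the rules contained in ruleset 4
devfs rule -s 4 show

# inside the jail: the node must be visible for zfs(8) to work at all
jexec pulverfass ls -l /dev/zfs
```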

Is there a way to automatically provide the ZFS dataset of choice to the
proper jail, or do I have to either

manually issue "zfs jail jailid/jailname pool/dataset", or put such a command
as a script command into the jail's definition portion, as in

	exec.prestart+=	"zfs jail ${name} pool00/ezjail/poudriere";
?
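If the latter, I suspect an exec.poststart hook rather than exec.prestart
would be needed, since "zfs jail" requires the jail to already exist;
something like this (untested):

```
	exec.poststart+=	"zfs jail ${name} pool00/ezjail/poudriere";
	exec.poststart+=	"jexec ${name} zfs mount -a";
```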

>
> Bye,
> Alexander.
>

Thank you very much and kind regards,
oh
Received on Thu Aug 29 2019 - 10:32:10 UTC
