Re: VM images for FreeBSD

From: Alexander Yerenkow <yerenkow_at_gmail.com>
Date: Fri, 4 Nov 2011 13:18:48 +0200
2011/11/4 Daniel O'Connor <doconnor_at_gsoft.com.au>

>
> On 19/10/2011, at 21:19, Alexander Yerenkow wrote:
> > I can't specify to pkg_add that it should treat /zpool0/testroot as
> root, as
> > I need (so record really should be _at_cwd /usr/local)
> > Instead, pkg_add allows me to make chroot, which as you understand is not
> > good (In specified chroot all required by pkg* binaries/libraries must
> > exists, unfortunately I can't specify some empty dir and install there).
>
> Hmmm, why is it empty?
> When I have made something analogous I did an installkernel/world into a
> directory and then chroot'd in there and built ports. There is no reason
> you couldn't pkg_add from a local mirror (or nullfs mount a local package
> mirror directory into the chroot).
>

From the beginning I thought about having a lot of directories, each
containing one installed package;
I assumed a plain copy of all the required data would be enough to get the
required packages installed in a new chroot env. Not via install, but
simply by copy.
The reason was to make composing images with pre-installed software faster
(avoiding the pkg_add/unpack/mtree/etc. steps).
I could easily use unionfs, if only it worked on top of ZFS :)
In any case, once a package's installation scripts have all run, what is
left is a bunch of new files (links too) and some changed files (added
groups/users etc.); that's all.
I just wanted to have unpacked and initialized packages in directories that
I could use as puzzle pieces to build an image with pre-installed packages.
Of course, I understand that some tricks are required to make it all work
(adding users/groups, X config, etc.), but much straightforward software
would just work, which in most cases is enough to test another release.
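To make the "puzzle piece" idea concrete, the file delta that one package
install leaves behind can be captured with a before/after snapshot. This is
only a sketch; $root and the install step in the middle are assumptions, not
a fixed recipe:

```shell
# Sketch: record which files a single package install adds to a tree.
# $root is a hypothetical target; the actual install command is elided.
root=${root:-/zpool0/testroot}
find "$root" 2>/dev/null | sort > /tmp/before.lst
# ... install exactly one package into $root here (e.g. chroot + pkg_add) ...
find "$root" 2>/dev/null | sort > /tmp/after.lst
# Lines present only in the "after" snapshot are the package's puzzle piece:
comm -13 /tmp/before.lst /tmp/after.lst
```

The same comm output could then be fed to cpio or tar to copy just that
piece into a fresh image.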

Currently I'm using a pretty slow way to make pre-installed images: a
fresh copy of the base world for each package set, after which an install
script is created and run in the chroot.

for i in `cat test-package-list` ; do
    echo "env PACKAGESITE=$packagesite pkg_add -rifF $i" >> $blank/root/install.sh
done
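Fleshed out, the generation step above can be wrapped in one small helper.
This is a sketch: the package-list file, the PACKAGESITE URL, and the output
path are all placeholders from the snippet above, and the final chroot step
is shown commented out since it needs a prepared base world:

```shell
#!/bin/sh
# Sketch of the install-script generation step; all paths are assumptions.
gen_install_script() {
    list=$1 out=$2 site=$3
    : > "$out"                                 # start with an empty script
    while read -r i; do
        echo "env PACKAGESITE=$site pkg_add -rifF $i" >> "$out"
    done < "$list"
    chmod +x "$out"
}

# Usage (hypothetical paths):
#   gen_install_script test-package-list "$blank/root/install.sh" "$packagesite"
#   chroot "$blank" /root/install.sh
```

Reading the list with `while read` instead of backticks also keeps package
names with unusual characters intact.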

parameters used:
-r, --remote
        Use the remote fetching feature.
-i, --no-deps
        Install the package without fetching and installing dependencies.
-I, --no-script
        If any installation scripts (pre-install or post-install) exist
        for a given package, do not execute them.
-f, --force
        Force installation to proceed even if prerequisite packages are
        not installed or the requirements script fails.
-F
        Already installed packages are not an error.

So the packages don't get their dependencies pulled in (they all have to
be specified too), but everything ends up installed and even working,
after a fashion.

But all this is pretty rough :)

About the long way toward standardizing the installation process: I think
even if it's that complex, it should just start somewhere. The best
candidates are the pear install scripts; they can easily be moved to a
*.mk. A bit more complex is standardizing users/groups.



>
> > Why is that? Because there is +INSTALL script in packages, in which
> > package/port system allows execute any code/script written by porter.
>
> This is a feature ;)
>
> > To summarize my efforts:
> > I checked 21195 packages;
> > I found 880 install scripts;
> >
> > 3 scripts contains plain "exit 0"
> > 8 install scripts contains some perl code;
> > 17 scripts contains some additional "install" commands;
> > 70 scripts contains some chgroup/chown actions (which probably could be
> done
> > by specifying mtree file?...)
> > 75 contains uncategorized actions (print of license, some interactive
> > questions, ghostscript actions, tex, fonts etc.)
> > 161 scripts contains some file commands, like (ld / cp / mv, creating
> > backups, creating configs if they aren't exists etc. )
> > 166 scripts contains useradd/groupadd commands (many similar
> constructions,
> > not too hard to move this to .mk, in pkgng group/users can be specified
> in
> > yaml config)
> > 380 contains pear component registration (md5 -q * | uniq  - produces
> > exactly one result, so these all scripts are really one, could be moved
> to
> > some pear.mk)
>
> Interesting stats, thanks for taking the time to do the analysis.
>
> I think one of the reasons pkg_add is so slow is that it copies everything
> to a staging directory, then copies the files.. This is very tedious
> (obviously). I wonder if it could be modified to have a "stream" mode where
> it unpacks directly into the target FS.
>
> Alternatively you could cut it in 2 conceptually and modify pkg_add so it
> can run in a mode where it just unpacks to a staging area, and another mode
> where it copies from the staging area to the destination.
>
> --
> Daniel O'Connor software and network engineer
> for Genesis Software - http://www.gsoft.com.au
> "The nice thing about standards is that there
> are so many of them to choose from."
>  -- Andrew Tanenbaum
> GPG Fingerprint - 5596 B766 97C0 0E94 4347 295E E593 DC20 7B3F CE8C
>


-- 
Regards,
Alexander Yerenkow
Received on Fri Nov 04 2011 - 10:18:50 UTC

This archive was generated by hypermail 2.4.0 : Wed May 19 2021 - 11:40:20 UTC