Re: CTF: UEFI HTTP boot support

From: Maxim Sobolev <sobomax_at_freebsd.org>
Date: Wed, 17 Jun 2020 15:44:44 -0700
This is what we have running in AWS right now, kind of a proof of concept,
but it's not that difficult to generalize:

[root_at_ip-172-31-10-188 /usr/local/etc/freeswitch]# mdconfig -lv
md0     preload   160M  -

[root_at_ip-172-31-10-188 /usr/local/etc/freeswitch]# df
Filesystem                    512-blocks    Used  Avail Capacity  Mounted on
/dev/ufs/root_20200617071427     1300080 1220480  79600    94%    /
devfs                                  2       2      0   100%    /dev
/dev/ufs/etc_20200617071427         9912    6384   2736    70%    /etc
/dev/ufs/local_20200617071427    2746992 2572144 174848    94%    /usr/local
/dev/ufs/boot_20200617071427      389560  361208  28352    93%    /boot
tmpfs                              65536     624  64912     1%    /tmp
tmpfs                              20480      16  20464     0%    /usr/home/ssp-user
tmpfs                             524288  336816 187472    64%    /var

The root file system is an untrimmed 1.2GB UFS image, compressed down to
160MB with mkuzip (UZIP) and pre-loaded along with the kernel. The
/usr/local file system is a read-only UFS+UZIP image placed directly onto
the GPT and probed with GEOM_LABEL. Of the GPT-backed file systems, only
/etc is read-write. The idea here is that the box should theoretically
survive a total loss of connectivity to both the root and the /usr/local
storage (or we can replace it on the fly with a new version).

[root_at_ip-172-31-10-188 /usr/local/etc/freeswitch]# mount
/dev/ufs/root_20200617071427 on / (ufs, local, read-only)
devfs on /dev (devfs, local, multilabel)
/dev/ufs/etc_20200617071427 on /etc (ufs, local, synchronous)
/dev/ufs/local_20200617071427 on /usr/local (ufs, local, read-only)
/dev/ufs/boot_20200617071427 on /boot (ufs, local, read-only)
tmpfs on /tmp (tmpfs, local)
tmpfs on /usr/home/ssp-user (tmpfs, local)
tmpfs on /var (tmpfs, local)

Configuration is dead simple:

vfs.root.mountfrom="ufs:ufs/root_20200617071427"
image_load="YES"
image_name="/root.uzp"
image_type="mfs_root"
autoboot_delay="-1"

It takes less than 100 lines of code, I think, to generate this out of
buildworld/buildkernel. Zero third-party tools.
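
For the record, the generation step is roughly something like this (the
paths, label, and exact flags here are illustrative, not our actual
script):

# stage an installed tree, build a UFS image from it, then compress it
makefs -t ffs -o label=root_20200617071427 root.img ${DESTDIR}
mkuzip -o /boot/root.uzp root.img

The resulting /boot/root.uzp is what the image_* knobs above preload
alongside the kernel.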

Replace loading the root from disk with loading it from an HTTP server and
it would work just as well, with only one or two files to fetch.

There is only one catch: with real UEFI hardware there is sometimes a
small(ish) memory region, followed by a hole, and then a much bigger
region. Unfortunately our loader picks the smaller region for its work
area, and the md_image loading mechanism is not smart enough to either
place the image entirely into the bigger segment, or do scatter-gather,
i.e. split the image up and have the kernel do some VM trickery to
re-assemble it later. But with some post-installworld cleaning, if you can
compress the full image down to some 30-40MB, that usually works and has
everything one may need, including the kitchen sink (e.g. Python 3.7 with
a few modules). As I said, no voodoo magic like the famous crunchgen, just
a very liberal application of rm -rf.
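
By "liberal application of rm -rf" I mean something along these lines,
run against the staged tree before imaging (the path list is illustrative,
trim to taste):

# clear system flags first so rm can actually delete everything
chflags -R noschg ${DESTDIR}
rm -rf ${DESTDIR}/usr/share/doc ${DESTDIR}/usr/share/examples \
    ${DESTDIR}/usr/tests ${DESTDIR}/usr/lib/debug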

With regards to ro vs. rw, the recipe is "don't do it" :) If you want hard
RO embedded into the kernel, compress your image with mkuzip first and
then embed it into the kernel. This is something the FreeBSD/MIPS folks
have mastered (the platform was very tight on flash), which is why
geom_uzip is in practically every MIPS kernel config file. But it works
(or should work) just as well on x64. Not only would that save lots of VM,
the proper RO attribute will also be provided by geom_uzip for free.
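
On the kernel config side, embedding the compressed image looks something
like this (sketch; the image path is illustrative):

options         MD_ROOT         # md device usable as root device
options         GEOM_UZIP       # mount mkuzip'd compressed images
makeoptions     MFS_IMAGE=/path/to/root.uzp  # embed image into the kernel

and then root is mounted from the decompressed provider, i.e. something
like vfs.root.mountfrom="ufs:/dev/md0.uzip".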

I am, by the way, hacking up a way to populate /var with something more
interesting than just the stock /etc/mtree/BSD.var.dist; there will be a
review request soon. :)
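
For context, the stock population of an empty /var is essentially a
one-liner applying that mtree spec (sketch):

# create the standard /var hierarchy from the distribution spec
mtree -deU -f /etc/mtree/BSD.var.dist -p /var

The review will be about layering something more useful on top of that.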

-Max

On Wed, Jun 17, 2020 at 2:33 PM Miguel C <miguelmclara_at_gmail.com> wrote:

> On Wed, Jun 17, 2020 at 9:28 PM Dave Cottlehuber <dch_at_skunkwerks.at>
> wrote:
>
> > On Wed, 17 Jun 2020, at 17:52, Rodney W. Grimes wrote:
> > > > Rodney W. Grimes <freebsd-rwg_at_gndrsh.dnsmgr.net> wrote:
> > > > > > The "fake cd drive" is in the kernel, loader just copies the iso
> > into
> > > > > > memory like any other module, and by the time that's done you
> just
> > > > > > reboot into the newly installed system, which again uses
> > > > > >
> > > > > > vfs.root.mountfrom="cd9660:/dev/md0.uzip"
> > > > >                                   ^^^
> > > > >
> > > > > Argh, the cd9660 confused me, I think your doing a
> > > > > "root on mfs/md"?
> > > >
> > > > loader.conf says
> > > >
> > > > rootfs_load="yes"
> > > > rootfs_name="contents.izo"
> > > > rootfs_type="md_image"
> > > > vfs.root.mountfrom="cd9660:/dev/md0.uzip"
> > > >
> > > > contents.izo is uzip'd contents.iso which file(1)
> > > > describes as ISO 9660 CD-ROM filesystem data ''
> > > >
> > > > That's for normal boot, for the loader 'install' command
> > > > it expects an uncompressed iso for rootfs.
> > >
> > > Ok, now the puzzle is how much work to get from a stock FreeBSD .iso
> > > image to something that works with this.  Obviously we need a non-stock
> > > /boot/loader.conf file, or to type some commands manually at a loader
> > > prompt.  I believe the stock GENERIC kernel has the md_root support
> > > for this already, so it may not be that hard to do.
> >
> >
> > Hi Miguel, all,
> >
> > I spent a bit of time on UEFI HTTP Boot earlier in the year in qemu,
> > bhyve, and intel NUCs -- until everything in the world went to custard. I
> > made some rough notes[1] and I'll go through them again tonight with a
> > fresh build. Hopefully it's useful.
> >
> > What I got stuck on was the final pivot, I have never debugged this setup
> > before and I'm still not clear at what point things fail. Olivier's PXE
> > booting and BSDRP were a fantastic reference, and I assume they work in
> > BSDRP already for him.
> >
> > Worth noting that LE TLS certs didn't play well with the PXE UEFI
> > implementation on my intel NUC, this comes up as a very unhelpful error.
> At
> > least use plain HTTP to get started.
> >
> > While my notes are amd64 oriented I'm very interested in using this for
> > aarch64 locally & in the clowd.
> >
> > My loader.conf follows:
> >
> > boot_multicons="YES"
> > console="efi,comconsole"
> > comconsole_speed="115200"
> > boot_verbose="YES"
> > # make booting somewhat less painful
> > #entropy_cache_load="NO"
> > #kern.random.initial_seeding.bypass_before_seeding="0"
> > # entropy_cache_load="YES"
> > # boot_single="YES"
> > tmpfs_load="YES"
> > autoboot_delay="-1"
> > # dump net vars
> > # exec="show boot.netif.hwaddr"
> > # exec="show boot.netif.ip"
> > # exec="show boot.netif.netmask"
> > # exec="show boot.netif.gateway"
> > # ensure we have enough ram for our image
> > vm.kmem_size=2G
> > vfs.root.mountfrom="ufs:/dev/md0"
> > # vfs.root.mountfrom.options=ro
> > mfs_load="YES"
> > mfs_type="md_image"
> > mfs_name="/boot/mfs-miniroot"
> >
> > Interesting, these are different from what's above in the thread.
> >
> >
> Ah thanks a lot for this and for the references, especially the first one
> with all the notes :D
>
> references:
> >
> > [1]: https://hackmd.io/_at_dch/H1X9RYEZr
> > [mfsBSD]: https://mfsbsd.vx.sk/ still 150% awesome
> > [olivier]:
> > https://blog.cochard.me/2019/02/pxe-booting-of-freebsd-disk-image.html
> > [BSDRP]: https://github.com/ocochard/BSDRP
> >
> > A+
> > Dave
> >
> _______________________________________________
> freebsd-current_at_freebsd.org mailing list
> https://lists.freebsd.org/mailman/listinfo/freebsd-current
> To unsubscribe, send any mail to "freebsd-current-unsubscribe_at_freebsd.org"
>
>
Received on Wed Jun 17 2020 - 20:44:59 UTC
