Re: CFT - Xen infrastructure and block I/O improvements

From: Justin T. Gibbs <gibbs_at_scsiguy.com>
Date: Fri, 17 Sep 2010 09:36:14 -0600
On 9/17/2010 1:27 AM, Rui Paulo wrote:
> On 17 Sep 2010, at 07:55, Rui Paulo wrote:
> 
>> On 17 Sep 2010, at 02:44, Justin T. Gibbs wrote:
>>
>>> At Spectra Logic, we are using FreeBSD amd64 under Xen to serve storage
>>> to other Xen domains.  Over the past 9 months, we've made several changes
>>> to FreeBSD's Xen support.  These include:

...

>> Justin, this is quite a big diff (16k lines). I wonder if you can create
>> separate diffs (xenstore, blkback, xenbus, etc.) for easier review and
>> commenting.
> 
> '... and comment'.

The bulk of the patch is due to code reorganization.  Unfortunately SVN
doesn't reflect the copies properly in a diff, but at least the diff will
apply properly even in a non-SVN backed source tree.  This original patch
should be the best for testers.

The following SVN operations were made to the source tree to clean up its
organization and to import functionality from the vendor (1 file):

# Does not apply to FreeBSD's NewBus method for dealing with XenBus devices
svn delete sys/xen/xenbus/init.txt

# Linux version of backend XenBus service routines.  Never ported to FreeBSD.
# OBE: See xenbusb.c, xenbusb_if.m, xenbusb_front.c xenbusb_back.c
svn delete sys/xen/xenbus/xenbus_probe_backend.c

# Split XenStore into its own tree.  XenBus is a software layer built on top
# of XenStore.  The old arrangement and the naming of some structures and
# functions blurred these lines, making it difficult to discern which services
# are provided by which layer and at what times those services are available
# (e.g. during system startup and shutdown).
mkdir sys/xen/xenstore
svn add sys/xen/xenstore
svn move sys/xen/xenbus/xenbus_dev.c sys/xen/xenstore/xenstore_dev.c
svn copy sys/xen/xenbus/xenbusvar.h sys/xen/xenstore/xenstorevar.h
svn move sys/xen/xenbus/xenbus_xs.c sys/xen/xenstore/xenstore.c

# Split up XenBus code into methods available for use by client
# drivers (xenbus.c) and code used by the XenBus "bus code" to
# enumerate, attach, detach, and service bus drivers.
svn move sys/xen/xenbus/xenbus_client.c sys/xen/xenbus/xenbus.c
svn move sys/xen/xenbus/xenbus_probe.c sys/xen/xenbus/xenbusb.c
svn copy sys/xen/xenbus/xenbusb.c sys/xen/xenbus/xenbusb.h

# Merged with xenstore.c
svn delete sys/xen/xenbus/xenbus_comms.c
svn delete sys/xen/xenbus/xenbus_comms.h

# Merged with new XenBus control driver
mkdir sys/dev/xen/control
svn add sys/dev/xen/control
svn move sys/xen/reboot.c sys/dev/xen/control/control.c

# New file from Xen vendor with macros and structures used by
# a block back driver to service requests from a VM running a
# different ABI (e.g. amd64 back with i386 front).
svn add sys/xen/blkif.h

These alone account for 6k lines of svn diff.

A diff against a tree with these operations already made may make more sense
to a reviewer.  You can download this diff from here:

http://people.FreeBSD.org/~gibbs/FreeBSD-head-xen_post-svn-ops_2010-09-17_diffs.txt

It isn't much shorter, since the additional context amplifies the changes.
The bulk is largely due to refactoring and added comments.  It will
probably be easier to just review the files in sys/xen/xenstore and
sys/xen/xenbus in their entirety, but the comments below (boiled down from
our SCM system), when coupled with the diffs, should give you enough
information to understand the intentions behind the changes.

--
Justin

sys/conf/files:
    Adjust kernel build spec for new XenBus/XenStore layout and added
    Xen functionality.

sys/dev/xen/balloon/balloon.c:
sys/dev/xen/netfront/netfront.c:
sys/dev/xen/blkfront/blkfront.c:
sys/xen/xenbus/...
sys/xen/xenstore/...
    o Rename XenStore APIs and structures from xenbus_* to xs_*.
    o Adjust to use of M_XENBUS and M_XENSTORE malloc types for allocation
      of objects returned by these APIs.
    o Adjust for changes in the bus interface for Xen drivers.

sys/dev/xen/blkback/blkback.c:
    Rewrite the Block Back driver to attach properly via newbus, operate
    correctly in both PV and HVM mode regardless of domain (e.g. it can run
    in a domain other than Dom0), and deal with the latest metadata
    available in XenStore for block devices.

    Allow users to specify a file as a backend to blkback, in addition
    to character devices.  Use the namei lookup to figure out whether
    it's a device node or a file, and use the appropriate interface.
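
    A minimal sketch of such a check, assuming a hypothetical helper name
    and simplified error handling (this is not the actual blkback code):

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/fcntl.h>
    #include <sys/namei.h>
    #include <sys/proc.h>
    #include <sys/vnode.h>

    /*
     * Illustrative only: open the configured backend path and record
     * whether it is a character device or a regular file.
     */
    static int
    sketch_open_backend(const char *path, struct vnode **vpp, int *is_dev)
    {
        struct nameidata nd;
        int flags, error;

        flags = FREAD | FWRITE;
        NDINIT(&nd, LOOKUP, FOLLOW, UIO_SYSSPACE, path, curthread);
        error = vn_open(&nd, &flags, 0, NULL);
        if (error != 0)
            return (error);
        NDFREE(&nd, NDF_ONLY_PNBUF);

        *vpp = nd.ni_vp;
        *is_dev = (nd.ni_vp->v_type == VCHR);
        if (*is_dev) {
            /* Device node: issue I/O through the cdev's strategy path. */
        } else {
            /* Regular file: issue I/O through VOP_READ()/VOP_WRITE(). */
        }
        VOP_UNLOCK(nd.ni_vp, 0);
        return (0);
    }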

    One current issue with the file interface is that we're effectively
    limited to having a single command at a time outstanding.  To get
    around this, we may need to try using the vnode pager more directly,
    or perhaps coming up with a direct interface into ZFS.  (i.e.
    something similar to zvols, but without the performance issues.)

    This will impact reads more than writes, because writes are cached
    but with random reads you have to go all the way down to the disk,
    so you suffer the full latency of the stack.

sys/dev/xen/blkback/blkback.c:
sys/xen/interface/io/blkif.h:
sys/xen/blkif.h:
sys/dev/xen/blkfront/blkfront.c:
sys/dev/xen/blkfront/block.h:

    Extend the Xen blkif API: Negotiable request size and number of
    requests.

    This change extends the information recorded in the XenStore
    allowing block front/back devices to negotiate for optimal I/O
    parameters.  This has been achieved without sacrificing backward
    compatibility with drivers that are unaware of these protocol
    enhancements.  The extensions center around the connection protocol
    which now includes these additions:

     o The back-end device publishes its maximum supported values for
       request I/O size, the number of page segments that can be
       associated with a request, the maximum number of requests that
       can be concurrently active, and the maximum number of pages that
       can be in the shared request ring.  These values are published
       before the back-end enters the XenbusStateInitWait state.

     o The front-end waits for the back-end to enter either the InitWait
       or Initialize state.  At this point, the front end limits its
       own capabilities to the lesser of the values it finds published
       by the back-end, its own maximums, or, should any back-end data
       be missing in the store, the values supported by the original
       protocol.  It then initializes its internal data structures
       including allocation of the shared ring, publishes its maximum
       capabilities to the XenStore and transitions to the Initialized
       state.

     o The back-end waits for the front-end to enter the Initialized
       state.  At this point, the back end limits its own capabilities
       to the lesser of the values it finds published by the front-end,
       its own maximums, or, should any front-end data be missing in
       the store, the values supported by the original protocol.  It
       then initializes its internal data structures, attaches to the
       shared ring and transitions to the Connected state.

     o The front-end waits for the back-end to enter the Connected
       state, transitions itself to the connected state, and can
       commence I/O.

    Although an updated front-end driver must be aware of the back-end's
    InitWait state, the back-end has been coded such that it can
    tolerate a front-end that skips this step and transitions directly
    to the Initialized state without waiting for the back-end.
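
    To make the flow concrete, the front-end's half of the exchange might
    look roughly like the sketch below.  The XenStore node names, fallback
    values, and the exact helper calls are assumptions based on the renamed
    xs_*/xenbus_* interfaces described above; only the read-limits / clamp /
    publish / change-state pattern is taken from the description.

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/bus.h>

    #include <xen/xenstore/xenstorevar.h>
    #include <xen/xenbus/xenbusvar.h>

    /* Illustrative sketch of the front-end side of the negotiation. */
    static void
    sketch_frontend_negotiate(device_t dev)
    {
        const char *otherend = xenbus_get_otherend_path(dev);
        const char *node = xenbus_get_node(dev);
        u_int max_reqs = 32;        /* assumed original-protocol fallback */
        u_int max_segs = 11;        /* assumed original-protocol fallback */

        /* Read the back-end's published limits; absent nodes keep the
         * original-protocol fallbacks. */
        (void)xs_gather(XST_NIL, otherend,
            "max-requests", "%u", &max_reqs,
            "max-request-segments", "%u", &max_segs,
            NULL);

        /* Clamp to our own maximums before publishing. */
        max_reqs = min(max_reqs, 256);
        max_segs = min(max_segs, (MAXPHYS / PAGE_SIZE) + 1);

        /* Allocate the shared ring here, then publish our limits ... */
        (void)xs_printf(XST_NIL, node, "max-requests", "%u", max_reqs);
        (void)xs_printf(XST_NIL, node, "max-request-segments", "%u", max_segs);

        /* ... and advertise that we have finished initializing. */
        xenbus_set_state(dev, XenbusStateInitialised);
    }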

    sys/xen/interface/io/blkif.h:
        o Increase BLKIF_MAX_SEGMENTS_PER_REQUEST to 255.  This is
          the maximum number possible without changing the blkif
          request header structure (nr_segs is a uint8_t).

        o Add two new constants:
          BLKIF_MAX_SEGMENTS_PER_HEADER_BLOCK, and
          BLKIF_MAX_SEGMENTS_PER_SEGMENT_BLOCK.  These respectively
          indicate the number of segments that can fit in the first
          ring-buffer entry of a request, and for each subsequent
          (sg element only) ring-buffer entry associated with the
          "header" ring-buffer entry of the request.

        o Add the blkif_request_segment_t typedef for segment
          elements.

        o Add the BLKRING_GET_SG_REQUEST() macro which wraps the
          RING_GET_REQUEST() macro and returns a properly cast
          pointer to an array of blkif_request_segment_ts.

        o Add the BLKIF_SEGS_TO_BLOCKS() macro which calculates the
          number of ring entries that will be consumed by a blkif
          request with the given number of segments.
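
        The intent of BLKIF_SEGS_TO_BLOCKS() can be sketched as follows;
        the EXAMPLE_* segment counts are placeholders, since the real
        constants are derived from the request and segment structure sizes:

        #include <sys/param.h>          /* howmany() */

        #define EXAMPLE_SEGS_PER_HEADER_BLOCK   11  /* placeholder value */
        #define EXAMPLE_SEGS_PER_SEGMENT_BLOCK  14  /* placeholder value */

        /*
         * Ring entries consumed by a request with nseg segments: one
         * "header" entry plus enough segment-only entries to hold the
         * segments that do not fit in the header entry.
         */
        #define EXAMPLE_SEGS_TO_BLOCKS(nseg)                            \
            (1 + (((nseg) <= EXAMPLE_SEGS_PER_HEADER_BLOCK) ? 0 :       \
                howmany((nseg) - EXAMPLE_SEGS_PER_HEADER_BLOCK,         \
                    EXAMPLE_SEGS_PER_SEGMENT_BLOCK)))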

    sys/xen/blkif.h:
        o Update for changes in interface/io/blkif.h macros.

        o Update the BLKIF_MAX_RING_REQUESTS() macro to take the
          ring size as an argument to allow this calculation on
          multi-page rings.

        o Add a companion macro to BLKIF_MAX_RING_REQUESTS(),
          BLKIF_RING_PAGES().  This macro determines the number of
          ring pages required in order to support a ring with the
          supplied number of request blocks.

    sys/dev/xen/blkback/blkback.c:
    sys/dev/xen/blkfront/blkfront.c:
    sys/dev/xen/blkfront/block.h:
        o Negotiate with the other end using the following limits
          (a worked example follows this list):
              Request Size:   MAXPHYS
              Max Segments:   (MAXPHYS/PAGE_SIZE) + 1
              Max Requests:   256
              Max Ring Pages: Sufficient to support Max Requests with
                              Max Segments.

        o Dynamically allocate request pools and segments-per-request.

        o Update ring allocation/attachment code to support a
          multi-page shared ring.

        o Update routines that access the shared ring to handle
          multi-block requests.
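
        For concreteness: with the stock MAXPHYS of 128 KiB and 4 KiB
        pages, these limits work out to a maximum request size of 128 KiB
        carried in at most (128 / 4) + 1 = 33 page segments (the extra
        segment allows for buffers that are not page aligned), with up to
        256 such requests outstanding; the ring is then sized with enough
        pages to hold that many maximal requests.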

    sys/dev/xen/blkfront/blkfront.c:
        o Track blkfront allocations in a blkfront driver specific
          malloc pool.

        o Strip out XenStore transaction retry logic in the
          connection code.  Transactions only need to be used when
          the update to multiple XenStore nodes must be atomic.
          That is not the case here.

        o Fully disable blkif_resume() until it can be fixed
          properly (it didn't work before this change).

        o Destroy bus-dma objects during device instance tear-down.


sys/dev/xen/blkback/blkback.c:
    Advertise support for and implement the BLKIF_OP_WRITE_BARRIER
    and BLKIF_OP_FLUSH_DISKCACHE blkif opcodes using BIO_FLUSH and
    the BIO_ORDERED attribute of bios.
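
    A minimal sketch of how these opcodes can map onto bios (illustrative
    only, not the driver's actual dispatch code):

    #include <sys/param.h>
    #include <sys/bio.h>

    #include <xen/interface/io/blkif.h>

    /* Illustrative mapping of barrier/flush opcodes onto bio commands. */
    static void
    sketch_map_operation(uint8_t operation, struct bio *bp)
    {
        switch (operation) {
        case BLKIF_OP_FLUSH_DISKCACHE:
            /* Explicit cache flush request from the front-end. */
            bp->bio_cmd = BIO_FLUSH;
            break;
        case BLKIF_OP_WRITE_BARRIER:
            /* Ordered write: later I/O may not be reordered around it. */
            bp->bio_cmd = BIO_WRITE;
            bp->bio_flags |= BIO_ORDERED;
            break;
        case BLKIF_OP_WRITE:
            bp->bio_cmd = BIO_WRITE;
            break;
        default:
            bp->bio_cmd = BIO_READ;
            break;
        }
    }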

sys/dev/xen/blkfront/blkfront.c:
sys/dev/xen/blkfront/block.h:
    Fix various bugs in blkfront.

        blkfront.c:     gnttab_alloc_grant_references() returns 0 for
                        success and non-zero for failure.  The check for < 0
                        is a leftover Linuxism (a sketch of the corrected
                        check appears after this list).

                        When we negotiate with blkback and have to reduce
                        some of our capabilities, print out both the original
                        and the reduced capability before changing the local
                        value, so the user sees the correct information.

                        Fix blkif_restart_queue_callback() formatting.  Make
                        sure we hold the mutex in that function before
                        calling xb_startio().

                        Fix a couple of KASSERT()s.

        block.h:        Fix a check in the xb_remove_* macro to be a little more
                        specific.
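
        A sketch of the corrected gnttab_alloc_grant_references() check
        mentioned above (the wrapper function and variable names are
        illustrative):

        #include <sys/param.h>
        #include <sys/systm.h>

        #include <xen/gnttab.h>

        static int
        sketch_alloc_refs(u_int nrefs, grant_ref_t *gref_head)
        {
            int error;

            /*
             * Wrong (leftover Linuxism): the routine returns 0 on
             * success and a positive errno value on failure, so a
             * "< 0" test never detects the failure:
             *
             *     if (gnttab_alloc_grant_references(nrefs, gref_head) < 0)
             */
            error = gnttab_alloc_grant_references(nrefs, gref_head);
            if (error != 0)
                return (error);
            return (0);
        }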


sys/dev/xen/control/control.c:
    Add a XenBus front driver for handling shutdown, reboot, suspend, and
    resume events published in the XenStore.  Move all PV suspend/reboot
    support from reboot.c into this driver.

sys/xen/gnttab.h:
sys/xen/gnttab.c:
    Define GNTTAB_LIST_END as GRANT_REF_INVALID.

    Get rid of our private definition of GNTTAB_LIST_END.

sys/dev/xen/netfront/netfront.c:
    Use GRANT_REF_INVALID instead of driver private definitions of the
    same constant.

sys/xen/gnttab.h:
sys/xen/gnttab.c:
    Add the gnttab_end_foreign_access_references() API.

    This API allows a client to batch the release of an array of grant
    references, instead of coding a private for loop.  The implementation
    takes advantage of this batching to reduce lock overhead to one
    acquisition and release per-batch instead of per-freed grant reference.

    While here, reduce the duration the gnttab_list_lock is held during
    gnttab_free_grant_references() operations.  The search to find the
    tail of the incoming free list does not rely on global state and so
    can be performed without holding the lock.
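
    A usage sketch, assuming the new routine simply takes a count and an
    array of grant references as the description implies:

    #include <sys/param.h>

    #include <xen/gnttab.h>

    /*
     * Illustrative only: release a whole array of grant references with
     * one call (and one gnttab_list_lock acquisition) instead of an
     * open-coded per-reference loop.
     */
    static void
    sketch_release_refs(grant_ref_t *refs, u_int nrefs)
    {
        gnttab_end_foreign_access_references(nrefs, refs);
    }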

sys/dev/xen/xenpci/evtchn.c:
sys/dev/xen/evtchn/evtchn.c:
sys/xen/xen_intr.h:
    o Implement the bind_interdomain_evtchn_to_irqhandler API for HVM mode.
      This allows an HVM domain to serve back end devices to other domains.
      This API is already implemented for PV mode.

    o Synchronize the API between HVM and PV.

sys/dev/xen/xenpci/xenpci.c:
    o Scan the full region of CPUID space in which the Xen VMM interface
      may be implemented.  On systems using SuSE as a Dom0 where the Viridian
      API is also exported, the VMM interface is above the region we used to
      search.

    o Pass through bus_alloc_resource() calls so that XenBus drivers
      attaching on an HVM system can allocate unused physical address
      space from the nexus.  The block back driver makes use of this
      facility.

sys/i386/xen/xen_machdep.c:
    o Use the correct type for accessing the statically mapped xenstore
      metadata.

sys/xen/interface/hvm/params.h:
sys/xen/xenstore/xenstore.c:
    Move hvm_get_parameter() to the correct global header file instead of
    leaving it as a private function of the XenStore code.

sys/xen/interface/io/protocols.h:
    Sync with vendor.

sys/xen/interface/io/ring.h:
    Add a macro for calculating the number of ring pages needed for an
    N-deep ring.

    To avoid duplication within the macros, create and use the new
    __RING_HEADER_SIZE() macro.  This macro calculates the size of the
    ring bookkeeping struct (producer/consumer indexes, etc.) that
    resides at the head of the ring.

    Add the __RING_PAGES() macro which calculates the number of shared
    ring pages required to support a ring with the given number of
    requests.

    These APIs are used to support the multi-page ring version of the
    Xen block API.
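
    Roughly, and only as a sketch of the intent (the real macros operate on
    the structures generated by ring.h's DEFINE_RING_TYPES() and may differ
    in detail):

    #include <sys/param.h>          /* PAGE_SIZE, howmany() */

    /* Bytes of bookkeeping (producer/consumer indexes, etc.) that sit
     * before the first entry of a shared ring structure. */
    #define EXAMPLE_RING_HEADER_SIZE(sring)                             \
        ((uintptr_t)(sring)->ring - (uintptr_t)(sring))

    /* Shared pages needed to hold a ring with nentries entries. */
    #define EXAMPLE_RING_PAGES(sring, nentries)                         \
        howmany(EXAMPLE_RING_HEADER_SIZE(sring) +                       \
            (nentries) * sizeof((sring)->ring[0]), PAGE_SIZE)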

sys/xen/interface/io/xenbus.h:
    Add comments.

sys/xen/xenbus/...
    Refactor the FreeBSD XenBus support code to allow for both front and
    backend device attachments.

    Make use of new config_intrhook capabilities to allow front and back
    devices to be probed/attached in parallel.

    Fix bugs in the probe/attach state machine that could cause the system
    to hang when confronted with a failure either in the local domain or in
    a remote domain to which one of our driver instances is attaching.

    Publish all required state to the XenStore on device detach and failure.
    Most of the missing functionality was needed when serving as a back end,
    since the typical "hot-plug" scripts in Dom0 don't handle cleaning up
    for a "service domain" other than Dom0 itself.

    Add doxygen style comments to the majority of the code.

    Cleanup types, formatting, etc.

    sys/xen/xenbus/xenbusb.c:
        Common code used by both front and back XenBus busses.

    sys/xen/xenbus/xenbusb_if.m:
        Method definitions for a XenBus bus.

    sys/xen/xenbus/xenbusb_front.c:
    sys/xen/xenbus/xenbusb_back.c:
        XenBus bus specialization for front and back devices.