Re: async pass(4) patches available

From: Konstantin Belousov <kostikbel_at_gmail.com>
Date: Wed, 1 Apr 2015 11:29:03 +0300
On Tue, Mar 31, 2015 at 04:50:51PM -0600, Kenneth D. Merry wrote:
> On Tue, Mar 31, 2015 at 03:49:12 +0300, Konstantin Belousov wrote:
> > On Mon, Mar 30, 2015 at 04:23:58PM -0600, Kenneth D. Merry wrote:
> > > Kernel memory for data transferred via the queued interface is 
> > > allocated from the zone allocator in MAXPHYS sized chunks, and user
> > > data is copied in and out.  This is likely faster than the
> > > vmapbuf()/vunmapbuf() method used by the CAMIOCOMMAND ioctl in
> > > configurations with many processors (there are more TLB shootdowns
> > > caused by the mapping/unmapping operation) but may not be as fast
> > > as running with unmapped I/O.
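The chunked-copy scheme described above can be illustrated in user-space C.
This is only a sketch, not the driver code: memcpy() stands in for the
kernel's copyin(), and MAXPHYS is assumed to be the traditional 128 KiB
FreeBSD default.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

#define MAXPHYS	(128 * 1024)	/* assumed traditional FreeBSD default */

/*
 * Copy a user request into a kernel-side buffer in MAXPHYS-sized
 * chunks, as the queued pass(4) interface is described above doing.
 * Returns the number of chunks used.
 */
static size_t
copy_in_chunks(char *dst, const char *src, size_t len)
{
	size_t done, n, chunks;

	for (done = 0, chunks = 0; done < len; chunks++) {
		n = len - done;
		if (n > MAXPHYS)
			n = MAXPHYS;
		memcpy(dst + done, src + done, n);	/* copyin() in-kernel */
		done += n;
	}
	return (chunks);
}
```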
> > cam_periph_mapmem() uses vmapbuf() with an indicator to always map the
> > user pages mostly because I do not know CAM code and wanted to make
> > the least intrusive changes there.  It is not inherently impossible
> > to pass unmapped pages down from cam_periph_mapmem(), but it might
> > require some more plumbing for a driver to indicate that it is acceptable.
> 
> I think that would probably not be too difficult to change.  That API isn't
> one that is exposed, so changing it shouldn't be a problem.  The only
> reason not to do unmapped I/O there is just if the underlying controller
> doesn't support it.  The lower parts of the stack shouldn't be trying to
> sniff the data that is read or written to the device, although that has
> happened in the past.  We'd have to audit a couple of the drivers to
> make sure they aren't trying to access the data.
This is why I mentioned 'plumbing' required to map pages when needed.

> 
> > > The new memory handling model for user requests also allows
> > > applications to send CCBs with request sizes that are larger than
> > > MAXPHYS.  The pass(4) driver now limits queued requests to the I/O
> > > size listed by the SIM driver in the maxio field in the Path
> > > Inquiry (XPT_PATH_INQ) CCB.
> > >         
> > > There are some things that would be good to add:
> > >         
> > > 1. Come up with a way to do unmapped I/O on multiple buffers.
> > >    Currently the unmapped I/O interface operates on a struct bio,
> > >    which includes only one address and length.  It would be nice
> > >    to be able to send an unmapped scatter/gather list down to
> > >    busdma.  This would allow eliminating the copy we currently do
> > >    for data.
> > Only because nothing more was needed.  The struct bio does not use an
> > address/length pair when unmapped; it passes a list of physical
> > pages, see the bio_ma array pointer.  It is indeed tailored to be a pointer
> > to struct buf's b_pages, but it does not have to be.
> > 
> > The busdma unmapped non-specific interface is bus_dmamap_load_ma(),
> > which again takes an array of pages to load.  If you want some additional
> > helper, suitable for your goals, please provide the desired interface
> > definition.
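An unmapped request like the bio_ma array above is sized in pages rather
than bytes.  A minimal user-space sketch of that bookkeeping (assuming the
usual 4 KiB page size; not kernel code) is computing how many page entries
a byte range needs, given its offset within the first page:

```c
#include <assert.h>
#include <stddef.h>

#define PAGE_SIZE	4096	/* assumed 4 KiB pages */
#define PAGE_MASK	(PAGE_SIZE - 1)

/*
 * Number of physical-page entries an unmapped transfer needs, given
 * the starting offset within the first page and the transfer length.
 * This is what would size a bio_ma-style page array.
 */
static int
pages_for_range(int ma_offset, size_t length)
{
	return ((ma_offset + length + PAGE_MASK) / PAGE_SIZE);
}
```

A page-aligned transfer needs exactly length/PAGE_SIZE entries; any offset
into the first page can add one more.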
> 
> What I'd like to be able to do is pass down a CCB with a user virtual
> S/G list (CAM_DATA_SG, but with user virtual pointers) and have busdma deal
> with it.
Is there an existing definition of the 'user s/g list'?  Some structure,
or an existing example of its use?

> 
> The trouble would likely be figuring out a flag to use to indicate that the
> S/G list in question contains user virtual pointers.  (Backwards/binary
> compatibility is always an issue with CCB flags, since they have all been
> used.)
> 
> But that is essentially what is needed.  
> 
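One possible shape for the requested interface, purely as a strawman for
discussion: an array of address/length pairs holding user virtual
addresses, much like bus_dma_segment_t.  The names below (uio_sge,
uio_sg_total) are hypothetical, not an existing FreeBSD API.

```c
#include <stdint.h>

/*
 * Hypothetical user-virtual scatter/gather element.  A real
 * definition would likely reuse bus_dma_segment_t, with the
 * addresses understood to be user VAs when the (yet to be
 * defined) CCB flag is set.
 */
struct uio_sge {
	uint64_t	addr;	/* user virtual address */
	uint64_t	len;	/* segment length in bytes */
};

/* Total transfer length described by an S/G list. */
static uint64_t
uio_sg_total(const struct uio_sge *sg, int nseg)
{
	uint64_t total = 0;
	int i;

	for (i = 0; i < nseg; i++)
		total += sg[i].len;
	return (total);
}
```

The kernel side would then hold the listed user pages and hand them to
bus_dmamap_load_ma(), instead of copying through a bounce buffer.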

I can write the code, but I need an API specification.  Also, ideally I need
a rough example that uses the API.
Received on Wed Apr 01 2015 - 06:29:09 UTC