On Mon, Mar 30, 2015 at 04:23:58PM -0600, Kenneth D. Merry wrote:
> Kernel memory for data transferred via the queued interface is
> allocated from the zone allocator in MAXPHYS sized chunks, and user
> data is copied in and out.  This is likely faster than the
> vmapbuf()/vunmapbuf() method used by the CAMIOCOMMAND ioctl in
> configurations with many processors (there are more TLB shootdowns
> caused by the mapping/unmapping operation) but may not be as fast
> as running with unmapped I/O.

cam_periph_mapmem() uses vmapbuf() with an indicator to always map the
user pages, mostly because I do not know the CAM code and wanted to
make the least intrusive changes there.  It is not inherently
impossible to pass unmapped pages down from cam_periph_mapmem(), but
it might require some more plumbing for drivers to indicate that
unmapped pages are acceptable.

> The new memory handling model for user requests also allows
> applications to send CCBs with request sizes that are larger than
> MAXPHYS.  The pass(4) driver now limits queued requests to the I/O
> size listed by the SIM driver in the maxio field in the Path
> Inquiry (XPT_PATH_INQ) CCB.
>
> There are some things that would be good to add:
>
> 1. Come up with a way to do unmapped I/O on multiple buffers.
>    Currently the unmapped I/O interface operates on a struct bio,
>    which includes only one address and length.  It would be nice
>    to be able to send an unmapped scatter/gather list down to
>    busdma.  This would allow eliminating the copy we currently do
>    for data.

Only because nothing more was needed.  The struct bio does not use an
address/length pair when unmapped; it passes a list of physical pages,
see the bio_ma array pointer.  It is indeed tailored to be a pointer
to a struct buf's b_pages, but it does not have to be.  The busdma
unmapped non-bio-specific interface is bus_dmamap_load_ma(), which
again takes an array of pages to load.
If you want some additional helper suitable for your goals, please
provide the desired interface definition.

Received on Mon Mar 30 2015 - 22:49:18 UTC