Luigi Rizzo wrote:
> [see context at the end]
>
> Queues of I/O requests are implemented by a struct bio_queue_head
> in src/sys/sys/bio.h - however the lock for the queue is
> outside the structure, and, as a result, whoever wants to manipulate
> this data structure either a) needs it already locked, or b) needs to
> know which lock to grab.
> Case a) occurs when someone accesses the bio_queue through the
> regular API -- the caller already does the required locking.
> Case b) however can occur when we have some asynchronous request
> to work on the queue, e.g. to change the scheduler.
>
> So we need to know where the lock is. I can see two ways:
>
> 1) Put the lock in struct bio_queue_head.
>    This is the same thing done in struct g_bioq defined in
>    sys/geom/geom.h. Quite clean, except that perhaps some
>    users of bio_queue_head still run under Giant (e.g. cam/scsi?)
>    and so it is not possible to 'bring in' the lock.
>
> 2) Change bioq_init() so that it also takes a pointer to the mtx
>    that protects the queue.
>    This is probably less clean, but perhaps a bit more flexible because
>    the queue and its lock are decoupled. It also makes it easy to deal
>    with the 'Giant' case.
>
> Other ideas?
>
> cheers
> luigi
>
> (background - this is related to the work my SoC student Emiliano,
> in Cc, is doing on pluggable disk schedulers)
>
> The disk scheduler operates on struct bio_queue_head objects
> (which include CSCAN scheduler info) and uses five methods:
>
> bioq_init()     initializes the queue.
> bioq_disksort() adds a request to the queue.
> bioq_first()    peeks at the head of the queue.
> bioq_remove()   removes the first element.
> bioq_flush()    right now simply a wrapper around bioq_first() and
>                 bioq_remove(), but one could imagine the need for a
>                 specific destructor to free memory etc.

Each bioq is owned by its consumer, i.e. the individual driver. As such,
locking it is the responsibility of the driver. The block layer/GEOM does
not manipulate the driver bioqs, nor should it. The driver knows best how
to sort and queue requests to the hardware, and many drivers don't even
need explicit sorting because they are talking to an intelligent
controller that will do that work for them.

Also, putting discrete locks on bioq operations is inefficient. I did a
_lot_ of experimentation with locking storage drivers and found that one
lock in the fast path that covers the bioq, softc/resources, and hardware
access was a whole lot faster than splitting it up into multiple locks.
There are certain circumstances where this might not be the case, but as
a general rule it is.

So, the status quo with regard to bioq locking is fine. Please don't try
to make the block layer outguess the drivers and the hardware, and please
don't try to mandate extra locking that reduces performance.

Scott
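
For concreteness, here is a minimal sketch of the single-lock fast path
described above. The foo_* names and the hardware helpers are hypothetical;
only the bioq_*() and mtx(9) calls are the stock kernel APIs. This
illustrates the locking pattern, not any particular driver.

/*
 * Sketch only: 'foo' is a made-up driver.  One driver mutex covers the
 * bioq, the softc state and the hardware submission path, i.e. the
 * single-lock fast path described above.
 */
#include <sys/param.h>
#include <sys/bio.h>
#include <sys/lock.h>
#include <sys/mutex.h>

struct foo_softc {
	struct mtx		sc_mtx;		/* one lock: queue + softc + hw */
	struct bio_queue_head	sc_bioq;	/* pending requests */
};

/* Hypothetical hardware glue, not shown here. */
static int	foo_hw_busy(struct foo_softc *);
static void	foo_hw_submit(struct foo_softc *, struct bio *);

static void
foo_init(struct foo_softc *sc)
{
	mtx_init(&sc->sc_mtx, "foo softc", NULL, MTX_DEF);
	bioq_init(&sc->sc_bioq);
}

/* Called with sc_mtx held: feed queued bios to the controller. */
static void
foo_start(struct foo_softc *sc)
{
	struct bio *bp;

	while (!foo_hw_busy(sc) &&
	    (bp = bioq_first(&sc->sc_bioq)) != NULL) {
		bioq_remove(&sc->sc_bioq, bp);
		foo_hw_submit(sc, bp);
	}
}

/* Strategy-style entry point: queue the request and kick the hardware. */
static void
foo_strategy(struct foo_softc *sc, struct bio *bp)
{
	mtx_lock(&sc->sc_mtx);
	bioq_disksort(&sc->sc_bioq, bp);	/* CSCAN insertion sort */
	foo_start(sc);				/* softc + hardware, same lock */
	mtx_unlock(&sc->sc_mtx);
}

With this arrangement nothing outside the driver can safely touch sc_bioq,
which is exactly the situation Luigi's question about asynchronous access
(e.g. a scheduler change) is about.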