Luigi Rizzo wrote:
> On Tue, Jul 12, 2005 at 04:09:35PM -0700, Luigi Rizzo wrote:
>
>> The approach you suggest in the second part sounds
>> interesting, except that i have no idea where i should intercept
>> the calls in the block layer. Any suggestion ?
>
> So, rethinking about this oldish thread, you said:
>
>> On Tue, Jul 12, 2005 at 01:28:02PM -0600, Scott Long wrote:
>> ...
> ...
>>> An alternate approach that I would suggest is to have the disk scheduler
>>> freeze the block layer from delivering any new bio's while waiting for
>>> all of the outstanding bio's to complete, then flip the scheduler
>>> algorithm and allow i/o delivery to resume. That way there is no
>>> need to play with driver locks, no need to rummage around in resources
>>> that are private to the driver, and no need to worry about in-flight
>>> bio's. It also removes the need to touch every driver with an API
>>> change.
>
> [please correct me if i am wrong]
>
> it seems a suitable place would be to intercept dev_strategy(bp),
> however i am not totally clear how i can reach the bioq
> from there -- geom has a field
>     bp->bio_disk->d_queue,
> but it does not seem to be universally used, e.g.
> scsi_da uses
>     bp->bio_disk->d_drv1->softc->bio_queue
> ata in 5.x uses
>     bp->bio_disk->d_drv1->queue
> and so on.
>
> So if you intercept dev_strategy() you really need to stall I/O
> until _all_ devices have drained their backlog, because you
> cannot map the request to the individual bioq :(
>
> cheers
> luigi

How often are you going to be changing the scheduler at runtime? Is the
scheduler meant to be aware of individual devices? Again, I'm not
advocating that the upper layers be able to look at or manipulate the
driver bioq's.

Scott

Received on Thu Jul 14 2005 - 22:46:23 UTC
This archive was generated by hypermail 2.4.0 : Wed May 19 2021 - 11:38:38 UTC