Andriy Gapon wrote:
> on 21/03/2010 16:05 Alexander Motin said the following:
>> Ivan Voras wrote:
>>> Hmm, it looks like it could be easy to spawn more g_* threads (and,
>>> barring specific class behaviour, it has a fair chance of working out
>>> of the box), but the incoming queue will also need to be broken up
>>> for greater effect.
>> According to the "notes", it looks like there is a good chance of
>> races, as some places expect only one up and one down thread.
>
> I haven't given any deep thought to this issue, but I remember us
> discussing it over beer :-)
> I think one idea was making sure (somehow) that requests traveling over
> the same edge of a geom graph (in the same direction) do so using the
> same queue/thread.
> Another idea was to bring in some netgraph-like optimization where some
> (carefully chosen) geom vertices pass requests by a direct call instead
> of requeuing.

Yeah, like the 1:1 single-provider case (which we and most of our
customers mostly use on our cards), i.e. no slicing or dicing, just the
raw flash card presented as /dev/fio0.
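For what it's worth, here is a rough user-space sketch of the per-edge
affinity idea (all names here, like pick_worker and NWORKERS, are made
up for illustration; this is not the actual g_up/g_down code): hash the
consumer/provider pair plus direction to a fixed worker, so requests on
the same edge stay ordered while unrelated edges can be serviced by
different threads. The netgraph-style optimization would presumably be
the degenerate case where a chosen vertex skips the queue entirely and
calls the next vertex directly.

/*
 * Illustrative sketch only: map a geom-graph edge (plus direction)
 * onto one of a small number of worker queues.  Requests on the same
 * edge in the same direction always pick the same worker, preserving
 * per-edge ordering.
 */
#include <stdint.h>
#include <stdio.h>

#define NWORKERS 4

struct edge_key {
	const void *consumer;	/* source vertex of the edge */
	const void *provider;	/* destination vertex of the edge */
	int	    down;	/* 1 = request going down, 0 = completion going up */
};

/* Map an edge (plus direction) onto one of NWORKERS queues. */
static unsigned
pick_worker(const struct edge_key *e)
{
	uintptr_t h;

	h = (uintptr_t)e->consumer ^ ((uintptr_t)e->provider >> 4);
	h ^= (uintptr_t)e->down;
	return ((unsigned)(h % NWORKERS));
}

int
main(void)
{
	int a, b;			/* stand-ins for two geom vertices */
	struct edge_key e1 = { &a, &b, 1 };
	struct edge_key e2 = { &a, &b, 1 };

	/* Same edge, same direction -> same worker, so ordering holds. */
	printf("e1 -> worker %u, e2 -> worker %u\n",
	    pick_worker(&e1), pick_worker(&e2));
	return (0);
}

In the 1:1 raw-card case above there is only one edge anyway, so any
such scheme collapses to a single queue (or a direct call) per card.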