Re: Increasing MAXPHYS

From: Julian Elischer <julian_at_elischer.org>
Date: Mon, 22 Mar 2010 17:33:51 -0700
Pawel Jakub Dawidek wrote:
> On Mon, Mar 22, 2010 at 08:23:43AM +0000, Poul-Henning Kamp wrote:
>> In message <4BA633A0.2090108_at_icyb.net.ua>, Andriy Gapon writes:
>>> on 21/03/2010 16:05 Alexander Motin said the following:
>>>> Ivan Voras wrote:
>>>>> Hmm, it looks like it could be easy to spawn more g_* threads (and,
>>>>> barring specific class behaviour, it has a fair chance of working out of
>>>>> the box) but the incoming queue will need to also be broken up for
>>>>> greater effect.
>>>> According to the "notes", it looks like there is a good chance of races,
>>>> as some places expect only one up and one down thread.
>>> I haven't given any deep thought to this issue, but I remember us
>>> discussing it over beer :-)
>> The easiest way to obtain more parallelism is to divide the mesh into
>> multiple independent meshes.
>>
>> This will do you no good if you have five disks in a RAID-5 config, but
>> if you have two disks each mounted on its own filesystem, you can run
>> a g_up & g_down for each of them.
> 
> A class is supposed to interact with other classes only via GEOM, so I
> think it should be safe to choose g_up/g_down threads for each class
> individually, for example:
> 
> 	/dev/ad0s1a (DEV)
> 	       |
> 	g_up_0 + g_down_0
> 	       |
> 	     ad0s1a (BSD)
> 	       |
> 	g_up_1 + g_down_1
> 	       |
> 	     ad0s1 (MBR)
> 	       |
> 	g_up_2 + g_down_2
> 	       |
> 	     ad0 (DISK)
> 
> We could easily select the g_down thread based on bio_to->geom->class and
> the g_up thread based on bio_from->geom->class, so we know I/O requests for
> our class always come from the same threads.
> 
> If we could make the same assumption for geoms, it would allow for even
> better distribution.

Doesn't really help my problem, however. I just want to access the base
provider directly with no GEOM thread involved.

> 
Received on Mon Mar 22 2010 - 23:33:55 UTC
