gmirror 'load' algorithm (Was: Re: siis/atacam/ata/gmirror 8.0-BETA3 disk performance)

From: Alexander Motin <mav_at_FreeBSD.org>
Date: Thu, 03 Sep 2009 21:13:46 +0300
Emil Mikulic wrote:
> On Wed, Sep 02, 2009 at 05:51:35PM +0300, Alexander Motin wrote:
>> To completely load gmirror on read operations, you may need to run
>> two dd's at the same time. Also make sure that your gmirror runs in
>> round-robin mode. The default split mode, which should help with
>> linear reads, is IMHO ineffective, at least with the default MAXPHYS
>> and slice values.
> 
> On that note, there is an excellent patch in this PR which improves
> the way gmirror schedules read requests to different disks:
> 
> http://www.freebsd.org/cgi/query-pr.cgi?pr=kern/113885
> 
> Could someone please commit this?
> 
> With this patch and a two-way mirror, I can run two linear scans of
> different files in parallel and get almost perfect scaling.  (result:
> this approximately halves the wall-clock time it takes to do a backup of
> some fat VM images)
> 
> IIRC, without the patch it's faster to run them sequentially.  :(

I have played a bit with this patch on a 4-disk mirror. It works better 
than the original algorithm, but it is still not perfect.

1. I managed to produce a situation with 4 read streams where 3 drives 
were busy while the fourth one was completely idle. gmirror preferred 
to keep seeking one of the drives over short distances rather than use 
the idle drive, because its heads were a few gigabytes away from that 
point.
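
To make the failure mode concrete, here is a minimal sketch in C of 
such a distance-only chooser. This is not the code from the kern/113885 
patch; all names and structures here are invented for illustration:

#include <sys/types.h>

struct disk_state {
        off_t   last_offset;    /* where the previous request left the heads */
        int     queue_len;      /* requests currently outstanding */
};

/*
 * A disk whose heads sit a few gigabytes away always loses to a busy
 * disk with a shorter seek, so the idle disk is never picked.
 */
static int
choose_disk(struct disk_state *dp, int ndisks, off_t req_offset)
{
        off_t dist, best_dist;
        int i, best;

        best = 0;
        best_dist = -1;
        for (i = 0; i < ndisks; i++) {
                dist = dp[i].last_offset > req_offset ?
                    dp[i].last_offset - req_offset :
                    req_offset - dp[i].last_offset;
                if (best_dist == -1 || dist < best_dist) {
                        best_dist = dist;
                        best = i;
                }
        }
        /* queue_len is never consulted: a busy-but-near disk always wins. */
        return (best);
}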

IMHO the request locality priority should be made almost equal for any 
nonzero distance. As we can see with split mode, even small gaps 
between requests can significantly reduce drive performance. So I think 
it matters little whether the data are 100MB or 500GB away from the 
current head position. The perfect case is when requests are completely 
sequential; everything beyond a few megabytes from the current position 
just won't fit in the drive's cache.
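
A minimal sketch of the scoring rule I have in mind; the 8MB window is 
an assumed stand-in for "a few megabytes", not a measured figure:

#include <sys/types.h>

#define SEQ_WINDOW      (8 * 1024 * 1024)       /* assumed drive cache reach */

/*
 * Sequential access gets full credit, a short gap gets a small
 * penalty, and any longer seek gets the same flat penalty.
 */
static int
locality_penalty(off_t last_offset, off_t req_offset)
{
        off_t dist;

        dist = req_offset - last_offset;
        if (dist < 0)
                dist = -dist;
        if (dist == 0)
                return (0);     /* perfectly sequential */
        if (dist < SEQ_WINDOW)
                return (1);     /* may still hit drive read-ahead */
        return (2);             /* beyond cache: distance is irrelevant */
}

With a flat penalty like this, an idle drive 500GB away competes on 
equal footing with a busy drive 100MB away.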

2. IMHO it would be much better to use an averaged request queue depth 
as the load measure, instead of the last request submit time. Submit 
time works fine only for equal requests, equal drives and a serialized 
load, but that is exactly the case where complicated load balancing is 
not needed. The fact that some drive just got a request means nothing 
if another one got 50 requests a second ago and is still processing 
them.
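
As a sketch of what such an averaged measure could look like (the 
fixed-point format and the 7/8 decay factor are arbitrary choices, not 
anything taken from gmirror):

#include <sys/types.h>

struct disk_load {
        u_int   queue_len;      /* requests outstanding right now */
        u_int   avg_load;       /* EWMA of queue_len, fixed point <<8 */
};

/* Called periodically (or per request) to age the average. */
static void
update_load(struct disk_load *d)
{
        d->avg_load = (d->avg_load * 7 + (d->queue_len << 8)) / 8;
}

/* Pick the disk with the lowest averaged queue depth. */
static int
choose_disk_by_load(struct disk_load *dp, int ndisks)
{
        int i, best = 0;

        for (i = 1; i < ndisks; i++)
                if (dp[i].avg_load < dp[best].avg_load)
                        best = i;
        return (best);
}

With this measure, a drive that just received a single request still 
looks far less loaded than one still chewing through 50 requests from a 
second ago.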

-- 
Alexander Motin