Re: Apparent strange disk behaviour in 6.0

From: Brian Candler <B.Candler_at_pobox.com>
Date: Sat, 30 Jul 2005 23:12:16 +0100
On Sat, Jul 30, 2005 at 08:37:17PM +0200, Poul-Henning Kamp wrote:
> In message <20050730171536.GA740_at_uk.tiscali.com>, Brian Candler writes:
> >On Sat, Jul 30, 2005 at 03:29:27AM -0700, Julian Elischer wrote:
> >> 
> >> The snapshot below is typical when doing tar from one drive to another..
> >> (tar c -C /disk1 -f - . | tar x -C /disk2 -f -)
> >> 
> >> dT: 1.052  flag_I 1000000us  sizeof 240  i -1
> >>  L(q)  ops/s    r/s   kBps   ms/r    w/s   kBps   ms/w    d/s   kBps   ms/d  %busy Name
> >>     0    405    405   1057    0.2      0      0    0.0      0      0    0.0  9.8| ad0
> >>     0    405    405   1057    0.3      0      0    0.0      0      0    0.0 11.0| ad0s2
> >>     0    866      3     46    0.4    863   8459    0.7      0      0    0.0 63.8| da0
> >>    25    866      3     46    0.5    863   8459    0.8      0      0    0.0 66.1| da0s1
> >>     0    405    405   1057    0.3      0      0    0.0      0      0    0.0 12.1| ad0s2f
> >>   195    866      3     46    0.5    863   8459    0.8      0      0    0.0 68.1| da0s1d
...
> >But if it really is only 12.1% busy (which the 0.3 ms/r implies),
> 
> "busy %" numbers is *NOT* a valid measure of disk throughput, please do
> not pay attention to such numbers!

It seems to me that
     reads/sec * milliseconds/read  =  milliseconds spent reading per second

and that the "busy %" is <milliseconds per second spent with one or more
read or write requests outstanding> expressed as a percentage. The figures
in the above table seem to bear this out, bar rounding errors since ms/r is
so small. Or am I mistaken?

Examples:

> >>     0    405    405   1057    0.2      0      0    0.0      0      0    0.0  9.8| ad0

    405 * 0.2 = 81ms reading = 8%  (vs. busy% = 9.8%)
   
> >>    25    866      3     46    0.5    863   8459    0.8      0      0    0.0 66.1| da0s1

    3*0.5 + 863 * 0.8 = 692ms read/write = 69% (vs. busy% = 66%)

I guess I could dig through the source to check if this is true. But this is
how I had always assumed "busy %" was calculated: time spent waiting for
reads or writes to complete, as opposed to the time spent idle (with no
outstanding read or write request queued).
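
Purely as an illustration, here is a quick Python sketch of that arithmetic,
using the figures quoted from the gstat output above (it is only an
approximation, not how geom actually computes the column):

    # Rough sanity check of the interpretation above: multiply request
    # rate by average service time to get ms spent servicing I/O per
    # second.  Overlapping requests would make this an over-estimate.
    rows = [
        # (name,   r/s, ms/r,  w/s, ms/w, reported busy%)
        ("ad0",    405, 0.2,     0, 0.0,   9.8),
        ("da0s1",    3, 0.5,   863, 0.8,  66.1),
    ]
    for name, rps, msr, wps, msw, busy in rows:
        ms_per_sec = rps * msr + wps * msw   # ms of service time per second
        est = ms_per_sec / 10.0              # 1000 ms/s == 100%, so divide by 10
        print("%-6s estimated %5.1f%%  reported %5.1f%%" % (name, est, busy))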

If I'm right, then the OP is right to ask why both the reading and writing
disks are well under 100% utilisation for a simple streaming copy-from or
copy-to operation.

> If you want to know how busy your disk is, simply look in the ms/r
> and ms/w columns and decide if you can live with that average
> transaction time.  If it is too high for your liking, then your
> disk is too busy.
> 
> If you want to do quantitative predictions, you need to do the
> queue-theory thing on those numbers.
> 
> If you know your queue-theory, you also know why busy% is
> a pointless measurement:  It represents the amount of time
> where the queue is non-empty.  It doesn't say anything about
> how quickly the queue drains or fills.

Indeed; if you have multiple processes competing for the disk at random
points in time, then the time to service each request is governed by
queueing theory. For the same reason, an Internet connection is considered
"full" at ~70% utilisation: above that, latency goes through the roof and
users get unhappy.
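
To put rough numbers on that, here is a toy M/M/1 sketch (made-up service
time, and real disks are not M/M/1 queues, so this is illustrative only):

    # With random arrivals, mean response time is roughly
    # service_time / (1 - utilisation), so latency blows up well before
    # the device reports 100% busy.  0.8 ms is just the ms/w figure from
    # the table above, reused here for illustration.
    service_ms = 0.8
    for rho in (0.5, 0.7, 0.9, 0.95, 0.99):
        resp_ms = service_ms / (1.0 - rho)   # mean time in system, M/M/1
        print("utilisation %2.0f%% -> mean response %6.2f ms" % (rho * 100, resp_ms))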

But here we're talking about a single process trying to spool stuff off (or
onto) the disk as quickly as possible. Surely if everything is working
properly, it ought to be able to keep the queue of read (or write) requests
permanently non-empty, and therefore the disk should be permanently in use?
That's like an IP pipe being used for a single FTP stream with sufficiently
large window size. That *should* reach 100% utilisation.
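
A trivial back-of-the-envelope way to see the same thing (hypothetical
numbers, just arithmetic, not a model of the actual I/O path):

    # If the submitter always has the next request queued there is no
    # idle time between requests, so busy% is 100 by definition.  Any gap
    # between one request completing and the next being issued shows up
    # directly as idle time in busy%.
    def busy_percent(service_ms, gap_ms):
        # fraction of each service-plus-gap cycle spent servicing a request
        return 100.0 * service_ms / (service_ms + gap_ms)

    print("back-to-back requests:  %.1f%% busy" % busy_percent(0.8, 0.0))
    print("0.4 ms gap per request: %.1f%% busy" % busy_percent(0.8, 0.4))  # ~67%, like da0s1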

I'm not saying geom is counting wrongly; I am just agreeing with the OP that
the underlying reason for this poor utilisation is worth investigating.
After all, he also only got about 1 MB/s read and 8 MB/s write. It seems unlikely that
the CPU is unable to shift that amount of data per second. But if there were
poor performance from the drive or the I/O card, that still ought to show as
100% utilisation.

Regards,

Brian.
Received on Sat Jul 30 2005 - 20:10:51 UTC
