6.0: Unable to make disks 100% busy in file system reads.

From: Julian Elischer <julian_at_elischer.org>
Date: Sun, 31 Jul 2005 13:55:58 -0700
On my 4.x systems, the following command makes the disks go about 100%
(well, 98%) busy (measured by systat -vmstat).

tar cf /dev/null /usr

I know that some versions of tar recognise /dev/null as an output device
and cheat, so to be sure I confirmed that

tar cf - /usr | dd of=/dev/null bs=128k

has the same result (IDE drive).

The same command run on 6.0 has difficulty keeping the drives 70% busy
(though for some unknown reason I have seen it get to 87% for up to
10 or 15 seconds at a time).

This is as measured by both gstat and systat -vmstat.
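
For anyone wanting to watch the same numbers, invocations roughly like the
following should do (the 1-second interval is just an arbitrary choice):

gstat -I 1s
systat -vmstat 1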

CPU usage at the time:

21.0%Sys   2.3%Intr  0.5%User  0.0%Nice 76.2%Idl
17.9%Sys   2.0%Intr  0.5%User  0.0%Nice 79.5%Idl 

It is noticeable that at the times when the disk usage goes higher (e.g. 87%),
the system idle time is also higher and sys time drops to about 6%, so
I am presuming that a set of large files is being traversed at those times.
Softupdates is NOT enabled.

Now if I start TWO of the work processes, the drive usage climbs to a
pretty permanent 98%, which is quite acceptable. So it's not geom,
at least not in any direct manner.
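
To be concrete, by two work processes I mean something along these lines, run
concurrently (the choice of subtrees is arbitrary, just enough to keep both
pipelines reading):

tar cf - /usr/src | dd of=/dev/null bs=128k &
tar cf - /usr/ports | dd of=/dev/null bs=128k &
wait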

The interesting part is that 4.11 is able to force this disk usage with just
one work process.

It seems to be something to do with how quickly the data read from disk is
returned. Possibly some scheduler interaction, along with some side effects
of the new ...

I've been looking at the way that the scheduling works and haven't seen
anything that really stands out. If anyone has any ideas of other things to
look at, I'm all ears.


Julian