ciss(4): speed degradation for Compaq Smart Array [3rd edition]

From: Andrey Koklin <aka_at_veco.ru>
Date: Tue, 12 Apr 2005 13:50:29 +0400
On Wed, 30 Mar 2005 11:27:38 -0800
Julian Elischer <julian_at_elischer.org> wrote:

[snip]

> Thanks for giving more info.
> This shows up some problems, though.
>
> I'm not saying that there is no problem (I actually think there is a
> slowdown in 5/6, but it should be amenable to tuning as we get time to
> look at it. The new disk code is a lot more dependent on the scheduler
> than the old disk code). What I AM saying is that the test environment
> doesn't eliminate some of the possible reasons for speed differences.
> For example, you don't say if the RAID controllers are set up the same,
> and the disks do not match: the 74GB drives may be newer and faster.
>
> Maybe you should reinstall the 6.0 machine to have a 4.11 partition as
> well, so that you can dual-boot on the exact same hardware. THAT would
> show it if you used the same partition for both tests. (The testing
> partition should be a UFS1 filesystem that both can read.)

Sorry, I fell ill, so this reply comes a bit later than planned.

To recap: there was a substantial difference in overall transfer rates
between FreeBSD 4.11 and 6.0-CURRENT (5.4 gave results similar to 6.0,
so I've omitted it for brevity).

This time I'm using a single server, so the hardware is exactly the same:

HP Proliant DL380 G2, 2 x P3 1.133GHz, RAM 1280 Mb,
SmartArray 5i, 5 x 36Gb Ultra320 10K HDD
disks configured as RAID5 with default stripe size (16K?)

do-test # bsdlabel da0s1
# /dev/da0s1:
8 partitions:
#        size   offset    fstype   [fsize bsize bps/cpg]
  a:  4194304        0    4.2BSD        0     0     0 
  b:  4194304  4194304      swap                    
  c: 284490208        0    unused        0     0         # "raw" part, don't edit
  d:  4194304  8388608    4.2BSD     2048 16384    89 
  e: 16777216 12582912    4.2BSD        0     0     0 
  f: 188743680 29360128    4.2BSD        0     0     0 
  g: 66386400 218103808    4.2BSD     2048 16384 28552 
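The size and offset columns in the label are 512-byte sector counts;
converting a couple of them confirms the df output below (a quick sketch
with any POSIX awk):

```shell
# Convert bsdlabel's 512-byte sector counts to gigabytes.
awk 'BEGIN {
    printf "a: %.1f GB\n", 4194304  * 512 / 1024^3;   # root partition
    printf "g: %.1f GB\n", 66386400 * 512 / 1024^3;   # test partition
}'
# Prints "a: 2.0 GB" and "g: 31.7 GB", matching df's 1.9G/31G
# (df reports space net of filesystem overhead).
```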

do-test # df -lh  
Filesystem     Size    Used   Avail Capacity  Mounted on
/dev/da0s1a    1.9G     53M    1.7G     3%    /
devfs          1.0K    1.0K      0B   100%    /dev
/dev/da0s1e    7.7G    1.6G    5.5G    22%    /usr
/dev/da0s1f     87G     14G     66G    17%    /var
/dev/da0s1g     31G    3.4G     25G    12%    /mnt

da0s1a - FreeBSD 6.0-CURRENT
da0s1d - FreeBSD 4.11

Both OSes run custom SMP kernels. The 6.0 kernel has debugging options
stripped out and uses the 4BSD scheduler (I tried ULE too; it made only
a 5-10% difference in transfer rate and CPU load, so I've omitted those
results).

As partition geometry turned out not to be a big factor, all tests use
the same partition, da0s1g, newfs'ed as UFS1 or UFS2 for each run.
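Each run below follows the same sequence; sketched here as a script
(paths and flags exactly as in this report; the default RUN=echo only
prints the commands, since newfs is destructive):

```shell
#!/bin/sh
# One benchmark round: newfs, mount, sequential write/read, bonnie.
# RUN=echo (the default) is a dry run; set RUN= to execute for real.
PART=${PART:-/dev/da0s1g}
MNT=${MNT:-/mnt}
RUN=${RUN:-echo}

$RUN newfs -O1 -U -o time "$PART"   # -O2 instead of -O1 for the UFS2 run
$RUN mount "$PART" "$MNT"
$RUN dd if=/dev/zero of="$MNT/1Gb-1" bs=1m count=1024   # sequential write
$RUN dd if="$MNT/1Gb-1" of=/dev/null bs=1m              # sequential read
$RUN bonnie -d "$MNT" -m 'label' -s 4096
$RUN umount "$MNT"
```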


6.0-CURRENT, UFS2
-----------------
# newfs -O2 -U -o time /dev/da0s1g
# mount /dev/da0s1g /mnt
#
# dd if=/dev/zero of=/mnt/1Gb-1 bs=1m count=1024
1024+0 records in
1024+0 records out
1073741824 bytes transferred in 48.481901 secs (22147272 bytes/sec)
...
# dd if=/mnt/1Gb-1 of=/dev/null bs=1m           
1024+0 records in
1024+0 records out
1073741824 bytes transferred in 23.303288 secs (46076838 bytes/sec)
#
# bonnie -d /mnt -m '6.0(1)' -s 4096
              -------Sequential Output-------- ---Sequential Input-- --Random--
              -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
Machine    MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU  /sec %CPU
6.0(1)   4096 15810 35.8 19404 15.3 12366 11.0 30682 68.9 50639  23.5 1084.9 5.7


6.0-CURRENT, UFS1
-----------------
# newfs -O1 -U -o time /dev/da0s1g
# mount /dev/da0s1g /mnt
#
# dd if=/dev/zero of=/mnt/1Gb-1 bs=1m count=1024
1024+0 records in
1024+0 records out
1073741824 bytes transferred in 44.986316 secs (23868187 bytes/sec)
# dd if=/mnt/1Gb-1 of=/dev/null bs=1m
1024+0 records in
1024+0 records out
1073741824 bytes transferred in 21.702390 secs (49475741 bytes/sec)
#
# bonnie -d /mnt -m '6.0(2)' -s 4096
              -------Sequential Output-------- ---Sequential Input-- --Random--
              -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
Machine    MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU  /sec %CPU
6.0(2)   4096 17107 39.8 23879 16.9 13289 11.8 33849 75.9 50417 23.5 1116.5  5.9


6.0-CURRENT, UFS1 (no snap)
---------------------------
# newfs -O1 -U -n -o time /dev/da0s1g
# mount /dev/da0s1g /mnt
#
# dd if=/dev/zero of=/mnt/1Gb-1 bs=1m count=1024
1024+0 records in
1024+0 records out
1073741824 bytes transferred in 39.034020 secs (27507846 bytes/sec)
# dd if=/mnt/1Gb-1 of=/dev/null bs=1m        
1024+0 records in
1024+0 records out
1073741824 bytes transferred in 22.023556 secs (48754244 bytes/sec)
#
# bonnie -d /mnt -m '6.0(3)' -s 4096
              -------Sequential Output-------- ---Sequential Input-- --Random--
              -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
Machine    MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU  /sec %CPU
6.0(3)   4096 20402 45.2 20903 15.8 12674 11.0 32834 73.5 53088 22.3 1072.1  6.4


6.0-CURRENT, UFS1, partition formatted under 4.11
-------------------------------------------------
# dd if=/dev/zero of=/mnt/1Gb-1 bs=1m count=1024
1024+0 records in
1024+0 records out
1073741824 bytes transferred in 25.460762 secs (42172415 bytes/sec)
# dd if=/dev/zero of=/mnt/1Gb-3 bs=1m count=1024
1024+0 records in
1024+0 records out
1073741824 bytes transferred in 26.140447 secs (41075879 bytes/sec)
#
# bonnie -d /mnt -m '6.0(4)' -s 4096
              -------Sequential Output-------- ---Sequential Input-- --Random--
              -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
Machine    MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU  /sec %CPU
6.0(4)   4096 27343 59.8 36447 27.6 17517 15.1 39665 90.4 45941 19.3 1086.4  5.7


4.11-STABLE
-----------
# newfs -U -o time /dev/da0s1g
# mount /dev/da0s1g /mnt
#
# dd if=/dev/zero of=/mnt/1Gb-1 bs=1m count=1024
1024+0 records in
1024+0 records out
1073741824 bytes transferred in 24.076042 secs (44597938 bytes/sec)
...
# dd if=/mnt/1Gb-1 of=/dev/null bs=1m
1024+0 records in
1024+0 records out
1073741824 bytes transferred in 12.619832 secs (85083686 bytes/sec)
#
# bonnie -d /mnt -m '4.11' -s 4096
              -------Sequential Output-------- ---Sequential Input-- --Random--
              -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
Machine    MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU  /sec %CPU
4.11     4096 45359 74.4 47120 24.7 21104 16.2 45216 97.9 85723 31.8 1503.2  5.3


Putting bonnie results together:

              -------Sequential Output-------- ---Sequential Input-- --Random--
              -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
Machine    MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU  /sec %CPU
6.0(1)   4096 15810 35.8 19404 15.3 12366 11.0 30682 68.9 50639 23.5 1084.9  5.7
6.0(2)   4096 17107 39.8 23879 16.9 13289 11.8 33849 75.9 50417 23.5 1116.5  5.9
6.0(3)   4096 20402 45.2 20903 15.8 12674 11.0 32834 73.5 53088 22.3 1072.1  6.4
6.0(4)   4096 27343 59.8 36447 27.6 17517 15.1 39665 90.4 45941 19.3 1086.4  5.7
4.11     4096 45359 74.4 47120 24.7 21104 16.2 45216 97.9 85723 31.8 1503.2  5.3

Where:
(1) - 6.0, UFS2
(2) - 6.0, UFS1
(3) - 6.0, UFS1, no snap
(4) - 6.0, UFS1, partition formatted under 4.11


Again, simple benchmarks show disk-subsystem throughput under 6.0 at
roughly 50-70% of 4.11's.

Not fatal yet, as long as it doesn't drop further.
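For what it's worth, the 50-70% figure can be sanity-checked against the
bonnie summary table above, e.g. the block-output and block-input columns
of the 6.0 UFS1 run (2) versus 4.11:

```shell
# Ratio of 6.0(2) to 4.11 bonnie block-I/O throughput
# (K/sec values taken from the summary table).
awk 'BEGIN {
    printf "block write: %.0f%%\n", 100 * 23879 / 47120;
    printf "block read:  %.0f%%\n", 100 * 50417 / 85723;
}'
# Prints "block write: 51%" and "block read: 59%".
```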

Andrey
Received on Tue Apr 12 2005 - 07:50:33 UTC