Re: ciss(4): speed degradation for Compaq Smart Array [edited]

From: Julian Elischer <julian_at_elischer.org>
Date: Wed, 30 Mar 2005 11:27:38 -0800
Thanks for giving more info.
This shows up some problems, though.

Andrey Koklin wrote:

>First, I'm thankful to the people who found time to answer my previous,
>messy post privately. Indeed, I hadn't included key information about
>my system configuration there, and the tests themselves said little or
>nothing about real disk performance (they used 1K blocks and didn't
>take disk geometry into consideration).
>
>Nevertheless, my new corrected tests still reveal the same problem
>with performance on new systems.
>
>Tested systems:
>
>1. FreeBSD 4.11-STABLE #0: Thu Mar  3 15:40:34 MSK
>   Platform: HP Proliant DL380 G3, 2 x Xeon 3.2GHz, memory 2Gb,
>             SmartArray 5i, 5 x 18Gb Ultra3 10K HDD
>   SMP kernel
>   ciss driver version 1.2.2.21 2005/01/21
>
>2. FreeBSD 5.4-PRERELEASE #16: Sun Mar 20 23:05:52 MSK
>   Platform: HP Proliant DL380 G3, 2 x Xeon 3.2GHz, memory 2Gb,
>             SmartArray 5i, 6 x 72Gb Ultra320 10K HDD
>   SMP kernel
>   ciss driver version 1.56.2.1 2005/01/20
>  
>

This has a different drive type from the first system, and a different
number of drives.
What is the speed difference between the different drive types?

>3. FreeBSD 6.0-CURRENT #0: Tue Mar 29 15:45:56 MSD
>   Platform: HP Proliant DL380 G2, 2 x P3 1.133GHz, memory 1Gb,
>             SmartArray 5i, 5 x 36Gb Ultra320 10K HDD
>   SMP kernel with debugging information stripped
>   ciss driver version 1.60 2005/03/29
>  
>

Whoops! Yet another different type of drive.

I'm not saying that there is no problem (I actually think there is a
slowdown in 5/6, but it should be amenable to tuning as we get time to
look at it; the new disk code is a lot more dependent on the scheduler
than the old disk code). What I AM saying is that the test environment
doesn't eliminate some of the possible reasons for speed differences.
 For example, you don't say whether the RAID controllers are set up the same.
And the disks do not match; the 72GB drives may be newer and faster.

Maybe you should reinstall the 6.0 machine to have a 4.11 partition as
well, so that you can dual boot on the exact same hardware. THAT would
show it if you used the same partition for both tests. (The testing
partition should be a UFS1 filesystem that both can read.)
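The side-by-side comparison amounts to running one identical script against
the same partition under each kernel. A minimal sketch (scaled down to a
16 MB file so it runs quickly; the file path and sizes are illustrative,
not from the original tests):

```shell
#!/bin/sh
# Scaled-down version of the dd benchmark, suitable for running unchanged
# under both 4.11 and 5.x/6.x against the same (UFS1) test partition.
FILE=${1:-/tmp/ddtest.bin}
BS=64k
COUNT=256                 # 64k * 256 = 16 MB (the original tests used 1 GB)

# Sequential write; dd's summary line on stderr reports the throughput.
dd if=/dev/zero of="$FILE" bs="$BS" count="$COUNT" 2> write.log
cat write.log

# Sequential read back.  Note: with a small file (or a 1 GB file on a
# 2 GB machine) much of the read may be served from the buffer cache,
# so read numbers overstate raw disk speed.
dd if="$FILE" of=/dev/null bs="$BS" 2> read.log
cat read.log

rm -f "$FILE"
```

Running the exact same script from the exact same partition removes the
drive-model and controller-setup variables mentioned above; only the
kernel changes between boots.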

>
>Tests use 64k reads/writes on 3 slices.
>Of course, there is a fragmentation factor, but it's small enough.
>I've played with a newly formatted FS, soft-updates, and sync/async modes,
>with nearly the same results.
>
>-- 8< ------------------------------------------------------------------
>
>do# uname -a
>FreeBSD do.veco.ru 4.11-STABLE FreeBSD 4.11-STABLE #0: Thu Mar  3 15:40:34 MSK 2005     wooler_at_do.veco.ru:/usr/obj/usr/src/sys/DO  i386
>do# BS=64k
>do# BC=16
>do# df -lh
>Filesystem    Size   Used  Avail Capacity  Mounted on
>/dev/da0s1a   252M    45M   187M    20%    /
>/dev/da0s1h   2.0G    24K   1.8G     0%    /tmp
>/dev/da0s1e   2.0G   1.5G   325M    82%    /usr
>/dev/da0s1f    30G    17G    11G    61%    /var
>/dev/da0s1g    30G   1.5G    26G     5%    /export
>procfs        4.0K   4.0K     0B   100%    /proc
>  
>

This doesn't show the order of the partitions on the drive.
Use 'disklabel da0s1' to show that information.

>do# dd if=/dev/zero of=/tmp/1Gb bs=$BS count=$[$BC*1024]
>16384+0 records in
>16384+0 records out
>1073741824 bytes transferred in 23.826439 secs (45065140 bytes/sec)
>do# dd if=/dev/zero of=/var/1Gb bs=$BS count=$[$BC*1024]
>16384+0 records in
>16384+0 records out
>1073741824 bytes transferred in 27.081948 secs (39647880 bytes/sec)
>do# dd if=/dev/zero of=/export/1Gb bs=$BS count=$[$BC*1024]
>16384+0 records in
>16384+0 records out
>1073741824 bytes transferred in 22.714908 secs (47270357 bytes/sec)
>do# dd if=/tmp/1Gb of=/dev/null bs=$BS
>16384+0 records in
>16384+0 records out
>1073741824 bytes transferred in 9.439599 secs (113748669 bytes/sec)
>do# dd if=/var/1Gb of=/dev/null bs=$BS
>16384+0 records in
>16384+0 records out
>1073741824 bytes transferred in 9.327485 secs (115115901 bytes/sec)
>do# dd if=/export/1Gb of=/dev/null bs=$BS
>16384+0 records in
>16384+0 records out
>1073741824 bytes transferred in 8.826914 secs (121644078 bytes/sec)
>do# rm /tmp/1Gb /var/1Gb /export/1Gb
>do# exit
>
>-- 8< ------------------------------------------------------------------
>
>re:/ # uname -a
>FreeBSD re.veco.ru 5.4-PRERELEASE FreeBSD 5.4-PRERELEASE #16: Sun Mar 20 23:05:52 MSK 2005     root_at_re.veco.ru:/usr/obj/usr/src/sys/RE  i386
>re:/ # BS=64k
>re:/ # BC=16
>re:/ # df -lh
>Filesystem     Size    Used   Avail Capacity  Mounted on
>/dev/da0s1a     15G    2.2G     12G    15%    /
>devfs          1.0K    1.0K      0B   100%    /dev
>/dev/da0s1e    124G     14G    100G    12%    /var
>/dev/da0s1d     31G     23G    5.7G    80%    /var/db/backup
>/dev/da0s1f    154G     87G     54G    62%    /var/ftp
>/dev/md0       124M     66K    114M     0%    /tmp
>devfs          1.0K    1.0K      0B   100%    /var/named/dev
>re:/ # dd if=/dev/zero of=/1Gb bs=$BS count=$[$BC*1024]
>16384+0 records in
>16384+0 records out
>1073741824 bytes transferred in 40.210140 secs (26703260 bytes/sec)
>re:/ # dd if=/dev/zero of=/var/1Gb bs=$BS count=$[$BC*1024]
>16384+0 records in
>16384+0 records out
>1073741824 bytes transferred in 39.433364 secs (27229273 bytes/sec)
>re:/ # dd if=/dev/zero of=/var/ftp/1Gb bs=$BS count=$[$BC*1024]
>16384+0 records in
>16384+0 records out
>1073741824 bytes transferred in 45.285700 secs (23710395 bytes/sec)
>re:/ # dd if=/1Gb of=/dev/null bs=$BS
>16384+0 records in
>16384+0 records out
>1073741824 bytes transferred in 17.519033 secs (61290016 bytes/sec)
>re:/ # dd if=/var/1Gb of=/dev/null bs=$BS
>16384+0 records in
>16384+0 records out
>1073741824 bytes transferred in 17.972094 secs (59744949 bytes/sec)
>re:/ # dd if=/var/ftp/1Gb of=/dev/null bs=$BS
>16384+0 records in
>16384+0 records out
>1073741824 bytes transferred in 15.436768 secs (69557425 bytes/sec)
>re:/ # rm /1Gb /var/1Gb /var/ftp/1Gb
>re:/ # exit
>
>-- 8< ------------------------------------------------------------------
>
>do-test# uname -a
>FreeBSD do-test.veco.ru 6.0-CURRENT FreeBSD 6.0-CURRENT #0: Tue Mar 29 15:45:56 MSD 2005     wooler_at_do-test.veco.ru:/usr/obj/usr/src/sys/  i386
>do-test# BS=64k
>do-test# BC=16
>do-test# df -lh
>Filesystem     Size    Used   Avail Capacity  Mounted on
>/dev/da0s1a    1.9G     73M    1.7G     4%    /
>devfs          1.0K    1.0K      0B   100%    /dev
>/dev/da0s1d    1.9G    940K    1.8G     0%    /tmp
>/dev/da0s1e    7.7G    1.6G    5.5G    22%    /usr
>/dev/da0s1f     87G     13G     67G    16%    /var
>/dev/da0s1g     31G    2.0K     28G     0%    /export
>do-test# dd if=/dev/zero of=/tmp/1Gb bs=$BS count=$[$BC*1024]
>16384+0 records in
>16384+0 records out
>1073741824 bytes transferred in 48.079470 secs (22332647 bytes/sec)
>do-test# dd if=/dev/zero of=/var/1Gb bs=$BS count=$[$BC*1024]
>16384+0 records in
>16384+0 records out
>1073741824 bytes transferred in 48.591069 secs (22097514 bytes/sec)
>do-test# dd if=/dev/zero of=/export/1Gb bs=$BS count=$[$BC*1024]
>16384+0 records in
>16384+0 records out
>1073741824 bytes transferred in 48.915319 secs (21951034 bytes/sec)
>do-test# dd if=/tmp/1Gb of=/dev/null bs=$BS
>16384+0 records in
>16384+0 records out
>1073741824 bytes transferred in 30.366247 secs (35359714 bytes/sec)
>do-test# dd if=/var/1Gb of=/dev/null bs=$BS
>16384+0 records in
>16384+0 records out
>1073741824 bytes transferred in 29.430927 secs (36483452 bytes/sec)
>do-test# dd if=/export/1Gb of=/dev/null bs=$BS
>16384+0 records in
>16384+0 records out
>1073741824 bytes transferred in 30.164319 secs (35596422 bytes/sec)
>do-test# rm /tmp/1Gb /var/1Gb /export/1Gb
>do-test# exit
>
>-- 8< ------------------------------------------------------------------
>
>
>Andrey
>_______________________________________________
>freebsd-current_at_freebsd.org mailing list
>http://lists.freebsd.org/mailman/listinfo/freebsd-current
>To unsubscribe, send any mail to "freebsd-current-unsubscribe_at_freebsd.org"
>  
>
Received on Wed Mar 30 2005 - 17:27:39 UTC
