Re: geom_raid5 inclusion in HEAD?

From: Nikolay Pavlov <qpadla_at_gmail.com>
Date: Tue, 6 Nov 2007 23:00:12 +0200
On Tuesday 06 November 2007 21:33:42 Arne Wörner wrote:
> > How much and what kinds of testing has it already received?
>
> Just that real life test...
> I did a consistency test with TOS according to Pawel's recommendation:
> 1. create this: gmirror (graid5 (3 disks), graid3 (3 disks))
> 2. write some random data with raidtest (I don't know if it can do
> that?) or with UFS+dd
> 3. wait for the gmirror device to enter the state "SYNC-ED" (or
> whatever it is called)...
> 4. compare contents of the graid5 and graid3 device (they should be
> equal)...
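
For anyone who wants to repeat that procedure, it could look roughly like
this (disk names are made up and the graid5 label syntax is assumed; the
point is just to mirror a graid5 and a graid3 volume and compare them):

# graid3 label g3 da0 da1 da2
# graid5 label g5 da3 da4 da5
# gmirror label m0 /dev/raid3/g3 /dev/raid5/g5
# dd if=/dev/random of=/dev/mirror/m0 bs=1M count=1024
(wait until "gmirror status m0" shows both components ACTIVE)
# gmirror stop m0
# dd if=/dev/raid3/g3 bs=1M count=1024 | md5
# dd if=/dev/raid5/g5 bs=1M count=1024 | md5

The two checksums should match if both RAID implementations return the
same data for the written region.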

You may be interested in some tests made by Michael Monashev on his
personal blog (in Russian):
http://michael.mindmix.ru/168-958-rezul-taty-testirovanija-graid5-graid3-gcache-i-raidz.zhtml

He tests graid5, graid3+gcache, and ZFS raidz:

Hardware:
Motherboard: Intel S5000PAL (Alcolu), Intel E5000P
CPU: Dual-Core Intel Xeon 5130, 2.00 GHz, 4 MB cache, 1333 MHz FSB
RAM: 4 GB DDR2-667 Fully Buffered ECC (2*2 GB)
HDD: SATA Seagate 750 GB, 7200 rpm
HDD: SATA Seagate 750 GB, 7200 rpm
HDD: SATA Seagate 750 GB, 7200 rpm
HDD: SATA Seagate 750 GB, 7200 rpm
HDD: SATA Seagate 750 GB, 7200 rpm
HDD: SATA Seagate 750 GB, 7200 rpm

Software:

FreeBSD 7.0-CURRENT amd64

# mount
/dev/ad4s2d on /home (ufs, local, noatime, soft-updates)
/dev/ad4s1h on /opt/log (ufs, local, noatime, soft-updates)
...
tank/opt on /opt (zfs, local)
tank on /tank (zfs, local)
/dev/raid3/g3 on /opt2 (ufs, local, noatime, soft-updates)
/dev/raid5/g5 on /opt3 (ufs, local, noatime, soft-updates)

# zpool status
pool: tank
state: ONLINE
scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          raidz1    ONLINE       0     0     0
            ad6s1   ONLINE       0     0     0
            ad8s2   ONLINE       0     0     0
            ad10s3  ONLINE       0     0     0
            ad12s1  ONLINE       0     0     0
            ad14s2  ONLINE       0     0     0

errors: No known data errors
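
The blog does not show how the pool was created, but judging from the
raidz layout above it was presumably something like this (command
reconstructed, not taken from the original post):

# zpool create tank raidz ad6s1 ad8s2 ad10s3 ad12s1 ad14s2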


# gcache list
Geom name: cache_ad6s2a
WroteBytes: 98304
Writes: 42
CacheFull: 63546
CacheMisses: 70050
CacheHits: 647
CacheReadBytes: 99614208
CacheReads: 7151
ReadBytes: 1494126592
Reads: 92918
InvalidEntries: 0
UsedEntries: 6
Entries: 6
TailOffset: 250048479232
BlockSize: 65536
Size: 100
Providers:
1. Name: cache/cache_ad6s2a
Mediasize: 250048503296 (233G)
Sectorsize: 512
Mode: r1w1e1
Consumers:
1. Name: ad6s2a
Mediasize: 250048503808 (233G)
Sectorsize: 512
Mode: r1w1e1

Geom name: cache_ad8s3a
WroteBytes: 98304
Writes: 42
CacheFull: 63659
CacheMisses: 70150
CacheHits: 652
CacheReadBytes: 99899904
CacheReads: 7143
ReadBytes: 1492060160
Reads: 92918
InvalidEntries: 0
UsedEntries: 6
Entries: 6
TailOffset: 250048479232
BlockSize: 65536
Size: 100
Providers:
1. Name: cache/cache_ad8s3a
Mediasize: 250048503296 (233G)
Sectorsize: 512
Mode: r1w1e1
Consumers:
1. Name: ad8s3a
Mediasize: 250048503808 (233G)
Sectorsize: 512
Mode: r1w1e1

Geom name: cache_ad10s1a
WroteBytes: 98304
Writes: 42
CacheFull: 63679
CacheMisses: 70164
CacheHits: 625
CacheReadBytes: 99901952
CacheReads: 7110
ReadBytes: 1491070464
Reads: 92918
InvalidEntries: 0
UsedEntries: 7
Entries: 7
TailOffset: 250048413696
BlockSize: 65536
Size: 100
Providers:
1. Name: cache/cache_ad10s1a
Mediasize: 250048471040 (233G)
Sectorsize: 512
Mode: r1w1e1
Consumers:
1. Name: ad10s1a
Mediasize: 250048471552 (233G)
Sectorsize: 512
Mode: r1w1e1

Geom name: cache_ad12s2a
WroteBytes: 98304
Writes: 42
CacheFull: 63587
CacheMisses: 70099
CacheHits: 633
CacheReadBytes: 100357120
CacheReads: 7145
ReadBytes: 1493531648
Reads: 92918
InvalidEntries: 0
UsedEntries: 6
Entries: 6
TailOffset: 250048479232
BlockSize: 65536
Size: 100
Providers:
1. Name: cache/cache_ad12s2a
Mediasize: 250048503296 (233G)
Sectorsize: 512
Mode: r1w1e1
Consumers:
1. Name: ad12s2a
Mediasize: 250048503808 (233G)
Sectorsize: 512
Mode: r1w1e1

Geom name: cache_ad14s3a
WroteBytes: 98304
Writes: 42
CacheFull: 28268
CacheMisses: 31194
CacheHits: 213
CacheReadBytes: 43265536
CacheReads: 3139
ReadBytes: 662387200
Reads: 41258
InvalidEntries: 0
UsedEntries: 7
Entries: 7
TailOffset: 250048479232
BlockSize: 65536
Size: 100
Providers:
1. Name: cache/cache_ad14s3a
Mediasize: 250048503296 (233G)
Sectorsize: 512
Mode: r1w1e1
Consumers:
1. Name: ad14s3a
Mediasize: 250048503808 (233G)
Sectorsize: 512
Mode: r1w1e1
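
Each gcache consumer above was presumably labelled along these lines
before building graid3 on top of it; -b and -s should correspond to the
BlockSize and Size fields in the output, but the exact options are my
assumption, so check gcache(8):

# gcache label -b 65536 -s 100 cache_ad6s2a ad6s2a
# gcache label -b 65536 -s 100 cache_ad8s3a ad8s3a
(and likewise for ad10s1a, ad12s2a and ad14s3a)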


# graid3 list
Geom name: g3
State: COMPLETE
Components: 5
Flags: ROUND-ROBIN
GenID: 0
SyncID: 1
ID: 3868124998
Zone64kFailed: 0
Zone64kRequested: 198128
Zone16kFailed: 0
Zone16kRequested: 152096
Zone4kFailed: 0
Zone4kRequested: 62766
Providers:
1. Name: raid3/g3
Mediasize: 1000193882112 (932G)
Sectorsize: 2048
Mode: r0w0e0
Consumers:
1. Name: cache/cache_ad6s2a
Mediasize: 250048503296 (233G)
Sectorsize: 512
Mode: r1w1e1
State: ACTIVE
Flags: NONE
GenID: 0
SyncID: 1
Number: 0
Type: DATA
2. Name: cache/cache_ad8s3a
Mediasize: 250048503296 (233G)
Sectorsize: 512
Mode: r1w1e1
State: ACTIVE
Flags: NONE
GenID: 0
SyncID: 1
Number: 1
Type: DATA
3. Name: cache/cache_ad10s1a
Mediasize: 250048471040 (233G)
Sectorsize: 512
Mode: r1w1e1
State: ACTIVE
Flags: NONE
GenID: 0
SyncID: 1
Number: 2
Type: DATA
4. Name: cache/cache_ad12s2a
Mediasize: 250048503296 (233G)
Sectorsize: 512
Mode: r1w1e1
State: ACTIVE
Flags: NONE
GenID: 0
SyncID: 1
Number: 3
Type: DATA
5. Name: cache/cache_ad14s3a
Mediasize: 250048503296 (233G)
Sectorsize: 512
Mode: r1w1e1
State: ACTIVE
Flags: NONE
GenID: 0
SyncID: 1
Number: 4
Type: PARITY
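
The graid3 device on top of the caches was presumably labelled roughly
like this (reconstructed, not from the original post); -r requests
round-robin reading from the parity component, and it can be toggled
later with "graid3 configure -r g3" / "graid3 configure -R g3", which is
how the "with/without round-robin" runs below differ:

# graid3 label -r g3 cache/cache_ad6s2a cache/cache_ad8s3a \
    cache/cache_ad10s1a cache/cache_ad12s2a cache/cache_ad14s3a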

# graid5 list
Geom name: g5
State: COMPLETE CALM
Status: Total=5, Online=5
Type: AUTOMATIC
Pending: (wqp 0 // 0)
Stripesize: 65536
MemUse: 0 (msl 0)
Newest: -1
ID: 1151162121
Providers:
1. Name: raid5/g5
Mediasize: 1000193916928 (932G)
Sectorsize: 512
Mode: r0w0e0
Consumers:
1. Name: ad6s3
Mediasize: 250048512000 (233G)
Sectorsize: 512
Mode: r1w1e1
DiskNo: 0
Error: No
2. Name: ad8s1
Mediasize: 250048479744 (233G)
Sectorsize: 512
Mode: r1w1e1
DiskNo: 1
Error: No
3. Name: ad10s2
Mediasize: 250048512000 (233G)
Sectorsize: 512
Mode: r1w1e1
DiskNo: 2
Error: No
4. Name: ad12s3
Mediasize: 250048512000 (233G)
Sectorsize: 512
Mode: r1w1e1
DiskNo: 3
Error: No
5. Name: ad14s1
Mediasize: 250048479744 (233G)
Sectorsize: 512
Mode: r1w1e1
DiskNo: 4
Error: No
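
For completeness, the graid5 array was presumably created with something
like the following; -s is assumed here to set the 64 kB stripe size shown
above, so check the geom_raid5 documentation for the exact spelling:

# graid5 label -s 65536 g5 ad6s3 ad8s1 ad10s2 ad12s3 ad14s1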

TEST RESULTS:

raidtest, 10 reads in parallel:
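
The raidtest.data file mentioned below is the request file that raidtest
replays. It is presumably generated beforehand with "raidtest genfile";
the option letters here are from memory and may differ, so check the
raidtest documentation in the ports tree (the numbers would match the
raid5/g5 provider: media size in sectors, sector size, request count):

# raidtest genfile -s 1953503744 -S 512 -n 50000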

graid3 without round-robin reading:

# raidtest test -d /dev/raid3/g3 -n 10
Read 50000 requests from raidtest.data.
Number of READ requests: 50000.
Number of WRITE requests: 0.
Number of bytes to transmit: 3316146176.
Number of processes: 10.
Bytes per second: 9303561
Requests per second: 140

graid3 with round-robin reading:

# raidtest test -d /dev/raid3/g3 -n 10
Read 50000 requests from raidtest.data.
Number of READ requests: 50000.
Number of WRITE requests: 0.
Number of bytes to transmit: 3316146176.
Number of processes: 10.
Bytes per second: 11398078
Requests per second: 171

graid5:

# raidtest test -d /dev/raid5/g5 -n 10
Read 50000 requests from raidtest.data.
Number of READ requests: 50000.
Number of WRITE requests: 0.
Number of bytes to transmit: 3284773376.
Number of processes: 10.
Bytes per second: 19700939
Requests per second: 299

ZFS raidz 
(http://lists.freebsd.org/pipermail/freebsd-geom/2007-September/002593.html )
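
The raidz runs below use a zvol rather than a file system (see the URL
above for the discussion). The volume was presumably created with
something like this; the size is a guess, not taken from the original
post:

# zfs create -V 900G tank/vol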

# raidtest test -d /dev/zvol/tank/vol -n 10 -w
Read 50000 requests from raidtest.data.
Number of READ requests: 0.
Number of WRITE requests: 50000.
Number of bytes to transmit: 3281546240.
Number of processes: 10.
Bytes per second: 120195634
Requests per second: 1831
# zpool export tank
# zpool import tank
# raidtest test -d /dev/zvol/tank/vol -n 10 -r
Read 50000 requests from raidtest.data.
Number of READ requests: 50000.
Number of WRITE requests: 0.
Number of bytes to transmit: 3281546240.
Number of processes: 10.
Bytes per second: 69264127
Requests per second: 1055
# raidtest test -d /dev/zvol/tank/vol -n 10 -r
Read 50000 requests from raidtest.data.
Number of READ requests: 50000.
Number of WRITE requests: 0.
Number of bytes to transmit: 3281546240.
Number of processes: 10.
Bytes per second: 659851727
Requests per second: 10053


Plain partition on a single disk (for comparison):

# raidtest test -d /dev/ad4s1h -n 10
Read 50000 requests from raidtest.data.
Number of READ requests: 50000.
Number of WRITE requests: 0.
Number of bytes to transmit: 3290731520.
Number of processes: 10.
Bytes per second: 9067004
Requests per second: 137


dd:

ZFS raidz:

# dd if=/dev/zero of=/opt/22 bs=8M count=1000
1000+0 records in
1000+0 records out
8388608000 bytes transferred in 66.627487 secs (125903112 bytes/sec)
# dd of=/dev/null if=/opt/22 bs=8M count=1000
1000+0 records in
1000+0 records out
8388608000 bytes transferred in 34.686560 secs (241840297 bytes/sec)

graid3 with round-robin reading:

# dd if=/dev/zero of=/opt2/22 bs=8M count=1000
1000+0 records in
1000+0 records out
8388608000 bytes transferred in 54.425021 secs (154131461 bytes/sec)
# dd of=/dev/null if=/opt2/22 bs=8M count=1000
1000+0 records in
1000+0 records out
8388608000 bytes transferred in 64.300645 secs (130459158 bytes/sec)

graid5:

# dd if=/dev/zero of=/opt3/22 bs=8M count=1000
1000+0 records in
1000+0 records out
8388608000 bytes transferred in 138.592430 secs (60527173 bytes/sec)
# dd of=/dev/null if=/opt3/22 bs=8M count=1000
1000+0 records in
1000+0 records out
8388608000 bytes transferred in 64.681915 secs (129690162 bytes/sec)

Plain single drive (for comparison):

# dd if=/dev/zero of=/home/22 bs=8M count=1000
1000+0 records in
1000+0 records out
8388608000 bytes transferred in 148.426179 secs (56517038 bytes/sec)
# dd of=/dev/null if=/home/22 bs=8M count=1000
1000+0 records in
1000+0 records out
8388608000 bytes transferred in 103.032438 secs (81417155 bytes/sec)


-- 
======================================================================  
- Best regards, Nikolay Pavlov. <<<-----------------------------------    
======================================================================  

