The array is treated as one logical disk, managed by the hardware controller/driver, i.e. /dev/da0. Set up in the controller BIOS with 4 disks as RAID 1 mirrored pairs with RAID 0 striped across the pairs (stock 3ware 0+1), 64 KB stripe, write-back cache on, and 128 MB of controller RAM. The filesystem is UFS2 with soft updates, using the stock newfs defaults as of 5.3. The disk is laid out in the classical Unix pattern: a -> root, b -> swap, f -> var, g -> usr, h -> home. The test was run on home, so it might be a little slower due to inner-track issues. There is no volume manager (i.e. vinum, DiskSuite, etc.).

Now, the numbers are pretty good (my 3ware 6000s and 7000s max out around 20 MB/s on FreeBSD, on older hardware), but this is a "hot" controller that I know could do better with some tuning. I ruled out the hardware (disks/PCI slots/cabling) by running SuSE 9.1 on the same box, but I hated YaST, 5.3 came out with AMD64 support, and the other boxes are FreeBSD, so..... The read-ahead issue has come up for both Windows and Linux on the 3ware support pages, so this seems to be a known issue. But alas, I failed here. It's my fault; I simply do not know which knob to turn. It's not helping my ego that my write speed is faster than my read speed; that should not be.

-mjm

BTW, it's a database machine with 6 GB of RAM, and the disk speed would be handy. Just wanted to mention that before people think "micro-benchmark crazy guy".
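For anyone willing to point me at the right knob, here is the extent of what I have been poking at. vfs.read_max is the only read-ahead sysctl I have found, and I am only assuming it is the closest FreeBSD analogue of Linux's blockdev --setra; the values below are a blind sweep, not known-good numbers for this card:

    # check the current cluster read-ahead setting
    sysctl vfs.read_max
    # sweep it upward, re-running the benchmark after each change
    # (16/32/64 are guesses, not values I know to be right for the 9500S)
    sysctl vfs.read_max=16
    sysctl vfs.read_max=32
    sysctl vfs.read_max=64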
Michael Meltzer wrote:

> I have a 3ware 9500S-4LP controller with 4 10,000 RPM Raptors hooked up
> to it, in a 0+1 configuration, on a dual AMD 64-bit processor box.
>
> This hardware setup had SuSE 9.1 running on it for a few days. One of
> the issues I had was that the controller seemed "slow". After reading
> the 3ware white paper on tuning for 2.6, the issue seemed to be buffer
> read-ahead; i.e. blockdev --setra 16384 /dev/sda was needed for any
> kind of read speed. Some quick benchmarking under Bonnie++ took
> sequential read speeds from the mid 40s to 105 MB/sec, while writes
> remained around 98 MB/sec.
>
> Now the problem: I loaded 5.3, cvsup'ed and built FreeBSD 5.3-STABLE,
> same hardware, and the controller is feeling "slow" again. I tried to
> play with the vfs parameters (vfs.read_max, after some googling
> around). I could not find much information (other than the Handbook)
> about the vfs parameters and was unable to increase the speed. Can
> anyone shed some light? Suggestions? Insight? Grateful for any help.
>
> Here is an iozone report, pretty close to the Linux bonnie++ numbers
> (sorry, the bonnie run failed), to give you all an idea what's up.
> Exact same hardware; the only change was OS and filesystem. Thank you.
>
> MJM
>
> iozone -s 20480m -r 60 -i 0 -i 1 -t 1
>     Iozone: Performance Test of File I/O
>             Version $Revision: 3.196 $
>             Compiled for 64 bit mode.
>             Build: freebsd
>
>     Contributors: William Norcott, Don Capps, Isom Crawford, Kirby Collins,
>                   Al Slater, Scott Rhine, Mike Wisner, Ken Goss,
>                   Steve Landherr, Brad Smith, Mark Kelly, Dr. Alain CYR,
>                   Randy Dunlap, Mark Montague, Dan Million,
>                   Jean-Marc Zucconi, Jeff Blomberg.
>
>     Run began: Mon Dec 20 21:03:36 2004
>
>     File size set to 20971520 KB
>     Record Size 60 KB
>     Command line used: iozone -s 20480m -r 60 -i 0 -i 1 -t 1
>     Output is in Kbytes/sec
>     Time Resolution = 0.000001 seconds.
>     Processor cache size set to 1024 Kbytes.
>     Processor cache line size set to 32 bytes.
>     File stride size set to 17 * record size.
>     Throughput test with 1 process
>     Each process writes a 20971520 Kbyte file in 60 Kbyte records
>
>     Children see throughput for 1 initial writers = 78738.67 KB/sec
>     Parent sees throughput for 1 initial writers  = 78716.55 KB/sec
>     Min throughput per process                    = 78738.67 KB/sec
>     Max throughput per process                    = 78738.67 KB/sec
>     Avg throughput per process                    = 78738.67 KB/sec
>     Min xfer                                      = 20971500.00 KB
>
>     Children see throughput for 1 rewriters       = 32126.46 KB/sec
>     Parent sees throughput for 1 rewriters        = 32125.77 KB/sec
>     Min throughput per process                    = 32126.46 KB/sec
>     Max throughput per process                    = 32126.46 KB/sec
>     Avg throughput per process                    = 32126.46 KB/sec
>     Min xfer                                      = 20971500.00 KB
>
>     Children see throughput for 1 readers         = 58563.70 KB/sec
>     Parent sees throughput for 1 readers          = 58557.14 KB/sec
>     Min throughput per process                    = 58563.70 KB/sec
>     Max throughput per process                    = 58563.70 KB/sec
>     Avg throughput per process                    = 58563.70 KB/sec
>     Min xfer                                      = 20971500.00 KB
>
>     Children see throughput for 1 re-readers      = 58583.77 KB/sec
>     Parent sees throughput for 1 re-readers       = 58581.98 KB/sec
>     Min throughput per process                    = 58583.77 KB/sec
>     Max throughput per process                    = 58583.77 KB/sec
>     Avg throughput per process                    = 58583.77 KB/sec
>     Min xfer                                      = 20971500.00 KB
>
> iozone test complete.
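P.S. If anyone wants a data point, a quick sanity check might separate the driver/controller read-ahead from the filesystem: since FreeBSD 5 disk device nodes are raw, a read straight off /dev/da0 bypasses the filesystem entirely. The count below is a guess sized to blow past the controller's 128 MB cache, and /home/bigfile is just a placeholder for any large existing file:

    # raw sequential read straight off the array, no filesystem involved
    dd if=/dev/da0 of=/dev/null bs=1m count=4096
    # roughly the same volume through the filesystem, for comparison
    # (/home/bigfile is a placeholder, not a file that exists here)
    dd if=/home/bigfile of=/dev/null bs=1m

If the raw read hits 100+ MB/s and the file read does not, the knob is somewhere in the VFS/filesystem; if both are slow, it is the twa driver or the card itself.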