On Fri, May 06, 2005 at 11:35:29AM -0700, Kris Kennaway wrote:

> I might be bumping into the bandwidth of md here - when I ran less
> rigorous tests with lower concurrency of extractions I seemed to be
> getting marginally better performance (about an effective concurrency
> of 2.2 for both 3 and 10 simultaneous extractions - so at least it
> doesn't seem to degrade badly). Or this might be reflecting VFS lock
> contention (which there is certainly a lot of, according to mutex
> profiling traces).

I suspect that I am hitting the md bandwidth:

# dd if=/dev/zero of=/dev/md0 bs=1024k count=500
500+0 records in
500+0 records out
524288000 bytes transferred in 9.501760 secs (55177988 bytes/sec)

which is a lot worse than I expected (even for a 400MHz CPU). For some
reason I get better performance writing to a filesystem mounted on this
md:

# dd if=/dev/zero of=foo bs=1024k count=500
500+0 records in
500+0 records out
524288000 bytes transferred in 7.943042 secs (66005946 bytes/sec)
# rm foo
# dd if=/dev/zero of=foo bs=1024k count=500
500+0 records in
500+0 records out
524288000 bytes transferred in 7.126929 secs (73564364 bytes/sec)
# rm foo
# dd if=/dev/zero of=foo bs=1024k count=500
500+0 records in
500+0 records out
524288000 bytes transferred in 7.237668 secs (72438804 bytes/sec)

If the write bandwidth is only 50-70MB/sec, then it won't be hard to
saturate, so I won't probe the full scalability of mpsafevfs here.

Kris
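
For reference, a minimal sketch of how such a test rig might be set up
on FreeBSD (not taken from the post above: the md unit number, size,
mount point, and tarball path are all assumptions):

# mdconfig -a -t swap -s 1g -u 0
# dd if=/dev/zero of=/dev/md0 bs=1024k count=500
# newfs /dev/md0
# mount /dev/md0 /mnt

The raw-device dd is run before newfs, since writing to /dev/md0
directly would clobber the filesystem. A concurrent-extraction test
along the lines of the one described could then be a small script, run
as "time sh extract_bench.sh"; effective concurrency is roughly
(10 x single-extraction wall time) / batch wall time:

    #!/bin/sh
    # Hypothetical benchmark: run 10 tar extractions in parallel on the
    # md-backed filesystem and wait for all of them to finish.
    cd /mnt || exit 1
    for i in $(jot 10); do
        mkdir -p job$i
        # The tarball path is a placeholder, not from the original post.
        ( cd job$i && tar xf /path/to/test.tar ) &
    done
    wait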