On 19.12.2011 17:36, Michael Reifenberger wrote:
> Hi,
> a quick test using `dd if=/dev/zero of=/test ...` shows:
>
> dT: 10.004s  w: 10.000s  filter: ^a?da?.$
>  L(q)  ops/s    r/s   kBps   ms/r    w/s   kBps   ms/w   %busy Name
>     0    378      0      0   12.5    376  36414   11.9   60.6| ada0
>     0    380      0      0   12.2    378  36501   11.8   60.0| ada1
>     0    382      0      0    7.7    380  36847   11.6   59.2| ada2
>     0    375      0      0    7.4    374  36164    9.6   51.3| ada3
>     0    377      0      1   10.2    375  36325   10.1   53.3| ada4
>    10    391      0      0   39.3    389  38064   15.7   80.2| ada5
>
> Seems to be sufficiently equally distributed for a live system...

Hi Michael,

in an earlier reply I mentioned the suspicious queue length and %busy of ada5, which may be the result of other load (not caused by the dd command) or of a hardware problem (I'd check drive health ...).

(Hmmm, the numbers look strange: ops/s is not the sum of r/s and w/s, but misses that value by 2. I could understand a rounding difference of 1, but not 2 counts per second. But this is a different issue ...)

Anyway: the imbalance that I observe on my system is specific to reads, not writes. Could you please check whether sending a large (multi-GB) file to /dev/null shows identical read load over all drives? I suspect that 2 of the drives will see slightly (some 20%, perhaps) fewer read requests than the rest. (A sketch of such a read test follows below.)

Regards, STefan
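A minimal sketch of the suggested read test, assuming the file written by the dd run above (/test) still exists and is larger than RAM, so that the reads actually hit the disks rather than the cache (the block size is illustrative; the gstat filter is the one from the output above):

    # generate pure read load by streaming the file to /dev/null
    dd if=/test of=/dev/null bs=1m

    # in a second terminal, watch the per-drive read statistics (r/s, kBps)
    gstat -f '^a?da?.$'

If the reads are unevenly distributed as suspected, two of the drives should show noticeably lower r/s values than the rest.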