In the last episode (Jul 21), jesk said:
> I figured out that the performance loss only really occurs when the
> process is writing heavily to the filesystem. "dd if=/dev/zero
> of=/dev/null bs=128k" doesn't hurt the response time of parallel
> processes much, but when dd operates on the filesystem with of=foo,
> every process's execution time is affected. A simple ps or ls run
> while dd is writing to the disk will hang for dozens of seconds.

Ah, now that's a different story. You're out of the control of the
process scheduler and into the disk. I don't suppose you're using an
IDE/ATA disk with no tagged queueing? :)  Run "dmesg | grep
depth.queue" to see how many requests can be queued up on your disk
at once.

That dd is stuffing lots of dirty data into the disk cache, and all
the other processes have to wait in line to get their I/Os done.
You'll see much better results from a SCSI disk (with usual queue
depths between 32 and 64), and even better results from a multi-disk
hardware RAID array (which will have a large write cache).

-- 
	Dan Nelson
	dnelson_at_allantgroup.com

Received on Wed Jul 21 2004 - 02:48:19 UTC
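A rough way to reproduce the comparison described above, sketched from
the commands mentioned in the thread. The scratch file name "foo", the
count values, and the choice of ls as the probe command are just
illustrative assumptions; the exact dmesg wording depends on the disk
driver.

    # How many tagged requests can the disk queue at once?
    dmesg | grep depth.queue

    # Writing to the null device only exercises the CPU/scheduler;
    # other processes should stay responsive.
    dd if=/dev/zero of=/dev/null bs=128k count=8192

    # Writing to a real file dirties the buffer cache and contends
    # for the disk queue; interactive commands now stall behind it.
    dd if=/dev/zero of=foo bs=128k count=8192 &
    time ls /usr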