Re: I/O or Threading Suffer

From: Robert Watson <rwatson_at_freebsd.org>
Date: Tue, 20 Jul 2004 10:21:40 -0400 (EDT)
On Tue, 20 Jul 2004, jesk wrote:

> i have tested it with /dev/zero, /dev/random and /dev/urandom; the same
> everywhere.  as i mentioned, i noticed the problem while mysqldump was
> dumping the mysql database overnight.  in this timespan the mysqld didn't
> respond to anything.  after the dump (3 minutes), mysqld was back and
> everything worked again.  i have tested it with 2 UP boxes (i386/pIII)
> without HTT.

Ok, so here's what I see, and the hardware configuration is somewhat
different.  I'm running with the network stack Giant-free on a
dual-processor Xeon box, with a parallel threaded benchmark against a
MySQL server.  I'm using libpthread in M:N mode, without the flag for
system scope threads, and running with the 4BSD scheduler, which typically
yields greater throughput in my benchmark (though it is generally thought
less graceful at elevating interactive processes under load).
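For anyone trying to reproduce this configuration, the relevant knobs are
roughly as follows (a sketch from memory of the FreeBSD 5.x era; the
scheduler is selected at kernel build time, and the environment variable
shown is how libpthread's scope default could be overridden):

```
# Kernel config fragment: scheduler chosen at compile time
options         SCHED_4BSD      # scheduler used in this run
#options        SCHED_ULE       # the alternative

# libpthread defaults to M:N scope threads; system scope can be forced
# per-process via the environment (not done for this run):
#   LIBPTHREAD_SYSTEM_SCOPE=yes ./benchmark
```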

Under "normal" circumstances on this box, the benchmark yields about 7000
transactions a second in this configuration, data set, etc.  I ran the
benchmark in a loop via sshd on one terminal, and ran dd from /dev/urandom
to /dev/null on the serial console.  The benchmark is configured to
generate a transactions/sec stream for runs of 1000 transactions using 11
clients: 

6972.90
7070.46
6971.63
6998.54
7043.21
397.08		<- dd starts about here
357.18
647.10
379.39

So there's certainly a substantial performance impact.  I guess the
question is how we reach that level of impact, and what performance under
these circumstances would be "reasonable".  It does seem "undue" by
reasonable expectations.  I notice that ssh interactivity is also severely
hampered in this configuration.
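For reference, the console load generator was just dd copying from
/dev/urandom to /dev/null.  A count-limited sketch of it (the block size
is my assumption, matching the /dev/zero run quoted below; the original
dd ran unbounded until interrupted):

```shell
# Count-limited so it terminates; the dd in the benchmark run was unbounded.
dd if=/dev/urandom of=/dev/null bs=128k count=64
```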

When I run in a 1 thread configuration, I get about 5100-5800
transactions/sec (much higher variance).  When I kick in the dd session, I
observe similar problems: 

5339.64
5457.56
5044.89
5417.26
35.75		<-- guess what started running about here
21.71
17.22

I suspect using KTR to trace scheduling would elucidate things quite a
bit.  I'm not sure how you feel about tracing kernel scheduling, but the
ktr(9) and ktrdump(8) man pages aren't too terrible.
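A sketch of what that tracing looks like (the kernel options and mask
handling are from memory; check sys/ktr.h on your tree for the actual
KTR_SCHED bit value):

```
# Kernel must be built with KTR support, e.g.:
#   options KTR
#   options KTR_COMPILE=(KTR_SCHED)

# At runtime, enable scheduler tracing (substitute the numeric value of
# KTR_SCHED from sys/ktr.h):
sysctl debug.ktr.mask=<KTR_SCHED>

# ...run the workload, then dump the trace buffer with timestamps:
ktrdump -t -e /boot/kernel/kernel -m /dev/mem
```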

An interesting comparison here is /dev/zero as a source vs /dev/random as
a source.  In the /dev/zero case, I see some performance hit, but not
nearly as much.  Here's the 11 client version of the benchmark leading up
to "dd if=/dev/zero of=/dev/zero bs=128k":

6993.62
7013.36
7128.19
4505.15		<-- dd starts 
3689.62
4349.18

The primary difference between /dev/zero and /dev/random is that
/dev/random does a bunch of computation on behalf of the process, while
/dev/zero just copies data to userspace.  So it sounds like when a
thread/process is using a lot of CPU in the kernel, it's starving other
processes more than it should.  Mark -- how much computation is being done
here -- would it be worth dropping the Giant lock during that computation
so that the thread can yield without generating a priority inversion?
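The suggestion above would look something like the following inside the
/dev/random read path.  This is a non-compilable sketch using FreeBSD's
DROP_GIANT()/PICKUP_GIANT() macros from mutex(9); the function and helper
names here are invented for illustration:

```c
/*
 * Hypothetical sketch: release Giant around the CPU-heavy entropy
 * computation so other threads can run, then reacquire it before
 * returning to Giant-protected code.  Names are made up.
 */
static int
random_read_sketch(struct uio *uio)
{
	int error;

	DROP_GIANT();		/* release Giant before the heavy crunch */
	error = generate_entropy_output(uio);	/* hypothetical CPU-bound work */
	PICKUP_GIANT();		/* reacquire; must pair with DROP_GIANT() */

	return (error);
}
```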

Robert N M Watson             FreeBSD Core Team, TrustedBSD Projects
robert_at_fledge.watson.org      Principal Research Scientist, McAfee Research
Received on Tue Jul 20 2004 - 12:22:15 UTC
