I/O Benchmarking [Re: FreeBSD 6 is coming too fast]

From: Scott Long <scottl_at_samsco.org>
Date: Sun, 24 Apr 2005 22:20:20 -0600
Julian Elischer wrote:

> Kris Kennaway wrote:
> 
>> Measuring disk device performance (i.e. running a benchmark against
>> the bare device) and filesystem performance (writing to a filesystem
>> on the device) are very different things.
> 
> 
> I wish people would stop trying to deny that we have serious work in 
> front of us to get the VFS and disk IO figures back to where they were 
> before.
> 
> there ARE slowdowns, and I have seen them both with tests on the basic 
> hardware and through the filesystems.  I don't know why this surprises 
> people, because we still have a lot of work to do in the interrupt 
> latency field, for example, and I doubt that even PHK would say that 
> there is no work left to do in geom.
> Where we are now is closing in on "feature complete".  Now we need to 
> profile and optimise.

You are absolutely right.  However, Kris is also absolutely right that 
I/O is hard to profile.  These two statements are not mutually 
exclusive.  What I don't want is for people to run around quoting how 
fast bonnie benchmarks their memory controller; that does nothing to 
characterize the I/O subsystem.  Here is what I want to see happen:

Step 1: Get the easy crap out of the way
   Step 1a: Figure out a reliable way to measure sequential read/write
     performance of just the driver and the hardware.  Prove that no
     needless copy operations are going on.
   Step 1b: Measure sequential read/write through the buffer cache.
     Prove that no needless copy operations are going on.
   Step 1c: Measure sequential read/write through the syscall layer.
     Prove that no needless copy operations are going on.  (A rough
     timing sketch for these three layers follows this list.)
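
To make Step 1 a bit more concrete, here is a rough sketch of the kind of
harness I have in mind: a small userland C program that times sequential
reads from a path, which could be a raw device node for 1a (e.g. /dev/da0;
the path is only an example) or a file on a mounted filesystem for 1b/1c.
The 1 MB buffer and the ~1 GB cap are arbitrary assumptions, and a real
harness would also control caching effects, alignment, and repeat runs.

/*
 * Minimal sketch, not a real benchmark: time sequential reads from a
 * path (raw device node or regular file) and report MB/s.  Buffer
 * size and total byte cap are arbitrary; caching is not controlled.
 */
#include <sys/types.h>

#include <err.h>
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>

#define BUFSIZE (1024 * 1024)       /* 1 MB per read; an assumption */
#define LIMIT   (1024LL * BUFSIZE)  /* stop after ~1 GB; an assumption */

int
main(int argc, char **argv)
{
    if (argc != 2)
        errx(1, "usage: %s <device-or-file>", argv[0]);

    int fd = open(argv[1], O_RDONLY);
    if (fd < 0)
        err(1, "open %s", argv[1]);

    char *buf = malloc(BUFSIZE);
    if (buf == NULL)
        err(1, "malloc");

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);

    /* Read sequentially until EOF or the byte cap is reached. */
    ssize_t n = 0;
    off_t total = 0;
    while (total < LIMIT && (n = read(fd, buf, BUFSIZE)) > 0)
        total += n;
    if (n < 0)
        err(1, "read");

    clock_gettime(CLOCK_MONOTONIC, &t1);
    double secs = (t1.tv_sec - t0.tv_sec) +
        (t1.tv_nsec - t0.tv_nsec) / 1e9;

    printf("%jd bytes in %.3f s = %.1f MB/s\n",
        (intmax_t)total, secs, total / secs / (1024 * 1024));

    free(buf);
    close(fd);
    return (0);
}

Run it once against the raw device and once against a file of comparable
size on the filesystem; the gap between those two numbers is what Steps
1b and 1c have to explain.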

The only thing that should affect sequential speed is the speed of the
hardware, buses, and memory controller.  If anything in the OS is
standing in the way, we need to weed it out.  Then we need to forget
about sequential performance; that's the realm of unscrupulous IDE
marketeers and linnex kiddies.  I don't really care how fast we can copy
20 bazillion terabytes of 0's, I want to know how many transactions per
second our databases can do, how scalable our mail servers are, etc.
None of that has anything to do with sequential I/O performance.

Step 2: Do the real work
   Step 2a: Measure transaction latency and throughput through the
     driver and hardware.  Profile lock contention.  Measure interrupt
     latency.  (A toy latency probe is sketched after this list.)
   Step 2b: Measure VFS latency in both specfs and ufs.  Profile lock
     contention and usage.  Compare contention against the driver.
   Step 2c: Profile the buffer cache.  Are pages being cached
     effectively?  Are they being cleaned efficiently?  Can we fix the
     "lemming syncer"?  Measure lock contention.

I imagine that this is where the real work is.  And lots of it.

Step 3: Beat the benchmarks
   Step 3a: Figure out what is being tested and how.  How do the data
     sources and sinks work?  Are other OS's beating the benchmarks by
     cheating here?  Can we create our own high-speed sources and sinks?
     (A toy source/sink ceiling test is sketched below.)
   Step 3b: ???
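
As a toy illustration for Step 3a: before crediting or blaming the I/O
path at all, it helps to know how fast the benchmark's own data source can
run with no disk involved.  The sketch below just pushes a pre-filled
buffer into /dev/null; the chunk and total sizes are arbitrary
assumptions.  Any number an "I/O" benchmark reports above that ceiling is
measuring something other than storage.

/*
 * Toy source/sink ceiling test: stream a pre-filled buffer into
 * /dev/null and report the resulting "throughput".  No storage is
 * touched; this is the upper bound set by the data source itself.
 */
#include <err.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>
#include <unistd.h>

#define CHUNK   (1024 * 1024)       /* 1 MB chunks; an assumption */
#define TOTAL   (1024LL * CHUNK)    /* ~1 GB total; an assumption */

int
main(void)
{
    int fd = open("/dev/null", O_WRONLY);
    if (fd < 0)
        err(1, "open /dev/null");

    char *buf = malloc(CHUNK);
    if (buf == NULL)
        err(1, "malloc");
    memset(buf, 'x', CHUNK);    /* non-zero data, like a real source */

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);

    for (long long done = 0; done < TOTAL; done += CHUNK)
        if (write(fd, buf, CHUNK) != CHUNK)
            err(1, "write");

    clock_gettime(CLOCK_MONOTONIC, &t1);
    double secs = (t1.tv_sec - t0.tv_sec) +
        (t1.tv_nsec - t0.tv_nsec) / 1e9;

    printf("source/sink ceiling: %.1f MB/s\n",
        TOTAL / secs / (1024.0 * 1024.0));

    free(buf);
    close(fd);
    return (0);
}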

This is important even if it does sound slimy.  We could have a real
kick-ass I/O subsystem, but be beaten by a Linux rig that fools the
benchmarks.  Kinda like how Intel has benchmarks that show that their
Pentium 4 line is the fastest thing on the planet, when in reality it's
just smoke and mirrors.  Gotta win the PR race here.

Scott