Re: low(er) disk performance with sched_4bsd than with sched_ule

From: Scott Long <scottl_at_samsco.org>
Date: Sat, 17 Sep 2005 20:20:37 -0600
Andrew Gallatin wrote:
> Oliver Lehmann writes:
>  > Joseph Koshy wrote:
>  > 
>  > > ol> Wow, that update to BETA4 did the trick! While running 
>  > > ol> SCHED_4BSD:
>  > > 
>  > > Fantastic!  What is the profile like with the new 4BSD kernel?
>  > 
>  > http://pofo.de/tmp/gprof.4bsd.3
> 
> I don't know the disk codepath very well, but the samples look a
> little suspect.  We're copying a lot of data into and out of the
> kernel, so I would expect the majority of non-disk-wait time would be
> spent simply copying out the zero-filled pages, and copying them back
> in (AFAIK, dd uses read/write).  Where is the time spent in read,
> write, uiomove, bcopy?
> 
> What about inode allocations, etc.?  And why do things like
> g_bsd_modify and g_bsd_ioctl rank so high?  Aren't those only used
> when dealing with disklabels?
> 
> BTW, I *love* that we've got access to the hw counters, and an easy
> way to do low-overhead profiling of the kernel.
> 
> Drew
> 
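
For reference, a gprof-style kernel profile like the one Oliver posted above
is typically gathered with hwpmc(4) and pmcstat(8) along these lines; the
event name, duration, and file names below are only examples, not necessarily
what he actually used:

  # sample system-wide on a hardware event while a workload runs;
  # the event name is CPU-dependent, this one is just illustrative
  pmcstat -S p4-global-power-events -O /tmp/samples.out sleep 60

  # post-process the sample log into gprof(1)-compatible profiles
  # for the running kernel
  pmcstat -R /tmp/samples.out -k /boot/kernel/kernel -g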

I don't know if it's the case here or not, but GCC now does very
aggressive function inlining, so much so that it's nearly impossible
to look at a backtrace and figure out what the actual call path was.
Compiling with -O instead of the -O2 default turns off this 'feature'
(and I use that term quite liberally), so it might be useful to
recompile the kernel with 'CFLAGS= -O' in /etc/make.conf and see
if it changes the profiling numbers at all.
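
Roughly, that would look like this (GENERIC is just a placeholder for
whatever kernel config Oliver is actually using; COPTFLAGS is included as
well, since that is the variable the kernel build normally picks up for its
optimization level):

  # /etc/make.conf -- drop from -O2 to -O so GCC does far less inlining
  CFLAGS= -O
  COPTFLAGS= -O

  # then rebuild and reinstall the kernel, and reboot
  cd /usr/src
  make buildkernel KERNCONF=GENERIC
  make installkernel KERNCONF=GENERIC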

Also, I think that there was some talk last year about things like
preemption and fast interrupts screwing up certain kinds of profiling.
I don't recall if there was a solution to this, though.

Scott
Received on Sun Sep 18 2005 - 00:20:41 UTC