Re: 4BSD/ULE numbers...

From: Emanuel Strobl <Emanuel.strobl_at_gmx.net>
Date: Tue, 27 Sep 2005 01:04:40 +0200
On Tuesday, 27 September 2005 00:37 CEST, David Xu wrote:
[...]
> I am fiddling with it, although I don't know when I can finish.
> In fact, the ULE code in my perforce branch has the same performance
> as 4BSD, at least on my dual PIII machine. The real advantage is
> that ULE can be HTT friendly if it is done correctly, for example
> physical/logical CPU balance: if the system has two HTT-enabled
> physical CPUs and two CPU-hog threads, you definitely want the two
> threads to run on the two physical CPUs, not on the same physical
> CPU.
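
Just to spell out the balance David describes, here is a toy sketch in 
plain C (a hypothetical 2-physical/4-logical topology with siblings 
paired 0/1 and 2/3; this is not the actual ULE code): a new CPU hog 
should land on a logical CPU whose HTT sibling is idle before two hogs 
end up sharing one physical core.

/*
 * Toy illustration only: pick a logical CPU for a new CPU-hog thread,
 * preferring a core whose hyperthread sibling is idle.
 */
#include <stdio.h>

#define NLOGICAL   4
#define SIBLING(c) ((c) ^ 1)      /* assume siblings are 0/1 and 2/3 */

static int load[NLOGICAL];        /* runnable hog threads per logical CPU */

static int
pick_cpu(void)
{
        int c, best = 0;

        /* First choice: an idle logical CPU whose sibling is also idle. */
        for (c = 0; c < NLOGICAL; c++)
                if (load[c] == 0 && load[SIBLING(c)] == 0)
                        return (c);
        /* Next: any idle logical CPU. */
        for (c = 0; c < NLOGICAL; c++)
                if (load[c] == 0)
                        return (c);
        /* Last resort: the least loaded logical CPU. */
        for (c = 1; c < NLOGICAL; c++)
                if (load[c] < load[best])
                        best = c;
        return (best);
}

int
main(void)
{
        int i, c;

        /* Two CPU hogs should end up on different physical CPUs. */
        for (i = 0; i < 2; i++) {
                c = pick_cpu();
                load[c]++;
                printf("hog %d -> logical cpu %d (physical cpu %d)\n",
                    i, c, c / 2);
        }
        return (0);
}

With this heuristic the first hog lands on logical CPU 0 (physical 0) 
and the second on logical CPU 2 (physical 1), instead of both stacking 
up on the same physical package.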

I'm sure ULE is on its way to becoming our preferred scheduler, especially 
on MP machines, where it's probably already superior. I don't really care 
much about the small differences in the bonnie++ or flops benchmark 
results, nor about the small timing differences, but I'm astonished by the 
really big gap between the "make configure" timings of ULE and 4BSD (on my 
Tualatin UP).
The difference is really enormous (samba.configure.bsd.time compared to 
samba.configure.ule.time == 3m15s <-> 5m30s), and there's still a thing I 
observed some years ago (about two, when I ran SETI@home in the 
background): ULE isn't "nice" friendly, meaning other applications suffer 
from niced processes much more than under 4BSD. Ideally, in my dreams, no 
other process would lose performance because of any "niced" process. 
Watch the samba.configure.ule.nonice.time -> samba.configure.ule.time 
results; they're nearly identical...
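
To make the "nice friendly" point measurable, something like the 
following sketch is what I have in mind (a hypothetical little test 
program, not one of the benchmarks above): time a fixed chunk of CPU 
work alone, then again next to a nice +20 busy loop. Under an ideal 
scheduler the two wall-clock times would be nearly the same.

/*
 * Sketch of a nice-friendliness check: compare the wall-clock time of
 * a fixed CPU-bound job with and without a nice +20 hog running.
 */
#include <sys/types.h>
#include <sys/wait.h>
#include <err.h>
#include <signal.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

static double
work(void)                              /* fixed amount of CPU work */
{
        volatile double x = 0.0;
        struct timespec t0, t1;
        long i;

        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (i = 0; i < 200000000L; i++)
                x += i * 0.5;
        clock_gettime(CLOCK_MONOTONIC, &t1);
        return ((t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9);
}

int
main(void)
{
        pid_t hog;

        printf("alone:         %.2f s\n", work());

        if ((hog = fork()) == -1)
                err(1, "fork");
        if (hog == 0) {                 /* nice +20 background busy loop */
                nice(20);
                for (;;)
                        ;
        }
        printf("with nice hog: %.2f s\n", work());

        kill(hog, SIGKILL);
        waitpid(hog, NULL, 0);
        return (0);
}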

But that's the point where I have to leave this discussion; my knowledge is 
very limited in that area, so I just wanted to give info/hints to help the 
gurus improve the best. The better is the best's enemy... ;)
And I hope I can help with "real world" tests to see ULE outperform 4BSD 
even on UP machines with bonnie++ (where I see the second significant 
difference).

Best regards!

-Harry

> but currently it is not. Another advantage will come when sched_lock is
> pushed down; the current sched_lock is a Giant-like lock contended
> between a large number of CPUs. I also don't know when sched_lock will
> be pushed down; sched_lock is abused in many places where it really
> could be replaced by another spin lock. :)
>
> David Xu
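
For illustration only, the kind of change David hints at might look 
like the kernel-side fragment below (not a complete, loadable piece of 
code; "foo" is a made-up subsystem name): give the data its own spin 
mutex instead of piggybacking on the global sched_lock.

/*
 * Sketch: a private spin mutex for a hypothetical "foo" subsystem,
 * replacing a gratuitous use of the global sched_lock.
 */
#include <sys/param.h>
#include <sys/kernel.h>
#include <sys/lock.h>
#include <sys/mutex.h>

static struct mtx foo_lock;
MTX_SYSINIT(foo_lock, &foo_lock, "foo spin lock", MTX_SPIN);

static int foo_count;           /* datum sched_lock used to cover */

void	foo_bump(void);

void
foo_bump(void)
{
        /* was: mtx_lock_spin(&sched_lock); ... mtx_unlock_spin(&sched_lock); */
        mtx_lock_spin(&foo_lock);
        foo_count++;
        mtx_unlock_spin(&foo_lock);
}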

Received on Mon Sep 26 2005 - 21:04:55 UTC
