Re: SCHED_ULE should not be the default

From: Lars Engels <lars.engels_at_0x20.net>
Date: Mon, 12 Dec 2011 17:13:08 +0100
Would it be possible to implement a mechanism that lets one change the scheduler on the fly? AFAIK, Solaris can do that.
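
As far as I know there is no such mechanism in FreeBSD today: the
scheduler is chosen at kernel build time ("options SCHED_ULE" or
"options SCHED_4BSD" in the kernel config), unlike Solaris' runtime
scheduling classes. A minimal sketch, assuming only the stock,
read-only kern.sched.name sysctl, that reports which scheduler the
running kernel was built with (it cannot switch schedulers at
runtime):

    /*
     * Sketch only: the scheduler is compiled into the kernel, so all
     * this does is read the read-only sysctl kern.sched.name to show
     * which one the running kernel was built with.
     */
    #include <sys/types.h>
    #include <sys/sysctl.h>
    #include <stdio.h>

    int
    main(void)
    {
            char name[32];
            size_t len = sizeof(name);

            if (sysctlbyname("kern.sched.name", name, &len, NULL, 0) == -1) {
                    perror("sysctlbyname");
                    return (1);
            }
            printf("active scheduler: %s\n", name);  /* e.g. "ULE" or "4BSD" */
            return (0);
    }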

_____________________________________________
From: Steve Kargl <sgk_at_troutmask.apl.washington.edu>
Sent: Mon Dec 12 16:51:59 CET 2011
To: "O. Hartmann" <ohartman_at_mail.zedat.fu-berlin.de>
CC: freebsd-performance_at_freebsd.org, Current FreeBSD <freebsd-current_at_freebsd.org>, freebsd-stable_at_freebsd.org
Subject: Re: SCHED_ULE should not be the default


On Mon, Dec 12, 2011 at 02:47:57PM +0100, O. Hartmann wrote:
> 
> > Not fully right, boinc defaults to run on idprio 31 so this isn't an
> > issue. And yes, there are cases where SCHED_ULE shows much better
> > performance than SCHED_4BSD. [...]
> 
> Do we have any proof at hand for such cases where SCHED_ULE performs
> much better than SCHED_4BSD? Whenever the subject comes up, it is
> mentioned that SCHED_ULE has better performance on boxes with ncpu >
> 2. But in the end I see contradictory statements here. People
> complain about poor performance (especially in scientific
> environments), while others counter that this is not the case.
> 
> Within our department, we developed a highly scalable code for
> planetary-science work on imagery. It utilizes GPUs via OpenCL where
> present; otherwise it grabs as many cores as it can.
> By the end of this year I'll get a new desktop box based on Intel's
> new Sandy Bridge-E architecture with plenty of memory. If the
> colleague who developed the code is willing to run some benchmarks
> on the same hardware platform, we'll benchmark both FreeBSD 9.0/10.0
> and the most recent Suse. For FreeBSD I also intend to look at
> performance with both available schedulers.
> 

This comes up every 9 months or so, and must be approaching
FAQ status.

In an HPC environment, I recommend 4BSD. Depending on the
workload, ULE can cause a severe increase in turnaround time
for already long-running computations. If you have an MPI
application, simply launching more than ncpu+1 jobs can show
the problem.
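
A minimal sketch of that kind of oversubscription test (not the
actual application discussed here; the compute loop and iteration
count are arbitrary, and the build/run lines in the comment assume a
stock FreeBSD with an MPI implementation installed):

    /*
     * Each rank burns a fixed amount of CPU; rank 0 reports the total
     * turnaround time.  Launch it with more ranks than cores under
     * SCHED_ULE and under SCHED_4BSD and compare, e.g.:
     *
     *   mpicc -O2 spin.c -o spin
     *   mpirun -np $(( $(sysctl -n hw.ncpu) + 1 )) ./spin
     */
    #include <mpi.h>
    #include <stdio.h>

    int
    main(int argc, char **argv)
    {
            int rank, size;
            volatile double x = 0.0;  /* keep the loop from being optimized away */
            long i;

            MPI_Init(&argc, &argv);
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);
            MPI_Comm_size(MPI_COMM_WORLD, &size);

            double t0 = MPI_Wtime();

            /* Purely CPU-bound work; the iteration count is arbitrary. */
            for (i = 0; i < 400000000L; i++)
                    x += 1.0 / (double)(i + 1);

            /* Turnaround time is set by the slowest rank. */
            MPI_Barrier(MPI_COMM_WORLD);

            if (rank == 0)
                    printf("%d ranks finished in %.2f s\n", size,
                        MPI_Wtime() - t0);

            MPI_Finalize();
            return (0);
    }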

PS: search the list archives for "kargl and ULE".

-- 
Steve
_____________________________________________

freebsd-stable_at_freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscribe_at_freebsd.org"
Received on Mon Dec 12 2011 - 15:14:03 UTC
