Re: ULE 2.0

From: Jeff Roberson <jroberson_at_chesapeake.net>
Date: Thu, 4 Jan 2007 01:59:06 -0800 (PST)
On Thu, 4 Jan 2007, David Xu wrote:

> Jeff Roberson wrote:
>> Hello everyone,
>> 
>> After a considerable vacation from ULE I have come back to address some 
>> long standing concerns.  I felt that the old double-queue mechanism caused 
>> very unnatural behavior and have finally come up with something I'm happy 
>> to replace it with.  I've been working on this off and on for several 
>> months now.  Some details are below.  More are at:
>> http://jeffr-tech.livejournal.com/3729.html
>> 
>> The version now in CVS(1.172) should restore ULE's earlier interactive 
>> performance under load.  I have tested with a make -j128 kernel while using 
>> mozilla and while playing a dvd.  Neither ever skips for me.  nice now has a
>> more gradual effect than before.  It no longer allows the total starvation 
>> of processes.  ULE should also be very slightly faster on UP as compared to 
>> before.  SMP behavior should have changed very little although I did 
>> simplify some small parts of these algorithms.  In general, non-interactive 
>> tasks are scheduled much more intelligently although this may not be 
>> apparent under most workloads.
>> 
>> I'm hoping for the following types of feedback from anyone interested in 
>> testing:
>> 
>> 1)  Is the response to nice levels as you would hope?  I think nice +20 may 
>> not inhibit the nice'd thread enough at the moment.
>> 2)  Is the interactive performance satisfactory?
>> 3)  Is there any performance degradation for your common tasks?
>> 4)  Does the cpu estimator give reasonable results?  See %cpu in top.  It 
>> is expected that there will be periods where summing up all threads will 
>> yield slightly over 100% cpu.
>> 
>> Any and all feedback is welcome.  Please make sure any problem reports are 
>> sent to jroberson_at_chesapeake.net in the To: line so I see them more quickly.
>> 
>> Thanks,
>> Jeff
>
> I think this might not be the right way to work on the FreeBSD thread
> scheduler.  It is more important to work out a cpu dispatcher than to
> invent a dynamic priority algorithm to replace 4BSD's; the 4BSD dynamic
> priority algorithm is still the best one I can find, and it provides
> very good fairness.  The most important thing is a cpu dispatcher that
> knows how to place a thread on a cpu with affinity awareness, perhaps
> with multiple runqueues.  It should know the cpu topology, maybe be
> NUMA-aware, and maybe provide cpu partitions: root could create and
> destroy a partition, add a cpu to or remove a cpu from a partition,
> move a cpu from partition a to partition b, bind applications to a
> partition, etc.  On top of the cpu dispatcher there could be 4BSD or
> another dynamic priority algorithm, but that is less important.  With
> this in mind, I am going to remove sched_core, as I found the cpu
> dispatcher is the key thing.
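
To make the proposal above concrete, the following is a minimal,
hypothetical C sketch of the kind of cpu-partition/dispatcher interface
being described.  The names (cpu_part, cpu_dispatch, and so on) are
illustrative only and do not correspond to any existing FreeBSD API:

    /*
     * Hypothetical sketch, not FreeBSD code: a cpu partition owns a set
     * of cpus; the dispatcher places a thread on a cpu within its
     * partition, preferring the cpu it last ran on (affinity) and
     * otherwise the least-loaded member cpu.
     */
    #include <stdint.h>
    #include <stdio.h>

    #define MAXCPU 8

    struct cpu_part {
            uint64_t cpumask;       /* which cpus belong to this partition */
            int      load[MAXCPU];  /* runnable threads per member cpu */
    };

    /* Administrative operations root would use (declarations only). */
    struct cpu_part *cpu_part_create(void);
    void             cpu_part_destroy(struct cpu_part *cp);
    int              cpu_part_add_cpu(struct cpu_part *cp, int cpu);
    int              cpu_part_remove_cpu(struct cpu_part *cp, int cpu);

    /* Pick a cpu for a thread whose previous cpu was 'last'. */
    static int
    cpu_dispatch(struct cpu_part *cp, int last)
    {
            int cpu, best = -1;

            /* Affinity: start from the thread's previous cpu if it is
             * still a member of this partition. */
            if (last >= 0 && (cp->cpumask & (1ULL << last)) != 0)
                    best = last;
            /* Migrate only to a cpu that is clearly less loaded. */
            for (cpu = 0; cpu < MAXCPU; cpu++) {
                    if ((cp->cpumask & (1ULL << cpu)) == 0)
                            continue;
                    if (best < 0 || cp->load[cpu] + 1 < cp->load[best])
                            best = cpu;
            }
            return (best);
    }

    int
    main(void)
    {
            struct cpu_part p = { .cpumask = 0x0f, .load = { 3, 1, 0, 2 } };

            printf("dispatch to cpu %d\n", cpu_dispatch(&p, 0));
            return (0);
    }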

David, I share some of your sentiment.  It has been very hard to match the 
4BSD fairness algorithm.  However, I do believe we can get there with ULE. 
Already ULE has had better behavior under extreme load for desktop 
applications.  If I can get the nice and non-interactive timeshare behavior
worked out, then it will definitely be better in this regard.
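
For reference, the 4BSD fairness in question comes from its decay-based
priority recomputation.  The sketch below follows the classic 4.4BSD
textbook description; the constants and scaling are simplified and are not
taken from the actual sched_4bsd.c:

    #include <stdio.h>

    #define PUSER 50        /* illustrative base user priority */

    /*
     * Roughly once per second, 4BSD decays each thread's accumulated cpu
     * estimate (so recent usage dominates) and recomputes its user
     * priority from the estimate and the nice value.  Higher numbers
     * mean lower priority.
     */
    static int
    recompute_pri(int *estcpu, int loadav, int nice)
    {
            *estcpu = (2 * loadav * *estcpu) / (2 * loadav + 1) + nice;
            return (PUSER + *estcpu / 4 + 2 * nice);
    }

    int
    main(void)
    {
            int hog = 100, idle = 0;

            /* A cpu hog's priority number is pushed well above the base,
             * while an idle thread stays near it. */
            printf("hog pri = %d, idle pri = %d\n",
                recompute_pri(&hog, 1, 0), recompute_pri(&idle, 1, 0));
            return (0);
    }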

The other things you mention, cpu affinity, cpu topologies, etc., are all
things that ULE already supports.  Admittedly they still need work, but
they do show benefits for many workloads today.  I feel
that working within the framework of the 4BSD scheduler would limit what 
we can do with these more complex scheduling problems.  That is why I 
started ULE in the first place.

I don't know how practical it is to try to separate the SMP scheduling
from the fairness and interactivity scheduling; in ULE they are somewhat
tightly integrated.  Still, I think it's good to experiment.  Perhaps you
could try forking the 4BSD scheduler and adding these SMP features if you
like.

Jeff

>
> Regards,
> David Xu
>
Received on Thu Jan 04 2007 - 09:00:41 UTC