Re: ULE 2.0

From: Jeff Roberson <jroberson_at_chesapeake.net>
Date: Thu, 4 Jan 2007 14:00:00 -0800 (PST)

On Thu, 4 Jan 2007, Scott Long wrote:

> David Xu wrote:
>> Jeff Roberson wrote:
>>> Hello everyone,
>>> 
>>> After a considerable vacation from ULE I have come back to address some 
>>> long standing concerns.  I felt that the old double-queue mechanism caused 
>>> very unnatural behavior and have finally come up with something I'm happy 
>>> to replace it with.  I've been working on this off and on for several 
>>> months now.  Some details are below.  More are at:
>>> http://jeffr-tech.livejournal.com/3729.html
>>> 
>>> The version now in CVS(1.172) should restore ULE's earlier interactive 
>>> performance under load.  I have tested with a make -j128 kernel while 
>>> using mozilla and while playing a dvd.  Neither ever skips for me.  nice 
>>> now has a more gradual effect than before.  It no longer allows the total 
>>> starvation of processes.  ULE should also be very slightly faster on UP as 
>>> compared to before.  SMP behavior should have changed very little although 
>>> I did simplify some small parts of these algorithms.  In general, 
>>> non-interactive tasks are scheduled much more intelligently although this 
>>> may not be apparent under most workloads.
>>> 
>>> I'm hoping for the following types of feedback from anyone interested in 
>>> testing:
>>> 
>>> 1)  Is the response to nice levels as you would hope?  I think nice +20 
>>> may not inhibit the nice'd thread enough at the moment.
>>> 2)  Is the interactive performance satisfactory?
>>> 3)  Is there any performance degradation for your common tasks?
>>> 4)  Does the cpu estimator give reasonable results?  See %cpu in top.  It 
>>> is expected that there will be periods where summing up all threads will 
>>> yield slightly over 100% cpu.
>>> 
>>> Any and all feedback is welcome.  Please make sure any problem reports are 
>>> sent to jroberson_at_chesapeake.net in the To: line so I see them more 
>>> quickly.
>>> 
>>> Thanks,
>>> Jeff
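
(Item 4 above concerns the %cpu estimator.  The following is a minimal
sketch, not ULE's actual code; the HZ value, the decay factor, and the
update cadence are all assumptions.  It shows a decaying per-thread
%cpu estimate: because each thread's estimate is refreshed on its own
schedule, the per-thread figures top displays are not all sampled at
the same instant, which is why their sum can briefly come out slightly
over 100%.)

/*
 * Minimal sketch of a decaying per-thread %cpu estimator (not ULE's
 * actual code).  Ticks charged since the last update are blended into
 * a running estimate with exponential decay.
 */
#include <stdio.h>

#define HZ        128   /* assumed stathz ticks per second */
#define DECAY_PCT 75    /* keep 75% of the old estimate each update */

struct thread_est {
        int ticks;      /* cpu ticks charged since the last update */
        int pctcpu;     /* current estimate, 0..100 */
};

/* Called roughly once per second for each thread. */
static void
update_estimate(struct thread_est *te)
{
        int recent = te->ticks * 100 / HZ;      /* %cpu over the last window */

        te->pctcpu = (te->pctcpu * DECAY_PCT +
            recent * (100 - DECAY_PCT)) / 100;
        te->ticks = 0;
}

int
main(void)
{
        struct thread_est te = { HZ, 0 };       /* one thread, fully busy */
        int i;

        for (i = 0; i < 8; i++) {
                update_estimate(&te);
                printf("update %d: %d%% cpu\n", i, te.pctcpu);
                te.ticks = HZ;                  /* it keeps running flat out */
        }
        return (0);
}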
>> 
>> I think this may not be the right way to work on the FreeBSD thread
>> scheduler.  It is more important to work out a cpu dispatcher than to
>> invent a new dynamic priority algorithm to replace 4BSD's; the 4BSD
>> dynamic priority algorithm is still the best one I can find, and it
>> provides very good fairness.  The most important thing is a cpu
>> dispatcher that knows how to place a thread on a cpu in a cpu
>> affinity-aware way, perhaps with multiple runqueues.  It should know
>> the cpu topology, perhaps be NUMA-aware, and perhaps provide cpu
>> partitions: root can create and destroy a partition, add a cpu to a
>> partition, remove a cpu from it, move a cpu from partition a to
>> partition b, bind applications to a partition, and so on.  On top of
>> the cpu dispatcher there could be 4BSD or another dynamic priority
>> algorithm, but that is less important than the dispatcher itself.
>> With this in mind, I am going to remove sched_core, as I found the
>> cpu dispatcher is the key thing.
>> 
>> Regards,
>> David Xu
>> 
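
(To make the partition idea concrete, here is a rough, purely
hypothetical sketch of such an interface; none of these names exist in
FreeBSD.  A partition owns a set of cpus and its own run queues, and
the dispatcher only places a bound thread on cpus inside its
partition; the priority algorithm, 4BSD or otherwise, then only picks
which of a partition's runnable threads goes next.)

/*
 * Hypothetical cpu-partition interface along the lines described
 * above; none of these names exist in FreeBSD.
 */
#include <sys/types.h>
#include <stdint.h>

typedef uint64_t cpumask_t;             /* one bit per cpu (assumed) */

struct cpu_partition {
        int       cp_id;                /* partition identifier */
        cpumask_t cp_cpus;              /* cpus owned by this partition */
        /* per-cpu run queues, topology and NUMA info would live here */
};

/* Management operations, root only. */
struct cpu_partition *cpupart_create(void);
void cpupart_destroy(struct cpu_partition *cp);
int  cpupart_add_cpu(struct cpu_partition *cp, int cpu);
int  cpupart_remove_cpu(struct cpu_partition *cp, int cpu);
int  cpupart_move_cpu(struct cpu_partition *from, struct cpu_partition *to,
        int cpu);

/*
 * Bind a process to a partition: the dispatcher then runs its threads
 * only on cpus in cp_cpus, preferring the cpu each thread last ran on
 * for affinity.
 */
int  cpupart_bind(pid_t pid, struct cpu_partition *cp);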
>
> It sounds like you want the Linux O(1) scheduler.  It would be very 
> interesting to see this applied to FreeBSD.

I looked at porting the Linux O(1) scheduler when I started ULE.  First, 
it wouldn't translate directly enough to FreeBSD to make a port 
worthwhile.  The two systems are simply too different.

Second, I know this will sound like a bit of BSD elitism, but it's 
true: FreeBSD users wouldn't have accepted the level of fairness and 
interactivity of the Linux scheduler.  I did a lot of benchmarking and 
analysis of it at the time, and they would certainly have scoffed at 
it, as ULE was ahead of it in that regard.  It may have come a long 
way since then; I haven't looked at it in years.

Jeff

>
> Scott