Alexandre "Sunny" Kovalenko wrote:
> On Fri, 2006-10-27 at 16:41 -0400, Daniel Eischen wrote:
>> On Fri, 27 Oct 2006, Paul Allen wrote:
>>
>>>> From Julian Elischer <julian_at_elischer.org>, Fri, Oct 27, 2006 at 12:27:14PM -0700:
>>>> The aim of the fair scheduling code is to ensure that if you, as a user,
>>>> make a process that starts 1000 threads, and I, as a user, make an
>>>> unthreaded process, then I can still get to the CPU at somewhat similar
>>>> rates to you. A naive scheduler would give you 1000 CPU slots and me 1.
>>>
>>> Ah. Let me be one of the first to take a crack at attacking this idea as
>>> a mistake.
>>
>> No, it is POSIX. You, the application, can write a program with
>> system scope or process scope threads and get whatever behavior
>> you want, within rlimits of course.
>>
>> If you want unfair scheduling, then create your threads with
>> system scope contention; otherwise use process scope. The
>> kernel should be designed to allow both, and to have adjustable
>> limits in place for (at least) system scope threads.
>>
>> No one is saying that you can't have as many system scope threads
>> as you want (and as allowed by limits), just that you must also
>> be able to have process scope threads (with probably higher limits,
>> or possibly no limits).
>
> I might be missing something here, but the OP was separating M:N (which is
> what you are referring to above) from "fairness" (not giving a process
> with 1000 *system scope* threads 1000 CPU scheduling slots). As far as I
> know, the first one is POSIX and the second one is not.
>
> FWIW: as an application programmer who has spent a considerable amount of
> time lately trying to make a heavily multithreaded application run most
> efficiently on a 32-way machine, I would rather not have to deal with
> "fairness" -- M:N is bad enough.

No, fairness is making sure that 1000 process scope threads do not
negatively impact other processes.
1000 system scope threads are controlled by your ulimit settings
(each one counts as a process).

Received on Fri Oct 27 2006 - 23:25:09 UTC
This archive was generated by hypermail 2.4.0 : Wed May 19 2021 - 11:39:01 UTC