On Fri, 2006-10-27 at 20:51 -0700, Julian Elischer wrote:
> Alexandre "Sunny" Kovalenko wrote:
> > On Fri, 2006-10-27 at 18:25 -0700, Julian Elischer wrote:
> >> Alexandre "Sunny" Kovalenko wrote:
> >>> On Fri, 2006-10-27 at 16:41 -0400, Daniel Eischen wrote:
> >>>> On Fri, 27 Oct 2006, Paul Allen wrote:
> >>>>
> >>>>>> From Julian Elischer <julian_at_elischer.org>, Fri, Oct 27, 2006 at 12:27:14PM -0700:
> >>>>>> The aim of the fair scheduling code is to ensure that if you, as a user,
> >>>>>> make a process that starts 1000 threads, and I, as a user, make an
> >>>>>> unthreaded process, then I can still get to the CPU at somewhat similar
> >>>>>> rates to you. A naive scheduler would give you 1000 cpu slots and me 1.
> >>>>> Ah. Let me be one of the first to take a crack at attacking this idea as
> >>>>> a mistake.
> >>>> No, it is POSIX. You, the application, can write a program with
> >>>> system scope or process scope threads and get whatever behavior
> >>>> you want, within rlimits of course.
> >>>>
> >>>> If you want unfair scheduling, then create your threads with
> >>>> system scope contention, otherwise use process scope. The
> >>>> kernel should be designed to allow both, and have adjustable
> >>>> limits in place for (at least) system scope threads.
> >>>>
> >>>> No one is saying that you can't have as many system scope threads
> >>>> as you want (and as allowed by limits), just that you must also
> >>>> be able to have process scope threads (with probably higher limits
> >>>> or possibly no limits).
> >>>>
> >>> I might be missing something here, but the OP was separating M:N (which is
> >>> what you are referring to above) and "fairness" (not giving a process
> >>> with 1000 *system scope* threads 1000 CPU scheduling slots). As far as I
> >>> know, the first one is POSIX and the second one is not.
> >>>
> >>> FWIW: as an application programmer who has spent a considerable amount of
> >>> time lately trying to make a heavily multithreaded application run most
> >>> efficiently on a 32-way machine, I would rather not have to deal with
> >>> "fairness" -- M:N is bad enough.
> >>>
> >>
> >> No, fairness is making sure that 1000 process scope threads
> >> do not negatively impact other processes.
> >> 1000 system scope threads are controlled by your ulimit settings
> >> (each one counts as a process).
> >>
> >>
> > I apologize for misinterpreting your words. But then, if I have M:N set
> > to 10:1, I would expect an application with 1000 process scope threads to
> > have as many CPU slots as 100 processes, or, if I have 10 system scope
> > threads and 990 process scope threads, I would expect the application to
> > have as many CPU slots as 109 processes. Is this what you refer to as
> > "fairness"?
> >
>
> M:N is not a ratio, but rather the notation for saying that M user threads
> are enacted using N kernel-schedulable entities (kernel threads).
> Usually N is limited to something like NCPU kernel-schedulable entities
> running at a time (not including sleeping threads waiting for IO),
> where NCPU is the number of CPUs.
>
> So in fact M:N is usually M user threads over some number like 4 or
> 8 kernel threads (depending on #cpus), plus the number of threads waiting
> for IO.
>
> Julian

In the environment I am most familiar with -- IBM AIX -- M:N is a ratio, and it
is settable either system-wide or for a specific user via an environment
variable, e.g.:

export AIXTHREAD_MNRATIO=8:1

with the minimum number of kernel threads allocated according to another
setting:

export AIXTHREAD_MINKTHREADS=4

Neither one depends on the physical CPU count in the box (which could change in
the middle of application execution anyway). Both settings have known default
values (8:1 and 8, respectively). Between the two, I can always tell how many
kernel threads a given number of process scope threads will use. This gives me
both flexibility and predictability.

Am I understanding correctly that what you have implemented fixes the number of
kernel threads at boot time, and therefore changes the M:N ratio throughout the
run time of the application?

--
Alexandre "Sunny" Kovalenko
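
[Editorial note: the following is a minimal C sketch, not code from the thread,
illustrating the two POSIX knobs the discussion turns on: per-thread contention
scope (Daniel Eischen's point about system vs. process scope) and the optional
concurrency hint an M:N library may consult. The worker function and thread
count are illustrative only, and whether PTHREAD_SCOPE_PROCESS is honored
depends entirely on the threading library in use.]

/*
 * Sketch only: contention scope selection plus an M:N concurrency hint.
 * Worker function and thread count are illustrative, not from the thread.
 */
#define _XOPEN_SOURCE 700       /* pthread_setconcurrency() on some libcs */
#include <pthread.h>
#include <stdio.h>

#define NWORKERS 8              /* illustrative thread count */

static void *worker(void *arg)
{
    (void)arg;
    /* ... application work ... */
    return NULL;
}

int main(void)
{
    pthread_t tids[NWORKERS];
    pthread_attr_t attr;

    pthread_attr_init(&attr);

    /*
     * PTHREAD_SCOPE_PROCESS: the thread competes for CPU only within this
     * process (the userland M:N machinery schedules it, where supported).
     * PTHREAD_SCOPE_SYSTEM: the thread is a kernel-schedulable entity and
     * competes system-wide, subject to rlimits.
     */
    if (pthread_attr_setscope(&attr, PTHREAD_SCOPE_PROCESS) != 0)
        fprintf(stderr, "process scope not supported by this library\n");

    /*
     * Hint (not a guarantee) at how many kernel entities the library
     * should use for process scope threads; a 1:1 library is free to
     * ignore it.
     */
    pthread_setconcurrency(4);

    for (int i = 0; i < NWORKERS; i++)
        pthread_create(&tids[i], &attr, worker, NULL);
    for (int i = 0; i < NWORKERS; i++)
        pthread_join(tids[i], NULL);

    pthread_attr_destroy(&attr);
    return 0;
}

[On a 1:1 library the program still runs: the scope request simply fails and
every thread ends up system scope, which is exactly the case the rlimit
discussion above is about.]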