:I think the notion of fairness is orthogonal to M:N threading. M:N is about
:efficiently representing user threading to kernel space, as well as avoiding
:kernel involvement in user context switches when not needed. Fairness is
:about how the kernel allocates time slices to user processes/threads.
:Fairness can be implemented for both 1:1 and M:N, with the primary differences
:being in bookkeeping.

Yes, this is precisely what I mean. Very well said.

What we are talking about here is primarily algorithmic complexity and physical resource limitations (e.g. kernel memory). Having the kernel scheduler only deal with N threads, where N is limited by the number of physical cpus, is a far easier problem for the kernel to solve in all respects than having the kernel deal with M*N individual threads (sketched below). I personally see no reason why a program couldn't have 10,000 threads, or 100,000 threads, or a million threads, but the kernel is the wrong place to try to manage them if your system only has N cpus (N = 2, 4, 8, 16, 32, etc). You have to ask yourself: what exactly is the kernel accomplishing by trying to manage all those threads for a single application when it only has N cpu contexts to work with anyhow? The answer is: the kernel should only have to worry about the N cpu contexts and the kernel memory resources for those contexts.

----

From the point of view of POSIX threading and resource limits, people need to understand two things:

(1) setrlimit was NEVER designed as a system moderation tool. It was designed to cause runaway programs to fail, period. setrlimit cannot, in fact, be used as a system moderation tool, not very well anyway. It especially breaks down when you have a huge range of acceptable values, because higher values tend to multiply out and you wind up losing the protection that setrlimit was designed to supply. A good example of this is having a per-process descriptor limit AND a per-user process limit: X*Y often exceeds the size of the kernel's global descriptor table. Oops! (See the setrlimit sketch below.)

(2) Just because the POSIX scheduler implements all sorts of different scopes and priority schemes says NOTHING AT ALL about how programs operating under such a scheduler should be apportioned cpu relative to OTHER PROGRAMS WHICH ARE INDEPENDENTLY RUNNING ON THE SYSTEM. POSIX is an abstraction (a virtualization of available resources), just like everything else. If you try to treat it as a hard requirement, the only result will be a broken system that might happily run everything else into the ground and stop allowing root ssh logins in order to accommodate a badly written POSIX program.

There are many third party applications that set POSIX priorities, in particular realtime priorities, which I would rather they not actually get (see the sched_setscheduler sketch below). Most of these programs set their priorities based on the author's attempt to tune them on a single operating system (e.g. linux) and in a single operating environment. All a program can ever really do when requesting POSIX scheduling resources is compete against itself. It is the system operator, at a higher level, who must control how those resources compete with other programs. That should be obvious to everyone. It is a whole lot easier for the kernel to give the system operator this power if the kernel scheduler does not have to juggle thousands of threads. It is very easy to write a scheduler for threaded applications when the most you have to deal with is N threads (N = ncpus) per application.
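A crude sketch to make the N-versus-M*N bookkeeping point concrete (illustrative only; NCPUS and MTASKS are made-up values): N kernel threads service M user-level work units, so the kernel scheduler only ever juggles N runnable entities. A real M:N implementation also has to deal with blocking syscalls, upcalls and preemption, which this deliberately ignores.

/*
 * Illustrative sketch: N kernel-visible contexts pulling M user-level
 * work units off a shared list.  The kernel never sees the M units
 * individually; it only schedules the NCPUS contexts.
 */
#include <pthread.h>
#include <stdio.h>

#define NCPUS   4       /* made-up: kernel-visible contexts (ncpus) */
#define MTASKS  10000   /* made-up: user-level "threads" */

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static int next_task = 0;

static void *
context_main(void *arg)
{
    (void)arg;
    for (;;) {
        int task;

        pthread_mutex_lock(&lock);
        task = (next_task < MTASKS) ? next_task++ : -1;
        pthread_mutex_unlock(&lock);
        if (task < 0)
            break;
        /* run user-level task 'task' until it yields or finishes */
    }
    return (NULL);
}

int
main(void)
{
    pthread_t ctx[NCPUS];
    int i;

    for (i = 0; i < NCPUS; i++)
        pthread_create(&ctx[i], NULL, context_main, NULL);
    for (i = 0; i < NCPUS; i++)
        pthread_join(ctx[i], NULL);
    printf("ran %d user tasks on %d kernel contexts\n", MTASKS, NCPUS);
    return (0);
}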
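A minimal sketch of point (1), with made-up numbers: setrlimit() is a per-process failure point and nothing more, and the trailing comment shows how a per-process descriptor limit and a per-user process limit multiply out.

/*
 * Hypothetical sketch: cap a single process's descriptor usage.
 */
#include <sys/resource.h>
#include <stdio.h>
#include <stdlib.h>

int
main(void)
{
    struct rlimit rl;

    /* Fail this process once it holds 8192 open descriptors. */
    rl.rlim_cur = 8192;
    rl.rlim_max = 8192;
    if (setrlimit(RLIMIT_NOFILE, &rl) == -1) {
        perror("setrlimit");
        exit(1);
    }

    /*
     * The moderation problem: with a per-user process limit of, say,
     * 1024, one user can still demand 1024 * 8192 = ~8.4 million
     * descriptors system-wide.  The per-process limit only ever
     * protects against a single runaway program.
     */
    return (0);
}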
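And a minimal sketch of the realtime-priority pattern described above (illustrative only; the priority value is invented, and the call typically requires root): sched_setscheduler() is one POSIX interface a program can use to hand itself such a priority.

/*
 * Hypothetical sketch of the pattern being criticized: a program
 * granting itself a realtime priority tuned for one OS and one
 * environment.
 */
#include <sched.h>
#include <stdio.h>

int
main(void)
{
    struct sched_param sp;

    sp.sched_priority = 50;     /* the author's "magic" number, made up */

    /*
     * SCHED_FIFO preempts every normal time-sharing process on the box.
     * Whether that is appropriate relative to OTHER programs is something
     * only the system operator can know; the program itself can only
     * meaningfully prioritize against its own threads.
     */
    if (sched_setscheduler(0, SCHED_FIFO, &sp) == -1)
        perror("sched_setscheduler");

    return (0);
}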
--

Now let's consider programs which fork() instead of thread.

The argument that threading is equivalent to forking from a management standpoint is just plain silly. From a design standpoint, programmers are very well aware of the resources required to fork(), and consequently per-fork tasks are generally much, MUCH better understood by system operators in the management context than per-thread tasks. Per-thread tasks tend to be opaque... you never know how a threaded program might be written. You just cannot treat the two as equivalent, or even close to equivalent.

-Matt

Received on Sun Oct 29 2006 - 02:44:56 UTC