Jeff Roberson wrote:

>On Mon, 13 Dec 2004, Julian Elischer wrote:
>
>>The whole problem that "slots" is trying to solve is to stop a single
>>process from being able to flood the system with threads and therefore
>>make the system unfair in its favour.
>>
>>The "slots" method is really suitable for the 4bsd scheduler but it is
>>really not so good for ULE (at least I think that there are probably
>>better ways that ULE could implement fairness).
>>
>>What I think should happen at this stage is that the inclusion of
>>kern_switch.c should be replaced by actually copying the contents of
>>that file into the two schedulers, and that they be permitted to
>>diverge. This would allow ULE and BSD to be cleaned up in terms of the
>>sched_td/kse hack (where they are in fact the same structure, but to
>>keep diffs to a minimum I defined one in terms of the other with
>>macros).
>>
>>It would also allow Jeff to experiment absolutely freely on how ULE
>>might implement fairness without any constraints of worrying about the
>>BSD scheduler, and vice versa.
>>
>>I have been hesitant to do this because there was some (small) amount
>>of work going on in the shared file, but I think it is time to cut the
>>umbilical cord. If ULE is really fixed then this would be a good time
>>to break them apart, and delete kern_switch.c (or at least move most
>>of the stuff in it out to the two schedulers). This would protect ULE
>>from future problems being "imported" from BSD, for example.
>>
>>Comments?
>
>Why don't we move the ke_procq into the thread and then kern_switch can
>remain with the generic runq code? Then we can move *runqueue into the
>individual schedulers. At least then we won't have to make a copy of
>the bit twiddling code.

Hmm, I just noticed that both 4BSD and ULE have kse structure (td_sched)
fields that are no longer used (e.g. ke_kglist, ke_kgrlist).

The bit-twiddling code is already separate in runq.c, is it not?
The fact that functions in kern_switch are currently used by both BSD
and ULE doesn't make them "generic" from my perspective. They are just
shared for historical reasons. runq_remove() and runq_add() (for
example) are pretty generic, but would still need changing if a thread
were on more than one list. setrunqueue() and remrunqueue() are heavily
based on what fairness method is used. I'm not happy with the SLOTS as
the ultimate answer, only as the easiest (except for "ignore
fairness"). Having them generic limits how this might be changed.
Certainly if ULE were to implement a smarter fairness method it would
need to have its own copy of them.

Re. moving ke_procq (which should be renamed to ke_runq_entry or
something): what if a scheduler wants to keep a thread on TWO lists?
Putting it in the scheduler-independent part of the thread structure
makes this harder to do. For example, I would like to experiment with a
version of the BSD scheduler that keeps a thread on BOTH the per-CPU
queue and an independent queue. It gets removed from both when
selected, but selection is done from the per-CPU queue first, and
proceeds to the general queue only if there is nothing for that CPU.

Another example would be a scheduler that uses probability scheduling
(I forget the proper name) rather than run queues. It would require a
completely different set of fields to represent its internal
structures. Having an externally visible run queue would be misleading,
because it would be visible but not used.

I would actually go the other way: td_runq should be moved into the
per-scheduler td_sched structure, as it is used by the fairness code
only, and that could be implemented completely differently by
different schedulers. I left it where it was only for diff-reduction
reasons.

Is there a real reason that the two schedulers should not have separate
copies of this code, other than disk space?
I think that maintenance might even be made easier if the people
maintaining them don't always have to bear in mind that the code is
being used in two different scenarios with very different frameworks
around them.

julian

Received on Mon Dec 13 2004 - 20:50:44 UTC