On Mon, 19 Oct 2009, Ivan Voras wrote:

>>>> I noticed that the softclock threads didn't seem to be bound to any
>>>> CPU.
>>>>
>>>> I'm not sure whether it's the Right Thing (TM) to bind them to the
>>>> corresponding CPUs, though: it might be good to give the scheduler a
>>>> chance to rebalance callouts.
>>>>
>>>> I'm about to test a modification like the attached diff. Comments
>>>> are welcome.
>>>
>>> Yes, I think the intent is that they have a "soft" affinity to the
>>> CPU where the lapic timer is firing, but not a hard binding, allowing
>>> them to migrate if required. It would be interesting to measure how
>>> effective that soft affinity is in practice under various loads;
>>> presumably the goal would be for the softclock thread to migrate if a
>>> higher (numerically lower) priority thread is hogging the CPU.
>>
>> So why are there NCPU softclock threads if the binding isn't
>> important?
>
> Never mind, I got it: they are not used only for "clock".

Using a soft affinity system encourages cache locality and load balancing, but allows the scheduler to compensate for occasional (or endemic) imbalance by placing work on other CPUs. A hard affinity system (in which the scheduler isn't allowed to do that) is potentially less adaptive in that regard. Obviously, there are lots of tradeoffs, but allowing ithreads and swis to wander if it turns out interrupt load (or callout scheduling, for that matter) isn't nicely balanced seems a reasonable design choice to me.

What might be nice is a version of wakeup() that hints at a strong data flow between two threads, telling the scheduler that maintaining CPU affinity has extra value, versus a version of wakeup() that hints at minimal data flow, in which case there is reduced benefit to establishing shared affinity.

Robert N M Watson
Computer Laboratory
University of Cambridge

Received on Mon Oct 19 2009 - 15:50:12 UTC