On Sun, 12 Oct 2003, M. Warner Losh wrote:
> In message: <20031011234314.P23991_at_root.org>
>             Nate Lawson <nate_at_root.org> writes:
> : I am very interested in our idle load characteristics.  It seems
> : most systems I've analyzed have an average idle interrupt rate of about
> : 225 per second, dominated by the clk and rtc interrupts as shown below.
> :
> : clk irq0    99/sec
> : rtc irq8   127/sec
> :
> : Since these are both clocks, I assume the arrivals of their interrupts
> : are equally spaced and not correlated to each other.  How much latency
> : do the handlers for these have?  Are there any system processes which
> : generate repetitive bursts of very short tasks?  If so, how long do
> : those tasks take?
>
> These are clock interrupts.  One is used for timing the system (clk),
> while the other is used for profiling the system.  They are
> asynchronous to each other so that the profiling can profile more
> effectively.  On my systems with a higher HZ setting, the clk
> interrupts will happen more often, obviously.

Yes, I understand.

> : The reason why I ask is I'm coming up with a default policy for CPU
> : sleep states which can have as high a latency as a few hundred
> : microseconds.  On an idle system, this should be fine although it does
> : add to the latency for the above clock handlers.  But I also need to be
> : able to demote quickly to short sleep states (e.g., HLT) if the system
> : is becoming active, to decrease response times.
>
> The important thing with the timekeeping devices is not to lose
> interrupts, since that's how we tick out time.  On non-idle systems,
> there are issues with latency on the ticks, but on idle systems there
> wouldn't be.  100us of latency would effectively limit HZ to 5000 or so
> (I think that the Nyquist limit would apply, otherwise it is 10000).
> Some timer units have a wrap-around that makes 100Hz impossible, so
> faster rates must be used.
> If you are going into a CPU state that's low, it might make sense to
> increase the tick time, but I'm sure phk would have things to say
> about that and its wisdom (or lack thereof).

A better workaround is that I will not allow selecting a state that has
a latency >= 1 / (hz / 2).  In practice, this would allow a sleep delay
of up to 500 us for the faster HZ setting people use, 1000 (1 ms tick).
Even the deepest Cx states are typically less than 200 us, although they
also disable bus mastering while sleeping.

From other email, it appears that we bounce between idle (with only
clock interrupts, which can easily handle a few hundred us of latency)
and running, where cpu_idle() is not even called.  Given that, my
biggest concern now is IO corruption.  Are there any devices with a low
interrupt rate (or bus mastering rate) that cannot handle a few hundred
us of latency added to their handler startup?  I'm thinking of something
like a floppy drive, where the time between interrupts is great enough
that cpu_idle() is called, but which needs to be serviced quickly or the
data is over/underrun.

Thanks,
-Nate

Received on Mon Oct 13 2003 - 09:04:54 UTC