Paul Allen wrote:
> From Julian Elischer <julian_at_elischer.org>, Tue, May 09, 2006 at 10:35:06AM -0700:
>> Sven Petai wrote:
>>
>> are there any patches that take the gettimeofday() calls and replace
>> them with something that is cheap, such as only doing every 10th one
>> and just returning the last value + 1 usec for the other ones?
>
> Better yet, just realize that during any given scheduler quantum the
> process is running on the same CPU. Therefore, you should just read
> the TSC.
>
> For that matter, if libc would just remember an accurate synchronized
> timestamp and TSC pair on a per-CPU basis, it should be trivial to get
> cheap, synchronized, and accurate TSC time on SMP systems. TSC drift
> isn't horrible--and best of all, if the process drifts from CPU to CPU,
> libc will have a decent chance at doing incremental calibrations.
> Simply giving libc easy access to a counter of scheduler ticks can be
> used to ensure this process delivers monotonic time.
>
> Let me formalize this a bit: you have a noisy but cheap time source,
> the TSC, always available provided you compute your deltas on a
> per-CPU basis. You have another low-resolution, low-noise, but cheap
> time source: the count of scheduler ticks. Rather than coding an ad
> hoc algorithm, this information should be fed into a Kalman filter.
>
> There are some lingering details: you need to invalidate the TSC
> calibration when the processor speed changes, but this is controlled
> by powerd, no? Second, if you can manage to throttle the CPU, it
> suggests that you can also manage to pay higher time query costs and
> force clock_gettime calls.

That's not enough. On some CPUs (like the current Opterons), the TSC
slows down when the CPU executes an HLT instruction, so if you want
good accuracy, you'll need to take that into account too.

--
Suleiman

Received on Wed May 10 2006 - 06:43:03 UTC
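
A minimal sketch of the per-CPU calibration idea Paul describes, assuming
x86 with GCC-style inline assembly; struct tsc_cal, cheap_gettime, and the
refresh-every-scheduler-tick policy are hypothetical illustrations, not an
existing libc interface:

#include <stdint.h>
#include <time.h>

struct tsc_cal {
	struct timespec base_wall;	/* wall time at last calibration */
	uint64_t	base_tsc;	/* TSC value at last calibration */
	uint64_t	tsc_hz;		/* measured TSC frequency */
};

static inline uint64_t
rdtsc(void)
{
	uint32_t lo, hi;

	__asm__ __volatile__("rdtsc" : "=a"(lo), "=d"(hi));
	return ((uint64_t)hi << 32) | lo;
}

/*
 * Extrapolate current time from the calibration pair of the CPU we are
 * running on.  The delta stays small if the pair is refreshed every
 * scheduler tick, so the 64-bit multiply below does not overflow.
 */
static void
cheap_gettime(const struct tsc_cal *cal, struct timespec *ts)
{
	uint64_t delta = rdtsc() - cal->base_tsc;
	uint64_t nsec = (uint64_t)cal->base_wall.tv_nsec +
	    delta * 1000000000ULL / cal->tsc_hz;

	ts->tv_sec = cal->base_wall.tv_sec + nsec / 1000000000ULL;
	ts->tv_nsec = nsec % 1000000000ULL;
}

Per Paul's point about quanta, a caller never migrates CPUs mid-quantum,
so the pair it reads is the one for the CPU it is executing on; the
scheduler-tick counter can detect a stale pair and fall back to a real
clock_gettime call.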
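And a toy version of the Kalman filter step, with the estimated TSC
frequency as the single state variable; the post doesn't specify a model,
so kf_update and the noise parameters q and r are illustrative assumptions
(a real implementation would have to measure them):

struct kf {
	double	x;	/* state: estimated TSC frequency, Hz */
	double	p;	/* estimate variance */
	double	q;	/* process noise: frequency drift per step */
	double	r;	/* measurement noise: scheduler-tick jitter */
};

/*
 * Fold in one observation: tsc_delta TSC counts elapsed over tick_delta
 * scheduler ticks of known period tick_sec.
 */
static void
kf_update(struct kf *f, uint64_t tsc_delta, uint64_t tick_delta,
    double tick_sec)
{
	double z = (double)tsc_delta / ((double)tick_delta * tick_sec);
	double k;

	f->p += f->q;			/* predict: drift grows uncertainty */
	k = f->p / (f->p + f->r);	/* Kalman gain */
	f->x += k * (z - f->x);		/* correct toward the measurement */
	f->p *= (1.0 - k);		/* shrink variance */
}

Seed f->x from a rough one-shot calibration with f->p large, and the
filter converges as tick observations accumulate; a speed change (or the
HLT slowdown Suleiman mentions) shows up as a step in z, which is exactly
when the calibration should be invalidated rather than filtered.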