As you might be aware, DTrace timestamps are currently derived from the TSC value:
http://en.wikipedia.org/wiki/Time_Stamp_Counter

DTrace timestamps are measured in nanoseconds, and a formula similar to the following is used for the calculation:

    rdtsc() * 1000000000 / tsc_freq

where rdtsc() is a function that returns the current TSC value and tsc_freq is the frequency of the TSC.

This formula produces proper results as long as tsc_freq stays constant, but there are environments where that might not be the case. If a CPU has a non-invariant TSC and the processor's clock frequency changes (e.g. because of powerd), then tsc_freq changes too. As a result, the formula would produce wildly different values and, most importantly, the values would not be monotonic. Timestamp values that jump back and forth would not only be useless to a user, they would also confuse DTrace's internal logic.

There are at least the following two alternatives:

1. Keep things as they are and warn users not to change the CPU clock frequency while using DTrace on a CPU that doesn't have an invariant TSC. I think that this would cause only minor inconvenience to a portion of DTrace users.

2. Use the raw TSC value as the DTrace timestamp and document this difference from the original DTrace. Advantage: the timestamp value is always monotonic. Disadvantage: manual conversion is needed to get "real" time (using the same formula). Please note that in this case timestamps would be in a non-linear time dimension if the TSC frequency changes, so to get meaningful timestamps (when needed/important) one would still have to make sure that the TSC frequency stays constant.

Please share your opinion on these approaches, or suggest yet another alternative.

Just in case, the related sysctls are:
machdep.tsc_freq
kern.timecounter.invariant_tsc

--
Andriy Gapon
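[A minimal user-level C sketch of the conversion described in the message above, for illustration only. The rdtsc() wrapper and the use of the machdep.tsc_freq sysctl are assumptions about how one could reproduce the formula outside the kernel; this is not the actual DTrace code, which performs the scaling in the kernel and avoids the overflow that the naive multiplication below is prone to.]

#include <sys/types.h>
#include <sys/sysctl.h>
#include <stdint.h>
#include <stdio.h>

/* Read the raw TSC value (x86/amd64 only). */
static inline uint64_t
rdtsc(void)
{
	uint32_t lo, hi;

	__asm __volatile("rdtsc" : "=a" (lo), "=d" (hi));
	return (((uint64_t)hi << 32) | lo);
}

int
main(void)
{
	uint64_t tsc_freq;
	size_t len = sizeof(tsc_freq);

	/*
	 * machdep.tsc_freq is the TSC frequency in Hz; a 64-bit
	 * integer is assumed here.
	 */
	if (sysctlbyname("machdep.tsc_freq", &tsc_freq, &len, NULL, 0) != 0) {
		perror("sysctlbyname(machdep.tsc_freq)");
		return (1);
	}

	/*
	 * The formula from the message: TSC ticks scaled to nanoseconds.
	 * Note that the multiplication overflows 64 bits after only a few
	 * seconds' worth of ticks on a GHz-class CPU, so real code has to
	 * scale more carefully than this.
	 */
	uint64_t ns = rdtsc() * UINT64_C(1000000000) / tsc_freq;

	printf("raw TSC ticks scaled to ns: %ju\n", (uintmax_t)ns);
	return (0);
}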