On Fri, 28 Oct 2005, Chuck Swiger wrote:

>> I'm happy to take a stab at this.
>>
>> We still need someone to grab the context switch time keeping by the
>> horns and Do Something, though.
>
> If I understand what was said earlier, the getmicrotime() kernel
> function ought to maintain the time at "(~ 1 msec)" precision. Could
> getmicrotime() be exported as a syscall, so that we could do something
> like this:
>
> --- lib/libc/gen/time.c~	Fri Jul 18 22:53:46 2003
> +++ lib/libc/gen/time.c	Fri Oct 28 13:04:26 2005
> @@ -47,7 +47,8 @@
>  	struct timeval tt;
>  	time_t retval;
>
> -	if (gettimeofday(&tt, (struct timezone *)0) < 0)
> +	getmicrotime(&tt);
> +	if (tt.tv_sec == 0)
>  		retval = -1;
>  	else
>  		retval = tt.tv_sec;
>
> Note that this might even cause time(2) to return an error if the system
> is using dummyclock, which could be considered a feature. :-)

In the rwatson_clock branch in Perforce, I've added two new clocks to
clock_gettime():

CLOCK_SECOND - getnanotime() with the nanoseconds truncated
CLOCK_FUZZY  - getnanotime() without the nanoseconds truncated

I recognize that both names are badly chosen. I'm currently compiling
kernels to do a bit of benchmarking.

If we remove the call to nanotime() in the context switch, we may want to
add a callout that calls nanotime() once each tick? Or maybe call it
automatically in the callout handler, so that any code running in a
callout can use getnanotime() without having to worry (much) about
accuracy.

Robert N M Watson

Received on Fri Oct 28 2005 - 15:33:20 UTC