In message <20060125114544.edawx42obkkos0ck_at_netchild.homeip.net>, Alexander Leidinger writes:

>> That way, the user/system time reported will get units of "cpu seconds
>> if the cpu ran full speed".
>
>How large do you expect the error will be?

I don't consider it an error, I consider it increasing precision.

If you run

	time mycommand

on your laptop, and along the way the CPU clock ramps up from 75 MHz
to 600 MHz before it reports

	user 2.01 sys 0.30 real 4.00

what exactly have you learned from the first two numbers with the
current definition of "cpu second"?

With my definition you would be more likely to see lower numbers, maybe

	user 0.20 sys 0.03 real 4.00

and they would have meaning: they should be pretty much the same no
matter what speed your CPU runs at any instant in time.

In theory, it should be possible to compare user/sys numbers you
collect while running at 75 MHz with the ones you got under full
steam at 1600 MHz.

In practice, however, things that run on real time, HZ interrupting
to run hardclock() for instance, will still make comparison of such
numbers quite shaky.  But at least they will not be random as they
are now.

-- 
Poul-Henning Kamp     | UNIX since Zilog Zeus 3.20
phk_at_FreeBSD.ORG    | TCP/IP since RFC 956
FreeBSD committer     | BSD since 4.3-tahoe
Never attribute to malice what can adequately be explained by incompetence.

Received on Wed Jan 25 2006 - 10:02:49 UTC