On Thu, 27 Oct 2005, Poul-Henning Kamp wrote:

>>> Why would anybody take a timestamp at all I/O syscalls?
>>>
>>> "I wonder why my car can only go 30 km/h with the trunk full of
>>> concrete"?
>>>
>>> In a database application I could possibly understand a timestamp
>>> after every write.
>>>
>>> But after _all_ I/O syscalls? That's just plain stupid...
>>
>> Don't panic, I agree that is stupid code, but I cannot change it; it
>> was not written by me, sorry!
>
> I'm not panicking, I'm merely pointing out that we should not optimize
> performance for bogus code but rather try to improve it.

There is, of course, a tension in optimizing systems to speed up
applications that have been optimized for other systems. :-)

Sadly, POSIX doesn't say anything about how applications can express
preferences about the cost and granularity of time measurement. It has
long been an issue, though: AFS used to do magic on many UNIX systems
to expose a timer-tick timestamp to user space from the kernel via a
special magic page. This trick has been used in lots of other places
too, but it rests on the assumption that an application is willing to
pay a lower cost for a coarse measurement of time taken frequently,
rather than a higher cost for an accurate measurement taken less
frequently. For user-space applications implementing network protocols
this matters, since they do need accurate measurements of round-trip
time in order to calculate bandwidth windows, etc. We almost want
clock_gettime() with CLOCK_TENMS, or the like.

Robert N M Watson

Received on Thu Oct 27 2005 - 11:04:07 UTC
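[Editorial sketch, not part of the original message: the CLOCK_TENMS
clock Watson asks for does not exist, but later systems grew clock IDs
that fill roughly that role -- CLOCK_MONOTONIC_FAST on FreeBSD and
CLOCK_MONOTONIC_COARSE on Linux -- returning a timestamp at timer-tick
granularity without reading the hardware timecounter on every call.
The short C program below illustrates the trade-off by querying both a
precise and a coarse clock and printing the coarse clock's resolution.]

/*
 * Compare a precise monotonic clock with a cheap, coarse one.
 * The coarse clock IDs are later additions and vary by OS; fall
 * back to the precise clock where neither is available.
 */
#include <stdint.h>
#include <stdio.h>
#include <time.h>

#if defined(CLOCK_MONOTONIC_FAST)		/* FreeBSD */
#define	COARSE_CLOCK	CLOCK_MONOTONIC_FAST
#elif defined(CLOCK_MONOTONIC_COARSE)		/* Linux */
#define	COARSE_CLOCK	CLOCK_MONOTONIC_COARSE
#else
#define	COARSE_CLOCK	CLOCK_MONOTONIC		/* no coarse clock available */
#endif

int
main(void)
{
	struct timespec precise, coarse, res;

	/* Precise, but may read an expensive hardware timecounter. */
	clock_gettime(CLOCK_MONOTONIC, &precise);

	/* Cheap, but only as granular as the timer tick (~1-10 ms). */
	clock_gettime(COARSE_CLOCK, &coarse);

	/* The reported resolution of the coarse clock shows the cost/accuracy trade. */
	clock_getres(COARSE_CLOCK, &res);

	printf("precise: %jd.%09ld\n",
	    (intmax_t)precise.tv_sec, precise.tv_nsec);
	printf("coarse:  %jd.%09ld (resolution %ld ns)\n",
	    (intmax_t)coarse.tv_sec, coarse.tv_nsec, res.tv_nsec);
	return (0);
}

[Build with "cc -o coarse coarse.c"; older glibc additionally needs
-lrt for clock_gettime(). An application doing per-I/O timestamping, as
discussed above, would use the coarse clock in its hot path and reserve
the precise clock for measurements such as round-trip time.]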