Eric Anderson wrote:
> M. Warner Losh wrote:
>
>> In message: <20051014091004.GC18513_at_uk.tiscali.com>
>>             Brian Candler <B.Candler_at_pobox.com> writes:
>> : On Thu, Oct 13, 2005 at 11:10:26AM -0700, Brooks Davis wrote:
>> : > > I don't think you can measure one single integer (or 64bit) increase
>> : > > in the face of an operation that has to access backing store.  Even
>> : > > if there is a performance hit, you don't have to build your kernel
>> : > > with the option enabled.
>> : >
>> : > The one thing I'd be worried about here is that 64bit updates are
>> : > expensive on 32bit machines if you want them to be atomic.  Relative to
>> : > backing store they probably still don't matter, but they might be
>> : > noticeable.
>> :
>> : I'd be grateful if you could clarify that point for me.  Are you saying
>> : that if I write
>> :
>> :   long long foo;
>> :   ...
>> :   foo++;
>> :
>> : then the C compiler generates code for 'foo++' which is not thread-safe?
>> : (And therefore I would have to protect it with a mutex or critical
>> : section)
>> :
>> : Or are you saying that the C compiler inserts its own code around foo++
>> : to turn it into a critical section, and therefore runs less efficiently
>> : than you'd expect?
>>
>> You have to protect this thread-unsafe operation yourself.
>
> For statistics gathering purposes though, should I worry about this, or
> go for 'fast and imperfect' instead of 'perfect and slow'?  With
> filesystems, I think it's more important to leave performance high and
> get a notion of the statistics, rather than impact performance for
> perfect stats (that you may only look at occasionally anyhow).

If you make it a #define macro then you can leave the choice to compile
time: fast and loose when it expands to i++, safe and slow when it
expands to atomic_inc(&i, 1).

--
Andre

Received on Fri Oct 14 2005 - 14:34:58 UTC
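
As an illustration only (not code from the thread), a minimal userland
sketch of that compile-time switch might look like the following.  It
uses C11 <stdatomic.h> rather than the kernel's atomic(9) routines, and
the STATS_ATOMIC option name and STAT_INC/stat_counter_t names are made
up for this example:

    #include <stdint.h>
    #include <stdio.h>

    #ifdef STATS_ATOMIC
    #include <stdatomic.h>
    /* Safe and slow: an atomic read-modify-write, correct even when two
     * threads bump the counter at once on a 32-bit machine. */
    typedef _Atomic uint64_t stat_counter_t;
    #define STAT_INC(c)     atomic_fetch_add(&(c), 1)
    #else
    /* Fast and loose: a plain 64-bit increment.  On a 32-bit machine
     * this compiles to separate loads and stores, so a concurrent
     * update can be lost and a reader can see a torn value. */
    typedef uint64_t stat_counter_t;
    #define STAT_INC(c)     ((c)++)
    #endif

    static stat_counter_t bytes_read;

    int
    main(void)
    {
            STAT_INC(bytes_read);
            printf("bytes_read = %llu\n",
                (unsigned long long)bytes_read);
            return (0);
    }

Built with -DSTATS_ATOMIC the increment becomes a single atomic
operation; built without it, the 64-bit bump is the cheap but
thread-unsafe case Warner and Brooks describe above.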