On 12/22/11 10:56, Igor Mozolevsky wrote:
> On 22 December 2011 05:54, Daniel Kalchev <daniel_at_digsys.bg> wrote:
[...]
>> Any 'benchmark' has a goal. You first define the goal and then measure how
>> different contenders achieve it. Reaching the goal may have several
>> measurable metrics, that you will use to later declare the winner in each.
>> Besides, you need to define a baseline and be aware of what theoretical
>> max/min values are possible.
>
> Treating a benchmark as a binary win/lose is rather naive, it's not a
> competition, and (I hope) no serious person ever does that. A proper
> benchmark shows true strength and weaknesses so than a well-informed
> intelligent decision can be taken by an individual according to that
> individual's needs. The caveat, of course, is making your methodology
> clear and methods repeatable!
>
>
> Cheers,
>
> --

Benchmarks can also lead developers to look more closely at the weak points of their OS, if they are open to that. In that sense, benchmarks are very useful. They are not useful, however, if every real fault of the OS is excused as faulty benchmarking.

I remember that FreeBSD's weaker threaded I/O performance was long dismissed as a flaw in the benchmark design. Or look at the recent thread regarding SCHED_ULE: why does a user who experiences genuinely worse performance with SCHED_ULE have to demonstrate the fault to some engineer in a nearly scientific manner? I would expect the developer or responsible engineer to handle such reports in a more user-friendly way.

If a benchmark reveals severe weak points in FreeBSD and I then have to read about obscure tweaks of undocumented sysctls, this OS would be a no-go if I were a manager making decisions.

And yes, I know FreeBSD is a free and open project. But I also know that this free and open project does not rely only on volunteers; volunteers do not expect funding or payment. So even FreeBSD depends on some financial basis, and such a basis has to be taken care of.