On 22 December 2011 10:12, Daniel Kalchev <daniel_at_digsys.bg> wrote:

> As for how fast to get from point A to point B. If you observe speed
> limits, that will depend only on the pilot, no? :)
> Both cars are sufficiently faster than the imposed speed limits.

You are ignoring acceleration, handling, and other factors... Besides,
you're missing the point: *given the same conditions* a benchmark allows
one to show how A performs compared to B, which is why I said it is
important to keep everything else constant!

At the end of the day, what users, sysadmins, &c. want to know is: given
hardware configuration H and requirement R, will software X outperform
software Y or Z? The components and the bells and whistles of X, Y or Z
are quite often irrelevant (unless one has some silly ideological reason,
for example).

> On very specific hardware, such as systems with many CPUs and lots of
> memory, you may see one better than the other -- this in most cases will
> be relevant to tuning, but also to overall system architecture.

Are you saying that careful tuning will give you _orders of magnitude_
performance increase? Got numbers to back that up? ;-)

> You may make a very "scientific", well documented and repeatable
> benchmark, such as this one:
>
>     time dd if=/dev/zero of=/dev/null
>
> ... then optimize your particular OS to run it at the highest possible
> rate... and so what? Do you know what this benchmark measures? :)

Yes, do you? I hope you are not being deliberately obtuse here...
Besides, I would criticise your test in this example: have you tried
running it with, say, bs=1g count=1000? Is there a difference in how fast
FreeBSD completes that versus how fast a Linux box does the same? (See
the sketch in the postscript below.)

The point of documenting a repeatable benchmark is to enable the person
interpreting the results to see (and verify) what was done to achieve the
result, and to treat that result accordingly.

Cheers,
--
Igor
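
P.S. To make that concrete, here is a minimal sketch of what I mean by a
documented, repeatable version of the test. The block size, count, and
run count are illustrative assumptions, not a recommendation, and note
that bs=1g is the BSD dd spelling while GNU dd on Linux wants bs=1G;
either way it still only measures in-kernel copy throughput, not real I/O:

    #!/bin/sh
    # Repeatable /dev/zero -> /dev/null test: fix the block size and
    # block count so both systems move the same amount of data.
    BS=1g          # block size (spell it 1G for GNU dd on Linux)
    COUNT=1000     # number of blocks, i.e. ~1 TB copied through memory
    RUNS=3         # repeat to expose run-to-run variance

    uname -a       # record exactly which system produced the numbers

    i=1
    while [ "$i" -le "$RUNS" ]; do
        echo "run $i:"
        time dd if=/dev/zero of=/dev/null bs="$BS" count="$COUNT"
        i=$((i + 1))
    done

Run the same script on both boxes and publish it next to the numbers;
then anyone can see what the figure actually measures and reproduce it.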