My thoughts about benchmarking: don't forget, it's a way to get at least an estimate of how your system will behave in given circumstances. When reviewers measure a new video card, they test a few factors - FPS in modern games, pixel/texture fill rate, and whatever else they do there. That's because a video card has a few simple, plain applications, so cards from different vendors and generations are compared without problems. "Different version of compiler, OS, etc." is largely irrelevant - it's like saying we can't compare these shovels because they are made from different sorts of wood. An OS is created to serve (to produce some useful work), not to be measured and then turned off forever :)

So, in an ideal world, there would be benchmarks of real-world situations (the equivalent of FPS for video cards - something that is, if not universal, at least widespread among users). For example: fully tuned FreeBSD vs fully tuned Linux under high network load (say, http + php + mysql; a rough sketch of such a measurement is appended at the end of this mail). It's hard to disagree that this is a very widespread use case for a server OS. Good tests would also be the throughput of FTP/file/Samba/NFS/rsync servers.

For the desktop there's not much room for tuning (on the Linux side): Linux distributions are mostly tested out of the box, or tuned via some GUI settings applet, while FreeBSD out of the box needs some additional care (fetching the latest ports, installing the latest video drivers, Xorg, etc., probably some sysctl tuning). I'm glad there's PC-BSD, and PC-BSD can be used for the desktop testing.

And what to test on the desktop? IMHO:
- WM responsiveness;
- program multitasking, and how responsiveness degrades when many background programs are running (i.e. what user experience we get when the system is under a pretty heavy load);
- it would probably be fair to compare the same software under the same circumstances (the same version of FF, Chromium, maybe something else); there's a FF extension, iMacros, which can be used to simulate user actions - any actions, in many tabs;
- overall usage experience, e.g. the time between launching a program and its window appearing (file managers, browsers, settings applets, calculator, etc.) - a second sketch at the end of this mail shows one way to time this;
- sleep/wake times with an idle system and with many programs launched, if sleep/wake is supported at all (as for me, my laptop can be put to sleep, but refuses to wake properly);
- the time between pressing the "KDE start menu icon" and the menu appearing;
- your variant? ...

That would be more careful benchmarking - not all people spend their time number-crunching and heavy archiving. And such benchmarks could at least be applied by users: they can imagine what it is like to have Dolphin (the KDE file manager) launch in 1.03 seconds, and alt-tabbing give a new window in 0.2 seconds while a video is playing. But what about the time to calculate Super-PI? Or to archive a 4 GB file? Those are mostly abstract measurements, and almost useless - I repeat, for average desktop users.

At work I have PC-BSD installed on a 24 GB SSD, with the default ZFS setup slightly tuned (prefetch disabled), and I can say the system is great and not sluggish. I sometimes happen to fill the FS to 100%, then delete logs and continue working, without any sign of ZFS problems (I read somewhere that ZFS doesn't like working when little free space is available). KDE is old, but pretty fast. How can I measure all this with a few numbers :) ?

My point is that, if not now, then in the near future, benchmarking needs to become more practical and applicable for users - desktop measurements and server measurements. I hope the Phoronix Test Suite will support desktop-experience benchmarking soon :)

Thanks.
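P.S. As a rough illustration of the http + php + mysql comparison above, here is a minimal Python sketch of my own (not something from any existing suite) that fires a fixed number of requests at one URL and reports requests per second and average latency. The URL, request count and worker count are placeholders; a serious comparison would use a dedicated load generator and identical hardware on both systems.

    #!/usr/bin/env python3
    # Minimal HTTP load sketch: send REQUESTS requests to URL, a few at a time,
    # and report aggregate throughput plus average per-request latency.
    import time
    import urllib.request
    from concurrent.futures import ThreadPoolExecutor

    URL = "http://localhost/index.php"   # placeholder for the php+mysql page under test
    REQUESTS = 200                       # total requests to send
    WORKERS = 8                          # concurrent connections

    def fetch(_):
        start = time.monotonic()
        with urllib.request.urlopen(URL, timeout=10) as resp:
            resp.read()                  # drain the response body
        return time.monotonic() - start

    wall_start = time.monotonic()
    with ThreadPoolExecutor(max_workers=WORKERS) as pool:
        latencies = list(pool.map(fetch, range(REQUESTS)))
    elapsed = time.monotonic() - wall_start

    print("requests/sec: %.1f" % (REQUESTS / elapsed))
    print("avg latency:  %.1f ms" % (1000 * sum(latencies) / len(latencies)))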
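P.P.S. And a sketch of the "time between launching a program and its window appearing" measurement. It assumes an X11 session with the xdotool utility installed, and that no window of the tested program is already open; "dolphin" is only an example name.

    #!/usr/bin/env python3
    # Rough launch-to-window timer: start a program, then poll X11 with xdotool
    # until a window with a matching class shows up, and print the elapsed time.
    import subprocess
    import sys
    import time

    cmd = sys.argv[1] if len(sys.argv) > 1 else "dolphin"   # example program

    start = time.monotonic()
    proc = subprocess.Popen([cmd],
                            stdout=subprocess.DEVNULL,
                            stderr=subprocess.DEVNULL)

    while True:
        # xdotool prints the IDs of matching windows; empty output means "not yet".
        found = subprocess.run(["xdotool", "search", "--class", cmd],
                               capture_output=True, text=True)
        if found.stdout.strip():
            break
        if time.monotonic() - start > 30:
            sys.exit("no window appeared within 30 seconds")
        time.sleep(0.01)

    print("%s: window appeared after %.2f seconds" % (cmd, time.monotonic() - start))
    proc.terminate()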
--
Regards,
Alexander Yerenkow