On Sun, 25 Feb 2007, Kris Kennaway wrote:

> On Sat, Feb 24, 2007 at 10:00:35PM -0700, Coleman Kane wrote:
>
>> What does the performance curve look like for the in-CVS 7-CURRENT tree
>> with 4BSD or ULE? How do those stand up against the Linux SMP scheduler
>> for scalability? It would be nice to see the comparison displayed, to
>> show what performance improvements the aforementioned patch actually
>> realized. This would likely make a nice graphic for the SMPng project
>> page, BTW...
>
> There are graphs of this on Jeff's blog, referenced in that URL. Fixing
> filedesc locking makes a HUGE difference.

I think the real message of all this is that our locking strategy is
basically pretty reasonable for the paths exercised by this (and quite a
few other) workloads, but our low-level scheduler and locking primitives
need a lot of refinement.

The next step here is to look at the impact of these changes (individually
and together) with other hardware configurations and other workloads. On
the hardware side, I'd very much like to see measurements done on that
rather nasty generation of Intel Xeon P4s, where the cost of mutexes was
astronomically out of proportion to other operation costs; historically
this heavily pessimized ULE because of the additional locking it did (I
don't know whether this still applies).

It would be really great if we could find "workload owners" who would
maintain easy-to-run benchmark configurations and also run them regularly
on a fixed hardware configuration over a long period, publishing results
and testing patches. Kris has done this for SQL benchmarks to great
effect, giving a nice controlled testing environment for a host of
performance-related patches, but SQL is not the be-all and end-all of
application workloads, so having others do similar things with other
benchmarks would be very helpful.

Robert N M Watson
Computer Laboratory
University of Cambridge

Received on Sun Feb 25 2007 - 09:51:33 UTC
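[As an illustration of the kind of per-mutex cost measurement discussed above, here is a minimal userland sketch. It assumes pthread mutexes and clock_gettime(), and a made-up iteration count; it does not exercise the kernel mtx(9) path that ULE actually uses, so treat it only as a rough way to compare lock/unlock overhead across hardware generations such as the Xeon P4s mentioned.]

/*
 * Sketch: time uncontended pthread mutex acquire/release pairs and
 * report the average cost per pair in nanoseconds.  Build with
 * something like: cc -O2 -pthread mutexbench.c
 */
#include <pthread.h>
#include <stdio.h>
#include <time.h>

#define ITERATIONS 10000000UL	/* arbitrary, illustrative count */

int
main(void)
{
	pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
	struct timespec start, end;
	unsigned long i;
	double ns;

	clock_gettime(CLOCK_MONOTONIC, &start);
	for (i = 0; i < ITERATIONS; i++) {
		/* Uncontended acquire/release pair. */
		pthread_mutex_lock(&m);
		pthread_mutex_unlock(&m);
	}
	clock_gettime(CLOCK_MONOTONIC, &end);

	ns = (end.tv_sec - start.tv_sec) * 1e9 +
	    (end.tv_nsec - start.tv_nsec);
	printf("%.1f ns per lock/unlock pair\n", ns / ITERATIONS);
	return (0);
}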