On Friday 27 January 2006 21:16, Brooks Davis wrote:
[snip]
> I agree as well.  Certainly if we were charging for use of our cluster,
> this is what we'd want.  While I probably wouldn't run powerd on the
> cluster, I am thinking about seeing if I can step down the CPU speed
> when there aren't any queued jobs on the machine.  That could save
> significant power some of the time (I'm in the process of upgrading the
> cluster portion of our server room to install 300 KVA (~KW) of power
> and plan to use it all within a year or two).
>
> Once we have the infrastructure to deal with this correctly, an
> interesting test for someone to run would be to look at disk- and
> memory-bound applications at different CPU speeds.  I suspect you'd
> find that while wallclock time increased at lower CPU speeds, CPU
> cycles would decrease for many workloads because the relative bandwidth
> of storage and maybe memory would increase.
>
> -- Brooks

Just to give the discussion a different angle: across most very large
midrange estates, the expected maximum use of systems, averaged over a
full year, is approximately 8%.  The percentage is slightly tricky
because it includes name servers, NIS, etc.  If one refines the data,
the best case looks like a maximum use of around 16% - 20%.  The
measurements were made over the entire estates of a couple of ITOs and
hardware vendors, i.e. based on 50,000+ servers.  In that light, I view
the ability to account effectively for use, speed, etc. as the more
interesting problem.
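For what it's worth, Brooks's idea of stepping the CPU down when the
queue is empty maps fairly directly onto the cpufreq(4) sysctls.  Purely
as a sketch (none of this is from the thread): the loop below assumes
dev.cpu.0.freq and dev.cpu.0.freq_levels exist on the node, and
queued_jobs() is a hypothetical stand-in for however the local batch
scheduler reports waiting jobs.

/*
 * Sketch only: drop to the slowest advertised CPU frequency when the
 * batch queue is empty, return to full speed when jobs appear.
 * Requires root; assumes cpufreq(4) is attached.
 */
#include <sys/types.h>
#include <sys/sysctl.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

/* Hypothetical stand-in: replace with a real query of your scheduler. */
static int
queued_jobs(void)
{
	return (0);
}

static int
set_freq(int mhz)
{
	return (sysctlbyname("dev.cpu.0.freq", NULL, NULL, &mhz,
	    sizeof(mhz)));
}

static int
lowest_level(void)
{
	char buf[1024];
	size_t len = sizeof(buf);
	int mhz = -1;

	/* freq_levels is a string like "2200/65000 1100/32000 ...". */
	if (sysctlbyname("dev.cpu.0.freq_levels", buf, &len, NULL, 0) == -1)
		return (-1);
	for (char *p = strtok(buf, " "); p != NULL; p = strtok(NULL, " "))
		mhz = atoi(p);		/* last entry is the slowest */
	return (mhz);
}

int
main(void)
{
	int full, low = lowest_level();
	size_t len = sizeof(full);

	if (low == -1 ||
	    sysctlbyname("dev.cpu.0.freq", &full, &len, NULL, 0) == -1)
		return (1);

	for (;;) {
		set_freq(queued_jobs() > 0 ? full : low);
		sleep(60);
	}
}

Something along these lines, wired into whatever the cluster already
uses to watch its queue, would let you measure the wallclock vs.
CPU-cycles trade-off Brooks describes instead of guessing at it.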