I wrote:

[stuff snipped]

> So, I'd say either reverting the patch or replacing it with the "obvious change" mentioned
> in the commit message will at least mostly fix the problem.

"mostly fix" was probably a bit optimistic. Here are my current numbers. (All cases are the
same single threaded kernel build on the same hardware; the only changes are a recent vs. a
1 year old head kernel plus what is noted.)

- 1yr old kernel, SMP,    SCHED_ULE                     94 minutes
- 1yr old kernel, no SMP, SCHED_ULE                    111 minutes
- recent kernel,  SMP,    SCHED_4BSD                   104 minutes
- recent kernel,  no SMP, SCHED_ULE                    113 minutes
- recent kernel,  SMP,    SCHED_ULE, r312426 reverted  122 minutes
- recent kernel,  SMP,    SCHED_ULE                    148 minutes

So, reverting r312426 only gets rid of about half of the degradation: it recovers
148 - 122 = 26 of the 148 - 94 = 54 minute slowdown.

One more thing I will note: system CPU time is higher for the cases with lower/better
elapsed times:

- 1yr old kernel, SMP,    SCHED_ULE  545s
- 1yr old kernel, no SMP, SCHED_ULE  293s
- recent kernel,  no SMP, SCHED_ULE  292s
- recent kernel,  SMP,    SCHED_ULE  466s

cperciva_at_ is running a highly parallelized buildworld and he sees slightly better elapsed
times and much lower system CPU for SCHED_ULE. As such, I suspect it is the single threaded,
processes-mostly-sleeping-waiting-for-I/O case that is broken. I suspect this is how many
people use NFS, since a highly parallelized make would not be a typical NFS client task,
I think?

There are other changes to sched_ule.c in the last year, but I'm not sure which would be
easy to revert and might make a difference in this case.

rick

ps: I've cc'd cperciva_at_ and he might wish to report his results. I am hoping he does try
a make without "-j" at some point.
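pps: in case anyone wants to reproduce the comparison, the test kernels only need to differ
in the scheduler/SMP options. A minimal sketch (the config names TEST_4BSD/TEST_NOSMP are
just examples, not my actual configs; GENERIC on head already has SCHED_ULE and SMP):

    # sys/amd64/conf/TEST_4BSD - same as GENERIC but with the 4BSD scheduler
    include         GENERIC
    ident           TEST_4BSD
    nooptions       SCHED_ULE
    options         SCHED_4BSD

    # sys/amd64/conf/TEST_NOSMP - same as GENERIC but uniprocessor
    include         GENERIC
    ident           TEST_NOSMP
    nooptions       SMP

Then boot the kernel under test on the NFS client and time the single threaded build over
NFS, something like:

    # -l reports elapsed, user and system time plus rusage
    /usr/bin/time -l make buildkernel KERNCONF=GENERIC

which is roughly where my elapsed and system CPU numbers above come from.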