Re: TTY task group scheduling

From: Taku YAMAMOTO <taku_at_tackymt.homeip.net>
Date: Sat, 20 Nov 2010 00:49:55 +0900
On Thu, 18 Nov 2010 21:30:16 +0100
"O. Hartmann" <ohartman_at_mail.zedat.fu-berlin.de> wrote:

> On 11/18/10 19:55, Lucius Windschuh wrote:
> > 2010/11/18 Andriy Gapon<avg_at_freebsd.org>:
> >> [Grouping of processes into TTY groups]
> >>
> >> Well, I think that those improvements apply only to a very specific usage pattern
> >> and are greatly over-hyped.
> >
> > But there are serious issues if you use FreeBSD as a desktop OS with
> > SMP and SCHED_ULE, aren't there?
> > Because currently, my machine is barely usable if a compile job with
> > parallelism is running. Movies stutter, Firefox hangs. And even nice
> > -n 20 doesn't do the job in every case, as +20 seems not to be the
> > idle priority anymore?!?
> > And using "idprio 1 $cmd" as a workaround is, well, a kludge.
> > I am not sure if TTY grouping is the right solution, if you look at
> > potentially CPU-intensive GUI applications that all run on the same
> > TTY (or no TTY at all? Same problem).
> > Maybe we could simply enhance the algorithm that decides if a task is
> > interactive? That would also improve the described situation.
> >
> > Regards,
> >
> > Lucius
> 
> Stuttering response, or being stuck for over 20 seconds, also happens when I 
> start updating the OS sources via svn. This happens on all boxes, some 
> of which have 8 cores (on two CPUs) and plenty of RAM. Heavy disk I/O, 
> no matter whether on UFS2 or ZFS, also makes the boxes stutter; those 
> phenomena are most noticeable when you interact with the machine via X11 
> clients. I think it's hard to notice if a server only does console I/O, 
> but the console also seems to get stuck sometimes. It would be worth 
> checking this with some 'benchmark'. X11, in its somewhat oldish 
> incarnation on FreeBSD, seems to contribute most to those slowdowns, 
> whatever the exact cause.

I guess schedulers can hardly distinguish heavy disk I/O from nanosleep()s
and user interaction; the scheduler treats all of them as voluntary sleep.
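
To illustrate the point, here is a standalone userland sketch of my own --
not the actual sys/kern/sched_ule.c code; the constants and rounding below
are simplifications -- of an ULE-style interactivity score.  The score only
ever sees accumulated run time and sleep time, so a thread that slept in a
disk I/O wait and one that slept waiting for a keypress are indistinguishable
to it:

/*
 * Simplified, userland-compilable sketch of an ULE-style interactivity
 * score.  The constants and formula are approximations of my own, not
 * the real sys/kern/sched_ule.c implementation.
 */
#include <stdio.h>

#define INTERACT_MAX	100
#define INTERACT_HALF	(INTERACT_MAX / 2)

/* Lower score means "more interactive". */
static int
interact_score(unsigned long runtime, unsigned long slptime)
{
	unsigned long div;

	if (slptime > runtime) {
		div = slptime / INTERACT_HALF;
		if (div < 1)
			div = 1;
		return (runtime / div);
	}
	if (runtime > slptime) {
		div = runtime / INTERACT_HALF;
		if (div < 1)
			div = 1;
		return (INTERACT_HALF + (INTERACT_HALF - slptime / div));
	}
	return (runtime != 0 ? INTERACT_HALF : 0);
}

int
main(void)
{
	/*
	 * Both hypothetical threads ran 10 ticks and slept 90 ticks; one
	 * slept in a disk I/O wait, the other waiting for a keypress.
	 * The scores are identical because the cause of the sleep is not
	 * recorded anywhere the score can see.
	 */
	printf("disk-bound thread score:     %d\n", interact_score(10, 90));
	printf("keyboard-bound thread score: %d\n", interact_score(10, 90));
	return (0);
}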

To make matters worse, the current implementation of SCHED_ULE reassigns
ts_slice in sched_wakeup() no matter how short the sleep was.

I have a dumb local hack that grants a ts_slice proportional to the duration
the waking thread slept, rather than unconditionally resetting it to sched_slice.


--- sys/kern/sched_ule.c.orig
+++ sys/kern/sched_ule.c
@@ -1928,12 +1928,16 @@ sched_wakeup(struct thread *td)
 		u_int hzticks;
 
 		hzticks = (ticks - slptick) << SCHED_TICK_SHIFT;
+		if (hzticks > SCHED_SLP_RUN_MAX)
+			hzticks = SCHED_SLP_RUN_MAX;
 		ts->ts_slptime += hzticks;
+		/* Grant additional slices after we sleep. */
+		ts->ts_slice += hzticks / tickincr;
+		if (ts->ts_slice > sched_slice)
+			ts->ts_slice = sched_slice;
 		sched_interact_update(td);
 		sched_pctcpu_update(ts);
 	}
-	/* Reset the slice value after we sleep. */
-	ts->ts_slice = sched_slice;
 	sched_add(td, SRQ_BORING);
 }
 

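For a rough feel of what the hack does (assuming the usual hz=1000 and
stathz=128, so sched_slice = realstathz / 10 = 12 and
tickincr = (hz << SCHED_TICK_SHIFT) / realstathz = 8000 -- please check
these against your own sched_initticks()): if I read the code right,
hzticks / tickincr comes out to roughly the sleep duration expressed in
stathz ticks.  A thread that napped for 50ms would get about
(50 << 10) / 8000 = 6 slice ticks added back on top of whatever it had left,
instead of being reset to a fresh full slice, while a thread that slept
roughly 100ms or longer still wakes up with the full sched_slice because of
the clamping.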

-- 
-|-__   YAMAMOTO, Taku
 | __ <     <taku_at_tackymt.homeip.net>

      - A chicken is an egg's way of producing more eggs. -