Re: ULE and current.

From: Jeff Roberson <jroberson_at_chesapeake.net>
Date: Thu, 11 Dec 2003 04:13:12 -0500 (EST)
On Thu, 11 Dec 2003, Andy Farkas wrote:

> Jeff Roberson wrote:
>
> > Andy Farkas wrote:
> >
> > > The scheduling of nice processes seems to be broken:
> >
> > This is actually a problem in the load balancer.  It's not taking nice
> > into consideration when attempting to balance the load.
> >
> > > team2# nice -7 sh -c "while :; do echo -n;done" &
> > > team2# nice -7 sh -c "while :; do echo -n;done" &
> > > team2# sleep 120; top -S
> > >
> > >   PID USERNAME   PRI NICE   SIZE    RES STATE  C   TIME   WCPU    CPU COMMAND
> > >   675 root       133   -7  1576K   952K CPU1   1   1:52 75.78% 75.78% sh
> > >   676 root       133   -7  1576K   952K RUN    1   1:39 73.44% 73.44% sh
> > >    12 root       -16    0     0K    12K RUN    0  18:46 55.47% 55.47% idle: cpu0
> > >    11 root       -16    0     0K    12K RUN    1   7:00  0.00%  0.00% idle: cpu1
>
> Just to make it clear, I was expecting the above to be something like:
>
>  sh on CPU0 using 100% cpu,
>  sh on CPU1 using 100% cpu,
>  idle: cpu0 to be 0%, and
>  idle: cpu1 to be 0%.

Yes, I agree; I'm confused about why this is happening myself.  I'll look
into it soon.
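
To be concrete about what "taking nice into consideration" would mean,
here is a throwaway user-land sketch.  The weight formula is invented for
the example and is not how sched_ule.c actually computes load; it just
shows that two queues can look identical by thread count while a
nice-weighted comparison says one of them is much busier:

#include <stdio.h>

/* Map nice (-20..20) onto a made-up weight (41..1); lower nice counts more. */
static int
nice_weight(int nice)
{
	return (41 - (nice + 20));
}

/* Sum the weights of every runnable thread on one cpu's queue. */
static int
queue_load(const int *nices, int count)
{
	int i, load = 0;

	for (i = 0; i < count; i++)
		load += nice_weight(nices[i]);
	return (load);
}

int
main(void)
{
	int cpu0[] = { 0, 20 };		/* a normal proc and a very nice one */
	int cpu1[] = { -7, -7 };	/* the two nice -7 sh loops */

	/* Counting threads, the queues look identical... */
	printf("thread counts:      cpu0=2 cpu1=2\n");
	/* ...but weighting by nice shows cpu1 carrying far more demand. */
	printf("nice-weighted load: cpu0=%d cpu1=%d\n",
	    queue_load(cpu0, 2), queue_load(cpu1, 2));
	return (0);
}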

>
> > >
> > > Adding a third nice process eliminates the idle time, but cpu% is still bad:
> > >
> > > team2# nice -7 sh -c "while :; do echo -n;done" &
> > > team2# sleep 120; top -S
> > >
> > >   PID USERNAME   PRI NICE   SIZE    RES STATE  C   TIME   WCPU    CPU COMMAND
> > >   705 root       133   -7  1576K   952K CPU0   0   1:53 100.78% 100.78% sh
> > >   675 root       133   -7  1576K   952K RUN    1  12:12 51.56% 51.56% sh
> > >   676 root       133   -7  1576K   952K RUN    1  11:30 49.22% 49.22% sh
> > >   729 root        76    0  2148K  1184K CPU1   1   0:00  0.78%  0.78% top
> > >    12 root       -16    0     0K    12K RUN    0  24:00  0.00%  0.00% idle: cpu0
> > >    11 root       -16    0     0K    12K RUN    1   7:00  0.00%  0.00% idle: cpu1
>
> And at this point I would expect something like:
>
>  sh #0 using 66.3%,
>  sh #1 using 66.3%,
>  sh #2 using 66.3%,
>  idle: cpu0 to be 0%,
>  idle: cpu1 to be 0%.

This is actually very difficult to get exactly right.  Since all three
processes want to run all the time, you have to force alternating pairs to
share the second cpu.  Otherwise they won't each run for an even amount of
time.
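
To put numbers on that, here is a toy user-land simulation (nothing lifted
from the kernel): if the same pair is always left sharing cpu1 you settle
at 100/50/50, while rotating which thread gets cpu0 to itself each
balancing interval converges on the ~66% apiece that you're expecting:

#include <stdio.h>

#define NTHREADS	3
#define INTERVALS	300	/* balancing intervals to simulate */

int
main(void)
{
	double fixed[NTHREADS] = { 0 };		/* same pair always shares cpu1 */
	double rotated[NTHREADS] = { 0 };	/* a different thread gets cpu0 each time */
	int i, t;

	for (i = 0; i < INTERVALS; i++) {
		/* Fixed: thread 0 owns cpu0, threads 1 and 2 split cpu1. */
		fixed[0] += 1.0;
		fixed[1] += 0.5;
		fixed[2] += 0.5;

		/* Rotated: whoever's turn it is runs alone, the others share. */
		for (t = 0; t < NTHREADS; t++)
			rotated[t] += (t == i % NTHREADS) ? 1.0 : 0.5;
	}

	printf("%-8s %8s %8s\n", "thread", "fixed", "rotated");
	for (t = 0; t < NTHREADS; t++)
		printf("sh #%d    %7.1f%% %7.1f%%\n", t,
		    100.0 * fixed[t] / INTERVALS,
		    100.0 * rotated[t] / INTERVALS);
	return (0);
}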

>
> > I agree that 100.78% is wrong.  Also, the long term balancer should be
> > kicking one sh process off of the doubly loaded cpu every so often.  I'll
> > look into this, thanks.
>
> Could it be that the scheduler/balancer is confused by different idle
> processes?  Why does 'systat -p' show 3 idle procs?? :
>

The vm has an idle thread that zeros pages.  This is the third thread.

>                     /0   /10  /20  /30  /40  /50  /60  /70  /80  /90  /100
> root     idle: cpu0 XXXXXXXXXXXXXXXX
> root     idle: cpu1 XXXXXXXXXXXXXXXX
>              <idle> XXXXXXXXXXXXXXXX
>
>
> So, where *I* get confused is that top(1) thinks that the system can be up
> to 200% idle, whereas systat(1) thinks there are 3 threads each consuming
> a third of 100% idleness... who is right?

Both, they just display different statistics. ;-)  top(1) measures each
thread against a single cpu, so on a two way box the idle threads can add
up to 200%, while systat(1) scales everything against the machine as a
whole, so its bars never total more than 100%.
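
A quick back-of-the-envelope check using the idle figures from the first
top listing above (55.47% and 0.00%):

#include <stdio.h>

int
main(void)
{
	double idle[] = { 55.47, 0.00 };	/* idle: cpu0 and idle: cpu1 from top */
	double top_total = 0.0, systat_total = 0.0;
	int i, ncpu = 2;

	for (i = 0; i < ncpu; i++) {
		top_total += idle[i];		/* top: each thread vs. one cpu */
		systat_total += idle[i] / ncpu;	/* systat: vs. the whole machine */
	}
	printf("top-style idle:    %.2f%% out of a possible %d%%\n",
	    top_total, ncpu * 100);
	printf("systat-style idle: %.2f%% out of a possible 100%%\n",
	    systat_total);
	return (0);
}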

>
> >
> > Cheers,
> > Jeff
> >
>
> --
>
>  :{ andyf_at_speednet.com.au
>
>         Andy Farkas
>     System Administrator
>    Speednet Communications
>  http://www.speednet.com.au/
>
>