upgrading to r230059 causes slow network throughput

From: Коньков Евгений <kes-kes_at_yandex.ru>
Date: Fri, 13 Jan 2012 21:32:14 +0200
I have tried both SCHED_ULE and SCHED_4BSD; there is no change,
throughput is low in both cases.
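
(For reference, switching between the two schedulers means rebuilding the kernel with the other option; a minimal sketch, assuming a custom config named MYKERNEL, which is a hypothetical name:)

  # sys/amd64/conf/MYKERNEL
  include         GENERIC
  ident           MYKERNEL
  nooptions       SCHED_ULE       # drop the scheduler GENERIC selects
  options         SCHED_4BSD      # build with 4BSD instead

  # rebuild and install:
  cd /usr/src
  make buildkernel installkernel KERNCONF=MYKERNEL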

CPU load: http://piccy.info/view3/2478224/e5d7f208538d05d813411c34eb493a8f/orig/
Interface load: http://piccy.info/view3/2478228/bf8dca5fad12c1436092f4d6aaf2356f/orig/

Looking at the CPU load graphs, the load distribution seems strange...


It seems ng_queue and netisr are almost never scheduled.

last pid: 54347;  load averages:  0.30,  0.28,  0.22                                   up 0+06:24:06  21:25:11
273 processes: 5 running, 237 sleeping, 31 waiting
CPU 0:  0.0% user,  0.0% nice,  3.1% system,  0.4% interrupt, 96.5% idle
CPU 1:  3.5% user,  0.0% nice,  1.2% system,  0.4% interrupt, 94.9% idle
CPU 2:  2.0% user,  0.0% nice,  1.6% system,  0.4% interrupt, 96.1% idle
CPU 3:  0.0% user,  0.0% nice,  0.8% system,  3.1% interrupt, 96.1% idle
Mem: 310M Active, 1080M Inact, 179M Wired, 112M Buf, 354M Free
Swap: 3926M Total, 3926M Free


  PID USERNAME   PRI NICE   SIZE    RES STATE   C   TIME   WCPU COMMAND
   11 root       155 ki31     0K    32K CPU3    3 359:26 94.38% idle{idle: cpu3}
   11 root       155 ki31     0K    32K CPU0    0 356:33 92.77% idle{idle: cpu0}
   11 root       155 ki31     0K    32K RUN     1 345:23 89.60% idle{idle: cpu1}
   11 root       155 ki31     0K    32K CPU2    2 340:20 85.35% idle{idle: cpu2}
 3312 root        40    0 15468K  6488K select  2  15:00  4.25% snmpd
   12 root       -60    -     0K   248K WAIT    0   7:20  1.07% intr{swi4: clock}
   12 root       -92    -     0K   248K WAIT    3  12:06  0.29% intr{irq266: re0}
    0 root       -92    0     0K   152K -       3   6:45  0.05% kernel{dummynet}
   13 root       -92    -     0K    32K sleep   1   1:39  0.00% ng_queue{ng_queue1}
   13 root       -92    -     0K    32K sleep   1   1:39  0.00% ng_queue{ng_queue3}
   13 root       -92    -     0K    32K sleep   3   1:39  0.00% ng_queue{ng_queue0}
   13 root       -92    -     0K    32K sleep   0   1:39  0.00% ng_queue{ng_queue2}
 6880 root         8    0  9592K  1300K nanslp  1   0:46  0.00% monitord
    0 root       -16    0     0K   152K sched   1   0:43  0.00% kernel{swapper}
95148 root        16    0  9756K  1452K pause   1   0:37  0.00% netstat
   12 root       -72    -     0K   248K WAIT    3   0:34  0.00% intr{swi1: netisr 3}
   15 root       -16    -     0K     8K -       3   0:28  0.00% yarrow
 1070 root        40    0 10524K  4224K select  1   0:21  0.00% zebra
 2020 root        30  -10 50664K 22824K select  3   0:20  0.00% mpd5{mpd5}
 7611 firebird    30  -10   106M 65780K usem    2   0:14  0.00% fb_smp_server{fb_smp_server}
 1766 root        40    0  9680K  1480K select  2   0:11  0.00% syslogd
 1909 bind        40    0 69244K 55360K uwait   1   0:06  0.00% named{named}
 1909 bind        40    0 69244K 55360K uwait   1   0:06  0.00% named{named}
 1909 bind        40    0 69244K 55360K uwait   1   0:06  0.00% named{named}
 1909 bind        40    0 69244K 55360K uwait   3   0:06  0.00% named{named}
    8 root        16    -     0K     8K syncer  0   0:06  0.00% syncer
 1909 bind         4    0 69244K 55360K kqread  2   0:06  0.00% named{named}
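
One way to check whether packets are actually reaching the netisr queues (just a suggestion): netstat -Q prints the netisr configuration and per-protocol workstream counters, including drops.

  # netisr dispatch/queue statistics per protocol
  netstat -Q
  # current netisr settings (maxthreads, numthreads, dispatch policy)
  sysctl net.isr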

Compared to FreeBSD 9:
10-CURRENT has only one netisr thread: {swi1: netisr 3}
9          has four:  {swi1: netisr 0} {swi1: netisr 1} {swi1: netisr 2} {swi1: netisr 3}

last pid: 40679;  load averages:  2.38,  2.39,  2.28                      up 2+05:31:50  21:23:43
294 processes: 7 running, 269 sleeping, 18 waiting
CPU 0:  1.2% user,  0.0% nice, 20.4% system, 23.9% interrupt, 54.5% idle
CPU 1:  1.2% user,  0.0% nice, 10.6% system, 29.8% interrupt, 58.4% idle
CPU 2:  0.4% user,  0.0% nice, 10.2% system, 26.7% interrupt, 62.7% idle
CPU 3:  1.2% user,  0.0% nice, 16.1% system, 22.4% interrupt, 60.4% idle
Mem: 750M Active, 2700M Inact, 307M Wired, 83M Cache, 112M Buf, 58M Free
Swap: 4096M Total, 49M Used, 4047M Free, 1% Inuse

  PID USERNAME   PRI NICE   SIZE    RES STATE   C   TIME   WCPU COMMAND
   11 root       155 ki31     0K    32K RUN     1  37.1H 59.23% {idle: cpu1}
   11 root       155 ki31     0K    32K RUN     3  37.3H 58.79% {idle: cpu3}
   11 root       155 ki31     0K    32K RUN     2  36.8H 57.62% {idle: cpu2}
   11 root       155 ki31     0K    32K CPU0    0  34.9H 51.46% {idle: cpu0}
   12 root       -72    -     0K   160K CPU2    2 778:00 39.99% {swi1: netisr 3}
   12 root       -72    -     0K   160K CPU1    1 558:58 22.56% {swi1: netisr 1}
   12 root       -92    -     0K   160K WAIT    0 424:04 16.60% {irq256: re0}
   12 root       -72    -     0K   160K WAIT    3 204:04 14.36% {swi1: netisr 0}
   12 root       -72    -     0K   160K WAIT    1 224:14  7.62% {swi1: netisr 2}
   13 root       -16    -     0K    32K sleep   0 123:28  5.37% {ng_queue0}
 6907 root        23    0 15392K  5348K select  2 123:59  5.22% snmpd
   13 root       -16    -     0K    32K sleep   0 123:32  5.18% {ng_queue3}
   13 root       -16    -     0K    32K sleep   0 123:20  5.08% {ng_queue1}
   13 root       -16    -     0K    32K sleep   0 123:20  5.03% {ng_queue2}
 3605 root        25    0 10460K  3704K select  0  22:19  2.49% zebra
16519 firebird    20  -10   251M   158M usem    1   0:49  1.32% {fb_smp_server}
 5553 root        20    0   205M   102M select  3  39:29  0.98% {mpd5}
61490 freeradius  20  -20   354M   317M usem    1   3:40  0.63% {radiusd}
61490 freeradius  20  -20   354M   317M usem    2   3:33  0.63% {radiusd}
61490 freeradius  20  -20   354M   317M usem    0   3:58  0.54% {radiusd}
61490 freeradius  20  -20   354M   317M usem    0   3:32  0.54% {radiusd}
61490 freeradius  20  -20   354M   317M usem    1   3:29  0.54% {radiusd}
61490 freeradius  20  -20   354M   317M usem    2   3:23  0.54% {radiusd}
61490 freeradius  20  -20   354M   317M usem    3   3:22  0.54% {radiusd}
61490 freeradius  20  -20   354M   317M usem    3   3:35  0.44% {radiusd}
61490 freeradius  20  -20   354M   317M usem    0   3:21  0.44% {radiusd}
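
The number of netisr threads is fixed at boot by the net.isr.maxthreads loader tunable, so it may be worth checking whether its default differs between the two kernels. A sketch, assuming one thread per core is wanted, for /boot/loader.conf:

  # /boot/loader.conf
  net.isr.maxthreads=4     # allow up to one netisr thread per CPU
  net.isr.bindthreads=1    # pin each netisr thread to its CPU

After a reboot, the read-only sysctl net.isr.numthreads reports how many threads were actually started.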



-- 
Regards,
 Коньков                          mailto:kes-kes_at_yandex.ru
