Re: CURRENT slow and shaky network stability

From: Cy Schubert <Cy.Schubert_at_komquats.com>
Date: Sat, 02 Apr 2016 15:35:40 -0700
In message <20160402105503.7ede5be1.ohartman_at_zedat.fu-berlin.de>,
"O. Hartmann" writes:
> 
> On Sat, 02 Apr 2016 01:07:55 -0700,
> Cy Schubert <Cy.Schubert_at_komquats.com> wrote:
> 
> > In message <56F6C6B0.6010103_at_protected-networks.net>, Michael Butler writes:
> > > -current is not great for interactive use at all. The strategy of
> > > pre-emptively dropping idle processes to swap is hurting .. big time.
> >
> > FreeBSD doesn't "preemptively" or arbitrarily push pages out to disk. LRU
> > doesn't do this.
> >
> > >
> > > Compare inactive memory to swap in this example ..
> > >
> > > 110 processes: 1 running, 108 sleeping, 1 zombie
> > > CPU:  1.2% user,  0.0% nice,  4.3% system,  0.0% interrupt, 94.5% idle
> > > Mem: 474M Active, 1609M Inact, 764M Wired, 281M Buf, 119M Free
> > > Swap: 4096M Total, 917M Used, 3178M Free, 22% Inuse
> >
> > To analyze this you need to capture vmstat output. You'll see the free pool
> > dip below a threshold and pages go out to disk in response. If you have
> > daemons with small working sets, pages that are not part of the working
> > sets for daemons or applications will eventually be paged out. This is not
> > a bad thing. In your example above, the 281 MB of UFS buffers are more
> > active than the 917 MB paged out. If it's paged out and never used again,
> > then it doesn't hurt. However the 281 MB of buffers saves you I/O. The
> > inactive pages are part of your free pool that were active at one time but
> > now are not. They may be reclaimed and if they are, you've just saved more
> > I/O.
> >
> > Top is a poor tool to analyze memory use. Vmstat is the better tool to help
> > understand memory use. Inactive memory isn't a bad thing per se. Monitor
> > page outs, scan rate and page reclaims.
> >
> 
> I give up! Tried to check via ssh/vmstat what is going on. Last lines before broken pipe:
> 
> [...]
> procs  memory       page                    disks     faults         cpu
> r b w  avm   fre   flt  re  pi  po    fr   sr ad0 ad1   in    sy    cs us sy id
> 22 0 22 5.8G  1.0G 46319   0   0   0 55721 1297   0   4  219 23907  5400 95  5  0
> 22 0 22 5.4G  1.3G 51733   0   0   0 72436 1162   0   0  108 40869  3459 93  7  0
> 15 0 22  12G  1.2G 54400   0  27   0 52188 1160   0  42  148 52192  4366 91  9  0
> 14 0 22  12G  1.0G 44954   0  37   0 37550 1179   0  39  141 86209  4368 88 12  0
> 26 0 22  12G  1.1G 60258   0  81   0 69459 1119   0  27  123 779569 704359 87 13  0
> 29 3 22  13G  774M 50576   0  68   0 32204 1304   0   2  102 507337 484861 93  7  0
> 27 0 22  13G  937M 47477   0  48   0 59458 1264   3   2  112 68131 44407 95  5  0
> 36 0 22  13G  829M 83164   0   2   0 82575 1225   1   0  126 99366 38060 89 11  0
> 35 0 22 6.2G  1.1G 98803   0  13   0 121375 1217   2   8  112 99371  4999 85 15  0
> 34 0 22  13G  723M 54436   0  20   0 36952 1276   0  17  153 29142  4431 95  5  0
> Fssh_packet_write_wait: Connection to 192.168.0.1 port 22: Broken pipe

How many CPUs does FreeBSD see? (CPUs meaning cores times threads, i.e. my 
dual-core Intel has two threads per core, so FreeBSD sees four CPUs.)
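
If you're not sure, sysctl will show what the kernel detected; something 
like this should do (hw.ncpu is the logical CPU count the scheduler uses):

    sysctl hw.model hw.ncpu
    sysctl kern.smp.cpus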

The load on the box shouldn't exceed about two runnable processes per CPU or 
you will notice performance issues. Ideally we look at the load average 
first. If it's high, we check CPU%. If that looks good, we look at memory 
and I/O. With the scant information at hand right now I see a possible CPU 
issue. The scan rate looks high, but there are no page-outs, so I'd consider 
it borderline.
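
To watch this over time the stock tools are enough; for example (the 5 
second interval is only a suggestion):

    uptime          # load averages vs. number of CPUs
    top -S          # per-process and system CPU usage
    vmstat 5        # watch pi/po (paging) and sr (scan rate) each interval
    vmstat -s       # cumulative paging counters since boot

If r in the vmstat output stays well above the CPU count while id sits at 
zero, the box is CPU bound rather than short on memory.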


-- 
Cheers,
Cy Schubert <Cy.Schubert_at_komquats.com> or <Cy.Schubert_at_cschubert.com>
FreeBSD UNIX:  <cy_at_FreeBSD.org>   Web:  http://www.FreeBSD.org

	The need of the many outweighs the greed of the few.