On Fri, 29 Jul 2016 12:41:17 -0700, Ngie Cooper <yaneurabeya_at_gmail.com> wrote:

> On Fri, Jul 29, 2016 at 12:03 PM, Allan Jude <allanjude_at_freebsd.org> wrote:
> > On 2016-07-29 14:04, O. Hartmann wrote:
> >>
> >> I am seeing exorbitant memory usage on FreeBSD CURRENT (FreeBSD 12.0-CURRENT #16
> >> r303470: Fri Jul 29 05:58:42 CEST 2016). Swap space gets eaten up very quickly
> >> while building world/kernel and/or ports.
> >>
> >> I see this phenomenon on different CURRENT systems with different amounts of RAM
> >> (but all ZFS!). No box has less than 8 GB of RAM: one has 8 GB, another 16, and
> >> two have 32 GB. An older Xeon Core2Duo server running postgresql 9.5/postgis on
> >> some OSM data eats up all of its 32 GB plus an additional 48 GB of swap - never
> >> seen before with 11-CURRENT.
> >>
> >> I haven't investigated the problem further so far, since I only noticed this
> >> memory hunger of 12-CURRENT today on several boxes compiling world: they eat up
> >> all the memory, start swapping, and never release the swapped memory, even after
> >> hours.
> >>
> >> Is this a known phenomenon, or am I seeing something mysterious?
> >>
> >> Regards,
> >>
> >> Oliver
> >>
> >
> > Do you have the output of 'top', the first few lines?
> >
> > Specifically, is there very high 'Other' usage on the ZFS ARC line?
>
> `vmstat -Hm | sort -rnk 2,3 | head -n 10` might be helpful if the
> memory used is in kernel space.
> Thanks,
> -Ngie
> _______________________________________________
> freebsd-current_at_freebsd.org mailing list
> https://lists.freebsd.org/mailman/listinfo/freebsd-current
> To unsubscribe, send any mail to "freebsd-current-unsubscribe_at_freebsd.org"

This is after starting VBox with a Win 7 Pro guest (just started, no login), 3572 MB
of memory reserved, and 4 logical CPUs (VBox 5.0.26):

root_at_localhost: [ports] vmstat -Hm | sort -rnk 2,3 | head -n 10
     solaris 53030 62088K       -  23000705  16,32,64,128,256,512,1024,2048,4096,8192,32768
      devbuf 20600 39751K       -     21380  16,32,64,128,256,512,1024,2048,4096,8192,16384,65536
    iprtheap  9335 16498K       -     12303  32,64,128,256,512,1024,2048,4096,8192,16384,32768,65536
      nvidia  8162 21261K       -    549305  16,32,64,128,256,512,1024,2048,4096,8192,16384,32768,65536
   sysctloid  6004   309K       -      6125  16,32,64,128
      acpica  5605   574K       -     65245  16,32,64,128,256,512,1024,2048,4096
        umtx  1728   216K       -      1728  128
 ufs_dirhash  1543   678K       -      7175  16,32,64,128,256,512,1024,2048
         pmc  1066  6679K       -      1066  16,32,128,256,512,1024,4096,8192,65536
     kdtrace   950   218K       -    113551  64,256

And the top output is:

last pid: 12145;  load averages:  0.65,  0.45,  0.45    up 0+01:35:15  21:51:10
72 processes:  1 running, 71 sleeping
CPU:  1.4% user,  0.0% nice, 20.5% system,  0.2% interrupt, 77.9% idle
Mem: 21M Active, 293M Inact, 7429M Wired, 775M Buf, 85M Free
ARC: 1682M Total, 363M MFU, 1077M MRU, 5536K Anon, 20M Header, 216M Other
Swap: 64G Total, 400M Used, 64G Free

  PID USERNAME    THR PRI NICE   SIZE    RES STATE   C   TIME    WCPU COMMAND
12077 ohartmann    25  20    0  4319M  3911M select  3   2:07  77.87% VirtualBox
 1002 root          1  23    0 12475M 31616K select  3   1:18   6.02% Xorg
 1027 ohartmann     1  25    0   125M  9440K select  0   0:15   1.35% wmaker
  514 root          1  20    0 12748K  1916K select  0   0:26   1.28% moused
11980 ohartmann    45  20    0   826M   248M select  0   0:23   0.13% firefox
 1645 root          1  20    0 22260K  2640K CPU2    2   0:02   0.10% top
 1634 ohartmann     1  20    0 76020K  4180K select  0   0:01   0.09% xterm
 1032 ohartmann     1  20    0 86260K  5852K select  3   0:06   0.06% xterm
12009 ohartmann     4  20    0   329M 35496K select  0   0:01   0.03% VirtualBox
12014 ohartmann    12  20    0   125M  7624K select  1   0:01   0.02% VBoxSVC
  403 root          1  20    0  9588K   556K select  0   0:00   0.01% devd
12012 ohartmann     1  20    0 90036K  5164K select  2   0:00   0.01% VBoxXPCOMIPCD
  563 root          1  20    0 12608K  1916K select  2   0:01   0.00% syslogd
  793 root          1  20    0 22764K 12632K select  0   0:00   0.00% ntpd
  820 root          1  20    0 43744K  2228K select  1   0:00   0.00% saned
 1026 ohartmann     1  20    0 33592K  3044K select  3   0:00   0.00% gpg-agent
  721 root          1  20    0   268M  1768K select  2   0:00   0.00% rpc.statd
  930 root          4  52    0  8364K  1852K rpcsvc  2   0:00   0.00% nfscbd
[...]
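To see whether the growing Wired figure is really ARC or something else, one minimal
sketch would be to sample the relevant counters in a spare terminal while a build is
running. This is only an illustration, assuming a ZFS system exposing the stock sysctl
OIDs named below (kstat.zfs.misc.arcstats.size, vfs.zfs.arc_max, vm.stats.vm.*); the
10-second interval is arbitrary:

    # Sample ARC size vs. wired/free memory every 10 seconds (sketch only).
    # Note: v_wire_count/v_free_count are in pages (hw.pagesize bytes each).
    while :; do
        date
        sysctl kstat.zfs.misc.arcstats.size vfs.zfs.arc_max
        sysctl vm.stats.vm.v_wire_count vm.stats.vm.v_free_count
        swapinfo -h
        echo
        sleep 10
    done

If ARC stays roughly flat while Wired keeps climbing, comparing vmstat -m (kernel
malloc types) and vmstat -z (UMA zones) snapshots taken at the same points should
help narrow down where the kernel memory is going.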