> In message <20200125233116.GA49916_at_troutmask.apl.washington.edu>, Steve
> Kargl writes:
> > On Sat, Jan 25, 2020 at 02:09:29PM -0800, Cy Schubert wrote:
> > > On January 25, 2020 1:52:03 PM PST, Steve Kargl <sgk_at_troutmask.apl.washington.edu> wrote:
> > > > On Sat, Jan 25, 2020 at 01:41:16PM -0800, Cy Schubert wrote:
> > > > >
> > > > > It's not just poudriere. Standard port builds of chromium, rust,
> > > > > and thunderbird also fail on my machines with less than 8 GB.
> > > > >
> > > >
> > > > Interesting. I routinely build chromium, rust, firefox,
> > > > llvm, and a few other resource-hungry ports on an i386-freebsd
> > > > laptop with 3.4 GB of available memory. This is done with
> > > > chrome running with a few tabs swallowing 1-1.5 GB of
> > > > memory. No issues.
> > >
> > > Number of threads makes a difference too. How many cores/threads
> > > does your laptop have?
> >
> > 2 cores.
>
> This is why.
>
> > > Reducing the number of concurrent threads allowed my builds to
> > > complete on the 5 GB machine. My build machines have 4 cores,
> > > 1 thread per core. Reducing concurrent threads circumvented the
> > > issue.
> >
> > I use portmaster, and AFAICT, it uses 'make -j 2' for the build.
> > The laptop isn't doing much besides an update and browsing. It does
> > take a long time, especially if building llvm is required.
>
> I use portmaster as well (for quick incidental builds). It uses
> MAKE_JOBS_NUMBER=4 (which is equivalent to make -j 4). I suppose
> machines without enough memory to support their cores during certain
> builds are more likely to hit this problem.
>
> Using MAKE_JOBS_NUMBER_LIMIT to limit a 4-core machine with less than
> 2 GB per core might be an option. Looking at it this way, instead of an
> extra 3 GB, the extra 60% more memory in the other machine makes a big
> difference. A rule of thumb would probably be: have ~2 GB of RAM for
> every core or thread when doing large parallel builds.

Perhaps we need to redo some boot-time calculations. For one, sizing the
ZFS ARC at a fixed percent of total memory is, IMHO, just silly. A high
percentage at that. One idea based on what you just said might be:

	percore_memory_reserve = 2G (your number; I personally would use 1G here)
	arc_max = MAX(memory_size - (cores * percore_memory_reserve), 512MB)

I think that simple change would go a long way toward cutting down the
number of OOM reports we see. Also, IMHO, there should be a way for
subsystems to easily tell ZFS that they are memory pigs too and need to
share the space. I.e., bhyve is horrible if you do not tune the ZFS ARC
based on how much memory you are using up for VMs.

Another formulation might be:

	percore_memory_reserve = alpha * memory_size / cores

with alpha most likely falling in the 0.25 to 0.5 range. I think this
one would have better scalability; I would need to run some numbers. It
probably needs to become non-linear above some core count.

> Cy Schubert <Cy.Schubert_at_cschubert.com>

-- 
Rod Grimes                                        rgrimes_at_freebsd.org
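To make the job-capping suggestion above concrete: a single knob in
/etc/make.conf limits parallelism for all port builds. A minimal sketch,
with the value 2 purely illustrative (pick roughly RAM divided by 2 GB,
per the rule of thumb above):

	# /etc/make.conf
	# Cap the number of parallel make jobs for every port build,
	# regardless of core count. The "2" here is illustrative for a
	# 4-core machine with ~5 GB of RAM.
	MAKE_JOBS_NUMBER_LIMIT=2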
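The two ARC sizing rules proposed above could be expressed roughly as
follows. This is a minimal sketch in C, not actual FreeBSD/OpenZFS code;
all names (arc_max_fixed, arc_max_alpha, the explicit underflow guard)
are hypothetical:

	/*
	 * Sketch of the two proposed boot-time ARC sizing rules.
	 * Illustrative only; not actual OpenZFS code.
	 */
	#include <stdint.h>
	#include <stdio.h>

	#define GB	(1024ULL * 1024 * 1024)
	#define MB	(1024ULL * 1024)

	/* Rule 1: fixed per-core reserve (2 GB rule of thumb, or 1 GB). */
	static uint64_t
	arc_max_fixed(uint64_t memory_size, unsigned cores,
	    uint64_t percore_reserve)
	{
		uint64_t reserve = (uint64_t)cores * percore_reserve;

		/*
		 * Guard against unsigned underflow before applying the
		 * 512 MB floor from the MAX() in the formula above.
		 */
		if (memory_size <= reserve + 512 * MB)
			return (512 * MB);
		return (memory_size - reserve);
	}

	/* Rule 2: reserve scales with memory, split across cores. */
	static uint64_t
	arc_max_alpha(uint64_t memory_size, unsigned cores, double alpha)
	{
		/*
		 * percore_memory_reserve = alpha * memory_size / cores,
		 * so the total reserve (cores * per-core reserve) is
		 * alpha * memory_size regardless of core count.
		 */
		uint64_t reserve = (uint64_t)(alpha * (double)memory_size);

		if (memory_size <= reserve + 512 * MB)
			return (512 * MB);
		return (memory_size - reserve);
	}

	int
	main(void)
	{
		/* Worked example: 16 GB of RAM, 4 cores. */
		printf("fixed 2G/core: %ju MB\n",
		    (uintmax_t)(arc_max_fixed(16 * GB, 4, 2 * GB) / MB));
		/* prints 8192 (16 GB - 4 * 2 GB) */
		printf("alpha = 0.25:  %ju MB\n",
		    (uintmax_t)(arc_max_alpha(16 * GB, 4, 0.25) / MB));
		/* prints 12288 (16 GB - 0.25 * 16 GB) */
		return (0);
	}

Note that in the alpha formulation the core count cancels out of the
total reserve, which is presumably where the suggested non-linear term
above some core count would come in.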