When I discovered that you were building debug versions of devel/llvm60, I realized that my Pine64+ 2GB specifics do not make a useful comparison (they do not show how small an environment can be for such a build).

The last time I tried such a debug build was on a powerpc64 with 16 GiBytes of RAM, and it required a swap space of 10 GiBytes (9 GiBytes was insufficient). That was devel/llvm40, though, the machine was not in use for anything else significant, and it was using UFS, unlike your more involved context (ZFS, large applications running).

For your usage, it is not clear to me that 32 GiBytes is all that much RAM, nor that 32 GiBytes of RAM plus 2 GiBytes of swap is all that big a total. It seemed odd to me that your swap was so small relative to the RAM, given what you described as running in your environment.

If it turns out to be handy, the stress test (scaled to 32 GiBytes) should show how your context handles low free RAM over extended periods, without the complications or delays of a build being involved. (The Mark Johnston patches expose more than just any "was killed" notice.) My expectation is that the stress test would, over time, show OOM kills even if you expanded the swap space enough that it clearly had notable space unused at the time.

It may be that setting vm.pageout_oom_seq large enough would help your builds complete, by tolerating low free RAM for a longer time.

Something I only just learned is that lld is multi-threaded by default. I have been told that this mode of operation uses more RAM, but I have no first-hand knowledge of the tradeoffs. They might be bigger for debug builds than for non-debug builds. A quick experiment showed lld using 5 threads or so unless I used -Wl,--no-threads . (I was using the cc/c++ command interface.)

===
Mark Millard
marklmi at yahoo.com
( dsl-only.net went away in early 2018-Mar)

Received on Tue Aug 21 2018 - 04:03:34 UTC
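[Archive note: as a rough sketch of the two knobs discussed in the message above (the vm.pageout_oom_seq sysctl and lld's thread control), something like the following could be tried on a FreeBSD system. The value 120 is an illustrative choice, not a recommendation, and this is a system-configuration fragment, not a tested recipe.]

```shell
# Inspect the current OOM tolerance; larger values make the pageout
# daemon wait through more passes of low free RAM before OOM-killing.
sysctl vm.pageout_oom_seq

# Raise it for the duration of a big build (needs root); it can also
# be set in /etc/sysctl.conf to persist across reboots.
# 120 is only an example value.
sysctl vm.pageout_oom_seq=120

# Link with lld forced single-threaded to reduce peak RAM use during
# linking, using the cc command interface.  --no-threads is the
# spelling for lld of that era; newer lld versions use --threads=1.
cc -fuse-ld=lld -Wl,--no-threads -o prog prog.c
```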
This archive was generated by hypermail 2.4.0 : Wed May 19 2021 - 11:41:17 UTC