On Fri, Jun 15, 2018 at 04:40:22AM -0400, Mark Johnston wrote:
> On Fri, Jun 15, 2018 at 01:10:25PM +0800, Kevin Lo wrote:
> > On Tue, Jun 05, 2018 at 05:48:08PM -0400, Mark Johnston wrote:
> > > On Wed, Jun 06, 2018 at 12:22:08AM +0300, Lev Serebryakov wrote:
> > > > On 05.06.2018 19:17, Gary Jennejohn wrote:
> > > > >
> > > > > I complained about this also and alc_at_ gave me this hint:
> > > > > sysctl vm.pageout_update_period=0
> > > >
> > > > Really, the situation is worse than stated in the subject, because
> > > > processes are being killed AFTER the memory pressure, when there is
> > > > already a lot of free memory!
> > > >
> > > > It looks like a very serious bug.
> > >
> > > The issue was identified earlier this week and is being worked on. It's
> > > a regression from r329882 which appears only on certain hardware. You
> > > can probably work around it by setting vm.pageout_oom_seq to a large
> > > value (try 1000 for instance), though this will make the "true" OOM
> > > killer take longer to kick in. The problem is unrelated to the
> > > pageout_update_period.
> >
> > I have a large swap space and I've encountered this issue as well:
> >
> > pid 90707 (getty), uid 0, was killed: out of swap space
> > pid 90709 (getty), uid 0, was killed: out of swap space
> > pid 90709 (getty), uid 0, was killed: out of swap space
> > ...
> >
> > Setting vm.pageout_oom_seq to 1000 doesn't help. If you have a patch
> > I'll be happy to test it, thanks.
>
> The change was committed as r334752. Are you seeing unexpected OOM
> kills on or after that revision?

The box is running -CURRENT r334983. I'll investigate further, thanks.
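
For anyone hitting this on an affected revision, here is a minimal sketch of
the vm.pageout_oom_seq workaround discussed above, assuming a FreeBSD -CURRENT
system and root privileges. The value 1000 is simply the figure suggested in
the thread, not a tuned recommendation, and as Mark notes it only makes the
OOM killer slower to kick in; the underlying regression was fixed in r334752.

    # Show the current value of vm.pageout_oom_seq, which controls how many
    # unsuccessful page daemon passes are tolerated before the OOM killer runs.
    sysctl vm.pageout_oom_seq

    # Raise it at runtime so the page daemon retries longer before the
    # OOM killer starts terminating processes.
    sysctl vm.pageout_oom_seq=1000

    # Persist the setting across reboots via /etc/sysctl.conf.
    echo 'vm.pageout_oom_seq=1000' >> /etc/sysctl.conf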