On Fri, May 22, 2015 at 01:43:21PM -0400, Nikolai Lifanov wrote:
> On 05/22/15 13:27, Mateusz Guzik wrote:
> > On Fri, May 22, 2015 at 12:32:52PM -0400, Allan Jude wrote:
> >> There is some question about whether nargs is a sane value for
> >> maxprocs in the negative case. 5000 does seem a bit high, and the
> >> behaviour can get wonky depending on the order in which you specify
> >> -P and -n together on the command line.
> >>
> >> Any suggestions?
> >>
> >
> > GNU xargs imposes no limit whatsoever, but it also supports
> > reallocating its process table, while our xargs allocates one upfront
> > and does not change it.
> >
> > I would say reading the hard proc resource limit and using that as
> > the limit would do the job just fine.
> >
>
> GNU xargs uses INT_MAX for this limit. Our xargs performs much worse
> with it, for a reason I haven't investigated. The 5000 number doesn't
> seem high, and I have workflows that do '.... | xargs -n1 -P0 ...',
> spawning about this many jobs.
>

Strictly speaking, INT_MAX is indeed the upper limit, but the number is
so big that it is not a limit in practice, nor is it going to be in the
foreseeable future.

As noted earlier, our xargs allocates the table upfront, which with an
INT_MAX limit means several MBs allocated for no good reason.

For all practical purposes, grabbing the hard limit for processes and
capping it with pid_max will have the end result of xargs not limiting
the number of processes.

-- 
Mateusz Guzik <mjguzik gmail.com>

Received on Fri May 22 2015 - 16:05:58 UTC