On Thu, May 17, 2012 at 02:13:40PM +0200, Dimitry Andric wrote:
> On 2012-05-17 05:18, b. f. wrote:
> ...
> > The slowdown is probably due - at least in part - to two factors:
> > - the list of files to be checked for removal has grown substantially,
> >   because missing entries for old knobs and new entries for new knobs
> >   have been added; and
> > - a new (and slower) method of checking was added in:
> >   http://svnweb.FreeBSD.org/base?view=revision&revision=220255
> >   because the old method broke down with the size of the new lists
> >   of files.

> Hm, maybe it would have been better to fix make, so it can accept
> arbitrarily long lists, without segfaulting?  There's such a thing as
> malloc(), and I simply don't believe any of those lists could be
> larger than a few hundred kilobytes.

Alternatively, make could be fixed so that the original code works.
Although an invocation like

  sh -c 'for file in VERY_LONG_LIST; do something; done'

will bump into {ARG_MAX}, the shell itself has no such fixed limit, so
longer command lines can be written to a temporary file and passed to
sh that way.  In some cases (such as with -j), make always uses a
temporary file, which slows things down and obscures ps output.

At the cost of keeping the temporary file around a bit longer, it is
better to pass its pathname to sh than to feed the script to sh on
standard input: this avoids interfering with terminal input and is
potentially faster.

The code currently in Makefile.inc1 can probably be sped up by passing
the output of the make -V command to something like

  xargs sh -c 'for file do rm -i "${DESTDIR}/${file}"; done' sh

instead of the current "xargs -n1 | while read file; do ...; done"
loop.  (Note the second "sh" at the end: it serves as the value of $0,
so that all strings supplied by xargs become positional parameters.)

-- 
Jilles Tjoelker
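
For concreteness, here is a minimal runnable sketch of that xargs
construction.  The DESTDIR value and the two pathnames are invented
for illustration, a printf stands in for the output of the make -V
command, and echo replaces rm -i so that running the sketch deletes
nothing:

  #!/bin/sh
  # Pass a long file list to a shell loop via xargs instead of
  # building one huge command line that would exceed {ARG_MAX}.
  DESTDIR=/tmp/destdir
  export DESTDIR   # the inner sh expands ${DESTDIR} from the environment

  # Stand-in for the make -V output: one pathname per line.
  printf '%s\n' usr/bin/obsolete usr/share/man/man1/obsolete.1.gz |
  xargs sh -c 'for file do echo rm "${DESTDIR}/${file}"; done' sh
  # The trailing "sh" becomes $0 in the inner shell, so every string
  # xargs appends becomes a positional parameter, and "for file do"
  # (with no "in" list) iterates over exactly those parameters.

One caveat with a genuinely interactive command such as rm -i: the
inner sh inherits its standard input from the pipeline, not from the
terminal, so the confirmation prompts could not be answered there.
FreeBSD's xargs(1) has an -o flag for this case, which reopens
/dev/tty as the child's standard input before running the command.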