Unfortunate dynamic linking for everything

From: <dyson@iquest.net>
Date: Tue, 18 Nov 2003 08:07:56 -0500 (EST)
Guys,
	Please revisit the decision to link everything dynamically.
	For programs like shells, the cost of using shared libs is
	actually higher than linking statically (in both memory and
	time.)  It appears that understanding of the VM system has
	been lost over time.  Don't assume that the advantages of
	using shared libs on X Windows (and environments like that)
	carry over to programs with a normal layout, where those
	advantages are non-existent (shells are perhaps the worst
	general case, but not necessarily the worst in all ways.)

	Another issue:  if you keep the dynamic linking mechanisms
	for the special shared libs that need them, but otherwise
	link statically, much of the high overhead of
	shared-libs-for-everything is still avoided.  Just because
	there might be a need for a special shared lib, that doesn't
	justify using shared libs for everything (and accepting the
	cost of sparse memory allocation and significantly higher
	fork/exec times than necessary.)  For a 'fun time', take a
	look at the process map in the deprecated /proc filesystem.
	Compare the size (complexity) of a shared program's map with
	a static program's...  It doesn't show all of the internal
	differences, but it is an externally accessible (and
	benchmark-free) exemplar.
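
	As a minimal sketch of that comparison (assuming a
	FreeBSD-style procfs mounted on /proc that exposes a
	per-process 'map' node; check procfs(5) on your system for
	the exact layout), a tiny program can dump its own VM map:

	/* map_dump.c: print this process's own VM map via procfs. */
	#include <stdio.h>

	int
	main(void)
	{
		FILE *fp;
		int c;

		fp = fopen("/proc/curproc/map", "r");
		if (fp == NULL) {
			perror("fopen /proc/curproc/map");
			return (1);
		}
		while ((c = getc(fp)) != EOF)
			putchar(c);
		fclose(fp);
		return (0);
	}

	Build it once with 'cc -o map_dyn map_dump.c' and once with
	'cc -static -o map_sta map_dump.c', then run both; the
	dynamic version's map should show extra entries for the
	runtime linker and every mapped shared lib.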

	The only real argument for building every program dynamic,
	just to gain the advantage of one specific shared object,
	is an all-or-nothing argument (a typical fallacy in most
	religious discussions.)

	It really doesn't make sense to arbitrarily cut off a
	discussion, especially when the decision might be incorrect.
	Perhaps the all-dynamic scheme was decided upon so as to
	give a competitive performance advantage to those who
	rebuild everything (where appropriate) static? :-).

	If no increase in cost was noticed when using all shared
	libs, then the measurements were done incorrectly.  If the
	decision was made while accepting at least a 1.5X increase
	in fork/exec times, and larger memory usage (due to sparse
	allocations), then it would be best to admit that
	performance isn't as important as it used to be (which it
	just might not be anymore.)  Last time that I heard, disk
	space is still much, much less expensive than main memory
	:-).
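
	Anyone who wants to check that fork/exec number can do so
	directly; here is a rough sketch (the iteration count is
	arbitrary, and /usr/bin/true is just a convenient no-op
	target; substitute statically and dynamically linked builds
	of the same trivial program to see the difference at issue):

	/* exec_bench.c: time N fork/exec/wait cycles. */
	#include <sys/types.h>
	#include <sys/time.h>
	#include <sys/wait.h>
	#include <stdio.h>
	#include <unistd.h>

	int
	main(void)
	{
		struct timeval t0, t1;
		pid_t pid;
		int i, n = 1000;

		gettimeofday(&t0, NULL);
		for (i = 0; i < n; i++) {
			pid = fork();
			if (pid == 0) {
				execl("/usr/bin/true", "true", (char *)NULL);
				_exit(127);	/* exec failed */
			}
			waitpid(pid, NULL, 0);
		}
		gettimeofday(&t1, NULL);
		printf("%.1f us per fork/exec\n",
		    ((t1.tv_sec - t0.tv_sec) * 1e6 +
		     (t1.tv_usec - t0.tv_usec)) / n);
		return (0);
	}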

	Remember:  the cost of disk space is nil nowadays; it only
	seems scarce because of an insanely small default root
	filesystem size allocation.  (An 'insanely' small root
	filesystem is okay when that root is a mini-root recovery
	system, but on a non-specialty system the cost of an extra
	500MB is nil compared to other resources.)

	I do use an all-dynamic configuration for certain embedded
	applications (but those are cases with no separate
	filesystems, where memory usage isn't quite as important
	because having features AVAILABLE matters more than running
	lots of programs concurrently.)  For the best sharing and
	quickest system response (both in memory and in raw
	program/image invocation times), at least make the shells
	static.

	(Sorry for any misspellings or grammar errors -- it is early
	in the morning...  I'll probably not participate in any
	further discussion on this matter either, but it would be
	good to generally avoid losing 'ancient', but still
	'accurate', technical history.)

	John
Received on Tue Nov 18 2003 - 04:07:58 UTC
