Re: Unfortunate dynamic linking for everything

From: Robert Watson <rwatson_at_FreeBSD.org>
Date: Tue, 18 Nov 2003 21:53:20 -0500 (EST)
On Tue, 18 Nov 2003 dyson_at_iquest.net wrote:

> There might be a certain 'coolness' WRT dynamically linking everything,
> but the overhead is certainly measurable.  If the object is to maximally
> 'share', then for shells the FreeBSD VM shares maximally without using
> shared libs (perhaps there is a lost know-how about how aggressively the
> FreeBSD VM implements parent/child and exec based sharing?)
> 
> I'll try to put together a few simple benchmarks -- mostly from my
> defective memory.  I had recently refreshed myself on this issue (but
> lost again due to disk hardware disasters and being clobbered by vinum
> problems -- which I have given up on.)  Gimme a day or so -- I have
> several other things in my queue. 

I guess one of the key observations to make here is that the vast majority
of applications in FreeBSD have been dynamically linked for years.  The
only thing that has changed recently is a few binaries in /bin and /sbin. 
Of these binaries, the vast majority are never run on most systems, are
run once during boot, or are run extremely infrequently in response to an
administrative action, where minor differences in latency will be
unnoticeable.  This includes applications like ping, mount_*, fsirand,
newfs, swapon, kldload, chflags, rcorder, quotacheck, etc.  The
"once during boot" case is interesting in the aggregate, but most of the
binaries in question should probably have been linked dynamically long ago
simply because there's no real benefit to making them statically linked.
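
As a quick sanity check on how many binaries we're actually talking
about, something like the following should give the static vs. dynamic
counts on a given system; file(1) reports "statically linked" or
"dynamically linked" for ELF executables:

    # rough count of static vs. dynamic binaries in /bin and /sbin
    file /bin/* /sbin/* | grep -c 'statically linked'
    file /bin/* /sbin/* | grep -c 'dynamically linked'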

So I think this leaves three interesting cases: 

(1) Shells, which are run for extended periods of time, and which are
    often run in large numbers (proportional to #users logged in, or
    #windows open).  I'm not too interested in this argument simply
    because the applications most people are interested in running on
    FreeBSD are already dynamically linked: Apache, databases, perl,
    XWindows, KDE, ...  The vast majority of long-lived processes are
    already dynamically linked.

(2) Shells again, because they will be fork()d and exec()d frequently
    during heavily scripted activities, such as system boot, periodic
    events, large make jobs, etc.  Presumably the only shell of interest
    is sh, although some of the supporting non-builtin binaries may also
    be of interest.  A rough timing loop for this case is sketched after
    this list.

(3) Other binaries, such as mount_*, which aren't run very often, but
    which, when they are run, are run in aggregate with many other
    similar binaries, such as during system boot.  The cost of one binary
    going dynamic is presumably extremely small, but if many binaries
    suddenly go dynamic, the aggregate cost is presumably much more
    noticeable.
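
A crude way to get a feel for (2) and (3) is to time a tight loop that
fork()s and exec()s a trivial shell command once per iteration, comparing
a statically linked and a dynamically linked copy of sh.  Something along
these lines should do; this is just a sketch off the top of my head, and
the /tmp paths are placeholders for wherever you stash the two copies:

    # assumes two copies of the shell have been prepared beforehand:
    #   /tmp/sh.static   (statically linked)
    #   /tmp/sh.dynamic  (dynamically linked)
    for shell in /tmp/sh.static /tmp/sh.dynamic; do
        echo "==> $shell"
        /usr/bin/time sh -c "i=0; while [ \$i -lt 1000 ]; do
            $shell -c :; i=\$((i+1)); done"
    done

Running the same sort of loop over a representative set of the mount_*
and other boot-time binaries would give a similarly rough idea of the
aggregate cost in (3).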

Some macrobenchmark results that would probably be interesting to see
(some of which I know have already been looked at as part of this
discussion): 

- With a move to rcNG (/etc/rc.d), our boot process is now probably quite
  a bit more sensitive to the net performance change from dynamic linking. 
  Boot time would be one interesting benchmark to run (and I understand
  it has already been looked at by those exploring dynamic linking). 

- The impact on large multi-part build tasks, such as buildworld, is also
  interesting.  Turning on process accounting shows that 'sh' is run by
  make(1) once for each task it spawns off during the build.  A
  macrobenchmark would help show whether the net impact there is positive
  or negative; rough recipes for this and for the boot-time measurement
  are sketched below.
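
For anyone wanting to take actual measurements, a couple of crude
starting points (hand-waving only; the exact paths and workloads here are
examples, not a definitive methodology):

    # (a) crude boot timing on a test box: drop timestamp lines near the
    #     top and bottom of /etc/rc and diff them after a reboot, e.g.:
    #         date '+%s' > /var/tmp/rc.start
    #         date '+%s' > /var/tmp/rc.end
    #
    # (b) count and time sh invocations during a buildworld using
    #     process accounting:
    cd /usr/src
    touch /var/account/acct
    accton /var/account/acct
    /usr/bin/time sh -c 'make buildworld > /dev/null 2>&1'
    accton
    lastcomm -f /var/account/acct sh | wc -l

Repeating (b) with a dynamically and then a statically linked /bin/sh
installed should make any net difference in buildworld time reasonably
visible.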

Robert N M Watson             FreeBSD Core Team, TrustedBSD Projects
robert_at_fledge.watson.org      Network Associates Laboratories