Re: sbrk(2) broken

From: Andrew Reilly <andrew-freebsd_at_areilly.bpc-users.org>
Date: Tue, 8 Jan 2008 11:28:12 +1100
On Tue, 08 Jan 2008 00:17:04 +0000
"Poul-Henning Kamp" <phk_at_phk.freebsd.dk> wrote:

> For performance reasons, malloc(3) will hold on to a number of pages
> that theoretically could be given back to the kernel, simply because
> it expects to need them shortly.

Aah, OK, so there's some essentially system-level caching going
on behind the scenes, and that's readily malleable for this sort
of thing.  I thought that you were proposing some way to
propagate the "yellow" or "red" conditions to user-program
activity through malloc, which seems hard, since the only
official out-of-band signal there is a zero return.

I'll have to track down your papers, though, because I thought
that the whole problem revolved around the fact that malloc(3)
doesn't hand out physical pages at all: that was left up to the
kernel vm pager to do as needed.  Is it zeroed (and therefore
touched/present) pages that malloc keeps a stash of?

> Such parameters and many others of the malloc implementation can
> be tweaked to "waste" more or less memory, in response to a sensibly
> granular indication from the kernel about how bad things are.
> 
> Also, many subsystems in the kernel could adjust their memory use
> in response to a "memory pressure" indication: if memory is tight,
> we could cache vnodes and inodes less aggressively, and if things
> are going truly bad, we can even ditch all non-active entries from
> these caches.

I agree.  That sort of auto-tuning of the space/speed trade-off
would be extremely cool.

> If one implements this with three states:
> 
> Green - "all clear"
> 
> Yellow - "tight" - free one before you allocate one if you can.
> 
> Red - "all out" - free all that you sensibly can.

I imagine that even if the accounting can be managed efficiently,
choosing the specific thresholds would be fairly tricky...

Cheers,

-- 
Andrew
Received on Mon Jan 07 2008 - 23:28:58 UTC
