Re: mdconfig unable to allocate memory

From: Bruce Evans <bde@zeta.org.au>
Date: Sat, 28 Feb 2004 23:22:15 +1100 (EST)
On Sat, 28 Feb 2004, Chris BeHanna wrote:

> On Friday 27 February 2004 09:17, Chris Elsworth wrote:
> > Right, got it :) Having the machine crash is no problem. It's sat in
> > my lounge, so I can tweak and reboot and restore from kernel.generic
> > to my heart's content. This seems to be a problem that's been gone over
> > and over and over again, but never with any clear answer according to
> > Google. Given a FreeBSD machine with 4GB in it, be it FreeBSD 4 or 5, how can
> > one ensure that all the memory is being used for suitable file caching
> > if it's running MySQL?

Do nothing, except possibly for limiting excessive use of memory for
other things (e.g., for vnodes, mbufs, and bloatware).  Almost all
free memory is used for file caching by default.  It shows up as "inact"
memory in systat -v.  A typical maximum is about 800MB out of 1GB.  I
don't know any way to determine the amount of "inact" memory that is
used for file caching.  The amount shown as "buf" is not useful except
possibly to diagnose certain thrashing (see below).
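
For example, the page queue counters can be read directly (sysctl names
as in 5.x; counts are in pages, normally 4K each on i386):

    sysctl vm.stats.vm.v_inactive_count vm.stats.vm.v_cache_count
    systat -vmstat 1     # live view; the same memory shows up as "inact"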

> > I was aiming at raising vfs.maxbufspace to be
> > 2GB or so.  Since that was just resulting in kernel panics pretty much
> > whatever I tried, I was giving mdconfig a go.

Changing this has very little effect.  I used to set it (actually nbuf
and BKVASIZE) large, but recently did fairly extensive benchmarks which
showed that the default on a system with 1GB is large enough.  Setting it
larger than necessary just wastes memory that could be better used for
file caching.
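
If you want to experiment with it anyway, nbuf can be set from the
loader, while BKVASIZE is a kernel config option and needs a recompile.
The values here are only illustrations, not recommendations:

    # /boot/loader.conf
    kern.nbuf="16384"              # number of buffer headers

    # kernel config file
    options BKVASIZE=32768         # buffer kva per buffer (default 16384)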

>     There may be *some* tuning to do, but honestly, this is what the
> buffer cache was designed to do, *automatically*.  Data that have been
> fetched from disk are kept on one of several LRU lists, which are in
> turn cross-linked into chains hanging from the buffer hash table for
> quick lookup.  Once your working set is cached, the only things that
> can cause it to be flushed are:
>
>     1) The periodic (every 30 seconds by default) syncing out of dirty
>        buffers (buffers that you have changed since reading them in)
>        to storage (this does not cause the buffer to be thrown away,
>        but it does allow it to be marked "clean"), and
>
>     2) Memory pressure from other applications.
>
>     Chapter 6 in _The Design and Implementation of the 4.4 BSD
> Operating System_ (available from most well-stocked major booksellers,
> and maybe also in a well-stocked library) explains how file systems
> and the buffer cache interact in a quite accessible manner.  Before
> you go making changes that you think you need, I strongly recommend
> that you read this chapter.
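
As an aside, the 30-second interval mentioned in (1) above is just a
sysctl default and can be tuned (defaults here quoted from memory):

    sysctl kern.filedelay     # dirty file data, default 30 seconds
    sysctl kern.dirdelay      # dirty directories, default 29 seconds
    sysctl kern.metadelay     # other dirty metadata, default 28 seconds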

Beware that this chapter hasn't applied directly to FreeBSD since
FreeBSD-2.0.  VMIO buffering has been used for most but not all things
since FreeBSD-2.0.5 (especially since vfs.vmiodirenable became the
default 2.4 years ago).  With VMIO buffering, the buffer cache is only
used to temporarily map pages into buffers so that file systems and
device drivers can access them easily.  The size of the buffer cache
is unimportant provided it is large enough not to cause too much
thrashing of the temporary maps.
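
The map usage can be watched at run time; if vfs.bufspace stays pinned
at its limit, the temporary maps may be the thing that is thrashing:

    sysctl vfs.bufspace vfs.maxbufspace     # current and maximum, in bytes
    sysctl vfs.hibufspace vfs.lobufspace    # reuse hysteresis bounds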

> > >   The fact that running out of kernel address space can cause
> > > problems is quite well documented.
> >
> > Well, yes, but why would "vinum start" on a clean boot before any
> > tweaking (this is an out of the box 5.2.1-R) cause that? Surely the
> > kernel address space is sufficiently large just by self-tuning on a
> > machine with 4GB, for a vinum start to succeed?

Actually it's far from sufficiently large.  It is 1GB virtual, and
there is no way to map all of 4GB physical into that.  Some things,
including the buffer cache, want to use sparse mapping techniques, so
they want to have more virtual than physical...but they can't have it.
Some tradeoffs must be made, and there are apparently some bugs in the
default tuning with 4GB physical.
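
On i386 the kernel's share can be doubled at the expense of user
virtual space with a kernel config option (a sketch; whether it fixes
the 4GB case is another matter):

    options KVA_PAGES=512     # 2GB of kernel virtual address space;
                              # the default of 256 gives the 1GB above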

> > >   Why do you want to have a GB malloc-backed disk anyway?
> >
> > This machine is for MySQL, and there's a wild un-backed-up claim on
> > a mysql list that moving MySQL indexes into a ramdisk (they're all
> > trashable data and easily recreateable) can give a fourfold
> > performance increase. So I wanted to try it. In order to get them all
> > in there, though, I'm going to need at least 2GB.

Possibly, if there are lots of writes.  VMIO buffering only works (almost)
perfectly for reads.
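
If you do try it, a swap-backed md is a better fit than a malloc-backed
one at sizes like 2GB, since its pages come from the VM system instead
of kernel malloc space.  Roughly (unit number and mount point are only
examples):

    mdconfig -a -t swap -s 2g -u 10     # creates /dev/md10
    newfs -U /dev/md10                  # throwaway data; soft updates
    mount /dev/md10 /var/db/myisam      # example mount point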

Bruce