PAE and large ram support.

From: Jake Burkholder <jake_at_locore.ca>
Date: Wed, 9 Apr 2003 11:01:58 -0400
Hi,

If you haven't read cvs-src, just recently I've committed support for
PAE and more than 4G of ram on x86 to -current.  Basically, what this does
is allow physical memory above 4G to be used normally by the kernel and
userland.  Except in certain circumstances, no distinction is made between
memory above and below 4G; it all just becomes part of the general page
pool.  This does not increase the amount of virtual address space, just
the amount of physical memory you can use.

We'd like this feature to be solid for 5.1-RELEASE, so I'm hoping there
are people out there with systems with more than 4G of ram who are willing
to test it.  It's been tested pretty extensively with 6G of ram; I'd be very
interested to hear from anyone with substantially more than that.

There are a couple of caveats to be aware of:

1. Not all device drivers will work properly: the hardware must either
   support 64-bit PCI addressing, or the driver must use busdma, which
   will use bounce buffers for DMA to memory not accessible by the hardware.
   I've committed a PAE kernel config (/sys/i386/conf/PAE) which excludes
   drivers that are known not to work at all, or which have not been tested.
   In short, the list of "certified" drivers at this time is:

	- aac
	- ahc
	- ahd
	- ata
	- em
	- fxp
	- xl

   plus all the normal stuff which doesn't use DMA.  The aac, ahc, ahd and
   em drivers will use 64-bit PCI addressing, so no bounce buffering will occur.
   The others will use bounce buffers for DMA to memory above 4G; performance
   is not likely to be that hot, but it will work.
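   To give a rough idea of what a config along these lines looks like, here
   is a hypothetical excerpt in the style of a kernel config file.  This is
   only a sketch built from the driver list above; the committed
   /sys/i386/conf/PAE file is the authoritative version.

	# Hypothetical sketch, NOT the committed /sys/i386/conf/PAE.
	machine		i386
	ident		PAE
	options		PAE		# enable Physical Address Extension
	device		acpi		# static, not a module (see caveat 2)
	# DMA-capable drivers from the "certified" list:
	device		aac
	device		ahc
	device		ahd
	device		ata
	device		em
	device		fxp
	device		xl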

2. You must not load kernel modules into a PAE kernel.  In particular,
   many machines with large amounts of memory are recent designs and require
   acpi, which is normally loaded as a kernel module.  You must compile it
   statically into the kernel with 'device acpi'; this is included in the
   PAE kernel config.
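   For those who want to try it, building with the committed config is the
   usual kernel build; assuming a -current source tree in /usr/src, something
   like:

	cd /usr/src
	make buildkernel KERNCONF=PAE
	make installkernel KERNCONF=PAE

   and make sure /boot/loader.conf does not load acpi (or anything else) as
   a module, e.g. remove any acpi_load="YES" line.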

3. The auto-tuning that the kernel does starts to fall apart pretty fast
   with lots of memory.  With 6G the maximum number of vnodes gets set
   higher than the kernel address space can support, so you may want to
   limit it to around 100,000 with the kern.maxvnodes sysctl.  There are
   probably other things that are allocated based only on physical memory
   size and which don't scale past 4G.  We need people with varying memory
   configurations to try it so we know what else needs to be tuned.
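   For example, the vnode cap can be set at runtime or made persistent;
   the 100,000 figure here is just the suggested starting point from above:

	# At runtime:
	sysctl kern.maxvnodes=100000

	# Or persistently, in /etc/sysctl.conf:
	kern.maxvnodes=100000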

I'm not sure I can trump Peter, but in any case I've put up the dmesg from
my test machine: http://people.freebsd.org/~jake/tip.pae.  The hardware was
provided by FreeBSD Systems, www.freebsdsystems.com.

Thanks,
Jake
Received on Wed Apr 09 2003 - 06:01:38 UTC
