Re: memory allocation issue loading a kernel module

From: Sean McNeil <sean_at_mcneil.com>
Date: Tue, 25 Nov 2003 00:20:03 -0800
Yes, thanks for the clarification.  I'm still inclined to believe,
though, that the disk driver is what is fragmenting physical memory
through disk caching.  It is only a theory, but it seems plausible.

Thanks again,
Sean

On Tue, 2003-11-25 at 00:13, Maxime Henrion wrote:
> Sean McNeil wrote:
> > Hi everyone,
> > 
> > I was wondering if there is a way to flush out pages in memory that
> > might not be required.  I have a device driver that allocates 16 distinct
> > buffers, each 32K in size.  This is done with a bus_dma call, as they will
> > be accessed by a PCI device.  The problem is that if I do a compile on
> > my system prior to trying to kldload the module, there isn't enough
> > physical memory for the driver.  I am assuming it is the disk cache that
> > is eating up that memory, and I want to flush out enough pages for my
> > bus_dma allocation to work.
> > 
> > Is this possible?  Any and all comments are appreciated.
> 
> The problem probably has nothing to do with the disk cache eating up
> memory; I believe what you're seeing is physical address space
> fragmentation.  On x86, when devices want to perform DMA operations,
> they are given physical addresses, not virtual ones as on other
> architectures like sparc64 that have an IOMMU.  This means that for
> each of your 32K buffers, you need 8 _physically contiguous_ 4K pages
> of memory (a sketch of such an allocation follows the quoted message
> below).
> 
> Unfortunately, the longer a system has been running, the more the
> physical address space tends to be fragmented, and it becomes
> impossible to reserve large chunks of physically contiguous memory,
> which is why the kldload is failing.
> 
> If I remember correctly, Alan Cox intended to write a binary buddy
> allocator to handle the physical address space (or to do coalescing
> some other way, I'm not sure...) so that this particular problem
> would be solved.  A toy illustration of buddy coalescing also follows
> below.
> 
> Cheers,
> Maxime
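
For reference, here is a minimal sketch of the allocation pattern Sean
describes, written against the FreeBSD 5.x-era bus_dma interface
(including the lockfunc/lockarg arguments bus_dma_tag_create had by
then).  The names (alloc_dma_buffers, NBUFS, and so on) are invented
for the illustration and are not from his driver; treat this as a
sketch under those assumptions, not his actual code.  Requesting
nsegments = 1 is what forces each 32K buffer onto eight physically
contiguous 4K pages, the constraint Maxime describes above.

/*
 * Illustrative only: allocate NBUFS single-segment DMA buffers.
 * All names here are invented for the example.
 */
#include <sys/param.h>
#include <sys/systm.h>
#include <sys/bus.h>
#include <machine/bus.h>

#define NBUFS   16
#define BUFSIZE (32 * 1024)

static int
alloc_dma_buffers(bus_dma_tag_t *tagp, void *vaddr[NBUFS],
    bus_dmamap_t map[NBUFS])
{
	int error, i;

	/*
	 * One tag shared by all buffers.  nsegments = 1 is the key
	 * constraint: each 32K buffer must fit in a single segment,
	 * i.e. occupy 8 physically contiguous 4K pages.
	 */
	error = bus_dma_tag_create(NULL,	/* parent */
	    PAGE_SIZE, 0,			/* alignment, boundary */
	    BUS_SPACE_MAXADDR_32BIT,		/* lowaddr: 32-bit PCI */
	    BUS_SPACE_MAXADDR,			/* highaddr */
	    NULL, NULL,				/* filter, filterarg */
	    BUFSIZE,				/* maxsize */
	    1,					/* nsegments */
	    BUFSIZE,				/* maxsegsz */
	    0,					/* flags */
	    NULL, NULL,				/* lockfunc, lockarg */
	    tagp);
	if (error != 0)
		return (error);

	for (i = 0; i < NBUFS; i++) {
		/* This is what fails once physical memory is fragmented. */
		error = bus_dmamem_alloc(*tagp, &vaddr[i],
		    BUS_DMA_NOWAIT, &map[i]);
		if (error != 0)
			return (error);
	}
	return (0);
}

With a single-segment tag larger than a page, the x86 busdma
implementation of the time had to fall back on a contiguous-memory
allocation (contigmalloc) internally, which is exactly the operation
that starts failing once the free page pool has been fragmented by
activity such as a large compile.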
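And for completeness, a toy user-space illustration of the binary
buddy coalescing Maxime mentions.  This is a generic textbook buddy
scheme, not Alan Cox's actual design (which, per the above, may not
have existed yet); the arena size, orders, and names are all made up
for the example.

#include <stdint.h>
#include <stdio.h>

#define MIN_ORDER 12                    /* order 12 = 4K, one page */
#define MAX_ORDER 20                    /* order 20 = 1M arena */
#define NORDERS   (MAX_ORDER - MIN_ORDER + 1)

static uint8_t arena[1UL << MAX_ORDER];

struct blk { struct blk *next; };
static struct blk *freelist[NORDERS];   /* one list per power of two */

static void *
buddy_alloc(int order)
{
	int o = order;

	/* Find the smallest free block that is big enough. */
	while (o <= MAX_ORDER && freelist[o - MIN_ORDER] == NULL)
		o++;
	if (o > MAX_ORDER)
		return NULL;            /* no contiguous run left */
	struct blk *b = freelist[o - MIN_ORDER];
	freelist[o - MIN_ORDER] = b->next;

	/* Split down, returning the unused halves to the free lists. */
	while (o > order) {
		o--;
		struct blk *buddy =
		    (struct blk *)((uint8_t *)b + (1UL << o));
		buddy->next = freelist[o - MIN_ORDER];
		freelist[o - MIN_ORDER] = buddy;
	}
	return b;
}

static void
buddy_free(void *p, int order)
{
	uintptr_t off = (uint8_t *)p - arena;

	/* Coalesce with the buddy block as long as it is also free. */
	while (order < MAX_ORDER) {
		uintptr_t boff = off ^ (1UL << order);
		struct blk **pp = &freelist[order - MIN_ORDER];
		while (*pp != NULL && (uint8_t *)*pp != arena + boff)
			pp = &(*pp)->next;
		if (*pp == NULL)
			break;          /* buddy is busy: stop merging */
		*pp = (*pp)->next;      /* unlink buddy, merge the pair */
		off &= ~(1UL << order); /* merged block starts lower */
		order++;
	}
	struct blk *b = (struct blk *)(arena + off);
	b->next = freelist[order - MIN_ORDER];
	freelist[order - MIN_ORDER] = b;
}

int
main(void)
{
	/* Start with the whole arena as one free 1M block. */
	freelist[MAX_ORDER - MIN_ORDER] = (struct blk *)arena;
	((struct blk *)arena)->next = NULL;

	void *a = buddy_alloc(15);      /* 32K = 8 contiguous 4K pages */
	void *b = buddy_alloc(15);
	printf("a=%p b=%p\n", a, b);
	buddy_free(a, 15);
	buddy_free(b, 15);              /* merges all the way back to 1M */
	return 0;
}

The point of the scheme is the buddy_free path: freed blocks merge
back into the largest power-of-two run available, so contiguous 32K
chunks reappear as memory is released, instead of the free pages
staying permanently scattered.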
Received on Mon Nov 24 2003 - 23:20:09 UTC