On Wed, 2005-02-16 at 13:40 -0800, Brooks Davis wrote:
> On Wed, Feb 16, 2005 at 01:13:13PM -0800, Sean McNeil wrote:
> > On Wed, 2005-02-16 at 14:57 -0600, Eric Anderson wrote:
> > > Brooks Davis wrote:
> > > > On Wed, Feb 16, 2005 at 12:12:10PM -0800, Sean McNeil wrote:
> > > >
> > > > > With a system built yesterday on my amd64, I had plenty of
> > > > > memory showing as free when the system first came up, and even
> > > > > after intense usage top showed lots of free memory. Overnight,
> > > > > at some point, all my memory stopped being free and became
> > > > > inactive. Is anything wrong here, or is this expected behavior?
> > > > > ps doesn't show any serious usage by any particular process.
> > > > > Also, if disk caches or something similar were taking up the
> > > > > memory, I would have expected it to show up much earlier.
> > > >
> > > > On a system that has been up for any significant time, free
> > > > memory should be very small, since free memory is wasted memory.
> > > > My guess is that it is disk cache and that one of the nightly
> > > > jobs accessed enough data to fill it.
> > >
> > > Speaking of this - is there a way to flush the disk cache?
> >
> > I would have to dispute this. Disk cache should not be assigned to
> > inactive user pages; it should show up, IMHO, as cache or buffer
> > memory. The vmstat -m output from my original email supports this:
> >
> >   UFS dirhash  2656   1105K  1330K   15744  16,32,64,128,256,512,1024,2048
> >   BIO buffer   5435  10870K 10940K  236537  2048
> >
> > I see nothing on my system that comes close to accounting for the 1G
> > of inactive memory. Further, it makes something like the GNOME
> > system applet useless when all my memory is claimed to be allocated
> > as user space. I have only recently used this applet, but again I
> > stress that after heavy usage of the system right after boot, I had
> > plenty of RAM shown as free.
> >
> > As for flushing the disk cache, I thought sync should do this. It
> > doesn't have any effect here, though.
>
> Sync writes dirty pages; it does not return pages to the free list.
> There is no reason to aggressively return pages to the free list under
> normal load. The system knows which pages have not been modified and
> thus can be added to the free list at virtually no cost, so why free
> pages that might be used again? There is little cost in deferring
> that, and a very large potential saving if the data is accessed again
> later. One way this can happen is if you mmap a file and read it:
> those pages will be mapped and will remain mapped until something
> pushes them out. If they are unmodified, another process that has
> access to the same data can use the cached copy rather than doing a
> read from disk.
>
> To see how you can use up your free memory like this, run top
> alongside a command like:
>
>   find /some/large/directory | xargs grep DjklfadsAFSDjklfasdhjASDhjEQ#@%
>
> The notion that most of your memory should be entirely free when the
> system is not under load is simply wrong. The simplistic world view
> of memory being either allocated to a specific task or entirely free
> isn't correct, for all that it makes for nice graphs anyone can
> pretend to understand. You can produce an approximation of the output
> you see on other systems by modifying the program to include inactive
> memory in free memory. That's what I did with Ganglia.

I am not disputing this. I only took a quick glance through the
documentation that Steve pointed to, and while I think I understand it
now, I still wonder a little as to why.
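Brooks's mmap point is easy enough to try for yourself, though. Here
is a rough sketch (the default path is just an example; any large
readable file will do) that maps a file and touches one byte per page
so every page gets faulted in:

#include <sys/types.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int
main(int argc, char **argv)
{
	const char *path;
	struct stat st;
	char *p;
	off_t i;
	int fd;
	volatile char c = 0;

	/* Any large readable file will do; the default is an example. */
	path = (argc > 1) ? argv[1] : "/usr/share/dict/words";
	fd = open(path, O_RDONLY);
	if (fd == -1 || fstat(fd, &st) == -1) {
		perror(path);
		return (1);
	}
	p = mmap(NULL, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
	if (p == MAP_FAILED) {
		perror("mmap");
		return (1);
	}
	/* Touch one byte per page so each page is faulted in. */
	for (i = 0; i < st.st_size; i += getpagesize())
		c += p[i];
	munmap(p, st.st_size);
	close(fd);
	return (0);
}

The first run does real disk I/O; once it exits, top shows those pages
counted as inactive rather than free, and a second run comes back
almost immediately because the unmodified pages are still cached.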
What I seem to be getting from the documentation is that cache and buf
pages are actively used within the system for one reason or another,
and that disk cache pages which have either aged past a certain point
or have otherwise been determined to be reusable are placed on the
user-space inactive queue. Am I starting to get it?

Sean
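P.S. For anyone who wants to watch the queues directly while running
the find/grep experiment above, the per-queue page counts are exported
under vm.stats.vm; a quick sketch using sysctlbyname(3):

#include <sys/types.h>
#include <sys/sysctl.h>
#include <stdio.h>

/* Read one of the vm.stats.vm.* page counters. */
static u_int
queue_pages(const char *name)
{
	u_int count = 0;
	size_t len = sizeof(count);

	if (sysctlbyname(name, &count, &len, NULL, 0) == -1)
		perror(name);
	return (count);
}

int
main(void)
{
	/* Counts are in pages; multiply by hw.pagesize for bytes. */
	printf("active:   %u\n", queue_pages("vm.stats.vm.v_active_count"));
	printf("inactive: %u\n", queue_pages("vm.stats.vm.v_inactive_count"));
	printf("cache:    %u\n", queue_pages("vm.stats.vm.v_cache_count"));
	printf("wired:    %u\n", queue_pages("vm.stats.vm.v_wire_count"));
	printf("free:     %u\n", queue_pages("vm.stats.vm.v_free_count"));
	return (0);
}

Run it before and after the experiment: pages should move from the
free count to the inactive count while the total stays put, which is
exactly the behavior Brooks describes.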