On Sun, Apr 20, 2003, Lucky Green wrote:
> David wrote quoting Bruce:
> > > So the bug is mainly in vm making only a relatively useless
> > > statistic available.  On my systems, `Inact' is usually mainly for
> > > (non-dirty) VMIO pages.
> >
> > Right.  dillon was planning to separate out the dirty and
> > clean pages in the inactive queue at some point.  ISTR that
> > his intent was along the lines of optimizing write clustering
> > by making dirty pages easier to find, or something along
> > those lines.  But the number of inactive dirty pages is
> > useful as a statistic by itself, too.
>
> So how do I find out what is consuming those "inactive" pages? And how
> do I determine if those pages can be discarded or not?

'top -ores' will tell you which processes are hogging the most memory,
but the system does not keep accurate statistics on clean vs dirty or
swap-backed vs fs-backed pages.  Nevertheless, that might give you some
idea of where your 1 GB of memory has gone.

> Exactly. Which is why I just replaced my old 128MB RAM/256MB swap server
> with a new 1GB RAM server. I still fail to understand why a setup that
> never was anywhere near running out of memory in the previous
> configuration would run out of memory with more RAM than it had RAM and
> swap combined. If I can't do in 1GB what I could do in 128 + 256 MB,
> then somewhere there is a bug. How do I find out where?

It would be useful to know whether dillon's suggestion fixes your problem.
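For reference, a minimal sketch of the kind of poking around meant above,
assuming a stock FreeBSD install of that era.  The sysctl names are the
standard vm.stats counters and are not taken from this thread, so
double-check them against your version:

    # Sort processes by resident set size, as suggested above
    top -ores

    # Per-queue page counts (active/inactive/cache/free/wired), in pages
    sysctl vm.stats.vm.v_active_count vm.stats.vm.v_inactive_count \
           vm.stats.vm.v_cache_count vm.stats.vm.v_free_count \
           vm.stats.vm.v_wire_count

    # Cumulative paging statistics and current swap usage
    vmstat -s
    swapinfo -k

None of these separate clean from dirty inactive pages, for the reason
given above, but they do show how the queues are split and whether swap
is actually being touched.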