Lucky Green wrote:
> Terry wrote:
> > This is generally an attempt to get a swap mapping for
> > backing store for the process. It could be that all your
> > "inactive" memory has been spoken for.
>
> I had been under the impression that inactive pages contained data that
> is no longer being used by a program, but is kept around in case the
> data may be needed again in the future. Is it not the case that inactive
> memory should be available to active processes if the processes require
> more memory?

It is LRU'ed. If all the cache contents have been more recently used
than the contents of the dirty page that's potentially going to be
swapped, then the cached contents are "more precious" than the page
being swapped, even if no one is currently referencing them.

In any case, I don't think this is what's happening, but without a
stack traceback, it's hard to tell exactly which of the 3 cases is
really happening.

You really need to give us a stack traceback, so that we don't have to
analyze all three code paths, and can concentrate on the one that is
biting you.

> > If you had provided a traceback, I would guess that this
> > happened as a call from swap_pager_reserve(), as opposed to a
> > call from swap_pager_strategy() or swap_pager_putpages(). This can
> > only happen if you are using an md device; are you using an
> > md device (ramdisk)? If so: cut it out, or make sure the
> > MD_RESERVE bit is not set.
>
> "device md" is compiled into the kernel, but to my knowledge I am not
> using any MD devices. Should I remove this entry from the kernel config
> file?

I don't know if the problem is coming from there. Can you give us a
stack traceback? Compile your kernel with BREAK_TO_DEBUGGER and DDB,
and then replace:

	printf("swap_pager_getswapspace: failed\n");

with:

	panic("swap_pager_getswapspace: failed\n");

in /usr/src/sys/vm/swap_pager.c in swp_pager_getswapspace(), at about
line 474. Then post a traceback so we can tell who called it.

-- Terry
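
A minimal sketch of the change being suggested, assuming a 5.x-era
/usr/src/sys/vm/swap_pager.c. The surrounding lines are paraphrased
from memory and may not match a given source tree exactly; only the
printf-to-panic substitution is the point:

	/*
	 * Inside swp_pager_getswapspace(), at the spot where the
	 * swap block allocation fails (fragment, not a full file):
	 */
	if (blk == SWAPBLK_NONE) {
		/* was: printf("swap_pager_getswapspace: failed\n"); */
		panic("swap_pager_getswapspace: failed\n");
	}

The corresponding kernel configuration options, so the panic drops
into the in-kernel debugger instead of just rebooting:

	options	DDB			# in-kernel debugger
	options	BREAK_TO_DEBUGGER	# console break drops to DDB

Once the box panics into DDB, "trace" at the db> prompt prints the
stack traceback to post back to the list.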