Re: Fatal trap 12: page fault panic with recent kernel with ZFS

From: Ben Kelly <ben_at_wanderview.com>
Date: Mon, 18 May 2009 22:34:58 -0400
On May 18, 2009, at 9:26 PM, Kip Macy wrote:
> On Mon, May 18, 2009 at 6:22 PM, Adam McDougall <mcdouga9_at_egr.msu.edu> wrote:
>> On Mon, May 18, 2009 at 07:06:57PM -0500, Larry Rosenman wrote:
>>
>>  On Mon, 18 May 2009, Kip Macy wrote:
>>
>>  > The ARC cache allocates wired memory. The ARC will grow until
>>  > there is vm pressure.
>>  My crash this AM was with 4G real, and the ARC seemed to grow and
>>  grow, then we started paging, and then crashed.
>>
>>  Even with the VM pressure it seemed to grow out of control.
>>
>>  Ideas?
>>
>>
>> Before that, but since r191902, I was having the opposite problem:
>> my ARC (and thus Wired) would grow up to approximately arc_max until
>> my Inactive memory put pressure on the ARC, making it shrink back
>> down to ~450M, where some aspects of performance degraded.  A partial
>> workaround was to add an arc_min, which wasn't entirely successful,
>> and I found I could restore ZFS performance by temporarily squeezing
>> down Inactive memory by allocating a bunch of it myself; after
>> freeing that, the ARC had no pressure and could grow towards arc_max
>> again until Inactive eventually rose.  Reported to Kip last night
>> and some cvs commit lists.  I never did run into Swap.
>>
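
For what it's worth, the "allocating a bunch of it myself" squeeze
described above can be a trivial userland program; a hypothetical
sketch, with the size hard-coded to roughly match the Inactive total:

    /*
     * Hypothetical sketch of the allocate-and-touch squeeze: grab a
     * couple of GB, touch every page so they are really allocated,
     * then free them all at once on exit.
     */
    #include <stdlib.h>
    #include <string.h>

    int
    main(void)
    {
            size_t sz = 2UL * 1024 * 1024 * 1024;   /* ~2 GB; tune to taste */
            char *p = malloc(sz);

            if (p != NULL)
                    memset(p, 1, sz);       /* force every page to be backed */
            free(p);
            return (0);
    }
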
>
>
> That is a separate issue. I'm going to try adding a vm_lowmem event
> handler to drive reclamation instead of the current paging target.
> That shouldn't cause inactive pages to shrink the ARC.

Isn't there already a vm_lowmem event for the arc that triggers
reclamation?
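
A rough sketch of that existing hook, going from memory of the FreeBSD
arc.c glue, so the lock and variable names are approximate:

    /* vm_lowmem handlers take (void *arg, int flags). */
    static eventhandler_tag arc_event_lowmem = NULL;

    static void
    arc_lowmem(void *arg __unused, int howto __unused)
    {
            /* Flag the reclaim thread and wake it so it starts evicting. */
            mutex_enter(&arc_reclaim_thr_lock);
            needfree = 1;
            cv_signal(&arc_reclaim_thr_cv);
            mutex_exit(&arc_reclaim_thr_lock);
    }

    /* Registered once from arc_init(): */
    arc_event_lowmem = EVENTHANDLER_REGISTER(vm_lowmem, arc_lowmem,
        NULL, EVENTHANDLER_PRI_FIRST);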

On the low memory front, it seems like the arc needs a way to tell the
pager to mark some vnodes inactive.  I've seen many cases where the
arc size greatly exceeded the target, but it couldn't evict any memory
because all of its buffers were still referenced.  This seems to behave
a little better with code that increments vm_pageout_deficit and
signals the pageout daemon when the arc is too far above its target.
The normal buffer cache seems to do this as well when it's low on
memory.
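
A minimal sketch of that idea, modeled on the pattern the kernel uses
elsewhere (e.g. vm_page_alloc()); arc_size and arc_c stand for the
ARC's current size and target, and the 1/8 overshoot threshold here is
arbitrary:

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <vm/vm.h>
    #include <vm/vm_pageout.h>

    static void
    arc_kick_pagedaemon(uint64_t arc_size, uint64_t arc_c)
    {
            if (arc_size > arc_c + arc_c / 8) {
                    /* Record how many pages we would like back... */
                    atomic_add_int(&vm_pageout_deficit,
                        atop(arc_size - arc_c));
                    /* ...and wake the pageout daemon to reclaim them. */
                    pagedaemon_wakeup();
            }
    }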

- Ben
Received on Tue May 19 2009 - 00:35:03 UTC
