Re: When will ZFS become stable?

From: Vadim Goncharov <vadim_nuclight_at_mail.ru>
Date: Mon, 07 Jan 2008 21:16:44 +0600
07.01.08 _at_ 04:33 Robert Watson wrote:

> On Sun, 6 Jan 2008, Kris Kennaway wrote:
>
>> Vadim Goncharov wrote:
>>> 06.01.08 _at_ 23:34 Kris Kennaway wrote:
>>>
>>>>> What is the other 512 MB of the 1 GB used for?
>>>>  Everything else that the kernel needs address space for.  Buffer  
>>>> cache, mbuf allocation, etc.
>>>  Aren't they allocated from the same memory zones? I have a router  
>>> with 256 MB RAM; it once panicked with ng_nat due to exhausted kmem.  
>>> So what do these numbers from its sysctl really mean?
>>>  vm.kmem_size: 83415040
>>> vm.kmem_size_max: 335544320
>>> vm.kmem_size_scale: 3
>>> vm.kvm_size: 1073737728
>>> vm.kvm_free: 704638976
>>
>> I believe that mbufs are allocated from a separate map.  In your case  
>> you only have ~80MB available in your kmem_map, which is used for  
>> malloc() in the kernel.  It is possible that ng_nat in combination with  
>> the other kernel malloc usage exhausted this relatively small amount of  
>> space without mbuf use being a factor.

Yes, in-kernel libalias is "leaking" in the sense that it grows without  
bound, and it uses malloc(9) instead of its own UMA zone with a settable  
limit (it does free all of its memory when ng_nat is shut down, though,  
so my workaround has been to restart the ng_nat nodes once a month). But  
looking at the panic string:

panic: kmem_malloc(16384): kmem_map too small: 83415040 total allocated

and memory usage in crash dump:

router:~# vmstat -m -M /var/crash/vmcore.32 | grep alias
      libalias 241127 30161K       - 460568995  128
router:~# vmstat -m -M /var/crash/vmcore.32 | awk '{sum+=$3} END {print sum}'
50407

...so why were only ~50 MB of the 80 in use at the moment of the panic?
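One thing worth noting about that comparison: the MemUse column of  
vmstat -m carries a trailing "K" (kilobytes), and awk's numeric coercion  
silently drops the suffix, so the 50407 summed above is in kilobytes  
while vm.kmem_size is in bytes. A quick sketch using the libalias line  
from the dump (the MB conversion is mine, for illustration):

```shell
# Sum the MemUse column the same way as in the message; awk reads
# "30161K" as the number 30161, i.e. the sum ends up in kilobytes.
sum_kb=$(awk '{sum += $3} END {print sum}' <<'EOF'
      libalias 241127 30161K       - 460568995  128
EOF
)
echo "libalias MemUse: ${sum_kb} KB = $((sum_kb / 1024)) MB"
# vm.kmem_size, by contrast, is reported in bytes:
echo "vm.kmem_size:    $((83415040 / 1024 / 1024)) MB"
```

So the 50407 total and the 83415040 limit really are about 49 MB versus  
about 80 MB, as the question assumes; the units just differ by a factor  
of 1024 between the two tools.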

BTW, current memory usage (April 6.2-STABLE, ipfw + 2 ng_nat's) a week  
after a restart is low:

vadim_at_router:~>vmstat -m | grep alias
      libalias 79542  9983K       - 179493840  128
vadim_at_router:~>vmstat -m | awk '{sum+=$3} END {print sum}'
28124

> Actually, with mbuma, this has changed -- mbufs are now allocated from  
> the general kernel map.  Pipe buffer memory and a few other things are  
> still allocated from separate maps, however.  In fact, this was one of  
> the known issues with the introduction of large cluster sizes without  
> resource limits: address space and memory use were potentially  
> unbounded, so Randall recently properly implemented the resource limits  
> on mbuf clusters of large sizes.

I still don't understand what those sysctl numbers above actually mean -  
the descriptions from sysctl -d are obscure. How much memory does the  
kernel use in RAM, and for which purposes? Is that limit constant? Does  
the kernel swap parts of it out, and if so, how much?
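As far as I can tell (this is my reading of kmeminit() in kern_malloc.c,  
so treat it as an assumption), vm.kmem_size is computed once at boot as  
roughly the usable physical memory divided by vm.kmem_size_scale, then  
capped by vm.kmem_size_max; memory allocated through malloc(9) in  
kmem_map is wired, not swapped. The numbers from my router are at least  
consistent with that:

```shell
# Values taken from the sysctl output quoted earlier in the thread.
kmem_size=83415040        # vm.kmem_size (bytes)
kmem_size_scale=3         # vm.kmem_size_scale
kmem_size_max=335544320   # vm.kmem_size_max (bytes)

# Back out the "usable memory" figure implied by size = usable / scale.
# It comes to a bit under the 256 MB installed, which fits: the kernel
# reserves some pages before this calculation runs (my assumption).
usable=$((kmem_size * kmem_size_scale))
echo "implied usable RAM: $((usable / 1024 / 1024)) MB of 256 MB installed"

# The max cap did not come into play on this machine:
[ "$kmem_size" -lt "$kmem_size_max" ] && echo "kmem_size below kmem_size_max"
```

If that reading is right, the only way to raise the ~80 MB limit on a  
machine like this is the vm.kmem_size loader tunable, not a sysctl at  
runtime.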

-- 
WBR, Vadim Goncharov
Received on Mon Jan 07 2008 - 14:16:49 UTC
