On Tue, 18 Nov 2003, Claus Guttesen wrote:

> Hi.
>
> > > panic: kmem_malloc(4096): kmem_map too small:
> > > 275251200 total allocated cpuid = 0; lapic.id = 00000000
> >
> > You'll either want to raise the size of the kmem_map
> > pool or decrease the maximum number of vnodes allowed
> > (vnodes get allocated out of the kmem_map and are
> > likely depleting it).
> >
> > Add one of the two lines to /boot/loader.conf:
> >
> > kern.vm.kmem.size=350000000
> > or
> > kern.maxvnodes=150000
> >
> > The first one is probably the better choice for you,
> > since the very nature of what you are doing demands
> > that you touch a lot of vnodes.
> >
> > Scott
>
> It seems that your advice helped cure the patient. I
> did two things:
>
> 1. added kern.vm.kmem.size=450000000
> 2. cleaned up tmp files older than 4 hours every hour
>    (previously, files older than 12 hours)
>
> Now the server has been quite stable: no reboot in
> almost two days! My problem appears to be too many
> files in /tmp and /var/tmp (50,000 or more), which made
> the kernel puke.

I forgot to mention in the last email that kern.maxvnodes will still
scale upwards as you increase kern.vm.kmem.size, so you might want to
set a hard limit on it so you don't continue to run into problems. A
value of 200,000 is probably good in your case.

> I guess this is a scenario which we will see more
> often. Would it be possible to output this situation
> to the message log before the server simply reboots?

It might be useful for this particular panic message to print out the
value of maxvnodes, numvnodes, and/or other metrics to help with
debugging. We also need to review the scaling algorithm and tweak it
back into line. A more complex solution would be to create a way for
the VFS system to get feedback on KVA and kmem_map pressure and
auto-tune itself.

Scott

Received on Tue Nov 18 2003 - 12:00:22 UTC
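[Editor's note: the tunables discussed above can be sketched as a
/boot/loader.conf fragment. The values are the ones suggested in the
thread (450000000 for kern.vm.kmem.size, 200000 for kern.maxvnodes);
which of the two you set, and at what value, depends on your workload
and available kernel address space.]

```
# /boot/loader.conf -- FreeBSD boot-time tunables (values from this thread)

# Enlarge the kmem_map so vnode allocations have more headroom.
kern.vm.kmem.size=450000000

# Hard-cap the vnode count so it stops auto-scaling upward
# as kern.vm.kmem.size grows.
kern.maxvnodes=200000
```

After a reboot you can watch how close the system runs to the cap with
`sysctl vfs.numvnodes kern.maxvnodes`.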
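[Editor's note: the hourly tmp cleanup from step 2 can be done with a
small cron-driven script built on find(1). This is a minimal sketch,
not the poster's actual script; the function name and script path are
mine. Both -mmin and -delete are supported by FreeBSD's find as well
as GNU find.]

```shell
#!/bin/sh
# clean_tmp DIR...: delete regular files under each given directory
# that have not been modified in the last 4 hours (240 minutes).
clean_tmp() {
    for dir in "$@"; do
        # -type f skips directories, sockets, etc.
        find "$dir" -type f -mmin +240 -delete
    done
}

# Intended use: save as e.g. /usr/local/sbin/tmpclean.sh (hypothetical
# path) with a body of `clean_tmp /tmp /var/tmp`, then run it hourly
# from root's crontab:
#   0 * * * * /usr/local/sbin/tmpclean.sh
```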