Re: tuning hints for PAE

From: Scott Long <scottl_at_freebsd.org>
Date: Thu, 15 Jul 2004 12:38:29 -0600
Daniel Lang wrote:

> Hi,
> 
> I've re-activated PAE for our server, which now has 6 GB of RAM.
> 
> Thanks to a hint from Peter and basic knowledge about KVA, I have
> included
> 
> options         KVA_PAGES=768
> 
> in the kernel config. This should yield 2GB of KVA if I have
> understood Peter correctly.
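> 
> (Sanity-checking my own arithmetic here, and I may well have it
> wrong: each KVA_PAGES unit covers one page-table page worth of
> address space, which would be 4 MB on plain i386 but only 2 MB under
> PAE, since PAE page-table pages hold 512 instead of 1024 entries:
> 
>   768 * 4 MB = 3 GB     without PAE
>   768 * 2 MB = 1.5 GB   with PAE
> 
> Corrections welcome if that is not what Peter meant.)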
> 
> 
> Now a panic has happened; I have a ddb trace:
> 
> [..]
> page_alloc(a81e46e0,1000,cf6f07ab,102,a04f4cd4) at 0xa045e4a2 = page_alloc+0x1a
> slab_zalloc(a81e46e0,2,a81e4728,a81e46e0,b1de7624) at 0xa045e14b = slab_zalloc+0x9f
> uma_zone_slab(a81e46e0,2,80,46e0,a81e4728) at 0xa045f3c0 = uma_zone_slab+0xb0
> uma_zalloc_bucket(a81e46e0,2) at 0xa045f5ac = uma_zalloc_bucket+0x124
> uma_zalloc_arg(a81e46e0,0,2) at 0xa045f2c3 = uma_zalloc_arg+0x25f
> ffs_vget(a8200c00,1823ed,2,cf6f08ec,200) at 0xa043d1be = ffs_vget+0x2ea
> ffs_valloc(aa07a000,81b6,b113a280,cf6f08ec) at 0xa0426519 = ffs_valloc+0xe5
> ufs_makeinode(81b6,aa07a000,cf6f0bf8,cf6f0c0c) at 0xa0449539 = ufs_makeinode+0x59
> ufs_create(cf6f0a6c,cf6f0b28,a03a515b,cf6f0a6c,678) at 0xa04461e6 = ufs_create+0x26
> ufs_vnoperate(cf6f0a6c) at 0xa0449b4f = ufs_vnoperate+0x13
> vn_open_cred(cf6f0be4,cf6f0ce4,1b6,b113a280,a) at 0xa03a515b = vn_open_cred+0x177
> vn_open(cf6f0be4,cf6f0ce4,1b6,a,a0543620) at 0xa03a4fe2 = vn_open+0x1e
> kern_open(b2955000,80a7911,0,20a,1b6) at 0xa039f474 = kern_open+0xd8
> open(b2955000,cf6f0d14,3,105ed,292) at 0xa039f398 = open+0x18
> syscall(80c002f,810002f,9f7f002f,8,283536d8) at 0xa049f6c3 = syscall+0x217
> Xint0x80_syscall() at 0xa048ce0f = Xint0x80_syscall+0x1f
> --- syscall (5, FreeBSD ELF32, open), eip = 0x282ce093, esp = 0x9f7fe30c, ebp = 0x9f7fe338 ---
> [..]
> 
> The trace may be truncated; I could not save everything, including
> the panic message itself, alas.
> 
> I don't have a crashdump (the dump device was smaller than 6 GB; I
> have fixed this in the meantime, but it reminds me of Solaris, which
> can dump only kernel and curproc pages if requested), but I suspect
> this panic is due to kernel address space exhaustion. So maybe the
> KVA_PAGES value was still wrong.
> 
> I also get the following message:
> 
> kern.ipc.maxpipekva exceeded; see tuning(7)
> 
> The hint is rather useless, since the tuning manpage does not cover
> many of these parameters, including this one.
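> 
> For the archives, what I have pieced together so far (happy to be
> corrected): kern.ipc.maxpipekva seems to be a boot-time tunable
> rather than a runtime sysctl, so raising it would mean something
> like this in /boot/loader.conf (the value is only an illustration):
> 
>   # reserve 32 MB of KVA for pipe buffers
>   kern.ipc.maxpipekva="33554432"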
> 
> Hmmm, I think I used to have set
> 
> kern.maxusers=768
> 
> (I have cleared this now, to utilize the autotuning of this
> parameter.) Maybe this was a problem as well.
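> 
> (If I understand the autotuning correctly, the value it computes can
> be checked after boot with
> 
>   sysctl kern.maxusers
> 
> since the result is exported as a read-only sysctl.)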
> 
> In the past I also used to set kern.ipc.nmbclusters, but IIRC this
> is now tuned automatically as well.
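> 
> (For completeness, the check I use to see whether the autotuned
> value is holding up, usage vs. limit:
> 
>   sysctl kern.ipc.nmbclusters
>   netstat -m
> 
> with an override possible from /boot/loader.conf if it is not.)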
> 
> I have bumped kern.ipc.somaxconn to 1024, of course.
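> 
> (That one at least is a plain runtime sysctl:
> 
>   sysctl kern.ipc.somaxconn=1024
> 
> plus the matching line in /etc/sysctl.conf to survive a reboot.)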
> 
> Any other parameters I should look out for when tuning?
> If there is some more comprehensive documentation than tuning(7)
> and the Handbook section (yes, I have read it), please let me know.
> 
> Cheers,
>  Daniel

Look at kern.maxvnodes and trim it down to a smaller value if it's more
than about 100,000.  This of course depends on your workload.  If you
really need a lot of cached vnodes, then you'll need to tune elsewhere.
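
For example (the numbers below are only illustrative; pick a ceiling
that fits your RAM and workload):

  # compare vnodes actually in use against the current ceiling
  sysctl vfs.numvnodes kern.maxvnodes

  # kern.maxvnodes is writable at runtime, so trimming is just
  sysctl kern.maxvnodes=100000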

Scott
Received on Thu Jul 15 2004 - 16:39:38 UTC
