Weldon S Godfrey 3 wrote:
> >> OK, at least we've figured out what is going wrong then.  As a
> >> workaround to get the machine to stay up longer, you should be able
> >> to set kern.ipc.nmbclusters=256000 in /boot/loader.conf - but
> >> hopefully we can resolve this soon.
> >>
> I upped it to 256K.  What I am trying to wrap my head around is how it
> was working somewhat for so long at 24K, but it got to near 65K before
> I rebooted it with the higher setting.  Or did I reboot too early?  Is
> there any cleanup that isn't triggered until it reaches max
> nmbclusters?  I am trying to see if anything on our network has
> changed to cause this to become chronic.

We have a nagios server which handles up to 5000 concurrent nsca
daemons and connections which manifested a similar problem on a Dell
R905 (4x4-core AMD, 16GB RAM, bce).  Setting the following in
/boot/loader.conf sorted out the problem for us:

kern.ipc.nmbclusters="131072"
kern.maxusers="1024"

mbuf usage is pretty static at:

$ netstat -m
40165/16220/56385 mbufs in use (current/cache/total)
40154/10500/50654/131072 mbuf clusters in use (current/cache/total/max)
40154/3359 mbuf+clusters out of packet secondary zone in use (current/cache)
0/1493/1493/65536 4k (page size) jumbo clusters in use (current/cache/total/max)
0/0/0/32768 9k jumbo clusters in use (current/cache/total/max)
0/0/0/16384 16k jumbo clusters in use (current/cache/total/max)
90349K/31027K/121376K bytes allocated to network (current/cache/total)
0/0/0 requests for mbufs denied (mbufs/clusters/mbuf+clusters)
0/0/0 requests for jumbo clusters denied (4k/9k/16k)
0/0/0 sfbufs in use (current/peak/max)
0 requests for sfbufs denied
0 requests for sfbufs delayed
246 requests for I/O initiated by sendfile
0 calls to protocol drain routines

Ian

--
Ian Freislich
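For anyone wanting to catch usage climbing toward the configured max
before allocations start being denied, a minimal sh sketch along these
lines should work (a sketch only: the grep pattern assumes the
"mbuf clusters in use (current/cache/total/max)" line format of
netstat -m shown above, and the 80% threshold is an arbitrary example
value, not one from this thread):

#!/bin/sh
# Warn when current mbuf cluster usage crosses a threshold fraction of
# the configured max.  Parses the single "mbuf clusters in use" line
# from netstat -m: field 1 is current, field 4 starts with the max.
line=$(netstat -m | grep 'mbuf clusters in use')
cur=$(echo "$line" | cut -d/ -f1)
max=$(echo "$line" | cut -d/ -f4 | cut -d' ' -f1)
if [ $((cur * 100 / max)) -ge 80 ]; then
    echo "WARNING: mbuf clusters at ${cur}/${max}"
fi

Run from cron, this gives some warning before the box wedges on
cluster exhaustion.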