Greetings,

Max Laier wrote:
-cut-
>> Well, I think the interesting lines from this experiment are:
>>
>>     max     total  wait_total    count     avg  wait_avg  cnt_hold  cnt_lock  name
>>      39  25328476    70950955  9015860       2         7   5854948   6309848  /usr/src/sys/contrib/pf/net/pf.c:6729 (sleep mutex:pf task mtx)
>>  936935  10645209         350       50  212904         7       110        47  /usr/src/sys/contrib/pf/net/pf.c:980 (sleep mutex:pf task mtx)
>
> Yeah, those two mostly are the culprit, but a quick fix is not really
> available. You can try to "set timeout interval" to something bigger
> (e.g. 60 seconds), which will decrease the average hold time of the
> second lock instance at the cost of increased peak memory usage.

I'll try that. At least memory doesn't seem to be a problem :)

> I have ideas about how to fix this, but it will take much more time
> than I currently have for FreeBSD :-\  In general this requires a
> bottom-up redesign of pf locking and of some data structures involved
> in the state tree handling.
>
> The first (= main) lock instance is also far from optimal (i.e. pf is
> a congestion point in the bridge forwarding path). For this I also
> have a plan to make at least state table lookups run in parallel to
> some extent, but again the lack of free time to spend coding prevents
> me from doing it at the moment :-\

Well, now we know where the issue is. The same problem seems to affect
synproxy state, by the way.

Can I expect better performance from IPFW's dynamic rules? I wonder how
one can protect oneself on a gigabit network and still service more
than 500 pps.

For example, in my test lab I see ~400k incoming packets per second,
but once I activate pf I see only 130-140k packets per second. Is this
the expected behavior if pf cannot handle that many packets? The
missing 250k+ packets are not listed as discarded or as any other
error, which is odd.

--
Best Wishes,
Stefan Lambrev
ICQ# 24134177
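The "set timeout interval" change suggested above is a single line in
pf.conf. A minimal sketch of a ruleset carrying it (the surrounding
rules are placeholders, not from this thread, and 60 seconds is just
the example value from the reply):

    # 60 s is the example value suggested above; the pf.conf default is 10 s.
    # Purging expired states less often shortens the average hold time of the
    # purge-side "pf task mtx" acquisition, at the cost of letting expired
    # states (and the memory they occupy) linger longer between purge runs.
    set timeout interval 60

    # placeholder rules, only to make the snippet a complete ruleset
    set skip on lo0
    pass in all keep state
    pass out all keep state

Reloading the ruleset with "pfctl -f /etc/pf.conf" picks up the new
interval without flushing existing states.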