RE: 4.7 vs 5.2.1 SMP/UP bridging performance

From: Robert Watson <rwatson_at_freebsd.org>
Date: Wed, 5 May 2004 17:38:38 -0400 (EDT)

On Tue, 4 May 2004, Gerrit Nagelhout wrote:

> I tried enabling debug.mpsafenet, but it didn't make any difference. 
> Which parts of the bridging path do you think should be faster with that
> enabled? 

What network card are you running with?  Some of the in-tree drivers don't
set INTR_MPSAFE and enable their fine-grained locking by default, so they
require manual tweaking.  I've been meaning to have the netperf patch do
all that tweaking by default, but haven't yet.  Specifically, if_xl is one
of these drivers.

> I haven't actually tried implementing polling from multiple CPUs, but
> suggested it because I think it would help performance for certain
> applications (such as bridging).  What I would probably do 
> (without having given this a great deal of thought) is to:
> 1) Have a variable controlling how many threads to use for polling
> 2) Either lock an interface to a thread, or have interfaces switch
>    between threads depending on their load dynamically.

These would all be very interesting things to explore.  One thing you may
get now, when running with netperf and the ULE scheduler, is some decent
affinity of interfaces to CPUs by virtue of thread affinity for the
interrupt thread.  I don't remember whether our current scheduler really
does "affinity", but it does assign a cost to migration, which has a
similar effect.

> One obvious problem with this approach will be mutex contention
> between threads.  Even though the source interface would be owned
> by a thread, the destination would likely be owned by a different
> thread.  I'm assuming that with the current mutex setup, only one
> thread can receive from or transmit to an interface at a time.

Yes.  Most drivers implement a per-interface instance lock that's required
for both the send and receive path on the interface.  In many cases,
that's because I/O to the card has to be serialized.  I'm not opposed to
having separate locks for send/receive, as long as we properly serialize
the I/O to the card without introducing excessive overhead.

Robert N M Watson             FreeBSD Core Team, TrustedBSD Projects
robert_at_fledge.watson.org      Senior Research Scientist, McAfee Research
Received on Wed May 05 2004 - 12:39:55 UTC