On Thu, 6 May 2004, Bruce Evans wrote:

> On Wed, 5 May 2004, Robert Watson wrote:
>
> > The reason to run with these patches is that, without them, it's not safe
> > to run without Giant over the lower half of the network stack, so all the
> > network interrupt threads are running with Giant otherwise.
>
> What does this particular finer grained locking do for the number of
> locks. Nothing good I fear.

Well, it introduces more, certainly, and this is definitely a first cut that
we will want to refine substantially. However, it offers advantages on both
UP and SMP: on UP by offering lower latency for network processing through
the opportunity to preempt earlier and not contend with Giant on the other
half of the kernel, and on SMP by offering lower latency, improving
parallelism, and reducing contention. At least, that's what it offers in the
presence of a variety of other work to improve interrupt handling,
scheduling, etc. That said, on SMP I see a several-times speedup in network
performance in network-intensive activities.

> > Also, in that patchset, network processing to completion runs in the
> > ithread rather than the netisr, so you get lower latency and more
> > parallelism even for bridging.
>
> Doesn't it give less parallelism, and higher latency for other
> interrupts, but lower latency for this interrupt of course?

Well, it depends a fair amount on your workload and configuration. On SMP
with bridging, it should allow the interrupt threads for the two interfaces
to run on different CPUs simultaneously, acting as pipelines, subject to
lock contention and other load. In SMP IP packet forwarding tests I ran,
this actually seemed to happen fairly reasonably and offered substantial
performance benefits.

> > Another known issue is the latency from interrupt generation to
> > interrupt task execution in the interrupt thread.
> Especially with the current bugs:
>
> - interrupt thread execution is often or always deferred until the running
>   thread gives up control or returns to userland, and all netisr and other.
>   This bug has been active since last November. Before then, the indefinite
>   deferral only happened occasionally.
> - netisr and most other SWI priorities are backwards relative to softclock's
>   priority. This bug seems to have been active since the first version of
>   SMPng.

Well, both of these bugs would explain regressions I've seen in latency for
network I/O handling, but I hadn't yet been able to track them down to a
particular source. One of the things I'd like to do when the first pass at
top-to-bottom Giant-free operation is complete is sit down with KTR and look
at a complete trace of context switches during various types of operations.
I suspect we'll see a lot of nits we'll need to track down, which will have
a substantial performance benefit when corrected. For example, it looks like
our CV implementation generates an extra context switch on wakeup that we
can probably eliminate. From chatting with John, it sounds like the first
one is resolved in his work branch, but is not yet tested enough for a merge.

Robert N M Watson             FreeBSD Core Team, TrustedBSD Projects
robert_at_fledge.watson.org   Senior Research Scientist, McAfee Research

Received on Thu May 06 2004 - 06:32:22 UTC
This archive was generated by hypermail 2.4.0 : Wed May 19 2021 - 11:37:53 UTC