Dear all,

In the next couple of days (exact schedule depends on how testing goes), I'll merge a portion of the rwlock patch that I've developed and that Kris has been testing. This opens the door to increased parallelism in the network stack by allowing UDP and TCP to move to read-locking of certain data structures under certain conditions. These patches were originally developed to address known high lock contention when running the BIND9 and nsd name servers, which use a single UDP socket from many threads simultaneously.

In the first pass, the changes simply substitute an rwlock for a mutex and convert the accessor macros to explicit write-locking for inpcbs; for pcbinfo, we allow read locking to be used in certain restricted situations, and generalize certain lock assertions so that they are satisfied by read locks as well. In practice, this pass should make little functional difference, as all key paths remain protected by exclusive locking. The change will then be left to settle for a bit, to shake out any problems that didn't turn up in testing so far, and then...

In the second pass, write locking of the pcbinfo lock is replaced with read locking in the UDP input paths, and write locking of the inpcb is replaced with read locking in many cases in the output paths, eliminating one source of high lock contention for BIND/nsd. TCP continues to use exclusive locking in all cases with this set of changes.

It is my understanding, and I need to confirm this, that struct rwlock is the same size as struct mtx, meaning that (a) monitoring tools don't need to be rebuilt, and (b) these changes are potential MFC candidates. If you run into ABI problems with monitoring tools after the merge, please let me know (including architecture information, etc.).

Thanks,

Robert N M Watson
Computer Laboratory
University of Cambridge
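
For readers less familiar with the kernel rwlock(9) primitive, here is a minimal sketch of the kind of accessor-macro conversion described above. It is not the committed diff: the struct, field, and macro names (foo_pcb, FOO_WLOCK, and so on) are hypothetical stand-ins for the inpcb/pcbinfo macros, though the rw_init/rw_wlock/rw_rlock/rw_assert calls are the standard rwlock(9) interface.

    /*
     * Illustrative sketch of a mutex-to-rwlock accessor conversion.
     * All names below are hypothetical; only the rwlock(9) API is real.
     */
    #include <sys/param.h>
    #include <sys/lock.h>
    #include <sys/rwlock.h>

    struct foo_pcb {
            struct rwlock   fp_lock;        /* was: struct mtx fp_mtx */
    };

    /* Once, at pcb initialization. */
    #define FOO_LOCK_INIT(fp)      rw_init(&(fp)->fp_lock, "foopcb")

    /* Pass 1: all paths take the write lock, preserving exclusion. */
    #define FOO_WLOCK(fp)          rw_wlock(&(fp)->fp_lock)
    #define FOO_WUNLOCK(fp)        rw_wunlock(&(fp)->fp_lock)

    /* Pass 2: read-mostly paths (e.g., UDP input) move to these. */
    #define FOO_RLOCK(fp)          rw_rlock(&(fp)->fp_lock)
    #define FOO_RUNLOCK(fp)        rw_runlock(&(fp)->fp_lock)

    /* Generalized assertion: satisfied by a read or a write lock. */
    #define FOO_LOCK_ASSERT(fp)    rw_assert(&(fp)->fp_lock, RA_LOCKED)
    #define FOO_WLOCK_ASSERT(fp)   rw_assert(&(fp)->fp_lock, RA_WLOCKED)

The point of the two-pass structure is visible here: pass 1 changes only the lock type while every consumer still takes the write lock, so behavior is unchanged and any fallout is attributable to the primitive itself; pass 2 then relaxes individual call sites to FOO_RLOCK where shared access is provably safe.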