Sigh, this ends up being ugly I'm afraid. I need some time to look at
code and think about it.

Jack

On Mon, Aug 5, 2013 at 10:36 AM, Luigi Rizzo <rizzo_at_iet.unipi.it> wrote:
> On Mon, Aug 5, 2013 at 7:17 PM, Adrian Chadd <adrian_at_freebsd.org> wrote:
>
> > I'm travelling back to San Jose today; poke me tomorrow and I'll brain
> > dump what I did in ath(4) and the lessons learnt.
> >
> > The TL;DR version - you don't want to grab an extra lock in the
> > read/write paths as that slows things down. Reuse the same per-queue
> > TX/RX lock and have:
> >
> > * a "reset" flag that is set when something is resetting; it tells the
> >   queue "don't bother processing anything, just dive out";
> > * an "I am doing TX/RX" flag per queue that is set at the start of
> >   TX/RX servicing and cleared at the end; that way the reset code knows
> >   if there's something pending;
> > * have the reset path grab each lock, set the "reset" flag on each,
> >   then walk each queue again and make sure they're all marked as "not
> >   doing TX/RX". At that point the reset can occur, then the flag can be
> >   cleared, then TX/RX can resume.
>
> So this is slightly different from what Bryan suggested (and you endorsed)
> before, as in that case there was a single "reset" flag, IFF_DRV_RUNNING,
> protected by the "core" lock, then a nested round on all tx and rx locks
> to make sure that all customers have seen it.
> In both cases the tx and rx paths only need the per-queue lock.
>
> As I see it, having a per-queue reset flag removes the need for nesting
> core + queue locks, but since this is only in the control path perhaps
> it is not a big deal (and it is better to have a single place to look at
> to tell whether or not we should bail out).
> cheers
> luigi

Received on Mon Aug 05 2013 - 15:49:16 UTC