Re: Some netgraph node global locking patches

From: Julian Elischer <julian_at_elischer.org>
Date: Wed, 14 Jul 2004 10:31:29 -0700
Gleb Smirnoff wrote:

>R> > one AC.
>R> 
>R> This sounds good.  I've chatted some with Julian about locking down access
>R> to private data, but I've not had a chance to really digest and think
>R> through it yet.  Is this something you're interested in looking at? :-)
>
>Yes, it is. AFAIU, at this moment netgraph locks a node in such a way that only one
>netgraph callback can run at a time. That means one thread per node, so there is
>no need to lock private data separately. Am I missing something?
>

There are ways to enter the node that bypass the locks. For example, in the
ng_tty node the calls made by the tty line discipline do not go through a lock.
I am thinking about adding a method to the node, with a standard default, that
is for direct callers. It would be similar to calling ng_send_fn() with your
data rather than just jumping in and doing your stuff.

Basically the problem only exists in nodes that are part netgraph and part
"something else": the "something else" code can access the internal structures
etc. without taking any locks.
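
To make that concrete, here is a minimal sketch of what such a "direct caller"
path could look like. The softc layout and the helper names (ngt_softc,
ngt_deliver, ngt_input_from_tty) are illustrative only, not the real ng_tty.c
ones; the point is that the outside code hands its work to ng_send_fn()
instead of touching the node's private data itself:

#include <sys/param.h>
#include <sys/systm.h>
#include <sys/kernel.h>
#include <sys/malloc.h>
#include <sys/mbuf.h>
#include <netgraph/ng_message.h>
#include <netgraph/netgraph.h>

/*
 * Illustrative softc; the real node's private data would live here and
 * would only be touched from code serialized by netgraph.
 */
struct ngt_softc {
	hook_p	hook;		/* hook that received data is passed out of */
};

/*
 * Runs as a netgraph "function item", i.e. serialized against the
 * node's other callbacks, so the private data is safe to use.
 */
static void
ngt_deliver(node_p node, hook_p hook, void *arg1, int arg2)
{
	struct ngt_softc *sc = NG_NODE_PRIVATE(node);
	struct mbuf *m = arg1;
	int error;

	if (sc->hook != NULL)
		NG_SEND_DATA_ONLY(error, sc->hook, m);
	else
		m_freem(m);
}

/*
 * Called from the "something else" side (e.g. the line discipline).
 * If the node is busy, the item is left queued and run later instead
 * of racing with whatever is already running in the node.
 */
static int
ngt_input_from_tty(node_p node, struct mbuf *m)
{
	return (ng_send_fn(node, NULL, ngt_deliver, m, 0));
}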


>
>R> > R>   //depot/vendor/freebsd/src/sys/netgraph/ng_tty.c
>R> > R> +/*
>R> > R> + * XXXRW: ngt_unit is protected by ng_tty_mtx.  ngt_ldisc is constant once
>R> > R> + * ng_tty is initialized.  ngt_nodeop_ok is untouched, and might want to be a
>R> > R> + * sleep lock in the future?
>R> > R> + */
>R> > 
>R> > One question: are any locks held when linesw callbacks (ngt_open,
>R> > ngt_close, etc..) are called? 
>R> 
>R> Hmm.  That is an interesting problem indeed.  The tty code currently
>R> requires Giant, although Poul-Henning is doing cleanup that will hopefully
>R> lead to locking at some point.  In the SLIP code, I currently use a
>R> taskqueue to defer calls into the tty code so that Giant is not grabbed at
>R> arbitrary points in the SLIP processing, which may be called into while
>R> holding protocol layer locks.  It looks like a lot of the tty access in
>R> ng_tty.c is done from the device node calls associated with the tty
>R> subsystem, and therefore Giant will already be held (which is fine,
>R> although not 100% desirable).  The problematic bit is the call to
>R> ngt_start() in ngt_rcvdata(), and synchronizing the mbuf queue in the node
>R> between various consumers.  We could very easily use the same trick here I
>R> use in SLIP by deferring ngt_start() to a Giant-holding task queue.  We
>R> could add a 'struct task' to the node softc and just use that, for
>R> example. 
>
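
A rough sketch of the taskqueue trick described above, assuming hypothetical
field and helper names (start_task, ngt_start_deferred, ngt_kick_output) and
that ngt_start() takes the tty pointer, rather than the actual ng_tty.c code:

#include <sys/param.h>
#include <sys/systm.h>
#include <sys/kernel.h>
#include <sys/mbuf.h>
#include <sys/taskqueue.h>
#include <sys/tty.h>

int	ngt_start(struct tty *tp);	/* the node's existing start routine */

/*
 * Illustrative softc with the deferral state added; the real ng_tty
 * softc has more in it.
 */
struct ngt_softc {
	struct tty	*tp;		/* tty we are hooked to */
	struct task	 start_task;	/* deferred call to ngt_start() */
};

/* Task handler: run from the swi_giant taskqueue, i.e. with Giant held. */
static void
ngt_start_deferred(void *context, int pending)
{
	struct ngt_softc *sc = context;

	(void)ngt_start(sc->tp);
}

/* Done once when the softc is set up. */
static void
ngt_task_init(struct ngt_softc *sc)
{
	TASK_INIT(&sc->start_task, 0, ngt_start_deferred, sc);
}

/*
 * Called from ngt_rcvdata() after the mbuf has been queued, instead of
 * calling ngt_start() directly while protocol-layer locks may be held.
 */
static void
ngt_kick_output(struct ngt_softc *sc)
{
	taskqueue_enqueue(taskqueue_swi_giant, &sc->start_task);
}

Since the swi_giant taskqueue runs its handlers with Giant held, the tty code
gets entered there rather than at arbitrary points in the protocol path.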


Possibly the answer is to make the tty discipline code call ng_send_fn() with
its data, and just leave it queued if it can't get the lock.

>
>I have to get better understanding of tty code, then make comments :)
>May be Poul-Henning has some ideas.
>
>  
>
Received on Wed Jul 14 2004 - 15:31:34 UTC
