Re: Instant panic while trying to run ports-mgmt/poudriere

From: Konstantin Belousov <kostikbel@gmail.com>
Date: Mon, 24 Aug 2015 11:10:15 +0300
On Sun, Aug 23, 2015 at 10:35:44PM -0700, John-Mark Gurney wrote:
> Konstantin Belousov wrote this message on Sun, Aug 23, 2015 at 15:54 +0300:
> >  	if (kev->flags & EV_ADD)
> > -		tkn = knote_alloc(waitok);	/* prevent waiting with locks */
> > +		/*
> > +		 * Prevent waiting with locks.  Non-sleepable
> > +		 * allocation failures are handled in the loop, only
> > +		 * if the spare knote appears to be actually required.
> > +		 */
> > +		tkn = knote_alloc(waitok);
> 
> if you add this comment, please add curly braces around the block...
Ok.

> 
> >  	else
> >  		tkn = NULL;
> >  
> > @@ -1310,8 +1315,7 @@ done:
> >  		FILEDESC_XUNLOCK(td->td_proc->p_fd);
> >  	if (fp != NULL)
> >  		fdrop(fp, td);
> > -	if (tkn != NULL)
> > -		knote_free(tkn);
> > +	knote_free(tkn);
> 
> Probably should just change knote_free to a static inline that does
> a uma_zfree, as uma_zfree also does nothing if the input is NULL...
This was already done in the patch (the removal of the NULL check in
knote_free()). I usually do not add excessive inline keywords. Compilers
are good, sometimes even too good, at figuring out the possibilities for
inlining. knote_free() is inlined automatically.

> > @@ -1948,7 +1948,7 @@ knote(struct knlist *list, long hint, int lockflags)
> >  	 * only safe if you want to remove the current item, which we are
> >  	 * not doing.
> >  	 */
> > -	SLIST_FOREACH(kn, &list->kl_list, kn_selnext) {
> > +	SLIST_FOREACH_SAFE(kn, &list->kl_list, kn_selnext, tkn) {
> 
> Clearly you didn't read the comment that precedes this line, or at
> least didn't update it:
>          * SLIST_FOREACH, SLIST_FOREACH_SAFE is not safe in our case, it is
>          * only safe if you want to remove the current item, which we are
>          * not doing.
> 
> So, you'll need to be more specific in why this needs to change...
> When I wrote this code, I spent a lot of time looking at this, and
> reasoned as to why SLIST_FOREACH_SAFE was NOT correct usage here...
I explained what happens in the message.  The knote list is modified
by the filter, see knlist_remove_inevent() call in filt_proc().

> 
> >  		kq = kn->kn_kq;
> >  		KQ_LOCK(kq);
> >  		if ((kn->kn_status & (KN_INFLUX | KN_SCAN)) == KN_INFLUX) {
> > @@ -2385,15 +2385,16 @@ SYSINIT(knote, SI_SUB_PSEUDO, SI_ORDER_ANY, knote_init, NULL);
> >  static struct knote *
> >  knote_alloc(int waitok)
> >  {
> > -	return ((struct knote *)uma_zalloc(knote_zone,
> > -	    (waitok ? M_WAITOK : M_NOWAIT)|M_ZERO));
> > +
> > +	return (uma_zalloc(knote_zone, (waitok ? M_WAITOK : M_NOWAIT) |
> > +	    M_ZERO));
> >  }
> >  
> >  static void
> 
> per above, we should add inline here...
> 
> >  knote_free(struct knote *kn)
> >  {
> > -	if (kn != NULL)
> > -		uma_zfree(knote_zone, kn);
> > +
> > +	uma_zfree(knote_zone, kn);
> >  }
> >  
> >  /*
> 
> I agree w/ all the non-SLIST changes, but I disagree w/ the SLIST
> change as I don't believe that all cases were considered...
What cases do you mean ?

The patch does not unlock the knlist lock in the iteration. As such,
the only thread which could remove elements from the knlist, or
rearrange the list, while the loop is active, is the current thread. So
I claim that only the currently iterated element can be removed, and
the next list element stays valid. This is enough for the _SAFE loop to
work.

Why do you think that _SAFE is incorrect ? The comment talks about a
very different case, where the knlist lock is dropped. Then indeed,
another thread may iterate in parallel and invalidate the memoized next
element while KN_INFLUX is set for the current element and the knlist
lock is dropped. But _SAFE in sys/queue.h never means 'safe for
parallel mutators', it only means 'safe for the current iterator
removing the current element'.

I preferred not to touch the comment until it is confirmed that the
change helps.  I reformulated it now, trying to keep the note about
unlocking (but is it useful ?).

> 
> Anyways, the other changes shouldn't be committed w/ the SLIST change
> as they are unrelated...
Sure, I posted the diff against the WIP branch.  The commits will be split.

diff --git a/sys/kern/kern_event.c b/sys/kern/kern_event.c
index a4388aa..0e26a78 100644
--- a/sys/kern/kern_event.c
+++ b/sys/kern/kern_event.c
@@ -1105,10 +1105,16 @@ kqueue_register(struct kqueue *kq, struct kevent *kev, struct thread *td, int wa
 	if (fops == NULL)
 		return EINVAL;
 
-	if (kev->flags & EV_ADD)
-		tkn = knote_alloc(waitok);	/* prevent waiting with locks */
-	else
+	if (kev->flags & EV_ADD) {
+		/*
+		 * Prevent waiting with locks.  Non-sleepable
+		 * allocation failures are handled in the loop, only
+		 * if the spare knote appears to be actually required.
+		 */
+		tkn = knote_alloc(waitok);
+	} else {
 		tkn = NULL;
+	}
 
 findkn:
 	if (fops->f_isfd) {
@@ -1310,8 +1316,7 @@ done:
 		FILEDESC_XUNLOCK(td->td_proc->p_fd);
 	if (fp != NULL)
 		fdrop(fp, td);
-	if (tkn != NULL)
-		knote_free(tkn);
+	knote_free(tkn);
 	if (fops != NULL)
 		kqueue_fo_release(filt);
 	return (error);
@@ -1507,10 +1512,6 @@ kqueue_scan(struct kqueue *kq, int maxevents, struct kevent_copyops *k_ops,
 	} else
 		asbt = 0;
 	marker = knote_alloc(1);
-	if (marker == NULL) {
-		error = ENOMEM;
-		goto done_nl;
-	}
 	marker->kn_status = KN_MARKER;
 	KQ_LOCK(kq);
 
@@ -1929,7 +1930,7 @@ void
 knote(struct knlist *list, long hint, int lockflags)
 {
 	struct kqueue *kq;
-	struct knote *kn;
+	struct knote *kn, *tkn;
 	int error;
 
 	if (list == NULL)
@@ -1941,14 +1942,13 @@ knote(struct knlist *list, long hint, int lockflags)
 		list->kl_lock(list->kl_lockarg); 
 
 	/*
-	 * If we unlock the list lock (and set KN_INFLUX), we can eliminate
-	 * the kqueue scheduling, but this will introduce four
-	 * lock/unlock's for each knote to test.  If we do, continue to use
-	 * SLIST_FOREACH, SLIST_FOREACH_SAFE is not safe in our case, it is
-	 * only safe if you want to remove the current item, which we are
-	 * not doing.
+	 * If we unlock the list lock (and set KN_INFLUX), we can
+	 * eliminate the kqueue scheduling, but this will introduce
+	 * four lock/unlock's for each knote to test.  Also, marker
+	 * would be needed to keep iteration position, since filters
+	 * or other threads could remove events.
 	 */
-	SLIST_FOREACH(kn, &list->kl_list, kn_selnext) {
+	SLIST_FOREACH_SAFE(kn, &list->kl_list, kn_selnext, tkn) {
 		kq = kn->kn_kq;
 		KQ_LOCK(kq);
 		if ((kn->kn_status & (KN_INFLUX | KN_SCAN)) == KN_INFLUX) {
@@ -2385,15 +2385,16 @@ SYSINIT(knote, SI_SUB_PSEUDO, SI_ORDER_ANY, knote_init, NULL);
 static struct knote *
 knote_alloc(int waitok)
 {
-	return ((struct knote *)uma_zalloc(knote_zone,
-	    (waitok ? M_WAITOK : M_NOWAIT)|M_ZERO));
+
+	return (uma_zalloc(knote_zone, (waitok ? M_WAITOK : M_NOWAIT) |
+	    M_ZERO));
 }
 
 static void
 knote_free(struct knote *kn)
 {
-	if (kn != NULL)
-		uma_zfree(knote_zone, kn);
+
+	uma_zfree(knote_zone, kn);
 }
 
 /*