Hi, Konstantin,

Thank you for your prompt response.

> The reason for the testcancel() calls there is that async cancellation
> is defined as to be acted upon at any time after the cancellation is
> enabled. If we have pending cancellation request, the async cancellation
> must proceed after the pthread_setcancelstate(PTHREAD_CANCEL_ENABLE).
>
> I see a bug there, namely, we should not process the cancellation if
> the type is deferred. Is this a real issue you are concerned with ?

I think your change is right.  I found this issue with my sample
source.  My sample source worked fine on 8.1-RELEASE, but it deadlocks
on -CURRENT, because my_mutex is never released.  The change that
introduced this behavior was committed after 8.1-RELEASE.  (A rough
sketch of the kind of program that hits this is at the end of this
message.)

void my_cleanup(void *arg)
{
	int oldstate;

	pthread_mutex_lock(&my_mutex);
	pthread_setcancelstate(PTHREAD_CANCEL_DISABLE, &oldstate);
	...
	pthread_setcancelstate(oldstate, NULL);
	pthread_mutex_unlock(&my_mutex);
}

I also investigated glibc.  In glibc, pthread_setcancelstate() calls
__do_cancel() only when the new state is enabled, a cancel request is
pending, and the cancellation type is asynchronous
(CANCEL_ENABLED_AND_CANCELED_AND_ASYNCHRONOUS):

glibc-2.17/nptl/pthread_setcancelstate.c:

int
__pthread_setcancelstate (state, oldstate)
     int state;
     int *oldstate;
{
  volatile struct pthread *self;

  if (state < PTHREAD_CANCEL_ENABLE || state > PTHREAD_CANCEL_DISABLE)
    return EINVAL;

  self = THREAD_SELF;

  int oldval = THREAD_GETMEM (self, cancelhandling);
  while (1)
    {
      int newval = (state == PTHREAD_CANCEL_DISABLE
                    ? oldval | CANCELSTATE_BITMASK
                    : oldval & ~CANCELSTATE_BITMASK);

      /* Store the old value.  */
      if (oldstate != NULL)
        *oldstate = ((oldval & CANCELSTATE_BITMASK)
                     ? PTHREAD_CANCEL_DISABLE : PTHREAD_CANCEL_ENABLE);

      /* Avoid doing unnecessary work.  The atomic operation can
         potentially be expensive if the memory has to be locked and
         remote cache lines have to be invalidated.  */
      if (oldval == newval)
        break;

      /* Update the cancel handling word.  This has to be done
         atomically since other bits could be modified as well.  */
      int curval = THREAD_ATOMIC_CMPXCHG_VAL (self, cancelhandling, newval,
                                              oldval);
      if (__builtin_expect (curval == oldval, 1))
        {
          if (CANCEL_ENABLED_AND_CANCELED_AND_ASYNCHRONOUS (newval))
            __do_cancel ();

          break;
        }

      /* Prepare for the next round.  */
      oldval = curval;
    }

  return 0;
}

Many thanks,
Kohji Okuno

> On Tue, Jun 18, 2013 at 06:15:56PM +0900, Kohji Okuno wrote:
>> Hi,
>>
>> This is the newer document:
>>
>> http://pubs.opengroup.org/onlinepubs/9699919799/functions/V2_chap02.html#tag_15_09_05_02
>>
>> > Hi,
>> >
>> > I have a question.
>> >
>> > Are pthread_setcancelstate() and pthread_setcanceltype() cancellation
>> > points?
>> >
>> > I referred to the following URL:
>> >
>> > http://pubs.opengroup.org/onlinepubs/7908799/xsh/threads.html
>> >
>> > This document shows that the only cancellation points in the pthread
>> > library are pthread_cond_timedwait(), pthread_cond_wait(),
>> > pthread_join() and pthread_testcancel().
>> >
>> > But the current implementation in FreeBSD is the following.
>> > (Please take notice of the "(*)" marks.)
>> >
>> > Would you check this?
> The reason for the testcancel() calls there is that async cancellation
> is defined as to be acted upon at any time after the cancellation is
> enabled. If we have pending cancellation request, the async cancellation
> must proceed after the pthread_setcancelstate(PTHREAD_CANCEL_ENABLE).
>
> I see a bug there, namely, we should not process the cancellation if
> the type is deferred.
> Is this a real issue you are concerned with ?
>
> diff --git a/lib/libthr/thread/thr_cancel.c b/lib/libthr/thread/thr_cancel.c
> index 89f0ee1..beae707 100644
> --- a/lib/libthr/thread/thr_cancel.c
> +++ b/lib/libthr/thread/thr_cancel.c
> @@ -87,7 +87,8 @@ _pthread_setcancelstate(int state, int *oldstate)
>  		break;
>  	case PTHREAD_CANCEL_ENABLE:
>  		curthread->cancel_enable = 1;
> -		testcancel(curthread);
> +		if (curthread->cancel_async)
> +			testcancel(curthread);
>  		break;
>  	default:
>  		return (EINVAL);
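
P.S.  For reference, here is a rough sketch of the kind of program
that can hit the deadlock.  This is only an illustration, not my exact
sample: worker(), the "go" flag and the main() driver are made up
here; my_mutex and my_cleanup() are the ones shown above.  With
correct deferred cancellation the request is only acted on at
pthread_testcancel(), after my_mutex has been unlocked; with the
unconditional testcancel() in pthread_setcancelstate(), the worker
exits inside my_cleanup() while still holding my_mutex, and main()
blocks forever.

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

static pthread_mutex_t my_mutex = PTHREAD_MUTEX_INITIALIZER;
static atomic_int go;

/* Same shape as my_cleanup() above, with the "..." left empty. */
static void
my_cleanup(void *arg)
{
	int oldstate;

	(void)arg;
	pthread_mutex_lock(&my_mutex);
	pthread_setcancelstate(PTHREAD_CANCEL_DISABLE, &oldstate);
	/* ... */
	pthread_setcancelstate(oldstate, NULL);	/* unpatched libthr cancels here */
	pthread_mutex_unlock(&my_mutex);	/* then this is never reached */
}

static void *
worker(void *arg)
{
	(void)arg;
	/* Cancellation type is deferred (the default). */
	atomic_store(&go, 1);		/* ask main() to cancel us */
	while (atomic_load(&go) != 2)	/* spin; not a cancellation point */
		;
	my_cleanup(NULL);		/* the cancel request is already pending */
	pthread_testcancel();		/* the correct place to act on it */
	return (NULL);
}

int
main(void)
{
	pthread_t tid;

	pthread_create(&tid, NULL, worker, NULL);
	while (atomic_load(&go) != 1)	/* wait until the worker is running */
		;
	pthread_cancel(tid);		/* the request is now pending */
	atomic_store(&go, 2);		/* let the worker enter my_cleanup() */
	pthread_join(tid, NULL);

	pthread_mutex_lock(&my_mutex);	/* deadlocks without your patch */
	pthread_mutex_unlock(&my_mutex);
	printf("ok\n");
	return (0);
}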