Re: [RFC] Outline of USB process integration in the kernel taskqueue system

From: Matthew Fleming <mdf356_at_gmail.com>
Date: Sat, 6 Nov 2010 06:57:50 -0700
On Sat, Nov 6, 2010 at 1:37 AM, Hans Petter Selasky <hselasky_at_c2i.net> wrote:
> On Friday 05 November 2010 20:06:12 John Baldwin wrote:
>> On Friday, November 05, 2010 3:00:37 pm Hans Petter Selasky wrote:
>> > On Friday 05 November 2010 19:48:05 Matthew Fleming wrote:
>> > > On Fri, Nov 5, 2010 at 11:45 AM, Hans Petter Selasky <hselasky_at_c2i.net>
>> >
>> > wrote:
>> > > > On Friday 05 November 2010 19:39:45 Matthew Fleming wrote:
>> > > >> True, but no taskqueue(9) code can handle that.  Only the caller can
>> > > >> prevent a task from becoming enqueued again.  The same issue exists
>> > > >> with taskqueue_drain().
>> > > >
>> > > > I find that strange, because that means if I queue a task again while
>> > > > it is running, then it doesn't get run? Are you really sure?
>> > >
>> > > If a task is currently running when enqueued, the task struct will be
>> > > re-enqueued to the taskqueue.  When that task comes up as the head of
>> > > the queue, it will be run again.
>> >
>> > Right, and the taskqueue_cancel has to cancel in that state too, but it
>> > doesn't because it only checks pending if !running() :-) ??
>>
>> You can't close that race in taskqueue_cancel().  You have to manage that
>> race yourself in your task handler.  For the callout(9) API we are only
>> able to close that race if you use callout_init_mtx() so that the code
>> managing the callout wheel can make use of your lock to resolve the races.
>> If you use callout_init() you have to explicitly manage these races in your
>> callout handler.
>
> Hi,
>
> I think you are misunderstanding me! In the initial "0001-Implement-
> taskqueue_cancel-9-to-cancel-a-task-from-a.patch" you have the following
> code chunk:
>
> +int
> +taskqueue_cancel(struct taskqueue *queue, struct task *task)
> +{
> +       int rc;
> +
> +       TQ_LOCK(queue);
> +       if (!task_is_running(queue, task)) {
> +               if ((rc = task->ta_pending) > 0)
> +                       STAILQ_REMOVE(&queue->tq_queue, task, task, ta_link);
> +               task->ta_pending = 0;
> +       } else
> +               rc = -EBUSY;
> +       TQ_UNLOCK(queue);
> +       return (rc);
> +}
>
> This code should be written like this, with the if (!task_is_running())
> check coming after the check of the ta_pending variable:
>
> +int
> +taskqueue_cancel(struct taskqueue *queue, struct task *task)
> +{
> +       int rc;
> +
> +       TQ_LOCK(queue);
> +       if ((rc = task->ta_pending) > 0) {
> +               STAILQ_REMOVE(&queue->tq_queue, task, task, ta_link);
> +               task->ta_pending = 0;
> +       }
> +       if (!task_is_running(queue, task))
> +               rc = -EBUSY;
> +       TQ_UNLOCK(queue);
> +       return (rc);
> +}
>
> Or completely skip the !task_is_running() check. Otherwise you are opening up
> a race in this function! Do I need to explain that more? Isn't it obvious?

I think you're misunderstanding the existing taskqueue(9) implementation.

As long as TQ_LOCK is held, the state of task->ta_pending cannot change,
nor can the set of running tasks.  So the order of checks is
irrelevant.

As John said, the taskqueue(9) implementation cannot protect consumers
of it from re-queueing a task; that kind of serialization needs to
happen at a higher level.

taskqueue(9) is not quite like callout(9); the taskqueue(9)
implementation drops all locks before calling the task's callback
function.  So once the task is running, taskqueue(9) can do nothing
about it until the task stops running.  This is why Jeff's
implementation of taskqueue_cancel(9) slept if the task was running,
and why mine returns an error code.  The only way to know for sure
that a task is "about" to run is to ask taskqueue(9), because there's
a window where the TQ_LOCK is dropped before the callback is entered.

Thanks,
matthew
Received on Sat Nov 06 2010 - 12:57:51 UTC