Re: [patch] i386 pmap sysmaps_pcpu[] atomic access

From: Konstantin Belousov <kostikbel@gmail.com>
Date: Mon, 18 Feb 2013 22:36:30 +0200
On Mon, Feb 18, 2013 at 09:27:40PM +0100, Svatopluk Kraus wrote:
> On Mon, Feb 18, 2013 at 6:09 PM, Konstantin Belousov
> > <kostikbel@gmail.com> wrote:
> > On Mon, Feb 18, 2013 at 06:06:42PM +0100, Svatopluk Kraus wrote:
> >> On Mon, Feb 18, 2013 at 4:08 PM, Konstantin Belousov
> >> <kostikbel@gmail.com> wrote:
> >> > On Mon, Feb 18, 2013 at 01:44:35PM +0100, Svatopluk Kraus wrote:
> >> >> Hi,
> >> >>
> >> >>    the access to sysmaps_pcpu[] should be atomic with respect to
> >> >> thread migration. Otherwise, the sysmaps for one CPU can be stolen by
> >> >> another CPU and the purpose of the per-CPU sysmaps is defeated. A
> >> >> patch is enclosed.
> >> > And what problems are caused by the 'otherwise'?
> >> > I do not see any.
> >>
> >> The 'otherwise' issue is the following:
> >>
> >> 1. A thread is running on CPU0.
> >>
> >>         sysmaps = &sysmaps_pcpu[PCPU_GET(cpuid)];
> >>
> >> 2. The sysmaps variable now contains a pointer to the 'CPU0' sysmaps.
> >> 3. Now, the thread migrates to CPU1.
> >> 4. However, the sysmaps variable still contains a pointer to the
> >> 'CPU0' sysmaps.
> >>
> >>       mtx_lock(&sysmaps->lock);
> >>
> >> 5. The thread now running on CPU1 has locked the 'CPU0' sysmaps mutex,
> >> so it can uselessly block another thread running on CPU0. Maybe that
> >> is not a problem. However, it definitely defeats the reason why the
> >> per-CPU sysmaps (one for each CPU) exist.
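> >>
> >> A minimal sketch of one possible fix (assuming the standard
> >> sched_pin()/sched_unpin() KPI; this is an illustration, not the
> >> enclosed patch itself):
> >>
> >>         struct sysmaps *sysmaps;
> >>
> >>         sched_pin();            /* forbid migration off this CPU */
> >>         sysmaps = &sysmaps_pcpu[PCPU_GET(cpuid)];
> >>         mtx_lock(&sysmaps->lock);  /* really this CPU's sysmaps */
> >>         /* ... use the per-CPU mappings ... */
> >>         mtx_unlock(&sysmaps->lock);
> >>         sched_unpin();          /* allow migration again */
> >>
> >> Note that this acquires the mutex while the thread is pinned, which
> >> is exactly the pattern questioned below.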
> > So what ?
> 
> It depends. Do you not understand it, or do you think it's ok? Tell me.
> 
Both. I do not understand your concern, and I think that the code is fine.

Both threads in your description make useful progress, and computation
proceeds correctly.

> 
> >>
> >>
> >> > Really, taking the mutex while bound to a CPU could be deadlock-prone
> >> > in some situations.
> >> >
> >> > This was discussed at least once before. Perhaps a comment saying that
> >> > there is no issue should be added.
> >>
> >> I missed the discussion. Can you point me to it, please? A deadlock is
> >> not a problem here; however, I could be wrong, as I cannot imagine right
> >> now how simple pinning could lead to a deadlock at all.
> > Because some other load on the bound CPU might prevent the thread from
> > being scheduled.
> 
> I'm afraid I still have no idea. On a single CPU, binding has no
> meaning. Thus, if any deadlock exists, it exists without binding too.
> Hmm, are you talking about a deadlock caused by heavy CPU load? Is that
> a deadlock at all? Anyhow, a mutex is a lock with priority propagation,
> isn't it?
> 

When executing on a single CPU, the kernel sometimes makes different
decisions. Yes, the deadlock can be more precisely described as a
livelock.

It might not matter for exactly this case, but it is still useful to
keep in mind.
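
To illustrate the pattern I am wary of (a hypothetical sketch with a
made-up lock, not the pmap code):

        /*
         * Thread A pins itself and then blocks on a contested mutex.
         * Priority propagation lets the current lock owner run and
         * release the mutex, but it does not lift A's binding: if the
         * CPU that A is bound to stays saturated with higher-priority
         * work, A may not be scheduled again for a long time even
         * after the mutex becomes free.
         */
        sched_pin();
        mtx_lock(&some_lock);   /* may block while still bound */
        /* ... critical section ... */
        mtx_unlock(&some_lock);
        sched_unpin();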
