Re: [patch] i386 pmap sysmaps_pcpu[] atomic access

From: Svatopluk Kraus <onwahe_at_gmail.com>
Date: Mon, 18 Feb 2013 21:27:40 +0100
On Mon, Feb 18, 2013 at 6:09 PM, Konstantin Belousov
<kostikbel_at_gmail.com> wrote:
> On Mon, Feb 18, 2013 at 06:06:42PM +0100, Svatopluk Kraus wrote:
>> On Mon, Feb 18, 2013 at 4:08 PM, Konstantin Belousov
>> <kostikbel_at_gmail.com> wrote:
>> > On Mon, Feb 18, 2013 at 01:44:35PM +0100, Svatopluk Kraus wrote:
>> >> Hi,
>> >>
>> >>    The access to sysmaps_pcpu[] should be atomic with respect to
>> >> thread migration. Otherwise, the sysmaps for one CPU can be stolen by
>> >> another CPU and the purpose of the per-CPU sysmaps is broken. A patch
>> >> is enclosed.
>> > And what are the problems caused by the 'otherwise'?
>> > I do not see any.
>>
>> The 'otherwise' issue is the following:
>>
>> 1. A thread is running on CPU0.
>>
>>         sysmaps = &sysmaps_pcpu[PCPU_GET(cpuid)];
>>
>> 2. The sysmaps variable now contains a pointer to the 'CPU0' sysmaps.
>> 3. Now, the thread migrates to CPU1.
>> 4. However, the sysmaps variable still contains a pointer to the 'CPU0' sysmaps.
>>
>>       mtx_lock(&sysmaps->lock);
>>
>> 5. The thread, now running on CPU1, has locked the 'CPU0' sysmaps
>> mutex, so it can needlessly block another thread running on CPU0.
>> Maybe that's not a problem, but it definitely goes against the reason
>> why the sysmaps (one for each CPU) exist.
> So what?

It depends. Do you not understand it, or do you think it's OK? Tell me.
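
To make the window explicit, this is the ordering the patch enforces,
sketched from the pmap_zero_page() hunk quoted below (the tail of the
function, which is unchanged, is elided):

        struct sysmaps *sysmaps;

        sched_pin();                    /* no migration from here on */
        sysmaps = &sysmaps_pcpu[PCPU_GET(cpuid)];  /* cpuid cannot go stale */
        mtx_lock(&sysmaps->lock);       /* this really is this CPU's lock */
        if (*sysmaps->CMAP2)
                panic("pmap_zero_page: CMAP2 busy");
        *sysmaps->CMAP2 = PG_V | PG_RW | VM_PAGE_TO_PHYS(m) | PG_A | PG_M |
            pmap_cache_bits(m->md.pat_mode, 0);
        invlcaddr(sysmaps->CADDR2);
        /* ... zero the page through sysmaps->CADDR2, then unmap,
         * unpin and unlock as the existing code already does ... */

With the current order (PCPU_GET(cpuid) and mtx_lock() before
sched_pin()), the migration in step 3 above can happen between the
PCPU_GET() and the mtx_lock(), and the thread then holds and uses the
sysmaps of a CPU it is no longer running on.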


>>
>>
>> > Really, taking the mutex while bound to a CPU could be deadlock-prone
>> > under some situations.
>> >
>> > This was discussed at least once before. Perhaps a comment saying that
>> > there is no issue should be added.
>>
>> I missed the discussion. Can you point me to it, please? A deadlock is
>> not a problem here; however, I could be wrong, as I can't imagine right
>> now how simple pinning could lead to a deadlock at all.
> Because some other load on the bound CPU might prevent the thread from
> being scheduled.

I'm afraid I still have no idea. On a single CPU, binding has no
meaning, so if any deadlock exists, it exists without binding too.
Hmm, are you talking about a deadlock caused by heavy CPU load? Is that
a deadlock at all? Anyhow, a mutex is a lock with priority propagation,
isn't it?
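
(For what it's worth, the userland counterpart of that behaviour is a
priority-inheriting pthread mutex. This is only an analogy for the
concept, not the kernel mutex(9) code, but it is what I mean by
"priority propagation": a thread holding the lock runs with at least
the priority of the highest-priority waiter, so load alone cannot keep
the holder off the CPU forever.)

        #include <pthread.h>
        #include <stdio.h>

        int
        main(void)
        {
                pthread_mutexattr_t attr;
                pthread_mutex_t m;

                pthread_mutexattr_init(&attr);
                /* Waiters lend their priority to the current owner. */
                pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_INHERIT);
                pthread_mutex_init(&m, &attr);

                pthread_mutex_lock(&m);
                printf("holding a priority-inheriting mutex\n");
                pthread_mutex_unlock(&m);

                pthread_mutex_destroy(&m);
                pthread_mutexattr_destroy(&attr);
                return (0);
        }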

>
>>
>> >>
>> >>      Svata
>> >>
>> >> Index: sys/i386/i386/pmap.c
>> >> ===================================================================
>> >> --- sys/i386/i386/pmap.c      (revision 246831)
>> >> +++ sys/i386/i386/pmap.c      (working copy)
>> >> @@ -4146,11 +4146,11 @@
>> >>  {
>> >>       struct sysmaps *sysmaps;
>> >>
>> >> +     sched_pin();
>> >>       sysmaps = &sysmaps_pcpu[PCPU_GET(cpuid)];
>> >>       mtx_lock(&sysmaps->lock);
>> >>       if (*sysmaps->CMAP2)
>> >>               panic("pmap_zero_page: CMAP2 busy");
>> >> -     sched_pin();
>> >>       *sysmaps->CMAP2 = PG_V | PG_RW | VM_PAGE_TO_PHYS(m) | PG_A | PG_M |
>> >>           pmap_cache_bits(m->md.pat_mode, 0);
>> >>       invlcaddr(sysmaps->CADDR2);
>> >> @@ -4171,11 +4171,11 @@
>> >>  {
>> >>       struct sysmaps *sysmaps;
>> >>
>> >> +     sched_pin();
>> >>       sysmaps = &sysmaps_pcpu[PCPU_GET(cpuid)];
>> >>       mtx_lock(&sysmaps->lock);
>> >>       if (*sysmaps->CMAP2)
>> >>               panic("pmap_zero_page_area: CMAP2 busy");
>> >> -     sched_pin();
>> >>       *sysmaps->CMAP2 = PG_V | PG_RW | VM_PAGE_TO_PHYS(m) | PG_A | PG_M |
>> >>           pmap_cache_bits(m->md.pat_mode, 0);
>> >>       invlcaddr(sysmaps->CADDR2);
>> >> @@ -4220,13 +4220,13 @@
>> >>  {
>> >>       struct sysmaps *sysmaps;
>> >>
>> >> +     sched_pin();
>> >>       sysmaps = &sysmaps_pcpu[PCPU_GET(cpuid)];
>> >>       mtx_lock(&sysmaps->lock);
>> >>       if (*sysmaps->CMAP1)
>> >>               panic("pmap_copy_page: CMAP1 busy");
>> >>       if (*sysmaps->CMAP2)
>> >>               panic("pmap_copy_page: CMAP2 busy");
>> >> -     sched_pin();
>> >>       invlpg((u_int)sysmaps->CADDR1);
>> >>       invlpg((u_int)sysmaps->CADDR2);
>> >>       *sysmaps->CMAP1 = PG_V | VM_PAGE_TO_PHYS(src) | PG_A |
>> >> @@ -5072,11 +5072,11 @@
>> >>       vm_offset_t sva, eva;
>> >>
>> >>       if ((cpu_feature & CPUID_CLFSH) != 0) {
>> >> +             sched_pin();
>> >>               sysmaps = &sysmaps_pcpu[PCPU_GET(cpuid)];
>> >>               mtx_lock(&sysmaps->lock);
>> >>               if (*sysmaps->CMAP2)
>> >>                       panic("pmap_flush_page: CMAP2 busy");
>> >> -             sched_pin();
>> >>               *sysmaps->CMAP2 = PG_V | PG_RW | VM_PAGE_TO_PHYS(m) |
>> >>                   PG_A | PG_M | pmap_cache_bits(m->md.pat_mode, 0);
>> >>               invlcaddr(sysmaps->CADDR2);