Re: svn commit: r360233 - in head: contrib/jemalloc . . . : This partially breaks a 2-socket 32-bit powerpc (old PowerMac G4) based on head -r360311

From: Mark Millard <marklmi_at_yahoo.com>
Date: Wed, 10 Jun 2020 18:56:57 -0700
On 2020-May-13, at 08:56, Justin Hibbits <chmeeedalf_at_gmail.com> wrote:

> Hi Mark,

Hello Justin.

> On Wed, 13 May 2020 01:43:23 -0700
> Mark Millard <marklmi_at_yahoo.com> wrote:
> 
>> [I'm adding a reference to an old arm64/aarch64 bug that had
>> pages turning to zero, in case this 32-bit powerpc issue is
>> somewhat analogous.]
>> 
>>> . . .
> ...
>> . . .
>> 
>> (Note: dsl-only.net closed down, so the E-mail
>> address reference is no longer valid.)
>> 
>> Author: kib
>> Date: Mon Apr 10 15:32:26 2017
>> New Revision: 316679
>> URL: 
>> https://svnweb.freebsd.org/changeset/base/316679
>> 
>> 
>> Log:
>> Do not lose dirty bits for removing PROT_WRITE on arm64.
>> 
>> Arm64 pmap interprets accessed writable ptes as modified, since
>> ARMv8.0 does not track Dirty Bit Modifier in hardware. If writable
>> bit is removed, page must be marked as dirty for MI VM.
>> 
>> This change is most important for COW, where fork caused losing
>> content of the dirty pages which were not yet scanned by pagedaemon.
>> 
>> Reviewed by:	alc, andrew
>> Reported and tested by:	Mark Millard <markmi at dsl-only.net>
>> PR:	217138, 217239
>> Sponsored by:	The FreeBSD Foundation
>> MFC after:	2 weeks
>> 
>> Modified:
>> head/sys/arm64/arm64/pmap.c
>> 
>> Modified: head/sys/arm64/arm64/pmap.c
>> ==============================================================================
>> --- head/sys/arm64/arm64/pmap.c	Mon Apr 10 12:35:58 2017	(r316678)
>> +++ head/sys/arm64/arm64/pmap.c	Mon Apr 10 15:32:26 2017	(r316679)
>> @@ -2481,6 +2481,11 @@ pmap_protect(pmap_t pmap, vm_offset_t sv
>> 		    sva += L3_SIZE) {
>> 			l3 = pmap_load(l3p);
>> 			if (pmap_l3_valid(l3)) {
>> +				if ((l3 & ATTR_SW_MANAGED) &&
>> +				    pmap_page_dirty(l3)) {
>> +					vm_page_dirty(PHYS_TO_VM_PAGE(l3 &
>> +					    ~ATTR_MASK));
>> +				}
>> 				pmap_set(l3p, ATTR_AP(ATTR_AP_RO));
>> 				PTE_SYNC(l3p);
>> 				/* XXX: Use pmap_invalidate_range */
>> 
>> . . .
>> 
> 
> Thanks for this reference.  I took a quick look at the 3 pmap
> implementations we have (haven't checked the new radix pmap yet), and it
> looks like only mmu_oea.c (32-bit AIM pmap, for G3 and G4) is missing
> vm_page_dirty() calls in its pmap_protect() implementation, analogous
> to the change you posted right above. Given this, I think it's safe to
> say that this missing piece is necessary.  We'll work on a fix for
> this; looking at moea64_protect(), there may be additional work needed
> to support this as well, so it may take a few days.

Ping? Any clue when the above might happen?
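
In case it helps make the point concrete for anyone else following the
thread: the missing piece is just "push any PTE-level dirty state up to
the MI page before write permission goes away". Below is a throwaway,
userland-compilable model of that rule. The names in it (fake_pte_t,
PTE_DIRTY, mark_page_dirty(), and so on) are stand-ins I made up for
illustration; it is not mmu_oea.c code and not a proposed patch, just
the shape of the check that the arm64 change quoted above adds:

/*
 * Throwaway model only: fake_pte_t, PTE_WRITE, PTE_DIRTY, PTE_MANAGED,
 * and mark_page_dirty() are made-up stand-ins, not kernel interfaces.
 */
#include <stdint.h>
#include <stdio.h>

typedef uint64_t fake_pte_t;

#define	PTE_WRITE	(1ULL << 0)	/* stand-in: write permission bit */
#define	PTE_DIRTY	(1ULL << 1)	/* stand-in: page was modified */
#define	PTE_MANAGED	(1ULL << 2)	/* stand-in: page is MI-VM managed */

/* Stand-in for vm_page_dirty(): tell the MI VM the page has dirty data. */
static void
mark_page_dirty(fake_pte_t pte)
{

	printf("page behind pte %#llx recorded as dirty\n",
	    (unsigned long long)pte);
}

/*
 * The rule the arm64 change enforces: before write permission is
 * stripped, dirty state known only at the PTE level must be pushed up
 * to the MI page, or the pagedaemon can later free the page as "clean"
 * and its contents come back as zeroes.
 */
static fake_pte_t
protect_remove_write(fake_pte_t pte)
{

	if ((pte & PTE_MANAGED) != 0 && (pte & PTE_DIRTY) != 0)
		mark_page_dirty(pte);
	return (pte & ~PTE_WRITE);
}

int
main(void)
{
	fake_pte_t pte;

	pte = PTE_WRITE | PTE_DIRTY | PTE_MANAGED;
	pte = protect_remove_write(pte);
	printf("write bit is now %s\n",
	    (pte & PTE_WRITE) != 0 ? "set" : "clear");
	return (0);
}

The real change obviously has to live in moea_protect() itself, using
whatever the moea PTE's changed-bit handling looks like together with
the real vm_page_dirty(), which is what I understood Justin's note
above to be describing.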

I've been avoiding the old PowerMacs and leaving
them at head -r360311, pending an update that
would stop the kernel from zeroing pages that it
should not zero. But I've seen that you have been
busy with more modern contexts for about the last
month.
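
For completeness, here is the sort of trivial fork/COW check I have in
mind when I say "zeroing pages that it should not zero". It is only an
illustration of the symptom class, not the test program from PR 217138
or PR 217239, and it can only show anything if the pagedaemon is
actually pushed into scanning while it runs (so it needs memory
pressure from elsewhere on the system):

/*
 * Illustration only, not the PR 217138/217239 test program: fill pages,
 * let fork() write-protect them for copy-on-write, then verify the
 * contents later.  It can only expose lost dirty tracking if the
 * pagedaemon actually scans and reclaims the pages in between, so run
 * it while something else is creating memory pressure.
 */
#include <sys/types.h>
#include <sys/mman.h>
#include <sys/wait.h>

#include <err.h>
#include <stdio.h>
#include <unistd.h>

#define	NPAGES	1024
#define	PATTERN	0xa5

int
main(void)
{
	unsigned char *p;
	size_t i, len;
	pid_t pid;
	int status;

	len = NPAGES * (size_t)sysconf(_SC_PAGESIZE);
	p = mmap(NULL, len, PROT_READ | PROT_WRITE,
	    MAP_ANON | MAP_PRIVATE, -1, 0);
	if (p == MAP_FAILED)
		err(1, "mmap");

	/* Dirty every page with a nonzero pattern. */
	for (i = 0; i < len; i++)
		p[i] = PATTERN;

	/* fork() write-protects the pages for copy-on-write. */
	pid = fork();
	if (pid == -1)
		err(1, "fork");
	if (pid == 0) {
		/*
		 * Child: give the pagedaemon a chance to scan (needs
		 * external memory pressure), then check that no page
		 * came back zeroed.
		 */
		sleep(30);
		for (i = 0; i < len; i++)
			if (p[i] != PATTERN)
				errx(1, "byte %zu lost (got %#x)",
				    i, (unsigned)p[i]);
		printf("child: pattern intact\n");
		_exit(0);
	}
	if (waitpid(pid, &status, 0) == -1)
		err(1, "waitpid");
	printf("parent: child exit status %d\n", WEXITSTATUS(status));
	return (0);
}

(With the dirty bits handled correctly, the child's check should pass
no matter how hard the system is paging.)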

And, clearly, my own context has left other, more
involved activities pending for much longer than
that (compared to just periodically updating to
more recent FreeBSD vintages).

===
Mark Millard
marklmi at yahoo.com
( dsl-only.net went
away in early 2018-Mar)