On Sun, Oct 05, 2014 at 01:15:12PM +0900, Kohji Okuno wrote:
> Hi,
>
> > On Sat, Oct 04, 2014 at 08:53:35PM +0900, Kohji Okuno wrote:
> >> Hi Konstantin,
> >>
> >> Thank you for your prompt response.
> >> I will test and report from next Monday.
> >>
> >> >> In addition, I have one question.
> >> >> In current and 10-stable, is vm_map_delete() called by kva_free()?
> >> > No, kva_free() only releases the vmem backing, leaving the page
> >> > tables intact.  This is why I only did the stable/9 patch.
> >>
> >> Where are the PTEs allocated by pmap_mapdev() freed in current and
> >> 10-stable?  Could you please explain this to me?
> > They are not freed.  Removing the vmem which covers the address space
> > managed by the corresponding PTEs allows the reuse of both the KVA
> > region and the corresponding PTEs in the tables.  The only concern
> > with the resident page tables is to avoid two kva_alloc() calls
> > stepping over each other, and this is ensured by vmem.
>
> I agree that normal pages are reusable.  But since the pages mapped by
> pmap_mapdev() refer to the physical address of a device (for example,
> video memory), I think these PTEs aren't reusable.
> So, should we free these PTEs with pmap_unmapdev()?

There is no hold on any physical pages which were referenced by the PTEs.
The only thing left behind are the records in the page tables which map
the KVA to said device memory.  It is harmless.  When the KVA is reused,
the PTEs in the page tables are overwritten.

It might be argued that clearing the PTEs, or at least removing the PG_V
bit, would catch erroneous unintended accesses to the freed range, but at
the cost of re-polluting the caches.  And since clearing is not effective
without a TLB flush, which requires a broadcast IPI, it is more trouble
than it is worth.

Received on Sun Oct 05 2014 - 06:57:59 UTC
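
For illustration, here is a minimal sketch of the map/unmap lifecycle
discussed above, assuming the stable/10-era pmap_mapdev()/pmap_unmapdev()
KPI.  The foo_* names, the register-window size, and the include list are
made up for the example and are not taken from this thread.

/*
 * Illustrative-only driver fragment: which steps touch the page
 * tables and which only touch the kernel vmem arena.
 */
#include <sys/param.h>
#include <vm/vm.h>
#include <vm/pmap.h>

#define	FOO_REGS_SIZE	PAGE_SIZE	/* made-up device window size */

static void *foo_regs;			/* KVA of the device window */

static void
foo_map_regs(vm_paddr_t pa)
{
	/*
	 * pmap_mapdev() obtains KVA (backed by vmem) and installs PTEs
	 * translating that KVA to the device's physical address, e.g.
	 * video memory.
	 */
	foo_regs = pmap_mapdev(pa, FOO_REGS_SIZE);
}

static void
foo_unmap_regs(void)
{
	/*
	 * Per the discussion above, on HEAD and stable/10 the unmap
	 * path only releases the vmem backing (kva_free()); the PTEs
	 * are left in place.  That is harmless: vmem keeps two
	 * kva_alloc() ranges from overlapping, and the stale PTEs are
	 * simply overwritten when the KVA is handed out again.
	 * Clearing them (or just PG_V) would need a broadcast-IPI TLB
	 * flush to be effective, which costs more than it buys.
	 */
	pmap_unmapdev((vm_offset_t)foo_regs, FOO_REGS_SIZE);
	foo_regs = NULL;
}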