Re: easy way to work around a lack of a direct map on i386

From: Konstantin Belousov <kostikbel_at_gmail.com>
Date: Sat, 1 Feb 2020 21:23:09 +0200
On Sat, Feb 01, 2020 at 01:56:59PM +0100, Hans Petter Selasky wrote:
> On 2020-01-31 13:31, Konstantin Belousov wrote:
> > On Fri, Jan 31, 2020 at 10:13:58AM +0100, Hans Petter Selasky wrote:
> > > On 2020-01-31 00:37, Konstantin Belousov wrote:
> > > > On Thu, Jan 30, 2020 at 11:23:02PM +0000, Rick Macklem wrote:
> > > > > Hi,
> > > > > 
> > > > > The current code for KERN_TLS uses PHYS_TO_DMAP()
> > > > > to access unmapped external pages on m_ext.ext_pgs
> > > > > mbufs.
> > > > > I also need to do this to implement RPC-over-TLS.
> > > > > 
> > > > > The problem is that some arches, like i386, don't
> > > > > support PHYS_TO_DMAP().
> > > > > 
> > > > > Since it appears that there will be at most 4 pages on
> > > > > one of these mbufs, my thinking was...
> > > > > - Acquire four pages of kva from the kernel_map during
> > > > >     booting.
> > > > > - Then just use pmap_qenter() to fill in the physical page
> > > > >     mappings for long enough to copy the data.
> > > > > 
> > > > > Does this sound reasonable?
> > > > > Is there a better way?
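
As a rough illustration of the approach just described (a sketch only; the
names are hypothetical and serialization of the shared KVA window, e.g. a
mutex around the copy, is omitted):

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/kernel.h>
    #include <vm/vm.h>
    #include <vm/vm_extern.h>
    #include <vm/pmap.h>

    #define	EXT_PGS_NPAGES	4		/* at most 4 pages per unmapped mbuf */

    static vm_offset_t ext_pgs_kva;		/* KVA window reserved at boot */

    static void
    ext_pgs_kva_init(void *arg __unused)
    {
	    ext_pgs_kva = kva_alloc(EXT_PGS_NPAGES * PAGE_SIZE);
    }
    SYSINIT(ext_pgs_kva, SI_SUB_VM_CONF, SI_ORDER_ANY, ext_pgs_kva_init, NULL);

    /* Temporarily map the pages, copy the data out, then tear the mapping down. */
    static void
    ext_pgs_copy_out(vm_page_t *ma, int npages, vm_offset_t off, void *dst,
        size_t len)
    {
	    pmap_qenter(ext_pgs_kva, ma, npages);
	    memcpy(dst, (char *)ext_pgs_kva + off, len);
	    pmap_qremove(ext_pgs_kva, npages);
    }

pmap_qenter() installs the page mappings in the window (overwriting any
previous ones) and pmap_qremove() tears them down again afterwards.
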
> > > > 
> > > > Use sfbufs, they should work on all arches.  In essence, they provide an
> > > > MI interface to the DMAP where possible.  I do not remember whether I
> > > > bumped the limit for i386 after 4/4 went in.
> > > > 
> > > > There are currently no limits on sfbuf use per subsystem, but I think it
> > > > is unlikely to cause too much trouble.  The main rule is to not sleep
> > > > waiting for more sfbufs if you already own one.
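
A sketch of that rule (not from the thread; the names are illustrative):
sleeping is fine for the first sf_buf, but once one is held, any further
allocation must use SFB_NOWAIT and the caller must be ready to back off:

    #include <sys/param.h>
    #include <sys/errno.h>
    #include <sys/sf_buf.h>
    #include <vm/vm.h>

    /*
     * Map two pages with sfbufs.  Sleeping is only allowed for the first
     * buffer; once we own one, the second allocation uses SFB_NOWAIT so
     * we cannot deadlock with other threads waiting for free sfbufs.
     */
    static int
    map_two_pages(vm_page_t m0, vm_page_t m1, struct sf_buf **sf0p,
        struct sf_buf **sf1p)
    {
	    struct sf_buf *sf0, *sf1;

	    sf0 = sf_buf_alloc(m0, 0);		/* default: may sleep */
	    sf1 = sf_buf_alloc(m1, SFB_NOWAIT);	/* must not sleep */
	    if (sf1 == NULL) {
		    sf_buf_free(sf0);
		    return (EAGAIN);
	    }
	    *sf0p = sf0;
	    *sf1p = sf1;
	    return (0);
    }
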
> > > 
> > > In the DRM-KMS LinuxKPI we have:
> > > 
> > > void *
> > > kmap(vm_page_t page)
> > > {
> > > #ifdef LINUXKPI_HAVE_DMAP
> > >          vm_offset_t daddr;
> > > 
> > >          daddr = PHYS_TO_DMAP(VM_PAGE_TO_PHYS(page));
> > > 
> > >          return ((void *)daddr);
> > > #else
> > >          struct sf_buf *sf;
> > > 
> > >          sched_pin();
> > >          sf = sf_buf_alloc(page, SFB_NOWAIT | SFB_CPUPRIVATE);
> > >          if (sf == NULL) {
> > >                  sched_unpin();
> > >                  return (NULL);
> > >          }
> > >          return ((void *)sf_buf_kva(sf));
> > > #endif
> > > }
> > > 
> > > void
> > > kunmap(vm_page_t page)
> > > {
> > > #ifdef LINUXKPI_HAVE_DMAP
> > >          /* NOP */
> > > #else
> > >          struct sf_buf *sf;
> > > 
> > >          /* look up the existing sf_buf; this takes another reference */
> > >          sf = sf_buf_alloc(page, SFB_NOWAIT | SFB_CPUPRIVATE);
> > > 
> > >          /* free twice: drop both the lookup and the kmap() reference */
> > >          sf_buf_free(sf);
> > >          sf_buf_free(sf);
> > > 
> > >          sched_unpin();
> > > #endif
> > > }
> > > 
> > > I think that is the fastest way to do this.
> > 
> > So the kmap address is only valid on the CPU that called the function?
> > That is strange; I was not able to find any mention of this in the kmap
> > references.
> 
> Yes, only on the current CPU. See the SFB_CPUPRIVATE flag.

I can read the FreeBSD code, but I did not find any mention that Linux
kmap() only invalidates the TLB on the core that called it.
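
For what it is worth, a usage sketch under that constraint, assuming the
kmap()/kunmap() shims quoted above are in scope; because the sf_buf is
CPU-private and kmap() pins the thread, the caller must not sleep or
migrate between the two calls:

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/errno.h>
    #include <vm/vm.h>

    /* kmap()/kunmap() as in the LinuxKPI excerpt above (assumed in scope). */
    void	*kmap(vm_page_t page);
    void	 kunmap(vm_page_t page);

    /*
     * Copy len bytes starting at offset off out of one unmapped page.
     * The kmap() mapping is only valid on the current CPU and the thread
     * is pinned, so no sleeping between kmap() and kunmap().
     */
    static int
    copy_from_unmapped_page(vm_page_t m, vm_offset_t off, void *dst, size_t len)
    {
	    char *va;

	    va = kmap(m);
	    if (va == NULL)
		    return (EAGAIN);	/* SFB_NOWAIT failed on a non-DMAP arch */
	    memcpy(dst, va + off, len);
	    kunmap(m);
	    return (0);
    }
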
Received on Sat Feb 01 2020 - 18:23:26 UTC
