Re: easy way to work around a lack of a direct map on i386

From: Rick Macklem <rmacklem_at_uoguelph.ca>
Date: Fri, 31 Jan 2020 22:47:09 +0000
Thanks everyone. I should have waited a day, since jhb_at_ responded
w.r.t. using sf_bufs as well.
For now, we are sticking with a 64-bit-only solution, since work on the
receive side of KERN_TLS is more critical to getting this going.

rick

________________________________________
From: owner-freebsd-current_at_freebsd.org <owner-freebsd-current_at_freebsd.org> on behalf of Konstantin Belousov <kostikbel_at_gmail.com>
Sent: Friday, January 31, 2020 7:31 AM
To: Hans Petter Selasky
Cc: Rick Macklem; freebsd-current_at_FreeBSD.org
Subject: Re: easy way to work around a lack of a direct map on i386

On Fri, Jan 31, 2020 at 10:13:58AM +0100, Hans Petter Selasky wrote:
> On 2020-01-31 00:37, Konstantin Belousov wrote:
> > On Thu, Jan 30, 2020 at 11:23:02PM +0000, Rick Macklem wrote:
> > > Hi,
> > >
> > > The current code for KERN_TLS uses PHYS_TO_DMAP()
> > > to access unmapped external pages on m_ext.ext_pgs
> > > mbufs.
> > > I also need to do this to implement RPC-over-TLS.
> > >
> > > The problem is that some arches, like i386, don't
> > > support PHYS_TO_DMAP().
> > >
> > > Since it appears that there will be at most 4 pages on
> > > one of these mbufs, my thinking was...
> > > - Acquire four pages of kva from the kernel_map during
> > >    booting.
> > > - Then just use pmap_qenter() to fill in the physical page
> > >    mappings for long enough to copy the data.
> > >
> > > Does this sound reasonable?
> > > Is there a better way?
> >
> > Use sfbufs; they should work on all arches.  In essence, they provide an
> > MI interface to the DMAP where possible.  I do not remember whether I
> > bumped the sfbuf limit for i386 after the 4/4 split went in.
> >
> > There are currently no per-subsystem limits on sfbuf use, but I do not
> > think that is likely to cause much trouble.  The main rule is to never
> > sleep waiting for more sfbufs while you already own one.
>
> In the DRM-KMS LinuxKPI we have:
>
> void *
> kmap(vm_page_t page)
> {
> #ifdef LINUXKPI_HAVE_DMAP
>         vm_offset_t daddr;
>
>         daddr = PHYS_TO_DMAP(VM_PAGE_TO_PHYS(page));
>
>         return ((void *)daddr);
> #else
>         struct sf_buf *sf;
>
>         sched_pin();
>         sf = sf_buf_alloc(page, SFB_NOWAIT | SFB_CPUPRIVATE);
>         if (sf == NULL) {
>                 sched_unpin();
>                 return (NULL);
>         }
>         return ((void *)sf_buf_kva(sf));
> #endif
> }
>
> void
> kunmap(vm_page_t page)
> {
> #ifdef LINUXKPI_HAVE_DMAP
>         /* NOP */
> #else
>         struct sf_buf *sf;
>
>         /* the lookup returns the existing mapping with an extra reference */
>         sf = sf_buf_alloc(page, SFB_NOWAIT | SFB_CPUPRIVATE);
>
>         /* free twice: once for the lookup, once for the kmap() reference */
>         sf_buf_free(sf);
>         sf_buf_free(sf);
>
>         sched_unpin();
> #endif
> }
>
> I think that is the fastest way to do this.

So the kmap() address is only valid on the CPU that called the function?
That is strange; I was not able to find any mention of this in the kmap
references.
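For reference, the boot-time KVA approach proposed at the start of the thread could be sketched roughly as below. This is only an illustration under stated assumptions: the names (tls_kva, tls_copy_pages, TLS_KVA_PAGES, the SYSINIT ordering) are hypothetical and not from the thread, and a real implementation would need a lock or per-CPU windows to serialize use of the mapping window.

    /*
     * Reserve a small KVA window at boot, then use pmap_qenter() to map an
     * unmapped ext_pgs mbuf's pages into it just long enough to copy the data.
     */
    #define TLS_KVA_PAGES   4

    static vm_offset_t tls_kva;             /* reserved once at boot */

    static void
    tls_kva_init(void *arg __unused)
    {
            tls_kva = kva_alloc(TLS_KVA_PAGES * PAGE_SIZE);
    }
    SYSINIT(tls_kva, SI_SUB_VM_CONF, SI_ORDER_ANY, tls_kva_init, NULL);

    static void
    tls_copy_pages(vm_page_t *pa, int npages, char *dst, size_t len)
    {
            KASSERT(npages <= TLS_KVA_PAGES, ("tls_copy_pages: too many pages"));
            pmap_qenter(tls_kva, pa, npages);       /* transient mapping */
            memcpy(dst, (void *)tls_kva, len);
            pmap_qremove(tls_kva, npages);          /* tear the mapping down */
    }

The sfbuf route discussed above avoids this serialization problem, since each sf_buf_alloc() caller gets its own mapping.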
_______________________________________________
freebsd-current_at_freebsd.org mailing list
https://lists.freebsd.org/mailman/listinfo/freebsd-current
To unsubscribe, send any mail to "freebsd-current-unsubscribe_at_freebsd.org"
Received on Fri Jan 31 2020 - 21:47:12 UTC
