Re: per file descriptor device callbacks?

From: Konstantin Belousov <kostikbel_at_gmail.com>
Date: Wed, 29 Aug 2012 07:12:40 +0300
On Tue, Aug 28, 2012 at 08:42:26PM +0200, Luigi Rizzo wrote:
> On Tue, Aug 28, 2012 at 08:26:06PM +0300, Konstantin Belousov wrote:
> ...
> > > dev_clone() is rather gross and a lot harder to use than
> > > devfs_set_cdevpriv().  If you are fine with the inherent problems
> > > of the device pager (you can't ever make mappings go away), you can
> > > just assign each client a unique offset into your shared object's
> > > memory space.  However, if you are exporting shared memory buffers,
> > > then a better model might be to let your clients use
> > > shm_open(SHM_ANON) to create buffers, then pass them into your driver
> > > via an ioctl() and use shm_map() to map them into the kernel.
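
For illustration, a minimal sketch of the devfs_set_cdevpriv() approach to
per-file-descriptor state (the example_* names and the private structure are
invented here; devfs_set_cdevpriv(), devfs_get_cdevpriv() and the
d_priv_dtor_t callback are the actual KPI):

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/conf.h>
    #include <sys/malloc.h>

    struct example_priv {               /* hypothetical per-open state */
        int ep_count;
    };

    /* Destructor, run when the last reference to the descriptor goes away. */
    static void
    example_cdevpriv_dtor(void *data)
    {
        free(data, M_DEVBUF);
    }

    static int
    example_open(struct cdev *dev, int oflags, int devtype, struct thread *td)
    {
        struct example_priv *priv;
        int error;

        priv = malloc(sizeof(*priv), M_DEVBUF, M_WAITOK | M_ZERO);
        /* Attach priv to this open file, with an automatic destructor. */
        error = devfs_set_cdevpriv(priv, example_cdevpriv_dtor);
        if (error != 0)
            free(priv, M_DEVBUF);
        return (error);
    }

    static int
    example_ioctl(struct cdev *dev, u_long cmd, caddr_t data, int fflag,
        struct thread *td)
    {
        struct example_priv *priv;
        int error;

        /* Fetch the state attached to the calling file descriptor. */
        error = devfs_get_cdevpriv((void **)&priv);
        if (error != 0)
            return (error);
        priv->ep_count++;
        return (0);
    }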
> > 
> > Well, there is a new OBJT_MGTDEVICE pager, which together with
> > d_mmap_single() even allows you to keep per-mapping data. i915kms uses it.
> > You do not need cdevpriv for the per-mapping data.
> > 
> > Also, MGTDEVICE does track the mappings of the pages, so you can clean
> > up on device destruction.
> 
> Interesting, thanks for the pointer, I'll look it up in i915kms.
> Does this also exist in stable/9?
> It would be a shame otherwise...
Yes, it was merged.
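
For reference, a rough sketch of the d_mmap_single()/OBJT_MGTDEVICE pattern
mentioned above. All example_* names are invented and the fault handler is
only a stub (the real page-installation logic is what i915kms implements in
its pager fault routine); cdev_pager_allocate() and struct cdev_pager_ops are
the actual interface:

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/conf.h>
    #include <sys/malloc.h>
    #include <sys/proc.h>
    #include <vm/vm.h>
    #include <vm/vm_object.h>
    #include <vm/vm_page.h>
    #include <vm/vm_pager.h>

    struct example_mapping {            /* one instance per mmap() call */
        void         *em_softc;         /* back pointer to the device */
        vm_ooffset_t  em_base;          /* this client's region offset */
    };

    static int
    example_pg_ctor(void *handle, vm_ooffset_t size, vm_prot_t prot,
        vm_ooffset_t foff, struct ucred *cred, u_short *color)
    {
        /* The handle was prepared in d_mmap_single(); nothing else to do. */
        *color = 0;
        return (0);
    }

    static void
    example_pg_dtor(void *handle)
    {
        /* The last reference to this mapping is gone. */
        free(handle, M_DEVBUF);
    }

    static int
    example_pg_fault(vm_object_t obj, vm_ooffset_t offset, int prot,
        vm_page_t *mres)
    {
        /*
         * A real handler resolves "offset" to a device page and installs
         * it, as the i915kms pager fault routine does; stubbed out here.
         */
        return (VM_PAGER_FAIL);
    }

    static struct cdev_pager_ops example_pager_ops = {
        .cdev_pg_ctor  = example_pg_ctor,
        .cdev_pg_dtor  = example_pg_dtor,
        .cdev_pg_fault = example_pg_fault,
    };

    static int
    example_mmap_single(struct cdev *cdev, vm_ooffset_t *foff,
        vm_size_t objsize, vm_object_t *objp, int prot)
    {
        struct example_mapping *em;
        vm_object_t obj;

        em = malloc(sizeof(*em), M_DEVBUF, M_WAITOK | M_ZERO);
        em->em_softc = cdev->si_drv1;
        em->em_base = *foff;
        /* The per-mapping structure becomes the pager handle. */
        obj = cdev_pager_allocate(em, OBJT_MGTDEVICE, &example_pager_ops,
            objsize, prot, *foff, curthread->td_ucred);
        if (obj == NULL) {
            free(em, M_DEVBUF);
            return (EINVAL);
        }
        *objp = obj;
        return (0);
    }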

> 
> > The normal callbacks of the device pager allow you to execute ctor/dtor
> > methods at the time of mapping creation and teardown.
> 
> What would be a good way to install my own ctor/dtor methods?
> I only found a rather crude one, and it only works for
> the destructor:
See below.

> 
>     static struct cdev_pager_ops saved_cdev_pager_ops;
>     static struct cdev_pager_ops netmap_cdev_pager_ops;
> 
>     static void
>     netmap_dev_pager_dtor(void *handle)
>     {
>         saved_cdev_pager_ops.cdev_pg_dtor(handle);
> 	// my code here
>         D("ready to release memory for %p", handle);
>     }
> 
> 
>     static int
>     netmap_mmap_single(struct cdev *cdev, vm_ooffset_t *foff,
> 		vm_size_t objsize,  vm_object_t *objp, int prot)
>     {
>         vm_object_t obj;
>  
> 	/* XXX check size etc. */
>         obj = vm_pager_allocate(OBJT_DEVICE, cdev, objsize, prot, *foff,
>             curthread->td_ucred);
Use cdev_pager_allocate().

>         if (obj == NULL)
>                 return EINVAL;
>         if (saved_cdev_pager_ops.cdev_pg_fault == NULL) {
I do not understand what you are trying to accomplish with the
check and reinitialization, but I assume that cdev_pager_allocate()
would take care of it (see the sketch after the quoted code below).

>                 D("initialize cdev_pager_ops");
>                 saved_cdev_pager_ops = *(obj->un_pager.devp.ops);
>                 netmap_cdev_pager_ops = *(obj->un_pager.devp.ops);
>                 netmap_cdev_pager_ops.cdev_pg_dtor = netmap_dev_pager_dtor;
>         };
>         obj->un_pager.devp.ops = &netmap_cdev_pager_ops;
>         *objp = obj;
> 	/* XXX perhaps do something more here, such as install
> 	 * mappings for the pages so we have no faults later.
> 	 */
>         return 0;
>     }
> 
>     static struct cdevsw netmap_cdevsw = {
>         .d_version = D_VERSION,
>         .d_name = "netmap",
>         .d_open = netmap_open,
>         .d_mmap = netmap_mmap,
>         .d_mmap_single = netmap_mmap_single,
>         .d_ioctl = netmap_ioctl,
>         .d_poll = netmap_poll,
>         .d_close = netmap_close,
>     };
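
Following the advice above, a sketch of how the destructor could be installed
up front by passing a statically initialized cdev_pager_ops to
cdev_pager_allocate(), instead of patching obj->un_pager.devp.ops after the
fact. The ctor and the fault handler below are invented for the sketch (with
a private ops table the driver supplies its own cdev_pg_fault, since the
stock d_mmap-based fault path belongs to the default ops); the headers and
the D() macro are assumed to come from the quoted netmap code:

    static int
    netmap_dev_pager_ctor(void *handle, vm_ooffset_t size, vm_prot_t prot,
        vm_ooffset_t foff, struct ucred *cred, u_short *color)
    {
        *color = 0;
        return (0);
    }

    static void
    netmap_dev_pager_dtor(void *handle)
    {
        /* my code here */
        D("ready to release memory for %p", handle);
    }

    static int
    netmap_dev_pager_fault(vm_object_t obj, vm_ooffset_t offset, int prot,
        vm_page_t *mres)
    {
        /*
         * Resolve "offset" within the shared memory region and return the
         * backing page (for example a fake page from vm_page_getfake());
         * stubbed out in this sketch.
         */
        return (VM_PAGER_FAIL);
    }

    static struct cdev_pager_ops netmap_cdev_pager_ops = {
        .cdev_pg_ctor  = netmap_dev_pager_ctor,
        .cdev_pg_dtor  = netmap_dev_pager_dtor,
        .cdev_pg_fault = netmap_dev_pager_fault,
    };

    static int
    netmap_mmap_single(struct cdev *cdev, vm_ooffset_t *foff,
        vm_size_t objsize, vm_object_t *objp, int prot)
    {
        vm_object_t obj;

        /* XXX check size etc. */
        obj = cdev_pager_allocate(cdev, OBJT_DEVICE, &netmap_cdev_pager_ops,
            objsize, prot, *foff, curthread->td_ucred);
        if (obj == NULL)
            return (EINVAL);
        *objp = obj;
        return (0);
    }

With this arrangement no saved copy of the stock ops and no run-time
initialization check are needed; the ctor/dtor are in place from the moment
the object is created.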
> 
> cheers
> luigi
