On Sun, Sep 14, 2008 at 08:15:04PM +0100, Robert Watson wrote:
> 
> On Sun, 14 Sep 2008, Kostik Belousov wrote:
> 
> > When implementing cdevpriv, I completely missed the support for the
> > d_mmap() driver method. This method is called by the kernel (at least)
> > twice: one time to validate the mmapped region and protection mode (see
> > vm/device_pager.c:dev_pager_alloc()), and a second time to obtain the
> > physical address when serving a page fault (see dev_pager_getpages()).
> >
> > Support for cdevpriv for the first call(s) is trivial, and is
> > implemented by the patch below. The second calls are much harder and
> > essentially require attaching cdevpriv bookkeeping data to the struct
> > vm_map_entry. In fact, I am not sure whether this support for the
> > second-time calls is needed at all in real usage.
> >
> > I Cc:ed the people who pointed me to the issue; please evaluate the
> > patch against your needs. I think I will commit it shortly after your
> > feedback. On the other hand, I would think about implementing full
> > support for d_mmap only if actually needed.
> 
> I'm not really sure that these changes "make sense" given the way our
> device pager works right now. My understanding is that most consumers of
> cdevpriv use it in order to provide session-centric device nodes as a
> cleaner alternative to cloning. However, even with your change, it's not
> possible to support session-centric memory mapping of devices, as the
> memory map and device pager code assumes there is one VM object for each
> device, and hence all pages mapped independently from different file
> descriptors on the same device are from the same set of pages (and hence
> in the same VM object page cache). In order to implement session-centric
> semantics, I think it's actually quite a bit more complicated than just
> adding vm_map_entry book-keeping -- we also need to have a different VM
> object for each session.
> 
> And, arguably, we need a more mature device_pager that understands that
> pages might someday no longer be associated with a device due to that
> device being removed...

The issues you raised are obviously important, but IMHO they are orthogonal
to making the cdevpriv KPI work for _any_ pager. A pager's getpages method
is usually called from a context where the kernel does not have a naturally
supplied file descriptor; for instance, the most typical caller is
vm_fault(). Thus, whatever pager is used, we have to provide a way for the
kernel to somehow associate a struct file with the faulted page (region),
and make that file available to the pager. [Hmm, because the kernel is
allowed to fault too, vm_fault() should save/restore td_fpop.]

Another point is that we have important consumers of the existing device
pager interface that already want to use cdevpriv. Their usage ATM is
limited to authentication, whatever that means. I assume it means checking
that the caller was given some token at the validation step. The code
should be structured approximately like this:

dri_mmap(...)
{
	some_dri_data *p;
	int error;

	error = devfs_get_cdevpriv((void **)&p);
	if (error == 0) {
		/* authenticate; */
	}
	/*
	 * Auth successful, or error == EBADF.
	 * Do whatever is needed to return the phys address.
	 */
	...
}

And the last issue you raised, the need for a new pager, is actually real
for GEM/TTM or whatever the newest DRI interface is called. I intend to
play with it, but the smarter thing would be to wait a while longer.
Hopefully, by then the DRI folks will have finally decided on a (more)
stable interface. I am sure it will still be changed several dozen times,
but I hope it will not be redesigned from scratch.
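To make the dri_mmap() sketch above a bit more concrete, here is a minimal
sketch of how a driver might pair devfs_set_cdevpriv() in d_open with
devfs_get_cdevpriv() in d_mmap. It assumes the four-argument d_mmap
signature in use at the time of this thread (no memattr argument); the
mydrv_* names, the "authenticated" flag, and the MYDRV_WINDOW_* constants
are hypothetical, and the make_dev()/module glue is omitted.

#include <sys/param.h>
#include <sys/systm.h>
#include <sys/conf.h>
#include <sys/errno.h>
#include <sys/kernel.h>
#include <sys/malloc.h>

/* Hypothetical physical window exported by the device; values made up. */
#define	MYDRV_WINDOW_BASE	0xf0000000UL
#define	MYDRV_WINDOW_SIZE	0x1000UL

static MALLOC_DEFINE(M_MYDRV, "mydrvpriv", "per-open mydrv data");

struct mydrv_priv {
	int	authenticated;	/* set once a token check has passed */
};

static void
mydrv_priv_dtor(void *data)
{

	free(data, M_MYDRV);
}

static int
mydrv_open(struct cdev *dev, int oflags, int devtype, struct thread *td)
{
	struct mydrv_priv *p;
	int error;

	p = malloc(sizeof(*p), M_MYDRV, M_WAITOK | M_ZERO);
	/* Attach per-fd data; the dtor runs when the descriptor goes away. */
	error = devfs_set_cdevpriv(p, mydrv_priv_dtor);
	if (error != 0)
		free(p, M_MYDRV);
	return (error);
}

static int
mydrv_mmap(struct cdev *dev, vm_offset_t offset, vm_paddr_t *paddr, int nprot)
{
	struct mydrv_priv *p;
	int error;

	/*
	 * With the proposed change, td_fpop is set for the validation-time
	 * call coming from dev_pager_alloc(), so the lookup below can work;
	 * the later getpages-time calls may still see EBADF.
	 */
	error = devfs_get_cdevpriv((void **)&p);
	if (error == 0 && !p->authenticated)
		return (EPERM);
	/* error == EBADF (no fd context) falls through, as in the sketch. */

	if (offset >= MYDRV_WINDOW_SIZE)
		return (EINVAL);
	*paddr = MYDRV_WINDOW_BASE + offset;
	return (0);
}

static struct cdevsw mydrv_cdevsw = {
	.d_version =	D_VERSION,
	.d_open =	mydrv_open,
	.d_mmap =	mydrv_mmap,
	.d_name =	"mydrv",
	/* passed to make_dev() in the (omitted) module load handler */
};

The EBADF fallthrough mirrors the sketch above: a d_mmap call that arrives
without a file-descriptor context (e.g. at getpages time) simply skips the
per-open check and only does the range validation.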