Re: vm_page_t related KBI [Was: Re: panic at vm_page_wire with FreeBSD 9.0 Beta 3]

From: Attilio Rao <attilio@freebsd.org>
Date: Mon, 7 Nov 2011 12:35:20 +0100
2011/11/7 Attilio Rao <attilio@freebsd.org>:
> 2011/11/7 Arnaud Lacombe <lacombar@gmail.com>:
>> Hi,
>>
>> On Sat, Nov 5, 2011 at 10:13 AM, Kostik Belousov <kostikbel@gmail.com> wrote:
>>> On Fri, Nov 04, 2011 at 06:03:39PM +0200, Kostik Belousov wrote:
>>>
>>> Below is the KBI patch, now that the vm_page_bits_t merge is done.
>>> Again, I have not spent time converting all in-tree consumers
>>> that are (potentially) loadable modules to the new KPI until it
>>> is agreed upon.
>>>
>>> diff --git a/sys/nfsclient/nfs_bio.c b/sys/nfsclient/nfs_bio.c
>>> index 305c189..7264cd1 100644
>>> --- a/sys/nfsclient/nfs_bio.c
>>> +++ b/sys/nfsclient/nfs_bio.c
>>> @@ -128,7 +128,7 @@ nfs_getpages(struct vop_getpages_args *ap)
>>>         * can only occur at the file EOF.
>>>         */
>>>        VM_OBJECT_LOCK(object);
>>> -       if (pages[ap->a_reqpage]->valid != 0) {
>>> +       if (vm_page_read_valid(pages[ap->a_reqpage]) != 0) {
>>>                for (i = 0; i < npages; ++i) {
>>>                        if (i != ap->a_reqpage) {
>>>                                vm_page_lock(pages[i]);
>>> @@ -198,16 +198,16 @@ nfs_getpages(struct vop_getpages_args *ap)
>>>                        /*
>>>                         * Read operation filled an entire page
>>>                         */
>>> -                       m->valid = VM_PAGE_BITS_ALL;
>>> -                       KASSERT(m->dirty == 0,
>>> +                       vm_page_write_valid(m, VM_PAGE_BITS_ALL);
>>> +                       KASSERT(vm_page_read_dirty(m) == 0,
>>>                            ("nfs_getpages: page %p is dirty", m));
>>>                } else if (size > toff) {
>>>                        /*
>>>                         * Read operation filled a partial page.
>>>                         */
>>> -                       m->valid = 0;
>>> +                       vm_page_write_valid(m, 0);
>>>                        vm_page_set_valid(m, 0, size - toff);
>>> -                       KASSERT(m->dirty == 0,
>>> +                       KASSERT(vm_page_read_dirty(m) == 0,
>>>                            ("nfs_getpages: page %p is dirty", m));
>>>                } else {
>>>                        /*
>>> diff --git a/sys/vm/vm_page.c b/sys/vm/vm_page.c
>>> index 389aea5..2f41e70 100644
>>> --- a/sys/vm/vm_page.c
>>> +++ b/sys/vm/vm_page.c
>>> @@ -2677,6 +2677,66 @@ vm_page_test_dirty(vm_page_t m)
>>>                vm_page_dirty(m);
>>>  }
>>>
>>> +void
>>> +vm_page_lock_func(vm_page_t m, const char *file, int line)
>>> +{
>>> +
>>> +#if LOCK_DEBUG > 0 || defined(MUTEX_NOINLINE)
>>> +       _mtx_lock_flags(vm_page_lockptr(m), 0, file, line);
>>> +#else
>>> +       __mtx_lock(vm_page_lockptr(m), curthread, 0, file, line);
>>> +#endif
>>> +}
>>> +
>> Why do you reinvent the wheel? The whole point of these accessors
>> is to hide implementation details. IMO, you should restrict yourself
>> to the documented API from mutex(9) only.
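For reference (not part of the quoted patch): the valid/dirty accessors
used in the nfs_bio.c hunk above are not shown in this excerpt. A
minimal sketch, assuming they simply wrap the corresponding struct
vm_page fields with the object lock held by the caller, could look
like this:

vm_page_bits_t
vm_page_read_valid(vm_page_t m)
{

	return (m->valid);
}

void
vm_page_write_valid(vm_page_t m, vm_page_bits_t v)
{

	m->valid = v;
}

vm_page_bits_t
vm_page_read_dirty(vm_page_t m)
{

	return (m->dirty);
}

Routing all accesses through such out-of-line functions is exactly what
is being debated here: the struct vm_page layout stops being part of
the KBI, at the cost of a function call for loadable modules.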
>>
>> Oh, wait, you end up using LOCK_FILE instead of just __FILE__, but
>> wait, LOCK_FILE is either just __FILE__ or NULL, depending on
>> LOCK_DEBUG, yet you wouldn't even have these functions without
>> INVARIANTS... This whole LOCK_FILE/LOCK_LINE business seems completely
>> fracked up... If you want this code to depend on LOCK_DEBUG rather
>> than INVARIANTS, then make it live only in the LOCK_DEBUG case.
>>
>> Btw, let me also question the use of non-inline functions.
>
> My impression is that you don't really understand the patch, so the
> disrespectful words you use here are really unjustified.
>
> I think that kib@'s intention is just to have "the most official way"
> to pass file and line down to the locking functions from the
> consumers. His patch is "technically right", but I would prefer
> something different (see below).
>
> LOCK_FILE and LOCK_LINE exist to reduce the space used in the .rodata
> section: without INVARIANTS/WITNESS/etc. they will just be NULL,
> rather than pointing at a lot of string data that would go unused in
> that case.
> I'm unsure whether this answers your concerns, because you just
> criticize without asking a real technical question in this post.
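For reference (not part of the quoted thread), this behaviour comes
from the way LOCK_FILE and LOCK_LINE are defined, roughly as follows
in sys/sys/lock.h of that era (a sketch, not an exact quote):

#if LOCK_DEBUG > 0 || defined(KTR)
#define	LOCK_FILE	__FILE__
#define	LOCK_LINE	__LINE__
#else
#define	LOCK_FILE	NULL
#define	LOCK_LINE	0
#endif

With the debugging options off, every call site therefore passes a NULL
file pointer instead of embedding its own __FILE__ string in .rodata.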
>
>>> +void
>>> +vm_page_unlock_func(vm_page_t m, const char *file, int line)
>>> +{
>>> +
>>> +#if LOCK_DEBUG > 0 || defined(MUTEX_NOINLINE)
>>> +       _mtx_unlock_flags(vm_page_lockptr(m), 0, file, line);
>>> +#else
>>> +       __mtx_unlock(vm_page_lockptr(m), curthread, 0, file, line);
>>> +#endif
>>> +}
>
> Kostik,
> we have usually catered for this case by specifying the interfaces
> directly in mutex.h, in order to keep the implementation details
> "compact enough" (see the thread_lock() case for an example).
>
> I'm unsure what you prefer here; at least the locking functions could
> move over there. There are pros and cons to both approaches, really,
> and I'm fine with either.
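For illustration (not part of the quoted thread): the thread_lock()
style mentioned above keeps everything behind a macro, which captures
file/line at the call site and hands them to an out-of-line worker, so
consumers never touch the mutex internals directly. A hypothetical
vm_page analogue (the names are purely illustrative, not what was
eventually committed) could look like:

void	_vm_page_lock_flags(vm_page_t m, int opts, const char *file,
	    int line);
void	_vm_page_unlock_flags(vm_page_t m, int opts, const char *file,
	    int line);

#define	vm_page_lock(m)							\
	_vm_page_lock_flags((m), 0, LOCK_FILE, LOCK_LINE)
#define	vm_page_unlock(m)						\
	_vm_page_unlock_flags((m), 0, LOCK_FILE, LOCK_LINE)

With this shape, LOCK_FILE/LOCK_LINE expand at the consumer's call site
and the option-dependent logic stays inside the single out-of-line
implementation.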

After thinking about it a bit, my guess is that the best approach
would be to patch mutex.h so that it offers a simple, general
interface for doing what Kostik needs.
I wouldn't encourage, in fact, either adding more case-specific bits
to mutex.h or growing checks in other files that depend on the
compile options.
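One possible shape for such a general interface (only a sketch; these
names are illustrative and not necessarily what ended up being
committed) would be explicit file/line-taking variants of the ordinary
mutex macros, so that a KBI wrapper can forward its caller's location
without calling _mtx_lock_flags() or __mtx_lock() by hand:

/* In mutex.h: file/line-taking variants usable by wrappers. */
#if LOCK_DEBUG > 0 || defined(MUTEX_NOINLINE)
#define	mtx_lock_flags_(m, opts, file, line)				\
	_mtx_lock_flags((m), (opts), (file), (line))
#define	mtx_unlock_flags_(m, opts, file, line)				\
	_mtx_unlock_flags((m), (opts), (file), (line))
#else
#define	mtx_lock_flags_(m, opts, file, line)				\
	__mtx_lock((m), curthread, (opts), (file), (line))
#define	mtx_unlock_flags_(m, opts, file, line)				\
	__mtx_unlock((m), curthread, (opts), (file), (line))
#endif

/* The vm_page wrappers from the patch above then reduce to: */
void
vm_page_lock_func(vm_page_t m, const char *file, int line)
{

	mtx_lock_flags_(vm_page_lockptr(m), 0, file, line);
}

void
vm_page_unlock_func(vm_page_t m, const char *file, int line)
{

	mtx_unlock_flags_(vm_page_lockptr(m), 0, file, line);
}

This keeps the LOCK_DEBUG/MUTEX_NOINLINE decision in one place
(mutex.h) instead of duplicating it in vm_page.c.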

I hope I can provide a patch asap.

Attilio


-- 
Peace can only be achieved by understanding - A. Einstein