Re: panic: lockmgr still held [tmpfs] [vm_map_remove()->vdropl()] (r262186: Thu Feb 20)

From: Bryan Drewery <bdrewery@FreeBSD.org>
Date: Wed, 05 Mar 2014 06:44:19 -0600
On 3/5/2014 5:07 AM, Konstantin Belousov wrote:
> On Wed, Mar 05, 2014 at 11:41:24AM +0200, Andriy Gapon wrote:
>> on 04/03/2014 18:45 John Baldwin said the following:
>>> So I'm not sure how to fix this.  The crash is in this code in 
>>> vm_object_deallocate():
>>>
>>> 			if (object->type == OBJT_SWAP &&
>>> 			    (object->flags & OBJ_TMPFS) != 0) {
>>> 				vp = object->un_pager.swp.swp_tmpfs;
>>> 				vhold(vp);
>>> 				VM_OBJECT_WUNLOCK(object);
>>> 				vn_lock(vp, LK_EXCLUSIVE | LK_RETRY);
>>> 				vdrop(vp);
>>> 				VM_OBJECT_WLOCK(object);
>>> 				if (object->type == OBJT_DEAD ||
>>> 				    object->ref_count != 1) {
>>> 					VM_OBJECT_WUNLOCK(object);
>>> 					VOP_UNLOCK(vp, 0);
>>> 					return;
>>> 				}
>>> 				if ((object->flags & OBJ_TMPFS) != 0)
>>> 					VOP_UNSET_TEXT(vp);
>>> 				VOP_UNLOCK(vp, 0);
>>> 			}
>>>
>>> The vdrop() is dropping the count to zero and trying to free the vnode.  The 
>>> real problem I think is that swp_tmpfs doesn't have an implicit vhold() on the 
>>> vnode, so in this case, the code is doing a vhold/vn_lock/vdrop of an already-
>>> free vnode.  For OBJT_VNODE objects, the reference from the object back to the 
>>> vnode holds a vref() that gets released by a vput() in 
>>> vm_object_vndeallocate().
>>>
>>> One fix might be to change swp_tmpfs to hold a vhold reference.  This is
>>> untested but might work (though I'm not sure it is the right approach, since
>>> I don't know what other effects it might have).
>>
>> I agree with your analysis, but I don't think that a filesystem holding its own
>> vnode is a good idea.  If I am not mistaken, that would prevent tmpfs vnodes
>> from going to the free list.
>> I'd rather try to modify the vm_object_deallocate() code.  E.g. vdrop() could be
>> called after VOP_UNLOCK().  Alternatively, the code could handle a doomed vnode
>> in a different way.
> 
> I agree with Andriy; it is just a bug to vdrop() before the unlock.
> Please try this.
> 
> diff --git a/sys/vm/vm_object.c b/sys/vm/vm_object.c
> index 8683e2f..787b18b 100644
> --- a/sys/vm/vm_object.c
> +++ b/sys/vm/vm_object.c
> @@ -536,17 +536,18 @@ vm_object_deallocate(vm_object_t object)
>  				vhold(vp);
>  				VM_OBJECT_WUNLOCK(object);
>  				vn_lock(vp, LK_EXCLUSIVE | LK_RETRY);
> -				vdrop(vp);
>  				VM_OBJECT_WLOCK(object);
>  				if (object->type == OBJT_DEAD ||
>  				    object->ref_count != 1) {
>  					VM_OBJECT_WUNLOCK(object);
>  					VOP_UNLOCK(vp, 0);
> +					vdrop(vp);
>  					return;
>  				}
>  				if ((object->flags & OBJ_TMPFS) != 0)
>  					VOP_UNSET_TEXT(vp);
>  				VOP_UNLOCK(vp, 0);
> +				vdrop(vp);
>  			}
>  			if (object->shadow_count == 0 &&
>  			    object->handle == NULL &&
> 

Ok I will try this.
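
Out of curiosity, would Andriy's other option (handling a doomed vnode in
a different way) amount to an explicit check after re-taking the locks?
Below is my untested guess at what that would look like; the VI_DOOMED
test is purely my assumption about what was meant, and it keeps the
vdrop()-after-unlock ordering from your patch:

	/*
	 * Untested sketch only: bail out if the vnode was doomed while
	 * the object lock was dropped for vn_lock().  Same shape as the
	 * existing block in vm_object_deallocate(), with the vdrop()
	 * moved after VOP_UNLOCK() as in the patch above.
	 */
	vp = object->un_pager.swp.swp_tmpfs;
	vhold(vp);
	VM_OBJECT_WUNLOCK(object);
	vn_lock(vp, LK_EXCLUSIVE | LK_RETRY);
	VM_OBJECT_WLOCK(object);
	if ((vp->v_iflag & VI_DOOMED) != 0 ||
	    object->type == OBJT_DEAD || object->ref_count != 1) {
		VM_OBJECT_WUNLOCK(object);
		VOP_UNLOCK(vp, 0);
		vdrop(vp);
		return;
	}
	if ((object->flags & OBJ_TMPFS) != 0)
		VOP_UNSET_TEXT(vp);
	VOP_UNLOCK(vp, 0);
	vdrop(vp);

Either way, just moving the vdrop() looks like the minimal change.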

Note that I cannot easily reproduce the issue, so if this looks right
logically, I suggest just committing it after stress testing. I'll
report back after a few builds to let you know whether it makes
anything worse.
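
Also, for reference, my rough (untested) reading of John's earlier idea
of having swp_tmpfs hold the vnode itself would be something along
these lines; the exact spots in tmpfs_subr.c are my guesses, not his
actual patch, and Andriy's point about keeping vnodes off the free list
would still apply:

	/*
	 * Untested sketch, not John's patch: take a hold on the vnode
	 * for as long as the swap object's swp_tmpfs points at it.
	 * Where the association is set up (I assume tmpfs_alloc_vp()):
	 */
	VM_OBJECT_WLOCK(object);
	object->un_pager.swp.swp_tmpfs = vp;
	vhold(vp);		/* new: the object now holds the vnode */
	VM_OBJECT_WUNLOCK(object);

	/*
	 * Where it is torn down (I assume tmpfs_destroy_vobject() in
	 * the reclaim path); drop the hold only after the object lock
	 * is released, per the same rule as the patch above:
	 */
	VM_OBJECT_WLOCK(object);
	if (object->un_pager.swp.swp_tmpfs == vp)
		object->un_pager.swp.swp_tmpfs = NULL;
	VM_OBJECT_WUNLOCK(object);
	vdrop(vp);		/* new: release the hold taken above */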

Thanks all.

-- 
Regards,
Bryan Drewery


Received on Wed Mar 05 2014 - 11:44:30 UTC
