Re: (r257598) panic: Assertion tmp->tm_pages_used == 0 failed at /usr/src/sys/modules/tmpfs/../../fs/tmpfs/tmpfs_vfsops.c:316

From: Bryan Drewery <bdrewery@FreeBSD.org>
Date: Mon, 04 Nov 2013 16:48:09 -0600
On 2013-11-04 11:27, Konstantin Belousov wrote:
> On Mon, Nov 04, 2013 at 10:43:06AM -0600, Bryan Drewery wrote:
>> On 2013-11-04 10:27, Konstantin Belousov wrote:
>> > On Mon, Nov 04, 2013 at 08:35:17AM -0600, Bryan Drewery wrote:
>> >> 11.0-CURRENT #87 r257598
>> >>
>> >> During a poudriere build.
>> >>
>> >> It creates a tmpfs, builds a port in a jail using that tmpfs, and then
>> >> removes the tmpfs and recreates it before building the next port.
>> >>
>> >> > panic: Assertion tmp->tm_pages_used == 0 failed at /usr/src/sys/modules/tmpfs/../../fs/tmpfs/tmpfs_vfsops.c:316
>> >> > cpuid = 9
>> >> > KDB: stack backtrace:
>> >> > db_trace_self_wrapper() at db_trace_self_wrapper+0x2b/frame 0xfffffe1247ee57a0
>> >> > kdb_backtrace() at kdb_backtrace+0x39/frame 0xfffffe1247ee5850
>> >> > vpanic() at vpanic+0x126/frame 0xfffffe1247ee5890
>> >> > kassert_panic() at kassert_panic+0x136/frame 0xfffffe1247ee5900
>> >> > tmpfs_unmount() at tmpfs_unmount+0x163/frame 0xfffffe1247ee5930
>> >> > dounmount() at dounmount+0x41f/frame 0xfffffe1247ee59b0
>> >> > sys_unmount() at sys_unmount+0x356/frame 0xfffffe1247ee5ae0
>> >> > amd64_syscall() at amd64_syscall+0x265/frame 0xfffffe1247ee5bf0
>> >> > Xfast_syscall() at Xfast_syscall+0xfb/frame 0xfffffe1247ee5bf0
>> >> > --- syscall (22, FreeBSD ELF64, sys_unmount), rip = 0x8008a02fa, rsp = 0x7fffffffd198, rbp = 0x7fffffffd2b0 ---
>> >> > Uptime: 44m40s
>> >>
>> >> > (kgdb) #0  doadump (textdump=1) at pcpu.h:219
>> >> > #1  0xffffffff808bcf87 in kern_reboot (howto=260)
>> >> >     at /usr/src/sys/kern/kern_shutdown.c:447
>> >> > #2  0xffffffff808bd495 in vpanic (fmt=<value optimized out>,
>> >> >     ap=<value optimized out>) at /usr/src/sys/kern/kern_shutdown.c:754
>> >> > #3  0xffffffff808bd326 in kassert_panic (fmt=<value optimized out>)
>> >> >     at /usr/src/sys/kern/kern_shutdown.c:642
>> >> > #4  0xffffffff81e159d3 in tmpfs_unmount (mp=0xfffff810502cd660,
>> >> >     mntflags=<value optimized out>)
>> >> >     at /usr/src/sys/modules/tmpfs/../../fs/tmpfs/tmpfs_vfsops.c:316
>> >> > #5  0xffffffff8095e1af in dounmount (mp=0xfffff810502cd660, flags=134742016,
>> >> >     td=0xfffff8013d57a490) at /usr/src/sys/kern/vfs_mount.c:1324
>> >> > #6  0xffffffff8095dd66 in sys_unmount (td=0xfffff8013d57a490,
>> >> >     uap=0xfffffe1247ee5b80) at /usr/src/sys/kern/vfs_mount.c:1212
>> >> > #7  0xffffffff80cb7d75 in amd64_syscall (td=0xfffff8013d57a490, traced=0)
>> >> >     at subr_syscall.c:134
>> >> > #8  0xffffffff80c9c90b in Xfast_syscall ()
>> >> >     at /usr/src/sys/amd64/amd64/exception.S:391
>> >> > #9  0x00000008008a02fa in ?? ()
>> >
>> > Do you have core ?
>> > I want to see the struct tmpfs_mount content for the tmpfs mount point
>> > which caused the panic.
>> 
>> Yes.
>> 
>> Hopefully this is what you're asking for:
>> 
>> (kgdb) frame
>> #4  0xffffffff81e159d3 in tmpfs_unmount (mp=0xfffff810502cd660,
>> mntflags=<value optimized out>) at
>> /usr/src/sys/modules/tmpfs/../../fs/tmpfs/tmpfs_vfsops.c:316
>> 316             MPASS(tmp->tm_pages_used == 0);
>> 
>> (kgdb) print *mp
>> $2 = {mnt_mtx = {lock_object = {lo_name = 0xffffffff80f11f09 "struct mount mtx",
>> lo_flags = 16973824, lo_data = 0, lo_witness = 0xfffffe00006d3b00}, mtx_lock = 4},
>> mnt_gen = 1, mnt_list = {tqe_next = 0xfffff8116be06660, tqe_prev = 0xfffff81050257688},
>> mnt_op = 0xffffffff81e1b940, mnt_vfc = 0xffffffff81e1ba60,
>> mnt_vnodecovered = 0xfffff8104fb0c3b0, mnt_syncer = 0x0, mnt_ref = 1,
>> mnt_nvnodelist = {tqh_first = 0x0, tqh_last = 0xfffff810502cd6c0},
>> mnt_nvnodelistsize = 0,
>> mnt_activevnodelist = {tqh_first = 0x0, tqh_last = 0xfffff810502cd6d8},
>> mnt_activevnodelistsize = 0, mnt_writeopcount = 1, mnt_kern_flag = 16777225,
>> mnt_flag = 4096, mnt_opt = 0xfffff80014424ae0, mnt_optnew = 0x0,
>> mnt_maxsymlinklen = 0,
>> mnt_stat = {f_version = 537068824, f_type = 135, f_flags = 4096, f_bsize = 4096,
>> f_iosize = 4096, f_blocks = 17125058, f_bfree = 17049291, f_bavail = 17049291,
>> f_files = 2147483647, f_ffree = 2147473906, f_syncwrites = 0, f_asyncwrites = 0,
>> f_syncreads = 0, f_asyncreads = 0, f_spare = {0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
>> f_namemax = 255, f_owner = 0, f_fsid = {val = {-2029977679, 135}},
>> f_charspare = '\0' <repeats 79 times>,
>> f_fstypename = "tmpfs\000\000\000\000\000\000\000\000\000\000",
>> f_mntfromname = "tmpfs", '\0' <repeats 82 times>,
>> f_mntonname = "/poudriere/data/build/exp-91amd64-default-xzibition/08/usr/local",
>> '\0' <repeats 23 times>}, mnt_cred = 0xfffff8001418e100,
>> mnt_data = 0xfffff80fedf75700, mnt_time = 0, mnt_iosize_max = 65536,
>> mnt_export = 0x0, mnt_label = 0x0, mnt_hashseed = 4132690418, mnt_lockref = 0,
>> mnt_secondary_writes = 0, mnt_secondary_accwrites = 0, mnt_susp_owner = 0x0,
>> mnt_gjprovider = 0x0,
>> mnt_explock = {lock_object = {lo_name = 0xffffffff80f11f1a "explock",
>> lo_flags = 108199936, lo_data = 0, lo_witness = 0xfffffe00006eb280},
>> lk_lock = 1, lk_exslpfail = 0, lk_timo = 0, lk_pri = 96},
>> mnt_upper_link = {tqe_next = 0x0, tqe_prev = 0x0},
>> mnt_uppers = {tqh_first = 0x0, tqh_last = 0xfffff810502cd980}}
>> 
>> (kgdb) print *(struct tmpfs_mount *)(mp)->mnt_data
>> $3 = {tm_pages_max = 18446744073709551615, tm_pages_used = 18446744073709551615,
>> tm_root = 0xfffff8104ff44828, tm_nodes_max = 2147483647,
>> tm_ino_unr = 0xfffff8002fd65080, tm_nodes_inuse = 0,
>> tm_maxfilesize = 9223372036854775807, tm_nodes_used = {lh_first = 0x0},
>> allnode_lock = {lock_object = {lo_name = 0xffffffff81e1aa47 "tmpfs allnode lock",
>> lo_flags = 16908288, lo_data = 0, lo_witness = 0xfffffe00006ecd80}, mtx_lock = 6},
>> tm_dirent_pool = 0xfffff810500bb000, tm_node_pool = 0xfffff81050046000, tm_ronly = 0}
>> 
>> Looks like tm_pages_used is -1.
> 
> Yes, it looks like it is over-accounted.  Are you able to reproduce
> the panic at will?

Yes, I can consistently reproduce this. For some reason it occurs when the
devel/jna port fails to build. I reproduced it 6 out of 6 times without
your patch.

> 
> My current guesswork is the following, which, I think, is the right
> change anyway.

The patch does fix the issue. Tested 3 times.

Thanks!

> 
> diff --git a/sys/vm/vm_object.c b/sys/vm/vm_object.c
> index 9dea3a1..8683e2f 100644
> --- a/sys/vm/vm_object.c
> +++ b/sys/vm/vm_object.c
> @@ -2099,8 +2099,9 @@ vm_object_coalesce(vm_object_t prev_object, vm_ooffset_t prev_offset,
>  	if (prev_object == NULL)
>  		return (TRUE);
>  	VM_OBJECT_WLOCK(prev_object);
> -	if (prev_object->type != OBJT_DEFAULT &&
> -	    prev_object->type != OBJT_SWAP) {
> +	if ((prev_object->type != OBJT_DEFAULT &&
> +	    prev_object->type != OBJT_SWAP) ||
> +	    (prev_object->flags & OBJ_TMPFS) != 0) {
>  		VM_OBJECT_WUNLOCK(prev_object);
>  		return (FALSE);
>  	}
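
For reference, the tm_pages_used value in the dump above, 18446744073709551615,
is just (uint64_t)-1: the unsigned page counter was uncharged one more time
than it was charged, presumably a side effect of anonymous pages being
coalesced into the tmpfs-backed VM object outside tmpfs's own accounting,
which is what the OBJ_TMPFS check above now prevents. A minimal userland
sketch of the wrap-around (illustrative only, not kernel code; the variable
name is made up):

#include <stdint.h>
#include <stdio.h>

int
main(void)
{
	uint64_t pages_used = 0;	/* every charged page already released */

	pages_used -= 1;		/* one extra uncharge wraps the counter */
	printf("%ju\n", (uintmax_t)pages_used);	/* prints 18446744073709551615 */
	return (0);
}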

-- 
Regards,
Bryan Drewery
Received on Mon Nov 04 2013 - 21:48:12 UTC