Re: panic: solaris assert: sa.sa_magic == 0x2F505A (0x4d5ea364 == 0x2f505a), file: /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_vfsops.c, line: 625

From: Martin Wilke <miwi_at_bsdhash.org>
Date: Mon, 1 Apr 2013 23:31:05 +0800
I can confirm this problem; I'm seeing the same panic here.

- Martin

On Apr 1, 2013, at 10:18 PM, Fabian Keil <freebsd-listen_at_fabiankeil.de> wrote:

> I got the following panic on 10.0-CURRENT from two days ago
> while receiving an incremental snapshot to a certain pool:
> 
> (kgdb) where
> #0  doadump (textdump=0) at pcpu.h:229
> #1  0xffffffff8031a3ce in db_dump (dummy=<value optimized out>, dummy2=0, dummy3=0, dummy4=0x0) at /usr/src/sys/ddb/db_command.c:543
> #2  0xffffffff80319eca in db_command (last_cmdp=<value optimized out>, cmd_table=<value optimized out>, dopager=1) at /usr/src/sys/ddb/db_command.c:449
> #3  0xffffffff80319c82 in db_command_loop () at /usr/src/sys/ddb/db_command.c:502
> #4  0xffffffff8031c5d0 in db_trap (type=<value optimized out>, code=0) at /usr/src/sys/ddb/db_main.c:231
> #5  0xffffffff805d0da3 in kdb_trap (type=3, code=0, tf=<value optimized out>) at /usr/src/sys/kern/subr_kdb.c:654
> #6  0xffffffff8087fdc3 in trap (frame=0xffffff80dc9d6520) at /usr/src/sys/amd64/amd64/trap.c:579
> #7  0xffffffff80869cb2 in calltrap () at exception.S:228
> #8  0xffffffff805d058e in kdb_enter (why=0xffffffff80a47e7a "panic", msg=<value optimized out>) at cpufunc.h:63
> #9  0xffffffff80599216 in panic (fmt=<value optimized out>) at /usr/src/sys/kern/kern_shutdown.c:747
> #10 0xffffffff8130323f in assfail3 (a=<value optimized out>, lv=<value optimized out>, op=<value optimized out>, rv=<value optimized out>, f=<value optimized out>, l=<value optimized out>)
>    at /usr/src/sys/modules/opensolaris/../../cddl/compat/opensolaris/kern/opensolaris_cmn_err.c:89
> #11 0xffffffff8117924e in zfs_space_delta_cb (bonustype=<value optimized out>, data=0xffffff8015eeb8c0, userp=0xfffffe004261c640, groupp=0xfffffe004261c648)
>    at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_vfsops.c:625
> #12 0xffffffff8110003b in dmu_objset_userquota_get_ids (dn=0xfffffe004261c358, before=0, tx=<value optimized out>) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/dmu_objset.c:1249
> #13 0xffffffff811071b6 in dnode_sync (dn=0xfffffe004261c358, tx=0xfffffe00186e1300) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/dnode_sync.c:554
> #14 0xffffffff810ff98b in dmu_objset_sync_dnodes (list=0xfffffe00691a5250, newlist=<value optimized out>, tx=<value optimized out>)
>    at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/dmu_objset.c:910
> #15 0xffffffff810ff825 in dmu_objset_sync (os=0xfffffe00691a5000, pio=<value optimized out>, tx=0xfffffe00186e1300)
>    at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/dmu_objset.c:1027
> #16 0xffffffff8110cb0d in dsl_dataset_sync (ds=0xfffffe001f3d0c00, zio=0x780, tx=0xfffffe00186e1300) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/dsl_dataset.c:1411
> #17 0xffffffff8111399a in dsl_pool_sync (dp=0xfffffe0069ec4000, txg=<value optimized out>) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/dsl_pool.c:409
> #18 0xffffffff8112f0ee in spa_sync (spa=0xfffffe0050f00000, txg=3292) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/spa.c:6328
> #19 0xffffffff81137c45 in txg_sync_thread (arg=0xfffffe0069ec4000) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/txg.c:493
> #20 0xffffffff80569c1a in fork_exit (callout=0xffffffff811378d0 <txg_sync_thread>, arg=0xfffffe0069ec4000, frame=0xffffff80dc9d6c00) at /usr/src/sys/kern/kern_fork.c:991
> #21 0xffffffff8086a1ee in fork_trampoline () at exception.S:602
> #22 0x0000000000000000 in ?? ()
> Current language:  auto; currently minimal
> (kgdb) f 12
> #12 0xffffffff8110003b in dmu_objset_userquota_get_ids (dn=0xfffffe004261c358, before=0, tx=<value optimized out>) at /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/dmu_objset.c:1249
> 1249		error = used_cbs[os->os_phys->os_type](dn->dn_bonustype, data,
> (kgdb) p *dn
> $1 = {dn_struct_rwlock = {lock_object = {lo_name = 0xffffffff811da0a9 "dn->dn_struct_rwlock", lo_flags = 40960000, lo_data = 0, lo_witness = 0x0}, sx_lock = 1}, dn_link = {list_next = 0xfffffe0042629020, 
>    list_prev = 0xfffffe00691a5360}, dn_objset = 0xfffffe00691a5000, dn_object = 55652, dn_dbuf = 0xfffffe00427ad0e0, dn_handle = 0xfffffe0069f70128, dn_phys = 0xffffff8015eeb800, 
>  dn_type = DMU_OT_PLAIN_FILE_CONTENTS, dn_bonuslen = 192, dn_bonustype = 44 ',', dn_nblkptr = 1 '\001', dn_checksum = 0 '\0', dn_compress = 0 '\0', dn_nlevels = 1 '\001', dn_indblkshift = 14 '\016', 
>  dn_datablkshift = 0 '\0', dn_moved = 0 '\0', dn_datablkszsec = 10, dn_datablksz = 5120, dn_maxblkid = 0, dn_next_nblkptr = "\000\000\000", dn_next_nlevels = "\000\000\000", 
>  dn_next_indblkshift = "\000\000\000", dn_next_bonustype = ",\000\000", dn_rm_spillblk = "\000\000\000", dn_next_bonuslen = {192, 0, 0, 0}, dn_next_blksz = {0, 0, 0, 0}, dn_dbufs_count = 0, dn_dirty_link = {{
>      list_next = 0xfffffe00691a51f0, list_prev = 0xfffffe0042628ab0}, {list_next = 0x0, list_prev = 0x0}, {list_next = 0x0, list_prev = 0x0}, {list_next = 0x0, list_prev = 0x0}}, dn_mtx = {lock_object = {
>      lo_name = 0xffffffff811da0bf "dn->dn_mtx", lo_flags = 40960000, lo_data = 0, lo_witness = 0x0}, sx_lock = 1}, dn_dirty_records = {{list_size = 208, list_offset = 0, list_head = {
>        list_next = 0xfffffe004261c470, list_prev = 0xfffffe004261c470}}, {list_size = 208, list_offset = 0, list_head = {list_next = 0xfffffe004261c490, list_prev = 0xfffffe004261c490}}, {list_size = 208, 
>      list_offset = 0, list_head = {list_next = 0xfffffe004261c4b0, list_prev = 0xfffffe004261c4b0}}, {list_size = 208, list_offset = 0, list_head = {list_next = 0xfffffe004261c4d0, 
>        list_prev = 0xfffffe004261c4d0}}}, dn_ranges = {{avl_root = 0x0, avl_compar = 0xffffffff81106ec0 <free_range_compar>, avl_offset = 0, avl_numnodes = 0, avl_size = 40}, {avl_root = 0x0, 
>      avl_compar = 0xffffffff81106ec0 <free_range_compar>, avl_offset = 0, avl_numnodes = 0, avl_size = 40}, {avl_root = 0x0, avl_compar = 0xffffffff81106ec0 <free_range_compar>, avl_offset = 0, 
>      avl_numnodes = 0, avl_size = 40}, {avl_root = 0x0, avl_compar = 0xffffffff81106ec0 <free_range_compar>, avl_offset = 0, avl_numnodes = 0, avl_size = 40}}, dn_allocated_txg = 3292, dn_free_txg = 0, 
>  dn_assigned_txg = 0, dn_notxholds = {cv_description = 0xffffffff811da0dd "dn->dn_notxholds", cv_waiters = 0}, dn_dirtyctx = DN_UNDIRTIED, dn_dirtyctx_firstset = 0x0, dn_tx_holds = {rc_count = 0}, 
>  dn_holds = {rc_count = 3}, dn_dbufs_mtx = {lock_object = {lo_name = 0xffffffff811da0cb "dn->dn_dbufs_mtx", lo_flags = 40960000, lo_data = 0, lo_witness = 0x0}, sx_lock = 1}, dn_dbufs = {list_size = 224, 
>    list_offset = 176, list_head = {list_next = 0xfffffe004261c5f8, list_prev = 0xfffffe004261c5f8}}, dn_bonus = 0x0, dn_have_spill = 0, dn_zio = 0xfffffe00695af000, dn_oldused = 2560, dn_oldflags = 3, 
>  dn_olduid = 1001, dn_oldgid = 1001, dn_newuid = 0, dn_newgid = 0, dn_id_flags = 5, dn_zfetch = {zf_rwlock = {lock_object = {lo_name = 0xffffffff811dd156 "zf->zf_rwlock", lo_flags = 40960000, lo_data = 0, 
>        lo_witness = 0x0}, sx_lock = 1}, zf_stream = {list_size = 112, list_offset = 88, list_head = {list_next = 0xfffffe004261c688, list_prev = 0xfffffe004261c688}}, zf_dnode = 0xfffffe004261c358, 
>    zf_stream_cnt = 0, zf_alloc_fail = 0}}
> 
> The incremental was created with:
> zfs send -i @2013-03-28_14:21 tank/home/fk@2013-04-01_12:31
> piped through mbuffer and received with:
> zfs receive -v -u -F rockbox/backup/r500/tank/home/fk
> 
> Reading the incremental directly from a file triggers the
> panic as well, but sometimes it takes more than one attempt.
> 
> The offending sa_magic in the panic message is always the same.
> 
> The receiving pool appears to be okay:
> 
> fk@r500 ~ $ zpool status rockbox
>  pool: rockbox
> state: ONLINE
> status: Some supported features are not enabled on the pool. The pool can
> 	still be used, but some features are unavailable.
> action: Enable all features using 'zpool upgrade'. Once this is done,
> 	the pool may no longer be accessible by software that does not support
> 	the features. See zpool-features(7) for details.
>  scan: scrub repaired 0 in 0h3m with 0 errors on Mon Apr  1 13:57:35 2013
> config:
> 
> 	NAME                 STATE     READ WRITE CKSUM
> 	rockbox              ONLINE       0     0     0
> 	  label/rockbox.eli  ONLINE       0     0     0
> 
> errors: No known data errors
> 
> The feature that isn't yet enabled is lz4, but after upgrading
> a copy of the pool the panic was still reproducible. On the
> receiving side, gzip compression is enabled.
> 
> Fabian
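
For anyone digging into this: the assertion that fires is the SA
bonus-buffer sanity check in zfs_space_delta_cb(). Roughly, and this is
my paraphrase of the illumos-derived code rather than a verbatim copy of
the 10-CURRENT sources, the code around zfs_vfsops.c:625 does something
like the following when the dnode's bonus type is DMU_OT_SA (44, if I
read the enum right, which matches dn_bonustype = 44 ',' in the dump
above):

	/* data is the dnode's bonus buffer, expected to start with an SA header. */
	sa_hdr_phys_t sa = *(sa_hdr_phys_t *)data;

	if (sa.sa_magic == BSWAP_32(0x2F505A)) {
		/* Stream written on a machine of the other endianness: swap first. */
		sa.sa_magic = 0x2F505A;
		sa.sa_layout_info = BSWAP_16(sa.sa_layout_info);
	} else {
		/* In this crash sa_magic is 0x4d5ea364, so the VERIFY fires. */
		VERIFY3U(sa.sa_magic, ==, 0x2F505A);
	}

0x4d5ea364 is neither the SA header magic 0x2F505A nor its byte-swapped
form, so the bonus buffer handed to the quota callback looks wrong
outright rather than merely endian-swapped.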

+-----------------oOO--(_)--OOo-------------------------+
With best Regards,
       Martin Wilke (miwi_(at)_FreeBSD.org)

Mess with the Best, Die like the Rest