Re: panic: g_eli_key_hold: sc_ekeys_total=1

From: Fabian Keil <freebsd-listen_at_fabiankeil.de>
Date: Sun, 24 Apr 2011 11:12:03 +0200
Fabian Keil <freebsd-listen_at_fabiankeil.de> wrote:

> With sources from today my system panics at boot time
> after attaching the swap device:
> 
> GEOM_ELI: Device ada0s1b.eli created.
> GEOM_ELI: Encryption: AES-XTS 256
> GEOM_ELI:     Crypto: software
> panic: g_eli_key_hold: sc_ekeys_total=1
> cpuid = 0
> KDB: enter: panic
> Uptime: 2m16s
> Physical memory: 1974 MB
> Dumping 213 MB: 198 182 166 150 134 118 102 86 70 54 38 22 6
> 
> Reading symbols from /boot/kernel/zfs.ko...Reading symbols from /boot/kernel/zfs.ko.symbols...done.
> done.
> [...]
> Loaded symbols for /boot/kernel/acpi_ibm.ko
> #0  doadump () at /usr/src/sys/kern/kern_shutdown.c:250
> 250             if (textdump_pending)
> (kgdb) where
> #0  doadump () at /usr/src/sys/kern/kern_shutdown.c:250
> #1  0xffffffff805354f7 in kern_reboot (howto=260) at /usr/src/sys/kern/kern_shutdown.c:418
> #2  0xffffffff80534f91 in panic (fmt=Variable "fmt" is not available.
> ) at /usr/src/sys/kern/kern_shutdown.c:591
> #3  0xffffffff811acab2 in g_eli_key_hold (sc=0xfffffe0005c33400, offset=0, blocksize=Variable "blocksize" is not available.
> ) at /usr/src/sys/modules/geom/geom_eli/../../../geom/eli/g_eli_key_cache.c:266
> #4  0xffffffff811acdbc in g_eli_crypto_run (wr=0xfffffe0005cc3a80, bp=0xfffffe0005b9f0e8) at /usr/src/sys/modules/geom/geom_eli/../../../geom/eli/g_eli_privacy.c:317
> #5  0xffffffff811a5301 in g_eli_worker (arg=Variable "arg" is not available.
> ) at /usr/src/sys/modules/geom/geom_eli/../../../geom/eli/g_eli.c:519
> #6  0xffffffff80509845 in fork_exit (callout=0xffffffff811a4f20 <g_eli_worker>, arg=0xfffffe0005cc3a80, frame=0xffffff80e68d5c50) at /usr/src/sys/kern/kern_fork.c:920
> #7  0xffffffff807bd67e in fork_trampoline () at /usr/src/sys/amd64/amd64/exception.S:603
> [...]
> (kgdb) f 3
> #3  0xffffffff811acab2 in g_eli_key_hold (sc=0xfffffe0005c33400, offset=0, blocksize=Variable "blocksize" is not available.
> ) at /usr/src/sys/modules/geom/geom_eli/../../../geom/eli/g_eli_key_cache.c:266
> 266             KASSERT(sc->sc_ekeys_total > 1, ("%s: sc_ekeys_total=%ju", __func__,
> (kgdb) p *sc
> $1 = {sc_geom = 0xfffffe00028a1000, sc_crypto = 2, 
>   sc_mkey = "[scrubbed]", sc_ekey = '\0' <repeats 63 times>, sc_ekeys_queue = {tqh_first = 0xfffffe0005c2b380, tqh_last = 0xfffffe0005c2b3d0}, 
>   sc_ekeys_tree = {rbh_root = 0xfffffe0005c2b380}, sc_ekeys_lock = {lock_object = {lo_name = 0xffffffff811adf38 "geli:ekeys", lo_flags = 16973824, lo_data = 0, lo_witness = 0x0}, 
>     mtx_lock = 4}, sc_ekeys_total = 1, sc_ekeys_allocated = 1, sc_ealgo = 22, sc_ekeylen = 256, 
>   sc_akey = "[scrubbed]", sc_aalgo = 0, sc_akeylen = 0, sc_alen = 0, sc_akeyctx = {state = {0, 0, 0, 0, 0, 0, 0, 
>       0}, bitcount = 0, buffer = '\0' <repeats 63 times>}, 
>   sc_ivkey = "[scrubbed]", sc_ivctx = {state = {0, 0, 0, 0, 0, 0, 0, 0}, 
>     bitcount = 0, buffer = '\0' <repeats 63 times>}, sc_nkey = -1, sc_flags = 13, sc_inflight = 1, sc_mediasize = 2147483648, sc_sectorsize = 4096, sc_bytes_per_sector = 0, 
>   sc_data_per_sector = 0, sc_queue = {queue = {tqh_first = 0x0, tqh_last = 0xfffffe0005c336a8}, last_offset = 2147479552, insert_point = 0x0}, sc_queue_mtx = {lock_object = {
>       lo_name = 0xffffffff811adf2d "geli:queue", lo_flags = 16973824, lo_data = 0, lo_witness = 0x0}, mtx_lock = 4}, sc_workers = {lh_first = 0xfffffe0005cc3a40}}
> 
> Before the panic, the geli provider (AES-CBC 128) for the ZFS pool
> is attached without issues. Attaching geli providers located on
> USB disks doesn't seem to cause issues either, and I haven't
> been able to reproduce the panic by manually running:
> 
> /sbin/geli onetime -l 256 /dev/ada0s1b
> swapon /dev/ada0s1b.eli

Which of course doesn't specify the 4096-byte sector size
normally used for the swap device.

The panic can be reproduced with:
/sbin/geli onetime -l 256 -s 4096 /dev/ada0s1b

Fabian

Received on Sun Apr 24 2011 - 07:14:27 UTC