Re: tmpfs panic

From: Steve Wills <swills_at_freebsd.org>
Date: Sun, 6 Jul 2014 21:07:47 +0000
On Sun, Jul 06, 2014 at 01:49:04PM -0700, Neel Natu wrote:
> Hi Steve,
> 
> On Sun, Jul 6, 2014 at 8:46 AM, Steve Wills <swills_at_freebsd.org> wrote:
> > I should have noted this system is running in bhyve. Also I'm told this panic
> > may be related to the fact that the system is running in bhyve.
> >
> > Looking at it a little more closely:
> >
> > (kgdb) list *__mtx_lock_sleep+0xb1
> > 0xffffffff809638d1 is in __mtx_lock_sleep (/usr/src/sys/kern/kern_mutex.c:431).
> > 426                      * owner stops running or the state of the lock changes.
> > 427                      */
> > 428                     v = m->mtx_lock;
> > 429                     if (v != MTX_UNOWNED) {
> > 430                             owner = (struct thread *)(v & ~MTX_FLAGMASK);
> > 431                             if (TD_IS_RUNNING(owner)) {
> > 432                                     if (LOCK_LOG_TEST(&m->lock_object, 0))
> > 433                                             CTR3(KTR_LOCK,
> > 434                                                 "%s: spinning on %p held by %p",
> > 435                                                 __func__, m, owner);
> > (kgdb)
> >
> > I'm told that MTX_CONTESTED was set on the unlocked mtx, i.e. that
> > MTX_CONTESTED is spuriously left behind, and to ask how the lock prefix is
> > handled in bhyve. Does any of that make sense to anyone?
> >
> 
> Regarding the lock prefix: since bhyve only supports hardware that has
> nested paging, the hypervisor doesn't get in the way of instructions
> that access memory. This includes instructions with lock prefixes or
> any other prefixes for that matter. If there is a VM exit due to a
> nested page fault then the faulting instruction is restarted after
> resolving the fault.
> 
> Having said that, there are more plausible explanations that might
> implicate bhyve: incorrect translations in the nested page tables,
> stale translations in the TLB etc.
> 
> Do you have a core file for the panic? It would be very useful to
> debug this further.

No, unfortunately I did not have swap or a dumpdev set up at the time, so I was
unable to get a core dump from the crashed kernel. (Bhyve itself did not
crash.) I've since set up swap in the VM and configured the dumpdev, so if it
happens again I should get a core.

Steve

Received on Sun Jul 06 2014 - 19:07:57 UTC
