Re: Vinum and freebsd

From: Mario Doria <mariodoria_at_yahoo.com>
Date: Sat, 10 Jul 2004 02:47:38 -0500
Hi,

I've experienced the same problem on two machines, one dual-Pentium III SMP 
machine and one with a Duron. The sources were from last Wednesday 
(July 7) in both cases.

On the Duron, I could boot if I removed start_vinum="YES" 
from /etc/rc.conf, removed the vinum volumes from /etc/fstab, and then, 
once the system was up, issued a "vinum start".
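For anyone hitting the same boot panic, the workaround amounts to something like the following. This is only a sketch: the volume name "raid1" is taken from the error message below, but the device path and mount point are assumptions, not my actual configuration.

```shell
# /etc/rc.conf -- comment out or remove the automatic vinum startup
# so the kernel can finish booting:
#   start_vinum="YES"

# /etc/fstab -- comment out the vinum-backed filesystems, e.g.:
#   /dev/vinum/raid1   /data   ufs   rw   2   2

# Once the system is up, start vinum by hand and mount manually
# (volume name and mount point are assumed here):
vinum start
mount /dev/vinum/raid1 /data
```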

The vinum configuration on the Duron machine was a RAID-1 of about 75GB. 
Sometimes it would say that one of the disks had crashed, and I had to 
do a resetconfig, then restore my old configuration and start the 
failed disk. Other times, it would fail with "vinum: incompatible 
sector sizes.  raid1.p0.s0 has 0 bytes, raid1.p0.s1 has 512 bytes.  
Ignored.". These problems would happen just by doing a "vinum start" 
and then a "vinum stop".
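The recovery procedure I describe looks roughly like this. Again a hedged sketch: the saved-configuration path /etc/vinum.conf and the choice of subdisk raid1.p0.s1 are assumptions based on the error message, not taken from my real setup.

```shell
# Throw away vinum's on-disk configuration after it declares a
# disk crashed (this destroys the config, not the data on the
# surviving plex -- make sure you have a saved config file first):
vinum resetconfig

# Re-create the drives/volumes/plexes/subdisks from the saved
# configuration (path is an assumption):
vinum create /etc/vinum.conf

# Bring the subdisk that was marked crashed back up so the
# plex can resynchronize (object name is an assumption):
vinum start raid1.p0.s1
```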

If, on the other hand, start_vinum was enabled in /etc/rc.conf, the 
machine would panic with "panic: umount: dangling vnode".

The SMP machine would panic with a different error, but I did not 
capture the details of the panic. I cvsuped sources from June 17 and 
the problem was gone, except for this, which I got this morning in 
my dmesg:

SMP: AP CPU #1 Launched!
Mounting root from ufs:/dev/da0s1a
lock order reversal
 1st 0xc22538c4 vm object (vm object) @ /usr/src/sys/vm/swap_pager.c:1313
 2nd 0xc08c73a0 swap_pager swhash (swap_pager swhash) @ /usr/src/sys/vm/swap_pager.c:1799
 3rd 0xc1040ef4 vm object (vm object) @ /usr/src/sys/vm/uma_core.c:923
Stack backtrace:
backtrace(0,1,c0895708,c08969a0,c08259dc) at backtrace+0x12
witness_checkorder(c1040ef4,9,c07e3927,39b) at witness_checkorder+0x53b
_mtx_lock_flags(c1040ef4,0,c07e3927,39b,c20ac508) at _mtx_lock_flags+0x57
obj_alloc(c20fe420,1000,daf19a2f,101,daf19a3c) at obj_alloc+0x31
slab_zalloc(c20fe420,1,c20fe420,c20fe420,c20ac500) at slab_zalloc+0x87
uma_zone_slab(c20fe420,1,c20ac508,0,c07e3927,78e) at uma_zone_slab+0xb0
uma_zalloc_internal(c20fe420,0,1,c20ac508,0) at uma_zalloc_internal+0x29
uma_zalloc_arg(c20fe420,0,1) at uma_zalloc_arg+0x2a2
swp_pager_meta_build(c22538c4,3,0,2,0) at swp_pager_meta_build+0x108
swap_pager_putpages(c22538c4,daf19c00,4,0,daf19b70) at swap_pager_putpages+0x2a8
default_pager_putpages(c22538c4,daf19c00,4,0,daf19b70) at default_pager_putpages+0x18
vm_pageout_flush(daf19c00,4,0,c0894fc0,2ff) at vm_pageout_flush+0x112
vm_pageout_clean(c1a4dcb0) at vm_pageout_clean+0x2a5
vm_pageout_scan(0) at vm_pageout_scan+0x543
vm_pageout(0,daf19d48,0,c072b608,0) at vm_pageout+0x2d2
fork_exit(c072b608,0,daf19d48) at fork_exit+0x98
fork_trampoline() at fork_trampoline+0x8
--- trap 0x1, eip = 0, esp = 0xdaf19d7c, ebp = 0 ---

uname -a: FreeBSD bender.tecdigital.net 5.2-CURRENT FreeBSD 5.2-CURRENT #0: Thu Jul  8 21:20:14 CDT 2004     madd_at_bender.tecdigital.net:/usr/obj/usr/src/sys/GENERIC  i386


Mario Doria


>Hi,
>        I've had problems with vinum and newer 5.2-CURRENT kernels. I 
>am unable to boot a newer kernel without it going into a panic 
>saying "panic: umount: dangling vnode". I've had this problem 
>since early June and it doesn't seem to be fixed (as of July 9th). 
>Has something changed in vinum that I am missing and need to adjust 
>in my setup, or is vinum just broken right now?
>
>The vinum setup I have right now is pretty old; do I have to 
>rebuild it?
>
>Michael
Received on Sat Jul 10 2004 - 05:47:40 UTC

This archive was generated by hypermail 2.4.0 : Wed May 19 2021 - 11:38:01 UTC