On Thu, 10 Dec 2009, Jeremie Le Hen wrote:

> Hi list,
>
> First, excuse me for posting on -current_at_ while this problem happened
> with -STABLE, but RELENG_8 is still relatively close to HEAD and I have
> the feeling that -stable_at_ is more concerned with configuration and
> maybe userland problems.
>
> I ran the following command sequence on a fresh RELENG_8 from around
> 3rd Dec:
>
>     zfs send -R data/repos | zfs receive -d data/crepos
>     zfs destroy data/repos
>     zfs rename data/crepos/repos data/repos
>
> This led to the following panic on rename:
>
> % Fatal trap 12: page fault while in kernel mode
> % cpuid = 0; apic id = 00
> % fault virtual address   = 0x780fe2a0
> % fault code              = supervisor read, page not present
> % instruction pointer     = 0x20:0x806d1687
> % stack pointer           = 0x28:0xcb41c750
> % frame pointer           = 0x28:0xcb41c784
> % code segment            = base 0x0, limit 0xfffff, type 0x1b
> %                         = DPL 0, pres 1, def32 1, gran 1
> % processor eflags        = resume, IOPL = 0
> % current process         = 72605 (zfs)
> % [thread pid 72605 tid 100435 ]
> % Stopped at      _sx_xlock_hard+0x21e:   movl    0x1a0(%eax),%eax
> % db> bt
> % Tracing pid 72605 tid 100435 td 0x88b6c480
> % _sx_xlock_hard(8f2460a0,88b6c480,0,85ce8fc8,a1,...) at _sx_xlock_hard+0x21e
> % _sx_xlock(8f2460a0,0,85ce8fc8,a1,866b2a70,...) at _sx_xlock+0x48
> % rrw_enter(8f2460a0,1,85cdf7b1,0,cb41c7e8,...) at rrw_enter+0x35
> % zfs_statfs(866b2a10,866b2a70,1d8,cb41c844,865a3a10,...) at zfs_statfs+0x39
> % __vfs_statfs(866b2a10,cb41c844,0,0,0,...) at __vfs_statfs+0x1f
> % nullfs_statfs(865a3a10,865a3a70,806bd68b,865a3a70,865a3a10,...) at nullfs_statfs+0x46
> % __vfs_statfs(865a3a10,865a3a70,1d8,a5889340,cb41cb78,...) at __vfs_statfs+0x1f
> % kern_getfsstat(88b6c480,cb41ccf8,8df8,0,1,...) at kern_getfsstat+0x2d0
> % getfsstat(88b6c480,cb41ccf8,c,cb41ccb0,8096d28a,...) at getfsstat+0x2e
> % syscall(cb41cd38) at syscall+0x320
> % Xint0x80_syscall() at Xint0x80_syscall+0x20
> % --- syscall (395, FreeBSD ELF32, getfsstat), eip = 0x281742d7, esp = 0x7fbfc8dc, ebp = 0x7fbfc908 ---
>
> FYI, after the crash, I could rename the filesystem without any problem.

I think I saw this same panic last weekend after I migrated from an old
raidz2 pool to a new, larger volume. I didn't have the kernel set up to get
a backtrace, so this is just a "me too", but it happened at exactly noon,
which is when freebsd-snapshot would have been creating and renaming
snapshots. Just as you mentioned, after rebooting I was able to rename and
destroy the snapshots without a problem.

As extra data points, in case any of it matters:

- I do not have nullfs in my kernel.
- Both the old and new pools are raidz2.
- Both are attached to an mfi bus.
- The old pool had been exported and all of its devices detached.
- The new pool had been imported and renamed to the name of the old pool.

Received on Fri Dec 11 2009 - 02:18:09 UTC
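For anyone else hitting this who, like me, had no dump configured: a crash dump plus backtrace can be captured without a custom debug kernel. A minimal sketch, assuming the stock savecore/rc.conf setup and the default dump directory (paths and device names are illustrative, not taken from the reports above):

```shell
# /etc/rc.conf — enable kernel crash dumps (assumed defaults)
dumpdev="AUTO"        # dump to the first configured swap device on panic
dumpdir="/var/crash"  # where savecore(8) writes the vmcore after reboot

# After the next panic and reboot, savecore recovers the dump; then:
#   kgdb /boot/kernel/kernel /var/crash/vmcore.0
#   (kgdb) bt
# The kernel should be built with debug symbols (or use kernel.debug /
# the matching .symbols file) for a readable backtrace.
```

This only records the dump; it does not avoid the panic itself.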