On Aug 18, 2009, at 16:20, Thomas Backman wrote:

> On Aug 17, 2009, at 17:24, Thomas Backman wrote:
>
>> On Aug 17, 2009, at 15:25, Thomas Backman wrote:
>>
>>> So, I've got myself a source tree almost completely free of
>>> patches after today's batch of ZFS patches merged - all that
>>> remains is that I commented out ps -axl in /usr/sbin/crashinfo,
>>> since it only coredumps anyway, and added CFLAGS+=-DDEBUG=1 to
>>> zfs/Makefile.
>>>
>>> One of the changes I didn't already have prior to this must have
>>> broken something, though, because this script worked just fine
>>> before the merges earlier today.
>>> The script below is the exact same one I linked to in
>>> http://lists.freebsd.org/pipermail/freebsd-current/2009-July/009174.html
>>> back in July (URL to the script:
>>> http://exscape.org/temp/zfs_clone_panic.sh) - I made some local
>>> changes, hence the name invoked below.
>>>
>>> Now that all the patches are merged, you should need nothing but
>>> the script, bash, and ~200MB of free space on the partition
>>> containing /root/ to reproduce this problem.
>>> (Note that the "no such pool" on the FIRST run of the script is
>>> normal; it simply tries to clean up something that isn't there,
>>> without error/sanity checking.)
>>>
>>> [...]
>>> + zpool create -f -R /slave slave ggate666
>>> ++ date +backup-%Y%m%d-%H%M
>>> + NOW=backup-20090817-1522
>>> + echo 'Creating snapshots'
>>> Creating snapshots
>>> + zfs snapshot -r tank@backup-20090817-1522
>>> + echo 'Cloning pool'
>>> Cloning pool
>>> + zfs send -R tank@backup-20090817-1522
>>> + zfs recv -vFd slave
>>> cannot receive: invalid stream (malformed nvlist)
>>> warning: cannot send 'tank@backup-20090817-1522': Broken pipe
>>>
>>> Regards,
>>> Thomas
>>
>> This is perhaps more troubling...
>> [...]
>> [root@chaos ~]# zpool create testpool ad0s1d
>> [root@chaos ~]# zpool export testpool
>> [root@chaos ~]# zpool import testpool
>> cannot import 'testpool': no such pool available
>>
>> Regards,
>> Thomas
>
> OK, I tried to reproduce this in a VM... and I have to say I was a
> bit surprised: after doing an installkernel/installworld, but BEFORE
> REBOOTING (I install in "multi"-user mode, with one user logged in
> via ssh; I've never had a problem with that), the same issue
> appeared, so I'm guessing zfs.ko can't be to blame here?

It's not the zpool binary by itself: after updating from a two-day-old
current (make world && reboot):

root@freebsd-current:~# zpool list
NAME    SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
zroot  14.9G   753M  14.1G     4%  ONLINE  -
root@freebsd-current:~# zpool export zroot
root@freebsd-current:~# zpool list
no pools available
root@freebsd-current:~# zpool import zroot
cannot import 'zroot': no such pool available
root@freebsd-current:~# zpool.old import zroot
cannot import 'zroot': no such pool available
root@freebsd-current:~# zpool.old import
root@freebsd-current:~# uname -a
FreeBSD freebsd-current.lassitu.de 8.0-BETA2 FreeBSD 8.0-BETA2 #1 r196359: Tue Aug 18 16:42:41 CEST 2009 root@freebsd-current.lassitu.de:/usr/obj/usr/src/sys/MINIMAL amd64

I saved zfs and zpool before the installworld. And my root is actually
on UFS; this pool was left over from root-on-ZFS raidz experiments.
root@freebsd-current:~# ls -l /sbin/zpool*
-r-xr-xr-x  1 root  wheel  76752 Aug 18 17:47 /sbin/zpool*
-r-xr-xr-x  1 root  wheel  76752 Aug 18 17:46 /sbin/zpool.old*
root@freebsd-current:~# md5 /sbin/zpool*
MD5 (/sbin/zpool) = 83dcf6343bb0392a38159dd456dcf4c5
MD5 (/sbin/zpool.old) = 340cb5a383b2fc3c77afbdc881258597

HTH,
Stefan

--
Stefan Bethke <stb@lassitu.de>   Fon +49 151 14070811
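
For readers who want to try this without fetching the full script, the
replication step it exercises reduces to roughly the following sketch,
reconstructed from the set -x trace quoted above. The pool names, the
ggate666 device, and the snapshot naming scheme come from the trace;
everything else (including setting up the gate device and the source
pool) is assumed and untested:

  #!/bin/sh
  # Sketch of the failing replication step from zfs_clone_panic.sh,
  # reconstructed from the trace above. Assumes a source pool "tank"
  # and a GEOM gate device ggate666 backed by ~200MB of space.
  NOW="backup-$(date +%Y%m%d-%H%M)"

  # Destination pool under an alternate root so its datasets don't
  # shadow the live filesystems.
  zpool create -f -R /slave slave ggate666

  echo 'Creating snapshots'
  zfs snapshot -r "tank@${NOW}"

  echo 'Cloning pool'
  # Full recursive replication stream; this is the pipeline that dies
  # with "cannot receive: invalid stream (malformed nvlist)".
  zfs send -R "tank@${NOW}" | zfs recv -vFd slave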
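
The export/import regression from Thomas's second mail should also be
reproducible without risking a real slice like ad0s1d; a file-backed
md(4) device works just as well for a throwaway pool. A minimal
sketch (the image path and md unit number are made up):

  #!/bin/sh
  # Reproduce the export/import failure on a throwaway file-backed
  # disk instead of a real slice. File path and md unit are arbitrary.
  truncate -s 256m /tmp/zpool-test.img
  mdconfig -a -t vnode -f /tmp/zpool-test.img -u 9

  zpool create testpool md9
  zpool export testpool
  # On the broken build this fails with:
  # cannot import 'testpool': no such pool available
  zpool import testpool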
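
Finally, Stefan's trick of saving the old userland binaries before
installworld is worth copying when trying to separate a userland
regression from a kernel one. A minimal sketch, assuming the stock
/sbin paths (the thread only shows the resulting zpool.old, not how it
was made):

  #!/bin/sh
  # Before installworld: keep the current ZFS userland around so old
  # and new binaries can be compared against the same kernel later.
  for bin in /sbin/zpool /sbin/zfs; do
      cp -p "${bin}" "${bin}.old"
  done

  # After installworld: confirm the installed binary actually changed.
  md5 /sbin/zpool /sbin/zpool.old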