> Hi,
> As a start you can use these in /boot/loader.conf to prevent the confusion about gptid or disk_ident. I disabled gptid at my computer, but if
> I understand correctly, you would like to disable disk_ident. For ZFS it should not matter which one you use.
>
> $ sysctl kern.geom.label
> kern.geom.label.disk_ident.enable: 1
> kern.geom.label.gptid.enable: 0
> kern.geom.label.gpt.enable: 1
> kern.geom.label.ufs.enable: 1
> kern.geom.label.ufsid.enable: 1
> kern.geom.label.reiserfs.enable: 1
> kern.geom.label.ntfs.enable: 1
> kern.geom.label.msdosfs.enable: 1
> kern.geom.label.iso9660.enable: 1
> kern.geom.label.ext2fs.enable: 1
> kern.geom.label.debug: 0

Thanks for that, this would probably work, but I don't understand why it changed in the first place. I know that when it occurred the disk was offline, and I think it came back online when the system was rebooted, though I'm not positive. My guess is that the scan found it under diskid before gptid, but then why is gptid listed first for the other disks? I'm just going to replace the drive with itself using gptid, because I've already wiped some data with dd (even though a scrub would probably be good enough).

> Further. Does ZFS see 14989197580381994958 and gptid/31be0527-84f0-11e6-bbbc-fcaa14edc6a6 as the same disk? Zpool replace also has an option to replace the disk 'with itself'. Just provide it one parameter like this:
>
> # zpool replace tank 14989197580381994958
> or
> # zpool replace tank gptid/31be0527-84f0-11e6-bbbc-fcaa14edc6a6
>
> Does that help?

I actually didn't realize this. However, the same error persists:

# zpool replace tank gptid/31be0527-84f0-11e6-bbbc-fcaa14edc6a6
invalid vdev specification
the following errors must be manually repaired:
/dev/gptid/31be0527-84f0-11e6-bbbc-fcaa14edc6a6 is part of active pool 'tank'

# zpool replace -f tank /dev/gptid/31be0527-84f0-11e6-bbbc-fcaa14edc6a6
invalid vdev specification
the following errors must be manually repaired:
/dev/gptid/31be0527-84f0-11e6-bbbc-fcaa14edc6a6 is part of active pool 'tank'

> Oh, while I read your mail again. You have 2 GB swap configured on the disk, so wiping 2MB at the start of the disk does not wipe the freebsd-zfs metadata of the da14p2 partition. Try wiping 3GB from the start and end of the disk and repartition it.

Thanks for pointing this out! It would probably help if the correct area on the disk were wiped. Still, it seems that labelclear isn't up to the task; I really think the force (-f) flag needs a bump in power (for both replace and labelclear). Am I misunderstanding the purpose of the labelclear command? My understanding is that it clears the label that zdb shows, for circumstances like the one I'm hitting.

# zpool labelclear -f gptid/31be0527-84f0-11e6-bbbc-fcaa14edc6a6
/dev/gptid/31be0527-84f0-11e6-bbbc-fcaa14edc6a6 is a member (ACTIVE) of pool "tank"

Apologies, I failed to mention labelclear in my original post. It gives output similar to the replace command. Since the device is offline from the pool, is it correct behavior for it to show as an (ACTIVE) member of the pool?

After wiping the correct area on the disk with dd, the replace successfully added the drive back to the pool! Thanks for pointing out my error.

Thanks for taking a look at this, Ronald and Allan!

Ultima
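
P.S. A few rough, untested sketches for the archives, in case anyone runs into the same thing.

The loader.conf change suggested above would look roughly like this, assuming the goal is to disable disk_ident labels and keep gptid (the reverse of the setup quoted above). Put these lines in /boot/loader.conf:

kern.geom.label.disk_ident.enable="0"
kern.geom.label.gptid.enable="1"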
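
The "wipe 3GB from the start and end" step could be done like this, assuming the disk is da14 as in my layout (this destroys data, so double-check the device name first):

# dd if=/dev/zero of=/dev/da14 bs=1m count=3072
# dd if=/dev/zero of=/dev/da14 bs=1m count=3072 \
    oseek=$(( $(diskinfo da14 | awk '{print $3}') / 1048576 - 3072 ))

The second command seeks to 3 GB before the end of the disk; diskinfo's third field is the media size in bytes.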
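
And one possible shape of the repartition + replace at the end (the 2 GB swap matches my layout; the new gptid has to be looked up with "glabel status" after partitioning, so treat this as a sketch rather than the exact commands I typed):

# gpart create -s gpt da14
# gpart add -t freebsd-swap -s 2g da14
# gpart add -t freebsd-zfs da14
# zpool replace tank 14989197580381994958 gptid/<new gptid of da14p2>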