USB flash drive sometimes inexplicably read-only

From: Graham Perrin <grahamperrin@gmail.com>
Date: Wed, 13 Jan 2021 19:31:12 +0000
On 12/01/2021 09:45, Johan Hendriks wrote:

> Re: zpool can not create a pool after using gdisk to prepare the device

> On 12/01/2021 07:50, Graham Perrin wrote:
>> I used gdisk(8) with a USB flash drive to:
>>
>> 1. zap (destroy) GPT data structures
>> 2. blank out the MBR
>> 3. (below) write a new GPT with a FreeBSD ZFS (A504) partition at /dev/da1p1
>>
>> ----
>>
>> root@mowa219-gjp4-8570p:~ # gdisk /dev/da1
>> GPT fdisk (gdisk) version 1.0.5
>>
>> Partition table scan:
>>   MBR: not present
>>   BSD: not present
>>   APM: not present
>>   GPT: not present
>>
>> Creating new GPT entries in memory.
>>
>> Command (? for help): n
>> Partition number (1-128, default 1):
>> First sector (34-7827358, default = 2048) or {+-}size{KMGTP}:
>> Last sector (2048-7827358, default = 7827358) or {+-}size{KMGTP}:
>> Current type is A503 (FreeBSD UFS)
>> Hex code or GUID (L to show codes, Enter = A503): A504
>> Changed type of partition to 'FreeBSD ZFS'
>>
>> Command (? for help): w
>>
>> Final checks complete. About to write GPT data. THIS WILL OVERWRITE EXISTING
>> PARTITIONS!!
>>
>> Do you want to proceed? (Y/N): y
>> OK; writing new GUID partition table (GPT) to /dev/da1.
>> Warning: The kernel may continue to use old or deleted partitions.
>> You should reboot or remove the drive.
>> The operation has completed successfully.
>> root@mowa219-gjp4-8570p:~ #
>>
>> ----
>>
>> I exported the pool that used the device at /dev/da0 (preparing for a
>> disruptive test), removed both devices, then reconnected the USB flash
>> drive.
>>
>> zpool cannot create a pool; the file system is reportedly read-only.
>> Please, why is this?
>>
>> ----
>>
>> root@mowa219-gjp4-8570p:~ # tail -n 0 -f /var/log/messages
>> Jan 12 06:44:44 mowa219-gjp4-8570p kernel: ugen0.6: <Kingston DataTraveler G2> at usbus0 (disconnected)
>> Jan 12 06:44:44 mowa219-gjp4-8570p kernel: umass0: at uhub1, port 3, addr 14 (disconnected)
>> Jan 12 06:44:44 mowa219-gjp4-8570p kernel: da0 at umass-sim0 bus 0 scbus6 target 0 lun 0
>> Jan 12 06:44:44 mowa219-gjp4-8570p kernel: da0: <Kingston DataTraveler G2 1.00>  s/n 001D0F0CAABFF97115A00A15 detached
>> Jan 12 06:44:44 mowa219-gjp4-8570p kernel: (da0:umass-sim0:0:0:0): Periph destroyed
>> Jan 12 06:44:44 mowa219-gjp4-8570p kernel: umass0: detached
>> Jan 12 06:44:48 mowa219-gjp4-8570p kernel: ugen0.6: <Kingston DataTraveler G2> at usbus0
>> Jan 12 06:44:48 mowa219-gjp4-8570p kernel: umass0 on uhub1
>> Jan 12 06:44:48 mowa219-gjp4-8570p kernel: umass0: <Kingston DataTraveler G2, class 0/0, rev 2.00/1.00, addr 15> on usbus0
>> Jan 12 06:44:48 mowa219-gjp4-8570p kernel: umass0:  SCSI over Bulk-Only; quirks = 0xc100
>> Jan 12 06:44:48 mowa219-gjp4-8570p kernel: umass0:6:0: Attached to scbus6
>> Jan 12 06:44:48 mowa219-gjp4-8570p kernel: da0 at umass-sim0 bus 0 scbus6 target 0 lun 0
>> Jan 12 06:44:48 mowa219-gjp4-8570p kernel: da0: <Kingston DataTraveler G2 1.00> Removable Direct Access SCSI-2 device
>> Jan 12 06:44:48 mowa219-gjp4-8570p kernel: da0: Serial Number 001D0F0CAABFF97115A00A15
>> Jan 12 06:44:48 mowa219-gjp4-8570p kernel: da0: 40.000MB/s transfers
>> Jan 12 06:44:48 mowa219-gjp4-8570p kernel: da0: 3821MB (7827392 512 byte sectors)
>> Jan 12 06:44:48 mowa219-gjp4-8570p kernel: da0: quirks=0x2<NO_6_BYTE>
>> ^C
>> root@mowa219-gjp4-8570p:~ # lsblk da0
>> DEVICE         MAJ:MIN SIZE TYPE LABEL MOUNT
>> da0              1:247 3.7G GPT - -
>>   <FREE>         -:-   1.0M -                                     - -
>>   da0p1          1:248 3.7G freebsd-zfs gpt/efiboot0 <ZFS>
>> root@mowa219-gjp4-8570p:~ # zpool create -m /media/sorry sorry /dev/da0p1
>> cannot open '/dev/da0p1': Read-only file system
>> root@mowa219-gjp4-8570p:~ #
>>
>>
> It looks like it is mounted, or something like that.
> Check with mount(8) whether it is mounted somewhere.
>
> I always use gpart to partition disks and I never have problems:
> gpart destroy -F /dev/da0
> gpart create -s GPT /dev/da0
> gpart add -a 1M -t freebsd-zfs -l LABELNAME /dev/da0
>
> Now you can create your pool using: zpool create sorry gpt/LABELNAME
>
> This way you create your pool using a GPT label name that never
> changes, and you can use it everywhere.


Thank you.
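
The label approach makes sense: a partition created with gpart add -l
LABELNAME also appears as /dev/gpt/LABELNAME, which stays the same
whether the stick enumerates as da0 or da1.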

With the device this evening at da1, it was again reportedly read-only; 
gpart destroy failed.
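
Presumably something still held the old provider open: GEOM
write-protects a provider while another consumer (a mounted file
system, an imported pool, a tasted label) has it open, and writes then
fail as seen here. A minimal sketch of how to inspect, and (as a last
resort, assuming the stick is at /dev/da1) override the protection
with the traditional foot-shooting sysctl:

----

# show the GEOM topology, including any consumers of da1
geom -t

# show da1's partitions and access counts
geom part list da1

# dangerous: allow writes to an open provider, then undo
sysctl kern.geom.debugflags=0x10
gpart destroy -F /dev/da1
sysctl kern.geom.debugflags=0

----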

In the event, after disconnecting then reconnecting the drive, gpart
destroy succeeded, and an iso9660 'heritage' was observed.
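
The stick previously held a Kubuntu 20.04.1 live image, and destroying
the GPT evidently let GEOM taste the leftover iso9660 signature. To
erase such heritage outright (assuming the stick is at /dev/da1, and
accepting that this destroys everything on it), overwriting the first
megabyte covers the MBR, the primary GPT and the iso9660 primary
volume descriptor at offset 32 KiB; the backup GPT sits in the last 33
of the 7827392 sectors reported by the kernel:

----

dd if=/dev/zero of=/dev/da1 bs=1m count=1
dd if=/dev/zero of=/dev/da1 bs=512 oseek=7827359    # 7827392 - 33

----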

I'm now stress-testing the writable space with sysutils/stressdisk,
which fills the file system with check files and repeatedly reads them
back: <https://www.freshports.org/sysutils/stressdisk/>

----

root@mowa219-gjp4-8570p:~ # gpart destroy -F /dev/da1
gpart: geom 'da1': Read-only file system
root@mowa219-gjp4-8570p:~ # mount | grep /dev/da
root@mowa219-gjp4-8570p:~ # lsblk da1
DEVICE         MAJ:MIN SIZE TYPE LABEL MOUNT
da1              0:162 3.7G GPT - -
   <FREE>         -:-   1.0M -                                     - -
   da1p1          0:163 3.7G freebsd-zfs gpt/FreeBSD <ZFS>
root@mowa219-gjp4-8570p:~ # gpart show /dev/da1
=>     34  7827325  da1  GPT  (3.7G)
        34     2014       - free -  (1.0M)
      2048  7825311    1  freebsd-zfs  (3.7G)

root@mowa219-gjp4-8570p:~ # gpart destroy -F /dev/da1
da1 destroyed
root@mowa219-gjp4-8570p:~ # lsblk da1
DEVICE         MAJ:MIN SIZE TYPE LABEL MOUNT
da1              0:162 3.7G cd9660 iso9660/Kubuntu%2020.04.1%20LTS%20amd64 -
root@mowa219-gjp4-8570p:~ # gpart show /dev/da1
gpart: No such geom: /dev/da1.
root@mowa219-gjp4-8570p:~ # gpart create -s GPT /dev/da1
da1 created
root@mowa219-gjp4-8570p:~ # lsblk da1
DEVICE         MAJ:MIN SIZE TYPE LABEL MOUNT
da1              0:162 3.7G GPT - -
   <FREE>         -:-   3.7G -                                     - -
root@mowa219-gjp4-8570p:~ # gpart show /dev/da1
=>     40  7827312  da1  GPT  (3.7G)
        40  7827312       - free -  (3.7G)

root@mowa219-gjp4-8570p:~ # gpart create -a 1M -t freebsd-zfs -l iffy /dev/da1
gpart: illegal option -- a
…

----
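
(The -a alignment and -l label options belong to gpart's add verb, not
create; gpart(8) therefore rejects the command above with 'illegal
option'. Corrected below.)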

root@mowa219-gjp4-8570p:~ # lsblk da1
DEVICE         MAJ:MIN SIZE TYPE LABEL MOUNT
da1              0:162 3.7G GPT - -
   <FREE>         -:-   3.7G -                                     - -
root@mowa219-gjp4-8570p:~ # gpart add -a 1M -t freebsd-zfs -l iffy /dev/da1
da1p1 added
root@mowa219-gjp4-8570p:~ # zpool create sorry gpt/iffy
root@mowa219-gjp4-8570p:~ # zpool status sorry
   pool: sorry
  state: ONLINE
config:

         NAME        STATE     READ WRITE CKSUM
         sorry       ONLINE       0     0     0
           gpt/iffy  ONLINE       0     0     0

errors: No known data errors
root@mowa219-gjp4-8570p:~ # zfs unmount sorry
root@mowa219-gjp4-8570p:~ # zfs set mountpoint=/media/sorry sorry
root@mowa219-gjp4-8570p:~ # zfs mount sorry
root@mowa219-gjp4-8570p:~ # ls -dhl /media/sorry
drwxr-xr-x  2 root  wheel     2B Jan 13 19:20 /media/sorry
root@mowa219-gjp4-8570p:~ # chown grahamperrin:grahamperrin /media/sorry
root@mowa219-gjp4-8570p:~ # exit
logout
% whoami
grahamperrin
% stressdisk cycle /media/sorry
2021/01/13 19:24:45 loaded statsfile "stressdisk_stats.json"
2021/01/13 19:24:45
Bytes read:         24704 MByte (  20.46 MByte/s)
Bytes written:      10454 MByte (   8.60 MByte/s)
Errors:                 0
Elapsed time:  24.417097ms

2021/01/13 19:24:45 Removing 0 check files
2021/01/13 19:24:45 Starting round 1
2021/01/13 19:24:45 No check files - generating
2021/01/13 19:24:45 Writing file "/media/sorry/TST_0000" size 1000000000
2021/01/13 19:25:45
Bytes read:         24704 MByte (  20.46 MByte/s)
Bytes written:      10864 MByte (   8.52 MByte/s)
Errors:                 0
Elapsed time:  1m0.065825136s

2021/01/13 19:26:45
Bytes read:         24704 MByte (  20.46 MByte/s)
Bytes written:      11112 MByte (   8.32 MByte/s)
Errors:                 0
Elapsed time:  2m0.028319049s

2021/01/13 19:27:31 Writing file "/media/sorry/TST_0001" size 1000000000
2021/01/13 19:27:45
Bytes read:         24704 MByte (  20.46 MByte/s)
Bytes written:      11444 MByte (   8.20 MByte/s)
Errors:                 0
Elapsed time:  3m0.025935623s

…
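
(stressdisk's byte counts appear to be cumulative across runs, since
it loaded the earlier stressdisk_stats.json; that explains the roughly
24 GB of reads reported only milliseconds into this run.)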
Received on Wed Jan 13 2021 - 18:31:18 UTC
