Filesystem on >8k sectors

From: Ivan Voras <ivoras_at_fer.hr>
Date: Tue, 28 Sep 2004 14:10:06 +0200
If I create a device with ggatel that has a sector size > 8192, newfs 
fails like this (16k sectors):

# newfs /dev/ggate0
/dev/ggate0: 10.0MB (20480 sectors) block size 16384, fragment size 16384
         using 3 cylinder groups of 4.00MB, 256 blks, 64 inodes.
newfs: can't read old UFS1 superblock: read error from block device: 
Invalid argument
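
The "Invalid argument" looks like an alignment problem to me: the old 
UFS1 superblock sits at offset 8192 and is read as an 8192-byte chunk, 
and as far as I can tell GEOM rejects any request on the device that is 
not a whole multiple of the provider's sector size. Here is a minimal 
sketch of what I think happens (the SBLOCK_UFS1/SBLOCKSIZE constants, 
the 16k sector size and the /dev/ggate0 path are my assumptions for 
illustration, not newfs's actual code):

#include <err.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define SBLOCK_UFS1     8192    /* traditional UFS1 superblock offset */
#define SBLOCKSIZE      8192    /* size of the superblock read */
#define SECTORSIZE      16384   /* sector size ggate0 was created with */

int
main(void)
{
        char *small, *big;
        int fd;

        if ((fd = open("/dev/ggate0", O_RDONLY)) == -1)
                err(1, "open");
        if ((small = malloc(SBLOCKSIZE)) == NULL ||
            (big = malloc(SECTORSIZE)) == NULL)
                err(1, "malloc");

        /*
         * 8k read at offset 8k: neither the size nor the offset is a
         * multiple of the 16k sector size, so this should fail with
         * EINVAL on a 16k-sector provider.
         */
        if (pread(fd, small, SBLOCKSIZE, SBLOCK_UFS1) == -1)
                warn("unaligned read of %d at %d", SBLOCKSIZE, SBLOCK_UFS1);

        /* A whole-sector read at a sector boundary succeeds. */
        if (pread(fd, big, SECTORSIZE, 0) == -1)
                warn("aligned read of %d at 0", SECTORSIZE);
        else
                printf("aligned %d-byte read OK\n", SECTORSIZE);

        free(small);
        free(big);
        close(fd);
        return (0);
}

On a device created with 8k or smaller sectors both reads should go 
through, which would match the behaviour described below.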

This works fine with any smaller sector size (including, e.g., 8k sectors 
and 1-byte sectors). It seems that newfs makes a read request that is not 
a multiple of the device's sector size, which is what the sketch above 
tries to show. Note also:
- that there's no "old UFS1 superblock" on the device, as it contains junk.
- that newfs thinks there are 20480 sectors (it assumes 512-byte 
sectors), but with 16k sectors there are only 640 (see the sketch right 
after this list).
- fiddling with newfs options doesn't help.
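
For the sector count, the device already knows the right numbers; here is 
a minimal sketch of how the count could be computed from them, assuming 
only the standard disk(4) ioctls and reusing /dev/ggate0 from above:

#include <sys/types.h>
#include <sys/disk.h>
#include <sys/ioctl.h>
#include <err.h>
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

int
main(void)
{
        off_t mediasize;
        u_int sectorsize;
        int fd;

        if ((fd = open("/dev/ggate0", O_RDONLY)) == -1)
                err(1, "open");

        /*
         * Ask the provider for its real geometry instead of assuming
         * 512-byte sectors.
         */
        if (ioctl(fd, DIOCGSECTORSIZE, &sectorsize) == -1)
                err(1, "DIOCGSECTORSIZE");
        if (ioctl(fd, DIOCGMEDIASIZE, &mediasize) == -1)
                err(1, "DIOCGMEDIASIZE");

        /*
         * For the 10 MB device above: 10485760 / 16384 = 640 sectors,
         * not the 20480 that newfs prints.
         */
        printf("%u-byte sectors, %jd bytes -> %jd sectors\n",
            sectorsize, (intmax_t)mediasize,
            (intmax_t)(mediasize / sectorsize));

        close(fd);
        return (0);
}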

Is it only newfs, or can UFS/FFS in general not work on devices with large sector sizes?

This isn't ranting for its own sake: I have a neat idea for a 
ggatel-like utility that would work optimally with huge sector sizes. :)