Re: newfs_msdos and DVD-RAM

From: Kostik Belousov <kostikbel_at_gmail.com>
Date: Tue, 30 Mar 2010 11:09:16 +0300
On Tue, Mar 30, 2010 at 10:40:07AM +1100, Bruce Evans wrote:
> On Mon, 29 Mar 2010, Andriy Gapon wrote:
> 
> >...
> >I am not a FAT expert and I know to take Wikipedia with a grain of salt.
> >But please take a look at this:
> >http://en.wikipedia.org/wiki/File_Allocation_Table#Boot_Sector
> >
> >In our formula:
> >SecPerClust         *= pmp->pm_BlkPerSec;
> >we have the following parameters:
> >SecPerClust[in] - sectors per cluster
> >pm_BlkPerSec - bytes per sector divided by 512 (pm_BytesPerSec / DEV_BSIZE)
> >SecPerClust[out] - bytes per cluster divided by 512
> >
> >So we have:
> >sectors per cluster: 64
> >bytes per sector: 4096
> >
> >That Wikipedia article says: "However, the value must not be such that
> >the number of bytes per cluster becomes greater than 32 KB."
> 
> 64K works under FreeBSD, and I often do performance tests with it (it gives
> very bad performance).  It should be avoided for portability too.
> 
> >But in our case it's 256K, the same value that is passed as the 'size'
> >parameter to bread() in the crash stack trace below.
> 
> This error should be detected more cleanly.  ffs fails the mount if the
> block size exceeds 64K.  ffs can handle larger block sizes, and it is
> unfortunate that it is limited by the non-ffs parameter MAXBSIZE, but
> MAXBSIZE has been 64K and non-fuzzy for so long that the portability
> considerations for using larger values are even clearer -- larger sizes
> shouldn't be used, but 64K works almost everywhere.  I used to often do
> performance tests with block size 64K for ffs.  It gives very bad
> performance, and since there are more combinations of block sizes to
> test for ffs than for msdosfs, I stopped testing block size 64K for ffs
> long ago.
> 
> msdosfs has lots more sanity tests for its BPB than does ffs for its
> superblock.  Some of these were considered insane and removed, and there
> never seems to have been one for this.
> 
> >By the way, that 32KB limit means that the value of SecPerClust[out]
> >should never be greater than 64 and SecPerClust[in] is limited to 128,
> >so its current type must be of sufficient size to hold all allowed
> >values.
> >
> >Thus, clearly, it is a fault of the tool that formatted the media for FAT.
> >It should have picked correct values, or rejected incorrect values if
> >those were provided as overrides via command line options.
> 
> If 256K works under WinDOS, then we should try to support it too.  mav_at_
> wants to increase MAXPHYS.  I don't really believe in this, but if MAXPHYS
> is increased then it would be reasonable to increase MAXBSIZE too, but
> probably not to more than 128K.
> 
> >>fk_at_r500 /usr/crash $kgdb kernel.1/kernel.symbols vmcore.1
> >[snip]
> >>Unread portion of the kernel message buffer:
> >>panic: getblk: size(262144) > MAXBSIZE(65536)
> >[snip]
> >>#11 0xffffffff803bedfb in panic (fmt=Variable "fmt" is not available.
> >>) at /usr/src/sys/kern/kern_shutdown.c:562
> 
> BTW, why can't gdb find any variables?  They are just stack variables whose
> address is easy to find.
> 
> >>...
> >>#14 0xffffffff8042f24e in bread (vp=Variable "vp" is not available.
> >>) at /usr/src/sys/kern/vfs_bio.c:748
> 
> ... and isn't vp a variable?  Maybe the bad default -O2 is destroying
> debugging.  Kernels intended for being debugged (and that is almost all
> kernels) shouldn't be compiled with many optimizations.  Post-gcc-3, -O2
> breaks even backtraces by inlining static functions that are called only
> once.

The DWARF interpreter in the very old gdb 6.1.1 that is provided in our
tree is similarly old and buggy. I found that the latest gdbs, like 6.8,
7.1 etc., work much better, even with the slightly newer gcc 4.2.1 from
the tree.

The amd64 calling conventions do not make this easier.

Received on Tue Mar 30 2010 - 06:09:56 UTC
