Re: UFS2 filesystem and filesize limits

From: Bruce Evans <bde_at_zeta.org.au>
Date: Sat, 3 Jul 2004 17:54:46 +1000 (EST)

On Fri, 2 Jul 2004, Kenneth D. Merry wrote:

> On Fri, Jul 02, 2004 at 10:37:59 -0600, Kenneth D. Merry wrote:
> >
> > I've searched a bit on the list archives, but didn't find an obvious answer
> > so I thought I'd ask:
> >
> > What is the maximum possible size of a UFS2 filesystem?  Are there any
> > gotchas associated with going that large?
> >
> > What is the maximum possible file size on a UFS2 filesystem?

It is given by the same formula as for any ffs file system.  The main
limit is that there are only 3 levels of indirection (the deepest being
the triple indirect block), so not many more than N**3 logical blocks
can be addressed, where N = <(illogical) block size> / <size of a
logical block address>.  A logical block address is twice as large for
ffs2 (64 bits instead of 32), so N is half as large and the limit on the
maximum possible file size from triple indirection is about 2**3 = 8
times smaller for ffs2 than for ffs1.
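
To make that concrete, here is a minimal standalone sketch of the
arithmetic (my own illustration, not ffs code; it assumes the usual
16K block / 2K fragment defaults discussed below, and takes <logical
block size> to mean the 2K fragment size so that its output matches the
figures in this mail):

    #include <stdint.h>
    #include <stdio.h>

    int
    main(void)
    {
            uint64_t bsize = 16384;         /* "(illogical)" block size */
            uint64_t lbsize = 2048;         /* logical block (fragment) size */
            uint64_t n1 = bsize / 4;        /* ffs1: 32-bit block addresses */
            uint64_t n2 = bsize / 8;        /* ffs2: 64-bit block addresses */

            /* Leading N**3 term of the triple indirection limit. */
            printf("ffs1: N = %4ju -> ~%ju TB\n", (uintmax_t)n1,
                (uintmax_t)(n1 * n1 * n1 * lbsize >> 40));
            printf("ffs2: N = %4ju -> ~%ju TB\n", (uintmax_t)n2,
                (uintmax_t)(n2 * n2 * n2 * lbsize >> 40));
            return (0);
    }

This prints 128TB for ffs1 and 16TB for ffs2, i.e., the factor of 8.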

There is a secondary limit that only affects ffs1.  File sizes are
restricted by the size of a logical block address.  For ffs1 this limit
is 2**31 * <logical block size>; for ffs2 it is 2**63 * <logical block
size>.  Since file sizes (or at least file offsets) are limited to
2**63-1 by off_t being 64 bits signed, the ffs2 form of this limit is
irrelevant: other limits are much more restrictive.  The N**3 limit is
also much more restrictive for non-preposterous values of <block size>.
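
Spelled out with the 2K fragment size used below: 2**63 * 2K = 2**74
bytes, while a signed 64-bit off_t tops out at 2**63 - 1 bytes, so off_t
gives out about 11 binary orders of magnitude earlier.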

The ffs1 limit is further reduced by an off-by-1 error in its 2**31
limit (making it 2**30 * <logical block size>) and by an overflow bug
(overflow at 1TB).

The default block sizes of 16K/2K combined with ffs2's 64-bit lba's give
N = 2048 and N**3 * <logical block size> = 16TB.  The actual limit is
a little larger (add the lower-order terms N**2 + N + C -- these are
dominated by the N**3 term).
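
For the record (taking C to be the 12 direct block pointers in the
inode): the lower-order terms contribute (N**2 + N + 12) * 2K ~=
(2**22 + 2**11 + 12) * 2**11 bytes ~= 8GB, which is indeed noise next
to 16TB.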

The corresponding limits for ffs1 are:
- 128TB (N**3 term)
- 4TB (2**31 term)
- 2TB (2**30 term)
- 1TB (overflow bug)
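
With 2K fragments those numbers fall out directly: N**3 * 2K = 2**36 *
2**11 = 2**47 = 128TB; 2**31 * 2K = 2**42 = 4TB; and 2**30 * 2K = 2**41
= 2TB.  The 1TB figure is consistent with a 32-bit count of 512-byte
sectors wrapping at 2**31 (2**31 * 512 = 2**40 = 1TB), though that
reading of the overflow bug is a guess, not something derived above.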

> Evidently there are some limits.  I can create an 11TB filesystem, but I can
> only put 714GB or so on it:
>
> # dd if=/dev/zero of=bigfile bs=1m
>
> /bigdisk: write failed, filesystem is full
> dd: bigfile: No space left on device
> 730368+0 records in
> 730367+0 records out
> 765845307392 bytes transferred in 11211.369691 secs (68309701 bytes/sec)

The error isn't EFBIG, so it is apparently unrelated to file sizes.
More likely there are bugs in the block allocator for such large file
systems.  Your other debugging output is consistent with this [writing
of additional big files fails much sooner, but not immediately].

Look for overflow near the freespace() calculations.
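
To illustrate the class of bug to look for (an illustration only -- the
names and the exact spot are guesses, not the real freespace() code):
once the fragment count of an 11TB file system no longer fits in 32
bits, any 32-bit intermediate in the free-space accounting silently
computes with a wrapped value:

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    int
    main(void)
    {
            /* ~11TB of 2K fragments, as in the dd test above. */
            int64_t nfrags = (int64_t)11 << 29;     /* 5905580032 frags */
            /* Conversion is implementation-defined; wraps on common machines. */
            int32_t wrapped = (int32_t)nfrags;

            printf("real fragment count: %" PRId64 "\n", nfrags);
            printf("32-bit view:         %" PRId32 "\n", wrapped);
            printf("apparent size:       ~%" PRId64 " GB\n",
                ((int64_t)wrapped << 11) >> 30);
            return (0);
    }

Here the wrapped value still looks positive (~3TB); slightly different
sizes produce a negative count, at which point freespace()-style checks
report a full file system long before it actually is.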

Bruce