Re: fun with df..

From: Bruce Evans <bde_at_zeta.org.au>
Date: Thu, 15 Jan 2004 05:39:50 +1100 (EST)
On Wed, 14 Jan 2004, Wilko Bulte wrote:

> On Wed, Jan 14, 2004 at 10:58:35AM +1100, Bruce Evans wrote:
> > On Tue, 13 Jan 2004, Nate Lawson wrote:
> >
> > > On Mon, 12 Jan 2004, Wilko Bulte wrote:
> > > > My laptop just presented me with a funny one:
> > > >
> > > > wkb@chuck ~: df
> > > > Filesystem  1M-blocks Used             Avail Capacity  Mounted on
> > > > /dev/ad0s2g      4032 3842 36028797018963835   104%    /usr
> > > > /dev/ad0s2e        62    6                51    12%    /var
> > > >
> > > > ....
> > > >
> > > > wkb@chuck ~: df -k
> > > > Filesystem  1K-blocks    Used   Avail Capacity  Mounted on
> > > > /dev/ad0s2g   4129310 3934638 -135672   104%    /usr
> > > >
> > > > Oldish 5.x- (Dec 17)
> > >
> > > Note the M/K flags.  Someone is probably using an unsigned for the M
> > > printing and a (correct) signed for the K printing.
> >
> > I note that there is no -m flag.  The 1M blocks apparently come from
> > statfs(), but that shouldn't happen.
>
> There is an export BLOCKSIZE=M instead.

Hmm.  I can't see how negative block counts can ever be printed correctly
except by df -h.  (I don't use most of the statfs changes since they give
utilities that are not backwards compatible, so I can't test this easily.)

Negative block counts are misprinted here:

% 	if (hflag) {
% 		prthuman(sfsp, used);

[This works OK, since it converts everything to floating point before
scaling and there are no sign extension bugs for floating point.]

% 	} else {
% 		(void)printf(" %*jd %*jd %*jd",
% 		  (u_int)mwp->total,
% 		  (intmax_t)fsbtoblk(sfsp->f_blocks, sfsp->f_bsize, blocksize),
% 		  (u_int)mwp->used,
% 		  (intmax_t)fsbtoblk(used, sfsp->f_bsize, blocksize),
% 	          (u_int)mwp->avail,
% 	          (intmax_t)fsbtoblk(sfsp->f_bavail, sfsp->f_bsize, blocksize));
 		            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

This implements sign extension and overflow bugs by multiplying or
dividing the (signed) block count f_bavail by a scale factor of type
uint64_t.  Small negative block counts always get converted to values
just below 2^64 (i.e., near UINT64_MAX) and are then messed up further
by the scaling.
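
For reference, fsbtoblk() in bin/df/df.c is a macro roughly of the shape
below (a reconstruction, not the verbatim source).  The point is that the
scale factor is computed from the uint64_t f_bsize, so the whole
expression is evaluated in uint64_t even when num (here f_bavail) is
negative:

/*
 * Approximate shape of fsbtoblk(): convert a count of fsbs-byte blocks
 * to bs-byte blocks.  With the new statfs, fsbs (f_bsize) is uint64_t,
 * so both (bs) / (fsbs) and (fsbs) / (bs) have type uint64_t and drag
 * num up to uint64_t as well.
 */
#define fsbtoblk(num, fsbs, bs) \
	(((fsbs) != 0 && (fsbs) < (bs)) ? \
	    (num) / ((bs) / (fsbs)) : (num) * ((fsbs) / (bs)))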

So the surprising thing in your output is that the df -k output is not
preposterous.  I think it works because the scale factor is either 1 or
a multiplier larger than 1.  E.g. (the three cases are worked through
numerically in the sketch after this list):

(1) If the scale factor is (uint64_t)1 then it makes no difference except
    to convert the small signed value to a large unsigned one.  Then the
    bogus cast to intmax_t converts back to the original value.
(2) If the scale factor is a multiplier of 16 (which I think is the usual
    case -- f_bsize defaults to 16K and blocksize defaults to 1K), then
    multiplication of (2^64 - epsilon) by 16 overflows 16 times and gives
    (2^64 - 16 * epsilon), at least on systems with 64-bit intmax_t's.  Then
    the bogus cast to intmax_t gives the correct value of (- 16 * epsilon).
(3) If the scale factor is a divisor of >= 2 (which I think always happens
    for BLOCKSIZE=1M -- blocksize is 1M and f_bsize is <= 64K), division of
    (2^64 - epsilon) doesn't overflow but gives a preposterously large
    value that is not obviously related to epsilon.  Then the bogus cast
    to intmax_t has no effect, since the preposterously large value is not
    as large as INTMAX_MAX.
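
A small stand-alone program may make the three cases concrete.  It uses
illustrative values only: epsilon is an arbitrary small block deficit,
the scale factors 16 and 64 are the ones assumed in (2) and (3), and a
64-bit uint64_t/intmax_t with the usual two's complement conversion back
to a signed type is assumed:

#include <inttypes.h>
#include <stdio.h>

int
main(void)
{
	int64_t epsilon = 8480;		/* arbitrary small deficit, in f_bsize units */
	int64_t bavail = -epsilon;	/* negative "blocks available" */

	/* (1) scale factor 1: the value survives the round trip through uint64_t. */
	printf("(1) %jd\n", (intmax_t)(bavail * (uint64_t)1));

	/* (2) multiply by 16 (f_bsize 16K, blocksize 1K): the product wraps
	 * around 2^64, so the cast back to intmax_t recovers -16 * epsilon. */
	printf("(2) %jd\n", (intmax_t)(bavail * (uint64_t)16));

	/* (3) divide by 64 (f_bsize 16K, blocksize 1M): no wrap-around, so the
	 * result stays a huge positive number just below 2^64 / 64 = 2^58. */
	printf("(3) %jd\n", (intmax_t)(bavail / (uint64_t)64));
	return (0);
}

On a typical 64-bit machine this prints -8480 for (1), -135680 (that is,
-16 * epsilon) for (2), and a value just below 2^58 for (3).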

So the surprising thing in your output (that the df -k value is not still
preposterous) is explained by (2), and the BLOCKSIZE=M value by (3).

% 	}

Bruce
Received on Wed Jan 14 2004 - 09:40:33 UTC
