Re: head -r335782 (?) broke ci.freebsd.org's FreeBSD-head-amd64-gcc build (lib32 part of build)

From: John Baldwin <jhb_at_FreeBSD.org>
Date: Fri, 29 Jun 2018 13:26:03 -0700
On 6/28/18 7:54 PM, Mark Millard wrote:
> On 2018-Jun-28, at 6:04 PM, Mark Millard <marklmi at yahoo.com> wrote:
> 
>> On 2018-Jun-28, at 5:39 PM, Mark Millard <marklmi at yahoo.com> wrote:
>>
>>> [ ci.freebsd.org jumped from -r335773 (built) to -r335784 (failed)
>>> for FreeBSD-head-amd64-gcc. It looked to me like the most likely
>>> breaking-change was the following but I've not tried personal
>>> builds to confirm.
>>> ]

So this is a bit complicated and I'm not sure what the correct fix is.

What is happening is that the <float.h> shipped with GCC is now being used
after this change instead of sys/x86/include/float.h.  A sledgehammer approach
would be to remove float.h from the GCC package (we currently don't install
the float.h for the base system clang either).  However, looking at this
in more detail, it seems that x86/include/float.h is also busted in some
ways.
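
A quick way to confirm which float.h is winning is to look at the
preprocessor's line markers; float_inc.c here is just a hypothetical
one-line test file:

# echo '#include <float.h>' > float_inc.c
# x86_64-unknown-freebsd12.0-gcc -E -m32 float_inc.c | grep 'float\.h'

The # 1 "..." markers in the output name the header that was actually
pulled in, so you can see whether it came from the GCC package's include
directory or from /usr/include.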

First, I don't understand how the #error is happening.  The GCC float.h
defines LDBL_MAX_EXP to the __LDBL_MAX_EXP__ builtin, which is 16384, just
like the x86 float.h:

# x86_64-unknown-freebsd12.0-gcc -dM -E empty.c -m32 | grep LDBL_MAX_EXP
#define __LDBL_MAX_EXP__ 16384

I even hacked catrigl.c to add the following lines before the #error
check:

LDBL_MAX_EXP_ = LDBL_MAX_EXP
LDBL_MANT_DIG_ = LDBL_MANT_DIG

#if LDBL_MAX_EXP != 0x4000
#error "Unsupported long double format"
#endif

And the -E output is:

LDBL_MAX_EXP_ = 16384
LDBL_MANT_DIG_ = 53

# 51 "/zoo/jhb/zoo/jhb/git/freebsd/lib/msun/src/catrigl.c:93:2: error: #error "U
nsupported long double format"
 #error "Unsupported long double format"
  ^~~~~

Yet clearly 16384 == 0x4000, assuming the preprocessor is doing a numeric
comparison (which it must be, since the x86 float.h uses '16384', not
'0x4000', as the value).
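
For what it's worth, #if arithmetic is defined to compare values, not
spellings, so the decimal-vs-hex difference can't be the culprit.  A
standalone sanity check (ldbl_check.c is hypothetical, just to take
catrigl.c out of the picture):

#if 16384 != 0x4000
#error "cannot happen: #if compares numeric values, not spellings"
#endif

#include <float.h>
#if LDBL_MAX_EXP != 0x4000
#error "Unsupported long double format"
#endif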

However, LDBL_MANT_DIG of 53 is a bit more fun.  We have a comment about the
initial FPU control word in sys/amd64/include/fpu.h that reads thus:

/*
 * The hardware default control word for i387's and later coprocessors is
 * 0x37F, giving:
 *
 *	round to nearest
 *	64-bit precision
 *	all exceptions masked.
 *
 * FreeBSD/i386 uses 53 bit precision for things like fadd/fsub/fsqrt etc
 * because of the difference between memory and fpu register stack arguments.
 * If its using an intermediate fpu register, it has 80/64 bits to work
 * with.  If it uses memory, it has 64/53 bits to work with.  However,
 * gcc is aware of this and goes to a fair bit of trouble to make the
 * best use of it.
 *
 * This is mostly academic for AMD64, because the ABI prefers the use of
 * SSE2 based math.  For FreeBSD/amd64, we go with the default settings.
 */
#define	__INITIAL_FPUCW__	0x037F
#define	__INITIAL_FPUCW_I386__	0x127F
#define	__INITIAL_NPXCW__	__INITIAL_FPUCW_I386__
#define	__INITIAL_MXCSR__	0x1F80
#define	__INITIAL_MXCSR_MASK__	0xFFBF
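
As a sanity check, the control word is easy to dump at run time (a
sketch, assuming an x86 target and GCC-style inline asm; the PC field
in bits 8-9 selects 24-, 53-, or 64-bit precision):

#include <stdio.h>

int
main(void)
{
	unsigned short cw;

	/* fnstcw stores the current x87 control word */
	__asm__("fnstcw %0" : "=m" (cw));

	/* PC field: 0 = 24-bit, 2 = 53-bit, 3 = 64-bit */
	printf("x87 control word: 0x%04x, PC = %u\n",
	    cw, (cw >> 8) & 0x3);
	return (0);
}

Per the defines above, this should print PC = 3 (64-bit) on amd64 and
PC = 2 (53-bit) on i386.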

GCC is indeed aware of this in gcc/config/i386/freebsd.h, which results in
__LDBL_MANT_DIG__ being set to 53 instead of 64:

/* FreeBSD sets the rounding precision of the FPU to 53 bits.  Let the
   compiler get the contents of <float.h> and std::numeric_limits correct.  */
#undef TARGET_96_ROUND_53_LONG_DOUBLE
#define TARGET_96_ROUND_53_LONG_DOUBLE (!TARGET_64BIT)

clang seems unaware of this, as it reports the same values for
LDBL_MIN/MAX on both amd64 and i386 (values that match GCC's for amd64
but not for i386):

# cc -dM -E empty.c | egrep 'LDBL_(MIN|MAX)__'
#define __LDBL_MAX__ 1.18973149535723176502e+4932L
#define __LDBL_MIN__ 3.36210314311209350626e-4932L
# cc -dM -E empty.c -m32 | egrep 'LDBL_(MIN|MAX)__'
#define __LDBL_MAX__ 1.18973149535723176502e+4932L
#define __LDBL_MIN__ 3.36210314311209350626e-4932L
# x86_64-unknown-freebsd12.0-gcc -dM -E empty.c | egrep 'LDBL_(MIN|MAX)__'
#define __LDBL_MAX__ 1.18973149535723176502e+4932L
#define __LDBL_MIN__ 3.36210314311209350626e-4932L
# x86_64-unknown-freebsd12.0-gcc -dM -E empty.c -m32 | egrep 'LDBL_(MIN|MAX)__'
#define __LDBL_MAX__ 1.1897314953572316e+4932L
#define __LDBL_MIN__ 3.3621031431120935e-4932L

The x86/include/float.h header, though, reports MIN/MAX values somewhere
in between the two sets of values above for both amd64 and i386, while
reporting an LDBL_MANT_DIG of 64:

#define LDBL_MANT_DIG	64
#define LDBL_MIN	3.3621031431120935063E-4932L
#define LDBL_MAX	1.1897314953572317650E+4932L
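
A small test program makes the disagreement easy to reproduce (a sketch;
build it with both cc and the cross-gcc, with and without -m32, and
compare the output):

#include <float.h>
#include <stdio.h>

int
main(void)
{
	/* print what the active <float.h> claims for long double */
	printf("LDBL_MANT_DIG = %d\n", LDBL_MANT_DIG);
	printf("LDBL_MAX_EXP  = %d\n", LDBL_MAX_EXP);
	printf("LDBL_MIN      = %.20Le\n", LDBL_MIN);
	printf("LDBL_MAX      = %.20Le\n", LDBL_MAX);
	return (0);
}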

I guess for now I will remove float.h from the amd64-gcc pkg-plist, but we
should really be fixing our tree to work with compiler-provided language
headers when at all possible.  It's not clear to me whether amd64 should be
using the compiler-provided values of things like LDBL_MIN/MAX either, BTW.

-- 
John Baldwin