Re: Use of C99 extra long double math functions after r236148

From: Rainer Hurling <rhurlin_at_gwdg.de>
Date: Thu, 26 Jul 2012 17:06:00 +0200
On 26.07.2012 03:52 (UTC+1), Bruce Evans wrote:
> On Wed, 25 Jul 2012, Rainer Hurling wrote:
>
>> On 25.07.2012 19:00 (UTC+2), Steve Kargl wrote:
>>> On Wed, Jul 25, 2012 at 06:29:18PM +0200, Rainer Hurling wrote:
>>>>
>>>> Many thanks to you three for implementing expl() with r238722 and
>>>> r238724.
>>>>
>>>> I am not a C programmer, but would like to ask if the following example
>>>> is correct and suitable as a minimal test of this new C99
>>>> function?
>>>
>>> It's not clear to me what you mean by test.  If expl() is
>>> not available in libm, then linking the code would fail.
>>> So, testing for the existence of expl() (for example,
>>> in a configure script) is as simple as
>>
>> Sorry for not being clear enough. I didn't mean testing for the
>> existence, but for some comparable output between exp() and expl(), on
>> a system with expl() available in libm.
>
> This is basically what I do to test exp() (with a few billion cases
> automatically generated and compared).  It is not sufficient for
> checking expl(), except for consistency.  (It is assumed that expl()
> is reasonably accurate.  If it is in fact less accurate than exp(),
> this tends to show up in the comparisons.)

Ahh ok, I think I understand it better now.
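
Just to be sure I got the idea, I tried to write a minimal consistency 
check along those lines (only a coarse sweep over one interval, nothing 
like your automatically generated billions of cases, and the tolerance 
of a few DBL_EPSILON is just my guess):

//-----------------------------------
#include <float.h>
#include <math.h>
#include <stdio.h>

int main(void)
{
    int i;

    /* Sweep a range of inputs and compare exp() with expl() rounded
       back to double. */
    for (i = 0; i < 1000; i++) {
        double x = -20.0 + i * (40.0 / 1000);
        double e = exp(x);
        double l = (double)expl((long double)x);

        /* Flag differences larger than a few double ULPs. */
        if (fabs(e - l) > 4.0 * DBL_EPSILON * fabs(e))
            printf("mismatch at x = %a: exp = %a, expl = %a\n", x, e, l);
    }
    return 0;
}
//-----------------------------------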

>>> #include <math.h>
>>> long double
>>> func(long double x)
>>> {
>>>    return (expl(x));
>>> }
>>>
>>>> //-----------------------------------
>>>> #include <stdio.h>
>>>> #include <math.h>
>>>>
>>>> int main(void)
>>>> {
>>>>    double c = 2.0;
>>>>    long double d = 2.0;
>>>>
>>>>    double e = exp(c);
>>>>    long double f = expl(d);
>>>>
>>>>    printf("exp(%f)  is %.*f\n",  c, 90, e);
>>>>    printf("expl(%Lf) is %.*Lf\n", d, 90, f);
>>>
>>> If you mean testing that the output is correct, then
>>> asking for 90 digits is of little use.  The following
>>> is sufficient (and may actually produce a digit or two
>>> more than is available in the number).
>>
>> Ok, I understand. I printed the 90 digits to be able to take a look at
>> the decimal places, I did not expect to get valid digits in this area.
>
> Use binary format (%a) for manual comparison.  Don't print any more
> bits than the format has.  This is DBL_MANT_DIG (53) for doubles and
> LDBL_MANT_DIG (64 on x86) for long doubles.  %a format is in nybbles
> and tends to group the bits into nybbles badly.  See below on reducing
> problems from this.  Decimal format has to print about 3 more digits
> than are really meaningful, to allow recovering the original value
> uniquely.  For manual comparison, you need to print these extra digits
> and manually round or ignore them as appropriate.  The correct number
> of extra digits is hard to determine.  For the "any" type, it is
> DECIMAL_DIG (21) on x86.  The corresponding number of normally-accurate
> decimal digits for long doubles is given by LDBL_DIG (18).  For
> floats and doubles, this corresponds to FLT_DIG (6) and DBL_DIG (15).
> Unfortunately, <float.h> doesn't define anything corresponding to
> DECIMAL_DIG for the smaller types.  21 is a lot of digits and noise
> digits take a long time to determine and ignore (it's worse on sparc64
> where DECIMAL_DIG is 36).  I usually add 2 extra digits to the number
> of normally-accurate digits.  This is sloppy.  3 is needed in some
> cases, depending on MANT_DIG and the bits in log(2) and/or log(10).

I printed more digits than the format provides because I wanted to see 
what the new function would do in this 'outside' area. Of course you are 
right that this has nothing to do with more precision, and these digits 
should not be used in subsequent calculations. That really was not my 
intention here, only curiosity.
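
Following the %a hint, this is how I would now print the values for a 
manual bit comparison (a small sketch; printing DBL_MANT_DIG and 
LDBL_MANT_DIG alongside is only a reminder of how many bits are actually 
meaningful):

//-----------------------------------
#include <float.h>
#include <math.h>
#include <stdio.h>

int main(void)
{
    double c = 2.0;
    long double d = 2.0L;

    /* Binary (%a) output: no more digits than the format has. */
    printf("exp(%f)   = %a   (%d mantissa bits)\n", c, exp(c), DBL_MANT_DIG);
    printf("expl(%Lf) = %La  (%d mantissa bits)\n", d, expl(d), LDBL_MANT_DIG);
    return 0;
}
//-----------------------------------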

>>> troutmask:fvwm:kargl[203] diff -u a.c.orig a.c
>>> --- a.c.orig    2012-07-25 09:38:31.000000000 -0700
>>> +++ a.c 2012-07-25 09:40:36.000000000 -0700
>>> @@ -1,5 +1,6 @@
>>>   #include <stdio.h>
>>>   #include <math.h>
>>> +#include <float.h>
>>>
>>>   int main(void)
>>>   {
>>> @@ -9,8 +10,8 @@
>>>     double e = exp(c);
>>>     long double f = expl(d);
>>>
>>> -  printf("exp(%f)  is %.*f\n",  c, 90, e);
>>> -  printf("expl(%Lf) is %.*Lf\n", d, 90, f);
>>> +  printf("exp(%f)  is %.*f\n",  c, DBL_DIG+2, e);
>>> +  printf("expl(%Lf) is %.*Lf\n", d, LDBL_DIG+2, f);
>>>
>>>     return 0;
>>>   }
>>
>> Thanks, I was not aware of DBL_DIG and LDBL_DIG.
>
> Steve is sloppy and adds 2 also :-).  For long doubles, it is clear that
> 3 are strictly needed, since DECIMAL_DIG is 3 more.

So I'd better use +3 here?
printf("expl(%Lf) is %.*Lf\n", d, LDBL_DIG+3, f);
                                           ^^

> For most long double functions on i386, you need to switch the rounding
> precision to 64 bits around calls to them, and also to do any operations
> on the results except printing them.  expl() is one of the few large
> functions that does the switch internally.  So the above should work
> (since it only prints), but (expl(d) + 0) should round to the default
> 53-bit precision and thus give the same result as exp(d).

Now that this rounding problem is clearer to me, I am happy that in most 
cases I can use math/R for my calculations ;-)
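
If I read the fpgetprec(3) manual page correctly, the 64-bit switch you 
describe would look roughly like this on i386 (a sketch under that 
assumption; as you say, expl() already does the switch internally, so 
this would only matter for further arithmetic on the result):

//-----------------------------------
#include <ieeefp.h>
#include <math.h>
#include <stdio.h>

int main(void)
{
    long double d = 2.0L;
    long double f;
    fp_prec_t old;

    /* Raise the i386 FPU rounding precision to 64 bits around the
       long double work. */
    old = fpgetprec();
    fpsetprec(FP_PE);

    /* With 64-bit precision set, arithmetic on the result keeps the
       extra bits instead of rounding to 53-bit precision. */
    f = expl(d) + 0.0L;

    fpsetprec(old);

    printf("expl(%Lf) = %La\n", d, f);
    return 0;
}
//-----------------------------------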

>>> If you actually want to test expl() to see if it is producing
>>> a decent result, you need a reference solution that contains
>>> a higher precision.  I use mpfr with 256 bits of precision.
>>>
>>> troutmask:fvwm:kargl[213] ./testl -V 2
>>> ULP = 0.3863
>>>    x = 2.000000000000000000e+00
>>> libm: 7.389056098930650227e+00 0x1.d8e64b8d4ddadcc4p+2
>>> mpfr: 7.389056098930650227e+00 0x1.d8e64b8d4ddadcc4p+2
>>> mpfr: 7.3890560989306502272304274605750078131803155705518\
>>>            47324087127822522573796079054e+00
>>> mpfr:
>>> 0x7.63992e35376b730ce8ee881ada2aeea11eb9ebd93c887eb59ed77977d109f148p+0
>>>
>>> The 1st 'mpfr:' line is produced after converting the results
>>> of mpfr_exp() to long double.  The 2nd 'mpfr:' line is
>>> produced by mpfr_printf() where the number of printed
>>> digits depends on the 256-bit precision.  The last 'mpfr:'
>>> line is mpfr_printf()'s hex formatting.  Unfortunately, it
>>> does not normalize the hex representation to start with
>>> '0x1.', which makes comparison somewhat difficult.
>
> Starting with 0x1 is only a good (just OK) normalization when the number
> of binary digits is 1 mod the number of bits in a nybble.  So it is good
> for doubles but bad for long doubles on x86 and for the 256-bit format.  To
> reduce this problem, promote to a common type and print all the bits of
> that.  (I don't know how to promote to 256 bits to match mpfr).  Even when
> the first hex nybble is not normalized, the bits of similar results
> line up unless the results are not actually similar or in rare half-way
> cases.
>
> Another method that I like to use with better binary formats than %a
> produced by my library is to print values twice, first rounded to
> their nominal number of binary bits (53 for doubles) and then with
> 8 extra bits (8 is a multiple of 4 to keep the nybbles lined up).
> Sometimes you have extra precision and need to see what it is.
> i386 has 11 bits of extra precision for long doubles internally,
> in all cases provided its rounding precision is switched to 64
> bits and in some other cases.  Actually, I would use 12 extra bits
> to compare 64-bit values with 53-bit ones, since one extra nybble
> provides complete info without much expansion.

Yes, that makes sense, even to me with only limited skills in this field.
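
Out of curiosity I also tried to reconstruct Steve's mpfr comparison; a 
rough sketch of what I think it does (256 bits of working precision as he 
mentioned; the exact printf formats and digit counts are my guesses), 
linked with something like -lmpfr -lgmp -lm:

//-----------------------------------
#include <stdio.h>      /* must come before mpfr.h for mpfr_printf() */
#include <math.h>
#include <mpfr.h>

int main(void)
{
    long double d = 2.0L, f = expl(d);
    mpfr_t x, y;

    /* Compute a 256-bit reference value of exp(2). */
    mpfr_init2(x, 256);
    mpfr_init2(y, 256);
    mpfr_set_ld(x, d, MPFR_RNDN);
    mpfr_exp(y, x, MPFR_RNDN);

    printf("libm: %.18Le %La\n", f, f);
    mpfr_printf("mpfr: %.18Re %Ra\n", y, y);  /* same digit counts for comparison */
    mpfr_printf("mpfr: %.80Re\n", y);         /* (nearly) the full 256-bit value */

    mpfr_clears(x, y, (mpfr_ptr) 0);
    return 0;
}
//-----------------------------------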

Many thanks for your patience and kindness in answering my beginner 
questions, and also to Steve for his earlier postings. I really appreciate 
your detailed, understandable explanations. With your help this subject 
is much clearer now.

Rainer

> Bruce