Re: Use of C99 extra long double math functions after r236148

From: Stephen Montgomery-Smith <stephen_at_missouri.edu>
Date: Sun, 08 Jul 2012 19:29:30 -0500
On 07/08/2012 06:58 PM, Steve Kargl wrote:
> On Sun, Jul 08, 2012 at 02:06:46PM -0500, Stephen Montgomery-Smith wrote:

>> So do people really work hard to get that last drop of ulp out of their
>> calculations?
>
> I know very few scientists who work hard to reduce the ULP.  Most
> have little understanding of floating point.
>
>>   Would a ulp=10 be considered unacceptable?
>
> Yes, it is unacceptable for the math library.  Remember ULPs
> propagate and can quickly grow if the initial ulp for a
> result is large.

OK.  But suppose I wanted ld80 precision.  What would be the advantage 
of using an ld80 expl function with a ulp of 1 over an ld128 expl 
function with a ulp of 10?  The error propagation in the latter case 
could not be worse than the error propagation in the former case.
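
To put numbers on that, here is a minimal sketch, assuming ld80 means 
the Intel 80-bit format (64-bit significand) and ld128 the IEEE quad 
format (113-bit significand), so that near 1.0 one ulp is 2^-63 versus 
2^-112:

#include <math.h>
#include <stdio.h>

int
main(void)
{
	/* Relative error near 1.0: 1 ulp of ld80 vs 10 ulp of ld128. */
	double ulp80 = ldexp(1.0, -63);             /* ~1.08e-19 */
	double ulp128x10 = 10.0 * ldexp(1.0, -112); /* ~1.93e-33 */

	printf("1 ulp (ld80):   %.3g\n", ulp80);
	printf("10 ulp (ld128): %.3g\n", ulp128x10);
	return (0);
}

Even off by 10 ulp, the ld128 result has roughly 14 decimal digits of 
headroom to spare before it is truncated back to ld80.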

In other words, if I were asked to write a super-fantastic expl 
function, where run time was not a problem, I would use mpfr to sum 
the Taylor series at a working precision with far more digits than I 
needed, and then just truncate away the extra digits when returning 
the answer.  And this would be guaranteed to give the correct answer 
to just above 0.5 ulp (which I assume is the best possible).
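
Something like the following is what I have in mind -- only a sketch, 
and it leans on mpfr's own mpfr_exp at an oversized working precision 
instead of a hand-rolled Taylor series (the name expl_mpfr is just for 
illustration):

#include <mpfr.h>

long double
expl_mpfr(long double x)
{
	mpfr_t t;
	long double r;

	mpfr_init2(t, 256);            /* way more bits than ld80/ld128 */
	mpfr_set_ld(t, x, MPFR_RNDN);
	mpfr_exp(t, t, MPFR_RNDN);     /* correctly rounded at 256 bits */
	r = mpfr_get_ld(t, MPFR_RNDN); /* one final rounding down */
	mpfr_clear(t);
	return (r);
}

The double rounding (once inside mpfr at 256 bits, once down to long 
double) is why the bound is "just above" 0.5 ulp rather than exactly 
0.5.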

From a scientist's point of view, I would think ulp is a rather 
unimportant concept.  The concepts of absolute and relative error are 
much more important to them.
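
(The two are close cousins, of course: for a p-bit significand, an 
error of k ulp is a relative error between roughly k*2^-p and 
k*2^-(p-1), so e.g. 1 ulp in double, p = 53, is a relative error of 
about 1.1e-16 to 2.2e-16.)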

The only way I can see ULP errors greatly propagating is if one is 
performing iteration-type calculations (like f(f(f(f(x))))).  This sort 
of thing is done when computing Julia sets and the like.  And in that 
case, as far as I can see, a slightly better ulp is not going to 
drastically change the limitations of whatever floating point precision 
you are using.
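
A quick way to see that limitation, as a sketch: iterate the logistic 
map x -> 4x(1-x) (a stand-in for the quadratic Julia iteration) in 
double and in long double from the same starting point, and watch the 
two orbits decorrelate after a few dozen steps:

#include <stdio.h>

int
main(void)
{
	double x = 0.3;
	long double y = x;	/* identical starting point */
	int i;

	for (i = 1; i <= 60; i++) {
		x = 4.0 * x * (1.0 - x);
		y = 4.0L * y * (1.0L - y);
		if (i % 10 == 0)
			printf("%2d  %.17g  %.17Lg\n", i, x, y);
	}
	return (0);
}

The map roughly doubles any error each step, so the 11 extra mantissa 
bits of ld80 buy about 11 more usable iterations than double; shaving 
a fraction of an ulp off each step buys essentially nothing.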
Received on Sun Jul 08 2012 - 22:29:32 UTC
