FOM: A Followup on An Earlier Point
steve at cs.clemson.edu
Thu Sep 28 15:57:21 EDT 2000
On January 28, 2000, Professor Davis replied to my comments concerning
floating point arithmetic.
For the record:
>>Here's a simple experiment. Take a
>>standard, commercially developed Fortran compiler with its trig
>>routines. You should find that sin^2(x)+cos^2(x) is greater than 1.0
>>about three percent of the time. Is this any *real* problem or do we
>>just widen 1.0 to 1.0+\delta?
Professor Martin Davis
>Are there no numerical analysts in your CS department? Because what are
>called "reals" as data types in programming languages are (of necessity)
>rational numbers, all computations with real numbers are approximations.
>For arithmetic operations the IEEE floating point standard is a beautiful
>accommodation, implemented in most compilers. For transcendental functions,
>error analysis is crucial.
>What foundational issues are there in any of this?
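Davis's observation that the "reals" of programming languages are really rationals shows up in even the simplest decimal arithmetic. A minimal illustration (my own, in Python rather than Fortran):

```python
# Binary floating point cannot represent 0.1 or 0.2 exactly, so the sum
# below is arithmetic on rational approximations, not on real numbers.
a = 0.1 + 0.2
print(a)         # prints 0.30000000000000004
print(a == 0.3)  # prints False
```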
I have run a number of tests with "standard" trigonometric identities on
several different machines. A draft of the results without any
discussion can be found at
The point I wanted to discuss originally was: what should the proper
definition of these computations be in order to carry over the idea of a
theorem (identity)? As it stands, all the systems purport to follow IEEE
754, so that doesn't help. Error analysis is not the answer,
particularly with the SGI outliers.
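The kind of test in question can be sketched as follows (my reconstruction in Python, not the original Fortran runs; the measured rate will depend on the platform's libm):

```python
# Sample x uniformly on [0, 2*pi) and count how often the computed value
# of sin(x)**2 + cos(x)**2 exceeds 1.0 in double precision.
import math
import random

def identity_excess_rate(n=100_000, seed=0):
    rng = random.Random(seed)
    over = 0
    for _ in range(n):
        x = rng.uniform(0.0, 2.0 * math.pi)
        if math.sin(x) ** 2 + math.cos(x) ** 2 > 1.0:
            over += 1
    return over / n

print(identity_excess_rate())
```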
How should we (numerical types) implement \sin(x) and \cos(x)? Should
it be to make the identities consistent (\sin^2(x) + \cos^2(x) \leq
1)? The trig identities are used to design the implementation. Isn't
there some necessity that the theorem hold on the computer?
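For what it's worth, one way to push the computed pair toward the identity by construction (a sketch of my own, not a proposal for the standard) is to renormalize the rounded results; note that the divisions themselves round, so this narrows the residual error to about an ulp rather than eliminating it:

```python
# Renormalize the computed (sin, cos) pair so that s**2 + c**2 is forced
# very close to 1, at the cost of two extra roundings from the divisions.
import math

def sin_cos_normalized(x):
    s, c = math.sin(x), math.cos(x)
    r = math.hypot(s, c)   # computed magnitude of the pair, ideally 1.0
    return s / r, c / r
```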
Steve (really "D. E.") Stevenson Assoc Prof
Department of Computer Science, Clemson, (864)656-5880.mabell
Support V&V mailing list: ivandv at cs.clemson.edu