The following is an abstract of a talk to be presented at the Fundamentals of Computer Science Study Conference, July 25-27, Washington DC. This conference is organized by the Computer Science and Telecommunications Board (CSTB), upon a request from NSF's Computer and Information Science and Engineering Directorate (CISE) to conduct a study exploring and explicating the fundamentals of computer science as a research endeavor. For more information, visit the CSTB home page, www.cstb.org, and click "Fundamentals of Computer Science" under Active Projects.
TITLE:
	Towards Robust Geometric Computation 

AUTHORS:
	Chee Yap and Kurt Mehlhorn

	Department of Computer Science
	Courant Institute of Mathematical Sciences
	New York University
	   email: yap@cs.nyu.edu

	Max-Planck Institute of Computer Science
	Saarbruecken, Germany
	   email: mehlhorn@mpi-sb.mpg.de

	June 18, 2001

ABSTRACT:

Numerical nonrobustness of programs is a well-known
and widespread problem in all of computational sciences. 
This phenomenon has consequences beyond the scientific realm:
it is a productivity issue when researchers and programmers
waste time and effort debugging nonrobust code;
it is an economic issue when it becomes the
major barrier to full automation in many industrial processes.
It is sometimes declared an unsolvable problem.

Within the computational geometry and geometric modeling
communities, many approaches have been proposed within
the last 20 years.  The Exact Geometric Computation (EGC)
approach has now emerged as one of the most promising.  Unlike other
approaches that produce problem- and algorithm-specific
solutions, EGC is quite general and can make a large
class of algorithms robust.  Just by using special libraries,
it is now possible for any programmer
to routinely write completely robust programs for a large
class of problems.  These programs include practically all the
algorithms described in standard textbooks in Computational Geometry.  
There are two such libraries at present: LEDA and Core Library. 
In essence, they offer programmers a new form of
numerical computing.  

In this paper, we will describe:

   * The intellectual basis for this capability
   * Growth of a new technology: recent advances 
   * Limitations and opportunities
   * Theoretical and practical issues raised by this
	new form of numerical computing

The history of this area of research illustrates how the insights of
computer science theory have enabled a new computing technology
to emerge, and to solve a seemingly intractable problem. 

1. INTRODUCTION

Numerical nonrobustness is a well-known phenomenon in all of
computational sciences and engineering.  This is seen in program
crashes and wrong results.  The root cause is the use of machine
arithmetic, which has fixed precision.  Today, machine arithmetic
has converged to the IEEE standard [1].  Although numerical errors
can often be tolerated, serious problems arise when such errors lead 
to invalid combinatorial structures or inconsistent state during
a program execution.  We characterize such errors as
"qualitative" (they are also called "catastrophic" errors).

This is not a mere inconvenience -- there
are major economic consequences.  One example is the problem of
mesh generation, an essential preliminary step in many modeling
and simulation processes.  Because of nonrobustness, this step cannot
be fully automated -- a human user must check and manually fix the
output of any mesh generation code.  The countless hours of researchers'
and programmers' time wasted on debugging nonrobust code are a serious
productivity issue.  It is folklore that practically all
current geometric and CAD software will break with the
appropriate choice of numerical inputs (usually the system
implementors know better than anyone else how to break their own code).
We have conducted experiments with
several well-known CAD systems and shown that they either crash or
quietly produce wrong outputs (which may lead the unwary user
to more serious errors).  We want to emphasize that in this paper,
we only treat nonrobustness that has a numerical origin; authors
have used the term "robustness" for many other system properties,
including scalability.

It is well-known that floating point computation has many pitfalls
for the unwary user (e.g., [2]).  Authors such as Forrest [3] and
Hoffmann [4] have noted that numerical errors are particularly insidious
for geometric computation.  Even basic geometric operations such
as intersecting two line segments have led researchers to produce
many non-trivial solutions, aiming to achieve robustness.   Thus it has
been suggested [3] that a generally correct algorithm for such problems
may be impossible.  To see the difficulty, we need to understand
"geometry" as comprising two components, a combinatorial part
and a numerical part.  For example, the combinatorial part of the
convex hull of a point set is the set of faces (of each dimension)
and their adjacency relations; the numerical part consists of the
coordinates of the extremal vertices.  The difficulty of geometric computation
comes from the fact that there is some consistency relationship
that must hold between the combinatorial and the numerical parts.  When
this relationship breaks down, we have catastrophic errors.
Numerical computation is involved in both the construction of new geometric 
objects (e.g., intersection point of two lines)
and in the evaluation of geometric predicates (e.g., deciding
if a point lies on a hyperplane).  Geometric predicates are
especially critical, as they determine the combinatorial
relations among objects.  Incorrect evaluation of such
predicates can lead to inconsistencies.
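
To make this concrete, here is a minimal sketch (ours, not taken from
any particular library) of the basic 2D orientation predicate evaluated
in machine doubles; every rounding step is a chance for the computed
sign, and hence a branch of the algorithm, to go wrong:

    #include <cstdio>

    // 2D orientation predicate: the sign of the determinant
    //   | qx-px  qy-py |
    //   | rx-px  ry-py |
    // Returns +1 if (p,q,r) makes a left turn, -1 for a right turn,
    // and 0 if the three points are collinear.  Each subtraction and
    // multiplication rounds, so for nearly collinear points the
    // computed sign can differ from the true sign -- exactly the kind
    // of error that corrupts the combinatorial part of a structure.
    int orient2d(double px, double py, double qx, double qy,
                 double rx, double ry) {
        double det = (qx - px) * (ry - py) - (qy - py) * (rx - px);
        return (det > 0) - (det < 0);
    }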

In general, geometric algorithms are designed under 
a Real RAM model of computation where all numerical computations are
exact. As machine arithmetic is widely used as a substitute
for this assumed exact arithmetic, numerical errors are inevitable
in implementation.  The importance of the IEEE standard is widely acknowledged.
For his contributions to this standard, Kahan was awarded the
Turing Award.  The relation between this standard and nonrobustness
should be understood: this standard does not eliminate nonrobustness.
It may reduce the frequency of occurrence, but its main contribution is  
to make software behave predictably across hardware platforms.  If a
program fails on one platform, the standard ensures that it fails
in the same way on another platform.  Numerical analysts have
extensive experience with many of these robustness issues.  But note that
some common responses of the numerically sophisticated user
do not really solve our nonrobustness problem.
One response is to say "use numerically stable algorithms".
The problem is that numerical stability only
reduces the likelihood of qualitative errors.  Another response is to
say "avoid degenerate inputs".  The problem is that some inputs to
geometric algorithms are degenerate by design (e.g., a mechanical
design will have parallel lines as a feature, not an accident).

2. APPROACHES TO NONROBUSTNESS 

The numerical nonrobustness problem has received much attention 
in the computational geometry community in the last 15 years
[5,6].  See the surveys [7,8,9] and various articles in [10].
There have been many approaches proposed to solve the problem.

It is evident that if we are able to compute without any numerical error, then
the nonrobustness problem is completely solved.  There are two issues:
first, such approaches appear to be too inefficient to be practical.
Second, for some problems, big number arithmetic does not eliminate errors
when we must compute irrational quantities such as sqrt(2).  In [8], the literature is
classified along two lines: Type I Approaches aim to make fast but non-robust
algorithms based on machine arithmetic robust; Type II Approaches aim to make
slow but exact algorithms more efficient.  In [11], another dichotomy is used:
Type A Approaches try to fix the arithmetic used by the algorithm, and Type B
Approaches try to fix the underlying geometry.  To illustrate a Type B
Approach, Sugihara and Iri proposed the Topology-Oriented Approach which
gives primacy to the combinatorial part, and tries to design algorithms that
guarantee certain combinatorial (=topological) properties, adjusting
the numerical part if necessary.  Most algorithms that try to achieve
robustness while using fixed precision arithmetic (like the IEEE standard)
can be viewed as an attempt to introduce some form of "finite resolution
geometry" (such as the grid).

3. EXACT GEOMETRIC COMPUTATION

This article is about a particular Type A Approach called "Exact Geometric
Computation" (EGC) [12,13,14].  The name "EGC" emphasizes that the exactness
is in the geometry, not in the arithmetic.  Thus an EGC algorithm always
computes the correct combinatorial part of a geometric structure.  The
numerical part can be approximate, as long as it is consistent with the
combinatorial part.  In principle, the prescription of EGC is extremely easy
to understand: simply ensure that all predicates in a geometric algorithm
are evaluated exactly.  But why does this arithmetic prescription say anything
about geometry?  This is because predicate evaluations determine the different
possible execution paths, each leading to a distinct combinatorial structure.  
If the branches are correctly taken, the final combinatorial structure is
the exact one.  Consistency of this structure is just a consequence of
exactness.  It should be noted that some authors have tried to ensure
consistency without exactness, but these have not been very successful
(leading to intractable problems equivalent to theorem proving in geometry).
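
The role of branching deserves an illustration.  In the sketch below
(ours, reusing the exact predicate from Section 2), the combinatorial
answer is determined entirely by predicate signs; if the signs are
exact, the answer is exact, even though no new numerical data is ever
constructed:

    #include <cstdint>

    // The exact predicate from the sketch in Section 2.
    int orient2d_exact(int32_t, int32_t, int32_t, int32_t,
                       int32_t, int32_t);

    // Point-in-triangle test for a counterclockwise triangle (a,b,c).
    // The result depends only on three predicate signs, i.e., on which
    // execution path is taken.  With exact predicates every branch is
    // taken correctly, so the output is always consistent.
    bool inside_triangle(int32_t ax, int32_t ay, int32_t bx, int32_t by,
                         int32_t cx, int32_t cy, int32_t px, int32_t py) {
        int s1 = orient2d_exact(ax, ay, bx, by, px, py);
        int s2 = orient2d_exact(bx, by, cx, cy, px, py);
        int s3 = orient2d_exact(cx, cy, ax, ay, px, py);
        return s1 >= 0 && s2 >= 0 && s3 >= 0;  // inside or on the boundary
    }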

The EGC Approach represents a significant relaxation from the naive notion
of numerical exactness.  In fact it tells us exactly what precision is needed
at any step.  It has led to an emerging body of new techniques for
adaptive precision (precision-sensitive or precision-driven evaluation,
lazy evaluation, floating-point filters, etc.).  These techniques have been
specialized for important predicates such as the sign of determinants
or those commonly found in particular algorithms.
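
As an illustration, here is a semi-static floating-point filter for the
orientation predicate (a sketch of ours with a deliberately generous
error constant; tight bounds for this determinant are derived in the
filter literature):

    #include <cfloat>
    #include <cmath>
    #include <cstdint>

    // Exact fallback, e.g., the int64_t predicate from Section 2.
    int orient2d_exact(int32_t, int32_t, int32_t, int32_t,
                       int32_t, int32_t);

    // Evaluate the determinant in doubles, bound the rounding error,
    // and pay for exact arithmetic only when the bound cannot certify
    // the sign.  On non-degenerate inputs the cheap path almost always
    // succeeds, which is why filters are so effective in practice.
    int orient2d_filtered(int32_t px, int32_t py, int32_t qx, int32_t qy,
                          int32_t rx, int32_t ry) {
        double t1 = ((double)qx - px) * ((double)ry - py);
        double t2 = ((double)qy - py) * ((double)rx - px);
        double det = t1 - t2;
        double err = 8 * DBL_EPSILON * (std::fabs(t1) + std::fabs(t2));
        if (det >  err) return +1;   // sign certified by the filter
        if (det < -err) return -1;
        return orient2d_exact(px, py, qx, qy, rx, ry);  // rare exact path
    }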

Most important from a practical viewpoint, it makes possible the kind of
library solution [15,16] to nonrobustness described in the abstract.
As the title of this paper suggests, for a large class
of problems, it now appears that the construction of robust
geometric algorithms is within reach of every programmer, not
just the experts in the field of robustness.  We expect the technology
to support this capability to continue to develop, alongside the theory.
Indeed, we see many parallels between this robustness technology and
compiler technology.  Just as programmers are allowed to write totally
inefficient programs, our robustness library does not prevent users
from writing code which is very inefficient even when better solutions
are possible.  However, the development of smart compilers can eliminate
most of the common mistakes in using our libraries.  The
major libraries, LEDA and CGAL, are both committed to the EGC
approach.  Their underlying EGC capabilities come from the number type
leda_real [13,15].  Since these libraries have a relatively large user base,
and have been adopted by some major commercial companies, they give us
some real world experience of EGC.
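
To give the flavor of the library interface, here is a hedged sketch
using the Expr number type of the Core Library [16] (leda_real plays
the same role in LEDA); the header name and API details here are our
assumptions and may differ across library versions:

    #include <cstdio>
    #include "CORE/CORE.h"   // assumed header name, following [16]

    int main() {
        // Expr records the expression tree and decides every comparison
        // exactly, approximating only as far as root bounds demand.
        CORE::Expr x = sqrt(CORE::Expr(2));
        // In double arithmetic x*x - 2 is a tiny nonzero value and the
        // test below could go either way; under EGC it is decided
        // exactly, so this prints "equal".
        std::printf("%s\n", (x * x == 2) ? "equal" : "not equal");
        return 0;
    }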

We have alluded to the class of problems that can be solved by EGC,
and these subsume the bulk of problems treated in the computational geometry
literature and textbooks.  There is a nice characterization
of this class, namely the class of all algebraic problems [12].  
The theory underlying EGC for algebraic problems is the existence of root
bounds.  The basic idea is that if an algebraic quantity is non-zero,
then it can be bounded away from zero by a constructively given bound.
Research in providing such bounds has already yielded some impressive
gains in practical efficiency. 
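
A toy instance of the idea (our own sketch, not taken from the cited
work): for integers a, b < 2^40, if sqrt(a) - sqrt(b) is nonzero then
its magnitude is |a-b|/(sqrt(a)+sqrt(b)) >= 2^-21, while IEEE double
evaluation errs by far less than half that bound.  So a single
floating-point evaluation already decides the sign exactly:

    #include <cassert>
    #include <cmath>
    #include <cstdint>

    // Sign of sqrt(a) - sqrt(b), decided exactly via a root bound.
    int sign_sqrt_diff(uint64_t a, uint64_t b) {
        assert(a < (1ULL << 40) && b < (1ULL << 40));
        const double root_bound = 1.0 / (1 << 21);  // 2^-21, from above
        double e = std::sqrt((double)a) - std::sqrt((double)b);
        if (e >  root_bound / 2) return +1;  // provably positive
        if (e < -root_bound / 2) return -1;  // provably negative
        return 0;  // within the root bound, hence exactly zero
    }

For general algebraic expressions the bound is computed from the
structure of the expression, and the evaluation precision is increased
adaptively until the sign is certified or the bound is reached.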

In general, efficiency continues to be a challenge for problems of
unbounded degree [12], but for many application domains these do not arise.
Even for bounded degree problems, EGC has led to the search for new
algorithms with lower degree predicates.  The main open theoretical
questions are the "Constant Zero Problems" [11] for non-algebraic operators.
The decidability of these problems is unknown, and related to deep
questions in transcendental number theory.  Numerical and data filters
have emerged as a critical practical technology in EGC.
Filtering also has a very interesting theoretical basis -- it can be viewed
as a relaxation of the notion of program checking.  This connection has yet
to be explored. 

Another important area is geometric rounding.  This will 
grow more critical as we apply EGC techniques in large software systems.
The introduction of compiler technology [16] in EGC libraries will be
critical if we are to truly bring robustness to the masses. 
The treatment of non-algebraic problems will require new and relaxed
notions of EGC.  Partly in view of this, we have begun to look at
time-robustness tradeoffs.   There is the larger issue of how to construct
code that handles inconsistent inputs in a reasonable way -- this notion of
"robust code" is much harder to describe mathematically, and is best
dealt with as the problem of cleaning up dirty data.  A few papers have
begun to treat such problems.  

In short, even as EGC solves a large class of nonrobustness problems,
it opens up the opportunity for us to address many harder but
related problems. 


4. COMPUTER SCIENCE PERSPECTIVE

In conclusion, the "EGC solution to nonrobustness" represents a
story in the best of the theoretical computer science tradition.  
It has emerged from the field of Computational Geometry, which has developed
as a subfield of theoretical algorithms.  As such, the field has often been
accused of irrelevance to the larger world of practical geometric computation.
In the early 1990's, computational geometers began to address such concerns.
In implementing their algorithms, they quickly realized that their Real
RAM model posed a problem.

One response, and this was historically the favored approach, was to start
to describe algorithms for finite precision arithmetic and finite resolution
geometries (Type B Approaches).  It was thought that practitioners would 
only program algorithms using machine (IEEE) arithmetic, so why bother to
treat any other kind of model?  This is close to what practitioners have
always done: all implemented geometric algorithms introduce various
magical "epsilon constants" eps > 0, so that testing if a quantity X is zero
is replaced by testing if "|X| < eps".  This is called epsilon-tweaking as
the constants are adjusted empirically.  In effect, these constants
induce some modified kind of "fat geometry" (e.g., a point is
now a disc).  Treating such geometries has not been successful as
a general approach: besides being complicated to handle, the ersatz geometries
give up most of the properties of Euclidean geometry, so that most of our
geometric intuitions fail.  
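
A two-line example shows why epsilon-tweaking cannot yield a consistent
geometry (a minimal sketch of ours): the test |x - y| < eps is not
transitive, so "equality" of coordinates stops being an equivalence
relation, and no choice of eps removes the inconsistency -- it only
moves it around:

    #include <cmath>
    #include <cstdio>

    const double eps = 1e-9;
    bool approx_eq(double x, double y) { return std::fabs(x - y) < eps; }

    int main() {
        double a = 0.0, b = 0.7e-9, c = 1.4e-9;
        // a ~ b and b ~ c, yet a !~ c: prints "1 1 0".
        std::printf("%d %d %d\n", approx_eq(a, b), approx_eq(b, c),
                    approx_eq(a, c));
        return 0;
    }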

But another school of thought refused to give up
the mathematically clear model of Euclidean geometry.  They sought to bring
the insights of theoretical computer science to control the beast of exact
computation.  Informed by complexity theory, they carefully navigated
between the despair of "fixed precision computation" (where nonrobustness truly
appears to be unsolvable in general) and the threats of intractability
(such as theorem proving).  What has emerged is both a true product of computer
science and of the theoretical perspective.


REFERENCES
==========
[1] D. Goldberg, "What every computer scientist should know about
   floating-point arithmetic", ACM Computing Surveys, 23(1):5--48, 1991.

[2] Nicholas J. Higham, "Accuracy and stability of numerical algorithms",
   SIAM, Philadelphia, 1996.

[3] A. R. Forrest, "Computational Geometry and Software Engineering:
   Towards a Geometric Computing Environment" in "Techniques for Computer
   Graphics" (eds. D. F. Rogers and R. A. Earnshaw) Springer, pp.23--37, 1987.

[4] Christoph M. Hoffmann, "The problems of accuracy and robustness in
   geometric computation", IEEE Computer, 22(3):31--42, 1989.

[5] Roberto Tamassia, P. Agarwal, N. Amato, D. Chen, D. Dobkin, R. Drysdale,
   S. Fortune, M. Goodrich, J. Hershberger, J. O'Rourke, F. Preparata,
   J.-R. Sack, S. Suri, I. Tollis, J. Vitter, S. Whitesides, "Strategic
   Directions in Computational Geometry: Working Group Report", ACM Computing
   Surveys, 28(4), 1996.

[6] Bernard Chazelle et al, "The Computational Geometry Impact Task Force
   Report: an executive summary", in "Applied Computational Geometry:
   Towards Geometric Engineering" (eds. Ming C. Lin and D. Manocha),
   Lecture Notes in Computer Science No. 1148, Springer, pp.59--66, 1996.

[7] Stefan Schirra, "Precision and Robustness in Geometric Computations",
   in "Algorithmic Foundations of Geographic Information Systems",
   (eds. M. van Kreveld, J. Nievergelt, T. Roos, and P. Widmayer),
   Lecture Notes in Computer Science No. 1340, Springer, pp.255--287, 1997.

[8] Chee K. Yap, "Robust geometric computation", in
   "Handbook of Discrete and Computational Geometry" (eds. Jacob E. Goodman
   and Joseph O'Rourke), CRC Press LLC, Boca Raton, FL, pp.653--668, 1997.

[9] N.M. Patrikalakis, W. Cho, C.-Y. Hu, T. Maekawa, E.C. Sherbrooke and
   J. Zhou, "Towards Robust Geometric Modelers, 1994 Progress Report", Proc.
   1995 NSF Design and Manufacturing Grantees Conference, pp. 139-140, 1995.

[10] M. C. Lin and D. Manocha, "Applied Computational Geometry:
   towards Geometric Engineering", Lecture Notes in Computer Science,
   State-of-the-Art Survey.  Selected papers from the Workshop on
   Applied Computation Geometry (WACG'96).  Springer, 1996.

[11] C. Li and C. Yap, "Recent Progress in Exact Geometric Computation" in
   "Proc. DIMACS Workshop on Algorithmic and Quantitative Aspects of Real
   Algebraic Geometry in Mathematics and Computer Science", March 12 - 16,
   2001.  Available from http://cs.nyu.edu/exact/doc/.

[12] C. Yap, "Towards Exact Geometric Computation", Computational Geometry:
   Theory and Applications, 7:3--23, 1997.  Also, Proc. 5th Canadian Conf.
   on Computational Geometry (invited), 1993.

[13] C. Burnikel, J. Könnemann, K. Mehlhorn, S. Näher, S. Schirra
   and C. Uhrig, "Exact Geometric Computation in LEDA", Proc. 11th ACM Symp.
   Comp. Geom., pp.C18--C19, 1995.

[14] C. Yap and T. Dubé, "The Exact Computation Paradigm" in "Computing
   in Euclidean Geometry", 2nd edition, (eds. D.-Z. Du and F. K. Hwang),
   World Scientific Press, Singapore.  pp. 452--486, 1995.

[15] C. Burnikel, R. Fleischer, K. Mehlhorn and S. Schirra, "Exact Geometric
   Computation Made Easy", Proc. 15th ACM Symp. Comp. Geom., pp.341--350, 1999.

[16] V. Karamcheti, C. Li, I. Pechtchanski and C. Yap, "A Core Library for
   Robust Numeric and Geometric Computation", Proc. 15th ACM Symp. Comp. Geom.,
   pp.351--359, 1999.