The Core Library



We introduce the phenomenon of numerical non-robustness and describe our Exact Geometric Computation (EGC) solution. Finally, we discuss current research that extends EGC to a softer version with much wider applicability.

Motivation: Numerical Non-Robustness

  • Numerical non-robustness is the informal tendency of computer programs to behave unpredictably, and often catastrophically, depending on the numerical values of their input parameters.
  • In most numerical computation, numerical quantities are approximated, so quantitative errors are expected and usually benign. However, such quantitative errors often lead to more drastic errors known as "catastrophic errors" or what we prefer to call qualitative errors. Such errors occur when a program enters some unanticipated state with no easy or graceful means of recovery. Colloquially, we say the program "crashed".
  • Non-robustness is especially notorious in so-called "geometric" algorithms. But what makes a computation "geometric"? It is not the mere presence of numerical data. It turns out that an adequate explanation of "geometric computation" also leads to an appropriate solution approach.
  • We identify geometric computation as involving numerical data L that are intimately tied to combinatorial structures G under some consistency constraints. Informally, then, a geometric structure is a pair (G,L) satisfying such constraints.
  • EXAMPLE: Suppose (G,L) represents the convex hull of a set of planar points. Here G may be viewed as a graph that represents a cycle, G=(v1,v2,...,vn,v1). L may be regarded as an assignment of the vertices vi to points pi in the plane. The consistency requirement here is that these points pi must be in convex position, and be adjacent in the order indicated by the graph.
  • Suppose L' is a perturbation of the true numerical data L. We may say that the perturbation is only quantitative as long as (G,L') remains consistent. Otherwise, the perturbation has introduced qualitative error. Non-robustness of geometric algorithms arises from inconsistencies because all algorithms implicitly rely on such consistency properties.
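To see how a quantitative perturbation becomes a qualitative error, consider the standard orientation predicate on three planar points. The following is an illustrative sketch (the coordinates are chosen only to exhibit double-precision rounding, and are not from any particular application):

```python
def orient(p, q, r):
    """Sign of the determinant |q-p, r-p|: +1 left turn, -1 right turn, 0 collinear."""
    d = (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])
    return (d > 0) - (d < 0)

p, q = (0.0, 0.0), (1.0, 1.0)
r_true = (0.3, 0.3)        # lies exactly on the line through p and q
r_pert = (0.1 + 0.2, 0.3)  # 0.1 + 0.2 rounds to 0.30000000000000004

print(orient(p, q, r_true))  # 0: collinear, as intended
print(orient(p, q, r_pert))  # -1: the perturbation flips the classification
```

A perturbation of roughly 4 × 10⁻¹⁷ in one coordinate changes the discrete answer from "on the line" to "right of the line": the perturbed pair (G,L') is no longer consistent with the intended structure.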

Solution: The Exact Geometric Computation Approach

Geometric computation always involves some geometric structure D, whether we are constructing D or implicitly searching in it. By its nature, D has a combinatorial (or topological) part and a numerical part, and these two parts together must satisfy a suitable consistency relation. The approach to robustness called Exact Geometric Computation (EGC) says that non-robustness is caused by geometric inconsistency, and that the best way to ensure consistency is to compute the combinatorial part exactly.

To achieve the goals of EGC, we must first recognize that the combinatorial part is often defined by the numerical part. In an algorithm, we navigate or construct the combinatorial part by performing tests. Such tests amount to evaluating the sign of real predicates (typically polynomial functions), leading to two-way or three-way branching in the program. A typical test is evaluating the sign of a determinant. If we ensure that such tests are error-free, then the resulting structure is exact.
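As an illustration of exact sign tests driving a combinatorial construction, here is a sketch (not the Core Library's API) of a lower convex hull in which every branch is an error-free determinant sign, assuming integer input coordinates so that Python's arbitrary-precision integers make each test exact:

```python
def sign_det(ax, ay, bx, by):
    """Exact sign of the 2x2 determinant |ax ay; bx by| over the integers."""
    d = ax * by - ay * bx   # Python integers never overflow, so d is exact
    return (d > 0) - (d < 0)

def lower_hull(points):
    """Lower convex hull by the monotone-chain scan; every branch is a sign test."""
    hull = []
    for p in sorted(set(points)):
        # Pop while the turn hull[-2] -> hull[-1] -> p is not a strict left turn.
        while len(hull) >= 2 and sign_det(hull[-1][0] - hull[-2][0],
                                          hull[-1][1] - hull[-2][1],
                                          p[0] - hull[-2][0],
                                          p[1] - hull[-2][1]) <= 0:
            hull.pop()
        hull.append(p)
    return hull

print(lower_hull([(0, 0), (2, 0), (1, 1), (1, -1), (3, 2)]))
# → [(0, 0), (1, -1), (2, 0), (3, 2)]
```

Because each three-way branch is decided exactly, the output graph G (the hull cycle) is guaranteed consistent with the input points L.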

So the exactness in EGC is in the "geometry", not in the arithmetic. In contrast to "exact" arithmetic approaches, only "sufficiently accurate" arithmetic is needed. This simple fact has profound implications. In what we call the EGC mode of computation, we must answer the question: how much precision is sufficient? Many research issues in EGC amount to answering this question efficiently. More importantly, from a practical viewpoint, we can encode the EGC solution in a software library. This means that any programmer can achieve completely robust algorithms just by calling such a library! The first such system was our Real/Expr Package. Currently, there are two such EGC libraries: LEDA_real and our Core Library. The major European software library efforts, CGAL and LEDA, are also committed to this EGC approach.
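One standard answer to the "how much precision is sufficient?" question is a floating-point filter: evaluate the predicate quickly in double precision together with an error bound, and fall back to exact arithmetic only when the sign is in doubt. The following is a sketch; in particular, the error-bound constant is a deliberately crude stand-in for the rigorously derived bound a real implementation would use:

```python
from fractions import Fraction

def orient_filtered(p, q, r):
    """Sign of the orientation determinant: fast double filter, exact fallback."""
    ax, ay = q[0] - p[0], q[1] - p[1]
    bx, by = r[0] - p[0], r[1] - p[1]
    det = ax * by - ay * bx
    # Crude forward error bound on the double evaluation (assumption: a real
    # implementation derives a tight bound from a rounding-error analysis).
    err = 1e-12 * (abs(ax * by) + abs(ay * bx))
    if det > err:
        return 1
    if det < -err:
        return -1
    # Sign uncertain: redo the test exactly over the rationals.
    px, py, qx, qy, rx, ry = map(Fraction, (*p, *q, *r))
    d = (qx - px) * (ry - py) - (qy - py) * (rx - px)
    return (d > 0) - (d < 0)
```

Most calls are decided by the cheap filter; only near-degenerate inputs pay for exact arithmetic. This is the sense in which EGC needs only "sufficiently accurate" arithmetic rather than exact arithmetic throughout.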

Current Research: Beyond EGC

Since around 2009, we have been interested in going beyond EGC. We are still interested in "exactness", but allow prescribed relaxations. For example, one such concept is the notion of "resolution-exactness". Suppose we introduce a new resolution parameter, ε > 0. Intuitively, this means that we want to compute the solution up to ε accuracy; note that the combinatorial nature of the output might be affected by this ε. This can be viewed as a generalization of EGC because when ε = 0, we again have EGC. But in practice, there is a vast difference between ε = 0 and ε > 0. In particular, we can avoid algebraic computation altogether when ε > 0 --- provided we formulate the notion of resolution-exactness "correctly" (we must not "smuggle" in exactness even with the ε). We have successfully applied this notion to well-known problems in robotics, computational geometry and algebraic computation: specifically, motion planning, root isolation, curve and surface analysis, and differential equations. Such algorithms are essentially numerical in nature, and the key lies in interval methods and their generalizations. Ultimately, going beyond EGC is essential for many reasons:
  • the EGC solution is too expensive (even if possible).
  • the EGC solution may be undecidable or not known to be decidable (most transcendental problems fall under this category).
  • Finally, even when EGC solutions are feasible, they are unrealistic when the input is inherently noisy (this is the case in almost all the applications we are aware of). In robotics and the computational sciences, we face the fact that hardly any common physical constant, save the speed of light, is known to more than 8 digits of accuracy. The uncertainty in biological applications is much greater.
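As a flavor of such ε-parameterized, interval-style algorithms, here is a sketch of subdivision-based root isolation. It is a simplified illustration (it detects only sign-changing, i.e., odd-order, roots) and is not the Core Library's actual algorithm:

```python
def isolate(f, a, b, eps):
    """Return subintervals of [a, b] of width <= eps across which f changes sign.
    A resolution-exactness sketch: the output is correct up to the resolution
    eps, and for simplicity only odd-order roots are detected."""
    out, stack = [], [(a, b)]
    while stack:
        lo, hi = stack.pop()
        if f(lo) * f(hi) > 0:
            continue                  # no sign change: discard this piece
        if hi - lo <= eps:
            out.append((lo, hi))      # isolated down to resolution eps
            continue
        mid = (lo + hi) / 2
        stack += [(lo, mid), (mid, hi)]
    return sorted(out)

ivs = isolate(lambda x: x * x - 2, 0.0, 2.0, 1e-6)
# ivs is a single interval of width <= 1e-6 containing sqrt(2)
```

No algebraic number computation is involved: every step is plain numerical evaluation plus subdivision, yet the answer is exact up to the prescribed resolution ε.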