SIGGRAPH 1998 Panel:
Is Robust Geometry Possible?


Panel Summary

Computational geometry for design, modeling, animation, and visualization has been plagued by problems ranging from simple visual discrepancies (such as "cracks" between polygons) to strange program behavior, including outright crashes. Why is it so hard to correctly create and implement computer programs involving geometry, and what can be done to make geometric programs work correctly? The industrial and academic experts on this panel expect lively discussion and strong challenges from the audience regarding the nature of the problem, the limits of existing methodologies, and radical proposals for solutions. Personal experiences and practical tips will catalyze the ensuing debate.


Panel Description:

Is Robust Geometry Possible?

Graphics strongly depends upon geometric representations, as do accompanying design and analysis applications. Yet geometric computing involves accuracy issues that are more complex and difficult than simple number representation and calculation issues, because it requires maintaining constraints between sets of values, exacerbating an already serious digital representation and calculation problem. The effects of these problems range from simple visual discrepancies in displays (such as "cracks" between polygons) to strange program behavior, including outright crashes. Anyone who has tried to implement a geometry-based algorithm quickly becomes aware that there is more to it than meets the eye.

A large amount of programming time in geometric computing is spent devising ad hoc solutions to these problems, and much user time is devoted to avoiding error-prone portions of geometric programs. The problem gets even worse when erroneous geometric results emerge from research labs into the real world, such as in the manufacturing of mechanical parts used in complex everyday objects like automobiles and airplanes. There, the problems may result in lost schedule time and rework during manufacturing, or may not surface until much later, with potentially disastrous results.

Until recently, most of these issues have been privately debated among friendly colleagues but publicly swept under the rug by both academicians and practitioners. Typical geometric implementation strategies of using "fudge factors" and double precision numbers offer only limited relief. There has been little organized effort to provide methodologies either for avoiding geometric errors or for proving the correctness and accuracy of geometry-based computer programs. The precision required for engineering design and analysis is much greater than that needed for contemporary graphics resolution; however, addressing these engineering problems will benefit next-generation graphics.

This panel will bring together experts in this field from both industry and academia to discuss:
- the nature and causes of geometric computing and accuracy problems
- specific examples to illustrate the issues
- the variety of approaches being used to attack these problems
- personal experiences and practical advice on geometric programming
- perspectives on both the past and future of robust geometric computing

In addition, we hope the audience will share their own personal experiences, perspectives, geometry horror stories, and practical advice on geometric computing implementation strategies.

The main objective of the panel is to provide a forum for the SIGGRAPH community to become more aware of the issues in robust geometric computing, their importance, the variety of possible solutions available now, the need for better methodologies, and where we can go from here. We also hope to have a little fun hearing geometry horror stories from the audience as well as from the panel. This panel would be only a small step, but we would like its outcome to be a better appreciation of geometric computing and accuracy issues in the SIGGRAPH community, and we hope to interest more people in working in this important area.

The panel members have been selected because they represent a variety of different approaches and perspectives on the geometric computing problem. In addition to having some of the highest academic credentials in this field, most of them have either worked in or consulted in industrial settings solving practical problems, so we feel we have a great combination that represents both the latest research and practical experience in this field.

A panel session is the ideal forum for this topic because:

- there is a variety of alternative approaches, and we will engage in active debate about the efficacy and appropriateness of the different approaches during the session;
- many different types of attendees, from implementors to users of graphics technology across many application fields, will gain a better understanding of how this problem affects them and what they can do about it;
- the panel provides an overview more efficiently than specialized sessions;
- a major part of the panel session is audience participation; any other format would tend to discourage the diversity of the audience that could benefit from this session, reducing its value both in terms of its reach across multiple fields and the diversity of opinions expressed.


Speakers


POSITION:
Robust Computation in Rendering
Tom Duff
Pixar Animation Studios

The input of most geometric computations is a graph-like representation describing relationships among points, lines, curves, surfaces, and volumes. When the output is in the same form, as in many CAD and other geometric applications, numerical uncertainty in computations of, for example, the positions of points can cause the graph to be updated in an inconsistent manner, for example by placing intersection points in the wrong order on lines, incorrectly implying further intersections that, of course, cannot be found.

In rendering, on the other hand, we are usually converting a geometric model description into a sampled image, largely avoiding the problems that occur in geometric applications. Because in a high-quality renderer screen pixels are computed by convolving a geometrically determined image-intensity function with a sampling kernel, errors in the intensity function in regions whose size is near the floating-point quantization error will mostly be lost when averaged with the (overwhelmingly more numerous) correct values computed nearby. Of course, there is no guarantee that small uncertainties in quantities in the geometric database will not result in large erroneous areas in the image function (this can easily happen when surfaces are nearly coincident), but our experience gives us little cause to worry.

Three important areas where we encounter robustness difficulties in rendering are local illumination, global illumination, and scan-converting higher-order surfaces.

A variety of local illumination problems require transforming the coordinates of closely spaced points from model to screen space, doing some computation on them, and transforming back to model space before taking differences between them, for example to compute tangent vectors to surfaces or local sampling rates for texture maps. Often the transformations from model to screen space have large condition numbers (two ways this can easily happen are when the client specifies a near clipping plane unrealistically close to the viewpoint, or when a projection with a very narrow viewing angle is taken from a great distance), causing large uncertainty in the inverse transformations. Subtraction of quantities with large uncertainty leads to catastrophic cancellation and, often, worthless shading results.

In global illumination computations, we often find ourselves either dicing primitives in ways that may expose geometric robustness problems of the sort usually found in computational geometry or CAD applications, or performing large numbers of ray-intersection tests, as in ray tracing and Monte Carlo radiosity techniques. Ray intersections are notoriously hard to compute correctly, if only because secondary rays always start at a surface and can easily cause a spurious intersection there. The usual epsilon-test method of handling these performs badly, especially for glancing rays, which may have a legitimate intersection quite close to the spurious one. Interval arithmetic techniques, as proposed by Amanatides and Mitchell ("Some Regularization Problems in Ray Tracing," Proc. Graphics Interface '90, pp. 221-228), handle this problem very well and deserve to be more widely adopted. Another area in which interval arithmetic is useful is scan conversion of higher-order surfaces, as discussed by Snyder ("Interval analysis for computer graphics," Proc. SIGGRAPH '92, pp. 121-130) and Duff ("Interval arithmetic and recursive subdivision for implicit functions and constructive solid geometry," Proc. SIGGRAPH '92, pp. 131-138).
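To make the epsilon problem concrete before returning to intervals, here is a minimal sketch of the usual cutoff. It is illustrative only; the function name, parameters, and the particular t_min are ours, not production renderer code:

    import math

    def ray_sphere_hit(origin, direction, center, radius, t_min=1e-4):
        # Return the smallest ray parameter t > t_min at which
        # origin + t*direction hits the sphere, or None.
        # t_min is the classic "fudge factor": it discards the spurious
        # self-intersection at t ~ 0 when the ray starts on the sphere
        # itself, but a glancing secondary ray can have a legitimate hit
        # only slightly beyond t = 0, which this cutoff may also discard.
        ox = origin[0] - center[0]
        oy = origin[1] - center[1]
        oz = origin[2] - center[2]
        dx, dy, dz = direction
        a = dx * dx + dy * dy + dz * dz
        b = 2.0 * (ox * dx + oy * dy + oz * dz)
        c = ox * ox + oy * oy + oz * oz - radius * radius
        disc = b * b - 4.0 * a * c
        if disc < 0.0:
            return None                      # ray misses the sphere
        s = math.sqrt(disc)
        for t in sorted(((-b - s) / (2.0 * a), (-b + s) / (2.0 * a))):
            if t > t_min:                    # fragile for grazing rays
                return t
        return None

No single value of t_min is both safe and tight: too small, and spurious self-hits survive; too large, and legitimate grazing hits are lost.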
Interval methods produce guaranteed, 100% robust answers to numerical questions that are hard even to pose in a pointwise (non-interval) context, and they can be a powerful tool for geometric problems when algorithms are designed specifically to take advantage of the interval perspective. Retrofitting intervals into existing algorithms is mostly pointless, though many arguments against their use stem from assuming that this is the only possibility.
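The flavor of algorithms designed around intervals can be conveyed briefly. The sketch below is our own simplification, not code from the papers cited above: it classifies boxes against the implicit unit circle f(x, y) = x^2 + y^2 - 1 by recursive subdivision. A truly guaranteed version would additionally round interval endpoints outward at every operation:

    class Interval:
        # Closed interval [lo, hi]. The arithmetic here ignores directed
        # rounding, which a rigorous implementation must add.
        def __init__(self, lo, hi):
            self.lo, self.hi = lo, hi
        def __add__(self, o):
            return Interval(self.lo + o.lo, self.hi + o.hi)
        def __sub__(self, o):
            return Interval(self.lo - o.hi, self.hi - o.lo)
        def __mul__(self, o):
            p = (self.lo * o.lo, self.lo * o.hi,
                 self.hi * o.lo, self.hi * o.hi)
            return Interval(min(p), max(p))

    def f(x, y):
        # Implicit unit circle, evaluated over a whole box at once.
        # (x*x uses the general product, so the enclosure is valid
        # though not the tightest possible for a square.)
        return x * x + y * y - Interval(1.0, 1.0)

    def classify(x, y, depth=8):
        # Cover the box (x, y) with sub-boxes labeled with certainty.
        v = f(x, y)
        if v.lo > 0.0:
            return [((x, y), "outside")]   # certain: f > 0 on the box
        if v.hi < 0.0:
            return [((x, y), "inside")]    # certain: f < 0 on the box
        if depth == 0:
            return [((x, y), "boundary")]  # undecided at finest level
        xm = 0.5 * (x.lo + x.hi)
        ym = 0.5 * (y.lo + y.hi)
        out = []
        for xs in (Interval(x.lo, xm), Interval(xm, x.hi)):
            for ys in (Interval(y.lo, ym), Interval(ym, y.hi)):
                out += classify(xs, ys, depth - 1)
        return out

    boxes = classify(Interval(-2.0, 2.0), Interval(-2.0, 2.0))
    inside = sum(1 for _, label in boxes if label == "inside")
    print(inside, "boxes certified inside, of", len(boxes), "total")

The "inside" and "outside" labels are guarantees, not guesses; only the thin band of "boundary" boxes remains undecided, and it shrinks as the depth grows.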


POSITION:
Steve Fortune
Bell Laboratories

Geometric algorithms are usually described in the conceptual model of the real numbers, with unit-cost exact arithmetic operations. Implementers often substitute floating-point arithmetic for real arithmetic. This leads to the well-known problem of numerical robustness, since geometric predicates depend upon sign-evaluation, which is unreliable if expression evaluation is approximate.
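A standard illustration is the two-dimensional orientation predicate. In the sketch below (our construction; the specific coordinates are our own choice of a nearly degenerate triple), the sign is computed both in double precision and exactly with rationals over a grid of one-ulp perturbations; the two typically disagree on some of the perturbed inputs:

    from fractions import Fraction

    def orient2d(ax, ay, bx, by, cx, cy):
        # Sign of the determinant | bx-ax  by-ay |
        #                         | cx-ax  cy-ay |
        # > 0: counterclockwise, < 0: clockwise, 0: collinear.
        return (bx - ax) * (cy - ay) - (by - ay) * (cx - ax)

    def sign(v):
        return (v > 0) - (v < 0)

    # A collinear triple on the line y = x, with one x-coordinate
    # perturbed by single ulps; the true sign is 0 or negative.
    ax, ay = 0.5, 0.5
    bx, by = 12.0, 12.0
    cx, cy = 24.0, 24.0
    ulp = 2.0 ** -53
    mismatches = 0
    for i in range(256):
        px = ax + i * ulp
        approx = sign(orient2d(px, ay, bx, by, cx, cy))
        exact = sign(orient2d(Fraction(px), Fraction(ay),
                              Fraction(bx), Fraction(by),
                              Fraction(cx), Fraction(cy)))
        if approx != exact:
            mismatches += 1
    print(mismatches, "of 256 floating-point signs are wrong")

Algorithms that branch on such signs can be steered down contradictory paths by inputs that differ by a single ulp.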

An ideal solution to the problem of numerical robustness would be simple, efficient, and widely applicable. No such solution exists or is likely to exist.

The most attractive available solution to robustness is exact evaluation of geometric predicates. Exact evaluation simplifies reasoning about the correctness of algorithms and can reduce the number of special cases. Performance concerns can be addressed by the use of adaptive-precision arithmetic, which, with careful engineering, brings the cost close to that of plain floating-point arithmetic. Exact arithmetic is the only way to obtain trustworthy implementations of complex geometric algorithms, e.g., boolean set operations on polyhedra.
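The adaptive idea can be sketched briefly. This is our illustration only: real implementations, such as Shewchuk's publicly available predicates, use a cascade of increasingly precise floating-point stages rather than a single jump to rational arithmetic, together with carefully derived filter constants (the bound below is a slightly loosened form of that analysis):

    import sys
    from fractions import Fraction

    EPS = sys.float_info.epsilon / 2.0       # unit roundoff for doubles

    def orient2d_adaptive(ax, ay, bx, by, cx, cy):
        # Stage 1: plain floating point plus a forward error bound.
        detleft = (bx - ax) * (cy - ay)
        detright = (by - ay) * (cx - ax)
        det = detleft - detright
        errbound = (3.0 + 16.0 * EPS) * EPS * (abs(detleft) + abs(detright))
        if det > errbound or -det > errbound:
            return (det > 0) - (det < 0)     # float sign is certified
        # Stage 2: the cheap test was inconclusive; redo it exactly.
        # Fraction(float) is exact, so this sign is always correct.
        d = ((Fraction(bx) - Fraction(ax)) * (Fraction(cy) - Fraction(ay))
             - (Fraction(by) - Fraction(ay)) * (Fraction(cx) - Fraction(ax)))
        return (d > 0) - (d < 0)

    # Exactly collinear input: the filter declines, the exact stage
    # returns 0 rather than a noise-determined sign.
    print(orient2d_adaptive(0.5, 0.5, 12.0, 12.0, 24.0, 24.0))

Because nearly all inputs in practice are decided by the cheap first stage, the expensive exact stage contributes little to the average running time.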


POSITION:
Christoph Hoffmann
Purdue University

Geometric computations are a mixture of numerical calculations, such as determining the intersection of two surfaces, and logical deductions based on the numerical computations. When the computations are done with floating point arithmetic, the errors incurred may lead to uncertain deductions. Worse, different numerical computations may answer the same logical question in two different ways -- without anyone realizing it. So, what is one to do?

Of course, you could fix the numerical computation, or try a different deduction method! Both have been tried. To fix the arithmetic, one can use exact computations; that eliminates the robustness problem entirely, but it is too expensive. To fix the process of logical deduction is harder and must be based on a ternary logic: "yes," "no," and "unclear." Many attempts have been made, but they fell short, usually by failing to account fully for the "unclear" outcome of a test.
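To fix ideas, here is a toy sketch (ours, not any particular published system) of what the ternary discipline demands: the numerical layer may answer "unclear," and the logical layer must propagate that answer honestly instead of silently collapsing it to "yes" or "no":

    UNCLEAR = None    # three-valued logic: True, False, or UNCLEAR

    def sign_with_doubt(value, error_bound):
        # "yes" (certainly positive), "no" (certainly negative), or
        # UNCLEAR when the computed value is inside its error bound.
        if value > error_bound:
            return True
        if value < -error_bound:
            return False
        return UNCLEAR

    def tri_and(a, b):
        # Kleene three-valued AND: any certain "no" wins; otherwise
        # doubt spreads to the conclusion.
        if a is False or b is False:
            return False
        if a is UNCLEAR or b is UNCLEAR:
            return UNCLEAR
        return True

    # Example: "the point is left of both edges," where one edge test
    # produced a determinant far below its error bound.
    det1, err1 = 3.0e-16, 1.0e-15    # below its bound -> UNCLEAR
    det2, err2 = 0.5, 1.0e-15        # comfortably positive -> True
    print(tri_and(sign_with_doubt(det1, err1),
                  sign_with_doubt(det2, err2)))   # UNCLEAR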

There are also more radical proposals to repair the robustness problem in geometric computations: You could redefine the problem so that troublesome inputs are changed to "nice" ones. Or you could redefine what the result should be, so that wrong outputs are "right." Depending on the nature of your business, these solutions might work for you.

Current trends postpone "general" solutions in favor of targeted solutions for specific geometric problems. For example, if your numerical computation involves evaluating a univariate polynomial and testing the result for zero, then there are fancy techniques that make the "unclear" outcome almost as rare as being struck dead by a meteor. Better yet, they are not excessively expensive. And they do astonish the numerical analysts. Then again, that is just for evaluating a polynomial in one variable. Still, perhaps we can figure out how to leverage such small advances and accelerate what has been, for so many years, a disappointingly slow process.
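The flavor of such techniques, minus the fancy parts, can be outlined. The sketch below is our simplification: Horner evaluation carries a first-order running error bound in the style of Higham, and the sign is reported only when the bound certifies it (a fully rigorous bound would be inflated by a small factor, and the genuinely fancy methods are both sharper and cheaper):

    import sys

    U = sys.float_info.epsilon / 2.0     # unit roundoff for doubles

    def certified_sign(coeffs, x):
        # coeffs[i] is the coefficient of x**i. Evaluate by Horner's
        # rule while accumulating a first-order bound on rounding error.
        y = coeffs[-1]
        err = 0.0
        for a in reversed(coeffs[:-1]):
            prod = y * x
            y = prod + a
            # carried error scaled by |x|, plus one rounding in the
            # product and one in the sum:
            err = err * abs(x) + U * (abs(prod) + abs(y))
        if abs(y) > err:
            return (y > 0) - (y < 0)     # sign certified despite rounding
        return None                      # "unclear": fall back to exact

    # (x - 1)**3 expanded; near x = 1 the value is ~1e-21 while the
    # rounding noise is ~1e-16, so the honest answer is "unclear".
    print(certified_sign([-1.0, 3.0, -3.0, 1.0], 1.0000001))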


POSITION:
Geometry and Graphics: From Scan-Lines to Topological Anomalies
Thomas J. Peters
University of Connecticut

Scan-line techniques typify the relationship between geometry and graphics. Namely, the geometric vertices of a polygon are approximated by floating point values within the resolution available. A discretization process then identifies all pixels within the interior of the polygon and shades accordingly. In complex scenes, topological anomalies often appear at the boundaries between adjacent polygons.
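For concreteness, a bare-bones version of that discretization step might look like the following (an even-odd sketch of ours; production rasterizers add fill-rule and shared-edge conventions precisely to tame the anomalies discussed next):

    def scanline_fill(vertices, width, height):
        # vertices: polygon as [(x0, y0), (x1, y1), ...] in pixel units.
        # Returns the (x, y) pixel coordinates whose centers lie inside
        # the polygon under the even-odd rule.
        filled = set()
        n = len(vertices)
        for row in range(height):
            yc = row + 0.5                     # sample at pixel centers
            xs = []
            for i in range(n):
                (x1, y1), (x2, y2) = vertices[i], vertices[(i + 1) % n]
                # half-open rule so a vertex on a scanline counts once
                if (y1 <= yc) != (y2 <= yc):
                    t = (yc - y1) / (y2 - y1)  # where the edge crosses
                    xs.append(x1 + t * (x2 - x1))
            xs.sort()
            for left, right in zip(xs[0::2], xs[1::2]):
                for col in range(int(left + 0.5), int(right + 0.5)):
                    filled.add((col, row))
            # Adjacent polygons discretize this row independently; their
            # crossing x's need not agree, hence cracks and overlaps.
        return filled

    pixels = scanline_fill([(1.2, 1.1), (8.7, 2.3), (4.4, 7.9)], 10, 10)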

These topological anomalies arise from the failure to reconcile the concept of a boundary element with the realities of finite precision computation. By definition, even infinitesimally small perturbations of boundary points can cause them to fall into the interior or exterior. Even while topological connectivity of geometric models is represented exactly by symbolic data, this connectivity is determined approximately via floating-point values. The inherent disparities between exactitude and approximation can lead to semantic inconsistencies. In practical implementations, algorithmic tolerance values are intended to preserve semantic consistency between topology and geometry. Yet, complete integration of suitable tolerance couplings remains a formidable unsolved software design problem.
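Part of the difficulty is elementary: tolerance-based coincidence is not transitive, so the "same point" relation that the symbolic topology takes for granted is not an equivalence relation numerically. A three-line illustration, with values chosen by us:

    TOL = 1.0e-9

    def same_point(a, b):
        return abs(a - b) <= TOL   # tolerance-based coincidence test

    a, b, c = 0.0, 0.6e-9, 1.2e-9
    print(same_point(a, b))        # True
    print(same_point(b, c))        # True
    print(same_point(a, c))        # False: "equality" fails to chain,
                                   # so merging vertices pairwise can
                                   # leave the topology inconsistent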

New conceptual approaches are needed for semantic consistency between floating point values for geometry and the symbolic representations for topology. Novel perspectives from mathematical semantics of programming languages afford some promise. The issues considered in defining neighborhoods within which point perturbations preserve equivalent model topology are of particular interest in animation and morphing. For instance, small geometric perturbations between successive animation frames can lead to undesirable self-intersections.
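A small concrete instance (our own construction, with exhaustive pairwise segment tests standing in for what a real system would do incrementally): moving one vertex of a thin quadrilateral by 2e-7 units flips the model from simple to self-intersecting:

    def orient(ax, ay, bx, by, cx, cy):
        # Sign of twice the signed area of triangle abc (floating point,
        # which suffices at these coordinate magnitudes).
        d = (bx - ax) * (cy - ay) - (by - ay) * (cx - ax)
        return (d > 0) - (d < 0)

    def segments_cross(p, q, r, s):
        # Proper crossing: each segment's endpoints lie strictly on
        # opposite sides of the other segment's supporting line.
        return (orient(*p, *q, *r) * orient(*p, *q, *s) < 0 and
                orient(*r, *s, *p) * orient(*r, *s, *q) < 0)

    def self_intersects(poly):
        n = len(poly)
        edges = [(poly[i], poly[(i + 1) % n]) for i in range(n)]
        for i in range(n):
            for j in range(i + 2, n):
                if i == 0 and j == n - 1:
                    continue               # skip adjacent edge pairs
                if segments_cross(*edges[i], *edges[j]):
                    return True
        return False

    frame1 = [(0.0, 0.0), (10.0, 0.0), (5.0, 1.0e-7), (0.0, 1.0e-6)]
    frame2 = [(0.0, 0.0), (10.0, 0.0), (5.0, -1.0e-7), (0.0, 1.0e-6)]
    print(self_intersects(frame1))         # False: thin but simple
    print(self_intersects(frame2))         # True: vertex crossed an edge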


Panel Format

The organizer will lead off with a 10-12 minute overview of the geometric computing problem, followed by a 10-12 minute presentation from each of the four other panelists. There will then be a 30-40 minute period for audience comments, questions, and perspectives.


Compiled by Chee Yap.