COURSE:
-- benign vs. catastrophic errors
-- examples: inside vs. outside when intersecting 2 lines; is a point on a line?
-- root cause: fixed-precision computation
-- IEEE Std 754 (describe this)
-- history of floating-point computation
-- examples of geometric computation: convex hulls, Voronoi diagrams, linear programming, triangulations
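The "is a point on a line?" failure is easy to reproduce. A minimal sketch in Python, assuming ordinary IEEE doubles: in real arithmetic the point (0.1 + 0.2, 0.3) equals (0.3, 0.3) and lies on the line y = x, but the rounded sum pushes it off the line, so the sign test comes out nonzero.

```python
def orient2d(ax, ay, bx, by, cx, cy):
    # Twice the signed area of triangle abc:
    # > 0 left turn, < 0 right turn, == 0 collinear.
    return (bx - ax) * (cy - ay) - (by - ay) * (cx - ax)

# Mathematically the point (0.1 + 0.2, 0.3) lies on the line
# through (0, 0) and (1, 1); in doubles, 0.1 + 0.2 != 0.3.
d = orient2d(0.0, 0.0, 1.0, 1.0, 0.1 + 0.2, 0.3)
print(d)        # a tiny nonzero value: the point tests as OFF the line
```

By itself this is a benign error (a slightly wrong answer); fed into a larger algorithm, a wrong sign at a branch can become catastrophic (an inconsistent hull, an infinite loop, a crash).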
-- language constructs
-- arithmetic fixes
-- finite-precision geometry
-- illustration: the finite-precision line
-- geometric = combinatorial + numeric
-- e.g., convex hull
-- e.g., shortest path
-- e.g., triangulated surface
-- consistency constraint
-- qualitative vs. quantitative errors
-- the "pure consistency play" (approach of Hopcroft et al.)
-- if we assume the input is consistent, we prefer to guarantee the EXACT combinatorial structure
-- EGC clarified: decision nodes determine the exact combinatorial structure! Hence we must evaluate signs correctly
-- naive EGC (exact arithmetic; work of Yu)
-- nevertheless, it addresses a major source of non-robustness
-- role of floating point (work of Yap-Dube)
-- caveats:
   * need for nominal exactness
   * works only for "geometric computation" (which may be only one aspect of the computation, e.g., in physical simulation)
   * wrong to go after pure consistency (Hoffmann/Hopcroft)
-- why it is a win:
   * separation of rounding geometry from computation
   * separation of symbolic perturbation from cleanup
   * approximate arithmetic is superior to rational
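To make "must evaluate signs correctly" concrete, here is a naive-EGC sketch in Python (names are mine, not from any particular library): the double-precision coordinates are converted to exact rationals, so the sign of the orientation determinant — and hence the decision at that node of the computation — is the true sign for those inputs.

```python
from fractions import Fraction

def sign(x):
    return (x > 0) - (x < 0)

def orient2d_exact(a, b, c):
    # Fraction(double) is exact, so the determinant is computed
    # with no rounding at all and its sign is correct.
    ax, ay = map(Fraction, a)
    bx, by = map(Fraction, b)
    cx, cy = map(Fraction, c)
    return sign((bx - ax) * (cy - ay) - (by - ay) * (cx - ax))

print(orient2d_exact((0, 0), (1, 1), (2, 2)))   # 0: exactly collinear
```

This is "naive" in the sense above: every predicate pays full rational-arithmetic cost, which is exactly the inefficiency that floating-point filters and precision-driven evaluation address.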
-- geometric primitives
-- in-depth study of some basic geometric algorithms (convex hulls, Voronoi diagrams, triangulations)
-- Tarski's notion of "geometric"
-- decision problems for real closed fields
-- resultants
-- root bounds
-- radical bound of Mehlhorn
-- limitations (major open problem about non-algebraic functions)
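As a toy stand-in for the constructive root bounds above (Mehlhorn's radical bound is far more refined), the classical Cauchy bound already shows the mechanism: from the coefficients alone one computes an interval confining every root, and such bounds are what tell an exact evaluator how much precision suffices to fix a sign. A sketch in Python:

```python
from fractions import Fraction

def cauchy_root_bounds(coeffs):
    # coeffs[i] is the coefficient of x**i; assume coeffs[0] != 0
    # and coeffs[-1] != 0.  Every root r satisfies lo <= |r| <= hi.
    c = [Fraction(a) for a in coeffs]
    hi = 1 + max(abs(a) for a in c[:-1]) / abs(c[-1])
    lo = 1 / (1 + max(abs(a) for a in c[1:]) / abs(c[0]))
    return lo, hi

lo, hi = cauchy_root_bounds([-2, 0, 1])   # x**2 - 2, roots +-sqrt(2)
print(float(lo), float(hi))               # sqrt(2) lies in [2/3, 3]
```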
-- the numerical basis to support EGC
-- 4 accuracy levels
-- difference between precision and error bound
-- difference between relative and absolute precision
-- difference between levels 2 and 3
-- allows the user to execute a program at ANY point along a speed vs. robustness curve
-- advantage of this model: debugging
-- "no change in (naive) programmer behavior":
   * set epsilons to 0
   * do not play with intervals
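To illustrate the absolute-precision notion with real code, here is a toy of my own (not any actual package's API): a square root honoring a requested absolute precision a, i.e. guaranteeing |result - sqrt(x)| <= 2**-a. A relative request would instead scale the tolerance by the magnitude of the answer, which is a genuinely different contract for very small or very large x.

```python
from fractions import Fraction
from math import isqrt

def sqrt_abs(x, a):
    # Guaranteed absolute precision: |result - sqrt(x)| <= 2**-a.
    # One guard bit (a+1 instead of a) absorbs both the floor below
    # and the truncation in the integer square root.
    s = Fraction(x) * 4 ** (a + 1)
    return Fraction(isqrt(s.numerator // s.denominator), 2 ** (a + 1))

r = sqrt_abs(Fraction(2), 30)
print(float(r))   # 1.41421356..., within 2**-30 of sqrt(2)
```

The precision a is what the caller dials; the error bound 2**-a is what the routine certifies. Raising or lowering a is exactly the speed-vs-robustness knob described above.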
-- static analysis of Fortune and van Wyk
-- dynamic analysis
-- LEDA's work
-- analysis of Preparata, Olivier
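The filter idea behind both the static (Fortune-van Wyk) and dynamic (LEDA-style) analyses can be sketched in a few lines: evaluate the predicate in doubles together with an error bound; if the magnitude clears the bound, the sign is certified; otherwise fall back to exact arithmetic. The bound below is a deliberately crude, conservative assumption (a few ulps of the term magnitudes), not the carefully derived constants of those works.

```python
from fractions import Fraction

EPS = 2.0 ** -52   # unit roundoff for IEEE doubles

def orient2d_filtered(a, b, c):
    # Fast path: doubles plus a crude forward error bound.
    t1 = (b[0] - a[0]) * (c[1] - a[1])
    t2 = (b[1] - a[1]) * (c[0] - a[0])
    det = t1 - t2
    errbound = 8 * EPS * (abs(t1) + abs(t2))
    if abs(det) > errbound:
        return (det > 0) - (det < 0)
    # Slow path: exact rational arithmetic decides the sign.
    ax, ay = map(Fraction, a)
    bx, by = map(Fraction, b)
    cx, cy = map(Fraction, c)
    d = (bx - ax) * (cy - ay) - (by - ay) * (cx - ax)
    return (d > 0) - (d < 0)
```

Most inputs take the fast path; only near-degenerate ones pay for exactness, which is why filtering recovers most of floating point's speed while keeping the EGC guarantee.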
-- why it is important
-- why it is more tractable than traditional approaches
-- snap rounding (Guibas et al.)
-- Greene-Yao
-- Fortune's
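The vertex-rounding step at the heart of snap rounding is easy to show; the full Guibas et al. scheme additionally reroutes every segment through the "hot" pixels it meets, which this Python sketch omits.

```python
import math

def snap(p, pixel=1.0):
    # Round a point to the center of the grid cell (pixel) containing it.
    x, y = p
    return (math.floor(x / pixel) * pixel + pixel / 2,
            math.floor(y / pixel) * pixel + pixel / 2)

print(snap((2.7, -0.2)))   # (2.5, -0.5)
```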
-- another serious impediment to robustness: the need to enumerate all special cases (their number grows exponentially with dimension)
-- Yap's method
-- Seidel's method
-- Canny's method
-- clean-up algorithms (still, a very effective separation of concerns)
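A minimal sketch of the symbolic-perturbation idea, perturbing a single point for brevity; the methods above perturb all inputs consistently. Perturbing c to (cx + eps, cy + eps**2) for an infinitesimal eps > 0 turns the orientation determinant into a polynomial in eps, and its sign is decided by the first nonzero coefficient, so degenerate (collinear) inputs never reach the algorithm's special cases.

```python
def sign_eps(coeffs):
    # Sign of coeffs[0] + coeffs[1]*eps + coeffs[2]*eps**2 + ... for an
    # infinitesimal eps > 0: the first nonzero coefficient decides.
    for c in coeffs:
        if c != 0:
            return (c > 0) - (c < 0)
    return 0

def orient2d_perturbed(a, b, c):
    # orient2d with c replaced by (cx + eps, cy + eps**2):
    # det = det0 - (by - ay)*eps + (bx - ax)*eps**2.
    det0 = (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
    return sign_eps([det0, -(b[1] - a[1]), b[0] - a[0]])

print(orient2d_perturbed((0, 0), (1, 1), (2, 2)))   # -1: degeneracy resolved
```

Since the constant term is the unperturbed determinant, non-degenerate inputs get exactly their old answer; only ties are broken, and a clean-up pass can later remove the artifacts the perturbation introduced.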
-- work of Clarkson
-- work of Preparata et al. for 2/3 dimensions
-- Bareiss
-- work of Karasick
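Bareiss's fraction-free elimination deserves a concrete look, since it computes exact determinants (hence exact predicate signs) over the integers without ever leaving them: every division in the update is exact, so intermediate entries stay integral and grow only polynomially. A sketch in Python:

```python
def bareiss_det(M):
    # Exact determinant of a square integer matrix.
    A = [row[:] for row in M]
    n = len(A)
    sign, prev = 1, 1
    for k in range(n - 1):
        if A[k][k] == 0:                 # bring in a nonzero pivot
            for i in range(k + 1, n):
                if A[i][k] != 0:
                    A[k], A[i] = A[i], A[k]
                    sign = -sign
                    break
            else:
                return 0                 # column of zeros: singular
        for i in range(k + 1, n):
            for j in range(k + 1, n):
                # The division by the previous pivot is always exact.
                A[i][j] = (A[i][j] * A[k][k] - A[i][k] * A[k][j]) // prev
            A[i][k] = 0
        prev = A[k][k]
    return sign * A[n - 1][n - 1]

print(bareiss_det([[1, 2, 3], [4, 5, 6], [7, 8, 10]]))   # -3
```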
-- work of Sellen, Choi, and Yap
-- extensions
-- beyond Big Numbers
-- experience of LN
-- sensitivity analysis
-- compiler techniques: common subexpressions
-- compilation of expressions
-- partial evaluation
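One concrete compiler technique from the list: hash-consing the expression so each common subexpression is built (and later evaluated) once, turning the expression tree into a DAG. The class below is a hypothetical sketch of my own, not LN's or any real package's API.

```python
class ExprDag:
    # Hash-consing: structurally identical subexpressions get the same
    # node id, so the expression tree collapses into a DAG.
    def __init__(self):
        self._ids = {}
        self.nodes = []

    def node(self, op, *args):
        key = (op,) + args
        if key not in self._ids:
            self._ids[key] = len(self.nodes)
            self.nodes.append(key)
        return self._ids[key]

d = ExprDag()
x = d.node('var', 'x')
s = d.node('+', x, d.node('const', 1))
p = d.node('*', s, s)       # (x+1)*(x+1): the (x+1) node is shared
print(len(d.nodes))         # 4 nodes, not the tree's 7
```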
-- guaranteed absolute precision
-- work of Brent
-- exp, log
-- pi, e
-- recent fast formulas
-- hypergeometric functions
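In the spirit of Brent's guaranteed-precision evaluation, here is a toy Python version for the constant e: sum the Taylor series with a rigorous tail bound, so the result provably lies within the requested absolute error. (Brent's actual algorithms are far more sophisticated; this only shows the guarantee mechanism.)

```python
from fractions import Fraction

def e_to(a):
    # Return a rational q with |q - e| < 2**-a, using e = sum 1/k!.
    # The tail starting at the term 1/k! sums to less than 2/k!,
    # so stopping once 2*term < 2**-a certifies the error bound.
    target = Fraction(1, 2 ** a)
    total, term, k = Fraction(0), Fraction(1), 0
    while 2 * term >= target:
        total += term
        k += 1
        term /= k
    return total

print(float(e_to(50)))   # 2.71828182845904...
```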