
1 Introduction

In a sequence of papers Shirayanagi and Sekigawa [14, 31,32,33] propose so-called interval methods with zero rewriting to get exact results with bigfloat interval arithmetic in order to avoid expensive exact arithmetic. They propose to use bigfloat interval arithmetic of a fixed precision and to replace any zero-containing interval I arising during computation by the zero interval, i.e., the point interval representing zero. This is called zero rewriting. So the rule is “if we don’t know the sign for sure, we assume it is zero”. If the result of the computation cannot be verified, computation is repeatedly started over with increased bigfloat precision. Their goal of using interval methods with zero rewriting is getting both exact numerical and exact combinatorial results more efficiently.

Intuitively, interval methods with zero rewriting give us correct results if the underlying bigfloat precision is sufficiently high: During a finite computation we compute only finitely many numerical values. If our precision suffices to let the bigfloat interval arithmetic separate the non-zero values from zero, all zero rewritings are correct. This observation is made formal by Sweedler and Shirayanagi in their stabilization theorem [28, 29]. However, usually we do not know a sufficient precision in advance. Therefore, Shirayanagi and co-workers suggest repeatedly increasing the precision until correctness can be verified. In Sect. 3 we recap the various interval methods with zero rewriting as well as the corresponding approaches to result verification.

In contrast to the interval methods with zero rewriting, the primary motivation for the exact geometric computation paradigm [37] is achieving numerical robustness by avoiding inconsistent decisions: If all evaluations of geometric predicates are exact, inconsistencies are a non-issue. The emphasis is on getting exact combinatorial results; numerical data need not be exact. In this paper we compare interval methods with zero rewriting to techniques that are used in computational geometry for implementing the exact geometric computation paradigm, in particular to lazy adaptive evaluation with expression-dags, a general purpose approach for exact geometric computation with real algebraic numbers.

Shirayanagi and his co-workers were initially interested in algebraic algorithms only, especially in reducing the cost of algorithms manipulating polynomials by using floating-point arithmetic [30]. In polynomial algebra, exactness of the coefficients of the computed polynomials is indispensable. The situation is somewhat different in computational geometry, where exactness of the numerical part of the output is usually much less important than exactness of the combinatorial part. Often, together with the numerical data in the input, the combinatorial part serves as an exact symbolic representation of the numerical data in the output. For example, knowing the sites giving rise to a Voronoi vertex suffices to recover its numerical coordinates exactly.

In [33], Shirayanagi and Sekigawa apply their approach to geometric computing, more precisely, to planar convex hull computation. Are interval-symbol methods with correct zero rewriting a valuable alternative to the techniques used to implement the exact geometric computation paradigm? We discuss this question by comparing different versions of interval methods with zero rewriting to techniques used in computational geometry to implement the exact geometric computation paradigm. Our comparative discussion is accompanied by experiments on convex hull computation in the plane.

2 Exact Geometric Computation Paradigm

Let us call a representation of a real number r exact if we can read off the sign of r directly, without any further computation, and the representation allows us to approximate r to any accuracy we want. While exact arithmetic always maintains exact representations, exact decisions computation ensures only that all comparisons of numerical values as well as all sign computations are correct. Note that exact values are not always necessary for exact decisions (actually, as it turns out, in our context, they are never necessary); sufficiently close approximations are adequate. The exact geometric computation paradigm advocated by Yap [37] extends this relaxation from the arithmetic level to the geometric level. It asks for exact results of geometric predicates only. The error-free evaluation of predicates makes robustness problems a non-issue in geometric computing. Since exact geometric computation allows for incorrect decisions within the evaluation of geometric predicates, it is less strict than exact decisions computation. Of course, computation with exact arithmetic subsumes exact decisions computation, and exact geometric computation can be implemented by exact decisions computation. The latter approach is sometimes also called exact decisions geometric computation. Structural filtering [9] carries the relaxation even further. It asks for exact substeps only. Intermediate inexactness is permitted and repaired at the end of the execution of a substep where necessary. A substep might involve several predicates or might even embrace a whole algorithm, depending on the application.

The precision and robustness problem is closely related to the problem of degeneracies in geometric computing. On the one hand, exact decisions computation is required to detect and handle degeneracies exactly; on the other hand, exact decisions computation is a prerequisite for symbolic perturbation methods that allow one to circumvent the handling of degeneracies. Avoiding the handling of degeneracies is the goal of the topology-oriented approaches by Sugihara and his co-workers [34,35,36] and of controlled perturbation [10,11,12, 18]. Both approaches try to get by with floating-point arithmetic. Designing such algorithms seems to be more difficult and, overall, compared to the exact geometric computation paradigm, these approaches still seem to be less mature or less applicable in general. And of course, this way we get approximate solutions only.

Over the years, effective techniques [38] have been used and developed to support the efficient implementation of the exact geometric computation paradigm, most notably floating-point filters [7] and approaches exploiting error-free floating-point transformations [27]. A general purpose approach is coupling lazy adaptive evaluation with expression dags [1]. This can be done both on the arithmetic level [1, 3, 13] and on the geometric level [8, 23]. Main ingredients of these approaches are approximate expression evaluation, arithmetic filters, and constructive zero separation bounds, where a zero separation bound for an arithmetic expression E is a positive real number \(sep(E)\) such that either the value of E is zero or its absolute value is at least \(sep(E)\). For expressions involving operations \(+,-,{\cdot },/\) and \(\root k \of {\ }\) and integer (or rational) operands, zero separation bounds can be computed inductively according to the structure of an expression [4, 6, 19, 24,25,26]. Zero separation bounds allow us to verify that an expression is zero if we have a sufficiently close approximation: If the sum of the absolute value of the approximation and the error bound is smaller than the zero separation bound, we may conclude that the actual value is zero. The exact geometric computation paradigm is implemented in the C++-software libraries CGAL [5] and LEDA [16].
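To make the interplay of approximation, error bound, and separation bound concrete, consider the following Python sketch (ours, not part of the cited implementations). It uses machine-float intervals widened outward by one ulp per operation and assumes integer operands and the operations \(+,-,{\cdot }\) only, for which the constant 1 is a valid, if crude, separation bound: a nonzero integer has absolute value at least 1.

```python
import math

class Interval:
    """Machine-float intervals, widened one ulp outward per operation so
    that the exact real value is always enclosed (IEEE ops round correctly)."""
    def __init__(self, lo, hi=None):
        self.lo = lo
        self.hi = lo if hi is None else hi

    @staticmethod
    def _widen(lo, hi):
        return Interval(math.nextafter(lo, -math.inf),
                        math.nextafter(hi, math.inf))

    def __add__(s, o): return Interval._widen(s.lo + o.lo, s.hi + o.hi)
    def __sub__(s, o): return Interval._widen(s.lo - o.hi, s.hi - o.lo)
    def __mul__(s, o):
        c = [s.lo * o.lo, s.lo * o.hi, s.hi * o.lo, s.hi * o.hi]
        return Interval._widen(min(c), max(c))

def certified_sign(iv, sep=1.0):
    """Sign of the exact value, given a valid separation bound sep:
    a nonzero exact value is assumed to have absolute value >= sep."""
    if iv.lo > 0:
        return 1
    if iv.hi < 0:
        return -1
    if max(abs(iv.lo), abs(iv.hi)) < sep:
        return 0  # |value| <= |approx| + error < sep forces value == 0
    raise ValueError("approximation too coarse to decide the sign")

# (a+b)(a-b) - (a*a - b*b) is exactly zero over the integers
a, b = Interval(12345.0), Interval(6789.0)
e = (a + b) * (a - b) - (a * a - b * b)
print(certified_sign(e))  # 0: the interval straddles 0 but lies inside (-1, 1)
```

For expressions with division and roots, the inductively computed bounds cited above take the place of the constant 1.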

Lazy Adaptive Evaluation with Expression-Dags. In the sequel we focus on arithmetic expression-dags, since they are more closely related to the intervals-with-symbols approach of Shirayanagi and Sekigawa. Recording the computation history of numerical values in expression-dags, i.e., expression-“trees” that may share common subexpressions, allows one to (re)compute an approximation of the value of the expression at any time at any accuracy. Using such dags, we can adaptively compute the sign of an expression correctly by repeatedly increasing the precision until the error is less than the absolute value or constructive zero separation bounds allow us to conclude that the actual value is zero. Furthermore, we apply lazy evaluation to sign computation, i.e., sign computation is delayed until the sign is actually needed. This strategy is implemented in C++ number types in CORE [13], leda::real [3], and RealAlgebraic [20]. Since all sign computations and hence all decisions in geometric predicates are exact, we banish inconsistencies caused by numerical imprecision.
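The following minimal Python sketch (our illustration; real implementations such as leda::real are far more elaborate) shows lazy adaptive sign computation on a dag with shared subexpressions. Precision is simulated by rounding interval endpoints to p binary places, and, as above, integer leaves with \(+,-,{\cdot }\) make 1 a valid separation bound.

```python
from fractions import Fraction as F
import math

def down(x, p): return F(math.floor(x * 2**p), 2**p)  # round to p binary places
def up(x, p):   return F(math.ceil(x * 2**p), 2**p)

class Node:
    """Expression-dag node; children may be shared between expressions."""
    def __init__(self, op, *kids, value=None):
        self.op, self.kids, self.value = op, kids, value

    def approx(self, p):
        """Enclosing interval with endpoints rounded to p binary places."""
        if self.op == "leaf":
            return F(self.value), F(self.value)
        (alo, ahi), (blo, bhi) = (k.approx(p) for k in self.kids)
        if self.op == "+":
            lo, hi = alo + blo, ahi + bhi
        elif self.op == "-":
            lo, hi = alo - bhi, ahi - blo
        else:  # "*"
            c = [alo * blo, alo * bhi, ahi * blo, ahi * bhi]
            lo, hi = min(c), max(c)
        return down(lo, p), up(hi, p)

    def sign(self, sep=F(1)):
        """Lazy and adaptive: evaluated only when called; the precision is
        doubled locally until the interval excludes zero or is narrower
        than the separation bound."""
        p = 4
        while True:
            lo, hi = self.approx(p)
            if lo > 0:
                return 1
            if hi < 0:
                return -1
            if max(-lo, hi) < sep:
                return 0  # certified zero
            p *= 2

leaf = lambda v: Node("leaf", value=v)
s = Node("+", leaf(7), leaf(3))            # shared subexpression
e = Node("-", Node("*", s, s), leaf(100))  # (7+3)^2 - 100 == 0
print(e.sign())  # 0
```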

3 Variants of Interval Methods with Zero Rewriting

Over the years Shirayanagi and Sekigawa and Shirayanagi and Katayama came up with different versions of interval methods with zero rewriting.

Interval Method with Zero Rewriting. In the simplest version, Shirayanagi and Sekigawa [32] propose to replace every zero-containing interval I arising during bigfloat interval computation with a certain precision by the zero interval immediately, without any verification of this step, and to verify the result of the overall computation afterwards. If this verification fails, the whole computation is redone with increased precision of the bigfloat interval arithmetic. This is repeated until verification succeeds, see Fig. 1(a).
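Schematically, this outer loop can be sketched as follows in Python (our toy example: a single sign computation stands in for the algorithm, and exact rational arithmetic stands in for the verification of the overall output).

```python
from fractions import Fraction as F
import math

def down(x, p): return F(math.floor(x * 2**p), 2**p)
def up(x, p):   return F(math.ceil(x * 2**p), 2**p)

def zero_rewrite(lo, hi):
    """Plain zero rewriting: any zero-containing interval becomes [0, 0]."""
    return (F(0), F(0)) if lo <= 0 <= hi else (lo, hi)

def sign_at(x, y, p):
    """Sign of x - y at precision p, with unconditional zero rewriting."""
    d = F(x) - F(y)
    lo, hi = zero_rewrite(down(d, p), up(d, p))
    return (lo > 0) - (hi < 0)

def verified_sign(x, y, p=4):
    """Outer loop: verify afterwards; restart with doubled precision on failure."""
    exact = (F(x) > F(y)) - (F(x) < F(y))  # the verification step
    history = []
    while True:
        s = sign_at(x, y, p)
        history.append((p, s))
        if s == exact:
            return s, history
        p *= 2

print(verified_sign(F(1, 2**20), 0))
# sign 1; precisions 4, 8, 16 wrongly rewrite to zero, precision 32 succeeds
```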

Zero rewriting reminds one of epsilon tolerancing, also called epsilon tweaking, where we replace tests for zero by comparisons of absolute values with certain epsilons. In contrast to zero rewriting, where we get “sound epsilons” through interval arithmetic, finding epsilons for epsilon tolerancing is often guesswork and “an art that requires infinite patience” [22]. Usually, people applying epsilon tweaking do not attempt to verify results. Of course, epsilon tweaking does not implement the exact geometric computation paradigm. Unfortunately, like epsilon tweaking, zero rewriting does not abolish inconsistencies. For example, equality testing is not transitive: we might rewrite the distance between a and b to zero as well as the distance between b and c, but not the distance between a and c. Since inconsistent decisions can still arise, epsilon tweaking is not a recommended approach to precision and robustness problems in computational geometry.
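The non-transitivity can be seen in three lines of Python (the concrete values are ours, chosen for illustration):

```python
eps = 1e-9
close = lambda u, v: abs(u - v) <= eps  # epsilon-tolerant "equality"

# b is close to both a and c, yet a and c are not close to each other
a, b, c = 0.0, 0.6e-9, 1.2e-9
print(close(a, b), close(b, c), close(a, c))  # True True False
```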

Correcting afterwards presumes that the algorithm runs until the end regardless of whether zero rewriting is correct. Therefore, the interval method with plain zero rewriting can be used only with (quasi-)robust algorithms that always compute some (kind of) useful output that can be verified afterwards. However, designing a robust geometric algorithm is a difficult task. Since many geometric algorithms are not (quasi-)robust, the applicability of the initial version of interval methods with zero rewriting is rather limited. The idea of correcting afterwards is present in structural filtering approaches in computational geometry, too. While Shirayanagi and Sekigawa suggest simply rerunning the algorithm with higher precision, structural filtering aims at repairing the result of a structurally filtered step using other exact methods. Kettner and Welzl [15] apply the idea to convex hull computation in the plane. Funke et al. discuss the above robustness problem for structural filtering at the algorithm level in [9].

Zero rewriting takes place immediately when a new zero-containing bigfloat interval is created. One could think of lazy zero rewriting as well, where zero rewriting takes place only if we ask for the sign of a numerical value approximated by a bigfloat interval.

Intervals with Symbols. Since output verification without exact numerical values might be difficult, Shirayanagi and Sekigawa [32] propose to maintain symbolic information in addition to the approximating bigfloat intervals. They store symbol strings to record the computation history of a numerical value. Keeping track of the history of a value allows one to get exact representations for the numerical values approximated by the intervals at the very end of the computation.

There are two versions presented in [32]. In a first version, the symbol strings of the operands of an arithmetic operation are copied into the symbol string of the result, together with a symbol representing the arithmetic operation performed. These symbol strings are stored with the intervals. Unfortunately, this lets the size of the symbol strings grow quickly, since the same information is stored in several places. Therefore, Shirayanagi and Sekigawa propose a second version where intervals with symbols share computation histories: They maintain a global symbol list where each entry records an operation together with list indices for the operands. Now only a list index is stored with an interval. List index and global symbol list allow one to reconstruct the computation history for a value and hence enable exact (re-)computation. Still, zero rewriting is applied whenever a zero-containing interval arises. No attempt is made yet to verify that the actual value is zero.
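A minimal sketch of the second version (our own entry format: a tuple of an operation symbol and operand indices) shows how histories are shared rather than copied, and how an exact value can later be reconstructed from an index alone:

```python
from fractions import Fraction as F

symbols = []  # the global symbol list, shared by all values

def record(entry):
    """Append an entry; only the returned index is stored with an interval."""
    symbols.append(entry)
    return len(symbols) - 1

def exact(i):
    """Reconstruct the exact value of entry i from the recorded history."""
    entry = symbols[i]
    if entry[0] == "leaf":
        return F(entry[1])
    op, a, b = entry
    x, y = exact(a), exact(b)
    return x + y if op == "+" else x - y if op == "-" else x * y

ia = record(("leaf", 3))
ib = record(("leaf", 5))
isum = record(("+", ia, ib))        # history shared, not copied
iprod = record(("*", isum, isum))   # (3 + 5) * (3 + 5)
print(exact(iprod), len(symbols))   # 64 4
```

Note that entries are only ever appended: the list records every value ever computed, which is exactly the blow-up problem discussed below.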

Recording computation history enables reconstruction of exact values and eases verification of the computed result afterwards. As before, if verification of the computed output fails, computation is restarted from scratch with higher-precision bigfloat interval arithmetic until we obtain a correct result. As discussed above, output verification must be possible somehow. If we have a selective geometric problem, where all numerical data of the output is present in the input already, there is no need to verify these numerical data and inspection of the combinatorial part suffices. If we have a constructive geometric problem, i.e., new numerical data is constructed, we can make use of the symbolic information stored with the intervals and the symbol list to verify these data. However, according to [32], the symbolic part is erased when zero rewriting takes place. Unfortunately, this way we lose the option to check zero values for correctness at the end of the computation, and it is unclear how exactness is ensured in such a case.

Maintenance of computation history in a global symbol list is similar to maintenance of computation history in expression dags. Copying is avoided and common subexpressions are shared. However, while expression dags use reference counting to detect when subexpressions are not needed anymore, there is no corresponding mechanism in the global symbol list. The symbol list thus contains information for many numerical values that no longer exist. Therefore, the global symbol list grows continuously and can become quite large, even for flat computations, which are typical of most algorithms for low-dimensional geometric problems.

Fig. 1.

Pictorial outlines of (a) the plain interval method with zero rewriting, (b) the interval-symbol method with correct zero rewriting, (c) the improved version of (b), and (d) lazy adaptive evaluation with expression-dags. In cases (a)–(c) we convert numerical data to bigfloat intervals with precision p before running the algorithm and restart the process with increased precision if output verification fails (a) or if we detect an incorrect zero rewriting (b), (c), where in (c) we determine the new precision based on the incorrect zero rewriting. Zero rewriting takes place when a new interval is created, while lazy evaluation is postponed to decision steps.

Interval-Symbol Method with Correct Zero Rewriting. The use of symbols also allows for the verification of zero rewritings [32]. This verification is done by an exact computation according to the computation history recorded in the symbol strings or the symbol list. The overall strategy is to restart computation from scratch with increased precision as soon as verification by exact computation fails to confirm a zero rewriting, i.e., computation is restarted with increased precision if bigfloat interval arithmetic at the current precision does not suffice to separate a non-zero value from zero, see Fig. 1(b).
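A compact Python sketch of this strategy (ours: the exact value carried alongside the interval stands in for the recorded symbol information, and a single subtraction stands in for the algorithm):

```python
from fractions import Fraction as F
import math

def down(x, p): return F(math.floor(x * 2**p), 2**p)
def up(x, p):   return F(math.ceil(x * 2**p), 2**p)

class PrecisionFailure(Exception):
    """Raised when exact recomputation refutes a zero rewriting."""

class ISCZ:
    """Interval with symbol: a p-bit interval plus its exact value, which
    here stands in for the recorded computation history."""
    def __init__(self, lo, hi, exact, p):
        self.p, self.exact = p, exact
        if lo <= 0 <= hi and exact != 0:
            raise PrecisionFailure  # zero rewriting would be incorrect
        self.lo, self.hi = ((F(0), F(0)) if lo <= 0 <= hi else (lo, hi))

    @classmethod
    def leaf(cls, v, p):
        return cls(down(F(v), p), up(F(v), p), F(v), p)

    def __sub__(s, o):
        return ISCZ(down(s.lo - o.hi, s.p), up(s.hi - o.lo, s.p),
                    s.exact - o.exact, s.p)

def run(x, y, p=4):
    """Restart from scratch with doubled precision after each failure."""
    while True:
        try:
            d = ISCZ.leaf(x, p) - ISCZ.leaf(y, p)
            return (d.lo > 0) - (d.hi < 0), p
        except PrecisionFailure:
            p *= 2

print(run(F(1, 1024), 0))  # (1, 16): precisions 4 and 8 are refuted
```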

The interval-symbol method with correct zero rewriting reminds us of floating-point filters. With a floating-point filter, we try to verify non-degeneracy by fast hardware-supported floating-point arithmetic and error bounds, i.e., interval arithmetic. If verification fails, we switch to an exact computation or some other exact method. Of course, one can generalize floating-point filters to arithmetic filters using bigfloat arithmetic instead of hardware-supported floating-point arithmetic. With standard floating-point filters, whenever we have a zero-containing interval we use exact arithmetic to compute the exact sign. However, once we know the exact sign, we continue our computation with the evaluation of other geometric predicates. There is no restart from scratch. In order to compute the exact sign we must have access to the exact input data. Hence, such filters are most applicable in geometric predicates that operate on the input data directly. In cascaded computations we must have some other means of getting access to exact input data, e.g. maintaining expression dags.
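A classical floating-point filter for the sign of a 2×2 determinant might look as follows in Python (our sketch; the constant in the static error bound is deliberately generous rather than tight):

```python
from fractions import Fraction as F

U = 2.0**-53  # unit roundoff of IEEE double precision

def det2_sign(a, b, c, d):
    """Sign of a*d - b*c: floating-point filter with a generous static
    error bound; exact rational fallback only when the filter fails."""
    det = a * d - b * c
    err = 8 * U * (abs(a * d) + abs(b * c))  # safely covers the 3 roundings
    if det > err:
        return 1
    if det < -err:
        return -1
    # filter failure: decide exactly and simply continue -- no restart
    exact = F(a) * F(d) - F(b) * F(c)
    return (exact > 0) - (exact < 0)

print(det2_sign(2.0, 1.0, 1.0, 0.5))  # 0: filter fails, exact check decides
print(det2_sign(2.0, 3.0, 1.0, 4.0))  # 1: the filter certifies the sign
```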

Recently, Katayama and Shirayanagi [14] revised and improved the iteration strategy. Now, whenever interval arithmetic with the current precision gives us a zero-containing interval for which verification fails, only the computation of this interval is rerun, using the information in the symbol strings or list, with repeatedly increased precision until we get an interval that does not contain zero anymore. The final sufficient precision is then used when restarting the overall computation from scratch, see Fig. 1(c).
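The local search for a sufficient precision can be sketched as follows (our Python illustration; `enclose` stands in for re-running the computation of the one offending interval):

```python
from fractions import Fraction as F
import math

def down(x, p): return F(math.floor(x * 2**p), 2**p)
def up(x, p):   return F(math.ceil(x * 2**p), 2**p)

def enclose(value, p):
    """Stands in for re-running the one offending interval's computation."""
    return down(F(value), p), up(F(value), p)

def sufficient_precision(value, p=4):
    """Double p until the interval for this single value excludes zero;
    the overall computation is then restarted once at the precision found."""
    while True:
        lo, hi = enclose(value, p)
        if not (lo <= 0 <= hi):
            return p
        p *= 2

print(sufficient_precision(F(1, 2**20)))  # 32
```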

Besides the differences between expression-dags and symbol lists pointed out above, the strategy of lazy adaptive evaluation with expression-dags is substantially different in other aspects as well, see Fig. 1(d). With lazy adaptive evaluation on expression-dags we never restart from scratch. Moreover, verified sign computation is lazy, i.e., it takes place only if requested by the algorithm in a decision step, not as soon as a zero-containing interval arises. Furthermore, precision is increased locally only, i.e., within the subexpression whose sign we would like to know, similar to what Katayama and Shirayanagi propose. However, the precision of other interval computations is not automatically increased as well. This way we can save a lot of computation time, since we use higher precision only where we need it. Usually, approximate evaluation is precision-driven. This means that we do not use the same precision for all bigfloat interval computations everywhere, but compute precisions sufficient for the operands in order to guarantee a requested approximation error at a dag node. Thus, even for the evaluation of a single value, we do not use interval arithmetic with uniform precision in lazy adaptive evaluation with expression dags. Finally, there is no verification by exact arithmetic, i.e., exact computation in the usual sense. Verification is done by constructive zero separation bounds. Of course, this gives us the correct sign, so in this sense it is an exact computation as well. Owing to their underlying iterative approach, interval methods with correct zero rewriting throw away a lot of knowledge already gained in previous iterations, since we redo the computation with higher precision and hence more expensive bigfloat arithmetic, no matter whether this higher precision is necessary or not. With lazy adaptive evaluation with expression-dags, iteration is always local to the sign computation and never affects re-evaluation on a global basis.

The idea of iterative trial-and-error computation is present in controlled perturbation as well. There, current computation is stopped whenever floating-point arithmetic does not suffice to verify non-degeneracy of the current perturbed input. Then computation is re-started with a larger perturbation. Usually, the precision of the floating-point arithmetic is not increased, since software bigfloats are much more expensive than hardware-supported floats.

4 Experiments

In [33], Shirayanagi and Sekigawa use planar convex hull computation to illustrate the use of interval methods with zero rewriting and so do we. Shirayanagi and Sekigawa use a computer algebra system, more precisely, maple 12, to implement their code. While a computer algebra system provides a perfect infrastructure for exact computation, implementation in C++, the programming language used for exact geometric computation in the software libraries CGAL [5] and LEDA [16], is somewhat more challenging. Since the infrastructure provided by CGAL and LEDA does not supply an exact arithmetic in the strong sense for real algebraic numbers, we use a symbolic representation and exact decision evaluation via leda::real in the verification part of zero rewriting. This works for expressions involving radicals.

We implement the interval method with symbol list and correct zero rewriting as described in Sect. 3. Additionally, we implement a variant with lazy zero rewriting, which does not apply zero rewriting at construction time, but defers zero rewriting to decision steps via sign computations. Initially, we used CGAL’s Gmpfi class for bigfloat interval arithmetic, which is based on mpfi, a multiple precision interval arithmetic library based on mpfr [21]. Since Gmpfi unfortunately does not allow us to limit the precision to less than 53 bits, we now use leda_bigfloat_interval, another CGAL class, which couples LEDA’s bigfloat number type with boost::numeric::interval from Boost [2]. LEDA’s bigfloat number type allows us to limit the precision to less than 53 bits.

We maintain a global symbol list to avoid storing redundant information. This allows us to encapsulate all arithmetic and zero rewriting in a C++ number type and to use this number type together with CGAL’s geometric algorithms. Together with a list of exact operands, the symbol list allows us to re-compute a value using another exact number type whenever zero rewriting takes place. We use C++ exception handling to interrupt computation whenever verification of zero rewriting fails. If this happens, we increase the bigfloat precision and restart the computation after clearing the exact operand and symbol lists. Since the constants 0 and 1 arise frequently during geometric computations with CGAL, we store and reuse symbols for these constants at the beginning of the symbol list, thereby avoiding further blow-up of the symbol list. Remember that a symbol list never shrinks during an iteration with fixed precision.

At first, we consider the convex hull experiments from [33] in our C++-based framework. Note that Shirayanagi and Sekigawa use decimal arithmetic and talk about decimal places when referring to precision whereas we use binary places. They consider test data in several categories, cf. [33]:

Example 1. :

5000 points with coordinates (x, y), where x and y are randomly generated integers satisfying \(0 \le x,y\le 200\).

Example 2. :

5000 points with coordinates (x, y), where x and y are randomly generated integers satisfying \(-200 \le x,y\le 200\) and \(x^2 + y^2 \le 200^2\).

Example 3. :

5000 points with coordinates (x, y), where x and y are randomly generated integers satisfying \(0 \le x,y\le 400\), \(x^2 + y^2 \le 400^2\), and \(y^2 \le 3x^2\).

Example 4. :

The origin (0, 0) and 4999 points with coordinates (x, y), where x and y are randomly generated integers satisfying \(1 \le x,y\le 6000\), and \(\frac{9}{10} \le \frac{x}{y} \le 1\).

We use CGAL’s random point generators to create the test data accordingly and use CGAL’s default convex hull algorithm, which, in contrast to the maple code used in [33], avoids division operations. Therefore we can use arbitrary precision integers like CGAL’s Gmpz or leda::integer in the verification step of zero rewriting. Since the integers generated in the categories above are fairly small, division-free computation with double precision always gives correct integer results. In order to observe a dependence on precision, we have to use LEDA’s bigfloats with binary precision limited to less than 53 bits. Note that such bigfloat computation with lower precision is somewhat more expensive than bigfloat computation with the default precision 53. Besides the two variants described above, we implemented a version without a symbol list, analogous to the maple code made available for convex hull computation by Sekigawa. This approach defers zero rewriting to decision steps as well. In contrast to the two other variants, this version is not wrapped in a number type, but implemented via CGAL’s traits concept for planar convex hull computation.

Since running time depends on the precision we start with, we measure and report the running time of the last successful iteration of convex hull computation only. The measured time includes the cost of conversion from int to our number type wrapping intervals with symbols, see Table 1.

Table 1. Convex hull with integer points for four random data sets in example classes 1 to 4. Interval method with correct zero rewriting is based on leda_bigfloat_intervals with verification of zero rewritings via exact integer arithmetic (leda::integer). Lazy adaptive evaluation is leda::real. Running times are for the last successful iteration only.

In a second set of examples, Shirayanagi and Sekigawa [33] use irrational coordinates. Corresponding to example i above, there is an example \(i+4\) where, instead of coordinates (x, y), points now have coordinates \((\mathrm {sign}(x)\cdot \sqrt{|x|}, \mathrm {sign}(y)\cdot \sqrt{|y|})\), where x and y are generated as described above for example i:

Example 5. :

5000 points with coordinates \((\sqrt{x},\sqrt{y})\), where x and y are randomly generated integers satisfying \(0 \le x,y\le 200\).

Example 6. :

5000 points with coordinates \((\mathrm {sign}(x)\cdot \sqrt{|x|}, \mathrm {sign}(y)\cdot \sqrt{|y|})\), where x and y are randomly generated integers satisfying \(-200 \le x,y\le 200\) and \(x^2 + y^2 \le 200^2\).

Example 7. :

5000 points with coordinates \((\sqrt{x},\sqrt{y})\), where x and y are randomly generated integers satisfying \(0 \le x,y\le 400\), \(x^2 + y^2 \le 400^2\), and \(y^2 \le 3x^2\).

Example 8. :

The origin (0, 0) and 4999 points with coordinates \((\sqrt{x},\sqrt{y})\), where x and y are randomly generated integers satisfying \(1 \le x,y\le 6000\), and \(\frac{9}{10} \le \frac{x}{y} \le 1\).

For example classes 5 to 8, it was no longer obvious how to verify zero rewritings, since, in contrast to computer algebra systems, we did not have exact arithmetic in the strong sense for such real algebraic numbers at hand. We use an exact decision number type for real algebraic numbers, namely leda::real, which wraps lazy adaptive evaluation with expression dags. Results of the experiments for examples 5 to 8 are shown in Table 2.

Table 2. Convex hull with radical coordinates for four random data sets in example classes 5 to 8. Interval method with correct zero rewriting is based on leda_bigfloat_intervals with verification via leda::real. Lazy adaptive evaluation is leda::real. Running times are for the last successful iteration only.

While Shirayanagi and Sekigawa observe a huge difference in running time between their maple-based versions with and without symbol lists, the running times of all our C++-based versions are roughly of the same order of magnitude. The gain of the variant without symbol lists is evident, but much smaller.

Planar convex hull computation is a selective geometric problem of very low computational depth. For such problems, floating-point filters are very effective for random input data. Moreover, efficient methods based on error-free transformation techniques and others are known for exact geometric computing of planar convex hulls. These methods are much more efficient than arbitrary precision integer arithmetic and hence much more efficient than (our implementation of) interval-symbol methods with correct zero rewriting. Therefore we consider cascaded geometric computations as well. Such cascaded computations are numerically more demanding, and since we no longer have access to exact input data in the geometric predicates of later stages, many of the techniques applicable to planar convex hull computation cannot be used directly anymore.

Given a set of line segments, we compute the convex hull of their intersection points. So the coordinates of the points whose convex hull we are interested in are not part of the input, but numerical values constructed during the computation. In cascaded computations, we have to record computation history somehow in order to enable verification of zero rewriting; thus, in contrast to the previous examples, symbol lists are not dispensable anymore. We use CGAL’s geometric object generators to create the input segments.
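For illustration, such intersection points can be computed exactly over the rationals by Cramer’s rule (a Python sketch with our own function names; the C++ implementation relies on CGAL’s kernel instead):

```python
from fractions import Fraction as F

def line_through(p, q):
    """Coefficients (a, b, c) with a*x + b*y = c for the line through p and q."""
    (px, py), (qx, qy) = p, q
    a, b = F(qy) - F(py), F(px) - F(qx)
    return a, b, a * F(px) + b * F(py)

def intersection(p1, q1, p2, q2):
    """Exact intersection point of the two supporting lines (Cramer's rule);
    returns None if the lines are parallel."""
    a1, b1, c1 = line_through(p1, q1)
    a2, b2, c2 = line_through(p2, q2)
    det = a1 * b2 - a2 * b1
    if det == 0:
        return None
    return (c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det

print(intersection((0, 0), (2, 2), (0, 2), (2, 0)))  # x = y = 1
```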

Example 9. :

300 segments whose endpoints are random points with double coordinates (almost) on a circle of radius 250, cf. Fig. 2(a).

Example 10. :

150 pairwise disjoint segments with endpoints on vertical line segments with integral x-coordinate and double y-coordinates and 150 disjoint segments with endpoints on two horizontal lines with integral y-coordinates and double x-coordinates, cf. Fig. 2(b).

Fig. 2.

Segments generated using CGAL’s sample code for geometric object generators. (a) 300 segments with endpoints with double coordinates (almost) on circle of radius 250 centered at the origin. (b) 300 segments with endpoints on two vertical and two horizontal segments.

Table 3. Results for examples 9 and 10: computing convex hull of intersection points of segments as shown in Fig. 2.

Table 3 shows results for examples 9 and 10. In example 9, precision 31 was sufficient, 1283 zero rewritings took place, and the running time of the iteration with precision 31 was 1.78 s. Exact computation with leda::rational took 2.59 s, which means that the interval-symbol method with zero rewriting can indeed save computation time with respect to computation with exact arithmetic if we start at a precision close to sufficient. However, using leda::real, the running time was only 0.31 s. In example 10, the computation of the convex hull of the intersection points between a vertical fan of segments and a horizontal fan of segments, we get many collinear points on the four convex hull edges. In this example, precision 61 suffices, 748 zero rewritings occurred, and the running time was 8.97 s. However, both leda::rational and leda::real are significantly faster.

We close with a remark on the number of zero rewritings. While there are no zero rewritings for the data sets in examples 1 to 3, there are many zero rewritings in examples 5 to 7. Note that we count zero rewritings in the last iteration only, so all these zero rewritings are correct. In all these data sets, the number of points we generate is larger than the number of different integer coordinates we allow. So the points are not in general position. The resulting degeneracies cause correct zero rewritings. These are the zero rewritings showing up in examples 5 to 7. They do not show up in examples 1 to 3, because the precision of the bigfloat arithmetic suffices to perform the integer arithmetic exactly, i.e., we get singleton intervals, in particular zero intervals, and there is no need for zero rewriting. Thus, in examples 1 to 3, the bigfloat interval arithmetic already verifies degeneracies, whereas in examples 5 to 7, we have inexact approximations due to the square root operations. Hence, in examples 5 to 7, interval arithmetic does not deliver zero intervals for the point coordinates, and coordinate degeneracies cause zero rewritings.

The code we use in our experiments is made available at http://wwwisg.cs.uni-magdeburg.de/ag/ISCZECG. It is based on CGAL 4.10 and LEDA 6.5.

5 Conclusions

Without verification of zero rewriting, interval methods suffer from the same problems as epsilon tweaking, since many geometric algorithms are not robust and inconsistencies can still arise. After all, this non-robustness of geometric algorithms is the motivation for the exact geometric computation paradigm.

With correct zero rewriting, interval methods work somewhat like floating-point filters with bigfloat arithmetic. If a zero-containing interval is detected, we consult exact computation. However, floating-point filters are lazy: exact verification of the sign of a value takes place only if the sign is requested, not immediately upon creation. More importantly, while an incorrect zero rewriting causes a restart from scratch with increased bigfloat precision, a floating-point filter failure just triggers an exact sign computation, and the overall computation continues based on a decision with the exact sign. There is no restart of the overall algorithm.

Compared to lazy adaptive evaluation with expression dags, interval-symbol methods with correct zero rewriting have a severe performance handicap. With the present interval methods with correct zero rewriting, all computations are performed with the same precision: the maximum of the minimum precisions required to separate from zero, where the maximum is taken over all numerical values arising during computation. With lazy adaptive evaluation, we use only the precision required for the local sign computation. This precision is in principle independent of precisions required elsewhere. Degeneracies and near-degeneracies often require higher precision than configurations in general position. With interval methods with correct zero rewriting, such demanding degeneracies and near-degeneracies determine the precision used for all interval computations, including those for less demanding general-position configurations. Lazy adaptive evaluation, however, always adapts the precision to the situation under investigation and thus uses less precision for general-position configurations whenever possible. Furthermore, recording computation history in symbol lists suffers from list blow-up. Expression-dag based exact geometric computation uses reference counting to detect when a numerical value is no longer used and frees the corresponding memory. Our implementations show that the interval-symbol method with correct zero rewriting is manageable in C++ as well. However, in view of the performance issues discussed above, the current approach is most likely not competitive for exact geometric computing, at least for exact geometric computing with algebraic numbers of small algebraic degree.