
One of the benefits of intervals is that you can ensure your results are correct irrespective of floating-point errors, if you carefully round all computations in the correct direction.

I don't think you can ensure that with gradients though: if f and f' are computed in machine arithmetic, cancellation errors might pile up.
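To make the first point concrete, here's a minimal sketch of what "rounding in the correct direction" looks like. The Interval class is made up for illustration, and I'm faking directed rounding by widening each computed bound one ulp outward with math.nextafter; a serious implementation would switch the hardware rounding mode instead:

    import math

    class Interval:
        """Closed interval [lo, hi]; a made-up class, just for illustration."""
        def __init__(self, lo, hi=None):
            self.lo = lo
            self.hi = lo if hi is None else hi

        def __add__(self, other):
            # Push the lower bound down and the upper bound up by one ulp,
            # so the enclosure survives the rounding of + itself.
            return Interval(math.nextafter(self.lo + other.lo, -math.inf),
                            math.nextafter(self.hi + other.hi, math.inf))

        def __mul__(self, other):
            ps = (self.lo * other.lo, self.lo * other.hi,
                  self.hi * other.lo, self.hi * other.hi)
            return Interval(math.nextafter(min(ps), -math.inf),
                            math.nextafter(max(ps), math.inf))

        def __repr__(self):
            return f"[{self.lo!r}, {self.hi!r}]"

    # Enclose the real number 0.1 (which no binary float represents exactly):
    a = Interval(math.nextafter(0.1, -math.inf), math.nextafter(0.1, math.inf))
    print(a + a + a)   # guaranteed to contain the real number 0.3

The individual bounds are still "wrong" in the usual floating-point sense; the guarantee is only that the true real-valued result always lies between them.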



It's true; this is a different application of interval arithmetic, not the usual application, which is, as you say, to avoid numerical stability issues. By and large, graphics researchers don't give a shit about numerical stability issues. They're the Ed Woods of numerical computation.


The point of interval arithmetic isn't really to deal with numerical stability, per se... it's to make it possible to do rigorous and validated computations on a computer. Numerical stability is a different, somewhat orthogonal concept. I could carry out a numerically unstable computation using interval arithmetic, but the intervals wouldn't magically make the computation stable; they would just give you error bounds (which may indeed be quite loose and unhelpful if the algorithm is unstable).
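To illustrate the distinction: both formulas below compute (1 - cos x)/x^2, which tends to 1/2 as x -> 0. The first suffers catastrophic cancellation near 0, and evaluating it in interval arithmetic (here with mpmath's iv context, if I'm remembering its API right) doesn't repair the instability; it just reports it as a uselessly wide enclosure:

    from mpmath import iv

    iv.prec = 53                 # double-precision-like intervals
    x = iv.mpf(1e-8)             # a degenerate interval at x = 1e-8

    naive  = (1 - iv.cos(x)) / x**2        # cancellation: correct but very wide
    stable = 2 * iv.sin(x / 2)**2 / x**2   # algebraically identical, tight

    print(naive)    # wide enclosure, roughly spanning a few units around 0.5
    print(stable)   # tight enclosure close to [0.5, 0.5]

Both enclosures are rigorous; only the second one is useful.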


From my point of view, the problem with numerical instability is that your program computes the wrong answer, and, worse, gives you no clue that it's wrong. Interval arithmetic and other self-validating methods do fix that problem: the answer may not be useful, but at least it's not wrong, and they tell you there has been trouble.


Yeah, I'd generally agree with that.

One thing to bear in mind, though: there are numerical algorithms which are stable and unimpeachably useful but which are unstable in interval arithmetic owing to the natural interval inclusion behaving poorly. A great example of this is the Clenshaw recursion for evaluating Chebyshev series. It is the method of choice for evaluating a Chebyshev series at a single point, but the corresponding natural interval inclusion is actually unstable itself (the width tends to blow up exponentially fast in the degree of the series, regardless of the coefficients).
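A quick way to see this numerically (a sketch, with one-ulp outward rounding via math.nextafter standing in for proper directed rounding; all the helper names are made up):

    import math, random

    def widen(lo, hi):   # outward rounding by one ulp
        return (math.nextafter(lo, -math.inf), math.nextafter(hi, math.inf))

    def iadd(a, b): return widen(a[0] + b[0], a[1] + b[1])
    def isub(a, b): return widen(a[0] - b[1], a[1] - b[0])
    def iscale(s, a):    # scalar times interval
        lo, hi = (s * a[0], s * a[1]) if s >= 0 else (s * a[1], s * a[0])
        return widen(lo, hi)

    def clenshaw_float(coeffs, x):
        # Plain floating-point Clenshaw for sum_k coeffs[k] * T_k(x): stable.
        b1 = b2 = 0.0
        for c in reversed(coeffs[1:]):
            b1, b2 = c + 2 * x * b1 - b2, b1
        return coeffs[0] + x * b1 - b2

    def clenshaw_interval(coeffs, x):
        # The natural interval inclusion of the same recursion.  The values
        # stay O(1), but the widths obey w_k ~ |2x| w_{k+1} + w_{k+2}, which
        # grows exponentially in the degree for x near +/-1.
        b1 = b2 = (0.0, 0.0)
        for c in reversed(coeffs[1:]):
            b1, b2 = isub(iadd((c, c), iscale(2 * x, b1)), b2), b1
        return isub(iadd((coeffs[0], coeffs[0]), iscale(x, b1)), b2)

    coeffs = [random.uniform(-1, 1) for _ in range(100)]
    print(clenshaw_float(coeffs, 0.99))           # perfectly reasonable value
    lo, hi = clenshaw_interval(coeffs, 0.99)
    print("enclosure width:", hi - lo)            # astronomically wide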

I tend to view "classical numerical methods" and interval arithmetic as somewhat different camps. There's definitely a significant overlap and they can both profit from each other, but you need to put on a different hat when you're doing each. Sometimes interval arithmetic is just too expensive, in which case you need to select a good, stable numerical algorithm. On the other hand, if you're doing interval arithmetic because you must (you need a certified, correct answer), then you will often need to use a completely different set of algorithms and spend some time fussing over what the most appropriate interval inclusion is.

Again, another good example here relates to Chebyshev series. If you want to find all of the roots of an analytic function on an interval, expand it in a Chebyshev series and solve the colleague matrix eigenvalue problem. The result could be subtly wrong due to numerical roundoff, and you'd have a hard time recovering from this position; but this method is undeniably extremely useful. On the other hand, if you want to compute rigorous enclosures of all zeros of that same analytic function, you'd need to develop a suitable interval inclusion for it, and then run some type of interval rootfinding algorithm based on a combination of interval contractors (Krawczyk) and subdivision. Basically, a branch and bound algorithm.
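For the curious, the skeleton of such a branch-and-bound rootfinder is quite compact. Below is a bare-bones 1D sketch for the toy problem f(x) = x^2 - 2, so the output should enclose +/-sqrt(2); the tolerances and function names are arbitrary, and I've dropped the outward rounding that a genuinely certified implementation needs:

    def iadd(p, q): return (p[0] + q[0], p[1] + q[1])
    def isub(p, q): return (p[0] - q[1], p[1] - q[0])
    def imul(p, q):
        xs = (p[0]*q[0], p[0]*q[1], p[1]*q[0], p[1]*q[1])
        return (min(xs), max(xs))

    def F(X):  return isub(imul(X, X), (2.0, 2.0))   # inclusion of x^2 - 2
    def dF(X): return imul((2.0, 2.0), X)            # inclusion of f'(x) = 2x

    def krawczyk(X, m, y):
        # K(X) = m - y f(m) + (1 - y F'(X)) (X - m); if K(X) lies strictly
        # inside X, then X contains exactly one root.
        t = imul(isub((1.0, 1.0), imul((y, y), dF(X))), isub(X, (m, m)))
        return iadd(isub((m, m), imul((y, y), F((m, m)))), t)

    def roots(X, tol=1e-12, out=None):
        if out is None: out = []
        lo, hi = F(X)
        if lo > 0 or hi < 0:                  # 0 not in F(X): discard the box
            return out
        if X[1] - X[0] < tol:                 # tiny box: report the enclosure
            out.append(X)
            return out
        m = 0.5 * (X[0] + X[1])
        dlo, dhi = dF(X)
        if dlo + dhi == 0:                    # preconditioner undefined: bisect
            roots((X[0], m), tol, out); roots((m, X[1]), tol, out)
            return out
        K = krawczyk(X, m, 2.0 / (dlo + dhi))
        Y = (max(X[0], K[0]), min(X[1], K[1]))   # contract: X intersect K(X)
        if Y[0] > Y[1]:                       # empty intersection: no root here
            return out
        if Y[1] - Y[0] > 0.8 * (X[1] - X[0]): # weak contraction: subdivide
            mm = 0.5 * (Y[0] + Y[1])
            roots((Y[0], mm), tol, out); roots((mm, Y[1]), tol, out)
        else:
            roots(Y, tol, out)
        return out

    print(roots((-3.0, 3.0)))   # tight enclosures near -sqrt(2) and +sqrt(2)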


Sure, I can accept that.

Does modal interval analysis help with these cases? I still don't understand it, but there are a lot of things I don't understand in numerical methods.


Never heard of it. Got a reference? I can investigate.


>I don't think you can ensure that with gradients though: if f and f' are computed in machine arithmetic, cancellation errors might pile up.

Yes, you can. You need to do automatic differentiation with interval arithmetic. This gives you a mathematically correct enclosure of the derivative.

Always keep in mind with interval arithmetic that (-inf, inf) is also a mathematically correct result. Keeping intervals small is the real challenge with interval arithmetic, and it means that nearly any algorithm not specifically developed for it is likely to produce uselessly wide results.
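As a concrete sketch of what interval autodiff looks like: forward-mode AD where the value and the derivative of every intermediate are both intervals. Tuple intervals below, outward rounding omitted for brevity, and the Dual class and helpers are made up; a real implementation would round outward and support far more operations:

    def iadd(p, q): return (p[0] + q[0], p[1] + q[1])
    def imul(p, q):
        xs = (p[0]*q[0], p[0]*q[1], p[1]*q[0], p[1]*q[1])
        return (min(xs), max(xs))

    class Dual:
        def __init__(self, val, der):
            self.val, self.der = val, der      # both slots are intervals

        def __add__(self, other):
            return Dual(iadd(self.val, other.val), iadd(self.der, other.der))

        def __mul__(self, other):
            # Product rule, carried out entirely in interval arithmetic.
            return Dual(imul(self.val, other.val),
                        iadd(imul(self.der, other.val),
                             imul(self.val, other.der)))

    def const(c): return Dual((c, c), (0.0, 0.0))

    # f(x) = x^2 + 3x, so f'(x) = 2x + 3.
    X = Dual((1.0, 2.0), (1.0, 1.0))   # seed the derivative with 1
    fX = X * X + const(3.0) * X
    print(fX.val)   # encloses f([1,2])  = [4, 10]
    print(fX.der)   # encloses f'([1,2]) = [5, 7]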


I think perhaps this could be done in other ways that don't require interval arithmetic for autodiff, only that the gradient is computed conservatively; in other words, carrying the numerical error from f into f'.


But the only way we know how to make "conservative computations" is via interval arithmetic.


No? Or maybe I'm missing something. If the goal is to be able to bound the computation of f, you can:

1) compute f with interval arithmetic

2) compute f normally and f' with interval arithmetic

3) compute f rounding towards zero, compute f' from f rounded towards infinity, and round f' up (if f positive) or round f' down (if f negative).

In all 3 cases you can use what you computed to figure out bounds on f, (1) is direct, the other two need extra work.
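For what it's worth, (2) can be cashed out via the mean value form f(X) ⊆ f(m) + f'(X)·(X - m), where m is the midpoint of X. A rough sketch with made-up helper names (tuple intervals, no outward rounding; strictly speaking, the plain-float evaluation of f(m) needs its own small error enclosure, which I'm glossing over):

    def iadd(p, q): return (p[0] + q[0], p[1] + q[1])
    def imul(p, q):
        xs = (p[0]*q[0], p[0]*q[1], p[1]*q[0], p[1]*q[1])
        return (min(xs), max(xs))

    def f(x):  return x * x - 2.0 * x       # ordinary machine arithmetic
    def dF(X):                              # inclusion of f'(x) = 2x - 2
        return iadd(imul((2.0, 2.0), X), (-2.0, -2.0))

    def mean_value_bound(X):
        m = 0.5 * (X[0] + X[1])
        fm = f(m)                           # f computed "normally"
        return iadd((fm, fm), imul(dF(X), (X[0] - m, X[1] - m)))

    # Encloses f over [1, 1.5]; the true range is [-1, -0.75].
    print(mean_value_bound((1.0, 1.5)))     # prints (-1.1875, -0.6875)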


> You need to do automatic differentiation with interval arithmetic.

But that kinda defeats the point of replacing interval arithmetic with gradients.


This doesn't help if your computation for f is numerically unstable, unless you compute f with interval arithmetic too.



