
This is one of the reasons I would be happier if there were more emphasis in real math/geometry algorithms on "range" processing, i.e. treating each "number" as a scaled "range" instead and figuring out how to combine these ranges at every step to get a stable solution. Most often I see programmers just bashing out analytic formulas as if they were computing something precisely, and then being surprised that the results are way off. Imagine computing curve intersections by a combination of analytical and iterative methods and then figuring out whether such an intersection is unique or whether it is really some existing endpoint...



Interval arithmetic [1] has been around since the 1950s, I believe, but for some reason it never caught on.

In a nutshell, interval arithmetic stores each imprecise number as a lower bound and an upper bound. On each calculation, the lower bound is rounded down and the upper bound is rounded up, so the true result is always enclosed. The existing libraries are well developed, and the performance hit is often a factor surprisingly close to 2.

[1] http://en.m.wikipedia.org/wiki/Interval_arithmetic
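
To make that concrete, here is a minimal sketch in C of what such a library does internally. The Interval type and function names are mine, not from any particular library, and a real implementation also handles mixed-sign multiplication, division, infinities and NaNs. The key trick is switching the FPU rounding direction with fesetround() from <fenv.h>; build with something like "cc -frounding-math interval.c -lm" so the compiler respects the mode changes.

    #include <fenv.h>
    #include <stdio.h>

    /* Each imprecise value is stored as [lo, hi]; every operation rounds
       the lower bound down and the upper bound up, so the true result is
       always enclosed by the computed interval. */
    typedef struct { double lo, hi; } Interval;

    static Interval iv_add(Interval a, Interval b) {
        Interval r;
        fesetround(FE_DOWNWARD);  r.lo = a.lo + b.lo;
        fesetround(FE_UPWARD);    r.hi = a.hi + b.hi;
        fesetround(FE_TONEAREST);
        return r;
    }

    static Interval iv_mul(Interval a, Interval b) {
        /* Simplified: assumes non-negative intervals. A full version takes
           the min/max over all four endpoint products. */
        Interval r;
        fesetround(FE_DOWNWARD);  r.lo = a.lo * b.lo;
        fesetround(FE_UPWARD);    r.hi = a.hi * b.hi;
        fesetround(FE_TONEAREST);
        return r;
    }

    int main(void) {
        Interval x = { 0.1, 0.1 };   /* a degenerate (point) interval */
        Interval s = { 0.0, 0.0 };
        for (int i = 0; i < 10; i++) /* the enclosure widens as rounding accumulates */
            s = iv_add(s, x);
        printf("sum in     [%.17g, %.17g]\n", s.lo, s.hi);

        Interval p = iv_mul(s, (Interval){ 3.0, 3.0 });
        printf("product in [%.17g, %.17g]\n", p.lo, p.hi);
        return 0;
    }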


That's because the intervals tend to just keep growing. Without some step that shrinks the interval or otherwise keeps it bounded, cumulative interval arithmetic tends to produce bounds too wide to be useful.

Using the rounding mode to calculate once with rounding down, once with standard (round-to-nearest) rounding, and once with rounding up will give a much better indication of whether or not an algorithm is sensitive to roundoff error.
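
A rough sketch of that trick in C, assuming a platform where fesetround() supports all three directions. The naive_sum() function is just a made-up stand-in for whatever algorithm is being tested; build with something like "cc -frounding-math test.c -lm".

    #include <fenv.h>
    #include <stdio.h>

    /* Stand-in for the algorithm under test: a long accumulation whose
       rounding error grows with n. */
    static double naive_sum(long n) {
        double s = 0.0;
        for (long i = 0; i < n; i++)
            s += 0.1;
        return s;
    }

    int main(void) {
        const int   modes[] = { FE_DOWNWARD, FE_TONEAREST, FE_UPWARD };
        const char *names[] = { "down", "nearest", "up" };
        for (int m = 0; m < 3; m++) {
            fesetround(modes[m]);
            printf("%-8s %.17g\n", names[m], naive_sum(10 * 1000 * 1000));
        }
        fesetround(FE_TONEAREST);
        /* A large spread between the three results is a strong hint that
           the algorithm is sensitive to roundoff error. */
        return 0;
    }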


The problem is that the range grows with each operation. E.g. the range of x*y is wider than the range of x, so you need to keep tracking those growing ranges. But the ranges themselves will be inaccurate if stored as floating-point numbers.

Or just store them as fractions if you really care about the values not getting corrupted.
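
For example, a toy exact-rational sketch in C (the Rat type here is hypothetical; a real program would need overflow checks or a bignum library such as GMP, and this only helps as long as every input is itself exactly representable):

    #include <stdint.h>
    #include <stdio.h>

    /* Values stored as numerator/denominator in lowest terms never pick up
       rounding error -- at the cost of ever-growing integers. */
    typedef struct { int64_t num, den; } Rat;

    static int64_t gcd64(int64_t a, int64_t b) {
        if (a < 0) a = -a;
        if (b < 0) b = -b;
        while (b) { int64_t t = a % b; a = b; b = t; }
        return a;
    }

    static Rat rat_norm(Rat r) {
        int64_t g = gcd64(r.num, r.den);
        if (g) { r.num /= g; r.den /= g; }
        if (r.den < 0) { r.num = -r.num; r.den = -r.den; }
        return r;
    }

    static Rat rat_add(Rat a, Rat b) {
        return rat_norm((Rat){ a.num * b.den + b.num * a.den, a.den * b.den });
    }

    static Rat rat_mul(Rat a, Rat b) {
        return rat_norm((Rat){ a.num * b.num, a.den * b.den });
    }

    int main(void) {
        Rat x = { 1, 10 }, y = { 2, 10 };
        Rat s = rat_add(x, y);               /* exactly 3/10 */
        Rat p = rat_mul(s, (Rat){ 7, 3 });   /* exactly 7/10 */
        printf("%lld/%lld and %lld/%lld\n",
               (long long)s.num, (long long)s.den,
               (long long)p.num, (long long)p.den);
        return 0;
    }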



