
Related: How do you compute the midpoint of an interval? [1]. PDF [2]

[1] http://dl.acm.org/citation.cfm?id=2493882 [2] https://hal.archives-ouvertes.fr/hal-00576641v1/document

This really should be the top comment of the thread. I've been reading the paper slowly since yesterday.

TL;DR: the best way to compute a midpoint is:

    round_to_nearest_even((a - a/2) + b/2)
Why?

It can't be (a+b)/2 because (a+b) can overflow.
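For instance (a quick C illustration of my own, not from the paper; it assumes IEEE 754 doubles and the default rounding mode):

    #include <float.h>
    #include <stdio.h>

    int main(void) {
        double a = DBL_MAX, b = DBL_MAX;
        /* a + b overflows to +inf, so the "midpoint" of [DBL_MAX, DBL_MAX]
           comes out as +inf instead of DBL_MAX */
        printf("%g\n", (a + b) / 2);   /* prints: inf */
        return 0;
    }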

It can't be (a/2 + b/2) because a/2 or b/2 can underflow. However, this works if you explicitly handle the case when a == b.
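Again a quick illustration of my own (DBL_TRUE_MIN is C11's name for the smallest subnormal double):

    #include <float.h>
    #include <stdio.h>

    int main(void) {
        double a = DBL_TRUE_MIN, b = DBL_TRUE_MIN;   /* smallest subnormal */
        /* a/2 and b/2 both round to 0 under round-to-nearest-even, so the
           computed "midpoint" of [a, a] is 0, which is outside the interval */
        printf("%g\n", a / 2 + b / 2);   /* prints: 0 */
        return 0;
    }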

It can't be (a + (b/2 - a/2)) for similar but more complicated reasons.

However, if you switch things around, ((a - a/2) + b/2) works great without needing the a == b special case. (Though a second special case still has to be handled: a == -b; see the sketch below.)

Regarding the rounding method: you want it to be symmetric, and you don't want nearby results to all be biased in the same direction (as would happen if you always rounded towards zero). Hence round-to-nearest, with ties broken by taking the 'even' floating point number, i.e. the one whose significand ends in a zero bit.
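Putting it together, here's a minimal C sketch of the recipe above (the function name midpoint is mine, not the paper's; it relies on the default IEEE 754 round-to-nearest-even mode, so there's no explicit rounding call):

    /* Midpoint of [a, b] following the recipe above: handle a == -b
       explicitly (this also covers [-inf, +inf]), otherwise compute
       (a - a/2) + b/2 under the default round-to-nearest-even mode. */
    double midpoint(double a, double b) {
        if (a == -b)
            return 0.0;
        return (a - a / 2) + b / 2;
    }

One reason the a == -b check matters: for [-inf, +inf], a - a/2 evaluates to (-inf) - (-inf), which is NaN.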

---

In the OP's case, I think the paper would argue that the midpoint computation was working just fine, and that the surrounding code wasn't considering all the possibilities (symmetrically). But then the paper's discussion of infinities suggests that no matter what you do, there are issues for the surrounding code to consider. Which in turn suggests that floating point is absolutely busted as an abstraction.
