Hacker News

I think the point the GP is making is that your calculator might have a function like

   calc_div(num: Number, den: Number) -> (Error | Number):
     if den == 0:
        return Error("division by zero")
     else:
        return num / den

Now this function pre-validates the data. But it might be used as part of a much larger system. Should that system be programmed to know that `den` should always be non-zero and pre-pre-validate accordingly? Or else should it leave "calc_div" to be the expert on the rules for division?

If you take the latter approach, then the caller has to have a way of accepting the error gracefully, as a normal thing that might happen. And thus we have a div0 that is an error rather than an exception.




Ah, but floating-point math is such fun! Here is how it might work...

den is a tiny non-zero value in an 80-bit register. It gets spilled to a 64-bit slot on the stack, rounding it to zero, while another copy of it remains in an 80-bit register. The 80-bit version is compared against zero, and is non-zero. The 64-bit version, which is zero, gets used for the division.

It is fully standards-compliant for a C compiler to do this. Some languages may specify otherwise, but often the behavior is unspecified or is implicitly the same as C.


For many use cases, it is a bad idea for a calculator program to use floating point rather than some more exact representation.

However, if you do use floating point, then the kind of dangers you point out make my point even stronger. You could conceivably embed the `calc_div` function in a larger system that knew about pre-validating for div0. But if you want to deal with all possible sources of FP weirdness when doing division, then you really need to concentrate it in the "division expert": i.e. have calc_div pre-validate all that stuff, and have its caller accept that errors are a normal result.




