what are you talking about? When did I say I want to make arithmetic slower? And how is that globally agreed upon as a standard? In math, when you get a NaN, do mathematicians continue to use it as a variable, or do they realize they made a mistake? Think about it. In all math and science it's actually globally agreed upon that a NaN is not a value you can reuse in some other function.
> And how is that globally agreed upon as a standard?
Because all modern processors and mainstream languages implement their floating point calculations as IEEE 754.
> when did i say I want to make arithmetic slower?
When you said you wanted to introduce exceptions whenever NaN is encountered.
Your processor has instructions that add/subtract/divide/multiply floating point numbers according to the IEEE 754 standard. What you propose is to then check in the implementation of JS after each instruction to see if the result is NaN, which is going to be a speed reduction of at least 2-3x, though I would expect it to actually be higher than that because branching can get expensive. (Making a note to go try it and benchmark.)
Then, after doing this check, you want to have JS throw an exception. That only slows down the uncommon case, so it's not much of an issue in itself, but programs where NaNs are valid values would then have to catch these exceptions and resume normal execution flow.
The result is that there is a happy path slowdown of arithmetic operations by at least 2-3x, to gain an arguable advantage in debugging time in the unhappy path.
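For concreteness, the proposal amounts to something like the hypothetical wrapper below (a sketch in Go, not how any JS engine actually works): the hardware does the IEEE 754 operation as usual, and then every operation pays for an extra compare-and-branch, with a panic standing in for the proposed exception:

```go
package main

import (
	"fmt"
	"math"
)

// checkedDiv is a hypothetical sketch of the proposal: do the IEEE 754
// operation the hardware provides, then pay for an extra check and branch.
func checkedDiv(a, b float64) float64 {
	r := a / b
	if math.IsNaN(r) { // extra compare+branch on every single operation
		panic("arithmetic produced NaN")
	}
	return r
}

func main() {
	fmt.Println(checkedDiv(1.0, 2.0)) // 0.5 -- the happy path still pays for the check

	defer func() {
		fmt.Println("recovered:", recover()) // unhappy path: 0/0 is NaN
	}()
	checkedDiv(0.0, 0.0)
}
```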
Throwing away compatibility with other languages and working against the processor are not goals I'd shoot for, and wrapping basic arithmetic in try/catch blocks doesn't sound like a very good payout for doing so.
>Throwing away compatibility with other languages and working against the processor are not goals I'd shoot for, and wrapping basic arithmetic in try/catch blocks doesn't sound like a very good payout for doing so.
Almost every popular language throws an exception when you divide by zero.
>Your processor has instructions that add/subtract/divide/multiply floating point numbers according to the IEEE 754 standard. What you propose is to then check in the implementation of JS after each instruction to see if the result is NaN, which is going to be a speed reduction of at least 2-3x, though I would expect it to actually be higher than that because branching can get expensive. (Making a note to go try it and benchmark)
Try it, I'd like to see the results. I'm not too familiar with the processor implementation, but something tells me that if every system-level language implements exceptions for division by zero, then the cost must be minimal. Also, how would it be a 3x slowdown? At most it's just one instruction.
Interestingly, in some languages integer division by zero throws an exception while floating-point division by zero returns an infinity or NaN value.
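Go is one concrete example of that split. With float64 variables you get the IEEE 754 result, while integer division by a zero-valued variable panics at run time (a constant zero divisor is rejected at compile time instead):

```go
package main

import "fmt"

func main() {
	x, y := 1.0, 0.0
	fmt.Println(x / y) // +Inf: float64 division follows IEEE 754

	defer func() {
		fmt.Println("recovered:", recover()) // integer division panics instead
	}()
	a, b := 1, 0
	fmt.Println(a / b) // runtime panic: integer divide by zero
}
```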
If this behavior was propagated up from processor design, I'd say the processors made a bad choice. Why is this behavior specific to floating point? Why is the behavior for integers different? Either way, when I say poorly designed, I mean poorly designed in terms of usability, not speed. Clearly, JavaScript wasn't initially designed for speed.
>Throwing away compatibility with other languages and working against the processor are not goals I'd shoot for, and wrapping basic arithmetic in try/catch blocks doesn't sound like a very good payout for doing so.
Like I said, the purpose of a division-by-zero exception is not to be caught. It's to prevent a buggy program from continuing to run. The only place a division-by-zero operation can legitimately occur is when the zero arrives via IO. In that case it's better to sanitize the input or detect the zero rather than catch an exception.
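That sanitize-the-input argument can be sketched as ordinary validation at the IO boundary rather than exception handling. `safeDivide` below is a hypothetical helper for illustration, not a standard API:

```go
package main

import (
	"errors"
	"fmt"
)

// safeDivide rejects a zero divisor up front, e.g. right after reading
// it from IO, instead of relying on a thrown exception downstream.
func safeDivide(a, b float64) (float64, error) {
	if b == 0 {
		return 0, errors.New("divisor must be nonzero")
	}
	return a / b, nil
}

func main() {
	if q, err := safeDivide(10, 4); err == nil {
		fmt.Println(q) // 2.5
	}
	if _, err := safeDivide(1, 0); err != nil {
		fmt.Println("rejected:", err) // bad input caught at the boundary
	}
}
```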
> Almost every popular language throws an exception when you divide by zero.
1.) Dividing a nonzero float by zero returns Inf, not NaN (0.0/0.0 is the case that gives NaN).
2.) I just tried C++, Java, C#, Go[1], and the champion of safety: Rust. All of them evaluate to Infinity when dividing a double by 0.0. No exceptions to be found.
> Try it, I'd like to see the results. I'm not too familiar with the processor implementation, but something tells me that if every system level language implements exceptions for division by zero then the cost must be minimal.
But they don't. Not every system-level language even has exceptions, and the ones that do still return Inf, as stated above.
> At most it's just one instruction.
Branches can be expensive if the branch predictor chooses the wrong path and forces a pipeline flush. You should be able to arrange the branch so that the happy path is the predicted one, which would make a misprediction less likely, but I'm not an expert there.
> The only place a division by zero operation can legally occur is if the zero arrives via IO. In that case it's better to sanitize the input or detect the zero rather than catch an exception.
In many applications this is true. Not all.
[1]: You do need to make sure Go treats both operands as doubles; the easiest way to be sure is to declare x and y as float64 and then divide them afterward. But when actually dividing a float64 by another float64, it does evaluate to Inf as well.
> All of them evaluate to Infinity when dividing a double by 0.0. No exceptions to be found.
Well, Rust doesn't have exceptions, so that couldn't happen anyway, but yeah. To your parent's point: these languages follow IEEE 754, which, as far as I know, specifies this behavior.
Yeah, floats and ints are just fundamentally different, especially when you're at the level of caring about how hardware deals with things.
TIL about Go:
> Numeric constants represent exact values of arbitrary precision and do not overflow. Consequently, there are no constants denoting the IEEE-754 negative zero, infinity, and not-a-number values.