
A reasonable number of these "wtf"s are just more examples of people failing to understand floating point. sigh


The question is who failed to understand floating point: The authors of the JS specification, the authors of the JS runtime, or the people writing the javascript that demonstrates the 'wtf'? Is the answer "all three"?


Here's a "wtf" from the site:

0.1 + 0.2 === 0.3 // false

I'd say that whoever posted that needs a lesson in how floating point works.


The real WTF is, why does a person writing some Javascript code need to understand floating point? What high-performance calculations are you doing in your Javascript that wouldn't be easier, more accurate, and not that much slower using fixed point math?
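For the common money-style case, here's a minimal sketch of that idea (my own example, assuming values are scaled to integer tenths, which doubles represent exactly up to 2^53):

  // Fixed-point sketch: work in integer tenths, so nothing gets rounded.
  var a = 1;   // 0.1 expressed in tenths
  var b = 2;   // 0.2 expressed in tenths
  var c = 3;   // 0.3 expressed in tenths
  console.log(a + b === c);   // true
  console.log((a + b) / 10);  // 0.3 -- converted back only for display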


With fixed point or arbitrary-precision math, you either need to (a) set the level of precision explicitly, which is just as bad since it still forces the user to understand the underlying model, or (b) have the language choose the level of precision heuristically, and accept even more unpredictable performance and rounding characteristics.
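To make that concrete with a hand-rolled fixed-point sketch (a hypothetical example; the scale is something the programmer has to pick):

  // Fixed-point with an explicitly chosen scale of 100 (two decimal places).
  var SCALE = 100;
  var oneThird = Math.round((1 * SCALE) / 3);  // 33 -- everything past 0.33 is discarded
  console.log(oneThird / SCALE);               // 0.33, not 0.3333...
  // Choosing SCALE = 1000 keeps one more digit; someone still has to choose.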

There's no language that lets you do real-world work with numbers like sqrt(2) or PI without understanding precision and representation to some degree.


JavaScript's brilliant solution is to just not have integers. At all. That's better, somehow?
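For anyone who hasn't hit it, the consequence is easy to show (all numbers are IEEE-754 doubles, so integer exactness ends at 2^53):

  console.log(9007199254740992 + 1 === 9007199254740992);  // true -- 2^53 + 1 isn't representable
  console.log(9007199254740992 + 2);                        // 9007199254740994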


No, I think JavaScript's "floats everywhere" is worse than most languages. I prefer languages like Python/Ruby that have one arbitrary-precision integer type and one floating point type, or languages like Haskell or Lisp that have a rich selection of floating/fixed/arbitrary/rational types with well-defined performance. But some (not all) of the WTFs here exist in all of those languages too. Eventually, programmers working with numbers really do need to learn how to use different numerical representations.


Could you please grace us with a code sample for all of these? I'm not intimately familiar with Haskell, Lisp or Ruby.

  >>> from decimal import *
  >>> Decimal('0.3') == Decimal('0.1') + Decimal('0.2')
  True
  >>> Decimal('NaN') == Decimal('NaN')
  False
  >>>


You only get that result in python by importing the decimal library. The standard behavior in python is (.1 + .2 == .3) == False

However, your code example makes the excellent point that python has such a library available, whereas Javascript does not. At least, not as easily as an import.
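For what it's worth, the usual JavaScript workaround is an epsilon comparison rather than a decimal type; a rough sketch (the tolerance is an arbitrary illustrative choice):

  // Compare with a small tolerance instead of exact equality.
  function nearlyEqual(x, y, eps) {
    return Math.abs(x - y) < (eps || 1e-9);
  }
  console.log(0.1 + 0.2 === 0.3);            // false
  console.log(nearlyEqual(0.1 + 0.2, 0.3));  // true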


The people writing the JavaScript that demonstrates the wtf.

I can see that some of the design decisions in IEEE-754, like NaN not being equal to NaN, might seem unintuitive, but it has nothing to do with JavaScript - the standard is implemented by numerous languages and hardware.
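For reference, the same behaviour reproduced directly in JavaScript (it falls straight out of IEEE-754, nothing JS-specific):

  console.log(NaN === NaN);  // false -- NaN compares unequal to everything, including itself
  console.log(isNaN(NaN));   // true -- the standard way to test for NaN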

That numbers are rounded to a limited precision should not even be a wtf if you think about it for a moment: otherwise a single use of PI would fill up all of the computer's memory.
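You can see the limited precision directly: neither 0.1 nor PI is stored exactly, only the nearest 64-bit double.

  console.log((0.1).toFixed(20));  // "0.10000000000000000555"
  console.log(0.1 + 0.2);          // 0.30000000000000004
  console.log(Math.PI);            // 3.141592653589793 -- a finite approximation of PI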



