
> Computers can't represent 0.1 (in floating point)

Of course they can. They just can't do it in binary [EDIT: base 2]. But they can certainly do it in (say) BCD [EDIT: base 10].
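To make this concrete, here is a sketch in Python (standing in for any language with a base-10 numeric type; the .NET example mentioned below would behave the same way): the binary double spelled 0.1 is not exactly 1/10, but a decimal representation holds it exactly.

```python
from decimal import Decimal
from fractions import Fraction

# The binary double written as 0.1 is really the nearest representable
# dyadic rational, not 1/10 itself:
print(Fraction(0.1) == Fraction(1, 10))      # False
print(0.1 * 3)                               # 0.30000000000000004

# A base-10 representation stores 0.1 exactly, so this arithmetic is exact:
print(Decimal("0.1") * 3)                    # 0.3
print(Decimal("0.1") * 3 == Decimal("0.3"))  # True
```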




.NET’s decimal datatype, for example, represents a decimal floating-point number.


Just as a total pedantipoint, they also do that in binary (hence the B in BCD, in that specific case).


I've updated my comment with appropriate pedantedits.


But BCD is not floating point (generally shorthand for the IEEE 754 floating-point standard, which nearly every CPU and GPU has hardware support for). And I don't know much about BCD, but it is probably missing niceties like NaN and Inf for capturing edge cases that crop up deep inside your equations. These matter if your system is to be reliable.


> generally shorthand for the IEEE 754 Floating Point standard

Yes, generally, but that is just a social convention. There is nothing stopping you from doing floating point in base 10 rather than base 2, and if you do, 0.1 becomes exactly representable. It's just a quirk that 1/10 happens to have a repeating expansion in base 2. It is in no way a reflection of a limitation on computation.
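The repeating-expansion claim is easy to check by hand: long division of 1 by 10 in base 2 revisits the same remainders, so the digits must cycle. A small sketch (the helper function is hypothetical, just to show the division):

```python
def base2_digits(num, den, n):
    """Return the first n binary fraction digits of num/den."""
    digits = []
    for _ in range(n):
        num *= 2                 # shift one binary place
        digits.append(num // den)
        num %= den               # remainder feeds the next digit
    return digits

# 1/10 in binary is 0.0001100110011... -- the block "0011" repeats forever,
# so no finite binary significand can hold it exactly:
print(base2_digits(1, 10, 12))   # [0, 0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1]
```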


IEEE 754 has defined decimal floating point in 32-bit (decimal32), 64-bit (decimal64), and 128-bit (decimal128) formats since the 754-2008 revision. (However, .NET’s type I mentioned above, even though it’s 128-bit, is its own format and does not adhere to the standard.)
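As an aside, Python's decimal module is one readily available decimal floating-point implementation (it follows IBM's General Decimal Arithmetic Specification, on which the IEEE 754-2008 decimal formats are based), and it does include the NaN/Inf machinery mentioned upthread:

```python
from decimal import Decimal, getcontext, DivisionByZero

ctx = getcontext()
ctx.traps[DivisionByZero] = False   # return Infinity instead of raising

print(Decimal(1) / Decimal(0))      # Infinity
print(Decimal("NaN") + 1)           # NaN (quiet NaNs propagate)
```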



