
> Because we cannot represent fractional units of the already smallest unit of currency, we have to choose a value to charge or dispense anyway. Unlike, say, if we stored it in dollars as a floating point, where we can start compounding errors from the floating-point type.

I've spent 20 years mostly working in finance. You'd be surprised how often floating-point numbers are used to represent currency. I cringe every time I see it, but it's common (and wrong).
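
To make the "wrong" part concrete, here's a minimal sketch (Rust, but any IEEE 754 double behaves the same) of the representation error you get the moment you store dollars in a float:

    fn main() {
        // 10 cents + 20 cents, stored as binary floating-point dollars:
        let sum = 0.1_f64 + 0.2_f64;
        println!("{:.17}", sum); // prints 0.30000000000000004
        assert_ne!(sum, 0.3);    // the error is there before any compounding
    }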

More precisely, pricing of securities (from exchanges) is done with integers and a scaling factor. The factor is typically static and doesn't need to be transmitted on every tick; it tells you how many fractional digits are present (i.e. the actual price is the integer multiplied by 10^-factor). Factors of 4 or even 6 are fairly common, but some securities have to make do with less precision. I remember Berkshire Hathaway in particular causing overflow issues with 32-bit ints in the last decade: because they have never split their shares, the share price is large enough that at four implied decimal places it no longer fits in 32 bits.
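A sketch of that representation (Rust; the factor of 4 and the numbers are illustrative, not any particular exchange's feed format):

    fn main() {
        // Actual price = raw integer * 10^-factor.
        let factor: i32 = 4;      // four implied decimal places
        let raw: i64 = 1_234_500; // represents $123.4500
        println!("{}", raw as f64 / 10_f64.powi(factor)); // display only

        // The Berkshire Hathaway problem: BRK.A has traded above $400,000.
        // At factor 4 that is more than 4_000_000_000 raw units, past
        // i32::MAX (2_147_483_647), so a 32-bit price field overflows.
        let brk_a_raw: i64 = 4_300_000_000;
        assert!(i32::try_from(brk_a_raw).is_err());
    }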




> You'd be surprised how often floating-point numbers are used to represent currency. I cringe every time

We all cringe, but then there’s a “floating point or gtfo” ultimatum that most languages present you with. People would be happy not to use FP. But the reality is, you can have a monetary (decimal) column in a 30-year-old database engine, but not a decimal type in a 5-year-old language runtime. When you only have a hammer…


In that situation you’re meant to use integer-cents though.
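
E.g., a minimal integer-cents sketch (Rust; the values are made up):

    fn main() {
        // Store money as whole cents in a wide integer; arithmetic stays exact.
        let unit_price_cents: i64 = 1_999; // $19.99
        let quantity: i64 = 3;
        let total_cents = unit_price_cents * quantity; // exactly 5_997
        println!("${}.{:02}", total_cents / 100, total_cents % 100); // $59.97
    }

The usual caveats apply: pick an integer wide enough for your largest total, and decide explicitly how division (e.g. splitting a bill) should round.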



