
If you actually want to do something useful, accurate, auditable, performant, and space-efficient, because you have a lot of money values to bank with, you really don't want too much abstraction.

My experiences in various bits of banking include juniors ignoring advice NOT to store currency in floating point values and then coming back whining that arithmetic is broken, and tech dudes in a lab deciding that every single FX flow in an investment bank should have 2MB of (unshared) calendar hidden inside its abstraction, which made some individual trades too big to load even for powerful machines...

Fixed point calcs in integers are good.
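
A minimal sketch of what that looks like in practice (the choice of cents as the unit, the basis-point rate, and the half-up rounding are illustrative assumptions, not prescriptions):

    # Money as integer minor units (cents here); no floats anywhere.

    def add_cents(a_cents: int, b_cents: int) -> int:
        """Exact addition: integers never round."""
        return a_cents + b_cents

    def apply_rate_cents(amount_cents: int, rate_bp: int) -> int:
        """Apply a rate given in basis points (1 bp = 0.01%), rounding half up.
        Integer division with an explicit rounding term keeps the policy auditable."""
        return (amount_cents * rate_bp + 5_000) // 10_000

    def format_cents(amount_cents: int) -> str:
        """Render for display only; arithmetic never touches strings or floats."""
        sign = "-" if amount_cents < 0 else ""
        units, minor = divmod(abs(amount_cents), 100)
        return f"{sign}{units}.{minor:02d}"

    balance = add_cents(10_000, 2_550)         # $100.00 + $25.50
    interest = apply_rate_cents(balance, 125)  # 1.25% of $125.50
    print(format_cents(balance), format_cents(interest))  # 125.50 1.57

Every rounding decision is an explicit, auditable line of code rather than something the hardware does behind your back.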




Floating point feels like an incredibly grokkable concept that was just not taught well for a long time. Maybe too mathematically (of course). Or maybe that was just my experience.

I feel like any dev team should pick up a copy of this for on-boarding: https://jvns.ca/blog/2023/06/23/new-zine--how-integers-and-f...


Floating point being grokkable doesn’t make it any more suitable for this application.

Floating point is inherently an approximation - your bank balance should not be an approximation.


> Floating point is inherently an approximation - your bank balance should not be an approximation.

I think this is a perfect example of bad floating point teaching. Floating point is not an approximation in any sense. If the numerical result of your calculation is representable in floating point you will get an exact answer always. And for results that aren't representable you decide exactly what should be done about that. It's like saying integers are an approximation because 5/2 == 2.
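
A quick illustration of the distinction in Python (whose float is an IEEE 754 double):

    # Results that are representable in binary floating point are exact:
    assert 0.5 + 0.25 == 0.75    # sums of powers of two: no rounding at all
    assert 1.0 + 2.0 == 3.0      # small integers are exactly representable

    # Results that are not representable are rounded, deterministically:
    print(0.1 + 0.2)             # 0.30000000000000004
    print(0.1 + 0.2 == 0.3)      # False: 0.1, 0.2 and 0.3 are all rounded inputs

    # Just as integer division discards the remainder by a documented rule:
    print(5 // 2)                # 2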


In programming, an `int` perfectly represents the integers between INT_MIN and INT_MAX.

A `float` on the other hand, approximates the real numbers. It can perfectly represent exactly 0% of them.

Having control over the rounding behavior is meaningless - floats cannot correctly represent any non-contrived calculation. Exact representation is important in financial systems.


> floats cannot correctly represent any non-contrived calculation

Like adjusting all your financial calculations to use microcents and partitioning instead of division to keep the result representable by integers? Neither can represent 1/3 even shifted. When you want to do exact calculations with floats, and you can, you just have to set yourself up so that the result is exactly representable, it's not as intractable as you make it seem.
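
Partitioning here means splitting an amount into integer shares that sum back exactly, instead of dividing; a sketch of the usual largest-remainder approach (the function name is made up):

    def partition(amount_minor: int, parts: int) -> list[int]:
        """Split an integer amount into near-equal integer shares
        that sum exactly to the original."""
        base, remainder = divmod(amount_minor, parts)
        # The first `remainder` shares get one extra minor unit.
        return [base + 1] * remainder + [base] * (parts - remainder)

    shares = partition(100, 3)   # 100 cents split three ways
    print(shares, sum(shares))   # [34, 33, 33] 100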

> A `float` on the other hand, approximates the real numbers

Okay, so that's not at all what they do: they represent subsets of the reals, just like integers represent a subset of the reals. Even arbitrary precision libraries can only represent a subset of the rationals.


> they represent subsets of the reals

Sure. My point is that this subset is useless, because trying to add, subtract, multiply, or divide members of this set will result in a number outside the set.

> When you want to do exact calculations with floats, and you can, you just have to set yourself up so that the result is exactly representable, it's not as intractable as you make it seem.

In the general case, you absolutely cannot. Let's look at some examples.

In forex trading, you need 9 digits after the decimal place in the price. So right off the bat, a valid price like 1000000.000000001 cannot be represented by a float. If the exchange sends your system that price, your system is guaranteed to be wrong.

Let's say you start at a representable price, like 1000000.0, and want to tick it up or down by the tick size, say 0.025. The result of that addition/subtraction is not representable, so you cannot calculate and round prices correctly.
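
Concretely, in Python (Decimal is used here only to display the exact value the double actually stores; the integer-tick alternative is a sketch with made-up names):

    from decimal import Decimal

    # The tick size 0.025 has no finite binary expansion; the double stores:
    print(Decimal(0.025))
    # 0.025000000000000001387778780781445675529539585113525390625

    # Exact alternative: keep prices as integers in nano-units (9 decimal places).
    TICK_NANOS = 25_000_000            # 0.025
    price_nanos = 1_000_000 * 10**9    # 1000000.000000000
    price_nanos += TICK_NANOS          # tick up: exact integer arithmetic
    print(f"{price_nanos // 10**9}.{price_nanos % 10**9:09d}")  # 1000000.025000000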

If you don't have control of your inputs, and you need precision, floats will never work.


I think you're getting the impression that my stance is "floats are usable for all problem domains" when it's really "floating point arithmetic is not the same as approximate calculations."

Nearly all mathematical operations can take the result outside the range of integers, too; you can't do much other than subtract without accounting for edge cases. No matter what tool you use, you have to work with your chosen representation, around its limitations, and make sure your domain can be modeled exactly. For example, Python's base random function chooses a floating point number uniformly in the range [0, 1), but it achieves this by requiring that the result be a multiple of 2^-53, which is exactly representable, so rounding doesn't introduce bias.
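
You can check that property directly; this by-hand construction is equivalent to what CPython does internally (53 random bits scaled down by a power of two, which is exact):

    import random

    # Every result of random.random() is an exact multiple of 2**-53:
    x = random.random()
    scaled = x * (1 << 53)        # multiplying by a power of two is exact
    assert scaled == int(scaled)  # always an integer, so no rounding occurred

    # Equivalent construction by hand:
    y = random.getrandbits(53) / (1 << 53)
    assert 0.0 <= y < 1.0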

> If you don't have control of your inputs

Well, you clearly do to some degree, because you're sure you can model anything you might receive with fixed-size, fixed-precision integers. I'm not saying this means you can just switch to floats, but that you're doing the same thing: mapping the real-life problem domain exactly to a subset of the reals that is closed under the operations you want to perform.


> When you want to do exact calculations with floats, and you can, you just have to set yourself up so that the result is exactly representable, it's not as intractable as you make it seem.

Can you expand on what you mean by that? Whenever I have dealt with calculation errors (either in fixed or floating point), "set yourself up so that the result is exactly representable" has been the key problem in preventing errors from accumulating.

I obviously disagree on some other points, but circular discussions go nowhere!

Edit to add: Essentially I'm fishing for tactics. A big one in graphics development is detailed here: https://developer.nvidia.com/content/depth-precision-visuali... - but in fixed point I got very used to working out how to premultiply variables depending on their expected ranges.
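
For reference, the premultiplication tactic I mean looks roughly like this (the scale and names are illustrative): pick a scale per variable from its expected range, compute the full-width product exactly, then rescale once with explicit rounding:

    SCALE = 10**6  # six decimal places; chosen from the variables' expected ranges

    def fx_mul(a_scaled: int, b_scaled: int) -> int:
        """Multiply two SCALE-scaled values, rounding half up exactly once.
        The full-width integer product is exact; only the rescale rounds."""
        return (a_scaled * b_scaled + SCALE // 2) // SCALE

    a = 2_718_281          # 2.718281
    b = 3_141_592          # 3.141592
    print(fx_mul(a, b))    # 8539730, i.e. ~8.539730: one rounding step total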


TBF, be careful here. The IEEE floats can represent a subset of integers in their range exactly. For example, 64 bit floats can represent the range of 32 bit ints accurately (and more).

That said, it is bizarre to claim that you get the exact result whenever the result can be represented, when the core problem is precisely that the result often cannot be represented, because the representation is an approximation.


> The IEEE floats can represent a subset of integers in their range exactly. For example, 64 bit floats can represent the range of 32 bit ints accurately (and more).

I know, I am being a little facetious. A double has a 52-bit stored mantissa plus an implicit leading bit, so it can exactly represent integers that need 53 or fewer bits.

Still, as a percentage, a float can represent 0% of the reals. There are infinitely many numbers it cannot represent, even if we give it lower and upper bounds, whereas an int can represent 100% of the integers between a lower and upper bound.
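
The 53-bit boundary is easy to demonstrate:

    # Doubles have 53 significand bits (52 stored + 1 implicit), so exact
    # integer representation runs out at 2**53:
    print(float(2**53) == float(2**53 + 1))  # True: 2**53 + 1 is not representable
    print(float(2**53 - 1) == 2**53 - 1)     # True: still exact below the boundary
    print(9007199254740993.0)                # prints 9007199254740992.0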


More to the point, binary fp absolutely is a bad approximation to, and does poor arithmetic on, common legal sub-1 decimal currency values, i.e. those that do not have an exact binary fp representation.


Indeed. I'm just making a tangential comment. Definitely want to work in unsigned fractional cents.



