Did this exponential blowup ever prove troublesome in practice? I think it can be easily solved by strategically placed rounding steps. But explicit rounding is advisable in many cases anyway.
But strategically placed rounding was in fact the solution to the very problem that rationals were proposed in this thread to solve...
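To make the blowup concrete, here's a minimal sketch using Python's fractions module (my illustration, not the code from the original problem): iterated exact arithmetic grows the denominator roughly quadratically per step, and one strategically placed rounding call bounds it again.

```python
from fractions import Fraction

x = Fraction(1, 3)
for _ in range(10):
    x = x * x + Fraction(1, 7)   # the denominator roughly squares each step

print(len(str(x.denominator)))   # hundreds of digits after just 10 iterations

# a strategically placed rounding step bounds the representation again
x = x.limit_denominator(10**9)   # closest fraction with denominator <= 10**9
```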
Rational types are not super popular and don't get used that much; most (not all) people who end up using them do so very intentionally and consciously and know what they're dealing with. If they were more ubiquitous, I suspect more trouble would be seen "in practice".
While it's probably true that any digital number format will have edge cases that come up in practice, my personal pick for "why the heck isn't this more popular, why doesn't every language support it, why isn't it in fact the default representation of a numeric literal" is floating-point "decimal" types, like Ruby's (or, I think, Java's) BigDecimal, rather than rationals. I think they mostly match programmers' mental models of numbers, and for many or most common uses on 2023 platforms the performance is just fine. (This would not have been true 30 years ago.)
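A quick illustration of the mental-model point, using Python's decimal module as a stand-in for BigDecimal (my substitution; any arbitrary-precision decimal type behaves similarly here):

```python
from decimal import Decimal

# binary floating point: the literal 0.1 is not exactly representable
print(0.1 + 0.2)                        # 0.30000000000000004

# decimal floating point: the result matches what the programmer wrote
print(Decimal("0.1") + Decimal("0.2"))  # 0.3
```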
Or it's that vendors don't care about correctness; otherwise they'd provide decimals by default and high-performance binary floating point as an "optimized" opt-in. Python did something like that by using bigints as the default integer type.
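To illustrate the Python example: integer arithmetic is exact by default, and the interpreter silently promotes to arbitrary precision whenever a value outgrows a machine word.

```python
x = 2 ** 63            # already past the signed 64-bit range
print(x * x)           # exact: 85070591730234615865843651857942052864
```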
What would not have been true 30 years ago? I remember a "real" fixed-point type built into Turbo Pascal and explicitly documented as suitable for money. Common Lisp has had first-class fraction support since the start.
Slight correction, from a former Turbo Pascal (and later Delphi) programmer...
"Real" types were platform-dependent floating-point types, not suitable for monetary calculations whatsoever. and would map to either Single or Double depending on the underlying CPU architecture. Sort of like a C-style "int" that would map to an 8-bit, 16-bit, 24-bit, 32-bit, 36-bit, 60-bit, 64-bit etc integer, depending on the compiler and the compile target.
Turbo Pascal did have an 8-byte, fixed-point "Currency" type suitable for monetary calculations; however, using it was very, very slow compared to pure floating-point ops, just as the comment you replied to suggested. If that weren't enough, library support (both built-in and third-party) for math and other utility functions was limited or non-existent.
Turbo Pascal did NOT have an 8-byte fixed-point "currency" type suitable for monetary calculations. It did have an int64 type called "comp", though, which was handled as a float by the FPU and hence was no slower than the types "single" (f32), "double" (f64), or "extended" (f80).
IIRC, "currency" came with Delphi V2.0 (or even later), but then still it wasn't slower than other floats when you did heavy calculations with it as it was also handled by the FPU. Only reads and writes from and to such variables were expensive as there was always a scaling (multiplication by 1e4 and 1e-4) involved — internally it was that int64 "comp" type. (But here I might be wrong, I never really used "currency", I disassembled lots of Delphi binaries with lots of different data types as I wanted to know how the compiler worked. Today however my Delphi knowledge is quite fuzzy).
I meant to suggest that 30 years ago the performance difference between "floats" (binary floating point) and "BigDecimal"-style arbitrary-precision decimal would have been much more significant for many more real-world use cases than it is now. That may have been a reason not to make decimals the default when you simply write a literal `5.4` in code, but that argument carries less weight now.
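To put that claim in testable form, a rough benchmark sketch (my own; absolute numbers will vary by machine, the interesting part is the ratio between the two):

```python
import timeit
from decimal import Decimal

floats = [0.1] * 100_000
decimals = [Decimal("0.1")] * 100_000

print(timeit.timeit(lambda: sum(floats), number=10))    # binary floats
print(timeit.timeit(lambda: sum(decimals), number=10))  # decimal, slower but fine
```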
Fun fact: many processors actually support decimal types in hardware. I believe you can use them in C with _Decimal. It worked on my Ryzen 3700X and (if memory serves) my M1 Mac.
Due to the downvotes I looked into this again. You can use _Decimal32 etc. in GCC on x86, but it is a software implementation. I believe it is in hardware on some IBM platforms, but definitely not on the M1.
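For a feel of decimal32 semantics without a C toolchain: decimal32 carries 7 significant digits, and a 7-digit context in Python's decimal module reproduces that rounding in software, much like GCC's soft implementation does on x86 (my analogy, not the same code path):

```python
from decimal import Decimal, Context

ctx = Context(prec=7)  # decimal32 keeps 7 significant decimal digits
print(ctx.add(Decimal("1234567"), Decimal("0.4")))  # 1234567, the 0.4 rounds away
print(ctx.divide(Decimal(1), Decimal(3)))           # 0.3333333
```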