> It's 2023: I'm still surprised that so many new languages/platforms […] still lack built-in, runtime-provided, or even standard-library-provided rational types
Because numerators and denominators can blow up easily, rational types effectively require storing them as bigints. That makes them bad from a performance viewpoint.
Assuming your application will work fine with fixed-size rationals (which, IMO, is highly unlikely), the natural way to store rationals in fixed-size storage wastes lots of room.
For example, in an (int32 numerator, int32 denominator) pair, there are about 2³² ways to encode ‘1’, 2³¹ ways to encode ½, and so on. I also think such a fixed-size rational type isn’t very natural to work with.
Rationals also require regularly computing a gcd when you add (1/3 + 1/6 isn’t 9/18 but 1/2) or multiply (10/21 × 7/5 isn’t 70/105 but 2/3) them, slowing things down further (you can use heuristics to avoid some of those simplifications, but skipping one when it’s possible may mean your rationals keep huge numerators and denominators, slowing down later operations).
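To make those examples concrete, here's what Python's fractions.Fraction (which stores arbitrary-precision numerator/denominator and reduces by the gcd after every operation) does with them:

    from fractions import Fraction
    from math import gcd

    # Addition cross-multiplies, then divides out the gcd:
    print(Fraction(1, 3) + Fraction(1, 6))    # 1/2, not 9/18
    print(Fraction(10, 21) * Fraction(7, 5))  # 2/3, not 70/105

    # Skipping the reduction keeps the unreduced pair around:
    num, den = 9, 18                          # 1/3 + 1/6 before reducing
    g = gcd(num, den)                         # 9
    print(f"{num}/{den} -> {num // g}/{den // g}")   # 9/18 -> 1/2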
> Because numerators and denominators can blow up easily, rational types effectively require storing them as bigints. That makes them bad from a performance viewpoint.
While hardware support for full number towers might be neat, I don't think anyone is suggesting that ints and floats should not have language/standard-library support - only that there should be a blessed option for precise arithmetic.
Usually the slow correct answer is preferable to the quick wrong answer (especially in business logic).
I'm (vaguely) surprised there hasn't been a hardware type for rationals. Then the GCD operation could be done in hardware and everything would be fast (at least as fast as floats). Then again, it's not that surprising, since adding new hardware data types is hard - but financial institutions would benefit from something like that.
A rat64 could be:
- 1 sign bit
- 3 format bits. Determines where the decimal is located, e.g. 30.30 vs 52.8.
- 60 bits of numerator and denominator. The split is determined by the format bits. 50.10 would be handy for any percent/ppt operations (e.g. most USD operations)
Might need an error bit, but that's my 5-minute gist.
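For illustration only, a quick Python sketch of what packing/unpacking such a rat64 might look like; the split table is made up to match the examples above, not an existing format:

    # Hypothetical rat64: 1 sign bit | 3 format bits | 60 bits shared
    # between numerator and denominator; the format bits pick the split.
    SPLITS = {
        0b000: (30, 30),   # general purpose
        0b001: (52, 8),    # large numerator, small denominator
        0b010: (50, 10),   # e.g. percent/ppt-style scaling
    }

    def pack_rat64(sign, fmt, num, den):
        num_bits, den_bits = SPLITS[fmt]
        assert 0 <= num < (1 << num_bits) and 0 < den < (1 << den_bits)
        return (sign << 63) | (fmt << 60) | (num << den_bits) | den

    def unpack_rat64(word):
        sign = (word >> 63) & 1
        num_bits, den_bits = SPLITS[(word >> 60) & 0b111]
        num = (word >> den_bits) & ((1 << num_bits) - 1)
        den = word & ((1 << den_bits) - 1)
        return ((-1) ** sign * num, den)

    print(unpack_rat64(pack_rat64(0, 0b000, 3, 7)))   # (3, 7), i.e. 3/7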
> Then the GCD operation could be done in hardware and everything would be fast (at least as fast as floats).
Could it? Is there a fast way to do GCD in hardware?
Googling gave me https://en.wikipedia.org/wiki/Binary_GCD_algorithm, which needs O(log₂(max(u, v))) operations to compute gcd(u,v), but I wouldn’t know whether that’s the best we can do in hardware, or whether that’s ‘as fast as floats’.
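For reference, a minimal sketch of that binary GCD in Python; the appeal for hardware is that it only needs shifts, subtraction, and comparisons, but it's still an iterative, data-dependent loop rather than a fixed-latency operation:

    def binary_gcd(u, v):
        # Stein's algorithm: only shifts, subtractions and comparisons;
        # the number of iterations is proportional to the bit length.
        if u == 0:
            return v
        if v == 0:
            return u
        shift = 0
        while (u | v) & 1 == 0:   # both even: 2 is a common factor
            u >>= 1
            v >>= 1
            shift += 1
        while u & 1 == 0:         # strip remaining factors of 2 from u
            u >>= 1
        while v != 0:
            while v & 1 == 0:
                v >>= 1
            if u > v:
                u, v = v, u
            v -= u                # gcd(u, v) == gcd(u, v - u)
        return u << shift

    print(binary_gcd(70, 105))    # 35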
Also, I don’t see how your format describes a rational type. Rationals store numbers as p/q for integer p and q, not with a decimal point.
Sorry, abuse of notation. 30.30 would be 30 bits numerator, 30 bits denominator.
I would imagine it would not need to fully reduce after every operation; there are probably tricks where, if you know the factors going in, there will be obvious ones to cancel after multiplication/division.
The trouble is this isn't enough, even for ordinary usage. The numerator and denominator grow in proportion to the total number of operations you have performed. For example, if you start with `1` and then multiply by `80/81` a thousand times, you get a number that's around 4e-6, but when expressed as a reduced rational the numerator and denominator have nearly two thousand digits each.
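That's easy to check with Python's exact rationals (the digit counts below are for the fully reduced form, since gcd(80, 81) = 1):

    from fractions import Fraction

    x = Fraction(1)
    for _ in range(1000):
        x *= Fraction(80, 81)

    print(float(x))                 # ~4e-06
    print(len(str(x.numerator)))    # 1904 digits (80**1000)
    print(len(str(x.denominator)))  # 1909 digits (81**1000)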
But how often are people using decimal types to do that? Most of the uses I could see this type being used for - currency, percent scaling, datetimes, audio/video codec frame rates - are basically fixed-point operations. Anything involving powers of 80/81 would probably need bigint-based rationals anyway.
Actually, if you had an int64 type scaled in flicks, that'd give you quite a lot of latitude for most day-to-day stuff.
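To spell that out (with the caveat that I'm assuming the usual definition of a flick as 1/705,600,000 of a second): common frame rates and sample rates divide it exactly, so durations stay integral, and an int64 worth of flicks spans roughly ±400 years:

    from fractions import Fraction

    FLICKS_PER_SECOND = 705_600_000   # 1 flick = 1/705,600,000 s

    rates = (24, 25, 30, 48, 60, 120, 44_100, 48_000,
             Fraction(24_000, 1_001))             # last one is NTSC 23.976 fps
    for rate in rates:
        duration = Fraction(FLICKS_PER_SECOND) / rate
        assert duration.denominator == 1          # exact integer number of flicks
        print(rate, int(duration))

    # ~414 years of range in a signed 64-bit flick count:
    print((2**63 - 1) / FLICKS_PER_SECOND / 86_400 / 365.25)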
Yeah, fixed point is different from rational, and all those examples you gave sound to me like fixed point. And that can be implemented efficiently without dedicated hardware support: the denominator is fixed, and you store the numerator as an integer.
A 1/3 off discount on a $10 item is $6.67 (or $6.66 if rounding in the customer's favor), not $10/3.
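A minimal sketch of that discount in fixed point, assuming prices are held as integer cents (so the denominator of 100 is implicit and fixed):

    price_cents = 1_000                    # $10.00 as integer cents

    numer, denom = 2, 3                    # "1/3 off" means paying 2/3 of the price
    floor_cents = price_cents * numer // denom                   # 666 -> $6.66
    nearest_cents = (price_cents * numer + denom // 2) // denom  # 667 -> $6.67

    print(floor_cents, nearest_cents)      # 666 667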
(Except datetime, did you mean timestamp? A timestamp is an instant in time, and it often makes sense to store it in high precision because you're saying exactly when something happened. A datetime is for communicating between humans who are using some particular calendar; it rarely makes sense to have more than minute precision.)