
Makes me wonder if there's a use case for "dynamic fixed point" numbers: say, for a 16 bit value, the upper 2 bits are one of four values that say where the point is in the remaining 14. Say 0 (essentially an int), two spots in the middle, and 14 (all fraction). The CPU arithmetic for any operation is (bitshift+math), which should be an order of magnitude faster than any float operation. The range isn't nearly as dynamic, but it would allow for fractional influence. Maybe such a system would lack enough precision to be accurate?
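A minimal sketch of that idea in C, purely as an illustration: the tag-to-shift table, the field layout, and the function names here are all arbitrary choices, and overflow/rounding are ignored.

    #include <stdint.h>
    #include <stdio.h>
    
    /* Hypothetical "dynamic fixed point": upper 2 bits of a uint16_t
       pick how many of the low 14 bits are fractional. This table is
       one arbitrary choice of the four positions described above. */
    static const int FRAC_BITS[4] = {0, 5, 9, 14};
    
    static double dfx_to_double(uint16_t x) {
        int tag = x >> 14;
        int16_t v = (int16_t)(x << 2) >> 2;   /* sign-extend 14 bits */
        return (double)v / (double)(1 << FRAC_BITS[tag]);
    }
    
    /* Add by aligning both operands to the coarser scale: a couple of
       shifts plus an integer add, as suggested above. */
    static uint16_t dfx_add(uint16_t a, uint16_t b) {
        int ta = a >> 14, tb = b >> 14;
        int16_t va = (int16_t)(a << 2) >> 2;
        int16_t vb = (int16_t)(b << 2) >> 2;
        int t = ta < tb ? ta : tb;            /* coarser scale wins */
        va >>= FRAC_BITS[ta] - FRAC_BITS[t];
        vb >>= FRAC_BITS[tb] - FRAC_BITS[t];
        return (uint16_t)((t << 14) | ((va + vb) & 0x3FFF));
    }
    
    int main(void) {
        uint16_t one = (2 << 14) | (1 << 9);  /* tag 2: 512/2^9 = 1.0 */
        printf("%g\n", dfx_to_double(dfx_add(one, one)));  /* prints 2 */
        return 0;
    }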



What you just described is exactly floating point numbers; you're just using a different split for the exponent and mantissa, and not using the "leading significand bit is always 1, so don't store it" simplification.


hmm interesting, I never saw them that way. But now that you say that, it makes them "click" a lot more. Thanks!


:D

Yeah, floating point is nothing more than your standard scientific notation of numbers, e.g.

    digit.xyz... * 10 ^^ +/- some exponent
The exponent is simply shifting where the decimal point is. The only difference for floating point is that everything is base 2, because computers :D
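To make the analogy concrete, here's a small C sketch (my own illustration, nothing official) that pulls a float apart into its sign, exponent, and significand fields and reprints it as base-2 scientific notation:

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>
    
    int main(void) {
        float f = 6.5f;                  /* 1.625 * 2^2 */
        uint32_t bits;
        memcpy(&bits, &f, sizeof bits);  /* reinterpret the bytes */
    
        uint32_t sign = bits >> 31;
        int32_t  exp  = (int32_t)((bits >> 23) & 0xFF) - 127;  /* un-bias */
        uint32_t frac = bits & 0x7FFFFF;
    
        /* Normal numbers have an implicit leading 1 on the fraction. */
        double significand = 1.0 + frac / (double)(1 << 23);
        printf("%g = %s%g * 2^%d\n", f, sign ? "-" : "", significand, exp);
        return 0;
    }

This prints "6.5 = 1.625 * 2^2", i.e. the same number in base-2 scientific notation.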

Interestingly, you're right that a bunch of FP functions can be faster than their integer equivalents (although I'm still not convinced that this isn't simply due to the reduced number of bits involved), and, more fun, the relative performance of operations can actually change versus what it would be in integers. Also, this is in the context of doing it in software vs. hardware, where again the performance costs of things change.


It does have several additional attributes (a quick sketch after the list pokes at a few of them):

1. Since the normalized significand will always be 1.bbbb, the '1' bit is stripped from the significand representation, except:

2. To extend the range, the lowest (all-zero) value of the exponent drops the implicit leading '1'. This is referred to as the subnormal range

3. The highest exponent value, when the significand is zero, is used to represent positive and negative infinity

4. The highest exponent value with a non-zero significand is used to represent NaN

5. There are many different values usable for NaN by software, including a differentiation between 'quiet' NaNs and an (I believe implementation-optional) 'signaling' variant, which raises a trap when used. The idea is that these can be used to convey additional information, and that the signaling variant, together with the right trap handlers, can be used to add additional functionality such as variable substitution.

6. Zero is signed
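Here's the sketch: a C program exercising a few of these behaviors (the comments reference the item numbers above; it assumes the default FP environment, i.e. no -ffast-math):

    #include <float.h>
    #include <math.h>
    #include <stdio.h>
    
    int main(void) {
        double zero = 0.0;  /* a variable, to dodge constant folding */
    
        /* 2: subnormals fill the gap between 0 and DBL_MIN */
        printf("subnormal? %d\n", fpclassify(DBL_MIN / 2) == FP_SUBNORMAL);
    
        /* 3: all-ones exponent, zero significand -> infinity */
        printf("1/0 = %g\n", 1.0 / zero);
    
        /* 4: all-ones exponent, nonzero significand -> NaN, which
           compares unequal even to itself */
        double q = zero / zero;
        printf("nan == nan? %d\n", q == q);
    
        /* 6: -0.0 compares equal to 0.0, but its sign shows up
           when you divide by it */
        printf("-0.0 == 0.0? %d, 1/-0.0 = %g\n", -0.0 == 0.0, 1.0 / -zero);
        return 0;
    }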


Yes, I was giving a simplified description to translate what floating point is into something people are more familiar with.

The technical details of how it handles _every_ case weren’t particularly relevant.

However, just to address 1 and 2 with some hilariousness (autocorrect wants this to be "hilarious mess", which may be more correct):

IEEE 754's 80-bit format was the first widely deployed format, and was largely used by Intel to get the other manufacturers to stop trying to reduce the functionality of IEEE floating point because "it couldn't be implemented, implemented efficiently, etc.". However, because it came first, it has a quirk that was fixed for fp32, fp64, etc.

FP80 uses an explicit bit for the leading 1. That means it can do 1.0 * 2 ^^ N or 0.1 * 2 ^^ N; it should hopefully be immediately obvious why this could be a problem :)

Not only do the multiple representations for a single value result in sadness, it also gives us a variety of concepts like pseudo-denormals, pseudo-normals, pseudo-infinities, pseudo-NaNs, etc., all of which cause their own problems.

Mercifully, the only hardware FP80 implementation left (since the 387, maybe?) defaults to just treating them as invalid and converts them to NaN. But you can set a flag to make it treat them as it did originally.
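If you want to see the explicit integer bit for yourself, here's a sketch; it assumes an x86 target where long double is the 80-bit x87 format (on other ABIs long double is something else entirely, so this is strictly an x86 illustration):

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>
    
    int main(void) {
        /* On x86, long double is the 80-bit x87 format: 10 bytes of
           payload, typically padded out to 12 or 16 in memory. */
        long double x = 1.0L;
        unsigned char b[sizeof x];
        memcpy(b, &x, sizeof x);
    
        /* Little-endian layout: bytes 0-7 hold the 64-bit significand,
           bytes 8-9 hold the sign and 15-bit biased exponent. */
        uint64_t sig;
        memcpy(&sig, b, 8);
        printf("integer bit: %d\n", (int)(sig >> 63));  /* explicit '1' */
    
        /* In fp32/fp64 this bit isn't stored at all; it's implied,
           which is why those formats have no "unnormal" encodings. */
        return 0;
    }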



