I'd be amazed if the hardware you're using and the software you're running don't have the "feature" that for some value of x, 1+x=1 and yet 1-x != 1. If you read the article you might understand why.
If you don't understand why, perhaps you could come back and ask specific questions.
For the concrete and practical minded who don't want to bother with understanding the theoretical arguments, here's some python:
#!/usr/bin/python
a = 1.
x = 1.
# Keep halving x until adding it to a no longer changes the result.
while a + x != 1.:
    x = x / 2
print 'x =', x
print 'a+x =', '%.18f' % (a+x)
print 'a-x =', '%.18f' % (a-x)
Output:
x = 1.11022302463e-16
a+x = 1.000000000000000000
a-x = 0.999999999999999889
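That limiting value isn't arbitrary: it's half the machine epsilon of an IEEE 754 double, i.e. half the gap between 1.0 and the next representable number above it. A quick follow-up sketch (assuming the usual 64-bit doubles, and using Python's standard sys module):
import sys

# The loop above stops at x = 2**-53, which is exactly half of the machine
# epsilon -- the gap between 1.0 and the next larger representable double.
eps = sys.float_info.epsilon      # 2**-52 for IEEE 754 64-bit doubles
print(eps / 2)                    # the same x the loop found
print(1.0 + eps / 2 == 1.0)       # True: rounds back down to 1.0
print(1.0 - eps / 2 == 1.0)       # False: below 1.0 the spacing is finer (2**-53)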
Performance optimizations that affect mathematical accuracy are not necessarily "rubbish"; there are certainly cases where they are necessary and appropriate.
OTOH, it is a problem, IMO, that many popular languages make it much easier to use fixed-precision binary floating point representations -- which, while the most performance-optimized way to work with non-integer numbers, are also the most prone to producing incorrect results on real-world inputs -- than any other representation of non-integer numbers.
Use of fixed-precision binary floating point is a performance optimization that, like all such optimizations, should be driven by a real need in the particular application, not just be the default choice because that's what the language makes easiest.
Unfortunately, very few languages make this natural and idiomatic (Scheme and its numeric tower get numbers right, but hardly any popular languages do).
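As a rough illustration of the alternatives (this uses Python's standard-library fractions and decimal modules purely as examples; other languages have their own equivalents), exact rationals and decimal floating point avoid this particular class of surprise, at a cost in speed and memory:
from fractions import Fraction
from decimal import Decimal

# Binary floating point cannot represent 0.1, 0.2 or 0.3 exactly...
print(0.1 + 0.2 == 0.3)                                        # False
# ...but exact rationals can, at a performance cost.
print(Fraction(1, 10) + Fraction(2, 10) == Fraction(3, 10))    # True
# Decimal floating point keeps the "human" digits exact.
print(Decimal('0.1') + Decimal('0.2') == Decimal('0.3'))       # True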
There was a post on Hacker News about floating-point numbers. It said "if 1+x is 1", then "1-x is not". There is a long explanation of how this is true. They are very proud of this outcome. They believe this weird outcome is "perfectly logical". I just want to say "F**k you!" If your program can't handle things even kids can do, you are writing rubbish. F**k you!
It's not a case of being proud of it, it's an unavoidable consequence of using a small number of bits (32, 64, 128, 65536, all these numbers are small) to try to represent a very large range of numbers. In other words, it's an unavoidable consequence of using floating point.
The point is that floating point numbers are not mathematics, and they don't obey all the mathematical laws you've been taught. They're an excellent model, provided you stay away from the edges. But if you do go close to the edges, the inaccuracies of the model get exposed. To understand the difference is of real value.
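A tiny illustration (the constants 0.1, 0.2 and 0.3 are just the usual convenient examples): even associativity of addition fails once every intermediate result is rounded.
# Mathematically (a + b) + c == a + (b + c), but each intermediate
# result here is rounded to the nearest representable double, and the
# rounding errors differ depending on the grouping.
print(repr((0.1 + 0.2) + 0.3))                   # 0.6000000000000001
print(repr(0.1 + (0.2 + 0.3)))                   # 0.6
print((0.1 + 0.2) + 0.3 == 0.1 + (0.2 + 0.3))    # False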
Let's use mathematical reasoning to show why it's true.
If we use 64-bit (say) numbers to represent a range larger than 0 to 2^64-1, we must choose whether or not the numbers we can represent are equally spaced.
If we choose them to be equally spaced then we either cannot represent 0, or we cannot represent 1. To see this, suppose we can represent both 0 and 1. Equal spacing then means the spacing is 1/k for some positive integer k, and with only 2^64 values to hand the largest number we can reach is (2^64-1)/k, which is at most 2^64-1. Thus we are not representing a range larger than 0 to 2^64-1. So if the representable numbers are equally spaced, then we cannot represent both 0 and 1, which seems sub-optimal.
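Here's a toy illustration of that trade-off; the 16-bit fixed-point format below is purely made up for the example, nothing in the argument depends on it.
# Hypothetical 16-bit unsigned fixed-point format with 8 fractional bits:
# 2**16 equally spaced values, spaced 1/256 apart.
BITS = 16
FRAC_BITS = 8
step = 1.0 / (1 << FRAC_BITS)          # spacing = 0.00390625
largest = ((1 << BITS) - 1) * step     # 255.99609375
print(step)
print(largest)
# We can now represent 0 and 1 (and 1/256), but the largest value fell
# from 65535 to just under 256: equal spacing trades range for resolution,
# it never enlarges the range.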
If we choose them not to be equally spaced then there will be consecutive representable numbers r0<r1<r2, such that mathematically r1-r0 is not equal to r2-r1. Now set:
d0 = r1-r0 so that r0 + d0 = r1
d1 = r2-r1 so that r1 + d1 = r2
Suppose that d0 < d1, and let h = (d0+d1)/4, the average of d0/2 and d1/2. Hence (d0/2)<h<(d1/2).
Note that d0, d1, and h might not be representable.
Now r0 and r1 are consecutive representable numbers. The number h is more than half the gap between r0 and r1, so we would want r1-h to round down to r0. But the number h is less than half the gap from r1 to r2, so we would want r1+h to round to r1.
Therefore:
r1 - h = r0
r1 + h = r1
Thus it is unavoidable that if the representable numbers are not equally spaced then we can find a and x such that:
a + x == a
a - x != a
The above argument still goes through even if d0>d1, or if we want things to round down, or if we want things to round up, and we leave it as an exercise for the interested reader to check the details in those cases.
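If you want to see the argument play out on real hardware, here's a sketch using IEEE 754 double precision around r1 = 1.0 (the neighbouring representable numbers and gaps quoted in the comments are standard facts about 64-bit doubles):
# A concrete instance of the argument above:
#   r0 = 1 - 2**-53 is the representable number just below 1.0, so d0 = 2**-53
#   r2 = 1 + 2**-52 is the representable number just above 1.0, so d1 = 2**-52
# Choose h strictly between d0/2 and d1/2, e.g. h = (d0 + d1)/4 = 3 * 2**-55.
r1 = 1.0
h = 3 * 2.0 ** -55
print(r1 + h == r1)            # True:  1 + h rounds back to 1.0
print(r1 - h == r1)            # False: 1 - h rounds down to r0
print('%.18f' % (r1 - h))      # 0.999999999999999889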
Now you have three choices:
* Reject this, because it contradicts your belief that floating point numbers must behave in the same way as real, mathematical numbers;
* Demonstrate a logical flaw in the argument; or
* Accept the conclusion, even though it contradicts your belief that floating point numbers behave in the same way as real, mathematical numbers, and accept that floating point numbers are just a model of mathematical numbers.