For anyone turned off by this document and its proofs, I recommend Numerical Methods for Scientists and Engineers (Hamming). Still a math text, but more approachable.
The five key ideas from that book, enumerated by the author:
(1) the purpose of computing is insight, not numbers
(2) study families and relationships of methods, not individual algorithms
> This motto is often thought to mean that the numbers from a computing machine should be read and used, but there is much more to the motto. The choice of the particular formula, or algorithm, influences not only the computing but also how we are to understand the results when they are obtained. The way the computing progresses, the number of iterations it requires, or the spacing used by a formula, often sheds light on the problem... Thus computing is, or at least should be, intimately bound up with both the source of the problem and the use that is going to be made of the answers -- it is not a step to be taken in isolation from reality
(From "An Essay on Numerical Methods", p. 3 of the mentioned text; emphasis the author's)
Not the OP, but I suspect it means focus on what questions are being asked first, and even then, look for opportunities to simplify wherever you find them.
So many of us spend so much time getting enamoured with technical solutions to problems that no one cares about.
Shared this because I was having fun thinking through floating point numbers the other day.
I worked through what fp6 (e3m2) would look like, doing manual additions and multiplications, showing cases where the operations are non-associative, etc. and then I wanted something more rigorous to read.
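Non-associativity is easy to reproduce even in double precision, without going down to fp6. A minimal sketch:

```python
# Floating-point addition is not associative: each operation rounds,
# and the rounding depends on the order of the operands.
a, b, c = 1e16, -1e16, 1.0

print((a + b) + c)   # 1.0  (a + b cancels exactly to 0.0 first)
print(a + (b + c))   # 0.0  (adding 1.0 to -1e16 rounds away entirely)
```

The 1.0 is smaller than half the spacing between representable values near 1e16, so it vanishes when added first.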
For anyone interested in floating point numbers, I highly recommend working through fp6 as an activity! Felt like I truly came away with a much deeper understanding of floats. Anything less than fp6 felt too simple/constrained, and anything more than fp6 felt like too much to write out by hand. For fp6 you can enumerate all 64 possible values on a small sheet of paper.
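If you'd rather have the machine enumerate the table for you, here is a sketch of one plausible fp6 e3m2 encoding: IEEE-style with 1 sign bit, 3 exponent bits (assumed bias 3), 2 mantissa bits, subnormals at exponent 0, and no infinities or NaNs. Other bias choices exist; treat the constants as assumptions.

```python
# Decode all 64 bit patterns of a hypothetical fp6 "e3m2" format:
# sign(1) | exponent(3, bias 3) | mantissa(2), subnormals, no inf/nan.
def fp6_value(bits):
    s = (bits >> 5) & 0b1
    e = (bits >> 2) & 0b111
    m = bits & 0b11
    if e == 0:                              # subnormal: no implicit leading 1
        mag = (m / 4) * 2.0 ** (1 - 3)
    else:                                   # normal: implicit leading 1
        mag = (1 + m / 4) * 2.0 ** (e - 3)
    return -mag if s else mag

values = sorted({fp6_value(b) for b in range(64)})
print(len(values), min(values), max(values))   # 63 distinct values (±0 collapse), range ±28
```

With these constants the smallest positive subnormal is 1/16 and the largest finite value is 28, which makes the whole number line small enough to sanity-check your hand calculations.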
For anyone not (yet) interested in floating point numbers, I’d still recommend giving it a shot.
One thing that really did it for me was programming something where you would normally use floats (audio/DSP) on a platform where floats were abysmally slow. This forced me to explore Fixed-Point options which in turn forced me to explore what the differences to floats are.
Fixed point gave rise to the old programmers' meme 'if you need floating point, you don't understand your problem'. It's of course partly in jest, but there is a grain of truth in it as well.
It is quite old, attributable to von Neumann and Goldstine in 1947. Goldstine later joked that if rescaling at every step was easy enough for Johnny, it ought to be easy enough for everyone else.
The gag here being that perhaps that isn’t the best dividing line for programming talent.
It gets WORSE. Here's a quote from "The Birth of a Computer" in BYTE Magazine, February 1985, an interview with J.H. Wilkinson, no numerical slouch, on the Manchester machines ca. 1949 (p. 178):
>They were fixed point, but one of the earliest things that I did (at Turing’s request) was to program a set of subroutines for doing floating-point arithmetic.
So we ought to scale by hand to better ourselves with self-study, yet one of the first errands TURING sent WILKINSON on was to rid themselves of this duty. ;)
It's interesting how many of these things we take for granted.
I'm working (and have been for a while) on something that requires both ridiculous precision and speed on a relatively puny power budget, and it's been a really nice trip down memory lane regarding optimization. I discovered fixed point pretty early in my programming career when doing 3D graphics on the 6502. I never imagined that knowledge would come in handy almost five decades later, but here we are.
Absolutely nobody will think this is 'clearer'. It's a leaky abstraction, and personally I think the OP is right: == in combination with floating point constants should be limited to '0', and that's it.
We all know that 1/3 + 1/3 + 1/3 = 1, but 0.33 + 0.33 + 0.33 = 0.99. We're sufficiently used to decimal to know that 1/3 doesn't have a finite decimal representation. Decimal 1/10 doesn't have a finite binary representation for the exact same reason 1/3 has none in decimal: 3 divides no power of 10, and the factor 5 in 10 divides no power of 2.
The only leaky abstraction here is our bias towards decimal. (Fun fact: "base 10" is meaningless, because every base calls itself base 10)
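You can see the decimal bias directly from Python, since `Decimal` constructed from a float shows the exact value the double actually stores:

```python
from decimal import Decimal

# The double nearest to 0.1 is slightly above 0.1; Decimal reveals its exact value.
print(Decimal(0.1))            # 0.1000000000000000055511151231257827021181583404541015625
print(0.1 + 0.1 + 0.1 == 0.3)  # False: each rounded 0.1 carries its own tiny error
print(0.25 + 0.25 == 0.5)      # True: 1/4 and 1/2 are exact in binary
```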
> they (might) have a float, and are using the `==` operator, they're doing something wrong.
Storage, retrieval, transmission, and serialization/deserialization systems should be able to transmit and round-trip floats without losing any bits at all.
Floats break the basic expectation of == for round-trip verification, not due to programmer error, but because NaN is non-reflexive by spec. A bit-perfect round-trip can reproduce the exact bit pattern and still fail an equality check. The problem is intrinsic to the type, not the operator.
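A small sketch of that failure mode, round-tripping a NaN through its raw bytes:

```python
import struct

x = float("nan")
print(x == x)                          # False: IEEE 754 makes NaN compare unequal to itself

# Serialize and deserialize: the bit pattern survives the round trip...
packed = struct.pack("<d", x)
y, = struct.unpack("<d", packed)
print(struct.pack("<d", y) == packed)  # True: bit-for-bit identical
print(y == x)                          # ...but == still says False
```

So a byte-level comparison verifies the round trip correctly, while `==` on the decoded values does not.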
Well, there are many legitimate cases for using the equality operator. Insisting that anyone who uses it is doing something wrong is itself downright wrong, and whoever insists on that shouldn't be teaching floating-point numbers.
A few use cases: floating-point values that differ from default or initial values and carry meaning, e.g. 0 or 1 translating to omitting entire operations. Then there is also the case of measuring the tiniest possible variation, when relative tolerances are not what you want. Not exhaustive.
If you use == with fp, it only means you should've thought about it thoroughly.
Sure, division might be a tad more surprising, though, since most don't do that on an everyday basis. The specific case we had was when a colleague had rewritten
(a / b) * (c / d) * (e / f)
to
(a * c * e) / (b * d * f)
as a performance optimization. The result of each division in the original was roughly one due to how the variables were computed, but the latter was sometimes unstable because the products could produce denormalized (subnormal) numbers.
There are plenty of cases where '==' is correct. If you understand how floating point numbers work at the same depth you understand integers, then you may know the result of each side and know there's zero error.
Anything done with "approximately close" comparisons is much slower and prone to even subtler bugs (often trading immediately visible bugs for bugs that are much harder to find and fix).
For example, I routinely make unit tests with inputs designed so answers are perfectly representable, so tests do bit exact compares, to ensure algorithms work as designed.
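For instance (a made-up illustration, not the commenter's actual test suite), a mean over inputs that are small sums of powers of two keeps every intermediate sum and the final division exact, so a bit-exact compare is legitimate:

```python
# Inputs chosen so every partial sum (2.0, 4.5, 8.0) and the final
# division are exactly representable doubles: == is safe here.
def mean(xs):
    return sum(xs) / len(xs)

assert mean([0.5, 1.5, 2.5, 3.5]) == 2.0
assert mean([0.25, 0.75]) == 0.5
print("exact compares passed")
```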
I’d rather teach students there’s subtlety here with some tradeoffs.
If they were taught what was representable and why they’d learn it quickly. And those that forget details later know to chase it down again if they need it. Making it voodoo hides that it’s learnable, deterministic, and useful to understand.
Tell them that they can only store integer powers of 2, and sums of them, exactly. 2^0 == 1. 2^-2 == .25. Then say it's the same with base 10: 10^-1 == 0.1. 1/9 isn't a finite sum of powers of 10, so you can't have an exact representation.
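Python's `float.hex` makes this concrete: sums of powers of two have short exact expansions, while anything else shows a rounded repeating pattern.

```python
# float.hex prints the exact stored binary value in hexadecimal.
print((0.25).hex())    # 0x1.0000000000000p-2   exact: 2^-2
print((0.625).hex())   # 0x1.4000000000000p-1   exact: 2^-1 + 2^-3
print((0.1).hex())     # 0x1.999999999999ap-4   repeating 9s, rounded at the end
```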
Yeah I'd argue that the beginner friendly version of the rule is probably "Never use exact == or != for floating point variables" and the slightly more advanced one is "Don't use it unless the value you are comparing to is the constant 0.0".
I wish that (still) worked reliably, but it can unfortunately get one into trouble with some compilers and optimization modes that assume NaNs never occur (e.g. GCC/Clang's -ffast-math, which implies -ffinite-math-only).
I have a linter in my code that shouts at me if I use exact equality for floats.
But I regret not making an exception for the constant zero, because it's one of the cases where you probably should accept it. I.e. if (f != 0.0) {...}
Zero shouldn't be an exception there. If f had been set from something like f = a - b, then you're in the same situation where f might be almost but not exactly zero.
The linter wouldn't know where f came from, so it should flag all floating point equality cases, and have some way that you can annotate it for "yeah this one is okay."
if (f == 0.0) means "is f exactly zero so it's not initialized" 99 times for every one time it means "is f zero-ish because of a cancellation/degeneracy/whatever"
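A sketch of that common case (names are made up): the value is only ever assigned, never computed, so the exact compare is well defined because 0.0 is stored exactly.

```python
# 0.0 as a "not configured" sentinel: gain is set from a literal or
# user input, never from arithmetic, so an exact compare is safe.
gain = 0.0                  # default: no gain configured

apply_gain = gain != 0.0
print(apply_gain)           # False while still at the default
```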
I just found that I have now annotated it for "yeah this one is ok" about 100 times, and caught zero cases where I meant to do a comparison to zero-or-very-nearly-so but accidentally wrote == 0.0.
So my conclusion is: I would have had less noise in my code with that exception in the linter, and the linter would have been equally useful.
The idea is not to do it with values derived from arithmetic, but e.g. from measurements where a real zero is very unlikely and indicates something different.
What Every Computer Scientist Should Know About Floating-Point Arithmetic - https://news.ycombinator.com/item?id=3808168 - April 2012 (3 comments)
What Every Computer Scientist Should Know About Floating-Point Arithmetic - https://news.ycombinator.com/item?id=1982332 - Dec 2010 (14 comments)
What Every Computer Scientist Should Know About Floating-Point Arithmetic - https://news.ycombinator.com/item?id=1746797 - Oct 2010 (2 comments)
Weekend project: What Every Programmer Should Know About FP Arithmetic - https://news.ycombinator.com/item?id=1257610 - April 2010 (9 comments)
What every computer scientist should know about floating-point arithmetic - https://news.ycombinator.com/item?id=687604 - July 2009 (2 comments)