I have an exception that proves the rule. I thought about responding to Julia's call, but decided this was too subtle. But here we go...
A central primitive in 2D computational geometry is the orientation problem: in this case, deciding whether a point lies to the left or right of a line. In real arithmetic, the classic way to solve it is to set up the line equation (so its value is zero for points on the line), evaluate it at the given point, and test the sign.
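Concretely, that classic test looks something like this. This is a minimal sketch in Rust; the function name and coordinate layout are mine, not from any particular library:

```rust
/// Classic orientation test in real arithmetic. The line through a and b
/// has implicit equation (bx - ax) * (y - ay) - (by - ay) * (x - ax) = 0,
/// which is zero exactly for points on the line; evaluating it at c and
/// taking the sign says which side c is on.
fn orient(ax: f64, ay: f64, bx: f64, by: f64, cx: f64, cy: f64) -> f64 {
    (bx - ax) * (cy - ay) - (by - ay) * (cx - ax)
}

fn main() {
    let o = |cy| orient(0.0, 0.0, 1.0, 0.0, 0.5, cy);
    println!("{}", o(1.0));  // positive: left of the directed line a -> b
    println!("{}", o(-1.0)); // negative: right
    println!("{}", o(0.0));  // zero: on the line
}
```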
The problem, of course, is that for points very near the line, roundoff error can give the wrong answer; this is in fact an example of catastrophic cancellation. The problem has an exact answer, and can be solved with rational numbers, or with a related technique: detect when you're in the danger zone and increase the floating point precision only in those cases. (This technique is the basis of Jonathan Shewchuk's thesis.)
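To make the "danger zone" idea concrete, here is a sketch of a filtered predicate in the spirit of Shewchuk's adaptive predicates: trust the floating point sign when it clears a conservative error bound, and fall back to exact arithmetic otherwise. The error bound constant follows Shewchuk's ccwerrboundA; the i128 fallback is a simplifying assumption of mine (integer-valued coordinates), standing in for his floating point expansion arithmetic, which handles arbitrary doubles:

```rust
/// Filtered orientation test (a sketch; names are mine). Returns a value
/// with the correct sign: positive if c is left of the directed line a -> b.
fn orient_filtered(ax: f64, ay: f64, bx: f64, by: f64, cx: f64, cy: f64) -> f64 {
    let detleft = (ax - cx) * (by - cy);
    let detright = (ay - cy) * (bx - cx);
    let det = detleft - detright;

    // Conservative error bound; the constant is Shewchuk's ccwerrboundA,
    // with eps = 2^-53 (half of Rust's f64::EPSILON).
    let eps = f64::EPSILON / 2.0;
    let errbound = (3.0 + 16.0 * eps) * eps * (detleft.abs() + detright.abs());

    if det > errbound || -det > errbound {
        // Far from the line: roundoff cannot have flipped the sign.
        return det;
    }

    // Danger zone: recompute exactly. Simplifying assumption for this
    // sketch: coordinates are integer-valued, so i128 arithmetic is exact.
    let exact = (ax as i128 - cx as i128) * (by as i128 - cy as i128)
        - (ay as i128 - cy as i128) * (bx as i128 - cx as i128);
    exact.signum() as f64
}
```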
However, in work I'm doing, I want to take a different approach. If the y coordinate of the point matches the y coordinate of one of the line's endpoints, then you can determine the orientation exactly by comparing the x coordinates. In other cases, either you're far enough away from the line that roundoff can't flip the answer, or you can subdivide the line at that y coordinate. The resulting orientation is not necessarily exactly correct with respect to the original line, but you can count on it being consistent, which is what you really care about.
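Here is one way that might look in code. To be clear, this is my own sketch of the idea as just described, not actual code from that work; it assumes an upward-directed line (the downward case is symmetric) and reuses the error-bound filter from above for the "far enough away" test:

```rust
/// Consistency-oriented orientation test (a hypothetical sketch).
/// Returns +1 if c is left of the upward line a -> b, -1 if right, 0 if on.
fn orient_consistent(ax: f64, ay: f64, bx: f64, by: f64, cx: f64, cy: f64) -> i32 {
    debug_assert!(ay < by, "this sketch assumes an upward-directed line");

    // Exact case: the point's y matches an endpoint's y, so orientation
    // reduces to an exact x comparison (for an upward line, smaller x is left).
    if cy == ay {
        return sign(ax - cx);
    }
    if cy == by {
        return sign(bx - cx);
    }

    // Fast path: evaluate the line equation; if the result clears a
    // conservative error bound (Shewchuk's ccwerrboundA again), we're far
    // enough away that roundoff cannot have flipped the sign.
    let detleft = (ax - cx) * (by - cy);
    let detright = (ay - cy) * (bx - cx);
    let det = detleft - detright;
    let eps = f64::EPSILON / 2.0;
    let errbound = (3.0 + 16.0 * eps) * eps * (detleft.abs() + detright.abs());
    if det.abs() > errbound {
        return sign(det);
    }

    // Danger zone: subdivide the line at y = cy. The split point's y is
    // exactly cy by construction, putting us back in the exact-comparison
    // case. The rounded x_split may not lie exactly on the original line,
    // but every query at this y sees the same split point, which is what
    // makes the answers mutually consistent.
    let x_split = ax + (bx - ax) * ((cy - ay) / (by - ay));
    sign(x_split - cx)
}

fn sign(x: f64) -> i32 {
    if x > 0.0 { 1 } else if x < 0.0 { -1 } else { 0 }
}
```

Note that the subdivision case bottoms out immediately: the split point's y coordinate equals the query's y coordinate by construction, so the answer is again an exact x comparison.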
So the ironic thing is that if you had a lint that said, "exact floating point equality is dangerous; you should use a within-epsilon test instead," it would break the reasoning outlined above, and you could no longer count on the orientations being consistent.
As I said, though, this is a very special case. Almost always, a fuzzy test is better than exact equality, and I can also list times I've been bitten by exact comparisons (especially under fastmath conditions, which are hard to avoid when you're doing GPU programming).