I'm not sure what I think about that. I know when to use decimal/rational types vs floats, but my own Python code has a whole lot more float() calls than Decimal()s. Floats are almost always what I want unless I'm directly working with money.
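For the money case I mean something like this (just a sketch; the price and tax rate are made up):

    from decimal import Decimal, ROUND_HALF_UP

    # 7% tax on a $19.99 price (made-up numbers): Decimal keeps the arithmetic
    # in base 10 and lets me round to cents explicitly; with binary floats the
    # product is only an approximation of 21.3893 before I even get to rounding.
    price = Decimal("19.99")
    total = (price * Decimal("1.07")).quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)
    print(total)  # 21.39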
Are they almost always what you want? Like, what kind of code do you work on? I personally find that I need floats quite rarely and that rationals are usually just fine when I write Scheme. And when I write Python, floats also don't come up that often, except in places where I would rather have exact numbers than inexact ones, which always leaves me doubting whether what I'm calculating has too large an error in it.
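A minimal Python illustration of the doubt I mean, using fractions.Fraction as a stand-in for Scheme's exact rationals:

    from fractions import Fraction

    # The usual suspect: these tenths have no exact binary representation,
    # so the float comparison fails, while the rational one is exact.
    print(0.1 + 0.2 == 0.3)                                      # False
    print(Fraction(1, 10) + Fraction(2, 10) == Fraction(3, 10))  # True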
It's mostly in calculating percentages, measuring timing, computing the interval to wait to satisfy a rate limit, that sort of thing. None of those require the extra overhead of an exact datatype in the contexts I'm using them.
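The rate-limit case, for example, is roughly this kind of thing (a sketch; the 10-per-second budget and the names are invented):

    import time

    RATE = 10.0  # hypothetical budget: 10 requests per second
    _last = 0.0

    def wait_for_slot():
        # Float seconds from time.monotonic() are plenty precise here;
        # being off by a few microseconds doesn't matter for a rate limiter.
        global _last
        now = time.monotonic()
        sleep_for = _last + 1.0 / RATE - now
        if sleep_for > 0:
            time.sleep(sleep_for)
        _last = time.monotonic()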
It's funny that you're talking about this in a Python context, because pretty much every line of Python already carries 50-100x overhead compared to compiled code. I take your point that Python would probably slow this kind of thing down even more than it should, but still... Python is exactly the kind of language where generalizing to something slower but "more correct" makes sense, in my opinion.
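If anyone wants rough, machine-dependent numbers on that, something like this sketch (the same tiny expression timed with three number types) gives a feel for it:

    import timeit
    from decimal import Decimal
    from fractions import Fraction

    # The ratios between the timings are the interesting part, not the
    # absolute numbers, which depend entirely on the machine.
    for x, y in [(1.07, 19.99),
                 (Decimal("1.07"), Decimal("19.99")),
                 (Fraction(107, 100), Fraction(1999, 100))]:
        print(type(x).__name__,
              timeit.timeit("x * y + x", globals={"x": x, "y": y}, number=100_000))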
For what it's worth, I agree with you when it comes to languages where a 50x-plus overhead isn't an ever-present fact; there you should generally have to opt in to using decimal types and the like.