Indeed, 0.1 can be represented exactly in decimal floating point, but not in binary at all, whether fixed or floating point. It's just that fractional values are currently almost always represented using binary floating point, so "floating point" and "binary representation" get conflated.
The reason the 0.1 case is weird (unexpected) is that we use decimal notation for floating-point constants (in source code, in formats like JSON, and in UI number inputs), but the value the constant actually ends up representing is the closest binary number, and how close that is depends on the FP precision used. If we wrote FP values in binary or hexadecimal (which some languages support), the issue wouldn't arise.
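A quick Python sketch of the mismatch (`Decimal` and hex float literals are both standard features, so nothing here is hypothetical):

```python
from decimal import Decimal

# Decimal floating point: "0.1" really means 1 * 10**-1, exactly.
print(Decimal("0.1") + Decimal("0.2"))  # 0.3

# The binary float literal 0.1 actually denotes the nearest double.
# Converting that double to Decimal shows its true value:
print(Decimal(0.1))   # 0.1000000000000000055511151231257827...
print(0.1 + 0.2)      # 0.30000000000000004

# Hexadecimal float notation names the binary value directly,
# so there is no decimal-to-binary rounding step:
print((0.1).hex())                                    # 0x1.999999999999ap-4
print(float.fromhex("0x1.999999999999ap-4") == 0.1)   # True
```

The hex form makes the point concrete: the source text and the stored value match exactly, which is precisely what decimal constants can't guarantee in a binary format.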