It was not, but for the most part languages don't default to arbitrary precision (i.e., simple mathematical operators with decimal literals typically get you binary floating point, not arbitrary-precision decimal), even if they have arbitrary precision available at the language or standard-library level.
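To make that concrete, here's a quick Python sketch (Python just because it's handy; most mainstream languages behave the same way): the decimal literal is parsed as binary floating point, and you have to opt into the standard library's decimal type to get decimal semantics.

    # The literal 0.1 is parsed as an IEEE 754 binary double, so the
    # familiar representation error shows up immediately.
    print(0.1 + 0.2)                 # 0.30000000000000004
    print(0.1 + 0.2 == 0.3)          # False

    # Decimal arithmetic exists in the standard library, but it's opt-in,
    # and its precision is user-settable (default is 28 significant digits).
    from decimal import Decimal, getcontext
    getcontext().prec = 50           # crank precision up if you want it
    print(Decimal("0.1") + Decimal("0.2"))                    # 0.3
    print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))  # True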
Guess I was wrong. It's fixed point, and also what you said: COBOL defaults to it. It still makes it easier to write software that needs to do a lot of calculations like that.
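For what it's worth, here's roughly the kind of calculation I mean, sketched with Python's decimal module since I don't have COBOL handy; the add_tax helper and the 8.25% rate are made up for illustration:

    from decimal import Decimal, ROUND_HALF_UP

    CENT = Decimal("0.01")

    def add_tax(price, rate):
        # Quantize to cents after the multiply -- a rough stand-in for what a
        # COBOL picture clause like PIC 9(7)V99 gives you (COBOL truncates
        # unless you say ROUNDED, so this isn't an exact match).
        return (price * (1 + rate)).quantize(CENT, rounding=ROUND_HALF_UP)

    subtotal = sum((Decimal(p) for p in ["19.99", "4.05", "0.10"]), Decimal("0"))
    print(add_tax(subtotal, Decimal("0.0825")))   # 26.13

The difference is that in COBOL you declare the field once and get that behavior everywhere, instead of having to remember to quantize at every step.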
Maybe that was true 20 years ago.