You don't need "strict typing" to handle money; in the old (COBOL) days, we used BCD to represent monetary amounts with arbitrary precision. When they took away BCD, we were stuck if we wanted to build a system that could represent a large sum correctly in both dollars and yen.
You're right; but COBOL's BCD types allowed arbitrary precision, and it was super easy to debug data: the hex representation was the same as the decimal representation.
BCD is simply "work in base ten on a base two digital computer". What made it good was that it enforced a discipline with the same pattern of rounding errors as base-ten arithmetic on pencil and paper. This was particularly attractive when computers were new and replacing "doing it by hand". Bankers were nervous about the new systems screwing everything up, and they wanted the new system to demonstrate that it would produce the exact same results as the old system.
To give an illustrative example: what's 2/3 of a dollar? 66 cents or 67 cents? One or the other; choose the same one you would choose with pencil and paper. Now add 33 cents: did you "overflow" the cents and need to increment the dollars?
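Roughly, in Python's decimal module (just a sketch; ROUND_HALF_UP is my assumption, use whatever your pencil-and-paper rule is):

    from decimal import Decimal, ROUND_HALF_UP

    CENT = Decimal("0.01")

    # 2/3 of a dollar, rounded to a whole cent the way you would on paper
    two_thirds = (Decimal(2) / Decimal(3)).quantize(CENT, rounding=ROUND_HALF_UP)
    print(two_thirds)                      # 0.67  (we chose 67 cents, not 66)

    # Add 33 cents: the carry out of the cents column happens just as it would by hand
    print(two_thirds + Decimal("0.33"))    # 1.00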
Yeah, you can achieve the same thing with binary by constantly checking ranges of numbers, but the difference is that when you screw up your BCD code, it produces errors similar to adding numbers by hand, errors recognizable by your non-computer-literate accountant; binary screwups will produce a different, unrecognizable pattern of errors.
The way it worked was pretty straightforward: just as 4 bits covers hex 0-F and 8 bits covers 0x00 to 0xFF, a packed BCD byte holds two decimal digits, 00-99, and the bit patterns for A-F simply never appear. This was enforced in hardware, in the CPU/ALU.
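A rough sketch of the nibble layout in Python (the real enforcement was in the ALU; to_packed_bcd/from_packed_bcd are just illustrative names):

    def to_packed_bcd(n: int) -> bytes:
        """Pack a non-negative integer as packed BCD: one decimal digit per 4-bit nibble."""
        digits = str(n)
        if len(digits) % 2:                      # pad to an even number of digits
            digits = "0" + digits
        return bytes((int(hi) << 4) | int(lo)    # each nibble is 0-9; A-F never appear
                     for hi, lo in zip(digits[0::2], digits[1::2]))

    def from_packed_bcd(b: bytes) -> int:
        return int(b.hex())                      # the hex dump *is* the decimal number

    print(to_packed_bcd(1234).hex())             # '1234'
    print(from_packed_bcd(to_packed_bcd(1234)))  # 1234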
In terms of multi-currency, it's the same thing: you'll see the same familiar rounding problems as traditional pencil-and-paper money-changing systems.
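Something like this, sketched with Python's decimal module (the rate is made up; JPY has no minor unit, so you round to a whole yen):

    from decimal import Decimal, ROUND_HALF_UP

    rate = Decimal("151.37")        # made-up USD->JPY rate, purely illustrative
    usd = Decimal("19.99")

    # Yen has no minor unit, so round the converted amount to a whole yen,
    # the same way a money changer rounds at the counter
    jpy = (usd * rate).quantize(Decimal("1"), rounding=ROUND_HALF_UP)
    print(jpy)                      # 3026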
Also, the same set of issues extends to fixed-point implementations of "floating point"/"decimal fraction"/"rational number" systems more common in engineering. In base ten, 1/3 is a repeating fraction, .33333...; 1/5 is .2, no repetition, because 2x5=10. In binary, 1/5 is a repeating fraction, which is not good for comparing results, rounding, etc. And you can easily see that the same issue applies to currency too (it was my example above with the 67 cents); it's just a bit less visible because it's less common to use extended fractional amounts.
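A quick illustration of the symptom with Python's float and decimal types (just a sketch, not a prescription):

    from decimal import Decimal

    # 1/5 (0.2) terminates in base ten but repeats in base two;
    # this prints the long binary approximation a float actually stores:
    print(Decimal(0.2))

    # The familiar symptom: repeated binary rounding drifts off the paper answer
    print(sum([0.1] * 10) == 1.0)                      # False
    print(sum([Decimal("0.1")] * 10) == Decimal("1"))  # True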
I appreciated BCD because the in-memory data, represented in hex (as by an '80s-era debugger), was exactly the decimal value. As I recall, debuggers of that time only understood two datatypes: ASCII characters and hex octets.
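For anyone who hasn't seen it, here's a rough Python sketch of why the hex dump reads back as the decimal value for packed BCD but not for a plain binary integer:

    import struct

    amount = 199999                  # $1,999.99 kept as a whole number of cents

    # Plain binary: the hex dump tells you nothing obvious about the decimal value
    print(struct.pack(">I", amount).hex())   # '00030d3f'

    # Packed BCD: one decimal digit per nibble, so the dump reads as the number itself
    bcd = bytes((int(hi) << 4) | int(lo)
                for hi, lo in zip("199999"[0::2], "199999"[1::2]))
    print(bcd.hex())                         # '199999'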
On consideration, I think my COBOL compiler's ability to define arbitrary-precision fixed-length numerical variables wasn't down to the use of BCD; you can do that with other binary encodings. But I worked for Burroughs at the time; their processors had hardware support for BCD arithmetic, so it was fast. The debugging convenience came with no great cost.
Binary Coded Decimal allows for perfect representation of numbers by not restricting you to 4 or 8 bytes. It trades speed and memory efficiency for precision and simplicity.
It allows, like all encodings, a perfect representation of some subset of real numbers but not the rest.
In particular, it perfectly represents numbers commonly used in modern commerce, like 19.99 or 1.648 (the current price per litre of fuel near me). It's not great at other numbers, like pi or 1/240.
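A quick illustration using Python's Decimal as a stand-in for a decimal/BCD encoding (the precision setting is arbitrary):

    from decimal import Decimal, getcontext

    # Commerce-style numbers are exact in a decimal encoding
    print(Decimal("19.99") + Decimal("1.648"))   # 21.638, no surprises

    # Numbers whose base-ten expansion doesn't terminate are not
    getcontext().prec = 12
    print(Decimal(1) / Decimal(240))             # 0.00416666666667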
You still don't know whether something is dollars or cents. Some things are conventionally priced in dollars, others in cents, and your customers are going to be very unhappy if they have to enter or read the price in the "wrong" unit.
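One way to pin the unit down, as a rough sketch (the Money class and its fields are hypothetical, not anyone's actual API):

    from dataclasses import dataclass
    from decimal import Decimal

    # Carry the currency and its minor-unit exponent with the amount, so a value
    # can never be silently read as dollars when it was stored as cents.
    @dataclass(frozen=True)
    class Money:
        minor_units: int    # e.g. cents for USD, whole yen for JPY
        currency: str       # ISO 4217 code
        exponent: int       # 2 for USD, 0 for JPY

        def display(self) -> str:
            return f"{Decimal(self.minor_units).scaleb(-self.exponent)} {self.currency}"

    print(Money(1999, "USD", 2).display())   # 19.99 USD
    print(Money(1999, "JPY", 0).display())   # 1999 JPY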
COBOL was pretty good for dealing with money.