> NASA uses 15 digits for pi because that's the default and it is enough accuracy for them. The interesting question is why is that the default.
The general rule of thumb in numerical analysis is that you need roughly twice as much working precision as output precision. Double-precision floating point has ~16 decimal digits of precision, which means the output should generally be good to ~8 decimal digits; with single precision, you have ~7 decimal digits of working precision, or about 3-4 decimal digits of output precision.
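To make the rule of thumb concrete, here's a quick sketch using NumPy's cumsum as a naive sequential running sum, showing how single precision burns through its digits while double precision barely notices. The exact errors you see will vary with platform and summation order; this is an illustration, not a benchmark:

```python
import numpy as np

n = 10_000_000  # sum 0.1 ten million times; the exact answer is 1,000,000
xs32 = np.full(n, 0.1, dtype=np.float32)
xs64 = np.full(n, 0.1, dtype=np.float64)

# cumsum accumulates sequentially in the array's own dtype, so the last
# element is what a naive running-sum loop would produce
s32 = np.cumsum(xs32)[-1]
s64 = np.cumsum(xs64)[-1]

rel32 = abs(s32 - 1e6) / 1e6
rel64 = abs(s64 - 1e6) / 1e6
print(f"float32 sum: {s32:.1f}  (relative error {rel32:.1e})")
print(f"float64 sum: {s64:.1f}  (relative error {rel64:.1e})")
```

With float32, once the partial sum grows large each added 0.1 gets rounded to a coarse grid, and the accumulated drift wipes out most of the ~7 working digits; the float64 sum keeps plenty of digits to spare.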
In other words, a 32-bit floating-point number doesn't leave you with enough useful precision for many cases, whereas a 64-bit floating-point number is good for most use cases.
> That goes back to the Intel 8087 chip, the floating-point coprocessor for the IBM PC. A double-precision real in the 8087 provided ~15 digits of accuracy, because that's the way Berkeley floating-point expert William Kahan designed its number representation. This representation was standardized and became the IEEE 754 floating point standard that almost everyone uses now.
It predates the 8087! The VAX had 64-bit floats with precision similar to IEEE 754 double precision. There are probably even older uses of 64-ish-bit floating-point types, but my knowledge of computers in the '60s and '70s is pretty poor. I fully expect you'd see similar results on those machines, though: you need enough decimal digits of working precision, and word-sized floating-point numbers are just too small to have enough.
The 8087 itself doesn't use the double-precision type internally; it uses an 80-bit type, which has a 64-bit mantissa (~19 decimal digits). The main reason for the 80-bit type, though, was to get higher precision for intermediate results when implementing transcendental functions.
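You can inspect the extra headroom where the extended format is exposed. For example, NumPy's `longdouble` maps to the x87-style 80-bit format on x86 Linux; on other platforms (Windows, ARM) it may just alias the 64-bit double, so the numbers below are platform-dependent:

```python
import numpy as np

# Machine epsilon: the gap between 1.0 and the next representable value.
eps64 = np.finfo(np.float64).eps       # ~2.2e-16, i.e. ~15-16 decimal digits
eps_ext = np.finfo(np.longdouble).eps  # ~1.1e-19 on x86, where longdouble is
                                       # the 8087's 80-bit extended format

print("float64 digits:   ", np.finfo(np.float64).precision)
print("longdouble digits:", np.finfo(np.longdouble).precision)
```

On an x86 Linux box this reports 15 vs 18 digits; intermediate results held in the 80-bit registers pick up those extra digits for free before being rounded back to double.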
This is something I wish were captured in programming more often. The well-loved library academics swear by is https://github.com/SBECK-github/Math-SigFigs, but I wish there were a built-in "scientific number" type in every language that would make it easy for everyone to use.
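For illustration only, here's a toy sketch of what such a type might look like; `SciNum` is a hypothetical name and this is not the Math-SigFigs API, just the basic idea of carrying a significant-figure count alongside the value:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SciNum:
    """Toy value-with-significant-figures type (hypothetical sketch)."""
    value: float
    sigfigs: int

    def __mul__(self, other: "SciNum") -> "SciNum":
        # Standard sig-fig rule: a product keeps the smaller
        # significant-figure count of its operands.
        return SciNum(self.value * other.value,
                      min(self.sigfigs, other.sigfigs))

    def __str__(self) -> str:
        # Render rounded to the tracked number of significant figures.
        return f"{self.value:.{self.sigfigs}g}"

print(SciNum(3.14159, 3) * SciNum(2.0, 2))  # prints "6.3"
```

A real implementation would also need rules for addition (decimal places rather than sig figs), exact constants, and uncertainty propagation, which is where libraries like Math-SigFigs earn their keep.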