That NASA article kind of misses the point. NASA uses 15 digits for pi because that's the default and it's enough accuracy for them. The interesting question is why that's the default. That goes back to the Intel 8087 chip, the floating-point coprocessor for the IBM PC. A double-precision real in the 8087 provided ~15 digits of accuracy, because that's the way Berkeley floating-point expert William Kahan designed its number representation. This representation was standardized and became the IEEE 754 floating-point standard that almost everyone uses now.
By the way, the first Ariane 5 launch blew up because of a floating-point error, specifically an overflow when converting a 64-bit float to a 16-bit signed integer. So be careful with floats!
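A minimal sketch of that failure mode in C (the real Ariane software was Ada, and the variable name here is made up for illustration). In C the out-of-range conversion itself is undefined behavior, which is exactly why the range check has to come first:

```c
#include <stdint.h>
#include <stdio.h>

int main(void) {
    /* Hypothetical sensor-derived value, far too big for a 16-bit integer. */
    double horizontal_bias = 40000.0;

    /* INT16_MAX is 32767, so 40000.0 does not fit. Converting an
     * out-of-range double to int16_t is undefined behavior in C, so
     * the check must happen before the cast. */
    if (horizontal_bias > INT16_MAX || horizontal_bias < INT16_MIN) {
        fprintf(stderr, "conversion would overflow: %f\n", horizontal_bias);
        return 1;
    }

    int16_t packed = (int16_t)horizontal_bias;  /* only reached when it fits */
    printf("packed value: %d\n", packed);
    return 0;
}
```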
> NASA uses 15 digits for pi because that's the default and it's enough accuracy for them. The interesting question is why that's the default.
The general rule of thumb in numerical analysis is that you need roughly twice as much working precision as output precision. Double-precision floating point has ~16 decimal digits of precision, which means the output should generally be good for ~8 decimal digits; with single precision, you have ~7 decimal digits of working precision, or about 3-4 decimal digits of output precision.
In other words, a 32-bit floating-point number doesn't leave you with enough useful precision for many cases, whereas a 64-bit floating-point number is good for most use cases.
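A rough illustration of that rule of thumb, as a C sketch (the constant and loop count are arbitrary, just chosen to make the error visible): the same naive summation done in float and in double, where single precision loses most of its output digits and double precision keeps plenty.

```c
#include <stdio.h>

int main(void) {
    float  sum_f = 0.0f;
    double sum_d = 0.0;

    /* Add 0.1 ten million times; the exact answer is 1,000,000. */
    for (int i = 0; i < 10000000; i++) {
        sum_f += 0.1f;
        sum_d += 0.1;
    }

    /* Typically the float result has only a handful of correct digits left,
     * while the double result matches 1,000,000 to roughly 10 digits. */
    printf("float  sum: %.4f\n", sum_f);
    printf("double sum: %.4f\n", sum_d);
    return 0;
}
```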
> That goes back to the Intel 8087 chip, the floating-point coprocessor for the IBM PC. A double-precision real in the 8087 provided ~15 digits of accuracy, because that's the way Berkeley floating-point expert William Kahan designed its number representation. This representation was standardized and became the IEEE 754 floating point standard that almost everyone uses now.
It predates the 8087! The VAX had 64-bit floats with similar precision to IEEE 754 double precision. There are probably even older uses of 64-ish-bit floating-point types, but my knowledge of computers in the '60s and '70s is pretty poor. I fully expect you'd see similar results on those computers, though: you need enough decimal digits for working precision, and word-sized floating-point numbers are just too small to have enough.
The 8087 itself doesn't use the double-precision type internally; it uses an 80-bit type, which has 64 bits of mantissa (~19 decimal digits), although the reason for the 80-bit type is primarily to get higher precision for intermediate results when implementing transcendental functions.
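For what it's worth, <float.h> reports these precisions directly. A small sketch, assuming an x86 toolchain where long double maps to the 8087-style 80-bit extended format (that mapping is compiler- and platform-dependent; e.g. MSVC makes long double the same as double):

```c
#include <float.h>
#include <stdio.h>

int main(void) {
    printf("float:       %2d decimal digits, epsilon = %g\n",  FLT_DIG,  FLT_EPSILON);
    printf("double:      %2d decimal digits, epsilon = %g\n",  DBL_DIG,  DBL_EPSILON);
    printf("long double: %2d decimal digits, epsilon = %Lg\n", LDBL_DIG, LDBL_EPSILON);
    /* With an 80-bit long double this typically prints 6, 15, and 18 digits,
     * matching the ~15 and ~19 digit figures discussed above. */
    return 0;
}
```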
This is something I wish were captured in programming more often. The well-loved library academics swear by is https://github.com/SBECK-github/Math-SigFigs but I wish there were a built-in "scientific number" type in every language that could make it easy for everyone to use.
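For what that might look like, here's a minimal sketch of a value-plus-significant-figures type in C. The names (sci_num, sci_mul) are made up for illustration, and this only covers the multiplication rule, nowhere near what Math-SigFigs actually implements:

```c
#include <stdio.h>

typedef struct {
    double value;
    int    sig_figs;   /* how many digits of the value are meaningful */
} sci_num;

/* For multiplication, the usual rule is that the result keeps only as many
 * significant figures as the least precise operand. */
static sci_num sci_mul(sci_num a, sci_num b) {
    sci_num r;
    r.sig_figs = a.sig_figs < b.sig_figs ? a.sig_figs : b.sig_figs;
    r.value = a.value * b.value;
    return r;
}

int main(void) {
    sci_num radius = { 2.5, 2 };               /* measured to 2 sig figs */
    sci_num pi     = { 3.14159265358979, 15 };
    sci_num two    = { 2.0, 15 };

    sci_num circumference = sci_mul(sci_mul(pi, radius), two);
    /* Prints "16 (to 2 significant figures)": the 2-sig-fig radius limits
     * the precision of the whole result. */
    printf("%.*g (to %d significant figures)\n",
           circumference.sig_figs, circumference.value, circumference.sig_figs);
    return 0;
}
```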
> A double-precision real in the 8087 provided ~15 digits of accuracy, because that's the way Berkeley floating-point expert William Kahan designed its number representation
Kahan was the co-architect of the 8087. Palmer (at Intel) hired Kahan as a consultant for the 8087 since Kahan was an expert on floating point. Kahan says: "Intel had decided they wanted really good arithmetic. I suggested that DEC VAX's floating-point be copied because it was very good for its time. But Intel wanted the 'best' arithmetic. Palmer told me they expected to sell vast numbers of these co-processors, so 'best' meant 'best for a market much broader than anyone else contemplated' at that time. He and I put together feasible specifications for that 'best' arithmetic." Kahan and Palmer were co-authors on various papers about the 8087, and then Kahan and others authored the IEEE 754 standard.
And from Kahan's Turing Award citation: "During a long and productive relationship with Intel he specified the design for its floating-point arithmetic on several chips starting with the 8087" https://amturing.acm.org/award_winners/kahan_1023746.cfm
I think you're missing the point. The 8087's double-precision limit of ~15 decimal digits wasn't arbitrary -- the purpose of that device was to perform practically useful computations, so that amount of precision was chosen by the designers as a reasonable engineering trade-off.
IOW, the likely reason for not storing more mantissa bits than that is that someone involved in designing the 8087 determined that even NASA wouldn't need more precision than that.