In Python, 0.3 prints as 0.3, but it is stored as an IEEE 754 double, so its exact value is 0.299999999999999988897769753748434595763683319091796875 (according to the article; the 0.1 + 0.2 != 0.3 trick also works)
What controls this rounding?
e.g., in an interactive Python prompt I get:
>>> b = 0.299999999999999988897769753748434595763683319091796875
>>> b
0.3
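What controls it is Python's repr for floats: since Python 3.1 it picks the shortest decimal string that parses back to the exact same double. A quick sketch of how to observe this, using the standard `decimal` module to inspect the exact stored value:

```python
from decimal import Decimal

# The literal 0.3 is stored as the nearest IEEE 754 double;
# Decimal(float) converts losslessly and shows the exact stored value.
exact = Decimal(0.3)
print(exact)  # 0.299999999999999988897769753748434595763683319091796875

# repr() chooses the shortest decimal string that round-trips
# to the same double (the behavior since Python 3.1).
print(repr(0.3))  # '0.3'
assert float(repr(0.3)) == 0.3

# The long literal from the prompt above denotes the same double,
# which is why it echoes back as 0.3:
b = 0.299999999999999988897769753748434595763683319091796875
assert b == 0.3
```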
0.299999999999999988897769753748434595763683319091796875 suggests 54 fractional digits of precision, which is misleading.
0.29999999999999998 or 0.29999999999999999 is less misleading but wasteful. Remember, these strings are not only shown to users but also get serialized in decimal form (by, e.g., JSON).
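To make the tradeoff concrete: 17 significant digits are always enough to round-trip a double, but the shortest form is what CPython's repr (and hence the stdlib json module, which formats floats via repr) actually emits:

```python
import json

# 17 significant digits always round-trip a double,
# but are often longer than necessary:
print("%.17g" % 0.3)  # 0.29999999999999999

# The shortest round-trip form is what repr() and json.dumps() emit:
print(repr(0.3))                      # 0.3
print(json.dumps([0.3, 0.1 + 0.2]))  # [0.3, 0.30000000000000004]
```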
In fact, most uses of FP are essentially lies, providing an illusion of real numbers with limited storage. It is just a matter of choosing which lie to keep.