Use the fact that the approximation in the paper is equal to the inner product of e^{-x^2} with a Dirac comb [1]. The amazing thing about the Gaussian function and the Dirac comb is that they're both preserved by the Fourier transform. So apply the Fourier transform to both of them, observe that the Fourier transform is unitary (essentially a rotation), and therefore doesn't affect the inner product; then expand the inner product of the Fourier transforms.
Essentially, it's the same proof, but not in the language of Theta functions. In fact, I'd argue it's a better proof, because it generalises.
Generalisation: This technique applies to all Riemann sums, as long as you can compute the Fourier transform of the function. The thinner the tails of the function's Fourier transform, the faster its Riemann sums will converge.
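As a concrete sanity check on this argument (my own numerical sketch, not from the paper): expanding the Fourier side gives sum_n e^{-n^2/c} = sqrt(pi c) * (1 + 2 * sum_{k>=1} e^{-pi^2 c k^2}), so the excess of the comb sum over sqrt(pi c) should match the first correction term 2 sqrt(pi c) e^{-pi^2 c} almost exactly.

```python
import math

def gaussian_comb_sum(c, N=60):
    """<e^{-x^2/c}, Dirac comb>: sum of e^{-n^2/c} over integers n, truncated at |n| <= N."""
    return sum(math.exp(-n * n / c) for n in range(-N, N + 1))

for c in (1.0, 2.0, 4.0):
    excess = gaussian_comb_sum(c) - math.sqrt(math.pi * c)
    first_correction = 2 * math.sqrt(math.pi * c) * math.exp(-math.pi ** 2 * c)
    print(c, excess, first_correction)  # the two columns agree closely
```

Already at c = 4 the correction term is below double precision, so the sum and the integral agree to all 16 float digits.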
I gave my generalisation in terms of frequency, and not in terms of "number of rectangles" in the Riemann sum, which I conveniently assumed to be infinite.
A better generalisation: if a function is thin-tailed, and its Fourier transform is thin-tailed, then its Riemann sums converge to its integral at an exponential rate. In particular, this applies to the Gaussian function.
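In Riemann-sum terms (my own sketch, not from the paper): with step size h, the sum h * sum_n e^{-(nh)^2} approaches sqrt(pi) with error on the order of e^{-pi^2/h^2}, i.e. exponentially in 1/h^2 rather than polynomially in h.

```python
import math

def riemann_gauss(h, N=400):
    """Riemann sum of e^{-x^2} over the real line with step h, truncated at |n| <= N."""
    return h * sum(math.exp(-(n * h) ** 2) for n in range(-N, N + 1))

for h in (1.0, 0.8, 0.6):
    err = abs(riemann_gauss(h) - math.sqrt(math.pi))
    print(h, err)  # errors drop roughly like exp(-pi^2 / h^2)
```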
The formula involves a parameter c=10^10 and initially seems unbelievable, because one's first inclination is to assume that the 10^10 is particularly important, i.e. that it has to be that particular value for the formula to work. The "trick" is that higher values of c, like c=10^19, give even more accuracy. Obviously c=10^10 was chosen to optimize the formula's aesthetic beauty.
That "wrong formula" approximates pi via an integral: more precisely, the integral of exp(-x^2) over the real line is sqrt(pi). So the question is, in some sense, why is the sum such a good approximation to that integral? (My Fourier analysis is rusty, so I can't answer it.)
I'm familiar with the integral. But why is that sum a particularly good approximation to the integral? It seems better than one has a right to expect, although it's been a while since I did any numerical analysis.
Replying to my own comment. See (4.4) in the article and the discussion after it. The error in this approximation is like exp(-pi^2 c). (4.10) in the article translates this to say we should expect about 4.2c digits of accuracy, since -log_10 exp(-pi^2) = pi^2/ln 10 ≈ 4.29. This falls out of Poisson's summation formula, according to the OP.
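You can check the digits-per-c rule for small c with ordinary floats (my own sketch, not from the article; the exact constant is pi^2/ln 10 ≈ 4.29, and the measured digit count falls slightly short of 4.29c because of the 2 sqrt(pi c) prefactor in the error term):

```python
import math

def correct_digits(c, N=200):
    """Measured digits of agreement between the comb sum and sqrt(pi*c)."""
    s = sum(math.exp(-n * n / c) for n in range(-N, N + 1))
    return -math.log10(abs(s - math.sqrt(math.pi * c)))

for c in (1, 2, 3):
    predicted = math.pi ** 2 * c / math.log(10)  # ~4.29 digits per unit of c
    print(c, round(correct_digits(c), 2), round(predicted, 2))
```

Beyond c ≈ 3 the error drops below double precision, which is why the paper needs high-precision arithmetic to exhibit billions of digits.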
Generally I'd expect a numerical integration scheme to have an error like h^k for some constant k - for example for Riemann sums the error goes like h^2 and for Simpson's rule like h^5. h is the distance between the sampling points and so is analogous to 1/c. So this doesn't just follow from Riemann summation.
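To see a garden-variety h^k rate for contrast (my own sketch, using the trapezoidal rule, whose global error is O(h^2)): halving h should divide the error by about 4.

```python
import math

def trapezoid(f, a, b, n):
    """Composite trapezoidal rule with n subintervals."""
    h = (b - a) / n
    return h * (f(a) / 2 + sum(f(a + i * h) for i in range(1, n)) + f(b) / 2)

def f(x):
    return math.exp(-x * x)

exact = math.erf(1.0) * math.sqrt(math.pi) / 2  # integral of e^{-x^2} on [0, 1]

e1 = abs(trapezoid(f, 0.0, 1.0, 100) - exact)
e2 = abs(trapezoid(f, 0.0, 1.0, 200) - exact)
print(e1 / e2)  # close to 4, the signature of O(h^2)
```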
Trapezoidal rule quadrature is better than you would expect (2nd order) for periodic (even better for analytic and indeed entire, in this case) integrands. See this paper: http://eprints.maths.ox.ac.uk/1734/
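Here's a minimal illustration of that phenomenon (my own sketch, not from the paper): the trapezoidal rule applied to the periodic analytic integrand 1/(2 + cos x) over [0, 2*pi], whose exact value is 2*pi/sqrt(3), converges geometrically in the number of sample points rather than like a power of h.

```python
import math

def periodic_trapezoid(N):
    """Trapezoidal rule for a 2*pi-periodic integrand: just an equispaced average."""
    h = 2 * math.pi / N
    return h * sum(1.0 / (2.0 + math.cos(i * h)) for i in range(N))

exact = 2 * math.pi / math.sqrt(3)  # closed form of the integral of 1/(2+cos x) on [0, 2*pi]
for N in (4, 8, 16, 32):
    print(N, abs(periodic_trapezoid(N) - exact))  # error shrinks geometrically with N
```

By N = 32 the result is already at machine precision; no fixed-order h^k rule comes close to that.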
Riemann sums. EDIT: The short answer is that it's a good approximation because e^(-n^2/c) drops off very fast as n approaches zero, meaning most of the contributions will come from small n (on the order of sqrt(c)).
Other than that, it's just a Riemann sum presented in an aesthetically pleasing manner.
Do you mean 'approaches infinity'? Because at n=0 the expression is maximal :-)
But the point is not to drop small terms at the extremes - it is still an infinite sum. The point is to approximate sections of the function by rectangles, and those rectangles are a bad approximation around 0 too.
Seems like the formula is simply a fine enough Riemann sum for calculating Gaussian integral int_-inf^inf e^(-x^2) dx = sqrt(pi). It makes it much less surprising, though the exact estimation of error is still quite ingenious and worth following.
It's accurate to about 100 digits, admittedly not quite as good as the expression in the OP, but it's the same principle: taking a fairly standard approximation, in the OP's case a Riemann sum, inserting concrete but aesthetically pleasing numbers, and acting as if something is incredible about it.
Yes, I get that. It seems like Wolfram|Alpha thinks 10^100 is too large of a number for it to have to care about, and is rounding it to infinity internally somewhere, hoping it doesn't change the final result. But, of course, it does matter in this case.
I don't think it is. Try N[((1+10^-n)^(10^n))/e, n] for smaller values of n. At first it approaches e faster than the precision can "catch up", and up until around n=50 it will result in 1. But at 10^60 and above it will return 1.0000000…
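You can reproduce the precision effect with Python's stdlib decimal module (my own sketch; the point is that (1 + 10^-n)^(10^n) / e differs from 1 by roughly 1/(2 * 10^n), so any engine carrying fewer than about n significant digits rounds it to exactly 1):

```python
from decimal import Decimal, getcontext

getcontext().prec = 40                # keep 40 significant digits, comfortably more than n
n = 5
x = (Decimal(1) + Decimal(10) ** -n) ** (10 ** n)
ratio = x / Decimal(1).exp()          # (1 + 10^-n)^(10^n) / e
print(ratio)                          # about 1 - 5e-6: the gap only shows past the n-th digit
```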
Interesting article, but I found the title a bit funny. I'd like to see an article about Newton's laws titled "Getting to the Moon from a Wrong Formula".
For any pattern, constant, irrational, etc., in math, there exists a set of formulas that models it up until some n. It's still a cool little Riemann sum in this paper.
I might be ignorant here, but wouldn't it suffice to prove that the digits of Pi are random? Just check whether the distribution is uniform, the same way you would validate a pseudorandom sequence generator for a programming language. Once you've proved that Pi's digits are random, the normality hypothesis becomes trivial: as the number of known Pi digits approaches infinity, the probability of any finite sequence appearing approaches 1.
You can’t just check all the digits. There are an infinite number of them and you don’t know if they stop looking random at some point after you stop checking.
Digits of Pi are not random, instead they are uniformly distributed or "pseudo-random".
Not all sequences are uniformly distributed, thus it is very unlikely that we will be able to find long non-uniformly-distributed stretches within a uniformly distributed sequence.
I’m sorry, but I don’t think you quite understand what randomness, or pseudorandomness, means. In a truly random sequence, the probability of any given finite sequence appearing in it is 1.
The distribution wouldn't even need to be uniform, right? As long as one or more digits never stopped appearing at any point, the infinite sequence would include every possible sequence of numerals.
Why would you be sure it doesn't include a million zeroes in a row? If the digits are random, as appears likely, it definitely will include a million zeroes in a row. The problem is even if you could turn every atom in the universe into a supercomputer, and somehow get them all cooperatively working on calculating the digits, you'd still never get to the million zeroes sequence in a million universe lifetimes. But they're still out there somewhere.
I cannot reply to the reply from v_lisivka (for some reason), but I don't know why you'd think a temporary run of decimal digits in pi would have any effect on the geometry of a circle. It all comes down to whether pi is normal or not. Repeating the quote above, it is thought likely but not proven at this stage.
Quoting:
> It is widely believed that the (computable) numbers √2, π, and e are normal, but a proof remains elusive.
My rough understanding is that Pi is a representation of a certain curve: the circle. If we skew the sequence too far to one side or another, then the curve will lose its shape.
One would expect a repdigit sequence of length k to appear once in a 10^k normal sequence. So in the 10^1000000 expansion of pi, one should expect a million zeroes to appear in a row once. You can check this to a limited extent here: http://www.angio.net/pi/. It only goes up to 200 million (2 x 10^8) digits, but sure enough 00000000 happens to appear twice. (Nonetheless, we don't actually know if pi is normal.)
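The frequency argument is easy to test on a genuinely random stream (my own sketch; the seed and run length are arbitrary): in 10^6 uniform random digits, a given 4-digit run should appear about 100 times under overlapping counting.

```python
import random

random.seed(0)  # arbitrary seed, for reproducibility
stream = "".join(random.choice("0123456789") for _ in range(10 ** 6))

# Overlapping count of the run "0000"; expectation is about (10^6 - 3) / 10^4 = 100
count = sum(1 for i in range(len(stream) - 3) if stream[i:i + 4] == "0000")
print(count)
```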
Thanks for this, very interesting. My earlier intuition is more than confirmed: if you turn every atom in the universe into a supercomputer, and get them all working together generating digits of pi for a million universe lifetimes, you only get 10^80 (atoms in universe, approx) * 10^10 (10 billion digits per sec each, say) * 10^11 (years in a universe lifetime, roughly) * 10^8 (seconds in a year, roughly) * 10^6 (a million universe lifetimes) = 10^115 (ish) digits. That is an incomparably smaller number than 10^1000000 digits, only good for a single run of 115 zeroes in a row.
Basically there's an awful lot of room between zero and mathematical infinity, and normal large numbers that we experience or even try to reason about in everyday life don't and can't begin to stretch even a little way towards the ultimate mathematical limits.
Not as accurate, either, given that the title of the paper is "Get Billions and Billions of Correct Digits of pi from a Wrong Formula," and it was published before "click-bait" was a thing.
[1] - https://en.wikipedia.org/wiki/Dirac_comb