> this simple one still converges at about 0.6 decimal digits per term.
Quick proof of this: as the number of terms n in the sum goes to infinity, the ratio of each term to the previous one approaches 1/4: the first factor contributes m/(m+1), the second q/(q+2) for some m and q that go to infinity along with n, and the third contributes exactly 1/4.
If we counted in base 4, then the value of each digit would be on average 1/4 of the previous one, certainly for a normal number like pi. But we count in base 10, so we get log_10(4) ≈ 0.602 decimal digits every time we gain one base-four digit, which is very close to 0.6.
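To see the 0.6 empirically, here's a float-precision sketch of the same recurrence (doubles only give ~16 digits, but that's enough to measure the rate):

```javascript
// Run the series in plain doubles and count terms vs. digits gained.
let i = 1, x = 3, pi = 3, terms = 1;
while (x > 1e-16) {
  x = x * i / ((i + 1) * 4);   // each term is ~1/4 of the previous
  pi += x / (i + 2);
  i += 2;
  terms++;
}
const err = Math.abs(pi - Math.PI) || Number.EPSILON; // guard against exact hit
const digitsGained = -Math.log10(err);
console.log(terms, (digitsGained / terms).toFixed(2)); // ≈ 0.55-0.6 digits/term
```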
It's not just the comparisons, it's the additions, multiplications and divisions too. When you enter 10,000, each iteration of the loop is working with numbers 10,000 digits long. When you enter 100,000, each iteration is working with numbers 100,000 digits long, which I imagine makes every operation slower.
A number fills the same space in your registers regardless of its value, up to the register width; past that it must be split across multiple machine operations, and that's when things take longer.
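That's easy to feel from JS with a throwaway micro-benchmark (a sketch; absolute numbers vary by engine, and it assumes `performance.now` is available, as in browsers and recent Node):

```javascript
// Time 100 BigInt multiplications at a few operand sizes. Engines store a
// BigInt as an array of machine-word "limbs", so larger operands mean more
// word-level work per operation.
function timeMul(bits) {
  const a = (1n << BigInt(bits)) - 1n; // a number that is `bits` bits long
  const t0 = performance.now();
  let acc = 0n;
  for (let k = 0; k < 100; k++) acc ^= a * a;
  return performance.now() - t0;
}
for (const bits of [1_000, 10_000, 100_000]) {
  console.log(bits, "bits:", timeMul(bits).toFixed(2), "ms per 100 multiplies");
}
```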
I didn't note timing numbers, but rendering digits in hex was basically instantaneous, even when doing a million (decimal) digits.
Rendering the digits in decimal was significant delay (like 40 seconds for a million digits). That's why it just shows the hex until the very end, and it shows how long the decimal conversion takes.
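That asymmetry is expected: hex conversion just regroups the BigInt's binary bits four at a time, while decimal conversion needs repeated big-number division by powers of ten. A quick sketch:

```javascript
const n = 123456789n ** 1000n;  // a ~8,000-digit number (illustrative)
const hex = n.toString(16);     // fast: base 16 is a power of two,
                                // so this is just bit regrouping
const dec = n.toString(10);     // slow path: needs big divisions
console.log(hex.length, dec.length);
console.log(BigInt("0x" + hex) === n); // round-trip recovers the same value
```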
You sure you don't mean converting digits to hex instead of rendering? The constant rendering of the hex digits as the work progresses results in a ~3x slowdown for 100k digits in my testing (see my other comment in this tree).
Maybe you're right. I know displaying in hex was way faster than displaying in decimal and seemed fast enough, so I stopped there. I didn't test rendering speed specifically.
Doing a million digits in the console (with no status updates) took about an hour and doing it in that page took just over two hours.
Sure but one has 10x more digits than the other so takes a heck of a lot longer to convert and display the string for the same number of iterations.
As a quick test of my theory that the majority of the time is being spent trying to display the progress:
- Default 100,000 digits = 58.276 s
- CSS display: none; = 22.359 s
- Display only when done = 20.057 s
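Those numbers match the usual culprit: pushing a huge string into the DOM on every iteration. A common mitigation (a hypothetical sketch, not the demo's actual code) is to throttle the updates:

```javascript
// Hypothetical throttled progress reporter: only push a display update
// every UPDATE_EVERY iterations. The sink is injected, so the same logic
// works against the DOM or a plain counter.
const UPDATE_EVERY = 1000;
function makeReporter(sink) {
  return (iteration, text) => {
    if (iteration % UPDATE_EVERY === 0) sink(text); // skip most iterations
  };
}
// In a page this might be: makeReporter(t => { output.textContent = t; })
let updates = 0;
const report = makeReporter(() => updates++);
for (let it = 1; it <= 10_000; it++) report(it, "…digits so far…");
console.log(updates); // 10 display updates instead of 10,000
```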
> Sure but one has 10x more digits than the other so takes a heck of a lot longer to convert and display the string for the same number of iterations.
Hm, I was comparing the same number of digits displayed on screen though. One takes 2s to compute and display 10k digits of information on screen, one takes 30s to compute and display 10k digits of information on screen. Same number of digits. At least according to the "progress" output that reads "Digits done".
I may be confused. But you have looked at the demo, right?
> I may be confused. But you have looked at the demo, right?
I have run the demo multiple times, how else would I have generated the time it took for me to run it in the comment above :).
> Hm, I was comparing the same number of digits displayed on screen though. One takes 2s to compute and display 10k digits of information on screen, one takes 30s to compute and display 10k digits of information on screen. Same number of digits. At least according to the "progress" output that reads "Digits done".
"Digits done" is the number of correct digits, not the number of displayed digits. It's possible your screen area is too small to notice the difference.
As you can see, "Digits done" is not "Digits displayed" in this demo by any means. It simply represents the number of digits that will no longer change as the calculation continues; the page still redraws all the digits on every update regardless of whether they are "done", which is where the rendering inefficiency comes in (and, based on the benchmarks above, it substantially outweighs any inefficiency from the growing number size).
It is pretty new, and as you mention not many websites use them yet, so I'm sure there are more important optimizations the Firefox team is focusing on for now.
What is benchmarked here is basically the performance of the division operation on big integers and does not represent the general performance of a browser for everyday tasks.
JS Error `No identifiers allowed directly after numeric literal` when running http://ajennings.net/pi.html on Mac OS Mojave 10.14.6, Safari 12.1.2 (14607.3.9)
I didn't say that. You're moving goalposts anyway. Calculating pi isn't the only use for big ints. The more practical application is maybe crypto or something.
Will adding developer-centric features help reduce vulnerabilities? Generally adding any feature would increase vulnerabilities because of the increased surface area.
Can someone explain the logic behind the evolution of the fractions at each stage? I see that (1/4) becomes (1/4^2) and (1/4^3), but it's not obvious to me how (1/3)->(1/5)->(1/7) flows (odds? primes?), or (1/2) -> (13/24) -> (135/246).
EDIT: I understand now, the numerator on the first term is ascending odds and the denominator is ascending evens. Thanks for everyone's help!
Remove the leading 3 and the trailing 1/4 factor, whose power increases each term.
You are left with this (in the 4th line):
    1   3   5   1
    -   -   -   -
    2   4   6   7
The pattern I see is that, starting from the top left and reading numerator, denominator, numerator, denominator and so on gives: 1 2 3 4 5 6 7 if you ignore the last numerator.
I may have it wrong, but that looks like the pattern.
Odds give the 1/3, 1/5, 1/7, 1/9, etc. term,
and alternating digits between numerator and denominator, counting up to the next even number, gives the first factor: 1/2, 13/24, 135/246, 1357/2468, etc.
Equivalently, you just list odds in the numerator and evens in the denominator.
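Putting the whole pattern together as code (a float sketch for illustration, not the article's BigInt version):

```javascript
// k-th term of the series described above:
//   3 * (1·3·5···(2k-1))/(2·4·6···(2k)) * 1/(2k+1) * (1/4)^k
let sum = 3;   // the leading 3
let frac = 3;  // running value of 3 * (odds/evens product) * (1/4)^k
for (let k = 1; k <= 30; k++) {
  frac *= (2 * k - 1) / (2 * k) / 4; // next odd/even pair, next power of 1/4
  sum += frac / (2 * k + 1);         // divide by the next odd number
}
console.log(sum); // converges to Math.PI
```

Thirty terms already pin the value down to double precision, consistent with the roughly 0.6 decimal digits per term discussed elsewhere in the thread.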
> this simple one still converges at about 0.6 decimal digits per term.
The Chudnovsky brothers' algorithm computes 14.18... digits per term. An implementation in Scheme is only a couple dozen lines of code. It computes a million digits of pi in about 17.5 seconds on a Raspberry Pi 4 in Gambit Scheme (57 seconds on the original Raspberry Pi, IIRC).
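For a feel of that convergence rate: even truncating Chudnovsky after its very first term yields the well-known approximation 426880·√10005 / 13591409, already good to roughly 13 decimal places:

```javascript
// First term of the Chudnovsky series, rearranged to solve for pi:
//   pi ≈ 640320^(3/2) / (12 * 13591409) = 426880 * sqrt(10005) / 13591409
const pi0 = 426880 * Math.sqrt(10005) / 13591409;
console.log(Math.abs(pi0 - Math.PI) < 1e-12); // true: ~13+ digits from one term
```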
Not sure. Where the company I work for operates they have some very weird proxy setup (not my company’s rule but the host). Largely it seems only effective at blocking legitimate content for nonsense reasons. My own site gets blocked on occasion, though I’ve lazily let the cert expire. That reminds me....
Anyway I wouldn’t worry too much. It’s likely just stodgy corporate environments that put those kinds of controls in place.
Yes, 4 * arctan(1) will do in any language. I just find some joy in knowing a formula that uses only addition, multiplication, and division, and computing it directly.
The algorithm seems to be at least quadratic in the number of digits. On a 2014 i7 Mac mini, (n, time in sec) = (1000, 0.29), (2000, 1.65), (4000, 9.70), (8000, 58.42).
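A quick log-log fit over those quoted numbers (just the endpoints) puts the empirical exponent around 2.5, i.e. somewhat worse than quadratic:

```javascript
// Empirical growth exponent from the quoted (n, seconds) measurements:
// slope of log(time) vs. log(n) between the endpoints.
const data = [[1000, 0.29], [2000, 1.65], [4000, 9.70], [8000, 58.42]];
const [n0, t0] = data[0], [n1, t1] = data[data.length - 1];
const slope = Math.log(t1 / t0) / Math.log(n1 / n0);
console.log(slope.toFixed(2)); // 2.55: at least quadratic, as suspected
```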
I was going to make a joke about just writing Math.PI.toFixed(10 ** 6), but it turns out that Number.prototype.toFixed() only supports up to 100 places AND Math.PI's decimal expansion stops around the 50th place (it's a 64-bit float, so its exact decimal expansion terminates there). Still amusing to me. Also, BigInt values can't be combined with non-BigInt numbers without explicit conversion. I haven't had cause to use them yet, so I didn't know that.
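All three quirks are easy to verify in a console (behavior per the ECMAScript spec; exact error messages vary by engine):

```javascript
// 1) toFixed is capped at 100 fraction digits.
console.log(Math.PI.toFixed(100).length);          // "3." + 100 digits = 102
try { Math.PI.toFixed(101); } catch (e) { console.log(e.name); } // RangeError

// 2) Math.PI is a 64-bit float, so its *exact* decimal expansion
//    terminates: everything past roughly digit 50 is zeros.
console.log(Math.PI.toFixed(100).endsWith("0".repeat(40)));      // true

// 3) BigInt and Number don't mix in arithmetic without conversion.
try { 1n + 1; } catch (e) { console.log(e.name); }               // TypeError
```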
Learned three new things making a dumb HN joke... not bad!
I used to have a script on my personal site that would just continue to compute digits of Pi until your browser crashed. I finally took it down after too many recruiters and potential employers kept clicking on the link that said "Don't Click This." and complaining about it.
I did the same thing when I was testing it. I would keep an eye on the system RAM resources graph as the script was running. Watching the RAM start to spike was oddly satisfying.
It's pretty scary to me how easy it is to crash a browser these days with something so simple.
It should take 415kB to store a million digit number, and the algorithm only needs to keep two of them. So, 1MB total to calculate a million digits of pi. I wonder if there are a lot of temporary allocations that build up until they get garbage collected.
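The 415 kB figure is just the information content of a million decimal digits, 10^6 · log2(10) bits:

```javascript
// Bytes needed to store a million-decimal-digit integer in binary form:
// 10^6 digits carry 10^6 * log2(10) bits of information.
const bytes = Math.ceil(1e6 * Math.log2(10) / 8);
console.log(bytes); // 415242 bytes, i.e. ~415 kB per number
// The loop keeps two such numbers (x and pi), so ~1 MB of live data.
```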
That version already doesn't work… once you fix the arrow function, there's also the issue that most engines today don't support proper tail calls, so nothing recursive will be portable. (But if it did, you could save a lot of characters by dropping unnecessary parentheses, expanding (i + 1n) * 4n to 4n * i + 4n, replacing the const with a comma, removing semicolons…)
Displaying hex is quite fast (compared to trying to display decimal), though you're right that it might add up to a noticeable amount of time in the end. I did the full million digit calculation on that sample page on the same computer and it took a little over two hours (so 2.1x slowdown) but at that point I was also adding a 5ms delay every 100 terms.
It is fun to scroll down and watch the "cutoff", where digits above that are not changing and digits below that are. That's just as fun in hex as it is in decimal. But yes, maybe I should add an option to turn that off.
Good question. I just presented it here the way it was given to me, as a couple initial terms and some rules on how it evolves, because I find that to be the easiest way to remember it.
The actual formula looks much less friendly (because it's tricky to write "the product of the first n odd integers"), but it's a good exercise for those who are inclined.
The article says:
    let i = 1n;
    let x = 3n * (10n ** 1000020n);
    let pi = x;
    while (x > 0n) {
      x = x * i / ((i + 1n) * 4n);
      pi += x / (i + 2n);
      i += 2n;
    }
    console.log(pi / (10n ** 20n));
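A scaled-down version of the same loop (50 digits plus 10 guard digits) is handy for sanity-checking it in any BigInt-capable engine:

```javascript
// Same recurrence at small scale: 50 digits of pi with 10 guard digits.
let i = 1n;
let x = 3n * 10n ** 60n;  // 3 scaled by 10^60
let pi = x;
while (x > 0n) {
  x = x * i / ((i + 1n) * 4n);
  pi += x / (i + 2n);
  i += 2n;
}
const digits = (pi / 10n ** 10n).toString(); // drop the guard digits
console.log(digits.slice(0, 21)); // "314159265358979323846"
```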
Until you go to examine it and realize it's infinite scrolling via JS and spend all your time trying to hack around that. (Or you just grab the URL out of the request and fetch it a few times directly)