
The algorithm is described in more detail in https://academic.oup.com/comjnl/article/14/3/317/420539?logi... (around pages 323, 318, ...).

I haven't thought about it at all, but isn't it somewhat similar to Newton-Raphson, in that it approximates a first reciprocal (here with a table) and then converges on better estimates in a few iterations?

Arguably the method for converging looks like it could be different at first glance. You could add it per WP:BOLD and see if anyone objects; the descriptions above are good enough to cite, so there's no original work in describing the algorithm.




That representation's even nicer – the logarithmic complexity is immediately evident:

    1/(1 + x) = 1 - x + x^2 - x^3 ... 
              = (1 - x) (1 + x^2) (1 + x^4) (1 + x^8) ...
Late edit: I've tested and confirmed that the naive implementation does in fact work. This C snippet stays within 1 ULP for the doubles I tried in the range (0.5, 1.0]: I tested a few billion and found no counterexamples.

    double recip(double y) {
       long double x = ((long double) y) - 1.0;
       long double u = 1.0 - x;
       long double v = x;
       for (int j=1; j<=5; ++j) {
          v = v * v; // v = x^(2^j)
          u = u * (1.0 + v);
       }
       return (double)u;  // approx. 1.0/y
    }
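
If anyone wants to eyeball it before running a big sweep, here's a throwaway spot check (just my own sketch; it assumes the recip() above is linked in, and the test values are arbitrary points in (0.5, 1.0]):

    #include <stdio.h>

    double recip(double y);  /* the snippet above */

    int main(void) {
        double tests[] = { 0.5000000000000001, 0.6, 2.0/3.0, 0.75, 0.9999999999999999, 1.0 };
        for (int i = 0; i < 6; ++i) {
            double y = tests[i];
            /* compare against the hardware divide, which IEEE 754 requires to be correctly rounded */
            printf("y=%.17g  recip(y)=%.17g  1.0/y=%.17g\n", y, recip(y), 1.0 / y);
        }
        return 0;
    }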


Not only similar, it's identical to Newton-Raphson in ideal arithmetic!

The nth-order product approximation for the reciprocal of d = 1 - x is y_n = product_{i=1}^{n} (1 + x^{2^(i-1)}) = sum_{i=0}^{2^n - 1} x^i = (1 - x^{2^n}) / (1 - x).

The Newton-Raphson iteration for the reciprocal of d is z_{n+1} = z_n * (2 - (1-x) * z_n).

Induction with the base case z_0 = 1 = y_0 shows the sequences are equal: inserting y_n for z_n in the Newton-Raphson iteration gives z_{n+1} = (1 - x^{2^n}) * (2 - (1 - x^{2^n})) / (1 - x) = (1 - x^{2^n}) * (1 + x^{2^n}) / (1 - x) = (1 - x^{2^(n+1)}) / (1 - x) = y_{n+1}.

So I guess you could call it an explicit product formula for the Newton-Raphson reciprocal.
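
Quick numerical sanity check, if it helps (a throwaway sketch; x = 0.25 is just an arbitrary value with |x| < 1): running both recurrences side by side from the same start, they track each other to roundoff and converge to 1/(1 - x):

    #include <stdio.h>

    int main(void) {
        double x  = 0.25;          /* target is 1/d with d = 1 - x */
        double d  = 1.0 - x;
        double y  = 1.0;           /* y_0: empty product */
        double z  = 1.0;           /* z_0: Newton-Raphson start */
        double xp = x;             /* holds x^(2^n) */
        for (int n = 0; n < 6; ++n) {
            y *= 1.0 + xp;         /* y_{n+1} = y_n * (1 + x^(2^n)) */
            xp *= xp;
            z = z * (2.0 - d * z); /* z_{n+1} = z_n * (2 - d*z_n) */
            printf("n=%d  product=%.17g  newton=%.17g\n", n + 1, y, z);
        }
        printf("exact 1/d = %.17g\n", 1.0 / d);
        return 0;
    }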


This 100% checks out! I didn't recognize this at all.

- "...in ideal arithmetic!"

Comparing the two, Newton-Raphson seems rather better conditioned. I guess it's the (1.0 + x^(2^i)) part that holds back the explicit-product version (adding a large float to a small one).
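
One way to poke at that guess (only a sketch; ulp_dist, the sample count, and drand48 for the random draw are my own choices, and I'm taking the hardware divide as the correctly rounded reference): run both iterations in plain double from the same starting value and track the worst error in ULPs.

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* ULP distance between two positive finite doubles via their bit patterns */
    static int64_t ulp_dist(double a, double b) {
        int64_t ia, ib;
        memcpy(&ia, &a, sizeof ia);
        memcpy(&ib, &b, sizeof ib);
        return llabs(ia - ib);
    }

    int main(void) {
        int64_t worst_prod = 0, worst_nr = 0;
        for (int i = 0; i < 1000000; ++i) {
            double y   = 1.0 - 0.5 * drand48();  /* drand48 is POSIX; y in (0.5, 1.0] */
            double x   = y - 1.0;
            double ref = 1.0 / y;                /* IEEE divide is correctly rounded */

            double u = 1.0 - x, v = x;           /* explicit product, in plain double */
            double z = 1.0 - x;                  /* Newton-Raphson, same start, same step count */
            for (int j = 1; j <= 5; ++j) {
                v *= v;
                u *= 1.0 + v;
                z *= 2.0 - y * z;
            }

            int64_t ep = ulp_dist(u, ref), en = ulp_dist(z, ref);
            if (ep > worst_prod) worst_prod = ep;
            if (en > worst_nr)   worst_nr = en;
        }
        printf("worst product error: %lld ulp, worst Newton error: %lld ulp\n",
               (long long)worst_prod, (long long)worst_nr);
        return 0;
    }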


I don't know where you got the idea that there is a first guess that comes from a table. The table lookup just brings x into a suitable range, given A.

Shades of "Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?"— a Taylor series can only do so much!



