It's interesting to note that one can get the O(lg(n)) complexity of the analytical solution without using any floating-point arithmetic. There are at least two easy ways to do it. One is to recognize the analytical formula as the result of diagonalizing a certain linear map, and to compute the nth power of that map by repeated squaring instead of diagonalizing. In more concrete terms: consider the square matrix A = {{1,1},{1,0}}. To obtain the nth Fibonacci number, raise A to the nth power and take the lower-left entry. Multiplication by repeated squaring performs only a logarithmic number of matrix multiplications, instead of the linear number the naive method needs.
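For concreteness, here is a minimal Haskell sketch of the matrix version (the identifiers are my own, for illustration):

    -- 2x2 integer matrix {{a,b},{c,d}} as a flat 4-tuple.
    type Mat = (Integer, Integer, Integer, Integer)

    mul :: Mat -> Mat -> Mat
    mul (a, b, c, d) (e, f, g, h) =
      (a*e + b*g, a*f + b*h, c*e + d*g, c*f + d*h)

    -- Repeated squaring: only O(lg n) matrix multiplications.
    matPow :: Mat -> Integer -> Mat
    matPow _ 0 = (1, 0, 0, 1)          -- identity matrix
    matPow m n
      | even n    = half `mul` half
      | otherwise = m `mul` (half `mul` half)
      where half = matPow m (n `div` 2)

    -- The nth Fibonacci number is the lower-left entry of A^n.
    fib :: Integer -> Integer
    fib n = let (_, _, c, _) = matPow (1, 1, 1, 0) n in c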
Another way is to interpret the analytical formula in the ring Q(sqrt(5)), and implement Q(sqrt(5)) arithmetic using quadruples of integers. More concretely: suppose we have two numbers of the form a + b*sqrt(5) and c + d*sqrt(5), where a, b, c, d are rational. Then their sum is (a+c) + (b+d)*sqrt(5), and their product is (ac + 5bd) + (ad + bc)*sqrt(5). Thus, just as we usually represent complex numbers as pairs of real numbers with a special multiplication formula, we can represent elements of Q(sqrt(5)) as pairs of rationals with a special multiplication formula. Then we just evaluate the analytical formula in our special number system. I implemented it a while ago in Haskell: https://github.com/xyzzyz/FibBinet
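A condensed sketch of that representation (my own illustration, not the code from the linked repo; note that GHC's (^) operator already exponentiates by repeated squaring, so the number of Q(sqrt(5)) multiplications is logarithmic):

    import Data.Ratio (numerator)

    -- An element a + b*sqrt(5) of Q(sqrt(5)), as a pair of rationals.
    data QR5 = QR5 Rational Rational deriving (Eq, Show)

    instance Num QR5 where
      QR5 a b + QR5 c d = QR5 (a + c) (b + d)
      QR5 a b * QR5 c d = QR5 (a*c + 5*b*d) (a*d + b*c)
      negate (QR5 a b)  = QR5 (negate a) (negate b)
      fromInteger k     = QR5 (fromInteger k) 0
      abs               = error "not needed here"
      signum            = error "not needed here"

    -- Binet's formula, evaluated exactly: phi^n - psi^n is 0 + c*sqrt(5),
    -- so dividing by sqrt(5) just means reading off the coefficient c.
    fib :: Integer -> Integer
    fib n = numerator c
      where
        phi = QR5 (1/2) (1/2)      -- (1 + sqrt 5) / 2
        psi = QR5 (1/2) (-1/2)     -- (1 - sqrt 5) / 2
        QR5 _ c = phi ^ n - psi ^ n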
While I agree with the gist of your comment, it's important to note that neither of these gives an O(lg(n)) algorithm for computing Fibonacci numbers (except on a hypothetical computer that can do arbitrary-size integer operations in O(1) time).
The nth Fibonacci number has O(n) digits (F(n) is about phi^n/sqrt(5), so it has about n*lg(phi), roughly 0.69n, bits), so you can't even write it out in time less than O(n). The repeated-squaring approach multiplies numbers with O(n) digits in its final step, so a very coarse analysis says the complexity is at least on the order of n lg(n), even with FFT-based multiplication.
This is completely missing the point of big-O notation. If you start trying to analyze how long individual integer operations take, all of your computational complexity analysis falls apart for any algorithm that works with sufficiently large integers. This is precisely why such operations are treated as unit cost: it becomes impossible to do any meaningful platform-independent analysis if you try to take that into account.
That's not to say you shouldn't take it into account, but the algorithm he described is, in fact, O(log(n)) time in that model. It just is; that's how the math works. You should note that the O(log(n)) bound does not account for arbitrary integer sizes, which will cause progressively larger per-operation costs to influence the real efficiency of the algorithm... but it is still an O(log(n)) algorithm.
My analysis holds for any implementation that can perform O(1) addition and multiplication of integers of some fixed size. That's just about as platform-independent as you can get, but it still captures the real complexity of the operation in a way that handwavy "pretend you can do arbitrary arithmetic in O(1)" analyses do not.
This isn't simply my viewpoint on the subject; even the very theoretical textbook by Dasgupta, Papadimitriou, and Vazirani takes this approach to the question (see exercise 0.4, where everything is carried out in terms of M(n), the complexity of multiplying two n-bit integers).
You're conflating two different magnitudes. You can't directly compare the number of multiplications with the number of transistor switches or CPU cycles required for a single multiplication.
If you "drop" the 1/2 (that is, work with 1 +/- sqrt(5) instead of (1 +/- sqrt(5))/2 and divide by the power of 2 at the end), you can use Z[sqrt(5)] and work with only a pair of integers. Also, the two parts of the formula are conjugates, so it's possible to get the exact result from only one half.
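Here is a sketch of that variant (my names; "dropping the 1/2" read as computing (1 + sqrt 5)^n = a + b*sqrt(5) over the integers and dividing by 2^(n-1) at the end):

    -- (a + b*sqrt 5)(c + d*sqrt 5) = (ac + 5bd) + (ad + bc)*sqrt 5
    mulZ5 :: (Integer, Integer) -> (Integer, Integer) -> (Integer, Integer)
    mulZ5 (a, b) (c, d) = (a*c + 5*b*d, a*d + b*c)

    powZ5 :: (Integer, Integer) -> Integer -> (Integer, Integer)
    powZ5 _ 0 = (1, 0)
    powZ5 x n
      | even n    = sq
      | otherwise = sq `mulZ5` x
      where sq = let h = powZ5 x (n `div` 2) in h `mulZ5` h

    -- Since (1 - sqrt 5)^n is the conjugate a - b*sqrt(5), Binet gives
    -- F(n) = ((1+sqrt 5)^n - (1-sqrt 5)^n) / (2^n * sqrt 5) = b / 2^(n-1),
    -- so the second half never has to be computed.
    fib :: Integer -> Integer
    fib 0 = 0
    fib n = b `div` (2 ^ (n - 1))   -- exact division
      where (_, b) = powZ5 (1, 1) n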
That's how I would approach this if arbitrarily large integers were needed. It's particularly nice because it readily generalizes, via the companion matrix, to any other linear recurrence of fixed order.
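For instance, here is a sketch of the same trick for Tribonacci (T(n) = T(n-1) + T(n-2) + T(n-3)) using its companion matrix; any order-k recurrence works the same way, with O(lg n) multiplications of k-by-k matrices:

    import Data.List (transpose)

    type Matrix = [[Integer]]

    mmul :: Matrix -> Matrix -> Matrix
    mmul a b = [[sum (zipWith (*) row col) | col <- transpose b] | row <- a]

    identity :: Int -> Matrix
    identity k = [[if i == j then 1 else 0 | j <- [1..k]] | i <- [1..k]]

    mpow :: Matrix -> Integer -> Matrix
    mpow m 0 = identity (length m)
    mpow m n
      | even n    = sq
      | otherwise = sq `mmul` m
      where sq = let h = mpow m (n `div` 2) in h `mmul` h

    -- With T(0) = 0, T(1) = 0, T(2) = 1: the state vector
    -- (T(n+2), T(n+1), T(n)) is the companion matrix raised to the nth
    -- power, times the initial state (T(2), T(1), T(0)).
    tribonacci :: Integer -> Integer
    tribonacci n = (mpow companion n `mmul` initial) !! 2 !! 0
      where
        companion = [[1,1,1],[1,0,0],[0,1,0]]
        initial   = [[1],[0],[0]]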
The analytic solution really should win for very large terms, but I'm not sure how to implement it properly. Can someone outline how you would decide what precision is needed to compute the Nth Fibonacci number?
Write Binet's formula as F(n) = (A^n - B^n)/sqrt(5), with A = (1+sqrt(5))/2 and B = (1-sqrt(5))/2, and suppose we compute with approximations A' and B'. Rounding the result to the nearest integer gives the correct answer as long as the total error is under 1/2, i.e.

|A'^n - A^n| + |B'^n - B^n| < sqrt(5)/2

If both of those absolute values are less than sqrt(5)/4, then their sum is less than sqrt(5)/2, satisfying the inequality. Because 1/2 < sqrt(5)/4, I will make each of them less than 1/2. Without loss of generality, I will consider only one of the absolute values:

|A'^n - A^n| < 1/2

Assuming the maximum error E (with E < 1), so that A' = A + E:

|(A+E)^n - A^n| < 1/2

Notice that the first term when you expand the binomial is A^n, so the n remaining terms are our error. Every binomial coefficient is less than 2^n, the largest the A factor can be is A^(n-1) < A^n, and the largest the E factor can be is E^1 = E. Therefore each of the n terms is less than (2^n)(A^n)E.

The sum of these error terms is therefore less than n(2^n)(A^n)E. So it suffices that:

n(2^n)(A^n)E < 1/2

Because A < 2, so A^n < 2^n, it is enough that:

n(2^n)(2^n)E = n(4^n)E < 1/2

E < 1/(2n(4^n))

If our value for A is accurate to d binary digits past the radix point, E <= 2^(-d), so we need:

2^(-d) < 1/(2n(4^n))

2^d > 2n(4^n)

d > lg(2) + lg(n) + lg(4^n)

d > 1 + lg(n) + 2n

Since lg(n) + 1 <= n, that is a little over 2n digits, and d = 3n + 1 digits always suffice.
I made a lot of simplifications in this analysis, so you can probably get away with fewer digits. It is interesting to note that Fibonacci numbers grow exponentially, so the number of digits needed simply to write the answer grows linearly; the extra precision digits therefore do not change the asymptotic efficiency.
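To make that concrete, here is a Haskell sketch that follows the analysis, using plain integers as fixed-point numbers (the helper names are mine; it relies on the standard fact that F(n) is the nearest integer to A^n/sqrt(5), and it uses d = 3n + 1 fractional bits, which clears the d > 2n + lg(n) + 1 bound with slack to absorb the truncation that each intermediate division introduces):

    import Data.Bits (shiftL)

    -- Integer square root (floor) by Newton's method.
    isqrt :: Integer -> Integer
    isqrt 0 = 0
    isqrt x = go x
      where
        go g = let g' = (g + x `div` g) `div` 2
               in if g' >= g then g else go g'

    fib :: Integer -> Integer
    fib 0 = 0
    fib n = (2*p + r) `div` (2*r)        -- round p/r to the nearest integer
      where
        d   = fromInteger (3*n + 1)
        one = 1 `shiftL` d               -- fixed-point 1.0, d fraction bits
        r   = isqrt (5 * one * one)      -- fixed-point sqrt(5)
        a   = (one + r) `div` 2          -- fixed-point A = (1 + sqrt 5)/2
        p   = powFix a n                 -- fixed-point A^n
        -- repeated squaring, rescaling by one after each product
        powFix _ 0 = one
        powFix x k
          | even k    = sq
          | otherwise = (sq * x) `div` one
          where sq = let h = powFix x (k `div` 2) in (h * h) `div` one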