That's how I would approach this if arbitrarily large integers were needed. It's particularly nice because it readily generalizes to any other finite-depth linear recurrence.
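In case it helps to see that concretely, here is a minimal sketch of the matrix-power formulation (assuming that is the approach being referred to; the helper names are just illustrative). Python's built-in integers take care of the arbitrary size, and handling a different linear recurrence of depth k only means swapping in that recurrence's k-by-k companion matrix.

```python
# Sketch: nth Fibonacci number via fast exponentiation of the companion
# matrix [[1, 1], [1, 0]], relying on Python's arbitrary-precision ints.

def mat_mult(a, b):
    """Multiply two 2x2 matrices given as nested tuples."""
    return (
        (a[0][0] * b[0][0] + a[0][1] * b[1][0], a[0][0] * b[0][1] + a[0][1] * b[1][1]),
        (a[1][0] * b[0][0] + a[1][1] * b[1][0], a[1][0] * b[0][1] + a[1][1] * b[1][1]),
    )

def mat_pow(m, n):
    """Raise a 2x2 matrix to the nth power by repeated squaring (O(log n) multiplies)."""
    result = ((1, 0), (0, 1))  # identity matrix
    while n:
        if n & 1:
            result = mat_mult(result, m)
        m = mat_mult(m, m)
        n >>= 1
    return result

def fib(n):
    """[[1,1],[1,0]]^n = [[F(n+1), F(n)], [F(n), F(n-1)]], so read off F(n)."""
    return mat_pow(((1, 1), (1, 0)), n)[0][1]

print(fib(10))   # 55
print(fib(100))  # 354224848179261915075
```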
The analytic solution really should win for very large terms, but I'm not sure how to implement it properly. Can someone outline how you would decide what precision is needed to compute the Nth Fibonacci number?
If both of those absolute values are less than sqrt(5)/4, then their sum is less than sqrt(5)/2, satisfying the inequality. Since 1/2 < sqrt(5)/4, it is enough to make each of them less than 1/2. Without loss of generality, I will consider only one of the absolute values.
|A'^n - A^n| < 1/2
Writing E for the error in our stored value of A and taking the worst case A' = A + E:
|(A+E)^n - A^n| < 1/2
Notice that the first term when you expand the binomial is A^n, so the remaining terms are our error. Their binomial coefficients add up to less than 2^n. The greatest the A factor can be is A^(n-1) < A^n, and since E < 1 the greatest the E factor can be is E^1 = E.
The sum of these error terms is therefore less than (2^n)(A^n)E. So:
(2^n)(A^n)E < 1/2
Because A < 2, we have A^n < 2^n, so it is enough that:
(2^n)(2^n)E = (4^n)E < 1/2
E < 1/(2(4^n)) = 2^(-(2n+1))
If our value for A is accurate to d binary digits past the (binary) point, then E <= 2^(-d), so it suffices that:
2^(-d) < 2^(-(2n+1))
2^d > 2(4^n)
d > lg(2) + lg(4^n)
d > 2n + 1
In other words, a little more than 2n binary digits of precision are enough.
I made a lot of simplifications in this analysis, so you can probably get away with less precision than this. It is interesting to note that Fibonacci numbers grow exponentially, so the number of digits needed simply to write down the answer already grows linearly in n; the extra working digits therefore do not hurt the algorithm's asymptotic efficiency.
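As a rough sanity check on the d > 2n + 1 figure (not part of the analysis above), here is a sketch that evaluates the closed form with Python's standard decimal module. The working precision is set to about 2n + 1 bits converted into decimal digits, plus ten guard digits as an arbitrary cushion for the rounding that happens inside sqrt() and the powering, which the analysis ignores; the function names are made up for the example.

```python
# Sketch: Binet's formula with ~(2n + 1) bits of working precision
# (converted to decimal digits), checked against an exact iterative computation.
from decimal import Decimal, getcontext, ROUND_HALF_EVEN

def fib_binet(n):
    # One decimal digit is about 3.32 bits, so 2n + 1 bits is roughly
    # 0.302 * (2n + 1) decimal digits; the +10 guard digits are an
    # arbitrary safety margin for rounding during the evaluation itself.
    getcontext().prec = int(0.302 * (2 * n + 1)) + 10
    sqrt5 = Decimal(5).sqrt()
    phi = (1 + sqrt5) / 2
    psi = (1 - sqrt5) / 2
    approx = (phi ** n - psi ** n) / sqrt5
    return int(approx.to_integral_value(rounding=ROUND_HALF_EVEN))

def fib_exact(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

for n in (10, 100, 500):
    assert fib_binet(n) == fib_exact(n)
print(fib_binet(100))  # 354224848179261915075
```

This passes for the values tried here; it is only meant to illustrate that a working precision linear in n is enough, not to pin down the minimal guard-digit count.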