While your formula is correct, it is not an actual implementation. How many digits of the square root do you need to compute to be sure that the rounded value of the power is correct?
Whether this is "fast" or not depends a lot on the concrete implementation of your arbitrary-precision real numbers.
Nicely, the second term (after the first minus sign) is smaller than 1/2 in absolute value, so you can drop it and just round the result to the nearest integer; all the action happens in the first term.
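A minimal sketch of that in Python, using the `decimal` module so the working precision can scale with n (the function name and the precision heuristic are my own; the answer to "how many digits" is roughly n·log10(φ) ≈ 0.209·n):

```python
from decimal import Decimal, getcontext

def fib_binet(n):
    # phi**n / sqrt(5) is within 1/2 of F(n), so rounding it to the
    # nearest integer gives F(n) exactly -- if we carry enough digits.
    # phi**n has about 0.209*n decimal digits, so n // 4 + 10 leaves a
    # comfortable margin (a heuristic, not a proven bound).
    getcontext().prec = n // 4 + 10
    sqrt5 = Decimal(5).sqrt()
    phi = (1 + sqrt5) / 2
    return int((phi ** n / sqrt5).to_integral_value())
```

With too small a precision the rounding silently goes wrong, which is exactly the pitfall the question above is pointing at.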
You can implement the matrix exponentiation via repeated squaring recursively even in a language like Python that doesn't do tail call optimization, and it will still be fast: your recursion only goes to a logarithmic depth.
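A minimal recursive sketch of that in Python (the function names are mine; the 2×2 step matrix is the standard [[1,1],[1,0]]):

```python
def mat_mul(A, B):
    # 2x2 matrix product, matrices as nested tuples
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    return ((a*e + b*g, a*f + b*h), (c*e + d*g, c*f + d*h))

def mat_pow(M, n):
    # repeated squaring: the recursion depth is O(log n),
    # well within Python's default recursion limit
    if n == 0:
        return ((1, 0), (0, 1))  # identity matrix
    half = mat_pow(M, n // 2)
    sq = mat_mul(half, half)
    return mat_mul(sq, M) if n % 2 else sq

def fib(n):
    # [[1,1],[1,0]]**n = [[F(n+1), F(n)], [F(n), F(n-1)]]
    return mat_pow(((1, 1), (1, 0)), n)[0][1]
```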
In any case, recursion vs iteration is an implementation detail. Especially if your language supports tail call optimization. Have a look at this example:
f(0) := (0, 1)
f(n) := let (a, b) = f(n-1) in (b, a + b)
f is a recursive function over pairs of integers, returning (F(n), F(n+1)), and it is essentially equivalent to the typical iterative algorithm for computing Fibonacci numbers.
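That definition transliterates directly into Python, and you can also write out the loop that TCO would effectively turn it into (both function names are mine):

```python
def f(n):
    # literal transcription of the recursive definition:
    # returns the pair (F(n), F(n+1))
    if n == 0:
        return (0, 1)
    a, b = f(n - 1)
    return (b, a + b)

def f_iter(n):
    # the same computation as an explicit loop -- what a
    # tail-call-optimizing compiler would reduce f to, and
    # immune to Python's recursion-depth limit
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return (a, b)
```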
In a discrete math course at university we learned how to derive closed-form solutions to any such linear recurrence using algebra and eigenvectors.
I remember that it's possible, but I had forgotten all the required math :) Now that I've googled it, it's not THAT bad.
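For Fibonacci specifically, the eigenvalue route goes: the step matrix [[1,1],[1,0]] has characteristic polynomial x² − x − 1, its roots φ and ψ are the eigenvalues, and expanding the initial conditions in the eigenbasis gives Binet's formula. A quick float check (function name mine; plain doubles stay exact here only up to roughly n = 70):

```python
import math

# Eigenvalues of [[1, 1], [1, 0]]: the roots of x^2 - x - 1
# (trace 1, determinant -1)
phi = (1 + math.sqrt(5)) / 2
psi = (1 - math.sqrt(5)) / 2

def fib_closed(n):
    # The 1/sqrt(5) coefficients come from expressing the initial
    # vector (F(1), F(0)) = (1, 0) in the eigenbasis.
    return round((phi ** n - psi ** n) / math.sqrt(5))
```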
(And you can also use recursion in the fastest implementations. You just wouldn't use the naive recursive solution.)