Ah, I think the non-constructive comment is just about presentation order: he first gives a sketch of how f works, and only later gives an explicit formula, partway through page 2, after theta, alpha, beta, and h are defined.
For why f is recursive (computable), here's maybe a helpful way of thinking about it: Let's think about the halting behavior of an input Turing Machine M. Consider the following pseudocode infinitary program which "outputs" two variables A_M and B_M after infinite time:
for each integer n = 1, 2, 3, ...:
    if M halts at step n:
        return A_M = 2^(-n) ; B_M = 1
if M never halts: return A_M = 0 ; B_M = 0
Clearly, B_M is not computable; that's the halting problem. However, A_M actually is computable, because computability of a real-valued function is defined as the ability to give an estimate to any queried precision: given precision epsilon, pick c with 2^(-c) < epsilon and run the algorithm above for just the first c steps. If it returned, pass on that value; otherwise return zero, which is within epsilon of the true A_M, since if M halts at some later step n > c then A_M = 2^(-n) < 2^(-c) < epsilon. So A_M is computable whereas B_M is not.
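To make that epsilon-approximation concrete, here's a short Python sketch. The `halts_at_step` callback is a hypothetical stand-in for the decidable check "M halts at exactly step n" (in practice you'd simulate M for one step at a time):

```python
import math

def approximate_A(halts_at_step, epsilon):
    """Estimate A_M to within epsilon.

    halts_at_step(n) -> bool is a stand-in for 'M halts at step n'.
    The true value is A_M = 2^(-n) if M halts at step n, else 0.
    """
    # Pick c with 2^(-c) < epsilon, then check only the first c steps.
    c = max(1, math.ceil(-math.log2(epsilon)) + 1)
    for n in range(1, c + 1):
        if halts_at_step(n):
            return 2.0 ** (-n)
    # M didn't halt within c steps. Returning 0 is fine: if M halts at
    # some step n > c, the true A_M = 2^(-n) < 2^(-c) < epsilon anyway.
    return 0.0
```

For example, for a machine that halts at step 3, `approximate_A(lambda n: n == 3, 0.01)` returns exactly 2^(-3) = 0.125; for a machine that never halts, the call still terminates after finitely many steps and returns 0.0. Note there is no way to recover B_M from this procedure: a return value of 0.0 never tells you whether M truly runs forever or just halts late.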
The trick then is to observe that we can construct theta-functions (bumps) with arbitrarily small values but constant-sized derivatives; summing a sequence of these gives a function whose value at 2^(-M) behaves like A_M and whose derivative at 2^(-M) behaves like B_M.
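Here's a numerical illustration of the "small value, constant derivative" phenomenon, using a sin^2 bump as a stand-in for Myhill's actual theta-function (the specific bump shape doesn't matter for the point being made):

```python
import math

def bump(x):
    """Bump on [0, 1] with peak value 1; zero elsewhere."""
    return math.sin(math.pi * x) ** 2 if 0.0 <= x <= 1.0 else 0.0

def theta(x, h):
    """Bump of height h squeezed into width h.

    The peak value h can be made arbitrarily small, but the derivative
    is pi * sin(2*pi*x/h), whose maximum is pi regardless of h.
    """
    return h * bump(x / h)

def max_slope(h, samples=10000):
    """Numerically estimate max |theta'| over [0, h]."""
    d = h / samples
    return max(abs(theta(i * d + d, h) - theta(i * d, h)) / d
               for i in range(samples - 1))
```

Checking `max_slope(0.1)` and `max_slope(0.001)` both gives approximately pi: shrinking the bump's height doesn't shrink its slope, because the width shrinks in proportion. That's exactly what lets the summed function's value stay computable (values die off fast) while its derivative encodes the non-computable bits B_M.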
Uh, because Myhill says so? "We first define the function f non-constructively..."
But I agree with your analysis. It seems constructive to me too.
I don't understand his proof that f is recursive though.