"Turing complete as long as you only use a polynomial number of steps in the size of the input" is another way of saying "not Turing complete" (doubly so when you look at the actual polynomials they're using). In fact, that paper just reaffirms that adding context doesn't improve formal expressive power all that much. And this is not equivalent to "well, Turing Machines are fake anyway because of finite space"--I can pretty trivially write programs in Coq, a total and terminating language, that will take far longer than the age of the universe to normalize but can be described in very little input.
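
For concreteness, here's the standard nested-fixpoint definition of the Ackermann function in Coq (a sketch; the nested `fix` is the usual trick to make it structurally recursive, so the termination checker accepts it). The definition is a handful of lines, yet asking Coq to normalize `ack 4 2` (which equals 2^65536 - 3, represented in unary `nat`) would outlast the universe by an absurd margin:

```coq
Fixpoint ack (m : nat) : nat -> nat :=
  match m with
  | 0 => S  (* ack 0 n = n + 1 *)
  | S m' =>
      fix ack_m (n : nat) : nat :=
        match n with
        | 0 => ack m' 1          (* ack (m+1) 0 = ack m 1 *)
        | S n' => ack m' (ack_m n')  (* ack (m+1) (n+1) = ack m (ack (m+1) n) *)
        end
  end.

(* Accepted as total: every recursive call decreases a structural argument.
   But don't run this -- the result is a unary numeral of ~2^65536. *)
(* Compute ack 4 2. *)
```

Totality guarantees a normal form exists; it says nothing about reaching it in feasible time, which is exactly why "terminates" and "terminates within a polynomial bound" are such different claims.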