> What we have here in the cryptography sense is different: the running time n(c+n) clearly does depend on n; it just does not depend on the actual input.
Can you explain to me what the input is, if not c and n?
Consider the sentence just before “This takes n(c + n) simple operations”, which you quoted yourself: “Section 5 […] states our main algorithm to compute c coefficients of the nth iterate of divstep”. From this it is clear that c and n are the number of coefficients desired and the number of iterations desired; they are not the input to the algorithm (the two numbers or polynomials whose gcd is sought).
I'm really curious about your comments here. To understand them better, could you take a moment to say which of these statements you disagree with:
(1) A function f is O(1) if it is bounded. That is, if there exists a constant C such that for all n, we have f(n) < C.
(2) When we say that an algorithm (or more precisely, its running time) is O(1), we mean that the function “the algorithm's running time as a function of its input size” is O(1). In other words, let's denote the time that the algorithm takes on input I by T(I), and let us denote f(n) = max_{|I|=n} T(I) where |I| denotes the size of input I. Then we mean that f(n) = O(1). In simpler terms: that the worst-case running time does not grow without bound as a function of the input size.
(3) The algorithm in this paper, to compute the GCD of two polynomials or integers, does not have a bounded running time. (In fact, the authors describe it by the term “subquadratic”, and compare it to earlier subquadratic algorithms: “[…] a constant-time algorithm where the number of operations is subquadratic in the input size—asymptotically n(log n)^{2+o(1)}.”) That is, given any constant C, there exist inputs to this algorithm (two sufficiently large polynomials or integers) such that the running time of the algorithm is greater than C.
(4) As (3) shows that the property described in (2) does not hold, the running time of the algorithm here is not O(1).
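As a concrete illustration of (3), here is a small Python sketch using classic Euclid rather than the paper's divstep-based algorithm (so it only illustrates the general point, not this specific algorithm): the worst-case step count grows with the size of the inputs, so no fixed constant C bounds the running time over all inputs.

```python
def gcd_steps(a: int, b: int) -> int:
    """Count the division steps classic Euclid takes on (a, b)."""
    steps = 0
    while b:
        a, b = b, a % b
        steps += 1
    return steps

# Consecutive Fibonacci numbers are the classic worst case: the step
# count grows linearly with the index, so it eventually exceeds any C.
fibs = [1, 1]
while len(fibs) < 60:
    fibs.append(fibs[-1] + fibs[-2])

for i in (10, 20, 40):
    print(i, gcd_steps(fibs[i + 1], fibs[i]))
```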
One can argue separately whether “constant time” should be used to mean
• (the usual computational complexity sense) that the worst-case running time, i.e. the function f(n) is O(1), or
• (the cryptography sense) that T(I) does not give any exploitable information about I beyond what is known, typically |I|.
This is just an unfortunate clash of terminology across the cultures of the two fields, and IMO not an interesting thing to debate — everyone can choose whatever terminology works for them. But your claim that the two definitions are the same is puzzling, as all of (1), (2), (3), (4) seem obvious to me.
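To make the cryptography-sense bullet concrete, here is a minimal sketch (illustrative names, and ignoring that Python itself gives no real timing guarantees; production code would use something like hmac.compare_digest) of an equality check that leaks timing information versus one that is constant time in the cryptographic sense. Note that neither is O(1) in the complexity sense: both take time proportional to the input length.

```python
def leaky_equal(a: bytes, b: bytes) -> bool:
    # Early exit: the running time reveals the position of the first
    # mismatch, i.e. T(I) leaks information about I beyond |I|.
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False
    return True

def ct_equal(a: bytes, b: bytes) -> bool:
    # Every byte is examined and the result is accumulated without
    # data-dependent branches, so the time depends only on the length.
    if len(a) != len(b):
        return False
    diff = 0
    for x, y in zip(a, b):
        diff |= x ^ y
    return diff == 0
```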
Agreeing with you by example: In computational complexity the size of the input varies. In cryptography it typically doesn't: time is identical for all inputs of the same size.
Imagine a magic sorting algorithm that always takes 1 second to check whether an input is sorted, and 9 seconds to sort the input if it wasn't sorted. It then returns the sorted value as output.
E.g. sorting "abcdefg" would check (taking 1 second) and then return (taking 0 seconds, since it's already sorted). Sorting "gfedcba" would check (taking 1 second), then sort (taking 9 seconds), then return. Given the complete works of Shakespeare, it would check (taking 1 second), then sort (taking 9 seconds), then return.
It's O(1), yet the time varies based on the input. From a computational-complexity terminology standpoint it's constant time; from a cryptographic terminology (and common intuition) standpoint it's clearly not, since it doesn't always take the same time to run.
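A minimal sketch of that magic sorter, with time.sleep standing in for the fixed-cost check and sort phases (the real cost of sorted() is treated as part of the 9-second "magic" budget):

```python
import time

def magic_sort(xs):
    items = list(xs)
    time.sleep(1)         # check phase: always 1 second, whatever the input
    if items == sorted(items):
        return items      # already sorted: done after ~1 second
    time.sleep(9)         # sort phase: a further 9 seconds, regardless of size
    return sorted(items)  # in the thought experiment the sort itself is "magic"
```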