> "It is exponential in the number of nested loops, which is what's important for the realization that adding more nested loops is bad."
Adding more nested loops is bad, but it's not exponential. It's polynomial.
As you nest more and more loops, the big O complexity goes from N to N^2 (quadratic) to N^3 (cubic) to N^4, and so on. N^(any constant) is polynomial. Exponential would be 2^N or 3^N, i.e. some constant raised to the power N.
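To make the distinction concrete, here's a throwaway comparison of a cubic function against a true exponential (names and values are mine, purely illustrative):

```javascript
// n ** 3 is polynomial; 2 ** n is exponential. An exponential
// eventually overtakes any fixed power of n.
const cubic = n => n ** 3;
const exp2 = n => 2 ** n;
for (const n of [1, 5, 10, 20]) {
  console.log(`n=${n}: n^3=${cubic(n)}  2^n=${exp2(n)}`);
}
// at n=20: n^3 = 8000, but 2^n = 1048576
```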
OK I had to write things down to try and make sense of this. If anyone is like me, consider this loop that loops 5 times... maybe it will help?
let defines = 0
let tests = 0
let increments = 0
defines++
for (let i = 0; i < 5; i++) {
  tests++
  increments++
}
console.log(`defines: ${defines}, tests: ${tests}, increments: ${increments}`)
For the sake of space I'm not going to copy nested versions of that, but imagine nesting it two and then three levels deep.
1 level will increment 5 (5^1) times
2 levels will increment 25 (5^2) times
3 levels will increment 125 (5^3) times
More interesting are the numbers of definitions and tests:
levels  tests  defines
1       5      1
2       30     6
3       155    31
(Since tests++ sits inside the loop body, each loop's final, failing test isn't counted in these numbers.)
But I don't know how the interpreter works or whether it has tricks to shrink those numbers; these figures just reflect my naive mental model of how things work.
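For anyone who wants to check the table, here's a runnable three-level version under the same naive model (test and define counters in every loop, the increment counter only in the innermost body):

```javascript
let defines = 0, tests = 0, increments = 0;

defines++;                        // outer i defined once
for (let i = 0; i < 5; i++) {
  tests++;
  defines++;                      // middle j defined once per outer iteration
  for (let j = 0; j < 5; j++) {
    tests++;
    defines++;                    // inner k defined once per middle iteration
    for (let k = 0; k < 5; k++) {
      tests++;
      increments++;               // innermost body runs 5^3 = 125 times
    }
  }
}
// prints: defines: 31, tests: 155, increments: 125
console.log(`defines: ${defines}, tests: ${tests}, increments: ${increments}`);
```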
If N is the number of nested loops, and M is the number of times through each loop, then it is indeed O(M^N). So indeed, complexity scales exponentially with the level of nesting. The wording was just off and confusing because it's a nontraditional formulation of the problem, but what he was saying does actually make sense.
Something can be exponential in one context and polynomial in another. Asymptotic analysis is about the growth rate of a function in terms of some input variable. The results you get depend on which values you assume to be fixed while others vary. It is usually applied to analyze the runtime of a program in terms of the input size, which is the basis of classifying the runtime complexity, but that's not the only way you can do it.
If you have a sequence of programs with polynomial runtime, but which grows as N, N^2, N^3, N^4, ..., that's a textbook example of exponential growth. The sequence isn't generated by running one program on increasingly large inputs, so there's no single program to place in the exponential runtime complexity class, but the growth is exponential nonetheless.
> If you have a sequence of programs with polynomial runtime, but which grows as N, N^2, N^3, N^4, ..., that's a textbook example of exponential growth
No, it isn't. N, N^2, N^3, N^4, ... is polynomial, not exponential. Exponential would be X^N. Look at the graph on the Wikipedia page I linked to.
Each element of the sequence is a polynomial, but the sequence of polynomials grows exponentially. X^N is exponential in N, polynomial in X. N^X is exponential in X, polynomial in N.
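Concretely, the same expression x ** n reads both ways depending on which variable you hold fixed (values picked arbitrarily for illustration):

```javascript
const f = (x, n) => x ** n;
// Hold x = 2 fixed and vary n: exponential growth
console.log([1, 2, 3, 4].map(n => f(2, n)));   // [2, 4, 8, 16]
// Hold n = 2 fixed and vary x: polynomial (quadratic) growth
console.log([1, 2, 3, 4].map(x => f(x, 2)));   // [1, 4, 9, 16]
```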
> Each element of the sequence is a polynomial, but the sequence of polynomials grows exponentially.
Bringing this back to the original comment about nesting loops, perhaps what you're describing is some sort of metaprogramming that dynamically generates increasingly deeply nested loops based upon the size of the input. If someone were to write that program then yes, it would be exponential.
In the far more common case of a programmer nesting a few loops, the resulting run-time would be polynomial, not exponential.
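That dynamically-nested idea can be mimicked with recursion, where the nesting depth is a runtime parameter. A hypothetical sketch (function name and shape are my own):

```javascript
// Recursion standing in for dynamically generated nested loops.
// For fixed m, the innermost "body" executes m ** depth times,
// so the work is exponential in depth.
function countLeaves(m, depth) {
  if (depth === 0) return 1;      // the innermost body executes once
  let total = 0;
  for (let i = 0; i < m; i++) {
    total += countLeaves(m, depth - 1);
  }
  return total;
}
console.log(countLeaves(5, 1));   // 5
console.log(countLeaves(5, 3));   // 125
```

Written this way, depth really is an input to the program, so calling its runtime exponential matches the conventional usage.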
In this case, the number of iterations (1000) of each loop is being held fixed, and the depth of nesting is varying, i.e. the body of the inner loop executes O(1000^depth) times.
1000^depth is not a constant: the function here is "f(depth) = number of times the inner loop body executes". f(depth) = O(1000^depth).
And, "times" here is serving the same role as "comparisons" in "Mergesort takes O(n log n) comparisons": it's referring to the thing that f is actually counting.
Given O(f(x)), x is almost always understood to be a measurement of the work input to the algorithm. Loop nesting depth is not a measurement of work input, it's a property of the code – "1000^depth" is constant for any given algorithm.
Yes, O(…) is just a mathematical construct, so you can apply it the way you have to mean "the amount of work done for a given input size increases exponentially with respect to the number of nested loops iterating over the input"… but that's not how most people interpret it in the context of a computer algorithm. (The exception would be if the nesting depth were dynamically determined by an input parameter, but I don't think that's what anyone here is talking about.)
Anyway I'm not trying to argue, I agree with your point, but I think everyone here is talking past each other trying to say the same thing, ultimately due to imprecision in semantics.
The only difference between what is "code" and what is "work" is context: in this case, the work of interest is a property of some input code. If you want a framing that matches your notions of input better, just imagine we're measuring the asymptotics of a VM/interpreter, and then the code truly is the input.
In this case depth is not constant, so 1000^depth isn't either.
And usually when you encounter O(some constant), it's meant as "of the order of magnitude of", i.e. somewhere between some constant/10 and some constant * 10. That isn't the definition used here, but seems to be the cause of most complaints about asymptotic analysis being misapplied when no asymptotic analysis was being done in the first place.