Classically, the equivalent of complexity theory in machine learning is statistical learning theory, where the main question is: if I have a magical algorithm that always finds the function in my class that fits the data best, how big does my dataset (which is a sample from some unknown probability distribution) need to be to ensure that the function I pick is almost as good as the best function in the class with high probability? This is known as PAC (probably approximately correct) learning.
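For a concrete flavor of what a PAC-style answer looks like, here is a minimal sketch (the function name and the example numbers are purely illustrative) of the textbook sample-complexity bound for a finite hypothesis class, where the "magical algorithm" is empirical risk minimization:

```python
import math

def agnostic_pac_bound(hypothesis_count, epsilon, delta):
    """Textbook bound for a *finite* hypothesis class H: with
    m >= 2 * ln(2|H| / delta) / epsilon^2 i.i.d. samples, the empirical-risk
    minimizer is within epsilon of the best hypothesis in the class with
    probability at least 1 - delta."""
    return math.ceil(2 * math.log(2 * hypothesis_count / delta) / epsilon**2)

# e.g. a class of one million hypotheses, epsilon = 0.05, delta = 0.05
print(agnostic_pac_bound(1_000_000, epsilon=0.05, delta=0.05))  # 14004
```

The bound grows with ln|H|, which is the "more expressive class needs more data" intuition in its simplest form; richer classes are handled with measures like VC dimension rather than a raw count.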
For many non-deep machine learning models like support vector machines (SVMs), "find the function in my class that fits the data best" can be posed as a convex optimization problem. So the magical algorithm actually exists. In this setting, the PAC analysis is the natural thing to study. (Typically, we find that you need more samples if your function class is more expressive, which agrees with the observation that deep learning requires huge data sets.)
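As a rough sketch of what "convex" buys you, here is the soft-margin linear SVM written as a convex program (assuming cvxpy, with made-up toy data and an arbitrary regularization constant C). A convex solver is guaranteed to return the global optimum of this objective, so "find the best function in the class" is not just wishful thinking here:

```python
import numpy as np
import cvxpy as cp

# Toy linearly-separable-ish data with labels in {-1, +1}.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
y = np.where(X[:, 0] + X[:, 1] > 0, 1.0, -1.0)

w = cp.Variable(2)
b = cp.Variable()
C = 1.0

# Hinge loss plus L2 regularization: a convex objective in (w, b).
hinge = cp.sum(cp.pos(1 - cp.multiply(y, X @ w + b)))
objective = cp.Minimize(0.5 * cp.sum_squares(w) + C * hinge)

prob = cp.Problem(objective)
prob.solve()  # the solver certifies a global optimum
print(w.value, b.value)
```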
In deep learning, the magical algorithm doesn't exist. The optimization problem is nonconvex, but everyone uses stochastic gradient descent (SGD), an algorithm that is only guaranteed to find the global optimum on convex problems. Worst-case theory suggests that SGD will often converge on a local optimum that is significantly worse than the global optimum. However, in practice this doesn't happen much! If the network is big enough, the hyperparameters are tuned well, and you run the deep learning algorithm with different random seeds, the result will be about equally good every time.
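Here is a minimal sketch of the kind of sanity check behind that claim (assuming PyTorch; the toy data, architecture, and hyperparameters are made up): train the same small network from several random seeds with SGD and compare the final losses.

```python
import torch
import torch.nn as nn

# Fixed toy regression data: y = sin(x) plus noise, same for every run.
torch.manual_seed(0)
x = torch.linspace(-3, 3, 256).unsqueeze(1)
y = torch.sin(x) + 0.1 * torch.randn_like(x)

def train_once(seed, steps=2000):
    # Only the initialization and the minibatch order change with the seed.
    torch.manual_seed(seed)
    model = nn.Sequential(nn.Linear(1, 256), nn.ReLU(), nn.Linear(256, 1))
    opt = torch.optim.SGD(model.parameters(), lr=0.05)
    loss_fn = nn.MSELoss()
    for _ in range(steps):
        idx = torch.randint(0, x.shape[0], (32,))  # random minibatch
        opt.zero_grad()
        loss = loss_fn(model(x[idx]), y[idx])
        loss.backward()
        opt.step()
    with torch.no_grad():
        return loss_fn(model(x), y).item()

# Each seed lands in a different local optimum, yet the final losses
# typically fall in a narrow band.
for seed in range(1, 6):
    print(f"seed {seed}: final loss {train_once(seed):.4f}")
```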
ML theory people working in deep learning tend to focus on this phenomenon: why does SGD usually find good local optima? This is totally different from the PAC analysis, and the analogy with computational complexity is less crisp.