I think this is a very good point. Years ago people worried about gradient descent getting stuck in local minima, plausibly because the problem is very obvious in a three-dimensional space. In higher dimensions, however, the problem seems to more or less go away, and a lot of the worrying about it seems to be the result of low-dimensional intuitions wrongly extrapolated to higher dimensions.
When I was implementing a neural network for a university assignment (two years ago, so my memory might fail me), we had to run our algorithm multiple times with different starting positions, then take the minimum of those local minima.
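For anyone unfamiliar with the trick, it's just random restarts. This isn't my actual assignment code, and the toy loss function and hyperparameters here are made up, but the pattern was roughly this:

```python
import numpy as np

def loss(w):
    # Toy non-convex loss with several local minima (a stand-in for the
    # assignment's real objective, which isn't shown here).
    return np.sin(3 * w).sum() + 0.1 * (w ** 2).sum()

def grad(w):
    return 3 * np.cos(3 * w) + 0.2 * w

def gradient_descent(w0, lr=0.01, steps=1000):
    w = w0.copy()
    for _ in range(steps):
        w -= lr * grad(w)
    return w

rng = np.random.default_rng(0)
# Run from several random starting points and keep the best result found.
candidates = [gradient_descent(rng.normal(size=2)) for _ in range(10)]
best = min(candidates, key=loss)
print(best, loss(best))
```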
I'm not sure what momentum and dropout are, but I agree with Eliezer: without these things (which I didn't use), local minima are a problem.
Dropout is where you randomly remove neurons from your network during training, which prevents the network from depending too heavily on any specific neurons (making it generalize better). It was introduced around 2012, with the best-known paper published in 2014, so it would have been brand-new tech back when you took your class.
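The mechanics are simple enough to show in a few lines. This is a sketch of "inverted" dropout applied to a layer's activations, not any particular framework's implementation:

```python
import numpy as np

def dropout(activations, p_drop, training, rng):
    """Randomly zero out units with probability p_drop during training.

    Uses 'inverted' scaling (divide by the keep probability) so that no
    rescaling is needed at test time.
    """
    if not training or p_drop == 0.0:
        return activations
    keep = rng.random(activations.shape) >= p_drop
    return activations * keep / (1.0 - p_drop)

rng = np.random.default_rng(0)
h = np.ones((4, 8))  # pretend hidden-layer activations
print(dropout(h, p_drop=0.5, training=True, rng=rng))
```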
I think you're misinterpreting the parent, who is saying that local minima are not a problem in high dimensions because there is almost always some direction you can move in that reduces the loss (unlike in low dimensions, where you can more easily get stuck at a point that cannot be locally improved upon in any direction).
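A toy illustration of the intuition (my example, not the parent's, and real loss surfaces are not random matrices): a critical point is a local minimum only if every eigenvalue of the Hessian is non-negative. If you caricature the Hessian at a random critical point as a random symmetric matrix, the chance that all n eigenvalues come out positive drops off quickly with n, so most critical points are saddles with at least one descent direction:

```python
import numpy as np

def frac_with_descent_direction(n, trials=2000, seed=0):
    """Fraction of random symmetric n x n 'Hessians' that have a negative
    eigenvalue, i.e. critical points that are saddles rather than minima."""
    rng = np.random.default_rng(seed)
    count = 0
    for _ in range(trials):
        a = rng.normal(size=(n, n))
        hessian = (a + a.T) / 2  # symmetrize to get a valid Hessian-like matrix
        if np.linalg.eigvalsh(hessian).min() < 0:
            count += 1
    return count / trials

for n in (1, 2, 5, 10):
    print(n, frac_with_descent_direction(n))
```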
Not the person you replied to, but your comment was rude and incorrect enough that I feel the need to reply. See, for example, http://www.offconvex.org/2016/03/22/saddlepoints/ for some discussion of this.