If you browse r/MachineLearning, you will see many people complaining about unnecessarily convoluted mathematics that is there purely to appease reviewers and doesn't actually say anything useful.
In my understanding this comes from the fact that much of ML is more engineering than science but wants to be seen as (or is reviewed as) theoretical research. So it fits the situation in the article even better.
I agree with you. I believe that many people are trying to emulate papers like Vapnik's SVMs in an attempt to appear ground-breaking because, well, the competition is enormous.
There isn't a clear distinction between the engineering aspects and the theoretical ones. As the article says, it looks as if we are trying to win the approval of mathematicians, so papers become a convoluted amalgamation of different ideas and, more often than not, deliver the worst of both worlds.
I think much of ML isn’t even engineering. Speeding up the implementation of an algorithm is, but the algorithms themselves are mostly of the “something like this seemed to work for problem P, so for problem Q, I tried this variation” type.
Yes, there may be solid math behind it that says “if your problem is of type T, this algorithm will get within Foo of the optimal solution in O(n log n) time”, but the problem is that nobody can tell whether a given real-world problem is of type T. Yet people happily run the algorithm, and if it works, it works.