This is an ignorant and hyperbolic characterization of recent developments in neural networks. Most AI researchers never thought it was "nuts" to learn using neural nets. Rather, neural networks were an approach that had fallen out of favor, and many considered other approaches more promising. I don't think any serious researcher ever called Geoffrey Hinton "nuts"; he was well respected in his field long before this epic hotel meeting.
The comparison was between AI and procedural programming, not between machine learning and AI. It was a reference to the broader second AI winter, which ended largely because of the success of neural-network approaches.
Ten years ago, a programmer in industry generally would not throw machine learning at a problem as their first, or even their last, approach; it just wasn't a tool in most non-academics' toolboxes, and even where it was, the common wisdom was that its applications were limited to a few conventional uses, like spam filtering. Now we've basically landed in that "throwing around learning models like they were if statements" world that seemed like science fiction just a short while ago.
Machine learning was still well liked and accepted in research, and in fact the problems ML solves today (speech and object recognition, most significantly) were already being tackled with ML back then. I completely agree this article is very hyperbolic.