That's the big difference in this round. Before, you needed ML expertise, plus the background to understand the implications of, say, an MNIST classifier example. Now anyone can "get" it, because you're prompting and getting inference back in English. Underneath, though, the fundamentals aren't all that different: it has the same novelty factor, and the same limitations apply.
I think the fundamentals are radically different, just due to the ease of applying this stuff.
I used to be able to train and deploy an ML model to help solve a problem... if I set aside a full week to get that done.
Now I tinker with LLMs five minutes at a time, or maybe a full hour if I have something harder, and get useful results. I use them on a daily basis.
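To make the "five minutes of tinkering" concrete, here's roughly what that looks like in practice: one HTTP call to a hosted model instead of a week of data prep, training, and deployment. This is just an illustrative sketch, not anyone's production setup; the endpoint is OpenAI's public chat completions API, and the model name, the prompt, and the OPENAI_API_KEY environment variable are assumptions on my part, so swap in whatever provider you actually use.

```python
# Minimal sketch of "tinkering with an LLM": prompt in, English answer out.
# Assumes an OPENAI_API_KEY environment variable; the model name is an
# assumption and may be outdated, so adjust for your provider.
import os
import requests

def ask(prompt: str) -> str:
    resp = requests.post(
        "https://api.openai.com/v1/chat/completions",
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={
            "model": "gpt-4o-mini",  # assumed model name, pick any current one
            "messages": [{"role": "user", "content": prompt}],
        },
        timeout=60,
    )
    resp.raise_for_status()
    # Return the assistant's reply text from the first choice.
    return resp.json()["choices"][0]["message"]["content"]

# Hypothetical everyday use: a quick, throwaway question about some data.
print(ask("Suggest a Python one-liner to parse dates in the format 03/JAN/2024."))
```

That's the whole loop: no labeled dataset, no training run, no serving infrastructure, which is why it fits into spare five-minute slots rather than a blocked-out week.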