I don't think that's a problem with AI; it's a problem with the idea that pure vibe-coding will replace knowledgeable engineers. While there is a loud contingent hyping up this idea, it won't survive contact with reality.
Purely vibe-coded projects will soon break in unexplainable ways as they grow beyond trivial levels. Once that happens, their devs will either need to adapt and learn coding for real or be PIP'd. I can't imagine any such devs lasting long in the current layoff-happy environment. So it seems like a self-correcting problem, no?
(Maybe AGI, whatever that is, will change things, but I'm not holding my breath.)
The real problem we should be discussing is: how do we convince students and apprentices to abstain from AI until they learn the ropes for real?
> The real problem we should be discussing is: how do we convince students and apprentices to abstain from AI until they learn the ropes for real?
That's just it. You can only use AI usefully for coding once you've spent years beating your head against code "the hard way". I'm not sure what that looks like for the next cohort, since they have AI on day 1.
> The real problem we should be discussing is: how do we convince students and apprentices to abstain from AI until they learn the ropes for real?
Learning the ropes looks different now. You used to learn by doing; now you need to learn by directing. And to direct well, you first have to be knowledgeable. So if you're starting work on an unfamiliar technology, a good starting point is to read whatever O'Reilly book gives a good overview, so that you understand the landscape of what's possible with the tool and can spot when the LLM is doing (now) obvious bullshit.
You can't just YOLO it for shit you don't know and get good results, but if you build a foundation first through reading, you will do a lot better.
Totally agreed: learning the ropes is very different now, and a strong foundation is definitely needed. But I also think where that foundation lies has changed.
My current project is in a technical domain I had very little prior background in, but thanks to AI I've been getting actual, visible results since day one. The amazing thing is that for any task I give it, the AI provides a very useful overview of what it produces, and I can have a conversation with it if I have further questions. So I'm building domain knowledge incrementally even as I make progress on the project!
But I also know this is only possible because of the pre-existing foundation of my experience as a software engineer. That experience lets me follow the language the AI uses to explain things and dive deeper when I have questions. It also means I understand what the code is doing, so I can catch subtle issues before they compound.
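To give a made-up flavor of what I mean by "subtle issues": the classic mutable-default-argument trap in Python is exactly the kind of thing an LLM can produce inside otherwise plausible code, and exactly the kind of thing prior hands-on experience teaches you to flag. (A minimal sketch; the names and scenario are invented for illustration, not taken from my project.)

```python
# Hypothetical illustration: an AI-generated helper that looks fine at a glance.
def add_tag(item, tags=[]):
    # Subtle bug: the default list is created once and shared across all calls.
    tags.append(item)
    return tags

print(add_tag("a"))  # ['a']
print(add_tag("b"))  # ['a', 'b'] -- state leaked from the previous call

# The fix an experienced reviewer would reach for:
def add_tag_fixed(item, tags=None):
    if tags is None:
        tags = []
    tags.append(item)
    return tags

print(add_tag_fixed("a"))  # ['a']
print(add_tag_fixed("b"))  # ['b']
```

Nothing here is exotic; the point is that the buggy version passes a casual read, and only someone who has been burned by this before knows to look for it.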
I suppose it's the same with reading books, but books, being static, tend to give a much broader overview upfront, whereas interacting with LLMs yields a much more focused learning path.
So a foundation is essential, but it can now be much more general -- such as generic coding ability -- though that still only comes with extensive hands-on experience. There is at least one preliminary study showing that students who rely on AI do not develop the critical problem-solving, coding, and debugging skills necessary to be good programmers:
On vibe coding being self-correcting, I would point to the growing number of companies mandating AI usage, and to the quote "the market can stay irrational longer than you can stay solvent". Companies routinely burn millions of dollars on irrational endeavours for years, and AI has been promised as an insane productivity booster.
I wouldn't expect things to calm down for a while, even if real-world results disappoint. You can make excuses for these tools' underperformance for a very long time, especially if the CEO or other executives are invested.
> The real problem we should be discussing is: how do we convince students and apprentices to abstain from AI until they learn the ropes for real?
I hate to say it, but that's never going to happen :/
I'm a bit cynical at this point, but I'm starting to think these AI mandates are simply another front in the capital class's war on the labor class, just like RTO. I don't think the execs truly believe AI will replace their employees, but it sure is a useful negotiation lever: not just an excuse for layoffs but also a mechanism to pressure remaining employees. "Before you ask for perks or raises or promotions, why aren't you doing more with less, since you have AI? You know we could soon replace you with AI for much cheaper?"
At the same time, I'll admit that AI resistance is real; we see it in the comments here for various reasons -- job-displacement fears, valid complaints about AI reliability, ethical opposition, etc. So there could be a genuine need for strong incentives to adopt it.
Unfortunately, AI is also deceptively hard to use effectively (a common refrain of mine). Ideally, AI mandates would come with structured training tailored to each role, but the fact that this isn't happening makes me wonder about either the execs' competence or their motives.