Developers (often juniors) use LLM code without taking the time to verify it. This leads to bugs they can't fix because they don't understand the code. Some senior developers also trust the tool to generate a function and don't take the time to review it and catch the edge cases the tool missed.
They rely on ChatGPT to answer their questions instead of taking the time to read the documentation or doing a simple web search to find discussions on Stack Overflow or blog posts about the subject. This may give results in the short term, but they don't actually learn to solve problems themselves. I am afraid that this will have huge negative effects on their careers if the tools improve significantly.
Learning how to solve problems is an important skill. They also lose access to the deeper knowledge that enables you to see connections, complexities, and flows that the current generation of tools cannot. By reading the documentation, blogs, or discussions, you are often exposed to a wider view of the subject than the laser-focused answer from ChatGPT.
There will be less room for "vibe coders" in the future, as these tools increasingly solve the simple things without requiring as much management. Until we reach AGI (which I doubt will happen within the next 10 years), the tools will require experienced developers to guide them through the more complex issues. Older experienced developers, and younger developers who have learned how to solve problems and have deep knowledge, will be in demand.
> They rely on ChatGPT to answer their questions instead of taking the time to read the documentation or doing a simple web search.
Documentation is not written with answers in mind. Every little project wants me to be an expert in their solution. They want to share with me the theory behind their decisions. I need an answer now.
Web search no longer provides useful information within the first few results. Instead, I get content farms that are worse than recipe pages: explaining why someone would want this information, but never providing it.
A junior isn’t going to learn from information that starts from the beginning (“if you want to make an apple pie from scratch, you must first invent the universe.”) 99.999% of them need a solution they can tweak as needed so they can begin to understand the thing.
LLMs are good at processing and restructuring information so I can ask for things the way I prefer to receive them.
Ultimately, the problem is actually all about verification.
> Documentation is not written with answers in mind. Every little project wants me to be an expert in their solution. They want to share with me the theory behind their decisions. I need an answer now.
I have an answer now, because I read the documentation last week.
As a real example, I needed to change my editor config last month. I do this about once every 5 years. I really didn't want to become an expert in the config system again, so I tried an LLM.
Sad to report, it told me where to look but all of the exact details were wrong. Maybe someday soon, though.
I used to make fun of (or deride) all the "RTFM" people when I was a junior too. Why can't you just tell me how to do whatever thing I'm trying to figure out? Or point me in the right direction instead of just saying "its in the docs lol"?
Sometime in the last few years, as I've started doing more individual stuff, I've started reading documentation before running npm i. And honestly? All the "RTFM" people were 100% right.
Nobody here is writing code that's going to be used on a patient on the surgical table right now. You have time to read the docs and you'll be better if you do.
I'm also a hypocrite because I will often point an LLM at the root of a set of API docs and ask how to do a thing. But that's the next best thing to actually reading it yourself, I think.
I'm in total agreement, RTFM does wonders. Even if you don't remember all of it, you get the gist of what's going on and can find things (or read them) faster.
In Claude I put in a default prompt[1] that helps me gain context when I do resort to asking the LLM a specific question.
[1] Your role is to provide technical advice in developing a Java application. Keep answers concise and note where there are options and where you are unsure on what direction that should be taken. Please cite any sources of information to help me deep dive on any topics that need my own analysis.
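For anyone wiring the same idea up through the API rather than the app, here's a rough sketch of that default prompt used as a standing system prompt; the model id, the ask() helper, and the rest of the scaffolding are placeholders, not my exact setup:

    # Minimal sketch: reuse a standing default prompt as the system prompt for
    # every technical question. The model id below is a placeholder.
    import anthropic

    DEFAULT_PROMPT = (
        "Your role is to provide technical advice in developing a Java application. "
        "Keep answers concise and note where there are options and where you are "
        "unsure on what direction that should be taken. Please cite any sources of "
        "information to help me deep dive on any topics that need my own analysis."
    )

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

    def ask(question: str) -> str:
        message = client.messages.create(
            model="claude-sonnet-4-20250514",  # placeholder model id
            max_tokens=1024,
            system=DEFAULT_PROMPT,  # the default prompt rides along with every question
            messages=[{"role": "user", "content": question}],
        )
        return message.content[0].text

    print(ask("Which logging setup would you suggest for this Java service, and why?"))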
The same could be said for every language abstraction or systems-layer change. When we stopped programming kernel modules and actually found a workable interface, it opened the door to so many more developers. I'm sure at the time there was skepticism because people didn't understand the internals of the kernel. That's not the point. The point is to raise the level of abstraction to open the door, increase productivity and focus on new problems.
When you see 30-50 years of change you realise this was inevitable, and in every generation there are new engineers entering with limited understanding of the layers beneath, even of the code produced. Do I understand the lexers and the compilers that turn my code into machine code or instruction sets? Heck no. That doesn't mean I shouldn't use the tools available to me now.
No, but you can understand them if given time. And you can rely on them to be reliable to a degree approaching 100% (and when they fail, it will likely be in a consistent way you can understand with sufficient time, and likely fix).
LLMs don’t have these properties. Randomness makes for a poor abstraction layer. We invent tools because humans suffer from this issue too.
> it opened the door to so many more developers. [...] That's not the point. The point is to raise the level of abstraction to open the door, increase productivity and focus on new problems.
There are diminishing returns. At some point, quoting Cool Hand Luke, some men you just can't (r|)teach.
Aren't the insufficiencies of the LLMs a temporary condition?
And as with any automation, there will be a select few who will understand its inner workings, and a vast majority who will enjoy/suffer the benefits.
> Developers (often juniors) use LLM code without taking the time to verify it. This leads to bugs they can't fix because they don't understand the code
Well... is this something new? Previously the trend was to copy and paste Stack Overflow answers without understanding what they did. Perhaps with LLM code it's an incremental change, but the concept is fairly familiar.
So the scope of answers is a single function or a single class? I have people nearby who are attempting to generate whole projects; I really wonder how they will ensure anything about them beyond the happy paths. Or maybe they plan to have an army of agents fuzzing and creating hotfixes 24/7...
> Or maybe they plan to have an army of agents fuzzing and creating hotfixes 24/7
There are absolutely people who plan to do exactly this. Use AI to create a half-baked, AI-led solution, and continue to use AI to tweak it. For people with sufficient capital it might actually work out halfway decent.
I've had success with greenfield AI generation but only in a very specific manner:
1. Talk with the LLM about what you're building and have it generate a detailed technical specification. Iterate on this until you have a good, human-readable explanation of the entire application or feature.
2. Start a completely new chat/context. If you're using something like Gemini, turn temperature down and enable external search.
3. Have instructions¹ guiding the LLM; this might be the most important step, even more so than #1.
4. Create the base/blank project as its own step. Zero features or config.
5. Copy features one at a time from the spec to the chat context OR have them as separate documents and say things like "we're creating Feature 3A.1" or whatever.
6. Iterate on each feature until you're happy, then repeat (see the sketch after this list).
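To make the shape of steps 2-6 concrete, here's a minimal sketch of the loop; the file layout, prompt wording, and model id are stand-ins, not my exact setup:

    # Rough sketch: standing instructions plus one fresh, low-temperature request
    # per feature section of the spec. Paths and model id are placeholders.
    from pathlib import Path

    from google import genai
    from google.genai import types

    client = genai.Client()  # reads the API key from the environment

    instructions = Path("instructions.md").read_text()         # step 3: standing guidance
    feature_specs = sorted(Path("spec").glob("feature-*.md"))   # step 5: one document per feature

    Path("out").mkdir(exist_ok=True)
    for spec in feature_specs:
        # Steps 2 and 5: each feature starts from a clean context, not one long conversation.
        response = client.models.generate_content(
            model="gemini-2.0-flash",  # placeholder model id
            contents="We're implementing the following feature:\n\n" + spec.read_text(),
            config=types.GenerateContentConfig(
                system_instruction=instructions,
                temperature=0.2,  # step 2: keep output conservative
                tools=[types.Tool(google_search=types.GoogleSearch())],  # step 2: external search
            ),
        )
        # Step 6: review and iterate on this output before moving to the next feature.
        Path("out", spec.stem + ".md").write_text(response.text)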