If you spend time on places that attract newbie programmers (some subreddits focused on game dev or game engines, for example) you’ll see the outcome of “I no longer think you should learn to code.” And it’s not pretty.
Many, many posts of people looking for help fixing AI-generated code because the AI got it wrong and they have no idea what the code even does. Much of the time the problem is simply an invented method name that doesn’t exist, a problem that is trivially solved by the error message and documentation. But they say they’ve spent several days or whatever going back and forth with the AI trying to fix it.
It’s almost a little sad. If they just take the time to actually learn what they’re doing they’ll be able to accomplish so much more.
Now of course people learning the traditional way have these same problems, but they’re debugging code they wrote, not gobbledygook from an AI. It’s also easier to explain the solution to them because they wrote the code, so it tends to be simpler. Several times I’ve taken pity on someone asking for help with AI code, and even when I explained the solution they still didn’t understand it, and I had to just give up on them - I’m not getting paid to help them.
I have played around with AI code from time to time. I do not code routinely, but I have pet personal projects that let me write some code, and this is where I experimented.
Rule number 1 and the only rule: You need to be a subject matter expert. Be it program logic or be it programming language. AI is only a helper, it will go wrong, frequently, and if you do not understand the reason for the code and the programming language, you will take so much more time than if you did not even use the AI.
I won't name the IDE (one of the top 3, I'd guess), but I asked it to simplify some code. I had a block of code repeated 8 times. 6 of the blocks were identical, the last 2 had a variation. The AI just did not catch it, and refactored all 8 blocks to use the logic of the first block. How can you even do that? The code is similar but different; it looks the same, but there are extra lines of code in the last 2 blocks!
And it took me a while to realize this. I never ingest AI code directly, so at first I was marveling at a job well done, and then, as I read and compared, the horror! And that was not the first time it happened, but once again I got tricked by the soft-spoken, well-mannered AI into believing that it did a fantastic job when it did not.
Edit: It is just an assistant. You give it a task, it will make a mistake, you tell it to fix the mistake, it will fix the mistake. It still saves you time. Next day, it will make the same mistake - and hopefully that gets reduced as the versions evolve.
AI is excellent for tasks you know how to do, but can't be arsed to spend the time.
Example: I wanted a tool that notifies me of replies or upvotes to my recent Hacker News comments. Grok3 did it in 133 seconds with Think mode enabled. Total time including me giving it the example HTML as attachment and writing the specs + pasting the response to a file and running it? About 5 minutes.
I know perfectly well how to do it myself, but do I want to spend the hour or so to write all the boilerplate for managing state and grabbing HTML and figuring out the correct html elements to poll? Fuck no.
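For what it's worth, the boilerplate in question is roughly this shape. A rough sketch of the reply-detection half of such a notifier in TypeScript (Node 18+ for the global fetch; the username, state file, and regex are placeholders - the real selectors would come from the example HTML you feed the model, and vote counts would need the logged-in page):

```typescript
// Sketch of a "notify me about new activity on my HN comments" script.
// State handling: remember which comment ids we've already seen, report new ones.
import { readFileSync, writeFileSync, existsSync } from "node:fs";

const USER = "yourusername";          // placeholder HN username
const STATE_FILE = "./hn-state.json"; // previously seen comment ids

async function main() {
  const res = await fetch(`https://news.ycombinator.com/threads?id=${USER}`);
  const html = await res.text();

  // Extract comment ids from the page; the exact pattern depends on the real markup.
  const ids = [...html.matchAll(/class=['"]athing comtr['"] id=['"](\d+)['"]/g)].map(m => m[1]);

  const seen: string[] = existsSync(STATE_FILE)
    ? JSON.parse(readFileSync(STATE_FILE, "utf8"))
    : [];

  for (const id of ids.filter(id => !seen.includes(id))) {
    console.log(`New activity: https://news.ycombinator.com/item?id=${id}`);
  }

  writeFileSync(STATE_FILE, JSON.stringify(ids));
}

main().catch(err => { console.error(err); process.exit(1); });
```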
From my experience using AI, if you don't write a really precise description of your initial requirements in the initial prompt and it doesn't one-shot the answer, I don't bother asking it to fix the mistake.
Unless you're using an LLM with a really long context, some context loss is bound to happen sooner or later - when that happens, pointing out errors that have dropped out of the context will just result in repeated or garbage output.
I don't use these tools daily because I have a hard time committing to these workflows. But do I need to do this setup once, once per project, or every time for one-off things I might code?
You really don't need these for tiny one-off scripts, but they're essential for larger projects where the whole application can't fit into the LLM context at once.
Basically they're just markdown files where you write what the project is about, where everything is located, and things like that. Pretty much something you'd need to have for another human to bring them up to speed.
An example snippet from the project I just happened to have open:
## Code Structure
- All import processors go under the `cmd/` directory
- The internal/ directory contains common utility functions, use it when possible. Add to it when necessary.
- Each processor has its own subdirectory and Go package
## Implementation Requirements
- Maintain consistent Go style and idiomatic patterns
- Follow the existing architectural patterns
- Each data source processor should be implemented as a separate command
This way the LLM (Cursor in this project) knows to, for example, check the internal/ directory for common utils. And if it finds duplication, it'll automatically generate any common functions in there.
It's a way to add guidelines/rails for projects, if you don't add anything the LLM will just pick whatever. It may even change style depending on what's being implemented. In this project the Goodreads processor was 100% different from the Steam processor. A human would've seen the similarities and followed the same style, but the LLM can't do that without help.
why would I pay for the advanced features when I haven't been impressed with the free features? in fact Claude 3.5, which is what is available, is a nearly worthless product, with value comparable to a free search engine, and not even a very good one. It is usually incorrect and frequently in subtle ways that will cost me a lot of time.
pro AI people sound like someone with an expensive addiction trying to justify it. the free product is bad, so I just need to pay to see the light?
Why would Anthropic let me use a model for free that is going to make me more skeptical of their paid offerings unless it is pretty similar to the paid ones and they think it's good?
Just read the manual and write the code yourself. These toys are a distraction.
Like many tools, there is some user skill required. Certainly there are situations where AI assistants won’t help much, but if every single attempt you’ve made to use an AI coding assistant has been “useless”, you are either working in a very niche area or, perhaps more likely, it is user error on your own part.
There are plenty of people who are way too high on the current abilities of AI, but I find that the “AI is useless crowd” to be equally ridiculous.
It reminds me of early in my career working in statistics where the company I joined out of grad school was justifiably looking to move out of SAS and start working in R and Python. Many were enthusiastic about the change and quickly saw the benefit, but there were some who were too entrenched in their previous way of working, and insisted that there was no benefit to changing, they could do anything required in SAS, and stubbornly refused to entertain the idea that there was a benefit to be gained by learning a new skill.
You needn’t become an AI cultist. But with the number of people who are getting at least some benefit to using AI coding assistants, if you are finding it to be worthless in your personal experience, it may be worth stepping back and considering if there is something wrong with how you are trying to utilize it.
What I do is go back through the conversation history, select the response that has the somewhat-working code, then submit a prompt with what I want changed. Selectively including context, adjusting temperature and top_p/k, and sometimes swapping the model or system prompt for a given query will give better results. Combine this with repeating the query multiple times with that same context, then select which result is the best and move on.
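As an illustration of the "same context, several sampling settings, pick the best" loop, here's a sketch against an OpenAI-style chat completions endpoint; the endpoint, model name, and settings are placeholders, and top_k is provider-specific so it's omitted:

```typescript
// Re-run the same selectively-assembled context with different sampling settings,
// then pick the best candidate by hand.
type Message = { role: "system" | "user" | "assistant"; content: string };

async function completeOnce(messages: Message[], temperature: number, top_p: number): Promise<string> {
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({ model: "gpt-4o-mini", messages, temperature, top_p }),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}

async function sampleCandidates(messages: Message[]): Promise<string[]> {
  const settings = [
    { temperature: 0.2, top_p: 1.0 },
    { temperature: 0.7, top_p: 0.9 },
    { temperature: 1.0, top_p: 0.95 },
  ];
  // Same context each time; only the sampling parameters vary.
  return Promise.all(settings.map(s => completeOnce(messages, s.temperature, s.top_p)));
}
```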
That hasn't been my experience, nor that of others using the AI.
It's a force constant, rather than a multiplier. If you're low-skilled and ask it to do a low-skilled task, it works fine. If you're high-skilled and ask it to do the low-skilled task, you save a tiny bit of time (less than the low-skilled person would).
But it cannot do a high skilled task (at least, not right now). It can pretend, which can lead the low skilled person astray (but not the high skilled person).
Therefore, all AI does is raise the floor of what is achievable by the layman, rather than multiply the productivity of a high-skilled programmer.
You could also be a mixed skilled developer. Good at regular code, architecture, and algorithms but not as familiar with a given UI framework. Having the LLM generate the html and css for a given layout description saves a lot of time looking through the docs.
Does it really? The thing is that there’s a domain model beneath each kind of library, and if two libraries solve the same problem, you will find that they generally follow the same patterns.
Let’s take the web. React, Svelte, Angular, Alpine.js all have the same problems they’re solving: binding some kind of state to a template, handling form interactions, decomposing the page into “components”… once you get the gist of that, it’s pretty easy to learn. And if you care about your code being correct, you still have to learn the framework to avoid pitfalls.
Same thing with 3D engines, audio frameworks, physics engines, math packages,…
Using myself as an example -- I'm a long time C programmer (occasionally in a professional setting, mostly personal or as a side-item on my primary professional duties). I've picked up other languages through the years, had to deliver a web based application a few years ago so I did a deep-dive into html5, css3, and javascript. Now javascript has evolved since then, and I lost a bit of what I learned.
So now I want to do a new web application -- If I fall back on my C roots, my Javascript looks a lot like C. Example: adding an item to an array. The C style in Javascript would be to track the length of the array in another variable "len", and do something like myarray[len++] = new_value;
I can feed this into an LLM, or even say "Give me the syntax to add a value to an array", and it gives me "myArray.push(newValue)", which reminds me that "Oh yeah, I'm dealing with a functional/object-oriented language, I can do stuff like this". And it reminds me that camelCase is preferred in Javascript. (Of course, this isn't the real situation I've run into, just a simplified example -- but I really don't have all the method names memorized for each data type.) So in that manner it is useful to get more concise (and proper) code.
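Spelled out, the contrast in that toy example is:

```typescript
const myArray: number[] = [];

// C-style habit: track the length yourself and index into the array.
let len = 0;
myArray[len++] = 42;

// Idiomatic JavaScript/TypeScript: let the array manage its own length.
myArray.push(43);
```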
I'm sure this is valuable for you, but here is my point of view.
I've worked professionally in many languages like Perl, Python, Kotlin, C# and dabbled in Common Lisp, Prolog, Clojure, and other exotic ones. Whenever I forget the exact syntax (like the loop DSL in CL), I know that there is a specific page in the docs that details all of this information. So just a quick keyword in a search engine and I will find it.
So when I come back to a language I haven't used in a while, I've got a few browser tabs open. But that only lasts for a few days until I get back in the groove.
So for your specific example, my primary choice would have been the IDE autocompletion, then the MDN documentation for the array type. Barring that, I would have some books on the language opened (if it were some language that I'm learning).
My high-skilled example from the other day: I wrote an algorithm out line by line in detail and had Claude turn it into AVX2 intrinsics. It worked really well and I didn't have to dig through the manuals to remember how to get the lanes shuffled the way I needed. Probably saved me 10 minutes, but it would have been an annoying 10 minutes. :)
But that's a very low-level task, where I had already decided on the "how it should be done" part. I find in general that for things that aren't really obvious, telling the LLM _how_ to do it produces much more satisfactory results. I'm not quite sure how to measure that force multiplier - I can put in the same amount of work as a junior person and get better output?
> Rule number 1 and the only rule: You need to be a subject matter expert.
Strong disagree. I've been coding for 25+ years but never on the front-end side. I couldn't write JS w/ pen & paper with a gun against my head. But I know what to ask and how to make sure a React component does what I want it to do, with these tools.
I see where you're coming from. But later in the message they go:
> and if you do not understand the reason for the code and the programming language, you will take so much more time than if you did not even use the AI.
I guess that's the part I disagree on. The programming language is largely irrelevant now is what I'm seeing. And especially the "time to first result" is orders of magnitude smaller when using AI assistants than having to rtfm/google/so every little problem I encounter in an unfamiliar language.
And I am in no way stating that my code will match an expert in that field, of course.
Knowing how to code, and having a lot of experience and an "intuitive" sense of what is a good idea and what is a bad idea, also puts you in a position to question the advice the AI gives you. Just now I was asking Claude to help me with an issue with a React component and it told me to add useEffect with a timer. I am not a React expert, but that immediately felt like a code smell to me, so I followed up:
> is it weird or an anti-pattern to use a timer like this?
The response:
> Yes, using a timer like this is generally considered an anti-pattern in React for several reasons: It introduces non-deterministic behavior (timing-dependent code), It's a workaround rather than addressing the root cause, It can be brittle and lead to race conditions.
I'm sure all those things are true. This is a classic example of the problem with people using AI programming tools while lacking a real understanding of what they're doing. They don't know enough to question the advice they're getting, let alone properly review the code it's generating.
The other day, in a Rails app, Claude generated a bunch of code that spawned various threads to accomplish certain things I needed to do asynchronously. Maybe these days, in Ruby 3 and Rails 8, this is safe. But I remember that back in the Rails 2 days, going off and spawning new threads was not a good idea. Plus, I have a back-end async job processor already set up. Again, I questioned the approach. The revised code I got back was a lot simpler, and once I'd reviewed and tested it, I (mostly) used it as-is.
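To make the React example concrete, the kind of timer workaround being flagged looks roughly like this (a hypothetical sketch, not the actual component from that conversation):

```tsx
import { useEffect, useRef } from "react";

function ChartPanel({ data }: { data: number[] }) {
  const containerRef = useRef<HTMLDivElement>(null);

  useEffect(() => {
    // Anti-pattern: hope that some other update has settled within 100ms, then poke the DOM,
    // instead of reacting to the state or layout change that actually matters.
    const timer = setTimeout(() => {
      if (containerRef.current) {
        containerRef.current.scrollTop = containerRef.current.scrollHeight;
      }
    }, 100);
    return () => clearTimeout(timer);
  }, [data]);

  return <div ref={containerRef}>{data.join(", ")}</div>;
}
```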
That's the thing: if you're inquisitive and have an interest in learning things, then you can still go far with AI coding. Can you explain why this code works?
is this the best way to do it or are there other solutions? what are the pros and cons?
are there security problems with this? how could I make this code more secure?
what are some things I should look out for with AI coding (meta question)?
what does this error mean?
just talking back and forth with the AI on the phone you can get a high level understanding of a topic pretty quickly and way more in depth and personalized than a tutorial on the internet.
> Much of the time the problem is simply an invented method name that doesn’t exist
I spent a solid 2 hours yesterday trying to get an SSDP protocol implementation going because the LLM was absolutely insistent upon using 3rd party libraries that don't exist and UDP client methods defined in Narnia. I had to spoon feed it half-way attempts before I could get it to budge on useful code. This was all before I realized we had a problem with multicast group membership and multiple network adapters.
These models definitely can help (I wouldn't have gotten as far as I did without one), but you need to know what you want every step of the way. Having mere "vibes" about a sophisticated end result will result in unhappy outcomes. I think the model would have made my life much worse if I wasn't as cynical and suspicious regarding every aspect of its operation. I can see how these models would steal learning opportunities from more novice developers. Breaking out Wireshark is the sort of desperation that only arises when you can't constantly ping some rubber duck for shreds of hope (or once you realize there is no hope).
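For the record, the multicast piece that turned out to be the real problem looks roughly like this with Node's built-in dgram module - a minimal sketch, with the local interface address as a placeholder:

```typescript
import dgram from "node:dgram";

const SSDP_ADDR = "239.255.255.250"; // standard SSDP multicast group
const SSDP_PORT = 1900;
const LOCAL_IFACE = "192.168.1.10";  // the adapter you actually want to use (placeholder)

const socket = dgram.createSocket({ type: "udp4", reuseAddr: true });

socket.on("message", (msg, rinfo) => {
  console.log(`SSDP from ${rinfo.address}:${rinfo.port}\n${msg.toString()}`);
});

socket.bind(SSDP_PORT, () => {
  // With several network adapters, joining the group on an explicit interface matters;
  // omitting it lets the OS pick one and you may never see the traffic.
  socket.addMembership(SSDP_ADDR, LOCAL_IFACE);
  socket.setMulticastInterface(LOCAL_IFACE);

  const search = Buffer.from(
    "M-SEARCH * HTTP/1.1\r\n" +
      `HOST: ${SSDP_ADDR}:${SSDP_PORT}\r\n` +
      'MAN: "ssdp:discover"\r\n' +
      "MX: 2\r\n" +
      "ST: ssdp:all\r\n\r\n"
  );
  socket.send(search, SSDP_PORT, SSDP_ADDR);
});
```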
I gave up on AI because of that. The old IDEs that use an AST for autocomplete still exist and work very well for letting me hit tab and get the correct function filled in. They are also very good at the little pop-up that tells me what the parameter I'm trying to fill in really is - the AI has no clue what order the arguments really are and so often gets it wrong. They won't complete 1000 lines of code - but that is only rarely a savings, as most 1000-line code snippets I've worked with are just as fast to write myself (I've been programming for 30 years) as to figure out how the AI got some details wrong.
If the AI had access to the AST and knew what functions exist, it might be helpful. Then it could write the function it wished existed if it doesn't. However, that means it would need to actually understand the code, not just the structure.
> Now of course people learning the traditional way have these same problems, but they’re debugging code they wrote, not gobbledygook from an AI.
I’m not sure this is true. Prior to AI you saw a lot of the same behavior, but it was with code copied and pasted from stack overflow, or tutorials, or what have you.
I don’t think AI has changed much in terms of behavior. There has always been a subset of people who have just looked for getting something that “worked” without understanding why, whether that’s from an AI code assistant, or an online forum, or their fellow teammates, and others who want to understand why something works. AI has perhaps made this more apparent, but it’s always been a thing.
The difference is that the code people are copy-pasting isn’t randomly mutated for each person doing so, and if they take the time to go back to where they got it, there is likely also an explanation or more info about it, if they care to take the time to read.
They like to talk about feelings a lot. Lots of posts about how it feels when their upcoming game reaches <number> of wishlists on Steam, or how it felt when their low resolution pixel art game using Kenney's asset packs flopped against all odds.
Real-life example. Recent conversation with a colleague:
Hey, trying to translate an Excel sheet with ChatGPT, can't understand what to do (posts screenshot with explanation and the example "pip install [package-name]")
You just need to execute the specified command in your environment
What is "my environment"?
Yeah, I like using LLMs when I code, but every time I've tried "vibe coding" (i.e. letting the LLM do the whole thing start to finish) it's never written a full, functioning app that doesn't have bugs in the basic functionality. I tried this just a couple days ago with the SOTA Gemini 2.5 Pro - it wrote itself into a corner and when I gave it the logs from my bugs it literally said "this doesn't make sense" and got stuck. That finally prompted me to take a look at the code and I immediately knew what it got wrong.
> a problem that is trivially solved by the error message and documentation
Then why does the AI not solve it anyways?
I think understanding "the code" will eventually be as important as understanding machine code or assembly nowadays - still very important for a small number of devs, important in very rare cases for some other devs and completely irrelevant to the majority of devs.
Perhaps because "A.I." doesn't understand anything, it just makes plausible output based on its training data. LLM is a much better term, because intelligence has connotations of understanding, whereas model does not.
Yeah, in a sense it does. But I think the problem here isn't "the AI" but actually the tooling or the way "the AI" is applied by the person. (i.e. being unable to even c&p the error message into the AI)
Because from my experience, sonnet 3.7 can almost always fix issues of that type and if it can't, it's usually not "trivial" at least not by my understanding of the word.
That’s an interesting observation, because a corollary of this would be: if young people believe this to be true and don’t start learning to code now, there will be an even higher shortage of developers, unless AI systems become insanely better than today.
As a very experienced programmer who has experimented with using AI: I can't imagine trying to do anything useful with it if you don't understand the code it generates.
Even if the code it generates works, what happens when you need to change it and no longer have that AI conversation and its state available?
Then there's the security nightmare this is going to be. All this slop-code generated by ignorant hustlers with AIs is going to be a hellscape of security bugs.
The scale may change, but that is likely just because more people code. You couldn't do X but now you almost can!
> Now of course people learning the traditional way have these same problems, but they’re debugging code they wrote,
Really? They've not copied and pasted something from SO? They're that early in their coding journey?
> It’s almost a little sad. If they just take the time to actually learn what they’re doing they’ll be able to accomplish so much more.
Leading a horse to water, etc., but LLMs are excellent at being a patient teacher of basic coding.
Frankly, they can be excellent at lots of things people keep saying they're bad at, but some seem to refuse to learn how to use them as a tool. It reminds me of watching people be unable to use Google in the past - not knowing how to not just ask a question but search for information.
I'm a professor at a small college. I teach intro programming most semesters and we're now moving to using tools like Cursor with no restrictions in upper-level courses.
"How do students learn to code nowadays?" - I think about this pretty much all the time.
In my intro class, the two main goals are to learn about structured programming (using loops, functions, etc.) and build a mental model of how programs execute. Students should be able to look at a piece of code and reason through what it does. I've moved most of the traditional homework problems into class and lab time, so I can observe the students coding without using AI. The out-of-class projects are now bigger and more creative and include specific steps to teach students how to use AI collaboratively.
My upper-level students are now doing more ambitious and challenging projects. What we've seen is that AI moves the difficulty of programming away from remembering details of languages or frameworks, but rewards having a careful, structured development process:
- Thinking hard and chatting about the problem and the changes you need to implement before doing anything
- Keeping components encapsulated and thinking about interfaces
- Controlling the scope of your changes; current AIs work best at the function or class level
- Testing and validation
- Good manual debugging skills; you can't rely on AI to fix everything for you
- General system knowledge: networking, OS, data formats, databases
One of my key theories is that AI might lower the value of "computer science" as a standalone major, but will lead to a lot more coding across fields that currently don't use it. The intersection of "not a traditional engineer" and "can work with AI to solve problems with code" is going to be an emerging skill set that will change a lot of disciplines.
The rise of tools like Cursor reminds me of the Industrial Revolution in France. When machines first appeared in factories, unskilled workers who didn’t understand how they operated often got injured - sometimes quite literally losing fingers. But for skilled craftsmen, these machines became force multipliers, dramatically increasing productivity and improving overall living standards.
The same applies to software development. If you lack the fundamentals - how memory, I/O, networking, and databases work - you’re at risk of building something fragile that will break under real-world conditions. But for those who understand the moving parts, tools like Cursor supercharge efficiency, allowing them to focus on high-level problem-solving rather than boilerplate coding.
Technology evolves, but the need for deep knowledge remains.
Those who invest in learning the craft will always have the advantage.
> When machines first appeared in factories, unskilled workers who didn’t understand how they operated often got injured - sometimes quite literally losing fingers.
Factories were extremely dangerous because the machines had no safety measures. And they continued to be dangerous, for everybody skilled or not, until the introduction of workers rights, regulations and enforced safety measures and protocols.
> But for skilled craftsmen, these machines became force multipliers, dramatically increasing productivity and improving overall living standards.
Skilled craftsmen continued working as they traditionally did so much so that up to today it is possible to find craftsmen that use traditional tools.
> Those who invest in learning the craft will always have the advantage.
I like your comparison. A related thought: what should be really valuable right now for Cursor, Windsurf etc is figuring out who the skilled users are and further training their models based on their usage. In fact, actively courting skilled devs would give them very high quality data to finesse the tools further.
If I could honestly say I was any good at coding I'd be using this as an argument for unlimited free access to these platforms!
Well, it’s a good point that proves at least two things. First, in the industrial world, machines have not yet replaced man after decades. Still a force multiplier.
The second point is that the one who controls "what" produces value wins it all. In France we had amazing industries and some were moved offshore. Maybe some genius thought that only the brain mattered. Now countries have to rely on other countries to build or evolve products, and those countries can make their own products now and can charge us whatever they want (I’m simplifying), because we don’t know how to build things anymore; the tools and craftsmanship are gone and no longer learned. I feel the article pinpoints exactly the main idea behind AI: who will have control, and who will be able to decide that the API price can be x100? If no one knows how to code, that is very dangerous, and what happened in the industrial world shows it’s dangerous. Companies have an endgame of power, and as a developer, deciding not to learn or delegating my know-how leaves me at their mercy in the end.
When I look at fields like car manufacturing, which is mostly robotic, it seems that nowadays humans are force multipliers for machines rather than the other way around.
Yeah, but there isn’t one self-operating supply chain that makes cars. We make more cars or ship them faster.
The day machines 100% replace humans throughout the industries, it will be another problem, because capitalism is built on the premise that man is paid because he brings value. Once that’s over and you don’t have money, the things you’ll consume less of are the nice-to-haves, so whole countries might be in trouble. So either we all have to be able to bring other kinds of value, or the system will have to change so as not to collapse?
But the usual way of learning the craft is broken. Experienced developers will now work with AI instead of hiring junior developers. Some exceptional individuals might still learn on their own, but the path from junior to senior, learning by doing, could vanish. That's my worry.
> Some exceptional individuals might still learn on their own
And people with money/means. Children of software engineers may be able to learn the profession easier than others. The same goes for children with affluent parents that can pay for many years of education.
It seems a retreat back to a more medieval economy that excludes large parts of society.
The free content to learn how to code is still available on the Internet and it won't go away.
SE is one of the few professions that one can _learn_ for free, by themselves.
It could take longer than going into a fancy university, and it won't open corporate doors as easily, but basically anyone with a computer and an Internet connection can learn SE.
Probably too B&W. But I’ve had a lot of discussion about this recently and the general consensus is that there’s something to it—especially developers who just got into the field solely because it’s where they thought the money was.
I'm not making a judgment, just describing a dynamic a lot of people claim to be seeing. One can reasonably assume that the most junior tier (for whatever combination of education, genuine interest, etc.) could potentially feel the impact most to the degree that LLMs really do have a disparate impact on junior people. It's a continuum of course. There are plenty of competent people who enter many fields because it's a job.
I'm also at least somewhat cautious about making "passion" (or whatever) a prerequisite for working in general.
aye, to me they’re just a different interface to the same information publicly available via a search engine.
for folks who haven’t spent the last 15 years honing their craft of finding out technical information with a search engine, i can see why they might be useful.
but a search engine won’t sometimes mangle the output and provide an incorrect answer — it only provides a link to the raw data (webpage), rather than trying to create a paragraph of text about it.
i’d rather have access to the raw data guaranteed unmangled. i’m fast enough using that method.
> But for skilled craftsmen, these machines became force multipliers, dramatically increasing productivity and improving overall living standards.
I don't know if I agree with this line of thought (is there evidence this is true?). Once you have a metal press, you precisely no longer need a blacksmith skilled at swinging a hammer; in fact, all you need is someone that can be trained to read the manual and follow the instructions -- the exact opposite of a skilled tradesman.
I do think it is like an industrialization of software engineering[0], but I don't think it favors the skilled craftsman; rather it shifts the sets of skills required and focuses more on reading code rather than writing.
> Because if you can vibe code… so can everyone else.
That's really the money shot, right there.
CEOs have this dream of firing all their "obnoxious" engineers, and "vibe-coding" their own products. That's not something new. People have been selling this dream to gullible C-suiters since I first started coding in Machine Code (1980s).
The future will belong to the engineers that can leverage AI. Engineering is a lot more involved than "HAL, write me a Facebook," which is the C-suite dream.
It's just that engineering will move another level up, as it has, for hundreds of years.
> CEOs have this dream of firing all their "obnoxious" engineers, and "vibe-coding" their own products.
Given that one of the key skills of a CEO is pulling together the resources to make something happen, what happens to CEOs if you no longer need resources to make stuff happen?
i.e. if everybody can vibe code, vibe market, vibe deploy - aren't you going to be swamped with competitors?
So the interesting thought experiment here is - in such an environment - what are the critical success factors?
> Given that one of the key skills of a CEO is pulling together the resources to make something happen, what happens to CEOs if you no longer need resources to make stuff happen?
Never underestimate how much importance a lot of people in middle and upper management place in the number of reports they have. It’s almost a <thing> measuring contest with some of them.
They’ll hire people. It might not be in engineering. But I bet they’ll find a reason to hire more engineers. It might even be justified. Software productivity has been increasing for decades. This has not led to a smaller number of software engineers. Only to more ambitious projects.
The future might be different, but I think the chances of that are small.
Mostly that in such a reality software engineering will cease to be a thing; however industries based around physical resources such as manufacturing, construction and healthcare will continue to employ people, and by extension, the CEO.
Absolutely. Just like, I'm sure, plenty of engineers are starting to think about developing a cool product and vibe-marketing, vibe-salesing, vibe-accounting, etc their way into a functioning business. See: the plethora of SaaS platforms promising to automate away entire sections of business operations "with AI".
Both will fail, unfortunately, because it's easy to underestimate the complexities and intricacies of processes you do not understand in the first place. These various AI offerings are just making the situation worse, because they (as with most things in AI) give the appearance of being functional while falling apart under scrutiny -- the "confidently wrong" problem and all.
I was just talking to someone about this, this morning.
I will use ChatGPT (generally) to help me solve occasional issues. I'll come across some conundrum, and ask ChatGPT for a suggestion, which it confidently delivers.
The first suggestion is almost always wrong.
I'll say something like "That won't work," or "That answer is deprecated."
It will say "You're right!", followed by one that is more useful.
I suspect lots of folks run with the first answer.
I've been programming for a few decades. I love LLMs. They make tedious things quick. Help me resolve gnarly issues. Make short work of writing unit tests. Generate oodles of boilerplate at will. Etc. It makes me more productive and less reluctant to take on risky things. By risky I mean things that formerly would have likely derailed my busy schedule because I'd get side tracked for to long and would have to de-prioritize more important stuff.
Anyway, resistance is futile. You will be assimilated ... or retired. The reality of our job is that new generations are going to come in and they'll be using all the latest tools and gadgets. That's nothing new. And I'm part of a generation that in a decade or two will be mostly on the sidelines enjoying retirement. So, I'm well aware that progress isn't going to stop over my whining and grumbling. It annoys me when I catch myself doing that. I want to be better than that.
LLMs are part of the job now. They are tools. And tools are only as good as the people wielding them. So, skill up and learn. It's not like it's very hard. If you are getting poor results, you might be doing it wrong. Figure it out; part of the job. Your mileage may vary. But there are a lot of tools and chances are you just haven't found the right one yet. Also, if some tool/llm limitation is blocking getting good results for something, wait 3 months and try again. The pace of progress is ridiculous currently.
Or better yet: become part of the solution and make your own tools. This stuff is stupidly easy. It's mostly prompt engineering with some trivial plumbing around it. And you can generate the plumbing (what, you were going to do that manually?). That's why there are so many AI tools popping up right now. Most of them won't survive very long. But there are some good ideas lurking there.
I've been programming for a few decades. I hate LLMs. They generate oodles of buggy shite that I have to fix by hand. They frequently steal my time and make me less productive because people on this site say I have to learn them or retire, and then I waste time looking up the details the bot got wrong. They're a slot machine and the people who think they are good are justifying an addiction and sunk costs.
So retire me, I guess. I'm probably younger than you, but I'm almost ready to retire because I'm cheap and I don't buy into expensive fads, so I'm almost ready to cash out of this nightmare
Except then you realize the valuations of Anthropic etc. are propping up the whole economy, and doing so on the promise that LLMs are going to deliver AGI!
LLMs are marginally useful in some contexts. But I have seen absolutely nothing -- nothing -- to justify the costs or the valuations of these companies. They are definitely not AGI and before you accuse me of moving the goalposts, the AI companies are the ones promising this.
It's a bubble. It's easy to get started. Good luck building a real product with just AI though. Good luck with that.
If you turn out to be right I will happily exit this God forsaken industry. Lord free me from silicon valley; I liked computers. Not this. Not these people.
The real answer is probably somewhere between the two. There is value in AI - and the versions that will come up in the future will be better. However it isn't nearly as valuable as the advocates say either. I've given up on the current rounds, but I'm still going to keep watching for when they get better. They might or might not get enough better before I retire (I'm likely older than you), but there are a lot of things they can do better. I have no idea how hard those things are.
I'd like to agree with you and remain optimistic, but so much tech has promised the moon and stagnated into oblivion that I just don't have any optimism left to give.
I don't know if you're old enough, but remember when speech-to-text was the next big thing? DragonSpeak was released in 1997, everyone was losing their minds about dictating letters/documents in MS Word, and we were promised that THIS would be the key interface for computing evermore. And.. 27 years later, talking to the latest Siri, it makes just as many mistakes as it did back then. In messenger applications people are sending literal voice notes -- audio clips -- back and forth because dictation is so unreliable. And audio clips are possibly the worst interface for communication ever (no searching, etc).
Remember how blockchain was going to change the world? Web3? IoT? Etc etc.
I've been through enough of these cycles to understand that, while the AI gimmick is cool and all, we're probably at the local maximum. The reliability won't improve much from here (hallucinations etc), while the costs to run it will stay high. The final tombstone will be when the AI companies stop running at a loss and actually charge for the massive costs associated with running these models.
Probably the opposite is true: the more you know how to code, the less productive you'll be with AI. This has been my observation watching a non-technical friend build a SaaS as a one-man team in 4 months, one that was generating $#,000 in revenue within 2 months.
The way he uses AI is just completely different from how the technical folks I know use AI because he doesn't think about the code at all. The way he instructs the AI is different from how engineers prompt the AI.
I actually think that his success with AI is in particular because he doesn't know how to code but was previously managing projects and offshore teams (so lots of writing down exactly what he wants, but with no specifics on how it gets implemented).
Problem is what kind of moat does that SaaS have? The flip side of 'we replaced our dev team with vibe coders look how fast we print shitware' is now shitware value falls close to 0, as anyone can make it.
Though you suggest he's non-technical... "writing down exactly what he wants" is just coding!
> "writing down exactly what he wants" is just coding!
That would make good project managers and business analysts "coders" and they are not coders. It is only in this age with LLMs does that line between functional requirements and code become blurred.
He doesn't know how to write code well; he knows how to write requirements and instructions well from managing offshore teams.
In practice, his instructions are detailed in the functional domain. Engineers bias too much in the technical domain.
> The flip side of 'we replaced our dev team with vibe coders look how fast we print shitware' is now shitware value falls close to 0, as anyone can make it.
Actually, he recognizes this and said something to the effect of "this is the end of SaaS" meaning that anyone can build this. That's his biggest fear going all in on this (he is still keeping his day job despite this project gaining traction so quickly)
But I don't think this is true; I think there are still some technical barriers (at the moment): one needs to know enough to instruct it about databases, set up external services, etc. The AI is writing the connections to some third-party APIs, but one needs to know what an API is and which one to use to instruct the agent.
A future may be coming where this is no longer the case (e.g. combining deep research and computer use that will automatically set up domains, connect external services, etc.), but it's not here yet.
> 'shitware'
Is it "shitware" if customers are paying because they are deriving value from it? He's got 30 customers, a few of which paid annual subscriptions because it provided value to their actual business. Is it "shitware" because it's not handcrafted? Does it matter if it's solving some real problem and customers want to pay for it?
> "writing down exactly what he wants" is just coding!
Agreed. The more exact and clear your instructions are, the closer to programming it is. Presumably the non-technical person has an application where they care about things like performance, scalability, compatibility and all those things coders sweat over.
This resonates a lot with what I’ve seen outside of code too. I’ve been building an AI chess coach and noticed the same pattern: people plug their games into Stockfish, see a list of best moves, and walk away thinking they’ve “analyzed” the game. But real understanding — like in programming — only comes from engaging with why things went wrong.
That’s what I’m trying to fix. Instead of just showing lines, my AI coach gives voice-guided feedback, visual highlights, and practical insights. More like working with a real coach than sifting through raw engine output.
The goal is to make analysis as engaging as playing—and shift the mindset from “just tell me the best move” to “help me think better.”
Seems like an interesting idea - is it only tactics in scope, or does the AI also do well at analyzing strategic ideas?
Some other thoughts:
Isn't the first example just wrong? The AI says "after dxe3 Rxd8 Rxd8, white wins the exchange, gaining a Rook for a Bishop" but unless I am mistaken actually it's a Queen for a Rook and Bishop?
Also, it seems the visual highlight AI referenced is not working? Talks about Rad1 while the pawn is still highlighted.
It will do both tactics and strategy. Also working on incorporating positional concepts.
Yes you are right. It is still in demo phase, it still does make mistakes. I am refining the model and inputs, so definitely a work in progress :)
Regarding the circle highlighting, the agent is deciding/reasoning which square to highlight. So it is non-deterministic, and it is sometimes right, sometimes wrong.
It will definitely get better as the models improve
To each their own. To deeply understand an area you have to learn it from the bottom up.
I learned BASIC as a small kid, using a clone of the ZX Spectrum. I was aware that the memory was limited and that I could poke and peek a memory address to set or retrieve the info.
I learned Pascal and then rapidly went to C and C++. I learned about pointers, how memory is laid out, and what system calls are.
I learned about CPUs and I learned some x86 assembly.
At university I learned about digital circuits and how to assemble one using logic gates. Of course I learned much more: data structures, algorithms, operating systems, distributed systems, parallel and concurrent programming, formal languages and automata theory, cryptography, web, lots of stuff.
I learned lots of other stuff by myself.
I've built desktop apps, websites, software for microcontrollers, games, web applications and now I am working on microservices based apps running in cloud.
I was a junior developer, graduated somehow to senior. I worked as a software architect and now I am a team leader.
These days I work almost exclusively with C#, but I am also interested in other languages if I have some spare time to evaluate them.
What I want to say is this: it is not enough to learn the highest level of technology of today. Today that is AI; a few years ago it was JS frameworks; more years ago it was Java, .NET, Python.
To be good at what you do, you always have to learn all the layers under the current top layer. Learn from the bottom up. You don't have to be good at every technical detail, but you have to understand at least how things work.
To add to this, if someone depends on AI (the top layer in your example) and doesn't learn the 'how's and the 'why's of programming, they or their organization will be completely beholden and dependent on the organizations running those AI tools. Not a great position to be in.
Ignoring AI would be foolish, since AI is the best programming tutor you can wish for and will speed up your learning noticeably, since you can always ask for clarification or examples when something isn't clear. It's also a great way to get random data for testing. And it will help you unearth lesser known corners of a programming language that you might have overlooked otherwise.
The downside is that at the current speed of improvement, AI might very well already be at escape velocity, where it improves faster than you, and you'll never be able to catch up and contribute anything useful. For a lot of small hobby projects, that's already the case.
I don't think there are any easy answers here. Nothing wrong with learning to code because it's fun. But as a career choice, it might not have a lot of future, but then, neither might most other white-collar jobs. Weird times are ahead of us.
> AI is the best programming tutor you can wish for...you can always ask for clarification or examples when something isn't clear
Yep, AI can teach beginners the fundamentals with endless patience and examples to suit everyone's style and goals. Walking you through concepts and giving you simulated encouragement as you progress. Scary stuff, but that's how it is.
But... as we know, it doesn't always provide the best solution. Or it gets muddled. When you point out its mistakes, it apologises, recognises the mistake and explains why it's a mistake. Its reasoning is incredible, but it still makes mistakes. This could be very risky for production code.
Related anecdote... I needed Photoshop help recently for horizontally offsetting a vignette effect. Surprisingly not easy. The built-in vignette filter can't be applied to a new blank layer, and is always centred on the image. AI suggested making it manually but I didn't want to do that, as I like the built-in vignette better. AI's next solution involved several complicated steps using channel isolation and weird selection masking etc. No thanks. Then my own brain sparked a better idea... simply increase the canvas size temporarily, apply the vignette, then crop back to the original size. Job done. I told AI about my solution and it was gushing with praise about how brilliant my solution was compared to its own. Moral: never stop trusting your own brain.
I'm getting back into programming after a couple of years in other roles. I have to learn a framework I haven't worked with before and get to know new paradigms I've never used.
In a way, I'm feeling lucky that my company currently explicitly bans the use of AI tools on our code bases (for good reasons). This forces me to write all my own code and understand what it's doing. The only thing I use AI tools for is to explain some new concepts or paradigms in ways I can understand.
What's also great is that I can throw some code from Stack or Github in there and get it explained. I'm glad these tools exist, since they make learning much easier, but they're also a trap if you depend on them instead of your own knowledge.
> Because if you can vibe code… so can everyone else.
> And if everyone can do it, what makes you think Devin won’t replace you?
Devin won't replace you if you can create valuable products through "vibe coding" or whatever else you call it.
when coding itself becomes a commodity, value creation becomes more concentrated up the stack: what you choose to build, how well you market / sell it, how you connect with your customers, product design. Devin won't outcompete humans at these skills anytime soon.
instead of sticking to a skill that's quickly becoming a commodity (as the author recommends), moving up the stack is the way to go (outside of very niche, specialized engineering domains - e.g: training base models).
AI does not and will not solve the most difficult part of programming:
Expressing how you wish to solve a problem in simple terms.
It doesn't matter if you communicate with a compiler or an LLM - you still need to express your thoughts and ideas with no ambiguity for it to produce the wanted behavior. What makes "vibe coding" with an LLM both easier and more challenging at the same time is that it will guess what you mean and give you results that "kind of" work even when you express yourself unclearly. For someone who can code, the "kind of work" results can be used as a starting point to evolve into something useful. For someone who can't code, it's an inevitable dead end.
I find that those who struggle with programming have the exact same type of struggles when trying to do it with LLMs - no structured plan on how to approach a problem and difficulties to understand the context in which they are working.
I was pretty shocked and disappointed at his quote -- Replit's rise was so aspirational. So the VC $$$ forced Replit to pivot to shilling AI schlock rather than actually improving humans...
Replit is a bit of a strange phenomenon: they clearly are creating a huge buzz online, but I mingle a lot with university students and academics and I hardly ever hear anyone mention using it; even when I explicitly ask, they don't seem to have heard of it.
Genuinely curious if it is gaining traction and somehow I haven't bumped into others who use it.
That is obviously just something he has to say, to pitch his AI company, but I reckon it is BS.
Since AGI does not seem to be around the corner, coders will be required as much as ever.
But I would not ignore AI while learning to code. It's great for asking endless stupid beginner questions. And yeah, some answers will be wrong, but my human tutors also taught me some very wrong concepts.
But some coding without any help is likely beneficial for learning.
To me "learn to code" has ALWAYS been a synonym to "learn to think". Coding is nothing else than thinking. You learn a handful of structures and then you only think about how to combine them. It doesn't matter if you do it by AI or "by hand".
We've got an LLM analyzing merge requests. While most of what it writes is too wordy for my liking and I'd prefer if people making MRs would actually write good descriptions and commit messages instead of saying "we use squash merges", one thing I do like is that it suggests a better title. The title of an MR is used in the squash merge commit, and that commit is used in the changelog, which is what users of our library rely on to consider what they need to check before upgrading.
On the one side, people should write good MR titles for this reason. On the other, sometimes you just don't know, and people's brains are already overflowing with all of the tasks they need to do in order to do their job properly.
But also, finally, it's just a suggestion; we don't have AIs writing our changelog yet, except ironically to make them worse / cringe.
I guess experience varies widely with tools. I had no idea what WSS was. Cursor searched the web and docs as necessary and built a working program. Then we went over each conceptual block and it explained stuff to me. When I was confused about some parts, it decided I needed to strip away the RealTime API component and just understand asyncio in the context of WSS, so it created a new file with a toy example.
Could it have written some really, really bad and unmaintainable code and just been justifying things? Very possible. I asked it to strip some 60% of the code after I thought I understood, and it was still working. The original code was too abstract for me and hard to follow. Did I learn in this whole process? I think yes. Would I have learned more if I had worked without AI? I think yes.
In life, some people play games and figure out the best builds after a lot of sweat and tears, while others prefer googling the meta and only doing those builds, it seems. Both approaches seem fine.
For a cynical take, 10, 15 years ago this headline would have been "Learn to code, ignore Stack Overflow, then use SO to code even better".
In this context, AI is just one of many resources you can use - books, websites, SO, etc - to improve your coding. The big differentiator is that you can have a realtime conversation with AI, whereas when talking to other people - be it on SO, IRC, Github, forums etc - you may or may not get a response and it may or may not be helpful. AI tools are a bit more predictable that way.
I'm of the opinion that AI will not replace developers but will be another tool in their arsenal. And part of the skillset of (some) developers will be working with an AI tool effectively. I don't have that yet, but then, I don't know much about Docker / K8S, cloud providers, low level memory management, pure functional coding, Rust, graphics programming etc etc either.
I don't think you are tackling (what I believe to be) the main point in this article. Yes, we can all agree that replacing SO with ChatGPT is very fast and often a better experience.
However, this is not about researching for solutions, it is about offloading your coding capacity to the machine. These are two different things and I think the author is addressing the latter.
That is what I did. I had programmed since I was 10, on and off, until around age 27, when I started basically full time after studying chemistry but not wanting to work in the lab. I've been 100% FT on a project since 2018/2019 and began using AI when ChatGPT came out, but I couldn't use it on this project for ages because it was all giant legacy code with lots of context (both explicit and implicit). I used it for new projects and side things. I really enjoyed it. I learned SQL and Bash and PowerShell from the collaborations. I also made my first macOS app. I have even learned a few new things in JavaScript, which is my main language, and began to use it on my main project too. I feel so lucky to be alive at this time. It's amazing to have this way to work!
I wonder how much the success of AI coding tools hinges on having a decent learning set?
Because if it's a lot, I predict that as more published code is generated, we will find ourselves locked into a certain point in history in terms of versions of dependencies.
To me this is just like maths ... At school we learnt addition subtraction multiplication and division... Then we did it with a calculator... Same thing here ... Except the calculator needs much more compute and ram ... So the thing is we need to learn what is right before we can use these tools....
That advice was already true a decade ago, but replace AI with Stack Overflow. I encountered many junior devs (mostly money-driven, not in it out of passion for software development) who were completely fine just copy-pasting code out of Stack Overflow without getting a deeper understanding or at least trying to understand what their copied code does. Same is true for AI. Both are great tools, but to become truly a master of your craft you should always strive to become the person who would have answered the SO question - or wrote the article that the AI learned from.
Re what the Replit CEO said: I mean, sure - learn how to think, learn how to break down problems and communicate. Good skills to have, regardless of AI or not, and learning how to code is one of the great ways to improve these skills. People generally aren't going to learn how to think in a vacuum, right? Math, science, and programming, among others - that's how most people learn how to think; they are not going to be a waste of time.
I've been following some kid on TikTok who started last year on this journey - and it's been truly frustrating to watch. I tried to discourage them from leaning so heavily on AI, but they insisted it was a different way of learning.
A year later, and they still don't comprehend for loops or conditional blocks, and believe that comprehending this basic material is a deep understanding of code.
In a way, "vibe coding" is like what infants do when they give "error messages" of "hungry" to query mom for food.
Newcomers may be tempted to maintain the utmost ignorance of what they are creating, as I imagine "vibe coding" implies, where the height of problem-solving is creating quicker pipes between errors and re-submission to the AI to fix those errors.
I'm really glad I learned programming a long time ago, before AI. I use AI more like a tool for getting information (much like browsing manuals, forums, Stack Overflow), but it is, of course, magnitudes better. I still do the conceptual outlining and problem solving, and that is what programming is to me. If there's something I don't understand from the AI, I always query to know how it works. It's even easier now to understand how things work thanks to AI.
Imo if you learn to code today and ask the LLM to explain things to you, it could be very efficient. People who fall into the trap of using LLMs to write code they don't understand would just fall into another trap if LLMs didn't exist.
The only people saying this kind of thing are coders desperate to stay relevant. There is no future in coding. It's gone in a few years. Instead, begin learning the skills required to work with AI to get it to create what you want.
Is there any research that shows that failing years in school and remaining longer than expected is linked with blindly believing anything AI grifters say?
I think of it as a magic wand that makes me more efficient when it comes to smaller projects. I now do things I would earlier not even pick up, because I simply type less now. But I feel like you need to have intuition/experience to question the AI all the time. Why was this code added? Is it safe? Is it the right way? Is it the right method, tool, library; maybe we can try another approach? I feel like I learn faster too, because I can so quickly ask a question in a context with which the AI is familiar. I would neither downplay its role nor follow the "learning how to code is useless" hypers.
There are a lot of reasons I dislike the use of LLMs in software, one of which is that I think it's absurd to claim that using such fuzzy and imprecise tools and methods should be considered engineering...
But to be honest, I think the main thing is that they just absolutely suck the joy and fun out of an activity. I like writing, I like coding, these are things that I do at least in part because I get some kind of direct enjoyment out of doing them. Using a chatbot simply is not interesting or fun to me. It's like being a spectator and thinking that means you "participated" in the game you watched.
If you code or whatever just as a means to make ends meet sure, it makes total sense that you'll want to use these tools, but I will absolutely never understand the people that claim to like a craft and reach for LLMs to do it—to me these people have just entirely missed the point of what it means to have an activity you enjoy. It's not about the output, it's all about the process and self-enrichment that happens along the way.
You seem to be saying that you’re more interested in writing code than understanding problems and designing architectural solutions to them that exploit the best available tools to help create the actual code. Which may be LLMs (to some degree) at this point.
Even worse: Who will produce new training data? If everyone only uses AI, then nothing new will be invented. Quality will stagnate or go downhill. It cannot get better
But haven't you heard? AI gets better every month! It's just like Moore's Law and it will keep getting better because why wouldn't it? It's not like there is a dead-end to something fundamentally built around categorization and regurgitation of existing things.
(Noting this is all sarcasm in case I didn't lay it on thick enough)
The best part of AI, in my opinion, is not having to search through documentation. You have a question about a specific function in this lib? Just ask, and it will do the search for you and provide a quick example of use.
Trying nine wrong solutions to a problem before finding the tenth, which actually succeeds, is frequently an essential part of the learning process, in writing natural language as in programming.
I've stopped reading HN posts about AI because comments on them are the same old same old: 2 camps, one of which says get over it 'cause it's great, the other of which says it's a complete FAIL and always will be.