I recently needed to create a Dockerfile, which I have done many times before. Instead of opening the documentation, I used ChatGPT.
It took three times longer than if I had done it by hand, because I just kept copying and pasting each error back to ChatGPT, ended up in a dead end, and had to start over from scratch. As soon as you have a requirement that falls outside the 90% of common cases, you can quickly end up in a dead end, especially if you have more than one such requirement.
Once the LLM starts hallucinating, you can easily get into a corner where you will need to reset and start from scratch.
This is basically the modern "google-driven" development experience as well.
The copy-pasting-from-SO trope has existed for some time, but it's gotten much worse in the last 5-7 years with the increase of blogspam/LLM-generated content on Medium that's SEO'd to the top and is effectively just a rewrite of the "getting started" page of some language/framework/tool/etc. for resume-boosting points.
It seems like a lot of the models in ChatGPT and Copilot were trained on that content, and they in turn tend to produce a lot of dead-end solutions for anything that isn't the 90% case, which often leads to more pain than reading documentation and building a solution through iteration/experimentation.
> As soon as you have a requirement that is not the 90% of cases you can quickly end up in a dead end
You framed that as a criticism of ChatGPT, but I think that is indeed the point the OP is making. IMO, if this is something you used to do, you could get a nice starting point with a single prompt and then go from there, and that would take a third of the time compared to reading the documentation from the get-go.
I haven't written any C++ in ages, so I used a local Llama 3.2 (via Ollama) to write a simple class and struct. The AI wanted me to use the `friend` keyword on class functions that weren't being accessed from anywhere else... It did correct itself after I asked, but it took some intuition and googling to get there.
yeah, I’ve had many moments in my AI-driven development cycles looping with the LLM in a dead end, eventually giving up and falling back to regular old programming.
I think LLMs are great for reducing the friction with just starting a task, but they end up being more of a nuisance when you need to dig into more nuanced problems.
Eventually as your project grows in complexity, what you're asking of the model will exceed the intrinsic limits of what the model can do, and you won't be able to troubleshoot your problems yourself because you didn't learn anything along the way.
How? What I see are non-programmers who can create programs, but still have hardly any ability to understand them, debug them, or even create them on their own without AI tools. This is similar to, but worse than, what we used to call Stack Overflow programmers, or coding-bootcamp kids.
I would fall into this camp. I don't understand everything I am doing, but I have learned a lot through trial and error. I have an IT background in infrastructure and dabbled in automation, but never enough to be good. AI has allowed me to create things from my ideas that interest me. Is it good? No. Do I sell it, though? No. I create things for myself. I wouldn't be able to do these things without it, though, because I didn't have the time, and the teaching I had tried didn't interest me.
On one side, that's fine. Nobody can understand everything, and even the best developer has to learn with trial & error sometimes. And that these tools are enabling us to cover our lack of time/knowledge is great and brings society to new levels.
But on the other side, society also needs highly educated experts, with deep understanding of things and the ability to find and prevent the s** which will harm us. This is not limited to IT; it's the same in every area.
Education prevents disaster. But AI prevents education, maybe. We will see how this will play out for us. Maybe the AI-Overlord won't be just a joke anymore at some point, and benevolent AGIs can replace the necessary experts.
Seems like everyone disagrees with you. Instead of replying to each of them individually, I will reply here:
I used to be a non-programmer (although extremely technically advanced). I was very good with the terminal because I always played around with ADB and other things. I even wrote Windows batch scripts (but looking at them now is embarrassing. I used a lot of repetition instead of loops).
When ChatGPT became available and my friend taught me how to use it to write Python scripts, I went all in. I didn't write a word of my own code. I just copy/pasted ChatGPT's output into Notepad++. It was really annoying sometimes to get ChatGPT to change a simple thing. It hallucinated often and "fixed" stupid things I didn't ask for help with. I used all these scripts for various personal things. For example, it made me a GUI to edit the metadata of every .mp3 in a folder.
After several months of exposure to code, I knew the basic syntax and tried to code myself when I could, and Stack Overflow was a big help. I still had very basic skills and didn't even know what a `class` was.
Fast forward to now: I consider myself extremely good at Python, and decent in other languages. I now use classes and dataclasses all the time. I always add type hints to my code. I follow Python PEPs and try to design my code to be short, maintainable, and to the point. If a library lacks documentation, I just read the source code. I started using an IDE, and the IntelliSense is really a step up from Notepad++.
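To give a concrete picture of the style I mean, here's a small made-up sketch (the names are purely illustrative, not from any real project):

```python
from dataclasses import dataclass, field


@dataclass
class Track:
    """A single audio file and its editable metadata."""
    path: str
    title: str = ""
    tags: list[str] = field(default_factory=list)

    def renamed(self, new_title: str) -> "Track":
        # Return a new instance instead of mutating in place.
        return Track(path=self.path, title=new_title, tags=list(self.tags))


def titles(tracks: list[Track]) -> list[str]:
    """Collect the titles of all tracks, preserving order."""
    return [t.title for t in tracks]
```

Nothing fancy, but the type hints alone make the IDE and a type checker far more useful.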
I still use Copilot, but I refuse to paste large code blocks from any LLM. It makes the code so much harder to maintain. If I really do like the code ChatGPT gave me when I get stuck, I write it out myself instead of copy/pasting.
I don't know if everyone has the same experience as me, but I would certainly say that ChatGPT helped me become a successful programmer.
Programming is a mindset, more than just the ability to type (or copy/paste, nowadays) something that the compiler accepts. So no, they can't program. If something goes wrong they have no idea why it's wrong. They also have no idea why it's right, when it just happens to be.
Until an obstacle hits them, the AI goes into a "reasoning" loop unable to find a solution, and they don't know how to "nudge" it onto the right path.
I've seen, in some companies, relatively trivial tickets bounced from developer to developer for months, with management believing they are hard to do.
I would say "expands the class of simple applications one can build without expertise" in the same way spreadsheet software enabled simple financial programs, website builders meant you didn't need to do HTML/JS nitty-gritty, etc. OTOH these technologies also enable financial blunders (I am personally responsible for at least one really nasty Excel bug), horrible website design, etc etc.
Edit: Arvind Narayanan of AI Snake Oil uses LLMs to create "one-time-use" apps, mostly little games or educational things for his kids. As an AI skeptic, I think this is cool and should be celebrated. But there is a serious downside when using it blindly for real work.
I have never liked the term "can program" anyway, programming is the easy part....
It’s also creating a generation of management, clients and non-technical people who send me outputs from Claude and suggest I implement what it said. Needless to say, this makes my life harder.
Is that actually happening? I use o1 a lot for brainstorming etc., but I don't think I could build a program of any significant size without understanding how to program myself, so as to guide o1 and stitch its outputs together. It's also not at all uncommon for me to have to do stuff completely myself; it doesn't have great insights into deep problems, for instance.
System: "You are a senior staff security engineer and researcher. Use the latest techniques and the internet for any new techniques to find any vulnerabilities in the code below. Do not give any explanations, only the fixed code"
User: { AI generated 1600 line handler "getUsers" with sql injection, n+1 queries, an exposed secret, plaintext passwords and credit cards, and lots of unused variables }
Does it? I've had some months-long gaps where I hadn't written a single line of code and I always felt like I could get back up to speed within a few days. I suppose that depends on where you expect your level to be, are we talking about $300k+ positions at big tech or your average software job?
Re-learning a large codebase would take longer, of course, but I'm talking about just getting up to speed like if I was starting a new project.
But surely you can tell it's not like riding a bike.
> just getting up to speed like if I was starting a new project.
This is the best case scenario for getting back up to speed.
It may sound harsh, but there's not a lot of skill involved in starting new coding projects (it's one of the best things about the craft imo). Of course you'd get back up to speed quickly, especially if you have 10+ years of programming under your belt.
> It may sound harsh, but there's not a lot of skill involved in starting new coding projects (it's one of the best things about the craft imo).
I generally feel the opposite. Starting a new project, not just a prototype, requires a lot of decisions and knowledge that mostly only has to be applied at the beginning of a project. Language choice, core dependency choices, build system, deployment, general project structure, etc. Each one can be changed later on, but the longer a project goes on, the more of an impact these early decisions have, and the more important it is to get them mostly right from the start.
On the other hand, when you join an existing project, many decisions of the project are already made and you just write code that resembles existing code.
I think we’re just splitting hairs over what “project” means.
But even so, many web projects are as easy as `npx create-next-app` and you have a solid foundation.
It’s much harder to start working on an existing codebase.
> you just write code that resembles existing code.
This is so incredibly wrong.
Every line of code you write as an IC on an existing project should take into consideration the existing code, the team's patterns and code style, and the reasoning behind existing systems.
The choices that were made that you're unaware of make it much harder to write any code, let alone code that also resembles existing code.
Decisions don’t require skill, necessarily. Anyone can just choose a web framework or a database.
Working within the restrictions of an existing system has a much higher skill floor.
I shudder at the idea of what I would have become if life had made it so that I would only read RD (because of a lack of time, of support, of curiosity) for the rest of my life. A shallow half-knowledge I would have been contented with, oblivious to what literature is really about.
Another similar thing I am guilty of: watching videos of people playing games instead of playing for myself. Not the same experience at all.
Of course not. The ultimate goal of AI is to get rid of the developers altogether and reduce costs (more profits!). You think BigTech are spending hundreds of billions just to make developers more productive while retaining them? That's not a profitable strategy.
The copy/paste with LLM interactive stage is just a transitional stage as the LLM improves. We'll be past that in 5 years time.
No, AIs won't replace all developers -- you still need people doing the systems designs that the AI can implement code for. But it could easily reduce their numbers by some large percentage (50%? 80%?).
Edit: I would no longer advise my kids to get a degree in Computer Science.
I think that will get chipped away at incrementally, but you're right, eventually all you'll need is a list of requirements. I'd put that 30-50 years away, but some people might say sooner.
> We’re becoming 10x dependent on AI. There’s a difference.
This is true, but I also don't need 10x the ability to write cursive any more. I used to have great handwriting; now it's very poor. That said, my ability to communicate has only increased throughout my life. Similarly, my ability to spell has probably diminished with auto-correct.
Yes, you will become dependent on AI if you're a developer, because those who use AI will take your job (if you don't) and be significantly more productive than you. It sucks, but that's reality. My grandfather started programming on physical cards; he knew a lot of stuff I had no idea about. That said, I'd be able to run circles around him in the programming (and literal) sense today.
The question is really what skills do you need to know to still execute the job with AI effectively. I'd argue that doesn't change between AI and not having AI.
As a seasoned engineer, I spend probably 60% of my time designing, 30% managing/mentoring/reviewing and 10% coding. That probably won't change much with AI. I'll just be able to get a lot more done with the coding and design in the same timeframe. My skillsets likely will remain the same, though writing small functions may diminish slightly.
>AI will take your job (if you don't) and be significantly more productive than you
Physically typing out the code is one of the last steps. Before that can happen the problem which is to be solved must be identified. An appropriate approach to the problem must be formulated.
Look around at some of the deranged hype in this AI cycle or look back on previous hype cycles. Many of the proposed "solutions" do not solve the problems of consumers. Solutions in search of a problem are as abundant as novel tech. These things dominate the zeitgeist. Perhaps they help inspire us, but they are not guaranteed to solve immediate problems.
There's an important distinction between problem solving skills, tools and the most probable token. To innovate is to do something new. AI, on the other hand, does the most probable thing.
Yes, but we already have illiterate programmers; there are companies who hire them, and a majority that recognises what they are and doesn’t. I’m talking about the code copy-pasters who cannot really reason about the problems they’re given nor understand how the code they pasted works. My point is that this is not a new phenomenon, and I don’t think it will fool those who weren’t fooled by it before.
I've ended up only using inline generative AI completions to do programming when I'm either on a strict deadline or making a demo for a conference/meetup talk. Otherwise I limit myself to pasting in/out of a chat window. This helps balance things out so that I can meet deadlines when I need to, but otherwise retain my programming/SRE skill.
That being said, one of my top Google searches is things like "kubernetes mount configmap as files" because I don't do them quite often enough to have them totally internalized yet.
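For reference, the shape I keep having to look up is roughly this (a minimal sketch; the names like `demo`, `app-config` and `/etc/app` are placeholders, not from a real cluster):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo
spec:
  containers:
    - name: app
      image: nginx              # placeholder image
      volumeMounts:
        - name: config-volume
          mountPath: /etc/app   # each key in the ConfigMap shows up here as a file
  volumes:
    - name: config-volume
      configMap:
        name: app-config        # must match an existing ConfigMap in the namespace
```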
I also use inline code generation, but it just seems like a more intelligent version of the code-aware autocomplete my IDE already provides. I love it, but it doesn’t feel like I’m atrophying because I’m still “driving”. It just saves a lot of keystrokes
I think there's a weird disconnect between the author's experience (the automated tools are good but they induce dependency) and my own experience (the tools are bad, produces code that doesn't type-check, produces explanations that don't make sense, or suggest back to me the half-broken code I already have).
I continue to believe that these tools would be more reliable if, rather than merely doing next-token prediction on code, we trained on the co-occurrence of human-written code, the resulting IR, and execution traces, so the model must understand at least one level deeper what the code does. If the prompt is not just a prose description but e.g. a skeleton of a trace, or a set of inputs/outputs/observable side effects, then our generative tools can be constrained to only produce outputs which meet those constraints, with the natural and reasonable cost that the engineer needs to think a little more deeply about what they want. I think if done right, this could make many of us better engineers.
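To make "execution trace" a bit more concrete, here's a toy sketch of the kind of signal I have in mind, using Python's built-in tracing hook on a made-up `clamp` function (just an illustration, not a training pipeline):

```python
import sys

trace = []

def trace_lines(frame, event, arg):
    # Record each executed line as (function name, line number, local variables).
    if event == "line":
        trace.append((frame.f_code.co_name, frame.f_lineno, dict(frame.f_locals)))
    return trace_lines

def clamp(x, lo, hi):
    if x < lo:
        return lo
    if x > hi:
        return hi
    return x

sys.settrace(trace_lines)
clamp(42, 0, 10)
sys.settrace(None)

for step in trace:
    print(step)  # each entry pairs a source line with the locals at that step
```

A model trained against this kind of data would have to be consistent with how the code actually executes, not just with how code usually looks.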
I think it's particularly weird that many of these tools simply do not seem to know about the type-checker at all. You'd think that'd be basic, at least for the likes of essentially fancy autocompletes like copilot; if the thing you're going to output does not type check, do not output it.
That would require actually understanding types. GPTs don't understand anything - not really. They know the words and how the words connect to each other, and that's all.
What you want - what everybody wants - is a GPT plus. In this case, GPT plus a compiler's parsing ability. I don't think that's what we have, though. What we have is a GPT that was trained on more and better code snippets. That's not good enough.
The tooling can take that over. Code has been generated that fails the type check? Let the type system of the IDE spit out diagnostics, feed it back to the LLM and let it output correct code. This can be made automatic, same with automated testing.
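A rough sketch of what that loop could look like (here the checker is mypy; `ask_llm` is a stand-in for whatever model call you use, not a real API):

```python
import subprocess
import tempfile


def type_check(source: str) -> str:
    """Run mypy on a snippet and return its diagnostics ('' means it passed)."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(source)
        path = f.name
    result = subprocess.run(["mypy", path], capture_output=True, text=True)
    return "" if result.returncode == 0 else result.stdout


def generate_until_typechecked(prompt: str, ask_llm, max_rounds: int = 3) -> str:
    """Generate code, feed type errors back to the model, stop when mypy is happy."""
    code = ask_llm(prompt)  # ask_llm: any callable mapping a prompt string to code
    for _ in range(max_rounds):
        errors = type_check(code)
        if not errors:
            return code
        code = ask_llm(prompt + "\n\nThe previous attempt failed type checking:\n"
                       + errors + "\nPlease fix it.")
    return code  # best effort after max_rounds
```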
The problem with people sneering at LLM assisted coding is that they miss the developments and advancements in tooling and integration.
Copy pasting from the ChatGPT browser tab and hoping for the best is not where it's at anymore.
> Copy pasting from the ChatGPT browser tab and hoping for the best is not where it's at anymore.
I mean, maybe there's a magic tool out there? The only one I've used (briefly, before turning it off in annoyance) is Copilot, which appears to pretty much just do that.
Sure, but you could presumably have some logic like "Copilot wants to type the string `MyClass.non_existent_method()`. Does the type checker accept this line? If not, tell Copilot to try again."
(I suspect possibly the reason that this does _not_ seem to be commonly done is that reliably producing _something_ (even if it's nonsense) is seen as more appealing to users than potentially not producing anything. And of course passing type checking is no guarantee of correctness.)
There are a lot of questions here that also apply to teaching CS. Do we want our graduates to still know the old way? Perhaps for job interviews and days when GPT is down and for problems it can't solve yet.
The "no AI without understanding the solution" rule is a start here.
LLMs can absolutely translate ideas to code. How is that even a topic of debate? It might not always be good or maintainable code, but the whole notion of "it's just a next token predictor" is so trite at this point.
You're misunderstanding what I mean. Sorry, I have trouble articulating this idea. "Idea" was probably not the best word, but I'm not sure of a better one.
LLMs can, in effect, translate some ideas into code. Under the hood, though, they are not. They are predicting next tokens based on context.
There are impressive and obviously useful emergent properties of how good they are at predicting tokens, but that is what they are doing. There's no question there. That's a fact.
It's not a direct translation, however, as is the case with compiled languages.
Compiled languages directly translate the idea you communicate to them into machine/byte code in a deterministic, predictable, and, importantly, tractable way.
LLMs do not do that. They aren't abstracting what the computer can do so that you can more effectively write instructions.
as you say
> It might not always be good or maintainable code
What if your idea _was_ good and/or maintainable machine/byte code? What if you needed to examine the output code and ensure some quality of it?
With a compiler, if you can communicate in a programming language exactly what you want, it will _always_ produce machine/byte code that matches that.
LLMs just don't do that. It's just not how statistical models work.
There is a difference: when you code in higher-level languages, most of the time (except in some domains where you still need to understand assembly) you don't need to understand how the generated assembly works. This is not the case with AI-generated code. You still need to be able to understand, modify and validate the generated code.
Eventually won't AI be trained on a massive corpus of source code and be tuned to actually program something as good as a lot of the existing bloatware out there that businesses pay out the nose for?
It's all in how you use it. Copy/paste driven development has long been a meme in the programming community.
For me, AI is like pairing with a very knowledgeable and experienced developer that has infinite time, infinite patience, and zero judgement about my stupid questions or knowledge gaps. Together, we tackle roadblocks, dive deep into concepts, and as a result, I accept fewer compromises in my work. Hell, I've managed to learn and become productive in Vim for the first time in my career with the help of AI.
But is a programmer who uses AI to program the same as
a Typescript programmer who doesn't understand JavaScript well?
Or a Python programmer who doesn't know C?
Or a C programmer who doesn't know assembly code?
Or an assembly coder who doesn't make his own PCBs?
Is AI orchestration just adding another layer of abstraction to the top of the programming stack?
Not until their outputs become predictable and reproducible. Then a form of English or whatever other language becomes your programming language with its quirks and special syntax to get the right result.
The last guy who traditionally "knew everything" was Erasmus.
The shame is not in outsourcing the skill to, e.g. make a pencil[1]; rather, the shame is not retaining one's major skill. In IT, that is actually thinking.
X to doubt. So far at least, to do anything nontrivial you still have to understand code. Sure it will raise the bar for what someone without such understanding can make, but a lot of that can already be done with code-free tools.
What most people who would call themselves programmers are working on is far beyond what AI can do today, or probably in the foreseeable future.
My rule with LLMs is to use them only after reading documentation. It's easy to ask them for some boilerplate for something you haven't done in a long time; however, I think it's better to just take five minutes to go read whatever spec/documentation is out there and then ask the LLM to write whatever code you want.
that would be a generous invented statistic if it only was addressing the inherent stochastic nature of llm output, but you also have to factor in the training data being poisoned and out of date
in my experience the error rate on llm output is MUCH higher than 5%
exactly. Programming languages are all just levels of abstraction above the analog/digital interface.
While it is important to understand the fundamentals of coding, if we expected every software engineer to be well versed in assembly that wouldn't necessarily result in increased productivity.
LLMs are just the next rung up on the abstraction ladder.
There will always be people interested in the gritty details of low level languages like Assembly, C, that give you a lot more granular control over memory. While large enterprises and codebases, as well as niche use cases can absolutely benefit from these low level abstraction specialists, the avg. org doesn't need an engineer with these skills. Especially startups, where getting the 80% done ASAP is critical to growth.
i think there will still be a need for "programmers" with those skills. We'll always need specialists and people building new language/frameworks/etc.
But I think everyone who isn't at least a standard deviation above the average programmer (like myself) shouldn't be focused on being able to read, write, and debug code. For this cohort the important ability is to see the bigger picture, understand the end goal of the code, and then match those needs to appropriate technology stacks. Essentially just moving more and more towards a product manager managing AI programming agents.
I've heard this argument about spell checkers in Word, browsers, etc as well.
Anecdotally, I wouldn't be surprised if there was some truth to it: in the early 2010s I saw tons of people saying "defiantly" that meant to type "definitely", and I couldn't figure out why. Then I misspelled "definitely" once and saw the spell checker suggested "defiantly", and it finally clicked.
Spell checkers have definitely improved since then (I can't remember the last time I saw the definitely/defiantly mix-up), but I can't help but wonder how bad suggestions have affected the way people understand languages.
A similar phenomenon exists on TikTok with words taking on contradictory or even opposite meanings, though this doesn't seem to be caused by spell check so much as constant incorrect usage rewriting the definition in people's minds (like "POV: you're getting mugged" when it should actually be "POV: you're mugging someone" or "POV: you're a bystander watching someone get mugged").
Oh yes they do! (Granted, it's either because learners input nonsense, or because they fat-fingered when inputting sane numbers... which I suppose are the same thing when you come down to it.)
Socrates and writing, calculators and LLMs, television and social media... thought terminating cliches that just crowd the discussion in the same way spam crowds an inbox
also we didn't have those discussions before; everyone forgets about abacuses and slide rules
What does gpt dependence look like for people that have non-programming office jobs where they just email all day?
Do they just copy paste emails and office menus into an agent and send responses? Do they actually make any decisions themselves, or do they just go with whatever?
I saw someone recently explaining it this way: we now have tools to inflate keywords and prompts into fancy page-long texts, which we then feed to tools that shrink those texts back into short summaries.
So I guess office jobs are now grinding machines, where one side has keywords, the other side extracts keywords, and in the middle there is an enormous waste of resources to fit some manager's metrics.
Smells like an opportunity for a new product to serve all sides. Let people send prompts, which are then inflated locally on the receiver's side with custom prompt modifications, before being shrunk again. This way you get fancy small text for the transport, the sender doesn't have to waste money on AI, and the receiver can personalize how it gets expanded, or save money by just reading the prompt directly. And finally, some company can make money from this. Win-win-win-win, I would say.
>What does gpt dependence look like for people that have non-programming office jobs where they just email all day?
There isn't any such dependency (yet).
Despite strong corporate push I'd say the average person I work with uses it maybe once a week - usually research or summarization.
The inroads LLMs are making on programming aren't translating to other office jobs cleanly. I think it's largely because other areas don't have an equivalent source of training data like GitHub.
It'll no doubt come, but for now programmers seem to have mostly automated themselves out of a job, not others, from what I can tell.
>Do they actually make any decisions themselves
There is little to no GPT decision making happening despite all the media chat about AI CEOs and similar bullshit. Inspiration for brainstorming is about as close as it gets
my assumption was that these office people were secretly using it to automate dull parts of their work without the approval or knowledge of their coworkers
Yup, because I'm one of these office workers. Financial controller stuff in PE space & coding as hobby and interest in AI space.
There just isn't a direct equivalent thing that captures the knowledge in a machine readable way the way a code base does. It's all relationships, phone calls, judgement calls, institutional knowledge, meetings, coordination, navigating egos and personal agendas etc. There is nothing there that you can copy paste into an LLM like you can with say a compile error.
Even the accounting parts that conventional wisdom says should be susceptible to this... it's just not anywhere close to useful yet. Think about how these LLMs routinely fail tests like "is 9.90 or 9.11 bigger" or miscount how many Rs are in strawberry. You really want to hand decisions about large amounts of money to that? And maybe send a couple million to the wrong person because the LLM hallucinated a digit? It's just not a thing.
Maybe with some breakthroughs on hallucinations it could be but we all know that's a hard nut to crack.
>automate dull parts
I've been trying hard to find applications, given my enthusiasm for coding & AI. No luck yet. Even things where I thought this would do super well... like digging through thousands of emails via RAG+LLM... are proving oddly mediocre. Maybe that's an implementation flaw, not sure.
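For context, the shape of what I tried looks roughly like this (a bare-bones sketch; `embed` here is a fake stand-in for a real embedding model):

```python
import numpy as np


def embed(text: str) -> np.ndarray:
    # Fake stand-in: a real system would call an embedding model here.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.normal(size=384)
    return v / np.linalg.norm(v)


def retrieve(query: str, emails: list[str], k: int = 5) -> list[str]:
    """Return the k emails most similar to the query by cosine similarity."""
    q = embed(query)
    return sorted(emails, key=lambda e: float(embed(e) @ q), reverse=True)[:k]


# The retrieved emails then get pasted into the LLM prompt as context,
# and the model is asked to answer the original question from them.
```

The retrieval part works fine; it's the answers built on top of it that keep disappointing.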
Tangent: I remember hearing of a sci-fi story where a city has been automated for generations and stuff is starting to fall apart while nobody knows how to fix it. Does anyone know what the name is? I have been unable to find it for a while.
In my editor I do /fetch <docs url> and can then happily ask the LLM questions and/or have it use the docs when writing code. We don't need to rely purely on zero-shot prompts and pre-training data.
imo we are seeing two types of programming viewports emerge.
i mean maybe it's just me but i quite like this new normal where i'm an "intermediary". i review every line that comes out and research things if they don't make sense or i'm confused. and a lot of times it helps me figure out how to approach a problem or refine an existing one. i'm still architecting everything and diving in.
i guess if you liken programming to, i dunno, chopping your own tree down and turning it into 2x4s to build a house, then yeah, it seems like your skills are atrophying.
i do not miss the "joy" of writing random boilerplate 100s of times, or variations of the same function. i do however fear for the new generation who maybe didn't cut their teeth on having minimal guidance. i can totally see people just coasting and blindly pasting things, but that's always sort of existed with stack exchange, etc.
Is this yet another thought that overemphasizes the immediate now? AI is developing so quickly it's difficult to see a whole generation of programmers being affected by negative transient aspects of this.
Yes. But still you’re going to make AI tools anyway… ffs.
I think AI is just giving cover to all the incompetent people in your org, and in fact will create more incompetence via the effect you yourself noticed.
Eventually you’ll find many if not most of your coworkers are just wrappers around an LLM and provide little additional benefit—as I am finding.
Coding will become AI dominated at many levels. A lot of people on here don't want to hear that because they have an ego about their exclusive coding skills. Your compensation will decrease. You may be fired. The coding ability of the average 'senior' developer is not impressive, and the title will not protect you.
The only thing AI can code is glue code. But that is the majority of what people write these days. I've avoided that kind of work and AI disappoints me every time I've tried it with one exception - text to images.
I am curious, do you think it will stay this way? AI couldn't code anything 2.5 years ago, why should it stop with glue code?
Yesterday I fed the first 3 questions from the 2024 Putnam Exam [0] into a new open source LLM and it got 3/3. The progress is incredible and I don't expect it to suddenly stop where it is.
I expect it to get better some day, but not really good in the next 10 years. I've been calling BS on fully autonomous driving for a decade and I still believe it takes AGI to do that and I think that's still a long way out.
LLMs literally grew out of autocomplete, and aside from having a huge amount of information they still don't understand, and certainly don't learn during inference.
Exam was administered on December 7th 2024, so it is possible this is in the training data but unlikely.
The open source model worked through the problem step by step (it was R1, so I could read its chain of thought). That could be an elaborate ruse, but that's also unlikely.
Don't have any recommendations; I don't watch or read anyone. Think of it in two parts. You have the coding skill: knowledge of syntax, technical details, inputs, outputs, some math and logic, the infamous 'DSA' that millions of Indian people with no talent can solve. <- People pedestalize this.
Then you have the project manager / architect skill of shaping things and having an understanding of the tech stack - someone who does not need to be a great coder. Maybe they were in the past, but they're not grinding silly LeetCode questions and cannot solve them, and are smart enough to see that as the hamster-wheel surrogate goal that it is.
I think we need less of the former, much less. I don't see it as a bad thing. I myself have severe carpal tunnel, RSI, and back problems, from sitting at a computer.
Almost all of the students that jumped into programming courses did so for money. They have no passion (far less than me) for coding. It's a lie perpetuated by toxic positivity and invested groups that coding is some magical, super cool, super smart activity.
It's very overdue at this point. DeepSeek R1 and V3 are competing with o1, open source and locally hostable. Nvidia is coming out with small AI GPUs for consumers.
The winners of this AI age are everyone EXCEPT 'coders'. Many are getting 6 figures to maintain a CRUD app. I don't feel sorry for them! Cooking up garbage apps in PHP - I cannot wait for it to be overturned by energy-efficient, AI-made apps in Rust.