
"I firmly believe that AI will not replace developers, but a developer using AI will replace a developer who does not."

Ugh, anyone who says that and really believes it can no longer see common sense through the hype goggles.

It's just stupid and completely 100% wrong, like saying all musicians will use autotune in the future because it makes the music better.

It's the same as betting that there will be no new inventions, no new art, no works of genius unless the creator is taking vitamin C pills.

It's one of the most un-serious claims I can imagine making. It automatically marks the speaker as a clown divorced from basic facts about human ability.



I disagree. While there are developers that truly build new technology and invent novel solutions, the overwhelming majority of developers who are paid for their work daily do pretty mundane and boring software development. They are implementing business logic, building forms, building tables, etc.

And AI already excels at building those sorts of things faster and with cleaner code. I’ve never once seen a model generate code that’s as ugly and unreadable as a lot of the low quality code I’ve seen in my career (especially from Salesforce “devs” for example).

And even the ones that do the more creative problem solving can benefit from AI agents helping with research, documentation, data migration scripts, etc.


I'm one working on novel tech, without AI. I've never been doing more valuable work or been more in command of my craft.

Yet the blanket statement is that I will fail and be replaced, and in fact that people like me don't exist!

So heck yeah I'll come clap back on that.


Continuing to develop my craft in the 4th straight year of numbskulls claiming that ChatGPT is the end of history has made me extraordinarily confident in my ability to secure lucrative developer work for the rest of my life.


It is not productive to call people numbskulls and make straw-man arguments just because you don't agree with their statements, or may not have observed the same benefits that they have.

There is absolutely something real here, whether you choose to believe it or not. I'd recommend taking a good faith and open minded look at the last few months of developments. See where it can benefit you (and where it still falls way short).

So even if you may have arrived at your conclusion years ago, I assure you that things continue to improve by the week. You will be pleasantly surprised. This is not all or nothing, nor does it have to be.


Your interpretation of the original quote is so far off that I would question your interpretation of the world. Sure, novel stuff is being built, but the vast majority of code at all sizes of companies has been written before in some iteration. Even the novel work being done is surrounded by layers of code that has probably been written before. Are engineers going anywhere? No. But I also don’t think it’s far-fetched to see a near-term future where competent engineers who use AI tools are more productive than the ones who don’t. I am not talking about copy and pasting but rather thoughtful use of tooling.


But "more productive" is a wording I also take issue with.

Code is like law. The goal isn't to have a lot of it. Come to me when by "more productive" you actually mean that the person using the LLM deleted more lines of code than anyone else around them while preserving the stability and power of the system.


[flagged]


I thought you were supposed to be boosting AI?

Why are you so bitter?

You aren't going to sell any snake oil with this venomous strategy.


I think I started the bitterness. I'm sorry. Everyone has genuine hurt. People who have bought the AI product are hurt because they're weakened and made vulnerable by their dependence on buying their own productivity back from a reseller who is now raising the prices.

I get cheesed off cause the AI people disrespect hard work and try to devalue its rewards, and that message is just toxic to anyone trying to learn. You can't master a musical instrument by paying an assistant to practice it for you.


No hurt here. I simply reply to low quality posts with snark but also trying to challenge them. You keep responding with projection. I have zero skin in defending what works for me. I am simply pointing out your ignorance is equally as bad as the extreme hype on the other end of the spectrum.


Right, I chose my words carefully when I said the "overwhelming majority" – and not "every single developer"


> and with cleaner code

I use AI pretty extensively and encourage my folks to use it as well but I've yet to see this come directly from an LLM. With human effort after the fact, sure, but LLMs tend to write inscrutable messes when left to their own devices.


> the overwhelming majority of developers who are paid for their work daily do pretty mundane and boring software development

So do musicians. We think of them as doing creative stuff, but for the vast majority the work is mundane.


Most musicians (i.e. non-famous ones) get most of their income from teaching students. I don't think such a model makes sense for developers

(though who knows, maybe at some time in the future there will be significant numbers of people programming as a hobby and wanting to be coached by a human...)


It really depends. I think if there's a majority trend, it's just to have another job of any kind.


Do the people in this corner use compilers? Would they agree that programmers who don't use them* have been replaced by those that do?

*: I'm aware of cases like the recent ffmpeg assembly usage that gave a big performance boost. When talking about industrial trend lines, I'm OK with admitting 0.001% exceptions.

(Apologies if it comes across as snarky or pat, but I honestly think the comparison is reasonable.)


> Do the people in this corner use compilers? Would they agree that programmers who don't use them* have been replaced by those that do?

Are you aware compilers are deterministic most of the time?

If a compiler had a 10% chance of erasing your code instead of generating an executable you'd see more people still using assembly.


Compilers are systems that tame complexity in the "grug-brain" sense. They're true extensions of our senses and the information they offer should be provably correct.

The basic nature of my job is to maintain the tallest tower of complexity I can without it falling over, so I need to take complexity and find ways to confine it to places where I have some way of knowing that it can't hurt me. LLMs just don't do that. A leaky abstraction is just a layer of indirection, while a true abstraction (like a properly implemented high-level language) is among the most valuable things in CS. Programming is theory-building!


There are technologies that become de facto requirements for work in a field. For software, compilers and version control both qualify.

But... what else? These things are rare. It’s not like there’s a new thing that comes along every few years and we all have to jump on or be left behind, and LLMs are the latest. There’s definitely a new thing that comes along every few years and people say we have to jump on or be left behind, but it almost never bears out. Many of those ended up being useful, but not essential.

I see no indication that LLMs or associated tooling are going to be like compilers and version control where you pretty much can’t find anyone making a living in the field without them. I can see them being like IDEs or debuggers or linters where they can be handy but plenty of people do fine without them.


Compilers were invented pretty early on in things… I wouldn’t be shocked if the population of assembly programmers has remained constant.

Where would you put the peak? Fortran was invented in the 50’s. The total population of programmers was tiny back then…


this is kind of a funny example to me because of all the programming language and compiler discourse that's happening. analogies almost always miss the mark by hiding the nuances that need discussing, and this topic is no exception.


the comparison is preposterous.


When Fortran came out, I don't think a lot of people yelled at the assembly programmers and told them "learn Fortran or be replaced".


If the assembly programmers were struggling with correctly optimizing loops for optimal performance on several distinct target machines, I would hope that their management would want them to try this new Fortran thing and see how well it worked. (And it did, and it enabled new companies like CDC to win customers from IBM.)


+1, even though I mildly disagree with you. I pay for Gemini Pro by the year, and even though I don’t use it often, it is still high value. There are obvious things like generating a Bash shell script quickly - tasks I rarely do, that are simple, and where I save 5 minutes here and there. Sometimes code generation can be useful, in moderation.

But the big thing is using AI to learn new things, explain some tricky math in a paper I am reading, help brainstorm, etc. The value of AI is in improving ourselves.


> explain some tricky math in a paper I am reading

To me this seems to be the single most valuable use case of newer "AI tools"

> generating a Bash shell script quickly

I do this very often, and this seems to me the second most valuable use case of newer "AI tools"

> The value of AI is in improving ourselves

I agree completely.

> help brainstorm

This strikes me as very concerning. In my experience, AI brainstorming ideas are exceptionally dull and uninspired. People who have shared ideas from AI brainstorming sessions with me have OVERWHELMINGLY come across as AI-brained dullards who are unable to think for themselves.

What I'm trying to say is that ChatGPT and similar tools are much better suited for interacting with closed systems with strict logical constraints than they are for idea generation or writing in a natural language.


For brainstorming: when I write out a plan for a writing or coding project, I like to ask questions like ‘what am I missing or leaving out?’, etc.

Really, it is like students using AI: some are lazy and expect it to do all the work, some just use it as a tool as appropriate. Hopefully I am not misunderstanding you and others here, but I think you are mainly complaining about lazy use of AI.


Full disclosure, I'm on the Kilo Code team, but I read your analogy, and I have to respectfully disagree. Musicians don't all use autotune, because autotune is a specialized technology used to elicit a specific result. BUT, MOST musicians use technology; either to record their work, or mix their tracks, or promote their work. You could definitely say "A musician who doesn't post online to a platform or save their work in certain audio formats at the studio is going to be replaced by musicians who do." Are there musicians who still release their work on vinyl or cassette tapes and prefer the sound of a stage with no microphones? Sure. But to dismiss the overarching influence of technology on the process would be ignoring where the progress is going. I'd argue that people who use Kilo Code don't just "autotune" their code, they're using it as a tool that augments their workflow and lets them build more, faster. That's valuable to an employer. Where the engineer is still vital is in their ability to know what to ask for, how to ask for it, and how to correct the tool when it's wrong, because it'll never be 100% right. It's just not hype, it's inevitable.


I actually agree with you that LLM assistance is inevitable. The fact that we can have small local models is what convinces me that the tech won't go away.

Even if things are going the direction you say, though, Kilo is still just a fork of VSCode. Lipstick on a pig, perhaps. I would bet that I know the strengths and weaknesses of your architecture quite a lot better than anyone on the Kilo team because the price of admission for you is not questioning any of VSCode's decisions, while I consider all of them worthy of questioning and have done so at great length in the process of building something from scratch that your team bypassed.


I believe it. Maybe not replace 100%, but effectively replace it.

I believe that at some point, AI will get good enough that most companies will eventually stop hiring someone that doesn’t utilize AI. Because most companies are just making crud (pun intended). It’ll be like specialized programming languages. Some will exist, and they may get paid a lot more, but most people won’t fall into that category. As much as we like to puff ourselves up, our profession isn’t really that hard. There are a relative handful of people doing some really cool, novel things. Some larger number doing some cool things that aren’t really novel, just done very nicely. And the majority of programmers are the rest of us. We are not special.

What I don’t know is the timing. I don’t expect it to be within 5 years (though I think it will _start_ in that time), but I do expect it within my career.


AI isn't a stylistic preference or minor enhancement, but cognitive augmentation that allows developers to navigate complexity at scales human cognition wasn't designed for.

Just as the developer who refused to adopt version control, IDEs, or Stack Overflow eventually became unemployable, those who reject tools that fundamentally expand their problem-solving capacity will find themselves unable to compete with those who can architect solutions across larger possibility spaces on smaller teams.

Will it be used for absolutely every problem? No - There are clearly places where humans are needed.

But rejecting the enormous impact this will have on the workforce is trading hype goggles for a bucket of sand.


> Just as the developer who refused to adopt version control, IDEs, or Stack Overflow eventually became unemployable

This passage forces me to conclude that this comment is sarcasm. Neither IDEs nor the use of Stack Overflow is anywhere near a requirement for being a professional programmer. Surely you realize there are people out there who are happily employed while still using stock Vim or Emacs? Surely you realize there are people out there who solve problems simply by reading the docs and thinking deeply rather than asking SO?

The usage of LLM assistance will not become a requirement for employment, at least not for talented programmers. A company gating on the use of LLMs would be preposterously self-defeating.


> cognitive augmentation that allows developers to navigate complexity at scales human cognition wasn't designed for

I don't think you should use LLMs for something you can't master without.

> will find themselves unable to compete

I'd wait a bit more before concluding so affirmatively. The AI bubble would very much like us to believe this, but we don't yet know very well the long term effects of using LLMs on code, both for the project and for the developer, and we don't even know how available and in which conditions the LLMs will be in a few months as evidenced by this HN post. That's not a very solid basis to build on.


Two masters go head to head. One uses AI tools (wisely - after all, they're a master!), the other refuses to. Which one wins?

To your second point -- With as much capital as is going into data center buildout, the increasing availability of local coding LLMs that near the performance of today's closed models, and the continued innovation on both open/closed models, you're going to hang your hat on the possibility that LLMs are going to be only available in a 'limited or degraded' state?

I think we simply don't have similar mental models for predicting the future.


> Which one wins?

We don't really know yet, that's my point. There are contradictory studies on the topic. See for instance [1] that sees productivity decrease when AI is used. Other studies show the opposite. We are also seeing the first wave of blog posts from developers abandoning the LLMs.

What's more, most people are not masters. This is critically important. If only masters see a productivity increase, others should not use it... and will still get employed because the masters won't fill in all the positions. In this hypothetical world, masters not using LLMs also have a place by construction.

> With as much capital as is going into

Yes, we are in a bubble. And some are predicting it will burst.

> the continued innovation

That's what I'm not seeing. We are seeing small but very costly improvements on a paradigm that I consider fundamentally flawed for the tasks we are using it for. LLMs still cannot reason, and that's IMHO a major limitation.

> you're going to hang your hat on the possibility that LLMs are going to be only available in a 'limited or degraded' state?

I didn't say I was going to, but since you are asking: oh yes, I'm not putting my eggs in a box that could abruptly disappear or become very costly.

I simply don't see how this thing is going to be cost efficient. The major SaaS LLM providers can't seem to manage profitability, and maybe at some point the investors will get bored and stop sending billions of dollars towards them? I'll reconsider when and if LLMs become economically viable.

But that's not my strongest reason to avoid the LLMs anyway:

- I don't want to increase my reliance on SaaS (or very costly hardware)

- I have not yet caved in to participating in this environmental disaster, or in this work-pillaging phenomenon (well, for that last part I guess I don't really have a choice; I see the dumb AI bots hammering my forgejo instance).

[1] https://www.sciencedirect.com/science/article/pii/S016649722...


There's a clear difference between "I have used these tools, tested their limits, and have opinions" and "I am consuming media narratives about why AI is bad"

AI presently has a far lower footprint on the globe than the meat industry -- The US Beef industry alone far outpaces the impact of AI.

As far as "work pillaging" - There is cognitive dissonance in supporting the freedom of information/cultural progress and simultaneously desiring to restrict a transformative use (as it has been deemed by multiple US judges) of that information.

We don't see eye to eye, and I think that's ok. We can re-evaluate our predictions in a year!


> consuming media narratives about why AI is bad

That's quite uncharitable.

I don't need to use it to make these points. While I might show a lack of perspective, I don't need to do X to reasonably think X can be bad. You can replace X with all sorts of horrible things, I'll let the creativity of the readers fill in the gap.

> AI presently has a far lower footprint on the globe than [X]

We see the same kind of arguments for planes, cars, anything with a big impact really. It still has a huge (and growing) environmental impact, and the question is do the advantages outweigh the drawbacks?

For instance, if a video call tool allowed you to have a meeting without taking a plane, the video call tool had a positive impact. But then there's also the ripple effect: if without the tool, the meeting hadn't happened at all, the positive impact is less clear. And/or if the meeting was about burning huge amounts of fuel, the positive impact is even less clear, just like LLMs might just allow us to produce attention-seeking, energy-greedy shitty software at a faster speed (if they indeed work well in the long run).

And while I can see how things like ML can help (predicting weather, etc), I'm more skeptical about LLMs.

And I'm all for stopping the meat disaster as well.

> We don't see eye to eye, and I think that's ok. We can re-evaluate our predictions in a year!

Yep :-)


It's not intended to be uncharitable - You clearly value many things I do (the world needs less attention-seeking, energy greedy shitty software).

I don't know how else one gets information and forms an opinion on the technology except for media consumption or hands-on experience. Note that I count "social media" as media.

My proposition is that without hands-on experience, your information is limited to media narratives, and it seems like the "AI is net bad" narrative is the source of your perspective.

Skepticism is warranted, and there are a million ways this technology could be built for terrible ends.

But, I'm of the opinion that: A) the technology is not hype, and is getting better; B) it can, and will, be built (time horizon debatable); C) for it to result in good outcomes for humanity, it requires good people to help shape it in its formative years.

If anything, more people like you need to be engaging it to have grounded perspectives on what it could become.


> I don't know how else one gets information and forms an opinion on the technology except for media consumption or hands-on experience.

Okay, I think I got your intent better, thanks for clarifying.

You can add discussion with other people outside software media, or opinion pieces outside media (I would not include personal blogs in "media" for instance, but would not be bothered if someone did), including people who tried and people who didn't. Media outlets are also not uniform in their views.

But I hear you, grounded perspectives would be a positive.

> That for it to result in good outcomes for humanity, it requires good people to help shape it in its formative years.

I hear you as well, makes perfect sense.

OTOH, it's difficult to engage with something that feels fundamentally wrong or like a dead end, and that's what LLMs feel like to me. It would also be frightening: the risk that, as a good person, you help shape a monster.

The only way out I can see is inventing the thing that will make LLMs irrelevant, but also don't have their fatal flaws. That's quite the undertaking though.

We'd not be competing on an equal footing: LLM providers have been doing things I would never have dared even considering: ingesting a considerable amount of source material while completely disregarding its licenses, hammering everyone's servers, spending a crazy amount of energy, sourcing a crazy amount of (very closed) hardware, burning an insane amount of money even on paid plans. It feels very brutal.

Can an LLM be built avoiding any of this stuff? Because otherwise, I'm simply not interested.

(of course, the discussion has shifted quite a bit! The initial question was if a dev not using the LLMs would remain relevant, but I believe this was addressed at large in other comments already)


My point on the initial discussion remains, but I think that it also seems like we disagree on the foundations/premise of the technology.

The actions of a few companies do not invalidate the entire category. There are open models, trained on previously aggregated datasets (which, for what it's worth, nobody had a problem with being collected a decade ago!), and research being done to make training and usage more efficient.

The technology is here. I think that your assessment of its relevance is not informed by actual usage, that your framing of its origins is black/white (rather than reflecting the actual landscape of different model approaches), and that your lack of interest in using it does nothing to change the absolutely massive shift that is happening in the nature of work. I'm a Product Manager, and the Senior Engineer I work with has been reviewing my PRs before they get merged - 60%+ were merged without much comment, and his bar is high. I did half of our last release, while also doing my day job. Safe to say, his opinion has changed based on that.

Were they massive changes? No. But these are absolutely impactful in the decision calculus that goes into what it takes to build and maintain software.

The premise of my argument is that what you see as "fatal flaws" are an illusion created by bias (which bleeds into the second-hand perspectives you cite just as readily as it does the media), rather than your direct and actual validation that those flaws exist.

My suggestion is to be an objective scientist -- use the best model released (regardless of origins) with minor research into 'best practices' to see what is possible, and then ask yourself if the 'origins' issue were addressed by a responsible open-source player in 6-12 months, whether it would change anything about your views on the likely impact of this technology and your willingness to adopt it.

I believe that it's coming, not because the hype machine tells me so (and it's WAY hyped) - But because I've used it, seen its flaws and strengths, and forecast how quickly it will change the work that I've been doing for over a decade even if it stopped getting better (and it hasn't stopped yet)


Among the fatal flaws I see, some are ethical / philosophical regardless how the thing actually perform. I care a lot about this. It's actually my main motivation for not even trying. I don't want to use a tool that has "blood" on it, and I don't need experience in using the tool to assess this (I don't need to kill someone to assess that it's bad to kill someone).

On the technical part I do believe LLMs are fundamentally limited in their design and are going to plateau, but this we shall see. I can imagine they can already be useful in certain cases despite their limitations. I'm willing to accept that my lack of experience doesn't make my opinion so relevant here.

> My suggestion is to be an objective scientist

Sure, but I also want to be a reasonable Earth citizen.

> -- use the best model released (regardless of origins) with minor research into 'best practices' to see what is possible

Yeah… but no, I won't. I don't think it will have much practical impact. I don't feel like I need this anecdotal experience, I'd not use it either way. Reading studies will be incredibly more relevant anyway.

> and then ask yourself if the 'origins' issue were addressed by a responsible open-source player in 6-12 months, whether it would change anything about your views on the likely impact of this technology

I doubt so, but open to changing my mind on this.

> and your willingness to adopt it.

Yeah, if the thing is actually responsible (I very much doubt it is possible), then indeed, I won't limit myself. I'd try it and might use it for some stuff. Note: I'll still avoid any dependency on any cloud for programming - this is not debatable - and in 6-12 months, I won't have the hardware to run a model like this locally unless something incredible happens (including not having to depend on proprietary nvidia drivers).

What's more, an objective scientist doesn't use anecdotal datapoints like their own personal experience, they run well-designed studies. I will not conduct such studies. I'll read them.

> I think that it also seems like we disagree on the foundations/premise of the technology.

Yeah, we have widely different perspectives on this stuff. It's an enriching discussion. I believe we've said just about everything that could be said.

[1] https://salsa.debian.org/deeplearning-team/ml-policy/-/blob/...


> The US Beef industry alone far outpaces the impact of AI.

Beef has the benefit of seeing an end, though. Populations are stabilizing, and people are only ever going to eat so much. As methane has a 12 year life, in a stable environment the methane emissions today simply replace the emissions from 12 years ago. The carbon lifecycle of animals is neutral, so that is immaterial. It is also easy to fix if we really have to go to extremes: Cull all the cattle and in 12 years it is all gone!

Whereas AI, even once stabilized, theoretically has no end to its emissions. Emissions that are essentially permanent, so even if you shut down all AI when you have to take extreme measures, the effects will remain "forever". There is always hope that we'll use technology to avoid that fate, but you know how that usually goes...


> There's a clear difference between...

There's also a clear difference between users of this site that come here for all types of content, and users who have "AI" in their usernames.

I think that the latter type might just have a bit of a bias in this matter?


I'd be surprised if one needed to refer to my username to make a determination on me viewing the technology more optimistically, although I do chafe a tad at the notion that I don't come here for all types of content.


> I don't think you should use LLMs for something you can't master without.

I'm not sure, I frequently use LLMs for well-scoped math-heavy functions (mostly for game development) where I don't necessarily understand what's going on inside the function, but I know what output I expect given some inputs, so it's easy for me to kind of blackbox test it with unit tests and iterate on the "magic" inside with an LLM.

I guess if I really stopped and focused on math for a year or two I'd be able to code that myself too, but every time I tried to get deeper into math it was either way too complex for me to feel like it was time well spent, or just boring. So why bother?
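
To make the workflow concrete, here's a minimal made-up sketch (not from a real project): the helper is the kind of function I let the LLM iterate on, and the tests are the only part I actually write and maintain.

    # Hypothetical example of the workflow: the LLM iterates on the body of
    # closest_point_on_segment(); I only care that the black-box tests pass.
    import unittest


    def closest_point_on_segment(p, a, b):
        """Return the point on segment a-b closest to p (2D tuples)."""
        ax, ay = a
        bx, by = b
        px, py = p
        abx, aby = bx - ax, by - ay
        denom = abx * abx + aby * aby
        if denom == 0:                      # degenerate segment: a == b
            return a
        t = ((px - ax) * abx + (py - ay) * aby) / denom
        t = max(0.0, min(1.0, t))           # clamp onto the segment
        return (ax + t * abx, ay + t * aby)


    class TestClosestPoint(unittest.TestCase):
        def test_projects_onto_middle(self):
            self.assertEqual(closest_point_on_segment((5, 5), (0, 0), (10, 0)), (5.0, 0.0))

        def test_clamps_to_endpoint(self):
            self.assertEqual(closest_point_on_segment((-3, 2), (0, 0), (10, 0)), (0.0, 0.0))

        def test_degenerate_segment(self):
            self.assertEqual(closest_point_on_segment((1, 1), (4, 4), (4, 4)), (4, 4))


    if __name__ == "__main__":
        unittest.main()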


I can't comment on your well-scoped case. I can still see it backfiring (in terms of maintenance), but it does seem you are being cautious and limiting the potential damage: with your unit tests you are at least increasing the level of confidence you can have in the LLM output, and it's confined to a very specific part of your code.

I didn't have such cases in mind, was replying to the "navigate complexity at scales human cognition wasn't designed for" aspect.


“I don’t think you should pay others for something you can’t master without [paying].” is one hell of an argument to make. Good luck trying.


> is one a hell of an argument to make.

I agree, but it's not mine.


IMO Version Control, IDEs, and Stack Overflow are many orders of magnitude more valuable than GPT tools.

The use cases of these GPT tools are extremely limited. They demo well and are quite useful for highly documented workflows (e.g. they are very good at creating basic HTML/JS layouts and functionality).

However, even the most advanced GPT tools fall flat on their face when you start working with any sort of bleeding edge, or even just less-ubiquitous technology.


That is interesting, in my experience these tools work quite well on larger codebases but it depends how you use them and I haven't really found a counterexample. Do you maybe have a practical example, like a repo you could link that just doesn't work for AI?


GPT tools give piss-poor suggestions when working with the Godot game engine.

The Godot engine is an open-source project that has matured significantly since GPT tools hit the market.

The GPTs don't know what the new Godot features are, and there is a training gap that I'm not sure OpenAI and their competitors will ever be able to overcome.

Do you work on web applications? I've found that GPT tools get pretty stupid once you stop working with HTML/JS.


I stopped using GPT for coding long ago, mostly because I use Claude.

With Claude I can even write IC10 code (with a bit of help and an understanding of how Claude works).

IC10 is a fictional, MIPS-like CPU in the game Stationeers. So that's pretty promising for most other things.


Thanks for the Godot example. I experimented with it through Claude Code (I have no prior experience with it). Got a Vampire Survivors-esque game working from scratch, with plain shapes representing the game elements like the player and enemies, in 70 minutes on and off. It included a variety of 5 weapons, enemies moving toward the player, player movement, little experience orbs dropping when enemies expire, a projectile and area-of-effect damage system, and a levelling-up and upgrade system with a UI which influenced weapon behaviours.

Godot with AI was definitely a worse experience than usual for me. I did not use the Godot editor, although the development flow for Godot seems to be built around it. Scenes were generated through a Python script, which was of course written by Claude Code. Personally, I reviewed no line of code during the process.
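
(For illustration only - this is not the actual script Claude wrote, just a hypothetical minimal sketch of the idea of generating a scene from Python rather than clicking it together in the editor. I'm assuming Godot 4's text-based .tscn format here, so treat the details as approximate.)

    # Hypothetical sketch: write a minimal Godot scene file from Python.
    # Assumes Godot 4's text-based .tscn format; node names/types are examples.
    import textwrap
    from pathlib import Path

    SCENE = textwrap.dedent("""\
        [gd_scene format=3]

        [node name="Main" type="Node2D"]

        [node name="Player" type="CharacterBody2D" parent="."]
        position = Vector2(512, 300)

        [node name="SpawnTimer" type="Timer" parent="."]
        wait_time = 1.5
        autostart = true
        """)

    def write_scene(path: str = "main.tscn") -> None:
        """Dump the scene text so the Godot editor (or game) can load it."""
        Path(path).write_text(SCENE)

    if __name__ == "__main__":
        write_scene()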

My findings afterwards are;

1) Code quality was not good. Personally I have a year of experience working with Unity, and online the code examples tend to be of incredibly poor quality. My guess is that if AI is trained on the online corpus of game development forums, the output would be absolutely terrible. For the field of game development especially, AI is tainted by this poor quality. It did indeed not follow modern practices, even after I had hooked up a context MCP which provides code examples.

2) It was able to refactor the codebase to modern practices when instructed to; I told it to figure out what modern practices were and to apply them, and it started making modifications like adding type hints and such. Commonly you would use predefined rules for this with an LLM tool; I did not use any for my experiment. That would be a one-time task after which the AI will prefer your way of working. An example for Godot can be found here: https://github.com/sanjeed5/awesome-cursor-rules-mdc/blob/ma...

3) It was very difficult for Claude Code to debug. The platform seems to require working with a dedicated editor, and the flow for debugging is either through that editor or by launching the game and interacting with it. This flow is not suitable at the moment for out-of-the-box Claude Code or similar tools, which need to be able to independently verify that certain functions or features work as expected.

> Do you work on web applications? I've found that GPT tools get pretty stupid once you stop working with HTML/JS.

Not really - I work on developer experience and internal developer platforms. That is 80~90% Python, Go, Bash, Terraform and maybe a 10~20% Typescript with React depending on the project.


Think larger codebases that do not involve node/npm...


The reason I ask is that I work on developer experience, and I often see feedback online from developers that AI simply doesn't work for their flow. So I'm looking for real-world, concrete examples of what AI development tools are struggling with. Or maybe it is how you use the tool? Personally I haven't run into limitations, so I'm really looking for hard counterexamples. The Godot example by the other poster was great; maybe you could provide another?


Can't help because I haven't tried those bots that edit the code for you.

I just use "AI" instead of Google/SO when I need to find something out.

So far it mostly answers correctly, until the truthful answer comes close to "you can't do that". Then it goes off the rails and makes up shit. As a bonus, it seems to confuse related but less popular topics and mixes them up. Specific example: it mixes up couchdb and couchbase when I ask about features.

The worst part is 'correctly' means 'it will work but it will be tutorial level crap'. Sometimes that's okay, sometimes it isn't.

So it's not that it doesn't work for my flow, it's that I can't trust it without verifying everything so what flow?

Edit: there's a codebase that I would love to try an "AI" on... if I didn't have to send my customer's code to $random_server with $random_security with $untrustable_promises_of_privacy. Considering how these "AI"s have been trained, I'm sure any promise that my code stays private is worth less than used toilet paper.

Gut feeling is the "AI" would be useless because it's a not-invented-here codebase with no discussion on StackOverflow.


This may not be the strong argument you think it is. There are plenty of highly productive senior developers who either don't use IDEs or SO or use them very minimally. Even version control, if they're working alone. Smart devs will find out how to be virtually as productive in a terminal as they would be with an IDE. Potentially more productive when solving edge-case issues with processes that IDEs abstract away.


IDEs can be iffy, but any project bigger than a 20 line throwaway script needs/deserves version control.


If reading your code requires navigating complexity that human cognition wasn't designed for then something has gone terribly wrong.


Really, scales human cognition wasn't designed for?

Human cognition wasn't designed to make rockets or AIs, but we went to the moon and the LLMs are here. Thinking and working and building communities and philosophies and trust and math and computation and educational institutions and laws and even Sci Fi shows is how we do it.


> we went to the moon

We also killed quite a few astronauts.


RIP Grissom, White and Chaffee.

But the loss of their lives also proves a point: that achievement isn't a function of intelligence but of many more factors like people willing to risk and to give their lives to make something important happen in the world. Loss itself drives innovation and resolve. For evidence, look to Gene Kranz: https://wdhb.com/wp-content/uploads/2021/05/Kranz-Dictum.pdf


In that case, the decision to launch was taken not by the astronauts risking their lives, but by NASA management, against the recommendation of Morton Thiokol's engineers. This was not simply an unfortunate "accident", but an entirely preventable gamble.

https://en.wikipedia.org/wiki/Rogers_Commission_Report#Flawe...

> Loss itself drives innovation and resolve

True, but did NASA in 1986 really need to learn this lesson?

This isn't (just) rocket science, it's the fundamentals of risk liability, legality and process that should be well established in a (quasi-military) agency such as this.


Yeah I think they did need to learn it.

They knew they were taking some gambles to try to catch up in the Space Race. The urgency that justified those gambles was the Cold War.

People have a social tendency to become complacent about catastrophic risks when there hasn't been a catastrophe recently. There's a natural pressure to "stay chill" when the people around you have decided to do so. Speaking out about risk is scary unless there's a culture of people encouraging other to speak out and taking the risks seriously because they all remember how bad things can be if they don't.

Someone actually has to stand up and say "if something is wrong I really actually want to and need to know." And the people hearing that message have to believe it, because usually it is said in a way that it is not believed.


Two different incidents. The parent was talking about Apollo 1, not Challenger.


Ah, yes, You are correct.


Maybe somehow this will be true in the future, but I am finding that as soon as you work on a novel, undocumented, or non-internet-available problem, it is just a hallucinating junior dev.


The dirty secret is most of the time we are NOT working on anything novel at all. It is pretty much a CRUD application and it is pretty much a master detail flow.


Even for completely uninteresting CRUD work, you're better off with better deterministic tooling (scaffolding for templates, macros for code, better abstractions generally). Unfortunately, past a certain low level, we're stuck rolling our own for these things. I'm not sure why, but I am guessing it has to do with them not being profitable to produce and sell.


I work on novel technologies.


Yeah me too, better off buying something if it already exists


I think this is a somewhat short-sighted perspective. It's not really augmenting cognition, but replacing it.

I see people rapidly starting to unlearn working by themselves and becoming dependent on GPT, making themselves quite useless in the process. They no longer understand what they're working with and need the help from the tool to work. They're also entirely helpless when whatever 'AI' tool they use can't fix their problem.

This makes them both more replaceable and less marketable than before.

It will have and already has a huge impact. But it's kinda like the offshoring hype from a decade ago. Everyone moved their dev departments to a cheaper country, only to later realize that maybe cheap does not always mean better or even good. And it comes with a short term gain and a long term loss.


Author and Kilo Code team member here - this is a much better explanation of what I mean. And honestly, it's a quick phrase I've been using that is shorthand really for THIS much better-written take.


A surprisingly large number of musicians do use things like written music (a medieval invention, but came into its own in the renaissance) or amplifiers (a modern one).


Sure but that's different than saying "if you don't use those things you are worthless and will fail"


Ah, I personally read it as "Use of good tools makes you more competitive",

but you're right that "I firmly believe that AI will not replace developers, but a developer using AI will replace a developer who does not." could have multiple other readings too.


Author and Kilo Code team member here - yes your interpretation is what I meant...but to be fair as you mention it could be read multiple ways.

Let's be fair - I made it intentionally a little provocative :)


Haha I'm glad to hear it, because I do the same of course.

What I might not have mentioned is that I've spent the last 5 years and 20,000 or so hours building an IDE from scratch. Not a fork of VSCode, mind you, but the real deal: a new "kernel" and integration layer with abilities that VSCode and its forks can't even dream of. It's a proper race and I'm about to drop the hammer on you.


This is true for 99% of developers who aren't particularly talented or driven, e.g. the average engineer who treats their job as a job and not their passion.


Oh no, it's not hype. The problem is that the sentence is incomplete.

a developer using AI in a low-cost region will replace any developer in a high-cost region ;)


Lol, I think making confident claims in either direction is total copium at this point. It's abundantly obvious that LLM-based tools are useful, it's just a question of what we'll settle on using them for and to what degree.

Nobody knows how this will play out yet. Reality does not care about your feelings, unfortunately.


Just completely ignoring LLMs is like not wanting to use autocomplete or language servers, preferring to hand-craft everything.

But on the other hand, there is the other end who think AGI is coming in a few months and that LLMs are omniscient knowledge machines.

There is a sweet spot in the middle.


There's certainly room for debate, but I've taken to just ignoring the people who proclaim that AI is merely a passing fad or a useless parlor trick. I can't predict much, but I can confidently say I won't be the only one ignoring them.


For me, the sweet spot seems to be "keep an eye on LLM developments, but don't waste work time trying to use them yet".


Yeah, that makes sense to me for a lot of reasons. Recently decided to take the plunge and start introducing AI tools in my startup. Fairly cheap experiment, all things considered - three months of giving it a try as a team and then we'll make a call. Time will tell!


I'd suggest using LLMs as "Autocomplete on steroids" and see how that goes.

If online models aren't your thing twinny.dev + ollama will make it fully local.
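
If you want to poke at the local route before wiring up an editor plugin, a few lines against ollama's HTTP API are enough to get a feel for what a local model gives you (rough sketch; assumes ollama is running on its default port and that you've already pulled a code model - swap the model name for whatever you've downloaded):

    # Rough sketch: ask a local ollama model to continue a code snippet.
    # Assumes `ollama serve` is running on the default port 11434 and a code
    # model (e.g. codellama) has been pulled.
    import json
    import urllib.request

    OLLAMA_URL = "http://localhost:11434/api/generate"

    def complete(prefix: str, model: str = "codellama") -> str:
        payload = json.dumps({
            "model": model,
            "prompt": "Continue this code, reply with code only:\n" + prefix,
            "stream": False,
        }).encode()
        req = urllib.request.Request(
            OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
        )
        with urllib.request.urlopen(req) as resp:
            return json.loads(resp.read())["response"]

    if __name__ == "__main__":
        print(complete("def parse_csv_line(line: str) -> list[str]:\n"))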


[flagged]


I'm sure that is how many photographers felt about digital cameras and photo editing software when they first came out. Now very few professional photographers work without digital cameras and photo editing software (even if it is just to tweak some colors). Yes, you will still have some artists making money with purely analog cameras without any touch-up, but they are a tiny minority of the photographers getting paid for their work (slightly more people use analog cameras as a hobby).


It's probably true in our industry too - I'm sure that when people started using programming languages instead of writing machine code people looked down on them too.

This will be another abstraction layer that MANY people will use and be able to accomplish things that would have been impossible to do in a reasonable amount of time in machine code.


Good thing that those abstraction layers break all the time, e.g. the JavaScript ecosystem.


one thing about GPT users is they LOVE typing "npm install"


> I'm sure that is how many photographers felt about digital cameras and photo editing software when it first came.

And for a few decades at least it was true. The technology was shite compared to film photography for a long time. The same will probably be true for AI, as full developer replacement will require AGI.


Totally. I do think we're a long way from fully replacing developers. I do think at this point we are at a place where you can enhance a developer's work with an AI that can write code for that developer based on the developer's instructions. But I don't think we're at a place where we can fully replace developers for any complicated application.


It's amazing how much better shot-on-film media from the 60s and 70s has aged than the early digital video that became popular in the late 80s and 90s.


Yeah. We are never getting a remastered edition of Deep Space Nine, because unlike TNG, it was shot on digital video and we just don't have it in over 480p.


I'm sure that's how many laundry machine users felt when the tide pod first came out.

Now very few laundry doers measure out their detergent by hand.


During the tide pods craze I saw that they were less than 20% of the market. Just a heads up. Most use liquid detergent.

Original poster's autotune example was off too - 99% of commercial records use pitch correction. The other 1% are the ones they write stories about "nO AuToTuNe" cuz the reality is it's on everything. I mix records. Easy to find autotune errors in live performances on youtube. It's not that they can't sing; the audience has been trained for perfection (they believe).


I think it is great to take pride in ones craft, but in the end it comes down to this: We create software for users.

If the user can't feel any difference in quality between human-made software and AI-made software, then it does not matter. It is that easy.

If AI makes better software, at lower prices, human developers will become obsolete. It is the natural way of life.

Once we had telephone operators, now we don't. Once you had to be a good tradesman with good knowledge in how to use a mallet/hammer, axe, chisel, etc. to build a house - now you don't. You don't get awarded for being old-fashioned.


I make better software because I don't think like this.


That's wonderful, I hope someone will continue to pay you for it, and not go for the 80% solution that is 1000x cheaper. If you're a deeply specialized IC, you're probably good. If you write the perfect, elegant, hand-set code to do CRUD or implement run-of-the-mill business logic, probably not.


Your code is slop. Your coworkers can tell that ChatGPT wrote your PR.


That's alright Jenny, we all use agents here for development. But don't you fret, you and your handcrafted artisanal code will always have a place in the world, just as painters do.


Do you or a firm you represent have significant investments in AI platforms?


You think you make better software because you don’t think like this.


There's still work for artisanal woodworkers, doing everything by hand with the utmost care[0]

But that doesn't change the fact that the VAST majority of people are just fine with mass-produced furniture

I think this is the difference that's going to happen to software.

There's the one doing everything in bare vim with zero assist, just rawdogging function names and library methods from rote memory.

And then there's the rest who use all the tools at their disposal to solve problems. Is the work super clean, efficient and fully hand-crafted? Nope. Does it solve the problem? Yes it does, and fast.

[0] https://www.youtube.com/@ISHITANIFURNITURE


This is an interesting analogy as I have found that unless you are regularly in contact with hand made furniture, you have little feel for what is different about mass produced furniture and why anything else is desirable.

There may very well be a texture to “hand crafted” software that will be totally lost on users. But I kind of doubt the difference will be anything like the difference in furniture.


IMO it's the same in furniture.

There will always be people who know the difference and care about the difference. And are prepared to pay the premium for handcrafted furniture.

But the vast majority of people either won't care or don't know the difference - mostly because they've never experienced it. They just take it for granted that their IKEA cardboard stuff won't make it through a move, and buy a new one.

And then there's the one person with the almost century old desk that was handmade from hardwood and is practically indestructible and fully repairable if something cracks.


Typical opinion of someone who has not yet tried developing with AI.

When you finally give it a go you will feel stupid for having this opinion.

I did.


Woodworkers who use powertools are beneath artisans with only hammers and chisels, too, right?

Current gen of LLMs are tools. Use them or not. Judging others based on whether they use tools at all vs how they use them is… naive.


I've seen a very clever dev use it. I can't know if you are even cleverer, but I'd be cautious.

In any case there are better and stronger arguments against LLMs than this.


Such a strong reaction sounds much more driven by emotion than reason


do you hand compile your code?



