
For those less confident:

U.S. (and German) automakers were absolutely sure that the Japanese would never be able to touch them. Then Koreans. Now Chinese. Now there are tariffs and more coming to save jobs.

Betting against AI (or increasing automation, really) is a bet not against robots, but against human ingenuity. Humans are the ones making progress, and we can work with toothpicks as levers. LLM's are our current building blocks, and people are doing crazy things with them.

I've got almost 30 years experience but I'm a bit rusty in e.g. web. But I've used LLM's to build maybe 10 apps that I had no business building, from one-off kids games to learn math, to building a soccer team generator that uses Google's OR tools to optimise across various axes, to spinning up four different test apps with Replit's agent to try multiple approaches to a task I'm working on. All the while skilling up in React and friends.
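
For a flavour of what that kind of side-quest looks like, the core of a team generator built on Google's OR-Tools might be sketched like this (a minimal hypothetical example, not the commenter's actual app; the player names and the single skill rating are made up):

    # Minimal sketch (hypothetical): balance two teams by a single skill rating
    # using Google's OR-Tools CP-SAT solver.
    from ortools.sat.python import cp_model

    players = {"Ana": 7, "Ben": 5, "Cem": 8, "Dee": 4, "Eli": 6, "Fay": 9}
    names, skills = list(players), list(players.values())
    total = sum(skills)

    model = cp_model.CpModel()
    x = [model.NewBoolVar(n) for n in names]      # x[i] == 1 -> player i on team A
    model.Add(sum(x) == len(names) // 2)          # equal team sizes

    diff = model.NewIntVar(-total, total, "diff") # team A skill minus team B skill
    model.Add(diff == 2 * sum(s * v for s, v in zip(skills, x)) - total)
    gap = model.NewIntVar(0, total, "gap")
    model.AddAbsEquality(gap, diff)
    model.Minimize(gap)                           # make the teams as even as possible

    solver = cp_model.CpSolver()
    if solver.Solve(model) in (cp_model.OPTIMAL, cp_model.FEASIBLE):
        print("Team A:", [n for n, v in zip(names, x) if solver.Value(v)],
              "| skill gap:", solver.Value(gap))

A real app would presumably optimise across several axes at once (position, availability, past pairings) by summing weighted penalty terms, but the shape is the same.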

I don't really have time for those side-quests but LLM's make them possible. Easy, even. The amount of time and energy I'd need pre-LLM's to do this makes this a million miles away from "a waste of time".

And even if LLM's get no better, we're good at finding the parts that work well and using that as leverage. I'm using it to build and check datasets, because it's really good at extraction. I can throw a human in the loop, but in a startup setting this is 80/20 and that's enough. When I need enterprise level code, I brainstorm 10 approaches with it and then take the reins. How is this not valuable?




In other words, you have built exactly zero commercial-grade applications of the kind that we working programmers build every day.

LLMs are good for playing with stuff, yes, and that has been implied by your parent commenter as well I think. But when you have to scale the work, then the code has to be easy to read, easy to extend, easy to test, have extensive test coverage, have proper dependency injection / be easy to mock the 3rd party dependencies, be easy to configure so it can be deployed in every cloud provider (i.e. by using env vars and config files for modifying its behavior)... and even more important traits.
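
To make just one of those traits concrete, "configurable via env vars so it can run on any cloud provider" usually boils down to something like the sketch below (a hypothetical example with made-up names, not code from any project discussed here):

    # Hypothetical 12-factor-style config: behaviour comes from the environment,
    # with safe defaults, so the same build runs on any cloud provider.
    import os
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Settings:
        database_url: str
        request_timeout_s: float
        feature_flags: frozenset

    def load_settings(env=os.environ) -> Settings:
        return Settings(
            database_url=env.get("DATABASE_URL", "sqlite:///dev.db"),
            request_timeout_s=float(env.get("REQUEST_TIMEOUT_S", "5.0")),
            feature_flags=frozenset(env.get("FEATURE_FLAGS", "").split(",")) - {""},
        )

    # Tests can inject a plain dict instead of touching the real environment:
    # load_settings({"DATABASE_URL": "postgres://test"})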

LLMs don't write code like that. Many people have tried with many prompts. It's mostly good for just generating stuff once and maybe doing little modifications while convincingly arguing the code has no bugs (when it does).

You seem to confuse one-off projects that have little to no need for maintenance with actual commercial programming, perhaps?

Your analogy with the automakers seems puzzlingly irrelevant to the discussion at hand, and very far from transferable to it. Also I am 100% convinced nobody is going to protect the programmers; business people trying to replace one guy like myself with 3 Indians has been a reality for 10 years at this point, and amusingly they keep failing and never learning their lesson. /off-topic

Like your parent commenter, if LLMs get on my level, I'd be _happy_ to retire. I don't have a super vested interest in commercial programming; in fact I only got as good at it as I am in the last several years because I started hating it and wanted to get all my tasks done with minimal time expended. So I am quite confident in my assessment that LLMs are barely at the level of a diligent but not very good junior dev at the moment, and have been for at least the last 6 months.

Your take is rather romantic.


We're gonna get mini-milled.

People with the attitude that real programmers are producing the high level product are going to get eaten slowly, from below, in the most embarrassing way possible.

Embarrassing because they'll actually be right. LLMs aren't making the high quality products, it's true.

But the low quality CRUD websites (think restaurant menus) will get swallowed by LLMs. You no longer need a guy to code up a model, run a migration, and throw it on AWS. You also don't need a guy to make the art.

Advances in LLMs will feed on the money made from sweeping up all these crap jobs, which will legitimately vanish. That guy who can barely glue together a couple of pages, he is indeed finished.

But the money will make better LLMs. They will swallow up the field from below.


>But the low quality CRUD websites (think restaurant menus) will get swallowed by LLMs. You no longer need a guy to code up a model, run a migration, and throw it on AWS. You also don't need a guy to make the art.

Just like you don't need webmasters, if you remember that term. If you are just writing CRUD apps, then yeah - you're screwed.

If you're a junior, or want to get into the field? same, you're totally screwed.

LLMs are great at soft tasks, or producing code that has been written thousands of times - boilerplate, CRUD stuff, straightforward scripts - but the usual problems aren't limited by typing speed or the amount of boilerplate; they're limited by thinking and evaluating solutions and trade-offs from a business perspective.

Also I'll be brutally honest - by the time the LLMs catch up to my generation's capabilities, we'll be already retired, and that's where the real crisis will start.

No juniors, no capable people, most seniors and principal engineers are retired - and quite probably LLMs won't be able to fully replace them.


> But the low quality CRUD websites (think restaurant menus) will get swallowed by LLMs. You no longer need a guy to code up a model, run a migration, and throw it on AWS. You also don't need a guy to make the art.

To be fair those were eaten long ago by Wordpress and site builders.


And even Facebook. I think the last time bespoke restaurant menu websites were a viable business was around 2010.


That doesn't match my area at all. Most restaurants here have menu websites, or an app, or both. Some have a basic website with a phone number and a link to an app. There seems to be some kind of app template many restaurants use which you can order from too.


> I think the last time bespoke restaurant menu websites

> There seems to be some kind of app template many restaurants use which you can order from too.

I think you agree with the comment you are replying to, but glossed over the word bespoke. IME as a customer in a HCOL area a lot of restaurants use a white-label platform for their digital menu. They don't have the need or expertise to maintain a functional bespoke solution, and the white-label platform lends familiarity and trust to both payment and navigation.


Have you personally tried? I have a business doing exactly that and have three restaurants paying between $300-400/month for a restaurant website -- and we don't even integrate directly with their POS / menu providers (Heartland).


I don't think they will vanish at all.

A modern CRUD website (any software can be reduced to CRUD, for that matter) is not trivially implemented and is far beyond what current LLMs can output. I think they will hit a wall before they are ever able to do that. Also, configuration and infrastructure management is a large part of such a project and far out of scope as well.

People have built some useful tools for LLMs to enable them to do anything besides outputting text and images. But it is still quite laborious to really empower them to do much.

LLMs can indeed empower technical people. For example, those working in automation can generate little Python or JavaScript scripts to push bits around, provided the endpoints have well-known interfaces. That is indeed helpful, but the code still always needs manual review.

Work for amateur web developers will likely change, but they certainly won't be out of work anytime soon. Although the most important factor is that most websites aren't really amateur land anymore, LLM or not.


Arguably you never really needed a SWE for those use cases, SWEs are for bespoke systems with specific constraints or esoteric platforms.

"That guy who can barely glue together a couple of pages" was never going to provide much value as a developer, the lunches you describe were already eaten: LLMs are just the latest branding for selling solutions to those "crap jobs".


>> LLMs aren't making the high quality products,

And who cares exactly? The only real criteria is whether the product is good _enough_ to bring in profit. And rightly so if you ask me.

Also, far too many products built by human _experts_ are low quality. So, in a way, we're more than ready for LLM-quality products.


Who checks to see if the LLM swallowed it or not?


Yeah, and it's always in the future, right? ;)

I don't disagree btw. Stuff that is very easy to automate will absolutely be swallowed by LLMs or any other tool that's the iteration #13718 of people trying to automate boring stuff, or stuff they don't want to pay full programmer wages for. That much I am very much convinced of as well.

But there are many other, shall we call them rather nebulous, claims that I take issue with. Like "programming is soon going to be dead". I mean OK, they might believe it, but arguing it on HN is just funny.


> LLMs are good for playing with stuff, yes, and that has been implied by your parent commenter as well I think. But when you have to scale the work, then the code has to be easy to read, easy to extend, easy to test, have extensive test coverage, have proper dependency injection / be easy to mock the 3rd party dependencies, be easy to configure so it can be deployed in every cloud provider (i.e. by using env vars and config files for modifying its behavior)... and even more important traits.

I'm totally conflicted on the future of computing careers considering LLM impact, but I've worked at a few places and on more than a few projects where few/none of these are met, and I know I'm far from the only one.

I'd wager a large portion of jobs are like this. The majority of roles aren't working on some well-groomed Google project.

Most places aren't writing exquisite examples of a canonically "well-authored" codebase; they're gluing together enterprise CRUD slop to transform data and put it in some other database.

LLMs are often quite good at that. It's impossible to ignore that reality.


If there is one job I don't want to do, it's being responsible for a big heap of enterprise CRUD slop generated and glued together by LLMs. If they can write it, they should learn to maintain it too!


Insightful.

Whadja say, sama and backers?

IOW: maintain your own dog food, etc.


LLM’s are good at things a lot of people do, because a lot of people do them, and there’s tons of examples.

It’s the very definition of a self-fulfilling prophecy.


Yes. LLMs are great at generating Python code but not so great at generating APL code.


Good to the power of one by infinity.

Fixed that for you.


Absolutely. I agree with your take. It's just that I prefer to work in places where products are iterated on and not made once, tweaked twice, and then thrown out. There LLMs are for the moment not very interesting because you have to correct like 50% of the code they generate, ultimately wasting more time and brain energy than writing the thing yourself in the first place.


And wasting more money too, of highly paid devs.

A wake-up call for the bean counters.


I wish. But many of them only get as far as "this tool will replace your expensive programmer". Thinking stops there for them.


Let's replace them.


Very defensive.

I love it anyhow. Sure, it generates shit code, but if you ask it, it’ll gladly tell you all the ways it can be improved. And then actually do so.

It’s not perfect. I spent a few hours yesterday pulling its massive blobby component apart by hand. But on the plus side, I didn’t have to write the whole thing. Just do a bunch of copy-paste operations.

I kinda like having a junior dev to do all the typing for me, and to give me feedback when I can’t think of a good way to proceed.


> I spent a few hours yesterday pulling its massive blobby component apart by hand. But on the plus side, I didn’t have to write the whole thing.

The question, really, is: are you confident that this was better than actually writing the whole thing yourself? Not only in terms of how long it took this one time, but also in terms of how much you learned while doing it.

You accumulate experience when you write code, which is an investment. It makes you better for later. Now if the LLM makes you slightly faster in the short term, but prevents you from acquiring experience, then I would say that it's not a good deal, is it?


Those are the right questions indeed, thank you for being one of the commenters who looks past the niche need of "I want to generate this and move along".

I tried LLMs several times, even started using my phone's timers, and found out that just writing the code I need by hand is quicker and easier on my brain. Proof-reading and looking for correctness in something already written is more brain-intensive.


Asking questions and thinking critically about the answers is how I learn. The information is better structured with LLMs.

Everyone isn't "generating and moving on". There's still a review and iteration part of the process. That's where the learning happens.


If an LLM can give you feedback on a way to proceed it sounds more like you might be the junior? :P

LLMs seem to be OK-ish at solving trivial boilerplate stuff. 20 attempts deep, I have not yet seen one even remotely solve anything I have been stuck on hard enough to have to sit down and think.


Curious: what kind of problem domain do you work on? I use LLMs every day on pretty hard problems and they are always net positive. But I imagine they’re not well trained on material related to your work if you don’t find them useful.


What kind of problem domain do you work on?


> If an LLM can give you feedback on a way to proceed it sounds more like you might be the junior? :P

More like it has no grey matter to get in the way of thinking of alternatives. It doesn’t get fixated, or at least, not on the same things as humans. It also doesn’t get tired, which is great when doing morning or late night work and you need a reality check.

Deciding which option is the best one is still a human thing, though I find that those often align too.


I would not mind having a junior type out the code, or so I thought for a while. But in the case of those of us who do deep work it simply turned out that proof-reading and correcting the generated code is more difficult than writing it out in the first place.

What in my comment did you find defensive, btw? I am curious how it looks from the outside to people who are not exactly of my mind. Not making promises I'll change, but still curious.


"In other words, you have built exactly zero commercial-grade applications that us the working programmers work on building every day."

The majority of programmers getting paid a good salary are implementing other people's ideas. You're getting paid to be some PM's ChatGPT


> You're getting paid to be some PM's ChatGPT

yes but one that actually works


You’re completely missing the point. The point being made isn’t about “are we implementing someone else’s idea”. It’s about the complexity and trade-offs and tough calls we have to make in a production system.


I never said we are working on new ideas only -- that's impossible.

I even gave several examples of the traits that commercial code must have, and said that LLMs fail to generate such code. Not sure why you ignored that.


> LLMs fail to generate such code

As another oldster, yes, yes absolutely.

But the deeper question is: can this change? They can't do the job now, will they be able to in the future?

My understanding of what LLM's do/how they work suggests a strong "no". I'm not confident about my confidence, however.


I am sure it can change. Ultimately programming as we know it will be completely lost; we are going to hand-wave and command computers inside us or inside the walls of our houses. This absolutely will happen.

But my issue is with people that claim we are either at that point, or very nearly it.

...No, we are not. We are not even 10% there. I am certain LLMs will be a crucial part of such a general AI one day but it's quite funny how people mistake the brain's frontal lobe for the entire brain. :)


There will be plenty of jobs that consist of coming by to read the logs across 6 systems when the LLM applications break and can't fix themselves.


Yep, quite correct.

I am even fixing one small app currently that the business owner generated with ChatGPT. So this entire discussion here is doubly amusing for me.


Just like we were fixing last year the PowerApps-generated apps (or any other low/no code app) the citizen developers slapped together.


The question is how those jobs will pay. That seems like something that might not be able to demand a living wage.


If everyone prompts code, fewer people will actually know what they're doing?


That's the part many LLM proponents don't get or hand-wave away. For now LLMs can produce okay one-off / throwaway code by having been fed with StackOverflow and Reddit. What happens 5 years down the road when half the programmers are actually prompters?

I'll stick to my "old-school" programming. I seem to have a very wealthy near-retirement period in front of me. I'll make gobs of money just by not having forgotten how to do programming.


If it can't demand a living wage then senior programmers will not be doing it, leaving this software defective. What _that_ will lead to, we have no idea, because we don't know what kind of software it will be.


Really? Excellent debugging is something I associate with higher-paid engineers.


> LLMs don't write code like that

As someone who has been doing this since the mid 80's in all kinds of enterprise environments, I am finding that the latest generation are getting rather good at code like that, on par with mid-senior level in that way. They are also very capable of discussing architecture approaches with an encyclopaedic knowledge, although humans contribute meaningfully by drawing connections and analogies, and are needed to lead the conversation and make decisions.

What LLM's are still weak at is holding a very large context for an extended period (which is why you can see failures in the areas you mentioned if not properly handled e.g. explicitly discussed, often as separate passes). Humans are better at compressing that information and retaining it over a period. LLM's are also more eager and less defensive coders. That means they need to be kept on a tight leash and drip fed single changes which get backed out each time they fail - so very bright junior in that way. For example, I'm sometimes finding that they are too eager to refactor as they go and spit out env vars to make it more production like, when the task in hand is to get basic and simple first pass working code for later refinement.

I'm highly bullish on their capabilities as a force multiplier, but highly bearish on them becoming self-driving (for anything complex at least).


> I'm highly bullish on their capabilities as a force multiplier, but highly bearish on them becoming self-driving (for anything complex at least).

Very well summed up, and this is my exact stance; it's just that I am not seeing much of the "force multiplier" thing just yet. Happy to be proven wrong, but the last time I checked (August 2024) I got almost nothing out of it. Might be related to the fact that I don't do throwaway code, and I need to iterate on it.


Recently used Cursor/Claude Sonnet to port ~30k lines of EOL LiveScript/Hyperscript to TypeScript/JSX in less than 2 weeks. That would have taken at least several months otherwise. Definitely a force multiplier for this kind of repetitive work.


Shame that you probably can't open-source this; that would have been a hugely impressive case study.

And yeah, out of all the LLMs, it seems that Claude is the best when it comes to programming.


I don't know how you are using them. It sped up my development around 8-10x; things I wouldn't have done earlier I'm doing now, and I let the AI do them: writing boilerplate etc. Just great!


8-10x?! That's quite weird if you ask me.

I just used Claude recently. Helped me with an obscure library and with the hell that is JWT + OAuth through the big vendors. Definitely saved me a few hours and I am grateful, but those cases are rare.


I developed things which I would have never started without AI, because I could see upfront that there would be a lot of mundane/exhausting tasks.


Amusingly, lately I did something similar, though I still believe my manual code is better in Elixir. :)

I'm open to LLMs being a productivity enabler. Recently I started occasionally using them for that as well -- sparingly. I more prefer to use them for taking shortcuts when I work with libraries whose docs are lacking.

...But I did get annoyed at the "programming as a profession is soon dead!" people. I do agree with most other takes on LLMs, however.


Sonnet 3.5 v2 was released in October. Most people just use this. First and only one that can do passable front end as well.


Thank you, I'll check it out.


That sounds like they need a Unix-style pipe approach: create small modules that do one thing well, each handing its result to the next module without caring where it goes or where it came from, while the mother LLM oversees only the black-box connection pipes, with the complex end result as a collateral divine conception.

now there was someone that I could call king


"But when you have to scale the work, then the code has to be easy to read, easy to extend, easy to test, have extensive test coverage, have proper dependency injection / be easy to mock the 3rd party dependencies, be easy to configure so it can be deployed in every cloud provider (i.e. by using env vars and config files for modifying its behavior).."

Interestingly, I do not find that the stuff you mentioned is what LLMs are bad at. It can generate easy to read code. It can document its code extensively. It can write tests. It can use dependency injection, especially if you ask it to. What I noticed is that where I am currently much better than an LLM is that I can still hold a very nuanced, very complicated problem space in my head and create a solution based on that. The LLM currently cannot solve a problem which is so nuanced and complicated that even I cannot fully describe it and have to work partially from instinct. It is the 'engineering taste' or 'engineering instincts' and our ability to absorb complicated, nuanced design problems in our head that separate experienced developers from LLMs.
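
As an aside, the dependency-injection point is the easiest one to show: the pattern below is roughly what "inject the 3rd-party client so it can be mocked" means (hypothetical names, not code from this discussion):

    # Hypothetical constructor injection: the payment client is passed in,
    # so tests can hand in a fake instead of hitting the real 3rd-party API.
    class PaymentGateway:
        def charge(self, cents: int) -> bool:
            raise NotImplementedError

    class CheckoutService:
        def __init__(self, gateway: PaymentGateway):
            self.gateway = gateway          # injected, never constructed here

        def checkout(self, cents: int) -> str:
            return "paid" if self.gateway.charge(cents) else "declined"

    class FakeGateway(PaymentGateway):
        def charge(self, cents: int) -> bool:
            return cents < 10_000           # deterministic stand-in for tests

    assert CheckoutService(FakeGateway()).checkout(500) == "paid"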

Unless LLMs get significantly better and just replace humans in every task, I predict that LLMs' effect on the industry will be that far fewer developers will be needed, but those who are needed will be relatively 'well paid', as it will be harder to become a professional software developer (more experience and engineering talent will be needed to work professionally).


If you say so then OK, I am not going to claim you are imagining it. But some proof would be nice.

I have quickly given up on experimenting with LLMs because they turned out to be net negative for my work -- proof-reading code is slower than writing it.

But if you have good examples then I'll check them out. I am open to the possibility that LLMs are getting better at iterating on code. I'd welcome it. It's just that I haven't seen it yet and I am not going to go out of my way to re-vet several LLMs for the... 4th time it would be, I think.


> But when you have to scale the work, then the code has to be easy to read, easy to extend, easy to test, have extensive test coverage, have proper dependency injection / be easy to mock the 3rd party dependencies, be easy to configure so it can be deployed in every cloud provider

the code doesn't have to be anything like that, it only has to do one thing and one thing only: ship


But in a world of subscription based services, it has to ship more than once. And as soon as that's a requirement, all of the above applies, and the LLM model breaks down.


If and only if it's a one-off. I already addressed this in my comment that you are replying to, and you and several others happily ignored it. Really no idea why.


i'm sorry, both you and /u/NorthTheRock have a dev first mindset. Or a google-work-of-art codebase mindset. Or a bespoke little dev boutique mindset. Something like that. For the vast, vast majority of software devs it doesn't work that way. The way it works is: a PM says: we need this out ASAP, then we need this feature, this bug, hurry up, close your tickets, no, a clean codebase and unit tests are not needed, just get this out, the client is complaining.

And so it goes. I'm happy you guys work in places where you can take your time to design beautiful work of arts, I really am. Again, that's not the experience for everyone else, who are toiling in the fields out there, chased by large packs of rabid tickets.


I am aware that the way I work is not the only one, of course, but I am also not so sure about your "vast, vast majority" thing.

You can call it dev-first mindset, I call it a sustainable work mindset. I want the people I work for to be happy with my output. Not everything is velocity. I worked in high-velocity companies and ultimately got tired and left. It's not for me.

And it's not about code being beautiful or other artsy snobby stuff like that. It's about it being maintainable really.

And no, I am not given much time either; I simply refuse to work in places where I am not given any time, is all. I am a foot soldier in the trenches just like you, I just don't participate in the suicidal charges. :)


Thank you for saying this; I commented something similar.

The HN commentariat seems to be composed of FAANGers and related aspirants: a small part of the overall software market.

The "quality" companies doing high-skilled, bespoke work are a distinct minority.

A huge portion of work for programmers IS Java CRUD, React abominations, some C# thing that runs disgusting queries from an old SQL Server, etc etc.

Those privileged enough to work exclusively at those companies have no idea what the largest portion of the market looks like.

LLMs ARE a "threat" to this kind of work, for all the obvious reasons.


Are they more of a threat than all the no-code tools, spreadsheets, BI tools, and Salesforce?

We've watched common baselines be abstracted away and tons of value be opened up to creation from non engineers by reducing the complexity and risk of development and maintenance.

I think this is awesome and it hasn't seemed to eliminate any engineering jobs or roles - lots of crud stuff or easy to think about stuff or non critical stuff is now created that wasn't before and that seems to create much more general understanding of and need for software, not reduce it.

Regardless of tools available I think of software engineering as "thinking systematically" and being able to model, understand, and extend complex ideas in robust ways. This seems improved and empowered by ai coding options, not undermined, so far.

If we reach a level of ai that can take partially thought out goals and reason out the underlying "how it should work"/"what that implies", create that, and be able to extend that (or replace it wholesale without mistakes), then yeah, people who can ONLY code won't have much earning potential (but what job still will in that scenario?).

It seems like current ai might help generate code more quickly (fantastic). Much better ai might help me stay at a higher level when I'm implementing these systems (really fantastic if it's at least quite good), and much much better ai might let me just run the entire business myself and free me up to "debug" at the highest level, which is deciding what business/product needs to exist in the first place and figuring out how to get paid for it.

I'm a pretty severe ai doubter but you can see from my writing above that I think if it DOES manage to be good I think that would be good for us actually, not bad.


I don't know what to believe, long or short term; I'm just speculating based on what I perceive to be a majority of software job opportunities and hypothetical immediate impacts.

My basic conclusion is "they seem quite good (enough) at what appears to be a large portion of 'Dev' jobs" and I can't ignore the possibility of this having a material impact on opportunities.

At this point I'm agnostic on the future of GenAI impacts in any given area. I just no longer have solid ground upon which to have an opinion.


> business people trying to replace one guy like myself with 3 Indians has been a reality for 10 years at this point, and amusingly they keep failing and never learning their lesson. /off-topic

What is the "lesson" that business people fail to learn? That Indians are worse developers than "someone like yourself"?

(I don't mean to bump on this, but it is pretty offensive as currently written.)


1. The Indians were given as an example of absolutely terrible outsourcing agencies' dev employees. Call it racist or offensive if you like, to me it's statistical observation and I will offer no excuses that my brain is working properly and is seeing patterns. I have also met amazingly good Indian devs for what it's worth but most have been, yes, terrible. There's a link between very low-quality outsourcing agencies and Indian devs. I did not produce or fabricate this reality, it's just there.

2. The lesson business people fail to learn is that there's a minimum payment for programmers to get your stuff done, below which the quality drops sharply and that they should not attempt their cost-saving "strategy" because it ends up costing them much more than just paying me to do it. And I am _not_ commanding SV wages btw; $100k a year is something I only saw twice in my long career. So it's double funny how these "businessmen" are trying to be shrewd and pay even less, only to end up paying 5x my wage to a team that specializes in salvaging nearly-failed projects. I'll always find it amusing.


Not OP, but not necessarily. The general reason is not that Indians are worse developers per se; in my opinion it's more about the business structure. The "replacement Indian" is usually not a coworker at the same company, but an employee at an outsourcing company.

The outsourcing company's goal is not to ship a good product, but to make the most money from you. So while the "Indian developer" might theoretically be able to deliver a better product than you, they won't be incentivized to do so.

In practice, there are also many other factors that play into this - which might arguably play a bigger role - like communication barriers, Indian career paths (i.e. dev is only a stepping stone on the way to manager), employee churn at cheap & terrible outsourcing companies, etc.


The analogy is that Japanese cars were initially utterly uncompetitive, and the arrogance of those who benefited from the status quo meant they were unable to adapt when change came. Humans improved those cars, humans are currently improving models and their use. Cheap toys move up the food chain. Find the direction of change and align yourself with it.

> people trying to replace one guy like myself with 3 Indians has been a reality for 10 years at this point, and amusingly they keep failing and never learning their lesson.

Good good, so you're the software equivalent of a 1990's era German auto engineer who currently has no equal, and maybe your career can finish quietly without any impact whatsoever. There are a lot of people on HN who are not you, and could use some real world advice on what to do as the world changes around them.

If you've read "crossing the chasm", in at least that view of the world there are different phases to innovation, with different roles. Innovators, early adopters, early majority, late majority, laggards. Each has a different motivation and requirement for robustness. The laggards won't be caught dead using your new thing until IBM gives it to them running on an AS400. Your job might be equivalent to that, where your skills are rare enough that you'll be fine for a while. However, we're past the "innovators" phase at this point and a lot of startups are tearing business models apart right now, and they are moving along that innovation curve. They may not get to you, but everyone is not you.

The choices for a developer include: "I'm safe", "I'm going to get so good that I'll be safe", "I'm going to leverage AI and be better", and "I'm out". Their decisions should not be based on your unique perspective, but on the changing landscape and how likely it is to affect them.

Good sense-check on where things are in the startup universe: https://youtu.be/z0wt2pe_LZM

I can't find it now, but there's at least one company that is doing enterprise-scale refactoring with LLM's, AST's, rules etc. Obviously it won't handle all edge cases, but...that's not your grandad's Cursor.

Might be this one, but don't recognise the name: https://www.linkedin.com/pulse/multi-repo-ai-assisted-refact...


You are assuming I am arrogant because I don't view the current LLMs as good coders. That's a faulty assumption so your argument starts with a logical mistake.

Also I never said that I "have no equal". I am saying that the death of my career has been predicted for no less than 10 years now and it still has not happened, and I see no signs of it happening; LLMs produce terrible code very often.

This gives me the right to be skeptical from where I am standing. And a bit snarky about it, too.

I asked for a measurable proof, not for your annoyed accusations that I am arrogant.

You are not making an argument. You are doing an ad hominem attack that weakens any other argument you may be making. Still, let's see some of them.

---

RE: choices, my choice was made a long time ago and it's this one: "I will become quite good so as to be mostly safe. If 'AI' displaces me then I'll be happy to work something else until my retirement". Nothing more, nothing less.

RE: "startup universe", that's a very skewed perspective. 99.99999% of all USA startups mean absolutely nothing in the grand scheme of things out there, they are but a tiny bubble in one country in a big planet. Trends change, sometimes drastically and quickly so. What a bunch of guys in comfy positions think about their bubble bears zero relevance to what's actually going on.

> I can't find it now, but there's at least one company that is doing enterprise-scale refactoring with LLM's, AST's, rules etc.

If you find it, let me know. That I would view as an interesting proof and a worthy discussion to have on it after.


"You seem to confuse"

"Your analogy with the automakers seems puzzlingly irrelevant"

"Your take is rather romantic."

That's pretty charged language focused on the person not the argument, so if you're surprised why I'm annoyed, start there.

Meta has one: https://arxiv.org/abs/2410.08806

Another, edited in above: https://www.linkedin.com/pulse/multi-repo-ai-assisted-refact...

Another: https://codescene.com/product/ai-coding

However, I still don't recognise the names. The one I saw had no pricing but had worked with some big names.

Edit: WAIT found the one I was thinking of: https://www.youtube.com/watch?v=Ve-akpov78Q - company is https://about.grit.io

In an enterprise setting, I'm the one hitting the brakes on using LLM's. There are huge risks to attaching them to e.g. customer-facing outputs. In a startup setting, full speed ahead. Match the opinion to the needs, keep two opposing thoughts in mind, etc.


Thanks for the links, I'll check them out.


Too bad the Grit.io guy doesn’t use his considerable resources to learn how to enunciate clearly.

Or build an AI agent to transform his speech into something more understandable.


> If you find it, let me know. That I would view as an interesting proof and a worthy discussion to have on it after.

https://docs.aws.amazon.com/amazonq/latest/qdeveloper-ug/tra...

To be honest, I have no idea how well it works, but you can’t get much bigger than AWS in this regard.


Thanks. I'll absolutely check this out.


I think you're talking about the difference between coding and writing systems, which means you're talking teams, not individuals.

Rarely, systems can be initiated by individuals, but the vast, vast majority are built and maintained by teams.

Those teams get smaller with LLMs, and it might even lead to a kind of stasis, where there are no new leads with deep experience, so we maintain the old systems.

That's actually fine by big iron, selling data, compute, resilience, and now intelligence. It's a way to avoid new entrants with cheaper programming models (cough J2EE).

So, if you're really serious about dragging in "commercial-grade", it's only fair to incorporate the development and business context.


I have not seen any proof so far that LLMs can enable teams. I and one other former colleague had to fix subtly broken LLM code several times, leading to the "author" being reprimanded that he's wasting three people's time.

Obviously anecdata, sure, it's just that LLMs for now seem mostly suited for throwaway code / one-off projects. If there's systemic proof for the contrary I'd be happy to check it out.


> LLMs are barely at the level of a diligent but not very good junior dev at the moment, and have been for at least the last 6 months.

> Your take is rather romantic.

I’m not sure you’re aware what you’ve written here. The contrast physically hurts.


Apparently I'm not. Elaborate?


Current generation LLMs have been in development for approximately as long as it takes for a college to ingest a high schooler and pump out a non-horrible junior software developer. The pace of progress has slowed down, but if we get a further 50% improvement in 200% of the time, it's still you who is being the romantic, not the OP.


I honestly don't follow your angle at all. Mind telling me your complete takeaway? You are kind of throwing just bits and pieces at me and I followed too many sub-threads to keep full context.

I don't see where I was romantic either.


Not the OP, but I imagine it has to do with the "LLMs will improve over time" trope. You said "and have been for the last at least 6 months" and it's confusing what you meant or what you expected should have happened in the past 6 months.


I am simply qualifying my claims with the fact that they might not be super up to date is all.

And my comments here mostly stem from annoyance that people claim that we already have this super game-changing AI that will remove programmers. And I still say: no, we don't have it. It works for some things. _Some_. Maybe 10% of the whole thing, if we are being generous.


> LLMs don't write code like that. Many people have tried with many prompts. It's mostly good for just generating stuff once and maybe doing little modifications while convincingly arguing the code has no bugs (when it does).

You have to make them write code like that. I TDD by telling the LLM what I want a test for, verify it is red, then ask it to implement. I ask it to write another test for an edge case that isn’t covered, and it will.

Not for 100% of the code, but for production code for sure. And it certainly speeds me up. Especially in dreadful refactoring where I just need to flip from one data structure to another, where my IDE can’t help much.
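
The red-first step is literally just a failing test up front; something like this hypothetical snippet (slugify and slug.py are made-up names, not from this thread), written or prompted before any implementation exists:

    # Hypothetical red-first test: slug.py does not exist yet, so the first run
    # fails, and the LLM is then asked to write the implementation that passes it.
    from slug import slugify   # ImportError until the implementation is written

    def test_lowercases_and_joins_words_with_hyphens():
        assert slugify("  Hello   World ") == "hello-world"

    def test_drops_characters_that_are_not_alphanumeric():
        assert slugify("Rock & Roll!") == "rock-roll"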


The main takeaway is that LLMs can't code to a professional level yet, but with improvement they probably will. And it doesn't even have to be LLMs; the coding part of our job will eventually be automated to a much larger degree than it is.


Anything might happen in the future. My issue is with people claiming we're already in this future, and their only proof is "I generated this one-off NextJS application with LLMs".

Cool for you but a lot of us actually iterate on our work.


Or not. The point is that we don't know.


Did you think computers and ARPANET came out for commercial-grade applications and not "good for playing with stuff" ?


Of course they came from playing. But did you have people claiming we have a general AI while basic programming building blocks were still on the operating table?


Then what was the dot-com bubble?


There are some very narrow-minded, snarky people here.

“doesn’t pass my sniff test”

okay Einstein, you do you


Surely, you actually mean those who extrapolate from a few niche tasks to "OMG the programmers are toast"?

Fixed it for you. ;)


Sorry I don’t mean to be snarky :) I think there is a happy middle ground between

“AI can’t do anything, it sucks!”

and

“AI is AGI and can do everything and your career is done for”

I teeter along the spectrum, and use with caution while learning new topics without expertise.

But I’ve been very surprised by LLMs in some areas (UI design - something I struggle with - I’ve had awesome results!)

My most impressive use case for an LLM so far (Claude 3.5 Sonnet) has been to iterate on a pseudo-3D ASCII renderer in the terminal using C and ncurses, where with the help of an LLM I was able to prototype this, and render an ascii “forest” of 32 highly detailed ascii trees (based off a single ascii tree template), with lighting and 3 axis tree placement, where you can move the “camera” towards and away from the forest.

As you move closer trees scale up, and move towards you, and overlapping trees don’t “blend” into one ascii object - we came up with a clever boundary highlighting solution.

Probably my favourite thing I’ve ever coded, will post to HN soon


I absolutely agree that it's a spectrum. I don't deny the usefulness of some LLMs (and I have used Claude 3.5 lately with great success -- it helped me iterate on some very annoying code with a very niche library that I would have needed days to properly decipher myself). I got annoyed by the grandiose claims though, so I likely got a bit worked up.

And indeed:

> “AI is AGI and can do everything and your career is done for”

...this is the thing I want to stop seeing people even imply it's happening. An LLM helped you? Cool, it helped me as well. Stop claiming that programming is done for. It's VERY far from that.


This is quite an absurd, racist and naive take.

"Commercial grade applications" doesn't mean much in an industry where ~50% of projects fail. It's been said before that the average organisation cannot solve a solved problem. On top of this there's a lot of bizarre claims about what software _should_ be. All the dependency injection and TDD and scrum and all the kings horses don't mean anything when we are nothing more than a prompt away from a working reference implementation.

Anyone designing a project to be "deployed in every cloud provider" has wasted a huge amount of time and money, and has likely never run ops for such an application. Even then, knowing the trivia and platform specific quirks required for each platform are exactly where LLMs shine.

Your comments about "business people" trying to replace you with multiple Indian people shows your level of personal and professional development, and you're exactly the kind of personality that should be replaced by an LLM subscription.


And yet, zero proof again.

Getting so worked up doesn't seem objective so it's difficult to take your comment seriously.


As a senior-ish programmer who struggled a lot with algorithmic thinking in college, it's really awe-inspiring.

Truly hit the nail on the head there. We HAD no business with these side-quests, but now? They're all ripe for the taking, really.


One hundred per cent this.

LLM pair programming is unbelievably fun, satisfying, and productive. Why type out the code when you can instead watch it being typed while thinking of and typing out/speaking the next thing you want.

For those who enjoy typing, you could try to get a job dictating letters for lawyers, but something tells me that’s on the way out too.


My experience is it's been unbelievably fun until I actually try to run what it writes and/or add some non-trivial functionality. At that point it becomes unbelievably un-fun and frustrating as the LLM insists on producing code that doesn't give the outputs it says it does.


I've never found "LLM pair-programming" to be fun. I enjoy engaging my brain and coding on its own. Copilot and its suggestions become distracting after a point. I'm sure there are several use cases, but for me it's a tool that sometimes gets in the way (I usually turn off suggestions).


What do you prefer to use for LLM pair programming?


Claude 70%. ChatGPT o1 for anything that needs more iterations, Cursor for local autocomplete, tested Gemini for concepts and it seemed solid. Replit when I want it to generate everything, from setting up a DB etc for any quick projects. But it’s a bit annoying and drives into a ditch a lot.

I honestly have to keep a tight rein on them all, so I usually ask for concepts first with no code, and need to iterate or start again a few times to get what I need. Get clear instructions, then generate. Drag in context, tight reins on changes I want. Make minor changes rather than wholesale.

Tricks I use. “Do you have any questions?” And “tell me what you want to do first.” Trigger it into the right latent space first, get the right neurons firing. Also “how else could I do this”. It’ll sometimes choose bad algo’s so you need to know your DSA, and it loves to overcomplicate. Tight reins :)

Claude’s personality is great. Just wants to help.

All work best on common languages and libraries. Edge cases or new versions get them confused. But you can paste in a new api and it’ll code against that perfectly.

I also use the API’s a lot, from cheap to pricy depending on task. Lots of data extraction, classifying. I got a (pricier model) data cleaner working on other data generated by a cheaper model, asking it to check eg 20 rows in each batch for consistency. Did a great job.


Copilot and aider-chat


Claude and pasting code back and forth for simple things. But I’d like to try Claude with a few MCP servers so it can directly modify a repo.

But lately Cursor. It’s just so effortless.


Yeah I had my fair share of pride around typing super fast back in college, but the algorithms were super annoying to think through.

Nowadays I get wayyy more of a kick typing the most efficient Lego prompts in Claude.


"Other people were wrong about something else so that invalidates your argument"

Why are half the replies like this?


Because what is shared is overconfidence in the face of a system that has humble beginnings but many people trying to improve it. People have a natural bias against massive changes, and assume the status quo is fixed.

I’m open to all possibilities. There might be a near term blocker to improvement. There might be an impending architectural change that achieves AGI. Strong opinions for one or the other with no extremely robust proof are a mistake.


The cognitive dissonance of seemingly educated people defending the LLMs when it comes to writing code is my top mystery for the entirety of 2024.

Call me if you find a good reason. I still have not.


In my opinion, people who harp on about how LLMs have been a game changer for them are revealing that they have never actually built anything sophisticated enough for a team of engineers to work on and extend for years.


This back and forth is so tiring.

I have built web services used by many Fortune 100 companies, built and maintained and operated them for many years.

But I'm not doing that anymore. Now I'm working on my own, building lots of prototypes and proofs of concept. For that I've found LLMs to be extremely helpful and time-saving. Who the hell cares if it's not maintainable for years? I'll likely be throwing it out anyway. The point is not to build a maintainable system, it's to see if the system is worth maintaining at all.

Are there software engineers who will not find LLMs helpful? Absolutely. Are there software engineers who will find LLMs extremely helpful? Absolutely.

Both can exist at the same time.


I agree with you and I don't think OP disagrees either. The point of contention is the inevitable and immediate death of programming as a profession.


Surely nobody has that binary a view?

What are the likely impacts over the next 1, 5, 10, 20 years. People getting into development now have the most incredible technology to help them skill up, but also more risk than we had in decades past. There's a continuum of impact and it's not 0 or 100%, and it's not immediate.

What I consider inevitable: humans will keep trying to automate anything that looks repeatable. As long as there is a good chance of financial gain from adding automation, we'll try it. Coding is now potentially at risk of increasing automation, with wildcards on "how much" and "what will the impact be". I'm extremely happy to have nuanced discussions, but I balk at both extremes of "LLMs can scale to hard AGI, give up now" and "we're safe forever". We need shorthand for our assumptions and beliefs so we can discuss differences on the finer points without fixating on obviously incorrect straw men. (The latter aimed at the general tone of these discussions, not your comment.)


And I'll keep telling you that I never said I'm safe forever.


And I never said both can't exist at the same time. Are you certain you are not the one fighting straw men and are tiring yourself with the imagined extreme dichotomy?

My issue is with people claiming LLMs are undoubtedly going to remove programming as a profession. LLMs work fine for one-off code -- when they don't make mistakes even there, that is. They don't work for a lot of other areas, like code you have to iterate on multiple times because the outer world and the business requirements keep changing.

Works for you? Good! Use it, get more productive, you'll only get applause from me. But my work does not involve one-off code, and for me LLMs are not impressive because I had to rewrite their code (and eye-ball it for bugs) multiple times.

"Right tool for the job" and all.


"Game changer" maybe.

But what you'll probably find is that people that are skilled communicators are currently getting a decent productivity boost from LLMs, and I suspect that the difference between many that are bullish vs bearish is quite likely coming down to ability to structure and communicate thoughts effectively.

Personally, I've found AI to be a large productivity boost - especially once I've put certain patterns and structure into code. It's then like hitting the N2O button on the keyboard.

Sure, there are people making toy apps using LLMs that are going to quickly become an unmaintainable mess, but don't be too quick to assume that LLMs aren't already making an impact within production systems. I know from experience that they are.


> I suspect that the difference between many that are bullish vs bearish is quite likely coming down to ability to structure and communicate thoughts effectively.

Strange perspective. I found LLMs lacking in less popular programming languages, for example. It's mostly down to statistics.

I agree that being able to communicate well with an LLM gives you more results. It's a productivity enabler of sorts. It is not a game changer however.

> don't be too quick to assume that LLMs aren't already making an impact within production systems. I know from experience that they are.

OK, I am open to proof. But people are just saying it and leaving the claims hanging.


Yep, as cynical and demeaning that must sound to them, I am arriving at the same conclusion.


My last project made millions for the bank I was working at within the first 2 years and is now a case study at one of our extremely large vendors who you have definitely heard of. I conceptualised it, designed it, wrote the most important code. My boss said my contribution would last decades. You persist with making statements about people in the discussion, when you know nothing about their context aside from one opinion on one issue. Focus on the argument not the people.


You're the one who claimed that I'm arrogant and pulled that out of thin air. Take your own advice.

I also have no idea what your comment here had to do with LLMs, will you elaborate?


[flagged]


Indeed, your accusing me of being arrogant is schoolyard stuff. You started it, you got called out, and are now acting superior. I owe you no grace if you started off the way you did in the other sub-thread.

Go away, I don't want you in my replies. Let me discuss with people who actually address the argument and not looking to paint me as... something.


All the programming for the Apollo Program took less than a year, and Microsoft Teams has been in development for decades, so obviously they are better than NASA programmers.


The programming for the NASA program was very simple; the logistics of the mission, which had nothing to do with programming, were what was complex.

You're essentially saying "the programming to do basic arithmetic and physics took only a year" as if that's remotely impressive compared to the complexity of something like Microsoft Teams. Simultaneous editing of a document by itself is more complicated than anything the Apollo program had to do.


I want to not like this comment, but I think you are right! There's a reason people like to say your watch has more compute power than the computers it took to put man on the moon.


But that’s the thing, right? It is not “seemingly”; there are A LOT of highly educated people here telling you LLMs are doing amazing shit for them - perhaps a wise response would be “lemme go back and see whether it can also become part of my own toolbox…”

I have spent 24 years coding without LLMs, cannot fathom now spending more than a day without it…


I have tried them a few times but they were only good at generating a few snippets.

If you have scrutable and interesting examples, I am willing to look them up and try to apply them to my work.


Which needs does it fulfil that you weren't able to fulfil yourself? (Advance prediction: these are needs that would be fulfilled better by improvements to conventional tooling.)


The short answer is they're trying to sell it.


Agreed, but I can't imagine that everybody here on HN that's defending LLMs so... misguidedly and obviously ignoring observable reality... is financially interested in LLMs' success, can they?


I’m financially interested in Anthropic being successful since it means their prices are more likely to go down, or their models to get (even) better.

Honestly, if you don’t think it works for you, that’s fine with me. I just feel the dismissive attitude is weird since it’s so incredibly useful to me.


Given they're a VC backed company rushing to claim market share and apparently selling well below cost, it's not clear that that will be the case. Compare the ride share companies for one case where prices went up once they were successful.


Why is it weird, exactly? I don't write throwaway projects so the mostly one-off nature of LLM code generation is not useful to me. And I'm not the only one either.

If you can give examples of incredible usefulness then that can advance the discussion. Otherwise it's just us trying to out-shout each other, and I'm not interested in that.


It's a message board frequented by extremely tech-involved people. I'd expect the vast majority of people here to have some financial interest in LLMs - big-tech equity, direct employment, AI startup, AI influencer, or whatever.


Yeah, very likely. It's the new gold rush, and they are commanding wages that make me drool (and also make me want to howl in pain and agony and envy but hey, let's not mention the obvious, shall we?).

I always forget the name of that law but... it's hard to make somebody understand something if their salary depends on them not understanding it.


For similar reasons, I can confidently say that your disliking of LLMs is sour grapes.


I could take you seriously if you didn't make elementary English mistakes. Also think what you like, makes zero difference to me.


Conflict of interest perhaps


Yep. Still makes me wonder if they are not seeing the truth but just refusing to admit it.


It’s a line by Upton Sinclair: “It is difficult to get a man to understand something, when his salary depends on his not understanding it.”


Orrrrr, simpler than imagining a vast conspiracy: your observations just don't match theirs. If you're writing, say, C# with some esoteric libraries using Copilot, it's easy to see it as glorified auto-complete that hallucinates to the point of being unusable, because there's not enough training data. If you're using Claude with Aider to write a webpage using NextJS, you'll see it as a revolution in programming because of how much of that is on Stack Overflow. The other side of it is just how much the code needs to work vs. how good it has to look. If you're used to engineering the most beautifully organized code before shipping once, vs. shipping some gross hacks you're ashamed of, where the absolute quality of the code is secondary to it working and passing tests, then the generated code having an extra variable or crap that doesn't get used isn't as big an indictment of LLM-assisted programming as you believe it to be.

Why do you think your observable reality is the only one, and the correct one at that? Looking at your mindset, as well as the objections to the contrary (and their belief that they're correct), the truth is likely somewhere in-between the two extremes.


Where did I imply conspiracy? People have been known to turn a blind eye to criticism towards stuff they like ever since... forever.

The funny thing about the rest of your comment is that I'm in full agreement with you but somehow you decided that I'm an extremist. I'm not. I'm simply tired of people who make zero effort to prove their hypothesis and just call me arrogant or old / stuck in my ways, again with zero demonstration how LLMs "revolutionize" programming _exactly_.

And you have made more effort in that direction than most people I discussed this with, by the way. Thanks for that.


You said you couldn't imagine a conspiracy and I was responding to that. As far as zero demonstration, simonw has a bunch of detailed examples at https://simonwillison.net/tags/ai-assisted-programming/ or maybe https://simonwillison.net/tags/claude-artifacts/, but the proof is in the pudding, as they say, so setting aside some time and $20 to install Aider and get it working with Claude, and then building a web app, is the best way to experience either the second coming or an overhyped letdown (or somewhere in the middle).

Still, I don't think it's a hypothesis that most are operating under, but a lived experience that either it works for them or it does not. Just the other day I used ChatGPT to write me a program to split a file into chunks along a delimiter. Something I could absolutely do, in at least a half-dozen languages, but writing that program myself would have distracted me from the actual task at hand, so I had the LLM do it. It's a trivial script, but the point is I didn't have to break my concentration on the other task to get that done. Again, I absolutely could have done it myself, but that would have been a total distraction. https://chatgpt.com/share/67655615-cc44-8009-88c3-5a241d083a...
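
For reference, the kind of throwaway script being described is roughly this (a hypothetical reconstruction; the actual delimiter and filenames are in the linked chat, not here):

    # Hypothetical version of the throwaway task: split a file into numbered
    # chunks wherever a line equals the delimiter.
    import sys

    def split_file(path, delimiter="---", prefix="chunk"):
        chunk, index = [], 0
        with open(path) as src:
            for line in src:
                if line.strip() == delimiter:
                    write_chunk(prefix, index, chunk)
                    chunk, index = [], index + 1
                else:
                    chunk.append(line)
        write_chunk(prefix, index, chunk)       # flush whatever is left

    def write_chunk(prefix, index, lines):
        if lines:
            with open(f"{prefix}_{index:03d}.txt", "w") as out:
                out.writelines(lines)

    if __name__ == "__main__":
        split_file(sys.argv[1])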

On a side project I'm working on, I said "now add a button to reset the camera view" (literally; Aider can take voice input). We're not quite at the scene from Star Trek where Scotty talks into the Mac to try and make transparent aluminum, but we're not that far off! The LLM went and added the button, wired it into a function call that called into the rendering engine and reset the view. Again, I very much could have done that myself, but it would have taken me longer just to flip through the files involved and type out the same thing. And it's not just the time saved (I didn't have a stopwatch and a screen recorder); it's not having to drop my thinking into that frame of reference, so I can think more deeply about the other problems to be solved. Sort of why a CEO isn't an IC and why ICs aren't supposed to manage.

Detractors will point out that it must be a toy program, and that it won't work on a million line code base, and maybe it won't, but there's just so much that LLMs can do, as they exist now, that it's not hyperbole to say it's redefined programming, for those specific use cases. But where that use case is "build a web app", I don't know about you, but I use a lot of web apps these days.


These are the kind of the informed takes that I love. Thank you.

> Detractors will point out that it must be a toy program, and that it won't work on a million line code base, and maybe it won't

You might call me a detractor. I think of myself as being informed and feeling the need to point out where do LLMs end and programmers begin because apparently even on an educated forum like HN people shill and exaggerate all the time. That was the reason for most of my comments in this thread.


And it’s the short wrong answer. I get absolutely zero benefit if you or anyone else uses it. This is not for you or I, it’s for young people who don’t know what to do. I’m a person who uses a few LLM’s, and I do so because I see the risks and want to participate rather than have change imposed on me.


OK, do that. Seems like a case of FOMO to me but it might very well turn out that you're right and I'm wrong.

However, that's absolutely not clear today.


Genuine question: how hard have you tried to find a good reason?


Obviously not very hard. And looking at the blatant and unproven claims on HN gave me the view that the proponents are not interested in giving proof; they simply want to silence anyone who disagrees LLMs are useful for programming.


Because they don’t have an actual argument.


This is no different from creating a to-do app with an LLM and proclaiming all developers are up for replacement. Demos are not what makes LLMs good, let alone useful.


quantum computers still can’t factor any number larger than 21


> I've got almost 30 years experience but I'm a bit rusty in e.g. web. But I've used LLM's to build maybe 10 apps that I had no business building, from one-off kids games to learn math,

Yeah, I built a bunch of apps when the RoR blog demo came out like 2 decades ago. So what?



