Ask HN: Do you already see the impact of LLMs on the job prospects for devs?
38 points by thesumofall 3 months ago | 73 comments
Looking at our own experience and the experience of practitioners at other companies, it seems like AI-assisted programming is able to deliver a realistic 10-20% overall productivity boost for developers (including the fact that developers don’t code all day). That’s significant, but do we already see an impact on hiring beyond the regular ups and downs of the economy? Personally, I don’t see it yet, as the productivity boost seems to be eaten up by unrealistic project plans and expanding scope. But what’s your experience?



My own experience is that it _might_ speed up juniors (and even that, I'm not convinced about). But juniors make up a tiny part of the overall throughput; I am easily 10x faster at doing anything than they are. So a 10-20% productivity boost for juniors is pretty much negligible for the company.

For myself, every time I try to use ChatGPT it completely fails to be helpful, for the same reason that Stack Overflow also fails: the problems I have as a senior are too specific or too complex. Every single time, querying an LLM ends up being a waste of time.


The boost comes from easy problems. Write a CLI command without looking at the docs, convert from this JSON format to that one, look at this method and do the same in that method. This can also be easily verified. Mid-sized tasks get done almost instantly.
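The JSON-reshaping case is a good illustration: trivial to describe and trivial to verify by eye. A hypothetical sketch of the kind of conversion I mean (the field names are made up):

    import json

    # Hypothetical flat export from one system...
    source = '[{"first_name": "Ada", "last_name": "Lovelace", "age": 36}]'

    # ...reshaped into the nested form the other system expects.
    def convert(records):
        return [
            {
                "name": {"first": r["first_name"], "last": r["last_name"]},
                "age": r["age"],
            }
            for r in records
        ]

    print(json.dumps(convert(json.loads(source)), indent=2))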


Also when you wear multiple hats. Recently I had to set up an OpenResty instance, including writing some fairly complex Lua scripts, both of which I had never touched before, and it was much faster with the help of an LLM.


This. I think LLMs have been most helpful to me when I'm doing something that's not really in my mainstream domain. Yesterday I wanted to know the structure that SAP stores invoices in; GPT-4 had a great detailed answer and even generated sample data for me instantly. It would have taken me hours of research to do it myself.

There's still a lot of closed off knowledge that's hard to access even though it should be open. ChatGPT is great for that.


Same here. Not a very common language; deep application knowledge and knowledge of some real-world domains are required - LLMs draw a blank.


LLMs help bootstrap ideas where developers lack skill. By its very nature, that generated, not-fully-understood code is the worst thing you can put in production. The fact that it will be put into production by the metric ton is a guarantee that any initial gains in speed will be offset by countless hours of desperate debugging. And for that, LLMs are useless, in my experience.


This. The adoption of LLMs for code generation, even with this statement well understood, feels inevitable, and there's going to be some amazing consultancy work fixing the garbage that gets put near production.


Have you used Claude Opus or Sonnet 3.5 by any chance?


You sound crazy.


As a non-native English speaker, I find LLMs amazing when it comes to writing documentation and design docs. In these situations, by either proofreading or helping me convey the message in a succinct way, they are a huge help. Doing those tasks faster frees up some time for programming. When it comes to actual coding, I use them to write scaffolding or perhaps the trivial test cases, but nothing more complicated.

Does it mean that I can do 10-20% more things? No, I likely do the same number of tasks, but I feel the quality has improved because I can save brain cycles for the things LLMs are not good enough at.


As an experienced native English-speaking writer, I find LLMs can save me some time writing "boring" things that need to be in a doc, things I could write myself but that would take me a while. Things like intros that are mostly boilerplate, or definitions of terms. It does save some time on certain types of documents, but I need to know the area well enough to be comfortable it's not making mistakes.


> realistic 10-20% overall productivity boost

With productivity being measured as what, exactly?

The Venn diagram of "discussions about LLMs" and "Idea Guys Talking" is close to a circle in my experience.


I find myself using it for quick math-type problems; for example just now I wanted to wrangle some bits so I asked it to make the masks for me.
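For a concrete flavour of the bit wrangling I mean, here is a made-up sketch (not the actual masks I needed):

    # Hypothetical layout: a 12-bit value and a 4-bit flags field packed into 16 bits.
    VALUE_MASK = (1 << 12) - 1          # 0x0FFF, low 12 bits
    FLAGS_MASK = ((1 << 4) - 1) << 12   # 0xF000, high 4 bits

    def pack(value: int, flags: int) -> int:
        return (value & VALUE_MASK) | ((flags << 12) & FLAGS_MASK)

    def unpack(word: int) -> tuple[int, int]:
        return word & VALUE_MASK, (word & FLAGS_MASK) >> 12

    # Quick sanity check, i.e. the verification you have to do anyway.
    assert unpack(pack(0x2AB, 0x5)) == (0x2AB, 0x5)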

But even then we're talking 1% if we're being really generous.


> for example just now I wanted to wrangle some bits so I asked it to make the masks for me.

Why do you think it will get that right? You have to do the work anyway to verify those are correct; that is exactly the kind of thing you shouldn't use an LLM for.


> With productivity being measured as what, exactly?

Most likely relative productivity, i.e. time to develop things with the support of AI vs. time to develop the same things without it.


Which will potentially miss things like accruing technical debt and security issues.


Which happens with and without AI, depending on the effort and state of mind of the junior dev.


I've seen juniors solve issues in an hour or two that could have taken them a day or two without ChatGPT.

The code isn't always the most desirable, but code review catches that.

On the other hand, this puts a larger burden on seniors, who now have to review subpar code at a faster pace.


And said juniors then likely end up as subpar seniors and pretty soon more and more subpar code slips through.

I don’t hire juniors though, and I would not hire seniors that open PRs with code that isn’t ready for review — i.e. something ChatGPT wrote that isn’t “desirable”.


Sounds like what happens anyway with the new layers of abstractions that get added by frameworks, etc.


To a degree yes, but spending a lot of time massaging the output of an LLM over training your own mental faculties for problem solving causes an entirely different kind of skill to develop. It’s not good.


I've seen juniors run around in circles for hours because they didn't catch a mistake made by an LLM.

They never count this as reduced productivity though because of the halo effect surrounding LLMs right now.


I’ve seen people drop a bit of code on StackOverflow going “ChatGPT made this for me, how do I make it work?” and it has invented entire functionality out of thin air.


> AI-assisted programming is able to deliver a realistic 10-20% overall productivity boost for developers (including the fact that developers don’t code all day).

A dubious claim, at best.


IMO, it is making the job easier, but I don't think more is getting done or that productivity is higher. I also don't think the deliverable is improved.

Long term, I imagine this just leads to a larger percentage of the day going to slacking off.


Anecdotally I agree with the 10-20% increase. Using Cursor.sh, so much less time wasted on menial tasks or reading documentation.


Yup, the number should be higher


Not a pro dev, but I've been in and around IT for many, many years.

My issue with LLMs is not that they "will replace" your expensive staff. The problem/challenge, IMHO, is not how to produce 1,000 lines of code. Yes, I can do that myself with an LLM even now.

The real challenge is that businesses need people who understand the business as well. You've most likely all had arguments in your workplaces with people who make a code suggestion that 'seems right' at the time, but in your core you feel it will just become technical debt, or will create some security issue in the future.

So cutting down from (e.g.) 10 experienced devs to 2 to save money will come at the cost of the business coming back to you asking for more and different things as the direction/strategy changes.


From my experience, more junior devs don't have the skills to use LLMs properly. On top of that, most people in general aren't up to date on the latest updates and will try to use free ChatGPT and form an opinion based on that. It will take some LLM education for them to be more productive, and some serious "you shouldn't use it to generate anything you couldn't write yourself" talks. But I can see the potential in the future.

I've kept on top of what's happening in this space and found ways to improve my work, but I doubt the improvement crosses 10%. This place will have a skewed sample of people who can use LLMs efficiently; outside, in the real world, I don't think the impact is noticeable at all yet.


Not a dev, but as a DevOps engineer, I use LLMs for a few types of work:

1. A starting point for a problem I've no experience with - "How do I set up replication on a database?". I won't follow it blindly, but it gives me a starting point to search online.

2. Helping me put together proposals and documentation. It's great at setting up an outline for things or rewriting my badly written things.

3. Writing regex.
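For the regex case, a hypothetical example of the sort of pattern I'd ask for and then test myself before using (the log format here is invented for illustration):

    import re

    # Hypothetical task: pull host and port out of lines like
    # "db-replica-02.internal:5432 - lag 120ms"
    line = "db-replica-02.internal:5432 - lag 120ms"
    match = re.match(r"^(?P<host>[\w.-]+):(?P<port>\d+)\b", line)
    if match:
        print(match.group("host"), match.group("port"))  # db-replica-02.internal 5432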

As for impacting jobs specifically, I haven't found any impact yet. If anything, I've seen companies either put blanket bans on using AI (for fear of people inputting sensitive data), outright block the URLs on the VPN, or put very strict policies in place for how it can be used.


We have (probably unwisely) put off hiring a junior because I'm so productive with an AI assistant. I have built a complex desktop app in less than 8 months by myself. I have deep domain knowledge and have only recently brought on a systems analyst to help with QA testing. The PM has no idea about the requirements, at all. My boss (technical architect) has barely had to lift a finger.

This was my first app in WPF. Huge learning curve; Copilot has been indispensable.


No, and I am not expecting there to be much if any change, at least not until the technology drastically improves.

AI can still help a ton with predictable and routine code, but it sucks at hard code. That gets even more true the more niche the application is. I would expect it is making an impact in applications that are CRUD, but anything with some amount of depth is going to require the same amount of human effort to power it.


Most developers are writing routine code to implement CRUD apps.


A lot of the high-end folks started out this way. It will be interesting to see what this does to the pipeline in a decade.


> the productivity boost seems to be eaten up by unrealistic project plans and expanding scope

This is what happens with every improvement in software; it doesn't matter if it's better hardware, improved tooling, more libraries, or simply more programmers. The increased expectations always create more, not less, demand for programming. The only thing a change in tooling does is change which skills are in demand and which features are demanded.

The juniors are probably the ones most helped by these tools. I find the benefits a mixed bag. Even as a better autocomplete, it (GitHub Copilot) frequently makes the most trivial grammatical errors, such as unmatched parentheses, which a 'dumb' autocomplete would never produce. And sometimes the code looks so good that it is easy to overlook the one insidious semantic error that is now costing you debugging time.

I won't be replaced by AI, but I might be replaced by a younger dev who is able to get more value out of these newfangled tools than me.


Not a dev per se, but what I see in the data analytics/web analytics space is that the results of an "analysis" by an AI are oftentimes significantly better than what a junior, sometimes even an intermediate, person produces. But more often than not there are subtle errors/slip-ups in there, as the AI is not reasoning at all, nor understanding.

On the other hand, those (or similar) errors can be found in the work of more junior people as well. So the amount of editorial work, as I like to call it, is the same.

The difference? The AI does this in a fraction of the time it takes a junior. So from a purely economic business standpoint, it is way cheaper to have me perform this crap (as I know how to prompt for the right things) than a junior analyst.

The downside is that juniors are losing the opportunity to learn. They don't develop the skills you only develop by fucking up as often as possible in many different ways and learning the business the hard way - because that is what enables you to spot fuck-ups later in your career more easily and quickly.


Sure, a lot of tasks won't exist anymore. I've seen many rote administrative tasks coded out of existence by very simple glue code as well, and some people may lose a job if they can't adapt.

But the same could be said for many things. I've done manual memory management at the start of my career, working on plain applications. At this point it is coming back a bit due to popularity of Rust, but you'd mostly need to have an interest in systems or game programming in order to learn how to do it properly.

The average web or application programmer really doesn't have to learn how to allocate memory and debug leaks, and may be in a worse position to recognize and trace leaks in their buggy Python or Ruby code as a result. But overall, it doesn't matter too much.


Yep, this is called Jevons paradox. When technology increases the efficiency with which a resource (be it energy, computational power, or programmer's time) is used, the demand increases too.


We expand a highway to allow more cars -- so the highway gets used more and is still heavily congested.


As far as I can tell, it will only increase developer demand. The main driver is starry-eyed investors shoveling money into AI startups. This isn't just a boost to developer employment; it might be the life-support machine preventing the bottom from falling out of the dev hiring market right now.

From what I've seen, the productivity boost is negligible and might even be negative. The developers I've seen who claim a productivity boost seem to discount all the times it leads them astray. That needs to be deducted from the gains of getting the odd snippet of code a few minutes faster. I know that most of the times I've asked it questions I couldn't trivially Google, it gave me bullshit answers.

It's ironic, really. However, it just goes to show that the mainstream corporate media is very, very good at spinning a narrative out of fiction. Even junior devs are convinced by this narrative.

The big risk for devs' job prospects is not AI, it is tech industry consolidation: that is, Microsoft, Amazon and Google growing their competitive moat and swallowing or destroying startup competition. The more secure they feel in their market position, the more likely they will be to swap out their workforce for cheaper, lower-quality workers. This is what happened to Detroit in the 1950s, and why it went from a thriving middle-class city with tens of thousands of auto-industry SMEs to a desolate wasteland run by 3 vertically integrated companies who conspired to strangle all startup competition.


Here's the thing that I'm wondering about most.

Anything that current LLMs can do code-wise is overshadowed for me by what they can do product-, design-, and marketing-wise. If LLMs actually break the productivity ceiling, why would any developer bother working for anyone other than themselves? Having a job and working for someone else just disproportionately improves their wealth while limiting yours.

If this turns out to be the case, I'd expect to see an unprecedented number of micro software shops spring up overnight. Why bother burning yourself out working for a FAANG or F1000 when you could make more and be in control of your own destiny and happiness? A rise in entrepreneurship should follow any actual increase in LLM-driven productivity across the board.


"Why bother burning yourself out working for a FAANG or F1000 when you could make more and be in control of your own destiny and happiness?"

Because working for FAANG or another company lets you do your work and get paid regularly for that rather than doing sales - something most devs suck at.


It's a good point, but I'm willing to bet there are thousands of devs who are smart and talented enough to land those positions who would absolutely jump at the chance to run their own business instead. There are many devs who can't get a job due to layoffs, and schools are still turning out graduates each semester.

The skills they're not good at are exactly where AI (assuming it can deliver, of course) will be used. Why only use it for a 10% improvement on a skill you have vs. a 100% improvement on a skill you don't have? Or why not both?

Hard economic times (layoffs) typically result in people starting their own projects/businesses. The last major downturn resulted in a mass influx of new startups and ideas. Of course most of them failed, but that's typical I suppose.


Depends on the context.

Most people in here fail to understand how broad and varied the software industry and its culture have become. This place, like every community on the internet, is a bubble.

So, if you're talking about basic CRUD in Java/Python/Node done remotely from 3rd-world countries and Eastern Europe for companies not directly in the technology or finance sectors (e.g. retail, services), then the answer is a resounding yes. People in Poland, Brazil and India are certainly using LLMs to do faster what they did before: spitting out code they don't understand, copied from StackOverflow.

True, it is bad code most of the time. But anyone who hires from 3rd-world countries is not overly concerned about code quality.


Counterpoint: $employer evaluated github copilot a couple months ago and decided to pass. Personally I liked the magic autocomplete even if it mostly saved keystrokes rather than time, but couldn't get anything useful out of the other parts.


Definitely!

1. Many companies are hiring "AI" engineers. My guess is 90% of these jobs are virtue signaling to investors and those positions will go to folks that are good at appearing competent in interviews. Yay! More overpaid incompetent colleagues, just what we need.

2. My editor saves me about 2 minutes a day with smart printf/loop/variable completions (JetBrains editors -- no sarcasm, I like this!)

3. I am wasting time responding to email from PMs suggesting that "we don't have engineering capacity to do XYZ, but maybe we can use an AI to do it???"

(I am not anti-GenAI -- I've used it to create flyers and do pretty cool stuff!)


While I don't think AI will replace anyone in the near term, I find it interesting how many people in the IT field react to any attempt to automate code development to a certain degree using AI.

The reaction is quite harsh and emotional ("If you think you can be replaced by AI, it means you are a shitty developer" is quite a popular take). This says more about our own insecurities than about LLMs.

Yes, you CAN be replaced by AI, or by any other technology shift, or by younger, more productive developers, or simply by market forces ruling your skills out of favour. It happened before, it'll happen again.


As far as I've seen, LLMs used to write code are only good for getting juniors to a PR-ready feature faster. But they slow down everything else in the process. Reviews take ages because there are random nonsense landmines scattered around, previous PR feedback is less likely to be applied to later code because they're not writing it, bug fixes take much longer because no one understands the code well, and there's just so much more code to deal with at every step, since it doesn't matter to them whether they are copy-pasting 10 lines or 1000.

I've tried using them myself, but they end up sapping more of my time than they save because of all the dead ends they send me down with plausible-sounding bullshit - things that use real terms, but incorrectly. I basically treat LLM output like that one guy who doesn't know anything except the existence of a bunch of technical terms and throws those terms around everywhere trying to sound smart. It might be nice to learn that a term exists if you're unfamiliar with the topic, but only so you can go look up what it actually means elsewhere.


In my experience, most coding work is on established code bases. There's that well thought out senior engineer phase at the start of each project, but the vast majority of future work starts with grokking the existing code base.

I don't think we'll see much of an impact of LLM-generated code until these systems are trained on the code and the existing user and dev documentation of the project itself.

As for jr. engineer impact and prospective candidates, I'd say virtually zero.


I have tried using LLMs on the legacy C++ codebase that I work on, and the only thing it could reliably do was generate code for unit tests.

When I fix bugs, it's usually not helpful because I need to debug and track down where the bug is.

When I develop new features, it occasionally uses the wrong lock, or makes up APIs that don't exist. I find it gets in the way more for development.

For C# and .NET core, I found IntelliCode to be pretty useful.


It'll be really hard to measure this, because you can't easily isolate the impact of LLMs from the rest of the economy. I'd guess almost anything will overshadow the LLM numbers (e.g. tiny changes in Fed interest rates having more impact on job posts than LLMs ever will).

I only have anecdata to share. My coworkers and friends seem to be going through the disillusionment phase, finding LLMs to be a better (if mildly outdated) search engine and a helper for simple, well-known tasks. I guess the 10% productivity improvement makes sense from what I've seen.

I've also met company owners that thought they could reduce their workforce drastically because of LLMs. I can only wish them good luck, it's going to be bumpy for them once they realize the mess they will be in (e.g. spending more time troubleshooting systems their engineers never understood in the first place).

TL;DR: No, except for places you wouldn't want to work at.


>(e.g. spending more time troubleshooting systems their engineers never understood in the first place).

I do not, for one second, believe that any company is literally cut-and-pasting code straight out of ChatGPT into their production environments without their engineers understanding it.

The number of developers on Hacker News who think that not using LLMs is "cool" or more valuable is shocking. It's no different than saying you don't use some other development tool.


> I do not, for one second, believe that any company is literally cut-and-pasting code straight out of ChatGPT into their production environments without their engineers understanding it.

I would guess that close to 10% of the codebase at my company is straight up copy pasted from ChatGPT without further thought.


> I do not, for one second, believe that any company is literally cut-and-pasting code straight out of ChatGPT into their production environments without their engineers understanding it.

It routinely happens with StackOverflow answers to the point of being a meme. LLMs are just the next iteration.


I love your idealism, but allow me to be your data point. The place where I work just fired our CTO because he was doing exactly that, to "speed up" the software team, after I complained to other execs that this was massively slowing us down because of all the time we spent trying to figure out what the nonsense code he was shotgunning into main was supposed to do. (Not the only reason, I'm told.)


At the company I currently work at, we are catching people who take a very simple take-home test, use ChatGPT on it, and then can't explain why "they" solved the problem that way.

Also people sending a bare-bones ChatGPT cover letter when submitting one is optional.


It's just a more effective Google, now that Google has become basically useless for getting results that aren't 'SEO optimised' (i.e. full of rubbish). Its value is still mostly in generating boilerplate.


Idk, we got access to GH Copilot with our project loaded into it, to test it and understand whether we should use it. Beyond unit tests and trivial code it's almost useless. Even for unit tests, I tried using the chat feature to ask it to generate tests for a specific function - at first I had to convince it to generate them, since it didn't 'want' to, and when finally convinced it spewed such garbage that I would have written everything faster myself. In other words, there's some value in autocompletion for simple use cases, but the moment the situation gets complicated - adding a feature that touches more files, doing some data processing, or writing or fixing threading-related code - it just takes more time (at least for us; my colleagues' experience is similar).

Maybe other companies did manage to squeeze real help out of it in more complex situations, or maybe GPT-4.5 is much better than Copilot (I tried to use public ChatGPT to find a threading bug and it still couldn't, until in the end I found it myself), but at least for us the experience with Copilot wasn't that stellar.


LLMs are to a programmer what a rhyming dictionary is to a writer.

A good poet says complex things using few words. A bad poet is someone who conveys something simple using a lot of words.


Specifically for Copilot. It's good for exploration type of work or getting some quick examples of things without diving into specific docs/search all the time. Using the chat is convenient and can go back to previous examples quickly. Also, I've found it's useful for filling boilerplate when writing unit tests. As for actual coding I find most of the time that the suggestions are annoying and have turned them off (have a shortcut to toggle them if I'm feeling stuck etc.). Overall it's an OK tool, but I don't find it mind blowing. It's good if you're playing around with things that you don't understand or if you ask it for different approaches to doing something. I've found some gems that way, but in my work I don't encounter those often enough to say that it's been a big boost to productivity.


Our tech lead was very enthusiastic about LLM-generated code for a minute. This lasted until the third or fourth time one of us reviewed a PR he'd written that way and requested a ground-up rewrite.

My guess is that, by this time next year, the vast majority of people and companies currently enthusiastic about generative AI will be pretending they never had anything to do with it, a small hardcore of true believers excepted. The hype cycle will then begin anew with something else.


Nope. Beyond starter projects, raw usage of ChatGPT hardly leads to any gains in productivity in an enterprise setting.

So far no, but in the future, with more specific and enterprise-suitable tooling - likely.


Only murmurs for now, admittedly, but I've heard one reason IBM is training models (e.g. https://research.ibm.com/blog/granite-code-models-open-sourc...) is that they provide LLM-based systems for enterprise customers to work on ancient legacy codebases in languages like COBOL more easily. If true, I could definitely see how that might boost productivity, as fewer people remain fully trained in the details of such old systems and languages.


That sounds a lot like their old Watson claims.


Unless IBM beats GH Copilot Enterprise, which by itself isn't that great, I strongly doubt it'll make a big impact.


I don't have any insider knowledge, but maybe IBM could get its hands on a lot more legacy code for weird arcane systems than GitHub could? It would make their models more specific than those trained on 50% Python at least.


That would be GitHub, which is owned by Microsoft, who also make Windows? They can get weird, arcane code.


Maybe. I've used Copilot Enterprise at our company on a big C++ project, and to say I was unimpressed is an understatement.


Lots of old code not on GH that they could combine with what is on GH…


In my own experience, LLMs are great for speeding up boilerplate and test cases (note 1). And the closer you are to the business, the less useful they become (e.g. I used to work at an insurance company, and it insisted on summing interest or taxes on some payments by hallucinating variables).

Note 1: On tests written by juniors, it helped us get many more tests, but did not improve their quality. LLMs have no way to measure coverage, so they would just generate "clever" edge cases (NaN and infinities, even though those cases were already caught elsewhere in the code), because the juniors would just accept any code recommendation from the LLM. Seniors, on the other hand, would only accept tests they had written good descriptions for... Were the seniors faster? Yeah, they even enjoy writing tests now. But reviews became a lot more about catching useless cases. And yeah, I'm aware tests are not only about coverage, but it is one of the concerns we had.
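To make the "clever edge cases" point concrete, a hypothetical example of the kind of generated test that looks thorough but adds nothing, because the non-finite inputs are already rejected by validation that lives elsewhere (the function is an invented stand-in, not our real code):

    import math
    import pytest

    def monthly_interest(principal: float, rate: float) -> float:
        # Invented stand-in for the real billing code; in the real codebase this
        # check already lives in a validation layer one level up.
        if not math.isfinite(principal):
            raise ValueError("principal must be finite")
        return principal * rate / 12

    # LLM-suggested "edge cases": they pass, but only restate a check that is
    # already covered elsewhere, so they add review noise rather than coverage.
    @pytest.mark.parametrize("principal", [float("nan"), float("inf"), -float("inf")])
    def test_monthly_interest_rejects_non_finite(principal):
        with pytest.raises(ValueError):
            monthly_interest(principal, rate=0.05)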


LLMs are significantly improving every month (few weeks?), I feel like whatever us devs say on here will be outdated soon.


> LLMs are significantly improving every month

[citation needed]


No need to search for long; for instance, Sonnet recently outranked most GPTs: https://aws.amazon.com/blogs/machine-learning/anthropic-clau...

If it's not 'every month', it is at least improving regularly, like every other new revolutionary technology.

Don't understand the downvotes though.



