At my work, here is a typical breakdown of time spent by work areas for a software engineer. Which of these areas can be sped up by using agentic coding?
05%: Making code changes
10%: Running build pipelines
20%: Learning about changed process and people via zoom calls, teams chat and emails
15%: Raising incident tickets for issues outside of my control
20%: Submitting forms, attending reviews and chasing approvals
20%: Reaching out to people for dependencies, following up
10%: Finding and reading up some obscure and conflicting internal wiki page, which is likely to be outdated
Really though? That’s only 2 hours per week writing code.
It’s true that time spent writing code is usually a minority of a developer’s work time, so an AI that makes coding 20% faster may only translate to a modest dev productivity boost. But 5% time spent coding is a sign of serious organizational dysfunction.
This is what software engineers need to be more productive:
- Agentic DevOps: provisions infra and solves platform issues as soon as a support ticket is created.
- Agentic Technical Writer: one GenAI agent writes the docs and keeps the wiki up to date, while another 100 agents review it all and flag hallucinations.
- Agentic Manager: attends meetings, parses emails and logs 24x7 and creates daily reports, shares these reports with other teams, and manages the calendar of the developers to shield them from distractions.
- Agentic Director: spots patterns in the data and approves things faster, without the fear of getting fired.
- Agentic CEO: helps with decision-making, gives motivational speeches, and aligns vision with strategy.
- Agentic Pet: a virtual mascot you have to feed four times a day, Monday to Friday, from your office's IP address. Miss a meal and it dies, and HR gets notified. (This was my boss's idea)
You're not wrong, but it's a "dysfunction" that many successful tech companies have learned to leverage.
The reality is, most engineers spend far less than half their time writing new code. This is where the 80/20 principle comes into play. It's common for 80% of a company's revenue to come from 20% of its features. That core, revenue-generating code is often mature and requires more maintenance than new code. Its stability allows the company to afford what you call "dysfunction": having a large portion of engineers work on speculative features and "big bets" that might never see the light of day.
So, while it looks like a bug from a pure "coding hours" perspective, for many businesses, it's a strategic feature!
I suspect a lot of that organizational dysfunction is related to a couple of things that might be changed by adjusting individual developer coding productivity:
1) aligning the work of multiple developers
2) ensuring that developer attention is focused only on the right problems
3) updating stakeholders on progress of code buildout
4) preventing too much code being produced because of the maintenance burden
If agentic tooling reduces the cost of code ownership and allows individual developers to make more changes across a broader scope of a codebase more quickly, all of this organizational overhead also needs to be revisited.
IMHO, the biggest impact LLMs have had on my day-to-day has not been agentic coding. For example, meeting summarisers are great: I can sometimes skip a call, or join while doing other things, and still get a list of bullet points afterwards.
I can point one at a huge doc for some API and get the important things right away, or ask questions of it. I can get it to review PRs so I can quickly get the gist of the changes before digging into the code myself.
For coding, I don't find agents boost my productivity that much where I was already productive. However, they definitely allow me to do things I couldn't before (or that would have taken very long, as I wasn't an expert). For example, my type signatures have improved massively: in places where I would normally have been lazy and typed something as any, I now ask Claude to come up with proper types.
I've had it write code for things that I'm not great at, like geometry, or dataviz. But these are not necessarily increasing my productivity, they reduce my reliance on libraries and such, but they might actually make me less productive.
New requirements, new features, old bugs being fixed, refactoring code to improve maintainability, writing tests for edge cases previously not discovered, adapting code for different kinds of deployment, ...
Depending on the workplace, refactoring or bug fixing is not something you just do. You have to create a ticket, meet with other team members, discuss the approach and scope, prioritise, and only start when it is ready to pick up. Actually touching the code is a small fraction of that time.
Still, writing a few hundred lines doesn't take a whole week.
I've been on embedded projects where several weeks of work were spent on changing one line of code. It's not necessarily organizational dysfunction. Sometimes it's getting the right data and the right deep understanding of a system, hardware/software interaction, etc, before you can make an informed change that affects thousands of people.
Unfortunately it is true of any org that is rapidly reducing its risk appetite. It is not dysfunctional; it is about balancing priorities at the org level. Risk is distributed very thinly across many people. Heard of the reinsurance business? Something similar happens in software development as well.
It does mean, though, that the business no longer positions itself as a software-making business. It no longer values being able to build software that supports its processes, whether those are customer processes or internal ones.
It doesn't if you have to manually check all that code. (Or even worse, you dump the code into a pull request and force someone else to manually check it - please do not do that.)
5% is pretty low, but similar to what I have seen on low-performing teams at 10K+ employee multinationals. This would also be why the vast majority of software today is bug-ridden garbage that runs slower than the software we were using 20 years ago.
Agentic coding will not fix these systemic issues caused by organizational dysfunction. It will, however, allow the software created by these companies to be rewritten from scratch for 1/100th the cost, with better reliability and performance.
The resistance to AI adoption inside corporations that operate like this is intense and will probably intensify.
It takes a combination of external competitive pressure, investor pressure, attrition, PE takeovers, etc., to grind down internal resistance, which takes years or decades depending on the situation.
"10% running build pipelines + 20% submitting forms" vs 5% making code changes?
Are you in a heavily regulated industry or a dysfunctional organization?
Most big tech companies optimize their build pipelines heavily to reduce commit-to-deploy time (or the validation/test cycle), which keeps engineers focused on the same task while the problem and solution are fresh.
How about you find out for yourself? Keep a chat window or an agent open and ask it how it could help with your tasks. My git messages and GitLab tickets have been written by AI for a year now, way better than anything I would half-heartedly do myself, with really good commit messages too. Claude even reminds me to create/update the ticket.
I find the commits written by AI often inadequate, as they mostly just describe what is already in the diff but miss the background: why the change was needed, why this approach was chosen, etc. The important stuff...
Then ask it to write the commit differently, or explain why in the prompt. Edit: I start by creating the ticket with Claude plus a terminal tool; the title and description give context to the LLM, then we do the task, then commit and update the ticket.
And in the time it takes to do all of that, the guy could have already written a meaningful commit message and be done with that issue for the day.
You only have to describe how you want commits written once, and then the AI will just handle it. It's not that any of us can't write good commits, but humans get tired, lose focus, get interrupted, etc.
Just in my short time using Claude Code, it generally writes pretty good commits. It often adds more detail than I normally would, not because I'm not capable, but because there's a certain amount of cognitive overhead in writing good commits, and it gets harder as our mental energy decreases.
I found this custom command [1] for Claude Code, and it reminded me that there's no way a human can consistently do this every single time, perhaps a dozen times per day, unless they're doing nothing else: no meetings, no phone calls, etc. And we know that's not possible:
# Git Status Command
Show detailed git repository status
*Command originally created by IndyDevDan (YouTube: https://www.youtube.com/@indydevdan) / DislerH (GitHub: https://github.com/disler)*
## Instructions
Analyze the current state of the git repository by performing the following steps:
1. *Run Git Status Commands*
- Execute `git status` to see current working tree state
- Run `git diff HEAD origin/main` to check differences with remote
- Execute `git branch --show-current` to display current branch
- Check for uncommitted changes and untracked files
2. *Analyze Repository State*
- Identify staged vs unstaged changes
- List any untracked files
- Check if branch is ahead/behind remote
- Review any merge conflicts if present
3. *Read Key Files*
- Review README.md for project context
- Check for any recent changes in important files
- Understand project structure if needed
4. *Provide Summary*
- Current branch and its relationship to main/master
- Number of commits ahead/behind
- List of modified files with change types
- Any action items (commits needed, pulls required, etc.)
This command helps developers quickly understand:
- What changes are pending
- The repository's sync status
- Whether any actions are needed before continuing work
Arguments: $ARGUMENTS
It's not possible for a human to do what an LLM does at scale, for sure. But that's the difference: humans are not robots, so they will turn the problem around and try to find ways to avoid having to do this in the first place, e.g. minimizing pending changes left around by making small, frequent commits. A lot of invention comes from people being annoyed at doing something manually over and over again. The LLM stirs things up a little, as it provides a completely different way of doing such tasks: you don't have to invent a better process if the LLM can do it repeatedly for a reasonable price. The new pressure then comes from minimizing LLM costs, I guess.
Wishful thinking. They will often ignore your general instructions, due to the statistical nature of their output. Source: have many such detailed general instructions that routinely get ignored.
These tools aren't magic, if there are reasons for code changes outside of the diff LLMs aren't going to magically fabricate a commit message that gives that context.
Do you feed the LLM additional context for the commit message, or is it just summarising what’s in the commit? In the latter case, what’s the point? The reader can just get _their_ LLM to do a better job.
In the former case… I’m interested to hear how they’re better? Do you choose an agent with the full context of the changes to write the message, so it knows where you started, why certain things didn’t work? Or are you prompting a fresh context with your summary and asking it to make it into a commit message? Or something else?
Depends. I have a prompt ready for changes I made manually: it checks the diff, gets the context, and spits out a conventional commit with a summary of the changes; I check, correct if needed, and add the ticket number. It’s faster because it types really fast, with no time spent thinking about phrasing or remembering the changes, and it's usually way more complete than what I would have written, given time constraints.
If I’m using a CLI:
the agent already has:
- the context from the chat
- the ticket number via me or when it created the ticket
- meta info via project memory or other terminal commands like API call etc
- Info on commit format from project memory
So it boils down to asking it to commit and update the ticket when we’re done with the task in that case. Having a good workflow is key
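The workflow described above (feed the diff plus the ticket context to the LLM, get a conventional commit back) could be sketched as a small prompt builder. This is only an illustrative sketch, not the commenter's actual setup: the function name, prompt wording, and ticket number are all made up here.

```python
def build_commit_prompt(diff: str, ticket: str, summary: str) -> str:
    """Assemble the context an LLM needs to write a useful commit message.

    The diff alone only says *what* changed; the ticket carries the *why*,
    which is exactly the part critics say auto-generated commits miss.
    """
    return (
        "Write a Conventional Commits message for the change below.\n"
        f"Ticket: {ticket} - {summary}\n"
        "Explain WHY the change was needed, not just what changed.\n\n"
        f"--- diff ---\n{diff}\n"
    )

# Typical usage: grab the staged diff and attach the ticket context, e.g.
#   import subprocess
#   diff = subprocess.run(["git", "diff", "--staged"],
#                         capture_output=True, text=True).stdout
#   prompt = build_commit_prompt(diff, "PROJ-123", "retry flaky uploads")
# then send `prompt` to whichever model/CLI you use.
```

The point of keeping this as an explicit function is that the "describe how you want commits written once" instruction lives in one reusable place instead of being retyped per commit.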
To your question: I still read and validate/correct; in the end, I'm the one committing the code! So the usual requirements apply from there. If someone used their own LLM the results would vary; here they have an approved summary. This is why a human in the loop is essential.
Interesting approach. I'm a bit old-school: when I make a change I already have all the context and more in my head, plus all the expectations from colleagues, historical context, etc., that might be useful to remind people about. At least for me, it is easier to write the commit from that than to formulate a prompt describing what I want in the commit. I have the same experience with code: when it is born in my head, it's usually easier for me to write what I want than to explain it to an LLM. I find the LLM a bit lacking in precision when it comes to comprehension, a little like explaining something to a child (with superpowers, but still needing step-by-step directions).
But I find it very interesting how others find prompting more productive for their use cases. It's definitely a new skill. Over years I also built my skill to write commits, so it comes natural to me as opposed to prompting, which requires extra effort and thinking in a different way and context and it doesn't work well for something that I do basically automatically already.
I’m from the old guard, I get where you’re coming from. The thing is when I find a prompt that works well, I can reuse it, build on it, create new rules, all in natural language.
You are saying that people's work needs to be so complex that an LLM that can pass the LSAT with flying colors is unable to summarize their changes in a few sentences, or else their work is not critical? That is a high bar.
I am not sure what tests LLMs are passing these days; every day it's some other metric of no practical use. We make money by delivering working code and features. What I do know is that for myself and the people working for me at my company, we hit the limits of their practical usefulness so often, not even counting the casual removal of entire parts of code, that we recently decided to revert from agents to using them only in conversational mode and only for select tasks. Whoever claims these tools are revolutionary is clearly not using them intensively enough, or does not have a challenging use case. We get it: they can quickly spit out a React app for you, and the frontend devs and people who were never good at maths are finally "good" at something vaguely technical. However, try using them for production-ready products every day over several months; your opinion will likely change.
>We get it, they can quickly spit out a react app for you, the frontend devs and people who were never good at maths are finally "good" at something vaguely technical
Plenty of us are using LLM/agentic coding in highly regulated production applications. If you're not getting very impressive results in backend and frontend, it's purely a skill issue on your part. "This hammer sucks because I hit my thumb every time!"
Again mate, not relevant. How about this: show me one major application that was developed mainly with LLMs and was a huge success by any measure (it does not have to be profitability). The benchmarks show what benchmarks show, but we have yet to see a killer app done by LLMs (or mostly by LLMs).
You started with insulting someone for using an LLM to write git commit messages, and in order to defend that statement you say that an LLM hasn't written a killer app by itself.
I am not really sure what to say, except that if you are simply looking for a way to insult people, just admit you are a mean person and you won't have to justify yourself in ways that make no sense. But if you really only hate LLMs, you can do that in ways that don't involve insulting people. To be so full of disdain for a technology that it turns you irrational should be a bit concerning.
Insulting, really? I merely made a statement about the nature of their work; that's not an insult. Please re-read and understand before conflating. You have also fully misunderstood my comments about the LLMs: if I had disdain, I would not have dished out thousands of USD for my team to use them. I am merely saying that they are not what the hype-makers would have you believe. Now, show me that one killer app that someone successfully vibe-coded? All we see is theoretical bullshit, benchmarks, etc., but no real-world a-ha moment.
You just felt like coming into a thread which was bound to be populated by people talking about using LLM for coding to let them know that their work isn't important because they use an LLM.
It seems to me the only reason someone would feel the need to do such a thing is to validate their own experience. If everyone else seems to be finding value in a tool, but you cannot, it must be because everyone else just isn't doing important things with it.
As I said earlier, I would be concerned about such behavior if I found myself doing it.
Are you also this cocky when you forget to turn off your coding agent during coding interviews, or when you turn in commits with 300 deletions and 700 new lines that some poor soul has to review? The number of applicants like you we reject is definitely increasing.
You can't comment like this on Hacker News, no matter what you're replying to. If you wouldn't mind reviewing https://news.ycombinator.com/newsguidelines.html and taking the intended spirit of the site more to heart, we'd be grateful.
I've replied to them too, but it takes at least two people to make a flamewar and you could have de-escalated or stopped replying at any time. Rather than pointing the finger at someone else, please make an effort to observe the guidelines and show you're sincere about using HN as intended. If you want others to be held to a high standard you need to hold yourself to a high standard.
The right thing to do if someone else is breaking the guidelines is flag their comments and email us at hn@ycombinator.com so we can take action.
By the way, you were right that the other person was inflammatory, and I should have called them out at the same time; it's just that we're often working quickly through lists of flagged comments and don't always think to look over the whole subthread.
In short, the answer to "why didn't you also call out that other user's abusive comment?" is almost always that we hadn't seen it yet, but we likely would have if you'd flagged it or emailed us.
My man, I've been paying for a GitHub Copilot Business license and some additional Pro+ accounts for my entire team for more than a year and a half, with top-tier access to models like Claude Sonnet, Opus, and the rest of the bunch. We even had a generous overage policy. I may have been a bit excited about the tech in 2021, when it was not yet clear just how much of a dead end it is. I've seen a fair share of cocky morons like yourself forgetting to turn the VS Code extension or the CLI assistant off when interviewing with us and going 'let me just turn that off', then continuing to demonstrate their utter incompetence and obvious dependence on LLMs. But what do I know? I never had my production database deleted by an LLM. Although we haven't seen disasters on the scale of this one: https://www.theregister.com/2025/07/21/replit_saastr_vibe_co..., we did have some close calls, which is why we reverted usage to strictly conversational mode with heavy supervision requirements. Maybe also explain your excitement about LLMs in this fresh thread: https://news.ycombinator.com/item?id=44651485 . It's OK to be junior and to be excited about stuff, but you obviously lack the heavy-duty exposure that would open your eyes a bit. Just be careful not to delete your employer's database.
> My point stands, go get a feel of what’s happening in 2025 with coding agents like Claude code or the one from this article, or you’ll be left behind. I’m done arguing with a smug man child
Junior, first re-learn to read correctly, as LLM dependency seems to have impacted your reading comprehension skills. I never said I only used them in 2021 (Claude/Anthropic did not even exist back then), as you seem to be falsely constructing in your head. I am saying I've been using them since 2021 and have been paying for generous usage for my team for the last 18 months. Recently we decided to drop agentic usage, as it is absolute crap and a net negative. I am sorry to pop your bubble, but the only person left behind is you; your arguments even sound like an LLM hallucination. Are you sure you did not ask Claude to give you those arguments to shoot back at me?
We must have the same job! Generating code is a minuscule part of my job, and we have the same level of organizational dysfunction. Most of the work involves long investigations of customer bugs and long face-to-face calls with customers; I'm only getting the stuff that stumped level 1 and level 2 support.
I actually tried to use Qwen3[1] to analyse customer cases and it was worse than useless at it.
[1] We can't use any online model as these bug reports contain large amounts of PII, customer data, etc.
Many of those things could be improved today even without AI. For example, raising incident tickets for issues outside of your control could already present you with a suggestion that you just have to tick off.
Not saying we are there yet, but it's hard to imagine it's not possible.
Raising incidents is not about suggestions. Things like build pipelines run into issues, and someone from Ops needs to investigate and maybe bump up some pods or apply some config changes on their end. Or some wiki page has conflicting information, and someone needs to update it with the correct information after checking with the relevant people, policies, and standards. Those other people might be on vacation, and their delegate misguides you because they are not aware of the recently changed process.
Also, you're not making an argument against agentic coding, you're actually making an argument for it - you don't have time to code, so you need someone or something to code for you.
You should automate this, like I did. You're an engineer, no? Work around the digital bureaucracy.
- Running build pipelines: make a CLI tool to initiate them, monitor them, and notify you on completion/error (audio). This allows chaining multiple things. Run it in a background terminal.
- Learning about changed processes and people via Zoom calls, Teams chat and emails: pass logs of chats and emails to an LLM with a particular focus. Demand that call transcripts be published for that purpose (we use Meet).
- Raising incident tickets for issues outside of my control: automate this with an agent. Allow it to access as much as needed and guide it with short instructions; all doable via Claude Code + a custom MCP.
- Submitting forms, attending reviews and chasing approvals: the best thing to automate. They want forms? They will have forms. Chasing approvals: fire and forget plus queue management, same.
- Reaching out to people for dependencies, following up: LLM as personal assistant is the classic job. Code this away.
- Finding and reading some obscure and conflicting internal wiki page, which is likely outdated: index all the data, put it into RAG, and let an agent dig deeper.
Most of your time is spent scheduling micro-tasks, switching between them, and maintaining an unspoken queue of checking various SaaS frontends. Formalize micro-task management, automate the endpoints, and delegate it to your own selfware (an ad-hoc tool chain you vibe-coded for yourself only, tailored to your particular working environment).
I do (almost) all of this to automate away non-coding tasks. Life is fun again.
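The first item, "monitor the pipeline and notify on completion," could be sketched as a tiny poller. This is a hedged sketch, not a real tool: `get_status` and `notify` are placeholder callables standing in for whatever CI API and notification channel (audio, desktop alert, chat webhook) you actually use.

```python
import time

def watch_pipeline(get_status, notify, poll_seconds=30, sleep=time.sleep):
    """Poll a build pipeline until it leaves the "running" state, then notify.

    get_status: callable returning "running", "success", or "failed"
                (in practice, a thin wrapper around your CI server's API)
    notify:     callable taking a message string (play a sound, send a
                desktop notification, post to chat, ...)
    sleep:      injectable for testing; defaults to time.sleep
    """
    while True:
        status = get_status()
        if status != "running":
            # Fire exactly one notification once the pipeline settles.
            notify(f"pipeline finished: {status}")
            return status
        sleep(poll_seconds)
```

Run it in a background terminal against the pipeline you just kicked off; the injectable `sleep` keeps the loop trivially testable without waiting.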
In the short term, I think humans will be doing more of technical / product alignment, f2f calls (especially with non-technical folks), digesting illegible requirements, etc.
Coding, debugging builds, paperwork, doc chasing are all tasks that AI is improving on rapidly.
If 95% of employee time is work coordination, then executive leadership needs to downsize aggressively. This is a comical example of Brooks's Law. Likewise, your clients or customers should be outraged and demand proof that pricing reflects business value and that $0.95 of every dollar they give your company isn't wasted.
There are so many problems in the world we need to stop cramming into the same bus.