Hacker News

> Why?

Because small programs are quick and easy to write, there was never a bottleneck in producing them, and the demand for people who write only small programs is very small.

The difficulty of writing a program scales superlinearly with size. An experienced programmer in a familiar environment can easily write a 500-line program in a day, but adding 500 meaningful lines to an existing 100k-line codebase in a day is not easy at all. So almost all developer time in the world is spent on large programs; small programs are a drop in the ocean, and automating them doesn't make a big difference overall.

Small programs can help you a lot, but automating them doesn't replace programmers, since almost no programmers are hired to write small programs. Instead, automatically generating small programs mostly helps automate other work, like that of regular white-collar workers, whose jobs are now easier to automate.





  > but writing 500 meaningful lines to an existing 100k line codebase in a day is not easy at all.
I've had plenty of instances where it's taken more than a day to write *one line* of code! I suspect most experienced devs have had these types of experiences too.

Not because the single line was hard to write, but because of the context in which it needed to be written.

Typing was never the bottleneck, and I'm not sure why this is the main argument for LLMs (e.g. "LLMs save me from the boilerplate"). When typing is a bottleneck, it seems more likely that the procedure is wrong. Things like libraries, scripts, and skeletons tend to be far better solutions for those problems. In tough cases abstraction can be extremely powerful, but abstraction is a difficult tool to wield.

The bottleneck is the thinking and analyzing.


> Things like libraries, scripts, and skeletons tend to be far better solutions for those problems.

My feelings exactly.

LLM code generation (at least, the sort where people claim they're being 10X-ed) feels like it competes with frameworks. "An agent built this generic CRUD webapp on its own with only 30 minutes of input from me!"—well, I built an equivalent webapp in 30 minutes with Django. These are off-the-shelf solutions to solved problems. Yes, a framework like Django requires up-front learning, but in the end it leaves you with fewer lines of code to maintain, as opposed to custom-generated LLM code.
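For context, a hypothetical sketch of what "a CRUD webapp in 30 minutes with Django" leans on: define a model and register it with the built-in admin, and the framework supplies the create/read/update/delete UI. The `Ticket` model here is made up, and this fragment assumes a project already created with `django-admin startproject`, so it is not runnable standalone.

```python
# models.py / admin.py fragment -- hypothetical example, not a full project.
from django.db import models
from django.contrib import admin

class Ticket(models.Model):  # made-up model for illustration
    title = models.CharField(max_length=200)
    done = models.BooleanField(default=False)

admin.site.register(Ticket)  # the admin provides the whole CRUD UI
```

The point being: the few lines above are the lines you maintain, versus the hundreds an LLM might generate for the equivalent forms and views.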


There's an argument to be made that this gap is actually highlighting design issues rather than AI limitations.

It's entirely possible to have a 100k LOC system be made up of effectively a couple hundred 500-line programs that are composed together to great effect.

That's incredibly rare, but I did once work for a company that had such a system, and it was a dream to work in. I have to think AIs are making a massive impact there.


  > It's entirely possible to have a 100k LOC system be made up of effectively a couple hundred 500-line programs that are composed together to great effect.
I'm confused. Are you imagining a program with 100k LoC contained in a single file? Because you'd be insane to do such a thing. It's normally a lot of files with not many LoC each, which de facto meets this criterion.

You may also wish to look at the UNIX philosophy: the idea that programs should be small and focused, that a program should do one thing and do it well. But there's a generalization of this philosophy once you realize a function is a program.
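The "a function is a program" generalization can be sketched like this (function names are hypothetical): each piece does one thing, is testable in isolation, and the larger "program" is just their composition, in the spirit of UNIX filters.

```python
from collections import Counter

def tokenize(text: str) -> list[str]:
    """Split text into lowercase words."""
    return text.lower().split()

def count(words: list[str]) -> Counter:
    """Count occurrences of each word."""
    return Counter(words)

def top(counts: Counter, n: int) -> list[tuple[str, int]]:
    """Return the n most common (word, count) pairs."""
    return counts.most_common(n)

def pipeline(text: str, n: int = 3) -> list[tuple[str, int]]:
    """The 'system' is only the composition of small parts."""
    return top(count(tokenize(text)), n)

print(pipeline("a a a b b c"))  # [('a', 3), ('b', 2), ('c', 1)]
```

Analogous to `tr ... | sort | uniq -c | sort -rn | head`, but with function calls in place of pipes.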

I do agree there are a lot of issues with design these days, but I think you've vastly oversimplified the problem.


> It's entirely possible to have a 100k LOC system be made up of effective a couple hundred 500 line programs that are composed together to great effect.

To me, this sounds like a nightmare, and I'm sure anyone who's worked at a shop with way too many microservices would agree. It's trivial to right-click a function call and jump to its definition; much harder to trace through your service mesh and find out what, exactly, is running at `load-balancer.kube.internal:8080/api`.





