In response to the AI negativity in this thread:
Remember that this thing is in its infancy.
Current models are the embryos of what is to come.
The code quality of current models is not replacing skilled software, network, or ops engineers.
Tomorrow's models may well do that, though.
Venting frustrations about this is all very well, but I sincerely hope that those who wish to stay in the industry learn to get ahead of AI and to utilize and control it.
Set industry standards (now) and fight technically incompetent lawmakers before they steer us into disaster.
We have no idea what effect tomorrow's LLMs are going to have; autonomous warfare, for example, is not that far away.
All while today's tech talent spends its energy bickering on HN about the loss of being the code review king.
Everyone hated the code review royalty anyway. No one mourns them. Move on.
If managers are pushing a tool that clearly isn't working, it makes perfect sense for workers to complain about it and share their experiences. This has nothing to do with the future; no one knows for sure whether the models will improve. But they are not as advertised today, and that is what people are reacting to.
Current LLMs are already trained on the entirety of the interwebs, very likely including material they really should not have had access to (private GitHub repos and such).
GPT-5 and other SoTA models are only slightly better than their predecessors, and not on every problem (while being worse on other metrics).
Assuming there is no major architectural breakthrough[1], progress only seems to be slowing down: there isn't enough new data; the new data that exists is increasingly LLM-generated (causing a "recompressed JPEG" sort of problem, see the toy sketch below); and the compute requirements for training are absurd and only getting more expensive. At some point you hit hard physical limits like electricity usage.
[1]: If this happens, one side effect is that local models will be more than good enough. Which in turn means all these AI companies will go under because the economics don't add up. Fun times ahead, whichever direction it goes.
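To make the "recompressed JPEG" point concrete, here is a minimal toy sketch of my own (not from the thread): fit a distribution to data, then train the next "generation" only on samples drawn from the previous fit. The Gaussian, the sample size, and the generation count are all illustrative assumptions; the point is just that estimation noise compounds round over round, like re-saving a JPEG.

    # Toy illustration (assumed setup, not anyone's actual training pipeline):
    # each generation is "trained" (fit) only on the previous generation's
    # output. The fitted parameters drift away from the original data's,
    # because each fit adds sampling noise that the next fit inherits.
    import numpy as np

    rng = np.random.default_rng(0)
    data = rng.normal(loc=0.0, scale=1.0, size=50)  # generation 0: "real" data

    for gen in range(20):
        mu, sigma = data.mean(), data.std()          # fit a Gaussian to current data
        print(f"gen {gen:2d}: mu={mu:+.3f}, sigma={sigma:.3f}")
        data = rng.normal(mu, sigma, size=50)        # next gen sees only model output

Run it and watch mu and sigma wander away from 0 and 1; with small samples the drift is fast, and the rare tails of the original distribution are the first thing to disappear.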
There's a lot to unpack here, but to me your comment sort of contradicts itself. You're saying these things are in their infancy and therefore not able to produce code to the standard of a skilled software engineer. But you also seem to have an axe to grind against code review, which is fine, but wouldn't that mean code review is even more important, at least right now? Which is kind of the point of the article.
I don't know what "code review royalty" is exactly, but there are certain coworkers of mine whose feedback I value very highly. I feel sad if that's not the case for you.