Hacker News

The future looks especially bleak because LLMs, for many people I work with, are doing to logic what Google did to memory. 'Just ask ChatGPT' will be a thing; maybe not ChatGPT, probably just Google with their chatbot, with 'logic' and abstraction built in.

Google -> you don't need a long-term memory, just Google it.

So with good LLMs (and the latest iteration of ChatGPT is really good at a lot of things you don't want to bore yourself with), you don't need to process logic and abstraction yourself, as it will do it for you.

This is not there yet for everyone, but I think it will work the same way: the mind gets lazy and loses rigor, while the abstractions get more abstract and also more shallow.



> 'Just ask ChatGPT'

Yeah, I've heard that from coworkers. Lazy mind, yes. Read the docs, look at one example, check one Stack Overflow link, and you will know how to build a carousel UI component...

And please do not use it to try to understand some logic in your codebase. Even less, don't give it a small snippet of the codebase, thinking it will magically understand what the snippet does and correctly imagine what all the missing related code is and does. (!)

Most of the time I just want to scream "use your brain". By the time he has written his first sentence to the AI, I've resolved the issue, or at least have a clue. What's more, it is really infuriating because ChatGPT needs a minimally precise request to be expected to give a useful response. When the request from the user is blatantly imprecise, something like "help, thing doesn't work", I just feel bad for the AI having to deal with terrible communication, and for myself for having to deal with that coworker. Thankfully, when I am the one being asked for help, I know the project, I can look into the code, and coworkers can show me the issue instead of failing to explain it.

Yeah, that is why I like programming: computers only accept precise communication. It is not "move that div to the left" but "move the div with id X 120px to the left of its current position, over 200ms, at a linear speed".
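To make that concrete, the precise version of the request could be sketched as a CSS transition; the id `X` and the `moved` class are hypothetical names for illustration:

```css
/* Element starts at its normal position; any transform change
   animates over 200ms at a constant (linear) speed. */
#X {
  transition: transform 200ms linear;
}

/* Toggling this class moves the div 120px to the left of itself. */
#X.moved {
  transform: translateX(-120px);
}
```

The point being: every number the vague request left out (distance, duration, easing) has to be stated explicitly before the computer will do anything.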

Only once did ChatGPT do better than my mind or Google: someone was searching for the name of a bank starting with the letter "o".

Your statement is correct: Google replaced a lot of my memory; maybe someone could call me lazy too.

Maybe ChatGPT will have its use for me one day.


While I agree with you that programmers increasingly relying on LLMs would be (is?) a problem, there is an element of neo-Luddism to this, as was excellently framed by xkcd[1] some years ago. Could we not choose to spend all that time we used to spend writing code on testing/verification instead? Or on performance, or documentation, or security? Or, to go back to the premise of the blog, on re-educating ourselves about the tools we use every day but don't actually understand? I don't know how the industry will adapt and I won't bother making predictions, but the future doesn't necessarily involve everyone becoming mindless code monkeys.

[1]: https://xkcd.com/1227


But I think it will eventually end up with people having less understanding of what they are doing, which won't help us.

And I don’t think we will become mindless code monkeys: I think we will end up telling the computer what we need in English while having zero clue or memory of how it achieves that. That’s the ultimate ‘too abstract’ issue.



