
I’m not terribly worried. Though I assume my work will begin to resemble that of the Adeptus Mechanicus rather than proper engineering.

The fact is that anyone who understands, even at a basic level, what the computer is actually doing and isn’t afraid to look at it at a low level can’t be replaced by an AI trained on Stack Overflow.

It may be that I will spend more of my time reviewing LLM-generated code, or make my money on the new kind of legacy code created by copy-pasting ChatGPT snippets together instead of SEO-optimized Stack Overflow scrapes.

For me the outcome is the same. The skills I need to be more effective than the machine are the exact same as they were a decade, a century, or even a millennium ago. I still don’t see these LLMs do any synthesis of knowledge, and they don’t seem to have a grasp of logic or grammar at the level I expect of a bright middle school student.



> Though I assume my work will begin to resemble that of the Adeptus Mechanicus rather than proper engineering.

Lol, I was thinking about this the other day. Eventually most devs will essentially just be praying to the Machine Spirit to make the computer do what they want. A small number of high clerics will bother to learn how computers actually work. The rest will simply be cargo culting to the maximum extent possible.


> Eventually most devs will essentially just be praying to the Machine spirit to make the computer do what they want.

Same as it ever was.


You should read the "Sparks of AGI" paper, especially the math and code sections. It's a GPT-4 evaluation conducted from outside OpenAI (authored by a Microsoft team). It's an easy-to-read paper, mostly a collection of examples.

https://arxiv.org/abs/2303.12712



