
Re reasoning traces - not sure, frankly. I get what you're saying in that there's only so much advanced thinking you can learn from just scraping GitHub code, and it certainly seems to be the latest craze for squeezing a couple extra % out of benchmarks, but I'm not entirely convinced it's necessary per se. Feels like a human-emulation crutch to me rather than a necessary ingredient for machines performing a task well.

For example, I could see some sort of self-play-style RL working. Which architecture? Try them all in a sandbox and see. Humans need trial-and-error learning, as you say, so why not here too? It seems to have worked for AlphaGo, which arguably also involves components of abstract, high-level strategy.
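The "try them all in a sandbox and see" idea is basically empirical search over candidates. A minimal toy sketch (the task, strategy names, and scoring here are all hypothetical, just to show the trial-and-error loop, not any real architecture search):

```python
import random

# Hypothetical sandbox task: get close to a hidden target value.
# Reward = negative distance, so higher is better.
TARGET = 37

def sandbox_score(strategy):
    """Run one candidate in the sandbox and return its reward."""
    guess = strategy()
    return -abs(guess - TARGET)

# Candidate "architectures" (toy stand-ins): each is just a guessing strategy.
candidates = {
    "low": lambda: random.randint(0, 20),
    "mid": lambda: random.randint(20, 60),
    "high": lambda: random.randint(60, 100),
}

def trial_and_error(candidates, trials=200):
    """Evaluate every candidate empirically and keep the best performer."""
    avg = {
        name: sum(sandbox_score(s) for _ in range(trials)) / trials
        for name, s in candidates.items()
    }
    best = max(avg, key=avg.get)
    return best, avg

best, scores = trial_and_error(candidates)
print(best)  # "mid" should win: its range is the only one covering the target
```

Real self-play RL obviously adds a learned policy and an opponent that improves alongside it, but the core loop is the same: no labeled reasoning traces, just rollouts and a score.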

>Jevons paradox

I can see it for tokens, and possibly software too, but I'm rather skeptical of it in the job-market context. It doesn't seem to have happened for the knowledge work AI has already killed (e.g. translation, or say copywriting). More (slop) stuff is being produced, but it didn't translate into a hiring frenzy of copywriters. It's possible that SWE is somehow different via network effects or something, but I haven't heard a strong argument for it yet.

>It's also possible that human-replacement AGI is harder to achieve than widely thought.

Yeah, I don't think the current paradigm is gonna get us there at all. Even if you 10x GPT-5, it still seems to miss some sort of spark that a 5-year-old has but GPT doesn't. It can do PhD-level work, but qualitatively something is missing in that "intelligence".

Interesting times ahead for better or worse


