
That is already happening. These labs are writing next-gen models using next-gen models, with greater levels of autonomy. That doesn't produce the hard takeoff people talk about, because those hypotheticals don't account for sources of error, noise, and drift.


They’re feeding lossy models back into the training and research of new lossy models. But none of it is AGI self-learning.

You need both the generalised part of AGI and the ability to self-learn. One without the other wouldn’t cause a singularity.


They are doing self-learning things. That’s what a lot of synthetic data is about: when managed by the AI, it is the AI picking what it wants to train on in order to develop new capabilities.
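The shape of the loop described here (a model proposing its own synthetic training data, filtering it, and training on the survivors) can be sketched as a toy. Everything below is a stand-in with hypothetical names, not any lab's actual pipeline; the "model" is just a number so the generate → filter → train structure is visible.

```python
import random

# Toy sketch of an AI-managed synthetic-data loop. The "model" is a
# crude stand-in whose capability is a single float; it proposes
# synthetic "examples", scores them with its own heuristic, and
# "trains" on the ones that pass the filter. Only the loop shape is
# the point, not the internals.

class ToyModel:
    def __init__(self, skill=0.5):
        self.skill = skill  # stand-in for capability

    def generate_example(self, rng):
        return rng.random()  # a synthetic "example" (a quality score)

    def score(self, example):
        return example  # the model judges its own candidates

    def finetune(self, examples):
        # "training" nudges skill toward the mean quality of kept data
        if examples:
            self.skill = 0.5 * self.skill + 0.5 * (sum(examples) / len(examples))
        return self

def self_training_round(model, rng, n_candidates=100, threshold=0.8):
    candidates = [model.generate_example(rng) for _ in range(n_candidates)]
    kept = [ex for ex in candidates if model.score(ex) >= threshold]
    return model.finetune(kept)

rng = random.Random(0)
m = ToyModel()
for _ in range(3):
    m = self_training_round(m, rng)
```

In this toy, the filter only keeps high-scoring candidates, so the "skill" ratchets upward each round; the real-world catch the thread raises is that the judge is the same lossy model, so errors in scoring compound rather than wash out.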

(Artificial General Intelligence says nothing about self-learning though. I presume you mean ASI?)


The models may be writing the code, but I would be surprised if they were contributing to the underlying science, which feels like the hard part.


it's hardly science; it's mostly experimentation + ablations on new ideas. but yeah, idk if they are asking llms to generate these ideas. probably not good enough as is. though it doesn't seem out of reach to RL on generating ideas for AI research
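The "experimentation + ablations" workflow mentioned here is basically a search loop: train a variant with each component disabled, compare to the baseline, and keep the components whose removal hurts. A toy sketch, where `train_and_eval` is a fake scorer standing in for a real training run and the component names are made up:

```python
# Toy ablation sweep: score a baseline config, then re-score with each
# component toggled off. The score drop attributes importance to that
# component. "train_and_eval" is a stand-in, not a real training job.

def train_and_eval(config):
    # Pretend each enabled component adds a fixed amount to eval score.
    weights = {"aux_loss": 0.03, "data_mix_v2": 0.05, "longer_warmup": 0.01}
    return 0.70 + sum(w for name, w in weights.items() if config.get(name))

def ablate(components):
    baseline = {c: True for c in components}
    base_score = train_and_eval(baseline)
    deltas = {}
    for c in components:
        variant = dict(baseline, **{c: False})  # turn off one component
        deltas[c] = base_score - train_and_eval(variant)
    return deltas

deltas = ablate(["aux_loss", "data_mix_v2", "longer_warmup"])
```

This is why the parent calls it engineering rather than science: the loop measures *what* helps without producing a theory of *why*, though it is exactly the kind of mechanical procedure a model could plausibly drive.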


I'm curious what you think qualifies as science.


haha touché, but I don't think they are trying to understand the underlying theory or do hypothesis testing. I think it's more like engineering, tbh



