That is already happening. These labs are writing next-gen models using next-gen models, with greater levels of autonomy. That doesn't get you the hard takeoff people talk about, because those hypotheticals don't account for sources of error, noise, and drift.
They are doing self-learning things. That's what a lot of synthetic data is about: when the process is managed by the AI, it's the AI picking what it wants to train on in order to develop new capabilities.
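The loop described above — a model generating candidate data and selecting what to train on by its own judgment — can be sketched as a toy. Everything here (the string "examples", the scoring function) is a hypothetical stand-in for a real LLM's generate/score/train steps, not anyone's actual pipeline:

```python
import random

def generate_candidates(seed, n=20):
    """Stand-in for sampling synthetic training examples from the current model."""
    rng = random.Random(seed)
    return [f"example-{rng.randint(0, 999):03d}" for _ in range(n)]

def model_score(example):
    """Stand-in for the model judging how much a given example would teach it."""
    return sum(ord(c) for c in example) % 100

def select_training_data(candidates, k=5):
    """The model picks what it wants to train on: keep the top-k by its own score."""
    return sorted(candidates, key=model_score, reverse=True)[:k]

candidates = generate_candidates(seed=0)
batch = select_training_data(candidates)
print(len(batch))  # 5 examples, chosen by the model's own scoring
```

The point of the shape, not the toy internals: the same model sits on both sides of the data pipeline, as generator and as curator.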
(Artificial General Intelligence says nothing about self-learning though. I presume you mean ASI?)
it's hardly science, it's mostly experimentation + ablations on new ideas. but yeah, idk if they are asking llms to generate these ideas. probably not good enough as is. though it doesn't seem out of reach to RL on generating ideas for AI research
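For what "RL on generating ideas" could look like in the most stripped-down form: a softmax policy over a handful of canned ideas, pushed toward higher-reward ones by exact policy gradient. The idea strings and reward numbers are made up for illustration; a real setup would sample free-form text and get reward from experiment outcomes, not a lookup table:

```python
import math

# Hypothetical reward table: how well each "research idea" pans out.
IDEAS = ["scale data", "new optimizer", "better evals", "sparse attention"]
REWARD = {"scale data": 0.2, "new optimizer": 0.5,
          "better evals": 0.3, "sparse attention": 0.9}

logits = {i: 0.0 for i in IDEAS}  # uniform policy at init

def policy():
    """Softmax over idea logits."""
    z = {i: math.exp(l) for i, l in logits.items()}
    s = sum(z.values())
    return {i: zi / s for i, zi in z.items()}

for _ in range(2000):
    p = policy()
    j_expected = sum(p[i] * REWARD[i] for i in IDEAS)  # expected reward J
    for i in IDEAS:
        # exact policy-gradient ascent on J: dJ/dlogit_i = p_i * (r_i - J)
        logits[i] += 1.0 * p[i] * (REWARD[i] - j_expected)

p = policy()
print(max(p, key=p.get))  # policy concentrates on the highest-reward idea
```

In practice you'd use sampled (REINFORCE-style) gradients rather than the exact expectation, but the mechanism is the same: ideas that lead to better experimental results get sampled more often.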