If we want the intelligence to be recognizably human-like, then yes. As to whether an AGI could be trained in simulation rather than needing to be physically embodied: I used to believe not, but then I learned about sim2real (e.g. [0]). Training blank-slate deep learning algorithms on real hardware is prohibitive: it requires millions of hours of experience, and those hours are expensive on current-day hardware (and cannot be accelerated; you're stuck with wall-clock time). But if pre-training in simulation can be effective, then I think we have a decent shot in the near term.
Human-level and human-like are not necessarily the same thing, though. I doubt human intelligence is as general as we like to think it is; a lot of what we consider intelligence is probably domain-specific. Training AGI on domains unfamiliar to humans could be super valuable, because it would be easier to surpass human effectiveness there.
My pet conjecture, though, is this: the ability to experiment is key to developing AGI. Passively consuming data makes it much harder to establish cause and effect. We create and discard hypotheses about potential causal links based on observed coincidences (i.e., small samples) all the time. Doing experiments to confirm or refute these hypotheses is much less computationally expensive than doing passive causal inference, which lets us consider many more potential causal linkages. That allows us to lower the statistical threshold for entertaining a potential causal relationship, which makes it more likely that we find the linkages that do exist. The benefit of developing accurate causal models is that they are much more compact and computationally efficient than trying to model the entire universe with a single probability distribution.
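To make the observation-vs-experiment point concrete, here's a toy sketch (my own illustration, not anything from a real system): a hidden confounder Z drives both X and Y, while X has no causal effect on Y at all. Passive observation sees a strong X–Y correlation; intervening on X (randomizing it, which severs the Z→X link) reveals there's nothing there.

```python
# Toy illustration: intervention distinguishes causation from confounding.
# Hidden confounder Z drives both X and Y; X has NO effect on Y.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Observational regime: Z confounds X and Y.
z = rng.normal(size=n)
x_obs = z + 0.1 * rng.normal(size=n)
y_obs = z + 0.1 * rng.normal(size=n)       # y depends on z only, never on x
corr_obs = np.corrcoef(x_obs, y_obs)[0, 1]  # strong spurious correlation (~0.99)

# Experimental regime: we *set* X ourselves (randomization breaks the
# Z -> X link); Y is generated exactly as before.
x_do = rng.normal(size=n)
y_do = z + 0.1 * rng.normal(size=n)
corr_do = np.corrcoef(x_do, y_do)[0, 1]     # near zero: no causal effect

print(f"observational corr(X, Y):  {corr_obs:.2f}")
print(f"interventional corr(X, Y): {corr_do:.2f}")
```

One experiment (a handful of randomized samples) settles what a passive observer could only resolve by modeling Z, which is exactly why experimentation is so much cheaper computationally.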
[0] https://openai.com/blog/solving-rubiks-cube/