I'm using AGI here as shorthand for an arbitrary major improvement over the current state of the art. But given that OpenAI's stated goal is creating AGI, I don't think it's a non sequitur to respond to the parent comment's question
> Are you saying instead, that concrete predictive algorithms need improvement or are we lumping the tuning into this?
in the context of what's needed to get to AGI - just as, if NASA built an engine, we'd talk about its effectiveness in the context of space flight.
"Sammy A thinks we've made the best engine with the tools at hand" -> "this will never get us out of the solar system"
Sorry to unload on you. It's just frustrating to constantly see AGI brought up needlessly on HN.