I would argue that humanity has been advancing rather steadily toward "true AGI" since at least the Jacquard loom. Otherwise, if you refuse to admit progress until you have evidence of actually having achieved AGI, the trajectory will just look like the Heaviside step function.
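A quick gloss on that metaphor, since it carries the argument: the Heaviside step function is

  H(t) = 0 for t < 0, 1 for t >= 0

so a binary "do we have AGI yet?" measure registers no progress at all until the instant it registers all of it.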
How can you claim that we've been advancing steadily when we don't even know how far away the destination is, or whether we're moving in the right direction? There's no basis to claim that it will look anything like a step function; if we do achieve true AGI someday, the first one might be equivalent to a really stupid person, with subsequent iterations gradually improving from there. It seems like a lot of assumptions are being made.
Show me a computer that can reason in a generalized way at least as well as a chimpanzee (or whatever). And no, LLMs don't count. They are not generalized in any meaningful way.
Please do clarify why LLMs aren't generalized - other than not being embodied, they seem quite general to me. Is there a particular reasoning task you have in mind that chimpanzees are good at but LLMs are incapable of?
This was the prevailing wisdom in AI before generalized transformers as well. We're rapidly moving toward black-box hyperintelligent AGI.