> The answer it gave me was surprisingly coherent, wise, and useful. From there (AGI) we could be minutes away from artificial super intelligence.
What am I missing? I assumed, of course, that they didn't mean literal minutes but were speaking metaphorically, and my reply used it in the same metaphorical sense: it could be a heavy lift to go from point A to point B.
e: I guess what I am missing is that they are conflating GPT with AGI and saying it would literally be minutes from AGI to artificial super intelligence. That part I actually do agree with, but I don't think GPT-X or PaLM or LMs in general qualify as AGI.
I read it a different way: we'd go from "plays chess like a bad human" to "plays chess superhumanly," but applied to general knowledge synthesis. That jump could happen rapidly once a system was able to synthesize general-knowledge answers at all.
Upon reflection, my comment was too critical of you simply for having a different read than I did.
This makes it sound like you're arguing against something no one actually said, which makes it a weird thing to quote.