I used to work in physics, which seems to attract a lot of people with shallow knowledge, and ML seems to be going in a similar direction.
To be clear, I'm actually quite bullish on AI (well, bullish on it ultimately happening, not necessarily on how it will impact humanity). I was mostly reacting to the timeline for photonic quantum computing rather than on ML advancement, actually.
Language modeling is extremely impressive. I've been saying since 2017 that modeling language is basically modeling the world/intelligence, but at the time most people did not seem to believe that. People still don't really seem to believe it, though I think the smarter among us are beginning to, now that they've seen GPT-X and PaLM. Still, there is a substantial gap from "good language model" to "embodied agent taking actions in the world" that we have not bridged, and bridging it could be quite hard, taking a lot more than mere "minutes."
> The answer it gave me was surprisingly coherent, wise, and useful. From there (AGI) we could be minutes away from artificial super intelligence
What am I missing? I assumed, of course, that they didn't mean literal minutes but were speaking metaphorically; my reply used it in the same metaphorical sense: it could be a heavy lift to go from point A to point B.
e: I guess what I am missing is that they are conflating GPT with AGI and saying, in the literal sense, that it will be minutes from AGI to artificial super intelligence. That I actually do agree with, but I don't think GPT-X, PaLM, or LMs in general qualify as AGI.
I read it in a different way: we'd go from "plays chess like a bad human" to "plays chess superhumanly", applied to general knowledge synthesis tasks. That would be a rapid change once the system was able to synthesize general-knowledge answers at all.
Upon reflection, my comment was too critical of you for simply having a different read than I did.