> If we can make something that appears to be able to mimic human responses, isn't that AGI in practice even if it's not in principle?
I'm confused by this. What's the difference? Is there a measure other than how well it "mimics" a human? If a robot is indistinguishable from a human, what other measure would make it different in practice vs principle?
And if that’s the definition of AGI we’re going by, we’re in no danger of it murdering us all. Imagine thinking GPT-4 was somehow capable of really any harm at all.
It’s also just an incredible failure of risk assessment. Global warming is the real and pressing threat, and it’s here right now. Not sentient murderous AI. But it’s easier to dream about a threat that doesn’t exist than one that does, because you can still imagine a perfect scenario in which you prevent it. Whoops.
> Imagine thinking GPT-4 was somehow capable of really any harm at all.
There are plenty of new ways you could use ChatGPT to mess with society. For example, apparently you only need about 8% of people talking about something for it to seem like a mass movement. It would be pretty easy for a malicious actor to use LLMs to flood the internet with some new, but fake, story or motivated point of view and kickstart a “mass movement” that way. Or use something like that to heavily influence politics. That might already be happening. We have no idea.
>But it’s easier to dream about a threat that doesn’t exist than one that does because you can still imagine a perfect scenario in which you prevent it
Science fiction quite often invents scenarios that, once created, cannot be stopped. The "don't create a black hole on Earth unless you want to be in the black hole" effect.