The point at which the Chinese room fails for me as a description of GPT is when I make up a world with certain rules and have GPT do a task in that world using those rules.
The idea that there's a lookup table somewhere seems to fail in these situations - or rather, if there is a lookup table, it would have to be astronomically large compared to the size of the model.
In writing this I was thinking about the classic wolf, goat, and cabbage problem, which has been well studied. What about wizards, warriors, and priests with a set of powers?
Imagine a universe where there are three types of people: wizards, warriors, and priests. Wizards can open a portal that allows two people to go through at a time, but they cannot go through the portal themselves. Priests can summon people from other locations to their location or teleport to the location of another person. Warriors cannot teleport or summon, but may be teleported or summoned by others.
---
Given four wizards, a priest, and a warrior - what are the necessary steps to move them all to a new location?
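The puzzle is small enough to solve mechanically, which makes the contrast with GPT's approach concrete. Below is a minimal breadth-first search over the state space, under my own reading of the rules where the text is ambiguous: no wizard may pass through any portal (not just the one who opened it), a portal carries up to two travellers, and a priest may summon one person at a time or teleport to a location where someone already stands. The names and structure here are my own sketch, not anything from the original puzzle statement.

```python
from collections import deque

PEOPLE = ["wiz1", "wiz2", "wiz3", "wiz4", "priest", "warrior"]

def moves(state):
    """Yield (description, new_state) pairs.

    A state is the frozenset of people at the destination B;
    everyone else is at the starting location A.
    """
    at_b = state
    at_a = frozenset(PEOPLE) - at_b
    for here, there, here_is_a in ((at_a, at_b, True), (at_b, at_a, False)):
        # Portal: a wizard standing here opens a portal; up to two
        # non-wizards step through from here to there.  (Assumption:
        # wizards can never travel by portal.)
        if any(p.startswith("wiz") for p in here):
            travellers = sorted(p for p in here if not p.startswith("wiz"))
            groups = [(t,) for t in travellers] + [
                (a, b) for i, a in enumerate(travellers)
                       for b in travellers[i + 1:]
            ]
            for g in groups:
                new_b = (at_b | set(g)) if here_is_a else (at_b - set(g))
                yield f"portal: {', '.join(g)} cross", frozenset(new_b)
        if "priest" in here:
            # Summon: the priest pulls one person from the other side.
            for p in sorted(there):
                new_b = (at_b - {p}) if here_is_a else (at_b | {p})
                yield f"priest summons {p}", frozenset(new_b)
            # Teleport: the priest jumps to someone on the other side.
            if there:
                new_b = (at_b | {"priest"}) if here_is_a else (at_b - {"priest"})
                yield "priest teleports", frozenset(new_b)

def solve():
    """Breadth-first search from everyone-at-A to everyone-at-B."""
    start, goal = frozenset(), frozenset(PEOPLE)
    seen, queue = {start}, deque([(start, [])])
    while queue:
        state, path = queue.popleft()
        if state == goal:
            return path
        for desc, nxt in moves(state):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [desc]))

if __name__ == "__main__":
    for step in solve():
        print(step)
```

Under these assumptions the shortest plan is five moves: the priest and warrior cross by portal, then the priest summons the four wizards one by one. The search also makes it easy to re-run after tweaking a rule, which is exactly the kind of domain change discussed below.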
Then when GPT answers that, it doesn't seem reasonable that it has been trained on that type of data or problem (in the abstract), or that, per the Chinese room argument, there's a lookup table such that when you receive that text you can look up and generate an answer - one that appears to have been reasoned out. It is also interesting to then tweak the problem domain and have it create a new response ("wizards can only open one portal").
The Chinese room works well enough to say "nope, that doesn't need consciousness" for the chatbots of old, which very much followed a parse / lookup / respond approach. Even digital assistants like Siri, Alexa, and Google Assistant fit that model.
But the fact that GPT-4 can't be explained with the Chinese room... as I said, that's interesting. It doesn't mean that GPT-4 is conscious (I would tend to argue against it, but I don't have a good definition of consciousness to argue from); it's just that one can't use the Chinese room to argue that GPT-4 isn't conscious.
The Frankenstein fear, I believe, is two-part. First, there's the "it's coming for our jobs" part of automation and robots. There's also an existential fear that it is harder and harder to say what makes us human and separates us from machine. That gets very much into questions that come into conflict with many individuals' identity and the value of knowledge. If those questions haven't been considered before, or if you are coming from a rigid worldview where the possible new answers conflict with it, the LLM can be quite frightening.