Everyone agrees that the man doesn't understand Chinese. The sleight of hand is that Searle compares a full computer to a component of the Chinese Room, when he should be comparing a computer to the room as a whole. The man in this thought experiment is analogous to an internal component of a computer, not the whole thing.
Put another way, Searle separates a "computer" from the instructions it is running, but there is no such separation. It's like separating a brain from the electrical brain activity. It's true that a brain without electrical activity cannot understand Chinese just like a computer without a program cannot, but that's not a very interesting observation.
A computer is just an implementation of a function (in the mathematical sense), i.e., a finite set of pairs (IN, OUT) where IN and OUT are bit strings in {0,1}^N. That is, it's an implementation of a mapping {010101, ...} -> {010101010, ...}.
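To make that concrete, here's a minimal Python sketch of a "computer" in exactly this sense: a bare finite lookup table from input strings to output strings, with no understanding anywhere in it. (The table entries are invented for illustration; they're a toy stand-in for Searle's rule book.)

```python
# A "computer" as nothing but a finite function: a lookup table
# from input bit strings to output bit strings. The entries are
# a made-up stand-in for Searle's rule book.
RULE_BOOK = {
    "010101": "010101010",
    "110011": "000111000",
}

def run(inp: str) -> str:
    """Apply the finite function; inputs outside its domain get no reply."""
    return RULE_BOOK.get(inp, "")

print(run("010101"))  # -> "010101010"
```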
If we all agree that the function the man performs can be set to whatever you wish (i.e., any {input sentences} -> {output sentences}), and we agree that the man doesn't understand Chinese... then the question arises: what is missing?
I don't think the "systems reply" actually deals with this point -- it rather just "assumes it does" by assuming some such system can be specified.
If using a language isn't equivalent to a finite computable function -- what is it?
The systems reply needs to include that the reason a reply is given is that the system is caused to give the reply by the relevant causal history / environment / experiences. I.e., that the system says "it's sunny" because it has been acquainted with the sun, and it observes that the day is sunny.
This shows the "systems reply" prima facie fails: it is no reply at all to "presume that the system can be so-implemented so as to make the systems reply correct". You actually need to show it can be so-implemented. No one has done this.
There are lots of reasons to suppose it cannot be done, not least that most things aren't computable (i.e., via non-determinism, chaos, and the like). Given that the environment is chaotic, it is a profoundly open question whether computers can be built to "respond to the right causes"; computational systems may be incapable of doing this.
If they cannot, then Searle is right. That man, and whatever he may be a part of, will never understand Chinese. It is insufficient to "look up the answers"; "proper causation" is required.
>I don't think the "systems reply" actually deals with this point
To be clear, the point of the systems reply isn't to demonstrate how a computational system can understand language, it is to point out a loophole such that computationalism avoids the main thrust of the Chinese room.
>If using a language isn't equivalent to a finite computable function -- what is it?
In the Chinese room, the man isn't an embodiment of the computable function (algorithm). The man is simply the computational engine of the algorithm. The embodied algorithm includes the state of the "tape", the unbounded stack of paper as "scratch space" used by the man in carrying out the instructions of the algorithm. So it remains an open question whether the finite computable function that implements the language function must "use language" in the relevant sense.
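To illustrate that division of labour, here's a toy Turing-machine sketch (the transition table is made up, not anything from Searle's paper). The man corresponds to the step rule; the embodied algorithm also includes the tape and the machine state:

```python
# Toy Turing-machine sketch: the "man" is only the step rule
# (the engine). The embodied algorithm also includes the tape
# (the stack of scratch paper) and the current state.

# (state, symbol) -> (new_state, symbol_to_write, head_move)
TRANSITIONS = {
    ("start", "1"): ("start", "0", 1),  # flip 1s to 0s, moving right
    ("start", "0"): ("start", "1", 1),  # flip 0s to 1s
    ("start", " "): ("halt", " ", 0),   # blank cell: stop
}

def run(tape: list[str]) -> list[str]:
    state, head = "start", 0
    while state != "halt":
        symbol = tape[head] if head < len(tape) else " "
        state, write, move = TRANSITIONS[(state, symbol)]
        if head < len(tape):
            tape[head] = write
        head += move
    return tape

print(run(list("1011")))  # -> ['0', '1', '0', '0']
```

The behaviour of `run` is a property of the whole triple of engine, state, and tape; the step rule alone computes nothing.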
What reason is there to think that it does use language in the relevant sense? For one, a semantic view of the dynamics of the algorithm as it processes the symbols has predictive power for the output of the system. Such predictive power must be explained. If we assume that the room can in principle respond meaningfully and substantively to any meaningful Chinese input, we can go as far as to say the room is computing over the semantic space of the Chinese language embedded in some environment context encoded in its dictionary. This is because, in the limit of substantive Chinese output for all input, there is no way to duplicate the output without duplicating the structure, in this case semantic structure. The algorithm, in a very straightforward sense, is using language to determine appropriate responses.
What is missing is the man's ability to memorize the function and carry it out without the book. If he could, and the function truly produced a normal response to any Chinese phrase in existence, then he would speak Chinese.
Edit: He would speak Chinese, but he wouldn't understand it. What is missing in order to understand Chinese is an additional program that translates Chinese to English or pictures or some abstract structure independent of language. Humans have this, but the function in this thought experiment is a black box, so we don't know if it uses an intermediate representation. Thanks to colinmhayes for pointing this out.
Could he give a reasoned answer to "what is the weather"? That depends on whether we include external stimuli as part of the function input. If not, then neither a human nor a computer could give a sensible answer beyond "I don't know".
I see now that your issue is really with the function - could a function ever exist that gives responses based on history/environment/experience. My understanding is that such a function is the premise of the thought experiment; it's a hypothetical we accept in order to have this discussion. Searle claims that even if such a function exists, the computer still doesn't truly understand. But if that's what we're asking, my answer would be yes, as long as history/environment/experience are inputs as well. Of course a computer locked away in a room only receiving questions as input can never give a real answer to "what is the weather", just like a human in that situation couldn't. But if we expand our Chinese Room test to include that type of question and also allow the room or computer to take its environment as input, then it can give answers caused by its environment.
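Here's a rough sketch of what "taking the environment as input" could look like (the Environment fields and the reply logic are invented for illustration, not a proposal for how such a system would actually work):

```python
# Sketch of the expanded test: the room's reply is a function of
# the question AND an environment snapshot, so answers can be
# caused by what is actually observed. Fields are invented.
from dataclasses import dataclass

@dataclass
class Environment:
    sunny: bool
    temperature_c: float

def reply(question: str, env: Environment) -> str:
    if question == "what is the weather":
        sky = "sunny" if env.sunny else "overcast"
        return f"It's {sky}, about {env.temperature_c:.0f} degrees."
    return "I don't know."

print(reply("what is the weather", Environment(sunny=True, temperature_c=23.0)))
```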
> You actually need to show it can be so-implemented. No one has done this.
I mean, fair enough. It's fine to say "I won't believe it until I see it", but that pretty much ends the discussion. If we want to talk about whether something not yet achieved is possible, then we need to be willing to imagine things that don't exist yet.
> What is missing is the man's ability to memorize the function and carry it out without the book. If he could, and the function truly produced a normal response to any Chinese phrase in existence, then he would speak Chinese.
No, the function would.
My CPU doesn't speak HTML. My web browser does. Does running my web browser mean the CPU speaks HTML, even if the whole browser is loaded into cache? I don't think it does; the CPU is still running machine code.
If I memorise a Turing machine, does that mean I understand the computation it's performing? No; I'd have to pick it apart and work out what each bit does, then put that meaning back together, in order to try to work out how it works.
Memorising the function would enable the man to teach himself Chinese (whichever language “Chinese” is), just like memorising a good mathematics textbook would allow most people to teach themselves the concepts therein. But memorisation isn't understanding.
> If he could, and the function truly produced a normal response to any Chinese phrase in existence, then he would speak Chinese
I don't necessarily disagree, but it's not so simple. Just because the man speaks Chinese doesn't mean he understands it. He could have figured out the proper output for every possible input without knowing what any of it means. If the next person asks in English "what did you talk about with the last person?", what would he answer, assuming he also speaks English? Really the question comes down to whether the computer is able to write the book by observing, I guess, but even then you could conceive of a different book with instructions on how to write the translation book.
One of you said a few comments up: "then the question arises: what is missing?"
And I just wanna pour us all a cup of tea and sit with that.
It reminds me of the song "anything you can do, I can do better, I can do anything better than you - No you can't! Yes I can!.."
The issue is: tell me what it can't do. Explain to me what the Chinese room is not understanding. Whatever model you use to capture the missing functions can then be "encoded".
I'd like to think my CPU has a brain and it goes off dreaming with the idle threads - but I'm also quite convinced that every single thread that runs machine code is a non-conscious direct representation of the mental model I encoded into its functions.
The point is that the human in the Chinese room is just the hardware, and no one really thinks that any computer hardware on its own understands Chinese. It would obviously be the entire computational system that understands Chinese. The only reason this can even appear to be confusing is that we often casually use "computer" to refer both to an entire computational system (your "implementation of a function") as well as a physical piece of hardware (like the box on the floor with a Dell sticker on it).
Worth noting that computers don't actually compute functions. They are used to simulate computation, but they're not computing per se. Any claim that they do is an anthropomorphic projection onto an object that lacks that capacity.
Kripke's quaddition vs. addition distinction is also a good point in this regard.
This is probably too late a response to ever be read, but with regard to the latter point, Kripke did not have to invent the 'quus' operation, as there are actual examples to be found, for example in modular versus conventional arithmetic, real-valued versus complex-valued functions, etc. These cases do not create any unavoidable problems; we simply disambiguate where necessary. It is a non-issue unless your philosophical intuition depends on the semantics of words being something more fundamental than a convention among the users of a language.
As for your first claim, it appears to be begging the question, predefining, without justification, 'real' computation as being something that only humans can do, while machines can only simulate so doing.