
In the early '90s I did a short undergraduate course on the philosophy of mind. The main text was Searle's The Rediscovery of the Mind. I was never convinced by Searle's Chinese Room thought experiment, and I always wondered whether I lacked the sophistication to understand the argument or whether the argument itself made too much of an appeal to subjective rather than objective reasoning. And so I find it interesting that Searle is not mentioned in the article.



Scott Aaronson has an interesting take on the Chinese Room argument that tackles it from a computational complexity point of view:

Let's say this Chinese rule book does exist, and that it can provide an answer to every single possible question asked of it. It would need to have an answer for every possible combination of words in the Chinese language. That is an unimaginably large amount of data: there are more combinations than there are atoms in the universe. This book and room simply can't exist as a lookup table. Even if they did, you would need robots flying around at the speed of light retrieving information. I wouldn't have trouble calling such a system intelligent.
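To get a rough feel for the scale of that claim, here is a minimal back-of-envelope sketch in Python. The figures (COMMON_CHARS, EXCHANGE_LEN) are illustrative assumptions of mine, not Aaronson's numbers; the only point is how fast a lookup table over conversation histories outgrows the roughly 10^80 atoms in the observable universe.

    # Back-of-envelope sketch: size of a lookup-table "rule book" that maps every
    # possible conversation history to a reply. All figures are rough assumptions.
    COMMON_CHARS = 3000          # ballpark count of Chinese characters in everyday use
    EXCHANGE_LEN = 100           # characters in one conversation history (illustrative)
    ATOMS_IN_UNIVERSE = 10**80   # standard order-of-magnitude estimate

    entries = COMMON_CHARS ** EXCHANGE_LEN   # one table entry per possible input sequence

    print(f"lookup entries ~ 10^{len(str(entries)) - 1}")           # ~10^347
    print("more entries than atoms:", entries > ATOMS_IN_UNIVERSE)  # True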

So if this room were to exist, and was a reasonable size, then there must be some incredible intelligence involved in scaling it down to this size.

He talks about it in his book: Quantum Computing Since Democritus.


For what it's worth, Daniel Dennett spends considerable time in _Consciousness Explained_ (and his other books) deconstructing both why Searle's Chinese Room argument is misleading, and also why it seems intuitively appealing (he calls it an "intuition pump"). I suspect you would find Dennett's treatment of Searle very much aligned with your skepticism!


Could you perhaps summarize Dennett's counterargument to the Chinese Room?


I'd say Dennett's claim is that the Chinese Room isn't an argument at all. The reason it has a guy in the room is that we're used to people understanding language. It applies just as well to components of the brain, like neurons (as currently understood, i.e. primarily mechanistic). There's no "guy" sitting in your brain who understands English. In fact, I think a lot of people miss that Searle's core claim is that there's something fundamental about consciousness, in the same way that there's something fundamental about the foundations of physics. Searle is arguing against the idea that neurons, individually or collectively, can be conscious.


The entire room understands Chinese, not the person in it, just like your individual neurons don't understand Chinese either.

Don't know what Dennett's counter is, but it's probably pretty close to that.


Personally I found the book tedious--Searle makes lots of personal attacks on his philosophical rivals, but he does little to actually refute their ideas. He strikes me as someone who is so invested in his ideas that the possibility that he could be wrong is inconceivable.


I had to look that one up.

It looks like a variant of the philosophical zombie argument, which I always found fascinating because as far as I can see, that one really argues that consciousness is the illusion, not that physicalism is false.

Although I think the main issue might be that the word "illusion" is not really appropriate here. I figure the mind is analogous to a moiré pattern: it's real, it emerges from underlying patterns interacting with each other, it is distinct from those patterns, but it cannot exist without them. Which is probably also why looking for consciousness in the brain is fruitless: you will not find the moiré pattern in its component patterns either.

I was so impressed with myself as an art student that I made an artwork about that. I think that's when I hit peak pretentiousness, glad I switched to programming.

[0] https://en.wikipedia.org/wiki/Chinese_room

[1] https://en.wikipedia.org/wiki/Philosophical_zombie

[2] https://en.wikipedia.org/wiki/Moir%C3%A9_pattern

[3] http://jobleonard.nl/books/Contemplating/


I don't know what your professor's viewpoint was, but my sense is that the Chinese Room, as taught in undergrad classes, is mostly a punching bag. It's a very clever thought experiment, but you're not meant to believe it. My sense is that Searle is more famous in philosophy for his work in philosophy of language.

"subjective rather than objective reasoning." <-- I don't know what you mean by this.


It's purely an attack on symbolic (and possibly neural network-ish/statistical) artificial intelligence. I myself, however, don't think it's a particularly good attack.


> but my sense is that the Chinese Room, as taught in undergrad classes, is mostly a punching bag.

This is more or less how I learned it in my classes. We had to write a paper detailing our understanding of one of the major attacks on the Chinese Room (the systems neurobiology attack was my personal favorite).


In the late '80s, I was interested enough in artificial intelligence to take several classes in philosophy of mind; then and since, I have never been able to distinguish Searle's Chinese Room argument from plain dualism. (The apparently separate idea of "biologicalism" that I've heard from some about Searle and (sort of) from Penrose doesn't make any sense to me.)


Nothing in Searle's argument suggests that consciousness isn't physical. If anything he is less of a dualist than his functionalist critics, since he sees consciousness as being inherently tied to specific kinds of physical system.


The specific argument suggests a tiny homunculus who lives in your head and understands English.

The biologist argument requires something special in organic chemistry, something unknown, that is incompatible with other physics?

Quantum mechanics? Bell's Inequality.


>The specific argument suggests a tiny homunculus who lives in your head and understands English.

I don't see how you are getting that from anything Searle says.

> The biologist argument requires something special in organic chemistry, something unknown, that is incompatible with other physics?

We can't know at our present level of understanding whether or not it will be compatible with current physics. But anyway, perhaps current physics will need some revision before we can understand consciousness. It's hard to rule out any possibility at present.


Searle is expecting you to reject the idea that the system understands Chinese, from your common sense intuition. In his view this conclusion is obvious.


Yes, I know that he rejects the idea that the system understands Chinese. I didn't say otherwise.


Searle's Chinese Room argument is very typical of the circular reasoning that many analytical philosophers, unfortunately, run into.

Of course the person in the room doesn't understand Chinese, just like the individual neurons in my brain don't either.

It's the entire room that's conscious.

One reason the mind is so hard to grasp, and why people like Searle end up with something like the Chinese Room argument (which is, to be honest, a very sloppy argument), is our obsession with turning the mind into a thing that can be located.

A much more fruitful way to think about the mind is as a pattern recognizing feedback loop and then reason from there.

That also gives you a much better way to think about evolution and how not just the conscious but the self-aware conscious mind came to be.

Gregory Bateson has some really interesting thoughts on that IMO.


Searle might have been justified in asking this kind of question in the early '80s, but more recent advances in neuroscience render the consciousness-first approach to intelligence increasingly irrelevant.


What are "subjective reasoning" and "objective reasoning"? It's not clear what you mean.


I think Searle's central argument still stands: computers as we know them today can only simulate the human mind but can never be conscious, since they cannot experience qualia like humans and probably other animals do.

I have always found Dennett completely unconvincing. He falls into the behaviorism trap: since we cannot objectively observe consciousness, it does not exist.


How do you know that they can't experience qualia? How do you know that anybody besides you can?


Searle's original argument doesn't involve a digital computer. What is there in the Chinese Room which experiences the qualia someone who understands Chinese does? There's just a person, who tells you he doesn't, and a printed algorithm, which is just ink on paper. Is there a disembodied consciousness?


>What is there in the Chinese Room which experiences the qualia someone who understands Chinese does?

The system instantiated by the book, the man performing instructions, his memory and/or external memory aids, etc. The key point is that there is a new system that produces and consumes semantic information that is entirely distinct from the man. For example, if it were asked to describe a childhood experience, no experience from the man's childhood would be communicated.


The system is only an abstraction though, only existing in the mind of the observer, e.g. the Chinese speaker outside the room. The man's brain is processing the information, even if he doesn't understand its meaning. You seem to be arguing that consciousness arises whenever information is processed by whatever means, and is a property of the process and not the processor per se. It's an interesting argument but I'm not convinced by it.


>only existing in the mind of the observer, e.g. the Chinese speaker outside the room.

Why do you think this? Each component of the system has physical embodiment and causal efficacy. It's just as "real" as the man himself.

>You seem to be arguing that consciousness arises whenever information is processed by whatever means

The Chinese Room experiment isn't about consciousness specifically, it's about semantics (meaning) vs syntax. Whether or not something that understands is conscious is a different question.


What's your take on brain simulation?


The person in the Chinese Room could in principle simulate the brain of a Chinese speaker, but would still not know what it is like to understand the Chinese words he's processing.

Note the distinction between "understand" and "knows what it is like to understand". I accept that the system of man + algorithm does understand Chinese. But it does not know what it is like.


Assume, for the sake of the argument, that we'll eventually have computers powerful enough to simulate brains to the accuracy we think necessary for it to function.

There are several ways this experiment could go:

First, we could fail to produce a mind, because there's some secret sauce to it we're not aware of (eg a God-given soul).

Second, we could produce a zombie, indistinguishable from a conscious individual while actually not being conscious (though note that we'd have to treat it as if it were conscious, for reasons that should be obvious).

Third, we could produce a conscious mind.

I'm in the camp that thinks option three is plausible.

Let's assume I'm right. Now, instead of a supercomputer, give every one of the billions of humans on the planet an instruction manual, a pocket calculator and a phone to communicate results, and have them do the exact same calculations the supercomputer would do. Despite the latency, if option three were true, we should expect that this would still produce a conscious mind, albeit a rather slowly-thinking one.
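For a rough sense of just how slowly, here is a hedged order-of-magnitude sketch in Python; the constants (synaptic operation rate, number of people, operations per person per second) are ballpark assumptions of mine, not figures from the thought experiment itself.

    # Rough order-of-magnitude estimate of the "humanity computer" slowdown.
    # All constants are assumptions, not measured values.
    SYNAPTIC_OPS_PER_SEC = 1e14    # commonly cited ballpark for one human brain
    HUMANS = 8e9                   # people doing the hand calculations
    OPS_PER_HUMAN_PER_SEC = 0.1    # one calculator operation every ten seconds

    slowdown = SYNAPTIC_OPS_PER_SEC / (HUMANS * OPS_PER_HUMAN_PER_SEC)
    print(f"slowdown factor ~ {slowdown:.0e}")                     # ~1e+05
    print(f"one subjective second ~ {slowdown / 86400:.1f} days")  # ~1.4 days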


> The person in the Chinese Room could in principle simulate the brain of a Chinese speaker, but would still not know what it is like to understand the Chinese words he's processing.

This is addressed by the Turing Test: the point is that if there is no observable difference, does it matter? You can't tell if other people understand Chinese either.

Personally, the only logical conclusion I can draw is panpsychism. I think a rock rolling downhill experiences free will and perceives its random path as a series of choices.


That's not the only logical conclusion one can draw: Consciousness could be an emergent phenomenon requiring a certain amount of complexity that an individual rock lacks.


Of course I cannot know that, any more than I can know that any other object doesn't experience qualia -- a chair, say, or a thermostat.


I fail to observe the difference between simulating a human mind on a computer and on a synaptic network. Why would a program run differently depending on the underlying hardware? (assuming sufficiently faithful simulation as opposed to some approximation)


This comment is relevant: https://www.reddit.com/r/artificial/comments/5hmduk/prof_sch....

> For example, nuclear energy is a real physical phenomenon. It doesn't exist because of various abstract relations - i.e., simulating a nuclear reactor in a computer doesn't mean you have nuclear energy. We know that matter of a specific kind arranged in a specific way creates nuclear energy.

> do strong AI proponents think that causal relationships must be involved to run a program to make it conscious? Or is it enough for the abstract relationships to exist? For example, how about a computer program written down on a piece of paper? Yes or no? Why is the physical running of it important? If so, please explain the physics of how running it in dominoes, water valves or transistors all produce the same phenomenon. If not, does this mean that any abstract set of relations is also conscious - the program on a piece of paper? Doesn't that then also mean that there are an infinitude of consciousnesses since an infinitude of abstract relations exist between all of the bits of matter in the universe?

Analogy: Simulating fluid dynamics on a computer does not mean the computer becomes wet. Simulating a black hole on a computer does not mean the computer starts curving the spacetime around it. Simulating an electric field on a computer does not mean the computer creates an electric field. Simulating a brain on a computer may or may not mean the computer creates consciousness.


Searle's argument is that consciousness is not a function of the program (there is no "program" in the human mind), but a function of the hardware. Though just how the meat machine that is our brain produces consciousness is very poorly understood, it stands to reason that a computer is so different from it physically that it's no more likely a computer can achieve consciousness than a stone or a car.


To me that sounds more like an immediate presupposition than an argument. I don't see why I'd favor an explanation of "consciousness is computation within a special magic meat brain" over simply "consciousness is computation", at least until we determine that brains run on special meat magic, of which so far there appears to be little evidence. I'm also not entirely convinced by implicit promises that "we'll discover this later when our understanding of brains is better".


Searle's point is that given what we currently understand about consciousness (i.e. nothing), there is no reason at all to think that consciousness will arise in any physical system that implements a particular kind of program. You talk about "magic meat brains", but one could just as well talk about "magic programs", since we have no idea how implementing a particular set of computations could give rise to conscious experience. Searle's bet is that the specific physical properties of the brain will turn out to be important for consciousness. Can we be sure of that? Of course not. But everyone in this domain is guessing.


> specific physical properties of the brain will turn out to be important for consciousness.

But we do know something about the components of brains and they don't seem to exhibit any features that cannot in principle be computed.

>But everyone in this domain is guessing.

Sure we're guessing, but Searle goes further to rule out potential solutions based on assumptions of specialness of brains.


>But we do know something about the components of brains and they don't seem to exhibit any features that cannot in principle be computed.

Sure, but so what? Neither do rain storms, but as Searle puts it, no-one would expect a computer simulation of a rain storm to actually make anyone wet. If conscious brains can be simulated, that in no way entails that simulations of conscious brains are themselves conscious.

It seems to be a common misreading of Searle that he thinks that brains can’t be simulated on computers. The starting point of the Chinese room argument is to concede for the sake of argument that this is possible.


>Neither do rain storms, but as Searle puts it, no-one would expect a computer simulation of a rain storm to actually make anyone wet.

Of course not, but that's because the simulation doesn't have the right kind of causal relationship with my head to cause my head to get wet. But it might have the right kind of causal relationship with a simulated head to cause it to get wet.

The analogy doesn't hold in the case of consciousness because all that is at stake is its own access to phenomenal properties. There is no causal category mismatch like there is with simulated rain and my physical head. If phenomenal properties are purely relational or functional, then the right kind of simulation will be conscious.


It’s virtually a contradiction in terms to say that a phenomenal property is purely relational or functional. If it were obvious, or even plausible, that phenomenal properties were purely relational or functional, then no-one would be worrying about their implications for the philosophy of mind!


It seems contradictory because of our intuitions regarding what is required for phenomenal properties, i.e. access to some non-physical substance. But these intuitions don't rule out functionalism if there's a functional explanation for why we have these intuitions.

Stated more clearly: the explanandum here is the "seemings" of phenomenal properties, not the ontology of phenomenal properties. Functionalism needs to explain why the proposition "it seems to i that P" is true, where "i" indexes a conscious system and P is a phenomenal property.


I don't have any intuitions about non-physical substances. It's just not possible to even define what a phenomenal property is without making reference to non-relational or non-functional notions. Note that Searle is a physicalist, so he certainly doesn't think that non-physical properties are relevant here.

> Functionalism needs to explain why the proposition "it seems to i that P" is true, where "i" indexes a conscious system and P is a phenomenal property.

We aren't trying to explain why people think that they are conscious, we are trying to explain why they are conscious. A non-conscious system could believe that it was conscious, given a functionalist account of belief.


> It's just not possible to even define what a phenomenal property is without making reference to non-relational or non-functional notions.

We haven't really defined it at all. The only thing we can do is reference in some manner the thing we all presumably share. We're all trying to get at the nature of that thing.

>Note that Searle is a physicalist

In name only. He has strange beliefs on the subject that are hard to pin down.

>We aren't trying to explain why people think that they are conscious, we are trying to explain why they are conscious.

This isn't a meaningful distinction, due to the ambiguity in what consciousness actually is. Our usual casual talk about consciousness often assumes more than is warranted. In actuality, the only definite explanandum is that "it seems that [phenomenal property]". Cashing out exactly what this means, and how it is the case that "it seems that [phenomenal property]", is what properly done philosophy of mind is about. Chalmers' recent paper on the meta-problem of consciousness spells this out well: https://philpapers.org/archive/CHATMO-32.pdf


>We're all trying to get at the nature of that thing.

Sure, I'm not worried about defining the term 'phenomenal property'. My point is that functionalist accounts of the property of e.g. 'being in pain' are transparently not accounts of anything we can recognize as potentially being a phenomenal property. I'm therefore baffled by your suggestion that phenomenal properties might be 'purely relational or functional'. It's all very well to say that a seeming contradiction might turn out not to be a real contradiction, but in the absence of any positive argument to this effect, why assume so? You suggest that there may be a functional explanation for why we believe there's a contradiction, but that isn't any use unless you can conjoin it with an argument that there is not in fact any contradiction.

>In name only. He has strange beliefs on the subject that are hard to pin down.

None of Searle's arguments against functionalism rely on the premise that physicalism is false. It's perfectly clear that one could accept the full force of the Chinese room argument while being a physicalist in the strictest imaginable sense. The conclusion of the argument is that consciousness (or 'understanding') cannot be the result of merely executing a particular program. Nothing Searle says suggests that the extra ingredient needs to be something non-physical.

>In actuality, the only definite explanandum is that "it seems that [phenomenal property]"

There isn't any distinction between feeling pain and seeming to feel pain. Chalmers agrees on this point ("illusionism is obviously false", p. 35). In fact, section 6 of the paper is devoted to arguing against exactly what you are arguing for in your last paragraph. A solution to the metaproblem won't help very much because, as Chalmers puts it:

> On my view, consciousness is real, and explaining our judgments about consciousness does not suffice to solve or dissolve the problem of consciousness.


>My point is that functionalist accounts of the property of e.g. 'being in pain' are transparently not accounts of anything we can recognize as potentially being a phenomenal property.

To me this is assuming more than we have a right to. It's presupposing an ontological distinction between phenomenal appearances and functional processing. Sure, we have an intuition that says they are distinct as well as explanatory gaps that cause us to question such an identity. But that in itself isn't enough.

>but in the absence of any positive argument to this effect, why assume so?

Depends on what you consider a positive argument. There are certainly many reasons to prefer a functionalist approach, e.g. it's the most parsimonious way to cash out the correlations between brain states and phenomenal states, it doesn't suffer from the combination problem, inverted spectrum/valence problems, etc.

>None of Searle's arguments against functionalism rely on the premise that physicalism is false.

It depends on what you mean by physicalism. Often there's an ambiguity between physicalism and materialism, so let me clarify. Physicalism is the idea that brain processes are logically identical to phenomenal properties. To put it another way, there's no possible world where you have physical properties identical to this world that doesn't have consciousness. But physics tells us that physical properties just are sets of physical interactions. Thus physicalism leads directly to substance independence, and so a perfect simulation of a brain would be conscious.

>There isn't any distinction between feeling pain and seeming to feel pain.

I agree, but the point of the "it seems that P" is to cast the problem in a theory-neutral manner. You take functionalism to be a non-starter as an explanation of consciousness, and so you're quick to erase the "it seems that..." from the equation. This wording is necessary to avoid our biases from infecting our language thus biasing the investigation. The point of referencing Chalmers paper was for his exploration of the problem space that was theory-neutral. I disagree with Chalmers' conclusions but he always offers good exposition.


> It's presupposing an ontological distinction between phenomenal appearances and functional processing. Sure, we have an intuition that says they are distinct as well as explanatory gaps that cause us to question such an identity. But that in itself isn't enough.

Right, we agree on where we disagree. Probably not much point in hashing out this long-standing philosophical debate in HN comments.

>Physicalism is the idea that brain processes are logically identical to phenomenal properties.

That's a rather boutique definition of physicalism. Standard definitions (e.g. "the thesis that everything is physical", according to the Stanford Encyclopedia) make no reference at all to the brain.

>But physics tells us that physical properties just are sets of physical interactions. Thus physicalism leads directly to substance independence

Hmm, that seems like a total non sequitur to me. I'm sure people have constructed arguments to that effect, but it's very far from obvious that physicalism entails substance independence. Searle certainly doesn't think so.

I don't really understand your last paragraph. As far as I can tell, you don't think we should start with the assumption that people feel pain, but only with the assumption that it seems to people that they feel pain. But that just dodges the main issue. If people don't really feel pain, then most of the philosophical problems we're talking about dissolve immediately. On the other hand, if they do in fact feel pain, then these problems remain, regardless of whether or not we have an account of why it seems to them that they feel pain. As Chalmers says, to really get anywhere with this line of argument, you end up having to deny that people really do feel pain -- which is absurd.


>That's a rather boutique definition of physicalism... make no reference at all to the brain.

Perhaps my definition was too on the nose. Going by the SEP: "Physicalism is the thesis that everything is physical, or as contemporary philosophers sometimes put it, that everything supervenes on the physical". Supervenes on the physical, at least in the context of consciousness, implies substance independence of consciousness. Chalmers depends on this understanding in his zombie argument.

Just to belabor the point, we can posit some property of physical matter that entails conscious experience only in certain kinds of physical processes (say in biological brains but not in microchips). This difference-making property's actions either are or are not mediated through physical interactions. If it is mediated through physical interactions, then we can include those interactions in the simulation, thus the property obtains in microchips, contradicting the premise. If it is not mediated through physical interactions, then by definition its mode of influence is non-physical. But this contradicts the premise that "everything [including the mind] supervenes on the physical" (i.e. we have a change in consciousness but no change in physical properties).

Chalmers cashes out such a difference-making property of physical matter as his microphenomenal properties in panpsychism. But importantly, this isn't Physicalism (his zombie argument is specifically against Physicalism). He argues for an expanded notion of the physical world that includes microphenomenal properties at the base. Just to tie all this back to Searle, from this discussion it seems clear that Searle cannot be a physicalist under this understanding.

>I don't really understand your last paragraph. As far as I can tell, you don't think we should start with the assumption that people feel pain, but only with the assumption that it seems to people that they feel pain.

I agree that people feel pain. The problem is that there is some ambiguity in common usage that can mislead and confuse the issue. For example, if I stub my toe and I shout ouch, you would say that I'm in pain. But in the context of philosophy of mind, we can't assume that from only outward appearances. So my characterization is an effort to shed all possibility of misunderstanding by zeroing in on the phenomenal character of the thing, as well as the only indisputable statement of fact that describes our relationship to the phenomenal character.

So in this case "I am in pain" is operationalized as "It seems that [pain quale]". The "it seems that..." is important because we cannot be mistaken about seemings, i.e. it is not possible that an evil demon or simulation can trick us into thinking something seems a certain way while not being the case that it seems that way. But once phenomenal consciousness is operationalized as such, it becomes clear that functional explanations cannot be ruled out simply by definition.


So far as I can see, your argument that physicalism entails substance independence works equally well for the substance independence of wetness:

“We can posit some property of physical matter that entails wetness only in certain kinds of physical processes (say in rain storms but not in microchips). This difference-making property's actions either are or are not mediated through physical interactions. If it is mediated through physical interactions, then we can include those interactions in the simulation, thus the property obtains in microchips, contradicting the premise. If it is not mediated through physical interactions, then by definition its mode of influence is non-physical. But this contradicts the premise that "everything [including rainstorms] supervenes on the physical" (i.e. we have a change in wetness but no change in physical properties).”

>So in this case "I am in pain" is operationalized as "It seems that [pain quale]"

That is not operationalization in the usual sense, since it isn’t possible to observe which qualia are or aren’t seeming to someone.

>The "it seems that..." is important because we cannot be mistaken about seemings

That depends on how you cash out the formula “it seems that [quale]”. If it’s an attribution of a propositional attitude, then it certainly could be mistaken. If it’s not an attribution of a propositional attitude, then I don’t know what it means. Or at least, I don’t know how “It seems to John that [quale]” differs from “John is experiencing [quale]”, or how formulating things this way helps anything.

I don't think there is in fact any ambiguity in common usage. "John is in pain" unambiguously means that John is undergoing a particular sensation, not that he is exhibiting a particular kind of behavior. The latter interpretation of the statement is entertained only in the work of certain behaviorist/verificationist philosophers.


>So far as I can see, your argument that physicalism entails substance independence works equally well for the substance independence of wetness:

I have no problem with this if the terms are properly understood. Wetness as a relational property can obtain in a sufficiently precise simulation. Wetness as a metaphysical property can't obtain in a simulation because the term includes suppositions of ontological grounding that interactions of electrical signals don't satisfy (why my head can't get wet from simulated water). If we note this distinction then there is no issue or reductio.

Let me ask you this: if you take it that there is some difference-making physical property of matter for consciousness that doesn't manifest as a kind of physical interaction studied by physics, then what is the nature of that property? Physics tells us that physical properties are defined by their interactions. So if we rule out the standard kind of influence that physical properties have, then whatever is left is necessarily unobservable. This is plainly non-physical influence if you take physics seriously.

>That is not operationalization in the usual sense, since it isn’t possible to observe which quales are or aren’t seeming to someone.

It's not measurable in practice, but it might be in principle if physicalism or materialism is true (i.e. influence should be measurable).

>That depends on how you cache out the formula “it seems that [quale]”. If it’s an attribution of a propositional attitude, then it certainly could be mistaken.

Right, attributing such a statement externally could be mistaken. But we do know that such a self-report, if genuine, necessarily is correct. And so in the context of analyzing consciousness, we can assume genuine reports of phenomenal properties and so attribution isn't an issue.

> "John is in pain" unambiguously means that John is undergoing a particular sensation, not that he is exhibiting a particular kind of behavior.

The issue is less about ambiguity (I know I used that word), and more about bias. If we want to avoid bias in our specification of the problem, we need to use theory-neutral language. "John is experiencing pain" isn't theory-neutral, as it assumes John and pain are distinct things rather than pain being a state or property of John.

Some random thoughts on the subject (feel free to skip):

With my "it seems that [quale]" formulation and the idea that genuine self-reports of that nature cannot be mistaken, some interesting questions arise. In Chalmers' zombie argument, a zombie would give utterances of this sort exactly like we do. But on what basis can we say that their reports are not genuine without begging the question against physicalism? On the other hand, if we say their reports are not genuine, neither are ours! Actual phenomenal properties by assumption play no causal role in our behavioral reports of phenomenal properties, and so the utterances are not genuine. When we think "It seems that [quale]" we are correct, but every time we create a physical artifact to that effect, we are wrong. What a strange state of affairs non-physicalism is.

Under what conditions are we justified in believing phenomenal reports are genuine? If I were to evolve or train an artificial neural network that, without any explicit training, started to make such phenomenal reports, are we justified in believing it genuine?


I see it as contradictory to saying that your velocity exists only relative to an observer, and that we can't ever observe your "fundamental" velocity. I think it's okay to have a relative view the world, which to me is the same thing as relational. Some might say that your innate properties can be summed up by graph relations to all things you have a relationship to.


But hardware can be simulated in software.


> cannot expereince qualia

says the human....

Anyway, that line of work leads to dead ends; that's why it didn't gain traction or usefulness in neuroscience.


Everyone in this thread keeps claiming that the Chinese Room argument is bad. Nobody says why.


Dennett does a pretty good job (to my mind) of undermining it in Consciousness Explained: you can read part of the relevant section on Google Books [0].

Or this SMBC [1] summarises the argument pretty well, if somewhat sarcastically.

[0] https://books.google.co.uk/books?redir_esc=y&id=d2P_QS6AwgoC...

[1] https://www.smbc-comics.com/comic/the-chinese-room


Searle's argument depends on our intuition of the distinction between syntax and semantics, but this distinction seems very outdated to me. It's an easy intuition to have in 1980, but it's hard to defend in 2018. It's true that syntax itself isn't enough for semantics, but syntax is a component of a system that recognizes semantics, i.e. low-level rule-based operations that in aggregate entail semantics. So to say that, because computers operate on syntactic structures, they can never understand semantics is just a mistake on many levels.


Searle does not prove that the distinction between "actually understanding Chinese symbols" and "simulating the ability to understand Chinese symbols" is a real one. Physicalists, including me, believe this is just a confusion generated from our inside view of our consciousness, and that "really understanding" is just something a computer program thinks it can do once it gets complex enough to be conscious. At the bottom level it's still just pushing symbols around via syntactic rules (in the case of the mind, natural law).


> "really understanding" is just something a computer program thinks it can do once it gets complex enough to be conscious.

At the same time, consciousness might not be a requisite of higher intelligence at all; it could merely have been evolutionarily advantageous early on in the development of complex brains because of our natural environment... it's hard to imagine an intelligent animal with no "me" program doing very well.

But maybe a digital intelligence (one that did not evolve having to worry about feeding itself, acquiring rare resources, mating, communicating socially, etc.) would have no use for a central "me" program that "really experiences" things.

Such a creature is kind of eerie to think about.


Anyone that claims it's bad/outdated/unequivocally wrong probably doesn't have the requisite background to make such a claim, let alone any academic training in the field. The Chinese Room is still studied and debated vigorously. For example, Baggini thinks that Searle's Chinese Room is a knock-down argument against functionalism. Dennett, on the other hand, thinks it's bunk.



