The Best Books on the Philosophy of Mind (fivebooks.com)
211 points by prismatic on March 18, 2018 | 186 comments



In the early '90s I did a short undergraduate course on the philosophy of mind. The main text was Searle's The Rediscovery of the Mind. I was never convinced by Searle's Chinese room thought experiment, and I always wondered whether I lacked the sophistication to understand the argument or whether the argument itself made too much of an appeal to subjective rather than objective reasoning. And so I find it interesting that Searle is not mentioned in the article.


Scott Aaronson has an interesting take on the Chinese room argument that tackles it from a computational complexity point of view:

Let's say this Chinese rule book does exist, and that it can provide an answer to every single possible question asked of it. It would need to have an answer for every possible combination of words in the Chinese language. This is an unimaginably large amount of data: there are more combinations than there are atoms in the universe. This book and room simply can't exist as a lookup table. Even if they did, you would need robots flying around at the speed of light retrieving information. I wouldn't have trouble calling such a system intelligent.

So if this room were to exist, and was a reasonable size, then there must be some incredible intelligence involved in scaling it down to this size.

He talks about it in his book: Quantum Computing Since Democritus.
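Aaronson's combinatorial point is easy to make concrete with rough numbers. A minimal sketch, assuming a 3,000-character vocabulary and questions up to 30 characters long (illustrative figures of my own, not Aaronson's):

```python
# Rough illustration of why a lookup-table Chinese Room can't physically exist.
# The figures below are hypothetical: ~3,000 common Chinese characters and
# questions up to 30 characters long (not Aaronson's own numbers).
vocab_size = 3000
question_length = 30

combinations = vocab_size ** question_length   # possible 30-character questions
atoms_in_universe = 10 ** 80                   # commonly cited rough estimate

# Even for short questions, the lookup table dwarfs the observable universe.
print(f"possible questions: about 10^{len(str(combinations)) - 1}")
print(combinations > atoms_in_universe)        # True
```

Even granting a vocabulary a hundred times smaller, the exponent barely moves; any physically realizable room must compress this table enormously, which is where Aaronson locates the intelligence.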


For what it's worth, Daniel Dennett spends considerable time in _Consciousness Explained_ (and his other books) deconstructing both why Searle's Chinese Room argument is misleading, and also why it seems intuitively appealing (he calls it an "intuition pump"). I suspect you would find Dennett's treatment of Searle very much aligned with your skepticism!


Could you perhaps summarize Dennett's counter argument to Chinese Room?


I'd say Dennett's claim is that Chinese Room isn't an argument at all. The reason it has a guy in the room is because we're used to people understanding language. It applies just as well to components of the brain, like neurons (as currently understood - i.e. primarily mechanistic). There's no "guy" sitting in your brain who understands English. In fact I think a lot of people miss that Searle's core claim is that there's something fundamental about consciousness in the same way that there's something fundamental about the foundations of physics. Searle is arguing against the idea that neurons, individually or collectively, can be conscious.


The entire house understands Chinese, not the person in it, just like your individual neurons don't understand Chinese either.

Don't know what Dennett's counter is, but it's probably pretty close to that.


Personally I found the book tedious--Searle makes lots of personal attacks on his philosophical rivals, but he does little to actually refute their ideas. He strikes me as someone who is so invested in his ideas that the possibility that he could be wrong is inconceivable.


I had to look that one up.

It looks like a variant of the philosophical zombie argument, which I always found fascinating because as far as I can see, that one really argues that consciousness is the illusion, not that physicalism is false.

Although I think the main issue might be that the word "illusion" is not really appropriate here. I figure the mind is analogous to a moiré pattern: it's real, it emerges from underlying patterns interacting with each other, it is distinct from those patterns, but it cannot exist without them. Which is probably also why looking for consciousness in the brain is fruitless: you will not find the moiré pattern in its component patterns either.

I was so impressed with myself as an art student that I made an artwork about that. I think that's when I hit peak pretentiousness, glad I switched to programming.

[0] https://en.wikipedia.org/wiki/Chinese_room

[1] https://en.wikipedia.org/wiki/Philosophical_zombie

[2] https://en.wikipedia.org/wiki/Moir%C3%A9_pattern

[3] http://jobleonard.nl/books/Contemplating/


I don't know what your professor's viewpoint was, but my sense is that the Chinese Room, as taught in undergrad classes, is mostly a punching bag. It's a very clever thought experiment, but you're not meant to believe it. My sense is that Searle is more famous in philosophy for his work in philosophy of language.

"subjective rather than objective reasoning." <-- I don't know what you mean by this.


It's purely an attack on symbolic (and possibly neural network-ish/statistical) artificial intelligence. I myself, however, don't think it's a particularly good attack.


> but my sense is that the Chinese Room, as taught in undergrad classes, is mostly a punching bag.

this is more or less how I learned it in my classes. we had to write a paper detailing our understanding of one of the major attacks against the chinese room (the systems neurobiology attack was my personal favorite)


In the late '80s, I was interested enough in artificial intelligence to take several classes in philosophy of mind; then and since, I have never been able to distinguish Searle's Chinese room argument from plain dualism. (The apparently separate idea of "biologicalism" that I've heard from some about Searle and (sort of) from Penrose doesn't make any sense to me.)


Nothing in Searle's argument suggests that consciousness isn't physical. If anything he is less of a dualist than his functionalist critics, since he sees consciousness as being inherently tied to specific kinds of physical system.


The specific argument suggests a tiny homunculus who lives in your head and understands English.

The biologist argument requires something special in organic chemistry, something unknown, that is incompatible with other physics?

Quantum mechanics? Bell's Inequality.


>The specific argument suggests a tiny homunculus who lives in your head and and understands English.

I don't see how you are getting that from anything Searle says.

> The biologist argument requires something special in organic chemistry, something unknown, that is incompatible with other physics?

We can't know at our present level of understanding whether or not it will be compatible with current physics. But anyway, perhaps current physics will need some revision before we can understand consciousness. It's hard to rule out any possibility at present.


Searle is expecting you to reject the idea that the system understands Chinese, from your common sense intuition. In his view this conclusion is obvious.


Yes, I know that he rejects the idea that the system understands Chinese. I didn't say otherwise.


Searle's Chinese Room argument is very typical of the circular reasoning that many analytical philosophers, unfortunately, run into.

Of course the person in the room doesn't understand Chinese, just like the individual neurons in my brain don't either.

It's the entire house that's conscious.

One reason the mind is so hard to grasp, and why people like Searle end up with something like the Chinese Room argument (which is, to be honest, a very sloppy argument), is our obsession with turning it into a thing that can be located.

A much more fruitful way to think about the mind is as a pattern recognizing feedback loop and then reason from there.

That also gives you a much better way to think about evolution, and about how not just the conscious but the self-aware conscious mind came to be.

Gregory Bateson has some really interesting thoughts on that IMO.


Searle might have been justified in asking these kinds of questions in the early '80s, but more recent advances in neuroscience render the consciousness-first approach to intelligence increasingly irrelevant.


What are "subjective reasoning" and "objective reasoning"? It's not clear what you mean.


I think Searle's central argument still stands: computers as we know them today can only simulate the human mind but can never be conscious, since they cannot experience qualia as humans and probably other animals do.

I have always found Dennett completely unconvincing. He falls into the behaviorism trap: since we cannot objectively observe consciousness, it does not exist.


How do you know that they can't experience qualia? How do you know that anybody besides you can?


Searle's original argument doesn't involve a digital computer. What is there in the Chinese Room which experiences the qualia someone who understands Chinese does? There's just a person, who tells you he doesn't, and a printed algorithm, which is just ink on paper. Is there a disembodied consciousness?


>What is there in the Chinese Room which experiences the qualia someone who understands Chinese does?

The system instantiated by the book, the man performing instructions, his memory and/or external memory aids, etc. The key point is that there is a new system that produces and consumes semantic information that is entirely distinct from the man. For example, if a question was asked to describe a childhood experience, no experience from the man's childhood would be communicated.


The system is only an abstraction though, only existing in the mind of the observer, e.g. the Chinese speaker outside the room. The man's brain is processing the information, even if he doesn't understand its meaning. You seem to be arguing that consciousness arises whenever information is processed by whatever means, and is a property of the process and not the processor per se. It's an interesting argument but I'm not convinced by it.


>only existing in the mind of the observer, e.g. the Chinese speaker outside the room.

Why do you think this? Each component of the system has physical embodiment and causal efficacy. It's just as "real" as the man himself.

>You seem to be arguing that consciousness arises whenever information is processed by whatever means

The Chinese room experiment isn't about consciousness specifically; it's about semantics (meaning) vs. syntax. Whether or not something that understands is conscious is a different question.


What's your take on brain simulation?


The person in the Chinese Room could in principle simulate the brain of a Chinese speaker, but would still not know what it is like to understand the Chinese words he's processing.

Note the distinction between "understands" and "knows what it is like to understand". I accept that the system of man + algorithm does understand Chinese. But it does not know what it is like.


Assume, for the sake of the argument, that we'll eventually have computers powerful enough to simulate brains to the accuracy we think necessary for it to function.

There are several ways this experiment could go:

First, we could fail to produce a mind, because there's some secret sauce to it we're not aware of (eg a God-given soul).

Second, we could produce a zombie, indistinguishable from a conscious individual while actually not being conscious (though note that we'd have to treat it as if it were conscious, for reasons that should be obvious).

Third, we could produce a conscious mind.

I'm in the camp that thinks option three is plausible.

Let's assume I'm right. Now, instead of a supercomputer, give every one of the billions of humans on the planet an instruction manual, a pocket calculator and a phone to communicate results, and have them do the exact same calculations the supercomputer would do. Despite the latency, if option three were true, we should expect that this would still produce a conscious mind, albeit a rather slowly-thinking one.


> The person in the Chinese Room could in principle simulate the brain of a Chinese speaker, but would still not know what it is like to understand the Chinese words he's processing.

This is addressed by the Turing Test: the point is that if there is no observable difference, does it matter? You can't tell if other people understand Chinese either.

Personally, the only logical conclusion I can draw is panpsychism. I think a rock rolling down hill experiences free will and perceives its random path as a series of choices.


That's not the only logical conclusion one can draw: Consciousness could be an emergent phenomenon requiring a certain amount of complexity that an individual rock lacks.


Of course I cannot know that more than I can know that any other object doesn't experience qualia -- a chair, say, or a thermostat.


I fail to observe the difference between simulating a human mind on a computer and on a synaptic network. Why would a program run differently depending on the underlying hardware? (assuming sufficiently faithful simulation as opposed to some approximation)


This comment is relevant: https://www.reddit.com/r/artificial/comments/5hmduk/prof_sch....

> For example, nuclear energy is a real physical phenomenon. It doesn't exist because of various abstract relations - i.e., simulating a nuclear reactor in a computer doesn't mean you have nuclear energy. We know that matter of a specific kind arranged in a specific way creates nuclear energy.

> do strong AI proponents think that causal relationships must be involved to run a program to make it conscious? Or is it enough for the abstract relationships to exist? For example, how about a computer program written down on a piece of paper? Yes or no? Why is the physical running of it important? If so, please explain the physics of how running it in dominoes, water valves or transistors all produce the same phenomenon. If not, does this mean that any abstract set of relations is also conscious - the program on a piece of paper? Doesn't that then also mean that there are an infinitude of consciousnesses since an infinitude of abstract relations exist between all of the bits of matter in the universe?

Analogy: Simulating fluid dynamics on a computer does not mean the computer becomes wet. Simulating a black hole on a computer does not mean the computer starts curving the spacetime around it. Simulating an electric field on a computer does not mean the computer creates an electric field. Simulating a brain on a computer may or may not mean the computer creates consciousness.


Searle's argument is that consciousness is not a function of the program (there is no "program" in the human mind) but a function of the hardware. Though just how the meat machine that is our brain produces consciousness is very poorly understood, it stands to reason that a computer is so different from it physically that it's no more likely that a computer can achieve consciousness than a stone or a car.


To me that sounds more like an immediate presupposition than an argument. I don't see why I'd favor an explanation of "consciousness is computation within a special magic meat brain" over simply "consciousness is computation", at least until we determine that brains run on special meat magic, of which so far there appears to be little evidence. I'm also not entirely convinced by implicit promises that "we'll discover this later when our understanding of brains is better".


Searle's point is that given what we currently understand about consciousness (i.e. nothing), there is no reason at all to think that consciousness will arise in any physical system that implements a particular kind of program. You talk about "magic meat brains", but one could just as well talk about "magic programs", since we have no idea how implementing a particular set of computations could give rise to conscious experience. Searle's bet is that the specific physical properties of the brain will turn out to be important for consciousness. Can we be sure of that? Of course not. But everyone in this domain is guessing.


> specific physical properties of the brain will turn out to be important for consciousness.

But we do know something about the components of brains and they don't seem to exhibit any features that cannot in principle be computed.

>But everyone in this domain is guessing.

Sure we're guessing, but Searle goes further to rule out potential solutions based on assumptions of specialness of brains.


>But we do know something about the components of brains and they don't seem to exhibit any features that cannot in principle be computed.

Sure, but so what? Neither do rain storms, but as Searle puts it, no-one would expect a computer simulation of a rain storm to actually make anyone wet. If conscious brains can be simulated, that in no way entails that simulations of conscious brains are themselves conscious.

It seems to be a common misreading of Searle that he thinks that brains can’t be simulated on computers. The starting point of the Chinese room argument is to concede for the sake of argument that this is possible.


>Neither do rain storms, but as Searle puts it, no-one would expect a computer simulation of a rain storm to actually make anyone wet.

Of course not, but that's because the simulation doesn't have the right kind of causal relationship with my head to cause my head to get wet. But it might have the right kind of causal relationship with a simulated head to cause it to get wet.

The analogy doesn't hold in the case of consciousness because all that is at stake is its own access to phenomenal properties. There is no causal category mismatch like there is with simulated rain and my physical head. If phenomenal properties are purely relational or functional, then the right kind of simulation will be conscious.


It’s virtually a contradiction in terms to say that a phenomenal property is purely relational or functional. If it were obvious, or even plausible, that phenomenal properties were purely relational or functional, then no-one would be worrying about their implications for the philosophy of mind!


It seems contradictory because of our intuitions regarding what is required for phenomenal properties, i.e. access to some non-physical substance. But these intuitions don't rule out functionalism if there's a functional explanation for why we have these intuitions.

Stated more clearly: the explanandum here is the "seemings" of phenomenal properties, not the ontology of phenomenal properties. Functionalism needs to explain why the proposition "it seems to i that P" is true, where "i" indexes a conscious system and P is a phenomenal property.


I don't have any intuitions about non-physical substances. It's just not possible to even define what a phenomenal property is without making reference to non-relational or non-functional notions. Note that Searle is a physicalist, so he certainly doesn't think that non-physical properties are relevant here.

> Functionalism needs to explain why the proposition "it seems to i that P" is true, where "i" indexes a conscious system and P is a phenomenal property.

We aren't trying to explain why people think that they are conscious, we are trying to explain why they are conscious. A non-conscious system could believe that it was conscious, given a functionalist account of belief.


> It's just not possible to even define what a phenomenal property is without making reference to non-relational or non-functional notions.

We haven't really defined it at all. The only thing we can do is reference in some manner the thing we all presumably share. We're all trying to get at the nature of that thing.

>Note that Searle is a physicalist

In name only. He has strange beliefs on the subject that are hard to pin down.

>We aren't trying to explain why people think that they are conscious, we are trying to explain why they are conscious.

This isn't a meaningful distinction, due to the ambiguity in what consciousness actually is. Our usual casual talk about consciousness often assumes more than is warranted. In actuality, the only definite explanandum is that "it seems that [phenomenal property]". Cashing out exactly what this means, and how it is the case that "it seems that [phenomenal property]", is what properly done philosophy of mind is about. Chalmers' recent paper on the meta-problem of consciousness spells this out well: https://philpapers.org/archive/CHATMO-32.pdf


>We're all trying to get at the nature of that thing.

Sure, I'm not worried about defining the term 'phenomenal property'. My point is that functionalist accounts of the property of e.g. 'being in pain' are transparently not accounts of anything we can recognize as potentially being a phenomenal property. I'm therefore baffled by your suggestion that phenomenal properties might be 'purely relational or functional'. It's all very well to say that a seeming contradiction might turn out not to be a real contradiction, but in the absence of any positive argument to this effect, why assume so? You suggest that there may be a functional explanation for why we believe there's a contradiction, but that isn't any use unless you can conjoin it with an argument that there is not in fact any contradiction.

>In name only. He has strange beliefs on the subject that are hard to pin down.

None of Searle's arguments against functionalism rely on the premise that physicalism is false. It's perfectly clear that one could accept the full force of the Chinese room argument while being a physicalist in the strictest imaginable sense. The conclusion of the argument is that consciousness (or 'understanding') cannot be the result of merely executing a particular program. Nothing Searle says suggests that the extra ingredient needs to be something non-physical.

>In actuality, the only definite explanandum is that "it seems that [phenomenal property]"

There isn't any distinction between feeling pain and seeming to feel pain. Chalmers agrees on this point ("illusionism is obviously false", p. 35). In fact, section 6 of the paper is devoted to arguing against exactly what you are arguing for in your last paragraph. A solution to the metaproblem won't help very much because, as Chalmers puts it:

> On my view, consciousness is real, and explaining our judgments about consciousness does not suffice to solve or dissolve the problem of consciousness.


>My point is that functionalist accounts of the property of e.g. 'being in pain' are transparently not accounts of anything we can recognize as potentially being a phenomenal property.

To me this is assuming more than we have a right to. It's presupposing an ontological distinction between phenomenal appearances and functional processing. Sure, we have an intuition that says they are distinct as well as explanatory gaps that cause us to question such an identity. But that in itself isn't enough.

>but in the absence of any positive argument to this effect, why assume so?

Depends on what you consider a positive argument. There are certainly many reasons to prefer a functionalist approach, e.g. its the most parsimonious way to cash out the correlations between brain states and phenomenal states, it doesn't suffer from the combination problem, inverted spectrum/valence problems, etc.

>None of Searle's arguments against functionalism rely on the premise that physicalism is false.

It depends on what you mean by physicalism. Often there's an ambiguity between physicalism and materialism, so let me clarify. Physicalism is the idea that brain processes are logically identical to phenomenal properties. To put it another way, there's no possible world where you have physical properties identical to this world that doesn't have consciousness. But physics tells us that physical properties just are sets of physical interactions. Thus physicalism leads directly to substance independence, and so a perfect simulation of a brain would be conscious.

>There isn't any distinction between feeling pain and seeming to feel pain.

I agree, but the point of the "it seems that P" is to cast the problem in a theory-neutral manner. You take functionalism to be a non-starter as an explanation of consciousness, and so you're quick to erase the "it seems that..." from the equation. This wording is necessary to avoid our biases from infecting our language thus biasing the investigation. The point of referencing Chalmers paper was for his exploration of the problem space that was theory-neutral. I disagree with Chalmers' conclusions but he always offers good exposition.


> It's presupposing an ontological distinction between phenomenal appearances and functional processing. Sure, we have an intuition that says they are distinct as well as explanatory gaps that cause us to question such an identity. But that in itself isn't enough.

Right, we agree on where we disagree. Probably not much point in hashing out this long-standing philosophical debate in HN comments.

>Physicalism is the idea that brain processes are logically identical to phenomenal properties.

That's a rather boutique definition of physicalism. Standard definitions (e.g. "the thesis that everything is physical", according to the Stanford Encyclopedia) make no reference at all to the brain.

>But physics tells us that physical properties just are sets of physical interactions. Thus physicalism leads directly to substance independence

Hmm, that seems like a total non sequitur to me. I'm sure people have constructed arguments to that effect, but it's very far from obvious that physicalism entails substance independence. Searle certainly doesn't think so.

I don't really understand your last paragraph. As far as I can tell, you don't think we should start with the assumption that people feel pain, but only with the assumption that it seems to people that they feel pain. But that just dodges the main issue. If people don't really feel pain, then most of the philosophical problems we're talking about dissolve immediately. On the other hand, if they do in fact feel pain, then these problems remain, regardless of whether or not we have an account of why it seems to them that they feel pain. As Chalmers says, to really get anywhere with this line of argument, you end up having to deny that people really do feel pain -- which is absurd.


>That's a rather boutique definition of physicalism... make no reference at all to the brain.

Perhaps my definition was too on the nose. Going by the SEP: "Physicalism is the thesis that everything is physical, or as contemporary philosophers sometimes put it, that everything supervenes on the physical". Supervenes on the physical, at least in the context of consciousness, implies substance independence of consciousness. Chalmers depends on this understanding in his zombie argument.

Just to belabor the point, we can posit some property of physical matter that entails conscious experience only in certain kinds of physical processes (say in biological brains but not in microchips). This difference-making property's actions either are or are not mediated through physical interactions. If it is mediated through physical interactions, then we can include those interactions in the simulation, thus the property obtains in microchips, contradicting the premise. If it is not mediated through physical interactions, then by definition its mode of influence is non-physical. But this contradicts the premise that "everything [including the mind] supervenes on the physical" (i.e. we have a change in consciousness but no change in physical properties).

Chalmers cashes out such a difference-making property of physical matter as his microphenomenal properties in panpsychism. But importantly, this isn't Physicalism (his zombie argument is specifically against Physicalism). He argues for an expanded notion of the physical world that includes microphenomenal properties at the base. Just to tie all this back to Searle, from this discussion it seems clear that Searle cannot be a physicalist under this understanding.

>I don't really understand your last paragraph. As far as I can tell, you don't think we should start with the assumption that people feel pain, but only with the assumption that it seems to people that they feel pain.

I agree that people feel pain. The problem is that there is some ambiguity in common usage that can mislead and confuse the issue. For example, if I stub my toe and I shout ouch, you would say that I'm in pain. But in the context of philosophy of mind, we can't assume that from only outward appearances. So my characterization is an effort to shed all possibility of misunderstanding by zeroing in on the phenomenal character of the thing, as well as the only indisputable statement of fact that describes our relationship to the phenomenal character.

So in this case "I am in pain" is operationalized as "It seems that [pain quale]". The "it seems that..." is important because we cannot be mistaken about seemings, i.e. it is not possible that an evil demon or simulation can trick us into thinking something seems a certain way while not being the case that it seems that way. But once phenomenal consciousness is operationalized as such, it becomes clear that functional explanations cannot be ruled out simply by definition.


So far as I can see, your argument that physicalism entails substance independence works equally well for the substance independence of wetness:

“We can posit some property of physical matter that entails wetness only in certain kinds of physical processes (say in rain storms but not in microchips). This difference-making property's actions either are or are not mediated through physical interactions. If it is mediated through physical interactions, then we can include those interactions in the simulation, thus the property obtains in microchips, contradicting the premise. If it is not mediated through physical interactions, then by definition its mode of influence is non-physical. But this contradicts the premise that "everything [including rainstorms] supervenes on the physical" (i.e. we have a change in wetness but no change in physical properties).”

>So in this case "I am in pain" is operationalized as "It seems that [pain quale]"

That is not operationalization in the usual sense, since it isn’t possible to observe which quales are or aren’t seeming to someone.

>The "it seems that..." is important because we cannot be mistaken about seemings

That depends on how you cash out the formula “it seems that [quale]”. If it’s an attribution of a propositional attitude, then it certainly could be mistaken. If it’s not an attribution of a propositional attitude, then I don’t know what it means. Or at least, I don’t know how “It seems to John that [quale]” differs from “John is experiencing [quale]”, or how formulating things this way helps anything.

I don't think there is in fact any ambiguity in common usage. "John is in pain" unambiguously means that John is undergoing a particular sensation, not that he is exhibiting a particular kind of behavior. The latter interpretation of the statement is entertained only in the work of certain behaviorist/verificationist philosophers.


>So far as I can see, your argument that physicalism entails substance independence works equally well for the substance independence of wetness:

I have no problem with this if the terms are properly understood. Wetness as a relational property can obtain in a sufficiently precise simulation. Wetness as a metaphysical property can't obtain in a simulation because the term includes suppositions of ontological grounding that interactions of electrical signals don't satisfy (why my head can't get wet from simulated water). If we note this distinction then there is no issue or reductio.

Let me ask you this: if you take it that there is some difference-making physical property of matter for consciousness that doesn't manifest as a kind of physical interaction studied by physics, then what is the nature of that property? Physics tells us that physical properties are defined by their interactions. So if we rule out the standard kind of influence that physical properties have, then whatever is left is necessarily unobservable. This is plainly non-physical influence if you take physics seriously.

>That is not operationalization in the usual sense, since it isn’t possible to observe which quales are or aren’t seeming to someone.

It's not measurable in practice, but it might be in principle if physicalism or materialism is true (i.e. influence should be measurable).

>That depends on how you cash out the formula “it seems that [quale]”. If it’s an attribution of a propositional attitude, then it certainly could be mistaken.

Right, attributing such a statement externally could be mistaken. But we do know that such a self-report, if genuine, necessarily is correct. And so in the context of analyzing consciousness, we can assume genuine reports of phenomenal properties and so attribution isn't an issue.

> "John is in pain" unambiguously means that John is undergoing a particular sensation, not that he is exhibiting a particular kind of behavior.

The issue is less about ambiguity (I know I used that word), and more about bias. If we want to avoid bias in our specification of the problem, we need to use theory-neutral language. "John is experiencing pain" isn't theory-neutral, as it assumes John and pain are distinct things rather than pain being a state or property of John.

Some random thoughts on the subject (feel free to skip):

With my "it seems that [quale]" formulation and the idea that genuine self-reports of that nature cannot be mistaken, some interesting questions arise. In Chalmers' zombie argument, a zombie would give utterances of this sort exactly like we do. But on what basis can we say that their reports are not genuine without begging the question against physicalism? On the other hand, if we say their reports are not genuine, neither are ours! Actual phenomenal properties by assumption play no causal role in our behavioral reports of phenomenal properties, and so the utterances are not genuine. When we think "it seems that [quale]" we are correct, but every time we create a physical artifact to that effect, we are wrong. What a strange state of affairs non-physicalism is.

Under what conditions are we justified in believing phenomenal reports are genuine? If I were to evolve or train an artificial neural network that, without any explicit training, started to make such phenomenal reports, are we justified in believing it genuine?


I see it as contradictory to say that your velocity exists only relative to an observer, and also that we can't ever observe your "fundamental" velocity. I think it's okay to have a relative view of the world, which to me is the same thing as relational. Some might say that your innate properties can be summed up by graph relations to all the things you have a relationship to.


But hardware can be simulated in software.


> cannot experience qualia

says the human....

anyway, that line of work leads to dead ends; that's why it didn't gain traction or usefulness in neuroscience


Everyone in this thread keeps claiming that the Chinese Room argument is bad. Nobody says why.


Dennett does a pretty good job (to my mind) of undermining it in Consciousness Explained: you can read part of the relevant section on Google Books [0].

Or this SMBC [1] summarises the argument pretty well, if somewhat sarcastically.

[0] https://books.google.co.uk/books?redir_esc=y&id=d2P_QS6AwgoC...

[1] https://www.smbc-comics.com/comic/the-chinese-room


Searle's argument depends on our intuition of the distinction between syntax and semantics, but this distinction seems very outdated to me. It's an easy intuition to have in 1980, but it's hard to defend in 2018. It's true that syntax itself isn't enough for semantics, but syntax is a component of a system that recognizes semantics, i.e. low-level rule-based operations that in aggregate entail semantics. So to say that a computer can never understand semantics because it operates on syntactic structures is just a mistake on many levels.


Searle does not prove that the distinction between "actually understanding Chinese symbols" and "simulating the ability to understand Chinese symbols" is a real one. Physicalists, including me, believe this is just a confusion generated from our inside view of our consciousness, and that "really understanding" is just something a computer program thinks it can do once it gets complex enough to be conscious. At the bottom level it's still just pushing symbols around via syntactic rules (in the case of the mind, natural law).


> "really understanding" is just something a computer program thinks it can do once it gets complex enough to be conscious.

At the same time, consciousness might not be a requisite of higher intelligence at all; it could merely have been evolutionarily advantageous early on in the development of complex brains because of our natural environment... it's hard to imagine an intelligent animal with no "me" program doing very well.

But maybe a digital intelligence (one that did not evolve having to worry about feeding itself, acquiring rare resources, mating, communicating socially, etc.) would have no use for a central "me" program that "really experiences" things.

Such a creature is kind of eerie to think about.


Anyone that claims it's bad/outdated/unequivocally wrong probably doesn't have the requisite background to make such a claim, let alone any academic training in the field. The Chinese Room is still studied and debated vigorously. For example, Baggini thinks that Searle's Chinese Room is a knock-down argument against functionalism. Dennett, on the other hand, thinks it's bunk.


Pretty wild to me that David Chalmers' The Conscious Mind isn't included on this list.


Keith Frankish sides with Daniel Dennett on the subject of phenomenal consciousness, so it isn't that surprising.


On the contrary, readers are dodging a massive bullet there, imho.

Edit: the author has responded to this:

https://twitter.com/keithfrankish/status/973699230643707905

> I've been asked why Chalmers's The Conscious Mind wasn't one of my Five Books on Phil Mind The answer is that I limited myself to one consciousness book & for me that had to be CE. But I admire TCM & it would have been one of my Five Books on Consciousness


Similarly, I’d put Nagel’s “What It’s like to be a Bat” as required reading in that group as well.


^ Agree. This is a great paper.


I think the most interesting and relevant strand of philosophy of mind is the connectionism of Rumelhart, McClelland and the PDP Research Group (including "godfather of deep learning" Geoff Hinton), exemplified in the tome "Parallel Distributed Processing", released in 1986.

It even has an accompanying manual on how to implement artificial neural networks.

http://www.cnbc.cmu.edu/~plaut/IntroPDP/index.html

More along this line is found in David Marr's Vision: A Computational Investigation into the Human Representation and Processing of Visual Information

https://www.youtube.com/watch?v=9WcIiSCDqhE

There is a spiritual successor to the PDP books called "Rethinking Innateness: A connectionist perspective on development", led by Jeffrey Elman, who did some of the first research on Recurrent Neural Networks (RNNs). It makes heavy reference to Chomsky's work on language and innateness.

https://crl.ucsd.edu/~elman/Papers/book/index.shtml

This line of work even has heavyweight ancestors in Donald Hebb's work on the neural basis of learning in "The Organization of Behavior" (1949), and the noted Austrian economist(?)/classical liberal and anti-communist Friedrich Hayek covered similar ground in his book "The Sensory Order" (1952).

I think this tradition is carried on in the work of Yann Lecun, who recently released a talk titled "Deep Learning, Structure and Innate Priors"

http://www.abigailsee.com/2018/02/21/deep-learning-structure...
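For what it's worth, Hebb's rule is simple enough to sketch in a few lines of Python. This is an illustrative toy, not anything taken from Hebb's book; the function name and learning rate are my own:

```python
# Minimal sketch of Hebb's rule from "The Organization of Behavior":
# connections strengthen in proportion to correlated pre-/post-synaptic
# activity ("cells that fire together wire together"). The function name
# and learning rate here are illustrative, not from the book.

def hebbian_update(weights, pre, post, lr=0.1):
    """One Hebbian step: w[i][j] += lr * pre[i] * post[j]."""
    return [[w + lr * x * y for w, y in zip(row, post)]
            for row, x in zip(weights, pre)]

# Repeatedly pairing the same input/output pattern strengthens only the
# connection between units that are co-active.
w = [[0.0, 0.0], [0.0, 0.0]]
for _ in range(10):
    w = hebbian_update(w, pre=[1.0, 0.0], post=[0.0, 1.0])
# w[0][1] grows toward 1.0; weights touching inactive units stay at 0.0
```

The interesting part, and what the PDP books work out in detail, is what happens when many such local updates interact in a layered network.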


that would be "5 books on the philosophy of connectionism"


connectionism is an approach to the philosophy of mind, the most successful approach as evidenced by recent deep learning hype

the only book in the OP I thought was even mildly relevant was the one by Dennett, and it's generally acknowledged that it's pretty bad.

A more interesting but flawed approach, one that resonates with me as someone who loves Dawkins and Hitchens (atheists and advocates of evolution), is Gerald Edelman's Neural Darwinism (1978). There is even modern research (real, working code) at the University of Texas that could be considered a successor to Edelman: NeuroEvolution of Augmenting Topologies (NEAT)

https://www.youtube.com/watch?v=qv6UVOQ0F44
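To give a feel for the selectionist idea behind this line of work, here is a toy sketch of neuroevolution in Python. It is emphatically not the NEAT algorithm (which also evolves network topology); the genome encoding, fitness function, and hyperparameters are all illustrative:

```python
# Toy sketch of the neuroevolution idea: keep a genome (here just the
# weights of a single threshold neuron), mutate it randomly, and keep
# mutants that score at least as well. Real NEAT additionally evolves
# the network's topology; everything here is illustrative.
import random

random.seed(0)

CASES = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]  # logical AND

def fitness(genome):
    """Number of cases the threshold neuron (w1, w2, bias) gets right."""
    w1, w2, bias = genome
    return sum(int(w1 * a + w2 * b + bias > 0) == y for (a, b), y in CASES)

genome = [0.0, 0.0, 0.0]
for _ in range(500):
    child = [g + random.gauss(0, 0.5) for g in genome]
    if fitness(child) >= fitness(genome):  # accept non-worsening mutations
        genome = child
# fitness never decreases from its starting value of 3, and the search
# typically finds a genome that classifies all four cases correctly
```

Selection over variation, with no gradient in sight, is the common thread between Edelman's neuronal group selection and modern neuroevolution.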


The problem with a list like this is that each of the books - and more importantly here, the selection of the books - has a very distinct view of the problems and arguments. It's not at all objective. So in that way it may be a little misleading for neophytes to the field.


Is there ever consensus in Philosophy? The field pretty much retreated to questions where we don't have objective answers.


Whether or not there is a consensus has very little to do with the existence of an objective answer. For example, there is no consensus on God's existence, but there is presumably an objective fact of the matter as to whether God exists.


Philosophy teaches us that there's no such thing as objective answers. All of us are beholden to subjectivity and perspective, which means we can't reliably judge whether something is objective anyway. Consensus is moot, and I'd go so far as to say that consensus is how you get religions.


Philosophy teaches you that? How?

Most working philosophers do not hold this view. (https://philpapers.org/surveys/results.pl)


First, I'm speaking for myself, not the orthodoxy. Martin Luther was 500 years ago, after all.

Second, what am I supposed to realize via that link?


You can be objective in how you present the various ideas and viewpoints that people have on the subject. Rather than saying a viewpoint is right, you explain the reasons people give for it, and then let the reader decide which arguments they buy and which they reject.


This might be a side-effect of the fact that there is no broad consensus about how the brain works.


I think the point being made is: because there's no consensus, any list that attempts to be complete must include opposing viewpoints. And this list doesn't do that.


Perhaps, but also a list of "five best books" is not aiming to be complete - the restriction to 5 guarantees that it will be wildly incomplete!


Maybe the problem is just with the title: "The best books on Philosophy of Mind". The words "the best" suggests that these are objectively the best 5 books overall.

But in the article, it says "he selects five of [ emphasis mine ] the best books in the field". This basically means that, of the best (which could be hundreds), he has selected 5 that he likes for more particular reasons other than just quality.

Perhaps a better title would be something like "Selected books on Philosophy of Mind". Hopefully we can assume they are high-quality books or they wouldn't bother to recommend them, so it isn't necessary to emphasize that with "best".


For that you'd want an undergraduate "Introduction to..." type book. I think Cambridge does some good ones but I don't know about this specific topic.

Once you have an overview then you can read and contextualise arguments for particular views.


Agreed. That the list is restricted to five titles does not mean that we can't manage a better selection. Indeed, there are individual introductory books available (such as Feser's) that offer broader coverage.


I never read a book about philosophy but found an answer to most philosophical questions just by thinking, discussing, and occasionally Wikipedia. I took a philosophy course, but it was mostly a lot of vague words with questionable substance. The core ideas in philosophy could be summarized on one page with little information loss. This probably seems horribly arrogant, but that's just how I honestly see things.


This line of thinking is similar to: "I never took an academic/theoretical computer science course, but I'm able to write code that solves most problems just by thinking, discussing, and occasionally StackOverflow."

That might very well be true -- and of course you can (and plenty do) argue that academic CS isn't worthwhile either -- but in both cases it isn't obvious the nuance is useful/justified until you dive into the weeds yourself.

I'm in no place to defend the whole field, but I do think it takes a special kind of cynicism to claim that one of the (if not the) longest-running academic projects on Earth "could be summarized on one page with little information loss."


I think that in philosophy, unlike other fields, one can get very far with little studying, possibly closer to truth than someone who did a lot of it.

(Posting under a different username because I got "you're posting too fast" error.)


Comparing fields this way is a bit apples to oranges (getting "very far" in CS is quite different than in English Lit, which is quite different than Philosophy, etc.) -- these fields all have very different goals.

That said, in philosophy I often have the feeling that some "common sense" view I hold is more accurate than the rigorous, less clear views that I'm studying. Almost always, there turn out to be problems with my common sense view that weren't obvious to me at the outset, and necessitate the extra rigor and abstraction.

The unfortunate bit is that if you don't "study" (as you put it), you'll never notice these little flaws, and go on thinking that people are intentionally overcomplicating things. (Made worse by the fact that certain people actually are overcomplicating things -- but that's not evidence against the fact that many things are, in fact, complicated)


Honestly, I doubt that; the philosophy course I took convinced me that the signal-to-noise ratio is pretty bad. I've developed my philosophical view mostly independently and have always been able to defend it. I hoped I would get some hard philosophy questions here but nevermind :)


But you took one course. A phil phd is going to read hundreds of books, likely just for quals. How can you possibly expect to have a grasp of the field after a few hours of work?


Finding such an ignorant comment at the top of HN is quite disappointing.

Obviously, you are still in the first stage of competence https://www.wikiwand.com/en/Four_stages_of_competence and you don't know what you don't know.


When someone tells me they know how to fly, I first ask them to show me before claiming they are full of it.

Of course philosophy has value but I too am quite disappointed how easily it enables very basic flawed assumptions to exist over eons even in what would be considered high academic levels. To me it feels like the equivalent of physicists today being divided on whether the earth revolves around the sun or vice versa.

Personally I think both Searle and Dennett are essentially dualists (though they'd both vehemently deny it), each practicing a different form of dualism: Searle promoting some mysterious consciousness as a ghost in the machine, and Dennett protecting physicalism, which in my book negates any subject from existing, in complete contradiction to the reality that any object (physical or otherwise) will always be just a model in someone's experience. I see Dennett as trying to squash dualism by giving the model more validity than the very real experience from which it arises, where the better path to eliminating dualism would be to try and get to the source of this division.


Possibly. But when defending my philosophical positions, I never had issues or felt that I'm missing out. Same with the philosophy course I took. Bad signal-to-noise ratio, too much nonsense.


It seemed to me that most writings in philosophy are so verbose that it is hard to see what core patterns there are. Philosophers can make whatever excuses they’d like, but the fact is that if your elegant theory needs a hundred pages of prose (without section titles!) to be expressed, it is not an elegant theory.

And inevitably the theory will be reduced to a paragraph or less anyhow when it is cited as existing work by future scholars.

If I were being cynical I would say that this verbosity creates a maze in which one can hide from criticism.

I definitely saw some very interesting ideas while learning about philosophy, but I agree with you 100%. I think they’re holding themselves back through constant re-elaboration; you can’t build a tower out of dry sand.


The way I see it, the core of philosophy is the hard problem of consciousness. Once you understand that, there's not much left. Some people include aesthetics or ethics in philosophy, but that's really just psychology, biology, maybe sociology.


I can see how aesthetics can be reduced to biology, but I don't think ethics reduces so cleanly. The fields you mentioned are all about what "is" while ethics is about what "ought" to be, and there isn't a clear path from one to the other.

You might not be inclined to explore philosophy, but David Hume pretty much took the field to its logical conclusion, so it's worth studying him. His philosophy was pure and rooted in first principles. Most of philosophy after Hume is tainted by the assumption of premises not derived from first principles.


It's not just arrogant, it's disappointing.

I could ask you to answer 'the big questions': to tell us what life means, or the teleological truth about our creators, or why we seem to all broadly agree on the quality of artwork but can't seem to write down the rules of agreement. But you only have those answers for yourself. Further, if the answers which make sense to you can fit on a single page, they likely don't make sense to anybody else.

While you're not a solipsist, you haven't really advanced the state of the discussion.


> or why we seem to all broadly agree on quality of artwork

That's really just a biology / neuroscience / psychology question.


You mentioned elsewhere that you haven't been asked any hard questions. So, here's the question: Does this 'quality' of art exist in the things which we observe? Or is it subjective, existing only in the observer?

Please think hard on this. Take as much time as you need. When you're ready, first write down your answer in as much detail as you like, to your own confidence. Then, pop open Pirsig's Zen and the Art, starting on p228, and read his thoughts and see what you think.


What exactly do you mean by quality of art? Beauty is just an emotion / feeling triggered when exposed to certain stimuli. It's to some extent encoded in our DNA. Anthropology / human biology / sociology problem.


> The core ideas in philosophy could be summarized on one page with little information loss. This probably seems horribly arrogant, but that's just how I honestly see things.

I think academic philosophy aims to be more rigorous, so while such a summary might capture the broad points, it will miss the nuance.

A comparison could be made with a game of Chess. On the surface, you could describe a game in terms of the moves played, but to understand why those moves were played, you need to know about the potential moves that weren't played.

I can understand why some people criticise academic philosophy for exploring seemingly pointless details extensively, but such exploration always gives someone a richer understanding, and sometimes uncovers flaws in things that previously seemed unquestionable. I think this is what people don't appreciate: understanding the moves played doesn't mean you understand why those moves were played, the ocean of possibilities underneath. No one-page summary will do that, for Chess or philosophy.


It's how you see things on the basis of never having read any philosophy. It's difficult to see the value in a subject that you know nothing about.


Whenever I looked into what most philosophers publish, I just saw vagueness, poor thinking and little value.

What's a philosophical problem or question (understandable by someone who hasn't studied philosophy) I would need a philosophy book to answer?


> Whenever I looked into what most philosophers publish

Oh, most philosophers even? So you have taken a good look at the field and have a grasp of the philosophical community and their methods of inquiry and communication?

> vagueness

If there is one thing I have learned because I switched from studying physics to studying contemporary art, is that when I consider something "vague", it often boils down to my own assumptions and premises. By the standard of what is considered the right way to acquire knowledge in physics, most art is "vague". But the problem is me insisting on judging it by those terms.

> poor thinking

To elaborate on my previous point: you cannot apply the lines of thinking of one field to another and judge it by that without also putting in the effort of understanding the lines of thinking of that other field. I have my doubts as to whether you are not mistaking an unfamiliar way of thinking for poor thinking.

> little value

Well given the two before it, there is indeed little value for you. Then again, I don't read or speak Chinese, so even the most beautiful poetry bundle written in simplified Chinese would hold little value for me too.


Would it be wrong for someone to catch stray puppies and slowly torture them to death in his basement? Why or why not? Can you not only convince yourself with your answer, but others? And yes, ethics is a part of philosophy.


Depends on how you define "wrong" exactly. It feels wrong to most people.


Exactly. Good philosophers are careful with definitions. Do you think there's enough space on your one page to define your terms and reach a conclusion that would satisfy most reasonable people?


Well, I wouldn't include ethics at all, I think it falls under biology, psychology, sociology.

The core of philosophy is basically understanding consciousness, not too much else to it. But I guess one page was an exaggeration, let's say five.


Those sciences don't touch upon what ethics is about, which is what is good and bad, right and wrong.

There is no "core of philosophy", and it's certainly not confined to "understanding consciousness". I'd have to agree with others here who have tried to admonish you that you're arguing from ignorance and being unserious, even if a great deal of what passes for philosophy is a waste of time. What about logic, philosophy of mathematics, philosophy of language, ...?


True, but this is where ethics logically belongs. It's just an aspect of human behavior and our inner world.

Asking what is good or bad is like asking for the definition of these words. Well, the definition is whatever we agree on. Most people call "bad" the behavior that triggers a specific emotion of "wrongness".

I think that consciousness and closely related topics are the hard part; other things such as ethics or aesthetics have quite mundane explanations. Philosophy of mathematics and language is somewhere in between.


Did "looking into" it involve reading it or not?


It involved some reading.


What philosophers? What books?


If it weren't for Western philosophy, the modern world would not exist. Most of the present population would not be alive, and those who were would almost all be peasant farmers who were illiterate, got married as soon as they were biologically able to produce children, half their children would die in infancy or childhood, and they themselves would likely die around the age of 45, if not earlier from the regular epidemics and famines.

And why are you not aware of all this?


The funny thing about philosophy is that it concerns itself with concepts that anybody can make for themselves. However, it also allows for unlimited nuance and detail, so the difference can be compared to a freeway vs. backroads through the same area. There's no requirement to adopt any of it as your own, but the same way programmers often learn new techniques and language features, there is a continually growing world of concepts and ways of thinking.


Fine as long as you realise that this is taking a philosophical position. The grain of truth is that most published philosophy is bunk.


The following is all in my philistine opinion.

The problem with philosophy is that science has crowded out most of the good stuff. Who cares about a philosophy of chemical reactions when we have chemistry? Philosophy is useful around issues which don’t admit to empirical study, or for which study yields inconclusive results. In the words of Feynman, “Philosophy of science is about as useful to scientists as ornithology is to birds.”

Unfortunately instead of sticking to ethics, morality, and other hazy constructs, too often philosophers think they have something useful to say about everything. The result is a lot of bloviation and self-satisfaction with very little substance.

Hence the “boil it down to a single page” issue.

Edit: My biggest problem with philosophy is that I can imagine a sufficiently advanced civilization which has testable answers to the questions posed by philosophy. I distrust a field of study which proposes to produce something which amounts to a placeholder until scientists do the real, hard work. To go back to my initial example, the philosophy of chemistry became a pointless endeavor with the death of alchemy, and the birth of chemistry.


I've always been curious how philosophy ended up with such a bad rap here. My pet theory is that a lot of the "philosophy" that comes onto the popular radar is terrible (and imo barely philosophy at all -- but that may be a No True Scotsman).

> I distrust a field of study which proposes to produce something which amounts to a placeholder until scientists do the real, hard work.

This is one of the more common criticisms, motivated (as far as I can tell) by some idea that philosophers are stubborn in the face of empirical evidence. This has not been my experience -- almost all working philosophers understand well the scope of their arguments, and most limit themselves along the lines of: "if the evidence bears out X, then Y...". They don't attempt to claim what is true while we wait for the evidence to prove it, but rather to point out what things cannot be true (due to logical inconsistency), and to clarify what concepts are the most useful to talk about in the meantime (in terms of expressive power).

To me, it's a bit like pure math: you can build some interesting structure and prove some facts about it, and while that structure may not be useful in explaining anything about the natural world, the facts you proved about the structure itself are still true regardless. Likewise, the evidence may not bear out, say, Libertarian conceptions of free will -- but if they should, Robert Kane has done some good work determining what else is necessarily true, and what implications this might have for some of the "other hazy constructs". (He's proven some facts about that structure)


> Unfortunately instead of sticking to ethics, morality, and other hazy constructs, too often philosophers think they have something useful to say about everything.

They do have something useful to say about people who, due to not having studied philosophy of science, think science has something useful to say about everything. Sam Harris's The Moral Landscape is one notable recent example.


Are you saying that you believe Richard Feynman didn’t have a working grasp of the issues at hand? Or are you saying that “philosophers of science” don’t know anything about the science they philosophize about? I’d guess that Feynman was more familiar with the philosophy of science, than the philosophers were with path integral formulations.

As for science, it’s a method, and it can be applied to anything. What it says may often be, “don’t know” which imo is better than 100 pages saying “don’t know, but I’ll give it a go anyway, because no one can prove me wrong yet.”


> Are you saying that you believe Richard Feynman didn’t have a working grasp of the issues at hand? Or are you saying that “philosophers of science” don’t know anything about the science they philosophize about?

I'm saying that understanding things like what the scientific method can and can't do is useful to a scientist because, if they don't understand the distinction, they might try to use the scientific method to prove something it cannot.

> As for science, it’s a method, and it can be applied to anything. What it says may often be, “don’t know” which imo is better than 100 pages saying “don’t know, but I’ll give it a go anyway, because no one can prove me wrong yet.”

While you might try to apply the scientific method to anything, that doesn't mean it is useful for everything. For example, if something isn't replicable, then the scientific method cannot help you. You cannot scientifically prove what effect the Battle of Hastings had on Britain.


I can’t prove that, but could a sufficiently powerful AI prove it? Probably. The open questions for philosophy are a function of our present limitations, and as those limitations are overcome, the space for philosophy shrinks. I don’t think it says much that’s good about a field of study for which the major criteria is untestability and immunity from definitive critique.


I don't think that's a very accurate view of the field. If modern analytical philosophy values anything, it's logic (especially the formal variety), and I can think of at least a few dominant views in the last few decades that were felled by someone pointing out a bug in the underlying logic.

I don't think the philosophical questions are functions of our present limitations either. Imagine knowing everything about all people, and complete God-like power. Do you maximize utility? Do you equalize utility? Do you maximin? Do you ignore utility entirely, and move on some other criteria? You might have all the "is", but the "ought" is still an important question (and, importantly, not a relative one! Despite not being empirical) [1]

[1]: https://en.wikipedia.org/wiki/Is%E2%80%93ought_problem


> The open questions for philosophy are a function of our present limitations, and as those limitations are overcome, the space for philosophy shrinks. I don’t think it says much that’s good about a field of study for which the major criteria is untestability and immunity from definitive critique.

I'd agree that there are sections of the philosophic community that rely on that immunity. But other parts of the philosophy community perform the important role of teaching the philosophic foundations on which things like the scientific method are built. This is an important fight on which the effective practice of science itself depends.

Corporations and governments increasingly try to manipulate the scientific community for their own ends, giving rise to scientism. Consider stories like this: http://www.bmj.com/content/345/bmj.e4737

Furthermore, philosophy teaches people how to think critically and how to express themselves accurately and unambiguously, to challenge the prevailing beliefs around them. I think philosophy is far from a pointless old field that needs to shut up shop. In fact, I think societies would benefit if more people studied philosophy and at a younger age.


Perhaps add 'The Mind Matters' by David Hodgson to this list. Especially interesting as the author has a pressing need for practical pragmatic guidelines to his behaviour, rather than engaging in a purely academic or abstract exercise (Australian High Court judge in New South Wales).


I use this site pretty regularly to bootstrap reading about topics that I haven't much familiarity with. Even if the 5 books aren't the best, they're almost always enough so that I can direct my own continued reading. A great site for going from 0 to 1 in some topic, so to speak.


"The mind is material" seems like a funny hypothesis to me, given the massive unknowns in quantum theory --- we don't even know what "material" is fundamentally.


By that logic, I can't say "this brick is material" either, because we don't know what "material" is.

But really that's just arguing over words and definitions. We know what we mean when we say material (certain properties and behaviors), and as far as I know, we have just as much reason to think that a brick is material as that a mind is material. We don't need to understand literally everything to be able to make predictions about how things behave.


This is where relativity comes in.

What's good or bad? You can't compare anything too distant or you lose sight of the differentiation.

That stone is matter just like my brain, but one was assembled molecularly a long time ago and the other evolved, its molecules likely scattered around the planet at the time the stone formed.

I have info in my brain that's really only useful to me, truly only useful to my cortex, but my boss likes what I do with it for him. He says I do a good job, but his competitors think it's all bad, and some of my employees also see my success as a bad thing. But is that thing they consider bad me, or something relative to them, and not apples to apples with what my boss considers good?


I can show you a brick. Can you show me a mind?


Well that depends. What do you mean when you say a mind?

Explain to me what exactly it means to show you a mind, and I'll tell you whether I can show you one. If you can't explain what you mean by "show you a mind", then I'm not sure how this question is different than "can you show me a gefadfij" - questions are only meaningful if we agree on what we're talking about.

Note: by default, in answer to your question of "can you show me a mind", I plan to show you a brain, plus lots of reasons to think this constitutes a mind (e.g., changing something in the brain changes how someone behaves, touching parts of the brain with electricity can reliably cause sensations in people, etc).


Excellent question.

>If you can't explain what you mean by "show you a mind", then I'm not sure how this question is different than "can you show me a gefadfij"

This is basically my point, the mind is an undefinable concept.

Many people arbitrarily assume the mind is constrained to the brain, which is provably false.

For example, gut bacteria have a powerful influence on the mind.

Do you think it would be possible to completely separate the brain from the rest of the body and its environment and still keep the person's conscious mind "alive" and in the same state?


Will you be showing a compressed piece of clay and straw, or some kind of rectangular piece of concrete?


Could you point to the difference?


That's like asking, can you show me an immaterial soul? Yes, there are arguments to be made in favour of the existence of such things, but given all we've learnt about things like biology, computation and the brain, we can hardly just assume that such things literally exist, and there's many reasons to think they don't.


>That's like asking, can you show me an immaterial soul?

Precisely my point. Saying "the mind is material" is like saying "the soul is ether"


Your mind wouldn't ask a brick to show it a mind, would it? It identifies something similar, and depending on how well it serves its purpose, it'll keep evaluating how much can be learned from it.

Keep in mind we're only letting "intelligent" people contribute to this conversation, and that in itself is biased. Why aren't we asking schizophrenic people, or people with split personalities, to chime in on why they perceive things differently and manage to survive?

We say they're unsuccessful in life but by our standards and that's not much different than a servant of a faith, no?

That's all your mind is looking for. Things that change with a pattern it can differentiate. Bricks don't change unless something you understand changes them. Like the definition of inertia or Murphy's law: an observation of change from a perspective.


You show me a collection of matter and energy, it is no more separate from the world around it than the salt is separate from the ocean. It takes a mind to make it a brick.


Can you show me a computer program?



This is irrelevant. You don't have to show a brick every time. We say sugar is sweet. Nobody experiences how sweetness is defined by others. Still, we all believe it is sweet.


That's a bit different. The definition of sweet is basically defined as "the experience you get when eating sugar". It doesn't matter for this purpose if that experience is completely different for you - all that matters is that we both know that "sweet" is basically a synonym for "your experience of sugar".

Note that this definition allows you to predict a lot of things:

1. You can predict that you'll have the same experience when eating other things which are not sugar, but which you know cause the same experience in you.

2. You can predict, at least in broad outlines, whether people will find this experience pleasurable or not (again, on average, broad outline, but still better than chance).

3. You can predict that someone without a sense of taste won't experience anything when eating sugar.

That's why this definition is meaningful. Of course I can play word games all day about "can you show me 'sweet'???". But those are word games that are trying to hide the idea of what a definition is and what we use it for.


Can you show me the inside of a brick?


Can you show me a Higgs Boson?


Can you show me a cat gif?


This is a pretty good question, but there's a standard trick to answer it. You don't define "material" directly, you just say either:

1) Whatever physics studies is material.

2) Atoms, quarks, and the other subatomic particles are material; by "material", we just mean whatever the ultimate theory of those things says they have in common.

Neither of these responses guarantees that "material" is a meaningful concept. If panpsychism is true[0], then everything is somehow conscious, and you may no longer have a reasonable criterion for materiality.

For more context, you might try the Stanford encyclopedia article on physicalism: https://plato.stanford.edu/entries/physicalism/.

[0] Sadly, there are very talented contemporary philosophers who believe in panpsychism.


> Sadly, there are very talented contemporary philosophers who believe in panpsychism.

What saddens you about this?


I think that panpsychism is one of a handful of views in contemporary philosophy that ought to be treated as a reductio, but have truly brilliant proponents who have convinced the profession to give those positions more time than they are due.

To clarify, panpsychism is still a very small minority position.


Why do you think panpsychism "ought to be treated as a reductio"?

Why do you believe panpsychism deserves less time than it receives?


I'm not the OP, but I think the problems with panpsychism are that 1. it creates more problems than it solves (the combination problem and its sub-problems), 2. it's anti-parsimonious, 3. it follows a poor track record of philosophers assuming the mysterious thing in question is fundamental (i.e. ontologically basic) and being proven wrong. Don't know why people insist on making that same mistake.


Why sadly?


It’s always sad when people choose to believe in what they want, rather than what they have evidence for. I want to be loved by a god and live forever, but that doesn’t make me believe in any of that. I find it hard to ignore the pattern of people injecting the same old beliefs with new language into whatever narrowing gaps exist in current theories.

No one ever goes and actually finds the teapot, they just try to come up with more esoteric ways of defining it. I think that’s sad too.


Why is that sad though? Are you saddened by it? Are they unhappy via these beliefs? Or are you saddened by your inability to believe things you cannot directly evidence? This last point does seem like a poor way to live - for example, I have never really known how to evidence that the world is round, but I'm happy to take other people's word for it, apply it to what is useful to me, and to not worry about it.


I’m definitely saddened by my inability to believe in what no evidence exists to suggest is real, and I’m not being sarcastic. I’m sure that life would be less stressful if I could contextualize it in terms of spiritual meaning. What I was referring to in my comment is sadness at what I see a waste of talent going down well worn roads of thought, when there are new and exciting horizons in front of us. QM opens the possibility of a universe unlike anything our human expectations and intuition suggest, and produces testable theories. Science has been the most successful method of inquiry and testing in human history, and yet so many want to take only sips from its cup because they don’t like what it has to say about our lives, our significance, and place in the universe.

By the way, if you ever want to prove to yourself that the world is round, it’s quite easy. On a clear day, look to the horizon. Now climb the tallest tree or structure near you and look again. You can see further than before! Ergo the earth is spheroid and not flat.


>Science has been the most successful method of inquiry and testing in human history, and yet so many want to take only sips from its cup because they don’t like what it has to say about our lives, our significance, and place in the universe

What does "science" say about our lives and our place in the universe?


It says we’re a speck among a potentially infinite array of specks.


What does all of this have to do with panpsychism?


What does this have to do with panpsychism? The contents of your comment lead me to suspect you didn't even know what panpsychism was when you wrote it.


Why do you think that? Is what I said inconsistent with a rejection of the “conscious rock” concept? Even if you did believe that, you’ve phrased your post in such a way that meaningful conversation is impossible, and any refutation would ring hollow. Very, “Did you beat your wife today?”


I agree with all of your posts above, and personally relate to both your sadness of being unable to believe and your sadness of seeing time wasted on fairy-tale nonsense.

That being said, the concept of panpsychism has merit insofar as it generalizes the idea that "consciousness may simply be what computation feels like from the inside". This, in my opinion, is an idea worth exploring – you don't need to take "conscious rock" woo-woo literally to wonder about substrate independence, different types of physicalism and emergence, etc. If the apparent absurdity of panpsychism can shock people into questioning our human gatekeeping tendencies around what consciousness is, then perhaps panpsychism is doing its job fine.


> Why do you think that?

Your comment implies philosophers think panpsychism is true because it brings them "comfort" (i.e. your example "I want to be loved by a god and live forever"), which is a ridiculous claim you haven't provided any evidence for. If you are going to accuse someone, especially a professional philosopher, of basing their views on pure wishful thinking, you better be damn sure you can back that up.

Your comment also claims panpsychism is "injecting the same old beliefs (?) with new language (?) into whatever narrowing gaps (?) exist in current theories". What is this even supposed to mean?


I think it's a misnomer to say the mind is material.

From my limited understanding of the brain, my concept of the mind is this:

  - The brain is a computer and the computer is material
  - The brain executes software, and the software is grown/encoded as neural networks
  - The state of the running instance(s) of the software is the current action potential of every neuron
  - Both the software and the state encode information 
  - The information can represent things that are real, unreal, consistent, inconsistent, etc.
  - The information is abstract and immaterial, but stored in a physical and dynamic system
  - The flow of information is tolerant of faults and noise in the underlying physical neural network. A failure of a small percentage of connections does not prevent the information from flowing through the system
  - The information is not dependent on the precise physical network, but rather on the virtual network composed of many redundant connections
Therefore, I think the mind is:

  - information
  - encoded in a virtual network
  - built on many redundant physical neural connections
  - produced and supported by the brain
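The fault-tolerance claim in this list can be sketched with a toy simulation. The following is a minimal illustration, not a model of real neurons: every name and size here is invented. It builds a fully connected layered network (many redundant paths per node), randomly severs 10% of the connections, and checks whether a signal can still propagate from input to output. With this much redundancy, cutting 10% of edges can never silence an entire layer, so the information still flows.

```python
import random

def build_network(layers=6, width=8):
    # Fully connect adjacent layers: each node has `width` incoming edges,
    # so there are many redundant paths from input to output.
    return {(l, i, l + 1, j)
            for l in range(layers - 1)
            for i in range(width) for j in range(width)}

def signal_reaches_output(edges, layers=6, width=8):
    # Propagate layer by layer: a node is active if any surviving edge
    # connects it to an active node in the previous layer.
    active = set(range(width))  # all input nodes fire
    for l in range(layers - 1):
        active = {j for j in range(width)
                  if any((l, i, l + 1, j) in edges for i in active)}
        if not active:
            return False  # an entire layer went dark; signal lost
    return True

random.seed(0)
edges = build_network()                    # 5 layers of links, 320 edges
damaged = set(random.sample(sorted(edges), k=len(edges) // 10))  # cut 10%
surviving = edges - damaged
print(signal_reaches_output(surviving))    # True: 32 cuts can't sever all 64 edges into any layer
```

The point of the sketch: the information flow depends on the virtual network of redundant connections, not on any one physical edge, which is the property the list above attributes to the brain.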


It means it's material in the same way a computer chip is material, or your spleen is material, and there's nothing "special" happening that can't, in principle, be explained by the same physical laws that govern everything else.


>there's nothing "special" happening that can't, in principle, be explained by the same physical laws that govern everything else.

How about dark matter and social dynamics?


Dark matter: Just because we don't know the laws yet, that doesn't mean that it's magic.

Social dynamics: Give me a sufficiently powerful computer and complete information about the starting state and I can probably give you a pretty good approximation what happens.


>Just because we don't know the laws yet, that doesn't mean that it's magic.

This is a strawman often thrown up when the blind spots of science are pointed out.

Magic is a fancy word for something that we have no provable explanation of. Asserting "mind is material" seems like a hubristic explanation of an imaginary thing.


It's also somewhat of a tautology because physicists don't really have a 'solid' understanding of what constitutes material. Pun intended.


Even if our brains run on dark matter, or spooky quantum shenanigans, such explanations are still fundamentally materialistic, because they're still physical interactions that can be interrogated approximately the same way we interrogate how muscle fiber works or whatever.

As to social dynamics, well, that's like trying to explain the behavior of bulk matter or ensembles of things generally. The tools we use are quite different from explaining the behavior of atoms, but we're pretty sure that one way or another, the behavior of a rod of steel is determined more or less entirely by the fundamental particles/fields/stuff that make it up. There's nothing spooky about a bar of steel even though it behaves completely differently than the quarks and electrons it's made of.

The Stanford Encyclopedia of Philosophy explains it better: https://plato.stanford.edu/entries/physicalism/


>they're still physical interactions that can be interrogated approximately the same way we interrogate how muscle fiber works or whatever

Newtonian physics fails to explain quantum phenomena.


Well, sure. But we know an awful lot about quantum mechanics, and we know quantum mechanics is spooky but appears to be contained entirely in the realm of the physical. Particles behave by laws that we can sit down and calculate. That's all materialism is- it doesn't insist that everything is determined by any particular set of physical laws, much less Newton's, which we know is incomplete. Maybe everything's a wave, maybe everything is a tiny vibrating string, maybe we're all holographic projections on a membrane. All of this is compatible with materialism.

Materialism is essentially a philosophical stance, that everything in the world is physical. It's not something that is likely to be solved empirically and it's not clear how you would even prove it one way or another. Anything as-yet unexplained might yet yield to a physical explanation, and insofar as physics is well-understood, skeptics can retreat into talking about mental states and so on.


> a philosophical stance, that everything in the world is physical. It's not something that is likely to be solved empirically and it's not clear how you would even prove it one way or another. Anything as-yet unexplained might yet yield to a physical explanation, and insofar as physics is well-understood, skeptics can retreat into talking about mental states and so on.

Interesting. From a philosophical standpoint I see no quarrel. You could replace "physical" with "god" in your definition and it would read true to many religions. It's when dogmatism overrides doubt, questioning, and humility that people are misguided, in scientific theory and religion.


> You could replace "physical" with "god" in your definition and it would read true to many religions.

The difference is that the concept of physical minimally satisfies the required features to fill the conceptual role implied by science. If you replace physical with god, you are either eliminating the necessary properties of god or you're assuming more than is warranted.


What are the "necessary properties of god"?


Necessary properties of our concept of god, e.g. intelligent creator. If you deflate the concept of god such that any sort of eternal grounding substance of the universe is "god" then you've abandoned what is important in the concept of god (or attempting to engage in sleight of hand).


What do you mean "our" god? You do know there are many other understandings of god that are different from the mainstream stereotypical Abrahamic definition of god, right?

"God, therefore, is the one most simple essence of the entire universe"

-Nicholas de Cusa

"We shall find God in everything alike, and find God always alike in everything."

-Meister Eckhart

"The superior devotee sees that God alone has become everything...He finds that everything, above and below, is filled with God."

-Ramakrishna

"In order to attain perfect union, we must divest ourselves of God...The common belief about God, that He is a great Taskmaster, whose function is to reward or punish, is cast out by perfect love; and in this sense the spiritual man does divest himself of God as conceived of by most people."

-Henry Suso


Do you have a substantive point, or are you just trying to be pedantic? Clearly, if your conception of god is contentless such that it's isomorphic to a basic physical substance then there is no problem. If your point was simply to point out such conceptions, well, OK, but I'm not sure why you felt the need to point that out.


Which is probably not relevant for the problem at hand: I would be surprised if brains were able to maintain quantum coherence, which means quantum effects will effectively reduce to classical probabilities...


>I would be surprised if brains were able to maintain quantum coherence

Are you hypothesizing the mind is entirely self-contained within the brain and independent from the rest of the body and its environment?


No, I'm hypothesizing that brains can be understood within the realm of classical physics.


Then how is that relevant? We're talking about the philosophy of mind, not just brains.



This is dumb. It completely ignores things like information and computation.


Fair point, but historically, there have been metaphysical accounts of matter that continue to exercise their influence on the sciences and scientists (sometimes to their detriment). For example, Descartes defined material objects as extended things that are devoid of properties like color (which, for example, in physics are redefined in terms of things that are considered properties of matter, with what can't be accounted for in those terms relegated to the mind). Few physicists or biologists can explain to you what Cartesian metaphysics proposes, and few if any appeal to it in any explicit sense, but the legacy is insinuated in the modern scientific tradition, though it is not essential to science per se. This is the reason why scientists, perhaps biologists in particular, are uncomfortable or plainly hostile to the idea of teleology (though many of them misunderstand what it is by conflating it with intent). Teleology was in no way disproved historically by philosophers like Descartes, it was merely dismissed or relegated to immaterial minds.

Materialism, as I have encountered it as a philosophical position, has been a position that takes Cartesian dualism and drops the immaterial mind from it, leaving us with only the Cartesian account of matter. This position has been shown to be unworkable because now it is impossible to account for all of those things that have been swept under the rug of the immaterial mind. That's why Dennett takes an eliminativist position that simply denies that those things we've been sweeping under the rug exist at all. It's a cop out move because it denies the existence of those things it cannot explain. The sounder move is to revise one's presuppositions.

For this reason, I don't think these are really the best books on the subject, esp. for someone interested in getting a lay of the land and a rudimentary understanding of the subject matter. The best introductory book I've come across in this regard is E. Feser's "The Philosophy of Mind"[0].

[0] https://books.google.com/books/about/Philosophy_of_Mind.html...


Would you care to elaborate?

The mind consists of atoms and molecules. All of the physics relevant to chemistry is known [0].

[0] 10.1021/cr200042e


> The mind consists of atoms and molecules.

This is a type error. It's like saying the number 3 is made of atoms and molecules, or Wednesday is made of atoms and molecules.


The concept of a mind is not the same as an instance of a particular mind, any more than an algorithm is the same as a running process, or the idea of 3 is the same as a set of logic gates designed to represent a 3 in binary.

Mistaking our own internal representation of our own mind (our consciousness) for our actual mind is what Dennett is arguing against.


You're gonna have to say a lot more to not beg the question here.


What question? It seems pretty clear what they are saying: the concept of mind is a social construct, metaphorical, a figure of speech rather than a figure of physical reality.


material is a word

Quantum theory has little to do with how neurons behave - they work at different levels. Current evidence would suggest it doesn't play any role other than tiny random fluctuations in computations, just like in any other micro-system.


That's an interesting site.



