The author seems to want to say that consciousness is primary and that the physical universe, matter, is consciousness somehow, so poof no more mystery.
But this doesn't touch the "hard problem" so who cares?
There is what I like to call the flux: form and movement. I lump together "external" and "internal" experience, including all proprioceptive experience, etc.
Then there is the subjective awareness. It is primary. As the author points out, every other fact we know is contingent upon the fact of subjective awareness. Everything is content to this whatever-it-is observer. This awareness itself seems not to have any qualities or properties whatsoever, making it extremely difficult to talk about (and rendering it forever beyond any scientific treatment!)
Somehow, this awareness is our "self". (It may or may not also be tied into the quantum "Measurement Problem" but that is a whole 'nother story.)
You have a body but you're not your body; you have emotions but you aren't your emotions; you have thoughts but you aren't your thoughts; you are that awareness.
Now, if that awareness created or creates the physical world (as the Bhagavad Gita seems to state) that would be pretty amazing and I'd love to read about it. This article doesn't really expand on that.
Not only that, but you're you because you're not someone else's body. That sounds tautological but I think it's important. I think a big part of what gives someone consciousness is identity, and what gives us identity is the idea of us being an island isolated within our body and therefore distinct from other consciousnesses.
Which sounds pointless, but I think it's actually significant if you start to consider the eventual effect of identity on "shared consciousness" via technologically-enabled telepathy, which we may actually see one day in the far future.
Perhaps consciousness is shared, only memory isn't, and the distinction is merely an illusion due to distinct memory. Perhaps that's why aiming to be "self-less" is a thing in Buddhism, Sufism, Zen, etc.
Makes sense if you think about memory in the same way you think about time, as creations of the mind.
I read somewhere once about a story written in a book. The beginning and the end are there, everything superimposed together. For the characters in the book, there is no time.
Then the reader's mind/consciousness comes along and goes through the book, page by page. By this very act he creates memory and time. Maybe our minds are doing the same thing on an infinite, timeless substrate that Buddhism, Sufism, Zen, etc. call awareness.
I'd say perception is more relevant to the experience of unique identity than memory; I only see out of my eyes, which makes me feel separate from those around me, even if we'd lived through all the same experiences.
I'm not sure perception and memory are all that different. What are memories except echoes of perception? I actually wonder if memory is not our perceptual signals caught within neural loops that are able to feed back and trigger pattern-detection circuits. When this happens for multiple patterns in synchrony, we form associations. Is it right? Who knows... no idea how even neuroscience can start to answer that.
Dr. Charles Tart's experiment with "mutual hypnosis" had some startling implications. (Search for "Psychedelic Experiences Associated with a Novel Hypnotic Procedure, Mutual Hypnosis." There's a PDF, but I don't want to link to it, as I don't know its provenance. See also http://skepdic.com/tart.html)
One could flip it: there's no matter, only awareness. All awareness, when devoid of the body, the thoughts, emotions, and memories, is identical.
Now, make that leap, where `identical == expressing an identity`, where it is all one universal awareness, and that is reality.
You now have a pass to:
Hindu (Brahman):
"Brahman (/brəhmən/; ब्रह्मन्) connotes the highest Universal Principle, the Ultimate Reality in the universe."[1]
Islam (Allah):
"He is Allah, the one and only. God, The Eternal, the Absolute. He begot not, nor was He begotten, and there is none comparable to. Him."
Reality created nothing, nor was reality created.[2]
Taoism (Tao)
"The unnamable is the eternally real...
Free from desire, you realize the mystery.
Caught in desire, you see only the manifestations....
mystery and manifestations arise from the same source....." [3]
If you must, think of Adam and Eve as the first two self-aware beings, and the "fruit of the tree of knowledge" as using their special ability for the benefit of their own ego, at the expense of being one with the universe, like all other plants and "lesser" animals, a.k.a. "the original sin." In Taoist terms, Adam and Eve stopped seeing the mystery and, full of desire, saw the manifestations, and that's when they were 'cast out of the garden'. One moment perfectly content, the next moment unable to ever stop wanting 'more'.
> religion is about shattering the idea that one is separate from the rest of reality.
Wait. My main issue with religion is that it forces the belief that one is separate from source/God/universe and creates a lifelong relationship of separation, thus perverting humanity's relationship with ourselves and with our own "sacredness" or "divinity", in order to become a middleman for, and sell, something that was never separate from us. It's really insidious from that standpoint.
unless we speak of Daoism or Advaita, both nondualistic practices, both nonreligious.
But why are we talking about this in terms of "mysteries"? We'll always be able to reduce things down to a level where we don't know how they work. Right now, sure, we don't know what makes matter material, but for the purposes of the consciousness conversation, is that relevant? Just because we don't understand why electrons are electrons, does that prevent us from understanding how electricity works? "Consciousness" is an emergent property of our incredibly complex neurological infrastructure; the fact that we don't fully understand the atomic components of that infrastructure doesn't mean that the emergent property has some "mysterious" aspect any more than anything else does.
> Now, if that awareness created or creates the physical world (as the Bhagavad Gita seems to state) that would be pretty amazing [...]
Where else would it come from? If we break a conventionally physical object into smaller and smaller bits we find that: 1) There's mostly nothing there - a whole lotta nothing inside of most "stuff", and 2) What is there seems to simply be a (very fast) movement, and doesn't follow the behavior of conventional objects. Whatever it is probably can't even properly be described as noun-stuff doing verb-stuff.
So, there appears to be something out there, but if in fact there is, it clearly isn't made out of conventional objects. That "objectivization" and the languaging of the objects of awareness is created in awareness. Who is the master who makes the grass green, and all that...
> Now, if that awareness created or creates the physical world (as the Bhagavad Gita seems to state) that would be pretty amazing and I'd love to read about it.
I am not surprised that the top comment on HN, in 2016, is a pro-solipsist one ... given the site's recent proclivity towards all things spiritual over rational.
That sounds great but it ignores a possible and compelling explanation. We are aware because nature selected for awareness. Why? Perhaps because free will is a powerful tool for survival.
Consciousness is a suitcase word, which makes it challenging to debate. From [1]:
"In The Emotion Machine, Marvin Minsky discusses suitcase words—words that contain a variety of meanings packed into them, such as conscience, emotions, consciousness, experience, thinking, morality, right, and wrong.
"The word ‘consciousness’ is used to describe a wide range of activities, such as “how we reason and make decisions, how we represent our intentions, and how we know what we’ve recently done [p128].” If we want to better understand the various meanings of consciousness we need to analyse each one separately, rather than treating it as a single concept.
> Consciousness is a suitcase word, which makes it challenging to debate.
It is worse: it carries with it connotations of discarded science, such as levels of consciousness (a.k.a. "animals have a lower level, Buddhist monks a higher one"), and the mind-body separation — it is fairly clear by now that whatever could be called a mind is a part of our body.
There was a time when the majority belief was that emotions and diseases came from four types of humors running through our veins. Slowly, the consensus shifted toward miasma theory and then germ theory.
Words like humor, miasma, aether, have now fallen into disuse.
We are at that awkward period before another old word bites the dust.
I find that usually, people who are puzzled by consciousness only want to understand the relativity of points of view: "why am I me and not someone else? Why don't I wake up in a different body?" To which the answer is obviously that memories are located in your brain alone.
Have they? A rhetorical tone can be hard to convey over the aether, but humor me for a minute...
The point is that even after words (and whole concepts) fall out of favor scientifically, they can still have a metaphorical meaning. Sometimes it's the best we've got even though we know it's wrong. We might like to describe someone's personality in terms of serotonin levels and amygdala response, but we really don't know enough - so we call them hot-blooded, even though that theory died many a long age ago.
>If we want to better understand the various meanings of consciousness we need to analyse each one separately, rather than treating it as a single concept.
This implies the naive premise that consciousness the thing is not a suitcase itself (and thus, that an umbrella word is unsuitable to name it).
So, it basically presupposes the reductionism that it's supposed to prove.
The problem of a suitcase word is when different people are using different definitions to talk about the same concept. It is entirely reasonable to presuppose that that is nonsense. It has nothing to do with the complexity of the thing discussed. The problem is when there are disjoint definitions.
>The problem of a suitcase word is when different people are using different definitions to talk about the same concept. It is entirely reasonable to presuppose that that is nonsense.
Not really. People could be legitimately doing that, because the concept is inherently multi-faceted and each approaches it from a specific angle.
It's okay to approach things from different angles, and to use equivalent definitions even if they take wildly different shapes. But as my explanation continued, you cannot use disjoint definitions. If we are to talk about the concept of barnfulness, we won't have a productive discussion if one person defines it as 'red and square' and nothing more, while another person defines it as 'contains horses' and nothing more.
No concept inherently requires that definitions of it contradict. And highly contradictory definitions are a critical factor in having a suitcase word.
Not at all; the goal is to negotiate specific meanings or alternatives for a conversation, so you know everyone is on the same page, as a foundation for a fruitful discussion with useful thought.
Every word carries emotional meaning alongside it, packed in via various routes: culture, history, happenings, social group. The bad part is that words not intended to carry bad emotional meaning can still cause significant problems.
> Those who make the Very Large Mistake (of thinking they know enough about the nature of the physical to know that consciousness can’t be physical) tend to split into two groups. Members of the first group remain unshaken in their belief that consciousness exists, and conclude that there must be some sort of nonphysical stuff: They tend to become “dualists.” Members of the second group, passionately committed to the idea that everything is physical, make the most extraordinary move that has ever been made in the history of human thought. They deny the existence of consciousness: They become “eliminativists.”
What's problematic about the "say no to consciousness" case? It seems to be based on their preference for other descriptions of human brain activity, especially when the alternative description (consciousness) has been so ambiguous, vague and unhelpful.
> You cannot describe first-hand facts using only third-hand facts. This is the core philosophical dilemma in the hard problem of consciousness.
(or more accurately, I think third-hand factual systems could probably posit first-hand facts-- but be unable to assert validity or something. I am not a philosopher.)
Saying that consciousness isn't real because it's an emergent phenomenon of non-conscious processes is like saying that turbulence isn't real because it's an emergent property of non-fluid particles. I mean sure, at the level of atoms, it's just particles interacting locally, but that doesn't mean that at certain scales, turbulence isn't a better way of looking at their group behavior.
Technically, though, describing consciousness as an emergent property of neural functioning is a conjecture, albeit a very credible one.
Unlike turbulence, I only have one case where I have ever seen this consciousness which I am trying to explain as an epiphenomenon, namely my own. All other consciousnesses (if they even exist, which I can't prove directly) are not observed in the same way at all; they are self-reported by other people.
So you have the question of why I can only directly perceive that emergent property in exactly one brain and not others.
Likewise you'd have to explain why the emergent property is apparently continuous through my whole lifetime over which my brain changes enormously.
Finally you'd have to deal with the hypothetical case where I am duplicated molecule for molecule and then ask which consciousness would I have direct access to, the duplicate or the original.
So while dualism is obviously patently absurd, the idea of consciousness as epiphenomenon, while promising, needs work too.
You have some assumption with respect to time in there. To say that your consciousness is continuous relies on you trusting your memory of it being so, which, lacking a time machine, is no more provable or observable than other people's consciousness.
Removing the continuity assumption, perhaps even time, kind of resolves the duplicate question: both copies (if conscious) should have a memory of conscious you and experience their consciousness as a continuation of that, while you, the you of now, are as removed from those future copies as you are from me.
One question that seems tied to consciousness is what "now" is. Are there objective reasons to believe in the existence of a canonical "now" existing independent of conscious experience? If not, then there really is no point in distinguishing either momentary or continuous experiences as special in any way. Perhaps consciousness just emerges as an interpretation of a single state connecting past and future by memory and anticipation; it simply appears continuous, like the series of images at the top of the article appears to be.
Yes this is my favorite explanation: that consciousness is incredibly brief and delicate. That only one very precise organization of the brain could give rise to the epiphenomenon that is me. So precise in fact that "I" only exist for a very brief moment.
What I perceive as a continuous consciousness is really just a memory of other "I"s who were caused by this brain in the past, left a memory of introspection, and then dissipated to be replaced by a new one.
This view has many advantages: it solves the problem of time you allude to, it answers the problem of other minds and the problem of my childhood self. The only problem I cannot see solved is the hypothetical molecular-level duplicate I mentioned above.
I don't see a major problem with the difference between your own and other consciousness. Let's say you are American, and had never travelled abroad. You've only heard tales of France and Japan from others. Do you suspect that other countries have a fundamentally different nature to your own?
>So you have the question of why I can only directly perceive that emergent property in exactly one brain and not others
You are the thing carrying around that brain and sense organs, and the plumbing thereof. This is a unique situation, hence the uniqueness.
> Likewise you'd have to explain why the emergent property is apparently continuous through my whole lifetime over which my brain changes enormously.
Emergent phenomena can be very robust, such as Jupiter's spot, which was observed 340 years ago, or the common orbital plane of the solar system, which is billions of years old.
Also, your consciousness is not continuous. You turn (at least some of) it off when you go to sleep. Very curious! It's at least robustly bistable!
> Finally you'd have to deal with the hypothetical case where I am duplicated molecule for molecule and then ask which consciousness would I have direct access to, the duplicate or the original.
This seems very simple too. There are two copies of you at the instant of duplication, which diverge from that moment on. You1 would have access to the consciousness created by You1's nervous system, and You2 to that created by You2's nervous system. This question is only difficult if you insist on something other than the bodies being involved.
I would see A. The cloned me in chamber B would be brought into existence at that moment. They would share all the same memories as me, and believe themselves to be the one in chamber A. They may be slightly confused about why the letter changed, unless you prepared yourself beforehand.
But the original observer that was put into the duplication chamber would maintain their letter A.
If consciousness is purely a result of the arrangement of the brain, why would you not see B and the copy see A since the bodies are identical? This paradox is the problem with a simple emergent property explanation.
To say that the different history of the atoms in the original brain vs. the atoms in the duplicate is somehow relevant to the emergent property would be a very odd claim: we don't take into account the prior history of the atoms in a turbulent flow, for example, just what they are doing right now.
Or for clarity, if I gradually replaced every molecule in your brain with another identical one, the emergent "you" would not be affected. So, if it weren't already obvious, it clearly can not be dependent on the particular atoms that make you up.
And by the way, if your brain at 10 years old and 90 years old, in spite of being very different structurally, still give rise to your same "you", the copy would not even have to be very good.
I'm not thinking about this from my perspective, however. I'm in a way thinking of it from an outsider's perspective.
In terms of "me", there would be two individual instances of me that open their eyes and either see A or B. The "original" is determined by you the question-asker, and the fact that that "original" walked into chamber A. Because that "original" walked into chamber A, that "original" would see the letter A, whereas the clone would see the letter B. Its possible that the "original" was, at time of cloning, completely recreated/rearranged. However, that "original" would retain the A designation/"originality" because to an observer, someone walked into a chamber with no other exits, and exited out the same chamber, that someone is the same person.
You are as much what others think of you as what you think of yourself, in this case.
I don't think this is as mysterious a problem as you think it is. If I take a snapshot of a VM on AWS and copy it to a new instance, the VM will 'experience' being shut down on one instance and waking up on another, while the original will wake up exactly as it was. This no more negates the existence of the operating system than your scenario negates the existence of consciousness.
(Btw, the videogame _Soma_ goes over your exact scenario)
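For what it's worth, here is a minimal sketch of that snapshot-and-copy analogy using boto3. The instance ID, image name, and instance type are made up, and waiting/error handling is omitted; it's only meant to make the analogy concrete.

```python
# A minimal sketch of the snapshot-and-copy analogy using boto3; the instance ID,
# image name, and instance type are made up, and waiting/error handling is omitted.
import boto3

ec2 = boto3.client("ec2")

# "Snapshot" the running machine: bake its current disk state into an image.
image = ec2.create_image(InstanceId="i-0123456789abcdef0", Name="vm-state-snapshot")

# (In practice you would wait here until the image becomes available.)

# "Wake it up" somewhere else: launch a second instance from that image.
ec2.run_instances(ImageId=image["ImageId"], MinCount=1, MaxCount=1,
                  InstanceType="t3.micro")

# The original instance keeps running untouched; the copy boots with the same disk
# contents and then the two diverge -- which is the whole point of the analogy.
```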
If every bodily action is ultimately caused by things explained by physics, as is clearly the case, then there is no room for a non-physical mind.
Put another way, using our current knowledge of physics and chemistry, we can, on a high level anyway, account for every item in the cause and effect chain from the photons hitting the retina to the arm that a person raises to shade his eyes. There is no room to insert anything into that causal chain let alone something so massively influential that it causes all the body actions that we consider driven by conscious choice. Thus whatever is going on must be made from very ordinary matter.
It's this very fact that makes it such a fun puzzle to look at: very ordinary matter maybe doing something (causing consciousness) that is wildly unexpected.
Because if something occurs outside the current laws of physics, it does not mean it's not natural; rather, the laws of physics need to adjust and take it into account.
Except that if your consciousness (and everything in you) is under the _laws_ of physics, it implies that you are a deterministic or randomized machine. But then it is not easy to explain free will, for example. You could say that if it exists, it is a physical phenomenon, but what laws does it conform to? So most naturalists say that free will is just an illusion, but I personally would not call this a "patently obvious" choice. (I think it is a "Very Large Mistake")
It's not that consciousness "isn't real" in the sense that flying unicorns aren't real - it's that "consciousness" is just a word that humans made to describe a phenomenon, just like "turbulence."
Glad you mentioned 'emergent' and the turbulence example because it is going in the direction of the phrase I was thinking of in response to the Leibniz thing -- 'metasystem transition'.
There are problems with our experience of experience. We think consciousness is continuous, but it can be clearly demonstrated that it's fragmentary. Daniel Dennett demonstrates this using alternating images in his TED talks. However, just because our experience of experience doesn't have 100% fidelity, it doesn't follow that our experience doesn't exist. That would be like saying our vision doesn't exist because it doesn't have 100% fidelity. I think Daniel Dennett and many other consciousness deniers make this mistake.
It means that our experience isn't what we think it is, but if the discrepancy is large enough, that's more or less the same thing as saying it doesn't exist.
Basically, if none of the aspects of consciousness that make it important or special in our eyes turn out to be true, I think that denying its existence is actually a less misleading way to correct the record than trying to completely change our intuitive understanding of whatever it is we actually have.
We can't deny "vision exists". But we can say that vision doesn't work anything like people think it works. (Everyone has a mental model of a camera or webcam.)
Your eye is really more like a flashlight in the dark. And it interprets things way too much (see optical illusions).
It's easy to dismiss these problems, because you are rarely in situations where the difference matters. This is no coincidence, since people rarely construct things that our eyes can't process. For example, fluorescent lights are actually turning on and off constantly, but we just happen not to be able to see that. There really is no color purple; we just think it exists.
But it's quite easy to mess people up. See "rotating tunnel illusion", or the experiment where subjects fail to notice they are now giving directions to a different person, or the experiment where people fail to notice a guy in a gorilla suit (!) when given a simple task.
So the upshot is "vision is nothing like you think it is". Maybe we should have different words for the way computers see things (a lot more reality-based) vs. the way humans see things (with all their foibles). Ditto for consciousness.
If our experience itself isn't what we think it is, that would tend to undermine empiricism itself, because we rely on our senses to inform us about the natural world. This includes our sense of whether or not others agree with us, whether they have the same experiences as us, etc.
I think eliminativists like Dennett don't deny that things go on, just that consciousness, in the form of this magical thing, doesn't exist. The demonstration of that being that many of the things people believe about consciousness that make it so magical and hard to explain simply aren't true. People are often mistaken about their own consciousness, so perhaps the remarkable mysterious thing that we can't explain doesn't exist. Instead there could be something much simpler and explainable, much in the mold of the explanations that Dennett gives. The catch being that many people simply claim that the thing that Dennett can explain isn't consciousness, because it doesn't do the magical things they think consciousness does.
But he puts it like this: "Consciousness is an illusion." I guess he's just clickbaiting us. What you're saying is that he actually means: "Our culturally inherited notion of consciousness is incomplete and in some ways misguided." Well, of course. You can say that for physics and various sciences.
Yes, his 'Consciousness is an Illusion' is clickbait in a sense. Mostly he is trying to draw attention to the fact that the thing he is trying to explain is not the thing other people are saying is unexplainable. The commonly accepted notion of what consciousness is... he doesn't believe that exists. But that's just because he thinks we are deluding ourselves about what we are actually experiencing as consciousness -- the thing we think about when we talk about consciousness? That's just an illusion, and the real workings are just mundane, and you'll just say "that's not really consciousness" if I explain them.
> It denies the most fundamental fact of human experience, the existence of that experience itself.
How does "saying no to consciousness" deny "the existence of the human experience"? If you were talking about brain denial, then I would have to agree. It seems fully possible for the brain to continue to do interesting things even if "consciousness" turns out to be false. Thankfully it's nonfalsifiable apparently....
It's not clear to me what it would even mean for "consciousness to turn out to be false." The way consciousness is commonly defined (i.e. not with an analytical definition), its existence as a phenomenon can't be false - only our explanations for what it is and how it works.
It would be analogous to telling a physicist trying to provide an explanation for why objects fall to the ground that it might simply turn out to be false that objects fall to the ground at all. That doesn't make sense, because the exploration started with an observable phenomenon.
> It would be analogous to telling a physicist trying to provide an explanation for why objects fall to the ground that it might simply turn out to be false that objects fall to the ground at all.
It is false. Gravity is not about "ground", nor does it require that everything fall to the ground; indeed, there seem to be forces on objects stronger than the force of gravity. The ability to falsify concepts is critically important in many branches of epistemology.
I think you missed the point you were replying to. That objects fall to the ground is an undeniable observation. Having conscious experiences falls into the same category. For that matter, we experience objects falling to the ground. Gravity is an explanation of our experiences of falling objects. The experience comes first, then the explanation. If there was no experience of falling, there would be no explanation of gravity.
I think he grasped the point. The trick here is reframing. Someone says "objects falling to the ground is an undeniable observation", and the other responds "No, you're looking at it the wrong way; objects do something, but you just think that's falling to the ground; in practice it's something else that has the appearance of falling to the ground". The responder isn't denying your observations; they are denying your interpretation of what you've seen.
The same applies to eliminativists. They aren't denying that something happens that feels to you like experience; they are suggesting, however, that it may not be the thing that you think it is. They'll often point to people's own misunderstandings of their consciousness as examples that we can, in fact, actually be wrong about our own first-person experiences. One example of this is the fact that we think we experience full-field colour vision. In practice we have colour vision only in a narrow field in the center of our full visual field. You can perform experiments on yourself to demonstrate that this is in fact true. Another example is the sense of continuity to consciousness, which, again, can be demonstrated to actually be false. If our "consciousness" is sufficiently different in reality from what we normally presume to be consciousness then, in some real sense, consciousness as we commonly think of it indeed doesn't exist. That doesn't mean there is nothing, just that calling it consciousness with all the associations that implies is perhaps sufficiently misleading as to be wrong.
Because the description of brain activity does not capture the experience of being an active brain. The scientific picture of the world is odorless, tasteless, colorless, soundless, etc. It's an abstract, mathematical description. But we don't experience the world that way. There is a massive discrepancy between the two. That's consciousness.
I wouldn't say that's really a problem, because there's a perfectly fine scientific explanation of why it doesn't (couldn't) "capture" this. The experience of color or taste is triggered when information is fed to the brain through specific pathways. The pathways that are activated when we read about something are different and don't reach the same places, so any information you acquire through reading and cognition is necessarily going to be "tasteless". Because of that, we can't possibly "connect" our knowledge of the process that causes the qualia of taste to the qualia of taste, but that's just a limitation of the brain's wiring, not a fault in the model itself.
> the experience of color or taste is triggered when information is fed to the brain through specific pathways
The question is why the experience is triggered. Why does moving atoms at specific positions trigger the perception of color or pain? Current physics doesn't have an answer for that, even in theory, assuming we can see exactly what happens in the brain.
The thing is, though, that even if "physics" did have an answer to this, it's not a given that we would understand it. Our incapability to wrap our head around reductionist explanations of qualia does not entail that they are inaccurate, only that they fail to satisfy us. I mean, let me put it this way: what do you expect from an answer?
For example, in order to understand why atoms moving in certain ways produce a taste, do you need the explanation to trigger some experience or memory of a taste? If so, you will never understand.
Or do you expect the answer to be short? What if qualia tend to be as complex as the brains that experience them (for holistic reasons), so the proof for our own qualia spans trillions of pages? I think that's actually quite probable, but you'll never understand in that case either.
The space of naturalistic explanations of things is humongous and we're already mass producing objects so complex no single individual could hope to understand every aspect of them. If you're going to start with "moving atoms at specific positions" in order to understand a defining aspect of a system made out of hundreds of billions of complex switches connected in obscure ways, let me tell you that you're going to need several Libraries of Congress before you're done.
What I expect from an answer to the hard problem of consciousness is a basic high level explanation for how the moving of atoms in certain ways can actually give rise to qualia. You seem to be starting with the presumption that it does. But I'm not convinced that the moving of atoms alone can possibly give rise to qualia, because our knowledge so far about how atoms move seems completely different from our experience of qualia.
On the other hand, I don't understand all the details of what goes into a Macbook Pro, from network protocols to graphics rendering, but I have a basic high level understanding of how the things it does are possible, and it doesn't take a Library of Congress to explain that. My high level knowledge of how atoms move provide sufficient assurance and understanding of how a Macbook Pro experience is created. That is simply not true for qualia.
> My high level knowledge of how atoms move provide sufficient assurance and understanding of how a Macbook Pro experience is created.
I don't think it does per se. In a world where computers didn't exist you would probably not be able to say whether a MacBook Pro is a thing that could possibly exist or not just from fundamental knowledge of physics. If I gave you the most low-level possible description of physics, it'd probably take you some time to figure out whether it supports solid macroscopic objects. If I just give you the rules to the Game of Life cellular automaton, it's far from obvious that it supports replicators, let alone that it's Turing complete.
Most of these things are only obvious in hindsight, and the understanding is made easier by the lack of cognitive dissonance: we know we made them using certain principles, so of course it works.
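To make the Game of Life point concrete, here is a minimal sketch of the rule (numpy, with toroidal wrap-around assumed). Nothing in these few lines hints at gliders, replicators, or Turing completeness, yet all of them follow from exactly this update:

```python
# One generation of Conway's Game of Life on a wrap-around grid.
import numpy as np

def life_step(grid):
    """Apply the Life rule: birth on 3 neighbors, survival on 2 or 3."""
    neighbors = sum(np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                    if (dy, dx) != (0, 0))
    return ((neighbors == 3) | ((grid == 1) & (neighbors == 2))).astype(int)

# A glider: a five-cell pattern that, purely as a consequence of the rule above,
# translates itself diagonally across the grid every four generations.
grid = np.zeros((10, 10), dtype=int)
for y, x in [(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)]:
    grid[y, x] = 1
for _ in range(4):
    grid = life_step(grid)
```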
> That is simply not true for qualia.
I don't know about that. My own high level knowledge of physics does provide me with sufficient assurance and understanding of how qualia happens. Perhaps I'm wrong, but I do feel like I understand it: what defines or distinguishes, say, my qualia of red, is how it primes my brain into thinking about red things, how it changes my mood, how it behaves relative to every other concept, which makes it about as complex as my entire brain would be. I also imagine that there is a continuum of "qualia" such that basically any physical system has some, but they are so trivial we don't make the connection.
Again, that could be nonsense, but it makes perfect sense to me -- about as much as a MacBook does. The hard problem that you see flat out does not exist to me, and presumably the reason is that we are satisfied by different explanations.
> Perhaps I'm wrong, but I do feel like I understand it: what defines or distinguishes, say, my qualia of red, is how it primes my brain into thinking about red things, how it changes my mood, how it behaves relative to every other concept, which makes it about as complex as my entire brain would be.
Over here it seems like you're defining consciousness by its relation to other objects. In other words, you're reducing consciousness to its visible outputs, such as behaviour, mood, etc. If that's all consciousness is, I fully agree with you that we have sufficient knowledge of physics to explain it. But that's not what I'm referring to. When I talk about consciousness, I'm referring to something that is completely independent of its outputs. Is it not conceivable that a man who is missing both his arms and legs, mute, blind, and deaf, can still have a conscious ego? So what is that experience? This is the question which has no parallel in our knowledge of how atoms move.
> I'm referring to something that is completely independent of its outputs. Is it not conceivable that a man who is missing both his arms and legs, mute, blind, and deaf, can still have a conscious ego?
The brain has many internal inputs and outputs, even more of them than external ones. Even if you imagine a completely isolated brain that can't see, hear or move, it's still exchanging signals internally. For instance, such a brain could fabricate a dream world, create (possibly nonsensical) images to feed to its visual cortex directly, and so on. Its conscious experience would be whatever it builds for itself, operating through these internal input and output channels, shaped by random noise and whatever memories it may have of real things. It's what dreaming is, basically. That is consistent with my view.
If, on the other hand, you ask me to imagine a mute, blind, deaf, paralyzed man who also has his brain cut up in such a way that it can't talk to itself, then no, I do not think that it is conceivable that the man has a conscious ego.
It seems once again that you're defining the qualia in terms of some input/output formula. But that's missing my point; I used "external" inputs/outputs merely as an example. At what point do those inputs and outputs, whether they be internal or external, become qualia? In other words, where does the "ability to experience" the inputs and outputs come from? Does the mere fact that the brain receives input make it able to experience the inputs?
I never brought up inputs and outputs, though, you did, and ultimately I feel it's kind of a strawman of my position.
A brain is made out of billions of neurons, each of which has its own inputs and outputs, and so does each part of each neuron, down to the atomic level. Every physical entity is a kind of IO machine, so in a sense, any physical definition I may give of qualia will reduce to IO patterns. That is, however, missing the bigger picture, which is the structure of the system: how the information flows within it, and how certain conditions modify that flow. That's where the meat is.
In my mind, consciousness and qualia are structural properties. When experiencing a qualia, the information flows in a particular way in your brain, like a mode of operation, and that's what the qualia is.
I think of it this way: you experience "anger" when your brain is put in an "anger mode", which taints everything it sees or does -- anger is not an "input" or an "output", it is a "mode" that your whole brain is in. It makes you "different", although not in a way that makes you lose your sense of self (well, usually). And in a lesser way, when you see red, your brain enters a "red mode", when you eat nougat it is put in a "nougat mode", and when you think about red or nougat your brain also enters a "recall" version of red or nougat mode that has similar properties.
So what happens is that your brain can enter billions of subtly different "modes" that correspond to different stimuli (real or not). Each of these "modes" makes you think a bit differently, in a way that's adapted to the stimuli you received. These are the "experiences" it can have: they are not inputs or outputs, they are ways of thinking, triggered by stimuli or memories or sometimes random noise.
I guess I would say a qualia is a bit like a protein fold, it's a functional reconfiguration.
I understand that you view qualia as structural properties, and can map trillions of physical brain-configurations to trillions of subjective brain-states. However there is still a massive discrepancy between our subjective experience of such states, and the physical configurations that are claimed to represent it - which is why it is not a satisfactory answer to the hard problem of consciousness for many people.
The 2 big discrepancies I see are these: 1. We have the subjective perception of free will: I can choose to write these sentences or not. Where does that intention originate? If our consciousness is just neuronal computation, then it is completely deterministic or probabilistic. That entirely contradicts most people's experience, because we can choose what we want to do, and do things that strongly contradict our "mood". Some people can even go further and consciously modify their brain states. If you claim that is also just a product of complex structural properties, you get into an infinitely regressing claim.
2. Despite being able to create incredibly complex connected structures, arguably even more complicated than a single person's brain, such as the Internet, we have no evidence of such an entity having a subjective feeling of consciousness. There is simply no parallel in our current knowledge for how subjective experience can arise from system configurations. You seem to be simply assuming that an experience is such a configuration, but we have no evidence of that. If such a configuration could be consciousness itself, then we have examples of other entities which are connected in similarly complex ways, yet we don't believe they are concious. This argument is explained here: http://www.faculty.ucr.edu/~eschwitz/SchwitzPapers/USAconsci...
1. I find that one fairly easy to explain (easier than qualia, even). When we weigh decisions, we can use an internal model of ourselves to support our thought process. We can imagine ourselves doing X, evaluate the consequences a bit, assess how it feels, then imagine ourselves doing Y, and so on. Considering that our intuitive understanding of "I can do X" corresponds to "I can imagine my self-model doing X, and I don't see any external factors that would stop me," then our ability to evaluate multiple courses of action directly entails an impression of free will.
Now, this "free will" only corresponds to epistemic non-determinism about ourselves: "for all we know", we can do X, and we can also do Y, because if we knew we couldn't, we wouldn't be evaluating these possibilities. So in a sense, it is our self-model that has free will: it is the model that we imagine in various circumstances, but the model isn't the real thing. So our mistake is to assume that this property of the model is also a property of the real thing, i.e. that our epistemic uncertainty translates into some kind of metaphysical non-determinism.
I would expect that any decision system that can construct hypotheticals about itself would have an impression of free will, or a qualia of free will. They may not necessarily be able to express it, or to feel it as vividly as we do, but they would have it. (A toy sketch of this idea appears after this comment.)
2. I know the point you are trying to make, but where you see a reductio ad absurdum I just see a lack of imagination. Our usual understanding of consciousness is very, very, very deeply anthropomorphic. Neither the Internet nor the USA are systems that have a human or animal consciousness, that much is clear. "But what about consciousness in general?" Man, I'm sorry to be blunt here, but you wouldn't know it even if it hit you in the face. We're human. We have human consciousness. Our idea of consciousness outside of the exact thing that we have is: fuck. all.
We can start with Leibniz's thought experiment they mention in the article, where a gigantic brain is built out of gears and pipes. Or the similar thought experiment about the "China brain" which emulates a human brain by using one person to emulate each neuron. To me, it is clear that both of these things would, in fact, be conscious. I see no good reason to think otherwise and I am perfectly content biting that bullet. I have zero reservations, even intuitively.
Now, regarding something like a "USA consciousness". Personally, I do not think this is bizarre. It's only bizarre under an anthropomorphic view of consciousness, or to put it another way, it's only bizarre if you believe that conscious entities ought to be relatable (an absurd expectation). But if you use the criteria I've given, you can find several things that would correspond to the "USA's qualia". For example, 9/11 caused large structural changes that you could interpret as the USA feeling a qualia of fear and/or anger. A new episode of Game of Thrones may also be a qualia of sorts. Makes perfect sense to me.
Perhaps that's not evidence to you, but I don't know what more you'd expect. I think you conflate consciousness with human consciousness, so you only perceive consciousness when it's similar enough to your own. If I may indulge in a silly (but, I think, accurate) comparison, it's a bit like a shoemaker conflating craftsmanship with shoemaking, and then saying there is no evidence that a boatsman is a craftsman, because they've never seen them make shoes.
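The toy sketch referenced in point 1 above, with all action names and scores invented purely for illustration: the mechanism below is fully deterministic, yet from the inside it entertains several "open" possibilities against a crude self-model before committing, which is where the impression of free will would come from on that account.

```python
# Toy sketch: a deterministic decision loop that "imagines" candidate actions
# against a crude self-model. The actions and scores are made up for illustration.

self_model = {"write_reply": 3, "go_for_a_walk": 5, "do_nothing": 1}

def imagine(action):
    """Run the action against the internal model and score the imagined outcome."""
    return self_model.get(action, 0)

# Epistemic openness: "for all I know, I could do any of these."
candidates = list(self_model)
scores = {action: imagine(action) for action in candidates}

# ...followed by a perfectly deterministic choice once the evaluation is done.
choice = max(scores, key=scores.get)
print(choice)  # 'go_for_a_walk'
```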
Nonetheless, there is no reason to expect a description to be able to trigger the experience even if we knew exactly why it was triggered by sensory input.
"Human conscious experience is wholly a matter of physical goings-on in the body and in particular the brain."
Is there any proof of this? The whole article seems to hinge on this assumption and then go on from there. We can't see or measure consciousness (brain activity yes, consciousness no). We only experience it. The article makes far too many naive assumptions without any proof.
All the proof you need is the existence of psychoactive chemicals, the effectiveness of transcranial magnetic stimulation, evidence from fMRI studies, and the selective personality alterations that occur in cases of localized brain damage.
But physicalism explains it while dualism has to work around it, violating Occam's razor.
More points in favor of physicalism: scientists' ability to alter rat neurons to be photosensitive and attach them to fiber optic cables, the ability to implant memories in rats by altering a neuron, the mapping of place neurons in rats -- or perhaps the fact that brains are produced entirely from matter consumed by the mother during gestation and by the creature afterward, so where does the dualistic mind get a chance to insert itself?
I'll note that I used to be a staunch dualist for religious reasons, but became convinced of physicalism over a period of years of semi-active reading.
One could argue that consciousness is a field phenomenon and that our (gross) material brain is a device that interacts with this field. Modify the device and you modify the local effect.
I don't think so; consciousness seems to be from another dimension. You feel that you are present in this world, while even if you build the most complex machinery you don't obtain that result.
I am obviously just speculating because I don't see an obvious connection between matter and consciousness even if they are linked in some way. The article doesn't give any clarity about this problem stated by Leibniz et al.
"...I don't see an obvious connection between matter and consciousness even if they are linked in some way."
Dualism has worse problems, though. If there is no connection between consciousness and matter, how do psychoactive substances alter consciousness? How do conscious decisions cause material movements?
Do we have proof of any phenomenon being physical?
What we have are various phenomena that we attempt to classify. Some would like there to be one meta category we can lump everything into. That's typically called material or physical. But it could also be labelled ideal or mental, from a different categorization scheme.
The problem we've had so far is that some phenomena are not easily placed into the one all-encompassing meta category. It would be easier for us conceptually if we could make it all fit, but we haven't quite succeeded so far.
Actually I think the grandparent comment should be read "...proof [of the truth] of any phenomenon..."
I suggest the problem is with the concept "true".
The assumption that all phenomena have a physical basis is exceedingly practical and produces fantastic predictive success. "True-ness" as it is intuitively defined is not interesting in this case; indeed, it may be nonsense.
Stated another way, if we're going to describe "true" as some aspect of reality that we can never directly perceive but only get at through fallible senses and perceptions then we are not only flirting with something that is likely a-priori unprovable but also likely very uninteresting.
Wouldn't it depend on what sort of attributes phenomena in the category are supposed to have? If we say that everything is material, then presumably we mean that there are criteria for what makes something material, probably along the lines of what physicists use to make physical theories.
Otherwise, our category should just be called "everything".
I've always found that line of argumentation very disingenuous[1].
Of all the explanations we have for anything that has ever been explained, anywhere in any time, they all fall in the physical/material category.
This of course doesn't mean there could never be another explanation. But the time to take it seriously is when serious evidence is presented.
Just because we can imagine a certain reality in which events, actions and things might occur in a different way, doesn't mean it is this reality.
[1]The argument is disingenuous, not you of course.
> Of all the explanations we have for anything that has ever been explained, anywhere in any time, they all fall in the physical/material category.
So far we haven't come up with a decent explanation of consciousness which is why I am prepared to be a bit more open minded. The fact you can't observe it, only experience it adds to that. It may well be physical, but it may not.
It might be an entirely different thing altogether! Why restrict our "open mind" to only those two choices?
> So far we haven't come up with a decent explanation of consciousness which is why I am prepared to be a bit more open minded.
Then we should say we don't know. It's disingenuous to say that because we don't have an answer, or as in this case, a full and complete answer, then it means that it might be this entirely different thing for which nobody has any evidence, no one has any explanation and no one can even describe or define.
If you assert that consciousness has no physical cause, then that's all you can say about it. You're skyhooking.
you can take the wrong turn Penrose took and posit that it's due to quantum effects, as a sort of middle ground.
My way of looking at it is that we use "consciousness" as a default bucket for things we don't have instrumentation to measure, or otherwise have a functioning predictive model for.
I've always wondered if anyone has pointed out to Penrose that asserting all behavior is random is not better than the alternatives, philosophically speaking.
The reason it's bad, using a Dennett frame of mind, is that it's a sort of left-handed "skyhook": throwing "quantum" in to resolve the longstanding divide between Mechanists and those who believe in a Prime Mover or deity. It's a deus ex machina of sorts.
I can sympathize with Penrose; his entire career was about measuring the information transforms of things like black holes, so why not apply the same basic recipe to consciousness? It's still an interesting idea, but it's not, SFAIK, testable.
Without 100% knowledge of all the physical causes, how could you ever rule out that the stuff you don't know about is caused physically? Proof that causality actually works (that physical effects are always caused by prior physical events) is not remotely on the horizon.
If consciousness is something fundamentally nonphysical, then causality is out the window, because my actions in the world are not actually caused by anything in the world.
Epiphenomenalism is [roughly] a conception of the mind as a non-physical entity that 'creates' consciousness, but interacts with the physical world in a read-only manner.
100% knowledge of all physical causes does not rule it out, and it does not rule out 100% physical causality.
The idea of proof, in the empirical sense in which term is used in dealing with any statements about phenomena, presumes this about all phenomena. It is itself unprovable (for any phenomenon), as it is the axiomatic underpinning on which all "proof" of anything about phenomena rests.
Occam's razor. Any non-physical theory of consciousness is extraordinarily more complicated than a straightforward physical theory if you consider the implications.
If consciousness is just a complicated computation, which seems the simplest explanation, then it can be analogous to the attitude toward "old computer science beliefs". As a side issue, someday, much like people make furniture using 300-year-old technology, there will be craftsmen making programs using no algorithm discovered (learned? found? created?) post-1970.
It's a good thing we invented computers in recorded history and they're not prehistoric or delivered from space aliens, or we'd be having the same sophistry in philosophical discussions about computers as we do about consciousness. What, you claim a mere arrangement of impure silicon atoms implements the majesty and beauty of the lambda and the red-black tree? When you think of binding a value to a variable, how do you propose to do that with mere atoms alone? So you think you can build a computer completely out of NOR gates (a small sketch of that idea follows after this comment)? Well, that's very nice, but seeing as computers came from space aliens and no human has ever made a computer, we'll wait on that theory until we see you run "hello world" on it. Surely an "intelligent designer" created computers, and such complexity cannot operate via mere atoms, much like mere atoms arranged in peculiar shapes make hard steel.
Something like this already almost exists between the EE/microcontroller type people vs the CS/algo people. Note my plethora of weasel words, something, almost, etc.
Aside from the philosophical analogies being entertaining by themselves, it would make interesting hard sci fi. Imagine a world where god is blamed for giving us clay tablets containing most of automata theory. Wouldn't that be an interesting sci fi setting?
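The small sketch of the "all the way down to NOR gates" idea referenced above: NOR is functionally complete, so NOT, OR, AND, and everything built from them reduce to one primitive (Python functions stand in for wiring here).

```python
# NOR is functionally complete: every Boolean function can be built from NOR alone.
def NOR(a, b):
    return int(not (a or b))

def NOT(a):    return NOR(a, a)
def OR(a, b):  return NOT(NOR(a, b))
def AND(a, b): return NOR(NOT(a), NOT(b))
def XOR(a, b): return AND(OR(a, b), NOT(AND(a, b)))

# A one-bit half adder, the seed of an arithmetic unit, with nothing but NOR underneath.
def half_adder(a, b):
    return XOR(a, b), AND(a, b)   # (sum, carry)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", half_adder(a, b))
```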
The problem with claiming that consciousness is just a complicated computation is not merely that we simply haven't advanced our study enough to understand it. The problem is that what we know so far about the nature of computation seems to directly contradict and falsify the notion that consciousness is just computation. To put it simply: our knowledge of computational complexity, algorithms, and general problem solving shows that our human intelligence performs far more sophisticated operations than are possible given the number of neurons that exist in our brains (unless there are some super-fast algorithms that we haven't discovered yet, which would have massive implications on the P=NP problem, and our understanding of physics).
> our knowledge of computational complexity, algorithms, and general problem solving shows that our human intelligence performs far more sophisticated operations than are possible given the number of neurons that exist in our brains
This is not true.
I wish I could respond with something more interesting, but that's all there is to it. You are just saying something that is incorrect. Our brains are information-theoretically and complexity-theoretically entirely reasonable.
> Our brains are information-theoretically and complexity-theoretically entirely reasonable.
I know this has been the traditional view of most information theorists. But many philosophers (and a few computer scientists) challenge that view. See Fodor's frame problem[1]. The basic idea is this: yes, our brains are complexity-theoretically reasonable after the problem is defined given a set of inputs, outputs, and problem states. However the real issue is how our brains formulate the problem in the first place - how to sort through the combinatorially explosive amount of information in order to narrow down what is relevant to a problem. This objection has never been satisfactorily answered by information theorists in my opinion. The answers given by info theorists always seems to presuppose the existence of a problem formulation before solving it.
Everything in the brain is natively probabilistic. Representations which have low to zero mutual information with the current sensory input are simply not computed-with. You could say that almost all representations will have nonzero mutual information with almost all real-world sensory input, but the sensory input is precision-weighted, so it will take both a sufficiently high degree of mutual information and sufficiently high precision of the activating sense-data to make a given branch of the hierarchy (the whole model is hierarchical, and very large) relevant enough to load it into working memory and use it for prediction.
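For what it's worth, one common reading of "precision-weighted" (as in the predictive-processing literature) is simply inverse-variance weighting. A minimal sketch under Gaussian assumptions, with every number invented for illustration:

```python
# Minimal sketch of precision weighting (Bayesian cue combination) with Gaussians.
# All numbers are invented; "precision" is simply the inverse of the variance.
prior_mean, prior_var = 0.0, 4.0   # the model's prediction and its uncertainty
sense_mean, sense_var = 2.0, 1.0   # the incoming sense-datum and its noise

prior_precision = 1.0 / prior_var
sense_precision = 1.0 / sense_var

posterior_mean = (prior_precision * prior_mean + sense_precision * sense_mean) \
                 / (prior_precision + sense_precision)

# Precise (low-variance) input dominates the update; imprecise input barely moves
# the model, i.e. it is not "relevant enough" to reshape the prediction.
print(posterior_mean)  # 1.6, pulled strongly toward the more precise sensory estimate
```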
> Representations which have low to zero mutual information with the current sensory input are simply not computed-with.
Who decides that a particular representation in the brain has zero mutual information with current sensory input? This requires a comparison of some sort (i.e. computation).
It's a distributed computation that has been mostly precomputed, ie: brain connectivity. And a very popular theory of the brain says it does calculate second-order (precision/entropy) statistics about what it models.
The combinatorially explosive amount of information that we deal with at every single moment is too large to be dismissed as "precomputed by brain connectivity". If such a large amount of computation could be "precomputed", it has massive implications for our understanding of complexity theory. Besides, it contradicts our subjective feeling of free will, which is why it is not a satisfactory answer for most people.
>combinatorially explosive amount of information that we deal with at every single moment
What are you referring to? The sensory input to the brain takes, as a ridiculously high upper bound, perhaps terabits per second. In reality it's probably a few megabits per second. It's really not much. Vision is, I suspect, by far the highest-bandwidth input to our brain, and computer vision has met or exceeded human vision in many tasks.
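For a rough sense of scale, the standard back-of-envelope estimate for the optic nerve (the axon count and rates are order-of-magnitude figures; published estimates land around 10 Mbit/s per eye):

    # Back-of-envelope estimate of the optic nerve's information rate.
    # Figures are order-of-magnitude estimates only.
    ganglion_axons = 1e6        # ~1 million retinal ganglion cell axons per eye
    mean_spike_rate_hz = 10.0   # average firing rate, order of magnitude
    bits_per_spike = 1.0        # generous; real coding efficiency is debated

    bits_per_sec_per_eye = ganglion_axons * mean_spike_rate_hz * bits_per_spike
    print(f"~{bits_per_sec_per_eye / 1e6:.0f} Mbit/s per eye")  # ~10 Mbit/s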
> our human intelligence performs far more sophisticated operations than are possible given the number of neurons that exist in our brains
This only holds if you presume those operations are being performed in the same way as a computer, rather than the incredibly imprecise rule-of-thumb heuristics they actually use.
>To put it simply: our knowledge of computational complexity, algorithms, and general problem solving shows that our human intelligence performs far more sophisticated operations than are possible given the number of neurons that exist in our brains
I'd really like to see a source for this, especially taking into account that neurons are natively probabilistic.
Integrated Information Theory[0][1] is the first approach to solving these problems that makes sense to me. It takes an axiomatic approach and provides a computable method to associate both a level of consciousness and the content of the experience to an arbitrary causal system. Unlike all these unscientific philosophical considerations, it can actually be used to create testable hypotheses.
I really don't get the appeal of IIT. Smells like curve fitting, which would be fine except they don't have anything solid to fit against. I agree with essentially everything Scott Aaronson has said about it [1][2]:
> When we consider whether to accept IIT’s equation of integrated information with consciousness, we don’t start with any agreed-upon, independent notion of consciousness against which the new notion can be compared. The main things we start with, in my view, are certain paradigm-cases that gesture toward what we mean
> [...] But how does Giulio know that the cerebellum isn’t conscious? [...] he just told us that a 2D square grid [a simple circuit that scores high phi] is conscious!
Have you read Scott Aaronson's "Why I Am Not An Integrated Information Theorist" (http://www.scottaaronson.com/blog/?p=1799)? He commends IIT for making falsifiable predictions, but argues that some of those predictions are, in fact, false.
Yes. In the follow-up linked above he also describes the authors' responses to this. The predictions are not de facto false. He argues that they are intuitively false, which is a matter of personal belief but not empirical research. The same goes for some counterarguments.
It will probably depend on how well large systems can be approximated.
In terms of reproducible results, it would be most interesting to research the shape of different qualia and how they progress if the underlying network changes (both in function and connectivity). Right now, we can only rely on self-aware and self-reporting systems. But if we knew how actual experiences integrate when two systems merge, it might be possible to get a "feeling" for others.
Cases such as these[0] are therefore extremely interesting. If it is possible to gradually change the connectivity between two systems, then according to the theory there should be a sudden "split" into two experiences. High-bandwidth brain-computer-brain interfaces could provide such an opportunity in the future.
[1] is another reason why I tend to be pretty agnostic about intuitive arguments.
I don't see why any of that is necessary. It seems just as mystical and hand wavey as other attempts to talk about consciousness, from what I've browsed.
There isn't anything mystical about it, though, and - if it's close to the truth - it would have some interesting implications. I would be interested in seeing different approaches with similar explanatory power.
To echo callesgg elsewhere in this thread, my own pet "theory" is that consciousness is a metacomputation, i.e. not just a process of neural reactions to outside events, but reaction to a process of reaction. For example, understanding the need to convey acquired information to another being of the same species eventually led to the development of language. Figuring out that the brain forgets information led to the development of storytelling (hoping that someone else will remember) and eventually writing. And so on. Consciousness to me is not black and white but instead a spectrum of traits made possible by thinking about thinking.
"metacomputation" -- that reminded me of this from Wikipedia on metasystem transition:
"Turchin has applied the concept of metasystem transition in the domain of computing, via the notion of metacompilation or supercompilation. A supercompiler is a compiler program that compiles its own code, thus increasing its own efficiency, producing a remarkable speedup in its execution."
There's meta computation and meta storage - the global consciousness is all computable Wisdom, but done with sacredness (delay). Space, or the creation of it, is networking or data exchange between the computation layer and the "storage" layer. We are here because of the causality happening between the layers.
You ask what causes consciousness and how it emerges, which presupposes something before consciousness - something that can cause it and it can emerge from. Most likely you intend the physical world.
Why is consciousness in need of being explained in that manner, is what I think the author would ask. It's similar to saying that physical matter or energy are hard problems, as we don't know what causes them or how they emerge - not in the sense of conversion from one form of energy or matter to another, but what generated matter or energy in the first place.
(With that said, I do think there are strong arguments for favoring the physical world, and that these pose a challenge to the author's position.)
>You ask what causes consciousness and how it emerges, which presupposes something before consciousness - something that can cause it and it can emerge from.
And why not? Most (all?) things, including rocks, have a cause and constituent elements that need to be arranged in a certain way for them to have a particular function.
The strange thing would be to presuppose consciousness without anything before it - which is like what Christians, for example, hold about the "soul".
My personal theory is that consciousness is a feedback loop.
I have major issues explaining why I think that, but somehow it fits my model.
When thinking of consciousness as a feedback loop, stuff sort of makes sense. I have been unable to poke holes in the theory but also unable to add strength to it.
Gödel, Escher, Bach also touches on this, as well as the relationship between the impossibility of absolute formal logic, reductionism vs. holism, and consciousness.
FWIW, this book gave me the idea that consciousness could be some sort of feedback loop involving the whole Universe (including you and me and everybody else) possibly via quantum entanglement. I know that's terribly hand-wavey and soupy, sorry about that, just felt I should mention it.
Related: meditation (at least some kinds) seems to operate by turning the subjective awareness back onto itself, becoming aware of becoming aware of... etc... A little like a camera hooked up to a monitor that it's pointed at, you get an infinite regression.
In the camera-monitor system there is a physical loop of information from the lens-to-screen-to-lens-to...
When you do it with your awareness, there must be something looping somewhere, regardless of the physicality or otherwise of consciousness, eh?
I'm not sure you need entanglement. Embedded/embodied cognition runs with the argument that a brain isn't intelligent without a universe, and brains-in-vats models are missing something fundamental, which is all the useful stuff we offload to the body and world around us.
> FWIW, this book gave me the idea that consciousness could be some sort of feedback loop involving the whole Universe (including you and me and everybody else) possibly via quantum entanglement
Then this book sounds like it's not worth reading. There is absolutely no need to invoke pseudo-quantum mumbo-jumbo to investigate consciousness. We don't understand the chemical operation of neurons well enough to say whether quantum effects are necessary for neural functioning (although they probably aren't). Most "quantum-looking" (i.e. obviously non-classical) processes happen at much smaller space and energy scales than the chemical processes we know are involved in cognition.
Quantum entanglement almost certainly isn't involved because it can't be used to transmit information and it requires high degrees of precision to make use of in the first place.
The brain's thermal noise floor is vastly larger than the scale of most "interesting" quantum effects.
I have come to the same conclusion, but I have some ideas about how to back it up. My only concern with packaging it up as a theory is that probably few would take me seriously because I don't have any neuroscience or psychology background.
If the thoughts you have can feed back into the beginning of the thought cycle, so that you remember having had them, you don't actually need to be "conscious" to think you are "conscious".
Well, technically you will be conscious, but the meaning/description of conscious in your head won't fit what it actually is.
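If it helps, here is a toy sketch of that "thought cycle" idea: each step's input includes the previous thought, so the loop produces both the next thought and a memory of having had the earlier ones. The labels are made up and purely illustrative:

    # Toy "thought cycle" where each step's input includes the previous thought.
    def thought_cycle(stimulus, steps=3):
        memory = []          # trace of previous thoughts
        thought = stimulus   # the first thought is just the raw input
        for _ in range(steps):
            memory.append(thought)
            # The next thought is a reaction to the last thought:
            thought = f"reflecting on <{thought}>"
        return memory

    for t in thought_cycle("loud noise"):
        print(t)
    # loud noise
    # reflecting on <loud noise>
    # reflecting on <reflecting on <loud noise>>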
This is an almost stereotypical reductionist view - yes, we know a lot of fine details of how consciousness and cognition and experience work, but we absolutely do not understand their nature. That's like saying chemistry is all you need to understand how a bacterium works. You'll be able to decipher a lot of the mechanisms, but how the system developed and how the pieces fit together is always going to remain beyond your grasp without more specific research.
After reading all of the interesting comments and debates, I think that as far as having fun wasting time with amateur philosophy, consciousness can be quite a fun topic.
But beyond that I think I just want to echo what paulsutter said in his comment -- consciousness is a suitcase word with too many different meanings. To have a 'real' discussion you would need to unpack all of the different meanings at the beginning, but no one is going to do that.
Also, the word consciousness is directly tied into belief systems, so if you want to understand someone's belief system, you can ask them about it, but the nature of beliefs is that you are not going to have a constructive conversation.
If anyone is curious about my own beliefs, I think that we can generally answer the Leibniz machine question with 'emergence' so perhaps I am an emergentist https://en.wikipedia.org/wiki/Emergentism
I'm struggling to identify the point of the article.
Mystery and matter have never been mutually exclusive. Depending on its configuration, matter can lead to some mysterious things. "It's matter" is a bad answer to pretty much every question.
The point I got is this: Just because we can't figure out a physical picture of consciousness doesn't mean there isn't one. Furthermore, it is the physical part that is the big mystery, rather than the consciousness part.
The author is stating that physicalism (formerly materialism) is true; however, physics fails to fully capture what it is to be physical, because it doesn't include our experience of said physical world.
Basically, our conceptual understanding is lacking. But dualists or idealists might say the same thing, so ...
The evidence is that consciousness is tied to the physical somehow (via brains at least). What this means, we haven't been able to figure out. Or at least, there is no consensus.
I am not denying it is tied to the physical, that seems fairly obvious, but that doesn't mean it is an exclusively physical phenomenon. A radio is not the same as the channel it is playing, yet one is tied to the other.
Maybe you would sound clearer if you explained what else, other than the "physical", there is that consciousness - or anything else, for that matter - could be attributed to.
I feel like a lot of the replies here are missing what's actually a quite subtle, thoughtful argument:
The German philosopher Gottfried Wilhelm Leibniz made the point vividly in 1714. Perception or consciousness, he wrote, is “inexplicable on mechanical principles, i.e. by shapes and movements. If we imagine a machine whose structure makes it think, sense, and be conscious, we can conceive of it being enlarged in such a way that we can go inside it like a mill” — think of the 1966 movie “Fantastic Voyage,” or imagine the ultimate brain scanner. Leibniz continued, “Suppose we do: visiting its insides, we will never find anything but parts pushing each other — never anything that could explain a conscious state.”
...
Many make the same mistake today — the Very Large Mistake (as Winnie-the-Pooh might put it) of thinking that we know enough about the nature of physical stuff to know that conscious experience can’t be physical. We don’t. We don’t know the intrinsic nature of physical stuff.
We find this idea extremely difficult because we’re so very deeply committed to the belief that we know more about the physical than we do, and (in particular) know enough to know that consciousness can’t be physical. We don’t see that the hard problem is not what consciousness is, it’s what matter is — what the physical is.
People making computer analogies below are engaging in the exact fallacy the piece is arguing against. A Turing machine is a mathematical model for symbolic manipulation, and yet it's assumed one could somehow "map" consciousness 1-to-1 onto such a machine. The fact we can't simulate a single protein fold on a supercomputer gives me real pause in believing we've begun to grasp, or have the ability to digitally emulate, the kind of tricks life used to bootstrap consciousness out of matter.
I know the "no real evidence" / "too warm & wet & macro" arguments against quantum consciousness, but that to me looks like by far the most fruitful path to investigate to break out of dualist/"eliminativist" dilemma we find ourselves in. Recent research on quantum biology already makes clear life makes is far more quantum than I think almost anyone would have believed possible a decade or two ago. Considering just photosynthesis, arguably no life as we know it on earth would be possible without clever non-classical hacks by nature.
How does making it quantum help? A PRNG in my head makes me unpredictable until someone finds the seed; what benefits other than true randomness does a quantum brain actually "buy"? It's all still physical stuff bumping around, just a little bit nonlocal.
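That's the standard property of pseudorandomness: unpredictable-looking, but fully determined by the seed. A minimal illustration (the tea/coffee "choices" are obviously just a stand-in):

    # Pseudorandomness: anyone who knows the seed reproduces every "choice".
    import random

    def choices(seed, n=5):
        rng = random.Random(seed)  # independent generator with a fixed seed
        return [rng.choice(["tea", "coffee"]) for _ in range(n)]

    print(choices(42))  # looks like free-wheeling decisions...
    print(choices(42))  # ...but the same seed gives exactly the same sequence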
Not exactly. We've pushed determinism down to the level of quantum mechanics. Bell's theorem strongly suggests that the universe is fundamentally non-deterministic. The way I see it, you can interpret this non-determinism as randomness, or you can interpret it as choice. The thing I find ironic is that most scientists - who espouse parsimony and deduction based on observation - choose the random explanation. If quantum indeterminacy is random, it is the first truly random process we've ever discovered. Additionally, when a scientist states that their perception of choice is mistaken/an illusion, I have to ask: then how do you trust any of the data you base your science on?
Yep, mostly agreed. I think if you squint a little, it's possible to see how real non-deterministic "choice" can bootstrap itself out of QM. Do we have a theory describing how? No, but such a theory would inherently upend the foundations of science and western philosophy, so even if we don't have it in hand, it's worth pointing out there's a fair amount of ... circumstantial evidence(?) in that direction.
"I find this odd because we know exactly what consciousness is — where by “consciousness” I mean what most people mean in this debate: experience of any kind whatever. It’s the most familiar thing there is, whether it’s experience of emotion, pain, understanding what someone is saying, seeing, hearing, touching, tasting or feeling. It is in fact the only thing in the universe whose ultimate intrinsic nature we can claim to know. It is utterly unmysterious.
"The nature of physical stuff, by contrast, is deeply mysterious, and physics grows stranger by the hour."
The author seems to be traveling a road very close to solipsism.
If the only thing you can know is your own consciousness...
Even though solipsism is considered unfalsifiable, it can't explain why the world is so regular and yet so complex - far more complex than human imagination. We can feel the limits of our imagination, yet reality exceeds them. That is the argument for an external world: it's not all in our imagination, because our imagination is less complex than the world we experience. If we said that our own imagination created the world, it would have to have God-like power; we would basically be attributing divine powers to our own minds, which is absurd.
There is no proof of an external world, since nobody has ever experienced it. Everything you consider to be a part of that external world is mediated through your mind.
In short, you exist in a virtual reality manifested by your mental map, which is layer upon layer of memories, personality traits, beliefs and thoughts.
That does not necessarily mean that "you" create _the_ world, only that your mind gives rise to the subjective reality you have access to, the only reality you have access to.
There may be an external, objective world out there, but we wouldn't be able to directly access it.
Well the assumption of an objective external world I find a bit silly. We can assume that other minds exist since we have direct experience of our own. It's a small leap from one mind to many (a leap nonetheless).
But to assume an objective external world exists when nobody has ever experienced such a thing, is a leap of quite different proportions, akin to believing in a bearded man in the sky.
It's easy: there are many systems that are more complicated than what we usually can imagine. For example, the detailed structure of a material, or the internal structure of a language. We don't usually imagine coherent systems of such complexity.
If there ever was a man trying to imagine something hugely complex, it was surely Tolkien with LOTR. He imagined languages, countries and peoples in amazing detail. But that shows how hard it would be to try to imagine something as complex as the real world.
Our minds don't have such generative powers, thus the world doesn't come from our imagination.
Are you imagining the detailed structure of a material or are you imagining someone telling you about the detailed structure of a material? How much is first-hand sensing and how much is second hand reports?
Also, that which does the imagining, your subconscious, say, has to be more complex than what you might perceive it to be.
I didn't realize until reading this comment section that there's an entire demographic of intelligent people in denial about why we are conscious (generic cognition is an evolutionary advantage in responding to circumstances that are too dynamic to be accounted for by a process measured in generations) and whether it persists beyond death (no, because it's hooked up to the brain).
And by the way, it clearly worked: we colonized the planet. And now we have arrived at a juncture wherein we are left having to explain our existence, when the reality is that our only identifiable purpose is to reproduce. Everything else is incidental.
It's not about intelligence, it's about beliefs. Beliefs do not indicate intelligence. For example, I have an identical twin brother who makes more money than me with a highly technical job and is quite religious, and I am an atheist. I have another brother who is a senior principal engineer who thinks Islam is an inherently violent religion and is enthusiastic about the Middle East invasions, whereas I believe that to be propaganda going back to the Crusades.
The "free will" as it is used by those who insist on it existing is something that is based on the needs of religious thought. It's a concept invented to explain the problem of (paraphrasing) "if good is all-powerful and good, why is there evil," to which the magical answer is "the people have free will" to mean, the God doesn't decide what people will decide. And this is just nonsense invented to explain the starting nonsense problem in the "demon haunted world"(1) of those who believe in a "good God."(2)
Our decisions are simply a function of the "stored state" in our body and outside influences in the moment up to the decision. Animals function the same way. Of course, there are so many influences, and the process of thinking(3) depends on so many variables, that we can actually make different decisions based on, for example, the same experiences up to the moment of the decision but depending on whether it's 9 AM or 9 PM, or whether we drank the coffee one or two hours before, etc. So it appears to be a "free" decision but is still just a function of the state(4) and the input (see the toy sketch after the footnotes).
2) Not every religion has a "good for everybody" god, so for these religions it's not even a problem. If the god is there just to favor your group when he is not distracted by something else (yes, there are religions where the god has a limited attention span), he can smite everybody else and he doesn't have to be absolutely "good." For those who believe in a "good God" the shock comes, of course, when something like this happens: https://en.wikipedia.org/wiki/1755_Lisbon_earthquake#Effect_...
3) which we surely believe ourselves to possess, in some aspects more complex than what we understand animals to do, simply because we don't know their language and they can't write
4) not just of our neurons but also of the exact amount of the traces of different chemicals
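A toy version of the "decision = function of stored state and input" claim above: same memories, same question, but time of day and caffeine flip the outcome, even though nothing non-deterministic happens. Every variable and weight here is invented purely for illustration:

    # Toy deterministic "decision" function of stored state and current input.
    def decide(memories_score, question_difficulty, hour, coffees):
        energy = (1.0 if 9 <= hour <= 21 else 0.3) + 0.2 * coffees
        willingness = memories_score * energy - question_difficulty
        return "yes" if willingness > 0 else "no"

    print(decide(memories_score=1.0, question_difficulty=0.9, hour=9,  coffees=1))  # yes
    print(decide(memories_score=1.0, question_difficulty=0.9, hour=23, coffees=0))  # no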
This is a good enough answer to the question. I would have delved into the realm of chaos theory and such, but suffice to say that our understanding of the universe suggests that nothing is truly random.
However, the randomness which actually exists(1) doesn't in any way equate to the concept of "free will" I've discussed. A generator of real random numbers would never be considered to have "free will" by the proponents of the latter.
Pseudorandomness, which also exists, is not "free will" either. "Free will" is just a philosophical construct rooted in religious thought.
Because evolution ultimately reduces entropy in response to stimuli. This process, combined with time, can yield complexity. Countless studies abound ascertaining rudimentary aspects of cognition in even fairly primitive organisms. It's hardly a trait upon which humans hold an exclusive claim - we just evolved the best version of it.
That's another way of asking if we could (theoretically) simulate intelligence. Scientific consensus offers an emphatic 'yes' to this question. We debate the when and how (not the whether) of AGI.
The article postulates that "everyone's" definition of consciousness is "experience of any kind whatever."
That's not my definition at all.
My camera is not conscious just because it "experiences" images.
My speakers aren't conscious just because they "experience" the sound vibrations they emit.
Consciousness may be an important part in us humans interpreting our experiences in some context, but it is totally different from those experiences. I can be conscious for a while in a sensory deprivation tank!
Separately, saying we "know nothing" about physics, just because we have to experience physics through our senses, is somewhere between disingenuous and false. We know how to predict the behaviour of physical systems very well. We know much less about how to predict conscious systems. So, if by "knowledge" you mean "ability to predict outcomes, possibly as a result of stimuli," we know physics much better than consciousness. Or, perhaps, we know simpler physical systems but not the complex systems that give rise to consciousness - again, we know more about physics than consciousness, but not enough physics to explain consciousness as a physical system.
Finally, arguing that the mind is just its physical/chemical make-up requires a lot more proof than we currently have. That argument reduces to "there is no free will." It may be true, but we don't know enough about quantum processes in the brain's functioning to say that for sure. We certainly can't simulate or predict a conscious human brain yet.
Perhaps I'm just not smart enough to understand what the hell this article is driving at. But I'm pretty confident in my personal understanding of "consciousness": it's a word that we use to describe our "experiencing" of the physical world, which is an emergent property of our incredibly complex neurological infrastructure.
Is this article agreeing with me, or disagreeing with me? I can't honestly tell.
"[T]here is a fundamental respect in which ultimate intrinsic nature of the stuff of the universe is unknown to us — except insofar as it is consciousness"
I think the article finds "emergent property" mysterious. It intuits that there is a link between knowing what the "stuff" is that behaves as physics describes it, and "consciousness", by which we know there is something rather than nothing. I think that's a mistake: it's like thinking you couldn't understand an article on the web without directly observing all the electronics that created it.
I don't believe that fundamental particles (e.g., quarks) have any identifier attached to them to distinguish them from other fundamental particles, except those that describe their behavior: (probability distribution of) location, (probability distribution of) momentum, (probability distribution of) mass, etc. There is no "stuff" that is not behavior. Physics works to refine our model of the fundamental particles; everything after that is "emergent property".
A sorting algorithm in a high level language doesn't depend on what CPU it runs on, given sufficient RAM and that there are no hardware faults. It may have been designed by a human, but it may also have been generated by simulated evolution (genetic algorithm). Fundamental particles/waves don't have hardware faults, but they do have probability distributions.
Atoms are an emergent property of their constituents and decay when their constituents reach a low probability state incompatible with the constitution of the atom. Molecules are an emergent property of atoms and become different molecules when their constituents decay or their components react to some external force (usually the electromagnetic forces of another molecule). Cells are an emergent property of some kinds of molecules, and so on up the complexity scale. Consciousness is perhaps the top of the pyramid.
Given the world is analog, with quantities never reaching absolute zero (e.g. there is no distance from a nucleus at which there is zero probability that an orbiting electron will be found), I think consciousness does not ever diminish to nothingness at physical states simpler than our own. It could therefore be reasoned that all matter in the universe is conscious.
"It’s ironic that the people who are most likely to doubt or deny the existence of consciousness (on the ground that everything is physical, and that consciousness can’t possibly be physical)"
We don't doubt that consciousness exists; what we call into question is the assertion that it is of a non-corporeal nature. Therefore it has to be physical in nature.
panpsychism: 'the doctrine or belief that everything material, however small, has an element of individual consciousness.'
If consciousness is just matter, why isn't yours accessible to me? There is something deeper at play.
I have a hypothesis that a consciousness is a function of a universe. Each universe has just one consciousness.
Another person's consciousness is inaccessible because it's in a different universe. Only the matter part of that is projected in your universe, not the actual identity.
Without a hypothesis like this, we are left with postulating hidden variables: some hidden context pointer which resolves to you or me or such.
> If consciousness is just matter, why isn't yours accessible to me.
Because my matter and your matter aren't in close enough physical proximity or the appropriate configuration to have any communication with one another. Same way that you and I both live in houses, but my front door doesn't open into your living room.
You can visit someone's home, or move to a new one.
I think that being locked to a particular consciousness and having no access to another one is on par with not having access to parallel universes. I'm going to click Update right now. In another universe, a parallel me closes the page without clicking Update. I have no access to that.
Mechanicists confuse Consciousness with behavior. Of course we have behavior (mind), and that, in a mechanistic world, is enough to survive and reproduce. But Consciousness is something else. We are not Philosophical Zombies. Qualia are there, and they pose a hard problem that resists most hypotheses.
Scientific rationalism at its worst: trying to apply logic to something that is beyond logic, all the while not realizing the basic mistake of thinking that everything in experience can be expressed or understood with logic, thought and scientific experiments.
The writer (and a very large part of society) is so oblivious to the unconscious dogma "Everything can be explained in a consistent scientific system" that he doesn't even see that he's holding it - precisely like a Christian will tell you "Look, stuff is like this, it says so in the Bible".
All this mental masturbation about trying to locate consciousness in the brain/body is utterly ridiculous. Why do you think you haven't found it yet? Hello?
But according to scientific rationalism everything that exists is matter (for only matter can be PROVED to exist with a scientific experiment...), so consciousness must be there too, right?
Just like a man who has lost his keys outside but, because the street is very dark and cold, decides to search for them in his house, which has lights to see by and is much more comfortable.
This article just kicks around concepts with no real insight. It's easy to demonstrate that consciousness is matter. Put a bullet through someone's brain and that person will cease to be conscious.
What is difficult to demonstrate is if that is all there is to it. For this we can turn to math. Do the intricacies of the rules of mathematics actually exist anywhere? Obviously not, unless you consider words and symbols written in books to qualify. Math exists everywhere, because you can use it to describe the physical, and it exists nowhere, because there is no literal physical representation of it anywhere.
But we can actually build physical representations of particular math equations, inputting them as programs into a computer. The representations obey the rules of mathematics as well as the rules of the physical world: if you destroy the computer, you also destroy the operation of the program.
Yet there is more to these programs than meets the eye physically. There are hidden rules that they operate by; more than just the physical affects them. These are the rules of math. We can analyze the programs using various mathematical techniques and prove things regarding them.
It is the same way with consciousness. We only see the physical effects because that's what we're looking for. We don't see the countless hidden rules that also affect consciousness. They are so numerous and manifold that they look wholly continuous with the physical world and physical rules.
What are these rules? In a word, they are ideas. Ideas have logic to them and can be compared with other ideas. We can say that one course of action is good or bad, when compared to other courses of action. Ideas and thoughts themselves are so comparable to computer programs that it's amazing to me that we programmers scoff at the idea of thinking machines.
You analyze thoughts with other thoughts, with tools of logic. You analyze computer programs with the rules of math; many times those rules are implemented with other computer programs. The question "what is consciousness?" is purely the domain of philosophy. Biology and physics can only take us so far in our quest to understand ourselves.
To attempt to do so would be like trying to analyze a running computer system by smashing it apart and looking at the silicon under a microscope. It fundamentally mistakes what a computer program is. And knowing that it is just electrical signals traveling across transistor gates doesn't get you very far either. Even analyzing the voltage levels across the entire running system isn't going to tell you much either.
- Consciousness is out of the scope of physics. You have to leave the common "physics is everything" perspective to understand consciousness, however counterintuitive that is.
- Physics is just a tool to describe the patterns of our subjective experience / consciousness.
- The hard problem is, why does positioning atoms in a piece of matter (brain) trigger the perception of color or pain?
- Dualism and solipsism are approximately correct, depending on the specific definitions, which vary a lot.
"philosophers have struggled to comprehend the nature of consciousness”
Many philosophers today, including materialists, are heirs to Cartesian metaphysics, the same metaphysics that brought us the mind-body problem and the problem of qualia. The difficulties the author has in mind are very much bound up with this Cartesian (and Galilean) legacy.
The idea that material beings can be conscious is often treated like it's some new and shocking claim. The reason for that is, again, the broadly Cartesian heritage at work. For Descartes, it takes an immaterial "res cogitans" to be conscious. The material body, as a desiccated "res extensa", is incapable of functioning as a substrate for anything we might call "consciousness". And because Descartes holds that only human beings possess minds understood as "res cogitans", only human beings can possess consciousness. Furthermore, "res extensa" lacks sensory qualities like color or sound. As a result, these qualities are located in the "res cogitans" as immaterial "qualia". Materialism does not escape the Cartesian paradigm. Instead, it denies the "res cogitans" and tries to locate what was ascribed to the "res cogitans" back in the "res extensa" while maintaining many of the metaphysical and methodological tenets of Cartesianism. This is where the problem of qualia occurs. The problem is insurmountable as long as the Cartesian suppositions are maintained. No amount of "keeping at it" with the current methods will ever resolve the issue. (See Nagel for more.)
For what it's worth, Aristotelian metaphysics encounters no such difficulties because it does not adhere to the dualism of Cartesian metaphysics (though sadly, few philosophers understand Aristotelian metaphysics; there seems to be a resurgence of interest, however). Indeed, for Aristotle, "consciousness" (though he does not use this term) is part and parcel of what it means to be an animal, human or otherwise. (See Jaworski and Feser for more.)
"It’s true that people can make all sorts of mistakes about what is going on when they have experience, but none of them threaten the fundamental sense in which we know exactly what experience is just in having it."
If there was any doubt about the Cartesian flavor of the author's reasoning, this statement should have dispelled it.
"Members of the first group remain unshaken in their belief that consciousness exists, and conclude that there must be some sort of nonphysical stuff: They tend to become 'dualists.' Members of the second group, passionately committed to the idea that everything is physical, make the most extraordinary move that has ever been made in the history of human thought. They deny the existence of consciousness: They become 'eliminativists.'"
Ah, but the eliminativists haven't escaped the legacy of Cartesian dualism either! They've eliminated not only the "res cogitans", but the facts of experience (e.g., "qualia") altogether! It's a notoriously incoherent position.
Strawson says he's a panpsychic physicalist, but that falls into the dualist camp once you get down to it. He offers no third way between Cartesian dualism and eliminativism.
I was reading "Hard problem of consciousness" Wikipedia article this morning: https://en.wikipedia.org/wiki/Hard_problem_of_consciousness I'm not sure this adds anything to that.