> It seems to me though, that it's no easier to explain having the illusion of experience than to explain actual experience (and vice versa).
The paper goes over that too: 1. The brain uses information to build models, 2. all brain models are necessarily inaccurate.
Therefore qualia are inaccurate models of perceptions and/or models of perceptual memory, where some information has been thrown away, as we do with all models. The redness of an apple is a distorted sense memory of the apple, just with all the "appleness" thrown away leaving only the "redness", which is why qualia seem so vague and ephemeral.
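A toy sketch of what I mean (my own analogy, not code from the paper): treat a perception as structured data, and a "model" of it as a lossy projection that keeps only one attribute.

```python
# Toy analogy: a perception as structured data, and a lossy "model"
# that discards most of its information, the way "redness" survives
# while the "appleness" is thrown away.

perception = {
    "object": "apple",
    "shape": "round",
    "texture": "smooth",
    "color": "red",
}

def lossy_model(p):
    """Keep only the color; everything else is discarded."""
    return {"color": p["color"]}

quale = lossy_model(perception)
print(quale)  # {'color': 'red'}

# The original perception cannot be reconstructed from the model:
# the mapping is many-to-one, so information is genuinely lost.
```

The point of the sketch is only that the projection is irreversible: many different perceptions map to the same stripped-down model.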
I'm pretty sure I'm missing the main point of the paper and I'll read it again tomorrow.
If I and some other people were to follow a certain algorithm, the one in the paper, but refined and perfected for a thousand years... Calculating with pen and paper the progression of states from a suitable initial state... Would an illusion of experience/qualia appear somewhere just from our calculations? Would it feel like something to be the state represented in the numbers?
> Calculating with pen and paper the progression of states from a suitable initial state... Would an illusion of experience/qualia appear somewhere just from our calculations? Would it feel like something to be the state represented in the numbers?
No, why would some data scribbled on paper, describing qualia in a model of a brain, reproduce those qualia in your brain?
If you write code on a piece of paper, does your laptop suddenly execute that code? If your laptop takes a picture of that scribbled code, does it then execute that code? If you take a picture of it and then OCR it, does it execute that code? If you take a picture of it, OCR it, and then feed it to the right compiler/runtime does it execute the code?
As you can see, there are numerous layers of translation to get the information into the right form and the right place. "The right form" for our qualia is the form we evolved to perceive. I'm not sure why one would expect information in any other form to produce qualia.
Not in my brain; in some context within the numbers. It seems that it's purely about measurement, modeling, errors, and other things that can be expressed as a computer program. I only chose pen and paper to make the whole process seem less magical. Suitable inputs from the program's "environment" would be incorporated in the updates of the state.
> Not in my brain, in some context within the numbers.
Then yes. As I said, qualia are, in a sense, information-stripped models of our perceptions, but divorced from direct perception, and this lack of a direct link is why they seem ephemeral and mysterious. If you believe perceptions have a formal model, then qualia are a projection of that model.
> And this is the hard problem of consciousness, how it is even possible to evolve that.
How was it possible for the eye to evolve? The paper I linked goes over this too: systems that have attention schema models are more efficient and more effective than those that don't.