Hacker News

Hi Scott,

This may be an ill-formed question, but it's something I've been thinking about for a long time:

Do you think the human mind is equivalent to a Turing machine, or somehow above it? Assume we have an infinite tape/memory and time.




There are really two questions here.

The first one is, can the human brain be simulated by a Turing machine in its input-output behavior (to a suitable precision, given appropriate initial data, yada yada)? Note that, even though you specified "infinite tape/memory and time," from the outset I'm going to outlaw "simulations" that simply cache what the human would do in response to every possible stimulus in a gargantuan lookup table. For that kind of simulation trivially exists, and its existence tells us nothing interesting. I'll insist instead that the simulation be "reasonable"---so, at a minimum, that it simulate the brain without an exponential blowup in size.

I don't know for certain, but as a believer in the physical Church-Turing Thesis, my guess is going to be yes, this is possible. I.e., I guess that the laws of physics are fully computable---that there's nothing in them like what Roger Penrose wants---and I see no evidence that the brain can violate the laws of physics that govern everything else in the universe.

(Even here, though, there remains the extremely interesting question of whether, even in principle, one could scan a specific person's brain well enough to "bring additional copies of the person into being," without simply killing the person if one tried. My friends in the futurist and singularity movements expect that the answer is yes, but if one needed to scan all the way down to the molecular level, then the No-Cloning Theorem of quantum mechanics would certainly present an obstacle. For my thoughts and speculations about this question, see my "Ghost in the Quantum Turing Machine" essay, which was referenced elsewhere on this thread: https://www.scottaaronson.com/papers/giqtm3.pdf )

Anyway, the second question is whether, even if we agree that a human brain can be simulated by an appropriate Turing machine, there's some special sauce of consciousness that makes there be something that it's like to be us, but nothing that it's like to be a conventional Turing machine. I.e., this is Chalmers' "hard problem of consciousness."

Here I'm going to plead ignorance, with my extenuating circumstance being that we're up against arguably the ultimate mystery of existence. :-)

Yes, I feel like there's something that it's like to be me. (Though if I were a philosophical zombie, just a glorified Turing machine made of meat, I could've told you exactly the same thing, so you should take whatever I have to say about this with a grain of salt. :-) )

And yes, I take it as axiomatic that there's similarly something that it's like to be you---or for that matter, to be a chimpanzee or a dog (but an ant? a bacterium? unclear). No, I don't understand it. No, I don't know what properties of a computational process are either necessary or sufficient to cause there to be something that it's like to inhabit that process; I don't know how we should build an AI either to ensure that it's conscious or to ensure that it isn't. In practice, we'd probably have to extend a presumption of consciousness to anything that behaved sufficiently similarly to us---that, famously, was Turing's point in 1950. But even here there are many uncertainties: for example, would you still take a machine to have "passed the Turing Test" if you could perfectly predict everything it would say given a copy of its source code---not just in principle but in practice?



