
The article doesn't really do a good job of explaining the real problem: understanding what consciousness is, as a prerequisite for a thinking 'being'.

Obviously Penrose's book 'The Emperor's New Mind' came out some time after this post by Minsky, but as one reviewer summarises:

"The Emperor's "new clothes," of course, were no clothes. The Emperor's "New Mind," we then suspect, is nothing of the sort as well. That computers as presently constructed cannot possibly duplicate the workings of the brain is argued by Penrose in these terms: that all digital computers now operate according to algorithms, rules which the computer follows step by step. However, there are plenty of things in mathematics that cannot be calculated algorithmically. We can discover them and know them to be true, but clearly we are using some devices of calculation ("insight") that are not algorithmic and that are so far not well understood -- certainly not well enough understood to have computers do them instead. This simple argument is devastating. " http://www.friesian.com/penrose.htm




I believe Minsky is arguing that because these "devices of calculation" are not well understood, you cannot conclude that there is anything special about them that puts them beyond the reach of any computational (algorithmic) processes.

I think he's right. There's been a lot of progress in understanding information processing in the brain (especially the neocortex) since Minsky wrote this, and I really believe that someday soon, notions such as "insight" that we, in our ignorance of how they work, reserve exclusively for humans won't be such a big mystery anymore.


That is exactly what Penrose demonstrates in his book: within mathematics there are things that cannot be calculated algorithmically. That is also why the above summary makes the point that this simple argument is devastating (for proponents of Strong AI).

A summary of his position can be found on his Wikipedia page: "He claims that the present computer is unable to have intelligence because it is an algorithmically deterministic system. He argues against the viewpoint that the rational processes of the mind are completely algorithmic and can thus be duplicated by a sufficiently complex computer. This contrasts with supporters of strong artificial intelligence, who contend that thought can be simulated algorithmically. He bases this on claims that consciousness transcends formal logic because things such as the insolubility of the halting problem and Gödel's incompleteness theorem prevent an algorithmically based system of logic from reproducing such traits of human intelligence as mathematical insight."
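
(To make the halting-problem obstruction in that quote concrete, here is the classic diagonalization argument as a Python sketch. The function halts() is a hypothetical oracle, not anything you can actually implement, which is exactly the point:)

    # Hypothetical oracle: suppose some algorithm could decide halting.
    def halts(program_source, input_data):
        """Assumed, not implementable: True iff the program halts on the input."""
        raise NotImplementedError("no such algorithm can exist")

    # Diagonal program: do the opposite of whatever the oracle predicts.
    def diagonal(program_source):
        if halts(program_source, program_source):
            while True:   # oracle said "halts", so loop forever
                pass
        # oracle said "loops forever", so halt immediately

    # Feeding diagonal() its own source contradicts the oracle either way,
    # so halts() cannot exist. Note this by itself says nothing about
    # whether a brain can decide halting, which is the disputed step.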

I searched for some decent refutations by Minsky, and was disappointed to only find this: http://kuoi.com/~kamikaze/doc/minsky.html "Thus Roger Penrose's book [1] tries to show, in chapter after chapter, that human thought cannot be based on any known scientific principle."

Disclaimer: I'm not a student of AI, and only realised just now that Penrose and Minsky represent the two opposing schools of thought on AI!


"within mathematics there are things that cannot be calculated algorithmically"

What is missing is the proof of the opposite: that the human brain can calculate those things. I don't think it can - mathematics is the tool we use to think about them, after all.


I think you're right. The brain is a very specialized computing machine, and it evolved for a specific purpose.* Evolution doesn't reward general computing capabilities. Hence, computers are vastly more general than brains. If something is, in principle, uncomputable, then there is no chance that a brain can compute it.

*Sure, a networked system of neuron-like "units" might have general computing capabilities. It's as if you saw a fully functional modern computer for the first time and it was only ever fed arithmetic problems: the system itself is much more powerful and general than that, and you could hack around and make it do other things once you understood how it manages to do arithmetic. The brain is kind of like that too: it gets fed stimuli and it has evolved to process those stimuli, yet a computational system made of a network of neurons could, in principle, be a lot more powerful and general than that.
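
(A toy illustration of that footnote, under the usual McCulloch-Pitts idealization: a single threshold unit with suitable weights computes NAND, and NAND gates alone suffice to build any boolean circuit, so even very simple neuron-like units are "general" in that sense. The weights and threshold below are just one choice that works:)

    # A McCulloch-Pitts-style threshold unit: fires iff the weighted sum
    # of its inputs exceeds a threshold.
    def neuron(weights, threshold, inputs):
        return sum(w * x for w, x in zip(weights, inputs)) > threshold

    # With weights (-1, -1) and threshold -1.5 the unit computes NAND,
    # and NAND gates alone can build any boolean circuit.
    def nand(a, b):
        return neuron([-1, -1], -1.5, [a, b])

    for a in (0, 1):
        for b in (0, 1):
            print(a, b, nand(a, b))   # only (1, 1) gives False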


What is missing is the proof of the opposite: that the human brain can calculate those things.

Really, we have no proof either way. The ability to hand-trace a computation makes the human Turing-complete. Whether the Turing machine is human-complete is an open question, and I expect it to remain so. Each side of the "can think" vs. "can't think" debate assumes that Turing machines either are or are not human-complete, builds its case on that assumption, and claims the other side's assumption is wrong, without fully proving or disproving either one.
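
(The hand-tracing point is easy to make concrete: a person with pencil and paper can mechanically follow a transition table like the one below, which is all that's needed to simulate any Turing machine, given unbounded paper and patience. A minimal simulator; the bit-flipping machine is a made-up example:)

    # Minimal Turing machine simulator. A person can execute the same
    # steps by hand; the bit-flipping machine below is a toy example.
    def run(tape, rules, state="start", head=0, max_steps=1000):
        cells = dict(enumerate(tape))
        for _ in range(max_steps):
            if state == "halt":
                break
            symbol = cells.get(head, "_")
            write, move, state = rules[(state, symbol)]
            cells[head] = write
            head += 1 if move == "R" else -1
        return "".join(cells[i] for i in sorted(cells))

    # Flip every bit, then halt on the blank at the end of the tape.
    flip = {
        ("start", "0"): ("1", "R", "start"),
        ("start", "1"): ("0", "R", "start"),
        ("start", "_"): ("_", "R", "halt"),
    }
    print(run("1011", flip))   # -> 0100_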


Disclaimer: I haven't read the book (but am now planning to) and may be completely off the mark with my comments, but here's what I make of all this. Sure, there are certain things that are not computable; that is a theoretical limitation on any computational system. Is Penrose saying that there's something special about consciousness or high-level thought that puts them into the class of things that are not computable? Computability is a mathematical concept, and surely we'd have to define consciousness and intelligence mathematically before we could place them in such a class. In light of this, and the fact that we're only beginning to have a theoretical understanding of what brains do, it seems very dubious to make the bold claim that intelligence and consciousness come about via non-algorithmic processes. Perhaps I don't understand what he means by non-algorithmic processes outside the abstract setting of computability theory and in the context of neural computation (in fact, "non-algorithmic" and "computation" seem to contradict each other right off the bat).

Not only that, but our brains compute as well. We get input, and we produce output in the form of neuronal firing and motor activity, the latter being what amounts to behavior, and the former being (according to materialists) responsible for intelligence and consciousness.

So the brain does compute outputs! But according to Penrose, it seems, such outputs are not computable! Or if they are, they are not enough to account for intelligence/consciousness. So what gives?

Again, I might have failed to appreciate his and your points completely, but you've certainly piqued my curiosity and I will check out some more of Penrose's work.


I actually tried to read Penrose's book a few years ago. The operative word here is tried. At one level I wasn't mature enough to understand it, but from the parts I did understand, it felt like he spends more time placing human thought on a pedestal and pointing towards some unknown unknown than working out the idea. This might be subjective, but I agree with Minsky. Even though I didn't know who Minsky was at the time, I felt exactly the same way and put the book down.

His argument runs along these lines:

1.) There are things in mathematics that are non-computable and cannot be expressed as algorithms.
2.) We can understand them (to what degree is left unanswered).
3.) Computers are algorithmic.
4.) Hence they can't think and we can.

Perhaps what we need is a new way of doing things, not more mathematical models. I once went to a talk by Rodney Brooks# and he pointed out that we don't build a mathematical model of everything we do, and neither should we expect a computer to need one to accomplish the same task. He argued that we should turn towards simplicity instead of creating complex structures that fail at the slightest breeze. He showed that complexity can and will arise from simplicity.

A fly can execute complex and beautiful maneuvers, and yet it does not have a complex nervous system. Whenever we try to build that fly we tend to put in a lot of math and code to compute the flightpath and other such nitty-gritties, but the fly cannot do any of that, and yet, for now, it outperforms any model we make. How? As Mr. Brooks points out, perhaps there is nothing more to it than closely connecting the sensors to the actuators. To prove this he set out to build simple machines which didn't do much computation and yet could perform complex behaviors that eluded even the best algorithms.
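
(A caricature of Brooks' idea in code: no world model, no computed flightpath, just a few prioritized reflexes wiring sensors to actuators, in the spirit of his subsumption architecture. All the sensor names and behaviors below are made up for illustration:)

    # Subsumption-flavored control loop: prioritized reflexes, no world
    # model. Sensor names and actions are hypothetical placeholders.
    def avoid(sensors):
        if sensors["obstacle_close"]:
            return "turn_away"        # highest priority: don't crash

    def seek_light(sensors):
        diff = sensors["light_left"] - sensors["light_right"]
        if diff > 0.1:
            return "turn_left"
        if diff < -0.1:
            return "turn_right"
        # no strong gradient: defer to the layer below

    def wander(sensors):
        return "forward"              # lowest-priority default

    def control_step(sensors, layers=(avoid, seek_light, wander)):
        # Higher layers subsume lower ones: first non-None action wins.
        for behavior in layers:
            action = behavior(sensors)
            if action is not None:
                return action

    print(control_step({"obstacle_close": False,
                        "light_left": 0.7, "light_right": 0.3}))
    # -> "turn_left": steering emerges from trivially simple rules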

An excellent example of this is Cog. Cog actually started playing interactive games with researchers despite having an extremely simple architecture. In fact, seeing videos of Cog and Kismet in action as a 4-6 year old is the reason I fell in love with AI in the first place. To me it was, and still is, magic. (See: http://www.ai.mit.edu/projects/humanoid-robotics-group/cog/v...)

This doesn't mean that complex models aren't needed, but I think that sooner or later there will be a series of Aha! moments that will rephrase this problem. Perhaps the problem is with us: we might not be seeing things in the right way.

I love how Minsky ended things:

>>>No one can tell where that will lead and only one thing's sure right now: there's something wrong with any claim to know, today, of any basic differences between the minds of men and those of possible machines.<<<

#After that, I spent a lot of time reading up on him, and I highly recommend Flesh and Machines (http://www.amazon.com/Flesh-Machines-Robots-Will-Change/dp/0...) and a few of the papers he wrote (http://people.csail.mit.edu/brooks/publications.html) to anyone who wants to understand AI. If someone as moronic as me could understand it, anyone can.

[edit: I had a dyslexic d'oh!]


I found this response to "The Emperor's New Mind" by Hans Moravec interesting: http://www.cni.org/pub/LITA/Think/Moravec2.html


One thing that I don't think many people appreciate is that it may be possible to build computed sentient systems without fully understanding their mechanism of operation.


"Penrose and his book have been debunked."

http://news.ycombinator.com/item?id=1540420


And here I was, thinking I was providing value by linking to a very relevant response while also giving correct credit to the original poster. Oh well, I've learnt my lesson.



