Why people think computers can't (media.mit.edu)
70 points by xtacy on July 23, 2010 | 51 comments



So why is genius so rare, if each has almost all it takes?

Simple statistical distribution. Compared to monkeys we are all geniuses. Compared to a hypothetical super-intelligent alien race we are all as children. Individuals on average could easily be much more creative or much less; it's all relative. The fact that we recognize and elevate genius within our ranks is just an emergent psychology of being a sentient species.


Personally I think environment plays a huge role in this as well. If you grow up in an environment that constantly reinforces the idea that learning/growing/etc. are pointless, most people will eventually stop trying. Your environment also has a lot to do with forming your "id" [1]. I personally believe that if you could go back in time, take the same person, and put them in a different environment, they would be recognizable only in appearance.

I think most people are capable of being so much more than they are.

[1] pg has an article that talks about "id". Your environment will often influence how much you allow to become part of your "id" and what to do when various parts of your "id" are impugned.


If you don't take for granted the enormous complexity of leading even the most mundane of human lives, then it's easy to see where all that genius is being used.

Throw a monkey into the fray of human society and see how well it does for itself. Let's see it climb the corporate ladder, raise a family, or tell a joke properly.

Not surprisingly, those humans who are recognized for applying their genius in remarkable ways tend to be in serious neglect of one aspect or another of their everyday lives.


If you assume that our brains are completely physical objects (no soul), then you should be able to model them with a computer, like all other physical objects, and create a thinking computer.

Just because we have an incomplete understanding now doesn't mean we always will, or that it's impossible to have a complete (or reasonably complete) understanding.

Or there's the idea of a non-physical, impossible-to-model soul. I don't see much data to support that conclusion, though; it seems the equivalent of cosmic ether to me.



Neither of those arguments counters the notion that the brain, as a physical device, can be simulated mechanically; they merely suggest that a simulation would have to be more fine-grained than neuroscience currently accepts. This is much like arguing that since Go has a massively larger game tree than chess, writing a program to play Go as well as a human is impossible. Intractable? Maybe. Impossible? Clearly not.

It's also worth quoting the second-to-last sentence of the second article you posted:

"The book's thesis is considered erroneous by experts in the fields of philosophy, computer science, and robotics."


It's not that there are "data" to support the idea, but there is the fact that consciousness is unobservable. That is weird and different from all other physical phenomena, wouldn't you say?


Unobservable? You mean it doesn't emit visible light, right?

It's complex, it's hard to define, but it's definitely not unobservable. Also, as a mere physical manifestation, you can fiddle with it in a pretty physical fashion. You can put it on drugs and watch it warping. You can toy with its physical structure surgically and see how it gets messed up (wonderfully discussed in Oliver Sacks' books), and you can bump it with a boxing glove and literally switch it off.

Too interactive for an unobservable phenomenon.


Sorry, I forgot the qualification: excepting one's own experience of one's own consciousness. That was silly.

What makes it weird and different from physical phenomena is the fact that it is not generally observable, even though it is posited to exist for all people.

Or do you disagree even with that? If you do: I don't just mean that it doesn't emit visible light. I mean it doesn't emit anything. Consciousness is just the fact that the universe exists from a perspective. It is not an object, "out there", "in the world", but something only experienced by each consciousness for itself. The mere fact that we could create a perfectly convincing AI and then still have doubts about whether there really is "anything in there" is a result of the fact that consciousness is unobservable. Does that explain well enough what I mean?


I'm sorry, but your argument still seems diffuse to me. "Not an object", "anything in there"... these are terms so vague that anything we say about them could stand as irrefutable.

Your own feelings about your consciousness are unimportant for defining consciousness. It's definitely irrelevant that you can't "experience" how other people feel the colour yellow; we still regard it as observable. The only difference is that we can define "yellow" as electromagnetic radiation with certain wavelengths, whereas we'll have to wait a bit to get an equally satisfactory definition for consciousness.

We shouldn't attribute a special epistemological status to the brain/mind/consciousness just because it's complex, we don't know it very well, and it's ours.


If we're to assume that consciousness only exists where it is observable, only I exist. All of you are automatons.

In fact, because it is unobservable, you cannot even make a reasonable argument that the netbook I'm typing this on lacks consciousness.

I, for example, can argue that anything executing a series of algorithms as part of a system has consciousness, and therefore this computer is quite conscious, just as I am (even though my series of algorithms is much less well-defined and reproducible).


I personally do not assume that consciousness only exists where it is observable. I am merely interested in the fact that it is unobservable, and in how that makes it weird and different. Do you agree with me on this point?


Yes, but since it's empirically indistinguishable, from an ethical standpoint we should assume it exists when we see the hallmarks of consciousness (though perhaps the primary hallmark is an intrinsic understanding of the concept itself).


Of course something that does not really exist is unobservable. Consciousness is just an illusion.


The article doesn't really do a good job of explaining the real problem, which is understanding what consciousness is, as a requirement for a thinking 'being'.

Obviously Penrose's book 'The Emperor's New Mind' came out some time after this post by Minsky, but as one reviewer summarises:

"The Emperor's "new clothes," of course, were no clothes. The Emperor's "New Mind," we then suspect, is nothing of the sort as well. That computers as presently constructed cannot possibly duplicate the workings of the brain is argued by Penrose in these terms: that all digital computers now operate according to algorithms, rules which the computer follows step by step. However, there are plenty of things in mathematics that cannot be calculated algorithmically. We can discover them and know them to be true, but clearly we are using some devices of calculation ("insight") that are not algorithmic and that are so far not well understood -- certainly not well enough understood to have computers do them instead. This simple argument is devastating. " http://www.friesian.com/penrose.htm


I believe Minsky is arguing that because these "devices of calculation" are not well understood, you cannot conclude that there is anything special about them that puts them beyond the reach of any computational (algorithmic) processes.

I think he's right. There's been a lot of progress in understanding information processing in the brain (especially the neocortex) since Minsky wrote this, and I really believe someday soon, notions such as "insight" that we, in our ignorance of how they work, reserve exclusively for humans, won't be such a big mystery anymore.


That is exactly what Penrose demonstrates in his book - within mathematics there are things that cannot be calculated algorithmically. Which is also why the above summary makes the point that this simple argument is devastating (for proponents of Strong AI).

A summary of his position can be found on his Wikipedia page: "He claims that the present computer is unable to have intelligence because it is an algorithmically deterministic system. He argues against the viewpoint that the rational processes of the mind are completely algorithmic and can thus be duplicated by a sufficiently complex computer. This contrasts with supporters of strong artificial intelligence, who contend that thought can be simulated algorithmically. He bases this on claims that consciousness transcends formal logic because things such as the insolubility of the halting problem and Gödel's incompleteness theorem prevent an algorithmically based system of logic from reproducing such traits of human intelligence as mathematical insight."
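To make "the insolubility of the halting problem" concrete, here is the classic diagonalization argument sketched in Python (my own sketch, not from the book; the halts oracle is hypothetical, and the whole point is that no such function can actually be written):

    def halts(program, argument):
        """Hypothetical oracle: returns True iff program(argument) halts.
        No total, correct implementation of this function can exist."""
        ...

    def diagonal(program):
        # Do the opposite of whatever the oracle predicts about
        # the program run on itself.
        if halts(program, program):
            while True:      # oracle said "halts", so loop forever
                pass
        return               # oracle said "loops", so halt at once

    # Now consider diagonal(diagonal). Whatever halts(diagonal, diagonal)
    # answers, diagonal does the opposite, so the oracle must be wrong:
    # no algorithm can decide halting in general.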

I searched for some decent refutations by Minsky, and was disappointed to only find this: http://kuoi.com/~kamikaze/doc/minsky.html "Thus Roger Penrose's book [1] tries to show, in chapter after chapter, that human thought cannot be based on any known scientific principle."

Disclaimer: I'm not a student of AI, and only realised just now that Penrose and Minsky represent the two opposing schools of thought on AI!


"within mathematics there are things that cannot be calculated algorithmically"

What is missing is the proof of the opposite: that the human brain can calculate those things. I don't think it can - mathematics is the tool we use to think about them, after all.


I think you're right. The brain is a very specialized computing machine, and it evolved for a specific purpose.* Evolution doesn't reward general computing capabilities. Hence, computers are vastly more general than brains. If something is, in principle, uncomputable, then there is no chance that a brain can compute it.

*Sure, a networked system of neuron-like "units" might have general computing capabilities. It's as if you saw a fully functional modern computer for the first time, and it only gets fed arithmetic problems - the system itself is much more powerful and general than that, and you could hack around and make it do other things once you understand how it manages to do arithmetic. The brain is kind of like that too -- it gets fed stimuli and it has evolved to process those stimuli. Yet it might be that a computational system made of a network of neurons could be a lot more powerful and general than that in principle.
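To illustrate that footnote with a toy example (my own sketch, nothing from the thread's sources): a single neuron-like threshold unit can compute NAND, and NAND suffices to build any Boolean circuit, so a network of such units is, in principle, computationally general:

    def threshold_unit(inputs, weights, bias):
        """Fire (return 1) iff the weighted sum of inputs plus bias is positive."""
        total = sum(w * x for w, x in zip(weights, inputs)) + bias
        return 1 if total > 0 else 0

    def nand(a, b):
        # Hand-picked weights: the unit fires unless both inputs are 1.
        return threshold_unit([a, b], weights=[-1, -1], bias=1.5)

    # NAND is universal; e.g. XOR built entirely from NAND units:
    def xor(a, b):
        n1 = nand(a, b)
        return nand(nand(a, n1), nand(b, n1))

    assert [xor(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]] == [0, 1, 1, 0]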


What is missing is the proof of the opposite: that the human brain can calculate those things.

Really, we have no proof either way. The ability to hand-trace computation makes the human Turing-complete. Whether the Turing machine is human-complete is an open question, and I expect it to remain so. Each side of the "can think" vs. "can't think" debate works under the assumption that Turing machines are or are not human-complete, and they mostly build their cases on that assumption while claiming the other side's is wrong (without fully proving or disproving either assumption).


Disclaimer: I haven't read the book (but am now planning to) and may be completely off the mark with my comments, but here's what I make of all this. Sure, there are certain things that are not computable. This is a theoretical limitation on any computational system. Is Penrose saying that there's something special about consciousness or high-level thought that puts them into the class of things that are not computable? But computability is a mathematical concept, and surely we'd have to define consciousness and intelligence mathematically to be able to put them in such a class. In light of this, and the fact that we're only beginning to have a theoretical understanding of what brains do, it seems to me very dubious to make the bold claim that intelligence and consciousness come about via non-algorithmic processes. Perhaps I don't understand what he means by non-algorithmic processes outside the abstract computation-theoretic setting and in the context of neural computation (in fact, "non-algorithmic" and "computation" seem to contradict each other right off the bat).

Not only that, but our brains compute as well. We get input, and we produce output in the form of neuronal firing and motor activity, the latter being what amounts to behavior, and the former being (according to materialists) responsible for intelligence and consciousness.

So the brain does compute outputs! But according to Penrose, it seems, such outputs are not computable! Or if they are, they are not enough to account for intelligence/consciousness. So what gives?

Again, I might have failed to appreciate his and your points completely, but you've certainly piqued my curiosity and I will check out some more of Penrose's work.


I actually tried to read Penrose's book a few years ago. The operative word here is tried. At one level I wasn't mature enough to understand it, but the parts I did understand and could imagine made me feel that he spends more time elevating human thought to a pedestal and pointing towards some unknown unknown than working on the idea. This might be subjective, but I agree with Minsky. Even though I didn't know who Minsky was at the time, I felt exactly the same way and put the book down.

His argument runs along these lines:

1. There are things in mathematics that are non-computable and cannot be expressed as algorithms.

2. We can understand them. (To what degree is left unanswered.)

3. Computers are algorithmic.

4. Hence they can't think and we can.

Perhaps what we need is a new way of doing things, not more mathematical models. I once went to a talk by Rodney Brooks# and he pointed out that we don't make a mathematical model of everything, and neither should we expect a computer to do so to accomplish the same task. He argued that we should turn towards simplicity instead of creating complex structures which fail at the slightest breeze. He showed that complexity can and will arise from simplicity.

A fly can execute complex and beautiful maneuvers, and yet it does not have a complex nervous system. Whenever we try to make that fly, we tend to put in a lot of math and code to compute the flight path and other such nitty-gritties, but the fly cannot do any of that, and yet for now it outperforms any model we make. How? As Mr. Brooks points out, perhaps there is nothing more to it than closely connecting the sensors with the actuators. To prove this he set out to make simple machines which didn't do much computation and yet could perform complex behaviors that eluded even the best algorithms; something like the sketch below.
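Here's a Braitenberg-vehicle-style toy of my own (not Brooks' actual code; the sensor and motor names are made up): each motor is driven directly by the opposite-side light sensor, and light-seeking "behavior" falls out with no world model and no planned path:

    def control_step(left_light, right_light):
        """Wire raw sensor readings straight to motor speeds."""
        left_motor = right_light    # cross-wiring steers the robot
        right_motor = left_light    # toward the brighter side
        return left_motor, right_motor

    # Light ahead and to the left: the right motor spins faster,
    # so the robot turns left, toward the light.
    print(control_step(left_light=0.9, right_light=0.3))   # (0.3, 0.9)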

An excellent example of this is Cog. Cog actually started interacting with researchers in games despite having an extremely simple architecture. In fact, seeing the videos of Cog and Kismet in action as a 4-6 year old is the reason I fell in love with AI in the first place. To me it was and still is magic. (See: http://www.ai.mit.edu/projects/humanoid-robotics-group/cog/v...)

This doesn't mean that complex models aren't needed, but I think that sooner or later there will be a series of Aha! moments that will rephrase this problem. Perhaps the problem is with us; we might not be seeing things in the right way.

I love how Minsky ended things:

> No one can tell where that will lead and only one thing's sure right now: there's something wrong with any claim to know, today, of any basic differences between the minds of men and those of possible machines.

#After that, I spent a lot of time reading up on him, and I highly recommend Flesh and Machines (http://www.amazon.com/Flesh-Machines-Robots-Will-Change/dp/0...) and a few papers (http://people.csail.mit.edu/brooks/publications.html) that he wrote to anyone who wants to understand AI. If someone as moronic as me could understand it, anyone can.

[edit: I had a dyslexic d'oh!]


I found this response to "The Emperor's New Mind" by Hans Moravec interesting: http://www.cni.org/pub/LITA/Think/Moravec2.html


One thing that I don't think many people appreciate is that it may be possible to build sentient computational systems without fully understanding their mechanism of operation.


"Penrose and his book have been debunked."

http://news.ycombinator.com/item?id=1540420


And here I was thinking I was providing value by linking to a very relevant response while also giving correct credit to the original poster. Oh well, I've learnt my lesson.


Some of the OCR mistakes were humorous in light of the content.


Can Computers Think? debate graph, http://debategraph.org/Stream.aspx?nID=75


> I don't believe that there's much difference between ordinary thought and highly creative thought.

And I do. This quote says it perfectly:

In science, as well as in other fields of human endeavor, there are two kinds of geniuses: the “ordinary” and the “magicians.” An ordinary genius is a fellow that you and I would be just as good as, if we were only many times better. There is no mystery as to how his mind works. Once we understand what he has done, we feel certain that we, too, could have done it. It is different with the magicians. They are, to use mathematical jargon, in the orthogonal complement of where we are and the working of their minds is for all intents and purposes incomprehensible. Even after we understand what they have done, the process by which they have done it is completely dark. They seldom, if ever, have students because they cannot be emulated and it must be terribly frustrating for a brilliant young mind to cope with the mysterious ways in which the magician’s mind works. Richard Feynman is a magician of the highest caliber. Hans Bethe, whom [Freeman] Dyson considers to be his teacher, is an “ordinary genius.”

(Quoted from Enigmas of Chance: An Autobiography, by Mark Kac. Harper and Row. 1985. p. xxv.)


What a load of horseradish. Feynman was adored by his students, and he was often quite introspective about how he went about solving problems.



I believe computers will one day be able to think, but certainly not using Von Neumann architecture.


Why do you feel that the key to thinking is storing program and data in the same memory? Or is there another characteristic of the Von Neumann architecture that I'm not thinking of?

If thinking can be done with a universal Turing machine, then it would work on a Von Neumann architecture machine, since a Von Neumann machine (given enough memory) can implement a universal Turing machine, as the sketch below illustrates.
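For what it's worth, a Turing machine simulator is only a few lines of Python running on ordinary Von Neumann hardware, which is the sense in which the architecture itself can't be the obstacle (a minimal sketch of my own; the example rule table just appends a mark to a unary string):

    def run_tm(rules, tape, state="start", head=0):
        """Simulate a single-tape Turing machine.
        rules maps (state, symbol) -> (symbol_to_write, move, next_state)."""
        cells = dict(enumerate(tape))
        while state != "halt":
            symbol = cells.get(head, "_")               # "_" is the blank symbol
            write, move, state = rules[(state, symbol)]
            cells[head] = write
            head += 1 if move == "R" else -1
        return "".join(cells[i] for i in sorted(cells))

    # Example: scan right over a block of 1s, then append one more 1.
    rules = {
        ("start", "1"): ("1", "R", "start"),
        ("start", "_"): ("1", "R", "halt"),
    }
    print(run_tm(rules, "111"))   # -> "1111"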


Milesf might be implying that he thinks a different type of hardware might be necessary for true cognitive abilities, e.g. quantum computing or something taking advantage of quantum effects as described by Roger Penrose et al. (I don't buy that requirement, personally.)


I think that we'll just end up gluing neurons onto transistors for the near future. With the latest research they have rat brains flying flight sims. The primary stumbling block is going to be the ethics committee or finding an animal that has a large brain that ethics committees aren't particularly fond of. Or networking the brains of rats together to form a much larger neural net that interfaces to ethernet. Think EC2 with a rat brain on each server. Or conversely a network interface for the human brain.

I do think it's possible to think with a UTM but I don't think that discovering that ability is going to be economical compared to fusing transistors to neurons.


I tend to hold with Anne Foerst, who was the theologian working on the ethical side of the MIT robotics project back in the 90's. http://www.cs.sbu.edu/afoerst/

If it is relatable and seems to have human-like consciousness, we have to treat it like any other human, because consciousness is an empirically meaningless term.


So, at what ratio of biology-to-technology does something seize being a thinking person, and start being a thinking computer?

Clearly, a few rat cells hooked up to big processors are a thinking machine. But someone with an electronic implant in their brain that prevents seizures is a thinking person. But at which point do we say, "you're a robot, beep boop"?


Precisely at the same ratio of new boards to old boards when the Ship of Theseus stops being the Ship of Theseus.

http://en.wikipedia.org/wiki/Ship_of_Theseus


you want "cease" (to stop/end), rather than "seize" (to take hold/control of).

no offense or anything, I am just assuming you are a non-native english speaker (by your name)


Oops! Well, you're right - my first language was Russian - but at this point English is my primary language. I'd attribute that mistake less to my heritage than to all the wine I drank last night :P


I agree. I also don't think one can get very far in that respect without first understanding the computational principles and mechanisms by which the brain can perform ordinary tasks that are extremely difficult for computers (e.g. vision). A purely computational account of how the brain manages to do what it does can allow for useful abstractions that can be applied to other kinds of "implementations" of intelligence. Without looking at the brain, I think it would be incredibly difficult to tease out from first principles the fundamentals of what intelligence really is and how it arises.


I posted something tangential to this topic, but none of you philistines upvoted it.

Bam! There it is. --> http://news.ycombinator.com/item?id=1526734


This seems to conflate consciousness with creativity, which are not the same thing. Computers can be creative combinatorially with evolutionary algorithms, provided you can supply a fitness test; see the sketch below.
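As a toy illustration (my own sketch, not from the article): a bare-bones evolutionary loop where the fitness test is "count characters matching a target string", standing in for whatever test a human supplies:

    import random

    TARGET = "thinking machine"
    ALPHABET = "abcdefghijklmnopqrstuvwxyz "

    def fitness(candidate):
        """The human-supplied yardstick: characters matching the target."""
        return sum(a == b for a, b in zip(candidate, TARGET))

    def mutate(candidate, rate=0.05):
        return "".join(random.choice(ALPHABET) if random.random() < rate else c
                       for c in candidate)

    # Evolve: mutate copies of the current best and keep the fittest.
    best = "".join(random.choice(ALPHABET) for _ in TARGET)
    while fitness(best) < len(TARGET):
        offspring = [mutate(best) for _ in range(100)]
        best = max(offspring + [best], key=fitness)
    print(best)   # converges to "thinking machine"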


But that just moves the creativity bit into designing the fitness function, does it not?

(I do agree with your sentiment though; just pointing out the obvious hole)


It depends on what you are measuring.

Let's say that your job is to solve some problem and you want to use a genetic algorithm to solve it.

You probably have to build a model, a simplified version of reality, and find a fitness function which makes sense in that model, which is fast to calculate, etc.

Probably you have to be clever; you have to somehow have an intuition of what matters and what doesn't. That is, you have to know when your cow is too spherical.

This task requires intelligence, and I'm sure creativity will help in making a leap forward, especially if the problem is not well known in advance, or if you benefit from solving it in a completely different way.

But this doesn't mean that every fitness function that will, given enough time, solve the problem requires creativity, or even intelligence, to design!

Natural selection has provided really ingenious solutions to a huge set of problems of every kind. That we are discussing this at all is an astonishing achievement of that very fitness function, natural selection.

But nobody designed this fitness function, and so long as we interpret the word "creativity" as meaning "created by a mind", no creativity was required to design this "fitness function" or any of the products of that "genetic algorithm".

Of course, one could argue that "creativity" is the very act of "creating" things, even without creators, even as just an emergent property of a complex system (like natural selection).

Well, redefining creativity that way is certainly not wrong, but I don't think people normally think about creativity in those terms; they think of it as an intrinsic characteristic of minds (and perhaps of human minds only, to most people, but that's another topic).


I don't think computers can think; thinking is only a human perception, and most of it is useless. Being creative enough to conduct an orchestra that other humans may like is pointless for survival. To think, I believe you need to have a purpose. If I were to give humans a naive purpose, I would say it is to find the meaning of life through survival and attaining great knowledge. I believe a machine will be able to do this better eventually, if not soon. All it needs is problem solving and a starting set of resources that it's allowed to use.


Thinking is defined as the thing that separates humans from nonhumans. Computers aren't humans. Therefore, even if computers can perform every task that humans can perform, what the computer is doing is, by definition, not thinking. Also, the question of whether X can think is completely without value.


It's hard to give a more circular argument than that.


It's more or less the standard cynical joke in AI circles, though: computers aren't intelligent, so when AI succeeds at something, it isn't AI anymore and moves to some other field instead, so AI never has successes. :)


So maybe the computers will be humans. Was there any thinking involved in writing your comment? I think anyone would say yes. But if you were a computer (and why couldn't you be just a computer? nothing less than an autopsy could convince me otherwise), then you'd have to say no. That's strange, and it would make the word "think" completely useless.


Ah, yeah. The perils of taking definitions too seriously.





