Is Smarter-than-Human Intelligence Possible? (acceleratingfuture.com)
4 points by MikeCapone on Sept 15, 2009 | 7 comments



"Without qualitative improvements to the structure of intelligence, we will just keep making the same mistakes, only faster. Experiments have shown that you cannot train humans to avoid certain measurable, predictable statistical errors in reasoning. They just keep making them again and again."

This is a very well replicated result. The research program that discovered this is mentioned in

http://www.project-syndicate.org/commentary/stanovich1

with suggestions for new kinds of mental tests. The same author has a new book

http://yalepress.yale.edu/yupbooks/book.asp?isbn=97803001238...

that provides many examples of errors in reasoning that are equally common in high-IQ and low-IQ people, with abundant citations to the primary research literature.
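
To make concrete the kind of error being discussed (my own illustration, not one from the article or the book): base-rate neglect is one of the statistical mistakes that shows up at every IQ level. A quick Bayes'-rule calculation shows why the intuitive answer is so far off:

    # Base-rate neglect, one of the statistical errors discussed above.
    # A disease affects 1 in 1000 people; a test catches 99% of cases and has a
    # 5% false-positive rate. Most people, whatever their IQ, guess the chance
    # of disease after a positive test at around 95%. Bayes' rule disagrees.

    prior = 0.001          # P(disease)
    sensitivity = 0.99     # P(positive | disease)
    false_positive = 0.05  # P(positive | no disease)

    p_positive = sensitivity * prior + false_positive * (1 - prior)
    posterior = sensitivity * prior / p_positive

    print(f"P(disease | positive test) = {posterior:.3f}")  # about 0.019, not 0.95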

"A person with an IQ of 100 cannot understand certain concepts that people with an IQ of 140 can understand, no matter how many time and notebooks they have. Intelligence means being able to get the right answer the first time, not after a million tries."

Both of these statements from the submitted blog post are specifically contradicted, with citations to published evidence, in the book I have just mentioned. And this second quoted paragraph is in partial contradiction to the first quoted paragraph from the blog post, which correctly says, "Experiments have shown that you cannot train humans to avoid certain measurable, predictable statistical errors in reasoning."


Most errors of reasoning in humans are the result of our evolutionary background; there is no reason to expect them in a machine intelligence, even in one less intelligent than a human. It is not even necessary that a different set of systematic errors will be present, though that is a possibility.


There was a great article I read once, maybe on Less Wrong (their site search sucks, I can't find anything), about a rare brain disorder.

I think it was genetic and only hit the more primitive parts of the brain, while the most recently evolved and most rational parts were unaffected.

Would this make you extra smart? It turns out no, but it does seem to make you pathologically friendly, whatever that means.

So the human brain is in fact a carefully balanced combination of reason and instinct. Remove the primitive animal instincts and emotions and the result is not super smart, but surprisingly dumb.

And this means a super intellect cannot simply be pure reason or just a lot of reason. It will not at all be easy to come up with smarter-than-human AI.

And what if, when we do have super smart AI, it takes a zen attitude towards existence and just doesn't care much for existence itself? I don't mean our existence, but its own. After all, there is a sad correlation between higher-than-average human intellect and depression and suicide.


The author doesn't seem to distinguish between qualitative and quantitative differences in computation. His criticism doesn't really address the argument he quoted. The reference to the IQ test distribution is not a particularly good analogy, as his opponent was speaking about the limits of intelligence.

It is also worth mentioning that the notion of superintelligence is at odds with the Church–Turing thesis, which, while it can't be proven, is widely accepted and is a cornerstone of computer science as we know it. Personally I think there's a better chance of breaking the light-speed barrier than of an intelligence capable of doing what a human (or a hypothetical "ordinary AI") inherently can't.
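
To make "inherently can't" concrete (my own sketch, not the parent's): the textbook example of a task no Turing-equivalent intelligence can perform, however fast or clever, is deciding the halting problem. The standard diagonalization argument fits in a few lines:

    # A sketch of why no program can decide halting, regardless of how "smart" it is.
    # Suppose halts(program, arg) were a total function returning True exactly when
    # program(arg) halts. Then paradox() below leads to a contradiction.

    def halts(program, arg):
        """Hypothetical halting oracle, assumed to exist only for the sake of contradiction."""
        raise NotImplementedError("no such total function can exist")

    def paradox(program):
        # Loops forever exactly when program(program) is predicted to halt.
        if halts(program, program):
            while True:
                pass
        return "halted"

    # Whether paradox(paradox) halts contradicts whatever halts() would answer,
    # so halts() cannot exist; no amount of intelligence changes that.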


It may not be a question of 'can't' or 'can', it may be a question of how fast.

Someone that can solve complicated puzzles quickly is perceived as smarter than someone who can solve those very same puzzles but slower.

Even though to all practical intents and purposes they would be equally intelligent.

All the tests by which we measure ourselves have a time limit associated with them. Society is quite biased against 'slow' people, even though they may be just as smart as the rest.

I'm skeptical about claims that we will be able to engineer an intelligence that is 'smarter' than we are, but I'm open to the possibility that we can engineer one that is as intelligent as we are but simply much faster.

I don't expect that to happen any time soon though (soon as in the next 50 to 200 years or so), and maybe we won't be able to do this at all.

But one small fact about our ability to make tools that make things we cannot make ourselves is visible in the semiconductor industry, where we have made computers that make the next generation of chips, and so on. If you wanted to re-start today from 1970s technology, with all the knowledge we have about making chips, it would still take quite a bit of time to get back to the present state of the art.

Maybe something similar holds for 'intelligence': once you achieve a certain base level of intelligence that runs faster than our own, and so can try many more avenues than we are capable of, that might yield results faster and lead to incremental improvements in the functioning of this intelligence over time.

One big factor here could be that an AI could draw on a 'perfect memory': once something is stored it would be accessible forever at very high speeds. That alone would give it a tremendous edge.
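
As a toy illustration of that edge (my own example, not anything from the article): memoization is the programmatic version of "store it once, recall it instantly":

    from functools import lru_cache

    # Toy version of the 'perfect memory' edge: once a result is computed,
    # it is stored and returned instantly on every later request.

    @lru_cache(maxsize=None)
    def expensive_reasoning(problem: int) -> int:
        # Stand-in for a slow chain of reasoning.
        return sum(i * i for i in range(problem))

    expensive_reasoning(10_000_000)  # slow the first time it is worked out
    expensive_reasoning(10_000_000)  # instant thereafter: recalled, not recomputed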


This isn't, however, what is usually meant by 'superintelligence', a philosophical device predating the now-popular singularity school.

Imagine a puzzle that can't be solved by a "normal" intelligence, human or otherwise, in an infinite amount of time, but can be solved by a superintelligence. Speed of thought plays no role here, nor does the capability of a particular specimen. We talk about limits in much the same way you'd talk about the limits of functions in mathematics. That's what his virtual opponent was addressing.


Right, that's a different kettle of fish - or brains.

Consider this though: a neuron is not particularly 'smart', but if you have enough of them they get smarter.

A single human of some given intelligence may not be able to solve that puzzle, but many humans of a given intelligence working together might be able to do it.

So, a true 'superintelligence' should be able to solve things that we collectively can not solve.

Anything less would be just a speedup, not a breakthrough.




