The Singularity Institute's Scary Idea (and Why I Don't Buy It) (multiverseaccordingtoben.blogspot.com)
36 points by billswift on Oct 30, 2010 | 16 comments



I'm always unimpressed by how certain people are that a hard takeoff is even possible. As near as we can tell, designing things is hard. (At least so long as the P vs. NP boundary holds.) I don't expect revisions of an AGI to have enough marginal intelligence over previous revisions to offset the increase in complexity that designing the next, better one requires. It seems more likely to me that any given AGI will level off toward an asymptote or grow logarithmically (rather than exponentially), simply because designing things seems, in general, to be exponentially hard relative to the number of states involved.
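A rough toy model of that intuition (parameters entirely made up, purely illustrative): if the design cost of each new revision grows exponentially with the capability being designed, while each generation can spend only a roughly fixed amount of design effort, capability grows logarithmically instead of exploding.

    # Toy model, purely illustrative: capability growth when design cost rises
    # exponentially with the capability being designed.
    import math

    capability = 1.0        # arbitrary units of "intelligence"
    effort_per_step = 1.0   # fixed design effort available to each generation

    for generation in range(1, 21):
        design_cost = math.exp(capability)    # assumption: cost blows up with capability
        gain = effort_per_step / design_cost  # improvement that effort actually buys
        capability += gain
        print(f"gen {generation:2d}: capability ~ {capability:.3f}")

    # Capability climbs quickly at first, then crawls: roughly log(e + t), i.e.
    # the "asymptote or grow logarithmically" outcome described above.

Flip the cost assumption (design cost growing slower than the capability it buys) and the same loop produces the runaway curve, which is exactly the premise being questioned here.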


You don't have to assign a high probability to hard takeoff to support ethical/friendly AI research. The consequences of a hard takeoff are so huge (potentially destroying humanity) that even a low chance is worth making lower.

Humans have been running on the same hardware for several thousand years. Just our improved knowledge and culture have been enough to keep us growing at an exponential rate. If our brains were end-user modifiable and had APIs, we'd be able to increase our growth rate even faster. An AI could do even better than self-modification: it could buy or hack enough computers to give itself thousands of times more processing power.

I don't assign high probability to that sort of scenario, but it is high enough that it's worth ameliorating.


Provided that each next increment is within the grasp of the current one, for every increment, the whole climb will likely happen very, very quickly.

Remember, even logarithms go to infinity as x -> infinity (some toy numbers below). I agree that the base intelligence of the AI would have to be at least within the realm of a very intelligent human to begin the cycle, but once it has begun, I'm not so sure.

As for asymptotes: if there are hard limits, and structural changes would be needed to break an IQ of, say, 220, then it stands to reason that we will never break it (or nearly never; IQ as a scale has no maximum or minimum, so if ten trillion humans are eventually born, by definition some of them should score well beyond that point). The reason we could never break that limit is that if it could be broken, the AI would already have broken it; since it is more intelligent than we are and can think orders of magnitude faster, it is unlikely that we would get there first.

Assuming that an AI has near-instant thought processing, either it will reach the next level soon, or we never will.
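Some toy numbers for the logarithm point (arbitrary units, not tied to any real measure of intelligence): a logarithmic curve does eventually pass any level you name, but the total effort required doubles with every additional level, which is why everything hinges on whether that next increment stays within grasp.

    # Illustration only: the "effort" x a logarithmic curve needs in order to
    # reach each successive capability level, assuming capability = log2(effort).
    for level in range(1, 11):
        effort_needed = 2 ** level  # smallest x with log2(x) >= level
        print(f"level {level:2d} reached at effort = {effort_needed}")

    # The curve never stops rising, but each new level requires doubling the
    # total effort spent so far -- so whether that step is "within grasp" of the
    # current system decides whether the cycle keeps going or stalls.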


The problems begin with this concept of a "provably friendly architecture", which probably doesn't exist. Heck, humans aren't exactly provably friendly either.

But the main argument against this design goal, of being somehow provably friendly and intelligent at the same time, is that we have two fundamentally conflicting paradigms:

The first is the idea that in order to build an artificial intelligence, we'd need to build an actual artificial person, complete with its own internal representation of the world (or, as you might say, its own hopes and dreams). It's an approach to AI that is fundamentally uncertain about how individual AI entities will turn out personality-wise, but it is guaranteed to work: we know this, because we are ourselves machines built on this principle. ==> good chance of success, somewhat limited danger

Then, conflicting with that, comes this notion of a provably friendly/unfriendly design, which sounds like it was made up by CS theorists who think intelligence is a function of raw processing power and that thought patterns can somehow be reduced to rigid formulas. They're very likely wrong, but luckily that also means this group of researchers will never produce anything dangerous, except maybe a lot of whitepapers. ==> virtually no chance of happening, no danger

I do agree though that there might be a third kind of AI, a sort of wildly self-improving problem-solving algorithm that has no real consciousness and simply goes on an optimization rampage through the world. This would be a disaster of possibly grey goo-like proportions. BUT, this approach to AI is also very likely to be used in tools with only limited autonomy. And because the capability of an unconscious non-entity AI to understand the world is limited, the probability of it taking over the world also seems limited. ==> small probability of autonomous takeoff, but if it happens it will be the end of everything


> a sort of wildly self-improving problem-solving algorithm that has no real consciousness and simply goes on an optimization rampage through the world

That sounds pretty similar to humans.


That's funny but doesn't really make sense unless you manage to confuse consciousness with conscience.


I meant the human species as a collective has no single consciousness. And also that individual humans do not have metaphysical consciousness.


> I meant the human species as a collective has no single consciousness.

Neither does an AI species, but that's not the issue. The point being made here was that a danger could arise from a very efficient and powerful automaton that neither has self-awareness nor recognizes other beings with minds as relevant. From that I argued that the threat of this happening is actually low, because by its nature this kind of AI would probably lack the means to instigate an autonomous takeover of our planet.

> And also that individual humans do not have metaphysical consciousness.

Ah, I finally see where our misunderstanding comes from. Science doesn't talk about consciousness (or metaphysics) in the spiritual sense. Whether people have metaphysical consciousness really depends on your definition of those terms, so arguing "for" or "against" isn't really gonna do anything besides getting you karma for one-liners.

As far as practical AI research is concerned, the definition of consciousness is the same for humans and non-humans, and while different degrees of consciousness are possible, there is certainly agreement that the average human has one.


And the Less Wrong discussion of it - currently at 66 comments and growing - http://lesswrong.com/lw/2zg/ben_goertzel_the_singularity_ins...


Somehow, I'm not comforted by reassurances that an AGI is likely to have a human-like value system, given how humans have often treated other humans who are 1) Different from themselves in some way and 2) Technologically more primitive.


Ever since Frankenstein was written, I think we have been unfairly predisposed to think our creations will turn on us, probably due to the ubiquitous nature of inner guilt, or as a social legacy of the Christian notion of sin.


Warning! Overly verbose blog post!

Topic: Scary idea = AI ending the human race.


[deleted]


Since "synthetic" is basically a synonym for "artificial", I don't see why it shouldn't. Silicon is no more special than carbon.


Even beyond Si vs. C, it's really a question of whether we can figure out how to build some semblance of the human mind with our own engineering.

Strong AI as described is pretty scary, but perhaps humans will be integrated with the tech. As it replaces our slower systems, perhaps the biggest question is which part of the mind makes us "human". If we modify that, are we still human?


And now Robin Hanson has weighed in at Overcoming Bias: http://www.overcomingbias.com/2010/10/goertzel-on-friendly-a... He mostly agrees with Ben Goertzel's position; he is, and has been for some time, pretty skeptical of the hard-takeoff position.


So, paradoxically, the Singularity Institute could grow into a great enemy of AI research. Should we deem Ray Kurzweil the first Pope of this anti-science religion?




