
I'm more-or-less in agreement with this (Hanson's ems are the kind of mind I was imagining, above), but I was assuming something of a best case, where it turns out that there are hard limits to mindlike complexity. If it turns out that there aren't, none of this will matter. I don't have any particular hope that Eliezer et al. will construct a bug-free, airtight Greater Wish.



I'd say a hard limit isn't the real criterion for rejecting the Intelligence Explosion hypothesis: there is a hard limit, but it's most likely well above human level. A human-made substrate could almost certainly think far faster than evolution-made neurons, and the software could probably at least get rid of biases.

What really matters is whether intelligence is likely to explode or not. I think it would be really foolish to count on it not exploding, unless we're positive it won't. The stakes are too high.

As for MIRI (as it's now called) actually pulling it off, especially in their current state, I don't have high hopes either. However, they do look like the current best bet. And they do plan to grow (they need money). And maybe, just maybe, they will convince other AI scientists to be wary of new powerful magic. For once. If not them, maybe the Future of Humanity Institute.


> there is a hard limit, but most likely well above human level

I suppose you're thinking of the speed of light, but I meant a somewhat more prosaic limit of having nowhere to go. If at some point an intelligence of level n can't do much better than chance at finding an improvement to n, intelligence growth might be very slow. I was wrong to refer to this limit as "hard", but it seems like a pretty plausible scenario to me. Our current software industry suffers from this problem. In this future, the most intelligent agents might be only a few standard deviations above the brightest current humans.
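To make that "nowhere to go" intuition concrete, here's a toy model (purely illustrative; the decay exponent is something I'm making up, not a claim about real minds): suppose an agent at level n finds its next improvement with probability roughly 1/n^decay per unit of search time. With a small exponent the level keeps climbing; once the exponent gets large enough, growth stalls after a handful of improvements.

    import random

    def simulate(steps=100_000, decay=2.0):
        # Toy model: an agent at level n finds an improvement with
        # probability ~ 1 / n**decay per unit of search time.
        level = 1.0
        for _ in range(steps):
            if random.random() < min(1.0, level ** -decay):
                level += 1.0
        return level

    for decay in (0.5, 1.0, 2.0):
        random.seed(0)
        print(f"decay={decay}: final level {simulate(decay=decay):.0f}")

With decay=2.0 the expected number of improvements is bounded (the series 1/n^2 converges), so the agent plateaus no matter how long it searches; that's the scenario where the most intelligent agents end up only a few standard deviations above us.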

> a human-made substrate could most certainly think way faster than evolution-made neurons, and the software could probably at least get rid of biases.

I don't expect either of those to produce much effective increase in intelligence.

Speed increases aren't really the same as intelligence. Speeding up a dog's brain by a million times will not produce a more intelligent dog, only a faster one. (I'm not knocking faster thinking, by the way; it's just not the same as being able to think more complex thoughts.)

The most intelligent things people do tend not to be the product of conscious, rational thought, but of loading up your mind with a lot of details about the problem you want to solve and waiting for systems below conscious thought to deliver answers. Therefore, learning how to be more rational will help only incrementally if you're already fairly rational.


Ah, that hard limit. Eurisko did flatten out…

> In this future, the most intelligent agents might be only a few standard deviations above the brightest current humans.

Current methods of doing software are reaching their limits. That doesn't mean we have reached the limit yet. See Squeak, and more recently the Viewpoints Research Institute's work: <http://vpri.org/html/work/ifnct.htm>. When I see Frank (basically a personal computing suite in 20K lines, compilers included), I see proof that we're just doing software wrong. The actual limit of what humans can program is probably still far off.

Fast intelligence isn't a panacea, but still: imagine an Em thinking 10 times faster than meatware, running on a personal computer, capable of copying itself over the network. That alone would be pretty dangerous. Now give it perfect dedication, and enough common sense to avoid the most obvious mistakes… Sure, we could stop it… with another such Em. And then Hanson is back.


Eurisko is more legend than history, at this point. As far as I know, the source code was never available to anyone except Lenat, and most of the claims about how effective it was at the beginning were sourced directly from Lenat, as well. The fact that we've never seen anything similarly small and effective (and that Lenat abandoned the entire approach in favor of Cyc) makes me wonder how much of what Eurisko is reported to have done is exaggeration.

Your scenario with the Em that's copiable and ten times faster than a human is exactly what I started this with. :)



