
That's a pretty high bar, and one that would be quite controversial if you applied it to other fields.

For example, should we scorn genetics researchers who cannot credibly claim that their organisms will remain harmless after a billion generations of evolution and recombination? That's more or less what many anti-GMO arguments boil down to: that we ought to require genetically modified organisms to be provably safe, both as they exist now and in all possible future ways they could evolve and interact with other organisms. (And since that bar is very hard to reach, their argument goes, we should be careful about funding such research to begin with, and definitely shouldn't let any of its results out into the wild, e.g. into crops.)

If anything, the argument there is stronger, because evolving biological organisms that can pose a threat to humans actually exist, whereas evolving machines that can pose a threat to humans are sci-fi, and likely to remain so for a very long time. Why regulate the latter more stringently?




The search space of biological organisms has been well explored. It's unlikely that an organism will develop which is vastly more dangerous than the ones which already exist. The same cannot be said for robots/AI.


> The search space of biological organisms has been well explored.

Not at all true. The space of possible biological organisms is searched in a highly nonuniform manner by evolution, and the human search strategy is fundamentally different. It's overwhelmingly likely that there are competitive, human-constructible organisms which evolution could never have produced over the past 4 billion years.


This is true if you are discussing creating organisms which are highly different from existing ones. _delirium was discussing genetic modifications which are simply tweaks to existing organisms.

There is no reason to believe that golden rice will evolve in any significantly different manner than ordinary rice. In contrast, AI will evolve via a mechanism which is unprecedented.


I imagine that this has already happened, back when countries had active biowarfare programs: what if someone clones a mix of potent neurotoxins into bacteria or fungi? Or mosquitoes?

What if we create bacteria that can digest anything, survive in environmental extrema, sporulate, and kill off competing strains? The grey goo scenario comes to mind.

The chemical and biological state space is effectively infinite. There is much room for good, but also for bad. In the short term, misuse of biology is much more dangerous than misuse of AI.


I think that most people who take the AI singularity very seriously would say that genetics researchers should be held to similarly stringent standards, as should the handful of other research fields with significant existential risk (e.g. nanotech, high-energy physics).

Eliezer (the interviewee) has an HN account so he can comment for himself.


You should consider the timescale and degree as well. A billion generations of AI will pass in the relative blink of an eye, allowing no time to respond, and in almost every failure case will irrevocably destroy everything humans value.

GMO evolution happens more slowly and is somewhat more manageable.


> evolving biological organisms that can pose a threat to humans actually exist

That depends on what you mean. There certainly exist biological organisms that can pose a threat to individual humans; but AI could pose a threat to humanity itself, and is thus dangerous on a different scale entirely.



