gnosis makes a good point, and to be honest, I'm not sure the AIs coming out of the AI research sector are the ones we'll need to be worried about anyway.

Basically, I'm thinking something along the lines of Hanlon's Razor: "Never attribute to malice that which is adequately explained by stupidity."

We'll see serious damage caused by "weak" AIs long before we have a "strong" AI capable of causing similar damage. For example, the 2010 "Flash Crash" appears to have had high-frequency trading at its core.

My hope is that through the growing pains we experience from "weak" AI systems doing something stupid, we'll be better prepared for a "strong" AI system that may try to do something malicious.
