
I really think the stability of your robot is a completely separate issue. George W. Bush and Barack Obama both use flying robots with missiles to hunt down and kill people they don't like. Don't you think that perhaps, as these flying robots gain more and more autonomy, discussions of ethics are actually important, and important now? 50 years is a long, long time in computer science.

I'm surprised that you are so pessimistic about your research that you think ethics won't even be relevant in the year 2205. Holy cow, you must think AI is hard.




> George W. Bush and Barack Obama both use flying robots with missiles to hunt down and kill people they don't like.

This is a very good point. It's always good to be reminded that we're already living in the future.

That said, I feel like aothman is discussing real artificial intelligence, that is, an entity capable of making a conscious decision that it wants to, in this case, fire the missiles. If I had to guess, when predator drones gain the ability to "decide" for themselves whether or not to fire their missiles, that decision will be built on a system of complex rules, not on anything we'd call "intelligence". Potayto, Potahto? Maybe. I'm not an AI researcher and I don't even come close to understanding human intelligence, but I feel like even if human intelligence is just a complex system of rules, it operates at a much deeper level than we'll be able to simulate soon.


> It's always good to be reminded that we're already living in the future.

This is not a new phenomenon; the first use of autonomous killer robots was in 1943, in the form of acoustically guided torpedoes.


Well heck, if we're going to stretch the analogy, why not a mouse trap?


Because nobody has ever been killed by a misguided mousetrap?


Bear trap, then.


They don't move around of their own accord, attempting to close with the target. Guided weapons do.



