
There are really two kinds of attacks: ones that involve subverting existing networked devices to perform malicious acts (hacking), and ones that involve building one's own malicious devices (building).

Of the two, attacking existing devices sounds scarier because there is a large fleet of vulnerable devices that can potentially be exploited. However, it's also the problem we know more about: it's basically straight-up cyber security. The AI aspect is mostly a tangent, a way to control the system once you have access.

It seems the most straightforward way to defeat these attacks is to increase their cost by promoting better cyber security (sandboxing, use of safer languages, cryptography as appropriate, etc.). That's not to say it'll be easy, but the problems are largely known problems.

On the other hand, anything that involves the attacker actually building something themselves is either going to be much smaller scale, or is going to leave a physical trail that can be traced like we do for any other sort of weapons mitigation.




I think it's a spectrum; or rather, weak and strong AI act as force multipliers:

A hacked car is dangerous, many hacked cars are more dangerous, and a pack of hacked cars more so. So is a hacked car that can hack other cars and recruit them for the pack.

Note this isn't hyperbole as such; we already have experience with computer viruses and worms.

It seems quite reasonable that a bot network might work semi-autonomously toward a "goal", in the same sense that ants might "hunt" for sugar.
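The "recruitment" dynamic above is the classic worm-growth pattern: each compromised device can compromise several more per round until the susceptible population runs out. A minimal toy model (all names and parameters here are hypothetical, chosen just to illustrate the exponential-then-saturating curve):

```python
# Toy worm-propagation model: each hacked device recruits a fixed number of
# susceptible peers per round. Purely illustrative; not a real attack model.
def worm_spread(total_devices, initially_hacked, recruits_per_hacked, rounds):
    hacked = initially_hacked
    history = [hacked]
    for _ in range(rounds):
        susceptible = total_devices - hacked
        # New recruits are capped by the remaining susceptible population.
        new = min(susceptible, hacked * recruits_per_hacked)
        hacked += new
        history.append(hacked)
    return history

# One seed device, each recruiting two peers per round:
print(worm_spread(1000, 1, 2, 5))  # → [1, 3, 9, 27, 81, 243]
```

The point of the sketch is the shape of the curve: growth is exponential while targets are plentiful, which is why a single hacked car that can recruit others is a qualitatively different threat from many independently hacked cars.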



