
> I think people are worried no one really understands how AI picks the target.

Yeah, I mean, black-box murder is never really desirable... but is it fair to assume AI will never be able to explain its reasoning? And that seems like a double standard anyway, when so many life-and-death decisions made by humans aren't entirely comprehensible or transparent either, whether to the general public or sometimes even to the people closest to the decision-maker.

Sometimes it's a snap judgment, sometimes it's a gut feeling, sometimes it's bad intel, sometimes it's just plain "because I said so"... not every kill list is the result of a reasoned, transparent, fair and ethical process.

After all, how long have Israel and Hamas (or other groups) been at each other's throats, with observers all over the world crying injustice and atrocity on both sides? And it wasn't so long ago that we destroyed Afghanistan and Iraq, and Russia is still going at it because of the desires of one man. AI doesn't have to be perfect to be better than us.

If there's one thing humans are really, really bad at, it's letting objective data overrule our emotional states. It takes exceptional training and mental fortitude to be able to do that under pressure, especially life-and-death, us-vs-them pressure.

Humans make mistakes too, and friend-or-foe identification isn't easy, especially in the heat of battle or in poor visibility. Training, for humans or AI alike, can always be improved, but it will probably never reach 100% accuracy.

Maybe we should start putting some hypothetical kill lists in front of both humans and AI, recording their decisions, and comparing them after a few years to see who did "better". I wouldn't necessarily bet on the humans...
