
I am not very comfortable with a machine that's very competent at killing fighter pilots. I am much less comfortable with such machines generalizing that competency to other, closer problem spaces. Also, in cases where deadly force is used without a human in the loop, being able to describe exactly which rules triggered and caused the death of a friendly pilot, or of that C-40 that turned out to be a 737 full of passengers, would be a requirement. "Because the plane got confused" is not very satisfactory.
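As a sketch of what that requirement implies, here is a hypothetical rule check that records every rule it evaluates, so a post-incident review can see exactly which rule fired. The Track fields, rule names, and decide() function are all invented for illustration and don't reflect any real targeting system:

    from dataclasses import dataclass

    @dataclass
    class Track:
        # Hypothetical attributes of a radar track, for illustration only.
        iff_response: str    # "friendly", "hostile", "none"
        airframe_match: str  # best-guess airframe classification
        confidence: float    # classifier confidence, 0.0 to 1.0

    def decide(track: Track, audit: list) -> bool:
        """Engage only if every rule passes; log each rule evaluated."""
        rules = [
            ("R1: IFF must not read friendly", track.iff_response != "friendly"),
            ("R2: airframe matches a targeted type", track.airframe_match == "C-40"),
            ("R3: classifier confidence above 0.99", track.confidence > 0.99),
        ]
        for name, passed in rules:
            audit.append((name, passed))  # the audit trail the comment asks for
            if not passed:
                return False
        return True

    audit = []
    print(decide(Track("none", "C-40", 0.62), audit))  # False
    for entry in audit:
        print(entry)  # shows R1 and R2 passed, R3 blocked the engagement

The point is only that every decision leaves a legible trace of which rules fired; "the plane got confused" corresponds to a system with no such trace.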



Fighter jet AIs aren't that scary compared to the work being done on algorithms for teams of robot soldiers. The US's TARDEC and the Australian DSTO held a competition for this back in 2010 called the Multi Autonomous Ground-robotic International Challenge (MAGIC) [0]. In this competition, a team of aerial and ground robots had to perform a simulated combat mission to 'secure' a set of moving (other soldiers) and stationary (IEDs) targets in an approximation of an urban environment.

In simulation, the algorithms demonstrated for this perform quite well, completing the mission with a success rate of 97.5%, so long as one has 6 search robots and 3 gun robots.

This did not work so well in real life, partly because real robots are difficult to work with. It is still disturbing, though, because of the high success rate, not to mention the immediate applicability to robot SWAT teams. As a civilian, I'd be much more concerned about a SWAT AI than a fighter jet AI.

However, robot SWAT teams are still a ways off.

[0] http://singularityhub.com/2010/03/19/teams-of-military-robot...
[1] https://en.wikipedia.org/wiki/Multi_Autonomous_Ground-roboti...
[2] http://www.frc.ri.cmu.edu/~ssingh/Sanjiv_Singh/PUBS_CONF_fil...


How about a future where AI fighter pilots fight against other AI fighter pilots?

Maybe one day war will be less about killing people, and more of a battle between countries' best engineers.

Maybe I'm just optimistic, but I think robot wars would be a hell of a lot better than real wars.


>Maybe one day war will be less about killing people, and more of a battle between countries' best engineers.

When two robots fight each other, it's not because the other robot is the target; it's because the other robot is an obstacle between them and the human targets that will actually affect the war.


This reminds me of Philip K. Dick's "The Second Variety", which I recently read. (Not that I'm trying to make a prescient political statement, just sharing a fun short story you might be interested in.)

http://manybooks.net/pages/dickp3203232032/0.html


Such a great story. Thank you for posting it.


Why are their targets other AI fighter pilots? That's a very, very big assumption. Can we safely assume that the AI pilot won't target buildings/ships/etc.? Even if we assume a well-behaved AI, why should we assume that AI fighter pilots will target military targets? Can we say for certain that they won't be controlled by a malicious despot?


Both sides' civilians could meet together on bleachers and share popcorn.


What happens when one AI pilot generalizes its knowledge of enemy airplanes and figures out that bombing an airplane factory destroys multiple targets with minimum risk to itself?


Because in real wars, the body count and the physical destruction are what matter.


Not necessarily. Generally, a conventional war ends when one side surrenders, usually because it knows it can't win and wants to minimize further losses. If we reach a point where unmanned weapons are the ultimate tool of destruction, then it makes sense that by the time the AI combatants could target enemy populations, the war would already be decided, and the nation about to have its people killed would surrender. There will be exceptions, but it would be no worse than current war, and at least potentially better.

That is, of course, assuming that the most efficient strategy to force a surrender doesn't end up being to bypass the enemy AI and directly target human populations. That would only be possible if there's a huge mismatch between offensive and defensive capability. But then you end up with a MAD situation similar to what we have with nukes.

That said, I expect the flip side is that this tech would ultimately make a guerrilla war by a local population against an occupying force very, very ugly for the guerrillas.


That would essentially mean two nations agreeing to resolve their intractable diplomatic differences with trial by combat, and to abide by the results. It's a lovely notion, but in practice you can't really force a nation to change policy without a palpable threat to its population and/or infrastructure.

So it boils down not to robots killing robots, but robots killing humans, while other robots try to stop them.


We are already partially there, only with athletes instead of engineers.

(That said, I'm less optimistic, I guess.)


England may as well just roll over and capitulate to everyone in that case - Go Iceland ;-)




