AI fighter pilot wins in combat simulation (bbc.co.uk)
202 points by _airh on June 28, 2016 | 177 comments



I read the paper, and read up on the techniques used (because the paper is very light on details). I came away completely underwhelmed.

This makes (clever) use of hundreds, if not thousands, of man-hours painstakingly spent entering expert rules of the form IF <some input value is above or below some threshold> THEN <put some output value in the so-and-so range>.

The mathematical model of Fuzzy Trees is nice, but it is completely ad hoc to the specific modelling of the problem, and will fail to generalize to any other problem space.

This kind of technique has some nice properties (its "reasoning" is understandable and thus somewhat debuggable and somewhat provable; it smooths logic rules that would otherwise naively lead to non-smooth control; etc.), but despite the advances presented here, which seem to make the computation of the model tractable, I don't see how they make the actual definition of the model anywhere near tractable.
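To make the scale of that hand-tuning concrete, here is a rough sketch of what a single rule of that IF/THEN form might look like; the variable names, thresholds, and the trapezoidal membership function are all invented for illustration, not taken from the paper:

  # One hand-written fuzzy rule (hypothetical names/thresholds):
  # IF target_range is "close" THEN throttle is "high".
  def trapezoid(x, a, b, c, d):
      # Trapezoidal membership: 0 outside [a, d], 1 on [b, c], linear in between.
      if x <= a or x >= d:
          return 0.0
      if b <= x <= c:
          return 1.0
      return (x - a) / (b - a) if x < b else (d - x) / (d - c)

  def rule_close_range_high_throttle(target_range_nm):
      # Degree to which "target is close" holds: fully true below 2 nm,
      # fading to false by 10 nm (made-up numbers).
      closeness = trapezoid(target_range_nm, 0.0, 0.0, 2.0, 10.0)
      # Consequent "high throttle" (1.0), weighted by how strongly the rule fired.
      return closeness * 1.0

  print(rule_close_range_high_throttle(5.0))  # partially fires, ~0.62

Multiply that by every input/output pair the designers cared about and you get the man-hours I'm talking about.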

Also, I dislike having to wade through multiple pages of advertising before I can find the (very light) scientific content.

-- Edit: I realize I am very negative here. I do not mean to disparage the work done by the authors. It's just that the way it is presented makes it sound way more impressive than it is. It's still interesting and novel work.


Some rules can be derived from instrumenting humans as they perform the maneuvers and generalizing from their behavior. We used to instrument motorcycle riders at Harley-Davidson and create fuzzy-logic models of expert riders as they performed certain acts on a track (dodging a road hazard; emergency stop; hairpin turn). Our goal was a fuzzy-logic rider model, which they used to help design new motorcycle suspensions/steering that would feel 'natural' to an expert rider, i.e. mesh well with the model they had for an expert.


Wow, that's very innovative for HD. Why don't they put some of that innovation effort into their actual drivetrain? I loved my Buell's look and ride, but I mistrusted its horrid 50-year-old Baker transmission, which went on to fail completely at ~6,000 miles, forcing me to disassemble the entire engine and split the crankcase so I could repair it. After seeing its guts, I no longer wanted it. It ignores a half century of innovation in motorcycle design, producing a machine that I vote most likely to unexpectedly leave me on the side of the road.


I'll throw out a guess that part of it is the character of the bike being tied to the drivetrain.

Even outside Buell, which was a bit of a neither-fish-nor-fowl anomaly, they've had the odd innovative model here and there--the V-Rod comes to mind. And I've seen some interesting things in their ABS systems and some other components. But I think the most successful models have been very conservative about their drivetrain, as it's part of their signature sound/feel.

My hope is that Polaris' recent critical success with the Indian Scout (which I bought over HD--far better bang for buck than a Sportster) gives the segment a kick in the ass.


The continued anomaly of the Buell is partly the V-Rod's fault. The V-Rod engine was originally supposed to be for the Buell line to address that issue, until the mothership got interested in it. They added too much weight and too high a deck height for the Buell chassis, so the project was wrested away from Buell to make a bike that, in the end, HD couldn't really sell anyway.


Interesting--I had no idea that the V-Rod engine was originally destined for Buell. That makes lots of sense.

Buell had some great innovative designs too, but you could tell that was all Erik and not the motor company. I wasn't surprised when the split happened.


What many don't know about the shuttering of Buell is that it happened while HD was in the process of receiving over a billion in TARP money. This information was kept secret for over a year. Harley had something like ~6000 employees at the time. Buell operated on ~100. It seemed like a ludicrous move; like it had to be political.


If you're looking for innovative motorcycling, gotta look at a sportbike. HDs are for people who love cruisers, a genre defined by rat bikes and self-made bikes, but for people with money. (Not knocking on rat bikes, though, those things are awesome)


I asked myself the same question, and did read some papers, but could not find a recent comprehensive survey on automatic fuzzy rule generation (I admit I gave up after ~15 minutes).

What I found did not convince me that it would fare better than an off-the-shelf (somewhat) non-interpretable statistical supervised learning algorithm.

It can be a nice way of bootstrapping the rule-writing process, or to go the other way: to discover and analyze new expert knowledge by looking at the rules.

But performance-wise, I would go the machine learning way anytime.

Also, Inverse Reinforcement Learning seems to be very promising: one guesses the reward function by observing the expert acting.


I imagine there are probably significant regulatory constraints on the interpretability of any model generated to run a combat weapons platform, even if just in a simulation. Deviations from a norm during wartime, caused by the need to chase after additional data or test hypotheses, might be considered a significant demerit for the model. Alternatively, formalizing these rules may be helpful for instructing new pilots or adhering to existing rules of engagement.


Quinlan's C4.5 (https://en.wikipedia.org/wiki/C4.5_algorithm) is somewhat popular for similar tasks. It builds decision trees from data that are conceptually similar to such fuzzy rules, and the rules can be human-readable, so they can be really powerful after expert review.

For example, a very specific condition can either mean that this particular condition is genuinely useful, or simply that the training data happened to contain those particular instances of a more general condition. A human expert can usually decide easily which rules need to be extended to generalize properly beyond the training data, while the automated generation helps surface factors that the expert could recognize, but wouldn't think of if working from scratch.
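For what it's worth, a minimal sketch of that workflow using scikit-learn's CART-style trees (not C4.5 itself, but conceptually similar), with made-up feature names and toy data, would be:

  # Learn a small decision tree and print it as human-readable rules that an
  # expert could then review and generalize. Uses scikit-learn's CART rather
  # than C4.5; feature names and data are invented for illustration.
  from sklearn.tree import DecisionTreeClassifier, export_text

  # Toy training data: [closure_rate, altitude_delta] -> engage (1) or evade (0)
  X = [[300, 1000], [250, -500], [50, 200], [400, 0], [30, -1500], [20, 50]]
  y = [1, 1, 0, 1, 0, 0]

  tree = DecisionTreeClassifier(max_depth=2).fit(X, y)
  print(export_text(tree, feature_names=["closure_rate", "altitude_delta"]))

The printed IF/THEN structure is exactly the kind of artifact a human expert would inspect and, where a split looks like an accident of the training data, generalize by hand.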


Thanks for the ref!


I'd like to double-click on that comment. :) If you're ever interested in writing more about that, I'd upvote it.


Yeah it was weird working with guys named 'Roadkill' and 'Slash'. Nice as could be. Terrific riders.


Wait, what is a fuzzy logic model? From the parent post, it seems to be something like learning from data, but done manually?


It's an approach to AI that lets you write rules over graded truth values (0.0-1.0) rather than strictly boolean true/false (1 or 0). It's used in tons of different systems, from medical diagnostic tools to washing machines.

The plus side is that it allows you to make systems that can be tweaked by trial and error to handle cases that would otherwise require very complex logic. The downside is that a lot of the time the systems aren't provable the way other types of logic are, and they can be a pain to debug.
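As a rough sketch of the difference (my own toy example, not from any real product):

  # Toy contrast between a hard boolean rule and its fuzzy counterpart for a
  # washing-machine-style controller. Names and thresholds are invented.
  def boolean_wash_time(dirtiness):           # dirtiness in [0, 100]
      return 60 if dirtiness > 50 else 30     # hard jump at the threshold

  def fuzzy_wash_time(dirtiness):
      # Degree of membership in the fuzzy set "dirty" (0.0 .. 1.0).
      dirty = min(max((dirtiness - 20) / 60.0, 0.0), 1.0)
      clean = 1.0 - dirty
      # Weighted average of the two rule outputs -> smooth control output.
      return clean * 30 + dirty * 60

  for d in (20, 45, 55, 80):
      print(d, boolean_wash_time(d), round(fuzzy_wash_time(d), 1))

The fuzzy version changes its output gradually as the input changes, which is the smoothing property mentioned upthread; the price is that the membership curves themselves are what you end up tweaking by trial and error.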


I would assume that in a military setting (as in most bureaucratic/management settings) a solution like this has the immense advantage that one can trace any error precisely back to a specific requirement/rule.

It is very hard to ask management to trust a system they know nothing about, where they have literally no control over the final behaviour, even if in the end it will perform better overall. In a rule-based system, instead, it is always possible to make adjustments and attribute mistakes very efficiently to specific causes.

I guess this is the main reason why "true" AI is currently being used mostly in information fields, rather than on physical machines and engineering. No-one would know how to deal with the outcome of a fuzzy learned algorithm making the wrong decision. This is also a reason why autonomous cars are very interesting to me, even though I bet they are still full of ad-hoc rules in order to have a layer of "manageability" over the overall system.


I see how it can sound appealing to a bureaucrat, but as a programmer, debugging the concurrent evaluation of thousands of "natural" language IF...THEN... rules until I find the questionable one where a threshold was defined too low or too high sounds like a nightmare.


Rule-based systems have long had the ability to automatically answer queries about why such-and-such a decision was taken.


That would indeed be a nightmare, but trying to debug a machine-generated neural network is actually worse.


I imagine they would log all the inputs, as well as branches taken so they can later replay everything in a debugger. That would make the process much simpler.
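A minimal sketch of that kind of record/replay harness (entirely hypothetical, not from the paper):

  # Log every input vector and every rule that fired, so a questionable
  # decision can be replayed and inspected later. Rule names and fields
  # are hypothetical.
  import json

  trace = []

  def evaluate(rules, inputs):
      fired = []
      for name, rule in rules.items():
          strength = rule(inputs)
          if strength > 0.0:
              fired.append((name, strength))   # record which rules fired, and how hard
      trace.append({"inputs": inputs, "fired": fired})
      return fired

  def save_trace(path):
      with open(path, "w") as f:
          json.dump(trace, f, indent=2)

A "replay" is then just feeding the logged inputs back through the same rule set and checking that the same rules fire with the same strengths.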


Being able to take the blackbox recordings of your combat drone which got shot down, reconstruct the scenario and permute the rules till you get a win, seems like...well, a big win.

Air combat is also one of those areas that does notionally have narrowly computable victory parameters - given hardware with capabilities X, there is a model (which we don't know) that should generally predict the outcome.


Only if you do it wrong?

If it ends up like thousands upon thousands of if statements then you got it wrong (or so I think).

Getting it right IMO means realizing that rules are not necessarily source code but often data to be iterated over, filtered etc.

This is a form of metaprogramming that can be practiced even in Java ;-)


At least you can actually debug it.


Comparing comments here to the comments on, say, first Alpha Go post, reveals the amazing amount of AI bias on this website.

When an expert system beats some human in a complex real-life problem the comments are about how it is narrow, boring, not sufficiently tested and ultimately doesn't matter.

When a neural network (with the help of MCTS and an entire data center full of servers) beats some human in a board game, the comments here hype it through the roof and jump to conclusions about the coming dawn of AGI.


>> Comparing comments here to the comments on, say, first Alpha Go post, reveals the amazing amount of AI bias on this website.

I think it's because only recently has AI been in the news again, and it's been in the news thanks to machine learning and neural networks in particular (and more specifically, deep learning). The last time there was a big to-do about AI was in the '90s, and most people writing here were probably not old enough to figure out what the hell was up back then.

Er, I'm not dating myself here. I know about those things out of happy accident (maybe a story for another time). Most people who graduated from CS in the last five or six years will not have heard anything about expert systems and GOFAI, except that it failed etc, if that.

Then along comes Google and promises it can make your phone talk to you. People are intrigued.

But of course, those who don't know their history are doomed to repeat it.


I'd guess there's a difference in kind that people get excited about.

The fighter jet AI technique is hard-coded to a very specific problem domain, and could only be reproduced in a different problem domain by doing it from scratch.

The technique used by AlphaGo is at least closer to the idea that we can eventually build generically trainable machines that can learn to do a variety of tasks, without having to code them from scratch every time.


>> The fighter jet AI technique is hard-coded to a very specific problem domain

So are machine learning models, ultimately. Although the algorithms that build the models have more general application, once you train a model on a certain set of data, that model can only ever be used in the domain circumscribed by the data. They're one-trick ponies, yes?

>> we can eventually build generically trainable machines that can learn to do a variety of tasks, without having to code them from scratch every time.

We would however have to train them from scratch every time, for each separate task, and each time we'd need terabytes of data and megawatts of power.


You realize that this system uses machine learning? And that AlphaGo was custom-built to play Go?

AlphaGo has several hand-coded training features, uses Monte Carlo tree search, and was primed on a huge human-made dataset. It's not like it can be thrown at other problems without heavy re-engineering.


For me personally, this is less exciting because AI jet pilots have a natural advantage.

Human pilots are limited in maneuvering by the amount of force their body can take. Computer pilots are limited by the amount of force the airframe can take. The latter allows much more aggressive and radical maneuvers.

That means, compared to playing Go, the robot pilot can be comparatively much worse than the human and still win decisively by taking advantage of high g maneuvers.
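As a rough back-of-the-envelope (my own numbers, purely illustrative): approximating the turn radius as r ~ v^2 / (n * g), raising the sustained load factor from a human-limited 9 G to a hypothetical airframe-limited 15 G tightens the turn considerably:

  # Rough turn-performance comparison; ignores the vertical lift component
  # and assumes the airframe/engine could actually sustain these loads.
  import math

  v = 250.0    # airspeed, m/s (made-up, roughly high-subsonic)
  g = 9.81

  for n in (9, 15):                      # 9 G pilot limit vs hypothetical 15 G airframe
      r = v**2 / (n * g)                 # turn radius, m
      rate = math.degrees(v / r)         # turn rate, deg/s
      print(f"{n} G: radius ~{r:.0f} m, rate ~{rate:.1f} deg/s")

That works out to roughly 710 m at ~20 deg/s versus roughly 420 m at ~34 deg/s, which is the kind of margin the comment above is pointing at.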


The tests were done in a simulation. Also, if you want to run high-G maneuvers, you can always control a drone from the ground.


I am not very comfortable with a machine that's very competent in killing fighter pilots. I am much less comfortable with such machines generalizing that competency to other, closer, problem spaces. Also, in cases where the use of deadly force happens without a human in the loop, being able to describe exactly which rules triggered and caused the death of a friendly pilot or that C-40 that happened to actually be a 737 full of passengers would be a requirement. "Because the plane got confused" is not very satisfactory.


Fighter jet AIs aren't that scary compared to the work being done on algorithms for teams of robot soldiers. The US's TARDEC and the Australian DSTO held a competition for this back in 2010 called the Multi Autonomous Ground-robotic International Challenge (MAGIC) [0]. In this competition, a team of aerial and ground robots had to perform a simulated combat mission to 'secure' a set of moving (other soldiers) and stationary (IEDs) targets in an approximation of an urban environment.

In simulation, the algorithms demonstrated for this do quite well, being able to complete the mission with a success rate of 97.5%, so long as one has 6 search robots and 3 gun robots.

This did not work so well in real life, partially because real robots are difficult to work with. It is still disturbing, though, because of the high success rate, not to mention the immediate applicability to robot SWAT teams. As a civilian, I'd be much more concerned about a SWAT AI than a fighter jet AI.

However, robot swat teams are still a ways off.

[0] http://singularityhub.com/2010/03/19/teams-of-military-robot...
[1] https://en.wikipedia.org/wiki/Multi_Autonomous_Ground-roboti...
[2] http://www.frc.ri.cmu.edu/~ssingh/Sanjiv_Singh/PUBS_CONF_fil...


How about a future where AI fighter pilots fight against other AI fighter pilots?

Maybe one day war will be less about killing people, and more of a battle between countries' best engineers.

Maybe I'm just optimistic, but I think robot wars would be a hell of a lot better than real wars.


>Maybe one day war will be less about killing people, and more of a battle between countries' best engineers.

When two robots fight each other, it's not because the other robot is the target, it's because it's an obstacle between them and the human targets that will actually affect the war.


This reminds me of Philip K. Dick's "The Second Variety" that I recently read. (Not that I'm trying to make a prescient political-statement, just sharing a fun short-story you might be interested in.)

http://manybooks.net/pages/dickp3203232032/0.html


Such a great story. Thank you for posting it.


Why are their targets other AI fighter pilots? That's a very, very big assumption. Can we safely assume that the AI pilot won't target buildings/ships/etc.? Even if we assume a well-behaved AI, why should we assume that AI fighter pilots will target military targets? Can we say for certain that they won't be controlled by a malicious despot?


Both sides' civilians could meet together on bleachers and share popcorn.


What happens when one AI pilot generalizes its knowledge about enemy airplanes and figures out that bombing an airplane factory destroys multiple targets with minimum risk to itself?


Because in real wars the body count and the physical destruction is what matters.


Not necessarily. Generally, a conventional war ends when one side surrenders, usually because it knows it can't win and wants to minimize further losses. If we reach a point where unmanned weapons are the ultimate tool for destruction, then it makes sense that by the time the AI combatants reach a point where they could target enemy populations, the war would already be decided, and the nation about to have its people killed would surrender. There will be exceptions, but it would be no worse than current war, and at least potentially be better.

That is of course assuming that the most efficient strategy to achieve a surrender doesn't actually end up being a bypass of enemy AI and direct targeting of human populations. This would only be possible if there's a huge mismatch between offensive and defensive capability. But then you end up with a MAD situation similar to what we have with nukes.

That said, I expect that the flip-side is that this tech would ultimately make a guerilla war by a local population against an occupying force very very ugly for the guerillas.


That would essentially mean two nations agreeing to resolve their intractable diplomatic differences with trial-by-combat, and to abide by the results. It's a lovely notion, but in practice you can't really force a nation to change policy without a palpable threat to their population and/or infrastructure.

So it boils down not to robots killing robots, but robots killing humans, while other robots try to stop them.


We are already partially there only with athletes instead of engineers.

(That said I'm less optimistic I guess.)


England may as well just roll over and capitulate to everyone in that case - Go Iceland ;-)


>The mathematical model of Fuzzy Trees is nice, but this is completely ad-hoc to the specific modelization of the problem, and will fail to generalize to any other problem space.

Well, why should it? No one is inventing HAL-like AI anytime soon, or ever. If this system does a better job of killing the enemy than human pilots, then it's quite the breakthrough. Projecting air power is one of the ways countries keep aggressors away, and this would be quite an advantage for a variety of reasons. Not the least of which is that you can now design AI-driven fighters that have zero design compromises to keep human pilots alive.

I imagine fighter engagement consists of a fairly limited set of problems to solve. Think of this as just a souped up autopilot/autoland system, except with guns and missiles. We're not asking the AI to write the next Romeo and Juliet here.

>because the paper is very light on details

Defense contractors aren't known for sharing details. I imagine this is a competitive advantage and they want to keep their cards close to their chest. There may even be national security issues here.


> Well, why should it?

Because what is trumpeted as a breakthrough may in fact be so narrow in scope that it may not even be possible to use it in a flight combat video game without a lot of work, let alone any real life environment.

It is absolutely, very closely tied to the mathematical model of aerial combat that they devised and can not easily be made to accommodate new insights, or new challenges.


> Because what is trumpeted as a breakthrough may in fact be so narrow in scope that it may not even be possible to use it in a flight combat video game without a lot of work, let alone any real life environment.

What? They were quite literally running the AI in a simulator -- ie, a very expensive video game. The only thing that might not scale is the computational power necessary to execute.


Sorry, I was unclear, I meant in /another/ video game. They are strongly tied with their particular modelling of the problem.


I suppose it depends on the granularity of the rules. Deciding what to do is separate from how to do it. If those are tied together within the rule system, then yes I agree, the rules would need to be tweaked for every new model. But if the rules only define the goal of the system, and flight mechanics are handled by, say, a goal-based feedback mechanism, then it should work fine in any model.


Additionally, doesn't this mean that the code is very sensitive? If the opponent were to get their hands on the code, it would be easy to predict what these things are going to do.


So, while it might be the fashionable thing to do some kind of (machine/deep/?) learning approach where you allow it to run millions of simulations and figure out things itself, I can understand why they didn't.

Learning approaches which depend on mass-simulation are great when your problem only ever exists in a "virtual" context, but what happens when you want to take your trained neural network out into the real world? Clearly it's going to have to adapt to the differences between the real world and the virtual world - but how would you do that? You can't run millions of dogfights in the real world to adjust its training.



This is called domain adaptation and transfer learning in the literature. There are ways to do that. It is an active area of research. Basically, the idea is to run a few real-world dogfights (you could conceivably collect a few hundred), and use methods to adapt the simulation model to the new domain. Solutions involving unsupervised learning (e.g. no dogfight, just collect sensor data from fighters - you could collect thousands of hours this way) are also active areas of research.
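A crude sketch of the simplest version of that idea - pretrain on plentiful simulated data, then keep updating on the few real samples available - using scikit-learn's incremental SGDClassifier (data, features, and labels here are placeholders, not from any real system):

  # Fit on lots of simulated data, then adapt incrementally on scarce real data.
  import numpy as np
  from sklearn.linear_model import SGDClassifier

  rng = np.random.default_rng(0)
  X_sim = rng.normal(size=(10000, 8))              # abundant simulator data
  y_sim = rng.integers(0, 2, 10000)
  X_real = rng.normal(0.3, 1.0, size=(200, 8))     # scarce, slightly shifted real data
  y_real = rng.integers(0, 2, 200)

  clf = SGDClassifier(loss="log_loss")
  clf.partial_fit(X_sim, y_sim, classes=[0, 1])    # learn the bulk from simulation
  for _ in range(20):                              # then nudge toward the real domain
      clf.partial_fit(X_real, y_real)

Real domain-adaptation methods are much cleverer than this (importance weighting, feature alignment, etc.), but the pretrain-then-adapt shape is the same.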


It is not about fashion, it is about not being ad-hoc.

For small-scale problems where most of the variables are well understood, this kind of approach works beautifully. Big problems are better tackled by a more generic approach (maybe with some ad-hoc adaptations, such as a mixed approach between expert systems and statistical algorithms, feature engineering, etc.) because these approaches will be more resilient to exposure to the real world, and the manpower invested in them is useful in more than one problem domain.

To address your last point, there is an extensive body of work on data-scarce environments. I've even seen a talk about applying reinforcement learning to endangered-species preservation, where you only get a single-digit number of interactions with the system!


The solution would be to make the simulator so good that there is no practical difference.


For a start, if it can be done by a fuzzy decision tree, it can be derived by a Bayesian network (that is basically a fuzzy decision tree that keeps extra data for learning) and made more versatile after that.

But the decision tree is much more tractable, thus the longer it's kept in this format, the more future-proof the work is.


Sounds like how Deep Blue defeated Kasparov. The first time is always awkward, but now that we know that it can be done we can develop more generic algorithms. A Stockfish for air combat may be several years away but it's coming.


I'm not sure that an open source AI for air combat would get you very far...depending on the licensing terms.


That's fuzzy logic for you. Probably why it mostly died around the turn of the century.

It's still used in a few systems as a complementary system involving PIC controls.


For those who read this piece of news and don't understand why there is no mention of machine learning, neural networks and deep learning, that's because the system described is a typical fuzzy logic Expert System, a mainstay of Good Old-Fashioned AI.

In short, it's a hand-crafted database of rules in a format similar to "IF Condition THEN Action" coupled to an inference procedure (or a few different ones).

That sort of thing is called an "expert system" because it's meant to encode the knowledge of experts. Some machine learning algorithms, particularly Decision Tree learners, were proposed as a way to automate this process of elicitation of expert knowledge and the construction of rules from it.

As to the "fuzzy logic" bit, that's a kind of logic where a fact is true or false by degrees. When a threshold is crossed, a fact becomes true (or false) or a rule "fires" and the system changes state, ish.

It all may sound a bit hairy but it's actually a pretty natural way of constructing knowledge-based systems that must implement complex rules. In fact, any programmer who has ever had to code complex business logic into a program has created a de facto expert system, even if they didn't call it that.
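For the curious, the bare skeleton of such a rule-plus-inference setup (a toy with invented rules, not what the paper uses) looks something like:

  # Toy "IF Condition THEN Action" expert system with naive forward chaining.
  facts = {"missile_lock": True, "in_weapons_range": True}

  rules = [
      ({"missile_lock", "in_weapons_range"}, "fire"),
      ({"being_painted_by_radar"}, "deploy_countermeasures"),
  ]

  def infer(facts, rules):
      actions = []
      for conditions, action in rules:
          if all(facts.get(c, False) for c in conditions):
              actions.append(action)       # this rule "fires"
      return actions

  print(infer(facts, rules))               # ['fire']

The fuzzy variant replaces the True/False facts with degrees in [0, 1] and the all(...) with something like min(...), so rules can fire partially.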

For those with a bit of time in their hand, this is a nice intro:

http://www.inf.fu-berlin.de/lehre/SS09/KI/folien/merritt.pdf


Not actually hand-crafted. If you read the actual paper, they're training their fuzzy system with some kind of genetic algorithm. The theory behind it isn't in this paper, and it seems to be home-grown DIY type stuff - pretty standard for heavily military work like this - but it is still doing some optimization and learning. No idea what it's doing, whether it's just tuning weights or whether it's actually altering the tree itself, but I'd guess that they've basically reinvented decision trees.
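We can only guess at what the "genetic fuzzy" part actually optimizes, but a generic sketch of a genetic algorithm tuning a single rule threshold - with a stand-in fitness function where the real system would presumably run the dogfight simulator - might be:

  # Generic GA sketch: evolve one fuzzy threshold toward whatever value the
  # fitness function rewards. Everything here is hypothetical.
  import random

  def fitness(threshold):
      # Placeholder for "score from running the simulator with this rule set".
      return -(threshold - 7.3) ** 2

  population = [random.uniform(0, 20) for _ in range(30)]
  for generation in range(50):
      population.sort(key=fitness, reverse=True)
      parents = population[:10]                                  # selection
      children = [random.choice(parents) + random.gauss(0, 1.0)  # mutation
                  for _ in range(20)]
      population = parents + children

  print(round(max(population, key=fitness), 2))                  # converges near 7.3

A real genetic fuzzy system would evolve many such parameters at once (and possibly the rule/tree structure itself, which is where it starts to resemble learned decision trees).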


>> If you read the actual paper

You're right that I didn't. Thanks for correcting me and apologies for the slight fudging. I think my description is still mostly accurate though.

>> I'd guess that they've basically reinvented decision trees.

That would make sense, in that it's kind of an obvious algorithm to re-invent if you're trying to learn propositional rules. I'm not so sure about the "evolutionary" part though.


Also, I should say: I'm really sorry this news had to be about an automated weapon. That sucks.


It wouldn't be news if it weren't about a weapon. Or rather, it wouldn't have grabbed your attention.


It grabbed my attention because it's an expert system with fuzzy logic rules and I have an interest in those. I'm not interested in their military applications.


Then why did you bring it up?


I analysed the bit I'm interested in and stated I abhor the bit I abhor.

What are you failing to understand?


AI Fighter Pilots have been killing me in Flight Simulations for at least 30 years now using similar systems. From the paper, they basically use an expert system using something they call a Genetic Fuzzy Tree (GFT), which seems suspiciously like a Behavior Tree where the nodes are trained. They trained the GFT then had it go up against itself where Red team was the 'enhanced' AI and Blue was supposed to be the human (this part was odd to me).

After they completed the training they put it up against real veteran pilots and the AI basically did a few things. It would take evasive maneuvers when fired upon and fire when in optimal range. That's pretty much it. And you know what? That's really all modern pilots need to do. It's amazing what they did with Top Gun, making this stuff not look boring. At the end of the day it's just waiting for some computer to tell you that you have target lock and pressing a button. If attacked, take evasive maneuvers and pray. Takeoff and landing on a carrier is the scariest part.

I'm quite curious how this system would perform in WWII era dogfights where you had to worry about the stress on your plane, had to deal with engines that failed and stalled all the time and maneuvers that were much slower and closer to the enemy (plus no missiles).

Even so, I enjoyed reading the paper (not the article) so would recommend it if you're into Game AI at all.


> It's amazing what they did with Top Gun, making this stuff not look boring.

When airplanes get within gun range ("knife-fighting range") things are very interesting. Beyond visual range is just weapons management, but close-in it's energy management to optimize maneuvering to get to a killing position.

Also, it's much more interesting when you have something at stake besides losing a video game and having to restart.

> I'm quite curious how this system would perform in WWII era dogfights where you had to worry about the stress on your plane, had to deal with engines that failed and stalled all the time and maneuvers that were much slower and closer to the enemy (plus no missiles).

By the end of WWII, properly maintained engines flown within parameters were pretty reliable. (So no pulling negative g's in a plane with a carburetor.) The concerns of fighter pilots at very close range or trying to evade when targeted by missiles can still be similar in certain regards to pilots trying to stay alive in WWII.

AI's would probably be very good at energy management and taking shots of opportunity.


Seems to me this gets a lot more interesting when they start building fighter jets without the assumption that a pilot will be in the plane at all. You can build much smaller, lighter planes without the need for life support systems or worrying about g-forces that will kill a human pilot.

I'll grant you that doing this in the US is going to be problematic because of ethical concerns, but there is definitely going to be some country that does it, and as soon as they do, they'll instantly gain air supremacy.


Aren't those called missiles or UAVs?

As in, we should consider why we have planes in the first place: it's to deliver some payload (bombs, missiles, or in the olden days, cameras for photography) to a specific place where you make use of that payload and then go home. Once you remove the human, you're not left with too many uses that can't be solved with existing technology.


The HiMAT basically demonstrated this. Unfortunately there doesn't seem to be much information online about the program; however, some books I've read note that the program was a success.

http://www.boeing.com/history/products/himat-research-vehicl...

Rudimentary fuzzy logic in this kind of platform should be able to defeat a human pilot regardless of the pilot's cunning/unpredictability/other human aspects which movies trumpet as superior human features. But practical requirements such as range, payload, loiter time, and pork barrelling may mean that the platform is otherwise compromised.

Additionally in an era where things such as airborne lasers are becoming a reality, the whole meta may completely change.


The life-support weight and g-force worrying can be dropped, but I think parameters like wing-loading, range, load capacity, the radar and its power generation will set the size of a fighter plane to be pretty much that of a fighter, even if you drop the pilot.


You're saying modern air to air combat is relatively simple and doesn't require that much pilot skill? If that's the case, why do all the accounts of the 1991 and 2003 Gulf Wars claim pilot skill was perhaps the single largest advantage the Americans and British had over the Iraqis? (Not a rhetorical question; I don't know what the specifics of the task consist of, and I'm curious about the answer.)


In Gulf War 91, the Iraqi pilots were both skilled and combat experienced from the Iran-Iraq war. Although rigorously trained and highly skilled, very few USAF pilots flying operationally had actually previously experienced air-to-air combat.

Apart from technically superior radar and weapons systems, the USAF Weapons School[0] teaches USAF fighter pilots how to engage within the envelope of those weapons.

In the early part of the Vietnam war, the air-to-air missiles kept missing the target due to poor pilot training. The result of the Ault report[1] and subsequent TOPGUN[2] program worked to remedy the shortcoming in the training. The USAF FWS was created after seeing the results of the Navy TOPGUN training.

[0] https://en.wikipedia.org/wiki/USAF_Weapons_School

[1] https://en.wikipedia.org/wiki/Ault_Report

[2] https://en.wikipedia.org/wiki/United_States_Navy_Strike_Figh...


The last time I watched a documentary about dogfights, two stories about the Gulf War ended with the Iraqi pilots losing situational awareness and consequently dying by crashing into the ground.

It makes me wonder a little bit if their skills are vastly overestimated.


In the same Dogfights documentary, they also mentioned Iraqi MiG-25 pilots evading several AIM-7 radar guided missiles fired at the same jet. In other references, there were a few cases of MiG-25s evading USAF F-15 firing missiles at them, and making it back to base.

Experienced USAF pilots have hit the ground during combat operations. That doesn't mean they were not skilled pilots, just that they made a deadly mistake.


Not saying this has any relation in this specific case, but both of those governments have made up similar lies about pilot ability (British in WWII with radar hidden by "pilots who eat carrots to improve eyesight" is the classic example) in the past to hide technological advancements.


I'm no Gulf War specialist, but most recent analyses predict pretty consistently that if you pit decent-to-great pilots in F-35 vs F-35 with similar missiles, you get a mutual kill every time. The same goes for any single plane that has BVR radar and missiles and off-boresight close-range missiles. Most 4th-gen planes have these capabilities.

I didn't agree with this at first, but they claim that modern missiles are able to differentiate between the heat signature of the plane body and the engine nozzle. Also, modern missiles are capable of 50G turns. That G-loading tells you nothing about the turning circle, but it is a surprisingly good indicator of how quickly you can deviate from a straight line. A fighter plane loses every time, from every angle, at every speed.

The only problem of air-to-air missile dominance is that they burn through all of their fuel relatively quickly. Once that happens, maneuverability goes down really quick. So the "no escape zone" is crucial and it varies from missile to missile.

If you want to stay alive, you need to out-range or surprise the enemy. Either stealth, superior radar or longer range missiles.

It looks like fighters have become very expensive and very mobile SAM sites. It beats me why countries without aircraft carriers would ever pick the F-35. You can get ~50 trucks with PIRATE IRST and METEOR missiles for the same price. More area covered at any given time while survivability goes up like hell I don't know what.


"That's really all modern pilots need to do."

Recorded conversations between ground support pilots and forward air controllers would disagree. Lots of very fast paced observation, pattern matching, orientation and tough judgment calls.

If we ever fight a competent air adversary I would imagine the AWACS to pilot conversations would be fascinating.

Learning to fly a plane is like learning to throw a baseball. It takes hours at most. Of course, learning to beat a pro player at their entire game, not just one activity, so as to take their job, is a little harder. An interesting observation of human judgment WRT the ratio of people who think they can go pro vs the people who have the skills to go pro is not terribly inspiring WRT AI pilots: there'll be lots of coders with bravado and not much action. And learning how to lead a team to a World Series win isn't even definable at this time. But yeah, toss that ball over there when I say to do it - that's a solved problem. Likewise, successfully accomplishing a combat mission is a lot more complicated than "this is how to keep the wings level and that button makes things go boom". A really smart autopilot is going to help, yet isn't the only thing necessary.

Note that it's possible to send a man to do a cruise missile's job, or even a plain old missile's job. That doesn't imply a cruise missile can do everything a man can do; it just means the man was mismanaged so as not to take full advantage of his abilities.


If we assume the wars of the future to be fought by AI-driven warmachines, can we abstract the matter further and have virtual wars? Our AI versus your AI fighting on computational resources provided by, erm, Switzerland. Nobody gets hurt and no money is spent building and destroying warplanes. Everybody wins. And have a prize pot, so actual invasion of territory is not necessary. Bulletproof solution, may I say. What do you mean it won't work?


But nobody has any skin in the game that way.

The Star Trek version was to have citizens of both sides executed to match simulated casualty numbers... https://en.wikipedia.org/wiki/A_Taste_of_Armageddon


To take matters even further, and also somewhat address skin in the game, how about we do away with AI virtual warfare, since it too implies taxpayer money being used for eventually futile endeavours, and simply organise a war-chess game between the leaders of the countries. President Trump, your move.


The prize purse and "simulated" aspect make me wonder how much the rise of football and the World Cup has sublimated democratic nations' drive for warfare as a path to national pride. The same argument also applies for the Olympics.


Interesting question. If we assume competitive sports to function in relation to testosterone or other hormonal drives when played or watched, an argument could be made that if the competitive hormonal 'itch' of men is scratched through sports, the desire to compete in war is lessened in the population, which in turn may affect politics. Purely speculative comment.


This is part of the plot of Iain M. Banks' 'Surface Detail' (2010).

Two parties agree to wage a war in virtual worlds to decide whether virtual hells should be allowed or banned. Unsurprisingly, the losing party tries to move the war into the real world to reverse the losing trend.


Well it was a little more complicated than that. And also that whole book was amazing.


I'm not sure if 'Surface Detail' is a book or a movie, but I assumed there was more to it than a two sentence description.


The so-called "cyber war" will be a very important component of any future war between nation states. Taking over the enemy's SCADA systems, power grid, net infrastructure etc. can do massive economic damage with few casualties.

I don't think we'll ever see a civilized form of war where no real harm is done, because abiding by the rules of such a war is not a game-theoretic equilibrium. If a party builds a real army in addition to the simulated army and uses it, it will win the war.


We have a Geneva Convention, and international norms against nuclear weapons, which are also pretty good deterrents themselves (we hope).

Just like nuclear powers sometimes wage limited conventional wars, it's possible to imagine a set of international laws and norms where disputes would be resolved by virtual conflicts without escalation to armed force.

Advances in robotic war machines could make them so fearsome they deterred real-world armed conflict in favor of virtual conflict. (Of course, that's what they said about the machine gun before WW1)


The Geneva Convention only works if both sides abide by it. Such deterrents only work because we allow them to work. Once one side decides the consequences of going against the norm is acceptable, or can be avoided altogether, then the rules no longer apply.


What makes you think that disabling infrastructural targets will result in fewer casualties?

Power for heating, cooling and hospitals. Transit systems for food distribution. Computers for synchronising all of the above.

How many casualties do you think would result from shutting down infrastructure in NYC in mid winter?

Even evacuation causes casualties. I was reading an assessment of the Fukushima evacuation that suggested fewer casualties would have resulted from just staying there.


Blowing up all transformers in the power grid by inducing a huge power surge surely causes fewer casualties than blowing up all transformers using bombs.


Perhaps?

I suspect the random variation between the two would account for far more difference than the casualties directly attributable to method of destruction, though.

Unless you mean destroying the transformers with blockbuster-size HE, or dirty nuclear devices?

I guess my point is that the number of people who would die from sustained, widespread infrastructure disruption would be huge, and not greatly changed by casualties around the blast site if you decided to blow stuff up literally instead of just metaphorically.


The commitment and potential loss of resources while waging war is actually a valuable input to the system. If all war was virtual, then why not continuously wage war with everyone weaker than yourself all the time? Why would the strongest not virtually-subjugate every other nation? In the real world, resources aren't unlimited and losses are cumulative. So consideration must be made as to when it's worth committing to a war.


Nobody that loses a virtual war is going to give up and let the other side take what they want without a very violent fight.


Assuming the results are pretty accurate about the outcome, why not? You can either surrender now and not have any civilians killed, or suffer the following casualties and still lose.

Heck it might even prevent war if the simulation says that both sides will suffer too high casualties to make it worth it.


The outcome of such simulations isn't repeatable by definition since it's path-dependent on various decisions and risky outcomes, and of course even the information coming out of the simulation affects the decisions and thus outcomes of future runs of the simulation. That's even with perfect information, which already is an unrealistic assumption.

At best, a very accurate simulation could tell you "in the case of war, here is the expected distribution of outcomes, here are the probabilities of various levels of 'winning', here are the estimated levels of civilian casualties."

And quite a few political leaders in such situations have or would have chosen to take e.g. a 5% chance of victory instead of surrender, or simply as a deterring tactic - yes, we know that we will surely lose, but in the process you'll lose many men as well, so we'll bet that the result is not that important to you and you won't pay this price.


And there's no telling whether the enemy will do something batshit crazy that you could have never predicted.

If Herodotus is to be believed, a Persian siege against Babylon was met with the Babylonians strangling all the women except for a few to make bread, so as to stretch fighting resources as long as possible.

During the Sino-Soviet border conflict of the late 1960s, Mao threatened to overwhelm the Russian border with millions of Chinese as part of his "man over weapons" strategy. Even with nuclear weapons, the Soviets were terrified of the sheer number of people that China could throw at them.

How do you predict that? How do you put a number to it?


Even when a military of outrageously greater power wins, people will still fight, even if they know they are fighting to the death and couldn't make a dent in the fighting strength of the superior occupying forces.

https://en.m.wikipedia.org/wiki/Iraq


I'm not sure the Iraqi resistance against the US was in that hopeless a position. They were outgunned conventionally, but that didn't matter in a guerilla war against a foreign occupier.


For a guerilla war to be successful, the foreign occupier has to lose the will to win. Such resistance is hopeless in general terms of warfare; to be won, it has to be considered as much a political war as a conventional one.


> Such resistance is hopeless in general terms of warfare, to be won it has to be considered as much a political war as it is a conventional one.

War is always political.


I agree.

I meant more that a conventional war is more violence than politics. There are various aspects to war, and each type has its own balance of these aspects. I just feel that in a guerilla war politics has as much of a part as violence.


That may have been the case in Iraq, but that was not at all the case in WWII. The Germans by no means lost the will to win in Yugoslavia or Poland.


True, they did not lose the will to win as they were simply defeated by an organized force. I believe the Germans were involved in more than just fighting underground resistance in multiple countries. Such resistance alone did not defeat the Germans, but did assist with the final outcome.


Tito's partisans drove out their occupiers without outside assistance.

Now, Germany was obviously occupied on multiple fronts.

While I can't contest that if a nation of 330 million were to devote its full human and industrial capacity for the purpose of destroying all resistance in a nation of 33 million, it would succeed...

I wouldn't call the richest country in the world failing to convert to a war economy to overcome an enemy ten times smaller than it is a 'lack of will to win.'


Yes, drove them out but did not defeat them in context of the war.

The richest nation in the world should not have to convert to a war economy to defeat an enemy ten times smaller. If it could not; then it didn't want to be there in the first place, it didn't want to do what was necessary to win, or their internal politics is screwed up. Possibly more than one of those.


It's clear now that there was no "Iraqi resistance against the US," or else they would have stopped "resisting" after the US left.

It was clear then too, but it's obvious now that the proto-ISIS "freedom fighter" narrative was completely divorced from reality.


The group that became ISIS wasn't the only group fighting the US occupation. (And the fact that a group has goals beyond resisting the occupation doesn't mean its not resisting the occupation; just like the fact that it had goals beyond opposing Saddam's regime doesn't mean that the same group wasn't opposed to Saddam's regime -- which had contained it far more effectively than the US occupation did.)


Already have this scenario. Nuclear powers are unwilling to go to war since we already know the outcome.


For this to work, every aspect of the simulation would have to match up with real-world circumstances.

...which would mean no secrets.

...or weapons that, when used, obviate many degrees of secrecy, like nuclear weapons.


But it worked on Star Trek!


I think you're missing a big reason why wars are waged - massive cash flows stemming from delivering actual destruction and subsequent rebuilding.

War is a dirty business, but business it is, and what a juicy one. Simpler people hate banks/bankers, yet I haven't heard about any protesters occupying Lockheed, BAE or similar folks and wishing them jail. The last thing these powerful corporations want is to change a game that works so well for them now.


This is a conspiracy theory, as while weapons manufacturers profit from war, they would profit any way governments bought, maintained or upgraded weapons. There is also no evidence that BAE or GE go out of their way to cause wars. Indeed, they would rather sell weapons to all sides. Arms races are certain profit, wars bring uncertainty, and cut off other massive cash flows (like the Iran oil embargo)


Pretty sure massive amounts of German arms manufacturers went out of business after both WWI and WWII. If the country is utterly defeated, it doesn't bode well for shareholders if the company is dismantled as part of peace talks, or if the government backing its payments shrivels up into nothing.


Imagine 50 years with no war, no civil uprisings, no global enemy or power-hungry empire, no warmongering dictator, etc. Who would need the latest war tech in massive numbers?

War is a seed planted for the long-term future. Look at the Middle East - I guarantee you that 100 years from now it will still be a bad place to be. Who will profit from it? All those who arm all sides.

When the last few US wars were declared, for whatever funky official reasons, do you really believe that only the US president and a very few true and just people in government were involved in deciding if, when, how, and what tech would be used, etc.? In this I trust Russia much more - the epicenter of power is pretty clear there.

I don't believe in some Illuminati governing everything, but nor do I believe that people with real power are not corrupted to their rotten core - owing favors left and right, their missteps overlooked and their places more or less secured, etc. You don't get very high in politics (or the army for that matter, which becomes politics from a certain level up) while staying independent and just. Just my gut feeling and common sense. But boy, would I be glad to be wrong!


This conspiracy theory has the same fatal flaw that many others have: If arms companies are conspiring to cause war to secure business then there should be an even greater conspiracy undoing their work because many more businesses are hindered by war.

In the same way you can figure out that conspiracies about free energy and coal/gas-industry suppression of alternative energies are bullshit. Almost everyone else in the world would love to have cheaper energy, so they would not allow the oil or gas companies to suppress alternative energy.

Well that's my logic anyways.


Curiously, that happens already. Simulations are run on battles before they are fought until a winning tactic is found.


Now if only we could get the losing party to accept the outcome of their simulations too and to capitulate.

In practice, the losing party as often as not will try to inflict as much damage on the victor as they can, and more often than not what starts as a battle ends up being a long term occupation and that's when all simulations seem to break down.

Battles are 'easy', long term planning is not.


Actually it's more that the loser has to genuinely believe the result of the outcome. And amongst a population, the demographic of rebellious young adults tends not to accept outcomes like that...but of course that also becomes part of the calculus: can you accept expected losses of occupation forces?


Original Star Trek did it!

In this episode, the crew of the USS Enterprise visits a planet whose people fight a computer-simulated war against a neighboring planet. Although the war is fought via computer simulation, the citizens of each planet have to submit to real executions inside "disintegration booths" to meet the casualty counts of the simulated attacks. The crew of the Enterprise is caught in the middle and are told to submit themselves voluntarily for execution after being "killed" in an "enemy attack".

https://en.wikipedia.org/wiki/A_Taste_of_Armageddon


Which also contains one of the best Spock lines ever: "Sir, there is a multi-legged creature crawling on your shoulder" - neck pinch, boom.


"We lost"

"Well, they're not taking [whatever the goal was] without a real fight!"

It will then be AI/humans vs AI/humans cause one side will have to bear (real) heavy losses...


Wars of the future will be fought by people strapping bombs to themselves, obtaining the most destructive guns they can find and all the ammo that they can carry, and attacking civilian areas. 100 million dollar AI killing machines are mainly ways to funnel state money to the connected.


That's not much different from how things have worked since the beginning of the nuclear era. Both sides compare their strength, and the weaker side often capitulates to avoid a real war. (It was less common in the past, but it always happened to some extent.)

Of course, it does not always work.


You might find this an enjoyable read: https://en.wikipedia.org/wiki/Peace_on_Earth_(novel)



Or you could have the warring nations fight in an Eve Online Alliance Tournament :)


They did only one simulation? Strange to report on the details of a single simulation when more would make sense.

Why not do hundreds of simulations, with different amounts of attacking and defending jets. Sounds like fun, must not be a problem to find pilots who want to do this simulation, it's merely hundreds of hours of gameplay :).

Or was it like, they did hundreds, but this is the only one where the AI won, and it had 4 planes while the humans had only 2?


The Pentagon is betting on human-AI teaming, called 'Centaurs'. The foundational story is this:

Back in the late 1990s, Deep Blue beat the best human chess player, a demonstration of the power of AI.

Around ten years later, a tournament of individual grandmasters and individual AIs was won by ... some amateur chess players teamed with AIs.

AIs aren't good at dealing with novel situations, humans are; they complement each other (and I'll add: unlike most other endeavors, in war the environment (the enemy) is desperately striving to confuse you and do the unexpected. Your self-parking car would have more trouble if someone was trying everything they could think of to stop it, as if their survival was at stake). Also, we strongly prefer humans make life-and-death decisions; hopefully that turns out to be realistic.


Huh, couple that with an aircraft not bound by human limits (no life support, much faster maneuvering with no loss in decision making) and it should be awesome. And terrifying.


There is a terrible sci-fi film called Stealth that explores some of that.


you mean drone?


Or missile. There is a terrific range advantage if you only fly half of the round trip.


Was this Raspberry Pi powered? This story makes that claim: http://www.newsweek.com/artificial-intelligence-raspberry-pi...

If that is true, it puts this achievement in a totally different class.


Well, why not? The computers they have in those extreme situations are not the newest Intel Xeons. Battle-tested and reliable computers are years behind their more modern desktop counterparts.


I imagine an AI pilot always has a path to victory since it isn't subject to redout/blackout and can thus pull crazier maneuvers than its human counterparts.


This is a very interesting thought. So much effort goes into the Human Factors of fighter planes (they started the field after all), that it'd be super interesting to see what a "fighter drone" looks like.


You've surely seen this? https://en.wikipedia.org/wiki/Boeing_X-45

Very stealthy (no bucket/glass where the pilot has to sit). Huge amounts of lift available for high G maneuvers. Infinite air-time given a re-supply tanker is in the area.


https://en.wikipedia.org/wiki/Surface-to-air_missile

I know the train of thought. Once I spent an hour thinking of "design dirt" that sticks to machine parts and protects them from real dirt and oxidation. Then it occurred to me that it's called "paint".


The Alpha paper, "Genetic Fuzzy based Artificial Intelligence for Unmanned Combat Aerial Vehicle Control in Simulated Air Combat Missions" is open access and available online:

http://www.omicsgroup.org/journals/genetic-fuzzy-based-artif...


What form of combat was this? It sounds as if they were dogfighting, something that is more myth than reality these days. Fighters fight but they don't engage on the equal terms, the duel we see in films. What were the BVR conditions? Was this a missile fight or with cannons?

The concept of two flights approaching each other, seeing each other, and not engaging until they are in dogfighting range is silly. To get two modern fighters close enough for a proper turning fight at least one side will have to be taken by surprise. Otherwise, the long-range missile fight will either decide the matter or place one side in such a poor position that they will withdraw. (Either they are down or will have so reduced their energy that a turning fight isn't an option.)


In every consideration, an AI pilot has all the advantages in physical combat: no G-force limit, precise maneuvers, instant reactions, full-time awareness. The only question is, will the rules of war allow an AI to kill a human? Or how can a human decision be inserted into the loop?


If a hypothetical drone pulled 9+ Gs in extreme maneuvers routinely, the USAF would be buying a new drone pretty quick. Even without a weapons load, the service life would be as short as a few years.

This is precisely what happened when the US Navy F-16Ns were used exclusively by TOPGUN, and had a short service life before the airframe developed cracks and went to the boneyard. These jets were flown harder than the USAF flew them in operational service.

Also, the aerodynamic control surface effectiveness on fighter jets does not allow rapid reversals at some magic level significantly beyond what a human pilot can handle. An old A-4 Skyhawk can roll beyond 360 degrees a second with a human pilot.

Another issue is that thrust-vectored aircraft like the F-22 Raptor will rapidly lose airspeed at a relatively low sustained turn rate. This turn rate is way less than the 9G maximum of the aircraft. Additionally, an F-22 (91-4003) was effectively written off after a -11G overstress mishap. The pilot was okay, but the jet never flew again. To fly a fighter in an expanded G envelope would require additional weight, negating some of the benefits of unmanned fighters.


Really, you can't design a machine that handles more than 10 Gs?

The designers are targeting a certain G-limit assumed for the pilot, and then make the plane as light as possible.

Perhaps we will see hybrids in the intermediate term: humans providing guidance to the unmanned craft (with latency and jammability) and the craft autonomously doing the low-latency fighting. I'm not sure this is "worse" if all the planes are unmanned. Granted, any fighting is bad.

This also has big implications for design -- reliability requirements and disposability change; one-way missions become possible.

Re: the comments that once the virtual fight ends the humans start fighting -- I don't think so. They won't be as good, else the AI version wouldn't be in service, so that would be pointless.


The jet still has to carry the weapons, engine, fuel etc. and if it was designed for 16Gs operationally, the airframe structural strength would add to the weight significantly, especially if they expected a reasonable service life.

If the plan was for smaller, miniature, one-way drones, then something like an ADM-160 MALD or AGM-154 JSOW would suffice, but it wouldn't be a fighter-sized drone.

https://en.wikipedia.org/wiki/ADM-160_MALD

https://en.wikipedia.org/wiki/AGM-154_Joint_Standoff_Weapon


We already let robotic missiles kill humans, why not robotic planes? It's important to keep humans in the loop somewhere but they can be back on the AWACS overseeing the squadron.


I would think the AI's instructions are to neutralize the enemy, not to kill any humans. It will not be shooting at the pilot if he bails out.


MAD is the future. And righteousness is the enemy. Don't mess with us. Don't mess with them.

Also, do the world a favor, and don't innovate new weapons. They leave an indelible effect on the collective mind.


The problem is the undeniable fact that much innovation comes out of human conflict. Even our basic laws of thermodynamics came out of experiments in cannon-boring.


> Because a simulated fighter jet produces so much data for interpretation, it is not always obvious which manoeuvre is most advantageous or, indeed, at what point a weapon should be fired.

This is changing very rapidly with hardware-accelerated RNN chips being researched by Google and Facebook.

I wonder about communication though. All the enemy fighter needs to do is jam any signals used by the jets to communicate. I wonder if they could rely on laser/line-of-sight communication instead of RF.


They made a movie about this in 2005 (Stealth); looks like it's only taken 10 years for the first half of the plot to unfold.

Now we just need the AI to go rogue and target its master ;)


Sort of like these:

"Dark Star - Bomb Philosophy- https://www.youtube.com/watch?v=29pPZQ77cmI

and the conclusion, "Dark Star - let there be light": https://www.youtube.com/watch?v=I9-Niv2Xh7w


But we seem to be missing the planes that can fly halfway around the world in less than half an hour on less than one tank of gas.


ONE simulation? This is hardly news. It'd be more interesting if they did hundreds or thousands of simulations. One data point means nothing statistically.


I'd like to know how this system compares to TacAir-Soar: http://ai.eecs.umich.edu/people/laird/papers/AIMag99.html


I've been losing to the AI fighter pilots in DCS:World[0] for years.

[0]: https://en.wikipedia.org/wiki/Digital_Combat_Simulator


That's interesting, but it had a 2:1 numerical advantage too, which does matter.


It does matter, but in the future AIs could fly different fighter jets from the ones human pilots fly: smaller, more maneuverable, with a sizable technological advantage. If they are comparable today, then fighter jets limited by the presence of human pilots will be no match for them. Ground-to-air missiles will become more important for countries with no military AI capabilities.


That makes sense, given that human pilots can only take so many G's.


For an anime depiction of a human pilot reaching the limits of G forces while attempting to overcome an AI pilot unrestricted by G forces, search YouTube for "YF-21 vs X-9" or "Guld's Death". I've also posted the link in a comment above.


Fighter jets feel like something that could be effectively tackled using genetic algorithms. Algorithms that get shot down are weeded out. Algorithms that shoot down enemies are promoted. Yeah?
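
Roughly, yes -- something like the select-and-breed loop below. A purely illustrative Python sketch (the genome encoding, the fitness function, and the simulate_engagement placeholder are all made up here for illustration, not taken from the Alpha paper):

    # Toy genetic-algorithm loop: policies that get shot down score low and
    # are weeded out; policies that score kills breed the next generation.
    import random

    GENOME_LEN = 32      # e.g. gains/thresholds of some control policy
    POP_SIZE = 50
    GENERATIONS = 100
    MUTATION_RATE = 0.1

    def simulate_engagement(policy):
        # Placeholder: a real version would fly the policy in a combat sim
        # and report (enemies shot down, whether the policy survived).
        return random.randint(0, 2), random.random() > 0.5

    def fitness(policy):
        kills, survived = simulate_engagement(policy)
        return kills + (1.0 if survived else -1.0)

    def mutate(policy):
        return [g + random.gauss(0, 0.1) if random.random() < MUTATION_RATE else g
                for g in policy]

    def crossover(a, b):
        cut = random.randrange(1, GENOME_LEN)
        return a[:cut] + b[cut:]

    population = [[random.uniform(-1, 1) for _ in range(GENOME_LEN)]
                  for _ in range(POP_SIZE)]

    for _ in range(GENERATIONS):
        ranked = sorted(population, key=fitness, reverse=True)
        parents = ranked[:POP_SIZE // 2]   # survivors of this round breed
        offspring = [mutate(crossover(random.choice(parents), random.choice(parents)))
                     for _ in range(POP_SIZE - len(parents))]
        population = parents + offspring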


Yes, but only if your simulations are very close to the real world. Usually that means they are very slow.

I feel like swarms of cheap, fast, unmanned aircraft that communicate efficiently and orchestrate their behavior with AI will prevail in the end.


I think the latter is what you want to select for, otherwise you might end up with jets that are too risk averse. Mass manufactured, disposable machines would work better when you don't have to worry about pilot training.


For many years John Boyd and the "Fighter Mafia" helped to plan, build, test, and then manufacture fighters that had optimal "performance envelopes" that enabled them to maintain dominance in the sky. Perhaps this concept means that the new "performance envelope" is going to be one of software. This argument is fleshed out here: http://warontherocks.com/2016/02/imagine-the-starling-peak-f...


I imagine that in real-life conditions adversaries would then focus on attacking the sensors?

Are there sensors that are immune to scrambling and bad data?


There aren't, but this applies to human sensors as well. The only human sensor that's any use in a dogfight is optical, and you can make optical sensors that perform a lot better than the human eye with regard to magnification, etc.


My first thought was of that little bastard UFO in Asteroids. Its pew-pew gun would never miss me.


I find myself imagining a world where the weapons trade is replaced with bootlegged AI software trade.


News at 11. One robot pilot beats another robot pilot.

"The AI, known as Alpha, used four virtual jets to successfully defend a coastline against two attacking aircraft - and did not suffer any losses."

"Alpha, which was developed by a US team, also triumphed in simulation against a retired human fighter pilot."

Key words here are "also" and "simulation" and "retired".

Clickbait much?


In the clip below, one of mankind's last manned aircraft pilots--flying his fighter with a mind interface--attempts to destroy his AI-controlled fighter replacement:

https://www.youtube.com/watch?v=5hJepWBUqZk#t=0m20s

Perhaps honor can't be programmed.


Why is this news? I lose to AI games all the time...


Does it go without saying that actually running a simulation is super easy? At times I feel locked in by my operating system, so I wonder how these guys did it.


It can be deadly, but if it's predictable it can be controlled. For example, a gator is deadly but can be manipulated because of its predictability.


Ender's Game!


AIs beat humans in simulated combat continually. It's called 'losing a life in a video game'.



