Why? There are many other types of AI and statistical methods that are easier, faster, and cheaper to use, not to mention better suited and far more accurate. Militaries have been employing statisticians since WWII to pick targets (and for all kinds of other things); this is just current-thing x2, so it's being used to whip people into a frenzy.
Make defensive comments in response to LLM skepticism all you want; there are still precisely zero (0) reasons to believe they'll make a quantum leap toward human-level reasoning any time soon.
The fact that they’re much better than any previous tech is irrelevant when they’re still so obviously far from competent in so many important ways.
To let your technological optimism convince you that this very simple and very big challenge is somehow trivial, and that progress will inevitably continue apace, is to engage in the very drollest form of kidding yourself.
Pre-space travel, you could've climbed the tallest mountain on Earth and truthfully claimed to be closer to the moon than any previous human, but that doesn't change the fact that the best way to actually get to the moon is to climb down from the mountain and start building a rocket.
That seems like something a special-purpose model would be a lot better and faster at. Why use something that needs text as input and output? It would be slow and unreliable. If you need reaction-time-dependent decisions, like collision avoidance or evasion, you can literally hard-wire those in circuits that are faster than any other option.
Yo, this wouldn't make flying decisions; it would evaluate battlefield situations for meta-decisions like acceptable losses, etc. The rest would of course be too slow.