This is just doomerism. Even though this model is slightly better than the previous one, using an LLM for high-risk tasks like healthcare or picking targets in military operations still feels very far away. I work in healthcare tech in a European country, and yes, we use AI for image recognition on x-rays, retinas, etc., but these are fundamentally different models from an LLM.
Using LLMs for picking military targets is just absurd. In the future, someone might use some other variation of AI for this, but LLMs are not very effective at it.
LLMs will of course also be used, due to their convenience and superficial 'intelligence', and because of the layer of deniability that putting a technical substrate between soldier and civilian victim provides, as has happened for two decades with drones.
Why? There are many other types of AI or statistical methods that are easier, faster, and cheaper to use, not to mention better suited and far more accurate. Militaries have been employing statisticians since WWII to pick targets (and for all kinds of other things); this is just current-thing x2, so it's being used to whip people into a frenzy.
Make defensive comments in response to LLM skepticism all you want; there are still precisely zero (0) reasons to believe they'll make a quantum leap towards human-level reasoning any time soon.
The fact that they’re much better than any previous tech is irrelevant when they’re still so obviously far from competent in so many important ways.
To allow your technological optimism to convince you that this very simple and very big challenge is somehow trivial, and that progress will inevitably continue apace, is to engage in the very drollest form of kidding yourself.
Pre-space travel, you could've climbed the tallest mountain on Earth and truthfully claimed that you were closer to the moon than any previous human, but that doesn't change the fact that the best way to actually get to the moon is to climb down from the mountain and start building a rocket.
That seems like something a special-purpose model would be a lot better and faster at. Why use something that needs text as input and output? It would be slow and unreliable. If you need reaction-time-dependent decisions, like collision avoidance or evasion for example, then you can literally hard-wire those in circuits that are faster than any other option.
Yo, this wouldn't make flying decisions; this would evaluate battlefield situations for meta-decisions like acceptable losses, etc. The rest would of course be too slow.
Probably this is due to confusion over what the term "AI" means. If you do some queries on a database and call yourself a "data scientist", and other people who call themselves data scientists do some AI, does that mean you're doing AI? For left-wing journalists who want to undermine the Israelis (the story originally appeared in the Guardian), it'd be easy to hear what you want to hear from your sources and conflate using data with using AI. This is the kind of blurring that happens all the time with apparently technical terms once they leave the tech world, and especially once they enter journalism.
Yeah, but the Guardian explicitly states a lot of things that also turn out not to be true.
Given that the underlying premise of the story is bizarre (is the IDF really so short of manpower that they can't select their own targets?), and given that the sort of people who work at the Guardian openly loathe Israel, it makes more sense that the story is being misreported.
> The underlying premise of the story is bizarre (is the IDF really so short of manpower that they can't select their own targets?)
The premise that the IDF would use some form of automated information processing to help select potential targets, in the year 2023?
There's nothing at all unrealistic about this premise, of course. If anything it's rather bizarre to suggest that it might be.
> The sort of people who work at the Guardian openly loathe Israel
This sounds like you just don't have much to say about the substantive claims of these reports (which began with research by two Israeli publications, +972 and the Local Call, and were then taken further by The Guardian). Or would you say that the former two "openly loathe Israel" also? Along with the Israeli sources that they're quoting?
More likely, the IDF is committing a genocide and finding innovative ways to create a large list of targets that grants them plausible deniability.
I will spend no more than two comments on this issue.
Most people have already made up their minds. There is little I can do about that, but perhaps someone else might see this and think twice.
Personally, I have spent many thousands of hours on this topic. I have Palestinian relatives and have visited the Middle East. I have Arab friends there, both Christian and Muslim, whom I would gladly protect with my life. I am neither Jewish nor Israeli.
There are countless reasons for me to support your side of this issue. However, I have not done so for a simple reason: I strive to remain fiercely objective.
As a final note, in my youth, I held views similar to the ones you propagate. This was for a simple reason—I had not taken the time to understand the complexities of the Middle East. Even now, I cannot claim to fully comprehend them. However, over time, one realizes that while every story has two sides, the context is crucial. The contextual depth required to grasp the regrettable necessity of Israeli actions in their neighborhood can take years or even decades of study to reconcile. I expect to change few minds on this topic. Ultimately, it is up to the voters to decide. There is overwhelming bipartisan support for Israel in one of the world's most divided congresses, and this support stems more from shared values than from arms sales.
I stand by my original comment. As I said, this will be my last on this topic. I hope this exchange proves useful to some.
The total of these two comments makes no objective claims; rather, it says there are nuances and complexities. But in all this complexity, they are sure that Israel is right in its actions. Bipartisan support is based on shared values, supposedly. Not so surprisingly, it even has a
> I have <insert race/group> friends paragraph.
I also work in healthtech, and nearly every vendor we've evaluated in the last 12 months has tacked ChatGPT onto their feature set as an "AI" improvement. Some of the newer startup vendors are entirely prompt engineering with a fancy UI. We've passed on most of these, but not all. And these companies have clients and real-world case studies. It's not just "not very far away"; it is actively here.
>Aviv Kochavi, who served as the head of the IDF until January, has said the target division is “powered by AI capabilities” and includes hundreds of officers and soldiers.
>In an interview published before the war, he said it was “a machine that produces vast amounts of data more effectively than any human, and translates it into targets for attack”.
>According to Kochavi, “once this machine was activated” in Israel’s 11-day war with Hamas in May 2021 it generated 100 targets a day. “To put that into perspective, in the past we would produce 50 targets in Gaza per year. Now, this machine produces 100 targets a single day, with 50% of them being attacked.”
But it does say that some sort of text-processing AI system is being used right now to decide who to kill, so it is quite hard to argue that LLMs specifically could never be used for it.
It is rather implausible to say that an LLM will never be used for this application, because in the current hype environment the only reason the LLM is not deployed to production is that someone actually tried to use it first.
I'm 100% on the side of Israel having the right to defend itself, but as I understand it, they are already using "AI" to pick targets, and they adjust the threshold each day to meet quotas. I have no doubt that some day they'll run somebody's messages through ChatGPT or similar and get the order: kill / do not kill.
I use ChatGPT in particular to narrow down options when I do research, and it is very good at this. It wouldn't be far-fetched to feed it a map and traffic patterns and ask it to do some analysis of "what is the most likely place to hit?", and then take it from there.
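For what it's worth, the "narrow down options" part is mundane plumbing. A minimal sketch of the generic pattern, assuming the OpenAI Python client; the model name, constraints, and candidate list are all invented for illustration:

    # Hypothetical sketch only: model name, prompt, and candidate list are made up;
    # this just shows the generic "rank my options" pattern, nothing more.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    candidates = [
        "Option A: survey-based study",
        "Option B: retrospective chart review",
        "Option C: prospective cohort",
    ]

    prompt = (
        "Given the constraints below, rank these study designs from most to least "
        "feasible, with one sentence of reasoning for each.\n\n"
        "Constraints: 6-month timeline, no budget for new data collection.\n\n"
        + "\n".join(candidates)
    )

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
    )

    print(response.choices[0].message.content)

The plumbing is trivial; the hard question is whether the ranking it hands back can actually be trusted.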
I don't know about European healthcare, but in the US there is this huge mess of unstructured-text EMRs and a lot of hope that LLMs can help 1) make it easier for doctors to enter data and 2) make some sense out of the giant blobs of noisy text.
People are trying to sell this right now. Maybe it won't work and will just create more problems, errors, and work for medical professionals, but when did that ever stop hospital administrators from buying some shiny new technology without asking anyone?
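What gets sold for point 2 usually boils down to a prompt that asks the model to return structured JSON from the note text. A rough sketch, assuming the OpenAI Python client; the note, field names, and model name are invented for illustration:

    # Hypothetical sketch: pull structured fields out of a free-text clinical note.
    # The note, field names, and model name are all invented for illustration.
    import json
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    note = (
        "Pt is a 64yo M w/ HTN and T2DM, presents w/ 3 days of productive cough. "
        "Started on amoxicillin 500mg TID. Follow up in 1 week."
    )

    prompt = (
        "Extract the following from the clinical note as JSON with keys "
        "'conditions', 'medications', and 'follow_up'. Use empty lists or null "
        "for anything not mentioned.\n\nNote:\n" + note
    )

    response = client.chat.completions.create(
        model="gpt-4o-mini",                      # illustrative model name
        response_format={"type": "json_object"},  # ask for JSON-only output
        messages=[{"role": "user", "content": prompt}],
    )

    extracted = json.loads(response.choices[0].message.content)
    print(extracted)

Whether the extracted fields are actually correct is exactly where the extra errors and work for clinicians come in; the demo always looks great on a clean note.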