The Brigade uses social media such as Twitter and Facebook to influence populations and behaviour.[11][12] David Miller, then a professor of political sociology at the University of Bristol studying British government propaganda and public relations, said that it is "involved in manipulation of the media including using fake online profiles".
Social media is increasingly a battleground for bots. Real humans ironically become a voluntary force in this, as the bots only need to propagate an idea so far before people believe it enough to make it part of their identity.
One idea is a social media service built on strict identity checks, but I'm not sure people actually care that much whether they're talking to a bot or not.
I am skeptical of nearly all gen AI applications, including this one. However, if humans can hallucinate weapons of mass destruction in Iraq, I guess we are no worse off than we are with the status quo.
Curious if there is any direct evidence that someone in the govt knew that Iraq didn't have WMD and said the opposite. Being mistaken is not the same as lying.
Summarizing large bodies of text is one of the most proven applications of generative AI. Super useful if you're an analyst trawling through terabytes of private messages trying to find the ones who intend harm to the state.
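The shape of that pipeline is easy to sketch even without the LLM itself. Here's a toy triage stage (all names and data are hypothetical stand-ins) that shortlists messages against analyst-supplied terms before anything would ever reach a summarizer or a human:

```python
# Toy triage stage: score messages against analyst-supplied watch terms
# and shortlist the top hits for downstream summarization / human review.
# Purely illustrative; a real system would use an LLM or embeddings here.

def triage(messages, watch_terms, top_k=2):
    """Return up to top_k messages ranked by how many watch terms they contain."""
    def score(msg):
        text = msg.lower()
        return sum(1 for term in watch_terms if term in text)
    ranked = sorted(messages, key=score, reverse=True)
    return [m for m in ranked[:top_k] if score(m) > 0]

msgs = [
    "meet at the cafe at noon",
    "the shipment arrives friday, keep the package hidden",
    "happy birthday grandma!",
]
# Only the message matching the watch terms survives the shortlist.
print(triage(msgs, watch_terms=["shipment", "package", "hidden"]))
```

The point of the sketch is the funnel: cheap filtering first, expensive summarization second, which is exactly why the failure mode in the next comment (trusting the filter's output blindly) matters so much.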
The trouble begins when officers just believe an AI and fail to follow up with a good-faith investigation; this has happened a lot with face-ID tech.
The US government agencies had direction from on high to push the view that Iraq was importing yellowcake from Niger to make nuclear weapons.
Imagine how good AI will be at pushing this. Any dissent online can now be drowned out with bots. Those who chase social approval online will naturally shift their view the way the bots encourage them to with upvotes/downvotes. If Iraq happened again today we’d struggle to get a word of dissent past the online message boards.
Yup, you can visibly see this with the Russia “sympathizers” wrt Ukraine.
Your traditional racist uncle types and old people worried about nonsense are all-in. It’s bizarre to see right wingers suddenly worried about killing Russians… talk about reversal.
End of the day, social media has been weaponized and the intelligence services need to be there. Eventually there’ll be heavy regulation, but that day is far away.
I’d ask why, but I can already guess at the plausible technical answer (AI probably helps process/sift through lots of signals, and we love SIGINT in the US) and a political answer (lucrative contracts). Sure hope the longstanding conversations about our underinvestment in HUMINT don’t take on a new dimension.
The political answer is probably the more relevant one. Someone in power had a friend with an AI company, and the revolving door is real. So they hire the AI company, push it through to show some results they can shout from the rooftop, then walk out of government into the AI company as director of government relations with a good six-figure salary.
Also, a bonus of national security is that the public can't really fact-check. Likely all the data it's analyzing is classified.
Propaganda is a key component of what intelligence agencies do, and it would surely be amazing for spamming one-sided views to various forums. I suspect it’d be good for upvoting/downvoting en masse too. Have the AI judge whether a comment is in line with the desired message and upvote/downvote accordingly. People chasing social approval will naturally turn the way you want them.
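The vote-bot idea above is trivially cheap to mock up. Here's a toy version (the scoring, threshold, and example strings are all made-up stand-ins; a real bot would presumably ask an LLM for the judgment rather than compare word counts):

```python
# Toy vote bot: score each comment's word overlap with a desired "message"
# and vote accordingly. Bag-of-words cosine similarity stands in for an
# LLM's "is this comment on-message?" judgment. Illustrative only.
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    common = set(a) & set(b)
    dot = sum(a[w] * b[w] for w in common)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def vote(comment, message, threshold=0.5):
    """Upvote comments that resemble the desired message, downvote the rest."""
    sim = cosine(Counter(comment.lower().split()),
                 Counter(message.lower().split()))
    return "upvote" if sim >= threshold else "downvote"

message = "the new policy is good for everyone"
print(vote("i think the new policy is good", message))  # prints "upvote"
print(vote("this policy is a disaster", message))       # prints "downvote"
```

Run at scale across a thread, even a crude scorer like this would tilt the visible consensus, which is the whole mechanism the comment describes.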
Given that Generative AI can now read brain scans [1] and this, I wonder how far away we are from "you thought negatively about something, the authorities are on their way".
Well we’re not infinitely far away from it, which is why we need to build political and legal systems that can respect human dignity even in the presence of such technologies.
The US intelligence agencies have dedicated data centers for domestic and foreign surveillance [1]. Of course they are "embracing" it. Plus, people are very willing to "train" them by feeding them data. Private companies are working with social media companies to get their hands on the raw data in preparation for training on much, much larger data sets.
We are very very slowly inching towards the science fiction portrayed in "1984".
Supposedly much traffic is also being warehoused, including encrypted data, so that when decryption becomes easier it can all be read.
Which means we might have to contend with a future where LLMs think about every email written, message sent, word spoken with an active mic nearby, etc. Unless civil protections are put in place to prevent this, each person is likely to have a profile built with an excruciating level of detail on their entire life.
But it's not just intelligence agencies. This article talks about open-source data - even corporations building advertising profiles have access to immense data freely available online or available to purchase.
I think a lot about the light-cone of information that spreads outward from every move I make. It seems that every action online, especially communication, results in a permanent footprint that ripples out forever, available to be studied by some future information gathering thing.
I try not to let this thought chill my behavior too much; I don't like that sort of preemptive surrender. But it's part of the thought framework for why civil liberties like freedom of speech and privacy are so important.
Not mentioned in the article, but one major provider for this is AskSage.ai, which has built up a lot of features around creating government documents, proposals, RFPs, etc.
Given that the US intelligence community also embraced ESP (really: https://en.wikipedia.org/wiki/Stargate_Project), I’m not sure this is a _particularly_ great endorsement, as they go.
I'm not sure I agree. Searching and summarizing are excellent uses of generative AI. It's very good at understanding meaning and intent, making it especially useful for detecting actors plotting against the state.
That said, even such defensive uses do require a loss of civil liberties so that the generative AI can read our data and infer intent. Like all warfare, AI warfare will take a toll on humanity.
> “We were captured by the generative AI zeitgeist just like the entire world was a couple of years back,” said Lakshmi Raman, the CIA’s director of Artificial Intelligence Innovation
The intelligence agency was... "captured" by the "zeitgeist." That's an exceptionally unsettling view into their internal marketing process.
> “We also acknowledge that there’s an immense amount of technical potential that we still have to kind of get our arms around, making sure that we’re looking past the hype and understanding what’s happening, and how we can bring this into our networks,”
I love how when AI is applied to any other field of labor, people immediately imagine shrinking the labor pool as a result, yet, amazingly, that does not apply to the CIA.
The rest of the article is a fanciful story for the taxpayers, to convince them that because of AI the CIA somehow needs _more_ resources and people, and that cloud computing providers are well justified in building "air gapped" AI solutions.
Exactly what will improve conditions: the “intelligence” services (we really need a more accurate term) using AI that lies and hallucinates to please. I guess it means we can shrink the murder clown show called the intelligence services, since the computers can just do what they’ve been doing since always.
“ChatGPT: show me a plan of sequential lies I can use to start a war and murder millions of people” … “Sure thing, Minister of Love goon. I see you’ve already gotten a head start instigating thermonuclear warfare. Well done! To ensure total annihilation, I recommend …”
No surprises here. Their foreign policy objectives are predicated on establishing popular support for an ongoing genocide. They can’t control TikTok so they ban it, and for everything else there are bots powered by gen AI roaming the threads to muddy the waters with garbage replies.
Alternative explanations exist: the subject is actually controversial among people, and your line of reasoning here supposes that all people are on the same page WRT the conflict, when in reality they aren't.
You don’t think a majority of the world’s population are against the very regular stream of victims of the genocide?
I don’t dispute that the majority of Britain, the US, and other western nations are probably indifferent to what’s happening in Gaza and therefore likely to support the government position (more bombs for Israel and no mercy for Gazans). But what about Africa? What about Asia? What about the Middle East? What about South America? What portion of those populations are sympathetic to a European colonization project brutalizing the indigenous population?
A lot of the language you're using around the subject reeks of being "in" on a specific worldview, and I would argue that most people are either unaware or have various levels of agreement & disagreement with it.
I’m not so sure I share your cynical view. I can see how in the west most people might be unaware - and I don’t think I can blame them - but in the east people are very aware. As for any level of agreement with the genocide — that’s just crazy. I’ve never come across a single person that supported what’s happening to the Gazans.
You do know that current estimates have between 6-7% of Gaza's population either dead or as good as dead even if a ceasefire went into effect today? Who would have any level of agreement with that?
https://en.wikipedia.org/wiki/77th_Brigade_(United_Kingdom)#...
Generative AI is even cheaper.