US intelligence community is embracing generative AI (nextgov.com)
51 points by belter on July 7, 2024 | 53 comments


Not surprising. We know from a rare moment of truth that the U.K.'s 77th Brigade is using social media to influence people:

https://en.wikipedia.org/wiki/77th_Brigade_(United_Kingdom)#...

The Brigade uses social media such as Twitter and Facebook to influence populations and behaviour.[11][12] David Miller, then a professor of political sociology at the University of Bristol studying British government propaganda and public relations, said that it is "involved in manipulation of the media including using fake online profiles".

Generative AI is even cheaper.


Social media is increasingly a battleground for bots. Real humans ironically become a voluntary force in this, as the bots only need to propagate an idea so far before people believe it enough to make it part of their identity.

One idea is a social media service built on strict identity checks, but I'm not sure people actually care that much whether they're talking to a bot or not.


I'm glad we do a bit of that rather than just letting the Russians dominate there.


I am skeptical of nearly all gen AI applications, including this one. However, if humans can hallucinate weapons of mass destruction in Iraq, I guess we are no worse off than we are with the status quo.


>if humans can hallucinate weapons of mass destruction in Iraq I guess we are no worse off

Generous way of saying "outright fucking lied about...".


In 2024, it is less controversial to accuse the US government of lying than it is to accuse generative AI models of the same.


Curious if there is any direct evidence that someone in the govt knew that Iraq didn't have WMD and said the opposite. Being mistaken is not the same as lying.


They took care of it:

https://www.theguardian.com/politics/2013/jul/16/david-kelly...

Oh, and this whole thread was on HN's front page just a moment ago and now it disappeared. lol


Summarizing large bodies of text is one of the most proven applications of generative AI. Super useful if you're an analyst trawling through terabytes of private messages trying to find the ones who intend harm to the state.

The trouble begins when officers just believe an AI and fail to follow up with a good-faith investigation; this has happened a lot with face-ID tech.
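
As a rough illustration, a summarization pass over a pile of documents is only a few lines. This is a hypothetical sketch using the OpenAI Python client; the model name, prompt, filenames, and naive truncation are my own assumptions, not anything from the article:

  # Hypothetical sketch: batch-summarize documents with an LLM.
  # Model name, prompt, and naive truncation are illustrative assumptions.
  from openai import OpenAI

  client = OpenAI()  # reads OPENAI_API_KEY from the environment

  def summarize(text, max_chars=8000):
      # Truncate naively; a real pipeline would chunk and merge summaries.
      resp = client.chat.completions.create(
          model="gpt-4o-mini",
          messages=[
              {"role": "system",
               "content": "Summarize the document in three bullet points."},
              {"role": "user", "content": text[:max_chars]},
          ],
      )
      return resp.choices[0].message.content

  for path in ["report_1.txt", "report_2.txt"]:
      with open(path) as f:
          print(path, "->", summarize(f.read()))

The point stands either way: a summary like this is a lead, not a finding, and still needs a human to verify it.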


>who intend harm to the state

Who knows if that will be the only target when it becomes easy to keep tabs on most citizens, all in the name of safety.


The US government agencies had direction from on high to push the view that Iraq was importing yellowcake from Nigeria to make nuclear weapons.

Imagine how good AI will be at pushing this. Any dissent online can now be drowned out with bots. Those who chase social approval online will naturally shift their views the way the bots encourage them to with upvotes/downvotes. If Iraq happened again today we’d struggle to get a word of dissent past the online message boards.


> If Iraq happened again today we’d struggle to get a word of dissent past the online message boards.

Sobering thought. Well brought up ...


Just like cable news back then.

We all just got too used to thinking there was a real person on the other end of social media.


Niger, not Nigeria.


Yup, you can visibly see this with the Russia “sympathizers” wrt Ukraine.

Your traditional racist uncle types and old people worried about nonsense are all-in. It’s bizarre to see right wingers suddenly worried about killing Russians… talk about reversal.

End of the day, social media has been weaponized and the intelligence services need to be there. Eventually, there’ll be heavy regulation, but that day is far away.


  "Zoom!"
  "Enhance!"
  [and now] "Generate!"
"There! I told you those were missiles! Bombing at 11!" :)


Gotta preserve those precious bodily fluids


I’d ask why but I can already guess at the plausible technical answer (ai probably helps process/sift through lots of signals and we love sigint in the US) and a political answer (lucrative contracts). Sure hope the longstanding conversations about our underinvestment in humint don’t take on a new dimension.


The political answer is probably the more relevant one. Someone in power had a friend who had an AI company, and the revolving door is real. So they hire the AI company, push it through to show some results they can shout from the rooftop, then walk out of government into the AI company as director of government relations with a good six-figure salary.

Also, a bonus of national security is that the public can't really fact-check. Likely all the data it's analyzing is classified.


Propaganda is a key component of what intelligence agencies do, and it would surely be amazing for spamming one-sided views to various forums. I suspect it’d be good for upvoting/downvoting en masse too. Have the AI judge whether a comment is in line with the desired message and upvote/downvote. People chasing social approval will naturally turn the way you want them.


Yeah, this is probably already widely in use by various intelligence agencies around the world.


Given that generative AI can now read brain scans [1], and now this, I wonder how far away we are from "you thought negatively about something; the authorities are on their way".

[1] -- https://www.biorxiv.org/content/10.1101/2022.11.18.517004v3


Well we’re not infinitely far away from it, which is why we need to build political and legal systems that can respect human dignity even in the presence of such technologies.

Be sure to vote :)


Who is going to build these? The same people that are building systems expressly to avoid accountability?


The EU will want to scan your brain...for the children....


The US intelligence agencies have dedicated data centers for domestic and foreign surveillance [1]. Of course they are "embracing" it. Plus, people are very willing to "train" them by feeding them data. Private companies are working with social media companies to get their hands on the raw data in preparation for training on much, much larger data sets.

We are very very slowly inching towards the science fiction portrayed in "1984".

[1] https://en.wikipedia.org/wiki/Utah_Data_Center


Supposedly much traffic is also being warehoused, including encrypted data, so that when decryption becomes easier it can all be read.

Which means we might have to contend with a future where LLMs think about every email written, message sent, word spoken with an active mic nearby, etc. Unless civil protections are put in place to prevent this, each person is likely to have a profile built with an excruciating level of detail on their entire life.

But it's not just intelligence agencies. This article talks about open-source data - even corporations building advertising profiles have access to immense data freely available online or available to purchase.


Related: Lakshmi Raman’s address at the AWS keynote. Starts around 12:40.

https://m.youtube.com/watch?v=D4IdZFmFMIs


Could be useful for the psyops divisions like: https://www.goarmy.com/careers-and-jobs/specialty-careers/sp...


I think a lot about the light-cone of information that spreads outward from every move I make. It seems that every action online, especially communications, results in a permanent footprint that ripples out forever, available to be studied by some future information-gathering thing.

I try not to let this thought chill my behavior too much; I don't like that sort of preemptive surrender. But it's part of the thought framework for why civil liberties like freedom of speech and privacy are so important.


Not mentioned in the article, but one major provider for this is AskSage.ai, which has built up a lot of features around creating government documents, proposals, RFPs, etc.


Given that the US intelligence community also embraced ESP (really: https://en.wikipedia.org/wiki/Stargate_Project), I’m not sure this is a _particularly_ great endorsement, as they go.


Instead of making up most of their reports by hand, they'll have ChatGPT make up most of their reports? :)


Unfortunately it is somewhat asymmetric in that it is better at attacking.

GenAI is much better at flooding social media with misinformation than it is at:

> search and discovery assistance, writing assistance, ideation, brainstorming

It's the same asymmetry we see with tools to detect AI writing - they're just not very good, while bulk generation of it is.

Wild times ahead.


I'm not sure I agree. Searching and summarizing are excellent uses of generative AI. It's very good at understanding meaning and intent, making it especially useful for detecting actors plotting against the state.

That said, even such defensive uses do require a loss of civil liberties so that the generative AI can read our data and infer intent. Like all warfare, AI warfare will take a toll on humanity.
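
To make that concrete, zero-shot intent or relevance detection is basically a prompt. Everything below (the model name, the prompt, the YES/NO convention, the example message) is a hypothetical sketch of the general technique, not anything an agency has described:

  # Hypothetical sketch: zero-shot relevance flagging with an LLM.
  # It flags items for human review rather than acting on them automatically;
  # the model name, prompt, and YES/NO convention are assumptions.
  from openai import OpenAI

  client = OpenAI()

  def flag_for_review(message, topic):
      resp = client.chat.completions.create(
          model="gpt-4o-mini",
          messages=[
              {"role": "system",
               "content": f"Answer only YES or NO: does this message discuss {topic}?"},
              {"role": "user", "content": message},
          ],
      )
      return resp.choices[0].message.content.strip().upper().startswith("YES")

  if flag_for_review("Meet me at the docks at midnight.", "planning a meeting"):
      print("Route to a human analyst for follow-up.")

Which is exactly why the civil-liberties question matters: a one-line prompt is all it takes to point something like this at everyone's messages.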


> search and discovery assistance, writing assistance, ideation, brainstorming

These are some of the things that gen AI is really good at. "Flooding social media with misinformation" is an application of these things.


AWS Washington DC Summit 2024: https://youtu.be/D4IdZFmFMIs?t=763


Link gives an error: Error 403 Forbidden Forbidden

Error 54113 Details: cache-iad-kiad7000088-IAD 1720547017 3290452697

Varnish cache server


Ty three-letter agencies. This is why my stealth startup to patriot-wash my clients' internet presence is raking it in rn


> “We were captured by the generative AI zeitgeist just like the entire world was a couple of years back,” Lakshmi Raman, the CIA’s director of Artificial Intelligence Innovation

The intelligence agency was... "captured" by the "zeitgeist." That's an exceptionally unsettling view into their internal marketing process.

> “We also acknowledge that there’s an immense amount of technical potential that we still have to kind of get our arms around, making sure that we’re looking past the hype and understanding what’s happening, and how we can bring this into our networks,”

I love how when AI is applied to any other field of labor, people immediately imagine shrinking the labor pool as a result, yet, amazingly, that does not apply to the CIA.

The rest of the article is a fanciful story for the taxpayers to convince them that, because of AI, the CIA somehow needs _more_ resources and people, and cloud computing providers are well justified in building "air gapped" AI solutions.

Disappointing on several levels.


Didn’t OpenAI add an NSA chief to its board?


Exactly what will improve conditions: the “intelligence” (we really need a more accurate term) services using, as intelligence, AI that lies and hallucinates to please. I guess it means we can shrink the murder clown show called the intelligence services, since the computers can just do what they’ve been doing since always.

“ChatGPT: show me a plan of sequential lies I can use to start a war and murder millions of people” … “Sure thing, Minister of Love goon. I see you’ve already gotten a head start instigating thermonuclear warfare. Well done! To ensure total annihilation, I recommend …”


What will be interesting to me is, if generative AI becomes so good it’s hard to tell it’s fake, will all photos just be considered fake until proven otherwise?

I mean, at this point in time, if someone released your porn tape you could just say it’s a deepfake.


God help us...


Why?


Brazil, where hearts were entertaining June

We stood beneath an amber moon

And softly murmured "Someday soon"

We kissed and clung together

...

Recalling thrills of our love

There's one thing I'm certain of

Return, I will, to old Brazil


At Opsec HQ:

Lieutenant: Captain! Captain! You've got to see this!

Captain: What is it?

Lieutenant: We just uncovered an entirely new ICBM design being deployed right now by the Kremlin!

Captain: Really, that's incredible!

Lieutenant: Well, not quite. The fins are attached horizontally, the warhead is placed inside the booster, and the fuselage is an Asian woman's face.


Spot on :)

PS: Also, curiously, the fins have six fingers ...


No surprises here. Their foreign policy objectives are predicated on establishing popular support for an ongoing genocide. They can’t control TikTok, so they ban it, and for everything else there are bots powered by gen AI roaming the threads to muddy the waters with garbage replies.


Alternative explanations exist: the subject is actually controversial among people, and your line of reasoning here supposes that all people are on the same page WRT the conflict, when in reality they aren't.


You don’t think a majority of the world’s population are against the very regular stream of victims of the genocide?

I don’t dispute that the majority of Britain, the US, and other western nations are probably indifferent to what’s happening in Gaza and therefore likely to support the government position (more bombs for Israel and no mercy for Gazans). But what about Africa? What about Asia? What about the Middle East? What about South America? What portion of those populations are sympathetic to a European colonization project brutalizing the indigenous population?


A lot of the language you're using around the subject reeks of being "in" on a specific worldview, and I would argue that most people are either unaware or have various levels of agreement & disagreement with it.


I’m not so sure I share your cynical view. I can see how in the west most people might be unaware, and I don’t think I can blame them, but in the east people are very aware. As for any level of agreement with the genocide: that’s just crazy. I’ve never come across a single person who supported what’s happening to the Gazans.

You do know that current estimates have it that between 6 and 7% of Gaza’s population is either dead or as good as dead even if a ceasefire went into effect today? Who would have any level of agreement with that?



