The US military is funding an effort to catch deepfakes and other AI trickery (technologyreview.com)
154 points by etiam on May 23, 2018 | 108 comments



We're rapidly heading into a world where every form of non-fictional media has to be mediated by a trusted third party because we have no way to distinguish what's true from what's fabricated with our own senses. This is a complete reversal of the verifying power that images, audio, and video once provided to everybody regardless of income or education. Think about some of the potent imagery that came out of wars like WWII and Vietnam, for example.

We're not too far away from the manufacture of literal "fake news" out of whole cloth. I'm not sure this bodes well for the idea of an informed electorate.


We may later view the short period in which A/V recordings were treated as authoritative over first-hand accounts as a brief "glitch" in the history of the world.

The ability to fake such things means that we have to return to trusting our fellow humans and making wise choices about who can be trusted.

Fingerprints and DNA samples will remain, to varying degrees, more difficult to fake, but one must assume that those too will fall as paragons of unassailable guilt (or innocence, but usually guilt).


DNA contamination is already turning out to be very widespread: we shed it everywhere, it lasts, and it can get onto victims' clothing, fingernails, etc. There's also contamination in labs.

I recently read about a case with apparently solid DNA evidence under the victim's fingernails; the supposed perp had no real memory of the crime, but it turned out the police had him in custody at the time it was committed. He'd eaten in the same restaurant days earlier. I can't find the article quickly, but it was recent.

Some others: https://newrepublic.com/article/123177/how-dna-evidence-incr...

https://www.theatlantic.com/magazine/archive/2016/06/a-reaso...

http://journals.plos.org/plosone/article?id=10.1371/journal....


Those should already have fallen. Gross human incompetence and perverse incentives lead to using science as a black box for prosecution.


You really just don't like forensic science, or what?


https://www.wired.com/story/dna-transfer-framed-murder/

DNA is not the be-all and end-all of an investigation.


If the people who prosecute you are the people who report the DNA, who's to say they don't just say that it matched when it really didn't?

Fakable media needs to be decentralized, period.

Edit: not parent commenter, btw, just my personal two cents


There are silver linings to the total fabrication of images and audio. Their veracity and presumed authenticity, owed to the high threshold for forgery, are perhaps only 100 years old and soon to disappear.

One, people will be less swayed by emotionally charged imagery, and that's a good thing. That means less #FOMO and less rage at the perception that everyone is having fun but you. People can feel normal, just being normal again, because of the soon-to-be-widespread understanding that any and all photography, video, or audio can be fabricated.

Two, media is manipulative regardless of who holds the keys to the castle as the most trusted information source. It's not a conflict of fake versus real. It's a conflict of one group of large personalities versus another set of polarizing characters, with external players fanning the flames. That one group can shout down another isn't great, because even if there is One True Narrative, neither side presents it to us; most of the smarter folks among us are already aware of this and implicitly digest all information in that context.


I'm not sure about the less-FOMO bit. People are already widely aware that many fashion magazines photoshop their cover images, but that doesn't stop a huge number of people from feeling terrible about themselves for not looking like that.


I agree, I think the human response to this stuff is instinctive and visceral. I consider myself a highly skeptical and disciplined mind but once in a while I catch myself believing stuff that doesn’t have much rigor behind it. And it takes me a while to catch myself. By then I might have already told a few people some “facts.”

It’s quite a mental load to question every source of data and have one’s guard up. Unless our own wiring evolves I can’t see the general population changing our data processing habits.


>every form of non-fictional media has to be mediated by a trusted third party

This sounds like the kind of "trust problem" I've heard will supposedly be solved by practical blockchain technology. I'm wondering this aloud, since I'm not sure how that could really be accomplished. As someone below suggested, perhaps some means of cryptographically signing "authentic" (ungenerated/unmodified) video.

Come to think of it, have major scandals been sparked by photos that were proven, or likely, to be manipulated since the invention of that technology? Does the public place less stock in the authenticity of photos by default these days?


So I'm showing my age here, but...

Back in the day - a fax wasn't considered a "signed legal document", but it _was_ considered "proof of existence of a signed legal document".

Perhaps the response to this is to not "trust" video or images as standalone legal "evidence", but to allow a person to testify "yes, that's an unmodified depiction of an event I witnessed", with that person then being held liable to legal penalties for perjury if that's disproved.

There's precedent (here in Australia): traffic cameras put a hash of the image and the detected infringement details into the file they use for enforcement action, and as evidence if you elect to have the matter heard in court, at which point they need an "expert witness" from the manufacturer to state that the image cannot have been manipulated, as "proved" by the hash.

There was a hilarious case in Sydney a couple of decades back where a defence lawyer got smart enough advice to point out that the fixed speed camera his client was accused by had a design flaw: the hash was calculated over the photograph _before_ overlaying the date/time/detected speed on it, so it was impossible to verify the hash, since the annotations destroyed the original information... They had to let the alleged offender off, and then scrambled to create a twisted legal interpretation to ensure that all the other drivers fined by the same model of speed camera couldn't dispute their past fines...
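
To make the flaw concrete, here's a minimal sketch (Python, with made-up byte strings standing in for the actual photographs) of why a hash computed before the overlay can never be re-verified from the annotated evidence file:

    import hashlib

    def sha256(data: bytes) -> str:
        return hashlib.sha256(data).hexdigest()

    # What the flawed camera did: hash the raw capture first...
    raw_photo = b"...raw pixels as captured..."              # hypothetical bytes
    stored_hash = sha256(raw_photo)

    # ...then burn the date/time/detected speed into those same pixels.
    annotated_photo = b"...pixels with the overlay burned in..."

    # The evidence file contains only the annotated pixels, so the stored
    # hash can never be re-checked: the pre-overlay bytes are gone.
    assert sha256(annotated_photo) != stored_hash

    # The fix is simply to hash *after* annotating, so verification can be
    # repeated from the evidence file alone.
    correct_hash = sha256(annotated_photo)
    assert sha256(annotated_photo) == correct_hash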


That puts us back to relying on eyewitness testimony. Not progress, IMHO [1].

[1] https://en.wikipedia.org/wiki/Eyewitness_testimony


Eyewitness testimony is a lot more reliable when they've got a video to back it up.


How can we conclude this if we don't know whether the video is accurate or falsified?

Ordinary photo/video evidence can no longer be trusted as either support or impeachment of an eyewitness' report.


Eyewitness testimony by honest witnesses is what's considered unreliable. Video corroboration is what makes it reliable.


I take a third option: viewing the world in terms of first principles. Evaluate events not in terms of what is said, but only by what you know to be undeniably and absolutely true, and then by those principles' logical connection to other events. To take a very controversial example, let's consider Assad gassing his people.

I know for a fact that there was an attack of some sort - this is not in dispute by any party. I also know that Assad had been steadily regaining complete control of his nation - again something that's not really in dispute. I also know that Assad and his forces are capable of highly effective conventional attacks with minimal international issue. And finally I know, and I know that he knows, that chemical weapons would likely spur international outrage and risk dragging outside players (such as the US) into his conflict, which could turn everything upside down. The conclusion I draw is controversial, but makes infinitely more sense than what my trusted third parties tell me I should believe.

And maybe most importantly, I also accept that I could be wrong. There may be evidence or information that would qualify as first principles that I'm not aware of. It's also possible that somebody acted in a completely irrational and self-destructive way for no apparent benefit. And as new information comes to light, I'm happy to change my views. I find that people who rely on third parties are generally not all that well informed on what they're talking about, as pundits tend to focus on pathos over logos. And I think this ignorance is what drives people to double down on their views as a sort of defense mechanism - this is probably playing a major role in our antagonism towards each other on controversial topics.


> And finally I know, and I know that he knows, that chemical weapons would likely spur international outrage and risk dragging outside players (such as the US) into his conflict, which could turn everything upside down.

You're making the same mistake that a lot of economic analysis makes: assuming that Assad (and people in general) will always act in their own best interest, without bias or emotion clouding their judgement. This is the same reasoning that Stalin probably applied to his treaty with Hitler before WWII. Hitler invaded Russia anyway, even though bringing Russia into the war was a huge blunder for Germany.

People don't always make the right - or even rational - decision. Trying to infer the truth based on what a "rational actor" would do is utter nonsense.


People don't always make rational decisions? I'd agree with you, but the times they don't are a minuscule fraction of all times. When we have incomplete information we're left to make probabilistic approximations of how likely each possibility is to be the real explanation for something.

Operation Barbarossa is a great example - I really enjoy WW2 history! You seemingly think this was an arbitrary act of a madman. In reality it was a rational and calculated decision. The Red Army was amassing on Germany's border, and it was later revealed that Red Army generals had already been requesting permission to begin their attack, but Stalin was waiting for a more strategic opportunity. In particular he wanted England and Germany to go to war, and for his army to march over the remains. By contrast, England in the west wanted the exact same thing for the USSR and Germany. America was also looking more and more like it might take a more direct role in the war.

Germany basically had two substantial powers on either side of it, waiting to pounce. It had even offered peace to England, who rejected the offer, in an effort to avoid fighting a war on both sides. And both sides were also massively ramping up their forces. It was during this era that millions went without under Stalin as he dedicated massive and unprecedented resources to a military buildup. And England, as mentioned, was also receiving increasing support from America. In other words, his enemies were getting stronger faster than he was. His invasion was what he saw as the best of many bad options.

When there is an explanation that follows logically and clearly for something, and another explanation that simply relies extensively on illogical events and turning 'bad guys' into caricatures, I tend to go with logic. Could I be wrong? Like I said, absolutely. And I'm fine with this because history shows time and again that logic tends to be far more accurate than a one-sided perspective on events.


As a sort of edit here, to clarify - I am not equating rational with 'good' decisions. But only that decisions are generally driven by a reason. These reasons can be good, bad, flawed, or flawless. But there is a reason that generally can be empathized with, which is not to say agreed with.



Optimistically - relying on "single source" stories is something we're already learning to avoid with other media. There's regular good advice shared about not reacting to a tragedy reported on Twitter if there aren't independent sources of confirmation. We're getting closer to the point of human existence where there are enough people with cameras in their pockets to expect multiple images/videos from different angles of any newsworthy event. Deepfake isn't (probably? yet?) capable of modifying multiple video streams from different angles in a consistent fashion...


The version used on the NSFW subreddit(s) is not capable of that. The jump is not far, though, as it is 'just' a more perspective-aware application of mostly the same technology. IIRC the issue is mostly that regular deepfakes show poor quality when the face is seen from the side, which makes the artifacts rather obvious. You still have to worry about this. There was a season of the TV series 24 (the second half of season 2, which ran in winter 2002/3) that dealt with an audio recording almost causing WW3. Google already has the technology to synthesize voice at a quality that easily passes examination, provided the minimal artifacts can be hidden in (non-white) noise. Considering this is full synthesis, and that we already had real-time face morphing [0] in 2016 (with hardware capable of reaching NTSC quality available for under $1k in a laptop at the time), the point of faking multi-angle video using motion-capture technology and similar, with humans guiding the software and eliminating artifacts, is likely not far off.

[0]: http://gvv.mpi-inf.mpg.de/projects/MZ/Papers/DemoF2F/page.ht...


Good thing we have PGP


I'm surprised so few people are actually discussing this; digital signatures seem like the right way to go for footage like security recordings, or recordings that must be trusted to serve as evidence.

Discussion below[0]

[0] https://news.ycombinator.com/item?id=17136292


This still depends on trust and tamper-proof devices. You don't need the keys if you can alter the input stream.


Indeed, but it ups the ante. It requires that someone creating a spoof expend more work, rather than just training a NN to fabricate a video.

I think everyone knows this, security just makes things harder, and that might be enough.


the true application of blockchains?


Humans have such issues with confirmation bias [1] that I worry it's not going to matter.

The last election already had algorithmically generated artificial news designed to sway people (note: I'm talking FB articles with headlines like "$POLITICIAN just insulted $NICHE_AUDIENCE. STOP THEM." that linked to articles that were scraped/spun from other sources). They relied upon people not actually reading and just hitting like/forward/heart whatever and spreading the top level message.

We are not going to be able to convince ourselves, never mind other people, that video (which has been the gold standard for "truth" for nearly a century) isn't actually real.

Case in point: https://www.youtube.com/watch?v=cQ54GDm1eL0 (which was included in the article) has comments from people deeply confused by it on YT.

1 - https://www.newyorker.com/magazine/2017/02/27/why-facts-dont...


Well hopefully there will be such an influx of fakes everywhere that people will at least begin to accept that anything can be faked, and hopefully take the next step of thinking a little more critically about what they hear next.


I see it going to the other extreme, where people believe anything can be faked, and politicians, criminals, etc. start to dismiss evidence of their wrongdoings as fake, and people believe them.


Any Deepfake detection tool would also be a GAN sparring partner for the Deepfake makers.


If you consider this a two sided game, then you can have the fakes quickly become so good that hardly anyone can tell.

And I'm not just talking about image fakes. You can have MCTS find the best arguments for ludicrous statements, and other paths to justify fake things that look just like real arguments.


That is the point of a GAN. We're missing something about the problem space and/or human cognition, or their output would already be indistinguishable from reality to all humans.


Yep, they're a 3D reconstruction of points from a 4D space (massively simplifying, given that the 2D video frames themselves are high-dimensional data), and that's just the video. Bring a 3D camera and the underlying distribution into the game and see if hilarity ensues, IMO.


This doesn't automatically mean GANs can outpace any detection method.


Furthermore, if the Generator outpaces the Discriminator by too much then the generator stops learning, or, worse, degenerates into mode collapse. Generators and Discriminators have to be close to each other in capability for either to get anywhere.


If the detection methods are publicly available, then you simply incorporate the detection method in your training regimen.

Effective detection methods may end up being closely guarded secrets.
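
As a rough sketch of what "incorporating the detection method in your training regimen" could look like, assuming the released detector is differentiable (hypothetical PyTorch with placeholder architectures): freeze the public detector and add fooling it as a second term in the generator's loss.

    import torch
    import torch.nn as nn

    # Placeholder architectures; real deepfake models are far larger.
    generator = nn.Sequential(nn.Linear(128, 784), nn.Tanh())
    discriminator = nn.Sequential(nn.Linear(784, 1))    # the GAN's own critic
    public_detector = nn.Sequential(nn.Linear(784, 1))  # the released detector
    public_detector.requires_grad_(False)               # frozen: only exploited

    opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
    bce = nn.BCEWithLogitsLoss()

    z = torch.randn(64, 128)
    fake = generator(z)
    real_label = torch.ones(64, 1)

    # The generator tries to fool both its own discriminator and the public
    # detector (label 1 = "real" for both). The usual alternating
    # discriminator updates are omitted for brevity.
    loss_g = bce(discriminator(fake), real_label) \
           + bce(public_detector(fake), real_label)

    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()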


They need to be publicly available and differentiable to be able to use as an opponent for a GAN.

Pretty much nothing except other neural networks is differentiable, unless you put effort into designing it to be.


It's feasible to design a reinforcement learning based network that uses output from a non-differentiable deepfake detector as a component of the loss function.
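
A minimal sketch of that idea, assuming the detector is a black box returning only a scalar score (hypothetical PyTorch; the score-function/REINFORCE estimator stands in for whatever RL method would actually be used):

    import torch
    import torch.nn as nn

    def black_box_detector(x: torch.Tensor) -> torch.Tensor:
        # Stand-in for a non-differentiable detector (e.g., an external
        # binary). Returns a per-sample "looks real" score; no gradients.
        with torch.no_grad():
            return torch.sigmoid(x.mean(dim=1, keepdim=True))

    generator = nn.Linear(128, 784)                # placeholder architecture
    opt = torch.optim.Adam(generator.parameters(), lr=1e-4)

    z = torch.randn(64, 128)
    mu = generator(z)

    # Treat the output as a Gaussian policy so the score-function
    # (REINFORCE) estimator applies: grad E[r] = E[r * grad log p(sample)].
    dist = torch.distributions.Normal(mu, 0.1)
    sample = dist.sample()                         # non-differentiable draw
    reward = black_box_detector(sample)            # black-box score as reward

    loss = -(reward * dist.log_prob(sample).sum(dim=1, keepdim=True)).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()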


And can you imagine the impact of false positives? Prosecutor has the perp on video committing a heinous offense but it's still insufficient evidence because of a software bug.


We can dream of juries that wise... haven't seen one yet though have we?


Which means we would have to rely on an "Authority" who cannot be independently verified, because we do not know the secret sauce they are using.


Hence they probably shouldn't make it public for it to be effective.


But then everyone has to trust that whatever the US Military says about the video is true, which kinda ruins the point.


Or: they only make it public once the discriminator demonstrates a large margin to reduce the potential training signal.


Why isn't this a solution:

Everyone has some gpg-like setup.

If we see a statement or video or audio by a person accompanied by a public (graphic or audible) key that checks out...ok, done, verified. If we don't, then we can justifiably disregard the statement or video or audio.
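
For concreteness, a minimal sketch of that check using Ed25519 via Python's cryptography library (the filename and keys are hypothetical; a real deployment would distribute the public key out of band):

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # The speaker's long-term keypair; the public half is what everyone
    # would already know and recognize.
    private_key = Ed25519PrivateKey.generate()
    public_key = private_key.public_key()

    video = open("statement.mp4", "rb").read()  # hypothetical recording
    signature = private_key.sign(video)         # published alongside it

    # Any viewer can now check the endorsement:
    try:
        public_key.verify(signature, video)
        print("Signed by the claimed speaker: verified.")
    except InvalidSignature:
        print("No valid signature: justifiably disregard.")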


Would work in some but not all scenarios. Perhaps the video is legitimate but embarrassing for the represented party, so the represented party declines to sign it.


You're right...so here's the logical space:

A. Video/audio/text of/by person, signed, we have knowledge that this video bears their signature, they endorse it

B. Unsigned
    B1. Unsigned because it's not them
    B2. Unsigned because they don't want it to be associated with them

I think this is still a better state of affairs because we can verify positive endorsements as genuine or not.

You're right, we couldn't verify stuff they refused to sign, but!

Imagine the state coerces people to carry small devices with radio transmitters that constantly broadcast a signed key, over the length of a second, ad infinitum; that way no one can plausibly deny that it was them (unless someone plants the person's device on an imposter).


If I have some embarrassing/damning video I want to post online you can bet your ass I'm going to scrub off any metadata.

If there could be some repercussions (e.g. police brutality, gang violence, whistleblowing, etc.) I'm going to go above and beyond to make it not traceable to me. And you're here asking the government to make it a law to make all video outputs traceable to the creator?

Perhaps you should apply to the NSA or Palantir.


I think it’s a thought experiment and not a suggestion. And the transmitter is for the person being videotaped, not the person creating the video. Though of course you raise a good point about anonymity of the video creator.


Videos should be signed by cameras, using keys only those cameras have, at the time the videos are made. Sure, this means they have to be online so their manufacturers can supervise and record key generation, but that's simple now.


I really like this idea. Every video frame from a political/public personality is cryptographically signed with their private key and edge viewing devices are able to tell if it's properly signed or not.

It's easy to imagine this idea becoming a part of video (be it VOD or broadcast) encoding standards in the future, once this becomes a critical thing.

In terms of UX: one could imagine a simple indication on TVs: an RGB LED that shines in different colors depending on the verified authenticity level.


Most of the shocking, status-quo challenging videos these days are cell-phone videos taken by bystanders. Those videos are how we find out about police brutality and other abuses by authorities.

I don't see how this video signing system will help verify a cell phone video taken by a random bystander.


Sensor supplier could embed the crypto in the hardware, as close to the imager as possible, and issue signing keys for each device. The OEM would write firmware to extract that from the camera bitstream and dump it into the metadata of the image (something along those lines, anyway).

The image and signature could be added as authentication layers and then a third layer added to allow post-production (crop/levels/etc). Right click in your browser to 'see original image' with a little padlock in the corner. Ideally this would include GPS info to reduce likelihood that someone would just project a desired image on a screen and photograph that.
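
A rough sketch of what those layers could look like (Python; the keys, byte strings, and metadata format are all made up for illustration):

    import hashlib, json
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    device_key = Ed25519PrivateKey.generate()  # lives next to the imager
    editor_key = Ed25519PrivateKey.generate()  # held by post-production

    original = b"...raw bytes off the sensor..."             # hypothetical
    original_hash = hashlib.sha256(original).hexdigest()

    # Layer 1: the device signs the untouched capture (ideally plus GPS/time).
    layer1_sig = device_key.sign(original_hash.encode())

    # Layer 2: post-production (crop/levels) yields new bytes, but the edit
    # record points back at the authenticated original.
    edited = b"...cropped, level-adjusted bytes..."          # hypothetical
    edit_record = json.dumps({
        "original_sha256": original_hash,
        "edited_sha256": hashlib.sha256(edited).hexdigest(),
        "operations": ["crop", "levels"],
    }).encode()
    layer2_sig = editor_key.sign(edit_record)

    # The browser's padlock would verify layer1_sig against the device's
    # public key and layer2_sig against the editor's, walking the chain
    # from the displayed image back to the original capture.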


That assumes a lot of trust in OEM providers, and there would be a lot of incentive to compromise the key.


Agreed. Nothing is foolproof. It would make it harder for Joe Sixpack to spoof images, I guess the question is whether or not that's better.


Yeah, Joe Sixpack is not the one I am worried about. Nation states are.


Sounds like DRM. Also, it wouldn't be too hard for the people who actually want to hack it.


Well, no, it won't.

It will, however, make sure that you're watching the actual Obama, and not some computer simulation of him doing some disgusting thing.


Yes, this is terrible. We're going back to the "dark ages" when a horrendous crime could happen in broad daylight around multiple witnesses and nobody would be found or sentenced.


You could even go further than that.

If you employ a set of digital cameras, say for security cameras or other things, each has a private key which it uses to sign whatever it records. Obtaining said key would require physical access to the device. Then, when it matters, your organization has a set of public keys which it uses to verify that the picture/video is legitimate.


Why isn't this a solution: Everyone has some gpg-like setup.

The self-answering question.


Half the galaxy's engineers are busy trying to jam the AIs while the other half are busy trying to jam the AI jammers.


Starts to resemble biological ecosystems.


And here we thought we're better than nature. No, we've just replaced competition of designs with competition of goals.


Slight tangent, but let me just say that "deepfakes" is the perfect sci-fi terminology for the technology.

I could imagine an entire series (or part of a series) about "deepfakes".

I hope the name sticks, as the technology inevitably becomes more common.


If such a system to automatically detect this was created, then I would suppose it would become the story instead.

Whatever method is used would have to come up with a reference photo (or "model"), i.e. what the photo "should" be. And this tech can then be used to make more convincing deepfakes and other AI trickery. Hurray! Progress! :)


Why bother creating these fakes? Saying something loudly and often enough is all you need. Throwing money at social media is far more effective than creating deepfakes. Also, if the US military did spot a deepfake, do they have the trust for people to believe them? They are fighting an uphill battle.


>Saying something loud and often enough is all you need.

Depressingly true, but keep in mind that words < pictures < video when it comes to provoking an emotional reaction.

>Also, if the US military did spot a deep fake, do they have the trust for people to believe them?

Depends on how their deepfake detection method works. If it's open source, or at least publicly available and verifiably functional, then who wouldn't believe them? No one thinks their GPS is lying to them.


For a one-time 'hit' on a target it works, since fake news spreads faster than true news. By the time it's shown to be fake, the damage has been done to the target.


These fakes are just another tool, like social media. Social media alone might be effective, but add in these "deep fakes" and it becomes even more so.


There's also a lot of potential criminal use for them, like proof-of-life for a person who's actually dead.


Right now the visuals for these fakes seem to fall apart at higher resolutions or when there is any significant head rotation. This is actually better than what we have in the realm of still images, where almost anyone can create a "realistic" Photoshop of the vast majority of scenes.


It's only a matter of time though.


Likely you're right.

OTOH, when time is a factor (as in movies - temporal succession of images), perhaps simple curve fitting may never be quite perfect? Perhaps you need anticipation, and counterfactual thinking, cause and effect, and all that?

Also, like in video games, maybe even some understanding of real world physics may be required for a perfect fake.


Most of early physics was done by curve fitting, and on very small datasets. Simple Newtonian dynamics shouldn't be much of a hurdle for a neural network. That's about as much anticipation of cause and effect as you'd need for faking most kinds of videos.
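
For instance, a plain least-squares polynomial fit recovers g from noisy observations of a falling ball, with no physics engine in sight (a toy numpy sketch):

    import numpy as np

    # Noisy "observations" of a ball under gravity: y = y0 + v0*t - g*t^2/2.
    t = np.linspace(0, 2, 50)
    y = 1.0 + 10.0 * t - 0.5 * 9.81 * t**2 + np.random.normal(0, 0.05, t.shape)

    # A degree-2 polynomial fit recovers the dynamics from the data alone.
    coeffs = np.polyfit(t, y, deg=2)
    g_est = -2 * coeffs[0]                 # leading coefficient is -g/2

    print(f"estimated g = {g_est:.2f} m/s^2")   # ~9.81, up to noise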


It's already possible, but nobody has written a public-domain implementation of it. Probably because it can sell for a lot of money. I bet someone has this software right now.


The ability to fake something is equivalent to the ability to rewrite the past. As our ability to fake things increases, more and more of our knowledge then comes into doubt.

I think we'll end up in a world where the only way to prove what the past really was is to hash it and stick it on a blockchain somewhere. One can then see PoW as a constant tax being paid to maintain a literal link to a past in which deepfakery did not exist.


Fancy making a bot which crawls the web, makes a massive merkle tree, and then puts the head of the tree into a bitcoin transaction?

You could then monetise it by charging a nominal fee for the path from any given bit of data to a timestamped bitcoin transaction.

The disadvantage is that the cost of doing this involves "getting access to all data on the web", but perhaps you could partner with the web archive or something to do it?
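
A minimal sketch of the core data structure (Python; the "pages" are stand-ins for crawled content). Only the Merkle root needs to go into a bitcoin transaction; the audit path you'd charge for is just the list of sibling hashes:

    import hashlib

    def h(data: bytes) -> bytes:
        return hashlib.sha256(data).digest()

    def root_and_proof(leaves, index):
        """Merkle root plus the sibling ("audit") path for leaves[index]."""
        level = [h(leaf) for leaf in leaves]
        proof = []
        while len(level) > 1:
            if len(level) % 2:                 # duplicate last node if odd
                level.append(level[-1])
            proof.append((level[index ^ 1], index % 2))
            level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
            index //= 2
        return level[0], proof

    def verify(leaf, proof, root):
        node = h(leaf)
        for sibling, node_is_right in proof:
            node = h(sibling + node) if node_is_right else h(node + sibling)
        return node == root

    pages = [b"page-A", b"page-B", b"page-C", b"page-D"]  # crawled content
    root, proof = root_and_proof(pages, 2)  # the root goes on-chain
    assert verify(b"page-C", proof, root)   # the path you'd sell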


From what I see of history, we humans already do this by killing off all "invalid" copies of history (books, art, humans, etc...) and replacing them with the new "fixed" versions.

Now that making copies is as easy as owning a camera or a USB drive, disinformation is the new way to change history.


> I think we'll end up in a world where the only way to prove what the past really is if we hash and stuck on a blockchain somewhere

LOL. Blockchain to the rescue! Except. Which fork is the right one? The one with the most hash power? How can you be sure that one is right? Which ethereum fork is the "correct" fork? Which bitcoin fork is the "correct" fork?

Blockchains don't help for shit with proving something existed. They are fully mutable data structures - it just requires more effort than a simple UPDATE statement.


Or, you know, printed books from the past.

Real physical, historical evidence.


Blaming fakes for all kinds of things is just the extension of the "war on privacy and anonymity".

Why are fakes supposedly such a big problem? Their identity might be fake, but that doesn't make their ideas any less attackable and that's what it should be about: Ideas, not identities.

If an idea is good I couldn't care less who had it, I only care about the idea itself. Yet all the public discourse focuses on "fakes spreading wrong ideas", even the recent Facebook EU hearing had that as a major topic, with Zuckerberg constantly reiterating that catching fake profiles, who could influence elections, is one of Facebook's top priorities (which I don't doubt).

But barely anybody seems to make the effort to think this through; once you do, you realize that we are heading in exactly the same direction China is heading: no anonymity, and social media becoming the de-facto replacement for government institutions. Is that really the world we want to live in?


This isn't about ideas, it is about faking facts. You are telling me you don't care if someone created a video of you viciously murdering someone, and then that video was used to convict you?

Sure, ideas can stand on their own, but we have to have facts that we can know so that we can use those facts to judge the ideas. If we can fake images and videos, we can create fake facts, and that is certainly dangerous.


They are specifically worried about attacks on politicians in the form of fake videos that look like those politicians saying incriminating things. In that case, it matters a great deal whether the person said those things or not.


Stating lies as fact isn't spreading ideas.


I don’t understand the hand-wringing about this. We have had this problem for decades with all forms of media other than video. Text, photos, and audio are all easily faked and can be done so to a degree indistinguishable by 99.99% of people. Why weren’t we all terrified that bad actors would create fake images or audio files that the stupid electorate would find so persuasive that it would overthrow governments and swing elections? What makes video different? And in particular, why won’t video quickly join the list of things that people don’t trust without a credible source vouching for it, like we do everything else?


Anyone know the name of the grant/program/challenge? I didn't see it in the article.



Is there a way to cryptographically sign a video to make sure that it's tamper free? I'm thinking about some sort of encryption that happens directly on the camera while a video is being recorded.


You can use a signature to verify that a video hasn’t been modified, but that doesn’t really help in itself with fakes since fakes can be signed too. And if the cameras are doing the signing, you’d just need to extract the private key from the camera and use it to sign fakes. It would require something like the secure enclave to protect the private key and, of course, we’d then be trusting the camera manufacturers.


Not to mention creating a world in which people can't build their own cameras, or change the software on those cameras...


Sure you can. Just wouldn’t be usable as factual proof of an event.


The camera could sign it while recording...

But what's to stop someone reverse engineering the camera and using the signing key to sign something else?


Wouldn't that also allow one to determine the camera with which the video was recorded? Not exactly something a whistle-blower would want.


A potential stopgap: increase the necessary computation power required for fakes by only releasing 'official' media with multiple viewing angles and device signatures.

e.g. a press conference containing multiple mobile-phone camera videos of the speaker from different audience perspectives. Probably the models will eventually catch up and allow this, too, to be faked.


Are there high-resolution deepfakes out there yet, or are they within sight? Ones that you couldn't catch by eye if you just zoomed in a bit?


Alas - the 200 years or so by which humans had visual 'proof' that something did or did not happen is rapidly about to end. :)


Side note: I wish the "killer app" for deepfakes hadn't been porn. The technology was impressive, but having such a scuzzy popular usecase made sharing it with others kind of fraught.


Porn was the killer app for online video streaming and secure online payments too, though. The Internet owes a lot to porn.


Sexual gratification is one of our most primal urges. Makes sense that it would be a driving factor in this, despite the strong taboo in some cultures.


What would the other use be? Manipulation of news and media?


TV/cartoon mashups, AR 'stickers', plenty of silly entertainment possibilities.


Lol. More like DARPA wants to use deepfakes in psyops without getting caught.


It carries as much risk of being used against them, no? If not more, given the risk of being fooled by deepfakes into doing something dumb. They want to weaponize it, sure, but also to protect themselves.


I think the military wants to not have deepfakes done to them.

Here's a video of your officer giving orders. Should you obey them? (Note that "no" as an automatic answer is just as bad as "yes". The only correct answer is "yes if authentic, no if not". But how do you know if it's authentic?)


Well yeah, though it's worth noting that the potential damage is far, far greater than the potential utility.

The Western Intelligence Community has incredible power and very little oversight. There's a long way to fall if public opinion suddenly swings against them because someone publicized a deepfake of CIA, DGSE, MI6, and AIVD agents ritually sacrificing a child.



