US House of Representatives Hearing on the Dangers of Deepfakes and AI (lionbridge.ai)
98 points by MintChocoisEw on June 23, 2019 | 67 comments



Beyond geopolitical concerns - as the tech becomes more widespread and easier to use, to the point where teenagers start using it to punk one another, no one will believe anything they see, hear or read ever again without an extremely robust digital provenance system (and who built that system? are they to be trusted? does it rely on intrusive measures? etc)

It's good that the developers and researchers are taking pause - this is powerful tech that could be used positively and creatively in so many areas - and with any new tech comes pain from misuse.

If history is anything to go by, legislation is likely to catch up slightly too late to stop the cultural impact, be overly broad, drag in a lot of relatively harmless outliers, undermine the positive use cases, and fail to deter dedicated high-stakes bad actors.


> no one will believe anything they see, hear or read

Yes, they will. That's the first part, and it's why there is a problem here in the first place. The second part is that people will stop believing media, which might actually be a good thing.

The real problem with deepfakes is when the latency to modify something goes from hours to minutes to real-time, and even a live stream can no longer be trusted to be genuine. We're pretty close to that point, and for some use cases have passed it already.


> The second part is that people will stop believing media, which might actually be a good thing.

More likely: they won't trust media that isn't in line with what they're already thinking. This won't help reduce confirmation bias, so already existing problems will likely be aggravated even further.


> The second part is that people will stop believing media, which might actually be a good thing.

Why is that a good thing?


I don't know if it will be a good thing or not, but one possible consequence is that people will react less strongly to videos, and thus videos taken out of context will do less harm.


I said it might be, not that it is.

Besides that, people are just plain too gullible when it comes to media. It used to be said that if it's printed it must be true; now it's 'if it's on the internet, it must be true', leaving vast numbers of people critically misinformed about important issues.


Personal SSL certificates? I don't see why original media can't be signed by the creator. We'll see measures similar to those we have for entering passwords into non-SSL web pages:

"Warning: this video is unsigned and claims to originate from NYTimes.com."


Because that still relies on trust.


Trust, to me, is not the problem. You can build trust. Known-good certificates can be distributed physically, and require signed messages for replacement. Or, we can develop schemes for distribution digitally via validated channels. For example, each worker at a company has a particular known-good digital presence, verified by their own public key, and distribution happens with them as the source, essentially creating an expanding ring of trust to the key being distributed. Violating such a ring of trust is not going to be easy, if it is well enough built.
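To make that concrete, here's a toy sketch of the expanding ring in Python with Ed25519 (the endorse/admit helpers are hypothetical names, purely for illustration): a key joins the trusted set only if an already-trusted key has signed it.

    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat
    from cryptography.exceptions import InvalidSignature

    def raw(pub):  # raw public-key bytes, used as an identity
        return pub.public_bytes(Encoding.Raw, PublicFormat.Raw)

    root = Ed25519PrivateKey.generate()      # known-good key, distributed physically
    trusted = {raw(root.public_key())}

    def endorse(signer, new_pub):
        # An existing member vouches for a new key by signing it.
        return signer.sign(raw(new_pub))

    def admit(endorser_pub, new_pub, sig):
        # Admit new_pub into the ring only if a trusted key endorsed it.
        if raw(endorser_pub) not in trusted:
            return False
        try:
            endorser_pub.verify(sig, raw(new_pub))
        except InvalidSignature:
            return False
        trusted.add(raw(new_pub))
        return True

    worker = Ed25519PrivateKey.generate()    # a worker's digital presence
    assert admit(root.public_key(), worker.public_key(),
                 endorse(root, worker.public_key()))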

There are two issues I do see, though, and they're kind of the same issue. Right now, we have this concept of a central store of public certificates. It makes it easy for you to get a certificate for a particular entity, but it also makes the central store a target. If you can compromise a central store (or a machine that is attempting to access said central store), you probably have the resources to at least redirect the user to your own site and leave them none the wiser, and you probably have the resources to man-in-the-middle their connection entirely and just snoop your heart out. So central stores of trust are a bit of an issue, and the ways around that are non-trivial to set up. A good example is probably Keybase, which lets you certify your various online presences with your private key. So if someone wants to replace your information on Keybase with their own, and they have the resources to do so, now they also have to compromise all the places you've distributed that key to. Or, they have to compromise one of those centralized stores of trust...

The big issue with centralized stores of trust is that they build blind trust. That's the big issue with humans in general, though. We don't want to question what we're watching. And we probably don't want to be bothered with validating that the "trusted source" of the certificate used to sign this content is actually _trusted_. It's just too much mental overhead. We want it to be automatic. We want central stores of trust, because it's just _easier_. The work, in this case, is going to be convincing people that _easier_ is dangerous - or convincing software companies to build in inconvenient technology and not make it trivial to turn off.


“Easier” is the whole point of the society you live in.

To be fair, the point of society is trust. It’s a way to trust information and ensure the species is safe.

The whole point of using markets and capitalism is that they generate more trustworthy results than top-down driven systems.

Until this mess, which makes it seem like a central authority will be better than a system that leaves nodes on the tree open to manipulation.

Essentially, we had a distributed decision-making society. Now we've found a hack that breaks that society's structure. The cost for such a society to manage verification is absurdly high - every person will have to spend non-trivial effort to verify that they are not being manipulated.

In contrast central decision making societies like China will just avoid that cost and be more competitive, beating out western democratic systems.


To mean anything it has to be signed by robustly secure hardware, with a manufacturer key.


This is a good idea in theory, but I want you to think about how much difficulty we have with passwords already. I'm not confident this is workable.


> as the tech becomes more widespread and easier to use, to the point where teenagers start using it to punk one another, no one will believe anything they see, hear or read ever again without an extremely robust digital provenance system

People still believe grandma's easily Snopes-able hoax e-mail forwards. We've had Photoshop for decades, yet people believe (and share/spread) doctored photos likely more today than ever. Pros can already make convincing doctored videos. I guarantee, in the 2024 election (maybe even as early as the 2020 one), we'll have professional-quality, flawless "video evidence" of one or more candidates saying or doing things they never said or did, produced by amateurs. Despite knowing this technology exists, most people will fall for it.


Real truth, I'm afraid, is disappearing, and there's no way to get it back. It's funny: we always saw computer-processed images eventually becoming indistinguishable from reality as a natural endpoint of the progress in hardware and software over the years, yet the threat of deep fakes seems to have surprised the lot of us.


*has disappeared


You can’t believe everything you read right now and haven’t been able to since writing was invented. It’s only a relatively recent situation that we’ve had photography and audio recordings that were difficult to fake.


History itself will be a very interesting field to follow post deep-fake.


It feels like we're having the 1990s debate on "weapons-grade" encryption all over again. Yes, some software has the potential for abuse. But there's no reasonable way to stop it from being written and used. New technology springs up to counter whatever harms arise, society adapts, and we move on.


Don’t expect it to turn out the same this time. Back then, technology was seen as full of promise and potential. Now, thanks to a number of actors like the familiar Facebook, our industry is no longer seen in the same light.


My point still stands:

> But there's no reasonable way to stop it from being written and used.

Can't stop encryption, can't stop P2P file sharing, can't stop deep fakes either.


What technology has sprung up to disable "weapons-grade" encryption? Near as I can tell, it's still quite a problem for law enforcement. And it's a false equivalency: deep fakes mess with something far more essential to a working society than secrecy: truth.


> What technology has sprung up to disable "weapons-grade" encryption?

Cellebrite, etc.

> Near as I can tell, it's still quite a problem for law enforcement.

They complain about it because they want everyone's data all the time, likely for convenient fishing expeditions. But as far as we know (based on public testimony and documents at least) they have been able to get access when they actually need it for criminal and terrorist cases, or at worst they have been able to work around the lack of access in other ways. This all came out back when Comey was first complaining about it.

> And it's a false equivalency: deep fakes mess with something far more essential to a working society than secrecy: truth.

The truth left the public sphere long ago, leaving us with a false consensus propped up by the media. The "fake news" hysteria is just the public waking up to this condition, coupled with the media's growing fear over the loss of their ability to influence public opinion. It remains to be seen whether we will actually find a way to identify and hold onto the "real" truths, many of them highly unpleasant, or whether we will slip back into another false consensus led by the ruling class and their courtiers.


Imagine someone calling your grandparents on the phone with your deepfaked voice crying and saying you've been kidnapped and need $10,000 to come back home.


Deepfake voices would twist the knife even deeper, but they aren't necessary. This kind of scam already hits seniors all the time:

https://www.chicagotribune.com/opinion/commentary/ct-perspec...

https://www.aarp.org/money/scams-fraud/info-2018/grandparent...

https://www.consumer.ftc.gov/articles/0204-family-emergency-...


What are the top ten “bombshell” audio or video recordings from the 21st century so far which ignited action because we believed them (probably rightly) to be truth, which might these days (or soon) be plausibly denied as deep fakes, leading to inaction?


The Rodney King beating video is a good example of a recording that led to direct and immediate social impacts.


I don’t think this falls into that category, but the first example that comes to mind was the 1938 “war of the worlds” radio broadcast[0].

The dramatization was written as a news report and didn’t identify itself as fictional until well into the broadcast. Angry callers reported mobs taking to the streets, but the scale of the reaction has been debated.

0: https://en.m.wikipedia.org/wiki/The_War_of_the_Worlds_(radio...


That was from the 20th century, not the 21st century.


The original radio broadcast of H.G. Wells's "War of the Worlds" is a great example of the media abusing trust unknowingly.


9/11?

There's always more to an event being a trigger for change than a simple true/false binary. Humans are not simple rational actors.

You can trigger action with small things, concentrated on the right people, or you can have major news go under the radar.


Media that is not cryptographically signed and authenticated by inclusion of the signature in a trusted register or blockchain cannot be trusted to be authentic.
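As a rough illustration of what inclusion in such a register could look like, here's a simplified hash-chained log in Python (the field names and structure are made up for the example; a real system would add signatures and replication):

    import hashlib, json

    register = []  # stand-in for a replicated log or public blockchain

    def append_entry(media_bytes, signer_id):
        prev = register[-1]["entry_hash"] if register else "0" * 64
        body = {"media_sha256": hashlib.sha256(media_bytes).hexdigest(),
                "signer": signer_id, "prev": prev}
        body["entry_hash"] = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        register.append(body)

    def verify_chain():
        prev = "0" * 64
        for e in register:
            body = {k: e[k] for k in ("media_sha256", "signer", "prev")}
            digest = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if e["prev"] != prev or e["entry_hash"] != digest:
                return False
            prev = e["entry_hash"]
        return True

    append_entry(b"...video bytes...", "example-news-org")
    assert verify_chain()

Because each entry commits to the previous one, back-dating or swapping an entry is detectable by anyone replaying the chain.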


That's true, but it also gives plausible deniability to accidentally taped evidence (e.g. a mic left on before/after a press conference). The person making the newsworthy statement could always claim it is a deep fake.

Of course, if reliable sources were present at the occasion, they could jointly sign a recording, but under other circumstances it could be fake.

Another potential problem is producing fake confessions in e.g. criminal cases. The person who allegedly confesses would not sign the recordings, and the police's signatures are worthless if they are fabricated.


>Another potential problem is producing fake confessions in e.g. criminal cases. The person who allegedly confesses would not sign the recordings, and the police's signatures are worthless if they are fabricated.

To look at it from the dystopian, draconian point of view, deep fakes created and used by the local justice system[s] could be signed by a police officer (whose intent is "solving the case, no matter what").

No matter how many witnesses and experts you throw at it, the court will almost always believe the officer (because of the infallibility unjustly bestowed on them), and that deep fake could cost someone their life.

Just as other tools have been used for good and bad on both sides of law enforcement, this will in all probability be abused just as readily; especially in areas where "south of the Mason-Dixon line" and "the south will rise again" are still part of everyday parlance.


More than faking documents such as confessions, the government has a strong interest in faking documents to provide covers for detectives, legends for spies, etc. A robust crypto signature infrastructure that is widely used and immune to government meddling is not viewed favorably by certain sectors.


Even if it's signed, nobody can stop one from filming a deep fake or even feeding fake data straight to the sensor.


The vast majority of media in the world is not that.


“Citron is currently writing a model statute that would address wrongful impersonation and cover some deepfake content.”

Existing libel laws would cover this, no?


Hmm, I think it depends on a lot of things. Public officials have few rights against satirical parody, so there could be grey areas around using deepfakes to impersonate officials doing stupid things. Articles on the satirical site theonion.com are often libelous, but since it's a comedy site they are allowed.


Great, we're going to give Trump the ability to throw Alec Baldwin in jail.


Or you can have the Daily Beast doxx you for doctoring a parody video... of a politician.


But then you have to somehow prove a fake is a fake. That's not easy now, and will only get harder.


With either libel law or this newly proposed law, it is still necessary to prove the fake.

Not sure what is gained here.


This might be a good opportunity for camera manufacturers to introduce a feature to sign video with some sort of authentication. If a video is true and real, then it could be authenticated by Canon, Sony, Apple, etc. Any video without this authentication should be suspect.
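In outline, that could be a two-level chain: the manufacturer's root key certifies each device key at the factory, and the device key signs footage at capture time. A sketch in Python (all names are illustrative assumptions; real devices would use hardware-protected keys and proper certificates):

    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat
    from cryptography.exceptions import InvalidSignature

    def raw(pub):
        return pub.public_bytes(Encoding.Raw, PublicFormat.Raw)

    manufacturer_root = Ed25519PrivateKey.generate()  # e.g. kept offline by Canon
    device_key = Ed25519PrivateKey.generate()         # burned into one camera

    # Factory step: the root certifies the device's public key.
    device_cert = manufacturer_root.sign(raw(device_key.public_key()))

    # Capture step: the camera signs whatever it records.
    footage = b"...sensor output..."
    footage_sig = device_key.sign(footage)

    # Playback step: check the chain, then the footage.
    def verify(footage, footage_sig, device_pub, device_cert, root_pub):
        try:
            root_pub.verify(device_cert, raw(device_pub))  # camera is genuine
            device_pub.verify(footage_sig, footage)        # footage untouched
            return True
        except InvalidSignature:
            return False

    assert verify(footage, footage_sig, device_key.public_key(),
                  device_cert, manufacturer_root.public_key())

Note this would only prove which camera produced the file, not that the scene in front of the lens was real.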


Photoshop is more than 30 years old. And in three decades, the impact of manipulated images on people's reputations hasn't been quite as devastating as people once feared.

Why won't it be the same this time? It's much easier to fool a human than to fool tools designed to detect manipulation, and that has limited fake photos' impact severely. As long as we have ways to detect fakes, and media responsible enough to do the cursory investigation required to verify authenticity, I don't see why manipulated videos pose any greater risk than manipulated photos.


> In three decades, the impact of manipulated images on people's reputations hasn't been quite as devastating as people once feared.

As the cost of producing fakes approaches zero, the noise-to-signal ratio increases until people distrust everything they see and hear. I'd argue we are already on that path, and have already begun feeling the consequences. (Read about the Russian "Internet Research Agency" and its role in tampering with the 2016 US election.) Deep fakes accelerate that trend.

> As long as we have ways to detect fakes, and media responsible enough to do the cursory investigation required to verify authenticity...

Media is becoming less prone to cursory investigation (fragmentation, understaffing/lack of revenue). A GAN is specifically designed to fool detection, and they're only getting stronger.


> As the cost of producing fakes approaches zero, the noise-to-signal ratio increases until people distrust everything they see and hear.

This is a bald-faced lie. The cost of producing the fake is nearly irrelevant; what matters is the reputation of the broadcaster. This has been so since time immemorial and did not change with Photoshop.

Many journalistic revelations stand, even though they could be trivially faked. The Snowden documents would cost trivial amounts of money to fake (it's a bunch of PowerPoints), but people believe them, because the US government issued a warrant for his capture. The metadata is much harder to fake.


This page has an invisible Facebook like button on top of the text


I've been thinking about this some recently. Is some scheme like the following remotely plausible?

- Have a piece of hardware in-frame, hooked to the operating cameras, that can display hashes.

- In each frame show the hash of the previous frame (low latency would be required, as would the transmission of the full-quality original).

- Publish the original seed and the final hash (and, since the original is broadcast, the whole chain of hashes can be verified).
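A minimal sketch of the verification side of that scheme (a SHA-256 chain over raw frames; purely illustrative):

    import hashlib

    def chain_hashes(seed, frames):
        # Hash shown in each frame: a running SHA-256 over the prior chain + frame.
        hashes, prev = [], hashlib.sha256(seed).hexdigest()
        for frame in frames:
            prev = hashlib.sha256(prev.encode() + frame).hexdigest()
            hashes.append(prev)
        return hashes

    seed = b"published-seed"
    frames = [b"frame0", b"frame1", b"frame2"]   # stand-ins for raw frame data
    final_hash = chain_hashes(seed, frames)[-1]  # published alongside the seed

    # A verifier with the broadcast frames recomputes the chain end to end;
    # any inserted, dropped, or altered frame changes the final hash.
    assert chain_hashes(seed, frames)[-1] == final_hash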


The fakes can do this too, though...

And how do you validate what is displayed on TV? Who will validate most videos they see?


We've had fake news and media for years. People wisen up, even if deep fakes come about reputable media outlets will call it out.

If you want to do something more without hurting free speech, improve existing slander and defamation laws to specifically target deep fakes.

Hold the platforms accountable the same way copyright has; it's worked wonders.


> We've had fake news and media for years. People wisen up

I haven't seen much evidence of this. Now we've got a significant portion of the population who call every news report that conflicts with their view of the world fake news, and then turn around and consume media which has been measurably shown to be among the most inaccurate available. All of this behavior is supported by the President of the United States. When will people wisen up?


The time scale for wisening up is measured in years, probably 5-10 years minimum, maybe more. People and societies take a while to clue in and adapt. It will happen though. Mass social media use is a relatively new thing.

Deepfakes won't be any more challenging to deal with than the struggle for accurate information in the pre-TV, or even pre-print, days. People will learn to look for reputable sources, and propaganda will keep trying to trick people in new ways.


I don't see any evidence of this, at all. I see a large portion of the populace gleefully indulging in their own self delusion, even when the ability to verify or disprove content is so easily available. To the parent's point, "fake news" has just become a convenient word to shout when you don't want to hear or believe something.


Propaganda is as old as man. Manipulating people has existed for as long as someone has been able to profit from it. Implying that it is suddenly new because one politician embraced the conspiracy is very harmful to the idea of fighting it.

The internet brought a few things into play that have made the issue more complicated. Primarily, that information and false information can be made by anyone and spread at the speed of light. Secondarily, that more people can present themselves as experts, because all you need is a website and not a third party's validation.

In only a few years we went from a researcher showing how he could manipulate a video to make it look like someone said something different than the actual transcript to the ability to fabricate entire scenes with deep fakes. Most people in the world don't even know this technology exists.

If you think only some back woods Trump supporters will fall for this then you're definitely mistaken.


>I haven't seen much evidence of this.

Fake news isn't binary - it is a continuum. For many, fake news is merely an extension of misleading news which has existed for a long time (recent examples are things like the crowd level at the toppling of Saddam's statue).

And to be frank, many of the alarmist 90's email chains (warnings about fake scams, etc) fall into the category of fake news. They're written like news articles and passed around, and are totally fabricated. It's just that today they have fancy looking web sites - but the concept is the same.

Misleading news is as old as news itself. When I was a news junkie, I knew the effort it took to distinguish real news from misleading news - both coming from the same established news agency/company (be it NY Times or CNN or Fox News). The only difference now is that fake news sites tend to publish only fake stuff - the difference is merely a matter of degree.

As much as it's annoying that people are calling everything fake news, I don't want to encourage people to go back to easily trusting traditional news sources - NY Times, etc. It's naive to think that people calling everything "fake news" is an artifact of Russia or Trump. There's been a very legitimate distrust of news organization for a long time, and all we're seeing is the lack of nuance in this distrust. Trump/Russia merely tapped an existing vein. Had they not, this level of distrust would have arisen soon anyway.


> Hold the platforms accountable the same way copyright has; it's worked wonders.

It has?! Citation needed.


Try uploading a copyrighted song to YouTube or Vimeo (and maybe Facebook?). They've made it work very well - too aggressively sometimes.


Exactly.


>Hold the platforms accountable the same way copyright has; it's worked wonders.

You mean by creating a selectively enforced corporate hellscape?


This particular issue is “vexing” to put it mildly. It’s nightmare fuel to be honest, because when you start trying to think of solutions to deep fakes and fake news, you very quickly run up against the assumed norms of our Enlightenment-era ideals: free speech, expression, and government control.

Because it’s clear that there’s no private solution to what is essentially an evolutionary war between carnivore (malicious humans) and herbivore (people who don’t want to be manipulated).

Right now, the Chinese total-control model is the model that’s working, and while on HN we may find it abhorrent, people in high places are being forced to make pragmatic choices. For them a Chinese-style information system will win out.

To prevent this, I think it’s increasingly time to re-examine our norms on information and expression.

In general, taking a step back - human society (markets, media, reporters, books, news) is effectively a giant solution to finding out what is “true” and what is “something else”.

We are excellent at solving these problems, we build families (parents are decision makers and know what information and implications make sense), clans (what is ideal for this group of people with similar genes I can trust), businesses (contracted people with relatively aligned interests), and more. You get the drift.

So the question resolves to: how do we organize ourselves to verify information, so as to at least keep parity with the verification rates of the pre-internet era?

The key difference is the mass production of information/content. The rate of content generation outstrips the ability to verify.

This last part will not change. We will always lose as an information society, if we get into a pure verification war with computer generated information.

This leads to the first conclusion:

1) clear measures to control the rate of generation of fake data.

to be blunt: this means jail. Punitive and clear response to stop this behavior, across borders.

Here’s where a decent chunk of HN will be aghast. This is what I meant by our values coming into conflict with the necessities of the solution.

Which brings us to problem 2/ addendum to solution 1.

2) who watches the watchers?

If govt has the power to punish people for “fake news”, how do we know that’s not misused for “inconvenient news”?

Well, the weak solution that presents itself is a firewalled agency, with guaranteed funding, whose job is to seek out and identify manipulation and the spread of faked information.

Hopefully, this agency won’t crumble on day 1, on the inherent contradictions of its role and responsibilities.

Some structure of this sort, gives us a pathway through the near future to deal with this issue.

Hopefully, it can be used to buy the time to solve the issue we are facing.

The irony, of advocating for a ministry of truth, in order to save the truth, is not lost on me.

But unless action is taken to stop propaganda and mass-produced information creation, it is a guarantee that our old human ways of assessing information will fail.

The only option which will be left on the table is dystopia, or some bizarre world where nothing and anything is true.


> It’s nightmare fuel to be honest, because when you start trying to think of solutions to deep fakes and fake news, you very quickly run up against the assumed norms of our Enlightenment-era ideals.

You write as if web-of-trust isn't a thing.


>Punitive and clear response to stop this behavior, across borders.

Good luck punishing "fake data" from China/Russia.


You develop similar capacities to theirs and then agree not to use them.


What, you think Putin is blackmailable? He is already accused of being a murderer in the West; what kind of leverage can you add to that?


How do you reconcile arguments with the watchers between different countries? When the US and Chinese fire walled agencies conflict, what happens?

I think any government agency has too many inherent biases to ever be trustworthy. I’d believe 100 self-asserting bloggers before I believe Pravda.


Which is the problem right now - so if you are willing to accept that, then the current status of possibly fake information being part of everyone’s daily intake is OK with you.

In that case, how do you reconcile fake news with the needs of a complex society?


Your comment reads to me as well considered, unlike the majority of the comments here. I am an author of an "Automated Actor Replacement in Media" patent, awarded multi-nationally between 2008 and 2010. I tried to raise financing for a "Personalized Advertising" agency during that time frame, which proposed a media pipeline very similar to what is now called deep fakes. Back then, no one would believe my working feature-film-based VFX technology, even when demonstrated before their eyes. The reaction was either non-belief, or excitement focused on porn wealth - which I refused. I ultimately went bankrupt trying to create this company. While struggling, I worked with political parties too. The Dems wanted to produce personalized "spending time with the candidate" videos, while the GOP refused to believe my technology was real... what a world we live in...



