The era of verifiability-- when photographic and video evidence could plausibly be trusted-- is over.
Photos have been dubious since Photoshop, but faking them still required expertise and artistry. Fake moving pictures required considerably more expertise, lots of time, and usually specialized equipment.
None of that is true anymore. We are back to a pre-industrial age of discourse, where rumor and hearsay and anecdote dominate again. Buckle up.
Sure, for a casual observer, these new methods for generating videos appear convincing, but is that the right bar to judge "ability to fake evidence"?
As far as I know, there have always been more sophisticated techniques and forensics to determine if an image is doctored, and likewise for video. I've not seen any research tackling fooling those methods yet, and I would bet that naive implementations of neural networks for generating videos would leave very obvious "neural network" artifacts. Of course, this is still new technology, so it will obviously get better at fooling our other tools over time too, but as of right now, I don't think the clamoring for "all evidence can now be faked" is all that justified.
The question becomes, does that matter? If everything needs to go through forensic experts, that works for courts. For the media, who knows. Maybe it just becomes another way to challenge anything.
There's always been a certain value to letting people see things themselves, and use their non-expert judgement directly, en masse. Think of that iconic image of a Vietnamese girl escaping napalm or (more recently) the drowned refugee child on that Turkish beach. These had value past the factual information.
In any case, I think the Photoshop/stills case is a little heartening. We get an occasional false image, but overall it hasn't created some sort of massive truth-crisis. Other stuff did happen to truth, but Photoshop wasn't at the centre of it.
But that was already true with words alone. Lives are already ruined every day by baseless claims with zero evidence, no video required. People who already research the source of words before forming an opinion will now research the source of videos, too. Those who are fooled by written and spoken lies will also be fooled by animated ones.
Words have almost no impact compared to visual imagery. Even if the words come from the mouth or pen of a respected individual, you can still doubt it, and usually people do.
However, whether we like it or not, everything we see, at least on a subconscious level, we perceive as real.
My assumption would be that we have no evolutionary resistance to that, and I don't know if there ever can be. People have been lying since our first day of existence, but until very recently, nothing in nature could create arbitrary images that look 100% real.
It doesn't matter if it's fake or not: if you see a convincing version of your president fucking a pig, you'll never forget it, and you'll always attach that image to him. Similarly, people often can't differentiate between an actor's character and the real person. Whether it's Macron in his campaign film or Gandalf in LotR, you associate that person with your simulated experience.
Animated lies are on a completely different tier of immersiveness and deception than speech or writing. You can't reduce this development to "just another lie". It's very close to the most convincing lies possible, and it's definitely very dangerous.
In addition to this, despite all our cutting edge forensics, we still fumble the ball in the courts, based on sham forensics, and send innocent people to jail.
As a defendant, you have to pay forensics experts -- highly -- to research and create such evidence of manipulation. That's if they are willing to take on your case at all; they have their own reputations and motives.
This is all taking us further into an effectively tiered justice system. Can you afford to challenge the evidence, even if you think it's doctored?
If you are a cause célèbre -- or a good vehicle for a non-profit -- maybe someone else will. Otherwise...
FYI, the Russian propaganda machine routinely uses videos from video games and movies as "documents" in TV broadcasts. [1] Despite reports debunking these "proofs" appearing within a day or so, there are no backlashes or retractions. Hell, even the Russian defense ministry used insanely crude photoshopped images in its reports, and still, no backlash at all. [2]
My point is, if you just want to brainwash the population, these new technologies are overkill. People aren't generally smart enough to require this sophistication from a fake.
Nikon used to provide "Image Authentication Software" which would authenticate images and show if they had been altered since taken by the camera. This was useful not only in news but also court cases.
This, obviously, could not be used on tweaked published images but would work if you had access to the original frame. Most photojournalists work to a code whereby they don't amend the images at all, even tidying up, but that's only viable if they and their editors are trustworthy.
The Nikon process was cracked a few years ago. This allowed edited images to pass the validation.
Yes, but at what point do you hash it? The Nikon software proved it was unaltered since the shutter press, anything after that would rely on it not being touched until the hash was generated.
No, this is an opportunity for new technology that adds additional data stream(s) to audio/video to ensure authenticity. Perhaps something like a one-way hash of physical properties related to time/position and the rasterized data, in a way that cannot be faked. Or maybe a move away from rasterized data. Idk. Just brainstorming.
I’m confident there will be an innovation in this space since there is clearly a very urgent need. In fact, this could be a start-up opportunity. Maybe there is relevant academic research already on which a startup idea could be built.
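The hash-at-capture idea from the comments above can be sketched roughly like this. Everything here is hypothetical: a real camera would hold an asymmetric key in a secure element rather than the symmetric HMAC key used below as a stand-in, and the metadata fields are just illustrative.

```python
import hashlib
import hmac
import json

# Hypothetical per-device key; a real design would use an asymmetric
# key pair provisioned into a secure element at manufacture.
DEVICE_KEY = b"per-device-secret-provisioned-at-manufacture"

def attest_frame(pixels: bytes, timestamp: float, gps: tuple) -> dict:
    """Bind the raw sensor data to capture-time metadata at the moment
    of the shutter press, before any post-processing can touch it."""
    digest = hashlib.sha256(
        pixels + json.dumps({"t": timestamp, "gps": gps}).encode()
    ).hexdigest()
    tag = hmac.new(DEVICE_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return {"digest": digest, "signature": tag, "t": timestamp, "gps": gps}

def verify_frame(pixels: bytes, att: dict) -> bool:
    """Recompute the digest from the pixels and claimed metadata and
    check it against the device's attestation."""
    digest = hashlib.sha256(
        pixels + json.dumps({"t": att["t"], "gps": att["gps"]}).encode()
    ).hexdigest()
    tag = hmac.new(DEVICE_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return digest == att["digest"] and hmac.compare_digest(tag, att["signature"])
```

Note this only proves the bits are unchanged since the shutter press; as others point out in this thread, it says nothing about whether the scene in front of the lens was itself staged or a photographed print.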
A short-lived opportunity for sure. This is the very definition of an arms race, and I can't see "real" winning out over fake. Anything that can be checked can be preempted, so "real" is always one step behind, as far as the present is concerned.
Indeed, something like that would be the only way to implement this, and I hope that everyone will agree that we should not go there. To a certain extent, the ability to lie is foundational to human civilization.
The difficult part is the "in a way that cannot be faked". If you can record it using a device you control, you can generate a fake made to appear as if it had been recorded with that device. If you do not fully control the recording device (say because it uses a secure enclave to generate signatures), you'll need some way to generate realistic inputs for fake data, but that might be as easy as photographing a high-resolution print.
If the effort to document the truthfulness of data is much higher than for unverified recordings, the majority of the evidence anyone can come up with will be of the variety that is trivial to fake for someone determined.
Tamper-proof HSMs in recording equipment might do the trick. The point is not that they can be broken. The level of sophistication has to be high enough that most bad actors will be incapable of doing that. This is why chip cards for security work.
> The era of verifiability-- when photographic and video evidence could plausibly be trusted-- is over.
If ease of producing convincing fakes of X means that the era of X as convincing proof is over, then why are signatures and written documents still taken as proof?
I think it just means that things will be scrutinized more carefully from now on, and that there will need to be a stronger document trail.
> We are back to a pre-industrial age of discourse, where rumor and hearsay and anecdote dominate again.
And I definitely don't think it's as grim as this.
Signatures _aren't_ hard proof, they're absolutely contestable. Images and videos once were hard proof, but now are becoming contestable.
I wonder how long it will be before we see journalism-focussed photography equipment automatically generating a secure cryptographic signature for each shot. A particularly fancy implementation could be signed by both the manufacturer and the photographer, offering two assertions that the shot is valid and undoctored. Going back to a reputation system still isn't great, but it's a lot better than a "pre-industrial age of discourse".
Whenever I receive a signed-for delivery, I'm asked to provide my name and attempt to sign on the proffered touchscreen device - invariably scrawling a shape that in no way resembles my actual signature.
I recently accepted a new role, and as part of the on-boarding digitally signed my contract by typing my name into a text box marked "Digital Signature" - I'm kinda assuming it's just a regular input box with added legalese.
Signatures are absolutely no longer proof of anything, simply a token gesture with perhaps a dash of security-by-fud tacked on.
For any important document, the signing is almost always witnessed by someone else. My understanding is that the act of signing is primarily the legal act of accepting the document, not a security mechanism. The signature provides a quick reassurance that a document has been accepted, but if there were a real dispute, the witness could be asked to testify.
Yes, almost always signatures are only going to be relevant to civil cases, which are decided on the balance of probabilities. So all the evidence introduced is about pushing those probabilities around. A signature is neither necessary (except in some special cases) nor sufficient but it will usually be evidence for "acceptance", one of the main parts of a contract in law. A witness would shore up that piece of evidence but it rarely comes up. In most cases acceptance is not disputed because doing things you contracted to do is also pretty good evidence for acceptance and that's usual. e.g. if you're in court over a car lease, the fact you paid the first three months and drove the car is usually going to be more than enough "acceptance" for a court even if the signed document has been lost.
In current use, a physical signature doesn't really verify identity or authentication (i.e. that it was you who signed the document) but rather intent and consent - i.e., assuming that it's your freely given signature, this "ritual" marks that you intended to approve that document as opposed to words and gestures indicating that yes, maybe this is the right decision for some future time but might still want to think about it.
It's not as if people were fact checking CNN and Fox or running forensic analysis to scrupulously determine the plausibility of the stories in their newsfeed to begin with.
As far as society and its trust (or lack thereof) in digital media is concerned this won't change much. Most people believe what they want to believe, and doubt what they don't, trust the in group and doubt the out group.
Plausibly be trusted in what context? In a court of law? Or in the court of public opinion?
I think the general hypothesis among mass communication studies doctorates is that as soon as a technology is created and adopted by a mass audience, there are immediate examples of humans using it to manipulate public opinion in some way. For example, as soon as telegrams became a thing used to manipulate public opinion (like the Zimmermann telegram), fake telegrams became a thing. As soon as online video became a thing, fake online videos became a thing (like Lonelygirl15).
Isn't what you're really saying, "humans like to create fake things, and then a lot of people fall for those fake things"? Why should we buckle up for that? Isn't it already a foregone conclusion?
There are companies that make cameras that sign the images. I believe they are only used for legal proceedings now but I’m sure we could expand their use easily enough.
Any photoshopping would destroy the signature, but that might be a good thing for dragging us back to reality.
And some people will believe even the most blatant photoshops because they want to.
Just look at the Pizzagate incident. Some fool entered a pizza restaurant and fired his gun because he believed people on the internet who claimed it had a secret pedophile den in its basement.
I've always felt that deepfakes are a net positive. Think of all the revenge videos that will be rendered moot. The Jennifer Lawrences of the world can stop stressing because nobody will take any of it seriously anymore.
It's kind of like those pornographic clones of popular kids cartoons. People freaked out at first, but really you don't hear much about it these days.
Sort of like an anti-virus inoculation when you think about it.
I think you're looking at it the wrong way. The issue of leaks and revenge porn isn't the nudity. It's the breach of privacy and trust.
Trying to claim that leaked photos are fake doesn't change the fact that they aren't. It doesn't make you feel any better knowing that there's intimate photos of you out there, whether leaked by a jealous ex or stolen by a hacker.
A lot of these people that have had intimate photos or videos of them leaked have actually posed naked for publications, and they've almost certainly been naked around strangers as part of their job. It's not the nudity that's the issue here.
This is especially true of revenge porn. I don't really care if someone sees pictures of my dick, plenty of people have seen it. I'd be more upset that someone I trusted broke that trust in just about the worst possible way.
Yes, I don't quite understand the fear / excitement / fuss. We've had Photoshop for decades; and for instance, nobody believes models actually look like how they appear on the cover of magazines.
We know photos can be doctored. Now it's coming to video: what's the big deal?
A document, a quote, a rumor, any piece of information really, has to be evaluated in context; is it plausible that Michelle Obama would strip on camera? It's so far out of the realm of possibilities that there really isn't anything to debate.
Some people like to believe in conspiracies and unfortunately there's little we can do about it; the videos of 9/11 were real but conspiracists will insist they were fake and there's no convincing them otherwise...
We're already getting close to the point where it's technically feasible to visit a Facebook profile and press a button to create a sex tape from a mix of FB photos and porn websites, for any one of a billion people. That's a problem. The fact that you can tell your parents, colleagues or school friends how easy it is to do, and that it's not really you, is not a net positive if you ask me.
I can't imagine she (or any other victim of the leak) enjoyed having their private lives and most intimate moments being accessible to the whole world.
I assure you none of them lost any sleep over this. All those carefully planned and shot takes were "leaked" completely accidentally at exactly the right time to ensure maximum media uproar and their personas appearing on the front pages of every tabloid in the western world. If you think they play by the same rules as you and me, you are naive.
You shouldn't "assure" stuff you have no idea about. Fact is JLaw was very upset about the leaks and reacted so badly to it that most of reddit style sites fell out of love with her because she didn't appreciate having her photos leaked or didn't just laugh it off and that was considered out of character for her.
Yes, because celebrities are known for genuine reactions in media and not maintaining a carefully crafted public persona for the purposes of publicity.
Sure, that's possible. But then such cheap theatrics will be rendered moot as well.
Anyways, I can assure you that there are a lot of women who are near suicidal over these things. By undermining their authenticity I think a lot of women will be secretly happy for deepfakes.
As the very talented actress she is, she already had all the media attention she needed. Cheap tricks like this are for those that lack other talents (e.g. Kardashian).
Anyone else rather unimpressed by the realism of these fake videos? I mean, technologically it’s cool and all, but it seems more like a proof of concept than anything that can be used to fool humans.
All the videos I’ve watched — out of technical curiosity, naturally — have had some sort of glitch that made it obvious they were fake. I think this technology will have serious problems just mapping the facial expressions of one person onto another, since many people have their own distinct facial expressions.
The linked YouTube channel with the Putin video[1] is a good example: it looks completely unrealistic because the actor in the source video makes facial expressions Putin would never make.
In my personal opinion, I think it will take decades before this technology becomes good enough to fool humans, and probably longer before it can fool humans closely related to the subjects of the fake videos — if this ever becomes possible at all. The fundamental challenge is mapping the emotions of one person to another’s, which isn’t easily solvable. Just mapping the facial features of Putin onto SNL’s Beck Bennett isn’t going to convince anyone familiar with how Putin looks and acts.
Here’s the thing, they don’t need to convince you, even five percent of people seeing this is enough. Bombard them with this nonstop and like legit warp their reality so that they believe it. This vocal minority can influence quite a bit of the rest of society.
Definitely not perfect, but this is hobbyist grade work. With the recent work around parallelized WaveNet synthesizing 10 seconds of audio for every second of wall clock time, a live fake that fools 50% of regular people is probably a couple of years away at most. Particularly if you can control the setting to ensure lighting/angles/etc match up reasonably well.
I am starting to wonder if this concept simply works better for some viewers than others. I seem to always very deeply notice that the shape of the head is always sort of wrong... I'm wondering if I use "shape of head and typical hairstyle" as a major way I recognize people. Maybe other people are really focussing on facial features like "lips eyes and nose"? Honestly, what I keep thinking when I see stuff like this is "this just looks like Paul Rudd wearing a lot of makeup" not "this looks like Jimmy Fallon with a narrower head and a different hairstyle".
Some of them are pretty incredible - you cannot tell at all that they are fakes. I think it depends on how closely the face matches the person being faked and how generic the lighting is. If you scroll down in the OP article you'll see some impressive examples.
Since reddit banned deepfakes, including SFW content, does anyone know where the community is now congregating? The appeal of the technology from online avatars to cheaper cgi is undeniable. Is this a case of throwing out the baby with the bathwater?
This is so frustrating and depressing. I come from the hobby indie movie production side of things and something like this would be so amazing. I came here all excited to see where I can dip into the technology and this is what I find... two asshole corners of the internet I would never frequent.
Can you imagine Reddit banning Photoshop and all other photo editing technology because a bunch of guys used it to post boobies on photos of women?
Chans and voat can be very interesting. Don't label them asshole corners because the 'good guys' told you so. They are orders of magnitude less censored than HN/reddit/twitter and offer a far wider range of opinions and information.
Fascinating how a topic like this is so 'controversial' that 8chan and Voat are the last places it can find refuge, while at the same time it's such a highly impactful development for society.
I believe dpfak.com has become as close as you can get to a central hub. The landing page is pornography, but you can click through to the forums and there's discussion to be found.
I live in and work in a small African country with very corrupt leadership. This stuff disturbs me ever so deeply. I can easily see a future where despotic governments use this technology to wipe out their detractors.
10+ years ago we started getting the technologies for facial landmark detection, and now we have face swapping. Currently we have the beginnings of good full-body pose detection, and I imagine we'll eventually have full-body swapping (maybe along with clothes).
That raises interesting questions. As legitimate looking sources become harder to trust, what other ways can we verify them? One idea that was floating around is key signing each datafile. That raises the question though of how to manage keys. [ Maybe have each key tied to a digital id, the id similar to Estonia e-residency? ]
At low levels of risk, like a recorded automobile accident, is such scrutiny useful?
Couldn't blockchain be a great tool here? The original hash is uploaded to the blockchain at the time of the recording, then you can verify that nothing has changed since then.
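The anchoring scheme proposed above can be illustrated with a toy stand-in for the public ledger. This is a sketch only: a real deployment would publish the digest to an actual blockchain or a trusted timestamping service, and the ledger here is just an in-memory list.

```python
import hashlib
import time

# Toy stand-in for an append-only public ledger. In reality this would
# be a blockchain transaction or a timestamping authority's log.
ledger = []

def anchor(recording: bytes) -> int:
    """Record the footage's digest at capture time; returns a ledger index."""
    ledger.append((time.time(), hashlib.sha256(recording).hexdigest()))
    return len(ledger) - 1

def verify(recording: bytes, idx: int) -> bool:
    """Check the footage is bit-identical to what was anchored earlier."""
    _, digest = ledger[idx]
    return hashlib.sha256(recording).hexdigest() == digest
```

The limitation, as others note in this thread, is that this only proves the file existed unmodified since the moment of anchoring; it says nothing about whether the content was genuine when it was anchored.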
There is already tech in filming to know what camera filmed what footage. I imagine there will be some kind of process happening soon that links footage through its changes, all the way from capture to broadcast.
I think this is part of what KODAKCoin is suggested to provide. But video footage goes through so many post processing steps that I’m not sure how that helps (or if it even matters) since the viewing audience likely won’t care about these things if a video supports their narrative.
Posted in a sibling thread already, but what about supplemental data streams related to environmental data (to verify time and position) that cannot be faked, somehow combined with the rasterized/sampled a/v in an irreversible way, or a new technology that doesn’t use traditional pixelized video or sampled audio that can’t be faked?
Surely there will be an innovation since there clearly is a need for this.
I don't believe a signing solution is feasible. It's currently too complicated for even a lot of IT experts, and the number of actors involved is too large to ever converge on a single unified solution.
Given that as of now authenticity can still be verified by experts, it's back to trusting journalistic outlets to properly verify news and sources.
Reminds me of Tango & Cash. Is this really true? Any sources for such verification techniques?
One might think that, given low enough resolution, any form of rasterized video or sampled audio can be convincingly faked even to experts, particularly once those with the faking technology fully assimilate such verification techniques so as to work around them.
I think the effects of "fake" journalism would be mitigated somewhat if we didn't have laws against slander, libel, and false advertising.
The media-consuming public in the United States still believes that "if it is in print, it must be true" - they haven't been inoculated against falsehood like they would have been otherwise. Presumably, if there were no expectations of truth in print/media to be enforced by some magical (and actually sort-of non existent) federal authority, media outside the "trusted" sources would be automatically suspect unless reviewed by some other trusted third party.
There have been more of these fake video stories recently. Without wanting to get bogged down in politics, I have wondered if these stories are being ramped up to provide some plausible defence to possible 'tapes' mentioned in the Steele dossier? Not necessarily a legal defence, but enough to cast some doubt as to the legitimacy in the media
You could trust the authenticity of the fact that someone used the author's key to sign it at some point in time.
You could not trust any claims the author makes about the circumstances under which the signed object was produced. They could have put their signature on a deepfake. They could have put their signature on the work of someone else. The author could have lost their signing key. It could have been created much earlier than it was signed.
A signature tells you very little about the thing being signed besides the fact that it was signed.
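The point above can be made concrete: a valid signature over a fabricated clip verifies just as well as one over genuine footage. The key and functions below are purely illustrative (a symmetric HMAC stands in for a real signing scheme).

```python
import hashlib
import hmac

# Hypothetical author key; nothing in this scheme inspects the content.
AUTHOR_KEY = b"authors-private-signing-key"

def sign(data: bytes) -> str:
    """Produce a signature over arbitrary bytes with the author's key."""
    return hmac.new(AUTHOR_KEY, data, hashlib.sha256).hexdigest()

def verify(data: bytes, sig: str) -> bool:
    """Check that the signature matches these exact bytes."""
    return hmac.compare_digest(sign(data), sig)

genuine = b"footage recorded by the author"
fabricated = b"deepfake the author chose to endorse"

# Both verify equally well: the signature attests only that the key
# holder signed these exact bytes, nothing about their origin.
assert verify(genuine, sign(genuine))
assert verify(fabricated, sign(fabricated))
```

In other words, the signature binds identity to bytes, not bytes to reality.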
Ghost in the Shell addressed this issue in 2005.
The Tachikoma units are debating how to stop a nuclear strike and one suggests that broadcasting a live feed of the nuclear sub would help, but the idea is reasoned against due to the technological capabilities to fake such a feed:
"Pictures don't prove anything anymore. It would just end up as a source of amusement for the uninvolved masses, an image from an unknown source that showed up at an all too convenient time."
beyond this inflection point one must now trust both the content and the source
Fake videos are nothing new. For those interested, look into how the BBC was (and still is) using footage from the first war in Afghanistan to illustrate their coverage of the second one, or how CNN used footage from an Indian porn movie to accuse Pakistani soldiers of rape. I'm sure there are many more examples of such misuse of video...
This is deeply frightening. I could easily imagine this being used as a political tool to incite hate and racism online. I fear it will be used as "proof" that an event happened.
Step 2.) Desire an advantage over your peers and competition
Step 3.) Notice political hysteria and how it affects the behavior of your rivals.
Step 4.) Deepfake a rival's face into a Nazi rally and send it to his professional network.
Step 5.) Rinse and repeat for every rival you encounter. It's not like denying being a Nazi supporter or crying foul play ever assuages paranoid suspicion.
Congratulations, you can now destroy the career of any worker in Silicon Valley with impunity.
You've been posting a lot of trollish, unsubstantive comments. Could you please take a look at the guidelines and start commenting civilly and substantively?
Where's the impunity? The described actions can and will be prosecuted as libel; US libel laws are comparatively generous to facilitate free speech, but what's described above is not protected.
Sure, but the exact actions described above don't meet the criteria to be considered parody, that's quite straightforward libel. Intent matters a lot, and given the actions you described, any judge would infer malicious intent.
But if people want to keep going with flippant accusations of Nazism in everything they see, then I suspect using this technique to abuse the witch hunt will be the natural next step.
Google "trump is a nazi" and take your pick of any number of Overton window nudging. It's a steady drumbeat of contortions to make the consumer of these outlets see Nazis everywhere.
Can you point to an actual example? Telling people to go google something feels like a very wimpy way of dodging requests for citations. Like saying "Go read a book".
Edit: Thanks for the edited links. I don't see how discussions about the actions and discourse of the president of the United States support your suggestion that fake Nazi-accusing videos will be used to destroy Silicon Valley careers though.