Hacker News
DeepFaceLab: A tool that utilizes ML to replace faces in videos (github.com/iperov)
347 points by wawhal on Sept 3, 2019 | 178 comments



I wonder what this implies for the future of media (audio, photos, video) in general? Do we only consume media that has been signed by the creator and verified by an authority that we trust, e.g., a "blue checkmark" for media? Do we know how effective the SSL certificate verification has been in browsers at influencing consumer behavior?


In Neal Stephenson's latest book, that seems to be how he gets around the fake news problem.

The near-future world signs everything with a personal identifier so they can prove that what people are seeing is genuine.

I imagine these identifiers could be extended to devices like security cameras, too, so that there could be some verification that the video footage hasn't been doctored.

All of this of course relies on the general public getting the knowledge and tools to seamlessly do this verification on a daily basis. In the book it's mostly taken care of by Google Glass-style wearables.
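The integrity half of that signing idea can be sketched with nothing but Python's standard library. This is only the fingerprinting step, not real authenticity - a deployed scheme would additionally sign the digest with the creator's private key (e.g. Ed25519), which the stdlib doesn't provide - but it shows why any doctoring of the bytes is detectable:

```python
import hashlib
import hmac

def fingerprint(media: bytes) -> str:
    # The creator publishes this digest through a trusted channel
    # (their site, a registry, a "blue checkmark" service, etc.).
    return hashlib.sha256(media).hexdigest()

def unaltered(media: bytes, published: str) -> bool:
    # A viewer recomputes the digest; editing even one byte changes it.
    # compare_digest avoids timing leaks when comparing hex strings.
    return hmac.compare_digest(hashlib.sha256(media).hexdigest(), published)

original = b"...raw video bytes..."
tag = fingerprint(original)

assert unaltered(original, tag)                # untouched clip checks out
assert not unaltered(original + b"edit", tag)  # doctored clip fails
```

The hard part, as the comment notes, is not the cryptography but getting viewers to actually perform the check.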


The primary problem with fake news is not that people are fabricating evidence. Sure, that does happen on occasion, but it is relatively rare and is usually debunked pretty quickly. The biggest issue is that there are sources that can't be trusted, and consumers don't have the skills or tools to identify those untrustworthy sources, or simply don't care that their sources might be biased. Fixing the first problem (which is all some type of signing certificate could hope to do) doesn't accomplish much if we can't also address the second problem.


Imo the problem with fake news is that by the time it's debunked (if), its effect has already happened and can't be undone.


That is really just a symptom of the second problem. Fake news can spread pretty far before it is debunked, but the fake news has to start somewhere. Maybe people get it from some untrustworthy news site, social media, or directly from the mouth of a habitually lying politician, but it almost never originates from a legitimate news source that has the skills, tools, and motivation to better assess the authenticity of the news. If people put less faith in those untrustworthy sources and waited for actual journalism to be done before reacting, fake news would be less of a problem.


I can honestly say I do not know of a legitimate news site. For every topic that I have more than surface knowledge of, the articles are almost without exception biased and limited to a single perspective which aligns with cultural expectations.

The exceptions are local news about local events that has no political angle, and occasionally investigative journalism.

The most common method that legitimate news sources use to bias news is by omission, and second by using a misleading context. Neither is strictly a lie, but the result is as much fake news as something fabricated.


Mainstream news is to a degree more problematic when it formulaically follows a narrative, doesn't bother checking the assumptions, and presents misleading or overrepresented information.

That’s because people are apt to believe those sources and not critically question them.


The problem is the masses of people who insist on believing unbelievable things. We don't have centralized media any more that can act as filters for wingnut ideas.

Pizza-gate is bonkers, but that doesn't mean there isn't a healthy segment of the population that believes it as a way to channel their collective hatred for someone. Which is itself just a way to solidify their group identity, much the same as the resurgence of flat-earthers who feed off of their own ignorance while proclaiming their supposed objectivity.


Pizzagate is bonkers but they were not that far off, with Epstein and all that


Everyone is downvoting you, but the pizzagate conspiracy seemed ridiculous when it came out, and now it just seems like what they got wrong is that it probably wasn't happening at Comet Ping Pong, and was happening at a way more massive scale than they'd assumed.

I think the reason people are downvoting your comment (aside from the obvious reason, which is that it's a little low-effort) is that people didn't like pizzagate. It had an alt-right, Trumpy 'beware the deep state satan pedophile illuminati' feel to it that we oppose being true for identity reasons (i.e., we are not those people who say trollish, unintellectual things like that!).

Although pizzagate is false, it's not as false (in a kind of Bayesian way) as it was before Epstein's island entered the public consciousness.


There is also a problem with biased coverage that can be 100% factual. It is fake news in that it can craft a false perception of an issue that society then acts upon because people have some underlying belief that all comparable issues are granted equal coverage.


We already (mostly) solved that problem in the US with the Fairness Doctrine. Then we apparently decided because the problem was solved, we didn't need the regulation anymore. That ended exactly as you would expect, and exactly as it has every other time a regulation that's working is removed because it's "not necessary anymore".

https://en.wikipedia.org/wiki/FCC_fairness_doctrine


I wonder what would happen if we tried applying that today. For example, would climate change qualify? Scientists are pretty one sided on it, but politicians and the general public are not. So does it have to be controversial for the experts, or for the general population? I can also think of issues where scientists aren't in agreement but the population mostly is. What about if there are numerous different sides to an issue? Imagine giving fairness to some religious matter, given how many different religions there are.

Overall, I just have a hard time imagining the details of how it would apply in today's world.


I recommend reading "Fall; or Dodge in Hell" because Stephenson addresses just about everything you mentioned.

It's not the primary focus of the story, but it does at least quickly attempt to answer these problems to advance the narrative in a reasonable way.


For anyone else not up to date, Neal Stephenson's latest novel is Fall; or, Dodge in Hell: https://www.nealstephenson.com/fall,-or-dodge-in-hell.html


I read that current camera manufacturers already have some e-signature code inside cameras, but it is hackable.


Yeah, in the book they used distributed ledgers and quantum computing to make it a bit more secure.


Is the new book any good? I heard bad reviews, and it turned me off. I usually love everything NS does though.


I'm only about 2/3 of the way through but I really enjoy it.

The NY Times seems to like it. https://www.nytimes.com/2019/06/14/books/review/fall-or-dodg...


We do know that people don't care about trust indicators. They care about warnings for danger, but they don't care about trust indicators. Trust indicators are, essentially, something for companies to sell.

Because not everybody is going to pay for the trust indicator, and we're going to consume plenty of things that don't have it, the indicator is basically just noise. It's why EV failed, and we're about to see it repeat with this BIMI technology in the email world, IMO.

On the other hand, if you get a big giant danger warning that stands out because everything else doesn't have it...it gets attention.


While I agree that a giant danger warning will get attention, it also implies that someone needs to be the arbiter of "danger". And given how eagerly the giant Internet co's have embraced being gatekeepers, I don't hold out a lot of hope for this approach.


Yes, and not only that, on the other end, when the danger indicators become too frequent, they end up merely being seen as noise.

( E.g., see California cancer warnings on just about everything -- makes it hard to distinguish between the <thing that will kill you now> vs the <thing that may increase your chance of a curable cancer by 1/10^6>, so it all gets ignored. )


Agreed. In this particular situation, that's really difficult to do too.


It’s gonna have to be better than a blue check mark.

A blue check mark only says this account/person has our imprimatur. Nothing more.

There are people who don't have one who are better disseminators than those who do.

That would be terrible. Basically it would turn media into versions of Xinhua or former Pravda. Only approved voices and opinions get the “authenticity” mark.


The ability to fake photos has been around for a while and it hasn't been a big deal. People overall have proven savvy enough to mostly discern real from faked. My prediction is that the same will be true for ML faked videos.


For quite a while, and probably still now, advertising photos were/are routinely doctored without people realising (e.g. fashion or makeup models). This was a scandal a few years ago, certainly in the UK. The outcome was for plus-size models to be more frequently featured. So in some cases it can go unnoticed and be a serious problem.


Besides, look at how many reddit submissions are just an image of a headline, or an image of a picture + caption. Not even a link to a source. Yet thousands of upvotes and reactions to it, taking it at complete face value.

To worry about fake photos/videos seems out of touch with the current state of the internet. People will upvote and bicker about a screenshot of the text of a tweet. Why even bother with ML when an image of an outrageous sentence is enough?


That's what I thought too until I saw The Shining starring Jim Carrey.

https://youtu.be/HG_NZpkttXE


> Do we know how effective the SSL certificate verification has been in browsers at influencing consumer behavior?

Considering how EV is essentially dead, I would assume it hasn't been very effective.


I have the same feeling, but if you look at photos, they've been edited for decades (https://en.wikipedia.org/wiki/Censorship_of_images_in_the_So...).

Photo editing became "household" at least a decade ago. So if you think of that, I can't see why the perception of videos in 3 years would be different from photos today.


[flagged]


Personal attacks will get you banned here. Please review the site guidelines and don't post like this to HN.

https://news.ycombinator.com/newsguidelines.html


Critical thinking only helps you determine what could happen, not what actually did happen.


I feel like this question is just a cheap snide remark.

The current political moment provides a lot of evidence for why a technology like this introduces these questions.

A technology where we can fake any type of photo or video is dangerous in a world where tweets dominate the news cycle.


Video evidence is going to be inadmissible very soon.

It's ironic that we're essentially being pushed back to a pre-technology, pre-media time. "If you didn't see it with your own two eyes, you can't believe it"


Video evidence has always required testimony to its provenance, hasn't it? Otherwise it can be removed as hearsay. Courts have always been worried about authenticating evidence including video.

https://jamespublishing.com/2013/objecting-video-audio-evide...

https://www.esquiresolutions.com/federal-rules-catch-digital...


> "If you didn't see it with your own two eyes, you can't believe it"

Given what we know of human memory and visual processing, you'd be a damned fool to trust your own eyes any better.


I was just thinking about that. Knowing what we know about memory manipulation now, there is no real objective truth anymore


Videos have been able to be altered in this way for a long, long time now, just not by neural networks. So in reality, it changes nothing.


It does change a whole lot. Up until a few years ago you'd only have to be suspicious of very important videos coming from people who had access to convincing video editing. Some video coming from the US, Chinese, or Russian government trying to disprove war crimes? Yeah, better be careful. Some guy using dashcam footage to prove the other driver was in the wrong? Not so much.

As the technology improves and becomes accessible, soon anybody will be able to edit videos convincingly without having to invest a massive amount of time or money. That's going to have a large impact, I think.


Look at the old videos of Bigfoot and UFOs. A lot of them have been questioned for years as to whether they were fake or not.

So in that sense it could still have been done pretty cheaply.


Nobody knows exactly what a UFO and Bigfoot really look like so that helps. The debate is not so much about whether the footage has been doctored but rather about what it depicts.

A better example would probably be the moon landing footage or 9/11. And then it's really only questioned by conspiracy theorists and specifically because a state actor might have wanted to hide the evidence of a conspiracy.

Tomorrow with this new technology I might not convince you that Bigfoot exists but I might be able to show you extremely convincing footage of Margaret Thatcher and Mao Zedong frolicking in the Swiss Alps while Tupac is watching.


It's true that things can be more subtle now, like a politician giving a racist remark or something like that.


I need to see this video.

I had no idea Mao and Tupac were friends.


But does the average juror understand that? In the future, when each of them has had their face transplanted on top of Chevy Chase or Beverly D'Angelo for some e-greeting for Christmas Vacation on Facebook, maybe they will stop trusting it.


Reducing technical complexity changes everything.


A little story about admission of evidence that I heard from a friend of my wife's when I gave her a ride home recently.

We listened to Sirius 117 throughout our ride, and they always tell crime stories. It was a death-row story, and when the narrator said that the only evidence they had against a suspect was DNA, but that it was enough to put him in the electric chair, she chuckled a little. It was odd, so I asked her why. As a court reporter she has witnessed many horrors written by life, but this one was particularly scary for another reason.

She was reporting on a case where the only evidence of a murder was DNA, even though the suspect was in New York on the day the victim was murdered in Los Angeles. The DA rested his case explaining that DNA evidence is enough and has held up in courts for many years, and that they did not need to explain anything else; how he got from NY to LA on the night of the murder, or really anything else, didn't matter "because of the DNA." So the defense lawyers ran the suspect's DNA through one of the popular DNA-kit websites, and to their surprise they found a good enough match. She was lucky to be called in when the case was heard again (most of the time a different court reporter would be called in, depending on availability), and they showed that result to the DA/prosecutors, asking if they would consider prosecuting based on that person's DNA. They looked at it and said it was the same person's DNA. So the defense lawyers revealed that it was a different person.

To the judge's shock, the DA then came out and said that a DNA match doesn't have to be 100%, and that over the span of human existence some 100 billion people have lived, so it's quite possible for extremely similar DNA to be found in the wild. She recalls looking at the judge's face at that moment as he turned pale; you couldn't help but see this man wondering if, over the span of the last two decades, he may have sentenced hundreds of people based on inadequate evidence. The case did not end there, but shortly after, when she was checking on it, the whole case all of a sudden got sealed, and even a year later it was still locked down.

All she knows is that most likely he wasn't found guilty, as checking on his name recently did not yield any record on California's death row list.

So much for DNA being 100% proof.


Thankfully, video evidence is still rare in most cases even today, and was practically nonexistent 20 years ago.


People have been convincingly altering pictures since forever. I could do it no problem when I was 14.

Nothing changes.


Were you fourteen in the era of digital image processing, or in the darkroom?

It was always possible, but it used to be far more difficult.


I forged many documents in the early digital era because paper still had the trust of the darkroom era. There's always a transition time like that that's very exploitable.


It'll be the same as anything else. An approach to fake something will be created. Then an approach to identify the fakes... and repeat.


I think the cons are going to heavily outweigh the pros with this tech.

Fake news on high octane race gas.


Ads featuring you instead of an actor - do you want to see how well these clothes are fitting you? How would you like to be slim again? Here is your hair back! This supplement makes you look like the Terminator! Here is a regular day at your new property! All featuring you!

If you think this won't attract huge investments from VCs, you are likely too optimistic.


Ads asking camera permissions. Sends a chill down my spine


I'm thinking more "Google builds your face model in exchange for some cool messaging app" -> "Google allows advertisers to give Google some clothes models (as in 3d models)/etc to show on the user's actual body with their face"


Well, I also like the facts that:

* Everyone can access it. Not just sophisticated actors anymore

* It might make us rethink the entire "truth" chain, on how we source our information

Of course, the two go hand-in-hand, but the latter point is overdue: while video is perhaps more glaring, it's something that's needed in a lot of other areas as well (text -- news articles, messages, mail; sound -- phone calls, etc; image -- photoshop, though we start to get used to it).


What's good about letting every script kiddie use it? I'm thinking, the less usage the better?

I agree that provenance could become more important, but I don't see it changing how memes spread. For a lot of people it's just entertainment and they don't care whether it's true.


> What's good about letting every script kiddie use it?

I think the parent was making the point that once this tech is in the hands of the common man, its value for sophisticated players might diminish strongly. Same way an undisclosed 0day in a high-value target (say, iOS) is extremely valuable: once it gets disclosed and everyone knows about it, people can come up with workarounds and eventual fixes, rendering the threat basically defanged.

I'm leaning towards the same school of thought. I'd rather see this tech widely accessible and hence all video/picture material henceforth considered completely untrustworthy, than living in the misguided notion that video evidence is trustworthy.


> I'm leaning towards the same school of thought. I'd rather see this tech widely accessible and hence all video/picture material henceforth considered completely untrustworthy, than living in the misguided notion that video evidence is trustworthy.

This is why I am all for the current hype and the apps. Everyone that has access to electronic media should know that videos are now easier to manipulate than ever. This undermines the attack vector of supporting fake news with fake videos. (At least it should; I do not know about the psychological side of it. Maybe even knowledge of a video's falsehood does not diminish its impact that much. Still, I think it is best to make video fakeability common knowledge.)


I used to work for a company who made verifiable audio and video for law enforcement. Even a constant running timecode could be circumvented in the early 90's. (They used to use a dedicated audio channel to encode a hash of the last few seconds - on analogue). It actually takes a lot of effort to show that a video hasn't been/could not have been doctored.
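The "hash of the last few seconds" scheme the parent describes is essentially a hash chain, where each digest commits to the current chunk and to everything before it. A rough sketch in Python (the frame payloads here are placeholders; a real recorder would hash encoded audio/video chunks):

```python
import hashlib

def chain_digests(frames, prev=b""):
    """Each digest commits to the current frame AND the previous digest,
    so splicing or replacing an earlier frame breaks every later link."""
    out = []
    for frame in frames:
        prev = hashlib.sha256(prev + frame).digest()
        out.append(prev)
    return out

frames = [b"frame-%d" % i for i in range(5)]
good = chain_digests(frames)

# Tamper with frame 2 and re-derive: digests diverge from that point on.
tampered = frames[:]
tampered[2] = b"doctored"
bad = chain_digests(tampered)

assert good[:2] == bad[:2]   # links before the edit still match
assert good[2:] != bad[2:]   # every link from the edit onward differs
```

As the comment says, this only shows a video wasn't altered after recording; proving it couldn't have been staged or substituted at capture time is much harder.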


> I'm leaning towards the same school of thought. I'd rather see this tech widely accessible and hence all video/picture material henceforth considered completely untrustworthy, than living in the misguided notion that video evidence is trustworthy.

On the flip side, this enables revenge porn deepfakes and person to person harm on a much, much more common level. If you are someone who becomes briefly famous on the internet, expect that a TON of harmful or disturbing videos of "you" will show up in short order. I'm not sure we have good regulations and laws in place to protect people against the misuse of this tech yet.


I understand, but I believe it is obvious now that technology like this cannot be legislated away or controlled. The moment that people discovered that ML can be used to generate fakes was the point of no return.

There's not a single thing in the chain of tools and knowhow required to produce this that can be kept out of the hands of malicious actors.

The best course of action now is to rapidly educate people of the implications and hope that the initial wave of abuse will be without too many casualties.


Think about Photoshop - back before it existed, people assumed magazine ads had real people (not carefully, manually edited by artists). Photoshop and tools like it became commonly available. Over time, the tools being more widely available caused more people to scrutinize and be suspicious of what they saw. Ads featuring heavily manipulated photos had the opposite of their original intent. People picked up the term 'shopped to mean an image was fake or being deceptive. Kids in middle school learn how to do it.

Now, when you see a before and after photo, or an advertisement, the default is to assume it has been manipulated. That change came about with broad access to once-rare tools.


> Everyone can access it. Not just sophisticated actors anymore

Why do you like this?


“When everyone’s special, no one is.”

Basically, a (good) deepfake was out of reach for all but a few until recently, so if some bad actor with money and resources wanted to fake a video they could do it and few people would even think it could be faked.

Nowadays, the bar for pulling that off is way higher, as every video can be suspect.


> every video can be suspect.

But how many will suspect them? If 10 million people watch a video but only (generously) 1 million people think or are at least willing to consider it may be fake, that is very effective disinformation. We have to remember that the average person (especially in older generations) is likely unaware that this sort of technology is possible and easily accessible.


More people will suspect them than before. Just like photoshopping is now common knowledge, enough to have terms associated with it added to the common lexicon.


As people with the knowledge that video is no longer trustworthy, it is incumbent on us to share that message with other people, so it does become common knowledge.


I know that DARPA is one government agency in the USA looking at this from the disinformation and media-forensics standpoint:

https://www.nextgov.com/emerging-tech/2019/08/darpa-taking-d...

https://techcrunch.com/2018/04/30/deepfakes-fake-videos-darp...

...The consensus seems to be that ongoing detection is going to be a challenge as the tech evolves. I suppose that is like anything though....


Some pros:

Bereavement. It won’t be for everyone, but perhaps some people who have lost loved ones might like to feel that they are still alive by bringing them into modern day videos.

Fantasies. You hear about how some people like to live out fantasy lives in video games. I heard one about a severely disabled teen or young man who gets to be a big strapping strongman in his favorite video game. Imagine an old lady who can create a video of herself at age 25 free soloing El Capitan by deepfaking herself into an Alex Honnold video.


Prior to the invention of the photograph, people were forgotten. I don't think that was a bad thing. Our brains weren't designed to retain all family members from all time. Also, why? Why would we need that? I don't care to see a family member from four generations ago walking to the refrigerator in some video.

No, this tech will be used to amuse and/or influence the intellectually challenged.


It’s not about why you don’t want it. It’s about why someone else does. Some people get their dead pets stuffed and keep them in their house. Maybe someone might do the same digitally with their loved ones. Maybe a couple will add in their son who died of a heroin overdose in 2016 to Mom’s 70th birthday video. Maybe you won’t. Both are fine.


Photoshop has existed with the ability to modify images in this way for over a decade.

Can you think of many recent examples where video evidence was the sole evidence in a news report?


It would go a long way towards making many kinds of blackmail difficult or impossible, which seems like a pretty big upside.


Beyond trivial amusement what are the Pros?


I don't know that it makes sense to isolate just the narrow concept of deep fake creation. It seems like fundamentally a lot (most?) of the breakthroughs that make creation of deep fakes possible are the same ideas that make possible the current state of the art for classification, decision problems, advanced NLP, and other things we call ML or AI.

So to do this pro/con analysis you probably need to include the pros of these related technologies as well, which are certainly more than trivial amusement.


> So to do this pro/con analysis you probably need to include the pros of these related technologies as well, which are certainly more than trivial amusement.

Like the time my uncle (by marriage) sent a JibJab to the whole family featuring 4 recently-dead family members... real big pro, I'm sure... /s


I'm using this for the second half of letting authors generate fake faces for their fictional characters and then putting those faces on top of video snippets from actors. I'm still in the exploratory stages of what all is possible/useful, but feedback so far has been generally positive ("characters feel more real", "easier to empathize", "super interesting potential").

Could also make for some interesting low-budget book commercials/trailers where authors (or I guess anyone making a video starring people) can design the characters in a video without worrying about what actors are available and/or what CGI costs.


Stunt doubles?


There is also:

https://github.com/deepfakes/faceswap

With a big development community and interesting results.


What are the non-malicious uses for something like this? Haven't actually got around to pondering that yet.


An interesting use could be to anonymise people in public videos. You could use randomly generated faces (which other emerging tech can produce) and effectively remove people's recognizable features by "replacing" them.


Reminds me of the anonymizing suits in “A Scanner Darkly”.


Actually, you could probably get exactly that effect if you modify the script to pull from a pool of source faces per frame instead of a single face.
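In outline, that per-frame pooling idea might look something like this (the face pool and the swap step are placeholders for whatever the real model pipeline does; this only shows the random per-frame selection that prevents any single consistent identity from appearing):

```python
import random

def anonymize(frames, face_pool, seed=0):
    """Pair each frame with a face drawn at random from a pool of
    synthetic faces, so no one identity persists across frames."""
    rng = random.Random(seed)  # seeded only to make runs reproducible
    return [(frame, rng.choice(face_pool)) for frame in frames]

frames = ["f0", "f1", "f2", "f3"]
pool = ["synthetic-A", "synthetic-B", "synthetic-C"]
plan = anonymize(frames, pool)

assert len(plan) == len(frames)
assert all(face in pool for _, face in plan)
```

One caveat with per-frame randomization: the face would visibly flicker between frames, so a real implementation might instead pick one random face per detected person per shot.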


That’s interesting. I wonder if that would be sufficient to forgo the typical release forms required when filming in certain public places.


Politicians and celebrities are all about their faces. They wouldn't allow their footage to be anonymized, even though the 'bad' guys will want to use exactly their faces on fake videos, for exactly the same reasons.


An alternative to blurring could be useful.

But who should they look like?

Maybe that spot could be sold...


Doesn't have to be an existing person at all: https://thispersondoesnotexist.com/

I like your idea about selling spots though, definitely something I have never thought of before.


Storytelling of historical characters where we have common knowledge of what they looked like (e.g. Einstein).

Reshooting scenes of movies without having to get key actors back on location.

Immersive storytelling where you get to be placed inside a movie.

Actor safety where they can go through transformations for a shoot, such as extreme weight loss or weight gain.

I can really only think of it in terms of entertainment. There may be some therapy benefits which are yet to get discovered, but I would imagine that's a whole new level of complexity.


Presenting yourself online (think Skype) as more symmetrical, more imposing, more pleasing. For business advantage while negotiating.


This concept is literally used in the Ghost in the Shell manga, often for comedic purposes. There's one funny set of panels where a character is shown in a video-chat window as neat-and-tidy and wearing a nice business suit, but is shown in a subsequent panel as videochatting while she's on the toilet in messy hair and a tank-top.


Every time I hit HN I can't help but be amazed at how sad the future is getting


You have it reversed. If it is already true that being more symmetrical, imposing and pleasing is advantageous for negotiations, then the future in which we have the tools to counter that bias is getting less sad.


These tools are not for countering that bias.

They are for reinforcing it.


You can make a similar argument that the wide availability of soaps and deodorants reinforces biases about how people should smell. While that's true, the fact that anyone can easily match those biases reduces the opportunity to discriminate and makes those biases less of an issue.


You can make that argument, yes. I can deconstruct it by arguing that not everyone can easily match those biases - I bet there are still plenty of places on Earth where soap is surprisingly difficult to come by.

That side issue does not have much to do with the video issues under discussion, though.

I shudder at the thought of a world in which always-on lies about your appearance become standard practice.

Have physical reality and truth really become irrelevant?


They have been for quite some time: cosmetic surgery.

I remember watching a TV show years ago about a woman who had had lots of plastic surgery, found the man of her dreams, and was now pregnant. She was afraid the baby would come out looking totally different than she did, because she had physically changed herself so much, and her husband would leave her over the deceit.


I see what you're saying.

At the same time, with cosmetic surgery, you are changing physical reality. Someone has sliced up your body and reconfigured it to be more what you want.

So it's not necessarily a "lie", as such.

Like I said, though, I see what you're getting at, and it's a valid point.


Since this has gotten to the point where we need to be precise with the language: they are not the tools for eliminating the existence of bias. They are tools for countering the effect of bias in particular instances.


I think you missed your parent's point. Let's examine a related case: fashion models. The industry has been manipulating images for years, to make models skinnier, whiter, removing blemishes, etc. In your language, these are tools for eliminating the effect of bias for the models. The impact that's had on our society is well studied: biases have been disastrously reinforced.

This isn't imprecision of language.


Thank you for elucidating my point so well and concisely.


Ha! People who wear power suits, makeup, expensive haircuts and shoes are already doing all this. Technology just means you don't have to spend time in the stylist's chair to accomplish it any more!


Until you have to, you know, actually meet someone in person.


I've done entire contracts without ever meeting in person.


And what percentage of the population do you believe that is true for? Remember, we're talking about people interviewing in general here, not the small percentage of software devs who work 100% remotely on contract.

Not to mention you'll just come off as a huge weirdo as soon as you're found out.


So, it works for those that do long-distance negotiation. So what? Finding a case where it doesn't work so well is just straw-man stuff.


It's not a straw man when your case applies to a tiny fraction of the human population and mine applies to the rest. Context matters, I was responding to this:

>Ha! People who wear power suits, makeup, expensive haircuts and shoes are already doing all this. Technology just means you don't have to spend time in the stylist's chair to accomplish it any more!

Did you forget what you wrote?


Did you forget the entire topic? It was about other uses for this technology, and online representation modification is one of them.


* See what you look like in a certain outfit being put through its paces. A stock video of a bride walking down the aisle or dancing in a dress could have the real bride's face put in place so the bride can visualize how she'll look in different situations.

* A demo video of what life in a new house would be like. Pre-recorded video featuring actors waking up, eating breakfast, and living life in a home could have the actors replaced with the potential home buyers.

* Security tests... for facilities that have to look for "banned" people, occasionally a face could be replaced on the video to see if the security guard notices. This is similar to the TSA sometimes testing workers by putting in fake images of guns or bombs into the X-Ray display to make sure their workers are actually paying attention.

* Retroactively correcting visual media that is no longer appropriate. I would love a version of "The Cosby Show" where Bill Cosby was replaced by another famous black actor just because it would trigger my liberal guilt less. This could allow studios to craft contracts where if a star committed a grievous act or if they die, their likeness could be replaced throughout the series and they would stop receiving royalties for their visual appearance. Dialog could be re-dubbed as well to complete the retroactive continuity.

* Updating low-res video... If there is a very poor and grainy interview with someone, the image could be enhanced by replacing the subject's face with a higher-resolution copy of their own face!

* Smart therapy mirrors... people who have had disfigurements or have body image problems could have a smart mirror that displays a modified image for either self-esteem purposes or to help them psychologically.

* Same applies for funhouse/haunted house mirrors... seeing a mirror image of yourself with a zombie-ified head would be a great illusion.

* Real-world facial masking... You know how people can 3d map the exterior of a building and then project an image onto it using that 3d map taken into account to pull off some spectacular illusions? [1]

Something similar, sans holographic technology, could be done to make someone appear like someone else. A camera and 3d scanner environment to track the facial contours and location, with a projector projecting the image of another face onto the real face. Similar to the scene in Bladerunner 2049 where the AI girlfriend rents a real-world avatar for a personal encounter with her boyfriend. [2]

[1] https://www.youtube.com/watch?v=XSR0Xady02o

[2] https://www.youtube.com/watch?v=VuV2c-6js8w


“Joke” apps, so you can put yourself in a video as a hero/heroine or villain/villainess. Put yourself as the singer in a music video, etc.

Punking your friends too.

In a productive context in filmmaking, you can have fun with doubles and stuntmen and stuntwomen. Or, if you have the rights to license the face and likeness of an actor/actress, they can be in five or ten films a year (forgetting about brand dilution). John Wayne can be in a new Western movie.



Wait, this is a one shot? The poster is cherry picking results, right?


He posted a ton of video examples if you scroll down thread on Twitter, and some are better than others. It's shocking how accessible the technology is now that a phone app is letting you do faceswaps on short video clips with one picture and basically no effort on the part of the user.


> Punking your friends too.

"Punking" does not qualify as non-malicious.


That depends on your relationship and liberties you take.


It does if you and your friends share the same sense of humour.


Famous actors could license their faces for use. This could even last past their death, allowing them to star in movies decades after their physical death.

If an actor dies while filming a movie, it can be required in the contract that they allow their face to be generated for scenes that haven't been filmed yet.

They could also use this to use a younger face as they age.


I'm pretty sure I read somewhere that Stan Lee's face had been scanned like crazy before his death in order to keep the cameos going.


I hope they don’t do that. I’d just find it more sad and creepy than enjoyable.


It could be useful for film production where you could replace the stuntman's face with the real actor's one.


Doing exactly this, replacing stunt doubles, is what I patented globally back in '08. I tried to create a personalized advertising agency: put people into the advertisements themselves, driving the new car, wearing the new fashions, taking the luxury vacation. The problem was, I started my effort in '04 and had a working pipeline by '08, but no one believed it was possible. Even when I'd demonstrate right in front of them, investors thought it was a trick. I eventually went bankrupt and quit. Today we have deepfakes, where we could have had personalized advertising a decade ago.


That is neat and all but do we want that?


For the fashion industry, it becomes a fantasy visualization platform - see yourself in all manners of clothing, hair and makeup.


And replace the low paid actor's face with a high paid actor's face...


Savings! You could then also produce movies in low-wage countries.


This has been routine in films since at least Jurassic Park (1993).


I see a lot of concern regarding fake pornographic videos. And I get it. But on the other hand, people who release videos they do not want to appear in would have a credible alternative to blurring faces (though I guess you could still fingerprint facial expressions).

If this gets mainstream enough, "revenge porn" would also lose a lot of its current value, being easily dismissible as "fake". (And so would genuine photographical evidence...)

Journalists could use it to hide their sources from the public. If this gets really good, it would become a nice alternative to motion capture for cinematography (pose estimation, including facial expressions): swap in the 3D model, or concept art, of the person you want to appear on screen. It could significantly reduce the costs for makeup and enable more in post-production. That was probably already available to big teams with big budgets, but this makes it more mainstream.


I recently watched a deepfake video of The Shining with Jim Carrey's face over Jack Nicholson's. It was so damn good you couldn't even tell it wasn't actually him playing the role. This was all fun and games, but one thing that immediately comes to mind is movies starring people who are no longer alive.


Imagine if when you click up a movie/show on Netflix, you get to pick which actors play the main parts.


Grand Moff Tarkin says hello.


I’ve been telling people that there will now always be new movies with Tom Hanks. Even after he has passed, there’s so much screen time to use and audio recordings that he could forever be in new movies.


Standing in for me for many tedious videoconference meetings :)


Privacy. A commercial video shot in public today is usually required to blur the faces of people who didn't give explicit consent. Replacing faces with stubs is a nice alternative, as long as the target face comes from a person who consented.

I am not a lawyer, though.


> Replacing faces with stubs is a nice alternative, as long as the target face comes from a person who consented.

Why use a real face? You could generate a fake one using a GAN. You may even be able to build a fully automatic anonymization tool that identifies the broad characteristics of each face present (skin tone, age, sex, etc.) and then uses those to generate replacement faces.


Maybe it could be coupled with GAN generated faces.


On a network like FB that has your photos, mix your own face into the advertisement! The video can have the pre-computed target, and the network can generate yours from all that free photo storage/tagging...


My initial reaction is that this counts as malicious.

But thinking about it some more, I at least don't mind clothing ads using me as the model. (Assuming there's opt in and all)

It's better than mentally swapping out the skin tone and body type to see if it was really the clothes that looked good or the attractive model.


Replacing all those hacks and pretenders in every movie ever [1] with the glory that is Nicolas Cage.

1. including "Face off"


As part of a pro movie-making suite like After Effects or DaVinci Resolve: you can have somebody else act in a scene and then replace them with the physique of your chosen actor, e.g. when you need stunts done, when your actor can't appear at a given geographic location at a given time, when somebody has a facial injury/scars you want to get rid of, etc. Plenty of valid uses. You could also do a render in Blender and replace "avatars" with real human faces, avoiding the "uncanny valley" in your animation.

The travel industry could have uses as well: these days many places offer video recordings of their clients doing some dangerous activity. For those who can't do the activity but still want the video, or where taking a selfie could get somebody killed, they could offer pre-baked scenes and then replace the stand-in's face/body using this app and some pose estimator/body switcher. Or just show you how much you would enjoy some beach property in a hot location with a video featuring you, etc. This would make it useful for custom ads featuring you as well.


I could see someone using it to update home movies after transitioning. Dysphoria is probably not great for reminiscing.


Localisation of films/TV shows/educational material.

Often English-language films are poorly dubbed for local (mostly non-English-speaking) markets. Having a local/recognisable actor filling the role would be a big improvement.


That’s a little more complicated. Now you’re replacing a face and body and either having them voice dialogue separately (dubbing themselves) or you’re putting a local face to a foreign film (and subbing?)

I can see them modifying the actual original actor to speak in a local language however to sync with the localized dialogue.

That said, your proposition is also interesting. It raises some questions though. Imagine an indie French film where the protagonist (or whoever) gets replaced by a Hollywood actor. Or an indie Angolan film which gets the treatment. There would be lots of purists (for many reasons) protesting this kind of option.


I thought of a non-malicious use.

Celebrities could sell the 'rights' to their faces, and then ordinary people could 'act' in commercials, and later on even movies, and just stick the celebrities' faces on?! xD Celebrities would become cheaply available to 'star' in your next Netflix series.

(But unfortunately, even if celebrities don't sell their faces, they'll be stolen!)



A different git repository has a "manifesto" of sorts explaining why they think it is ethical to release:

https://github.com/deepfakes/faceswap#faceswap-has-ethical-u...


Deep analysis of the body for any visible signs of tumors or diseases. Stroke detection software. Home and car entry without needing to carry any keys. Much faster entry into airports, where we need passports or security checks anyway. Lots of entertainment possibilities.


Agree that this feels like a tool for malicious intent. But, maybe reviving dead celebrities or family members? Either way this tech seems inevitable, nerds showing off their skills to other nerds.

Next question: does it make CCTV footage obsolete as a form of admissible evidence in court?


What if you had hardware that generated per-frame signatures using public-key cryptography? Private keys and hardware can be compromised, but at least it raises the bar and weeds out casually malicious actors. Having said that, I have no idea whether such devices exist or how they might work.
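A minimal sketch of the idea in Python, using a keyed HMAC as a stand-in for real public-key signing (an actual device would use an asymmetric scheme like Ed25519 with the private key in a secure element; `DEVICE_KEY` and the function names here are illustrative):

```python
import hashlib
import hmac

# Hypothetical per-device secret; a real camera would hold a private
# key in tamper-resistant hardware and publish the public key instead.
DEVICE_KEY = b"secret-key-burned-into-camera"

def sign_frame(frame_bytes: bytes, frame_index: int) -> bytes:
    """Sign one frame together with its index, so frames can't be
    silently reordered or dropped without breaking verification."""
    msg = frame_index.to_bytes(8, "big") + frame_bytes
    return hmac.new(DEVICE_KEY, msg, hashlib.sha256).digest()

def verify_frame(frame_bytes: bytes, frame_index: int, sig: bytes) -> bool:
    expected = sign_frame(frame_bytes, frame_index)
    return hmac.compare_digest(expected, sig)

frame = b"\x00" * 1024            # stand-in for raw frame data
sig = sign_frame(frame, 0)
assert verify_frame(frame, 0, sig)
assert not verify_frame(b"\x01" + frame[1:], 0, sig)  # tampered frame fails
```

Binding the frame index into the signed message is what stops an attacker from reshuffling individually valid frames into a new sequence.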


Point your signature-per-frame camera at a TV screen showing a faked video. Instant signed fake video.


Habituating people to the fact that this exists, therefore lessening the impact of malicious uses.


First Person Artist Museum experiences [1]

[1] https://www.dezeen.com/2019/05/24/salvador-dali-deepfake-dal...

[EDITED]


I got a good laugh out of this one - https://www.youtube.com/watch?v=1kJO_L9gr5Y

probably only funny if you watch these folks regularly


Putting your own face (or a friend's face) into a movie or TV show.

Or going the other way, use a real video of your friend and replace their face with a famous actor.


Personalized meme gif generation? Send your friend a gif with their face on Bruce Lee to tell them they are kicking ass


Movies/Media with you, your friends and family as the cast?

Personalized entertainment.


Say you need to replace an actor in a movie. Great potential there


Seems useful for memes?


Personalized advertising: see yourself / your family using/enjoying products you're yet to purchase. Cars, clothing, trips...


Porn.


haha obvious answer.


Is it the same technology used by Zao? The example in the gallery does not look as good as the video here https://twitter.com/AllanXia/status/1168049059413643265


I think this tech is too complex to run on mobile devices in 8 seconds for face transfer onto video from one selfie.

What I think Zao does is preprocess the videos (manually, or with highly accurate facepoint detection). They pre-calculate the transforms (standard face-morphing algorithms with opacity/alpha tweaks) and the shading depending on the scene lighting. Then they just need a good frontal selfie (or do some frontalization) plus keypoint detection, and the rest can be rendered/computed with few resources, following a pre-defined script.

If more advanced than facemorphing, then perhaps something more like: https://github.com/facebookresearch/supervision-by-registrat... (pre-fitting a 3D face mask, then texturing it with your selfie)


Yeah, I think the big question is if Zao only allows pre-selected "scenes" that they have already done the processing on or if they allow you to upload any video.

From the results, I think you are exactly right in how they are accomplishing those videos.


Hollywood has been doing this for a while, for stunt performers. Sometimes well, sometimes badly. We'll be seeing more of that as it works better.

As for the political implications, go watch "Wag the Dog" again.


This technology has been posted on HN before, and the comments were all too similar to these. I understand why a lot of people are falling into the FUD-hole, but we could pretty easily solve the issues mentioned in the comments by inserting a digital key (similar to an SSL certificate) into an image/video upon initial upload, or even upon creation, maybe using some sort of ledger to verify integrity. I'm not too worried.


What does that verify? Assuming the key hasn't been compromised, it verifies that the holder of the key uploaded a file at a particular time. It still doesn't show that the video itself is genuine, although it does let us fall back on the trustworthiness of the source, which is better than nothing.


I used to work for a company that provided verifiable audio and video (back in the pre-digital days), mainly for law enforcement. I remember discussing how it'd be done digitally: a constant stream of verification (e.g. a hash of the last second), all encoded by a key. It's pretty much how the analogue version worked. If they're still around, I'm sure that's what they're doing.
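The digital version of that scheme is essentially a hash chain: each segment's hash covers both its own data and the previous segment's hash, so any cut, splice, or edit breaks every subsequent link. A rough sketch (plain SHA-256 standing in for whatever keyed scheme a real product would use):

```python
import hashlib

def chain_hashes(segments):
    """Hash each one-second segment together with the previous hash,
    so truncating or splicing the recording invalidates everything
    after the edit point."""
    prev = b"\x00" * 32  # fixed genesis value
    out = []
    for seg in segments:
        h = hashlib.sha256(prev + seg).digest()
        out.append(h)
        prev = h
    return out

audio = [b"second-1", b"second-2", b"second-3"]
original = chain_hashes(audio)
tampered = chain_hashes([b"second-1", b"edited!!", b"second-3"])
assert original[0] == tampered[0]   # untouched prefix still matches
assert original[1] != tampered[1]   # the edit breaks the chain here...
assert original[2] != tampered[2]   # ...and everything after it
```

In practice each link (or at least the final one) would also be signed with a key, so an attacker can't simply recompute the whole chain over doctored footage.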


Am I the only one that cries when seeing non PEP8 compliant Python code? Why do data scientists mistreat Python that way?


You are the only one.


On a (somewhat) related note, I'm working on face recognition with homomorphic encryption, and therefore without compromising user privacy. The bold goal is the first privacy-preserving video camera. If you find this interesting, I would love to chat about it.


I'd love to hear more about it & see the code.


I'm surprised no one's bringing up the precedent of Photoshop. For years we've culturally understood that pictures may not tell the whole story, and this is exactly the same, down to results that can't quite escape the uncanny valley.


On a scale of 1 to 10, how much easier does this make Deepfakery?


Wasn't deepfake revenge porn enough to give this a second thought?


Researchers and prototyping engineers generally don't seem concerned with ethics or consequences these days. Hard to be sure whether it's due to a pressure to publish or interest in chasing fame/revenue but there's lots of stuff like this being published lately. Even if some researchers opted not to chase this down there are always more people looking to push the boundaries.

The natural conclusion goes past revenge porn to fake videos of beheadings, murders, etc. that can be sent to friends or family members who aren't technically savvy enough to recognize them as fakes. Not to mention the existing phenomenon of blaming random civilians for murders and terror attacks; this will be far more damaging when paired with convincing fakes. It'll probably claim some lives.


Is innovation supposed to pause for society to catch up every time something drastic is created or discovered?


Nuclear tech required a rethink. So maybe yes?


Fair!



