Sure, for a casual observer, these new methods for generating videos appear convincing, but is that the right bar to judge "ability to fake evidence"?
As far as I know, there have always been more sophisticated techniques and forensics for determining whether an image is doctored, and likewise for video. I haven't seen any research on fooling those methods yet, and I would bet that naive neural-network implementations for generating video leave very obvious "neural network" artifacts. Of course, this is still new technology, so it will obviously get better at fooling our other tools over time too, but as of right now, I don't think the clamoring that "all evidence can now be faked" is all that justified.
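To make that concrete, here is a minimal sketch of one classic still-image forensic check, Error Level Analysis: re-save a JPEG at a known quality and look at where the recompression error is uneven, since spliced-in regions tend to stand out. This assumes the Pillow library, the file names are placeholders, and real forensic toolchains layer many such checks; this is just to show the kind of signal involved.

    from PIL import Image, ImageChops

    # Error Level Analysis sketch: recompress the image at a fixed JPEG
    # quality and inspect the per-pixel difference. Pasted or retouched
    # regions have a different compression history, so their error level
    # often differs visibly from the rest of the frame.
    original = Image.open("suspect.jpg").convert("RGB")  # placeholder path
    original.save("resaved.jpg", "JPEG", quality=90)
    resaved = Image.open("resaved.jpg")

    diff = ImageChops.difference(original, resaved)

    # The raw differences are faint; scale them up so they're visible.
    extrema = diff.getextrema()            # per-channel (min, max) pairs
    max_diff = max(hi for _, hi in extrema) or 1
    ela = diff.point(lambda v: v * (255.0 / max_diff))
    ela.save("ela.png")                    # bright patches warrant a closer look

A generated video would have to pass this kind of check frame after frame, plus temporal-consistency analysis, which is a much higher bar than fooling a casual viewer.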
The question becomes: does that matter? If everything needs to go through forensic experts, that works for courts. For media, who knows. Maybe it just becomes another way to challenge anything.
There's always been a certain value in letting people see things for themselves and exercise their non-expert judgement directly, en masse. Think of that iconic image of a Vietnamese girl fleeing a napalm attack, or (more recently) the drowned refugee child on that Turkish beach. These images had value beyond the factual information they carried.
In any case, I think the Photoshop/stills case is a little heartening. We get an occasional false image, but overall it hasn't created some sort of massive truth-crisis. Other things have happened to truth, but Photoshop wasn't at the centre of them.
But that was already true with words alone. Lives are already ruined every day by baseless claims with zero evidence, no video required. People who already research the source of words before forming an opinion will now research the source of videos, too. Those who are fooled by written and spoken lies will also be fooled by animated ones.
Words have almost no impact compared to visual imagery. Even if the words come from the mouth or pen of a respected individual, you can still doubt them, and people usually do.
However, whether we like it or not, everything we see is, at least on a subconscious level, perceived as real.
My assumption would be that we have no evolutionary resistance to that. I don't know if there even can be. People have been lying since our first day of existence, but until very recently, nothing in nature could create arbitrary images that look 100% real.
It doesn't matter whether it's fake or not: if you see a convincing video of your president fucking a pig, you'll never forget it, and you'll always attach that image to him. Similarly, people often can't differentiate between an actor's character and the real person. Whether it's Macron in his campaign film or Gandalf in The Lord of the Rings, you associate the person with your simulated experience.
Animated lies are on a completely different tier of immersiveness and deception than speech or writing. You can't reduce this development to "just another lie". It's very close to the most convincing lies possible, and it's definitely very dangerous.
In addition to this, despite all our cutting-edge forensics, we still fumble the ball in the courts, convicting on sham forensics and sending innocent people to jail.
As a defendant, you have to pay forensics experts -- highly -- to research and produce evidence of manipulation. That's if they're willing to take on your case at all; they have their own reputations and motives.
All of this takes us further into an effectively tiered justice system. Can you afford to challenge the evidence, even if you think it's doctored?
If you are a cause célèbre -- or a good vehicle for a non-profit -- maybe someone else will pay. Otherwise...