
Any Deepfake detection tool would also be a GAN sparring partner for the Deepfake makers.



If you consider this a two-sided game, then the fakes can quickly become so good that hardly anyone can tell the difference.

And I'm not just talking about image fakes. You could have MCTS search out the most persuasive arguments for ludicrous statements, and other paths to justify fake claims with arguments that look just like real ones.


That is the point of a GAN. We must be missing something about the problem space and/or human cognition, or their output would already be indistinguishable from reality to all humans.


Yep, they're a 3D reconstruction of points from a 4D space (massively simplifying, since the 2D video frames themselves are high-dimensional data), and that's just the video. Bring a 3D camera into the game to change the underlying distribution and see if hilarity ensues IMO.


This doesn't automatically mean GANs can outpace any detection method.


Furthermore, if either network outpaces the other by too much, training breaks down: the Generator stops learning (its gradients vanish against a too-strong Discriminator) or, worse, degenerates into mode collapse. Generators and Discriminators have to be close to each other in capability for either to get anywhere.
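
A minimal sketch of the alternating updates being described, in PyTorch; the toy MLPs, fake "real data", and hyperparameters are placeholders I'm assuming, not anything from a real deepfake pipeline:

    # Alternating GAN updates: if either loss pins near zero for long,
    # the other network's gradients become uninformative and training
    # stalls or mode-collapses.
    import torch
    import torch.nn as nn

    latent_dim, data_dim = 16, 64
    G = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim))
    D = nn.Sequential(nn.Linear(data_dim, 128), nn.ReLU(), nn.Linear(128, 1))
    opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
    bce = nn.BCEWithLogitsLoss()

    def real_batch(n=32):
        return torch.randn(n, data_dim) + 3.0  # stand-in for real data

    for step in range(1000):
        # Discriminator step: separate real from (detached) fake
        real = real_batch()
        fake = G(torch.randn(real.size(0), latent_dim)).detach()
        d_loss = (bce(D(real), torch.ones(real.size(0), 1))
                  + bce(D(fake), torch.zeros(fake.size(0), 1)))
        opt_d.zero_grad(); d_loss.backward(); opt_d.step()

        # Generator step: fool the current discriminator
        fake = G(torch.randn(32, latent_dim))
        g_loss = bce(D(fake), torch.ones(32, 1))
        opt_g.zero_grad(); g_loss.backward(); opt_g.step()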


If the detection methods are publicly available, then you simply incorporate the detection method into your training regimen.

Effective detection methods may end up being closely guarded secrets.
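
A minimal sketch of what "incorporate the detection method into your training regimen" could look like when the published detector happens to be differentiable (it just becomes a second, frozen adversary); `public_detector` is a made-up stand-in, not a real tool:

    import torch
    import torch.nn as nn

    # Hypothetical published detector: a frozen network whose logit > 0
    # means "fake". Its parameters are frozen, but gradients still flow
    # through it back into the generator.
    public_detector = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 1))
    for p in public_detector.parameters():
        p.requires_grad_(False)

    generator = nn.Sequential(nn.Linear(100, 3 * 64 * 64), nn.Tanh())
    opt = torch.optim.Adam(generator.parameters(), lr=1e-4)
    bce = nn.BCEWithLogitsLoss()

    for step in range(100):
        z = torch.randn(8, 100)
        fake = generator(z).view(8, 3, 64, 64)
        # Push the frozen public detector toward labelling the fakes "real"
        loss = bce(public_detector(fake), torch.zeros(8, 1))
        opt.zero_grad()
        loss.backward()
        opt.step()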


They need to be publicly available and differentiable to be usable as an opponent for a GAN.

Pretty much nothing except other neural networks is differentiable, unless you put effort into designing it to be.


It's feasible to design a reinforcement-learning-based network that uses the output of a non-differentiable deepfake detector as a component of the loss function.
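
One way that could work, as a rough sketch under my own assumptions: treat the black-box detector's score as a reward and use a score-function (REINFORCE) update, so no gradient ever has to pass through the detector. `black_box_detector` here is a dummy stand-in:

    import torch
    import torch.nn as nn

    def black_box_detector(images: torch.Tensor) -> torch.Tensor:
        # Dummy non-differentiable detector: per-image probability of "fake"
        with torch.no_grad():
            return torch.sigmoid(images.mean(dim=(1, 2, 3)))

    # The generator parameterises a Gaussian over pixels, so sampling from
    # it gives a stochastic policy that REINFORCE can be applied to.
    generator = nn.Sequential(nn.Linear(100, 3 * 32 * 32), nn.Tanh())
    opt = torch.optim.Adam(generator.parameters(), lr=1e-4)

    for step in range(100):
        z = torch.randn(16, 100)
        mean = generator(z).view(16, 3, 32, 32)
        dist = torch.distributions.Normal(mean, 0.1)
        images = dist.sample()                            # no gradient here
        log_prob = dist.log_prob(images).sum(dim=(1, 2, 3))

        reward = 1.0 - black_box_detector(images)         # "looks real" reward
        advantage = reward - reward.mean()                # simple baseline
        loss = -(advantage * log_prob).mean()             # REINFORCE objective

        opt.zero_grad()
        loss.backward()
        opt.step()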


And can you imagine the impact of false positives? A prosecutor has the perp on video committing a heinous offense, but it's still insufficient evidence because of a software bug.


We can dream of juries that wise... haven't seen one yet though, have we?


Which means we would have to rely on an "Authority" whose verdicts cannot be independently verified, because we do not know the secret sauce they are using.


Hence, for it to be effective, they probably shouldn't make it public.


But then everyone has to trust that whatever the US Military says about the video is true, which kinda ruins the point.


Or: they only make it public once the discriminator demonstrates a large margin, to reduce the potential training signal.



