
C2PA is more about tracking origin and edit lineage for e.g. press photographers. It does not determine if an image is a deepfake or not. Create a deepfake then take a picture of it with a C2PA camera, now you've got a C2PA-stamped deepfake.

But suppose you take a C2PA photo and then edit its contents with e.g. generative in-fill. That edit gets recorded in the C2PA manifest and can be audited later on.
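To make the "edit lineage" idea concrete, here's a toy sketch of an append-only provenance chain: each action signs the new pixels plus the hash of the previous manifest, so a later auditor can walk the chain. This is illustrative only; real C2PA manifests use JUMBF containers with COSE signatures and X.509 certificates, not this hypothetical HMAC scheme, and the key name and action labels below are made up.

```python
import hashlib
import hmac
import json

SIGNING_KEY = b"camera-or-editor-secret"  # hypothetical key, not a real C2PA credential

def sign_action(prev_manifest: bytes, action: str, pixels: bytes) -> bytes:
    """Append one provenance entry chaining back to the previous manifest."""
    entry = {
        "action": action,  # e.g. capture vs. generative in-fill
        "pixels_sha256": hashlib.sha256(pixels).hexdigest(),
        "prev_sha256": hashlib.sha256(prev_manifest).hexdigest(),
    }
    body = json.dumps(entry, sort_keys=True).encode()
    sig = hmac.new(SIGNING_KEY, body, hashlib.sha256).hexdigest()
    return json.dumps({"entry": entry, "sig": sig}).encode()

# Capture, then a generative in-fill edit; each step is chained and signed,
# so the in-fill can't be silently dropped from the history later.
m1 = sign_action(b"", "created", b"raw sensor pixels")
m2 = sign_action(m1, "edited/generative-fill", b"edited pixels")
```

An auditor verifies by recomputing each entry's HMAC and checking that `prev_sha256` matches the hash of the prior manifest.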




> Create a deepfake then take a picture of it with a C2PA camera, now you've got a C2PA-stamped deepfake.

Okay but then it's a deepfake from the BBC or some other known source, which would be bad for its reputation if people found out.


Right, but if it's a breaking-news TikTok video the C2PA data will probably be absent, and then said media co plays the "did we just Fox News smear Dominion?" game. Which isn't so bad a tradeoff today while C2PA is new.

C2PA might prevent forgery of the C2PA record itself, but it cannot certify whether the pixels came from a real event or from a trick.


Sure, but you can tell whether it's from an organization you trust not to fake stuff.


Could it be used to track down journalists or prove that a certain journalist took a specific photograph?

Sounds like the photographic equivalent of guns that put unique stamps on their fired shells - theoretically making it easier to ID the shooter.


Both newspapers and social media routinely reconvert the original camera image into some smaller size or more suitable format. How is this system going to work in practice?


The newspapers and/or social media would sign their scaled pictures. Presumably they could also provide the original for external validation as well.
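A minimal sketch of that re-signing idea: the publisher's record for the downscaled copy carries the hash of the original it was derived from, so anyone holding the original can check the link externally. Real C2PA expresses this with "ingredient" assertions inside a signed manifest; everything here (function names, field names, the publisher string) is a made-up illustration of the concept.

```python
import hashlib

def derived_record(original: bytes, resized: bytes, publisher: str) -> dict:
    """Publisher re-signs a derived asset, referencing the original by hash."""
    return {
        "publisher": publisher,  # who vouches for the derived copy
        "action": "resized",
        "original_sha256": hashlib.sha256(original).hexdigest(),
        "derived_sha256": hashlib.sha256(resized).hexdigest(),
    }

def validates_against_original(record: dict, original: bytes) -> bool:
    """External check: does this record really reference that original?"""
    return record["original_sha256"] == hashlib.sha256(original).hexdigest()

orig = b"full-resolution jpeg bytes"
small = b"downscaled jpeg bytes"
rec = derived_record(orig, small, "example-newspaper")
```

The trust shifts to the publisher's signature on the derived copy, while the hash link lets a skeptic who obtains the original confirm the derivation claim.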



