C2PA is more about tracking origin and edit lineage, e.g. for press photographers. It does not determine whether an image is a deepfake. Create a deepfake, then take a picture of it with a C2PA camera, and now you've got a C2PA-stamped deepfake.
But suppose you take a C2PA photo and then edit its contents with, e.g., generative in-fill. That edit gets noted in an updated C2PA stamp and can be audited later on.
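Roughly what that audit looks like, as a minimal sketch: the C2PA spec records edits in an "actions" assertion inside the manifest. The dict below is hand-rolled for illustration (real manifests are CBOR/JUMBF structures read via a C2PA SDK, and the field names here beyond the action labels are assumptions), but it shows the kind of edit history a verifier could pull out.

```python
# Illustrative only: a hand-rolled dict standing in for a parsed C2PA manifest.
# The "c2pa.actions" label and action names mirror the spec's naming scheme;
# the surrounding structure and tool names are made up for the example.
manifest = {
    "claim_generator": "ExampleEditor/1.0",  # hypothetical editing tool
    "assertions": [
        {
            "label": "c2pa.actions",
            "data": {
                "actions": [
                    {"action": "c2pa.created", "when": "2024-05-01T09:00:00Z"},
                    # the generative in-fill step is recorded as its own action
                    {"action": "c2pa.edited", "when": "2024-05-01T10:12:00Z",
                     "softwareAgent": "ExampleEditor Generative Fill"},
                ]
            },
        }
    ],
}

def audit_actions(manifest: dict) -> list[str]:
    """Return a human-readable edit history from the actions assertion."""
    history = []
    for assertion in manifest.get("assertions", []):
        if assertion.get("label") != "c2pa.actions":
            continue
        for act in assertion["data"]["actions"]:
            tool = act.get("softwareAgent", "unknown tool")
            history.append(f'{act["when"]}: {act["action"]} ({tool})')
    return history

for line in audit_actions(manifest):
    print(line)
```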
Right, but if it’s a breaking-news TikTok video the C2PA data will probably be absent, and then said media co plays the “did we just Fox News smear Dominion?” game. Which isn’t so bad a tradeoff today while C2PA is new.
C2PA might theoretically prevent forgery of the C2PA record itself, but it cannot certify whether the pixels came from a real event or from a trick.
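A toy sketch of that distinction, not the real C2PA signing flow: the signature only proves the record's bytes weren't altered after signing. Whether the camera photographed a real scene or a printed deepfake, the verifier sees the same "valid" result. (Uses Ed25519 from the `cryptography` package; the manifest contents are placeholders.)

```python
# Toy illustration: a signature over the manifest proves the record wasn't
# tampered with after signing, not that the depicted event actually happened.
from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.exceptions import InvalidSignature

signing_key = ed25519.Ed25519PrivateKey.generate()
verify_key = signing_key.public_key()

# Whatever the camera saw -- a real event or a photographed deepfake --
# the manifest and its signature look exactly the same to a verifier.
manifest_bytes = b'{"asset_hash": "sha256:...", "captured_by": "C2PA camera"}'
signature = signing_key.sign(manifest_bytes)

try:
    verify_key.verify(signature, manifest_bytes)
    print("Record is intact and authentically signed.")
except InvalidSignature:
    print("Record was modified after signing.")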
Both newspapers and social media routinely re-encode the original camera image into a smaller size or a more suitable format. How is this system going to work in practice?
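One concrete version of that problem, assuming the manifest is bound to a hash over the asset's bytes (hedged sketch; the actual C2PA hard-binding rules are more involved): any re-encode changes the bytes, so the original binding no longer matches and the pipeline has to carry the manifest forward and re-sign, or lose it. Pillow's JPEG re-save stands in for whatever transcode a newspaper or platform applies.

```python
# Sketch of why re-encoding breaks a byte-level binding.
import hashlib
import io
from PIL import Image

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Stand-in "original" image; in reality this would be the camera's JPEG.
original = io.BytesIO()
Image.new("RGB", (640, 480), color=(200, 30, 30)).save(original, format="JPEG", quality=95)
original_bytes = original.getvalue()

# The platform re-encodes to a smaller, lower-quality version.
resized = io.BytesIO()
Image.open(io.BytesIO(original_bytes)).resize((320, 240)).save(resized, format="JPEG", quality=70)
resized_bytes = resized.getvalue()

# False: the hash the original stamp was bound to no longer matches the bytes served.
print(sha256_hex(original_bytes) == sha256_hex(resized_bytes))
```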