This is the first time I've heard of C2PA authentication[0]. It looks like they are digitally signing images in the cameras now to create a chain of authenticity. There's even TruePic[1], an authentication service that will show people whether an image came from one of these cameras, overlaid on the image (or video) with some JavaScript. Interesting way to fight AI images and deepfakes. They are also letting people register their deepfakes[2], though I'm not sure why someone would do that.
C2PA is more about tracking origin and edit lineage for e.g. press photographers. It does not determine if an image is a deepfake or not. Create a deepfake then take a picture of it with a C2PA camera, now you've got a C2PA-stamped deepfake.
But suppose you take a C2PA photo and then edit its contents with e.g. generative in-fill. That action gets noted in the updated C2PA manifest and can be audited later on.
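Conceptually it's a signed hash chain: each tool appends a manifest that binds the new pixels to the previous manifest and signs it. A toy sketch in Python (this is the idea only, not the actual C2PA wire format; the keys and action names are made up):

    import hashlib, json
    from cryptography.hazmat.primitives.asymmetric import ed25519

    def manifest(image_bytes, action, prev, key):
        # Bind the current pixels and the previous manifest into one
        # signed record, so an auditor can walk the chain backwards.
        body = json.dumps({
            "image_sha256": hashlib.sha256(image_bytes).hexdigest(),
            "action": action,
            "prev_sha256": hashlib.sha256(prev).hexdigest() if prev else None,
        }).encode()
        return body, key.sign(body)

    camera_key = ed25519.Ed25519PrivateKey.generate()  # stand-in for the camera's key
    editor_key = ed25519.Ed25519PrivateKey.generate()  # stand-in for the editor's key

    m1, s1 = manifest(b"raw capture", "created", None, camera_key)
    m2, s2 = manifest(b"in-filled pixels", "edited", m1, editor_key)

    # Auditing: verify each signature, then follow the hash link m2 -> m1.
    editor_key.public_key().verify(s2, m2)
    camera_key.public_key().verify(s1, m1)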
Right, but if it's a breaking-news TikTok video the C2PA data will probably be absent, and then said media co plays the "did we just Fox News smear Dominion?" game. Which isn't so bad a tradeoff today, while C2PA is new.
C2PA might theoretically prevent forgery of the provenance record, but it cannot certify whether the pixels happened due to a real event or due to a trick.
Both newspapers and social media routinely reconvert the original camera image into some smaller size or more suitable format. How is this system going to work in practice?
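Concretely, any signature over the raw file bytes dies at the first resize. A quick sketch of the problem (Python with Pillow; sizes and qualities are arbitrary):

    import hashlib, io
    from PIL import Image

    img = Image.new("RGB", (4000, 3000), "gray")  # stand-in for a camera JPEG
    buf = io.BytesIO()
    img.save(buf, "JPEG", quality=95)
    original = buf.getvalue()

    # What a newspaper CMS or social network routinely does on upload:
    buf2 = io.BytesIO()
    img.resize((800, 600)).save(buf2, "JPEG", quality=80)
    republished = buf2.getvalue()

    # Different bytes, so a signature over the original no longer verifies.
    print(hashlib.sha256(original).digest() == hashlib.sha256(republished).digest())  # False

As I understand it, C2PA's answer is that the resizing tool itself has to append and sign a new manifest pointing at the old one, which means the whole publishing pipeline has to participate.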
Considering this might be targeted by nation-state adversaries for propaganda purposes, I don't see it being effective, at least under that threat model. AFAIK, even if the private keys are buried in some trusted-computing chip, there's very little a focused ion beam can't do. And that's the extreme security scenario; I doubt those cameras even reach that.
Assuming they use different embedded keys per camera (which is unfortunately unlikely) instead of the same key on every chip, this can be defeated by demanding to see the intact camera before accepting the signature. Ion beam assisted probing is a destructive process.
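For the per-camera variant, the usual shape is a factory CA that certifies each device's own key, so one extracted key burns one camera, not the fleet. A rough sketch (Python; the CA and "certificate" format here are illustrative, not whatever Sony actually ships):

    from cryptography.hazmat.primitives.asymmetric import ed25519

    ca_key = ed25519.Ed25519PrivateKey.generate()          # manufacturer CA
    camera_key = ed25519.Ed25519PrivateKey.generate()      # unique per device
    camera_pub = camera_key.public_key().public_bytes_raw()
    device_cert = ca_key.sign(camera_pub)                  # "this key is a real camera"

    image = b"...jpeg bytes..."
    image_sig = camera_key.sign(image)

    # Verifier: check the device cert against the CA, then the image signature.
    ca_key.public_key().verify(device_cert, camera_pub)
    ed25519.Ed25519PublicKey.from_public_bytes(camera_pub).verify(image_sig, image)

With a shared key, one successful extraction forces revoking every camera at once; with per-device certs you revoke (or demand to physically inspect) a single serial number.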
Then why can't the three-letter agencies in the US unlock an iPhone? I don't think it's that easy. If it were, all encryption would be useless once you had physical access to the machine.
It only works for things where the key is stored on the chip, or phones where the key is stored in a TPM (or equivalent) and a PIN releases it, as opposed to the typical use of encryption where the full key is entered to unlock. An attacker with physical access can probe the chip, but it's risky: one slip and the key is gone forever. The technique is most useful when all chips of a type share the same key, so you only need to successfully crack one, any one. Ross Anderson's book Security Engineering uses the example of Sky Pay-TV cards: https://www.cl.cam.ac.uk/~rja14/Papers/SEv2-c16.pdf
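A sketch of the two models being contrasted, in Python (the PIN, retry counter, and iteration count are illustrative):

    import hashlib, os, secrets

    # Model A: the key is derived from a passphrase at unlock time.
    # There is nothing resting in silicon to probe for.
    salt = os.urandom(16)
    key_a = hashlib.pbkdf2_hmac("sha256", b"long passphrase here", salt, 600_000)

    # Model B (TPM-style): a full-strength key sits in the chip and a
    # short PIN merely releases it, behind a hardware retry counter.
    stored_key = secrets.token_bytes(32)
    pin, retries = "4831", 10

    def unseal(guess):
        global retries
        if retries == 0:
            raise RuntimeError("zeroized")  # chip wipes the key after N failures
        retries -= 1
        return stored_key if guess == pin else None

    # Physical probing attacks Model B by reading stored_key directly,
    # bypassing the PIN counter; Model A has no stored key to read.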
With all the encryption I've used these past decades, the only time I have entered a full encryption key was for cryptocurrency. Aren't 99% of use cases the case where the key is stored on the client device? It seems like the main deterrent then just comes down to the failure rate of the probes.
On a tangent: a bad actor could then release the HDCP keys for TVs from a big brand like Samsung and effectively invalidate content protection on all the TVs they have already sold (AFAIK those can't be remotely updated). If those keys were then revoked, there would be millions of bricked Samsung TVs.
> Then why can’t the three letter agencies in the US unlock an iPhone?
They almost certainly can, but I believe the point of the FBI making a big song and dance about unlocking that phone (which they did unlock by themselves, btw) was to try to force Apple into allowing TLA backdoors via the court of public opinion.
Useful for source authentication in news to combat fakes. I'm sure that if you don't want your identity associated with a picture there will be plenty of methods to avoid this. Indeed, some journalists will also need anonymity, but perhaps signed metadata on just the GPS location and timestamp would be enough, so any camera aimed at serious photojournalistic reporting will have to have those features.
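Something like this is what I have in mind: the camera signs only the capture metadata, so a verifier learns where and when without learning who. Sketch (Python; the per-camera key and claim format are hypothetical):

    import json, time
    from cryptography.hazmat.primitives.asymmetric import ed25519

    device_key = ed25519.Ed25519PrivateKey.generate()  # hypothetical per-camera key

    claim = json.dumps({
        "gps": [48.8566, 2.3522],   # example coordinates, no owner identity
        "ts": int(time.time()),
    }).encode()

    sig = device_key.sign(claim)
    device_key.public_key().verify(sig, claim)  # raises if location/time were tampered with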
Perhaps less interestingly, single-frame exposure will defeat the accidental recording of sound into pictures, recently reported to be a side effect of rolling shutters.
I think we are going to see more of this. I doubt it's really about "deepfakes": the press doesn't actually care about those as long as they generate clicks and revenue.
It's more to do with surveillance, with being able to more easily track someone trying to play investigative journalist, whistleblower, etc., and show them their place.
For instance, if someone takes a photo of a big-corporation exec handing a brown envelope to a politician, and another journalist then shows it to interested parties as a courtesy before publication, those parties could find out who took the picture and have them killed.
What Sony is playing with here is very dangerous and stupid.
The thing is, it's rather easy to remove this. Even advanced steganographic watermarks fall to current AI and DSP. Non-consensual, forced source ID is a fool's errand.
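To illustrate how fragile the marks can be: a naive LSB watermark doesn't even survive low-amplitude noise, let alone a model trained to launder images. Quick numpy sketch:

    import numpy as np

    rng = np.random.default_rng(0)
    img = rng.integers(0, 256, (64, 64), dtype=np.uint8)

    # Embed a 1-bit-per-pixel watermark in the least significant bits.
    mark = rng.integers(0, 2, img.shape, dtype=np.uint8)
    marked = (img & 0xFE) | mark

    # "Laundering": low-amplitude Gaussian noise, roughly what any
    # re-encode or filter pass introduces anyway.
    noise = rng.normal(0, 2, img.shape)
    laundered = np.clip(marked + noise, 0, 255).round().astype(np.uint8)

    recovered = laundered & 1
    print((recovered == mark).mean())  # ~0.5, i.e. the mark is gone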
OTOH, it's very hard to firmly attribute source ID in a way that cannot be faked (hashed or reconstructed after the fact). In other words, it's easy to disown, hard to own. Repudiation is a one-way function.
Since we're headed for a post-truth, post-trust digital world, the desirable sides of this seem to greatly outweigh the apparent "dystopian downsides".
It seemed fairly dystopian even before getting to the examples. It also seems fairly trivial to bypass, even if only through crude brute-force methods (i.e. screenshots / copying framebuffers).
Is this essentially just enforcement through building a moat around editing applications?
[0] https://c2pa.org/
[1] https://truepic.com/
[2] https://truepic.com/revel/