Sorry, under which of these other moderation regimes does the organisation in question transmit CSAM from a client device to their own servers? To my knowledge, Apple is the only one doing so.
Facebook checks for potential CSAM when you upload from your client device (sometimes an iPhone) to their servers, or after it's on their system and a user flags it.
Instagram also checks once you upload.
These are all transmissions.
Apple checks if you upload. If you don't upload or attempt to upload to their servers, no check.
All of these are to flag potential CSAM. Some do more - general nudity, harmful-content filters, etc. Some content is auto-blocked, some is forwarded for review and reporting, etc.
In almost all cases flagging is part of, or connected to, uploading to a third party's servers. A CSAM flag is not a conviction - some services do a human review before submission and some don't. Most situations where folks can use flagging to hide content get a human review at some level, to avoid abuse of the flagging system itself.
> Facebook checks for potential CSAM when you upload from your client device (sometimes an iPhone) to their servers, or after it's on their system and a user flags it.
That's not the issue, according to the linked source.
Instagram transmits all photos and assumes they're not CSAM until flagged - that's safe content. Apple transmits ONLY CSAM - that's a no-no, because they're assuming it is CSAM and you can't transmit CSAM.
You can't knowingly transmit CSAM. Transmitting a photo pre-scan is safe (if you assume any given photo is not CSAM); transmitting post-scan is dangerous if you filter for CSAM.
Apple uploads all photos to iCloud Photos (just as Google does). This includes CSAM and non-CSAM alike.
The system keeps a running counter of how much content is getting flagged as possible CSAM. They don't even get an alert about anything until you hit certain thresholds (rough sketch below). And at that point no one has reviewed anything at all; the system is flagging things up the way other systems do.
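To make the counter-plus-threshold point concrete, here's a tiny illustrative Swift sketch - not Apple's code, just the shape of the idea, using 30 as a stand-in threshold:

    // Illustrative only: a per-account tally that stays silent below a threshold.
    struct MatchTally {
        let threshold: Int
        var matches = 0

        // Record one more upload that the matching system considers a possible hit.
        mutating func record(possibleMatch: Bool) {
            if possibleMatch { matches += 1 }
        }

        // Nothing is surfaced for human review until the threshold is crossed.
        var readyForReview: Bool { matches >= threshold }
    }

    var tally = MatchTally(threshold: 30)   // hypothetical threshold
    tally.record(possibleMatch: true)
    print(tally.readyForReview)             // false until 30 possible matches accumulate

Below that threshold there is nothing for a human to look at; the flags just accumulate the way they do in any other moderation pipeline.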
Are you sure (legally) that one can't review flags from a moderation system? That is routine currently. No one is knowingly doing anything. Their system only describes being alerted to counts of possible CSAM.
Is your goal that Apple go straight to automatic child porn reports, with no human involvement at all? At scale (billions of photos) that's going to be a fair number of false positives, with potentially very serious consequences.
The current approach is that images are flagged in various ways, folks look at them (yes, a large number have problems), then next steps are taken. But the flags are treated as possible CSAM.
Please look into all the false positives in YouTube's screening before you jump from a flag to a sure thing. These databases and systems are not perfect.
I'm not a lawyer, and I want Apple to do nothing, especially not scan my device.
I'm saying the linked article under discussion says you can't transmit content you KNOW (or suspect) is CSAM. You don't assume that all your customers' content is CSAM, but post-scan, you should assume it is.
The only legal way to transmit it (according to the article) is to the government authorities.
I don't know the legal view on "false positive" suspicion vs. the legality of transmitting. That seems like a gamble. I don't have a further opinion on it, since IANAL and this is a very grey legal area.
Apple is very clear that they don't know anything when photos are uploaded. The system does not even begin to tell them that some may contain CSAM until it has had around 30 or so matches. The jump from this type of system (variations of which everyone else uses) to some kind of child porn charges is such a reach it's really mind-boggling. Especially since the very administrative entities involved are supporting it.
A strong claim (Apple committing CSAM felonies) should be backed by reasonably strong evidence.
Here we have a blog post where they've talked to someone ELSE who (anonymously) has reached some type of legal conclusion. The QAnon claims in this area (there are lots) follow a somewhat similar approach - someone heard from someone that something someone did is crime X. It's a weak basis for these legal conclusions.
Apple is attaching a ticket to images as the user uploads to iCloud. If enough of these tickets indicate possible CSAM, they allow an unlock key to be built, and the flagged images get unlocked and checked (a toy sketch of that threshold idea is below). It's still the user who has turned on iCloud and uploaded the images.
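If it helps to picture the ticket/unlock-key part: it's essentially a threshold secret-sharing idea, where each flagged upload carries one share of a key and the key can only be rebuilt once enough shares exist. Here's a toy Swift sketch of that idea (simplified Shamir-style sharing over a small prime field; the threshold of 30, the field size, and all the names are illustrative, not Apple's actual protocol):

    // Toy Shamir-style k-of-n secret sharing over a small prime field.
    // Each flagged upload would carry one share ("ticket"); the key only
    // comes back once k shares exist. Numbers and names here are made up.
    let p: Int64 = 2_147_483_647   // prime modulus (2^31 - 1), toy-sized

    func powMod(_ base: Int64, _ exp: Int64) -> Int64 {
        var r: Int64 = 1, b = base % p, e = exp
        while e > 0 {
            if e & 1 == 1 { r = r * b % p }
            b = b * b % p
            e >>= 1
        }
        return r
    }

    // Modular inverse via Fermat's little theorem (p is prime).
    func inverse(_ a: Int64) -> Int64 { powMod(a, p - 2) }

    // Split `secret` into n shares; any k of them reconstruct it, fewer reveal nothing.
    func makeShares(secret: Int64, k: Int, n: Int) -> [(x: Int64, y: Int64)] {
        let coeffs = [secret] + (1..<k).map { _ in Int64.random(in: 1..<p) }
        return (1...n).map { i -> (x: Int64, y: Int64) in
            let x = Int64(i)
            var y: Int64 = 0, xPow: Int64 = 1
            for c in coeffs {
                y = (y + c * xPow) % p
                xPow = xPow * x % p
            }
            return (x: x, y: y)
        }
    }

    // Lagrange interpolation at x = 0 recovers the secret from any k shares.
    func reconstruct(_ shares: [(x: Int64, y: Int64)]) -> Int64 {
        var secret: Int64 = 0
        for (i, s) in shares.enumerated() {
            var num: Int64 = 1, den: Int64 = 1
            for (j, t) in shares.enumerated() where j != i {
                num = num * (p - t.x) % p
                den = den * ((s.x - t.x + p) % p) % p
            }
            secret = (secret + s.y * num % p * inverse(den)) % p
        }
        return secret
    }

    let unlockKey: Int64 = 123_456_789                           // stand-in for the real key
    let tickets = makeShares(secret: unlockKey, k: 30, n: 1_000) // one share per flagged upload
    print(reconstruct(Array(tickets.prefix(29))) == unlockKey)   // false: below threshold (with overwhelming probability)
    print(reconstruct(Array(tickets.prefix(30))) == unlockKey)   // true: threshold reached

The point of a construction like this is that below the threshold the server holds shares it mathematically can't do anything with, so the threshold isn't just a policy promise.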
The one odd thing I don't get: it would be a lot EASIER to just scan everything once it's in the cloud itself.
Why go to this trouble to avoid looking at users' photos in the cloud, set these thresholds, etc.? You'd only need to scan on device if for some reason you blocked your own ability to scan in the cloud (i.e., for E2E-encrypted photos - which I don't think users actually want).
Something like all of iCloud getting E2EE would be a big feature and would likely only be announced at an event. I agree, if the on-device CSAM scanning isn't followed by something else, it seems like a lot of work and PR flak for little gain.
Right - the system is actually quite complex, all to blind Apple to something that currently sits on their own servers under their own keys. I mean, they can (and maybe will) just scan it directly?