Completely agree with the final sentences in their conclusion/recommendations:
"In a world where our personal information lies in bits carried on powerful communication and storage devices in our pockets, both technology and laws must be designed to protect our privacy and security, not intrude upon it. Robust protection requires technology and law to complement each other. Client-side scanning would gravely undermine this, making us all less safe and less secure."
Does this count for something like AV, too? I grew up in a world where AV and anti-malware only worked offline / client-side. What about spam filters and AV on mail servers? I often hear a commercial for Crowdstrike on the Darknet Diaries podcast, which is apparently some kind of combination of a SIEM and ML (though that might be marketing). Would that be the panacea according to this paper? Or is this specifically about a client-server model where the server hosts the data? Because the solution to that is rather simple: 1) a FOSS client, 2) public-key cryptography where the private key never enters the server. Sure, a cloud might make that illegal, but that's why the cloud is just someone else's server. Use your own instead.
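To sketch what I mean by (2), here's roughly the idea in Python using PyNaCl; the upload call is just a hypothetical placeholder, and this is only an illustration, not a full design. The private key is generated on the device and never leaves it, so whatever lands on someone else's server is ciphertext only.

```python
# Sketch of (2): encrypt on the client with a key pair whose private half
# never leaves the device, so the server only ever stores ciphertext.
# Requires PyNaCl (pip install pynacl); upload_blob() is a hypothetical
# stand-in for whatever storage API the server exposes.
from nacl.public import PrivateKey, SealedBox

device_key = PrivateKey.generate()  # generated on-device, never uploaded

def encrypt_for_upload(plaintext: bytes) -> bytes:
    """Encrypt locally using the device's public key."""
    return SealedBox(device_key.public_key).encrypt(plaintext)

def decrypt_after_download(ciphertext: bytes) -> bytes:
    """Only the holder of the private key (the device) can do this."""
    return SealedBox(device_key).decrypt(ciphertext)

if __name__ == "__main__":
    blob = encrypt_for_upload(b"my private notes")
    # upload_blob(blob)  # hypothetical: the server only ever sees `blob`
    assert decrypt_after_download(blob) == b"my private notes"
```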
I just disabled Google Play Services on my Android phone to increase privacy... then I started getting spammed with about 10 notifications every 10 seconds (not a joke) telling me that those 10 apps would not work properly without Google Play Services enabled, even though they were in fact working fine... Google and/or LG allowed me to disable 7 of these apps, but the others could not be uninstalled or disabled using the GUI... I had to use ADB to remove them. One of those apps, believe it or not, was the LG phone clock, and another was the calculator.
After I removed all apps that were complaining about missing Google Play Services and installed alternatives for the ones that I needed like the calculator, everything was working fine. (thanks to f-droid for helping me find viable alternatives)
Don't disable google play services, run something like microg (easiest way to do so is probably https://lineage.microg.org/ ). Apps are written to expect you to be spied upon and in order for them to not push you to "fix" that, you must lie to them. Moving entirely to f-droid is not an option for most people, as they use their phones to communicate and doing so involves a certain amount of dirtying yourself.
I don't see why you say not to remove Google Play Services, because I found alternative apps that work just as well. Either way it was, I think, my only option, because my LG G8 is not supported by LineageOS/microg.
Can we coin the word spy-tech and go on with our day?
I read a post here about when radioactive toys were popular, back in the early 1900s. Radioactivity fell so far out of favor that these days we prefer to die of global warming rather than use nuclear power.
In a similar vein, a world where most people go back to pen and paper and watching public TV on bootlegged devices so that they can (try to) escape continuous surveillance by electronic devices is conceivable. Give it enough time and grisly precedents, and companies like Apple and Samsung will only be able to sell new devices to corporate customers.
don't forget Steven M. Bellovin, who's on that list and who was one of the inventors of Encrypted Key Exchange (EKE), an early password-authenticated key exchange built on top of Diffie-Hellman.
Martin Edward Hellman, electrical engineer, was born in 1945.
Richard Hellman started selling jars in 1912. No remote relation apparent. (Specifying just in case of doubt, especially for those unfamiliar with the brand.)
Eventually we’ll see cryptographic attestation of open source binaries on our phones. Until then, all popular phones will run closed source software, and it will be necessary to trust the vendor. Even then, the vendor may also be the chip maker. Apple would do well to look at sourcing an independent chip vendor for their on-device enclaves. That would give them a trust advantage over Android phones.
I don't believe this is a technology problem. This is a legislative problem. We (well, some of us) have the Bill of Rights and the justice system for a reason. We need to codify who owns our data (we do) and what third parties can do with it (nothing unless we say they can, individually).
LEO can't do a damn thing unless they have probable cause. This dragnet BS needs to stop.
We are absolutely empowered to change the rules to suit us.
> Apple would do well to look at sourcing an independent chip vendor for their on-device enclaves. That would give them a trust advantage over Android phones.
They are designing all of the critical chips (SoC and crypto/enclave) themselves already; TSMC only manufactures them. And if there is one company capable of verifying that the chips TSMC produces actually match the designs, it's Apple.
What makes you think cryptographic attestation will favor open source binaries? Don't you think it's much more likely that cryptographic attestation will tilt the field even more in favor of closed-source binaries, given who holds the keys to everyone's kingdom?
I believe law enforcement and spying agencies might be falsely complaining about cryptography. Doing this will make targets rely more on cryptography instead of also using alternative security measures. See: https://en.wikipedia.org/wiki/Dual_EC_DRBG
I wonder if the title refers to the song "Drugs in my pocket" by The Monks, 1979. This song was actually only popular in a few cities in the world, I think Toronto was a big one. I wonder if the author grew up there...
I'm in the libertarian lion's den . . . and I only read the abstract.
What I see from a historical standpoint (pre-cloud, pre-mobile-phone/computer, pre-personal-encryption, etc.) is that anything stored, be it something on paper, something in your house, a safety deposit box, whatever, was available to law enforcement, with controls via the courts or some other mechanism. It was available when there was a legal matter. Is there disagreement that legal matters should allow for full disclosure whether criminal or civil?
Is the problem that law enforcement and other state institutions, through legislative channels and courts, are getting just too much access without legal justification?
Notional idea: could you have a key vault? Only with a court order are the keys released and your devices opened up? Even if not implementable, would that work for most people?
I do get the government mass surveillance aspect and think that needs way more scrutiny and people should be vicious in their defense of themselves and society. But it also smells like we lost that battle as private companies are doing pretty well at surveilling individuals and communities.
Let's rewind to look at your historical perspective properly this time.
In theory, yes, a court could get access to your papers. But for the average person, person-to-person interaction produced no permanent record, and what limited record correspondence did create wasn't nearly as permanent, because keeping it required taking the time to file it and managing a very finite space. Nothing remotely like your present digital life would ever have come to exist to be requested.
Just as important is the cost of such a request. Compare the risk and expense of actually spying on a large population, or of tapping their phones and paying someone to listen to a million boring conversations, to the cost of bugging everyone's communications at once with a box installed for that purpose.
If you consider how feasible it would have been to obtain that much privacy-invading, mostly useless intelligence: historically it wouldn't just have been hard, it would have been impossible. They could have flown to Mars more easily than done that much physical surveillance, unless every third citizen was spying on the other two.
Historically, I suspect such requests were rare outside of settling business matters, where receipts and contracts would be common, or wills and trusts.
If you want to justify some sort of key escrow system you shall not find justification in returning to some hypothetical state of nature that never was.
Neither mass use of unbreakable encryption nor mass surveillance enabled by court order, nor your suggestion bear any resemblance to historical reality.
Furthermore if we can't trust the government not to break its own laws what would keep them from simply ordering your keys confiscated and ordering whomever holds them not to tell you?
The difference is that our computing devices have become an extension of our very being. Our thoughts, actions, and interactions are recorded at a granular level never before possible in history; we often don't know what is actually being recorded, and that data can be stored cheaply, transferred instantly, replicated, and searched for mere pennies.
To expand on your comment, I can go in my file cabinet and flip through papers I have chosen to keep with data that I chose to write down, and then drop it in the trash can. It's physical and tangible and it doesn't have a record of who I called, where I went for breakfast, or what I ordered.
Our founding fathers were very clear on their intent with regards to how they viewed governments imposing on the private lives of citizens.
> Is there disagreement that legal matters should allow for full disclosure whether criminal or civil?
Yes, I don't agree with this.
> Is the problem [...] too much access without legal justification?
That is also a problem, which isn't new (it existed in the time you described in the opening of your post as well).
> Only with a court order are the keys released and your devices opened up?
This doesn't work (and we know it doesn't, there's so many times it failed already because of systemic issues with the idea), so as a hypothetical it only serves to distract from the fundamental truth that it cannot be a solution.
> I do get the government mass surveillance aspect [...] But [...] we lost that battle
I would say your historical reference is flawed. One could just as well have written an encrypted text on a sheet of paper that would be non-trivial to decrypt (e.g. Z-340). The difference, I think, is that it used to be far more effort to create the encrypted text than it is today.
Wait till they start putting noses on those things. In fact, I am surprised no one has done it. It will be super convenient... for the people collecting said data
That's the misconception: the iPhone isn't a standalone good, it's a receiver for services.
Think iCloud, FaceTime, iMessage, push notifications, Find My iPhone, etc.
So Apple doesn't leave once you have the phone; they remain with you. And Apple doesn't call it spying. Smart services always have a tendency to be encroaching.
On one side Apple wants to read your lips to know your wishes in advance, but on the other the authorities demand access to that knowledge.
While the indiscriminate collection of data is another issue entirely, I struggle to see that I, or indeed most of us, are important enough to be focused on in particular, out of the many billions whose data is inevitably collected by various three-letter agencies.
There is the side that says "we should know everything, because we own you" and the side that says "I prefer not to be owned". There is no good-faith way to see both sides of that, that is just avoiding a hard question because it is convenient.
Ah, so because you are not inconvenienced right now it's not an issue. That's not a good way to approach a systemic injustice, except if you look at other people's suffering and can still sleep fine.
While I don't like client-side scanning, that's overly reductive.
"Client side scanning" (both in general, and in the recent Apple kerfuffle) is talking about a network client, that will be talking to servers that are owned by "them." If they wish to enforce rules over what is stored on their server then to enforce that right, the only two choices are to disallow E2EE or to perform client-side scanning.
Really client-side scanning is only up for debate when E2EE is used. The Javascript that checks validity of forms before you submit them is a form of client-side scanning, but most of the time[1] nobody cares because it's data that you intend to send to the server anyways.
1: Inadvertent pastes into fields that phone-home for e.g. autocomplete can reveal otherwise private information, so "most of the time"
But none of these conundrums could exist if Apple had no access to the user's device, nor control over the software running on it. "Who owns your computer" is still the central question; we're just Sapir-Whorfing ourselves around it within the implicit language of walled gardens. "Apple owns your computer" is the unspoken premise, and it's not axiomatic.
There's a huge tangle of things with "Apple owns your computer" but I don't think most of it applies to the icloud question.
If you wanted to store photos in icloud on a Windows machine, you'd be using the Apple icloud client. Apple has at least some control over what software they write and ship does[1]. They can break 3rd party clients almost at will, so if they choose to be hostile to 3rd party clients that control is fairly strong.
Arguing over what amount of control Apple should exercise over what software runs on a device purchased by a consumer is mostly orthogonal to arguing over what amount of control Apple should exercise over what software can connect to their servers.
1: On a general purpose machine, debuggers and emulators can influence what software does, obviously, so the control isn't absolute.
It seems to me that Apple does their best to ensure you end up using iCloud against your will through a series of very confusing opt-out prompts and defaults.
Maybe it would be nice to have a tool that would allow you to see exactly what data the client side scanner is “allowed” to “see”?
Well services generally shouldn't be controlling what software can connect either!
The argument over whose computer it is applies pretty similarly. My computer is not part of their infrastructure, and they should only be controlling the software on their infrastructure.
Apple develops a phone operating system and sells phones that run that operating system. What does it even mean to say “if Apple had no access to the user’s device”?
Yes, Apple is also the OS vendor. What it would mean to say "Apple has no access to the user's device" is, whatever an average, unsophisticated user understands it to mean -- because their informed consent is ethically all that matters here.
It means precisely that Apple has no technical capability to remotely access the device.
It means any (consented) Apple software update leaves behind no hooks or backdoors that enable subsequent, remotely-initiated access.
It means there verifiably exist no code paths that allow remote exfiltration of data, other than those that pass through consent dialogs. The basic distinction between legitimate OS functionality, and malware.
But average users are clearly aware of many high profile iPhone features that inherently involve Apple "remotely accessing the device," assuming you include all cases where the iPhone software can be configured to send data from the device to Apple servers. That's what all iCloud services explicitly do.
Yes, but the distinction between "stores private data E2E encrypted on a secure server" and "uploads private data for Apple employees to review" is a bright line. Informed consent means we can't extrapolate from one to the other, if we pretend to be ethical.
It's not as if Apple's marketing doesn't heavily emphasize the "private", "E2E encrypted" aspects already.
> If they wish to enforce rules over what is stored on their server
The whole point of end-to-end encryption is that what is stored on their server is statistically uniform binary white noise. If they wish to enforce that, there are a plethora of server-side tools (like the Diehard test suite) with which to do so.
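As a rough illustration of what such a server-side check could look like (not a real Diehard battery, just a single byte-frequency chi-square in Python with SciPy), a server could at least sanity-check that stored blobs look like uniform noise:

```python
# Toy server-side check that a stored blob "looks like" uniform random
# bytes, as properly encrypted data should. A single byte-frequency
# chi-square test only; a real deployment would use a full test battery.
from collections import Counter
from scipy.stats import chisquare

def looks_uniform(blob: bytes, alpha: float = 0.01) -> bool:
    counts = Counter(blob)
    observed = [counts.get(value, 0) for value in range(256)]
    # chisquare() with no expected frequencies assumes a uniform distribution.
    _, p_value = chisquare(observed)
    return p_value > alpha

if __name__ == "__main__":
    import os
    print(looks_uniform(os.urandom(1 << 16)))     # random bytes: usually True
    print(looks_uniform(b"plaintext " * 10_000))  # text: False
```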
You are completely correct from a computer science perspective - unfortunately, this is not a computer science discussion. As far as the FBI are concerned, “storing encrypted child porn on behalf of people with the keys to decrypt it” still counts as “storing child porn”.
You can disagree with that (and there are many good reasons to do so) - but “it’s encrypted so it’s fine” isn’t going to convince anybody who matters.
In the US, a service provider incurs legal obligations when it has actual knowledge that it is hosting something that appears to be CSAM. A provider hosting encrypted data with no knowledge of what it decrypts to does not have such obligations.
While that's the law, the big factor here is actual regulatory and agency pressure to scan the images they host for CSAM, given they were previously only doing so when CSAM was manually reported to them by users (as in, probably, given they only submitted 265 reports to NCMEC in 2020[0]). Think "we regulate a second iOS app store or you fix your CSAM problem".
This is the part where we need laws to protect privacy. This is arguably an overreach by the FBI in the first place and if it is legal it shouldn’t be.
Since Congress folks seem happy to threaten Apple too with changing the law to do what the FBI wants, I wouldn’t assume it would go the way you are thinking it will.
It's not even just the FBI; if the majority of your competitors claim to prevent child porn from being stored on their servers and you don't, the reputational damage is real. Apple doesn't want to be the "Child Porn friendly cloud service."
You're the customer of a cloud service. Do you want the one that does or does not scan your own files so that a false positive could cause you to be arrested, incur thousands of dollars in legal fees and suffer severe and permanent reputational damage yourself?
Considering that using a service which is known by all to not scan, and is therefore the place the media says is ‘child molester friendly’ could cause the same reputational damage?
> Considering that using a service which is known by all to not scan, and is therefore the place the media says is ‘child molester friendly’ could cause the same reputational damage?
Even putting aside how much of a stretch that is, how is anybody else supposed to know which service you use? It's your personal files. That nobody else should have access to them is the point.
It's not as if Apple or whomever should be providing anyone with a list of their customers, as that should cause you to not use them too. As far as I know they don't currently make their customer list public.
Not equivalent to CSAM - just examples of Apps that get some degree of judgement that can be problematic.
What would you think about someone that you were talking to that showed you something on their phone (a restaurant listing you were both thinking of going to, or something on Maps), but then a Parler notification popped up? What if they were married and Grindr or Tinder or whatever notification popped up?
Would you judge them? Would you expect many other people to judge them, even if you don’t?
Don’t get me wrong, I don’t think Apple’s products would be problematic that way. But a big reason why is because they have and likely will continue to make decisions like the one we are discussing.
If they went full end to end super privacy, then got named by the feds repeatedly in whatever the next big csam/terrorist/whatever scandal, that could change, and that is even assuming Congress people don’t join in the action, which they’ve already shown an interest in doing.
Probably >90% of people with the Parler app are conservatives and a similar percentage of married people with the Grindr or Tinder app are cheating, which is where those assumptions come from.
Even if Apple wasn't scanning anyone's files, >99% of their users would not be pedophiles and no one would have any reason to assume that they were.
> If they went full end to end super privacy, then got named by the feds repeatedly in whatever the next big csam/terrorist/whatever scandal
The main effect from getting put on a list like that is to gain credit with privacy-conscious people for standing up for their users. Nobody says "oh no, I better stop using my favorite full disk encryption because it hasn't got any known backdoors in it."
> that is even assuming Congress people don’t join in the action, which they’ve already shown an interest in doing.
At which point you have government action and can bring out the constitutional arguments etc.
And like someone had mentioned in another thread (paraphrased), 'if you create a place which is anti-witch-hunt and you enforce it and it gets a reputation as anti-witch-hunt, you'll end up having 3 strong civil-minded libertarians and a gazillion witches'. So then the app will get a reputation for that, deserved or not.
And none of your personal lack of being a witch is going to help you when you get the reputation as the ‘weird dude that uses that child porn storage app’.
I guarantee you 90% plus of the population, once they learn what Tor is, will have that reaction to Tor. And it's a matter of time until it gets enough press for that. Same with anything that does what you describe (proper end to end, we don't care what you store, and we don't have the keys so pound sand LEO).
When something is low key/under the radar, it can be a healthy witch-hunt-free zone that also isn’t filled to the brim with witches. But something at the scale Apple is at can’t, and even Parler (which didn’t start that way) got too much visibility and ended up as you describe and got shut down.
The privacy conscious folks may nod their head and know, but most folks don’t get it, and if it gets widely used, it will be attacked this way by authority figures. The ‘think of the children’ routine is used a lot because it does work on the majority of the population.
As I said, Apple isn’t there yet, and likely would never be close - because they’re going to do things like they are now to avoid the Public backlash.
And as long as they put enough of a veneer on it, the vast majority of folks won’t think twice about continuing to use them (95%+ of current customers at least). Most of the market will continue using Android which is 10x worse near as I can tell.
The ‘mistake’ they seemed to have made here is being a bit too obvious about it and not keeping up the veneer well enough.
Overall, I suspect we’ll be seeing some move back to on-prem for a non-trivial percentage of folks because of this and other things going on. That said, I’m sure cloud will continue to grow exponentially because most folks just don’t care enough to pay to do otherwise.
> 'if you create a place which is anti-witch-hunt and you enforce it and it gets a reputation as anti-witch-hunt, you'll end up having 3 strong civil-minded libertarians and a gazillion witches'.
This only happens when you start off with zero users and having an anti-witch-hunt policy is the only thing causing you to gain users, especially because they're disproportionately witches. Because without that policy the service would still have zero users.
It doesn't apply to any service that already has a large number of non-witches or has any effective means of also attracting users who are not witches.
> I guarantee you 90% plus of the population, once they learn what Tor is, will have that reaction to Tor.
Tor has a specific marketing disadvantage because by its nature it defeats most forms of tracking and advertising, which makes it adversarial to media companies who profit from tracking and advertising. This is why all the stories are about "the dark web" and not about that thing that helps dissidents in China and Iran evade authoritarian censorship.
It's also hard to get ordinary people to use it, and thereby understand that the technology itself isn't anything nefarious, because there is a noticeable latency cost to using it that most people aren't going to like.
None of this applies to a generic hosting services making it so they can't read their customers' personal files.
> When something is low key/under the radar, it can be a healthy witch-hunt-free zone that also isn’t filled to the brim with witches. But something at the scale Apple is at can’t
This is completely the opposite. It's the things at the scale of Apple that can do it because there aren't enough witches in the world to make a userbase the size of Apple's be more than 1% witches.
It also works for everyone as long as everyone does it, because then "no witch hunts" is common practice rather than something that makes you attract a disproportionate number of witches.
> even Parler (which didn’t start that way) got too much visibility and ended up as you describe and got shut down.
Parler was clearly the app that gained users by having a policy against witch hunts. It only started gaining a significant number of users when the other platforms started carelessly banning large numbers of users for alleged transgressions that were in many cases facially absurd, but it also attracted the people who were banned because they were actually bad.
And even then, it wasn't the users who abandoned them, it was Apple and Google and Amazon. Apple is obviously not going to do that to themselves.
I'm not a lawyer, so I may be wrong, but am I the only one who thinks the presumption of innocence principle does not exist anymore? There are areas where it does not apply, and there are "gray" areas that are effectively the same.
Presumption of innocence never applied to SOCIAL judgement. It would be nice if it did, but that was never what happened. It's why minors' information isn't supposed to be pasted all over the evening news too, so if they make a mistake it isn't 'permanent' and they have some plausible deniability.
It’s always been the case that your neighbors or friends or employers or whatever would judge you as they saw fit for whatever they might see or find out about, or even overhear from the local loudmouth/busybody. Regardless of what was found in a court of law.
The challenge now is our ‘neighbors’ are now whichever nosy random person in the entire world that cares to look instead of just the local gossip circle, and the gossip is recorded nearly permanently and is indexed in an easy to find way decades later. :s
As to whether this means people will become less judgmental, since everyone is likely to have something shitty posted about them at some point? History and the current circumstances don't seem to be pointing to 'yes' right now.
But at some point, either everyone is going to stop sharing the smell of their farts every morning, or folks are going to have to stop caring, or we’ll literally not be able to be friends with or employ anyone.
I agree with you, but if the FBI wanted to serve a warrant to search my device, they could compel me to unlock it. Failure to unlock that device could put me in jail until I comply with the warrant.
US case law is not settled on that matter, and some courts have concluded that disclosing a password is testimonial and therefore covered by the fifth amendment. Courts that have ruled the other way have usually done so under narrow exceptions.
Generally, most lawyers would advise their clients to stay away from any area or activity that can be described this way, because it's a really good way to be 'right but dead' (really, bankrupt or in jail or whatever).
> The Javascript that checks validity of forms before you submit them is a form of client-side scanning
It is hardly difficult to draw a distinction between ensuring a field looks like an expected datatype and ML analysis guessing at photo content.
In fact, trying to construct an argument conflating the two pretty much immediately runs into the fact that one is adversarial, so it only works if you studiously ignore intent.
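To make the contrast concrete, here's a toy Python sketch (the names on the scanning side are hypothetical placeholders, not any real API): one check exists to help the user fix a typo before submitting, the other produces a verdict the user never benefits from.

```python
# Toy contrast only; classify_image() is a stub standing in for a
# hypothetical ML model, not any real API.
import re

EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def validate_form_field(email: str):
    """Cooperative check: tells the *user* what to fix before submitting."""
    return None if EMAIL_RE.match(email) else "Please enter a valid email."

def classify_image(photo_bytes: bytes) -> float:
    """Hypothetical placeholder for an ML content classifier."""
    return 0.0

def scan_photo_before_upload(photo_bytes: bytes, threshold: float = 0.9) -> bool:
    """Adversarial check: the verdict is not produced for the user's benefit."""
    flagged = classify_image(photo_bytes) > threshold
    # In a CSS design, a True result would be surfaced to the operator,
    # not shown to the user the way a form error is.
    return flagged
```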
When you say "one is adversarial", I assume you mean that someone trying to upload illegal files will try to prevent the system from detecting that their files are illegal, while someone filling in a form isn't trying to sneak invalid data past the client-side validator (assuming all hackers know that the system is correctly implemented, i.e. the server performs its own validation).
The other distinction is, if a client-side form validator detects a problem with the input, it tells the user so they can fix it, whereas if Apple's system detects a problem with your input, it potentially tells the police without giving you a chance to delete the maliciously-generated false positive files you've unwittingly received.
Yeah, 'adversarial' isn't exactly the right word; I'm not sure what is.
What I mean is that client side form validation is typically part of a cooperative process - I want to buy something at your store, you want to sell it to me, and smart client side validation can make that faster/easier if I typo something it can catch. It is (or at least should be) mostly aimed at helping the user; your parsing for security is (should be), as you note, on the server.
Scanning photos for [CSAM, thoughtcrime memes, poor taste, whatever] doesn't make the user's life easier, is not something anyone asked to be subjected to, and potentially can lead to a very negative outcome for them.
That's the distinction I was getting at, and yes, your second point is directly relevant there.
Just as an aside, I think form validity checks that phone home to a server prior to submission are very questionable, and recording keystroke data from those is absolutely unethical under all circumstances (even if it's only to improve an autocomplete algorithm). Really, any out-of-band transmission from a web form to some other app / database is a potential security threat. The same is true with any kind of client-side scanning, which is part of the point: Although these files may only be scanned because they will be transmitted and stored in the cloud, the whole stack doing secondary scans is out-of-band and by definition insecure, i.e., it is designed to ultimately make human review possible. Who can access that side stream of data and when is intentionally left opaque in these types of proposals, and amounts to "trust us" hocus pocus which is usually a sure sign that proper security measures will not be taken.
> "Client side scanning" (both in general, and in the recent Apple kerfuffle) is talking about a network client, that will be talking to servers that are owned by "them." If they wish to enforce rules over what is stored on their server then to enforce that right, the only two choices are to disallow E2EE or to perform client-side scanning.
So the core of the problem is that they want to enforce rules over what is stored on their servers, even in E2EE form. They could just allow regular E2EE, ignore the content they can't read, and push back against politicians who think they are entitled to all data. They chose to push back against users' privacy instead.
What if I have a file on my phone that is already encrypted before I upload it to icloud and it gets encrypted a second time? Apple would have no knowledge about its content. Would they have to scan all my other devices too?
Just to be clear, is apple's icloud photo storage E2EE? From what I've been able to find it isn't, but I couldn't find anything official one way or the other from apple directly.
And you decide to install the update. If iOS 15 is the only option, such as on the 13, you knowingly decided to purchase a device with this sort of scanning happening. Once more, you also decided to sign in to iCloud and enable storing photos in iCloud Photos, thus enabling the bit that controls whether or not photos are scanned.
Client-side scanning of inappropriate pictures only covers content you'd ordinarily be sending them anyway. The proposal was only to do this if cloud services were/are enabled.
> Client-side scanning of inappropriate pictures only covers content you'd ordinarily be sending them anyway. The proposal was only to do this if cloud services were/are enabled.
I have an iPhone. The Photos app keeps telling me that it's unable to upload things to iCloud because my account is full.
I never turned it on. I never intended to upload any photos to the cloud.
I haven't signed into my iCloud account for years because I don't use it. Nonetheless, iCloud has a magical way of uploading things to something that I've literally never used.
Next you'll be arguing that people using Windows should have simply turned off online logins if they didn't want their Windows computer to phone home. Bullshit, Microsoft shoves that shit down people's throats.
So your statement of "you'd ordinarily be sending them anyway" is ludicrous. That's deliberately burying your head in the sand against the fact that big business sets defaults to settings that users often have no idea were set, or that are buried behind huge warnings against turning them off.
> Bullshit, Microsoft shoves that shit down people's throats.
As an example of this, I never once opted into any kind of data sharing, set telemetry to the lowest allowed setting, and don't remember ever signing into a system-wide Microsoft account, yet when I eventually discovered deeply hidden privacy options I found that my MS account had a log of every single application I had ever used on my W10 laptop.
Where the "deeply hidden" options under Settings -> Privacy -> Activity History with "Jump back into what you were doing on your device by storing your activity history"?
It implements the feature of pressing Win+Tab to see open programs, and then scrolling down to see previously open programs.
I've had an iDevice since 2007. I've never signed up for the paid iCloud. I get the standard 5GB plan that all Apple accounts receive. I have never accidentally uploaded a photo to it. I have never enabled it. I don't understand how your situation happens as it has never happened to me. It makes no sense other than someone (maybe you forgot, a significant other, a kid) played around with some settings? There's no other explanation that makes sense to me.
There's nothing better than knowing everything and never having to play around with settings to discover what they do, never forgetting what you've set your settings to, and not having children, family members, or friends do the same. There's no way any reasonable person could ever have their uploads accidentally turned on without their full knowledge and consent so that definitely invalidates any reason to argue against the idea that client-side scanning is unreasonable because it only happens to things that you wanted to upload anyway.
There's definitely no way a new version could patch your system and turn something on without your knowledge. No, there's absolutely never been a situation where some new setting has shown up and you didn't know what it does or inspected what it was set to by default. And there's absolutely no way you could have restored a backup and not had all of your settings transfer over correctly. No, there's no way you'll ever turn the setting on and forget that it's on when you plug your device into some network. And you know you will never be the victim of any malicious activity that could screw you over in some way. You've never had some app automatically connect to something that you didn't know it could even connect to. You'll never have someone else pick up your phone and take random pictures or recordings that you don't know about because those would never get automatically uploaded because, of course, you didn't turn on that setting for yourself. You'll never have to worry about your battery going low because you turned on automatic uploads and not only did your upload happen but your device also scanned your uploads too. You never use your phone for work because your work definitely pays for a new device for you to use for work.
Gosh, it sure is weird that so many people don't want client-side scanning. Scanning your device before uploading anything is just a very reasonable thing to do.
I don't want client side scanning, and I don't want the cloud. If only wishing made it so.
People not being able to understand the devices they use is why devs have gotten us to this point. People are too uneducated to do proper back ups, so some enterprising people came up with a way to do that for you. Peeps still get it wrong. Some other asshats come along and take advantage of uneducated people, and do malicious stuff. Fuck 'em. We should just end the cloud because we as a society can't handle it or the responsibility of operating our own equipment. /s
With server-side scanning, if someone accidentally enables uploading, you are in exactly the same position. Since it only runs on upload, which side it runs on changes nothing in any of the scenarios you are sarcastically ranting about.
OK, I've re-read your complaints about client-side scanning, which also apply equally to server-side scanning in every way except battery life, and I've noticed you also incorrectly frame it as "scanning your device before uploading anything", which it doesn't do: it scans only the things being uploaded, not the whole device, and only as part of the upload.
You either misunderstand it or are deliberately being misleading about it. The only complaint that you have made which stands is battery-life.
I'm speculating here, but I wonder if part of your experience is based on the fact that you're a long time user. Features like auto-uploading to Photo Library are new, and Apple is generally decent about informing you of new features before opting in.
Brand new account setups are a different story. You're encouraged to use all of the latest/greatest stuff (and why not, current topic notwithstanding?).
Bottom line: it's extremely easy for an average user to start uploading their stuff without really realizing it.
Maybe. I'm very anti-cloud from the first moments I ever heard of it and saw the first puffy shapes in slide decks. I don't trust it. It's not in my control and I don't know who does control it. That scares the bejeebus out of me.
I'm not the unsuspecting dupe that devs are targeting to get a new user tricked into something. I'm very much aware of the shenanigans devs try and pay attention to that shit from the go.
Having said that, I do read the crap and choose no where necessary. People just haphazardly pressing okay to get to new shiny almost deserve whatever they've agreed to. I say almost because these dialogs can be worded like "Vote No for Yes" kind of BS.
If you're one of the asshat devs FUCK YOU for making this a thing we even have to discuss in the first place. Edit: Royal You Devs
It's not even the devs in general. It's their management and bean counters. Data is money.
And yeah.. The cloud is just someone else's computer. Would you store all your stuff on your friend's computer? Well most people store everything they have on the computers of people they don't even know...
True but slippery slopes are simply a thing. Like the Overton window. Every move takes a step further and moves something else from ridiculous into feasible.
And personally, I know this won't affect me. I don't own such content. I don't use the cloud without encryption first (thanks Cryptomator). However I just hate the feeling of my own phone constantly looking over my shoulder on someone else's behalf. Is that so weird?
I think we're saying the same thing. 100% agree here. I think (hope) the vast majority of folks wouldn't be affected by the proposal. But the backlash against it has been justified for exactly the reasons you outline.
It’s their cloud service. If you want to upload to iCloud, you must agree to use their client, and their client implements CSS. If you don’t want to use their client, don’t use their service.
Their iCloud photo upload client is what does the scanning, at the time you upload to their service. It just so happens that their client is bundled with the phone. I think the root of angst here is that nobody trusts that the client isn’t running even when you don’t choose to upload your photos to their cloud service.
Given how the average person and even the majority of people on tech have been acting the last 6 years I'm at the point where I don't care. I can protect myself, everyone else is their own responsibility.
The more we remove privacy by tech the less we lose it by law which I now think is the much worse outcome.
> I can protect myself, everyone else is their own responsibility.
How does that work if everyone expects you to communicate with them via Whatsapp and their Gmail, or even if you don't, they will happily backup all communication with you in the cloud?
> The more we remove privacy by tech the less we lose it by law
On its face, that seems like a false dichotomy. Can you expand?
Generally, I see the erosion of our right against unreasonable search and seizure to be something that hurts everyone (regardless of an individual's ability to make fewer searchable spaces).
If 99.8% of people can have all their information seized by law enforcement, then law enforcement won't see the point of a costly political battle to overturn the 4th amendment.
Much like with advertising, those of us tech-savvy enough to install ad blockers are subsidized by those who don't.
"We didn't violate your privacy because we bought the information from a private company that violated your privacy. But it's ok, because you clicked a button, or signed a EULA."
That's the legal justification behind law enforcement fusion centers gathering, deanonymizing, and sharing data harvested by adtech firms, device manufacturers, and service providers.
It's a gotcha with the same intellectual weight as "I know you are but what am I?"
We need legislation with consequences lethal to companies that violate privacy.
Something like this - Every individual variable that a company wants to obtain from a person should be consensual with no option to select all, and the data and permission should be ephemeral. At any time a person should be able to inspect, delete, or allow continued possession of private data. They should be able to allow or deny sale or transfer of the data, and any recipient of the data must confirm permissions before taking receipt. Any algorithm or software or human analysis of data must be public and transparent. A record of any decision or business logic involving private data must be kept, and that record becomes private data, subject to the same constraints.
Stealing someone's identity should have a mandatory minimum of 2 years of community service, and total loss of opt in privileges for 5 years. No free web services or social media if you fuck around with someone else's privacy. Violations result in fines, paid to the victim and more community service.
A business caught abusing private data is subjected to a fine of 5% of company net worth per day. Half of the fine goes to the regulatory bureau, half to the victims.
Law enforcement must obtain warrants specific to known individuals - no geofencing or search term fishing expeditions. Digital data is subject to the same 4th amendment protections as physical papers and property.
Leaks would mean the end of an organization. If a company can't protect private data, then it can't participate in collecting it.
I'm sure there are flaws, but the gist of this seems a good starting outline. Anything less won't solve the problems and there should still be a mechanism for consensual participation in data markets. This would nuke credit bureaus, rein in isps and big tech abuses.
What's missing from this question is the recognition the answer can very easily be "probably never". Nothing says "they" (who?) will come for you (for some reason this is always implied) and it is eminently possible the person you are questioning doesn't mind living in a fascist hellscape where their neighbours are regularly dragged off to death camps.
Of course, all of that ignores the fact that your question is histrionic. The problems you are referring to are not modern problems and comparing them is a disservice to both situations.
“It’s not perfect yet, so let’s not do it” is not much of an argument, especially from cryptography wonks with no experience in public policy or law enforcement.
On the other hand, CSS might -- eventually, anyway -- offer the best compromise for facilitating reliable, responsible lawful access to mass consumer information technology.
Develop CSS in a manner that minimizes the noted risks. Such mechanisms are a fundamental compromise, philosophically. I am skeptical that those on opposing ends of the privacy debate will find sufficient common ground to achieve responsible implementations.
Deeper concerns regarding the misprioritization of security in consumer infotech design leave no meaningful basis for realizing a suitable compromise on CSS tech, anyway.
No, do not. There is no reasonable privacy preserving manner in which you can do so. Spyware is fundamentally incompatible with privacy. I don't care how many god damn whitepapers they write about their novel perceptual hash cohort-based homomorphic 0-trust TPM scanner. It's still a rat.
As another poster said, it's not a choice of whether or not your content is scanned; it's a choice of where. If you upload pictures to the cloud—which is the only scenario in which Apple's scanning was stated to happen¹—then it's a choice between scanning on your device, which allows for the possibility of E2E encryption, or definitely no encryption and scanning on the server.
At present, Apple doesn't scan photos on the server, but all their competitors do, and I don't doubt for a second that they will eventually start scanning photos as well. The choice is not if, but how, and their client-side solution seems to me to be much more privacy-preserving than server-side scanning.
¹If you don't believe them, that's fine, but given that they have root control over the software running on your phone, your only choice is to either believe them or don't use their phones. Same goes for all other phones.
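For concreteness, the flow being debated looks roughly like the Python sketch below, under loose assumptions: a plain SHA-256 blocklist stands in for Apple's perceptual-hash-plus-threshold-cryptography design, and BLOCKLIST and the upload callable are hypothetical.

```python
# Rough shape of "scan only what is queued for cloud upload", on-device.
# A plain SHA-256 blocklist is a stand-in; Apple's published design uses a
# perceptual hash plus threshold cryptography, which this does not reproduce.
# BLOCKLIST and the upload callable are hypothetical.
import hashlib
from typing import Callable, Set

BLOCKLIST: Set[str] = set()  # hypothetical set of known-bad hashes

def scan_then_upload(photo_bytes: bytes, upload: Callable[[bytes], None]) -> bool:
    """Runs on the device, but only on the cloud-bound upload path."""
    matched = hashlib.sha256(photo_bytes).hexdigest() in BLOCKLIST
    upload(photo_bytes)  # the upload proceeds either way; a match is what
                         # would later be surfaced to the service operator
    return matched
```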
> it's a choice between scanning on your device, which allows for the possibility of E2E encryption
No, it isn’t a choice at all. Your statement is factually incorrect, and presents a false situation. Apple has no obligation, legal or otherwise, to perform CSS. Nothing is stopping Apple from allowing E2EE right now.
While this is true in general, I think it's actually not true for Apple with respect to iPhotos (possibility of true end to end encryption), since they also make the photo processing software and the camera itself. The sensor data needs to be rendered to a file before it is even possible to encrypt it, so Apple could capture and scan that if they wanted to. You can't encrypt light waves before they hit the physical sensors.
Of course, the same is true of messaging. Apple owns the keyboard software and nothing stops them from putting a keylogger in to capture text before it ever hits the messaging app that encrypts it (and it pretty much needs to have one for predictive text to be possible).
They obviously can and arguably should choose to ignore the data streams before they hit the network clients and can be encrypted, but the capability will always be there.
I think part of the issue here is Apple owning all of the hardware, the OS, and the network client. People don't want to trust network clients, but you have no choice but to trust the OS and hardware vendors. If you don't trust them, your only option for guaranteed private communication is to not use computers. You either need to resort to the organized crime/terrorist model of using hand carry via couriers who credibly believe you'll kill them if they rat you out, or the military model of building your own communications devices.
I’m presuming that Apple wants to scan for CSAM, which I think they do. Personally, I’m also in favor of tightly-regulated scanning for CSAM.
The fact that they don't support E2EE, despite their strong pro-privacy stance, supports my presumption. So it comes back to the same argument: presuming Apple is going to scan cloud images for CSAM (which, again, all other major providers already do, to my knowledge), then it's just a question of how.
For someone like me, who believes scanning for CSAM is worthwhile, Apple’s solution is far superior and privacy-preserving compared to, say, Microsoft’s.
I think this argument really comes down to “no scanning at all” vs. “carefully applied scanning,” but that’s not how it’s framed by the people objecting. I think it’s because that’s an argument they’re not likely to win. And so, if they “win,” I think we’ll just end up with cheap and dumb server-side scanning, which would take a whole lot less effort and political trouble for Apple… and ironically, be much easier to subvert in the ways people against CSS worry about.
"In a world where our personal information lies in bits carried on powerful communication and storage devices in our pockets, both technology and laws must be designed to protect our privacy and security, not intrude upon it. Robust protection requires technology and law to complement each other. Client-side scanning would gravely undermine this, making us all less safe and less secure."