What I want is very simple: I want software that doesn't send anything to the Internet without some explicit intent first. All of that work to try to make this feature plausibly private is cool engineering work, and there's absolutely nothing wrong with implementing a feature like this, but it should absolutely be opt-in.
Trust in software will continue to erode until software stops treating end users and their data and resources (e.g. network connections) as the vendor's own playground. Local on-device data shouldn't be leaking out of radio interfaces unexpectedly, period. There should be a user intent tied to any feature where local data is sent out to the network.
So why didn't Apple simply ask for user permission to enable this feature? My cynical opinion is that Apple knows some portion of users would instantly disallow it if prompted, but feels it knows better than those users. I don't like this attitude, and I suspect it is the same reason there is growing discontent towards opt-out telemetry, too.
This mindset is how we got those awful cookie banners.
Even more dialogs that most users will blindly tap "Allow" to will not fix the problem.
Society has collectively decided (spiritually) that it is ok signing over data access rights to third parties. Adding friction to this punishes 98% of people in service of the 2% who aren't going to use these services anyway.
Sure, a more educated populace might tip the scales. But it's not reality, and the best UX reflects reality.
Nope, collective indifference to subpar user experiences has gotten us those lousy cookie banners.
Websites could legally use cookies for non-tracking purposes without cookie banners, but the fact that people have not stopped visiting sites despite the fugly click-through cookie banners makes the banners a failure.
All it takes is for 50% of the internet users to stop visiting web sites with them, and web site authors will stop tracking users with external cookies.
Everyone knows that sarcasm doesn’t transmit well through text. His phrasing isn’t uncommon for something someone would say out loud with a sarcastic tone of voice.
Yeah, this is an insane proposal. I know GP may be imagining a smart populace walking away from Big Evil Facebook and X with heads held high, but the other 99% of sites are also doing the same cookie banner stupidity because it is roughly mandatory due to useless EU law (unless you’re not engaging at all in advertising even as an advertiser). So, no more accessing your bank, power utility, doctor, college, etc. That’ll show those pesky cookie banner people!
“The Internet” to someone boycotting cookie banners would basically just be a few self-hosted blogs.
You do not need to show a banner and ask for consent if every cookie is there to make the website work (e.g. for authentication and settings). GDPR didn't create these banners; websites that use needless cookies and phone home to Big Tech did.
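To make that concrete, here's a minimal Python sketch (not legal advice; cookie names and values are made up): a strictly necessary first-party session cookie can be set unconditionally, while anything used for tracking has to be gated on recorded consent.

    from http import cookies

    def build_set_cookie_headers(consented_to_analytics: bool) -> list[str]:
        headers = []

        # Essential: keeps the user logged in. Exempt from the consent
        # requirement under the "strictly necessary" carve-out.
        session = cookies.SimpleCookie()
        session["session_id"] = "abc123"  # placeholder value
        session["session_id"]["httponly"] = True
        session["session_id"]["secure"] = True
        session["session_id"]["samesite"] = "Lax"
        headers.append(session.output(header="").strip())

        # Non-essential: only set if the user actually opted in.
        if consented_to_analytics:
            analytics = cookies.SimpleCookie()
            analytics["_analytics_id"] = "xyz789"  # hypothetical tracking ID
            analytics["_analytics_id"]["max-age"] = 60 * 60 * 24 * 365
            headers.append(analytics.output(header="").strip())

        return headers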
- Nearly all commercial websites advertise their site in some way
- Nearly all websites people use day-to-day are commercial
- To run ads in a post-1997 world, you must have a conversion pixel, because ads aren't sold by impression; they're sold by clicks, and advertisers need to know someone made it to your site
- Therefore, some form of tracking cookies (oooh evil) are required
- Big Tech (Google/Meta/X) controls 99% of the real estate where ads can be run, so... they will know about visitors
Unless browsers simply had a default setting to only keep cookies past one session when users allow it. That would be a wildly more effective and efficient solution than forcing every single random website to implement some byzantine JavaScript monstrosity which attempts to somehow inhibit other JS it doesn't actually control from dropping cookies -- something the JS API in a browser doesn't even support.
I work on a product that doesn't even have any ad traffic land on it or want to do any tracking, and setting up a cookie management platform was insane. You have to dive into the docs of every SDK to try to figure out how this particular SDK can be signaled to do the GDPR compliance things.
I’m not a web developer, but it seems to me that the referrer that you get after a click on a link should be sufficient to count clicks vs impressions.
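Mechanically, that idea looks something like the Python sketch below, assuming the landing page runs on your own server and the ad network's domain is known (the hostname here is hypothetical). The catch is that browsers increasingly strip or truncate the Referer header via referrer policies, which is part of why conversion pixels won out.

    from http.server import BaseHTTPRequestHandler, HTTPServer
    from urllib.parse import urlparse

    AD_NETWORK_HOSTS = {"ads.example.com"}  # hypothetical ad network domain
    click_count = 0

    class LandingPage(BaseHTTPRequestHandler):
        def do_GET(self):
            global click_count
            referrer = self.headers.get("Referer", "")
            # Count a click-through when the visitor arrived from the ad network.
            if urlparse(referrer).hostname in AD_NETWORK_HOSTS:
                click_count += 1
            self.send_response(200)
            self.send_header("Content-Type", "text/html")
            self.end_headers()
            self.wfile.write(b"<h1>Landing page</h1>")

    if __name__ == "__main__":
        HTTPServer(("", 8000), LandingPage).serve_forever()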
I am happy to learn what I may have been imagining: thanks for that!
The law has turned out to be useless, agreed — or at least, it has driven the hard-to-navigate UX we live through today. With some care, the intent could have taken us in a different direction (i.e. mandating a clear, no-dark-pattern opt-out/opt-in ahead-of-time option a la the DoNotTrack header, which similarly failed): if web clients (browsers) were required to pass the visitor's preferences, and if the list of parties data is shared with were mandated to be machine-readable in an exact format (so browsers would create nice UIs), maybe we'd get somewhere.
That's precisely what https://en.wikipedia.org/wiki/EPrivacy_Regulation was supposed to be! As you can imagine, there are strong incentives to lobby against it, so it's almost a decade late already.
Whoever came up with the idea to attach a CSAM scanning provision to it is an evil genius; what an incredible way to make sure it's not going to pass any time soon.
'Do not track' was stupid. 'Cannot Be Tracked' would have worked fine. The difference is that the browser is literally the user's agent, so it should work for the user. It is the thing which identifies you today, and could easily NOT identify you without your permission if that was what was mandated -- and "big bad ad tech" could do nothing about it.
Simply select the sites whose first-party cookies you want preserved, triggered only by the user actively toggling it on, or prompted for on a user-triggered POST that occurs on a page with a user-filled password field (similar to how popups were killed off: no prompting on a POST done without user interaction). "Do you want to let this site 'ycombinator.com' remember you (stay logged in, etc.)?" [YES] [NO]
Otherwise delete the cookies in X minutes/hours/etc.
Or another way, keep the cookies while a tab is on the site, then once no tabs are visiting it, put them in an 'archive.' Upon visiting the site again, show a prompt "Allow ycombinator.com to recognize you from your previous visit(s)?" <Yes> <No, be anonymous> If yes, restore them, otherwise, delete them.
It is so simple to have browsers be responsible for the user's safety, yet since we left it to politicians to decide, we got all this silliness putting it on the users -- and where the technical implementations are by necessity INSIDE the JS sandbox where it's difficult for users to verify that it's being done correctly.
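No browser implements any of this today; purely as a sketch of the proposed behaviour (Python, all names hypothetical, prompt wording taken from the comment above):

    class CookieJarPolicy:
        """Proposed browser behaviour: first-party cookies are archived
        when the last tab for a site closes, and only restored with the
        user's explicit consent on the next visit."""

        def __init__(self, prompt_user):
            self.active = {}    # site -> cookies for sites with open tabs
            self.archive = {}   # site -> cookies awaiting consent to restore
            self.prompt_user = prompt_user  # callable: site -> bool

        def on_last_tab_closed(self, site):
            if site in self.active:
                self.archive[site] = self.active.pop(site)

        def on_visit(self, site):
            if site in self.archive:
                # "Allow ycombinator.com to recognize you from your previous visit(s)?"
                if self.prompt_user(site):
                    self.active[site] = self.archive.pop(site)
                else:
                    del self.archive[site]  # user chose to stay anonymous
            self.active.setdefault(site, {})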
I read an article that said something along the lines of: people aren't prepared to pay for apps, so instead we get app-store-siloed, advert-supported crapware. And if it's not the apps, it's clickbait making fractional gains by being supported by ad networks. Which some of us, but not all of us, recoil from.
> All it takes is for 50% of the internet users to stop visiting web sites with them, and web site authors will stop tracking users with external cookies.
How would the content creators or news sites earn then? The web is built on ads, and ads are built on tracking, as untargeted ads pay significantly less than targeted ones.
No. A significant number of people care about privacy, which is why 1. Apple was targeting them with ads and 2. AdBlock did hurt Google's business. Also, caring is different from going to war (as in installing Linux and manually setting up a privacy shield + Tor + only transacting in Monero). Some people do that out of principle. Many people want the privacy features but with the ease of use.
I'd bet if you ask people "do you care about privacy?" Close to 100% would say yes.
If you ask "you have to give up privacy to be able to log in to your email automatically. Are you ok with that?" Close to 100% would say yes.
If you ask "we will give you this email service for free but in exchange we get to squeeze every ounce of juice that we can out of it to persuade you to buy things you don't need. Are you ok with that?" Close to 100% would say yes.
It doesn't matter what people say they care about. Their actions say otherwise, if the privacy-friendly option is in any way less convenient.
> This mindset is how we got those awful cookie banners.
The only thing I've found awful is the mindset of the people implementing the banners.
That you feel frustration over every company having a cookie banner is exactly the goal. The companies could decide that it isn't worth frustrating the user over something trivial like website analytics, as they could get that without having to show a cookie banner at all.
But no, they want all the data, even though they most likely don't use all of it, and therefore are forced to show the cookie banner.
Then you as a user see that banner, and instead of thinking "What a shitty company that doesn't even do the minimal work to avoid having to show me the cookie banner", you end up thinking "What a bad law, forcing the company to inform me about what they do with my data". Sounds so backwards, but you're not the first with this sentiment, so the PR departments of the companies seem to have succeeded in re-pointing the blame...
Not really. You can still get metrics and analytics, you just don't include PII in it. There are tons of privacy-respecting platforms/services (both self-hosted and not) you can use, instead of just slapping Google Analytics on the website and having to show the banner.
But even so, I'd argue that a small business would do much better with qualitative data than quantitative: with so little traffic, it's hard to make choices based on a small amount of data. Instead, conduct user experience studies with real people, and you'll get a ton of valuable data.
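For what it's worth, the cookieless analytics mentioned above mostly work the same way: no cookie, no stored IP, just a salted hash that rotates daily so unique visitors can be counted without any stable identifier. A simplified Python sketch of that technique:

    import hashlib
    import secrets
    from datetime import date

    # Rotated daily and never persisted beyond that, so yesterday's
    # visitor hashes can't be linked to today's.
    _daily = {"day": None, "salt": None}

    def visitor_id(ip: str, user_agent: str, site: str) -> str:
        today = date.today().isoformat()
        if _daily["day"] != today:
            _daily["day"] = today
            _daily["salt"] = secrets.token_bytes(32)
        material = _daily["salt"] + f"{site}|{ip}|{user_agent}".encode()
        # Count distinct hashes per day; the raw IP is never stored.
        return hashlib.sha256(material).hexdigest()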
The non-use of collected data is the most ridiculous part of all this. I work with many companies that collect tons of data and only use a small percentage of it. All they're doing is building a bigger haystack.
This is partially due to the fact that Google Analytics is free and the default for most website/app builders. But, still, it's ridiculous.
In my experience, most people that have semi or full decision-making control over this kind of thing have absolutely no idea if they even need cookie consent banners. They just fall for the marketing speak of every single SAAS product that sells cookie-consent/GDPR stuff and err on the side of caution. No one wants to be the guy that says: "hey, we're only logging X, Y and not Z. And GDPR says we need consent only if we log Z, so therefore we don't need cookie consent." For starters, they need a lawyer to tell them it's "A OK" to do it this way, and secondly it's plain old cheaper and a lot less political capital to just go with the herd on this. The cost of the banner is off-loaded outside of the company and, for the time being, the users don't seem to mind or care.
This is why half the web has cookie-consent banners. No amount of developers who know the details screaming up the ladder will fix this. The emergent behavior put in place by the legal profession and corporate politics favors the SAAS companies that sell GDPR cookie banner products and libraries. Even if they're in the right, there is a greater-than-zero chance that they'll go to court or be forced to defend themselves. And even if that defense succeeds, the lawyers still need to be paid, and the company will look at "that fucking moron Joe from the website department" who caused all their hassles and countless hours of lost productivity as a result of being a "smart ass".
> have absolutely no idea if they even need cookie consent banners
> This is why half the web has cookie-consent banners
Agreed, but we as developers can have an impact on this, especially in smaller companies. I've managed to "bark up the ladder" enough to prevent people from mindlessly adding those popups before, and I'm sure others have too.
But those companies have all been companies where user experience is pretty high up on the priority ladder, so it's been easy cases to make.
People think in terms of what is inconveniencing them directly. Great examples are when consumers yell at low-level workers because a company has horrible policies that trace back to cost cutting...

or union workers strike against Imaginary Mail Service Corp. because they are being killed on the job, and people (consumers) get angry at the workers because their package won't show up on time (or the railways aren't running, etc...) instead of getting mad at the company inflicting that damage on other people...

or when [imaginary country] puts sanctions on [other poorer country] and the people of that country blame the government in power instead of the people directly inflicting harm on them.

I'm not sure why this is the case, but we have been conditioned to resent the inconvenience and not the direct cause. Maybe it's because the direct cause tends to be a faceless, nameless entity that directly benefits from not being the target of ire.
Do you feel like your comment is responding to mine in good faith and using the strongest plausible interpretation? Because it sure feels like you intentionally "misunderstood" it.
Obviously the intention is not "to not improve user privacy at all" but to give companies and users the agency to make their own choices. Many companies seems to chose "user inconvenience" over "user privacy", and it now makes it clear what companies made that choice. This is the intention of the directive.
I didn't intend to criticize your description of the situation. My intent was to criticize the people who (allegedly) had that goal, because it has become clear that the result of the policy was not to cause user frustration and have that lead to companies improving their privacy practices. Instead, the result of the policy was simply to increase user frustration without improving privacy practices.
Those are the same goals, at least in a capitalistic free market. The theory is that consumers will go towards products which are better (meaning, less obnoxious), and therefore the obnoxious websites will either die off or give up the banners to conform to the market.
Naturally, as you can see, free markets are purely theoretical. In practice, up and leaving a website you're using is almost never easy, and isn't even a choice you can make often.
It’s odd that you think the people implementing the banners want them so they can get more data. They want them because they provide a shield from litigation. I don’t know about you, but in the past year, most of my ads on Facebook are from law firms with headlines like “have you browsed (insert random minor e-commerce site) in the past two years? Your data may have been shared. You may be entitled to compensation.” If I’m a random mom and pop e-commerce site and I do not add a cookie banner, and I use any form of advertising at all, then I am opening myself up to a very expensive lawsuit - and attorneys are actively recruiting randos to serve as plaintiffs despite them never being harmed by “data collection.”
It’s that simple. That’s the situation with CCPA. Not sure the exact form that GDPR penalties take because I’m not European. But it’s not a complicated issue: you have to display some stupid consent thing if you’re going to have the code that you’re required to have in order to buy ads which take people to your website.
Note that plenty of these cookie banner products don’t actually work right, because they’re quite tricky to configure correctly, as they’re attempting to solve a problem within the webpage sandbox that should be solved in the browser settings (and could easily be solved there even today by setting it to discard cookies at close of browser). However, the legal assistants or interns at the law firm pick their victims based on who isn’t showing an obvious consent screen. When they see one, it’s likely that they will move onto the next victim because it’s much easier to prove violation of the law if they didn’t even bother to put up a cookie banner. A cookie banner that doesn’t work correctly is pretty easy to claim as a mistake.
> If I’m a random mom and pop e-commerce site and I do not add a cookie banner, and I use any form of advertising at all, then I am opening myself up to a very expensive lawsuit
Nope, that's not how it works. But your whole comment is a great showcase of how these myths continue to persist, even though the whole internet is out there, filled with knowledge you could slurp up at a moment's notice.
Your comment would be better if you cited any evidence. Otherwise, I could also point you to a whole internet which is, as I said, full of law firm ads fishing for plaintiffs who have only been 'harmed' in the most strained definition of the word.
'Nothing is essential until you prove it is' - apply to the cookie ombudsman for €1k to make your case for allowance.
You complete a detailed form including giving your company registration and the reason for use of each cookie. You list each company with access.
You pay into escrow €10 per user per company (e.g. 10 users, sending data to 1200 companies: €120,000) you wish to gather/keep data on, providing that user's details and an annual fee.
Any non-trivial infringement and you get DNS-blocklisted, the escrow money is paid out, the CEO of the registered company is fined one year's income (max of the last 4 years), and legal proceedings are started against the company and its executives.
On application to the cookie ombudsman I can see all companies who legally have access to my data (and via which gateway company), I can withdraw access, they can withdraw service.
I think society has collectively "decided" in the same way they "decided" smoking in a restaurant is great.
There's little to no conscious choice in this. But there is a lot of money in this. Like... a LOT of money. If I were to try to influence society to be okay with it, it would be a no brainer.
So, to me, it's obvious that society has been brainwashed and propagandized to accept it. But doing so generates hundreds of billions if not trillions of dollars. How, exactly, such manipulation is done is unknown to me. Probably meticulously, over the course of decades if not centuries. I know that the concept of privacy during the writing of the constitution was much, much more stringent than it was in the 70s, which is much more stringent than it is today.
I think it's clear that users should be able to have their own agents that make these decisions. If you want an agent that always defers to you and asks about Internet access, great. If you want one that accepts it all great. If you want one that uses some fancy logic, great.
uBlock Origin's annoyances filters take care of the cookie banners, giving the best of both worlds: no banners and a minimal amount of tracking.

(The "I don't care about cookies" extension is similarly effective, but since I'm already running uBlock Origin, it makes more sense to me to enable its filter.)
> uBlock Origin's annoyances filters take care of the cookie banners, giving the best of both worlds: no banners and a minimal amount of tracking.
Word of caution though, that might silently break some websites. I've lost count of the times some HTTP request silently failed because you weren't meant to be able to get some part of the website, without first rejecting/accepting the 3rd party cookies.
Usually, disabling uBlock, rejecting/accepting the cookies and then enabling it again solves the problem. But the first time it happened, it kind of caught me by surprise, because why in holy hell would you validate those somehow?!
Users had a global way to signal “do not track me” in their browser. I don’t know why regulators didn’t mandate respecting that instead of cookie consent popups.
Apple IDs could easily have global settings about what you are comfortable with, and then have their apps respect them.
I’m spitballing here, but wouldn’t another way to handle it be to return dummy/null responses by redirecting telemetry calls to something that will do so?
This would have the added benefit of being configurable and work on a bunch of apps instead of just one at a time too
Not really. A mandatory opt-in option at the browser level would be the correct way to do it, but legislation forced instead those cookie banners onto the webpage.
No, legislation (the GDPR) doesn’t say anything about cookie pop-ups. It says that private data (of any kind) can only be used with opt-in consent, given freely, with no strings attached, with the ability to be withdrawn, that it will be kept secure, deleted when not needed for the original purpose, etc. All very reasonable stuff. Tracking cookies are affected, but the legislation covers all private data (IP, email address, your location, etc.)
… And if browsers agreed on a standard to get and withdraw opt-in consent, it would be compatible with what the legislation requires.
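Something close to that standard already exists: the Global Privacy Control header (Sec-GPC), the spiritual successor to DNT. Honoring it server-side is trivial; a Python sketch:

    def tracking_allowed(headers: dict, stored_consent: bool) -> bool:
        # "Sec-GPC: 1" is the Global Privacy Control signal; DNT is its
        # mostly-abandoned predecessor. Either one overrides any consent
        # recorded earlier.
        if headers.get("Sec-GPC") == "1" or headers.get("DNT") == "1":
            return False
        # Otherwise fall back to whatever the user explicitly chose.
        return stored_consent

    # Usage: tracking_allowed(dict(request.headers), user.consented)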
The vast majority (>95%) of users do not understand what those pop-ups say, seem fundamentally incapable of reading them, and either always accept, always reject, or always click the more visually appealing button.
Try observing a family member who is not in tech and not in the professional managerial class, and ask them what pop-up they just dismissed and why. It's one of the best lessons in the interactions between tech and privacy you can get.
Well, then >95% of users won't be using $FEATURE. Simple as that. The fact that users for some reason do not consent to $FEATURE the way corporations/shareholders would want them to does not give anyone the right to stop asking for consent in the first place.
When looked at from another angle, opt-in does work.
By adding that extra step forcing users to be aware of (and optionally decline) the vendor's collection of personal data, it adds a disincentive for collecting the data in the first place.

In other words, opt-in can be thought of as a way to encourage vendors to change their behaviour. Consumers who don't see an opt-in prompt will eventually know that the vendor isn't collecting their information, compared to others, and trust the product more.
As much as I hate cookie consent dialogs everywhere, the fact is that it is clearly working. Some companies are going as far as to force users to pay money in order to be able to opt out of data collection. If it wasn't so cumbersome to opt-out, I reckon the numbers for opt-out would be even higher. And if companies weren't so concerned about the small portion of users that opt-out, they wouldn't have invested in finding so many different dark patterns to make it hard.
It is definitely true that most users don't know what they're opting out of, they just understand that they have basically nothing to gain anyway, so why opt-in?
But actually, that's totally fine and working as intended. To be fair to the end user, Apple has done something extremely complicated here, and it's going to be extremely hard for anyone except an expert to understand it. A privacy-conscious user can make the best call by just opting out of all of these features. An everyday user might simply choose not to opt in because they don't really care about the feature in the first place. I suspect that's the real reason why many people opt out: you don't need to understand privacy risks to know you don't give a shit about the feature anyway.
If you do not want it (and that is >90% of people, who never asked for it and never requested it, but had these 'enriched' lies and this exposure to corporate greed forced upon them).
> Try observing a family member who is not in tech
This is everyone, it is universal, I've met many people "in tech" who also click the most "visually appealing" button because they are trying to dismiss everything in their way to get to the action they are trying to complete.
The microcosm that is HN users might not dismiss things at the 95%+ rate, but that is because we are fed, every day, stories of how our data is being misappropriated at every level. I think outside of these tiny communities, even people in tech are just clicking the pretty button and making the dialog go away.
The issue really isn't opt-in itself but how the option is presented.
I agree that a lot of people don't read or attempt to understand the UI being presented to them in any meaningful manner. It really is frustrating to see that happen.
But, think about the "colorful" option you briefly mentioned. Dark patterns have promoted this kind of behaviour from popups. The whole interaction pattern has been forever tainted. You need to present it in another way.
Informed consent is sexy. In the Apple ecosystem, we’re literally paying customers. This is ridiculous. This line you parroted is ridiculous. This needs to stop.
Except that, still, to this day, most sexual consent is assumed, not explicit, even in the highest brow circles where most people are pro-explicit-sexual-consent.
The same way, most tech privacy consent is assumed, not explicit. Users dismiss popups because they want to use the app and don't care what you do with the data. Maybe later they will care, but not in the moment...
> Except that, still, to this day, most sexual consent is assumed, not explicit
Did you miss my sarcastic little requote blurb that stated exactly that? Or do you normally rephrase the exact same point with added ad hominem attacks, and somehow frame it as a counterpoint?
> The same way, most tech privacy consent is assumed, not explicit.
And yet, you still have a right to it. In both instances.
Anyways, the "right" you have to explicit sexual consent is not really there, in that you cannot go to court and say "I said no" and get any meaningful damages or a conviction without other evidence. Similarly, courts treat these popups as essentially unreadable, and you cannot go to court and say "they clicked 'Allow'" and get away with anything unreasonable.
> So why didn't Apple simply ask for user permission to enable this feature?
That’s an interesting question. Something to consider: the iOS Photos app has long allowed you to search for photos using the address the photo was taken at. To do that, the Photos app has to take the lat/long of a photo's location and do a reverse-geo lookup to get a human-understandable address. Something that pretty much always involves querying a global reverse-geo service.
Do you consider this feature to be a violation of your privacy, requiring an opt-in? If not, then how is a reverse-geo lookup service more private than a landmark lookup service?
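For reference, this is all a reverse-geo lookup is. Apple runs its own service, but using OpenStreetMap's public Nominatim endpoint as a stand-in makes the point: the photo's coordinates necessarily leave the device. A Python sketch (assumes the third-party requests package):

    import requests

    def reverse_geocode(lat: float, lon: float) -> str:
        # The coordinates are, unavoidably, sent to a remote server.
        resp = requests.get(
            "https://nominatim.openstreetmap.org/reverse",
            params={"lat": lat, "lon": lon, "format": "jsonv2"},
            # An identifying User-Agent is required by Nominatim's usage policy.
            headers={"User-Agent": "photos-search-demo"},
            timeout=10,
        )
        resp.raise_for_status()
        return resp.json().get("display_name", "unknown")

    # e.g. reverse_geocode(48.8584, 2.2945) -> an address at the Eiffel Tower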
It's a complete violation if it's a new or changed setting that departs from a default state in which this wasn't possible.
Something to consider - location is already geo-encoded into photos and doesn't need to be uploaded to Apple's servers. Searching by location can be done locally, on device.
Apple goes as far as to offer a setting that lets the user share photos with the geocoding removed.
Offering a new feature should mean opt-in.
Unfortunately, against my better wishes, this only erodes trust and confidence in Apple: if this is happening visibly, what could be happening that is unknown?
> Do you consider this feature to be a violation of your privacy, requiring an opt-in?
I suppose in some sense it is, as it is a reverse-geo lookup service, but it's also nowhere near the front in the location privacy war.
Cell phone providers basically know your exact position at all times when you have your phone on you, credit card companies know basically everything, cars track driving directly, etc. etc.
I can see why some people would be up in arms but for me this one doesn't feel like missing the forest for the trees, it feels like missing the forest for the leaves.
I very much agree with your position. There are legitimate questions to be asked about this feature being opt-in, although we may find that you implicitly opt in if you enable Apple Intelligence or similar.
But the argument that this specific feature represents some new beachhead in some great war against privacy strikes me as little more than clickbait hyperbole. If Apple really wanted to track people’s locations, it would be trivial for them to do so, without all this cloak-and-dagger nonsense people seem to come up with. Equally, if a state entity wanted to track your location (or even track people’s locations at scale), there’s a myriad of trivially easy ways for them to do so, without resorting to forcing Apple to spy on their customers via a complex computer-vision landmark lookup system.
You’re right. But: anyone in IT or tech thinking deeply about the raw facts knows it always boils down to trust, not technology.
The interesting thing is that Apple has created a cathedral of seemingly objective sexy technical details that feel like security. But since it’s all trust, feelings matter!
So my answer is, if it feels like a privacy violation, it is. Your technical comparison will be more persuasive if you presented it in Computer Modern in a white paper, or if you are an important Substack author or reply guy, or maybe take a cue from the shawarma guy on Valencia Street and do a hunger strike while comparing two ways to get location info.
Nope. It's still taking the user's data away without informing them, and saying "trust us, we super-good encrypted it."
Apple is building a location database, for free, from users' photos and saying it's anonymized.
It's not a service I want, nor one I authorize. Nor are my photos licensed to Apple to get that information from me.
Encryption is only good relative to computational power to break it available to the many, or the few.
Computational power always seems to become available, in 10-20-30 years, to break encryption for the average person, as unimaginably hard as it seems in the present. I don't have interest in taking any technical bait from the conversation at hand. Determined groups with resources could find ways..
This results in no security or encryption.
> Apple is building a location database, for free, from users' photos and saying it's anonymized.
Where on earth did you get that from? The Photos app is sending an 8-bit embedding for its lookup query; how are they going to build a location database from that?
Even if they were sending entire photos, how do you imagine someone builds a location database from that? You still need something to figure out what the image is, and if you already have that, why would you need to build it again?
> Encryption is only good relative to computational power to break it available to the many, or the few.
> Determined groups with resources could find ways.. This results in no security or encryption.
Tell me, do you sell tin foil hats as a side hustle or something? If this is your view on encryption, why are you worried about a silly photos app figuring out what landmarks are in your photos? You basically believe that digital privacy of any variety is effectively impossible, and that this is a meaningful threat to “normal” people. The only way to meet your criteria for safe privacy is to eschew all forms of digital communication (which would include Hacker News, FYI). So either you’re knowingly making disingenuous hyperbolic arguments, you’re a complete hypocrite, or you like to live “dangerously”.
> So my answer is, if it feels like a privacy violation, it is. Your technical comparison will be more persuasive if you presented it in Computer Modern in a white paper, or if you are an important Substack author or reply guy, or maybe take a cue from the shawarma guy on Valencia Street and do a hunger strike while comparing two ways to get location info.
They’re broadly similar services, both provided by the same entity. Either you trust that entity or you don’t. You can’t simultaneously be happy with an older, less private feature that can’t be disabled, while criticising the same entity for creating a new feature (that carries all the same privacy risks) that’s technically more private and can be completely disabled.
> The interesting thing is that Apple has created a cathedral of seemingly objective sexy technical details that feel like security. But since it’s all trust, feelings matter!
This is utterly irrelevant, you’re basically making my point for me. As above, either you do or do not trust Apple to provide these services. The implementation is kinda irrelevant. I’m simply asking people to be a little more introspective, and take a little more time to consider their position, before they start yelling from the rooftops that this new feature represents some great privacy deception.
And how, pray tell, do geotagged images magically get into your Photos library?
I actually couldn't get Photos address search to work right in my testing before writing my previous comment, even with a geotagged photo that I just took. So I'm not sure whether I have some setting disabled that prevents it.
The only match was via character recognition of a printed form that I had photographed.
To be clear, I meant that it was a nonissue for me, because I don't geotag my photos (except in that one test). Whether it's an issue for other people, I don't know.
One of the problems with iPhone lockdown is that it's a lot more difficult to investigate how things work technically than on the Mac.
The point still stands. That’s how geo-tagged images get in your photos, and the search function still works.
For what it’s worth, I’m surprised that you never save photos to your phone that you didn’t take yourself. Do people not send you interesting pictures? Pictures of yourself?
I don't actually use my phone for much. Taking photos, making phone calls. I'm not the kind of person who lives on their phone. I live on my laptop (which has Location Services disabled entirely).
I don't even use Photos app on Mac anymore. They've ruined it compared to the old iPhoto. I just keep my photos in folders in Finder.
You started out suggesting it wasn't possible, and when presented with a common case showing it was not only possible but likely, you moved the goalposts and now say "well, I don't do that." Weak sauce.
I admit that I hadn't considered the possibility of importing outside geotagged photos into your own Photos library, because I don't do that. But I also said already, "To be clear, I meant that it was a nonissue for me, because I don't geotag my photos (except in that one test). Whether it's an issue for other people, I don't know."
I don't personally have a strong feeling for or against this particular feature, since I don't use geotagging at all, so it doesn't affect me. I'm neither endorsing nor condemning it offhand. I'll leave it to other people to argue over whether it's a privacy problem. It could be! I just lack the proper perspective on this specific issue.
Yeah, I have geo-tagged images from people having AirDropped photos to me. Occasionally I've noticed a photo somehow says the city it was taken in, much to my surprise -- only to remember this was actually AirDropped to me from someone who was there with me, or whatever. Maybe even iMessaged and then manually saved, not sure.
Personally I do not believe these popups serve any purpose, because I ultimately cannot (at least in a reasonable way) prove that the website is acting in good faith. Asking me whether the app should phone home doesn't guarantee that pressing "no" will actually prevent the tracking.
I am continuously surprised at how we convince ourselves privacy at scale will work with a varying number of yes/no buttons. There are two ways to trust software: 1. be naive and check whether "privacy first" is written somewhere, or 2. understand the software you are running, down to the instructions it is able to execute.
The permission popups also lack granularity. When giving access to my contact list, which contacts does it actually access? Can I give access to only contact names and not phone numbers? Is it for offline or online processing? If online, should we have another popup for internet access? But then, can I filter what kind of internet stuff it does? You go down the rabbit hole and eventually end up with a Turing-complete permission system, and if you don't, your "privacy" will have some hole in it.
Even with opt-in, a vendor will keep harassing the user until they tap "yes" in an inattentive moment.
And I've been in situations where I noticed a box was checked that I'm sure I didn't check. I want to turn these things off and throw away the key. But of course the vendor will never allow me to. Therefore I use Linux.
It is true that there are not absolutely zero instances of telemetry or "phoning home" in Linux, but Desktop Linux is not a similar experience to Windows or macOS in this regard, and it isn't approaching that point, either. You can tcpdump a clean install of Debian or what-have-you and figure out all of what's going on with network traffic. Making it whisper quiet typically isn't a huge endeavor either; usually you just need to disable some noisy local networking features.

Try Wiresharking a fresh Windows install, after you've unchecked all of the privacy options and run some settings through ShutUp10 or whatever. There's still so much crap going everywhere. It's hard to even stop Windows from sending the text you type into the start menu back to Microsoft; there's no option, you need to mess with Group Policy and hope they don't change the feature enough to need a different policy later to disable it again.

macOS is probably still better (haven't checked in a while), but there are still some features that basically can't be disabled that leak information about what you're doing to Apple. For example, you can't stop macOS from phoning home to check OCSP status when launching software: there's no option to disable that.
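If you want to reproduce that check yourself, a few lines of Python with scapy (a third-party package; needs root) stand in for the tcpdump session described above:

    from scapy.all import IP, sniff  # pip install scapy

    seen = set()

    def log_destination(pkt):
        # Print each outbound destination the first time it appears.
        if IP in pkt and pkt[IP].dst not in seen:
            seen.add(pkt[IP].dst)
            print(pkt[IP].dst)

    # On a quiet Debian install this list stays short; on a fresh Windows
    # install it grows immediately. Adjust the RFC1918 range to your LAN.
    sniff(filter="ip and not dst net 192.168.0.0/16", prn=log_destination)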
The reason why this is the case is that while the tech industry is rotten, the Linux desktop isn't really directly owned by a tech industry company. There are a few tech companies that work on Linux desktop things, but most of them only work on it as a complement to other things they do.
Distributions may even take it upon themselves to "fix" applications that have unwanted features. Debian is infamous for disabling the KeepassXC networking features, like fetching favicons and the browser integration, features a lot of users actually did want.
Are there any tools that enable capturing traffic from outside the OS you’re monitoring, that still allow for process-level monitoring?
Meaning, between the big vendors making the OS, and state-level actors making hardware, I wouldn’t necessarily trust Wireshark on machine A to provide the full picture of traffic from machine A. We might see this already with servers running out-of-band management like iDRAC (which is a perfectly fine, non-malicious use case) but you could imagine the same thing where the NIC firmware is phoning home, completely outside the visibility of the OS.
Of course, it’s not hard to capture traffic externally, but the challenge here would be correlating that external traffic with internal host monitoring data to determine which apps are the culprit.
Curiosity has led me to check, on and off, whether local traffic monitoring is missing anything that can be seen externally, and so far I've never observed this happening. Though obviously, captures at different layers can still yield some differences.
Still, if you were extra paranoid, it wouldn't be unreasonable or even difficult to check from an external vantage point.
> Are there any tools that enable capturing traffic from outside the OS you’re monitoring, that still allow for process-level monitoring?
Doing both of these things at once would be hard, though. You can't really trust the per-process tagging because that processing has to be done on the machine itself. I think it isn't entirely implausible (at the very least, you could probably devise a scheme to split the traffic for specific apps into different VLANs. For Linux I would try to do this using netns.)
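A sketch of that netns idea under Linux (requires root and iproute2; every name and address below is made up): give the app its own namespace and source address, skip NAT, and an external capture box can attribute its traffic.

    import subprocess

    def sh(cmd: str):
        subprocess.run(cmd, shell=True, check=True)

    sh("ip netns add probe")                                    # isolated namespace
    sh("ip link add veth-host type veth peer name veth-probe")  # virtual cable
    sh("ip link set veth-probe netns probe")                    # one end inside
    sh("ip addr add 10.200.0.1/24 dev veth-host")
    sh("ip link set veth-host up")
    sh("ip netns exec probe ip addr add 10.200.0.2/24 dev veth-probe")
    sh("ip netns exec probe ip link set veth-probe up")
    sh("ip netns exec probe ip route add default via 10.200.0.1")
    sh("sysctl -w net.ipv4.ip_forward=1")
    # Deliberately no NAT: the upstream router must route 10.200.0.0/24
    # back here, so the app's distinctive source address survives all the
    # way to the external capture point.
    sh("ip netns exec probe some-app-under-test")  # hypothetical binary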
For what it's worth, I use Linux, too, but as far as phones go, stock phones that run Linux suffer from too many reliability and stability issues for me to daily drive them. I actually did try. So, as far as phones go, I'm stuck with the Android/iOS duopoly like anyone else.
> I want software that doesn't send anything to the Internet without some explicit intent first
I want this too, but when even the two most popular base OSes don't adhere to this, I feel like it's an impossible uphill battle to want the software running on those platforms to behave like that.
"Local-first" just isn't in their vocabulary or best-interest, considering the environment they act in today, sadly.
Developers of software want, and feel entitled to, the data on your computer, both about your usage within the app, as well as things you do outside of the app (such as where you go and what you buy).
Software will continue to spy on people so long as it is not technically prohibited or banned.
I highly suggest everyone else does their darnedest not to, either. Don’t do it in your own software. Refuse and push back against it at $dayJob.
I realize that my contribution as a privacy- and data-respecting SWE is extremely small, but if we all push back against the MBAs telling us to do these things, the world will be better off.
So long as a significant portion of companies harvest user data to provide “free” services, no well-meaning business can compete with their paid apps. Not in a real way.
It’s the prisoner’s dilemma, but one vs many instead of one vs one. So long as someone defects, everyone either defects or goes out of business.
It’s the same as with unethical supply chains. A business using slave labour in their supply chain will out-compete all businesses that don’t. So well-meaning business owners can’t really switch to better supply chains as it is the same as just dissolving their business there and then.
Only universal regulation can fix this. If everyone is forced not to defect, we can win the prisoner's dilemma. But so long as even 10% of big tech defects and creates this extremely lucrative business of personal data trade that kills every company not participating, we will continue to participate more and more.
Why do you assume it's MBA driven? As a software developer, I like knowing when my software crashes so that I can fix it. I don't care or even want to know who you are, your IP address, or anything that could be linked back to you in any way, but I can't fix it if I don't know that it's crashing in the first place.
And of course you've reported every single crash you've encountered via email or support portal?
Normal people don't email support with crash logs, they just grumble about it to their coworkers and don't help fix the problem. You can't fix a problem you don't know about.
And yet we don't have home inspectors coming into our homes unannounced every week just to make sure everything is ok. Why is it that software engineers feel so entitled to do things that no other profession does?
Because software is digital and different than the physical world and someone like you understands that. It's intellectually dishonest to pretend otherwise. How hard is it to make a copy of your house including all the things inside of it? Can you remove all personally identifying features from your house with a computer program? Analogies have their limitations and don't always lead to rational conclusions. Physicists had to contend with a lot of those after Stephen Hawking wrote his book about black holes with crazy theories that don't make sense if you know the math behind them.
Downloading a torrent isn't the same thing as going to the record store and physically stealing a CD, and regular people can also understand that there's a difference between the invasiveness of a human being entering your house and someone not doing that. So either people can understand that torrenting isn't the same as going into a store and physically stealing something, and that anonymized crash logs aren't the same thing as a home inspector coming into your house -- or Napster and torrenters actually owe the millions that the RIAA and MPAA want them to.
I'm not saying that all tracking is unequivocally good, or even okay, some of it is downright bad. But let's not treat people as idiots who can't tell the difference between the digital and physical realm.
Once it is running on a computer you don't own, it is no longer your software.
To put it in the language of someone who mistakenly thinks you can own information: data about crashes on computers that aren't yours simply doesn't belong to you.
It's just a matter of ownership and entitlement. You believe you are entitled to things other people own, that is on their property, because you have provided a service to them that's somehow related.
Outside of specifically silicon valley, that level of entitlement is unheard of. Once you put it in human terms what you're asking for, it sounds absolutely outrageous. Because it is - you and I just exist in a bubble.
This can all be avoided and you can have your cake, too. Run the software on your metal. That's what my company does, and it's great in many ways. We have a level of introspection into our application execution that other developers can only dream of. Forget logging; we can just debug it.
The pain of a possible future panopticon dwarfs the "pain" of some software crashing. Considering long-term outcomes does not make you a sociopath - quite the opposite.
In the OP article it seems more like users demand to search their photos by text, and Apple has put in a huge effort to enable that without gaining access to your photos.
This is extracting location-based data, not content-based data in an image (like searching for all photos with a cup in them).
"Enhanced Visual Search in Photos allows you to search for photos using landmarks or points of interest. Your device privately matches places in your photos to a global index Apple maintains on our servers. "
Years ago I developed for iOS as an employee. In my case, it was the product managers that wanted the data. I saw it as a pattern and I hated it. I made my plans to leave that space.
> So why didn't Apple simply ask for user permission to enable this feature? My cynical opinion is that Apple knows some portion of users would instantly disallow it if prompted, but feels it knows better than those users. I don't like this attitude, and I suspect it is the same reason there is growing discontent towards opt-out telemetry, too.
I'm just not sure why Apple needed to activate this by default, other than to avoid drawing attention to it... as if doing that was more important than the user's right to the privacy they believe they are purchasing with their device.
I don't care what convenience I'm being offered or sold. The user has decided what they want and what premium they are paying for Apple, and that must be respected.
This makes me wonder if there is an app that can monitor all settings on an iPhone, both for changes between updates and for new features enabled by default that compromise the user's known wishes.
Consent for complex issues is a cop out for addressing privacy concerns. Users will accept or reject these things without any understanding of what they are doing either way. Apple seems to have taken a middle ground where they de-risked the process and made it a default.
This is a “look at me, Apple bad” story that harvests attention. It sets the premise that this is an unknown and undocumented process, then proceeds to explain it from Apple documentation and published papers.
"What I want is very simple: I want software that doesn't send anything to the Internet without some explicit intent first."
It exists. I use such software everyday. For example, I am submitting this comment using a text-only browser that does not auto-load resources.
But this type of smaller, simpler software is not popular.
For example, everyone commenting in this thread is likely using a browser that auto-loads resources to submit their comments. HN is more or less a text-only website, and this "feature" is not technically necessary for submitting comments. All so-called "modern" web browsers send requests to the internet without explicit intent first. In addition to auto-loading resources, these browsers automatically run Javascript, which often sends further requests never intended by the web user.
Brand new Apple computers now send packets to the internet as soon as the owner plugs them in for the first time. This may enable tracking and/or data collection. Apple proponents would likely argue "convenience" is the goal. This might be true. But the goal is not the issue. The issue is how much the computer owner is allowed to control the computer they buy.

Some owners might prefer that the computer not automatically send packets to remote Apple servers. Often it is not even possible to disable this behaviour. Computer purchasers never asked for these "convenience" features. Like the subject of this submission, Apple Photos, these are Apple's decisions. The computer owner is not allowed to decide whether to enable or disable "convenience" features.
As the court acknowledged in its opinion in US v Google, default settings are significant. In this case, it is more than a default setting. It is something the owner cannot change.
>I want software that doesn't send anything to the Internet without some explicit intent first.
I too want exactly that, which got me thinking, that's what firewalls are for! DROP OUTBOUND by default, explicit allow per-app.
On Android, iptables-based firewalls require root, which wasn't a good option for me (no TWRP support for my device), so after some searching I stumbled upon NetGuard - open source and rootless, it implements a firewall using Android's VPN service (you configure Android to route all traffic through this "VPN", which is actually a local firewall). The downside is you can't use an actual VPN (except with some complicated setup involving work profiles and other apps). I've been using it for a couple of weeks and am very satisfied; I noticed apps phoning home that I did not want to, like a scanning app I had used to scan private documents in the past. Perhaps an oversight on my part.
Use a rooted Android phone with AFWall+ installed, with default block rules. Even just LineageOS allows you to set granular network settings per app, though it's not preemptive like AFWall.
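The mechanism under the hood is iptables' owner match: every Android app runs as its own UID, so you can default-drop the OUTPUT chain and allow-list UIDs one by one. A rough Python sketch (hypothetical UID; needs root):

    import subprocess

    RULES = [
        "iptables -P OUTPUT DROP",                             # deny by default
        "iptables -A OUTPUT -o lo -j ACCEPT",                  # keep loopback alive
        "iptables -A OUTPUT -m state --state ESTABLISHED,RELATED -j ACCEPT",
        "iptables -A OUTPUT -m owner --uid-owner 10123 -j ACCEPT",  # one allowed app
    ]
    for rule in RULES:
        subprocess.run(rule, shell=True, check=True)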
Can't run various banking apps, and can't run PagerDuty, on a rooted device due to Google's Play Integrity checks. The ecosystem is closing in on any options to not send telemetry, and Google is leading the way in the restrictions on Freedom.
> Google is leading the way in the restrictions on Freedom.
They're the ones allowing you to root your phone or flash a custom ROM in the first place, so that's not a fair characterisation. Banks have a vested interest in reducing fraud, and a rooted Android might allow for easier and additional attack vectors into their apps and thus systems.
Not really though. I've been using LineageOS+Magisk for at least 6 years and haven't found an app that worked that stopped working all of a sudden. I'm using all my banking apps and everything else without issue, and have been for a long time. Doesn't seem like the app devs are hellbent on blocking those willing to use Magisk.
This line of thinking ignores a whole bunch of legitimate reasons why people knowledgeable enough to root their phone still choose not to, not least of which is that I have to exchange trusting a large corporation with a financial incentive to keep my device secure (regulations, liability) for trusting an Internet anon with an incentive to do the opposite (no direct compensation, but access to banking apps on the user’s device).
Even in the case where I’m willing to risk trusting the developer, they have literally zero resources to pen-test the software I’ll be running my banking apps on, and in the case of Android ROMs I need to run known-vulnerable software (out-of-support, source-unavailable binary blobs for proprietary hardware that were never open-sourced).
The same argument was made about TPMs on PCs and against Windows 11 for years (that they should just be disabled/sidestepped). It only holds water if you don’t understand the problem the device solves for, or don’t have a suitable alternative.
Absolutely! The important bit is that users have no choice in the matter. They're pushed into agreeing to whatever ToS and updating to whatever software version.
The backlash against Microsoft's Windows Recall should serve as a good indicator of just how deeply people have grown to distrust tech companies. But Microsoft can keep turning the screws, and don't you know it, a couple years from now everyone will be running Windows 11 anyways.
It's the same for Android. If you really want your Android phone to be truly private, you can root it and flash a custom ROM with microG and an application firewall. Sounds good! And now you've lost access to banking apps, NFC payments, games, and a myriad of other things, because your device no longer passes SafetyNet checks. You can play a cat-and-mouse game with breaking said checks, but the clock is ticking, as remote attestation will remove what remains of your agency as soon as possible. And all of that for a notably worse experience with less features and more problems.
(Sidenote: I think banking apps requiring SafetyNet passing is the dumbest thing on planet earth. You guys know I can just sign into the website with my mobile browser anyways, right? You aren't winning anything here.)
But most users are never going to do that. Most users will boot into their stock ROM, where data is siphoned by default and you have to agree to more data siphoning to use basic features. Every year, users will continue to give up every last bit of agency and privacy so as long as tech companies are allowed to continue to take it.
Opt-out is portrayed as a choice when it barely is, because it is very tiresome to always research what avenues exist, explicitly opt out of them, and then constantly review that option to make sure it isn't flipped in an update or that another switch hasn't appeared that you also need to opt out of.
Maybe you need to set an environment variable. Maybe that variable changes. It is pretty exhausting so I can understand people giving up on it.
Is that really giving up on it, though? Or are they coerced into it?
If you do anything on the radio without the user's explicit consent, you are actively user-hostile. Blaming the user for not exercising his/her right because they didn't opt out is weird.
If you accept Android as an option, then GrapheneOS probably checks a lot of your boxes on an OS level. GrapheneOS developers sit between you and Google and make sure that shit like this isn't introduced without the user's knowledge. They actively strip out crap that goes against users' interests and add features that empower us.
I find that the popular apps for basic operation from F-Droid do a very good job of not screwing with the user either. I'm talking about DAVx⁵, Etar, Fossify Gallery, K-9/Thunderbird, AntennaPod etc. No nonsense software that does what I want and nothing more.
I've been running deGoogled Android devices for over a decade now for private use, and I've been given Apple devices from work during all those years. I still find the iOS devices to be a terrible computing experience. There's a feeling of being reduced to a mere consumer.
GrapheneOS is the best mobile OS I've ever tried. If you get a Pixel device, it's dead simple to install via your desktop web browser[1] and has been zero maintenance. Really!
Running a custom ROM locks you out of almost all decent phone hardware on the market since most have locked bootloaders, and it locks you out of a ton of apps people rely on such as banking and money transfer apps. You must recognise that it's not a practical solution for most people.
I've happily used LineageOS without gapps for years across several OnePlus devices. If I ever need a new phone I check their supported devices list to pick, and the stock ROM on my new device gets overwritten the day it arrives. Currently using a OnePlus 8T. When I move on from this device as my primary someday, I may put postmarketOS on it to extend its usefulness.
> Running a custom ROM locks you out of almost all decent phone hardware on the market since most have locked bootloaders
GrapheneOS only works on Pixel devices. Pixel devices are fine. We have reached a point where just about every mid-tier device is fine, really. I run my devices until they are FUBAR or can't be updated due to EOL. EOL for Android (and GrapheneOS) is ~7 years from the release date now.
> it locks you out of a ton of apps people rely on such as banking and money transfer apps.
These can be installed and isolated using work or user profiles in GrapheneOS. Also as https://news.ycombinator.com/item?id=42538853 points out, a lot of work has been put into making Graphene work with banking apps[1].
> You must recognise that it's not a practical solution for most people.
Of course I do. We can act on two levels. We (as a society) can work for regulation and we (computery people) can take direct action by developing and using software and hardware that works in the user's interest. One does not exclude the other.
You don't need tons of choice, just sufficient availability of a decent enough option. The Google Pixel line supported by GrapheneOS is one.
My budget didn't allow me to buy a brand new one, but I could buy a second-hand Pixel 6a for 200€.
Having said that, you can also use an older phone with /e/OS or LineageOS and avoid apps that track you by limiting yourself to Android apps without telemetry, available on F-Droid.
The solution is the general populace becoming more tech literate, much like I became more literate with the Yellow Pages 20 years ago.
The reality is these are no longer mere tools, they are instruments for conducting life. They are a prerequisite to just about any activity, much like driving in the US.
We expect each and every citizen to have an intimate understanding of driving, including its nuances, and an understanding of any and all traffic laws. And we expect them to apply that knowledge in fractions of a second. Because that is the cost of using those instruments to conduct life.
That said, you can order a Pixel with GrapheneOS pre-installed, and Google apps and services can be isolated.
> I think banking apps requiring SafetyNet passing is the dumbest thing on planet earth. You guys know I can just sign into the website with my mobile browser anyways, right?
No, you can't. For logging in, you need a mobile app used as an authentication token. Do not pass go, do not collect $200... (That's the current state of affairs in Czechia, at least; you still _do_ have the option of not using the app _for now_ at most banks, using a password + SMS OTP, but you pay for each SMS and there is significant pressure to migrate you away from it. The option will probably be removed completely in the future.)
Right now I don't think there's anything like this in the United States, at the very least. That said, virtually every bank here only seems to support SMS 2FA, which is also very frustrating.
fwiw, on Android, you can install a custom certificate and have an app like AdGuard go beyond just DNS filtering, and actually filter traffic down to a request-content level. No root required. (iOS forbids this without jailbreaking though :/)
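For the curious, the mechanism is essentially a local man-in-the-middle proxy: traffic is routed through a local VPN service, TLS is re-terminated with the user-installed certificate, and each request can be inspected before it is forwarded or dropped. Here's a rough desktop sketch of that filtering idea written as a mitmproxy addon (mitmproxy is a stand-in here, not AdGuard's actual implementation, and the blocked hosts are made up):

    # block_trackers.py -- run with: mitmproxy -s block_trackers.py
    from mitmproxy import http

    BLOCKLIST = {"tracker.example.com", "telemetry.example.net"}

    def request(flow: http.HTTPFlow) -> None:
        # Short-circuit requests to blocked hosts with an empty 204,
        # so the payload never leaves for the real server.
        if flow.request.pretty_host in BLOCKLIST:
            flow.response = http.Response.make(204)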
One of the reasons is that telemetry and backdoors are invisible. If the phone showed a message like "sending your data to Cupertino", users would be better aware of this. Sadly, I doubt there will ever be a legal requirement to do so.
Anything is possible through lobbying for regulation and policy.
It's the same way that bills come out to crack down on people's privacy.
The trouble is that people don't always know they can demand the opposite, so that it never gets messed with again; instead they get roped into the fatigue of reacting to technology bills written by non-technology people.
Apple seems to be the best option here too. They seem to have put in a huge effort to provide features people demand (searching by landmarks in this case) without having to share your private data.
It would have been so much easier for them to just send the whole photo as is to a server and process it remotely like Google does.
Individuals who grew up primarily as consumers of tech have also consented to a relationship in which they themselves are consumed, bought, and sold as the product.
Those who grew up primarily as creators with tech have often experienced the difference.
Whether or not people in general are aware of this issue and care about it, I think it's pretty disingenuous to characterize people as willfully giving up their privacy because they own a smartphone. When stuff like this is happening on both iOS and Android, it's not feasible to avoid it without opting out of having a smartphone entirely, and representing it as a binary choice of "choose privacy or choose not to care about privacy" is counterproductive, condescending, and a huge oversimplification.
Maybe not privacy in general, but this is about location privacy.
If you have a smartphone in your pocket, then, for better or worse, you're carrying a location tracker chip on your person because that's how they all work. The cell phone company needs to know where to send/get data, if nothing else.
It seems disingenuous to put a tracker chip in your pocket and be up in arms that someone knows your location.
Do you honestly believe people understand what they’re doing?
Nowhere in the marketing materials, or in what passes for documentation on iOS, do we see an explanation of the risks, or of what it means for one's identity to be sold off to data brokers. It's all "our 950 partners to enhance your experience" BS.
The shorter answer is that it's your data, but it's their service. If you want privacy, you should use your own service.
And given how cheap and trivial photo syncing is, any mandatory or exclusive integration of services across app/platform/device vendors needs to be scrutinized heavily by the FTC.
> Trust in software will continue to erode until software stops treating end users and their data and resources (e.g. network connections) as the vendor's own playground. Local on-device data shouldn't be leaking out of radio interfaces unexpectedly, period. There should be a user intent tied to any feature where local data is sent out to the network.
I find that there is a specific niche group of people who care very much about these things. But the rest of the world doesn't. They don't want to think about all these little settings; they're just "Oh cool, it knows it's the Eiffel Tower". The only people becoming distrustful of software are a specific niche group, and I strongly suspect they're always going to find something to be mad about.
> So why didn't Apple just simply ask for user permission to enable this feature?
Because most people don't even care to look at the new features in a software update. And let's be serious, that includes most of us here; otherwise this feature would have been obvious. So why ship a feature that no one will turn on? It doesn't make sense. So you enable it for everyone, and those who don't want it opt out.
I want a hardware mic switch. We are an iHouse with one exception, and that's a ShieldTV that is currently out of order because I want to reset it and haven't found the time in, oh..., weeks. Anyway, out of the blue one of the kids asked about Turkish delights and wondered where the name came from. SO and I facepalmed, then explained. Not an hour later she got something in her Facebook feed: 15 interesting facts about Turkey.
This is just too much of a coincidence. I know, I know, this "... isn't Apple's fault" blah blah. Bullshit it's not. They can't have it both ways, where they say their App Store review process is great and then they allow this shit.
Browsing the Internet is explicit intent! Some of the stuff enabled by JavaScript definitely toes the line, but at the very least that's not really the direct fault of the browser.
When it's on by default, I opt out by default until persuaded otherwise.
GeoIP is, as you say, about figuring out the location of an IP address.
With photos it's geotagging (often loosely called geocoding): the GPS location is embedded in each photo's metadata. Those locations are scrubbed before being sent, but they are still metadata going to Apple.
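To make the distinction concrete, here's a minimal sketch of reading those embedded GPS tags from a photo's EXIF data with Pillow ("photo.jpg" is a placeholder path; any geotagged JPEG will do):

    # pip install Pillow
    from PIL import Image
    from PIL.ExifTags import GPSTAGS

    img = Image.open("photo.jpg")
    gps_ifd = img.getexif().get_ifd(0x8825)  # 0x8825 = the GPSInfo IFD
    gps = {GPSTAGS.get(tag, tag): value for tag, value in gps_ifd.items()}
    print(gps.get("GPSLatitude"), gps.get("GPSLongitude"))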
The more I think about it, the less sure I am that it's useful to me to give that database to Apple for free. If it were presented on-device the way it's being explained here, maybe I'd say: cool, yeah.
Would you mind giving an example of something bad that could happen to somebody as a result of Apple sending this data to itself? Something concrete, where the harm would be realized: somebody being hurt physically, emotionally, psychologically, economically, etc.?
Once upon a time, I worked for a pretty big company (Fortune 500-ish) and had access to production data. When a colleague didn't show up at work as expected, I looked up their location in our tracking database. They were in the wrong country -- but I can't finish this story here.
Needless to say, if an Apple employee wanted to stalk someone (say, an abusive partner, a creep, whatever), the fact that this stuff phones home means the employee can deduce where the target is located. I've heard stories from the early days of Facebook about employees reading their partners' Facebook messages, back before the company took that kind of thing seriously.
People work at these places, and not all people are good.
I doubt Apple employees could deduce location from the uploaded data. Having worked at FB, I know that doing something like that would very quickly get you fired post-2016.
Easy: consider a parent taking pictures of their kid's genitals to send to their doctor to investigate a medical condition, having the pictures flagged and reported to the authorities as child pornography by an automated enforcement algorithm, and facing a 10-month criminal investigation as a result.
This exact thing happened with Google's algorithm using AI to hunt for CP[1], so it isn't hard to imagine that it could happen with Apple software, too.
> and there's absolutely nothing wrong with implementing a feature like this, but it should absolutely be opt-in
This feature is intended to spy on the user. Those kinds of features can't be opt-in. (And yeah, the homomorphic "privacy preserving" encryption song-and-dance; I read about that when it came out, etc.)
This is an incredibly shallow dismissal that states the opposite of Apple's claim with zero evidence or reasoning and hand-waves away the very real and well-researched field of homomorphic encryption.
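For readers unfamiliar with the term, here's a toy illustration of a homomorphic property, using textbook RSA, which happens to be multiplicatively homomorphic. This is insecure and is emphatically not Apple's scheme; it only shows the core idea that a server can compute on ciphertexts without ever seeing the plaintexts:

    # Textbook RSA satisfies Enc(a) * Enc(b) mod n == Enc(a * b).
    p, q, e = 61, 53, 17               # tiny demo parameters, not secure
    n = p * q                          # 3233
    d = pow(e, -1, (p - 1) * (q - 1))  # private exponent (Python 3.8+)

    enc = lambda m: pow(m, e, n)
    dec = lambda c: pow(c, d, n)

    a, b = 6, 7
    ct = (enc(a) * enc(b)) % n         # the "server" multiplies ciphertexts
    assert dec(ct) == a * b            # the "client" decrypts the product: 42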