Years ago I tried to install and sign up for Turo on iOS to rent out a car I owned. It was a luxury car with a rebuilt title.
After I put in the VIN of the car, I received an error, and inexplicably I was banned from the app. No notification as to why, no "we don't accept rebuilt title vehicles," nothing. Naturally I scoffed, deleted the app and forgot about it.
Last year a friend rented a few cars on Turo for a trip and added me as a driver to one of them. I had switched phone numbers but kept the same phone. I downloaded Turo again and signed up with a new phone number and new email.
Before Turo even asked for my driver's license information, I was blocked again. It must have been device fingerprinting that persisted over the years.
I'm unsure how much apps can learn about your user profile, other apps you have installed, and other uniquely identifiable data. I've assumed it was limited, but perhaps I've been naive.
I guess these new rules are generally good? But I can imagine for every nefarious usage of these APIs, there can be a plausible cover reason...
Why does Apple let your device work against your own interests? If an app developer wants your phone to detect you committing "fraud", that should be their problem.
Why would Apple ever prioritize their customer's interests over their own? They've never once suggested that they would, and their customers prefer a hierarchical relationship. Apple is a company that whitelists which functions of a general purpose computer that their customers will be allowed to use.
That makes some people feel really secure, like the company is a loving parent, although companies don't love. They decide what is profitable and what is not.
Why do mail providers work against your interests by blocking outgoing spam? Because in aggregate it's beneficial to users if external parties can trust them more.
Service providers need to ban people sometimes. This includes people who are savvy enough to know how to delete and reinstall an app to clear its settings. Never permanently banning anyone simply isn't a thing that's happening.
If Apple didn't provide DeviceCheck, or something similar to it, service providers would use some other means of deterring abuse. There are a couple of directions they can go in, but they're all generally worse for users (e.g. using invasive tracking, requiring users to pay for service, etc). DeviceCheck is about the least invasive way I can imagine this being implemented.
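For context on why DeviceCheck is comparatively non-invasive: Apple's server API exposes only two bits of state per device token, which a service can set and read across app reinstalls. A minimal sketch of how a service might use those two bits (the store and function names are illustrative, not Apple's actual API):

```python
# Server-side sketch of DeviceCheck's per-device "two bits" model: Apple's
# query API returns just bit0/bit1 for an opaque device token, which a
# service might map to states like "banned". The in-memory store and names
# here are illustrative stand-ins for calls to Apple's server API.
per_device_state: dict[str, tuple[bool, bool]] = {}  # token -> (bit0, bit1)

def mark_banned(token: str) -> None:
    # Use bit0 to mean "this device has been banned before".
    per_device_state[token] = (True, False)

def is_banned(token: str) -> bool:
    bit0, _bit1 = per_device_state.get(token, (False, False))
    return bit0
```

The point is that the service learns nothing about the device beyond those two bits, which is far less than a fingerprinting SDK would collect.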
It used to. They have largely changed that now - all data is deleted once the last app from a given vendor has been deleted (though it's not instant, and it seems to apply weirdly to TestFlight + ad-hoc builds).
I deleted Facebook a few times, and every time I reinstalled the app the first screen I was prompted with was "Hello Josh, would you like to sign in with your stored details?" Not all data is scrubbed. This persists even today, running the iOS 17 Public Beta.
I have experienced the same thing. Even when Apple made changes in Keychain policy to try to combat fingerprinting, “I never got the memo.” That sounds nuts, but I’m in the same boat.
I’ve had a few apps that I redownloaded months later, each the only app from its developer, and my auth state was preserved.
I keep hearing that the Keychain data should be deleted, but my iCloud Keychain is filled with long-dead data.
Everything in this space is so muddled. Deleting the last app from a vendor should erase that data. On the other hand, if you restore your phone from another device, that should never require relogging into anything.
Yeah, last I checked, encrypted iTunes backups would keep the "this device only" keychain data, which would only work when restored to the same device - it needs the UID key from the Secure Enclave to decode. (I wrote code a few years ago to decrypt the rest of the keychain.)
At one point, Google Authenticator started marking its entries as "this device only". I don't know if they've backed off on that since then.
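The "this device only" behavior described above comes from the keychain item's accessibility attribute. A toy model of the distinction (the enum values are illustrative stand-ins for real constants like kSecAttrAccessibleWhenUnlockedThisDeviceOnly):

```python
from enum import Enum

# Toy model: keychain items marked "...ThisDeviceOnly" are wrapped with the
# Secure Enclave's device-specific UID key, so a backup restored onto
# different hardware cannot decrypt them; ordinary items can move between
# devices via encrypted backups. Names mirror, but are not, Apple's constants.
class Accessibility(Enum):
    WHEN_UNLOCKED = "kSecAttrAccessibleWhenUnlocked"
    WHEN_UNLOCKED_THIS_DEVICE_ONLY = "kSecAttrAccessibleWhenUnlockedThisDeviceOnly"

def survives_restore_to_new_device(accessibility: Accessibility) -> bool:
    # "ThisDeviceOnly" items need the original device's UID key to decrypt.
    return accessibility is Accessibility.WHEN_UNLOCKED
```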
> developers will need to explain why they're using certain APIs.
Great! I just hope the same will be introduced on Android. From the very beginning of smartphone history, I noticed apps demanding permission for everything for no good reason, and I wanted such a policy to exist. No permission besides those strictly necessary to fulfill the very function of the app should ever be granted, or even asked for.
Nevertheless, I'm afraid such clauses increase the power of vendors like Apple to fight apps that do things they hate, even if that's what the user wants.
Sometimes the scopes of permissions are surprising. For example, scanning for Bluetooth devices on Android requires the location permission. Why? Bluetooth beacons can be used to precisely locate a device.
Unfortunately, there isn't a way provided to the average user to say that an app can use Bluetooth, but not GPS.
As a developer of an app for controlling smart home devices, I find this one especially infuriating. What's worse, if you have code that needs to scan BLE devices in the background (for instance, to update the clocks on those smart devices), you need to tell the user to go to their system settings and allow location tracking all the time.
This is rough. As a developer, I can 100% see why there could be a ton of legit reasons to need this. As a user, there's no way in hell I would trust you not to be nefarious, unless you had a very detailed and clear explanation AND the rest of your app also strongly signaled "competent and trustworthy." And even then I would be wary. The company/app you trust today could be acquired by a company you despise tomorrow. But also as a user, I find location tracking for myself to be incredibly useful.
To that last point, I am curious: are there any good, comprehensive, open-source, audited alternatives to e.g. Tile or Find My?
> Sometimes the scopes of permissions are surprising. For example, scanning for Bluetooth devices on Android requires the location permission. Why? Bluetooth beacons can be used to precisely locate a device.
I am indeed surprised by that. I would have expected the location permission to just control whether or not the app can use the system's location APIs or access GPS (if the system provides low level access to the GPS that doesn't require going through the location APIs).
Having it control access to something that is not specifically a location service but that might sometimes be usable to obtain information that can be used to figure out location does not seem wise to me, because all kinds of things that you might not expect to provide such information do. For example internet access can provide location information indirectly via IP geolocation.
If they try to stop all of that through the location permission then a bazillion apps that the user does not think of as having anything to do with location would have to require location permission, and users would learn that if an app asks for that they have to give it to them.
Bluetooth is specifically used to track location in many places. If you've ever been to a Disney property for example, there are beacons throughout the park for exactly that purpose. As you'd expect, the Disney apps request the fine localization permission and there's fine print somewhere on their website about how you should disable Bluetooth if you don't want to be tracked.
For me, the annoying part of Disney doing this isn't that they do it, it's that they do it and their ride wait times still aren't accurate. They have a ton of data measuring people moving past specific points in lines in real time, so what are they even doing with it?
> Having it control access to something that is not specifically a location service but that might sometimes be usable to obtain information that can be used to figure out location does not seem wise to me, because all kinds of things that you might not expect to provide such information do. For example internet access can provide location information indirectly via IP geolocation.
Using Bluetooth beacons you can get the location of a user accurate to a couple of meters, possibly granular enough to identify the individual. Continuous tracking can track your movements with the same scale. IP geolocation will at best be a radius of multiple kilometers. There's a big difference between "I know where you're standing at this second" and "I can guess what part of the country you're in".
With Bluetooth beacons they don't need GPS: beacons alone can triangulate your location. There's no way to know whether an app is going to do that versus just try to connect to your (non-Apple) headphones or watch.
Location permission would of course be needed if you were going to geotag your picture. The camera app that came with my phone makes it an optional permission--and now I see it isn't recording location for some reason.
I'm pretty sure that's a bug in your camera app; mine prompts me for those permissions but I'm pretty sure it works fine for still photographs without them
Don't hold your breath. XPrivacy software for rooted Android could give fake data to apps 7+ years ago [1], effectively accomplishing what that policy would. But rooting has become more difficult and more undesirable due to SafetyNet, and Google has not implemented such a feature in official Android.
These lines are wild. Android has great privacy controls and has had them as long or longer than Apple has.
Android gives 0 permissions by default in an app. Android requires 1 by 1 permission granting following a standard of least-required. Android's default permission selection is "Only this time" and not "Forever", meaning most users only grant temporary permission to the app. Android then automatically removes "forever" permissions that are unused after a period of 30-90 days without user input. Android is so aggressive about it that most "background" apps (widgets/battery monitors/weather) silently stop working as Google ruthlessly rips permissions away.
Heck -- unlike Apple, Google distributes nearly all of their apps and even system components through the App Store and still relies on standard user permissioning! You still have to grant Google the same permissions you grant any other app, even though it's "OS level". Imagine Apple authentically asking you for permission when you open a first party app and respecting it!
It's basically impossible to have an app use a permission you didn't know about on Android these days.
And in light of the great reality we live in, you have some crazy Apple stans in here saying "Google is actively hostile to user control". That line is wild considering almost every major iOS UX or hardware enhancement was ripped straight from an Android device that debuted it years prior... Between Apple and Google, there is no argument that Apple is the most hostile device company to "user control" in history. That's their thing! There's one way to do it, the Apple way, and if you want to customize it differently, you're the problem! ("Hostile to user control")
> Android has great privacy controls and has had them as long or longer than Apple has.
These can both be true. Android can have great privacy & permission controls, and also be better at many of these things than Apple, and also have systematically cut back on users control of their own devices, time and time again (Android 7 making it impossible to manage your own CA certificates, Android 14 tightening that further, sideloading restrictions, SafetyNet/Play Integrity making custom OSs & rooting unusable, etc).
There's a lot more to user control of devices than just app permissions, and "Better than Apple" on this stuff is not a high bar.
> Heck -- unlike Apple, Google distributes nearly all of their apps and even system components through the App Store and still relies on standard user permissioning! You still have to grant Google the same permissions you grant any other app, even though it's "OS level". Imagine Apple authentically asking you for permission when you open a first party app and respecting it!
That's clearly the big difference between the two. Who knows what the internal apps are using on iOS; certainly not anything standard. That's also why they're usually the first attack vector for zero-days.
Yeah, but it also helps that they don't have the same access as any random app. When you manage to get access to iMessage, that's not exactly the same thing as getting access to the SMS app on Android.
Nobody really knows except for a few security researchers. I'm not working at Apple and I can't access the device internals. You can't build a third-party iMessage yourself, unlike on Android, so there are some system APIs somewhere enabling that for sure.
There are at least some special security keys, a custom notification system, extended access to device storage, and custom wakeup exceptions, from what I could deduce myself looking at it. Of course there's surely more than that, but that's at least what's visible.
Go ahead and prove me wrong then, link me an alternative iMessage client built with the normal apis or even explain how you could even do it. Good luck.
I can do that on Android; it's impossible on iOS because iMessage is a custom system app with custom APIs.
Slightly unrelated but we really need an equivalent to userscripts on mobile so that users can inject modifications to their apps for adding changes they need.
I really liked the XPrivacy concept, and it should be extended to full scripts.
That's effectively what Xposed, the framework used by XPrivacy, was. Of course, the security implications of malicious code running inside Xposed would be nightmarish.
This has been a requirement in the Play Store for years. You have to explain a number of "sensitive" permissions, often with videos. You will be rejected if they don't think the feature adds value.
Nice! I haven't checked for quite a long time. I'm glad it is this way already, especially considering that a user is still free to opt out of any policy by sideloading the app from the author's website.
As an app developer, I have to say that Apple has made it increasingly difficult to provide a seamless user experience to our users over time. Yes, there are plenty of bad actors out there, and they ruin it for the rest of us who collect data only for the purpose of driving the experience forward.
The biggest recent pain point that comes to mind is the crackdown on accessing the pasteboard without prompting the user for access first. The pasteboard was an essential piece for enabling seamless universal links in many apps. Universal links allow us to give our users the best possible first impression and make things easy for them. Mind you, most users _need_ to be guided along most experiences, don't read things, and blame you when they don't. At some point the value prop of apps is going to be completely negated by how difficult they are to use.
What data do you collect that "driv[es] the experience forward"? How are we to know you aren't a bad actor?
People just want privacy. App developers are not entitled to every piece of information on a user. If I have sensitive information in my pasteboard, you're not entitled to it. I just want an app to serve a singular purpose then I want to close it and go about my day. I don't need an app to glean my personal info in order to show me ads.
Apple has turned popular opinion against startups and the little guy so much that we're saying these things out loud to one another. Seriously?
Apple has too much power. Just like Google on the web.
Dealing with a little more pain and friction from marketing, in exchange for freedom for our devices and a healthy distribution of power, would be worth it.
We're being told to be afraid of "marketers", when we're actively being put into computing straightjackets by the biggest thugs of them all. Every move these gigantic companies make is to make you further reliant upon them.
We're moving to a world where Apple and Google decide who can execute what, and where there is no hope of leaving. And that's terrifying.
Why would you ever trust a "startup"? "The little guy" doesn't have data protection practices, and can't prove them even if he does. Cavalier practices all over the place--I know, I've both created those practices in a "we need to ship" crunch and I've also lobbied (sometimes even successfully) to fix them later. Google has a lot of practices that absolutely suck, but their privacy and data protection functions have teeth.
As to the rest of your post: the app developer in this thread is saying Apple should give him and his friendos special permissions ("trusted certificates") to not ask the user whether the user trusts them, and you're saying it's Apple's decision to require a user to affirmatively consent to having their pasteboard read by an app that is thuggish?
One of these is actually on my side as a user, and it's not the marketer and it's not you.
Apple may revoke my certificate whenever it’s abused beyond its intended purpose. Hell, they could go further than that and only allow pasteboard access without a prompt iff the data matches a certain predicate that they approve. That would be fine by me.
Honestly, I’d prefer if we didn’t need the paste prompt altogether. I just want a way to universal link into my app on the very first launch. The pasteboard and fingerprinting gave us that, but Apple has given us no feasible alternative.
This is like chrome saying you can’t go to anything other than the root path of a site the first time you go there.
Requiring a user to agree to what an application wants to do isn't a requirement to prove innocence. It's a requirement to obtain consent that is verifiable by the operating system in which the user has placed trust.
Why is the open and clear obtaining of consent so clearly anathema throughout this thread?
So startups and the little guy deserve my data more than the big guys? No one deserves my data. Not everything in this world needs to be a marketing opportunity. How does "more pain and friction from marketing" allow for freedom on my devices?
This also parallels with developers complaining that they can’t have a direct relationship with customers.
I don’t want a direct relationship with developers. I don’t want to go to their website to subscribe or to cancel my subscription and I don’t want to give them my credit card information.
I want to be able to use “Sign in with Apple” and not give them my real email address.
The problem is that VC doesn't want to build a successful business, they want to build a profitable exit strategy. They want an IPO or acquisition that nets huge amounts of cash.
Diligently creating a business which turns a healthy profit for a fair exchange with the customer is less profitable than developing a large customer base that could be sold to someone.
Say what you will about the big players... there is no exit strategy. They are in the endgame.
Why else would the VC give you money to build anything in the first place? We couldn’t even be having this discussion without the decades of VC spending that’s gone into technology.
> We couldn’t even be having this discussion without the decades of VC spending that’s gone into technology.
Unless this is a reference to the fact that we are having the conversation on the Y Combinator website specifically, it is a fallacious argument to suppose that the only way any of our current technology could have happened is through the historical accidents that actually occurred.
The nature of complex systems means that larger systems that process more information relevant to their continuance are more successful.
I do agree with the point underlying your rhetorical question. I am just not so certain we should be grateful for naked self interest.
To be honest with you, my whole point is not even missing out on a marketing opportunity. It's way more banal than that. It's more like routing to a specific experience on first app launch based on where you came from on the web.
I mean, there are other ways around this. We could create an App Clip that essentially does the same thing. It's just harder for the user to get through.
Surely there's a compromise solution here? I don't see why Apple couldn't grant trusted certificates to good actors and revoke certificates from bad actors in regards to pasteboard access?
As I have said elsewhere, this is the compromise position. The extreme position is "there is no option for pasteboard access at all and all pasteboard interactions must happen through OS-provided, fully disintermediated controls."
You already have a compromise: you can, if you insist and if your user consents--not Apple through some "trusted certificate" granting process but the user themselves--choose to not follow standard system flows. Or you can follow standard system flows and receive implicit consent by the user when they click 'Copy' in the share sheet.
Why is seeking consent so terrifying a prospect? And why should anyone privilege "your flows" over that consent?
I don't have a problem with seeking consent from the user, and that's exactly what Firebase Dynamic Links offered by including "Check to continue my place in the app" on by default, but that product is no more thanks to Apple. Consent is given on the webpage before going to the App Store.
My issue is with Apple acting as the arbiters of what consent looks like. You have to consent with the flow to go through with it, right? Nobody's forcing our users to continue.
And like I have already said, it's not like we can't do what you're suggesting. In fact, we have, but that equates to more churn in our flows because users get confused or are fatigued by the amount of hoops they have to jump through to make the product work, which was my original point to this whole thread.
Ground truth, inviolate: the OS owns the trust relationship with the user. The OS may allow the extension of that trust relationship (MDM, custom TLS roots, etc.) but that's informing the OS's own authorization, it's not supplanting it. It follows, then, Apple is the arbiter of consent on an iOS device because they own the OS (the user has chosen to, by buying an iOS device, grant the authorizee's part of the trust relationship).
Apple does not have a trusted relationship with you, the software developer. And Apple doesn't know about a trust relationship between the user and the software developer until the OS sees confirmation from the user.
It then follows that consent must be given at or inside the security boundary to be provable; the web page you refer to is outside of it. You are asking to move from a less trusted environment (a web browser, generally watched like a hawk) to a more trusted environment (an application, with additional implicit permissions and the explicit ability to ask for others). That isn't a decision you are allowed to make and it isn't something that, for all Apple knows, you confused-deputy'd your way around a user's affirmatively consenting to.
It's turtles all the way down. You have to acquire consent at a trustable level. That means the OS or, if the OS isn't sure, the user themselves, through an OS-verifiable method. Sorry that your third-party vendor doesn't count, but it shouldn't. "Just trust me" isn't security.
"But nobody cares" might be next up, so let's settle that now: nobody cares because they pay Apple to care for them.
> Apple does not have a trusted relationship with you, the software developer.
Yeah, they do, because they have to trust the certificates and entitlements that my app is signed with. All I'm asking for is an extension of that same idea to other parts of the app experience.
I just don't think we're going to agree on this, and that's fine. Care to call it a day and we can respectfully both walk away?
I get what you’re saying but have to disagree because it’s kind of different. Apple has to review our apps before they’re released and they certainly have capability to review a trusted pasteboard copying predicate as I am suggesting. They would have the authority to revoke it at any time and I would certainly not want to risk my developer account being put into jeopardy by breaking their trust.
> Apple has to review our apps before they’re released and they certainly have capability to review a trusted pasteboard copying predicate as I am suggesting.
Are you willing to provide signed proof that the user consented to this action? The same way you provide signed code to prove you're the one uploading the code. Presumably this would have to be via a key that you don't have access to (only the user would be able to consent to this).
Because that seems like the parallel here.
Technically, apple should be auditing your flows at that point too, to make sure there aren't any dark patterns.
But, yeah, at that point, I think you could make that argument stick.
Speaking from a user point of view, I find arguments like this are often disingenuous (I'm not saying that yours is). "Tailoring for the user" normally actually means "tracking the user". Specifically, the passing of UTM parameters from my website session to my app session is something that normally has no benefit to me and I find it repugnant that developers feel entitled to do it.
> they certainly have capability to review a trusted pasteboard copying predicate as I am suggesting
They do not have the capability to exhaustively check all uses thereof in a way that obviates the need for a user to consent to having you monitor their pasteboard.
They could develop it, expending significant resources on an edge-case for a tiny fraction of app developers, or they could ask the user for permission because the user has the context to expect this request (or to not).
They already do similar stuff, like cross-referencing app links at .well-known/apple-app-site-association with the associated domains bundled in your binary.
What I'm suggesting is a predicate: only allow NSPasteboard access without prompting iff the pasteboard contents pass something like let predicate = NSPredicate(format: "SELF MATCHES %@", "^/.*\\?var=.*$")
We won't get the data from the link without the pasteboard or fingerprinting. The app is a blank slate on first open after installing: we lose the context from which we came.
It's important to note that this is not an issue for apps that are already installed: we get those links and their data; this is just a first-ever launch issue.
It’s been a while since I’ve worked with universal linking, but doesn’t Apple’s APIs allow you to carry context? Or at least preserve the URL so you can take the user to where they were? I don’t think you need to hijack the pasteboard just to continue the users flow.
You definitely do for the first launch post install.
The flow is: click link, copy data or fingerprint, redirect to the App Store, install, user opens the app for the first time, route to the experience from the fingerprint.
Yes, you and everyone else despises advertising, and also despises subscription fees even more. Eventually this will reach an equilibrium point. It’s one or the other.
People were spoiled on the old Internet, like pre-2010, where a simple adblocker meant you could just see all content, ad-free, with virtually no paywalls. It seems like because that’s changed now, they feel they have been robbed of their birthright.
>Apple has turned popular opinion against startups and the little guy so much that we're saying these things out loud to one another. Seriously?
By “start up” and “little guy” you of course are referring to complete strangers with close to zero reputational risk at stake who are handling my private data? Those people?
Apple had nothing to do with my distrust. Plenty of bad actors just like them have done that all on their own.
It's been a while, so correct me if I'm wrong, but a "Share -> Copy" action doesn't require permissions, does it? That's the conventional way to do this in Apple's own apps and a user should be able to be expected to follow that if they want to copy a link. (Doing otherwise would be nonstandard and surprising to me, regardless.)
Otherwise, accessing UIPasteboard should require permissions. Sorry for Your Flow, but like--the app needs to be transparent about what it's doing, and dirtbags exist, so.
While that works, it's not very user friendly, and that means friction, and friction means churn, and churn means a sad boss. It makes obvious sense to us, but grandma or your luddite uncle have a hard time with that. They need to be redirected (handheld) to where they're going.
It absolutely does, and I'm sure your boss doesn't like to hear it, but falling out of your flow is a pretty small price to pay for somebody else not being able to scrape Grandma's credit card off the pasteboard.
I want my tech-averse relatives to bail out of an app when they see that unless they're hand-held through it and the reasoning is made sufficiently apparent that they can parse it. If they can't, it's not clear enough, and they (and the OS) should default to no.
> _My_ flow, was the same flow shared by anyone else who used Firebase Dynamic Links, Adjust Links, and many others. I'm not alone in this.
No, you're not. And that's good. It's a bad pattern! It's a bad pattern whether one app or a million use it.
> There has to be a common sense compromise on this, like a revokable certificate for access without a prompt.
You seem to think this is the extreme position, and it's not. The extreme position is "you cannot access the pasteboard directly, at all, under any circumstances, and only user-triggered actions through OS-served and unintermediated controls can do so."
You have your compromise already: "ask for permission and be sufficiently clear about why you want it to get people to say yes".
> I completely reject your whole premise.
That's fine. Apple doesn't. That's why I like them. You started this thread with what sounded like a reasonable objection, but you've kept digging to the point where it sounds like you're kind of the reason they're doing it.
While you seem to believe the person you’re arguing with wants that API to do scummy things, it would be more fair to acknowledge that the feature they described is a workaround for the poorly-designed way apps open on first launch. Being able to offer a user an “install and open this in-app” is an incredibly common feature that everyone who has a website and an app wants, and Apple could easily enable this, for instance, by allowing the app to ask the OS to prompt the user for consent “to continue where you left off on the web?” (“Insert Title from Most Recently Used Safari Tab matching this app’s registered website domain here”)
Consent -> OS provides that tab’s URL to the app as though the user had clicked a Universal Link
No Consent -> do nothing
> While you seem to believe the person you’re arguing with wants that API to do scummy things
It doesn't seem like they do believe that, and even if they did, that's not the point.
The point is that allowing this access without explicit user consent enables various "scummy things" that neither Apple nor the user can then stop or easily detect.
It's really not that different from the person at the bank saying, "But you know it's me! Why can't you give me all my money without seeing my ID??!"
No dispute there. My only point was that that commenter doesn't even actually WANT the user's clipboard. What they want is to let the user consent to resuming the activity/state they were doing on the Web in a freshly-downloaded native app. Apple half-assed this with those little metatag-based banner things, which function great (passing the current web URL to the native app with a tap) when you already have the app, but if you don't have the app, or if you want to use a UI other than that persistent banner, the best you can do is send them to the app store and hope they find their own way. This is a poor experience, and the fact that developers resorted to the clipboard method is at Apple's feet because they didn't make any effort to solve the problem, which they could easily do in a secure, consent-asking way.
And what happens when there's a vulnerability in that SDK and every app using it for "their flow" is suddenly vulnerable to attackers scraping sensitive data from the pasteboard of their users? Or what if one of the dozen other SDKs you bundle in your app includes malicious code that takes advantage of the fact you already enabled the pasteboard API for another SDK?
As a developer, I sympathize with you, but as a user, you can get bent. I appreciate that Apple makes it difficult for you to use potentially dangerous APIs with large risk surfaces, and forces you to ask me for permission to take that risk. It's why I continue to buy Apple products and will never switch to Android as long as it's owned by a user-hostile advertising company.
A vulnerability in your SDK that lets a malicious actor control when something is read from the pasteboard. Or a bug in your code that forgets to place the route on the clipboard and ends up reading the previous item from the clipboard and attempting to load it on your server (i.e. sending user clipboard data to your server). And besides, beyond the issue of risk mitigation or code security, there's also that of trust - why should I trust that you're only using clipboard access for the claimed purpose, and why should I trust you implemented and tested your code correctly?
It's addressed by Apple's permission prompt that allows me to select the "ask every time" option. You're right that it's a black box after that in terms of whether or not the app puts a new item on my clipboard and then reads it (why would it need to do that?), but at least I know that if I have something sensitive at the top of my clipboard then I can decline the prompt.
That's not to mention that my concern is fundamentally addressed by gating clipboard access behind a prompt in the first place. That is, I always have the option to decline the permission entirely.
Thanks for this thread. It's disheartening that SLSA and dependency-driven attacks aren't clear to the general development populace, but you've done a good job of explaining the threat even assuming the best of intentions (which I sure don't) out of app developers themselves.
It's a layered set of problems, and the answer is high walls.
you might be missing a key piece of my particular scenario, and it’s that the thing I’m reading from your clipboard is something I put there from my website. The only thing that’s being exposed is something akin to a path variable.
The purpose of which is navigating to a particular part of the app on its first ever launch.
If universal links worked on the first install none of this theoretical discussion would be necessary.
No, I understood that this was your intention. I'm trying to explain that even if you have the best of intentions, there is a risk you introduce a bug where your website _doesn't_ put something on the clipboard, so then when your app tries to read it, it instead reads whatever unrelated data is on the top of the user's clipboard. And if your app tries to append this value to a URL, then you are effectively sending clipboard data to your server, i.e. risking worst case scenarios like user passwords showing up in your server logs.
If you can put arbitrary data on the clipboard from your website, and then read it in your app, without my interaction, that allows for a variety of potential exploits.
So you're popping the latest value from the user's clipboard, crossing your fingers and hoping that it's the value you just copied from your site (and that there are no bugs in your code or some intricacies of the user agent that caused that copy to fail), and then sending the value to your server by appending it to your URL. Surely you see the risk here?
No, I’m looking in the clipboard for a valid path in my app, routing to it if found and discarding it if not. All routing is local to the client app and nothing is sent to the server.
If we’re talking hundreds of thousands of clients and it works 99% of the time, then the risk of it not working in that 1% of cases is acceptable to our product team. Worst case scenario is that it fails, the user tries again, and no clipboard is needed on the second attempt because universal links work now that the app is installed.
This value in the clipboard might be something like: /path-to-experience?id=something
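The flow described above, planting a route on the clipboard from the web page and checking for it on first launch, hinges entirely on that validation step. A platform-agnostic Python sketch of just the validation (the route names and id format here are hypothetical, not the commenter's actual scheme):

```python
import re
from typing import Optional

# Allowlist of in-app routes the client knows how to open.
# Route names and the id pattern are made up for illustration.
KNOWN_ROUTE = re.compile(r"/(path-to-experience|welcome)(\?id=[A-Za-z0-9_-]+)?")

def route_from_clipboard(clipboard: Optional[str]) -> Optional[str]:
    """Return an in-app route if the clipboard holds one we planted, else None.

    Anything that fails the allowlist is treated as unrelated user data:
    it is discarded locally, never logged, and never sent to a server.
    """
    if clipboard is None:
        return None
    candidate = clipboard.strip()
    return candidate if KNOWN_ROUTE.fullmatch(candidate) else None
```

The leak scenario raised elsewhere in the thread (appending an unvalidated clipboard item to a server URL) is exactly what the `fullmatch` guard rules out: an unrelated item, even a password, simply fails the allowlist and is dropped.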
I'm glad Apple does this, and I'll keep paying them multiple thousands of dollars per year to keep random companies from accessing my clipboard data without my consent.
No, I don't give a shit about your "seamless experience".
I think this dismissive response is a pretty good example of why people are distrustful of startups/apps/etc. The person you’re responding to has almost definitely been burned by a shitty app copying everything on their clipboard…you care enough to respond and be a bit of an ass but not enough to defend your position or acknowledge the clear risk-benefit trade off here.
The person they responded to was not just dismissive but invalidating of their comment. Still, the responder's "I don't give a shit" message was in poor taste and discourages community interaction.
I prefer privacy over whatever small improvements will be made to the UX through data collection. In 20 years of software development, I've not seen data collection actually move the needle much in terms of UX improvement.
The pasteboard is critical to protect from bad actors.
I totally understand your sentiment and am not against your thoughts and approach. However, isn't this like Mark Zuckerberg replying to the question of tracking, “We want to show better and targeted ads.” (or something in that line).
As someone who's worked on a few ad supported apps, I have to admit that I share Zuck's sentiment. How do we make money if few are willing to pay for apps and ads don't pay enough?
The obvious answer is "don't." I just looked, and I've spent $250 on mobile apps this year. They are all mobile applications that do useful things for me, be it improvements to HomeKit or media creation tools--Affinity has me for like $150 by themselves this year!
If your mobile apps don't make money without invasive ad targeting, that's OK: others clearly can (see the sibling commenter), and we can get by with fewer mobile apps when they present so thin a value proposition as to be unable to get users to pay for them through conventional means.
The "hard men / strong times" line in your profile (which, to be candid, is a pretty gross pretext for a lot of bad-actor politics, but you picked it not me) echoes deep here: there are plenty of endeavors that sell something people want enough to pay actual money for them, and you can always do that instead. I haven't worked for an ad-supported tech business since my first job out of college in 2010, and most of their business was actually in air travel and hotel bookings, the ads were tertiary. The jobs and the products exist.
Let's unleash the hot take cannon for a second: users basically don't need apps. Businesses do--and there's a ton of money in LOB mobile apps even--but almost every B2C mobile application I can think of is some form of luxury good. Making the case for a luxury good is harder than making one for one of self-evident value. Congratulations--you picked hard mode, and nobody is obligated to make it easier.
Optimize for delivering value to people who will pay for it, not for virality or shareability, and perhaps you have a path to not needing to play with edgy patterns.
You aren't the average user. Most users can't even afford decent housing or food, let alone $250/year for apps, and they will never pay. I'm sorry, but I think we just have fundamental disagreements about what software is.
We do disagree. But one of us is describing, and one of us is depending, and that skews things a little, doesn't it? Upton Sinclair had a line about this.
What value, in a sane system, does selling advertising that targets people who "can't even afford decent housing or food" actually do? How are the goods you are selling--human attention--actually turning into revenue for the advertiser if the user has no money to act on the advertising? Or is it just one more iteration on a mutual deception that exists to create a predicate for "any of these mobile apps are worth creating in the first place"?
You're building a bigger house of cards with every post.
Poor people might be better served with targeted ads to help them find lower cost goods to save money, access helpful services they otherwise wouldn't know about, and seek out support groups/clubs/organizations. Not everything in this life is evil ya know.
They might but we both know they won’t be. They’ll be targeted with ads which exploit the fact that they are in an extremely vulnerable state.
In fact, the more desperate, the more willing they’d be to click on an ad, which gives a perverse incentive to advertisers and developers to worsen their financial position.
But please, show me examples of these good targeted ads because while in theory they could exist, something tells me they very much don’t.
Just because you refuse to say it publicly does not mean you don't know it.
> The non-profits in my area (which is very low income) would very much like to target my local community better/cheaper.
They certainly would, but have they? I asked for a very simple and specific thing. Examples of targeted ads right now that are specifically targeted towards low-income folks and which are doing it to better their lives.
Please, by all means, show me examples. Not anecdotes. Not opinion. Hard examples.
Until then, it's a pipe dream used to justify your company's desire to further invade people's privacy.
> Just because you refuse to say it publicly does not mean you don't know it.
Does that honestly sound fair to you? I'm not putting words into your mouth. Please respect my agency and I'll continue to respect yours in kind.
> Please, by all means, show me examples. Not anecdotes. Not opinion. Hard examples.
Aren't examples also anecdotes? https://www.sheerid.com/business/blog/why-and-how-you-should...
> Take for example, Headspace, makers of the popular meditation app by the same name. The company created an exclusive discount to help teachers who might not be able to afford the app use it to manage their stress.
So your example is… someone advertising their app so a teacher can spend slightly less money? That’s your choice for targeted ads being used for good? Seriously?
That’s a long way down from those theoretical local non-profits.
And if that’s the best you’ve got, then I don’t know how anyone could say anything but fuck no to invading their privacy for a shiny discount to an app they likely don’t even need.
You asked for an example and I gave you one, but it's not good enough for you. Where shall the goalpost move next? If it feels like it's an example of low consequence that's because it is: what we're arguing about is pretty trivial to most folks. And that might not make sense, but here's why: most folks aren't on hacker news nor care so deeply about such things.
The goalpost is exactly where it has always been, your example is just a horrible way to try and show that targeted advertising can be used for good. How does it "better their lives"? It's not like Headspace is giving the app away for free; teachers still have to pay for it.
In the single example you did provide, the motivation behind the targeting is still strictly to extract money from people. It may be to extract slightly less money from a specific group, but it's not like Headspace is doing the targeting out of the goodness of their hearts; they are doing it because they think they will get more profits from the higher number of sales even at a discount.
Using the most capitalistic example is certainly a choice you can make, but it's not one that is going to convince most people of your cause. If you want to convince people that targeted ads can be used for good, then actually show that it can be used for good instead of talking about theoretical and hypothetical non-profits and providing the weakest possible example when pushed for reality.
Non-profits and for profit companies alike can use targeted ads to mutually benefit those they are targeting. Obviously for profit companies exist to make money. Do you think that true altruism is a real thing? Everybody has a motive and that's the game we call life.
As far as my real-life local non-profit example: my local PP chapter reached out to my local side business IG account to share posts of their social media to widen their audience. Wouldn't it be great if they could better target people in their target demographic more easily / affordably?
There are many nice things we can’t have because they are exploited by bad actors. There is a nearly infinite amount of evidence that targeted ads go hand-in-hand with abusive, anti-social practices.
So I am sorry that you and your innocent nonprofits are being impacted by the massive, exploitative, evil machine that inherently builds around grinding away privacy, but it’s an unfortunate side effect of how many ethicless assholes there are.
I never said that my business model did, nor that big money was involved. Please don't assume my position and re-read the thread. All I had said was that a non-profit in my area would very much like to target my local community better/cheaper.
That's kind of my issue: In journalism it's hard for us to make money because we have a low number of users who are willing to pay to disable ads (or remove paywalls for that matter), while at the same time ads don't pay us enough despite high user engagement, so what do we do?
That's kind of missing the point? Sure you spent a lot on your laptop, but it isn't (as) a significant portion of your income, presumably.
A typical developer is spending much less on a phone as a % of their income, whereas a typical iPhone buyer is spending much much more — to a surprising degree.
Agreed, but setting laptops aside, would you spend ~1.8% of your pretax income on a phone? — And not for the best phone either, just the base pro, or an upgraded regular model.
I certainly wouldn't, but a lot of iPhone users are. I'm not judging, I'm genuinely surprised.
I only spend 15% of my pretax income on my 15 year mortgage. But I make BigTech money working remotely. So I find that question kind of irrelevant. I definitely don’t expect the average person to only spend 15% of their income on housing.
Expenses don’t scale linearly with income.
When I was making $22K a year in 1996, I bought a phone for $300. Was that too much to spend on a cell phone too?
I wasn't comparing general living expenses, I was comparing iPhone ASP to the income of those buying them. $1k:53k is less affordable than $300:22k accounted for inflation — for tech that is much more mature and commoditized today than it was back then.
Again, it's just surprising and not a value judgement.
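The comparison is simpler than it looks: each figure is a price divided by a same-year income, so no inflation adjustment is needed to compare the two shares (numbers taken from the comments above):

```python
# Price-to-income shares; each ratio uses same-year dollars, so the two
# shares are directly comparable without any CPI adjustment.
share_1996 = 300 / 22_000    # 1996 phone vs ~$22K income -> ~1.4%
share_now = 1_000 / 53_000   # $1k iPhone vs ~$53K median income -> ~1.9%

print(f"1996: {share_1996:.1%}, now: {share_now:.1%}")
```

So the modern phone does take a somewhat larger bite of income, though the gap is half a percentage point rather than an order of magnitude.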
Roll back maybe 10-12 years ago. I had a Tomtom in my car, maybe £200. An iPod. Probably the same again. A point and shoot camera. Same again. A 3G WiFi stick and contract for my laptop. A personal laptop that saw a lot of use. A dedicated Sonos hardware controller. A GPS on my bike, and of course an actual phone (N95).
How many of those devices have been coalesced into one £1200 iPhone? I call that a result.
You...know most people don't buy a brand-new phone every year, right?
I don't recall exactly what I spent for my iPhone, but it was 4 years ago, so if we assume it was about $1000, then based on that median income, that makes it 0.5%. Does that seem more palatable to you...?
Yeah I could easily afford that but I haven't spent $250 on phone apps in my entire life. I don't think I've even spent $25. $250 is more like what I'd pay for the phone itself.
Totally, but what do you do with mobile devices that needs it? I use iOS with iPads and iPhones, so buying Affinity (which works great on an iPad with a Pencil but also has software for quick edits on a phone) makes a lot of sense to me. They provide value, so I paid for them.
This, not simply "doesn't have money", is the thing about mobile apps that it feels like most people miss: most things a user wants to do on a phone don't actually need an app, and if they do, it's probably because they're buying something or talking to somebody, and neither of those are categories that need more entries with a lower security barrier.
Maybe your app just isn't worth the downsides of building an unaccountable, global surveillance panopticon. I'm sorry =/ If we destroy the panopticon, perhaps we can find new business models that aren't as destructive.
What about it? Most of the big names seem to be switching to a paywall model, which is fine with me. A local outlet has an almost entirely reader-supported model that seems to be working for them[1]. There are options other than the global surveillance panopticon, and the more we support those options, the more viable they will become.
As someone who works with a national news outlet, I can tell you for certain that paywalls are failing, and we're now considering how to recoup lost revenue with ads that don't pay as well as they did five years ago.
That sucks, but I'm still zero percent interested in supporting the unaccountable, global surveillance panopticon. Don't blame me that Facebook & Google destroyed your business model!
I don't know. I think the best approach would be to ban surveillance capitalism (e.g. make gathering personal data illegal), and then new business models would fall out of the new situation. More likely, old business models would re-appear: selling ads to relevant publications (e.g. video game ads on video game websites) instead of targeted to users; selling products direct to customers instead of funding via ads (this is currently difficult because the surveillance business model out-competes it).
Another approach could be to break up the big tech companies into a bunch of tiny pieces, and hope something more ethical comes out of the more distributed market power & natural competition.
I'm not a Marxist by any means, but breaking up the monopoly into smaller companies might make sense. The newspapers before them had similar regulations and fared well.
I think you need a more nuanced take than "journalism is dying" – a more accurate take might be that journalism is centralizing for professionals (NYT, major cable networks have scarcely ever been doing better... don't take my word for it, read the financial statements) and decentralizing for amateurs (substack, twitter, etc.).
Journalism is evolving and changing, but it isn't dying or going away. Hyper-targeted ads will not "save journalism".
News sites can't compete with big tech companies on targeted ads, they're at a systemic disadvantage.
News sites have to spend X% of their margin to continuously generate content to attract viewers, while big tech companies spend nothing to attract viewers because viewers are the generators of their content. For this fundamental reason, news sites will never be able to compete on targeting granularity or ad pricing.
Rather than fighting a losing battle, it's better to put your chips behind a battle where you have a differentiated advantage (in the NYT/Cable News companies, it's the cornered resource of accredited voices in reporting and editorial).
It would be net-negative to try to compete in the targeted-ads space. They'd be better off offering low-revenue, low-cost blanket ads (i.e. advertise in the "automotive" section of the website to non-subscribers), where the cost to the ad operator is effectively zero because they don't have to maintain a database of user ad profiles.
It means that people are willing to pay for apps. As with any product endeavor, success isn't evenly distributed — a small percentage of developers ("small" or not) are making interesting and/or useful apps that have achieved product/market fit. The rest follow Sturgeon's law and/or don't know how to find their audience.
You make money by not assuming people won't pay for apps. I'll happily pay for an app that serves a purpose for me and makes my life easier. If your app is good enough, people will pay for it. I'm more likely to uninstall an app with ads. Believe it or not, I don't want to sit through ads for the latest castle crasher or casino game that I'll never install.
Most average users I know HATE ads. They understand that every app has them, but they're in the same boat as the user you responded to: much more likely to uninstall an ad-based app.
That's the beauty we lost with targeted ads: less ads for the user, and more revenue for the app. There were obviously issues with that model, but it was a better experience for most people.
Wait wait wait... you think that having more-targeted ads (which will have a higher CTR and thus are more valuable to the advertiser, and thus lead to a higher CPM for the publisher) will lead to less ads? What planet are you on?
I was already starting to think you're full of it from your responses to other posters, but now I'm sure of it.
On iOS at least, ad prevalence has increased over time because targeted ads have been effectively abolished: more ads, with less relevance, are required to gain the same amount of revenue.
it seems more like ad prevalence has increased over time so that advertisers and ad-based services can make more money
after all, it increased before any attempts to rein in targeted advertising (which still exists, it's just harder to hyper-target in some circumstances)
I should say that, while I disagree with you, I think your comments have been well made & respectful. I regret that people are downvoting them simply for disagreeing.
We never should have gone beyond context-based ads. Hyper-targeting just feeds the enshittification of everything. The advertising business was doing just fine before the web.
What this mentality engages in is an arms race to a user interface that requires no action from the user whatsoever. That's the ideal end game here, right? You make assumptions from mined user data and do everything for them. Idiot users, despite being a majority, should not sleepwalk every company into conducting as much aggressive surveillance and profiling as possible for "ease of use". Or assume that everyone wants to share their information for ease of use.
Universal links on iOS do not work on the very first app launch, which is why developers have to resort to using techniques like copying the link out of the clipboard or associating a link to a user with a fingerprint.
This isn't a problem after the first launch, as the app delegate gets an event to open and route the user from the URL.
It’s there for people like you and me who want to change it. It’s buried enough that my older family members aren’t likely to do it on a whim, which is good.
At some point, I've just come to hate app development. Everything's been ruined by the collateral damage from unscrupulous developers and the war the app stores wage on them, the rent-seeking the platforms engage in, the bullshit rules and shitty review process. The general rot and entropy of it all.
UserDefaults... really?
Let's give Google more control over the web so they can finally destroy it in the same way, and there can no longer be anything for me to care about.
"This API has the potential of being misused to access device signals to try to identify the device or user, also known as fingerprinting."
The reason is that UserDefaults is device-scoped. Even within the context of a single application, the developer could use the API to build a list of user devices and identify the particular device the user is accessing the application from. Absurd? Perhaps. But it falls within the definition of what they are aiming to eliminate.
(1) Nearly every app in the App Store uses UserDefaults. It's such a basic, fundamental API.
(2) App Store apps are sandboxed, so they cannot access the UserDefaults of another app.
(3) UserDefaults is more or less glorified key-value storage. It's true that you could store a UUID in there, but you could just as easily store a UUID in a text file in your app's container, an API usage that is not covered by this.
(4) According to the docs, there's only one allowed "reason":
> CA92.1
> Declare this reason to access user defaults to read and write information that is only accessible to the app itself.
> This reason does not permit reading information that was written by other apps or the system, or writing information that can be accessed by other apps.
But again, as I said in (2), "reading information that was written by other apps or the system, or writing information that can be accessed by other apps" is not even possible for sandboxed apps.
In other words, almost every app in the App Store will have to declare "CA92.1" in a privacy manifest, and if every app has the same response, then nothing is accomplished except privacy theater and some unjustified good PR for Apple.
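The file-container point in (3) is easy to demonstrate in the abstract: a key-value prefs store and a plain file both yield the same stable per-install identifier. A hedged Python analogy (a JSON file stands in for UserDefaults; the paths and key names are arbitrary):

```python
import json
import os
import tempfile
import uuid

# Stand-in for the app's sandboxed container directory.
CONTAINER = tempfile.mkdtemp()

def install_id_via_prefs() -> str:
    """Stable per-install ID via a key-value store (UserDefaults analogue)."""
    path = os.path.join(CONTAINER, "prefs.json")
    prefs = {}
    if os.path.exists(path):
        with open(path) as f:
            prefs = json.load(f)
    if "install_id" not in prefs:
        prefs["install_id"] = str(uuid.uuid4())
        with open(path, "w") as f:
            json.dump(prefs, f)
    return prefs["install_id"]

def install_id_via_file() -> str:
    """The same stable ID via a plain file write, an access pattern the
    privacy-manifest requirement does not cover at all."""
    path = os.path.join(CONTAINER, "install_id.txt")
    if os.path.exists(path):
        with open(path) as f:
            return f.read()
    new_id = str(uuid.uuid4())
    with open(path, "w") as f:
        f.write(new_id)
    return new_id

# Both survive repeated "launches" identically.
assert install_id_via_prefs() == install_id_via_prefs()
assert install_id_via_file() == install_id_via_file()
```

Which is the point: if one of these two equivalent persistence paths requires a declared reason and the other does not, the requirement draws its line in a fairly arbitrary place.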
> In other words, almost every app in the App Store will have to declare "CA92.1" in a privacy manifest, and if every app has the same response, then nothing is accomplished except privacy theater and some unjustified good PR for Apple.
I disagree. It gives Apple a tool to refer to when an app says A but does B. If you are a malicious app and you lie about any of these things then kicking you off the App Store is a simple decision.
In the past you could get away with just doing it in code. Now you have to declare that your usage is non malicious. That is a huge difference.
> Now you have to declare that your usage is non malicious.
That's not even what you're declaring! I already quoted Apple's docs: "Declare this reason to access user defaults to read and write information that is only accessible to the app itself."
But sandboxing already makes this impossible, so the only thing you're declaring is that your app only does what the operating system allows it to do. Nothing about maliciousness or fingerprinting.
Moreover, the only way that Apple can actually detect fingerprinting is to reverse engineer the app, which Apple could do anyway regardless of whether developers self-report. Self-reporting is basically useless, because violators will simply lie.
Apple can already kick you out of the App Store for any reason, or no reason. And they could make a public rule against fingerprinting without requiring developer self-reporting, which is mostly a sham.
Why does the tool even matter? They already have no-fingerprinting rules. They can point at that; no need to have every dev on earth explain why they want users to stay logged in and not get the onboarding screen every time the app is opened.
> Also, I guess you could write some values into UserDefaults to uniquely identify users.
I already said this: "UserDefaults is more or less glorified key-value storage. It's true that you could store a UUID in there, but you could just as easily store a UUID in a text file in your app's container, an API usage that is not covered by this."
If you look at the docs for NSUbiquitousKeyValueStore [1], the fingerprint warning doesn't exist there. So it stands to reason that this is about "source device" identification.
Ahh interesting. I could see from the other listed features that this was about guarding against telemetry being used for user tracking. This was not a use case that sprang to mind.
Why is it wild? They are not telling you to not use UserDefaults anymore. The only thing you have to do is say “I store the users preferred sort order of the thinger list in UD” or whatever your non-malicious app does.
There is no magic here. I see a lot of complaining here but for 99.9% of devs it is a one time two minute check mark exercise.
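For what it's worth, that "check mark exercise" ends up as an entry in the app's PrivacyInfo.xcprivacy manifest. Based on Apple's published privacy-manifest format, the UserDefaults declaration looks roughly like this fragment (not a complete manifest):

```xml
<key>NSPrivacyAccessedAPITypes</key>
<array>
  <dict>
    <!-- Declares use of the "required reason" UserDefaults API category -->
    <key>NSPrivacyAccessedAPIType</key>
    <string>NSPrivacyAccessedAPICategoryUserDefaults</string>
    <!-- CA92.1: the app reads/writes only its own defaults -->
    <key>NSPrivacyAccessedAPITypeReasons</key>
    <array>
      <string>CA92.1</string>
    </array>
  </dict>
</array>
```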
It's just a pointless friction point, both for devs and users. 100% of the apps you use will have this permission. No user is going to deny the request. It's the same as the Android 'network' permission, which no one bats an eye at.
For devs, is Apple going to now audit your `UserDefaults` usage and deny an app update because you forgot to enumerate that you _also_ store preferred start page in your new release? Why even add this mental energy to the process unnecessarily?
> This API has the potential of being misused to access device signals to try to identify the device or user, also known as fingerprinting. Regardless of whether a user gives your app permission to track, fingerprinting is not allowed.
I think this is mostly aimed at macOS apps, where the scopes are much more relaxed for backwards-compatibility reasons? My guess is that iOS apps will be automatically approved.
Just because you can still do potentially malicious things through other means does not change the reasoning, intent or need for the UserDefaults change.
If you want to write data without providing a reason, then go do that. Until Apple requires reasons for file writes.
It’s literally a 2 second process, and it’s meant to provide transparency into what the developer is doing. That seems entirely beneficial to users. Sorry that being honest and taking a few seconds of your time is such a heavy burden.
> It’s literally a 2 second process, and it’s meant to provide transparency into what the developer is doing.
What transparency? Almost every app in the App Store uses UserDefaults, and the only allowed reason to use it is, literally, "CA92.1". How is that transparency?
> That seems entirely beneficial to users.
How, exactly?
> Sorry that being honest and taking a few seconds of your time is such a heavy burden.
The dishonest developers will give the exact same "reason" as the honest developers. It's security theater.
Hold on, Apple is restricting access to UserDefaults? Is there an app out there that DOESN'T use UserDefaults to save various app settings? Data saved there is already scoped to the app itself, not to other apps or system information.
> Hold on, Apple is restricting access to UserDefaults?
That's not how I would characterize it. Per the link above, they've provided a way for developers to declare the reasons their app uses API categories that can be used for fingerprinting.
> Data saved there is already scoped to the app itself, not to other apps or system information
Per the link, it appears that bad actors are somehow using it to violate App Store anti-fingerprinting policies: "This reason does not permit reading information that was written by other apps or the system, or writing information that can be accessed by other apps."
So let's say I store a UUID in UserDefaults - I know on any app install on another device or delete and reinstall I lose track of that ID. In my mind, I'm using this just as an 'App Session' marker to keep track of how many 'events' a typical user goes through in the app.
Is that considered fingerprinting? I really don't care who it is, I'm just trying to make sure the features I'm building are actually being used.
I would think what matters is whether or not you’re attempting to form a shared identity that links activity between devices, installs, site visits etc when the user hasn’t explicitly created an account in all of those places. Basically anything that’d allow you to map out their activity regardless of if they have an account or not is what’s problematic.
With some exceptions (such as social media apps) I don’t think it’s generally first party devs who are doing this, but rather third party SDKs that are popular with devs, e.g. Google Analytics, Firebase, Facebook SDK, etc.
That just means that you need to be creative with shuttling data around. Web tracking identifiers can ride in on deep links for example, which are then persisted to user defaults. After these identifiers have accumulated from a few different sources you then have a reasonably high-confidence fingerprint of the user.
To help this along the app can do things like kick the user out to their main browser to do some routine thing, where cookies can be accessed. The user doesn’t need to deep link back to the app in that case, the app can pull down whatever tracking info was harvested during the page visit and persist it.
I installed Overcast on my phone, and since a UUID is saved to iCloud, when I installed it on my iPad it was automatically tied back to my Overcast account without me having to explicitly create an account.
The author of Overcast, Marco Arment, did that specifically so he wouldn’t have to store usernames, passwords and emails on his server - increasing privacy.
You can add a username and password to your account if you need to log in to the website. But he really wants to get rid of that requirement too.
No, you can scope it to an App Group shared across all apps in one account (Team ID). Google uses this to aggressively fingerprint — if you’re signed in to Google Voice, your actions in Google Maps will still be associated to your account even if you’ve never signed in.
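For the curious, the sharing mechanism is `UserDefaults(suiteName:)` pointed at an App Group. On iOS this requires an App Group entitlement, and the group identifier below is made up:

```swift
import Foundation

// Every app from the same Team ID that declares this App Group sees the
// same backing store, so state written by one app is readable by the rest.
let groupID = "group.com.example.shared"

// App A writes:
let writer = UserDefaults(suiteName: groupID)
writer?.set("account-123", forKey: "linkedAccountID")

// App B (same group, possibly never signed in) reads:
let reader = UserDefaults(suiteName: groupID)
let linked = reader?.string(forKey: "linkedAccountID")
```

That's the mechanism that lets a sign-in in one app silently identify you in a sibling app.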
The fact that Keychain isn't in this list of APIs, despite also being shared across all of a developer's apps (and optionally synced via iCloud), also points to them being OK with shared logins between apps (or they just forgot???)
Fingerprinting is more a matter of intent. Google will absolutely tell you it's just to ensure consistency among their products - their most important product being targeted advertising. Using it to enable consistent targeted advertising would be what the advertising industry calls fingerprinting.
Can this scope be shared in some way if you’re the owner of multiple apps? I thought it was strange seeing Threads listed as a 12+ year old app when I know it’s brand new… it gave me the feeling someone could just fudge numbers and more if they’re clever.
Glad I’m not the only one. Even after I eventually realised it’s the age rating and not the age of the app itself, I still find myself constantly forgetting this because it’s so unintuitive.
There is something about the way it’s presented (or maybe a bug in my brain) that keeps reading it as the age of the app.
(And IMHO that would be a lot more useful thing to show in that prominent position too)
Apple is the bastion of gatekeeping walled gardens. Of course there are reasons to demand a rationale for access to certain API features, but some of these APIs are pretty commonly used. It seems like they are really demanding more justification of app features in general.
It feels like developing apps for Apple is more akin to being an Uber driver, where there are very strict guidelines for operators. There's zero room to build any platform software [e.g. no sideloading, no 3rd-party browser engines, no alternate payment methods, no emulation, no platform mods, etc.].
I don’t disagree that they are one of the most restrictive walled gardens, but cracking down on the misuse of APIs for fingerprinting does further the narrative that they value security. Then again, who are we kidding: this also disrupts third parties’ current strategies for delivering ads to Apple’s users.
Also, if you are pushing software to a platform, you are beholden to that platform. This argument goes back to the beginning: software repositories run by corporations can kick you off. Everything you list in your second paragraph has been an App Store rule for a long time, and the App Store is the most lucrative store to develop for, which supports developers. If you want to do whatever you want, Android is an alternative, but then you have to remove the shovelware or buy a Pixel. The Google Play store also has less to offer and isn’t as well policed, since the App Store, with its stricter rules, is the more lucrative one.
> If you want to do what you want using android is an alternative …
I don’t agree with this. Google is implementing many restrictions, checks and rules too. Just like Apple. Their play store rules are equally comprehensive.
It is true that Google has a different approach to Android APIs and extensibility but they face many of the same problems as Apple has on their platform and you can see that they react to that every Android release by adding new security and privacy features or policies.
You can flash GrapheneOS on your device and then sideload, which really isn’t possible with an iPhone. I was alluding to this, but I should have made that more apparent.
But how many people really, actually want any of that? (The single exception for me is emulation.)
We often have developers complaining about certain restrictions on iOS, but we never ask whether users care. For me, a key reason I choose iOS is the restrictions placed on developers.
Just to be clear I also find myself annoyed at some of the restrictions, like I find it particularly annoying that given all of their talk about the iPad (and Vision Pro) being a computer I will likely never be able to do my job on one since I can't run my own code there.
BUT I don't push too hard for it since I recognize that adding the ability for that opens up other issues that would affect me when just being a normal user.
I disagree with that analogy. I don’t believe that you’ve raised it in good faith. It just agrees with your view. The situation is the situation, plain and simple. There are too many contextual differences. People have built multi-million and multi-billion-dollar businesses on apps distributed via the App Store. You can’t say the same for Uber drivers. People have built truly differentiated, revolutionary experiences distributed via the App Store. Uber drivers have very little room for differentiation. Be honest.
I am curious if they will go back and look at apps that are accessing these API's or will only look at them when there is an update?
I am wondering if we will be seeing another situation where certain apps delay updates like with the previous app tracking permissions.
Also curious if they would ever go so far as to add a notice of something like "this app could possibly be fingerprinting you" in the App Store or if they are confident enough in asking about these permissions that they will feel they won't need to alert users.
> I am curious if they will go back and look at apps that are accessing these API's or will only look at them when there is an update?
With the earlier privacy crackdown that upset Facebook and Google so much, it only applied to new and updated apps.
According to people on HN, that's why it took Google months and months to issue an update to its apps, and during that time it continued Hoovering up people's information.
There are lots of apps on the app store that haven't been updated since the last round of privacy rules came out, which tells you a lot about those developers.
Launching a non-trivial app requires so much back-and-forth to get Apple's blessing these days:
Special entitlements, business verification, app review, mandatory marketing website etc...
It's not a big deal if you're GoogFaceSoft and can throw people at it, but for a solo developer the list of things to deal with is ever increasing.
Feels sadly like Apple don't care much about indie developers anymore.
The beauty of the web still remains that you can launch something to the world in minutes.
Apple, like Google, was naive once, hoping that developers would respect their customers as much as they do (unless you're big enough that kicking you out of the app store would make their products less marketable).
They got proven wrong, and had to invent policy after policy to try to fix things. Google has been restricting permissions while Apple has gone even further.
Most developers don't need all that much special treatment. Picking between three codes in a few categories isn't really that much work. At least they don't require you to manually email the app reviewers!
This stuff only seems to be a major issue for developers trying to use APIs in ways Apple does not allow them to be used.
> Picking between three codes in a few categories isn't really that much work.
True, but it's not just that. These changes are just another item to add to the already very long checklist.
I'd much rather they add to the App Store ToS "I will not use the following APIs/syscalls for fingerprinting users: [stat, UserDefaults, etc...]" rather than all this busywork. Since these restrictions are enforced by the review team (vs. the OS), the end result is the same.
I would imagine that a description of what the api is being used for makes life way easier for the review team to check whether you are telling the truth.
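That's essentially how it works in practice: the declaration ships with the app in a privacy manifest (`PrivacyInfo.xcprivacy`), so the review team can compare the declared reasons against observed behavior. A sketch of a UserDefaults declaration, assuming I'm reading Apple's manifest format correctly (`CA92.1` is documented as the "only your own app's defaults" reason code):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<plist version="1.0">
<dict>
    <key>NSPrivacyAccessedAPITypes</key>
    <array>
        <dict>
            <key>NSPrivacyAccessedAPIType</key>
            <string>NSPrivacyAccessedAPICategoryUserDefaults</string>
            <key>NSPrivacyAccessedAPITypeReasons</key>
            <array>
                <string>CA92.1</string>
            </array>
        </dict>
    </array>
</dict>
</plist>
```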
I think apple noticed a long time ago that the world is not exactly lacking in quantity of phone apps. If anything, the sheer number of them has become a hindrance to anyone wading through thousands of nearly identical apps to try to find the actually good one.
So if they implement policies that increase the average quality of apps and decrease the total quantity, that's an improvement for users twice over.
I was installing the Discord desktop app on an old MacBook earlier today and I got a prompt along the lines of "Discord would like to monitor every keystroke you enter in every application on this computer -- hit OK to continue." I very carefully selected "Hell no" and I honestly felt offended even to have been asked that question. It would be like some random person on the subway just asking if they could have all the money in my wallet. I felt like I was being pranked. If you're going to be that audacious, at least give me the respect of explaining why. Just asking me that with "ok" or "no" as my only options to proceed is ridiculous.
If I’m not mistaken, this is for a global push-to-talk shortcut that works even when Discord is not focused. Not sure if Apple provides an opportunity for apps to show a reason in the prompt.
Guess that puts to rest the old replies of "oh, but it's open source! Anyone can see the code... so there's no need for security, because the OS is secure because anyone can see the code! See? Any bugs and they're fixed in, like, hours. Don't worry!"
At least that's what I used to hear all the time. We've now seen that was hogwash.
The only Linux distro that has meaningful security is Android.
The Linux sandboxing scene is completely broken for end-user usage; it’s only good for CI/CD pipelines. If I want to open a file with a program, I don’t want to see an empty drive, nor do I want to have to kill the program - there should be a proper interaction between the user and the program, like mobile OSes have. Flatpak does have something like that, but only for files, and even that isn’t seamless (plus Flatpaks mix packaging with security for no good reason, imo).
You literally run basically everything as the same user; every document and family photo you save is thus available for r/w by any process, since they all share the exact same privileges. This includes that npm install with millions of dependencies, which could literally install screen-sharing malware with clear access to any internet site, and you wouldn’t even notice.
The age-old xkcd is still true: the only thing secured is being able to install a video driver.
Flatpak works fine for this. The discord flatpak only has access to my downloads. If I want to tweak that, I run the flatpak customizer UI. It's sort of vaguely annoying to have to restart the app.
Or I just run it in docker and only mount what I want. No VM overhead since cgroups are native.
What you are suggesting is basically to write the code yourself. That's equivalent to "Linux distributions do not have proper security out of the box".
That's blatantly false, you have a variety of tools at hand from VMs to containers to AppArmor policies. Whether those tradeoffs are worth it for you are a decision you can make.
"I don't want to make that decision! I want total privacy!" - Great, use Tails which is a distro that goes all in on privacy.
What you suggest is to write a sandbox yourself and solve all the issues (e.g. fix all software to work inside the sandbox). For example, software might need access to a certain D-Bus service, but you don't want to grant it access to all the other services; out of the box, D-Bus doesn't provide means for that. Out of the box there is also no way to prevent an app from getting information about the video card (e.g. through Vulkan or OpenGL). And so on.
Regarding AppArmor, how do you write a policy that would allow a text editor to open only files that user has chosen in Open dialog?
In this case we can say that Windows is also absolutely secure - you can just write a kernel driver that would protect sensitive data from applications.
Maybe use Android. You can sideload on that. Why even use an iPhone at all if all you want to do is bypass the App Store and sideload apps anyway? Get a great Android flagship phone with all its bells and whistles and sideload away to your heart's content.
I don't know, Apple is trying to stop bad actors from fingerprinting users and violating privacy etc. Or maybe you're into that?
I have android phone. I develop a cross platform application (FOSS). Every time I push an update to the app store version it's easily 4x the work of the play store equivalent.
I also have to pay Apple £80 for the privilege of having my app available for free on their platform.
I would just like to be able to provide users a file and say "here you go, install this" so I don't have to pay Apple to bless my software.
Oh no you have to pay £80 to gain access to a few billion users, and automatically push updates to them. Maybe don’t pay the fee if you think it’s not worth it.
The issue is not that you have to pay to put your software on the App Store, it’s that the only way to run your software on the hardware you and your users own is to put it in the App Store.
I wonder if we will see more privacy focused action by Apple on its App Store in the near future.
With them being forced to allow for 3rd party app stores, they will go all in on trying to convince people that their App Store is the one for privacy and safety and that people are risking their privacy and safety by installing another App Store.
By having to compete against other potential app stores, they are incentivized to lean in further into privacy to set themselves apart.
Makes sense but I imagine some of them will also be very annoying, it depends how strict Apple are about giving permission. If you have to maintain a separate record of file change time stamps for example that’s going to get pretty tiring.
If I had to guess, they would likely grant access to individual APIs pretty easily but scrutinize any requests that ask for all of them. Apple is clearly cracking down on fingerprinting, and those APIs are used to accomplish just that. A developer asking for permission to 1-2 of those APIs likely has a valid use case, but those asking for all 5 are probably just fingerprinting.
I would argue that apps should almost never need access to some of those, like boot time (why?), free space (not meaningful in the era of dynamic offloading to iCloud), and active keyboard (just accept whatever input you’re given). With the benefit of hindsight, User Defaults seems like a bit of a poorly designed API.
They all seem like APIs that could be abused to fingerprint the device, e.g. no two phones are going to have the same free disk space and no two devices behind the same IP address are going to have the same uptime.
File timestamp is a hard one, I’m not really sure what the “correct” way to implement that is while preserving privacy. Maybe hiding timestamps on files not owned by that app?
But free space (if being used for caching purposes): why not just try to save to .cachesDirectory? If there’s no space, the file save will fail, which is fine, your app should handle that anyways. If there’s low but sufficient space, that’s fine too, the OS will evict the file soon enough.
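A sketch of that write-and-handle-failure approach; paths and error handling are simplified and the function name is made up:

```swift
import Foundation

// Instead of querying free space up front, attempt the write into the
// caches directory and treat any failure (disk full, unavailable) as
// "no room", degrading gracefully rather than fingerprinting the device.
func cache(_ data: Data, as name: String) -> Bool {
    do {
        let dir = try FileManager.default.url(
            for: .cachesDirectory, in: .userDomainMask,
            appropriateFor: nil, create: true)
        try data.write(to: dir.appendingPathComponent(name), options: .atomic)
        return true
    } catch {
        return false
    }
}
```

The OS is free to evict anything in the caches directory under pressure, so the app never needs to know the actual free-space figure.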
Re: timestamps, the app is already sandboxed and is limited in the files it can read. It's strange that they're seemingly locking down timestamps tighter than the data itself?
For free space, it's pretty common for iOS apps to restrict functionality when your device is critically low on space -- because having most disk writes fail can be very difficult to handle.
You also might want to implement a cache policy that's "greedier" when there's plenty of free space. I think there are plenty of reasons to want to know how much free space is available.
I feel like offering lower-fidelity values may be a better way of handling these privacy concerns than adding further complexity to the already onerous app review process.
For example, for files your app doesn't own, round timestamps to the nearest second, and free space to the nearest 100mb. Most legit use-cases won't need more than this level of fidelity.
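As a sketch of that coarsening idea, using the bucket sizes suggested above (function names invented):

```swift
import Foundation

// Round a timestamp to the nearest second and free space down to the
// nearest 100 MB bucket: legitimate uses still work, but the values
// carry far less device-identifying entropy.
func coarseTimestamp(_ t: TimeInterval) -> TimeInterval {
    t.rounded()
}

func coarseFreeSpace(_ bytes: Int64) -> Int64 {
    let bucket: Int64 = 100 * 1_000_000  // 100 MB
    return (bytes / bucket) * bucket
}
```

Two devices with ~12.3 GB free would then report the identical figure, so the value stops working as a distinguishing bit of a fingerprint.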
I have zero clue why they are gatekeeping UserDefaults. Basically every app on earth uses it for even simple details, like preventing onboarding/tour screens from showing again, and it is fully sandboxed. If they are worried about some data access that one can do with UserDefaults, put that behind an entitlement or change the default behavior.
Mark my words, this goes two ways:
1. Apple rolls it back, devs feel like they have power, Apple does security/Dev theater over the whole ordeal.
2. Massive App store review issues for 99% of apps on the store.
I take it you don't use Apple products, because there are a bunch of 3rd-party apps out there that use the Face-ID or Touch-ID all the time. For instance the password manager 1Password uses biometrics to get into the app, if you choose.
Man, I'm seeing such disingenuous takes from you all over this thread.
From "just get an android" in response to someone complaining about having to pay $100/yr to publish a FOSS app on the App Store to "I take it you don't use apple products" or "That's all going to the government! I ain't gonna use that!" when someone only mentioned not setting up Touch ID or Face ID.
Maybe read, think, and then don't reply with snark.
While I do agree with this change, this restriction also clearly serves to further entrench Apple to be the gatekeeper over users. They can fingerprint you, they can serve ads to you, but oh anyone else no no no, that's evil and against your privacy™
Maybe read, think, and then don't reply to my remarks. While Apple can serve ads to you, it's what they do with that data that matters. No matter what we do, if we want to have any presence online, meta-data is going to be collected. But that doesn't mean we should just throw up our hands and go "welp, everyone is doing it, so what does it matter". It matters. While I don't trust anything totally, at this point in time, in 2023, I would trust Apple more than Google or Facebook in how it handles that meta-data. They aren't in the business of selling it.
That could change at the drop of a hat and a change in their TOS though, so it's best to be ever vigilant.
While you should never trust any corporation in the sense that you might trust a person, you can trust that they will do the things that they believe will make them the most money.
What financial incentive do you believe that apple has to steal your fingerprints?
You know that you could just… provide that reason and get approved, right?
Apple isn’t banning the use of these APIs. It just requires you as a developer to actually be transparent and honest about the things that could potentially fingerprint a user.
To make that sound like a bad thing because you’re lightly inconvenienced is an interesting take.
Maybe this is just a clash of cultures, but I expect to type "gcc myprog.c" on one computer, give the resulting executable to someone running the same hardware+OS, and have it just work. I don't want to be forced to enter into a contract with Apple and pay them fees to be permitted to produce software that can run on my friends' computers. I don't want them to be able, with the snap of their fingers, to remotely disable all software I've ever made, for whatever reason they deem just, like if I besmirched the good name of their leader on Twitter.
I think Apple's an awful company slowly enclosing the commons that is general purpose computing, so that it can make phat bank by forcing itself in as a middleman between anyone who uses their hardware and anyone who writes software for those people. I don't encourage anyone to use Apple products, so I write software that can be used on almost any platform, but can also be used on Apple if you compile it correctly. Hence why stat() piques my interest, and some Obj-C/Swift nonsense does not. I resent any further restrictions on distribution or usage.
I should not have to justify myself or my code to Apple. I should have to justify myself to the users that run it, and the user should have all the tools necessary to see what I'm doing in my code (e.g. wouldn't it be nice if macOS hadn't crippled dtruss by default).
I don't think it's a light inconveniencing. I think it's further encroachment on the freedom to run any software on your computer for any purpose, being done to privately enrich Apple, under the marketing guise of benefit to their users.
Of course you can run arbitrary executables on a Mac. You can also get their toolchain for free and use that, or you can install gcc or other compilers that support Macs.
I don’t think that’s true. It is true for the ad blockers vs. ads arms race, but you can make a completely indifferent execution environment with no access to the outside. Of course you could also just try to learn from the movement of the cursor/touch input, but I don’t think that would be accurate enough.
People are much more likely to just self-fingerprint themselves.
While I agree with this, I think the value lies in making the fingerprints less precise. You can think of a fingerprint as akin to a hash of your device's unique ID. If hash collisions are frequent enough, the value of the fingerprint is reduced.
I think, as usual, this will not be solved by a technical cat-and-mouse game (in which the cat can always decide it likes the advertising money that comes from tracking after all), but with a piece of paper from the local legislative body and appropriate enforcement against app developers.
Anything else is bound to not be in the users interest.
And with AI developed by Western engineers it will finally be possible to listen to all phone calls, read all emails and watch video from all cameras. Fully automated people management.
App Store to require developers to describe why their apps use certain APIs - https://9to5mac.com/2023/07/27/app-store-describe-app-api/
Apple cracking down on 'fingerprinting' with new App Store API rules - https://www.engadget.com/apple-cracking-down-on-fingerprinti...
(via https://news.ycombinator.com/item?id=36905295, but we merged that thread hither)