Apple seems to be investing heavily in security and privacy, but I'm curious to see if they can actually convince the average consumer to care (and/or buy into their security narrative, depending on your level of cynicism). So far, the convenience and features offered by their competition (at the expense of user privacy) seem to be a stronger draw.
I figure either (A) they're trying to carve out a niche of hardcore consumers who do care, or (B) they're trying to play a long game, hoping that broad sentiment shifts towards valuing electronic privacy. If it's the former case, I think they're doing fine; these kinds of whitepapers will reach most of those who care, and periodic news articles ("Terrorist iPhone unable to be unlocked!") will reach the rest.
If it's the latter, I think it's a pretty big risk given the scale of their re-education task (the pool of users willing to sacrifice personal privacy for other benefits, i.e. Google and Facebook's bread and butter) and the potential pushback they'll receive/have been receiving from governmental sources.
What does HN think? Is this a viable business differentiator for them, long term? Or will they have to shift to the 'dark side' of personalized data and services to remain competitive in the future?
"What does HN think? Is this a viable business differentiator for them, long term? Or will they have to shift to the 'dark side' of personalized data and services to remain competitive in the future?"
Honestly, I can't possibly trust Apple to always do the right thing. A cornerstone of digital privacy and security is being in control of your systems and data, which Apple takes away from its customers. When I buy and use Apple products, I have to have faith that Apple will do nothing wrong with my data and won't send me a malicious update that compromises my system. It is all closed source, and you only get occasional whitepapers explaining the technology in abstract terms and stating their intentions. There is no independent person who can vouch that Apple builds its systems the way it claims.
That said, given the current status quo, I'd rather warily trust Apple with some of my personal data (GPS location, browsing history, Notes, etc.) than Google or Facebook, given that the latter have a business motive to sell me out.
So when my friends or relatives ask me what phone/computer I think they should buy, I opinionatedly recommend iPhone + GNU/Linux for the tech-savvy and iPhone + Mac for the non-tech-savvy. I also give them a five-minute mini-lecture about:
* Not allowing location access to all the apps they download (why would a calculator app need your location "even while not using the app"?)
* Paying for their email service (I use Fastmail)
* Enabling tracking prevention in their browser to block tracking cookies
* Using a separate browser for Facebook, Amazon, and the like, while using Firefox only for personal/sensitive browsing needs
Realistically though, your real-world choice is Android or iOS: either a product that is half closed (Play Services) from a company that makes 90% of its revenue from ads, or a product that is entirely closed from a company that makes all of its revenue from selling products directly to its customers. In theory you can run AOSP and be in control, but that's not feasible for normal people in practice.
My two cents: half closed is pretty much the same thing as fully closed from a trust perspective, and Apple's incentives are much less in conflict with user privacy than Google's.
I wouldn't touch Android with a 10ft pole; none of that open-source stuff means anything when no human on earth has the time/money/competency to audit it.
I highly recommend an ad-blocking VPN so that the rest of the apps hitting Google's ad network are blocked as well.
Google's explicit goal is to collect data about you to make money from advertising. It works so well that Google even gives you stuff for free so you stay with them.
A privacy-oriented VPN provider's explicit goal is to give you some privacy, and you pay them for it.
Sure, a VPN can turn on you, but then it bites the hand that feeds it. It would also contradict what they explicitly say, completely unlike the situation with Google.
A Debian developer might also try to push a backdoored package onto your machine. Is trusting Debian therefore the same as trusting Apple? It is not, because Debian's structure is transparent: the developers carry individual responsibility not just towards the Debian project, but towards everybody who can see their work, i.e. the whole world.
The theoretical possibility of multiple things being equally bad is irrelevant when these other significant aspects are ignored.
On Android it's DNS66 [2] or NetGuard [1] (only the GitHub version has hosts-based blocking) for me. Both are open source and act as a fake VPN to intercept traffic locally on your device. No cloud service needed.
I find Block This! to be pretty good on my Android devices. It's not actually a VPN; it's a fake VPN that just runs a host filter and drops ad traffic.
They have a build on the web; I compiled my own from their source, though, so I can't vouch for that one.
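For the curious, the "fake VPN that just runs a host filter" approach boils down to sinkholing blocked domains at DNS-resolution time. A minimal sketch in Python (the blocklist entries are made up, and real apps do this inside Android's VpnService rather than in a standalone resolver):

```python
# Minimal sketch of hosts-based ad blocking: answer 0.0.0.0 for
# blocked domains (and their subdomains) instead of forwarding the
# DNS query upstream. The blocklist entries below are hypothetical.

BLOCKLIST = {"ads.example.com", "tracker.example.net"}

def is_blocked(domain):
    """True if the domain or any parent domain is on the blocklist."""
    labels = domain.lower().rstrip(".").split(".")
    # Check "a.b.c", then "b.c", then "c" against the blocklist.
    return any(".".join(labels[i:]) in BLOCKLIST for i in range(len(labels)))

def resolve(domain):
    # Sinkhole blocked names; otherwise (pretend to) forward upstream.
    return "0.0.0.0" if is_blocked(domain) else "forward-upstream"

print(resolve("sub.ads.example.com"))  # → 0.0.0.0
```

The subdomain walk is why a single blocklist entry catches the whole ad domain's tree, which is also why hosts-style blocking stays cheap on battery compared to full traffic inspection.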
On iOS, the ($3) app 'adblock' is a local VPN with domain-based ad-blocking rules. In practice it's essentially an ad-block hosts file, and it doesn't use nearly as much battery.
Does it? I've been using it on and off on Android, set to disconnect when the screen is off. The impact on battery does not seem drastic, though it is difficult to do a real side-by-side comparison.
Another option is to only use it on demand, while browsing.
It can leave a connection open, which prevents many cellular radio technologies from falling into a lower power state.
Really depends on how optimized the cell modem is and how often keepalive packets are sent by the VPN. Honestly, I haven't seen much of a battery hit with an IPsec VPN on the iPhone SE. However, my older Android phones used to get shredded on OpenVPN.
EDIT: If anyone's asking, I use the algo.sh script to deploy an IPsec VPN on DigitalOcean (it has an option to enable ad blocking too): https://github.com/trailofbits/algo.
What makes you think Android isn't audited, besides insufficient knowledge of the subject? Wouldn't you think that the most popular OS in the world, with top security researchers and companies combing through its source code trying to find vulnerabilities for over a decade, would qualify as somewhat audited? I find it odd that you would not "touch Android with a 10ft pole" yet have no problem touching an OS whose source code neither you nor any security researcher can audit. Security through obscurity doesn't work, and your pole can't really help you when it bites you, and it will.
You can't seriously claim iOS is not entirely closed just because they release the source for two components under the GPL. If they were able to stop using this GPL'd code, they certainly would.
Apple even stripped most of the iOS-related code from their macOS open-source drops. E.g., they only recently started to release XNU source with the code for iOS included.
You don't have to use Play Services, and this is very easy to do. You can also use Play Services piecemeal. The choice really is between open or closed.
You can use Firefox as your default browser on Android but are limited to a bastardized version on iOS. You can run an OpenStreetMap app as your default map on Android, but you have no choice on iOS. You can build your own apps and run them indefinitely on Android, but you have to pay a $99 yearly fee (on top of the Mac tax) or rebuild every 7 days on iOS.
I don't recommend iOS to anybody. People who think it is more secure don't understand defense in depth, and people who think it is more private don't understand that Pixel and Android One builds actually collect less information by default (before opting in on any of the dialogs).
I tried the Lineage version: I can't even register for Signal (a messaging app) with microG enabled. The only workaround is to disable microG, register Signal, then re-enable microG.
> People who think it is more secure don't understand defense in depth, and people who think it is more private don't understand that Pixel and Android One builds actually collect less information by default (before opting in on any of the dialogs).
For one, orders of magnitude more iOS users have been infected by malware than users of Google or Amazon flavors of Android, even though there are orders of magnitude more users of the latter. See XcodeGhost, which Apple had to rely on Twitter users to find instances of in its own App Store. Compare to Google and Amazon, which run static and dynamic analysis of apps uploaded to their stores and allow third party security research on their stores, enabling both earlier detection of malware and faster takedown of all apps that share the same malware.
This is dangerously misleading nonsense. Yes, XcodeGhost was bad for iOS in China, but Android malware in China is of a different class entirely: it often comes pre-installed in the firmware [1]. Furthermore, the Play Store isn't available in China, and the Android app stores that are available in China are overflowing with malware.
500 million is the number of devices which potentially had access to an app store containing apps that had malware. It's not the number actually infected. I mean, come on.
> 500 million is the number of devices which potentially had access to an app store containing apps that had malware. It's not the number actually infected. I mean, come on.
From the article: "XcodeGhost potentially affects more than 500 million iOS users, primarily because messaging app WeChat is very popular in China and the Asia-Pacific region." After that article was published, Angry Birds 2 was also discovered to be infected.
Did you notice how I compared to the Google and Amazon app stores? Those are the devices that HN readers would buy (those Chinese app-store phones are not available for sale in the US), and they have vastly more users than the iTunes App Store, yet their total infected devices can't come anywhere close to the toxic hellstew that is the App Store.
Again, source? XcodeGhost was one instance, of limited impact.
> Compare to Google and Amazon, which run static and dynamic analysis of apps uploaded to their stores and allow third party security research on their stores, enabling both earlier detection of malware and faster takedown of all apps that share the same malware.
No need to assume. https://researchcenter.paloaltonetworks.com/2015/09/more-det... not only shows that there were thousands more apps affected by XcodeGhost than originally reported (and thus more infected users than the approximately 500 million estimated in the earlier link based on the original 50 apps), but also that Apple was still taking down affected apps days later, waiting for third parties to report them. This despite the fact that XcodeGhost is a single piece of malware that can be detected with a binary grep. That Apple didn't have the infrastructure to deal with even that demonstrates how woefully inadequate its app-management infrastructure is for dealing with malware.
Where does the link say this? As far as I know Apple has all the information needed to make this decision themselves. As you said, a binary grep, coupled with many of Apple's static/dynamic analysis tools should be enough to find this issue.
"Starting September 18, Apple began to remove some iOS apps infected by XcodeGhost from its App Store.... As of this writing, on Monday, September 21, we notice that there are still some previously known infected iOS apps available in App Store."
> As you said, a binary grep, coupled with many of Apple's static/dynamic analysis tools should be enough to find this issue.
As I said, it should be that simple, if Apple had set up the basic infrastructure for it. Since it had not, XcodeGhost remained on the App Store long after it was initially discovered, allowing researchers to find thousands more affected apps. Compare to Google's Play Store, which performs not only static analysis but also crash analysis, battery-usage analysis, and dynamic analysis by running the apps in cloud VMs (something Amazon did at launch).
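To illustrate how simple that kind of signature check is, here is a toy store-side scanner in Python. The signature is a C2 domain reported in public XcodeGhost write-ups; the pipeline around it is entirely hypothetical, not anyone's actual infrastructure:

```python
# Toy sketch of signature-based malware scanning over app binaries,
# the sort of check a store-side pipeline could run on every upload.
# The signature is a C2 domain reported in XcodeGhost write-ups.

SIGNATURES = [b"init.icloud-analysis.com"]

def scan_binary(blob):
    """True if any known malware signature appears in the binary."""
    return any(sig in blob for sig in SIGNATURES)

def scan_store(apps):
    """apps: {app_name: binary_bytes}; returns the flagged app names."""
    return [name for name, blob in apps.items() if scan_binary(blob)]

store = {"GoodApp": b"\x00hello\x00",
         "BadApp": b"\x00init.icloud-analysis.com\x00"}
print(scan_store(store))  # → ['BadApp']
```

Real malware detection obviously goes far beyond byte matching (packing, obfuscation, dynamic behavior), but the point stands: once a single indicator is published, sweeping an entire store for it is a trivial batch job.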
Is it actually easy to avoid using Play Services on Android? The ways I've seen this done before require rooting your Android phone, which is sketchy at best and often cannot be done at all.
I don't understand why you are downvoted. The relative openness of Android is the reason something like GNU's Replicant, LineageOS or CopperheadOS (all without Google software) is possible in the first place.
To see proprietary and incredibly locked-down devices such as iPhones advocated for so strongly seems so weird. As if openness and security are at odds or something.
Though an iOS user myself, I was on board with the logic of your comment until this point:
> … or rebuild every 7 days on iOS.
By tossing in a flippant remark, you undermine the legitimacy of the other arguments. I don’t think a reasonable person would conclude that you actually believe the above statement to be true. This type of rhetoric, in which reality is knowingly distorted to make a point, may be less effective than you consider it to be.
Honestly, I can't possibly trust Apple to always do the right thing. A cornerstone of digital privacy and security is being in control of your systems and data, which Apple takes away from its customers. When I buy and use Apple products, I have to have faith that Apple will do nothing wrong with my data and won't send me a malicious update that compromises my system.
For a user who doesn't have the foggiest idea about technology, to whom the whole thing is "magic," why isn't this rational? Do you closely examine the safety of chemicals in all of your cleaning products, household items, clothes, your car, and the devices in your home? For most people, probably no more than in a cursory sense, as it's not normally one's area of expertise.
We have to "outsource" many aspects of daily life to experts.
Paying for their email service - I use Fastmail.
You've effectively outsourced your email security to Fastmail. How is that any different?
I was not calling using Apple stuff irrational. I use an iPhone myself. I was just saying that I still don't "trust" it, because Apple can go rogue if they want to and I can't do anything about it. If it were an open-source ecosystem, there would be more eyes on the product: even if I personally don't read the source code entirely, some non-Apple person does, and Apple would be wary of the public eye.
"Do you closely examine the safety of chemicals.."
No, but they need to list the ingredients, get them certified by authorities, and labs can freely test for the ingredients without the company's input. Software is just not the same as chemicals.
True, trusting Fastmail is no different from trusting Apple. I didn't mean to say that Fastmail was magically more secure, just that they have no incentive to sell me out. I could have used any company for this; I'm just trying not to put all my eggs in one basket. It is certainly a better choice than Google, which creepily builds a profile of me to make money.
I don’t mean to be short, but your views demonstrate completely the bizarre myopia of the “hacker community” (for lack of a better term.)
Everyday users experience massive troubles with security issues, and incur painful losses, at the hands of criminals and black hats! I have no awareness of a major corporation “going rogue” and deliberately harming their users, ever.
The incentive structure is the opposite: companies generally try to help their customers.
That’s not to say they don’t fuck up, particularly when the incentive structure gets “skewed.”
To see that there is actually an inverse correlation between personal control of software and security, just observe the cryptocurrency space.
The losses suffered by users from bugs, security failures, and hacking are comically large.
"I don’t mean to be short, but your views demonstrate completely the bizarre myopia of the “hacker community” (for lack of a better term.)
Everyday users experience massive troubles with security issues, and incur painful losses, at the hands of criminals and black hats! I have no awareness of a major corporation “going rogue” and deliberately harming their users, ever."
English is not my first language, and I was ambiguous in what I wrote. By "Apple could go rogue", I meant that Apple could change its policies on user privacy and start making money off user data. True, that would be bad PR and a reversal of their current stance, but what if there is a leadership change and the new leaders think it is a goldmine waiting to be opened?
After all, we have seen Apple do the exact things they said they wouldn't: big iPhones, the iPad mini, giving up user privacy to the government in China...
"The incentive structure is the opposite: companies generally try to help their customers."
I think that's a bit too abstract. Companies try to help customers only as long as it is in their business interest.
What do I wish would exist?
A hardware company in the class of Apple that sticks to hardware, and a company like Red Hat that builds a mobile Linux OS with a GNU app store, where users can buy apps directly from the developers and install them themselves. That way no one is in complete control. Phones are powerful enough these days to start using the features we have developed for laptops.
> A corner stone of digital privacy and security is being in control of your systems and data
I see this as an opportunity for Mozilla, and I'd submit an "idea" to them but I don't know how. Google got big by making ads suck less: they were the first to deliver non-intrusive ads that actually worked (and were personalized to boot).
I wonder if there isn't a serious market for flipping the paradigm on its head. Marketers do need to deliver relevant messages to relevant people, but I don't think they need to collect lots of personal data about you. What if someone trustworthy flipped the paradigm? Say you tell Firefox your interests, and, besides being a browser, it becomes a sort of "local ad exchange" too: when you visit the New Yorker, the page says "I can display an ad about cars here, or one about sleep pills, or one about cruises, etc." Instead of profiling the user, send extensive ad profiles to the browser, and let the browser pick locally based on the user's profile and preferences. It would be wasteful in that you potentially send a lot of ad metadata to a single browser, but theoretically all that metadata can be cached. And, I dunno, maybe it's not really that much metadata after all? It's a rough idea, but I feel it's one worth exploring that could provide genuine benefits both for (non-shady) marketers and for end users.
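A rough sketch of the client-side selection step, assuming a made-up ad-metadata shape (ad id plus topic tags) and a local interest profile; nothing here is an actual Firefox API:

```python
# Sketch: browser-side ad selection. The user's interest profile never
# leaves the device; the page ships metadata for all candidate ads and
# the browser picks one locally.

def pick_ad(profile, candidates):
    """profile: {interest: weight}; candidates: [(ad_id, [topics])]."""
    best_id, best_score = None, 0.0
    for ad_id, topics in candidates:
        # Score an ad by summing the user's weights for its topics.
        score = sum(profile.get(topic, 0.0) for topic in topics)
        if score > best_score:
            best_id, best_score = ad_id, score
    return best_id  # None if no ad matches any interest

# The profile stays local; only the chosen ad id is ever revealed.
profile = {"cars": 0.9, "travel": 0.4}
ads = [("sleep-pills", ["health"]),
       ("suv-2018", ["cars"]),
       ("cruise", ["travel", "leisure"])]
print(pick_ad(profile, ads))  # → suv-2018
```

The interesting design property is that the targeting signal (the profile) and the matching both live on the device; the only thing an ad network could observe is which ad was ultimately shown, and even that could be batched or noised.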
Interesting, but I think that data would be only semi-useful for marketers. Situations change, needs change, and for some things on a frequent basis. I imagine most users are uninterested in constantly going into their preferences to update what they're willing to view. It's much easier for them to let someone passively observe their life and web activity the way Facebook or Google do. And except for a number of tech-savvy tinfoil hats, most people would still choose convenience over privacy and security. The proof is in the numbers. So this would probably be dead in the water for Firefox, because few marketers would find value in it. Hopefully I'm wrong about people continuing to value convenience over privacy and security, though.
Well, Firefox _could_ passively observe your life and update your preferences (better than Facebook or Google do, if it's your primary browser, since it would observe you across all activities). It's just that it wouldn't need to share that information with anyone except you; it stays local. That's what I love about the system: you can be as targeted/"privacy invasive" as you wish without actually invading anyone's privacy, since people stay firmly in control of their own data.
To add to that, "private mode" would actually do what people expect it to do - keep them anonymous on the web, and not leak sensitive/ embarrassing information about themselves.
A great business differentiator, and something that could be much easier to execute at Apple (due to them controlling the whole stack) would be to create a fully trusted platform.
That is, design all hardware and software in-house, and use formal methods (theorem proving, program analysis and model checking) to verify everything. Furthermore, adopt a Qubes-like architecture where external untrusted applications can be run in an isolated way.
I switched over to Fastmail last year and never looked back. I get no spam, and it seems as secure as email can be, in the sense that the business model is aligned with my needs.
Good idea on the separate browsers too; I never considered that. I would just add a VPN you trust (or roll your own) to that list, along with uBlock Origin and uMatrix.
They own the enterprise market for phones and tablets, despite being one of the most obnoxious vendors to deal with. Big enterprises that found that unacceptable (like the NYPD, which went all in with Microsoft) came crawling back.
They own the universe of mobile users who actually spend money on mobile.
I think they’ve established that they can virtually print money without the intrusive and risky data gathering that their competitors engage in. Fundamentally, they operate an honest business — you give them cash, they give you stuff.
Everyone I know who works on the iPhone, iOS or related products cares deeply about privacy. They aren’t just making your phone. They’re making their own.
If Apple brings the iMac Pro architecture to the MacBook, it will be the only mainstream computer that is architecturally resistant to microarchitecture attacks, because it isolates trusted (Apple-authored) and untrusted code on separate ARM and x86 processors. If iOS apps are distributed as bitcode, their binary versions can be optimized for security on the device as threats evolve.
It worked on me (it was a leading reason for going back to an iPhone after switching to Android; the others were customer service and iMessage), and I think it will be an increasingly powerful differentiator as time goes on.
They don't need to do the reeducation. Ask yourself, will there be more security breaches / issues in the future or less? Will there be more issues with governments and companies taking advantage of private users data or less? Will there be more demands for your data in the future (health, context, emotional signals) or less? I think the answers there are pretty obvious, and owning the "we care about our users, their privacy and security" frame is going to be an increasingly powerful one.
I can only speak for myself, but I wouldn't touch a system controlled by Google with a ten-foot pole. The amount of information they have on me is already staggering, no need to give them the last few percent, too.
Right, but the point is that we're on a niche internet forum. Our feelings about any technology aren't likely representative of the population at large.
This isn't the '80s or '90s anymore; "technology influencers" have no impact on Apple's business. This is a common trope I see, usually when "technology influencers" are mad about some product decision Apple makes. Apple gets more word of mouth from laymen using iPhones than from any "professional" using a Mac Pro.
A recent example of this is MacBook Pro sales rising despite the outcry in nerd circles. Another was when FCP X dropped and there were countless hot takes about Apple abandoning its professional customers, who would all switch to Adobe and Avid.
I don’t think influencers have much say in a positive direction. I think they do in a negative way but it can take years for things to change. The negative outcry against Microsoft started in the mid-90s but didn’t hit them in terms of mind share or numbers till ten years later.
The "HN crowd" has not downvoted that comment, it has upvoted it. Meanwhile you've left an irrelevant comment that explicitly broke the site guidelines. Would you please read them and not do that again?
Other than scenarios whose use cases require conditions outside the iPhone's limited operating range (iPhones don't like the cold), I'm not aware of any large enterprise standardized on Android. I can think of a dozen on iOS.
The devices that my teams support on Android need 3-5 agents or other tools to meet compliance requirements.
There are some short term wins of course, especially as they can use this to paint their competitors’ business model and strengths as a negative. But I think this approach will pay additional dividends when they move deeper into healthcare and more computers make the move onto and then into our bodies.
I wouldn't be surprised if there's a big privacy leak or scandal in the foreseeable future, one that makes Google have to apologise while Apple just says "we told you so".
With Apple's current track record on quality and security on macOS... are you sure? What makes you think Apple's iCloud, carrying all that personal data, is more secure than Google's services?
I think Apple employs engineers and leadership who care about privacy and security and are willing to marshal the resources to bring that about. I doubt too many customers really, truly care.
Somehow I don’t really like the framing of this question. It simply takes an economic lens and misses the nuances that agency and morality bring to the table.
As the richest organization in the world I would argue that Apple is uniquely positioned to act and further their own goals and agenda – especially long term. If they felt that privacy was a cause to fight for... who could really imagine what could be done with those deep pockets? However, you will never get there if you employ a purely economic lens. Economics is a mostly value-free toolset you need to fill with assumptions to get any results.
For what it’s worth I see a stronger path in shaping/making the future than speculating about the infinite possible alternatives that are out there. So I would rephrase the question to something like:
Is security/privacy a worthwhile goal for Apple to pursue – not only economically but in general?
This question is imho a lot more tractable and you can get somewhere with it. For example, the current market climate and possible re-education cost you mention are really important aspects to consider – but they are not the only ones... people are not just pawns but capable of reasoning => we are interested in what SHOULD happen.
> Apple seems to be investing heavily in security and privacy
I would say only on the surface. The recent bug that allowed you to log in as root with an empty password could have been avoided with more testing (I would assume).
That appears more to do with Apple prioritizing iOS over macOS, not deprioritizing security and privacy. Not that this explanation is any more acceptable.
I think people increasingly do care, and will care more and more.
Some points:
* The revelations of Snowden and all the many hack attacks in the news have impacted the average Joe
* Many security experts expect more downfalls in the near future. For instance, cyberwarfare has only just begun.
* For example, in Germany, people are much more conscious about privacy.
* And think now about people who want to secure their crypto assets investments.
All in all, I think Apple is betting properly on security and it will have an economical reward.
Not necessarily. Safety for cars was nothing at the start. Until the '60s, car companies actively avoided discussing it in order to promote the macho, sexy, powerful car image. Then it became hygiene: you had to have it. Now I'd say it's a selling point: companies actually promote cars by presenting their NCAP ratings or other extra safety features to draw customers.
Of course, it's different when your life (or the life of your loved ones) is on the line. But even though we won't quite be at the same point, the stakes are increasing for software. I'd imagine in 20 years we should be beyond hygiene.
- Amazon's Alexa: I am willing to put a microphone and/or camera into one or more areas of my home so that I can do things without pulling my phone out of my pocket.
- Google Gmail: I am willing to give you full access to the text of my emails so that I don't have to pay for an email service.
- Facebook: I am willing to give you a massive amount of information over what I do on a day-to-day basis in order to facilitate my social life.
I know a number of non-tech people that tell me they use iPhones because they believe they are more secure, and they think Apple is better with privacy issues.
>Apple seems to be investing heavily in security and privacy, but I'm curious to see if they can actually convince the average consumer to care (and/or buy into their security narrative, depending on your level of cynicism). So far, the convenience and features offered by their competition (at the expense of user privacy) seem to be a stronger draw.
I'd hazard a guess and say they're doing this for reason A. It definitely sets them apart from Google though, as it's Google's bread and butter to collect data on users in order to sell ads. Apple doesn't have a horse in that race, so it's a win/win.
I guess my cynicism level is the issue: I don't trust Apple a whole lot more than, let's say, Google.
Their servers still operate in the US, and they don't have a stellar history of protecting their customers' privacy (for example, not so long ago people realized that their iPhones were sending location data to Apple).
Apple's stance on privacy seems more motivated by PR than a genuine core value of the company.
They’re the only commercial OS vendor that doesn’t rely on monetizing customer information to turn a profit. (I’m not sure why MS voluntarily pulled itself out of that camp).
This is why I switched away from Windows and Android. Open source phones don’t really exist, and out of the box usable desktop Linux laptops are rare, so most people in my boat land on Apple products.
People "realized" this when Apple began doing it and honestly told people about it. It is trivially easy for you to turn off location sharing permissions for iPhone Analytics if you so desire. It's right there in Settings.
Are you maybe talking about the location cache?[0]
That wasn't sent to any Apple server; it was a local cache that apps couldn't access unless they used a root exploit, it was safe in your backups if they were encrypted, and it was discussed in the ToS (Ars's sensationalism about "without your consent" notwithstanding).
One of the ways to view Apple is through the lens of the field of luxury marketing - because the iPhone is a luxury good, a status symbol, and very successful one at that.
One of the major techniques in this field is that in order to create a luxury good, you need to tell a story about a genius creator, creating something that nobody else can create, with unique methods only his company possesses.
So when the iPhone was new, this story was partially true (yes, the iPhone really was unique, but no, Steve Jobs didn't create it with his bare hands). But once competitors created high-quality products with great design, Apple needed new stories.
So there was the unique manufacturing method for the metal body, and the glass. And their processor, which really was the best (but did users really make use of that?). And now we have privacy.
And marketing-wise, the iPhone still remains a status symbol. So what they're doing is working, and I wouldn't bet against them.
As for them needing personalized data and services in the future? Well, they've got Google's apps for that (a good deal for Google, too). And that way users can have their privacy cake and eat it too.
Jobs practically created the iPhone with his bare hands. The biggest differentiator between Apple and other companies has been the development process. No large company, and few small companies, has ever had such a focused process that consciously fights against the interjection of bureaucracy.
The irony is that Apple's offering is not at all compelling to anyone who actually understands anything about security and privacy. Yes, Apple's security is strong with respect to outside threats, but at the cost of putting absolute blind trust in Apple. So your actual privacy is only as good as Apple's internal policies allow it to be, and those are not only completely opaque, but Apple is under no obligation whatsoever to maintain those policies in the future. Apple could be selling your data to the Chinese on the side, and there would be no way for you to know. And even if they're not doing it today, they could decide to do it tomorrow. At that point, even if you somehow found out, you'd be very hard-pressed to do anything about it.
[UPDATE] This comment is getting a lot more attention than I expected it to. I've watched the point count on it go as high as 20 and as low as 0, with several cycles between 0 and 10 and back, so a lot of people are voting on it. So let me say a few additional things.
First, I concede that the way I phrased my position was inartful. I apologize for that.
Second, Apple is probably the best solution on the market in terms of security and privacy. My complaint is not about them per se, it's really about the state of the market. My choices are either to hand my data over to Google or Microsoft, or to hand over my control of what I can and cannot run on my system to Apple. Neither of those is a satisfactory option IMHO.
Sure, they could do that, if they got tired of their business model and decided to completely change everything about the way they operate.
I really don't get accusations like this. Selling customer data would violate nearly everything about what Apple cares about and believes in. It makes no sense: as soon as it came to light, it would completely wreck their business as their customers left in droves, and their employees would also leave in droves as soon as they heard about it.
> Selling customer data would violate nearly everything about what Apple cares about and believes in.
Not quite. It would violate everything Apple says they care about. But there's actually reason to suspect that Apple might change this policy. Once upon a time Apple said they cared about building computers that Just Worked, that were the most reliable and easiest to use in the industry, that had the most advanced hardware. That is no longer true. Apple's hardware is way behind the state of the art. Their applications are a horrible mess. Their OSs crash regularly. Security failures are common. Of course, they didn't get up one day and announce that they were abandoning their prior commitment to quality bwahahaha, they just did it. Quietly. Gradually. Probably unintentionally. So worrying about where Apple might go in the future is not just paranoia. Apple has jumped the shark before, they could do it again.
As if Apple enjoys this happening. Be real here: Apple just hasn't paid enough attention to it; they're not actively trying to ruin their "It Just Works" image.
That's valid, but what would you suggest is better? Even if Apple open-sourced the implementation of key security libraries, there is nothing to assert that the shipped version of the library hasn't been supplemented with closed-source patches that allow for backdoors.
Ultimately it comes down to whether you trust Apple. The evidence to date is that their behavior is trustworthy (e.g. they fought against unlocking an iPhone despite being requested to by the authorities). However, there is nothing we can do to verify that trust into the future.
Apple trustworthy? Apple initially claimed it was impossible for the company to aid in government data requests, which the press ate up. After the FBI showed that to be a lie, Apple quietly stopped making that claim. (The much ballyhooed phrase "not technically feasible" has magically disappeared from the privacy page.) I know of no other American tech company that has so brazenly lied to its customers without so much as a mea culpa after it was caught out.
> Apple initially claimed it was impossible for the company to aid in government data requests
Apple does comply with government data requests, as they're legally required to, for data that they already have. What Apple had an issue with was adding a backdoor to products so that it would collect data that Apple currently didn't have to hand over to the government.
You missed the point. They claimed it was impossible for them to help the government obtain data off the devices. It clearly was not. When this was discovered, they simply disappeared this false claim from their marketing material (privacy page) without even admitting to their lie. Can you name any other American tech company that has lied like this and then covered up their lie? That is why I consider Apple untrustworthy.
Also, there was no backdoor requested, but that is a separate issue. If Apple could install a backdoor, that would make their initial claim even more of a lie.
> They claimed it was impossible for them to help the government obtain data off the devices.
Are you sure that's exactly what they said? Like you said, this could be construed as a lie and open them up to legal liability.
> Also, there was no backdoor requested
I'm pretty sure that's what the whole FBI thing was about. Write a version of iOS that we can install on this phone to gain access to it without the password.
> I'm pretty sure that's what the whole FBI thing was about. Write a version of iOS that we can install on this phone to gain access to it without the password.
That is incorrect. The FBI asked for Apple to install a build that would let them brute force the pin without wiping the device. The device and build would stay on Apple's premises. Again, this is beside the point that Apple is untrustworthy.
I don't understand where that is coming from. What parts of the iOS security model involve blind trust in Apple, apart from the firmware update process?
The data you're supposing they might sell to the Chinese in the future is mostly stuff they go out of their way not to collect in the first place.
I'm not insisting on open source (though I do prefer that all else being equal). But I would like my computing infrastructure to be independently auditable.
My real concern is not that Apple will one day announce that they will change their business model to be more like Google bwahahaha, it is that they will either be, or have already been, infiltrated by foreign government agents (most likely Chinese) who will advance to positions of authority inside the company and use that authority to insert back doors surreptitiously. The other plausible scenario is that Apple management will decide to put patriotism or national security concerns ahead of commerce and decide to cooperate with the U.S. government to insert a back door. I think it's not unreasonable for someone to assign Bayesian priors to these events that are greater than zero. The problem is that there is no way to obtain any data to update that prior.
What makes this attack plausible is that it's relatively easy to mount (for the Chinese government) with a potentially huge payout (by their quality metric). It's hard for me to imagine that they haven't tried, and once you get to that realization, it's only slightly harder to imagine that they haven't succeeded.
The core of understanding all information security questions is understanding and internalizing “reflections on trusting trust”.
For most values of “me” as a customer, it’s in Apple’s interest to behave as they advertise. They can stray due to government or other action, but so can any party you trust.
Yes, but the difference is that other parties I trust are subject to external verification and audit. If my bank steals my money, it will soon be discovered, either by me the next time I get my statement and it doesn't match my records, or by the bank auditors the next time they audit the books and discover that they don't balance. By way of contrast, if Apple sneaks a back door into a MacOS update it could easily go unnoticed for years.
If I'm a regular person, what is a scenario where an actor as powerful as a state or Apple would actually try to breach my privacy? Would Apple want my credit card? Or would it try to blackmail me with my private photos to get a couple of hundred dollars?
Some of us here have access to systems, data, or people with high value information. Some folks are involved in things professionally that an external, unethical party could put a dollar value on accessing things or people whom the victim could access.
Your kids’ teacher might have a spouse whose job directly influences a governor, mayor, Union, Senator, etc. Access to the wrong twitter account or TV producer could influence POTUS.
> Some of us here have access to systems, data, or people with high value information. Some folks are involved in things professionally that an external, unethical party could put a dollar value on accessing things or people whom the victim could access.
I think that your reply directly contradicts the assumption contained in my question. It's pretty obvious that if I had such access I'd adopt completely different security priorities.
I actually think the most likely scenario is for Apple to be infiltrated by agents of the Chinese government. I have no idea how likely this is because I have no idea what Apple's internal controls are like. But I'm pretty sure that someone who knew what they were doing could slip a back door into OS X if they had write-access to the source tree, and that they could do this in a way that even if it were noticed would provide them with plausible deniability that it was just a careless bug. I'm also pretty sure that the Chinese government could cultivate an agent who was indistinguishable from a highly qualified software engineer that Apple would not hesitate to hire. Such a person might even be a U.S. citizen.
I also think it's possible that Apple could sneak a back door into the OS in cooperation with the U.S. government while maintaining an overt posture of defending their user's privacy. Yes, this would be risky, but if Apple management put patriotism or national security above commerce (which is not a completely unreasonable thing for them to do) they might decide to take such a risk.
> it's possible that Apple could sneak a back door into the OS
Apple has gone to great lengths to secure both its devices and its policies. You are awfully critical, so I really want to understand: what is the behavior you'd expect to see?
Open source the security libraries? But if you don't trust Apple not to backdoor then what's stopping them from putting the backdoor in the kernel? At some point we must trust something.
So as a regular user, golerka's biggest concern should be that the Chinese government is going to steal their credit card number/look through some of their private pictures/create a profile on them because Apple's employee background checks might not be thorough enough?
No, Golerka's biggest concern should be that some independent researcher or Romanian hacker will discover the back door, and they will use it to steal money directly out of their bank or brokerage accounts.
In other words: I think that the odds are >0 that there is a systemic weakness in Apple's security, either accidental (like the Meltdown and Spectre issues almost certainly are) or deliberately introduced by a mole, or by Apple's management in service of some higher cause. I think that the odds are further >0 that this weakness could be discovered by a malicious party, and that will end up leading to major problems, not all of which will necessarily be immediately obvious.
I don't know, I've been practicing condescension, myopia, or overgeneralization for many years, but there's always room for improvement.
Do you actually disagree with the substance of what I said, or are you just criticizing my style? Because if you agree with the substance then I would say that the style is justified.
I disagree with the substance as someone who makes their career understanding everything about security, which means I then also have to disagree with the style.
I’ll try to help you say what you should have said: “Apple doesn’t have great security, their major competitors are far better because [reasons] and instead of Apple products you should be using [competitor].”
Since you're on a technical forum but talking about mass-market products, you should also clarify your audience: “if you’re a technical user, [competitor_1] is the best security you can get, while [competitor_2] beats them on usability and user experience. If you’re looking for the best of both worlds, [competitor_3] is your best bet.”
Because what you did is walk into a discussion, say “nah you’re all wrong and besides that you’re all idiots” and walked away. Back your shit up, man. If you don’t think Apple is the product to use because of their security practices, recommend an alternative. If an alternative doesn’t exist, then keep your mouth shut because some security is better than no security and saying “shit’s all fucked anyway” isn’t going to help anything.
I hope Apple will begin to spin privacy and security as part of their "premium lifestyle". Because if privacy and security will be associated with premiumness, other companies will have an incentive to implement similar measures in their products. People will actually care about their digital privacy for the first time! (though not because of the benefits of privacy, but to show off to others that they can afford a premium product with privacy)
Sort of like how companies suddenly started caring about their mobile phones' package design after the iPhone was released with its sleek packaging.
Hopefully this doesn't backfire, though: by associating privacy with a "premium lifestyle", it by definition stops being something accessible to everyone and instead becomes something you must pay for.
Apple security is confusing. For example, Find My Mac does not require 2FA even when 2FA is enabled. An attacker can remotely wipe your MacBook with just your iCloud password.
Another example: apparently there is a distinction between "two-factor authentication" and "two-step authentication", the latter being a deprecated but still active system. Reading the docs for the older system, you'll soon discover differences in things such as account access and recovery that lead to an entirely different set of consequences and caveats for security. You'll find out that in certain scenarios you could permanently lose access to your iCloud account and iTunes purchases under "two-step authentication", but not the newer "two-factor authentication". If a user confused the two while reading the Apple online support pages, it could have grave consequences.
Security is something that needs to be documented and marketed in clear terms. Adopting names this similar for two distinct implementations of a security mechanism, either of which the names could plausibly describe, is incoherent with Apple's supposed model of user friendliness. It's what Microsoft does with its products, not Apple. Additionally, all facets of a security feature should be documented, and documented well. It is unacceptable that Apple does not warn users that 2FA can be bypassed in certain scenarios. I hope Apple further focuses on security, and on documenting it well.
> An attacker can remotely wipe your MacBook with just your iCloud password.
This is not a security/privacy issue–none of your information is leaked.
> It is unacceptable that Apple does not warn users that 2FA can be bypassed in certain scenarios. I hope Apple does further focus on security, and documenting it well.
Should every password field have a disclaimer that says it can be "bypassed" by someone who knows your password?
I certainly appreciate this effort, whatever their long term intention or strategy is with this in a commercial way (or not), it’s in line with what I expect when it comes to my privacy and security.
Some of the google/Android “features” and what they do with your data, make old school keyloggers look like a joke.
"The processor forwards the data to the Secure Enclave but can’t read it. It’s encrypted and authenticated with a session key that is negotiated using a shared key provisioned for each Touch ID sensor and its corresponding Secure Enclave at the factory."
> "at the factory"
I suppose the secret key is erased at the factory, but what if it isn't? Or is the key generated on-chip via a random number generator? If it were stored somewhere at the factory, it would be possible to link it to each iPhone. I'm not familiar with cryptography, so this may just be a misunderstanding on my part, and I'm not sure whether this would be a weakness in the Touch ID sensor.
The long-lived secret for TouchID is generated in the SEP, using the UID of the SoC (a secret known only to the SoC) and the serial number of the respective sensor. That key is then keywrapped with a (symmetric) key known to all TouchID sensors and sent to the fingerprint sensor. It unwraps and verifies the key and burns it into fuses.
That key is then used for establishing a session key in the field. The session key uses entropy from both the SEP TRNG and the TouchID TRNG.
Your threat means someone would 1. need to know the TouchID global key, and 2. slurp off all the keywrapped blobs for later, at all iPhone manufacturing sites. This then gives them the ability to either decrypt fingerprints, or feed 'fake' fingerprints to the SEP. Given that there are far easier ways of stealing fingerprints, that leaves feeding fake fingerprints to the SEP. If you're in a position to do that you might as well just feed the phone a fake finger.
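The provisioning flow described above can be sketched in a few lines of Python. Everything here is purely illustrative: HMAC-SHA256 stands in for the real key-derivation function, a XOR stream stands in for real AES keywrap, and all the names (`soc_uid`, `global_key`, etc.) are invented for the sketch, not taken from Apple's actual implementation.

```python
import hashlib
import hmac
import os

def kdf(key: bytes, info: bytes) -> bytes:
    # Toy KDF: HMAC-SHA256 stands in for the real derivation function.
    return hmac.new(key, info, hashlib.sha256).digest()

def xor_wrap(key: bytes, secret: bytes) -> bytes:
    # Toy reversible "keywrap": XOR with a key-derived stream.
    # (NOT real AES keywrap; wrapping and unwrapping are the same op.)
    stream = kdf(key, b"wrap-stream")
    return bytes(a ^ b for a, b in zip(secret, stream))

# --- Factory provisioning, per the description above ---
soc_uid = os.urandom(32)       # secret known only to this SoC
sensor_serial = b"SN-0001"     # serial number of the paired sensor
global_key = os.urandom(32)    # symmetric key known to all Touch ID sensors

# SEP derives the long-lived pairing secret from its UID + sensor serial
pairing_secret = kdf(soc_uid, b"pairing|" + sensor_serial)
wrapped_blob = xor_wrap(global_key, pairing_secret)   # sent to the sensor

# Sensor unwraps with the global key and burns the result into fuses
sensor_secret = xor_wrap(global_key, wrapped_blob)
assert sensor_secret == pairing_secret

# --- In the field: each session key mixes entropy from both TRNGs ---
sep_nonce, sensor_nonce = os.urandom(16), os.urandom(16)
session_key_sep = kdf(pairing_secret, b"session|" + sep_nonce + sensor_nonce)
session_key_sensor = kdf(sensor_secret, b"session|" + sep_nonce + sensor_nonce)
assert session_key_sep == session_key_sensor
```

The point of the structure is visible even in the toy version: an attacker who only captured `wrapped_blob` at the factory still needs the global key to recover the pairing secret, and each session key is fresh because both endpoints contribute nonces.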
It is relatively easy to do the negotiation in a way that nothing at the factory ever sees the resulting session secret. DH is the obvious, and in this case secure, approach, but even simply generating a random key at one end of the secure link and sending it in plaintext the first time would be essentially sufficient (you still have to trust the environment this happens in to a large extent).
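As a toy illustration of why DH keeps the session secret off the factory link: only the public values ever cross the wire, yet both endpoints compute the same key. The group parameters below are deliberately tiny and not safe for any real use.

```python
import secrets

# Toy Diffie-Hellman group (illustrative only; real deployments use
# vetted, much larger groups and authenticate the exchange).
P = 2**127 - 1   # a Mersenne prime, far too small for real security
G = 3

# Each endpoint generates a private exponent it never transmits.
a = secrets.randbelow(P - 2) + 2   # sensor's private value
b = secrets.randbelow(P - 2) + 2   # SEP's private value

# Only these public values cross the (observable) factory link.
A = pow(G, a, P)
B = pow(G, b, P)

# Both sides compute the same shared secret; an eavesdropper who saw
# only A and B would have to solve the discrete log to recover it.
shared_sensor = pow(B, a, P)
shared_sep = pow(A, b, P)
assert shared_sensor == shared_sep
```

Note that unauthenticated DH like this still falls to an active man-in-the-middle, which is one reason a pre-provisioned shared key (as described in the sibling comment) is attractive in a factory setting.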
Depends on your definition of "fucked up" in this case.
The change is that data for China-based users will need to reside on domestic (Chinese) servers. Any associated implication derived from that would be opinion-based.
iCloud Keychain may be surprising for some folks. For example, it can be restored from an iCloud backup only to the same machine. Also, you have no ability to recover your iCloud keychain from your own time machine backups.
The reasons, as the document outlines, are added security. But, having recently wiped my iCloud Keychain by resetting Safari's privacy settings and inadvertently losing all my passwords, I was surprised to discover that I couldn't restore my passwords from my own backups. The upside is that a compromised iCloud password doesn't also leak all the keychain passwords.
> "The processor forwards the data to the Secure Enclave but can’t read it. It’s encrypted and authenticated with a session key that is negotiated using a shared key provisioned for each Touch ID sensor and its corresponding Secure Enclave at the factory."
If I understand this correctly, IF they're using Diffie Hellman key exchange to generate the shared session key for every chip, doesn't this mean Apple also owns the session key for every single iDevice out there and can crack into them if they wanted to?
Does this mean the "security" only protects users from men-in-the-middle, but not from Apple (or NSA if they come after them)?
The long-lived secret for TouchID is generated in the SEP, using the UID of the SoC (a secret known only to the SoC) and the serial number of the respective sensor. That key is then keywrapped with a (symmetric) key known to all TouchID sensors and sent to the fingerprint sensor. It unwraps and verifies the key and burns it into fuses.
That key is then used for establishing a session key in the field. The session key uses entropy from both the SEP TRNG and the TouchID TRNG.
So an 'evil' Apple couldn't crack the session key unless it had been evil all along and storing the generated keywrapped blobs. In which case you're sorta screwed no matter which way you slice it.
Regarding iCloud accounts, Apple seems to force the use of phone numbers for 2FA and account recovery without an option to disable it. I switched from an Android device to an iPhone recently and was asked to set up an iCloud account. I went through the setup process and realized that my phone number was set up as a second factor with no option to disable it [0].

For all the talk about Apple devices being the most secure, not many people seem to be complaining about how Apple forces a phone number as a second factor and account recovery method. Most people back up very personal data to their iCloud accounts, and forcing users to use a phone number for 2FA and account recovery is ridiculous. IMO Google gets 2FA right: I can set up a YubiKey + Authenticator + backup codes and remove my phone number as a 2FA method.

I also realized that there's no way to delete an iCloud account. I assumed all the big companies would have an option to delete accounts. I hope there's a law mandating that all online accounts have a clearly defined lifecycle, with an option to delete accounts and personal data if users want to.
(First time using an Apple device, so I might be misunderstanding the 2FA situation, correct me if I'm wrong.)
IIRC I wasn't able to download or update apps from the app store without an iCloud account, so an iCloud account is not really optional. Regarding backups, I've disabled all backups to iCloud and upload photos, videos to a google account since I can't directly transfer files to a linux pc.
You can. The Apple account needed for the App Store is independent of iCloud. You can create this account on the apple website, then use it to sign into the device (separately from iCloud). Just say no to all requests which ask for iCloud signon, during device setup.
You can directly transfer files to a Linux PC (via SFTP, WebDav, SMB) with iOS apps like Transmit, Coda, GoodReader, Briefcase and others.
Is this chain-of-trust implementation the reason the backlit keyboard on my MacBook Pro won't light up while asking me for my password on cold boot? It's a giant pain in the rear to get up and flip on a light when you're in bed programming at night... (sigh)
This looks like a great example of insecurity through security.
Given that Apple is not trustworthy and you need to be able to change and/or inspect a device to have a chance at security, this is a solid strike for a human-thought-free insecure world.
>The probability that a random person in the population could look at your iPhone X and unlock it using Face ID is approximately 1 in 1,000,000 (versus 1 in 50,000 for Touch ID)
I would love to know the likelihood of this in reality. For example, what about people who look like you? You don't tend to hang around with completely random people; it's often parents and siblings who, unlike with fingerprints, may bear enough facial resemblance to trick it.
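Apple's quoted numbers are per-person false-accept rates, so under an independence assumption (which is exactly what the relatives objection challenges) the chance that anyone in a group of n strangers unlocks the device can be sketched as:

```python
# Per-person false-accept rates quoted in the whitepaper.
FACE_ID = 1 / 1_000_000
TOUCH_ID = 1 / 50_000

def p_any_match(per_person_rate: float, n: int) -> float:
    """Chance that at least one of n independent people unlocks the device."""
    return 1 - (1 - per_person_rate) ** n

# Under independence, even 10,000 strangers trying Face ID succeed with
# probability of only about 1%; Touch ID fares noticeably worse.
print(f"Face ID,  n=10000: {p_any_match(FACE_ID, 10_000):.4f}")
print(f"Touch ID, n=10000: {p_any_match(TOUCH_ID, 10_000):.4f}")
```

The independence assumption is the weak link: for lookalike siblings the per-person rate is presumably far higher than the population average, and this simple model says nothing about that case.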