Why Apple's iPhone encryption won't stop NSA (siliconexposed.blogspot.com)
143 points by todd8 on Oct 5, 2014 | 68 comments



I'd like for people to start thinking about security measures, including encryption, as a cost function rather than as a boolean condition (e.g. safe vs. unsafe; stop the NSA vs. not stop the NSA). I doubt there is anyone who can stop the NSA from executing a targeted attack that breaches the information in question.

My feeling is that good security measures increase the marginal cost per person surveilled. As the article points out, there are some kinds of communications that Apple is obliged to provide access to. Those are likely to carry low marginal costs per person surveilled, as law enforcement may have the legal means to directly and unilaterally access that information. Even if they don't, the capacity for Apple, telecoms, whoever, to operate in "God mode" with respect to some kinds of communications means that mechanism is a good target for agencies like the NSA to breach.

By contrast, even flawed technologies like iMessage (centralized key management) raise the per-person marginal cost by forcing attacks to be reasonably targeted. While the NSA could probably figure out how to attack Apple's key management infrastructure to set themselves up with a virtual iMessage device for an individual, doing so for all the people would, in principle, be fairly noticeable to Apple.

The more we can deploy technologies that force attacks from "bulk mode" to "targeted," the better chance we have at actually curbing surveillance. Of course, this still leaves the panopticon problem--if we accept that targeted attacks will always be possible, and that any of us could be watched at roughly any time, the psychological chilling effects of surveillance remain.


> I'd like for people to start thinking about security measures, including encryption, as a cost function rather than as a boolean condition (e.g. safe vs. unsafe; stop the NSA vs. not stop the NSA)

Great point, and one that I think typical computer-savvy users already internalize. It is in the security-expert fantasy world that the perfect quickly becomes the enemy of the good.

For example, using SSL to send sensitive data like passwords or payment info is nearly ubiquitous. Yes, there are many flaws in SSL's trust model, but the cost is so small that using it is a no-brainer.

However, PGP-encrypted email is much more rarely used. It has numerous higher costs, from special email software to the social pain of getting your friends to also use it. The benefit is still high, but so is the cost, and that results in far less usage.

In the real world, informed people look at the cost-benefit breakdown of security. Just as we don't all drive semi trucks around even though they are safer in a collision, we don't all communicate in the most secure fashion, because there is a cost associated with that decision. With that in mind, the easiest path toward greater security in communications population-wide would be to focus less on improving the security and more on lowering the associated cost.


In a traditional economic sense, the optimal amount to spend on security measures is when the marginal costs equal the marginal benefits.


Yes, but it's hard to quantify the benefits (or even the costs) of a security measure—it's just as much a risk evaluation as it is a cost one.


Like so much of economics, the hard part is coming up with a price tag that everyone agrees on in all contexts. How much would you pay to save your grandmother's life from an illness? How much would you accept to allow me to hunt her for sport?

So, in that sense, coming up with a precise measure of marginal cost / marginal benefit would be hard in practice.


> It has numerous higher costs, from special email software to the social pain of getting your friends to also use it.

By 'special email software' do you mean not-webmail? Outlook is still around in many companies, where your colleagues are already connected, and IIRC there's a PGP plugin for it.


A PGP plugin is special software.


We could make passive observation (beamsplitters) useless by opportunistically encrypting all traffic with unauthenticated DH to derive a key. This would do nothing against an active observer, but it would raise the cost of interception significantly.
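To make that concrete, here's a minimal Python sketch of the idea. The library (the cryptography package) and curve (X25519) are just my choices for illustration; a deployed opportunistic-encryption protocol would look different:

    # Illustrative sketch only: opportunistic, unauthenticated key agreement.
    # The library (cryptography) and curve (X25519) are assumptions for the example.
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF

    # Each endpoint generates an ephemeral keypair and sends the public half in the clear.
    alice = X25519PrivateKey.generate()
    bob = X25519PrivateKey.generate()

    # Each side combines its own private key with the peer's public key.
    shared_a = alice.exchange(bob.public_key())
    shared_b = bob.exchange(alice.public_key())
    assert shared_a == shared_b

    # Derive a symmetric session key from the shared secret.
    session_key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                       info=b"opportunistic session").derive(shared_a)

    # Note: nothing above verifies anyone's identity. An active MITM who swaps in
    # its own public keys ends up sharing a session key with each side and reads
    # everything; only passive taps (beamsplitters) are defeated.

The point is that there is no key distribution problem at all in that handshake, which is exactly why it could be turned on everywhere, and exactly why it only raises the cost of passive collection.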


There's a fine line between significantly and marginally in the case of a massively well-funded organization like the NSA. Unauthenticated DH is so trivial to MITM that accepting it as the status quo might just as likely prompt our government to invest a couple of extra billions in MITM equipment next year as it would protect the masses from the overreaching hand of mass surveillance. Who's to say they don't already use this capability on a regular basis? As long as you're using an unauthenticated encryption scheme you will never know...


"Who's to say they don't already use this capability on a regular basis? As long as you're using an unauthenticated encryption scheme you will never know"

Oh, but we could know. And we should know.

This is a good example of good vs. perfect and of treating cost as a boolean. You are right that we currently don't know how prevalent these attacks are or whether they are employed at scale. But it is possible to know, and finding out would make a good research topic and/or a way for the community to raise the bar and learn more about the state of the Internet.

Yes, doing MITM on unauthenticated DH is easy. But that says nothing about the actual application traffic flowing through the connection. A successful, _undetected_ attack must also understand and subvert the application traffic to keep the illusion alive.

We could do second-level authentication, out-of-band authentication, etc. at the application level to make it possible to detect the MITM. If we do this measurement continuously and on a large scale, we can find out who is carrying out these attacks and where they are in fact occurring.
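As a toy example of what such an out-of-band check could look like (the hostname and the idea of a trusted observer on an independent path are assumptions, not a description of any real deployed system):

    # Sketch: compare the certificate fingerprint seen on this path with one
    # reported by an observer on an independent path. A mismatch suggests that
    # someone is terminating and re-encrypting the connection (MITM).
    import hashlib
    import ssl

    def observed_fingerprint(host, port=443):
        """SHA-256 fingerprint of the certificate this network path serves us."""
        pem = ssl.get_server_certificate((host, port))
        der = ssl.PEM_cert_to_DER_cert(pem)
        return hashlib.sha256(der).hexdigest()

    local_view = observed_fingerprint("example.com")
    # notary_view would come from a machine on a different path (Tor, a VPS in
    # another country, a friend's box); hard-coded here only as a placeholder.
    notary_view = "0000...placeholder..."
    if local_view != notary_view:
        print("WARNING: certificate mismatch - possible MITM on one of the paths")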

IMHO one of the best network security research projects lately is the Spoiled Onions:

http://www.cs.kau.se/philwint/spoiled_onions/

This project monitors Tor for bad nodes and uses out-of-band mechanisms to find things like MITM attacks. Very cool and relevant.


> While the NSA could probably figure out how to attack Apple's key management infrastructure [...], doing so for all the people would, in principle, be fairly noticeable to Apple.

Maybe that's why Apple's warrant canary has disappeared[1].

[1] https://gigaom.com/2014/09/18/apples-warrant-canary-disappea...


Exactly.

Good security is not an unbreakable lock, it's a lock that costs significantly more to break than the value of what it's guarding.


I agree to some extent (and I'm excited by the progress in "opportunistic encryption"), but I'd like for people to also start thinking about what's possible and what's simply not.

For example, modern block ciphers like AES can protect data against an attacker that does not have access to the encryption key. However, in this case we're talking about an attacker that has perhaps 90% of the encryption key, the UID, in its physical possession. The UID can be concealed in many ingenious ways, but that doesn't change the fact that the attacker is in physical possession of the atoms that encode it. This is essentially "security by obscurity", albeit on a high level.

In contrast, I would argue that message exchange over the internet can be made secure, even against a nation-state adversary.


Raising the cost hurts the population much more in the long run - we, the people, pay for our own surveillance. No one else does.

We must fight hard so that we don't need to pay that tax any more - money which flows in large sums directly to the military-industrial complex.


So, perhaps a law to ban any encryption would save us a lot of money. Better yet, force everyone to make all information public.

But most likely, all this data would be of great interest to criminals. What about the cost of fraud, crime, and identity theft? Is it better to have unencrypted phones stolen by criminals in exchange for easier and cheaper surveillance?

I'd think Apple's encryption isn't mostly intended to stop law enforcement and surveillance, but rather to stop misuse of lost and stolen phones. In addition, an attacker able to implant zero-day exploits can bypass any encryption - or might simply target the transmission, or the other end of a sent/received secret. Most information isn't produced on an iPhone to sit there forever: either it is received from a server or a second device, or it is intended to be transmitted to a receiver. So an encrypted phone isn't really stopping surveillance.


Those intelligence budgets aren't going to get smaller, almost no matter what. The general US population has no idea how much is spent; their best guess would be "a lot" (and it'd be a correct one), so there can hardly even be voter push-back based on expenditures.

As such, the best option is to make it so expensive to carry out mass surveillance that their only choice is to be selective about whom they target and to focus on specific individuals rather than a billion people in general.

The cost should be drastically increased in all regards: time, electricity, water, money, etc.

It's very likely this will happen. The NSA was sitting at the ultimate sweet-spot in history, but windows like that are only open for extremely brief amounts of time. It will close and there is nothing that the NSA can do to stop it.


Increasing the cost of surveillance means it becomes easier to constrain the intelligence agencies by simply not allowing them endless budgets.


I think you misunderstood the term "cost" as used here.


Insofar as analyst hours, kWh, computing hardware, etc. can all be converted to dollars, spacefight and I are talking about comparable units.

And the idea of reducing the cost for intelligence agencies to surveil us to save money is an interesting one. While I agree that paying more taxes for a dubiously useful "service" is bad, I disagree that the right response is to reduce that marginal cost--i.e. be more transparent in the face of government surveillance. Under a regime of high marginal costs, I'd like to believe that the NSA would simply be more selective in its targets, rather than be successful in securing additional funding.


Not to mention, the larger the black budget, the harder it is to hide, and the more outrage when something like Snowden happens.


Not sure, no. Why?


I think cost can be looked at in terms of money, time, risk, effort, or any number of other things, and this is what is being referred to above. That point is probably moot, though, as all of them translate to additional financial costs, as you mention.


Disk encryption will only stop a targeted attack (someone physically gets their hands on your phone) anyway. All this does is raise the cost by a bit.


Or move the attack.

If it's encrypted at rest, you look at attacking it in transit, or via social engineering, or whatever.

I think sometimes we get too hung up on the technology side of security. For a targeted attack, going after the technology often won't be the best route.


But the NSA has unlimited funds, provided by the taxpayers they spy on. So why not make it easy and save us all a lot of money </sarcasm>. It's so crazy that we are talking about people who require the consent of the governed... On second thought, maybe outside of our bubble they have that consent and we have to bow to that majority.


If they had that majority, my guess is they wouldn't have had to lie about what they were doing. Or at least that's my hope.


MitM with a 0-day payload? Acid etching and SEM? You would have to be an extremely high-value target to legitimately worry about this stuff.

The post is attacking a straw man. Apple's iPhone security is meant to address criminals, mass surveillance, and overzealous law enforcement. They're not claiming a single phone will withstand the entire resources of the NSA devoted to breaking it.


> The post is attacking a straw man.

It's a straw man, but not his straw man. Pull up popular press articles about iOS 8's disk encryption and you'll find that a disturbing number of them uncritically claim that it's meant to thwart the NSA. It's been driving me crazy.


Exactly. I wrote it to set the record straight: Apple's crypto is intended to guard against a limited class of attacks, and "the KGB is after me" is not one of them.


Yup. The reality is that you can't defeat the NSA unless you are able to design and fab your own silicon without any third-party involvement, are able to write and audit your own crypto code with zero bugs, and are able to physically protect yourself from being "encouraged" to reveal your passphrases. It's not worth seriously trying to stop an organisation as well funded/staffed/connected as the NSA from obtaining your chat history and nude photos.


And if you're that important of a target, they're eventually just going to pick you up physically, black-bag style, from nearly anywhere on earth if they can. The NSA is part of the US military, after all; there's really nowhere on earth that someone is entirely safe if they want you (except maybe under direct protection by Beijing or Moscow).


>> "...(maybe under direct protection by Beijing or Moscow)."

This argument is bolstered by the fact that Snowden (hardly an ignorant party) sought precisely the protection you mention.


Although I agree with the premise -- a sufficiently dedicated attacker can defeat many mechanisms you can come up with to protect your data -- many of the points that the author makes seem to be based on either incorrect or implausible assumptions.

For instance, the claim that modern cell protocols can be "silently" MITMed is not really true; the current known attack to spoof a GSM tower, I believe, is limited to exploiting some vulnerabilities in older GSM protocols, and may not work against modern 3G or LTE. And, indeed, the paper cited as showing that a 3G GSM cryptosystem is weak enough to pull data off the air does not say that at all: it merely weakens the cipher, and the conclusion of the paper itself says that the attack may not be viable against current networks.

The author's view of how the UID works in the Secure Enclave is weak at best, as well. The article raises the possibility of the "Secure Enclave code being able to read the UID key"; as comex mentioned yesterday [1], this isn't true. (I know also that other SoCs work the same way comex describes; this is a common pattern.) The author then goes on to discuss what could be done even if the key bits were extracted from fuses (an attack that I agree is possible); he claims a cycle time of 800 per iteration if executed on a CPU, but in reality the encryption is done on a dedicated AES engine. I believe a cycle time closer to 4 per iteration is more likely, giving attack timescales over 2 orders of magnitude shorter than the author estimates.
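To put rough numbers on that, here is a back-of-envelope calculation. Everything below is an assumption picked only to illustrate the roughly 200x gap between a software implementation and a dedicated AES engine; none of it is a measured figure for the A7:

    # Back-of-envelope brute-force timing under assumed numbers.
    clock_hz = 1.3e9            # assumed clock rate of the crypto hardware
    iterations = 50_000         # assumed key-derivation iterations per passcode guess
    scenarios = {"software AES (800 cycles/iter)": 800,
                 "dedicated AES engine (4 cycles/iter)": 4}

    for label, cycles in scenarios.items():
        secs_per_guess = iterations * cycles / clock_hz
        four_digit_s = secs_per_guess * 10_000          # worst case, seconds
        six_alnum_d = secs_per_guess * 36**6 / 86_400   # worst case, days
        print(f"{label}: {secs_per_guess*1e3:.2f} ms/guess, "
              f"4-digit PIN <= {four_digit_s:.0f} s, "
              f"6-char alphanumeric ~ {six_alnum_d:.0f} days")

With the software figure, a 6-character alphanumeric passcode looks like years of work; with a dedicated engine it drops to days, which is the two-orders-of-magnitude difference mentioned above.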

It's not all bad, though. The author makes at least one very good point: 0day on the device, while it is powered on, could be enough to simply run the entire device's storage through the onboard crypto. The exploit doesn't need to be complicated enough to modify the system software permanently -- as long as it can be used once, that's good enough.

I think the crux of the matter is that this crypto scheme is not designed to stop the NSA, anyway: it's designed to stop comex and to stop the local police. If you need an NSA-proof device, you need a much much smaller attack surface to begin with.

[1] https://news.ycombinator.com/item?id=8410819


> I think the crux of the matter is that this crypto scheme is not designed to stop the NSA

I think the NSA has so many tools available when it comes to hacking into people's data that it doesn't really matter how you secure yourself; there are many ways for the NSA to spy on people if they really want to. Right now I don't think anyone can really claim to secure their data from the NSA.

It might make it harder for them, but if you really want to hide from the NSA, I don't think it's realistic to rely on just being informed about cryptography and computer security. You would really need to not use computers at all.


Not least of which is using 210 to ask your nuts for your password...


What's 210?


volts ac


Duh! Thanks. :-)

(120v country resident here)


Airgaps and secured physical access are probably good enough.


You are only considering technological attacks.


> The article raises the possibility of the "Secure Enclave code being able to read the UID key"; as comex mentioned yesterday [1], this isn't true.

We don't know that; we just know Apple says it's the case and that nobody's broken it yet. Without a full reverse engineering of the Secure Enclave firmware, plus the IC, there's no way to know if there's a hidden backdoor, bug, or debug mode allowing the data to be read.


I thought the question was whether the UID is protected from malicious Secure Enclave firmware or not, no?


What's often missing from the discussion around Apple's new encryption regime is that it serves two important purposes: it gives privacy-oriented consumers a little bit of peace of mind about the surveillance state and the safety of their personal data (a plus for Apple), and it relieves Apple from having to be involved in routine, low-stakes law enforcement subpoenas and other requests (also a plus for Apple), so it's a double win for Apple to make the change. Whether or not it actually deters NSA snooping is beside the point.


If you look at the incentives for Apple in this scenario: it's best for them if we all think their phones are secure, and it's also best for them if they don't piss off LEO. So the rational thing for them to do is to convince us all they have strongly encrypted their phones, while continuing to provide some type of back door but hiding it well. Parallel construction, etc.


I can't think of any reason why it's "best" for Apple to keep LEO happy. LEO don't pay them anything, actively increase the difficulty they face running their business, and reduce the trust the people who do pay them (i.e. their customers) have in their products, actively hurting their business.

The big fear is that pissing off LEO will result in harmful regulation, and, while that is certainly possible, history has shown that technology moves forward regardless. Consider the US trying to prevent the spread of cryptography. They lost that battle[1], and any government that picks a new battle will eventually lose it, too.

1. The only casualties in that battle were US companies trying to sell software overseas, because they were forced to include sub-par crypto.



The amount of money Apple makes from (US) government contracts is utterly insignificant compared to what they make from selling directly to consumers. Which revenue stream do you think they would be least likely to gamble with?


That's a terrible idea, because when the day comes that a security researcher finds the flaw (and they will find the flaw), Apple has to admit that they lied to their customers. And that's very bad for business.

The enhanced encryption is Apple's way of saying that it's not their problem if LE has a beef with you. It removes Apple from the equation. If LE starts knocking at their door, they can say that there is nothing they can do.

It reduces warrantless spying and removes some of the ease with which LE agencies have been able to operate. Agencies can't collect data in broad strokes anymore; they'll need warrants (which they should have needed in the first place), and they will have to conduct targeted investigations now (which is what they should have been doing all along).


"...they will find the flaw"

In something as sophisticated as an iPhone, there are lots of places for vulnerabilities to be hidden from view.


Friendly reminder: don't embed images from other people's websites, especially if you're looking to get on HN/Slashdot/reddit/whatever.

First, it's rude. The owner of the second website has to bear the burden of serving traffic for your site and gets nothing in return. In this case, the blog hotlinked an image from siliconpr0n.org, effectively DoSing the website and taking it offline. Horrible. Hopefully tomorrow everything is back to normal and the owner of siliconpr0n.org isn't stuck with a massive bill from his host.

Second, there's no guarantee the image will stay online. Maybe the directory structure on the site will get reorganized or something. Maybe the website will go offline for good, only to get picked up by a domain squatter. Maybe the owner of the website will decide to change the direct-linked image to something else you weren't expecting. You have no idea.


You're assuming I'm not affiliated with siliconpr0n.

I'm actually one of the main contributors to the site and took a lot of the photos on it, just not that particular one. John (my friend who actually admins the server) is fully aware of the situation and just raised the resource limits to counter the DoS. If either of us uses an image somewhere that we expect to stay online for a while, we make a point of leaving it in place when reorganizing directory structures etc.


I had no idea, sorry. Most of what I said about direct-linking doesn't apply if you're affiliated with the website you're direct-linking to.


If there's an increased bandwidth bill, I'm sure John will be grateful for the check you send him.


I'm pretty sure Apple never claimed to be able to stop the NSA. Walling off any obvious back doors both for itself and anyone else that may want access through said door is not the same as saying that what was once a house is now a reinforced bank vault.


I'm sorry, but this is just plain wrong.

Let's take "since the key is physically burned in"... You don't need to burn it in, it could easily be stored in a few bytes of on-die sram. As for the assertion that a de-powered chip can't wipe itself? You can just go out and buy a self wiping chip ... you don't need to be either Apple or the NSA, just have a credit card.


If the key is kept across a battery replacement or repair procedure, then it's going to be hard-wired/fused into the chip. SRAM needs to be powered constantly to retain data.

Credit cards/smartcards include self-destructs that will erase the nonvolatile memory (flash) in certain cases if power is applied while a tamper signal is asserted. They cannot erase data while in the "off" state. One of the problems with fuse-based memory is that it's easier to dump off the silicon than, say, Flash.

Although I haven't decapped an A7 yet (as soon as I get my hands on one, rest assured I will), adding flash to an IC fab process is very expensive and adds somewhere around a dozen new masks, so OTP fuse memory (which doesn't need any new masks) is typically used instead of flash for on-die ID codes etc.


Just so I understand - this process of dumping the keys off the iPhone would typically be something that the owner would notice? Is it feasible to take someone's phone, dump the silicon, and then return the phone to them?



What is stopping companies from implementing timers to prevent brute-force attacks? Limit password entry attempts to 3; if still wrong, wait 1 minute. One more wrong entry, then wait 3 minutes, then 15 minutes, then 1 hour, 4 hours, 8 hours, 24 hours, etc.

Doesn't iOS do this if you don't have "erase data after 10 failed attempts" set?

Why can't all systems have this implemented, or is this bypassed some other way that then allows someone to brute-force to their heart's content?
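For what it's worth, the mechanism itself is trivial to write. A toy sketch (the schedule just copies the numbers above, everything else is made up for illustration; on the phone this logic is claimed to live in the Secure Enclave, not in application code):

    # Toy sketch of escalating lockout delays after failed passcode attempts.
    import time

    # failed-attempt count -> delay (seconds) imposed before the next attempt
    LOCKOUT = {4: 60, 5: 3 * 60, 6: 15 * 60, 7: 3600,
               8: 4 * 3600, 9: 8 * 3600, 10: 24 * 3600}

    class PasscodeGate:
        def __init__(self, check):
            self.check = check          # callable that verifies a guess
            self.failures = 0
            self.locked_until = 0.0

        def attempt(self, guess):
            now = time.monotonic()
            if now < self.locked_until:
                raise RuntimeError("locked; try again in "
                                   f"{self.locked_until - now:.0f} s")
            if self.check(guess):
                self.failures = 0
                return True
            self.failures += 1
            if self.failures >= 4:
                self.locked_until = now + LOCKOUT.get(self.failures, 24 * 3600)
            return False

The hard part isn't writing the schedule; it's making sure whatever enforces it can't be bypassed (e.g. by imaging the storage and guessing offline) and can't be abused to lock legitimate users out, which is the DoS concern raised below.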


1) This article seems to describe actually having physical access to the device.

2) As for "all systems," if you mean a place where public guessing is possible (like a web app), then this measure opens up an easy DOS surface. Want someone not to be able to access their account? Know their email / username? Just burn up all their 'guesses' and they'll have to wait.


Good points. What do you think about a 2FA-type setup: if someone tries this, it sends a message to the device asking whether you are trying to access something through a web app, and if you say no, it blocks access until the correct password is entered, or until it is accessed from the same device that previously accessed it successfully on a consistent basis (say 5 times within the past week or month)?

Could IP blocking be implemented, or a double timer, so that if someone tries to DoS the account by entering too many passwords too quickly, that is also rate-limited, like trying to submit too many comments to HN too quickly?


I didn't get the impression that Apple was trying to tell us our data was safe from other people snooping because of their new encryption practices. Instead, Apple is removing themselves from the list of corporations who are forced to turn over our data because they no longer have access to it.

In other words, trust Apple not to betray you because they no longer have the ability. That's the message I think they are trying to send.


I hope Apple's security employees are reading these articles and are already working on fixes for the weaknesses that people like the author of this post or Matthew Green are pointing out.


Nobody said it would.


There's a thing called PRISM in case OP forgot. Get this click-bait off of here.


An interesting read. I would agree with the author: Apple is making it difficult, not impossible, for the government to get your data. TL;DR: Apple's claim is misleading; the government can get data in other ways.


How is Apple's claim misleading? The cryptography is sound, and that is the only thing Apple is "claiming". This article, titled somewhat sensationally as clickbait, points out valid concerns, but they have little to do with Apple's cryptographic solution.


It is a common mistake to assume that things written by third parties about Apple's intentions bear any actual relation to those intentions.



