They're presumably already 99% of the way there. If the Secure Enclave can be updated on a locked phone, all they need to do is stop allowing that, right?
To me, the more profound consideration is this: if you use a strong alphanumeric password to unlock your phone, there is nothing Apple has been able to do for many years to unlock your phone. The AES-XTS key that protects data on the device is derived from your passcode, via PBKDF2. These devices were already fenced off from the DOJ, as long as their operators were savvy about opsec.
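For readers who want to see the shape of that derivation, here is a minimal PBKDF2 sketch in Python (illustrative only: the salt, iteration count, and key length below are placeholders, and Apple's real scheme additionally entangles a per-device hardware UID inside the Secure Enclave):

    import hashlib, os

    # Illustrative only: Apple's actual derivation also mixes in a per-device
    # hardware UID and tunes the work factor to take ~80ms on the device.
    passcode = b"correct horse battery staple"
    salt = os.urandom(16)          # per-device random salt (placeholder)
    iterations = 100_000           # placeholder work factor

    # Derive a 256-bit key from the passcode; without the passcode (and, on a
    # real device, the hardware UID) the key cannot be reproduced.
    key = hashlib.pbkdf2_hmac("sha256", passcode, salt, iterations, dklen=32)
    print(key.hex())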
The real linchpin here is not hardware, but iCloud. Apple can pull data out of an iCloud backup, and the only reason the San Bernardino case even got off the ground is because somebody at the county screwed up and effectively prevented the backup from occurring.
iCloud backups can be secured so that not even Apple can get into them, but they are fundamentally much harder to secure (they can't be hardware-entangled and still restore to a new device), and it would significantly complicate iCloud password changes. I'm sure they are working on it, but it is nontrivial.
That (software) problem is the real reason 99% of users are still exposed; as you say, the hardware and Secure Enclave holes are basically closed.
> iCloud backups can be secured so not even Apple can get in... I'm sure they are working on it, but it is nontrivial.
There is no way they are working on this. It is an intentional design decision that Apple offers an alternative way to recover your data if you lose your password.
Or if you die without telling your next-of-kin your password. Most people do not actually want all of their family photos to self-destruct when they die because they didn't plan for their death "correctly". That would be a further tragedy for the family. (Most people don't even write wills and a court has to figure things out.)
Making data self-destruct upon forgetting a password (or dying) is not a good default. It's definitely something people should be able to opt-in to in particular situations, but only when they understand the consequences. So it's great news that in iOS 9.3 the Notes app will let you encrypt specific notes with a key that only you know. But it's opt-in, not the default.
Has Apple ever given access to someone's iCloud account to next-of-kin after they died? I've never heard of this, and I don't expect Apple to be responsible for preserving photos. You already can have shared photo streams, and there are many solutions for other data that could potentially be lost that don't involve Apple getting directly involved in these cases.
The idea of Apple (or some other big corporation) providing my protected personal data to my next-of-kin is more frightening than the idea that the government has the ability to spy on me while I'm alive. It's the most morbid kind of subliminal marketing that could possibly exist.
"Hey, we're really sorry about fluxquanta's passing. Here is his private data which he may or may not have wanted you to see (but we'll just assume that he did). Aren't we such a caring company? Since we can no longer count on him to give us more money when our next product comes out, keep us and our incredibly kind gesture of digging through the skeleton closets of the dead in mind when shopping for your next device."
The thing is, you can opt in to destroy-when-I-die security. You can encrypt notes or use a zero-knowledge backup provider (Backblaze offers this). But for most people that's the wrong default for things like decades of family photos.
In absence of a will it would be terrible to assume that a person meant to have all their assets destroyed instead of handed down. It should be an explicit opt-in. The default should be, your stuff is recoverable and inheritable.
> But for most people that's the wrong default for things like decades of family photos.
That seems like a weird assumption, that there'd be a single person with access to an account containing the only copies of decades of family photos. If someone else has account access or if there are copies of the photos elsewhere, then "destroy-when-I-die" isn't a big problem.
On the other hand, it also violates the way that I think things would usually work in the physical world. That is, if there's a safe that only the deceased had the combination to, I can still drill it to access the contents.
Far from a "weird assumption", that is exactly how most families operate. There's a family computer with all the photos on it that's always logged in, but maybe only dad or mom knows the iCloud password ("hey mom what's the password again?..") Or maybe they are split between family member iPhones, and they just show them to each other when they want to see them.
It would be a pretty big bummer for most families if, when a family member passed away, so did all those memories. That's probably not what they would have wanted. Or even if they just forgot their password - then when they reset it, all their photos go poof.
You and I might understand the consequences, but for most people it should really be a clear opt-in to "you can turn on totally unhackable encryption, but if you lose your pw you are totally screwed".
Do you have non-anecdotal evidence for that? Among my own friends and family, there are some images that only exist on one device or account, but most of the stuff likely to draw interest ends up somewhere else (a shared Dropbox account, e-mail attachments, on Facebook, copied onto some form of external storage).
There are likely some demographic groups that are more likely to behave one way than the other, and that could perhaps account for our differing experiences.
On second thought, it is the easiest way to use the account (each person having an account on each device). I wonder what percentage of people who would benefit from it actually use the Family Sharing option?
I see what you're saying, and I know that I'm the odd man out here. My original comment stems mostly from my own messed up familial situation. My parents, (most) siblings and I don't get along very well, and I'm single.
If I were to die today I wouldn't want my personal photos, online history, or private writing to fall into the hands of my family. Hell, I don't really even want my physical assets to go to them (something I really should address in a will one of these days to donate it all to charity).
There has been a lot of fighting and backstabbing over who gets what when relatives have died in the past, and the more emotional items (like photographs) have been used to selfishly garner sympathy online through "likes" and "favorites" and it makes me sick. My position is that if you didn't make the effort to get to know a person while they were alive, you should lose the privilege of using their private thoughts for your own emotional gain after they're gone. And I do realize how selfish that sounds on my part, but in my current position I feel like it's justified. If I got a long term partner I would probably change my mind on that.
So yes, an opt-in would be ideal for me, but I don't think many online companies provide that right now.
That's pretty standard, though: once you no longer exist, all your private data, all your private money, all your private goods become part of your estate, to be disposed of by your executor according to your will.
Things like money and personal physical property, sure, I understand that. But I feel like personal protected (encrypted) data should be treated differently. I'm thankful Google at least has options[0] available for their ecosystem, but I guess I'm going to need a will to cover the rest.
In the case of sudden death, there would not have been any way to securely dispose of any private "data". So your private information, diaries, works you purposefully didn't publish, unfinished manuscripts you abandoned - everything was handed down to your estate, and more often than not used against your intent.
I'm not entirely clear whether your will could specify such disposal to be done, or could prohibit people from at least publishing these private notes and letters if not reading them, in any kind of binding and permanent way.
Shared photo streams are only a solution if they are used. Most people don't even write wills.
If you fail to write a will should the state just burn all your assets, assuming that's what you meant? No, that's the wrong default. Burn-when-I-die should be opt-in for specific assets, not the default.
And the good news is Apple is providing opt-in options like secure notes. Perhaps even backups too (3rd parties already do). But only after presenting the user with a big disclaimer informing them of the severe consequences of losing the password.
On the other hand, "turn it on and let it do its thing" is a terrible idea from a forensics standpoint. You want to lock the account down ASAP to prevent potential accomplices from remote wiping your evidence.
The "screwup" grandparent is suggesting is that the county didn't think to disable the setting that would let employees turn off iCloud backups for their devices, however many months or years ago, not that they've messed up during the investigation now.
No, they're probably referring to this, from the second letter,
"One of the strongest suggestions we [Apple] offered was that they pair the phone to a previously joined network, which would allow them to back up the phone and get the data they are now asking for. Unfortunately, we learned that while the attacker’s iPhone was in FBI custody the Apple ID password associated with the phone was changed. Changing this password meant the phone could no longer access iCloud services."
Uhh, well it's probably pretty high. Considering their adoption rate for new software is sitting somewhere around 95%. iCloud backups default to on - just like automatic updates - when the user sets up their phone. Not to mention most Geniuses would ask to turn on iCloud backup when upgrading the device for convenience.
It did have iCloud backups, but the latest was six weeks prior. The FBI requested the iCloud password be reset, which prevented a new iCloud backup they could have subpoenaed.
Uploading the encrypted content has no value as backup, if you don't have keys that can decrypt it. If the keys are backed up as well, all security is gone.
The hardware key is designed to be impossible to extract from the device. That's part of the security, so you can't simply transfer the data to a phone where protections against brute-forcing the user key have been removed.
To spell it out:
(1) request new encryption key from device (let's call it key4cloud);
(2) encryption key generated, displayed for physical logging by the user, & stored in the secure enclave;
(3) all normal backups to iCloud are now encrypted via key4cloud;
(4) user loses phone;
(5) user purchases new phone;
(6) new phone downloads data;
(7) user enters key4cloud from physical notes & decrypts backup.
Yes, it requires paper and a pencil and user education (hence the opt-in). But it's also incredibly resistant to "Give us all iCloud data on User Y."
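A rough sketch of what that opt-in flow could look like, keeping the hypothetical key4cloud name from the list above (this uses the third-party Python cryptography package purely for illustration; it is not anything Apple ships):

    # Sketch of the opt-in scheme described above; a real implementation
    # would live in the Secure Enclave and the iCloud client, not Python.
    from cryptography.fernet import Fernet

    # (2) generate the key and show it to the user to write down
    key4cloud = Fernet.generate_key()
    print("Write this down and keep it safe:", key4cloud.decode())

    # (3) every backup is encrypted client-side before upload
    backup_blob = b"...device backup contents..."
    ciphertext = Fernet(key4cloud).encrypt(backup_blob)
    # upload `ciphertext` to iCloud; the server only ever sees ciphertext

    # (6)-(7) on a new phone, the user types the key back in to restore
    restored = Fernet(key4cloud).decrypt(ciphertext)
    assert restored == backup_blob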
It can be the same hardware, but I believe that's not usually what's meant by "hardware-based encryption". The point is that the private keys never leave the phone's hardware, thus making it secure. So they could employ the same hardware, but the hardware does not have the necessary keys.
Why would they have made the Secure Enclave allow updates on a locked device without wiping the key in the first place? Either they didn't think it through, assumed they would never be compelled to use it as a backdoor, or perhaps they were afraid some bug could end up having catastrophic consequences of locking a billion people out of their phones with no way to fix it? Do we even know for certain that the Secure Enclave on the 6s can be reflashed on a locked phone without wiping the key?
From what's been said, it seems like it was made to be updated so that Apple could easily issue security updates. They've already increased the delay between repeated attempts at password entry. Probably they were worried about vulnerabilities or bugs that hadn't been found and wanted to maintain debugging connections to make repairs easier. A tamper-resistant self-destruct mechanism with no possibility of recovery introduces extra points of failure, and it seems that until now, they didn't think it was necessary.
Look at the controversy over the phone not booting with third-party fingerprint reader repairs as an example. People were upset when they found out that having their device worked on could make it unbootable, but Apple was able to easily fix it with a software update. If it had been designed more securely, it might have wiped data when it detected unauthorized modifications, which would have meant even more upset people. Now that this has become a public debate, there will be a very different response to making it more secure.
How much easier? If all they had to do to not have access to it themselves is to ask the user for his password when there's a new update, that's hardly that inconvenient...
I'm not saying that it was the right thing to do in hindsight, but I get a little nervous even when updating a small web server, so I understand the tendency to leave repair options open on something as big as iPhones. Real hardware-based security is about more than just asking for a password. It means making the device unreadable if it's been disassembled or tampered with, and that could have unintended side-effects if any mistakes are made or something is overlooked. It's definitely worth pursuing considering the political situation the world is in right now.
As I understand it, Secure Enclave firmware is just a signed blob of code on main flash storage that's updated along with the rest of iOS, which can be done via DFU without pin entry. I assume DFU updates are very low level, with no knowledge of the Secure Enclave or ability to prompt the user to enter their pin.
Making the DFU update path more complex increases the risk of bugs and thus the risk of permanently bricking phones.
You could imagine an alternative where on boot the Secure Enclave runs some code from ROM which checks that a hash of the SE firmware matches a previously signed hash, which is only updated by the Secure Enclave if the user entered their pin during the update. If it doesn't match, either wipe the device or don't boot until the previous firmware is restored.
This way Secure Enclave firmware updates and updates via DFU are still possible, but not together without wiping the device.
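A sketch of that boot-time check, with made-up function and variable names (nothing here reflects Apple's actual firmware):

    import hashlib

    def boot_check(firmware: bytes, approved_hash: bytes) -> str:
        """Hypothetical ROM-resident check described above: approved_hash is
        only ever updated by the Secure Enclave after the user entered their
        PIN during the update."""
        if hashlib.sha256(firmware).digest() == approved_hash:
            return "boot"           # firmware matches what the user approved
        # Firmware changed without PIN entry (e.g. via DFU): either refuse to
        # boot until the old firmware is restored, or wipe the keys.
        return "wipe-or-refuse"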
Yeah, the key question is how Secure Enclave firmware updates work, and whether they can be prevented without pin entry. One former Apple security engineer thinks they are not subject to pin entry: https://twitter.com/JohnHedge/status/699892550832762880
> or perhaps they were afraid some bug could end up having catastrophic consequences of locking a billion people out of their phones with no way to fix it?
That basically happened (at a smaller scale) just last week. When Apple apologized and fixed the "can't use iPhone if it's been repaired by a 3rd party" thing, the fix required updating phones which were otherwise bricked. It's not an unreasonable scenario.
If the device has a manufacturer's key and the user's key, then it's basically down to simple Boolean logic: does the innermost trusted layer allow something to be installed or altered if it is authorized by the manufacturer's key OR your key? Or the manufacturer's key AND your key? Or just your key? (With a warning if it has no other key?)
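In code, the policy question is literally just that boolean choice (a toy sketch; the signature checks themselves are elided and the policy names are made up):

    # Toy sketch of the policy choice above.
    def update_allowed(policy: str, signed_by_manufacturer: bool, signed_by_user: bool) -> bool:
        if policy == "manufacturer_or_user":     # the manufacturer alone could push changes
            return signed_by_manufacturer or signed_by_user
        if policy == "manufacturer_and_user":    # both must agree
            return signed_by_manufacturer and signed_by_user
        if policy == "user_only":                # only the owner can authorize
            return signed_by_user
        return False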
>If the Secure Enclave can be updated on a locked phone, all they need to do is stop allowing that, right?
That probably also means removing most debugging connections from the physical chip, and making extra sure you can't modify secure enclave memory even if you desolder the phone.
No one has been talking about the fact that you can rebuild transistors on an existing chip. It's very high-tech stuff, the sort Intel uses to painstakingly repair early engineering samples, but it is used.
You decap the chip with HF to expose the die, and then, using focused ion beams and a million-dollar microscope setup, you can rearrange the circuits. So if the NSA absolutely had to have the data on the chip, they could modify it to make it sing. If, say, they knew an iPhone had the location of Bin Laden on it, they could get the goods without Apple.
They're not anywhere near 99% of the way there; they've destroyed the heterogeneous decentralized ecosystem that broad security requires.
Locking themselves out of the Secure Enclave isn't anywhere near sufficient. As long as the device software and trust mechanisms are totally opaque and centrally controlled by Apple, the whole thing is just a facade. There's almost nothing Apple can't push to the phone, and the auditability of the device is steadily trending towards "none at all".
If the NSA pulls a Room 641A, we'd never know. If Apple management turns evil, again, we'd never know. If a foreign state uses some crazy TEMPEST attack to acquire Apple's signing keys ... again, we'd never know.
Then again, nobody is suing over Android phone crypto, and as recently as last November bugs were being discovered where something as simple as entering an excessively long password allows you to bypass the lock screen.
In the Android world too many parties have the keys to the kingdom, and people who protect their devices take that into consideration. Also, once the bootloader is unlocked and custom firmware is put on there, all bets are off. I have yet to see a viable attack against sufficiently strongly protected LUKS at rest.
I think from the context it's pretty clear that "hack" in this case is referring to "being forced to unlock". Yes, they could still deliberately break encryption for future OSes and phones, but the same could be said of any software, open or closed source.
I don't think acting like an open ecosystem is the be-all and end-all of security is productive. Most organizations (let alone individuals) don't have the resources to vet every line in every piece of software they run. Software follows economies of scale, and hard problems (e.g., TLS, font rendering) will only have one or two major offerings. How hard would it be to introduce another Heartbleed into one of those?
Binaries can be converted back to assembly and quite often even back to equivalent C; bugs are most often found by fuzzing (intentional or not), which does not require source code. The difference is that open source is more often analysed by white hats, who would rather publish vulnerabilities and help fix them, while closed source is analysed by black hats, who would rather sell or exploit them in secret.
You misunderstand; if you can't even decrypt the binary, you can't disassemble, much less run a decompiler over it.
As someone who has done quite a bit of reverse engineering work, I have no idea how I'd identify and isolate a vulnerability found by fuzzing without the ability to even look at the machine code.
If it runs, it has to be decrypted (at a current level of cryptography); at most it is obfuscated and the access is blocked by some hardware tricks which may be costly to circumvent, but there is nothing fundamental stopping you.
> don't have the resources to vet every line in every piece of software they run
For the same reason, I do not independently vet every line of source code I run, but I still reasonably trust my system orders of magnitude more than anyone could - and, I argue, more than anyone can - trust proprietary systems. And that is because while I personally may not take the initiative to inspect my sources, I know many other people will, and if I were suspicious of anything I could investigate.
Bugs like Heartbleed just demonstrated... well, several things:
1. Software written in C is often incredibly unsafe and dangerous, even when you think you know what you are doing.
2. Implementing hard problems is not the whole story, because you also need people who comprehend said problems, the sources implementing them, and have reason to do so in the first place.
Which I guess relates back to C in many ways.
I look forward to crypto implemented in Rust and other memory-, concurrency-, and resource-safe languages. There is always an attack surface where a mistake can compromise any level of security - if you move the complexity into the programming language, the burden falls on your compiler. But in the same way you can only trust auditable, heavily used, in-production sources, nothing is going to be more heavily used and scrutinized, at least by those interested, than the languages themselves.
C is not a problem -- you can write a bug in every language. Even with memory safety and a perfect compiler, a bug may direct the flow in a bad direction (bypassing auth, for instance) or leak information via a side channel.
We all understand that as long as Apple can update the phone they can do all kinds of bad things.
The important thing about the Secure Enclave is that it pushes security over the line, so that the attacker has to compromise you before you do whatever it is that will get you on somebody's shitlist.
Probably not. If you're dead, they probably have your fingers. If you're alive, they can compel you to unlock the device with your fingerprint.
The only point I'm making is that Apple already designed a cryptosystem that resists court-ordered coercion: as long as your passcode is strong (and Apple has allowed it to be strong for a long time), the phone is prohibitively difficult to unlock even if Apple cuts a special release of the phone software.
Using a strong PIN is pretty annoying, and it's a relatively visible signal when using the phone on the street, etc. So it could be a good filter (maybe via street cams) for flagging suspicious people - which isn't a bad goal for law enforcement.
That sounds good until you remember the Bayesian Base Rate Fallacy: there are very few terrorists (the base rate of terrorism is very low), so filtering on "people with strong passphrases" is going to produce an overwhelming feed of false positives.
Be careful not to take the base rate fallacy too far, with enough difference in likelihood even a small base rate won't prevent an effect from being significant, and regardless of the base rate you'll still get some information out of it, it might just not be as much as you wanted.
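A quick worked example with made-up numbers shows both points at once:

    # Made-up numbers: a tiny base rate of "interesting" targets, and a
    # filter ("uses a strong passphrase") assumed more common among them.
    base_rate   = 1e-6    # fraction of phone users who are actual targets
    p_strong_t  = 0.50    # P(strong passphrase | target)       -- assumed
    p_strong_nt = 0.05    # P(strong passphrase | not a target) -- assumed

    p_strong = p_strong_t * base_rate + p_strong_nt * (1 - base_rate)
    p_target_given_strong = p_strong_t * base_rate / p_strong
    print(p_target_given_strong)   # ~1e-5: still overwhelmingly false positives,
                                   # but ~10x the prior, so not zero information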
Nobody cares that you're using an alphanumeric passcode on your iPhone.
Some corps require or strongly encourage it. My employer does.
And most parents I know use alphanumeric to keep their kids from wiping their phones and iPads just by tapping the numbers. (A four digit number code auto-submits on the 4th tap, so all it takes is 40 toddler taps. An alphanumeric code can be any length and won't submit unless the actual submit button is tapped.)
Corporate email profiles on BYOD phones often enforce a long passcode requirement, so you've got a lot of Fortune 500 sales guys to screen out if you're stopping and searching anybody with a suspiciously long password.
I'm at a loss as to how an alphabet agency could determine whether a weak or strong passcode was used. How does a PIN get stored on the phone? Surely not as the plain text of a 4-digit PIN. If they do any encryption of the 4-digit PIN, how would it appear any different from a significantly stronger passcode?
Except that with Touch ID, you only have to enter it when you reboot the phone, or if you've mis-swiped 5 times. I've had a strong pin for a couple of years, and really don't find it even a slight inconvenience (in the way that I use a super-weak password for Netflix, as entering passwords on an Apple TV is a real pain)
If they have access to a live finger for the TouchID, sure they can bypass - but they could do that with the $5 guaranteed coercion method as well [1].
Copying a good fingerprint from a dead finger or a randomly placed print is not easy [2]. It's hard but doable, and you only get 5 tries, so if you screw up you have thrown away all the hard work of the print transfer.
All bets are off if the iPhone is power-cycled. Best bet if you're pulled over by authorities or at a security checkpoint is to turn off your iPhone (and have a strong alphanumeric passcode).
> All bets are off if the iPhone is power-cycled. Best bet if you're pulled over by authorities or at a security checkpoint is to turn off your iPhone (and have a strong alphanumeric passcode).
Excellent advice. Even better, if you're about to pass through US customs and border patrol, backup the phone first, wipe, and restore on the other side. Of course, this depends on your level of paranoia. I am paranoid.
If you're paranoid, making a complete copy of all your secrets on some remote Apple or Google "cloud" where the government can get at it trivially is the exact opposite of what you want to be doing.
Well, yeah, if you back it up with a 3rd party backup tool, you are trusting the 3rd party.
I recommend you make a backup to your laptop, which you then encrypt manually. That way the trust model is: you trust yourself. Then you can do whatever you want with the encrypted file. Apple's iCloud is perfectly fine at this point.
The real challenge is to find a way to restore that backup, because you have to be on a computer you trust. If you decrypt the backup on a "loaner" laptop, your security is broken.
If you decrypt the backup on your personal laptop but the laptop has a hidden keylogger installed by the TSA or TAO, your security is broken.
It would be necessary to backup the phone on the _phone_ _itself_. Then manually encrypt the file (easy to do). Then upload to iCloud. At this time, no such app exists for iOS.
Since you plan to restore the backup to the phone anyway, it's no problem to decrypt a file on the phone before using it for the restore.
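For the "encrypt it yourself on a machine you trust" step, here is one possible sketch (third-party Python cryptography package; the file names, passphrase, and parameters are examples, not a vetted recipe):

    # Derive a key from a passphrase you remember and encrypt the backup
    # file locally before it goes anywhere near a cloud service.
    import base64, hashlib, os
    from cryptography.fernet import Fernet

    passphrase = b"long passphrase only you know"
    salt = os.urandom(16)                        # store alongside the ciphertext
    raw = hashlib.pbkdf2_hmac("sha256", passphrase, salt, 200_000, dklen=32)
    key = base64.urlsafe_b64encode(raw)          # Fernet expects a base64 key

    with open("phone_backup.tar", "rb") as f:    # example file name
        ciphertext = Fernet(key).encrypt(f.read())
    with open("phone_backup.tar.enc", "wb") as f:
        f.write(salt + ciphertext)               # safe to upload anywhere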
> I recommend you make a backup to your laptop, which you then encrypt manually.
You mean your laptop that was manufactured by a 3rd party, with a network card that was manufactured by a 3rd party? And you're using encryption software that, even if it's open source, you probably aren't qualified to code review. I'm not downplaying the benefit of being careful, but unfortunately you can keep doing that pretty much forever.
Probably not. FB is doing the same thing. In most cases your app or service does not actually know if the remote service it is talking to is local or in another DC. Yes, you can find out if you need to, but that requires contacting another service and introduces some delay and latency. Use a service router to try to keep the calls local to a rack or a DC, but you know that if there are problems with local cells you might get routed across the country so start with the assumption that _all_ connections get encrypted even if the connection is to localhost.
Wiping the phone doesn't help you. Using the strong password renders the information inaccessible, at least as inaccessible as your phone backup is. Touch ID isn't re-enabled until the phone's passcode is used. Presumably if the authorities have access to your phone's memory they also have access to your laptops, and neither will do them any damn good.
And it's not paranoia if there's a legitimate threat; that's just called due diligence. ;)
> Touch ID isn't re-enabled until the phone's passcode is used.
Do the docs confirm that there is no way around this? I'd guess generating the encryption key requires the passcode, which is discarded immediately, and Touch ID can only "unlock" a temporarily re-encrypted version which never leaves ephemeral storage?
From the iOS Security Guide - How Touch ID unlocks an iOS device:

If Touch ID is turned off, when a device locks, the keys for Data Protection class Complete, which are held in the Secure Enclave, are discarded. The files and keychain items in that class are inaccessible until the user unlocks the device by entering his or her passcode.

With Touch ID turned on, the keys are not discarded when the device locks; instead, they're wrapped with a key that is given to the Touch ID subsystem inside the Secure Enclave. When a user attempts to unlock the device, if Touch ID recognizes the user's fingerprint, it provides the key for unwrapping the Data Protection keys, and the device is unlocked. This process provides additional protection by requiring the Data Protection and Touch ID subsystems to cooperate in order to unlock the device.

The keys needed for Touch ID to unlock the device are lost if the device reboots and are discarded by the Secure Enclave after 48 hours or five failed Touch ID recognition attempts.
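The wrapping described there is, in spirit, the standard AES key-wrap construction. A toy analogue using random stand-in keys (third-party cryptography package; in the real design none of this key material ever leaves the Secure Enclave):

    import os
    from cryptography.hazmat.primitives.keywrap import aes_key_wrap, aes_key_unwrap

    class_key    = os.urandom(32)   # stands in for a Data Protection class key
    touch_id_key = os.urandom(32)   # key handed to the Touch ID subsystem

    # On lock (with Touch ID on): keep only the wrapped form around.
    wrapped = aes_key_wrap(touch_id_key, class_key)

    # On a successful fingerprint match: unwrap and unlock.
    assert aes_key_unwrap(touch_id_key, wrapped) == class_key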
Touch ID, I believe, unlocks the passcode so the phone can use it to log in, but Touch ID itself is not enabled until you enter the passcode once, presumably because it isn't actually stored on the device in a readable way.
Could the "code equivalent" of your fingerprint be stolen by a rogue app if it's allowed to read it? I don't have a touchId phone but have wondered what would happen if your "print" is stolen -- passwords can at least be changed.
Speaking as an App Developer, we cannot touch stuff like that. We're allowed to ask Touch ID to verify things and process the results, but we don't actually get to use the Touch ID system. It's similar to how the shared keychain is used: We can ask iOS to do things, but then must handle any one of many possible answers. We don't actually see your fingerprint in any way.
Wouldn't surprise me if true, iOS as a whole is built in a very modular fashion when it comes to the different components of the OS and developers only get access to what Apple deems us worthy of, hehe. Not that I want access to Touch ID, I much prefer to not have access to that...
Depends on if they're at a border crossing or in the interior of the country. Laws apply to citizens and non-citizens alike. If you haven't been admitted to the country, about the most they can do is turn you away at the border checkpoint and put you on the next flight back to your home country.
We wrote about this in our border search guide and concluded that there is a risk of being refused admission to the U.S. in this case (in the border search context) because the CBP agents performing the inspection have extremely broad discretion on "admissibility" of non-citizens and non-permanent residents, and refusing to cooperate with what they see as a part of the inspection could be something that would lead them to turn someone away. (However, this is still not quite the same as forcing someone to answer in the sense that they don't obviously get to impose penal sanctions on people for saying no.)
" they don't obviously get to impose penal sanctions on people for saying no"
I wonder if there are any negative effects associated with being refused entry by a CBP officer? Could it be the case that if you are refused entry once, in the future they will be more likely to refuse you entry? If so, that's a fairly significant penalty/power that the CBP person has.
> I wonder if there are any negative effects associated with being refused entry by a CBP officer? Could it be the case that if you are refused entry once, in the future they will be more likely to refuse you entry? If so, that's a fairly significant penalty/power that the CBP person has.
Yes, some categories of non-citizen visitors (I don't remember which) are asked on the form if they have ever been refused entry to the U.S. (and are required to answer yes or no). If they're using the same passport number as before, CBP likely also has access to a computerized record of the previous interaction.
Plenty of countries will ask if you've ever been refused entry to any country. And you're also generally automatically excluded from any Visa Waiver Programme from then on too. So it's a major issue.
> If they're using the same passport number as before, CBP likely also has access to a computerized record of the previous interaction.
(They might also be able to search their database by biographical details such as date of birth, so getting a different passport may not prevent them from guessing that you're the same person.)
It is not a good bet if you're pulled over by the authorities to be doing something with your hands that they can't reliably identify as different from preparing a weapon. Particularly if not white.
> "Copying a good fingerprint from a dead finger or a randomly placed print is not easy [2]. It's hard, doable but you get 5 tries so if you screw up, you have thrown away all the hard work of the print transfer."
You get plenty of tries to perfect the technique, before using it on the actual device.
You acquire identical hardware and "dead finger countermeasures" (does the iphone employ any? Some readers look for pulses and whatnot, I don't know if the iphone does). You then practice reading the fingerprint on that hardware until you are able to reliably get a clean print and bypass any countermeasures. Only then do you try using the finger on the target phone.
You might still fuck it up, and you only get 5 chances on the target hardware. But with practice on the right hardware, I see no reason why you couldn't get it.
Is it only five fails on TouchID to delete data? I don't have the option to delete the data enabled on my iPhone... but it often takes more than five tries to just get it to work on my finger that is legitimately registered in touchID.
After five failures you cannot use Touch ID to unlock and will instead need the passcode to access the phone again. This means that any approach to fooling the fingerprint reader will need to be done within five tries.
Of course you can. As long as the courts can be persuaded that there is no causal nexus between the torture and the evidence, or if the torture actually isn't legally torture. That assumes that the defendant can show (or is even aware) the torture actually took place.
Examples:
* prolonged solitary confinement: not legally torture
* fellow prisoner violence: not legally torture, no nexus
* prolonged pre-trial confinement: not really torture, but we may as well include it
* waterboarding/drowning: not legally torture? (Supreme Court declined to rule)
Sure, you can. It all depends on who gets to define "torture."
If they can find a judge who believes the iron maiden isn't torture while the anal pear is, then guess what... the government will use the iron maiden.
Even if they can't find such a pliable jurist, they'll have no problem getting a John Yoo to write an executive memo that justifies whatever they want to do to you, and let the courts sort it out later. There's no downside from their point of view.
The memos didn't provide de iure indemnity. There is no constitutional basis, in fact the proposition that a memo can supersede the Constitution is idiotic on its face.
The failure is the de facto doctrine of absolute executive immunity. It has two prongs: 1. "When the president does it, that means that it is not illegal." 2. When the perpetrator follows president's orders, also not illegal.
Nevertheless, since there is no legal basis, there is nothing preventing the next government from prosecuting them.
> The memos didn't provide de iure indemnity. There is no constitutional basis, in fact the proposition that a memo can supersede the Constitution is idiotic on its face.
Yes, and that's what I meant by "let the courts sort it out later." The Constitution's not much help either way, being full of imprecise, hand-waving language and vague terms like "cruel and unusual." It was anticipated by the Constitution's authors that it would be of use only to a moral government.
> Nevertheless, since there is no legal basis, there is nothing preventing the next government from prosecuting them.
I wonder if that's ever happened in the US? Does anyone know?
I would disagree. The Constitution is a bulwark against tyranny. The US has successfully prosecuted waterboarding in the past.
It usually only happens when the rule of law is suspended and then resumed. You're a young country, so maybe it hasn't happened before. Robert H. Jackson was an American, though ;-)
No, although I'd love to see a HealthKit app that uses your Apple Watch as a dead man's switch, and disables Touch ID or powers the phone off in the event the watch is removed or your pulse is no longer detected.
If you take the watch off, it automatically locks. I wouldn't mind it also automatically locking my phone and requiring a passcode instead of TouchID.
There is a VERY limited amount of time in which you can take the watch off and switch to another wrist (like milliseconds; you practically have to be a magician to switch wrists, which I do throughout the day).
Apple has the watch, they could use it to beef up security for those that want it.
I don't think "already fenced off if people were savvy" is really valid. That's the security equivalent of "no type errors if people were savvy", which is the same as "probably has type errors".
It was near-impenetrable, but it could have been fully impenetrable if it weren't for the fact that Apple could push OS updates without user consent. They could have made it impossible for anyone to get in even if your pin was 1234, but didn't.
Kind of disappointing given their whole thing about the Secure Enclave. Bunch of big walls in the castle, but they left the servant's door unlocked.
The Secure Enclave, as per their docs, sounds just like their implementation of TrustZone, most likely following ARM specs.
The main difference would be that everyone knows TrustZone through Qualcomm's implementation and software, as it's been broken many times. At the end of the day "it's just software", though, which runs on a CPU-managed hypervisor with strong separation ("hardware", but really, the line is quite a blur at this level).
What that means is that you need to be unable to update the Secure Enclave without the user's code (so the enclave itself needs to check that), which is probably EXACTLY what Apple is going to do.
Of course, Apple can still update the OS to trick the user into entering the code elsewhere, and then the FBI could use that to update the enclave and decrypt - though that obviously means the user needs to be alive.
Past that, you'd need to extract the data from memory (actually opening the phone) and attempt to brute force the encryption. The FBI does not know how to do this part; the NSA almost certainly does, and arguably Apple might, since they design the chipset itself.
I don't understand the whole debate about Apple security:
- Apple is required to have backdoors, at least on iPhones sold in foreign countries, isn't it?
- Even if the SE were completely secure, a rogue update of iOS could intercept the fingerprint or passcode whenever it is typed, and replay it to unlock the SE when spies ask for it. As far as I know, the on-screen keyboard is controlled by software which isn't in the SE.
- Even if iCloud is supposed to be encrypted, they didn't open up that part to public scrutiny.
- Therefore, perfect security around the SE only solves the problem of accessing a phone that wasn't backdoored yet. There is every reason for, say, Europe and the CIA to require phones to be backdoored by default for law-enforcement and economic-intelligence purposes.
If the person knowing the passcode is around and you can fool them into using their passcode then yes, you could capture their passcode. Touch ID is even less of a problem because taking someone's fingerprints is a lot easier than taking a passcode out of their head.
But in both those situations the weakness is in the person, not the device. Apple devices still potentially have security weaknesses which the FBI is asking Apple to exploit for them. Apple wants to fix these weaknesses, to stop Apple being forced to exploit them.
> Apple is required to have backdoors, at least on iPhones sold in foreign countries, isn't it?
I don't believe this is the case.
> Even if the SE were completely secure, a rogue update of iOS could intercept the fingerprint or passcode whenever it is typed, and replay it to unlock the SE when spies ask for it. As far as I know, the on-screen keyboard is controlled by software which isn't in the SE.
What you say about an on-screen passcode is likely true but the architecture of the secure enclave is such that the touch ID sensor is communicating over an encrypted serial bus directly with the SE and not iOS itself. It assumes that the iOS image is not trustworthy.
From the white paper [1]:
It provides all cryptographic operations for Data Protection key management and maintains the integrity of Data Protection even if the kernel has been compromised.
...
The Secure Enclave is responsible for processing fingerprint data from the Touch ID sensor, determining if there is a match against registered fingerprints, and then enabling access or purchases on behalf of the user. Communication between the processor and the Touch ID sensor takes place over a serial peripheral interface bus. The processor forwards the data to the Secure Enclave but cannot read it. It's encrypted and authenticated with a session key that is negotiated using the device's shared key that is provisioned for the Touch ID sensor and the Secure Enclave. The session key exchange uses AES key wrapping with both sides providing a random key that establishes the session key and uses AES-CCM transport encryption.
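A toy rendering of that transport, just to make the moving parts concrete (the HKDF derivation and framing below are stand-ins; the real provisioning and key-wrapping details are not public):

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESCCM
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF
    from cryptography.hazmat.primitives import hashes

    shared_key   = os.urandom(32)   # provisioned sensor<->SE key (stand-in)
    rand_sensor  = os.urandom(16)   # each side contributes a random value
    rand_enclave = os.urandom(16)

    session_key = HKDF(algorithm=hashes.SHA256(), length=16, salt=None,
                       info=b"touchid-session").derive(shared_key + rand_sensor + rand_enclave)

    nonce = os.urandom(13)
    frame = AESCCM(session_key).encrypt(nonce, b"fingerprint scan data", None)
    # The application processor only ever forwards nonce + frame; it cannot
    # read the contents.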
The only statement I could find from Apple was from the iOS security guide that states, "it utilizes its own secure boot and personalized software update separate from the application processor." I think we can both agree that's a pretty vague statement, if you have a better source I'd like to see it.
"The executives — speaking on background — also explicitly stated that what the FBI is asking for — for it to create a piece of software that allows a brute force password crack to be performed — would also work on newer iPhones with its Secure Enclave chip"
I understand that the boot chain is the only way Apple may modify the behaviour of the Enclave, but how would the update be forced? DFU wipes the class key, making any attempt at brute forcing the phone useless. If debug pinout access is available, then why does the FBI need Apple to access the phone at all?
Any device that relies on hiding secrets inside the silicon itself is subject to hacking. Several secure-enclave like chips have been hacked in the past by using electron microscopes and direct probes on the silicon. If BlackHat conference independent security researchers have the resources to pull this off, Apple and the NSA certainly can. Exfiltrating the Enclave UID could be done by various mechanisms at the chip level, especially if you have access to the actual HW design and can fab devices to help.
I mean, we're talking about threat models where chip-level doping has been shown as an attack. This just seems to be a variation on the same claims of copy protection tamper resistant dongles we've had forever. That someone builds a secure system that is premised on a secret being held in a tiny tamper-resistant piece, only the tamper resistance is eventually cracked.
It might even be the case that you don't even need to exfiltrate the UID from the Enclave; what the FBI needs to do is test a large number of PIN codes without triggering the backoff timer or wipe. But the wipe mechanism and backoff timer run in the application processor, not on the enclave, and so they are susceptible to cracking attacks the same way many copy protection techniques are.
You may not need to crack the OS, or even upload a new firmware. You just need to disable the mechanism that wipes the device and delays how many wrong tries you get. So for example, if you can manage to corrupt, or patch the part of the system that does that, then you can try thousands of PINs without worrying about triggering the timer or wipe, and without needing to upload a whole new firmware.
I used to crack disk protection on the Commodore 64 and no matter how sophisticated the mechanism all I really needed to do was figure out one memory location to insert a NOP into, or change a BNE/BEQ branch destination, and I was done. Cracking often came down to mutating 1 or 2 bytes in the whole system.
(BTW, why the downvote? If you think I'm wrong, post a rebuttal)
* Decapping and feature extraction even from simpler devices is error prone; you can destroy the device in the process. You only get one bite at the apple; you can't "image" the hardware and restore it later. Since the government is always targeting one specific phone, this is a real problem.
* There's no one byte you can write to bypass all the security on an iPhone, because (barring some unknown remanence effect) the protections come from crypto keys that are derived from user input.
* The phone is already using a serious KDF to derive keys, so given a strong passphrase, even if you extract the hardware key that's mixed in with the passphrase, recovering the data protection key might still be difficult.
No, the chief protection against the PIN code hacking comes from the retry counter. The FBI doesn't need the crypto keys, it just needs the PIN code. So it needs to brute force about 10,000 PIN codes.
Any mechanism that a) prevents the application processor from remembering it incremented the count, b) corrupts the count, or c) patches the logic that handles a retry count of 10 is sufficient to attack the phone.
Somewhere in the application processor, code like this is running:
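(A made-up stand-in for that check, since the real code is of course not public:)

    # Hypothetical sketch of the retry logic being described; names are invented.
    def handle_pin_attempt(entered_pin, correct_pin, state):
        if entered_pin == correct_pin:
            state["failed"] = 0
            return "unlock"
        state["failed"] += 1
        if state.get("wipe_after_10") and state["failed"] >= 10:
            return "wipe_keys"                       # erase the data protection keys
        return "delay_{}s".format(min(2 ** state["failed"], 3600))   # escalating backoff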
Now there are two possibilities. Either there are redundant checks, or there aren't. If there aren't redundant checks, all you need to do is corrupt this code path or memory in a way that prevents its execution, even if that means crashing the phone and triggering a reboot. Even with 5 minutes between crash-reboot cycles, they could try all 10,000 pins in 34 days.
But you could also use more sophisticated attacks if you know where in RAM this state is stored. You wouldn't need to decap the chip; you could just use local methods to flip the bits. The iPhone doesn't use ECC RAM, so there are a number of techniques you could use.
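(For what it's worth, the 34-day figure above is just:)

    attempts, minutes_per_attempt = 10_000, 5
    print(attempts * minutes_per_attempt / (60 * 24))   # ~34.7 days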
You aren't limited to 10,000 possibilities. You can use an alphanumeric passphrase. The passphrase is run through PBKDF2 before being mixed with the device hardware key.
On phones after the 5C, nothing you can do with the AP helps you here; the 10-strikes rule is enforced by the SE, which is a separate piece of hardware. It's true that if you can flip bits in the SE, you can influence its behavior. But whatever you do to extract or set bits in SE needs to not cause the SE to freak out and wipe keys.
We can still imagine a state actor spending the megadollars to research a reliable chip-cloning process, to bring parallel brute-forcing within reach. I wonder if the NSA have been on a SEM/FIB equipment buying spree lately.
The ultimate way to defeat physical or software attacks is to exploit intrinsic properties of the universe, which suggests finding a mathematical and/or quantum structure impervious to both.
Your reply is the kind of comment I come to HN for - we've started off talking about mobile device security and ended up discussing unbreakable quantum encryption.
I'm speaking of the case of the San Bernardino killers. Strong alphanumeric passphrases are anti-usability; the vast majority of people won't use them. Hell, the vast majority of people don't even have strong alphanumeric passwords on desktop services.
So it falls to either 2-factor or biometrics to avoid PINs. Biometrics, of course, have their own problems.
Perhaps people should really carry around a Secure Enclave on a ring or something, and with a button to self-destruct it in case of emergency. (e.g. pinhole reset)
You only need the strong alphanumeric pass phrases on device startup, then you can use TouchID. I bought an iPhone 6 for exactly this reason (employer required strong passphrase, was too annoying to type in on the Android device I had at the time).
You seem to make the assumption that corrupting the Secure Enclave firmware is easy, or that its RAM is exposed off-chip.
The entire point of a secure enclave is to completely enclose all the hardware and software needed to generate encryption keys in a single lump of silicon.
This means that all of its processing requirements (it's a complete co-processor) are on chip, its RAM is on chip (not shared with the main CPU, and probably has ECC), and it uses secure boot to cryptographically verify that its firmware has not been tampered with before it starts executing. Additionally, it may even be possible to update its bootloader in the future to prevent further updates without a passcode.
The end result is that attacking a secure element is very difficult. There are few, if any, exposure points that would allow you to fiddle with its internal state, and any attempt to do so should result in the secure element wiping stored keys, making further attacks a moot point.
I don't make that assumption, I worked on developing TPM modules myself in the 90s at research labs, and our prototypes had even more anti-tampering than so far revealed about Secure Enclave/Trustzone: we had micro-wire-meshes in the packaging to self-destruct on drilling or decapping, we had anti-ultrasonic and anti-TEMPEST shielding. I'm pretty familiar.
The point is that state actors have vast resources to pull off these attacks. The NSA intercepted hardware in the supply chain to implant attacks as documented by Snowden. Stuxnet was a super-elaborate attack on the physical resources of the Iranian nuclear program, which was obviously carried out with supply chain vendors like Siemens. Apple uses Samsung as a supplier, and the US government has very high level security arrangements with the South Koreans, so how do we know the chips haven't been compromised even before they arrive at Foxconn for assembly?
Perhaps the NSA is savvy enough to know that a heroic effort isn't needed, and that the FBI is mostly looking to set precedent rather than find anything worth the cost and risk of chip-hacking.
Seems to me that when we are at a point where every time the NSA wants to get at some data they have to start a heroic effort of attacking low-level hardware, we are in a pretty good state in terms of device security.
>its RAM is on chip (not shared with the main CPU, and probably has ECC)
Apple's security guide would indicate otherwise, look on page 7. The secure enclave encrypts its portion of memory, but it isn't built into the secure enclave itself.
Is there anything preventing them from imaging the parts of the device that store data? The data in the image would be encrypted, of course, but wouldn't this give them essentially unlimited (or up to their budget) attempts at getting to the data?
If the encryption key didn't depend on the hardware this would work. Even the iPhone 5C that the recent court case is about relies on the hardware keeping a key secret and it doesn't contain the secure enclave. For an iPhone 5C, the encryption key is derived from the pin and a unique ID for the phone that the CPU itself can't read. The only thing that the application processor can do is perform some crypto instructions using the key, there isn't an operation that would just put the key into memory or a register that you can read from. Even if you have root and the phone in front of you with the password, there's nothing you can do short of decapping it to try to identify that key.
Unless there is a weakness in the PRNG/RNG that creates the fused key in the secure enclave itself, which is not out of the question. I am not sure why the FBI didn't ask Apple politely how these keys are generated in the first place.
That seems excessively unlikely to me. The phone itself wouldn't have anything to seed a PRNG with, so the random number would need to come from an embedded hardware generator or a dedicated random number device in the factory, and both of those options would have huge amounts of engineering oversight.
>You may not need to crack the OS, or even upload a new firmware. You just need to disable the mechanism that wipes the device and delays how many wrong tries you get. So for example, if you can manage to corrupt, or patch the part of the system that does that, then you can try thousands of PINs without worrying about triggering the timer or wipe, and without needing to upload a whole new firmware.
I disagree. The pin validation is done within the secure enclave. You can't do it outside the secure enclave because the pin is combined with a secret that is burned into the silicon of it. The secure enclave can and will enforce timeouts for repeated failures, as well as refuse to process any pin entries after too many attempts. Disabling the wipes or bypassing the timer won't do you any good when you only have a few attempts.
The state representing the number of attempts must be stored somewhere, and thus a determined adversary could eventually corrupt it.
Look, there's a big difference between trusting known ciphers that have been well studied by the world's top cryptographers, and a proprietary TPM chip that relies on security-through-obscurity.
The history of embedding secrets into black boxes is a history of them being broken. This isn't a theoretical concern, it's a very practical one.
The question is not whether a determined adversary could corrupt the counter. The question is whether they can corrupt the counter before they corrupt something else that causes total data loss.
Physical defenses are not security through obscurity, and why are you assuming they don't use known ciphers?
Kerckhoff's principle should be adhered to if truly secure encryption is desired; alas, then all sorts of hard obstacles pop up (UX becomes a SPOF, most commonly - a secret always needs to be stored somewhere, if only in the user's head).
OTOH, the practical purpose of encryption is to remain unbroken for long enough, not to be completely unbreakable. As seen here, security-through-obscurity is practical enough in cases where user-obtained key material is too weak to provide enough protection using strong publicized crypto. In other words, it's a two-part key: one is in user's wetware, the other in phone's hardware (as per obXKCD, it's usually easier to attack the former).
Isn't the point of their efforts to make it so that Apple doesn't know how to get into the phones? If someone from BlackHat or Defcon can get in, the FBI should hire that person if they want access. The reason Apple is doing this is so that if they are served a court order, they can just say "we don't know how"
> You just need to disable the mechanism that wipes the device
Sure, to resist microscopic attacks, an IC must assert logical integrity to itself i.e. that the gates & wires are not compromised by a microscopic attack.
But just because you and I haven't imagined it, doesn't mean some kind of internal canary can't exist. Your naive code (below) of a counter might instead be based on quantum cryptography, or on intrinsic properties of a function or algorithm which if compromised the SE cannot function at all.
The existence of one-time password schemes like S/KEY gives me hope, since it is a sequence generator that simply doesn't function without input of the correct next value (technically the previous value from the hash function). S/KEY itself is not the answer (wrong UX and no intrinsically escalating timer), but I wanted to illustrate that you can generate a self-validating sequence without tracking integer position.
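For reference, the S/KEY idea being leaned on here fits in a few lines (a sketch only; real S/KEY uses a different hash and encoding):

    # Minimal S/KEY-style hash chain: the verifier stores only the last
    # accepted value and needs no attempt counter to know whether a
    # presented value is the legitimate next one in the sequence.
    import hashlib

    def h(x: bytes) -> bytes:
        return hashlib.sha256(x).digest()

    # Setup: client keeps the seed, verifier stores h^N(seed).
    seed, N = b"secret seed", 1000
    chain_top = seed
    for _ in range(N):
        chain_top = h(chain_top)
    verifier_state = chain_top

    # Authentication: client reveals h^(N-1)(seed); verifier checks that
    # hashing it once matches the stored value, then adopts it as new state.
    client_value = seed
    for _ in range(N - 1):
        client_value = h(client_value)
    print(h(client_value) == verifier_state)   # True; new state = client_value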
Apple apparently has a motive and the warchest for the R&D. If they're hiring cryptographers (has anyone checked?), they're acting on it.
I upvoted because I think you are absolutely correct.
Better security could be had with a two-factor system -- plug the phone into a cryptobox to decrypt. Having everything in one place is vulnerable.
I've been very impressed with what I've learned in the last few weeks regarding Apple's efforts to provide privacy for its customers using what seems to be some very robust engineering and design. I'm currently an Android user (Samsung S6 edge) but am seriously considering going back to the iPhone because of this.
The cynical side of me says that Apple's marketing tactics have worked. But I've got a feeling, heck, I want to believe, that this is actually driven by company values and not a short-term marketing benefit.
> "There's a lot of good stuff in Pd, and a lot I like about it. There's also a lot I don't like, and am scared of. My fear is that Pd will lead us down a road where our computers are no longer our computers, but are instead owned by a variety of factions and companies all looking for a piece of our wallet. To the extent that Pd facilitates that reality, it's bad for society. I don't mind companies selling, renting, or licensing things to me, but the loss of the power, reach, and flexibility of the computer is too great a price to pay."
I think his fears have come true to some extent in iOS, but knowing what we know now about government surveillance of everybody, it may no longer seem like too great a price to pay. That is, if you trust the vendor. Apple seems to be worthy of that trust. But Microsoft...?
> I think his fears have come true to some extent in iOS, but knowing what we know now about government surveillance of everybody, it may no longer seem like too great a price to pay.
We're already paying that price, essentially. An iPhone won't run arbitrary code, a replacement OS, or accept code from arbitrary sources. It's already an exclusively vendor-curated platform. If you're already going to buy into that model, I don't see the point in not going for the greatest amount of protection that you can get. (OK, yes, a dev can compile their own code and push it to their own device. I'm actually not sure why I don't hear about this happening more often as a way to run "unacceptable" programs on iOS devices).
I thought that's the opposite of what Palladium did. Doesn't it make it so the apps and data on your computer aren't actually yours? Like Microsoft would have total control over what you put on your computer? I was under the impression it didn't do anything to protect your privacy: instead it actually put backdoors in your computer that Microsoft could access any time they wanted?
Anything specifically missing on the Android side except the PR? Seriously asking if I'm missing something. The Nexus series has comparable crypto hardware and similar options for encryption + wiping.
Two things come to mind. First an equivalent of the secure enclave. Second a single company that is willing to go this far to protect its users. For Samsung this is complicated because both Google and Samsung are involved, and Samsung is not a US company so I'd expect them to cave in under pressure from the US govt more easily.
Edit: a Nexus device bought directly from Google with the right hw may address both points.
I have been looking at the Snapdragon 820, and at least on that level it does not seem that Android devices should be missing anything. The new Sense ID is an improved Touch ID, and I mean that both in terms of the fingerprint sensor itself and the hardware protection behind it. They implemented full UAF in the SecureMSM for the authentication. The best thing is that this is exposed to the layers above and can be leveraged in the growing FIDO ecosystem.
The major issue with Android systems does not seem to be lacking software and hardware, but rather the unwillingness of providers to push best practices as defaults to all users.
I somewhat agree and somewhat disagree with your analysis of the politics. There are advantages and disadvantages to both situations.
> For Samsung this is complicated because both Google and Samsung are involved, and Samsung is not a US company so I'd expect them to cave in under pressure from the US govt more easily.
To many Americans, Apple is the example of American innovation and entrepreneurial spirit, and a proof that the American model works. Apple employs 10s of thousands of Americans directly, and probably provides jobs for 100s of thousands indirectly. Going too aggressive on Apple, e.g. at the level where executives could be charged in court, or products embargoed, would be a decidedly unpopular move with many voters and politicians. Samsung is a much easier target here.
Also, as an American company, Apple can legitimately enter the democratic debate; see the calls it makes to Congress. Samsung can't really do that. Imagine Samsung putting out a press release quoting the founding fathers or referring to the First Amendment. That would not be credible.
You are right to a certain extent! But let's not forget that Samsung is a huge company too and is registered as per US norms. So the American executives of Samsung would be quite comfortable referring to either of them.
I'll repost a snippet from a post by merhdada that hints at the root of one of the problems with android security:
"This can happen only because of a design flaw in the security architecture of Android (L). Unlike iOS and like traditional PCs, the disk encryption key is always in memory when the device is booted and nothing is really protected if you get a device in that state. It's an all-or-nothing proposition."
Please read the entire thread, and check the links referenced in that thread, for information on how issues like these are mitigated.
That's only one issue though. There are a few more.
But none of that even matters a lot of times ... you really won't need to hack an android phone... because the data is also on corporate servers. So the FBI could get at it in any case most of the time.
Yeah, the problem is that Google's whole business model depends on uploading all your unencrypted data to their cloud, whereas Apple could probably decide to encrypt everything in iCloud so not even they could read it if any government/hacker came looking.
Of course it's a short-term marketing benefit. But, if the encryption is secure, then the marketing benefit is matching up with the customer benefit, so hey.
It's not clear that it IS a short-term benefit. Poll results are mixed (although the wording has a significant impact on the results) and a leading presidential candidate is calling for a boycott of their products. Marketing campaigns tend to be less polarizing. Also, I would imagine that losing the case would negatively impact sales more than if they had quietly complied.
Polls say whatever the poll-maker wants. Ask people if they support government surveillance, they say "Sure." Ask if they want the government to be able to access their Dick-Pics and the answer is a resounding NO[1]. Apple is on the right side of history here.
Right, but the question here is of the obvious short-term marketing benefit, which to me is not that obvious. I think that in the short term Apple has more to lose financially by not aiding the FBI in an emotionally-charged request than if they had silently complied, particularly if they end up losing the case.
Maybe I'm being cynical, but if I were Apple and had just been forced to implement back doors with a gag order, I might be announcing that my new phones were unhackable too.
Is anyone aware of anything that makes this more than a leap of faith?
Do you really need such strong security? Or after the FBI forced Apple to apply their best engineering minds to crack your phone, they'd just find a grocery shopping list and pictures of your cats?
Because this sounds a bit like Tesla's "operating room air quality" - something that might be useful 0.001% of the customers, and it's just marketing for the remaining 99.999%
How can you ask a question like this? Define "such strong" in this context. It's similar to asking "do you really need such free speech?" We're not talking about anything special here beyond a standard expectation of reasonable security. The fact that Apple is trying to make it "so secure even they can't hack it" is just a means for them to protect themselves that happens to align with the interests of the user.
General, unbreakable crypto security applied to all contents is a feature that very few people ever needed or even tried to achieve.
Until a few years ago you were perfectly content with keeping an agenda in your pocket and pictures in your living room's drawer. A minimum of privacy is of course needed and welcome; however, unless you're planning a major terror attack, or strategic war plans, or you have incredibly valuable industrial secrets (all cases in which you'll probably be using specialized SW to keep your information) you don't really need incredibly advanced security simply because nobody is going to spend vast amounts of time and resources to uncover your little secrets. The GP is talking about switching phone (spending money) to obtain a level of security that he won't need in a million years.
Your agenda in your pocket wasn't subject to unconditional dragnet surveillance. Copies of it weren't going to find their way on to security contractors' systems. Such copies wouldn't have then been stolen and distributed by whoever, and made available for search as you type. The intimacies of daily life are very precious.
For me it's not really about my personal security because, you're right, there's nothing interesting on my phone. My issue is with one entity having access to ALL of our phones. Have you read 1984? Because that's what that sounds like. It's too much power for the government to have.
I think I missed the part where anybody asked Apple to build a backdoor into every phone that could be accessed without appropriate control from the authorities and without passing through Apple each time.
Of course I'm not saying that your data should be uploaded daily to a government's server for anybody with a badge and free time to spare to look through.
Yes, you did miss it. The FBI/etc are very clearly and deliberately looking to set a precedent for use in any and all future instances. Just because you don't seem to value personal privacy and security doesn't mean the rest of us are willing to throw it away for no good reason.
The FBI here only represents the 'legal' government and not the world of secret courts and the NSA.
The NSA did in fact try to build backdoors into important hardware and software standards. They did push companies into using worse crypto. They do massive port scanning and build themselves botnets from which they attack other nation states. And that's just a tiny fraction of what they do.
So yes, I absolutely do need computer hardware and software that even the manufacturer can't break. Low-level security for boot and authentication is only the first of many, many steps that we have to take, all the way up to improving usability in end-user applications to make it hard to do the wrong thing.
The FBI is not the only player; all governments want such control, and all governments have things like the NSA. Even private actors are getting better and better.
We do need better security to protect the integrity of all our data, including all our communication and even, if possible, the metadata that we produce.
Of course. And 11000 meters waterproof is the only waterproof acceptable for a watch. And operating room clean air is the only clean air. And obsidian blades are the only ones that deserve to be used in your kitchen. And triple malt, 60 years aged whiskey is the only whiskey. Etc.
The reason not everyone has the best watches, air conditioning, knives, or whiskey is that, for physical products, quality tends to cost more.
There is no reasonable argument to be made that people shouldn't have higher quality products when they _don't_ cost more^.
Apple only have to develop "unbreakable" encryption once and then it costs them no more to make it available in every iPhone than to only make it available in some of them. Indeed, it'd be cheaper than maintaining both breakable and "unbreakable" variants.
There are arguments to be made about the secure enclave hardware, since it presumably costs more to make it more tamperproof.
However, securing iPhones against this particular "attack" appears to be a software issue: iOS should never apply updates without an authenticated user approving them first.
^ For the avoidance of doubt, this includes externalized costs.
I'm sorry, I might be wrong here, but I thought that any cryptographic system is breakable, given enough time and resources. If this is true, then, according to your statement, you're never protected. Therefore you can just transmit and store plain data without any cryptography, isn't it the same?
Any watch can be breached by water, given enough time and pressure. Most watches would not survive very long at the bottom of the Marianas Trench. Similarly, most watches would not survive a few centuries in a shallow pool, even if rated for much deeper immersion.
Although no watch can be absolutely waterproof, not even at a given depth, there are levels of risk one can accept. A watch you can use at 100m for several hours a day is effectively waterproof if that's the harshest treatment the watch will receive.
Similarly, although no cryptographic system is absolutely unbreakable^, there are levels of risk one can accept. And, unlike with watches, we can design cryptographic systems which, except in the face of unforeseen mathematical breakthroughs, or bugs (or backdoors) in their implementation, cannot be broken in the next few hundred years even by a nation state-level attacker.
I think it is reasonable to describe a cryptographic system that can't be broken within the lifetime of anyone alive today as "unbreakable".
^ Except maybe one-time-pads, depending upon how "unbreakable" is defined.
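For a sense of scale, a back-of-the-envelope calculation (the guesses-per-second figure is an arbitrary, generous assumption):

    # Even granting an attacker 10**18 key guesses per second, covering half of
    # a 128-bit keyspace takes on the order of 10**12 years.
    guesses_per_second = 10 ** 18                      # assumed, generously
    seconds = (2 ** 128 / 2) / guesses_per_second
    print(seconds / (365.25 * 24 * 3600), "years")     # roughly 5.4e12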
Your comment (and its sibling) substantially agree with what I wrote - there isn't absolutely unbreakable cryptography, only reasonably secure. Therefore the parent doesn't make sense.
Now, is a cryptography that can't be broken by anyone except maybe (that hasn't even happened yet) through a specific court order signed by a judge, reasonably secure? I think it qualifies as such. If you need even more security, I'm sure you can use specialized software to achieve it - I'm not saying you shouldn't be allowed to.
Strictly, it is not the cryptography being broken in this case. The FBI want to guess a (possibly) six-digit pin. The iPhone might have been configured to erase its data on 10 failed PIN attempts, so the current odds are not good. To this end, the FBI want Apple to produce a version of iOS that bypasses this restriction, and install it on the phone.
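To put rough numbers on that (the per-attempt timing is the commonly reported figure for the passcode derivation, not something I can verify):

    pin_space = 10 ** 6                    # all six-digit PINs
    print(10 / pin_space)                  # odds under the 10-try wipe: 1 in 100,000
    per_attempt = 0.08                     # reportedly ~80 ms per passcode derivation
    print(pin_space * per_attempt / 3600, "hours")   # ~22 hours to try them all
                                                     # once the limits are removed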
Assuming I agree that a security system that can be turned off remotely by its vendor is reasonably secure, it is only a specific court order now. If Apple are successfully compelled to produce a version of iOS that bypasses PIN security, it will be much easier for the FBI to request that it be deployed on phones in the future - after all, that version of iOS will already exist then.
If Apple do make it, I am certain there will quickly be a slew of court orders regarding other iDevices that the authorities have in their possession, all of which are likely to be harder to defeat than the court order they would just have failed to defeat.
However, I don't agree that a security system that can be turned off remotely by its vendor is reasonably secure, anyway. There is nothing technically requiring Apple to wait for a court order: the phone will accept their new software whether or not it comes with a court order. Apple could decide to make PIN cracking available to anyone who can prove they own a given iPhone. Given their attitude, they probably won't, but the actual security mechanism is reliant on their goodwill for it to remain unbroken. I don't consider that reasonable.
There's an idea used in crypto commonly called "reasonable security". Anything is possible given a computationally unbounded adversary, but the point of strong crypto is to make cracking it take an "unfeasible amount" of time. Crypto isn't some spectrum like waterproofing is; it's binary: either it's already broken, or it merely "will be broken" someday.
I am not sure why this comment (and all Udik's comments) is being downvoted into oblivion. This is the view of the US government and quite likely a vast majority of citizens here (and, I would guess, in many countries).
This morning I was having a conversation with my fiancee, who said "if the US government gets a warrant they can open your mail, they can tap your phone calls, they can come into your house and search -- why should your phone be some sort of zone they cannot search even with a warrant?"
I happen not to agree but this is not some wacko view.
It might not be the most constructive way of doing things, but people tend to downvote comments they disagree with.
As to why they disagree: HN's audience is not representative of the general citizenry. We're better informed about technical security matters (or we like to think we are, at least). I suspect that correlates with being less willing to trust security to the goodwill of third parties.
Maybe you don't need it, but you'll have fun every day with it. While you'll never be able to enjoy the difference between "almost unbreakable" and "unbreakable".
And what's the cost differential to me as an end user?
And what's the difference to me between 452 ppi and 532 ppi? I'll never be able to enjoy the difference between the two, yet I would still go for the higher ppi, all else being equal.
It's never the case of "all other things being equal". The GP was saying that he switched from Apple to Android - presumably because there was a relevant difference between the two - but he's considering switching back to have a feature that he'll never use.
Of course there is always an appeal in the numbers. I'd go for a 40MP camera instead of a 20MP one - who cares if the quality of the lens is such that there is no difference beyond 10MP. It's marketing. It's curious how people so wary of being observed or exploited make themselves so prone to basic manipulation by entities who want to get their money.
Ah, I'm thinking more of something like WEP vs WPA2 - like, why the heck would I want to downgrade my crypto?
I agree there may be other reasons the user switched, but maybe they switched to Android because they believed it to be more secure? Or maybe the user wants to vote with their wallet for the company they see as most supportive of security/privacy.
I do agree, though, that switching for a feature you are unlikely to use is silly, but I think there are definitely reasons enough to make a switch like that from a 'voting with your wallet' standpoint.
Do you drive around in a 1 litre CC car? Or do you buy a car with a bigger engine?
In both cases, when was the last time you drove it at its maximum speed all the time? Or ensured that you were using maximum torque at all times and always sitting in the maximum power band for the engine?
If you find that you haven't done these things, you probably should ask yourself why you have a car, right? After all, you're never going to drive the full speed of the car, so why have the car in the first place?
A lot of the comments on that article burn me up. People in the U.S. really think there's a terrorism problem here. The only problem is the government spending so much money on a non-issue! Politicians love to "debate" it because they know it is one of those things that looks good to naive citizens, but they really don't have to do anything because there's nothing to be done.
What really burns me is that this strategy is so well known. 1984 was written almost 70 years ago, and yet we have millions of people begging for persistent, unavoidable surveillance by authorities as part of a never-ending war with an ambiguous enemy that our own policies are strengthening.
Referencing 1984 is childish in this context, we're talking about obtaining a warrant for known suspects or already convicted persons. The enemy isn't ambiguous, you're purposely muddying their image.
I believe the GP was making a generality and not talking about just this specific scenario. "Terrorism" is an ambiguous enemy and while the number of deaths to terrorism is disheartening, it pales in comparison to many other problems (e.g. car accidents or heart disease).
Let's not forget that because terrorism is ambiguous, our own government can create mock attacks and blame them on 3rd parties. Furthering their own agendas. Invoking fear and loathing in the citizens.
Indeed. Even Bernie doesn't make this point (or at least, I haven't heard him make it). To stand up and say, "Actually, terrorism isn't a big threat to the US, especially compared to ..." would be political suicide. Why? Because terrorism isn't about any real threat; it's about hurt pride, outrage at being vulnerable, outrage at being hated, and underlying it all a cultural animosity that ranges from dispassionate concern to visceral hatred. Americans are very much doers and they want to "win the war on terror". Which of course is stupid, since terrorism has always been around and will always be around. (And in another twist of irony I am positive that the American Revolutionaries were called terrorists by the British.)
Anyway, a rational politician would have a tremendous uphill battle against both Pride and Ignorance. He or she would have to have tremendous skill as a teacher and a leader, not to mention the emotional fortitude of a Buddha to endure the onslaught of hatred.
> Even Bernie doesn't make this point (or at least, I haven't heard him make it). To stand up and say, "Actually, terrorism isn't a big threat to the US, especially compared to ..." would be political suicide.
Sanders has expressly argued that climate change is a bigger national security threat than terrorism (or anything else) -- and did so in one of the Democratic debates, in response to a question on national security threats. While that may not be directly minimizing terrorism, it certainly is explicitly placing it behind other problems in terms of need for focus.
> (And in another twist of irony I am positive that the American Revolutionaries were called terrorists by the British.)
They absolutely were not; the term "terrorists" was first applied to the leaders of the regime of the Reign of Terror in the French Revolution (shortly after the American Revolution), and it was quite a long time after that before the term was applied to actors other than state leaders applying terror as a weapon to control their subject population.
it's an appeal to emotion and it's actually a bit disgusting to me. I wish my government would stop creating the terrorists that it wants to then fight.
It's important to emphasize something: iCloud will always be "backdoored", by design, and backing up to iCloud is what most users should and will be doing.
The reason iCloud data will always be accessible by Apple, and thus governments, is not because Apple wants to make it accessible to governments. It's so that Apple can offer customers the very important feature of accessing their own data if they forget or otherwise don't have the password. That is an essential feature, and why this aspect will never change.
When someone passes away, for example, it would be a terrible compounding tragedy if all their photos from their whole life passed away along with them, because they didn't tell anyone their password or where they kept the backup key. So Apple wants and needs to provide an alternative way to recover the account. (For example, they will provide access to a deceased person's account if their spouse can obtain a court order proving the death and relationship.)
Harvard recently published a paper (called "Don't Panic") that essentially states the same.[1] Governments shouldn't "panic" because in most cases, consumers will not be exclusively using unbreakable encryption, because it has tradeoffs that aren't always desirable.
And the reason why most consumers should be backing up to iCloud is similar: that's how you prevent the tragedy of losing your data if you lose your phone.
Just something to keep in mind when discussing the "going dark" and "unhackable" news items.
It is worth noting however that people who do "have something to hide" from governments probably won't be using iCloud, if they know what they're doing. Then again if they know what they're doing, they wouldn't use anything that is backdoored anyway. So the naive criminals will still probably be hackable, and that's about all we can hope for.
I actually intend for my private data to die with me. I have gone out of my way to guarantee that it will. In my view, if I haven't published it, then it shouldn't be accessible.
I have absolutely nothing to hide. I have simply always treated my privacy as something that was valuable in and of itself. Perhaps even more valuable than the photos I clearly opted not to share with others, to go off your example.
I also don't understand why it's so absurd for some people to conceptualize non-malicious things you wouldn't want to share with anyone but yourself. Hell, I have tons of notes and things I write to myself that I definitely do not want to be seen by anyone. They simply weren't written with the intention of being read by others. So I don't sympathize with the desire to make a deceased person's private things accessible, even to family. Let it burn. It might have been the owner's intention all along.
Since you're responding to me I'm assuming you mean me, but I have no problem conceptualizing non-malicious things you would want to keep private.
The problem here is that a lot of the stuff stored on phones falls somewhere between "dies with me" private and "should pass on to my family" private. Or "should be recoverable if I lose my key" private.
Strong encryption makes it impossible to recover in the event of a lost key or pass on to family in the event of your death. So that's not necessarily a great default for, say, decades of family photos. It would be a huge tragedy if that was lost.
The good news is that Apple does provide tools to opt-in to stronger security, rather easily. For example, the Notes app was recently upgraded with note-level strong encryption. That might be a good solution for your most private notes, without endangering the survivability of your digital memories and assets.
Yes, it's different for everyone. I didn't mean to accuse you of being one of those people; I was speaking generally there. I'm sorry if my phrasing was bad. My main concern is that people are scared to admit they use the best, because they will be accused of being criminals. The stigma is harmful.
Imagine somebody writing down their dreams, writing their intimate deep thoughts, tracking symptoms of a medication, using drugs recreationally, etc. These things might be very, very private and totally abusable outside of context.
That's what a last will is for: "...and the passphrase for my inheritable private stuff is 12345; it's the file named Blah.xyzzy.foo on my desktop, decryptable using BazBarFoo (installed)."
Most people don't write wills. Their assets shouldn't be lost forever as a result. That would be terrible.
It would be better to opt-in to auto-destruct-when-i-die, not opt-out. It's more of a special case. E.g. create encrypted notes for super secret stuff you want to die with you, but let the default security for photos and documents be "private but recoverable in the event of death or forgotten key."
Not to mention, writing that password down in a will would be pretty bad from a security standpoint while you're alive.
Most people don't stick their wills to their monitors with post-its (there are other secrets in there after all, and many people would like to know those); the legal system has mature tools that are surprisingly good at keeping such secrets secret until the release conditions are met. A will is a Solved Problem, with highly reliable solutions - consider the ways to prove that it is indeed to be opened. Contrast with most computerized solutions and "solutions" thereof, mostly hinging on some form of dead man's switch.
> When someone passes away, for example, it would be a terrible compounding tragedy if all their photos from their whole life passed away along with them, because they didn't tell anyone their password or where they kept the backup key.
Would you really expect Apple to recover the data in this scenario for the next of kin? I certainly wouldn't, and I wouldn't want them to.
I definitely would if it was a regular iCloud Photo Library with decades of family history, pictures of the kids growing up, etc. Imagine all that being lost. It's not what most users would want.
If there was something the deceased person truly wanted hidden from their next of kin, they could use stronger encryption for that. The Notes app, for example, allows for note-level strong encryption. But it's not an ideal fit for the more typical use case.
One of the "knock it out of the park" features of icloud is that it lets you trivially share photostreams (It's the one service that I've found close to flawless) - so all those shared pictures are still available to the family who you shared them with should you pass away.
I certainly don't know if Apple should, without a court order, share any of my data that I haven't explicitly shared with next of kin if I passed away.
I'm wondering though - what happens if I stop paying my $2.99/month for 200 GB - will any of my existing photos be wiped out of shared photostreams?
The problem with requiring explicit sharing is that a lot of people don't realize they need to do it to properly navigate these future events. Just look at how many people fail to write wills. You wouldn't want real-world assets to automatically get destroyed because you failed to write a will, even though there may be some things in there you didn't want to pass down. The "failsafe" mechanism there is, a court figures it out. So that's apparently what Apple is doing.
But keep in mind this is not just about next-of-kin. It's also about the ability for you to recover your life if you forget your password. That is why Apple will always have a "backdoor" into iCloud.
> When someone passes away, for example, it would be a terrible compounding tragedy if all their photos from their whole life passed away along with them, because they didn't tell anyone their password or where they kept the backup key. So Apple wants and needs to provide an alternative way to recover the account. (For example, they will provide access to a deceased person's account if their spouse can obtain a court order proving the death and relationship.)
A pretty interesting point.
Photos are probably good to recover... unless they were photos of something horrible you did (beat up someone, sent photos of your anatomy, etc.).
What about text messages? Again, it could express what kind of person you are. Do all iOS users have unwitting diaries that will be unlocked at our death in the form of our iMessage and SMS history?
In 400 years, will our descendants point out, "Wow, great-great-great-grandma was pretty awful, did you see this text she sent once?" in a way that removes context from the message written at the time? This is something we can't know about our ancestors... and probably for the best, since otherwise we might be disappointed in them.
Over the past few years services have been slowly starting to phase in a way to specify a person who can "inherit" access to your account in the event of your death. I know Facebook has it fully set up now (you can go into your security settings and specify a "Legacy Contact"), and I expect that in the near future it'll be a standard part of any service that intends to operate long enough for this to become a worry.
That is also an interesting point. It certainly comes up often when next-of-kin go through the private physical possessions of the deceased and discover secret love letters, diaries, etc. They probably wanted those to stay private.
But for bank account records, most photos, etc., you probably don't want those to disappear in the event of your death. You want those to pass on.
Given the choice between the two defaults, it makes a lot of sense for Apple to make "accessible to next of kin" the default, and "dies with you" the opt-in.
>The reason iCloud data will always be accessible by Apple, and thus governments, is [...] so that Apple can offer customers the very important feature of accessing their own data if they forget or otherwise don't have the password. That is an essential feature, and why this aspect will never change.
And may the era of homomorphic encryption schemes come, and thus render moot "we need to perform back-end processing/recovery on our clients' data" as a plausible excuse for Apple and other companies to access unencrypted data.
edit: well, to correct myself, as you said that wouldn't obviate the need for the feature of "recover data without password, after passing some other security tests"
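For anyone unfamiliar with the term, here is a toy illustration of what "homomorphic" buys you, using textbook RSA's multiplicative property (tiny numbers, no padding - insecure, for intuition only): the server combines ciphertexts without ever seeing a plaintext.

    p, q, e = 61, 53, 17
    n = p * q
    d = pow(e, -1, (p - 1) * (q - 1))      # private exponent (Python 3.8+)

    def enc(m): return pow(m, e, n)
    def dec(c): return pow(c, d, n)

    a, b = 7, 6
    combined = (enc(a) * enc(b)) % n        # computed "server side", on ciphertexts only
    print(dec(combined))                    # 42 == a * b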
There's an easy way Apple could offer a true cryptographically secure cloud backup service - support local backups with a physical keystore. Make it an advanced option but use it as an excuse to sell users a Time Machine backup unit with RFID built in to read a security key.
Apple could make truly secure systems user friendly if they wanted to. It seems they may see some value in doing so.
Local backups are not cloud backups. But there's already at least one 3rd party cloud backup provider that provides zero-knowledge encrypted backup as an option, with the caveat that the data will be forever lost if you lose the key (or die, etc.) [1]
It's just not an option that your average person would want for their family photos.
I'm not exactly sure what you mean. iCloud data is reportedly encrypted at rest. The issue is that Apple has the decryption key. And they keep it in order to recover you from a lost password event, so that is likely to always be true forever.
Having said that, you can add another layer of your own encryption to certain data that is stored in iCloud, like for example the latest Notes app in iOS 9.3. Apple won't have that key.. but the app warns you the data will be lost if you lose it. You could also encrypt files you store in iCloud Drive using an encryption app. But you wouldn't be able to do this with other data that is managed by iOS like iCloud backups or photo libraries.
I guess you could send pre-encrypted data to iCloud to try to avoid Apple's backdoor... but of course if you do that from an Apple device, there are no guarantees...
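If you did want to add that extra layer yourself for individual files, the shape of it is simple enough. A sketch using the third-party Python `cryptography` package (the passphrase, salt handling, and iteration count are illustrative, not a recommendation):

    import base64, os
    from cryptography.fernet import Fernet
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

    def key_from_passphrase(passphrase: str, salt: bytes) -> bytes:
        kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32,
                         salt=salt, iterations=600_000)
        return base64.urlsafe_b64encode(kdf.derive(passphrase.encode()))

    salt = os.urandom(16)                            # stored next to the ciphertext
    f = Fernet(key_from_passphrase("a passphrase only you know", salt))
    blob = f.encrypt(b"file contents headed for iCloud Drive")
    # Upload salt + blob; the provider only ever sees ciphertext.
    print(f.decrypt(blob))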
This is all just theatre. The real motivation is to control the platform: to ship a piece of hardware that dictates who can install stuff on it, instead of the traditional hardware that lets you completely overwrite everything in it if you have physical access.
Since 197X, people have had home computers (and institutional computers for two decades before that) on which the FBI could install anything they wanted, if that equipment fell into their hands. This fact never made news headlines; it was taken for granted that the computer is basically the digital equivalent of a piece of stationery, written in pencil.
There is nothing wrong with that situation, and on such equipment, you can secure your data just fine.
No machine can be trusted if it fell under someone's physical access. Here is a proof: if I get my hands on your device, I can replace it with a physically identical device which looks exactly like yours, but is actually a man-in-the-middle (MITM). (I can put the fake device's board into your original plastic and glass, so it will have the same scratches, wear, grime pattern and whatever other markings that distinguish the device as yours.) My fake device will collect the credentials which you enter. Those are immediately sent to me and I play them against the real device to get in.
Apple are trying to portray themselves as a champion of security, making clueless users believe that the security of a device rests in the manufacturer's hands. This could all be in collaboration with the FBI, for all we know. Two versions of Big Brother are playing the "good guy/bad guy" routine, so you would trust the good guy, who is basically just one of the faces of the same thing.
> This is all just theatre. The real motivation is to control the platform: to ship a piece of hardware that dictates who can install stuff on it, instead of the traditional hardware that lets you completely overwrite everything in it if you have physical access.
This is already the case. Right now, only firmware signed by Apple can be installed. The next logical step is to build a system where the unit that deals with PINs cannot be updated at all, or at least not without wiping all keys. This would prevent any non-invasive attempts of bypassing the rate-limiting of PIN attempts or auto-wipe.
> There is nothing wrong with that situation, and on such equipment, you can secure your data just fine.
Again, this is also true for an iPhone with a sufficiently complex passphrase, Because Crypto™. Secure Enclave is just an additional layer that protects against everyone not in a position to get custom firmware signed by Apple.
> No machine can be trusted if it fell under someone's physical access. Here is a proof: if I get my hands on your device, I can replace it with a physically identical device which looks exactly like yours, but is actually a man-in-the-middle (MITM). (I can put the fake device's board into your original plastic and glass, so it will have the same scratches, wear, grime pattern and whatever other markings that distinguish the device as yours.) My fake device will collect the credentials which you enter. Those are immediately sent to me and I play them against the real device to get in.
The scenario here isn't an Evil Maid Attack. It's about protecting locked devices while someone else has physical access to them. Right now, you're fairly safe from most attackers in this scenario. In the future, with a read-only Secure Enclave, you're also safe from Apple and anyone who could force Apple to sign firmware. The fact that Evil Maid Attacks are harder to pull off because of this is just a nice extra.
> Apple are trying to portray themselves as a champion of security, making clueless users believe that the security of a device rests in the manufacturer's hands. This could all be in collaboration with the FBI, for all we know. Two versions of Big Brother are playing the "good guy/bad guy" routine, so you would trust the good guy, who is basically just one of the faces of the same thing.
This doesn't make sense. There's no crypto backdoor. The worst case scenario for their current security architecture is that it falls back to how FDE works on a desktop system - i.e., it's completely dependent on your passphrase complexity.
>it, instead of the traditional hardware that lets you completely overwrite everything in it if you have physical access.
How do you plan to flash all the HDD/USB/Network controllers? Not to mention the CPU/GPU microcode, and countless other random chips inside your computer that are executing firmware you have no access to.
We're already hosed. It's just a matter of what's considered a 'reasonable' barrier.
If I have no access to the firmware, but neither does anyone else, then it's just a part of the hardware. That is okay.
I don't care whether a given processor is microcoded via a tiny ROM, or whether it is all hard-wired gates; the difference is just in the instruction execution timings.
We are not "hosed" in any way by this.
As soon as the microcode is writable, then we have questions: can anyone write arbitrary microcode and put it in place? Or is there some tamper-proof layer that only accepts signed microcode, and who holds the keys?
Any aspect of the machine which is data-driven is de facto hardware if that data is fixed in read-only memory.
Consider that an AND gate can just be memory. The two inputs can be treated as a two-bit address: 00, 01, 10, or 11. If we stuff the values 0, 0, 0, 1 into the 1-bit content cells at these addresses, we have an AND gate.
If this memory is ROM, then the overall circuit is not distinguishable from a conventional AND gate where a few transistors do the signaling directly.
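The same observation in code (illustrative):

    # A four-cell read-only lookup table: the two inputs form a 2-bit address,
    # and the stored bit at that address is the output.
    ROM = {0b00: 0, 0b01: 0, 0b10: 0, 0b11: 1}

    def and_gate(a: int, b: int) -> int:
        return ROM[(a << 1) | b]

    assert and_gate(1, 1) == 1 and and_gate(1, 0) == 0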
What is to stop the DOJ from requiring them to produce a phone that has a hardware backdoor? If they are required to produce a software backdoor then building an iphone which is immune to such vulnerabilities seemingly solves that problem but I don't see the leap towards compelling Apple to build vulnerabilities into hardware as a large one.
I'm not well versed in security, so excuse my ignorance, but what if there were a way to solder a chip onto the board that allows access to the secure enclave? Every time an iPhone is made, a companion chip is produced that contains some kind of access key which only works for that device, and someone is required to foot the bill for storing them.
The DoJ doesn't really have the power to do that. They can get a judge to issue a warrant to search an existing device, and the judge can in some circumstances compel other parties to cooperate in that search. But generally any requirement that Apple insert a generalized backdoor into a product will need to come from new legislation.
Actually the FBI's current argument is very close to saying that the All Writs Act has no limits, and can compel literally anything the FBI thinks would "help" them with investigations.
The FBI's argument doesn't come anywhere close to saying that. What the FBI's motion actually says[1] is:
Pursuant to the All Writs Act, the Court has the power, "in aid of a valid warrant, to order a third party to provide nonburdensome technical assistance to law enforcement officers."
The most important limitation here is that nobody, including the FBI, is claiming the All Writs Act grants the court any power at all in the absence of a search warrant. Nobody really disputes the statement above, or the validity of the warrant in question.
Again: if the FBI wants Apple to preemptively insert a generalized backdoor into their products, they'll need to lobby to have new legislation passed. They've tried that and it hasn't gone much of anywhere. In my opinion, let's try to keep it that way.
I'm not a lawyer so obviously I'm not exhaustively well read on the law but in the case that All Writs did allow any action to be demanded to help with an investigation it would still require there to be an investigation in the first place.
To preemptively demand a back door is almost akin to guilty until proven innocent; you're assuming that there will be an investigation in the future where a government's ability to hack a device is required.
This is something Apple practically guaranteed by using platform DRM to turn themselves into a critical single point of failure.
CALEA was extended to ISPs once ISPS consolidated enough; now that Apple has consolidated central control of mobile devices in a similar fashion, it seems quite likely that extending CALEA to cover smart phones will be on the table.
I'd be extremely surprised if Apple's management wasn't very aware of the CALEA precedent, but they chose to go down this road anyway. I find that rather unsettling.
All Writs is for a current investigation. The FBI could theoretically compel Apple to produce a single backdoored phone that would be swapped, via an evil-maid attack, for the original device when they have an ongoing investigation and a warrant.
But unless you can point at - This is Bill, we are targeting Bill, we have a warrant for Bill, we need 1 phone that we will make sure becomes Bill's - All Writs cannot help.
If Bill mail-orders a new iPhone, they can compel the Apple Store to give him a compromised device. They could probably put an FBI team posing as store employees in every store if Bill is a high-value enough target and is expected to buy an iPhone today.
But they cannot say "compromise all SF Bay Area iPhones because we expect one of them to be bought by Bill."
Some of the lawyers here correct me if I am too wrong.
What if another agency already has an NSL in place requiring exactly the same (backdoor, weak crypto params, weak-by-design secure enclave), and they are simply under a gag order not to talk about it?
NSLs can only ask for information, not force a company to build a product. That kind of request would have to come through legislation and apply to all US companies in a similar situation.
"Room 641A is a telecommunication interception facility operated by AT&T for the U.S. National Security Agency"
As long as you have a backdoor, and Apple does, shady government agencies can and do come knocking. We've got plenty of shady government agencies, and can never guarantee that we won't have more in the future.
Yeah, no. NSLs can't do that. At worst they will tell you to release any data that you have along with your private keys. At best they will tell you to make sure you archive everything and don't permanently destroy records in case they are required in the future.
They cannot force you to add backdoors or create weak crypto, although they can indirectly suggest that you do so, and it's then on the company if it complies.
Today, we celebrate the first glorious anniversary of the Information Purification Directives. We have created, for the first time in all history, a garden of pure ideology—where each worker may bloom, secure from
the pests purveying contradictory truths. Our Unification of Thoughts is more powerful a weapon than any fleet or army on earth. We are one people, with one will, one resolve, one cause. Our enemies shall talk themselves to death, and we will bury them with their own confusion. We shall prevail!
They should re-run this commercial for iPhone 7. "On September 24th, Apple Computer will introduce iPhone 7. And you'll see why 2017 won't be like 1984."
@everyone: All this hubbub and no guarantee the phone wasn't already wiped and/or doesn't contain any sensitive information because they didn't use that phone for those purposes.
@Udik: I could just keep my tax documents in printed plaintext on top of my dresser but I opt to keep them locked up. Privacy and security are important. If people who utilize privacy/security tools are up to no good then why does the U.S. Gov't have a clause for not revealing information due to State Secrets? Why do we set our Facebook profiles to private? Why have passwords at all on anything? Are you beginning to see the point?
that's the endgame of government surveillance requests: it's increasingly in a company's best interest to have the best security possible so they can't be compelled to hack their own devices.
Surely it is a company's best interests to have 'good enough' looking security to serve their PR purposes while also secretly providing government access to maximise government kudos and all the benefits that would entail?
For many customers of hardware and software trust is what is being sold.
As trust is eroded 'good enough' is no longer good enough. The only way to continue to be trusted is to be more secure, and as the grandparent points out the endgame there is that the encryption puts the software and hardware beyond the reach of the company that produced it.
You really have to pick one side or the other, unless you're extremely good at keeping a secret and deceiving outside researchers. That's a much higher level of difficulty than simply creating a secure system.
Darn... this, along with the fact that the MacBook Pro my work gave me is so much better than I expected, is making it harder for me not to become a full-on Apple convert.
Apple is far from having a secure phone right now. NSA certainly has ways to bypass this based on my attack framework and their prior work. They just don't want them to be known. They pulled the same stuff in the past where FBI talked about how they couldn't beat iPhones but NSA had them in the leaks & was parallel constructing to FBI. So, the current crop are probably compromised but reserved for targets worth the risk.
That said, modifying the CPU to enable memory and I/O safety, restricting the baseband, an isolation flow for the hardware, and some software changes could make a system where 0-days were rare enough to be worth much more. Oh yeah, they'd also have to remove the debugging crap from their chips and add TEMPEST shielding. Good luck getting either of those two done. ;)
> They pulled the same stuff in the past where FBI talked about how they couldn't beat iPhones but NSA had them in the leaks & was parallel constructing to FBI.
Do you have a link to a leak that shows this? I couldn't find anything with a simple google search.
Could you be more specific? I've followed the NSA leaks with some interest, but not particularly closely, so I'd be really interested in seeing the actual presentation/document/whatever.
For reference I've googled every combination of "nsa apple mobile OS leak" I could think of and couldn't find a primary source.
It helps to type in just what you want and what will specifically contain your answer. "Mobile" will give you garbage most of the time. "Apple" as well. A technical document will usually reference iOS. Also, you can use quotes to ensure something appears.
Interestingly enough, typing what you typed into Google still led me to the same leak, and to others showing potential backdoors. Hmmm.
If you are asked for a source, it doesn't look great to begin your answer with "I googled …". About the linked article: out of date (2013, mentions iOS 4.3.3) and very thin on actual information; 90% of the article is about BlackBerry but insinuates the same risks for iOS. As a German I wouldn't trust Der Spiegel anyway: when it comes to IT issues, my fellow countrymen are often fueled by longstanding anti-Americanism and technophobia :/
He said he couldn't find anything on the topic by Googling. When anyone says that, the first thing I do is Google exactly what they said. Usually that turns up something. Then I point it out, wondering how much research they actually did. If they otherwise seem alright, I might also give tips on how I get decent results out of search engines.
"About the linked article: out of date (2013, mentions iOS 4.3.3). Very thin on actual information. "
It's what I got out of a quick Google. I was unwilling to spend more time on that angle, as my list of risks plus Apple's development practices shows we should consider it untrustworthy by default. I just don't feel like putting too much time into finding the specific evidence that the NSA might hit a specific version of a product that wasn't secure in its entire history - a product, moreover, from a company whose software did things like require an admin login on certain services but not check whether the password matched the records: just the existence of a password in the submission was enough. Better to spend that time researching actual security. ;)
Sorry, maybe I should have been a little more specific. I was looking for a primary source that shows this part: "FBI talked about how they couldn't beat iPhones but NSA had them in the leaks & was parallel constructing to FBI." The link you posted only talks about "hacking" iPhones, but nothing about parallel construction. Additionally, it only talks about how the NSA is making use of the device's (local) backup. As far as I can tell that doesn't imply a hack of the iPhone itself.
Oh, OK. That makes more sense. As I said to the other commenter, I'm not doing enough research to figure out how they're hacking it directly. There are so many published hacks of hardware and firmware that were designed for secure operation that it would be a miracle if they didn't have something on iPhones. It's more of an 'insecure-by-default' call if it doesn't address what's on my list of attack vectors. I know it doesn't, because that wouldn't be compatible with legacy ARM or iOS code without modifications. ;)
As far as parallel construction goes, let me see if I can quickly Google something. Here are a few early ones on the FBI and DEA's stronger cooperation.
Further, they'll actually let a criminal go just to prevent either the public or courts learning the details of defense-related techniques. The Stingrays are a perfect example:
Gotcha. I agree that there are a lot of potential security issues present on the iPhone, and if I was a terrorist (or my name was Edward Snowden) I would definitely act as if they were already hacked by the NSA/FBI/whomever.
However, your original comment made it sound like we had direct confirmation of that fact. I hadn't heard that before, which is why I was interested.
I figured that's what you were wondering. Nah, it would take new leaks for us to know about that. Gets less likely as they put their people under more intense scrutiny.
What I want is a service that deletes all my online presence after I die. A deadswitch. All texts, messages, emails, facebook posts, pictures anywhere, everything.
I want it all to go when I do. Hell, I want some of it to go now.
After I'm gone, I want to leave no part of my existence on the internet.
I realize that's not possible. But I want to minimize my footprint.
It is totally possible for a local device. I have a deadswitch on all my computers. If I don't log in and set an alive flag via the command line in any of my computers for more than a week, that computer securely wipes itself.
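Something along these lines, presumably run periodically from cron - a sketch, with the paths, the interval, and above all the wipe step as placeholders:

    import os, subprocess, time

    ALIVE_FLAG = os.path.expanduser("~/.alive")   # touched by the "I'm alive" command
    MAX_AGE = 7 * 24 * 3600                       # one week, in seconds

    try:
        age = time.time() - os.path.getmtime(ALIVE_FLAG)
    except FileNotFoundError:
        age = float("inf")

    if age > MAX_AGE:
        # Placeholder: a real version would destroy the FDE key material
        # rather than try to overwrite every block.
        subprocess.run(["echo", "would securely wipe this machine now"], check=True)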
Let it be known, I have nothing to hide. I just think this is the best way to do things.
Edit: My reason for this is the frequency with which I encounter people online who are no longer alive. It's a harsh thing to look at a link to something someone said - someone you used to know - and then suddenly realize, "Oh shit. He's dead. And I used to be his best friend."
I know Facebook has memorial pages, but those are difficult to get.
When something you create is public, you no longer have a right to dictate what happens to it. You do not have a right to be forgotten. That would be an attempt at some sort of thought control, and you don't get to tell us that this comment you just wrote can and should be forgotten. If I choose to remember it, against your wishes, there's nothing you can do about it.
Private information is another matter, but when people presume they have rights to choose how others think, it really makes my blood boil.
Since then I've started noticing services rolling out the ability to specify someone to take over your account after you die, and I suspect the legal framework around wills and estates is robust enough that you could leave instructions (and have them enforced) to delete things.
One aspect of what all this comes down to is that governments don't want to have to do real work or even prioritize their tracking and surveillance.
What encryption and security really does is create scarcity of access to information and data in order to force a market solution where government groups have to prioritize their efforts and apply them deliberately.
Good. Congress shall pass no law abridging freedom of speech, and code has been ruled free speech.
The only reason previous wiretapping laws were passed is because they weren't in the limelight and the public never had a chance to weigh in. Let's make this an election issue
Take a step back and fight campaign finance and gerrymandering, so that people are once again represented well in lawmaking. Campaign finance needs to be reformed so that it becomes affordable for representatives to represent their country instead of their sponsors, and gerrymandering so that democracy has the competition it needs to select better politicians.
I'm fine with that. But this issue is being debated now, and if you choose to not take a side, keep in mind that that is still taking a side. You've simply let someone else decide for you.
Nothing is 100% bulletproof, and crypto certainly isn't. It's going from child's play to "you actually need the knowledge" to "this is actually hard now" (but... not impossible).
Nobody would adopt them. It's annoying enough to deal with 4 digits when it's cold and I'm wearing gloves and I just want to change the song I'm listening to.
Passphrases suck enough whenever you have to log back in. Are people really gonna put up with that every time they want to use their phone?
On the other hand, if there were a convenient way to toggle between passphrases and 4-digit unlock (especially if you had to use the passphrase to toggle back to 4-digit), then I would be all for it.
Not really. Have the option to set both a long passphrase and a PIN code.
The long passphrase is used to mount the encrypted file system upon startup, and the PIN code is used merely as a "screen lock", as a casual deterrent from your friends and casual thieves from swiping through your photos before you can catch them.
The file system can be automatically unmounted after a set period of inactivity, or if the user wants to unmount it on demand. After the file system is unmounted, the data is secure again and the long passphrase will once again be needed before anything can be done with the phone.
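Sketched out, the key point of that design is that only the long passphrase ever becomes key material; the PIN only gates the screen and never touches the disk keys (names and parameters here are illustrative):

    import hashlib, hmac

    def volume_key(passphrase: str, salt: bytes) -> bytes:
        # Derives the key that actually mounts the encrypted file system.
        return hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, 600_000)

    def pin_ok(pin: str, salt: bytes, stored_hash: bytes) -> bool:
        # Screen lock only: unlocks the UI while the volume is already mounted.
        candidate = hashlib.pbkdf2_hmac("sha256", pin.encode(), salt, 100_000)
        return hmac.compare_digest(candidate, stored_hash)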
I'd love to have a long passphrase that has to be entered after booting and every 48 hours, and then a 4-digit pin that's usable when TouchID is for when I'm unlocking my phone with my nose.
Exactly. Short passwords/longish pins suffice for short durations if they are random (i.e. not guessable), particularly if the device requires external hardware to brute force due to attempt duration scaling.
I currently use a generated long password on my Android phone and have adapted to the extra work, but having the option to enter a password once a day and a pin or shorter password throughout that day would be a welcome convenience option, and it's not really significantly more onerous than just a pin.
A secure passphrase requirement would kill Apple overnight.
Low-friction UI has been Apple's differentiator for years. If you have to take the time to type out a secure passphrase every time you want to interact with your phone, people will stop interacting with their phones (or use phones that suck less).
Why do you think they built Touch ID? Because typing even a 4-digit PIN is too annoying!
On my wish list is for the iPhone to effectively support multiple concurrent users. You login with your simple PIN for most cases or when the kids are using it, and you can login with a passphrase and are now working on a completely separate partition, with its own apps, settings, Apple ID, etc. I guess they could even support TouchID in this case as long as you enroll different fingers for each.
Does anyone know if it re-encrypts the data after I change my passphrase? In other words, am I immediately more secure if I switch from a 6-digit PIN to a passphrase?
Short answer: it doesn't re-encrypt anything, but yes, you're immediately more secure. Longer answer: there's a key that encrypts the actual data, and that key is stored on disk, but encrypted with your passcode along with a hardware key. The hardware key cannot be read, only used to decrypt. Changing your passcode just re-wraps the key stored on disk; the data-encryption key itself doesn't change, so the switch is quick but security is preserved.
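Roughly, in code, the wrap/re-wrap idea looks like this (a toy Python sketch using the third-party 'cryptography' package, not Apple's implementation; a real device also entangles the derivation with a hardware UID key, which this omits):

    import base64, hashlib, os
    from cryptography.fernet import Fernet   # third-party 'cryptography' package

    def kek_from_passcode(passcode: str, salt: bytes) -> bytes:
        raw = hashlib.pbkdf2_hmac("sha256", passcode.encode(), salt, 200_000)
        return base64.urlsafe_b64encode(raw)

    # One-time setup: a random data key encrypts the actual file contents.
    data_key = Fernet.generate_key()
    salt = os.urandom(16)
    wrapped = Fernet(kek_from_passcode("123456", salt)).encrypt(data_key)
    # Only 'wrapped' (plus the salt) is stored; the bulk data is encrypted
    # under data_key, which never changes.

    # Switching to a passphrase: unwrap with the old code, re-wrap with the new.
    data_key_again = Fernet(kek_from_passcode("123456", salt)).decrypt(wrapped)
    new_salt = os.urandom(16)
    new_kek = kek_from_passcode("correct horse battery staple", new_salt)
    wrapped = Fernet(new_kek).encrypt(data_key_again)
    # Instant, no bulk re-encryption, and from this point the stored blob is
    # only as guessable as the new passphrase.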
Yes, they could just remove the PIN authentication method, and I hope they do. Because then it would leave DoJ to argue for something even more extreme than they are now.
Hmmm... this absolutist attitude from Apple raises a question for me: are we SURE we want phones that absolutely cannot be unlocked when the owner is nowhere to be found, or dead?
It's such a grey area, and I will probably get downvoted for commenting this way. I 100% agree that this power, in the wrong hands, is horrible, but can't we talk about it in a way that allows for some kind of middle ground? All I've been reading are the extremes.
Write your pass code on a piece of paper, put it in an envelope, and staple the envelope to your will and deposit it with your lawyer. Nothing prevents you from telling loved ones your pass code.
Isn't this problem already solved in the non-tech world through a last will and testament or a bank lockbox which contains passwords you want people to have in the event of your death?
I'm not sure it's a perfect solution but might be better than counting on someone to reverse engineer or hack into your phone.
If you're serious about encryption you should always have a backup key somewhere... unless you want a single point of failure (you). Both should be an option.
Of course we do! Then I alone have the power to choose who can access my personal data. What stops you from leaving your private keys and passcodes to your next of kin? Furthermore, if the information on your phone matters to more people than just you (family photos, etc.), it should be backed up somewhere else anyway; you don't keep all your eggs in one basket. The real problem is that cryptography is a very powerful tool and we need to educate people on how to use it properly. Of course, it's naive to hope that can be done overnight, but small, incremental changes are possible, imho. For example, before asking the user to create their master passcode, emphasize in big, bold letters that it is their responsibility to keep that password safe and accessible, because if it's lost, nothing can be done to bypass it. Keeping a backup of the user's key/password (MS-style) is sloppy security.
The original story has changed its title to "Apple Is Said to Be Trying to Make It Harder to Hack iPhones".
I was a bit surprised by the clickbait-y nature of the HN title, but we can see from the nytimes URL that "Apple Is Said to Be Working on an iPhone Even It Can’t Hack" was indeed the original title.
The problem with software is that none of it has been 100% secure yet... I doubt Apple will be able to achieve that in the near future. Someone should send a phone to John McAfee at the very least [1][2] ...
This is why the FBI's argument and that of those who say "they just want balance" is such nonsense.
"Balanced" compared to what? To the 80% insecurity we have now? And "balance" for what protocol? For all existing protocols? For all future protocols? What if hackers learn how to exploit that "balance" in a massive way? Will companies be allowed to fix it by improving the security or will they be "impeding law enforcement"?
It's unbelievable to me how hard the government is fighting against basic security.
Could Apple not push an OS update that compromises everything they are doing to make the iPhone unhackable? As long as the user has to trust Apple, there's always the possibility that the FBI/NSA/whoever forces Apple to update a target's iPhone to enable tracking or recording of whatever information.
It's not an attainable goal in practice. Today they generate a per-device, customized update that can be installed without user intervention. Even if they start requiring user intervention tomorrow, they still retain the capability to push a targeted update to a specific device on a law-enforcement or court order, and the user has no way of telling what the update did.
It's very difficult, especially since they aren't open source. However, they could attain a state where to compromise a device requires the user accepting a malicious update, which would make the FBI's current request moot.
(although there's a whole separate set of legal attacks unexplored)
I think they could do all the stuff that makes it unhackable in hardware, and/or they could make it so updates aren't installed while the phone is locked.
The article doesn't cite a source. It doesn't even say that it is anonymously sourced from someone close to Apple (who presumably is leaking). That makes me wonder whether the real source of this info is Apple-approved, and sort of an indirect way of engaging policymakers. I get the sense that Apple is picking a fight because the DOJ has violated an unwritten agreement: basically, that Apple will provide all the help requested, informally, as long as the DOJ doesn't push for court orders or new laws that tie Apple's hands in constructing its devices and the software that runs on them.
That's normal style in mainstream journalism, weird as it is. If you read a lot of sports journalism for example, you'll see a ton of articles which are literally just a summarized transcript of a phone call a reporter got from an agent, written as if it's just pure factual information that appeared from thin air. At least in those cases it's trivial to guess who the source actually is.
Again, it is objectively very strange to not even hint at what the source of your information is. But it's also standard practice.
Yeah, standard practice maybe. I guess I'm more interested to know whether this story is sourced from Apple (unofficially) or based on a more indirect rumor that's going around...
As much as I <3 Apple, they're still a SPoF just like Lavabit or anyone else with centralized servers that aren't "SWAT-resistant." If iDevices could work without iCloud and usefully communicate with each other directly (sans cell network too), that would be impressive... storage, processing and wireless tech are all getting cheaper... p2p "iCloud" might be within the realm of not-quite-insane.
(Somehow, I feel iMessage and related apps are MITMable because there is no mandatory, mutual, out-of-band validation of a recipient's identity.)
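For what it's worth, the missing step the parent describes is cheap in principle: both parties compare a fingerprint of the public key their client was handed, over a channel the key server doesn't control. A hypothetical sketch (the fingerprint format and key bytes are made up; iMessage exposes no such hook today):

    import hashlib

    def fingerprint(public_key: bytes) -> str:
        # Short, human-comparable digest of whatever key your client was given.
        digest = hashlib.sha256(public_key).hexdigest()
        return " ".join(digest[i:i + 4] for i in range(0, 32, 4))

    # Read this aloud on a call or compare in person; if the two sides see
    # different strings, someone in the middle has substituted a key.
    print(fingerprint(b"...recipient public key bytes..."))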
If Congress does pass such laws, I would love it if Apple considered security so important to its product vision that it would use its cash reserves to restructure the company and move its security engineering to a country that pledges never to force it to compromise on security. Apple is no stranger to keeping internal secrets and keeping concerns isolated. I have no doubt that they could find a way to guarantee security. IMHO, governments are security bugs to be patched.
If this means there are going to be hardware measures in the iPhone itself that prevent repeated passcode attempts, then good. Otherwise, as long as there's that "troubleshooting" system that can update or reinstall the firmware without the passcode, and all the measures against brute-forcing the passcode live in software, it's all talk. There's nothing enlightening in this article.
This marks a very interesting time, in my opinion. We have corporations with more money than governments making (or at least attempting to make) social decisions once reserved for public-sector officials. If Apple is successful here, it will usher in a new era of what a private company can do.
They can have perfect hardware crypto, but they can always send a new OS update to every phone with "if your account id is in top 100 wanted, send a copy of everything to x.y.z". Nobody would ever know (until it's too late, at least)
(of course if the phone is not in use anymore it doesn't apply)
My Nexus 6 running Android 6.0.1 is encrypted and uses hardware backed credential storage.
If the software (Android) had the same kind of protection (entering the wrong PIN 10 times destroys the key), would this device be on par with the iOS approach?
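The policy itself is tiny; the hard part is where it runs. A toy Python model of the counter (hypothetical names throughout; on a real device both the counter and the wipe have to be enforced in hardware or firmware below the OS, otherwise the answer is simply "replace the software that enforces it", which is exactly what the FBI is asking for):

    MAX_ATTEMPTS = 10

    class ToySecureStore:
        """Stand-in for tamper-resistant storage holding the wrapped disk key."""
        def __init__(self, wrapped_key: bytes, pin: str):
            self._wrapped_key = wrapped_key
            self._pin = pin
            self._failures = 0

        def unlock(self, pin: str):
            if self._wrapped_key is None:
                raise RuntimeError("key destroyed; data is gone for good")
            if pin == self._pin:
                self._failures = 0
                return self._wrapped_key
            self._failures += 1
            if self._failures >= MAX_ATTEMPTS:
                self._wrapped_key = None   # the wipe: erase the only copy of the key
            return None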
By saying what exactly? That an unhackable iOS is illegal? That's not currently true and it is the precedent that everyone believes the DOJ would like to set, but that the DOJ keeps denying.
I'm just thinking out loud, but who knows? DOJ has been pretty creative about making this issue more critical than the other times they've tried to unlock iOS devices (because, of course, terrorism)
What if the feds decide that an O/S update closes a zeroday that the NSA was using (note they've been really quiet here) and interferes with an FBI investigation in process?
And yeah, DOJ keeps saying it's just the one device, just this one time. What happens if they suddenly change course just to prevent iOS from getting more secure?
There is currently no law on the books that compels Apple to act here. That is why Comey asked Obama to ask congress for a law. Obama said no and advised using the AWA.
They won't try to pass this law until after this election year. It's too sensitive an issue and will fracture the voter base along unexpected lines, thus giving Trump a chance at winning.
How about you simply encrypt your data store? There's no reason you can't encrypt things in such a way that your operating system does not have direct access to it.
It would be a huge middle finger for Apple to offer this new iPhone as a free upgrade to all current iPhone users. With their cash reserves, it would be a huge PR win.
There are, however, those pesky shareholders to keep happy.
This is one of those moments I wish Jobs was still here.
Had he lost to the DOJ, here is what would (might) have happened:
- he would have gladly unlocked this phone and billed the DOJ for the time spent redesigning iOS
- going forward, he would have labeled each phone's box in red letters: CONTAINS GOVERNMENT-REQUIRED BACKDOOR (I doubt the government could forbid him from doing that)
- he would then have stopped selling devices directly in Apple stores and only allowed them to be ordered in-store, with home delivery direct from an Apple website hosted and operated outside the USA
- all shipping would have been done directly from China, bypassing the US tax system altogether
- shortly after, he would have removed the backdoored iOS from devices not sold directly on US soil
And then Jobs would have found himself serving a long, long prison term after the DOJ decided to come after him full force over something otherwise unrelated or small. You commit a lot of federal offenses just by existing in the USA, or any other country. There is always something they can nail you for.