TL;DR: "they're asking the public to grant them significant new powers that could put all of our communications infrastructure at risk, and to trust them to not misuse these powers. But they're deliberately misleading the public (and the judiciary) to try to gain these powers. This is not how a trustworthy agency operates. We should not be fooled."
The TL;DR should be "iPhone 5C NAND flash can be wiped and restored to default, making brute forcing the PIN a possibility," not this speculative snippet.
Except that all the evidence points to this speculative snippet being correct, and that is the point of the article. So your tl;dr is missing the crucial point of the article: that the FBI is making a power gambit through deception.
By the same measure, Apple is deceiving everyone too.
Apparently, they can't enable the phone's "iCloud backup" because someone changed the iCloud password. Doesn't Apple have old passwords, i.e. can't they restore the old password from a backup? And presuming they can't (why?), can't they simply modify the server side to not check the password for a given account, and just accept any password for backing up?
I have to say, whoever at the FBI decided this was the right case to push their new doctrine, could have done his/her homework a bit better. Technically speaking, this is the last iPhone you can actually crack without assistance from Apple. They are making it harder for themselves. They only have to wait for another major incident, retrieve (or plant, why not) an iPhone 6 from the scene, and do it again, this time for real.
Unless they are trying to pre-empt something else (like the recently-touted shift to "devices even we can't access" from Tim Cook, which may or may not be simple advertising), they just picked the wrong time to stir this particular pot.
I suspect the main driver in picking this case had nothing to do with the technical details and everything to do with the emotional aspects. The general public won't follow the technical bits and the emotional appeal of the case (terrorists! scary!) has clearly been a boon.
This case isn't just about Apple though. If the FBI wins this case, it becomes a precedent that can then be turned around and used on ANY hardware or software company in the future.
> this is the last iPhone you can actually crack without assistance from Apple.
That's being pretty liberal with words. It's more difficult to crack more recent versions, but saying it's impossible is putting too much faith in the creators of the device and too little in determined people who would want in.
Heck, the FBI could also disable writes to the chip, or simply interpose some logic that pretends to write, but actually doesn't (a non-write-through cache :-) ).
That is, if the secrets in question are on that NAND chip.
Heck, there are probably 20 companies with a Silicon Valley zipcode that could do that work, then. And a few in LA. I know people who could do this in their garage (and that's not hyperbole).
That would rather defeat the point of this whole fake political-judicial spin effort. The terrorists already destroyed their personal phones that had actual terror secrets. These work phones, which could have been confiscated by San Bernardino County at any time, are vanishingly unlikely to yield months-old information of any sort.
They already retrieved a ton of info from a month or so old iCloud backup. The FBI has a good idea what might be on the phone, but they aren't saying what they already found.
If someone with the necessary equipment were to document the process of attaching the write-blocker to an iPhone of the same make and model, and successfully logging in on the 11th try, it could go a huge way towards illustrating to the general public what this case is really about.
Might not be bad exposure for a security startup, either.
They aren't, the key is burned into the chip. You could decap it and try and get at it that way, but that also runs the risk of destroying it and losing all hope of data recovery.
No, all iOS devices have hardware keys. Newer devices have the Secure Enclave which is a separate chip that stores hardware ID and does the actual encrypting/decrypting.
Sure, but if all of the state is on the NAND flash, you can nearly trivially bypass all the retry restrictions with a hardware-level snapshot. There's nothing on the SOC that's persistent. Boom, done, and the FBI is exposed as a pack of idjits. (Of course, this is not about the single phone, and never has been. But let's continue that fiction for the sake of argument).
If you can copy the contents of the NAND memory to one chip for testing, then you can copy it to a hundred chips and parallelize the process, assuming I haven't misunderstood the hardware issues at hand (not my specialty, to be fair).
The exponential backoff of attempts is not really an issue in that case.
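A rough sketch of the parallel search being proposed here, purely to illustrate the idea; as later replies point out, the per-device UID key folded into the key derivation means the check can't actually run off-device, so check_pin_on_copy below is a hypothetical stand-in, not something you could implement against real hardware.

```python
# Sketch: split the 4-digit PIN space across N cloned NAND images and test
# each shard in parallel. Only valid if the passcode check depends solely on
# data in the NAND image, which later comments explain it does not.
from concurrent.futures import ThreadPoolExecutor

SIMULATED_TRUE_PIN = "4821"  # placeholder so the sketch runs end to end

def check_pin_on_copy(copy_id: int, pin: str) -> bool:
    # Stand-in for "try this PIN against NAND copy #copy_id".
    return pin == SIMULATED_TRUE_PIN

def search_shard(copy_id: int, pins):
    for pin in pins:
        if check_pin_on_copy(copy_id, pin):
            return pin
    return None

def parallel_pin_search(num_copies: int = 100):
    pins = [f"{i:04d}" for i in range(10_000)]
    shard = len(pins) // num_copies
    with ThreadPoolExecutor(max_workers=num_copies) as pool:
        futures = [pool.submit(search_shard, c, pins[c * shard:(c + 1) * shard])
                   for c in range(num_copies)]
        for f in futures:
            if f.result() is not None:
                return f.result()
    return None

print(parallel_pin_search())
```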
Interestingly, this is the EXACT advice [1] given at the Apple + FBI @ U.S. Congressional Hearing on March 1st from California congressman Darrell Issa (R).
Congressman Issa was previously the CEO of DEI, a car security and audio equipment company. He is possibly one of the most tech-savvy members of U.S. Congress and happens to be one of the wealthiest as well [2].
The Congressional hearing video footage is here, the suggestion was proposed at 1h23m 13s in.[3]
I noticed the same thing. I wonder whether James Comey's aw-shucks answer (roughly, "I have no idea, I'm just a regular Joe, I don't know much about the tech here") was feigned.
Are you sure it's unreadable? I'd be so surprised if they couldn't sniff that out.
If it is, how do they do that? I can't imagine it's somehow embedded in circuitry (too complicated to mass produce) so it must be on some kind of storage medium, right? What makes that unreadable?
> Every iOS device has a dedicated AES 256 crypto engine built into the DMA path between the flash storage and main system memory, making file encryption highly efficient.
> The device’s unique ID (UID) and a device group ID (GID) are AES 256-bit keys fused (UID) or compiled (GID) into the application processor and Secure Enclave during manufacturing. No software or firmware can read them directly; they can see only the results of encryption or decryption operations performed by dedicated AES engines implemented in silicon using the UID or GID as a key.
> Additionally, the Secure Enclave’s UID and GID can only be used by the AES engine dedicated to the Secure Enclave. The UIDs are unique to each device and are not recorded by Apple or any of its suppliers.
No, as Apple states, every iOS device has a hardware key.
> Every iOS device has a dedicated AES 256 crypto engine built into the DMA path between the flash storage and main system memory, making file encryption highly efficient.
In newer phones it is in the Secure Enclave instead of the CPU (the SE handles all encryption/decryption for the CPU).
There's no known API to read the UID out (there is one to use it, or the GID, or any other AES key: http://iphonedevwiki.net/index.php/IOCryptoAcceleratorFamily...), no published side-channel attacks, and the destructive/invasive techniques that would involve decapping the chip and using an electron microscope are not solid enough to use for this, such that there would be a huge risk of destroying evidence.
Embedding it in the circuitry isn't that complicated - remember, it's only 256 bits you have to embed, which for example could be stored in 256 fusible links.
I guess it would depend, like brk mentioned, on whether or not the passcode is stored in NAND and only validated against user input. I'm out of my depth here, so I'll gladly defer to the experts.
The passcode is run through a key generation function, PBKDF2, to turn it into an AES256 key, which is then used to unwrap filesystem keys that protect any files marked as only available "when unlocked" or "after first unlock". The PBKDF2 process involves a special UID key which is unique to the device and inaccessible to software. (Software can only perform operations with the key.)
If you're not familiar with PBKDF2, it is similar in function to bcrypt or scrypt - it turns a password into a key and is designed to take a long time to prevent brute force attacks. Tying in the UID key prevents the attacker from brute forcing on a faster machine (or machines).
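If it helps to see the shape of that, here's a minimal sketch; the iteration count, the way the UID is mixed in, and the key-wrapping step are all simplifications, not Apple's actual algorithm.

```python
# Rough sketch of passcode -> key derivation, NOT Apple's implementation.
# The real KDF is tangled with the hardware UID key inside the AES engine;
# here the UID is simulated as extra key material to show why the derivation
# can't be reproduced off-device.
import hashlib
import os

DEVICE_UID = os.urandom(32)      # stand-in for the fused, unreadable 256-bit UID
ITERATIONS = 80_000              # tuned on real devices so one guess takes ~80 ms

def passcode_to_key(passcode: str, salt: bytes) -> bytes:
    # Mixing the passcode with the device UID means the same passcode yields a
    # different key on every device, so brute force must run on-device.
    material = passcode.encode() + DEVICE_UID
    return hashlib.pbkdf2_hmac("sha256", material, salt, ITERATIONS, dklen=32)

salt = os.urandom(16)
print(passcode_to_key("1234", salt).hex())
```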
The wrapped keys I mentioned are stored in what Apple calls "effaceable storage", specially designated non-volatile memory that actually erases rather than just being marked as free. I have no idea if it's stored on the NAND chip on the iPhone 5c or not. (Apparently there was a previous attack that involved making the chip read-only, so Apple may have moved the effaceable storage to mitigate it.)
If you're interested in details, this is a good read, lots of interesting ideas in there:
This was my thought too after reading this article.
Part of this relies on the specific iPhone 5c from the shooter, because of the per-device hardware key. They ultimately need to unlock that specific phone, with the NAND data intact, in order to read the contents.
But, if the passcode is stored in NAND and validated only against user input they could duplicate the NAND and parallelize the process. If any part of the user code check involves the hardware key, then it wouldn't work.
In the article it is specifically mentioned that multiple keys are used for the encryption. There is a secret key burned into the A6 processor that is hard to access without the risk of destroying it. This key cannot be destroyed programmatically, so a secondary key on the NAND is destroyed in case of too many attempts. Only that process could be circumvented with this technique; it doesn't address the increasing interval, and AFAIK it is not possible to parallelize the attack without somehow multiplying the A6 chip - i.e. very hard.
Or you could just extract the UID (though others are claiming that's not possible; I'm skeptical but don't know) and throw the iPhone in the garbage as they use a distributed supercomputing cluster to decrypt it.
Even after the 'delete everything' ten attempts? In any case, you can just reboot the phone, and if it does write the number of missed attempts to NAND, you were going to revert that anyway.
Apple said that could sync the data if the AppleID password wasn't changed. Can Apple just revert the AppleID account on their servers to a backup with the old password hash (or however it is stored)? Why wouldn't this work? Has something on the phone changed because of the password change or is Apple unwilling or unable to revert the AppleID account?
Can Apple just revert the AppleID account on their servers to a backup with the old password hash?
If it were just a password hash and Apple still had that hash, then probably so. Is that actually how this information is stored? ISTM it could be something more complex. The system was not designed to support the FBI's intended use, so the fact that Apple could have logged and stored all sorts of things doesn't indicate that they did.
After I changed my iCloud email/password my phone tried to use the old one for months, and I had to escalate the issue high up Apple Support to get it fixed (you can't erase your iPhone without proper iCloud credentials if findmyiphone is enabled). That also involved my email, so it was slightly different.
Could have, and perhaps should have, but I haven't seen anyone claim that it actually does delete the stored password after a failure. And it would be awkward if the automatic backup stopped functioning and required manual intervention after each intermittent network failure. It should be fairly easy for someone with an iPhone to test and find out.
Possible, but still awkward. If the network failure is between the server and the password database, it would still need to distinguish between "password permanently incorrect" and "password temporarily incorrect".
But my stronger point was that there is no need to speculate, since someone with an iPhone can verify what actually happens. Set up backups, change the password, verify that it fails, change the password back, and report what happens.
If the HTTP status code is 5xx, keep the password. If the HTTP status code is 4xx, delete the password.
Obviously making some assumptions (like HTTP, or a 5xx on network fail between internal services), but telling the difference between "user supplied bad data" and "the server messed up" really isn't that awkward at all.
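As a toy sketch of that decision, assuming an HTTP-style backup service (nothing here is based on Apple's actual client code):

```python
# Toy decision logic for "did auth fail, or did the server/network fail?"
def password_to_keep(status_code, stored_password):
    if status_code is None:             # no response at all: network blip, keep it
        return stored_password
    if 500 <= status_code < 600:        # server-side failure: keep it, retry later
        return stored_password
    if status_code in (401, 403):       # credentials rejected: drop them, ask the user
        return None
    return stored_password              # anything else: don't throw the password away

assert password_to_keep(503, "hunter2") == "hunter2"
assert password_to_keep(401, "hunter2") is None
```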
It seems like there are several articles and security experts out there explaining how to recover data from a locked iPhone as if it were a cakewalk but where is one example of a complete soup-to-nuts case study on unlocking the same model phone as the San Bernardino shooter?
If you want the American public to believe the FBI is making fraudulent claims, show demonstrable proof that it can actually be done instead of all the talk and theories.
The device is evidence, so all of you saying they can just start desoldering things and such need to think about that. What is the first thing a defense attorney would say if the data were to be used in a criminal trial? That's right, "the FBI replaced the memory chip on the phone with one they wrote their own copy of the data to." That is only after they potentially permanently damage the device and data.
Can't the same be said for when law enforcement takes images of hard drives? That happens all the time. There are procedures for maintaining chain of custody that are required to verify these things.
Hard drives have a universal interface for reading out their contents without modification. Most jurors would understand that. Many jurors would equate drastically modifying a device as tampering. Chain of custody doesn't matter. What a juror thinks is what matters.
I wouldn't expect there's much of a difference in how laymen (jurors) understand reading out and using an image of a hard drive and reading out and using an image of a flash device.
First of all, for most of them, it's black magic. The few who know a bit more will be convinced by arguments from analogy (after all, nowadays many hard drives are in fact flash devices, and the details of the interface protocol are beyond what jurors will care about).
And yet, there's no issue with "We'll let apple take the device, flash a custom OS onto it, and allow us to make attempts remotely while not having physical access to the device".
That's what they're asking for. Part of the FBI's argument is that they need the information for safety of our country, not specifically for the trial.
The FBI specifically asked for a version of iOS that would be loaded into memory and not modify any of the information on the flash. One of Apple's arguments about why the request is an undue burden is that iOS doesn't normally work like that and it would take a lot of effort to make it work like that.
The FBI said it would be acceptable for Apple to retain possession of the phone while it was running the customized version of iOS out of RAM in order to prevent the custom iOS from falling into their hands.
> The FBI said it would be acceptable for Apple to retain possession of the phone...
However... in that case the FBI will have remote access to the phone in question to run whatever software tools against it they require. (This requirement is in the order. :) )
Given that "prevent iOS from reading the ROM used to boot the iDevice" probably isn't a threat that Apple considered to be a serious one, it's entirely possible that the FBI (or an agent of another TLA embedded within the FBI) could use this remote access to also gain access to Apple's (signed!) PIN entry delay and self-destruct removal modifications.
If this happens, and there's a way to bypass whatever mechanism Apple used in the modified image to make it run only on that single iPhone, then Apple has just unwittingly (and unwillingly) handed a backdoor to any iPhone of that model to FedGov (along with any other governments that have clandestine access to the systems of the TLAs in question).
The owner is dead, so the phone data is not needed for a trial against the owner.
It might be evidence against "co-conspirators" - sadly, we have a vindictive, lawless government now that likes to go after former roommates and acquaintances.
What criminal trial? The criminals are dead. Yeah, I know, James Comey said that maybe there's another criminal; spare me, I ain't buying it.
Also, you hire an expert to testify as to the technical details. It isn't perfect, but it is what it is; and anyway, the first thing a defense attorney will say/do in the case that the FBI prefers is to cast doubt upon the integrity of anything the FBI finds after all of the screwing around with it that's been done.
I think this makes the FBI look dumb, but I don't think this really helps them either.
If the NSA did this for espionage it's one thing, but I'm curious as to whether substantially modifying the iPhone in this way would stand up in court.... How would the police assert that they preserved evidence after doing this?
I was involved in a drawn-out case that challenged the validity of data recovered from backup at great length. That was easy to assert with normal IT people, and yet it took weeks to litigate. I couldn't imagine how this would go.
I wonder if the FBI has checked for any ways to circumvent the passcode screen using software bugs.
Edit: Not sure why I got downvoted. I can currently circumvent my lock screen passcode with a number of steps, and I'm on iOS 9. Steps to try for yourself:
Edit: Ok I've been tricked. The steps below are unnecessary as the first step actually unlocks your iPhone in the background. ¯\_(ツ)_/¯ The fact remains though that these bugs have existed in the past and may exist on the device the FBI wants to unlock.
1. Invoke Siri, "what time is it?"
2. Press the time/clock that is shown
3. Tap the + icon.
4. Type some arbitrarily long string into the search box. Highlight that text and copy it.
5. Tap on the search box. There should be a share option if your device is capable. Tap the share option.
Interestingly, this works on my 6s when I press the home button with a finger that is registered for TouchID. I can't add a new clock when I use a different finger. That suggests that on my phone, at least, I'm really just silently unlocking the phone with Touch ID at some point in following these instructions--probably when I press the home button.
TouchID is so fast on the 6s, it's easy to unlock it without realizing you're doing it.
Try accessing Siri with something that isn't registered with touchID (like a friend's finger) and this doesn't work. I think the steps have nothing to do with it, and it has more to do with you unlocking the phone with touchID when you summon Siri.
There's this neat trick that can get you access into all of your bank accounts and credit cards. You can even get access if you forget your username and password.
Data is still encrypted in that state. The passcode screen is not what is preventing access. That is how it works in Android (where a lockscreen is just an app that is locked into the foreground), but not iOS. Bypassing the iOS passcode entry screen gets you to a weird limbo-like state where a lot of things don't work. I don't recall what that state is called.
The default protection class is "Protected Until First User Authentication". This means that unless an app says something more specific, the key required to read a file is not available between reboot and the first time a phone is unlocked.
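A toy model of how those protection classes gate key availability; the class names follow Apple's documented terminology and the rules are paraphrased from the iOS Security Guide, but the code itself is just an illustration.

```python
# Illustrative model of iOS Data Protection classes vs. device state.
from enum import Enum

class DeviceState(Enum):
    BEFORE_FIRST_UNLOCK = 1   # rebooted, passcode never entered
    LOCKED = 2                # unlocked at least once since boot, currently locked
    UNLOCKED = 3

def key_available(protection_class: str, state: DeviceState) -> bool:
    if protection_class == "Complete":                      # only while unlocked
        return state is DeviceState.UNLOCKED
    if protection_class == "UntilFirstUserAuthentication":  # the default class
        return state is not DeviceState.BEFORE_FIRST_UNLOCK
    if protection_class == "None":                          # always available
        return True
    raise ValueError(protection_class)

assert not key_available("UntilFirstUserAuthentication", DeviceState.BEFORE_FIRST_UNLOCK)
assert key_available("UntilFirstUserAuthentication", DeviceState.LOCKED)
```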
FDE on Android still leaves a lot to be desired. It hardly stands up to Apple's implementation. The only time that any data is encrypted is when the device is off or a volume is unmounted.
This reminds me of the Republican congressman from Cali, Issa, telling the FBI in very technical terms (inserting in between that he could be completely wrong) the exact same thing mentioned in this article. I'm unsure if the author was inspired by congressman Issa or if he came to it of his own accord.
Moreover, what's more fascinating is that some people may say it's privacy vs. security and the fight against terror. But what has emerged from the last few weeks is multiple reasons why the FBI should not win in court, regardless of your perspective on terror. It's been very clear from day 1 that the intentions of the FBI are vicious and disingenuous, and with every passing day more people are finding out.
If the FBI's technical teams can't figure this out they should all be fired and it's all the more reason we shouldn't trust them with back doors into anything.
Seems like a pretty articulate explanation of what is going on here. Of course I realize that my confirmation bias will cause me to see articles more in line with my way of thinking as 'right', but I've also worked with NAND flash devices and believe that the chip[1] they use in the phone does not have any sort of protections on the NAND flash itself; you should be able to just drop it into a test fixture and read it out.
Anyone else a little surprised that apples security feature here is so easy to sidestep? I'd have thought, in the least, that any such keys were stored in the main processor without external read/write capabilities.
FWIW, it's not easy to bypass on devices with TouchID. The Secure Enclave manages the process there, and it's designed to be resistant to tampering so you can't use this trick with it.
It looks likely that the Secure Enclave has no storage of its own, so this same attack would still apply.
What would make you secure here is having a password sufficiently long and complex to make brute forcing infeasible. Touch ID facilitates that by making it practical to use the device even with a strong password set. But if you have a four or six-digit passcode set on such a device, then this technique should still work fine.
I haven't done much research on this, but I thought the secure enclave actually did contain the key in question? I know the secure enclave enforces the 10 tries rule (and the timeout between tries), which suggests that the secure enclave is responsible for destroying the key when the rule is violated.
As far as I know there's no official word on this. I say that it probably has no storage based on a couple of things:
First, the iOS security guide makes no mention of any storage besides the flash. When it discusses key erasure, it's always about Effaceable Storage, which according to the guide is a dedicated area of NAND flash.
Second, Apple says that the attack the FBI wants them to carry out could be made against all iPhones. If the Secure Enclave had its own storage, it would be trivial to have it wipe the device if the SE software was updated without a passcode, which would defeat the requested attack. It's possible that the SE has its own storage and Apple just didn't use it this way, of course. Or whoever said this on behalf of Apple could just be wrong that the more recent phones are vulnerable.
The SE does enforce the escalating timeouts, and probably the wipe after ten tries, but that wipe could just be the SE telling the Effaceable Storage to erase the key.
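A minimal sketch of that enforcement logic; the delay schedule follows the one Apple documents for passcode retries, but the structure here is guesswork, not the Secure Enclave's actual firmware.

```python
# Toy model of escalating retry delays + wipe-after-10.
# Delay schedule per Apple's iOS Security Guide; everything else is illustrative.
DELAY_AFTER_ATTEMPT = {5: 60, 6: 5 * 60, 7: 15 * 60, 8: 15 * 60, 9: 60 * 60}

def handle_failed_attempt(failed_count: int, wipe_enabled: bool) -> str:
    if wipe_enabled and failed_count >= 10:
        return "erase file-system key in Effaceable Storage"   # data now unrecoverable
    delay = DELAY_AFTER_ATTEMPT.get(failed_count, 0)
    return f"lock out further attempts for {delay} seconds"

for n in range(1, 11):
    print(n, handle_failed_attempt(n, wipe_enabled=True))
```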
The SE doesn't wipe when it's updated without a passcode because that would completely remove the ability to fix devices that get stuck at the "Connect to iTunes" screen without wiping them. And a non-trivial number of devices end up in that state every time Apple releases an OS update, so maintaining the ability to successfully recover from this state without wiping the device is important. They can't just add a passcode entry to that screen because that screen is rendered by iBoot, which has no idea how to handle touch input or any of the other prerequisites for doing passcode entry (it can barely even draw that "Connect to iTunes" screen). And they can't add the passcode entry to iTunes without making it possible to send passcodes over the wire, which is exactly what the FBI has asked Apple to implement (well, that and disabling the wipe-after-10-failures bit).
That said, you can bet that Apple is working on a fix for this for future devices. Some way for the Secure Enclave to know that the update was authorized by the user even if the device was subsequently bricked by the update and then recovered. With the ability to maintain proof of authorization even after the device is bricked, they can then make the SE wipe itself if it's updated without authorization.
Sending passcodes over the wire would be fine. That parenthetical you threw in there is the feature that actually adds security. An inability to submit passcodes over the wire adds essentially no security. It's the escalating timeout and eventual wipe that prevents brute force.
And fixing a stuck device like that could be done by just not upgrading the Secure Enclave's software during the recovery process. Restore the main OS and you're back where you were. Then you can update the SE firmware.
I'm pretty sure that if the SE doesn't wipe when updated without a passcode, it's because there's no way for it to do so. It just loads the firmware it's presented at boot, no questions asked beyond verifying the signature on that firmware.
No doubt Apple is working on improving this. You'd only need 256 bits of nonvolatile storage within the SE, and corresponding improvements to the SE's bootloader.
You can't just restore the old OS, because it bricked during OS upgrade and you can't downgrade. So it has to restore the new OS, which means if the OS update includes an update to the SE, then the SE needs to be updated with it.
As for sending passcodes over the wire, Apple really doesn't want to have this because it makes it a lot harder to reject requests like the one the FBI is making. Disabling the wipe-after-10-passcodes is just turning off a bit of code, that's not an "undue burden", but implementing the ability to send passcodes over the wire is a non-trivial engineering effort and becomes an "undue burden" when the FBI requests that it be added.
I'm sure Apple could figure out a way to allow repairing this problem while still hardening the Secure Enclave against this attack. They just haven't, because it seems that their threat model didn't previously include themselves as a potential attacker. I'm sure that's changing now.
Submitting passcodes electronically doesn't really make that much of a difference. It takes at least 80ms to try a passcode that way. Require touchscreen input and you've bumped that up to, what, a second or two? Without the escalating delays and potential wipe, a (very bored) person could crack a four-digit passcode by hand in a day.
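Rough numbers behind that, under the stated assumptions (80 ms per electronic attempt, a couple of seconds per manual one, delays and wipe disabled):

```python
# Back-of-the-envelope worst-case brute-force times for a 4-digit passcode.
combos = 10 ** 4

electronic = combos * 0.080                        # ~80 ms per over-the-wire attempt
print(f"electronic: {electronic / 60:.0f} minutes")        # ~13 minutes

by_hand = combos * 2                               # ~2 s per attempt on the touchscreen
print(f"by hand:    {by_hand / 3600:.1f} hours")           # ~5.6 hours, well within a day
```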
Also, did they remove this functionality? The "IP Box" brute forcer submits passcodes over USB. Apple may have removed that when they patched the vulnerability that allowed bypassing the escalating delays, of course, but it did exist.
I don't know why the FBI requested this ability, but it's presumably just something like, while we're here we might as well make things a little easier.
No, I'm not that surprised. I think this feature is intended to protect your personal data from the casual iPhone thief or to minimize risk if you lose/forget your iPhone somewhere. I don't think it was intended to be a secure-from-governments kind of features.
Guess again - it is! It's a work in progress, and on the 6S it has improved a lot with the secure enclave in its own chip. It seems it will get even better, with encryption in the icloud as well.
The SE uses a different key for each app, so even if you can decrypt one key, you only get data from one app.
The feature exists to protect against less sophisticated attacks. It's not necessary, as setting a long alpha-numeric passphrase will mitigate brute force attacks.
From the sound of various blogs, articles, etc., it sounds like the FBI doesn't have anyone with technical expertise in this area (or if they do, those people are being kept buried). While the court case is important to the FBI (and very wrong to the public), the technical details of breaking into an iPhone should not have been an issue for them.
I'm starting to think no one is driving the clown car in their technical division.
Well, according to the congressional hearing (posted in another comment thread on this post), the FBI has "engaged all parts of the US government" to try and find a solution and came up with none. Which is funny, because the congressman who was questioning Comey proposed essentially the same thing as this article.
> If it turns out that the auto-erase feature is on, and the Effaceable Storage gets erased, they can remove the chip, copy the original information back in, and replace it.
Sounds like a better hack would be to interpose the flash memory interface with a RAM cache that simulates writes without modifying the original flash data. Then they can hammer away at brute forcing it without the delay of reburning the flash.
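A conceptual sketch of that interposer: reads fall through to the pristine flash image, writes are absorbed by a RAM overlay, so the original data never changes. Purely illustrative, not a hardware design.

```python
# Write-absorbing overlay over a captured flash image.
class WriteAbsorbingFlash:
    def __init__(self, original_image: bytes, page_size: int = 4096):
        self.original = original_image
        self.page_size = page_size
        self.overlay = {}                      # page number -> overwritten page data

    def read_page(self, page: int) -> bytes:
        if page in self.overlay:
            return self.overlay[page]
        start = page * self.page_size
        return self.original[start:start + self.page_size]

    def write_page(self, page: int, data: bytes) -> None:
        self.overlay[page] = data              # pretend to write; original untouched

    def reset(self) -> None:
        self.overlay.clear()                   # instantly "restore" the original state
```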
The ACLU is not wrong, they are right in the technical sense.
But I very much doubt you would practically manage to remove that NAND chip and replace it very often on that umpteen layer ultra thin board. Instead, remove it once and stick it in a test fixture, then try brute forcing it.
"Technical correct... the very best kind of correct!"
The bigger question here is: do you really want law enforcement to hack into things as standard procedure? They certainly don't. It's difficult, expensive, slow, and worst, unbound by law. It's a world where your privacy is exposed based on the federal hacking budget rather than a judge's opinion about your potential criminality.
It's much better for law enforcement to be constrained by law than technical ability.
Sorry about the slight OT, but what truth is there in this statement I was presented with?
>Even if an iPhone is locked, all of that encrypted data can technically be read easily so long as the phone had at least been unlocked once since the time it was booted up.
Obviously I think it's nonsense, but I have no way of disproving it (even though the burden of proof is on the claimer, naturally).
Maybe there exist experts who can get this right every time, but there are significant risks of damaging a chip when desoldering and resoldering. It's not just removing a through-hole capacitor.
Disclaimer: I am no electrical engineer and have had limited experience soldering components on circuit boards. But from all the tutorials I've been through, I agree. There are risks of damaging the NAND chip while desoldering and/or re-soldering chips on the board. Since this is the only piece of hardware with the information, perhaps the Feds aren't going to risk destroying the evidence altogether; after all, they already made the mistake of changing the iCloud password.
Yet the FBI's official plan is to get Apple to assign a few developers to put together a custom version of iOS that they trust will overcome all risk of erasing the device.
Is there a reason we should have confidence in software engineers rather than electrical engineers?
SW can test it on another phone till they get it right. EEs can only practice on another phone; if they screw up on the actual phone, it's a goner.
This.
The chip needs to be desoldered, cleaned (remove a layer of epoxy resin) and then finally reballed (secure a 0.3mm ball of solder on each of the > 64 connectors) just to get it onto the chip reader.
Once you've done that you need to reverse the process.
I've done a few of these and they still scare me.
It's not a trivial process.
Never attribute to malice that which can be attributed to stupidity. Some engineer probably told upper management they couldn't decrypt the phone because the software would erase all data. Maybe because they didn't know, or didn't want to, but still, this has been blown out of proportion.
To be clear I don't think apple should compromise the phone, just that this is not a long con by the FBI to compromise all phones.
There needs to be a corollary considered here, possibly something like "never grant the stupidity excuse to those who have previously demonstrated their malice."
The most frustrating part of this whole thing is the multi-headed response by various agency chieftains. The FBI says one thing. The NSA says another. Former generals say another.
Am I crazy to want the president to step up and say: "our position as a government is: x"? There's no way this has escaped his notice. Isn't that part of the job description of "leader of the free world"?
The executive branch is a big organization. Each agency has a different mission, culture and authorities (legal powers). The cryptowars are a complex issue, so it makes sense that the agencies have different takes on it.
I agree that the government should have a unified view on all this. However, it should come from CONGRESS, not the President. This is clearly an area where our democracy needs to make a decision about how we govern ourselves, then enshrine that decision in law, then follow that law.
What strikes me as odd in all these analyses is that they all assume the FBI is not expecting that weakened security will mean far more crime that is difficult to address -- i.e. far more on their plate.
I don't know why this case is getting so much attention when it's readily apparent the FBI could just get everything off the phone with a Cellebrite and call it a day.
> If it turns out that the auto-erase feature is on, and the Effaceable Storage gets erased, they can remove the chip, copy the original information back in, and replace it. If they plan to do this many times, they can attach a “test socket” to the circuit board that makes it easy and fast to do this kind of chip swapping.
> If the FBI doesn't have the equipment or expertise to do this, they can hire any one of dozens of data recovery firms that specialize in information extraction from digital devices.
Presumably you'd only have to remove it once, then install it in an easily-controlled circuit that would selectively connect it back to the rest of the phone?
For the passcode, the iPhone supports a 4-digit numeric code (10k combinations), a 6-digit numeric code (1 million combinations), a "custom numeric code" (arbitrary length, millions of combinations), or a "custom alphanumeric code" (arbitrary length, millions of combinations).
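For scale, the sizes of those passcode spaces (the alphanumeric figure assumes lower/upper/digits only, so it understates the real space):

```python
# Passcode space sizes for the options listed above.
print("4-digit numeric:     ", 10 ** 4)     # 10,000
print("6-digit numeric:     ", 10 ** 6)     # 1,000,000
print("10-digit numeric:    ", 10 ** 10)    # example of a longer custom numeric code
print("8-char alphanumeric: ", 62 ** 8)     # ~218 trillion, assuming a-z, A-Z, 0-9
```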
This is really annoying. I wrote a blog post last week making this exact same point, posted it here, and it promptly got flagged to death, most likely by the same people who were commenting that I was "absolutely, totally wrong".
AIUI your blog post was nonetheless wrong. As the OP explains, there are multiple layers of encryption.
The first is at the hardware level, performed by the SSD itself.[0] It encrypts all data on it with a key stored in rewriteable memory. By changing the key, you can "erase" the drive. The FBI could backup that key if they wanted to undo the "secure erase" feature.
However, this is not the only layer of encryption. The second is done in software, performed by iOS. It encrypts all user data with a key derived from the user's passcode and a unique key burned into the chip. The FBI cannot get past this layer, and this is what they wanted Apple's help for.
You were right in that the FBI could trivially get past the first layer of encryption, but it's not the one we care about. There's more than one layer.
[0] Or rather, by the flash controller. At least, I assume it is. It might actually be an iOS software feature too, but the OP's description reminds me very much of how some SSDs implement secure erase. Whether it's actually done in hardware or software is immaterial, anyway.
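A very rough sketch of the layering described above: a fast-erasable whole-volume key on top, and per-user-data keys derived from the passcode plus the device UID underneath. The names and structure here are illustrative, not Apple's actual key hierarchy.

```python
import hashlib, os

class ToyTwoLayerStorage:
    """Illustrative only: layer 1 is the fast-erasable volume key, layer 2 is
    the passcode/UID-derived key protecting user data under it."""
    def __init__(self):
        self.device_uid = os.urandom(32)     # fused into silicon, never readable
        self.volume_key = os.urandom(32)     # layer 1: erase this and the drive is "wiped"
        self.salt = os.urandom(16)

    def user_data_key(self, passcode: str) -> bytes:
        # Layer 2 (simplified): in reality a slow KDF entangled with the UID.
        return hashlib.sha256(passcode.encode() + self.device_uid + self.salt).digest()

    def secure_erase(self) -> None:
        self.volume_key = bytes(32)          # layer 1 gone -> everything below unrecoverable
```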
The FBI didn't demand help with that layer, it demanded a way to bypass auto erase. They intended to perform the brute force of the 4-digit PIN themselves. Sure, they wanted to be able to enter PINs electronically, but they could just as easily have an intern sit there and enumerate the 10,000 combinations once auto erase was disabled.
That was a timing weakness in an older version of iOS that was patched in software. Previously you could just cut power at a precise moment and get unlimited attempts. It's no longer exploitable.
Yes you can, if you sync the commit to NVRAM before giving any external indication of success/failure, and don't leak through any side-channels. The CVE demo'd before in the famous YouTube video was that you had a split second after failure was indicated where you could cut the power and keep the failure from being stored.
Many of the reports in the press were saying it was a 4-digit pin. But it's feasible even to brute force a six-digit PIN in the way I described, it would just take several weeks.
I use a larger than 8 digit pass code on my iphone. It looks different.
Using a default 4 digit code, you end up with a numeric pad and four boxes. If you use a longer than 4 digit code but stick with numbers, it gives a single box to enter the pass code in but presents a numeric pad. If you use letters at all, it switches to a qwerty-ish keyboard and a single box to enter the pass code.
Yes, it can, and I explained exactly how in the original post. It's true I got some of the technical details wrong, but the substance of the post was and is correct.
One of the technical details is that disjunctive claims are true if any of their constituent elements are true. My claim was of the form, "The iPhone is vulnerable to attack. Here is one attack. If that doesn't work, here's a second attack." The first attack doesn't work, but the second one does. Hence the overall claim -- that the iPhone is vulnerable -- was (and remains) correct.
So should you be flagged to death because you got this wrong?
" they could use a copy of the chip to try five different PIN codes, and then replace the chip with a fresh copy of the original and try five more. Lather, rinse, repeat. At worst this would take about a week or so."
>> Yes, it can, and I explained exactly how in the original post. It's true I got some of the technical details wrong, but the substance of the post was and is correct
No you were wrong. You claimed that if the FBI was competent they could retrieve the key from the flash. There were no technical details discussed at all. (you could trivially google this issue and find out what you suggested doesn't work)
The ACLU is describing an attack on the PIN not the key. You acknowledge this in another comment on this thread so I have no idea why you are claiming here that the FBI can get past the encryption in any way; which as others have said doesn't work.
It has been known in the abstract, but AFAIK I was the first to publish a specific attack. (Not that I really claim that as much of an accomplishment. Sometimes it's hard to navigate the line between educating people and insulting their intelligence, particularly when it comes to security.)
To be fair, this isn't an especially relevant argument, even if it's true.
The FBI isn't going to rip open a phone, unsolder a chip and risk destroying the device, when it can do what it's done successfully, many times in the past, and ask Apple to unlock the phone for them:
Nerds think that proving that there's some theoretical, high-tech attack against this specific phone means that the FBI should therefore lose. But that's irrelevant. This case is about the pipe wrench.
To use the classic XKCD comic, the crux of the case is that FBI is the one arguing the first panel (i.e., some bogus magical encryption we can never break), and because of that claim, they need to be able to compel Apple to compromise the security features using the old wrench trick.
The reality of there being practical alternatives for the FBI to pursue should give pause as to whether they can compel Apple to compromise the security features, and arguably the method described/discussed is indeed very practical.
All in all, it's less about the FBI's ability to do any of this and instead more about "should they be allowed to force a company to do something like this?". By demonstrating that the claim that it's impossible to proceed without Apple's help is not true, I would think it should give pause to any court as to how to rule, since the implication of the ruling is pretty big.
The point is that the definition of "practical" is debatable -- any reasonable person can see that there are more risks associated with mucking around with the circuit board than having Apple install a custom software build, which carries no technical risk at all.
It doesn't matter that you can come up with some theoretically plausible attack that works in this one case. If it's harder or riskier or slower or less effective than Apple complying with the warrant, then the question stands.
It actually seems clearly the opposite to me. This approach uses standard tools and methodologies for which there are already experts. Asking Apple to write new firmware has the potential of software bugs and similar unexpected issues.
I agree with you. Writing custom firmware for the device is on the same risk level as desoldering the chip. In both cases it would be a smart option to test these approaches on a different device first.
The difference is that one of those options is nearly completely reproducible, the other requires humans to deconstruct a device which introduces more chances for things to go wrong.
Yeah, so if Apple writes a new firmware upgrade (a couple of years from now) and the device has something in its configuration that, in conjunction with a new bug in the firmware, ends up bricking the device... I wonder what then.
It may not be relevant legally. But it undermines the (already very flawed) PR argument that Apple is enabling terrorists by not unlocking this specific phone.
Forcing the FBI to admit that their real objective is the general power to hit people with pipe wrenches seems like an important step.
The linked article quoting a former NSA person is wrong; Apple has never unlocked an encrypted iPhone 5C before. There are enough articles about the fact that for the previous demands no new software was needed, just data copying.
No, there's an essential difference between your post and the ACLU's: you suggested that the FBI would be able to brute-force the key off-chip, which is not true because the encryption key is derived in part from the device ID, which can't be extracted. This is not what the ACLU/DKG is suggesting - they are suggesting making a backup of the flash storage and restoring it after the auto-erase kicks in. The decryption would still happen on the iPhone itself.
Granted, you updated your post to suggest what the ACLU is now suggesting, but that was after the commenters correctly criticized your post for being wrong.
According to your post, the FBI would only need to do that if they were "completely incompetent." You wrote, "unless they are completely incompetent, having read out the contents of the chip they should be able to decrypt its contents in a matter of minutes if not seconds." Details matter, and it was fair to call you on your mistake. You shouldn't get all upset now that the ACLU has published a post which gets the details right.
You're right, I got that one detail wrong. Nonetheless, the substance of my post was correct. If every post that had a minor error like this was flagged to death the home page would be empty.
>> It's encrypted, but here's the thing: the encryption key is also (almost certainly) stored in the same chip. So all the FBI needs to do is de-solder the chip, mount it in its own hardware, and read out the data.
This is not correct and was the main suggestion you made.
>> I don't know how I could have made it any clearer that I was proposing an attack on the PIN, not the key.
Because you just did. You are now claiming that an offhand comment you made that resembles what the ACLU suggests is the main point of your post and that is not the case.
That was one of two proposed attacks. At the time I wrote it, I was unaware that the A6 chip has a UID that is used in the KDF. That renders the first of my two proposed attacks ineffective, but not the second one.
Well, HN is - just like any other community - biased and also suffers from the echo chamber problem. Even if it is on a much smaller scale than other online forums, there is still a chance of getting downvoted even though you are absolutely 100% right about something and most of the downvoters are wrong. You can either live with this and just ignore it, or get annoyed by it. I am not sure if this is going to be fixed, or if it needs to be fixed at all.
The problem is not so much the voting and the community, but the extreme non-linearity of the flagging mechanism. Getting flagged is mostly invisible until you cross a threshold and then -- poof! -- your article is dead. It was intended to keep spam and kitten pix off the front page. Vouching was introduced as a counter-measure to inappropriate flagging, and indeed my article was vouched for once and was briefly resurrected before it was flagged to death a second time.
Keeping control of an on-line curated forum as it grows is still an unsolved problem.
I was thinking about this a lot - how could you protect the integrity of your content and at the same time keep up quality? - but I only got as far as introducing weighted votes, scaling up how much somebody's vote means based on their karma. I don't think HN does that, and I'm also not sure it would fix this problem. We would probably need some model to verify it. Curious if anybody has a good idea.
Do you think they could back up the NAND chip and replace it with a read-only NAND chip with the same contents, and then try to unlock it as much as they want?
I don't know, but it's pretty easy to find out. Just try it. I don't have the skills to desolder a surface-mount chip, but I personally know people who do. It would probably cost about $1000 to buy an iPhone and hire one of these people to do the experiment.
No, the annoying part was being flagged to death, which cut off the discussion. I don't mind people telling me that I'm wrong. Sometimes they're right (but not this time).
Truths are whatever you make them, as long as that is what you really believe. "Truths" rationalized by an entity holding conflicting information may need sampling to ascertain the origin of the conflict. That's where voting comes in handy, but it's not a perfect solution by any means.
Hacker News filled with people who don't know what they're talking about but talking like they are infallibly right? Say it isn't so!
Your real issue was going up against the cult of Apple fanboys that hang out here. You will find this response with anything that remotely suggests that Apple isn't perfection.
You can see them still getting huffy in the responses to this comment.
This article seems wrong to me. I don't know a ton about the iPhone's specific implementation. That said, I was under the impression that these systems all worked similarly to the PC's TPM. Essentially, the encryption key is stored in a chip that acts as a black box. That chip is manufactured in such a way that makes it extremely difficult to extract data from. You can't simply copy it. You'd have to take it apart, inspect it with a microscope, and hope you don't destroy the data in the process.
The OS should set the security level initially. The TPM would enforce it. You can't modify the OS to make an attempt without it counting against the initially configured limit.
Some of the critical encryption keys are stored in the main A6 processor and incredibly difficult to extract, which is why you can't launch offline brute-force attacks - however those keys are read-only and initialized at device manufacturing time. All the volatile data is in the external flash chip. I'm not even sure if the Secure Enclave has its own flash on newer devices that have one.
With 14 million combinations just in a 4-character alphanumeric (upper/lower/numbers) password, I would think they would start to encounter flash reliability issues rewriting this "Effaceable Storage" long before the password could be broken.
This would also slow down their attack considerably.
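Rough numbers behind that concern, under obvious assumptions (10 attempts per erase/restore cycle, and a typical consumer-NAND endurance figure, which varies a lot by part):

```python
# Back-of-the-envelope: how many rewrite cycles a ~14.8M-combination search needs.
combos = (26 + 26 + 10) ** 4          # 4-char upper/lower/digit password: ~14.8 million
attempts_per_restore = 10             # restore the chip after each batch of 10 tries
restores_needed = combos // attempts_per_restore
print(f"{restores_needed:,} erase/restore cycles")                     # ~1.5 million

typical_pe_cycles = 3_000             # rough endurance for consumer MLC NAND (assumption)
print(f"chips consumed: ~{restores_needed // typical_pe_cycles:,}")    # hundreds of chips
```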
At that point they can swap out the flash chip for a new one, new chips can be obtained easily. Also, the PIN is 4 numeric digits, no alpha characters.
Thank you. I just assumed it was configurable, like Android, but with the downvotes I thought I was wrong.
Of course it would take much less time for a 4-digit numeric code -- but AFAIK at this point the length of the password is unknown, so the ACLU claiming fraud based on an assumption about the length of the password is not correct.
When you tap the home button on an iPhone, it shows you empty circles indicating how many characters are in the passcode. It also varies the entry keypad depending upon whether it's alphanumeric or numeric.
In other words, if you are holding the phone in your hand, you can figure out how many digits the passcode is, and whether it's alphanumeric or just numeric, without entering a single character.
By definition a 4-digit code is a 4-digit numeric code, but I haven't seen a definitive source saying that the phone is secured by a 4-digit PIN, as the iPhone allows longer PINs as well as alphanumeric passcodes.
I didn't think that was all that pedantic, I was even agreeing with you that 4-digit codes are not alphanumeric, so if anything, I was as pedantic as you.
But where I was disagreeing was that it was ever revealed that the phone in question really does have a 4 digit numeric code, because as far as I know, the size and complexity of the pass code has never been revealed by the FBI.
"The FBI can simply remove this chip from the circuit board (“desolder” it), connect it to a device capable of reading and writing NAND flash, and copy all of its data. It can then replace the chip, and start testing passcodes. If it turns out that the auto-erase feature is on, and the Effaceable Storage gets erased, they can remove the chip, copy the original information back in, and replace it. If they plan to do this many times, they can attach a “test socket” to the circuit board that makes it easy and fast to do this kind of chip swapping."
Right. They could do this, and risk destroying the device, or they could ask Apple to do the easy, reliable thing, and just install a build on this phone that allows brute-force attacks.
Given that Apple has a long history of complying with these kinds of requests for valid search warrants, and that this situation is about as clear as it gets when it comes to justifiable uses of government investigatory powers, it's obvious why they're taking the latter approach, and not the former.
There's a legitimate privacy debate in this case, but this isn't it.
Edit: I'm just stating facts here, folks. Downvoting me won't change those facts, or make the government change its tactic.
They're not being asked to provide "a digital signature". They're being asked to enable a brute-force attack on a single phone. Here's the full text of the request:
That's a distinction without a difference. Presumably Apple has done signed custom installs on the ~70 other iPhones they've brute-forced under warrant, because signed firmware has existed on iOS since (IIRC) the iPhone 3G.
In any case, the legal question has nothing to do with encryption. It's an incidental detail.
Those 70 other cases didn't involve installing a custom OS. They were running older OSes that did not do as good a job protecting the user's data, and thus could be attacked without any changes to the OS. The whole reason this thing has blown up now is because Apple finally improved their security to the point where the old attacks no longer work.
You are wrong. Apple cracked the other phones by installing software that brute-forced the password. They didn't have someone sit there and punch in 10,000 codes like a monkey.
Moreover, Apple won't comply with valid warrants for phones running iOS7, so it doesn't really have anything to do with the security of the OS. This started only because a federal judge made an issue of the legal justification for the first time ever:
How do you know that's how they cracked the other phones? I would have expected that it would have involved taking advantage of some existing security vulnerability. The jailbreakers already have it nicely packaged up, even.
How about the fact that these tools have existed in the public domain for every version prior to iOS8, plus the fact that Apple could do this in Apple stores for customers, plus basic common sense?
But OK, if you insist...here's "evidence" straight from the EFF:
"For older phones with no encryption, Apple already had a software version to bypass the unlock screen (used, for example, in Apple stores to unlock phones when customers had forgotten their passcode)."
And before you go there: whether or not you call this "brute forcing" is, again, a distinction without a difference. The FBI wants access to a single, password-protected phone, under warrant, and Apple has historically maintained custom software that helped them comply with these exact requests. Nobody knowledgable about this case cares that the software has to iterate through 10,000 numbers, or uses some other method to gain entry. They just want the outcome.
Your first link requires an already-compromised boot path; it cannot be used on the San Bernardino phone. Your second link describes software that only works on unencrypted devices, which likely means it needs to be able to grab the password hash directly (which it's free to then brute-force off-device, avoiding the max-attempts erasure).
Whether Apple has previously signed a piece of PIN unlock software or not completely misses the point: they decided to do that. They were not compelled. They expressed trust in the software because they trusted it. Not because they were forced to. Compelled speech is constitutionally prohibited.
Presumably, or backed up with a reliable source? I've not seen any credible claim or piece of evidence that Apple has signed custom binaries for law enforcement.