The Feds Can Now Probably Unlock Every iPhone Model (forbes.com/sites/thomasbrewster)
265 points by dsr12 on Feb 27, 2018 | 159 comments



Bruce Schneier says¹:

> There's also a credible rumor that Cellebrite's mechanisms only defeat the mechanism that limits the number of password attempts. It does not allow engineers to move the encrypted data off the phone and run an offline password cracker. If this is true, then strong passwords are still secure.

¹https://www.schneier.com/blog/archives/2018/02/cellebrite_un...


> The story I hear is that Cellebrite hires ex-Apple engineers and moves them to countries where Apple can't prosecute them under the DMCA or its equivalents.

Crazy if true.

Doesn't this also create a weird incentive problem, where the FBI (or any other law enforcement agency) that would normally be tasked with helping Apple with this doesn't actually want to?


So Israel is like a high tech Guantanamo where our government goes when those pesky laws get in the way.


Within ten seconds of searching my memory I can think of at least three Israeli companies that are known for researching/hoarding secret zero days and making use of them for large sums of money. Cellebrite, NSO, and Elbit.

https://www.forbes.com/sites/thomasbrewster/2016/08/25/every...

Elbit is a big defense contractor and basically wants to simultaneously be a competitor of General Dynamics and Palantir.


Exactly:

https://youtu.be/oYNXVgYhPOc

Listen carefully to the choice and emphasis on words.


James Mickens said, “YOU’RE STILL GONNA BE MOSSAD’ED UPON”



The DMCA is applicable in Israel via the bilateral trade agreement with the US.


The DMCA is not concerned with the security of Apple’s devices or “Secure Enclave” as Apple never said they existed for the sake of protecting copyright - that’s iTMS’s DRM which is entirely unrelated.


I didn’t say it was applicable in this case. However, US copyright law is enforced over most of the world in one way or another via trade agreements: basically any deal that the US signs has a clause about respecting US copyright and a mechanism through which to seek redress of grievances.


Cellebrite is a Japanese company, a subsidiary of Japan's Sun Corporation.


None of the executive team is Japanese (the CEO is Israeli), and most of the positions they are hiring for are located in Israel...


Relative to the size of its population, Israel has a highly developed electronics engineering industry. Part of it is related to state support of domestic defense contractors like IAI and their avionics/radar/C4I equipment. Aside from Cellebrite, there are companies like Ceragon, Alvarion, Radwin, ECI, Telrad, and Elbit.

Second hand knowledge: Within international organizations that have worked extensively in the Israel-Lebanon border area it is well known that Israel has pwned most of the Lebanese telecoms and ISPs quite thoroughly. To the extent that Hezbollah started laying its own fiber optic cables.

https://www.google.com/search?client=ubuntu&channel=fs&q=hez...


Specifically, many Israeli tech people (and especially those in defense) seem to be “graduates” of the IDF’s 8200 SIGINT unit, which has close relationships with VCs in both Israel and the US.


> Relative to the size of its population, Israel has a highly developed electronics engineering related industry. Part of it related to their state support of domestic defense contractors ...

It's not due only to Israeli resources. Much of Israel's defense budget comes from the U.S., plus there is much more support, including technology transfer, that isn't provided in cash.


According to Business Insider [1] in 2012, the Israeli defence budget was approx. $15 billion per year and US military aid was $3.07 billion per year.

[1] http://www.businessinsider.com/heres-how-much-america-really...


For some reason, the Powers that Be at YCombinator allow unabated criticism of Israel, even if it's based on completely made-up facts.


I have enough trust in the HN community that they would be able to call out falsehoods and downvote comments that are factually incorrect.

Why would the moderators get involved anyway? Should they also moderate criticism of Russia whether right or wrong? of North Korea?


> unabated criticism of Israel, even if it's based on completely made-up facts.

How is it criticism? Which facts were made up?


Get over it? It isn't their job to 1. have encyclopedic knowledge of a small and belligerent country and then 2. hold others to the same standard. No country gets that treatment here.


About 10% of the defense budget or 1% of GDP is US military aid that can be spent only on US military hardware.


Err, facts can't be made up.


Israel's defense budget is about $20 billion these days; US military aid to Israel is about $2.5 billion without special congressional allowances. Israel's GDP is ~$320 billion.

And if you're wondering about the reason for the discrepancies with the article above: that one is from 2012, the budget is larger now, and the US dollar has fallen against the Israeli shekel by 15% since then. In fact, fluctuations in US Foreign Military Financing (FMF) as a portion of the Israeli defense budget are often due to currency exchange, since much of the Israeli budget allocated in local currency is spent locally, while the FMF is spent over in the US in dollars.

It's also important to note that the Israeli budget allocation does not include FMF or any other aid. So when Israel allocates, say, the equivalent of $18 billion in local currency in the budget, that is the amount with which the government will fund the defense ministry; beyond that, the defense ministry has its own internal budget, funded via the Israeli government budget, FMF, and any additional revenue streams of the defense ministry, such as rent and dividends from its shares of now-privatized national defense contractors like IWI (formerly Israel Military Industries) and IAI.


@dang why do you allow and encourage people to spread bald-faced off-topic lies?


Ok then. I guess Apple's a Chinese company or an Irish company or a Singaporean company, because they have subsidiaries--each with a local CEO--in each of those countries:

http://www.nytimes.com/interactive/2013/05/21/business/apple...


Cellebrite itself is an Israeli company, despite being a subsidiary of a Japanese one.

Apple is an American company, but they have foreign subsidiaries like Shazam Entertainment, which is a British company. If Apple Singapore is a full subsidiary, then sure, that's a Singaporean company, but Apple Inc wouldn't be.


No? The law doesn’t prevent the government from searching your property in a wide range of circumstances: e.g. with a warrant, pursuant to a valid arrest, etc. That’s the whole idea of warrants: so there is a controlled way to search private property. The government goes to Israel to defeat technological roadblocks to doing what it’s allowed to do under the law. This technology isn’t being used to break into phones at surprise checkpoints, it’s being used to search phones of people who have been arrested.


You left out why that's different from Guantanamo, and instead just defended it generally.


Presumably guelo thinks Guantanamo permits the US to do things that would be illegal here. But breaking an iPhone pursuant to a warrant wouldn’t be. Having a warrant (or a suspect in custody) permits the government, by design, to do lots of things that would otherwise be illegal. The government doesn’t need to ship safes to Israel to avoid violating safe cracking laws when searching pursuant to a warrant.


>implying the government asks for a warrant for many of their invasive activities

they don't play by their own set of rules.


Ah yes, the old “government does X, so i’m going to speculate it also does Y, and it’s up to you to prove otherwise” trick. Arguing on the Internet is so much fun when we get to just make things up.


Come on now, there are literally thousands and thousands of thoroughly documented cases of law enforcement and government agencies violating the law.

The Israel == Guantanamo thing doesn't exactly make sense to me either, but now you're arguing nonsense. Certainly, we all know that the government, including law enforcement, doesn't always follow the law.

That's almost the entire argument for putting any limits on governmental power at all.

(That's not to say we should restrict them from doing this, though; if they can crack a phone, good for them, I suppose. But it's another thing to be concerned about.)


There are thousands of law enforcement agencies in the U.S., handling tens of thousands if not hundreds of thousands of cases each year. If they break the law with respect to a small percentage of those cases, you'll end up with thousands of examples. But with respect to any given thing, statistically, the government is probably not breaking the law.

Here, the "Israel == Guantanamo" thing doesn't make sense if you assume that the government is using the Israeli hacks to break iPhones it has in custody because of a warrant or arrest. You can speculate that the government is stealing peoples' iPhones and breaking into them without a warrant, but it's an actual logical fallacy to point to different things the government is doing to argue that the government is doing this thing too.


But under the DMCA it doesn't matter whether the thing you're breaking the protection for is something you're allowed to do; the act of breaking the protection is itself illegal.


The DMCA, 17 USC 1201(e), includes a specific exception for law enforcement.


How do you know what it's being used for?


The iPhone hacks by their nature require having custody of the physical cell phone for an extended period of time. As far as I know, the government isn’t stealing people’s iPhones to search them.


> As far as I know, the government isn’t stealing people’s iPhones to search them.

Well, OK, now you know:

The US government seizes phones and laptops, "without showing reasonable suspicion of a crime or getting a judge’s approval", on a regular basis, and has done so for a number of years.

https://www.politico.com/story/2013/09/laptop-seizure-border...

https://www.theregister.co.uk/2017/03/16/canadian_privacy_co...

https://www.nytimes.com/2017/09/13/technology/aclu-border-pa...

http://www.bbc.com/news/magazine-25458533

(et fuckin' cetera...)


The government is permitted to search anything that crosses the U.S. border. It's a power inherent to nations, which are entities defined by their borders. The founding generation provided for such searches and seizures in the very first session of Congress.

You might not like it, but border searches aren't illegal, and the government doesn't need to go to Israel to do them.


I'm not claiming it's illegal. I'm just arguing that this is a new and serious security concern. You write:

> This technology isn’t being used to break into phones at surprise checkpoints, it’s being used to search phones of people who have been arrested.

and:

> As far as I know, the government isn’t stealing peoples iPhones to search them.

That implies it's nothing to worry about if you aren't being arrested, which is wrong.

First, you don't know when this technology is being used. It would be prudent to assume the US government could use this technology on any phone they seize.

Second, even if it's not "stealing" when government agents seize your phone at a border (or yes, at a surprise checkpoint, which they can and do use), from a security standpoint, it's the same thing.

The legality of these searches is not that interesting to me (witch-burning and slavery were legal too). What's interesting is that this new exploit, assuming the story is accurate, allows the government to search the data of phones that they seize.

Why should we worry about that? Because, as we have already established, they seize phones routinely, and not necessarily in conjunction with an arrest or even suspicion of criminality.

Yes, it's legal (in many cases, anyway). But before this new phone-cracking capability, it probably wasn't effective. The security on the Apple iPhone was believed to be good enough to stop such intrusion; now (again, assuming this article is accurate) we know it isn't.


Can't the police legally detain you for 24+ hours without "arresting" you? In that case wouldn't they have ample time to unlock the phone?


Sounds like some bizarre William Gibson novel, but then most things do nowadays.


1984: CIA smuggles cocaine into the US

2018: FBI smuggles Apple engineers out of the US


It’s absolutely untrue that ex-Apple employees are helping them break in. Apple employees do not have an advantage in creating this type of thing.


Sure they do, they're familiar with the design. Unless it is completely open source or has been totally reverse engineered (which I doubt), that is an advantage.


It’s not an advantage, in practice. Writing exploits against iOS is a very scarce skill, and the people who can do this might be slightly more productive if aided by the source, but the reverse isn’t true. Having the source doesn’t teach you anything about finding and exploiting these bugs.

It might seem logical to those unfamiliar with how these hacks work, but consider that such hacks do not depend on the secrecy of the design. Also consider that Apple hires regular software and hardware engineers, who do their best to design a system, and Apple then hires hackers (both internally, as well as external consultants) to find weaknesses in their designs. These weaknesses are then fixed before the product ships, meaning even those who were paid to break it no longer know how. This alone should tell you that people who know the system intimately are not the ones who understand how to break it.

Put another way, if I need to make this product, and there are two candidates I can hire, one person who wrote the software and one who knows nothing about the software but is demonstrably skilled at finding exploits in similar systems, I’ll take the latter in a heartbeat.


I don't know if that's true, but if I had the same power as a government agency, which would allow way higher salaries than Apple's or any other high-tech corporation's, that's the first thing I'd attempt: find ex/unhappy/disgruntled engineers and offer them 5x pay plus a luxury home and lab on some tropical island.

Also don't forget the hardware. Like with most/all other phone vendors, iPhone chips aren't made in the US; most of the design maybe is, but the chips themselves are not, and finding a Chinese hardware engineer happy to help would be even easier, because all he'd need to implement is a covert channel to tunnel sensitive data (passwords?) to a known place. If you have access to the hardware, that should be trivial to do: just implement a small undocumented flash memory space anywhere; then, when the user taps a password, an equally undocumented firmware routine (that's hundreds of bytes, very easy to conceal) would add the password to that small memory, which can be read only under certain conditions (say, connecting power + tapping a bossa rhythm while the phone screen is facing down + disconnecting power; that seems crazy, but you get the idea: every sensor is a switch, and any switch can be used to enter a code). A few spare kilobytes of memory here and there would allow this and other spying mechanisms, so I wouldn't be surprised at all if some big agency attacked the hardware/firmware rather than the software.


> If you have access to the hardware that should be trivial to do: just implement a small undocumented flash memory space anywhere, then when the user taps a password an also undocumented firmware routine (that's hundreds of bytes, very easy to conceal)

I sincerely apologize for being this blunt, but you clearly have no clue what you’re talking about. Ask anyone who’s shipped any piece of hardware they helped design, let alone a processor, and they won’t be able to answer because they’ll be laughing so hard. Adding persistent hardware-based spying like you describe into a design is anything but trivial.


By spying I didn't mean moving megabytes of data or reprogramming a processor. On a PC, stealing passwords can be done by inserting a small microcontroller that acts as a HID device on one side and talks to the USB keyboard on the other side. If you hide it inside a keyboard and instruct it to record the first two lines typed just after power on, which are almost always the system username and password, storing them in the microcontroller's flash, then it's just a matter of social engineering to get the data ("hey, here's a new keyboard, I'll trash the old one for you"). On phones one has to intercept screen taps, which is harder, but if you have access to the hardware and develop its drivers, you very likely can do that before passwords get encrypted. All it needs is a daemon reading taps and comparing them with the virtual keyboard key positions (assuming you don't have access to the virtual key output, which would make it even easier). Once you have that daemon, tell it to watch the system state and intercept what the user taps after a long sleep, which will very likely be the device PIN. Want the bank password? Just read what the user taps when there's a bank app in the foreground. I'm sorry for those laughing, but it can be done.


> find ex/unhappy/disgruntled engineers and offer them 5x pay

5x pay sounds quite regruntling, as in P.G. Wodehouse's "I could see that, if not actually disgruntled, he was far from being gruntled."

[ https://www.goodreads.com/quotes/26678-i-could-see-that-if-n... , https://en.wiktionary.org/wiki/regruntle ]


They could also have iOS 11 jailbreak exploits in their possession. iOS 11 was already jailbroken recently and the Project Zero team has also informed Apple of exploits they discovered.


Correct me if I’m wrong, but this wouldn’t help? “Unlocking” in the context of the FBI and iPhones always seems to be based around making it possible to brute-force a device in their possession, which also means strong passphrases will remain secure. This is an incredibly hostile environment for security; the fact that Apple makes it as hard as they do is quite impressive.


It's been many years since I've even glanced at an iOS jailbreak, but since when can you jailbreak a locked device?


There have been a number of ways to bypass a locked iOS device throughout the years[1]. This hardware box worked up until an iOS 11 beta[2]. I imagine Cellebrite is using something similar, but gets around the fix Apple released.

[1]https://www.youtube.com/results?search_query=everything+appl...

[2]https://www.youtube.com/watch?v=IXglwbyMydM


These devices allow you to perform a brute force attack.

Jailbreak requires a reboot of the device and after a reboot the encryption key for the useful data on the device is not available. If the device uses a strong passcode (as opposed to a numeric code) it cannot practically be brute forced even if you force the device to allow you to enter many attempts quickly.


This is exactly why I use an XKCD-style multi-word password for my phone. TouchID/FaceID keep me from having to enter it a lot, and the rapid pressing of the power button to disable them gives me convenience without significant compromise.


Good, but do throw a random character in there; otherwise your passphrase is essentially a few characters long in a (larger) alphabet, i.e. a dictionary sorted by most frequently used words. Or at least use some uncommon words.

    70^8   = 576480100000000   // 8 chars of upper/lower case, numbers, symbols
    4000^4 = 256000000000000   // 4 words pulled from a vocabulary of 4000 words

    word          rank
    ------------- ----
    correct       1808
    horse         1286
    battery       3221
    staple        (not in the first 4000)

https://xkcd.com/936/ (For those who were wondering about the context.)
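
A quick back-of-the-envelope check of those numbers in Python (assuming each character or word is chosen uniformly at random and the attacker knows the scheme):

    import math

    char_space = 70 ** 8      # 8 chars from a ~70-symbol alphabet
    word_space = 4000 ** 4    # 4 words from a 4000-word vocabulary

    print(f"70^8   ~ {math.log2(char_space):.1f} bits")   # ~49.0 bits
    print(f"4000^4 ~ {math.log2(word_space):.1f} bits")   # ~47.9 bits
    # One extra random word more than closes the gap:
    print(f"4000^5 ~ {math.log2(4000 ** 5):.1f} bits")    # ~59.8 bits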


> Good but do throw a random character in there

There is, but that part isn't the important part. The important part is using a long pass phrase rather than a multi digit number.


How is chain of custody maintained if the process is a secret? Couldn't a person argue that the data obtained was planted?


Even if you can't authenticate evidence (chain of custody, disclosure of technical process, etc), you can still use the fruits of the analysis as long as acquisition of the phone wasn't illegal and the fruits can be proven independently after the fact. I imagine that in most situations law enforcement can make their case once information on the phone points them in the right direction, especially in high-profile cases where the government would spend a lot of money on a secret process.

Authentication is necessary because the prosecutor has the burden of proof, and part of meeting that burden of proof is making a facially sound case about the authenticity and reliability of each piece of evidence. But unlike, say, an illegal search, failure to meet that burden doesn't poison derivative evidence as long as that evidence is independently admissible.

Importantly, you don't need to authenticate evidence _before_ getting a warrant to take possession of the phone; at least not to the extent required at trial. And as far as I know there are no laws limiting how the government can extract information from a phone it legally possesses for investigatory purposes, which means any technical process would be entirely irrelevant to the legality of the search. So there's no way to force the government to divulge the process as long as they don't try to submit the information gained by that process directly as evidence.

But maybe I'm missing something.


They'll probably drop the case if the defense points this out (seriously, look at Stingrays).


Or they just do parallel construction.

They gather the evidence illegally, and then conjure up a legal means to re-find the evidence they already know exists.


What here is illegally gathered?


Nothing, if they have a warrant (or equivalent legal authorization).

But the reason most of us care about having our data encrypted is not actually because we are committing heinous felonies, and want our phones to hide the evidence from legitimate cops (though of course sometimes that’s the case).

It’s because we don’t trust the authorities to follow the law. If they can crack your phone legally, they can also crack it illegally. (Say, after seizing it within 100 miles of the border, which they can do any time they please for whatever reason (including no real reason)).

So even though this ability isn’t necessarily illegal in and of itself, it’s certainly of interest to those of us who are concerned about the threat vectors that are presented by government forces that do engage in illegal practices.


It's my personal belief that this line of thinking, which is common among "geeky" types but not among the general population, is a form of slight delusion or power fantasy.

Is there anything you can provide to convince me it's remotely possible?


let's say you need a warrant to get X.

you use illegal methods to get X without a warrant. but you can't use that information legally. so you use your knowledge of X to find a legal way of learning X, after the fact.

then you go to the courts saying you found X the legal way.

but you didn't.


I understand the theory thanks, I'm just disputing that this scenario is anything other than exceptionally rare. Parallel construction is used to protect sensitive sources, not cover up illegality.


But not so much for breathalyzers...


Breathalyzers will tell you what's up right on the spot, unless you've come across some that require the cops to collect a jar of your breath for processing at some remote, discreet location?


Breathalyzers are easily tampered with by police to provide false readings. One case of this in New Jersey could potentially have thrown out 20,000 DWI cases. But breathalyzer results in today's cases are not thrown out when this is pointed out.


It's irrelevant that breathalyzers in New Jersey were tampered with. You would need to show evidence that the breathalyzer in your specific case was tampered with. It's a basic rule of evidence...


In this case, a few breathalyzers were not calibrated correctly, but the officer responsible for testing them claimed to have done so. Now every device and calibration procedure is called into question, as their results may not be as verifiably precise as required by law. The device itself is not claimed to have been tampered with.

http://www.nj.com/politics/index.ssf/2017/10/20k_dwi_cases_c...


> after pointing this out.

What would you say, exactly?

If you said:

"your honor, breathalyzers can be tampered with to provide false readings"

It seems quite easy for anyone to respond with "how so?". Do you refer to "this one time in New Jersey"?


I have an expert who can explain various methods. Also, please release the device for the defense's inspection.


The prosecution will present calibration logs and tamper-prevention information to show that the device was independently verified as working correctly and demonstrably unchanged since that inspection date. A lot of people's careers rely on those records being correct, up to and including a perjury charge if they're falsified.


IANAL. I had this talk with a lawyer friend a few weeks ago. I thought there was a way to show the design of the device is faulty; for example, a speed gun at sunset reports inaccurate readings. But it's a very high standard to get a court to allow that.


Are you asking me to explain how trial law works?


That's why, where I live, if you trigger the drink-driving limit on a breathalyzer, you're driven to the station, where a medical professional will take your blood and send it to an independent lab.


>How is chain of custody maintained if the process is a secret? Couldn't a person argue that the data obtained was planted?

Well, this is a more general issue (not limited to this case, or to sending a device to Cellebrite or another external laboratory): once a chain of custody is formally valid, it has as much integrity as the people who had physical access to or worked on the device.

Still - thankfully - "planting" evidence on a modern file system and OS (provided that the end result is an actual physical extraction) is not as easy as it may seem.

Definitely possible, but extremely difficult to achieve without leaving any trace behind.


Other articles stated the unlock could also be performed on premises for a hefty sum.


By the time anyone would challenge in court how an iOS 11 device was broken, Apple would have released iOS 15, and the iOS 11 exploit would be public knowledge.


I'm too lazy to find a link for it, but last time I read about Cellebrite, they were cloning the data and simply trying unlock codes in sequence until one worked. They could restore the cloned data before each try, or possibly do it on custom hardware or an emulator, and start with a fresh copy each time, so they never triggered "erase after 10 failures". It's a pretty straightforward approach, but it doesn't scale well. Works for targeted cracking of high-value targets.


This is not true. You cannot just clone the data and run passcodes against it, because the data is not encrypted by your passcode. Instead, each file on iOS 11 is encrypted with a different AES 256-bit key, and cracking even one 256-bit key through exhaustive search is thought to be out of reach of humankind (https://security.stackexchange.com/questions/6141/amount-of-...). The file keys are wrapped by, among other things, the device's Unique ID, a 256-bit key generated by the Secure Enclave, and accessible only to the Secure Enclave, not any other hardware or software running on an iOS device.

In the end, the only options are: bruteforcing passcodes on the original device while attempting to trick the device into allowing more than 10 failures, or prying open the Secure Enclave to obtain the Unique ID — both options a lot more complicated than just cloning the data and trying passcodes on it.
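
To make the wrapping idea concrete, here is a minimal sketch in Python using the `cryptography` package. It is an illustration only, not Apple's actual design: the real hierarchy has several more layers (class keys, passcode entanglement, effaceable storage), and the UID key never leaves the Secure Enclave. All names here are made up.

    # pip install cryptography
    import os
    from cryptography.hazmat.primitives.keywrap import aes_key_wrap, aes_key_unwrap
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    uid_key = os.urandom(32)  # stand-in for the UID key fused into the Secure Enclave

    def encrypt_file(plaintext: bytes):
        file_key = os.urandom(32)                   # fresh random AES-256 key per file
        nonce = os.urandom(12)
        ciphertext = AESGCM(file_key).encrypt(nonce, plaintext, None)
        wrapped = aes_key_wrap(uid_key, file_key)   # only the "enclave" can unwrap this
        return wrapped, nonce, ciphertext

    def decrypt_file(wrapped: bytes, nonce: bytes, ciphertext: bytes) -> bytes:
        file_key = aes_key_unwrap(uid_key, wrapped)
        return AESGCM(file_key).decrypt(nonce, ciphertext, None)

The takeaway: cloning the flash gets you ciphertexts and wrapped keys, but without the UID key there is nothing to attack offline except AES-256 itself.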


> or prying open the Secure Enclave to obtain the Unique ID

People have been cracking secure coprocessors of the type used in payment cards, TPMs, and the like for a long time, dare I say even those which were designed to a higher level of security than Apple's. The fact that there is an entire phone attached to it doesn't make much of a difference, but the technology behind this (FIB, microprobing, etc.) has been steadily dropping in price and increasing in availability for a long time.


Fwiw, Apple has a $100k bounty on this type of exploit (pulling secrets from the secure enclave).


But Cellebrite apparently makes millions off of its service, so the economic incentives are still on their side.


I understand what you're saying here: why share the fact they've broken the SE for $100k when they can keep making millions.

But if they cracked the SE, and kept that fact to themselves, they would be making even more money because every government on the planet would be coming to them. This is provided they kept it to themselves.

It would mean a significant spike in the number of phones being cracked and people being arrested/charged/hung/etc. This would be a statistic that would jump off the charts and trigger Apple to essentially develop a solution straight away.

The only way this would work is if they had cracked the SE and are doing an Enigma: keeping it top secret and only cracking very high profile targets with the technology, which I guess is possible.


> keeping it top secret and only cracking very high profile targets with the technology

Of course. The "big guns" are not to be used lightly, as the saying goes.


> This would be a statistic that would jump off the charts and trigger Apple to essentially develop a solution straight away.

How, though? If the only information Apple has is that their SE scheme is broken, how is that supposed to help them develop a solution?


The risky.biz podcast proposed a solution, half seriously and half in jest: offer $50 million for the bug bounty. It would destroy the working relationships and trust of the group of people required to come up with multi-stage exploits, and Apple has the cash to do that once or twice.


This argument gets made really often. Consider that bug bounties and blackhat talks disclosing bugs both exist and are extremely popular. Not everyone wants to be a drug dealer. Cellebrite hardly has a monopoly on hardware research.


I'm sure companies, like the one in the article, will pay substantially more for them.


> but the technology behind this (FIB, microprobing, etc.) has been steadily dropping in price

Isn't it more of a cat and mouse? The defences also drop in price and increase in availability?

As usual, old tech becomes vulnerable. Hopefully most people will get the chance to upgrade to the latest and greatest before attacks get too easy and destroy the old device.

I wonder what Chris Tarnovsky is up to these days...


I think he got the idea right. Yes, you need the secret key burned into the CPU to decrypt anything, and yes, you can't easily extract the keys. But his claim is that by fully restoring the flash storage (presumably where the retry counter is stored), it's possible to bypass the "erase data after 10 failed attempts" policy by constantly resetting the counter back to its original state. It might take a while (you might have to go through the boot process each time), but for a 6-digit PIN and 30 seconds per attempt, it's still less than a year.
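
The arithmetic behind "less than a year" checks out, assuming the counter reset works and each attempt really takes 30 seconds:

    # Worst case for a 6-digit PIN at 30 seconds per attempt
    attempts = 10 ** 6
    print(attempts * 30 / 86400)  # ~347 days to exhaust; ~174 days on average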


That doesn’t work as well because the counter is kept in the Secure Enclave so it’s not part of the flash contents. Also the exponential delay in attempts is enforced by the enclave.

Previous iOS versions used to have a small window where you could race and power off after trying a passcode but before the enclave had incremented the counter, but that bug was fixed long ago. Maybe there are other unknown bugs of a similar kind.


>That doesn’t work as well because the counter is kept in the Secure Enclave so it’s not part of the flash contents

I skimmed the secure enclave documentation at https://www.apple.com/business/docs/iOS_Security_Guide.pdf (page 5, 14, 15), and I can't find anything to confirm that.


"On devices with Secure Enclave, the delays are enforced by the Secure Enclave coprocessor." Page 15

It was also confirmed explicitly during the Q&A at the blackhat talk in 2016, which I believe is on YouTube.


>It was also confirmed explicitly during the Q&A at the blackhat talk in 2016, which I believe is on YouTube.

doesn't look like it. https://www.blackhat.com/docs/us-16/materials/us-16-Mandt-De...


That’s not the apple talk. Apple gave a talk specifically about hardware security in the iPhone.

Edit: this one. https://youtu.be/BLGFriOKz6U

Edit 2: the question was asked at 47:20


It makes the most sense that it's in the Secure Enclave. Once you have control of the counter, you can run through the passphrase space in no time.


Only if the pass’phrase’ is a short numeric code. iOS also allows a normal password.


> constantly resetting the counter back to its original state

I wouldn't rule out some kind of electrical glitching attack.


That’s essentially the method used to bypass smart card processors.


>In the end, the only options are: bruteforcing passcodes on the original device while attempting to trick the device into allowing more than 10 failures, or prying open the Secure Enclave to obtain the Unique ID — both options a lot more complicated than just cloning the data and trying passcodes on it.

If I'm understanding correctly, obtaining the Unique ID would simply mean the strength of the AES key becomes the point of failure? (So a strong password means the FBI doesn't get in.)


You will still be trusting Apple to securely use your entire password rather than always truncating it to, say, 3 characters, and/or "backing up" some or all of it to their cloud - things that are rather difficult to independently verify.


That’s not quite what the person above said. They said that the data would be pulled off the phone - in its encrypted form - then restored to make the phone forget the number of attempts.

I think the Secure Enclave has independent built-in mechanisms for keeping track of the number of times things have happened though.


How does the Secure Enclave store all of those AES keys? I am guessing that the keys aren't random and are regenerated in order to do decryption, so "all" an attacker needs to do is break the key generation process, not the keys themselves.


Is there no chance that Apple and/or the FBI have the list of phone serial numbers and matching AES key?

(If that sounds sarcastic, it's not meant to; genuine question, as I don't know much about this stuff!)


The key that is mentioned is only part of what you need to decrypt the data. The point of the secret key is to prevent an offline attack; you need to get the device to combine passwords or codes with the secret key to get the encryption key. That way the device can enforce speed and a maximum number of attempts.
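
A rough sketch of that entanglement, in Python (hypothetical parameters and names; Apple's actual KDF runs inside the Secure Enclave and is calibrated to take roughly 80 ms per guess on the device):

    import hashlib, os

    DEVICE_SECRET = os.urandom(32)  # stand-in for the UID key; never readable off-device

    def derive_unlock_key(passcode: str) -> bytes:
        # Because DEVICE_SECRET is mixed in, a copy of the flash alone is
        # useless: every guess has to run on the device, at the device's pace.
        return hashlib.pbkdf2_hmac("sha256", passcode.encode(), DEVICE_SECRET, 100_000)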


Doesn't scale well? Are you assuming you cannot run this in parallel?


As far as I can remember, that was the proof of concept which ended up emerging for the 5C crack a few years ago, and I don’t see any reason the methods would’ve changed.


The iPhone 5C could be attacked this way because it didn't contain the Secure Enclave that shipped as part of the A7 chip. Attacks on devices with the A7 (or newer chips) would be novel.


> relatively inexpensive, costing as little as $1,500 per unlock

That's not good. Since we're bound to have that cost-of-unlock war anyway as new workarounds are found, the price should at least be higher. I'd hope for $50k+, so that if an unlock is really needed, it goes through several levels of approvals.


Why didn't Forbes send them an iPhone X running iOS 11 to hack, to find out if it's true or not?


I imagine Cellebrite is very picky about who gets to be their customer. They wouldn't want someone sending a trojan device that reveals their secrets.


I would think Forbes Magazine would be a great customer for them.


> I would think Forbes Magazine would be a great customer for them

Forbes would unlock one phone and then alert Apple by way of their story. Cellebrite’s ideal customer pays for lots of iPhones to be unlocked.


Go to Apple store with reporter. Buy phone. Have reporter take selfie and set passcode. Take phone, unlock, show reporter selfie.


Why would Cellebrite want to do this in the first place?


I don't know, but I imagine Forbes could convince them. Isn't that what journalism is all about?


I think we can pretty safely assume that if they would have been willing to do this, Forbes would have been happy to report on it, because it's a damn good story.

Presumably they weren't, because there's nothing in it for them. And Forbes ran the story anyway. Would it really have been better for Forbes to not run it?


I wish there were a kind of "dead man's switch" app that would wipe a device if it is not unlocked for x days or meets some other kind of personalized criteria.


Many years ago someone (possibly/probably Dan Kaminsky) suggested storing your gpg-encrypted+signed full device encryption key in the global DNS cache. If you don't do a lookup every X days, it'll expire from the cache and the drive will be unrecoverable assuming no other copies of the key exist.


> global DNS cache

No such thing exists

> If you don't do a lookup every X days, it'll expire from the cache

Requerying a cached DNS record doesn’t extend its TTL. TTL is based on when it was first added to the cache.


You're not wrong, but it has been shown that you can store small amounts of data in open resolving DNS servers.

https://github.com/benjojo/dnsfs


That's definitely true and I didn't dispute that part, I focused on the parts which were factually not correct.


I'll be the first to admit that to a DNS expert my original phrasing was not fully precise or fully complete, but calling it "factually not correct" is unfair. As the GP hints the idea is that you cycle through a large number of open resolvers around the world, putting the key into, let's say, 10 of them each time for redundancy & availability. As you usually cannot extend the timeout on those servers, you simply move to a different set of 10 servers during each refresh.


> As you usually cannot extend the timeout on those servers, you simply move to a different set of 10 servers during each refresh.

Think about that from a technical perspective and you’ll realize the flaw. :)

You can’t cycle to a new set unless the authoritative server is still responding with the key. If the authoritative server still has it, what difference does the fact a caching name server has it? Furthermore, there’s zero guarantees a caching resolver will cache for the length of the specified TTL, so you literally have a land mine that’ll explode randomly and cause you to lose your data.

> As the GP hints

Read it again. GP’s Github link doesn’t allude to what you imply it does. Storing arbitrary data in DNS is of course possible and others will cache it for you, but implying anything like what you described as feasible just doesn’t hold merit.

> but calling it "factually not correct" is unfair.

This entire theory you posted originally doesn’t hold up to even basic technical review. It’s nothing against you personally, the idea simply doesn’t provide any actual benefit and very fairly is factually incorrect.


> You can’t cycle to a new set unless the authoritative server is still responding with the key.

Yes - during a refresh you'd (re-)add the key to an authoritative server that you control, query it from X open resolvers that you do not control, then delete the records from the authoritative server that you control, such that the only remaining copies are held by the open resolvers. Care would be taken to make the key forensically unrecoverable on the authoritative resolver whenever it's not participating in this refresh process.

> there’s zero guarantees a caching resolver will cache for the ... [full] TTL

To deal with that you can test them first using useless/random data, and use many (10+) to deal with the risk that policy changes after your test. The hope being that it's unlikely for all 10+ to go offline, time out, etc, before your next refresh. But it is true that some availability risk is the price you must pay for the "unrecoverability after X seconds, using HW you don't own" benefit of the scheme.
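
For what it's worth, the "refresh" step might look something like this dnspython sketch (the record name and resolver IPs are placeholders, and this ignores the very real caching caveats raised above):

    # pip install dnspython
    import dns.resolver

    OPEN_RESOLVERS = ["203.0.113.1", "203.0.113.2"]   # placeholder open resolvers
    NAME = "keyfrag1.example.com"                     # TXT record briefly published on
                                                      # an authoritative server you control

    def prime(resolver_ip: str, name: str) -> str:
        r = dns.resolver.Resolver(configure=False)
        r.nameservers = [resolver_ip]
        return r.resolve(name, "TXT")[0].to_text()    # the resolver now caches the answer

    for ip in OPEN_RESOLVERS:
        prime(ip, NAME)
    # ...then delete the record from the authoritative zone and scrub it; recovery
    # later means re-querying these resolvers before the cached TTL runs out.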


I think that's how some HSMs work to guard against tampering. The keys are stored in volatile memory, and there's an internal battery/capacitor. You can unplug it, but its tamper-intrusion mechanisms will still be active because of the internal battery. If you trip the tamper-detection mechanisms, the keys get wiped. If the battery runs out, the keys get wiped. So if you're relocating the HSM, you have x days to replug it before you lose your keys. I imagine if you're designing a high-security phone, you might want this as a feature, but I doubt most people want a phone that wipes their data if the battery goes flat for 7 days.


I agree, but based on some people's risk profile, the option would be nice.


An app would need an environment to run in, but the main CPU/etc may not be available if the recovery process involves removing chips or other hardware modifications.

A better idea would be to put an RTC + watchdog timer[1] into the security chip that holds the keys and power it continuously with a small amount of external power. The power must be available and the watchdog timer must have time remaining to disconnect a circuit that pulls the memory holding the keys to ground.

More advanced types of tamper resistance and self-destructing chips are possible, but they tend to have significant downsides.

[1] https://en.wikipedia.org/wiki/Watchdog_timer


Apple is being sued for making phones slightly slower, can you imagine the lawsuit if they were "deliberately lobotomizing" phones? You know that's how it would be said...


Good points. Thank you for this reading material.


This is not exactly what you are looking for, but G Suite MDM has an "Auto Account Wipe" feature that "Automatically removes corporate account data when a device reaches a specified number of days of inactivity."

Presumably, this would automatically delete your G Suite email, contact, calendar, and other data from your device.

https://support.google.com/a/answer/6328708?hl=en#general


That’s not a feature implemented on the device by Google; Apple built the feature explicitly to make this available to corporate customers. The same thing exists for Exchange.


Interesting. The feature is also available on Android.


Same thing exists on Windows Phone, BlackBerry, WebOS, even all the way back to Symbian I think.


Okay, likely a standard Exchange feature then.


The dead man switch isn’t an option, but using the Apple Configurator (https://support.apple.com/apple-configurator), there are lots of additional security features that can be enabled that aren’t accessible via the iPhone’s UI.

EDIT: Using Apple Configurator, you can change the number of unsuccessful passcode attempts allowed before an iPhone is erased to anything between 2 and 10.


You mean like automatically erasing the device after 10 incorrect passcode attempts?

That option has been built in to the iPhone pretty much since the beginning.


No. An example would be automatically erase if the device has not been unlocked in 3 days.


This would be inconvenient at times.


To most users, probably. But if you are a high-ish profile target, the (potential) inconvenience is probably worth it for the added peace of mind.

After all, when was the last time you spent three days without fondling your phone?


So they get owned earlier this month, and then a fluff piece appears later this month that doesn't mention any of the findings, even though it calls out the dangers of hoarding vulns.

This is a planted marketing/PR piece.


This might be a good scenario for Apple. Apple doesn't have to build a backdoor, which is good for PR, and the Feds got what they want, so they'll stop bothering Apple. Which is the position Android/Google was in all along.


So the tinfoil-hat theory here is that Apple itself leaks the cracking tech to Cellebrite to ease the fed pressure and keep its reputation intact? Sorry, I don't buy it.


I'm not suggesting any theories. I'm just pointing out that Apple isn't really in a terrible position because of this news.


No one at Apple would share source code? You've never heard of iBoot?


That's not really the strongest interpretation of GP's statement. It wasn't implying that nobody would ever share source code for any reason, but that the company would deliberately decide to share source code in order to obtain fly-by-night compliance with government requests while maintaining its public image.


That happens all the time too, and I'm sure Apple is no exception. Want to score that big NSA storage contract? Pony up your HDD firmware... for "security assurances." Suddenly, NSA has exploits for all 7 major HDD manufacturers.

http://www.cbc.ca/news/technology/nsa-hid-spying-software-in...

Hmm, how did Apple get cleared for DOD use?

http://www.zdnet.com/article/iphones-ipads-cleared-for-u-s-m...

What process would be required for that? Hmmmm.


https://xkcd.com/538

All I want is my every day encryption to be a big enough pain in the butt to crack that the feds can't break it without spending a medium amount of money.

===Edit HN: Apologies, this was lost in my subtlety, but consider the game theory aspects. Your best bet is to be _just enough_ of a pain in the butt that it's difficult to reach you, but you certainly don't want to be singled out on a national stage either. Maybe I'm the only one who considers this angle?


I'm not even on the Feds' radar. I don't want a mugger to send my stolen iDevice up the food chain to a Russian syndicate and have them able to get into my LastPass, internet banking app, and live bitcoin wallet before I've had enough time to change all the credentials.


Claims in one hand, shit in the other. Prove yourself.


Not too surprising based on economics alone.

Presumably, for a security professional, selling a usable exploit to a company like Cellebrite pays far better than the $0 that comes from releasing it as a "jailbreak" to the general public.


I'd assume things like these are generally difficult to release to the public in any meaningful way, since they often require hardware hacks like desoldering components.


What? No, they don't.

All of the jailbreaks have just involved tethering your phone to iTunes or visiting a particular website or app. There's never been a need to do any desoldering.


All of the jailbreaks that were released as easily accessible jailbreaks. It's definitely plausible that this exploit requires direct access of pinouts on the motherboard.

The difference is just that those exploits which were difficult enough that they required soldering generally weren't released or didn't get much traction.

That being said, I remember soldering a modchip onto my original Xbox 16 years ago.


I think he's just talking about attacks against locked devices. I'd consider that a different category of thing than jailbreaks (rooting an unlocked device).




