> There's also a credible rumor that Cellebrite's mechanisms only defeat the mechanism that limits the number of password attempts. It does not allow engineers to move the encrypted data off the phone and run an offline password cracker. If this is true, then strong passwords are still secure.
> The story I hear is that Cellebrite hires ex-Apple engineers and moves them to countries where Apple can't prosecute them under the DMCA or its equivalents.
Crazy if true.
Doesn't this also create a weird incentive problem, where the FBI (or any other law enforcement agency) that would normally be tasked with helping Apple with this doesn't actually want to?
Within ten seconds of searching my memory I can think of at least three Israeli companies that are known for researching/hoarding secret zero days and making use of them for large sums of money. Cellebrite, NSO, and Elbit.
The DMCA is not concerned with the security of Apple's devices or the "Secure Enclave", as Apple never said they existed for the sake of protecting copyright; that's iTMS's DRM, which is entirely unrelated.
I didn't say it was applicable in this case. However, US copyright law is enforced over most of the world in one way or another via trade agreements: basically any deal the US signs has a clause about respecting US copyright and a mechanism through which to seek redress.
Relative to the size of its population, Israel has a highly developed electronics engineering industry. Part of it is related to their state support of domestic defense contractors like IAI and their avionics/radar/C4I equipment. Aside from Cellebrite, there are companies like Ceragon, Alvarion, Radwin, ECI, Telrad, and Elbit.
Second hand knowledge: Within international organizations that have worked extensively in the Israel-Lebanon border area it is well known that Israel has pwned most of the Lebanese telecoms and ISPs quite thoroughly. To the extent that Hezbollah started laying its own fiber optic cables.
Specifically, many Israeli tech people (and especially those in defense) seem to be "graduates" of the IDF's 8200 SIGINT unit, which has close relationships with VCs in both Israel and the US.
> Relative to the size of its population, Israel has a highly developed electronics engineering related industry. Part of it related to their state support of domestic defense contractors ...
It's not due only to Israeli resources. Much of Israel's defense budget comes from the U.S., plus there is much more support, including technology transfer, that isn't provided in cash.
Get over it? It isn't their job to (1) have encyclopedic knowledge of a small and belligerent country and then (2) hold others to the same standard. No country gets that treatment here.
? Israel's defense budget is about $20 billion these days; the military aid to Israel is about $2.5 billion without special congressional allowances.
Israel's GDP is around $320 billion.
And if you're wondering about the reason for discrepancies from the article above, it's because that one is from 2012; the budget is larger now, and the US dollar has devalued against the Israeli shekel by 15% since then. In fact, fluctuations in US Foreign Military Financing (FMF) as a portion of the Israeli defense budget are often due to currency exchange, since much of the Israeli budget allocated in local currency is spent locally, while the FMF is spent over in the US in dollars.
It's also important to note that the Israeli budget allocation does not include FMF or any other aid. So when Israel allocates, say, the equivalent of 18 billion USD in local currency in the budget, that is the amount with which the government will fund the defense ministry. Beyond that, the defense ministry has its own internal budget, which is funded via the Israeli government budget, FMF, and any additional revenue streams of the defense ministry, such as rent and dividends from its shares of now-privatized national defense contractors like IWI (formerly Israel Military Industries) and IAI.
Ok then. I guess Apple's a Chinese company or an Irish company or a Singaporean company, because they have subsidiaries--each with a local CEO--in each of those countries:
Cellebrite itself is an Israeli company, despite being a subsidiary of a Japanese one.
Apple is an American company, but they have foreign subsidiaries like Shazam Entertainment, which is a British company. If Apple Singapore is a full subsidiary, then sure, that's a Singaporean company, but Apple Inc wouldn't be.
No? The law doesn’t prevent the government from searching your property in a wide range of circumstances: e.g. with a warrant, pursuant to a valid arrest, etc. That’s the whole idea of warrants: so there is a controlled way to search private property. The government goes to Israel to defeat technological roadblocks to doing what it’s allowed to do under the law. This technology isn’t being used to break into phones at surprise checkpoints, it’s being used to search phones of people who have been arrested.
Presumably guelo thinks Guantanamo permits the US to do things that would be illegal here. But breaking an iPhone pursuant to a warrant wouldn’t be. Having a warrant (or a suspect in custody) permits the government, by design, to do lots of things that would otherwise be illegal. The government doesn’t need to ship safes to Israel to avoid violating safe cracking laws when searching pursuant to a warrant.
Ah yes, the old "government does X, so I'm going to speculate it also does Y, and it's up to you to prove otherwise" trick. Arguing on the Internet is so much fun when we get to just make things up.
Come on now, there are literally thousands and thousands of thoroughly documented cases of law enforcement and government agencies violating the law.
The Israel == Guantanamo thing doesn't exactly make sense to me either, but now you're arguing nonsense. Certainly, we all know that the government, including law enforcement, doesn't always follow the law.
That's almost the entire argument for putting any limits on governmental power at all.
(That's not to say we should restrict them from doing this, though; if they can crack a phone, good for them, I suppose. But it's another thing to be concerned about.)
There are thousands of law enforcement agencies in the U.S., handling tens of thousands if not hundreds of thousands of cases each year. If they break the law with respect to a small percentage of those cases, you'll end up with thousands of examples. But with respect to any given thing, statistically, the government is probably not breaking the law.
Here, the "Israel == Guantanamo" thing doesn't make sense if you assume that the government is using the Israeli hacks to break iPhones it has in custody because of a warrant or arrest. You can speculate that the government is stealing peoples' iPhones and breaking into them without a warrant, but it's an actual logical fallacy to point to different things the government is doing to argue that the government is doing this thing too.
But under the DMCA it doesn't matter whether the thing you are trying to break the protection for is something you are allowed to do; the act of breaking the protection is itself illegal.
The iPhone hacks by their nature require having custody of the physical cell phone for an extended period of time. As far as I know, the government isn't stealing people's iPhones to search them.
> As far as I know, the government isn't stealing people's iPhones to search them.
Well, OK, now you know:
The US government seizes phones and laptops, "without showing reasonable suspicion of a crime or getting a judge’s approval", on a regular basis, and has done so for a number of years.
The government is permitted to search anything that crosses the U.S. border. It's a power inherent to nations, which are entities defined by their borders. The founding generation provided for such searches and seizures in the very first session of Congress.
You might not like it, but border searches aren't illegal, and the government doesn't need to go to Israel to do them.
I'm not claiming it's illegal. I'm just arguing that this is a new and serious security concern. You write:
> This technology isn’t being used to break into phones at surprise checkpoints, it’s being used to search phones of people who have been arrested.
and:
> As far as I know, the government isn't stealing people's iPhones to search them.
That implies it's nothing to worry about if you aren't being arrested, which is wrong.
First, you don't know when this technology is being used. It would be prudent to assume the US government could use this technology on any phone it seizes.
Second, even if it's not "stealing" when government agents seize your phone at a border (or yes, at a surprise checkpoint, which they can and do use), from a security standpoint, it's the same thing.
The legality of these searches is not that interesting to me (witch-burning and slavery were legal too). What's interesting is that this new exploit, assuming the story is accurate, allows the government to search the data of phones that they seize.
Why should we worry about that? Because, as we have already established, they seize phones routinely, and not necessarily in conjunction with an arrest or even suspicion of criminality.
Yes, it's legal (in many cases, anyway). But before this new phone-cracking capability, it probably wasn't effective. The security on the Apple iPhone was believed to be good enough to stop such intrusion; now (again, assuming this article is accurate) we know it isn't.
Sure they do; they're familiar with the design. Unless it is completely open source or has been totally reverse engineered (which I doubt), that is an advantage.
It’s not an advantage, in practice. Writing exploits against iOS is a very scarce skill, and the people who can do this might be slightly more productive if aided by the source, but the reverse isn’t true. Having the source doesn’t teach you anything about finding and exploiting these bugs.
It might seem logical to those unfamiliar with how these hacks work, but consider that such hacks do not depend on the secrecy of the design. Also consider that Apple hires regular software and hardware engineers, who do their best to design a system, and Apple then hires hackers (both internally, as well as external consultants) to find weaknesses in their designs. These weaknesses are then fixed before the product ships, meaning even those who were paid to break it no longer know how. This alone should tell you that people who know the system intimately are not the ones who understand how to break it.
Put another way: if I need to break this product, and there are two candidates I can hire, one person who wrote the software and one who knows nothing about the software but is demonstrably skilled at finding exploits in similar systems, I'll take the latter in a heartbeat.
I don't know if that's true, but if I had the same power as a government agency, which would allow way higher salaries than Apple or any other high-tech corporation pays, that's the first thing I'd attempt to do: find ex/unhappy/disgruntled engineers and offer them 5x pay plus a luxury home and lab on some tropical island.
Also don't forget the hardware. As with most/all other phone vendors, iPhone chips aren't made in the US; most of the design maybe is, but the chips themselves are not, and finding a Chinese hardware engineer happy to help would be even easier, because all he'd have to implement is a covert channel to tunnel sensitive data (passwords?) to a known place. If you have access to the hardware, that should be trivial to do: just implement a small undocumented flash memory space anywhere; then, when the user taps a password, an equally undocumented firmware routine (that's hundreds of bytes, very easy to conceal) would add the password to that small memory, which can be read only under certain conditions (say, connecting power + tapping a bossa nova rhythm while the phone screen faces down + disconnecting power; that seems crazy, but you get the idea: every sensor is a switch, and any switch can be used to enter a code). A few spare kilobytes of memory here and there would allow this and other spying mechanisms, so I wouldn't be surprised at all if some big agency attacked the hardware/firmware rather than the software.
> If you have access to the hardware that should be trivial to do: just implement a small undocumented flash memory space anywhere, then when the user taps a password an also undocumented firmware routine (that's hundreds of bytes, very easy to conceal)
I sincerely apologize for being this blunt, but you clearly have no clue what you’re talking about. Ask anyone who’s shipped any piece of hardware they helped design, let alone a processor, and they won’t be able to answer because they’ll be laughing so hard. Adding persistent hardware based spying like you describe into a design is anything but trivial.
By spying I didn't mean moving multiple megabytes of data or reprogramming a processor. On a PC, stealing passwords can be done by inserting a small microcontroller that acts as a HID device on one side and talks to a USB keyboard on the other. If you hide it inside a keyboard and instruct it to record the first two lines typed just after power on, which are almost always the system username and password, and store them in the microcontroller's flash, then it's just a matter of social engineering to get the data back ("hey, here's a new keyboard, I'll trash the old one for you").
On phones one has to intercept screen taps, which is harder, but if you have access to the hardware and develop its drivers, you can very likely do that before passwords get encrypted. All it needs is a daemon reading taps and comparing them with the virtual keyboard key positions (assuming you don't have access to the virtual key output, which would make it even easier). Once you have that daemon, tell it to read the system load and intercept what the user taps after a long sleep, which will very likely be the device PIN. Want the bank password? Just read what the user taps when there's a bank app in the foreground. I'm sorry for those laughing, but it can be done.
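To show how little logic the coordinate-matching part actually needs, here is a toy Python sketch of the idea. The keyboard layout, coordinates, and tap events are all invented for illustration; a real implementation would live in firmware or a privileged driver, not in Python.

```python
# Toy sketch: map raw touch coordinates onto a known on-screen
# keyboard layout. All layout values and tap events are made up.
KEY_WIDTH, KEY_HEIGHT = 100, 120
ROWS  = ["qwertyuiop", "asdfghjkl", "zxcvbnm"]
ROW_Y = [1600, 1730, 1860]   # y origin of each keyboard row (invented)
ROW_X = [0, 50, 150]         # x offset of each row (invented)

def tap_to_key(x, y):
    """Return the key under a tap, or None if the tap missed the keyboard."""
    for y0, x0, keys in zip(ROW_Y, ROW_X, ROWS):
        if y0 <= y < y0 + KEY_HEIGHT:
            idx = (x - x0) // KEY_WIDTH
            if 0 <= idx < len(keys):
                return keys[idx]
    return None

# A hypothetical daemon would feed real tap events in here and append the
# recovered characters to a hidden log whenever the lock screen (or a
# password field) is in the foreground.
taps = [(55, 1650), (355, 1780), (255, 1900)]   # fake events
print("".join(tap_to_key(x, y) or "?" for x, y in taps))
```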
They could also have iOS 11 jailbreak exploits in their possession. iOS 11 was already jailbroken recently and the Project Zero team has also informed Apple of exploits they discovered.
Correct me if I'm wrong, but this wouldn't help? "Unlocking" in the context of the FBI and iPhones always seems to be based around making it possible to brute force a device in their possession, which also means strong passphrases will remain secure. This is an incredibly hostile environment for security; the fact that Apple makes it as hard as they do is quite impressive.
There's been a number of ways to bypass a locked iOS device throughout the years[1]. This hardware box worked up until iOS 11 beta[2]. I imagine Cellebrite is using something similar, but gets around the fix Apple released.
These devices allow you to perform a brute force attack.
Jailbreak requires a reboot of the device and after a reboot the encryption key for the useful data on the device is not available. If the device uses a strong passcode (as opposed to a numeric code) it cannot practically be brute forced even if you force the device to allow you to enter many attempts quickly.
This is exactly why I use an XKCD-style multi-word password for my phone. TouchID/FaceID keep me from having to enter it a lot, and rapidly pressing the power button to disable them gives me convenience without significant compromise.
Good, but do throw a random character in there, otherwise your passphrase is essentially a few characters long in a (larger) alphabet, i.e., a dictionary sorted by most frequently used words. Or at least use some uncommon words.
70^8 = 576480100000000 // 8 chars of upper/lower case, numbers, symbols
4000^4 = 256000000000000 // 4 words pulled from a vocabulary of 4000 words
word rank
------------- ----
correct 1808
horse 1286
battery 3221
staple (not in the first 4000)
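As a sanity check on those numbers, here's a small Python sketch that reproduces them and converts both schemes to bits of entropy, assuming every character and word is chosen uniformly at random (which is the whole point of the XKCD approach):

```python
import math

charset, chars = 70, 8      # upper/lower case, digits, symbols
vocab, words   = 4000, 4    # words drawn from a 4000-word vocabulary

char_space = charset ** chars   # 576,480,100,000,000
word_space = vocab ** words     # 256,000,000,000,000

print(f"{chars} random chars: {char_space:,} combinations, "
      f"{math.log2(char_space):.1f} bits")
print(f"{words} random words: {word_space:,} combinations, "
      f"{math.log2(word_space):.1f} bits")
# Both land around 48-49 bits, but only if the words really are picked
# at random, not if the phrase is something you'd find in a phrase book.
```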
Even if you can't authenticate evidence (chain of custody, disclosure of technical process, etc), you can still use the fruits of the analysis as long as acquisition of the phone wasn't illegal and the fruits can be proven independently after the fact. I imagine that in most situations law enforcement can make their case once information on the phone points them in the right direction, especially in high-profile cases where the government would spend a lot of money on a secret process.
Authentication is necessary because the prosecutor has the burden of proof, and part of meeting that burden is making a facially sound case about the authenticity and reliability of each piece of evidence. But unlike, say, an illegal search, failure to meet that burden doesn't poison derivative evidence as long as that evidence is independently admissible.
Importantly, you don't need to authenticate evidence _before_ getting a warrant to take possession of the phone; at least not to the extent required at trial. And as far as I know there are no laws limiting how the government can extract information from a phone it legally possesses for investigatory purposes, which means any technical process would be entirely irrelevant to the legality of the search. So there's no way to force the government to divulge the process as long as they don't try to submit the information gained by that process directly as evidence.
Nothing, if they have a warrant (or equivalent legal authorization).
But the reason most of us care about having our data encrypted is not actually because we are committing heinous felonies, and want our phones to hide the evidence from legitimate cops (though of course sometimes that’s the case).
It’s because we don’t trust the authorities to follow the law. If they can crack your phone legally, they can also crack it illegally. (Say, after seizing it within 100 miles of the border, which they can do any time they please for whatever reason (including no real reason)).
So even though this ability isn’t necessarily illegal in and of itself, it’s certainly of interest to those of us who are concerned about the threat vectors that are presented by government forces that do engage in illegal practices.
It's my personal belief that this line of thinking, which is common among "geeky" types but not among the general population, is a form of slight delusion or power fantasy.
Is there anything you can provide to convince me it's remotely possible?
You use illegal methods to get X without a warrant, but you can't use that information legally. So you use your knowledge of X to find a legal way of learning X after the fact.
Then you go to the courts saying you found X the legal way.
I understand the theory thanks, I'm just disputing that this scenario is anything other than exceptionally rare. Parallel construction is used to protect sensitive sources, not cover up illegality.
Breathalyzers will tell you what's up right on the spot, unless you've come across some that require the cops to collect a jar of your breath for processing at some remote discrete location?
Breathalyzers are easily tampered with by police to provide false readings. One case of this in New Jersey could have potentially thrown out 20,000 DWI cases. But breathalyzer results in cases today are not thrown out after pointing this out.
It's irrelevant that breathalyzers in New Jersey were tampered with. You would need to show evidence that the breathalyzer in your specific case was tampered with. It's a basic rule of evidence...
In this case, a few breathalyzers were not calibrated correctly, but the officer responsible for testing them claimed to have done so. Now every device and calibration procedure is called into question, as their results may not be as verifiably precise as required by law. The device itself is not claimed to have been tampered with.
The prosecution will present calibration logs and tamper-prevention information to show that the device was independently verified as working correctly and demonstrably unchanged since that inspection date. A lot of people's careers rely on those records being correct, up to and including a perjury charge if they're falsified.
IANAL. I had this talk with a lawyer friend a few weeks ago. I thought there was a way to show that the design of the device is faulty - for example, a speed gun at sunset reports inaccurate readings. But it's a very high standard to get a court to allow that.
That's why, where I live, if you trigger the drink-driving limit on a breathalyzer you're driven to the station, where a medical professional takes your blood and sends it to an independent lab.
>How is chain of custody maintained if the process is a secret? Couldn't a person argue that the data obtained was planted?
Well, this is a more general issue (I mean, not limited to this case or to sending a device to Cellebrite or another external laboratory): once a chain of custody is formally valid, it has as much integrity as the people who had physical access to or worked on the device.
Still - thankfully - "planting" evidence on a modern file system and OS (provided that the end result is an actual physical extraction) is not as easy as it may seem.
Definitely possible, but extremely difficult to achieve without leaving any trace behind.
By the time anyone would challenge in court how an iOS 11 device was broken, Apple would have released iOS 15, and the iOS 11 exploit would be public knowledge.
I'm too lazy to find a link for it, but last time I read about Cellebrite, they were cloning the data and simply trying unlock codes in sequence until one worked. They could restore the cloned data before each try, or possibly do it on custom hardware or an emulator, and start with a fresh copy each time, so they never triggered "erase after 10 failures". It's a pretty straightforward approach, but it doesn't scale well. Works for targeted cracking of high-value targets.
This is not true. You cannot just clone the data and run passcodes against it, because the data is not encrypted by your passcode. Instead, each file on iOS 11 is encrypted with a different AES 256-bit key, and cracking even one 256-bit key through exhaustive search is thought to be out of reach of humankind (https://security.stackexchange.com/questions/6141/amount-of-...). The file keys are wrapped by, among other things, the device's Unique ID, a 256-bit key generated by the Secure Enclave, and accessible only to the Secure Enclave, not any other hardware or software running on an iOS device.
In the end, the only options are: bruteforcing passcodes on the original device while attempting to trick the device into allowing more than 10 failures, or prying open the Secure Enclave to obtain the Unique ID — both options a lot more complicated than just cloning the data and trying passcodes on it.
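For anyone wondering why cloning the flash doesn't give you an offline attack, here's a toy Python model of the "passcode tangled with a device-unique key" idea. It is only loosely modeled on the public description of iOS data protection; the KDF, iteration count, and key sizes are illustrative stand-ins, not Apple's actual parameters.

```python
import hashlib
import secrets

# The device-unique key (UID): fused into the Secure Enclave at manufacture
# and never readable by software. Here random bytes stand in for it.
DEVICE_UID = secrets.token_bytes(32)

def derive_wrapping_key(passcode: str) -> bytes:
    # The passcode is "tangled" with the UID, so this derivation can only
    # run on the device itself. KDF and iteration count are illustrative.
    return hashlib.pbkdf2_hmac("sha256", passcode.encode(), DEVICE_UID, 100_000)

# Each file key is itself a random 256-bit key, wrapped by the derived key.
file_key = secrets.token_bytes(32)
print("wrapping key:", derive_wrapping_key("123456").hex()[:16], "...")

# Offline, an attacker who cloned the flash has neither DEVICE_UID nor the
# wrapping key, so guessing passcodes against the copy gets them nothing;
# they'd have to search a 256-bit keyspace, which is the "out of reach of
# humankind" part mentioned above.
```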
> or prying open the Secure Enclave to obtain the Unique ID
People have been cracking secure coprocessors of the type used in payment cards, TPMs, and the like for a long time, dare I say even those which were designed to a higher level of security than Apple's. The fact that there is an entire phone attached to it doesn't make much of a difference, but the technology behind this (FIB, microprobing, etc.) has been steadily dropping in price and increasing in availability for a long time.
I understand what you're saying here: why share the fact they've broken the SE for $100k when they can keep making millions.
But if they cracked the SE, and kept that fact to themselves, they would be making even more money because every government on the planet would be coming to them. This is provided they kept it to themselves.
It would mean a significant spike in the number of phones being cracked and people being arrested/charged/hung/etc. This would be a statistic that would jump off the charts and trigger Apple to essentially develop a solution straight away.
The only way this would work is if they had cracked the SE and are doing an Enigma: keeping it top secret and only cracking very high profile targets with the technology, which I guess is possible.
The risky.biz podcast proposed a solution, half seriously and half in jest: offer $50 million for the bug bounty. It would destroy the working relationships and trust of the group of people required to come up with multi-stage exploits, and Apple has the cash to do that once or twice.
This argument gets made really often. Consider that bug bounties and blackhat talks disclosing bugs both exist and are extremely popular. Not everyone wants to be a drug dealer. Cellebrite hardly has a monopoly on hardware research.
> but the technology behind this (FIB, microprobing, etc.) has been steadily dropping in price
Isn't it more of a cat and mouse? The defences also drop in price and increase in availability?
As usual, old tech becomes vulnerable. Hopefully most people will get the chance to upgrade to the latest and greatest before attacks get too easy and destroy the old device.
I wonder what Chris Tarnovsky is up to these days...
I think he got the idea right. Yes, you need the secret key burned into the CPU to decrypt anything, and yes, you can't easily extract the keys. But his claim is that by fully restoring the flash storage (presumably where the retry counter is stored), it's possible to bypass the "erase data after 10 failed attempts" policy by constantly resetting the counter back to its original state. It might take a while (you might have to go through the boot process each time), but for a 6-digit PIN at 30 seconds per attempt, it's still less than a year.
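The back-of-the-envelope math, assuming a flat 30 seconds per attempt and no other lockouts:

```python
attempts        = 10 ** 6    # every possible 6-digit PIN
seconds_per_try = 30         # assumed: reboot + flash restore + one guess

worst_case_days = attempts * seconds_per_try / 86_400
print(f"worst case: {worst_case_days:.0f} days")   # ~347 days
# Expected time is about half that for a uniformly random PIN, and far
# less if you try common PINs (birthdays, 123456, ...) first.
```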
That doesn't work, though, because the counter is kept in the Secure Enclave, so it's not part of the flash contents. Also, the exponential delay between attempts is enforced by the enclave.
Previous iOS versions had a small window where you could race and power off after trying a passcode but before the enclave had incremented the counter, but that bug was fixed long ago. Maybe there are other unknown bugs of a similar kind.
>In the end, the only options are: bruteforcing passcodes on the original device while attempting to trick the device into allowing more than 10 failures, or prying open the Secure Enclave to obtain the Unique ID — both options a lot more complicated than just cloning the data and trying passcodes on it.
If I'm understanding correctly, obtaining the unique ID would simply mean the strength of the AES key becomes the point of failure? (So a strong password means FBI doesn't get in)
You will still be trusting Apple to securely use your entire password rather than always truncating it to, say, 3 characters, and/or "backing up" some or all of it to their cloud - things that are rather difficult to independently verify.
That’s not quite what the person above said. They said that the data would be pulled off the phone - in its encrypted form - then restored to make the phone forget the number of attempts.
I think the Secure Enclave has independent built-in mechanisms for keeping track of the number of times things have happened though.
How does the Secure Enclave store all of those AES keys? I am guessing that the keys aren't random and are regenerated in order to do decryption, so "all" an attacker needs to do is break the key generation process, not the keys themselves.
The key that is mentioned is only part of what you need to decrypt the data. The point of the secret key is to prevent an offline attack; you need to get the device to combine passwords or codes with the secret key to get the encryption key. That way the device can enforce speed limits and a maximum number of attempts.
As far as I can remember, that was the proof of concept which ended up emerging for the 5C crack a few years ago, and I don’t see any reason the methods would’ve changed.
The iPhone 5C could be attacked this way because it didn't contain the Secure Enclave that shipped as part of the A7 chip. Attacks on devices with the A7 (or newer chips) would be novel.
> relatively inexpensive, costing as little as $1,500 per unlock
That's not good. Since we're bound to have that cost-of-unlock war anyway as new workarounds are found, it should at least be higher. I'd hope for $50k+, so that if it's really needed, it goes through several levels of approvals.
I think we can pretty safely assume that if they would have been willing to do this, Forbes would have been happy to report on it, because it's a damn good story.
Presumably they weren't, because there's nothing in it for them. And Forbes ran the story anyway. Would it really have been better for Forbes to not run it?
I wish there was a kind of "dead man switch" app that would wipe a device if it is not unlocked for x days or met some other kind of personalized criteria.
Many years ago someone (possibly/probably Dan Kaminsky) suggested storing your gpg-encrypted+signed full device encryption key in the global DNS cache. If you don't do a lookup every X days, it'll expire from the cache and the drive will be unrecoverable assuming no other copies of the key exist.
I'll be the first to admit that to a DNS expert my original phrasing was not fully precise or fully complete, but calling it "factually not correct" is unfair. As the GP hints the idea is that you cycle through a large number of open resolvers around the world, putting the key into, let's say, 10 of them each time for redundancy & availability. As you usually cannot extend the timeout on those servers, you simply move to a different set of 10 servers during each refresh.
> As you usually cannot extend the timeout on those servers, you simply move to a different set of 10 servers during each refresh.
Think about that from a technical perspective and you’ll realize the flaw. :)
You can't cycle to a new set unless the authoritative server is still responding with the key. If the authoritative server still has it, what difference does it make that a caching name server has it? Furthermore, there are zero guarantees that a caching resolver will cache for the length of the specified TTL, so you literally have a land mine that'll explode randomly and cause you to lose your data.
> As the GP hints
Read it again. GP’s Github link doesn’t allude to what you imply it does. Storing arbitrary data in DNS is of course possible and others will cache it for you, but implying anything like what you described as feasible just doesn’t hold merit.
> but calling it "factually not correct" is unfair.
This entire theory you posted originally doesn't hold up to even basic technical review. It's nothing against you personally; the idea simply doesn't provide any actual benefit, and calling it factually incorrect is fair.
> You can’t cycle to a new set unless the authoritative server is still responding with the key.
Yes - during a refresh you'd (re-)add the key to an authoritative server that you control, query it from X open resolvers that you do not control, then delete the records from the authoritative server that you control, such that the only remaining copies are held by the open resolvers. Care would be taken to make the key forensically unrecoverable on the authoritative resolver whenever it's not participating in this refresh process.
> there’s zero guarantees a caching resolver will cache for the ... [full] TTL
To deal with that you can test them first using useless/random data, and use many (10+) to deal with the risk that policy changes after your test. The hope being that it's unlikely for all 10+ to go offline, time out, etc, before your next refresh. But it is true that some availability risk is the price you must pay for the "unrecoverability after X seconds, using HW you don't own" benefit of the scheme.
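For anyone curious what a refresh cycle would actually look like, here is a rough Python sketch using dnspython. The record name, resolver IPs, and the "publish then scrub" step on the authoritative server are all placeholders, since that part depends entirely on how your DNS is hosted:

```python
import random
import dns.resolver   # pip install dnspython

RECORD = "deadman.example.com"                       # placeholder name
OPEN_RESOLVERS = ["8.8.8.8", "1.1.1.1", "9.9.9.9"]   # in reality, a large rotating pool

def seed_caches(resolver_ips, n=10):
    """While the TXT record still exists upstream, ask n open resolvers for
    it so their caches hold the (gpg-encrypted) key after we delete it."""
    cached = []
    for ip in random.sample(resolver_ips, min(n, len(resolver_ips))):
        r = dns.resolver.Resolver(configure=False)
        r.nameservers = [ip]
        try:
            answer = r.resolve(RECORD, "TXT", lifetime=5)
            cached.append((ip, answer.rrset[0].to_text()))
        except Exception:
            pass  # resolver down or not caching: just move on to another one
    return cached

# Conceptual refresh loop:
#   1. publish the gpg-encrypted key as a TXT record on a server you control
#   2. seed_caches() against a fresh set of open resolvers
#   3. delete (and forensically wipe) the record from your own server
#   4. repeat before the shortest remaining TTL runs out; miss the window
#      and the key is unrecoverable, which is the "dead man" part
```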
I think that's how some HSMs work to guard against tampering. The keys are stored in volatile memory, and there's an internal battery/capacitor. You can unplug it, but its tamper intrusion mechanisms will still be active because of the internal battery. If you trip the tamper detection mechanisms, the keys get wiped. If the battery runs out, the keys get wiped. So if you're relocating the HSM, you have x days to plug it back in before you lose your keys. I imagine if you're designing a high security phone, you could possibly want this as a feature, but I doubt most people want a phone that wipes their data if the battery goes flat for 7 days.
An app would need an environment to run in, but the main CPU/etc may not be available if the recovery process involves removing chips or other hardware modifications.
A better idea would be to put an RTC + watchdog timer[1] into the security chip that holds the keys and power it continuously with a small amount of external power. The power must be available and the watchdog timer must have time remaining to disconnect a circuit that pulls the memory holding the keys to ground.
More advanced types of tamper resistance and self-destructing chips are possible, but they tend to have significant downsides.
Apple is being sued for making phones slightly slower, can you imagine the lawsuit if they were "deliberately lobotomizing" phones? You know that's how it would be said...
This is not exactly what you are looking for, but G Suite MDM has an "Auto Account Wipe" feature that "Automatically removes corporate account data when a device reaches a specified number of days of inactivity."
Presumably, this would automatically delete your G Suite email, contact, calendar, and other data from your device.
That's not a feature implemented on-device by Google; Apple built the feature explicitly to make this available to corporate customers. The same thing exists for Exchange.
The dead man switch isn’t an option, but using the Apple Configurator (https://support.apple.com/apple-configurator), there are lots of additional security features that can be enabled that aren’t accessible via the iPhone’s UI.
EDIT
Using Apple Configurator, you can change the number of times unsuccessful attempts can occur before erasing an iPhone to anything between 2-10.
To most users, probably. But if you are a high-ish profile target, the (potential) inconvenience is probably worth the effort for the added peace of mind.
After all, when was the last time you spent three days without fondling your phone?
So they get owned earlier this month, and then a fluff piece appears later this month that doesn't mention any of the findings even tho it calls out the dangers of hoarding vulns.
This might be a good scenario for Apple. Apple doesn't have to build a backdoor, which is good for PR, and the Feds got what they want, so they'll stop bothering Apple. Which is the position Android/Google was in all along.
So the tinfoil hat theory here is that Apple itself leaks the cracking tech to Cellebrite to ease the fed pressure and keep its reputation intact? Sorry, I don't buy it.
That's not really the strongest interpretation of GP's statement. It wasn't implying that nobody would ever share source code for any reason, but that the company would deliberately decide to share source code in order to obtain fly-by-night compliance with government requests while maintaining its public image.
That happens all the time too, and I'm sure Apple is no exception. Want to score that big NSA storage contract? Pony up your HDD firmware... for "security assurances." Suddenly, NSA has exploits for all 7 major HDD manufacturers.
All I want is my every day encryption to be a big enough pain in the butt to crack that the feds can't break it without spending a medium amount of money.
===Edit
HN: Apologies, this was lost in my subtlety, but consider the game theory aspects. Your best bet is to be _just enough_ of a pain in the butt that it's difficult to reach you, but you certainly don't want to be singled out on a national stage either. Maybe I'm the only one who considers this angle?
I'm not even on the Feds' radar. I don't want a mugger to send my stolen iDevice up the food chain to a Russian syndicate and have them able to get into my Lastpass, internet banking app, and live bitcoin wallet before I've had enough time to change all the credentials.
Presumably, for a security professional, selling a usable exploit to a company like Cellebrite pays far better than the $0 that comes from releasing it as a "jailbreak" to the general public.
I'd assume things like these are generally difficult to release to the public in any meaningful way, since they often require hardware hacks like desoldering components.
All of the jailbreaks have just involved tethering your phone to iTunes or visiting a particular website or app. There's never been a need to do any desoldering.
All of the jailbreaks that were released as easily accessible jailbreaks. It's definitely plausible that this exploit requires direct access to pinouts on the motherboard.
The difference is just that those exploits which were difficult enough that they required soldering generally weren't released or didn't get much traction.
That being said, I remember soldering a modchip onto my original Xbox 16 years ago.
I think he's just talking about attacks against locked devices. I'd consider that a different category of thing than jailbreaks (rooting an unlocked device).
> There's also a credible rumor that Cellebrite's mechanisms only defeat the mechanism that limits the number of password attempts. It does not allow engineers to move the encrypted data off the phone and run an offline password cracker. If this is true, then strong passwords are still secure.
¹https://www.schneier.com/blog/archives/2018/02/cellebrite_un...