Hacker News
Your iPhone just got less secure. Blame the FBI (washingtonpost.com)
328 points by Libertatea on March 30, 2016 | 239 comments



The FBI's refusal to detail the flaw will just add to the pile of miscommunications between technologists and the government. That hurts the government's ability to advance its own technological capabilities and understanding. Every day, they're getting better at shooting themselves in the foot and widening that communication gap.

I see nobody out there capable of bridging it. Not Tim Cook, not the EFF, not Obama, and certainly not the DOJ.

Bruce Schneier's previous coverage from 2015-07 [1] is what first got me interested in and up to speed on the recent San Bernardino case. Even if Apple isn't demanding the FBI's method at this moment, I respect what Bruce has to say here.

[1] https://www.schneier.com/blog/archives/2015/07/back_doors_wo...


It seems to me that Tim Cook and the FBI understand each other very well. They just don't care about the same things.

Former CIA and NSA Director Michael Hayden clearly understands the issues. I saw an interview where he stated that the FBI was correct to want access (it makes their job easier) and that we shouldn't give it to them (he understands that a backdoor will be used in ways other than intended). The point being that people on the government side aren't just naive, ignorant administrators.

I think the problem with a lot of national issues isn't the players lacking an understanding of the nuances, it's that people in general aren't very receptive to nuance. That makes it a losing strategy to try to address it.


Public officials answer to a different standard than private citizens who run companies. The oath of the FBI is not to make their own jobs easier. It is to maintain public security. If the Director of the FBI cannot do that effectively, then that is a blemish on the record of President Obama who appointed Comey.

There's a definite need for someone to step up and say that on balance, we are more secure without trying to guarantee government access to encrypted data, or vice versa. So far nobody has taken that high level view and been able to convince any of the early major players in this debate.

Lindsey Graham's statement during his questioning of Lynch is the closest we got to a high level player changing sides, demonstrating an understanding of both positions [1].

[1] https://youtu.be/uk4hYAwCdhU?t=6m53s


> The oath of the FBI is not to make their own jobs easier. It is to maintain public security.

Actually, this is their oath:

    I [name] do solemnly swear (or affirm) that I will support and defend the Constitution of the United States against all enemies, foreign and domestic; that I will bear true faith and allegiance to the same; that I take this obligation freely, without any mental reservation or purpose of evasion; and that I will well and faithfully discharge the duties of the office on which I am about to enter. So help me God.
(source: https://www2.fbi.gov/publications/leb/2009/september2009/oat...)

Their oath is to the Constitution, and the duties of their office are to investigate federal crimes. The Constitution does not mandate that they share everything they know, but neither does it make it as easy for them to do their jobs as they might like.

The Constitution includes certain rights which make it more difficult for them to investigate crimes than they would prefer; it is also silent in some areas which some of us might wish it spoke more loudly on.


If the Constitution is silent, that means it is prohibited.

The Constitution is first and foremost a document outlining what the federal government is allowed to do. If a power is not listed in the Constitution, then by design the government does not have that authority.

I think this has been lost, in part because of the inclusion of the Bill of Rights, which was never intended to be an all-inclusive list of a person's rights.


Section. 8.

The Congress shall have Power To lay and collect Taxes, Duties, Imposts and Excises, to pay the Debts and provide for the common Defence and general Welfare of the United States; but all Duties, Imposts and Excises shall be uniform throughout the United States;

That's some pretty broad power that is enumerated right there.

to ... provide for the ... general Welfare


Actually, it is not, unless you take it out of context as you have done.

Allow me to quote Jefferson:

"[T]he laying of taxes is the power, and the general welfare the purpose for which the power is to be exercised. They [Congress] are not to lay taxes ad libitum for any purpose they please; but only to pay the debts or provide for the welfare of the Union. In like manner, they are not to do anything they please to provide for the general welfare, but only to lay taxes for that purpose."

Meaning that, to the founders, that section was explicitly about collecting taxes and does not grant wide, overarching powers to legislate anything touching the "general welfare" of the population.


OK, substitute "it would be a valuable tool in carrying out their mission" for "making their job easier." The point is that it isn't extraordinary for law enforcement to want investigative powers.

If you watch some interviews with Michael Hayden, you'll see him saying just what you want, and I think someone who is a former director of both the CIA and NSA counts as a high level player. Autoplay video, but read the text:

http://www.usatoday.com/story/news/2016/02/21/ex-nsa-chief-b...


> OK, substitute "it would be a valuable tool in carrying out their mission" for "making their job easier." The point is that it isn't extraordinary for law enforcement to want investigative powers.

Sure. Then I'd just circle back to my original point which is there is disagreement over how to keep the public safe. That's the cause of the problem, and we're missing someone who can bridge that communication gap.

> If you watch some interviews with Michael Hayden, you'll see him saying just what you want, and I think someone who is a former director of both the CIA and NSA counts as a high level player.

I've watched several. The one you cite is actually a bit old. In more recent interviews, he sides even more with Apple.

Hayden definitely brings a lot of credibility to Apple's side. Unfortunately he's not in a position to call a meeting between the tech industry and the DOJ plus Obama to settle their differences. In fact, nobody is except the public. The public will ultimately decide this through their voice and vote. If we sit back and do nothing, I imagine we would see backdoor legislation pass quickly. So far, we've been vocal enough to prevent Feinstein's bill from being released. Let's keep it that way and start pushing back against the DOJ. We can play offense too by asking the FBI to share its technique with Apple.


It's not a communication gap. The sides understand each other just fine.


The FBI brought this to court and demanded, quite vehemently in their last brief [1], what they wanted. They chose to bypass the option of further discussing the issue with Apple outside of a court room. Whether you feel Apple or the FBI was being stubborn, that is not good communication.

In my opinion, the government needs to make some deposits into its emotional bank account with technologists to make up for the damage it has done.

[1] https://www.techdirt.com/articles/20160310/18161233865/we-re...


But I think we could craft a talking point version of this argument that unsubtle busy people could understand.

"The FBI is trying to compromise your data security, for the sake of their job security."


Not bad!


I agree with you, and I think this will eventually lead to a world where governments are unable to exert meaningful influence on large corporations. We're already starting to get there; I have a feeling that if the Supreme Court had forced Apple to write a custom version of iOS, things could have gotten really messy very quickly -- there were rumors that Apple's entire iOS engineering team was ready to resign if the case went the wrong way. It's plausible to see a scenario where Apple says "You know what? Fuck it, we're based in Ireland now."

Ultimately, I don't think governments are designed to deal with corporations that make as much money as a company like Apple does. These companies are the size of governments -- if Apple decided it wanted to hire a bunch of mercenaries and take over a small country, it could probably do so (if it didn't mind getting embargoed by whoever was friendly to the country they took over).

I wouldn't be surprised to see corporate sovereignty become a big international issue in our lifetimes. International law is a huge grey area, and I expect companies to exploit that to their advantage to avoid enforcement actions by individual nations.


> It's plausible to see a scenario where Apple says "You know what? Fuck it, we're based in Ireland now."

Maybe, but I think you're putting the cart before the horse. We're not at that stage right now. Right now we can turn the tables on the FBI and demand they contribute back to our own ability to secure ourselves. It's not too tough to describe the issue to the general public.

The FBI would ordinarily help businesses identify security vulnerabilities in their products, such as a flaw in a bank vault, because it makes the public safer to enable the bank to secure itself. Law enforcement regularly recommends certain bike locks, car systems (note their recent notification about remote exploits [1]), etc. over others. In this case, due to disagreement about how to keep the public safe, the FBI is refusing to cooperate with the general public who own iPhones.

[1] http://www.ic3.gov/media/2016/160317.aspx


> the FBI is refusing to cooperate with the general public who own iPhones.

Which I think is a perfectly rational thing for the FBI to do in the absence of a law stating that they must do so. I don't think it's the best thing for democracy, but if I were a senior-level FBI official, I'd be trying to give my organization as many tools as I could to do its job. The director is a man with a small, narrowly defined scope: investigate crimes as effectively as possible. It's not his job to think of the repercussions.

Which is why our current decade-long legislative deadlock is fucking killing us. The world today is nearly unrecognizable from the one in 2006, and in the absence of leadership from Congress, the executive branch (which includes the FBI and most other non-military law enforcement agencies) has to step in and take control.

Really, the problem is that a lack of leadership from Congress has created a power vacuum that Obama has been publicly very reluctant to fill. But the gap exists, and people in the executive branch under Obama have had no such qualms expanding their power into areas that Congress just hasn't addressed because they're too busy trying to defund Obamacare or ban abortions.

Our legislative process at work, folks.


The people are the most powerful part of the legislative process in a democracy. They, and you, may have forgotten that by not voting and by trying to justify the positions taken by public officials who are sworn to protect the Constitution.

> Which I think is a perfectly rational thing for the FBI to do in the absence of a law stating that they must do so. I don't think it's the best thing for democracy, but...

I've seen a lot of this "understanding" in the news and online. It is not how democracy works. We have the power, not them. We can literally vote them out of their jobs.


> These companies are the size of governments -- if Apple decided it wanted to hire a bunch of mercenaries and take over a small country, it could probably do so

Is this anything new though? I once heard the Dutch West India Company described as "Exxon Mobil with guns."


I.e., new mutations of an older phenomenon - https://en.wikipedia.org/wiki/East_India_Company


> Fuck it, we're based in Ireland now

Apple has an enormous investment in their design team in Cupertino. It would be a huge blow to their product development capability to start over somewhere else.

It's not enough to say "HQ is over here bro"; court orders still work in California. Then again, this whole All Writs effort to "build me a tool to help my investigation" seems to break new ground. Maybe it wouldn't even be enough to pack up and move all R&D out of the US (e.g. injunction on US sales until the foreign Apple company complies).


> Maybe it wouldn't even be enough to pack up and move all R&D out of the US (e.g. injunction on US sales until the foreign Apple company complies).

There are already state level bills proposing this in CA and NY [1]. Those bills would fine manufacturers $2,500 for every phone sold in those states that isn't capable of providing decrypted data. The language originally comes from a white paper by Manhattan DA Cyrus Vance. Feinstein-Burr are working on a similar federal bill which was supposed to be released late last year, then this month [2]. It has obviously been delayed by the public's response to the SB Apple case.

[1] https://www.techdirt.com/articles/20160122/06200833403/calif...

[2] http://www.politico.com/tipsheets/morning-cybersecurity/2016...


"Maybe it wouldn't even be enough to pack up and move all R&D out of the US (e.g. injunction on US sales until the foreign Apple company complies)."

That'd be an interesting possibility. Government, to U.S. consumers: "YOU are not allowed to buy the products you chose, because a foreign company refused to perform work for the U.S. Government." ...


I imagine most of the engineers would respond well to "can you please relocate closer to the giant pile of money in Ireland"


The specific point of bridging is that technology corporations and the federal government should both care deeply about making consumer and corporate technology as secure as possible, given how much of the nation depends on it.

On paper, the right federal agency for this should be the Department of Homeland Security. In reality they have neither the technical expertise nor the political "juice" to compete with the intelligence and law enforcement agencies--who care much more about access than security.

Until this balance is corrected at the federal level, it's going to be a mess. On balance, the government essentially WANTS technology to be insecure right now, so that intelligence and law enforcement staff can do their jobs more easily.


The government isn't a single entity with a single goal. There already exist federal agencies with the goal of increasing security (see http://csrc.nist.gov/groups/ST/toolkit/), while others like the FBI have a vested interest in increasing their powers of investigation. The executive branch has already made its stance clear: weakening encryption should not be the goal of any federal agency:

"We recommend that, regarding encryption, the US Government should:

(1) fully support and not undermine efforts to create encryption standards;

(2) not in any way subvert, undermine, weaken, or make vulnerable generally available commercial software; and

(3) increase the use of encryption, and urge US companies to do so, in order to better protect data in transit, at rest, in the cloud, and in other storage."

http://arstechnica.com/information-technology/2013/12/nsa-sh...


It just seems like federal folks working on security are currently outgunned by the federal folks working on access.

For example, where were the pro-security quotes from NIST in all the FBI-Apple stories? I'm sort of kidding--obviously there weren't any--but the reality is that NIST can't stand up to the FBI, and that's not their role anyway. They set standards, not executive priorities.

If we think of the federal govt as a multi-armed see-saw, where points of view oppose one another from various agencies, then right now the arms in favor of access have a lot more "weight", so the overall system tilts toward them. This was visible in what the President said at SXSW.

What do we see? Pro-encryption messages come from private groups, but pro-access messages come from federal executives. Why wasn't there a senior federal appointee telling Congress that hacking the iPhone was a bad idea? That the FBI had not fully considered all the consequences? Who would that be? The head of NIST?


> It just seems like federal folks working on security are currently outgunned by the federal folks working on access.

I'm not the least bit happy about this, but it's been that way for the entirety of the computer age -- it's not a new development.


I agree. But I think that as computing makes its way into more and more of our lives, it becomes less and less excusable.


> That hurts the government's ability to advance their own technological capabilities and understanding.

I take it that nobody in The White House uses an iPhone, or at least I hope not. Institutionalized ignorance to the degree of shooting oneself in the foot is concerning - far more concerning than the existence of this specific vulnerability.


Tons do. Watch the Congressional hearing [1]. Some representatives hold theirs up when they begin talking.

[1] https://youtu.be/g1GgnbN9oNw


I know I'm stating the obvious here but it needs to be stated: The White House is therefore completely okay with this weapon in, as one example, terrorists' hands. Is this a new war-on-terror strategy or something, placing weapons into the foes' hands?


This is awful reporting.

For weeks, the experts in security had been saying that the FBI does not in fact need Apple's help to get into that phone; that they are just posturing in order to obtain a back door.

This was posted on the ACLU website:

https://www.aclu.org/blog/free-future/one-fbis-major-claims-...

The FBI can simply remove this chip from the circuit board (“desolder” it), connect it to a device capable of reading and writing NAND flash, and copy all of its data. It can then replace the chip, and start testing passcodes. If it turns out that the auto-erase feature is on, and the Effaceable Storage gets erased, they can remove the chip, copy the original information back in, and replace it. If they plan to do this many times, they can attach a “test socket” to the circuit board that makes it easy and fast to do this kind of chip swapping.
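If it helps to make that loop concrete, here's a toy Python sketch of the mirror-and-retry cycle. To be clear, this is purely illustrative: the Phone class, the made-up PIN, and the dict-copy "chip swap" are software stand-ins for the physical steps the ACLU describes.

    import itertools

    SECRET_PIN = "1337"   # unknown to the attacker; recovered by brute force
    MAX_TRIES = 10        # the auto-erase threshold

    class Phone:
        # Toy model: the keys live in NAND ("Effaceable Storage") and are
        # wiped after ten consecutive failed passcode attempts.
        def __init__(self):
            self.nand = {"keys": "intact", "failed": 0}

        def try_pin(self, pin):
            if pin == SECRET_PIN:
                return True
            self.nand["failed"] += 1
            if self.nand["failed"] >= MAX_TRIES:
                self.nand["keys"] = "wiped"
            return False

    phone = Phone()
    mirror = dict(phone.nand)  # "desolder the chip and copy all of its data"

    for digits in itertools.product("0123456789", repeat=4):
        if phone.nand["keys"] == "wiped":
            phone.nand = dict(mirror)  # "copy the original information back in"
        pin = "".join(digits)
        if phone.try_pin(pin):
            print("PIN recovered:", pin)
            break

The whole attack is just that restore-and-keep-guessing loop; the hard part is the soldering, not the search.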

Maybe the FBI has a secret new exploit --- or maybe they just did the above method, getting the "third party" help with the desoldering and the attachment of a socket, and hardware for reading NAND.

It's just speculation.

Even if the FBI are exploiting some hole, that is better than them having a back door, which is essentially a security hole put in by design.

The article is speculating, and it's conflating security holes with back doors.


I don't know about anyone else, but the iPhones seem to be the focus of all the academics that do security research.

I'm always seeing X done to the iPhone.

Just these past few weeks I remember seeing a key extraction done via EM emitted from the phone.


Yes. I'm not worried if the police seize a locked safe from someone and hire a safecracker to open it. That doesn't mean that I shouldn't use that safe. It may still be, in fact, the best safe for me to use.


Schneier has never had a strong intuition for how software vulnerabilities work. In the 2000s, he wrote articles in his newsletter blaming eEye (a security research firm then the home of Derek Soeder, Barnaby Jack, Ryan Permeh, and the like) for publishing their vulnerability research. He is at turns anti-disclosure, pro-disclosure, and all points in between.


My understanding is that "disclosure" is a nuanced thing, time-wise: responsible disclosure is to mention the vulnerability first to someone who can fix it and intends to, give them the time to write, test, and ship a patch, and then publish it. Publishing it earlier sounds very unreasonable, especially before handing the details to the manufacturer.

I am not familiar with what Schneier said 15 years ago, though. He might have chastised someone for doing that; he might have changed his mind. If he hasn’t in a decade and a half, I’d be shocked.

I realised this morning that 15 years ago, I was proud of using SAS, “the most powerful analytics software there [was]”. I changed.


There is no such thing as "responsible disclosure". That's a term invented by vendors to coerce independent researchers into doing free work for them. Semantic drift has somewhat legitimized the term, but I think it's important we remember why it was conjured in the first place.


What would you call Google Project Zero's 90-day policy?


A decision they had every right to make for themselves, but that they have no right to impose on anyone else. I'm pretty sure Tavis Ormandy agrees with me.


It is imposed by Google, not the vendor of the broken software.


Doesn't Google do free work for the vendors with this project? I would assume so.


Disclosure to the vendor is very different from publishing a vulnerability. I don't see any inconsistency here.


I find this rather silly. iPhones didn't get less secure because the FBI used a known vulnerability to break into one. iPhones were that insecure all along, and the only thing that changed is that we now know it.

The article further states, "There’s no such thing as a vulnerability that affects only one device." Except that I'm pretty sure that whatever attack the FBI used relied on the fact that the phone in question had a short passcode set. I'd bet dollars to donuts that whatever attack they used would not work against my phone with a secure passphrase.

I'm usually a fan of Schneier, but I think he really missed with this one.


This is pedantic, in my opinion. The main point of the article was that the FBI found a vulnerability. So before, it was likely an unknown vulnerability; now it is known by law enforcement. The whole point of the article was that responsible organizations disclose vulnerabilities so that companies can make their software/hardware more secure. In this case the FBI is sitting on it so they can continue to use it. Last week the FBI couldn't break into my iPhone. Now they /might/ be able to (it's a 6, so who knows if it is fixed). And Apple can't fix what they don't know about. Therefore, my confidence in the security of my iPhone has diminished, and it may be less secure than it was last week.


According to the article, the vulnerability was already known by a third party, and all the FBI did was ask/pay that third party to use it on this phone. I don't see how that act alters the security of iPhones.

I agree that the FBI should inform Apple of the vulnerability they used. I just don't think that failing to do so makes iPhones less secure. Failing to do so leaves them as they are, and informing Apple would make iPhones more secure.

I don't see this as pedantic, because it's the difference between the FBI actively harming us and the FBI merely not helping us.


I do not believe the FBI can legally report the vulnerability. They did not find it, they do not "own" it. They have used a service to get to the phone's data. Even if they reverse engineered the exploit used, it would be a breach of contract to disclose the vulnerability.


I'm not sure what "ownership" is supposed to mean in this context. Legally speaking, how does one "own" a security vulnerability?


If phones are no less secure when the existence of a vulnerability is disclosed, why do we prefer security researchers to notify the manufacturer before disclosing the vulnerability?

And perhaps the vulnerability in question wouldn't work against your iPhone, but how many millions of iPhones still use a short passcode? This vulnerability could still affect all of those phones.


Disclosing vulnerabilities makes us more secure, that's why we want people to do it. That doesn't imply that failing to disclose them makes us less secure. Not doing anything leaves us where we started.


Bruce Schneier is an advocate of full disclosure. He does not believe that companies should be notified first.


Schneier has tended toward more high-level and pragmatic assessments over time. Maybe because of his "security evangelism" efforts he's trained to think more like the pragmatic layperson.

Obviously all phones technically have the same vulnerability both before and after the FBI decision. But now there is a larger pragmatic chance of it being exploited. That is the point AFAIK.


Sad to see this downvoted, it's as far as I can see absolutely accurate.


To be clear, the FBI isn't compelling Apple to leave the vulnerability open, are they? My understanding is they simply found/were informed of a vulnerability that existed in this model of the phone, and are not informing Apple of it. If Apple themselves finds the vulnerability and patches it, the FBI can't do anything about it.


Correct. And to be fair, the FBI could have received the vulnerability from a foreign state that is also opposed to terrorism but did not want their identity or methods known.

I.e., they may have reasons for not disclosing the vulnerability, and it's a shame that there's so much mistrust of the government [caused by actions of the government] that we can't assume the best.


Without disclosure of the method used by the FBI, this sounds like nothing more than political posturing and a non-story. Apple is no longer able to fight the All Writs Act, and the FBI is no longer compelled to watch one of their favorite misuse-able laws get taken away.


Schneier knows this, and this is a particularly idealistic op-ed. This is just how the exploit market works, and while it would be nice if law enforcement would take the white-hat road, the hazard here is still vastly better than some kind of legal precedent for requiring backdoors.

The good thing about the exploit market is that it is naturally self-limiting: you don't burn a zero-day on a dragnet; you limit its use to high-value (and ideally court-sanctioned) targets.


> ...you don't burn a zero-day on a dragnet;

That depends entirely upon the nature of the vulnerability. Every TLS vulnerability publicly disclosed and then patched comes to mind. The NSA can use those all day long without anybody knowing - see parallel construction.


> ...you don't burn a zero-day on a dragnet;

A for-profit entity shouldn’t.

I’m not sure the FBI has the same objectives. If they do, I’m sure Apple would be happy to pay market-prices for that vulnerability.


There's a bit of irony to this, too. If Apple had been more cooperative earlier, law enforcement probably would've taken the white hat approach. Apple protected users from the FBI, but is now potentially unable to protect them from organised crime.


There is absolutely no proof of that. What they were asking for were the keys to every lock in the world. It would have given them unprecedented power to do whatever they wanted. Apple did the right thing. Users should always come first, especially when privacy is at stake.


They were willing to hand over the phone to Apple and allow all work to take place on Apple premises. How does that affect other phones?


So have you not heard that this would set a legal precedent? Apple would get pestered until they finally create an automatic gateway for turnkey security breaking, not unlike the wiretap portals set up by telecoms - or the frequently abused YouTube DMCA mechanisms.


It only sets a precedent that they can give phones to Apple and have them unlock them, as long as it's possible for them to do so. If Apple decides to build a general backdoor and hand it to the government that's entirely their decision.


In fact, it also only sets precedent if they have to legally force Apple to do it. Apple could've used their discretion to comply in this particular case, avoiding courts, and avoiding legal precedents being set.


> It only sets a precedent...

Yeah, it "only" sets a precedent that the government can compel a company to reverse all the work put into securing a product, no possible market driven unintended consequences there.

> If Apple decides...

Classic victim blaming; "decision" is the wrong word to use here. Constant negative reinforcement leads to conditioned responses - not decisions.


It wouldn't set a legal precedent unless a court ruled on it. It would only have come to that if Apple continued to decline the FBI's request.


Again, victim blaming. How is that any different from the false choice "We can do this the hard way, or the easy way"?

Have you considered what will happen to the number of such requests as devices continue to become more secure?


That's not blaming anyone. It's objectively the circumstance under which a precedent would be created. My understanding of your point was that Apple shouldn't comply in this case because it would set precedent. My point is that choosing to comply was actually one of the ways they could've avoided setting precedent.


> It would only have come to that if Apple continued to decline the FBI's request.

> That's not blaming anyone.

I guess I misunderstood you, because you left out the other potential scenarios: like instead of "Apple continued", the "FBI ceased" - or the "DOJ never attempted to compel Apple under the All Writs Act in the first place". Apple had no control over the situation outside appeasing the FBI; the state controlled every other aspect.

> My understanding of your point...

You think it wouldn't come up further down the road as the FBI demands more and more, after they've already built the tools to meet prior demands?


As devices become more secure, Apple will no longer be able to unlock them.

If you put a backdoor into your phones, you should expect to have to use it.


> As devices become more secure, Apple will no longer be able to unlock them.

That is only true if revoking trust is the only thing left to do to increase iPhone security against every potential bad actor. That isn't the case, and that leaves plenty of time to pester Apple with unlocking phones collected in every investigation from now until then.

> If you put a backdoor into your phones, you should expect to have to use it.

What law is that, the All Writs Act? That is what was being tested, because not everybody shares your certainty about it. The moral problem of the forced labor of an unrelated party is enough to make me disagree. Also, that logic could easily be used to justify the forced inclusion of a backdoor in an otherwise secure device. You can't have it both ways: a secure device and a device manufactured by a company that can be legally compelled to do whatever the state deems "necessary or appropriate".


I don't think labor should be forced, but data should be accessible. Apple had the keys to use the backdoor, and they should have been forced to choose between using them to sign software, or handing them over and letting the FBI sign it themselves. Apple would prefer to sign it themselves, so they can effectively be forced to do so.

I disagree that they can force the inclusion of a backdoor using the same logic. The reason I think they can compel action is that the real thing they want from Apple is information, i.e. the key. Apple can get out of revealing the key by using it themselves.


> ...but data should be accessible.

This is the most important part of the whole thing. Where do you draw the line, is everybody legally obligated to comply with such demands? Well obviously those directly involved with whatever crime is being investigated are exposed to such compulsion, but so are third party service providers. But in the case of these third parties, there are a lot of laws dictating their part in great detail. For its part as a service provider, Apple supplied the user data it had. What the FBI has done is declare loud and clear that they'd like to expand those subject to compulsion to include anyone it designates, for any purpose that isn't explicitly illegal, thanks to All Writs. Now in this case they say, for the victims, they're being exhaustive in pursuing what is most likely a dry hole. But what is to stop them from wielding such authority over every manufacturer of everything that the subject of the investigation touched? How about compelling every luckless individual who lives along the subject's morning commute to turn over all video and image files because it might help them in their investigation. For as crazy as that sounds, consider the fact that there is no logical distinction between any of those scenarios.

> I don't think labor should be forced ... they should have been forced to choose...

You can't believe both of those, because compulsion eliminates choice. That would be like saying "I'm not forcing you to run, it is your choice, but if you don't then I'll kill you".

> ...they want from Apple is information, i.e. the key.

No, they want on demand access - they don't care about any key. Even if Apple were to get rid of the key, there is historical precedent for the state pulling some insane precrimes logic in order to expand authority - see the commerce clause. I'll explain that reference briefly: the federal government has explicitly defined authority over commerce that occurs between states, but it was reasoned that commerce within a state could potentially result in the absence of interstate commerce, therefore the federal government has authority over all commerce... Now imagine that reasoning being applied to a manufacturer who doesn't have the capability to, but could if it wanted, respond to a court order. Oh, and let's not forget the Clipper chip, a nice indicator of the state's position on forcing manufacturers to backdoor consumer products in the interest of potential investigations.


>Where do you draw the line, is everybody legally obligated to comply with such demands?

If I have a warrant, I can force you to hand over all information allowed by the warrant. That's how warrants work.

They're able to get any data a judge agrees with.

I don't think they're able to compel labor.

>How about compelling every luckless individual who lives along the subject's morning commute to turn over all video and image files because it might help them in their investigation.

If a judge thinks there's probable cause, they can require this, so far as I know.

>You can't believe both of those, because compulsion eliminates choice.

They have the ability to compel Apple to hand over data. They don't have the ability to compel Apple to perform labor unrelated to producing data.

>No, they want on demand access - they don't care about any key.

The FBI explicitly asked for the key at one point in the case.

Could you explain how your commerce example is in any way precedent?


> If I have a warrant...

> If a judge thinks there's probable cause...

> They have the ability to compel Apple to hand over data.

Warrants aren't golden tickets; if a warrant exceeds the court's authority then it isn't worth anything. That is exactly what was going to be resolved here: a court ordered Apple to do something that Apple thought exceeded the state's authority - thanks to the FBI, the question remains unresolved (which the DOJ prefers to a precedent going against them).

> I don't think they're able to compel labor.

Subpoenas to appear immediately come to mind; NSLs might reach that level as well - they certainly exceed "hand over data". But the point isn't really about court orders during discovery; it is the final interpretation of an existing law that will determine the nature of forced labor. There are plenty of laws that compel labor.

> The FBI explicitly asked for the key at one point in the case.

No, they presented it as an alternative after they met resistance (aka threatened).

> Could you explain how your commerce example is in any way precedent?

"historical precedent" (an example, a demonstration of disposition), not a legal precedent. I'm disappointed that you focused on that and overlooked the point - the use of "precrimes", "potential" and "clipper chip" is the hint.


If Apple had done as requested, the FBI would not have hired a contractor and we would not know that a contractor capable of this existed.


NPR was playing this story up as if it's a blow for Apple - is it really? Isn't the vulnerability the fact that the phone has no secure enclave, and so the timeout/wipe can be worked around by external access to the flash?

Isn't that the whole reason the newer phones were upgraded?

Older device fails, newer device with improved security doesn't. That's not a blow to Apple, that's the way the world works.


You are exactly right: If the vulnerability does exist, and the FBI did contract it out, this just shows that the vulnerability is very expensive and only available to major state actors. In fact, a single government - the U.S. - couldn't do it alone.

It's possible this is actually a script-kiddie-exploitable vuln, but I highly doubt it.

All that we have learned is what we suspected: Some "outdated"/older phones are infinitesimally less secure, in a way that only international collaboration between some of the world's most powerful hackers can break. And it's not really the FBI's fault. They're not sharing it with anyone. (Probably).

So it's not a blow to Apple at all. They already moved on before this. This phone/vuln was a stepping stone for them that is already obsolete. In fact, this is a boon to Apple, because people will be scared into upgrading. (Except for the hopeless sheeple who "have nothing to hide" and prefer to share their personal info with the world's gubmints).


As far as I've been able to figure out, the Secure Enclave does not have its own storage. The proposed attack of cloning the phone's flash memory would work just as well on a new iPhone 6s. A lot of people are assuming that the Secure Enclave would prevent this attack, but I've not yet been able to find any basis for that assumption.

The main security advantage of newer phones in this context is that Touch ID makes it practical to set a more complex passphrase. The flash cloning attack only works if you have a short passcode set, because it relies on brute force. If your passphrase is complex enough that it can't be brute forced even when the software controls are removed, then you're still safe.
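Some rough arithmetic shows why. This is a sketch assuming the ~80 ms per on-device passcode attempt that Apple's iOS security whitepaper cites for the key derivation (the key is entangled with the hardware UID, so guesses can't simply be offloaded to a faster machine):

    # Worst-case brute-force time at ~80 ms per attempt.
    PER_GUESS_S = 0.080

    def days(search_space):
        return search_space * PER_GUESS_S / 86400.0

    print("4-digit PIN:      %.2f days" % days(10**4))    # minutes
    print("6-digit PIN:      %.2f days" % days(10**6))    # under a day
    print("8-char lowercase: %.0f days" % days(26**8))    # ~500 years
    print("10-char alphanum: %.1e days" % days(62**10))   # effectively never

The software retry limits only matter for the first two rows; the last two are protected by the math itself.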

I'd wager that when the iPhone 7 ships this fall, it will have an upgraded Secure Enclave with internal storage that can prevent this attack entirely.


That's absolutely correct, come to find out. For some reason I thought the SE was responsible for holding some of the keys and the wipe counter, but instead it's a section of the NAND flash called "Effaceable Storage".

Their own whitepaper defines it:

A dedicated area of NAND storage, used to store cryptographic keys, that can be addressed directly and wiped securely. While >>it doesn’t provide protection if an attacker has physical possession of a device<<, keys held in Effaceable Storage can be used as part of a key hierarchy to facilitate fast wipe and forward security

(emphasis mine)

However, it appears that the FBI attack only worked because the people in question used a 4-digit PIN. A strong passcode is the way to go and protects you from these kinds of brute-force attacks.


I initially assumed the same as you, because it would just make so much sense.

And yes, it looks like all of this came down to the short PIN. It's likely a six-digit PIN would still fall, but a proper passcode would have made this whole affair moot. It looks like even the older phones are still totally secure in that case, although the lack of Touch ID can make it somewhat impractical.

This is a major problem I have with the "iPhones got less secure" idea. Apple has gone through heroics to make short passcodes somewhat secure, but there's only so much you can reasonably expect there.


For all we know, it was because the device was owned by the San Bernardino government (despite the FBI screwing that up) that they were able to access it.

This is all smoke and no fire.


Is this just FUD? Simply confirming the vulnerability seems likely to lead to it being plugged, whether or not the FBI reveals their methods, in effect doing the opposite of what the title suggests.


They confirmed a method to get in existed, but did not reveal

- what the scope of its usefulness is (particular kinds of hardware or software revisions)

- whether it still works on modern platforms

Without even a vague idea of where to look or whether it's still there, all we have is a guarantee that _something_ existed, sometime.


How can it be plugged, if Apple doesn't know what it is?


Because Apple is the OEM of the hardware and software involved, things tend to get back to them, even if only in rumor form.

And knowing their software so well gives them intimate knowledge when tracking down vulnerabilities.


FUD or not, if it's from the Washington Post, you can almost guarantee it's not a good source.


We'll find out based on how many of the similar cases are withdrawn.

If the answer is zero, they're probably lying.


I find myself divided on this article, as a person who values security very strongly.

1. If the vulnerability the FBI used worked because the device was an iPhone without a secure enclave, Apple probably knows how they did it, but they can't really fix devices that have already shipped without the security features. While this obviously hurts users of those phones, every phone going forward won't have this issue, and this won't be replicable on a mass scale.

2. Because this was a vulnerability that was found, not intentionally created, there is a high likelihood that the bug will be found again by security researchers, or at the very least paid for handsomely by Apple. This isn't necessarily true (Heartbleed existed for 2 years without being noticed), but it means that the vulnerability has a timetable that rapidly closes. This is far different from an _introduced_ backdoor/vulnerability, where Apple knows exactly what can be used to get into a device, but has their hands tied by the government, which would _unilaterally_ make our phones less secure. I don't like buying a door lock if I know there's a master key that can open any of the doors of that brand.

3. The author, I feel, is misleading when he says things like "A vulnerability in Windows 10, for example, affects all of us who use Windows 10". Even if a piece of software has a vulnerability, that vulnerability could perhaps only be exploitable under certain conditions, like software running on certain hardware (e.g. without a secure enclave), or with certain configurations (short passcodes, e.g. 4 digits). It also can be highly theoretical or impractical to do on any sort of scale -- if the vulnerability involves reading data off a hard drive using an electron microscope to check for magnetic signatures, I'm not going to be too worried that it will be abused, as the man-hours to have it work for a single case would be astronomical and only feasible in the most extreme circumstances.

It's probably very frustrating on Apple's part that the FBI found a vulnerability that they (likely) don't know about, and in an ideal world, governments would disclose those vulnerabilities to make us more secure. But as long as the software and hardware continue to get more secure and not intentionally crippled, any benefit they derived feels short lived at best.


I am sure Apple knows exactly what they did; it's their system, and their own hacking team likely finds stuff like this. The key is whether anyone else in the US is able to purchase this hack to unlock a phone: if they do and try to reference evidence based on it, a judge will require that the process be disclosed.


It always was insecure; now we just know for sure.


We do not know for sure.


If there's a will, there's a way.


I think this just feeds off the myth that the iPhone was invulnerable to exploit. The average Washington Post reader isn't going to know the intricacies of mobile security. NPR, and to some extent this article, makes it seem like the iPhone was previously secure, and now less so. There is no data supporting either hypothesis, but it's likely that this was not a new exploit that the FBI used.

I am curious, however, as to whether this affects only the 5c or if it affects the newer 6/6+/6s/6s+.


I have a CS degree and I don't know the intricacies either.


Given how much corruption WikiLeaks has revealed, maybe less privacy is not a bad trade-off.

What amazes me is our society's tendency to call for more privacy, which favors the more powerful, hence those more likely to be corrupted.

If we wanted to make it easy for our leaders to be corrupted, I would definitely call for more cryptography.

As for me: aside from the normal "shameful" stuff common people have, I want privacy too. Like anyone, I have things I want to hide. But it is okay if I get _caught_. I will not say everything I did is okay, because I have still indirectly hurt people. There is stuff I want to hide to protect myself from the intolerant crowd.

But guess what: as much as I could, I went and presented my apologies and took responsibility for my own shitty actions. And for the stuff I should not be ashamed of, I don't see the need to hide it; in a democracy we have the freedom to fight for our opinions.

None of my stupid stuff requiring privacy has led to blood being shed, exploitation, or making the market noncompetitive.

With greater power/wealth should come greater transparency. And iPhones are definitely more expensive than most phones.

So let's remember that it is often the gaze of others that makes us more virtuous, and let us all accept living in a house made of glass (except for the bathroom and the bedrooms).

In a fair, competitive market, access to information is symmetrical. In a real democracy, governments are expected to openly enforce the choices of the voters.

In a world tending towards virtue, there is no need for more privacy.


The "vulnerabilities equities process", adopted in 2010, requires the FBI disclose the exploit details to Apple on request.

More info can be found in am EFF case: https://www.eff.org/files/2014/07/01/eff_v_nsa_odni_-_foia.p...


Maybe I misread the redacted doc [0], but from what I see, it is sent to the Equities Review Board (ERB), which then determines whether they should disseminate. I was unable to find the requirement to disclose. Can you point to it?

0 - https://www.eff.org/document/vulnerabilities-equities-proces...


There are reasons to doubt the FBI is being honest about this matter: Firstly, they must have known they could desolder and copy the chip containing the PIN-protected key, replacing it if it were erased. This method was put forward by multiple security experts. So the FBI started from a dishonest position regarding Apple being the only source of assistance.

Now they claim that a mystery exploit emerged and was put to use between Sunday night and Tuesday. That's not enough time to crack the phone using a pile of chips and brute-forcing the PIN.

Since the FBI started from a dishonest position, hasn't spent enough time to plausibly have used the known approach, and has carefully avoided saying they actually got keys and/or encrypted data out of the phone - or that they did anything at all with the phone - there is a more than small chance that the FBI has simply moved their own vague goalposts and declared that they have everything they need.

Not honest or ethical.


There's a typo in the URL for the "process" link in "They even have a process for deciding what to do when a vulnerability is discovered."

Should be https://www.whitehouse.gov/blog/2014/04/28/heartbleed-unders...

It's interesting that the government (at least this part: "Special Assistant to the President and Cybersecurity Coordinator") recognizes the pros/cons:

> Disclosing a vulnerability can mean that we forego an opportunity to collect crucial intelligence ... Building up a huge stockpile of undisclosed vulnerabilities while leaving the Internet vulnerable and the American people unprotected would not be in our national security interest.


Ideally (for the FBI), the FBI would order Apple to build a backdoor into their system. This is functionally the same as ordering Apple not to patch a known vulnerability. In turn, this is functionally the same (in the short term) as not disclosing a known vulnerability to Apple.

While the article's title is misleading, the FBI might as well be taking an active role in making the iPhone less secure. In communicating to the non-technical public, insisting on elaborating the difference is being pedantic.


Why did the FBI drop the lawsuit against Apple in the first place? They both made it sound like the stakes were much higher than unlocking the one device.

Or, the FBI managed to unlock this phone, and so they can unlock any other device too, now and forever? They don't need a backdoor any more?

Wouldn't the logical next step from Apple's side be to protect their users in their future devices? Wouldn't then the FBI have to sue them again?

Feels like we are missing key facts here, and I wonder why Apple is not talking now.


It's a writ. It is legally valid only if there is necessity, and the court must be informed if that changes.


Hilarious. Apple wouldn't give the FBI access but now the FBI should give Apple access to their information?

This conversation has jumped the shark.


> We don’t know what the method is, or which iPhone models it applies to.

Where is the proof? There is no proof.


How do you prove something you don't know?


Well, the burden of proof lies with the person who states something as fact. What is the title of that blog post again? Right. Now let the author show you the proof. If he doesn't know, then he shouldn't talk about it like it is a fact. The only fact in this whole debacle is that the FBI has been caught lying again, again, and again. So they were lying then, and people should believe them now?


> Well the burden of the proof lies with the person that states something as a fact.

Sure, but they aren't really stating a fact. They are saying they don't know if it's fact or not.

I agree the title of the post is terrible and inaccurate, but the formatting of your comment is focused on the quote, not the title.


Can we file a Freedom of Information Act request to have the method the FBI used disclosed?


TBH, there have been so many security holes in the iOS software... this software did not "get less secure"; the insecurity has been there all along. If you're an iPhone user, you notice the CONSTANT iOS security updates... most of them are because someone found a hole and it got patched up! I just hope that the FBI lets Apple know of this hole once the case is over (but I doubt that will happen), or that Apple figures out how they did it. If Apple can't find out, it won't be very easy for others to get into iPhones either... but it's still possible.


Changelog: iOS 9.4 1st April 2016 - Fix FBI backdoor


Bruce singles out the FBI for blame in this case, but if that's right, then whatever third party helped the FBI also deserves blame.


This is the company the FBI supposedly contracted (http://www.cellebrite.com/Pages/cellebrite-solution-for-lock...); if that's the case, it's just older devices that don't have the Secure Enclave, which is not surprising.


Did anyone else get the ad from Huawei? "Focus Perservere Breakthrough" ..delicious..


I don't know why I still haven't seen this here, but there are plenty of videos on YouTube on how to bypass the 10-wrong-passcode limit. I don't understand; it looks so easy, and still people are talking about encryption modules, etc... Am I missing something?


Blame the FBI?

How is it the FBI's fault that this particular iPhone was not secure?

It's mystifying to me how far Apple advocates will go to not blame their favorite company.


This is bad reporting.

The iPhone did not get less secure. It has always had this security hole. I, like many others here on HN, believe the vulnerability to be related to the lack of a secure hardware biometric / encryption module. If this is the case, then your iPhone probably did not get less secure -- such exploits would only work on iPhones prior to the 5S (I think? The 6 series phones are covered for sure). Basically, if your phone supports Apple Pay, you're good.

And if it's the case, it was a design decision rather than a bug. The secure hardware wasn't ready for consumer use when those devices were designed (and even then, the first generation of iPhone fingerprint readers was pretty bad). Apple knew a desoldering attack could be successful, but such attacks are very expensive, require long-term physical possession of the phone, and are impossible to pull off covertly. There may be another way to pull off a direct hardware access attack without actually needing to desolder the flash memory, but that would still only be effective in the absence of a hardware security module. The only defense against this type of attack is a secure hardware encryption module like the one Apple included to support Apple Pay (because the banks likely insisted on this level of security).


Funny to call it reporting when it's more of an editorial by the renowned security researcher Bruce Schneier.

However, I'll defend his point: take the Monty Hall problem [https://en.wikipedia.org/wiki/Monty_Hall_problem]. The probabilities change, even when a door you didn't pick [and doesn't hold the prize] is opened.

I think this is a fair analogy. We've now gained knowledge about the existence of a vulnerability, and that knowledge makes everyone's device less secure [for a variety of reasons -- eg others know to look for the vulnerability, others know it can be bought, etc]. It doesn't matter that the vulnerability "has always been there" if nobody knew about it.


> The probabilities change, even when a door you didn't pick [and doesn't hold the prize] is opened.

That's the common misunderstanding of the problem. Most people think that the probabilities go from 1/3, 1/3, 1/3 to 1/2, 1/2 after choosing a door and having Monty Hall open one of the others. The probabilities don't change.

The probabilities are 1/3, 1/3, 1/3 at the start. After you choose a door, they're still 1/3, 1/3, 1/3. Or put a different way: 1/3 (your choice), 2/3 (what you didn't choose). When Monty Hall opens one of the other doors, the probabilities don't change. It's still 1/3 (your choice), 2/3 (what you didn't choose).

However, because he's removed one of the doors from the problem, the 2/3 probability is now applied to the remaining door.

That can be viewed as "probabilities changing" for the remaining door, but it's better described as the probabilities being shared between the doors you didn't choose.


This is a great clarification of how this problem works. I STILL can't grasp why you'd switch. Since you have no idea which door it is, couldn't the 2/3 probability be applied to either his door or your door? For all you know, the one that he removed was just a random one of the goat doors. Your chance of picking the car was 1/3 before; if you could have the car already, why would it be better to switch now that he removed whichever one would definitely be a goat? It seems like you still have a 50/50 chance of getting the car, and the one that remains (unpicked/unremoved) is just as likely to be the other goat as it is to be the car.

That said, I made a simulation of it where it does the following: 1. Assign random locations behind doors, 2. Player randomly chooses door, 3. Host excludes door (random if both are goats, goat if one is a car), 4. Player either switches, or doesn't switch based on what's chosen for that simulation (the full set gets one or the other)

After running 50,000,000 trials and dividing successes by tries, my number is nearly exactly the same between the two. Is my simulation working incorrectly?


I find it clearer if you have 100 doors. Pick one. The host eliminates 98 doors [with no prize behind them].

Do you switch? Or do you stick with your original guess? As the parent mentioned, switching gives you a 99/100 chance of being correct, staying gives you a mere 1/100 chance.


Thanks for clarifying a very counter intuitive problem. I always struggled with this one, not anymore :) Sometimes scaling things up is the best way to understand things. Who knew!


I don't see why the other door has 99x more probability of being correct. It's still 50/50, original door or remaining door, the other 98 don't change the odds.


Your initial pick has a 1/100 chance of being right. If you switch, and you were right, you lose. On the other hand, your initial pick has a 99/100 chance of being wrong, and if you were wrong, and you switch, you win. So switch.


Let's play a game to make this more clear. I'm thinking of a number between 1 and 1000. First, guess what it is.


2


Well, before you finalize your guess, I'll give you a hint: it's either 2, or 437. It's not any other number.

Keep in mind I made my choice before we started, and it hasn't changed.

Do you want to change your guess to 437, or stick with 2?


I've always flip-flopped on grasping the Monty Hall thing (sometimes I'd think I understand, other times not so much)... but this little guessing game really clarified it for me, once and for all. Thanks!


This is a nice reinterpretation.


> the behavior of the game host as he will never reveal the door with the car

This is the key bit. If the door you picked is the correct one (1% chance) the host can open any door. If you picked incorrectly (99% chance) he can open any door except the winning door, so the remaining door will be the correct one 99% of the time.


From the initial setup you have a 1/3 chance of choosing the correct door and a 2/3 chance of choosing the wrong door.

The key to this problem is the behavior of the game host as he will never reveal the door with the car. So, after he reveals the contents of one of the doors, if you choose not to switch, you still only have a 1/3 chance of having chosen the door with the car.

On the other hand, if you choose to switch, 2/3 of the time you will have chosen incorrectly initially, and under the switching strategy, in those 2/3 of cases you are guaranteed to win the car. Hence, you have a 2/3 chance of winning under the switching strategy, vs. a 1/3 chance of winning under the "staying" strategy.

Hope this clarifies things.


Your simulation is not correct.

Look at it this way: 1/3 of the time, you pick the right door up front. If you switch, you lose.

2/3 of the time, you didn't pick the right door. If you switch, you win.

So switch.
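
For comparison, here's a minimal Monte Carlo sketch of the standard rules (Python 3; just a reference implementation to diff your simulation against, not your code):

    import random

    def play(switch):
        doors = [0, 0, 0]
        doors[random.randrange(3)] = 1  # hide the car
        pick = random.randrange(3)      # contestant's first pick
        # Monty opens a door that is neither the pick nor the car
        opened = random.choice([d for d in range(3)
                                if d != pick and doors[d] == 0])
        if switch:  # take the one door that is neither picked nor opened
            pick = next(d for d in range(3) if d not in (pick, opened))
        return doors[pick] == 1

    trials = 100000
    for label, strategy in (("stay", False), ("switch", True)):
        wins = sum(play(strategy) for _ in range(trials))
        print("{}: {:.3f}".format(label, wins / trials))  # ~0.333 / ~0.667

A correct implementation of the stated rules converges to 1/3 for staying and 2/3 for switching; if both strategies come out near 0.5, the host-reveal logic is the first place to look.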


> For all you know, the one that he removed was just one random one of the goat doors.

The key observation is that the problem states that Monty Hall opens an empty door. It's not clear whether Monty knows which doors are empty, but it doesn't actually matter, since the problem states that he will always open an empty door.

As others have said, your choice as a contestant is whether to take one door (the one you initially pointed at), or to take the other two doors (at least one of which is obviously empty). Stated that way, it's pretty obvious that taking 2 doors is better than taking 1 door.


The important bit is that Monty will never choose to show you the winning door, and he will always show you a door after you pick a door and before asking you if you want to switch. He always opens a losing door.

The location of the prize and the player choice of door are independent events. Monty's choice is dependent on whether the player choice and the prize location are the same. Monty will never open the door the player picked. Monty will never reveal the prize. Those rules generate exploitable information for the player.

Here is the complete probability tree, and two possible strategies of play:

  Prize behind door A;  P = 1/3                           ALWAYS STAY strategy:
    Player picks door A;  P = 1/3 * 1/3 = 1/9               P(WIN) = 1/18 + 1/18 + 1/18 + 1/18 + 1/18 + 1/18
      Monty shows Door B;  P = 1/3 * 1/3 * 1/2 = 1/18              = 6/18
        Player stays with A;  WIN                                  = 1/3
        Player switches to C;  LOSS                         P(LOSS) = 1/9 + 1/9 + 1/9 + 1/9 + 1/9 + 1/9
      Monty shows Door C;  P = 1/3 * 1/3 * 1/2 = 1/18               = 6/9
        Player stays with A;  WIN                                   = 2/3
        Player switches to B;  LOSS
    Player picks door B;  P = 1/3 * 1/3 = 1/9             ALWAYS SWITCH strategy:
      Monty shows Door C;  P = 1/3 * 1/3 * 1/1 = 1/9        P(WIN) = 1/9 + 1/9 + 1/9 + 1/9 + 1/9 + 1/9
        Player stays with B;  LOSS                                 = 6/9
        Player switches to A;  WIN                                 = 2/3
    Player picks door C;  P = 1/3 * 1/3 = 1/9               P(LOSS) = 1/18 + 1/18 + 1/18 + 1/18 + 1/18 + 1/18
      Monty shows Door B;  P = 1/3 * 1/3 * 1/1 = 1/9                = 6/18
        Player stays with C;  LOSS                                  = 1/3
        Player switches to A;  WIN
  Prize behind door B;  P = 1/3
    Player picks door A;  P = 1/3 * 1/3 = 1/9
      Monty shows Door C;  P = 1/3 * 1/3 * 1/1 = 1/9
        Player stays with A;  LOSS
        Player switches to B;  WIN
    Player picks door B;  P = 1/3 * 1/3 = 1/9
      Monty shows Door A;  P = 1/3 * 1/3 * 1/2 = 1/18
        Player stays with B;  WIN
        Player switches to C;  LOSS
      Monty shows Door C;  P = 1/3 * 1/3 * 1/2 = 1/18
        Player stays with B;  WIN
        Player switches to A;  LOSS
    Player picks door C;  P = 1/3 * 1/3 = 1/9
      Monty shows Door A;  P = 1/3 * 1/3 * 1/1 = 1/9
        Player stays with C;  LOSS
        Player switches to B;  WIN
  Prize behind door C;  P = 1/3
    Player picks door A;  P = 1/3 * 1/3 = 1/9
      Monty shows Door B;  P = 1/3 * 1/3 * 1/1 = 1/9
        Player stays with A;  LOSS
        Player switches to C;  WIN
    Player picks door B;  P = 1/3 * 1/3 = 1/9
      Monty shows Door A;  P = 1/3 * 1/3 * 1/1 = 1/9
        Player stays with B;  LOSS
        Player switches to C;  WIN
    Player picks door C;  P = 1/3 * 1/3 = 1/9
      Monty shows Door A;  P = 1/3 * 1/3 * 1/2 = 1/18
        Player stays with C;  WIN
        Player switches to B;  LOSS
      Monty shows Door B;  P = 1/3 * 1/3 * 1/2 = 1/18
        Player stays with C;  WIN
        Player switches to A;  LOSS
If your simulation does not conform with the mathematically derived result, your simulation is incorrect.
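
If it helps, here's the same tree checked exactly (no sampling noise) in a few lines of Python 3 with exact fractions; a minimal sketch:

    from fractions import Fraction

    third = Fraction(1, 3)
    stay_win = switch_win = Fraction(0)
    for car in range(3):           # prize placement, P = 1/3 each
        for pick in range(3):      # player's pick, P = 1/3 each
            # Monty may open any door that is neither the pick nor the car
            options = [d for d in range(3) if d not in (pick, car)]
            for shown in options:  # P = 1/2 or 1/1, matching the tree
                p = third * third / len(options)
                remaining = next(d for d in range(3)
                                 if d not in (pick, shown))
                if pick == car:
                    stay_win += p
                if remaining == car:
                    switch_win += p

    print(stay_win, switch_win)    # prints: 1/3 2/3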


The most clear way that I've heard it described is by scaling it up to 100 doors or so, and then removing 98 doors instead of removing 1. (This is mentioned elsewhere in the thread.) It would be interesting to see the results of your test using this method. It seems intuitively clear with 100 doors that switching is the better method.


This is the one article that made me fully grasp it forever. I've known about this for ages, but it never really clicked until I read it put this way:

http://waitbutwhy.com/2016/03/the-jellybean-problem.html


The table here [1] helped me understand it when I first heard of this problem.

[1] https://en.wikipedia.org/wiki/Monty_Hall_problem#Simple_solu...


I only started to understand how to look at this problem once I realized the host was going to eliminate a bad door _no matter what was chosen_. Of course if you have already selected a bad door he won't eliminate it.

Another useful exercise is to stretch the number of doors to some large N and suppose the choice is posed repeatedly as doors are eliminated. If you get anywhere from 2 to N chances to decide, and the number of bad doors shrinks each round, it seems more obvious that you should always switch.


Not switching until the end should work out better than switching multiple times.

4 doors, in 8ths to simplify fractions.

  2/8,2/8,2/8,2/8 => initial conditions, choose leftmost
  2/8,3/8,3/8 => random bad door removed, choose a random 3/8
  3/8,5/8 => remove another door.
Next choice is only 5/8, whereas if we had waited it would have been 6/8.
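
A quick simulation of this variant (Python 3; rules as described above, with the host repeatedly opening a goat door you don't currently hold) bears those fractions out; a sketch:

    import random

    def play(n_doors, switch_every_round):
        car = random.randrange(n_doors)
        closed = list(range(n_doors))
        pick = random.choice(closed)
        while len(closed) > 2:
            # host opens a goat door that isn't the current pick
            goat = random.choice([d for d in closed
                                  if d != pick and d != car])
            closed.remove(goat)
            if switch_every_round:
                pick = random.choice([d for d in closed if d != pick])
        if not switch_every_round:  # hold out, then switch at the end
            pick = next(d for d in closed if d != pick)
        return pick == car

    trials = 100000
    for label, every in (("switch at end", False),
                         ("switch every round", True)):
        wins = sum(play(4, every) for _ in range(trials))
        print("{}: {:.3f}".format(label, wins / trials))  # ~0.75 vs ~0.625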


>> The probabilities change

> That's the common misunderstanding of the problem...

You've got a choice here... you can either interpret "The probabilities change" as meaning the probabilities of a given door being correct at the start (which would be a misunderstanding) or you could interpret it from the pragmatic angle; the probability of success if you switch doors after one is taken away.

Given the context I'm inclined to believe that wrsh07 intended the latter.


This is good reasoning. Unlike other probability puzzles, the new information isn't useful in this one.

P(my first choice was right) = P(my first choice was right | I've been shown a goat behind another door)

It's my second-favorite probability puzzle; my favorite is the one in which the new information seems to have no relevance at all but still changes the probabilities, the "not both boys" problem.


I should have just given the other problem, sorry for the laziness.

Your friend tells you that she has two children. They're not both boys, she adds. What are the odds that they're both girls? (Obviously 1/3).

When she tells you that one child is named Mary, the odds that they're both girls change (to 1/2)! Even though you knew that at least one of the children was a girl, and the name might as well be arbitrary, and the new information seems useless... the odds have changed.

It's nontrivial to work this one out; exercise for the reader.


After first datum (there are two children):

  Presuming A and B to be independent events
  (first child is a girl, second child is a girl)
  P(A) = 1 - P(!A) = 1/2
  P(B) = 1 - P(!B) = 1/2

  P(A and B) = P(B|A) * P(A) = 1/2 * 1/2 = 1/4
  P(A and !B) = P(!B|A) * P(A) = 1/2 * 1/2 = 1/4
  P(!A and B) = P(B|!A) * P(!A) = 1/2 * 1/2 = 1/4
  P(!A and !B) = P(!B|!A) * P(!A) = 1/2 * 1/2 = 1/4
Good so far.

But the probability later on depends on how you find out that at least one of your friend's children is a girl. If your friend tells you a random fact about her whole family, the probability is 1/3. If she tells you a fact about a randomly chosen child, the probability is 1/2.

You can presume from "they're not both boys" that the statement is equivalent to "there is at least one girl". When she tells you the name, you now know "a specific child of mine is a girl".

"Not both boys" gives you the precondition "!(!A & !B)" which equates to "A or B".

  P(A or B) = P(A and B) + P(A and !B) + P(!A and B) = 3/4
  P(A and B|A or B) = P(A or B|A and B) * P(A and B) / P(A or B) = 1 * 1/4 / 3/4 = 1/3
"Mary is a girl" gives you either the precondition "A" or the precondition "B"

  P(A and B|A) = P(A|A and B) * P(A and B) / P(A) = 1 * 1/4 / 1/2 = 1/2
  P(A and B|B) = P(B|A and B) * P(A and B) / P(B) = 1 * 1/4 / 1/2 = 1/2
These are both the same number, so that works out fine.
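
The two conditionings are also easy to check empirically; a minimal sketch (Python 3):

    import random

    trials = 200000
    # each family: (first child is a girl, second child is a girl)
    families = [(random.random() < 0.5, random.random() < 0.5)
                for _ in range(trials)]

    # "not both boys" = at least one girl, anywhere in the family
    at_least_one = [f for f in families if f[0] or f[1]]
    # "a specific child is a girl" (the first, say; by symmetry either works)
    first_is_girl = [f for f in families if f[0]]

    print(sum(a and b for a, b in at_least_one) / len(at_least_one))    # ~1/3
    print(sum(a and b for a, b in first_is_girl) / len(first_is_girl))  # ~1/2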


Now see if you can work it out if your friend told you this beforehand:

Between pregnancies, I had an accident that resulted in an odd medical condition. The doctors said that after it happened, 50% of boy fetuses that otherwise would have resulted in pregnancy would simply fail to implant, and I wouldn't even know about it.

Now there's an all new ambiguity. How do you decide whether your friend was planning on having two children all along, or if she has two children because that's how many times she got pregnant?


Here's an epistemological head-scratcher: do the odds still change if you don't hear the daughter's name accurately? Why?


Remember, those odds are the level of certainty you have that a given statement may be true based only on what you already know. You can also assign a level of certainty to the facts you think you know.

Given the number of ways to mishear "Mary" and the dialectal variation in pronunciation characterized by the Mary-marry-merry merger, I'd tentatively assign the following values:

  25% One child is a girl.  (Heard correctly)
  25% One child is a boy.   (Heard incorrectly)
  50% No new information.   (Incomprehensible or ambiguous)

  In the first case P(A and B|A) = P(A and B|B) = 1/2
  In the second, P(A and B|!A) = P(A and B|!B) = 0
  In the third, P(A and B|A or B) = 1/3
In any case, you're just guessing at the probability you might mishear something, so there's no definitive answer. The mere possibility that I might have heard a boy's name means that the overall probability could be anywhere between 0 and 1/2. By the above weights, my new odds are 0.25 * 1/2 + 0.25 * 0 + 0.5 * 1/3 = 7/24.

So the odds do change, based on your confidence in what you heard (and your confidence that your friend isn't the kind of parent to name her son "Meriadoc" or something similar).


You are, of course, correct. Sorry about the careless wording [I think the analogy is still interesting, though!]


This assumes you always open a door, that door never holds the prize, and there is exactly 1 prize.

You could run a similar game with different rules such that the odds go from 1/3, 1/3, 1/3 to 1/2, 1/2 if you flipped a coin to choose the second door and sometimes showed the prize. Alternatively, if you chose when to open the second door, you could make swapping a very good (100%) or very poor (0%) choice.

Worse, you could swap what's behind the doors after they chose.

Which IMO is why people find this so confusing. In the real world the odds presented may or may not line up with the actual odds. aka unknown unknowns.


This is incorrect.

Flipping a coin "to choose the second door and sometimes show the prize" has no impact on the probabilities. The parent is correct - there is a 2/3 chance the prize exists behind one of the other two doors, and if one of them is shown not to have the prize (through random flipping, deliberate selection, whatever), then the final door now has a 2/3 chance to have the prize.


If one of the doors is shown to not have the prize, the odds are now 1/2 when the choice of reveal door is made randomly. That's because 1/3 of the time, you're not even offered the chance to switch, since it's pointless.

I'll run the trials here. Assume the first two doors have the goat, and the third door has the car.

   Pick  Reveal  Switch?  Outcome
   ----  ------  -------  -------
     1      2      Y 3    Car
     2      1      Y 3    Car
     3      1      N 3    Car
     3      2      N 3    Car

     1      2      N 1    Goat
     2      1      N 2    Goat  
     3      1      Y 2    Goat
     3      2      Y 1    Goat

     1      3       -     No chance to switch      
     1      3       -     No chance to switch
     2      3       -     No chance to switch
     2      3       -     No chance to switch
Meaning, given that you've made it to the point that an offer to switch doors is presented, there is no advantage to do so (or not do so).


But in the actual game, you are always given the chance to switch. Here are the possible outcomes of the game, with the third column being what the contestant should do to win:

   You Pick  Car   Switch?
   -------  -----  ------
       1      1      N    
       2      1      Y   
       3      1      Y
 
       1      2      Y
       2      2      N  
       3      2      Y

       1      3      Y     
       2      3      Y     
       3      3      N   
Without knowing anything else, switching means that you win 2/3 of the time.


I changed the rules so that in "3, 1, Y" he might open door 3 and show you a car. At that point switching is a meaningless choice.


Your comment (EDIT: Sorry, not yours, but parent) was talking about a random flip of coin determining which door Monty shows. That's not the actual game. If Monty is randomly showing a door without regard to goat/car, then you have to also consider the scenarios where he reveals a car and you lose early.


If, while randomly flipping a coin, he reveals the car then you have a 0% chance of winning. If he doesn't reveal a car then you have a 2/3 chance of winning if you switch doors.


In this scenario, Monty's door choice and your door choice are completely independent actions. Because of this, you gain no new information from the door he reveals. In the classic version of this puzzle (where Monty always reveals a goat), you do gain information, because most of the time Monty's forced to choose the only other goat, and therefore avoid the car.

http://c2.com/cgi/wiki?NotTheMontyHallProblem

The switching strategy is effective because it relies on Monty's avoidance of the car. When Monty isn't trying to avoid the car, his revelation doesn't help you out at all. It just tells you that you either (a) now have a slightly better chance (50%), or (b) have already lost (0%).

EDIT: Also see the table on this wikipedia article. It lists probabilities for all the different variants of the MHP. Your scenario is referred to as the Monty Fall or Ignorant Monty variant.

https://en.wikipedia.org/wiki/Monty_Hall_problem#Other_host_...

    "Monty Fall" or "Ignorant Monty": The host does not know
    what lies behind the doors, and opens one at random that
    happens not to reveal the car
    (Granberg and Brown, 1995:712) (Rosenthal, 2005a) (Rosenthal, 2005b).
    Switching wins the car half of the time.


When you originally pick a door, there are three. (We all agree on this.) Because you are picking at random, and the prize is only behind one door, there is a 1/3 chance that you have picked the right door. (Likewise, we can all agree on that.) Therefore, I think we can all agree that there is a 2/3 chance that the prize exists behind one of the other two doors.

Let us take your approach, and have Monty flip a coin, revealing ANY door at random.

If he flips your door, and you have the prize - You have a 100% chance of winning. If he flips your door, and you have the goat, 0% chance of winning. Those two scenarios are easy, and we can ignore them.

Likewise, if he flips the coin, and shows one of the other two doors, and they have the prize - then, once again - easy - 0% chance of winning. We can ignore these scenarios.

I think up to this point, we can agree on all of the above scenarios. (They are win/lose)

Where it gets complex is when he flips a coin, selects one of the doors you aren't on, and reveals a goat.

Given that you had a 1/3 chance originally of winning, and a 2/3 chance of not winning, and given that Monty has now revealed that one of the other two doors (By chance) does not have the car, your 2/3 chance of not winning is still the same, but now it applies to only a single door - I.E. You have a 66% chance of winning by selecting that other door - and the fact that Monty selected that door by chance, is irrelevant.

Unfortunately, despite my (flawed) apparently logical argument - I'm completely wrong, and you are correct.

    from random import randint

    def put_prize_in_door():
        doors = [0, 0, 0]
        doors[randint(0, 2)] = 1
        return doors

    trials = 100000
    test_count = 0
    win = lose = 0
    for _ in range(trials):
        doors = put_prize_in_door()
        user_selects = randint(0, 2)
        monty_reveals = randint(0, 2)
        if monty_reveals == user_selects:
            pass  # Monty revealed the door the user is on
        elif doors[monty_reveals] == 1:
            pass  # Monty revealed the prize
        else:
            # This is where it's interesting: Monty reveals, at random,
            # a losing door the user is not on; the user then switches.
            revealed = {monty_reveals, user_selects}
            new_choice = ({0, 1, 2} - revealed).pop()
            test_count += 1
            if doors[new_choice] == 1:
                win += 1
            else:
                lose += 1

    print("Tested: {}  Wins: {}  Lost: {}  Percentage: {:.2f}".format(
        test_count, win, lose, win / test_count))

Tested: 44484 Wins: 22220 Lost: 22264 Percentage: 0.50

It's interesting that the deliberate selection by Monty of the goat increases my chance from 33% to 66% if I switch, but the random selection by Monty of the goat only increases my chance from 33% to 50% if I switch. I'm not really sure where the argument I made breaks down, but, empirically, it's clearly broken. I'll have to meditate on it a bit to see where I've gone awry.


I must say this is one of the best comments I have seen on HN. I don't know if this helps you meditate on the game or not, but this is how I walk though the logic.

1/3 of the time you picked correct: 100% of those worlds he shows a goat. 1/3 of the time you picked wrong and he shows a goat. 1/3 of the time you picked wrong and he shows a car.

So your overall odds are 1/3 of winning if you don't change AND 1/3 of the time you don't get the chance to change.

Thus, your overall chance of winning in this case is 1/3. But if he shows you a goat, your odds are bumped to 1/3 : 1/3, i.e. 50/50.


Thanks for this comment! I thought this thread was dead.

I was searching for online simulators that let me tweak the rules of the game, but I could only find those that follow the original rules of the puzzle.

In thinking more about the Ignorant Monty scenario, I realized it's equivalent to having 3 contestants pick doors:

    Me
    Montgomery
    Switchy McSwitcherson
First I pick a door. Then Montgomery picks a door. Finally Switchy McSwitcherson gets the remaining door. Once all the picking is done, Montgomery opens his door first, and is saddened to see a goat. At this point, who is more likely to win? Me or Switchy McSwitcherson? Since we both had 1/3 initial odds, there's symmetry there, and one of us can't reap all the odds-boosting.

But the classic puzzle is more like a 2-contestant game, where I pick a door, the host selects a goat to reveal (and reveals it), then Switchy McSwitcherson is assigned the remaining door. In that scenario, I'm mad that they made me pick first before the host eliminated a door for Switchy. Switchy enjoys all the odds boosting.


> Funny to call it reporting when it's more of an editorial by the renowned security researcher Bruce Schneier.

I respect Bruce and he's done a ton of great work, but I obviously disagree with him on this point. I do not believe governments (especially ones engaged in clandestine surveillance operations) have an obligation to share security vulnerabilities with companies. But neither do those companies have an obligation to create vulnerabilities for the governments to exploit (on the contrary; the companies have an obligation to find and fix the holes in their products).

> It doesn't matter that the vulnerability "has always been there" if nobody knew about it.

There's no guarantee that nobody knew about it, and that's the problem. Information asymmetry is a bitch, but the safest assumption is that someone else did indeed know about it, and then told the FBI. If someone was willing to tell the FBI, it's a safe bet the security community knows about this exploit.

Also, if I was the FBI I would intentionally try to obfuscate my capabilities as much as I can. I would want people to think I can hack every iPhone at any time, even if I can't.


> I do not believe governments (especially ones engaged in clandestine surveillance operations) have an obligation to share security vulnerabilities with companies.

So I take it then you don't believe in a government "for the people"? Like it or not, Apple is legally a person, and even tossing that aside, we know that many of Apple's customers are American citizens, and this whole idea of "keeping knowledge from you for your own good" just reeks of the patronizing oligarchy that the US government has become known for.

Lest you jump to the argument that this would endanger operations, I would still point out two very salient facts: this information is not intelligence data, and as Schneier pointed out, this attack can be used against many in the US government, including FBI agents in the field. Getting it fixed is the right thing to do.


Agents who handle secret or otherwise restricted data should not be handling it on a mobile device. Those devices should be sanitized.

Presuming security is what gets people compromised (and in some cases in political trouble as one US presidential candidate is coming to realize).


Presuming security is when someone says "restricted data should not be [handled] on a mobile device". Spills happen, intentionally or otherwise. This security hole should be fixed before a field agent gets his iPhone hacked by the Chinese.


> Lest you jump to the argument that this would endanger operations, I would still point out two very salient facts: this information is not intelligence data, and as Schneier pointed out, this attack can be used against many in the US government, including FBI agents in the field. Getting it fixed is the right thing to do.

I agree with you that given the facts we know today, notifying Apple is the right thing to do. But I also think the FBI should have some latitude in deciding what is in its best interest; and if they're making their own agents vulnerable to an exploit they know exists -- well, that's dumb and I hope it bites them in the ass.


We do not even know if the FBI knows the vulnerability. The phone could have been attacked by a contractor working for the FBI without the contractor sharing the tools.


This is a great point. Cellebrite's known for their black-box approach. Law enforcement agencies and courts can send in a locked iPhone and get it back unlocked. The job just gets done without their knowing what was actually performed on the phone. That may well be the case here.


Doesn't this approach cast doubt on the legitimacy of any evidence obtained from the device?


Don't think so, it's basically like lock picking a suspect's apartment with a warrant. What is found should be valid evidence, no? Law enforcement can break the door or call a locksmith and pay him for his service. In this case Cellebrite's the locksmith.


That seems reasonable, but usually investigators would be on the scene while the locksmith works. If the process is a black box that happens while the device is in Cellebrite's possession isn't it more like calling up the locksmith from the precinct and saying "hey, could you open up the apartment at this address and call us when you're done? Don't go inside or touch anything though, thanks!" What prevents Cellebrite employees from planting or deleting evidence after the device is unlocked and before returning it? What guarantees are there that Cellebrite's unlocking process doesn't intentionally or unintentionally modify some other aspect of the device? Scouts honor? What if their process is to just flash a new rom filled with child porn and no passcode then skip merrily to the bank to cash their checks?


Yep, I understand that. I don't think that was posed as a problem in the Italian case I know of, although I think that in the U.S. rules on the chain of custody are stricter, indeed. That's a good point.


> Apple is legally a person

it's not government "for the persons"


> I respect Bruce and he's done a ton of great work, but I obviously disagree with him on this point. I do not believe governments (especially ones engaged in clandestine surveillance operations) have an obligation to share security vulnerabilities with companies.

Interesting. What obligations do governments have? On the one hand we have government agencies (the CPA, e.g.) whose entire function is to protect consumers from bad products and marketing. On the other hand there's the many regulatory and standards bodies, which are to ensure orderly marketplaces and discourage anti-competition. Then there's an agency that protects the environment, one that administers health, education....yet there's no obligation to disclose security vulnerabilities? Hmmppf. I'm stumped.


Here's the oath stated by FBI agents when they join [1],

> I [name] do solemnly swear (or affirm) that I will support and defend the Constitution of the United States against all enemies, foreign and domestic; that I will bear true faith and allegiance to the same; that I take this obligation freely, without any mental reservation or purpose of evasion; and that I will well and faithfully discharge the duties of the office on which I am about to enter. So help me God.

[1] https://www2.fbi.gov/publications/leb/2009/september2009/oat...


That oath is worth nothing given that the greatest threat to the Constitution is the FBI, NSA, and other parts of the federal government.

They may swear an oath to the Constitution, but they are actually loyal to the federal government, not the Constitution.


Your comment is snarky but on the money. I have a very, very bad feeling we're losing our Republic to the State and its various apparatuses.


And here's what their "about" page says:

"Our mission is to help protect you, your children, your communities, and your businesses from the most dangerous threats facing our nation—from international and domestic terrorists to spies on U.S. soil…from cyber villains to corrupt government officials…from mobsters to violent street gangs…from child predators to serial killers."

Our mission is to help protect ... your businesses ... from cyber villains. The FBI acknowledges right on their home page that they have an obligation to share security vulnerabilities with companies.

https://www.fbi.gov/about-us


> I do not believe governments (especially ones engaged in clandestine surveillance operations) have an obligation to share security vulnerabilities with companies.

Is this view particular to software defects? For example, do government inspectors have an obligation to report food safety violations that they've discovered? In either case, the problem puts the public at risk.


> I respect Bruce and he's done a ton of great work, but I obviously disagree with him on this point. I do not believe governments (especially ones engaged in clandestine surveillance operations) have an obligation to share security vulnerabilities with companies. But neither do those companies have an obligation to create vulnerabilities for the governments to exploit (on the contrary; the companies have an obligation to find and fix the holes in their products).

How do you determine if a vulnerability was there because it was overlooked in development, or if it was there because the government demanded it from the company but used the law to impose a gag order on the company preventing the public from finding out about it?


> How do you determine if a vulnerability was there because it was overlooked in development, or if it was there because the government demanded it from the company but used the law to impose a gag order on the company preventing the public from finding out about it?

You don't; but that's possible today with the way the gag orders and FISA courts work. That's a real problem around transparency in our legal system; which IMO is a different issue from transparency around security issues relating to privately-developed products.


That's a really interesting viewpoint. Thanks for sharing it.

In my opinion the government should be obliged to share the vulnerability for the purposes of the keeping the rest of the users safe from the same exploit, whether executed by the government or executed by somebody else.

In summary, I disagree with you but I'm glad I took the time to ask you about your view since I learned something new.


>take the Monty Hall problem

It's amazing how many people still don't get the Monty Hall problem. Its lesson is that the probabilities do NOT change – until we make a choice! That's why it's better to switch once we see the goat behind door 1. The probability of our having made a good choice initially (1 in 3) has NOT changed even though there are now only two 'choices'. But they are not REALLY choices, because we've ALREADY chosen. The probability can only change if we make a NEW choice, because the 'probability' we're discussing is the probability that our choice, at the time it was made, would produce the result we wanted. The iPhone 5 is clearly no less secure now than it was then. But we can switch.


The iPhone 5 is now less secure because attackers now know that there is a vulnerability, and they will find it. If, previously, they had thought there wasn't a vulnerability, they might have invested fewer resources in hacking it. That is now no longer the case.

Increasing the number of people who will find exploits on a device, will reduce the security of that device.

Put another way - a platform that no (talented) engineers are attempting to find exploits for is more secure than the same platform if many talented engineers put their time into finding exploits for it.


It's much easier to understand when you realize that Monty never reveals that one of the closed doors contained the car.

That asymmetry of action leads to the asymmetry of probabilities; your initial choice constrained his choices when he takes action.


Well, Monty doesn't know (or care about) your choice. He's 'constrained' by the fact that his 'probability' is 100%: he knows where the goats and car are.


What? Of course his action is based on your choice.

If a has the car and b and c have goats, and you pick b, he opens c. And if you pick c, he opens b...


You're right. I forgot that you had to tell him which door you wanted to open (instead of just saying that you've decided on a door – like the FBI has just SAID that they've cracked the iPhone 5). But it doesn't matter, because Monty was always going to open one of the two goat doors, and the one he opens doesn't change the 1/3 probability that you picked the car door. Your probability of getting the car ALWAYS doubles when you switch (even though you still may not get it).


He does care. 2/3 of the time, your choice forces his. If you choose a goat, Monty doesn't have a choice. He can only reveal the other goat.

The other 1/3 of the time, you're right, he doesn't care, because he can choose any of the remaining doors at random.


I have to respectfully disagree here. There are bugs in iOS, iPhones and pretty much everything. It would be naive and foolish to think that any electronic device you use doesn't have bugs.

It would also be remiss to think that people aren't looking for the vulnerabilities, people like Stefan Esser have plenty of research documenting the security mechanisms and flaws in iOS, and in the hardware. People have crawled over the hardware and software documenting everything, looking for bugs. People have bugs we haven't heard about because they put the time in to go down the avenues that lead nowhere in order to find them. The people that would find these bugs have a deep knowledge and understanding of the internals of the devices in question, and some suspicion of how they would be applied in this situation. They are not people who would go from zero knowledge to chasing 0day without at least a decent lead as to what the FBI's third party did.

The thing is, the FBI are not obligated to hand over their bugs, nor is the third party. It's the third party's trick; they found it, and if they want to keep it, that's their right.

Asserting that somehow, because someone has a specific 0day (designed to bypass unlocks when the phone is in their physical possession for an extended period of time) and doesn't want to share it, we are all less secure is incredibly disingenuous and a logical fallacy at best.


I really don't see how this relates to the Monty Hall problem. The FBI just revealed how to unlock an encrypted phone. That's like opening the door with the car behind it.


According to the article, the vulnerability was known. All that changed is the FBI got somebody to use it.

I would agree with the "less secure" bit if the FBI had uncovered the vulnerability, or asked some third party to do it. But since they just used something that was already known, the "less secure" statement makes no sense to me.


That was an excellent analogy.


> The FBI just revealed ...

It would be a better analogy if they had actually revealed anything, instead of just saying that they had. This was more like telling us which door had a car behind it and trusting us to believe them.


It's true, I wish they had shown us a zero-knowledge proof (https://en.wikipedia.org/wiki/Zero-knowledge_proof) that they had cracked it.

At least then we would know that this isn't just to escape an unfavorable court ruling.


a "renowned security researcher" or a "renowned academic cryptographer"?


> This is bad reporting.

I felt the same way when I hit the article link, but changed my mind when Bruce Schneier made the more nuanced argument.

Of course the vulnerability already existed. That's not what he has a problem with: the problem is that now there is a commercially-known but secret vulnerability. Which is a different thing than an unknown vulnerability.

Newer hardware revisions, etc etc, but the biggest issue he takes is that the US government is supporting this practice (and in doing so basically acting like malware authors). Again, that may be realpolitik, but isn't something that we should support as a policy position.


I guess the part I disagree with him on is that I actually expect the US government to act like malware authors. They've shown an affinity for the tactics before (using surveillance software, stingrays, etc) so it's at least perfectly consistent.

I have no expectations that the relationship between law enforcement and technology companies will improve. I guess I've just accepted this situation as the "new normal".


> I actually expect the US government to act like malware authors.

The main point is, that this should not be acceptable.


As much as one can go on about the US government abusing its powers, there's still a difference between criminal enterprises and national security theater. At a minimum, the latter is still within the control of democratically elected branches of government (or indirectly via the courts).

One of the central features of democracy is that it's a large ship but does turn toward the majority. You/I/we want that ship to turn towards a right to strong crypto systems enshrined in law? Work on influencing that majority.

Because the fact of the matter is that a majority of Americans don't even understand encryption. And to paraphrase McAvoy from The Newsroom and point at one of the reasons more social change wasn't effected from the 60s, in a democracy an idea being right is less important than it being known and agreed with.

All that is required for cynicism, on the other hand, is to sit on one's couch.

> I have no expectations that the relationship between law enforcement and technology companies will improve.

I absolutely agree with this, but that Brave New World has its own dangers. Apple could technically construct a device to enable secure drug trafficking and place its levers of control over that device outside US jurisdiction. Then begin taking a cut of the enterprise. Control is power, and if we shift it from the US government to corporations, then corporate accountability starts to become a lot more important.

Because that's ultimately what we're talking about now. Imagine in 20 years when we're having the conversation about the poor US government standing up to Apple or Google.


>> now there is a commercially-known but secret vulnerability.

Exactly, and not only that, but (potentially) unknown to Apple.


> believe the vulnerability to be related to the lack of a secure hardware biometric / encryption module.

The problem here is that we're all just speculating. We suspect this to be the case, but we can't be sure. And we probably never will be.

To take this a step further, the FBI has also learned the lesson to never take this public again. If you are worried about law enforcement attacks against any device protected by the signing keys of a US company, it is only prudent for you to ASSUME that a FISA court has, or soon will have, compromised the signing keys of your device. Jonathan Zdziarski has already shown the enclave to be useless in the case of compromised keys.

At this point, the work being done by Joanna Rutkowska, Coreboot, Purism, and others are our only hope now. And even there, we'll never own our chipsets, ethernet controllers, or CPUs.

May as well give up, we've already lost.


> And we probably never will be.

Investigations are either made completely public after the fact or partially marked secret for a certain number of years. Now if someone intentionally hides the investigation details, well, that's a whole different story, and I wouldn't be surprised at all. The writer chooses what to write in their investigation log reports.

Anyway, we really don't know what method they used to gather the information and what kind of information they pulled off for sure.


Don't forget the Genode project https://genode.org/ and Crash Safe http://www.crash-safe.org/


Thank you for these.


I suppose you have the option to use older/legacy hardware which has had more time to be vetted.


The Secure Enclave, which among other things prevents tampering with the microkernel and related modules of iOS, was introduced with the A7 SoC.

So basically only the iPhone 5 and older models are easy to compromise; with the iPhone 5s and newer it gets a _lot_ harder.

Details here, from the mothership itself: https://www.apple.com/business/docs/iOS_Security_Guide.pdf


>The iPhone did not get less secure. It has always had this security hole.

Although given the fact that it was made public that they found a security hole, won't that change the behavior of malicious actors? Now that it's known that a hole exists, more people will start looking for it, reducing security through obscurity.


A concrete example of this happening would be when the NSA's catalog of implants and general descriptions of what they could do was released, and a large flurry of public (and probably private) activity surrounding rediscovering the wheel ensued.

It's a lot easier to build something when you have a loosely-constrained search space and a guarantee that it's possible, than to discover something entirely new.


Security by obscurity is an antipractice.


Sort of? You're not supposed to DEPEND on it as your only means of security, but it certainly lowers the likelihood of something being broken into when combined with other methods of security.


I believe that his point is that the non-disclosure affects the security of other similar devices, especially those that can still be patched.


That's the entire purpose of the FBI not disclosing this information though. Government agencies engage in game theory as well, and information asymmetry is a powerful tool.


>and even then, the first generation of iPhone fingerprint readers was pretty bad

Hunh? The iPhone 5s and the iPhone 6 both contained the first gen fingerprint reader, and it was universally regarded as one of the best consumer fingerprint readers available on any device.

That is largely irrelevant though, as the secure enclave is not in the fingerprint reader. TouchID is one way to access it, but the secure element can exist without touchID (as on the watch).


Note that the Secure Element is not the same as the Secure Enclave. The Secure Enclave is a walled-off section of the main CPU, running a custom L4 microkernel, which handles the device encryption. The Secure Element is a separate chip running Java Card which handles payments. The devices (including the watch) have both.


How secure something is doesn't depend only on the thing itself but also on its environment. There's a vulnerability known by some that isn't going to be fixed. The moment this vulnerability was disclosed to the FBI, the iPhone became effectively less secure.

More generally, any given version of a piece of software becomes less and less secure over time as vulnerabilities are found, even though it's the same piece of software.


Windows is less secure. Can we blame FBI?


I don't blame either of them. But I do blame users who put confidential information on their phone with the expectation that it will remain so.


If Apple refused to comply with the FBI's request, why should the FBI owe Apple a disclosure of this vulnerability they found?

Keep the downvotes coming, lads! They're meant for burying spam and junk comments, not expressing disagreement, but I enjoy them anyway.


If I find a vulnerability and exploit someone's device, it's not okay under law. Why is a government institution exempt from law in a supposedly exemplary democracy?


Because they're a law enforcement agency with a warrant. It's also illegal for me to enter your home without your permission, but the police don't need your permission when they have a search warrant.


The phone exploited was owned by the government.


Because it's not someone's device anymore. It's government evidence?


It was the county's device to start with, too. Not the shooter's.


The FBI doesn't owe Apple. It owes the public - many of whom own an Apple product.


Because the FBI spent taxpayers' money to find a vulnerability in a device used by millions of people, it should release that information so that Apple can fix it. Simple as that.


Apple probably should have considered their own fallibility and the value of a working relationship with the FBI before launching a scorched-earth PR campaign.

If the FBI could rely on Apple to provide assistance when presented with a court order, they would have no reason to keep such a method private -- or to even find and use it in the first place.

Tim Cook does not come across as a sufficiently prescient CEO in this boondoggle.


Because of the pissing match, I doubt that is going to happen.


I am sure they just hired some interns to

    code = 0

    # hypothetical helpers: enter_code() returns True on a correct PIN and
    # reboot_device() power-cycles the phone; stand-ins for whatever rig
    # actually drives the hardware
    while not enter_code("{:04d}".format(code)):
        reboot_device()  # reboot before the failed attempt is recorded
        code += 1
Edit: I am pretty sure there's a delay before a wrong PIN gets registered, hence by rebooting the phone the wrong PINs won't get recorded.


Regarding your edit, that was true in the past, but Apple fixed it in iOS 8.2. Now the system makes sure to write the failure to nonvolatile storage before reporting it. Phones running older OSes could be exploited this way. There are even boxes you can buy that will do it for you automatically. You set it up to try a passcode, then cut the power on failure. But they don't work anymore, and this phone's OS was too new for it.
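
Conceptually the fix is just an ordering change: persist the attempt before evaluating it. A sketch of the idea (illustrative only; the dict standing in for nonvolatile storage and all names here are made up, not Apple's actual code):

    # toy model: `nv` stands in for nonvolatile storage; real code would
    # write to flash and flush before continuing
    nv = {"failed_attempts": 0}

    def try_passcode(passcode, correct="1234"):
        # record the attempt BEFORE checking it, so cutting power the
        # instant a failure is reported can't roll the counter back
        nv["failed_attempts"] += 1
        if passcode == correct:
            nv["failed_attempts"] = 0  # success resets the counter
            return True
        return False                   # the failure is already recorded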


Good one, though I can see there is a problem with this "algorithm"! haha :-)


What might it be?

I assumed the code is just 4 digits long but judging by the quick downvotes people aren't even considering this to be a possibility.


There's a mechanism that wipes the device after too many incorrect code attempts. Disabling this mechanism was the entire point of the FBI/Apple lawsuit.


I am pretty sure a reboot bypasses that mechanism.


Why would you be pretty sure of that? That would be totally braindead if a simple reboot bypassed the limit. It doesn't. There was a flaw in an earlier iOS version where you could reboot the phone a few milliseconds after an incorrect PIN, before the phone recorded the failure, but that was fixed some time ago.


I completely disagree with the premise here. This is Tim Cook's fault and it should fall completely on Apple. I've been using Apple products my whole life, and I think security and privacy are great, but I don't believe for one second that Apple is the holy savior of our privacy. They fought the FBI because of marketing and profits, not out of a sense of duty to protect our privacy.

I also don't buy the rhetorical come-back about those who would trade liberty for security deserve neither. There is a line between the two, and each side must give and take. If the FBI found a way to do it without Apple, bravo. Let Apple figure it out if they want.

This is Apple's fault and they should not get a pass for spinning the PR in their favor.


You're certainly right about one thing: the people want privacy and will reward a company handsomely for delivering.

In theory, government is supposed to be responsive to the will of the people, especially with regard to things like privacy from government searches, so I guess it's up to the reader to decide for themselves which side they're on here.


Yeah, the "trade liberty for security" line is applied selectively, as far as I can tell. It tends to be trotted out frequently in discussions about data encryption.

I don't see that line used nearly so often against traffic laws, driver's license laws, state medical board doctor licensing laws, laws banning the sale of guns to minors, compulsory education laws, childhood vaccination laws, laws banning lead in gasoline, building codes requiring the use of twelve-gauge wire in certain circuits, laws requiring carbon monoxide detectors in our bedrooms and kitchens, etc.

The government is applying constraints to our lives and freedoms literally _all the time_ in the name of securing us against threats to life and health.

Most people accept those trade offs. But those same people actually _do_ deserve both liberty and security.


Your iphone just got less secure - so don't use iphones anymore.

It always amazes me when people get on their high horse and start complaining about something when the remedy is quite simple - don't use the iPhone. Get an Android phone, or an Ubuntu phone, or a Blackphone or a Windows phone. There are plenty of other devices that haven't been cracked by the FBI.

There's irony in the fact that Apple resisted the FBI's attempts to crack the phone, and now this vulnerability will go unpatched and the FBI has a zero-day exploit they can use whenever they want.


I don't believe even for a second that Android devices are any more secure than iPhones. And I use an android phone. But let's be honest - with dozens of manufacturers there isn't a single one who dedicated the same amount of work to securing their devices or engineering things like the secure enclave in iPhones. I use a Sony phone but I'm sure Sony wouldn't have the guts to stand up to FBI, or that FBI would even need to ask - the hardware is most definitely not on the same level of polish as Apple's.


Hi! do you have contact details/email I can reach you on?


I just found out the other day that if you use any "Accessibility features" on an android phone, your device's encryption password changes to a default, effectively disabling encryption until you turn those features off again.

Furthermore, it's not immediately clear that you're disabling encryption when you do that.

So all in all, and for other reasons listed, I believe iPhone is probably the better bet at the moment for secure hardware.



