This compromise was detected by Google because they have hard-coded "certificate pins" in Chrome which specify which CAs the browser should expect to see when connecting to Google.
This type of "pinning" is the only technique that we know of which has actually detected a CA-assisted MITM attack in the wild (a few times now). So it works really well, but only if you're running Chrome, and only if you're connecting to Google (or the few other pinned sites hardcoded in Chrome).
It doesn't scale well because not every website can (or is willing to) hardcode their CA information in browser binaries, the pinned information has to expire at some point (otherwise you can never change your CA), sites are still vulnerable to their own CAs, and it only works in browsers which are willing to maintain this hardcoded list of pins in their client binaries.
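For what it's worth, the check itself is conceptually tiny. Here's a rough sketch in Python; the domain and pin values below are made-up placeholders (not Chrome's actual pin data), and real pins are hashes of the certificate's SubjectPublicKeyInfo:

    import hashlib

    # Hypothetical pin database: domain -> set of acceptable SPKI SHA-256 hashes.
    # Chrome's real list is compiled into the binary; these values are placeholders.
    PINS = {
        "example-pinned-site.com": {"<pin-hash-1>", "<pin-hash-2>"},
    }

    def chain_passes_pins(domain, chain_spki_der):
        """Accept the chain only if at least one certificate's SubjectPublicKeyInfo
        hashes to a pinned value for this domain; unpinned domains always pass."""
        pins = PINS.get(domain)
        if pins is None:
            return True  # not a pinned site, fall back to normal CA validation
        observed = {hashlib.sha256(spki).hexdigest() for spki in chain_spki_der}
        return bool(observed & pins)

The MITM gets caught because the forged chain's keys hash to values that aren't in the pinned set, even though the chain validates fine against the CA store.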
Trevor Perrin and I developed a dynamic certificate pinning solution called TACK (http://tack.io) that is designed to address all of those issues. It's a fully specified TLS extension; all we need now is for browsers to support it.
If you're interested in what's been going on with the CA system, "SSL And The Future Of Authenticity" is a Defcon talk where I discuss some of the problems with CAs, and even interview the original SSL protocol author about them: https://www.youtube.com/watch?v=8N4sb-SEpcg
The somewhat difficult thing about a lot of this stuff is that its future is really in the hands of the browser vendors, not the IETF, W3C, or anyone else. What they choose to do or not do is all that matters, so there's not much more that any of us can do other than make suggestions and see what happens.
My experience thus far is that most browser vendors only have one or two people working in this area of their code base, and those people are so overwhelmed that they spend most of their time just treading water. SSL cipher suite problems seem to be absorbing most of their time lately.
Google has a larger set of really good engineers working in this area, so for the most part they seem to take the lead. Right now, my impression is that they're pretty all-in on Certificate Transparency. The major roadblock with CT is that it requires all CAs to willingly participate. If that's even possible, my sense is that it's a long road, so I would love to see Chrome incorporate TACK in the meantime (and Trevor has even written the patch) while they try to drive CA adoption.
But ultimately, how they decide to spend their time is up to them.
Rather than persuading CAs to implement Certificate Transparency, how about sorting CAs into groups by jurisdiction and displaying different icons or colors for low/medium/high security, corresponding to having signatures from 1, 3, or 5 complementary jurisdictions? This would give CAs a financial incentive to cooperate: people are less likely to buy 3 or 5 certs at full price, so they could capture significantly more market-share by teaming up and offering to sign each other's customers for half or quarter price. It's (more) free money for them and it significantly complicates the process of performing a MITM, especially if you aren't the USA.
EDIT: Nevermind, I think I understand why it wouldn't work: independently verifying customers is hard, so they would either have to trust each other or spend a nontrivial amount of money verifying each customer. If they trusted each other, the additional security would be worthless, since the host country of the principal CA could just order them to lie. If they each verified the customer independently, they would each incur the usual amount of fixed cost, so they couldn't offer much of a discount.
TACK works without coordination between the 1/2-time TLS maintainer at a browser and the (elaborate) UX/UI team at that browser; it's something you can implement in the backend.
The UX for certificate trust on the Internet is dreadful and essentially hasn't evolved since the late 1990s. It is badly due for an overhaul. But we can get TACK working today; UX changes could take years.
This makes it sound easy (after all, there aren't many browsers), but the real problem is that everything that talks SSL needs to support it. Browsers are the starting point; after that, I guess the OS-level libraries, then everything else.
No. It would be nice if backend TLS tools supported TACK, but they don't need to for TACK to be hugely valuable, both by increasing the security of browsers themselves and by drastically expanding the surveillance network we have to detect bogus certificates.
What's so incredibly fucked up is that it's left to you as a guerrilla-hacker-promoter to evangelize a model that should have been used in the first place, and adopted long, long ago in the second.
Good for you for doing the work, though. You rock.
X.509 is broken; it only protects you against casual script kiddies at Starbucks. I know of people who have deleted the CA directory from their systems; in my case I prefer to use Certificate Patrol (both in Firefox and Thunderbird) or to use self-signed certs and then PGP-sign the fingerprint.
>I know of people who have deleted the CA directory from their systems; in my case I prefer to use Certificate Patrol (both in Firefox and Thunderbird) or to use self-signed certs and then PGP-sign the fingerprint.
How does the latter work? Is that possible with Firefox?
Yeah, sure: if your system doesn't have any CA cert installed, you will be asked to accept every new certificate you receive when you start a TLS connection, and you can permanently accept it. It would be pretty manual, since you also need to ask the issuer to digitally sign the fingerprint and then check it...
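For the fingerprint part, a minimal Python sketch (the hostname is just an example; the idea is that you'd compare the printed value against the fingerprint the site owner has PGP-signed and published out of band):

    import hashlib
    import ssl

    def server_cert_fingerprint(host, port=443):
        """Fetch the server's certificate (without CA validation) and return
        its SHA-256 fingerprint for out-of-band comparison."""
        pem = ssl.get_server_certificate((host, port))
        der = ssl.PEM_cert_to_DER_cert(pem)
        return hashlib.sha256(der).hexdigest()

    if __name__ == "__main__":
        print(server_cert_fingerprint("example.org"))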
I think TACK is targeting browser vendors, and Convergence is targeting browser users. But it doesn't look like Convergence has been updated since 2011 (still in beta, too), and there's no add-on for Chrome.
Hi Moxie,
What do you think of the Certificate Transparency project they mentioned? It is quite a bit different from pinning and doesn't have any of the scalability issues.
If you haven't heard about it, it basically requires that a certificate be observed in a central database for the browser to accept it. The server provides a proof (signature) of it being in the database when it passes the cert to the client so no extra connections are required.
This makes it immediately known when another cert is issued for a site.
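To be a bit more precise: what the server staples is a signed promise from the log that the cert will be included; actual membership can later be audited with a Merkle audit path over the log, per RFC 6962. A toy verifier for such a path, offered as a sketch rather than a vetted implementation:

    import hashlib

    def _leaf_hash(leaf):
        # RFC 6962 leaf hashing: the 0x00 prefix prevents leaf/node confusion.
        return hashlib.sha256(b"\x00" + leaf).digest()

    def _node_hash(left, right):
        return hashlib.sha256(b"\x01" + left + right).digest()

    def verify_inclusion(leaf, leaf_index, tree_size, audit_path, root):
        """Check that `leaf` is in a Merkle tree of `tree_size` entries with
        root hash `root`, using the RFC 6962 audit-path algorithm."""
        if leaf_index >= tree_size:
            return False
        fn, sn = leaf_index, tree_size - 1
        r = _leaf_hash(leaf)
        for p in audit_path:
            if sn == 0:
                return False
            if fn % 2 == 1 or fn == sn:
                r = _node_hash(p, r)
                if fn % 2 == 0:
                    while fn % 2 == 0 and fn != 0:
                        fn >>= 1
                        sn >>= 1
            else:
                r = _node_hash(r, p)
            fn >>= 1
            sn >>= 1
        return sn == 0 and r == root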
The efficacy of CT will largely hinge on whether Google can get CAs to participate. Even if they can, it'll be a long road (it already has been), and TACK is immediately deployable in the short term.
Is it necessary to get every cert? Getting the CAs to participate would be the best way, but it seems there are workarounds that will result in a large number of certs to be listed, though not all of them:
"Google is currently operating a Certificate Transparency log, and we are filling the log with certificates that we retrieve while crawling the web. We are also actively working on monitoring and auditing software."
> What is the CA hierarchy "linking back to ANSSI" that chrome is/was trusting? Is the root of that hierarchy still trusted by chrome?
Chrome gets the list of root certificates from the underlying operating system. Since ANSSI is in the Mozilla root set, it is quite widely distributed.
(Although note that it may be listed under ANSSI's old name: DCSSI, or as "IGC/A".)
ANSSI is essentially the French NSA. It's a fair comparison, even though the agency is brand new (they were hiring en masse a couple of years ago) and has a seriously small budget.
I'm amazed that they can run a CA and that nobody finds this odd.
Unfortunately, it's probably more common than you think for CAs to delegate their authority to other organizations; some US commercial CAs have been caught selling this capability.
The (small) good news is that these delegations aren't intended to compromise the TLS PKI (for instance, to enable dragnet surveillance). Instead, they're a part of the normal security regime within large IT departments. Network security teams want to be able to monitor comms in and out of their networks. Most large enterprise networks require users to navigate through a proxy to get to the Internet, and block direct access. The CA certificate delegation is simply an act of laziness: rather than push a custom certificate to all their endpoints, a big-enough organization might simply buy a CA certificate that will work by default.
The bad news, of course, is that this activity is extraordinarily dangerous to the rest of the Internet. If a delegated CA=YES cert is stolen, the thief can silently MITM connections across the world.
The fix for this problem is straightforward, and already in process. Chrome pins certificates: it hardcodes the relationships between certain sites and their place in the TLS PKI. When users at networks with delegated ANSSI (or Trustwave or whoever) certs hit Google, the Chrome security team can detect the bullshit cert. This needs to (a) happen in all the browsers, not just Chrome, and (b) incur a CA death penalty in all but the most extraordinary circumstances.
I can understand why a lazy admin might want this, but why on earth would browser vendors permit CAs to issue such certificates? Surely it makes a joke of the entire security model?
X.509 has a top-down security model, which has benefits in authoritative deployment scenarios but is also vulnerable to incidents like the one we've seen today.
Chrome's CA-chain pinning is only available for a select group of sites (e.g. Google, Twitter, tor2web), while everyone can get their properties into the HSTS list (which doesn't help if any CA goes bad).
This hard-coded, limited list does not scale. We need a bottom-up model that allows something like pinning or "per-certificate" trust instead of recursive CA trust. This should work not only for things like TLS but also for S/MIME.
In addition, OS and browser vendors should ask the user before some certs are trusted, or at least make it very easy to untrust a lot of them. On OS X it's a PITA to double-click every CA cert and untrust it manually.
The CAs are not telling the browser vendors that they are doing this. When they get caught, it's "our investigation reveals that internal procedures and safety protocols had not been followed. Oops."
The other fix is also very simple: death penalty for any business that sold a certificate delegation that is stolen. A little blunt, but if selling these represented the potential loss of millions in revenue, companies would pay more attention to it. We should all kill ANSSI certs.
I submitted this 14 hours ago and got no upvotes...
Anyway, here is the response from the French Ministry of Finance, for whom the certificate was issued (or more exactly, it was issued by ANSSI, but for the Ministry of Finance):
> The mistake has had no consequences on the overall network security, either for the French administration or the general public.
Wrong. It had huge consequences. Users within the French administration lost their ability to securely communicate with Google and who knows what other sites. Additionally, overall network security took another blow against the SSL trust infrastructure.
This is a seriously bad issue and downplaying hopefully won't do them any good. "We only spied in a very sneaky way on our employees, undermining the trust given us by all browser and OS vendors. Don't worry. Nothing to see here. Too bad we got caught - if we hadn't, we'd just continue".
By having their root accepted into the browsers' stores, they took on a huge responsibility which they have now betrayed, and IMHO it doesn't matter whether that was the root or one of their intermediates: if they can't control them, they never should have issued these intermediate certificates.
Browser vendors need to set an example and do a Diginotar here: remove that root from the stores. There are already too many supposedly trusted and hopefully competent ones in there, no need for the devious or incompetent ones.
The flowchart for removing a CA from the browser CA roots has a decision node that asks "does this CA sign important or widely-used certificates, such that removing that CA would cause chaos and alarm from large numbers of normal users?"
Unfortunately, if the answer to that question is "yes", and the offense triggering removal is "improperly delegated a CA certificate for internal network use", precedent dictates that the CA will not be removed. See: Trustwave.
I don't like it any more than you do, but it's helpful to know what reality looks like.
If the browser vendors could remove a CA from the roots without causing said chaos and alarm, do you think they would be willing to remove CAs that issued delegated CA certificates that were used to MITM major websites?
>Browser vendors need to set an example and do a Diginotar here: remove that root from the stores.
I just found Diginotar certificates in FF 25.0.1. So no "Diginotar" has been done in FF. At least not what I'd call a "Diginotar": removing all certs of that organization.
My Firefox 26 on OSX doesn't list Diginotar any more. Neither does the OSX Keychain, and by extension Safari and Chrome. They might have been slow to remove it, but it's gone now. (Edit: here's the link to the Firefox bug where they removed the root: https://bugzilla.mozilla.org/show_bug.cgi?id=682927; it seems to have been removed as far back as Firefox 6, even.)
Have you manually added Diginotar back? Has some malware added Diginotar back?
Judging by the last time a CA issued a CA=yes certificate for MITM, nothing will happen to it. Mozilla will send another "strongly worded letter" [1] and then fail to take any action at all.
"2) Review your CA operations and customers to ensure that there are no certificates chaining up to your trust anchors that are included in Mozilla’s program that may be used for MITM or “traffic management” of domain names or IP addresses that the certificate holder does not own or control. Mozilla’s CA Certificate Enforcement Policy has been updated to make it clear that Mozilla will not tolerate this use of publicly trusted certificates."
This is the core problem of 'trusting trust' in the certificate chain. I explicitly do not trust most of the top-level CAs; they have repeatedly been proven untrustworthy by both mistakes and intentional malice, so the whole current chain is useless.
I'd prefer trusting a much more limited set of CAs (i.e., specifically excluding 99% of national government CAs); and for the major providers that hold my data, including Google, I'd only trust a chain where they themselves are at the top, i.e., where companies such as Equifax and Geotrust (who currently sign the google.com certificate) and anyone else are physically unable to issue new certificates for Google sites.
Yeah, but the "trusting trust" rathole has no bottom. Why stop at the government CAs? Google certainly cooperated with NSA surveillance; if presented with a subpoena, they could be forced to implement a MITM attack in Chrome directly. Why trust any browser vendor or CAs that could conceivably be vulnerable to government pressure?
Well, if I'm connecting to www.google.com or gmail.com, then I have to trust Google anyway, so Google can do anything regardless; but the infrastructure needs to ensure that, say, the Russian government can't do a MITM without cooperation from Google itself.
The same goes for www.thatserviceIreallytrust.com. There should be a trivial, accessible-by-default way to whitelist them so that no one else can make a new 'valid' certificate for them.
The web-going public desperately needs a bigger stick with which to hit a CA which screws up. 'Sudden death' would be a nice idea if the browser vendor half of CAB could achieve it. A bond held by CAB and forfeited to UNHRC (or someone) on an event like this would work, too.
CT is a great idea, and the CAs really must pull their collective fingers out and support it, but it only works to detect screw-ups after the fact.
Meet User A. Rather than trust the certificate signing activities of a random selection of "CAs" who have all paid a fee to a self-appointed "CA root" (as all "modern" browsers do), she has created her own CA root using openssl and prefers to sign her own certificates for the websites she uses. She does not "trust" other CAs; she only trusts the certificates she herself issues and signs. User A can thus, if necessary, monitor her own SSL traffic by using a small program running on her computer called a proxy. (No Python is necessary.) As with non-encrypted traffic, she can run tcpdump to see what is being sent to and from her computer and remote websites. She can then assess the implications and block, filter or redirect certain connections and sanitize the traffic if she wishes, with the help of DNS (because she runs her own root.zone and local cache) and her packet filter, also known as an in-kernel firewall.
Meet User B. User B has the same basic computer skills as User A, but User B uses the Chrome browser. Thanks to "certificate pinning", User B cannot monitor what is being sent from her computer to Google.
User B wants to see what is being sent to and from her computer by Google. Can she do this and still use Chrome?
My understanding is that Chrome only cares about this sort of thing when it's a CA in the default trust store that shouldn't be impersonating them. User-added certs like in your example are fine, because explicit action was taken.
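Right; as I understand it (this is a sketch of the rule, not Chrome's actual code, and the names are made up), the enforcement decision boils down to something like:

    PINNED_DOMAINS = {"google.com", "twitter.com"}  # illustrative only

    def should_enforce_pins(domain, root_is_user_installed):
        """Pins only apply to pinned domains whose chain ends in a built-in
        (publicly trusted) root; chains anchored at a root the user installed
        themselves (e.g. a corporate or debugging proxy CA) are exempt."""
        return domain in PINNED_DOMAINS and not root_is_user_installed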
Does this require a writeable, user-accessible file on the computer where the browser is installed? (versus, e.g., a gateway computer the user controls that can run openssl, the proxy, tcpdump and the packet filter)
If yes, how would the NSS solution work if the user is browsing from a device that hides and even tries to deny the user access to the filesystem, like one of today's smartphones or tablets?
People have to go. It's been a couple hundred thousand or a couple million years, depending on how you measure it, and the species has yet to prove itself. Destroy humanity!
We just need genetically modified humans, like we have genetically modified plants. Need to remove a few genes: like greed, jealousy, and probably a few others. But there is no ethical way to do this :(
There has always been and will always be government. From village elders to Persian Emperors, the Roman Senate to fascist dictators. People seem to like to form them, spontaneously, and if there is none in place then one will insert itself by force and with great bloodshed.
We need massive reform, we should be very careful about calling for revolution, but life without government in some form or other is next to impossible. It's like asking people not to be people any more.
If you look at history, a fair deal of truly massive reform comes only in response to the threat of revolution, as the incumbent elite seek to give up only enough ground to appease the masses and maintain control.
There are even economic models of this. I can't be bothered finding the citations though, apologies.
This is why the whole CA mechanism is fundamentally flawed. The only real way to move forward is going to be a web-of-trust model, which is admittedly harder and will result in messy situations, but at least ensures people can control whom they want to trust.
I'm not sure why I should trust the web of trust especially either. I don't think it's a bulletproof solution to the problem (if indeed there is one). For instance it could be very hard for a newcomer to the web to get trusted status, significantly delaying the time it takes to bring a new service online.
That doesn't need to be the case. You can have a weighted web of trust where a new party gets trusted because they were certified by an already trusted party (e.g. after presenting their passport and proof of ownership of a domain).
You could implement a weighted trust system where you're more trusted the more people trust you, and consequently if you "issue" trust it's counted as more trustworthy than issuing trust to yourself.
So you could have the equivalent of CA's in a system like that, but the list could be dynamic based on total trust on the network, instead of being issued by a static list of 100% trusted parties like the system we have now.
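As a toy illustration of what "weighted" could mean here (names and numbers are purely hypothetical; this is just a PageRank-style pass over a trust graph, not a proposal for the real thing):

    def trust_scores(edges, iterations=20, damping=0.85):
        """Toy weighted trust: each party splits its current score evenly among
        the parties it vouches for, so endorsements from widely trusted parties
        count for more than endorsements from unknowns (or from yourself)."""
        nodes = set(edges) | {n for vouched in edges.values() for n in vouched}
        score = {n: 1.0 / len(nodes) for n in nodes}
        for _ in range(iterations):
            new = {n: (1.0 - damping) / len(nodes) for n in nodes}
            for voucher, vouched in edges.items():
                if not vouched:
                    continue
                share = damping * score[voucher] / len(vouched)
                for n in vouched:
                    new[n] += share
            score = new
        return score

    # Two established parties vouching for a newcomer:
    print(trust_scores({"alice": {"carol"}, "bob": {"carol"}, "carol": set()}))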
> You could implement a weighted trust system where you're more trusted the more people trust you, and consequently if you "issue" trust it's counted as more trustworthy than issuing trust to yourself.
Nice try, but now write up your defense against a Sybil attack. Someone could play a long con, gain a lot of trust, and then cause a lot of damage.
Ultimately I think we need something like the blockchain publicly associating a domain with a private key. Namecoins I guess.
This works for a while, but you'd still end up with problems (albeit problems that self-heal relatively quickly) when a highly weighted trust-issuer decides to misbehave.
Yeah, and a system like that could be worse in some ways. Likely what would happen is that what are currently the CAs would become the most trusted parties in the system, and that trust would largely be derived from them trusting each other.
This is largely a matter of the game-theory dynamics behind it, but if one of them does something bad, are the other parties more or less likely to revoke trust? If they revoke trust easily, they're creating a dynamic where messing up in "minor" ways could destroy their whole business. The penalty for not revoking trust soon enough might be much too small to create a system better than what we have now.
I don't know, and I wonder if there's been any research on the various aspects of replacing the CA system with a trust-based system.
You two seem to be discussing a very centralised model of web of trust.
The main point is that each user should have their own trust graph, not that there is any single trust network that we all use. Individuals are the entities that make decisions, and any emergent authority that violates the trust of those individuals gets booted by enough of them that it ceases to be an authority.
>> Individuals are the entities that make decisions,
The fundamental problem here is that most individuals don't want anything to do with managing trust. In fact it's not even that; it's that they don't know what trust means, they have no interest in learning, and many of them are not even capable of doing so.
The problem that TLS and the authority system try to solve is "how do I set up a secure, trusted connection between two parties who have never met, one of whom has probably never even heard of a key pair". Individually managed trust graphs don't really help there. AFAICT.
>> any emergent authority that violates the trust of those individuals gets booted by enough of them that it ceases to be an authority.
Absolutely. But any system should be examined with game theory in mind, and I don't see that web-of-trust is necessarily immune, nor do I see that it pre-empts the kind of problem we see here: trusted parties acting badly for money/legal/government reasons.
I may be wrong, and would actually quite like to be.
Yes, which is how they found out about the fraudulent cert. They went ahead and revoked the entire intermediate CA because Chrome doesn't pin the whole internet, and if Google confirmed fraud on their own domain, they have to assume there are others.
Chrome's popularity and Google's use of pinning for their own properties are a pretty powerful combination for detecting MITM.
Can I somehow sniff the details of all certificates that pass through my (Linux) machine? I would be interested in seeing which CAs the services and websites I use rely on.
"we updated Chrome’s certificate revocation metadata immediately to block that intermediate CA"
Dumb question, why then does Chrome on my computer show "Google Chrome is up to date."? Shouldn't there be an update ready? Or is there a different (silent) way to update certificates?