Just revoking the intermediates is not enough. It tells the main culprits that it's OK to issue intermediates to do whatever, and then deny responsibility when it gets detected.
The only possible course of action is to hold the root responsible and to remove them from the trust store.
The CA system is fragile enough as it is, and only the prospect of immediately going out of business can deter CAs from doing shady stuff.
Yes. Revoking the root is annoying for customers of that root, but I'm sure other CAs will gladly offer a free or really cheap replacement program. And if the affected root is REALLY big, then just pre-announce the revocation a year or two in advance, just like what Google is doing with SHA-1 certs.
I have marked the CNNIC root cert untrusted on all systems I control. It bothers me sometimes when providers with a "good" reputation use it, like Azure China. I had to manually add the server certificate to my trusted list just to manage my VMs. Otherwise, I barely notice the issue.
It looks like even CNNIC certs themselves weren't widely deployed. Correct me if I'm wrong, but most Chinese sites that want to employ TLS don't do it right; for example, 12306 has its own root, and some sites I use daily have a self-signed CA with random input in all of the fields.
If you think the company that has agreed to collaborate with a Chinese company to make a different version of Skype (Tom-Skype), for the sole purpose of letting the Chinese government spy on Chinese citizens' communications in real-time, can be "trusted" to not have backdoored cloud services in China, then I have a bridge to sell you.
Microsoft has proven again and again that it's willing to make any concession to the Chinese government if there's the slightest chance of gaining an extra percent of market share in China. We saw that when Google was hacked by the Chinese government, too. Microsoft was more than happy to agree to all the censorship policies Google didn't, because maybe that would earn them some market share in China once Google was gone. It didn't anyway; Baidu took all the share Google lost.
This isn't just about earning market share in China. Just about any organization that operates in China (or hopes to) has a vested interest in, e.g., having reliable email access there. If your company uses Google products for email, etc., then these days you'll have some real challenges making them work for folks over there. Microsoft can capture a lot of American business that way. (Concrete fears about employees losing crucial access tend to outweigh philosophical arguments about government surveillance when folks sit down to make these decisions.)
I disagree; in this case the direct link is not a better URL. The Ars Technica article (as many issues as I have with them) correlates not only the Google alert but also follow-up conversations on Twitter and from Mozilla. This kind of correlation (and analysis) is the very definition of adding value.
It also clearly links to the Google blog, so it's hard to accuse them of trying to co-opt Google's content as their own.
The fact that it's CNNIC that has issued these dangerous certificates is not exactly relevant to the problem at hand; more than one root authority has made mistakes in delegation, and several have made the mistake of not checking the delegation bit and allowing third parties to request intermediate certificates.
Still, it bears repeating that CNNIC, which is effectively a branch of the Chinese government, has a root certificate trusted by default in all the major browsers. Unless you regularly visit Chinese websites, you should remove CNNIC from your browser trust list.
I'm on FF 36.0 on Kubuntu 14.10. I removed the certs for CNNIC and then, to test, went to the CNNIC website and rewrote the address as https. The website still showed the lock symbol and still showed the cert verified by the CNNIC root CA?!? The removal seems slightly glitchy somehow; the third time it worked.
Two things I notice:
1) There are a lot of suppliers trusted by default; it seems this should perhaps be selected on install (trust all, trust local [geographic], or trust by selecting regions).
2) Unlike with cookies, you don't get a record of how often a certificate (or CA) has been used, so I can't tell from looking at the certificate information FF holds whether I've ever used the dozen or so Turkish certs, for example; this seems like useful information for users that's not being displayed. I only use Turkey as an example because I don't use Turkish websites [I barely know a handful of Turkish words], nor, AFAIK, any Turkish company's English-language sites.
Why would I need to trust geographically and linguistically distant CAs by default? If I decide to do something with a .cn site that needs an https connection, it seems I should be able to get info like "these CAs, which you already trust, in turn 'trust' this CA, which certifies the site you are accessing". That, along with any warnings the browser wants to give on malware or phishing, would then feed into a decision to accept the cert and interact "securely" with the site in question. The sites I actually need secure transactions with are probably certified by fewer than a dozen CAs; trusting hundreds by default seems like poor security practice [to this layman].
Etsy built a tool to log CA certs at their network perimeter [1]:
During the two months we’ve had CAWatch in operation, we’ve seen only 61 unique CA certificates cross the wire. This accounts for slightly less than 29% of the 212 total CA certificates installed by default in our standard build.
If I'm understanding the meaning of "Bin Number" right, not all of the 0s are surprising. But some are. For instance, the AOL CA hasn't been used to sign any certs that have been seen. (I guess, in a sense, that's not really surprising...)
It'd be cool if someone better at front-end work than I am could present the data with useful labels, and also mark each CA by whether it's in the non-Mozilla roots.
You can look at the local telemetry for your current Firefox session, including this variable, by going to about:telemetry, which sort of gets you what pbhjpbhj was looking for, albeit in an inconvenient and limited fashion.
It is true that the sites people actually need secure transactions with are certified by fewer than a dozen CAs. The problem is that those dozen CAs are not the same dozen for everyone.
My gut agrees with you, but I found Ryan Sleevi's arguments on this thread interesting (the proposal at hand is to require, effectively, a *.cn name constraint on CNNIC):
"Another reason is it encourages the trust store to be used for regional or arbitrary distinctions ('not designed to be served by that CA'). This amounts to naught more than recognizing borders on the Internet, a somewhat problematic practice, for sure. That is, for every 'constrained' CA that you can imagine that ONLY wants to issue for a .ccTLD, you can also imagine the inverse, where ONLY a given CA is allowed to issue for that .ccTLD. The reasoning behind the two are identical, but the implications of the latter - to online trust - are far more devastating. [...]
"As it relates to online trust ecosystem, we can see these government CAs have either botched things quite spectacularly (India CCA) or been highly controversial (CNNIC). The arguments for CNNIC aren't 'Well, if they're only MITMing .cn users, that's OK', it's 'Well, they could MITM'. [...]
"Name constraints, as presented, give tacit approval to the CAs constrained to botch things, as long as they do so only in their little fiefdoms. But when these fiefdoms easily represent millions-to-billions of Internet users, especially in emerging markets, do we really believe that their
needs are being served?
"That is, in essence, why I think a change like this is so dangerous. It strives to draw borders around the (secure) Internet, and to acknowledge that what you do in your own borders, to your own users, is an issue between you and them. I don't think that's a good state for anyone to be in."
This doesn't directly address your proposal, that individual internet users remove CNNIC from their personal trust stores. But if we, the internet community, really believe that a branch of the Chinese government has no place in root stores, then we shouldn't allow the "weary giants of flesh and steel" to make that any different for those internet users who have the misfortune of living within or doing business within the borders of China.
I vaguely remember, from talking to some ICANN people, that there were talks of a system to restrict the domains that individual root CAs are allowed to sign for. I don't remember what happened to that.
I looked around, but couldn't find one. If you're running Linux, you can do it manually yourself (blacklist all, whitelist the ones you need) in about fifteen minutes. I posted the steps for Arch Linux somewhere further down.
FWIW, here's what I just did on my (Arch) Linux machine:
$ for f in /etc/ssl/certs/*.pem; do sudo ln -sfn "$f" /etc/ca-certificates/trust-source/blacklist/; done
$ sudo update-ca-trust
This will block all currently installed CAs (as well as double-block some, but that doesn't really matter). You then need to add them back in.
Restart your browser, and go to websites you access frequently (change them to https:// if necessary). Click the (broken) padlock and read off what CA they used; remove the corresponding .pem file from the blacklist directory. Some might be signed by intermediate certs and thus hard to find, but SSL Shopper has a great chain inspection tool at https://www.sslshopper.com/ssl-checker.html you can use to identify the topmost CA cert you need to whitelist.
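For example, un-blacklisting a root once you've identified it is just removing its symlink and rerunning the update as described below (the DigiCert file name here is only an assumption; check the exact names with "ls /etc/ssl/certs"):

# the file name below is hypothetical; use whichever root the site actually chains to
$ sudo rm /etc/ca-certificates/trust-source/blacklist/DigiCert_Global_Root_CA.pem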
After you're done, run "sudo update-ca-trust" again, and restart your browser. All normal sites should work, and you've gotten rid of ~160 root certs.
If it's of interest to anyone, here are the ones I whitelisted to get all the sites I bothered to try up and running:
EDIT: Note that this is not a perfect solution; the CAs you've whitelisted could still go bad, and you'll need to blacklist any new CA certs that are added with subsequent ca-certificates updates. But it's a start.
Be careful, this has no effect if you're using firefox/iceweasel. They bring their own set of trusted CAs and ignore all changes on the OS trust store.
You've said to start by blacklisting all SSL certs, then to inspect the SSL chain using SSL Shopper, but the link to SSL Shopper is using https. So (at least the first time) you need to inspect yet another SSL cert before you can inspect the SSL cert you were looking at. :P
(FYI, looks like SSL Shopper uses GoDaddy for SSL)
Action is really needed. A system that can be breached by any single CA that is more interested in money than in security (and at least one big CA has a bad history concerning security) is plainly broken. There should at least be some kind of overseer with the ability to intervene when bad behavior is reported.
In the current situation, it is impossible to trust this "system of trust". It sounds like handing the keys to my house to the thieves' guild to prevent burglaries.
This particular bad behavior, like entirely too many other breaches, was (probably) noticed because Google's own browser saw illegitimate but valid Google certificates, and alerted Google through its own channels. CT extends that to any other site that wants to participate.
CT also requires that CAs disclose every certificate that's signed, including those signed by intermediates that they give to third parties. This doesn't make legitimate use of intermediates much more onerous, but it makes MITMing-proxy use extremely logistically complicated, even if you felt like telling the whole world you were MITMing certificates.
Given the operational difficulty of implementing a transparently-MITMing proxy in a Certificate Transparency regime, I'm not sure you can say with certainty that it wouldn't have prevented this attack. Every time you want to MITM a new site, you need to contact some number of auditors before you can complete the connection. That sounds difficult to implement reliably and quickly enough for a MITM to work.
(Not to mention that most off-the-shelf MITM proxies will intentionally not implement this, since the use case for legitimate MITM proxying involves using site-specific CAs, not globally-valid CAs, so you'd have to deploy custom code to go talk to the auditors. And I somehow have doubts that a robust, black-hat MITM proxy solution will emerge, given that it probably will have only a handful of users at any given time.)
In any case, it is certainly not a perfect solution. But it is a solution.
I've read that okTurtles blog post before. Insofar as it points out that CT has limitations, it's generally right. But the okTurtles scheme is much worse: if a certificate gets compromised, the only recourse is to pick a new website name. I think it's likely that there is no perfect solution here. CT makes no claims for perfection, but it's a pretty good imperfect solution, and I think we need that more than we need a nonexistent perfect solution.
BTW: "Resist commenting about being downvoted. It never does any good, and it makes boring reading."
If Palo Alto isn't willing to implement CT, and I'm pretty sure they aren't because they're a legitimate company whose business isn't driven by people who abuse globally-valid certificates, then (regardless of whether SCTs can be forged in theory) that alone would have prevented the attack.
> Given the operational difficulty of implementing a transparently-MITMing proxy in a Certificate Transparency regime, I'm not sure you can say with certainty that it wouldn't have prevented this attack. Every time you want to MITM a new site, you need to contact some number of auditors before you can complete the connection.
1. Certs don't need to include SCTs, so, end of story.
2. Even if that was a requirement, they can be faked just like the certificate.
> In any case, it is certainly not a perfect solution. But it is a solution.
It doesn't prevent MITM attacks (even Google acknowledges that), so it's not a solution (if preventing attacks is what you want).
> But the okTurtles scheme is much worse: if a certificate gets compromised, the only recourse is to pick a new website name.
Certificates are not associated with the key to modify the blockchain entry. So if a certificate gets compromised, you can immediately fix it by updating the blockchain entry.
> BTW: "Resist commenting about being downvoted. It never does any good, and it makes boring reading."
If people downvote me for no good reason, I'll point it out to them. Doesn't matter to me if it bores them. If that's a problem maybe they shouldn't downvote in the first place? :P
I am confident that if DNSChain is such an obviously good solution, it will be evangelized by at least one person who doesn't complain about being downvoted, or claim that people clearly don't want security.
Are you Tao Effect? I've seen you on messaging@moderncrypto, and you have trouble staying on topic. I'd have been more inclined to give you the benefit of the doubt if you were some second person in the world who thought DNSChain was a good idea, but I've only ever seen one person evangelize this concept (in email, on the website, and now in HN comments). As general advice, if you try a bit harder to respect the conventions of the fora you use to evangelize things, no matter how reasonable or unreasonable you think the conventions are, you're likely to win more hearts and minds.
And you do want to win hearts and minds, don't you? Either you want this problem to be solved (and you have the perfect solution, if only people would listen), or you don't.
When you run out of legitimate arguments resort to personal attacks, got it.
The number of people on Earth who've spent the time studying Certificate Transparency in depth could probably be counted on two hands. You and I are two of them. That leaves ~8 other people (who are probably not reading these threads) to comment.
I'm grateful to everyone who's taken the time to support DNSChain, Namecoin, and related projects by writing about them, podcasting, tweeting, blogging, contributing code, etc.
I think that it's important that we find a solution to the CA problem. That's why I'm engaging you as earnestly as I can about CT and DNSChain in another subthread. (And please call me out if you think I'm being less than earnest or unnecessarily dismissive.) If CT is in fact flawed the way that you're saying, then it's important for not only me to understand that, but for everyone to understand that. If DNSChain is in fact a good solution, or even close to a good solution, it deserves the enthusiastic attention of way more than 10 people.
Therefore it's important to me that you not get downvoted. So I'm trying to help you get listened to, by telling you why I find myself wanting to reach for the downvote button—but the things you say are important enough that they deserve being engaged with, even if the manner in which you say it is frustrating.
> Therefore it's important to me that you not get downvoted. So I'm trying to help you get listened to, by telling you why I find myself wanting to reach for the downvote button—but the things you say are important enough that they deserve being engaged with, even if the manner in which you say it is frustrating.
Well, I appreciate that, thanks. I also appreciate that you decided to engage in actual honest discussion.
> 2. Even if that was a requirement, they can be faked just like the certificate.
I'm trying to understand how this works. (If they can be faked, then yes, that's fatal for CT, but I don't think the CT proponents acknowledge that, so either someone is very wrong, or the truth is subtler than that.)
The okTurtles blogpost, in the section about the first "inaccurate" claim, says, "The SCT (signed certificate timestamp) is pretty much irrelevant in this type of attack since the MITM either can order CA/log combos to do what they want, or they own and operate one of the 1200+ CAs out there (in a clandestine operation), or they’ve hacked their way into obtaining the CA/log private keys they need to conduct mass-surveillance on any website they want (undetected)."
The words "can order" are linked to a page about national security letters. It's true that CT does not protect you against an arbitrarily misbehaving hegemonic government, but if that's in your threat model, a lot of security analyses fly out the door. In this particular case, though, the malicious actor was not a government, so we can rule that one out. (Much as the Chinese government is an attractive target of blame, in this case, it seems like the misissuance was from a private corporation in Egypt. CNNIC voluntarily reported the issue to Mozilla; though it's unclear if Google alerted them first, clearly there was no governmental intention to MITM here.)
I don't understand what being a CA (either by actually being a CA, or by hacking into one) is supposed to gain you here. CT is designed to protect against CA misbehavior. If the argument is that it doesn't prevent a MITM, it just permits one to be detected, then yes, but that isn't faking an SCT. I'm happy to argue about whether detection vs. prevention is useful, but I just want to be clear that that's a different argument.
So that leaves "they’ve hacked their way into obtaining the CA/log private keys they need to conduct mass-surveillance on any website they want (undetected)". Assuming we're talking about "log" here out of "CA/log", then, first, this significantly raises the bar to an attack. This particular attack was done because a non-CA entity asked a CA to sign an intermediate, and lied to the CA about how it would be used. No hacking was involved. In order to conduct the same attack in a CT world, the entity would also have to set up their own log and get it trusted (who would trust it?), or ask the CA for the CA's private key (which would be a "no, and please stop being our customer"), or hack into the CA.
Second, that presupposes that the CT site's claim that log misbehavior is detectable is untrue in a fairly major way. (This is a reasonable claim to make, I'm just trying to figure out if that's the claim you're making.)
In any case, I don't see an argument that it can be faked just like the certificate. It's significantly more work, and involves suborning a legitimate log. Faking a certificate merely involves abusing an intermediate, which is a product that (unfortunately) gets sold for legitimate use, on trust. That trust cannot be abused to generate an SCT the way it can be abused to generate a signed certificate.
> It doesn't prevent MITM attacks (even Google acknowledges that), so it's not a solution (if preventing attacks is what you want).
It prevents some MITM attacks, by disincentivizing them or making them logistically complicated. It does not prevent all of them. It is not a perfect solution, but it is a partial and pretty good solution. I said in my initial post that it's not my favorite solution, but it's my favorite realistic solution.
In particular, CT would have completely prevented this attack.
> I don't understand what being a CA (either by actually being a CA, or by hacking into one) is supposed to gain you here. CT is designed to protect against CA misbehavior. If the argument is that it doesn't prevent a MITM, it just permits one to be detected, then yes, but that isn't faking an SCT. I'm happy to argue about whether detection vs. prevention is useful, but I just want to be clear that that's a different argument.
Being a CA (that has a log) allows you to have two different Merkle trees (one for secret MITM, the other for ordinary uses), as explained in the post. That is where even detection gets questionable.
But as also mentioned further down in the post, the attack will probably be simpler than that (identical to today's attacks). These attacks might be "detectable" after they've happened, but so what? So was this one. Damage will have been done.
> In particular, CT would have completely prevented this attack.
No, it wouldn't have, not even if SCTs were mandatory.
All the attacker has to do is send the cert to a log, and all CA-signed certs are accepted, so they would have gotten their SCT without any trouble at all. After some time, Google might query the log that it was sent to and discover the SCT and raise the alarm, but again, damage will have been done, and it's even possible that the MITM will prevent any fixes from making their way to the victims.
> These attacks might be "detectable" after they've happened, but so what? So was this one. Damage will have been done.
Well, in this particular case, there was presumably an intention to conduct the MITM long term. Buying a Palo Alto Networks device, paying for a (fairly expensive!) unconstrained intermediate, and being able to MITM a connection for a few hours is not very useful.
There are certainly attackers who want to MITM a specific site for a specific user for a short period of time, and CT may not solve that. It may be the case that no certificate-based system can solve that, because those attackers will prefer spear-phishing the user into downloading malware to reconfigure their browser. But this particular attack was not such a case.
What went (somewhat) wrong here is that the attacker believed that they could get away with it for quite some time, because the mechanism by which Chrome tattles on illegitimate certificates is proprietary and not well-understood as a part of the system. I suspect that if the attacker knew that this would not work for more than, say, a day, then they would have chosen a different, less-attacky route to solve their actual problem.
(Besides, the Palo Alto device would not have actually sent certs to a log. The attacker would have had to write their own MITM proxy, which is a tricky business. That rules out some but not all actual attacks, and if our metric is what actual attacks are prevented, then that counts for something.)
> All the attacker has to do is send the cert to a log, and all CA-signed certs are accepted, so they would have gotten their SCT without any trouble at all.
I concede that that is true; I implied otherwise because I didn't think hard about that part. Thanks.
> After some time, Google might query the log that it was sent to and discover the SCT and raise the alarm
I don't see why the CA wouldn't immediately query logs for all certificates that chain off of their intermediates, to verify that intermediates are only used in the way that people said they'd be used. (Remember that in this case, the customer lied to the CA and said it would only be used for sites the customer had control over.) I could easily imagine that, in a SCTs-are-mandatory world, it's also mandatory for CAs to run this little shell script if they want to be allowed to sell intermediates. It's just a technical formalization of something that's already in the Baseline Requirements.
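(A hedged sketch of what such a script could look like against the RFC 6962 log API; the log URL and the crude string match on the intermediate's name are assumptions, and a real monitor would parse the DER in each leaf properly:)

#!/bin/sh
# poll a CT log and flag recent entries that mention our intermediate (crude string match)
LOG="https://ct.googleapis.com/pilot"   # assumed log URL; any RFC 6962 log works
SIZE=$(curl -s "$LOG/ct/v1/get-sth" | jq -r .tree_size)
curl -s "$LOG/ct/v1/get-entries?start=$((SIZE-100))&end=$((SIZE-1))" \
  | jq -r '.entries[].leaf_input' \
  | while read -r leaf; do
      echo "$leaf" | base64 -d 2>/dev/null | strings \
        | grep -q "Example Intermediate CA" \
        && echo "a cert chaining off our intermediate was logged"
    done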
> it's even possible that the MITM will prevent any fixes from making their way to the victims.
That's certainly an interesting problem and I haven't seen much discussion of it.
Is it possible to configure the browser to fail closed if it hasn't reached its update server within some period of time? Is this something that CT can/should do? Can we standardize something like Chrome's CRLsets, and make the browsers require they be able to reach CRLsets somehow, either through the vendor or through a CT log?
What's DNSChain's solution here? Can a MITM pretend that the blockchain hasn't moved in a few weeks?
> I don't see why the CA wouldn't immediately query logs for all certificates that chain off of their intermediates, to verify that intermediates are only used in the way that people said they'd be used.
That's unlikely to happen (I suspect) because the amount of data that would need to be processed would be significant. Each CA would have to effectively mirror all of the logs out there, and the logs impose rate limits on queries. It would be interesting also to see what happens when a log gets DDoS'ed (we know OCSP servers aren't useful because of that problem).
> Is it possible to configure the browser to fail closed if it hasn't reached its update server within some period of time?
Sure.
> Is this something that CT can/should do? Can we standardize something like Chrome's CRLsets, and make the browsers require they be able to reach CRLsets somehow, either through the vendor or through a CT log?
Can? Yes. Should? Well, I suspect this would be yet another user experience nightmare that would therefore lead to little support for the idea.
> What's DNSChain's solution here? Can a MITM pretend that the blockchain hasn't moved in a few weeks?
Are you referring to the connection between clients and DNSChain or a blockchain and its network?
If the former, then a MITM's only option is to block connection to DNSChain, which would be treated as a MITM attack (appropriately).
If the latter, yes, a MITM could do that, and if PoW is being used by the blockchain this attack would be detectable instantly due to an immediate and significant drop in block difficulty. Even if it somehow wasn't detected, it still wouldn't allow the MITM to issue fraudulent certificates.
> Each CA would have to effectively mirror all of the logs out there, and the logs impose rate limits on queries. It would be interesting also to see what happens when a log gets DDoS'ed (we know OCSP servers aren't useful because of that problem).
Hrrrm. The data in a log is public; the act of logging needs to be done by the log server, but all output (STHs, audit proofs, etc.) is signed, and thus can be mirrored. A log could just push copies of its data elsewhere instead of self-hosting it.
For this specific use case, I'd imagine that at least some of the logs would be willing to push data to CAs. If we're really worried about this, demanding that logs push data to CAs doesn't seem like an onerous requirement to add to CT. (This starts to resemble MS's Certificate Translucency^WReputation.)
I have no personal investment in CT as the spec exists (I'm just an end-user); if there are realistic changes that would make it more robust I think we should push for them. (Another easy change that would rule out some of the attacks you're worried about is a mandatory 2×MMD delay on certificate issuance, at least for domains that have opted into such a delay, but this seems obvious enough that I assume people have already thought about its pros and cons.)
Naively, distributing log data seems like the same sort of problem as making a blockchain highly available (whether we're talking about Namecoin's, or Bitcoin's, or anyone else's). I'm not well-versed in how blockchains work at the protocol level; does DDoS cause problems there too?
> Are you referring to the connection between clients and DNSChain or a blockchain and its network?
I'm assuming that, in a scenario like the one at hand here (a corporation wanting to MITM all its traffic), the victims are using the attacker's DNS server and are blocked from using any others. That's pretty common, even in places that don't try to MITM your SSL connections; the sort of appliance at issue here can also do things like block known phishing/malware sites, but not try to modify or log your traffic. In a DNSChain world, perhaps that DNS server will speak the DNSChain protocol, but lie about things to the extent that it can (it sounds like it can make arbitrary changes!???). Or if each client runs a DNSChain resolver itself, the attacker will intercept all blockchain traffic and lie about that to the extent it can.
I think I buy that you can notice that the block difficulty is suddenly remaining constant and there's something unusual with that. I'd be worried that a network-wide formal specification would have to account for, say, a good part of the network getting bored and the hash rate actually dropping, causing a worldwide DoS. But intuitively, it seems good enough.
I don't think the UX concerns are particularly different between DNSChain and something more traditional and CT-like, are they? For DNSChain resolvers on a firewalled network, someone needs to relay blockchain activity or the resolver will shut itself down; if you can do that, you can also relay CT log activity (which must be re-signed every MMD), browser updates, CRLset pushes, etc. and maintain liveness.
So, I'm realizing that this whole conversation is somewhat silly because blockchains already provide CT and they do a better job of CT than Google's CT.
If you'd be interested in reading a draft of a blog post on this topic, please shoot me an email (see my profile).
Lots of good people taking action on this. There are solutions that totally solve this problem. Talk about them here, though, and you'll get downvoted (just see my comments).
It means either: people want security theater, or they're completely ignorant about the topic.
I think people have different definitions of "solutions that totally solve this problem". If your solution totally solves this problem technically, but is unlikely to be deployed in the real world, it is not a solution that totally solves this problem in my book. Although I wouldn't downvote such claims, I can understand people downvoting "total solution" claims if the solution clearly is not (according to the above definition).
> If your solution totally solves this problem technically, but is unlikely to be deployed in the real world, it is not a solution that totally solves this problem in my book.
The solution is deployed in the real world for some websites right now, and it would take Google less effort to implement for their websites than the effort they're putting into CT.
I suspect these comments are getting downvoted for reasons that have nothing to do with technical (or social) merit.
In my opinion, web browser extensions are not a viable way to deploy the solution to this problem. As far as I can tell, DNSChain has no buy in from web browser vendors. That makes it undeployable.
It is irrelevant (or not very relevant) that it would take less effort than CT for Google. What is relevant is that Google is willing to implement CT, and not willing to implement DNSChain. Yes, this has nothing to do with technical merit, but it has a lot to do with actual merit of the solution in improving the current situation.
> In my opinion, web browser extensions are not a viable way to deploy the solution to this problem. As far as I can tell, DNSChain has no buy in from web browser vendors. That makes it undeployable.
One of the reasons why browser vendors are having a tough time actually fixing this problem is because CAs make a lot of money off of selling SSL certificates.
We're working to remove obstacles from their way by making it easier for them to support auth systems that do not rely on today's broken system.
> What is relevant is that Google is willing to implement CT, and not willing to implement DNSChain. Yes, this has nothing to do with technical merit, but it has a lot to do with actual merit of the solution in improving the current situation.
Google hasn't made up its mind on DNSChain-type solutions.
Remember, CT wouldn't have prevented this attack. If they actually want to prevent such attacks, they have no choice but to actually fix the problem.
Central certificate authorities, in which all web users put their trust in exchange for money paid to the CA, are a broken security model! Especially in the new internet security era, where we have government hackers with almost unlimited budgets. Governments will hack into CAs and issue fake certificates for popular domains that they want to man-in-the-middle attack, and against that we have no protection. I think the solution is a voting mechanism or digital ID, so that an engineer working at Google can sign the certificate with a corporate Google ID.
Central CAs are almost as broken, security-wise, as credit cards with the card number readable in cleartext on the front of the card.
If they implemented and enabled DANE in Chromium, then we could detect and render this mis-issuance harmless for our domains via PKIX-TA Certificate Authority Constraints, instead of playing Whac-A-Mole.
Has anyone gone through the list of root certificates that are trusted in the browsers/OSes and done some homework on them? It would be interesting to see what's behind those authorities and also which certificates they have issued.
I see that in the OS X keychain I have 213 "system roots" that are trusted, I wonder how many of them I really need...
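For reference, a rough sketch of how to dump and inspect them yourself on OS X (the /tmp paths are just scratch files):

$ security find-certificate -a -p /System/Library/Keychains/SystemRootCertificates.keychain > /tmp/roots.pem
$ grep -c 'BEGIN CERTIFICATE' /tmp/roots.pem   # should roughly match the keychain count
$ awk '/BEGIN CERT/{n++} n{print > ("/tmp/root_" n ".pem")}' /tmp/roots.pem   # split the bundle
$ for f in /tmp/root_*.pem; do openssl x509 -in "$f" -noout -subject; done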
I've always had the suspicion that my PC had some sketchy certs installed... what do you expect when you buy a PC with the OS pre-installed from an electronics retailer, right?
So what do we do to fix the broken CA system? Particularly in a case where a bogus certificate is issued by an intermediate CA that rolls up to a root CA that is included in all the major browsers.
It seems to me that we have several building blocks available and it may be a combination we need:
1. CERTIFICATE PINNING (HPKP) - example: http://tools.ietf.org/html/draft-ietf-websec-key-pinning-21 - if a browser had previously pinned the certs of affected sites then it would have known that it was going to bogus sites and could have displayed a warning or prevented the user from going to those sites.
The challenge with pinning is that somehow you have to learn what to pin before you visit a site, either by having the pinset or certs pre-loaded into the app/browser or simply by doing Trust On First Use (TOFU). Additionally, if the first use is to the site controlled by an attacker, the cert that gets pinned in the browser could in fact be the attacker's cert, which could make recovery difficult.
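(As a rough sketch of the operator side, the pin is the base64 SHA-256 of the site's SPKI, which can be computed with openssl and then published in the Public-Key-Pins header; example.com is a placeholder:)

$ openssl s_client -connect example.com:443 -servername example.com < /dev/null 2>/dev/null \
    | openssl x509 -pubkey -noout \
    | openssl pkey -pubin -outform DER \
    | openssl dgst -sha256 -binary \
    | base64
# the resulting value goes into a response header along the lines of:
#   Public-Key-Pins: pin-sha256="<value>"; pin-sha256="<backup pin>"; max-age=5184000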
Which brings us to...
2. DANE/DNSSEC - http://www.internetsociety.org/deploy360/resources/dane/ - with DANE (RFC 6698) you put a fingerprint (or an entire cert) into a TLSA record in DNS and then sign that with DNSSEC. A browser or app that supported DANE could deal with the TOFU issue by doing a DANE check to verify the cert that it is receiving from the web server. This would then effectively bootstrap the cert pinning process by providing a way to test the validity of the cert.
Because the DANE TLSA records are entered by the owner/operator of the domain and then through DS records are tied into a validated chain-of-trust up to the root of DNS, it would be difficult for an attacker to subvert this with his/her malicious TLSA records.
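(A sketch of what that looks like for HTTPS on a placeholder domain, using usage 3 / selector 1 / matching type 1, i.e. DANE-EE over the SPKI hash; other usages such as PKIX-TA work similarly. Anyone can fetch and validate the record with a DNSSEC-aware resolver:)

_443._tcp.www.example.com. IN TLSA 3 1 1 <hex-encoded SHA-256 of the SPKI>

$ dig +dnssec TLSA _443._tcp.www.example.com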
But we have a third layer to help...
3. CERTIFICATE TRANSPARENCY (CT) - http://www.certificate-transparency.org/ - CT provides a way to log all the issued certs and have browsers/apps check those logs through an auditor. When the browser gets the TLS cert, it also gets a signed cert timestamp (SCT) and can use that through an auditor component to check if the cert is valid. The challenge is that right now only some CAs log issuance of certs and only some browsers/apps check via an auditor. (I also don’t understand myself exactly how real-time CT is, but that may just be that I need to understand the SCT mechanism better.)
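(As a rough way to get a feel for how current a log's published view is, you can fetch its current signed tree head; an SCT'd certificate has to be incorporated within the log's maximum merge delay, so the published view can only be that stale. The log URL below is just one example:)

$ curl -s https://ct.googleapis.com/pilot/ct/v1/get-sth | jq '{tree_size, timestamp}'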
To me these three different components can work together to provide a higher degree of trust: cert pinning to help with the speed of connecting to frequently-visited sites; DANE to help with the first use/key learning issue[1]; and CT to provide another means of checking the cert validity.
Comments?
[1] And yes, DANE could be used to store entire TLS certs separately, but right now I’m looking at how these pieces could be used together to give the maximum efficiency and security.
Goodin can't help but say borderline moronic things every once in a while:
Defenders of the current system for acquiring and revoking TLS certificates have recently chafed in response to statements from this author that it's hopelessly broken. Besides remembering that almost all of these critics have a strong financial interest in the way the system works now