This is something I've been talking about for a while. Back in 2009 I gave a presentation at Virus Bulletin on JavaScript security problems and highlighted some statistics on remotely loaded JavaScript:
1. 47% of the top 1,000 web sites include google-analytics.com
2. 69% include a remotely loaded web analytics solution
3. 97% load something remotely
If you can attack any of these, you get access to a very large number of web sites and can inject arbitrary code. Forging the SSL certificate used for securely loaded remote JavaScript is clearly one way in; another is attacking the DNS of remote JavaScript loaded over plain HTTP.
At the time, techcrunch.com loaded 18 different JavaScript elements remotely. Attacking any one of them would have allowed a complete site takeover using JavaScript. And those 18 elements could easily have been loading further elements of their own, so the attack could have come in indirectly, through yet another third party.
A quick survey in the UK shows that HSBC, Lloyds TSB and Royal Bank of Scotland all load third-party JavaScript on the secure page used for online banking login. Barclays looks like it does not, but in fact the domain it uses for one piece of JavaScript is a CNAME for a third party.
Yet people laugh at me for being a paranoid, silly tin-foil-hat nerd when I tell them that I browse with JavaScript disabled by default, and that if a site requires it I am more likely to simply close the tab and move on than to enable it.
Why I use NotScripts: the extension lets you enable JS temporarily or permanently for a particular website with a single mouse click. At the same time, you can do the same for any third-party JS, because it presents you with a list of those scripts. So you can enable jQuery and keep the Google tracker disabled at the same time.
It's much easier than navigating the menu system to add an exception to the list of blocked sites every time you just want to read a PDF, for example. That's the main use case for me.
After I started using NoScript, I was amazed at how many sites load Javascript from other sites. Google-analytics is common but I have noticed more and more sites are trying to load Facebook scripts as well.
Happily, my bank doesn't load Javascript from anywhere else.
> After I started using NoScript, I was amazed at how many sites load Javascript from other sites.
Not only that, but I was surprised by how much paranoia plugins like NoScript and RequestPolicy induce in their own right. And not entirely without cause, either: sites that load 5 different analytics systems are not uncommon, and I've seen 10 different ad providers on a single page.
There's an easy solution here: load Google Analytics locally. There's no urgent need to load ga.js from Google's servers; there are benefits to doing so (speed, use of the client's cache, and automatic updates), but its core functionality does not depend on where ga.js comes from.
Then, the only resource loaded from Google's servers is http://ssl.google-analytics.com/__utm.gif, and that's just loaded via a new Image(), so even if you MITM that resource request, it doesn't execute as a script or anything similar.
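For anyone who wants to try this, here is a minimal sketch of what the self-hosted variant looks like, assuming you periodically mirror ga.js to your own server; the /js/ga.js path and the UA-XXXXX-X account ID are placeholders.

    // Standard asynchronous Google Analytics setup, except that the tracker
    // script is fetched from our own origin instead of google-analytics.com.
    // "/js/ga.js" is a locally mirrored copy and "UA-XXXXX-X" is a placeholder.
    var _gaq = _gaq || [];
    _gaq.push(['_setAccount', 'UA-XXXXX-X']);
    _gaq.push(['_trackPageview']);

    (function () {
        var ga = document.createElement('script');
        ga.type = 'text/javascript';
        ga.async = true;
        ga.src = '/js/ga.js';   // same-origin copy, not Google's servers
        var s = document.getElementsByTagName('script')[0];
        s.parentNode.insertBefore(ga, s);
    }());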
The DigiNotar hack joins the Comodo hack in the recent run of attacks on certificate authorities. The head of Comodo blamed that attack on "a sophisticated state actor", a.k.a. Iran.
Moxie Marlinspike pointed out that it was his script 'sslsniff' that the hackers downloaded to carry out the attack. They didn't even change IPs from the one they used to download 'sslsniff' to the one used in the attack. The lesson: this could have been carried out by a script kiddie.
Heads of security companies implying that hacking attacks must be the work of a state actor, simply because they don't understand the attack, is a frightening prospect for the future of world security. Take these claims with a grain of salt. So much for 'sophisticated state actors'.
I agree. The "Iran Forged ...." heading is a bit much. No one really has any idea whether this was state sponsored/encouraged or not. One cannot discount a single, self-motivated actor in this case. In fairness, the article does mention the presumption. I would be happier if they just said someone from within Iran.
Quite true. If they did get a certificate for ssl.google-analytics.com, I guess the title of my post should have been "We're paying attention to the wrong forged SSL certificate" -- the contents of the post are still valid, though.
Colin, I'd like it if you'd stop muddying the waters about my take on SSL/TLS.
I am not a fan of the HTTPS/TLS CA system. You know I'm not.
I am a "fan" of TLS, as much as anyone can be a fan of a protocol. Most if not all of the smartest crypto protocol people in the world have taken shots at TLS. Roughly once every 3-5 years, one of them finds a new vulnerability in TLS, which, when fixed, makes the protocol stronger. That's gone on for roughly 15 years now, making TLS the soundest cryptosystem available to developers on the whole Internet.
You don't like TLS. I get it. You think TLS is too complicated, that it has too much negotiation and too much statekeeping to reason about its security. I think that's a reasonable position to take.
You used to advocate that people write their own encrypted transports to avoid using TLS. You don't do that so much anymore. When you used to advocate that, I yelled about it, because you were wrong. Predictably, when people† write their own encrypted transports, they make grave errors that cause their cryptosystems to blow up.
Now you advocate that people use things like spiped, your new encrypted transport. That's fine too. I'm not recommending it, but wouldn't flag it if I found it on an engagement.
There you have the entirety of our engagement on the issue of SSL and TLS. Note how the Internet trust model doesn't factor into it? That's because we agree on that issue and there is no reason for us to argue about it.
> I am not a fan of the HTTPS/TLS CA system. You know I'm not.
Yeah, but to 99.99% of people out there, the existing CA system is an integral part of TLS. Every time you tell people to use TLS, you're (inadvertently) encouraging them to trust the certificate authority structure.
Correct me if I'm wrong, but haven't you also attacked TOFU POP as being worse than the CA system? If I understand correctly, it was a limited form of TOFU POP that caught this attack. (And it seems like TOFU POP MONK would fix the remaining weaknesses of TOFU POP.)
It seems to have worked better in this case. It also would have worked better in the Comodo case, but it wasn't deployed yet. It certainly works better for the intranet case. I can't identify the case where it doesn't work better. Can you help?
As far as I can see, the fundamental problem isn't SSL itself, but the fact that most environments come pre-installed with certificates for CAs that aren't really worthy of trust.
[Edit: Certainly looking through the list of Trusted Root CA certs on this machine I have no idea who 95% of these organisations are - I also have a certificate installed by a proxy so it can intercept any SSL traffic and inspect the contents].
Unfortunately, SSL's PKI is a fundamental part of how people use it. That said, I would agree with you if you were to say that there's nothing fundamentally wrong with the TLS protocol spec itself, aside from it probably being a bit more complex than we really need.
The browser PKI is not a fundamental part of how SSL/TLS is used in non-browser applications. For instance, enterprise software that uses TLS routinely relies on static access lists (for example, of digests of self-signed certificates) to authenticate connections.
The reason browsers have the crazy PKI model is that browser SSL/TLS has to scale to the entire Internet and allow new sites to come online with only days or hours or minutes of advance warning.
It is perfectly possible to use TLS without relying on Verisign. Nothing in the protocol depends on Verisign. The protocol was built in such a way that you can run your own CA, or run no CA at all and have your system manually manage self-signed certificates.
Browsers won't run without the Verisign/Thawte CA system. That's not an SSL/TLS problem; that's a browser problem. Browsers exist in a complicated ecosystem involving banking and credit cards, cooperation between hostile software vendors, and the most massive installed base of users in the history of the world.
Don't conflate the problems that browsers have with the attributes of the SSL/TLS protocol. If you need to create a new kind of encrypted transport between two endpoints on the Internet and choose almost anything other than SSL/TLS, you might as well write your own block cipher while you're at it.
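To make the "static access lists" point above concrete, here is a rough sketch of digest pinning outside the browser, using Node.js's tls module. The host, port, and pinned fingerprint are made-up values for illustration only.

    // Sketch: authenticate a TLS peer against a static list of known
    // certificate fingerprints instead of a CA bundle. Host, port and the
    // pinned digest below are placeholders.
    var tls = require('tls');

    var PINNED = [
        'AB:CD:EF:12:34:56:78:90:AB:CD:EF:12:34:56:78:90:AB:CD:EF:12'
    ];  // fingerprint(s) of the expected (possibly self-signed) certificate

    var socket = tls.connect({
        host: 'backend.example.com',
        port: 8443,
        rejectUnauthorized: false   // skip CA-chain validation; we pin instead
    }, function () {
        var cert = socket.getPeerCertificate();
        if (PINNED.indexOf(cert.fingerprint) === -1) {
            socket.destroy();
            throw new Error('Unexpected certificate: ' + cert.fingerprint);
        }
        // Fingerprint matches a pinned value; the peer is who we expect.
        socket.write('hello\n');
    });

Whether you also verify a chain depends on the deployment; the point is just that the trust decision lives in your own list rather than in dozens of root CAs.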
Hell, for DOD systems on secure networks, you're required to remove all of the non-DOD root CAs. No DigiNotar or GoDaddy or the hundreds of others allowed.
DoD systems on secure networks shouldn't have IP connectivity outside DoD, though. The only issue is code signing keys for activex/java. (which really shouldn't exist on DoD secure networks either, but they've fully drunk the MS kool-aid)
I'm not a big fan of handing over the security of my website to third parties by letting them inject arbitrary code into my pages, eg Google Analytics. A lot of people seem to do it without giving it any consideration though.
You have to weigh up the pros and cons, I agree. However, do you need that like button which works by including javascript from facebook.com, or can you live without it? Even better, can you do something alternative which allows you to have a like button, but without including third party script?
> can you do something alternative which allows you to have a like button, but without including third party script?
How about creating a JavaScript library that sandboxes execution of third-party scripts by loading them in iframes based off of a different domain? This would allow site owners to embed Google Analytics or FB Like buttons without worrying about the third-party scripts getting compromised or becoming malicious.
> How about creating a JavaScript library that sandboxes execution of third-party scripts [...]
There has been some work done in this direction. I don't know how active the project is, but it's called ADsafe (http://www.adsafe.org/). It's a subset of regular JavaScript that doesn't allow access to global variables or the DOM, instead providing an ADSAFE object that limits what the script can do.
ADsafe makes it safe to put guest code (such as third party scripted advertising or widgets) on a web page. ADsafe defines a subset of JavaScript that is powerful enough to allow guest code to perform valuable interactions, while at the same time preventing malicious or accidental damage or intrusion. The ADsafe subset can be verified mechanically by tools like JSLint so that no human inspection is necessary to review guest code for safety. The ADsafe subset also enforces good coding practices, increasing the likelihood that guest code will run correctly.
Some of the things removed:
- Global variables: Limited access to Array, Boolean, Number, String, and Math is allowed.
- Dangerous methods and properties: arguments, callee, caller, constructor, eval, prototype, stack, unwatch, valueOf, watch
- Date and Math.random: Access to these sources of non-determinism is restricted in order to make it easier to determine how widgets behave.
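For a sense of the shape of guest code (this is written from memory of the ADsafe docs, so treat the dom API details as approximate): a widget lives in a div whose id ends in an underscore, and its script only ever sees the restricted dom and lib objects that ADSAFE.go hands it.

    // Hypothetical ADsafe-style widget script for a <div id="GUEST_"> block.
    // The guest code never touches window, document or eval; ADSAFE.go passes
    // in a restricted "dom" facade scoped to the widget's own subtree.
    ADSAFE.go("GUEST_", function (dom, lib) {
        // "dom" offers a small query/event/value API instead of the real DOM,
        // so even a malicious widget cannot read cookies or rewrite the page.
        dom.q("input").on("click", function (e) {
            // react to the click using only the capabilities ADsafe grants
        });
    });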
There's a difference between the iframe version of Facebook's like button and the XFBML version. The XFBML version, for various reasons, is preferable to the iframe version (e.g. say you want to subscribe to the edge.create event to determine if someone clicked on the like button).
Now, if you want to add the XFBML version of the like button, you'd have to embed Facebook's JavaScript SDK script (https://connect.facebook.net/en_US/all.js) in your site. If connect.facebook.net ever gets compromised via a fake SSL certificate, your site will also be compromised.
Instead of letting third-party scripts run on your main site, it may be safer to let them run within an iframe based off of a different domain so that a compromised third-party script doesn't compromise your main site.
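A minimal sketch of that idea (widgets.example-sandbox.com and the like-slot element are made-up names): the main page only ever creates an iframe pointing at a throwaway domain, and that domain is the only place Facebook's all.js ever runs.

    // Sketch of the iframe-isolation idea. The sandbox domain serves nothing
    // but third-party widget pages; if a widget turns malicious, the
    // same-origin policy keeps it away from the main site's cookies and DOM.
    function embedSandboxedWidget(container, widgetPath) {
        var frame = document.createElement('iframe');
        frame.src = 'https://widgets.example-sandbox.com/' + widgetPath;
        frame.width = 120;
        frame.height = 30;
        frame.frameBorder = 0;
        frame.scrolling = 'no';
        container.appendChild(frame);
    }

    // like.html on the sandbox domain would itself load Facebook's all.js and
    // render the XFBML button, completely outside the main site's origin.
    embedSandboxedWidget(document.getElementById('like-slot'), 'like.html');

The trade-off is that the page and the widget can then only talk via postMessage, so conveniences like the edge.create subscription mentioned above take extra plumbing.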
It is indeed based on Perspectives. But the implementation provided with Perspectives does not preserve privacy: if you contact a notary with Perspectives, then that notary sees your browsing history. See 36:50 in http://www.youtube.com/watch?v=Z7Wl2FW2TcA
Very interesting, thank you. It looks like it solves very significant problems in a very good way (I'm amazed the Perspectives guy missed them, especially sending the certificate to Perspectives like Convergence does).
I really, really hope this catches on and gets built into browsers...
It relies on people setting up notaries that you can specify you trust. There are many organizations I trust: the Tor Project, the EFF, my university, the local hackerspace, etc. If they ran notaries, I would specify that I trust them. If an SSL authority/notary is hacked, you remove it from the list that you trust. At the moment, trust is not agile: browsers specify in advance which authorities are or are not to be trusted.
This project is in its infancy, so get involved, set up a Notary, contribute on GitHub.
I don't see how this is different (even after reading the blog above), other than reducing the initial input list of CAs. Today, if a CA gets hacked, I pull them out of my trust-chain. Either way, I have to pay attention. Help me understand how it solves this, because I do think SSL is currently quite broken and would like to see a solution.
In this case DigiNotar is being removed from browsers because nobody that lives in Mountain View happens to visit sites signed by DigiNotar. And aside from being Dutch, they're also unusually small (they only made 100k in revenue from certificate sales this year).
This is not the common case. There was a very similar incident with Comodo in March, and they weren't removed. This is because Comodo certifies some non-negligible portion of the internet (between 1/4 and 1/5th of certificates), and so removing them would break a lot of things.
The same is true for VeriSign, Thawte, Comodo RAs, Geotrust, Equifax, etc...
I don't trust any of these parties, and yet I kept them in my trust DB for years, because without them the internet was unusable.
What Convergence aims to do is make this kind of trust agility even easier than it was in the DigiNotar case, which itself was an unusually simple case for the CA model. It also aims to invert the trust relationship, and put trust decisions fully in the hands of the client.
I can appreciate that, I'm just not sure I understand how that will happen. I don't have a direct trust relationship with the vast majority of the internet, so I need to put my trust in somebody else I have a closer relationship with.
Right now, I trust the browser/OS vendors with the ability to black-list individual CAs (or white-list, as the case may be). In the "trust agility" model, I just have to choose somebody else I trust, right?
Maybe as a technical person who spends time in the security world, I can figure out who that should be, but isn't the average person going to find themselves in the same situation (trusting the browser/OS provider)?
Perhaps the better way to phrase this question is thus: How does this prevent 1/4th of the SSL Internet from going down when Comodo gets hacked?
The problem is that right now, in the common case, the browser/OS vendors can't black-list individual CAs. Their ability to do so with DigiNotar is exceptionally rare, and would not be possible most of the time.
Trust agility ensures that clients have the ability to make these trust decisions easily. A client does not necessarily have to be a user, it could still be the browser/OS vendors. For details on how Convergence works, in order to answer your question of how it prevents 1/4th of the SSL internet from going down when Comodo gets hacked, the best reference is (unfortunately) still the presentation: http://www.youtube.com/watch?v=Z7Wl2FW2TcA
The presentation cleared things up marvelously. It may be worth adding the presentation to the convergence.io details page, even if it was just a clip of the last few minutes where you talk about notaries. Once you went through that, everything cleared up.
Sooner or later it's going to happen; obtaining forged SSL certificates is just too easy to hope otherwise. What can we do about it? Don't load the Google Analytics javascript when your site is accessed via HTTPS. This is easy to do: just throw an if("http:" == document.location.protocol) check around the document.write or s.parentNode.insertBefore code which loads the Google Analytics javascript. On the website for my Tarsnap online backup service I've been doing this for years — not just out of concern for the possibility of forged SSL certificates, but also because I don't want Google to be able to steal my users' passwords either!
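For anyone who wants to copy this, here is roughly what the guarded version of the standard asynchronous snippet looks like; UA-XXXXX-X is a placeholder, and the details may differ slightly from whatever snippet Google is currently handing out.

    var _gaq = _gaq || [];
    _gaq.push(['_setAccount', 'UA-XXXXX-X']);
    _gaq.push(['_trackPageview']);

    // Only pull in Google's script on plain-HTTP pages; on HTTPS pages this
    // does nothing, so a forged cert for ssl.google-analytics.com gains an
    // attacker nothing on the pages that matter.
    if ("http:" == document.location.protocol) {
        (function () {
            var ga = document.createElement('script');
            ga.type = 'text/javascript';
            ga.async = true;
            ga.src = 'http://www.google-analytics.com/ga.js';
            var s = document.getElementsByTagName('script')[0];
            s.parentNode.insertBefore(ga, s);
        }());
    }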
I don't understand - if you are uncomfortable loading the GA javascript into your pages when users are using https to visit your site, why are you ok with loading the GA JS when visitors are using http?
Or is it implied in here that the analytics is used on http only pages because the sensitive pages on your site are https only? In other words, you are only using GA on non-sensitive portions of your site?
How come just one CA is enough to vouch for a domain, especially one as important as *.google.com? It seems like it's only a matter of time before something like this happens again.
The worst breakage in the HTTPS/TLS security model is the fact that every CA is a full peer to every other CA and can sign anything. Combine that with the fact that CAs are allowed to have resellers and the whole thing breaks down predictably.
It doesn't have to be that way. SSL/TLS libraries, for the most part, only verify that the certificate chain is properly signed all the way to the root. That doesn't mean the browser trust system is limited to that! After certificates are verified, it should be straightforward to apply additional policies, such as "Colin Percival does not trust certificates from this CA with the exception of these three domains which unfortunately rely on it, but Colin and all his friends are also helpfully monitoring the fingerprints of the known good certs for those domains".
You don't need permission from the IETF, IANA, Mozilla, or Verisign to build this. You just have to build it and get people to use it.
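As an illustration only (every name and fingerprint below is hypothetical), the extra policy layer could be as simple as a table consulted after normal chain validation has already succeeded:

    // Hypothetical post-validation policy: the chain already verified, now
    // apply local restrictions on which CAs may vouch for which hosts, plus
    // fingerprint monitoring for a few known-good certs.
    var policy = {
        caWhitelist: {
            // this CA is only trusted for these specific domains
            'Some Dubious CA': ['example.nl', 'www.example.nl']
        },
        pinnedFingerprints: {
            // known-good fingerprints that my friends and I are monitoring
            'www.example.nl': ['AB:CD:EF:12:34:56:78:90:AB:CD:EF:12:34:56:78:90:AB:CD:EF:12']
        }
    };

    function extraChecks(host, issuerName, fingerprint) {
        var allowedHosts = policy.caWhitelist[issuerName];
        if (allowedHosts && allowedHosts.indexOf(host) === -1) {
            return false;   // CA is outside its permitted set of domains
        }
        var pins = policy.pinnedFingerprints[host];
        if (pins && pins.indexOf(fingerprint) === -1) {
            return false;   // cert differs from what everyone else is seeing
        }
        return true;
    }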
Moxie Marlinspike is working on an idea similar to this at CONVERGENCE.IO.
The idea is that all CAs are treated as fully trustworthy. The public keys for the CAs' certs are loaded into browsers. When a specific SSL cert is created, the private key is generated by the buyer (and never revealed); the corresponding public key is then sent to the CA and signed. In this way a link of trust is created between the CA and whoever bought the SSL cert.
Now, when a browser goes to a website over SSL, it asks that site for its cert, which contains the site's public key and the CA's digital signature. If the cert is signed by a CA that the browser trusts, the browser in turn trusts that specific cert. The browser then uses the public key to encrypt a message to the site containing the key material used to encrypt return transmissions to the browser.
The point being: all CAs are on an equal footing and fully trusted. The moment any one CA is no longer trustworthy, or the moment its private key is no longer secure, the whole system fails.
That said, Chrome will only accept one particular CA when verifying the signature for google.com (which is one of the reasons the fraudulent cert was detected).
Utilities > Keychain Access > System Roots (left) > All Items > find "DigiNotar Root CA" > right click, Get Info > expand Trust > When using this certificate: Never Trust
The average person on the coffee shop wifi isn't an SSL CA, and that's what "always on https" is defending against. SSL doesn't protect you from someone breaking into your house with a gun and forcing you to reveal your email archive. But that doesn't mean it's not useful.
I'm aware of this; however, I'm just pointing out the ironic timing of these exploits.
You teach people to fear one thing, and in this case, they leap head first into something even further beyond their comprehension. They need to start teaching Internet 101 classes in middle school.
Looks more like a general statement than advice. Most users would tie their dick in a knot if the browser let them. Tell people to use NoScript, at the very least for the XSS countermeasures.
Slightly off-topic: can anyone explain how DigiNotar revoking the wrongly issued certificate works?
As I understand it, the browser simply trusts all certificates issued by a trusted issuing authority, so how would you revoke a single certificate?
1. You add the certificate to your Certificate Revocation List.
2. You pretend that people will check the CRL before trusting the forged certificate, ignoring the fact that some clients only check for updates to the CRL periodically and most don't check CRLs at all.
Yes, certainly. This is a constant battle I'm having with marketing/SEO people: "Just drop this code into all your pages". Not that I can offer much of an alternative, so I end up caving in...
If the ad code is inserted via JavaScript, then yes, the problem is real. Most ad code is inserted via JS, e.g. Google's AdSense.
But according to https://www.google.com/adsense/support/bin/answer.py?answer=... AdSense isn't available over HTTPS, so this specific problem of forged SSL certs does not apply here. And if you embed non-SSL code in your HTTPS page (and I assume most users just ignore the warning that pops up in this case, alerting them that non-SSL content is being loaded into the "secure" site), there's no need to forge a certificate at all: just do the MitM attack.
Just by having a forged SSL certificate for ssl.google-analytics.com, how can they supply their JavaScript? The request still goes to the Google servers and not to any evil-democracy-suppressors.gov.ir.
So sure, if they could reroute the request to their servers, evil things could be done. But they can NOT. Or am I missing something?
Of course they can reroute traffic.
All they have to do is
* Force every ISP/Telco within their borders to add fake google.com entries to their DNS servers.
and/or
* Force every ISP/Telco to transparently proxy all DNS traffic and provide fake replies for google.com queries
You can even make it easier:
Just hijack IP routing at the borders, such that IP traffic to 209.85.149.99 (and all other Google networks) is not routed to the real Google servers on the internet, but to their own malicious filtering proxies.
Even without involving the ISPs/Telcos, they could transparently hijack and proxy you. For a whole country it might be a rather big task, but here's what you do:
* Find all the cables carrying internet traffic in/out of your country.
* Bring a shovel, dig up the cables.
* Break the cables.
* Hook up the cables to your transparent proxy/filtering machinery.
Done properly, all anyone would notice would be some lights flickering in the few seconds the cables were broken.
I imagine that more sophisticated networking equipment uses something like TDR (https://secure.wikimedia.org/wikipedia/en/wiki/Time-domain_r...) to detect when the cables have changed in length. Some PC BIOSes include a tool that will report the length of attached network cables, whether or not there is a system at the other end.
It'd be a far greater task to intercept every browser download out there and replace it with a malicious one. Besides, you wouldn't get to hijack people browsing with the IE that came installed on their PC, which likely outnumbers Firefox users.
So then they can only fake the traffic in their own country, which they can do anyway with non-SSL traffic. It sounded more like this could be a global attack.
It's 'local', since you somehow need a way to intercept the traffic and there's a limit to the feasibility. Let's say this is 'local' for everyone in Iran.
But going for the certificate Colin suggests broadens the attack quite a lot: instead of being able to serve your own version of Gmail or intercept mail traffic, you're now able to inject JavaScript into what, 60% of the websites on the net? Basically everyone using Google Analytics now silently serves your code, and the browser runs it without warnings.
So local/global is orthogonal to this impersonation 'improvement'. Even if you do this (somehow tricking a CA) yourself in the internet cafe of your choice, you make the attack so much worse if you no longer target a single service and instead inject your code into as much content as possible.
The aim is for monitoring traffic from within Iran.
The government almost certainly controls all internet traffic entering or leaving the country at the ISPs, and could intercept and/or redirect it as necessary.
What would it take for some Paxos-based system to be used to sign certificates? That way, half of the CAs would have to get hacked before something like this could be pulled off.
Can anyone explain to me how I can open up a CA and get my CA certs distributed with browsers and JVMs and whatnot? Is there some sort of "IANA" that approves and manages this, and why would they approve all sorts of shady CAs which are clearly a dangerous weak link in the whole SSL construct?
There is no approval process, no central authority. If you want your CA in OS X, you talk to Apple, if you want it in Windows, you talk to Microsoft. If you want it in Firefox, you talk to Mozilla.
All vendors want market share in the Netherlands, so a few Dutch CAs get on the list; and they all want market share in China so the Chinese Ministry of Information gets on the list.
No browser wants to be the one which doesn't work with someone, somewhere's bank, so once you're on one list, you tend to get added to all of them; and it becomes nigh-on impossible for marketing reasons to remove anyone from the list ever.
15 years later, browsers have 80 CAs and 200 certificates built-in.
...and what compounds the problem is that CAs are trusted on an all-or-nothing basis - you don't have a concept of "this CA is trusted only for .nl domains, and this other CA is trusted only for .cn and .hk domains".
There is no central authority, but each browser does have an approval process. Most require the CA to be audited annually by an organization that does WebTrust audits. Mozilla's procedures are listed at https://wiki.mozilla.org/CA:How_to_apply
I'm not sure having a central authority would be practical, but the approval process and audits need to be more thorough to find the type of security problems that DigiNotar and Comodo had.
What with Mozilla wanting to build a browser-based OS, the non-existent security measures of the DOM will beam us back several decades in terms of security. Awesome. Not.
I know that security through obscurity is no security at all, but I don't think it's particularly clever or helpful to give direct, useful advice to the goons in Iran.
This is not an anonymous argument. If you were sitting next to me, I'd be, right now, arguing that you should not publish this article because it will only cause harm overall.
What's next? "Why terrorists are stupid and what they should do to cause maximum damage"? How will you feel when the Iranian government does implement your kind suggestion?
The people who carry out the orders are not the ones giving the orders. Just because they're being ordered to "hack into Gmail" doesn't mean they have to find creative ways to do so. This blog post, however, provides useful, practical, exact, almost step-by-step suggestions to the people giving the orders.
I've not lived in a dictatorship, but my parents have, and from their stories, I gather that most of the smart people in a dictatorship do not really want to help the regime, but they have to because otherwise their lives or their families' lives and careers could be destroyed.
By pointing out exactly how they should do it, this article removes the wiggle room of plausible deniability that "we didn't know there was another way to do it".