I think CAs are problematic nevertheless, because they involve a third party I never invited. I would much prefer an SSH-style public key fingerprint that you must verify and accept per site, knowing that once I've decided to trust a site, nothing can come between me and it, unlike with the SSL certification chain. I would prefer to receive my bank's public key on paper when signing up for online banking and verify that it matches what I see when I first connect to their servers.
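That verify-once-then-pin model can be sketched in a few lines of Python. This is a toy illustration, not a real client: the in-memory store and function names are my own, and a real implementation would persist pins and handle legitimate key rotation.

```python
import hashlib
import ssl

def cert_fingerprint(pem_cert: str) -> str:
    """SHA-256 over the DER-encoded certificate, hex-encoded."""
    der = ssl.PEM_cert_to_DER_cert(pem_cert)
    return hashlib.sha256(der).hexdigest()

def tofu_check(store: dict, host: str, fingerprint: str) -> str:
    """Trust-on-first-use: pin on first sight, compare on every later visit."""
    pinned = store.get(host)
    if pinned is None:
        store[host] = fingerprint  # first contact: trust and remember
        return "pinned"
    return "ok" if fingerprint == pinned else "MISMATCH"

# In practice the PEM would come from something like:
#   pem = ssl.get_server_certificate(("bank.example", 443))
```

The residual risk is concentrated in that first "pinned" step; seeding the store with a paper-verified fingerprint, as described above, removes even that.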
This reduces the MITM to the initial handshake. This could be worked out to a degree with some p2p scheme or a set of DHTs for popularity count. Fingerprints would be submitted to a public service that accumulates submissions from different users, identified by the same public key used to sign the submissions. Trustworthiness would be weighted based on the history and age of the users' submissions: someone who has used the same public key and submitted fingerprints for five years is likely to be a real user that you can trust instead of a bot that generates new identities in order to submit false fingerprints to launch a MITM. There are numerous ways to implement this.
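One toy version of that weighting idea, to make it concrete. The linear age ramp and the five-year saturation point are my own illustrative choices, not part of any real protocol:

```python
from collections import defaultdict

def submitter_weight(key_age_days: int) -> float:
    # Older signing keys count for more, saturating at five years.
    return min(key_age_days / (5 * 365), 1.0)

def consensus_fingerprint(submissions) -> str:
    """submissions: iterable of (fingerprint, submitter_key_age_days) pairs.
    Returns the fingerprint with the highest age-weighted support."""
    scores = defaultdict(float)
    for fingerprint, age_days in submissions:
        scores[fingerprint] += submitter_weight(age_days)
    return max(scores, key=scores.get)
```

Under this scheme a botnet of freshly generated identities has to out-age the long-lived population before its false fingerprints can win the vote, which is exactly the property being argued for.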
One positive thing would be to bring to people's minds the same attention to trustworthiness that they must employ in "real" life, i.e. offline. A website that's never simply fraudulent or simply genuine (unless you got it in writing), but some shade of trust in between, more closely reflects the interactions we encounter in real life (can I trust this guy because my friend said he's OK?).
The approach you're talking about --- some people call it "key continuity", some people call it "trust on first use" --- does not work on the public Internet. At all.
The problem with it is that SSL/TLS is designed and intended to protect commerce. The attackers that commerce cares about† aren't targeted. If they can't sniff someone's session because that person already established a trusted pairing with their bank, so be it! They'll just wait for the next person to connect. They will obviously get some percentage of user sessions that way, either from first connections or because people changed computers or reinstalled their browser or deleted files or what-have-you.
These are connections the attackers get above and beyond the sessions they already get by owning machines outright. Which they'll also be able to do more of, because the same problem will happen with software update systems.
The current hardwired selection of CAs we have is bad --- it's a flaw in SSL/TLS/HTTPS, even --- but it's not so bad that we continuously lose X% of all connections to passive attackers.
† and that you should care about if you value the ability to run businesses on the Internet
See the EFF's recent 'sovereign keys' proposal for one similar idea for preserving trust in a server key, once established, against later CA-based hijacking.
Or, Tyler Close's 'YURL'/httpsy scheme for making introduction/bootstrap the important step, rather than a CA/PKI lookup.
> This reduces the MITM to the initial handshake. This could be worked out to a degree with some p2p scheme or a set of DHTs for popularity count.
I'm no expert, but I think the solution is already here in the form of http://convergence.io/, http://perspectives-project.org/, and other similar projects. If each of us could have several trusted notaries (including the ability to run one) taking a look at every SSL certificate we accept --- comparing it to what others are seeing, what is served in the DNS records for the requested domain, whether its integrity is secured with DNSSEC, what was presented on the last connection, and whether it's signed by a trusted CA --- we would have a pretty safe system, unshaken by the failure of any single entity.
Convergence is already doing a lot of this; unfortunately, last time I checked it was still breaking on SNI. But that's a minor problem with a particular plugin. The approach itself is, again in my non-expert opinion, great.
Mostly. No matter how much you trim your certificate chain, there's nothing preventing Google/your bank/Amazon/etc. from sharing their private key with, say, Uncle Sam. However, the backdoor admin access that the gov't gets to sites like TwitterFace and Gmail probably makes that a pointless effort.
Confidentiality/Authenticity are pretty much impossible to guarantee unless you control everything on both ends.
I mean yes, if you're paranoid enough you probably should build an underground bunker in the mountains and grow your own food, but objectively there is a huge security difference between whatever shenanigans a trusted partner may be up to and a large body of auto-trusted, potentially leakable-to-who-knows-where subcerts.
Or, what if the user's browser requested a site's public key on its first visit, but required it to be signed with a trusted CA? So to perform a MITM on an SSL connection, Mallory would have to both infiltrate a CA and MITM the user's connection on their first visit to a site.
When a site's certificate changes, Certificate Patrol shows you what has changed, including the public key fingerprint. So you easily detect the case when the site changes CA without changing their public key.
It is already reduced to the initial handshake. That is exactly what a CA does. But perhaps you are right that a CA should sign certificates based on influence-weighted pseudonymous votes rather than simply receipt of payment! Should CAs require payment in bitcoins?
It's only reduced to the initial handshake if every single CA¹ keeps their signing certs protected. We've seen more than once that this is not always true.
¹ (remember, even if you have a cert with a specific CA, nothing technically prevents a cert from being signed with another CA's certificate).
Assuming ideal scenarios, no one can snoop on the traffic between you and Google, because of SSL.
If a company wants to monitor you, they have lots of options. One option is to try to record all traffic between you and Gmail. However, if they try to do that, your web browser will pop up a warning saying mail.google.com is pretending to be someone else.
To get around this, a company can get fake certificates created that are trusted by your browser. Then they need to intercept the traffic and basically pretend to be Gmail the whole way through, recording everything you do.
Ignoring the fact that companies already have a lot of rights to snoop on whatever you do, this is just bad precedent.
Or they can provide default VMs which have a company certificate as a trusted root, either locally or in the Active Directory. Then it's simply an intercept-and-dump without the browser complaining. (I cannot confirm the Active Directory problem in browsers other than IE.)
See dektz's reply. Companies create their own CA cert (you can do that using e.g. openssl) and use domain policies to install it on every machine they control. Then they can set up a proxy that takes the CA cert and dynamically generates certs for each domain that is accessed over HTTPS (Squid can do that).
So this article is suggesting that this practice is wrong and will not be tolerated or that the root CA authorities should not be the ones to generate the certificates?
There is nothing wrong with creating your own CA root certificate and installing it on computers you use. People do this all the time. For instance, web testing proxies all synthesize a fake CA root cert, because otherwise they wouldn't be able to run tests against HTTPS sites.
What happened here was, a company didn't want to go through the (significant) bother of installing something in thousands of computer systems. So they did an end run around the system and got a complicit CA to in effect add them to the global Internet CA root system, which they had no business being a part of. Since the certificate they minted in this process appeared to browsers to be a real, signed, chained-from-the-roots CA=YES cert --- something nobody can make for themselves, not being able to do that being the whole point of SSL/TLS --- they didn't have to install anything. On any computer. Anywhere on the Internet.
Interestingly, Chrome and Gmail do strange things when you MITM them. Try enabling Charles Proxy and watch all the requests to safebrowsing.google.com.
Awesome news, finally someone stands up to the blatant misuse of trust. I work in such a company where SSL mitm is a part of day-to-day development. It is disgusting that the security of SSL is broken for the sake of corporate control over its workers.
Mozilla was faced with what appears to be the worst possible abuse of the privilege of being a Mozilla CA root. The abuse happened for commercial purposes. The transaction that occurred was on its face abusive of the CA system. The CA hasn't (and probably can't) name the company that was given illegitimate CA privileges.
In response to this, Mozilla sent out a letter. That letter doesn't even instruct CAs not to sell subCAs! It says, "don't sell them for MITM purposes".
I think Mozilla is between a rock & a hard place here, but there is no spin I think you can put on this story in which Mozilla stood up against abuse of trust.
> Mozilla is between a rock & a hard place here, but there is no spin I think you can put on this story in which Mozilla stood up against abuse of trust
I'm not sure it's worth our energy to be so disappointed in Mozilla here. That "they're between a rock and a hard place" precisely is the spin within which they stood up against abuse of trust.
They have sent out a letter so far. One of your worries appears to be that it sets the precedent that companies who do what Trustwave did can expect (only) letters in the future. But (iirc, and perhaps even in the bugzilla thread) one of the stated purposes of the letter is to set the opposite precedent: to publicly warn commercial CAs that future bullshit will result in distrusting.
I happen to think Trustwave had plenty of warning without the letter, but this particular hard place is particularly thorny because Mozilla is such a central policy hub. That's a problem you touched on above, that we agree needs a fix, but it's the way things are today.
The other half of the hard place is that before Trustwave pulled this BS, they issued a whole lot of legit commercial certs to a predominantly innocent user population. And you know, they can and should all go acquire certs from someone else, but they can't do that overnight. So if we really want Trustwave to get distrusted, I feel like we should focus our energy on a) telling people that Trustwave sucks and urging them to migrate, and b) paying attention/effort to initiatives like the one you mentioned from MM.
I'm mostly with you. But Mozilla could issue a blanket moratorium on the issuance of CA=YES certs to external organizations; Verisign would, during the moratorium, only be allowed to issue chained CA certs for Verisign properties.
They could do that today. Nothing would break.
Then they could spend some time --- spend as much time as they like, really --- coming up with a policy that allows extraordinarily trusted companies to sponsor and sign subCAs.
But they didn't do that. It's not just that they only issued a letter; it's that the letter is comically weak.
I think what Mozilla have done here is to provide an incentive for any other CAs who may have already done what TrustWave did to quickly own up, revoke the certificates, and promise not to issue any more.
If they'd immediately executed TrustWave, the incentive would be for any other CAs who've done the same thing to double-down and hide it, which would leave us in an overall worse position.
In other words, it's a temporary amnesty - a strategy which has a good history of working well.
While I don't generally buy the notion that this is carefully-calculated hardball on the part of Mozilla, I agree that immediate revocation isn't the only reasonable outcome here. What disappoints me the most is that the response leaves the unaccountable system of for-profit subsidiary CAs untouched.
Agreed. Any subsidiary CA should be meeting all the requirements imposed on the root CAs - and if they are, then they could simply be included in the browser root programs in their own right rather than paying another CA for a sub-CA cert.
In short, at least one CA, Trustwave, has issued a "subordinate CA certificate" (which allows another party to issue certs which will be trusted as if they were from Trustwave) to a network admin who used it to create forged certificates and intercept SSL traffic on their internal network.
Has there been any talk about which company received the subordinate cert? The excuse that it was only used on an internal network doesn't hold much weight, given the ability to install custom certs in all web browsers.
Apparently some root certificate authorities have been issuing certificates that let others generate their own valid certificates at their own discretion. Sounds like these subordinate CA certificates are being used for SSL man-in-the-middle attacks (http://en.wikipedia.org/wiki/Man-in-the-middle_attack).
So say your employer wants to monitor everything you're doing at work. To get around the fact that a lot of your network data is encrypted, they put a device between your network connection and the internet that looks at all your network traffic. For any connection that is SSL encrypted (like https pages), it will forge its own valid certificate and use it to prove to your web browser (etc.) that nothing is wrong with the connection. Meanwhile, it's logging the content of your Gmail, etc.
If an employer wants to snoop, they usually install a root cert on your machine. That's because you, the user, do not have physical control of the machine. What is new here is that even if you have full control of your hardware, someone can man-in-the-middle you now. This is serious. It could be used by any ISP or government to snoop on all the traffic on its network or in its country.
This is the unfortunate flaw with SSL and TLS. The certificates are bound only to the "domain name", which is a wobbly concept at best without DNSSEC to validate your DNS responses.
Anyone clever enough to create a DNS redirector that turns paypal.com into 10.0.0.2 on their closed network could also create an accompanying SSL certificate that they own for paypal.com and you'd be none the wiser as they proxy all of your traffic through their server, logging every bit of it.
Wait, what? No they can't. The whole point of SSL/TLS is that you don't trust the DNS.
The only way an attacker can redirect DNS for Paypal and present a valid-looking certificate for Paypal is if they compromise a CA root certificate. That's the point of SSL/TLS.
Before you suggest "but we can't trust the CAs", note two things: (1) if you don't trust the CA, SSL/TLS isn't doing anything for you anyways, and (2) DNSSEC also has a hierarchical PKI in which you are required to trust commercial enterprises --- or governments, explicitly.
SSH seems to have more robust security than SSL, if only because once you've established a connection with a remote host and its public key has been displayed and validated, that key can be saved as "trusted".
The same principle doesn't seem to apply for SSL in browsers where, so long as it's signed by a "trusted" authority, there's no question the certificate is valid.
And even if the IP address is unaltered - what will prevent the owner of the closed network from routing every http(s) request silently through a proxy on its way out to the Internet?
In a corporate setting in which you use a computer provided by your employer this will always be possible. You can do it by simply generating your own CA certificate and making sure every browser used throughout the company has it in its CA cert list.
This is being read as good news, as Mozilla sticking up for its own users by enforcing a new control on its CAs. But what it actually reads like to me is a surrender. For once, I'll line up with the howl-at-the-moon crowd and say that, excepting the fact that Mozilla might have been boxed into a corner here, this is a bit of a travesty.
What you're not getting in this story is the following set of facts:
* A Mozilla-approved CA appears to have sold one of its customers a CA=YES cert --- in other words, they sold one of their customers the ability to be a CA.
* The customer wanted the cert to implement a "data loss protection" system. These are systems that either sit on the network or on everyone's machine and scan traffic to make sure people aren't exfiltrating company secrets. DLP systems want to see inside of SSL/TLS traffic.
* You don't need a CA cert to look inside the SSL/TLS traffic of your users. What you're meant to do is, create your own self-signed CA root certificate and install it yourself in all your users' browsers. Yes, this is painful.
* So instead of incurring that pain, this particular company got a CA to, in effect, install that certificate in the browser of every computer on the Internet. That they didn't intend to MITM the traffic of every one of those computers is beside the point.
* Instead of the exact audit and certification process used by Mozilla to add new CAs, this chained sub-CA was audited by the CA itself. But: before you say, "well at a minimum you'd hope the CA audited them!", you should know: the primary business of this particular CA is auditing.
* The CA didn't appear to have told anyone this for a long time (I want to be careful to point out that I don't know exactly what the timelines are here; from the CRLs, it doesn't look great).
* Something happened that caused the CA to announce that they had issued the cert.
* A long discussion happened on Mozilla Bugzilla, under a critical security blocker bug that more or less requested that the CA be removed from Firefox.
* Publicly, the policy response from Mozilla was, "they came forward, and removing their cert from the roots is an extreme punishment". I think behind the scenes, getting rid of this CA was problematic for other (valid) reasons.
* So instead, stern letter to all the CAs. Yay! The Internet is safe again.
I don't know how much more cut-and-dry an abuse of the CA system you can get than selling your CA status to random unnameable companies so that they can implement MITM systems.
But more than that: it is simply batshit crazy that CAs are allowed to sell CA=YES certs at all. I don't get it. Maybe someone working at a browser vendor can reply to this comment and explain to us why any one company is trusted to make a decision like that for the entire Internet?
I am a vocal and public fan of SSL/TLS, and I've even on occasion stuck up for X.509 (horrid as this certificate format is). But this stuff here is the part of the SSL/TLS system that the privacy zealots are right about. I'm hopeful that we'll see adoption of a system like Moxie Marlinspike proposed with "Convergence", where instead of blindly accepting Mozilla's policy decisions (and updating our policies once-in-a-blue-moon), we'll have end-user-selectable trust roots and thus an incentive for someone (maybe the EFF) to do a good job running such a root. I'm also hopeful that at some point we'll manage to figure out a UX for website trust that is better than what we came up with in 1997.
"Maybe someone working at a browser vendor can reply to this comment and explain to us why any one company is trusted to make a decision like that for the entire Internet?"
I don't work on security (although I do work at Mozilla), but I'm sure it's basically this: SSL/TLS PKI is the system we have, and if Firefox stops working with people's banks because a widely-used CA was distrusted, users will switch browsers. Once they switch browsers, the security benefits of distrusting the CA are lost.
Instead, Mozilla has to work with the CAs. As you say, we're between a rock and a hard place. The best we can do is a message like this:
Which not only requires action on the part of all CAs to prove that this cannot happen again, but also clearly explains that abuses in the future can and will result in distrusting of the CA.
Issue a moratorium on subCAs. Nothing will break. Today would be great. Then we (or, more appropriately, your organization and the stakeholders) can work out a better policy than we have now.
(Is what I'd say if you were the right person at Moz to talk to).
My opinion is that you should never make a threat (even an implicit one) that you can't follow through on. No root DB maintainer is really in a position to play hardball with the CAs--at the moment the whole system is just too big to fail. So, I don't see this kind of saber rattling as productive.
That said, the CA situation is rapidly evolving. Chrome 18 has opt-in per-site certificate pinning, and the IETF pinning standard is on track. Plus, we're starting to see consensus around using broader reputation-based components as an additional form of certificate and CA validation. So, I think things are headed in the right direction and we can fix the system in the immediate term without replacing it entirely--meaning that we'll be able to catch bad acts from CAs and actually hold them accountable.
I don't understand this, Justin. What part of any one CA makes them too big to fail? Isn't this tantamount to saying there's really not much a CA could do to get distrusted?
I'm actually not advocating for the overthrow of the whole CA system.
Symantec alone owns 42+% of all the HTTPS certificates. Can you imagine a browser willing to break 42% of all the secure sites for their users?
Even if you know a CA isn't very trustworthy, the situation needs to get really out of hand to outweigh the problem of having thousands of sites stop working.
That's why the Convergence/Perspectives proposals are so interesting: they let you remove trust on a provider ("Notary") without breaking the process.
The system doesn't have to be fair for it to be better. So Symantec and Verisign can use their clout to get around the rules. Fine. Let the system be unfair to the smaller CAs, and better for end-users.
Nitpick: Verisign isn't in the CA business anymore, Symantec bought it.
> Let the system be unfair to the smaller CAs, and better for end-users.
But how is the system better for end-users? If a big CA fucks up, they're either at the risk of being MITMed or of having half their secure sites stop working. If no CA had more than, say, 10% of the market, a fuck up would only affect a small number of the sites they use.
It's not that a major CA's root couldn't be revoked, but the situation would have to be extremely dire to warrant the negative impact. Plus, there's so little visibility into what the CAs are issuing that it's really hard to justify revocation--unless it's something public and obviously dangerous like the Diginotar situation. That's what I meant by "the system is just too big to fail."
(And to be clear, this is my opinion here, not me speaking for Google or Chrome.)
Edit: I just reread my original comment and I can see the confusion. I should have made it clear that I was talking about the response to this particular situation (issuing an intermediary for DLP). Basically, the CAs have no good reason to come clean, the abuse is hard to detect, and revocation is pretty hard to justify. So, it feels to me like the threat is hollow.
I think we agree more than we disagree, but I'm not fatalistic about browser CA roots; I think there are basic tactical things they can do today without breaking thousands of sites.
I agree with the world view you have, though: of the 3 most important organizations that control CA policy (I'm guessing that's Mozilla, Apple, and Microsoft), you have 2 companies for whom CA policy is small-ball that they're just not going to fight hard over, and the third is easily pushed around.
So, calling it like I see it: Mozilla isn't standing up for anything here, so much as they're getting rolled by abusive companies.
I didn't think we were disagreeing on anything substantive, unless I'm misreading. Mozilla's letter strikes me as somewhat hollow posturing, and Apple and Microsoft don't really seem interested in doing anything.
There are a number of choice quotes, but my two favorites are from Jacob Applebaum (@ioerror if you aren't already following him):
"TrustWave needs to be congratulated and as a reward, I think they should have their CA distrusted."
And another commenter 'Philip' (sorry I don't lurk there and don't have more detailed identity info for him...his words more than stand alone though):
"Trustwave appears to issue certificates directly from its root - this is not good practice. The root should be kept offline and never - ever - have a path to the internet."
...
"the trust chain was broken at the root level, so that is where this branch, and all leaves, must be pruned"
So I guess this is where I break slightly from tptacek's opinion: I'm not yet 100% convinced that the issuance of CA=YES certificates by public CAs is an abusive practice.
Installing organization-specific self-signed roots in all company devices is more than just 'a pain'; it's fundamentally impractical for any organization that wants to let employee-owned or public devices onto its network. So just don't do that? Or just put all the uncontrolled devices onto some airgapped public network? Well, airgapped according to whom, and as of what last physical audit date? Should all organizations that implement DLP have physical security measures similar to DoD installations, where a guard with a gun makes it clear that your iPad should be left locked in your car in the parking lot?
Not saying everything's cool, just saying "it's complicated." What is clear is that Trustwave added at least a couple extra layers of supreme jackassery in this case.
Trustwave's issuance of a CA=YES certificate directly from their topmost root is the most distressingly unprofessional method they could have chosen. That choice alone, in my view, is a coherent argument for why a) their root should be distrusted, and b) they deserve (at minimum) a long cool-down period before an alternative Trustwave root CA becomes a candidate for addition to the default trusted set.
Companies already do device-by-device interventions for all sorts of other security policy issues, from making sure machines aren't going to spread malware to ensuring that USB drives can't be used to exfiltrate data. If a company's employees want to bring their own devices on-site, they can have the company's root cert installed; that's the cost of using your own device on the company's network. That's not an unprecedented policy.
...and it doesn't even matter if the employee-owned devices don't have the employer CA certificate installed; their traffic still gets MITMed and DLPed just fine. It's just that they don't get an (incorrect) indication that their session hasn't been MITMed, which, if it's a problem at all, is a problem for the employee.
There are lots of applications that use HTTPS under the covers that will break if certificate validation fails, so not having the root cert installed does "break" those devices.
(There are unfortunately even more apps that use HTTPS under the covers that appear not to care whether certs validate).
> Installing organization-specific self-signed roots in all company devices is more than just 'a pain'; it's fundamentally impractical for any organization that wants to let employee-owned or public devices onto its network. So just don't do that?
Yes, just don't do that. There are sufficient mechanisms to distribute security policy configuration to company devices.
Silently sniffing traffic from external, personal devices is unequivocally unacceptable. I would be livid to learn that a company was silently pretending to be my bank, my e-mail provider, or even just Amazon without my prior consent. Having to explicitly trust their internal CA is exactly how that consent is supposed to be provided.
The CA system's trustworthiness is fundamentally broken. I am astounded that Mozilla did not immediately remove the CA certificate from the trusted set.
Company's network, no privacy, full stop. Pick your battles. The problem isn't that a company wanted to MITM SSL traffic on its corporate network. The problem is that a CA was willing to allow them to hijack the whole CA system so they could do it on the cheap.
I think you either misread, or I was unclear -- we're in agreement.
Company network with company devices: sniffing traffic is fine, insofar as it's done by configuring the devices with a private CA.
Company network with personal devices (including visitor's devices): silently MITMing SSL using forged certificates and a real CA is not fine. They can either forbid the use of personal devices, or request that I install their internal CA.
Unfortunately, Trustwave produces some of the smallest resulting certs (I forget if it's for EV or for general), which really helps for mobile. (They're also the biggest PCI auditor by far.)
The CAs are allowed to sell CA=YES certs, but that doesn't mean that a browser (or any other tool using certificate chains) needs to follow them. On the other hand, the argument would be that if the CA could always just approve anything that this third party asked for... same effect and well within their power. So perhaps we DO want CA=YES just so we can tell when some CA is being unreasonably liberal.
"Approving anything that the third party asks for" doesn't really work unless you give them a completely automated, very high speed process to do this with. The DLP boxes are generating the MITM certificates on-the-fly.
First of all... are they going to enforce this? Indeed, can they enforce it? Is it possible to detect such certificates, and will Mozilla actually remove root CAs?
Secondly, this doesn't do anything for the fundamental broken design of CAs which is that any CA in the world can issue a certificate for any website. Firefox ships with dozens of certificates for such eminent organizations as "OISTE WISeKey Global Root GA CA" (who they?), basically anyone who can stump up the fee and fill in a form on Mozilla's website. Any of these could issue duplicate or fraudulent certificates, and any of them could be attacked.
It's enforceable because any such fraudulent certificate, once found, identifies both the dodgy intermediate CA and the responsible root CA. The fraudulent cert itself provides all the proof that Mozilla needs to revoke the corresponding root.
The fraudulent certificates could be found and saved by a user using Chrome's certificate pinning feature, or Firefox's Certificate Patrol add-on, or similar.
Such certificates are allowed in general, just not if the site is a 'global' site (whatever the definition of 'global' would be). That's why I don't think this is enforceable.
Many certificates that you can buy today are issued by legitimate resellers of bigger CAs. That's done by the big CA (which is in the list of trusted roots) handing out a CA certificate to the reseller.
We'd probably want to keep this as is, or the already way-too-big list of roots in the browsers becomes totally unmanageable. Or we move back to the days of only a handful of roots, which probably also means going back to the old prices ($500+ a year for a non-EV cert).
We're kidding ourselves, totally kidding ourselves, that we have made the CA system "manageable" by allowing CAs to sell subsidiary CAs to other companies. Yes, those certs aren't cluttering up our 2 terabyte hard disks. That's a bad thing, because they're still out there, and they work whether your browser tells you about them or not.
All this means is that CAs have to be a bit more careful who they give reseller certificates to - essentially, only signing reseller certificates for sellers they think are trustworthy.
Because that's what signing a * certificate says - "I trust the owner of this certificate with signing power for every domain". If a particular CA is giving that away to people who shouldn't be trusted with that, then that's pretty shady behaviour on the part of the CA.
They can do public-key pinning like Chrome does (for example, they embed the "mail.google.com" public key into Chrome itself, and verify that it's the certificate you're TLS'ing to).
For historical reasons (viz. NSS), Mozilla maintains its own list of trusted CAs. Chrome uses whatever is provided by the OS, so they aren't in a position to make the same sorts of demands.
Not that I disagree with the sentiment -- there's just a very specific reason why Mozilla is involved, and it's not simply because they write a web browser.
Chrome already has a mechanism to detect a MITM for Google's servers by embedding those servers' public keys into Chrome itself.
Of course, that doesn't stop a company from placing locally-trusted rogue certificates on computers it controls, overriding Chrome's public-key pinning check. But it does mean they can't MITM a connection from your personal laptop when you're on their network.
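The pin check itself is conceptually simple. A rough sketch follows; note that the pin list here is invented for illustration, and that real pins (in Chrome and in the HPKP draft) are hashes of the certificate's SubjectPublicKeyInfo, which this sketch takes as already-extracted DER bytes:

```python
import base64
import hashlib

# Hypothetical built-in pin list: host -> set of base64(SHA-256(SPKI)) strings.
PINS = {}

def spki_pin(spki_der: bytes) -> str:
    """HPKP-style pin: base64 of the SHA-256 of the DER-encoded SPKI."""
    return base64.b64encode(hashlib.sha256(spki_der).digest()).decode()

def pin_ok(host: str, chain_spkis) -> bool:
    """Accept the chain if any certificate's public key matches a pin."""
    pins = PINS.get(host)
    if pins is None:
        return True  # unpinned host: fall back to ordinary chain validation
    return any(spki_pin(spki) in pins for spki in chain_spkis)
```

As noted above, Chrome deliberately exempts locally installed roots from this check, which is why pinning stops network attackers but not the administrator of your own machine.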
Makes you wonder what actually happened with TrustWave (there's obviously more to it than "Oh, this was an ethical dilemma so we stopped."). Probably their customer found a way into the intermediate CA private key and was being naughty with it.
What I think sparked Mozilla is TrustWave's claim that this kind of thing is widespread and commonplace among CAs. That shouldn't surprise anyone, though.
They have ruined security for everyone by allowing government agencies (they didn't even bother to set up a front) and other questionable entities onto that list. The correct response is not to send out a nice letter asking everyone to please give up crucial business information, but to kick everyone off that list and start over.