'haberman's comment includes actual information. Can we not punish people for posting information? I doubt very much that 'haberman approves of NSLs, especially since he said as much.
Moreover, your comment may actually be incorrect; a good chunk of all the mail Gmail handles is never on the wire in a format that can be decrypted with any known attack without access to Google's (often pinned) secret keys. The NSA's ability to snarf it off the wire, stipulated, does not connote their ability to read it.
When I receive email from people on non-Google-hosted domains, I sometimes check the headers and see that the mail was delivered to my Gmail with ESMTPS, i.e. over TLS. So a lot of non-Google-hosted mail on the internet is delivered between servers over TLS, silently.
You can check this yourself by looking at the Received headers on some mail in your inbox.
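If you'd rather script the check than eyeball it, here's a minimal Python sketch (the .eml filename is just a placeholder, and the string matching is a rough heuristic, since Received-header wording varies by MTA):

    import email
    from email import policy

    # Load a raw message saved from your inbox (path is only an example).
    with open("message.eml", "rb") as f:
        msg = email.message_from_binary_file(f, policy=policy.default)

    # Each hop prepends a Received header; Gmail's notes "with ESMTPS" plus the
    # TLS version and cipher when the previous server delivered over STARTTLS.
    for hop in msg.get_all("Received", []):
        flat = " ".join(str(hop).split())
        tls = "ESMTPS" in flat or "TLS" in flat
        print(("TLS:   " if tls else "plain: ") + flat[:100])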
The PKI is broken, and I bet a lot of SMTP clients play fast and loose with certificate checking anyway, even if it weren't. DNSSEC can't come fast enough.
It helps against passive adversaries, but if someone's got access to the sending mailserver's network, there are active MITM attacks that will probably defeat this.
Option 1: Try doing a MITM and presenting a self-signed cert for Google. The sending MTA may accept it anyway. (Cost: free.)
Option 2: Spend resources to obtain a legitimate intermediate CA cert, issue a valid cert for Google's mailserver, and MITM with that. (Cost: ca. $25k-$100k, maybe less with proper connections.)
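To make Option 1 concrete: whether a self-signed cert "wins" depends entirely on whether the sending client bothers to verify, and, as noted above, many opportunistic-TLS clients don't. A minimal Python sketch of the two client behaviors (the Gmail MX hostname is my assumption, and many networks block outbound port 25, so treat this as illustrative):

    import smtplib, socket, ssl

    host = "gmail-smtp-in.l.google.com"  # one of Gmail's public MX hosts (assumption)

    # "Lax" mimics a typical opportunistic-TLS MTA: no certificate checking at all.
    lax = ssl.create_default_context()
    lax.check_hostname = False
    lax.verify_mode = ssl.CERT_NONE

    # "Strict" verifies the chain and hostname against the system trust store.
    strict = ssl.create_default_context()

    for name, ctx in (("lax", lax), ("strict", strict)):
        try:
            s = smtplib.SMTP(host, 25, timeout=10)
            s.starttls(context=ctx)  # a MITM's self-signed cert only fails here for "strict"
            print(name, "handshake OK:", s.sock.cipher())
            s.quit()
        except (ssl.SSLError, smtplib.SMTPException, socket.error) as e:
            print(name, "failed:", e)

Against a MITM presenting a self-signed cert, the "lax" client completes the handshake happily; only the "strict" one notices.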
The only thing worse than self-censorship after assuming an insecure channel is a false sense of security.
DNSSEC is a PKI run by governments. If DNSSEC had been deployed and used to run the TLS PKI a couple of years ago, Gaddafi would have effectively controlled Bit.ly's SSL keys (.ly is Libya's country-code TLD).
DNSSEC is a debacle. Reprising an older comment:
* Amazingly, contrary to everything you'd expect about "secure DNS", DNSSEC does not in fact secure DNS queries from your machine. Instead, it delegates securing DNS to DNSSEC-enabled resolver servers. For securing the actual queries your computer makes, your browser is on its own (see the sketch after this list). There's a whole different protocol, TSIG, intended to address that problem.
* DNSSEC has zero successful real-world deployments, and no existing integration with any TLS stack. DNSSEC obviously does nothing to secure your actual traffic; all it does is try to protect the name lookup. TLS protects both.
* DNSSEC does nothing to address all the other intercepts, from ARP to BGP4, that real traffic has to contend with. Once you go from name to IP address (or "cert" in the fairytale world where DNSSEC has replaced the CAs), you're on your own. TLS addresses all of these issues except for CA configuration.
* DNSSEC actually reduces the security of DNS in some ways: in order to authenticate "no such host" responses, DNSSEC publishes a signed, enumerable list of all your hosts (NSEC). There's a whole other standards-group drama surrounding the proposals to mitigate this problem (NSEC3, white lies, etc).
* DNSSEC fails badly compared to TLS. When keys inevitably get screwed up in TLS, you get a browser click-through. There is no API support to recover from a "gethostbyname()" failure caused by DNSSEC. This sounds like a reliability problem, but it's actually a security problem, in the same sense as "the little blue key icon isn't big enough" is a security problem for SSL. We just don't know what the exploit is, because nobody has designed the "solution" for this problem.
* TLS has 15+ years of formal review (it is the most reviewed cryptosystem ever published). We still find things in it. DNSSEC has received nothing resembling the same scrutiny. It's ludicrous to believe we won't find horrible problems with it. You'd be asserting that a protocol co-designed by Paul Kocher will eventually fare worse than one designed by the IETF DNS working group. The IETF DNS working group would basically have to crush some of the smartest practical crypto people in the world.
* TLS is at least configurable (virtually all TLS problems are in fact user interface and configuration problems, not problems with the underlying system). You can nuke untrustworthy CAs. There is no clean way to opt in or out of different DNSSEC policies, as the drama surrounding DLV illustrates.
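A minimal sketch of the first bullet, using dnspython (the library, the use of 8.8.8.8 as a validating resolver, and ietf.org as a signed zone are all my assumptions): the stub asks the resolver to validate on its behalf, and everything your machine learns about the outcome is a single AD bit carried back over the same unauthenticated last mile.

    import dns.flags, dns.message, dns.query, dns.rdatatype

    # Ask a validating recursive resolver for a signed zone, with the DO bit
    # set so it performs DNSSEC validation for us.
    q = dns.message.make_query("ietf.org", dns.rdatatype.A, want_dnssec=True)
    r = dns.query.udp(q, "8.8.8.8", timeout=5)

    # The validation result reaches the stub only as the AD ("authenticated
    # data") flag, in a plain, spoofable UDP reply from the resolver.
    print("AD flag set:", bool(r.flags & dns.flags.AD))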
In the '90s, we designed web security to assume that DNS was insecure. That was a smart decision. "Security" means different things to different people. It's a policy decision. The end-to-end argument strongly suggests that it's something that can't be baked into the lower parts of the stack. DNSSEC is a step backwards. I think you can already see the indications of the problems it will cause just by looking at the places it already falls down.
What we need is a concerted effort to solve the security UI and policy problems that browsers have.
If you're looking for protocol-level remediation for TLS's current CA policy problem, you want to pay attention to TACK.
"Before yottabytes of data from the deep web and elsewhere can begin piling up inside the servers of the NSA’s new center, they must be collected. To better accomplish that, the agency has undergone the largest building boom in its history, including installing secret electronic monitoring rooms in major US telecom facilities. Controlled by the NSA, these highly secured spaces are where the agency taps into the US communications networks, a practice that came to light during the Bush years but was never acknowledged by the agency. The broad outlines of the so-called warrantless-wiretapping program have long been exposed—how the NSA secretly and illegally bypassed the Foreign Intelligence Surveillance Court, which was supposed to oversee and authorize highly targeted domestic eavesdropping; how the program allowed wholesale monitoring of millions of American phone calls and email. In the wake of the program’s exposure, Congress passed the FISA Amendments Act of 2008, which largely made the practices legal. Telecoms that had agreed to participate in the illegal activity were granted immunity from prosecution and lawsuits. What wasn’t revealed until now, however, was the enormity of this ongoing domestic spying program."
It's a recent article outlining what's ahead (and what's presently implemented) for the NSA. Given what is already known, the U.S. government already has access to your e-mail, and it has the capability to decrypt it should your e-mail become high priority.
NSA ability to sniff traffic at major telecom exchanges is real. NSA ability to break $cipher or $hash, based on hearsay journalism involving interviews with (ex-)NSA employees (who would certainly be barred from talking about any real non-public attacks), is not real [1]. It's possible the NSA is setting up real systems that will brute force or factor or find collisions for known borderline algorithms/keysizes. Maybe they have a collection of old DES-encrypted traffic and they are building enough computing resources to do large-scale cracking of DES keys.
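For scale on the DES point, a back-of-envelope sketch (the key-test rates are illustrative guesses, not claims about anyone's actual hardware):

    # DES has a 56-bit key; on average a brute-force search covers half the keyspace.
    avg_tries = (2 ** 56) // 2

    rates = (
        9.2e10,  # roughly EFF Deep Crack's 1998 rate, in keys/sec
        1e12,    # a hypothetical modern FPGA/ASIC cluster
        1e15,    # a hypothetical very large installation
    )
    for rate in rates:
        print(f"{rate:.0e} keys/sec -> {avg_tries / rate:,.0f} seconds per key on average")

That works out to roughly 4.5 days per key at the 1998 rate, about ten hours at the second, and around half a minute at the third, so bulk cracking of archived DES traffic stops looking exotic once the hardware scales up.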
The idea that they can create collisions for hashes or crack ciphers believed to be relatively secure in the near to mid future is paranoid speculation.
However, if you're going to be paranoid, direct your attention to RSA and DH (plain, not ECDH). In Suite B, which the NSA recommends for use by government, RSA and DH are absent. If the NSA knows of a weakness in anything currently believed to be secure (I think that's unlikely), I would bet that it's RSA and DH, because the NSA no longer recommends them. I think RSA and DH are superseded by ECDSA/ECDH simply because of speed at comparable key strengths, not because the NSA knows something the public doesn't. As an aside, it indicates that the NSA has a fair amount of confidence in ECDSA/ECDH.
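For reference, Suite B's key agreement is ECDH over the NIST P-256/P-384 curves. A minimal sketch of a P-384 exchange using the pyca/cryptography package (my choice of library; the HKDF parameters are arbitrary and only for illustration; assumes a reasonably recent version of the package):

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import ec
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF

    # Each side generates an ephemeral key pair on P-384 (Suite B's larger curve).
    priv_a = ec.generate_private_key(ec.SECP384R1())
    priv_b = ec.generate_private_key(ec.SECP384R1())

    # Each side combines its private key with the other's public key.
    shared_a = priv_a.exchange(ec.ECDH(), priv_b.public_key())
    shared_b = priv_b.exchange(ec.ECDH(), priv_a.public_key())
    assert shared_a == shared_b

    # Derive a symmetric key from the shared secret (parameters are illustrative).
    key = HKDF(algorithm=hashes.SHA384(), length=32, salt=None,
               info=b"example handshake").derive(shared_a)
    print(key.hex())

ECDH at these curve sizes is also considerably faster than RSA/DH at comparable key strength, which is consistent with the speed explanation above.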
I do not think the NSA is stupid enough to play chicken with the public crypto community by recommending encrypting classified information with ciphers the NSA knows to be weak. The public could discover those weaknesses tomorrow. The most sensitive information inside the U.S. government and military is presumably protected by the NSA's Suite A algorithms, but other important information is not, notably military communications with U.S. allies, for which Suite B is recommended.
I heard a story somewhere that public key cryptography was known to the NSA long before the 70s. Maybe they are 30 years ahead in cryptographic number theory? Maybe prime factorization isn't actually hard? Maybe...
What was essentially RSA was known to Britain's GCHQ (Government Communications Headquarters) in 1973. Is this what you were thinking of? Rivest, Shamir and Adleman rediscovered it in 1977.
But it's worth acknowledging such programs exist and don't appear to be going away.
Beyond the AT&T incident (and the subsequent legal ruling retroactively absolving the carriers of wrongdoing in the wiretapping), there's also the 'Trailblazer Project' [1], with public accounts from William Binney (NSA, Director of the World Geopolitical and Military Analysis Reporting Group) and Thomas Drake (NSA) [2] regarding the overreach of such projects: that it's kinda hard to exclude data, and so forth.
Jacob Appelbaum (Tor, etc.) recently dragged William Binney around NYC to gather publicity [3], but few outlets paid much attention.
Try reading critically. To process 1 yottabyte of data, assuming 128-bit registers, you would need 100,000,000 petaflops (see http://www.wolframalpha.com/input/?i=%2810%5E24+bytes+%2F+12...). Therefore, there must be a great deal of preprocessing using classifiers to eliminate useless information up front. Just because you store it doesn't mean you will listen to it.
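The arithmetic behind that figure, spelled out (this assumes a single pass over the data, one operation per 128-bit word, completed in one second, which seems to be the implied model):

    yottabyte = 10 ** 24     # bytes
    word = 128 // 8          # bytes per 128-bit register
    ops = yottabyte // word  # ~6.25e22 operations for one pass over the data

    petaflop = 10 ** 15      # operations per second
    print(ops / petaflop)    # ~6.3e7, i.e. on the order of 100,000,000 petaflops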