There was also a key with the symbol '_NSAKEY' in Windows NT, normally hidden by symbol stripping, that was revealed when a Service Pack shipped with debugging symbols left in. However, Microsoft said it had another purpose.
A buddy who is an excellent reverse engineer assures me that this isn't a conspiracy. Crypto services had to be verified by a key; the NSA's crypto services were classified, so they couldn't let Microsoft sign them; therefore, they needed their own key. The key is only used to authenticate crypto services, which I think Douglas Adams would describe as Mostly Harmless.
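If that account is right, the mechanism amounts to "load a crypto provider if its signature verifies against any of the built-in keys". A minimal sketch of that logic in Python using the `cryptography` package - all names and keys here are hypothetical stand-ins, not Microsoft's actual implementation:

    # Sketch of CryptoAPI-style CSP loading: a provider is accepted if its
    # signature verifies against ANY embedded root key (_KEY or _NSAKEY).
    # Both keys are generated on the fly here purely for illustration.
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding, rsa

    MS_KEY = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    NSA_KEY = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    TRUSTED_KEYS = [MS_KEY.public_key(), NSA_KEY.public_key()]

    def csp_signature_ok(module: bytes, signature: bytes) -> bool:
        """Accept the crypto provider if any trusted root key verifies it."""
        for pub in TRUSTED_KEYS:
            try:
                pub.verify(signature, module,
                           padding.PKCS1v15(), hashes.SHA256())
                return True
            except InvalidSignature:
                continue
        return False

    # A provider signed by either authority loads. Note that neither key
    # ever decrypts anything: it only gates which crypto modules load.
    provider = b"fake CSP module bytes"
    sig = NSA_KEY.sign(provider, padding.PKCS1v15(), hashes.SHA256())
    assert csp_signature_ok(provider, sig)

The worst the holder of a signing key of this kind can do is sign a weakened crypto provider and get it loaded; bad, but it's not a decryption escrow.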
I don't have the reverse engineering skills/IDA Pro license to verify this, but fwiw I trust and respect this person's skills.
But let's do a thought experiment.
1. How much would the NSA gain from pressuring Microsoft into backdooring (or as they say, "enabling") Windows, in terms of systems they could not access before that they can access now?
2. How much would it cost the NSA, in terms of effort, good will, and exposure to risk by the people at Microsoft who would know about the backdoor and may leak or abuse it? How bad would it be if the public got wind of it? How hard would it be to keep it secret over the years, especially as engineers moved around to other companies? Would they have to involve foreign nationals on the dev team? Could they be trusted not to warn their governments?
3. How many times could they abuse their backdoor before it was obvious Windows couldn't be trusted? When/if that happened, what would be the damage to the US economy, and to their ability to penetrate systems?
When I put myself in the shoes of DIRNSA and ask myself these questions, backdooring Windows (at least through official channels, like _NSAKEY supposedly is) seems like an insane proposition.
1 - Privileged access to the dominant consumer operating system, also used by many corporations likely to be targeted.
2 - Minimal effort cost. The goodwill cost seems like something the NSA ignores. Exposure to risk seems minimal given the existence of NDAs.
3 - I think anyone who isn't deluded and/or a member of the "nothing to hide; nothing to fear" camp already knows you can't trust Windows. The damage to the US economy seems minimal in light of the Snowden leaks that implicate nearly every US-based technology company, and Microsoft is investing heavily in things like Xbox to diversify its revenue streams. I don't think there'd be any fallout worth mentioning, tbh.
When I put myself into the shoes of the DIRNSA and ask myself these questions, backdooring Windows seems like an obvious "Yes".
_NSAKEY may very well not be a backdoor, but I find the suggestion that Windows doesn't contain one to be laughably naive.
Why would they backdoor Windows, when apparently they could just buy an exploit for $X00k[1]? It seems buying an exploit serves all those same factors, at a similar price, while making it much harder to point a finger at the NSA when it eventually gets discovered.
It's probably a safe assumption that if someone is found using a backdoor in Windows, the US government put it there. If it's an exploit, it's a hell of a lot harder to point that finger at anyone in particular.
Bingo, or they could even create their own exploit and release it in a way that doesn't tie back to them.
Realistically, a backdoor is the worst option for the NSA. A backdoor would be known by the people who implemented it, who, assuming it's a cooperative venture, would most likely be at the company itself. A backdoor would also be most likely living in the real codebase, able to be discovered by others, and, if somehow it leaks, it'll point directly at the NSA.
An exploit does not live in the codebase, could be blamed on others, and will produce the same results.
You talk like they're different things. This is something the Chinese do. Leave the backdoor as a vulnerability. Sure other people may find it, but that means they have access to it from the git-go (on another note, this should be how you initialize repos in git)
That way when someone finds it, they could go "oops. thanks for pointing this vulnerability out for us. Will fix"
That gives you the worst of both worlds, though. You get the major developmental downside of a backdoor - making sure no one in the development pipeline finds and removes it - while still having to do the non-trivial work of actually exploiting the bug. Admittedly I don't have real experience with the 0-day black market, but the internet tells me I can just show up with $200k and buy a Chrome/Windows/iOS 0-day, if I know the right people. I find it hard to believe it's actually cheaper or even easier to backdoor software than it is to just buy the exploits.
Whether the NSA explicitly backdoored WinNT or simply knew so much about its vulnerabilities that they didn't need to, we can all safely stipulate that the agency had extrajudicial and coercive access to every networked WinNT system on the Internet.
I doubt the NSA needs or desires a backdoor in Windows.
If they were going to put that kind of pressure on Microsoft, they'd demand access to source code or perhaps a backdoor in its TLS implementation. These investments would pay off much better. I think you also vastly underestimate how much effort it would take. I sincerely doubt Microsoft would comply with a smile, and even after they won the (secret) court case, keeping the operation under wraps would be even more laborious. Snowden is proof that people who sign contracts are still capable of disclosing secrets.
It is worth it for the NSA to backdoor crypto because, if the implementation is solid and the keys can't be stolen, then no matter how many resources they pour into cracking it the math will remain inflexible.
There is no computer in the world with that kind of security. If it is smartly configured, they'll use 0-day. If it's airgapped, they'll compromise a sysadmin's computer and wait for them to connect to it (think Stuxnet). If that isn't feasible, they'll walk into the data center and put malicious hardware into a PCI slot.
If we persist in thinking of the NSA as a boogeyman logging every packet and backdooring every OS, rather than discussing their real capabilities and motivations - what they are, what they should be - we will become paralyzed to act against them, they will continue to operate without meaningful oversight, and our rights to privacy and to secure software will languish.
I don't doubt the NSA has Windows source code; many companies already have it (as well as a small number of individuals, actually). It's not unusual to have access to the Windows source code, TBH.
Saying that the NSA backdoored Windows is not a boogeyman-type claim; it's exactly within their real capabilities and seems like a very plausible path for them to have taken.
Nor does it mean we can't fight against it. We can use OpenBSD and have a higher confidence that it's not backdoored.
The first step to reining in the unchecked power of the NSA is not to claim that they would not have done such a simple thing but to realize exactly how atrocious the scope of their acts is - not to become paralyzed with fear, but to incite change.
>We can use OpenBSD and have a higher confidence that it's not backdoored.
Why? As a non-technical user, from my POV I'm simply trading trust that the NSA hasn't backdoored MS for trust that your authority, or de Raadt's, is meaningful. I can't review the source code I'm running (without a prohibitively large time investment), and as we saw with Heartbleed, the "many eyes" theory is flawed as well.
As an individual, non-technical user I have no reason to be any more confident in OpenBSD than in Windows. At some point you have to rely on a chain of trust (or develop the silicon yourself), and I view the "NSA paid/forced MS" boogeyman as just as likely as "NSA paid/forced OpenSSL to merge Heartbleed". Am I to believe that the NSA gagged the thousand or so developers who work on Windows, or just the 10 who manage OpenSSL?
The parent post makes a very important point, and the history better aligns with what he/she said. The NSA didn't coerce Google into giving up user data - they simply took advantage of the fact that Google's inter-DC traffic was unencrypted and used their resources to attack it. It didn't take a secret court, nor did it take a gag order. Google experienced an attack that could have been mounted by anyone dedicated enough - government or blackhat - and it's likely that keeping your software secure against such attacks is very effective at protecting user privacy.
>as we saw with Heartbleed, the "many eyes" theory is flawed as well.
I don't think Heartbleed counts as some sort of evidence against the "many eyes" paradigm. There are so many better bugs for that, as Heartbleed is really low-hanging fruit. OpenSSL is a total nightmare. I've posted elsewhere about this at length - but in short, OpenSSL is really an example of what a good program _shouldn't_ do, of how a good program _shouldn't_ be written. There is a list of sins a mile long at http://opensslrampage.org/.
The truth is that there is no guarantee that Windows, Linux, or BSD are not backdoored by the NSA, GCHQ, or FSB. There's no guarantee you didn't get owned and Chuck Blackhat installed a backdoor on your computer. The real reason to use OpenBSD is that it's had fewer remote exploits in the past 15 years than Windows has had in the past year. The real reason to use Linux and BSD is that that software respects your freedom. If you don't care about things like software freedom, or if you feel the security of Windows is "good enough" for what you're doing, then of course you don't care about Linux and BSD.
> I have no reason to be anymore confident in OpenBSD than in Windows
Past statistics show that OpenBSD is safer. It's had far fewer security issues and has a much cleaner codebase.
If you don't place faith in past statistics then you're willfully ignoring the best means of predicting future behavior.
In addition, OpenBSD has far fewer lines of code, and the most reliable predictor of security holes is lines of code. Simply by having fewer LoC, OpenBSD is already statistically less likely to contain a security hole.
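A back-of-the-envelope illustration of that claim - the defect density and LoC figures below are made up for the example, not measured:

    # If exploitable bugs scale roughly linearly with code size, a smaller
    # codebase carries proportionally fewer expected holes. All numbers
    # here are illustrative assumptions, not real measurements.
    DEFECTS_PER_KLOC = 0.05  # assumed exploitable-bug density

    codebases = {
        "smaller OS (assumed 3M LoC)": 3_000_000,
        "larger OS (assumed 50M LoC)": 50_000_000,
    }

    for name, loc in codebases.items():
        print(f"{name}: ~{DEFECTS_PER_KLOC * loc / 1000:.0f} expected holes")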
> chain of trust
Yeah, with Microsoft your chain of trust is Microsoft employees and the word of other people reverse engineering the code (e.g. the people who said the _NSAKEY thing was legit after reverse engineering a small portion of the code).
With OpenBSD your chain of trust includes me, the developers, and the other eyes that have looked at the code. The "many eyes" theory is not flawed. It never stated that having many eyes eliminates all bugs, merely that more eyes are better than fewer and increase the chance a bug is noticed. There's no sane way to argue against that statement unless you turn it into a ridiculous strawman of "many eyes means Heartbleed couldn't have happened, QED".
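That claim is easy to make precise. If each reviewer independently notices a given bug with some small probability, the chance that at least one of them notices it only goes up with the number of reviewers - a tiny sketch, with a made-up per-reviewer probability:

    # P(someone notices) = 1 - (1 - p)**n, strictly increasing in n.
    p = 0.01  # assumed per-reviewer chance of spotting the bug
    for n in (1, 10, 100, 1000):
        print(f"{n:5d} eyes -> P(noticed) = {1 - (1 - p)**n:.3f}")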
> Am I to believe that the NSA gagged the thousand or so developers who work on Windows, or just the 10 who manage OpenSSL
It's much easier to believe that the NSA could gag one or two of a thousand developers than one or two of 10. Believe me, you don't have to get every MS employee to futz with Windows security. Just getting one at random already gives you a decent probability of a kernel-level exploit, and selecting five or so specific employees can get you a hell of a lot more.
> the "NSA paid/forced MS" boogeyman
Evidence in this post-Snowden era indicates the NSA has worked to backdoor commercial software. It's also quite possible Heartbleed was an NSA-inspired hole, though I don't think that would be a productive discussion to have.
If you read the leaked NSA slides and look at what they have done (such as the Verizon MITM closet), then backdooring operating systems is not a boogeyman, it's quite reasonable.
You cite that they have intercepted data without the consent of the parties involved, but that ignores the fact that they also coerce parties; just because they have used the tactic you mention does not mean it's the only tactic they use.
If you're going to argue that BSD is no more secure than Windows and that the NSA is not in fact using gag orders and subverting software, you'll need a heck of a better argument.
I guess what I meant was, "We now have a huge amount of evidence about what they really are doing; let's talk about that and where the line should be drawn; speculating gets us nowhere."
I agree we're not doing enough with the information we do have to curb this, but speculation also has value. Speculating is a way to explore what the line might be and also could bring attention to an unknown infringement since the NSA surely isn't telling us everything yet.
> I think anyone who isn't deluded and/or a member of the "nothing to hide; nothing to fear" camp already knows you can't trust Windows
You drastically overestimate the general public's skepticism. I guarantee you that if you go down the street and ask random people whether they trust Windows, they will say yes (or no, and say they trust OS X instead). In HN-land, sure, that can be assumed, but I highly doubt that viewpoint is shared outside tech circles.
Nobody credible believes NSAKEY to have been a backdoor. Schneier debunked it back in the '90s. Microsoft already held a key that had the same authority that "NSAKEY" had, so the conspiracy theory here requires you to believe that NSA subverted Windows NT to add a key labeled "NSA KEY" despite having access to another key that did the same thing.
Still, the explanation given by Microsoft makes even less sense than the "backdoor" theory:
"Microsoft said that the key's symbol was "_NSAKEY" because the NSA is the technical review authority for U.S. export controls, and the key ensures compliance with U.S. export laws."
It makes no sense at all. But if you read between the lines, it's the key without which Windows couldn't be exported. So, at the very least, it's a key that enables weaker encryption outside of the US.
That there is some bizarre legal reason having to do with 90s export control policy that would lead MSFT to have a special registry token for NSA is far more plausible than the backdoor story, which is implausible by the very nature of the key we're talking about.
So if that was a key which gave the NSA access to communication that was otherwise unreachable to it, then it's not a backdoor into the computer, just into the communication, and therefore you deny it's a backdoor at all? Or are we just playing a game of "proper" names? What should we call a key that allows access to encrypted communication? Wikipedia calls that a backdoor too.
It is true that _NSAKEY was necessary for the technical implementation of cryptographic export controls. It is also true that one of the goals of cryptographic export controls was weakening the security of people who use exported software. (Although saying that the NSA had only one goal is pretty wrong: read up about DES's S-boxes, which caused all sorts of cries of "Backdoor!" before Snowden was even born.) But calling _NSAKEY a "backdoor" is confusing for lots of reasons.
At the same time as _NSAKEY, there was another bizarre mechanism in the '90s called "server-gated cryptography". US law was that exported cryptography could not be stronger than 40 bits, but there was an exception for financial organizations. The implementation was that certain CAs were trusted to verify whether their customer was in fact a financial organization (trusted in the sense that getting it wrong meant violating munitions-control laws...) and could place a special extension in the certificate. If that extension was present, export browsers would negotiate 128-bit cipher suites; otherwise they would only negotiate 40-bit cipher suites.
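The decision an export browser made looked roughly like this - a simplified sketch, with made-up suite names standing in for the real export cipher suites:

    # Server-gated crypto, schematically: an export build only offers
    # strong suites when the CA has vouched via the SGC cert extension.
    STRONG_SUITES = ["RC4-128", "3DES-168"]  # export-restricted strength
    EXPORT_SUITES = ["RC4-40", "DES-40"]     # 40-bit, always allowed

    def negotiable_suites(cert_extensions, domestic_build):
        if domestic_build:
            return STRONG_SUITES + EXPORT_SUITES
        if "server-gated-crypto" in cert_extensions:
            return STRONG_SUITES + EXPORT_SUITES  # CA vouched: full strength
        return EXPORT_SUITES                      # otherwise 40-bit only

    print(negotiable_suites(set(), domestic_build=False))
    print(negotiable_suites({"server-gated-crypto"}, domestic_build=False))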
This mechanism, incidentally, blew up in our collective faces two weeks ago under the name "FREAK", and there was a lot of talk about whether the NSA's meddling was appropriate.
But where's the backdoor? In this case, it is the presence of certain CA keys that allows strong crypto, and their absence weakens it. _NSAKEY has the same goal, but it's just done in reverse. So calling the key a backdoor is not very meaningful, since in the SGC case, we'd have to call the absence of a key a backdoor.
This is a very different sort of thing from the Lotus escrow business in this article, where the software silently encrypts the data to a public key owned by the NSA. The Windows _NSAKEY is just a signing key, and in US versions of the software, _KEY is also allowed to sign all the same things. Nothing is ever encrypted to _NSAKEY.
Or, in other words, the presence of _NSAKEY in US versions of the software cannot possibly weaken anyone's security.
If there is a backdoor here at all, it is the entire system of export controls for crypto. (Which everyone knew about because it was literally the law, so calling it a "backdoor" is sorta like calling Wikipedia's edit-this-page button a "security vulnerability".) All of this was very different from the Lotus backdoor described in the article.
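To make the contrast with the Lotus scheme concrete, here's a rough sketch of its "workfactor reduction field": a 64-bit session key with 24 bits escrowed to the NSA. (The real implementation encrypted those bits to an NSA public key with RSA; a shared secret and XOR stand in for that here.)

    import os

    NSA_ESCROW_SECRET = os.urandom(3)  # stand-in for NSA's escrow key

    def workfactor_reduction_field(session_key: bytes) -> bytes:
        """Escrow 24 of the 64 key bits so only the escrow holder reads them."""
        assert len(session_key) == 8  # 64-bit key
        return bytes(a ^ b for a, b in zip(session_key[:3], NSA_ESCROW_SECRET))

    key = os.urandom(8)
    wrf = workfactor_reduction_field(key)
    # NSA recovers 24 bits and brute-forces the remaining 40;
    # everyone else still faces the full 2**64 search.
    recovered = bytes(a ^ b for a, b in zip(wrf, NSA_ESCROW_SECRET))
    assert recovered == key[:3]

That's a real escrow: data is encrypted to a key the NSA holds. _NSAKEY, whatever it was for, never had anything encrypted to it.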
So the arguments you give are: the presence of the NSAKEY doesn't point to a backdoor because the whole system is the backdoor, and because US Windows was more secure anyway - who cares about dem Europeans or Asians.
The catch-22 is not a catch-22, the whole system is a catch-22, therefore don't ever call the catch-22 a catch-22.
RSA-768 was factored by academics in 2009[0].
It has long been speculated that the NSA can factor 1024-bit RSA (or DHE) using custom hardware, which is why in protocols like TLS and SSH the current recommendation is for keys, certificates, and Diffie-Hellman key exchanges to be at least as strong as RSA-2048 (e.g. 256-bit elliptic curve crypto is strong enough).
If I've got the maths right, the number field sieve's running time is sub-exponential in the key length, so a 760-bit key should take only modestly less time to factor than a 768-bit one - nowhere near the naive 1/256th you'd get by treating each bit as a doubling.
It is stronger than the 512-bit RSA factored in the FREAK attack but is still factorable with clusters. I wonder how fast an FPGA would be able to do it.
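For a sense of scale, you can plug key sizes into the number field sieve's heuristic complexity, L(n) = exp((64/9)^(1/3) (ln n)^(1/3) (ln ln n)^(2/3)). A rough sketch - constant factors are ignored, so only the ratios mean anything:

    # Relative GNFS factoring cost for several RSA key sizes, normalized
    # to RSA-768 (factored by academics in 2009). Heuristic only.
    from math import exp, log

    def gnfs_cost(bits):
        ln_n = bits * log(2)
        return exp((64 / 9) ** (1 / 3) * ln_n ** (1 / 3) * log(ln_n) ** (2 / 3))

    base = gnfs_cost(768)
    for bits in (512, 760, 1024, 2048):
        print(f"RSA-{bits}: 2**{log(gnfs_cost(bits) / base, 2):+.1f} vs RSA-768")

This is why 760 vs 768 bits barely matters, why 1024-bit is roughly a thousand times harder than 768-bit, and why the jump to 2048-bit is enormous.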
FPGAs are very, very inefficient at this sort of thing. They are very flexible, and you can program them to perform specific operations quickly relative to general-purpose hardware, but most of the silicon is dedicated to facilitating the FPGA's programmability rather than to the actual processing.
If you only have access to commodity hardware, then GPUs would probably be better. Xeon Phi is also insanely cheap right now - you can get a 57-core card for around $200 - but I don't have clear performance data for it. I know that for Bitcoin mining it's comparable to an R9 290/295X but with much lower power consumption, though I suspect that, given its relatively small market, the software for it is fairly poorly optimized at the moment.
The NSA and large private organizations most likely use specially designed hardware rather than commodity hardware, and surely not FPGAs.
For private individuals, the most cost-effective way to factor a single key these days is probably renting EC2 GPU instances (CUDA) from Amazon: at about 70 cents an hour, you should be able to factor 512-bit keys for $75-150 (based on confirmed reports).
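The arithmetic behind that figure, for what it's worth (the hourly price is the one quoted above; the budget range is the reported one):

    # At an assumed $0.70/hr per GPU instance, the reported $75-150 cost
    # corresponds to roughly 107-214 instance-hours of compute.
    PRICE_PER_HOUR = 0.70
    for budget in (75, 150):
        print(f"${budget} buys ~{budget / PRICE_PER_HOUR:.0f} instance-hours")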
1024-bit might also be within reach, though it would require a sizable budget.
Based on the current development of "auxiliary" processing components, whether GPU-based compute cards or more traditional but highly threaded cards a la Xeon Phi, it would not surprise me if 1024-bit or even 2048-bit keys became easy to factor before 2020.
My current bet is that 1024-bit will be achievable on EC2 or a similar service by late 2016 to mid-2017.
NIST has disallowed 1024-bit keys since 2014, and based on its previous deprecations, key sizes have tended to be factored within 2 years of the final deprecation notice.
It's also important to point out that there are quite a few "weak" RSA keys out there, and there's a good chance that the NSA and similar organizations have the capability to factor certain weak keys, probably up to and including 2048-bit.
By "scalably factor", I was referring to their ability to take arbitrary 1024 bit public keys as they appeared in random TLS sessions on the Internet and factor them on demand.
NSA can virtually certainly target a specific, hardcoded 1024 bit key and break it. In fact, leaving out the cost and difficulty of recruiting the team to actually put the pieces together, the typical California venture capital firm has the resources to build a machine to do that today. Eran Tromer put the cost of such a machine in the single-digit millions, many years ago.
Apropos of nothing: the gap between a 1024-bit key and a 2048-bit key is enormous. The thing that allows the NSA to meaningfully attack a 2048-bit key is likely to take RSA out altogether (and with it probably multiplicative finite field --- i.e., "conventional" --- Diffie-Hellman).
Watson Ladd has pointed out that, since breaking authentication 10 years after it's been deprecated does not let you retroactively MITM someone, while breaking key exchange 10 years after deprecation allows decryption of stored intercepts, the KEX should be stronger than the certificates. So make sure your TLS server with a 1024-bit key uses ECDHE or 2048-bit DHE, not plain RSA KEX.
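With Python's ssl module, for example, pinning a server to forward-secret key exchange looks roughly like this - a sketch, with placeholder cert/key paths and one reasonable cipher string among several:

    # Restrict a TLS server to (EC)DHE key exchange so that a future break
    # of the certificate's RSA key can't decrypt recorded traffic.
    import ssl

    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain(certfile="server.crt", keyfile="server.key")
    # Offer only forward-secret suites; static-RSA KEX is excluded.
    # (TLS 1.3 suites are ECDHE-based by design and configured separately.)
    ctx.set_ciphers("ECDHE+AESGCM:DHE+AESGCM")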
https://news.ycombinator.com/item?id=5846189