Did NSA Put a Secret Backdoor in New Encryption Standard? (2007) (schneier.com)
174 points by Jach on Sept 27, 2012 | 93 comments



Back when DES was being designed, IBM had a bunch of values in their S-boxes. The NSA told them "don't use those values; use these values instead." People freaked out, suspecting it was a backdoor the NSA had put in.

About 15 years later, differential cryptanalysis was publicly discovered. The original S-box values would have been very vulnerable to the attack, but the ones the NSA used were resistant, suggesting that the NSA knew about differential cryptanalysis way ahead of time and was suggesting ways to protect the public against its eventual discovery.
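
For a concrete sense of what "vulnerable" versus "resistant" means here: differential cryptanalysis starts by tabulating how often each input difference to an S-box produces each output difference. A quick sketch in Python, treating the first row of DES's S1 as a standalone 4-bit S-box purely for illustration (this is just the bookkeeping, not an attack):

    # Sketch: difference distribution table (DDT) for a 4-bit S-box.
    # SBOX is the first row of DES's S1, treated here as a standalone
    # 4-bit S-box purely for illustration.
    SBOX = [0xE, 0x4, 0xD, 0x1, 0x2, 0xF, 0xB, 0x8,
            0x3, 0xA, 0x6, 0xC, 0x5, 0x9, 0x0, 0x7]

    ddt = [[0] * 16 for _ in range(16)]
    for x in range(16):
        for dx in range(16):
            dy = SBOX[x] ^ SBOX[x ^ dx]   # output difference for input difference dx
            ddt[dx][dy] += 1

    # High entries (outside the trivial dx=0 row) are exactly what the
    # attack exploits; "resistant" S-boxes keep this maximum low.
    print(max(max(row) for row in ddt[1:]), "/ 16")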

It is possible that there is still some magic in there that lets the NSA defeat DES, but we still haven't found it. Similarly, it's possible that this random number generator exists for some nefarious purpose, but we have no evidence of it.

Also, this article is 5 years old (the headline didn't say so when I first read this). Schneier was in pretty big self-promotion mode at that time.


I can't find the citation right now, so treat this as apocryphal, but...

I've read an explanation that the USA's focus on security has changed in the days since 9/11/2001. At one time, the philosophy was that the country was more secure when we could all be assured that the privacy of our communications was intact.

But in the last decade we've changed that philosophy. The government's security philosophy now is that the most important thing is for the government to be able to tell what's going on.

So it may be that the parent post's anecdote is quite correct, and reflects the policy of that time. But in the days of the War On Terror, we can't trust that anymore.

(And OT, I believe it reflects a fundamental change in the philosophy of governance that is against the founding principles of the country, and a pretty bad thing. I believe that our strength derives from ... us -- we the people. And so strengthening the government by weakening us in fact weakens the nation.)

EDIT: fix date. Thanks, Retric.


> At one time, the philosophy was that the country was more secure when we could all be assured that the privacy of our communications was intact.

That's never really been the view of the U.S. government. It's always been the view that you should be able to have communications with your neighbor that are secure from everyone except the US government, who obviously need to snoop in order to protect us all.

The security agencies and the police agencies have always pushed for more comprehensive and more invasive surveillance. We've had wiretapping for as long as we've had wires to tap. We crippled Internet security for years. I remember the days of separate US and international Netscape releases, due to crypto export restrictions. It wasn't until 2000 that the restrictions were truly relaxed.


US and international Netscape releases (as well as Windows and various other tools) rather indicate that the US govt was interested in secure communication for US citizens and corporations, while being able to snoop on the rest of the world.

If you need an example, pick the Clipper chip - and even that doesn't _quite_ work out, given how publicly that proposal was shot down.


I think it's rather an example of how the Judicial branch keeps the other branches in check. If the NSA had its way, they'd be able to listen in on every conversation you ever have, track every site you visit, record every communication you ever make. They would do the same for everyone internationally as well. The difference is that we have the Supreme Court protecting US citizens to some extent, so the NSA cannot legally wiretap your phone just for kicks, but the Supreme Court doesn't extend the same protection to citizens of other countries.


And you believe the NSA when you can't see the warrants, or know who issued them, what they contain, etc.? How would we know they weren't tapping domestic communications? We wouldn't. Any whistleblowers would be roughed up or locked up... much like Thomas Drake.

https://en.wikipedia.org/wiki/Thomas_Andrews_Drake#2007_FBI_...

https://en.wikipedia.org/wiki/NSA_warrantless_surveillance_c...


I said that the judicial branch keeps the other branches in check. I didn't say that they are completely effective or that their checks are sufficient. And I certainly didn't say that the NSA wasn't engaging in any domestic wiretapping.


Yes, I think the key word in that sentence is "legally". The following article describes the situation re: warrantless monitoring of U.S. citizens by the NSA: http://www.newyorker.com/reporting/2011/05/23/110523fa_fact_...

The thing is that they cannot bring the results of this warrantless wiretapping into court. But they probably don't want to.

From what I've seen, the FBI is a lot more vocal in complaining about the impact of encryption because their mandate involves bringing cases to court, so they want a formalized, legitimate way of breaking encryption when they have warrants. They would also love to have the dragnet the NSA has, to know who to watch, and I don't know to what extent they do. But the bigger difference I see is that the NSA is not interested in launching court battles (any more), whereas that is the primary endgame for the FBI.

The problem of course is that an encryption system which can be broken in a formalized way is open to the possibility of being broken by the wrong people. You can't have your cake and eat it too by having strong encryption that can be broken by the "right" people because there is no way to theoretically describe who the "right" people are. The encryption has to work the same for everyone.

Like all big issues in society there are competing rights; the need for law enforcement bumps up against the freedom of the individual. I believe that we are comfortable enough pushing this balance more heavily towards the freedom of the individual in America that a policy of embracing strong encryption is in the best interests of everyone, but I am aware that I don't have as much knowledge about this issue as some others.


You can have an encryption system that can only be broken by the 'right' people. We already have crypto systems where any 1 of n people can decrypt the message. If you embed a public key into the algorithm, then only the algorithm's designers would know the private key needed for decryption.

Doing this in a non-obvious way seems much more difficult, but if the NSA did build a weakness into DES, exploiting it could very possibly require knowing a secret key.
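
As a sketch of the "1 of n" construction (hypothetical names, using the pyca/cryptography library): encrypt the message once under a fresh symmetric key, then wrap that key separately for each party who should be able to open it. A designer's escrow is the same construction with one extra, silently included public key:

    import os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    OAEP = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)

    # Hypothetical parties: the real recipient, plus an "escrow" key
    # whose private half only the algorithm's designer holds.
    recipient = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    escrow = rsa.generate_private_key(public_exponent=65537, key_size=2048)

    def encrypt_1_of_n(message, public_keys):
        dek = AESGCM.generate_key(bit_length=256)      # one-time data key
        nonce = os.urandom(12)
        ct = AESGCM(dek).encrypt(nonce, message, None)
        # Wrap the data key once per holder: any ONE of them can decrypt.
        wrapped = [pk.encrypt(dek, OAEP) for pk in public_keys]
        return nonce, ct, wrapped

    nonce, ct, wrapped = encrypt_1_of_n(
        b"meet at noon", [recipient.public_key(), escrow.public_key()])

    # The escrow holder unwraps its copy of the key and reads the message.
    dek = escrow.decrypt(wrapped[1], OAEP)
    print(AESGCM(dek).decrypt(nonce, ct, None))

The hard part, and what Dual EC is accused of, is hiding the escrow key inside the algorithm's constants rather than alongside the ciphertext.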


Actually it is a reflection of the law. The US government had the authority to prohibit export of crypto (at the time), but did not have the authority to limit it domestically.

If they had been given that authority, things might have been different.


That's kind of circular. If the government has the authority to create a law to do X, then the government has the authority to do X, full stop.


I always thought that the Clipper chip died in the court of public opinion and not in a court of law. However, I do not understand your charge of circular reasoning.

Let's look at the case where X is regulating the sale of switchblades. The federal government has the authority to regulate the sale (commerce) of switchblade knives between states; it does not have the authority to regulate the sale of a switchblade within a state.


> The federal government has the authority to regulate the sale (commerce) of switchblade knives between states, the federal government does not have the authority to regulate the sale of a switchblade within a state.

Let's look at the case of guns. Do you really think that Montana could say "you can sell Montana-made machineguns in Montana without satisfying federal law"? (The feds don't much care about switchblades. They care about guns.)

See http://en.wikipedia.org/wiki/Wickard_v._Filburn . In that case, the feds got to regulate even though the wheat in question wasn't sold and never left the farm.


Well, the feds did care about switchblades; that is why they passed a law banning their interstate sale: "the Switchblade Knife Act (Pub.L. 85-623, 72 Stat. 562, enacted on August 12, 1958, and codified in 15 U.S.C. §§ 1241–1245), prohibits the manufacture, importation, distribution, transportation, and sale of switchblade knives in commercial transactions substantially affecting interstate commerce between any state."[1] Evidence for a continued interest in switchblades can be found in the recent exemption carved out for assisted-opening knives in 15 U.S.C. § 1244.[2] (I think the exemptions in § 1244 were passed within the last 5 years as part of a Homeland Security appropriations bill, but I'm fuzzy on the exact date.)

Wickard was 70 years ago; interstate commerce doctrine has evolved a lot in the intervening years. In fact I'm a little surprised that you used it as an example. It has been a while since ConLaw I, but I think Wickard is often used as an example of the height of the broad interpretation of the commerce clause. Are you arguing that there is no limit on the power of the commerce clause? Or that Wickard is the controlling case? Lopez is one of many cases since Wickard where the Supremes walked back such a broad interpretation of the commerce clause.

[1] http://en.wikipedia.org/wiki/Switchblade#Federal_law

[2] http://www.law.cornell.edu/uscode/text/15/1244


> Wickard was 70 years ago, interstate commerce doctrine has evolved a lot in the intervening years.

The Supremes haven't overturned Wickard.

Yes, they did decide that the first version of the Gun Free School Zones Act didn't have a commerce nexus, but they seem quite content with the current version, which affects only those guns that have gone interstate.

However, the relevant question is whether the Supremes have ever decided that something sold can be exempt from the federal power to regulate interstate commerce.

Take machine guns. A Montana statute that allows unrestricted sale of machine guns made in Montana clearly affects "commerce" (in Montana at the very least) of guns not made in Montana, aka "interstate guns".

Do you really think that the Supremes would reject that argument? On what basis?

And, if they accept that argument wrt guns, why wouldn't they accept it wrt cantaloupe?


That was 1958, back when prohibition was still in the memory of many congressmen.

When prohibition was passed, the Civil War was still in memory and Congress felt it needed a constitutional amendment to ban ethyl alcohol.

Today, if Congress wants to ban a thing they simply pass a law that puts you in jail for its sale or possession. Simple as that.


"Today...they simply pass a law that puts you in jail for its sale"

Today? They have always done that. Which is why the USC reads as follows:

"Whoever knowingly introduces, or manufactures for introduction, into interstate commerce, or transports or distributes in interstate commerce, any switchblade knife, shall be fined not more than $2,000 or imprisoned not more than five years, or both."


It is not circular. Interest and authority are two very different things. For example, the Federal government has the authority to wage war -- this has no bearing on a discussion as to whether they are philosophically correct in doing so.

The suggestion that policy is justified merely because it subsists upon formal authority is nonsense.


The point is that the government may not have the authority to do X or to create a law to do X. They may have the ability, but the Supreme Court decides whether the authority exists.


Things are hardly so absolute.

The United States Constitution is the highest law, and provides for different treatment of foreign and domestic matters, so your statement is obviously false even under the most broad interpretation of "the government".

The President/Executive (closest to what many other countries would consider "the government") is also limited in most matters by the laws passed by Congress, so even assuming domestic regulation of cryptography were Constitutional (and I don't personally believe it would be), if Congress has not passed a law giving the Executive the authority to regulate it, the Executive cannot do so.


9/11/2001


And? Cryptography is much easier to build, test, ship, and even export after 9/11 than it was before it. I shipped commercial security products before 9/11 and it was a nightmare. A huge portion of all desktops ran insecure crypto simply because it was too logistically challenging to ensure that they had good crypto and were easy to sell in Europe and Japan.

There is simply nothing to this analysis. The crypto policy fight happened in the late '90s, and crypto won.


Yes, the crypto fight was fought and seems to have been won by the people. But crypto is only one facet of a broader communications security policy.

It should be clear from recent controversies, "how private is your cellphone", warrantless wiretapping and the retroactive legalization of the same, and various proposals for granting the government authority over the Internet (including a "kill switch" and a rumored upcoming Executive Order, since he can't get it through Congress), that in the broader context the US government is very much interested in monitoring communications.


We're talking about crypto on this thread. I see the controversies over government access to communications differently than you do, but I'm not particularly interested in litigating the issue. The federal government has not subverted cryptography in any meaningful way; industry does a perfectly good job of that for them.

You have a lot more to fear from the Linux devs "cleaning up" OpenSSL's CSPRNG than you do from the NSA.


But this is not true. David Wagner and Ian Goldberg (the cryptographers who cracked GSM) have documented that the encryption used was purposefully weakened to enable realtime software decryption of voice calls.

This was back in the late 1990s, and there was a lot of discussion on the cryptography mailing list at the time (e.g. http://www.mail-archive.com/cryptography@metzdowd.com/msg007...), but there is a fairly readable mass-market piece here (http://scard.org/gsm/pr/nytimes/). I'm only an amateur when it comes to this stuff, but why do you think David Wagner is wrong?


That happened in the 1990s. At the same time, the US Government tried to directly criminalize unregulated sales of encryption. They lost both fights: in 2012, it is easier than it has ever been to encrypt phone calls in a manner that prevents LEOs from eavesdropping on them.


That's true for phone calls for people that know how to do this. However:

1. Most people are unable to do this technically.

2. The fact that you do it may constitute prima facie evidence of being a person of interest.

3. The government is trying very hard to get the means to wiretap VoIP.

4. It doesn't address traffic analysis at all. I know you said you aren't concerned about this, but there are plenty of people who are, and the government is going like gangbusters (literally, I guess) toward this.


What does "prima facie evidence of being a person of interest" even mean? You can be a person of interest simply by virtue of build and hair color.

The US Government hasn't restricted traffic analysis, and indeed nothing they have ever proposed W.R.T. encryption could have controlled traffic analysis.


Just correcting the parent: they said 2011, not 2001.

As for pure software crypto, it's not really that important compared to securing the endpoints. Consider that WoW uses encryption when logging in, but a significant percentage of accounts get hacked before their owners add an authenticator, either as a keychain token or on a cellphone. I suspect that if they ever became mainstream, purely client-side bitcoins would be DOA for more or less the same reasons.


To expand a little, the NSA also required the initial permutation of plaintext bits. This was done before the 16 rounds of DES, and looked like this:

    58 50 42 34 26 18 10 2
    60 52 44 36 28 20 12 4
    62 54 46 38 30 22 14 6
    64 56 48 40 32 24 16 8
    57 49 41 33 25 17  9 1
    59 51 43 35 27 19 11 3
    61 53 45 37 29 21 13 5
    63 55 47 39 31 23 15 7
Which meant "put the 58th bit of the plaintext into the first position, put the 50th bit into the second position, etc, and THEN run through the encryption algorithm."

At the time, it was totally unclear why you had to rearrange the bits in this very exact way. After all, you were about to encrypt it (and obliterate any plaintext patterns) anyway. And would it be as strong if you started with the 57th bit instead of the 58th? The whole thing seemed so arbitrary.

Now it's true that this is robust to differential cryptanalysis, but it's also true that these bit permutations significantly slow down software implementations of DES. But it's trivial to implement the initial permutation in hardware.
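
To see where the software cost comes from, here is the permutation as a bit-by-bit loop, a sketch rather than a full DES (bits numbered 1..64 from the most significant, per the FIPS convention). Every 64-bit block pays 64 shift-and-mask steps before round 1 even starts, while in hardware the same permutation is just crossed wires and essentially free:

    # Sketch: DES initial permutation applied in software (not a full DES).
    # IP[i] gives the 1-based source bit for output position i+1, exactly
    # as in the table above.
    IP = [58, 50, 42, 34, 26, 18, 10, 2,
          60, 52, 44, 36, 28, 20, 12, 4,
          62, 54, 46, 38, 30, 22, 14, 6,
          64, 56, 48, 40, 32, 24, 16, 8,
          57, 49, 41, 33, 25, 17,  9, 1,
          59, 51, 43, 35, 27, 19, 11, 3,
          61, 53, 45, 37, 29, 21, 13, 5,
          63, 55, 47, 39, 31, 23, 15, 7]

    def initial_permutation(block):
        """Permute a 64-bit integer block; bit 1 is the most significant."""
        out = 0
        for i, src in enumerate(IP):
            bit = (block >> (64 - src)) & 1   # fetch the source bit
            out |= bit << (63 - i)            # place it at output position i+1
        return out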

In the 1970s, the hardware required to crack DES cost about $20,000,000 [1] (about $120 million in today's dollars [2]). The general tinfoil theory at the time was that 1) only the NSA had the resources to build such a machine and 2) by forcing DES to use this initial permutation, the NSA was giving itself a significant "head start" over everybody else using software to crack DES.

[1] http://www.krapp.org/hydra/courses/analysis/3-DataEn.pdf (PDF)

[2] http://www.wolframalpha.com/input/?i=%2420%2C000%2C000+1970+...


The permutations in DES have nothing to do with security; there is an excellent explanation by Thomas Pornin on what their purpose was: http://crypto.stackexchange.com/a/6/592

What was altered were the 8 S-boxes, seemingly random lookup tables that map 6- to 4-bit values. More details by Don Coppersmith himself at: http://simson.net/ref/1994/coppersmith94.pdf


I phrased that poorly (and just edited my first sentence to reflect your correction). What I was trying to add was that the NSA made some other modifications to DES that were also seen as dubious at the time.


From https://en.wikipedia.org/wiki/Differential_cryptanalysis

"In 1994, a member of the original IBM DES team, Don Coppersmith, published a paper stating that differential cryptanalysis was known to IBM as early as 1974, and that defending against differential cryptanalysis had been a design goal."


Don't forget that IBM actually designed Lucifer, which used 128-bit keys, but it was the (edit: s/NSA/NHIS) that truncated it to 56 bits and named it DES. So maybe this is a case of "six of one, half a dozen of the other"?


And don't forget that Eli Biham smote Lucifer with differential cryptanalysis, found more than half of the keys insecure, reduced its security to 2^35, and published a journal article concluding that the NSA strengthened the cipher design; DES was better than Lucifer.


Maybe 3 decades after it was released, according to this paper: http://scholar.google.ca/scholar_url?hl=en&q=http://59.1... Even if the NSA-appointed S-boxes did improve Lucifer, decreasing the block size does not really help your point.


You're not arguing with me; you're arguing with Eli Biham. He addressed the question you asked head on.


> suggesting that the NSA knew about differential cryptanalysis way ahead of time and was suggesting ways to protect the public against its eventual discovery

The NSA may also have known who else knew about differential cryptanalysis, and didn't want DES to be weak in ways that others could break, but left it weak in ways that only they could.


You would think that after this many decades any such built-in weakness to DES would have been discovered. Do you think the NSA crypto knowledge (or rather, IBM crypto knowledge, since that's where differential cryptanalysis was developed) back in the 1970s was better than the modern crypto knowledge?


To give one small example: Clifford Cocks invented RSA about 3 years before Rivest, Shamir, and Adleman did. But he did it by himself, and he did it in his head. Cocks' version was revealed 25 years after he created it; 22 years after RSA was revealed.

They've only just released some of the stuff that Turing did.

They keep things secret, and they use them hard. There's not really any way to know what they know about your system, which is why cryptography likes systems that seem secure even when you know everything about the system.

(http://www.youtube.com/watch?v=a-xEiOvXux4)


"3 years" in not quite the same secret head start that "30 years" is.


The point is that even though public-key crypto was independently developed and made public 3 years after Cocks did it, they still kept his version secret for 22 years.

So imagine what they do with the secrets that are still secret - the secrets not independently developed and made public.


I think of all the public crypto that we know duplicated intelligence-agency crypto, the public was only behind by a handful of years. We haven't heard of anything that was like "Oh yeah, the NSA had this twenty years ago." (Differential cryptanalysis is probably the biggest of these gaps, and its timeframe is something of an outlier.)

So either the public is doing a decent job of keeping up with the spooks, there's a massive misinformation campaign where intelligence agencies only admit to having discovered things that the public discovered soon afterwards, or there's a strange bimodal property where the public replicates private results either five years later or fifty.

Also, the NSA is not _fundamentally smarter_ than the rest of the world; they're just possibly more focused on it. So exactly how a clever idea would occur to them in the 1970s and 1980s yet occur to nobody in academia since then needs some explanation.


If that's true, then the NSA is 40 years ahead of the public.


Some information about computers from the early '50s is still classified — http://www.governmentattic.org/4docs/NSAgenSpecComputers_198... . That plus http://www.governmentattic.org/3docs/NSA-HGPEDC_1964.pdf are interesting reads for anyone interested in the early development of computing.


Spot on. You have to realize that the government buys software and systems that rely on encryption from the private sector. It is in their own interest to have the strongest crypto in government hardware and software. So while they know a lot of exploits, they also know that it is quite plausible that our enemies (e.g. China, Russia) know the same exploits, and using government hardware built on a poor standard is not a great idea.


>>It is possible that there is still some magic in there to let the NSA magically defeat DES, but we still haven't found it.

Actually the NSA S-boxes are weak against linear cryptanalysis. http://reference.kfupm.edu.sa/content/l/i/linear_cryptanalys...

If I remember Schneier's Applied Cryptography correctly, the NSA S-boxes were among the worst 7% possible.

I wonder what we would be saying about the NSA if we had (publicly) discovered linear cryptanalysis before differential. However, I suspect the vulnerability to linear analysis is a result of how structured they made the S-boxes to resist differential attacks.
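
For the curious: linear cryptanalysis hunts for biased linear relations between input and output bits, and its basic bookkeeping object is the linear approximation table. A toy sketch, again treating the first row of DES's S1 as a standalone 4-bit S-box for illustration:

    # Sketch: linear approximation table (LAT) for a 4-bit S-box.
    SBOX = [0xE, 0x4, 0xD, 0x1, 0x2, 0xF, 0xB, 0x8,
            0x3, 0xA, 0x6, 0xC, 0x5, 0x9, 0x0, 0x7]

    def parity(v):
        return bin(v).count("1") & 1

    # lat[a][b] counts how often the input parity under mask a agrees with
    # the output parity under mask b, minus 8 (the unbiased expectation).
    lat = [[sum(parity(x & a) == parity(SBOX[x] & b) for x in range(16)) - 8
            for b in range(16)] for a in range(16)]

    # Entries far from 0 are the biases linear cryptanalysis exploits.
    print(max(abs(lat[a][b]) for a in range(16) for b in range(1, 16)))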


Well, the NSA is building that data center in Utah to store and process all the data they're hoovering up. That data would be much more useful if it weren't encrypted. So they do at least have a motive, even if the ability can't be proved.


If I'm not mistaken, they've sort of tweaked the definition of "interception" too. The game plan is to capture everything and then it's 'intercepted' when they actually listen and analyze it.


So do they basically work on the definition of Schroedinger's box?

"We've captured a bunch of data. But as long as we don't observe it, it's not legally going to be counted as intercepted."

Sounds to me like a massive slippery slope just waiting to happen.


Yeah, you think?

That was the talk at Defcon. I didn't do enough follow-up to find out, but it's pretty openly known that the new Utah data center can store every American's phone calls, emails... everything, for a century.

If you can slip that past whoever it is that protects us, there are some corner cases. Say we do rub out a terrorist cell that pulls off a successful attack (20 years after this database is running): could you then mine that database to determine whether your terrorist-finding AI works? If I were training up voice-fingerprinting algorithms and such, you have an incredible dataset, and there are likely other signals coming in to help populate it (maybe the census? say you're training something that detects Arab accents).

Never mind the fact that it's so huge and so much data that 1) it has to be online and 2) all intelligent queries will be handed to AI/Google-like software agents. Could a future president query the database to dig up dirt on an upcoming election opponent? (He's the president, right?)


Well, you know, if the President does it, of course it'd be legal.

;)


It is possible that the NSA modified this RNG for perfectly good reasons, but, in my opinion, even the slight possibility of an attack existing is too high when other better RNGs exist.


This isn't a controversy. Nobody would have used that RNG anyways. There's probably no bignum math in any commonly used RNG, let alone elliptic curve math. Not only that, but Dual EC had problems even before Ferguson pointed out the parameter weirdness.

You do not need to be on guard against secret NSA Dual EC backdoors.


This is quite interesting, as the NSA on the one hand needs to provide secure algorithms for the public to use, but on the other hand has an interest in being able to break algorithms.

IIRC, during the standardization of DES, the NSA also modified some S-boxes without giving any explanation. Only later, when differential cryptanalysis became known to the public, did it become clear that this was to strengthen DES against this particular attack (which was already known to the NSA).

The case here is more interesting, though: it seems you need to know some secret numbers (a sort of "private key", if you will) to be able to attack the PRNG. So the NSA could place a "safe" backdoor that even an attacker with the same cryptographic knowledge cannot exploit unless he also possesses the "private key".
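
To make that "private key" intuition concrete, here is a toy Python sketch of the Dual EC structure over a deliberately tiny, made-up curve (all constants invented, nothing like the real parameters; the real generator uses NIST P-256 and truncates output bits, so a real attacker must also guess the dropped bits). Whoever chooses the published points P and Q can pick them so that P = d*Q, and knowing d turns a single raw output into the generator's entire future state:

    # Toy model of the suspected Dual_EC_DRBG trapdoor; everything here is
    # invented for illustration only.
    p = 1009                  # small prime field (toy)
    a, b = 1, 6               # curve y^2 = x^3 + a*x + b over GF(p) (toy)

    def add(P1, P2):
        # Affine point addition; None plays the point at infinity.
        if P1 is None: return P2
        if P2 is None: return P1
        (x1, y1), (x2, y2) = P1, P2
        if x1 == x2 and (y1 + y2) % p == 0:
            return None
        if P1 == P2:
            lam = (3 * x1 * x1 + a) * pow(2 * y1, p - 2, p) % p
        else:
            lam = (y2 - y1) * pow(x2 - x1, p - 2, p) % p
        x3 = (lam * lam - x1 - x2) % p
        return (x3, (lam * (x1 - x3) - y1) % p)

    def mul(k, pt):
        # Double-and-add scalar multiplication.
        acc = None
        while k:
            if k & 1: acc = add(acc, pt)
            pt = add(pt, pt)
            k >>= 1
        return acc

    def order_of(pt):
        k, acc = 1, pt
        while acc is not None:
            acc, k = add(acc, pt), k + 1
        return k

    def lift_x(x):
        # Find a point with this x-coordinate, if any (brute force; toy sizes).
        rhs = (x**3 + a*x + b) % p
        for y in range(p):
            if y * y % p == rhs:
                return (x, y)

    # The planted trapdoor: pick Q, a secret d, and publish P = d*Q.
    Q = next(pt for pt in map(lift_x, range(p)) if pt and order_of(pt) > 50)
    d = 17                    # known only to whoever chose the parameters
    P = mul(d, Q)

    def drbg_step(s):
        # One Dual-EC-style step: the next state comes from P, output from Q.
        sP, sQ = mul(s, P), mul(s, Q)
        if sP is None or sQ is None:
            return None       # toy-curve edge case; caller skips this seed
        return sP[0], sQ[0]

    def run(seed):
        r1 = drbg_step(seed)
        if r1 is None: return None
        s1, out1 = r1
        r2 = drbg_step(s1)
        return None if r2 is None else (s1, out1, r2[1])

    # The victim seeds the generator; the attacker observes only out1.
    s1, out1, out2 = next(r for r in map(run, range(2, p)) if r)

    # Attack: lift out1 back to A = (+/-)seed*Q; then d*A = (+/-)seed*P,
    # whose x-coordinate is exactly the generator's next internal state.
    A = lift_x(out1)
    s1_recovered = mul(d, A)[0]
    assert s1_recovered == s1
    assert mul(s1_recovered, Q)[0] == out2
    print("state recovered; all future output is predictable")

The public can check that P and Q are valid curve points, but has no way to tell whether they were generated with a known d. That is exactly the complaint.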


What sense does it make to backdoor an RNG that no normal system is ever going to use, and to do so in a public standard, and further to do so by using deliberately broken curve parameters?

It was an interesting story because it was so weird. But even had Dual EC not become known to the world exclusively as "the CSPRNG with the backdoor in it", nobody would have used it anyways.


As a teaching moment? I've been told a story about a group of embassy staffers (State Department, not the other kind) who were in the habit of being rather loose with sensitive conversations over the local telephones. This was during the Cold War, when secure conversations often meant driving into the office to speak face to face, which could understandably make weekend news rather unwelcome. This apparently made some folks on station rather unhappy, but repeated admonishment fell on deaf ears. The reported solution was to remove a tiny bit of insulation from several of the phone wires where they entered the building on the roof. This led to very occasional clicking on the lines as wind moved the wires and caused momentary shorts, which blossomed into a newfound respect among the corps for communication protocol.

Most people learn much better by doing (or getting done).


I agree it was interesting because it was weird. One wonders if the NSA might put things like this out to test the analysis ability of the 'open' community. I wonder if there is ever a question inside the Fort like "Well, we can do this, but how obvious is it to people outside our group?" Testing that question isn't easy, but answering it is important for knowing the size of the threat posed by an equivalent type of vulnerability.


Since many of these procedures are implemented directly in hardware, maybe it could be used as a conditional fallback? An example might be a MITM attack that forces an invalid handshake, or signals that an insufficient random pool was detected, so the client would attempt to connect using each alternative in turn until it reaches the weak link. Technically speaking this would be ideal, since governments and the ISPs that carry these messages are so intertwined that the breadth would cover every citizen in their own country, and likely many others.


Yeah that is rather weird. A botched attempt maybe?


No.


Nobody used this random number generator.


Perhaps the government required defense contractors to use this one in products sold to other countries?

I don't have any evidence for that, just speculation on one reason the NSA might insist on including something like this in the spec.


"In the meantime, both NIST and the NSA have some explaining to do."

Not happening. Hasn't yet, nor will it. The NSA just keeps quiet and watches for someone to use their particular algorithm. We should assume malicious intent and take Schneier's advice: "My recommendation, if you're in need of a random-number generator, is not to use Dual_EC_DRBG under any circumstances."

And if you "don't have anything to hide," then I propose you've given up already and cryptography is useless to you.


You don't have to take Schneier's advice in order to not end up with an elliptic curve random number generator. Nobody does that anyways.


Can you elaborate on why?


1 - Very slow

2 - A nightmare to implement securely

3 - Only exists for the sake of provable security

4 - The same standard defines much better alternative pseudorandom generators based on hashes and/or block ciphers (a toy sketch below).
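
For flavor, the hash-based approach amounts to hashing a secret state forward; here is a toy sketch loosely in the spirit of SP 800-90A's Hash_DRBG (simplified and non-compliant, not for production use):

    import hashlib, os

    class ToyHashDRBG:
        """Toy sketch only: loosely Hash_DRBG-shaped, NOT SP 800-90A compliant."""

        def __init__(self, seed):
            self.v = hashlib.sha256(b"instantiate" + seed).digest()

        def generate(self, nbytes):
            out, counter = b"", 0
            while len(out) < nbytes:
                # Output blocks are hashes of the secret state plus a counter,
                # so observed output reveals nothing about the state itself.
                out += hashlib.sha256(self.v + counter.to_bytes(4, "big")).digest()
                counter += 1
            # Ratchet the state forward for backtracking resistance.
            self.v = hashlib.sha256(b"update" + self.v).digest()
            return out[:nbytes]

    rng = ToyHashDRBG(os.urandom(32))
    print(rng.generate(16).hex())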


Which raises the question of why such a useless thing was even standardized in the first place.


From that perspective, one wonders if they are instead aware of attacks on the other three and designed the fourth to withstand those attacks (but look suspicious and be unappealingly slow).

Overall, the mental game is quite taxing, so I'm going to plead "bureaucracy."


That seems unlikely; DRBGs are really quite solid little things. Or in other words, if the other NIST DRBGs are significantly broken, it would almost certainly imply that the hash functions upon which they were built are broken much, much worse.


It's easy to see how one could get caught up in a web of paranoid implications, each one less likely to be true than the one before.


I'm curious as well. Why not?


'pbsd gave a much better answer than I could have.


Not entirely. While the best cryptography is that which is secure against everyone (duh), there are a vast number of perfectly legitimate use cases for cryptography where it doesn't matter if the NSA can read your encrypted data.


I can't imagine any legitimate use case for crappy cryptography, and any crypto with a backdoor is crappy. If the door is there, anyone with the key can open it. How can you be sure who will get the key, or what their motives will be?

Good, modern cryptography offers a level of security unparalleled in the physical world, and at a processing cost which any computer can handle. Why would you intentionally choose something inferior?


The example often given is "MILITARY GRADE ENCRYPTION": if tank driver Bob is given commands to shell a target, those commands only need to stay secret for a short length of time. It doesn't matter if the enemy can decrypt them in a week's time, because Bob will have attacked the target and be gone by then.

This is, obviously, a really old example, because modern hardware makes both encrypting and decrypting quick and easy. But imagine strong encryption in the 1980s: you had to balance strength against time against size, and then try to cram it into low-specced hardware.


OK, yes, there is such a thing as "strong enough crypto for your purposes", and there are tradeoffs for computing power, etc.

My point was that there's no point in picking something with a known back door when there are perfectly good, non-compromised alternatives.


>How can you be sure who will get the key, or what their motives will be?

You can never be infinitely certain, but neither can you be certain your non-backdoored crypto is secure. There are cases where "the NSA leaks their top-secret backdoor to the internet" is a vanishingly small threat vector compared to other possibilities (e.g. rubber hoses).

>Good, modern cryptography offers a level of security unparalleled in the physical world, and at a processing cost which any computer can handle. Why would you intentionally choose something inferior?

You wouldn't. But if it later turns out the NSA has a backdoor in the algorithm you happened to pick, and for whatever reason you're unable or unwilling to change algorithms now, that still doesn't mean "you've given up already and cryptography is useless to you".


> You can never be infinitely certain, but nor can you be certain your non-backdoored crypto is secure.

True. But "is this algorithm crackable?" is a question that can be answered with experimentation and math. "Has the NSA lost its keys or its scruples, and if not, will they ever?" cannot be answered at all until you know you've been hacked.

> But if it later turns out the NSA has a backdoor in the algorithm you happened to pick, and for whatever reason you're unable or unwilling to change algorithms now, that still doesn't mean "you've given up already and cryptography is useless to you".

Maybe not, but it does mean you value something else more than you value security. Yes, everyone makes cost/benefit decisions about security. But if your answer to "whoops, whoever knows this trick can see all our bank transactions" is "meh, it would take like, a week to fix that," you're placing a pretty low value on security.


Use case for crappy crypto: performance. Even Google uses RC4 for Gmail.


RC4 isn't perfect, but it's immune to some attacks (e.g. BEAST) that block ciphers like AES tend to have problems with.

RC4 actually has a huge internal state. You could use a 1600-bit key with it if you wanted to.
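
For reference, RC4's entire state is a 256-byte permutation plus two indices, which is where the huge-state claim comes from; the key schedule accepts keys up to 256 bytes. A minimal sketch, for illustration only:

    # Sketch: RC4 keystream generation. The state S is a 256-byte
    # permutation (plus indices i, j); the key schedule below accepts
    # keys up to 256 bytes long.
    def rc4_keystream(key, nbytes):
        # Key-scheduling algorithm (KSA)
        S = list(range(256))
        j = 0
        for i in range(256):
            j = (j + S[i] + key[i % len(key)]) % 256
            S[i], S[j] = S[j], S[i]
        # Pseudo-random generation algorithm (PRGA)
        out = bytearray()
        i = j = 0
        for _ in range(nbytes):
            i = (i + 1) % 256
            j = (j + S[i]) % 256
            S[i], S[j] = S[j], S[i]
            out.append(S[(S[i] + S[j]) % 256])
        return bytes(out)

    # XOR the plaintext with the keystream to encrypt; same call decrypts.
    print(rc4_keystream(b"Key", 8).hex())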


I'd rather err on the side of keeping them in the dark all the time. Encrypting data differently based on whether I mind if the NSA sees it leaks information: specifically, that this data doesn't matter but that data does, and the NSA should focus their efforts on breaking that data or analyzing the social graph affiliated with it.


Are there? If I knew something had a backdoor, I'd always wonder who else has figured it out.


Interesting... Was there any code written to demonstrate the attack?


No. The backdoor relies on the generator of the parameters (presumably the NSA) knowing a discrete logarithm ahead of time. Anyone outside the NSA would have to break a rather large discrete logarithm, which is just not gonna happen anytime soon.


That's a good question. I suspect if so, it was alongside an academic paper and this stuff just doesn't get attention from journalists. I'd be curious to find out but I don't have the skills to pore over the material myself.


Do you really think "academic finds back door in NSA's random number generator" wouldn't get headlines? Remember how the mere mention of the name "NSAKEY" freaked everyone out?


Not exactly current - the article is something like 5 years old....


You'll notice the "[2007]" in the post title.


Apparently the mods added it later. There really should be some way to tell what the mods have and haven't done.


Something like mouse-over for original title would be great.


Excellent idea. I just submitted the feature request:

https://news.ycombinator.com/item?id=4580886


Yeah, I started reading it and thought, gee, this sounds familiar.



