Distrustful U.S. allies force spy agency to back down in encryption fight (reuters.com)
286 points by petethomas on Sept 21, 2017 | 84 comments



SIMON and SPECK are both pretty straightforward block cipher designs. You can implement them in less than 100 lines of code. There are no s-boxes or weird constants. Unlike a lot of what NSA designs, they were published formally, with design papers that included rationales. The software-optimized cipher (SPECK) is a simple ARX design. The hardware-optimized cipher (SIMON) uses bitwise operations instead of AND. This is pretty mainstream stuff. It is very unlikely that they harbor backdoors.

(The point of both algorithms is to provide scalable low-profile crypto, instantiable at very small key and block sizes; this is something you'd want if you were, for instance, building an encrypted IoT scheme on microcontrollers.)
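
To make the "less than 100 lines" point concrete, here's a rough sketch of Speck64/128 in C, using the rotation amounts (8, 3) and 27 rounds from the published design paper. The word-ordering and key-word-ordering conventions are my own assumptions; anyone implementing this for real should check against the paper's test vectors. It's an illustration, not a vetted or constant-time-audited implementation:

  #include <stdint.h>

  /* Sketch of Speck64/128: 32-bit words, 4-word (128-bit) key, 27 rounds.
     Rotation amounts 8 and 3 are the published parameters for this variant. */
  #define ROR32(x, r) (((x) >> (r)) | ((x) << (32 - (r))))
  #define ROL32(x, r) (((x) << (r)) | ((x) >> (32 - (r))))

  static void speck_round(uint32_t *x, uint32_t *y, uint32_t k)
  {
      *x = (ROR32(*x, 8) + *y) ^ k;  /* rotate, add, XOR with round key */
      *y = ROL32(*y, 3) ^ *x;        /* rotate, XOR                     */
  }

  void speck64_128_encrypt(const uint32_t pt[2], uint32_t ct[2],
                           const uint32_t key[4])
  {
      uint32_t x = pt[1], y = pt[0];
      uint32_t a = key[0];                        /* running round key k_i  */
      uint32_t l[3] = { key[1], key[2], key[3] }; /* key-schedule words l_i */

      for (uint32_t i = 0; i < 27; i++) {
          speck_round(&x, &y, a);        /* encrypt with round key k_i         */
          speck_round(&l[i % 3], &a, i); /* the key schedule reuses the same
                                            round function, "keyed" by the
                                            round counter                      */
      }
      ct[0] = y;
      ct[1] = x;
  }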

That doesn't mean they should be international standards; maybe it makes sense that after Dual EC, the NSA doesn't have a shot at producing a global standard for low-profile encryption. But however well justified, it's mostly a political decision, not a technical one.


>it's mostly a political decision, not a technical one.

Is there a good reason to trust the NSA's motivations?

The NSA's stated motivation from the article:

>encryption tools [...] without requiring a lot of computer processing power.

But it was noted that:

>“There are probably some legitimate questions around whether these ciphers are actually needed,” said Curtis Dukes, who retired earlier this year. Similar encryption techniques already exist, and the need for new ones is theoretical, he said.

The NSA's purpose is as a spy agency. No matter how effective that clear coat they're selling you might actually be, they're probably selling you on something you really don't need because it has a benefit to their purpose.


The use case for Simon/Speck is things like IoT and other places where you are limited not only by CPU power but also by message sizes, while on the other hand the amount of traffic is so small that you can live with significantly smaller Ns in the various 2^n attack difficulties than normal use cases allow. For such applications it is certainly better to use Simon/Speck than to roll your own proprietary algorithm (see KeeLoq, NXP's "Crypto-1" as used on MIFARE Classic, and so on).

Probably the only other algorithm with a similar level of scalability that was openly peer reviewed is RC5. In comparison to Speck/Simon, RC5 is significantly more complex (due to its key schedule) and is not constant time on platforms without a barrel shifter. (X)TEA and such things can be similarly tuned for various block and key sizes, but there are no recommended parameters for doing that, and thus you get into "rolling your own crypto" territory very fast.


No, there's no reason to trust the NSA's motivations. I'm just not sure what the NSA's motivations have to do with this.

If you're asking, "do we actually need lightweight ciphers", well, the NSA isn't the only organization designing them; it's a whole field of research. If you want cryptographic security on machines that don't have multipliers and count their capacity for program text space in single-digit kilobytes, you're probably going to reach for special-purpose designs.


Running an algorithm chosen by an attacker with extensive resources is foolhardy, because you can never be certain that your resources are sufficient to detect a trap carefully hidden by their resources. We have a history of the NSA performing attacks and standards subversion. Why accept their potential trojan horse when you can have algorithms designed by those without that checkered past, keep up the same amount of scrutiny for potential trojan horses, and have decreased odds of a backdoor being present if the provider is more trustworthy?

It seems that taking motivations into account could lead you into a false sense of security, but if you keep up the same amount of scrutiny and distrust known bad actors, you increase it.



The NSA massively violated trust in the technical standards process; now they get to lie in the bed they made.

If your response to this is "don't worry about it, it's unlikely these ciphers are backdoored", you are missing the point.


This isn't responsive to anything I wrote on this thread. I am interested in talking about cipher design, or about the technical intersection between trust and cipher design. I've already stipulated the politics of the story, which are deeply boring to me. You're addressing arguments I haven't made.


The NSA is dual purpose. Both as a signals spy agency and a signals counterintelligence agency.

And ultimately, what's the difference between a publicly vetted algorithm proposed by the NSA and a publicly vetted algorithm proposed by someone else?

Everyone points at the Dual EC fiasco, but if vulnerabilities are possible either way it seems like throwing the baby out with the bathwater.


Any food could be poisonous, possibly with a poison you don't know how to detect. Would you rather eat food prepared by someone known to have poisoned you before, with obvious motivation to poison you again, or someone with no history of poisoning people? In no case are you guaranteed to not have poisoned food, but I know which I'd prefer.


There's a strain of anti-intellectualism here. It's actually not true that any algorithm might be backdoored. Algorithms aren't unknowable.

Furthermore, people act like there's a track record of super-sneaky NSA backdoors, and that Dual EC shows we can't trust anything NSA produces. I don't trust the NSA at all. But Dual EC wasn't sneaky. There were only two surprising things about Dual EC:

(1) That they actually managed to get anyone to use it, despite how clunky and slow and unreasonable the design was.

(2) That having produced a design that stuck out like a sore thumb for clunkiness, slowness, and unreasonableness, that design would be their backdoor; we gave them a lot more credit for tradecraft than that.

People knew there was something shady about Dual EC almost from the jump. It is a random number generator that works by encrypting random state with a public key for which the private key is undisclosed. It's obviously weird.
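
Spelled out, the structure that made it obviously weird (per Shumow and Ferguson's 2007 analysis, roughly paraphrased, with P and Q the fixed curve points from the standard) is:

  each step:  s_new = x(s * P);  output = x(s_new * Q), with the top bits truncated
  the worry:  anyone who knows d with P = d * Q can guess the truncated bits
              (~2^16 tries), recover the point R = s_new * Q, and compute
              d * R = s_new * P -- whose x-coordinate is the state after the next
              update.  From that point on, every future output is predictable.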

That's not the case with SPECK and SIMON. They're mainstream designs, and, more importantly, if you can hide a backdoor in a simple ARX Feistel block cipher that can be implemented in 100 lines of code, we probably have bigger concerns than these algorithms.

Should you use SIMON and SPECK? Of course not. You shouldn't go anywhere near lightweight ciphers unless you know exactly what you're doing, and if you know exactly what you're doing, there are less politically controversial lightweight ciphers to use. The problem here isn't that the world is being deprived of SIMON and SPECK; it's that we're getting too practiced at turning our brains off.


Hi Thomas, I have a conceptual question.

In the past there was a common idea that NSA's classified research might be a decade or more ahead of the academic world, even in an era when the academic world had gotten interested in crypto in earnest, and that they might know of entire classes of vulnerability that other people didn't. Probably the clearest precedent in support of this concern was differential cryptanalysis.

Recently I've heard more of a suggestion that we understand crypto dramatically better than we used to, that cryptanalysis that appears to be hard generally is hard, and that spy agencies have been migrating largely to side-channel attacks and exploitation of vulnerabilities (and maybe also supply-chain attacks). (The weak Diffie-Hellman thing is apparently not a counterexample because there was open literature giving appropriate defensive guidance about that for many years.) Famously Snowden said that "crypto works" and reportedly leaked very little information about novel cryptanalysis, although there's also the counterargument that he might not have access to the relevant compartments to know how crypto doesn't work.

I've even heard the claim that most of the unknowns for pure cryptanalytic attacks are in some way now known unknowns. At least, academic understanding of the mathematical problems has really matured significantly.

What do you think about this question? If someone said "but what if NSA has the next mathematical breakthrough akin to differential cryptanalysis, which Nadia Heninger is only going to discover in 2027?", would you say "that's really implausible nowadays given the maturity of our understanding of the math here"?

(And I know Nadia mainly works on number theory and algebra rather than block ciphers.)


This is a carefully written question that deserves a better response than I can give it. My basic answer is, "I don't know". I can give you some hunches, but you should be aware that they're based on the same kinds of first-principles reasoning that I'm so skeptical about in other people's HN comments. So:

* I have been very unimpressed with the quality of NSA "TAO's" tooling, and, when I have conversations with people closer to NSA offensive cyber stuff than I am, what shakes out generally is that the people at the tip of that particular spear tend to be super young people in basically their first ever software job.

* But Stuxnet was sort of impressive.

* But then, what was impressive about Stuxnet was domain knowledge about a particular piece of industrial equipment, not hardcore computer science. The software engineering details of Stuxnet were kind of unimpressive.

* But there's Flame, which is impressive in a CS kind of way.

* NSA's original advantage in these kinds of systems probably stems largely from the fact that they were a monopsony buyer of cryptographic talent for many decades. You'd expect there to be a period of catching up.

* But cryptography is now one of the better known pipelines for applied mathematics research, particularly for people who cross over between math and CS, and there's a lot of those people, so it's hard to see why NSA's advantage would be sustainable over the long term.

* But also NSA does have a sustainable advantage in the kinds of cryptanalytic work that can only be done with massive, specialized compute resources, and for all I know when you can casually conduct research on gigantic ASIC clusters as easily as I can fire up Sage and generate a curve point, you learn a whole bunch of stuff that is broadly applicable.

* But academics get to collaborate directly with everyone in their field and NSA not so much, which is a mitigating factor.

I think it's very healthy that we assume that NSA has space-alien capabilities. It adds rigor to our security models. And again I don't think we should use things like SIMON and SPECK.

I just don't think we have to stop talking about the engineering aspects of SIMON and SPECK simply because they came from the NSA.


For most people the reaction to something proposed by a spy agency is to immediately assume there is an ulterior motive. Your rational explanation helps control this thought process and is much appreciated.


> I have been very unimpressed with the quality of NSA "TAO's" tooling, and, when I have conversations with people closer to NSA offensive cyber stuff than I am, what shakes out generally is that the people at the tip of that particular spear tend to be super young people in basically their first ever software job.

Are you trying to find out who the TAO employees on HN are? They'll be itching to defend themselves.


Just add random data so every encrypted message has to be brute forced, and then you'll see the NSA applying for more Bluffdales, a bigger budget, and more side-channel attacks, like the Intel ME.


One thing that tends to get overlooked in these discussions is what being 'a decade ahead' really means. Suppose you go back to 2007. What ciphers are you able to break now that you couldn't then?

In the past decade, I can think of 2 sort of new general cryptanalytic attacks: invariant subspace attacks, and the division property. The division property didn't really break anything; it claimed a full break of MISTY1 with complexity 2^70, in case you happen to have the entire codebook already. It doesn't work well mostly for the same reason cube attacks haven't: most cipher designers know how dangerous a low algebraic degree can be.

The invariant subspace attack has been more successful, mainly in the lightweight space, mostly because it exploits the symmetries that tend to make a design smaller and elegant. But once again, against vetted ciphers it has not done so well.

So let's posit the NSA does have another couple of attacks in hand we don't know about. Chances are they're not going to be very useful. Do they specifically affect SIMON and/or SPECK? It would require an intersection of conditions that seems very implausible, and it seems tricky to have it affect both designs at the same time without being noticeable. But I guess we'll know in 2027.

On another note, if I were going to make a cipher with a hidden weakness to dupe the world into using, I probably wouldn't go with a block cipher---literally the most heavily analyzed kind of primitive in the public sphere. I would probably go with a stream cipher, or a stream-like dedicated authenticated cipher, whose security is much less studied than that of block ciphers and which can still be used in most places a block cipher would be.


As an engineer and not a researcher, are invariant subspace attacks worth digging into? Is there a better starting point than the PRINTcipher paper?


Probably not, I don't know. There's a better paper from a couple of years ago [1], which even manages to include code [2]. One of the attacks on NORX [3] was essentially exploiting a bigger-than-expected invariant subspace, it might be easier to get the gist of it that way.

By the way, I have the feeling that these attacks are more the consequence of these sort of designs becoming more popular than any particular breakthrough in cryptanalysis. For example, the symmetry properties of the AES round were already well known long ago, but it wasn't until people started taking the AES round and building primitives out of it without adding symmetry-breaking constants that this became a problem.

[1] https://eprint.iacr.org/2015/068

[2] http://invariant-space.gforge.inria.fr/

[3] http://dx.doi.org/10.13154/tosc.v2017.i1.156-174


Referred from your reply to my comment [0]: algorithms can be backdoored by virtue of a novel technique to defeat them that you have not disclosed and that has not otherwise been discovered yet. We are constantly adopting and discarding encryption algorithms that have not withstood the test of time.

If someone has gotten a jump on research and found a novel attack against their math, but the math looks good enough to convince others to use, that is an enormous advantage.

[0]: https://news.ycombinator.com/item?id=15305331


And my rebuttal to that notion is that if the NSA has secret math that breaks a simplified, stripped down standard ARX/Feistel design, we probably have bigger problems than the NSA's preferred lightweight cipher. I'm not fond of citing Schneier, but he's an authority to a lot of people here, and look what he has to say about Speck: that it's basically an improved version of Threefish.

The "unknowable secret math" argument works both ways. As I said upthread: if you believe this, how do you rule out the possibility that ARX designs are the ones NSA can't break, that they have secret math that only works against iterated ciphers built solely on bitwise primitives, and that they published this particular cipher --- something they rarely do! --- precisely to create the kind of suspicion we're seeing on the thread?

If you want to play Kremlinology instead of talking about engineering, arguments like that are fair game too. I'd rather rule both of them out.


Of course, this could be NSA's test of community trust and an attempt to gain some goodwill. Surely they know they are not the most popular kid on the block... :)


> if you can hide a backdoor in a simple ARX Feistel block cipher that can be implemented in 100 lines of code, we probably have bigger concerns than these algorithms.

It wasn't a backdoor, but doesn't this sound a lot like SHA-0? The NSA fixed the mistake and published SHA-1, but they didn't say why. They might as well have designed the ciphers with a similar issue and kept it hidden.

SIMON and SPECK have been analyzed by the wider cryptography community, but in principle something like the above wouldn't be surprising from the NSA...


I'm reasonably confident, as I assume you are, that if NSA actually has a practical weakness in SPECK, they're not going to disclose it, despite having promoted it as a standard.


> That's not the case with SPECK and SIMON. They're mainstream designs, and, more importantly, if you can hide a backdoor in a simple ARX Feistel block cipher that can be implemented in 100 lines of code, we probably have bigger concerns than these algorithms.

Isn't the bigger concern that the NSA may be proposing these because they already know how to break them, and not necessarily that they'll sneak back doors into implementations?


I'm not really distinguishing between NSA knowing how to break a conventional block cipher and NSA having snuck a backdoor into it. To me, the implications are basically the same either way, as is my rebuttal.


Can you point me to some other publicly analyzed lightweight ciphers that are better choices than speck/simon? From my review of that space I got the impression that speck and simon are the more reasonable (and probably also more reviewed) designs.


If we have a fairly robust system for detecting poison in the food, and we successfully detected the poisoned food the last time they tried to poison us, and the food looks really tasty because the chef was pretty good, even if he was a poisoner...... I might eat it.


I'd eat it for the same reason I prefer more used open source packages: my threat model ranks "not enough smart people looked at this" a lot higher than "only a few people in the world know a secret about this."


I agree the biggest worry isn't a hidden attack on the high parameter modes.

The weak modes, though! A 32-bit block(!), 64-bit key, and 22 add/rotate/XOR rounds is just an excuse to say you have int'l standard crypto despite it being painfully weak.
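
To put a rough number on "painfully weak": with a 32-bit block, the standard birthday bound says block collisions (which leak plaintext relationships in common modes like CBC) become likely after only about

  2^(32/2) = 2^16 blocks * 4 bytes/block = 256 KB of data under a single key

and a 64-bit key is at least uncomfortably close to brute-forceable for a well-resourced attacker.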

If we see demand for a standard cheaper than hardware AES or software ChaCha20 (or 12 if you live dangerously), we should pick one in an open process like the many we've had.


The small block and key sizes aren't about export controls (obviously, since the standard comes with 128/256); it's for platforms that can't afford serious block and key sizes.


Yeah, "int'l standard" in my comment referred to standardization by the International Standards Organization, not the old "international" export-strength keys. I'm saying I think standardizing weak params is a bad idea, even if strong ones are offered too; it gives folks who could afford sane block/key sizes an excuse to cheap out and use something bad and say "it's standard."

To the extent industry is calling for cheaper crypto, there are some existing civilian candidates (like the eSTREAM hardware profile though I haven't seen much interest in it), or we could have an open competition, responding to what civilian security folks say they're missing. You could definitely get, e.g. gate area down vs. AES or ChaCha without sacrificing sane block/key sizes.


If AES is a lightweight cipher now, you should probably inform all the academics working on lightweight cipher designs that what they're doing is pointless.


My phrasing may have been unclear: by "You could definitely get, e.g. gate area down vs. AES or ChaCha without sacrificing sane block/key sizes" I mean you can make ciphers that achieve gate areas lower than AES's or ChaCha's without sacrificing decent block/key sizes. ("Down vs." was not the clearest; maybe it sounded like "down for".)

As you note, and as I mentioned above, some standards processes have already picked some ciphers meant to be better for constrained environments than those two. The ones I know about (as a nonspecialist!) still use higher parameters than the smallest versions of Simon/Speck. PRESENT, accepted by the same committee, has an 80-bit key and 64-bit block, for example, and eSTREAM has its 80-bit hardware profile. That's the "smaller primitives that still have sane-ish params" I was talking about. The smallest Simon/Speck variants go lower, and seem like skating too close to the edge to recommend and standardize, even if there's somewhere they're the right/only fit.

After posting the comment you replied to, I found a PDF from IAD that suggested that the 32/64 block/key size was never going to be submitted to the ISO committee because they have a minimum key size of 80 bits, and 48/96 was canned out of technical concern about the 48-bit block. (Posted it as a self-reply (a...great-uncle? of this comment), because it was too late to edit.) What they wrote actually makes dropping the weaker versions sound a lot less contentious than the Reuters story does, though of course it's one party's version of the story. And the continued pushback from some countries to the 128/256 versions clearly has other causes.

If you mostly just want to dunk on me, I'm sure I've said something wrong here, but not wanting to standardize very low parameters at least isn't the same as just not being aware of lightweight crypto.


Since I can't edit, noting here that the minimum key size required here was 80 bits, so apparently the 32-bit-block version was never really considered. Yay! (Search for 'ISO' in https://iadgov.github.io/simon-speck/papers/Algorithms-to-Su...) The Reuters story made it sound a bit like this was a topic of debate, when the IAD doc makes it sound like more of an assumption from the start.

The grump in me still sees a pretty narrow niche for this sort of algo (like, maybe most considering it should just be using AES?), but whatever.


I thought the weak modes were there to make it easier to cryptanalyze.


My understanding is what tptacek said: they were proposed for real-world use, with IoT and other small/embedded platforms as the justification.


I don't see why they would bother with them when things like ChaCha20 (which is a popular and well-audited algorithm) already exist.


ChaCha20 isn't a lightweight stream cipher.


This is a tangent but...

> SIMON and SPECK are both pretty straightforward block cipher designs. You can implement them in less than 100 lines of code.

If I super promise to never let the code off my computer, can you point me to resources to do that for educational purposes?


Google [filetype:pdf SIMON SPECK].

If you've never implemented a cipher before, they're both pretty good first ciphers to implement. They're simple without being unrealistically simple, like RC4. The paper has test vectors, so you can make sure what you come up with is actually correct.


One of their talks has a tweet sized implementation, if you want to look at that. If you don't want to be "spoiled" be careful reading the original paper because it has Speck in C IIRC. They're also good algorithms if you want practice with SIMD instructions.

http://csrc.nist.gov/groups/ST/lwc-workshop2015/papers/sessi...


(i hope it's obvious I meant "instead of ADD", not "instead of AND").


This article is full of choice quotes. The TL;DR is that the NSA through its own actions has violated people's trust:

“I don’t trust the designers,” Israeli delegate Orr Dunkelman, a computer science professor at the University of Haifa, told Reuters, citing Snowden’s papers. “There are quite a lot of people in NSA who think their job is to subvert standards. My job is to secure standards.”

Chris Mitchell, a member of the British delegation, said he supported Simon and Speck, noting that “no one has succeeded in breaking the algorithms.” He acknowledged, though, that after the Dual EC revelations, “trust, particularly for U.S. government participants in standardization, is now non-existent.”

“How can we expect companies and citizens to use security algorithms from ISO standards if those algorithms come from a source that has compromised security-related ISO standards just a few years ago?” - Christian Wenzel-Benner.

These are coming from Israel, Britain, and Germany - all close US allies.

I'm not a crypto guy, but I looked at Speck. The code is really clean and efficient. If it's secure that's really awesome. But how is anyone supposed to trust it given the past actions of its creator?


They have a budget denominated in the tens of billions of dollars. How do you know that they didn't publish a simple, clean, efficient ARX cipher design specifically to get people to avoid a design that they knew they'd have trouble beating? They can clearly afford that level of cleverness.


Keep in mind that the past actions of the NSA also include strengthening DES and strengthening SHA-1. The NSA also designed Skipjack for public use (in the Clipper chip), which was eventually declassified, and which was apparently designed in good faith and remains secure (up to the 80 bit level it was designed for).

Also keep in mind that the DUAL_EC backdoor was discovered within a year of its publication; SIMON and SPECK were published years ago and nobody has found or suggested a backdoor (plenty of people have been analyzing the ciphers). ARX designs have been proposed by plenty of other cryptographers, so nothing about the SIMON or SPECK designs would immediately raise eyebrows other than the fact that the NSA proposed them.

Personally, I doubt that the effort to subvert standards involves backdoors, which are pretty hard to hide and pretty easy to avoid (DUAL_EC is the only credible candidate for a backdoor, it was discovered quickly, and it was not widely used). It seems more likely that the effort involves (this is all speculation):

1. Making standards more complex than necessary.

2. Making standards more sensitive to bad randomness (e.g. DSA signatures; a worked example of that failure mode follows at the end of this comment).

3. Making standards where constant-time implementations are harder or slower.

In other words, they have pushed for standards that are harder to securely implement and easy to use insecurely. Why bother with backdoors when you can exploit common and easy-to-make mistakes? Given their expertise in spotting and exploiting these kinds of bugs, the NSA can probably satisfy the "information assurance" mission by vetting / correcting implementations used by the government, at least for the most important government secrets (most government communication would just use COTS; of course, most government communication is of limited value to foreign governments).
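
As a worked illustration of point 2 (this is textbook (EC)DSA math, nothing specific to any particular standard here): if a DSA nonce k is ever reused, two signatures are enough to recover the private key x.

  Two signatures (r, s1) and (r, s2) over message hashes h1 and h2, made with the
  same nonce k, satisfy
    s1 = k^-1 * (h1 + x*r)  mod q
    s2 = k^-1 * (h2 + x*r)  mod q
  Subtracting gives k = (h1 - h2) / (s1 - s2) mod q, and then
    x = (s1*k - h1) / r  mod q.

That's the sense in which a standard can be "sensitive to bad randomness": a single repeated nonce is catastrophic, and nothing in the signature itself warns you.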


Agreed. My non-expert suspicion is that Speck is secure and simple. The point is that we can no longer assume that's the case coming from the NSA and people in other countries don't trust them.

One could get out their tinfoil hat (as another poster did) and suggest that the allies are publicly questioning it so people don't adopt simple and secure encryption. After all, the result of their vote is that it does not become a standard.

In the end we have to go by actual analysis. As it should be.


>But how is anyone supposed to trust it given the past actions of its creator?

I would have presumed a lack of trust was the default mentality when evaluating any security based algorithm -- after all any CS professor could be in the NSA's pocket.


This is a short 2013 Bruce Schneier article about SIMON and SPECK: https://www.schneier.com/blog/archives/2013/07/simon_and_spe...

They're "lightweight" block ciphers; SIMON is designed for optimal performance in hardware implementations, and SPECK for software. According to the NSA PDF, "The relatively new field of lightweight cryptography addresses security issues for highly constrained devices." Indeed, SIMON is about a third of the hardware gate requirement of AES, and SPECK is about 15% the number of flash bytes. Some of the space savings is from skipping ciphertext/plaintext whitening.


I hadn't heard of whitening before: sounds like it's a simple change of the plaintext prior to full encryption; in the example at Wikipedia it's an XOR operation.

https://en.m.wikipedia.org/wiki/Key_whitening

Presumably something like rot13 would count as whitening? Also, assuming the name comes from analogy with "white noise", ie reducing signal quickly, cheaply?


Key whitening is about expanding the key space, not changing the plaintext prior to encryption (which might be called data whitening).

Say you encrypt a message M with key K1 and get encrypted message E: encrypt(M, K1) = E. An attacker might brute force your encryption, if your key space is small this might be an issue. So what you can do is XOR the message with another key K2 before encryption and a third key K3 after encryption to get: E = encrypt(M XOR K2, K1) XOR K3. Now the attacker has three keys to brute force. (though I think the actual effective key size is between 2x and 3x the length if the attacker knows the message distribution)
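
A minimal sketch of that construction in C (DES-X does essentially this), assuming some 64-bit block cipher core exists; core_encrypt here is a placeholder name, not a real library function:

  #include <stdint.h>

  /* assumed primitive: any 64-bit block cipher keyed by k1 */
  uint64_t core_encrypt(uint64_t block, uint64_t k1);

  /* key whitening: XOR a pre-whitening key in before the core cipher
     and a post-whitening key in after it */
  uint64_t whitened_encrypt(uint64_t m, uint64_t k1, uint64_t k2, uint64_t k3)
  {
      return core_encrypt(m ^ k2, k1) ^ k3;  /* E = encrypt(M XOR K2, K1) XOR K3 */
  }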

I'm not an expert but I imagine XOR is popular because it's a basic logical operator and so has gate level hardware implementations.


I suspect that XOR is popular because of the property that XORing data cannot reduce its entropy: 128 truly random bits XOR zero are still 128 truly random bits. The benefit you get if you XOR is that if there are mathematical correspondences in your data, you are likely to hide them by XORing with some key.


It seems like the compromise (dispose of the more lightweight versions while retaining the "most robust" version) still leaves the possibility of an NSA-known vulnerability.

Maybe the "most robust" version is harder for the NSA to break, maybe the NSA doesn't know of a way to break it, or maybe the NSA just proposed the lightweight versions so they'd have room to negotiate, and have just achieved exactly what they hoped.

I'm glad other countries are suspicious of the NSA, but I'm not sure that distrust goes far enough.

Bruce Schneier's [thanks tptacek] 2013 opinion on the presence of an NSA-known backdoor: "maybe, but I don't think so."

His post today is also interesting, saying the ISO "rejects" (which seems a bit stronger than the source article): https://www.schneier.com/blog/archives/2017/09/iso_rejects_n...

He concludes [2017]: "I don't trust the NSA, either."


It's "Schneier", not "Schneider".

This is one of those things where, if NSA can break 128/128 SPECK, we probably have bigger problems than SPECK.


If SIMON and/or SPECK are NOBUS-breakable, then NSA probably has an unknown cryptanalytic technique that would likely threaten other widely used ciphers. Certainly possible but unlikely. However, that's not the issue IMO.

Instead of blindly supporting or rejecting an author, we should insist on public crypto competitions which are the best route for obtaining well-tested, studied, and trusted ciphers.

There are decent correlations between:

  * crypto that's been de jure standardized before deployment and bad crypto (DUAL EC, DNSSEC, etc.)

  * crypto that's been through a public competition before deployment and good crypto (Salsa20, Argon2)

  * crypto that's been de facto standardized and good crypto (Curve25519, Signal protocol, etc.)

As an aside, high-level APIs like in NaCl, libsodium, libtls (from LibreSSL), etc. are a new, in-progress form of de facto standardization. It would be hard to introduce a new low-level general-purpose crypto library and attract major adoption.


Nacl/Sodium and libtls aren't suitable for small-footprint computing. That's probably why there's so much interest in lightweight ciphers: it's hard to get a lot of real-world attention for a new block cipher design targeting general-purpose computers, but there's no (de facto or de jure) standard lightweight cipher.


Have you had a chance to look at libhydrogen?

https://github.com/jedisct1/libhydrogen


Yeah, I'm hoping for a lightweight cipher competition but I know it would take tremendous resources. What do you think of GIMLI? Just speaking as a bystander, Hamburg's recent cryptanalysis seemed very strong especially given it was the first one, and so soon after GIMLI's introduction. Seems we're a long way from de facto standardization of anything lightweight.


Why do they need to use encryption mechanisms that were invented by an organization which has been known to abuse these mechanisms?

"The Americans distributed a 22-page explanation of its design and a summary of attempts to break them"

That doesn't really sound like a peer review :)


SIMON and SPECK have received peer review:

https://scholar.google.com/scholar?hl=en&q=speck+cipher&btnG...


Peer review isn't a panacea. It's totally possible that the NSA knows of a vulnerability or class of vulnerabilities that the rest of the peer review community doesn't. The cipher could be bad but still pass peer review (or it could be better than publicly known, like DES was).

In any case, it seems like the NSA was dragging its feet in trying to fully explain the designs (from the OP):

> Finally, at a March 2017 meeting in Hamilton, New Zealand, the Americans distributed a 22-page explanation of its design and a summary of attempts to break them - the sort of paper that formed part of what delegates had been seeking since 2014.

Given their more recent history, I'd be mistrustful. It seems to me that the design of a good cipher should be done totally in the open, so any vulnerabilities are inadvertent. This includes explaining the design and the decisions and trade-offs that brought you there.


> It seems to me that the design of a good cipher should be done totally in the open, so any vulnerabilities are inadvertent. This includes explaining the design and the decisions and trade-offs that brought you there.

Implausible, considering that the mathematical attacks the NSA is aware of, and designs its ciphers to resist, are still classified and are currently being used by the NSA against older-generation ciphers.

See history of differential cryptanalysis and DES design.

https://en.m.wikipedia.org/wiki/Differential_cryptanalysis


This is speculation, based upon a time when the academic cryptanalysis community was pretty much nonexistent.

I find it exceedingly unlikely that the NSA is years ahead of public efforts on this front in 2017.


I find it plausible due to the asymmetry of information flow.

Everything known by the public is also known by the NSA, but the NSA only tells the public what it wants them to know.

That practically guarantees that there is a lot they know that we don't.

Of course that proves nothing about this specific instance, and measuring "how far ahead" in years is hard, but I think it is likely the NSA has some extremely sophisticated techniques that we know nothing about.


The devil's advocate to that is offered by tptacek in another subthread: that NSA's efforts are simultaneously hurt by the fact they don't openly engage with the academic cryptography community.

Whereas the public community is made stronger by its interactions.


What aspects of the design do you feel needed explanation? In what sense are these not mainstream cipher designs?


I'm not a cryptographer, so I only know what's in the OP. However, it's clear important design information the ISO cryptographers wanted was being withheld by the NSA for at least 3 years.


I stand corrected. Thank you.


Instead, you can have one designed by a different group of cryptographers... Would you prefer Russian, Chinese, or Israeli? Is a covert shill working for one of them preferable?

They are all going to be peer reviewed before becoming a standard like AES was reviewed, and they are much less complex. I view political subversion of the technical process as a bigger issue.

Paranoia is fine, but you have to pick your battles.


Of course. Everything can both be used and abused. And maybe the inventor can abuse it the most of all.

It seems though, regarding the complex dependency on encrypted information, abuse can have epic results in a very short time.

Everybody can purchase a huge analysis network in minutes and have the information crunched in almost any way possible. Maybe paranoia, but quite possible as well.


Every time I hear about "lightweight" algorithms being standardized for IoT and such, I worry about their security. There is usually a reason why the "lightweight" variants weren't adopted for PCs, too, in the first place.


What's it take for an organisation to get kicked out of this standardisation body?

You'd think deliberately compromising the goals of the body in such a cavalier fashion would do it.


https://eprint.iacr.org/2017/560.pdf

Notes on the design and analysis of Simon and Speck

Ray Beaulieu Douglas Shors Jason Smith Stefan Treatman-Clark Bryan Weeks Louis Wingers

June 8, 2017

Note. This document was prepared by the designers of Simon and Speck in order to address questions regarding the design rationale and analysis of the algorithms.


I can understand being suspicious of the NSA after the whole Dual_EC_DRBG fiasco. However, these designs are not unknown throughout the industry (ARX having gotten some heat lately from the Keccak people). Are there some technical reasons these designs should be disallowed aside from "We don't like the NSA"?


My technical (but probably mostly insignificant and unexploitable) issue with speck/simon is with the key schedule, which has somewhat slow diffusion with key length >2 words.


Also, the key schedule is trivially invertible (I intended to include that fact in my original comment, but wasn't sure of it; now I am).

On the other hand, this seems like a deliberate design choice in order to remove any unexplained constants from the design (the counter in the key schedule seems "explainable"). An alternative with the same design would be to supply the key into the key schedule as subkeys (cyclically or so), which would then mean that the initial state is some kind of unexplained constant (there is a good reason why {0,0} is not a good initial state, and given the fact that it comes from the NSA, any other value will seem suspect).

Edit: the fact that the key schedule is invertible does not decrease the security as long as it is used as a block cipher (in fact, at this level of analysis it slightly increases confidence in the design, as long as it is only meant as a block cipher). On the other hand, it means that insecure constructions of a hash function from the block cipher are probably not only theoretically insecure but readily breakable by the NSA. (I wouldn't be surprised if this was the NSA's motivation, because for many IoT applications one is more interested in authentication than in confidentiality.)


Do you buy a shiny new car from a dealer who knowingly sold you a lemon?

What record of inspections and promises would convince you to buy?

The NSA provides that record https://eprint.iacr.org/2017/560.pdf

We currently know of no technical reason to reject the ciphers.


This gets back to how innovation is often advanced under military/security circumstances -- it's unfortunate that so much research money flows from those quarters, such that the intent of the researchers is always a tiny bit questionable.


DJB actually made a post on twitter about Simon/Speck https://twitter.com/hashbreaker/status/719884030796177409


Why should one nation have the power to set a single global standard for anything?


What do you mean by a global standard?


[flagged]


We ban accounts that comment like this, so would you please (re-)read https://news.ycombinator.com/newsguidelines.html and comment in the spirit of this site from now on? The idea goes like this: if you have a substantive point to make, make it thoughtfully; if you don't, please don't comment until you do.


How about the NSA make all US government agencies use their Simon and Speck algorithms, and we (the rest of the world) see in a few years what the outcomes (and their so-called secret documents) look like.


> not because they were good encryption tools, but because it knew how to break them

Both NSA and CIA had their crown jewels stolen and exposed, yet they assume that states like China and Russia (to name a few) don't have the ability to find these bugs. Heads should roll



