Hacker News
In Cryptography, Advances In Program Obfuscation (2014) (quantamagazine.org)
74 points by alistproducer2 on May 9, 2016 | 29 comments



For those wondering why you haven't yet seen practical applications of this research, this answer[1] on Stack Exchange does a decent job of explaining:

> What must be understood is that in all these constructions, each circuit gate must map to an instance of Gentry's fully homomorphic encryption scheme, and at every clock cycle for the obfuscated circuit, all gates must be processed, regardless of whether they are "active" or not in the circuit (this is a big part of why the obfuscation theoretically works: it does not reveal active gates, by always making them all active). This[2] article gives performance results: on a rather big PC, we are in for minutes of computation. That's for each gate in the obfuscated circuit, and for each clock cycle.

> There are probably millions or even billions of gates in the envisioned circuit, given the setup of functional encryption: the "obfuscated circuit" must include asymmetric decryption and validation of zero-knowledge proofs. So now we are talking about using all the computers currently running on Earth, all collaborating on the computation, and they might make non-negligible progress towards running one instance of functional encryption within a few centuries.

[1] http://security.stackexchange.com/a/50972

[2] http://eprint.iacr.org/2010/520.pdf
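
To give a sense of scale, here's a rough back-of-envelope version of that estimate in Python; every figure below is an illustrative assumption loosely taken from the quote, not a measurement:

    # Back-of-envelope for the quoted cost; every figure here is an
    # illustrative assumption, not a measured number.
    secs_per_gate = 300      # "minutes of computation" per gate, per clock cycle
    gates = 10e9             # "billions of gates" in the envisioned circuit
    cycles = 1e6             # assumed clock cycles for one run
    machines = 1e9           # roughly "all the computers currently running on Earth"

    total_cpu_seconds = secs_per_gate * gates * cycles
    years = total_cpu_seconds / machines / (3600 * 24 * 365)
    print(f"~{years:.0f} years for one run")  # on the order of a century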


Another reason why you won't see practical applications for a while is that we aren't really sure yet whether indistinguishability obfuscation exists at all. The first proposed construction has recently been broken [1], and while other candidates do exist, they are mostly based on similar principles, so most experts wouldn't bet a lot on their long-term security.

This is a rapidly evolving field, though, so I'm cautiously hopeful that some genuinely novel ideas will emerge soon to overcome the current stumbling blocks.

[1] https://eprint.iacr.org/2016/147


Let me save you some time. Here's the beef. The rest of the article is prognostication and the names of many famous research institutions.

The team’s obfuscator works by transforming a computer program into what Sahai calls a “multilinear jigsaw puzzle.” Each piece of the program gets obfuscated by mixing in random elements that are carefully chosen so that if you run the garbled program in the intended way, the randomness cancels out and the pieces fit together to compute the correct output. But if you try to do anything else with the program, the randomness makes each individual puzzle piece look meaningless.

This obfuscation scheme is unbreakable, the team showed, provided that a certain newfangled problem about lattices is as hard to solve as the team thinks it is. Time will tell if this assumption is warranted, but the scheme has already resisted several attempts to crack it, and Sahai, Barak and Garg, together with Yael Tauman Kalai of Microsoft Research New England and Omer Paneth of Boston University, have proved that the most natural types of attacks on the system are guaranteed to fail. And the hard lattice problem, though new, is closely related to a family of hard problems that have stood up to testing and are used in practical encryption schemes.
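
The "randomness cancels out" idea can be illustrated with a toy additive-masking sketch in Python. To be clear, this is not the actual multilinear jigsaw construction (which relies on multilinear maps), only the cancellation intuition:

    import secrets

    q = 2**61 - 1  # a prime modulus (toy parameter)

    def mask_pieces(pieces):
        # Add random masks that sum to zero mod q, so combining ALL the
        # pieces in the intended way cancels the randomness, while any
        # individual masked piece is uniformly random on its own.
        masks = [secrets.randbelow(q) for _ in pieces[:-1]]
        masks.append((-sum(masks)) % q)
        return [(p + m) % q for p, m in zip(pieces, masks)]

    pieces = [3, 1, 4, 1, 5]          # stand-ins for program pieces
    garbled = mask_pieces(pieces)

    # Intended evaluation: the randomness cancels and the correct value emerges.
    assert sum(garbled) % q == sum(pieces) % q

    # Any single piece looks meaningless (uniformly random) by itself.
    print(garbled[0])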


I can't wait to find exploitable side-channels in the first-to-market implementations of this idea.

Lattice-based crypto is all the rage in snake-oil marketing (right after one-time pads).


Lol, lattice-based crypto has solid theoretical foundations. Finding the correct parameter ranges for applications is difficult, and that's why you aren't seeing widespread usage yet, but it's certainly not snake oil.

And don't get your hopes up about seeing a practical implementation of iO any time soon; it's thoroughly and utterly impractical at the moment, and will remain so for the foreseeable future.


> Lol, lattice-based crypto has solid theoretical foundations.

Yes, it was (is?) one of the candidates for post-quantum public-key encryption. But that doesn't stop terrible projects from claiming to have it in their marketing today.


Lattice-based crypto seems to be on the way out already: most papers I see exploring cutting-edge crypto are based on ring-LWE, which is a different problem that Regev proved to be at least as hard as worst-case lattice problems such as SVP. But it's not actually a lattice problem itself.


LWE and SIS-based cryptography absolutely is lattice-based crypto, and I can't really imagine what other kind of scheme you would deem more deserving of the name than those. Even the epitome of classical lattice-based crypto, Ajtai-Dwork, is proved secure under a problem (hidden hyperplane) which is "not actually a lattice problem itself", at least not anymore than LWE and SIS. But being based on a problem equivalent to worst-case lattice problems under polynomial-time reductions strikes me as the strongest possible sense in which you could claim a scheme to be lattice-based.
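
For concreteness, here's what a plain LWE instance looks like, with toy parameters nowhere near secure. Note that no lattice appears explicitly; the lattice connection comes from the worst-case reduction:

    import numpy as np

    rng = np.random.default_rng(0)
    n, m, q = 8, 16, 97                  # dimension, samples, modulus (toy sizes)

    A = rng.integers(0, q, size=(m, n))  # public uniformly random matrix
    s = rng.integers(0, q, size=n)       # secret vector
    e = rng.integers(-2, 3, size=m)      # small error terms
    b = (A @ s + e) % q                  # the LWE samples (A, b)

    # Recovering s from (A, b) is the LWE problem. Without the error e it
    # would be simple Gaussian elimination; the small noise is what makes
    # it (conjecturally) hard, via Regev's worst-case lattice reduction.
    print(A[0], b[0])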


I meant that ring-LWE is not the same problem as SVP. Perhaps I have misunderstood the algorithm, but I do not see any lattices in the description of ring-LWE. The fact that it's equivalent in hardness doesn't, to my mind, imply it's "lattice based".


What's wrong with one-time pads?


Key exchange, plus the fact that the key is the same size as the message and can only be used once. This means that securely exchanging keys is as hard as securely exchanging the message itself -- if you had a way to exchange the keys securely, you might as well exchange the message itself and be done with it.
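
A minimal sketch of the mechanics, which also shows why the pad must be as long as the message and never reused:

    import secrets

    def otp(data: bytes, pad: bytes) -> bytes:
        # XOR with the pad; the same operation encrypts and decrypts.
        assert len(pad) >= len(data)
        return bytes(d ^ p for d, p in zip(data, pad))

    message = b"attack at dawn"
    pad = secrets.token_bytes(len(message))  # key material as long as the message
    ciphertext = otp(message, pad)
    assert otp(ciphertext, pad) == message
    # Reusing the pad is fatal: otp(c1, c2) equals the XOR of the two
    # plaintexts, which is why each pad must be used exactly once.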


Nothing's wrong with one time pads. They're as good as they've ever been, which is to say: completely impractical for use on the internet and almost all other computing applications.

Also, no forward secrecy.


>Also, no forward secrecy.

My understanding of OTPs is that they must be used only once. As such, they offer the same PFS as all the schemes I know of. PFS does not guarantee the secrecy of a specific message; it guarantees that if you are able to crack one message, you can't use the key to crack others. Therefore, if you use an OTP only once, it gives you the same guarantees.



Am I wrong, or is the phrase "a type of mathematical protocol for convincing someone that something is true without revealing any details of why it is true" incorrect? Shouldn't it be "a type of mathematical protocol for allowing someone to verify that something is true without revealing any details of how it is calculated to be true"?


No, that's what a zero-knowledge proof is: at the end of the protocol, you are convinced that the statement is true, and you learn nothing else about the "proof".

In NP terms, you learn that the NP instance is a yes-instance (e.g. that a 3SAT formula is satisfiable), but you learn nothing about the witness (e.g. the satisfying assignment to the formula's variables).
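
The classic toy illustration uses graph 3-coloring (another NP-complete problem) rather than 3SAT. Here is one round of the standard protocol, sketched with hash-based commitments:

    import hashlib, secrets

    edges = [(0, 1), (1, 2), (0, 2), (2, 3)]
    coloring = {0: 0, 1: 1, 2: 2, 3: 0}      # the secret witness

    def commit(value):
        nonce = secrets.token_bytes(16)
        return hashlib.sha256(nonce + bytes([value])).digest(), nonce

    # Prover: randomly permute the 3 colors, then commit to every vertex.
    perm = [0, 1, 2]
    secrets.SystemRandom().shuffle(perm)
    permuted = {v: perm[c] for v, c in coloring.items()}
    commitments = {v: commit(c) for v, c in permuted.items()}

    # Verifier: challenge one random edge; prover opens just its endpoints.
    u, v = edges[secrets.randbelow(len(edges))]
    for w in (u, v):
        digest, nonce = commitments[w]
        assert hashlib.sha256(nonce + bytes([permuted[w]])).digest() == digest
    assert permuted[u] != permuted[v]
    # The two revealed colors are a random distinct pair, so nothing about
    # the actual coloring leaks; repeating rounds drives the soundness error down.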


So, which statement is right/better?


It's not completely clear to me what you mean by "calculated to be true", but you may want to look at the difference between computational and statistical zero knowledge. A computational ZK proof hides the witness from computationally bounded (i.e. probabilistic polynomial-time) adversaries. Statistical ZK, on the other hand, hides the witness from any adversary, even one that can carry out an unbounded amount of computation (there are slight subtleties depending on the precise security model, but that's the basic idea).


Also, why isn't "homomorphic encryption" mentioned at all since that's what this is all about?


Obfuscation is different from homomorphic encryption, and in some sense much more powerful.

With homomorphic encryption, Alice can send some secret data to Bob in encrypted form, and let Bob carry out computation on that encrypted data. However, the result of the computation remains encrypted, and only Alice's private key can decrypt the result.

By contrast, obfuscation allows Alice to give Bob a program that contains some secret information (such as cryptographic keys) in such a way that Bob can run it (without any further interaction with Alice) on any inputs of his choice, and get the result in the clear. However, he cannot learn anything about the hidden secret information other than what is revealed by the input-output pairs he has obtained.

It's not hard to see that obfuscation gives you homomorphic encryption for free (you can probably get a rough idea of how to do it based on the somewhat imprecise descriptions above), but we don't know how to go in the other direction. (Current obfuscation candidates do use fully homomorphic encryption under the hood, but they need to rely on much more than that).
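
The contrast is easiest to see as interfaces. The names and signatures below are invented purely for illustration; this is not a real library:

    from typing import Callable

    class FHE:
        """Bob computes blindly on ciphertexts; only Alice can decrypt."""
        def encrypt(self, public_key, data): ...               # Alice
        def evaluate(self, public_key, f: Callable, ct): ...   # Bob: runs f on encrypted data
        def decrypt(self, private_key, ct): ...                # Alice again; Bob never
                                                               # sees the result in the clear

    class Obfuscator:
        """Alice hands Bob a garbled program; Bob runs it himself on any
        input and gets plaintext output, but learns nothing about the
        embedded secrets beyond input/output behaviour."""
        def obfuscate(self, program_with_secrets: Callable) -> Callable: ...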


Title: Perfecting the Art of Sensible Nonsense

Date: 2014

Previous discussion: https://news.ycombinator.com/item?id=7153657


This is precisely right: "Similarly, a black box obfuscator would provide a way to instantly convert any private cryptography scheme to a public one that could be performed over the Internet by strangers. In a sense, obfuscation is the key to all cryptographies."

Ciphertext is a higher-order, or simply, highly ordered, deterministically derived version of plaintext.


> Ciphertext is a higher-order, or simply, highly ordered, deterministically derived version of plaintext.

Sorry, but what in the fuck are you even talking about?


It's a shame that so little philosophical thought has been given to cryptography. In fact, the very concept of order appears to have been lost in all contemporary discussions. Consider Leibniz's discussion and you may see some shocking origins: http://www.labirintoermetico.com/12ArsCombinatoria/Leibniz_G...


I was equally baffled.


Even if that made sense, it would be wrong: secure encryption (IND-CPA) needs either state or randomness. Most popular approaches stick to using randomness.


Surely you don't mean random. At best you mean pseudo-random, but either way, the result isn't random (because after all, you'd be left with something scrambled, not encrypted). That's the entire conceptual point of cryptography.


No, I do mean random. Block ciphers such as AES are modeled as PRFs. One cannot get IND-CPA security by just passing the message through AES à la ECB mode. Instead, one has to use randomness, in the form of a random IV as in CBC mode, to obtain IND-CPA security.

Sure, the resulting ciphertext is only computationally indistinguishable from random. But the point of IND-CPA security is that one can't differentiate between encryptions of two different messages. Adding randomness gives you this.
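
A quick demonstration of that point using the pyca/cryptography package: ECB is deterministic, while CBC with a fresh random IV is not:

    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    key = os.urandom(16)
    msg = b"sixteen byte msg"                 # exactly one AES block

    def ecb(m):
        enc = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
        return enc.update(m) + enc.finalize()

    def cbc(m):
        iv = os.urandom(16)                   # fresh randomness per encryption
        enc = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
        return iv + enc.update(m) + enc.finalize()

    # ECB is deterministic: equal plaintexts give equal ciphertexts, which
    # already breaks IND-CPA (repeated messages are visible).
    assert ecb(msg) == ecb(msg)

    # CBC with a random IV encrypts the same plaintext differently each time.
    assert cbc(msg) != cbc(msg)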


Aah, but that's the rub. The result isn't actually random; it is only "computationally indistinguishable" from random (or meets some lower threshold, since it is only in the last few decades of crypto's multi-millennia history that even this has been true). I'm not denying that pseudo-randomness (or, in very limited cases, real randomness) is used in most variants of crypto today. Rather, I am suggesting that despite this, the end result is precisely the opposite: a message so highly ordered that it is "computationally indistinguishable", merely masquerading as random.



