For those wondering why you haven't yet seen practical applications of this research, this answer[1] on Stack Exchange does a decent job of explaining:
> What must be understood is that in all these constructions, each circuit gate must map to an instance of Gentry's fully homomorphic encryption scheme, and at every clock cycle for the obfuscated circuit, all gates must be processed, regardless of whether they are "active" or not in the circuit (this is a big part of why the obfuscation theoretically works: it does not reveal active gates, by always making them all active). This[2] article gives performance results: on a rather big PC, we are up for minutes of computation. That's for each gate in the obfuscated circuit, and for each clock cycle.
> There are millions or even probably billions of gates in the envisioned circuit, given the setup of functional encryption: the "obfuscated circuit" must include asymmetric decryption and validation of zero-knowledge proofs. So now we are talking about using all the computers currently running on Earth, all collaborating on the computation, and they might make non-negligible progress towards running one instance of functional encryption within a few centuries.
Another reason why you won't see practical applications for a while is that we aren't really sure yet whether indistinguishability obfuscation exists at all. The first proposed construction has recently been broken [1], and while other candidates do exist, they are mostly based on similar principles, so most experts wouldn't bet a lot on their long-term security.
This is a rapidly evolving field, though, so I'm cautiously hopeful that some genuinely novel ideas will emerge soon to overcome the current stumbling blocks.
Let me save you some time. Here's the beef. The rest of the article is prognostication and the names of many famous research institutions.
The team’s obfuscator works by transforming a computer program into what Sahai calls a “multilinear jigsaw puzzle.” Each piece of the program gets obfuscated by mixing in random elements that are carefully chosen so that if you run the garbled program in the intended way, the randomness cancels out and the pieces fit together to compute the correct output. But if you try to do anything else with the program, the randomness makes each individual puzzle piece look meaningless.
This obfuscation scheme is unbreakable, the team showed, provided that a certain newfangled problem about lattices is as hard to solve as the team thinks it is. Time will tell if this assumption is warranted, but the scheme has already resisted several attempts to crack it, and Sahai, Barak and Garg, together with Yael Tauman Kalai of Microsoft Research New England and Omer Paneth of Boston University, have proved that the most natural types of attacks on the system are guaranteed to fail. And the hard lattice problem, though new, is closely related to a family of hard problems that have stood up to testing and are used in practical encryption schemes.
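The article's description of the "jigsaw puzzle" is pretty hand-wavy, so here is a deliberately silly Python sketch of the one idea it does convey: per-piece randomness that cancels out only when you evaluate things the intended way. This is just additive masking with made-up numbers, not the actual multilinear-map construction, and it has no security whatsoever; it's only meant to show the "randomness cancels" flavor.

```python
import random

q = 2**61 - 1  # a Mersenne prime modulus, purely illustrative

def split_with_masks(partials):
    # Attach random masks that sum to zero mod q: each masked "piece" on its
    # own looks uniformly random, but adding all the pieces the intended way
    # makes the randomness cancel and yields the true total.
    masks = [random.randrange(q) for _ in range(len(partials) - 1)]
    masks.append((-sum(masks)) % q)
    return [(p + r) % q for p, r in zip(partials, masks)]

partials = [3, 14, 15, 92]             # stand-ins for per-piece contributions
pieces = split_with_masks(partials)
print(pieces)                          # individually: indistinguishable from noise
print(sum(pieces) % q, sum(partials))  # assembled as intended: the masks cancel
```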
Lol lattice-based theoretical crypto has solid theoretical foundations. Finding the correct parameter ranges for applications is difficult, and that's why you aren't seeing widespread usage yet, but it's certainly not snake oil.
And don't get your hopes up about seeing a practical implementation of iO any time soon; it's thoroughly and utterly impractical at the moment, and will remain so for the foreseeable future.
> Lol lattice-based theoretical crypto has solid theoretical foundations.
Yes, it was (is?) one of the candidates for post-quantum public-key encryption. But that doesn't stop terrible projects from claiming to have it in their marketing today.
Lattice-based crypto seems to be on the way out already: most papers I see exploring cutting-edge crypto are based on ring-LWE, which is a different problem that was proven (by Regev) to reduce to the SVP problem over integer lattices, and is thus just as hard. But it's not actually a lattice problem itself.
LWE and SIS-based cryptography absolutely is lattice-based crypto, and I can't really imagine what other kind of scheme you would deem more deserving of the name than those. Even the epitome of classical lattice-based crypto, Ajtai-Dwork, is proved secure under a problem (hidden hyperplane) which is "not actually a lattice problem itself", at least not any more than LWE and SIS. But being based on a problem equivalent to worst-case lattice problems under polynomial-time reductions strikes me as the strongest possible sense in which you could claim a scheme to be lattice-based.
I meant ring LWE is not the same problem as SVP. Perhaps I have misunderstood the algorithm but I do not see any lattices in the description of ring LWE. The fact that it's equivalent in hardness doesn't, to my mind, imply it's "lattice based".
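For what it's worth, no lattice appears explicitly in the LWE problem statement, but one is hiding right underneath it. Here's a toy plain-LWE instance in Python (tiny, hopelessly insecure parameters, all values made up): recovering s from the samples is a bounded-distance decoding problem in the q-ary lattice determined by A, which is the sense in which LWE/ring-LWE schemes get called "lattice-based".

```python
import random

# Public samples (a_i, b_i) with b_i = <a_i, s> + e_i (mod q) for a secret s
# and small errors e_i.
q, n, m = 97, 4, 8
s = [random.randrange(q) for _ in range(n)]                    # the secret
A = [[random.randrange(q) for _ in range(n)] for _ in range(m)]
e = [random.choice([-1, 0, 1]) for _ in range(m)]              # small noise
b = [(sum(a * x for a, x in zip(row, s)) + err) % q
     for row, err in zip(A, e)]
print(list(zip(A, b)))   # what an attacker sees; s and e stay hidden
```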
Key exchange, plus the fact that the key is the same size as the message and can only be used once. This means that securely exchanging keys is as hard as securely exchanging the message itself -- if you had a way to exchange the keys, you might as well exchange the message itself and be done with it.
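To make the "key as large as the message" point concrete, here is the entire scheme in a few lines of Python (message and sizes made up for illustration). Note that the pad is exactly as long as the plaintext, so shipping it securely to the other side is the same problem you started with.

```python
import os

msg = b"attack at dawn"
key = os.urandom(len(msg))                    # the pad is exactly as long as the message
ct  = bytes(m ^ k for m, k in zip(msg, key))  # encrypt: XOR with the pad
pt  = bytes(c ^ k for c, k in zip(ct, key))   # decrypt: XOR with the same pad
assert pt == msg
# Getting `key` to the recipient securely is exactly as hard as getting `msg` there.
```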
Nothing's wrong with one time pads. They're as good as they've ever been, which is to say: completely impractical for use on the internet and almost all other computing applications.
My understanding of OTPs is that they must be used only once. As such they offer the same PFS (perfect forward secrecy) as all the schemes I know of. PFS does not guarantee the secrecy of a specific message. It does guarantee that if you are able to crack one message, you can't use the key to crack others. Therefore, if you use an OTP only once, it gives you the same guarantees.
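If you want to see why "only once" is non-negotiable, here's a quick sketch of what reuse leaks (messages made up for illustration): an eavesdropper holding both ciphertexts gets the XOR of the two plaintexts with no key at all, and classic crib-dragging usually unravels both from there.

```python
import os

m1  = b"meet at the bridge"
m2  = b"flee to the border"
key = os.urandom(len(m1))                       # one pad...

c1 = bytes(a ^ k for a, k in zip(m1, key))
c2 = bytes(a ^ k for a, k in zip(m2, key))      # ...reused for a second message

# XOR of the two ciphertexts equals the XOR of the two plaintexts: the key drops out.
leak = bytes(a ^ b for a, b in zip(c1, c2))
assert leak == bytes(a ^ b for a, b in zip(m1, m2))
```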
Am I wrong, or is the phrase "a type of mathematical protocol for convincing someone that something is true without revealing any details of why it is true" wrong? Shouldn't it be "a type of mathematical protocol for allowing someone to verify that something is true without revealing any details of how it is calculated to be true"?
No, that's what a zero knowledge proof is; at the end of the protocol, you are convinced that the statement is true (or you reject the proof), and you learn nothing else about the "proof".
In NP terms, you learn whether the NP instance is true (e.g., that a 3SAT formula is satisfiable), but learn nothing about the witness (e.g., the satisfying assignment to its variables).
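If it helps, here is the statement/witness split for 3SAT spelled out in a few lines of Python (the formula and assignment are made up for illustration). The verifier of a ZK proof ends up convinced of what the last line prints, without ever seeing the dictionary on the line above it.

```python
# The statement is "this formula is satisfiable"; the witness is a concrete assignment.
formula = [(1, -2, 3), (-1, 2, 4), (2, -3, -4)]   # literal k means x_k, -k means NOT x_k

def satisfies(assignment, clauses):
    return all(any(assignment[abs(lit)] == (lit > 0) for lit in clause)
               for clause in clauses)

witness = {1: True, 2: True, 3: False, 4: True}   # a satisfying assignment
print(satisfies(witness, formula))                # True: the statement holds
```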
Not completely clear to me what you mean by "calculated to be true", but you may want to look at the difference between computational and statistical zero knowledge. A computational ZK proof hides the witness from computationally bounded (i.e. probabilistic polynomial time) adversaries. Statistical ZK, on the other hand, hides the witness from any adversary, even if they can carry out an unbounded amount of computations (there are slight subtleties there depending on the precise security model but that's the basic idea).
Obfuscation is different from homomorphic encryption, and in some sense much more powerful.
With homomorphic encryption, Alice can send some secret data to Bob in encrypted form, and let Bob carry out computation on that encrypted data. However, the result of the computation remains encrypted, and only Alice's private key can decrypt the result.
By contrast, obfuscation allows Alice to give Bob a program that contains some secret information (such as cryptographic keys) in such a way that Bob can run it (without any further interaction with Alice) on any inputs of his choice, and get the result in the clear. However, he cannot learn anything about the hidden secret information other than what is revealed by the input-output pairs he has obtained.
It's not hard to see that obfuscation gives you homomorphic encryption for free (you can probably get a rough idea of how to do it based on the somewhat imprecise descriptions above), but we don't know how to go in the other direction. (Current obfuscation candidates do use fully homomorphic encryption under the hood, but they need to rely on much more than that).
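If the distinction is still fuzzy, here is a deliberately toy additively homomorphic scheme (just one-time-pad masking mod q, with made-up values, and not secure in any sense beyond this example) showing the homomorphic-encryption interface: Bob adds ciphertexts without seeing any plaintext, and the result stays encrypted until Alice opens it. With obfuscation, Bob would instead run the program himself and get the output in the clear.

```python
import random

q = 2**31 - 1   # toy modulus

class Alice:
    def __init__(self):
        self.pads = []                      # Alice remembers her one-time pads
    def encrypt(self, m):
        k = random.randrange(q)
        self.pads.append(k)
        return (m + k) % q                  # to Bob, this looks uniformly random
    def decrypt_sum(self, c, indices):
        # Remove the pads for the ciphertexts Bob says he added together.
        return (c - sum(self.pads[i] for i in indices)) % q

alice = Alice()
cts = [alice.encrypt(m) for m in (10, 20, 12)]  # Alice hands Bob only ciphertexts

c_sum = sum(cts) % q      # Bob adds them without ever seeing a plaintext...

print(alice.decrypt_sum(c_sum, [0, 1, 2]))      # ...but only Alice can open the result: 42
```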
This is precisely right: "Similarly, a black box obfuscator would provide a way to instantly convert any private cryptography scheme to a public one that could be performed over the Internet by strangers. In a sense, obfuscation is the key to all cryptographies."
Ciphertext is a higher-order, or simply, highly ordered, deterministically derived version of plaintext.
It's a shame that so little philosophical thought has been given to cryptography. In fact, the very concept of order appears to have been lost in all contemporary discussions. Consider Leibniz's discussion and you may see some shocking origins: http://www.labirintoermetico.com/12ArsCombinatoria/Leibniz_G...
Even if that made sense it would be wrong; secure encryption (IND-CPA) needs either state or randomness. Most popular approaches stick to using randomness.
Surely you don't mean random. At best you mean pseudo-random, but either way, the result isn't random (because after all, you'd be left with something scrambled, not encrypted). That's the entire conceptual point of cryptography.
No, I do mean random. Block ciphers, such as AES, model PRFs. One cannot get IND-CPA security by just passing the message into AES à la ECB mode. Instead, one has to use randomness, in the form of a random IV, as in CBC mode, to obtain IND-CPA security.
Sure, the resulting ciphertext is only computationally indistinguishable from random. But the point of IND-CPA security is that one can't differentiate between two encryptions of different messages. Adding randomness gives you this.
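A concrete way to see the point, using HMAC as a stand-in PRF instead of AES (a toy sketch with made-up helpers, not a real encryption scheme): with no randomness, two encryptions of the same message are identical, which already violates IND-CPA; prepending a fresh random IV fixes that particular leak.

```python
import hashlib, hmac, os

def prf(key: bytes, block: bytes) -> bytes:
    # Toy PRF: HMAC-SHA256 standing in for a block cipher like AES.
    return hmac.new(key, block, hashlib.sha256).digest()

def enc_deterministic(key: bytes, msg: bytes) -> bytes:
    # "ECB-flavoured": no randomness, so equal messages give equal ciphertexts.
    pad = prf(key, b"\x00" * 16)
    return bytes(a ^ b for a, b in zip(pad, msg.ljust(32, b"\x00")))

def enc_randomized(key: bytes, msg: bytes) -> bytes:
    # Randomized: a fresh IV makes repeat encryptions of the same message differ.
    iv = os.urandom(16)
    pad = prf(key, iv)
    return iv + bytes(a ^ b for a, b in zip(pad, msg.ljust(32, b"\x00")))

key = os.urandom(32)
m = b"attack at dawn"                                          # toy: messages up to 32 bytes
print(enc_deterministic(key, m) == enc_deterministic(key, m))  # True -- leaks message equality
print(enc_randomized(key, m) == enc_randomized(key, m))        # False (with overwhelming probability)
```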
Aah, but that's the rub. The result isn't actually random; it is only "computationally indistinguishable" (or meets some lower threshold, since it is only in the last few decades of the multi-millennia history of crypto that this has been true). I'm not denying that pseudo-randomness (or, in very limited cases, real randomness) is used in most variants of crypto today; rather, I am suggesting that despite this, the end result is precisely the opposite: a message so highly ordered that it is "computationally indistinguishable", merely masquerading as random.
[1] http://security.stackexchange.com/a/50972
[2] http://eprint.iacr.org/2010/520.pdf