To view the accompanying paper, visit doi.acm.org/10.1145/2902313
For thousands of years, cryptography was synonymous with "secret writing," whereby two parties used some shared secret key or method to communicate confidential information. But beginning in the 1970s, there was an explosion of new cryptographic concepts and applications. These include public key cryptography (confidential communication between two parties over an open channel); secure function evaluation (jointly computing a function of private inputs, such as the results of an election or auction, without revealing them); zero knowledge proofs (proving a statement is true without revealing anything else); fully homomorphic encryption (computing on encrypted data); and more.
These notions seem so paradoxical it is amazing these cryptography pioneers even imagined they could ever be achieved!a Based on their writing, it seems at least part of these inventors' thought process involved the following mental experiment: it is not too difficult to convince yourself these wonderful objects can in fact be achieved if we had a black box computing some function, where every party could use the box to compute an output from an input but could not understand its internal workings. In the words of James Ellis,b "we can regard our [encryption function] as a look-up table containing one value of output for each possible input value." The hope was that it would be possible to simulate such a black box in software by using what Diffie and Hellman called a "one-way compiler, which takes an easily understood program in a high-level language and translates it into an incomprehensible program in some machine language."c
Such a compiler is known as a software obfuscator, and despite serving as a guiding light for what cryptographic tools we can hope to achieve, in the many years since, most practical and theoretical results on this notion have been negative. Obfuscation was and still is practiced to protect intellectual property and other secrets in source code, but no practical scheme has been found that resists a truly determined attacker. On the theory front, in 2001 my co-authors and I showed that arguably the most natural notion of obfuscation, known as "virtual black box" obfuscation, is impossible to achieve.d We left a "tiny" loophole by noting our result did not rule out a weaker notion known as "indistinguishability obfuscation," or IO. However, at least to me, it seemed this notion of IO might also be impossible to achieve, and even if it were achievable, it did not seem very useful. The following paper by Garg et al. proved I was probably wrong on the first count and definitely wrong on the second one.
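Roughly speaking (this is the standard formulation from the literature, stated here for concreteness rather than quoted from the paper), the IO guarantee says that for any two equal-size circuits computing the same function, no efficient adversary can tell their obfuscations apart:

```latex
\[
C_0 \equiv C_1 \ \text{ and } \ |C_0| = |C_1|
\;\Longrightarrow\;
\mathcal{O}(C_0) \approx_c \mathcal{O}(C_1),
\]
```

where \(C_0 \equiv C_1\) means the circuits agree on every input, \(\mathcal{O}\) is the obfuscator, and \(\approx_c\) denotes computational indistinguishability. Note how weak this sounds: it promises nothing about hiding the program's secrets, only that two equivalent programs obfuscate to indistinguishable results.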
In this remarkable work, the authors construct a "one-way compiler" of the type envisioned by Diffie and Hellman. This is a way to transform a program P into a program Q that is functionally equivalent, in the sense that it produces the same output as P on every input, but "incomprehensible" in the sense that it satisfies this IO condition. The compiler itself is very different from all prior obfuscators. It is based on another breakthrough paper by a subset of these authors,e who constructed an object known as "cryptographic multilinear maps." They combine this with a result of Barringtonf that originally arose in the context of circuit lower bounds. The resulting compiler can be thought of as somewhat analogous to converting the program P into a scrambled Rubik's cube. If you interpret the input x as a sequence of rotations to apply to the cube, the final pattern will convey the output P(x), but you will not be able to learn any other information from it.
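To make the Rubik's-cube analogy concrete, here is a toy sketch (my own illustration, not the paper's construction) of the object underlying Barrington's result: a permutation branching program, which reads one input bit per step, selects one of two fixed permutations accordingly, and multiplies them together; the final product encodes the output, much as the cube's final pattern conveys P(x). Barrington's theorem uses width-5 permutations to capture all of NC1; this sketch uses width 2 and computes only XOR.

```python
# Toy permutation branching program (width 2, for illustration only;
# Barrington's actual construction uses width-5 permutations).

def compose(p, q):
    """Apply permutation p first, then q (permutations as tuples)."""
    return tuple(q[p[i]] for i in range(len(p)))

IDENTITY = (0, 1)
SWAP = (1, 0)

# Each step reads one input bit and selects one of two permutations.
# This two-step program computes XOR: the product of the chosen
# permutations is SWAP exactly when an odd number of bits are 1.
PROGRAM = [
    (0, IDENTITY, SWAP),  # (input index, perm if bit is 0, perm if bit is 1)
    (1, IDENTITY, SWAP),
]

def evaluate(program, bits):
    acc = IDENTITY
    for idx, perm0, perm1 in program:
        acc = compose(acc, perm1 if bits[idx] else perm0)
    return acc != IDENTITY  # non-identity product means "output 1"

print(evaluate(PROGRAM, [1, 0]))  # XOR(1, 0) -> True
print(evaluate(PROGRAM, [1, 1]))  # XOR(1, 1) -> False
```

The obfuscator's job, loosely, is to publish "scrambled" versions of these per-step permutations (via multilinear maps) so that anyone can carry out the multiplication for their own input, yet the individual steps reveal nothing beyond the final answer.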
The second contribution of this paper was to show this seemingly weak notion of IO is in fact sufficiently strong to obtain some of the most exciting applications of obfuscation. This has been greatly extended in many follow-up works, which have established IO as a kind of "cryptographic master tool" that can yield a great many other applications.
This paper opens many more research directions than it closes. For starters, while the authors gave some evidence for the security of their construction, it is not truly satisfying, and basing IO on firmer foundations is a crucial research objective. Also, at the moment the construction is terribly impractical, incurring a huge blowup to even the simplest programs. Both of these and more are now the object of intensive research efforts. It truly is an exciting time for cryptography.
a. Indeed, when Ralph Merkle, who, along with Whit Diffie, Martin Hellman, and James Ellis of the British agency CESG, was one of the first people to conceive of the idea of public-key encryption, submitted his work to Communications of the ACM, he noted the only reference he could find to this problem was in a science fiction story.
b. J. Ellis. The possibility of secure non-secret digital encryption. CESG Report, Jan. 1970.
c. W. Diffie and M.E. Hellman. New directions in cryptography. IEEE Trans. Information Theory 22, 6 (1976), 644–654.
d. B. Barak et al. On the (im)possibility of obfuscating programs. Advances in Cryptology. Springer Berlin Heidelberg, 2001, 1–18. Also, JACM 2012.
e. S. Garg, C. Gentry, and S. Halevi. Candidate multilinear maps from ideal lattices. Eurocrypt 7881 (2013), 1–17.
f. D. Barrington. Bounded-width polynomial-size branching programs recognize exactly those languages in NC1. J. Comput. Syst. Sci. 38, 1 (1989), 150–164.
The Digital Library is published by the Association for Computing Machinery. Copyright © 2016 ACM, Inc.