
Communications of the ACM

ACM News

Exploring the Theory of Computing


Avi Wigderson. In his new book, Wigderson discusses, among other things, how fundamental computational processes are to all fields of science.
Credit: Simons Foundation

For close to half a century, the problem of P versus NP has resisted resolution. It has also inspired a tremendous flowering of the field of theoretical computer science and led to an explosion of interconnections with other branches of science and engineering.

Avi Wigderson's new 440-page book, Mathematics and Computation: A Theory Revolutionizing Technology and Science (Princeton University Press, October 2019), lays out a commanding overview of the theory of computing and argues for its central role in human thought.

Wigderson is the Herbert H. Maass Professor in the School of Mathematics of the Institute for Advanced Study in Princeton, NJ, where he has been leading computer science and discrete math activities for the past 20 years. Among his honors is the 2019 Donald E. Knuth Prize, conferred by the ACM Special Interest Group on Algorithms and Computation Theory (SIGACT) and the IEEE Technical Committee on the Mathematical Foundations of Computing for outstanding contributions to the foundations of computer science.

CACM recently spoke with Wigderson about his book. What follows is an edited and condensed version of the conversation.

This is a very unusual book. It's part exposition of the development of a technical area, part account of the state of an art, part philosophical exploration. It seems to me it's also part love letter; there is a lot of passion in the book. How did it come about?

It came out of a failure to write a different book. When I started, eight or more years ago, I was planning to write a popular book. I wrote several chapters but found it very hard to decide on the level. Eventually Edna, my wife, suggested that I write a book that I could finish!

This is the book that I knew how to write. It's aimed at a somewhat more sophisticated audience, though not a very sophisticated one; undergrads can read it. Quite a few chapters can be read by almost anyone with a little background and, of course, motivation. I hope the clarity is attractive enough for people who start reading to be caught and to continue.

I am indeed passionate about my research area (and academic community). You mentioned how unusual the book is. This has several aspects. I have always been fascinated by the philosophical side of the field, its connection to other sciences, and to the history of ideas. I wanted to write about all these things. I don't find them so much in existing texts.

Absolutely essential to the success of complexity theory are the many interconnections and the flow of ideas between its various areas. Describing this was a major goal for me in this book: to have all the cross-references, both between complexity and cryptography and randomness, and among many concepts that arise and flow freely between areas, even into quantum mechanics and quantum computing and back. It's amazing that all these connections exist. I made a decision that I would not put any proofs in the book, only ideas of proofs and references to proofs. If there were proofs, it would be five times as large.

Most of the world views the success of computers as a success of technology. Well, underlying any major industry is a theoretical idea. Electronic commerce, the Internet, quantum computing, blockchains, things that billions are invested in and that the world relies on, started with theory. The last chapter of the book talks about how fundamental computational processes are to science in general, in all fields. We now have plenty of evidence of this from seven or eight different fields of science where the impact is already felt. I believe that it will grow much more.

The book gives a great account of the P versus NP problem, which is at the heart of the theory of computing. Can you say briefly why this problem has become so central, and why it's so hard?

First, we are extremely lucky as a field to have had this problem as the major goal. It was formulated in the early 1970s and really marks the beginning of computational complexity. The problem is fundamental. One can see that because it asks something that I can explain to anybody on the street---and I do, all the time! It's a problem about the difficulty of finding something, versus just verifying that you found what you were looking for. Everybody's experience in the world is that it is far easier to verify something that you were looking for than actually to find it. Proving theorems and solving puzzles are great examples of this. Finding specific theories that explain given data is also a great example.

The P versus NP problem captures a vast amount of human activity that is of this nature, of looking for something: explanations, theories, solutions to problems, proofs of theorems. And there is an unconscious belief that when we find this, we will recognize that we have found it, right? We would not be looking for anything we wouldn't recognize when we found it. We know already when we are looking for something that it would be easy to verify. And if finding is as easy as verifying, if P equals NP, then all these human activities would have an automatic solution, immediately. That's why this problem is so intriguing, and why it's so fundamentally connected to so many human activities.

It's also a very concrete, simple-to-state mathematical problem. I give you a graph or a planar map, and I ask whether it's 3-colorable. If you can solve this quickly, then P equals NP. It's a purely mathematical question, completely formal. But it also has philosophical content. There are many important mathematical problems, such as the Riemann Hypothesis, but I don't believe there is any mathematical problem with such deep philosophical content as P versus NP.
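To make the contrast concrete, here is a minimal Python sketch (an editorial illustration, not from the book; the small example graph is an assumption). Verifying a proposed 3-coloring takes one pass over the edges, while the only obvious way to find one is brute-force search over all 3^n assignments:

```python
from itertools import product

# A small undirected graph as an adjacency list (illustrative assumption).
graph = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 3], 3: [1, 2]}

def is_valid_3_coloring(graph, coloring):
    """Verifying is easy: check every edge once (polynomial time)."""
    return all(coloring[u] != coloring[v]
               for u, nbrs in graph.items() for v in nbrs)

def find_3_coloring(graph):
    """Finding is, as far as we know, hard: brute force tries up to 3**n colorings."""
    nodes = list(graph)
    for colors in product(range(3), repeat=len(nodes)):
        coloring = dict(zip(nodes, colors))
        if is_valid_3_coloring(graph, coloring):
            return coloring
    return None  # the graph is not 3-colorable

print(find_3_coloring(graph))  # e.g., {0: 0, 1: 1, 2: 2, 3: 0}
```

If P equals NP, the gap between these two functions would collapse for every problem of this kind.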

Fifty years of evidence shows that it's probably difficult to resolve---of course, one never knows, maybe tomorrow some kid will find an answer. We don't know how difficult it is. But we are fortunate as a field to have had this as a major goal, because once you try to understand this problem and develop a theory and a set of techniques to attack it, like you do with any other major mathematical problem, you come to a huge set of other problems that are much more concrete. Maybe they are about computations that are more restricted; maybe they are about different models of computation like randomized and quantum models. You start to discover lots of other things.

The book makes the case that the theory of computing is right up there with the theory of biology, the theory of physics, that it really is that central and fundamental.

Absolutely, no doubt about it. The theory of computing is a field that could exist, and probably should exist, separately from computer science. It's as fundamental as the major theories of physics and biology. It was born in computer science, and that is why it is in computer science, and of course understanding computation is what computer science is about. But understanding computational processes, natural and artificial, wherever they exist (and they exist all over) is the business of the theory of computing. There might have been no computers whatsoever, and this theory would still be fundamental to biology, to physics, to evolution, to social processes, because anyplace where there is a process, you can view it as a computational process.

The way I foresee the future of science focuses on processes and how they use resources, their tradeoffs, and their impact on the power of a particular system---like the brain or the immune system or evolution. Once you consciously add these parameters, namely the resources utilized by a process, and study feasibility of theories relative to their potential implementation under these restrictions, there is a lot to gain.

In the P versus NP question, there is a dividing line based on polynomials: One is always looking for polynomial-time algorithms. Why polynomial?

There is a subsection in the book called "Why polynomial?" This is another extremely lucky choice made early, in the 1960s. There were problems solvable in exponential time with just brute-force search, and then there were the few examples where we had efficient algorithms---some ran in linear time, some in quadratic time, some in cubic time. The smart people at the time, Edmonds and Rabin and Cobham, observed that polynomials represent a kind of moderate growth that is very different from exponential growth.

A polynomial is composable, so if you multiply or add polynomials, or if you take a polynomial of a polynomial, you get a polynomial. So it has nice closure properties. This is very much in synchrony with the fact that, when we write programs or algorithms, we often subsequently put one inside another, like a subroutine.
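As a small illustration of this closure property (an editorial sketch, not from the book): a linear-time loop that calls a quadratic-time subroutine runs in cubic time, which is still polynomial.

```python
def quadratic_subroutine(n):
    """Roughly n**2 basic steps."""
    return sum(1 for _ in range(n) for _ in range(n))

def composed(n):
    """A linear loop around a quadratic subroutine: about n * n**2 = n**3
    steps. Polynomials are closed under composition, so nesting efficient
    algorithms inside one another keeps the whole program efficient."""
    return sum(quadratic_subroutine(n) for _ in range(n))

print(composed(10))  # 1000 == 10**3: cubic, but still polynomial
```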

If you have a problem where the only known algorithm is a brute-force solution exponential in n, and you manage to find a polynomial-time algorithm, even if it runs in time n^1000, then you must have had a fundamental idea in understanding that problem; you have made a leap. Breaking this exponential barrier, finding a much faster algorithm, represents a major advance in understanding the problem. Then you can look for even faster, more practical algorithms.

Deep learning is mentioned in your book. Deep learning tools are used as a "black box"; they have no explanatory theory. Do you see prospects for developing a theory for deep learning?

Of course I do. I see this prospect in every scientific endeavor. I am very optimistic, but it's a serious scientific problem. People have been studying the brain seriously, in a scientific way, for, let's say, a century. Nobody complains that we don't yet have a theory of how the brain works.

The deep-learning machine is another organ. We designed it; it did not evolve like the brain, but suddenly it does all these things that the human brain does. It has existed for less than 10 years. Industry is thirsty, people are thirsty, for a quick understanding of it. I am optimistic there will be a theory of it, as I am optimistic about the brain, but you have to be patient. Obviously, it's a complex beast.

Another thing that people tend to forget is that the processing power of deep-learning machines is only part of their success. Deep in the heart of any deep-learning algorithm are stochastic gradient descent and the back-propagation algorithm, which computes gradients and dates from 30 or 40 years ago. A better understanding of its efficiency improved back-propagation from quadratic to near-linear time, and that is essential to the success of deep learning. There are many examples where improving the efficiency of theoretical algorithms creates billion-dollar industries: signal processing, error correction, Internet search, and many others.
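For readers who have not seen it, here is a minimal sketch of stochastic gradient descent on a toy two-parameter model (an editorial illustration; in deep learning the same update rule is applied to millions of parameters, with back-propagation computing the gradients):

```python
import random

# Fit y = 2x + 1 from noisy samples. The data, learning rate, and epoch
# count are illustrative assumptions, not values from the interview.
random.seed(0)
data = [(i / 20, 2 * (i / 20) + 1 + random.uniform(-0.05, 0.05))
        for i in range(20)]
w, b, lr = 0.0, 0.0, 0.1

for epoch in range(200):
    random.shuffle(data)
    for x, y in data:            # one example at a time: the "stochastic" part
        err = (w * x + b) - y    # prediction error on this example
        w -= lr * err * x        # gradient of 0.5*err**2 w.r.t. w
        b -= lr * err            # gradient of 0.5*err**2 w.r.t. b

print(round(w, 2), round(b, 2))  # approaches 2.0 and 1.0
```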

The book is dedicated to your father, Pinchas Wigderson. Can you tell me about him?

Both my parents were Holocaust survivors. They lost essentially their whole families in World War II. The Nazis killed almost all of them. My parents met in Israel, and my father worked for the Israeli Navy all his life, fixing the engines of ships.


Avi Wigderson's father, Pinchas.
Photo courtesy of Avi Wigderson.

The picture of him in the book shows him on one of the ships?

No, the picture is from wartime; I put 1943, although I am not absolutely sure of the date. After the Nazi invasion of Poland, he and his brother escaped to Russia. My father was very young, 17, and was sent to Ashgabat, Turkmenistan. His talents as a potential engineer were clear to the Russians. He worked most of the war years in an electrical plant, and that's where the picture was taken. My parents never talked much about their experiences in the Nazi period. They didn't burden their kids with the memories---this is common among Holocaust survivors.

My father loved people, and he loved puzzles. I was extremely influenced by these two loves. From the time I was very young, he would ask me riddles and puzzles. He would describe his work trying to understand a problem in the engine of a ship, with very limited ways of testing. My love of math I attribute a lot to him. He was a very social person, he loved talking to people, he loved helping people. He loved explaining anything he understood to anyone who was willing to listen. He loved to explain how to build things, how to fix the car and the washing machine, which I was never interested in, but my younger brother was. I absorbed all this as a kid.

Any final comments?

I am very happy that the book finally appeared in print. I own the copyright, and the book will also remain freely available on my website for anyone to download.

Allyn Jackson is a journalist specializing in science and mathematics, based in Germany.


 
