Why did the world become digital? In his book The Discrete Charm of the Machine, Ken Steiglitz looks at this question from various viewpoints: physical, technological, mathematical, computational, and historical. Along the way, he also evokes the beauty and ingenuity of analog machines.
Steiglitz received his Eng.D.Sc. in 1963 from New York University, where he also had earned bachelor's and master's degrees in electrical engineering. That same year he joined the faculty of Princeton University, where he now holds the position of Eugene Higgins Professor of Computer Science Emeritus and Senior Scholar. He is a Fellow of ACM. Steiglitz was one of 1,984 IEEE Fellows honored with the IEEE Centennial Medal in 1984, and one of 3,000 awarded the IEEE Millennium Medal in 2000.
What follows is a condensed and edited interview with Steiglitz conducted in May 2020.
I don't mean to butter you up, but I have to say that your book is really great. What made you write it?
I retired in 2011 after teaching for 48 years. I realized that my life had the same arc as the computer. I was born in 1939, when the computer was born, roughly speaking. I came into an analog world, and I will leave a digital one behind.
The digital versus the analog is a major theme of your book. What is it about that theme that appeals to you?
I grew up with everything analog. When I was a kid, the most sophisticated device around was the radio, analog radio. I could get radios for free in the garbage, as I point out in the book. I would open them up, and inside they had tubes and amplifiers and condensers and inductors. This was the way that I got interested in the technology of information processing.
I luckily stumbled into New York University, Heights Campus, where there were people who were really fine researchers, including John Ragazzini and my doctoral advisor, Sheldon Chang. My dissertation had to do with the relationship between the digital and analog worlds; I showed that they were isomorphic under certain circumstances. That set the tone for my future trajectory.
I've always been fascinated by the fact that you could process signals with a computer. We take that for granted. You're looking at an image and hearing a sound, but it's a little bit of a miracle. It's just 0s and 1s.
I found the parts of the book that discussed analog devices to be especially compelling. You described tuning a radio that had a crescent on the dial, turning the dial until the crescent got very small. It's such a visceral thing. It seems to me digital devices don't give that visceral feeling. Or do they?
You've touched on a subject dear to my heart, which is difficult to talk about, however: the aesthetics of technology. The devices today work beautifully, but they are not ... well, "visceral" is a great word. In my youth, a radio was sort of like a piano. It was a giant machine. That green phosphorescent disk with the crescent on it (I think I still have one of those tubes in my garage) was an amazing thing. There was a phosphor inside, and I knew the electrons were hitting the phosphor and glowing. I knew there was an electric field steering the electrons. It moved very fast and seemed to have no inertia; it was just an electric field moving electrons around. It was a wondrous thing for me to contemplate.
I have a friend who is a radio fanatic. He is a ham radio operator. I once came onto the roof of a parking garage to get my car late at night, and he was in his car with his daughter sleeping in the back. I asked, "What are you doing?" He says, "Well, I'm waiting for a satellite to come up over the horizon." He's beaming things to a satellite, which relays them to somebody on a ship in the Indian Ocean that is very hard to reach by radio. He sends the signal, and there is half a second or a second of delay before the echo comes. You could feel the speed of light.
I have had an attachment to analog devices all my life, and I've always been fascinated by computers as well. The first job I had was writing assembler code for a vacuum tube computer in Manhattan, in the summer of 1957. I think that was almost IBM's last vacuum tube machine, a 704. So here was this wonderful machine, called the digital computer, and here was this analog world. The connection between the two was irresistible.
Do you find that visceral feeling for analog machines is absent from your students at Princeton? Or do they have a different kind of intuition, having grown up in a digital world?
I think they do have different attachments, emotionally, to machines. It's probably a matter of where the synapses get attached when you are growing up. If they get attached while you are hacking smartphones, then you probably have a visceral connection to that.
Among those students who have shared some feeling about analog machines is Anastasios Vergis, who is mentioned in the book and was a graduate student at Princeton. Unfortunately, he died at much too early an age. A colleague of mine, Brad Dickinson, and I wrote a paper with Vergis about the complexity of analog computation, trying to come to grips with how long it takes for analog computers to solve problems and whether they can solve, quickly, problems believed to be intractable.
Vergis devised a machine, mentioned in the book, that is eerily reminiscent of the Antikythera mechanism, another fascinating analog device that uses gears to solve problems. He was really a brilliant student. I think he got whatever it is we're talking about, that feeling for analog machines.
What you can compute with an analog computer is one of the things that I think is still not completely understood. If you believe the extended Church-Turing thesis, then you could simulate anything out there, fast, with a Turing machine, which is the essence of simplicity. We will never be able to prove the extended Church-Turing thesis because it is a statement about the physical world, and there are no theorems about the physical world, only theories. So there is always that tantalizing possibility that there is something you can do with some gadget that you can't do with a Turing machine.
You have worked on soliton computing. Do you see that kind of possibility there?
Not so much from the point of view of overcoming a complexity barrier, but from the point of view of building a different kind of computer. We usually think of information as being stored in particular places in a computer and going from one place to another. The idea of using solitons to compute involves the information being carried by colliding particles; in particular, solitons.
An undergraduate, James Park, and I were fooling around with cellular automata, and I suggested that he try something that is analogous to a digital filter. He tried it on the cellular automaton, and particles appeared. This was a one-dimensional automaton, that is, based on a one-dimensional array of bits; it differs in this way from Conway's Game of Life, which is two-dimensional, although you get some of the same kinds of patterns, like the glider gun.
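For readers who have never played with a one-dimensional automaton, here is a minimal, hypothetical sketch in Python. It is not the filter-type rule Park and Steiglitz experimented with; it uses a standard elementary rule (Wolfram's rule 110) purely to illustrate the setup: a one-dimensional array of bits, updated in lockstep from each cell's neighborhood, on which localized, particle-like structures can emerge.

```python
# Illustrative sketch only: an elementary one-dimensional cellular automaton
# (Wolfram's rule 110), not the filter automaton described in the interview.

def step(cells, rule=110):
    """Apply one synchronous update to a 1-D array of bits (periodic boundary)."""
    n = len(cells)
    out = [0] * n
    for i in range(n):
        # Read the neighborhood (left, self, right) as a 3-bit number 0..7,
        # then look up the new bit in the rule number's binary expansion.
        idx = (cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]
        out[i] = (rule >> idx) & 1
    return out

if __name__ == "__main__":
    import random
    random.seed(1)
    cells = [random.randint(0, 1) for _ in range(72)]
    for _ in range(40):
        print("".join("#" if c else "." for c in cells))
        cells = step(cells)
```

Printing successive rows produces a space-time diagram in which localized patterns move and collide, loosely analogous to the "particles" Steiglitz describes.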
This is how we met Bill Thurston. Bill poked his head in my office when Park and I were at the blackboard. Princeton is an unusual place; people knock on the door and tell you things you want to know. One of the things I think about when I look back are the amazing people who I met. I've been astoundingly lucky.
Bill was an incredible guy and one of the world's experts on low-dimensional topology. We had diagrams on the board with various kinds of automata and told him what we were doing. Bill glanced at the diagrams and said, "Well, that's no problem, you just tilt the axis." And I said, "What do you mean, tilt the axis?" We ended up writing a paper together about solitons in one-dimensional automata.
That led to a whole sequence of research problems. The next step was: How about physical solitons, not made-up solitons with 0s and 1s? A physicist and electrical engineer who worked across the street from me, Moti Segev, who is now at the Technion, had a laboratory with a device that sent solitons down fibers. Talk about visceral; this was really cool equipment.
You send a couple of solitons into a fiber at different speeds, and they hit each other. What happens? Well, the first answer is, nothing; they just go through one another. That's the classic behavior that John Scott Russell observed in solitons in the canal in Scotland in the 19th century.
But there are many different kinds of solitons in different kinds of fibers. For example, there is a soliton with two components having to do with the polarization modes in a fiber. You can have a vertical component and a horizontal component, and when they hit each other, energy gets redistributed between them, so they can process information. Those are called Manakov solitons. I have studied with various people what you could do with Manakov solitons and whether you could build a computer with them. That's a long story, and it's related to the connection between physics and computing, digital and analog.
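For readers who want to see the structure, the Manakov system is often written, in one common normalization (conventions vary, so this is a sketch rather than the exact form used in any particular paper), as a pair of coupled nonlinear Schrödinger equations for the two polarization components $q_1$ and $q_2$:

$$i\,\frac{\partial q_1}{\partial z} + \frac{1}{2}\frac{\partial^2 q_1}{\partial t^2} + \left(|q_1|^2 + |q_2|^2\right)q_1 = 0,$$
$$i\,\frac{\partial q_2}{\partial z} + \frac{1}{2}\frac{\partial^2 q_2}{\partial t^2} + \left(|q_1|^2 + |q_2|^2\right)q_2 = 0.$$

Because each component is driven by the total intensity $|q_1|^2 + |q_2|^2$, a collision between two solitons can redistribute energy between the vertical and horizontal components, which is what lets the collisions process information.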
Were you ever able to build an actual computing device with solitons?
Oh no, it's a long way off. The most recent thing I was working on was with Darren Rand, who is now at (Massachusetts Institute of Technology's) Lincoln Labs and was a student of Paul Prucnal in the electrical engineering department at Princeton. We investigated theoretically the idea of capturing a photon with a soliton. A soliton creates a kind of potential well, and you can trap a photon in it, like a ball can fall into a hole. The soliton is traveling down a fiber, so if you work out the equations, you can take a photon and pick it up with a soliton and carry it from one place to another. This has immediate application to quantum computing, because it's a flying qubit; you can capture a photon and move a qubit around. No one has tested that experimentally. I'm trying to get somebody in the Netherlands interested in doing it. Maybe someday.
So, no soliton computer yet. If it would have an application, it would most likely be to quantum computing, which is another unbelievably fascinating topic.
Near the end of your book, you note that there are very few fundamental scientific laws to be discovered. The greats of the past have discovered many of them, so the rest of us miss out. Then you say that The Rite of Spring, for example, could only have been written by Stravinsky, so he didn't prevent anyone else from writing it. That's an interesting difference between art and science. But don't we also have the feeling of discovery when we learn about scientific laws, read about them, talk to people who know them well?
That's true, it's like a sense of discovery, but there is a difference between listening to Stravinsky and writing what Stravinsky wrote. You picked out something that particularly fascinates me, and this is how I end the book: the question of what art means that is different from what science means. That's something I'm still thinking about. But I take your point, we have a sense of discovery when we learn Newton's laws.
Richard Feynman had a reputation for working things out himself. He would find out about something and then would go away and figure it out himself, sometimes in a different way. Feynman wanted the thrill of figuring it out for himself. And, of course, you understand something much better if you figure it out for yourself.
You end the book with a fantasy scene where some extraterrestrials get a message from Earth. They are not too impressed with the scientific and technological things in the message, but the message also contains some music of Mozart, and they see value in that.
In the book, I am bold enough to say that art is better than science. Down deep in my bones, maybe I'm not sure which is better. I love them both, as we all do.
Allyn Jackson is a journalist based in Germany who specializes in science and mathematics.