Andrew S. Tanenbaum, a professor of computer science at the Vrije Universiteit in Amsterdam, has been at the forefront of operating systems design for more than 20 years. For an appreciation of Tanenbaum's sense of humor, academic musings, and philosophy of life, see his homepage's FAQ (http://www.cs.vu.nl/~ast/home/faq.html).
You are best known for MINIX, the Unix-like operating system that you created in 1987.
Version 6 of UNIX was widely used in universities, and it was a popular tool in operating systems courses. Then some bean counter at AT&T said, "Gee, if we keep this secret, we can make more money," so they put a clause in the version 7 contract saying you couldn't teach it anymore. At that point I decided I would write something that looked quite a bit like UNIX but was my own code and wasn't tied to their license.
What's going on with MINIX now? I understand you continued to develop and refine it over the years.
Yes, and in 2004 I decided to pick it up again and really make the point that I think microkernels are a more reliable way of doing things.
This is the idea that we can build more reliable operating systems by breaking up the components.
I find it very peculiar that anybody believes you can take anything as complicated as an operating system and have it run as one gigantic program. It's just too complicated. I mean, there are other complicated things in the world, but they're generally modular.
And that's why you decided to base MINIX on a microkernel rather than a monolithic kernel.
Yes, a microkernel is a very small piece of code running in kernel mode that provides the basis for the system and doesn't do very much itself. Then the operating system runs as a collection of other processes in user mode. So, instead of one big program that does everything, you've got one program that drives the disk, one program that handles the audio, and one program that manages memory, each doing one specific task and communicating with the others by well-defined protocols.
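In code, the structure Tanenbaum describes might look something like the following minimal sketch. It is not the actual MINIX API; the call names, message layout, and constants are illustrative assumptions, and the kernel primitives are stubbed so the sketch stands alone.

```c
/* A minimal sketch of microkernel-style message passing.
 * The names, message layout, and constants are illustrative
 * assumptions, not the real MINIX API; the kernel primitives
 * are stubbed so the sketch stands alone. */
#include <stdint.h>

#define ANY            (-1)   /* accept a message from any sender */
#define MSG_READ_BLOCK   1    /* request: read one disk block */

typedef struct {
    int32_t  source;          /* process that sent the message */
    int32_t  type;            /* what is being requested */
    uint32_t block;           /* request parameter */
    uint8_t  payload[512];    /* data travels inside the message */
} message;

/* In a real microkernel these are traps into the tiny kernel,
 * which does little more than copy messages between processes. */
static int send(int dest, message *m)   { (void)dest; (void)m; return 0; }
static int receive(int src, message *m) { (void)src;  (void)m; return 0; }

/* The disk driver: an ordinary user-mode process in a loop,
 * with no privileges beyond its one specific task. */
static void disk_driver(void)
{
    message m = {0};
    for (;;) {
        receive(ANY, &m);           /* block until a request arrives */
        if (m.type == MSG_READ_BLOCK) {
            /* ... read block m.block from the device into m.payload ... */
            send(m.source, &m);     /* reply to whoever asked */
        }
    }
}

int main(void) { disk_driver(); return 0; }
```

The design point is that the kernel's only job is moving fixed-size messages between processes; everything else, drivers included, is an ordinary process that can fail and be restarted without taking the whole system down.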
So by breaking up the operating system into well-defined pieces, you make it more reliable.
It also means you can replace pieces independently, which has a variety of consequences, such as for security. MINIX has all these components—these servers and device drivers that run as user processes. Each piece has certain powers. And there's a data structure within the kernel that tells, for every process, exactly what it's allowed to do. So if a hacker found a vulnerability in the audio driver, took it over, and wanted it to fork, the kernel would say, "Sorry, you don't have permission to fork; there's no reason for audio drivers to do that." It's not perfect, of course, but it puts a lot of obstacles in the way of somebody trying to get into the system.
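A rough sketch of the kind of per-process permission table he's describing follows; the structure and names here are hypothetical, not the actual MINIX kernel data structure.

```c
/* Hypothetical sketch of a per-process privilege table: before the
 * kernel performs a call on a process's behalf, it consults a bitmap
 * of what that process is allowed to do. The layout and names are
 * illustrative, not the actual MINIX kernel data structure. */
#include <stdio.h>

#define CALL_FORK   (1u << 0)
#define CALL_EXEC   (1u << 1)
#define CALL_DEV_IO (1u << 2)   /* touch device registers */

struct priv {
    const char  *name;
    unsigned int allowed;       /* bitmap of permitted kernel calls */
};

/* The audio driver gets device I/O and nothing else: it has no
 * legitimate reason to fork, so a hijacked driver can't either. */
static struct priv table[] = {
    { "init",         CALL_FORK | CALL_EXEC },
    { "audio_driver", CALL_DEV_IO },
};

static int kernel_call(int proc, unsigned int call)
{
    if (!(table[proc].allowed & call)) {
        printf("%s: permission denied\n", table[proc].name);
        return -1;              /* "Sorry, you don't have permission" */
    }
    /* ... carry out the call ... */
    return 0;
}

int main(void)
{
    kernel_call(1, CALL_DEV_IO);   /* allowed: the driver's real job */
    kernel_call(1, CALL_FORK);     /* denied by the privilege table */
    return 0;
}
```

Running this prints a denial for the fork attempt: the audio driver's entry simply doesn't include that power, so a compromised driver can't acquire it.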
How do you think microkernel systems can penetrate the broader consumer marketplace?
I don't think it's going to be easy, but I can think of a couple of routes by which it could happen. Some people in the European Union have talked about changing regulations to require software to fall under the same liability laws as everything else. If you make a tire, and one in 10 million explodes, you can't say, "Well, tires explode sometimes." Why isn't software like any other product? Imagine if there were some liability associated with its not working. Manufacturers would suddenly be very interested in making things reliable.
You've also worked on security-related projects, like an e-voting system.
The trouble with voting machines is knowing that the software can be trusted. And cryptographers who work in this area have complicated schemes to address that, but the schemes are so complicated that virtually nobody except a professional cryptographer could understand them. We've designed a scheme which, though it uses cryptography, is much simpler and easier to use.
"I think microkernels are a more reliable way of doing things."
We assume that the software is open source. At the time you go to vote, you could come with some handheld device and query the voting machine, "Give me a cryptographic checksum of the software currently in your memory." Then you could check that and know the software running on the machine is the software that's supposed to be there. We have a lot of other design issues dealing with how the keys are distributed, and not trusting any single party. But basically we're trying to design a system that's more secure than current ones and less subject to hacking.
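In outline, the voter-side check he describes could be as simple as the following sketch. It assumes OpenSSL's SHA-256 for the cryptographic checksum; how the handheld device actually queries the machine for its memory image is imagined here, and the key-distribution parts of the design are not shown.

```c
/* Sketch of the voter-side check: compare a cryptographic checksum
 * of the machine's software against the published value for the
 * certified build. Assumes OpenSSL for SHA-256; how the handheld
 * device obtains the memory image is imagined here. */
#include <openssl/sha.h>
#include <stdio.h>
#include <string.h>

static int software_is_genuine(const unsigned char *image, size_t len,
                               const unsigned char expected[SHA256_DIGEST_LENGTH])
{
    unsigned char digest[SHA256_DIGEST_LENGTH];
    SHA256(image, len, digest);     /* checksum what is actually running */
    return memcmp(digest, expected, SHA256_DIGEST_LENGTH) == 0;
}

int main(void)
{
    /* Stand-ins for the machine's memory image and the published digest. */
    const unsigned char image[] = "certified voting software build";
    unsigned char expected[SHA256_DIGEST_LENGTH];
    SHA256(image, sizeof image - 1, expected);   /* reference value */

    printf(software_is_genuine(image, sizeof image - 1, expected)
               ? "software matches the certified build\n"
               : "MISMATCH: do not trust this machine\n");
    return 0;
}
```

Because the software is open source, anyone can recompute the reference digest of the certified build for themselves rather than trusting a single authority to publish it.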
It sounds like a very comprehensive approach.
I've always tried to look at the systems aspect of a problem. I've also done work on sensor networks—people have proposed dropping sensors along the national border to prevent people from sneaking in. The only thing they worry about is, "Suppose someone captures a sensor and steals all the keys?" What they hadn't considered is, "What is the range of a sensor? How does it detect the difference between an illegal immigrant and a rabbit?" Any system will be attacked at the weakest link, so you have to pay attention to where the weakest link is, not to whichever problem is the most mathematically interesting.