Not if it's business as usual in the software industry. But we could make it work.
Throughout this issue, you'll hear some amazing predictions about the future: instant universal communication, pervasive computing, new medical applications, and lots more. There's only one problem. The software for all these things might not work.
If today's software is any indication, it certainly won't; most software today is appallingly full of bugs. A large, complex product like Microsoft Word is routinely released even when the vendor knows it contains thousands of bugs. One classic example was the misplaced comma in a Fortran program that caused the Mariner 1 Venus space mission to fail in 1962. Computers crash or freeze, and applications lose data or files, seemingly for no reason. Cryptic error messages serve only to confuse users.
We could go on complaining about this situation; unfortunately, we don't need to. Every computer user has his or her own story of the unreliability of modern software. Many of these problems are minor time-wasting annoyances. But as computer applications enter more and more aspects of our lives, it becomes more and more important that the software we rely on really works.
The problem is not, as many people assume, that system designers and programmers make mistakes. That, we can't avoid; to err is human. We certainly know of many good software practices that can and should reduce errors, including systematic design practices, good programming style, safer programming languages, and better testing before release. But we can hardly hope to eliminate bugs completely before software is released. The problem is really that when errors do occur, we don't have good ways of discovering what went wrong and how to fix it. That's what we've got to change.
People make plenty of mistakes in social, economic, and informational exchanges, but an important difference between people and machines is that when mistakes occur in human society, we have good ways of finding out what they are and fixing them. If you think people are telling you something that's incorrect, you can interact with them about it to find out what is wrong, and, assuming goodwill, correct it. You can ask why they did what they did. You can verify what you are being told with others. You can ask what each person can do to correct the mistake.
When something goes wrong with a computer, you are likely to be stuck. You can't ask the computer what it was doing, why it did it, or what it might be able to do about it. You can report the problem to a programmer, but, typically, that person doesn't have very good ways of finding out what happened either. So bugs don't get fixed. It's this helplessness in the face of problems that makes interacting with computers so often feel frustrating.
Happily, this situation can be fixed. But not if the software industry goes on competing only through an ever-increasing accumulation of features. Instead, future software development should increasingly be oriented toward making software more self-aware, transparent, and adaptive. Software will still contain some bugs (though perhaps fewer), but users will be able to fix them themselves by interacting with the software. Software developers will have better tools for systematically finding out where bugs are, and the software itself will help them in correcting the bugs. Interacting with buggy software will be a cooperative problem-solving activity of the end user, the system, and the developer.
Nevertheless, some strong forces are working against software ever really working. The first is economic. Given the competitive marketplace, developers are often pressured to come up with innovations. Products featuring reliability get edged out by products offering more features. Another problem is the endless treadmill of software releases, where "version skew" occurs as Product A depends on Version 1 of Product B, but Version 2 of Product B breaks A. Asking users to manually track and manage these relationships is disastrous. These conditions practically guarantee unreliability in today's software.
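As a small illustration of how software itself could shoulder this bookkeeping, here is a minimal sketch in Python of a program that checks its own dependencies at startup rather than leaving version skew for the user to discover. The package name and version list are hypothetical; the importlib.metadata calls are standard library.

```python
# Sketch: Product A verifies at startup that the installed version of
# its dependency is one it was actually tested against, instead of
# asking the user to track the relationship manually.

from importlib.metadata import version, PackageNotFoundError

# Hypothetical dependency and the versions A is known to work with.
TESTED_VERSIONS = {"productb": ("1.0", "1.1")}

def check_dependencies():
    for package, known_good in TESTED_VERSIONS.items():
        try:
            installed = version(package)
        except PackageNotFoundError:
            print(f"warning: {package} is not installed")
            continue
        if installed not in known_good:
            print(f"warning: {package} {installed} is untested; "
                  f"known-good versions: {', '.join(known_good)}")

check_dependencies()
```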
There will have to be a consumer revolt against widespread unreliability and the willingness to reward reliability and improvability in products. Historically, such a revolt might be comparable to the American or French Revolutions in its social and economic effects. A small but encouraging sign is the recent commercial acceptance of the Palm handheld, which delivers a simple, reliable, functional interface, winning over more "capable" but complicated and unreliable competitors.
Another obstacle is the macho culture of programming. "Real programmers" don't need debugging tools. People are psychologically reluctant to admit the prevalence of bugs in their programs, making them unwilling to devote time and money to improving the process of dealing with them.
John Guttag, a professor of software engineering at MIT, said, "Finding a bug in your program is like finding a cockroach in your kitchen; if you have one, you probably have more, and it is not something one should be pleased about," a distasteful metaphor suggesting the presumed cause of a bug is negligence on the programmer's part. We think it is this denial of the normalcy of bugs and debugging that has led directly to the unreliability of software.
It may sound silly to say, but software will work only if we provide the tools to fix it when it goes wrong. Right now, we don't.
We see an important new direction in providing end-user debugging tools, which users themselves can use to fix or improve their software. It's crazy that at any moment we can't ask a computer: "What are you doing?" or "Why did you do that?" If we can't get that kind of basic information, we won't be able to tell the programmers what's wrong.
An exciting new technology for giving end users the kind of procedural control that only programmers have had is "programming by example," also known as programming by demonstration (because the user demonstrates examples of the desired behavior to the computer) [2]. When the user wants to teach the computer how to do something new or different, an example is demonstrated step-by-step in the user interface, and the computer records the steps and generalizes them into a program.
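To make the idea concrete, here is a toy sketch in Python of the generalization step, assuming a deliberately tiny library of candidate transformations; real programming-by-example systems search far richer program spaces. All names in the sketch are hypothetical.

```python
# Toy programming by example: the user demonstrates a few
# input -> output pairs, and the system generalizes a rule
# it can replay on inputs it has never seen.

def infer_rule(examples):
    """Return the first candidate transformation consistent
    with every demonstrated example, or None."""
    candidates = [
        ("uppercase", str.upper),
        ("lowercase", str.lower),
        ("strip spaces", lambda s: s.replace(" ", "")),
        ("first word", lambda s: s.split()[0]),
    ]
    for name, fn in candidates:
        if all(fn(inp) == out for inp, out in examples):
            return name, fn
    return None

# The user "demonstrates" the desired behavior twice...
demos = [("Alice Smith", "Alice"), ("Bob Jones", "Bob")]
rule = infer_rule(demos)

# ...and the system generalizes to unseen input.
if rule:
    name, fn = rule
    print(f"learned rule: {name}")
    print(fn("Carol White"))   # -> "Carol"
```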
Just because most software users are end users rather than programmers doesn't mean the public shouldn't be concerned about tools for developers. The quality of debugging tools for developers has a direct effect on the quality of the resulting software; if developers can't find and fix bugs, the software can't improve quickly enough.
The Boeing Company spent more than $100 million developing the instrument panel in its 777 aircraft, even though the user community is only a few hundred pilots. This expense was justified because those hundreds of pilots ferry the millions of passengers around the world who ultimately bear the consequences if the plane crashes. Hundreds of programmers write programs for millions of people, yet no effort on a scale comparable to what companies invest to prevent plane crashes has been mounted to prevent computer crashes.
Debugging tools are the instrument panel of the programming environment. But good tools need the foundation of a programming language designed to be debuggable. The language should be dynamic (everything easily changeable at any time) and introspective (extensive access to its own internal workings). One reason debugging hasn't progressed further in recent years is that programmers have been stuck in undebuggable languages like C and C++ in the name of efficiency.
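As a minimal illustration of what "introspective" means in practice, here is a Python sketch that uses the standard-library inspect module to answer a crude version of "What are you doing?" from inside a running program. The toy functions are hypothetical; inspect.stack is a real API.

```python
# Sketch: an introspective language lets a running program
# report on its own internal workings, here the chain of calls
# that led to the current point.

import inspect

def whats_happening():
    """Print each caller's function name and source location."""
    for frame_info in inspect.stack()[1:]:
        print(f"  in {frame_info.function} "
              f"({frame_info.filename}:{frame_info.lineno})")

def inner():
    whats_happening()

def outer():
    inner()

outer()   # prints: in inner ..., in outer ..., in <module> ...
```

A language without this kind of access forces debugging tools to reconstruct such information from the outside, which is one reason debuggers for static compiled languages have lagged.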
Moore's Law states that computer performance doubles once every 18 months. Fry's Law states that programming-environment performance doubles once every 18 years, if that. We're not talking about simply the speed of running an application but, more important, the speed of developing reliable software functionality, regardless of how fast it runs.
It's not our place here to detail the many ways debugging tools can be improved; see [1] and [3] for a myriad of exciting new developments and directions. An important component of debugging tools is software visualization: using the considerable graphic capabilities of modern computers and our prodigious powers of visual perception to quickly grasp, spatially and dynamically, what is going on in software. Other kinds of tools support the detective work of localizing bugs, diagnosing and analyzing problems, and instrumenting pieces of the software environment to monitor their behavior.
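As a small sketch of that last kind of tool, here is one way such instrumentation might look in Python: a hypothetical decorator that records every call to a function so the trace can later be inspected or visualized. This illustrates the general idea, not any particular tool's API.

```python
# Sketch: instrumenting a piece of software to monitor its behavior.
# Every call through the decorator is recorded in a trace log.

import functools

call_log = []

def monitored(fn):
    """Wrap fn so each call and its result are logged."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        result = fn(*args, **kwargs)
        call_log.append((fn.__name__, args, kwargs, result))
        return result
    return wrapper

@monitored
def add(a, b):
    return a + b

add(2, 3)
add(10, -4)

# The log becomes raw material for visualization or bug localization.
for entry in call_log:
    print(entry)
```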
Ultimately, software will be something we can use, not just for doing tasks, but for figuring out what it is we really want to do. After all, almost any improvement in a piece of software could be viewed as debugging it. So the process of debugging is really a process of improvement, and software is really a medium for debugging ourselves. Therein lies hope.
1. Lieberman, H., Guest Ed. The debugging scandal special section. Commun. ACM 40, 3 (Mar. 1997).
2. Lieberman, H., Ed. Your Wish is My Command. Morgan Kaufmann, San Francisco, 2001.
3. Stasko, J., Domingue, J., Brown, M., and Price, B., Eds. Software Visualization. MIT Press, Cambridge, MA, 1998.