
Communications of the ACM

Inside Risks

Risks Are Your Responsibility


In his February 2007 column, Peter Neumann mentioned some failures that resulted from inadequate attention to the architecture of the overall system when considering components. But many developers cannot influence or even comprehend the system architecture. So, how can they be held responsible in such a situation? Although many system failures can be detected and prevented without reference to the system architecture, professionals working on isolated components still have professional—indeed, moral—duties to ensure their results are as risk-free as possible.

The aphorism "He can't see the forest for the trees" comes to mind. From my perspective, there are two issues: Are there tools that permit those of you working at the tree level to see the larger context? Do you use them?

Here, "tools" means representations and analysis methods (supported by tools in the usual sense) that can represent more than small, individual components of a larger system—the forest. We have a number of representations—wiring diagrams, flow charts, structure charts, UML, more formal techniques, and so on—but they have at least two major faults. Fundamentally, our representations can at best only incompletely capture the full scope of a complete system architecture and its environment. For example, they may have no way of representing an unexpected external event, may represent the physical parts of a system but not the software, may only partially describe interactions between the system under consideration and the rest of the world, or may be unable to fully represent potential (damaging) behaviors. The result is unforeseen consequences. The other major fault is that representations typically support only the most limited forms of rigorous analysis.

Various proposed analysis techniques can be applied to representations of computer-based systems, but many are neither proven nor widely used. The prevalence of testing, rather than proving or assuring via simulation, in the world of computer systems is a clear indication of the lack of the practical intellectual tools that are so vital in other areas of engineering. Consider how a new airplane is simulated many times before ever being test-flown—and, fortunately, how rarely a new plane design crashes on its initial flight. Or consider how a new chip design is rigorously checked before being burned into silicon, even though some errors still occur.

Substantial academic research for at least 40 years has been aimed at addressing this lack of intellectual tools. While R&D continues and has produced some useful results, it has not produced a so-called silver bullet. Nonetheless, I continue to believe strongly that research will eventually improve our stock of intellectual tools and produce automated aids for applying them to large complex systems. The alternative is to continue to flounder in the dark.

In the meantime, however, each of us must address the second issue—using those tools we do have. But how can you do that in today's world of competitive pressures, where those up the line often fail to insist on good engineering practice, at least in the case of software?

Put starkly, you must have the fortitude to apply what you do know how to do, to demand training on those tools that may apply, and to insist, as a professional, that you will not tolerate less. If you are a requirements analyst, insist that security issues be made part of any overall requirements statement. If you are a systems designer, use available risk-analysis techniques carefully and rigorously. If you are a programmer or a component or subsystem designer, make sure your parts fit into the larger architecture in a way that minimizes risks. If you are a manager or supervisor at any level, enable and insist on these behaviors. If you are an educator, make sure your students learn to take the larger view and to consider risks.

Ultimately, it isn't sufficient just for individual tree planters to do the right thing. If a large-scale system is to be as risk-free as possible, the planners of the forest and the planters of the trees must be able to communicate and must be given incentives to do so. If you are a system designer or development manager, you have broader purview and authority, so the responsibility falls even more heavily on you. In short, risks are everyone's responsibility.

The challenges are significant; meeting them will require more research, development, successful examples, and human understanding.


Author

Peter A. Freeman (www.cc.gatech.edu/staff/f/freeman) is Emeritus Dean and Professor at Georgia Tech, and immediate past Assistant Director of NSF for CISE.


©2007 ACM  0001-0782/07/0600  $5.00

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.
