
Communications of the ACM

Responsible Programming


Vinton G. Cerf

Welcome to "Cerf's up!" I am grateful for Editor-in-Chief Moshe Vardi's invitation to continue writing for Communications; this column succeeds the "From the President" column I penned during my service to ACM in that role.

Let me congratulate Alex Wolf, the newly elected ACM president. I know he will give exemplary service to our organization. Congratulations also go to Vicki Hanson and Erik Altman in their new roles as vice president and secretary/treasurer, respectively. I know this team will provide first-rate leadership.

I also thank Alain Chesnais, whose term as Past President ends as mine begins. He has been a staunch, reliable, and active leader in ACM matters, and I expect this will continue. There are many others elected to new positions or moving on as their terms in office end. I thank them all without enumeration, and commend them to your attention.

Lastly, allow me to note the enormous contributions of the ACM staff and, especially, the leadership of John White, CEO, and Pat Ryan, COO of ACM. They have accumulated a truly enviable record of steadfast leadership spanning the terms of many elected ACM officers.

Now to the substance of this column: responsible programming. What do I mean by that? In a nutshell, I think it means people who write software should have a clear sense of responsibility for its reliable operation and resistance to compromise and error. We do not seem to know how to write software that has no bugs... at least, not yet. But that, in a sense, is the very subject I want to explore.

My very good friend, Steve Crocker, drew me into a conversation about this topic a short while ago. As a graduate student, he had pursued a dissertation on the provable correctness of programs. While this is not a new topic, the objective continues to elude us. We have developed related tactics for trying to minimize errors. Model checking is one good example of a systematic effort to improve reliability, for which ACM gave the 2007 Turing Award to Edmund Clarke, Allen Emerson, and Joseph Sifakis. What is apparent, and emphasized by Crocker, is that the tools available to programmers for validating assertions about program operation are complex, with user interfaces only a mother could love (my characterization). Formal proofs are difficult, especially for anything but the simplest sort of program. Just conceiving the appropriate conditional statements to characterize program correctness is a challenge.
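
To make the model-checking idea concrete, here is a minimal sketch, entirely my own illustration rather than any of the laureates' tools: enumerate every reachable state of a small transition system and test a safety assertion in each one, reporting a counterexample trace if the assertion ever fails.

```python
# A minimal sketch of explicit-state model checking (illustrative only):
# exhaustively explore all reachable states of a tiny transition system
# and check a safety property in each one.
from collections import deque

def model_check(initial, successors, safe):
    """Breadth-first search over reachable states; returns a counterexample
    trace to an unsafe state, or None if the safety property always holds."""
    frontier = deque([(initial, [initial])])
    visited = {initial}
    while frontier:
        state, trace = frontier.popleft()
        if not safe(state):
            return trace                      # counterexample found
        for nxt in successors(state):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, trace + [nxt]))
    return None                               # property holds in every reachable state

# Example: a counter that wraps at 5 must never reach 7.
trace = model_check(
    initial=0,
    successors=lambda s: [(s + 1) % 5],
    safe=lambda s: s != 7,
)
print("property holds" if trace is None else f"counterexample: {trace}")
```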

Despite the Turing Halting Problem, it is still possible to establish lines of reasoning showing that a particular program terminates or achieves a repeatable state under the right conditions. One can make other kinds of statements about input/output checking (for example, the absence of buffer overflows). Some unending programs, like email user agents, can still have characterizations of well-defined states. It is clear, however, that it is not easy to develop succinct and potentially demonstrable statements about program behavior that show the likelihood the program will behave as desired. Harder still may be demonstrating that the program does not do something undesired.
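
As a toy illustration of such a line of reasoning (my example, not Cerf's), one classic tactic is to exhibit a loop "variant": a nonnegative integer quantity that strictly decreases on every iteration, which guarantees the loop terminates.

```python
# A toy termination argument (illustrative): the variant `hi - lo` is a
# nonnegative integer that strictly decreases on every iteration, so the
# loop must terminate; the assert documents that reasoning at runtime.
def binary_search(xs, target):
    """Return an index of `target` in sorted list `xs`, or -1."""
    lo, hi = 0, len(xs)
    while lo < hi:
        variant = hi - lo                # termination measure
        mid = (lo + hi) // 2
        if xs[mid] < target:
            lo = mid + 1
        elif xs[mid] > target:
            hi = mid
        else:
            return mid
        assert 0 <= hi - lo < variant    # bounded below, strictly decreased
    return -1

print(binary_search([1, 3, 5, 7, 9], 7))   # -> 3
print(binary_search([1, 3, 5, 7, 9], 4))   # -> -1
```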

While I have no ready solution to the problem, I believe better interactive tools are needed to test assertions about a program's anticipated behavior while it is being written, and to get useful feedback from the composition and validation system that these assertions are somehow supportable; if not provable, then perhaps at least not disproved by counterexample. It seems fair to imagine that when a programmer is designing a program and actually writing the code, there is a model in the programmer's head of what the program is supposed to be doing and, presumably, of things it is not supposed to do or should avoid. Whether this model is sufficiently clear and complete to allow provable or verifiable assertions to be made could be the subject of considerable debate.
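
Property-based testing offers one existing approximation of the "at least not disproved by counterexample" standard. Below is a minimal sketch using the Python hypothesis library, my choice of tool rather than one the column names: assertions about intended behavior are checked against many machine-generated inputs, and any failure is reported as a concrete counterexample.

```python
# A minimal sketch of "not disproved by counterexample" via property-based
# testing with the `hypothesis` library (pip install hypothesis).
from hypothesis import given, strategies as st

def dedupe(xs):
    """Remove duplicates while preserving first-occurrence order."""
    seen, out = set(), []
    for x in xs:
        if x not in seen:
            seen.add(x)
            out.append(x)
    return out

# Assertions about intended behavior, checked over many generated inputs.
@given(st.lists(st.integers()))
def test_dedupe_properties(xs):
    ys = dedupe(xs)
    assert len(ys) == len(set(ys))   # no duplicates remain
    assert set(ys) == set(xs)        # no elements invented or lost

if __name__ == "__main__":
    test_dedupe_properties()         # hypothesis drives the generated cases
    print("no counterexample found")
```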

One intriguing and tangentially relevant example of a programming environment comes from Bret Victor (http://worrydream.com), who has conceived and implemented an environment that lets one see immediately the results of executing the current program. Obviously, the system can only do this once the programmer has reached a point where the program can be parsed and interpreted. Imagine an environment fashioned for continuous validation of a set of assertions as the program is developed. One suspects heavy use of libraries could either help or hinder the process of verifying program correctness. If the library of subroutines is opaque to the verifying tools, bugs could be hidden. However, if a set of assertions that are invariant for a subroutine could be codified, the use of such a library might actually help the validation process; a sketch of this idea follows. I am fairly certain a body of prior work exists that can be cited here, but my impression is that such tools are not used regularly by professional programmers today.
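
As a hedged sketch of what codified invariants for a library routine might look like, here is a design-by-contract style example using ordinary runtime assertions rather than a verification tool; the routine and its contract are my own illustration.

```python
# A sketch of codifying a subroutine's contract so callers (and tools)
# need not peer inside it. Illustrative; names are hypothetical.
def sorted_merge(a, b):
    """Merge two sorted lists into one sorted list.

    Precondition:  a and b are each sorted ascending.
    Postcondition: result is sorted and is a permutation of a + b.
    """
    assert all(a[i] <= a[i + 1] for i in range(len(a) - 1)), "pre: a sorted"
    assert all(b[i] <= b[i + 1] for i in range(len(b) - 1)), "pre: b sorted"

    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:
            out.append(a[i]); i += 1
        else:
            out.append(b[j]); j += 1
    out.extend(a[i:]); out.extend(b[j:])

    assert all(out[k] <= out[k + 1] for k in range(len(out) - 1)), "post: sorted"
    assert sorted(out) == sorted(a + b), "post: permutation of inputs"
    return out

print(sorted_merge([1, 4, 6], [2, 3, 7]))   # -> [1, 2, 3, 4, 6, 7]
```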

It seems timely to suggest that responsible programming calls for renewed efforts to verify the proper operation of software that many people depend upon heavily to work as advertised. To do this, we need much better tools and programming environments than seem to be available today. I await with great interest responses from ACM members more knowledgeable than I in this area.


Author

Vinton G. Cerf is vice president and Chief Internet Evangelist at Google. He served as ACM president from 2012 to 2014.


Copyright held by author.

The Digital Library is published by the Association for Computing Machinery. Copyright © 2014 ACM, Inc.


Comments


CACM Administrator

The following letter was published in the Letters to the Editor of the September 2014 CACM (http://cacm.acm.org/magazines/2014/9/177939).
--CACM Administrator

Vinton G. Cerf's Cerf's Up column "Responsible Programming" (July 2014) should be echoed wherever software is used, procured, or developed. Dismal software quality hinders the economy, national security, and quality of life. Every organization is likely rife with process error. If you have not been affected by a cyberattack, you soon could be. Software industry analyst Capers Jones (http://www.spr.com) reported that deployed software systems, circa 2012, contained approximately 0.4 latent faults per function point. Reflecting on the urgency of moving to responsible programming, this statistic has improved by only approximately 300% since the 1970s; compare that to automobile engineers achieving a 3,000% reduction in emissions in less time.

Almost all operational errors and successful cyberattacks can be traced to faulty code. Responsible programming must therefore extend beyond individual programs to the whole set of programs that interoperate to accomplish a user's purpose, even in nondeterministic situations. Responsible programming could thus ensure that each program supports system principles concerning safety properties.

An early example involved Antonio Pizzarello, who co-founded a company in 1995 to commercialize a fault-detection-and-correction theory developed by Edsger W. Dijkstra et al. at the University of Texas. As described in Pizzarello et al.'s U.S. Patent No. 6,029,002, "Method and Apparatus for Analyzing Computer Code Using Weakest Precondition," the code-analysis method starts with a user-identified, unacceptable post-condition. An analyst writes the desired result, then calculates the weakest precondition until reaching a contradiction that highlights the statement containing the logic, arithmetic, or semantic fault. However, though Pizzarello's method was a technical success, it could not scale economically to larger systems of programs containing numerous possible paths, because it was prohibitively labor-intensive and slow, even for highly trained analysts.
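
For readers unfamiliar with the underlying idea, here is a minimal sketch of Dijkstra's weakest-precondition rule for assignment statements; this is my own illustration using sympy for symbolic substitution, not the patented method itself.

```python
# A minimal sketch of Dijkstra's weakest-precondition (wp) rule for
# assignments: wp(v := e, Q) = Q with every occurrence of v replaced by e.
# Illustrative only; sympy performs the substitution and simplification.
import sympy as sp

x, y = sp.symbols("x y")

def wp_assign(var, expr, post):
    """Weakest precondition of the assignment `var := expr` w.r.t. `post`."""
    return post.subs(var, expr)

# Program:  y := x + 1;  x := 2*y      Desired postcondition:  x > 4
post = sp.Gt(x, 4)
post = wp_assign(x, 2 * y, post)   # wp(x := 2*y, x > 4)   ==  2*y > 4
post = wp_assign(y, x + 1, post)   # wp(y := x+1, 2*y > 4) ==  2*(x+1) > 4
print(sp.simplify(post))           # x > 1: exactly the inputs guaranteeing
                                   # x > 4; any x <= 1 contradicts the goal.
```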

The promise of hardware for massively parallel, conditional processing prompted a complete reconceptualization in 2008; typical is the Micron Automata Processor (http://www.micron.com/about/innovations/automata-processing). A new software-integrity assessment method thus enables proofreading computer code as text, applying deep reasoning to the software's predicates to check logic, arithmetic, and semantic coherence at a constant, predictable rate of approximately 1 Gb/sec. Software faults are detected, systemwide, automatically.

Informal polls of software developers find they spend approximately 50% of their project time and budget defining, negotiating, and reworking program interfaces and interoperation agreements. They then waste approximately 40% of test time and budget awaiting diagnosis of, and fixes for, test aborts. Software-integrity assessment can preclude much of this wasted time and money. Moreover, software maintainers and developers may be able to find and nullify faults more quickly than cyberattackers are able to create them.

Jack Ring
Gilbert, AZ


CACM Administrator

The following letter was published in the Letters to the Editor in the October 2014 CACM (http://cacm.acm.org/magazines/2014/10/178772).
--CACM Administrator

Vinton G. Cerf's Cerf's Up column "Responsible Programming" (July 2014) raised the interesting question of whether it is responsible to produce software without using state-of-the-art defect-detection and validation tools. Research on such tools has been ongoing for decades, and despite progress made possible through improved SAT solvers and theorem provers, software-defect tools remain known primarily within their research community and are rarely used in development projects, open or closed source. Perhaps a more important question involves how computer science can accelerate the development of effective tools and motivate their widespread adoption.

The answer is legal, not technical. The standard software license, disclaiming all liability and suitability for any purpose, should be prohibited, with software subject to the same liability and suitability terms as any other manufactured product. If software is insecure or simply does not work, then the manufacturer must show it was constructed with sufficient care to avoid liability. This financial and legal pressure would eventually end egregious practices, including failure to use the best available tools and practices, shipping bug-ridden code, and using customers as beta testers.

The transition from the current state of software to where it should be will take time, financial investment, new ideas, and great effort. It cannot happen overnight, but it might never happen if software producers avoid assuming responsibility for the correct, secure operation of their products.

James Larus
Lausanne, Switzerland


