
Communications of the ACM

Research highlights

Technical Perspective: Learning To Do Program Verification


When you decide to use a piece of software, how do you know it will do what you need it to do or what it claims to do? Will it even be safe to run? Will it interfere with other software you already have?

As a software engineer developing that piece of software, what can you do to ensure the software will be ready in time and will meet your quality standards?

A long-standing dream of ideal software development calls for the use of program verification as part of the answers to these questions. Program verification is the process by which one develops a mathematical proof that the software satisfies its functional specification. This dream had already taken root by the 1970s.

Since then, and especially in the last decade, the technology behind program verification has become an important part of software development practice. At Microsoft alone, a number of symbolic execution techniques, including abstract interpretation, counterexample-guided predicate abstraction, and satisfiability-modulo-theories solving, are used routinely to enhance the testing of software, in some cases even mathematically verifying the absence of certain kinds of software defects.
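
As a rough illustration of the flavor of these techniques, the sketch below implements a tiny abstract interpreter over an interval domain in Haskell. The expression language, the interval domain, and the bounds-checking example are all invented for this illustration; they do not correspond to the internals of any particular industrial tool.

    -- A minimal sketch of abstract interpretation over an interval domain.
    -- Toy example only; not the code of any real analysis tool.
    module Intervals where

    -- A toy expression language.
    data Expr = Lit Int
              | Var String
              | Add Expr Expr
              | Mul Expr Expr
      deriving Show

    -- Abstract values: closed integer intervals [lo, hi].
    data Interval = Interval { lo :: Int, hi :: Int } deriving Show

    -- Abstract semantics of the arithmetic operators.
    addI, mulI :: Interval -> Interval -> Interval
    addI (Interval a b) (Interval c d) = Interval (a + c) (b + d)
    mulI (Interval a b) (Interval c d) =
      let ps = [a * c, a * d, b * c, b * d]
      in  Interval (minimum ps) (maximum ps)

    -- An abstract environment maps each variable to an interval.
    type AbsEnv = [(String, Interval)]

    -- Abstract evaluation: soundly over-approximates every concrete run.
    absEval :: AbsEnv -> Expr -> Interval
    absEval _   (Lit n)     = Interval n n
    absEval env (Var x)     = maybe (error ("unbound variable " ++ x)) id (lookup x env)
    absEval env (Add e1 e2) = addI (absEval env e1) (absEval env e2)
    absEval env (Mul e1 e2) = mulI (absEval env e1) (absEval env e2)

    -- Example: show that an index of the form 2*i + 1 stays below 64
    -- whenever 0 <= i <= 31, without enumerating the individual runs.
    main :: IO ()
    main = do
      let idx = absEval [("i", Interval 0 31)]
                        (Add (Mul (Lit 2) (Var "i")) (Lit 1))
      putStrLn ("index range: " ++ show idx)   -- Interval {lo = 1, hi = 63}
      putStrLn ("within 64:   " ++ show (lo idx >= 0 && hi idx < 64))

The analysis computes a single range that over-approximates every concrete run, which is what allows such a tool to report the absence of a class of defects rather than merely the absence of failures in the tests that happened to be run.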

But what we use today pales in comparison to the grand dreams of the 1970s. Why? Is it not possible to provide rigorous proofs of modern software? Is it too difficult to capture in functional specifications the intended behavior of software? Is full program verification only for small algorithms and toy examples? Are hardcore programmers and verification experts not able to find common goals to work toward? Is it not cost-effective to insist that every detail of a piece of software is correct?

The following work by Gerwin Klein et al. is a landmark in bringing functional-correctness verification into practice. The authors have produced a machine-checkable proof of correctness for the microkernel of an operating system. For a user of the operating system, this verification provides a number of significant benefits; for example, the assurance that no hostile application running under this operating system can subvert the integrity of the kernel through a buffer-overrun attack. For the software engineers involved, the verification brings the benefit of a code analysis that far surpasses what testing alone can achieve.

The seL4 verification connects three descriptions of the system: an abstract specification of the kernel that describes its functional behavior, a high-level prototype implementation that introduces further details of how the kernel performs its operations, and a low-level, hand-optimized implementation that deals with the nitty-gritty of the kernel's operation. A novel aspect of this project is how the middle layer was used as a steppingstone not just for the verification, but also for the actual design and implementation of the system itself. This has the great advantage that the verification process has a chance to influence the design. As the paper reports, this led to a large number of changes in the top two layers, boosting the productivity of both the design team and the verification team.
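
To make the layering concrete, the sketch below shows, in Haskell, how an abstract specification and an executable prototype of a single toy operation can be related by refinement. (The prototype layer of the seL4 project was itself written in Haskell, which motivates the choice of language here.) The "allocate a free slot" operation, the state representation, and the names allocSpec, allocImpl, and refines are hypothetical simplifications invented for this illustration; the brute-force check at the end stands in for the machine-checked refinement proofs carried out in Isabelle/HOL in the actual project.

    -- A minimal sketch of the layered refinement idea, using a toy
    -- "allocate a free slot" operation; far simpler than anything in seL4.
    module Refinement where

    import Data.List ((\\))

    -- A toy kernel state: the slots currently in use, out of slots 0..7.
    type State = [Int]

    allSlots :: [Int]
    allSlots = [0 .. 7]

    -- Abstract specification layer: allocation may return *any* free slot,
    -- so the spec is the set of acceptable (slot, new state) outcomes.
    allocSpec :: State -> [(Int, State)]
    allocSpec used = [ (s, s : used) | s <- allSlots \\ used ]

    -- Executable prototype layer: a concrete design decision, namely
    -- "always hand out the lowest free slot".
    allocImpl :: State -> Maybe (Int, State)
    allocImpl used = case allSlots \\ used of
      []      -> Nothing
      (s : _) -> Just (s, s : used)

    -- Refinement obligation: every behavior of the implementation is one
    -- the specification allows. Here it is checked by brute force over a
    -- few small states; in the seL4 project the corresponding obligations
    -- are proved once and for all in the theorem prover.
    refines :: State -> Bool
    refines used = case allocImpl used of
      Nothing      -> null (allocSpec used)
      Just outcome -> outcome `elem` allocSpec used

    main :: IO ()
    main = print (all refines [[], [0], [0, 1, 2], [3, 5], allSlots])

The point of the exercise is the shape of the obligation: every behavior the concrete layer exhibits must be one the abstract layer allows, which is what lets properties proved of the specification carry down to the implementation.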




The authors are careful to list not only positive implications of the verification, but also the assumptions upon which the verification rests. This is important, for all verification is performed at some level of abstraction. For example, verifying a program at the level of abstraction provided by a programming language does not say anything about hardware failures. Verification is not an absolute; what it seeks to do is offer detailed aid for programmers at the level of abstraction at which they are working.

A project like the seL4 verification is not easy to pull off. It takes a strong vision, as well as a significant amount of work, expertise, and persistence. In short, it is a big bet. I applaud the team for undertaking it, and congratulate them on delivering. Any doubts as to the technical feasibility of such a project should by now have been removed.

The question about cost-effectiveness, however, remains. Some may argue that the end result of the verification—a level of assurance that could not have been obtained using more traditional methods—has already made up for the effort expended. Others may balk at the 20 person-years to complete the proof or at the ratio of 200,000 lines of proof script to 6,000 lines of eventual C code. I would like to offer a different perspective on these numbers. First, they provide a benchmark against which to compare future work. I would expect that in another decade, a similar project will take less effort and will involve a larger degree of automation. Second, the effort has resulted not just in an impressive engineering achievement, but also in an appreciable amount of scientific learning. It is through pioneering and repeated efforts like this one that we will learn how to apply full program verification on a more regular basis.


Author

K. Rustan M. Leino ([email protected]) is a Principal Researcher at Microsoft Research, Redmond, WA.


Footnotes

DOI: http://doi.acm.org/10.1145/1743546.1743573


©2010 ACM  0001-0782/10/0600  $10.00




 
