Software developers encounter many ghosts and figments of the imagination when building reliable, inherently useful software systems for business. Apparitions take the form of dissatisfied clients claiming their developers did not listen to, understand, or fulfill their expressed requirements.
Clients seek software to perform specific functions, freeing them for more lucrative pursuits. Sometimes software professionals don't have quite the right skills or background to understand the business requirements or apply the right tools to model and produce the corresponding systems. In 30 years in business and software engineering, I have found that Generally Accepted Software Engineering Practices (GASEP) are not consistently recognized, measured, or applied. Rather, practices are more generally applied from ghostly figments of individual recollections and past personal experience.
Much like the teeming, seething, gelatinous evil surging under the streets of a Gotham City-like New York in the film Ghostbusters,1 the software engineering industry has become a labyrinth of varied practices and procedures. There is little consensus but many points of view on how a software consultant or professional might analyze a client's requirements. The industry needs a way to verify skills directly transferable to practice [3, 8].
Consider that other trusted industries examine individual members for professional competency in civil, mechanical, and electrical engineering, audit and accountancy, legal and medical specialties, and even network management. Recent moves by software industry societies indicate a dawning inclination to reclaim credibility and redefine the state of the practice in software.
Symptoms of distrust in software development are evident when one takes on a new protégé. As a mentor-manager in my previous working life and now as an instructor of future software analysts and software engineers, I often do not select my apprentices. They arrive wanting to hear words of wisdom. Some expect that after a period of about six months (the duration of a typical short-term project), they will have learned all there is to know about managing complex requirements, as well as how to identify and apply software engineering techniques to a particular project.
Employers expect that mentors, having learned best practices over time and at great financial cost to their past clients, will be able to convert inexperienced or untested practitioners into solid software professionals. They expect it to occur without benefit of formal examination in software engineering or business practices.
Unfortunately, few people developing software today have formally studied business or information systems. One might claim software engineers and analysts need neither formalized training nor certification to prove their worth. Clients and managers disagree. Even employees feel training "is an experience you take away with you because you are learning on the project" [5]. More realistic testimonials come from clients themselves, including "Clients [have] a level of assurance ... and the graduate a level of credibility ..." [5].
Competencies have been defined as "a set of observable performance dimensions, including knowledge, skills, attitudes, and behaviors, as well as ... collective team, process, and organisational capabilities that are linked to high performance and provide the organisation with sustainable competitive advantage" [1]. Thus, the mentor/manager is not burdened with teaching mechanics on the job, at the expense of clients. Rather, the mentor guides an individual in effective techniques for requirements analysis, process and data specification, modeling, and design of systems that satisfy customer expectations. Through applied learning, the individual develops repeatable competencies in the use of computer-aided software engineering tools, specific modeling languages, modern programming, and other tools of software development.
Many other industries have qualification procedures and collected practices. Bodies of experts, including the IEEE and the Financial Accounting Standards Board (FASB), oversee the entry of new practitioners into the ranks of civil engineers, accountants, and auditors. Trust in business has suffered of late in some areas, which is perhaps why establishing qualifications for software engineering is so compelling. How can an industry survive or continue to demand billions of dollars of public and private spending without some concrete understanding of and consensus on how it operates?
Consider the observation that traditional engineering societies (such as Tau Beta Pi) limit the entry of computer scientists to membership in order to "prevent the dilution of the word engineer" [10]. Is the software industry being ostracized? Shouldn't software professionals seek to validate the idea that their specialized competencies are distinct from other areas of engineering?
"We are familiar with the common sense that professionals deliver specialized knowledge to paying customers," wrote Peter Denning and Robert Dunham [2]. "What is missing from this common sense is that we also have to satisfy customers, and we do so by creating value for them." Companies that cannot deliver value will wind up wondering where their customers have gone; Denning and Dunham, thus call for the "reinterpretation of what we mean by a professional."
Appropriate standards and practices for software engineering are available; they need only be gleaned and related. Good models have evolved, both de facto (endorsed through industry adoption and practical use) and de jure (backed by authentication from national and international standards bodies), but they are applied ad hoc as substitutes for GASEP. In fact, models meant for organizations are twisted and retrofitted to apply to individual software competency.
Consider the appropriateness of the Malcolm Baldrige National Quality Award to firms engaged in software engineering. The U.S. Congress established the award in 1987; it's now a program of the National Institute of Standards and Technology (NIST). The award application contains self-assessment questions, followed by a study of the candidate organization. The candidate organization selects key factors to be evaluated, irrespective of whether they are the right set of practices for the industry in which it claims to demonstrate recognition-worthy quality.
The award does not validate the efficacy of specific practices (such as software engineering) in the candidate organization. It looks for artifacts to evaluate competency, including leadership, planning, customer awareness, culture, and other qualities of citizenship. Are they the hallmarks of a firm that consistently builds solid, reliable, and inherently useful software? Indicators include measurements, systems, and processes, but the total points awarded to these relatively more engineering-like factors are minimal compared to the touchy-feely categories. All in all, the one-size-fits-all format of this standard, albeit backed by NIST, does not support the rigorous evaluation of individual professionals needed in GASEP.
Consider the International Organization for Standardization's ISO 9000 quality models that were developed for manufacturing production processes. Many studies suggest ISO 9000 standardization leads to a higher-quality process of software engineering. The problem behind such retrofitting is that, without an endorsed GASEP, clients regard ISO 9000 certification as a qualification for software production, which is not a form of manufacturing. These clients hope they can rely on the company's version of the truth, as evidenced by the claim of a capability or quality model that may not indicate the competency of individual employees.
Finally, consider the Software Engineering Institute (SEI) Capability Maturity Model (CMM), which enumerates key process areas and qualifying practices for measuring an organization's level of competence. SEI's goal is "The right software, delivered defect free, on time and on cost, every time." The model is not a standard but has been adopted as a ghostbuster by the U.S. government; several agencies require potential contractors to demonstrate their organizational competency in software engineering, with qualification at CMM level three or higher.
Watts Humphrey makes the point [9] that personal capabilities are the building blocks an organization must have to achieve maturity beyond CMM level three; that is, the individual ultimately drives sustainable corporate achievement of software capabilities. However, if models of corporate excellence are used throughout the software development industry, how can the individuals who make excellence possible be certified? And where does the individual seeking employment, recognition, or career advancement go to become certified?
The SEI served an interim need with its Personal Software Process training. Positive results were reported, including substantially higher productivity and reduced error rates in software design [4], highlighting the effectiveness of individuals working with a form of GASEP. Examinations for individual practitioners on systems analysis and design capabilities will likely lead to better software and ultimately to on-time, on-budget, on-target systems.
As a longtime mentor and now a professor teaching software analysts and designers, I find it advisable to have a gospel of software engineering practices endorsed by a recognized accrediting body for software professionals. A ghost-free world would emerge in which prospective recruits and protégés had proved their competency for the job by passing comprehensive examinations. For example, the rigorous CPA exam requires that protégés work for two years (states have varying requirements), then study Generally Accepted Accounting Principles for several months, and finally take a multi-day exam to demonstrate their capabilities. Many prospective CPAs initially fail the exam before finally passing. These and other exams qualify professionals to carry out securities transactions, manage financial portfolios, analyze clients' books, and attest to the veracity of the accounting.
Perhaps our counterparts have something there. For years as a graduate student in business, then later in industrial engineering, and finally as a doctoral student in information systems engineering, I heard professors and employers bemoan the lack of engineering discipline for software analysis, design, and development. Universities taught the lessons of Frederick Brooks, Watts Humphrey, James Martin, Tom Lister, Tom DeMarco, Ed Yourdon, Stephen Andriole, and Barry Boehm.
We competed for multimillion-dollar contracts using Microsoft Word and PowerPoint presentations to tell the story. With no external accrediting authority to verify the knowledge, experience, and capabilities of software engineering practitioners, baffled past clients were repeatedly called on to affirm the stellar job done on earlier projects, along with the capability of our software professionals.
There are numerous instances of software debacles (such as at the U.S. Patent and Trademark Office) where the contractor missed schedules, was accused of not delivering on expectations, and was described as totally inadequate for the job of organizing millions of records. Purported failures of highly respected software and services firms 20 years ago continue today. Huge cost overruns and a sad travesty of schedule delays taint the Internal Revenue Service modernization effort and the National Aeronautics and Space Administration space station program; billions of dollars were spent, and the projects ran years behind schedule. Are these results due to customers not adequately stating their needs? Or were ghosts lurking in vague requirements or in personnel with inconsistent qualifications for performing software engineering?
If shareholders invest hard-earned dollars in a publicly traded company based on confidence in the auditor qualified by CPA exams, why not define a set of certifying examinations to assure clients that software engineers are individually qualified to build software? CPA certifications began in 1917, and it was many years before they were broadly used as qualifying instruments for hiring accounting professionals. This shift was largely a result of the massive October 1929 stock market crash. Surely there have been enough software project failures to acknowledge the need for GASEP.
Inasmuch as the profession is based to a large degree on trust, clients believe professional software developers will understand requirements and be able to translate them into working solutions. Novell has been certifying network engineers for decades, and Microsoft certifies professionals in its various tools and techniques [6, 7]. The marketplace has responded with higher salaries and jobs for those with the certifications and experience.
Based on a sampling of 30,000 information technology professionals across industries and governments, "those individuals with technical certifications related to project management, database management, and security have seen the largest increases in bonuses since 2000" [7]. A unified GASEP is needed to produce similar results for software professionals.
In the years following the 1987 stock market crash, software employers "found themselves hiring candidates who did not truly meet the actual skill requirements and providing these individuals with on-the-job training" [11]. This costly and risky situation is related to trust and performance. Employers need performance-based testing. For example, the state of Texas now tests software engineers based on defined practices and experience. And, since 2002, the IEEE has conducted examinations for Certified Software Development Professionals (CSDP), producing several hundred certified professionals. The CSDP examination is based largely on draft Version 1.0 of the Software Engineering Body of Knowledge (SWEBOK). After looking through the questions, software engineers are likely to feel this collection of practices would serve as an initial GASEP.
As more certification examinations are conducted internationally, many improvements and modifications to the SWEBOK are likely to be recommended for inclusion in a GASEP. This is an industry advance toward capturing the knowledge and essence of software analysis and design. SWEBOK guides software professionals to best practices. Progressing toward a comprehensive GASEP, the institute or agency that addresses evolution in software practices will be acknowledged for having busted the ghosts that sustained the black art of software production.
GASEP is like a power cell, continually recharging the credibility of software professionals. The official GASEP would maintain the common encyclopedia of terms, practices, and tools. Imagine prospective job candidates coming to interviews armed with their "software engineering qualifications"; an interviewer could ask specific questions about using CASE tools, quality reviews, integrated testing, systems architecture, and modeling languages (such as the Unified Modeling Language). The answers might give everyone concerned confidence in the individual's abilities.
We can bust the ghosts by doing away with guesswork about what candidates may know, demanding instead that they demonstrate specific knowledge and abilities by presenting their certifications. If they have not passed the examinations, the Ghostbusters are ready.
Some would argue that the SWEBOK project is a political maneuver developed to gain business funding for large government contractors (such as Boeing, Raytheon, and Mitre) [10]. However, the trial version of processes and practices released in 2001 represents a broad cross-section of software practices. In 1999, ACM expressed concern over the formal endorsement of SWEBOK in licensing software engineers for safety-critical applications. It could lead to premature promotion and possibly litigation for unwary software engineers, rather than leaving responsibility with the firm that employs them. Many managers and software professionals feel that SWEBOK gives their industry a starting GASEP for those who would practice this eclectic science.
The SWEBOK objectives state: "in other engineering disciplines, the accreditation of university curricula and the licensing and certification of practicing professionals are taken very seriously. These activities are seen as critical to the constant upgrading of professionals and, hence, the improvement of the level of professional practice. Recognizing a core body of knowledge is pivotal to the development and accreditation of university curricula and the licensing and certification of professionals." These objectives are worthy for both academic researchers and the profession. Failure to endorse SWEBOK, along with the dearth of published uses, reflects poorly on the industry.
Because practitioners have so much at stake in this issue, it's time to establish a Software Engineering Standards Board (SESB) to officially endorse a GASEP. The SESB would begin by baselining SWEBOK and establishing a formal revision process. Just as the accounting industry continually reviews accounting cases and emerging techniques, with the FASB issuing pronouncements and rulings, the software development industry should have a formal review and update process.
The software industry accounts for billions of dollars of expenditure annually and, as such, warrants standardized practices. The ISO endorsement review and IEEE examinations benefit practitioners and clients alike. By focusing on GASEP, the software industry would take at least small steps away from its historical inertia and reluctance.
1. Baruch, Y. and Peiperl, M. The impact of an MBA on graduate careers. Hum. Res. Manage. J. 10, 2 (2000), 69–90.
2. Denning, P. and Dunham, R. The missing customer. Commun. ACM 46, 3 (Mar. 2003), 19–23.
3. Harrison, W. The marriage of research and practice. IEEE Software 20, 2 (Mar./Apr. 2003), 5–7.
4. Kamatar, J. and Hayes, W. An experience report on the personal software process. IEEE Software 17, 6 (Nov./Dec. 2000), 85–89.
5. Marketing. What's an MBA worth? (Feb. 18, 1999), 33–35.
6. Musthaler, L. Make a good career move: Get certified. Network World (Mar. 17, 1997), 37.
7. Piazza, P. Tech talk. Secur. Manage. (July 2002).
8. Turner, R. Seven pitfalls to avoid in the hunt for best practices. IEEE Software 20, 1 (Jan./Feb. 2003), 67–69.
9. Venkataraman, B. and Ward, W., Jr. An introduction to software quality. Technical Report ITL-99-4. U.S. Army Corps of Engineers, June 1999; itl.erdc.usace.army.mil/pdf/tritl994.pdf.
10. Wing, M. Standing up for ourselves. ACM SIGSOFT Softw. Eng. Notes 28, 2 (Mar. 2003), 4–5.
11. Yost, D. Certification exams: Changing for the better? Certification Magazine (Mar. 2003), 13; www.certmag.com.
1 In the movie Ghostbusters (1984), three researchers of the paranormal are kicked out of a university and go into business catching the ghosts and demons haunting New York City.