
Communications of the ACM


Changes in Computer Science Accreditation


In the short time since computing emerged as a profession, many new educational programs have been created. How can a prospective employer judge whether a program adequately prepares the candidate work force? How is a prospective student to distinguish among programs? Where can an institution find a proven framework for building a new program? During the past 16 years, the accreditation of computing programs has emerged as a useful process to help make these decisions.

Indeed, the accreditation infrastructure for computer science consolidated and achieved relative stability even as the computing discipline continued to evolve rapidly. There have been a number of significant changes recently in both the evaluative criteria for computer science and the structure of the accrediting body. This article explains the nature of those changes.

From its inception through the summer of 2001, the Computer Science Accreditation Commission (CSAC) was the body that set standards for the accreditation of undergraduate programs in computer science, assessed programs that desired accreditation, and granted accreditation to programs that met its standards. The Computing Sciences Accreditation Board (CSAB), the sponsoring board for the CSAC, was established in 1982 by the ACM and the IEEE Computer Society. Its formation was an outcome of several years of work by the Joint Task Force on Computer Science Program Accreditation. The CSAB's general purpose was to advance the development and practice of the computing sciences and to enhance the quality of educational programs in the computing sciences.

The CSAC granted its initial accreditation in June 1986. There are now 171 accredited computer science programs. Slightly more than one-half of the programs reside in Colleges or Schools of Arts and Sciences, while about one-third reside in Colleges or Schools of Engineering. A complete list of accredited programs can be found at www.abet.org/accredited_programs/CACWebsite.html.

The CSAC's initial accreditation criteria were strongly influenced by the state of accreditation at the time, most prominently represented by the work of the Engineering Accreditation Commission (EAC) of ABET. Over the next 16 years, the CSAC criteria underwent continual improvement. Their language was clarified, some criteria were made less prescriptive, and standards covering the social and ethical implications of computing and program assessment were introduced. The criteria remained, however, focused on program content and delivery.

Since the inception of computer science accreditation, there has been close collaboration between the CSAB (www.csab.org) and ABET (www.abet.org). For many years the CSAC and ABET's EAC cooperated on programs that qualified for accreditation by both commissions. In October 1998 several years of discussions between the two boards culminated in a memorandum of agreement to integrate CSAC accreditation activities into the ABET structure over a two-year period. Under the agreement the CSAC has been transformed into the Computing Accreditation Commission (CAC)—a new commission under ABET. As of July 2001 computer science accreditation responsibilities transitioned from CSAB to ABET's CAC and all CSAB-accredited programs became ABET-accredited programs.

In its new role, CSAB functions as a professional society for the computing disciplines. ACM and IEEE were the initial member organizations in CSAB. In 2001, the Association for Information Systems became a member of CSAB, and the Computing Accreditation Commission broadened its accreditation responsibilities to include information systems. CSAB is now the lead society within the ABET structure for the accreditation of computer science, information systems, and software engineering programs. CSAB is also a cooperating society for the accreditation of computer engineering programs.


Criteria Modernization

During the 1995–96 accreditation cycle the CSAC's Criteria Committee was chartered to compare the CSAC accreditation criteria to those used by other accreditation agencies (both discipline-specific and institutional) and to make a recommendation to the CSAC Executive Committee regarding the need for criteria modernization. This review was motivated in part by a change in society's traditional view of educational institutions. There was a new insistence on greater accountability for the results of the educational process and the relevance of the education of graduates. It was in this climate (June 1996) that the committee recommended that CSAC proceed with modernization.

The Criteria Committee also established a set of guiding principles for the modernization activity:

  • Retain the strengths of the existing criteria while incorporating the positive aspects of other approaches;
  • Clarify existing ambiguities;
  • Continue to recognize that the criteria are a statement of the minimum standards for accreditation; and
  • Avoid prescriptive statements unless they are required for expressing necessary minimums.

The criteria modernization effort spanned five years. During the development years the CSAC constituency periodically reviewed drafts of the criteria. Each comment received was cataloged and resolved. Initially members of the CSAC and the CSAB were the primary reviewers. However, during the 1997–98 accreditation cycle a draft of the criteria was broadly circulated for review to educators and practitioners. The draft of the criteria was also made publicly available on the CSAB Web site.

In January 1998 the CSAC Executive Committee approved the use of the new criteria in a two-year pilot program. The pilot program ended in July 2000. General usage of the new criteria began with the 2000–2001 accreditation cycle. Following the end of the ABET/CSAB integration period the new CSAC criteria were adopted by ABET as the CAC's computer science accreditation criteria.

The most visible differences between the old criteria and the new are the changes in structure and style. The old criteria document was written in a narrative style and consisted of nine sections. Two sections explained interpretation and philosophy. The other seven contained the evaluative criteria. Over the years work aids evolved that extracted the essential evaluative criteria from the narrative along with indicators of the importance of each criterion, for example, criteria that must be fulfilled, or criteria that should be fulfilled. These work aids were not publicly available, so it was difficult for institutions to interpret the criteria without appropriate training.

The new criteria document has been restructured and written in a different style to address these problems. Most striking is that there are now two documents: Criteria for Accrediting Computer Science Programs in the United States (the Criteria document) and a companion document referred to here as the Guidance. The first document contains the actual evaluative criteria. The second contains nonprescriptive information to help clarify the evaluative criteria. In both documents, the narrative style has been abandoned in favor of an explicit enumeration style.

Structurally, the new Criteria document is divided into seven major categories:

  • Objectives and Assessments
  • Student Support
  • Faculty
  • Curriculum
  • Laboratory and Computing Facilities
  • Institutional Support and Financial Resources
  • Institutional Facilities

Each Category begins with a statement of Intent. The Intent statements present the underlying principles associated with the Category. For a program to be accreditable, it must meet the Intent statement of every Category. The seven Intent statements appear in Figure 1.

In the Criteria document, each Intent statement is followed by a list of Standards. Standards describe how a computer science program can minimally meet the statement of Intent. The word "must" is used within each Standard to convey the expectation that the conditions of the Standard need to be satisfied in all cases. For a program to meet the Intent of a Category, it must satisfy all the Standards in that Category or demonstrate an alternative approach to achieving the Intent of the Category. The Standards from the Curriculum category are listed in Figure 2.

While flexibility of interpretation has always been a feature of the CSAC's accreditation criteria, the new criteria make this flexibility much more explicit. The new Criteria document is designed to be flexible enough to permit the expression of an institution's individual qualities and ideals and to stimulate creative and imaginative programs. Determining whether these new criteria are met is an explicit part of the training offered to Team Chairs and Program Evaluators.

To help institutions apply the criteria, the Guidance document provides statements of generally acknowledged ways to satisfy Standards. It is important to note that the Guidance document is nonprescriptive. Nor is it comprehensive: merely following the Guidance statements associated with a Standard does not guarantee the Standard is fully satisfied, and many Standards are not addressed in the Guidance at all.

The Guidance document is a tool for communicating traditional interpretations of Standards. It also provides institutions and evaluators a repository for common understandings of how new computing technologies, disciplines, and educational techniques will be evaluated. We expect, for example, to soon see Guidance statements relating to the usage of distance learning techniques in accreditable degree programs.





Changes in Content

The primary difference in content between the old and new criteria is the new Category of Objectives and Assessments. The Intent statement for the category states that the program must have a process for systematically setting program objectives and assessing how well they are met. In addition, the documented objectives must include expected outcomes for students who graduate from the program. The intended effect of these changes is to make the choice of a program's direction and evolution more deliberate and to provide a more objective basis for evaluating program relevance and effectiveness.

There are also a number of new Standards in some of the other categories. There are, for example, new standards relating to library capabilities, classrooms, and faculty offices.

Moreover, many quantitative statements from the old criteria have been moved into the new criteria's Guidance document. These Guidance statements support less prescriptive Standards that address the broader desired characteristics of a computer science program. Quantitative standards were retained only where it was necessary to express meaningful minimums. This is particularly evident in the Curriculum category.

It is appropriate to draw some comparisons between the CAC Criteria document and the EAC's Criteria 2000 document. The most striking area for comparison is the extent to which the two commissions have embraced outcome-based learning. The EAC's Criteria 2000 orients its approach around outcome-based learning. The CAC's Criteria encompasses setting broader program objectives, of which student outcomes are a part, and assessing whether programs are meeting those objectives. This approach integrates objective-setting into a framework that should be familiar to those with experience applying the previous computer science accreditation criteria. Both commissions' criteria give programs the opportunity to gain accreditation by meeting a small number of broad Intent statements. Computer science programs that have embraced objectives-based program design should find the CAC criteria provide a convenient context for showing they are accreditable. This will be extremely useful to programs seeking accreditation from both commissions, such as programs residing in colleges of engineering.


Criteria Implementation

The experience from the two years of pilots and the first two years of full deployment provided two major lessons: the new criteria work as intended, and the greatest anxiety among faculty concerns how to implement an effective program of objectives and assessment.

The criteria worked as intended; the visiting teams and institutions took the new approach to heart, and the process worked smoothly. The teams reported, as hoped, that they relied primarily on the Intent and Standards, and that the Guidance played a subordinate role in the evaluation process. Additionally, we found that the institutions made good efforts at implementing the newer features of the criteria, especially the Objectives and Assessments Category. We found reports to the institutions were easier to write and easier to read, communicating program strengths and issues much more clearly. In summary, the results have been as positive as we had hoped.

Despite their anxiety, the institutions have done a good job preparing for evaluations against the new Objectives and Assessments category. Most programs found they already had many of the elements in place; the challenge was to put those elements into a more rigorous and comprehensive framework. We offer some suggestions to other programs based on these early experiences:

  • Establish a process for setting objectives and assessing results. The process should specify such things as the major steps and their sequencing, who is involved in each step, the inputs and outputs of each step, and the principal activities within each step. The representation of this process need not be elaborate, but it should be written down.
  • Document what happens as a result of executing the process and keep a repository of this documentation. This might include summaries of surveys and other assessment instruments; and minutes of meetings during which assessment results and program improvements were considered and actions taken.
  • Have broad participation in the process. This is a job for the entire faculty and any advisory boards you might form. An effective improvement program should touch most aspects of the program.
  • Finally, look for help on campus. With a growing emphasis on program assessment, many institutions now have the equivalent of an office of assessment. These organizations can provide valuable assistance and save time and effort.


Conclusion

While the changes in accreditation are significant, we believe this modernization effort is an evolution rather than a revolution. Every effort has been made to keep the aspects of the system that worked while updating to keep pace with the rapid growth of the discipline and increasing demands for program accountability. We believe this effort benefits all constituents of accreditation: the institutions, the students, industry, and society.


Authors

Lawrence G. Jones ([email protected]) is a senior member of the technical staff of the Software Engineering Institute at Carnegie Mellon University, Colorado Springs, CO. He is the vice chair of the Computing Accreditation Commission of ABET.

Arthur L. Price ([email protected]) is an independent software process and quality consultant in Westminster, CO. He is a member of the Computing Accreditation Commission of ABET.


Figures

F1Figure 1. Criteria 2000 intent statement.

F2Figure 2. Standards in the curriculum category.



©2002 ACM  0002-0782/02/0800  $5.00

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.

The Digital Library is published by the Association for Computing Machinery. Copyright © 2002 ACM, Inc.

