
Communications of the ACM

Technical opinion

Conferences Under Scrutiny


A recent event that attracted media attention and was discussed extensively on Web pages and blogs concerns a computer program designed to generate random text and thereby produce nonsensical research papers. One of the generated papers was accepted for presentation as a non-reviewed conference paper [1, 4, 5]. The affair has raised several important issues, among them the question of how to assemble a reasonable system of scrutiny for computer science conferences. The aim of such a system is to assure others (the general public) that computer science conferencing, as an important professional apparatus, has established standards of quality, in a manner similar to educational accreditation; any deviation from those standards can then be attributed to an individual event rather than to the profession at large. According to the Department of Computer Science at the National University of Singapore, "in computer science, conferences are the primary means for communicating research results" [8].
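Programs of the kind described typically generate their nonsense by randomly expanding a context-free grammar of academic-sounding phrases. The following sketch illustrates the general technique only; the grammar, vocabulary, and function names here are invented for illustration and are far smaller than what any real paper generator uses.

```python
import random

# Toy context-free grammar: each nonterminal maps to a list of possible
# expansions. All vocabulary is invented for illustration.
GRAMMAR = {
    "SENTENCE": [["We", "present", "NOUN_PHRASE", "for", "NOUN_PHRASE", "."]],
    "NOUN_PHRASE": [["ADJ", "NOUN"], ["NOUN"]],
    "ADJ": [["scalable"], ["stochastic"], ["event-driven"]],
    "NOUN": [["methodology"], ["framework"], ["algorithm"]],
}

def expand(symbol: str, rng: random.Random) -> str:
    """Recursively expand a grammar symbol into a string of terminal words."""
    if symbol not in GRAMMAR:  # terminal word: emit as-is
        return symbol
    production = rng.choice(GRAMMAR[symbol])
    return " ".join(expand(s, rng) for s in production)

if __name__ == "__main__":
    # Every run yields a grammatical but meaningless sentence.
    print(expand("SENTENCE", random.Random()))
```

Because every output is syntactically well formed, such text can pass a superficial reading, which is precisely why the absence of genuine review matters.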


What Needs To Be Done?

In reporting the circumstances of the non-reviewed paper, some media articles maintained that it has become impossible to distinguish a valid research paper from a hoax and claimed that "high-tech computer conferences are so loaded with jargon that people are only pretending to understand anything" [10]. Clearly, quality assurance in computer science conferences is being questioned. The affair undermines the computing profession and diminishes trust, both among computer scientists and between computer scientists and the public.

I propose a systematic understanding of the affair as a phenomenon that deserves analysis and a solution. This includes briefly reviewing scholarly publishing, that is, examining the peer-review process; developing a model that reflects its scrutiny; and proposing a possible solution to the phenomenon the non-reviewed paper indicates.


Responsibility and Ethics

Professionals must assess their approaches regularly to determine which ones deserve continued employment and further improvement. One of the key scientific professional concerns is publishing, which has been the subject of several controversies.

Professionalism entails relationships built around responsibility to society. While holding the profession to higher ethical standards is very important, what is needed are methods to improve current practices, rather than an emphasis on the ethics of good behavior. Professional societies are responsible for ensuring that practices are not left solely to private ethics. This community-based predilection moves any solution toward an institutional, professional setting rather than a purely ethical one.

Even though the controversy does not directly involve the ACM conferences, it affects the computing profession and therefore ACM as "the first society in computing." Professionalism gives the profession justification to maintain not only material benefits, but also social power, such as monopoly of competence [6]. This professional power entails a special kind of responsibility, including concern for professional standards at large.


Peer Reviewing

One of the most standards-sensitive notions is refereeing or peer reviewing. "Refereeing does provide an imprimatur alleging a paper has survived a certain rite of passage. More important, refereeing is a mechanism for selecting preferred papers from potentially suitable papers" [9]. The peer-review system is currently the foundation of scholarly communication. It is based on the interaction of three main agents: the researcher in the role of creator of the manuscript; the editor in the role of primary mediator; and the reviewer in the role of quality-assurance provider.

Several alternative peer-review models have been suggested, including the decoupling of the relationship between editors and the peer-review process [10]. For example, the interactive journal includes a review process where a paper is reviewed in an open forum, and then undergoes the scrutiny of the standard peer-review process. The Berkeley Electronic Press (bepress) uses a refereeing process that is semi-independent from journal submissions. The Internet has introduced several alternatives to the classical peer-review process, including combined editorial control and external review, and an informal process where anybody can write a review [2].

"Peer review is to science what democracy is to politics. It's not the most efficient mechanism, but it's the least corruptible" [7]. Nevertheless, there is a need to reflect on the nature of such a system. Why does it sometimes seem to be strained by such phenomena as lax standards? Proposed alternative refereeing systems do not introduce a basic understanding of the phenomenon.


The Scrutiny-Based Model

Within the standard peer-review framework, I propose a scrutiny-based model with the three classical agents (researcher, editor, and reviewer), as shown in Figure 1. Normally, each participates in the scrutiny process of the refereeing system according to its particular role. However, certain circumstances change an agent's role.

The notion of "scrutiny" is related to assessment, auditing, examination, and monitoring; all are characteristics of the peer-review system. The basic task in such a system is to detect flaws in methodology, design, claims of importance, and so forth. The scrutinizing process involves a level of hold-to-account, control, and surveillance. The bold lines in Figure 1 establish the scrutiny continuum performed by agents, which ranges from lax scrutiny (-1) to nit-picking procedures (1).

The shaded middle circle on the scrutiny continuum indicates the scrutiny margin of variations in an acceptable reviewing system. We assume the common-acceptability notion where reviewers work according to a variation of the golden rule: treat all manuscripts in the manner you would want your own to be treated [3].
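The continuum and its acceptability margin can be sketched as a small data model. This is only an illustration of the idea; the article does not quantify the margin, so the numeric bounds below are assumptions, and the class and field names are invented.

```python
from dataclasses import dataclass

# Assumed bounds for the "acceptable" shaded margin on the continuum;
# the model itself does not fix these values.
MARGIN_LOW, MARGIN_HIGH = -0.3, 0.3

@dataclass
class Agent:
    """An agent in the peer-review model: researcher, editor, or reviewer."""
    role: str
    scrutiny: float  # position on the continuum, from -1 (lax) to 1 (nit-picking)

    def classify(self) -> str:
        """Map a scrutiny level to the article's three regions."""
        if self.scrutiny < MARGIN_LOW:
            return "mediocre"    # drifting toward lax scrutiny (Point -1)
        if self.scrutiny > MARGIN_HIGH:
            return "saturated"   # drifting toward extreme scrutiny (Point 1)
        return "acceptable"      # within the shaded margin

reviewer = Agent(role="reviewer", scrutiny=0.8)
print(reviewer.classify())  # prints: saturated
```

The point of the sketch is that "acceptable" reviewing is a band, not a single point: both under- and over-scrutiny fall outside it.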

As a scrutinizer, a researcher examines and criticizes his or her own research; examines and criticizes other researchers' works (notice the double reflexive arrows of the researcher node in Figure 1); and may question referee and editor decisions. However, in an ideal situation, the researcher is unconcerned with the direct scrutiny of the peer-review system and concentrates on creating research materials based on self-scrutiny and the scrutiny of other researchers. Consequently, the researcher's direct role in scrutinizing the referees and editors is minor and indicated by dotted arrows in Figure 1.


A mechanism can be designed to permit the assessment and monitoring of an acceptable level of conferencing with an appropriate peer-review system.


For reviewers, the scrutiny continuum represents the reviewer's level of scrutiny. The subject of scrutiny in this case is the research material. The shaded circle represents an ideal reviewer. Point -1 represents a mediocre reviewer and Point 1 represents a saturated reviewer, meaning a scrutinizer who is closer to extreme or maximum scrutiny in the refereeing process. Editors scrutinize the work of researchers and reviewers.

A researcher may become "saturated" by factors that push him or her toward Point 1. For example, the researcher may feel threatened by the increased number of pseudo-researchers, and thus overscrutinize the refereeing process. This saturation reflects dissatisfaction with the scrutiny levels of reviewers and editors; in this case, according to one blog, "the researchers [are] fighting back" (see www.rereviewed.com/roguesemiotics/?m=200504). Figure 2 illustrates the pressures that shift researchers toward a high-level scrutinizing role.

In the case of the non-reviewed paper, the researchers felt the pressure of what they considered substandard conferencing: editing was judged lax and refereeing nonexistent (indicated by the dotted lines). This triggered a move from the researcher's role to scrutinizing the editors and reviewers (indicated by the bold lines). The main attack is directed at the editors and organizers, the gatekeepers who guarantee the quality of what is published.


Accreditation

Conferences seem to be more susceptible to controversy because of their uneven quality and the diversity of the evaluation procedures. So what can we do to stabilize the peer-review system? Possible options include:

  • An institutional evaluation mechanism for conferences, offered by reputable computer societies such as ACM and IEEE: Such an option may already exist, but it needs explicit recognition as an independent process. It lacks the scope of application necessary for a professionwide solution to the problem under consideration, and it does not address issues of comparability in the field as a whole. These shortcomings could be eased with bilateral and/or multilateral institutional agreements for mutual recognition.
  • An accreditation agency for conferences: A meta-agency could be established with the ability to accredit on demand and to provide a common set of standards for scientific conferencing.

An accreditation meta-agency is, in my opinion, the best of these options. A mechanism can be designed to permit the assessment and monitoring of an acceptable level of conferencing with an appropriate peer-review system. This regulatory mechanism is an evaluation based on agreed standards, resulting in a formal, public recognition of a conference. It is not a replacement of "trust culture" by an "accountability or audit culture" [3]. Rather, it is the missing piece required for an incomplete system, as shown in Figure 3.

In Figure 1, the only unscrutinized entity is the editor. The accreditor in Figure 3 acts as a fourth scrutinizer in the peer-review system and thus provides an additional procedural safeguard. This certainly would ease researchers' apprehension about the pseudo-research invasion of their territory. There would be specific criteria, as in the case of unaccredited academic programs, for distinguishing accredited from unaccredited conferences.


About Accreditation

The principal objective of a conference accreditation process is to ensure that computer science conferences operate according to specified standards. Accreditation is usually designed to endorse quality and accountability and involves ongoing monitoring as well as de-accreditation.

An accreditation system for computer conferences must be perceived as legitimate by a considerable number of computer science communities. Several basic characteristics can be specified at the start of this endeavor, such as uniformity, transparency, objectivity, fair procedures, and compliance with evaluation. The system should also allow for diversity and the preservation of professional identity.

The basic functions of such a mechanism are approving conferences recognized as having the appropriate level of professionalism, dealing with complaints against accredited conferences, and imposing sanctions where appropriate. A conference assessment includes examining the conference's implementation, monitoring its performance, and auditing its application of professional and ethical guidelines.


References

1. ABC News online. Scientific conference falls for gibberish prank (Apr. 15, 2005); www.abc.net.au/news/newsitems/200504/s1345732.htm.

2. Arms, W.Y. What are the alternatives to peer review? Quality control in scholarly publishing on the Web. The Journal of Electronic Publishing 8, 1 (Aug. 2002); www.press.umich.edu/jep/08-01/arms.html.

3. Beno, D., Kevin, L., and Hall, J. How to review a paper. Advances in Physiology Education 27, 2 (2003), 47–52; www.tropica.us/science-education/how-to-review-a-paper.html.

4. Ball, P. Computer conference welcomes gobbledegook paper. Nature 434, 946 (Apr. 21, 2005); www.nature.com/nature/journal/v434/n7036/full/nature03653.html.

5. Farrell, F. Computer gibberish accepted by boffins. The Inquirer (Apr. 15, 2005); www.theinquirer.net/Default.aspx?article=22550.

6. Kultgen, J. The ideological use of professional codes. Ethics, Information and Technology Readings. R.N. Stichler and R. Hauptman, Eds. McFarland and Company, Inc., Jefferson, NC, 1998.

7. Lachmann, P. The research integrity initiative: Progress report. The Cope Report 2002; www.publicationethics.org.uk/reports/2002/2002pdf5.pdf.

8. Long, P., Lee, T., and Jaffar, J. Benchmarking research performance in departments of computer science. (Apr. 12, 1999); www.comp.nus.edu.sg/~tankl/bench.html.

9. Poorten, A. Three views of peer review. Notices of the AMS 50, 6 (2003); www.ams.org/notices/200306/comm-peerreview.pdf.

10. Rodriguez, M., Bollen, J., and Van de Sompel, H. The convergence of digital libraries and the peer-review process. Journal of Information Science 32, 2 (2006), 149–159; arxiv.org/ftp/cs/papers/0504/0504084.pdf.


Author

Sabah Al-Fedaghi (sabah@eng.kuniv.edu.kw) is an associate professor in the Computer Engineering Department at Kuwait University, Kuwait.


Figures

F1Figure 1. A model of scrutiny for a peer-review system, including the directions and levels of scrutiny. The shaded areas represent an acceptable range of variation.

F2Figure 2. What is perceived as low-level editing and refereeing creates a high level of scrutiny of these two activities.

F3Figure 3. A peer-review system that includes an accreditor as a fourth scrutinizer.



©2007 ACM  0001-0782/07/0700  $5.00

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.

