Conferences in the computing field have large numbers of submissions, overworked and overly critical reviewers, and low acceptance rates. Conferences boast about their low acceptance rates as if this were the main metric for evaluating the conference's quality. With strict limits placed on the number of accepted papers, conference program committees face a daunting task in selecting the top papers, and even the best committees reject papers from which the community could benefit. Rejected papers get re-submitted many times over to different conferences before these papers are eventually accepted or the authors give up in frustration. Good ideas go unpublished or have their publication delayed, to the detriment of the research community. Poor papers receive little attention and do not get the constructive feedback necessary to improve the paper or the work.
Because reviewers approach their job knowing they must eventually reject four out of five submissions (or more), they often focus on finding reasons to reject a paper. Once they formulate such a reason, correctly or incorrectly, they give less thought to the rest of the paper. They do not adequately consider whether the flaws could be corrected through modest revisions or whether the good points outweigh the bad. Papers with the potential for long-term impact get rejected in favor of papers with easily evaluated, hard-to-refute results. Program committees spend considerable time trying to agree on the best 20% of the submitted papers rather than providing comments to improve the papers for the good of all. Even if committees could perfectly order submissions by quality, which they cannot, papers that are close in quality may receive different outcomes since the line needs to be drawn somewhere. People do not always get the credit they deserve for inventing a new technique when their submission is rejected and some later work is published first.
Whilst I can agree that conference acceptance rates in many cases could be relaxed without much loss, I disagree with the author that we should publish all reasonable submissions. Accepting papers imposes a cost on the reader and the audience, and that cost needs to be balanced. For example, I also attend OR conferences where "all reasonable submissions" are accepted. The conference experience at such events is much worse than at selective computer science conferences. OR conferences tend to have too many parallel tracks, and it is next to impossible to find the "diamonds" in the programme.
Toby Walsh
I agree with much of what is said in this article and strongly support the spirit of the proposed solutions. Some colleagues and I proposed a somewhat similar system in:
Christopher M. Kelty, C. Sidney Burrus, and Richard G. Baraniuk, "Peer Review Anew: Three Principles and a Case Study in Postpublication Quality Assurance," Proceedings of the IEEE (invited paper), vol. 96, no. 6, June 2008, pp. 1000-1011.
The current review system for both conference and journal papers is broken, and only a structural change can fix it. By setting a low criterion for acceptance, the main bottleneck of the current system is relieved and the whole research enterprise moves much faster. The question of quality and importance is handled in a separate process. The Connexions project does this by allowing self-publishing under a Creative Commons license, with quality assurance administered by what is called a "lens" (cnx.org).
C. Sidney Burrus
Prof. of ECE, Rice University
[email protected]
I agree with most of the points, but in places the author makes comments about other basic disciplines that are not at all welcome. I can't agree with loose remarks such as, "Some fields, such as physics, I am told, hold large annual conferences where anyone can talk about almost anything". Is this a personal observation? Will the author give a reference for who told him this?