
Communications of the ACM

BLOG@CACM

When Reviews Do More Than Sting


Bertrand Meyer

Bertrand Meyer wonders why malicious reviews run rampant in computer science.



Comments


Anonymous

Publishing reviewer statistics is a great idea and would make reviewers more careful before providing hasty evaluations. Recently our CHI submission was rejected because one reviewer gave a strongly negative rating without really justifying his decision; his comments showed he had clearly not read the paper fully, while the other reviewers scored the paper two categories higher and gave favorable reviews. Conferences still resort to average scores and ignore such situations, where it is clear that a reviewer has not done his job properly.
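[A quick, purely hypothetical sketch of the arithmetic behind the comment above: with scores on a five-point scale, a single unjustified low score drags the mean well below what the majority of reviewers indicated, while a median (or any outlier-robust aggregate) largely ignores it. The numbers are invented for illustration, not taken from the CHI submission described.]

from statistics import mean, median

# Hypothetical review scores on a 1-5 scale:
# two favorable reviews and one unexplained strong reject.
scores = [4, 4, 1]

print(f"mean:   {mean(scores):.2f}")    # 3.00 -- the outlier drags the average down
print(f"median: {median(scores):.2f}")  # 4.00 -- reflects the majority view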


Anonymous

I think the reviewing process should also be more open: reviews should be made available together with the accepted papers. This can be done while still preserving reviewers' anonymity.
That way you also get the average scores and so on.


Philip Godfrey

Regarding NSF grant proposal scores, CISE (i.e., computer science) proposals apparently average 0.41 points lower than those of other directorates. Data via Jeannette Wing:
http://cacm.acm.org/blogs/blog-cacm/134743-yes-computer-scientists-are-hypercritical/fulltext


Mark Wallace

"Rejecting a paper is not the same thing as insulting the author under the convenient cover of anonymity."

Perhaps this is a training problem, and reviewers should first be instructed in how to reject papers without being "insulting." Unfortunately, what is or is not an insult is, to a certain extent, in the eye of the beholder. It isn't easy to imagine anyone who worked hard on a paper receiving a rejection notice without some bad feeling, and yet, as Dr. Meyer points out, the vast majority of submissions have to be rejected.


Rafael Anschau

The real problem is that we still don't have a way to measure the quality of CS research the way physics does. In physics, if a theory is refutable and survives a few tests, it is considered good work. A CS thesis depends on so many factors that they are hard to enumerate. What criteria should the judges of the garbage collection idea have used to evaluate it? CS research evaluation is still very subjective, so biases become a substitute for tests. CS needs its own Karl Popper.


Anonymous

I have served on a variety of panels and program committees and indeed I see that happening often. On the other hand, it is also the job of Program Directors and Program Chairs to filter and guide reviewers to avoid this.


Anonymous

"Rejecting a paper is not the same thing as insulting the author under the convenient cover of anonymity."

This is so true. We understand that the vast majority of submissions have to be rejected. But reviewers MUST contribute to the paper. The system will judge papers by their grades; reviewers do not need to judge, they need to review and contribute. Reviewers must provide facts: "authors should have read "; "authors should have employed "; and so on. A good review does not need to be positive. It does not need to approve. It must contribute to the paper!


Anonymous

The author is sadly correct. I have a similar collection of woefully inadequate comments and have been subjected on occasion to an almost sneering approach. One of my better pieces of work (after years of experience, you know which ones they are) took such an unfair battering from IEEE Transactions on Software Engineering, a journal I have published in before, that I complained about the reviewing process and got an acknowledgement, but that was all, in spite of several follow-up requests. Pathetic. Another submission, to the Journal of Cryptology, drew two reviewers: one said the algorithm was trivially like something else (it wasn't), and the other didn't understand it and said so, even though there was supporting experimental data. Needless to say, it was roundly rejected. I'm old enough and experienced enough to shrug it off, but when it happens to a new PhD student it really undermines them, and you have to spend time counselling them about the vagaries of the process. Everybody understands that many papers need rejection, or at least significant rethinking, but this has to be a constructive and thoughtful process. It all too often isn't.

After years of publishing in a number of these journals, I've mostly given up and now publish on arXiv. I also refuse to review for journals that behave like this. It really is time for CS to grow up.


Frederick Carlson

The author is correct. ACM is in a league of its own when it comes to flat-out nastiness. The first (and probably last) paper I submitted to an ACM conference had three referees: two were helpful; the third was personally insulting, unprofessional, and, worse, unhelpful.


CACM Administrator

The following letter was published in the Letters to the Editor section of the April 2013 CACM (http://cacm.acm.org/magazines/2013/4/162502).
--CACM Administrator

Bertrand Meyer's blog post "When Reviews Do More Than Sting" (Feb. 2013) is an opportunity to reflect on how CS academic publishing has evolved since it was first posted at blog@cacm (Aug. 2011). Meyer rightly identified rejection of 80% to 90% of conference submissions as a key source of negative reviewing, with competitors ready to step in with even higher rejection rates, eager to claim the quality mantle for themselves.

In recent years, we have seen that conference quality can be improved and constructive reviewing facilitated, even when a greater proportion of papers is accepted. At least six conferences, including ACM's Special Interest Group on Management of Data (SIGMOD), Computer-Supported Cooperative Work (CSCW), and High Performance Embedded Architectures and Compilers (HiPEAC), incorporate one or more paper-revision cycles, leading to initial reviews that are constructive rather than focused on grounds for rejection. Giving authors an opportunity to revise also provides a path toward accepting more submissions while still improving overall conference quality.

Analyses by Tom Anderson of the University of Washington and George Danezis of Microsoft Research suggest there is little or no objective difference among conference submissions that reviewers rank in the top 10% to 50%. Many conferences could even double their acceptance rates without diminishing their quality significantly, even as a serious revision cycle would improve quality.

This change in the CS conference process would blend conference and journal practices. Though journal reviews may not always be measured and constructive, on balance they are, and, in any case, revision cycles are a way for conferences to be more collegial.

Jonathan Grudin
Redmond, WA


