In the two years since we launched the revitalized Communications of the ACM, I have received hundreds of email messages from readers. The feedback has been mostly, but not universally, positive. Many people do note places where we can do better. Some readers point out errors in published articles. Nothing in life is perfect. Communications is an ongoing project; continuous improvement is the name of the game.
At the same time, I have also received a fair number of notes with nothing short of withering criticism. For example, six issues into the revitalized Communications, I received this comment from a leading computer scientist: "Although I have looked at every issue and at least glanced at every article, I have not yet found one good one."
Do you find this statement harsh? It surely pales in comparison to this: "The level is unbelievably poor. It reads sometimes like a PR article for big companies. Donation to the ACM seems to be the main reviewing criterion. I would call the policy of ACM scientific prostitution, and I don't want to pay for a prostitute."
I believe most of us have at some point received very harsh reviews (though, hopefully, not that harsh) on papers or proposals we have written. If you are an experienced researcher, you have undoubtedly dealt with papers and proposals being declined. Still, the harsh tone of negative reviews can be quite unsettling even to experienced authors. When I talk to colleagues about this, they just shrug, but I think this phenomenon, which I call "hypercriticality," deserves our collective attention. Others have recently commented on this issue. In the context of proposal reviewing, Ed Lazowska coined the phrase "circling the wagons and shooting inwards," and John L. King, in a recent CCC blog, referred to such verbal assaults as "fratricide." Jeff Naughton, referring to conference paper reviewing, said in a recent invited talk that "bad reviewing" is "sucking the air out of our community."
The "hypercriticality" claim is not just based on anecdotes; we actually have data that supports it. Proposals submitted to the Computer and Information Science and Engineering (CISE) Directorate of the U.S. National Science Foundation (NSF) are rated, on the average, close to 0.4 lower (on a 1-to-5 scale) than the average NSF proposal. In his blog entry, King discussed the harmful effects of such harshness.
What is the source of this harshness within our discipline? Here one can only speculate. Let me offer two possible explanations. My first theory refers to the intrinsic nature of our discipline. Computing systems are notoriously brittle: mistyping one variable name can lead to catastrophic failure. Computing embodies the principle of "for want of a nail, the kingdom was lost." This makes us eternally vigilant, always looking for the slightest flaw. In this eternal hunt for flaws, we often focus on the negative and lose sight of the positive.
My second theory refers to the sociology of our field. We typically publish in conferences where acceptance rates are 1/3, 1/4, or even lower. Reviewers read papers with "reject" as the default mode. They pounce on every weakness, finding justification for a decision that, in some sense, has already been made. It is particularly easy to be harsh when reviewing proposals. If the proposal is not detailed enough, then the proposer "does not have a clear enough plan of research," but if the proposal is rich in detail, then "it is clear that the proposer has already done the work for which funding is sought."
What is to be done? Remember, we are the authors and we are the reviewers. It is not "them reviewers"; it is "us reviewers." Hillel the Elder, a Jewish scholar who lived circa 30 B.C. to 10 A.D., said, "What is hateful to you, do not do to your fellow." This is known as the Silver Rule in moral philosophy. The Golden Rule, which strengthens the Silver Rule, asserts, "Do unto others as you would have them do unto you." Allow me to rephrase this as the Golden Rule of Reviewing: "Write a review as if you are writing it to yourself." This does not mean that we should not write critical reviews! But the reviews we write must be fair, weighing both strengths and weaknesses; they must be constructive, suggesting how the weaknesses can be addressed; and, above all, they must be respectful.
After all, these are the reviews that we would like to receive!
Moshe Y. Vardi, EDITOR-IN-CHIEF
Indeed, it's really annoying, if not disrespectful, to have work of months or years rejected without any further explanation or _constructive_ criticism.
For instance, I'd suggest 'reviewing the reviewer': why can't authors assign a rating to the reviewers of their paper? Then, for future reviews, low-rated reviewers would not be invited to the committee (unless they improve their reviewing skills).
Of course, the real issue to tackle is the lack of (proper) education. Most of it is geared toward formalisms, models, and abstraction, often applied in the virtual world of our own cold-hearted machines. Topics such as morals and ethics are not only handled shallowly but are largely absent from our daily lives. Proper education on these topics, and the engagement of computer scientists in real-world issues in their neighborhoods (not necessarily computer related), would improve this situation.
I am one of the rare researchers practicing multidisciplinary research, so I experience the differences in review practices firsthand. This letter from the editor confirms my experience. I have been reducing my submissions to computer science forums. Getting a publication accepted increasingly requires me to squander taxpayers' money on matters that neither benefit society nor contribute to our knowledge. It takes several weeks of writing to make a submission compliant with the unwritten rules of IT research communities, and absolutely no added value is generated by this effort.
In contrast to the editor's letter, I perceive more than hypercriticality. I see reviewers behaving like old-style schoolteachers correcting exam papers, not like researchers who are keen to learn something new and to contribute their substantial expertise in the process. Allow me to elaborate.
Reviewers fail to appreciate truly innovative contributions. If Herbert Simon, a Nobel Prize winner, were to submit a paper on the implications of bounded rationality for software engineering methodologies, it would most likely be rejected in a blind review: formally, because the paper failed to discuss work by others, and so on; in reality, because the reviewer failed to understand it, or because it is perceived as competition undermining his or her own work.
Radical innovation should be welcomed by researchers who long for fascinating new insights. This hunger to learn something new is more often absent in IT than in other domains. In particular, reviews for workshops should favor innovation over other criteria. Above all, reviewers must behave more humbly, and show respect, when they do not understand a paper. Moreover, IT researchers must be cautious when they fail to perceive a contribution, which often means that it is situated in the border area where IT crosses an application domain.
Reviewers seem to shut down their own brains when it comes to identifying the research contribution in a paper submission. Papers are rejected when they fail to cite work by others, fail to discuss their contribution in meticulous detail, and so on. Papers are rejected because they constitute a fresh start on long-standing problems, when there is no substantial work by others to discuss. Reviewers are supposed to be experts who, at the very least, should be able to be precise about what is missing or wrong in this respect. Note that pretending not to understand is a key mechanism for abusing power and authority. Today's non-cooperative, hostile reviewing attitude involuntarily implements this mechanism.
Overall, this situation is more than a nuisance; it is truly alarming. This attitude destroys civilizations, or at the very least is correlated with their demise. I recommend reading The Sleepwalkers by Koestler to learn just how worried we need to be.
Our society is facing new challenges when it comes to science and technology. The cost and effort of performing scientific experiments increase as we need to investigate infrastructures rather than components, and systems of systems rather than a single machine. The present attitude makes it impossible to conduct cooperative research with joint results in preparation for those experiments. Indeed, smooth access to publications requires us to perform the experiments first. As a result, most will be toy experiments, and the valuable bigger ones will be selected based on politics rather than scientific interaction. A lottery would be a superior, and certainly more robust, mechanism for allocating research resources.
At a time when we need to reinvent scientific discussion, using all the facilities that IT and psychology may offer, we are freezing into a hostile, formalistic, and conservative community. In conclusion, I applaud this letter from the editor and want to point out the seriousness of this matter. It is not about IT researchers being at a disadvantage, nor about being more civilized colleagues. It is about preserving our civilization for future generations.
See http://www.sigcomm.org/ccr/papers/2011/July/2002250.frontmatter and http://jbiol.com/content/8/3/24