
Communications of the ACM

BLOG@CACM

The Nastiness Problem in Computer Science


Bertrand Meyer

Are we malevolent grumps? Nothing personal, but as a community computer scientists sometimes seem to succumb to negativism. They admit it themselves. A common complaint in the profession is that instead of taking a cue from our colleagues in more cogently organized fields such as physics, who band together for funds, promotion, and recognition, we are incurably fractious. In committees, for example, we damage everyone's chances by badmouthing colleagues with approaches other than ours. At least this is a widely held view ("Circling the wagons and shooting inward," as Greg Andrews put it in a recent discussion). Is it accurate?

One statistic that I have heard cited is that in 1-to-5 evaluations of projects submitted to the U.S. National Science Foundation the average grade of computer science projects is one full point lower than the average for other disciplines. This is secondhand information, however, and I would be interested to know if readers with direct knowledge of the situation can confirm or disprove it.

More such examples can be found in the material from a recent keynote by Jeffrey Naughton, full of fascinating insights (see his PowerPoint slides). Naughton, a database expert, mentions that only one paper out of 350 submissions to SIGMOD 2010 received a unanimous “accept” from its referees, and only four averaged an “accept” recommendation. As he writes, "either we all suck or something is broken!"

Much of the other evidence I have seen and heard is anecdotal, but persistent enough to make one wonder if there is something special with us. I am reminded of a committee for a generously funded CS award some time ago, where we came close to not giving the prize at all because we only had "good" proposals, and none that a committee member was willing to die for. The committee did come to its senses, and afterwards several members wondered aloud what was the reason for this perfectionism that almost made us waste a great opportunity to reward successful initiatives and promote the discipline.

We come across such cases so often—the research proposal evaluation that gratuitously but lethally states that you have "less than a 10% chance" of reaching your goals, the killer argument "I didn't hear anything that surprised me" after a candidate's talk—that we consider such nastiness normal without asking any more whether it is ethical or helpful. (The "surprise" comment is particularly vicious. Its real purpose is to make its author look smart and knowledgeable about the ways of the world, since he is so hard to surprise; and few people are ready to contradict it: Who wants to admit that he is naïve enough to have been surprised?)

A particular source of evidence is refereeing, as in the SIGMOD example. I keep wondering at the sheer nastiness of referees in CS venues.

We should note that the large number of rejected submissions is not by itself the problem. Naughton complains that researchers spend their entire careers being graded, as if passing exams again and again. Well, I too like acceptance better than rejection, but we have to consider the reality: with acceptance rates in the 8%-20% range at good conferences, much refereeing is bound to be negative. Nor can we angelically hope for higher acceptance rates overall; research is a competitive business, and we are evaluated at every step of our careers, whether we like it or not. One could argue that most papers submitted to ICSE and ESEC are pretty reasonable contributions to software engineering, and hence that these conferences should accept four out of five submissions; but the only practical consequence would be that some other venue would soon replace ICSE and ESEC as the publication place that matters in software engineering. In reality, rejection remains a frequent occurrence even for established authors.

Rejecting a paper, however, is not the same thing as insulting the author under the convenient cover of anonymity.

The particular combination of incompetence and arrogance that characterizes much of what Naughton calls "bad refereeing" always stings when you are on the receiving end, although after a while it can be retrospectively funny; one day I will publish some of my own inventory, collected over the years. As a preview, here are two comments on the first paper I wrote on Eiffel, rejected in 1987 by the IEEE Transactions on Software Engineering (it was later published, thanks to a more enlightened editor, Robert Glass, in the Journal of Systems and Software, 8, 1988, pp. 199-246). The IEEE rejection was on the basis of such review gems as:

  • I think time will show that inheritance (section 1.5.3) is a terrible idea.
     
  • Systems that do automatic garbage collection and prevent the designer from doing his own memory management are not good systems for industrial-strength software engineering.

One of the reviewers also wrote: "But of course, the bulk of the paper is contained in Part 2, where we are given code fragments showing how well things can be done in Eiffel. I only read 2.1 arrays. After that I could not bring myself to waste the time to read the others." This is sheer boorishness passing itself off as refereeing. I wonder if editors in other, more established disciplines tolerate such attitudes. I also have the impression that in non-CS journals the editor has more personal leverage. How can the editor of IEEE-TSE have based his decision on such a biased and unprofessional review? Quis custodiet ipsos custodes?

"More established disciplines": Indeed, the usual excuse is that we are still a young field, suffering from adolescent aggressiveness. If so, it may be, as Lance Fortnow has argued in a more general context, "time for computer science to grow up." After some 60 or 70 years we are not so young any more.

What is your experience? Is the grass greener elsewhere? Are we just like everyone else, or do we truly have a nastiness problem in computer science?


Comments


Anonymous

My feeling is that rejection rates of that level are just wrong. Conferences should realize that the selection done via peer review is only good for screening out the bottom 20% of the papers; reviewers cannot discriminate among the rest. We need to find a model where people can come to conferences to present, and attendees, not reviewers, choose what they want to hear. There are conferences in other areas that do this.


Anonymous

I have long debated how to remove the nastiness and dishonesty from the review process. I have seen reviews that were one-sentence rejections, which do nothing to help the authors fix the issues or learn. I have also seen rejections designed to block a publication that competed with the reviewer's own work, an ethics violation the committee cannot possibly detect if it does not know the reviewer's current work. Of course reviewer anonymity has other drawbacks, but perhaps the above comment on providing reviewer statistics would help without the drawbacks of full anonymity. It would need to be centralized to have value, though, which raises the question of who would be in charge of it. It is a hard problem to solve: humans do not seem to be fundamentally nice, even less so when they feel there are no consequences for bad behavior, as is currently the case. We need to do something, though, as the review process seems largely broken in computer science.


Anonymous

Thanks for this, Bertrand.

It is the responsibility of the journal editor-in-chief or her associate editors, or of the program committee chair, to mitigate the nastiness. But it seems many abdicate this responsibility and look only at votes (AAB, CDC, etc.) or thumbs up or down.

In these roles, the major reasons I had to return a review to the reviewer are:

1. Inflammatory words: "This is stupid."
2. A one-liner: "I did not like it. Reject."
3. A strong opinion with no justification for it, and no directions or hints on how to remedy the problems (even in the case of an absolute rejection)
4. "You forgot to cite me, and here is a list of my papers on this topic"
5. Failing to separate the important, fundamental comments (reasoning errors, flawed arguments, missing elements, etc.) from the small stuff (typos, grammar, awkward wording)

I often tell reviewers: write it as if it were NOT anonymous. Several of our colleagues have actually decided to sign their reviews, and insist on our leaving their signatures in. Food for thought...

Maybe if program committee chairs and editors took their roles more seriously, beyond just shuffling the paperwork, we could reduce the nastiness and have a more fruitful scientific debate, whether the outcome is acceptance or rejection.

Philippe Kruchten, UBC, Vancouver


Anonymous

In my opinion: Now that we have the Web, print journals are MOOT. They're propped up by an establishment of scholars who "paid their dues" in print journals and expect the youngsters to do the same. Academics are particularly vulnerable to the temptations of hubris and arrogance.

Eventually, though, some mavericks less worried about it all will bypass the hurdle and post in Web forums organized for the purpose. NEAR INSTANT peer review! Near instant acceptance, grading, rejection.

Some will have a mix of evaluations, and those who see promise will be able to develop such ideas, whether to futility or to a new scientific revolution.

www.infosmarts.wordpress.com


Anonymous

Very valid points.

PC chairs can have a modicum of positive influence on this, by taking deliberate steps to set a positive tone. For example, they can engage novice PC members in an "orientation discussion" about what is realistic to expect; they can also send out "review anti-patterns" to the entire PC to help create a positive community attitude, and perhaps some peer pressure to write constructive reviews.

Prem Devanbu


Anonymous

I am not sure whether Bertrand is talking about the programming languages community or computer science in general. In my experience the programming languages community is much less welcoming to new ideas than any other field of computer science. And by much less I mean two orders of magnitude, at least.


Anonymous

Many computer scientists prefer short, readable pseudo-code and mathematical notation for getting a good grasp of a problem statement; the choice of programming language is usually secondary, except when the paper statistically compares several languages. I could not put the paper in either category.

I must admit I could have written the review "But of course, the bulk of the paper is contained in Part 2 (...)", and I also wasn't surprised by the results, though I love to be surprised and would admit it immediately, with lots of enthusiasm. Since you are the designer of Eiffel, you simply have the bad luck that papers by you on how good Eiffel is will not be accepted easily. Self-promotion, and promotion of self-designed products, is not well received anywhere in the scientific world, as you have found out.

Try some compiler design, and mention in the first paragraph that you designed Eiffel as a reference for your expertise. Disclaimer: I do respect your work on OOP.


Anonymous

There is a big difference between critical thinking and pure hubris. Unfortunately, I've seen more hubris than sharp, critical thinking - especially here in the US. It is concerning, to say the least.


Anonymous

Everybody I know has their review stories. Yes, it is a problem, but it certainly is also a problem in other fields. I know some economics journals can be very competitive and the review process very drawn out, so much so that having papers submitted and in review is understood to be relevant for tenure cases.

I had one experience where I found a flaw in previous work that changed an interpretation. I wrote it up for publication (naively thinking that this is what one should do). I got the author of that work, a senior, established professor, as a reviewer. That is proper. But what followed was a back-and-forth of demands for revisions, the professor trying to torpedo my paper because it would lead to a reinterpretation of his work and a discussion of how the previous interpretation was flawed.

Editors really are at a loss when that happens, and after a few iterations the editor decided to reject the paper on the grounds that no consensus could be found.

What we do indeed learn is how to sell our ideas, not just how to have them. And also how to cope with reviews that can be anywhere from excellent, to careless, to adversarial.

I understand people defending their territory and their standing. But of course that runs directly counter to what academia should ideally do, i.e., allow new, and perchance difficult, ideas to emerge, as long as they indeed have the staples of quality scholarly work.

But the reality is that we have no standards for reviews, and plenty of protection for senior members of the community, who can leverage all sorts of modes of criticism to control a range of relevant outcomes.

My own review attitude is simple: it is my job to see that good new ideas get published. Rejection recommendations can really only be based on lack of novelty, lack of scholarship, or flaws in method.

I think we need a discourse on what a reviewer's task is, and more importantly what it is not. For example, in the Eiffel case the reviewer second-guesses the future prospects of an idea. To me that is clearly questionable.

But I had the same experience. When I wrote my very first article on a new topic I had started to develop, the very first review said essentially that no one would ever want this. That topic is now an emerging field, with many prominent colleagues entering it and contributing.

Finally, on NSF: in words, we are asked to review for high risk and high reward, with terms like "transformational." In reality the most successful grants are safe ones, and real risk is discouraged. That too has to do with the critical stance of reviewers: if everything is scrutinized, the work most likely to survive is the work whose key questions have already been answered. Bright and skilled grant writers will get funded and will be able to use the resources to do great things. But the way this works, I would argue, is not ideal.


Anonymous


There are many reasons for what has been described. Three major ones are: a) CS/IT is not a profession in the sense of scientist, physician, engineer, etc., where there is a long tradition as a community; b) CS/IT is exceedingly democratic (anyone can join: good, bad, or indifferent), so the range of individuals is likely wider than in the traditional professions.

And c) is in many ways the clincher: because we deal with 'information' as the primary 'means and ends' of our work, there is a great deal more subjectivity than objectivity. I see this trend in accountants and lawyers, who also deal a lot with information; in fact the financial collapse of 2008 has a lot to do with the ways information can be (mis-)interpreted.
But getting back to the CS/IT community: the competitiveness of this field has certainly increased tremendously. There can be little doubt that when some idea can become billions of dollars after an IPO, one becomes guarded (if it is one's own baby) or denigrates it (in preparation for stealing it). I am not Moshe Vardi, the editor of Communications of the ACM, so I can say that I admire his recent editorial drawing attention to the dishonesty in some reviewers' ethics. I also agree with some other commenters that some reviewers have no clue as to what is being expressed.
Decades ago, an article by Marvin Minsky described how John von Neumann, on Minsky's dissertation committee, read his (Minsky's) paper and said he (von Neumann) did not understand it, but agreed to award the PhD anyway. I guess that is what being a great mind allows you to do.

-anon


