
Communications of the ACM

BLOG@CACM

The Nastiness Problem in Computer Science


Bertrand Meyer

Are we malevolent grumps? Nothing personal, but as a community computer scientists sometimes seem to succumb to negativism. They admit it themselves. A common complaint in the profession is that instead of taking a cue from our colleagues in more cogently organized fields such as physics, who band together for funds, promotion, and recognition, we are incurably fractious. In committees, for example, we damage everyone's chances by badmouthing colleagues with approaches other than ours. At least this is a widely perceived view ("Circling the wagons and shooting inward," as Greg Andrews put it in a recent discussion). Is it accurate?

One statistic that I have heard cited is that in 1-to-5 evaluations of projects submitted to the U.S. National Science Foundation the average grade of computer science projects is one full point lower than the average for other disciplines. This is secondhand information, however, and I would be interested to know if readers with direct knowledge of the situation can confirm or disprove it.

More such examples can be found in the material from a recent keynote by Jeffrey Naughton, full of fascinating insights (see his PowerPoint slides). Naughton, a database expert, mentions that only one paper out of 350 submissions to SIGMOD 2010 received a unanimous "accept" from its referees, and only four had an average recommendation of "accept". As he writes, "either we all suck or something is broken!"

Much of the other evidence I have seen and heard is anecdotal, but persistent enough to make one wonder if there is something special with us. I am reminded of a committee for a generously funded CS award some time ago, where we came close to not giving the prize at all because we only had "good" proposals, and none that a committee member was willing to die for. The committee did come to its senses, and afterwards several members wondered aloud what was the reason for this perfectionism that almost made us waste a great opportunity to reward successful initiatives and promote the discipline.

We come across such cases so often—the research proposal evaluation that gratuitously but lethally states that you have "less than a 10% chance" of reaching your goals, the killer argument  "I didn't hear anything that surprised me" after a candidate's talk—that we consider such nastiness normal without asking any more whether it is ethical or helpful. (The "surprise" comment is particularly vicious. Its real purpose is to make its author look smart and knowledgeable about the ways of the world, since he is so hard to surprise; and few people are ready to contradict it: Who wants to admit that he is naïve enough to have been surprised?)

A particular source of evidence is refereeing, as in the SIGMOD example.  I keep wondering at the sheer nastiness of referees in CS venues.

We should note that the large number of rejected submissions is not by itself the problem. Naughton complains that researchers spend their entire careers being graded, as if passing exams again and again. Well, I too like acceptance better than rejection, but we have to consider the reality: with acceptance rates in the 8%-20% range at good conferences, much refereeing is bound to be negative. Nor can we angelically hope for higher acceptance rates overall; research is a competitive business, and we are evaluated at every step of our careers, whether we like it or not. One could argue that most papers submitted to ICSE and ESEC are pretty reasonable contributions to software engineering, and hence that these conferences should accept four out of five submissions; but the only practical consequence would be that some other venue would soon replace ICSE and ESEC as the publication place that matters in software engineering. In reality, rejection remains a frequent occurrence even for established authors.

Rejecting a paper, however, is not the same thing as insulting the author under the convenient cover of anonymity.

The particular combination of incompetence and arrogance that characterizes much of what Naughton calls "bad refereeing" always stings when you are on the receiving end, although after a while it can be retrospectively funny; one day I will publish some of my own inventory, collected over the years. As a preview, here are two comments on the first paper I wrote on Eiffel, rejected in 1987 by the IEEE Transactions on Software Engineering (it was later published, thanks to a more enlightened editor, Robert Glass, in the Journal of Systems and Software, 8, 1988, pp. 199-246). The IEEE rejection was on the basis of such review gems as:

  • I think time will show that inheritance (section 1.5.3) is a terrible idea.

  • Systems that do automatic garbage collection and prevent the designer from doing his own memory management are not good systems for industrial-strength software engineering.

One of the reviewers also wrote: "But of course, the bulk of the paper is contained in Part 2, where we are given code fragments showing how well things can be done in Eiffel. I only read 2.1 arrays. After that I could not bring myself to waste the time to read the others." This is sheer boorishness passing itself off as refereeing. I wonder if editors in other, more established disciplines tolerate such attitudes. I also have the impression that in non-CS journals the editor has more personal leverage. How could the editor of IEEE-TSE have based his decision on such a biased and unprofessional review? Quis custodiet ipsos custodes?

"More established disciplines": Indeed, the usual excuse is that we are still a young field, suffering from adolescent aggressiveness. If so, it may be, as Lance Fortnow has argued in a more general context, "time for computer science to grow up." After some 60 or 70 years we are not so young any more.

What is your experience? Is the grass greener elsewhere? Are we just like everyone else, or do we truly have a nastiness problem in computer science?

 

 


Comments


Anonymous

Good article. One solution is to eliminate blind reviewing. That may seem radical, but really, how nasty would many reviewers be if the author knew who they were? It is easy to be macho when you are anonymous. You'd still get the odd tool who just can't help him/herself, but I think it would be reduced.


Anonymous

I think one must separate the pursuit of notoriety from the pursuit of excellence. There will always be marginal peer review comments that demonstrate misunderstanding or antagonistic opinions, and time pressure for publication volumes and rates only exacerbates that tendency. It is also the case that dissemination technologies, while promoting faster communication of new ideas, accelerate the dissemination of half-informed opinions.

So, to offer perhaps one suggestion: read more, write less, and contribute to an awareness that the rapid, well-intended dissemination of scholarly opinion helps combat the noted tendency to "shoot inwards." As the industrial note mentions, "customers" vote, and in this case the pursuit of scientific excellence is really the goal.


Anonymous

Thanks for candidly bringing this up. I must say that I have experienced both some nastiness but also (on more rare occasions) some honest-to-goodness help. I remember very well my first submission to JAIR several years ago when the Editor-in-Chief, Steve Minton at the time, went way out of his way to help me, through several iterations, improve my paper so it could be published; I was mighty impressed by what appeared to be genuine interest on his part not only to maintain the integrity of the journal but also to help a young researcher come up to par. I thought I'd bring this up to show that some of our colleagues seem to have taken the higher road. In any case, your article did cause me to pause and think about my own attitude and behavior. Whether nastiness is widespread or not, it really has no place in our discipline. Despite the constraints on our time, we can all try to be more helpful. Rarely does one rise to true prominence by keeping everyone else down. If we are to improve as a science, we must make sure that the next generation of researchers is better than we are. That needs to be nurtured not "nastied out." Thanks again for a thought-provoking piece!


Anonymous

See http://cacm.acm.org/magazines/2010/7/95070-hypercriticality/fulltext


Anonymous

I am one of the people who sometimes write harsh reviews.

These reviews are not meant to embarrass the PhD students, who are usually working hard and trying their best, but rather their supervisors, who do not take their responsibility seriously enough and barely read the papers of their protégés (or somehow made it into a senior researcher's position without a fundamental understanding of the basic tools of research, such as statistics).

One of the problems is that our conferences are held in fancy places and that starting research in some fields of CS is too easy and does not need many prerequisites/equipment - and barely any in-depth knowledge of our art. I have seen researchers publishing about software quality & metrics while, at the same time, admitting offline that they find "design patterns very interesting but are not too familiar with them". Or publishing papers about supporting developers with novel tools, having "only used debuggers for toy examples during their undergraduate studies".

So, in many cases, quality assurance before submitting is not taken seriously enough and people instead just give it a shot - one can always resubmit somewhere else in case of rejection. Eventually the paper will slip through and get accepted in a smaller venue. The result is too many, often rushed, submissions, and of those only a fraction can be published without embarrassing our field.

Examples include not checking the prerequisites for statistical tests (distributions, etc.) in empirical SE, or not even taking prior probabilities into account when presenting the performance of machine learning algorithms.
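To make the prior-probability point concrete, here is a small sketch; the 90%/90%/5% numbers are invented for illustration and not taken from any particular paper. A predictor that looks "90% accurate" in isolation flags mostly false positives when the condition it detects is rare.

```python
# Illustrative sketch only (invented numbers): why reported classifier
# performance means little without the base rate (prior probability).

def positive_predictive_value(sensitivity: float,
                              specificity: float,
                              prevalence: float) -> float:
    """Bayes' rule: probability that a flagged item is truly positive."""
    true_positives = sensitivity * prevalence
    false_positives = (1.0 - specificity) * (1.0 - prevalence)
    return true_positives / (true_positives + false_positives)


if __name__ == "__main__":
    # A defect predictor with 90% sensitivity and 90% specificity, applied to a
    # code base where only 5% of modules are actually defective:
    ppv = positive_predictive_value(0.90, 0.90, 0.05)
    print(f"A flagged module is actually defective only {ppv:.0%} of the time.")  # ~32%
```

Reporting only the 90% figures, without the 5% prevalence, is exactly the kind of omission the comment objects to.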

Do not get me wrong, I am not talking about research results that are "unsurprising" or ideas that are controversial (such as inheritance ;)) - those papers do not deserve harsh words - but rather about work that does not follow the basics of the scientific method. And there is plenty of it in our field.


Anonymous

I agree overall, but the 8%-20% conference acceptance rates guarantee bad behavior, because grounds must be found for rejecting 80%-90% of everything, and since it is a zero-sum game, reviewers have a further incentive to excoriate others. Meyer is too quick to accept these rates. Other approaches might be tried. For example, after reviewing has produced an overall average rating for each paper, let authors decide whether to include and present it with the rating attached, and form sessions based on both topic and rating. Authors of work with middle or low ratings who felt the work was good could publish it, unconcerned that colleagues would see the rating, but would have fewer people in the session; others would withdraw to revise for elsewhere. Attendees could go to high-rated sessions, except perhaps where they had a strong interest in the topic and were willing to see unpolished work. Authors could compute where their scores fell and report, for example, that they were "in the top 15% of submissions." In my case, where a good paper was hurt by a vicious reviewer or two I might well present it; where a paper got the same score but the reviewers had identified a real problem, I might withdraw and revise it. This is just an example; the point is that we could give more thought to finding better solutions, rather than just complain.
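A minimal sketch of the "report where your score fell" part of this proposal, assuming a 1-to-5 review scale; the score values below are invented for illustration.

```python
# Sketch of the suggestion above: given the average review scores of all
# submissions, an author computes what fraction scored at least as well as theirs.
# The 1-to-5 scale and the numbers are invented for the example.

def top_fraction(my_score: float, all_scores: list[float]) -> float:
    """Fraction of submissions whose average score is at least my_score."""
    return sum(1 for s in all_scores if s >= my_score) / len(all_scores)


if __name__ == "__main__":
    averages = [2.1, 2.8, 3.0, 3.3, 3.6, 3.9, 4.0, 4.2, 4.5, 4.7]  # all submissions
    mine = 4.2
    print(f"In the top {top_fraction(mine, averages):.0%} of submissions.")  # top 30%
```

Authors with a strong percentile could cite it; others would know to withdraw and revise.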


Anonymous

Invent time machine, go back to high school, date prom queen (or king but men are the worst). This is a culture in which insecure passive-aggressiveness has flourished and it's toxic.


Anonymous

An excellent essay on this topic is "Conference Reviewing Considered Harmful" by Thomas Anderson (http://www.cs.washington.edu/homes/tom/support/confreview.pdf). If he is right about the Zipf distributions (Figs. 3 and 4), it explains a lot and has tremendous implications for designing better conference review processes, left as an exercise for the reader.


Anonymous

I would like to second the anonymous physicist above. In computer science we publish a lot more papers (per researcher) than in other hard sciences (math included), and in my view this means there is a lot of subpar work being published in our field. I think people get tired of reviewing multiple mediocre submissions.


Anonymous

I have noticed a lack of "rigorousness" in some software publications as compared to Math or Physics publications. Part of the problem in my opinion is that it is quite difficult to prove a programming pattern or algorithm (let alone a program). Eiffel offers constructs that allow a publishing individual to get close.


