
Communications of the ACM

BLOG@CACM

The Nastiness Problem in Computer Science


Bertrand Meyer

Are we malevolent grumps? Nothing personal, but as a community computer scientists sometimes seem to succumb to negativism. They admit it themselves. A common complaint in the profession is that instead of taking a cue from our colleagues in more cogently organized fields such as physics, who band together for funds, promotion, and recognition, we are incurably fractious. In committees, for example, we damage everyone's chances by badmouthing colleagues with approaches other than ours. At least this is a widely held perception ("Circling the wagons and shooting inward," as Greg Andrews put it in a recent discussion). Is it accurate?

One statistic that I have heard cited is that in 1-to-5 evaluations of projects submitted to the U.S. National Science Foundation the average grade of computer science projects is one full point lower than the average for other disciplines. This is secondhand information, however, and I would be interested to know if readers with direct knowledge of the situation can confirm or disprove it.

More such examples can be found in the material from a recent keynote by Jeffrey Naughton, full of fascinating insights (see his PowerPoint slides). Naughton, a database expert, mentions that only one paper out of 350 submissions to SIGMOD 2010 received a unanimous “accept” from its referees, and only four averaged an “accept” recommendation. As he writes, "either we all suck or something is broken!"

Much of the other evidence I have seen and heard is anecdotal, but persistent enough to make one wonder if there is something special about us. I am reminded of a committee for a generously funded CS award some time ago, where we came close to not giving the prize at all because we only had "good" proposals, and none that a committee member was willing to die for. The committee did come to its senses, and afterwards several members wondered aloud what had caused the perfectionism that almost made us waste a great opportunity to reward successful initiatives and promote the discipline.

We come across such cases so often—the research proposal evaluation that gratuitously but lethally states that you have "less than a 10% chance" of reaching your goals, the killer argument "I didn't hear anything that surprised me" after a candidate's talk—that we consider such nastiness normal without asking any more whether it is ethical or helpful. (The "surprise" comment is particularly vicious. Its real purpose is to make its author look smart and knowledgeable about the ways of the world, since he is so hard to surprise; and few people are ready to contradict it: Who wants to admit that he is naïve enough to have been surprised?)

A particular source of evidence is refereeing, as in the SIGMOD example.  I keep wondering at the sheer nastiness of referees in CS venues.

We should note that the large number of rejected submissions is not by itself the problem. Naughton complains that researchers spend their entire careers being graded, as if passing exams again and again. Well, I too like acceptance better than rejection, but we have to consider the reality: with acceptance rates in the 8%-20% range at good conferences, much refereeing is bound to be negative. Nor can we angelically hope for higher acceptance rates overall; research is a competitive business, and we are evaluated at every step of our careers, whether we like it or not. One could argue that most papers submitted to ICSE and ESEC are pretty reasonable contributions to software engineering, and hence that these conferences should accept four out of five submissions; but the only practical consequence would be that some other venue would soon replace ICSE and ESEC as the publication place that matters in software engineering. In reality, rejection remains a frequent occurrence even for established authors.

Rejecting a paper, however, is not the same thing as insulting the author under the convenient cover of anonymity.

The particular combination of incompetence and arrogance that characterizes much of what Naughton calls "bad refereeing" always stings when you are on the receiving end, although after a while it can be retrospectively funny; one day I will publish some of my own inventory, collected over the years. As a preview, here are two comments on the first paper I wrote on Eiffel, rejected in 1987 by the IEEE Transactions on Software Engineering (it was later published, thanks to a more enlightened editor, Robert Glass, in the Journal of Systems and Software, 8, 1988, pp. 199-246). The IEEE rejection was on the basis of such review gems as:

  • I think time will show that inheritance (section 1.5.3) is a terrible idea.
     
  • Systems that do automatic garbage collection and prevent the designer from doing his own memory management are not good systems for industrial-strength software engineering.

One of the reviewers also wrote: "But of course, the bulk of the paper is contained in Part 2, where we are given code fragments showing how well things can be done in Eiffel. I only read 2.1 arrays. After that I could not bring myself to waste the time to read the others." This is sheer boorishness passing itself off as refereeing. I wonder if editors in other, more established disciplines tolerate such attitudes. I also have the impression that in non-CS journals the editor has more personal leverage. How could the editor of IEEE-TSE have based his decision on such a biased and unprofessional review? Quis custodiet ipsos custodes?

"More established disciplines": Indeed, the usual excuse is that we are still a young field, suffering from adolescent aggressiveness. If so, it may be, as Lance Fortnow has argued in a more general context, "time for computer science to grow up." After some 60 or 70 years we are not so young any more.

What is your experience? Is the grass greener elsewhere? Are we just like everyone else, or do we truly have a nastiness problem in computer science?



Comments


Mauro Bianco

I completely agree with the post and most of the comments. There are a couple of points that I think may be interesting to discuss. First, there is, I think, pressure from the "outside" (the industry) which is felt by the academic community. After all, industry is proceeding quite fast and not openly (I have talked to academic researchers who were trying to understand Google's algorithms as if they were facts of nature, which does not make much sense to me). Second, it seems to me that computer science is waiting for a Newton of some kind, some piece of breakthrough research that would make a difference in the field, instead of thousands of little incremental pieces that, most likely, will stay mostly unnoticed. That said, I think the idea of making reviewers sign their reviews may calm people down...


Jon Crowcroft

It is very important to separate the personal behaviour of CS people toward one another, which I find to be collegiate, friendly, supportive, and outward looking, from the behaviour of CS people toward an artefact put in front of them on screen or paper. I think the behaviour described can partly be ascribed to our training in debugging: we look at something and try to find flaws to fix. This is less true in other disciplines (with occasional exceptions - let's not mention Italian faster-than-light neutrinos - actually, let's - just look at the entire tone of that debate as an exemplar of supportive endeavour).

We've proposed quite a few ideas for potential solutions, but I don't see anything deployable among those propositions. By the way, I don't accept that this is connected with the debate on conference vs. journal venues for CS work; at least I haven't seen much evidence to support that. I do think the US tenure system doesn't help. And I have no idea how to fix that :)


Hein Meling

I agree with most of Bertrand's observations, and I remember Keith Marzullo (a former division director at NSF) telling me a similar statistic about computer science versus other fields.

Anyway, regarding the review comment on inheritance being a terrible idea: I'm not convinced inheritance is a good software design principle, and there is plenty of evidence out there that demonstrates this nowadays. Personally, I favor composition over inheritance. In fact, Go, which I currently use in my courses and projects, does not have inheritance in the usual sense. That said, I don't think a paper should be rejected on such grounds, unless the idea had already been discussed sufficiently elsewhere. Of course, the reviewer might have had some insight into why it is a bad idea; if so, it should have been explained in the review, and a generous reviewer might even have offered a better idea. I guess it wasn't, as it takes time to understand the consequences of such design principles.
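As a minimal sketch of the "composition over inheritance" style the comment refers to, here is how it typically looks in Go: a struct embeds another struct and reuses its methods, but there is no subclass relationship. The type and method names below are purely illustrative, not taken from the post.

```go
// A minimal, illustrative sketch of composition via struct embedding in Go.
// There is no class inheritance: Server reuses Logger's behaviour, but
// Server is not a subtype of Logger.
package main

import "fmt"

// Logger provides behaviour we want to reuse.
type Logger struct {
	prefix string
}

func (l Logger) Log(msg string) {
	fmt.Println(l.prefix + msg)
}

// Server embeds Logger; Logger's methods are "promoted" onto Server.
type Server struct {
	Logger
	addr string
}

func main() {
	s := Server{Logger: Logger{prefix: "[server] "}, addr: ":8080"}
	s.Log("listening on " + s.addr) // reuse through composition, not inheritance
}
```

Callers get the reused behaviour through method promotion, while the type system never treats a Server as a Logger, which is the distinction the comment is drawing.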


