
Communications of the ACM

BLOG@CACM

What Is Your Research Culture? Part 2: Background


Bertrand Meyer

The first part of this article explained the reason behind the research culture questionnaire: the gaping discrepancies in how institutions around the world understand computer science (informatics) and assess computer scientists. The observation was already implicit in the 2009 Informatics Europe report and the resulting Communications of the ACM article [1] on "Research Evaluation for Computer Science", which explained why evaluation criteria appropriate for, say, physics researchers do not all transpose to computer science. The authors received countless notes of thanks from deans and others for giving them an authoritative source to back their arguments, in discussions with higher management, that CS is special; for example, the argument that in our field the top conferences are as prestigious as the best journals. (Ten years earlier, the CRA report [2] had already made this point.) Most of these comments came from institutions and countries that have not yet awakened to this plain factual observation and persist, for example, in using the ISI-Thomson (Web of Knowledge) citation database in spite of its pathetic inadequacy. Top institutions know better.

The questionnaire provides a vivid way to contrast the "retro" and "modern" cultures. I hope that you had some fun answering it and computing your result. Fun was, indeed, part of the goal: this is a blog, not a paper in a scientific journal (indexed or not for citations). The questionnaire unabashedly conflates questions addressing four complementary aspects of what makes a research culture retro or modern:

  1. The seriousness of criteria used for assessment, in particular of publications.
  2. Reliance on mechanically measured performance indicators.
  3. The autonomy and independence of institutions and professors.
  4. Respect and trust for professors and researchers.

Applying a single linear scale to cover these distinct dimensions is a simplification. You obviously understood that a difference between scores of 22 and 25 is not significant. But a difference between, say, -30 and +30 is hard to ignore. A steep negative number suggests that the local assessment system tends to judge computer scientists through improper criteria; that the assessment tends to follow from numerical indexes without the filter of expert interpretation; that central bureaucracy tends to stifle local autonomy; and that important decisions tend not to be entrusted to researchers. In short, it suggests a retro culture. A high positive number suggests the reverse.
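
To make the scale concrete, here is a minimal sketch of the scoring arithmetic, assuming (as the negative-to-positive scale just described suggests) that the overall result is simply the sum of signed per-question points. The questions, weights, and answers below are invented for illustration; the real ones are in Part 1.

    # Illustration only: invented questions and point values, not the
    # actual Part 1 questionnaire. Assumes the overall result is the
    # plain sum of signed per-question scores.

    # Negative points mark "retro" traits, positive points "modern" ones;
    # one entry per dimension listed above.
    answers = {
        "assessment criteria": -10,   # e.g., only journal papers count
        "mechanical indicators": -8,  # e.g., citation indexes decide careers
        "institutional autonomy": 5,  # e.g., departments control their hiring
        "trust in researchers": -3,   # e.g., heavy central reporting duties
    }

    score = sum(answers.values())
    print(score)  # -16: leaning toward the "retro" end of the scale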

My colors are clear: this second style, the "modern" culture, is in my experience what gives an institution, or a nation as a whole, the best shot at top-class research. It is not a single culture: there are many different ways to reach excellence. But the best institutions share basic assumptions along the four dimensions above: they use meaningful evaluation criteria, do not stop at mechanically computed numbers, provide autonomy, and trust researchers.

The questionnaire is (I hope) fun, but not just fun, because the underlying issues are not a joke at all. Often they are dramatic. Looking at the state of research policy in many countries today, one cannot but be struck by the dire state of academic careers. Politicians make big speeches about high tech and how crucial research is, but deeds, particularly money, do not follow. La république n'a pas besoin de savants ("the Republic has no need of scientists") [3]. The result, for young researchers, is catastrophic. Excellent candidates spend years trying to find a decent position, and more years trying to advance. The scarcity of resources makes it particularly important to at least select and evaluate people on criteria that make sense.

In the following installments of this multi-part article, I will come back to some of the questionnaire's individual questions and explain what your answers reveal about the value of your institution's and your country's research culture.

References

[1] Bertrand Meyer, Christine Choppy, Jørgen Staunstrup and Jan van Leeuwen: Research Evaluation for Computer Science, Communications of the ACM, vol. 52, no. 4, April 2009, pages 31-34.

[2] Computing Research Association: Best Practices Memo — Evaluating Computer Scientists and Engineers for Promotion and Tenure, prepared by David Patterson, Lawrence Snyder and Jeffrey Ullman, Computing Research News, September 1999.

[3] If you don't know this quote, paste it into your search engine.


Comments


Alberto Bartoli

I have recently coauthored a journal paper in which we discuss in depth a recent nationwide evaluation of researchers in Italy that was based mostly on bibliometrics (disclaimer: I was not assessed positively and thus am not eligible for a promotion).

Some of the points that we made were identical to those associated here with the "retro" research culture. Indeed, I scored -36 on the questionnaire.

Bibliometric Evaluation of Researchers in the Internet Age
http://www.tandfonline.com/doi/full/10.1080/01972243.2014.944731#.VFh_DZCG-rI
(full text available from me upon request).


