Discussions of publication and research practices in our field take place frequently; examples include the Dagstuhl workshop on publications in CS convened by Moshe Vardi and numerous meetings of Informatics Europe. They have already led to a few articles in this blog.
In talking to members of the academic community in various places, I have realized that we are often unaware of the tremendous culture gaps between countries. The kinds of things we debate between representatives of (say) ETH, Rice, Stanford, and Microsoft Research are a world away from the assumptions, the practices, and more generally the research culture of our colleagues in many institutions around the world.
There are many kinds of research culture, but by and large we can talk of the "retro" and "modern" cultures. I will discuss them in a few articles following this one, but to begin I have devised a questionnaire to help you find out where you stand. The questionnaire is at
http://se.ethz.ch/~meyer/publications/acm/culture_questionnaire.html
Please try it. I am not giving any explanations yet; you will have to wait a few days for the following articles in the series. Let me just say that the higher the score (positive values, up to 47), the more "modern" your research culture, and the lower the score (negative values, down to -77), the more "retro" the culture.
Disclaimer (should not be necessary, but...): this is a questionnaire, not a survey. It is for your own benefit and we do not retain any data.
Acknowledgments: the idea for this questionnaire came up during a discussion with Carlo Ghezzi at the last European Computer Science Summit of Informatics Europe. I am grateful to Christian Estler for his help with the programming.
I think this questionnaire is poorly designed for the goal of capturing one's place among the diversity of research management cultures around the world. (This is in addition, of course, to the deeper mistake: the whole idea of placing the different operating styles on a linearly ordered continuum is totally bogus.) For example, the choices offered about the use of one metric (h-index) differ from the choices offered about another (number of publications); in particular, there is no option for saying that the h-index is used as one among a group of major indicators. The question about corporate credit cards fails to offer the possibility that a card is available quite widely (to anyone who needs to spend money, including assistant professors and secretaries!). The question about age profile seems to offer the choices of uniform, middle-heavy, or top-heavy; but what about bottom-heavy? The questions about hiring/promotion evaluation neglect the common case where there are multiple stages of evaluation (e.g. department and then university-wide) and where these stages can differ: for example, one model I know has departments focused on Google Scholar metrics, while the central review looks more at ISI metrics (because of the number of scientists on those committees).
To me, the most important distinction in research management cultures is quite different from what the questionnaire explores. I would say the crucial question is: "Who gets to choose their own research agenda? a) almost everyone, including the PhD students; b) almost everyone who has completed their PhD; c) almost everyone who is 7 or more years past their PhD; d) a few leaders within the department; e) hardly anyone, as research priorities are set by university management, the heads of granting agencies, or national committees."
I agree that the questionnaire (and above all its scoring) is poorly designed.
For a start, some questions are conceptually repeated N times, so that you collect negative (or positive) points for essentially the same reason (e.g. your favorite bibliographic databases), while others have close to zero correlation with research (e.g. whether professors have corporate credit cards).
The worst part is the scoring, which only reflects the idiosyncrasies of those who wrote the questionnaire. Let me give three egregious examples:
1) If you use ISI to evaluate computer scientists, you lose a whopping 20 points just by doing so (and you can't even specify that you use it among other indicators).
2) If you use Scopus to evaluate them and use it to assess publications (instead of Citeseer), you lose another 6 points.
3) If the old crone heading the group stamps his name on all publications, you only lose 3 points.
... give us a break...
We might debate endlessly about the "right" bibliographic database. I consider Citeseer a junk collector and Scopus a good compromise (you only need to know that there is a Scopus/Scholar ratio and you are done). Other people have different ideas. Differ on this single point and you lose 50% of the positive score!
In contrast, consider the piggy-back crone. The ERC asks applicants for junior grants to have articles as senior authors. How can they possibly have them if the crone is always on their shoulders? They can never apply for a grant as post-docs. Meanwhile the policy allows the crone to apply for a senior ERC grant, since he will have lots of papers whose existence he might not even be aware of.
This is a policy detrimental to young researchers, yet it is weighted as roughly 6x less important than the choice of bibliographic database. Surely Swiss, German, or Italian professors wouldn't score well if we gave them the -20 they deserve for this policy, but that would be a more appropriate measure of modernity.
Disclosure: I originally scored 10. If I repent my sins and never use Scopus again, I will net 19 and enter the top quartile of modernity. Is this a sign of modernity? I seriously doubt it.