Publication outlets are important to MIS scholars for many reasons: they unify an academic discipline by providing a communication system for acquiring and disseminating information; they are used in hiring, promotion, tenure, and merit pay decisions; and they are used for ranking academic departments. Such outlets also provide researchers with target vehicles for their work; they help researchers identify streams of research in an academic discipline; and they are used by librarians to optimize the disbursement of available funds.
The importance of journals in a discipline naturally leads to the question of relative journal quality. As a result, a number of studies have ranked a variety of journals (many not solely devoted to MIS). These studies differ in a number of ways, including the size and composition of respondent samples, the number of journals included, the methods used for including journals, and the methods used for ranking the journals. Further, each of these studies provides a journal ranking at one point in time.
To address the variability across journal ranking studies, we present a method to average journal rankings across studies. We use this method with nine such studies published between 1991 and 2003 to produce a composite ranking of the top 50 journals across these studies. Table 1 shows the nine studies, the number of journals ranked in each study, the research methodology used, and the sample size. These studies present several points of interest.
Two studies used citation analysis to rank journals [4, 5] and the remaining seven employed the perceptions of respondents to rank journals. The use of citation analysis is noteworthy as this method is purported to be more objective than using respondent perceptions. Holsapple et al. took a further step with their citation analysis by controlling for the number of years each journal had been in publication [5].
Several studies using respondent perceptions are also notable. Whitman et al. [9] collected the most widespread sample of respondents from a mailed survey. Their study also provided the most thorough list of journal rankings across the nine studies. Mylonopoulos and Theoharakis [6] and Peffers and Tang [7] used online surveys to obtain the largest respondent samples, as well as the greatest international representation across the nine studies.
Table 1 also points out interesting trends in journal ranking studies, the first being that sampling methods have progressed from mailed, to emailed, to online surveys. One key advantage of online surveys, their convenience, has undoubtedly contributed to two other trends in these studies: increasing sample sizes and increasing numbers of international respondents. The fourth trend is an increasing number of journals for respondents to rank.
The nine studies have employed a variety of methods to obtain a list of journals (of varying number) to be ranked. Regardless of the methods used to include journals in a ranking study, or the methods used to rank them, each study produced a journal ranking, which we analyze here.
To be able to average journal rankings across studies, we had to calculate a common denominator to account for differing numbers of journals in each ranking. Accordingly, we calculated the score for each journal in each study. We divided the rank of each journal by the total number of journals ranked in that study, resulting in that journal's score (see Table 2). For example, MIS Quarterly ranked first in the 1999 [9] and 2001 [6] studies, with scores of .01 and .02 respectively, because the 1999 study ranked 80 journals and the 2001 study ranked 50 journals.
Scores close to zero indicate highly ranked journals, whereas scores approaching one indicate lower-ranked journals. We then averaged each journal's scores across the studies in which that journal appeared to obtain its average score. We used the average score to rank the top 50 journals (see Table 2). Ties were resolved (where possible) based on the number of ranking studies in which the journals appear. Table 2 presents the rank of each journal that appeared in each study, the journal's score in that study (in parentheses), and the journal's average score across studies.
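To make the calculation concrete, the following sketch (our illustration, not code from any of the nine studies) computes per-study scores, average scores, and a composite ranking. The study labels, the totals of 80 and 50 journals, and "Journal X" are placeholders taken from or invented around the example above; only MIS Quarterly's first-place ranks in the 1999 and 2001 studies come from the text.

```python
# Illustrative sketch of the scoring method described above: a journal's score
# in a study is its rank divided by the number of journals that study ranked;
# its average score is the mean of those scores across the studies in which
# it appears. Journal names and ranks other than MIS Quarterly's first-place
# finishes are hypothetical placeholders.

from statistics import mean

# rankings[study] maps journal -> rank; totals[study] is how many journals
# that study ranked (the denominator for that study's scores).
rankings = {
    "1999 study": {"MIS Quarterly": 1, "Journal X": 40},   # ranked 80 journals
    "2001 study": {"MIS Quarterly": 1, "Journal X": 25},   # ranked 50 journals
}
totals = {"1999 study": 80, "2001 study": 50}

# Collect each journal's normalized scores across the studies it appears in.
scores = {}
for study, ranks in rankings.items():
    for journal, rank in ranks.items():
        scores.setdefault(journal, []).append(rank / totals[study])

# Average score per journal; lower is better. Ties are broken (where possible)
# by the number of studies in which a journal appears, more appearances first.
composite = sorted(
    scores.items(),
    key=lambda item: (mean(item[1]), -len(item[1])),
)

for position, (journal, s) in enumerate(composite, start=1):
    print(f"{position}. {journal}: average score {mean(s):.3f} "
          f"(appears in {len(s)} studies)")
```

Sorting on the pair (average score, negative appearance count) captures the tie-breaking rule: when two journals have the same average score, the one appearing in more ranking studies is placed higher.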
Table 2 ranks the top 50 journals across the nine studies from 1991 to 2003. We make no attempt to classify journals as top-tier (or "A" list), second-tier ("B" list), and so on. However, the composite ranking of the 50 journals does provide a comprehensive view of the relative quality of the journals from the standpoint of MIS scholars.
Table 2 also shows how journals change in rank over time. Some journals vary (for example, Decision Sciences, IEEE Computer, and IEEE Transactions on Systems, Man, and Cybernetics) while others are quite consistent (for example, MIS Quarterly, Communications of the ACM, Information Systems Research, Management Science, and Journal of Management Information Systems). Eight journals appear in all nine studies (MIS Quarterly, Communications of the ACM, Management Science, Journal of Management Information Systems, Harvard Business Review, Decision Sciences, Information & Management, and Sloan Management Review). Four journals appear in eight studies (Decision Support Systems, IEEE Transactions on Software Engineering, IEEE Computer, and ACM Computing Surveys) and another four journals appear in seven studies (Data Base, Interfaces, Information Systems Management, and Journal of Systems Management).
Table 3 reflects the rich diversity of the journals in which MIS scholars publish their research. We find 29 "pure" MIS journals. Demonstrating the MIS field's main reference disciplines, we note 11 computer science journals, seven management journals, and three operations research journals. It is interesting that, out of the top 20 journals, only six are "pure" MIS journals, nine are computer science journals, two are management journals, and three are operations research journals. These findings point out the breadth and interdisciplinary nature of MIS research.
Despite movements in the rank of many individual journals, the overall journal rankings have remained remarkably consistent over time, providing evidence that the MIS field is forming a consensus on its potential publication outlets and their relative quality. These findings suggest that MIS is maturing as a coherent academic discipline.
Given this consistency across journal rankings, the question arises as to whether future journal ranking studies will provide value. The answer is an unqualified "yes" because the MIS field is very dynamic, with new technologies constantly emerging. As the field continues to evolve, new journals appear, and future journal rankings will include these new outlets. In fact, many journals that have appeared more recently are highly regarded by the MIS community (for example, Communications of the AIS and the European Journal of Information Systems, to name just two).
In addition, future journal rankings should continue to examine regional differences in perceptions of journal quality (see [6]). We feel that future global ranking studies will be both useful and informative.
How should future journal rankings be conducted? One suggestion would be to provide a comprehensive list of journals in an online survey and have MIS faculty rank the journals. As this list would be lengthy, respondents could rank some number of journals in order or they could indicate the perceived quality of each journal with which they are familiar on Likert scales. Respondents would be free to add journals not on the list. Following our methodology in this study, future journal rankings can be added to our list of rankings, new average scores obtained for each journal, and new composite rankings calculated.
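As a minimal sketch of that updating step, and assuming the same scoring scheme used for our composite, the example below folds a hypothetical new ranking study into existing per-journal score lists and recomputes the averages. The stored score values, the journals other than MIS Quarterly, and the figure of 120 ranked journals are all invented for illustration.

```python
# A minimal sketch, under the assumptions noted above, of how a future ranking
# study could be folded into the composite: append its normalized scores to the
# existing per-journal score lists and recompute the averages. All values here
# are hypothetical placeholders.

from statistics import mean

existing_scores = {               # per-journal scores from prior studies
    "MIS Quarterly": [0.0125, 0.02],
    "Journal X": [0.5, 0.5],
}

new_ranks = {"MIS Quarterly": 2, "Journal X": 10, "New Journal Y": 1}
new_total = 120                   # hypothetical number of journals ranked

for journal, rank in new_ranks.items():
    existing_scores.setdefault(journal, []).append(rank / new_total)

# Re-sorting by the updated average scores (ties broken by appearance count)
# yields the revised composite ranking.
revised = sorted(existing_scores.items(),
                 key=lambda item: (mean(item[1]), -len(item[1])))
for position, (journal, s) in enumerate(revised, start=1):
    print(f"{position}. {journal}: {mean(s):.3f}")
```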
The composite journal rankings smooth out differences in the methods used to rank journals and differences in the methods used to include journals in the rankings. We have provided a comprehensive overview of journal ranking studies over a 12-year period and a composite ranking of the top 50. However, our ranking is not the last word, as future ranking studies will certainly change these rankings.
1. Doke, E., Rebstock, S., and Luke, R. Journal publishing preferences of CIS/MIS scholars: An empirical investigation. J. CIS 36 (1995), 49–64.
2. Gillenson, M. and Stutz, J. Academic issues in MIS: Journals and books. MISQ 15 (1991), 447–452.
3. Hardgrave, B. and Walstrom, K. Forums for MIS scholars. Commun. ACM 40, 11 (Nov. 1997), 119–124.
4. Holsapple, C., Johnson, L., Manakyan, H., and Tanner, J. A citation analysis of business computing research journals. Info. and Manage. 25 (1993), 231–244.
5. Holsapple, C., Johnson, L., Manakyan, H., and Tanner, J. Business computing research journals: A normalized citation analysis. J. MIS 11 (1994), 131–140.
6. Mylonopoulos, N. and Theoharakis, V. Global perceptions of IS journals. Commun. ACM 44, 9 (Sept. 2001), 2933.
7. Peffers, K. and Tang, Y. Identifying and evaluating the universe of outlets for information systems research: Ranking the journals. J. Info. Tech. Theory and App. 5 (2003), 63–84.
8. Walstrom, K., Hardgrave, B., and Wilson, R. Forums for management information systems scholars. Commun. ACM 38, 3 (Mar. 1995), 93–107.
9. Whitman, M., Hendrickson, A., and Townsend, A. Academic rewards for teaching, research, and service: Data and discourse. Info. Systems Research 10 (1999), 99–109.
Table 1. Previous journal ranking studies, 1991–2003.
Table 2. Composite journal rankings for studies (1991–2003).