
Education

Do Academic Journals Favor Researchers from Their Own Institutions?
by Yaniv Reingewertz
and Carmela Lutmar
February 27, 2018


Summary. People tend to favor their group, whether it is their close family, their
hometown, or their ethnic group. A new study suggests the same may be true in
academia. Researchers looked at papers published in a selection of international
relations journals. When a journal had an affiliation with a particular university,
articles by authors from that university received fewer citations than articles by
outside authors, suggesting that the journal favored its in-group.
Are academic journals impartial? Many would assume that academic
journals work solely for the advancement of knowledge and science,
but we show this is not always the case. In a recent study,
we find that two international relations (IR) journals favor articles
written by authors who share the journal’s institutional affiliation. We
term this phenomenon “academic in-group bias.”
In-group bias is a well-known phenomenon that is widely
documented in the psychological literature. People tend to favor their
group, whether it is their close family, their hometown, their ethnic
group, or any other group affiliation. Before our study, the evidence
regarding academic in-group bias was scarce, with only one study
finding academic in-group bias in law journals. Studies from
economics found mixed results. Our paper provides evidence of
academic in-group bias in IR journals, showing that this phenomenon
is not specific to law. We also provide tentative evidence that could
help resolve the conflicting results in economics, suggesting that
economics journals might also exhibit in-group bias. In short, we show that
academic in-group bias is general in nature, even if not necessarily
large in scope.
To test the possibility of academic in-group bias, we examined four of
the leading academic journals in international relations: World
Politics, International Security, International Organization, and
International Studies Quarterly. World Politics is published by
Cambridge University Press on behalf of the Princeton Institute for
International and Regional Affairs at Princeton University.
International Security is published by MIT Press and edited by the
Belfer Center for Science and International Affairs at Harvard
University. The other two journals are not affiliated with a specific
university and serve as our control group.
The basic idea of our methodology was to compare the citation counts
of articles published by in-group members with those published by
out-group members. In academia, citations are considered a marker
of quality — the more citations a paper receives, the higher quality it
is assumed to be. If papers written by researchers from Blue
University and published in the Blue Journal get fewer citations than
papers written by researchers from Red University and published in
the Blue Journal, this could signal that the Blue Journal was willing to
lower its standards for its own researchers, which would indicate
in-group bias. We found, for example, that the average article
published in World Politics by an author affiliated with Princeton,
which publishes the journal, gets roughly 80 Google Scholar
citations, whereas papers written for that journal by non-Princeton
researchers receive roughly 105 Google Scholar citations.
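To make this comparison concrete, here is a minimal sketch, in Python, of the kind of in-group versus out-group citation comparison described above. The dataset, the column names (in_group, citations), and the use of a Welch t-test are illustrative assumptions, not the authors' actual data or code.

```python
# Illustrative comparison of mean citation counts for in-group vs. out-group
# articles in a single journal. File and column names are hypothetical.
import pandas as pd
from scipy import stats

articles = pd.read_csv("world_politics_articles.csv")  # hypothetical dataset

in_group = articles.loc[articles["in_group"] == 1, "citations"]
out_group = articles.loc[articles["in_group"] == 0, "citations"]

# In the study, Princeton-affiliated articles in World Politics averaged
# roughly 80 Google Scholar citations vs. roughly 105 for outside authors.
print(f"Mean citations, in-group:  {in_group.mean():.1f}")
print(f"Mean citations, out-group: {out_group.mean():.1f}")

# Welch's t-test (unequal variances) for the difference in means
t_stat, p_value = stats.ttest_ind(in_group, out_group, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```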
We treat faculty and PhD graduates from Princeton, Harvard, and MIT
as the in-group members. Examining two journals and two in-groups
provides more evidence that this phenomenon is general in nature. We
consider MIT part of the Harvard in-group because of the close ties
between the two institutions; both are located in Cambridge,
Massachusetts, and their researchers have collaborated on many
research projects.
Our results confirm the existence of academic in-group bias. When
published in the Harvard- or MIT-affiliated journal, articles written by
graduates of Harvard and MIT receive roughly 60% fewer citations
than papers written by out-group scholars. This difference is
statistically significant and very large in magnitude. It also stands in
contrast to what we see in the control group journals, where Princeton
authors get roughly the same number of citations as out-group
authors, while Harvard and MIT authors get more.
The favoritism we found did not look the same in each context. Unlike
the PhD graduates, Harvard and MIT faculty do not seem to enjoy
such favoritism. The picture for Princeton is the opposite — Princeton
graduates do not seem to get special treatment, but faculty members
do enjoy editorial favoritism, though to a lesser extent than Harvard
and MIT. In the analysis we control for other confounding factors as
best we can, such as the number of authors and the length of the
article, and we verify that the results are not driven by our choice of
specification or empirical strategy.
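As a rough illustration of what controlling for such confounders might look like, the sketch below regresses log citations on an in-group indicator plus article-level controls. The variable names and the simple OLS specification are assumptions for illustration; the study's actual empirical strategy may differ.

```python
# Illustrative OLS sketch: does an in-group indicator predict (log) citations
# once article-level controls are included? All variable names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

articles = pd.read_csv("journal_articles.csv")  # hypothetical dataset
articles["log_citations"] = np.log1p(articles["citations"])

model = smf.ols(
    "log_citations ~ in_group + num_authors + article_pages + C(pub_year)",
    data=articles,
).fit(cov_type="HC1")  # heteroskedasticity-robust standard errors

# A negative, significant coefficient on in_group would be consistent with
# a journal lowering its bar for its own institution's authors.
print(model.summary())
```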
While our results survive many robustness checks, there are several
possible limitations to our study. The first is in the interpretation of
our findings. Journals might choose their articles based on factors
other than likely citation counts, such as a better fit with the
journal's scope. It is also important to bear in mind that most
academic journals are not affiliated with universities or research
institutions. Therefore, our results, while troubling, do not carry over
to the majority of academic journals.
What harm can academic in-group bias create? First, it can tilt tenure
decisions and other promotions based on an academic’s publications.
Some competent scholars might lose out while others who are less
competent might benefit. This adverse effect can be minimized if the
field incorporates this bias into its decision-making process, putting
less weight on publications of in-group members in the home journals
and assigning more weight to publications of out-group
members. Another possible approach would be to strengthen the
double-blind refereeing process by not allowing the editor to see the
affiliation of the author. This, however, seems impractical when the
author and the editor have the same affiliation.
Another, potentially more severe, implication of our findings is
the possible effect of academic in-group bias on the academic
endeavor to advance science. If articles are not published based on
merit, the dissemination of knowledge might be at stake. Having non-
meritocratic systems might push out talented individuals, to the
detriment of the academic community. These implications are
troubling enough to call for immediate attention, and subsequent
action, to address them.

Yaniv Reingewertz is an Assistant Professor in the
Division of Public Administration and Policy,
School of Political Sciences, at the University of
Haifa.
Carmela Lutmar is a lecturer in the Division of
International Relations in the School of Political
Sciences at the University of Haifa.
