
Article

A Cautionary Tale for Psychology and Higher Education in Asia:
Following Western Practices of Incentivising Scholarship May Have
Negative Outcomes

Psychology and Developing Societies
32(1) 94–121, 2020
© 2020 Department of Psychology, University of Allahabad
Reprints and permissions: in.sagepub.com/journals-permissions-india
DOI: 10.1177/0971333619900043
journals.sagepub.com/home/pds

Bruce B. Svare1

Abstract
Cases of scientific fraud and research misconduct in general have
escalated in Western higher education over the last 20 years. These
practices include forgery, distortion of facts and plagiarism, the outright
faking of research results and thriving black markets for positive peer
reviews and ghost-written papers. More recently, the same abuses have
found their way into Asian higher education with some high profile and
widely covered cases in India, South Korea, China and Japan. Reports
of misconduct are now reaching alarming proportions in Asia, and the
negative consequences for individuals, institutions, governments and
society at large are incalculable. The incentives for academic scientists
in Asia are approaching and even surpassing those ordinarily seen in
the West. Cash payments for publishing articles in high impact journals
can double or even triple yearly salaries in some cases. Combining this
environment with the simultaneous pressure to obtain oftentimes scarce
funding for research has produced a culture of unethical behaviour
worldwide. This article assesses three important issues regarding
scientific fraud and research misconduct: distorted incentives for research
and overreliance upon metrics; damage to the integrity of higher education
and public trust; and improving research environments so as to deter
unethical behaviour. This is especially crucial for emerging Asian
countries, in particular the Association of Southeast Asian Nations
(ASEAN), whose scientific infrastructure is less developed but nonetheless
has the potential to become a major player in the development of
psychology as well as Science, Technology, Engineering and Mathematics
(STEM) research and training.

1 Department of Psychology, State University of New York, Albany, USA.

Corresponding author:
Bruce B. Svare, Department of Psychology, State University of New York, Albany,
NY 12222, USA.
E-mail: bsvare@albany.edu

Keywords
Scientific misconduct, perverse incentives, publishing analytics, world
rankings

Background
Reports of scientific fraud are increasing worldwide as the pressure
to publish research findings in high-impact journals has intensified
(Brainard & You, 2018). This has spawned at least one watchdog
group, The Center for Scientific Integrity (www.retractionwatch.com), to
keep track of scientific fraud and compile lists of journal retractions as a
function of country, discipline and investigator. Sadly, this website now
reports an average of two to three retractions per day, whereas only a few
years ago the average was one or two per year.
There is a long history of scientific misconduct in the West. In fact,
many of the more outrageous examples of scientific misconduct in
Western nations are well known by those within higher education cir-
cles as well as by the public (for recent reviews see Gross, 2016;
Hesselmann, Graf, Schmidt, & Reinhart, 2017). For example, at
Cornell University, psychologist Brian Wansink, who researched
human eating habits, was recently forced to resign his position after
13 of his papers were retracted and another 15 corrected (Servick,
2018). Some of the retractions were in the high-profile publication The
Journal of the American Medical Association. The misconduct included
misreporting of research data, problematic statistical techniques, fail-
ure to properly document and preserve research results, and inappro-
priate authorship. In another notorious case that had far-reaching
implications for public health, British surgeon and researcher Andrew
Wakefield purported to document an association between measles,
mumps, and rubella (MMR) vaccine and autism. The results were
fraudulent, widely discredited and the research was ultimately retracted
from The Lancet (Mathews-King, 2018). Nevertheless, the study fuelled
the anti-vaccine movement for years: many parents declined to have their
children vaccinated, contributing to a resurgence in the USA of measles,
which can be life threatening. At Harvard
University, the distinguished evolutionary psychologist Marc Hauser
published exciting research on the development of cognitive processing
in infrahuman primates. He was forced to resign his position after
it was revealed that he had fabricated data, manipulated experimental
results and published falsified findings. Three of his papers in high-impact
journals like Cognition were retracted (Couzin-Frankel, 2014). In one
of the most pervasive and outlandish examples of scientific fraud,
Dutch psychologist Diederik Stapel of Tilburg University committed
academic fraud in numerous publications that have since been retracted
(Bhattacharjee, 2013). Spanning more than a decade of work, his fraud
included frequently cited papers in Science on racial stereotyping,
advertisements and the power of hypocrisy. So extensive was the fraud
that it is now thought to extend to over a dozen doctoral theses, thus
harming the reputations of many of his former students.
Scientific misconduct is not restricted to the developed world.
Regrettably, cases of scientific misconduct and fraud by Asian scientists
seem to have followed suit with what has happened in the West. These
cases have been chronicled in the press (cf., Qin, 2017) and leading sci-
entific publications (cf., Cyranoski, 2018). Scientific fraud seems to be
increasing dramatically where the stakes for publication in high-impact,
high-profile journals are intensifying. At the University of Tokyo, promi-
nent cell biologist Yoshinori Watanabe committed scientific misconduct
in five papers and was dismissed by the university, and his pension was
terminated (Normile, 2017a). In China, scientific misconduct has been a
significant problem with frequent plagiarism cases, the use of fraudulent
data, falsified CVs and fake peer reviews. The prominent journal Tumor
Biology retracted a record 107 papers authored over four years
by Chinese scientists (Normile, 2017b). The journal cited concerns that
the peer review process had been compromised with faked reviews. In
another recent case, more than a dozen papers by nanoscientists Ashutosh
Tiwari and Prashant Sharma at the Indian Institute of Technology were
retracted because of manipulated images in their publications.
Another 50 of the team's papers have been flagged as highly suspect
for the same problems (Sachan, 2018). In one of the most notorious
cases of scientific misconduct, the South Korean veterinarian and
researcher Woo-suk Hwang of Seoul National University was dismissed
after fabricating a series of stem cell experiments that appeared in
high-profile journals. He was considered one of the pioneering experts in
the field, best known for two articles published in the prestigious journal
Science where he claimed he had succeeded in creating human embry-
onic stem cells by cloning. Soon after the first paper was released, an
article in the journal Nature charged the scientist with having committed
ethical violations by using eggs from his graduate students and from the
black market. Although he denied the charges at first, he later admitted
the allegations were true, then confessed that his human cloning experi-
ments were made up (Sang-Hun, 2009).
Scholars from a variety of disciplines have lamented the fact that the
pressure to obtain scarce grant funds for research and the perverse incen-
tives associated with publishing in top journals have fuelled many unin-
tended negative consequences in science and higher education (Edwards
& Roy, 2017; Gallup & Svare, 2016; Lilienfeld, 2017a), not the least of
which is the fundamental corruption of the scientific enterprise. In addi-
tion to their importance for tenure and promotion, conventional faculty
incentives for conducting research take many different forms, ranging
from teaching load reductions and summer salary support to
improvements in office and laboratory space and equipment, and merit
increases in base salaries. In a perverted manifestation of these incentives,
a number of Asian institutions have been offering large bonuses to
faculty who publish in high-impact journals. Bonuses in the range of
US$40,000 for a single publication in Nature or Science are now com-
mon in China, Japan and South Korea (Abritis & McCook, 2017). This
can double or even triple the salary of a scientist in these countries.
While some have argued that Asian scientific fraud will not reach the
levels seen in the West because of the dominance of a non-confronta-
tional cultural style (Maslog, 2014), the early results showing escalating
retractions owing to misconduct in China, Japan and South Korea would
argue against this prediction.
The author has considerable experience with the higher education
system in Asia, in particular with the development of psychology in
ASEAN, and is familiar with the challenges and the enormous potential
for the further development of the behavioural sciences and STEM in
this region (Svare, 2011, 2018, 2020, in press). (While there is debate
about whether or not psychology should be considered a STEM field,
the National Science Foundation [NSF] is on record as supporting its
inclusion [NSF, 2014]. Thus, for the purposes of this article, we also
consider it to be an integral part of STEM training and research.) As
reviewed by Gross (2016), psychology, like other STEM areas, has had
its fair share of scientific misconduct cases, some of which were noted
above. When this is combined with the replication crisis that
plagues psychology today (Jarrett, 2016), there is growing concern that
misconduct may be endemic in the behavioural sciences.
This article examines three components of the present culture of aca-
demic research in the West, and how some negative consequences of this
culture are spilling over into Asian higher education. It also examines
how these pitfalls can be avoided in Asia if important modifications are
made to their still nascent educational and scientific systems. First, we
explore the development of distorted incentives for academic research in
a historical context and the contribution that metrics have played in
scholarly publishing. Second, we examine the worldwide assault on
scientific integrity and its impact on the credibility of higher education
and science, and on public trust in their goals.
Third, suggestions for creating a better environment for academic
research are advanced especially in the context of the rapid changes tak-
ing place in Asian higher education. In Asia, and especially in ASEAN
where higher education is less well developed, growth of STEM and
scientific research in general will depend upon addressing the causes and
consequences of scientific misconduct and then employing a prevention
model to deter it.

Scholarly Publishing and the Scientific Enterprise

Consequences of Metrics and Perverse Incentives for Academic Research
Scholarship and the publication of research in top scientific journals are at
the core of academic life. They determine tenure and promotion as well as
decisions regarding salary, external and internal grants and awards, and
ultimately an academic's status in their discipline and profession. The
drive to quantify and rate scholarly activity through sophisticated
metrics began over 40 years ago and has since been adopted worldwide as
a way of assessing individual and institutional performance (cf., Edwards
& Roy, 2017; Van Noorden, 2010). More recently, in the USA, this has occurred
against a backdrop of declining fiscal support for public education,
reduced federal research funding from the National Institutes of Health
(NIH), NSF and the Environmental Protection Agency (EPA), and a
corporate business model adopted by college and university administrators
(Brownlee, 2014).
Metrics and Individual and Institutional Decision-making
While metrics are not inherently bad, there is concern in many
quarters of higher education that they can be manipulated and gamed to
the point of becoming meaningless (e.g., Abbott et al., 2010).
Thus, if you are good at manipulating metrics as an individual or as an
institution, then you may be rewarded more than others who are not as
skilled at such manipulation. When scholarship is measured by how
much a particular article is cited by others, which is the norm today, it
does not necessarily follow that it is relevant or has made an impact in a
positive and socially meaningful way. It also rewards quantity over
quality and, as some have noted (Quake, 2009), fosters a mentality of
publishing as many papers as possible even though they are less
comprehensive in scope. It has also encouraged the manipulation of journal
impact factors (defined as the yearly average number of citations to recent
articles published in a given journal), the rigging of peer review,
overcitation and data dredging by scholars (the attempt to massage and
manipulate data until statistical significance is attained) (e.g.,
Edwards & Roy, 2017).
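To make the two-year impact factor definition concrete, the arithmetic can be sketched as follows; the function name and the figures are purely illustrative and not drawn from any real journal:

```python
def impact_factor(citations_in_year, citable_items_prev_two_years):
    """Two-year journal impact factor: citations received this year to
    articles published in the previous two years, divided by the number
    of citable items published in those two years."""
    return citations_in_year / citable_items_prev_two_years

# Hypothetical journal: 1,200 citations in 2019 to its 2017-2018 articles,
# which numbered 400 citable items in total.
print(impact_factor(1200, 400))  # -> 3.0
```

Because both the numerator (citations, inflatable through coercive citation) and the denominator (what counts as a "citable item") are open to negotiation, the figure is easier to game than its apparent precision suggests.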
The other metric of importance for scientists is the h-index, which
measures the productivity and citation impact of a scholar. It is an
attempt to sum up a scholar's contributions in a single number: a scholar
has an h-index of h if h of their papers have each been cited at least h
times in other publications. It is used for all kinds of important decisions
including tenure and promotion, grant funding and external and internal
awards. But it is terribly flawed in many ways, and critics now say that
it is almost meaningless (cf.,
Rowlands, 2018). The h-index correlates strongly with the numbers of
both papers and citations, and there is substantial uncertainty as to
whether it accurately assesses quality, consistency, longevity or stage of
career. Some disciplines simply publish and cite more than others,
making comparison across disciplines nearly impossible. Also, while it
may be good for traditional journal articles, it does not capture
non-traditional outlets like citations in books and blog posts, influence
on public policy or global health initiatives, patents developed or the
creation of software. The h-index also penalises scholars who tend to
publish as single authors rather than with multiple co-authors. It is
notoriously inaccurate when judging scholars at different stages of their
careers and tends to favour senior scholars over those at the early
stages of their professional life. While most
agree that the h-index should never be used as the sole indicator of a
scholar’s impact and it rarely is, it has been elevated to an undeserved
and overly important metric for decision making in many academic and
scientific venues.

Performance metrics also have damaging effects on institutions. College
rankings in US News & World Report rate institutions in terms of a
number of measures of academic excellence.
The rankings are perceived by parents, potential students and the public
in general as valid quantitative assessments of institutional strengths and
weaknesses. But some institutions manipulate or even submit false data
to US News and World Report so as to boost their rankings (Anderson,
2013). While this constitutes cheating, it appears to happen with regular-
ity and has become an accepted part of the rankings process. The Times
Higher Education (THE) World Rankings drive institutional decision
making in the same way. The largest component of the ranking system
(close to 70%) is based upon the quantity and quality (impact) of research
publications and how much they are cited. There are many problems
with this ranking system. It is heavily biased towards English-language
journals and favours institutions that are heavily invested in STEM. It is
also heavily biased against institutions in developing countries as well as
institutions that do not have graduate programmes and a significant
investment in research. That being said, it is very much a race to the top
and institutions use it as a yardstick to track their performance and that
of their competitors in order to make changes in priorities. One such
change is to incentivise faculty publishing in high-impact journals as a
way of moving up the world rankings ladder. To elevate rankings in par-
ticular disciplines, well-endowed institutions routinely game the system
by hiring faculty from around the world with impressive publication and
grant support credentials. Buying rankings (by buying distinguished sci-
entists) and encouraging faculty to engage in overcitation and multiple
authorship of papers are other ways of cheating to improve institutional
prestige.
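Returning to the h-index discussed above, its computation can be sketched in a few lines. This is a generic illustration of the standard definition; the function name and citation counts are hypothetical, not taken from any ranking service:

```python
def h_index(citations):
    """Largest h such that the scholar has h papers cited at least h times each."""
    h = 0
    # Rank papers from most to least cited; h grows while the paper at
    # rank r still has at least r citations.
    for rank, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= rank:
            h = rank
        else:
            break
    return h

# Two scholars with the same total citations (106) but different profiles:
print(h_index([100, 3, 2, 1]))    # one blockbuster paper  -> h = 2
print(h_index([30, 28, 25, 23]))  # consistent performer   -> h = 4
```

The example illustrates one criticism noted above: total citations are identical, yet the index rewards the steady publisher over the author of a single landmark paper, while saying nothing about discipline norms or career stage.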
Perverted Incentives
With all this being said about the overreliance on metrics, it is interesting
to explore the associated impact of perverse incentives. This has been
articulated by Edwards and Roy (2017) in their important analysis of this
topic. For example, rewarding researchers for an increased number of
publications is intended to improve research productivity, but it has
actually produced many substandard, incremental papers with poor
methods, high false discovery rates and reduced quality of peer review
(Smaldino & McElreath, 2016). This is referred to as 'salami science' or
submitting 'the least publishable unit': splitting what should be one
manuscript into several that are submitted to different journals so as to
generate more publications. Rewarding researchers for an increased number
of citations was
intended to reward quality work that influences other scientists. What it
actually does is inflate citation counts, since peer reviewers often request
citation of their own work. Rewarding researchers for increased grant
funding was designed to ensure that research programmes are funded and
that they generate overhead for their university. What it actually does is
consume valuable time writing proposals, diminish time spent thinking
about research and data, and promote the overselling of positive results
and the ignoring of negative ones. Rewarding programmes and scholars for increased
doctoral student productivity was designed to elevate school rankings
and programme prestige. What it has actually done is lower standards
and create an oversupply of PhDs, many of whom must do several post-
doctoral assignments before landing their first academic post. Reducing
the teaching load for faculty who are active researchers was intended to
free up time to pursue more grants. What it has actually done is increase
the demand for itinerant adjunct faculty to teach classes and oftentimes
lower the quality of instruction and therefore cheapen the value of degrees.
Clearly, there has been a significant shift in the history of science from
quality to quantity, and its impact may be to force the next generation of
young researchers, especially minorities and women, to think more about
individual achievement in contrast to more altruistic objectives like serv-
ing the public good (Diekman, Brown, Johnson, & Clark, 2010). More
importantly, what it also may have done is attract those individuals who
are comfortable with engaging in unethical behaviour to maintain their
careers. Consequently, they become complicit in scientific misconduct. In
contrast, those who are altruistic in their motives end up moving to other
careers to which their unselfishness is better suited. Sadly,
some have concluded that unethical behaviours in science have become
endemic and a part of the culture of modern research to a point where cor-
ruption has become the new normal (Ashforth & Anand, 2003).
The Funding Environment
The external funding environment for research in the USA can only be
described as hypercompetitive and nearly dysfunctional (Daniels, 2015;
Edwards & Roy, 2017; Marcus, 2017). By almost any measure, federal
research and development funding in the USA has stagnated or actually
decreased (e.g., Hourihan & Parkes, 2016). This environment, which has
been exacerbated by an oversupply of young researchers, has dramati-
cally elevated competition for an ever-dwindling source of funding. In
the USA, the intense pressure on faculty members to get federal grants
has produced an array of undesirable consequences. Because of its
hypercompetitive nature (<10% of grant applications are funded), fac-
ulty members often spend inordinate amounts of time revising and
resubmitting grants. The result is a scientific climate that suppresses the
creativity, cooperation, risk taking and the original thinking that is so
important for new discoveries. It breeds conservative, short-term thinking
that produces results measured in terms of dollars rather than good
sense. Postdoctoral scientists, who can spend 10 or more years in their
positions before landing an academic appointment, are especially disad-
vantaged. Because federal grant money is so tight, they may not receive
their first grant until their early 40s, when it may be too late for tenure
and promotion (Daniels, 2015; Gallup & Svare, 2016). There is increas-
ing evidence that this hypercompetitive environment produces reviewer
biases as well as a strong influence of prior success as opposed to scien-
tific merit (Fang & Casadevall, 2016).
According to one study, the peer-review system for federal grants is
approaching the point of becoming arbitrary. Upon analysing the number
of citations and publications resulting from funded NIH projects, Fang
and Casadevall (2016) discovered that excellent productivity was exhib-
ited by some projects with relatively lower scores and poor productivity
by other projects with outstanding scores. Since peer review panels were
unable to make accurate predictions about which projects would have the
greatest impact, the authors concluded that a lottery system would work
just as well. Discouraged by this picture, less-established faculty members
are turning to private foundations for support or are pursuing academic
appointments in other countries. The Canadian system of supporting
scientists, for example, has a great deal to admire: it is able to
support a much higher percentage of its scientists simply by making
more modest grants that do not include salary support or overhead costs.
The funding of scientific research in Asia is starting to mimic what is
seen in the USA in some important ways (Maslog, 2014). Asian countries
taking the lead are China, Japan, South Korea and Singapore. Like the
West, these countries place a high value on publication in peer-reviewed
journals, especially those that are in English and included in the accepted
journal database maintained by the THE World Rankings. As noted earlier,
Chinese scientists can double or triple their salary by publishing in high-
impact journals like Science and Nature. In the Philippines, scientists are
awarded cash incentives of US$2,500 for each paper published in a
Western, English-language peer-reviewed journal. While funding of
STEM in many Asian countries is quite modest, Japan, South Korea and
China devote substantial resources to research and development. In par-
ticular, China will soon outdistance the USA in total dollars spent on
research as well as in the number of yearly publications (Showstack, 2018).
China has seen a fivefold increase in its science budget over the last three
years, and it is no surprise that it (as well as Japan and South Korea) has
experienced some of the greatest increases in scientific misconduct during
that time (www.retractionwatch.com). Thus, the pressure to publish or per-
ish and the pressure to obtain grant funds to support research may be as
strong in certain parts of Asia as it is in the West.

The Assault on Scientific Integrity


The warning signals that the scientific enterprise throughout the world is
at risk of crumbling from its own weight are numerous, shocking and
seemingly overwhelming. I have reviewed some of the issues here as
have others who have examined the problem before me (Daniels, 2015;
Edwards & Roy, 2017; Lilienfeld, 2017a, 2017b). There is substantial
evidence to indicate that today's published research in almost all disciplines
is compromised in many ways by a system that favours quantity over
quality. Present scientific research also lacks replicability, routinely uses
biased data and substandard statistical methods and fails to safeguard
against researcher biases and overhyping and overselling of findings. As
reviewed here, unethical and fraudulent behaviour leading to retractions
and outright falsification of findings are seriously impairing the integrity
of research throughout the world. Some have proposed that this is just
the tip of the iceberg and that, if anything, scientific misconduct is
grossly underreported (e.g., Gobry, 2016).
The Price Paid for Scientific Fraud
In the USA, the Office of Research Integrity (ORI) is charged with
overseeing cases of scientific misconduct.
The cost of handling an individual case is about US$525,000, and
over US$110 million is spent yearly on all cases (Michalek, Hutson, Wicher, &
Trump, 2010). From 1992 to 2012, 291 articles were retracted due to
scientific misconduct at a cost of about US$58 million (Stern, Casadevall,
Steen, & Fang, 2014). The true incidence of scientific misconduct is
difficult to determine, but some surveys indicate that about 1 in 50 scientists
has admitted to misconduct and roughly 14 per cent of scientists knew
of colleagues who did (Chambers, 2014; Fanelli, 2009), though this may
underreport what some believe is a much more pervasive problem.
An honour system that once prevailed in academe and science seems
to have weakened. Self-policing and self-correction, once thought to be
the bedrock culture of higher education, have been reduced to benign
neglect or simply 'looking the other way' and pretending that the problem
does not exist. Furthermore, the mechanisms for reporting scientific misconduct
are not well established and even where formal procedures do exist there
is a reluctance to engage in the process owing to the lack of protection
for whistle-blowers and the negative consequences that often ensue. It
takes courage to call out colleagues for scientific misconduct when you
know that you may be sacrificing your own career by doing so.
Common Features of Scientific and Intercollegiate Sports Fraud
Fraud and corruption in higher education are not limited to the scientific
research enterprise. For example, the history of big-time intercollegiate
sports in the USA is one of a complicit faculty and administration that
commit academic corruption in the name of winning games, tourna-
ments and ultimately more donations from an adoring alumni fan base.
Intercollegiate sports are a multibillion-dollar commercial enterprise in
the USA. Keeping it going often requires admitting grossly unprepared
students and then keeping them academically eligible. This is often done
by creating phony no-show classes, assisting athletes with course
requirements by having tutors perform all the work for them, and by
changing grades or dropping requirements altogether (Svare, 2014). This is
very troubling unethical behaviour in higher education, but it has been
accepted practice and, in fact, has been normalised at many institutions
of higher learning where academic integrity has not been prioritised.
While calls to reform the system are made every year, little has
changed in the last 30 years. To keep up with the arms race, salaries of
coaches have escalated exponentially and are the highest on college
campuses, while ever more expensive new facilities are constructed or older
ones renovated. There is simply too much money at stake to put a stop
to it. Cheating has become endemic and, like clockwork, accompany-
ing every intercollegiate sports season are reports of recruiting scan-
dals, academic corruption, athlete payoffs, athletes’ indiscretions and
gross failures in administrative oversight. Most importantly, many col-
lege athletes end up with no degree or a degree that is not worth the
paper it is written on. This is the reality of intercollegiate sports in the
USA. Like the scientific enterprise, it is often corrupt and those
involved are unwilling to reform it. The fallout is the diminished status
of our institutions of higher learning as the foundation of integrity and
truth-telling in our society.
In US higher education, corruption and misconduct in scientific and
athletic pursuits share many of the same causative factors. Both are
driven by a hypercompetitive culture that promotes the acquisition of
finite resources in an atmosphere of perverse incentives. The idea that
cheating is necessary in order to win seems to have taken hold in the
university scientific establishment as well as in intercollegiate sports. It
is ‘winning at all costs’ that drives poor decision making and the aban-
donment of sound ethical values. While scientific misconduct is a rela-
tively new entry to higher education scandals, it is probably much more
disruptive to core academic values than college athletics corruption. It
has the potential to completely undermine our institutions of higher
learning and the belief that scientific facts can be trusted. Without trust
in science, solutions to society’s most pressing problems will be compro-
mised. Indeed, this is what happens to any organisation when transpar-
ency, integrity and ethical standards are abandoned by those responsible
for maintaining the system. It is a shared responsibility by scientists,
educational institutions and government agencies to ensure that a corrupt
academic culture is not allowed to take root and compromise research
that benefits mankind in fundamentally important ways. If science can
be discredited so easily and the public can be harmed as a result, then
fixing the problems should become our highest priority.

A Case Study of Current Academic Practices


Before recommendations for reform are advanced, it is important to
discuss a case I am familiar with at another institution in order to
make larger points about the serious problems we face regarding scholarly
publishing, competing for research grants, perverse incentives, scientific
misconduct and individual career and institutional goals. The names have
been changed and some minor details altered, but the stories are factual.

Professor Bob Smith and Professor Tim Jones


Two faculty members at the same large research-intensive institution,
Bob Smith and Tim Jones, both full professors, could not be more
different in their career journeys in higher education. Professor Smith is an
academic star. He is a serious data-driven scientist who routinely
publishes in high-impact, high-quality journals. He has a very high h-index
and has produced a steady stream of highly cited articles for a very long
period of time. To his credit, he has many citation classics (e.g., articles
that have been cited at least 100 times by other scientists). His work, which has
opened new doors to fundamental scientific questions that have helped
to resolve long-running scientific debates and shape public policy, is well
regarded in the scientific community. His scholarship is characterised by
a relatively high degree of ‘risk taking’ and he ‘thinks outside the box’ on
many controversial issues in his discipline. He is asked to speak at con-
ferences throughout the world, is routinely referenced in leading news-
paper and magazine articles and frequently appears on prestigious
science television programmes as an expert. Professor Smith has never
had a federal or a private grant but has from time to time completed
applications that were not funded. He is highly resourceful and imagina-
tive in his work, collaborates with many other scientists worldwide, and
makes do with very modest internal funding support from his
institution. He is considered to be among the top three in his discipline and
has won a number of awards from professional societies for his research
and teaching. He has been routinely passed over for merit awards by his
own institution in spite of his publishing and teaching record. He is a
great colleague and is known for helping others, both within his institu-
tion and outside it. Even though he is demanding in the classroom,
undergraduate and graduate students flock to him because he has the
ability to make science ‘come alive’ and stimulate even the most cynical
and disengaged audience. He has successfully mentored a large number of doctoral students who are now shaping the next era of scientists in his discipline. Professor Smith is the consummate academician on many different levels.
Professor Jones has also produced a steady stream of publications,
has published many more articles than Professor Smith, but they tend to
be short papers that at one time would simply have been called ‘brief communications’. The work is programmatic but in many ways uninspiring. His h-index is quite modest. He has no citation classics to his
credit. His work has jumped from one trend to another and typically ‘fol-
lows the money’. He has a strong record of funded research but is con-
stantly writing and submitting many grants every year. His articles are
published in good but not great journals that would generally be classi-
fied as second tier and lower impact. His published work has been
described by some as ‘unimaginative’, ‘workaday’, ‘opportunistic’ and merely ‘dotting i’s and crossing t’s’. His work is not particularly well
known and seems to have had little impact upon the larger, outside com-
munity. He is rarely called upon by the media for commentary nor does
he participate very much in professional conferences. He is not particu-
larly well known in his profession. He is described by some as being
‘insular’, not a particularly dynamic or thoughtful teacher and mentor,
and a ‘difficult to get along with’ colleague. The doctoral students he has
trained have not distinguished themselves in their academic and non-
academic positions. Many students have left his laboratory in the early stages of their training because of issues related to poor mentoring and
suspect ethics. The faculty at his institution have provided evidence of
scientific misconduct but administrators have largely ignored it and
swept it under the rug. Instead, they actually enabled future unethical
behaviour of Professor Jones by providing internal rewards for research
and mentoring excellence. To outside observers, it appears as though the
institution just wanted to keep the steady stream of grant money flowing
instead of dealing with the thorny issue of misconduct.
Professor Smith was recommended for the distinction of distin-
guished professor by his faculty. In spite of the fact that the impact
(total citations and h-index) and quality of his work outdistanced
almost every faculty member who had been promoted to distinguished
professor at that institution, he was denied promotion by his higher
administration, the ostensible reason being that he had not received
extramural grant support. In recent cases at that institution,
faculty who were granted the status of distinguished professor had gar-
nered large federal grants even though their publication records were
hardly that impressive. Professor Jones has not as yet been recom-
mended for promotion to distinguished professor. But if he is, you can
bet that his history of grant funding will weigh very heavily in the deci-
sion making.

The Money Chase that Drives Perverted Incentives and Misconduct
The above tale of two professors is hardly unique in higher education
today. But it is one that gives everyone considerable pause because it
reflects the serious erosion of academic values and the steadily creeping
culture of the corporatisation of higher education. Money dictates scien-
tific agendas, institutional decision making and individual career deci-
sions, and it does so in a pervasive ‘take no prisoners’ manner.
The importance of the money chase in higher education can’t be overemphasised (Gallup & Svare, 2016; Lillienfeld, 2017a, b). Without question, there are very good reasons for faculty to apply for research grants.
Some science is especially expensive to perform and requires sophisti-
cated equipment and advanced technology as well as additional person-
nel. Other science is not so expensive or costs nothing at all, especially
in more theoretical subject areas. But should a scientist be judged on
how much grant money they bring in? It is a fact of current academic life
that endowed chairs along with named and distinguished professorships
are about external funding and often not about impactful research.
Clearly, titled professorships are being bought and sold on the basis of
external grant funding.
As evidenced by the case of Professor Smith above, grants are not
needed to do good research. When Nicholson and Ioannidis (2012) ana-
lysed researchers who authored articles that were cited 1,000 or more
times, they found that most of the scientists had no current external fund-
ing. Likewise, no one has ever been awarded a MacArthur genius award,
a Nobel prize or a Fulbright scholar award based upon the dollar value of
their external grant support.
Scholarly merit should not be gauged by grant success, but increas-
ingly this is happening in the modern academic world. Issues of hiring,
tenure and promotion are now dependent upon whether or not you bring
in substantial grant money, especially from federal sources that provide
overhead. The impact of the scholarly research that a faculty member
publishes is almost irrelevant. Even if you publish research that finds its
way into high-impact journals, you run the risk of being fired if you
don’t bring in external funding.

The Major Negative Consequences of the Grant Culture and Perverse Incentives
Just how can we effectively deter misconduct in science and higher edu-
cation today and simultaneously create a better environment for aca-
demic research to thrive? The answer to this question is threefold. First,
and foremost, science and the academy must admit that they have a huge problem on their hands and cannot simply wish it away with time.
Second, there must be significant ‘outside the box’ thinking and the will-
ingness to dramatically modify, maybe even dismantle, some parts of the
current system in order to provide a permanent solution. Finally, there
must be recognition that multiple strategies will be needed to disinfect
science and the academy of the misconduct that presently ravages the core
of our important institutions. Clearly, the possible solutions for this prob-
lem are especially crucial for many less developed ASEAN countries, as
well as other emerging and developing countries of the world, where
higher education and scientific publishing are just beginning to build
strength. Science is in its infancy in many of these countries and the
expectations for it to solve human problems will only escalate as infra-
structure is built and the development of higher education progresses.
It is instructive to first enumerate the major negative consequences of
the grant culture and perverse incentives for scholarship.
There is far too much pressure, and there are too many incentives, to publish in high-impact journals and to obtain grants to support research. As
a result, scientific misconduct is increasing, even in developing regions
of the world where this was not a problem just a few years ago. The
analysis and reporting of scientific misconduct have become a cottage
industry with its own set of analytics, researchers and reporters. It is
certainly needed, but it is a sad commentary on the state of higher educa-
tion and science that it exists.
Scientists are engaging in questionable research practices in order to win grants, driven by the fear of losing funding and of losing the personnel they could no longer support. Negative results are
often dismissed and confirmation bias, the tendency to favour one’s
hypothesis while dismissing others, becomes entrenched.
Scientists are becoming increasingly specialised and single minded in
their research focus. Programmatic research is important because of the
complexity of research questions examined today. However, it often
leads to a narrowed perspective on research aims and a failure to see
things from a much broader perspective. While this is what is rewarded
by granting agencies, it is an impediment to collaborative and interdisci-
plinary research that is more representative of ‘outside the box’ thinking.
Moreover, both creativity and risk taking are diminished and a strategy
of only engaging in ‘safe’ research becomes normalised.
There are serious replication problems in many branches of science.
When making the next big discovery becomes the dominant theme in science, as it is today, replication becomes a secondary priority. This
is dangerous on a number of different levels. It cheapens the scientific
process by lowering incentives for replication of research and puts new
research findings in the category of ‘accepted and proven’ with only
superficial scientific scrutiny.
Publishing of research can be critiqued on a number of different levels, but none more important than the tendency of many scientists to go
well beyond their results and over speculate about the implications of
their research. This happens in grant applications, research reports and
commentary to the press. It results from the unending pressure to justify
one’s research in the hypercompetitive environment that presently exists.
With few exceptions, scientists are spending far too much time
constantly applying for grants to support their work. Because a break in
funding can devastate the overall continuity of a research programme, it
is incumbent upon scientists to submit many applications with the hopes
that one will eventually be successful. It is called the ‘buckshot’ approach
to scientific funding, and it has taken hold among many junior faculty looking
to maximise the possibility that they will find money for their research.
The process of constantly writing as many grant proposals as possible is
extremely time consuming and mentally draining. It takes away from the
more deliberate long-term thinking needed to view the larger picture of
a scientist’s research programme.

The Courage to Reform

Creating a Better Environment for Psychology, Science and Higher Education
The culture of higher education and science has never been perfect.
However, in the present climate, there is a sense that we have reached a
critical tipping point. Eruptions of scientific misconduct and evidence of
perverse incentives have so tainted the present culture of higher education
that some believe that radical reform may be needed (cf., Edwards &
Roy, 2017; Gallup & Svare, 2016; Lillienfeld, 2017a, b). Some
recommendations are advanced here to reform higher education and
science. The courage to implement them will rest squarely upon our
leaders in our universities, professional societies and granting agencies.
They include the rolling back or elimination of some incentives that
presently exist, a fundamental rethinking of the process whereby research
grants are allocated and how they currently determine so much about
career advancement, a re-evaluation of how scholarly publications are
evaluated and used for the basis of promotions, awards, and the
distribution of grants, and the implementation of a more serious emphasis
upon scientific ethics. If we avoid change by not implementing many of
the suggestions below, then the scientific community has no one to blame
but itself for continued scientific misconduct.
A serious attempt must be made to actually quantify the extent of the
scientific misconduct problem. The watchdog organisation, The Center
for Scientific Integrity, has performed outstanding service to the science
and higher education community by documenting retracted journal arti-
cles. This is important work which helps to publicise when scientific
misconduct takes place, where, and by whom. However, it does not quantify the real extent of the problem, how deep it may run and how much
of it may never get reported to the public (Edwards & Roy, 2017). Just
who should engage in this type of fact-finding is unclear, but it should
probably include a combination of professional societies, government
agencies, higher education faculty and administration, and national and
international academies. Representatives of all these groups, funded by
their respective organisations, should convene to develop methods to
research the problem, produce an open access report quantifying scien-
tific misconduct worldwide and then propose best practice guidelines for
preventing it in the future. This is not an easy task that will be accom-
plished quickly, but rather one that will take time, resources and contri-
butions from many different sectors of the scientific and higher education
community. Failure to engage in this type of analysis in an expeditious
manner will further endanger science and the life-changing decision
making that emerges from it.
More must be done to prioritise the teaching of ethical behaviour.
Albert Einstein (2014) once said ‘Most people say that it is the intellect
which makes a great scientist. They are wrong: it is character’. There are
healthy debates in education concerning whether or not character can be
taught, whether it is nature or nurture and whether it matters at all in
some cases of scientific misconduct. That being said, in the face of esca-
lating scientific misconduct in the world today, many institutions of
higher learning are taking the initiative to develop classes in science eth-
ics (Kabasenche, 2014). These collaborative efforts often include courses
that are co-taught by those in ethics, philosophy and the life sciences.
The subject matter includes real life situations that scientists are con-
fronted with including the incentives and pressures that could lead to
cheating. Such courses should be required and should be taught, at a
minimum, in both undergraduate and graduate curricula. In some
instances, this may be too late; hence, the introduction of formal courses
in scientific ethics could begin in high school or even earlier. As a cor-
relate of teaching scientific ethics, some have even suggested that it is
critically important at this time to promote the ideal of practising science
as a service to humanity (Edwards & Roy, 2017; Huber, 2014). After all,
science is performed for the public benefit and those who may become
interested in the profession as a career need to understand their responsi-
bilities to be both ethical and altruistic.
We must rethink the use of metrics in hiring, promotion and grant
funding decisions. There are over two million research articles published
annually in over 28,000 journals, and this is escalating at a rate of 3.26
per cent a year and doubling every 20 years (Hoffman, 2017). Most scientists find it difficult, if not impossible, to keep up with this proliferation of scientific content, let alone guarantee the integrity and authenticity of the scholarship. Some journals are favoured over others because of
metrics. In this article, we have discussed several metrics that tradition-
ally have been used for important decisions in higher education and sci-
ence. Clearly, journal impact factors and h-index metrics have taken on
a life of their own and drive too much important decision making today.
As reviewed here and by others (Edwards & Roy, 2017; Hoffman, 2017),
they are fundamentally flawed metrics and it is time to question our pri-
mary reliance upon them for assessing impact. Citation counts are noto-
riously low. In one study (Remler, 2014), 12 per cent of medicine articles
were never cited, nor were 27 per cent of natural science papers, 32 per
cent in the social sciences and 82 per cent in the humanities (Hoffman,
2017). According to the editorial board of the prestigious journal Nature
(2005), 89 per cent of the journal’s impact factor of 32.2 could be attrib-
uted to just 25 per cent of the papers published. Also, citations to books,
blog posts and social media accounts as well as creation of software and
other products are not taken into consideration when measuring the
impact of a scientist’s scholarly work. Social media and blog posts are
outlets where the public receives much of its scientific news, in contrast to little-read academic journals. These alternative sources are probably far more likely to move public opinion than traditional citation metrics. One could argue that we need to weigh these non-traditional sources as much as, if not more than, traditional citation measures when it comes time for important decisions on the awarding of grants,
hiring of faculty and tenure and promotion decisions. Citation metrics
are not unimportant, but overreliance upon them is both dangerous and
misleading. Clearly, they do not always reflect quality research. For
example, with respect to journal impact factors, there is a great deal of
excellent research published in good journals that have much lower
impact factors than the premier luxury journals such as Science and
Nature. But these expensive subscription journals have a ‘brand’ of qual-
ity science and are heavily preferred by scientists over other journal
outlets because they publish work that is controversial, provocative and
makes waves. At least one Nobel Laureate, Randy Schekman, has now
led a boycott of these journals because he feels they encourage cutting corners and scientific misconduct while discouraging other important work such as replication studies (Schekman, 2013). He further
argues that there are good open access journals that publish excellent
work that is free to anyone to read and does not have expensive subscrip-
tions to promote. Finally, Schekman notes that committees deciding on
grant funding and hiring and promoting of scientists need to be told that
scientific papers should not be judged on the basis of where they are
published. Instead it is the quality of the research and not the journal’s
brand that really matters. Likewise, institutions that make decisions that
are heavily reliant upon world rankings data like the THE World Ranking
System must understand that about 70 per cent of such rankings are heav-
ily weighted towards citation metrics (i.e., counts and journal impact
factors). In view of the fallibility of those metrics, important decisions
on individual scientists as well as institutional priorities may be subject
to considerable error.
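Since so much hangs on the h-index, it is worth being concrete about what the metric does and does not capture. An author's h-index is simply the largest number h such that h of their papers have each been cited at least h times. A minimal sketch in Python (the two publication records are hypothetical, invented purely for illustration):

```python
def h_index(citations):
    """Return the largest h such that h papers have at least h citations each."""
    h = 0
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank  # the rank-th most-cited paper still clears the bar
        else:
            break
    return h

# Hypothetical records: a short list of highly cited papers versus
# a long list of rarely cited ones.
impactful = [450, 300, 120, 95, 60, 40, 8]            # 7 papers
prolific = [6, 5, 5, 4, 4, 3, 3, 2, 2, 1, 1, 0] * 3   # 36 papers

print(h_index(impactful))  # 7: nearly every paper is heavily cited
print(h_index(prolific))   # 5: sheer volume barely moves the metric
```

Note how insensitive the number is to what a paper actually contains: a citation classic and a routine report count identically once both clear the threshold, which is one reason overreliance on the metric is misleading.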
Some incentives for scholarship need to be dramatically revised or
even eliminated and new ways of assessing the integrity of scholarship
need to be implemented. There have always been incentives for engaging
in scholarly publishing in higher education. Promotion and tenure, salary
adjustments by way of merit raises and internal and external awards and
recognition are a few of the better-known rewards used in higher educa-
tion to shape faculty scholarship. Incentives are not inherently bad, but
they have become so extreme, so excessive in some cases, that they have
produced unintended negative consequences far beyond what anyone
could have predicted years ago. As reviewed in this article, monetary
rewards increasingly are being used as an incentive for publishing in
high impact branded journals (e.g., Science, Nature, and Cell) (Abritis &
McCook, 2017). Doubling or even tripling a scientist’s salary through a
single publication in a high-impact journal is a perverse distortion of the
reward system. Such rewards undoubtedly increase productivity, but
they also drive the mentality of publishing for publishing’s sake and
gaming the system instead of pursuing answers to important research
questions. At present, the outlandish rewards for publishing seem to out-
weigh the risks of being caught for misconduct. Certainly, a good start
would be to reduce the very large rewards for publishing that seem to
have proliferated, especially in various Asian countries. But this alone
will not curb scientific misconduct because many researchers will still be
driven to publish in branded luxury journals for reasons of promotion
and the awarding of grant funding. Also, scientific fraud persists because
there is nothing to really prevent it from occurring. But a recent proposal
called the ‘prepublication audit’ might be part of the answer to the thorny
problem of misconduct and perverse incentives. As articulated by Iorns
(2013) and modified by Lossie and Mane (2016), the audit system would
work in a preventative manner to curb scientific misconduct. An inde-
pendent panel of scientists funded by professional societies, journals,
universities and granting agencies would operate on a fee-for-service
basis to audit a certain percentage (maybe 3%–5%) of journal submis-
sions each year. The submissions would be randomly drawn and the
audit by experts would thoroughly examine every component of the sub-
mission including the raw data, statistical analysis, methodology, table
and graph presentation, conclusions and reliability of reference informa-
tion. Because the audits would be randomised, all authors would per-
ceive an equal risk of being examined. The examining body would be
independent of author-affiliated institutions (universities, hospitals, etc.)
and journal editorial boards. The audit report would then be sent to the
editors of the intended academic journal for review and the article, if
published, would include an acknowledgement in the manuscript that it
was reviewed by the examining panel and a link to their report would be
provided. The audit system has a number of positive features. It provides
unbiased verification that the experiments were conducted ethically, that
the statistics were computed correctly and the conclusions were based on
the data rather than on their potential appeal to the media. A prepublication audit could also provide the opportunity for authors to choose to be
audited as an expression of their confidence in their research. The audit
system has tremendous potential but only to the extent that a broad spec-
trum of those in the scientific community buy into it and are willing to
monetarily support it.
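The deterrent value of the prepublication audit rests on uniform random selection: every submission faces the same probability of a full examination. A minimal sketch of the sampling step (the function name and the 4 per cent rate are illustrative assumptions, not details of the Iorns or Lossie and Mane proposals):

```python
import random

def select_for_audit(submission_ids, audit_rate=0.04, seed=None):
    """Uniformly sample a fraction of submissions for a full prepublication audit.

    Because the draw is uniform, every author perceives the same risk of
    having raw data, statistics and methodology independently examined.
    """
    rng = random.Random(seed)  # seedable so the draw can be independently verified
    k = max(1, round(len(submission_ids) * audit_rate))
    return sorted(rng.sample(submission_ids, k))

# e.g., a journal receiving 500 submissions a year would audit about 20 of them
year_submissions = [f"MS-{i:04d}" for i in range(500)]
audited = select_for_audit(year_submissions, audit_rate=0.04)
print(len(audited))  # 20
```

Publishing the seed alongside the draw would let outside observers verify that the sample was genuinely random rather than steered away from favoured laboratories.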
The external funding culture must be changed to take the pressure off
of scientists. Never before have there been so many scientists, new and
old, competing for a finite amount of grant money to do their research.
The sources of funding include government agencies, industry and pri-
vate foundations. This article has reviewed the consequences of this
pressure, and others have also highlighted some of the same themes
(Daniels, 2015; Edwards & Roy, 2017; Gallup & Svare, 2016; Lillienfeld,
2017a, b). Short of a massive infusion of more money into the system, a
highly unlikely scenario, there are at least five measures which can be
taken to relieve the pressure. First, institutions, especially those that are
heavily endowed, must do more to provide internal funding for scien-
tists, not just for those in the early stages of their career, but for those
during their entire career path. Second, at least in the USA, more must be
done to reduce or eliminate overhead rates (the Canadian model) that
colleges and universities negotiate with federal granting agencies. This
will increase the amount of money to researchers for the actual support
of their research. Third, the requirement that a young scientist receive an
individual grant (called an R01 in the USA) in order to be promoted and
tenured needs to be dropped. This requirement is simply out of touch
with reality in that there is not enough money (public or private) to con-
tinue it. Fourth, the peer review system is broken and, as noted earlier in
this article, has become arbitrary. What replaces it is uncertain and the
subject of frequent heated debate. However, at a minimum, any new sys-
tem must reward quality as well as replicability. Quantity should be on
the back burner. The system would also benefit from some form of penalty for publishing poor-quality research. Fifth, there
are too many young scientists who drop out of science altogether because
of the poor funding climate. The current method of training scientists
prepares them only to be scientists and does little to help them progress
to other career paths. More must be done to create and reinforce viable
professional paths other than research. At present, these are difficult to
find and often require significant retraining.

Asia Can Learn from the West’s Mistakes


The template for the development of successful psychology and STEM
programmes and higher education systems in general started in the West
and then slowly migrated to other parts of the world. Due to globalisation and internationalisation, these practices are now travelling more rapidly to
Asia. There will soon come a point where East and West will probably be
indistinguishable in many of their scientific and educational practices.
However, at present, many developing Asian universities lag behind the
West in the quality of their programmes.
When examining the top 1,000 universities in the world as reported by
the THE World Ranking System (Times Higher Education, 2019), only a
few Asian countries have a strong representation; those being Japan (103
institutions in the top 1,000), China (72), India (49) and South Korea
(29). ASEAN countries in the top 1,000 include Thailand (11 institutions),
Malaysia (11), Indonesia (5), the Philippines (2) and Singapore (2).
Vietnam, Laos, Cambodia, Myanmar (Burma) and Brunei have no
universities in the top 1,000, but this is understandable given the slow
rate of economic growth in many of these countries. An examination of
psychology alone in this ranking system yields a similar low level of
development for our discipline. Japan (27 institutions in the top 1,000),
China (24), South Korea (14), India (14), Taiwan (10) and Hong Kong
(4) have relatively strong representation. ASEAN countries with
representation in the top 1,000 include Malaysia (5 institutions in the top
1,000), Thailand (2), the Philippines (2) and Indonesia (1). Once again,
Vietnam, Laos, Cambodia, Myanmar and Brunei have no psychology
departments in the top 1,000.
There are powerful cultural, economic and historical factors that have
severely limited the growth of psychology in ASEAN (Svare, 2011,
2018). However, because of Asia’s growing population and wealth, its
rising educated population and the resulting needs for understanding and
treating behaviour disorders, psychology will grow dramatically in the
next century. Some have even predicted that by the mid-21st century,
much of the world’s scientific work in psychology will be done in Asia
by Asians (Miller, 2006). A reflection of this is the impressive research
in areas such as experimental psychology, educational psychology,
behavioural neuroscience, cognitive psychology, industrial/organisa-
tional psychology and clinical psychology that is already being done in
Japan, China, South Korea, India, Taiwan and Hong Kong. ASEAN will
soon follow. Psychology is clearly poised for further significant develop-
ment in Asia, especially in ASEAN countries, where the needs for infra-
structure and manpower in servicing mental health needs are great.
The more immediate concerns of most developing ASEAN countries
presently are in five key areas: making higher education more available
and accessible (e.g., the percentage of students completing university
education is still quite low, at 3%–10%), curriculum reform (e.g., getting
away from rote memorisation and high stakes testing), accreditation (e.g.,
ensuring that students get an education as measured by national and world
standards), teacher quality (e.g., escalating credentialing such that all
teachers have at least a master’s degree and preferably a doctoral degree)
and alternative ways of teaching (e.g., student centred and active learning
that requires the development of critical thinking) (Temmerman, 2019).
These steps, along with infrastructure development and the building of
more colleges and universities, are certain priorities for the immediate
future. Many ASEAN countries still have a long way to go before arriving
at a place where they can actually provide the kind of education that will
meet the basic demands of their region’s economic growth. However, the
potential is there in many developing Asian countries to provide the
needed investments for growing and reforming higher education.
In ASEAN and other Asian countries, preventing scientific misconduct and dialling down the pressure to obtain research grants and publish in high-impact journals may presently only be a long-term goal.
Psychological science, like other STEM areas, will experience a rise in
both the quantity and quality of research from this region. It is also
reasonable to predict that it will experience a commensurate rise in
scientific misconduct. There are signs that this region is poised to
experience a more accelerated pace of economic growth as it continues
to open up to the West. Ultimately this will hasten the further develop-
ment of psychology and STEM areas that will compete regionally and
nationally for faculty and students. Coincident with this development
will also come the inevitable problems of scientific misconduct and
perverted incentives that have occurred elsewhere. Dealing with these
problems will not be easy unless a plan is shaped beforehand to defuse them or, better yet, to prevent them from occurring in the first place.
Therefore, this presents an important opportunity to plan for the future
and be proactive.
As the title of this article suggests, there are danger signals for Asian
higher education that require prudent future planning. As noted here,
Western systems of scientific inquiry and higher education have been
adopted in much of Asia. They are not perfect systems and they have
resulted in serious problems of distorted incentives and breaches of
scientific integrity. Plans to remove what is bad about these practices
should not in any way detract from what is good about Western educa-
tional and scientific practices. Education and government leaders in
Asia and ASEAN in particular have the opportunity to be proactive and
build something from scratch. If the mistakes of other countries are
seriously studied, and some of the reform measures advanced here are
adopted at the outset, then constructing new systems to prevent scien-
tific misconduct will be easier and more likely to succeed. This should
be the long-term goal of ASEAN and the developing Asian higher edu-
cation community.

Declaration of Conflicting Interests
The author declared no potential conflicts of interest with respect to the research,
authorship and/or publication of this article.

Funding
The author received no financial support for the research, authorship and/or
publication of this article.

References
Abbott, A., Cyranoski, D., Jones, N., Maher, B., Schiermeier, Q., & Van Noorden,
R. (2010). Metrics: Do metrics matter? Nature, 465, 860.
Abritis, A., & McCook, A. (2017, August 11). Cash incentives for papers go global.
Science. Retrieved from http://science.sciencemag.org/content/357/6351/541
Anderson, N. (2013, February 6). Five colleges misreported data to US
News, raising concerns about rankings, reputation. The Washington Post.
Retrieved from https://www.washingtonpost.com/local/education/five-
colleges-misreported-data-to-us-news-raising-concerns-about-rankings-
reputation/2013/02/06/cb437876-6b17-11e2-af53-7b2b2a7510a8_story.
html?utm_term=.4db78f335c86
Ashforth, B. E., & Anand, V. (2003). The normalization of corruption in
organizations. Research in Organizational Behavior, 25, 1.
Bhattacharjee, Y. (2013, April 26). The mind of a con man. The New York
Times. Retrieved from https://www.nytimes.com/2013/04/28/magazine/diederik-
stapels-audacious-academic-fraud.html
Brainard, J., & You, J. (2018). What a massive database of retracted papers
reveals about science publishing’s ‘death penalty’. Science. Retrieved
from https://www.sciencemag.org/news/2018/10/what-massive-database-
retracted-papers-reveals-about-science-publishing-s-death-penalty
Brownlee, J. K. (2014). Irreconcilable differences: The corporatization of
Canadian universities (Doctoral dissertation). Carleton University. Retrieved
from https://curve.carleton.ca/system/files/etd/b945d1f1-64d4-40eb-92d2-
1a29effe0f76/etd_pdf/2fbce6a2de5f5de090062ca7af0a4b1e/brownlee-irrec
oncilabledifferencesthecorporatization.pdf
Chambers, C. (2014). The changing face of psychology. The Guardian. Retrieved
from https://www.theguardian.com/science/head-quarters/2014/jan/24/the-
changing-face-of-psychology
Couzin-Frankel, J. (2014, May 30). Harvard misconduct investigation of
psychologist released. Science. Retrieved from https://www.sciencemag.org/
news/2014/05/harvard-misconduct-investigation-psychologist-released
Cyranoski, D. (2018, June 8). China introduces sweeping reforms to crack down
on academic misconduct. Nature. Retrieved from https://www.nature.com/
articles/d41586-018-05359-8
Daniels, R. J. (2015). A generation at risk: Young investigators and the future of
the biomedical workforce. Proceedings of the National Academy of Sciences,
112(2), 313–318.
Diekman, A., Brown, E. R., Johnson, A. M., & Clark, E. K. (2010). Seeking
congruity between goals and roles: A new look at why women opt out of
science, technology, engineering, and mathematical careers. Psychological
Science, 21, 1051.
Edwards, M. A., & Roy, S. (2017). Academic research in the 21st century:
Maintaining scientific integrity in a climate of perverse incentives and
hypercompetition. Environmental Engineering Science, 34(1), 51–61.
Einstein, A. (2014). The world as I see it. New York, NY: CreateSpace.
Fanelli, D. (2009). How many scientists fabricate and falsify research? A
systematic review and meta-analysis of survey data. PLoS ONE, 4, e5738.
Fang, F. C., & Casadevall, A. (2016). Research funding: The case for a modified
lottery. mBio, 7, e00422.
Gallup, G. G., & Svare, B. (2016, July 25). Has higher education been hijacked by
the external funding game? Inside Higher Education. Retrieved from https://
www.insidehighered.com/views/2016/07/25/undesirable-consequences-
growing-pressure-faculty-get-grants-essay
Gobry, P. E. (2016, February 24). Big science is broken. The Week. Retrieved
from https://theweek.com/articles/618141/big-science-broken
Gross, C. (2016). Scientific misconduct. Annual Review of Psychology, 67, 693–711.
Hesselmann, F., Graf, V., Schmidt, M., & Reinhardt, M. (2017). The visibility of
scientific misconduct: A review of the literature on retracted journal articles.
Current Sociology, 65(6), 814–845.
Hoffman, A. J. (2017, March 28). In praise of ‘B’ journals. Inside Higher Education.
Retrieved from https://www.insidehighered.com/views/2017/03/28/academics-
shouldnt-focus-only-prestigious-journals-essay
Hourihan, M., & Parkes, D. (2016, December 19). Federal R & D budget trends:
A short summary. American Association for the Advancement of Science.
Retrieved from https://www.aaas.org/news/federal-rd-budget-trends-summary
Huber, B. R. (2014, September 22). Scientists seen as competent but not trusted
by Americans. Woodrow Wilson Research Briefs. Retrieved from http://wws.
princeton.edu/news-and-events/news/item/scientists-seen-competent-not-
trusted-americans
Iorns, E. (2013, February 20). Solving the research integrity crisis. Science
Exchange. Retrieved from https://blog.scienceexchange.com/2013/05/
solving-the-research-integrity-crisis/
Jarrett, C. (2016, September 16). Ten famous psychology findings that it’s been
difficult to replicate. British Psychological Society Research Digest. Retrieved
from https://digest.bps.org.uk/2016/09/16/ten-famous-psychology-findings-
that-its-been-difficult-to-replicate/
Kabasenche, W. P. (2014). The ethics of teaching science and ethics: A collaborative
proposal. Journal of Microbiology and Biology Education, 15(2), 135–138.
Lilienfeld, S. (2017a). Psychology's replication crisis and the grant culture:
Righting the ship. Perspectives on Psychological Science, 12(4), 660–664.
Lilienfeld, S. (2017b). Seven costs of the money chase: How academia's focus
on funding influences scientific progress. APS Observer, 30(8), 13–15.
Lossie, A., & Mane, V. (2016, February 4). Do scientists need audits?
Retraction Watch. Retrieved from https://retractionwatch.com/2016/02/04/
do-scientists-need-audits/
Marcus, J. (2017, September/October). The looming decline of the public
research university. Washington Monthly Magazine. Retrieved from https://
washingtonmonthly.com/magazine/septemberoctober-2017/the-looming-
decline-of-the-public-research-university/
Maslog, C. (2014). Asia-Pacific analysis: Addressing science fraud in Asia.
SciDevNet. Retrieved from https://www.scidev.net/asia-pacific/r-d/columns/
asia-pacific-analysis-addressing-science-fraud-in-asia.html
Mathews-King, A. (2018, May 4). Who is Andrew Wakefield and what did
the disgraced MMR doctor do? Independent. Retrieved from https://www.
independent.co.uk/news/health/andrew-wakefield-who-is-mmr-doctor-anti-
vaccine-anti-vaxxer-us-a8328326.html
Michalek, A. M., Hutson, A. D., Wicher, C. P., & Trump, D. L. (2010). The costs
and underappreciated consequences of research misconduct: A case study.
PLoS Medicine, 7, e1000318.
Miller, G. (2006). The Asian future of evolutionary psychology. Evolutionary
Psychology, 4, 107–119.
National Science Foundation. (2014). NSF approved STEM fields. Retrieved from
https://www.btaa.org/docs/default-source/diversity/nsf-approved-fields-of-
study.pdf?sfvrsn=1bc446f3_2
Nature Editorial Board. (2005). Not-so-deep-impact. Nature, 435(7045), 1003–1004.
Nicholson, J. M., & Ioannidis, J. P. (2012). Research grants: Conform and be
funded. Nature, 492, 34–36.
Normile, D. (2017a, August 1). University of Tokyo probe says chromosome
team doctored images. Science. Retrieved from https://www.sciencemag.org/
news/2017/08/university-tokyo-probe-says-chromosome-team-doctored-
images
Normile, D. (2017b, July 31). China cracks down after investigation finds massive
peer-review fraud. Science. Retrieved from https://www.sciencemag.org/
news/2017/07/china-cracks-down-after-investigation-finds-massive-peer-
review-fraud
Qin, A. (2017, October 13). Fraud scandals sap China’s dream of becoming
a science superpower. The New York Times. Retrieved from https://www.nytimes.
com/2017/10/13/world/asia/china-science-fraud-scandals.html
Quake, S. (2009, February 10). Letting scientists off the leash. The New York
Times Blog.
Remler, D. (2014, April 28). How few papers ever get cited? It’s bad but not that
bad. Social Science Space. Retrieved from https://www.socialsciencespace.
com/2014/04/how-few-papers-ever-get-cited-its-bad-but-not-that-bad/
Rowlands, I. (2018, March 23). Is it time to bury the h-index? Bibliomagician.
Retrieved from https://thebibliomagician.wordpress.com/2018/03/23/is-it-
time-to-bury-the-h-index/
Sachan, D. (2018, September 7). India’s early efforts to tackle scientific fraud fail to
impress. Chemistry World. Retrieved from https://www.chemistryworld.com/
news/indias-early-efforts-to-tackle-scientific-fraud-fail-to-impress/3009482.
article
Sang-Hun, C. (2009, October 26). Disgraced cloning expert convicted in South
Korea. The New York Times. Retrieved from https://www.nytimes.com/2009/10/27/
world/asia/27clone.html
Schekman, R. (2013, December 9). How journals like Nature, Cell and Science
are damaging science. The Guardian. Retrieved from https://www.theguardian.
com/commentisfree/2013/dec/09/how-journals-nature-science-cell-damage-
science
Servick, K. (2018, September 21). Cornell nutrition scientist resigns after
retractions and research misconduct finding. Science. Retrieved from https://
www.sciencemag.org/news/2018/09/cornell-nutrition-scientist-resigns-
after-retractions-and-research-misconduct-finding
Showstack, R. (2018, February 20). China may soon surpass the United States in
R&D funding. EOS. Retrieved from https://eos.org/articles/china-may-soon-
surpass-the-united-states-in-rd-funding
Smaldino, P., & McElreath, R. (2016, September 16). The natural selection of bad
science. The Royal Society. Retrieved from https://royalsocietypublishing.
org/doi/full/10.1098/rsos.160384
Stern, A. M., Casadevall, A., Steen, R. G., & Fang, F. C. (2014). Financial costs
and personal consequences of research misconduct resulting in retracted
publications. eLife, 3, e02956.
Svare, B. (2011). Assessing psychology in Thailand. International Psychology
Bulletin, 15(2), 21–26.
Svare, B. (2014). Telling the truth about intercollegiate sports: Time to expose
faculty corruption and the ‘Big Lie’. In K. Spracklen (Ed.), Sport: Probing
the boundaries (pp. 116–140). Oxford, UK: Inter-Disciplinary Press.
Svare, B. (2018). Spreading the discipline of psychology in Thailand: Reflections
from a Fulbright scholar. International Psychology Bulletin, 22(2), 11–24.
Svare, B. (2020, in press). Why the teaching of psychology internationally is
important. International Psychology Bulletin.
Temmerman, N. (2019, February 1). Transforming higher education in Vietnam.
University World News. Retrieved from https://www.universityworldnews.
com/post.php?story=20190129142655883
Times Higher Education. (2019). World University Rankings. Retrieved from
https://www.timeshighereducation.com/world-university-rankings/2019/
world-ranking#!/page/0/length/25/sort_by/rank/sort_order/asc/cols/stats
Van Noorden, R. (2010). Metrics: A profusion of measures. Nature, 465, 864.