Western Practices of Incentivising Scholarship May Have Negative Outcomes
Bruce B. Svare1
Abstract
Cases of scientific fraud and research misconduct in general have
escalated in Western higher education over the last 20 years. These
practices include forgery, distortion of facts and plagiarism, the outright
faking of research results and thriving black markets for positive peer
reviews and ghost-written papers. More recently, the same abuses have
found their way into Asian higher education with some high profile and
widely covered cases in India, South Korea, China and Japan. Reports
of misconduct are now reaching alarming proportions in Asia, and the
negative consequences for individuals, institutions, governments and
society at large are incalculable. The incentives for academic scientists
in Asia are approaching and even surpassing those ordinarily seen in
the West. Cash payments for publishing articles in high impact journals
can double or even triple yearly salaries in some cases. Combining this
environment with the simultaneous pressure to obtain oftentimes scarce
funding for research has produced a culture of unethical behaviour
1 Department of Psychology, State University of New York, Albany, USA.
Corresponding author:
Bruce B. Svare, Department of Psychology, State University of New York, Albany,
NY 12222, USA.
E-mail: bsvare@albany.edu
Keywords
Scientific misconduct, perverse incentives, publishing analytics, world
rankings
Background
Reports of scientific fraud are increasing worldwide as the pressure to publish research findings in high-impact journals has intensified (Brainard & You, 2018). This has spawned at least one watchdog
group, The Center for Scientific Integrity (www.retractionwatch.com), to
keep track of scientific fraud and compile lists of journal retractions as a
function of country, discipline and investigator. Sadly, this website now
reports an average of two to three retractions per day, whereas only a few
years ago the average was one or two per year.
There is a long history of scientific misconduct in the West. In fact,
many of the more outrageous examples of scientific misconduct in
Western nations are well known by those within higher education cir-
cles as well as by the public (for recent reviews see Gross, 2016;
Hesselmann, Graf, Schmidt, & Reinhart, 2017). For example, at
Cornell University recently, psychologist Brian Wansink, who
researched human eating habits, was forced to resign his position after
13 of his papers were retracted with another 15 corrected (Servick,
2018). Some of the retractions were in the high-profile publication The
Journal of the American Medical Association. The misconduct included
misreporting of research data, problematic statistical techniques, fail-
ure to properly document and preserve research results, and inappro-
priate authorship. In another notorious case that had far-reaching
implications for public health, British surgeon and researcher Andrew
96 Psychology and Developing Societies 32(1)
inclusion [NSF, 2014]. Thus, for the purposes of this article, we also
consider it to be an integral part of STEM training and research.) As
reviewed by Gross (2016), psychology, like other STEM areas, has had
its fair share of scientific misconduct cases, some of which were noted
above. When this is combined with the replication crisis that presently plagues psychology (Jarrett, 2016), there is growing concern that
misconduct may be endemic in the behavioural sciences.
This article examines three components of the present culture of aca-
demic research in the West, and how some negative consequences of this
culture are spilling over into Asian higher education. It also examines how these pitfalls can be avoided in Asia if important modifications are made to the region's still-nascent educational and scientific systems. First, we
explore the development of distorted incentives for academic research in
a historical context and the role that metrics have played in scholarly publishing. Second, we examine the assault on scientific integrity throughout the world and its impact on the integrity of higher education and science and on public trust in their goals.
Third, suggestions for creating a better environment for academic
research are advanced especially in the context of the rapid changes tak-
ing place in Asian higher education. In Asia, and especially in ASEAN
where higher education is less well developed, growth of STEM and
scientific research in general will depend upon addressing the causes and
consequences of scientific misconduct and then employing a prevention
model to deter it.
(NIH), NSF and the Environmental Protection Agency (EPA), and a cor-
porate business model adopted by college and university administrators
(Brownlee, 2014).
Metrics and Individual and Institutional Decision-making
While metrics are not inherently bad, there is scepticism in many quar-
ters of higher education that they can be manipulated and gamed to a
point where they may become meaningless (e.g., Abbott et al., 2010).
Thus, individuals and institutions that are skilled at manipulating metrics may be rewarded more than those that are not. When scholarship is measured by how
much a particular article is cited by others, which is the norm today, it
does not necessarily follow that it is relevant or has made an impact in a
positive and socially meaningful way. It also rewards quantity over qual-
ity and, as some have noted (Quake, 2009), a mentality of publishing as
many papers as possible even though they are less comprehensive in
scope. It has also led to the practice of manipulating journal impact factors (defined as the yearly average number of citations to recent articles published in a given journal). The result is journals gaming their impact factors, rigging of peer review, overcitation and scholars engaging in data dredging (massaging and manipulating data until statistical significance is ultimately attained) (e.g., Edwards & Roy, 2017).
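The impact-factor calculation defined above is simple arithmetic, which is part of why it is so easy to game; a minimal sketch, with invented numbers purely for illustration:

```python
def impact_factor(cites_y1, cites_y2, items_y1, items_y2):
    """Two-year journal impact factor for year Y: citations received in Y
    to articles published in Y-1 and Y-2, divided by the number of
    citable items published in Y-1 and Y-2."""
    return (cites_y1 + cites_y2) / (items_y1 + items_y2)

# Invented example: 5,500 citations to 320 citable items
print(impact_factor(3000, 2500, 150, 170))  # 17.1875
```

Because both the numerator and the denominator are under editorial influence (which items count as 'citable', which articles are encouraged to cite the journal), the resulting number is readily manipulated.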
The other metric of importance for scientists is the h-index, which
measures the productivity and citation impact of a scholar. It is the
attempt to sum up a scholar’s contributions in a single number. It is com-
puted by examining a scientist’s most cited papers and the number of
citations that they have received in other publications. It is used for all
kinds of important decisions including tenure and promotion, grant fund-
ing and external and internal awards. But it is terribly flawed in many
ways, and critics are now saying that it is almost meaningless (cf.,
Rowlands, 2018). The h-index correlates strongly with numbers of both
papers and citations, and there is substantial uncertainty as to whether or
not it accurately assesses quality, consistency, longevity or stage of
career. Some disciplines simply publish and cite more than do other dis-
ciplines making it nearly impossible to compare across disciplines. Also,
while it may be good for traditional journal articles, it does not capture
non-traditional outlets like citations in books and blog posts, influence
on public policy or global health initiatives, or patents developed and software created. The h-index also evaluates scholars unfairly if they tend
to be single authors on papers in contrast to publishing with multiple
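The h-index described above can be computed directly from a scholar's citation counts; this sketch follows the standard definition:

```python
def h_index(citations):
    """h-index: the largest h such that the scholar has h papers
    with at least h citations each."""
    h = 0
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Five papers cited 25, 8, 5, 3 and 3 times give h = 3, because the
# fourth most-cited paper has fewer than four citations.
print(h_index([25, 8, 5, 3, 3]))  # 3
```

Note the quantity bias the text describes: a scholar with ten papers cited ten times each scores h = 10, while a scholar with one landmark paper cited a thousand times scores h = 1.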
decreased (e.g., Hourihan & Parkes, 2016). This environment, which has
been exacerbated by an oversupply of young researchers, has dramati-
cally elevated competition for an ever-dwindling source of funding. In
the USA, the intense pressure on faculty members to get federal grants
has produced an array of undesirable consequences. Because of its
hypercompetitive nature (<10% of grant applications are funded), fac-
ulty members often spend inordinate amounts of time revising and
resubmitting grants. The result is a scientific climate that suppresses the
creativity, cooperation, risk taking and the original thinking that is so
important for new discoveries. It breeds conservative, short-term thinking that produces results measured in terms of dollars rather than good
sense. Postdoctoral scientists, who can spend 10 or more years in their
positions before landing an academic appointment, are especially disad-
vantaged. Because federal grant money is so tight, they may not receive
their first grant until their early 40s, when it may be too late for tenure
and promotion (Daniels, 2015; Gallup & Svare, 2016). There is increas-
ing evidence that this hypercompetitive environment produces reviewer
biases as well as a strong influence of prior success as opposed to scien-
tific merit (Fang & Casadevall, 2016).
According to one study, the peer-review system for federal grants is
approaching the point of becoming arbitrary. Upon analysing the number
of citations and publications resulting from funded NIH projects, Fang
and Casadevall (2016) discovered that excellent productivity was exhib-
ited by some projects with relatively lower scores and poor productivity
by other projects with outstanding scores. Since peer-review panels were unable to accurately predict which projects would have the greatest impact, the authors concluded that a lottery system would work
just as well. Discouraged by this picture, less-established faculty mem-
bers are turning to private foundations for support or are pursuing aca-
demic appointments in other countries. The Canadian system of supporting scientists, for example, has a great deal to admire. It is able to
support a much higher percentage of its scientists simply by making
more modest grants that do not include salary support or overhead costs.
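The modified lottery that Fang and Casadevall (2016) argue for can be sketched in a few lines; the threshold, scores and identifiers below are hypothetical illustrations, not part of their proposal:

```python
import random

def modified_lottery(applications, threshold, n_awards, seed=None):
    """Modified funding lottery: peer review screens out applications
    below a merit threshold, then awards are drawn at random from the
    qualifying pool instead of by fine-grained ranking."""
    rng = random.Random(seed)
    pool = [app_id for app_id, score in applications if score >= threshold]
    return rng.sample(pool, min(n_awards, len(pool)))

# Hypothetical round: fund two of the three applications scoring 70+
apps = [("A", 91), ("B", 85), ("C", 72), ("D", 55)]
winners = modified_lottery(apps, threshold=70, n_awards=2, seed=1)
```

The appeal is that, since review scores cannot reliably rank projects near the funding line, a random draw among fundable proposals is at least transparently arbitrary rather than covertly so.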
The funding of scientific research in Asia is starting to mimic what is
seen in the USA in some important ways (Maslog, 2014). Asian countries
taking the lead are China, Japan, South Korea and Singapore. Like the
West, these countries place a high value on publication in peer-reviewed
journals, especially those that are in English and included in the accepted
journal database maintained by the THE World Rankings. As noted earlier,
Chinese scientists can double or triple their salary by publishing in high-
impact journals like Science and Nature. In the Philippines, scientists are
of colleagues who did (Fanelli, 2009; Chambers, 2014), but this may
underreport what some believe is a much more pervasive problem.
An honour system that once prevailed in academe and science seems
to have weakened. Self-policing and self-correction, once thought to be the bedrock culture of higher education, have been reduced to benign
neglect or simply ‘looking the other way’ and pretending that it does not
exist. Furthermore, the mechanisms for reporting scientific misconduct
are not well established and even where formal procedures do exist there
is a reluctance to engage in the process owing to the lack of protection
for whistle-blowers and the negative consequences that often ensue. It
takes courage to call out colleagues for scientific misconduct when you
know that you may be sacrificing your own career by doing so.
Common Features of Scientific and Intercollegiate Sports Fraud
Fraud and corruption in higher education are not limited to the scientific
research enterprise. For example, the history of big-time intercollegiate
sports in the USA is one of a complicit faculty and administration that
commit academic corruption in the name of winning games, tourna-
ments and ultimately more donations from an adoring alumni fan base.
Intercollegiate sports are a multibillion-dollar commercial enterprise in
the USA. Keeping it going often requires admitting grossly unprepared
students and then keeping them academically eligible. This is often done
by creating phoney no-show classes, assisting athletes with course requirements by having tutors perform all the work for them, and changing grades or dropping requirements altogether (Svare, 2014). This is
very troubling unethical behaviour in higher education, but it has been
accepted practice and, in fact, has been normalised at many institutions
of higher learning where academic integrity has not been prioritised.
While calls to reform the system are made every year, little has
changed in the last 30 years. To keep up with the arms race, coaches' salaries have escalated exponentially and are now the highest on college campuses, while ever more expensive new facilities are constructed or older
ones renovated. There is simply too much money at stake to put a stop
to it. Cheating has become endemic and, like clockwork, accompany-
ing every intercollegiate sports season are reports of recruiting scan-
dals, academic corruption, athlete payoffs, athletes’ indiscretions and
gross failures in administrative oversight. Most importantly, many col-
lege athletes end up with no degree or a degree that is not worth the
paper it is written on. This is the reality of intercollegiate sports in the
USA. Like the scientific enterprise, it is often corrupt and those
involved are unwilling to reform it. The fallout is the diminished status
known and seems to have had little impact upon the larger, outside com-
munity. He is rarely called upon by the media for commentary nor does
he participate very much in professional conferences. He is not particu-
larly well known in his profession. He is described by some as being
‘insular’, not a particularly dynamic or thoughtful teacher and mentor,
and a ‘difficult to get along with’ colleague. The doctoral students he has
trained have not distinguished themselves in their academic and non-
academic positions. Many students have left his laboratory in the early
parts of their training because of issues related to poor mentoring and
suspect ethics. The faculty at his institution have provided evidence of
scientific misconduct but administrators have largely ignored it and
swept it under the rug. Instead, they actually enabled Professor Jones's future unethical behaviour by providing internal rewards for research and mentoring excellence. To outside observers, it appears as though the
institution just wanted to keep the steady stream of grant money flowing
instead of dealing with the thorny issue of misconduct.
Professor Smith was recommended for promotion to distinguished professor by his faculty. In spite of the fact that the impact
(total citations and h-index) and quality of his work outdistanced
almost every faculty member who had been promoted to distinguished
professor at that institution, he was denied promotion by his higher
administration with the ostensible reason being that he had not received
extramural grant support. In previous recent cases at that institution,
faculty who were granted the status of distinguished professor had gar-
nered large federal grants even though their publication records were
hardly that impressive. Professor Jones has not as yet been recom-
mended for promotion to distinguished professor. But if he is, you can
bet that his history of grant funding will weigh very heavily in the deci-
sion making.
the suggestions below, then the scientific community has no one to blame but itself for continued scientific misconduct.
A serious attempt must be made to actually quantify the extent of the
scientific misconduct problem. The watchdog organisation, The Center
for Scientific Integrity, has performed outstanding service to the science
and higher education community by documenting retracted journal arti-
cles. This is important work which helps to publicise when scientific
misconduct takes place, where and by whom. However, it does not quantify the real extent of the problem, how deep it may run and how much
of it may never get reported to the public (Edwards & Roy, 2017). Just
who should engage in this type of fact-finding is unclear, but it should
probably include a combination of professional societies, government
agencies, higher education faculty and administration, and national and
international academies. Representatives of all these groups, funded by
their respective organisations, should convene to develop methods to
research the problem, produce an open access report quantifying scien-
tific misconduct worldwide and then propose best practice guidelines for
preventing it in the future. This is not an easy task that will be accom-
plished quickly, but rather one that will take time, resources and contri-
butions from many different sectors of the scientific and higher education
community. Failure to engage in this type of analysis in an expeditious
manner will further endanger science and the life-changing decision
making that emerges from it.
More must be done to prioritise the teaching of ethical behaviour.
Albert Einstein (2014) once said ‘Most people say that it is the intellect
which makes a great scientist. They are wrong: it is character’. There are
healthy debates in education concerning whether or not character can be
taught, whether it is nature or nurture and whether it matters at all in
some cases of scientific misconduct. That being said, in the face of esca-
lating scientific misconduct in the world today, many institutions of
higher learning are taking the initiative to develop classes in science eth-
ics (Kabasenche, 2014). These collaborative efforts often include courses
that are co-taught by those in ethics, philosophy and the life sciences.
The subject matter includes real life situations that scientists are con-
fronted with including the incentives and pressures that could lead to
cheating. Such courses should be required and should be taught, at a minimum, in both undergraduate and graduate curricula. In some
instances, this may be too late; hence, the introduction of formal courses
in scientific ethics could begin in high school or even earlier. As a cor-
relate of teaching scientific ethics, some have even suggested that it is
critically important at this time to promote the ideal of practising science
as a service to humanity (Edwards & Roy, 2017; Huber, 2014). After all,
science is performed for the public benefit and those who may become
interested in the profession as a career need to understand their responsi-
bilities to be both ethical and altruistic.
We must rethink the use of metrics in hiring, promotion and grant
funding decisions. There are over two million research articles published
annually in over 28,000 journals, and this is escalating at a rate of 3.26
per cent a year and doubling every 20 years (Hoffman, 2017). Most scientists find it difficult, if not impossible, to keep up with this proliferation of scientific content, let alone guarantee the integrity and authenticity of the scholarship. Some journals are favoured over others because of
metrics. In this article, we have discussed several metrics that tradition-
ally have been used for important decisions in higher education and sci-
ence. Clearly, journal impact factors and h-index metrics have taken on
a life of their own and drive too much important decision making today.
As reviewed here and by others (Edwards & Roy, 2017; Hoffman, 2017),
they are fundamentally flawed metrics and it is time to question our pri-
mary reliance upon them for assessing impact. Citation counts are noto-
riously low. In one study (Remler, 2014), 12 per cent of medicine articles
were never cited, nor were 27 per cent of natural science papers, 32 per
cent in the social sciences and 82 per cent in the humanities (Hoffman,
2017). According to the editorial board of the prestigious journal Nature
(2005), 89 per cent of the journal’s impact factor of 32.2 could be attrib-
uted to just 25 per cent of the papers published. Also, citations to books,
blog posts and social media accounts as well as creation of software and
other products are not taken into consideration when measuring the
impact of a scientist’s scholarly work. Social media and blog posts, rather than little-read academic journals, are the outlets where the public receives much of its scientific news, and these alternative sources are probably far more likely to move public opinion than traditional citation metrics are. One could argue that we need to weigh these non-traditional sources as much as, if not more than, traditional citation measures when it comes time for important decisions on the awarding of grants, the hiring of faculty, and tenure and promotion. Citation metrics
are not unimportant, but overreliance upon them is both dangerous and
misleading. Clearly, they do not always reflect quality research. For
example, with respect to journal impact factors, there is a great deal of
excellent research published in good journals that have much lower
impact factors than the premier luxury journals such as Science and
Nature. But these expensive subscription journals have a ‘brand’ of qual-
ity science and are heavily preferred by scientists over other journal
and the awarding of grant funding. Also, scientific fraud persists because
there is nothing to really prevent it from occurring. But a recent proposal
called the ‘prepublication audit’ might be part of the answer to the thorny
problem of misconduct and perverse incentives. As articulated by Iorns
(2013) and modified by Lossie and Mane (2016), the audit system would
work in a preventative manner to curb scientific misconduct. An inde-
pendent panel of scientists funded by professional societies, journals,
universities and granting agencies would operate on a fee-for-service basis to audit a certain percentage (maybe 3%–5%) of journal submis-
sions each year. The submissions would be randomly drawn and the
audit by experts would thoroughly examine every component of the sub-
mission including the raw data, statistical analysis, methodology, table
and graph presentation, conclusions and reliability of reference informa-
tion. Because the audits would be randomised, all authors would per-
ceive an equal risk of being examined. The examining body would be
independent of author-affiliated institutions (universities, hospitals, etc.)
and journal editorial boards. The audit report would then be sent to the
editors of the intended academic journal for review and the article, if
published, would include an acknowledgement in the manuscript that it
was reviewed by the examining panel and a link to their report would be
provided. The audit system has a number of positive features. It provides
unbiased verification that the experiments were conducted ethically, that
the statistics were computed correctly and the conclusions were based on the data rather than on their potential to attract media attention. A prepublication audit could also provide the opportunity for authors to choose to be
audited as an expression of their confidence in their research. The audit
system has tremendous potential but only to the extent that a broad spec-
trum of those in the scientific community buy into it and are willing to
monetarily support it.
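The random draw at the heart of the audit proposal is easy to implement; in this sketch the 4 per cent rate, the seed and the function name are illustrative assumptions:

```python
import random

def select_for_audit(submission_ids, audit_rate=0.04, seed=None):
    """Uniformly sample a fraction of journal submissions for a full
    audit, so every author faces the same probability of examination."""
    rng = random.Random(seed)
    n_audits = max(1, round(audit_rate * len(submission_ids)))
    return rng.sample(submission_ids, n_audits)

# Of 200 hypothetical submissions, 8 are drawn for auditing
audited = select_for_audit(list(range(200)), audit_rate=0.04, seed=42)
```

Uniform sampling is what gives the scheme its deterrent value: because the draw ignores author, institution and topic, no submission can be engineered to avoid scrutiny.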
The external funding culture must be changed to take the pressure off
of scientists. Never before have there been so many scientists, new and
old, competing for a finite amount of grant money to do their research.
The sources of funding include government agencies, industry and pri-
vate foundations. This article has reviewed the consequences of this
pressure, and others have also highlighted some of the same themes
(Daniels, 2015; Edwards & Roy, 2017; Gallup & Svare, 2016; Lilienfeld, 2017a, 2017b). Short of a massive infusion of money into the system, a
highly unlikely scenario, there are at least five measures which can be
taken to relieve the pressure. First, institutions, especially those that are
heavily endowed, must do more to provide internal funding for scien-
tists, not just for those in the early stages of their career, but for those
during their entire career path. Second, at least in the USA, more must be
done to reduce or eliminate overhead rates (the Canadian model) that
colleges and universities negotiate with federal granting agencies. This would increase the amount of money available to researchers for the actual support
of their research. Third, the requirement that a young scientist receive an
individual grant (called an R01 in the USA) in order to be promoted and
tenured needs to be dropped. This requirement is simply out of touch
with reality in that there is not enough money (public or private) to con-
tinue it. Fourth, the peer review system is broken and, as noted earlier in
this article, has become arbitrary. What replaces it is uncertain and the
subject of frequent heated debate. However, at a minimum, any new sys-
tem must reward quality as well as replicability. Quantity should be on
the back burner. The system would also benefit from some form of penalty for publishing poor-quality research. Fifth, there
are too many young scientists who drop out of science altogether because
of the poor funding climate. The current method of training scientists
prepares them only to be scientists and does little to help them progress
to other career paths. More must be done to create and reinforce viable
professional paths other than research. At present, these are difficult to
find and often require significant retraining.
Funding
The author received no financial support for the research, authorship and/or
publication of this article.
References
Abbott, A., Cyranoski, D., Jones, N., Maher, B., Schiermeier, Q., & Van Noorden,
R. (2010). Metrics: Do metrics matter? Nature, 465, 860.
Abritis, A., & McCook, A. (2017, August 11). Cash incentives for papers go global.
Science. Retrieved from http://science.sciencemag.org/content/357/6351/541
Anderson, N. (2013, February 6). Five colleges misreported data to US
News, raising concerns about rankings, reputation. The Washington Post.
Retrieved from https://www.washingtonpost.com/local/education/five-
colleges-misreported-data-to-us-news-raising-concerns-about-rankings-
reputation/2013/02/06/cb437876-6b17-11e2-af53-7b2b2a7510a8_story.
html?utm_term=.4db78f335c86
Ashforth, B. E., & Anand, V. (2003). The normalization of corruption in
organizations. Research in Organizational Behavior, 25, 1.
Bhattacharjee, Y. (2013, April 26). The mind of a con man. New York Times.
Retrieved from https://www.nytimes.com/2013/04/28/magazine/diederik-
stapels-audacious-academic-fraud.html
Brainard, J., & You, J. (2018). What a massive database of retracted papers
reveals about science publishing’s ‘death penalty’. Science. Retrieved
from https://www.sciencemag.org/news/2018/10/what-massive-database-
retracted-papers-reveals-about-science-publishing-s-death-penalty
Brownlee, J. K. (2014). Irreconcilable differences: The corporatization of
Canadian universities (Doctoral dissertation). Carleton University. Retrieved
from https://curve.carleton.ca/system/files/etd/b945d1f1-64d4-40eb-92d2-
1a29effe0f76/etd_pdf/2fbce6a2de5f5de090062ca7af0a4b1e/brownlee-irrec
oncilabledifferencesthecorporatization.pdf
Chambers, C. (2014). The changing face of psychology. The Guardian. Retrieved
from https://www.theguardian.com/science/head-quarters/2014/jan/24/the-
changing-face-of-psychology
Couzin-Frankel, J. (2014, May 30). Harvard misconduct investigation of
psychologist released. Science. Retrieved from https://www.sciencemag.org/
news/2014/05/harvard-misconduct-investigation-psychologist-released
Cyranoski, D. (2018, June 8). China introduces sweeping reforms to crack down
on academic misconduct. Nature. Retrieved from https://www.nature.com/
articles/d41586-018-05359-8
Daniels, R. J. (2015). A generation at risk: Young investigators and the future of
the biomedical workforce. Proceedings of the National Academy of Sciences,
112(2), 313–318.
Diekman, A., Brown, E. R., Johnson, A. M., & Clark, E. K. (2010). Seeking
congruity between goals and roles: A new look at why women opt out of
science, technology, engineering, and mathematical careers. Psychological
Science, 21, 1051.
Edwards, M. A., & Roy, S. (2017). Academic research in the 21st century:
Maintaining scientific integrity in a climate of perverse incentives and
hypercompetition. Environmental Engineering Science, 34(1), 51–61.
Einstein, A. (2014). The world as I see it. New York, NY: CreateSpace.
Fanelli, D. (2009). How many scientists fabricate and falsify research? A
systematic review and meta-analysis of survey data. PLoS ONE, 4, e5738.
Fang, F. C., & Casadevall, A. (2016). Research funding: The case for a modified
lottery. mBio, 7, e00422.
Gallup, G. G., & Svare, B. (2016, July 25). Has higher education been hijacked by
the external funding game? Inside Higher Education. Retrieved from https://
www.insidehighered.com/views/2016/07/25/undesirable-consequences-
growing-pressure-faculty-get-grants-essay
Gobry, P. E. (2016, February 24). Big science is broken. The Week. Retrieved
from https://theweek.com/articles/618141/big-science-broken
Gross, C. (2016). Scientific misconduct. Annual Review of Psychology, 67, 693–711.
Hesselmann, F., Graf, V., Schmidt, M., & Reinhart, M. (2017). The visibility of
scientific misconduct: A review of the literature on retracted journal articles.
Current Sociology, 65(6), 814–845.
Hoffman, A. J. (2017, March 28). In praise of ‘B’ journals. Inside Higher Education.
Retrieved from https://www.insidehighered.com/views/2017/03/28/academics-
shouldnt-focus-only-prestigious-journals-essay
Hourihan, M., & Parkes, D. (2016, December 19). Federal R & D budget trends:
A short summary. American Association for the Advancement of Science.
Retrieved from https://www.aaas.org/news/federal-rd-budget-trends-summary
Huber, B. R. (2014, September 22). Scientists seen as competent but not trusted
by Americans. Woodrow Wilson Research Briefs. Retrieved from http://wws.
princeton.edu/news-and-events/news/item/scientists-seen-competent-not-
trusted-americans
Iorns, E. (2013, February 20). Solving the research integrity crisis. Science
Exchange. Retrieved from https://blog.scienceexchange.com/2013/05/
solving-the-research-integrity-crisis/
Jarrett, C. (2016, September 16). Ten famous psychology findings that it’s been
difficult to replicate. British Psychological Society Research Digest. Retrieved
from https://digest.bps.org.uk/2016/09/16/ten-famous-psychology-findings-
that-its-been-difficult-to-replicate/
Kabasenche, W. P. (2014). The ethics of teaching science and ethics: A collaborative
proposal. Journal of Microbiology and Biology Education, 15(2), 135–138.
Lilienfeld, S. (2017a). Psychology’s replication crisis and the grant culture:
Righting the ship. Perspectives on Psychological Science, 12(4), 660–664.
Lilienfeld, S. (2017b). Seven costs of the money chase: How academia’s focus
on funding influences scientific progress. APS Observer, 30(8), 13–15.
Lossie, A., & Mane, V. (2016, February 4). Do scientists need audits?
Retraction Watch. Retrieved from https://retractionwatch.com/2016/02/04/
do-scientists-need-audits/
Marcus, J. (2017, September/October). The looming decline of the public
research university. Washington Monthly Magazine. Retrieved from https://
washingtonmonthly.com/magazine/septemberoctober-2017/the-looming-
decline-of-the-public-research-university/