
The current issue and full text archive of this journal is available on Emerald Insight at:

https://www.emerald.com/insight/1066-2243.htm

INTR
32,5

Fake news on the internet: a literature review, synthesis and directions for future research

Yuanyuan Wu
School of Management, Harbin Institute of Technology, Harbin, China and
Department of Management and Marketing, The Hong Kong Polytechnic University, Hong Kong, China

Eric W.T. Ngai
Department of Management and Marketing, The Hong Kong Polytechnic University, Hong Kong, China

Pengkun Wu
Business School, Sichuan University, Chengdu, China, and

Chong Wu
School of Management, Harbin Institute of Technology, Harbin, China

Received 11 May 2021
Revised 17 October 2021, 5 March 2022
Accepted 5 March 2022

Abstract
Purpose – The extensive distribution of fake news on the internet (FNI) has significantly affected many lives.
Although numerous studies have recently been conducted on this topic, few have helped us to systematically
understand the antecedents and consequences of FNI. This study contributes to the understanding of FNI and
guides future research.
Design/methodology/approach – Drawing on the input–process–output framework, this study reviews
202 relevant articles to examine the extent to which the antecedents and consequences of FNI have been
investigated. It proposes a conceptual framework and poses future research questions.
Findings – First, it examines the “what”, “why”, “who”, “when”, “where” and “how” of creating FNI. Second, it
analyses the spread features of FNI and the factors that affect the spread of FNI. Third, it investigates the
consequences of FNI in the political, social, scientific, health, business, media and journalism fields.
Originality/value – The extant reviews on FNI mainly focus on the interventions or detection of FNI, and a
few analyse the antecedents and consequences of FNI in specific fields. This study helps readers to
synthetically understand the antecedents and consequences of FNI in all fields. This study is among the first to
summarise the conceptual framework for FNI research, including the basic relevant theoretical foundations,
research methodologies and public datasets.
Keywords Fake news, Internet, Literature review, Input–process–output framework, Antecedents and
consequences
Paper type Research paper

1. Introduction
The Macquarie Dictionary and Oxford English Dictionary designated “fake news” and “post-
truth”, respectively, as the words of the year in 2016 (Shu et al., 2017). In 2017, both the
American Dialect Society and Collins Dictionary named “fake news” the word of the year.

The authors thank the editor and three anonymous reviewers for their constructive comments, criticisms and help in improving the paper. Yuanyuan Wu was supported in part by Joint PhD programmes (PolyU-HIT) leading to Dual Awards. Pengkun Wu was supported in part by the National Natural Science Foundation of China (62001314), the MOE (Ministry of Education in China) Project of Humanities and Social Science (20YJC630159), the Fundamental Research Funds for the Central Universities (YJ202008, SXYPY202106), the From 0 to 1 Project of Sichuan University (2021CXC19) and the International Visiting Program for Excellent Young Scholars of Sichuan University. Chong Wu was supported in part by the National Natural Science Foundation of China (72131005).

Internet Research
Vol. 32 No. 5, 2022
pp. 1662-1699
© Emerald Publishing Limited
1066-2243
DOI 10.1108/INTR-05-2021-0294
Donald Trump’s narrow victory in the 2016 United States (US) presidential election and the United Kingdom (UK)’s vote to leave the European Union (“Brexit”) have attracted great attention to the political influence of fake news. Apart from the political context, fake news has been found to promulgate information in other vital areas (Domenico et al., 2021b; Lazer et al., 2018). For example, in April 2013, a fake tweet was posted by a hacked Associated Press account, stating that the White House had been hit by two explosions and Barack Obama was injured. It caused the S&P 500 Index to decline by 0.9%, which was enough to destroy US$130 billion in stock value in a matter of seconds (Boididou et al., 2018b).
Fake news had long been embedded in history before the internet media entered the public
lexicon (Al-Rawi, 2019a). It first appeared in the 19th century and evolved from its satirical
literary origins into a passionately criticised Internet phenomenon (Bakir and McStay, 2018).
Presently, fake news causes controversies among the public, scholars and politicians,
especially owing to the widespread use of Internet media, such as Facebook, Twitter and
Snapchat.
The influential combination of fake news and the internet media has attracted significant concern and attention in academic research (Chen et al., 2015). In recent years, many articles on
fake news have appeared in conferences and journal publications, including several top-tier
journals, such as Science (Bakshy et al., 2015; Grinberg et al., 2019; Lazer et al., 2018; Vosoughi
et al., 2018), Nature (Kucharski, 2016; Spinney, 2017; Williamson, 2016), MIS Quarterly (Kim
and Dennis, 2019; Moravec et al., 2019) and Information Systems Research (Clarke et al., 2021).
Additionally, there has been a trend towards dedicating special issues of international journals to fake news as a core research topic, such as the Journal of Management Information Systems and Information Processing and Management.
Although publications have gradually increased in recent years, this area of enquiry remains poorly mapped for potential researchers. To our knowledge, 14 review papers relevant to fake
news on the internet (FNI) have been published to date. However, they focussed on one
specific FNI field or the interventions and detection of FNI. Wang et al. (2019) analysed 57
selected articles to explore the spread of health-related fake news on social media, while
Domenico et al. (2021b) examined 117 selected articles to investigate marketing-related FNI
from a consumer perspective. Bryanov and Vziatysheva (2021) reviewed 26 scholarly articles
to understand the determinants of individuals’ beliefs in fake news and explore fake news
interventions. The remaining 11 review papers focussed on the detection of FNI (Ahmad and
Lokeshkumar, 2019; Bondielli and Marcelloni, 2019; Boukhari and Gayakwad, 2019; Conroy
et al., 2015; Lozano et al., 2020; Meel and Vishwakarma, 2020; Saquete et al., 2020; Sharma et al.,
2019; Shu et al., 2017; Viviani and Pasi, 2017; Zhang and Ghorbani, 2020). To distinguish itself
from these extant review articles, this study provides a deeper understanding of the
antecedents and consequences of FNI by analysing the relevant multidisciplinary literature
and indicates directions for future research.
Grounded in an extensive literature review, this study makes several contributions. First,
we adopt the input-process-output (IPO) framework to organise the dispersed literature on
the antecedents and consequences of FNI. Second, our review enables the development of a
conceptual framework, including the theoretical relationship of extant studies, basic
theoretical foundations, research methods and research datasets. This conceptual framework
provides an important basis for future academic studies on FNI and facilitates a more
proficient management of FNI. Third, we discuss and evaluate the current state of the FNI
literature and outline potential future avenues by presenting detailed suggestions for future
research.
The remainder of this paper is organised as follows. Section 2 provides the working
definition of FNI, research design and general quantitative findings of the reviewed literature. It
also presents the IPO framework, based on which we thoroughly investigate the existing
literature to understand the antecedents and consequences of FNI in Section 3. Section 4 describes the conceptual framework for FNI research by summarising the basic psychological foundations, research methodologies and research datasets. To better guide future research, we
discuss future research directions in Section 5. Section 6 concludes the paper.

2. Research design and general findings


2.1 Definition of FNI
Since 2017, some studies have discussed and attempted to define fake news. Rini (2017) stated
that fake news mimics the conventions of traditional media reportage to describe events in
the real world, yet is known by its creators to be significantly false, and it is transmitted with
the dual goals of being widely re-transmitted and deceiving at least some of its audience. Shu
et al. (2017) defined fake news as low-quality news with intentionally false information. Lazer
et al. (2018) envisaged fake news as fabricated information that mimics news media content in
form but not in organisational processes or intent. Tandoc et al. (2018) saw fake news as a
broad category of false information that comprises six variants: satire, parody, fabrication,
manipulation, advertising and propaganda. Some scholars have provided a more precise
conceptualisation of fake news. Bakir and McStay (2018) averred that fake news is either wholly false or contains deliberately misleading elements within its content or context. Against a broad background of different definitions, we summarised three elements of fake news: verified false information, intentional motivation with clear goals and genre blending that combines elements of traditional news with normative professional journalism.
In addition to these three features, Zhang and Ghorbani (2020) focussed on the primary
host of fake news and defined fake news as all kinds of false information mainly published or
distributed on the internet to purposely mislead, fool or lure readers for financial, political or
other gains. By summarising these four factors, we conceptualised FNI and proposed a
working definition: FNI is the deliberate presentation of verifiably false news on the internet
by mimicking the formats of traditional news or normative professional journalism.
As shown in Table 1, our definition clearly distinguishes fake news from other similar
terms: (1) misinformation, which is incorrect or misleading information and often considered
as an “honest mistake” (Shu et al., 2020); (2) rumour, which is unverified and instrumentally
relevant information in circulation (Zubiaga et al., 2018) and (3) troll posts, yellow journalism,
satire, conspiracy and parody, which may be intentionally false information or actual
information (Tandoc et al., 2018). In addition to the above-mentioned terms, three other
expressions are closely related to fake news: disinformation, hoax and deepfakes. Fake news
and disinformation have similar definitions, and thus, can be used interchangeably. The
Oxford English Dictionary defines “hoax” as a humorous or mischievous deception (Kumar
et al., 2016). Deepfakes are an emerging form of disinformation involving doctored
multimedia content (Ahmed, 2021c; Dasilva et al., 2021; Vaccari and Chadwick, 2020; Wahl-
Jorgensen and Carlson, 2021).
Particularly, Shin et al. (2018) pointed out that misinformation, disinformation and rumour
are used interchangeably to describe information that lacks truth in many studies. Lazer et al.
(2018, p. 1094) held that “fake news overlaps with other information disorders, such as

Table 1. Conceptual differences among several terms related to fake news

                           Motivation
Falsity     Unintentional        Intentional
True        Facts                Facts
False       Misinformation       Hoax, Deepfakes, Disinformation, Fake News
Unknown     Rumour               Troll Post, Yellow Journalism, Satire, Conspiracy, Parody
misinformation (false or misleading information) and disinformation (false information that is purposely spread to deceive people)”. Since fake news conceptually overlaps with a few similar terms, we included the following four widely adopted terms in our literature search process: “fake news”, “misinformation”, “disinformation” and “rumour”. Our working definition of FNI can help to identify relevant articles.

2.2 Databases for article search


To understand the antecedents and consequences of FNI, we searched for relevant articles in
the databases of the Science Citation Index Expanded (SCIE) and Social Sciences Citation Index
(SSCI) in the Web of Science Core Collection database. We chose the Web of Science platform, as
it is a widely used research tool that offers various search and analysis capabilities (Ngai and
Wu, 2022). Moreover, it provides a comprehensive coverage of the sciences, social sciences, arts
and humanities across journals, books and conference proceedings.
When describing information on the internet, some studies also interchangeably use social
media or social network sites. Therefore, “Internet”, “social media” and “social network site”
are also included in our literature search. We used the following retrieval string in the SCIE and SSCI databases: TS=(“fake news” OR “misinformation” OR “disinformation” OR “rumour”) AND TS=(“Internet” OR “social media” OR “social network sites”). As of October 11, 2021, 1,896 articles had been initially retrieved using the above-stated query.
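The retrieval string above can also be assembled programmatically. The minimal sketch below is purely illustrative (the actual search was run in the Web of Science interface, not via code); the term lists mirror those stated in the text.

```python
# Sketch: assemble the Web of Science topic-search (TS) string used in
# Subsection 2.2 from the two term lists. Illustrative only.
fake_news_terms = ["fake news", "misinformation", "disinformation", "rumour"]
internet_terms = ["Internet", "social media", "social network sites"]

def ts_clause(terms):
    # Quote each term so that multi-word phrases match exactly.
    return "TS=(" + " OR ".join(f'"{t}"' for t in terms) + ")"

query = ts_clause(fake_news_terms) + " AND " + ts_clause(internet_terms)
print(query)
```

The two TS clauses are joined with AND so that a retrieved article must match at least one term from each list.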

2.3 Study selection and evaluation


Despite the initial broad scope, the review was limited by several criteria. In the initial selection stage, we excluded articles written in languages other than English, obtaining 1,838 articles. Thereafter, in the selection stage, our working definition of FNI was the first major
limitation, which meant that the literature outside the four features of FNI was disregarded.
As this study aimed to understand the antecedents and consequences of FNI, articles
focussing solely on the interventions and detections of FNI were excluded. Finally, our review
consisted of 202 relevant articles that focussed on the antecedents and consequences of FNI.
Figure 1 provides an overview of the research design of this study.
Following prior systematic reviews (Domenico et al., 2021b; Wu et al., 2020), we manually
developed a data extraction process to report the main characteristics of the papers.
Specifically, two authors independently constructed a table comprising eight overall analytical
categories for classifying and evaluating the identified articles. The first column comprises
biographical information, whereas the second column summarises the research objectives or
questions. The main findings about the antecedents, spread process and consequences of FNI
are summarised in the third, fourth and fifth columns, respectively. The sixth column analyses
the theoretical foundations, while the seventh column systematises information on
methodological issues. Finally, the eighth column summarises the dataset for these articles.
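The eight analytical categories above can be pictured as a simple record type. The field names below are ours, not the authors’ (the paper does not publish its table schema), and the example row is a paraphrase of a study cited elsewhere in this review.

```python
# Sketch of the eight-column data-extraction table described above.
# Field names and the example values are illustrative assumptions.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ReviewedArticle:
    biographical_info: str      # column 1: authors, year, journal
    objectives: str             # column 2: research objectives or questions
    antecedent_findings: str    # column 3: findings on the antecedents of FNI
    spread_findings: str        # column 4: findings on the spread process
    consequence_findings: str   # column 5: findings on the consequences of FNI
    theories: List[str] = field(default_factory=list)  # column 6: theoretical foundations
    methodology: str = ""       # column 7: methodological issues
    dataset: str = ""           # column 8: dataset used

row = ReviewedArticle(
    biographical_info="Vosoughi et al. (2018), Science",
    objectives="How do true and false news spread online?",
    antecedent_findings="",
    spread_findings="False news diffuses farther, faster, deeper and more broadly",
    consequence_findings="",
    methodology="Observational analysis of Twitter cascades",
    dataset="Twitter rumour cascades",
)
print(row.spread_findings)
```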

2.4 Descriptive quantitative analysis


2.4.1 Overall growth. Figure 2 illustrates the annual number of published articles on FNI.
Notably, FNI research has recently attracted considerable attention among scholars. Besides
two papers published in 2014 and 2016, the remaining 200 papers were published between
2017 and 2021. The number of published articles has increased exponentially.
2.4.2 Publication sources and disciplines. Overall, 202 articles were published in 117
journals from various disciplines. The distribution of these journals reflects the multi-
disciplinary nature of FNI studies. The journals that published three or more articles included
PLoS ONE (nine articles), Computers in Human Behavior (seven), Journalism Practice (seven),
New Media and Society (seven), Digital Journalism (six), Social Media + Society (six), Media and Communication (five), Journal of Medical Internet Research (four), African Journalism Studies (three), Computational and Mathematical Organization Theory (three), Information Processing and Management (three), Information, Communication and Society (three), International Journal of Environmental Research and Public Health (three), Journal of Information Technology and Politics (three), Journal of Retailing and Consumer Services (three), Online Information Review (three), Technological Forecasting and Social Change (three) and Telematics and Informatics (three). Top-tier journals have also published articles on FNI studies.

[Figure 1. Research design of this study. The figure summarises the four features of FNI (verified false information; intentional motivation with clear goals; presentation format of the news; distributed on the internet), the widely adopted search terms (fake news, misinformation, disinformation, rumour; Internet, social media, social network sites) and the literature search, selection and evaluation pipeline: 1,896 articles retrieved from the SCIE and SSCI databases; 1,838 after removing non-English articles; 502 related to fake news; 209 related to our research issues; classification results then verified (196 agreed; 13 inconsistent results discussed, of which 6 were retained), yielding 202 articles for the literature analysis, the conceptual framework and the future research directions.]

[Figure 2. Distribution of reviewed articles by publication year: 2 articles in 2016 and before, 7 in 2017, 23 in 2018, 25 in 2019, 43 in 2020 and 102 in 2021.]
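The selection funnel reported in Subsection 2.3 and Figure 1 can be checked arithmetically. The counts below are taken from the figure; the variable names are ours.

```python
# Sketch: the article-selection funnel, checked arithmetically.
# Counts are taken from Figure 1; variable names are illustrative.
retrieved = 1896            # initial SCIE/SSCI query results
english = 1838              # after excluding non-English articles
fake_news_related = 502     # related to fake news per the working definition
issue_related = 209         # related to the study's research issues
agreed = 196                # classifications agreed by both coders
discussed = 13              # inconsistent classifications discussed
retained_after_discussion = 6

# The discussed articles are exactly those not initially agreed upon.
assert discussed == issue_related - agreed

final = agreed + retained_after_discussion
print(final)  # -> 202, the number of articles in the final review
```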
By referring to the classification in the SCIE and SSCI databases, we grouped the 117
journals into nine disciplines: business and economics, information systems, medicine and
health, communication, politics, psychology, social science, multidisciplinary sciences and
others. As we excluded articles on the detection of FNI, two journals in the computer science
field were classified as “others”.
In Figure 3, we classified the articles into two groups based on publication year (2021 and
forthcoming, and 2020 and before) and depicted the number of studies in different disciplines.

[Figure 3. Number of studies from different disciplines, split by publication period (2020 and before vs. 2021 and forthcoming). Extracted per-discipline counts for the two periods: Others 1 and 5; Social science 3 and 5; Multidisciplinary sciences 6 and 9; Politics 7 and 8; Business and economics 7 and 10; Psychology 7 and 11; Information systems 9 and 18; Medicine and health 12 and 21; Communication 24 and 39.]
Most articles were published in the fields of communication, medicine and health, information systems and psychology. While many FNI studies before 2020 focussed on the political events and dissemination of FNI, recent studies concentrated on the fields of healthcare and information systems.

2.5 Input–process–output conceptual framework and research issues


Although most studies emphasise the detection of FNI, understanding the antecedents and
consequences of FNI is equally important. In recent decades, studies have implemented and
advanced the IPO framework to describe the antecedents, processes and consequences of the
studied objects (Ilgen et al., 2005; Stewart and Barrick, 2000; Yoo et al., 2020). This framework
portrays the current state of knowledge in investigating the antecedents and consequences of
FNI (Figure 4).
“Input” in this framework refers to the antecedents of FNI, that is, the starting point of FNI
development. We explored the “what”, “why”, “who”, “when”, “where” and “how” (5W1H) of
creating FNI.
“Process” focusses on the spread of FNI. The widespread adoption of information and
communication technologies, especially the internet media, provides unique features of FNI
spreading. In this process module, we also ascertained the factors that affect the spread
of FNI.
“Output” in this framework refers to the consequences of FNI. The effects of FNI cover
diverse fields, including the political, social, scientific, health, business, media and journalism
fields.

3. Literature analysis
To address these research issues, we coded the identified literature using a mix of open, axial
and selective coding. First, open coding was used to assign theoretical labels to the first-order
concepts and evidence; thereafter, axial coding was used to identify and validate new and
existing second-order themes aligned with the dimensions; finally, selective coding was used
to group the second-order themes into new and existing aggregate dimensions (Gioia
et al., 2012).
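The three coding stages can be pictured as successive groupings from raw evidence up to the IPO modules. All labels in the sketch below are invented for illustration; they are not taken from the paper’s actual codebook.

```python
# Sketch of open -> axial -> selective coding as successive groupings.
# All code labels are hypothetical examples, not the paper's actual codes.

# Open coding: theoretical labels assigned to first-order evidence.
open_codes = {
    "clickbait converts traffic to ad revenue": "economic motive",
    "discrediting rival politicians": "ideological motive",
    "satirising public figures": "entertainment motive",
}

# Axial coding: first-order concepts grouped into second-order themes.
axial_themes = {
    "economic motive": "why FNI is created",
    "ideological motive": "why FNI is created",
    "entertainment motive": "why FNI is created",
}

# Selective coding: second-order themes grouped into aggregate dimensions
# that map onto the IPO modules (input, process, output).
selective_dimensions = {
    "why FNI is created": "input",
}

def dimension_for(evidence: str) -> str:
    # Trace one piece of evidence through all three coding stages.
    concept = open_codes[evidence]
    theme = axial_themes[concept]
    return selective_dimensions[theme]

print(dimension_for("discrediting rival politicians"))  # -> input
```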

3.1 Input of FNI


We conceptualised FNI to summarise four core features in Subsection 2.1 to guide the study selection and evaluation process. This part presents the 5W1H information about the input of

[Figure 4. The IPO conceptual framework for investigating the antecedents and consequences of FNI. Each module comprises aggregate dimensions with corresponding research issues. Input module: what FNI looks like (What does FNI look like?); why FNI is created (Why do FNI providers want to create FNI?); who creates FNI (Who is highly motivated to create FNI?); when FNI is created (When are FNI providers more prone to create FNI?); where FNI is created (Where are FNI providers more prone to create FNI?); how FNI is created (How do FNI providers create FNI?). Process module: spread features (What are the features of FNI spreading?); factors affecting the spread (Which factors affect the spread of FNI?). Output module: effects of FNI in the political, social, scientific and health, business, and media and journalism fields.]
FNI, including what FNI looks like, why FNI is created, who creates FNI, when and where FNI is created and how FNI is created.
3.1.1 What FNI looks like. Notably, FNI usually originates from real events (Biancovilli et al., 2021; Chen et al., 2021a), such as elections, earthquakes and bombings, and it contains information about these events (Berkowitz and Schwartz, 2016; Mourão and Robertson, 2019). Moreover, FNI acts as a mirror to reflect national news agendas. For instance, immigrants are most targeted by FNI in German-speaking countries, while FNI in English-speaking countries frequently attacks political actors (Humprecht, 2019).
As it is distributed and spread on the internet, FNI is conceptualised as a type of syntactic
digital object comprising content and structure and characterised by attributes of editability,
openness, interactivity and distributedness (Khan et al., 2021). Owing to the widespread use
of FNI in politics, it is often considered as a digitally politicised term (Brummette et al., 2018).
Mourão and Robertson (2019) report that FNI is more akin to partisan viewpoints and closely
related to identity politics and partisanship.
3.1.2 Why FNI is created. The main motivations regarding the emergence of FNI are
pecuniary or ideological in nature or both (Tandoc et al., 2018). The FNI phenomenon is
largely economically motivated (Bakir and McStay, 2018; Hughes and Waismel-Manor, 2021;
Rini, 2017; Tandoc et al., 2019). Essentially, FNI produces a surge of clickbait to attract diverse
audiences (Munger, 2020), and the attracted traffic and clicks are converted into advertising
revenues (Carlson, 2020; Tandoc et al., 2019). As long as the traffic is real and the
advertisements are being served to real people, Internet media platforms would profit from
and accept FNI (Silverman et al., 2017b) and brands may profit from and fund FNI (Berthon
and Pitt, 2018).
Ideologically, some FNI providers intentionally muddy the waters of public discourse or
discredit particular personalities to push for the political or ideological agenda they support
(Tandoc et al., 2019; Tejedor et al., 2021).
In addition to pecuniary and ideological motivations, providers may create FNI for other
reasons, such as sensationalism (Robledo and Jankovic, 2017), sarcasm and education
(Metzger et al., 2021), persuading/informing others, debating and entertaining/trolling
(Chadwick et al., 2018).
3.1.3 Who creates FNI. In many cases, FNI is adopted by political actors, including
government and partisan third parties (Benham, 2020), to convince supporters against rival
parties or to advance candidates they favour (Bennett and Livingston, 2018). Internet users
from opposing political parties are contextually homophilous and they use FNI to discredit
the opposition (Brummette et al., 2018; Hameleers, 2020).
As young people are relatively immature and impulsive, students with high-level political
engagement frequently create and share political FNI (Madrid-Morales et al., 2021). Similarly,
young people with low knowledge generate FNI to make money. For instance, in Veles, a
small town in central North Macedonia that generated many viral posts during the 2016 US
presidential election, mostly young people in their early twenties with little English fluency
created and disseminated substantial volumes of FNI (Hughes and Waismel-Manor, 2021).
From the literature, it is evident that a significant amount of FNI is generally created and
shared by a small, disloyal group of heavy Internet users (Ahmed et al., 2020; Nelson and
Taneja, 2018); however, even a small population of FNI providers can cause tremendous harm
(Kopp et al., 2018). For instance, Grinberg et al. (2019) reported that only 1% of individuals
accounted for 80% of fake news source exposure, whereas 0.1% accounted for approximately
80% of fake news source shares. Waszak et al. (2018) found that more than 20% of dangerous
links in their material are generated by one source.
3.1.4 When and where FNI is created. Notably, FNI has very strong timeliness. When the
targeted topics emerge, FNI quickly emerges to attract traffic to achieve pecuniary or
ideological goals or seek self-satisfaction (Talwar et al., 2020) and is more likely to disappear
than real news after the targeted events (Bastos, 2021). Moreover, FNI tends to return multiple times after the initial publication until the tension around the target dissolves (Shin
et al., 2018).
Promulgators prefer to create FNI in countries with low levels of trust in professional news
media and government or those without sufficient public service broadcasting, such that the
citizens’ levels of public affairs knowledge are poor (Humprecht, 2019). There are significant
differences in the FNI topics in Western democracies, such as the US, the UK, Germany and
Austria (Humprecht, 2019). Therefore, FNI should be understood within its particular context
of production and consumption, and investigations into FNI in different environments should
consider local specificities (Wasserman, 2020).
When creating FNI, the promulgators usually and purposely create websites and adopt
names that are similar to those of legitimate news organisations and then intentionally
publish FNI on these websites (Allcott and Gentzkow, 2017).
3.1.5 How FNI is created. When creating FNI, the promulgators elaborately design it and
may even satirise politicians or political organisations by impersonating them (Ferrari, 2020).
Although FNI usually originates from real events (Berkowitz and Schwartz, 2016; Mourão
and Robertson, 2019), real and fake news have more significant differences than similarities
(Horne and Adali, 2017) in terms of news content, images and other links.
Regarding news content, FNI usually exaggerates scientific findings and hypes new therapies with uncritical optimism (Jang et al., 2019; Marcon et al., 2017; Robledo and Jankovic,
2017), and it is more topically autonomous (Vargo et al., 2018). During the spreading process,
the content of FNI usually undergoes significant modifications to achieve specific goals (Jang
et al., 2018). Apparently, FNI content inspires fear, disgust and surprise, whereas true stories
inspire anticipation, sadness, joy and trust (Vosoughi et al., 2018). In terms of image
distribution patterns, images in real news are more diverse and denser than those in fake
news (Jin et al., 2017). Considering the economics of emotions (Bakir and McStay, 2018), FNI
leverages emotions to generate attention and produce a surge of clickbait to attract diverse
audiences (Munger, 2020). As Internet users are more likely to trust images and audio,
deepfakes have quickly become a popular form of fake news (Ahmed, 2021b; Dasilva et al.,
2021; Vaccari and Chadwick, 2020; Wahl-Jorgensen and Carlson, 2021). Regarding other
links, root tweets about fake news are mostly generated by ordinary accounts, but they often
include a link to non-credible news websites (Jang et al., 2018).

3.2 Spread process of FNI


3.2.1 Spread features of FNI. The spread of FNI is modelled as the spread of a viral contagion
to emphasise extensive spreading. On account of its rapid spread, the World Economic
Forum considers FNI as one of the main threats to human society (Tornberg, 2018). Usually,
FNI returns multiple times after the initial publication with textual changes, whereas real
news does not (Jang et al., 2018; Shin et al., 2018), and is propagated for a longer period
gradually but constantly (Jang et al., 2019). Generally, FNI receives more attention (Clarke
et al., 2021) and is shared and viewed substantially more than real news (Clarke et al., 2021;
Waszak et al., 2018). Alsyouf et al. (2019) revealed that inaccurate articles are 28 times more
likely to be shared than factual articles. Waszak et al. (2018) found that 40% of the most
frequently shared links contained in text are fake news and are shared more than 450,000
times. Even a very small population of FNI providers that transiently invades a much larger
population of common users can strongly alter the equilibrium behaviour of the population
and cause the viral spread of FNI (Kopp et al., 2018).
Unlike its spreading scope, the spreading speed of FNI has not been consistent. Vosoughi
et al. (2018) concluded that FNI diffuses significantly farther, faster, deeper and more broadly
than the truth in all categories of information, while Jang et al. (2018) showed that tweets about real news spread widely and quickly.
Current efforts to stop the spread of FNI have not produced satisfactory results, thus
highlighting the challenge faced by all stakeholders to model its spread and identify the
influencing factors to halt or decelerate it (Giglietto et al., 2019).
3.2.2 Factors affecting the spread of FNI. Factors that influence the spread of FNI are
generally divided into individual factors, FNI information, debunking information, the
internet media and information environment. Perceived trust in news usually plays a
mediating role between these influencing factors and spread behaviours (Altay et al., 2021;
Giglietto et al., 2019; Hopp, 2021; Laato et al., 2020; Pedersen and Burnett, 2018).
Individual factors that affect the acceptance and spread of FNI are demographics,
personality traits, personal involvement, prior exposure, confirmation bias, conformity to
other users’ views and media habits. Specifically, people who tend to trust and spread FNI
usually have lower levels of education (Schaewitz et al., 2020; Scherer et al., 2021) or
elementary occupations (Bapaye and Bapaye, 2021), are young people (Allcott and Gentzkow,
2017) or are aged over 65 years (Bapaye and Bapaye, 2021) and male (Buchanan, 2020;
Filkukova et al., 2021). Moreover, age, rather than gender or education, has a greater influence in particular cultures (Rampersad and Althiyabi, 2020). Some personality traits characterise
users who share FNI: agreeableness (Buchanan and Benson, 2019) or conscientiousness
(Buchanan, 2021), higher extraversion and neuroticism (Buchanan, 2020), altruism (Apuke
and Omar, 2021a, c; Balakrishnan et al., 2021) or overconfidence (Lyons et al., 2021) and
religious beliefs (Islam et al., 2020). Political beliefs are highly related to users’ sharing
behaviours (Lobato et al., 2020; McPhetres et al., 2021; Neyazi et al., 2021; Osmundsen et al.,
2021; Tandoc et al., 2021). Individuals with a high cognitive ability are less trustful of FNI
(Tandoc et al., 2021), including deepfakes (Ahmed, 2021a, c). Cognitive elaboration serves as a
mediator between perceived credibility and sharing intention (Ali et al., 2022). Cognitive-
perceptual schizotypy directly affects sharing intention (Buchanan and Kempley, 2021).
People usually trust and share news that they have encountered before (Choi and Lee, 2021;
Pennycook et al., 2018), that aligns with their beliefs (Buchanan, 2021; Kim and Dennis,
2019) or that they perceive as relevant (Chua and Banerjee, 2018) and important (Tully, 2022),
but their intention to share would be reduced after exposure to others’ critical comments
(Colliander, 2019). Social norms significantly affect users’ intention and spread behaviours
(Andı and Akesson, 2021). Valecha et al. (2021) used social (personal relations), spatial
(geometric) and temporal (time gap) distances to the health crisis to measure perceived
relevance and importance. Anxiety (Freiling et al., 2021), negative emotions (Wang et al., 2020)
and death-related thoughts (Lim et al., 2021) are driving factors in users’ willingness to share
FNI. Media use habits are relevant to sharing behaviours on Facebook and Twitter (Neyazi
et al., 2021). Users with low news consumption, high Internet use (Bringula et al., 2021)
and high trust in the news on the internet (Filkukova et al., 2021) easily trust and spread FNI.
Tie strength (Apuke and Omar, 2020) and news-find-me perception (Apuke and Omar, 2021b)
are strong predictors of FNI sharing. Network size on the internet media affects deepfake
sharing (Ahmed, 2021c). In addition, Talwar et al. (2019) focussed on the relationship between
the dark side of social media and fake news sharing behaviour, and investigated the effects of
several individual media use habits, including trust in the internet, self-disclosure on
the internet, fear of missing out, social media fatigue and social comparison.
The FNI-related factors affecting the acceptance and spread of FNI are information cues,
news type, credibility of news sources and the presentation format of news. News containing
trolling (Fichman and Vaughn, 2021) or persuasive and uncertain words (Zhou et al., 2021a),
and news accompanied by a high number of Facebook “likes” (Ali et al., 2022), are more likely to
be disseminated. The veracity of headlines has little effect on sharing intention despite the
fact that it has a considerable effect on judgements of accuracy (Pennycook et al., 2021). The
news type moderates the relationship between personal involvement and intention to trust
and share (Chua and Banerjee, 2018). The topic of FNI is also consequential, as
conspiracy theories are most likely to be shared (Wang et al., 2021). Specifically, health advice,
help-seeking misinformation and emotional support significantly increase the dissemination
of misinformation (Zhou et al., 2021b). The trustworthiness of the news source directly affects
users’ intention to share (Buchanan and Benson, 2019), and unknown and low-rated
sources are the usual culprits in spreading fake news (Kim and Dennis, 2019; Kim et al., 2019).
Kim and Dennis (2019) also revealed that the presentation format of highlighting the source
makes users more sceptical of all articles, regardless of the source’s credibility, particularly
when the FNI sharer has a weak interpersonal relationship with the receiver (Domenico et al., 2021a).
Furthermore, users’ intention to share FNI is significantly affected by debunking
information. The presence of debunking information (Chua and Banerjee, 2018; Chung and
Kim, 2021) and the flagging of news (Ardevol-Abreu et al., 2020; Mena, 2020) reduce the
intention to share by diminishing the credibility of the news. Sometimes, FNI plays a small
part in the overall conversation, but community-based debunking and shaming responses to
FNI overwhelm the initial FNI by orders of magnitude. Even if the response information
neither debunks the FNI nor is itself unsuitable, the negative reaction to FNI can also
spread at considerable speed (Babcock et al., 2019). Therefore, effectively debunking FNI is very important.
The extensive spread of FNI cannot be achieved without the internet media (Bandeli and
Agarwal, 2021; Brady et al., 2017; Fernandez-Torres et al., 2021; Kopp et al., 2018; Nelson and
Taneja, 2018; Su, 2021). Based on the echo chamber effect, the network is strongly segregated
along the types of information circulating in it (Shao et al., 2018), and the presence of an
opinion and network-polarised cluster of nodes in a network contributes to the diffusion of
FNI (Gaumont et al., 2018; Tornberg, 2018). Owing to the presence of filter bubbles, surgeons
may be unaware of the FNI that patients read, and thus, it is difficult to counteract the FNI
shared around the patient in a timely manner (Brady et al., 2017). As reflected by the
information cocoon effect, the spread of FNI tends to be confined to fewer communities than
other political news (Gaumont et al., 2018). Newly emerging automated bots also
accelerate the diffusion of FNI. Al-Rawi (2019b) disclosed that most of the top 50 Twitter
users during the propagation of FNI were likely to be automated bots. However,
online regulations have not kept pace with the rapid development of the internet media.
Without the existence of traditional gatekeepers, information quality on the internet cannot
be guaranteed (Benham, 2020). Aspects of social media testimony also play a role in the
transmission of FNI (Rini, 2017). Information from traditional gatekeepers is shared much
less than the content from other nonprofessional organisations (Bradshaw et al., 2020). If
platforms can assume the gatekeeper role, FNI spreading can be significantly controlled.
With the increase in regulation, the relative magnitude of the FNI problem on Facebook has
fallen sharply since 2016, but it continues to rise on Twitter (Allcott et al., 2019).
Some studies blame the information environment and the development of platforms, such
as Google and Facebook, for the rise of FNI (Humprecht, 2019). Notably, FNI spreads much
faster not only because of the internet media’s technological affordances but also because of
how users have domesticated the internet media in their daily lives (Tandoc et al., 2019).
Information overload leads to an increased likelihood of FNI sharing by increasing
consumers’ psychological strain (Apuke and Omar, 2021b; Bermes, 2021). Culture has the
most significant impact on the spread of fake news (Chen et al., 2021a), as mediated by the
comprehensibility of the news item (Rampersad and Althiyabi, 2020). A culture with mutual
support can restrict the spread of FNI. In Singapore, most Internet users ignore the FNI they
come across and only offer corrections when the issue is strongly relevant to them and to
people with whom they share strong and close interpersonal relationships (Tandoc et al.,
2020). Globally, Saudi Arabia has a comparatively higher level of awareness and higher
reluctance to share medical information related to coronavirus disease 2019 (COVID-19)
online (Alshareef and Alotiby, 2021). Mobile connectivity and political freedom in a nation
contribute to COVID-19-related FNI propensity, whereas economic and media freedom inhibit
it (Shirish et al., 2021). The specific surrounding environments, such as the UK tabloid
newspapers, provide a fertile context for misinformation and resources for disinformation
(Chadwick et al., 2018). The behaviours of a political leader, such as the former US President
Donald J. Trump, may have nudged people’s sharing of COVID-19-related FNI (Wang et al.,
2021). Financial incentives have a positive but diminishing impact on the likelihood of
sharing online healthcare information regardless of validity, and legislation may deter
the sharing of healthcare information that users perceive to be true but cannot deter them
from sharing healthcare FNI that they perceive to be fake (Au et al., 2021b).

3.3 Output of FNI


Extant studies have explored the effects of FNI in various fields. As disclosed by the third-
person effect, FNI would have greater effects on out-group members than on in-group members
(Jang and Kim, 2018; Mena, 2020), meaning that the effects of FNI would not be restricted to a
limited group. The significant effects of FNI are also described as the agenda-setting power of
FNI (Vargo et al., 2018). Figure 5 describes the influencing mechanisms of FNI.
Since the 2016 US presidential campaign and Brexit, the effects of FNI in politics have
been widely explored. For instance, FNI delegitimises the electoral process and disrupts the
normal democratic order (McKay and Tenove, 2020) through two pathways. First, FNI
pollutes iconoclastic political discourse (Richey, 2018; Smith, 2019), which reduces citizen
confidence in institutions; undermines the credibility of official information and institutional
legitimacy and destabilises centre parties, governments and elections (Bennett and
Livingston, 2018). Second, FNI directly causes negative attitudes of inefficacy, alienation
and cynicism towards political candidates, and this relationship is mediated by the perceived
realism of FNI (Balmas, 2014). By affecting citizens’ attitudes, FNI interferes with voting
decisions and delegitimises the electoral process (Jones-Jang et al., 2020; Ncube, 2019; Neyazi
et al., 2021). For instance, Clinton lost many votes in 2016 because of FNI (Gunther et al., 2019).
Allcott and Gentzkow (2017) estimated that FNI, with its similar persuasiveness to TV
advertising, would change vote shares by approximately hundredths of a percentage point.
Moreover, FNI leads the public away from accurate information and polarises public
discourse (Marcon et al., 2017), and thus, it is recognised as a social problem (Tandoc et al.,
2019). Additionally, FNI imposes social costs (Allcott and Gentzkow, 2017), destroys honesty
and interpersonal trust (Duffy et al., 2020), causes unconscious behaviour (Bastick, 2021;
Wani et al., 2021) and leads to information panic (Carlson, 2020; Creech and Roessner, 2019).
However, FNI may also have the potential to reveal the underlying social bonds that are at
stake (Inwood and Zappavigna, 2021).
Furthermore, FNI highlights the challenges confronting science development and
healthcare management by creating distrust in scientific enterprises (Iyengar and Massey,
2019) and causing misperceptions of scientific knowledge (Marcon et al., 2017). False medical
information is widely publicised without accountability or concerns regarding the safety of
patients (Brady et al., 2017; Robledo and Jankovic, 2017). In addition, FNI is detrimental to
public health and raises several health policy-related issues (Akpan et al., 2021; Al Khaja et al.,
2018; Allington et al., 2021; Atehortua and Patino, 2021; Beletsky et al., 2020; Luo and Cheng,
2021; Nazar and Pieters, 2021; Warner et al., 2021). FNI about the prevention and treatment
strategies for COVID-19 has lethal consequences (Sridharan and Sivaramakrishnan, 2021;
Uwalaka et al., 2021). Propagated FNI is a primary cause of vaccine hesitancy and a reduction
in immunisation rates (Basch et al., 2021; Calo et al., 2021; Carrieri et al., 2019; Chen et al.,
2021b; Featherstone and Zhang, 2020; Hansen and Schmidtblaicher, 2021; Islam et al., 2021;
Romer and Jamieson, 2021; Sallam et al., 2021a, 2021b). Regarding COVID-19, FNI causes
more resistance to pandemic management measures (Forati and Ghose, 2021) and engenders
an infodemic (Himelein-Wachowiak et al., 2021).

[Figure 5: The influencing mechanisms of FNI, summarising its effects in the political, social, scientific and health, business, and media and journalism fields.]
Furthermore, FNI is a matter of concern for business and marketing research and practice.
A major reason for the emergence and spread of FNI is that it produces a surge of clickbait to
attract diverse audiences (Munger, 2020), and the attracted traffic and clicks are converted
into advertising revenues (Carlson, 2020; Flostrand, 2020; Tandoc et al., 2019). Although FNI
can attract traffic for brands, FNI has exposure risks and may negatively affect brand
reputation (Berthon and Pitt, 2018), particularly for service brands (Flostrand, 2020). Once
FNI is exposed, the credibility and trustworthiness of the involved brands would be seriously
undermined, and consumer behaviours would ultimately be affected (Song et al., 2019;
Visentin et al., 2019). Additionally, FNI is used to attack opponents. However, this strategy
is considered a failure (Song et al., 2019), as the damage to the offending firm, in terms of
advertising effectiveness and negative news publicity, is greater than that to the victim firm.
Moreover, FNI affects trading activity and price volatility in stock
markets. Clarke et al. (2021) found that the stock price reaction to FNI is discounted when
compared to legitimate news articles, although FNI generates more attention than a control
sample of legitimate articles.
It is noteworthy that FNI is a symptom of the collapse of the old news order and chaos in
contemporary public communication (Waisbord, 2018). The presence of FNI undermines
public trust (Al-Rawi, 2019a; Bennett and Livingston, 2018), thereby seriously threatening
mainstream media, whose selling point is credibility (Al-Rawi, 2021; Okoro and Emmanuel,
2018), and harms balanced journalism without vetted gatekeepers (Benham, 2020; Whipple
and Shermak, 2020). Driven by FNI, traditional news organisations need to re-imagine how
journalism is practised (Wahutu, 2019), re-assert their dominance, re-affirm the professional
paradigm of news (Wasserman, 2020) and re-assert control (Carlson, 2020). Although FNI is
considered a critical problem confronting journalism (Tandoc et al., 2019), it is also
regarded as an opportunity to develop media and journalism (Creech and Roessner, 2019).
The past news order can no longer adapt to the requirements of contemporary society.
Therefore, FNI attests to a new chapter of news and journalism by re-defining the “truth”
(Jukes, 2018) and creating “empathic media” (Bakir and McStay, 2018). Moreover, the
accusatory tweets about FNI may even boost journalism as the tweets raise the perceived
integrity and professionalism of newsmakers and increase the desire for audiences to read
news in the future (Tamul et al., 2020).

4. Conceptual framework for FNI research


To better guide FNI research, we propose a conceptual framework (see Figure 6) by
summarising the basic theoretical foundation, research methodologies and research datasets.

4.1 Theoretical foundations


Based on the identified articles, we summarised the adopted theories and hope that they serve
as solid foundations for future FNI research. The adopted theories comprise communication, social
science and psychology theories.
4.1.1 Communication theories. Communication theories can help us to understand the
process of FNI production, transmission, effects and intervention. Notably, FNI is produced
by various parties, including common journalists and political leaders. Media bias describes
the phenomenon whereby journalists allow their partisan predispositions to affect the choices
they make regarding the stories to pursue and publish; thus, it is used to explain the
emergence of FNI (Benham, 2020).

[Figure 6: The conceptual framework for FNI research, linking the input of FNI (what, why, who, when, where and how FNI is created), the spread process of FNI (spread features and influencing factors) and the output of FNI (in the political, social, scientific and health, business, and media and journalism fields) with the research methodologies (conceptual approach, 17 articles; case study, 17; focus group interview, 10; survey, 54; experiment, 33; secondary data analysis, 65; modelling and simulation, 6), research datasets (Twitter, 25 articles; Facebook, 11; WhatsApp, 8; Microblog, 2; Instagram, Pinterest, Reddit and TikTok, 1 each; secondary data from Crimson Hexagon, ComScore and Seeking Alpha) and theoretical foundations (communication, social science and psychology theories).]

The political communication theory explains the purposeful
process whereby elected and appointed leaders, the media and public citizens use messages to
construct meaning about political practices; its important units consist of political parties, the
state and policy experts (Bradshaw et al., 2020). The emergence of FNI necessitates revising
the political communication theory about the coherence and functionality of communication
flow between institutional actors, the media and the public (Bennett and Livingston, 2018).
Al-Rawi (2019a) argued that Twitter offered Trump a networked flak, based on the
propaganda model, which focusses on the inequality of wealth and power and its multilevel
effects on news media (Ali and Zain-ul-abdin, 2021). The hegemony theory explains the means
by which the dominant class maintains its superiority over the subordinate classes, thus
emphasising the significant effects of Trump’s news (Okoro and Emmanuel, 2018).
After the production of FNI, the audience theory explains how people encounter and use FNI
and how FNI affects them. Based on the audience theory, Nelson and Taneja (2018) elucidated
the role of audience availability in FNI consumption. In multichannel media, audiences are not
only exposed to a single-media outlet but also have diverse sources and contents to choose from.
Computer-mediated communication highlights the importance of digital technologies and
devices in the processes of human communication and news transmission (Au et al., 2021a). The
theory of media connectedness focusses on multiple information outlets and their interaction,
and it is adopted to explore users’ exposure to the propagated FNI (Balmas, 2014). When
analysing public discussion online, the theory of homophily suggests that social media users
have a propensity to associate and interact with other users that have similar traits and
ideologies (Brummette et al., 2018). Homophily impedes the open flow and exchange of
information and opinions, and therefore, it limits the existence of a competitive opinion debate.
As reflected in the theory of homophily, some Internet media phenomena have been proposed,
including the echo chamber effect (Gaumont et al., 2018; Shao et al., 2018; Tornberg, 2018), also
referred to as the liberal bubble effect (Pedersen and Burnett, 2018), the filter bubble effect (Brady
et al., 2017) and the information cocoon effect (Gaumont et al., 2018). Some theories also focus on
users. The marketplace of ideas serves as a means through which FNI can be identified and
eradicated (Brummette et al., 2018). Unlike the ideal theory, the non-ideal theory proposes that
FNI analysis should be used to identify realistic structural changes, rather than specifying
idealised individual practice (Rini, 2017). The reinforcing spirals model analyses the reciprocal
relationships between FNI and media trust, as well as how these relationships may fluctuate
over the long term (Valenzuela et al., 2022).
The theory of negativity bias argues that good news is usually taken for granted, and
therefore, it exerts less effect on behaviour and cognition when compared to bad news, which
is often disconcerting. Consequently, negative information tends to be weighed more heavily
than positive information (Chua and Banerjee, 2018). It is highly important to realise the
effects of bad news. As disclosed by the third person effect, FNI has greater effects on
out-group members than on in-group members (Cheng and Luo, 2021; Chung and Kim, 2021; Jang
and Kim, 2018; Mena, 2020; Talwar et al., 2020; Yang and Horning, 2020; Yang and Tian,
2021), meaning that the effects of FNI would not be restricted to a limited group. The
significant effects of FNI are also described as the agenda-setting theory (Guo and Vargo,
2020; Khan et al., 2021; Vargo et al., 2018), also called the agenda-building theory
(Arayankalam and Krishnan, 2021).
Gatekeeping plays an important role in the dissemination of information (Benham, 2020).
The network gatekeeping theory defines several basic concepts, such as gatekeepers,
gatekeeping and gatekeeping mechanisms, and it helps us to understand the relationships
among these concepts. Gatekeeping FNI on mainstream media versus the internet is very
important (Al-Rawi, 2019a). The theory of boomerang effect refers to a situation whereby the
information for debunking FNI backfires by reinforcing the FNI it is meant to refute (Chua
and Banerjee, 2018).
4.1.2 Social science theories. In particular, FNI is an interdisciplinary phenomenon, and
social science theories are widely adopted to explain it. When studying FNI, studies adopt the
stressor-strain-outcome model (Islam et al., 2020; Khan, 2021) or the technology acceptance
model (Rampersad and Althiyabi, 2020) to explore the specific factors that affect users’
attitudes and spread intention. Moreover, a few studies use the nudge theory to analyse data
from natural or lab experiments to ascertain the nudges affecting the spread of FNI (Altay
et al., 2021; Kim and Dennis, 2019; Wang et al., 2021). The accuracy nudge effect states that
asking participants to rate how accurate a piece of news is before sharing it reduces the
propensity to share FNI more than true news (Altay et al., 2021).
Drawing on social field theory, Pedersen and Burnett (2018) asserted that the Fourth Estate
is viewed as a field framed by homogenous journalistic praxes resulting from organisational
norms, and Wahutu (2019) analysed the spread of FNI by the journalism field. By
acknowledging the significant role played by citizen curation, Pedersen and Burnett (2018)
extended the understanding of digital curation beyond formalised professional environments
and explored the roles of common citizens in the formation or spread of FNI.
As FNI can be treated as false social support for individuals, the social support theory is
also adopted for studying FNI (Zhou et al., 2021b). Based on the social impact theory, when
many people on the internet frequently share specific information, it carries a higher social
impact, and Internet users may consider it the truth and adopt it (Apuke and Omar, 2020).
When analysing the effects of FNI in the business or marketing field, the theory of word-of-
mouth is adopted (Song et al., 2019). Clarke et al. (2021) used a theoretical model of trading to
show the effects of FNI on stock prices. Drawing on the terror management theory, Lim et al.
(2021) proposed that FNI is more likely to elicit death-related thoughts than real news.
4.1.3 Psychology theories. The information processing theory (Kim and Dennis, 2019) and
the dual process theory (Ahmed, 2021c; Ali et al., 2022; Tandoc et al., 2021) are proposed to
describe the process by which users’ perceptions and evaluations are formed. By using the
heuristic systematic model, studies analyse how to process messages heuristically or
systematically in two different ways (Ali et al., 2022; Wang et al., 2021). While Talwar et al.
(2019) relied on the rational choice theory to analyse users’ perceptions and spreading
behaviours, Ali et al. (2022) believed that the limited capacity model and bounded rationality
theory were more suitable in analysing users’ utilities and perceptions. The uses and
gratifications theory helps to explain the rationale behind people’s use of Internet media by
analysing the needs they seek to satisfy and exploring motivations for the widespread use of
the internet platforms (Apuke and Omar, 2021a, c; Balakrishnan et al., 2021; Gioia et al., 2012;
Tandoc et al., 2020). Additionally, financial incentives are disclosed in the prospect theory,
wherein users’ expected utilities are affected by the initial expectations and subsequent
evaluations (Au et al., 2021b). People feel losses more keenly than equivalent gains and try to
avoid the former, meaning that they may act on FNI even if they do not believe it, merely
based on the “better safe than sorry” principle (Duffy et al., 2020).
As disclosed in the media richness theory, online information is prolific (Zhou et al., 2021a),
so determining news credibility may not be an easy or straightforward task in an
information-abundant Internet environment. The cognitive load theory (Apuke and Omar,
2021b; Islam et al., 2020; Laato et al., 2020) and the connectivism learning theory (Akpan et al.,
2021) are proposed to guide the learning procedures for processing and evaluating online
information. The coping theory (Khan, 2022) and the cognitive behavioural theory (Khan, 2021)
are also used to analyse the cognitive and behavioural efforts exerted to manage specific
external and/or internal demands. When facing information overload, the theory of
transactional stress and the psychological resilience theory reveal that consumers with high
resilience feel less invaded by a mass of information, mitigate transactional stress and thus
have lower probabilities of spreading FNI (Bermes, 2021).
Traditional theories of perception present perception as a passive action, whereby stimuli
from the outside world impinge on the perceiver and are then filtered, and thus noticed,
ignored or processed further. Unlike the traditional theories of perception, Neisser’s perception theory
posits that perception is not a reception action, but rather a construction (Berthon and Pitt, Fake news on
2018). The theory of planned behaviour is adopted to investigate the determinants of users’ the internet
perceptions and behaviours (Chen et al., 2021b).
The self-concept theory reflects an individual’s collection of beliefs about his or her
ability to respond to FNI (Colliander, 2019), and the self-efficacy theory analyses an individual’s
belief in his or her capacity to identify and not share FNI (Hopp, 2021). As disclosed by
the self-determination theory, individuals have innate psychological needs to fulfil
(Balakrishnan et al., 2021), and unfulfilled needs may result in psychological harm (Talwar
et al., 2019), thus promoting users’ spreading behaviours. The self-presentation theory and the
self-enhancement theory help in understanding users’ motivation to present and promote a
positive image, and thus, explain specific factors affecting users’ FNI spreading behaviours
and attitudes (Tandoc et al., 2020).
Regarding confidence in identifying FNI, the Dunning–Kruger effect for news
discernment has been identified (Lyons et al., 2021). Based on the ignorance theory,
Osmundsen et al. (2021) also revealed that people want to share accurate information, but they
end up sharing FNI because they lack the cognitive reflection or motivation to discern
between true and false information. However, the polarisation theory holds that FNI sharing is
not an accident caused by ignorance; it is partisan business as usual. Reflected by the
disruption theory, deep-seated discontent with the status quo and a desire to disrupt the
existing social and political order fuel the spread of FNI (Osmundsen et al., 2021).
When users encounter information contrary to their opinions and expectations, they
experience cognitive dissonance (Kim et al., 2019; Wang et al., 2020). Rooted in the theory of
cognitive dissonance, the selective exposure theory posits that audiences seek information
consistent with their beliefs to avoid cognitive dissonance, which may lead to psychological
stress (Melki et al., 2021). Based on the confirmation bias theory, certain factors would affect
people’s confirmation bias, and thus influence their perceptions (Bringula et al., 2021; Kim and
Dennis, 2019). As disclosed by the illusory truth effect, repetition increases the ease with which
statements are processed, which, in turn, is heuristically used to infer the accuracy of news
(Altay et al., 2021; Pennycook et al., 2018). According to the relevance theory, people’s
communication is governed by the expectations of relevance (Altay et al., 2021). Drawing on
the social comparison theory, Talwar et al. (2019) established that users prefer to compare with
others and try to identify FNI and not spread it to build a positive image.
Furthermore, the external environment stimulates and persuades users to change their
evaluation and behaviour. The persuasion theory is widely adopted to analyse how users are
persuaded by news, including FNI, and even in guiding the production of FNI (Chen et al.,
2021c). The elaboration likelihood model of persuasion explains two routes to change one’s
attitude: the central and peripheral routes (Chen et al., 2021c). As disclosed by the narrative
persuasion theory, priming tweets that label a story as FNI can moderate the transportability
of its news narrative (Tamul et al., 2020). The transportation theory describes a
process whereby media narratives may engage message recipients and mediate stronger
effects of messages with narrative formats on informational and persuasive outcomes
compared to non-narrative formats (Tamul et al., 2020). Many Internet media allow users to
interact through comments. Based on the exemplification theory, the use of emotional and
arousing messages or images to elicit responses based on impression formation significantly
affects users’ perceptions (Bandeli and Agarwal, 2021).
Based on the realism heuristic theory, online users are more likely to trust visual
information than text-based descriptions of the world (Ahmed, 2021c). According to the
excitation transfer theory, the evaluations of surrounding users affect an individual's own
evaluation of FNI (Tamul et al., 2020). Based on the affordance theory, the affordance of the
internet media affects users’ sharing behaviours (Apuke and Omar, 2021b; Islam et al., 2020).
According to the reputation theory (Kim et al., 2019), the information credibility theory
(Ahmed, 2021b) and the source credibility theory (Visentin et al., 2019), the reputation of news
sources affects users' perception of the news and their spreading behaviours.

4.2 Research methodologies


This section analyses the research methodologies used to study the antecedents and
consequences of FNI. Seven research methodologies were identified through a review of the 202
articles: conceptual approach (n = 17), case study (n = 17), focus group interview (n = 10),
survey (n = 54), experiment (n = 33), secondary data analysis (n = 65) and modelling and
simulation (n = 6).
Sixty-five articles fell into the "secondary data analysis" category, wherein different data
analysis methods were adopted. As FNI studies inevitably deal with news content,
researchers usually adopt content analysis (n = 22), network analysis (n = 9) and topic
modelling (n = 3) to handle text data. Descriptive statistics (n = 17), regression (n = 11) and
SEM (n = 3) are used to analyse the collected data.
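As an illustration of the text-handling side of these methods, the following minimal sketch performs a crude word-frequency content analysis contrasting fake and real headlines; the corpus, its labels and the token rule are invented for illustration and do not come from any reviewed study:

```python
import re
from collections import Counter

# Invented example headlines; a real study would use a labelled corpus.
corpus = [
    ("fake", "SHOCKING cure doctors don't want you to know"),
    ("fake", "You won't believe this shocking election secret"),
    ("real", "Parliament passes budget after lengthy debate"),
    ("real", "Health ministry reports seasonal flu figures"),
]

def tokenize(text):
    """Lowercase a headline and split it into word tokens."""
    return re.findall(r"[a-z']+", text.lower())

# Count term frequencies separately for each label.
freq = {"fake": Counter(), "real": Counter()}
for label, headline in corpus:
    freq[label].update(tokenize(headline))

# Terms appearing only in the fake subset form a crude lexical signature.
fake_only = {t for t in freq["fake"] if t not in freq["real"]}
print(sorted(fake_only))
```

Real content-analysis studies work with far larger corpora and usually move from raw counts to coding schemes or topic models, but the counting step above is the common starting point.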
Surveys (n = 54) and experiments (n = 33) are widely adopted to obtain primary data.
Regression (n = 51), SEM (n = 17), descriptive statistics (n = 10) and ANOVA (n = 9) are used
to analyse primary data obtained from surveys and experiments. Moreover, case studies
(n = 17) and focus group interviews (n = 10) are also used to obtain primary data for
studying FNI.
With 17 articles, the conceptual approach is adopted to conceptualise the FNI phenomenon.
General descriptions, conceptual frameworks and theories have been developed to
understand FNI. The number of "conceptual approach" articles illustrates that research
opportunities for the FNI phenomenon still exist. Beyond further conceptual analysis, future
work could empirically verify these proposed conceptualisations and theories by collecting
and analysing data.
Six articles adopt modelling or simulation approaches to explore the diffusion of FNI, of
which two conduct a simulation analysis on the spread network of FNI.
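To make the modelling-and-simulation category concrete, the sketch below simulates an independent-cascade diffusion process on a toy follower network; the network, the seed account and the resharing probability p are illustrative assumptions, not a model taken from any of the six reviewed articles:

```python
import random

# Toy follower network: account -> followers who may reshare (invented).
followers = {
    "seed": ["a", "b", "c"],
    "a": ["d", "e"],
    "b": ["e", "f"],
    "c": [],
    "d": [], "e": ["g"], "f": [], "g": [],
}

def simulate_cascade(graph, seed, p=0.5, rng=None):
    """Independent cascade: each newly activated account gets one chance
    to activate each of its followers with probability p."""
    rng = rng or random.Random(0)  # seeded for reproducibility
    active = {seed}
    frontier = [seed]
    while frontier:
        nxt = []
        for node in frontier:
            for fol in graph.get(node, []):
                if fol not in active and rng.random() < p:
                    active.add(fol)
                    nxt.append(fol)
        frontier = nxt
    return active

reached = simulate_cascade(followers, "seed", p=0.5)
print(len(reached))
```

Diffusion studies run such cascades thousands of times on platform-scale graphs to estimate the expected reach of FNI under different intervention scenarios.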

4.3 Research datasets


As the accurate determination of fake news is challenging, the limited availability of high-
quality datasets is a major obstacle to studying FNI (Mitra and Gilbert, 2015). Both industry and
academia are actively involved in combating FNI and obtaining FNI datasets.
Most studies directly collect primary data. Although it is difficult to obtain FNI data, 65
articles still obtain secondary data for studying FNI. A common way to obtain secondary
data is to crawl online datasets (Brummette et al., 2018; Jin et al., 2017). Specifically, 50 articles
obtain their datasets from various platforms. The distribution of the internet media platforms
chosen to obtain the data is shown in Figure 7. The top three platforms chosen by scholars are
Twitter (25 articles), Facebook (11) and WhatsApp (8). A few studies collect their datasets
from professional firms, such as Crimson Hexagon (Jang et al., 2018), ComScore (Nelson and
Taneja, 2018) and Seeking Alpha (Clarke et al., 2021).
To collect datasets from the platforms, we can rely on whitelisted access to the Facebook
Graph Application Programming Interface (API) (Santia and Williams, 2018; Vicario et al.,
2019), Twitter Streaming API (Barbon et al., 2017; Gupta et al., 2013; Ratkiewicz et al., 2011),
Twitter REST API (Wang and Zhuang, 2018), Hoaxy API (Shao et al., 2016, 2018), Twitter
NodeXL (Brummette et al., 2018) and Weibo API (Wu et al., 2015). To obtain satisfactory
datasets, useful analysis tools, such as BuzzSumo (Silverman, 2016; Silverman et al., 2017a;
Silverman and Pham, 2018; Waszak et al., 2018), BS Detector (Risdal, 2016) and
FakeNewsTracker (Shu et al., 2019), are employed.
Owing to various restrictions, most of the aforementioned datasets are not publicly
available. The limited publicly available datasets are valuable for future studies. Hence, we
meticulously searched for the FNI dataset from the reviewed articles. To better guide future
research, we evaluated and compared the existing public FNI-related datasets, which are
alphabetically listed in Table 2.

Figure 7. The platforms studied by the reviewed articles (number of articles: Twitter 25,
Facebook 11, WhatsApp 8, Microblog 2, Instagram 1, Pinterest 1, Reddit 1, TikTok 1)
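As a hedged illustration of working with such public datasets, the sketch below parses a tiny in-memory sample laid out in the spirit of the LIAR dataset's tab-separated format (id, label, statement, ...); the rows are invented, and real LIAR files carry additional columns:

```python
import csv
import io
from collections import Counter

# Invented sample rows mimicking a LIAR-style TSV layout (id, label, statement).
sample_tsv = (
    "1\tfalse\tThe moon landing was staged.\n"
    "2\ttrue\tWater boils at 100 C at sea level.\n"
    "3\tfalse\tVaccines contain microchips.\n"
)

def load_claims(fp):
    """Parse (label, statement) pairs from a LIAR-style TSV stream."""
    reader = csv.reader(fp, delimiter="\t")
    return [(row[1], row[2]) for row in reader]

claims = load_claims(io.StringIO(sample_tsv))
label_counts = Counter(label for label, _ in claims)
print(label_counts)
```

Checking the label distribution this way is a typical first step before training or evaluating any FNI detection or diffusion analysis on a public dataset.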

5. Discussion and future research directions


After a detailed analysis of the FNI-related studies in Section 3, we found a few open-ended
research questions that had to be explored. In this section, we provide 15 research questions
to guide future research.

5.1 Input of FNI


A significant number of Internet platforms exist through which creators propagate FNI
(Biancovilli et al., 2021); however, no study has analysed the features of the platforms that
FNI providers often use to spread FNI. Apart from the level of regulation on these
platforms, other Internet media characteristics may matter to FNI providers. Therefore, we
present the following future research question (FQ1): Which types of Internet media
platforms are most easily exploited by FNI providers to create FNI?
Extant studies reveal that FNI is generally created and shared by a small, disloyal group of
heavy Internet users (Nelson and Taneja, 2018), possibly including users from opposing
political parties (Brummette et al., 2018), students with high-level political engagements
(Madrid-Morales et al., 2021), and young people with low knowledge (Hughes and Waismel-
Manor, 2021). The origins of FNI are individuals, businesses, news publishers, and political
parties. However, we have limited knowledge about the features of FNI providers. If we can
recognise the types of promulgators who are more motivated, we can effectively respond to
FNI. Therefore, we present the following future research question (FQ2): Which types of
individuals, businesses, news publishers and political parties have high motivation to post FNI?
With the development of artificial intelligence (AI) technology, AI agents play an
increasingly important role in creating and spreading news. The most common AI agents are
social bots and cyborgs. Social bots are computer algorithms designed to exhibit human-like
behaviours, automatically produce content and interact with humans on the internet (Zhang
and Ghorbani, 2020). Similarly, humans can create malicious cyborg accounts to produce FNI.
In an era of automated journalism, AI agents will increasingly undertake the task of creating
news. However, at the initial stage of automated journalism, we are worried about the quality
of news and do not know whether the automated news is credible. Wölker and Powell (2021)
Table 2. Research data sources used in existing studies

For each public FNI dataset, the original table also records the data dimensions covered:
news content (text, image) and social and spatial-temporal context (posters, social
engagement, spatial, temporal).

Public FNI datasets:
- Benjamin Political News Dataset, Random Political News Data (https://github.com/rpitrust/fakenewsdata1); originally analysed by Horne and Adali (2017); 75 fake news, 75 real news, 75 satires.
- Boididou (VC-VMU task) Datasets, MediaEval 2015 (https://github.com/MKLab-ITI/image-verification-corpus/tree/master/mediaeval2016); Boididou et al. (2015); 6,225 real and 9,404 fake posts by 5,895 and 9,025 users.
- Boididou (VC-VMU task) Datasets, MediaEval 2016 (https://github.com/MKLab-ITI/image-verification-corpus/); Boididou et al. (2018a); an additional 998 real and 1,230 fake tweets, 64 real cases and 66 cases of misused multimedia.
- BuzzFeed News, BuzzFace (https://github.com/gsantia/BuzzFace); Santia and Williams (2018); 2,263 news and 1.6 million comments.
- BuzzFeed News, Buzzfeed Political News Data (https://docs.google.com/spreadsheets/d/1ysnzawW6pDGBEqbXqeYuzWa7Rx2mQUip6CXUUUk4jIk/edit#gid=399992108); Silverman (2016); top 20 real and fake election stories in three time periods.
- BuzzFeed News, 2016-10-facebook-fact-check (https://github.com/BuzzFeedNews/2016-10-facebook-fact-check); Silverman et al. (2016); 2,282 news (posts).
- BuzzFeed News, 2017-12-fake-news-top-50 (https://github.com/BuzzFeedNews/2017-12-fake-news-top-50); Silverman et al. (2017a); top 50 fake news articles of 2016 and 2017.
- BuzzFeed News, 2018-12-fake-news-top-50 (https://github.com/BuzzFeedNews/2018-12-fake-news-top-50); Silverman and Pham (2018); top fake news articles of 2018.
- Credbank-Data (http://compsocial.github.io/CREDBANK-data/); Mitra and Gilbert (2015); more than 169 million tweets.
- Fake News Challenge Dataset (https://github.com/FakeNewsChallenge/fnc-1); Riedel et al. (2017); 49,972 news tuples.
- Fakenews.mit.edu 2018 (https://github.com/sophiabiancalatessa/FakeNewsDeepLearning); O'Brien et al. (2018); 16,400 news (7,401 fake and 8,999 real) in 2016.
- FakeNewsNet Dataset, newest version (https://github.com/KaiDMML/FakeNewsNet); Shu et al. (2018); 5,755 fake news and 17,441 real news.
- FakeNewsNet Dataset, old version (https://github.com/KaiDMML/FakeNewsNet/tree/old-version); Shu et al. (2017); 211 fake news and 211 real news.
- Fake Stock News (https://ftalphaville-cdn.ft.com/wp-content/uploads/2017/04/10231526/Stock-promoters.pdf); Clarke et al. (2021); 494 fake stock news by the Securities and Exchange Commission.
- Harvard Dataverse, Facebook Dataverse (https://dataverse.harvard.edu/dataset.xhtml?persistentId=doi:10.7910/DVN/AAI7VA); Bakshy et al. (2015); 3.8 billion potential and 903 million exposures, 59 million clicks, 10.1 million active users and 7 million shared URLs.
- Hoaxy Dataset (https://zenodo.org/record/1154840#.XZSGi1X7Q2w); Shao et al. (2018); 29,351,187 tweets and 653,911 documents.
- Kaggle.com, BS Detector Dataset (https://www.kaggle.com/mrisdal/fake-news/download); Risdal (2016); 12,999 news (posts).
- Kwon Dataset (https://dataverse.harvard.edu/dataset.xhtml?persistentId=doi%3A10.7910%2FDVN%2FBFGAVZ); Kwon et al. (2017); 111 events (60 rumours and 51 non-rumours).
- LIAR (https://www.cs.ucsb.edu/~william/data/liar_dataset.zip); Wang (2017); 12,836 short statements.
- Rumdect Dataset (http://alt.qcri.org/~wgao/data/rumdect.zip); Ma et al. (2016); more than 5,000 claims that scale to 5 million microblog posts.
- RumourEval 2019 Data (https://figshare.com/articles/RumourEval_2019_data/8845580); Gorrell et al. (2018); 297 source tweets and 7,100 discussing tweets.
- Vlachos Dataset, Emergent Dataset (https://github.com/willferreira/mscproject); Ferreira and Vlachos (2016); 300 claims and 2,595 related news articles.
- Vlachos Dataset, Fact Checking Corpus (https://sites.google.com/site/andreasvlachos/resources); Vlachos and Riedel (2014); 221 statements.
- Vosoughi Dataset (https://docs.google.com/forms/d/e/1FAIpQLSdVL9q8w3MG6myI4l8FI5X45SmnRzGoOEdBROeBoNni5IbfKw/viewform); Vosoughi et al. (2018); 126,000 stories tweeted by 3 million users more than 4.5 million times from 2006 to 2017.

Public datasets related with FNI:
- BuzzFeed News, 2016-12-fake-news-survey (https://github.com/BuzzFeedNews/2016-12-fake-news-survey); Silverman and Singer-Vine (2016); 3,015 US surveyed adults.
- BuzzFeed News, 2017-04-fake-news-ad-trackers (https://github.com/BuzzFeedNews/2017-04-fake-news-ad-trackers); Silverman et al. (2017b); ad tracker information on 107 websites.
- Claim Sources (https://docs.google.com/spreadsheets/d/1S5eDzOUEByRcHSwSNmSqjQMpaKcKXmUzYT6YlRy3UOg/edit#gid=1882442466); Shao et al. (2016); fake news websites, tracking fake news media from 9 sources.
- 2016 post-election online survey (https://www.aeaweb.org/articles?id=10.1257/jep.31.2.211); Allcott and Gentzkow (2017); 1,200 online surveys and 156 fake election-related news.
explored whether there is a detectable difference in news readers' credibility perceptions of
automated and human journalism, and whether news readers perceive combined automated
and human journalism as credible. With the integration of AI agents into news production, we
do not know whether the FNI situation will improve or worsen; thus, we present the
following future research question (FQ3): Will AI agents improve or worsen the current
FNI situation?
The emergence of AI agents challenges our notions of media, our understanding of
communication partners and even our comprehension of the boundaries of communication
(Peter and Kühne, 2018). People perceive robots as communicative partners distinct from
humans, albeit social ones (Guzman and Lewis, 2019). If such AI agents are employed by FNI
providers, we wonder whether these automated FNI would have some unique features; thus,
we present the following future research question (FQ4): What are the differences in features
between the FNI posted by human individuals and those posted by AI agents?
The format of news has changed from computer-based to mobile-based platforms.
However, few studies have analysed and compared the differences in the features of FNI on
mobile- and computer-based clients. Therefore, we present the following future research
question (FQ5): What are the unique features of FNI on mobile-based clients?
Additionally, the differences in FNI features across platforms and cultural backgrounds
must be further explored because culture significantly affects FNI features. Therefore, we
present the following future research question (FQ6): What are the differences in FNI
features across diverse Internet media platforms and divergent cultural backgrounds?
Promulgators may post FNI about themselves to promote their own positions or about their
opponents to damage the opponents' reputation. Few studies have explored the feature differences
between these two types of FNI. Therefore, we present the following future research question
(FQ7): What are the differences in the features between FNI posted by promulgators about
themselves and those posted about their opponents?

5.2 Process of FNI


A broad understanding of FNI is obtained by exploring its features. Almost all existing
research has revealed the rapid transmission of FNI and explored certain influencing factors.
However, once individuals recognise FNI, their ability to curb it is uncertain. If audiences
intervene in FNI spreading, we do not know whether it would still spread more widely than real
news. Therefore, we present the following future research questions (FQ8 and FQ9): Does FNI
always spread more widely than real news? How do the factors influencing the spread of FNI
change with an increase in individuals’ awareness of FNI?
Moreover, we need to further explore the role of key opinion leaders (KOLs) in FNI
spreading. News diffusion can be harnessed reliably by targeting central nodes (Cheng et al.,
2021). Studies have concluded that fake news gains visibility through an influential Twitter
KOL (Ahmed et al., 2020; Gupta et al., 2013; Li and Su, 2020; Shin et al., 2018), but few have
explored the specific influence mechanisms of KOLs in FNI spreading. Therefore, we present
the following future research question (FQ10): What are the specific influence mechanisms of
KOLs in FNI spreading?
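As one simple way to operationalise "central nodes", the sketch below ranks accounts by in-degree in an invented retweet edge list; real KOL studies typically apply richer centrality measures (betweenness, PageRank) to platform-scale graphs:

```python
from collections import Counter

# Invented retweet edges (retweeter, original_poster); a real study would
# extract these from platform data such as the Twitter datasets above.
retweets = [
    ("u1", "kol"), ("u2", "kol"), ("u3", "kol"),
    ("u4", "kol"), ("u2", "u5"), ("u3", "u5"), ("u1", "u6"),
]

# In-degree in the retweet graph: how often each account is reshared.
# High in-degree is one crude indicator of a potential opinion leader.
in_degree = Counter(original for _, original in retweets)
top_account, top_count = in_degree.most_common(1)[0]
print(top_account, top_count)
```

Identifying such high-centrality accounts is the usual precursor to studying the influence mechanisms that FQ10 asks about, for example by comparing cascades that do and do not pass through them.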
While some AI agents are designed to post FNI, others perform important roles in FNI
spreading (Ratkiewicz et al., 2011; Vosoughi et al., 2018). However, extant studies have not
analysed the role of AI agents in the spread of FNI. Therefore, we present the following future
research question (FQ11): What are the mechanisms by which AI agents influence FNI
spreading?
5.3 Output of FNI
A broad understanding of FNI can be obtained by analysing its effects. However, certain
problems remain unclear and require further investigation. Almost all existing studies have
emphasised the powerful effects of FNI, but one study establishes that individuals who
consume FNI comprise a small, disloyal group of heavy Internet users (Nelson and Taneja,
2018). Thus, we first consider whether the current powerful effect of FNI is a huge bubble
because the audience's awareness of FNI remains insufficient to take precautionary
measures. Therefore, we present the following future research questions (FQ12 and FQ13):
How do the effects of FNI change with an increase in people's awareness of FNI? Is the current
powerful effect of FNI a huge bubble?
Second, we must have a broad understanding of the effects of FNI by synthetically
quantifying and comparing its positive and negative effects. Most existing studies conclude
that FNI has significantly negative effects on various aspects of the lives of people and a few
believe that FNI has certain positive effects. Specifically, FNI may have potential value in
interpreting the underlying social bonds that are at stake (Inwood and Zappavigna, 2021),
and it is considered a critical issue confronting journalism (Tandoc et al., 2019). Until now,
we have not found studies that have synthetically identified both the positive and negative
effects of FNI. Therefore, we present the following future research question (FQ14): How can
we quantify and compare the positive and negative effects of FNI?
Third, it is necessary to update the existing theories or develop new theories to explain the
effects of FNI. Only two studies have used the intermedia agenda-setting (Vargo et al., 2018)
and third-person perception (Jang and Kim, 2018) theories to explain the effects of FNI. Most
classic theories have not been applied to FNI environments. Hence, we present the following
future research question (FQ15): How can we build and update theories to explain the effects
of FNI?

6. Conclusions
A critical part of any new research venture is the timely establishment of a reference
collection of the relevant literature and forward-looking analysis of the existing literature.
The FNI research area is no exception. Although the importance and significant effects of FNI
have been recognised, a systematic review of the antecedents and consequences of FNI in all
fields is lacking.
This study searched for relevant articles published as of October 11, 2021, in the SCIE
and SSCI databases, and identified 202 relevant articles. First, we provided a working
definition of the studied term, FNI, and then conducted descriptive statistical analyses of the
selected articles. The IPO framework was employed to propose FNI-related open research
questions and guide the literature review. To address these research issues, we coded the
identified literature using a mix of open, axial and selective coding. After the literature review,
we proposed a conceptual framework to summarise the theoretical relationships among
extant studies and highlight promising research directions.
The main contribution of this study is to guide academicians and practitioners in this area
regarding the current research situations, implications and limitations of existing studies,
and prospective research directions. Although our literature review cannot claim to be
exhaustive, we believe that it can be a useful resource for anyone interested in FNI and
stimulate further interest in this field. In addition to contributing to the overall understanding
of the existing research and its limitations, this study highlights promising research
directions. Furthermore, it proposes a conceptual framework to guide FNI research, including
summarising the basic theoretical foundations, research methodologies and research
datasets.
The main limitation of this research was that it focussed on English-language academic
journals and conferences, and neglected articles in other languages. Furthermore, the
journals covered were limited to the particular databases included in this study.

References
Ahmad, F. and Lokeshkumar, R. (2019), "A comparison of machine learning algorithms in fake news
detection", International Journal on Emerging Technologies, Vol. 10 No. 4, pp. 177-183.
Ahmed, S. (2021a), “Fooled by the fakes: cognitive differences in perceived claim accuracy and
sharing intention of non-political deepfakes”, Personality and Individual Differences, Vol. 182,
p. 111074.
Ahmed, S. (2021b), “Navigating the maze: deepfakes, cognitive ability, and social media news
skepticism”, New Media and Society, ahead-of-print, pp. 1-22, doi: 10.1177/14614448211019198.
Ahmed, S. (2021c), “Who inadvertently shares deepfakes? Analyzing the role of political
interest, cognitive ability, and social network size”, Telematics and Informatics, Vol. 57,
p. 101508.
Ahmed, W., Seguí, F.L., Vidal-Alaball, J. and Katz, M.S. (2020), "COVID-19 and the 'film your hospital'
conspiracy theory: social network analysis of Twitter data", Journal of Medical Internet
Research, Vol. 22 No. 10, e22374.
Akpan, I.J., Aguolu, O.G., Kobara, Y.M., Razavi, R., Akpan, A.A. and Shanker, M. (2021), “Association
between what people learned about COVID-19 using web searches and their behavior toward
public health guidelines: empirical infodemiology study”, Journal of Medical Internet Research,
Vol. 23 No. 9, e28975.
Al Khaja, K.A., AlKhaja, A.K. and Sequeira, R.P. (2018), “Drug information, misinformation, and
disinformation on social media: a content analysis study”, Journal of Public Health Policy,
Vol. 39 No. 3, pp. 343-357.
Al-Rawi, A. (2019a), “Gatekeeping fake news discourses on mainstream media versus social media”,
Social Science Computer Review, Vol. 37 No. 6, pp. 687-704.
Al-Rawi, A. (2019b), “What the fake? Assessing the extent of networked political spamming and bots
in the propagation of #fakenews on Twitter”, Online Information Review, Vol. 43 No. 1,
pp. 53-71.
Al-Rawi, A. (2021), “Political memes and fake news discourses on Instagram”, Media and
Communication, Vol. 9 No. 1, pp. 276-290.
Ali, K. and Zain-ul-abdin, K. (2021), “Post-truth propaganda: heuristic processing of political fake news
on Facebook during the 2016 U.S. presidential election”, Journal of Applied Communication
Research, Vol. 49 No. 1, pp. 109-128.
Ali, K., Li, C., Zain-ul-abdin, K. and Zaffar, M.A. (2022), “Fake news on Facebook: examining the
impact of heuristic cues on perceived credibility and sharing intention”, Internet Research,
Vol. 32 No. 1, pp. 379-397.
Allcott, H. and Gentzkow, M. (2017), “Social media and fake news in the 2016 election”, Journal of
Economic Perspectives, Vol. 31 No. 2, pp. 211-236.
Allcott, H., Gentzkow, M. and Yu, C. (2019), “Trends in the diffusion of misinformation on social
media”, Research and Politics, Vol. 6 No. 2, 2053168019848554.
Allington, D., Duffy, B., Wessely, S., Dhavan, N. and Rubin, J. (2021), “Health-protective behaviour,
social media usage and conspiracy belief during the COVID-19 public health emergency”,
Psychological Medicine, Vol. 51 No. 10, pp. 1763-1769.
Alshareef, M. and Alotiby, A. (2021), “Prevalence and perception among Saudi Arabian
population about resharing of information on social media regarding natural remedies as
protective measures against COVID-19”, International Journal of General Medicine, Vol. 14,
pp. 5127-5137.
Alsyouf, M., Stokes, P., Hur, D., Amasyali, A., Ruckle, H. and Hu, B. (2019), "'Fake news' in urology:
evaluating the accuracy of articles shared on social media in genitourinary malignancies", BJU
International, Vol. 124 No. 4, pp. 701-706.
Altay, S., de Araujo, E. and Mercier, H. (2021), "'If this account is true, it is most enormously
wonderful': interestingness-if-true and the sharing of true and false news", Digital Journalism,
ahead-of-print, pp. 1-22, doi: 10.1080/21670811.2021.1941163.
Andı, S. and Akesson, J. (2021), "Nudging away false news: evidence from a social norms experiment",
Digital Journalism, Vol. 9 No. 1, pp. 106-125.
Apuke, O.D. and Omar, B. (2020), “Modelling the antecedent factors that affect online fake news
sharing on COVID-19: the moderating role of fake news knowledge”, Health Education
Research, Vol. 35 No. 5, pp. 490-503.
Apuke, O.D. and Omar, B. (2021a), “Fake news and COVID-19: modelling the predictors of fake news
sharing among social media users”, Telematics and Informatics, Vol. 56, p. 101475.
Apuke, O.D. and Omar, B. (2021b), “Social media affordances and information abundance: enabling
fake news sharing during the COVID-19 health crisis”, Health Informatics Journal, Vol. 27
No. 3, pp. 1-23.
Apuke, O.D. and Omar, B. (2021c), “User motivation in fake news sharing during the COVID-19
pandemic: an application of the uses and gratification theory”, Online Information Review,
Vol. 45 No. 1, pp. 220-239.
Arayankalam, J. and Krishnan, S. (2021), “Relating foreign disinformation through social media,
domestic online media fractionalization, government’s control over cyberspace, and social
media-induced offline violence: insights from the agenda-building theoretical perspective”,
Technological Forecasting and Social Change, Vol. 166, p. 120661.
Ardèvol-Abreu, A., Delponti, P. and Rodríguez-Wangüemert, C. (2020), "Intentional or inadvertent fake
news sharing? Fact-checking warnings and users' interaction with social media content",
Profesional de la Información, Vol. 29 No. 5, e290507.
Atehortua, N.A. and Patino, S. (2021), “COVID-19, a tale of two pandemics: novel coronavirus and fake
news messaging”, Health Promotion International, Vol. 36 No. 2, pp. 524-534.
Au, C.H., Ho, K.K.W. and Chiu, D.K.W. (2021a), “The role of online misinformation and fake news in
ideological polarization: barriers, catalysts, and implications”, Information Systems Frontiers,
ahead-of-print, pp. 1-24, doi: 10.1007/s10796-021-10133-9.
Au, C.H., Ho, K.K.W. and Chiu, D.K.W. (2021b), “Stopping healthcare misinformation: the effect of
financial incentives and legislation”, Health Policy, Vol. 125 No. 5, pp. 627-633.
Babcock, M., Cox, R.A.V. and Kumar, S. (2019), “Diffusion of pro- and anti-false information tweets:
the Black Panther movie case”, Computational and Mathematical Organization Theory, Vol. 25
No. 1, pp. 72-84.
Bakir, V. and McStay, A. (2018), “Fake news and the economy of emotions: problems, causes,
solutions”, Digital Journalism, Vol. 6 No. 2, pp. 154-175.
Bakshy, E., Messing, S. and Adamic, L.A. (2015), “Exposure to ideologically diverse news and opinion
on Facebook”, Science, Vol. 348 No. 6239, pp. 1130-1132.
Balakrishnan, V., Ng, K.S. and Rahim, H.A. (2021), “To share or not to share-the underlying motives of
sharing fake news amidst the COVID-19 pandemic in Malaysia”, Technology in Society, Vol. 66,
p. 101676.
Balmas, M. (2014), “When fake news becomes real: combined exposure to multiple news sources and
political attitudes of inefficacy, alienation, and cynicism”, Communication Research, Vol. 41
No. 3, pp. 430-454.
Bandeli, K.K. and Agarwal, N. (2021), “Analyzing the role of media orchestration in conducting
disinformation campaigns on blogs”, Computational and Mathematical Organization Theory,
Vol. 27 No. 2, pp. 134-160.
Bapaye, J.A. and Bapaye, H.A. (2021), "Demographic factors influencing the impact of coronavirus-
related misinformation on WhatsApp: cross-sectional questionnaire study", JMIR Public Health
and Surveillance, Vol. 7 No. 1, pp. 280-294.
Barbon, S., Igawa, R.A. and Zarpelão, B.B. (2017), "Authorship verification applied to detection of
compromised accounts on online social networks", Multimedia Tools and Applications, Vol. 76
No. 3, pp. 3213-3233.
Basch, C.H., Meleo-Erwin, Z., Fera, J., Jaime, C. and Basch, C.E. (2021), "A global pandemic in the time
of viral memes: COVID-19 vaccine misinformation and disinformation on TikTok", Human
Vaccines and Immunotherapeutics, Vol. 17 No. 8, pp. 2373-2377.
Bastick, Z. (2021), “Would you notice if fake news changed your behavior? An experiment on
the unconscious effects of disinformation”, Computers in Human Behavior, Vol. 116, p. 106633.
Bastos, M. (2021), “This account doesn’t exist: tweet decay and the politics of deletion in the Brexit
debate”, American Behavioral Scientist, Vol. 65 No. 5, pp. 757-773.
Beletsky, L., Seymour, S., Kang, S., Siegel, Z., Sinha, M.S., Marino, R., Dave, A. and Freifeld, C. (2020),
“Fentanyl panic goes viral: the spread of misinformation about overdose risk from casual
contact with fentanyl in mainstream and social media”, International Journal of Drug Policy,
Vol. 86, p. 102951.
Benham, J. (2020), “Best practices for journalistic balance: gatekeeping, imbalance and the fake news
era”, Journalism Practice, Vol. 14 No. 7, pp. 791-811.
Bennett, W.L. and Livingston, S. (2018), “The disinformation order: disruptive communication and the
decline of democratic institutions”, European Journal of Communication, Vol. 33 No. 2, pp. 122-139.
Berkowitz, D. and Schwartz, D.A. (2016), “Miley, CNN and the Onion: when fake news becomes realer
than real”, Journalism Practice, Vol. 10 No. 1, pp. 1-17.
Bermes, A. (2021), “Information overload and fake news sharing: a transactional stress perspective
exploring the mitigating role of consumers’ resilience during COVID-19”, Journal of Retailing
and Consumer Services, Vol. 61, p. 102555.
Berthon, P.R. and Pitt, L.F. (2018), “Brands, truthiness and post-fact: managing brands in a post-
rational world”, Journal of Macromarketing, Vol. 38 No. 2, pp. 218-227.
Biancovilli, P., Makszin, L. and Jurberg, C. (2021), “Misinformation on social networks during the novel
coronavirus pandemic: a quali-quantitative case study of Brazil”, BMC Public Health, Vol. 21
No. 1, p. 1200.
Boididou, C., Andreadou, K., Papadopoulos, S., Dang-Nguyen, D.-T., Boato, G., Riegler, M. and
Kompatsiaris, Y. (2015), “Verifying multimedia use at mediaeval 2015”, Paper Presented at
MediaEval 2015 Workshop, September 14–15, 2015, Wurzen, Germany, available at:
https://iris.unitn.it/retrieve/handle/11572/121886/91765/Verif2015.pdf (accessed 21
February 2020).
Boididou, C., Middleton, S.E., Jin, Z., Papadopoulos, S., Dang-Nguyen, D.-T., Boato, G. and
Kompatsiaris, Y. (2018a), “Verifying information with multimedia content on Twitter”,
Multimedia Tools and Applications, Vol. 77 No. 12, pp. 15545-15571.
Boididou, C., Papadopoulos, S., Zampoglou, M., Apostolidis, L., Papadopoulou, O. and Kompatsiaris,
Y. (2018b), “Detection and visualization of misleading content on Twitter”, International Journal
of Multimedia Information Retrieval, Vol. 7 No. 1, pp. 71-86.
Bondielli, A. and Marcelloni, F. (2019), “A survey on fake news and rumour detection techniques”,
Information Sciences, Vol. 497, pp. 38-55.
Boukhari, M.A. and Gayakwad, M.D. (2019), “An experimental technique on fake news detection in
online social media”, International Journal of Innovative Technology and Exploring Engineering,
Vol. 8 No. 8, pp. 526-530.
Bradshaw, S., Howard, P.N., Kollanyi, B. and Neudert, L.-M. (2020), “Sourcing and automation of
political news and information over social media in the United States, 2016-2018”, Political
Communication, Vol. 37 No. 2, pp. 173-193.
Brady, J.T., Kelly, M.E. and Stein, S.L. (2017), “The Trump effect: with no peer review, how do we know
what to really believe on social media?”, Clinics in Colon and Rectal Surgery, Vol. 30 No. 4,
pp. 270-276.
Bringula, R.P., Catacutan-Bangit, A.E., Garcia, M.B., Gonzales, J.P.S. and Valderama, A.M.C. (2021),
““Who is gullible to political disinformation?”: predicting susceptibility of university students to
fake news”, Journal of Information Technology and Politics, ahead-of-print, pp. 1-15, doi: 10.
1080/19331681.2021.1945988.
Brummette, J., DiStaso, M., Vafeiadis, M. and Messner, M. (2018), “Read all about it: the politicization
of “fake news” on Twitter”, Journalism and Mass Communication Quarterly, Vol. 95 No. 2,
pp. 497-517.
Bryanov, K. and Vziatysheva, V. (2021), “Determinants of individuals’ belief in fake news: a scoping
review determinants of belief in fake news”, PLoS ONE, Vol. 16 No. 6, e0253717.
Buchanan, T. (2020), “Why do people spread false information online? The effects of message and
viewer characteristics on self-reported likelihood of sharing social media disinformation”, PLoS
ONE, Vol. 15 No. 10, e0239666.
Buchanan, T. (2021), “Trust, personality, and belief as determinants of the organic reach of political
disinformation on social media”, The Social Science Journal, ahead-of-print, pp. 1-12, doi: 10.
1080/03623319.2021.1975085.
Buchanan, T. and Benson, V. (2019), “Spreading disinformation on Facebook: do trust in message
source, risk propensity, or personality affect the organic reach of ‘fake news’?”, Social Media +
Society, Vol. 5 No. 4, 2056305119888654.
Buchanan, T. and Kempley, J. (2021), “Individual differences in sharing false political information on
social media: direct and indirect effects of cognitive-perceptual schizotypy and psychopathy”,
Personality and Individual Differences, Vol. 182, p. 111071.
Calo, W.A., Gilkey, M.B., Shah, P.D., Dyer, A.M., Margolis, M.A., Dailey, S.A. and Brewer, N.T. (2021),
“Misinformation and other elements in HPV vaccine tweets: an experimental comparison”,
Journal of Behavioral Medicine, Vol. 44 No. 3, pp. 310-319.
Carlson, M. (2020), “Fake news as an informational moral panic: the symbolic deviancy of social media
during the 2016 US presidential election”, Information, Communication and Society, Vol. 23
No. 3, pp. 374-388.
Carrieri, V., Madio, L. and Principe, F. (2019), “Vaccine hesitancy and (fake) news: quasi-experimental
evidence from Italy”, Health Economics, Vol. 28 No. 11, pp. 1377-1382.
Chadwick, A., Vaccari, C. and O’Loughlin, B. (2018), “Do tabloids poison the well of social media?
Explaining democratically dysfunctional news sharing”, New Media and Society, Vol. 20 No. 11,
pp. 4255-4274.
Chen, Y., Conroy, N. and Rubin, V. (2015), “News in an online world: the need for an “automatic” crap
detector”, in Given, L.M. (Ed.), Proceedings of the 78th ASIS&T Annual Meeting: Information
Science with Impact: Research in and for the Community, Silver Springs, MD, United States,
American Society for Information Science, pp. 1-4.
Chen, K.L., Luo, Y.N., Hu, A.Y., Zhao, J. and Zhang, L.W. (2021a), “Characteristics of misinformation
spreading on social media during the COVID-19 outbreak in China: a descriptive analysis”, Risk
Management and Healthcare Policy, Vol. 14, pp. 1869-1879.
Chen, L., Zhang, Y., Young, R., Wu, X. and Zhu, G. (2021b), “Effects of vaccine-related conspiracy
theories on Chinese young adults’ perceptions of the HPV vaccine: an experimental study”,
Health Communication, Vol. 36 No. 11, pp. 1343-1353.
Chen, S.J., Xiao, L. and Mao, J. (2021c), “Persuasion strategies of misinformation-containing posts in
the social media”, Information Processing and Management, Vol. 58 No. 5, p. 102665.
Cheng, Y. and Luo, Y. (2021), “The presumed influence of digital misinformation: examining US
public’s support for governmental restrictions versus corrective action in the COVID-19
pandemic”, Online Information Review, Vol. 45 No. 4, pp. 834-852.
Cheng, M.X., Yin, C.Z., Nazarian, S. and Bogdan, P. (2021), “Deciphering the laws of social network-
transcendent COVID-19 misinformation dynamics and implications for combating
misinformation phenomena”, Scientific Reports, Vol. 11 No. 1, p. 10424.
Choi, J. and Lee, J.K. (2021), “Confusing effects of fake news on clarity of political information in the
social media environment”, Journalism Practice, ahead-of-print, pp. 1-19, doi: 10.1080/17512786.
2021.1903971.
Chua, A.Y.K. and Banerjee, S. (2018), “Intentions to trust and share online health rumors: an
experiment with medical professionals”, Computers in Human Behavior, Vol. 87, pp. 1-9.
Chung, M. and Kim, N. (2021), “When I learn the news is false: how fact-checking information stems
the spread of fake news via third-person perception”, Human Communication Research, Vol. 47
No. 1, pp. 1-24.
Clarke, J., Chen, H.L., Du, D. and Hu, Y.J. (2021), “Fake news, investor attention, and market reaction”,
Information Systems Research, Vol. 32 No. 1, pp. 35-52.
Colliander, J. (2019), ““This is fake news”: investigating the role of conformity to other users’ views
when commenting on and spreading disinformation in social media”, Computers in Human
Behavior, Vol. 97, pp. 202-215.
Conroy, N.J., Rubin, V.L. and Chen, Y. (2015), “Automatic deception detection: methods for finding
fake news”, in Given, L.M. (Ed.), Proceedings of the Association for Information Science and
Technology, Silver Springs, MD, United States, American Society for Information
Science, pp. 1-4.
Creech, B. and Roessner, A. (2019), “Declaring the value of truth: progressive-era lessons for
combatting fake news”, Journalism Practice, Vol. 13 No. 3, pp. 263-279.
Dasilva, J.P., Ayerdi, K.M. and Galdospin, T.M. (2021), “Deepfakes on Twitter: which actors control
their spread?”, Media and Communication, Vol. 9 No. 1, pp. 301-312.
Domenico, G.D., Nunan, D., Sit, J. and Pitardi, V. (2021a), “Free but fake speech: when giving primacy
to the source decreases misinformation sharing on social media”, Psychology and Marketing,
Vol. 38 No. 10, pp. 1700-1711.
Domenico, G.D., Sit, J., Ishizaka, A. and Nunan, D. (2021b), “Fake news, social media and marketing: a
systematic review”, Journal of Business Research, Vol. 124, pp. 329-341.
Duffy, A., Tandoc, E. and Ling, R. (2020), “Too good to be true, too good not to share: the social utility
of fake news”, Information, Communication and Society, Vol. 23 No. 13, pp. 1965-1979.
Featherstone, J.D. and Zhang, J. (2020), “Feeling angry: the effects of vaccine misinformation and
refutational messages on negative emotions and vaccination attitude”, Journal of Health
Communication, Vol. 25 No. 9, pp. 692-702.
Fernandez-Torres, M.J., Almansa-Martinez, A. and Chamizo-Sanchez, R. (2021), “Infodemic and fake
news in Spain during the COVID-19 pandemic”, International Journal of Environmental
Research and Public Health, Vol. 18 No. 4, p. 1781.
Ferrari, E. (2020), “Sincerely fake: exploring user-generated political fakes and networked publics”,
Social Media + Society, Vol. 6 No. 4, 2056305120963824.
Ferreira, W. and Vlachos, A. (2016), “Emergent: a novel data-set for stance classification”, in Knight,
K., Nenkova, A. and Rambow, O. (Eds), Proceedings of the 2016 Conference of the North
American Chapter of the Association for Computational Linguistics: Human Language
Technologies, San Diego, California, United States, Association for Computational Linguistics,
pp. 1163-1168.
Fichman, P. and Vaughn, M. (2021), “The relationships between misinformation and outrage trolling
tactics on two Yahoo! Answers categories”, Journal of the Association for Information Science
and Technology, ahead-of-print, pp. 1-13, doi: 10.1002/asi.24497.
Filkukova, P., Ayton, P., Rand, K. and Langguth, J. (2021), “What should I trust? Individual differences
in attitudes to conflicting information and misinformation on COVID-19”, Frontiers in
Psychology, Vol. 12, p. 588478.
Flostrand, A. (2020), “Fake news and brand management: a Delphi study of impact, vulnerability and
mitigation”, Journal of Product and Brand Management, Vol. 29 No. 2, pp. 246-254.
Forati, A.M. and Ghose, R. (2021), “Geospatial analysis of misinformation in COVID-19 related tweets”,
Applied Geography, Vol. 133, p. 102473.
Freiling, I., Krause, N.M., Scheufele, D.A. and Brossard, D. (2021), “Believing and sharing
misinformation, fact-checks, and accurate information on social media: the role of anxiety
during COVID-19”, New Media & Society, ahead-of-print, pp. 1-22, doi: 10.1177/
14614448211011451.
Gaumont, N., Panahi, M. and Chavalarias, D. (2018), “Reconstruction of the socio-semantic dynamics
of political activist Twitter networks-method and application to the 2017 French presidential
election”, PLoS ONE, Vol. 13 No. 9, e0201879.
Giglietto, F., Iannelli, L., Valeriani, A. and Rossi, L. (2019), “‘Fake news’ is the invention of a liar: how
false information circulates within the hybrid news system”, Current Sociology, Vol. 67 No. 4,
pp. 625-642.
Gioia, D.A., Corley, K.G. and Hamilton, A.L. (2012), “Seeking qualitative rigor in inductive
research: notes on the Gioia methodology”, Organizational Research Methods, Vol. 16 No. 1,
pp. 15-31.
Gorrell, G., Bontcheva, K., Derczynski, L., Kochkina, E., Liakata, M. and Zubiaga, A. (2018),
RumourEval 2019: Determining Rumour Veracity and Support for Rumours, Cornell
University, Ithaca, NY, unpublished manuscript, arXiv preprint, arXiv:1809.06683.
Grinberg, N., Joseph, K., Friedland, L., Swire-Thompson, B. and Lazer, D. (2019), “Fake news
on Twitter during the 2016 U.S. presidential election”, Science, Vol. 363 No. 6425,
pp. 374-378.
Gunther, R., Beck, P.A. and Nisbet, E.C. (2019), “‘Fake news’ and the defection of 2012 Obama voters in
the 2016 presidential election”, Electoral Studies, Vol. 61, p. 102030.
Guo, L. and Vargo, C. (2020), “‘Fake news’ and emerging online media ecosystem: an integrated
intermedia agenda-setting analysis of the 2016 U.S. presidential election”, Communication
Research, Vol. 47 No. 2, pp. 178-200.
Gupta, A., Lamba, H., Kumaraguru, P. and Joshi, A. (2013), “Faking sandy: characterizing and
identifying fake images on Twitter during hurricane sandy”, in Schwabe, D., Almeida, V. and
Glaser, H. (Eds), Proceedings of the 22nd International Conference on World Wide Web, New
York, United States, Association for Computing Machinery, pp. 729-736.
Guzman, A.L. and Lewis, S.C. (2019), “Artificial intelligence and communication: a Human–Machine
Communication research agenda”, New Media and Society, Vol. 22 No. 1, pp. 70-86.
Hameleers, M. (2020), “Populist disinformation: exploring intersections between online populism and
disinformation in the US and the Netherlands”, Politics and Governance, Vol. 8 No. 1,
pp. 146-157.
Hansen, P.R. and Schmidtblaicher, M. (2021), “A dynamic model of vaccine compliance: how fake news
undermined the Danish HPV vaccine program”, Journal of Business and Economic Statistics,
Vol. 39 No. 1, pp. 259-271.
Himelein-Wachowiak, M., Giorgi, S., Devoto, A., Rahman, M., Ungar, L., Schwartz, H.A., Epstein, D.H.,
Leggio, L. and Curtis, B. (2021), “Bots and misinformation spread on social media: implications
for COVID-19”, Journal of Medical Internet Research, Vol. 23 No. 5, e26933.
Hopp, T. (2021), “Fake news self-efficacy, fake news identification, and content sharing on Facebook”,
Journal of Information Technology and Politics, ahead-of-print, pp. 1-24, doi: 10.1080/19331681.
2021.1962778.
Horne, B.D. and Adali, S. (2017), “This just in: fake news packs a lot in title, uses simpler, repetitive
content in text body, more similar to satire than real news”, in Ruths, D., Mason, W., Marwick,
A. and Gonzalez-Bailon, S. (Eds), Proceedings of the Eleventh International AAAI Conference on
Web and Social Media, California, United States, AAAI Press, pp. 759-766.
Hughes, H.C. and Waismel-Manor, I. (2021), “The Macedonian fake news industry and the 2016 US
election”, PS: Political Science and Politics, Vol. 54 No. 1, pp. 19-23.
Humprecht, E. (2019), “Where ‘fake news’ flourishes: a comparison across four Western democracies”,
Information, Communication and Society, Vol. 22 No. 13, pp. 1973-1988.
Ilgen, D.R., Hollenbeck, J.R., Johnson, M. and Jundt, D. (2005), “Teams in organizations: from input-
process-output models to IMOI models”, Annual Review of Psychology, Vol. 56 No. 1,
pp. 517-543.
Inwood, O. and Zappavigna, M. (2021), “Ambient affiliation, misinformation and moral panic:
negotiating social bonds in a YouTube internet hoax”, Discourse and Communication, Vol. 15
No. 3, pp. 281-307.
Islam, A.K.M.N., Laato, S., Talukder, S. and Sutinen, E. (2020), “Misinformation sharing and social
media fatigue during COVID-19: an affordance and cognitive load perspective”, Technological
Forecasting and Social Change, Vol. 159, p. 120201.
Islam, M.S., Kamal, A.H.M., Kabir, A., Southern, D.L., Khan, S.H., Hasan, S.M.M., Sarkar, T., Sharmin,
S., Das, S., Roy, T., Harun, M.G.D., Chughtai, A.A., Homaira, N. and Seale, H. (2021), “COVID-19
vaccine rumors and conspiracy theories: the need for cognitive inoculation against
misinformation to improve vaccine adherence”, PLoS ONE, Vol. 16 No. 5, e0251605.
Iyengar, S. and Massey, D.S. (2019), “Scientific communication in a post-truth society”, Proceedings of
the National Academy of Sciences, Vol. 116 No. 16, pp. 7656-7661.
Jang, S.M. and Kim, J.K. (2018), “Third person effects of fake news: fake news regulation and media
literacy interventions”, Computers in Human Behavior, Vol. 80, pp. 295-302.
Jang, S.M., Geng, T., Li, J.-Y.Q., Xia, R., Huang, C.-T., Kim, H. and Tang, J. (2018), “A computational
approach for examining the roots and spreading patterns of fake news: evolution tree analysis”,
Computers in Human Behavior, Vol. 84, pp. 103-113.
Jang, Y., Park, C.-H. and Seo, Y.-S. (2019), “Fake news analysis modeling using quote retweet”,
Electronics, Vol. 8 No. 12, p. 1377.
Jin, Z., Cao, J., Zhang, Y., Zhou, J. and Tian, Q. (2017), “Novel visual and statistical image features for
microblogs news verification”, IEEE Transactions on Multimedia, Vol. 19 No. 3, pp. 598-608.
Jones-Jang, S.M., Kim, D.H. and Kenski, K. (2020), “Perceptions of mis- or disinformation exposure
predict political cynicism: evidence from a two-wave survey during the 2018 US midterm
elections”, New Media and Society, Vol. 23 No. 10, pp. 3105-3125.
Jukes, S. (2018), “Back to the future”, Journalism Practice, Vol. 12 No. 8, pp. 1029-1038.
Khan, A.N. (2021), “A diary study of psychological effects of misinformation and COVID-19 threat on
work engagement of working from home employees”, Technological Forecasting and Social
Change, Vol. 171, p. 120968.
Khan, A.N. (2022), “Misinformation and work-related outcomes of healthcare community: sequential
mediation role of COVID-19 threat and psychological distress”, Journal of Community
Psychology, Vol. 50, pp. 944-964.
Khan, A., Brohman, K. and Addas, S. (2021), “The anatomy of ‘fake news’: studying false messages as
digital objects”, Journal of Information Technology, ahead-of-print, pp. 1-22, doi: 10.1177/
02683962211037693.
Kim, A. and Dennis, A.R. (2019), “Says who? The effects of presentation format and source rating on
fake news in social media”, MIS Quarterly, Vol. 43 No. 3, pp. 1025-1039.
Kim, A., Moravec, P.L. and Dennis, A.R. (2019), “Combating fake news on social media with source
ratings: the effects of user and expert reputation ratings”, Journal of Management Information
Systems, Vol. 36 No. 3, pp. 931-968.
Kopp, C., Korb, K.B. and Mills, B.I. (2018), “Information-theoretic models of deception: modelling
cooperation and diffusion in populations exposed to ‘fake news’”, PLoS ONE, Vol. 13 No. 11,
e0207383.
Kucharski, A. (2016), “Study epidemiology of fake news”, Nature, Vol. 540, p. 525. Fake news on
Kumar, S., West, R. and Leskovec, J. (2016), “Disinformation on the web: impact, characteristics, and the internet
detection of Wikipedia hoaxes”, in Bourdeau, J., Hendler, J.A. and Nkambou, R.N. (Eds),
Proceedings of the 25th International Conference on World Wide Web, International World Wide
Web Conferences Steering Committee, Switzerland, Republic and Canton of Geneva, pp. 591-602.
Kwon, S., Cha, M. and Jung, K. (2017), “Rumor detection over varying time windows”, PLoS ONE,
Vol. 12 No. 1, e0168344.
Laato, S., Islam, A.K.M.N., Islam, M.N. and Whelan, E. (2020), “What drives unverified information
sharing and cyberchondria during the COVID-19 pandemic?”, European Journal of Information
Systems, Vol. 29 No. 3, pp. 288-305.
Lazer, D.M., Baum, M.A., Benkler, Y., Berinsky, A.J., Greenhill, K.M., Menczer, F., Metzger, M.J.,
Nyhan, B., Pennycook, G. and Rothschild, D. (2018), “The science of fake news”, Science,
Vol. 359 No. 6380, pp. 1094-1096.
Li, J. and Su, M.-H. (2020), “Real talk about fake news: identity language and disconnected networks of
the US public’s “fake news” discourse on Twitter”, Social Media + Society, Vol. 6 No. 2,
2056305120916841.
Lim, A.J., Tan, E. and Lim, T. (2021), “Infodemic: the effect of death-related thoughts on news-
sharing”, Cognitive Research: Principles and Implications, Vol. 6, p. 39.
Lobato, E.J.C., Powell, M., Padilla, L.M.K. and Holbrook, C. (2020), “Factors predicting willingness to
share COVID-19 misinformation”, Frontiers in Psychology, Vol. 11, p. 566108.
Lozano, M.G., Brynielsson, J., Franke, U., Rosell, M., Tjörnhammar, E., Varga, S. and Vlassov, V.
(2020), “Veracity assessment of online data”, Decision Support Systems, Vol. 129, p. 113132.
Luo, Y.J. and Cheng, Y. (2021), “The presumed influence of COVID-19 misinformation on social media:
survey research from two countries in the global health crisis”, International Journal of
Environmental Research and Public Health, Vol. 18 No. 11, p. 5505.
Lyons, B.A., Montgomery, J.M., Guess, A.M., Nyhan, B. and Reifler, J. (2021), “Overconfidence in news
judgments is associated with false news susceptibility”, Proceedings of the National Academy of
Sciences of the United States of America, Vol. 118 No. 23, e2019527118.
Ma, J., Gao, W., Mitra, P., Kwon, S., Jansen, B.J., Wong, K.-F. and Cha, M. (2016), “Detecting rumors
from microblogs with recurrent neural networks”, in Kambhampati, S. (Ed.), Proceedings of the
25th International Joint Conference on Artificial Intelligence (IJCAI 2016), New York, United
States, AAAI Press, pp. 3818-3824.
Madrid-Morales, D., Wasserman, H., Gondwe, G., Ndlovu, K., Sikanku, E., Tully, M., Umejei, E. and
Uzuegbunam, C. (2021), “Motivations for sharing misinformation: a comparative study in six Sub-
Saharan African countries”, International Journal of Communication, Vol. 15, pp. 1200-1219.
Marcon, A.R., Murdoch, B. and Caulfield, T. (2017), “Fake news portrayals of stem cells and stem cell
research”, Regenerative Medicine, Vol. 12 No. 7, pp. 765-775.
McKay, S. and Tenove, C. (2020), “Disinformation as a threat to deliberative democracy”, Political
Research Quarterly, Vol. 74 No. 3, pp. 703-717.
McPhetres, J., Rand, D.G. and Pennycook, G. (2021), “Character deprecation in fake news: is it in
supply or demand?”, Group Processes and Intergroup Relations, Vol. 24 No. 4, pp. 624-637.
Meel, P. and Vishwakarma, D.K. (2020), “Fake news, rumor, information pollution in social media and
web: a contemporary survey of state-of-the-arts, challenges and opportunities”, Expert Systems
with Applications, Vol. 153, p. 112986.
Melki, J., Tamim, H., Hadid, D., Makki, M., El Amine, J. and Hitti, E. (2021), “Mitigating infodemics: the
relationship between news exposure and trust and belief in COVID-19 fake news and social
media spreading”, PLoS ONE, Vol. 16 No. 6, e0252830.
Mena, P. (2020), “Cleaning up social media: the effect of warning labels on likelihood of sharing false
news on Facebook”, Policy and Internet, Vol. 12 No. 2, pp. 165-183.
Metzger, M.J., Flanagin, A.J., Mena, P., Jiang, S. and Wilson, C. (2021), “From dark to light: the many
shades of sharing misinformation online”, Media and Communication, Vol. 9 No. 1, pp. 134-143.
Mitra, T. and Gilbert, E. (2015), “Credbank: a large-scale social media corpus with associated
credibility annotations”, in Quercia, D. (Ed.), Proceedings of the Ninth International AAAI
Conference on Web and Social Media, California, United States, AAAI Press, pp. 258-267.
Moravec, P., Minas, R. and Dennis, A.R. (2019), “Fake news on social media: people believe what they
want to believe when it makes no sense at all”, MIS Quarterly, Vol. 43, pp. 1343-1360.
Mourão, R.R. and Robertson, C.T. (2019), “Fake news as discursive integration: an analysis of sites
that publish false, misleading, hyperpartisan and sensational information”, Journalism Studies,
Vol. 20 No. 14, pp. 2077-2095.
Munger, K. (2020), “All the news that’s fit to click: the economics of clickbait media”, Political
Communication, Vol. 37 No. 3, pp. 376-397.
Nazar, S. and Pieters, T. (2021), “Plandemic revisited: a product of planned disinformation amplifying
the COVID-19 ‘infodemic’”, Frontiers in Public Health, Vol. 9, p. 649930.
Ncube, L. (2019), “Digital media, fake news and pro-movement for democratic change (MDC) alliance cyber-
propaganda during the 2018 Zimbabwe election”, African Journalism Studies, Vol. 40, pp. 44-61.
Nelson, J.L. and Taneja, H. (2018), “The small, disloyal fake news audience: the role of audience
availability in fake news consumption”, New Media and Society, Vol. 20 No. 10, pp. 3720-3737.
Neyazi, T.A., Kalogeropoulos, A. and Nielsen, R.K. (2021), “Misinformation concerns and online news
participation among internet users in India”, Social Media + Society, Vol. 7 No. 2,
20563051211009013.
Ngai, E.W.T. and Wu, Y. (2022), “Machine learning in marketing: a literature review, conceptual
framework, and research agenda”, Journal of Business Research, Vol. 145, pp. 35-48.
O’Brien, N., Latessa, S., Evangelopoulos, G. and Boix, X. (2018), “The language of fake news: opening
the black-box of deep learning based detectors”, in Bengio, S., Wallach, H.M., Larochelle, H.,
Grauman, K. and Cesa-Bianchi, N. (Eds), Proceedings of the 32nd International Conference on
Neural Information Processing Systems, Red Hook, NY, United States, Curran Associates
Inc., pp. 1-5.
Okoro, N. and Emmanuel, N.O. (2018), “Beyond misinformation: survival alternatives for Nigerian
media in the “post-truth” era”, African Journalism Studies, Vol. 39 No. 4, pp. 67-90.
Osmundsen, M., Bor, A., Vahlstrup, P.B., Bechmann, A. and Petersen, M.B. (2021), “Partisan
polarization is the primary psychological motivation behind political fake news sharing on
Twitter”, American Political Science Review, Vol. 115 No. 3, pp. 999-1015.
Pedersen, S. and Burnett, S. (2018), “‘Citizen curation’ in online discussions of Donald Trump’s
presidency: sharing the news on Mumsnet”, Digital Journalism, Vol. 6 No. 5, pp. 545-562.
Pennycook, G., Cannon, T.D. and Rand, D.G. (2018), “Prior exposure increases perceived accuracy of
fake news”, Journal of Experimental Psychology: General, Vol. 147 No. 12, pp. 1865-1880.
Pennycook, G., Epstein, Z., Mosleh, M., Arechar, A.A., Eckles, D. and Rand, D.G. (2021), “Shifting
attention to accuracy can reduce misinformation online”, Nature, Vol. 592 No. 7855,
pp. 590-595.
Peter, J. and Kühne, R. (2018), “The new frontier in communication research: why we should study
social robots”, Media and Communication, Vol. 6 No. 3, pp. 73-76.
Rampersad, G. and Althiyabi, T. (2020), “Fake news: acceptance by demographics and culture on
social media”, Journal of Information Technology and Politics, Vol. 17 No. 1, pp. 1-11.
Ratkiewicz, J., Conover, M.D., Meiss, M., Gonçalves, B., Flammini, A. and Menczer, F.M. (2011),
“Detecting and tracking political abuse in social media”, in Nicolov, N., Shanahan, G.J., Adamic,
L., Baeza-Yates, R. and Counts, S. (Eds), Proceedings of the Fifth International AAAI Conference
on Weblogs and Social Media, California, United States, AAAI Press, pp. 297-304.
Richey, M. (2018), “Contemporary Russian revisionism: understanding the Kremlin’s hybrid warfare
and the strategic and tactical deployment of disinformation”, Asia Europe Journal, Vol. 16 No. 1,
pp. 101-113.
Riedel, B., Augenstein, I., Spithourakis, G.P. and Riedel, S. (2017), A Simple but Tough-To-Beat
Baseline for the Fake News Challenge Stance Detection Task, Cornell University, Ithaca, NY,
unpublished manuscript, arXiv preprint, arXiv:1707.03264.
Rini, R. (2017), “Fake news and partisan epistemology”, Kennedy Institute of Ethics Journal, Vol. 27
No. 2, pp. E43-E64.
Risdal, M. (2016), “Getting real about fake news: text and metadata from fake and biased news sources
around the web, Kaggle Dataset”, available at: https://www.kaggle.com/mrisdal/fake-news/
download (accessed 8 October 2019).
Robledo, I. and Jankovic, J. (2017), “Media hype: patient and scientific perspectives on misleading
medical news”, Movement Disorders, Vol. 32 No. 9, pp. 1319-1323.
Romer, D. and Jamieson, K.H. (2021), “Patterns of media use, strength of belief in COVID-19
conspiracy theories, and the prevention of COVID-19 from March to July 2020 in the United
States: survey study”, Journal of Medical Internet Research, Vol. 23 No. 4, e25215.
Sallam, M., Dababseh, D., Eid, H., Al-Mahzoum, K., Al-Haidar, A., Taim, D., Yaseen, A., Ababneh, N.A.,
Bakri, F.G. and Mahafzah, A. (2021a), “High rates of COVID-19 vaccine hesitancy and its
association with conspiracy beliefs: a study in Jordan and Kuwait among other Arab countries”,
Vaccines, Vol. 9 No. 1, p. 42.
Sallam, M., Dababseh, D., Eid, H., Hasan, H., Taim, D., Al-Mahzoum, K., Al-Haidar, A., Yaseen, A.,
Ababneh, N.A., Assaf, A., Bakri, F.G., Matar, S. and Mahafzah, A. (2021b), “Low COVID-19
vaccine acceptance is correlated with conspiracy beliefs among university students in Jordan”,
International Journal of Environmental Research and Public Health, Vol. 18 No. 5, p. 2407.
Santia, G.C. and Williams, J.R. (2018), “Buzzface: a news veracity dataset with Facebook user
commentary and egos”, in Hancock, J. (Ed.), Proceedings of the Twelfth International AAAI
Conference on Web and Social Media, California, United States, AAAI Press, pp. 531-540.
Saquete, E., Tomas, D., Moreda, P., Martínez-Barco, P. and Palomar, M. (2020), “Fighting post-truth
using natural language processing: a review and open challenges”, Expert Systems with
Applications, Vol. 141, p. 112943.
Schaewitz, L., Kluck, J.P., Klösters, L. and Krämer, N.C. (2020), “When is disinformation (in)credible?
Experimental findings on message characteristics and individual differences”, Mass
Communication and Society, Vol. 23 No. 4, pp. 484-509.
Scherer, L.D., McPhetres, J., Pennycook, G., Kempe, A., Allen, L.A., Knoepke, C.E., Tate, C.E. and
Matlock, D.D. (2021), “Who is susceptible to online health misinformation? A test of four
psychosocial hypotheses”, Health Psychology, Vol. 40 No. 4, pp. 274-284.
Shao, C., Ciampaglia, G.L., Flammini, A. and Menczer, F. (2016), “Hoaxy: a platform for tracking online
misinformation”, in Bourdeau, J., Hendler, J.A. and Nkambou, R.N. (Eds), Proceedings of the 25th
International Conference Companion on World Wide Web, International World Wide Web
Conferences Steering Committee, Switzerland, Republic and Canton of Geneva, pp. 745-750.
Shao, C., Hui, P.-M., Wang, L., Jiang, X., Flammini, A., Menczer, F. and Ciampaglia, G.L. (2018),
“Anatomy of an online misinformation network”, PLoS ONE, Vol. 13 No. 4, e0196087.
Sharma, K., Qian, F., Jiang, H., Ruchansky, N., Zhang, M. and Liu, Y. (2019), “Combating fake news: a
survey on identification and mitigation techniques”, ACM Transactions on Intelligent Systems
and Technology, Vol. 10 No. 3, pp. 1-42.
Shin, J., Jian, L., Driscoll, K. and Bar, F. (2018), “The diffusion of misinformation on social media:
temporal pattern, message, and source”, Computers in Human Behavior, Vol. 83, pp. 278-287.
Shirish, A., Srivastava, S.C. and Chandra, S. (2021), “Impact of mobile connectivity and freedom on
fake news propensity during the COVID-19 pandemic: a cross-country empirical examination”,
European Journal of Information Systems, Vol. 30 No. 3, pp. 322-341.
Shu, K., Sliva, A., Wang, S., Tang, J. and Liu, H. (2017), “Fake news detection on social media: a data
mining perspective”, ACM SIGKDD Explorations Newsletter, Vol. 19 No. 1, pp. 22-36.
Shu, K., Mahudeswaran, D., Wang, S., Lee, D. and Liu, H. (2018), Fakenewsnet: A Data Repository with
News Content, Social Context and Dynamic Information for Studying Fake News on Social Media,
Cornell University, Ithaca, NY, unpublished manuscript, arXiv preprint, arXiv:1809.01286.
Shu, K., Mahudeswaran, D. and Liu, H. (2019), “FakeNewsTracker: a tool for fake news collection,
detection, and visualization”, Computational and Mathematical Organization Theory, Vol. 25
No. 1, pp. 60-71.
Shu, K., Bhattacharjee, A., Alatawi, F., Nazer, T.H., Ding, K., Karami, M. and Liu, H. (2020),
“Combating disinformation in a social media age”, WIREs Data Mining and Knowledge
Discovery, Vol. 10 No. 6, p. e1385.
Silverman, C. (2016), “This analysis shows how viral fake election news stories outperformed real
news on Facebook”, available at: https://www.buzzfeednews.com/article/craigsilverman/viral-
fake-election-news-outperformed-real-news-on-facebook (accessed 8 October 2019).
Silverman, C. and Pham, S. (2018), “These are 50 of the biggest fake news hits on Facebook in 2018”,
available at: https://www.buzzfeednews.com/article/craigsilverman/facebook-fake-news-hits-
2018 (accessed 8 October 2019).
Silverman, C. and Singer-Vine, J. (2016), “Most Americans who see fake news believe it, new survey
says”, available at: https://www.buzzfeednews.com/article/craigsilverman/fake-news-survey
(accessed 8 October 2019).
Silverman, C., Strapagiel, L., Shaban, H., Hall, E. and Singer-Vine, J. (2016), “Hyperpartisan Facebook
pages are publishing false and misleading information at an alarming rate”, available at:
https://www.buzzfeednews.com/article/craigsilverman/partisan-fb-pages-analysis (accessed 8
October 2019).
Silverman, C., Lytvynenko, J. and Pham, S. (2017a), “These are 50 of the biggest fake news hits on
Facebook in 2017”, available at: https://www.buzzfeednews.com/article/craigsilverman/these-
are-50-of-the-biggest-fake-news-hits-on-facebook-in (accessed 8 October 2019).
Silverman, C., Singer-Vine, J. and Vo, L.T. (2017b), “In spite of the crackdown, fake news publishers
are still earning money from major ad networks”, available at: https://www.buzzfeednews.com/
article/craigsilverman/fake-news-real-ads (accessed 8 October 2019).
Smith, C.A. (2019), “Weaponized iconoclasm in Internet memes featuring the expression ‘fake news’”,
Discourse and Communication, Vol. 13 No. 3, pp. 303-319.
Song, R., Kim, H., Lee, G.M. and Jang, S. (2019), “Does deceptive marketing pay? The evolution of
consumer sentiment surrounding a pseudo-product-harm crisis”, Journal of Business Ethics,
Vol. 158 No. 3, pp. 743-761.
Spinney, L. (2017), “How Facebook, fake news and friends are warping your memory”, Nature,
Vol. 543 No. 7644, pp. 168-170.
Sridharan, K. and Sivaramakrishnan, G. (2021), “Disinformation about COVID-19 preventions and
treatments: analysis of USFDA warning letters”, Health Communication, ahead-of-print, pp. 1-7,
doi: 10.1080/10410236.2021.1980254.
Stewart, G.L. and Barrick, M.R. (2000), “Team structure and performance: assessing the mediating role
of intrateam process and the moderating role of task type”, Academy of Management Journal,
Vol. 43 No. 2, pp. 135-148.
Su, Y. (2021), “It doesn’t take a village to fall for misinformation: social media use, discussion
heterogeneity preference, worry of the virus, faith in scientists, and COVID-19-related
misinformation beliefs”, Telematics and Informatics, Vol. 58, p. 101547.
Talwar, S., Dhir, A., Kaur, P., Zafar, N. and Alrasheedy, M. (2019), “Why do people share fake news?
Associations between the dark side of social media use and fake news sharing behavior”,
Journal of Retailing and Consumer Services, Vol. 51, pp. 72-82.
Talwar, S., Dhir, A., Singh, D., Virk, G.S. and Salo, J. (2020), “Sharing of fake news on social media:
application of the honeycomb framework and the third-person effect hypothesis”, Journal of
Retailing and Consumer Services, Vol. 57, p. 102197.
Tamul, D.J., Ivory, A.H., Hotter, J. and Wolf, J. (2020), “All the president’s tweets: effects of exposure to
Trump’s “fake news” accusations on perceptions of journalists, news stories, and issue
evaluation”, Mass Communication and Society, Vol. 23 No. 3, pp. 301-330.
Tandoc, E.C., Lim, Z.W. and Ling, R. (2018), “Defining ‘fake news’: a typology of scholarly
definitions”, Digital Journalism, Vol. 6 No. 2, pp. 137-153.
Tandoc, E.C., Jenkins, J. and Craft, S. (2019), “Fake news as a critical incident in journalism”,
Journalism Practice, Vol. 13 No. 6, pp. 673-689.
Tandoc, E.C., Lim, D. and Ling, R. (2020), “Diffusion of disinformation: how social media users
respond to fake news and why”, Journalism, Vol. 21 No. 3, pp. 381-398.
Tandoc, E.C., Lee, J., Chew, M., Tan, F.X. and Goh, Z.H. (2021), “Falling for fake news: the role of
political bias and cognitive ability”, Asian Journal of Communication, Vol. 31 No. 4,
pp. 237-253.
Tejedor, S., Portales-Oliva, M., Carniel-Bugs, R. and Cervi, L. (2021), “Journalism students and information
consumption in the era of fake news”, Media and Communication, Vol. 9 No. 1, pp. 338-350.
Tornberg, P. (2018), “Echo chambers and viral misinformation: modeling fake news as complex
contagion”, PLoS ONE, Vol. 13 No. 9, e0203958.
Tully, M. (2022), “Everyday news use and misinformation in Kenya”, Digital Journalism, Vol. 10,
pp. 109-127.
Uwalaka, T., Nwala, B. and Chinedu, A.C. (2021), “Social media, fake news and fake COVID-19 cures in
Nigeria”, Journal of African Media Studies, Vol. 13 No. 3, pp. 435-449.
Vaccari, C. and Chadwick, A. (2020), “Deepfakes and disinformation: exploring the impact of synthetic
political video on deception, uncertainty, and trust in news”, Social Media + Society, Vol. 6
No. 1, 2056305120903408.
Valecha, R., Volety, T., Rao, H.R. and Kwon, K.H. (2021), “Misinformation sharing on Twitter during
Zika: an investigation of the effect of threat and distance”, IEEE Internet Computing, Vol. 25
No. 1, pp. 31-39.
Valenzuela, S., Halpern, D. and Araneda, F. (2022), “A downward spiral? A panel study of
misinformation and media trust in Chile”, The International Journal of Press/Politics, Vol. 27
No. 2, pp. 353-373.
Vargo, C.J., Guo, L. and Amazeen, M.A. (2018), “The agenda-setting power of fake news: a big data
analysis of the online media landscape from 2014 to 2016”, New Media and Society, Vol. 20
No. 5, pp. 2028-2049.
Vicario, M.D., Quattrociocchi, W., Scala, A. and Zollo, F. (2019), “Polarization and fake news: early
warning of potential misinformation targets”, ACM Transactions on the Web, Vol. 13 No. 2, p. 10.
Visentin, M., Pizzi, G. and Pichierri, M. (2019), “Fake news, real problems for brands: the impact of
content truthfulness and source credibility on consumers’ behavioral intentions toward the
advertised brands”, Journal of Interactive Marketing, Vol. 45, pp. 99-112.
Viviani, M. and Pasi, G. (2017), “Credibility in social media: opinions, news, and health information—a
survey”, Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, Vol. 7 No. 5, e1209.
Vlachos, A. and Riedel, S. (2014), “Fact checking: task definition and dataset construction”, in
Danescu-Niculescu-Mizil, C., Eisenstein, J., McKeown, K. and Smith, N.A. (Eds), Proceedings of
the ACL 2014 Workshop on Language Technologies and Computational Social Science, Baltimore,
MD, Association for Computational Linguistics, pp. 18-22.
Vosoughi, S., Roy, D. and Aral, S. (2018), “The spread of true and false news online”, Science, Vol. 359
No. 6380, pp. 1146-1151.
Wahl-Jorgensen, K. and Carlson, M. (2021), “Conjecturing fearful futures: journalistic discourses on
deepfakes”, Journalism Practice, Vol. 15 No. 6, pp. 803-820.
Wahutu, J.S. (2019), “Fake news and journalistic ‘rules of the game’”, African Journalism Studies,
Vol. 40 No. 4, pp. 13-26.
Waisbord, S. (2018), “Truth is what happens to news: on journalism, fake news, and post-truth”,
Journalism Studies, Vol. 19 No. 13, pp. 1866-1878.
Wang, W.Y. (2017), “‘Liar, liar pants on fire’: a new benchmark dataset for fake news detection”,
unpublished manuscript, Cornell University, Ithaca, NY, United States, arXiv preprint,
arXiv:1705.00648.
Wang, B. and Zhuang, J. (2018), “Rumor response, debunking response, and decision makings of
misinformed Twitter users during disasters”, Natural Hazards, Vol. 93 No. 3, pp. 1145-1162.
Wang, Y., McKee, M., Torbica, A. and Stuckler, D. (2019), “Systematic literature review on the spread
of health-related misinformation on social media”, Social Science and Medicine, Vol. 240,
p. 112552.
Wang, R., He, Y., Xu, J. and Zhang, H. (2020), “Fake news or bad news? Toward an emotion-driven
cognitive dissonance model of misinformation diffusion”, Asian Journal of Communication,
Vol. 30 No. 5, pp. 317-342.
Wang, X.Y., Zhang, M., Fan, W.G. and Zhao, K. (2021), “Understanding the spread of COVID-19
misinformation on social media: the effects of topics and a political leader’s nudge”, Journal of
the Association for Information Science and Technology, ahead-of-print, pp. 1-12, doi: 10.1002/
asi.24576.
Wani, M.A., Agarwal, N. and Bours, P. (2021), “Impact of unreliable content on social media users
during COVID-19 and stance detection system”, Electronics, Vol. 10 No. 1, p. 5.
Warner, E.L., Kirchhoff, A.C., Wilson, A., Cloyes, K.G., Sun, Y., Waters, A.R., Nelson, T. and Ellington,
L. (2021), “Social support enactments on social media during the first 6 months of young adult
cancer caregiving”, Journal of Cancer Survivorship: Research and Practice, ahead-of-print,
pp. 1-12, doi: 10.1007/s11764-021-01004-y.
Wasserman, H. (2020), “Fake news from Africa: panics, politics and paradigms”, Journalism, Vol. 21
No. 1, pp. 3-16.
Waszak, P.M., Kasprzycka-Waszak, W. and Kubanek, A. (2018), “The spread of medical fake news in
social media–the pilot quantitative study”, Health Policy and Technology, Vol. 7 No. 2,
pp. 115-118.
Whipple, K.N. and Shermak, J.L. (2020), “The enemy of my enemy is my tweet: how #NotTheEnemy
Twitter discourse defended the journalistic paradigm”, Journalism and Mass Communication
Quarterly, Vol. 97 No. 1, pp. 188-210.
Williamson, P. (2016), “Take the time and effort to correct misinformation”, Nature, Vol. 540, p. 171.
Wölker, A. and Powell, T.E. (2021), “Algorithms in the newsroom? News readers’ perceived credibility
and selection of automated journalism”, Journalism, Vol. 22 No. 1, pp. 86-103.
Wu, K., Yang, S. and Zhu, K.Q. (2015), “False rumors detection on Sina Weibo by propagation
structures”, in Gehrke, J., Lehner, W., Shim, K., Cha, S.K. and Lohman, G. (Eds), 2015 IEEE 31st
International Conference on Data Engineering, New York, United States, IEEE, pp. 651-662.
Wu, Y., Ngai, E.W., Wu, P. and Wu, C. (2020), “Fake online reviews: literature review, synthesis, and
directions for future research”, Decision Support Systems, Vol. 132, p. 113280.
Yang, F. and Horning, M. (2020), “Reluctant to share: how third person perceptions of fake news
discourage news readers from sharing ‘real news’ on social media”, Social Media + Society,
Vol. 6 No. 3, 2056305120955173.
Yang, J. and Tian, Y. (2021), ““Others are more vulnerable to fake news than I am”: third-person effect
of COVID-19 fake news on social media users”, Computers in Human Behavior, Vol. 125, p.
106950.
Yoo, C.W., Goo, J. and Rao, H.R. (2020), “Is cybersecurity a team sport? A multilevel examination of
workgroup information security effectiveness”, MIS Quarterly, Vol. 44 No. 2, pp. 907-931.
Zhang, X. and Ghorbani, A.A. (2020), “An overview of online fake news: characterization, detection,
and discussion”, Information Processing and Management, Vol. 57 No. 2, p. 102025.
Zhou, C., Li, K. and Lu, Y.H. (2021a), “Linguistic characteristics and the dissemination of
misinformation in social media: the moderating effect of information richness”, Information
Processing and Management, Vol. 58 No. 6, p. 102679.
Zhou, C., Xiu, H.X., Wang, Y.Q. and Yu, X.Y. (2021b), “Characterizing the dissemination of
misinformation on social media in health emergencies: an empirical study based on COVID-19”,
Information Processing and Management, Vol. 58 No. 4, p. 102554.
Zubiaga, A., Aker, A., Bontcheva, K., Liakata, M. and Procter, R. (2018), “Detection and resolution of
rumours in social media: a survey”, ACM Computing Surveys, Vol. 51 No. 2, p. 32.

About the authors


Yuanyuan Wu is currently a Joint-Ph.D. student at The Hong Kong Polytechnic
University and Harbin Institute of Technology. Her research interests are fake reviews,
fake news and E-commerce. She has published papers in international journals
such as Decision Support Systems, Applied Mathematical Modelling and Social
Indicators Research.

Eric W.T. Ngai is a Professor in MIS in the Department of Management and Marketing at
The Hong Kong Polytechnic University. His research interests are in the areas of E-
commerce, Decision Support Systems, RFID research and Social Media Technology
and Applications. He has over 130 journal publications in a number of international
journals including MIS Quarterly, Journal of Operations Management, Decision
Support Systems, European Journal of Operational Research, IEEE Transactions on
Systems, Man and Cybernetics, Information and Management, Production and
Operations Management and others.
Pengkun Wu is an Associate Professor in the Business School at Sichuan University.
He received two PhD degrees: one from The Hong Kong Polytechnic University (2018)
and the other from the Harbin Institute of Technology (2019). His research interests are
fake reviews, fake news, E-commerce and spatial crowdsourcing. He has published
over 10 papers in international journals including Decision Support Systems,
International Journal of Production Research, Applied Mathematical Modelling, Journal
of the Operational Research Society and others. Pengkun Wu is the corresponding
author and can be contacted at: wupengkun@scu.edu.cn
Chong Wu is a Professor in the School of Economics and Management at Harbin
Institute of Technology. His research interests are fuzzy mathematics and decision
science. He has over 170 journal publications in a number of international journals
including IEEE Transactions on Knowledge and Data Engineering, Journal of the
Association for Information Science and Technology, Information Science, Fuzzy Sets
and Systems, Expert Systems with Applications and others.

For instructions on how to order reprints of this article, please visit our website:
www.emeraldgrouppublishing.com/licensing/reprints.htm
Or contact us for further details: permissions@emeraldinsight.com
