Shades of Fake News: How Fallacies Influence Consumers' Perception
Sven Beisecker
Christian Schlereth
Sebastian Hein*
Abstract
So far, fake news has been mostly associated with fabricated content that intends to manipulate
or shape opinions. In this manuscript, we aim to establish that the perception of information as
fake news is influenced by not only fabricated content but also by the rhetorical device used (i.e.,
how news authors phrase the message). Based on argumentation theory, we advance that
fallacies – a subset of well-known deceptive rhetorical devices – share a conceptual overlap with
fake news and are therefore suitable for shedding light on the issue’s grey areas. In a first two-
by-two, between-subject, best-worst scaling experiment (case 1), we empirically test whether
fallacies are related to the perception of information as fake news and to what extent a reader can
identify them. In a second two-by-two experiment, we presume that a reader believes that some
of a sender’s messages contain fake news and investigate recipients’ subsequent reactions. We
find that users distinguish nuances based on the applied fallacies; however, they will not
immediately recognize some fallacies as fake news while overemphasizing others. Regarding
users’ reactions, we observe a more severe reaction when the message identified as fake news
comes from a company instead of an acquaintance.
Keywords: Fake News; Best-Worst Scaling; Fallacies; Social Media; Argumentation Theory;
Rhetorical Devices
*WHU – Otto Beisheim School of Management, Marketing and Sales Group, Chair of Digital
Marketing, Burgplatz 2, 56179 Vallendar, Germany, Phone: +49-261-6509-458,
sven.beisecker@whu.edu, christian.schlereth@whu.edu, sebastian.hein@whu.edu
Acknowledgments: The authors gratefully thank Prof. Dr. Dr. h.c. mult. Klaus Brockhoff for
initially igniting the research direction of this paper. The authors further thank the SALTY
professors for participating in our post-study. Finally, the authors would like to thank the
Konrad-Adenauer-Stiftung e. V. for financially and intellectually supporting the first author
through a PhD scholarship.
1 Introduction
“Pope Francis Shocks World, Endorses Donald Trump for President.” This headline
turned out to be what is commonly classified as fake news; still, it generated 960,000
engagements on Facebook (Silverman, 2016). Rather than being an outlier, this anecdote is
emblematic of our current social media landscape, in which a broad audience creates, shares, and
consumes false information without further reflection (Vosoughi, Roy, & Aral, 2018).
The term fake news has been gaining relevance due to its far-reaching implications for
society, most recently during the COVID-19 pandemic. Fake news on topics like the safety of
vaccinations and political measures to contain the spread of the virus can influence individuals’
adoption behavior, with downstream consequences for national and global health (Laato, Islam,
Islam, & Whelan, 2020). For these reasons, policymakers around the world have taken steps to
limit its dissemination. For instance, Germany’s parliamentary body passed the Network
Enforcement Act, which requires social media platforms to remove fake news, hate speech, and
other unlawful content within one day after notification, or else face fines of up to €50 million
(Bundestag, 2017). However, apart from the dissemination of unlawful content such as
propaganda material and insults, the law does not specify what exactly fake news is. In
policymakers’ defense, scholars also lack a clear definition of the concept and have mostly
focused on the facticity of news content (Tandoc, Lim, & Ling, 2018).
Kim, Moravec, and Dennis (2019) defined fake news as “news […] that are intentionally
and verifiably false and could mislead readers” (p. 934). Here, we emphasize the potential to
mislead, which can stem not only from fabricated content but also from certain rhetorical
devices. We focus on one prominent subset of deceptive rhetorical devices: namely, fallacies,
which have a well-documented ability to misinform and shape opinions. We argue that, because
of their deceptive nature, fallacies share a conceptual overlap with fake news.
The goal of our research is to examine this overlap in the perception between fake news
and fallacies. In two hypotheses, we draw on argumentation theory and advance that the
rhetorical device shapes the perception of information as fake news to a certain, device-specific
degree, independent of the content. We further argue that not all fallacies are equally detectable.
As a second goal, we follow up on the idea that readers of news statements are skeptical
that some information from a specific sender is fake news. We study how the users react, i.e., to
what extent their reaction is directed toward the platform that distributes the information in
question or toward the sender, and how this reaction is linked to observable characteristics of the
reader and the sender. We thereby shed light on the repercussions that may accrue for
individuals, companies, and platforms that share or distribute fake news. The second goal
deviates from previous research on user reactions, which has mainly examined why some readers
share information even though they likely perceived it as fake (Kim & Dennis, 2019; Kim et al.,
2019; Pennycook & Rand, 2021). In contrast to previous research, we focus on the sub-
population that is skeptical that a post contains fake news and study its reaction.

To pursue the first goal, we conducted a two-by-two, between-subject, best-worst scaling
experiment in which respondents assessed news statements as fake news when a) evaluated with
or without explanations of the fallacies and b)
when applied to different contexts (in this case, a political and a business context). To study their
reactions, we used another two-by-two, between-subject experiment, in which we varied the type
of sender (company vs. acquaintance) and the share of information perceived as fake news in
posts. Here, respondents stated how likely they are to express different reactions against the
sender and the platform.
The insights from this research can be used in manifold ways. First, if the perception of
information as fake news relates to not only the facticity of content, but also the rhetorical device
used, then fake news detection algorithms could be extended to automatically identify those
fallacies that are highly related to fake news perceptions. Indeed, there are already initial NLP-
based machine learning algorithms being developed to detect some fallacies, such as ad hominem
(Delobelle, Cunha, Cano, Peperkamp, & Berendt, 2019; Li, Thomas, & Liu, 2021). Concurrently,
the field of argumentation mining is researching methods for analyzing people’s reasoning
(Habernal & Gurevych, 2017). Second, because some other fallacies, such as formal logical errors,
are difficult to detect automatically (Delobelle et al., 2019), social media platforms may strive to
educate readers to help them manually detect fake news. Third, by studying user reactions, we
outline who has more to lose when readers perceive certain messages as fake news (companies or
individuals). Finally, we challenge the common wisdom that fake news is primarily related to
politics (Bronstein, Pennycook, Bear, Rand, & Cannon, 2019; Faragó, Kende, & Krekó, 2020).
Instead, and in line with, e.g., Visentin, Pizzi, and Pichierri (2019), we show that the perception of
information as fake news can be independent of the context, as well as have severe business
implications.
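To give a feel for what automated fallacy detection involves, the following minimal sketch flags ad hominem cues with a hand-crafted keyword heuristic. Everything in it is an illustrative assumption: the cue list, threshold, and function names are invented here, and the NLP approaches cited above rely on trained language models rather than keyword matching.

```python
# Toy heuristic for flagging ad hominem cues in short news statements.
# Purely illustrative: real detectors (e.g., the NLP-based approaches cited
# in the text) use trained models, not a fixed keyword list.

# Hypothetical cue words: personal attacks target the arguer, not the argument.
AD_HOMINEM_CUES = {"liar", "idiot", "corrupt", "incompetent", "fraud", "stupid"}

def ad_hominem_score(statement: str) -> float:
    """Share of tokens that are personal-attack cues (0.0 = no cues found)."""
    tokens = [t.strip(".,!?;:").lower() for t in statement.split()]
    if not tokens:
        return 0.0
    hits = sum(1 for t in tokens if t in AD_HOMINEM_CUES)
    return hits / len(tokens)

def flag_ad_hominem(statement: str, threshold: float = 0.05) -> bool:
    """Flag a statement if the cue density exceeds a (tunable) threshold."""
    return ad_hominem_score(statement) > threshold
```

For instance, a statement attacking the arguer ("The senator is a corrupt liar, so his tax plan must fail.") would be flagged, while a purely substantive claim would not. A real detector would need labeled training data and a learned model rather than a fixed list.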
We first review the literature related to fake news and people’s reactions upon detecting
fake news. Afterward, we derive the hypotheses for our study and introduce the setup before
presenting the results.
2.1 Operationalization
Even though the search term ‘fake news’ peaked in early 2020 on Google Trends (Google
Trends, 2021), the underlying concept has a long history that goes far beyond the recently
surging interest. One of the earliest known examples of fake news dates back to the Middle
Ages, when the duke of Austria, Rudolf IV, falsely claimed that his lineage, the Habsburgs, had
received an imperial certificate called “privilegium maius” that would grant them a vote in the
election of the emperor (ZDF, 2021). Since then, researchers have studied this multi-faceted
issue from various historical angles (Weiss, Alwan, Garcia, & Garcia, 2020). For instance,
Weiss et al. (2020) noted that rumors are one expression of fake news, i.e., unintentional
information distortions that occur out of ignorance and are repeated by different people over
time—and, thus, are difficult to control (Shibutani, 1966). According to Allport and Postman
(1947), rumors are formed through leveling (conveyed information is shortened over time),
sharpening (certain details are selectively retained and accentuated), and assimilation (details
are distorted until they are consistent with the person’s prior views). The accuracy of a rumor
decreases the more its
spreaders are subject to narrowed attention, limited memory, and perceptual biases (Buckner, 1965).
While the issues that encourage rumors are largely baked into human psychology, they
may be exacerbated by recent developments. In the present paper, we follow the belief of authors
like Egelhofer and Lecheler (2019), Laato et al. (2020), and Moravec, Minas, and Dennis (2019)
in arguing that fake news is strongly connected to the rise of social media. With many users able
to share and interact with news on social media, information generally spreads faster than in
traditional media, often without users reflecting on the credibility of the source.
While people have always been able to lie (Hancock, 2007), the advancement of social media
makes it increasingly difficult to judge the trustworthiness of information (Wineburg, McGrew,
Breakstone, & Ortega, 2016). Likewise, properly fact-checking digital information gets harder with an
increasing amount of information (Lecheler & Kruikemeier, 2016). Based on 4.5 million tweets,
Vosoughi et al. (2018) concluded that fake news is more likely to go viral and spread much faster
than accurate news. Against this background, researchers have begun to study fake news from
different angles—particularly its creators and their motivations, its recipients, and the
countermeasures against it. Table 1 provides an overview of this literature.
Table 1. Overview of related literature on fake news

| Study | Definition of fake news | Intentional deception | Method | Key findings |
|---|---|---|---|---|
| Bronstein, Pennycook, Bear, Rand, and Cannon (2019) | Fabricated news stories that are presented as being from legitimate sources and promoted on social media to deceive the public for ideological or financial gain | Yes | Survey | Delusion-prone individuals are more likely to believe fake news headlines |
| Egelhofer and Lecheler (2019) | All news which is “inaccurate” | Yes | Conceptual | Fake news consists of two dimensions, fake news genre (deliberate creation of disinformation) and fake news label (use of the term to discredit media sources) |
| Pennycook and Rand (2018) | Fabricated information that mimics news media content in form but not in organizational process or intent | Yes | Correlational study | Propensity to engage in analytical reasoning is negatively associated with perceived accuracy of fake news and positively with ability to discern fake news from real news |
| Pennycook, Cannon, and Rand (2018) | News stories that were fabricated (but presented as if from legitimate sources) and promoted on social media to deceive the public for ideological and/or financial gain | Yes | Experiment | Exposure to fake news headlines increases subsequent perceptions of accuracy (illusory truth effect) |
| Lazer et al. (2018) | Fabricated information that mimics news media content in form but not in organizational process or intent | Yes | Conceptual | Interventions against fake news should either focus on empowering individuals to evaluate fake news or preventing their exposure to fake news |
| Berthon and Pitt (2018) | False information | Yes | Conceptual | Brands can be impacted by (e.g., as target of fake news) and actively impact (e.g., by deliberate or unintentional association with dubious content) fake news in different ways |
| Vosoughi et al. (2018) | News that has been verified as false | Yes | Descriptive, investigation of rumor cascades | Fake news on Twitter is retweeted by more people, and far more rapidly, than real news, especially for posts in a political context |
| Tandoc, Lim, and Ling (2018) | Several components including news satire, news parody, fabrication, manipulation, advertising, and propaganda | No | Literature review | Fake news can be categorized across the two dimensions of levels of facticity and deception |
| Allcott and Gentzkow (2017) | News articles that are intentionally and verifiably false, and could mislead readers | Yes | Web-browsing data, survey | Social media are important channels to distribute fake news and may influence public elections |
| Shu et al. (2017) | Low-quality news with intentionally false information | Yes | Literature review | Summary of different approaches to detecting fake news |
| Berkowitz and Schwartz (2016) | Content that blurs lines between nonfiction and fiction | No | Textual analysis | Differentiation between fake news and satire |
| Wineburg, McGrew, Breakstone, and Ortega (2016) | – | – | Survey | Students largely fail to accurately evaluate the trustworthiness of news articles |
| Lewandowsky et al. (2012) | Misinformation, false beliefs | Yes | Conceptual | Continued influence effect (retractions fail to eliminate the influence of misinformation) |
| Hancock (2007) | Use of various concepts of deception and lying | Yes | Conceptual | Modern communication technologies facilitate deception and complicate deception detection |
| DiFonzo and Bordia (2007) | Rumors | – | Conceptual | Definition of rumors, psychological aspects of rumor spreading |
| Gilbert, Krull, and Malone (1990) | – | – | Experiment | Interruptions in information processing make subjects more likely to consider false propositions true |
| Begg, Armour, & Thérèse (1985) | – | – | Experiment | New details about familiar topics are rated truer than new details about unfamiliar topics |
| Gardner (1975) | Advertisements that leave the consumer with factually untrue or potentially misleading impressions and/or beliefs | Yes | Conceptual | Identification of a three-level typology of advertising deception, focused on consumer reaction: unconscionable lie, claim-fact discrepancy, and claim-belief interaction |
| Shibutani (1966) | Rumors | – | Conceptual | Definition of rumors, processes by which rumors form |
| Buckner (1965) | Rumors | – | Survey | Effect of rumor network interactions on the accuracy of rumor transmission |
| Allport and Postman (1947) | Rumors | – | Conceptual | Definition of rumors, processes by which rumors form |
| Knapp (1944) | Rumors | – | Conceptual | Systematization and classification of rumors |
With the present research, we aim to investigate whether additional factors play a role in
defining fake news. We contribute to this ongoing literature stream by studying the perception of
information as fake news beyond the facticity of content. Thereby, we focus on news headlines
and how the framing of an argument impacts readers’ perceptions. We follow the work of
Shibutani (1966), who, in the context of rumors, stated that how something is said matters just as
much as what is said. To this end, we dwell on a subset of well-studied rhetorical devices called
fallacies (Dowden, 1993; Van Eemeren, Garssen, & Meuffels, 2009). Although these devices
have been extensively studied, we are not aware of any empirical work that links the different
types of fallacies to the perception of information as fake news. While a few extant studies have
discussed certain individual fallacies, none have performed a comparison. For instance, Van
Eemeren et al. (2009) found that respondents consider fallacies to be less reasonable than sound
statements; however, that study focused solely on the ad hominem fallacy, i.e., attacking the arguer instead of
the argument. Another study relates to the false dilemma fallacy, i.e., framing a situation as
having only two options when there are in fact more. Brisson, Markovits, Robert, and Schaeken
(2018) found that an individual’s tendency to fall for this fallacy depends on their background
knowledge and their ability to retrieve options from memory other than the ones presented.
A second deviation from the literature is the treatment of fake news as a multi-level
construct. Although most empirical studies do not explicitly make this conceptual distinction
(e.g., Allcott & Gentzkow, 2017; Vosoughi et al., 2018), they do operationalize the fake news
construct solely in binary terms (i.e., as either true or false). A binary classification simplifies the
process by matching existing statements with fact-checking websites. However, it reduces fake
news detection to only explicitly verifiable cases, allowing current detection mechanisms to
underestimate the severity of fake news. With our focus on fallacies, we deviate from most
previous research by treating fake news as non-binary (two exceptions with explicit non-binary
classifications are: Tandoc et al. (2018), who characterized fake news by the “level of facticity”
and the “author’s immediate intention to deceive”, as well as Berkowitz and Schwartz (2016),
who defined fake news as content that “blurs lines between nonfiction and fiction”).
A growing body of research further examines how to combat fake news. We structure these
efforts along the following dimensions: 1.) detecting fake news, 2.) understanding what impacts
believability and dissemination, and 3.) educating people to help them detect fake news themselves.
There are several characteristics of fake news that make it detectable. According to Zhou
and Zafarani (2020), detection approaches should address false information, writing style,
propagation patterns, and source credibility. The first approach studies whether the news content
contains false information (Alonso, Vilares, Gómez-Rodríguez, & Vilares, 2021; Shu, Sliva, Wang, Tang, & Liu, 2017).
The second approach aims to assess whether readers are intentionally misled due to how the
news is written or presented (Shu et al., 2017). Here, certain text features (such as swear words
or emotional words) and the characteristics of associated news images are analyzed using
sentiment analysis (Alonso et al., 2021; Brasoveanu & Andonie, 2021). In line with our research,
this approach transcends the notion of factual accuracy. The third approach studies information
diffusion patterns. However, platforms and researchers can observe such patterns only in
hindsight (Zhou & Zafarani, 2020). The fourth approach aims to detect fake news by evaluating
the credibility of the news source. This approach is preoccupied with identifying unreliable
websites: for instance, based on structural features of the URL (Mazzeo, Rapisarda, & Giuffrida,
2021) or malicious social media users such as bots (Shu et al., 2017; Zhou & Zafarani, 2020).
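As an illustration of the fourth, source-credibility approach, the sketch below extracts simple structural features from a URL. The chosen features are plausible examples assumed for illustration only; they do not reproduce the feature set of Mazzeo, Rapisarda, and Giuffrida (2021).

```python
# Illustrative extraction of structural URL features of the kind used to
# assess source credibility. The feature choice is a simplified assumption,
# not the feature set of the cited studies.
from urllib.parse import urlparse

def url_features(url: str) -> dict:
    """Return simple structural features of a news URL."""
    parsed = urlparse(url)
    host = parsed.netloc.lower()
    return {
        "url_length": len(url),                         # very long URLs can hide the true host
        "num_subdomains": max(host.count(".") - 1, 0),  # e.g., cnn.com-breaking-news.example.net
        "has_hyphen_in_host": "-" in host,              # common in spoofed domains
        "uses_https": parsed.scheme == "https",
        "path_depth": len([p for p in parsed.path.split("/") if p]),
    }
```

A spoofed-looking address such as `http://cnn.com-breaking-news.example.net/story/1` would score hyphens in the host and extra subdomains, whereas a plain `https://example.org/news` would not; a classifier could then weigh such features against labeled examples of unreliable sites.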
Moreover, the extant literature has identified several factors that influence the extent to
which people believe in fake news. First, individuals’ traits and characteristics can determine
how prone they are. Bronstein et al. (2019) found that proclivities for delusion, dogmatism, and
religious fundamentalism are associated with an increased belief in fake news.
Second, the tendencies to engage in elaborate cognitive reflection (Fiske & Taylor, 2013;
Pennycook & Rand, 2021) and analytical reflection (Pennycook & Rand, 2018) are associated
with a lower belief in fake news. Similarly, individuals who score high on information literacy
are more likely to correctly detect fake news (Jones-Jang, Mortensen, & Liu, 2021). Third,
people are impacted by the circumstances and timing of fake news exposure. In the study of
rumors, Shibutani (1966) noted that people’s default stance is to implicitly believe in the
truthfulness of anything they hear. They will only distance themselves from this initial belief if
presented with considerable reason for doubt. However, a person’s proclivity to doubt what they
hear or read depends on several factors: Martel, Pennycook, and Rand (2020) found that people
in an emotional state are particularly prone to believing fake news. Aspects like a person’s time
constraints and current energy level also play a role (Wilson & Brekke, 1994), as do
interruptions in information processing (Gilbert, Krull, & Malone, 1990). Pennycook, Cannon,
and Rand (2018) determined that repeated exposure leads to increased accuracy judgments,
which they termed the illusory truth effect. Finally, the characteristics of the message itself can
influence believability. In line with confirmation bias (Nickerson, 1998), users tend to believe
news that aligns with their prior views (Kim & Dennis, 2019; Pennycook & Rand, 2021). At the
same time, users are less likely to believe news with an incoherent story (Lewandowsky, Ecker,
Seifert, Schwarz, & Cook, 2012) or a dubious source (Begg, Armour, & Thérèse, 1985), while
being more likely to believe news with trusted endorsements (Bryanov & Vziatysheva, 2021).
A third stream of work concerns measures by social media platforms that educate users about
news consumption. Here, researchers have
mainly proposed remedies that address fabricated content. For instance, Moravec, Kim, and
Dennis (2020) studied the design of fake news flags (tags attached to news articles containing
disputed content) as a potential remedy. Relying on dual-process theory (Kahneman & Egan,
2011), the authors investigated how people’s beliefs in disputed news articles are affected by interventions
focusing on either System 1 (automatic evaluation triggered by a stop sign icon) or System 2
(deliberate evaluation triggered by a text warning). Pennycook, Bear, Collins, and Rand (2020)
explored the psychological effects of attaching warnings or ratings to the article source. In a
similar vein, Kim et al. (2019) examined three types of ratings, namely: expert (an expert judges
the source), user source (users judge the source), and user article ratings (users judge individual
articles). The authors found that expert ratings were the most effective in reducing the perceived
believability of fake news.
In this research, we focus on the detectability of fake news. In line with Clarke, Chen,
Du, and Hu (2020), we define detectability as the outcome effectiveness of an approach that aims
to identify fake news. We focus not on detecting certain words or sentiments, but on the use of fallacies and how they influence the
perception of information as fake news. Fallacies use true information in a reliable context, but
exaggerate or modify certain elements that distract from the factual level (Fearnside & Holther,
1959). The detection of certain fallacies is becoming operationally feasible thanks to early NLP-
based machine learning algorithms (Delobelle et al., 2019; Li et al., 2021). In doing so, we treat
fake news as a continuum between clearly fabricated information (i.e., alternative facts) and
news statements containing deliberately misleading elements. This broadened view can
substantially extend the possibilities of combatting fake news by allowing one to: (i) train
algorithms to better detect fake news automatically and (ii) educate readers about the use of
fallacies.

Given that such interventions may reduce belief in fake news as well as prompt users to reflect
on the veracity of a message, this paper’s final question is how readers
react when they are skeptical that a sender is sharing such messages. Altay, Hacquin, and
Mercier (2020) demonstrated that sharing fake news has damaging effects on the sharer’s
reputation and on readers’ trust in the source, independent of whether it is a media outlet or an
individual sharing the news. Ultimately, sources suffer a greater loss of trust when sharing fake
news than they gain from sharing a real news story (Altay et al., 2020).
Even in situations where companies are the targets, rather than originators, of fake news,
their reputations can still be sullied. In such cases, readers of fake news may react by changing
their consumer behavior, which can have severe consequences for businesses. Berthon and Pitt
(2018) noted that consumers tend to dissociate from brands that have been targets of fake news,
with negative downstream consequences for companies’ brand equity. Prominent examples of
companies impacted by fake news include PepsiCo and New Balance (Di Domenico & Visentin,
2020). However, brands do not seem to suffer if their advertisements appear next to fake news
articles if the host website is generally regarded as credible (Visentin et al., 2019).
In this study, we explore the reaction of users who are suspicious that one of their
contacts has shared fake news. Thus, compared to previous literature, we do not focus on why
content is shared despite being perceived as fake news, nor on the consequences of fake news for
companies’ brand equity. Instead, we distinguish reactions toward the sender and the platform
that shared the content, thereby focusing on the potential damage for the parties involved.
3 Theoretical Framework
We derive our understanding of theory from Gregor's (2006) theoretical taxonomy, with a particular
focus on his theory type III (“Theory for Predicting”). This type of theory is used to discover
previously unknown regularities in order to predict outcomes from a set of explanatory factors,
but without determining the underlying causal connections between the dependent and
independent variables (Gregor, 2006). In short, parts of the system remain a “black box.”
A core claim of our article is that fake news is not defined by content alone, but also
depends on the employed rhetorical devices. Based on this argument, we derive the central
hypotheses below. Thereby, we employ Gregor's (2006) type III understanding of theory in order
to demonstrate the relationship between rhetorical devices and the perception of information as
fake news. In doing so, we leave out the question of why specific rhetorical devices are more
strongly associated with fake news perceptions than others.

Argumentation theory formulates the requirements for an argument to be correct (D. Walton,
2009). An argument refers to finding
support for a conclusion based on one or several reasons (Dowden, 1993). In argumentation
theory, there are certain rules for how an argument should be presented. Van Eemeren,
Grootendorst, and Snoeck Henkemans (2002) produced the most prominent summary of ten
rules for sound argumentation. For instance, one rule (burden-of-proof rule) states that the
burden of proof always lies with the person who puts forth a standpoint. Another rule (relevance
rule) establishes that a person can only defend their standpoint based on arguments relevant to
said standpoint (Van Eemeren et al., 2002). In this sense, a related goal of argumentation theory
is the critical evaluation of argumentation (Willard, 2013). This entails efforts to evaluate and
resolve rule violations (i.e., errors arising in argumentation).
A fallacy occurs when an argument contains such errors (D. Walton, 2009). Fallacies
represent a well-documented subset of rhetorical devices (Dowden, 2019), which news authors
use to shift readers’ perception of information, even in cases where the information itself is
factually correct (Madon, Fadzil, & Rahmat, 2021). They stand apart from other rhetorical
devices: they contain purposely made errors in reasoning or unsound or illogical arguments that
claim to conform to the rules of sound argumentation while actually failing to do so (Fearnside &
Holther, 1959). Fallacies conceptually differ from falsifiable information in that they use true
information in a reliable context, but exaggerate or modify certain elements to distract from the
factual level (Fearnside & Holther, 1959). In other words, they are arguments that seem valid,
but are actually invalid (Van Eemeren, Garssen, & Meuffels, 2010). Scholars have
complemented this theoretical view with empirical evidence: A wide array of experiments—
involving a total of more than 1,900 participants—indicate that the average person perceives
fallacious arguments as less reasonable than sound ones (Van Eemeren et al., 2010).
In the present research, we advance that fallacies (as a subset of rhetorical devices)
overlap with the concept of fake news. Our position follows from Lazer et al.'s (2018) argument
that “fake news overlaps with other information disorders, such as misinformation (false or
misleading information) and disinformation (false information that is purposely spread to
deceive people)” (p. 2). Notably, Tandoc et al. (2018) already stated that deception is a necessary
precondition for news to be regarded as fake: Fake news has no impact if audiences do not
erroneously perceive it as real (Tandoc et al., 2018). Thereby, fallacies share common ground
with fake news since both intend to deceive, misinform, and shape the opinions of recipients.
Based on the extant literature (Table 1), we recognize that prior research operationalized fake
news only as fabricated information. We extend this understanding by including the rhetorical
presentation of an argument, which can also affect whether information is perceived as fake
news.
Fig. 1 visualizes the proposed relation among fake news, rhetorical devices, and fallacies.
The relationship implies that some rhetorical devices and fallacies do not overlap with fake
news. One potential example is the conjunction fallacy, i.e., a formal fallacy that occurs when a
decision-maker assumes that specific conditions are more likely than a single general one.
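The error underlying the conjunction fallacy can be stated formally: for any events A and B, P(A and B) <= P(A), so judging the specific conjunction as more probable than the general event alone is necessarily mistaken. A short simulation sketch, using arbitrary hypothetical probabilities, makes this concrete:

```python
# Monte Carlo illustration of why the conjunction fallacy is an error:
# the joint event can never occur more often than either single event.
# The probabilities and event labels below are arbitrary illustrative values.
import random

random.seed(42)
N = 100_000
p_bank_teller, p_activist = 0.05, 0.30  # hypothetical marginal probabilities

count_a = count_ab = 0
for _ in range(N):
    a = random.random() < p_bank_teller   # "is a bank teller"
    b = random.random() < p_activist      # "is an activist" (independent here)
    count_a += a
    count_ab += a and b

# The specific condition (A and B) cannot outnumber the general one (A).
assert count_ab <= count_a
print(f"P(A) ~= {count_a / N:.3f}, P(A and B) ~= {count_ab / N:.3f}")
```

Whatever probabilities are plugged in, the joint count never exceeds the single-event count, which is precisely why rating the conjunction as more likely constitutes a formal fallacy.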
Fallacies can be nuanced, as there are many ways to produce an error in reasoning. As
Van Eemeren et al. (2002) showed, fallacies can occur by violating any one of several rules for
sound argumentation. Given the variety of ways in which fallacies may obscure an argument’s
construction, we anticipate that some fallacies are more closely linked to today’s common
perception of fake news than others. As such, they might be able to (partially) explain the grey
area of fake news. Based on this reasoning, we introduce our first hypothesis (H1):
H1: As a subset of rhetorical devices, fallacies can help to distinguish nuances (i.e., grey
areas) in the perception of information as fake news.

Dealing with arguments involves the tasks of analysis
(finding implicit premises, i.e., the unstated assumptions of an argument, as well as conclusions
that need to become explicit), evaluation (evaluating the strength of an argument), and invention
(developing new arguments to support the conclusion; D. Walton, 2009). Certain fallacies are
harder to identify and analyze than others. This is because there are different ways in which
people can be lured into errors in reasoning (Dowden, 1993). Reviewing different fallacies,
Dowden (1993) summarized several approaches by which these errors can be detected: focusing
on the reasons instead of the reasoner; pointing out choices other than the ones mentioned;
assessing the credibility of the argument source; and noticing when an argument attempts to
divert a reader’s attention from the issue at hand. Given the different ways in which fallacies can
deceive an audience, “it makes no more sense to suppose that they must all be given a common
analysis than it does to suppose that all diseases should be given the same diagnosis and
treatment” (Dowden, 1993).
For illustration, we consider two prominently used fallacies: a) ad hominem (e.g., Barnes,
Johnston, MacKenzie, Tobin, & Taglang, 2018; D. N. Walton, 1987), i.e., an author deliberately
attacks the person who has brought forth an argument instead of the argument itself, often
accompanied by insults about the counterpart’s personality, and b) formal logical error (e.g.,
Binoy, 2014; Fearnside & Holther, 1959; Floridi, 2009), i.e., an author deliberately makes a
deductively invalid argument that typically commits a logical error. Recognizing the former, i.e.,
seeing the insult as a distraction from the argument itself, can be done with relative ease, but
recognizing the latter demands more cognitive deliberation from the reader.
Dual-process theory offers a framework for explaining how people process information and make judgments. Both Kahneman and Egan
(2011) and Strack and Deutsch (2004) differentiated between two systems that work together to
guide human behavior: an impulsive (also referred to as System 1) and a reflective (also referred
to as System 2) system. While the former primarily relies on associative links, the latter is
governed by structured decision processes and a clear intent (Strack & Deutsch, 2004). System 1
operates on a low cognitive capacity and is always active by default, whereas System 2 requires
more cognitive investment and can be easily disturbed (e.g., through distraction or arousal;
Strack & Deutsch, 2004).
On social media, where more than half of U.S. adults consume their news (Shearer,
2021), readers tend to consume news incidentally as it arises in their newsfeed—i.e., they do not
deliberately search for the news (Bergström & Jervelycke Belfrage, 2018). The resulting lack of
critical thinking and deliberation strongly suggests a predominance of System 1 processing when
it comes to news consumption on social media. Relying on System 1 processing may prevent
readers on social media from detecting the fake news character of certain news headlines,
especially if the latter employ fallacies that are subtle in their intent to misinform and cannot be recognized without deliberate scrutiny.
Based on this hypothesis, we argue that educating readers on social media about the presence and type of fallacy in a news headline can help them manually detect errors in cases where System 1 processing alone would fail to do so.
4 Empirical Study
To test the two hypotheses, we conducted an experimental study that employed the best-
worst scaling method, case 1 (e.g., Hinz, Schlereth, & Zhou, 2015; Kaufmann, Rottenburger,
Carter, & Schlereth, 2018; Louviere, Flynn, & Marley, 2015; Louviere, Lings, Islam, Gudergan,
& Flynn, 2013). Directly afterward, we conducted a second experimental study to assess the
reactions of readers who suspect that a contact has shared information perceived as fake news
(described later in Section 4.4). We implemented and executed the questionnaire using an online survey tool.
The best-worst scaling method is a variant of discrete choice experiments (e.g., Hauser,
Eggers, & Selove, 2019; Schlereth & Skiera, 2017) that has recently gained popularity because it
allows one to measure an individual’s strength of preference for, or level of agreement with, a
number of items. An item can be a statement or some other element of interest. This method is
notable for ensuring that respondents hold consistent interpretations of the same decisions
(Mueller Loose & Lockshin, 2013). Compared with verbal measurement scales (e.g., Likert scales), respondents do not need to express abstract ratings for their preferences. Instead, they choose among alternatives that are easy to understand and that can be quickly evaluated. Consequently, the results from this method provide a higher degree of comparability across respondents.
With the best-worst scaling method, we aim to measure the individual strength of
respondents’ perception that a specific fallacy in a news statement represents fake news. Put
differently, we investigate whether fallacies can influence the perception of information as fake
news and whether readers distinguish nuances (i.e., grey areas) based on the applied fallacies.
Fig. 2 illustrates the use of the best-worst scaling method. Respondents repeatedly see choice
sets, each consisting of a different subset of items, and choose the best and worst item from each
set (Mueller, Lockshin, & Louviere, 2010). The terms “best” and “worst” constitute a metaphor
for the extremes of a latent, subjective continuum (Louviere et al., 2015). In our study, we
operationalized the “best” and “worst” labeling by letting respondents decide which item in each
choice set “most closely resembles fake news” and “least closely resembles fake news”,
respectively.
[Fig. 2: Exemplary choice set. Each item pairs a brief description of the rhetorical device with an exemplary headline, e.g., ad hominem (“Herbert Diess is an absolutely unqualified and incompetent CEO: The entire VW board of directors has known about the emissions scandal for years.”) and stylistic flaws (“ATTENTION!!! - The entire VW board of directors has known about the emissions scandal for years!”). Respondents mark the items that most and least closely resemble fake news.]
Of the nine items, six represented fallacies, two represented additional rhetorical non-fallacy devices, and a final item represented fabricated
content. Respondents saw the implementation of each item in the form of an exemplary headline
that represented a news statement. In some versions of the experiment, respondents also saw brief explanations of the rhetorical devices.
We chose headlines as stimuli because they play a central role in garnering consumers’ attention. The goal of headlines is to make readers click on the article or provide the reader with immediate opinions. The headline is one of the most influential elements because it is prominently visible anywhere a page is shared or linked. Often, the headline is the most visible (and clickable) part, as it shows up in the link preview when anyone shares an article, and it shows up in any browser tab. So, when many tabs are open, the headline may drive which tab a reader opens.
We used a two-by-two, between-subject experimental setting to empirically test the two hypotheses. We varied a) the topic of the news
statement (business or political context) and b) whether respondents saw brief explanations of
the rhetorical devices or just the news statements alone. Respondents who saw no explanations
made their decisions solely based on the exemplary news statements. In cases where respondents
saw the explanations, we instructed them to focus on the explanations and consider the exemplary news statements merely as illustrations.
To empirically test our hypotheses, we studied the correlations between the best-worst
scores of the different versions of our experiment. The use of correlational analysis corresponds
to Gregor's (2006) understanding of type III theory in information systems. If respondents in the
political and business scenarios produce comparable rankings of the rhetorical devices when
given explanations, then we can conclude that they complied with our instructions and focused
on the explanations. If, in addition, the rankings correlate between the political and business
context within the experimental condition that only contained the exemplary news statements
and no explanations, we would consider this scenario as support for the first hypothesis (H1, i.e.,
that fallacies as a subset of rhetorical devices can help illuminate nuances, i.e., grey areas, in the perception of information as fake news).
By taking the differences in the perceptions as fake news between an item that included
explanations and the corresponding item that did not include explanations, we can identify which
rhetorical device is over- or under-detected as fake news. If this difference is also similar
between the two contexts, we can consider that as support for the second hypothesis (H2).
Although the underlying rhetorical devices of fallacies are well understood, we are not aware of
any empirical work that examines how audiences react to different types of fallacies. One
exception is the study by Van Eemeren et al. (2009), who found that respondents generally
consider fallacies as less reasonable than sound statements. However, the authors solely focused
on one fallacy (i.e., ad hominem) and thus did not provide comparisons (e.g., according to the fallacies’ degree of deception).
Given our lack of knowledge on how fallacies differ in their degree of deception or usage
frequency, we selected six exemplary fallacies based on our subjective determination that they
would be suitable for detecting nuances in people’s perceptions of information as fake news.1
Besides the aforementioned ad hominem and formal logical error, the remaining four fallacies
are false dilemma, argument from ignorance, bandwagon effect, and false attribution. Some of
the chosen fallacies provide an invalid argument because their pattern of reasoning is wrong;
1 Later, in Section 4.6, we present the results of a post-study, in which we challenged our anticipation through an expert survey with researchers in the field of communication.
others use a poor reasoning structure. We acknowledge that these fallacies do not represent the
full range of possible fallacies and want to emphasize that other fallacies could have sufficed.
We added another item to this list of fallacies that researchers consider quite common in
Internet articles: what is frequently referred to as “clickbait” or “stylistic flaws” (Rubin, 2017).
These terms acknowledge that many creators design their statements to generate as much
audience attention as possible through sensational and emotionally appealing headlines (Bakir &
McStay, 2018). They often encourage consumers to click on a link through the excessive use of capitalization, punctuation, and emotion-evoking words.
Finally, we added two additional items as an upper and lower extreme on the range of
perceptions of fake news. The item serving as an upper range limit is alternative facts: a phrase
that U.S. presidential counselor Kellyanne Conway used in a 2017 press conference (Bradner,
2017), defined as a “completely made up statement”, i.e., fabricated content. The item serving as
a lower range limit is rhetorical question, which is a statement that is suggestive but not
necessarily false. We summarize all items alongside the corresponding descriptions in Table 2.
Table 2 (excerpt). Rhetorical devices with descriptions and exemplary headlines in the political and business contexts.

Formal logical error (Fallacy)
Political: “Proved: Social media allows for election manipulation. Trump used social media extensively. Therefore, social media manipulation is the main reason for the presidential election results.”
Business: “Proved: Car manufacturers manipulated the emission levels. The board of directors has insight into all activities of the company. The entire VW board of directors has known about the emissions scandal for years!”

Argument from ignorance (Fallacy)
Description: The author deliberately states he/she doesn’t have to prove his/her claim; instead someone else has to disprove it.
Political: “No counterevidence: Social media manipulations are the main reason for the U.S. presidential election results!”
Business: “No counterevidence: The entire VW board of directors has known about the emissions scandal for years!”

Bandwagon effect (Fallacy)
Description: The author deliberately assumes that the probability of individual adoption increases along with the overall proportion of people who adopt a practice/opinion.
Political: “General public is certain: Social media manipulation is the main reason for the U.S. presidential election results – Here is why you should think so!”
Business: “Majority of frustrated VW users switch to other cars because the entire VW board of directors has known about the exhaust gas scandal for many years – Here is why you should think so too!”

False attribution (Fallacy)
Description: The author deliberately appeals to an irrelevant, unqualified, unidentified, biased or fabricated source in support of an argument.
Political: “Close friend of a government official who wishes to remain anonymous, confirmed: Social media manipulation is the main reason for the U.S. presidential election results.”
Business: “Several employees who want to remain anonymous confirm: The entire VW board has known about the emissions scandal for years.”

Ad hominem (Fallacy)
Description: The author deliberately attacks the person who has brought forth an argument instead of the argument itself, often not shying away from insults about the counterpart’s personality.
Political: “Marc Zuckerberg is a completely unqualified and incompetent CEO: Social media manipulation is the main reason for U.S. presidential election results”
Business: “Herbert Diess is an absolutely unqualified and incompetent CEO: The entire VW board of directors has known about the emissions scandal for years”

Stylistic flaws (---)
Description: The author deliberately uses stylistic flaws, including the use of emotion-evoking words, excessive punctuation marks, and case insensitivity.
Political: “WATCH OUT!!! - Social media manipulations are the main reason for the presidential election results of the USA!”
Business: “ATTENTION!!! - The entire VW board of directors has known about the emissions scandal for years!”

Rhetorical question (Boundary item with no or low expected relationship to fake news)
Description: The author deliberately frames a question such that it has an obvious or implied answer.
Political: “Will you really continue to use social media, even if manipulation is the main reason for the U.S. presidential election results?”
Business: “Will your next car really be a VW, even though the entire VW board of directors has known about the emissions scandal for years?”
The business version, illustrated in Fig. 2, dealt with VW’s software-based manipulation of emission
values in their diesel cars. The political version dealt with the influence of social media on the
2016 U.S. election results. News media prominently discussed both topics at the time of the
study.
To construct suitable headlines for each rhetorical device, we kept the facticity of
statements about equal for each topic. The factually true parts were always VW’s manipulation
of emission values and the Trump campaign’s social media activities. We then exaggerated or
added made-up elements to this existing context. For example, we stated that social media was
the “main reason for the U.S. election results” and that the “entire VW board has known about
the emissions scandal for years.” All headlines might have been true or false, at least to some
degree, except for alternative facts, which were factually flawed. Table 2 lists all exemplary
headlines.
In total, each respondent saw nine items. For these items, we used a balanced incomplete
block design with 12 choice sets consisting of three items each, which ensures level balance
(each item appears four times) and orthogonality (each pair of items appears once) across
respondents. Respondents chose the items that most and least resembled fake news in each of the
twelve choice sets. To avoid order effects, we randomized the order of the choice sets and of the items within each set.
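For illustration, the two design properties (level balance and orthogonality) can be verified programmatically. The triples below are one hypothetical Steiner-triple arrangement of nine items into twelve choice sets; the study's actual choice sets are not reproduced here.

```python
from itertools import combinations

# One hypothetical S(2,3,9) design: 12 choice sets of 3 items out of 9, where
# each item appears four times and each pair of items appears exactly once.
choice_sets = [
    (0, 1, 2), (3, 4, 5), (6, 7, 8),  # rows of a 3x3 arrangement
    (0, 3, 6), (1, 4, 7), (2, 5, 8),  # columns
    (0, 4, 8), (1, 5, 6), (2, 3, 7),  # diagonals
    (0, 5, 7), (1, 3, 8), (2, 4, 6),  # anti-diagonals
]

def check_design(sets, n_items=9):
    """Verify level balance (each item 4 times) and orthogonality
    (each pair exactly once) of the block design."""
    item_counts = {i: 0 for i in range(n_items)}
    pair_counts = {p: 0 for p in combinations(range(n_items), 2)}
    for s in sets:
        for i in s:
            item_counts[i] += 1
        for p in combinations(sorted(s), 2):
            pair_counts[p] += 1
    level_balance = all(c == 4 for c in item_counts.values())
    orthogonality = all(c == 1 for c in pair_counts.values())
    return level_balance, orthogonality

print(check_design(choice_sets))  # -> (True, True)
```

Any arrangement passing both checks yields the balance properties the study relies on; dropping even one set breaks both.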
In the second experiment, we asked respondents how they would react in cases where they are skeptical that a certain social media user is posting information that
they perceive as fake news. On a scale between 1 (= totally disagree) and 7 (= totally agree), we
asked whether they a) would stop paying attention to the account, b) report the account to the platform, c) stop receiving messages from the account, and d) stop using the whole platform. These questions also contained an attention check that instructed respondents to click “totally disagree.”
We randomly assigned each respondent to one of the following conditions: a) we varied the type of account (labeled as either an
acquaintance, i.e., not-so-close friend, or a company) and b) the frequency of posts containing
fake news (every 4th message or every 10th message). Regarding the first experimental condition, our expectations were as follows.
Following the work by Clark and Mills (1993), we anticipated that relationships with a
company (a non-personal contact) would be governed more by exchange norms (i.e., norms that
focus on self-interest and material gain; Aggarwal & Larrick, 2012), while relationships with an
acquaintance (a person known to the reader) would be governed more by communal norms (i.e.,
norms that focus on mutual caring and trust; Aggarwal & Larrick, 2012; Clark & Mills, 1993). In
the former relationship, the user expects a certain benefit in return for following the company (in
this case, the provision of truthful content); they become willing to withdraw from the
relationship if this benefit disappears. In the latter relationship, the user may not be guided solely
by norms of reciprocity, but instead value aspects like mutual support (Aggarwal & Larrick,
2012; Clark & Mills, 1993), which extend beyond the truthful sharing of information. As a result, we expected more severe reactions toward a company than toward an acquaintance.
A total of 488 respondents, recruited from an online panel in Germany, completed the online survey in the fourth quarter of 2019. We excluded 72 of these
respondents because they failed the attention check. The final sample consisted of 416 respondents.
Estimating BW scores does not require proprietary software: simply counting the number of best choices for an item and subtracting the
number of times a respondent chose this item as worst provides individual or aggregate sample
preference estimates, which we subsequently refer to as BW scores (Finn & Louviere, 1992;
Mueller Loose & Lockshin, 2013). Each item appeared four times (= 12 x 3 / 9). Consequently,
an item may generate BW scores ranging between -4 and +4, depending on how frequently a
respondent chose it as “most likely resembles fake news” (+1), as “least likely resembles fake
news” (-1), or not at all (+0). Adding +5 to all BW scores will transform them into a range
between 1 and 9, i.e., the response we would observe on a nine-point Likert scale. After the
estimation, we followed Louviere et al. (2015) and normalized the BW scores between 100 (most
closely resembles fake news) and 0 (least closely resembles fake news).
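As a minimal sketch of the scoring steps just described (using hypothetical choice counts, not the study's data):

```python
def bw_scores(best_counts, worst_counts):
    """Raw BW score per item: times chosen as 'most closely resembles fake
    news' minus times chosen as 'least closely resembles fake news'
    (range -4..+4 when each item appears four times)."""
    items = set(best_counts) | set(worst_counts)
    return {i: best_counts.get(i, 0) - worst_counts.get(i, 0) for i in items}

def normalize(scores):
    """Min-max rescaling of BW scores to 0..100, so the item perceived as
    most closely resembling fake news maps to 100 and the least to 0."""
    mx, mn = max(scores.values()), min(scores.values())
    return {i: 100.0 * (s - mn) / (mx - mn) for i, s in scores.items()}

# Hypothetical respondent: 12 best and 12 worst choices over 12 choice sets.
best = {"alternative facts": 4, "false attribution": 3, "ad hominem": 2,
        "stylistic flaws": 2, "formal logical error": 1}
worst = {"rhetorical question": 4, "false dilemma": 3, "bandwagon effect": 3,
         "argument from ignorance": 2}

raw = bw_scores(best, worst)
print(raw["alternative facts"], raw["rhetorical question"])  # -> 4 -4
# Adding +5 maps the raw scores onto a 1..9 (nine-point Likert-like) range:
likert_like = {i: s + 5 for i, s in raw.items()}
print(normalize(raw)["formal logical error"])  # -> 62.5
```

The function and variable names are ours; the counting, +5 shift, and 0-100 normalization follow the description in the text.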
Fig. 3 visualizes the normalized BW scores for all items and combines three types of
results. First, it visualizes the BW scores of each experimental condition and summarizes their
values in a table below. Furthermore, we report changes in the ranking positions between the
experimental condition in which respondents saw explanations of the rhetorical device and the
experimental condition in which they did not see them, as well as display the differences of the
two normalized BW scores for an item. Finally, we statistically tested how the BW scores relate
to each experimental manipulation using the Pearson correlation coefficient on the right-hand
side of Fig. 3.
Fig. 3. Normalized BW scores per item and experimental condition (100 = most closely resembles fake news; 0 = least closely resembles fake news). Item order: Alternative facts, False attribution, Ad hominem, Formal logical error, Stylistic flaws, Argument from ignorance, Bandwagon effect, False dilemma, Rhetorical question.

Normalized mean BW scores:
- Explanations in business context: 100.00, 60.67, 44.51, 32.93, 30.79, 36.28, 17.07, 15.24, 0.00
- Explanations in political context: 100.00, 75.70, 52.11, 51.76, 51.06, 25.35, 17.25, 7.04, 0.00
- Statements alone in business context: 100.00, 9.40, 69.59, 0.00, 40.44, 20.69, 73.67, 51.10, 27.27
- Statements alone in political context: 86.58, 13.42, 100.00, 12.99, 80.52, 27.71, 43.72, 24.68, 0.00

Pearson correlations between conditions: explanations political vs. explanations business = .94***; statements alone business vs. explanations business = .39 and vs. explanations political = .23; statements alone political vs. explanations business = .51, vs. explanations political = .50, and vs. statements alone business = .73**.

Differences between explanations vs. statements alone:
- Difference in ranking positions for business context: 0, -6, 0, -5, 0, -1, 5, 4, 3
- Difference in ranking positions for political context: -1, -5, 2, -4, 2, 1, 3, 2, 0 (correlation in ranking differences between contexts = .85***)
- Difference in normalized mean BW scores for business context: 0.00, -51.27, 25.08, -32.93, 9.65, -15.59, 56.59, 35.85, 27.27
- Difference in normalized mean BW scores for political context: -13.42, -62.28, 47.89, -38.77, 29.46, 2.35, 26.47, 17.63, 0.00 (correlation in normalized mean BW score differences between contexts = .83***)

Note: N = 416; *: p < .1; **: p < .05; ***: p < .01
When looking at the first two rows (i.e., the ones related to the explanations of the
rhetorical device), we found that alternative facts most closely resembled fake news (BW scores
= 100.00 for both groups) while rhetorical question least closely resembled fake news (0.00 for
both groups). A “grey area” of perception of information as fake news existed between these
two. False attribution (mean BW score of both groups = 68.19), ad hominem (48.31), and formal
logical error (42.34) ranked relatively high in terms of their perception as fake news, followed
by stylistic flaws (40.92), argument from ignorance (30.82), bandwagon effect (17.16), and false
dilemma (11.14).
When respondents saw the explanations of the applied fallacy, we observed a high
Pearson correlation coefficient of .94 (p < .01) between the normalized BW scores of the two
contexts. A likely explanation is that respondents followed our instructions and concentrated on
the explanations. However, when respondents saw the exemplary news statements without the
explanations, the normalized BW scores were also highly correlated between the political and
business contexts, with a Pearson correlation coefficient of .73 (p < .05). This high correlation
supports our first hypothesis (H1) and suggests that fallacies and rhetorical devices can indeed
capture the grey areas of fake news, i.e., the nuances in perception.
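The reported coefficient for the two explanation conditions can be reproduced from the normalized mean BW scores shown in Fig. 3; a plain-Python check:

```python
import math

# Normalized mean BW scores from Fig. 3 (item order: alternative facts,
# false attribution, ad hominem, formal logical error, stylistic flaws,
# argument from ignorance, bandwagon effect, false dilemma, rhetorical question).
business_expl  = [100.00, 60.67, 44.51, 32.93, 30.79, 36.28, 17.07, 15.24, 0.00]
political_expl = [100.00, 75.70, 52.11, 51.76, 51.06, 25.35, 17.25, 7.04, 0.00]

def pearson(x, y):
    """Sample Pearson correlation coefficient of two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

print(round(pearson(business_expl, political_expl), 2))  # -> 0.94
```

The same computation applied to the two statements-alone rows of Fig. 3 yields the .73 coefficient cited above.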
We noticed substantial differences between respondents who saw the news statements
alone and those who also saw the explanations. The correlation between the two groups was not
significant for either the political or business context (p > .10). While respondents in the first
group identified the news statement that employed alternative facts as made up and likewise
assigned rhetorical question a relatively low BW score (13.64), the BW scores of false
attribution and formal logical error were substantially lower compared to the case where the
explanations were also provided. For example, the ranking position of false attribution dropped by six places in the business context and by five in the political context. Likewise, respondents who saw the statements alone overemphasized the bandwagon effect and false dilemma technique as fake news compared to those who received the explanations. Meanwhile, the BW scores of stylistic flaws and ad hominem were higher when the statements appeared alone.
To test the second hypothesis (H2), we calculated the differences in BW scores and
ranking positions between the versions in which explanations were provided vs. the versions in
which the statements appeared alone. Fig. 3 lists the results in the final rows. The Pearson
correlation coefficients for the two contexts were .85 (p < .01) when comparing the differences
in the ranking positions and .83 (p < .01) when comparing the differences in the BW scores. As both correlations are high and significant, we consider this as support for H2.
To explore individual differences, we regressed each normalized BW score on the experimental condition together with the demographics age and gender, and report the results in Table 3. We additionally included a consistency measure,
i.e., a quality measure of respondents’ choices, which we adapted from Louviere et al. (2015).
The authors propose that consistency for each respondent should be measured as the individual
sum of all squared BW scores. Given the properties of the balanced incomplete block design (i.e., level balance and orthogonality), a perfectly consistent respondent achieves the highest consistency measure (in our case, with each item appearing four times: 2·4² + 2·3² + 2·2² + 2·1² = 60).
Table 3 (excerpt). Linear regression coefficients (DV: normalized BW scores; standard errors in parentheses). Item order: Alternative facts | False attribution | Ad hominem | Formal logical error | Stylistic flaws | Argument from ignorance | Bandwagon effect | False dilemma | Rhetorical question

Consistency: .47 (.11)*** | -.02 (.12) | .09 (.12) | -.01 (.01)* | .14 (.12) | -.02 (.11) | -.11 (.11) | -.33 (.11)*** | -.26 (.11)**
R²: .108 | .119 | .037 | .111 | .037 | .030 | .102 | .076 | .035

Note: N = 416; *: p < .1; **: p < .05; ***: p < .01
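The consistency measure can be computed directly from a respondent's raw BW scores. A brief sketch (the "noisy" score vector is a hypothetical illustration):

```python
def consistency(bw_scores):
    """Consistency measure adapted from Louviere et al. (2015):
    the individual sum of the respondent's squared raw BW scores."""
    return sum(s * s for s in bw_scores)

# A perfectly consistent respondent over 9 items (each shown four times)
# produces the raw BW scores 4, 3, 2, 1, 0, -1, -2, -3, -4:
perfect = [4, 3, 2, 1, 0, -1, -2, -3, -4]
print(consistency(perfect))  # -> 60  (= 2*4^2 + 2*3^2 + 2*2^2 + 2*1^2)

# A respondent choosing haphazardly tends toward scores near zero,
# hence a much lower consistency value:
noisy = [1, 0, 1, -1, 0, 0, -1, 1, -1]
print(consistency(noisy))  # -> 6
```

Squaring rewards extreme, repeatable choices, so the measure separates deliberate responders from near-random ones.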
The regression coefficients show that the perception depended on the experimental manipulations, in particular, whether respondents saw the explanations for the respective rhetorical devices or not.
For instance, without explanations, respondents did not perceive the news statements that
contained false attribution as fake news to the same degree (-25.79 and -19.23, p < .01). In
contrast, they overemphasized ad hominem as fake news (+10.11 and +13.21, p < .05) compared
to the condition in which we provided explanations. With few exceptions, the coefficients of
gender, age, and the consistency measure were significant for those items that we added as
natural boundaries of the experiment (i.e., alternative facts and rhetorical questions). For
example, regarding the dependent variable alternative facts, the positive and significant value for
age (.33) indicates that older respondents had a strong perception that this item constitutes fake
news (even when we did not provide the explanations). Yet, most of the fallacies were unaffected by these variables. This suggests that the degree to which people perceive fallacies as fake news is largely independent of such individual characteristics.
We tested the robustness of the results and replicated the political subset of the first
experiment (i.e., the two experimental conditions with explanations of the rhetorical device and
without) with 100 undergraduate students. We found that the consistency in their best-worst
choices (again calculated as the individual sum of the squared BW scores) was substantially
higher compared to the respondents in the panel. Nevertheless, the normalized BW scores were
similar with correlation coefficients of .94 (p < .01) and .73 (p < .05). We conclude that the results are robust across samples.
To validate that the chosen techniques are suitable for detecting nuances in people’s perceptions of information as fake news, we conducted a post-study with academic experts (tenured and non-tenured researchers in Information Systems or Marketing). Thereby, we tested two underlying assumptions of our main
study, namely: (i) that the used techniques are deceptive (in the sense of opinion-shaping) and
(ii) that news authors use them in news statements on social media.
We sent participation requests to about 60 professors and obtained a sample of 30: 25 full
professors and 5 assistant professors or post-doctoral researchers. Of the 30, 93% indicated that
they used one or more social media platforms at least once per month (Facebook: 73%,
LinkedIn: 73%, Twitter: 47%, Instagram: 37%), which we took as a sign that they were likely familiar with news consumption on social media.
We asked them to assess whether the techniques are deceptive. The respondents saw
eleven techniques: namely, the nine techniques from the main study and two new fallacies that
we considered when designing the main study, i.e., “post hoc ergo propter hoc” and “relative
privation” (Bennett, 2012; Fearnside & Holther, 1959). For each technique, the respondents
evaluated the following statement on a Likert scale from 1 to 7 (1: do not agree at all, 7: fully
agree): “I perceive the technique as deceptive (in the sense of opinion-shaping).” In Fig. 4, we summarize the responses.
Alternative facts were perceived as the most deceptive, followed by the remaining
techniques that we had chosen for the main study. The two newly added techniques, post hoc
ergo propter hoc and relative privation, ranked lowest among the studied items. These results
strengthen our anticipation that the chosen exemplary fallacies are suitable for detecting nuances
in people’s perceptions of information as fake news. Interestingly, among the experts, rhetorical
devices were perceived to be more deceptive than we anticipated and observed in the main study.
Subsequently, the respondents evaluated how often they thought news authors use each of
the eleven techniques in social media news statements (not at all, seldom, frequently, very
frequently). Out of all 330 (=30x11) evaluations, only five evaluations were “not at all”. Hence,
the majority of respondents felt that the techniques are actually used in social media. We take this as support for our second assumption.
Finally, respondents evaluated the following statement on a Likert scale from 1 to 7 (1:
do not agree at all, 7: fully agree): “I perceive a social media news statement as fake news when
the afore-shown techniques are used.” We deliberately placed this question toward the end of the
survey and used the term “fake news” here for the first time so as not to reveal the topic of the
main study in the previous questions. We obtained an average score of 5.50 on this measure,
indicating that the average participant agreed that using these techniques influences the perception of a statement as fake news.
We now turn to consumers’ reactions to an account containing fake news in their news feed, as derived from our second study. Consumers
who noticed fake news in their news feed primarily reacted toward the authors of the post, i.e.,
acquaintance or company (4.60-5.59), and less toward the platform (2.82-3.88). To explore this
finding further, we estimated a linear regression with each reaction as the dependent variable,
while the author of the post, frequency, gender, age, and social media usage intensity served as independent variables. We further controlled for respondents’ general answering behavior in Likert scales, which is also referred to as acquiescence bias (e.g., Dinev, Xu, Smith,
& Hart, 2013; Johnston, Werkentin, McBride, & Carter, 2016; Podsakoff, MacKenzie, Lee, &
Podsakoff, 2003). To this end, we incorporated an additional independent variable. For each
reaction, the additional variable consisted of the product of the other three reactions, and thus, it
captures respondents’ tendency to answer on the right- or left-hand side of the scale.
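This control-variable construction can be sketched on simulated data as follows (all values and variable names are hypothetical, not the study's):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Hypothetical 1-7 Likert responses for the four reaction items.
reactions = rng.integers(1, 8, size=(n, 4)).astype(float)
is_company = rng.integers(0, 2, size=n).astype(float)  # account-type dummy

# DV: one reaction; acquiescence control: product of the other three reactions.
y = reactions[:, 0]
acquiescence = reactions[:, 1] * reactions[:, 2] * reactions[:, 3]

# Ordinary least squares via numpy's least-squares solver.
X = np.column_stack([np.ones(n), is_company, acquiescence])
coefs, *_ = np.linalg.lstsq(X, y, rcond=None)
print(dict(zip(["intercept", "company", "acquiescence"], np.round(coefs, 3))))
```

Because the product of the other three reactions rises with a respondent's general tendency to agree, including it absorbs scale-use differences before the condition dummy is interpreted.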
The regression results show that companies generally suffer more from spreading fake
news than acquaintances: In such cases, it is significantly more likely that consumers will stop
following the company’s account or leave the platform entirely. The actual posting frequency
(every 4th vs. every 10th message) had no significant impact. Regarding social media usage, we
found that people who use social media more frequently have a more pronounced reaction when
they notice fake news in their news feed: They are significantly (albeit weakly) more likely to
report the friend to the platform (p < .1), significantly more likely to stop following the account
(p < .05), but significantly less likely to stop using the platform (p < .01). Heavy users have
developed a habit for social media use and may therefore be less likely to abandon the platform.
At the same time, fake news detracts from the user experience and perceived value of the
platform, leading people to take corrective action by reporting a friend who shares fake news.
Heavy users should also be more acquainted with the features offered by social media platforms,
such as the ability to unfollow or report an account, and may, therefore, be more prone to using
them.
To conclude, the frequency of messages that users perceive as fake news is not important for
users’ reaction toward the platform or the sender. However, we observe a more severe reaction
among users when the message comes from a company instead of an acquaintance. Together
with the findings on fallacies, we conclude that companies must be cautious about how readers
perceive their messages and should carefully evaluate their choice of rhetorical devices.
While other disciplines have generally conceptualized fake news as binary (either true or
false) and lacked a common definition, our manuscript represents a first step in analyzing and
defining the term “fake news” apart from its content. Without claiming to cover the whole
spectrum of fake news, we propose that rhetorical devices play a fundamental role in people’s
perception of information as fake news. Thus, we sought to study the use of exemplary fallacies
In our paper, we observed that consumers differentiate nuances of fake news (i.e., grey
areas) with high correlations between political and business contexts. However, without
explanations, some fallacies successfully manage to distract their audience from the statements’
contents. Readers are less likely to perceive these statements as fake news when the statements appear in isolation than when they appear alongside an explanation of the underlying rhetorical device. For example,
respondents who did not receive explanations about false attribution did not immediately
recognize this fallacy as fake news. At the same time, they overemphasized the perception of other fallacies, such as ad hominem, as fake news.
Our study challenges the simplistic view that fake news has a binary operationalization
based on the availability of facts. Instead, fake news is more multi-faceted. Focusing attention on
fallacies in news statements can enhance consumers’ information literacy (i.e., their ability to
discriminate between credible and fake news) and help them overcome common individual
prejudices. Besides, such a focus provides further opportunities for scholars and practitioners to
identify and prevent fake news. This would help not only in social media, but also in traditional news media.
Our study also points out that fake news is not only of concern in politics, but that the
studied rhetorical devices are similarly perceived as fake news in our business context. This
observation suggests that fake news can cause extensive harm to companies. Business cases
support this argument: For instance, in November 2016, PepsiCo’s CEO was misquoted as
saying that Trump voters should “take their business elsewhere” (Picchi, 2016); this proved
damaging to the brand’s reputation and resulted in a plunge of its stock price, which took more
than a month to recover. Of course, fake news has implications that go far beyond politics or
business and can impact society at large. The COVID-19 pandemic and the issue of climate
change are two prominent breeding grounds for fake news, as many decisions in these domains are irreversible.
The results of the reaction experiment suggest that users show a stronger reaction when a company, rather than an acquaintance, shares perceived fake news. In particular, we found that users are less likely to withdraw from a relationship with an
acquaintance (as opposed to a company) by unfollowing the account or leaving the platform
altogether. This is in line with our expectations based on Clark and Mills (1993), who suggested
that the relationship with an acquaintance is governed by communal norms, while the
relationship with the company follows exchange norms. Regarding respondents’ characteristics,
our results indicate that heavy users are more likely to report a friend to the platform and stop
following an account but are less likely to quit using the platform altogether.
Like any research, ours contains several opportunities for future studies. For example, we
selected only a small number of fallacies. Future scholars could more deeply examine other
fallacies to determine which ones drive the perception of information as fake news. Moreover,
we only presented items in a textual format. Future studies could investigate the effect of other
presentation formats. Regarding how respondents ranked the items (see Table 3), we only
collected and compared age and gender to keep the questionnaire
as short as possible. These demographic details mainly affected the ranking of the items
“alternative facts” and “rhetorical questions”, but not the ranking of the fallacies. Future research
may collect additional characteristics to shed light on individual differences in the perception of
information as fake news. For example, extant research finds that fake news believability is
largely driven by confirmation bias and thus determined by a reader’s prior views on an issue
(Kim & Dennis, 2019). Ricco (2007) found that performance in fallacy identification and
explanation tasks is unrelated to income and education level. However, social media usage
intensity has been linked to a higher belief in certain conspiracy theories and misinformation
(Enders et al., 2021).
Another limitation is that we only conducted our study in the German market. Future
research may acknowledge the existence of cultural differences in the creation and perception of
information as fake news. The method chosen in this study should be well-suited for this task,
since best-worst scaling offers the advantage of using scales that are similarly interpreted by
respondents across cultures. As a result, if replications of the study measure differences in the perceptions across respondents of
different cultures, these differences can be directly linked to variable perceptions of information
as fake news across cultures. In other words, they will not be affected by different interpretations
of the scale used to study the perceptions. Furthermore, our best-worst scaling approach provides
a sound methodological basis for such cross-cultural comparisons.
A final limitation is that, while we can place all fallacies on a scale that ranges between
“most” and “least likely resembles fake news”, we cannot determine a threshold beyond which
information is perceived as fake news. Usefully, Dyachenko, Reczek, and Allenby (2014) and
Louviere et al. (2015) have provided some initial approaches that enable researchers to measure
this threshold when applying best-worst scaling.
References
Adamsen, J. M., Rundle-Thiele, S., & Whitty, J. A. (2013). Best-worst scaling...reflections on presentation, analysis,
and lessons learnt from case 3 BWS experiments. Market and Social Research, 21(1), 9-27.
Aggarwal, P., & Larrick, R. P. (2012). When consumers care about being treated fairly: The interaction of
relationship norms and fairness norms. Journal of Consumer Psychology, 22(1), 114-127.
Allcott, H., & Gentzkow, M. (2017). Social media and fake news in the 2016 election. Journal of Economic
Perspectives, 31(2), 211-236.
Allport, G. W., & Postman, L. (1947). The psychology of rumor.
Alonso, M. A., Vilares, D., Gómez-Rodríguez, C., & Vilares, J. (2021). Sentiment analysis for fake news detection.
Electronics, 10(11), 1348.
Altay, S., Hacquin, A.-S., & Mercier, H. (2020). Why do so few people share fake news? It hurts their reputation.
New Media & Society, 1-22.
Bakir, V., & McStay, A. (2018). Fake news and the economy of emotions: problems, causes, solutions. Digital
Journalism, 6(2), 154-175.
Barnes, R. M., Johnston, H. M., MacKenzie, N., Tobin, S. J., & Taglang, C. M. (2018). The effect of ad hominem
attacks on the evaluation of claims promoted by scientists. PLoS ONE, 13(1), e0192025.
Begg, I., Armour, V., & Thérèse, K. (1985). On believing what we remember. Canadian Journal of Behavioural
Science, 17(3), 199-214.
Bennett, B. (2012). Logically fallacious: the ultimate collection of over 300 logical fallacies (academic edition):
eBookIt.com.
Bergström, A., & Jervelycke Belfrage, M. (2018). News in social media. Digital Journalism, 6(5), 583-598.
Berkowitz, D., & Schwartz, D. A. (2016). Miley, CNN and The Onion: When fake news becomes realer than real.
Journalism Practice, 10(1), 1-17.
Berthon, P. R., & Pitt, L. F. (2018). Brands, truthiness and post-fact: managing brands in a post-rational world.
Journal of Macromarketing, 38(2), 218-227.
Binoy, S. (2014). Logical fallacies in public discourse and law. Economic and Political Weekly, 49(40), 24-27.
Bradner, E. (2017). Conway: Trump White House offered 'alternative facts' on crowd size. Retrieved from
https://edition.cnn.com/2017/01/22/politics/kellyanne-conway-alternative-facts/index.html
Brasoveanu, A. M. P., & Andonie, R. (2021). Integrating machine learning techniques in semantic fake news
detection. Neural Processing Letters, 53(2), 3055-3072.
Brisson, J., Markovits, H., Robert, S., & Schaeken, W. (2018). Reasoning from an incompatibility: false dilemma
fallacies and content effects. Memory & Cognition, 46(5), 657-670.
Bronstein, M. V., Pennycook, G., Bear, A., Rand, D. G., & Cannon, T. D. (2019). Belief in fake news is associated
with delusionality, dogmatism, religious fundamentalism, and reduced analytic thinking. Journal of
Applied Research in Memory and Cognition, 8(1), 108-117.
Bryanov, K., & Vziatysheva, V. (2021). Determinants of individuals' belief in fake news: A scoping review
determinants of belief in fake news. PLoS ONE, 16(6), 1-25.
Buckner, H. T. (1965). A theory of rumor transmission. Public Opinion Quarterly, 29(1), 54-70.
Act to Improve Enforcement of the Law in Social Networks (Network Enforcement Act) (2017).
Clark, M. S., & Mills, J. (1993). The difference between communal and exchange relationships: What it is and is
not. Personality and Social Psychology Bulletin, 19(6), 684-691.
Clarke, J., Chen, H., Du, D., & Hu, Y. J. (2020). Fake news, investor attention, and market reaction. Information
Systems Research, 32(1), 35-52.
Delobelle, P., Cunha, M., Cano, E. M., Peperkamp, J., & Berendt, B. (2019). Computational ad hominem detection.
Paper presented at the Proceedings of the 57th Annual Meeting of the Association for Computational
Linguistics: Student Research Workshop.
Di Domenico, G., & Visentin, M. (2020). Fake news or true lies? Reflections about problematic contents in
marketing. International Journal of Market Research, 62(4), 409-417.
DiFonzo, N., & Bordia, P. (2007). Rumor psychology: social and organizational approaches: American
Psychological Association.
Dinev, T., Xu, H., Smith, J. H., & Hart, P. (2013). Information privacy and correlates: an empirical attempt to bridge
and distinguish privacy-related concepts. European Journal of Information Systems, 22(3), 295-316.
Dowden, B. (1993). Logical reasoning: Wadsworth Pub. Co.
Dowden, B. (2019). Fallacies. In The Internet Encyclopedia of Philosophy.
Dyachenko, T., Reczek, R. W., & Allenby, G. M. (2014). Models of sequential evaluation in best-worst choice
tasks. Marketing Science, 33(6), 828-848.
Egelhofer, J. L., & Lecheler, S. (2019). Fake news as a two-dimensional phenomenon: a framework and research
agenda. Annals of the International Communication Association, 43(2), 97-116.
Enders, A. M., Uscinski, J. E., Seelig, M. I., Klofstad, C. A., Wuchty, S., Funchion, J. R., . . . Stoler, J. (2021). The
relationship between social media use and beliefs in conspiracy theories and misinformation. Political
Behavior, 1-24.
Faragó, L., Kende, A., & Krekó, P. (2020). We only believe in news that we doctored ourselves. Social Psychology,
51(2), 77-90.
Fearnside, W. W., & Holther, W. B. (1959). Fallacy: the counterfeit of argument. Englewood Cliffs, NJ: Prentice-Hall.
Finn, A., & Louviere, J. J. (1992). Determining the appropriate response to evidence of public concern: the case of
food safety. Journal of Public Policy and Marketing, 11(2), 12-25.
Fiske, S. T., & Taylor, S. E. (2013). Social cognition: from brains to culture: Sage.
Floridi, L. (2009). Logical fallacies as informational shortcuts. Synthese, 167(2), 317-325.
Gardner, D. M. (1975). Deception in advertising: a conceptual approach. Journal of Marketing, 39(1), 40-46.
Gilbert, D. T., Krull, D. S., & Malone, P. S. (1990). Unbelieving the unbelievable: Some problems in the rejection of
false information. Journal of Personality and Social Psychology, 59(4), 601-613.
Google Trends. (2021). Retrieved from https://trends.google.de/trends/explore?date=all&geo=DE&q=fake%20news
Gregor, S. (2006). The nature of theory in information systems. Management Information Systems Quarterly, 30(3),
611-642.
Habernal, I., & Gurevych, I. (2017). Argumentation mining in user-generated web discourse. Computational
Linguistics, 43(1), 125-179.
Hancock, J. T. (2007). Digital deception. In Oxford handbook of internet psychology (pp. 289-301).
Hauser, J., Eggers, F., & Selove, M. (2019). The strategic implications of scale in choice-based conjoint analysis.
Marketing Science, 38(6), 1059-1081.
Hinz, O., Schlereth, C., & Zhou, W. (2015). Fostering the adoption of electric vehicles by providing complementary
mobility services: a two-step approach using Best–Worst Scaling and Dual Response. Journal of Business
Economics, 85(8), 921-951.
Johnston, A. C., Warkentin, M., McBride, M., & Carter, L. (2016). Dispositional and situational factors: influences
on information security policy violations. European Journal of Information Systems, 25(3), 231-251.
Jones-Jang, S. M., Mortensen, T., & Liu, J. (2021). Does media literacy help identification of fake news?
Information literacy helps, but other literacies don't. American Behavioral Scientist, 65, 371-388.
Kahneman, D. (2011). Thinking, fast and slow. New York: Farrar, Straus and Giroux.
Kaufmann, L., Rottenburger, J., Carter, C. R., & Schlereth, C. (2018). Bluffs, lies, and consequences: a
reconceptualization of bluffing in buyer-supplier negotiations. Journal of Supply Chain Management,
54(2), 49-70.
Kim, A., & Dennis, A. R. (2019). Says who? The effects of presentation format and source rating on fake news in
social media. Management Information Systems Quarterly, 43(3), 1025-1039.
Kim, A., Moravec, P. L., & Dennis, A. R. (2019). Combating fake news on social media with source ratings: the
effects of user and expert reputation ratings. Journal of Management Information Systems, 36(3), 931-968.
Knapp, R. H. (1944). A psychology of rumor. Public Opinion Quarterly, 8(1), 22-37.
Laato, S., Islam, A. N., Islam, M. N., & Whelan, E. (2020). What drives unverified information sharing and
cyberchondria during the COVID-19 pandemic? European Journal of Information Systems, 29(3), 288-305.
Lazer, D. M. J., Baum, M. A., Benkler, Y., Berinsky, A. J., Greenhill, K. M., Menczer, F., . . . Zittrain, J. L. (2018).
The Science of Fake News. Science, 359(6380), 1094-1096.
Lecheler, S., & Kruikemeier, S. (2016). Re-evaluating journalistic routines in a digital age: A review of research on
the use of online sources. New Media & Society, 18(1), 156-171.
Lewandowsky, S., Ecker, U. K., Seifert, C. M., Schwarz, N., & Cook, J. (2012). Misinformation and its correction:
continued influence and successful debiasing. Psychological Science in the Public Interest, 13(3), 106-131.
Li, Y., Thomas, M. A., & Liu, D. (2021). From semantics to pragmatics: where IS can lead in Natural Language
Processing (NLP) research. European Journal of Information Systems, 30(5), 569-590.
Louviere, J., Flynn, T., & Marley, A. (2015). Best-Worst Scaling: Theory, Methods and Applications: Cambridge
University Press.
Louviere, J., Lings, I., Islam, T., Gudergan, S., & Flynn, T. (2013). An introduction to the application of (case 1)
best–worst scaling in marketing research. International Journal of Research in Marketing, 30(3), 292-303.
Madon, H., Fadzil, I. L. M., & Rahmat, N. H. (2021). A rhetorical analysis of news article on work from home.
European Journal of Applied Linguistics Studies, 3(2), 22-36.
Martel, C., Pennycook, G., & Rand, D. G. (2020). Reliance on emotion promotes belief in fake news. Cognitive
Research: Principles and Implications, 5(47), 1-20.
Mazzeo, V., Rapisarda, A., & Giuffrida, G. (2021). Detection of fake news on Covid-19 on web search engines.
Frontiers in Physics, 9, 1-14.
Moravec, P. L., Kim, A., & Dennis, A. R. (2020). Appealing to sense and sensibility: System 1 and system 2
interventions for fake news on social media. Information Systems Research, 31(3), 987-1006.
Moravec, P. L., Minas, R. K., & Dennis, A. R. (2019). Fake news on social media: People believe what they want to
believe when it makes no sense at all. Management Information Systems Quarterly, 43(4), 1343-1360.
Mueller Loose, S., & Lockshin, L. (2013). Testing the robustness of best worst scaling for cross-national
segmentation with different numbers of choice sets. Food Quality and Preference, 27(2), 230-242.
Mueller, S., Lockshin, L., & Louviere, J. J. (2010). What you see may not be what you get: Asking consumers what
matters may not reflect what they choose. Marketing Letters, 21(4), 335-350.
Nickerson, R. S. (1998). Confirmation bias: a ubiquitous phenomenon in many guises. Review of General
Psychology, 2(2), 175-220.
Pennycook, G., Bear, A., Collins, E. T., & Rand, D. G. (2020). The implied truth effect: attaching warnings to a
subset of fake news headlines increases perceived accuracy of headlines without warnings. Management
Science, 66(11), 4944-4957.
Pennycook, G., Cannon, T. D., & Rand, D. G. (2018). Prior exposure increases perceived accuracy of fake news.
Journal of Experimental Psychology: General, 147(12), 1865-1880.
Pennycook, G., & Rand, D. G. (2018). Who falls for fake news? The roles of analytic thinking, motivated reasoning,
political ideology, and bullshit receptivity. Working Paper.
Pennycook, G., & Rand, D. G. (2021). The psychology of fake news. Trends in Cognitive Sciences, 25(5), 388-402.
Picchi, A. (2016). Fake news spurs Trump backers to boycott PepsiCo. Retrieved from
https://www.cbsnews.com/news/trump-supporters-boycott-pepsico-over-fake-ceo-reports/
Podsakoff, P. M., MacKenzie, S. B., Lee, J.-Y., & Podsakoff, N. P. (2003). Common method biases in behavioral
research: a critical review of the literature and recommended remedies. Journal of Applied Psychology,
88(5), 879-903.
Ricco, R. B. (2007). Individual differences in the analysis of informal reasoning fallacies. Contemporary
Educational Psychology, 32(3), 459-484.
Rubin, V. L. (2017). Deception detection and rumor debunking for social media. The SAGE Handbook of Social
Media Research Methods, 342.
Schlereth, C., & Skiera, B. (2012). DISE: Dynamic Intelligent Survey Engine. In A. Diamantopoulos, W. Fritz, & L.
Hildebrandt (Eds.), Quantitative Marketing and Marketing Management - Festschrift in Honor of Udo
Wagner (pp. 225-243). Wiesbaden: Gabler Verlag.
Schlereth, C., & Skiera, B. (2017). Two new features in discrete choice experiments to improve willingness to pay
estimation that result in new methods: separated (adaptive) dual response. Management Science, 63(3),
829-842.
Shearer, E. (2021). More than eight-in-ten Americans get news from digital devices. Retrieved from
https://www.pewresearch.org/fact-tank/2021/01/12/more-than-eight-in-ten-americans-get-news-from-
digital-devices/
Shibutani, T. (1966). Improvised news: a sociological study of rumor: Ardent Media.
Shu, K., Sliva, A., Wang, S., Tang, J., & Liu, H. (2017). Fake news detection on social media: a data mining
perspective. ACM SIGKDD Explorations Newsletter, 19(1), 22-36.
Silverman, C. (2016). This analysis shows how viral fake election news stories outperformed real news on
Facebook. BuzzFeed News. Retrieved from https://www.buzzfeednews.com/article/craigsilverman/viral-
fake-election-news-outperformed-real-news-on-facebook
Strack, F., & Deutsch, R. (2004). Reflective and impulsive determinants of social behavior. Personality and Social
Psychology Review, 8(3), 220-247.
Tandoc, E. C., Lim, Z. W., & Ling, R. (2018). Defining “fake news”. Digital Journalism, 6(2), 137-153.
Tversky, A., & Kahneman, D. (1981). Judgments of and by representativeness.
Van Eemeren, F. H., Garssen, B., & Meuffels, B. (2009). Fallacies and judgments of reasonableness: Empirical
research concerning the pragma-dialectical discussion rules (Vol. 16): Springer Science & Business
Media.
Van Eemeren, F. H., Garssen, B., & Meuffels, B. (2010). Fallacies and judgments of reasonableness: empirical
research concerning the pragma-dialectical discussion rules. Information Design Journal, 18(2), 175-177.
Van Eemeren, F. H., Grootendorst, R., Johnson, R. H., Plantin, C., & Willard, C. A. (2013). Fundamentals of
argumentation theory: a handbook of historical backgrounds and contemporary developments: Routledge.
Van Eemeren, F. H., Grootendorst, R., & Snoeck Henkemans, A. F. (2002). Argumentation: analysis, evaluation,
presentation: Lawrence Erlbaum Associates.
Visentin, M., Pizzi, G., & Pichierri, M. (2019). Fake news, real problem for brands: the impact of content
truthfulness and source credibility on consumers' behavioral intentions toward the advertised brands.
Journal of Interactive Marketing, 45(C), 99-112.
Vosoughi, S., Roy, D., & Aral, S. (2018). The spread of true and false news online. Science, 359(6380), 1146-1151.
Walton, D. (2009). Argumentation theory: a very short introduction. In G. Simari & I. Rahwan (Eds.),
Argumentation in Artificial Intelligence (pp. 1-22). Boston, MA: Springer US.
Walton, D. N. (1987). The ad hominem argument as an informal fallacy. Argumentation, 1(3), 317-331.
Weiss, A. P., Alwan, A., Garcia, E. P., & Garcia, J. (2020). Surveying fake news: assessing university faculty's
fragmented definition of fake news and its impact on teaching critical thinking. International Journal for
Educational Integrity, 16(1), 1-30.
Wilson, T. D., & Brekke, N. (1994). Mental contamination and mental correction: unwanted influences on
judgments and evaluations. Psychological Bulletin, 116(1), 117-142.
Wineburg, S., McGrew, S., Breakstone, J., & Ortega, T. (2016). Evaluating information: the cornerstone of civic
online reasoning. Stanford Digital Repository, 8, 2018.
ZDF. (2021). Die dreistesten Fake News der Geschichte. Retrieved from https://www.zdf.de/dokumentation/die-
glorreichen-10/die-dreistesten-fake-news-der-geschichte-102.html
Zhou, X., & Zafarani, R. (2020). A survey of fake news: fundamental theories, detection methods, and opportunities.
ACM Computing Surveys, 53(5), 1-40.