
Do I think BLS data are BS?

The Consequences of Conspiracy Theories


Katherine Levine Einstein∗
Assistant Professor, Department of Political Science, Boston University
kleinst@bu.edu

David M. Glick
Assistant Professor, Department of Political Science, Boston University
dmglick@bu.edu
August 13, 2014

Abstract
While the willingness of people to believe unfounded and conspiratorial explanations
of events is fascinating and troubling, few have addressed the broader impacts of the
dissemination of conspiracy claims. We use survey experiments to assess whether real-
istic exposure to a conspiracy claim affects conspiracy beliefs and trust in government.
These experiments yield interesting and potentially surprising results. We discover that
respondents who are asked whether they believe in a conspiracy claim after reading a
specific allegation actually report lower beliefs than those not exposed to the specific
claim. Turning to trust in government, we find that exposure to a conspiracy claim has
a potent negative effect on trust in government services and institutions including those
unconnected to the allegations. Moreover, and consistent with our belief experiment,
we find that first asking whether people believe in the conspiracy mitigates the nega-
tive trust effects. Combining these findings suggests that conspiracy exposure increases
conspiracy beliefs and reduces trust, but that asking about beliefs prompts additional
thinking about the claims, which softens and/or reverses the exposure’s effect on beliefs
and trust.


∗ Authors’ names are listed alphabetically. Einstein is the corresponding author. The authors would like to
thank Adam Berinsky, Jennifer Hochschild, Doug Kriner, Brendan Nyhan, Dustin Tingley, seminar participants
at Dartmouth College, and five anonymous reviewers for their helpful comments.

Conspiracy theories have long pervaded politics and other realms. Domestically and
abroad, many people are eager to hear, share, and discuss conspiracy theories that purport
to offer the explanations for events ranging from presidential assassinations, to the moon
landing, to figure skating judging. For political scientists interested in conspiracy theories,
the contemporary era abounds with salient examples. A “birther” movement asserts that
President Obama’s Hawaiian birth certificate was forged. Others question whether the Sandy
Hook massacre was actually part of a gun control plot or if sustainable development initiatives
are evidence of a United Nations plan to eliminate private property.
Though these conspiracy theories are part of our contemporary political zeitgeist and
tap into important scholarly questions, they remain substantially understudied. The small
existing literature has largely focused on belief in conspiracy theories while eschewing the
consequences of conspiracy beliefs. In particular, the extant conspiracy literature frequently
overlaps with research concerning those who hold factually inaccurate beliefs more broadly.
Much of this research centers on the potential for correcting misinformation (Kuklinski et al.,
2003; Nyhan and Reifler, 2010; Berinsky, 2013; Lewandowsky, Oberauer and Gignac, 2013)
and largely finds that efforts to educate the misinformed fail to change minds, “backfire,” or
lead to over-corrections (Nyhan and Reifler, 2010; Berinsky, 2013; Cobb, Nyhan and Reifler,
2013).
These findings concerning conspiracy beliefs and misinformation are interesting, impor-
tant, and grounds for pessimism. Nevertheless, the normatively most problematic conspiracy
theory story may not be the familiar one of the ill-informed public. The literature which
emphasizes the micro-processes undergirding conspiracy beliefs may be missing conspiracy
theories’ most disturbing effects by only focusing on individuals’ factual beliefs and ignoring
the broader consequences of the dissemination of conspiracy theories. The more significant,
normatively troubling, and under-investigated reality is that the propagation of conspir-
acy theories may undermine confidence in government in ways that extend far beyond the
substance of the conspiracies.

Citizen trust has long been viewed as a critical component of a functioning democracy
(Citrin and Muste, 1999; Levi and Stoker, 2000; Putnam, 2001; Hibbing and Theiss-Morse,
2002; though see Hardin, 1999). The spread of conspiracy theories might undermine trust in
government. A New York Times editorial from October 2012 (NYTimes, 2012) formulates
the potential connection between conspiracy theories and trust:

    When desperation leads political critics of the president to discredit important
    nonpolitical institutions—including the Census Bureau, the Bureau of Labor
    Statistics, the Federal Reserve, and the Congressional Budget Office—the damage
    can be long-lasting. If voters come to mistrust the most basic functions of
    government, the resulting cynicism can destroy the basic compact of citizenship.

Our main substantive question follows directly from the intuition in this editorial. We explore
whether exposure to conspiracy theories reduces trust in government at all levels. While there
is some evidence that incorrect factual information diminishes political participation (Jolley
and Douglas, 2013) and leads individuals to vote against their own interests (Bartels, 2005),
no studies to date have empirically explored the effects of conspiracies on the normatively
and institutionally critical issue of confidence in government. Instead, these concerns have
largely been confined to expert opinion pieces like Nyhan (2013) and the New York Times
editorial above.
To estimate the consequences of conspiracy claims, however, we need to measure both
exposure to a conspiracy theory and belief in that conspiracy. Attempting to do so requires
taking a second, more methodological, question seriously. Can researchers ask questions
about belief in conspiracy theories without affecting the responses they get? As physi-
cists have known since the articulation of Heisenberg’s Uncertainty Principle, measuring
one variable in an experiment can bias the measurement of another. A wealth of research
has revealed that some survey questions can have unintended consequences and accidentally
shape responses (Kuklinski, Cobb and Gilens, 1997; Presser and Stinson, 1998; Gingerich,
2010; Nyhan and Reifler, 2010; Bullock, Imai and Shapiro, 2011; Blair and Imai, 2012). It
is possible, then, that merely asking a question about conspiracy beliefs—as many scholarly
articles and public surveys do—could accidentally engender inaccurate estimates of these be-
liefs. Thus, as part of our assessment of conspiracy consequences, we also estimate whether
conspiracy beliefs are subject to similar research-induced effects.
We therefore ask substantive and methodological questions, and make both types of
contributions. We use two survey experiments to realistically measure the trust effects of
exposure to conspiracies and the effects of asking conspiracy belief questions. First, we
discover that respondents who are both exposed to a conspiracy claim and asked a question
about it are less likely to express belief in that conspiracy. These results suggest critical
challenges in measuring actual conspiracy beliefs. Second, we reveal that respondents who
are exposed to a conspiracy theory exhibit lower levels of trust in government as long as
they are not asked a question about their conspiracy belief. These results suggest that
the disturbing implications outlined by the New York Times may indeed be of concern for
democracy. Third, we find that these negative effects of conspiracy exposure are mitigated
when respondents are asked about their conspiracy belief. These final findings indicate that
asking a question about conspiracy belief may inoculate respondents from the adverse effects
of conspiracy exposure by forcing them to think about the claims.

1 Conspiracy Exposure, Beliefs, and Trust in Government

Before investigating the many possible effects of conspiracy exposure and belief, we first
need to define what we mean by conspiracy. Our experiments and analyses use a conspiracy
theory—and not factual misinformation—as their main treatment. The two concepts are
related, but distinct. While factual misinformation encompasses essentially any fact that is
incorrect, conspiracy theories are flawed, illegitimate, and unfalsifiable causal explanations
at odds with a widely accepted mainstream account of events (Brotherton, French and
Pickering, 2013; Keeley, 1999; Coady, 2006; Aaronovitch and Langton, 2010). As political
scientists, we are primarily interested in government conspiracies, in which conspiracists
accuse government officials of promulgating an “official story” in a deliberate attempt to
deceive the public.
This article focuses on government conspiracy theories, rather than factual misinforma-
tion, because the link between them and government trust is clearer. Indeed, it is intuitive
that believing the government is intentionally misrepresenting data (or allowing disasters to
occur, or committing murders)—or even being exposed to stories about these topics—might
affect your feelings about government more generally; conspiracy theories are inherently cyn-
ical about the government. Conversely, it does not immediately follow that thinking that,
say, the earth is cooling should lead citizens to mistrust their democratic government.
While our analyses center on conspiracies, we use the far more ample political science
literature on misinformation to help us generate hypotheses. We acknowledge, however, that
many of our predictions—particularly those having to do with trust in government—apply
to conspiracy theories, and not necessarily factual misinformation in general.

1.1 Beliefs

Most previous work concerning political conspiracy theories focuses on beliefs and accep-
tance. Common sense and an ample literature suggest that greater exposure to conspiracy
theories will increase belief in them (Uscinski and Atkinson, 2013; Banas and Miller, 2013;
Berinsky, 2013; Lewandowsky et al., 2013; Mulligan and Habel, 2013). Indeed, it turns out
to be remarkably easy to generate persistent conspiracy belief. Psychological research on the
encoding of misinformation suggests that false facts persist even when exposure to misin-
formation is weak (with few repetitions), and when the correction to that misinformation is
strong (with many repetitions) (Ecker et al., 2011). Similarly, even providing falsifying in-
formation about conspiracy theories might help them spread. Skurnik et al. (2005) discover
that informing people that a consumer claim is false paradoxically causes them to recall it
as true thanks to repetition. This research leads us to Belief Hypothesis 1: Exposure to
a conspiracy theory will increase belief in that conspiracy theory.
Belief Hypothesis 1 centers on the straightforward effect of conspiracy theory exposure
on actual beliefs. However, conspiracy theories have a more complicated effect on measured
beliefs. Measuring conspiracy beliefs typically requires asking people if they accept con-
spiratorial claims. It is possible that these types of questions may produce unintended and
surprising results. A rich scholarship has identified backlash effects, in which persuasive
efforts have unintended consequences (Lord, Ross and Lepper, 1979; Edwards and Smith,
1996; Ansolabehere and Iyengar, 1997; Bullock, 2007; Chong and Druckman, 2007; Nyhan
and Reifler, 2010; Kriner and Howell, 2012). While not themselves persuasive in nature,
questions about conspiracy belief may provoke analogous unanticipated results.
We suspect that, under certain conditions, asking a question about conspiracy beliefs
might reduce respondents’ admitted and/or actual belief in the conspiracy. In particular, we
believe a backlash effect may occur when the belief question references a specific and highly
familiar allegation. The familiarity of the conspiracy can stem from a variety of sources.
It might simply be highly salient due to media coverage (e.g., Obama’s birth certificate).
Or, more relevant to our research design, the conspiracy might feel familiar to a respondent
because it was recently referenced, say, in an experimental treatment. When asked a belief
question about an already-familiar conspiracy theory, a respondent might view the issue as
considerably more complex and nuanced. She may also question whether she wants to stand
with the cynics. Survey research in general suggests that individuals who are prompted by
questions to consider a particular topic in greater depth are more apt to offer responses
that are complex, equivocal, and moderate (Zaller and Feldman, 1992; Eagly and Chaiken,
1993; Barker and Hansen, 2005; Rahn, 2000). More specific to conspiracy theories, Banas
and Miller (2013) illustrate that inoculation treatments can induce resistance to conspiracy
theories by spurring more nuanced, equivocal thinking. While asking a question about
conspiracy beliefs in conjunction with a conspiracy exposure is a far weaker prompt than
the inoculation treatments in Banas and Miller (2013), it may play a similar role. Indeed,
psychological research on the processing of misinformation reveals that questions can instill
doubt, spur greater scrutiny, and augment strategic memory (Lewandowsky et al., 2012).
This leads us to Belief Hypothesis 2: Exposure to a conspiracy theory decreases measured
conspiracy belief.
While the hypotheses are individually straightforward, together they are a bit compli-
cated. They could both be true, but if the latter is valid, and exposure to a conspiracy
theory decreases admitted belief as measured by a direct question, then one cannot reliably
measure the actual beliefs necessary to confirm the first hypothesis with normal survey
questions.

1.2 Conspiracy Effects on Trust

Our most novel question and hypotheses concern the effect of exposure to conspiratorial
claims on trust in government. Nyhan’s (2013) journalistic piece suggests broad-based neg-
ative democratic consequences as a result of the widespread dissemination of conspiracy
beliefs. Keeley’s (1999) philosophical exploration of conspiracy theories suggests that they
critically undermine trust in civic institutions, which in turn prevents those same institutions
from refuting them. There is some empirical evidence to support this conjecture. Exposure
to conspiracy theories diminishes political participation (Jolley and Douglas, 2013) and be-
lief in a conspiracy induces negative attitudes towards the government (Allport and Lepkin,
1945). We build on these ideas to examine the possibility that exposure to a conspiracy
theory could cause general distrust in government institutions.
Consistent with our discussion of conspiracy beliefs, we introduce two trust predictions
that flow directly from the two concerning beliefs. The first applies the intuitive logic of
exposure discussed above to the trust question. Increased acceptance of a conspiracy claim
stemming from exposure—the lynchpin of Belief Hypothesis 1—may in turn diminish trust in
government institutions because conspiracy claims about government are rooted in cynicism
and distrust of it. Because of the complications with admitted beliefs, we frame (and
measure) Trust Hypothesis 1 in terms of exposure rather than belief: exposure to a conspiracy
theory will decrease trust in government. This decrease will even affect trust in institutions
not directly implicated in the allegations. This conceptualization most closely mimics the
“real world” effect of conspiracy claims. People are rarely (if ever) actually asked about
their beliefs after encountering conspiracy claims.
We earlier observed that if Belief Hypothesis 2 is correct, Belief Hypothesis 1 is not
testable with a direct beliefs question. However, evidence that exposure to conspiracy claims
reduces trust would also implicitly suggest support for the hypothesis that exposure increases
beliefs. If respondents who are exposed to a conspiracy exhibit lower levels of trust in
government, the most likely explanation is that their belief in the conspiracy increased.
While the potential unintended consequences of asking a belief question may prevent us
from directly estimating the connection between Belief Hypothesis 1 and Trust Hypothesis
1, we can view any exploration (without a belief question) of Trust Hypothesis 1 as at least
an indirect test of the first belief hypothesis.
Turning to our second belief and trust hypotheses, we postulated that explicitly asking a
question about conspiracy beliefs might decrease admitted beliefs by prompting respondents
to “stop and think.” While this prediction could simply reflect a narrow measurement
effect, we believe that this diminished admitted belief could have broader implications. In
particular, prompting individuals to more carefully consider the conspiracy claims might
inoculate individuals from other potentially negative effects of conspiracy exposure. In other
words, asking the belief question might have real effects on real beliefs and not just affect
measurement. Applying this logic to the diminished trust prediction, we arrive at Trust
Hypothesis 2: Among those who have already been exposed to a conspiracy claim, those
asked a question about their belief in the claim will have relatively higher trust in government.
Again, the basic logic here is that asking about belief will prompt additional thinking, which
will lead at least some to reject the conspiratorial claim and its assorted negative trust effects.

1.3 Four Interconnected Hypotheses

In sum, we evaluate four related hypotheses that can all be correct simultaneously. For two of
these hypotheses—Belief Hypothesis 2 and Trust Hypothesis 2—we are able to provide direct
tests. Belief Hypothesis 2 suggests that those individuals who are exposed to a conspiracy
theory will exhibit lower levels of admitted conspiracy belief. This same “stop and think”
mechanism carries over theoretically to trust in government. Trust Hypothesis 2 predicts that
individuals who are led to seriously consider a specific conspiracy claim via a belief question
and conspiracy exposure will not experience decreased trust in government. Rather, their
levels of trust will be higher relative to those who were not prompted to “stop and think”
by a belief question.
Conversely, Belief Hypothesis 1 and Trust Hypothesis 1 center on true conspiracy beliefs
and effects which may be difficult to measure. If Belief Hypothesis 2 is accurate, this latent
belief is unmeasurable. For this reason, we posit that we can use one test that investigates
whether exposure to a conspiracy theory diminishes trust in government to assess both real
effects. Indeed, directly measuring trust effects also indirectly speaks to true belief effects. If
exposure does decrease trust, the most likely mechanism is an increase in actual conspiracy
beliefs. While these two hypotheses are clearly not as quantifiable as their counterparts
above, their strength lies in their realism. A test linking exposure to trust in government
most closely reflects how an individual might encounter a conspiracy in the real world. What
we lose in our ability to precisely pin down mechanisms, we gain in external validity.

2 Data and Methods

We use a pair of experiments to evaluate our hypotheses. We begin by exploring beliefs. Here,
the key causal variable is whether an individual is exposed to a conspiracy allegation, and
the dependent variable is whether she believes in that conspiracy. We then move on to trust.
In this analysis, all participants are exposed to the conspiracy claim and the manipulation
is whether they are asked about their conspiracy beliefs and trust or only about trust.
Our primary goal is to evaluate the causal effect of mainstream media exposure to a con-
spiracy theory on trust in government. Consequently, when we designed our experiments, we
erred on the side of realism and external validity. This design, however, comes with inher-
ent tradeoffs. We sacrifice some ability to cleanly parse the precise psychological mechanisms
underlying our results.
In both sets of experiments, our exposure-to-conspiracy treatment is a newspaper article
describing the view—propagated most prominently by former General Electric CEO Jack
Welch—that the Bureau of Labor Statistics (BLS) manipulated recently reported unemploy-
ment data for political reasons. The synthetic article (displayed in Figure 1) is filled with
economic data and includes a sober rebuttal to the cynical claims. While Welch initiated
the allegations on Twitter, they were widely disseminated and discussed in the mainstream
media and beyond in October 2012.
We drew the text—including Welch’s exact BLS conspiracy comments—from two sources:
a USA Today article about BLS jobs data from the fall 2012 election cycle and an ABC News
story about the conspiracy claims (Ellin, 2012). The article replicates the appearance of an
online USA Today piece, but contains material from both sources. We selected the latter
article because it is representative of mainstream media coverage of conspiracy theories. In-
deed, it outlines the BLS conspiracy claim using a “he said, she said” approach in which
each side is offered equal weight. In fact, Nyhan (2012a) criticized this ABC News story for
lending the conspiracy credence by presenting it alongside its counterargument in a balanced
way. We replicate this point-counterpoint treatment of the conspiracy by presenting para-
graphs from both sides taken directly from the ABC News article. Because Welch’s criticism
occurred during the fall 2012 election—and our experiments ran in January 2013—we removed
obvious election content and focused our article on more generic “political manipulation,”
rather than election season jobs data.1

1 For example, we dropped “September” from the headline and article and replaced “July”
and “August” with “the two previous months.”

Figure 1: Experimental Exposure to a Conspiracy Theory

It is important to pause and note what “exposure to a conspiracy theory” means in
our study. Our conspiracy treatment exposes the participant to the conspiracy claim and
a rebuttal to it. Thus, when we analyze experimental exposure to a conspiracy theory, we
are, in reality, exploring the causal effect of the theory and rebuttal in combination. We
therefore cannot fully parse whether the results presented below are attributable to the text
describing the conspiracy, the joint effect of the conspiracy and rebuttal, or even the rebuttal
on its own. We believe that, for an initial study of these issues, erring on the side of external
validity by presenting the conspiracy theory in the way the media did (i.e. with the rebuttal)
is worth the internal validity cost. As we discuss more below, exposure to a conspiracy claim
for many people likely means exposure with a rebuttal. We certainly hope that future work
begins to separate the conspiracy claim from the rebuttal in order to better understand the
psychological mechanisms underlying our results. We propose some possible avenues in this
vein in our discussion section, but for now, it is important that the reader understand that
when we describe the effects of “exposure to a conspiracy claim” below, we mean exposure
to a conspiracy claim and a rebuttal to it.
To assess belief in conspiracy—our dependent variable in the first experiment and one of
our independent variables in the second—we asked a simple question about BLS statistics.
The exact question wording, similar to Nyhan (2012b), is: “Do you think that recent monthly
employment data from the Bureau of Labor Statistics are always calculated as accurately as
possible or are they politically manipulated?” Respondents were then offered two options: (1)
“Calculated as accurately as possible;” or (2) “Politically manipulated.” (Wording for this
and other key questions is also aggregated in the appendix.) Our question was modified from
Nyhan’s in a couple of ways. For one, we included the “always” phrasing to be consistent
with our dateless article, and to make sense for respondents who did not read about the
conspiracy claim. Second, we added “as possible” to account for the possibility that people
have heard that monthly jobs data are always revised and therefore never initially reported
“accurately.”

11
We collected data for both experiments using participants recruited with Amazon’s Me-
chanical Turk (MTurk), an online crowdsourcing marketplace increasingly used in social sci-
entific experimental research (e.g. Berinsky, Huber and Lenz, 2012). MTurk samples—while
not as representative as the best national probability samples—have better demographic
distributions than typical convenience samples. Table A1 (in the appendix) appends our
experiment’s demographics onto the demographic table in Berinsky, Huber and Lenz (2012).
Our MTurk demographics are highly similar to Berinsky et al.’s survey, and only moderately
differ (in expected ways) from American National Election Studies samples and the Current
Population Survey.2

2 Our participants were paid 75 cents, which is consistent with standard rates on MTurk. We
restricted participation to those in America who had at least a 95% approval rate on at least
50 HITs—which are surveys or tasks in MTurk’s lingo—and we dropped respondents from the
second experiment who participated in the first by using their random MTurk ID numbers.

3 Belief in Conspiracy

Having established our main treatments (the modified Jack Welch story and the accom-
panying question), we turn now to setting up our initial experimental manipulation. This
experiment permits us to investigate Belief Hypothesis 1, which predicts increased belief,
and Belief Hypothesis 2, which anticipates that asking about one’s beliefs in a familiar and
specific conspiracy allegation may produce an artificially low belief estimate. To be clear,
this design enables us to investigate, but not necessarily test, both hypotheses. As noted
earlier, if true, Belief Hypothesis 2 implies that it is impossible to directly evaluate Belief
Hypothesis 1 because we can only calculate measured conspiracy beliefs, not latent true
conspiracy beliefs.
We generated three possible comparison groups. The first received an article describing
economic statistics that have nothing to do with the BLS, but rather concerned the Oregon
craft beer market.3 The second control group read an article identical to that displayed in
Figure 1, but without any reference to Welch’s conspiracy comments. By featuring only
straightforward news about positive jobs numbers, this article should help control for the
possible effects of receiving positive news about the Obama economy. Our third group—
the treatment group—was exposed to the article in Figure 1. More information about our
treatments (including the full articles for the two control groups, Figures A1 and A2) is
available in the appendix. After respondents read the article, they were asked two questions
about the clarity of the data presented in the assigned article and about the media’s use
of statistics in general. We placed these questions prior to our items exploring belief in
conspiracy to moderately mask the purpose of the study and distract respondents from
thinking about the experimental manipulation.

3 Because this article was unrelated to the BLS data, we included the following transitional
preface to questions in this condition: “Speaking of numerical data, the government provides
a lot of economic data of its own. For example, the Bureau of Labor Statistics reports
monthly economic data.”

3.1 Analysis and Results

Because each of our treatment categories is relatively small and thus varies demographically,
we include control variables in most results we report. Table A2 in the appendix shows
the variation by condition. All of our models control for age and partisan identification (on
a seven point branching scale) because these are two of the biggest sources of mismatch
between the MTurk population and the general public, as well as important sources of
variation in trust in government (Pew, 2013) among other things. Moreover, not only would
we expect partisans to react differently to a claim against the Obama administration, but
“Chicago style politics” and GE CEO Jack Welch may mean different things to older and
younger respondents. We also control for political knowledge by including the number (0-3)
of factual questions about politics the respondent correctly answered at the beginning
of the survey. Finally, as a consequence of a growing body of research linking the Obama
administration with the racialization of federal policy (e.g. Tesler and Sears, 2010; Tesler,
2012), we also include a control for racial resentment. This measure is constructed using
four standard questions designed to tap into subtle forms of racial bias (Kinder and Sanders,
1996). While we focus on these relatively parsimonious models, the main findings below
remain stable when other demographic variables like gender, race, urban residence, and
income are included. Though not reported in the paper, we also estimated models with a
variable capturing the amount of news people consume in a typical day. Our treatment
was framed as a news story, and interest in the news could affect its impact. We find that
controlling for media exposure does not affect our main findings reported below, but these
results do suggest that more self-reported news consumption significantly correlates with
less conspiracy belief. While not the primary focus of our paper, this relationship points to
interesting future inquiries.
First, we compare participants’ responses to our question about belief in the conspiracy
allegations across the three randomly assigned news stories. Figure 2 illustrates the impact
of the Welch article, the BLS article without the conspiracy claim and the data laden beer
industry article on reported beliefs. The left-hand panel simply tabulates the percent of
respondents in each of our three experimental categories who replied that BLS statistics were
manipulated for political reasons. The right-hand panel graphically depicts the predicted
probability (with 95% confidence intervals) of saying that the data were manipulated for
political reasons among those who did and did not read the Welch story. These estimates
come from probit models (reported in Table A3 in the appendix), which estimate our
experimental effect while controlling for racial resentment, age, political knowledge, and
partisanship. The figure presents two sets of comparisons. The first distinguishes respondents
who received the simple BLS jobs story from those who received the jobs story with the
conspiracy claim. The second explores differences between BLS conspiracy recipients and
their beer story counterparts.

Figure 2: Percent of Respondents who Believe that BLS Data were Manipulated by Treatment
Group and Predicted Probability of Believing that BLS Data were Manipulated by Treatment
Group.

Ns for the conditions in the left-hand panel are 138, 136, 133. Estimates in the right-hand panel are from
probit models with controls for partisanship, age, political knowledge, and racial resentment. The control
story is an article about the craft beer industry filled with statistics to parallel the BLS articles.
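
To make the shape of this estimation concrete, a minimal illustrative sketch in Python (statsmodels) is shown below. It is not the authors’ original code; the file name and the variable names (manipulated, welch_story, bls_only, resentment, age, knowledge, pid7) are hypothetical stand-ins for the binary belief response, the treatment indicators, and the controls described above.

    # Illustrative sketch only (not the authors' code): probit model of reporting that
    # BLS data were "politically manipulated" on treatment indicators plus controls.
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("belief_experiment.csv")  # hypothetical data file

    # manipulated: 1 = "politically manipulated", 0 = "calculated as accurately as possible"
    # welch_story / bls_only: treatment dummies; the beer-industry story is the omitted category
    result = smf.probit(
        "manipulated ~ welch_story + bls_only + resentment + age + knowledge + pid7",
        data=df,
    ).fit()
    print(result.summary())

    # Predicted probability of a "manipulated" response in each condition, holding the
    # controls at their sample means (mirrors the right-hand panel of Figure 2).
    at_means = df[["resentment", "age", "knowledge", "pid7"]].mean().to_dict()
    for welch, bls in [(0, 0), (0, 1), (1, 0)]:
        profile = pd.DataFrame([{**at_means, "welch_story": welch, "bls_only": bls}])
        print(f"welch_story={welch} bls_only={bls}:", result.predict(profile).round(3))

The treatment effect is then read off as the difference in predicted probabilities across conditions with the covariates held at fixed values.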

Both of these graphics fail to support the intuitive hypothesis that exposure to the con-
spiracy claim increases belief in it. Instead, shifts in self-reported conspiracy beliefs are
consistent with the second belief hypothesis that predicts a backfire-type effect. Exposure to
the conspiracy claim reduces a respondent’s likelihood of saying that BLS numbers were po-
litically manipulated by approximately 20 percentage points depending upon the comparison
used.
Given rising levels of partisan polarization in the United States, it is reasonable to wonder
whether the counterintuitive pattern we observe is driven by our disproportionately Demo-
cratic sample reacting negatively to a story critical of the federal government and President
Obama. It is critical that we demonstrate that our results reflect real conspiracy effects, and
that they are not simply a consequence of partisanship. We perform this important robust-
ness check (which goes above and beyond our partisanship control in the models) in Figure 3,
which depicts the percentage who expressed belief in the conspiracy theory by treatment
and by party. Our results reveal that, irrespective of party identification, those who read the
Welch story were less likely to agree that the BLS data were politically manipulated. While
the magnitudes are dramatically different (our percentage of Republicans believing in the
conspiracy is similar to Nyhan (2012b)), the backlash pattern is the same.

Figure 3: Percent believing that the jobs data were manipulated by treatment and party.

Includes “leaners” on the seven-point branching scale. N = 227 Democrats and 125 Republicans.

This relative consistency across partisan lines is striking. Political scientists have un-
covered powerful links between partisanship and a variety of important outcomes, including
voting decisions, the acquisition of factual information, and even social identity (e.g. Camp-
bell et al., 1980; Zaller, 1992; Green, Palmquist and Schickler, 2004). Moreover, rising
partisan polarization means that the connection between partisanship and these variables
has become even sharper in recent years (McCarty et al., 2006; Abramowitz, 2010). Most
recent studies of political misinformation find substantial partisan conditioning (Nyhan and
Reifler, 2010; Berinsky, 2013), though this relationship may not be so clearcut when the
analysis more narrowly focuses on conspiracies (Uscinski and Atkinson, 2013). Thus, even
though we find varying magnitudes by party identification, it is relatively rare and notewor-
thy that we find a pattern that cuts across partisan lines, especially since the substantive
issue was highly partisan (though see Ecker et al. (2011) for examples of other work with
analogously non-partisan patterns).4

4 The liberal and young nature of MTurk demographics suggests some caution in extrapolating
these results to the elderly and extremely conservative segments of the American population.
However, if anything, we would expect the elderly and conservative to be even more sharply
affected by our conspiracy exposure than the young and liberal; so, our results likely downplay
the effect of conspiracy exposure as a consequence of our sample’s demographics. Moreover,
while our sample skews young and liberal, we do have a sizable number of elderly and
conservative respondents (indeed, we control for both age and partisanship in our models).

As we previously indicated, interpreting belief results is complicated because of the way
the two hypotheses can interact. In particular, it is unclear whether the decrease in conspir-
acy beliefs among those exposed to the claim reflects diminishing actual or measured beliefs.
There are a few possibilities which could produce the observed data and these different pos-
sibilities are consistent with very different substantive interpretations. The first possibility
is that exposure to a conspiracy theory increases actual conspiracy beliefs, while exposure
when accompanied with a belief question decreases actual and measured beliefs. This artic-
ulation suggests that the belief question in conjunction with conspiracy exposure is having a
genuine effect on conspiracy belief, and not merely a measurement or survey response issue.
Thus, with actual belief shaped by joint exposure to a conspiracy and a question about
conspiracy belief, we should expect that those who are prompted to “stop and think” by a
belief question and exposed to a conspiracy will exhibit higher trust in government relative
to those who are simply exposed to the conspiracy. Analogously, trust in government should
be lower among respondents who are just exposed to the conspiracy theory as compared to
those who receive a control story.
A second possibility is that exposure to a specific conspiracy allegation and a belief
question may increase actual beliefs, while decreasing measured beliefs. In other words,
the decreased measured belief we observed among those who are exposed to the conspiracy
could be masking an increase in actual belief. This second possibility could stem from one
of two sources. First, it could reflect social desirability concerns and confirm the difficulty of
measuring sensitive topics accurately in a survey. Second, it is also possible that individuals
are simply providing expressive responses—that is, offering responses consistent with their
partisan priors rather than sincere beliefs—when asked about their conspiracy beliefs post-
conspiracy exposure (Berinsky, 2013; Bullock et al., 2013). Under these scenarios we should
not find support for the hypothesis that the belief question affects trust responses. In other
words, if the effect of asking the question is a measurement or survey response issue, it should
not dramatically affect the trust responses later in the survey. If, on the other hand, it has
a real effect on beliefs, it may also have a real effect on trust.
Thus, we now turn to our trust experiment to answer two important questions. The
first is the central substantive question that motivated this article: whether exposure to a
conspiracy theory decreases trust in government. The second follows from the belief results,
asking whether the decrease in conspiracy beliefs among those exposed to the conspiracy
treatment reflects a decrease in actual and/or measured beliefs. Finding support for both
Trust Hypotheses would strongly suggest that exposure to a conspiracy increases actual
conspiracy belief, while exposure accompanied by a belief question decreases both actual
and measured conspiracy belief.

4 Trust in Government

In our trust experiment, our key dependent variable shifts from belief in conspiracies to
confidence in government. The only treatment article we use is the one containing Welch’s
allegations. We measure trust in government by asking respondents to rate their confidence
in multiple government institutions—including the U.S. Census Bureau, the Food and Drug
Administration (FDA), the President, Congress, and local police—on a four-point scale.
Higher scores denote greater confidence. The wording of this confidence question follows
a widely used Gallup poll question. It reads: “Below is a list of institutions in American
society. Please indicate how much confidence you have in each one.” The four options are
“very confident,” “somewhat confident,” “not so confident,” and “not confident at all.” Because
our conspiracy treatment centers on a federal agency, we expect our strongest trust effects
to emerge when confidence in federal agencies is our dependent variable. We also expect
to see effects on the presidency because the president is implicated in the claims, though
these effects may be muted by the fact that attitudes are already well formed. Moreover, the
strong version of the first trust hypothesis suggests that we will also see conspiracy effects
on trust in local government services—the local cousins of the federal agencies directly cited
in the conspiracy claims. By the same token, we do not predict any confidence effects for
Congress or the Supreme Court because the conspiracy is primarily centered on government
services and the executive branch. Finally, we anticipate no effects on confidence in non-
governmental institutions: churches and corporations. We consider these institutions helpful
checks on whether we are just capturing some sort of general cynicism in our analyses.
Using these dependent variables, we experimentally measure the effects of exposure to
conspiracy theories and being asked a question about conspiracy claims on trust in govern-
ment. In evaluating Trust Hypothesis 1, the key comparison is between people who read
the conspiracy allegations and were NOT asked about their conspiracy beliefs to those who
neither read the article nor were asked about their conspiracy beliefs. In contrast, the key
comparison for evaluating Trust Hypothesis 2 is between those who read the article and were
asked about their beliefs to those who read the article and were not asked about them.

4.1 Analysis and Results

We evaluate our two trust hypotheses by estimating a series of seemingly unrelated regres-
sion models that include people in the three conditions of interest: (1) article and belief
question; (2) article and no belief question; and (3) no article and no question. To test Trust
Hypothesis 1, we compare conditions 2 and 3, while, for Trust Hypothesis 2, we compare
conditions 1 and 2. These models estimate trust in a variety of institutions and, as above,
include controls for partisan identification, age, knowledge, and racial resentment. We re-
port estimates for the effects of the article without question treatment and the article with
question treatment relative to the baseline of no article and no belief question (the control).
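
As a rough sketch of this setup, the following Python (statsmodels) code is illustrative rather than the authors’ estimation code. Because every equation in the system shares the same regressors, seemingly unrelated regression point estimates coincide with equation-by-equation OLS, so plain OLS is used here; all variable and file names are hypothetical.

    # Illustrative sketch only (not the authors' code): confidence in each institution
    # regressed on the two treatment indicators plus controls. With identical regressors
    # in every equation, SUR point estimates equal equation-by-equation OLS, so OLS is
    # used here to convey the structure of the analysis.
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("trust_experiment.csv")  # hypothetical data file

    # article_only: read the conspiracy article, no belief question ("real world" condition)
    # article_question: read the article and answered the belief question
    # (the no-article, no-question control is the omitted baseline)
    controls = "pid7 + age + knowledge + resentment"
    outcomes = ["conf_census", "conf_fda", "conf_president", "conf_schools", "conf_police"]

    for outcome in outcomes:
        fit = smf.ols(f"{outcome} ~ article_only + article_question + {controls}", data=df).fit()
        print(outcome)
        print(fit.params[["article_only", "article_question"]])
        print(fit.conf_int().loc[["article_only", "article_question"]])
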
We graphically present the main estimates of interest, the trust effects on the presidency
and other government agencies with 95% confidence intervals, in Figure 4.5 The institutions
we include on the graphics are either directly implicated by the conspiracy (the federal
bureaucracy), connected to the conspiracy (the presidency), or somewhat similar to an
implicated institution (local government services). As the appendix models show, the other
institutions, which are most distant from the conspiracy claims and excluded from the
graphics, do not exhibit any evidence of confidence effects.

5 The full models, including results for all of the institutions, including those for which we do
not expect confidence effects, can be found in the appendix, Table A4. All results remain
substantively the same when we calculate a series of individual OLS estimates; the seemingly
unrelated regression equations simply provide more efficient estimates.

Figure 4: Comparing the effect of the conspiracy claim on trust when asked and not asked
(real world) about one’s beliefs. Simultaneous comparison of each condition to the control.

Results from seemingly unrelated regression models with controls for partisanship, age, political
knowledge, and racial resentment. Baseline is those who were not exposed to a conspiracy
article and not asked about their conspiracy beliefs. n = 493. The model outputs are in the
appendix.
The figure provides strong support for both the decreasing trust hypothesis and for the
prediction that forcing people to think about the conspiracy claim by asking them about it
mutes its effects. The real world effects are substantially and significantly negative compared
to the control baseline. People who read the article with Welch’s claim exhibited less trust
than those who did not. These effects are particularly large, negative, and significant (or
close to it based on conventional levels) when the dependent variable is confidence in federal
bureaucratic institutions—exactly where we would expect to see exposure to the conspiracy
matter the most. Moreover, these effects seem to stretch beyond the agencies like the FDA
that are plausibly (though not actually) linked with the BLS. We see similarly large effects on
confidence in local services. These results suggest a substantial spillover effect stemming
solely from exposure to a subtle conspiracy theory treatment. Reading Welch’s claims not
only adversely affects confidence in the FDA and the US Census Bureau, which are at least
plausibly similar to the BLS, but also shapes confidence in local non-partisan service providers
like the schools and police.
The figure also provides support for the second trust hypothesis. It demonstrates that
asking a conspiracy belief question shapes whether the conspiracy allegation has a negative
impact on trust. The responses from people who read the article and were asked about be-
liefs were substantively and statistically indistinguishable from those in the control. Indeed,
respondents who answered a question about their conspiracy belief were essentially rendered
identical to members of the control group who were neither queried about their belief nor
presented with our conspiracy theory treatment. Asking a question about belief in a con-
spiracy theory does not increase overall confidence in government; rather, it dampens the
potent negative influence of exposure to the conspiracy treatment and essentially inoculates
people from the cynical claims.
Finally, because the effects on those who read the article and are not asked the belief
question most closely mimic the real world and are therefore the most substantively impor-
tant, we briefly elaborate on their magnitude. Previously, we showed the effects from models
based on a linear approximation. In Figure 5, we report the effect of reading the claim on the
predicted likelihood of falling into each of the four confidence in government categories for
each institution. The estimates come from ordered probit models (full model in Appendix
Table A5) comparing those who read the article and are not asked about their beliefs to
those in the control. They provide the changes in the predicted probability for each response
category attributable to switching from the control to the real world condition. For example,
a positive 5% change in “not so confident” means respondents were five percentage points
more likely to fall into the “not so confident” category if exposed to the claim than if they
were in the no claim and no question condition. These estimates come from models with the
same controls we have used elsewhere. The results show that the magnitudes of the effects
are substantial. In general, the predicted responses swing five or ten percentage points away
from the two response options indicating confidence and into their counterparts indicating
less confidence. In fact, confidence in the FDA swings by approximately 20 points, and
“confident” flips from being the majority position to the minority one. Moreover, because
these estimates come from ordered probit models they also provide a robustness check on
the linear assumption in the prior model.
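
A rough sketch of that computation is given below. It re-implements the idea in Python with statsmodels’ OrderedModel rather than the SPost/Stata routines used for the paper, reports point estimates of the predicted probabilities only (no delta-method confidence intervals), and relies on hypothetical variable and file names.

    # Illustrative sketch only (not the authors' SPost/Stata code): ordered probit for a
    # four-category confidence rating, comparing predicted response probabilities in the
    # "article, no belief question" condition with the control, controls held at their means.
    import numpy as np
    import pandas as pd
    from statsmodels.miscmodels.ordinal_model import OrderedModel

    df = pd.read_csv("trust_experiment.csv")  # hypothetical data file
    sub = df[df["condition"].isin(["control", "article_only"])].copy()
    sub["article_only"] = (sub["condition"] == "article_only").astype(int)

    # conf_fda coded 1-4, from "not confident at all" to "very confident"
    sub["conf_fda"] = pd.Categorical(sub["conf_fda"], categories=[1, 2, 3, 4], ordered=True)
    exog_cols = ["article_only", "pid7", "age", "knowledge", "resentment"]

    res = OrderedModel(sub["conf_fda"], sub[exog_cols], distr="probit").fit(method="bfgs", disp=False)

    # Predicted probability of each response category in each condition, controls at their means.
    means = sub[exog_cols].mean()
    profiles = pd.DataFrame([means, means])
    profiles["article_only"] = [0, 1]
    probs = np.asarray(res.predict(profiles[exog_cols]))
    print("P(response) in control:         ", probs[0].round(3))
    print("P(response) in article-only:    ", probs[1].round(3))
    print("change (article-only - control):", (probs[1] - probs[0]).round(3))
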
Figure 5: Changes in the predicted probabilities of each survey response when exposed to the
conspiracy claim without the belief question (real world vs. control).

Results from ordered probit models with controls for partisanship, age, political knowledge,
and racial resentment. Baseline is those who were not exposed to a conspiracy article and
not asked about their conspiracy beliefs. Calculated using the delta method (SPost in STATA
10.1 (Long and Freese, 2005)). n = 296.

In summary, the dissemination of conspiracy theories decreases trust in government in-
stitutions that are somewhat similar to those indicted in the conspiracy theory but not
necessarily an alleged part of it. The effect, however, is nuanced. Conspiracy theories ap-
pear to only have deleterious consequences for democratic governance when they remain
subtle and/or when people are not forced to consider their belief in them. Moreover, by us-
ing real mainstream media coverage, our experiment likely approximates how many people
would actually hear about conspiracy theories. This suggests that our disturbing findings
represent a real world effect.
These trust findings also have important implications for interpreting the confounding
belief effects above. In particular, they provide indirect evidence concerning actual con-
spiracy beliefs. Indeed, the fact that conspiracy exposure without an accompanying belief
question reduces trust provides at least suggestive evidence that exposure increases actual
belief. Moreover, our results for Trust Hypothesis 2 reveal that the decrease in admitted
conspiracy belief among those exposed to the conspiracy is not merely a measurement ar-
tifact. Rather, forcing individuals to “stop and think” about a specific allegation appears
to inoculate them from a conspiracy’s deleterious effects on trust. We think the following
interpretation best fits the data taken together. Exposure to a conspiracy claim in the real
world, without being asked whether one believes it, increases actual belief in the conspiracy
and reduces trust in government. However, asking a belief question about a familiar and
specific conspiracy allegation forces people to think about the conspiracy claim which a)
reduces their belief in the conspiracy, and b) reduces its adverse effects on trust.

4.2 Differences by Partisanship

As we did above when analyzing beliefs, we must confront the possibility that asymmetrical
partisan effects combined with a Democratic-leaning sample may be driving our results in
spite of our partisan control. Once again, we can reject the possibility that our story is
primarily a partisan one. For all of the dependent variables of interest, Democrats and
Republicans exhibit relative confidence ratings consistent with the claim that asking the
belief question inoculates people from the adverse trust effects of exposure. Moreover, both
Democrats and Republicans give lower average scores in the condition that mimics the real
world than they do when asked about their beliefs after reading the conspiracy claim. For
example, Democrats’ mean confidence in the FDA was 2.72 in the belief-question condition
and 2.44 in the real world condition. The analogous scores for Republicans were 2.45 and
2.15. We see a similar
consistency across partisan lines for local schools. Democrats had a mean score of 2.88
when asked about their beliefs and 2.69 in the real world condition. Republicans exhibited
comparable differences: 2.74 vs. 2.33. Thus, as with our belief findings above, we observe
relative similarity across partisan lines.

5 Discussion and Conclusion

Our experimental results build on a conspiracy literature that largely emphasizes beliefs. We
demonstrate that exposure to conspiratorial explanations of events has real consequences for
the democratic enterprise. Our results suggest that even a subtle and specific claim can have
a stark impact. Indeed, our long newspaper article featured only a brief mention of the BLS
conspiracy theory as propagated by Jack Welch, followed by an arguably stronger rebuttal.
What’s more, in contrast to a shadowy figure on the grassy knoll, the BLS conspiracy cites
a specific government agency and report, suggesting a far higher degree of falsifiability than
is standard among conspiracy theories. In other words, our chosen conspiracy theory had
several traits which should have biased our analyses in favor of not finding conspiracy effects.
In light of our findings, it is likely that scholarship that focuses exclusively on conspiracy
beliefs or the media misses their actual impact. In particular, studies of conspiracy theories
that ask about beliefs likely understate the true effects which manifest in trust (Uscinski,
Parent and Torres, 2011).
In addition to this substantive contribution, our results have important methodological
implications for the study of conspiracy theories. Our findings indicate significant challenges
in directly evaluating belief in conspiracy theories, in line with other recent studies (e.g.
Berinsky, 2013; Bullock et al., 2013). Our belief question results suggest that it may be
impossible to directly and accurately assess survey respondents’ true conspiracy views. It is
possible, and even likely, that current surveys lamenting the proliferation of conspiracies may
in fact yield inaccurate estimates. Because we are unable to directly measure true belief,
the direction of this bias is unclear. Some reported numbers may be unintentionally inflated
because they ask vague questions about non-specific and unfamiliar conspiracy claims, similar
to our control condition. Or, it might be that some surveys—particularly those featuring a
conspiracy exposure as in our BLS treatment group—elicit conspiracy belief numbers that
are lower than their true values.
This inability to directly ask about conspiracy views precludes researchers from assessing
the true effect of exposure on belief. It may very well be that if researchers were able to
ask about conspiracy beliefs without altering these beliefs we would see a strong positive
link between the dissemination of conspiracy theories and acceptance of them. Conducting
such studies will require less direct measures of conspiracy belief. For example, our findings
suggest that confidence in government might be a useful tool for indirectly assessing the
impact of a particular conspiracy claim on beliefs (see Lewandowsky et al. (2012) for more
on indirect belief measures).
One straightforward limitation of our study is that it only concerns one conspiracy claim.
Nevertheless, we believe that the BLS conspiracy is a reasonable one for a single case design.
Like other prominent recent conspiracy claims such as those related to vaccinations and
global warming, it concerns unelected government agencies and scientific data. Moreover,
other research shows that individuals’ beliefs (and implicitly the consequences of them) are
correlated across different conspiracies (Berinsky, 2013). While the right-wing nature of this
conspiracy might also engender concerns about generalizability, research by McClosky et
al. (1985) suggests that the far left and right subscribe to conspiratorial thinking in equal
numbers. So, we should reasonably expect our results to generalize to a more left-leaning
conspiracy.
Our experiments were designed to first and foremost investigate the real world confidence
effects with as much external validity as possible. Inevitably, this leaves us with some un-
certainty about the exact psychological micro-mechanism(s) at play. As we discussed above,
our collected findings together point towards a mechanism whereby people who think more
seriously about a specific conspiracy claim end up rejecting it. Because our experiments
prioritized external validity, we cannot precisely discern exactly how this cognitive mecha-
nism works. Another possibility is that we are capturing the trust effects of partisan dispute
rather than a unique conspiracy finding. While we think the combination of our belief and
trust results suggests that we are observing more than disagreement leading to trust declines,
we cannot reject these alternative (and also interesting) micro-foundations. Finally, it is also
possible that a reputable academic survey asking about belief in the conspiracy simply pro-
vides an additional cue that the claim is questionable. All of these alternative possibilities
are interesting in their own right. While we believe our proposed mechanism best fits all of
the data taken together, we think and hope that future research can focus more on internally
valid experiments that better sort out the psychological mechanisms undergirding our main
findings.
Relatedly, our conspiracy claim—while realistic—represents only one type of presenta-
tion: that of a mainstream newspaper article with a counterargument. As we noted in our
methods section, this choice increases external validity while precluding us from separat-
ing the effects of the conspiracy and counterargument from each other. In order to claim
that a conspiracy theory by itself drives lower trust in government, we would need to include
an experimental manipulation featuring only a conspiracy claim, with no rebuttal. Future
research could begin the task of addressing this issue, then, by comparing the effect of a
conspiracy claim in isolation to that of a conspiracy claim paired with a rebuttal.6

6 Ideally one could also investigate the independent effect of the rebuttal, but doing so may not have
much substantive meaning and/or may confuse participants, since a rebuttal without the conspiracy claim
it relates to does not make much sense. Rebuttals on their own only make sense when they refer to well
known conspiracy claims. In such an instance, though, the rebuttal is likely serving the role of the conspiracy
exposure by reminding people of the claim as well.

The inclusion of a “conspiracy only” condition would also allow future researchers to ex-
plore the multitude of other methods by which an individual might encounter a conspiracy
theory. These include cable news, talk radio, Twitter, blog posts, conversations with friends
and neighbors, and internet forums. We were primarily interested in exploring the effect of
exposure via the mainstream media, not the effect of seeking out news sources likely to be
conspiracy-laden. While the reinforcing effect of reading conspiratorial blogs among the al-
ready conspiracy-oriented is an important question, we are more interested in what might be
termed accidental exposure—that is, the effect on an individual who is not inclined towards
conspiracy, but who happens to encounter a conspiracy claim. Subsequent research could
begin to distinguish between the effects of different kinds of media sources and presentations
(including those which do not normally feature rebuttals), and test whether these effects are
moderated by important demographic characteristics. Future work could also include a
political disagreement condition featuring a claim and a rebuttal, but no conspiracy allegation,
to discern what is distinctive about conspiracy politics.
Finally, our findings also suggest avenues for additional work on the correction of misinformation.
It appears that asking a question about a conspiracy belief, a phenomenon closely related to
misinformation, acts as a subtle and successful correction. This stands in stark contrast
to the backfire associated with more explicit factual corrections (Nyhan and Reifler, 2010).
The difference likely stems from the mechanism involved: our design simply invites respondents
to consider a particular conspiracy in greater depth, whereas prior research confronts respondents
with new information that may or may not fit with their existing views.
As we noted in our introduction, scholars of political theory, political behavior, public
policy, and democratic governance all consider trust in government to be a central compo-
nent of a healthy democracy. Concern about confidence in our democratic institutions has
become even more acute in recent years. A recent survey by the Pew Research Center found
that levels of trust in the federal government remained “mired near a historic low, while frus-
tration with government remains high” (Pew, 2013). It is beyond the scope of this article to
determine whether these low levels of trust are definitively connected to the proliferation of
conspiracy theories about President Obama. Nonetheless, our experimental results suggest
that such allegations contribute. We hope that, by untangling the connection between con-
spiracy theories and trust, our findings can serve as a roadmap for scholars
to continue investigating the effects of conspiracy theories on politics, and the threat they
pose to democratic governance. Public polls reporting the percentage of people who believe
the moon landing was fake, that Vince Foster was murdered, or that the National Football
League turned off the lights during the Super Bowl to slow the Baltimore Ravens’ momentum
are fun. Unfortunately, the political entertainment they provide masks more important and
less humorous realities. The conspiracy theories underlying these polls, and the numerous
articles and internet posts they inspire, have serious consequences that deserve more serious
study.

References

Aaronovitch, David and James Langton. 2010. Voodoo Histories: The Role of the Conspiracy
Theory in Shaping Modern History. Wiley Online Library.

Abramowitz, Alan. 2010. The Disappearing Center: Engaged Citizens, Polarization, and
American Democracy. Yale University Press.

Allport, Floyd H and Milton Lepkin. 1945. “Wartime Rumors of Waste and Special Privilege:
Why Some People Believe Them.” The Journal of Abnormal and Social Psychology 40(1):3.

Ansolabehere, Stephen and Shanto Iyengar. 1997. Going Negative. New York: Simon and
Schuster.

Banas, John A and Gregory Miller. 2013. “Inducing Resistance to Conspiracy Theory Pro-
paganda: Testing Inoculation and Metainoculation Strategies.” Human Communication
Research 39(2):184–207.

Barker, David C and Susan B Hansen. 2005. “All Things Considered: Systematic Cognitive
Processing and Electoral Decisionmaking.” Journal of Politics 67(2):319–344.

Bartels, Larry M. 2005. “Homer Gets a Tax Cut: Inequality and Public Policy in the
American mind.” Perspectives on Politics 3(01):15–31.

Berinsky, Adam J. 2013. “Rumors, Truths, and Reality: A Study of Political Misinformation.” Unpublished Working Paper (V3.1).

Berinsky, Adam J., Gregory A. Huber and Gabriel S. Lenz. 2012. “Evaluating Online Labor
Markets for Experimental Research: Amazon.com’s Mechanical Turk.” Political Analysis
20(3):351–368.

Blair, Graeme and Kosuke Imai. 2012. “Statistical Analysis of List Experiments.” Political
Analysis 20(1):47–77.

Brotherton, Robert, Christopher C French and Alan D Pickering. 2013. “Measuring Belief
in Conspiracy Theories: The Generic Conspiracist Beliefs Scale.” Frontiers in Psychology
4.

Bullock, John G. 2007. Experiments on Partisanship and Public Opinion: Party Cues, False
Beliefs, and Bayesian Updating. Stanford University Dissertation.

Bullock, John G, Alan S Gerber, Seth J Hill and Gregory A Huber. 2013. “Partisan Bias in
Factual Beliefs About Politics.” National Bureau of Economic Research Working Paper .

Bullock, Will, Kosuke Imai and Jacob N. Shapiro. 2011. “Statistical Analysis of Endorsement
Experiments: Measuring Support for Militant Groups in Pakistan.” Political Analysis
19(4):363–384.

Campbell, Angus, Philip E Converse, Warren E Miller and Donald E Stokes. 1980. The
American Voter. University of Chicago Press.

Chong, Dennis and James N Druckman. 2007. “Framing Public Opinion in Competitive
Democracies.” American Political Science Review 101(04):637–655.

Citrin, Jack and Christopher Muste. 1999. Trust in Government. In Measures of Political
Attitudes, ed. JP Robinson, PR Shaver and L Wrightsman. New York: Academic Press.

Coady, David. 2006. Conspiracy Theories: The Philosophical Debate. Ashgate Publishing,
Ltd.

Cobb, Michael D, Brendan Nyhan and Jason Reifler. 2013. “Beliefs Don’t Always Per-
severe: How Political Figures Are Punished When Positive Information about Them Is
Discredited.” Political Psychology .

Eagly, Alice H and Shelly Chaiken. 1993. The Psychology of Attitudes. Harcourt Brace
Jovanovich College Publishers.

Ecker, Ullrich KH, Stephan Lewandowsky, Briony Swire and Darren Chang. 2011. “Correct-
ing False Information in Memory: Manipulating the Strength of Misinformation Encoding
and its Retraction.” Psychonomic Bulletin Review 18(3):570–578.

Edwards, Kari and Edward E Smith. 1996. “A Disconfirmation Bias in the Evaluation of
Arguments.” Journal of Personality and Social Psychology 71(1):5.

Ellin, Abby. 2012. “GOP Jobs Report Manipulation Claims Dismissed.” ABCnews.com .

Gingerich, Daniel W. 2010. “Understanding Off-the-Books Politics: Conducting Inference
on the Determinants of Sensitive Behavior with Randomized Response Surveys.” Political
Analysis 18(3):349–380.

Green, Donald P, Bradley Palmquist and Eric Schickler. 2004. Partisan Hearts and Minds:
Political Parties and the Social Identities of Voters. Yale University Press.

Hardin, Russell. 1999. Do We Want Trust in Government? In Democracy and Trust, ed.
Mark E. Warren. Cambridge: Cambridge University Press pp. 22–41.

Hibbing, John R. and Elizabeth Theiss-Morse. 2002. Stealth Democracy: Americans’ Beliefs
About How Government Should Work. Cambridge University Press.

Jolley, Daniel and Karen M. Douglas. 2013. “The Social Consequences of Conspiracism:
Exposure to Conspiracy Theories Decreases Intentions to Engage in Politics and to Reduce
One’s Carbon Footprint.” The British Journal of Psychology .

Keeley, Brian L. 1999. “Of Conspiracy Theories.” The Journal of Philosophy pp. 109–126.

Kinder, Donald R. and Linda M. Sanders. 1996. Divided by Color: Racial Politics and
Democratic Ideals. Chicago: University of Chicago Press.

Kriner, Douglas L. and William G. Howell. 2012. Congressional Leadership of War Opinion?:
Backlash Effects and the Polarization of Public Support for War. In Congress Reconsidered
10th Edition, ed. Lawrence C. Dodd and Bruce I. Oppenheimer. CQ Press.

Kuklinski, James H., Michael D. Cobb and Martin Gilens. 1997. “Racial Attitudes and the
New South.” Journal of Politics pp. 323–349.

Kuklinski, James H., Paul J. Quirk, Jennifer Jerit, David Schwieder and Robert F. Rich.
2003. “Misinformation and the Currency of Democratic Citizenship.” Journal of Politics
62(3):790–816.

Levi, Margaret and Laura Stoker. 2000. “Political Trust and Trustworthiness.” Annual
Review of Political Science 3(1):475–507.

Lewandowsky, Stephan, Klaus Oberauer and Gilles Gignac. 2013. “NASA Faked the Moon
Landing Therefore (Climate) Science is a Hoax: An Anatomy of the Motivated Rejection
of Science.” Psychological Science .

Lewandowsky, Stephan, Ullrich KH Ecker, Colleen M Seifert, Norbert Schwarz and John
Cook. 2012. “Misinformation and Its Correction: Continued Influence and Successful De-
biasing.” Psychological Science in the Public Interest 13(3):106–131.

Lewandowsky, Stephan, Werner GK Stritzke, Alexandra M Freund, Klaus Oberauer and
Joachim I Krueger. 2013. “Misinformation, Disinformation, and Violent Conflict: From
Iraq and the War on Terror to Future Threats to Peace.” American Psychologist 68(7):487.

Long, J. Scott and Jeremy Freese. 2005. Regression Models for Categorical Outcomes Using
Stata (Second Edition). College Station, TX: Stata Press.

Lord, Charles G, Lee Ross and Mark R Lepper. 1979. “Biased Assimilation and Attitude Po-
larization: The Effects of Prior Theories on Subsequently Considered Evidence.” Journal
of Personality and Social Psychology 37(11):2098.

McCarty, Nolan M, Keith T Poole, Howard Rosenthal and Janet T Knoedler. 2006. Polarized
America: The Dance of Ideology and Unequal Riches. Cambridge, MA: MIT Press.

Mulligan, Kenneth and Philip Habel. 2013. “The Implications of Fictional Media for Political
Beliefs.” American Politics Research 41(1):122–146.

Nyhan, Brendan. 2012a. “Enabling the Jobs Report Conspiracy Theory: The Consequences
of Careless Coverage of Friday’s Unemployment Numbers.” Columbia Journalism Review.

Nyhan, Brendan. 2012b. “Political Knowledge Does Not Guard Against Belief in Conspiracy
Theories.” YouGov: Model Politics.

Nyhan, Brendan. 2013. “Boosting the Sandy Hook Truther Myth: The Dangers of Covering
Fringe Misperceptions.” Columbia Journalism Review January 22, 2013.

Nyhan, Brendan and Jason Reifler. 2010. “When Corrections Fail: The Persistence of Political
Misperceptions.” Political Behavior 32(2):303–330.

NYTimes. 2012. “Editorial: Conspiracy World.” The New York Times .

Pew Research Center. 2013. “January Survey, Trust in Government.” Available at
http://www.people-press.org/2013/01/31/majority-says-the-federal-government-threatens-their-personal-rights/#low-trust.

Presser, Stanley and Linda Stinson. 1998. “Data Collection Mode and Social Desirability
Bias in Self-Reported Religious Attendance.” American Sociological Review 63:137–145.

Putnam, Robert D. 2001. Bowling Alone: The Collapse and Revival of American Commu-
nity. Simon & Schuster.

Rahn, Wendy M. 2000. Affect as Information: The Role of Public Mood in Political Reason-
ing. In Elements of Reason: Cognition, Choice, and the Bounds of Rationality, ed. Arthur
Lupia, Matthew D. McCubbins and Samuel L. Popkin. Cambridge: Cambridge University
Press pp. 130–150.

Skurnik, Ian, Carolyn Yoon, Denise C. Park and Norbert Schwarz. 2005. “How Warn-
ings About False Claims Become Recommendations.” Journal of Consumer Research
31(4):713–724.

Tesler, Michael. 2012. “The Spillover of Racialization into Health Care: How President
Obama Polarized Public Opinion by Racial Attitudes and Race.” American Journal of
Political Science 56(3).

Tesler, Michael and David O. Sears. 2010. Obama’s Race: The 2008 Election and the Dream
of a Post-Racial America. Chicago: University of Chicago Press.

Uscinski, Joseph E, Joseph M Parent and Bethany Torres. 2011. “Conspiracy Theories are
for Losers.” American Political Science Association Annual Conference 2011 .

Uscinski, Joseph E and Matthew Atkinson. 2013. “Why Do People Believe in Conspiracy
Theories? The Role of Informational Cues and Predispositions.” SSRN Working Paper .

Zaller, John. 1992. The Nature and Origins of Mass Opinion. Cambridge University Press.

Zaller, John and Stanley Feldman. 1992. “A Simple Theory of the Survey Response: An-
swering Questions Versus Revealing Preferences.” American Journal of Political Science
pp. 579–616.

Appendix

6 Survey question wording

• Conspiracy belief:

Do you think that recent monthly unemployment data from the Bureau of Labor Statis-
tics are always calculated as accurately as possible or are they politically manipulated?
1) Calculated as accurately as possible 2) Politically manipulated

• Confidence in Government:

Below is a list of institutions in American society. Please indicate how much confidence
you have in each one. 1) Very confident 2) Somewhat confident 3) Not so confident 4)
Not confident at all

• Four Question Racial Resentment Index

How strongly do you agree or disagree with the following statement? Irish, Italian, Jew-
ish, and many other minorities overcame prejudice and worked their way up. Blacks
should do the same without any special favors. 1) Strongly Agree 2) Agree 3) Neither
Agree nor Disagree 4) Disagree 5) Strongly Disagree

How strongly do you agree or disagree with the following statement? Generations of
slavery and discrimination have created conditions that make it difficult for blacks to
work their way out of the lower class. 1) Strongly Agree 2) Agree 3)
Neither Agree nor Disagree 4) Disagree 5) Strongly Disagree

How strongly do you agree or disagree with the following statement? Over the past
few years, blacks have gotten less than they deserve. 1) Strongly Agree
2) Agree 3) Neither Agree nor Disagree 4) Disagree 5) Strongly Disagree

How strongly do you agree or disagree with the following statement? It’s really a mat-
ter of some people not trying hard enough; if blacks would only try harder they could
be just as well off as whites. 1) Strongly Agree 2) Agree 3) Neither Agree nor Disagree
4) Disagree 5) Strongly Disagree
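
For readers who wish to work with these items, they are conventionally combined into a single racial resentment index by flipping the two items on which agreement signals resentment and averaging all four responses. The paper does not report its exact coding, so the following is only a minimal sketch of one standard construction, assuming 1–5 Likert responses stored under hypothetical column names.

```python
import pandas as pd

# Hypothetical column names; the original dataset's variable names are not reported.
AGREE_MEANS_RESENTFUL = ["rr_special_favors", "rr_try_harder"]   # agreement indicates resentment
AGREE_MEANS_SYMPATHETIC = ["rr_slavery", "rr_deserve"]           # agreement indicates sympathy
ALL_ITEMS = AGREE_MEANS_RESENTFUL + AGREE_MEANS_SYMPATHETIC      # each coded 1 (Strongly Agree) ... 5 (Strongly Disagree)


def racial_resentment_index(df: pd.DataFrame) -> pd.Series:
    """Return a 0-1 index in which higher values indicate more racial resentment."""
    coded = df[ALL_ITEMS].copy()
    # On the first two items, 1 (Strongly Agree) is the most resentful response, so flip them;
    # on the other two, 5 (Strongly Disagree) is already the most resentful response.
    for col in AGREE_MEANS_RESENTFUL:
        coded[col] = 6 - coded[col]
    # Average the four items and rescale the 1-5 mean to the 0-1 interval.
    return (coded.mean(axis=1) - 1) / 4
```

Rescaling to the 0–1 interval is only a convenience; summing the raw items or standardizing them would order respondents identically.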

7 Supplementary Tables and Figures

Figure A1: Experimental story: Straight jobs data without Welch

Figure A2: Experimental story: Control - data about Oregon craft beer industry

Table A1: Sample Demographics Comparison: The table compares our sample demographics
to those found in another MTurk study and to other high quality surveys.

Internet Samples Face to Face Samples


Variable BLS Exp BHL Turk ANES-P 08-09 CPS 08 ANES 08
% Female 47.2 60.1 57.6 51.7 55.0
% White 80.3 83.5 83.0 81.2 79.1
% Black 7.6 4.4 8.9 11.8 12.0
% Hispanic 6.0 6.7 5.0 13.7 9.1
Age (years) 35.1 32.3 49.7 46.0 46.6
Party ID (7 pt.) 3.1 3.5 3.9 3.7
Ideology (7 pt.) 3.2 3.4 4.3 4.2
Education 15.3 yrs 15.3 yrs 16.2 yrs 13.2 yrs 13.5 yrs
Income (median) 30-49K 45K 67.5K 55K 55K

“BHL Turk” = Berinsky, Huber and Lenz (2012), ANES-P = American National Election Panel Study
(Knowledge Networks), CPS = Current Population Survey, ANES = American National Election Study.
CPS and ANES are weighted. Data from all columns other than those corresponding to our experiments are
reproduced from Table 3 in Berinsky, Huber and Lenz (2012).

Table A2: Sample Demographics by Treatment for Experiment

Welch Article No Article Control Article


Variable Question No Question Question No Question BLS w/o Welch Beer
% Female 43.5 44.9 56.5 49.6 50.7 40.6
% White 78.2 76.3 75.8 79.4 82.6 88.7
% Black 7.6 4.3 13.7 7.0 7.3 6.8
% Hispanic 4.0 6.5 7.4 7.0 7.3 5.3
Age (years) 35.9 38.0 35.2 35.5 33.5 32.8
Party ID (7 pt.) 3.3 3.1 2.8 2.9 2.9 3.3
Ideology (7 pt.) 3.2 3.4 3.0 3.1 3.2 3.4
Education (yrs) 15.4 15.5 15.0 15.1 15.5 15.7
Pol Know(0-3) 2.1 2.2 2.1 2.0 2.0 2.0
N 225 93 95 243 138 133

Table A3: Probit models for believing that data were manipulated as a function of exposure
to the Welch story and controls
(1) (2)
VARIABLES Yes Manipulated Yes Manipulated

Welch Story -0.75*** -0.66***
(0.19) (0.19)
Age 0.01 0.01
(0.01) (0.01)
Partisanship 0.27*** 0.30***
(0.05) (0.06)
Pol Knowledge 0.04 -0.15
(0.10) (0.10)
Racial Resent 0.28*** 0.15
(0.10) (0.10)
Constant -2.03*** -1.48***
(0.43) (0.40)

Observations 251 244


*** p < 0.01, ** p < 0.05, * p < 0.1
Baseline category for Model 1 is the BLS story without the conspiracy claim and rebuttal condition. For
Model 2 the baseline category is the Beer industry article condition.
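
The probit specifications in Table A3 have a standard structure: a binary indicator for answering “politically manipulated,” regressed on an indicator for the experimental condition plus the controls listed above. The snippet below is not the authors' code; it is a minimal sketch of Model 1 using statsmodels, with hypothetical file, condition, and variable names.

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical file and column names; the authors' actual data layout is not reported.
# Expected columns: condition (experimental cell label), manipulated (1 = says the BLS data
# are politically manipulated, 0 = calculated as accurately as possible), plus the controls
# age, party_id, pol_know, and racial_resent described in this appendix.
df = pd.read_csv("bls_experiment.csv")

# Model 1: Welch (conspiracy claim + rebuttal) article vs. the BLS story without the claim.
sub = df[df["condition"].isin(["welch_article", "bls_no_welch"])].copy()
sub["welch_story"] = (sub["condition"] == "welch_article").astype(int)

X = sm.add_constant(sub[["welch_story", "age", "party_id", "pol_know", "racial_resent"]])
probit_fit = sm.Probit(sub["manipulated"], X).fit()
print(probit_fit.summary())
```

Model 2 would differ only in the omitted baseline: the beer-industry control article rather than the BLS story without the Welch claim.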

Table A4: Seemingly Unrelated Regression estimates for confidence in government as a function of being exposed to the conspiracy
and asked, compared to not being exposed and not asked

(1) (2) (3) (4) (5) (6) (7) (8) (9)


VARIABLES President FDA Census Local Schools Local Police Congress Supreme Court Churches Corporations

Welch NotAsked -0.16* -0.34*** -0.15 -0.25** -0.15 0.04 -0.00 0.10 0.02
(0.09) (0.00) (0.13) (0.02) (0.16) (0.69) (0.97) (0.41) (0.81)
Welch Asked -0.01 -0.08 0.00 0.02 -0.08 0.06 0.10 -0.03 0.01
(0.89) (0.31) (1.00) (0.79) (0.34) (0.44) (0.21) (0.75) (0.93)
Age -0.00 -0.01** -0.00 -0.00 0.00 -0.01** -0.00 0.01*** 0.00
(0.94) (0.04) (0.24) (0.67) (0.13) (0.03) (0.82) (0.01) (0.99)
Partisanship -0.28*** -0.07*** -0.07*** -0.02 0.00 -0.03 -0.05*** 0.16*** 0.06***
(0.00) (0.00) (0.00) (0.35) (0.83) (0.22) (0.01) (0.00) (0.01)
Pol Knowledge -0.02 0.06 0.11*** -0.00 0.07* -0.12*** 0.05 -0.05 0.01
(0.64) (0.15) (0.00) (0.97) (0.09) (0.00) (0.19) (0.33) (0.89)
Racial Resent -0.11*** 0.03 -0.07* 0.02 0.05 0.09** 0.02 0.16*** 0.19***
(0.00) (0.41) (0.08) (0.71) (0.22) (0.04) (0.61) (0.00) (0.00)
Constant 4.01*** 2.93*** 3.39*** 2.90*** 2.46*** 2.22*** 2.79*** 1.02*** 1.22***
(0.00) (0.00) (0.00) (0.00) (0.00) (0.00) (0.00) (0.00) (0.00)

Observations 493 493 493 493 493 493 493 493 493
R2 0.38 0.06 0.08 0.02 0.02 0.05 0.02 0.17 0.10
p-values in parentheses
*** p<0.01, ** p<0.05, * p<0.1

Baseline condition is not exposed to the story and not asked about beliefs. Main variables of interest are
Welch NotAsked (the real-world condition) and Welch Asked (people exposed and asked about beliefs).
P-values are reported in parentheses below the coefficients.
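
Because every equation in Table A4 appears to share the same right-hand-side variables, seemingly unrelated regression point estimates coincide with equation-by-equation OLS; the SUR machinery mainly adds the cross-equation error covariances needed for joint tests. The sketch below illustrates that equation-by-equation structure with hypothetical file and variable names; it is not the authors' code, and a dedicated SUR routine (for example, the one in the linearmodels package) would be needed to recover the full system covariance.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical file and column names. The data are assumed restricted to three conditions,
# with 0/1 indicators welch_not_asked and welch_asked relative to the omitted baseline
# (not exposed to the story and not asked about beliefs).
df = pd.read_csv("bls_experiment.csv")

outcomes = [  # nine hypothetical 1-4 confidence items, ordered as in Table A4
    "conf_president", "conf_fda", "conf_census", "conf_schools", "conf_police",
    "conf_congress", "conf_court", "conf_churches", "conf_corporations",
]
rhs = "welch_not_asked + welch_asked + age + party_id + pol_know + racial_resent"

# With identical regressors in every equation, SUR coefficients equal per-equation OLS.
fits = {y: smf.ols(f"{y} ~ {rhs}", data=df).fit() for y in outcomes}
for y, fit in fits.items():
    print(y, round(fit.params["welch_not_asked"], 2), round(fit.pvalues["welch_not_asked"], 2))
```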
Table A5: Ordered probit estimates for four categories of confidence in government responses as a function of being exposed to
the conspiracy without the belief question, compared to not being exposed and not asked
(1) (2) (3) (4) (5) (6) (7) (8) (9)
President FDA Census Local Schools Local Police Congress Supreme Court Churches Corporations
Welch NotAsked -0.245∗ -0.457∗∗∗ -0.198 -0.301∗∗ -0.175 0.085 -0.009 0.104 0.034
(0.091) (0.001) (0.167) (0.032) (0.214) (0.553) (0.950) (0.468) (0.811)

Age 0.002 -0.006 -0.004 0.002 0.005 -0.008 -0.003 0.015∗∗∗ 0.000
(0.637) (0.211) (0.429) (0.725) (0.357) (0.132) (0.561) (0.004) (0.946)

Partisanship -0.431∗∗∗ -0.110∗∗∗ -0.113∗∗∗ -0.040 0.001 -0.044 -0.057 0.138∗∗∗ 0.096∗∗
(0.000) (0.006) (0.006) (0.312) (0.978) (0.279) (0.160) (0.001) (0.017)

Pol Knowledge -0.037 0.075 0.164∗∗ 0.009 0.093 -0.252∗∗∗ 0.059 -0.092 -0.064
(0.614) (0.296) (0.025) (0.902) (0.195) (0.001) (0.411) (0.199) (0.377)

Racial Resent -0.146∗ 0.085 -0.115 0.035 0.089 0.075 0.019 0.154∗∗ 0.205∗∗∗
(0.055) (0.247) (0.123) (0.630) (0.229) (0.321) (0.800) (0.040) (0.007)
N 296 296 296 296 296 296 296 296 296
p-values in parentheses
∗ p < 0.10, ∗∗ p < 0.05, ∗∗∗ p < 0.01
Baseline condition is not exposed to the story and not asked about beliefs. P-values are reported in parentheses below the coefficients.
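
The ordered probit models in Table A5 treat each four-category confidence item as an ordinal outcome and compare exposed-but-not-asked respondents with the never-exposed, never-asked baseline. The sketch below shows one such model for a single outcome using statsmodels' OrderedModel; the file, condition, and variable names are hypothetical, and this is not the authors' code.

```python
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

df = pd.read_csv("bls_experiment.csv")  # hypothetical file name

# Restrict to the two relevant cells: exposed without the belief question vs. the baseline.
sub = df[df["condition"].isin(["welch_not_asked", "control_not_asked"])].copy()  # hypothetical labels
sub["welch_not_asked"] = (sub["condition"] == "welch_not_asked").astype(int)

# No constant is included: the ordered model's estimated cut points play that role.
exog = sub[["welch_not_asked", "age", "party_id", "pol_know", "racial_resent"]]
model = OrderedModel(sub["conf_president"], exog, distr="probit")  # conf_president coded 1-4
fit = model.fit(method="bfgs", disp=False)
print(fit.summary())
```

Looping the same call over the nine confidence items would reproduce the structure of the full table.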
