Article

Much Ado About Deception: Consequences of Deceiving
Research Participants in the Social Sciences

Sociological Methods & Research 41(3) 383–413
© The Author(s) 2012
Reprints and permission: sagepub.com/journalsPermissions.nav
DOI: 10.1177/0049124112452526
http://smr.sagepub.com

Davide Barrera1,2 and Brent Simpson3

1 Department of Culture, Politics and Society and Collegio Carlo Alberto, University of Turin, Torino, Italy
2 ICS/Department of Sociology, Utrecht University, Utrecht, The Netherlands
3 Department of Sociology, University of South Carolina, Columbia, SC, USA

Corresponding Author: Brent Simpson, Department of Sociology, University of South Carolina, Columbia, SC 29208, USA. Email: bts@sc.edu

Abstract
Social scientists have intensely debated the use of deception in experimental
research, and conflicting norms governing the use of deception are now
firmly entrenched along disciplinary lines. Deception is typically allowed in
sociology and social psychology but proscribed in economics. Notably, dis-
agreements about the use of deception are generally not based on ethical
considerations but on pragmatic grounds: the anti-deception camp argues
that deceiving participants leads to invalid results, while the other side
argues that deception has little negative impact and, under certain condi-
tions, can even enhance validity. These divergent norms governing the use
of deception are important because they stifle interdisciplinary research and
discovery, create hostilities between disciplines and researchers, and can
negatively impact the careers of scientists who may be sanctioned for
following the norms of their home discipline. We present two experimental
studies aimed at addressing the issue empirically. Study 1 addresses the
effects of direct exposure to deception, while Study 2 addresses the effects of
indirect exposure to deception. Results from both studies suggest that decep-
tion does not significantly affect the validity of experimental results.

Keywords
deception, ethics, validity, laboratory experiments, prosociality

It is believed by many undergraduates that psychologists are intentionally decep-
tive in most experiments. If undergraduates believe the same about economists,
we have lost control. It is for this reason that modern experimental economists
have been carefully nurturing a reputation for absolute honesty in all their experi-
ments . . . (I)f the data are to be valid, honesty in procedures is absolutely crucial.
Any deception can be discovered and contaminate a subject pool not only for that
experimenter but for others. Honesty is a methodological public good and decep-
tion is equivalent to not contributing. (Ledyard 1995)

Deception does not appear to ‘‘jeopardize future experiments’’ or ‘‘contaminate a
subject pool.’’ It does not mean that ‘‘we have lost control.’’ Nor does it ‘‘taint’’
experiments or cause the data they produce to be invalid. Indeed, there is good
reason to think that the selective use of deception can enhance control and ensure
validity. (Bonetti 1998)

Introduction
The above quotes exemplify contrasting positions in the debate about the
use of deception in social science experiments (see Sell 2008). The debate
largely occurs along disciplinary boundaries, with the first quote represent-
ing the position of a great majority of experimental economists and the latter
the position of a great majority of social psychologists (although it happens
to have been written by a dissenting economist). While some sociologists,
mostly those who adhere to the rational choice paradigm, side with experi-
mental economists and make a practice of not deceiving research partici-
pants (e.g., Buskens, Raub, and van der Veer 2010; Winter, Rauhut, and
Helbing 2011), the majority of sociologists who employ experiments tend to
use some form of deception (e.g., Molm 1991; Sell 1997; Lovaglia et al.
1998; Ridgeway et al. 1998; Yamagishi, Cook, and Watabe 1998; Horne
2001; Kalkhoff and Thye 2006; Willer, Kuwabara, and Macy 2009). In
general, deception that meets American Psychological Association (APA)
guidelines is allowed in both sociological and psychological social psychol-
ogy, whereas a policy of prohibition is enforced by the editors of all major
economics journals (Hertwig and Ortmann 2001). Although ethical issues
are not entirely irrelevant (e.g., Rothschild 1993), the primary grounds for
the dispute are rarely based on ethical considerations, as suggested in the
above quotes. Instead, the debate typically centers on pragmatism or conse-
quentialism, namely the validity of experimental results garnered from stud-
ies that do or do not employ deception.
A general disagreement between separate disciplines on methodological
principles would be less of an issue if the two disciplines were concerned
only with nonoverlapping areas of research, and if a discipline-specific
norm (to permit vs. prohibit deception) had no effects
across disciplinary boundaries. However, there is a broad range of interdisci-
plinary overlap in the experimental social sciences. This overlap is arguably
greatest in research on decision making in strategic interactions, where social
scientists study the foundations of trust, altruism, solidarity, cooperation, col-
lective action, and other forms of prosocial behavior (for reviews, see
Yamagishi 1995; Kollock 1998; Fehr and Gintis 2007). Moreover, their
research on these issues typically draws on the same set of game theoretical
tools. Given this high degree of overlap, conflicting norms governing the use
of deception collide especially hard in these areas. The consequences—for
the development of interdisciplinary insights and the careers of scholars—
can be quite serious. For instance, sociological social psychologists have
suggested that papers submitted to leading journals and grant proposals sub-
mitted to funding agencies often get reviewed—and rejected—based on
economists’ norms governing the use of deception (Cook and Yamagishi
2008). For these reasons, the present research focuses on these interdisciplin-
ary research areas, where the existence of conflicting norms is most relevant,
has the strongest impact on the development of interdisciplinary insights,
and arguably has the greatest potential to affect researchers’ careers.
As noted by economists who have written extensively on deception
(Hertwig and Ortmann 2008a), greater interdisciplinary agreement on how
to regulate the use of deception would be highly desirable. Research on
human cooperation and altruism has the potential to yield a range of societal
level benefits. Yet interdisciplinary insights into these (and other) areas are
stymied by the absence of agreed-upon methodological norms. Further, it is
critical that norms or policies governing methodology be based on empirical
evidence (Hertwig and Ortmann 2008b). Are calls for more restrictive rules
on the use of deception (Ortmann and Hertwig 1997; Hertwig and Ortmann
2008a) justified on pragmatist grounds? This question is amenable to empiri-
cal investigation. In the remainder of this article, we first briefly summarize
the debate about deception and then present the results of two experimental
studies that allow a direct test of economists’ and social psychologists’ com-
peting claims about the consequences of deceiving research participants.

The Deception Debate
Drawing a line to separate real deception from ‘‘perfectly legitimate . . .
economy with the truth’’ (McDaniel and Starmer 1998:406) is not always
straightforward (see, e.g., Bonetti 1998; Hey 1998; Hertwig and Ortmann
2008b). However, economists typically define deception as the explicit and
intentional provision of false or misleading information about facts or peo-
ple involved in the experiment (Hey 1998; McDaniel and Starmer 1998).
By contrast, simply withholding information from participants (e.g., about
the true purpose of the study) is generally not considered deception by econ-
omists and is therefore permitted (Hey 1998; McDaniel and Starmer 1998).
The most common form of deception, and the one most often debated in the
literature, involves the use of human confederates or computer-simulated
agents, disguised as real participants. Thus, this is the type of deception that
we focus on in this article.
Social psychologists raised questions about the use of deception in
experimental research long before economists adopted the experimental
method and established their own set of methodological conventions. These
early concerns about the use of deception were primarily ethics based
(Baumrind 1964) and led the APA, and later the American Sociological
Association, to establish formal rules limiting the use of deception. In addi-
tion, increased attention to gross breaches in ethics (e.g., the infamous
Tuskegee experiment) led to the creation of institutional review boards
(IRBs) to oversee the conduct of research involving human subjects.
Alongside the development of these formal guidelines and institutions, how-
ever, social psychologists began to question the long-term practical conse-
quences of deceiving participants. For instance, Kelman (1967) suggested
that frequent use of deception could make participants increasingly distrust-
ful, thus undermining the reputation of experimental social science and the
validity of experimental results.
Given these growing concerns, the 1970s and 1980s saw a number of studies,
mostly from experimental social psychologists, about the downstream
effects of deception (e.g., Fillenbaum 1966; Cook et al. 1970; Silverman,
Shulman, and Wiesenthal 1970; Willis and Willis 1970; Stang 1976; Smith
and Richardson 1983; Christensen 1988). Broadly speaking, these studies
addressed two basic issues: whether deceived participants are subsequently
more likely to harbor negative feelings or attitudes toward experimental
research (e.g., Cook et al. 1970; Christensen 1988; Epley and Huff 1998)
and whether suspicion resulting from the actual experience of deception
(e.g., Willis and Willis 1970; Stang 1976) or from warnings that deception
may be used (e.g., Fillenbaum 1966; Cook et al. 1970; Silverman et al.
1970) affects the behaviors of participants during the course of a single
experiment. An extensive review of these studies (Hertwig and Ortmann
2008b) concluded that the evidence about the impact of deception on suspi-
cion and experimental results is inconclusive.
Later, as economists started adopting the experimental method in larger
numbers, calls for a ban on deception began to reappear (Davis and Holt
1993; Ledyard 1995).1 The emergence of a proscription against the use of
deception in economics revived the deception debate (Ortmann and Hertwig
1997; Hertwig and Ortmann 2001), this time across disciplinary boundaries,
leading to a renewed defense of the selective use of deception by social psy-
chologists (Kimmel 1998; Cook and Yamagishi 2008). These researchers
defend the use of deception on a variety of grounds. For instance, some
researchers argue that deception is often necessary to elicit spontaneous or
unconscious reactions that occur naturally outside the laboratory but would
otherwise be impossible to study in a controlled laboratory context
(Kimmel 1998; Ariely and Norton 2007; Cook and Yamagishi 2008).
Others argue that deception—especially the use of confederates or simu-
lated actors—is often the only means by which the experimenter can main-
tain full control over the experimental stimulus (Weimann 1994; Bonetti
1998; Baron 2001; Willer 2009). More generally, those who defend the
selective use of deception argue that deceiving participants will not reduce
the validity of experimental results nor damage the reputations of experi-
mentalists (Bonetti 1998; Kimmel 1998). Indeed, as noted earlier, defenders
suggest that the selective use of deception can enhance validity and thus
increase our confidence in causal inferences. The magnitude of the dis-
agreement is witnessed by the frequency with which studies employing
deception are published in different disciplines. According to Hertwig and
Ortmann (2001), some form of deception is used in about a third of the
studies published in the highest ranked journal in social psychology, the
Journal of Personality and Social Psychology, and this rate is even higher
for lower ranked journals. Conversely, economics experiments employing
deception ‘‘can probably be counted on two hands’’ (Hertwig and Ortmann
2001:396).2

As discussed above, early empirical studies on the consequences of
deceiving research participants primarily addressed either the effects of
deception on suspicion or the effects of deception in an initial part of a
study on suspicion and behavior in a subsequent part of the same study.
But, as detailed more fully below, recent interdisciplinary debates about the
use of deception mostly revolve around three different questions, with the
first arguably being most prominent: (i) how does being deceived in one
experiment impact suspicion and behavior in subsequent experiments? (ii)
how does indirect exposure to deception (e.g., in introductory social psy-
chology courses or popular media discussion of social science studies)
impact behavior in experiments? and (iii) how does the use of deception by
sociologists and social psychologists affect the reputations of economists?
We know of only one previous study that has addressed the first question
(Jamison, Karlan, and Schechter 2008, reviewed below), and no studies that
have addressed either of the other two questions.

The Anti-Deception Hypothesis
Economists argue against the use of deception on three grounds, each of
which provides a testable hypothesis. First, the direct exposure hypothesis
states that, when a participant is deceived in one study (and knows that
deception has occurred), the participant loses trust in experimenters and
experiments. As a result, the participant’s behavior (relative to a nonde-
ceived participant) will be biased in subsequent experiments (Davis and
Holt 1993; Ledyard 1995; Hertwig and Ortmann 2008b). Second, the indi-
rect exposure hypothesis states that, as it becomes common knowledge that
social psychologists (and experimental sociologists) deceive their
participants—for example, because introductory textbooks or university lec-
tures describe experiments that use deception—even participants who have
never been deceived by researchers will tend to suspect deception. Thus,
their behavior will be biased by this indirect exposure (Davis and Holt
1993; Ledyard 1995; Hertwig and Ortmann 2008b). Finally, the spillover
hypothesis states that even if economists proscribe deception, the use of
deception in other social sciences will lead participants to believe that
experimentalists in general tend to be untrustworthy. As a result, all experi-
mental results—including those gathered by researchers who do not employ
deception—will be biased (Davis and Holt 1993; Ledyard 1995). Thus, this
latter hypothesis assumes that the ‘‘bad’’ reputations of social psychologists
travel across disciplinary boundaries to impact economists.

According to economists’ reasoning, the (direct or indirect) experience
of deception is assumed to produce a change in participants’ beliefs about
whether the experimental instructions are truthful. As a result, we should
observe a change in participants’ behaviors in laboratory experiments. The
behavioral change could be of two types (Jamison et al. 2008). Most impor-
tantly, direct or indirect exposure to deception could produce a systematic
bias in participants. For instance, if a participant in a social dilemma study
believes that she is interacting with a simulated or fictitious actor, rather
than an actual person, she may act more selfishly. Such a bias would there-
fore reduce the likelihood of observing cooperation or prosocial behavior.
In addition, suspicion may produce a nonsystematic bias in participants’
behaviors, such that doubts about the content of instructions lead to more
erratic responses (Jamison et al. 2008). A nonsystematic bias could
inflate the standard deviations of the observed variables and thus produce
larger standard errors which, in turn, would make statistical tests more
conservative.
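
A quick numeric illustration of this last point (our example, not the authors'): erratic responding that inflates a measure's standard deviation mechanically inflates the standard error of its mean, widening confidence intervals and making a fixed true difference harder to detect.

```python
import math

def standard_error(sd, n):
    """Standard error of a sample mean."""
    return sd / math.sqrt(n)

# Nonsystematic (erratic) responding inflates the SD of a measure,
# which inflates the standard error and makes tests more conservative.
print(standard_error(2.0, 100))  # 0.20 without erratic responders
print(standard_error(4.0, 100))  # 0.40 if erratic responding doubles SD
```
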
As noted above, a study by Jamison et al. (2008) addressed the direct
exposure hypothesis. Given that our first experiment builds on the Jamison
et al. procedures, we review the study in some detail. Participants in the
Jamison et al. study first took part in a repeated trust dilemma (Berg,
Dickhaut, and McCabe 1995; Buchan, Croson, and Johnson 2002; Barrera
2007). The trust dilemma is described in detail below. Half of the partici-
pants were told (correctly) that they had been deceived about the presence
of a human partner. Three weeks later, participants returned to the labora-
tory for a second phase, where they made decisions in several tasks. All but
one of these tasks were standard measures of prosociality (e.g., generosity
and cooperation) where participants were paired with actual human part-
ners. The remaining task was a solitary lottery task designed to measure risk
preferences. Importantly, because this was a completely independent
decision-making task, it involved no other person (real or fictitious). The
participants’ behaviors in these tasks constitute the dependent variables in
Jamison et al. (2008).
Jamison et al. (2008) reported several findings that they contend support
the direct exposure hypothesis: (1) females who were deceived in the first
phase were less likely than nondeceived females to show up 3 weeks later
for the second phase,3 (2) deceived participants made more erratic decisions
in the risk aversion lottery, and (3) a higher proportion of deceived partici-
pants made inconsistent choices in the risk aversion lottery. The authors con-
cluded from these three findings that experiencing deception alters behavior
in subsequent experiments. We contend that this conclusion is unwarranted,
for several reasons. First, because approximately 40 percent of participants
did not return for the second phase, their design cannot disentangle the
effects of deception from selection effects, as Jamison et al. acknowledge.
Second, the complexity of the experimental design used in Phase 1 produced
large inequalities in earnings and experiences, the effects of which are diffi-
cult to disentangle from the effects of the manipulation of interest (decep-
tion) without substantial loss of statistical power.4 Finally, and perhaps most
importantly, the researchers found only one significant difference between
the behaviors of deceived and nondeceived participants. But this difference
was in the risk aversion lottery, which is a straightforward solitary task, that
is, it involves no interaction with other (real or fictitious) participants (Holt
and Laury 2002). Indeed, this measure was included as a control and, as the
authors noted, it was the only dependent measure studied for which they
expected no effect of deception. Given the large number of comparisons the
authors make, it is possible that the significant effects for the solitary deci-
sion task may have been based on Type I error. In the first study outlined
below, which addresses direct exposure to deception, we modified the
Jamison et al. design to address key shortcomings.

Overview of Experiments
Here we introduce two new experiments designed to test the three hypoth-
eses outlined earlier about whether and how direct (Study 1) and indirect
(Study 2) exposure to deception leads to suspicion and behavioral changes.
(Materials and instructions for the two studies are available upon request
from the authors.) Given that the use of fictitious partners is the most
common—and most debated—type of deception employed in experimental
social science, both studies focus on this form of deception. Note that,
unlike the position taken by most sociologists and social psychologists, the
anti-deception position predicts effects of direct and indirect exposure to
deception. Thus, as detailed below, we aimed to relieve the anti-deception
position of some of the burden of proof by creating conditions favorable to
finding such an effect. Study 1 tests the impact of direct exposure to decep-
tion on beliefs and behaviors. Study 2 addresses the impact of indirect expo-
sure on the same outcome variables. Both studies test systematic and
nonsystematic effects and also allow a test of the spillover hypothesis, that
is, that use of deception by sociologists and social psychologists impacts the
reputations of economists.

Study 1 Methods: Direct Exposure to Deception
Study 1 was conducted in the social psychology laboratory in the department
of sociology at a large public university. Participants were recruited from
several introductory sociology courses. We sampled from students in large
introductory sociology classes because the students enrolled in these courses
are predominantly freshmen. This allowed us to minimize prior (direct or
indirect) exposure to deception.
We modeled our first study on the Jamison et al. (2008) experiment but
implemented a number of design changes in order to avoid the problems
with that study discussed earlier. First, in order to ensure that participants
would come back for the second phase (and thus to solve the selection prob-
lem), we gave participants research participation credit contingent on partic-
ipation in two experiments. We then made sure that participants could only
enroll in the two phases of the study. Using this system, mortality from
Phase 1 to Phase 2 was reduced to 8 percent, versus 40 percent in Jamison
et al. Thus, in our study, the effects of the manipulation (deception) were
not confounded by self-selection processes. Second, we greatly simplified
the procedures in the manipulation phase, as well as in the second phase, in
an effort to reduce other potential confounds. For instance, rather than pla-
cing our participants in a repeated interaction involving different roles, the
manipulation phase of our study involved a simple one-shot ‘‘prisoner’s
dilemma.’’ This experimental design allowed us to maximize the uniformity
of the stimulus within experimental conditions, keeping our two conditions
virtually identical except for the independent variable of interest, the pres-
ence or absence of deception. In addition, as a result of eliminating possible
confounds, we reduced the probability of capitalization on chance.
A total of 153 students participated in our first phase. After making a
decision in a one-shot prisoner’s dilemma, participants completed the Big 5
personality index (McCrae and Costa 1987).5 Participants were randomly
assigned to either the treatment (n = 78) or the control condition (n = 75).
Participants in the control condition were paired and paid according to the
choice combination of the two participants. Those in the treatment condition
were told at the beginning of the study that they would be matched with
another participant in another room in the laboratory but, in reality, the oth-
er’s choice was simulated. In order to keep the two conditions as similar as
possible, we yoked the choices of the fictitious partners in the treatment
condition to the decisions of participants in the control condition. Thus, a par-
ticipant in the treatment condition was just as likely as a participant in the
control condition to meet a noncooperative partner. This created identical
payments between conditions.
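
To make this yoking procedure concrete, the sketch below shows one way the matching logic could be implemented (the function name, the 'C'/'D' coding of choices, and the example data are our illustrative assumptions, not the authors' materials): each fictitious partner in the treatment condition simply replays a choice drawn from the real choices observed in the control condition, so partner behavior is distributed identically across conditions.

```python
import random

def assign_partner_choices(control_choices, n_treatment, seed=42):
    """Yoke simulated partners' choices in the treatment condition to
    real choices observed in the control condition.

    control_choices: list of 'C' (cooperate) / 'D' (defect) decisions
    made by real control participants.
    n_treatment: number of treatment participants needing a partner.
    """
    rng = random.Random(seed)
    if n_treatment <= len(control_choices):
        # Draw without replacement: each real choice is replayed once.
        return rng.sample(control_choices, n_treatment)
    # More treatment than control participants: sample with replacement
    # from the empirical distribution of real choices.
    return [rng.choice(control_choices) for _ in range(n_treatment)]

# Hypothetical control data drive the 78 simulated partners.
control = ['C'] * 45 + ['D'] * 30
simulated = assign_partner_choices(control, 78)
```
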
After participants made their decisions, the research assistant paid parti-
cipants according to their choices, and those of their actual or simulated
partner. Thereafter, during the Phase 1 debriefing sessions, participants in
the control condition were told, correctly, that there was no deception.
Participants in the treatment condition were told explicitly that the other
participant with whom she or he was paired was actually simulated, without
specifying how we determined the choice of the fictitious partner. (Had we
told them that their choices were matched with those of a participant in a
previous experimental session, we might have dampened the power of the
manipulation.) Finally, the research assistant asked a series of follow-up
questions to make sure that the participant understood that the partner was
real (in the control condition) or fictitious (in the treatment condition).
Two to three weeks later, 140 participants (71 of whom were in the
deception condition) returned to the same laboratory to take part in Phase 2.
The instructions did not draw any connection to Phase 1. In Phase 2, each
participant completed a risk aversion lottery (included in order to replicate a
finding reported by Jamison et al. 2008), two dictator games (once as sender
and once as receiver) and two trust dilemmas (once as trustor and once as
trustee), for a total of five decision scenarios.6 If being deceived leads to
more selfish behavior (e.g., because participants believe they are paired with
a simulated actor rather than another person), participants who were
deceived in Phase 1 should be less generous in the dictator game and less
trusting and trustworthy in the trust dilemma.
The experiment was conducted using paper and pencil. The instructions
for each decision scenario were delivered in separate envelopes and partici-
pants were paired with a different (actual) partner for each of the decisions.
The instructions began by informing participants that (1) they would partici-
pate in five decision scenarios, (2) in each scenario they would be matched
with a different person, sitting in another room, whom they would not meet
during or after the study, and (3) for each participant, one of the five scenar-
ios would be randomly selected at the end of the experiment. She or he
would be paid according to the outcomes of the randomly selected scenario.
Furthermore, instructions of every scenario contained detailed information
on how the payoffs for that scenario would be computed. Consequently,
every scenario constituted an anonymous one-shot interaction, and every
decision had monetary consequences for the participants with some positive
probability.
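
This pay-one-randomly incentive scheme can be sketched as follows (the function and scenario names are ours, for illustration only): because any scenario may turn out to be the one that pays, every decision carries real monetary stakes.

```python
import random

def pay_one_scenario(payoffs, seed=None):
    """Randomly select one scenario and pay its outcome.
    payoffs: dict mapping scenario name -> participant's payoff there."""
    rng = random.Random(seed)
    scenario = rng.choice(list(payoffs))
    return scenario, payoffs[scenario]

# Hypothetical payoffs for the five Phase 2 scenarios:
payoffs = {'lottery': 12.0, 'dictator_sender': 14.0,
           'dictator_receiver': 6.0, 'trustor': 13.0, 'trustee': 9.0}
print(pay_one_scenario(payoffs, seed=7))
```
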

The sequence of the five decisions was the same for all participants.
(Participants did not know the sequence in advance.) Moreover, participants
were not given any feedback about the decisions of their partners until the
end of the experiment. These design details were based on the need to hold
constant all aspects of participants’ experiences, other than the independent
variable (the presence or absence of deception in Phase 1). In so doing, we
aimed to avoid the potential confounds in the Jamison et al. (2008) study.
Participants first made a decision in the risk aversion lottery, which con-
sists of a series of 10 ordered binary choices between two lotteries (Holt
and Laury 2002). Every pair includes a ‘‘safer’’ and a ‘‘riskier’’ option.7
Respondents tend to choose the safe option at the beginning, switch to the
risky one at some point, and stick to risky choices thereafter. The number of
safe choices is the individual’s risk aversion. Switching back and forth
between safe and risky is an indication of inconsistent risk preferences
(Jamison et al. 2008).
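
A minimal scoring routine for this lottery might look as follows (the 'S'/'R' coding and the function name are our assumptions, not part of the original materials): risk aversion is the count of safe choices, and a respondent is inconsistent if he or she ever returns to the safe option after switching to the risky one.

```python
def score_holt_laury(choices):
    """Score one respondent's Holt-Laury lottery.
    choices: list of 10 entries, 'S' (safe) or 'R' (risky), in the
    order presented. Returns (risk_aversion, inconsistent)."""
    risk_aversion = choices.count('S')  # number of safe choices
    switched = False
    inconsistent = False
    for c in choices:
        if c == 'R':
            switched = True
        elif switched:  # chose the safe option again after switching
            inconsistent = True
    return risk_aversion, inconsistent

print(score_holt_laury(['S'] * 5 + ['R'] * 5))        # (5, False)
print(score_holt_laury(['S', 'R', 'S'] + ['R'] * 7))  # (2, True)
```
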
For the dictator game, participants assigned to the dictator role were
given a $20 endowment and asked to decide how much, between $0.00 and
$20.00, to transfer to a different participant (the receiver) with whom they
had been paired. The amount donated in dictator games is the most com-
monly used behavioral measure of generosity or altruism (e.g., Mifune,
Hashimoto, and Yamagishi 2010). Participants were also paired in a second
dictator game, but because they were receivers in the second game, it
entailed no actual decision.
For the trust dilemma, the participant in the role of the trustor could
choose to send any amount of a $10 endowment (from $0.00 to $10.00) to
the trustee. Whatever amount the trustor sent would be tripled by the experi-
menter and, subsequently, the trustee could choose how much of the tripled
amount to return (from $0.00 to the entire tripled amount). The amounts
sent and returned are standard measures of trust and trustworthiness, respec-
tively (e.g., Buchan et al. 2002; Barrera 2007). We measured trustees’ deci-
sions using the strategy method (Yamagishi et al. 2009; Rauhut and Winter
2010). That is, trustees were asked to decide how much they would return
to the trustor for each of the possible amounts the trustor could send. The
actual payoffs to trustors and trustees were based on the trustee’s decision
for the amount actually sent by the trustor. Using the strategy method
allowed us to have every participant first occupy the trustor role, and then
the trustee role. In addition, use of the strategy method allowed us to with-
hold partners’ choices from participants until the end of the study, thus
eliminating all differences except for whether or not participants were
deceived in the earlier phase.
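
To make the payoff logic concrete, the sketch below computes trust dilemma payoffs under the strategy method (the function name and the example half-return schedule are ours; the study itself was run with paper and pencil, not code).

```python
def trust_game_payoffs(sent, return_schedule, endowment=10, multiplier=3):
    """Payoffs in the trust dilemma under the strategy method.
    sent: dollars the trustor sends (0..endowment).
    return_schedule: dict mapping every possible amount sent to the
    dollars the trustee would return, elicited before the trustee
    learns the trustor's actual decision."""
    tripled = multiplier * sent
    returned = return_schedule[sent]
    assert 0 <= returned <= tripled, "cannot return more than received"
    return endowment - sent + returned, tripled - returned

# Hypothetical trustee strategy: return half of the tripled amount.
schedule = {s: (3 * s) // 2 for s in range(11)}
print(trust_game_payoffs(6, schedule))  # trustor: 10-6+9=13; trustee: 9
```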

Immediately after the measures of risk aversion (the lottery task), altru-
ism (dictator game), and trust and trustworthiness (trust dilemma), partici-
pants completed a questionnaire that included five items about trust in science
(Miller 1998; Pardo and Calvo 2002) and seven questions about trust in
social science created by modifying the trust in science items from Pardo
and Calvo (2002). The trust in science and trust in social science scales each
showed moderate reliability, α = .67 and α = .73, respectively. In addition,
the questionnaire included three questions about the participant’s percep-
tions of the frequency of use of deception by sociologists, social psycholo-
gists, and economists (0 = never, 10 = always); and one question asking how
often social scientists use unethical procedures (0 = never, 10 = always).
Finally, we asked participants whether they had previously participated in
laboratory experiments conducted by sociologists, social psychologists, and
economists.8 We paid participants for one randomly selected scenario.
(Payments averaged $17; SD = 7). Thereafter, participants were debriefed
and dismissed.9 There was no deception in Phase 2, which took approximately
1 hour.

Study 1 Results
We test the hypotheses on beliefs by comparing responses to questionnaire
items and scales across conditions. For the behavioral hypotheses, we use
amounts donated by dictators and amounts sent by trustors as measures of
altruism and trust, respectively. Our trustworthiness measure is the average
proportion returned in the 10 decisions elicited via the strategy method.
We tested all hypotheses on systematic effects using multivariate
Hotelling T2 tests and univariate one-tailed t tests. Assuming that erratic
behavior would increase the variances of the postmanipulation measures
(Jamison et al. 2008), we tested nonsystematic effects using Levene’s F
tests. As some of our dependent measures were not normally distributed, we
also tested our hypotheses using nonparametric tests (i.e., Mann–Whitney
instead of t tests). However, these analyses yielded substantively identical
results. Finally, in ancillary analyses, we controlled for possible interactions
between deception and partner’s cooperation in Phase 1 (i.e., the difference
between deceived and nondeceived participants, depending on whether the
simulated or real partner cooperated or defected). In none of these analyses
was the interaction significant. Hence we do not discuss this interaction
below.
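
As an illustration of this test battery, the sketch below runs it on hypothetical data (the use of numpy/scipy, the variable names, and the data are our assumptions, not the authors' code). The two-sample Hotelling T2 is computed from the pooled covariance matrix and converted to an F statistic for the p value; the Bonferroni threshold matches the α/n = 0.008 reported below for the six parallel belief tests.

```python
import numpy as np
from scipy import stats

def hotelling_t2(X, Y):
    """Two-sample Hotelling T2 test for a difference in mean vectors.
    X, Y: (n_i x p) arrays holding the p dependent measures per group."""
    n1, p = X.shape
    n2 = Y.shape[0]
    diff = X.mean(axis=0) - Y.mean(axis=0)
    S = ((n1 - 1) * np.cov(X, rowvar=False) +
         (n2 - 1) * np.cov(Y, rowvar=False)) / (n1 + n2 - 2)  # pooled cov
    T2 = (n1 * n2) / (n1 + n2) * diff @ np.linalg.solve(S, diff)
    F = (n1 + n2 - p - 1) / (p * (n1 + n2 - 2)) * T2  # exact F conversion
    return T2, F, stats.f.sf(F, p, n1 + n2 - p - 1)

# Hypothetical data: rows = participants, columns = the four behavioral
# measures (altruism, trust, trustworthiness, risk aversion).
rng = np.random.default_rng(0)
control, treatment = rng.normal(size=(69, 4)), rng.normal(size=(71, 4))
print(hotelling_t2(control, treatment))

# Univariate follow-ups on a single measure, as in Table 1:
c, t = control[:, 0], treatment[:, 0]
print(stats.ttest_ind(c, t, alternative='greater'))   # one-tailed t test
print(stats.levene(c, t))                  # nonsystematic (variance) test
print(stats.mannwhitneyu(c, t, alternative='greater'))  # robustness check
alpha_bonferroni = 0.05 / 6  # six parallel belief tests: 0.05/6 = 0.008
```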

Effects on Beliefs
As shown in Table 1, the multivariate Hotelling T2 for the beliefs measures is sta-
tistically significant. The univariate t tests show that this result is driven by
significant differences in two of the belief variables: participants who were
deceived believed that deception is used more often by social psychologists
and by sociologists, compared to those who were not deceived. Recall that
the study was conducted in a social psychology laboratory in a sociology
department, and Phase 1 of our study constituted the bulk of most partici-
pants’ exposure to social science experiments. That participants in the
deception condition, compared to those in the control condition, were more
likely to conclude that sociology and social psychology experiments involve
deception suggests that our manipulation was successful. These effects on
beliefs remain significant when Bonferroni correction for multiple parallel
tests is used.
As shown in Table 1, we found no other differences in beliefs between
conditions. Most notably, differences in beliefs about the frequency of
deception do not appear to transfer to economics. Although the difference in
beliefs about the use of deception in economics is close to the critical
threshold (p = .07), the observed effect size (Cohen’s d = -0.248) is much smaller than the
effect observed for sociology and social psychology. In addition, this effect
is far from significant using Bonferroni correction (α/n = 0.008). Therefore,
we conclude that our results do not support the spillover argument, that is,
deception by social psychologists does not appear to have a substantial
impact on the reputations of economists.

Effects on Behavior
Neither multivariate nor univariate tests showed significant effects for any
of the behavioral measures. That is, we observed no systematic or nonsyste-
matic differences in behaviors between deceived and nondeceived partici-
pants. Like Jamison and colleagues (2008), we found that a number of
participants (30 percent of the total) made inconsistent choices in the risk
aversion lottery. However, contrary to their results, we found that nonde-
ceived participants were more likely to make inconsistent choices, although
this difference was not significant. We also ran both t tests and F tests (com-
paring means and variances of the risk aversion measures between deceived
and nondeceived participants) separately for those who made inconsistent
choices in the risk aversion lottery and for those who did not. However,
unlike Jamison et al., the results for the two groups were virtually identical.

Table 1. Study 1, Descriptive Statistics and Tests

                                       Control   Treatment   Tests (a)           p       Effect size d
Behavior
  Multivariate                                               T2 = 3.42           .50
  Altruism                    M        5.75      5.52        t(138) = 0.29       .39     0.049
                              SD       4.92      4.49        F(1, 138) = 0.47    .49
  Trust                       M        3.29      3.92        t(138) = -1.12      .87     -0.190
                              SD       3.14      3.45        F(1, 138) = 1.32    .25
  Trustworthiness             M        0.31      0.29        t(138) = 0.66       .26     0.112
                              SD       0.18      0.20        F(1, 138) = 2.58    .11
  Risk aversion (b)           M        5.17      5.30        t(138) = -0.47      .64     -0.079
                              SD       1.47      1.61        F(1, 138) = 1.04    .31
Beliefs
  Multivariate                                               T2 = 18.05          .01*
  Trust in science            M        1.46      1.29        t(138) = 0.98       .84     0.166
                              SD       1.00      1.02        F(1, 138) = 0.09    .77
  Trust in social sciences    M        5.19      5.12        t(138) = 0.69       .75     0.117
                              SD       0.74      0.72        F(1, 138) = 0.00    .98
  Deception by sociologists   M        5.22      6.53        t(138) = -3.75      .00**   -0.633
                              SD       2.15      1.99        F(1, 138) = 1.13    .29
  Deception by social
    psychologists             M        5.32      6.48        t(138) = -3.53      .00**   -0.596
                              SD       2.05      1.84        F(1, 138) = 1.51    .22
  Deception by economists     M        5.78      6.21        t(138) = -1.46      .07     -0.248
                              SD       1.75      1.71        F(1, 138) = 0.18    .67
  Unethical procedures by
    social scientists         M        3.49      3.65        t(138) = -0.48      .32     -0.081
                              SD       1.90      1.91        F(1, 138) = 0.04    .85

Note. (a) All t tests except the risk aversion lottery are one tailed; according to the anti-deception
prediction, t should be positive for behavioral measures and negative for beliefs measures.
(b) The t test for risk aversion is two tailed because the anti-deception hypothesis makes no
prediction for systematic effects on this measure.
*p < .05. **p < .01.

Therefore, we report only the results for the full sample in Table 1. Given
that this measure was identical to Jamison et al.’s and yielded inconsistent
and weak results, the most prudent conclusion is that there is no effect. In
short, our data do not support any hypothesis on systematic or nonsyste-
matic effects of direct exposure to deception on behavior.

Study 2 Methods: Indirect Exposure to Deception
In Study 1, exposure to deception was direct but measures on the dependent
variables (Phase 2) were taken 3 weeks after the stimulus (Phase 1); by con-
trast, in Study 2 we investigated indirect exposure to deception. Study 2
participants were not actually deceived. Rather, we manipulated whether
participants were exposed to text describing the use of deception in a classic
behavioral experiment. Because this is arguably a subtler manipulation of
deception than direct exposure, we minimized the delay between stimulus
and response by measuring the dependent variables immediately after
the stimulus. In addition, we increased the statistical power of our test using
a pretest/posttest design: we measured the dependent variables before and
after the manipulation and compared within-subject differences between
experimental conditions.
A total of 106 subjects took part in the second experiment. Like Study 1,
Study 2 was conducted at the social psychology laboratory of the depart-
ment of sociology at a large public university, using paper and pencil. Upon
arrival, participants were escorted to isolated subject rooms, where they
could not directly interact with any other participants. Participants were ran-
domly assigned to either the treatment (n = 52) or control condition (n =
54). Before and after the exposure to deception, participants completed the
same standard measures of altruism (dictator game), trust, and trustworthi-
ness (trust dilemma) used for Study 1, in the same order and following the
same procedures. Given that we found no significant results for the risk
aversion lottery in Study 1, and given that sociologists and social psycholo-
gists are typically less concerned with risk aversion in solitary tasks, we
omitted the lottery task in Study 2.
At the beginning of the study, the instructions informed the participants
that they would be involved in ‘‘several decision or task scenarios.’’ They
were told that they would be paid for one (randomly selected) task at the end
of the study, as well as how the payoffs for each scenario would be com-
puted. These scenarios included our pretest and posttest measures of the
dependent variables: generosity in the dictator game, and trust and trust-
worthiness in the trust dilemma (see descriptions of these measures in the
Study 1 procedures). In order to prevent participants from guessing the true
purpose of the study, our manipulation of indirect exposure to deception
(described below) was presented as one of the ‘‘several decision or task sce-
narios’’ and labeled ‘‘research comprehension task.’’ As in Phase 2 of Study
1, these decision scenarios entailed no deception. Further, all tasks yielded
potential monetary payoffs, the sequence of decisions was the same for all
participants, and participants were not told in advance the sequence of those
decisions.
For both the treatment and control conditions, the ‘‘research comprehension
task’’ consisted of reading an excerpt from an experimental methods text
(Aronson et al. 1990) that summarized the procedures of a classic study in
social psychology (Aronson and Mills 1959).10 The text in the treatment
condition explicitly mentioned several forms of deception used in the study,
including that ostensible other participants were preprogrammed. After read-
ing the study description, participants completed a series of comprehension
questions about the text, for which they could earn $1 per correct answer, if
the research comprehension task was selected for payment at the end of the
experiment.
The Aronson–Mills study description and comprehension questions were
identical in both conditions, with two exceptions: (1) all references to decep-
tion were removed from the control condition: the excerpt in the control
condition did not indicate that deception was used at any point; and (2) the
questionnaire for the treatment condition included a question asking expli-
citly whether the study involved deception. A pilot test confirmed that the
treatment condition clearly indicated the presence of deception and that the
control condition did not. This allowed us to remove the question about
deception in the control condition of the actual experiment, to avoid priming
control participants with deception. Our manipulation is in line with argu-
ments about the effects of indirect or ‘‘secondhand’’ exposure to deception
(Hertwig and Ortmann 2008b).
Following the manipulation, participants completed the posttest
dependent measures. We did not call attention to the pretest dependent mea-
sures. For instance, we did not inform participants that they would take part
in the same four types of decision scenarios. As with the pretest measures,
we emphasized only that they would interact with a completely different
partner for each decision scenario. Finally, participants completed the same
poststudy questionnaire used in Study 1, except that we asked about their
perceptions of the use of unethical procedures separately for economists,
sociologists, and social psychologists.

As in Study 1, in order to avoid history effects, we did not give partici-
pants feedback about their payoffs until the very end of the experiment, at
which point one of the nine tasks (one of the eight decision scenarios, or the
research comprehension task) was randomly selected. Participants were paid
for this task, along with a show up fee. Payments averaged $29 (SD = 7).11

Study 2 Results
As in Study 1, we tested the hypotheses on systematic effects of deception
on both behaviors and beliefs using t tests and all hypotheses on nonsyste-
matic effects using Levene’s F tests. For systematic effects on behaviors,
we measured within-subject differences in change scores across conditions,
that is, changes in amounts given as dictator (altruism), amounts sent as
trustor (trust), and amounts returned as trustee (trustworthiness). As the
hypotheses on nonsystematic effects postulate an increase in the variances
after the experience of deception, the Levene’s F tests were performed on
the variances of the posttest measurements. Again, using nonparametric
tests yielded substantively identical results.
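
A sketch of this analysis strategy on hypothetical data (the names and data are ours, not the authors' code): a one-tailed t test compares within-subject change scores across conditions, and Levene's test compares posttest variances.

```python
import numpy as np
from scipy import stats

def change_score_tests(pre_c, post_c, pre_t, post_t):
    """Systematic and nonsystematic tests for the pretest/posttest design.
    Inputs are 1-D arrays with one value per participant."""
    change_c, change_t = pre_c - post_c, pre_t - post_t
    # Systematic effect: one-tailed t test on within-subject change
    # scores; the anti-deception prediction is a negative t (treatment
    # participants become less prosocial, so their change is larger).
    t_res = stats.ttest_ind(change_c, change_t, alternative='less')
    # Nonsystematic effect: Levene's test on posttest variances only.
    f_res = stats.levene(post_c, post_t)
    return t_res, f_res

rng = np.random.default_rng(1)  # hypothetical data, n = 54 and 52
pre_c, post_c = rng.normal(5, 4, 54), rng.normal(5, 4, 54)
pre_t, post_t = rng.normal(5, 4, 52), rng.normal(5, 4, 52)
print(change_score_tests(pre_c, post_c, pre_t, post_t))
```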

Effects on Beliefs
The postmanipulation measure of beliefs included both ‘‘trust in science’’
and ‘‘trust in social science’’ scales, as well as items measuring perceptions
of the use of deception and unethical procedures in three social sciences:
economics, sociology, and social psychology. As shown in the lower part of
Table 2, across both multivariate and univariate tests, indirect exposure had
no impact on any of the beliefs measures.

Effects on Behavior
Although we found no effects on beliefs, it is still possible that indirect
exposure may impact behavior. To ensure that we could detect small effects,
we compared change scores between conditions. The upper part of Table 2
shows the results of both the univariate and the multivariate behavioral
tests. We found no differences between con-
ditions for any behavior (altruism, trust, or trustworthiness).12 Nor did the
postmanipulation variances, used to test the hypotheses of nonsystematic
effects, show differences between conditions. Thus, these data fail to support
any of the anti-deception hypotheses, those either for systematic changes or
for nonsystematic changes.

Table 2. Study 2, Descriptive Statistics and Tests

                                       Control   Treatment   Tests (a)           p       Effect size d
Behavior
  Multivariate                                               T2 = 0.93           .82
  Altruism (b)                M(x1-x2)  0.85      0.65       t(104) = 0.33       .63     0.065
                              SDx2      4.89      4.69       F(1, 104) = 0.04    .83
  Trust (b)                   M(x1-x2)  0.20     -0.08       t(104) = 0.66       .75     0.129
                              SDx2      3.32      3.31       F(1, 104) = 0.25    .62
  Trustworthiness (b)         M(x1-x2)  0.01      0.01       t(104) = -0.22      .41     -0.044
                              SDx2      0.18      0.20       F(1, 104) = 0.52    .47
Beliefs
  Multivariate                                               T2 = 4.98           .79
  Trust in science            M         1.54      1.35       t(104) = 0.94       .82     0.184
                              SD        0.97      1.03       F(1, 104) = 0.57    .45
  Trust in social sciences    M         5.29      5.34       t(104) = -0.44      .33     -0.086
                              SD        0.65      0.66       F(1, 104) = 0.00    .97
  Deception by sociologists   M         5.01      5.55       t(104) = -1.13      .12     -0.223
                              SD        2.06      2.04       F(1, 104) = 0.01    .91
  Deception by social
    psychologists             M         5.50      5.80       t(104) = -0.77      .22     -0.151
                              SD        2.12      1.90       F(1, 104) = 1.02    .31
  Deception by economists     M         5.44      5.76       t(104) = -0.83      .20     -0.164
                              SD        2.01      1.89       F(1, 104) = 0.01    .94
  Unethical procedures in
    sociology                 M         3.01      2.67       t(104) = 0.95       .83     0.186
                              SD        1.95      1.84       F(1, 104) = 0.00    .96
  Unethical procedures in
    social psychology         M         3.31      2.98       t(104) = 0.83       .80     0.480
                              SD        2.01      1.98       F(1, 104) = 0.29    .59
  Unethical procedures in
    economics                 M         3.81      3.84       t(104) = -0.06      .48     -0.012
                              SD        2.34      2.47       F(1, 104) = 0.70    .40

Note. (a) All t tests are one tailed; according to the anti-deception prediction, all ts should be
negative.
(b) Means of these measures refer to within-subject differences between pre- and postmanipulation
scores; standard deviations refer to the posttest scores.

[Figure 1: G*Power sensitivity plot of minimal detectable effect size d (y-axis, 0.35 to
0.55) against power 1 - β (x-axis, 0.60 to 0.95) for a one-tailed t test on the difference
between two independent means; α err prob = 0.05, allocation ratio N2/N1 = 0.971831,
total sample size = 140.]

Figure 1. Sensitivity analysis (calculation based on the sample from Study 1).

Statistical Power
Our studies failed to support the anti-deception hypotheses. But it may be
premature to conclude that deception does not lead to any behavioral
effects. Failing to find evidence that deception influences behavior is, of
course, distinct from providing evidence that deception does not influence
behavior. Yet, as noted earlier, researchers have proposed a number of theo-
retical arguments for why we should not expect deception to affect behavior
(Bonetti 1998; Kimmel 1998). Thus, in this case, the null hypothesis is a
substantive hypothesis. Moreover, while no number of empirical tests can
ever show that deception does not matter (since some future study may
reveal some conditions under which it does matter), it is important to ask
how powerful our microscope was: How large would deception effects
need to be for our studies to detect them?
To answer these questions, we conducted power sensitivity analyses to
assess ex post whether our statistical tests had a fair chance to reject an
incorrect null hypothesis. These sensitivity analyses were performed using
the software G*Power (Faul et al. 2007). Using data from Study 1, Figure 1
plots achieved power against effect size, given α = .05 and our sample size,
which is relatively large for a simple between-subject design with only two
conditions. Figure 1 shows that the statistical power of our test would have
been sufficient (1 - β = 0.95) to find a significant difference if the mean
score of the deceived participants had been 0.558 SDs lower than the con-
trol group mean in any of the dependent variables we tested. Performing the
same calculation for Study 2 yields a minimal detectable mean difference of
0.643 SDs. However, Study 2 has even more power because within-subject
differences (based on change scores) are less subject to noise and therefore
produce smaller standard errors. The within-subject design of Study 2 thus
offsets the fact that indirect exposure is subtler than direct exposure.
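
The same sensitivity calculation can be reproduced without G*Power; this sketch uses statsmodels (our tooling choice, not the authors') with Study 1's cell sizes of 71 deceived and 69 nondeceived participants.

```python
from statsmodels.stats.power import TTestIndPower

# Minimal detectable effect size for Study 1: one-tailed two-group
# t test, alpha = .05, power = .95, n1 = 71 (deceived), n2 = 69
# (nondeceived), so allocation ratio N2/N1 = 69/71 = 0.9718.
d = TTestIndPower().solve_power(effect_size=None, nobs1=71, alpha=0.05,
                                power=0.95, ratio=69/71,
                                alternative='larger')
print(round(d, 3))  # ~0.558, matching the G*Power figure reported above
```
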
Because, to our knowledge, our study and the one by Jamison et al.
(2008) are the only two experiments where these hypotheses were tested, no
estimates of effect size are available from the existing literature. However,
the magnitudes of the observed effect sizes (rightmost columns of
Tables 1 and 2) are substantially smaller than the minimal detectable effect
given by the power sensitivity analysis of Figure 1. Yet nothing indicates that our samples were
unusual in any way, as the means of our behavioral measures are in line with
those typically observed in the literature (see Camerer 2003). Moreover,
some of our observed effects go in the opposite direction from that predicted
by the anti-deception hypothesis. For example, the variable trust in Study 1
has the largest (behavioral) effect size, but the direction of the (nonsignifi-
cant) effect contradicts the anti-deception hypothesis. Therefore, the lack of
evidence for behavioral effects of deception in our studies is unlikely to be
attributable to Type II error.

Discussion
We reported the results of two new studies designed to investigate both
behavioral and attitudinal effects of direct and indirect exposure to decep-
tion in laboratory experiments. For the beliefs measures, participants who
were directly deceived (by social psychologists in a sociology department)
subsequently believed that deception is used more often by social psycholo-
gists and sociologists than did those who were not deceived. As the experiment
was conducted in the social psychology lab housed in a sociology depart-
ment, the experience of deception affected the reputation of both disci-
plines. Again, this finding is not surprising, given that the majority of our
participants had taken part in only one study and that study involved decep-
tion. Indeed, we would have worried if the manipulation did not impact
beliefs. This result is consistent with prior work showing that the experience
of deception increases the expectation that deception may be used in future
experiments (Epley and Huff 1998).
In contrast to the spillover hypothesis, the experience of deception did
not affect the reputation of economists. This result is important because
arguments against the use of deception are often based on the spillover
hypothesis (Ledyard 1995). Even though we did not provide a behavioral
test of the spillover hypothesis, a number of aspects of our findings speak
against spillover effects. First, we used the same survey item to assess the
impact of deception on the reputation of sociologists, social psychologists,
and economists and found clearly different results for the three disciplines.
But despite the impact on the reputations of sociologists and social psychol-
ogists, we observed no significant difference in any behavioral measures.
Neither direct nor indirect exposure to deception significantly altered the
behavior of participants in subsequent experiments (Study 1) or decisions
(Study 2). That is, participants who experienced deception were not less
generous (as measured by giving in the dictator game) than those who did
not experience deception. Nor were they less trusting or trustworthy in the
trust dilemma. Furthermore, neither study showed nonsystematic behavioral
effects of exposure to deception. That is, participants did not show any sign
of erratic or random behavior as hypothesized by the anti-deception argu-
ment (Jamison et al. 2008). Finally, we failed to replicate a finding from a
prior study of deception (Jamison et al. 2008), namely that deceived partici-
pants are more likely to make inconsistent choices in the risk aversion lot-
tery. Indeed, in our study, inconsistent choices were slightly more common
among nondeceived participants. As we found no behavioral effects of
deception even though reputations were affected, it is even less plausible
that such effects would be observed by experimental economists, whose
reputations were not significantly affected.
Nevertheless, future work should provide a behavioral test of the spil-
lover hypothesis. One such test would entail a straightforward extension of
the direct exposure study (Study 1) reported above. In Phase 1, participants
would be deceived (or not) in a social psychology lab. Participants would
then take part in a study conducted in an experimental economics lab at
some point (days or weeks) later. The spillover hypothesis predicts that
those who were deceived in Phase 1 would act differently in Phase 2 than
those who were not deceived. If warranted, additional follow-up studies
could investigate whether and how the ubiquitous practice of universities
having separate physical laboratories for social psychology, sociology, and
economics experiments inhibits spillover. For instance, a future study might
entail running both phases in the same physical laboratory but framing
Phase 1 as a social psychology experiment and Phase 2 as a social psychol-
ogy or economics experiment (depending on condition). Such a design
could offer insight into whether separate research facilities insulate econo-
mists against ‘‘spillover’’ effects from social psychology experiments
employing deception. It could also yield important insights for those univer-
sities where lab facilities are shared by economists, psychologists, and
sociologists. As warranted, additional studies could address the role of other
institutional arrangements. But again, we want to emphasize that neither the
study reported above nor a previous study by economists (Jamison et al.
2008) yielded any effects on behavior in interdependent situations, that is,
the types of situations in which the anti-deception hypothesis would predict
effects. Thus, all currently available evidence suggests that behavioral spil-
lover would be minimal to nonexistent.
It may seem puzzling that experiencing deception influenced partici-
pants’ beliefs, but not their behaviors, even though the procedures and
experimental setting were very similar to those in which deception occurred.
One possible explanation is that, while being deceived can increase partici-
pants’ perceptions that experiments (or experiments in a given discipline)
employ deception, it may have more limited effects on their beliefs about a
particular experiment. For instance, participants may simply ‘‘suspend dis-
belief’’ when they enter a research lab (Mixon 1977). An alternative argu-
ment is that participants in our studies had simply not become suspicious
‘‘enough.’’ Perhaps more spectacular forms of deception are necessary
before suspicion evolves into full distrust and impacts behavior. For
instance, we could have used the exact same procedures administered by the
same research assistant with only a few hours or even a few minutes
between the manipulation of the independent variable (whether participants
are deceived) and the dependent measure. And we might have obtained
some effect. But we are interested in knowing whether deception affects
subsequent behavior in the range of conditions that both proponents and
opponents have in mind when they debate the effects of deception.
As students often participate in multiple experiments over the course
of their time at university, another possibility is that effects may arise only
after repeated experiences of deception. Hertwig and Ortmann (2001:397-
98) have argued that the interaction between experimenter and participant
can be modeled as a repeated ‘‘trust dilemma’’ in which the participant ini-
tially believes the experimenter is being honest. They contend that, if a par-
ticipant experiences deception, that participant will never believe any
experimenter in subsequent encounters. But our results indicate that a differ-
ent model would be more appropriate, such as one with incomplete informa-
tion (for a model of a trust dilemma with incomplete information, see Raub
2004). In such a model, the participant believes that the experimenter is
probably being honest. Thereafter, each time the participant is deceived,
she or he lowers the expected probability that any subsequent experimenter
with whom she or he interacts will be honest. Thus, repeated experiences of
deception may be necessary before a participant believes that the probability
of meeting an honest experimenter is so low that she or he should no longer
believe any given experimenter. Under these conditions, we might expect
repeated experiences with deception to impact behavior. From this perspec-
tive, moderation is key. There may be little value (for scientific knowledge
or participants) in studies that employ participants who have taken part in
many experiments. Importantly, this is not only the case for studies that use
deception but also for nondeception studies. For instance, there is evidence
from economics experiments (where deception is not used) that participants
tend to become less cooperative as they accumulate experience in experi-
ments on cooperation (Zelmer 2003). In any case, repeated exposure to
deception provides an important avenue for future studies.
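This intuition can be made concrete with a simple updating rule. The following
is our own illustrative beta-Bernoulli sketch, not the incomplete-information
model of Raub (2004): write h_t and d_t for the numbers of honest and deceptive
encounters a participant has experienced through study t, and let alpha and
beta encode her or his prior beliefs. The believed probability of facing an
honest experimenter is then

\[
p_t = \frac{\alpha + h_t}{\alpha + \beta + h_t + d_t}.
\]

The participant continues to extend trust as long as p_t remains above some
threshold p*. With a strong prior of experimenter honesty (alpha large relative
to beta), a single deception lowers p_t only slightly; a string of deceptions
is needed before p_t falls below p* and the participant stops believing any
given experimenter, which is precisely the repeated-exposure pattern suggested
above.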
As we stated earlier, we focused on experiments involving cooperation
and prosocial behavior because this is the research area where the existence
of conflicting norms regulating the use of deception is most relevant. But
the use of deception is more general and thus disagreements about its use
are broader than suggested above.13 We know of no reason that our argu-
ments and findings would not be relevant to other areas of research that
employ similar forms of deception as those used here, namely the use of
simulated others (e.g., Molm 1991; Lovaglia et al. 1998; Ridgeway et al.
1998; Horne 2001; Kalkhoff and Thye 2006). Nonetheless, future research
should assess the generalizability of the results reported above. For instance,
we would hesitate to draw inferences from our work to experiments where
participants are misled in ways that produce high levels of psychological
discomfort (e.g., Milgram’s authority experiments). The consequences of
these more severe forms of deception are important issues for continued
investigation. Of course, the very nature of some of those studies makes it
more difficult to conduct them without deception (i.e., it may not be straight-
forward to establish an appropriate control, or nondeception, condition).
Thus, besides role-playing (Willis and Willis 1970; Kerr, Nerenz, and
Herrick 1979), pretest/posttest designs may be useful alternatives to studies
that manipulate exposure to deception.
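Where a pretest/posttest design is feasible, the change-score logic is
straightforward to implement. The sketch below uses simulated placeholder data
and made-up variable names, not our actual measurements.

# Sketch of a pretest/posttest analysis of exposure to deception,
# using simulated placeholder data rather than our actual measurements.
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(seed=0)
pre = rng.integers(0, 11, size=50)          # e.g., dictator giving before exposure
post = np.clip(pre + rng.integers(-2, 3, size=50), 0, 10)  # giving afterward

change = post - pre
t_stat, p_value = ttest_rel(post, pre)      # tests for a systematic shift
print(f"t = {t_stat:.2f}, p = {p_value:.3f}; "
      f"mean change = {change.mean():.2f}, "
      f"variance = {change.var(ddof=1):.2f}")
# A near-zero mean change with a large change-score variance would
# indicate nonsystematic (erratic) effects rather than a systematic shift.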
Summing up, our experiments were designed with the goal of creating
conditions favorable to finding effects of exposure to deception on subse-
quent behavior. For instance, in Study 1, the two phases were based in the
same laboratory and they involved very similar tasks (prisoner’s dilemma in
Phase 1, dictator and trust dilemmas in Phase 2). Study 2 used a within sub-
jects pretest/posttest design and the posttest measure was taken minutes after
participants were exposed to deception. In addition, across both experiments,
we searched for both systematic and nonsystematic effects on behaviors.
Yet neither study showed significant effects of deception on behavior.
Of course, it is impossible to prove that deception never produces any
undesirable downstream consequences. Thus, we are not optimistic that the
arguments and evidence presented above will convince those who contend
that ‘‘the mere possibility that deception might influence the behavior of
the subject pool would be enough to raise grave concern about deception’’
(McDaniel and Starmer 1998). While we agree that researchers should
always be on the lookout for potential deleterious effects of deception, we
think empirical evidence of the existence of such effects would be needed
to justify bans on the use of deception on pragmatic grounds. Even then,
researchers would need to balance the costs of abandoning the use of decep-
tion with the benefits of doing so. As noted by experimental economist
Alvin Roth (2001:427), ‘‘even if all psychologists stopped using deception
tomorrow, the fact that important experiments using deception are taught in
introductory classes might mean the benefit from this change would be long
in coming, since psychology students would remain suspicious for a long
time. But the costs would be immediate . . . because there have been psy-
chology experiments that used deception to spectacularly good effect’’ and
such experiments could no longer be conducted.
We hope that our work will discourage experimentalists across the social
sciences from ''digging in'' to ideological positions and will instead
encourage them to adopt a pragmatic, evidence-based approach to the
question of when deception is
advisable. Yet, we recognize that the studies reported above do not, by any
means, provide the last word. Thus, we also hope that researchers develop
new empirical strategies for understanding the conditions under which
deception does and does not matter. Additional empirical evidence would
provide a foundation for weighing the potential costs and benefits of using
deception, thus allowing more informed decisions, and evidence-based poli-
cies, governing its use.

Acknowledgment
We appreciate helpful comments and suggestions from Ozan Aksoy, Vincent
Buskens, Ashley Harrell, Irene Klugvist, Hanne van der Iest, and three anonymous
reviewers.

Authors’ Notes
Contributions were equal and the order of authorship is alphabetical.

Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research,
authorship, and/or publication of this article.

Funding
The author(s) disclosed receipt of the following financial support for the research,
authorship, and/or publication of this article: This research was supported by grants
SES-0551895 and SES-0647169 from the National Science Foundation to the sec-
ond author.

Notes
1. Although it is not completely clear why a norm against the use of deception
emerged in economics, Ariely and Norton (2007) suggest a plausible explana-
tion. They note that because economists typically assume that human behavior
is driven by utility maximization, procedures in experimental economics tend
to emphasize the provision of monetary incentives as well as full (and honest)
information about the costs and benefits associated with alternative lines of
action. Categorically avoiding the use of deception presumably increases the
chances that these conditions are realized.
2. We know of no prior work on the prevalence of deception in sociology experi-
ments. Although a detailed analysis is beyond the scope of the current article,
we conducted a cursory review of articles published in what are often consid-
ered to be the top three mainstream sociology journals (American Journal of
Sociology, American Sociological Review, and Social Forces) and the primary
outlet for research in sociological social psychology (Social Psychology
Quarterly). We limited our search to the past 3 years. Of the studies that
employed laboratory experiments, just under two thirds used some form of
deception.
3. Note, however, that deceived males were more likely to return than nonde-
ceived males.
4. In addition to the manipulation (whether participants were deceived or not),
Phase 1 also manipulated whether participants were assigned to the ‘‘trustor’’
or ‘‘trustee’’ role in the trust dilemma. Because a trustee’s behavioral options
are determined by the trustor’s previous decision, the design relinquished
experimental control to the decisions of other participants. This problem is
compounded by the fact that the trust dilemma was repeated for five rounds.
Together, these design features introduce substantial differences in payoffs,
both within and between dyads, and therefore constitute confounds.
5. We included the Big 5 personality index primarily as a filler task, so that parti-
cipants (particularly those in the nondeception condition) would not become
suspicious about the brevity of the study. Given that responses on the personal-
ity index are not relevant for current purposes, we do not discuss them further.
6. For both experiments, we avoided use of loaded terms such as ‘‘dictator,’’
‘‘generosity,’’ ‘‘trust,’’ ‘‘trustworthiness,’’ and so on. We use the terms here
for simplicity.
7. In the safer lottery, the two amounts a participant can win are similar to
each other; in the riskier lottery, one prize is substantially higher than the
other. For example, in the first pair a participant chooses between a (safer)
lottery that pays $11 with 10 percent chance and $8.80 with 90 percent chance,
and a (riskier) lottery that pays $21.20 with 10 percent chance and $0.55 with
90 percent chance. As the participant moves from the first to the tenth choice,
the amounts remain constant while the probabilities change, so that the higher
amounts become increasingly likely in later pairs. In the final (tenth) pair,
the higher amount in both lotteries is paid with certainty. (A short sketch of
this payoff schedule appears after these notes.)
8. A total of 42 participants (30 percent) had previously taken part in experiments
(28 percent had taken part in experiments in sociology or social psychology
and 2 percent in economics). However, only 13 participants (9 percent) had
taken part in more than two experiments. Experienced participants were
equally distributed between the two conditions. Excluding experienced
participants—whether all of them or just the ones who participated in more
than two experiments—yielded substantively identical results. Moreover, tests
run separately on experienced and inexperienced participants yielded remark-
ably consistent results. (These analyses are available upon request). Thus, the
analyses discussed below are performed on the full sample.
9. We checked for suspicion using a funneled debriefing procedure, asking partici-
pants whether they found anything ‘‘odd’’ or ‘‘hard to believe,’’ and whether
they thought there ‘‘may have been more to the experiment than meets the
eye’’ (see Aronson et al. 1990:316-17). Consistent with our beliefs measures
(reported in detail below), participants in the deception condition were more
likely to mention the possibility that others may have been simulated. As this
study is aimed at addressing the behavioral effects of deception and suspicion,
our analyses include all suspicious participants. Importantly, however, none of
the results reported below depend on whether or not these participants are
included. Analyses available upon request show that participants who expressed
suspicions did not differ in any other way from those who did not (or those in
the control condition).
10. We chose to use a description of the Aronson–Mills experiment because it is
one of the most famous classic studies employing deception but is not as
widely known to the general public as, for example, the Milgram obedience
experiments. As explained below, our deception manipulation involves includ-
ing or omitting details in a summary of the Aronson–Mills experiment. Using
the Milgram experiment would likely reduce differences between conditions,
as participants in the control condition might have ''filled in the blanks'' in
omitted descriptions of deception.
11. A total of 14 (13 percent) of the participants had participated in experiments
involving deception. Experienced participants were equally distributed between
the two conditions. We ran separate analyses for experienced and inexper-
ienced subjects and the results were substantively identical to those presented
below performed on the full sample (these additional analyses are available
upon request).
12. An alternative explanation for the lack of support for the hypotheses on sys-
tematic effects on behavior is that participants were striving to be consistent
in the pre- and posttest behavioral measures. We think this is unlikely for sev-
eral reasons. First, we emphasized that each decision was independent, and that
they would be matched with a different partner for each decision scenario.
Given that they were presented with a number of distinct decision scenarios
(all presented abstractly), it is unlikely that many participants drew explicit
connections between decisions. (Indeed, no participants mentioned similarities
between decision scenarios.) Furthermore, the change score variances were
substantial, suggesting that participants were not motivated by consistency.
13. For instance, some anti-deceptionists argue that most, if not all, experiments
that employ deception could be conducted without the use of deception, for
example, see McDaniel and Starmer (1998) about Weimann (1994). Yet Cook
and Yamagishi (2008:215-16) note that whether deception is required often
depends decisively on what one assumes guides human behavior, ‘‘Many
experimental economists adhere to one primary view of human behavior while
social psychologists, sociologists, and even some behavioral economists have a
wider range of views that include nonrational, emotional, and heuristic-based
elements. Some of the alternative methods advocated by economists to avoid
the use of deception . . . are not valid modes of conducting experiments when
investigating these other elements of choice or behavior.’’ For further discus-
sion and illustrative examples, see Cook and Yamagishi (2008) and Ariely and
Norton (2007).
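
The payoff schedule described in note 7 can be reproduced in a few lines. The
dollar amounts below come from note 7; the probability progression (10 percent
up to 100 percent in steps of 10) is the standard Holt and Laury (2002) design
and is an assumption where the note does not spell it out.

# The ten paired-lottery choices from note 7 (after Holt and Laury 2002).
# Dollar amounts are from note 7; the 10-to-100 percent probability
# progression is the standard Holt-Laury schedule.
SAFE = (11.00, 8.80)    # (high prize, low prize) in the safer lottery
RISKY = (21.20, 0.55)   # (high prize, low prize) in the riskier lottery

for k in range(1, 11):
    p = k / 10  # chance of the higher prize in pair k
    ev_safe = p * SAFE[0] + (1 - p) * SAFE[1]
    ev_risky = p * RISKY[0] + (1 - p) * RISKY[1]
    print(f"pair {k:2d}: EV(safe) = {ev_safe:5.2f}, EV(risky) = {ev_risky:6.2f}")

A risk-neutral participant would choose the safer lottery in the first four
pairs and the riskier lottery from the fifth pair on; switching back and forth
between the two columns is the kind of inconsistent choice discussed in the
text.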

References
Ariely, Dan and Michael I. Norton. 2007. ‘‘Psychology and Experimental
Economics: A Gap in Abstraction.’’ Current Directions in Psychological Science
16:336-39.
Aronson, Elliot and Judson Mills. 1959. ''The Effect of Severity of Initiation on
Liking for a Group.'' Journal of Abnormal and Social Psychology 59:177-81.
Aronson, Elliot, Phoebe C. Ellsworth, Merril J. Carlsmith, and Marti H. Gonzales.
1990. Methods of Research in Social Psychology. New York: McGraw-Hill.
Baron, Jonathan. 2001. ‘‘Purposes and Methods.’’ Behavioral and Brain Sciences
24:403.
Barrera, Davide. 2007. ''The Impact of Negotiated Exchanges on Trust and
Trustworthiness.’’ Social Networks 29:508-26.
Baumrind, Diana. 1964. ''Some Thoughts on Ethics of Research: After Reading
Milgram's 'Behavioral Study of Obedience.''' American Psychologist 19:421-23.
Berg, Joyce, John Dickhaut, and Kevin McCabe. 1995. ‘‘Trust, Reciprocity, and
Social History.’’ Games and Economic Behavior 10:122-42.
Bonetti, Shane. 1998. ‘‘Experimental Economics and Deception.’’ Journal of
Economic Psychology 19:377-95.
Buchan, Nancy R., Rachel T. A. Croson, and Eric J. Johnson. 2002. ‘‘Swift
Neighbors and Persistent Strangers: A Cross-Cultural Investigation of Trust and
Reciprocity in Social Exchange.’’ American Journal of Sociology 108:168-206.
Buskens, Vincent, Werner Raub, and Joris van der Veer. 2010. ‘‘Trust in Triads: An
Experimental Study.’’ Social Networks 32:301-12.
Camerer, Colin F. 2003. Behavioral Game Theory. New York: Russell Sage
Foundation.
Christensen, Larry. 1988. ‘‘Deception in Psychological Research: When is its Use
Justified?’’ Personality and Social Psychology Bulletin 14:664-75.
Cook, Karen S. and Toshio Yamagishi. 2008. ‘‘A Defense of Deception on Scientific
Grounds.’’ Social Psychology Quarterly 71:215-21.
Cook, Thomas D., James R. Bean, Bobby J. Calder, Robert Frey, Martin L. Krovetz,
and Stephen R. Reisman. 1970. ‘‘Demand Characteristics and Three Conceptions
of the Frequently Deceived Subjects.’’ Journal of Personality and Social
Psychology 14:185-94.
Davis, Douglas D. and Charles A. Holt. 1993. Experimental Economics. Princeton,
NJ: Princeton University Press.
Epley, Nicholas and Chuck Huff. 1998. ‘‘Suspicion, Affective Response, and
Educational Benefit as Result of Deception in Psychology Research.’’
Personality and Social Psychology Bulletin 24:759-68.
Faul, Franz, Edgar Erdfelder, Albert-Georg Lang, and Axel Buchner. 2007.
‘‘G*Power 3: A Flexible Statistical Power Analysis Program for the Social,
Behavioral, and Biomedical Sciences.’’ Behavior Research Methods 39:175-91.
Fehr, Ernst and Herbert Gintis. 2007. ‘‘Human Motivation: Experimental and
Analytical Foundations.’’ Annual Review of Sociology 33:43-64.
Fillenbaum, Samuel. 1966. ‘‘Prior Deception and Subsequent Experimental
Performance: The ‘‘Faithful’’ Subject.’’ Journal of Personality and Social
Psychology 5:532-37.
Hertwig, Ralph and Andreas Ortmann. 2001. ‘‘Experimental Practices in Economics:
A Methodological Challenge for Psychologists?’’ Behavioral and Brain Sciences
24:383-403.
Hertwig, Ralph and Andreas Ortmann. 2008a. ‘‘Deception in Social Psychological
Experiments: Two Misconceptions and a Research Agenda.’’ Social Psychology
Quarterly 71:222-27.
Hertwig, Ralph and Andreas Ortmann. 2008b. ''Deception in Experiments:
Revisiting the Argument in its Defense.’’ Ethics and Behavior 18:59-92.
Hey, John D. 1998. ‘‘Experimental Economics and Deception: A Comment.’’
Journal of Economic Psychology 19:397-401.
Holt, Charles A. and Susan K. Laury. 2002. ‘‘Risk Aversion and Incentive Effects.’’
American Economic Review 92:1644-55.
Horne, Christine. 2001. ‘‘The Enforcement of Norms: Group Cohesion and Meta-
Norms.’’ Social Psychology Quarterly 63:253-66.
Jamison, Julian, Dean Karlan, and Laura Schechter. 2008. ‘‘To Deceive or Not to
Deceive: The Effects of Deception on Behavior in Future Laboratory
Experiments.’’ Journal of Economic Behavior and Organization 68:477-88.
Kalkhoff, Will and Shane R. Thye. 2006. ‘‘Expectation States Theory and Research:
New Observations from Meta-Analyses.’’ Sociological Methods and Research
35:219-49.
Kelman, Herbert C. 1967. ‘‘Human Use of Human Subjects: The Problem of
Deception in Social Psychological Experiments.’’ Psychological Bulletin 67:1-11.
Kerr, Norbert L., David R. Nerenz, and David Herrick. 1979. ''Role Playing and the
Study of Jury Behavior.’’ Sociological Methods & Research 7:337-55.
Kimmel, Allan J. 1998. ‘‘In Defense of Deception.’’ American Psychologist 53:803-
805.
Kollock, Peter. 1998. ‘‘Social Dilemmas: The Anatomy of Cooperation.’’ Annual
Review of Sociology 24:183-214.
Ledyard, John O. 1995. ‘‘Public Goods: A Survey of Experimental Research.’’
Pp. 111-94 in The Handbook of Experimental Economics, edited by John H.
Kagel and Alvin E. Roth. Princeton, NJ: Princeton University Press.
Lovaglia, Michael J., Jeffrey W. Lucas, Jeffrey A. Houser, Shane R. Thye, and Barry
Markovsky. 1998. ‘‘Status Processes and Mental Ability Test Scores.’’ American
Journal of Sociology 104:195-228.
McCrae, Robert R. and Paul T. Costa. 1987. ‘‘Validation of the Five-factor Model
of Personality Across Instruments and Observers.’’ Journal of Personality and
Social Psychology 52:81-90.
McDaniel, Tanga and Chris Starmer. 1998. ‘‘Experimental Economics and
Deception: A Comment.’’ Journal of Economic Psychology 19:403-409.
Mifune, Nobuhiro, Hirofumi Hashimoto, and Toshio Yamagishi. 2010. ‘‘Altruism
Toward In-group Members as a Reputation Mechanism.’’ Evolution and Human
Behavior 31:109-17.
Miller, Jon D. 1998. ‘‘The Measurement of Civic Scientific Literacy.’’ Public
Understanding of Science 7:203-23.
Mixon, Don. 1977. ‘‘Why Pretend to Deceive?’’ Personality and Social Psychology
Bulletin 3:647-53.
Molm, Linda D. 1991. ‘‘Affect and Social Exchange: Satisfaction in Power-
Dependence Relations.'' American Sociological Review 56:475-93.
Ortmann, Andreas and Ralph Hertwig. 1997. ‘‘Is Deception Acceptable?’’ American
Psychologist 52:746-47.
Pardo, Rafael and Félix Calvo. 2002. ‘‘Attitudes Toward Science Among the
European Public: A Methodological Analysis.’’ Public Understanding of Science
11:155-95.
Raub, Werner. 2004. ''Hostage Posting as a Mechanism of Trust: Binding,
Compensating and Signaling.'' Rationality and Society 16:319-65.
Rauhut, Heiko and Fabian Winter. 2010. ‘‘A Sociological Perspective on Measuring
Norms Using Strategy Method Experiments.’’ Social Science Research 39:1181-94.
Ridgeway, Cecilia L., Elizabeth Heger Boyle, Kathy J. Kuipers, and Dawn T.
Robinson. 1998. ‘‘How Do Status Beliefs Develop? The Role of Resources and
Interactional Experience.’’ American Sociological Review 63:331-50.
Roth, Alvin E. 2001. ‘‘Form and Function in Experimental Design.’’ Behavioral and
Brain Sciences 24:427-28.
Rothschild, Kurt W. 1993. Ethics and Economic Theory. Aldershot, UK: Edward
Elgar.
Sell, Jane. 1997. ‘‘Gender, Strategies and Contributions to Public Goods.’’ Social
Psychology Quarterly 60:252-65.
Sell, Jane. 2008. ‘‘Introduction to Deception Debate.’’ Social Psychology Quarterly
71:213-14.
Silverman, Irwin, Arthur D. Shulman, and David L. Wiesenthal. 1970. ‘‘Effects of
Deceiving and Debriefing Experimental Subjects on Performance in Later
Experiments.’’ Journal of Personality and Social Psychology 14:203-12.
Smith, Stephen S. and Deborah Richardson. 1983. ‘‘Amelioration of Deception and
Harm in Psychological Research: The Important Role of Debriefing.’’ Journal of
Personality and Social Psychology 44:1075-82.
Stang, David J. 1976. ‘‘Ineffective Deception in Conformity Research: Some Causes
and Consequences.’’ European Journal of Social Psychology 6:353-67.
Weimann, Joachim. 1994. ‘‘Individual Behavior in a Free Riding Experiment.’’
Journal of Public Economics 54:185-200.
Willer, Robb. 2009. ‘‘Groups Reward Individual Sacrifice: The Status Solution to
the Collective Action Problem.’’ American Sociological Review 74:23-43.
Willer, Robb, Ko Kuwabara, and Michael W. Macy. 2009. ‘‘The False Enforcement
of Unpopular Norms.’’ American Journal of Sociology 115:451-90.
Willis, Richard H. and Yolanda A. Willis. 1970. ‘‘Role Playing versus Deception:
An Experimental Comparison.’’ Journal of Personality and Social Psychology
16:472-77.
Winter, Fabian, Heiko Rauhut, and Dirk Helbing. 2011. ‘‘How Norms Can Generate
Conflict: An Experiment on the Failure of Cooperative Micro-Motives on the
Macro-Level.'' Social Forces 90:919-46.
Yamagishi, Toshio. 1995. ‘‘Social Dilemmas.’’ Pp. 311-35 in Sociological
Perspectives on Social Psychology, edited by Karen S. Cook, Gary Alan Fine,
and James S. House. Boston, MA: Allyn and Bacon.
Yamagishi, Toshio, Karen S. Cook, and Motoki Watabe. 1998. ‘‘Uncertainty, Trust,
and Commitment Formation in the United States and Japan.’’ American Journal
of Sociology 104:165-94.
Yamagishi, Toshio, Yutaka Horita, Haruto Takagishi, Mizuho Shinada, Shigehito
Tanida, and Karen S. Cook. 2009. ‘‘The Private Rejection of Unfair Offers and
Emotional Commitment.'' Proceedings of the National Academy of Sciences
106:11520-23.
Zelmer, Jennifer. 2003. ‘‘Linear Public Good Experiments: A Meta-Analysis.’’
Experimental Economics 6:299-310.

Bios
Davide Barrera is an assistant professor at the University of Turin (Italy). His
research interests include group processes, mechanisms of cooperation in small
groups, experimental methods, and social networks. Currently, he is working on two
main projects: one on the effects of sanctioning rules in public good games (with
Nynke van Miltenburg, Vincent Buskens, and Werner Raub), and the other on for-
mation and consequences of negative relationships in small groups.
Brent Simpson is Professor of Sociology at the University of South Carolina. His
current projects include studies of altruism homophily in social networks, successful
collective action in large groups, and how interpersonal moral judgments influence
cooperation and social order.
