Deceiving Research Participants in the Social Sciences
Abstract
Social scientists have intensely debated the use of deception in experimental research, and conflicting norms governing the use of deception are now firmly entrenched along disciplinary lines. Deception is typically allowed in sociology and social psychology but proscribed in economics. Notably, disagreements about the use of deception are generally not based on ethical considerations but on pragmatic grounds: the anti-deception camp argues that deceiving participants leads to invalid results, while the other side argues that deception has little negative impact and, under certain conditions, can even enhance validity. These divergent norms governing the use of deception are important because they stifle interdisciplinary research and discovery, create hostilities between disciplines and researchers, and can negatively impact the careers of scientists who may be sanctioned for using deception.
Davide Barrera1,2 and Brent Simpson3

1 Department of Culture Politics and Society and Collegio Carlo Alberto, University of Turin, Torino, Italy
2 ICS/Department of Sociology, Utrecht University, Utrecht, The Netherlands
3 Department of Sociology, University of South Carolina, Columbia, SC, USA
Corresponding Author:
Brent Simpson, Department of Sociology, University of South Carolina, Columbia, SC 29208,
USA.
Email: bts@sc.edu
Keywords
deception, ethics, validity, laboratory experiments, prosociality
Introduction
The above quotes exemplify contrasting positions in the debate about the
use of deception in social science experiments (see Sell 2008). The debate
largely occurs along disciplinary boundaries, with the first quote represent-
ing the position of a great majority of experimental economists and the latter
the position of a great majority of social psychologists (although it happens
to have been written by a dissenting economist). While some sociologists,
mostly those who adhere to the rational choice paradigm, side with experi-
mental economists and make a practice of not deceiving research partici-
pants (e.g., Buskens, Raub, and van der Veer 2010; Winter, Rauhut, and
Helbing 2011), the majority of sociologists who employ experiments tend to
use some form of deception (e.g., Molm 1991; Sell 1997; Lovaglia et al.
1998; Ridgeway et al. 1998; Yamagishi, Cook, and Watabe 1998; Horne
2001; Kalkhoff and Thye 2006; Willer, Kuwabara, and Macy 2009).
Overview of Experiments
Here we introduce two new experiments designed to test the three hypoth-
eses outlined earlier about whether and how direct (Study 1) and indirect
(Study 2) exposure to deception leads to suspicion and behavioral changes.
(Materials and instructions for the two studies are available upon request
from the authors.) Given that the use of fictitious partners is the most
common—and most debated—type of deception employed in experimental
social science, both studies focus on this form of deception. Note that,
unlike the position taken by most sociologists and social psychologists, the
anti-deception position predicts the effects of direct and indirect exposure to
deception. Thus, as detailed below, we aimed to relieve the anti-deception
position of some of the burden of proof by creating conditions favorable to
finding such an effect. Study 1 tests the impact of direct exposure to decep-
tion on beliefs and behaviors. Study 2 addresses the impact of indirect expo-
sure on the same outcome variables. Both studies test systematic and
nonsystematic effects and also allow a test of the spillover hypothesis, that
is, that use of deception by sociologists and social psychologists impacts the
reputations of economists.
The sequence of the five decisions was the same for all participants.
(Participants did not know the sequence in advance.) Moreover, participants
were not given any feedback about the decisions of their partners until the
end of the experiment. These design details were based on the need to hold
constant all aspects of participants’ experiences, other than the independent
variable (the presence or absence of deception in Phase 1). In so doing, we
aimed to avoid the potential confounds in the Jamison et al. (2008) study.
Participants first made a decision in the risk aversion lottery, which con-
sists of a series of 10 ordered binary choices between two lotteries (Holt
and Laury 2002). Every pair includes a ‘‘safer’’ and a ‘‘riskier’’ option.7
Respondents tend to choose the safe option at the beginning, switch to the
risky one at some point, and stick to risky choices thereafter. The number of
safe choices is the individual’s risk aversion. Switching back and forth
between safe and risky is an indication of inconsistent risk preferences
(Jamison et al. 2008).
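The scoring and inconsistency rules described above can be sketched in code (a minimal illustration, not the authors' materials; the choice labels are our own):

```python
def risk_aversion_score(choices):
    """Number of safe choices across the 10 pairs (higher = more risk averse)."""
    return choices.count("S")

def is_inconsistent(choices):
    """True if the participant returns to the safe option after choosing risky,
    i.e., the sequence is not a single switch from safe to risky."""
    seen_risky = False
    for c in choices:
        if c == "R":
            seen_risky = True
        elif seen_risky:  # a safe choice after a risky one
            return True
    return False

consistent = ["S"] * 4 + ["R"] * 6  # switches once, at the fifth pair
erratic = ["S", "R", "S", "R", "S", "R", "S", "R", "S", "R"]
```

A participant with the `consistent` sequence has a risk aversion score of 4; the alternating `erratic` sequence is flagged as inconsistent.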
For the dictator game, participants assigned to the dictator role were
given a $20 endowment and asked to decide how much, between $0.00 and
$20.00, to transfer to a different participant (the receiver) with whom they
had been paired. The amount donated in dictator games is the most com-
monly used behavioral measure of generosity or altruism (e.g., Mifune,
Hashimoto, and Yamagishi 2010). Participants were also paired in a second
dictator game, but because they were receivers in the second game, it
entailed no actual decision.
For the trust dilemma, the participant in the role of the trustor could
choose to send any amount of a $10 endowment (from $0.00 to $10.00) to
the trustee. Whatever amount the trustor sent would be tripled by the experi-
menter and, subsequently, the trustee could choose how much of the tripled
amount to return (from $0.00 to the entire tripled amount). The amounts
sent and returned are standard measures of trust and trustworthiness, respec-
tively (e.g., Buchan et al. 2002; Barrera 2007). We measured trustees’ deci-
sions using the strategy method (Yamagishi et al. 2009; Rauhut and Winter
2010). That is, trustees were asked to decide how much they would return
to the trustor for each of the possible amounts the trustor could send. The
actual payoffs to trustors and trustees were based on the trustee’s decision
for the amount actually sent by the trustor. Using the strategy method
allowed us to have every participant first occupy the trustor role, and then
the trustee role. In addition, use of the strategy method allowed us to with-
hold partners’ choices from participants until the end of the study, thus
eliminating all differences except for whether or not participants were
deceived in the earlier phase.
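How payoffs resolve under the strategy method can be sketched as follows (our illustration; the variable names and the half-return schedule are hypothetical, not the study's parameters beyond the $10 endowment and tripling):

```python
ENDOWMENT = 10   # trustor's endowment in dollars
MULTIPLIER = 3   # amount sent is tripled by the experimenter

def payoffs(sent, return_schedule):
    """Resolve one trust dilemma under the strategy method.

    `return_schedule` maps every amount the trustor could send to the amount
    the trustee committed to return; only the entry for the amount actually
    sent determines realized payoffs."""
    returned = return_schedule[sent]
    trustor_payoff = ENDOWMENT - sent + returned
    trustee_payoff = MULTIPLIER * sent - returned
    return trustor_payoff, trustee_payoff

# A hypothetical trustee who returns half of the tripled amount for every
# possible transfer:
half_back = {s: MULTIPLIER * s / 2 for s in range(ENDOWMENT + 1)}
```

For example, if the trustor sends the full $10 against this schedule, both parties earn $15; if the trustor sends nothing, the trustor keeps $10 and the trustee earns nothing.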
Immediately after the measures of risk aversion (the lottery task), altruism (dictator game), and trust and trustworthiness (trust dilemma), participants completed a questionnaire that included five items about trust in science
(Miller 1998; Pardo and Calvo 2002) and seven questions about trust in
social science created by modifying the trust in science items from Pardo
and Calvo (2002). The trust in science and trust in social science scales each
showed moderate reliability, α = .67 and α = .73, respectively. In addition,
the questionnaire included three questions about the participant’s percep-
tions of the frequency of use of deception by sociologists, social psycholo-
gists, and economists (0 = never, 10 = always); and one question asking how
often social scientists use unethical procedures (0 = never, 10 = always).
Finally, we asked participants whether they had previously participated in
laboratory experiments conducted by sociologists, social psychologists, and
economists.8 We paid participants for one randomly selected scenario (payments averaged $17; SD = $7). Thereafter, participants were debriefed and dismissed.9 There was no deception in Phase 2, which took approximately one hour.
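The scale reliabilities reported above are Cronbach's α, which can be computed as follows (a generic sketch with made-up responses, not the study's data):

```python
from statistics import variance  # sample variance (n - 1 denominator)

def cronbach_alpha(items):
    """Cronbach's alpha for a scale.

    `items` is a list of items, each a list of responses (one per respondent):
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))."""
    k = len(items)
    totals = [sum(resp) for resp in zip(*items)]
    return (k / (k - 1)) * (1 - sum(variance(it) for it in items) / variance(totals))

# Two perfectly correlated items yield high internal consistency:
demo = [[1, 2, 3, 4], [2, 4, 6, 8]]
```

For the `demo` data the coefficient works out to 8/9 ≈ .89, well above the mid-.60s to low-.70s range observed for the study's scales.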
Study 1 Results
We test the hypotheses on beliefs by comparing responses to questionnaire
items and scales across conditions. For the behavioral hypotheses, we use
amounts donated by dictators and amounts sent by trustors as measures of
altruism and trust, respectively. Our trustworthiness measure is the average
proportion returned in the 10 decisions elicited via the strategy method.
We tested all hypotheses on systematic effects using multivariate Hotelling T² tests and univariate one-tailed t tests. Assuming that erratic
behavior would increase the variances of the postmanipulation measures
(Jamison et al. 2008), we tested nonsystematic effects using Levene’s F
tests. As some of our dependent measures were not normally distributed, we
also tested our hypotheses using nonparametric tests (i.e., Mann–Whitney
instead of t tests). However, these analyses yielded substantively identical
results. Finally, in ancillary analyses, we controlled for possible interactions
between deception and partner’s cooperation in Phase 1 (i.e., the difference
between deceived and nondeceived participants, depending on whether the
simulated or real partner cooperated or defected). In none of these analyses
was the interaction significant. Hence we do not discuss this interaction
below.
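The two families of tests can be sketched in pure Python (an illustration of the statistics involved, not the authors' analysis code; only the test statistics are computed here, and p values would come from the t and F distributions):

```python
from statistics import mean

def t_statistic(a, b):
    """Pooled two-sample Student's t statistic (equal variances assumed),
    used for testing systematic mean differences between conditions."""
    na, nb = len(a), len(b)
    ma, mb = mean(a), mean(b)
    ssa = sum((x - ma) ** 2 for x in a)
    ssb = sum((x - mb) ** 2 for x in b)
    sp2 = (ssa + ssb) / (na + nb - 2)  # pooled variance
    return (ma - mb) / (sp2 * (1 / na + 1 / nb)) ** 0.5

def levene_w(a, b):
    """Levene's W for two groups (absolute deviations from group means),
    used for testing nonsystematic (variance) differences. Assumes some
    within-group variation in the deviations."""
    ma, mb = mean(a), mean(b)
    groups = [[abs(x - ma) for x in a], [abs(x - mb) for x in b]]
    n = sum(len(g) for g in groups)
    k = len(groups)
    grand = mean([z for g in groups for z in g])
    between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    within = sum(sum((z - mean(g)) ** 2 for z in g) for g in groups)
    return ((n - k) / (k - 1)) * between / within
```

Two samples with identical means give t = 0; two samples with identical spreads give W = 0, while a large spread difference drives W up.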
Effects on Beliefs
As shown in the bottom row of Table 1, the multivariate Hotelling T² is statistically significant. The univariate t tests show that this result is driven by
significant differences in two of the belief variables: participants who were
deceived believed that deception is used more often by social psychologists
and by sociologists, compared to those who were not deceived. Recall that
the study was conducted in a social psychology laboratory in a sociology
department, and Phase 1 of our study constituted the bulk of most partici-
pants’ exposure to social science experiments. That participants in the
deception condition, compared to those in the control condition, were more
likely to conclude that sociology and social psychology experiments involve
deception suggests that our manipulation was successful. These effects on
beliefs remain significant when Bonferroni correction for multiple parallel
tests is used.
As shown in Table 1, we found no other differences in beliefs between
conditions. Most notably, differences in beliefs about the frequency of
deception do not appear to transfer to economics. Although the difference in beliefs about the use of deception in economics is close to the critical threshold (p = .07), the observed effect size (Cohen's d = −0.248) is much smaller than the effect observed for sociology and social psychology. In addition, this effect is far from significant under the Bonferroni correction (α/n = 0.008). Therefore,
we conclude that our results do not support the spillover argument, that is,
deception by social psychologists does not appear to have a substantial
impact on the reputations of economists.
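The Bonferroni correction applied here simply divides the nominal significance level across the parallel tests; as a worked example (the figure of six parallel tests is our inference from the rounded α/n = 0.008, not stated in the text):

```python
def bonferroni_threshold(alpha, n_tests):
    """Per-test significance threshold under Bonferroni correction."""
    return alpha / n_tests

# Six parallel belief tests at alpha = .05 give a per-test threshold of
# about .0083, which rounds to the alpha/n = 0.008 reported in the text:
threshold = bonferroni_threshold(0.05, 6)
```

Against this threshold, a belief difference at p = .07 is clearly far from significant.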
Effects on Behavior
Neither multivariate nor univariate tests showed significant effects for any
of the behavioral measures. That is, we observed no systematic or nonsyste-
matic differences in behaviors between deceived and nondeceived partici-
pants. Like Jamison and colleagues (2008), we found that a number of
participants (30 percent of the total) made inconsistent choices in the risk
aversion lottery. However, contrary to their results, we found that nonde-
ceived participants were more likely to make inconsistent choices, although
this difference was not significant. We also ran both t tests and F tests (com-
paring means and variances of the risk aversion measures between deceived
and nondeceived participants) separately for those who made inconsistent
choices in the risk aversion lottery and for those who did not. However,
unlike in Jamison et al., the results for the two groups were virtually identical.
[Table 1 near here: behavioral and belief measures by condition (Control vs. Treatment), with test statistics, p values, and effect sizes d. Note: All t tests except the risk aversion lottery are one-tailed; according to the anti-deception prediction, t should be positive for behavioral measures and negative for beliefs measures. The t test for risk aversion is two-tailed because the anti-deception hypothesis makes no prediction of systematic effects on this measure. *p < .05. **p < .01.]
Therefore, we report only the results for the full sample in Table 1. Given
that this measure was identical to Jamison et al.’s and yielded inconsistent
and weak results, the most prudent conclusion is that there is no effect. In
short, our data do not support any hypothesis on systematic or nonsyste-
matic effects of direct exposure to deception on behavior.
Study 2 Results
As in Study 1, we tested the hypotheses on systematic effects of deception
on both behaviors and beliefs using t tests and all hypotheses on nonsyste-
matic effects using Levene’s F tests. For systematic effects on behaviors,
we measured within-subject differences in change scores across conditions,
that is, changes in amounts given as dictator (altruism), amounts sent as
trustor (trust), and amounts returned as trustee (trustworthiness). As the
hypotheses on nonsystematic effects postulate an increase in the variances
after the experience of deception, the Levene’s F tests were performed on
the variances of the posttest measurements. Again, using nonparametric
tests yielded substantively identical results.
Effects on Beliefs
The postmanipulation measure of beliefs included both ‘‘trust in science’’
and ‘‘trust in social science’’ scales, as well as items measuring perceptions
of the use of deception and unethical procedures in three social sciences:
economics, sociology, and social psychology. As shown in the lower part of
Table 2, across both multivariate and univariate tests, indirect exposure had
no impact on any of the beliefs measures.
Effects on Behavior
Although we found no effects on beliefs, it is still possible that indirect
exposure may impact behavior. To ensure that we could detect small effects,
we compared change scores between conditions. The upper part of Table 2
shows the results of univariate behavioral tests, while the multivariate test is
shown in the bottom row of Table 2. We found no differences between con-
ditions for any behavior (altruism, trust, or trustworthiness).12 Nor did the
postmanipulation variances, used to test the hypotheses of nonsystematic
effects, show differences between conditions. Thus, these data fail to support
any of the anti-deception hypotheses, those either for systematic changes or
for nonsystematic changes.
[Table 2 near here: behavioral and belief measures by condition (Control vs. Treatment), with test statistics, p values, and effect sizes d. Note: All t tests are one-tailed; according to the anti-deception prediction, all ts should be negative. Means of the behavioral measures refer to within-subject differences between pre- and postmanipulation scores; standard deviations refer to the posttest scores.]
[Figure 1 near here: achieved statistical power plotted against effect size d.]
Statistical Power
Our studies failed to support the anti-deception hypotheses. But it may be
premature to conclude that deception does not lead to any behavioral
effects. Failing to find evidence that deception influences behavior is, of
course, distinct from providing evidence that deception does not influence
behavior. Yet, as noted earlier, researchers have proposed a number of theo-
retical arguments for why we should not expect deception to affect behavior
(Bonetti 1998; Kimmel 1998). Thus, in this case, the null hypothesis is a
substantive hypothesis. Moreover, while no number of empirical tests can
ever show that deception does not matter (since some future study may
reveal some conditions under which it does matter), it is important to ask how powerful our microscope was: how large would deception effects need to be for our studies to detect them?
To answer these questions, we conducted power sensitivity analyses to
assess ex post whether our statistical tests had a fair chance to reject an
incorrect null hypothesis. These sensitivity analyses were performed using
the software G*Power (Faul et al. 2007). Using data from Study 1, Figure 1 plots achieved power against effect size, given α = .05 and our sample size, which is relatively large for a simple between-subject design with only two conditions. Figure 1 shows that the statistical power of our test would have been sufficient (1 − β = 0.95) to find a significant difference if the mean score of the deceived participants had been 0.558 SDs lower than the control group mean in any of the dependent variables we tested. Performing the
same calculation for Study 2 yields a minimal detectable mean difference of
0.643 SDs. However, Study 2 nonetheless has more power in practice, because within-subject differences (based on change scores) are less subject to noise and therefore produce smaller standard errors. The within-subject design of Study 2 thus offsets the fact that indirect exposure is subtler than direct exposure.
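Under a normal approximation, the sensitivity calculation works roughly as follows (a sketch: the group size of 70 is a hypothetical value consistent with the reported 0.558 SDs, not a figure taken from the text, and G*Power's noncentral-t computation differs slightly):

```python
from statistics import NormalDist

def minimal_detectable_d(n_per_group, alpha=0.05, power=0.95):
    """Smallest standardized mean difference (Cohen's d) that a one-tailed
    two-sample test detects at the given power, normal approximation:
    d = (z_{1-alpha} + z_{power}) * sqrt(2 / n)."""
    z = NormalDist()
    return (z.inv_cdf(1 - alpha) + z.inv_cdf(power)) * (2 / n_per_group) ** 0.5

# With roughly 70 participants per condition (hypothetical), the minimal
# detectable effect comes out close to the 0.558 SDs reported for Study 1:
d_min = minimal_detectable_d(70)
```

As the formula makes explicit, the detectable effect shrinks with the square root of the sample size, so doubling the groups would push sensitivity well below 0.5 SDs.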
Because, to our knowledge, our study and the one by Jamison et al.
(2008) are the only two experiments where these hypotheses were tested, no
estimates of effects size are available from the existing literature. However,
the magnitudes of the observed effect sizes (rightmost columns of Tables 1 and 2) are substantially smaller than the minimal detectable effects given by the power sensitivity analysis in Figure 1. Yet, nothing indicates that our samples were
unusual in any way, as the means of our behavioral measures are in line with
those typically observed in the literature (see Camerer 2003). Moreover,
some of our observed effects go in the opposite direction than that predicted
by the anti-deception hypothesis. For example, the variable trust in Study 1
has the largest (behavioral) effect size, but the direction of the (nonsignifi-
cant) effect contradicts the anti-deception hypothesis. Therefore, the lack of
evidence for behavioral effects of deception in our studies is unlikely to be
attributable to Type II error.
Discussion
We reported the results of two new studies designed to investigate both
behavioral and attitudinal effects of direct and indirect exposure to decep-
tion in laboratory experiments. For the beliefs measures, participants who
were directly deceived (by social psychologists in a sociology department)
subsequently believed that deception is used more often by social psycholo-
gists and sociologists than those who were not deceived. As the experiment
was conducted in the social psychology lab housed in a sociology depart-
ment, the experience of deception affected the reputation of both disci-
plines. Again, this finding is not surprising, given that the majority of our
participants had taken part in only one study and that study involved decep-
tion. Indeed, we would have worried if the manipulation did not impact
beliefs. This result is consistent with prior work showing that the experience
of deception increases the expectation that deception may be used in future
experiments (Epley and Huff 1998).
In contrast to the spillover hypothesis, the experience of deception did
not affect the reputation of economists. This result is important because
arguments against the use of deception are often based on the spillover
hypothesis (Ledyard 1995). Even though we did not provide a behavioral
test of the spillover hypothesis, a number of aspects of our findings speak
against spillover effects. First, we used the same survey item to assess the
impact of deception on the reputation of sociologists, social psychologists,
and economists and found clearly different results for the three disciplines.
But despite the impact on the reputations of sociologists and social psychol-
ogists, we observed no significant difference in any behavioral measures.
Neither direct nor indirect exposure to deception significantly altered the
behavior of participants in subsequent experiments (Study 1) or decisions
(Study 2). That is, participants who experienced deception were not less
generous (as measured by giving in the dictator game) than those who did
not experience deception. Nor were they less trusting or trustworthy in the
trust dilemma. Furthermore, neither study showed nonsystematic behavioral
effects of exposure to deception. That is, participants did not show any sign
of erratic or random behavior as hypothesized by the anti-deception argu-
ment (Jamison et al. 2008). Finally, we failed to replicate a finding from a
prior study of deception (Jamison et al. 2008), namely that deceived partici-
pants are more likely to make inconsistent choices in the risk aversion lot-
tery. Indeed, in our study, inconsistent choices were slightly more common
among nondeceived participants. As we did not find any behavioral effect
of deception, it is even less plausible that behavioral effects could ever be
observed by experimental economists, whose reputation was not signifi-
cantly affected.
Nevertheless, future work should provide a behavioral test of the spil-
lover hypothesis. One such test would entail a straightforward extension of
the direct exposure study (Study 1) reported above. In Phase 1, participants
would be deceived (or not) in a social psychology lab. Participants would
then take part in a study conducted in an experimental economics lab at
some point (days or weeks) later. The spillover hypothesis predicts that
those who were deceived in Phase 1 would act differently in Phase 2 than
those who were not deceived. If warranted, additional follow-up studies
could investigate whether and how the ubiquitous practice of universities
having separate physical laboratories for social psychology, sociology, and
economics experiments inhibits spillover. For instance, a future study might
entail running both phases in the same physical laboratory but framing
Phase 1 as a social psychology experiment and Phase 2 as a social psychol-
ogy or economics experiment (depending on condition). Such a design
could offer insight into whether separate research facilities insulate econo-
mists against ‘‘spillover’’ effects from social psychology experiments
employing deception. It could also yield important insights for those univer-
sities where lab facilities are shared by economists, psychologists, and
sociologists. As warranted, additional studies could address the role of other
institutional arrangements. But again, we want to emphasize that neither the
study reported above nor a previous study by economists (Jamison et al.
2008) yielded any effects on behavior in interdependent situations, that is,
the types of situations in which the anti-deception hypothesis would predict
effects. Thus, all currently available evidence suggests that behavioral spil-
lover would be minimal to nonexistent.
It may seem puzzling that experiencing deception influenced partici-
pants’ beliefs, but not their behaviors, even though the procedures and
experimental setting were very similar to those in which deception occurred.
One possible explanation is that, while being deceived can increase partici-
pants’ perceptions that experiments (or experiments in a given discipline)
employ deception, it may have more limited effects on their beliefs about a
particular experiment. For instance, participants may simply ‘‘suspend dis-
belief’’ when they enter a research lab (Mixon 1977). An alternative argu-
ment is that participants in our studies had simply not become suspicious
‘‘enough.’’ Perhaps more spectacular forms of deception are necessary
before suspicion evolves into full distrust and impacts behavior. For
instance, we could have used the exact same procedures administered by the
same research assistant with only a few hours or even a few minutes
between the manipulation of the independent variable (whether participants
are deceived) and the dependent measure. And we might have obtained
some effect. But we are interested in knowing whether deception affects
subsequent behavior in the range of conditions that both proponents and
opponents have in mind when they debate the effects of deception.
As students often participate in multiple experiments during their time at university, another possibility is that effects may arise only
after repeated experiences of deception. Hertwig and Ortmann (2001:397-
98) have argued that the interaction between experimenter and participant
can be modeled as a repeated ‘‘trust dilemma’’ in which the participant ini-
tially believes the experimenter is being honest. They contend that, if a par-
ticipant experiences deception, that participant will never believe any
experimenter in subsequent encounters. But our results indicate that a differ-
ent model would be more appropriate, such as one with incomplete informa-
tion (for a model of a trust dilemma with incomplete information, see Raub
2004). In such a model, the participant believes that the experimenter is probably being honest. Thereafter, each time the participant is deceived, she or he lowers the expected probability that any subsequent experimenter is honest.
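One way to formalize this gradual updating is Beta–Bernoulli belief revision (our illustration; the prior parameters are arbitrary and not taken from Raub 2004):

```python
class TrustBelief:
    """A participant's belief that an experimenter is honest, modeled as a
    Beta(a, b) distribution over the probability of honesty; each experienced
    deception adds a 'dishonest' observation and lowers the expectation."""

    def __init__(self, a=9.0, b=1.0):  # hypothetical prior: expectation 0.9
        self.a, self.b = a, b

    def observe_deception(self):
        self.b += 1.0

    def p_honest(self):
        return self.a / (self.a + self.b)

belief = TrustBelief()
history = [belief.p_honest()]
for _ in range(3):  # three successive experiences of deception
    belief.observe_deception()
    history.append(belief.p_honest())
```

Each deception lowers the expected probability of honesty without ever driving it to zero, in contrast to the one-strike model attributed to Hertwig and Ortmann.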
Acknowledgment
We appreciate helpful comments and suggestions from Ozan Aksoy, Vincent
Buskens, Ashley Harrell, Irene Klugvist, Hanne van der Iest, and three anonymous
reviewers.
Authors’ Notes
Contributions were equal and the order of authorship is alphabetical.
Funding
The author(s) disclosed receipt of the following financial support for the research,
authorship, and/or publication of this article: This research was supported by grants
SES-0551895 and SES-0647169 from the National Science Foundation to the sec-
ond author.
Notes
1. Although it is not completely clear why a norm against the use of deception
emerged in economics, Ariely and Norton (2007) suggest a plausible explana-
tion. They note that because economists typically assume that human behavior
is driven by utility maximization, procedures in experimental economics tend
to emphasize the provision of monetary incentives as well as full (and honest)
information about the costs and benefits associated with alternative lines of
action. Categorically avoiding the use of deception presumably increases the
chances that these conditions are realized.
2. We know of no prior work on the prevalence of deception in sociology experi-
ments. Although a detailed analysis is beyond the scope of the current article,
we conducted a cursory review of articles published in what are often consid-
ered to be the top three mainstream sociology journals (American Journal of
Sociology, American Sociological Review, and Social Forces) and the primary
outlet for research in sociological social psychology (Social Psychology
Quarterly). We limited our search to the past 3 years. Of the studies that
employed laboratory experiments, just under two thirds used some form of
deception.
3. Note, however, that deceived males were more likely to return than nonde-
ceived males.
4. In addition to the manipulation (whether participants were deceived or not),
Phase 1 also manipulated whether participants were assigned to the ‘‘trustor’’
or ‘‘trustee’’ role in the trust dilemma. Because a trustee’s behavioral options
are determined by the trustor’s previous decision, the design relinquished
experimental control to the decisions of other participants. This problem is
compounded by the fact that the trust dilemma was repeated for five rounds.
Together, these design features introduce substantial differences in payoffs,
both within and between dyads, and therefore constitute confounds.
5. We included the Big 5 personality index primarily as a filler task, so that parti-
cipants (particularly those in the nondeception condition) would not become
suspicious about the brevity of the study. Given that responses on the personal-
ity index are not relevant for current purposes, we do not discuss them further.
6. For both experiments, we avoided use of loaded terms such as ‘‘dictator,’’
‘‘generosity,’’ ‘‘trust,’’ ‘‘trustworthiness,’’ and so on. We use the terms here
for simplicity.
7. In the safer lottery the two alternative amounts that the participants can win are
similar to each other. In the riskier lottery, one prize is substantially higher than
the other. For example, in the first pair a participant chooses between a (safe)
lottery that pays $11 with 10 percent chance and $8.80 with 90 percent chance,
and a (riskier) lottery that pays $21.20 with 10 percent chance and $0.55 with
90 percent chance. As the participant moves from the first to the tenth choice,
the amounts remain constant, while the relative probabilities change, so that
higher amounts are increasingly likely in later pairs. In the final (tenth) pair,
the higher amount in both lotteries is paid with certainty.
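Given these figures, the switch point predicted for a risk-neutral participant can be computed directly (a sketch based on the numbers in this note):

```python
SAFE = (11.00, 8.80)    # high and low prize in the safer lottery
RISKY = (21.20, 0.55)   # high and low prize in the riskier lottery

def expected_value(prizes, p_high):
    """Expected value of a two-prize lottery given the high-prize probability."""
    high, low = prizes
    return p_high * high + (1 - p_high) * low

# In pair k (k = 1..10) the higher prize is paid with probability k/10.
# A risk-neutral participant switches at the first pair where the riskier
# lottery has the higher expected value:
switch_pair = next(k for k in range(1, 11)
                   if expected_value(RISKY, k / 10) > expected_value(SAFE, k / 10))
```

Under risk neutrality the switch occurs at the fifth pair (four safe choices), so scores above four indicate risk aversion.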
8. A total of 42 participants (30 percent) had previously taken part in experiments
(28 percent had taken part in experiments in sociology or social psychology
and 2 percent in economics). However, only 13 participants (9 percent) had
taken part in more than two experiments. Experienced participants were
equally distributed between the two conditions. Excluding experienced
participants—whether all of them or just the ones who participated in more
than two experiments—yielded substantively identical results. Moreover, tests
run separately on experienced and inexperienced participants yielded remark-
ably consistent results. (These analyses are available upon request). Thus, the
analyses discussed below are performed on the full sample.
9. We checked for suspicion using a funneled debriefing procedure, asking partici-
pants whether they found anything ‘‘odd’’ or ‘‘hard to believe,’’ and whether
they thought there ‘‘may have been more to the experiment than meets the
eye’’ (see Aronson et al. 1990:316-17). Consistent with our beliefs measures
(reported in detail below), participants in the deception condition were more
likely to mention the possibility that others may have been simulated. As this
study is aimed at addressing the behavioral effects of deception and suspicion,
our analyses include all suspicious participants. Importantly, however, none of
the results reported below depend on whether or not these participants are
included. Analyses available upon request show that participants who expressed
suspicions did not differ in any other way from those who did not (or those in
the control condition).
10. We chose to use a description of the Aronson–Mills experiment because it is
one of the most famous classic studies employing deception but is not as
widely known to the general public as, for example, the Milgram obedience
experiments. As explained below, our deception manipulation involves includ-
ing or omitting details in a summary of the Aronson–Mills experiment. Using
the Milgram experiment would likely reduce differences between conditions.
References
Ariely, Dan and Michael I. Norton. 2007. ‘‘Psychology and Experimental
Economics: A Gap in Abstraction.’’ Current Directions in Psychological Science
16:336-39.
Aronson, Elliot and Judson Mills. 1959. ‘‘The Effects of Severity of Initiation on
Liking for a Group.'' Journal of Abnormal and Social Psychology 59:177-81.
Aronson, Elliot, Phoebe C. Ellsworth, Merril J. Carlsmith, and Marti H. Gonzales.
1990. Methods of Research in Social Psychology. New York: McGraw-Hill.
Baron, Jonathan. 2001. ‘‘Purposes and Methods.’’ Behavioral and Brain Sciences
24:403.
Ortmann, Andreas and Ralph Hertwig. 1997. ‘‘Is Deception Acceptable?’’ American
Psychologist 52:746-47.
Pardo, Rafael and Félix Calvo. 2002. ‘‘Attitudes Toward Science Among the
European Public: A Methodological Analysis.’’ Public Understanding of Science
11:155-95.
Raub, Werner. 2004. ‘‘Hostage Posting as a Mechanism of Trust. Binding,
Compensating and Signaling.’’ Rationality and Society 16:319-65.
Rauhut, Heiko and Fabian Winter. 2010. ‘‘A Sociological Perspective on Measuring
Norms Using Strategy Method Experiments.’’ Social Science Research 39:1181-94.
Ridgeway, Cecilia L., Elizabeth Heger Boyle, Kathy J. Kuipers, and Dawn T.
Robinson. 1998. ‘‘How Do Status Beliefs Develop? The Role of Resources and
Interactional Experience.’’ American Sociological Review 63:331-50.
Roth, Alvin E. 2001. ‘‘Form and Function in Experimental Design.’’ Behavioral and
Brain Sciences 24:427-28.
Rothschild, Kurt W. 1993. Ethics and Economic Theory. Aldershot, UK: Edward
Elgar.
Sell, Jane. 1997. ‘‘Gender, Strategies and Contributions to Public Goods.’’ Social
Psychology Quarterly 60:252-65.
Sell, Jane. 2008. ‘‘Introduction to Deception Debate.’’ Social Psychology Quarterly
71:213-14.
Silverman, Irwin, Arthur D. Shulman, and David L. Wiesenthal. 1970. ‘‘Effects of
Deceiving and Debriefing Experimental Subjects on Performance in Later
Experiments.’’ Journal of Personality and Social Psychology 14:203-12.
Smith, Stephen S. and Deborah Richardson. 1983. ‘‘Amelioration of Deception and
Harm in Psychological Research: The Important Role of Debriefing.’’ Journal of
Personality and Social Psychology 44:1075-82.
Stang, David J. 1976. ‘‘Ineffective Deception in Conformity Research: Some Causes
and Consequences.’’ European Journal of Social Psychology 6:353-67.
Weimann, Joachim. 1994. ‘‘Individual Behavior in a Free Riding Experiment.’’
Journal of Public Economics 54:185-200.
Willer, Robb. 2009. ‘‘Groups Reward Individual Sacrifice: The Status Solution to
the Collective Action Problem.’’ American Sociological Review 74:23-43.
Willer, Robb, Ko Kuwabara, and Michael W. Macy. 2009. ‘‘The False Enforcement
of Unpopular Norms.’’ American Journal of Sociology 115:451-90.
Willis, Richard H. and Yolanda A. Willis. 1970. ‘‘Role Playing versus Deception:
An Experimental Comparison.’’ Journal of Personality and Social Psychology
16:472-77.
Winter, Fabian, Heiko Rauhut, and Dirk Helbing. 2011. ‘‘How Norms Can Generate
Conflict: An Experiment on the Failure of Cooperative Micro-Motives on the
Macro-Level.'' Social Forces 90:919-46.
Yamagishi, Toshio. 1995. ‘‘Social Dilemmas.’’ Pp. 311-35 in Sociological
Perspectives on Social Psychology, edited by Karen S. Cook, Gary Alan Fine,
and James S. House. Boston, MA: Allyn and Bacon.
Yamagishi, Toshio, Karen S. Cook, and Motoki Watabe. 1998. ‘‘Uncertainty, Trust,
and Commitment Formation in the United States and Japan.’’ American Journal
of Sociology 104:165-94.
Yamagishi, Toshio, Yutaka Horita, Haruto Takagishi, Mizuho Shinada, Shigehito
Tanida, and Karen S. Cook. 2009. ‘‘The Private Rejection of Unfair Offers and
Emotional Commitment.’’ Proceedings of the National Academy of Sciences
106:11520-523.
Zelmer, Jennifer. 2003. ‘‘Linear Public Good Experiments: A Meta-Analysis.’’
Experimental Economics 6:299-310.
Bios
Davide Barrera is an assistant professor at the University of Turin (Italy). His
research interests include group processes, mechanisms of cooperation in small
groups, experimental methods, and social networks. Currently, he is working on two
main projects: one on the effects of sanctioning rules in public good games (with
Nynke van Miltenburg, Vincent Buskens, and Werner Raub), and the other on for-
mation and consequences of negative relationships in small groups.
Brent Simpson is Professor of Sociology at the University of South Carolina. His
current projects include studies of altruism homophily in social networks, successful
collective action in large groups, and how interpersonal moral judgments influence
cooperation and social order.