
An Offer You Can't Refuse?

Incentives Change How We Think


JOB MARKET PAPER

For the most recent version, see http://web.stanford.edu/sambuehl/
Sandro Ambuehl
Stanford University
November 28, 2015
Abstract
Around the world there are laws that severely restrict incentives for many transactions, such
as living kidney donation, even though altruistic participation is applauded. Proponents of such
legislation fear that undue inducements would be coercive; opponents maintain that it merely
prevents mutually beneficial transactions. Despite the substantial economic consequences of such
laws, empirical evidence on the proponents' argument is scarce. I present a simple model of costly
information acquisition in which incentives skew information demand and thus expectations about
the consequences of participation. In a laboratory experiment, I test whether monetary incentives
can alter subjects' expectations about a highly visceral aversive experience (eating whole insects).
Indeed, higher incentives make subjects more willing to participate in this experience at any
price. A second experiment explicitly shows in a more stylized setting that incentives cause
subjects to perceive the same information differently. They make subjects systematically more
optimistic about the consequences of the transaction in a way that is inconsistent with Bayesian
rationality. Broadly, I show that important concerns by proponents of the current legislation can
be understood using the toolkit of economics, and thus can be included in cost-benefit analysis.
My work helps bridge a gap between economists on the one hand, and policy makers and ethicists
on the other.

Keywords: Incentives, Repugnant Transactions, Information Acquisition, Inattention, Experiment


JEL Codes: D03, D04, D84
Department of Economics, 579 Serra Mall, Stanford, CA 94305, sambuehl@stanford.edu. I am deeply indebted to
my advisors Muriel Niederle, B. Douglas Bernheim, and Alvin E. Roth. I am extremely grateful to P.J. Healy and to Yan
Chen for letting me use their experimental economics laboratories. This work has been approved by the Stanford IRB
in protocols 29615 (online experiments) and 34001 (laboratory experiments). Funding from the Stanford Department
of Economics is gratefully acknowledged.

[B]enefits ... raise ... the danger that ... prospective participants enroll ... when it might be against
their better judgment and when otherwise they would not do so.
- National Bioethics Advisory Commission (2001)
Russ Roberts: It's an interesting issue ... that somehow, when money gets involved, the whole
transaction becomes tainted. Richard Epstein: I agree ... a lot of this misapprehension comes from
the fact that people don't understand the way markets work.
- EconTalk with guest Richard Epstein (2006)

1 Introduction

Around the world, there are laws that severely limit material incentives for many transactions, most
notably organ donation, surrogate motherhood, human egg donation, and participation in medical
trials. The goal is not to discourage these activities per se; on the contrary, altruistic participation is
often applauded. These laws have led to intense public discourse (Open Letter To President Obama
(2014), Vatican Radio (2014)). Opponents maintain that they prevent voluntary contracts and thus
impede efficiency (Emanuel (2005), Becker and Elias (2007), Satel and Cronin (2015)). Indeed, the
economic consequences are substantial. For instance, Held, McCormick, Ojo and Roberts (2015)
estimate that the ban on incentives for kidney donation is responsible for the premature death of 5,000
to 10,000 Americans on the waiting list each year, and place the net welfare gains from introducing
a regulated market for kidneys at $46 billion per year. By contrast, proponents and makers of such
laws (including prominent ethicists) worry that undue inducements may be coercive. They argue that
incentives can harm people by distorting decision making (National Bioethics Advisory Commission
(2001), Kanbur (2004), Satz (2010), Grant (2011), Sandel (2012)). Opponents have sometimes been
puzzled by these arguments, and dismissive of these concerns (see the quotes above). In addition to
a lack of a common language, there is a dearth of empirical evidence on the proponents' claim.
In this paper I study the proponents' concern that incentives interfere with sound decision making.
I develop a formal model based on costly information acquisition, and provide empirical evidence
that incentives can make individuals systematically more optimistic about the consequences of a
transaction. I do not (and cannot) take a stance on whether incentives for particular transactions
should be limited. Instead, I aim to bridge a gap between disciplines by using standard economic
methodology to address influential existing concerns about the effects of incentives that are widely
held by non-economists.1
1 I focus on a single mechanism. I do not attempt to exhaustively address all arguments that have been made against
certain markets. I also do not attempt to explain why limits on incentives are present in some markets but not in

I start by recognizing that many of the transactions for which incentives are limited have uncertain
consequences. For instance, it is hard to know what life with a single kidney is like. Previous work
has (implicitly) assumed that subjects' beliefs about these consequences are independent of incentives
(Becker and Elias, 2007). In this case, an individual participates in a transaction if the incentive amount is
sufficient compensation for the expected disutility of participation, and does not participate otherwise.
My work, by contrast, considers the case in which acquiring information about the consequences
of a transaction costs time and effort. In this case, incentives have a second effect. They change
individuals' expectations of the disutility from participation. Specifically, higher incentives decrease
the costs of ex post mistaken participation (a false positive error), as they partially insure against
adverse outcomes. Simultaneously, they increase the opportunity costs of abstention, and thus the cost
of false negative errors. Hence, they affect optimal information acquisition and attention allocation,
and thus expectations. For instance, a person contemplating kidney donation in exchange for an
increased payment may inform herself about the transaction by talking to more previous donors that
she expects are happy with the outcome, and to fewer previous donors that she expects are unhappy.
Does this mean that incentives can harm people? Clearly, regarding the ex ante welfare of Bayes-rational decision makers, the answer is no.2 Even in this case, however, incentives alter the distribution
of ex post welfare. This is relevant for two reasons. First, both citizens and commentators often care
about ex post outcomes regardless of the ex ante choices that lead to them (Satz (2010), Kanbur (2004),
Andreoni, Aydin, Barton, Bernheim and Naecker (2015)).3 This is evidenced in policies as diverse as
veteran service, personal bankruptcy laws, and emergency medical services, which mitigate adverse
outcomes even if they result from a voluntary decision. Second, a policy that causes widespread ex post
regret may be politically unsustainable whether or not it is ex ante beneficial.4 Regarding
anonymous kidney donation, for instance, Massey et al. (2010) find that 96% of donors report they
would make the same decision again. According to the mechanism I document, introducing incentives
may decrease this percentage (under a condition on the cost-of-information function).
If decision makers are not Bayesian, incentives may have a third effect. They may make individuals
systematically more optimistic, and thus possibly cause them to participate when a decision maker
with unbiased beliefs would have refused the transaction. Hence, increased incentives can make such
agents ex ante worse off.
In order to empirically study these mechanisms, I conduct two experiments. In the first experiment,
subjects trade off a highly visceral aversive experience against money. It allows me to study whether
higher incentives can indeed make people more optimistic about how unpleasant they will find a
others, even if they share some characteristics. My evidence derives from laboratory experiments in two highly specific
decision problems. By theoretically modeling the mechanism I document, I argue that it is present more generally. The
experiments are not informative, however, about its magnitude in different settings.
2 I use the term Bayes-rational to refer to a subjective expected utility maximizer who updates beliefs according to
Bayes' law.
3 Kanbur (2004) is explicit: "Extreme negative situations for individuals that leave them destitute attract our sympathy, no matter that the actions which led to them were freely undertaken."
4 See the movie Eggsploitation (2011) for a recent push along these lines.

subjective experience in an arguably naturalistic setting. The second experiment studies behavior in
a more stylized setting. It allows me to control subjects' objective function and prior beliefs and hence
allows for an explicit test of Bayesian rationality.
Broadly, the experiments show that higher incentives make subjects systematically more optimistic about the consequences of the transaction than lower incentives, in a way that is inconsistent
with Bayesian rationality. They do so by affecting subjects' information demand and attention
allocation in the direction predicted by the model.
In the first experiment I use cash to induce subjects to eat whole insects, including silkworm pupae,
mealworms, and various species of crickets. This transaction is both aversive and perfectly safe, and
thus feasible in the laboratory. It is also unfamiliar, so that expectations are potentially malleable.
The experiment follows a two-by-two design. One dimension varies the incentive amount offered, the
other varies access to information. Subjects in the treatment group choose to watch one of two videos
about insect-eating, one highlighting the upsides, the other emphasizing the downsides. Subjects in
the control group cannot access any such information. I observe the fraction of subjects who are
willing to eat the insect for the incentive offered. The experiment yields three main results.
First, incentives increase participation in either group, but significantly more so in the treatment
group. Hence, as the model predicts, information gathering amplifies the effect of incentives on uptake.
A corroborating finding is that subjects offered the high incentive significantly more often chose the
video emphasizing the upsides of eating insects than those offered the low incentive.
Second, when offered the high incentive, subjects in the video treatment persuade themselves that
eating insects is less aversive than they would otherwise have thought. To show this, I elicit the
least amount of money for which a subject is willing to eat insects (her willingness-to-accept, WTA).
Incentives decrease WTA significantly more for subjects in the treatment condition than for those in
the control condition, and thus make them more willing to eat insects at any price when the video
option is available.
Third, subjects do not predict the self-persuasion effect in others, in spite of material incentives
for accurate predictions. Hence, they seem unaware of it. This finding calls into question whether the
effects of incentives result from rational information processing.5
The second experiment allows me to explicitly test for Bayesian rationality. It also replicates the
main qualitative findings of the first experiment in a different environment with a different subject
pool, and thus demonstrates their robustness. In this experiment, I incentivize subjects to risk the
loss of a fixed amount of money. Subjects know the ex ante probability of a loss, and thus have correct
prior beliefs. Before deciding whether or not to take that gamble, they scrutinize a picture. That
picture perfectly reveals whether taking the gamble and receiving the incentive amount will lead to a
5 Lack of awareness does not disprove the rationality hypothesis, however, as Friedman and Savage's (1948) famous
example illustrates: An expert billiard player may be unaware of the laws of Newtonian mechanics, but he still strikes
balls as if he were.

net gain or a net loss, but only at considerable attentional costs. I measure (a monotonic function of)
subjects' posterior beliefs about the winning probability.
I find that subjects' behavior is inconsistent with Bayesian information processing. An increase in
incentives leads to a first-order stochastically dominant shift in subjects' posterior beliefs, making them
more optimistic. This violates the law of iterated expectations, the hallmark of Bayesian rationality.
The experiment also allows me to study the effects of incentives separately on false positive and
false negative errors. This is relevant since proponents of limits on incentives are primarily concerned
with mistaken participation (false positives).
Specifically, I vary whether subjects can skew their attention allocation in response to incentives,
by telling subjects the incentive they will be given for taking the gamble either before or only after
they examine the picture. I find that subjects in the high incentive treatment who can skew their
attention allocation in response to incentives have a substantially higher false positive rate than those
who cannot, but not a significantly different false negative rate. Hence, knowledge of the incentive
amount must have caused them to search for information that seemingly justified taking the bet, even
when they ultimately lost from taking the gamble.
Overall, my work shows that some of the central concerns voiced by proponents of the current
legislation can be understood and measured using the toolkit of economics, and it presents directional
evidence for the behavioral hypotheses on which these concerns are based. Incentives cause subjects
to seek out different information about the consequences of a transaction, and to perceive the same
information differently. They can make subjects systematically more optimistic in a way that is not
consistent with Bayesian rationality. To reiterate, the paper does not take a stance on normative
questions, but instead aims to foster a common understanding between disciplines to advance debates
about the appropriateness of material incentives.
My research is related to a small literature on repugnant transactions which examines when unaffected third parties approve of selected transactions (Kahneman, Knetsch and Thaler (1986), Roth
(2007), Leider and Roth (2010), Niederle and Roth (2014), Ambuehl, Niederle and Roth (2015), Elias,
Lacetera and Macis (2015a), Elias et al. (2015b)). It differs from that literature as it studies the
effect of incentives on those whom they target. This paper focuses on the internalities caused by
incentives, and thus also differs from a burgeoning literature on morals and markets. That literature
studies how market institutions cause externalities, and how they affect participants' willingness to
impose them (Basu (2003), Basu (2007), Falk and Szech (2013), Malmendier and Schmidt (2014),
Bartling, Weber and Yao (2015)). My research examines how incentives shape individuals' acquisition
and interpretation of additional information about the transaction rather than the inferences individuals draw from the incentive amount per se. It thus complements the study by Cryder, London,
Volpp and Loewenstein (2010) which shows that prospective medical trial participants interpret the
incentive offered as compensation for greater risk, even though institutional guidelines explicitly dis-

courage such compensation.6 It similarly complements the literature on anchoring which shows that
even transparently arbitrary prices can per se influence individuals' valuations (Ariely, Loewenstein
and Prelec (2003), Alevy, Landry and List (2010), Beggs and Graddy (2009), Bergman, Ellingsen,
Johannesson and Svensson (2010), Maniadis, Tufano and List (2014), Simonson and Drolet (2004),
Sunstein, Kahneman, Schkade and Ritov (2002), Fudenberg, Levine and Maniadis (2012)). This study
also relates to work by Babcock and Loewenstein (1997) and Gneezy, Saccardo, Serra-Garcia and van
Veldhuizen (2015) who find that strategic reasons can bias subjects' interpretation of information. In
the current paper, by contrast, subjects' choices affect only their own well-being.
More generally, my research relates to three broad literatures. Both its empirical and theoretical
results derive from mechanisms of information demand and attention allocation that are often studied
under the label of rational inattention (Sims (2003), Sims (2006), Woodford (2012), Martin (2012),
Caplin and Dean (2013a), Caplin and Martin (2014), Woodford (2014), Yang (2014), Matejka and
McKay (2015), Caplin and Dean (2015); see Caplin (forthcoming) for a review). As it studies how
incentives affect expectations, it relates to a vast literature in psychology and economics on motivated
reasoning (Lord, Ross and Lepper (1979), Kunda (1990), Rabin and Schrag (1999), Caplin and Leahy
(2001), Benabou and Tirole (2002), Brunnermeier and Parker (2005), Massey and Wu (2005), Koszegi
(2006), Holt and Smith (2009), Eil and Rao (2011), Moebius, Niederle, Niehaus and Rosenblat (2013),
Gottlieb (2014), DiTella, Perez-Truglia, Babino and Sigman (2015), Huck, Szech and Wenner (2015)).
Finally, by studying how incentives affect the quality of decision making, it contributes to a burgeoning
literature on behavioral welfare economics (Koszegi and Rabin (2008), Bernheim and Rangel (2009),
Ambuehl, Bernheim and Lusardi (2014), Bernheim, Fradkin and Popov (2015); see Bernheim (2009)
for a review).
The remainder of this paper proceeds as follows. Section 2 provides a brief overview of the legislation that caps incentives. Section 3 presents the conceptual framework. The two experiments are in
Sections 4 and 5, respectively. Section 6 discusses policy implications and applications of my findings
to other domains, and Section 7 concludes.

2 Capped Incentives

Here, I briefly review legislation and guidelines that limit or prohibit incentive payments. For all of
the laws reviewed, protecting the person whom incentives would target is an important motivation.
Limits on incentives are not intended to discourage these activities per se. On the contrary, altruistic
participation is often applauded, especially with living kidney donation and participation in medical
trials.
Human research participants. Preventing coercion is a central objective of the pertinent legislation (Nuremberg Code, 1949). Incentives are explicitly considered a form of coercion since the
6 That paper thus shows that low incentives can make subjects overly optimistic.

Belmont Report (1978), the basis of many current laws on medical research ethics. The report
defines that "undue influence ... occurs through an offer of an excessive ... reward or other
overture in order to obtain compliance." The concern is that an offer may be so excessively
desirable that it compromises judgment (Emanuel, 2004). By contrast, the literature applauds
an altruistic desire to contribute to the progress of research (Macklin, 1981).
Human egg donation. The majority of countries surveyed by the Council of Europe in 1998
prevented egg donations for commercial gain (Council of Europe, 1998).7 Donors undergo substantial hormonal treatment, and protecting them is a critical aim. The U.S. permits commercial human egg donations, but the Ethics Committee of the American Society for Reproductive
Medicine (2007) recommends that payments to women providing oocytes "should be fair and
not so substantial that they ... lead donors to discount risks." The committee concludes that
"sums of $5,000 or more require justification and sums above $10,000 are not appropriate."
Surrogate motherhood. Many U.S. states strictly limit material benefits for surrogate mothers.
Nevada, New Hampshire and Washington, for instance, prohibit payments to surrogate mothers
except for reimbursement of certain expenses that are explicitly listed in the states' statutes.8
Protection of the surrogate mother is an important motive. The prospective surrogate mother's
ability to predict her preferences plays a central role. This is particularly salient for the legislation
in Russia which permits commercial surrogacy, but only for women who already have a child
of their own. Arguably, they are better able to assess the costs and benefits of the transaction
than other women (Svitnev, 2010).
Living kidney donation. Paid living kidney donation is outlawed in every country of the world,
except for the Islamic Republic of Iran (Rosenberg, 2015b). A frequent argument holds that
incentives would coerce some individuals to sell their kidneys (Choi, Gulati and Posner, 2014)
and that they would distort prospective participants' assessment of the costs and benefits of the
transaction, possibly to their detriment (Satz (2010), Grant (2011), Kanbur (2004)).
Concerns about incentives are particularly prevalent for transactions involving bodily products
and parts. But they are neither limited to that domain, nor do they fully encompass it. On the one
hand, a variety of laws limit incentives in domains that do not concern the body. For instance, the
U.S. outlaws selling oneself into voluntary slavery (42 U.S. Code § 1994), and excessive payments
are also prohibited for participants in non-medical experiments. Inducements are outlawed in student
athlete recruiting on the grounds that they would constitute undue influence (National Collegiate Athletic
Association (2015), Alabama HSAA (2015), Indiana HSAA (2015), Kentucky HSAA (2014), Michigan
7 Ten of these countries had regulations allowing some reimbursement of expenses. Others outlawed human ovum donations
entirely (Bundesrepublik Deutschland, 1990).
8 Legislation varies widely within the U.S. On one extreme, California and Illinois support commercial surrogacy. On
the other extreme, Michigan declares any participation in a surrogacy agreement a gross misdemeanor punishable with
jail.

HSAA (2015)). And the World Council of Churches compels its members not to use material incentives
to induce individuals to change their confession, arguing that incentives would be coercive and impair
religious freedom (Clark, 1996). On the other hand, there are transactions with bodily products
for which no comparable laws apply. Many U.S. states, for instance, explicitly exclude trade in
human hair from statutory regulations on trade in bodily products and parts.9 Relatedly, no
concerns about paying donors of human feces have surfaced, although donors can earn up to $13,000
a year (Feltman, 2015).10

3 Conceptual framework

In this section, I present a purposefully simple model to understand the effects of incentives when
the consequences of participation in a transaction are uncertain, and when information acquisition is
costly.
The aim of this section is twofold. First, I show how incentives affect information acquisition and
how they change the nature of erroneous decisions (relative to perfect foresight) for rational decision
makers. The model yields comparative statics which, at first sight, might appear like an indication of
non-rational behavior. For instance, the model predicts that higher incentives may increase rational
subjects' demand for information they expect to encourage participation.11
Second, I explicitly highlight which behaviors are consistent with Bayesian rationality and which
are not; and I derive a criterion to empirically distinguish between them. The distinction is important.
An increase in incentives affects the ex post distribution of welfare in either case, but it can only
possibly ex ante hurt decision makers who deviate from Bayesian rationality.
I first introduce the setup and then discuss its interpretation.

3.1 Setup

An agent decides whether or not to participate in a transaction in exchange for a material incentive
m. The agent is uncertain about the (utility) consequences of participation, which depend on an
unknown state of the world s ∈ {G, B}. The state is good (s = G) with prior probability α. If so, the
agent obtains utility uG from participation such that net utility is positive, uG + m > 0. Otherwise,
9 For instance Connecticut, the District of Columbia, Illinois, Michigan, and Texas. Trade in human hair is a multimillion dollar industry (Khaleeli, 2012).
10 There are other transactions that have raised concerns about the effects of incentives on the provider, but they differ
to the extent that voluntary participation would not necessarily be applauded. An important example is prostitution.
While some commentators are concerned with the well-being of sex workers, the idea of a woman sleeping with a large
number of anonymous men has historically met disapproval even if no material incentives are involved. Less common
transactions include forehead advertising, in which an individual is paid a significant amount of money to have an
advertisement tattooed on their forehead or other body part, and virginity auctions (Sandel, 2012). Incentives for these
latter transactions are not regulated, possibly because of their infrequent occurrence.
11 There are other domains in which behavior at first sight appears non-rational, even though it is consistent with
Bayesian updating. An example in which 60% of all drivers rationally consider themselves above-average drivers is
in Benoît and Dubra (2011).

the state is bad (s = B). In that case, participation leads to utility uB such that net utility is negative,
uB + m < 0.12 If the agent does not participate, he receives utility 0. The agent observes the state
only after he has decided whether or not to participate.
Before the agent decides whether or not to participate, he can acquire noisy information about the
state at a cost. Formally, the agent selects an information structure I from a menu I. An information
structure I is a pair of probability distributions (σG^I, σB^I) over some set SI of signal realizations. If the
agent chooses information structure I and the state is good, then the signal is distributed according to
σG^I; if the state is bad, it is distributed according to σB^I. From observing a signal realization, the agent
can infer a posterior probability about the state by applying Bayes' rule. He then decides whether
or not to participate in the transaction. Because the agent can only decide between accepting and
rejecting the offer, I assume that all information structures the agent has access to have two possible
signal realizations, SI = {accept, reject} for all I ∈ I. For an optimizing agent, this assumption
is without loss, as long as strictly more informative information structures are strictly more costly
(Matejka and McKay, 2015).13
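Concretely, a binary information structure maps into posterior beliefs via Bayes' rule. The following sketch is my own illustration, not code from the paper; the names sigma_g, sigma_b and the example values are assumptions:

```python
def posteriors(alpha, sigma_g, sigma_b):
    # Posterior Pr(s = G) after each signal of a binary information structure.
    # sigma_g = Pr(accept-signal | s = G), sigma_b = Pr(accept-signal | s = B),
    # alpha = prior Pr(s = G). Assumes both signals occur with positive probability.
    p_accept = alpha * sigma_g + (1 - alpha) * sigma_b
    post_accept = alpha * sigma_g / p_accept
    post_reject = alpha * (1 - sigma_g) / (1 - p_accept)
    return post_accept, post_reject
```

Averaging the two posteriors by the signal probabilities recovers the prior; this martingale property (the law of iterated expectations) is the Bayesian benchmark against which the second experiment tests behavior.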
Hence, the agent can be modeled as choosing a probability pG of agreeing to participate in the
transaction if the state of the world is good, and a probability pB of participating if it is bad, with
pG ≥ pB.14 pB is the probability of false positives (the agent participates even though under perfect
foresight he would have abstained), and (1 − pG) is the probability of false negatives (the agent
abstains, even though under perfect foresight he would have participated). To illustrate, consider an
agent who chooses choice probabilities (pG, pB) = (0.9, 0.1). This agent has better information about
the state of the world than an agent who chooses (pG, pB) = (0.8, 0.2); both his false positive and false
negative probabilities are lower. An agent who chooses (pG, pB) = (1, 0) is perfectly informed about
the state.
Information acquisition is costly. Specifically, the cost of the information associated with a pair
of state-contingent choice probabilities (pG, pB) is given by the convex, real-valued, differentiable
function κ·c(pG, pB), where κ > 0 is a constant. It is increasing in the first argument, and decreasing
in the second. These conditions encompass Shannon mutual information costs, a standard assumption
in the literature on rational inattention (Sims (2003), Sims (2006), Caplin and Dean (2013b), Matejka
and McKay (2015)).15
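The Shannon mutual information cost can be computed directly from (pG, pB) and the prior. This is a minimal sketch of my own, with the prior denoted alpha; the function names and example values are assumptions, not the paper's notation:

```python
import math

def h(x):
    # binary entropy in nats; h(0) = h(1) = 0 by convention
    if x <= 0.0 or x >= 1.0:
        return 0.0
    return -x * math.log(x) - (1 - x) * math.log(1 - x)

def shannon_cost(pG, pB, alpha):
    # ex ante probability of receiving the "accept" signal
    p_bar = alpha * pG + (1 - alpha) * pB
    # posterior Pr(s = G) after an "accept" and after a "reject" signal
    post_a = alpha * pG / p_bar if 0 < p_bar else alpha
    post_r = alpha * (1 - pG) / (1 - p_bar) if p_bar < 1 else alpha
    # mutual information: prior entropy minus expected posterior entropy
    return h(alpha) - (p_bar * h(post_a) + (1 - p_bar) * h(post_r))
```

Perfect information (pG, pB) = (1, 0) costs h(α); an uninformative rule with pG = pB costs zero; and the more informative pair (0.9, 0.1) costs more than (0.8, 0.2), matching the monotonicity assumed above.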
12 Throughout I assume quasilinear preferences. Appendix E considers the more general case in which the state s can
have any distribution on the real line, and ex-post utility is given by u(s) + m.
13 Here, more informative refers to the Blackwell (1953) ordering. To see the argument, suppose that the agent
chooses action a at two distinct signal realizations. We can then garble this information structure in a way
that prevents the agent from distinguishing these two signal realizations. This makes the information structure strictly
less informative, and thus strictly less costly. Hence, the initial information structure cannot have been cost-minimizing.
14 A pair (pG, pB) with pG < pB cannot be optimal. The agent first observes a signal, and then chooses which action
to take. If an information structure allows the agent to implement (pG, pB) = (x, y) with x < y, he can costlessly
improve his expected utility by instead implementing (pG, pB) = (y, x). To do so, he simply interprets each signal
realization in the opposite way. Formally, this is the no improving action switches condition of Caplin and Dean (2015).
15 In this setting, the Shannon mutual information cost function takes the following form. For state-dependent
participation probabilities pG, pB, let p̄ = αpG + (1 − α)pB denote the ex ante participation probability, and let
γG = αpG/p̄ and γB = α(1 − pG)/(1 − p̄) denote
the agent's posterior beliefs about the event {s = G} if he has observed a signal that makes him
3.2 Discussion

Interpretation of information. An individual deciding whether to participate in a transaction
may be unsure about the utility from participation. Thus, there are two aspects of information
acquisition. First, an individual can acquire objective information, for instance on the medical risks
associated with participation. Second, he can also acquire information on how participation affects
his subjective well-being, both directly, and via the implications of medical consequences. He may
consider, for instance, how he will feel after having saved someones life by donating a kidney, or how
a particular medical consequence will restrain the activities he may want to pursue.
How to choose kind and amount of information in practice. There are many ways to select
between different kinds of information in practice. An individual contemplating whether to donate
a kidney, for instance, can select between different amounts of information by deciding how many
previous donors to consult, and she may select between different kinds of information by focusing
her efforts on previous donors whom she believes are happy with their decision, or on those who she
thinks are not.
More formally, we can imagine a decision maker who gathers information sequentially, and has
the following decision rule. He decides to participate as soon as his posterior that the state is good
is sufficiently high, and he decides to abstain as soon as the posterior is sufficiently low. Otherwise,
he continues searching for information. By choosing these bounds, the agent implicitly chooses the
state-contingent participation probabilities pG and pB . (This corresponds to a Wald (1947) sequential
probability ratio test, or to its continuous cousin, the drift-diffusion model (Ratcliff (1978), see Fehr
and Rangel (2011) and Bogacz, Brown, Moehlis, Holmes and Cohen (2006) for reviews).)
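The sequential rule just described can be simulated directly: draw noisy binary signals, update the posterior by Bayes' rule, and stop at an upper or lower belief bound. The sketch below is my own illustration; the signal accuracy q and the bound values are arbitrary choices, not values from the paper. It shows how the bounds implicitly pin down (pG, pB):

```python
import random

def sprt(state_good, q=0.7, prior=0.5, hi=0.95, lo=0.05, rng=random):
    # Sample binary signals until the posterior Pr(s = G) exits (lo, hi).
    # Each signal matches the true state with probability q.
    belief = prior
    while lo < belief < hi:
        sig_good = rng.random() < (q if state_good else 1 - q)
        like_g = q if sig_good else 1 - q      # Pr(signal | s = G)
        like_b = 1 - q if sig_good else q      # Pr(signal | s = B)
        belief = belief * like_g / (belief * like_g + (1 - belief) * like_b)
    return belief >= hi  # True means "participate"

def choice_probs(hi, lo, n=20000, seed=0):
    # Monte Carlo estimates of pG = Pr(accept | G) and pB = Pr(accept | B)
    rng = random.Random(seed)
    pG = sum(sprt(True, hi=hi, lo=lo, rng=rng) for _ in range(n)) / n
    pB = sum(sprt(False, hi=hi, lo=lo, rng=rng) for _ in range(n)) / n
    return pG, pB
```

With a symmetric band (hi, lo) = (0.95, 0.05) the implied error rates are small; lowering the acceptance bound to 0.80 leaves the signal technology unchanged but raises the simulated false positive rate pB substantially.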

3.3 Analysis

3.3.1 Bayesian Decision Makers

If the decision maker selects state-dependent choice probabilities (pG, pB), he obtains the upside payoff
(uG + m) > 0 with probability αpG, and the downside payoff (uB + m) < 0 with probability (1 − α)pB.
With the remaining probability he does not participate in the transaction and obtains 0. Hence, his ex
ante expected utility, excluding costs of information, is U(pG, pB; m) = αpG(uG + m) + (1 − α)pB(uB +
m). The decision maker thus chooses the pair of probabilities (pG, pB) to solve the following problem.

max_{pG, pB} U(pG, pB; m) − κ·c(pG, pB)    (1)

participate and abstain, respectively. Let h denote the binary entropy function, h(x) = x log(x) + (1 x) log(1 x).
Then, Shannon mutual information costs are given by c(pG , pB ) = h() E[h(s )].


How does the solution to this problem depend on the monetary incentive m? The answer is most easily seen graphically. Figure 1 depicts (a part of) the agent's choice set.16 The vertical axis depicts pG, the probability of accepting if the state is good; the horizontal axis depicts (1 − pB), the probability of rejecting if the state is bad. Hence, the further to the north the bundle the agent chooses, the smaller is the probability of false negatives. The further to the east the bundle he chooses, the smaller is the probability of false positives. I separately plot the level curves of U and those of the cost of information function c on this space. The level curves of U are straight (and parallel) lines, since U is a linear combination of pG, (1 − pB) and a constant.17 U increases towards the upper right of the graph. The level curves of c are concave, since c is convex. For an initial level of material compensation m, the agent's optimal choice may be a bundle such as point A in this figure.
The total effect of an increase in m derives from a substitution effect and a stakes effect. We obtain the former by temporarily interpreting problem (1) as the Lagrangian to the maximization of U subject to a constraint on the costs of information acquisition, c(pG, pB) = c̄ for some fixed c̄. An increase in m raises the weight on pG in the utility function U and lowers that of (1 − pB). Intuitively, the increase in the weight on pG reflects the increased opportunity cost of non-participation, whereas the decrease in the weight on (1 − pB) captures the idea that higher incentives partially insure against adverse outcomes. Hence, the indifference curves tilt to the left,18 and the constrained optimum shifts to the northwest; for instance to a bundle such as point B in figure 1. The agent now takes greater care to avoid false negatives and is more tolerant of false positives. He acquires a different kind of information.
An increase in m not only changes the relative cost of false negatives and false positives, it also
changes the total stakes of this decision. Hence, the agent may choose to spend a different amount
of resources on information acquisition. If the increase in m increases the stakes of the decision, the
agent will acquire a larger amount of information, and his optimal bundle will move to the northeast,
for instance to a bundle such as point C in figure 1.19
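Problem (1) can also be solved numerically. The sketch below is a grid search under Shannon mutual-information costs; all parameter values (prior α = 0.5, payoffs G = 10, B = −10, cost scale λ = 5, and the two incentive levels) are illustrative assumptions of mine. It reproduces the comparative statics discussed here: raising m increases the chosen false positive rate pB and, per proposition 1, lowers the posterior conditional on participating.

```python
import math

def h(x):
    """Binary entropy (natural log), with h(0) = h(1) = 0."""
    if x <= 0.0 or x >= 1.0:
        return 0.0
    return -(x * math.log(x) + (1 - x) * math.log(1 - x))

def solve(m, alpha=0.5, G=10.0, B=-10.0, lam=5.0, n=200):
    """Grid search for the optimal (p_G, p_B) in problem (1),
    with c proportional to Shannon mutual information."""
    best, best_val = (0.0, 0.0), -float("inf")
    for i in range(n + 1):
        for j in range(i + 1):                      # choice set: p_G >= p_B
            pG, pB = i / n, j / n
            phat = alpha * pG + (1 - alpha) * pB    # P(accept signal)
            post_a = alpha * pG / phat if phat > 0 else alpha
            post_r = alpha * (1 - pG) / (1 - phat) if phat < 1 else alpha
            cost = lam * (h(alpha) - (phat * h(post_a) + (1 - phat) * h(post_r)))
            val = alpha * pG * (G + m) + (1 - alpha) * pB * (B + m) - cost
            if val > best_val:
                best_val, best = val, (pG, pB)
    return best

pG0, pB0 = solve(m=0.0)
pG1, pB1 = solve(m=5.0)
# Posterior that the state is good conditional on participating (prior 0.5):
post0 = 0.5 * pG0 / (0.5 * pG0 + 0.5 * pB0)
post1 = 0.5 * pG1 / (0.5 * pG1 + 0.5 * pB1)
```

With these (assumed) parameters the higher incentive raises both pG and pB, so more information is demanded overall, yet the false positive rate rises by enough that confidence conditional on participating falls.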
Distribution of ex post outcomes. There are ethicists and policy makers who worry that incentives may cause individuals to participate who ex post regret this decision (false positives); they are typically less concerned about individuals who do not participate (false negatives). Whether an increase in the incentive increases the false positive rate depends on whether the substitution effect exceeds the stakes effect.
Formally, an increase in the incentive payment m increases the false positive rate pB if and only if
c has decreasing differences (a sufficient condition is that the cross-derivative of c is negative globally),
as follows directly from Topkis' theorem (Milgrom and Shannon, 1994). This condition means that
16 The agent's choice set is {(pG, pB) | 1 ≥ pG ≥ pB ≥ 0}. For ease of exposition only a subset is depicted.
17 U can be written as U = α pG(G + m) − (1 − α)(1 − pB)(B + m) + (1 − α)(B + m).
18 The slope of an indifference curve is given by dpG/d(1 − pB) = (1 − α)(B + m)/(α(G + m)) < 0. Because G > B, this increases in m.
19 Whether the agent demands more or less information with an increase in incentives depends on parameters.


Figure 1: Effects of an increase in the incentive amount m. The horizontal axis plots 1 − pB, the probability that the agent rejects if the state is bad, or one minus the false positive probability. The vertical axis plots pG, the probability that the agent accepts if the state is good, or one minus the false negative probability. The choice set is {(pG, pB) ∈ [0, 1]²: pG ≥ pB}. For better visibility only a part of this choice set is plotted here. Straight lines represent indifference curves of a Bayesian decision maker for different incentive amounts. Curved lines are iso-cost functions. The black arrows indicate the substitution effect. The red (dashed) arrows indicate the stakes effect.


a marginal decrease in the false positive rate is costlier the lower the false negative rate, and vice
versa.20
Both policy makers concerned with political sustainability and ethicists may be interested in the
effects of incentives if a stronger condition holds; namely that amongst those who decided to participate,
higher incentives increase the fraction of subjects who ex post regret their decision. An equivalent
statement is that higher incentives decrease the posterior belief that the state is good, conditional on
participating. It necessitates that the substitution effect outweighs the stakes effect by a sufficient
amount. Graphically, the isoquants of the cost of information function cannot be too curved. A
sufficient condition is that the costs of information are proportional to Shannon mutual information,
the workhorse model of rational inattention theory.21
The following proposition summarizes this discussion. The proof is in appendix F.
Proposition 1. An increase in the material incentive m

(i) increases the false positive rate P(participate | s = B) = pB if ∂²c/∂pG∂pB (pG, pB) < 0 for all pG, pB.

(ii) decreases the posterior probability that the state is good conditional on participating, P(s = G | participate) = α pG/(α pG + (1 − α)pB), if c is given by Shannon mutual information.

Experimental predictions. The experiment in section 5 directly reveals how an increase in incentives affects the false positive and false negative rates. This model also has testable implications when false positives and false negatives cannot be observed directly. The experiment in section 4 tests these.
First, it predicts that an increase in incentives has a stronger effect if subjects can adjust their information demand and attention allocation in response to incentives than if they cannot. To see this, note that in the model as outlined here, a change in incentives increases participation only if subjects can adjust their information demand (unless the change in incentives is so large that it affects whether or not the subject's participation decision depends on the signal realization). In an empirical setting, incentives may change participation for other reasons (such as heterogeneous preferences), but the effect will be larger if subjects can also adjust the false positive and false negative probabilities.
Prediction 1. An increase in incentives has a larger effect on the rate of participation if subjects can
adjust their information demand in response to incentives than if they cannot.
Second, the model predicts how information demand depends on incentives. Someone who faces a
higher incentive for participation will demand information which she ex ante believes is more likely to
make her participate. The reason is that such a person optimally chooses a higher false positive probability, and a lower false negative probability, and thus a higher ex ante probability of participating.
20 Appendix E shows that the cross-derivative of c is negative globally if c is posterior-separable in the sense of Caplin
and Dean (2013b).
21 Caplin and Dean (2013b) provide a behavioral characterization of mutual information costs. First, optimal posteriors
are independent of priors. (This can be viewed as a dynamic consistency condition; the posterior beliefs a sequential
information gatherer optimally attains before making a decision should be independent of his current posterior.) Second,
the relation of different realizations of optimal posterior beliefs is independent of certain transformations of utilities.


If subjects are presented with an explicit selection of pieces of information, their choice should reflect
this.
Prediction 2. With higher incentives, subjects demand more information that they expect to encourage participation, and less that they expect to discourage it.
3.3.2 Non-Bayesian Decision Makers

The model so far has focused on Bayes-rational agents. Clearly, they cannot be made ex ante worse off
by an increase in material incentives. Non-Bayesian decision makers, by contrast, can ex ante suffer
from an increase in incentives, even if they voluntarily decide about participation. This happens if
the increase leads individuals to participate who abstained before the increase, and who would still
abstain after the increase if they held unbiased beliefs. The latter means that participation is worse
than non-participation. It requires that the individual is overly optimistic, either because she was so
from the start,22 or because incentives made her so.23
I first develop a criterion to test whether incentives make subjects more optimistic in a non-Bayesian way. I then consider subjects who exhibit a belief updating bias documented in existing research in behavioral economics and psychology. I show that higher incentives make such subjects systematically more optimistic if their demand for information directionally follows the prediction of the rational model.
Testing Bayesian rationality. A test of whether incentives cause deviations from Bayesian updating faces two challenges. First, it should be applicable in settings in which the demand for information and the allocation of attention are endogenous. Second, neither of these may be observed directly. My solution is to test an implication of the law of iterated expectations, the hallmark of Bayesian rationality. That law demands that a subject's expected posterior equals the prior. Hence, no treatment variation may induce a first order stochastic dominance relation between posterior beliefs, even if it leads the agent to select different information structures. Since first order stochastic dominance is preserved under monotonic transformations, this further implies that no strictly monotonic function of posterior beliefs (such as the least amount of money for which one would participate in the transaction) may vary with incentives in the first order.24
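This martingale property can be checked directly for the binary-signal structures used here. The sketch below uses arbitrary illustrative values for the prior (written `alpha`, the probability that the state is good) and for the information structures:

```python
def expected_posterior(pG, pB, alpha):
    """Prior-weighted average of posteriors under information structure (pG, pB)."""
    phat = alpha * pG + (1 - alpha) * pB          # P(accept signal)
    post_accept = alpha * pG / phat               # P(s = G | accept signal)
    post_reject = alpha * (1 - pG) / (1 - phat)   # P(s = G | reject signal)
    return phat * post_accept + (1 - phat) * post_reject

alpha = 0.3  # illustrative prior
# Whatever information structure the agent selects, the expected posterior
# equals the prior, so no (Bayesian) choice of structure can shift it:
vals = [expected_posterior(pG, pB, alpha)
        for pG, pB in [(0.9, 0.1), (0.6, 0.5), (0.99, 0.7)]]
```

Since every structure returns the prior exactly, any treatment that shifts the distribution of posteriors (or of a strictly monotone transform such as WTA) in the first order indicates a departure from Bayesian updating.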
22 van den Steen (2004) argues that when people can choose from a range of possible actions and have heterogeneous priors, they will rationally tend towards selecting those actions for which their prior is larger. Therefore, conditional on having chosen a given action, subjects are overconfident about how good that action is.
23 Increasing incentives for a voluntary decision cannot make overly pessimistic agents worse off. Such agents may
merely fail to take advantage of an opportunity that might increase their ex ante expected utility. Moreover in domains
such as kidney donation or surrogate motherhood, the consequences of participation are typically irreversible. By
contrast, opportunities to participate usually do not disappear, so that mistaken abstention is reversible.
24 A merely weakly monotonic function of posteriors may vary in the first order. An example is the decision whether
or not to participate in the transaction, which is weakly monotonic in posterior beliefs. If the treatment variation
consists in giving some decision makers the opportunity to skew their information demand, but not others, then the
participation rate may differ across these two groups even if all decision makers are Bayes-rational.


Proposition 2. For Bayesian decision makers, no treatment variation may lead to a first-order
stochastic dominance ordering in any strictly monotonic function of posterior beliefs.
Alternative hypothesis. Some proponents of limits on incentives are concerned that incentives
cause overly optimistic beliefs. By incorporating a bias documented in the existing literature into the
costly information acquisition model, I provide a theoretical foundation for these concerns.25
The model predicts that higher incentives will lead subjects to acquire more highly skewed information. Less than fully rational subjects may skew their information demand in the predicted direction
even if they fail to choose the predicted magnitude.26 Correctly interpreting skewed information is
difficult, however, and requires that the subject correctly accounts for magnitudes. Previous literature
shows that many people are unable to do this; they typically interpret skewed information as if it
were less skewed than it is (Slowiaczek, Klayman, Sherman and Skov (1992), Brenner, Koehler and
Tversky (1996), McKenzie (2006)).27
If subjects skew information demand in the predicted direction, but fail to fully account for this
when interpreting the information obtained, higher incentives lead to systematically more optimistic
beliefs. The reason is the following. Under the condition in proposition 1 (i), higher incentives lead
rational subjects to choose an information structure with an increased likelihood of producing the
accept signal in either state of the world. Such an information structure is more highly left-skewed.
Because the agent now expects to observe that signal more often, he should, upon observing it, be less
confident that the state is good. If he interprets information as if it was less skewed than it actually
is, this decrease in confidence will be attenuated compared to the Bayesian. Hence, his confidence
after observing a good signal from a more left-skewed information structure will drop by less than the
increased skew would warrant. Similarly, his confidence after observing a bad signal will increase by
less than the increased left-skew would warrant, so his optimism will be higher than the Bayesian's in that case as well.
Formally, suppose a decision maker perceives information structure (pG, pB) as (p̃G, p̃B) = (γ pG + (1 − γ)p̄, γ pB + (1 − γ)(1 − p̄)), with p̄ = (pG + (1 − pB))/2 and γ ∈ (0, 1). Let β_G = α pG/p̂ and β_B = α(1 − pG)/(1 − p̂), where p̂ = α pG + (1 − α)pB, be a rational agent's posterior belief about the event {s = G} after observing a good and bad signal, respectively. Let β̃_G and β̃_B denote the respective posteriors for the behavioral agent. Fix some
25 Incentives can affect beliefs through other mechanisms. For instance, subjects may choose beliefs by optimally trading off gains from anticipatory utility against losses from suboptimal decision making (Brunnermeier and Parker, 2005).
26 Existing evidence in psychology demonstrates that subjects tend to seek information to confirm rather than disconfirm maintained hypotheses (see Nickerson (1998) for a review). If high (low) incentives cause subjects to start their inquiry from the hypothesis that the utility maximizing action is to participate (abstain) and demand information to confirm this hypothesis, their information demand will be skewed as in the rational model.
27 Within a sequential information acquisition framework, this is also predicted if subjects overinfer from weak signals, and underinfer from strong signals, as has been demonstrated in Ambuehl and Li (2015), Griffin and Tversky (1992), Massey and Wu (2005), and Hoffman (2014). The reason is the following. With higher incentives the subject is willing to accept already at a less extreme posterior (that the state is good), but asks for an even more extreme posterior (that the state is bad) before he is willing to reject. Therefore, the subject will see longer sequences of information (stronger signals) before he rejects, and shorter sequences of information (weaker signals) before he accepts when incentives are higher. If he overinfers from weak signals, he will accept even sooner than he would like to; if he underinfers from strong signals, he will reject even later than he would like to.


information structure (p0G, p0B) and let information structure (pG, pB) = (p0G + δ, p0B + δ) for some δ ∈ [−1, 1] such that (pG, pB) ∈ [0, 1]². An increase in δ increases the left-skew of (pG, pB). We then have the following proposition.

Proposition 3. The difference between the non-Bayesian and the Bayesian posterior conditional on accepting the transaction, P̃(s = G | accept) − P(s = G | accept) = β̃_G − β_G, is increasing in the left-skew δ of the information structure. The same is true of the difference between the non-Bayesian and the Bayesian posterior conditional on rejecting the transaction, P̃(s = G | reject) − P(s = G | reject) = β̃_B − β_B.
In words, the proposition shows that if a behavioral agent chooses an information structure with a more pronounced left-skew δ, his posterior after observing a good signal will exceed that of the rational decision maker by a larger amount. Hence, if an increase in the incentive amount m causes a behavioral decision maker to choose an information structure with a more pronounced left-skew, he will be systematically more optimistic than he would have been with a lower incentive amount.28
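This mechanism can be checked numerically. The sketch below uses my reconstruction of the perceived-structure formula from the text and purely illustrative parameters (prior alpha = 0.5, perception weight gamma = 0.7, base structure (0.6, 0.2)); the gap between the behavioral and the Bayesian posterior indeed rises with the left-skew delta, both after the accept and after the reject signal.

```python
def posteriors(pG, pB, alpha):
    """Bayesian posteriors that s = G after an accept and a reject signal."""
    phat = alpha * pG + (1 - alpha) * pB
    return alpha * pG / phat, alpha * (1 - pG) / (1 - phat)

def perceived(pG, pB, gamma):
    """Perceive (pG, pB) as less skewed than it is: shrink the accuracy in
    each state toward the average accuracy pbar."""
    pbar = (pG + (1 - pB)) / 2
    return gamma * pG + (1 - gamma) * pbar, gamma * pB + (1 - gamma) * (1 - pbar)

def gaps(delta, p0G=0.6, p0B=0.2, alpha=0.5, gamma=0.7):
    pG, pB = p0G + delta, p0B + delta               # larger delta = more left-skew
    bG, bB = posteriors(pG, pB, alpha)              # rational posteriors
    tG, tB = posteriors(*perceived(pG, pB, gamma), alpha)  # behavioral posteriors
    return tG - bG, tB - bB                          # gaps after accept / reject signal

gap_accept_lo, gap_reject_lo = gaps(0.0)
gap_accept_hi, gap_reject_hi = gaps(0.2)
```

Both gaps increase in delta, which is the content of proposition 3: the more the incentive skews information demand, the more the behavioral agent's beliefs exceed the Bayesian benchmark.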

4 Experiment: An Aversive, Visceral Experience

Proponents of limits on incentives worry that incentives may interfere with sound decision making. In
this section, I conduct a laboratory experiment to test whether incentives can indeed affect subjects' expectations about how unpleasant they will find a highly visceral, aversive experience.
I incentivize participants to eat whole insects. This is a suitable transaction for this study, for
several reasons. First, it is unfamiliar as well as rather intense to most subjects.29 This ensures that
their expectations are potentially malleable, and that they have an incentive to inform themselves
about the transaction. Second, there are no externalities. Hence, subjects' expectations about the
outcome of the transaction exclusively concern their own experience; there are no social concerns that
could act as a confound. Third, it satisfies the constraints of a laboratory experiment. Insects are
commercially processed for human consumption in certified facilities, eating them is harmless, and
paying for people to eat them is legal.
28 This is not the only model through which incentives can affect optimism; the model by Brunnermeier and Parker (2005) is another. In that model, agents choose prior beliefs by optimally trading off anticipatory utility against the costs of possibly making suboptimal decisions due to distorted beliefs. Because higher incentives increase the marginal anticipatory utility the agent can gain from increasing his prior, they may, in some specifications, lead to more optimistic beliefs. The many other ways in which subjects could deviate from Bayesian rationality do not have direct implications on how incentives affect optimism (Peterson and Beach (1967), Tversky and Kahneman (1974), Grether (1980), Grether (1992), Rabin (1994), El-Gamal and Grether (1995), Rabin and Schrag (1999), Massey and Wu (2005), Holt and Smith (2009), Ambuehl and Li (2015)). A closely related theoretical literature considers how subjects manipulate their beliefs, but imposes Bayesian rationality (Bénabou and Tirole (2002), Kőszegi (2006), Caplin and Leahy (2001), Gottlieb (2014)). Manipulation is achieved, for instance, by choosing to forget undesired signal realizations. Formally, this is similar to the choice of an information structure in the present setting.
29 Subjects frequently reported that this was one of the most interesting experiments they had participated in. Occasionally, subjects reported that the experiment was stressful or that the insects were scary. Even in countries
such as China, Thailand, and Mexico, insect-eating is associated with particular regions and/or communities, rather
than frequently practiced by a wide majority. In my data, Asians and Hispanics are neither more nor less willing to eat
insects than Caucasians.


4.1 Design

Structure. The experiment follows a 2×2 across-subjects design. The first dimension varies whether a subject is offered a $3 or $30 incentive for eating an insect (the low and high incentive conditions,
respectively). The second dimension varies whether or not a subject selects and watches a video about
insects as food (the video and no video conditions, respectively). This allows me to test the prediction
that endogenous information acquisition amplifies the effect of incentives on takeup (prediction 1 in
section 3). Subjects in the video condition choose between two videos entitled Why you may want
to eat insects (the pro-video) and Why you may not want to eat insects (the con-video).30 I can
therefore test whether higher incentives skew subjects' demand towards information that encourages
rather than discourages participation (prediction 2 in section 3).
This study focuses on the effects of information acquired due to incentives, rather than on information conveyed by incentives. Hence, the design minimizes contextual inference. Specifically, subjects
are aware of both payment amounts, and that they are randomly assigned to one of them. Therefore, they cannot rationally interpret the incentive they are given as a signal about the experience of
eating an insect. (By contrast, subjects who watch a video are not aware that others do not, and vice
versa.)
To study subjects' expectations about the unpleasantness of eating insects, I measure the least amount of money for which they are willing to eat one (their WTA). Additional decisions are necessary
to measure this, since the effect of incentives on expectations cannot be determined by comparing the
decisions some subjects make at a low incentive to those others make at a high incentive. Specifically,
subjects reveal their WTA by filling in multiple decision lists. In each of them they decide, for a
variety of compensation amounts, whether or not to eat an insect in exchange for that amount. All
choices are payoff relevant; one of them is randomly selected for implementation at the end of the
experiment (see below for details).
Because subjects reveal their WTA after having been promised either a low or a high incentive
payment, their choices may not only be affected by the information acquired due to that incentive, but
also by anchoring (see section 1 for references). The anchoring hypothesis predicts that subjects in the
high incentive condition will reveal a higher WTA than those in the low incentive condition. Such an
effect may mask subjects' attempts to persuade themselves that consuming insects is tolerable when
incentives are high. The experimental design allows me to difference out anchoring. Self-persuasion
30 The pro and con videos each list reasons for and against eating insects. The reasons stated in the pro-video are 1.
Insects can be yummy, 2. Insects are a highly nutritious protein source, 3. Our objection to eating insects is arbitrary,
4. Insects are more sustainable than chicken, pork, or beef, 5. We already eat insects all the time. The reasons stated
in the con-video are 1. Some cultures eat insects. But to those of us who are not used to it ... see for yourself, 2.
Insects have many body parts. Most of those we do not usually eat in other animals, 3. When you eat an insect, you
eat all of it. In particular its digestive system, including its stomach, intestine, rectum, anus and whatever partially
digested food is still in there, 4. We tend to associate insects with death and disease. Even if we know that eating some
insects is harmless, this association is difficult to overcome. The videos are available on https://youtu.be/HiNnbYuuRcA
(Why you may want to eat insects) and https://youtu.be/ii4YSGOEcRY (Why you may not want to eat insects).
Transcriptions are in Appendix A.


effects should be more pronounced when subjects can acquire information about the transaction (in
the video condition) than when they cannot (in the no video condition). If anchoring effects are
unchanged across the video and no-video conditions, the difference-in-difference in WTA identifies the
self-persuasion effect.
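Under that identifying assumption, the difference-in-difference is a one-line computation. The numbers below are entirely hypothetical, chosen so that anchoring raises WTA by $2 in both conditions while self-persuasion lowers it by $3 only when information can be acquired:

```python
def self_persuasion_effect(wta):
    """Difference-in-differences in mean WTA across the 2x2 design, assuming
    anchoring is constant across the video and no-video conditions.

    `wta` maps (incentive, video) -> mean WTA, with incentive in {"high", "low"}
    and video in {"video", "novideo"}.
    """
    return ((wta[("high", "video")] - wta[("low", "video")])
            - (wta[("high", "novideo")] - wta[("low", "novideo")]))

# Hypothetical cell means: anchoring = +2 everywhere, self-persuasion = -3
# only where subjects can acquire information.
example = {("low", "novideo"): 20.0, ("high", "novideo"): 22.0,
           ("low", "video"): 20.0, ("high", "video"): 19.0}
```

Here the within-condition comparisons are contaminated by anchoring (+2), but the difference-in-difference recovers the self-persuasion effect of −3.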
Timeline. The experiment proceeds in 7 steps, which are summarized in Table 1.

In the first step, subjects learn the incentive amount that they are assigned to, and that they will
decide, for each of five food items, whether to eat the item in exchange for that amount. They then
learn that all of the food items are whole insects that are either baked, or cooked and dehydrated, and
produced for human consumption. They do not actually make these decisions until the fourth step of
the experiment. But the knowledge of the incentive they will be offered may affect their behavior in
steps 2 and 3.
Only subjects in the video condition participate in the second step. They each select one of the two
videos, and watch it. The titles and the approximate length of 6 minutes are all the information subjects have about them. Each subject in the video condition must view one of the videos, and nobody can
watch both. Because the videos are relatively long and contain significant detail, incentives may also
affect which parts of the video the subjects pay attention to.
Subjects in the video condition also select at least four out of nine video clips. This
reveals whether incentives affect the amount of information demanded. The clips are grouped in bins
of three named Reasons for eating insects, Reasons against eating insects, Other information
about eating insects. Subjects know that they will either watch the selected 6 minute video, or all
the clips they selected, but not both. They also know that the chance of the former is 97%.31
In the third step, subjects fill in five multiple decision lists, one for each of the five insect species
in column 1 of Table 2. The compensation amount p ranges from $0 to $60 in 21 increasingly larger
steps.32 Subjects select the least amount for which they are willing to eat the food item by clicking
on the respective line; the remaining choices are filled in automatically. These choices reveal subjects' willingness-to-accept (WTA) to eat each of the insects.
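A simplified sketch of why such a price list is incentive compatible: if one uniformly random row of a single list is implemented and utility is quasilinear (both simplifications of the actual payment scheme, which is described below), reporting the true WTA maximizes expected earnings. The amounts are those from footnote 32.

```python
AMOUNTS = [0, 1, 2, 3, 4, 6, 8, 10, 12.5, 15, 17.5, 20, 22.5,
           25, 27.5, 30, 33, 36, 39, 44, 50, 60]   # from footnote 32

def expected_payoff(reported_wta, true_wta):
    """Expected earnings from one price list when a uniformly random row is
    implemented: the subject eats (and is paid p) on rows with p >= reported_wta,
    and eating at price p is worth p - true_wta to her."""
    gains = [p - true_wta for p in AMOUNTS if p >= reported_wta]
    return sum(gains) / len(AMOUNTS)

true_wta = 17.5   # hypothetical subject
truthful = expected_payoff(true_wta, true_wta)
# Misreporting in either direction cannot raise expected earnings:
alternatives = [expected_payoff(r, true_wta) for r in AMOUNTS]
```

Reporting below the true WTA adds rows with negative surplus; reporting above it drops rows with positive surplus, so the truthful switch point is (weakly) optimal.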
In the fourth step, subjects make the decisions they were promised in step 1 (the treatment decisions). Subjects in the high (low) incentive condition decide, for each of the insects, whether or not
to eat that insect in exchange for $30 ($3).
31 The probability that the choice of clips will be implemented is low for two reasons. First, the focus of this experiment
is on the kind of information, rather than on the amount. A low implementation probability of the clips minimizes the
chance that behavioral effects are driven by decisions concerning the amount rather than kind of information. Second,
there are many possible selections of video clips, each of which could potentially differently affect behavior. By contrast,
there are only two selections of 6-minute videos. Hence, I expected that a low implementation probability of the
clips would lead to higher statistical power. To incentivize the decision, the implementation probability is nonetheless
positive.
32 The amounts are 0, 1, 2, 3, 4, 6, 8, 10, 12.5, 15, 17.5, 20, 22.5, 25, 27.5, 30, 33, 36, 39, 44, 50, 60. Resolution is
finer at lower levels since the distribution of WTA is positively skewed. The amount $3 was not included in the decision
lists for the first 78 Stanford subjects.


Step                                                                   Implementation probability
1. Learn incentive that will be offered in step 4 to eat an insect: $3 or $30
2. Video treatment only: Select video to watch and watch it.
3. Fill in 5 multiple price lists, one for each of 5 species.                        7%
4. Make the decision announced in step 1, for each species of insect.               80%
5. Insects handed out. Filler task: CRT, Raven's matrices.
6. Fill in 5 multiple price lists, one for each species.                             7%
7. Predict others' WTA for each species, and each payment condition.                 6%
Table 1: Experiment timeline. Instructions for steps 1 through 6 are read out aloud at the beginning of the experiment. Instructions for step 7 are displayed on the subjects' screens right before it starts.
Subjects have no information about the food items up to and including step 4, except for a
verbal description and, potentially, whatever information was contained in the video they had chosen.
This motivates subjects in the video treatment to carefully decide which video to watch, and to pay
attention.
I test the persistence of the effects of incentives by measuring subjects' WTA both before they have seen the insects (in step 4), and after. Thus, in the fifth step, all subjects receive five containers. Each is filled with insects and a folded piece of paper with a code. Subjects must enter all codes into the computer. This forces them to open each container, remove the label from within, and thus view and smell each of the food items. The subjects are encouraged to closely examine each specimen. (See appendix A for pictures of the insects.) As a filler task during the handing out of the insects, subjects complete an extended version of the Cognitive Reflection Test (Toplak, West and Stanovich, 2014), and 24 of Raven's (1960) standard progressive matrices.33
In order to measure subjects' awareness of the effects of incentives on others, the seventh step asks
them to predict the WTA of previous participants for each of the insects. Subjects make separate
predictions for others in the $3 and in the $30 incentive conditions.34 They only predict the WTA of
previous participants in the same video condition as themselves. Subjects only learn that they will
make these predictions right before they start with this step, so their own choices in previous steps
are not affected by considerations of how others would decide.
Payment and execution of consumption decisions. The incentive scheme is chosen such that
subjects find it optimal to reveal their genuine preferences in each decision they make. Specifically, at
the very end of the experiment, one of each subjects decisions is randomly chosen for implementation.
That decision entirely determines her payment and consumption of insects. With 80% probability one
of the five treatment decisions (which are based on the promised incentive amount) is selected. This
33 Subjects complete sets D and E.
34 Each participant first makes a prediction for an average participant. They then separately predict the WTA of those who were offered $3 and of those who were offered $30, respectively. Subjects always predict the average participant first, but then answer first about either the participants in the $3 condition, or about participants in the $30 condition, with equal probability.


probability is high so that subjects' thinking about the experience of eating insects is influenced primarily by the incentive amount they are offered for those decisions, rather than by those in the
multiple price lists. With the remaining 20% probability one of the remaining decisions is selected for
implementation, as detailed in Table 1.
Subjects make some consumption decisions before having seen the actual insects, and thus may
be unpleasantly surprised. Participants cannot be forced to ingest insects. To nevertheless incentivize
them to reveal their genuine preferences during the experiment, reneging on a decision made during the
experiment is costly. Each subject receives a completion payment of $35. But someone who reneges
not only forfeits whatever she would have received for eating the insect, she also see her completion
payment reduced by $20. Hence, somebody who accepts an offer and subsequently reneges would have
been better off if she had never encountered the offer. (Subjects for whom a decision is selected in
which they have chosen not to eat the insect cannot change their decision.) Subjects know this from
the outset of the experiment.
A subject's payment from the experiment may also be determined by one of her predictions. If so, she will not consume any insects, and her completion payment is reduced by $0.50 for each $1 by which the prediction differs from the truth.35
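Footnote 35 states why: the penalty is linear in the prediction error, and expected absolute loss is minimized at the median. A quick numerical check, using a made-up belief distribution:

```python
import statistics

def expected_penalty(guess, beliefs):
    """Expected payment reduction: $0.50 per $1 of prediction error."""
    return statistics.mean(0.5 * abs(guess - wta) for wta in beliefs)

# A hypothetical belief distribution over other participants' WTA, in dollars.
beliefs = [5, 10, 10, 20, 60]
best = min(range(0, 61), key=lambda g: expected_penalty(g, beliefs))
print(best, statistics.median(beliefs))  # both equal 10
```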
All subjects know that consumption of food items occurs in a visually secluded space in the presence
of only the experimenter who ensures that the participant completely consumes the assigned insect.
Hence, social considerations are absent.

4.2 Implementation and Preliminary Analysis

A total of 671 subjects participated in one of 39 sessions in May, June, and July 2015; each session lasted about 2.5 hours. In each session both payment conditions were present, but either all or no subjects in a session were in the video condition. A large number of subjects is required since individuals' willingness to eat insects is highly heterogeneous. I ran all treatments at Stanford University (110 subjects), the Ohio State University (499 subjects), and the University of Michigan (62 subjects) using an interface coded in Qualtrics.36
I recruited subjects using the universities' experimental economics participant databases.37 The invitation emails informed prospective participants that the experiment would involve the consumption of food items on the spot, and asked recipients not to participate if they had food allergies, were vegetarian or vegan, or ate kosher or halal. The recruiting emails did not mention insects.38
35 A subject thus maximizes her expected payoff by stating the median of her belief distribution of the WTA of previous participants.
36 I had expected to enroll several hundred subjects at the University of Michigan, but only 62 were available.
37 The 78 Stanford students who participated first in this experiment were not given any decisions to eat field crickets. The first 48 Stanford students also did not make any predictions about other participants. In addition, 68 Stanford students participated in an exploratory treatment. These data are not included in any analysis. In the exploratory treatments, the overwhelming majority of participants were presented with highly visceral images of insects, which made it very difficult for them to persuade themselves that eating insects is tolerable.
38 The exceptions are the invitation emails in Michigan and those for the last 31 Stanford subjects, which informed prospective participants that the experiment involved the voluntary consumption of food items, including edible insects.


Nonetheless, 60 subjects essentially opted out of the experiment; they rejected every offer in every
multiple decision list throughout the study. In particular, each of these participants rejected each of
10 offers to eat an insect in exchange for $60. For these participants, $30 is not a high incentive. It is
thus unsurprising that the fraction of these subjects is very similar across incentive treatments, and
statistically indistinguishable. (See Appendix B.3 for details.) Since these subjects add no interesting
variation to the data, I drop them from the analysis.39 Appendix B.4 presents robustness checks to all
main results, including a double hurdle model that explicitly models the decision of the un-buyable
subjects and accounts for the stochasticity in that choice. An additional six subjects refused to remove
the labels with the codes from the food containers in step 5 of the study, one participant realized in
the middle of the study that she had misunderstood the study, and another participant failed to watch
the video he had selected. I also exclude these subjects from the analysis.
Thus, the study sample contains a total of 603 participants, 234 participants in the no-video
treatment (118 and 116 with $3 and $30 incentives, respectively), and 369 in the video treatment (183
and 186 with $3 and $30 incentives, respectively).40
Randomization Check. Randomization into treatments was successful. Of 24 F-tests for differences in subjects' predetermined characteristics across treatments, only one is significant at the 5% level, and three more are significant at the 10% level.41 This falls within the expected range. Details are reported in Appendix B.1.
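Treating the 24 tests as independent for illustration (they are computed on the same sample, so this is only approximate), the expected number of rejections and the chance of observing at least this many follow from the binomial distribution:

```python
from math import comb

def p_at_least(k, n, p):
    """P(at least k of n independent tests reject at significance level p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

print(24 * 0.05)                          # expected rejections at the 5% level
print(round(p_at_least(1, 24, 0.05), 3))  # chance of one or more at 5%
print(round(p_at_least(4, 24, 0.10), 3))  # chance of four or more at 10%
```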
Regression specification. In all analyses I estimate linear regression specifications. Throughout, I cluster standard errors at the subject level, and I use university and, where appropriate, species fixed effects. Data from the multiple price lists are interval-coded and possibly censored. For ease of presentation, I use the interval midpoints for analysis, and set subjects' WTA to $60 if they rejected all offers in a multiple price list. Appendix B.4 performs robustness checks. Accounting for interval-coding and censoring in WTA increases the estimated effect sizes.

4.3 Analysis

Summary Statistics. Eating insects is aversive to most participants. For each of the five species,
Column 1 of Table 2 lists the fraction of subjects who have a positive WTA (before having seen the
insects). For each item the fraction of participants who would eat it for free falls short of 5%. Column
2 lists the median WTA. It is substantial, ranging from $9 to $18.75. Column 3 lists the percentage
38 (cont.) This information had no statistically measurable effect on the fraction of participants who refused to eat an insect at any price offered, perhaps because prospective participants knew that participation in this study would be lucrative even for subjects who did not consume an insect.
39 The treatment decisions cannot be used for this purpose, as they were different for subjects in the high and low
incentive treatments. Amongst participants who reject every offer in every multiple decision list, 287 of 295 treatment
decisions are rejected (97%).
40 I oversampled the video condition since only that condition reveals information choice.
41 Arts and humanities majors are slightly underrepresented in the low incentives treatments.


Food Item             Fraction WTA > $0   Median WTA   Fraction WTA ≥ $60
2 house crickets            0.96              9.00             0.17
5 large mealworms           0.96             18.75             0.30
3 silkworm pupae            0.95             13.75             0.23
2 mole crickets             0.96             13.75             0.23
2 field crickets            0.95             13.75             0.22

Table 2: Summary statistics of WTA to eat each insect (before the insects were distributed). Pooled over treatment conditions. $60 is the highest price offered in the multiple price lists. Interval midpoints are used for analysis. n = 603.
of subjects who would not eat a given insect even for the highest incentive amount offered in the
multiple price lists ($60). The numbers range from 17% to 30%.42
Subjects who had agreed to eat an insect could renege on the decision selected for implementation at a cost of $20. Five participants (0.8%) chose to do so.43 These participants would have been better off never having been offered the voluntary choice to eat an insect in exchange for money. All of them were in the high incentive condition.44
Information acquisition amplifies the effect of incentives. Panel A of Table 3 shows subjects' decisions on the offers they were promised at the beginning of the experiment, averaged over species. For subjects in the no-video condition, participation increases from 40.56% to 66.87% as incentives are raised from $3 to $30. Subjects in the video treatment are 10.67 percentage points more likely to accept the $30 incentive, but they are equally likely to eat an insect for $3 as those who could not watch a video. Hence, the effect of incentives is amplified by more than a third if subjects have access to a video and thus can skew their information gathering and attention allocation. This confirms Prediction 1 of the model in Section 3.
The effect of access to a video is concentrated in the high incentive condition. This suggests that those subjects used the videos to persuade themselves that eating insects is less appalling than they would have thought had they been given the low incentive.
High incentives lead subjects to persuade themselves. The self-persuasion hypothesis is not the only one that can explain the data in Panel A of Table 3. An alternative explanation is that the video option affects subjects' WTA to eat insects independently of the incentive treatment they are in. This hypothesis can explain the data if access to the videos decreases the WTA of subjects who are
42 Each decision made in step 4 of the experiment is also made as part of a multiple price list in step 3. These decisions may be inconsistent. I find this is the case for 15.15% of decisions, and 35.24% of participants reveal at least one inconsistency. These inconsistencies are conceptually different across the two incentive conditions, however, and thus cannot be used as a control or selection criterion. Appendix B.2 presents details.
43 I retain these subjects in the sample. This decision is not informative about whether or not the WTA these subjects stated before the insects were handed out revealed their genuine expectations; it only means that those expectations may have been overly optimistic. If I do drop them, my results strengthen.
44 Four of them were in the video condition, and three had opted for the pro-video.


appalled by the thought of eating insects (whose WTA is in the vicinity of $30) but not of those who find it mildly unpleasant (whose WTA is in the vicinity of $3). If so, giving subjects access to the videos will increase participation at $30 but leave participation at $3 unchanged, even if the distribution of WTA within the video condition is entirely unaffected by the incentive a subject has been promised.
By studying the decisions subjects made in the multiple price lists in step 3 of the experiment, I
can disentangle the two hypotheses. In that step, all subjects make the same decisions, irrespective
of treatment condition. If the effect of the video option is independent of the treatment a subject is
in, the difference in WTA between those who could watch a video and those who could not should
not depend on the incentive condition. By contrast, if high, but not low incentives cause subjects to
persuade themselves that insects are tolerable food, the difference in WTA between those who could
watch a video and those who could not should be pronounced in the high incentive condition, but
absent in the low incentive condition.
Panel B of Table 3 shows that access to the video decreases WTA in the high incentive condition
by a significant $5.85, but has no statistically significant effect in the low incentive condition. The
difference in the effect of the video option across incentive treatments is a significant $6.64. Hence,
the data are consistent with the self-persuasion hypothesis, and inconsistent with the hypothesis that
the effect of access to a video is independent of the incentive treatment.
Panel C of Table 3 shows that the treatment effects persist beyond the distribution of the insects. While receiving the insects on average increases subjects' WTA by $2.33 (s.e. 0.36), it does not significantly alter any treatment effects.
It is also noteworthy that in the no-video condition, the incentive treatment leads to a substantial
anchoring effect.45 The WTA of subjects in the no video treatment is $19.08 for subjects who are
offered the low incentive, and $24.14 for those who are offered the high incentive. Below, I show that
behavior in this experiment is not consistent with the alternative hypothesis that the video option
eliminates anchoring.
How do subjects persuade themselves? Subjects who are given higher incentives are more likely to demand information that encourages rather than discourages eating insects, as predicted by the model in Section 3 (Prediction 2). Columns 1 and 2 of Table 4 show that the fraction of subjects choosing the con-video drops by almost half, from 19% to 11%, as incentives rise from $3 to $30. (The majority of subjects choose to watch the pro-video, perhaps because they already know why they may not want to eat insects.) Subjects' selection of video clips reinforces this finding. Subjects in the high incentive condition select significantly fewer con-clips and significantly more pro-clips; the number of other-clips is unaffected (Columns 3-5). Incentives do not affect the total number of clips selected, as most subjects opt for the minimum number of four clips (Column 6). In this experiment,
45 Previous experiments on anchoring make it evident to subjects that the anchors they use are uninformative.


A. Percentage willing to eat insects for promised amount (n = 603)

                               Offered $3,    Offered $30,    Difference
                               accept $3      accept $30      $30 - $3
No Video                       40.56          66.87           26.30***
                               (3.88)         (3.48)          (5.16)
Video                          40.22          77.53           37.31***
                               (3.27)         (2.42)          (4.01)
Difference Video - No Video    -0.34          10.67**         11.01*
                               (5.13)         (4.31)          (6.54)

B. WTA before distribution of insects, in dollars (n = 603)

                               Low            High            Difference
                               Incentive      Incentive       High - Low
No Video                       19.08          24.14           5.05**
                               (1.71)         (1.91)          (2.54)
Video                          19.88          18.28           -1.59
                               (1.51)         (1.35)          (1.99)
Difference Video - No Video    0.79           -5.85***        -6.64**
                               (2.30)         (2.37)          (3.22)

C. WTA after distribution of insects, in dollars (n = 603)

                               Low            High            Difference
                               Incentive      Incentive       High - Low
No Video                       21.86          26.84           4.97*
                               (1.84)         (2.02)          (2.70)
Video                          22.58          19.66           -2.92
                               (1.58)         (1.41)          (2.09)
Difference Video - No Video    0.71           -7.18***        -7.89**
                               (2.45)         (2.50)          (3.42)

Table 3: Panel A shows the fraction of participants (in percent) who agree to eat the food item for the incentive amount they were promised, by treatment and averaged over the five food items. Panel B shows mean WTA in dollars by treatment, before the insects were distributed. Panel C shows mean WTA in dollars by treatment, after the insects were distributed. Standard errors, clustered by subject, are in parentheses. Coefficients are estimated using university and species fixed effects. *, ** and *** denote statistical significance at the 10%, 5% and 1% levels, respectively. Asterisks are suppressed for levels.


incentives affect the kind of information demanded, but not the amount, possibly because of the lower bound.
Each video contains a variety of arguments laid out over six minutes. Hence, incentives may not only affect subjects' explicit choice of information sources; they may also affect which parts of a given video subjects pay attention to and believe in. The latter mechanism must be at play, since the 7 percentage point difference in choice frequencies of the pro-video is not sufficient to explain the $6.64 effect on subjects' WTA. If the effect of the different videos is $60, the maximum that can be measured in the multiple price lists, the 7 percentage point difference in video choice can explain only a $4.20 difference in WTA. If the effect is a more plausible $30 (the interquartile range in WTA is $30.20), this drops to $2.10.46 The experiment in Section 5 explicitly shows that incentives cause subjects to perceive the same information differently.47
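The bound is simple arithmetic: the largest WTA difference attributable to the shift in video choice is the difference in choice shares times the assumed per-video effect:

```python
share_shift = 0.07                   # difference in pro-video choice shares (7 pp)
for per_video_effect in (60, 30):    # assumed effect of switching videos, in $
    # Upper bound on the WTA difference explainable by video choice alone.
    print(share_shift * per_video_effect)
```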
                      6-min Video               # Clips
                      (1)       (2)       (3)      (4)       (5)      (6)
                      Pro       Con       Pro      Con       Other    Total
High Incentive        0.89      0.11      2.28     0.98      1.07     4.33
                      (0.03)    (0.03)    (0.07)   (0.07)    (0.06)   (0.08)
Low Incentive         0.81      0.19      2.10     1.23      1.06     4.39
                      (0.03)    (0.03)    (0.07)   (0.08)    (0.06)   (0.08)
Difference            0.07**    -0.07**   0.18*    -0.25**   0.01     -0.06
                      (0.04)    (0.04)    (0.09)   (0.11)    (0.08)   (0.11)
Observations          397       397       319      319       319      319
R-squared             0.01      0.01      0.01     0.02      0.04     0.03

Table 4: Information choice by incentive condition. Each subject in the video condition chose exactly one 6-minute video and at least 4 video clips. The number of observations is smaller for the video clips because the first 78 participants at Stanford could not choose any clips.

Are subjects aware of these effects on themselves? On average, participants are not aware of the self-persuasion effect. This is true even though their predictions of the anchoring effect are extremely accurate. This calls into question whether the effect of incentives on subjects' behavior is due to fully rational behavior.48
46 The mean WTAs of subjects who saw different videos differ by $10.60. This comparison depends on an endogenous choice of which video to watch.
47 As an alternative method to answer the same question, one can redo the estimation excluding those subjects
who opted for the con-video. This does not alter any qualitative conclusion. These estimates, however, suffer from
endogeneity bias, since subjects can choose which video to watch.
48 It does not disprove the rationality hypothesis, however. See footnote 5.


To show this, I estimate the following regression model, separately for subjects in the video condition and for subjects in the no-video condition. (Recall that subjects in the video condition predicted the WTA of other subjects in the video condition, and similarly for subjects in the no-video condition.)

\widehat{WTA}_{ics} = \beta_0 + \beta_1 \cdot \mathbf{1}(\text{incentive own}_i = \text{high}) + \beta_2 \cdot \mathbf{1}(\text{incentive other}_c = \text{high}) + \varepsilon_{ics} \quad (2)

Here, \widehat{WTA}_{ics} is subject i's prediction of the least amount of money for which another subject in incentive condition c is willing to eat an insect of species s. In words, I regress subjects' predictions on a dummy that indicates whether the prediction concerns a previous subject facing high or low incentives, and I let the intercept vary depending on whether the subject making the predictions was herself offered the high or the low incentive. I estimate this model using OLS with university and species fixed effects, and cluster standard errors by subject.
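A sketch of how such a dummy-variable regression can be estimated on simulated data; the variable names and data-generating values are illustrative, not the paper's, and the paper additionally includes fixed effects and clustered standard errors:

```python
import random

random.seed(0)

# Simulated predictions: 400 predictors, 10 predictions each.
# True coefficients: beta0 = 20, beta1 = -1 (own incentive), beta2 = 5.
rows = []
for subj in range(400):
    own_high = subj % 2                    # predictor's own incentive dummy
    for _ in range(10):
        other_high = random.randint(0, 1)  # incentive of the predicted subject
        wta_hat = 20 - 1.0 * own_high + 5.0 * other_high + random.gauss(0, 5)
        rows.append((own_high, other_high, wta_hat))

def ols(rows):
    """OLS of y on [1, own, other] via the normal equations X'X b = X'y."""
    XtX = [[0.0] * 3 for _ in range(3)]
    Xty = [0.0] * 3
    for own, other, y in rows:
        x = (1.0, own, other)
        for i in range(3):
            Xty[i] += x[i] * y
            for j in range(3):
                XtX[i][j] += x[i] * x[j]
    A = [XtX[i] + [Xty[i]] for i in range(3)]  # augmented system
    for c in range(3):                         # Gauss-Jordan elimination
        piv = max(range(c, 3), key=lambda r: abs(A[r][c]))
        A[c], A[piv] = A[piv], A[c]
        for r in range(3):
            if r != c:
                f = A[r][c] / A[c][c]
                A[r] = [a - f * b for a, b in zip(A[r], A[c])]
    return [A[i][3] / A[i][i] for i in range(3)]

b0, b1, b2 = ols(rows)
print(round(b2, 2))  # estimate of beta2: close to the true value of 5
```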
Column 1 of Table 5 displays the estimates of equation (2) for subjects in the no-video treatment. It shows that these subjects predicted that other subjects in the no-video treatment would demand an additional $4.84 to eat an insect when offered the high rather than the low incentive. This deviates from the measured effect of $5.08 by just $0.24, or 4.7%. Column 2 shows that subjects in the video treatment predict that the effect of incentives on other subjects in the video treatment is $5.12, and thus very accurately predict the anchoring effect. In reality, however, that effect is countervailed by a sizable self-persuasion effect. These two effects sum to a negative $1.59. Hence, the predictions of subjects in the video treatment are wildly off. On average, subjects lack awareness of the self-persuasion effect.
Even though subjects do not predict the self-persuasion effect, their predictions are affected by it.
This is corroborating evidence for the self-persuasion effect. Apparently, subjects who have both the
incentive and the opportunity to persuade themselves do so. But because they lack awareness of this
effect, they project their own lower willingness to accept onto others. Column 2 of Table 5 shows that
amongst subjects in the video condition, those who were given the high incentive make significantly
lower predictions about the least amount for which previous participants were willing to eat an insect.
For those who could not see a video, this effect is much smaller and not statistically significant.49
These averages mask some individual heterogeneity. Subjects both predict different effects of incentives on others, and are affected differently by incentives. This heterogeneity in both beliefs and behavior may contribute to disagreements about policies that restrict incentives.
To study this heterogeneity, I split the sample into those who predict that incentives lower WTA (consistent with the idea that self-persuasion outweighs anchoring), and those who predict the opposite. I can do so because each subject separately predicted the WTA of previous participants in the high and low incentive conditions. These predictions cannot be an ex-post rationalization of subjects' own behavior, since each subject was in only one treatment. Roughly one third of subjects predict
49 The difference-in-differences is not significantly different from zero, however.


                                  (1)        (2)        (3)        (4)        (5)        (6)
                                       All             Predict incentives    Predict incentives
                                                          decrease WTA         increase WTA
Sample                          No Video   Video     No Video    Video      No Video   Video

Subjects' predictions
  Prediction of effect          4.84***    5.12***   -6.80***    -6.49***   10.64***   11.01***
  of incentives (β2)            (0.75)     (0.64)    (0.77)      (0.67)     (0.69)     (0.57)
  Effect of predictor's         -1.06      -3.38**   -0.09       -5.50**    -1.40      -2.12
  own incentive (β1)            (1.63)     (1.35)    (3.38)      (2.59)     (1.78)     (1.52)
  Constant (β0)                 20.35***   19.55***  28.37***    27.60***   16.28***   15.35***
                                (1.25)     (1.10)    (2.45)      (2.11)     (1.31)     (1.14)

Actual effect of incentives     5.08**     -1.59     3.78        -12.08***  5.89*      3.18
                                (2.53)     (1.98)    (4.52)      (3.70)     (3.07)     (2.27)
Subjects' prediction            -0.25      6.71***   -10.58**    5.60       4.75       7.83***
  minus actual effect           (2.59)     (1.96)    (4.61)      (3.78)     (3.06)     (2.28)

Observations                    3,510      4,958     1,170       1,603      2,340      3,355

Number of subjects
  High incentive                116        162       38          51         78         111
  Low incentive                 118        159       40          57         78         102
Percentage of subjects
  High incentive                100        100       32.8        27.4       67.2       72.6
  Low incentive                 100        100       33.9        31.1       66.1       68.9

Table 5: The first 78 participants at Stanford did not predict others' WTA, and are therefore not included in the regressions in this table. Estimated using university and species fixed effects. *, ** and *** denote statistical significance at the 10%, 5% and 1% levels, respectively.
that higher incentives decrease WTA, and this fraction does not substantively differ across the four treatment cells, as reported in the two bottom rows of Table 5 (p = 0.87, F-test for joint significance). Columns 3 and 4 of Table 5 report the effect of incentives on the WTA of those who predict that self-persuasion outweighs anchoring; Columns 5 and 6 report the effect for those who make the opposite prediction. Amongst those subjects who predict that higher incentives decrease WTA, they in fact do so, by a highly significant $12.08, but only for the 101 of them who were in the video condition. Amongst the remaining subjects, higher incentives increase WTA, but do so significantly (at the 10% level) only for those who could not watch a video.50 Hence, subjects are partially aware of how incentives
50 It is not possible to compare the estimates of subjects' predictions of the effect of incentives to the actual effect on the subsamples in Columns (3)-(6) without additional statistical corrections. This is because those subjects are selected on their prediction of the effect, but not on the associated behavior. Hence, the OLS estimates of subjects' mean predictions within those subsamples are biased.


affect themselves, even though they fail to appropriately account for situational factors such as access to information.
Self-Persuasion or Information Weakens Anchoring? While incentives decrease WTA in
the video condition relative to behavior in the no-video condition, the effect in the video condition
alone is not statistically different from zero. Hence, an alternative to the self-persuasion hypothesis is
that information eliminates anchoring.
This alternative falls short of explaining many aspects of the data.51 First, it cannot explain why
subjects offered the $30 incentive are more likely to accept that offer when they are in the video
condition. To explain the change in participation at $30, the $30 anchor would have to increase the
WTA of some subjects from a value below $30 to a value above $30. The anchoring hypothesis,
however, merely says that valuations will be drawn towards an anchor, not that they will overshoot.
Second, the alternative hypothesis can only explain why giving subjects the video option might
attenuate a positive effect of the incentive on WTA; it cannot explain why higher incentives would
lead to lower WTA. Therefore, it fails to explain why WTA falls as incentives increase for those who
predict that higher incentives decrease WTA (Columns 3 and 4 of Table 5).
Third, the self-persuasion hypothesis naturally explains why subjects in the video treatment make
lower predictions about the least amount for which others would eat an insect when they themselves
are given high rather than low incentives. The alternative hypothesis cannot easily account for this.
Discussion. This experiment provides evidence for the kind of concerns that proponents of limits on incentives have raised.


It shows that incentives can affect subjects' expectations about how unpleasant they will find an experience that affects only themselves. Subjects given both the incentive and the opportunity to persuade themselves that eating an insect is not that unpleasant choose to do so. Therefore, endogenous information acquisition amplifies the effect of incentives, as predicted in Section 3 (Prediction 1). Subjects achieve this partly by demanding more information that encourages participation, and less information that discourages participation, when incentives are high, confirming Prediction 2 of Section 3. Nonetheless, they fail to predict the effect of incentives on others, which suggests that they are not aware of it. This calls into question whether behavior in this experiment is a result of Bayes-rational behavior.
This experiment leaves two open questions, which I address in the next experiment. First, incentives can increase ex post harm only if they increase participation by individuals who subsequently
regret this decision (an increase in false positives). If they instead cause individuals to participate who
would have ex post been pleased with participation even for a lower incentive amount, this increases
51 In light of previous literature, this is not surprising. The participants in Ariely et al. (2003) were all given a sample
of the aversive stimulus (obnoxious noise) before they were subjected to the anchor and revealed their WTA to listen
to more of the same noise, and hence had arguably complete information about that experience. Substantial anchoring
effects were nonetheless present.


welfare (a decrease in false negatives). The insect experiment cannot distinguish between these mechanisms, as it does not contain a measure of subjects' actual experience of eating the insects. The
next experiment uses the induced preference paradigm (Smith, 1976). It thus allows me to separately
measure the effect of incentives on false positives and false negatives.
Second, the experiment allows me to test explicitly whether an increase in optimism due to higher
incentives is consistent with Bayesian rationality, and thus corroborate the suggestive evidence on
deviations from Bayesian behavior obtained in the insect experiment.

5 Experiment: A Losing Gamble

In this experiment I directly address the concern by proponents of limits on incentives that incentives
affect how individuals perceive the (likelihood of) upsides and downsides of a transaction. I explicitly
show that incentives cause subjects to perceive the same information differently. I also show that
incentives make subjects more optimistic in a way that is inconsistent with Bayesian rationality.52
This is relevant because only non-Bayesian decision makers can possibly be made worse off by higher
incentives.
This experiment complements the previous one in three ways. First, I induce subjects' objective function. I can thus separately study the effect of incentives on false negatives and false positives. The distinction matters since an offer to participate in exchange for money can make an individual (ex post) worse off only if the individual participates and subsequently regrets it (false positive), but not if the individual rejects, even by mistake (false negative). Second, this experiment excludes preference-based explanations of the effects I document. Treatments therefore affect subjects' beliefs about outcomes, without altering those outcomes per se. In the insect experiment, by contrast, treatments could affect subjects not only by changing their expectations concerning the experience of eating an insect, but also by altering that experience itself. Third, I replicate the main finding from the insect experiment in a different environment with a different population, and thus demonstrate its robustness.

5.1 Design

The decision environment closely follows the model in Section 3. I incentivize participants to risk losing a money amount B > 0, or nothing (G = 0), each with prior probability 0.5. The participant can freely decide whether or not to take this gamble in exchange for an incentive amount
52 The criterion derived in Section 3 (Proposition 2) is an implication of the law of iterated expectations and thus has two prerequisites. First, subjects cannot have systematically biased priors. Second, the criterion is applicable when the entire distribution of posterior beliefs is observed. This, in turn, requires studying a sample in which the full distribution of signal realizations is observed. (In the insect experiment, by contrast, behavior might potentially be driven by specifics of the particular videos I used.) This experiment meets these criteria as it induces priors, and draws signal realizations separately and independently across subjects and decisions.


m, with B > m > 0. Hence, taking the lottery either leads to a net loss (if the state is bad) or to a
net gain (if the state is good). A participant who does not take the lottery neither gains nor loses.
Structure. The experiment follows a 2 × 2 design. The first dimension varies the incentive amount for participating in the gamble. The second dimension varies whether the subject knows the incentive amount she will be offered at the point she studies information about the consequences of accepting the gamble. Her attention allocation can respond to the incentive only if she knows it. The structure of this experiment thus resembles the previous one. I vary treatment conditions within subjects; the experiment thus proceeds in multiple rounds, each of which follows the same steps.
Timeline. First, the participant either learns the incentive amount m he will be offered in that round (the before condition), or that he will learn that incentive amount in step 3 (the after condition).
Second, he observes information on the consequences of participation. In principle, it perfectly reveals whether or not taking the gamble will lead to a loss, but only at a considerable attentional cost. Specifically, the participant is shown a picture consisting of 450 randomly ordered letters, such as in Panel A of Figure 2. If the state is good (taking the lottery does not lead to a loss), the picture contains 50 letters G and 40 letters B (for good and bad, respectively). If the state is bad (taking the lottery leads to a loss), these numbers are reversed. The participant can examine that picture in any way and for as long as he likes. Implicitly, by choosing how to do this, he chooses the kind and amount of information. For instance, a subject can count all letters to learn with certainty whether the state is good or bad, or she can obtain a noisier signal by focusing on a part of the picture (Caplin and Dean (2013a) and Caplin and Dean (2013b) use a similar methodology).53
Third, the subject learns (in the after-condition) or is reminded (in the before-condition) of the
incentive for taking the gamble, and decides whether or not to participate. This decision is made in a
new screen, and subjects cannot return to the screen with the picture. This prevents subjects in the
after-condition from skewing their attention allocation.54
Up to this point, the experiment lets me study the effect of incentives and skewed information
demand on false positive and false negative rates. The fourth step measures (a monotonic function
of) posterior beliefs about the state in the current round, and hence allows me to test for Bayesian
rationality. Specifically, the subject decides how much money to invest in a project. The agent loses
her money if the state in the current round is bad. If it is good, she wins back the money, plus a
decreasing return. Hence, the more confident she is that the state is good, the larger the investment
amount she optimally chooses. The payoffs are the sum of those from a quadratic scoring rule (e.g.
53 If, for instance, she counts letters until she has either observed one more letter G than B or five
more letters B than G, the probability that she mistakenly takes the lottery is high, and the probability that she
mistakenly rejects it is low.
54 In principle, subjects could take screenshots. The fact that I find significant treatment effects shows that the
majority of subjects did not engage in such behavior. To the extent that they did, my results underestimate the effect
of endogenous attention allocation.


Selten (1998)) and a state-contingent payment that is independent of the subject's choice.55
investment opportunity is presented as in Panel B of Figure 2. A subject who thinks about her degree
of confidence may conclude that her choice in step 3 was not optimal. Therefore the subject can
return to the previous step and alter her choice at any point while making her investment decision.
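Why the optimal investment is monotonic in the posterior can be seen in a small sketch using the grid from footnote 55. I assume risk neutrality and one natural reading of the payoffs (the listed return is received if the state is good, and the invested amount is lost if it is bad); both are illustrative assumptions, not necessarily the paper's exact payoff rule.

```python
# Amounts and returns as listed in footnote 55 (assumed payoff mapping).
amounts = [round(0.15 * k, 2) for k in range(15)]          # $0.00 ... $2.10
returns = [0.00, 1.19, 1.60, 1.87, 2.08, 2.25, 2.39, 2.50,
           2.59, 2.67, 2.74, 2.80, 2.85, 2.89, 2.92]

def optimal_investment(p):
    """Amount chosen by a risk-neutral maximizer with posterior p that the
    state is good: expected payoff is p * return - (1 - p) * amount."""
    return max(zip(amounts, returns), key=lambda ar: p * ar[1] - (1 - p) * ar[0])[0]

beliefs = [i / 10 for i in range(11)]
choices = [optimal_investment(p) for p in beliefs]
# Because the return schedule is concave, the chosen amount never decreases
# as the posterior rises.
assert all(a <= b for a, b in zip(choices, choices[1:]))
print(dict(zip(beliefs, choices)))
```

The same monotonicity holds for any risk attitude that yields a single-crossing payoff comparison, which is the property the analysis relies on.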
Payment. Participants are paid for one randomly selected decision of one randomly selected round.
Hence, they have an incentive to reveal their genuine preferences in each decision. To incentivize them
to scrutinize the pictures, the decision promised at the beginning of the round is selected with 80%
probability. With the remaining 20% probability, the investment decision is selected.

5.2 Implementation and preliminary analysis

I conducted this experiment on the Amazon Mechanical Turk online labor platform with a total of
450 participants in March and October 2015, coded in Qualtrics.56 Instructions are presented on
screen. Before participants can proceed, they have to correctly mark each of 10 statements about the
instructions as true or false.57
A participant who takes a losing bet suffers a loss of G = $3.50. The incentive amounts m are
$0.50 and $3. Hence, the gamble is either win $3 / lose $0.50, or win $0.50 / lose $3, and is presented
in this way. Losses are discounted from a completion payment of $6, gains are added. By comparison,
laborers on Amazon Mechanical Turk typically earn an hourly wage around $5 (Horton, Rand and
Zeckhauser (2011), Mason and Suri (2012)).
Each participant completes all four treatments in individually randomized order.58 Subjects learn
at the beginning of the experiment that in some rounds they may not know the incentive amount
when examining the picture. They are also told that they will be given different lotteries in different
rounds, but no additional detail. For each participant and each round, states of the world are drawn
independently, and a picture with scrambled letters is generated individually.59
55 The possible investment amounts increase in units of $0.15 from $0 to $2.10. The vector of associated returns is
$0.00, 1.19, 1.60, 1.87, 2.08, 2.25, 2.39, 2.50, 2.59, 2.67, 2.74, 2.80, 2.85, 2.89, 2.92. The fact that the optimal choice
depends on risk preferences is immaterial to this analysis. Even if subjects are not risk neutral, their optimal choice is
monotonic in posterior beliefs. This is sufficient to apply the criterion of section 3 to test for violations of the law of
iterated expectations.
56 The results from the 155 subjects who participated in March are both qualitatively and quantitatively very similar
to those reported here, and statistically significant. I obtained the additional observations as a replication check.
57 In case of a mistake, the computer only lets the subject know that at least one of the statements is marked incorrectly.
Hence, it is extremely unlikely that participants complete this task by chance.
58 In addition, each subject participates in two additional rounds, in one of which the bet is win $3 / lose $3, and one
in which it is win $0.50 / lose $0.50. In both of these, the amounts are known from the start. Multiplying both the upside
and the downside of the bet by the same amount leads to a more pronounced S-shape of the investment choices subjects
made in step 4 of the round. This is consistent with rational inattention theory, which predicts that such a change leads
to a second order stochastic dominance increase in posterior beliefs. The 155 participants who participated in March
completed an additional two rounds.
59 Participants cannot use a text editor to automatically count the letters since they are presented in a picture format
(HTML5 Canvas).


Figure 2: Panel (A): Presentation of state and information in the implicit information choice. Panel
(B): Presentation of investment decision.
Preliminary analysis. On average, subjects take 33 minutes to complete the study. Subjects pay
attention to the pictures; they are able to discern the state with a significantly higher likelihood than
chance. Averaged over all treatments, subjects decide to bet 31.33 percent of the time if the state is
bad, and a significantly higher 62.88 percent of the time if it is good.60
18.06% of participants make use of the option to revise their decision about the bet after deciding
about the continuous investment in at least one round. They do so infrequently; in total, only 1.13%
of decisions are changed.61

5.3 Analysis

Endogenous attention allocation amplifies the effect of incentives. Panel A of Table 6 shows
subjects' decisions of whether or not to take the losing gamble by treatment condition. Incentives
increase participation even when subjects examine the picture without knowing whether they will be
offered the large or the small incentive. In this case, the participation rate increases from 23.16%
to 67.80% as the incentive amount is raised from $0.50 to $3. Crucially, the increase is a significant
8.36 percentage points larger when subjects can skew their attention allocation in response to the
incentive offered. This confirms the prediction of the theory (prediction 1 in section 3), and replicates
the respective result of the insect experiment. Moreover, as with that experiment, this difference in
the effectiveness of incentives is almost entirely caused by behavior in the high incentive condition.
60 The time spent examining a picture is right skewed, with a mean of 46 seconds per picture, and a median of 22
seconds.
61 Data on reversals of betting decisions are only available for the 155 subjects who participated in March, due to a
coding error. The stated numbers are the frequencies of revision within this subsample.


Overall, this shows that subjects who know that they will decide whether to take the gamble in
exchange for a high incentive literally perceive the same information differently than those who make
exactly the same decision but do not know this when examining the picture.
Being able to skew attention allocation mainly increases false positives. The offer to take
the gamble in exchange for payment can only make the subject (ex post) worse off if she accepts and
then loses, but not if she fails to accept even though she would have won. I therefore separately study
how the treatments influence the false positive and false negative rates.
Panel B of Table 6 shows that the ability to skew attention allocation according to the incentive increases
participation in the high incentive condition almost entirely through an increase in false positives. If
the state is bad and skewing is not possible, 48.29% of participants take the lottery. If information
demand can be skewed, a highly significant additional 12.98 percentage points of participants take the same
gamble even though they will lose.
By contrast, the ability to skew the information demand has no statistically significant effect on
the rate of false negatives, as Panel C shows. When the state is good, about the same fraction of
participants recognize this, regardless of whether or not they can skew their attention allocation.
This increase in false positives is particularly large compared to subjects ability to discriminate
between the states. Subjects who are offered the high incentive but do not know this when examining
the picture are 40.60 percentage points (s.e. 3.91) more likely to take the bet when the state is good
than when it is bad (88.89% and 48.29%, respectively). The 12.98 percentage point increase in the false
positive rate from being able to skew the information demand is almost a third of this discrimination
ability (31.98%).
Incentives increase optimism in a non-Bayesian fashion. The ability to skew attention in
response to incentives increased subjects' willingness to take the gamble in exchange for high incentives.
To test whether this response is consistent with Bayesian rationality, or whether it reflects a non-Bayesian increase in optimism, I analyze subjects' decisions of how much to invest in the decreasing
returns technology.62, 63 Figure 3 displays the cumulative distribution functions of the invested amount
by treatment condition. (These CDFs pool over the good and bad states, so that the law of iterated
expectations can be tested.)
Even for Bayes-rational decision makers the CDFs may differ across treatment conditions, due to
differences in attention allocation. But for Bayesian decision makers, any pair of CDFs must cross
at least once. If it does not, there is a first order stochastic dominance relationship in a monotonic
62 In principle, the approach of eliciting subjects' posteriors separately from their betting decisions leaves open the
possibility that some posteriors are not instrumental, in the sense that they could be changed without affecting the
betting decision. Such posteriors, however, are incompatible with optimization. If a subject acquires a posterior belief
that could be altered without altering the betting decision with positive probability, then the subject would have been
able to acquire a less informative posterior at a lower informational cost without affecting her choices.
63 Caplin and Dean (2015) provide an alternative measure of rationality that does not rely on separate elicitation of
beliefs data. Appendix C presents this analysis.


Percentage of subjects willing to take the lottery

                                           Incentive   Incentive   Difference
                                           $0.50       $3          High - Low
A. Both states
Learn incentive
  after examining picture                  23.16       67.80       44.64***
  (cannot affect attention)                (1.99)      (2.21)      (3.20)
  before examining picture                 20.83       76.17       55.33***
  (can affect attention)                   (1.92)      (2.61)      (3.05)
Difference before - after                  -2.33        8.36***    10.70***
                                           (2.29)      (2.80)      (3.69)

B. Bad state only (false positives)
Learn incentive
  after examining picture                  10.36       48.29       37.93***
  (cannot affect attention)                (2.05)      (3.28)      (3.90)
  before examining picture                  7.14       61.27       54.13***
  (can affect attention)                   (1.73)      (3.42)      (3.86)
Difference before - after                  -3.21       12.98***    16.20***
                                           (2.57)      (4.33)      (5.14)

C. Good state only (correct positives)
Learn incentive
  after examining picture                  35.96       88.89       52.93***
  (cannot affect attention)                (3.19)      (2.14)      (3.92)
  before examining picture                 34.51       91.06       56.54***
  (can affect attention)                   (3.17)      (1.82)      (3.72)
Difference before - after                  -1.45        2.17        3.62
                                           (3.88)      (2.69)      (4.89)

Table 6: Panel A shows participation rates pooled over states. Exactly half the total weight is given
to observations in which the state is good, and half to those in which the state is bad. Panels B and
C separately show participation rates in the bad and good states, respectively. Standard errors in
parentheses. *, ** and *** denote statistical significance at the 10%, 5% and 1% levels, respectively.
Asterisks are suppressed for levels.
function of posterior beliefs, which contradicts the law of iterated expectations, as shown in section
3.64
The figure suggests that higher incentives systematically increase optimism in a way that is not
consistent with Bayes-rationality, and so does the ability to allocate attention in response to incentives
64 The criterion precludes f.o.s.d. relations in strictly monotonic functions of posterior beliefs. Subjects can only invest
amounts that are a multiple of $0.15. This decision is thus merely weakly monotonic in posterior beliefs. Because the
step size is small, I nonetheless argue that a f.o.s.d. relation indicates a violation of Bayesian rationality.


[Figure 3 appears here. Axes: CDF of Invested Amount (vertical) against Invested Amount (horizontal). Legend: Low Incentive, known before; High Incentive, known before; Low Incentive, known after; High Incentive, known after.]
Figure 3: Cumulative distribution functions of invested amounts for each treatment. Bayesian
rationality precludes first order stochastic dominance amongst any pair of these functions.
when the incentive is high. Specifically, invested amounts are lowest in either of the low incentive
conditions, highest in the high incentive condition when attention allocation can be skewed, and
intermediate when no such skewing is possible (in the f.o.s.d. order). Each of these pairwise comparisons of mean invested amounts is
significant at the 5% level. The difference in differences is
significant at the 10% level.
Because a difference in means is not sufficient for a first order dominance relation, I also calculate
Davidson and Duclos (2000) statistics and use the approach of Tse and Zhang (2004) to explicitly
test for such a relation.65 I find that the distribution of invested amount is significantly smaller in
the first order at the 5% level in either low-incentive treatment than in the high incentive treatment
in which subjects cannot skew their information demand. Additionally, within the high incentive
condition the distribution of invested amounts is significantly larger (in the first order) when subjects
can skew their attention allocation than when they cannot. The latter result suggests that the effect
of incentives on beliefs is not merely due to wishful thinking, but is related to subjects information
demand.66
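The bootstrap logic can be sketched as follows. The data below are hypothetical and the statistic is a deliberate simplification: the actual procedure bootstraps Davidson-Duclos t-statistics, whereas this version only tracks the sign of the largest gap between the empirical CDFs.

```python
import random

def max_cdf_gap(a, b, grid):
    """Largest value of CDF_a(x) - CDF_b(x) on the grid; sample a weakly
    first-order dominates sample b exactly when this gap is <= 0."""
    cdf = lambda s, x: sum(v <= x for v in s) / len(s)
    return max(cdf(a, x) - cdf(b, x) for x in grid)

random.seed(0)
grid = [round(0.15 * k, 2) for k in range(15)]        # investment grid
skew = [0.90, 1.05, 1.20, 1.20, 1.35, 1.50]           # hypothetical amounts
no_skew = [0.45, 0.60, 0.75, 0.90, 0.90, 1.05]        # hypothetical amounts

observed = max_cdf_gap(skew, no_skew, grid)           # <= 0: dominance in the data
# Resample both groups 1000 times and record how often the dominance
# pattern recurs, as a rough gauge of its stability.
share = sum(
    max_cdf_gap(random.choices(skew, k=6), random.choices(no_skew, k=6), grid) <= 0
    for _ in range(1000)
) / 1000
print(observed, share)
```

A formal version would compare the bootstrapped distribution of the test statistic to its observed value, as in the cited papers.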
Discussion. This experiment provides further directional evidence for the kind of concerns that
proponents of limits on incentives have raised.


65 I bootstrap the distribution of the test statistic using 1,000 bootstrap samples for each comparison.
66 There are two caveats to these results. First, subjects made the investment choice after deciding whether or not to
take the bet; hence, their invested amounts could still be influenced by mechanisms such as ex-post rationalization.
This seems unlikely, given that subjects could go back and change their decision on the bet at very low cost while
deciding about the investment amount. Second, subjects' decisions could be influenced by anchoring.
Anchoring, however, can only plausibly explain the difference in invested amounts across the incentive treatments, not the
difference across information conditions within the high incentive condition. The experiment reported in appendix D
addresses these concerns, and finds qualitatively unchanged results.

In this experiment, giving subjects the opportunity to skew their attention allocation in response
to incentives substantially increases the false positive rate, and thus ex post regret, when the incentive
amount is high. This shows that incentives cause subjects to perceive the same information differently.
This is reminiscent of the concern that incentives distort subjects' assessment of the costs and benefits
of participation.
The experiment shows that this response cannot be fully explained by Bayes-rational behavior.
Higher incentives make subjects systematically more optimistic in a way that violates the law of
iterated expectations, and so does the opportunity to skew attention allocation when incentives are
high.
These findings are broadly consistent with the model in section 3, in particular when non-Bayesian
belief updating is accounted for. They also replicate the main findings obtained in the insect experiment in a different setting with a different subject population, and thus corroborate these results.

Policy implications

This paper has several policy implications. They are necessarily qualitative; an important task for
future research is to quantify the magnitude of the mechanism I document within the markets that
specific policies target.
Informed Consent. Comprehensive informed consent requirements are an obvious means to overcome
issues caused by endogenous information acquisition. Current informed consent policies are
unlikely to achieve such an objective, for three reasons.
First, for many transactions there is a dearth of information regarding possible consequences. For
instance, there exists no comprehensive registry of previous egg donors (Bodri, 2013). Hence, any informed consent policy on egg donation is necessarily incomplete, even for issues as centrally important
as the effect of the transaction on one's own fertility and on the likelihood of reproductive cancers.
The more prospective participants attempt to fill this void by seeking out stories and anecdotes, the
more their information gathering may be affected by the mechanisms identified in this paper.
Second, when information is available, it is often difficult to interpret. For instance, of two recent
studies on the long term effects of kidney donation, one estimates that donors are 5 percentage points
more likely to be dead 20 years after donation than comparable non-donors (Mjøen, Hallan, Hartmann,
Foss, Midtvedt, Øyen, Reisæter, Pfeffer, Jenssen and Leivestad, 2014), whereas the other estimates
they are 2 percentage points less likely to be dead after that time (Ibrahim, Foley, Tan, Rogers, Bailey,
Guo, Gross and Matas, 2009). This difference has an abundance of explanations; a decision maker
will have to weigh these explanations, and the different conclusions drawn by the two studies, against one another.
Again, this opens the door for the mechanisms identified in this paper.


Third, any informed consent policy is largely limited to objective information. By contrast, the
decision maker is concerned with the implications of participation for his subjective well-being. For
instance, how much quality of life he would lose from fatigue that may arise as a side effect of
an organ donation (Tellioglu, Berber, Yatkin, Yigit, Ozgezer, Gulle, Isitmangil, Caliskan and Titiz
(2008), Beavers, Sandler, Fair, Johnson and Shrestha (2001)) will always be up to the participant to
determine. Information acquisition about subjective consequences may likewise be influenced by the
mechanisms identified here.
If an informed consent policy aims to address the concerns raised by proponents of limits on
incentives by curtailing the mechanisms identified in this paper, therefore, several steps are necessary.
First, the likelihood of relevant outcomes must be researched. Second, objective information should
be presented in an easily accessible and unequivocal manner. Third, such an informed consent policy
may place more emphasis on ensuring that participants understand the possible consequences of a
transaction on subjective well-being.
Insurance. Insuring participants against adverse outcomes is a frequently discussed policy in markets
for which incentives are capped, for instance in kidney donation (Rosenberg (2015a), Open Letter
To President Obama (2014)).
This paper points to a side effect of such a policy. Insurance may increase the fraction of participants who participate but later regret having done so. The reason is that insurance decreases the
cost of false positives. If information acquisition is costly, the rate of false positive errors may thus
increase. In any concrete policy application, this effect needs to be weighed against the increase in
welfare for those who would otherwise suffer unabated consequences.
Forbid it entirely or do not restrict it. Some commentators criticize limits on incentives with
the following argument. They maintain that either there is a legitimate paternalistic concern to restrict
a certain activity, or there is not. They conclude that in the first case the activity should be
prohibited entirely, and in the second there is no reason to restrict incentives (Lewin (2015), Emanuel
(2004)).67 This paper shows that there may be a middle ground. Since high incentives themselves can
be a reason for paternalistic concerns (for instance if they make subjects overly optimistic), merely
restricting them may be superior to both outlawing the activity, and permitting it unconditionally.
Bait-and-switch. Finally, my findings explain why marketing techniques reminiscent of bait-and-switch
appear to be effective. Prospective military recruits, for instance, are told at the beginning
of the recruitment interview that they may be eligible for a signup bonus of up to several tens of
thousands of dollars in cash and subsidies for college tuition. They then proceed through the entire
67 For instance, the president of Barnard College, Debora L. Spar, states: "Our whole system makes no sense. ... [W]e
should either say, 'Egg-selling is bad and we forbid it,' as some countries do, or 'Egg-selling is O.K., and the horse is
out of the barn, but we're going to regulate the market for safety,'" cited in Lewin (2015).


recruitment interview and take a battery of tests. Only just before signing the contract do recruits
learn the actual bonus they are eligible for; for most it is far lower than the maximum that got
their attention (Cave (2005), McCormick (2007)). My results suggest that this technique is effective
because the prospect of the high bonus causes candidates to favorably interpret the information given
in the recruiting interview, and thus increases their willingness to enroll.68

Conclusion

Around the world there are laws that limit incentives for transactions such as living organ donation,
surrogate motherhood, human egg donation, and clinical trial participation, amongst others. While
these laws do not intend to discourage these activities (altruistic participation is often applauded),
they have been held responsible for shortages of goods such as donor kidneys.
There is an ongoing public discourse on whether these laws should be changed. Proponents of these
laws maintain that they protect people from harm, since incentives might distort decision making.
Opponents have sometimes been puzzled by these arguments. Some have dismissed the proponents'
concerns, and have attributed them to an insufficient understanding of basic economics.
In this paper, I have used standard economic tools to understand the concerns voiced by proponents
of limits on incentives, and I have provided directional evidence for these mechanisms.
I have first developed a conceptual framework in which incentives affect how prospective participants
optimally inform themselves about the possible consequences of a transaction, and thus affect
their expectations. It predicts that incentives change the ex post distribution of outcomes, by increasing
rates of (ex post) mistaken participation and decreasing rates of (ex post) mistaken abstention.
While this mechanism cannot ex ante hurt a Bayes-rational decision maker, it is relevant both because
it may affect the political feasibility of incentive programs, and because concerns about ex post
outcomes are prevalent regardless of whether the actions that led to them were undertaken voluntarily.
By contrast, overly optimistic decision makers can be made ex ante worse off by incentives.
Second, I have conducted an experiment in which subjects are promised either a high or a low
incentive for eating whole insects, and either can or cannot inform themselves about insects as food
before making a decision. This experiment shows that higher incentives can make individuals more
optimistic about how unpleasant they will find the experience of ingesting the bugs. The fact that
subjects are unable to predict this effect in others suggests that they are unaware of it, and thus calls
into question whether it is a consequence of purely rational behavior.
The third part of this paper presents a complementary experiment. It shows that incentives
cause people to perceive literally the same information differently. In that experiment, subjects who
can alter their attention allocation in response to incentives are substantially more likely to commit
68 Similarly, spam from alleged kidney buyers frequently cites compensation amounts of several hundreds of thousands
of dollars. Actual prices paid to sellers are estimated to be no more than several tens of thousands of dollars (Havocscope,
2015).


false positive errors than subjects who cannot when both face a high incentive for participation.
False negative rates, however, are not significantly affected. That experiment also shows that higher
incentives make subjects systematically more optimistic in a way that is inconsistent with Bayesian
rationality.
Overall, my results show that a central concern raised by policy makers and ethicists about the
effect of incentives can be understood using standard tools of economic analysis. My laboratory
experiments provide directional empirical evidence for this effect. My work thus helps bridge a gap
between disciplines. In further research, it is important to quantify the magnitude of these effects
within the domains to which the laws apply, such as human egg donation, or living organ donation.
More broadly, my research contributes to an emerging literature that aims to understand the
motivations behind paternalistic concerns. Such an understanding is crucial for an informed policy
debate.


References
Alabama High School Athletic Association, Handbook, Montgomery, AL July 2015.
Alevy, Jonathan E, Craig E Landry, and John A List, Field experiments on anchoring of
economic valuations, Available at SSRN 1824400, 2010.
Ambuehl, Sandro and Shengwu Li, Belief Updating and the Demand for Information, unpublished manuscript, Stanford University, 2015.
, B Douglas Bernheim, and Annamaria Lusardi, Financial Education, Financial Competence, and Consumer Welfare, NBER working paper 20618, 2014.
, Muriel Niederle, and Alvin E. Roth, More Money, More Problems? Can High Pay be
Coercive and Repugnant?, American Economic Review, Papers & Proceedings, 2015, 105 (5).
Andreoni, James, Deniz Aydin, Blake Barton, B. Douglas Bernheim, and Jeffrey
Naecker, When Fair Isn't Fair: Sophisticated Time Inconsistency in Social Preferences, unpublished manuscript, Stanford University, 2015.
Ariely, Dan, George Loewenstein, and Drazen Prelec, Coherent Arbitrariness: Stable
Demand Curves Without Stable Preferences, The Quarterly Journal of Economics, 2003, 118 (1),
73-105.
Babcock, Linda and George Loewenstein, Explaining bargaining impasse: The role of self-serving biases, The Journal of Economic Perspectives, 1997, pp. 109-126.
Bartling, Björn, Roberto A Weber, and Lan Yao, Do Markets Erode Social Responsibility?,
The Quarterly Journal of Economics, 2015, 130 (1), 219-266.
Basu, Kaushik, The economics and law of sexual harassment in the workplace, The Journal of
Economic Perspectives, 2003, 17 (3), 141-157.
, Coercion, Contract and the Limits of the Market, Social Choice and Welfare, 2007, 29 (4),
559-579.
Beavers, Kimberly L, Robert S Sandler, Jeffrey H Fair, Mark W Johnson, and Roshan
Shrestha, The living donor experience: donor health assessment and outcomes after living donor
liver transplantation, Liver transplantation, 2001, 7 (11), 943-947.
Becker, Gary S and Julio J Elias, Introducing incentives in the market for live and cadaveric
organ donations, The Journal of Economic Perspectives, 2007, pp. 3-24.
Beggs, Alan and Kathryn Graddy, Anchoring effects: Evidence from art auctions, The American Economic Review, 2009, pp. 1027-1039.

Bénabou, Roland and Jean Tirole, Self-confidence and personal motivation, Quarterly Journal
of Economics, 2002, pp. 871-915.
Benoît, Jean-Pierre and Juan Dubra, Apparent overconfidence, Econometrica, 2011, 79 (5),
1591-1625.
Bergman, Oscar, Tore Ellingsen, Magnus Johannesson, and Cicek Svensson, Anchoring
and cognitive ability, Economics Letters, 2010, 107 (1), 66-68.
Bernheim, B. Douglas, Behavioral Welfare Economics, Journal of the European Economic Association, 2009, 7 (2-3), 267-319.
and Antonio Rangel, Beyond Revealed Preference: Choice-Theoretic Foundations for Behavioral Welfare Economics, The Quarterly Journal of Economics, 2009, 124 (1), 51-104.
, Andrei Fradkin, and Igor Popov, The Welfare Economics of Default Options in 401(k)
Plans, American Economic Review, 2015, 105 (9), 2798-2837.
Blackwell, David, Equivalent comparisons of experiments, The Annals of Mathematical Statistics,
1953, 24 (2), 265-272.
Bodri, Daniel, Risk and complications associated with egg donation, in Principles of Oocyte and
Embryo Donation, Springer, 2013, pp. 205-219.
Bogacz, Rafal, Eric Brown, Jeff Moehlis, Philip Holmes, and Jonathan D Cohen, The
physics of optimal decision making: a formal analysis of models of performance in two-alternative
forced-choice tasks., Psychological Review, 2006, 113 (4), 700.
Brenner, Lyle A, Derek J Koehler, and Amos Tversky, On the evaluation of one-sided
evidence, Journal of Behavioral Decision Making, 1996, 9 (1), 59-70.
Brunnermeier, Markus K. and Jonathan A. Parker, Optimal Expectations, American Economic Review, 2005, 95 (4), 1092-1118.
Bundesrepublik Deutschland, Gesetz zum Schutz von Embryonen, §1 Missbräuchliche Anwendung von Fortpflanzungstechniken [Embryo Protection Act, §1: Improper Use of Reproductive Techniques], 1990.
Caplin, Andrew, Measuring and Modeling Attention, Annual Reviews of Economics, forthcoming.
and Daniel Martin, A testable theory of imperfect perception, The Economic Journal, 2014.
and John Leahy, Psychological expected utility theory and anticipatory feelings, Quarterly
Journal of Economics, 2001, pp. 55-79.
and Mark Dean, Rational inattention and state dependent stochastic choice, unpublished
manuscript, New York University, 2013.

Caplin, Andrew and Mark Dean, Rational Inattention, Entropy, and Choice: The Posterior-Based Approach, unpublished manuscript, New York University, 2013.
Caplin, Andrew and Mark Dean, Revealed Preference, Rational Inattention, and Costly Information Acquisition, American Economic Review, 2015, 105 (7), 2183-2203.


Cave, Damien, Critics Say It's Time to Overhaul The Army's Bonus System, New York Times,
August 15, 2005.
Center for Bioethics and Culture, Eggsploitation, www.eggsploitation.com 2011.
Choi, Stephen J, Mitu Gulati, and Eric A Posner, Altruism Exchanges and the Kidney
Shortage, Law & Contemp. Probs., 2014, 77, 289.
Clark, Alan C., The Challenge of Proselytism and the Calling to Common Witness, Ecumenical
Review, 1996, 48 (2), 212-221.
Council of Europe, Medically assisted procreation and the protection of the human embryo: Comparative study on the situation in 39 states, Strasbourg, June 1998.
Cryder, Cynthia E., Alex John London, Kevin G. Volpp, and George Loewenstein, Informative inducement: Study payment as a signal of risk, Social Science & Medicine, 2010, 70,
455-464.
Davidson, Russell and Jean-Yves Duclos, Statistical inference for stochastic dominance and
for the measurement of poverty and inequality, Econometrica, 2000, pp. 1435-1464.
DiTella, Rafael, Ricardo Perez-Truglia, Andres Babino, and Mariano Sigman, Conveniently Upset: Avoiding Altruism by Distorting Beliefs about Others' Altruism, American Economic Review, 2015, 105 (11), 3416-3442.
Eil, David and Justin M. Rao, The Good News-Bad News Effect: Asymmetric Processing of
Objective Information about Yourself, American Economic Journal: Microeconomics, 2011, 3,
114-138.
El-Gamal, Mahmoud A. and David M. Grether, Are People Bayesian? Uncovering Behavioral
Strategies, Journal of the American Statistical Association, 1995, 90 (432).
Elias, Julio J, Nicola Lacetera, and Mario Macis, Markets and Morals: An Experimental
Survey Study, PloS one, 2015, 10 (6).
Elias, Julio, Nicola Lacetera, and Mario Macis, Sacred Values? The Effect of Information on
Attitudes Toward Payments for Human Organs, American Economic Review, Papers & Proceedings, 2015, 105 (5).

Emanuel, Ezekiel J, Ending concerns about undue inducement, The Journal of Law, Medicine
& Ethics, 2004, 32 (1), 100-105.
, Undue inducement: Nonsense on stilts?, The American Journal of Bioethics, 2005, 5 (5), 9-13.
Engel, Christoph and Peter G Moffatt, dhreg, xtdhreg, and bootdhreg: Commands to implement double-hurdle regression, Stata Journal, 2014, 14 (4), 778-797.
Epstein, Richard, The Economics of Organ Donations: EconTalk Transcript, http://www.
econlib.org/library/Columns/y2006/Epsteinkidneys.html 2006.
Ethics Committee of the American Society for Reproductive Medicine, Financial compensation of oocyte donors, Fertility and Sterility, 2007, 88 (2), 305309.
Eyal, Nir, Julio Frenk, Michele B. Goodwin, Lori Gruen, Gary R. Hall, Douglas W.
Hanto, Frances Kissling, Ruth Macklin, Steven Pinker, Lloyd E. Ratner, Harold T.
Shapiro, Peter Singer; Andrew W. Torrance, Robert D. Truog, and Robert M. Veatch,
An Open Letter to President Barack Obama, Secretary of Health and Human Services Sylvia
Mathews Burwell, Attorney General Eric Holder and Leaders of Congress, 2014.
Falk, Armin and Nora Szech, Morals and markets, Science, 2013, 340 (6133), 707711.
Fehr, Ernst and Antonio Rangel, Neuroeconomic foundations of economic choicerecent advances, The Journal of Economic Perspectives, 2011, 25 (4), 330.
Feltman, Rachel, You can earn $13,000 a year selling your poop, Washington Post, January 2015.
Friedman, Milton and Leonard J Savage, The utility analysis of choices involving risk, Journal
of Political Economy, 1948, pp. 279304.
Fudenberg, Drew, David K Levine, and Zacharias Maniadis, On the robustness of anchoring
effects in WTP and WTA experiments, American Economic Journal: Microeconomics, 2012, 4 (2),
131145.
Gneezy, Uri, Silvia Saccardo, Marta Serra-Garcia, and Roel van Veldhuizen, Motivated
self-deception, identity, and unethical behavior, unpublished manuscript, University of California,
2015.
Gottlieb, Daniel, Imperfect memory and choice under risk, Games and Economic Behavior, 2014,
85, 127158.
Grant, Ruth W, Strings attached: Untangling the ethics of incentives, Princeton University Press,
2011.

43

Grether, David M., Bayes Rule as a Descriptive Model: The Representativeness Heuristic, Quarterly Journal of Economics, 1980, 95 (3), 537–557.
, Testing Bayes rule and the representativeness heuristic: Some experimental evidence, Journal of Economic Behavior and Organization, 1992, 17 (1), 31–57.
Griffin, Dale and Amos Tversky, The Weighing of Evidence and the Determinants of Confidence, Cognitive Psychology, 1992, 24, 411–435.
Havocscope, Organ Trafficking Prices and Kidney Transplant Sales, http://www.havocscope.com/black-market-prices/organs-kidneys/, accessed November 8, 2015.
Held, P.J., F. McCormick, A. Ojo, and J.P. Roberts, A Cost-Benefit Analysis of Government Compensation of Kidney Donors, American Journal of Transplantation, 2015, forthcoming.
Hoffman, Mitchell, How is Information Valued? Evidence from Framed Field Experiments, unpublished manuscript, 2014.
Holt, Charles A. and Angela M. Smith, An Update on Bayesian Updating, Journal of Economic Behavior and Organization, 2009, 69, 125–134.
and Susan K. Laury, Risk Aversion and Incentive Effects, American Economic Review, 2002, 92 (5), 1644–1655.
Horton, John J., David G. Rand, and Richard J. Zeckhauser, The online laboratory: Conducting experiments in a real labor market, Experimental Economics, 2011, 14, 399–425.
Huck, Steffen, Nora Szech, and Lukas Wenner, More Effort With Less Pay: On Information Avoidance, Belief Design and Performance, unpublished manuscript, London Business School, 2015.
Ibrahim, Hassan N., Robert Foley, LiPing Tan, Tyson Rogers, Robert F. Bailey, Hongfei Guo, Cynthia R. Gross, and Arthur J. Matas, Long-term consequences of kidney donation, New England Journal of Medicine, 2009, 360 (5), 459–469.
Indiana High School Athletic Association, By-Laws & Articles of Incorporation, Indianapolis, IN, 2015.
Kahneman, Daniel, Jack L. Knetsch, and Richard Thaler, Fairness as a constraint on profit seeking: Entitlements in the market, The American Economic Review, 1986, pp. 728–741.
Kanbur, Ravi, On obnoxious markets, in Stephen Cullenberg and Prasanta Pattanaik, eds., Globalization, Culture and the Limits of the Market: Essays in Economics and Philosophy, Oxford University Press, 2004.
Kentucky High School Athletic Association, Handbook, Lexington, KY, 2014.
Khaleeli, Homa, The hair trade's dirty secret, The Guardian, 28 October 2012.
Kőszegi, Botond, Ego Utility, Overconfidence, and Task Choice, Journal of the European Economic Association, 2006, 4 (4), 673–707.
and Matthew Rabin, Revealed mistakes and revealed preferences, The foundations of positive and normative economics: a handbook, 2008, pp. 193–209.
Kunda, Ziva, The case for motivated reasoning, Psychological Bulletin, 1990, 108 (3), 480.
Leider, Stephen and Alvin E. Roth, Kidneys for sale: Who disapproves, and why?, American Journal of Transplantation, 2010, 10 (5), 1221–1227.
Lewin, Tamar, Egg Donors Challenge Pay Rates, Saying They Shortchange Women, New York Times, October 2015.
Lord, Charles G., Lee Ross, and Mark R. Lepper, Biased assimilation and attitude polarization: The effects of prior theories on subsequently considered evidence, Journal of Personality and Social Psychology, 1979, 37 (11), 2098.
Macklin, Ruth, Due and Undue Inducements: On Paying Money to Research Subjects, IRB, 1981, pp. 1–6.
Malmendier, Ulrike and Klaus Schmidt, You owe me, unpublished manuscript, 2014.
Maniadis, Zacharias, Fabio Tufano, and John A. List, One swallow doesn't make a summer: New evidence on anchoring effects, The American Economic Review, 2014, 104 (1), 277–290.
Martin, Daniel, Strategic pricing and rational inattention to quality, Available at SSRN 2393037, 2012.
Mason, Winter and Siddarth Suri, Conducting behavioral research on Amazon's Mechanical Turk, Behavior Research Methods, 2012, 44, 1–23.
Massey, Cade and George Wu, Detecting Regime Shifts: The Causes of Under- and Overreaction, Management Science, 2005, 51 (6), 932–947.
Massey, E.K., L.W. Kranenburg, W.C. Zuidema, G. Hak, R.A.M. Erdman, Medard Hilhorst, J.N.M. Ijzermans, J.J. Busschbach, and Willem Weimar, Encouraging psychological outcomes after altruistic donation to a stranger, American Journal of Transplantation, 2010, 10 (6), 1445–1452.
Matějka, Filip and Alisdair McKay, Rational inattention to discrete choices: A new foundation for the multinomial logit model, American Economic Review, 2015, 105 (1), 272–298.
McCormick, Patrick T., Volunteers and Incentives: Buying the Bodies of the Poor, Journal of the Society of Christian Ethics, 2007, pp. 77–93.
McKenzie, Craig R.M., Increased sensitivity to differentially diagnostic answers using familiar materials: Implications for confirmation bias, Memory & Cognition, 2006, 34 (3), 577–588.
Michigan High School Athletic Association, The History, Rationale and Application of the Essential Regulations of High School Athletics in Michigan, East Lansing, MI, 2015.
Milgrom, Paul and Chris Shannon, Monotone comparative statics, Econometrica, 1994, pp. 157–180.
Mjøen, Geir, Stein Hallan, Anders Hartmann, Aksel Foss, Karsten Midtvedt, Ole Øyen, Anna Reisæter, Per Pfeffer, Trond Jenssen, and Torbjørn Leivestad, Long-term risks for kidney donors, Kidney International, 2014, 86 (1), 162–167.
Moebius, Markus M., Muriel Niederle, Paul Niehaus, and Tanya S. Rosenblat, Managing Self-Confidence: Theory and Experimental Evidence, unpublished manuscript, 2013.
National Bioethics Advisory Commission, Ethical and policy issues in research involving human participants, Rockville, MD, 2001.
National Collegiate Athletic Association, Division 1 Manual, Indianapolis, IN, July 2015.
National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research, The Belmont Report: Ethical principles and guidelines for the protection of human subjects of research, Bethesda, MD, 1978.
Nickerson, Raymond S., Confirmation bias: A ubiquitous phenomenon in many guises, Review of General Psychology, 1998, 2 (2), 175.
Niederle, Muriel and Alvin E. Roth, Philanthropically Funded Heroism Awards for Kidney Donors, Law & Contemp. Probs., 2014, 77, 131.
Nuremberg Code, in Trials of war criminals before the Nuremberg military tribunals under control council law, Vol. 10, 1949, pp. 181–182.
Peterson, Cameron R. and Lee Roy Beach, Man As An Intuitive Statistician, Psychological Bulletin, 1967, 68 (1), 29–46.
Rabin, Matthew, Cognitive dissonance and social change, Journal of Economic Behavior & Organization, 1994, 23 (2), 177–194.
and Joel L. Schrag, First impressions matter: A model of confirmatory bias, Quarterly Journal of Economics, 1999, pp. 37–82.
Ratcliff, Roger, A theory of memory retrieval, Psychological Review, 1978, 85 (2), 59.
Raven, John C., Guide to the standard progressive matrices: sets A, B, C, D and E, Pearson, 1960.
Rosenberg, Tina, It's Time to Compensate Kidney Donors, The New York Times, August 7, 2015.
, Need a Kidney? Not Iranian? You'll Wait., The New York Times, July 31, 2015.
Roth, Alvin E., Repugnance as a Constraint on Markets, Journal of Economic Perspectives, 2007, 21 (3), 37–58.
Sandel, Michael J., What money can't buy: the moral limits of markets, Macmillan, 2012.
Satel, Sally and David C. Cronin, Time to Test Incentives to Increase Organ Donation, JAMA Internal Medicine, 2015.
Satz, Debra, Why Some Things Should Not Be For Sale: The Moral Limits of Markets, Oxford University Press, 2010.
Selten, Reinhard, Axiomatic characterization of the quadratic scoring rule, Experimental Economics, 1998, 1 (1), 43–62.
Simonson, Itamar and Aimee Drolet, Anchoring effects on consumers' willingness-to-pay and willingness-to-accept, Journal of Consumer Research, 2004, 31 (3), 681–690.
Sims, Christopher A., Implications of rational inattention, Journal of Monetary Economics, 2003, 50 (3), 665–690.
, Rational inattention: Beyond the linear-quadratic case, American Economic Review, 2006, pp. 158–163.
Slowiaczek, Louisa M., Joshua Klayman, Steven J. Sherman, and Richard B. Skov, Information selection and use in hypothesis testing: What is a good question, and what is a good answer?, Memory & Cognition, 1992, 20 (4), 392–405.
Smith, Vernon L., Experimental economics: Induced value theory, The American Economic Review, 1976, pp. 274–279.
Sunstein, Cass R., Daniel Kahneman, David Schkade, and Ilana Ritov, Predictably incoherent judgments, Stanford Law Review, 2002, pp. 1153–1215.
Svitnev, Konstantin, Legal regulation of assisted reproduction treatment in Russia, Reproductive Biomedicine Online, 2010, 20 (7), 892–894.
Tellioglu, G., I. Berber, I. Yatkin, B. Yigit, T. Ozgezer, S. Gulle, G. Isitmangil, M. Caliskan, and I. Titiz, Quality of life analysis of renal donors, in Transplantation Proceedings, Vol. 40, Elsevier, 2008, pp. 50–52.
Toplak, Maggie E., Richard F. West, and Keith E. Stanovich, Assessing miserly information processing: An expansion of the Cognitive Reflection Test, Thinking & Reasoning, 2014, 20 (2), 147–168.
Tse, Yiu Kuen and Xibin Zhang, A Monte Carlo investigation of some tests for stochastic dominance, Journal of Statistical Computation and Simulation, 2004, 74 (5), 361–378.
Tversky, Amos and Daniel Kahneman, Judgment under uncertainty: Heuristics and biases, Science, 1974, 185 (4157), 1124–1131.
van den Steen, Eric, Rational overoptimism (and other biases), American Economic Review, 2004, pp. 1141–1151.
Vatican Radio, Pope Francis meets a group of transplant surgeons, including the mayor of Rome, http://en.radiovaticana.va/news/2014/09/20/pope_francis_meets_a_group_of_transplant_surgeons/1106931, September 20, 2014.
Wald, Abraham, Sequential Analysis, New York: Wiley, 1947.
Woodford, Michael, Inattentive valuation and reference-dependent choice, unpublished manuscript, Columbia University, 2012.
, An Optimizing Neuroeconomic Model of Discrete Choice, National Bureau of Economic Research, 2014.
Yang, Ming, Coordination with flexible information acquisition, Journal of Economic Theory, 2014.
ONLINE-APPENDIX
Sandro Ambuehl
An Offer You Can't Refuse? Incentives Change How We Think

Table of Contents

A Experimental Materials
B Insect Experiment: Additional Analysis
  B.1 Randomization Check
  B.2 Choice Consistency
  B.3 Un-buyable Subjects
  B.4 Robustness Checks
C Online Experiment: Additional Analysis
D Additional Experiment: Explicit Choice of Information
  D.1 Design
  D.2 Analysis
E Model with Continuous State Space
Proofs
References

A Experimental Materials

Figure 4 displays photographs of the insects used for the experiment in section 4. The following is
a transcription of the videos used in that experiment. The videos are available at
https://youtu.be/HiNnbYuuRcA (Why you may want to eat insects) and
https://youtu.be/ii4YSGOEcRY (Why you may not want to eat insects).
Transcription: Why You May Want to Eat Insects Five reasons you should consider eating
insects. For your own personal health, and for the overall health of the planet, and, most importantly,
for your pleasure, you should be eating more insects. This isn't meant as a provocative, theoretical
idea. Here are five very serious reasons why you should consider increasing your insect intake.
First, insects can be yummy. You'd think that insects would have a pungent, unusual aroma. But
they are actually very tasty, and considered a delicacy in many parts of the world. Also, like tofu,
they often take on the flavor of whatever they're cooked with. That's why we are on the verge of a
real insectivorous moment in consumer culture. The Brooklyn startup Exo just started selling protein
bars made from ground cricket flour, and the British company Ento sells sushi-like bento boxes with
cricket-based foods. The restaurant Don Bugito in San Franciscos Mission district offers creative
insect-based foods inspired by Mexican pre-hispanic and contemporary cuisine. I am trying to bring
a solution into the food market which is introducing edible insects [Monica Martinez, owner of Don
Bugito]. New cookbooks are entering the market, such as Daniella Martins Edible, or van Huis
et al.s The insect cookbook. Don Bugitos reviews on yelp are glowing. Most Americans need
some courage to take a bite. But once they do, they are pleasantly surprised. Morgane M., from
Sunnyvale, CA describes her experience: I saw their stand at the Ferry Building farmers market
and decided to take the plunge. I tried the chili-lime crickets and they were surprisingly good! For
the curious-but-apprehensive: the chili-lime crickets taste like flavorful, super crunchy (almost flaky)
chips. That's it. If you've ever had super thin tortilla chips, you'll have an idea what to expect.
Other people liked them even more. For example Nelson Q. from Las Vegas, NV: This Pre-Hispanic
Snackeria has made me a fan .... They had the most interesting menu items of the evening at Off The
Grid ... Would I try insects again??? Yessir!...ALOHA!!! Rodney H. from San Francisco agrees:
It's great! And the mealworms add kind of a nice, savory quality to it. You never would guess that
you're eating an insect.
Second, insects are a highly nutritious protein source. Insects are actually the most ... one of the
most efficient proteins on the planet [Monica Martinez]. It turns out that pound for pound, insects
provide much higher levels of protein compared to conventional meats like beef, chicken, and fish.
While eggs consist of just 12% protein, and beef jerky clocks in at 33%, a single pound of cricket
flour has 65% protein. That's twice as much as you get in beef jerky! Insects also have much higher
levels of nutrients like calcium, iron, and zinc. They are also good sources of vitamin B12. That's an
Figure 4: Insects eaten by subjects. (A) House cricket (acheta domesticus). (B) Mole cricket
(gryllotalpae). (C) Field cricket (gryllus bimaculatus). (D) Mealworm (tenebrio molitor). (E) Silkworm
pupa (bombyx mori).

essential vitamin that's barely found in any plant-based foods and thus can be difficult for vegans to
come by.
Third, our objection to eating insects is arbitrary. Your first reaction to this movie was probably a
sense of dislike. But there's nothing innate about that reaction. For one, billions of people already eat
insects in Asia, Africa, and Latin America every day. More generally, the animals considered to be fit
for consumption vary widely from culture to culture for arbitrary reasons. Most Americans consider
the idea of eating horses or dogs repugnant, even though there's nothing substantial that differentiates
horses from cows. Meanwhile, in India, eating cows is taboo, while eating goat is common. These
random variations are the results of cultural beliefs that crystallize over generations. But luckily,
these arbitrary taboos can be defeated over time. There was a time when raw fish served as sushi
was seen as repugnant in mainstream US culture. Now it's ubiquitous. Soon, insects, which are
closely related to shrimp, may be elegant hors d'oeuvres.
Fourth, insects are more sustainable than chicken, pork, or beef. I think the biggest problem
for the United States right now is we're eating too much cattle, too much meat [Monica Martinez]. Insects
are a serious solution to our increasingly pressing environmental problems. It takes 2000 gallons of
water to produce a single pound of beef, and 800 gallons for one pound of pork. How much do you
think is required for a pound of crickets? One single gallon! Producing a pound of beef also takes
thirteen times more arable land than raising a pound of crickets. It needs twelve times as much feed,
and produces 100 times as much greenhouse gas. These very handsome environmental benefits are
why, just two years ago, the UN released a 200-page report on how eating insects could solve the
world's hunger and environmental problems. Needless to say, the UN strongly advocates for insects as
a food source. And it's not just the UN. In 2011, the European Commission offered a four-million-dollar
prize to the group that comes up with the best idea for developing insects as a popular food.
Five, we already eat insects all the time. The majority of processed foods you buy have pieces of
insect in them. The last jar of peanut butter you bought, for instance, may have had up to 50 insect
fragments. A bar of chocolate can have about 60 fragments of various insect species. Some experts
estimate that, in total, we eat about one or two pounds of insects each year with our food. These
insects pose no health risks. The FDA does set limits, but they are simply set for aesthetic reasons,
in other words, so you don't actually see them mixed into your food.
To summarize, these are five very compelling reasons to give it a try. Five, we already eat insects
all the time anyway. Four, insects are more sustainable and ethical than chicken, pork, or beef. Three,
our objection to eating insects is completely arbitrary. Two, insects are a highly nutritious protein
source. One. Most of people react really, really positive [Monica Martinez]. Insects can be very
tasty!
Transcription: Why You May Not Want to Eat Insects

Four reasons you may want to avoid eating insects. Reason 1. Some cultures eat insects. But to
those of us who are not used to it, insects can be... well, see for yourself. [American tourist in
China] Welcome to eating crazy foods
around the world with Mike. And we're in China. If I've learned one thing about China it's they will
eat absolutely everything. So you have caterpillars and you have butterflies. The pupae is what the
caterpillar turns into before it turns into a butterfly. ... they don't look very appealing at all. But ...
try everything once. So, up to the face. Hhh. [Eats pupae.] Not good. Ugh ... it ... it popped.
It popped! It's just ... it's just too much for me. [Throws remaining pupae into trash bin.] [Bear
Grylls] Whoa! Ready for this? Oh my goodness! Pfh! This one has been living in there a very, very
long time. I'm not gonna need to eat for a week after this. Pfh. [Eats live beetle larva.] Argh! This
actually ranks as one of the worst things I've ever, ever eaten!
Reason 2. Insects have many body parts. Most of those parts we do not usually eat in other
animals. Let's see those parts... [Biology student] Let's take a closer look at some of the structures
we see on this grasshopper. So the first thing I want to point out is that it has six legs. There are
two pairs. Here is the pair of hindlegs. There's a pair in the middle here, on the middle segment of
the thorax. Ok, those are the midlegs. And then there's another pair on the front here, those are
the forelegs or prolegs. Ok? So there's six altogether, all insects have six legs, or three pairs of legs,
it's characteristic of the class. Ok? So we also can see, right up here, there are a pair of wings. On
each side of the body there are two wings. The forewing, k? as in the one in front and this is
the hindwing down here, ok? So there are four wings on this animal. Other insects only have two,
some have none. Now we'll move up to the head. The first thing you'll notice is this pair of long
antennae. Ok, we've seen antennae in other animals. So, clearly, those are involved ... they have a
sensory function. They're usually involved in a tactile, or a touch sensory function. Some of them
are used in chemoreception, which would be like a smell or taste. And speaking of sensory organs,
we got one more here, which we would be remiss to not mention, uhm, which is the large compound
eye here. So, I've made an incision on the dorsal surface of this grasshopper. Ok? And I've peeled
back the exoskeleton. And before I go digging too much, uhm, it's going to be difficult to see many
structures, but on these individuals it's very easy to see, uhm, all of these very large and pronounced
little sort of tubular looking structures. There's one right there. Those are all eggs.
Reason 3. When you eat an insect, you eat ALL of it. In particular, its digestive system, including
its stomach, intestine, rectum, anus, and whatever partially digested food is still in there. [Biology
student] Now, if we move on to the digestive system... there is a mouth, of course, we talked about
that being down here, ok? The mouth opens into a small pharynx, ok? And then it basically opens up
into this large, dark, thin-walled sack right here, ok? This is the crop. Ok, so this is basically a food
storage pouch right in here. So ... getting to the stomach, thats what we find next, this thin-walled,
sort of darker colored sack right here, which I've just broken a little bit, that, uhm, is the stomach,
all in here, ok? Below the stomach we find this slightly darker and a bit more muscular tube right
here. That is the intestine. And the intestine opens into a short rectum and an anus.

Reason 4. Edible insects are perfectly safe to eat. Nonetheless, we tend to associate insects with
death and disease. Even if we know that eating some insects is harmless, this association is difficult
to overcome. [Nature film maker] Just a few days ago, one of those gaur was killed by a tiger in the
night. This carcass is now probably about five days old, and, as you can see, absolutely writhing with
maggots of many different species.

B Insect Experiment: Additional Analysis

B.1 Randomization Check

The four treatments are balanced across demographic characteristics. Table B.7 displays summary
statistics of these variables by treatment. For each variable, the table reports the p-value of an F-test
for differences in the mean value of the variable across treatments. Of 24 tests conducted, one is
significant at the 5% level, and an additional three are significant at the 10% level. This is within the
expected range.
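The claim that these marginal test statistics are within the expected range can be checked with a quick binomial calculation. This sketch is illustrative only, not the paper's code, and it assumes the 24 tests are independent (only an approximation here, since the demographic variables are correlated):

```python
from math import comb

def prob_at_least(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p): the chance that at least k of n
    independent tests reject at significance level p under the null."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

n_tests = 24
p_one_at_5 = prob_at_least(1, n_tests, 0.05)    # at least 1 rejection at 5%: ~0.71
p_four_at_10 = prob_at_least(4, n_tests, 0.10)  # at least 4 rejections at 10%: ~0.21
```

So even under perfect randomization, one rejection at the 5% level and four at the 10% level together are unremarkable.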

B.2 Choice Consistency

A participant reveals inconsistent choice behavior if she rejects a transaction at price p in the MPL in
step 3 of the experiment, but accepts it in the treatment decision in step 4, or vice versa.1 Table B.8
details the fraction of each of these types of inconsistencies by treatment. It shows that subjects in
the low incentive treatments tend to state WTA that are too high relative to their behavior in their
$3-treatment decision. No such directional bias is evident for subjects in the high incentive condition.
Importantly, this does not point to a difference across treatments, as the treatment decisions are
different.
The fraction of inconsistent decisions is somewhat higher than is usually found in the literature on
decision making under explicit risk, in which inconsistencies are identified by means of multiple switching in a price list (e.g. Holt and Laury (2002)). A variety of factors can account for this divergence.
First, the decisions that reveal inconsistencies in the current experiment are temporally separated,
whereas they are typically presented at once in the risky decision making literature. Second, in the
risky decision making literature, the likelihood that one of two decisions that reveal an inconsistency
will be selected for implementation is typically equal, whereas they are highly asymmetric in the
present study.
1 The choices subjects made in step 6 of the experiment cannot reveal any inconsistencies, as they are made with
different information about the transaction than the treatment decisions in step 4.
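The two inconsistency types in Table B.8 follow mechanically from comparing the WTA implied by a subject's MPL switch point with her binary treatment decision. A minimal sketch with hypothetical data (the function and variable names are illustrative, not from the experiment's code):

```python
def classify(wta, offer, accepted):
    """Label a subject's pair of choices as consistent or as one of the
    two inconsistency types reported in Table B.8.

    wta:      willingness-to-accept implied by the MPL switch point
    offer:    treatment-decision price ($3 or $30)
    accepted: True if the subject took the offer in the treatment decision
    """
    if wta > offer and accepted:
        return "WTA above offer, yet accepted"
    if wta < offer and not accepted:
        return "WTA below offer, yet rejected"
    return "consistent"

# Three hypothetical subjects in the $3 condition: (MPL WTA, offer, accepted?)
choices = [(10, 3, True), (2, 3, False), (5, 3, False)]
labels = [classify(w, o, a) for w, o, a in choices]
```

Only the third subject's pair of choices is mutually consistent.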

Treatment condition:
  Incentive                                         $30      $3       $30      $3
  Video                                             Yes      Yes      No       No
Variable                                            Mean                            p-value
Male                                                0.52     0.52     0.56     0.55    0.99
Age                                                 21.41    22.01    21.38    21.31   0.37
Ethnicity
  African-American                                  0.05     0.06     0.06     0.07    0.73
  Caucasian                                         0.57     0.52     0.59     0.56    0.32
  East Asian                                        0.19     0.26     0.20     0.23    0.31
  Hispanic                                          0.07     0.08     0.05     0.04    0.99
  Indian                                            0.03     0.04     0.04     0.07    0.57
  Other                                             0.08     0.05     0.07     0.04    0.67
Monthly spending in USD                             249.50   300.64   286.45   289.63  0.39
Year of study (a)                                   3.51     3.60     3.60     3.47    0.35
Graduate student                                    0.13     0.15     0.12     0.05    0.07
Field of study
  Arts and humanities                               0.16     0.09     0.14     0.11    0.04
  Business or economics                             0.27     0.35     0.34     0.43    0.12
  Engineering                                       0.20     0.16     0.11     0.12    0.52
  Science                                           0.20     0.23     0.27     0.23    0.43
  Social science (excl. business and economics)     0.17     0.17     0.14     0.11    0.58
Political orientation (b)                           0.50     0.32     0.25     0.08    0.08
Raven's score (c)                                   14.85    14.75    14.66    14.70   1.00
CRT score (d)                                       3.78     3.80     3.55     3.24    0.11
Experience with insects as food
  Has intentionally eaten insects before            0.19     0.22     0.19     0.19    0.74
  Grown up in culture that practices entomophagy    1.30     1.28     1.26     1.31    0.95
  Grown up eating mostly western foods              0.81     0.73     0.82     0.78    0.09
  Had a pet that fed on store-bought insects        0.26     0.26     0.23     0.27    0.70
  Knew that this study concerns insect eating       2.63     2.46     2.53     2.49    0.32

Table B.7: Summary statistics and randomization check. The last column displays the p-value of
the test of joint significance of a regression of the indicated variable on treatment dummies.
(a) Year of study only includes undergraduate students.
(b) Political orientation is measured on a scale of -2 (conservative) to 2 (liberal).
(c) Raven's score is measured on a scale of 0 to 24.
(d) CRT score is measured on a scale of 0 to 6. Questions from the extended version are included (Toplak et al., 2014).

                                                        Video     No Video
Low Incentives
  WTA > $3 in MPL, accept $3 in treatment decision      15.31%    16.44%
  WTA < $3 in MPL, reject $3 in treatment decision       1.33%     3.85%
  Total                                                 16.63%    20.30%
High Incentives
  WTA > $30 in MPL, accept $30 in treatment decision     4.88%     8.18%
  WTA < $30 in MPL, reject $30 in treatment decision     5.77%     6.36%
  Total                                                 10.65%    14.55%

Table B.8: Inconsistent choices across the multiple price lists before the distribution of the insects,
and the treatment decisions.

B.3 Un-buyable Subjects

Here I provide further data on the fractions and behavior of participants who refuse all ten offers to
eat an insect in exchange for $60 (the highest amount offered in the study), as well as on
those who always agree to eat any insect for free. Participants who reject every offer in every price
list also reject most offers in the treatment decisions in step 4 of the experiment.2 Amongst these
participants, 291 of 300 treatment decisions are rejected (97%), and 58 of 60 of them (96.7%) reject
all offers in the treatment decisions. All participants who accept every offer in every MPL also accept
every offer in every treatment decision.
Table B.9 lists the frequencies of each type by treatment condition. For un-buyable subjects, the
null hypothesis that the treatment dummies are jointly insignificant cannot be rejected (p = 0.6), and any
pairwise comparison of treatments is insignificant with a p-value of at least 0.25. For participants who
accept every offer, the null of joint insignificance cannot be rejected either (p = 0.33), and any pairwise
comparison of treatments is insignificant with a p-value of at least 0.37.

                    Video    No Video
Always reject
  Low Incentives    8.28%    12.59%
  High Incentives   9.25%    12.12%
Always accept
  Low Incentives    3.82%     1.48%
  High Incentives   2.47%     2.27%

Table B.9: Fractions of participants who reject (accept) every offer in every MPL, by treatment
condition.

B.4 Robustness Checks

The analysis in section 4 excludes un-buyable subjects. Table B.10 replicates Panel A of Table 3
including those subjects. The effect of information acquisition on participation rates within the high
incentive treatment remains significant, and nearly unchanged in magnitude. The estimate of the
difference in differences loses statistical significance (p = 0.12), but remains similar in magnitude.
For ease of presentation, the analysis in section 4 estimates linear regression models of WTA
excluding un-buyable subjects. This abstracts from two considerations. First, even amongst the
included subjects, a sizable fraction of decisions are censored above (the WTA of the subject for a
particular insect species exceeds $60). Second, random noise may cause a subject to appear un-buyable
in the data, even though she actually is not. This stochastic selection potentially affects the variance
of the parameter estimates.
2 Behavior in the treatment decisions in step 4 of the experiment cannot be used for classification, as this would
introduce differential selection across treatments.

I address both of these issues simultaneously by estimating a panel double hurdle model with
correlated errors (Engel and Moffatt, 2014).3 Briefly, in that model, a participant is one of two types.
An un-buyable type never participates; all his observations are censored for every insect. As in a Probit
model, the type is determined by whether a latent continuous variable exceeds a threshold (this is
the first hurdle). The other type potentially participates, but may also have censored observations,
possibly all of them (this is the second hurdle). These decisions are estimated as in a Tobit model. The
estimation procedure jointly estimates the probability of the types, and the effect of the treatments
on WTA of the subjects who are not un-buyable. It allows for correlation between the classification
error and the effect size errors, and thus differs from a Tobit estimation of the treatment effects on
subjects who are not un-buyable.
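To fix ideas, the data generating process that the double hurdle model posits can be simulated directly. The sketch below is illustrative only: the parameter values are arbitrary, the two errors are drawn independently, and it is not the estimation code (which footnote 3 attributes to Stata's xtdhreg):

```python
import random

CENSOR = 60  # highest price offered in any MPL

def simulate_subject(p_unbuyable=0.10, mu=30.0, sigma=15.0, n_insects=5,
                     rng=random):
    """Draw one subject's WTA vector from a stylized double-hurdle DGP.

    Hurdle 1 (Probit-like): with probability p_unbuyable the subject is an
    un-buyable type, whose WTA is censored at $60 for every insect.
    Hurdle 2 (Tobit-like): otherwise each insect's WTA is a latent normal
    draw, censored from above at $60.
    """
    if rng.random() < p_unbuyable:             # first hurdle: type
        return [CENSOR] * n_insects            # censored for every insect
    return [min(rng.gauss(mu, sigma), CENSOR)  # second hurdle: censored WTA
            for _ in range(n_insects)]

rng = random.Random(0)
sample = [simulate_subject(rng=rng) for _ in range(1000)]
share_all_censored = sum(all(w == CENSOR for w in ws) for ws in sample) / 1000
```

Note that in this DGP a buyable subject can also appear fully censored purely by chance, which is exactly the stochastic selection problem that the joint estimation addresses.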
Percentage of subjects willing to eat insects for the promised amount (n = 663)

                              Offered $3,    Offered $30,   Difference
                              accept $3      accept $30     $30 - $3
No Video                      36.36          59.85          23.48***
                              (3.59)         (3.59)         (5.03)
Video                         37.67          71.30          33.63***
                              (3.16)         (2.68)         (4.07)
Difference Video - No Video   -1.31          11.45**        10.14
                              (4.83)         (4.56)         (6.46)

Table B.10: This table replicates Table 3 including the un-buyable subjects.
Table B.11 displays the estimated parameters. In Column 1, the selection equation includes only a
constant. In Column 2, selection may depend on the video condition. This accounts for the fact that
there is a slightly smaller fraction of un-buyable subjects in the video condition than in the no-video
condition. Estimates are very similar across these two specifications. The treatment effects exceed
those obtained from the OLS estimation on the sample of buyable subjects, presumably because
censoring is properly accounted for.
To account for the interval-coded nature of the WTA data, Column 3 presents the estimates of an
interval regression, excluding un-buyable subjects. Again, the estimates are highly similar to those of
the panel double hurdle models, but slightly attenuated.
Finally, Column 4 presents the OLS estimates on the subsample of buyable subjects, controlling
for demographic characteristics, Raven's and CRT scores, and a vector of variables relating to previous
experience with insects. Again, the estimates of the treatment effects are highly similar. Many control
variables do not significantly affect WTA, with some notable exceptions. Males' WTA is about $5
lower than females', a $1 increase in monthly spending leads to about a $0.01 increase in WTA, and
3 I estimate the model using the Stata statistical package with the command xtdhreg by the same authors.

having intentionally eaten an insect before, or having experience with a pet that feeds on store-bought
insects, decreases WTA by $2 to $7. Finally, subjects with a higher IQ score are more willing to eat
insects: each additional correct answer to a Raven's matrix decreases WTA by $0.30 to $0.40. All of
these effects are significant at the 5% threshold. Perhaps surprisingly, ethnicity does not significantly
affect WTA to eat insects.
                                 (1)         (2)         (3)           (4)
Dependent variable: WTA before distribution
Difference-in-differences       -7.41**     -7.41**     -7.41*        -6.29**
                                (3.00)      (2.99)      (3.93)        (2.99)
Effect of access to video
  In $3 treatment               -0.12       -0.18        0.74          0.40
                                (2.10)      (2.11)      (2.81)        (2.14)
  In $30 treatment              -7.53***    -7.59***    -6.67**       -5.88***
                                (2.15)      (2.15)      (2.89)        (2.24)
Effect of incentives
  In no video treatment          5.41**      5.39**      5.57*         5.15*
                                (2.25)      (2.45)      (3.08)        (2.37)
  In video treatment            -2.00       -2.01       -1.83         -1.13
                                (1.98)      (1.98)      (2.44)        (1.83)
Correlation in error terms       0.33***     0.32**      -             -
                                (0.13)      (0.13)

Model                           Panel       Panel       Interval      OLS
                                Hurdle      Hurdle      Regression
Explanatory variables
in first hurdle                 Constant    Video       -             -
Controls                        -           -           -             Yes
Sample                          All         All         excluding     excluding
                                                        un-buyable    un-buyable
Subjects                        663         663         603           603
Observations                    3218        3218        2918          2918
Table B.11: All regressions include species and university fixed effects and cluster standard errors
by subject. Controls are gender, age, monthly spending, CRT score, Raven's score, dummies for race
and college major, and self-reports of whether the subject has intentionally eaten insects before, grown
up eating mostly western foods, grown up in a culture that practices entomophagy, or ever had a pet
that fed on store-bought insects.

Online Experiment: Additional Analysis

Applying the rationality criterion by Caplin and Dean (2015). For completeness, I here
apply the rationalizability criterion by Caplin and Dean (2015). I use the decision whether or not
to risk the loss of money in exchange for the promised amount alone. (If the continuous investment
decision is also considered, we already know that behavior deviates from Bayesian rationality, and is
thus not rationalizable.) The experiment is not explicitly geared towards applying Caplin and Dean's
criterion, which lacks discriminatory power in this setting. In fact, there are almost as many free
parameters as data points.4 It is thus not surprising that there exists a concave utility function and
a cost of information function that can rationalize the data.
I choose the following notation: u(3) = 3α_1, u(0.5) = 0.5α_2, u(0) = 0, u(−0.5) = −0.5α_3, and
u(−3) = −3α_4. I impose the restriction that utility is concave, and hence α_1 ≤ α_2 ≤ α_3 ≤ α_4.
Without loss of generality, I set α_2 = 1.
I find that for α_2 = α_3 = α_4 = 1 and α_1 ∈ [0.54, 0.72], the behavior of subjects in the online
experiment satisfies the rationality conditions of Caplin and Dean (2015). I derive this as follows.
Translated into the notation of the current paper, Caplin and Dean's NIAS condition is

    p_G ≥ max{γ p_B, γ p_B + δ},

where γ = [0 − u(θ_B + m)] / [u(θ_G + m) − 0] and δ = [u(θ_G + m) + u(θ_B + m) − 0] / [u(θ_G + m) − 0].

Inserting parameters, I find that for the low incentives case, we have γ = 3α_4 / 0.5 = 6α_4 and
δ = 1 − 6α_4. Because α_4 ≥ 1, NIAS reduces to

    p_G^l ≥ 6α_4 p_B^l.

Inserting the respective values from Table 6 then yields 0.35 ≥ 6α_4 · 0.07. Because α_4 ≥ 1, this
condition is violated for the point estimates. If α_4 = 1, however, there are values within the 95%
confidence intervals of the point estimates for which the condition is satisfied. By concavity of the
utility function we then also have α_3 = 1. I proceed on this assumption.
For the high incentive case, we thus obtain γ = 1/(6α_1) and δ = 1 − 1/(6α_1). For the case that
δ > 0, NIAS thus is

    p_G^h ≥ (1/(6α_1)) p_B^h + 1 − 1/(6α_1).

Inserting the measured probabilities, this is equivalent to α_1 ≤ 0.72.
The NIAC condition is

    Δ{p_G [u(θ_G + m) − 0]} + Δ{(1 − p_B)[0 − u(θ_B + m)]} ≥ 0,

where Δ(x) denotes the difference in some variable x across the high and low incentive conditions.
Inserting values, we get (0.91 − 0.35)(3α_1 − 0.5) + (0.39 − 0.93)(0.5 + 3) ≥ 0, or, equivalently, α_1 ≥ 0.54.
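The NIAS logic can also be checked mechanically. The sketch below (Python; the numerical inputs in the example are placeholders, not the paper's estimates) verifies, for a binary accept/reject problem with outside option 0, that accepting is optimal at the posterior induced by acceptance and rejecting is optimal at the posterior induced by rejection.

```python
def nias_satisfied(pG, pB, uG, uB, prior=0.5):
    """Caplin-Dean NIAS check for a binary accept/reject problem.

    pG, pB: probability of accepting in the good / bad state.
    uG, uB: utility of accepting in the good / bad state (rejecting yields 0).
    """
    # accepting must be weakly optimal at the posterior induced by acceptance
    accept_value = prior * pG * uG + (1 - prior) * pB * uB
    # rejecting must be weakly optimal at the posterior induced by rejection
    reject_value = prior * (1 - pG) * uG + (1 - prior) * (1 - pB) * uB
    return accept_value >= 0 and reject_value <= 0

# A rule that accepts mostly in the good state passes;
# one that accepts mostly in the bad state does not.
print(nias_satisfied(0.9, 0.1, 1.0, -1.0))  # True
print(nias_satisfied(0.1, 0.9, 1.0, -1.0))  # False
```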
How do incentives affect ex post regret amongst those who take the gamble? The analysis
in the main text has shown that endogenous attention allocation significantly increases the false
positive rate when incentives are high, but leaves the false negative rate approximately unchanged.
Here I show that these effects are sufficiently pronounced that a stronger result holds: endogenous
attention allocation substantially increases the fraction of subjects who ex post regret their decision,
amongst those who take the gamble, when incentives are high (though the p-value of the estimate is 0.13).
4 The four data points are the conditional participation probabilities in the case of high and low incentives. The
three choice variables parametrize the utility function. Additionally, the criterion tests for the existence of a cost-of-information function that can rationalize the data. Implicitly, therefore, that function is an additional free parameter.


Table C.12 shows decisions that are ex post regretted, as a fraction of all decisions to accept,
separately for each treatment. Even in the after condition, the incidence of ex post regret is a
significant 12.84 percentage points higher with higher incentives. An explanation is that the possible
net loss subjects can incur is smaller when incentives are higher, so that with heterogeneous risk
preferences, a larger fraction of participants is willing to accept, even holding constant posterior
beliefs.
Importantly, the effect is substantially stronger in the before condition, in which endogenous attention
allocation is possible. The effect on ex post regret almost doubles, to 23.08 percentage points.
Because these statistics include only a subset of the observations, however, this sizable increase is not
statistically significant (p = 0.13).
Percentage of subjects who ex post regret amongst those who take the bet (n = 853)

                              Accept for   Accept for   Difference
                              $0.50        $3           High - Low
Learn incentive
  After examining picture     22.36        35.20        12.84***
  (cannot skew search)        (4.10)       (2.71)       (4.64)
  Before examining picture    17.15        40.22        23.08***
  (can skew search)           (3.91)       (2.69)       (4.47)
Difference before - after     -5.22        5.02         10.23
                              (5.73)       (3.69)       (6.79)

Table C.12: Percentage of subjects who ex post regret amongst those who take the bet.

Do incentives make subjects ex ante worse off? In principle, incentives can make non-Bayesian
decision makers ex ante worse off. In this experiment, this is not the case, possibly because the increase
in incentives is nearly twice the expected loss from taking the lottery. The expected payoff is $1.21***
(s.e. 0.08) when the incentive is high, regardless of whether or not the incentive is known when the
subject examines the picture. It is -$0.02 (s.e. 0.03) when the incentive is low and this is known, and
-$0.06* (s.e. 0.03) when the incentive is low but this is unknown.5
How far can subjects deviate from Bayes-rational behavior before incentives make them
ex ante worse off? We can estimate the extent of optimism required for incentives to make subjects
ex ante worse off. I find that with the decision making strategies subjects have deployed in this
5 Additionally, even though the ability to skew information search in response to incentives significantly increases
the false positive rate without significantly affecting the false negative rate, giving people this opportunity does not
decrease their expected payoff. (It insignificantly increases, by $0.18 (s.e. 0.18).) This is due to the fact that false
negatives are six times as costly as false positives in this experiment.


experiment, deviations from Bayesian rationality must be rather substantial before incentives make
subjects ex ante worse off.
Specifically, I make counterfactual assumptions about the parameters of the decision problems. I
calculate the expected payoff subjects would have obtained under these parameters, holding constant
their actual behavior, which reflects the belief that the probability of the good state is 0.5 and the
loss from taking the gamble in the bad state is $3.50.
First, I assume that the probability of the good state is less than 0.5, so that subjects' behavior is
based on overly optimistic prior beliefs. For the condition in which subjects know the incentive amount
when scrutinizing the picture, the increase in incentives makes subjects worse off if the probability of
the good state is between 0 and 2.86%.
Second, I assume that the amount of money θ_B a subject can lose in the bad state is larger than
$3.50, so that subjects' behavior is based on an underestimation of the downside. In this case, an
increase in incentives of $3 makes subjects worse off if the actual loss from taking the gamble in the
bad state is at least $4.56 higher than the $3.50 loss subjects (correctly) believed would occur.
Hence, with the decision making strategies subjects have deployed in this experiment, the $3
increase in incentives used in this experiment would not make them ex ante worse off even if they
deviated from Bayesian rationality quite substantially.

Additional Experiment: Explicit Choice of Information

After having completed the experiment described in section 4, the same subjects completed the
experiment described here.6 It essentially parallels the online experiment in section 5, but differs in that
subjects are given an explicit choice between two different information structures, and attentional
mechanisms are minimized.
This experiment complements the experiments in sections 4 and 5 of the main text in two ways.
First, it shows that when facing an explicit choice between information structures, subjects' choices
align with the theoretical predictions even in an abstract setting in which choosing information
structures optimally is far from trivial. Second, it addresses two potential shortcomings of the online
experiment. On the one hand, the effects of incentives on the continuous investment choice in the
online experiment are possibly exacerbated by anchoring. On the other hand, subjects made this
choice after having decided whether or not to risk the loss of money in exchange for the promised
amount. Even though subjects could change this decision at any point while they were deciding
about their continuous investment choice, they are possibly influenced by ex-post rationalization. In
the experiment reported here, anchoring countervails rather than exacerbates the effects of incentives
on optimism. Moreover, subjects make a choice that reveals their posterior beliefs before they decide
whether or not to risk the loss of money.
6 This experiment was not administered to the first 78 Stanford students.


D.1 Design

The design largely parallels that of the online experiment in section 5. I incentivize subjects to risk
losing $10, or nothing, each with prior probability 0.5. They can freely decide whether or not to
take this gamble in exchange for an incentive amount m.
The experiment follows a 2 × 2 within-subjects design. The first dimension varies the incentive
amount for participating in the gamble, which is either high (m ≥ 7) or low (m ≤ 3). There are six
different incentive amounts (m ∈ {1, 2, 3, 7, 8, 9}) to prevent subjects from repeatedly having to make
the literally same decision.7 The second dimension varies whether the subject knows the incentive
amount she will be offered at the point she studies information about the consequences of accepting
the gamble (the before condition), or whether she lacks that knowledge (the after condition).
The experiment proceeds in multiple rounds, each of which follows the same four steps. First, the
participant either learns the incentive amount m he will be offered in that round, or that he will learn
that incentive amount in step 3.
Second, the subject chooses to observe one of two information structures. Both information structures
produce signals G and B. The left-skewed information structure is given by Il = (0.9, 0.5).
It produces signal G with probability 0.9 if the state is good, and with probability 0.5 if the state is
bad. The right-skewed information structure is given by Ir = (0.5, 0.1). It produces signal G with
probability 0.5 if the state is good, and with probability 0.1 if the state is bad. This decision concerns
the kind rather than the amount of information, in the sense that both information structures have
the same 0.7 unconditional probability of producing a signal that matches the true state. The subject
first selects the information structure. He then learns (in the after condition) or is reminded (in the
before condition) of the incentive for taking the gamble, and observes a signal realization.
Third, the subject reveals his confidence about whether or not the lottery will lead to a loss.
Specifically, he decides, on each line of a multiple decision list, whether or not to risk losing $10 in
case the state is bad, in exchange for incentives that vary between $0 and $10 (inclusive) in steps
of $0.50. To lighten the participant's cognitive load, the incentives and lottery are not presented as
separate entities. Instead, on each line corresponding to incentive m′, the subject decides whether or
not to take a lottery that pays m′ in the good state and (m′ − 10) in the bad state.
Fourth, the subject decides whether or not to participate in the gamble for the incentive offered.
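As a cross-check on the design's symmetry claim, the following sketch computes, for each information structure, the unconditional probability that the signal matches the state and the Bayesian posteriors the signals induce (Python; a minimal calculation under the 50-50 prior stated above).

```python
def signal_match_prob(info, prior=0.5):
    """P(signal matches state): G in the good state or B in the bad state."""
    pG_good, pG_bad = info  # P(G | good state), P(G | bad state)
    return prior * pG_good + (1 - prior) * (1 - pG_bad)

def posterior_good(info, signal, prior=0.5):
    """Bayesian posterior probability of the good state after the signal."""
    pG_good, pG_bad = info
    if signal == "G":
        return prior * pG_good / (prior * pG_good + (1 - prior) * pG_bad)
    return (prior * (1 - pG_good)
            / (prior * (1 - pG_good) + (1 - prior) * (1 - pG_bad)))

Il, Ir = (0.9, 0.5), (0.5, 0.1)
print(signal_match_prob(Il), signal_match_prob(Ir))  # both ~0.7
print(posterior_good(Il, "B"))  # a B from Il is bad news: ~0.167
print(posterior_good(Ir, "G"))  # a G from Ir is strong good news: ~0.833
```

The structures thus carry the same amount of information in this sense but skew it toward different signals.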
Framing. The experiment is presented in the context of a fisherman fishing from one of two ponds,
called the red fish pond and the striped fish pond (see Figure 5).
The fisherman randomly decides which pond to fish from. This corresponds to the state of the
world, and determines whether or not a subject who takes the gamble will lose. Taking the lottery
does not lead to a loss if the fisherman is fishing from the red fish pond, but leads to a loss if he is
7 Hence, subjects face one of the following pairs of state-contingent payoffs (m, m − 10):
{(9, −1), (8, −2), (7, −3), (3, −7), (2, −8), (1, −9)}.

Figure 5: Presentation of information in the explicit information choice experiment. The state of the
world corresponds to the pond the fisherman is fishing from. The fisherman catches one fish from that
pond. Each fish has two properties, a color and a pattern, corresponding to two different information
structures. Before deciding whether or not to risk losing money, the subject decides whether to ask
about the color of the fish the fisherman has caught, or whether to ask about the pattern. The subject
can only ask about one, and does not learn about the other.
fishing from the striped fish pond. (Whether the red fish pond or the striped fish pond corresponds
to the good state is determined randomly in each round; for ease of exposition, I describe the former
case.)
The subject chooses an information structure and observes a signal realization in the following
way. The fisherman randomly catches a fish from the pond he is fishing from in the current round.
The properties of that fish are the information the subject learns about the state of the world. The
subject chooses an information structure by deciding which properties of that fish to learn about.
Specifically, the subject decides between information structures as follows. Each fish has both a color
(red or not) and a pattern (striped or not). The color, but not the pattern is a distinctive feature of
fish in the red fish pond. 90% of fish in that pond are red, but half of them are striped. By contrast,
the pattern, but not the color is a distinctive feature of fish in the striped fish pond. 90% of fish in
that pond are striped, but only half of them are red. The subject can ask one of two questions about
the fish the fisherman has caught. She can either ask "Is the fish red?" or "Is the fish striped?", but
not both.
Payment and implementation. One round is randomly selected for payment. Within that round,
one decision is randomly selected for implementation. With 80% chance, this is the decision promised
at the beginning of the round. With the remaining chance, it is a decision made in the multiple price
list. Gains are added to the payment from the insect experiment, losses are deducted. This is known
to subjects before they start the insect experiment.
Instructions are presented on screen just before subjects begin with this experiment. To continue,
they have to correctly mark 16 statements about the experiment as true or false. In case of a mistake,


the computer does not provide feedback about which statement was marked incorrectly, making it
extremely unlikely that a subject would pass this test by chance.
Each subject completes 14 rounds in individually randomized order. Of these, 6 are in the before
condition, 4 are in the after condition, and in 4 of them, subjects cannot obtain any information about
the state. The latter four decisions are not analyzed here.8
Hypotheses. The strategies that maximize the expected payoff for each incentive amount are
presented in Table D.13. On average, a risk-neutral subject should choose the left-skewed (right-skewed)
information structure more often than chance when the incentive is high (low).
Specifically, the table shows that for the incentive $9 (leading to state-dependent payoffs (9, −1)),
a risk neutral subject should always take the bet, regardless of the signal, and for the incentive $1
(leading to state-dependent payoffs (1, −9)), a risk neutral subject should never bet. For payoffs (8,
-2) and (7, -3), a risk neutral subject should choose the left-skewed information structure Il (which has
a higher ex ante likelihood of yielding signal G); for payoffs (3, -7) and (2, -8), the subject should
choose the right-skewed information structure Ir (which has a higher ex ante likelihood of yielding
signal B). In each of the latter four cases, the subject should bet if a good signal is observed, and
reject the bet otherwise.
                  E(payoff) from strategy                 (1)         (2)        (3)      (4)
                          Bet only after good             Choice      Realized   Bet taken
             Never  Always  signal from                   frequency   mean
(m, m-10)    bet    bet     (0.9, 0.5)  (0.5, 0.1)        of Il       payoff     s = 1    s = 0
(9, -1)      0      [4]      3.8         2.2              70.71%***   3.47**     91.85%   77.11%
                                                          (2.09)      (0.21)     (1.72)   (2.59)
(8, -2)      0      3       [3.1]        1.9              68.16%***   2.29***    86.25%   61.13%
                                                          (2.18)      (0.19)     (2.16)   (3.03)
(7, -3)      0      2       [2.4]        1.6              67.25%***   1.29***    74.39%   49.45%
                                                          (2.17)      (0.18)     (2.89)   (3.19)
(3, -7)      0      -2      -0.4        [0.4]             46.71%      -0.11***   17.60%   5.95%
                                                          (2.36)      (0.08)     (2.46)   (1.49)
(2, -8)      0      -3      -1.1        [0.1]             41.32%***   -0.24***   14.55%   5.73%
                                                          (2.15)      (0.08)     (2.28)   (1.55)
(1, -9)      [0]    -4      -1.8        -0.2              42.04%***   -0.23***   5.70%    3.61%
                                                          (2.15)      (0.06)     (1.43)   (1.23)

Table D.13: Expected payoffs for each strategy. Payoffs corresponding to the optimal strategy for
a risk neutral subject are marked in brackets. Significance levels on the choice frequency concern the
null hypothesis that the choice frequency is 50%. Significance levels for realized payoff concern the
null hypothesis that realized payoff is equal to the maximal possible expected payoff.
8 In these decisions, the least amount for which subjects are willing to participate in the gamble is significantly lower
when incentives are high. This is suggestive of wishful thinking.
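The expected-payoff columns of Table D.13 follow from direct calculation. A minimal sketch (Python; prior 0.5 and a $10 loss, as in the design above):

```python
def expected_payoff(m, strategy, info=None, prior=0.5, loss=10):
    """Expected payoff of betting strategies for state-contingent
    payoffs (m, m - loss) under a 50-50 prior over the states."""
    win, lose = m, m - loss
    if strategy == "never":
        return 0.0
    if strategy == "always":
        return prior * win + (1 - prior) * lose
    # bet only after a good signal from the information structure
    # info = (P(G | good state), P(G | bad state))
    pG_good, pG_bad = info
    return prior * pG_good * win + (1 - prior) * pG_bad * lose

Il, Ir = (0.9, 0.5), (0.5, 0.1)
print(expected_payoff(9, "always"))                 # 4.0
print(round(expected_payoff(9, "signal", Il), 2))   # 3.8
print(round(expected_payoff(3, "signal", Ir), 2))   # 0.4
print(round(expected_payoff(2, "signal", Ir), 2))   # 0.1
```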


D.2 Analysis

Choices of information structures. As predicted, subjects are significantly more likely than
chance to choose the left-skewed information structure Il = (0.9, 0.5) when the incentive amount
is high. They are also significantly more likely than chance to choose the right-skewed information
structure Ir = (0.5, 0.1) when the incentive amount is low. Hence, subjects' choice of information
significantly deviates from the random choice benchmark of 0.5 in the direction predicted by theory,
as Column 1 of Table D.13 reveals. Given that it is far from trivial to intuitively determine the payoff
maximizing information structure, this is a remarkable finding.
Choices of bets. Table D.14 reports subjects' choices of whether or not to participate in the gamble
in exchange for the incentive offered, by treatment. Panel A pools across states; Panels B and C display
choices separately for each state.
When pooled across states (Panel A), the treatment effects directionally replicate the findings from
the insect and online experiments. They are, however, much smaller, and insignificant. In contrast to
the online experiment, being able to skew information demand in response to the incentive amount
decreases the false negative rate and leaves the false positive rate unchanged (Panels B and C).
Given that subjects in this experiment have only two information structures to choose from, this
is not surprising. For many subjects, neither of the two information structures may be such that
particular realizations make them change their mind about betting. By contrast, in the online and
insect experiments, subjects have access to a rich set of information structures, and thus are much
better able to select information structures that make them change their mind about betting. Another
explanation is that attentional channels, and mechanisms relating to the interpretation of information,
through which psychological mechanisms can affect beliefs, are dampened.
Beliefs. This experiment provides further evidence that higher incentives make subjects systematically
more optimistic in a way that is inconsistent with Bayesian rationality. The least amount for
which a subject is willing to risk losing $10 is a monotonically decreasing function of her posterior
beliefs. Hence, I can apply the criterion developed in section 3 to test for violations of the law of
iterated expectations induced by the treatment conditions.
Higher incentives make subjects more optimistic. Subjects in the high incentive treatment who
know the incentive amount when deciding about the information structure demand, on average,
$0.22*** (s.e. 0.06, clustered by subject) less for risking to lose $10 than those in the low incentive
treatment. For subjects who do not know the incentive amount when choosing the information
structure, this amount drops by about half, to $0.13* (s.e. 0.08).9
I formally test for a first order stochastic dominance relationship using the statistic by Davidson
and Duclos (2000) as in section 5. For subjects who know the incentive amount before they decide
9 The

difference in these effect sizes is not statistically significant.


Percentage of subjects willing to take the lottery

                                Accept for      Accept for      Difference
                                $1, $2, or $3   $7, $8, or $9   High - Low
A. Both states
Learn incentive
  After selecting question      8.27            71.99           63.72***
  (cannot skew search)          (0.92)          (1.47)          (1.72)
  Before selecting question     8.74            73.16           64.42***
  (can skew search)             (0.79)          (1.22)          (1.40)
Difference before - after       0.47            1.17            0.70
                                (1.07)          (1.61)          (1.95)

B. Good state only (correct positives)
Learn incentive
  After selecting question      12.45           79.62           67.17***
  (cannot skew search)          (1.46)          (1.86)          (2.31)
  Before selecting question     12.55           84.25           71.70***
  (can skew search)             (1.46)          (1.46)          (1.85)
Difference before - after       0.10            4.63**          4.53*
                                (1.82)          (2.08)          (2.74)

C. Bad state only (false positives)
Learn incentive
  After selecting question      4.35            63.06           58.71***
  (cannot skew search)          (0.94)          (2.10)          (2.29)
  Before selecting question     5.07            62.10           57.03***
  (can skew search)             (0.82)          (1.81)          (1.93)
Difference before - after       0.73            -0.96           -1.68
                                (1.16)          (2.57)          (2.80)

Table D.14: Fraction of decisions in which participants chose to take the bet. Part A shows
participation rates pooled over states (n = 5,310). Parts B and C show participation rates in the
good and bad states, respectively (n = 3,716 and n = 3,728). Standard errors, clustered by subject,
are in parentheses. Coefficients are estimated using university and species fixed effects. Estimates in
Panel A weight observations according to the prior probabilities of the states. *, ** and *** denote
statistical significance at the 10%, 5% and 1% levels, respectively. Asterisks are suppressed for levels.
which question to ask, an increase in the incentive amount leads to a first-order stochastic dominance
decrease in the least amount for which they are willing to risk losing $10. The change is significant
at the p = 0.05 level. By contrast, if subjects learn the incentive amount only after deciding which
question to ask, no statistically significant first-order dominance relation can be detected (p > 0.2).


Model with Continuous State Space

In this section, I show that the two-state assumption made in section 3 is inessential to the qualitative
predictions of the rational model. I extend the model to a continuous state space. In particular, this
allows a decision maker to learn not only about the likelihood that the consequence of a transaction
will be good or bad, but also about how good or bad it may be. I still find that higher incentives
increase the demand for information about the upside of the transaction, and decrease the demand
for information about the downside. I also find that if the costs of information are proportional to
Shannon mutual information, then an increase in incentives increases the probability that an agent
ex post regrets participating, conditional on having participated. In addition, I show that
posterior-separability of the cost of information function (in the sense of Caplin and Dean (2013b))
is sufficient for higher incentives to increase the false positive rate.
Setup. The setup differs from that in section 3 only to the extent that the state space is continuous,
Θ ⊆ R. Specifically, an agent whose preferences are quasilinear in money can decide whether or not to
participate in a transaction. If he abstains, he receives utility 0. If he accepts, he receives a monetary
payment m ≥ 0, and stochastic, non-monetary utility u(θ), with u : Θ → R increasing and u(0) = 0.
The agent is imperfectly informed about θ and thus about his utility from accepting the transaction.
His prior distribution of θ is given by a probability density function π(θ). Before deciding whether
or not to accept the transaction, the agent can obtain information about θ. As in section 3, I directly
model the agent as choosing state-dependent participation probabilities p_θ = P(accept | θ). The cost
of a vector of state-dependent acceptance probabilities (p_θ)_θ∈Θ is given by c((p_θ)_θ∈Θ) ∈ R.
Analysis

The agents utility from state dependent acceptance probabilities (p ) is


U =E

 

u() + m p c (p )

(3)

Here, the expectation is taken with respect to the agents prior beliefs. To illustrate, note that with
incentives m, a perfectly informed agent would accept if + m 0 and reject otherwise.
I consider an increase in the incentive for participation from m to m0 > m. Such a change leads to
the substitution and stakes effects outlined in section 3. On the one hand, higher incentives change
the stakes of the decision, and thus lead the agent to acquire a different amount of information. If it
causes the agent to purchase more information, he will increase p for those in which accepting is
optimal, and decrease p for the for which rejection is optimal. On the other hand, higher incentives
make false positives cheaper, and they make false negatives more expensive. Hence, the agent will
purchase a different kind of information; p will increase for all , including those for which rejection
is optimal. Which of these effects outweighs depends on the cost of information function. Figure 6
illustrates.


[Figure 6 here: P(accept | θ) plotted against θ, with regions labeled false acceptance, compensation, and correct acceptance.]

Figure 6: Effects of a change in incentives on the agent's information with u(θ) = θ. The figure
plots, for each state of the world θ, the probability that the agent accepts the offer given his optimal
information demand. With incentives m = 0, optimal information demand could, for instance, lead to
a schedule of acceptance probabilities P(accept | θ) depicted by the bold line. An increase in incentives
to m′ > m increases P(accept | θ) for all θ > −m′, where the substitution and stakes effects have the
same direction. Depending on the cost of information function, the stakes effect or the substitution
effect may dominate for θ < −m′. These cases are illustrated by the dashed and dot-dashed schedules,
respectively. In case of a posterior-separable cost of information function, the substitution effect
dominates.
I now show that posterior separability (Caplin and Dean (2013b)) of the cost of information
function is a sufficient condition for the substitution effect to dominate (higher incentives increase false
positives). To define this property, I use the following notation. I write p = E(p_θ) = ∫ p_θ π(θ) dθ for the
unconditional probability that the agent participates if his state-contingent participation probabilities
are given by (p_θ)_θ∈Θ. Moreover, I write π_θ^G = p_θ π(θ)/p for the density at θ of the posterior belief
distribution in case the agent participates, and π_θ^B = (1 − p_θ)π(θ)/(1 − p) for the respective entity
in case the agent abstains.
The cost function c is posterior separable if there exists a strictly convex function f : [0, 1] → R such
that c can be written in the following form:

    c((p_θ)_θ∈Θ) = −E[f(π(θ))] + p E[f(π_θ^G)] + (1 − p) E[f(π_θ^B)]

If f is the negative of the binary entropy function, f(x) = x log(x) + (1 − x) log(1 − x), then c is the
Shannon mutual information cost function.
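For intuition, the Shannon mutual-information cost can be computed directly in the binary-state version of the model, where the posterior distributions reduce to scalar probabilities. A sketch (Python; the prior and the scaling factor lam are illustrative assumptions):

```python
import math

def binary_entropy(x):
    """Entropy (in nats) of a Bernoulli(x) random variable."""
    if x <= 0.0 or x >= 1.0:
        return 0.0
    return -x * math.log(x) - (1 - x) * math.log(1 - x)

def shannon_cost(pG, pB, prior=0.5, lam=1.0):
    """lam * I(state; action) for state-contingent acceptance probs pG, pB."""
    p = prior * pG + (1 - prior) * pB  # unconditional acceptance probability
    # mutual information = entropy of the action minus its expected
    # entropy conditional on the state
    return lam * (binary_entropy(p)
                  - prior * binary_entropy(pG)
                  - (1 - prior) * binary_entropy(pB))

print(shannon_cost(0.5, 0.5))            # 0.0: action independent of the state
print(round(shannon_cost(1.0, 0.0), 3))  # log(2) ~ 0.693: full information
```

Fully state-contingent behavior is thus the most expensive, and behavior that ignores the state is free, as the model requires.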
Part (i) of the following proposition formally shows that an increase in incentives increases the false
positive rate if the cost function is posterior separable. Specifically, for all θ < −m′, the agent will
ex post regret participating if the state is θ. Because under posterior separability all p_θ increase, this
means that in particular the false positive probability increases. Note also that posterior separability
is also a sufficient condition for the required sign of the cross-derivative in the special version of this
model outlined in section 3.
Part (ii) of the proposition helps to better understand the implications of Shannon mutual information
costs, by applying a result from Matejka and McKay (2015). It shows that with Shannon costs,
the shape of the function θ ↦ p_θ depends on prior beliefs only through the unconditional participation
probability p. Hence, it is independent of the shape of the prior belief distribution. Consequently,
Shannon mutual information costs limit the effectiveness of certain information provision policies.
Finally, part (iii) of the proposition shows that with Shannon mutual information costs, higher
incentives increase the probability that the agent ex post regrets participation conditional on having
participated.
Proposition 4.
(i) If c is posterior separable, and if m′ > m, then for all θ, p_θ(m′) ≥ p_θ(m).
(ii) If c is proportional to Shannon mutual information with factor of proportionality λ, then p_θ is
strictly increasing in θ, and for any m, p_θ is a function only of p, λ, and θ + m.
(iii) If c is proportional to Shannon mutual information, and P(u(θ) ∈ [−m′, −m]) is sufficiently
small, then an increase in incentives from m to m′ > m increases the probability that the agent
regrets, conditional on having participated.

Proofs

Proof of proposition 1 Part (i) is proved in the main text. To show Part (ii) I use theorem 1
and lemma 2 of Matejka and McKay (2015) to explicitly characterize the state-contingent acceptance
probabilities p for {G, B} and the unconditional acceptance probability p = pG + (1 )pB in
case of Shannon mutual information costs. They are given by the following equations.
\begin{align}
\forall \omega \in \{G, B\}: \quad p_\omega &= \left[1 + \frac{1-p}{p}\exp\left(-\frac{u_\omega + m}{\lambda}\right)\right]^{-1} \tag{4}\\
1 &= \frac{\mu}{p + (1-p)\exp\left(-\frac{u_G + m}{\lambda}\right)} + \frac{1-\mu}{p + (1-p)\exp\left(-\frac{u_B + m}{\lambda}\right)} \tag{5}
\end{align}

The probability of regret conditional on having participated, $P(s = B \mid \text{accept})$, is given by
\[
\frac{p_B(1-\mu)}{p} = \frac{1-\mu}{p + (1-p)\exp\left(-\frac{u_B + m}{\lambda}\right)} \tag{6}
\]
where the equality follows directly from (4).
I show that
\[
\frac{d}{dm}\left[\frac{p_B(1-\mu)}{p}\right]
= \frac{\partial}{\partial p}\left[\frac{p_B(1-\mu)}{p}\right]\frac{dp}{dm}
+ \frac{\partial}{\partial m}\left[\frac{p_B(1-\mu)}{p}\right] > 0.
\]
From equation (5) it follows that $\frac{dp}{dm} > 0$, so that it suffices to show that each of the partial derivatives in the previous expression
is positive. That the second is positive is obvious from equation (6): fixing $p$, the expression is increasing in $m$. Regarding the first, note that since $u_B + m < 0$, the exponent in the denominator of the right-hand side of equation (6) is positive. Hence the denominator is a convex combination of 1 and some number larger than 1, with weight $p$ on the former. The denominator is therefore decreasing in $p$, and hence the expression is increasing in $p$. Moreover, from (5) it is apparent that $p$ is increasing in $m$. Hence, the probability of regret given in (6) is increasing in $m$.
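For concreteness, the fixed point defined by (4) and (5) can be computed numerically. The following sketch solves (5) for $p$ by bisection and evaluates the regret probability (6); the parameter values ($\mu = 0.6$, $u_G = 1$, $u_B = -2$, $\lambda = 1$) and all names are illustrative assumptions rather than calibrations from the paper.

```python
import math

# Numerical sketch of equations (4)-(6) for two states {G, B} under
# Shannon mutual information costs.  The parameter values below
# (mu, u_G, u_B, lambda) and all names are illustrative assumptions,
# not taken from the paper.
MU, U_G, U_B, LAM = 0.6, 1.0, -2.0, 1.0

def fixed_point_p(m, lo=1e-9, hi=1.0 - 1e-6, tol=1e-12):
    """Solve equation (5) for the unconditional acceptance probability p
    by bisection on (0, 1)."""
    def g(p):
        a_g = math.exp(-(U_G + m) / LAM)
        a_b = math.exp(-(U_B + m) / LAM)
        return MU / (p + (1 - p) * a_g) + (1 - MU) / (p + (1 - p) * a_b) - 1
    assert g(lo) > 0 > g(hi)  # an interior solution exists for these parameters
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if g(mid) > 0 else (lo, mid)
    return (lo + hi) / 2

def regret_prob(m):
    """Equation (6): P(s = B | accept) = (1 - mu) / (p + (1-p) exp(-(u_B + m)/lam))."""
    p = fixed_point_p(m)
    return (1 - MU) / (p + (1 - p) * math.exp(-(U_B + m) / LAM))
```

Raising $m$ raises both the participation probability $p$ and the regret probability, in line with the monotonicity argument above.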
Proof of proposition 3  Fix $q > \frac{1}{2}$ and $\delta$ with $-(1-q) \leq \delta \leq 1-q$. Then $(q+\delta, 1-q+\delta)$ is a valid information structure, and the claim to be shown concerns the comparative statics with respect to $\delta$. Write
\[
\beta_G(\delta) = \frac{\mu(q+\delta)}{\mu(q+\delta) + (1-\mu)(1-q+\delta)}
\quad\text{and}\quad
\beta_B(\delta) = \frac{\mu(1-q+\delta)}{\mu(1-q+\delta) + (1-\mu)(q+\delta)}
\]
for the Bayesian posterior upon receiving the $G$ and $B$ signal, respectively. Write
\[
\hat\beta_G(\delta) = \frac{\mu(q+\lambda\delta)}{\mu(q+\lambda\delta) + (1-\mu)(1-q+\lambda\delta)}
\quad\text{and}\quad
\hat\beta_B(\delta) = \frac{\mu(1-q+\lambda\delta)}{\mu(1-q+\lambda\delta) + (1-\mu)(q+\lambda\delta)}
\]
for the respective posteriors of a non-Bayesian agent with attenuation factor $\lambda \leq 1$. The first claim to be shown is that
\[
\hat\beta_G'(\delta) - \beta_G'(\delta)
= \frac{\lambda\mu(\bar p - q)}{(\bar p + \lambda\delta)^2} - \frac{\mu(\bar p - q)}{(\bar p + \delta)^2} > 0
\]
where $\bar p = \mu q + (1-\mu)(1-q)$. Note that due to $q > \frac{1}{2}$, we have $q > \bar p$. Consequently, the above can be rearranged to $\frac{\lambda}{(\bar p + \lambda\delta)^2} \leq \frac{1}{(\bar p + \delta)^2}$, which is equivalent to $\lambda(\bar p + \delta)^2 \leq (\bar p + \lambda\delta)^2$, and further to $\lambda\delta^2 \leq \bar p^2$. Because $|\delta| \leq 1-q$ and $\bar p \geq 1-q$, the previous expression is true for all $\lambda \leq 1$. The proof of the second claim follows from symmetry.
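The rearrangement at the end of this proof can also be verified by brute force. The sketch below checks the inequality $\lambda(\bar p + \delta)^2 \leq (\bar p + \lambda\delta)^2$ on a grid satisfying the proof's restrictions; the attenuation factor $\lambda \in (0,1]$ and the function name are assumptions made for this illustration.

```python
# Grid check of the rearrangement used in the proof of proposition 3:
#   lam * (pbar + d)**2 <= (pbar + lam*d)**2,
# which reduces to lam * d**2 <= pbar**2 when lam <= 1, over the proof's
# restrictions q > 1/2, |d| <= 1 - q, and pbar = mu*q + (1-mu)*(1-q) >= 1 - q.
# The attenuation factor lam in (0, 1] is an assumption for this illustration.
def rearrangement_holds(steps=20):
    for i in range(1, steps):
        q = 0.5 + 0.5 * i / steps            # q in (1/2, 1)
        for j in range(1, steps):
            mu = j / steps                   # prior weight mu in (0, 1)
            pbar = mu * q + (1 - mu) * (1 - q)
            for k in range(1, steps + 1):
                lam = k / steps              # lam in (0, 1]
                for n in range(-steps, steps + 1):
                    d = (1 - q) * n / steps  # |d| <= 1 - q
                    if lam * (pbar + d) ** 2 > (pbar + lam * d) ** 2 + 1e-12:
                        return False
    return True
```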
Proof of proposition 4
(i) The agent maximizes (3) by choosing $(p_\omega)_\omega$. I use Topkis' theorem to prove the claim. We must show that the objective function has increasing differences both in $(p_\omega, p_{\omega'})$ and in $(p_\omega, m)$ for all $\omega, \omega'$. As $m$ enters the utility function additively, the latter is trivially true.
To show the former, write $p := E(p_\omega)$, and let $\beta^a_\omega := \mu_\omega p_\omega / p$ and $\beta^r_\omega := \mu_\omega(1-p_\omega)/(1-p)$ denote the posterior probability of state $\omega$ conditional on acceptance and rejection, respectively. By posterior separability of the cost function, taking derivatives of the objective function with respect to $p_\omega$ yields
\begin{align}
\frac{\partial U}{\partial p_\omega}
&= u_\omega + m - \left[f(\beta^a_\omega) - f(\beta^r_\omega)\right]
- p f'(\beta^a_\omega)\frac{p - \mu_\omega p_\omega}{p^2}
+ (1-p) f'(\beta^r_\omega)\frac{(1-p) - \mu_\omega(1-p_\omega)}{(1-p)^2} \notag\\
&= u_\omega + m - \left[f(\beta^a_\omega) - f(\beta^r_\omega)\right]
- f'(\beta^a_\omega)\left(1 - \beta^a_\omega\right)
+ f'(\beta^r_\omega)\left(1 - \beta^r_\omega\right) \tag{7}
\end{align}
For $\omega' \neq \omega$, $p_{\omega'}$ enters the above expression only through $p$. Therefore, we have that
\[
\frac{\partial^2 U}{\partial p_\omega \, \partial p_{\omega'}}
= \frac{\partial^2 U}{\partial p_\omega \, \partial p} \frac{\partial p}{\partial p_{\omega'}},
\]
and because $\partial p / \partial p_{\omega'} = \mu_{\omega'} \geq 0$, it suffices to show that $\frac{\partial^2 U}{\partial p_\omega \, \partial p} \geq 0$. Indeed, since $\frac{\partial \beta^a_\omega}{\partial p} = -\frac{\beta^a_\omega}{p}$ and $\frac{\partial \beta^r_\omega}{\partial p} = \frac{\beta^r_\omega}{1-p}$, by the definition of $\beta^a_\omega$ and $\beta^r_\omega$ we obtain
\[
\frac{\partial^2 U}{\partial p_\omega \, \partial p}
= \frac{1}{p} f''(\beta^a_\omega)\,\beta^a_\omega(1-\beta^a_\omega)
+ \frac{1}{1-p} f''(\beta^r_\omega)\,\beta^r_\omega(1-\beta^r_\omega),
\]
which is nonnegative due to $f'' > 0$ and $\beta^a_\omega, \beta^r_\omega \in [0,1]$.

(ii) Part (ii) of the above proposition derives from Matejka and McKay (2015), whose theorem 1 and lemma 2 imply that the state-contingent acceptance probabilities $p_\omega$ and the unconditional acceptance probability $p$ satisfy the following equations.
\begin{align}
\forall \omega: \quad p_\omega &= \left[1 + \frac{1-p}{p}\exp\left(-\frac{u(\omega)+m}{\lambda}\right)\right]^{-1} \tag{8}\\
0 &= \int \left[p + (1-p)\exp\left(-\frac{u(\omega)+m}{\lambda}\right)\right]^{-1} d\mu(\omega) - 1 \tag{9}
\end{align}
By recognizing that a change in incentives can simply be modeled as a shift in the prior $\mu$, the claims in part (ii) of the above proposition become obvious. Most notably, the shape of the function $\omega \mapsto p_\omega$ is independent of the shape of the prior $\mu$.$^{10}$


(iii) I assume $P\big(u(\omega) \in [-m', -m]\big) = 0$. The extension to the case in which $P\big(u(\omega) \in [-m', -m]\big)$ is sufficiently small follows by continuity. Moreover, without loss of generality, I set $m = 0$. We have
\[
P(\text{regret} \mid \text{participate})
= P\big(u(\omega) + m' \leq 0 \mid \text{participate}\big)
= \int_{\{\omega:\, u(\omega) \leq -m'\}} \frac{p_\omega}{p}\, d\mu(\omega).
\]
By equation (8), this is equivalent to
\[
P(\text{regret} \mid \text{participate})
= \int_{\{\omega:\, u(\omega) \leq -m'\}} \left[p + (1-p)\exp\left(-\frac{u(\omega)+m'}{\lambda}\right)\right]^{-1} d\mu(\omega).
\]
The denominator is decreasing in $m'$ as well as decreasing in $p$ (the latter because the integral is taken only over $\omega$ for which $u(\omega) < -m'$). Because, by part (i) of this proposition, an increase in $m'$ increases $p_\omega$ for all $\omega$, it increases $p$. Consequently, an increase in $m'$ increases $P(\text{regret} \mid \text{participate})$, as was to be shown.
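Parts (ii) and (iii) can likewise be illustrated numerically with a discrete prior. The sketch below solves (9) by bisection, recovers the state-contingent probabilities from (8), and evaluates the regret probability; the payoff values, the uniform prior, and all names are illustrative assumptions rather than quantities from the paper.

```python
import math

# Discrete-prior sketch of equations (8)-(9) and of part (iii).  The state
# payoffs in US (uniform prior) and LAM are illustrative assumptions; the
# payoffs are chosen so that no state has u in [-0.5, 0], matching the
# assumption P(u(w) in [-m', -m]) = 0 for m = 0 and m' = 0.5.
US = [-3.0, -2.0, 1.0, 2.0]
LAM = 1.0

def solve_p(m, lo=1e-9, hi=1.0 - 1e-6, tol=1e-12):
    """Solve the fixed-point condition (9) for the unconditional probability p."""
    def g(p):
        return sum(1.0 / (p + (1 - p) * math.exp(-(u + m) / LAM))
                   for u in US) / len(US) - 1
    assert g(lo) > 0 > g(hi)  # an interior solution exists for these parameters
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if g(mid) > 0 else (lo, mid)
    return (lo + hi) / 2

def p_state(u, p, m):
    """Equation (8): acceptance probability in a state with payoff u."""
    return 1.0 / (1.0 + (1 - p) / p * math.exp(-(u + m) / LAM))

def regret_given_participate(m):
    """P(regret | participate): mass of states with u + m < 0 among acceptances."""
    p = solve_p(m)
    return sum(p_state(u, p, m) for u in US if u + m < 0) / len(US) / p
```

Consistent with part (iii), the regret probability conditional on participation rises when the incentive increases from $m = 0$ to $m' = 0.5$, and the state-contingent probabilities from (8) aggregate back to the $p$ that solves (9).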

$^{10}$ This, in turn, is an implication of the fact that with Shannon mutual information costs, the marginal costs of additional precision in a state are proportional to the likelihood of that state.
