
American Economic Review 2008, 98:3, 990–1008

http://www.aeaweb.org/articles.php?doi=10.1257/aer.98.3.990

Pride and Prejudice: The Human Side of Incentive Theory

By Tore Ellingsen and Magnus Johannesson*

Desire for social esteem is a source of prosocial behavior. We develop a model
in which actors' utility of esteem depends on the audience. In a principal-
agent setting, we show that the model can account for motivational crowding
out. Control systems and pecuniary incentives erode morale by signaling to
the agent that the principal is not worth impressing. The model also offers an
explanation for why agents are motivated by unconditionally high pay and by
mission-oriented principals. (JEL D01, D82)

Nature, when she formed man for society, endowed him with an original desire to please,
and an original aversion to offend his brethren. She taught him to feel pleasure in their
favourable, and pain in their unfavourable regard. She rendered their approbation most
flattering and most agreeable to him for its own sake; and their disapprobation most mor-
tifying and most offensive.
— Adam Smith (1790)

Douglas McGregor’s (1960) celebrated management book The Human Side of Enterprise
argues that managers who subscribe to the conventional economic view that employees dis-
like work—McGregor labels it Theory X—may create workers who are “resistant, antagonis-
tic, uncooperative” (38). That is, managerial control and material incentives may trigger the
very behaviors that they are designed to avert. Conversely, managers who subscribe to the more
optimistic view that employees see their work as a source of self-realization and social esteem
—Theory Y—may create workers who voluntarily seek to fulfill the organization’s goals.
Two empirical observations lending support to Theory Y are the wage level puzzle that higher
wages sometimes induce better performance, and the incentive intensity puzzle that stronger
material incentives and closer control sometimes induce worse performance. Both observa-
tions violate the standard principal-agent model, which predicts that the agent’s effort should be
unaffected by the level of pay, and that stronger incentives should always entail higher effort.
Indeed, the evidence suggests that many agents not only care about the principal’s payoff as
well as their own, but are influenced by the principal’s likely intentions. For example, Falk and
Kosfeld (2006) find that control hurts the agent’s motivation only when the principal has a choice

* Ellingsen: Department of Economics, Stockholm School of Economics, Box 6501, S-113 83 Stockholm, Sweden
(e-mail: gte@hhs.se); Johannesson: Department of Economics, Stockholm School of Economics, Box 6501, S-113
83 Stockholm, Sweden (e-mail: hemj@hhs.se). We are grateful to the Torsten and Ragnar Söderberg Foundation
(Ellingsen) and the Swedish Research Council (Johannesson) for financial support. Thanks to George Baker, Kjell-
Arne Brekke, Florian Englmaier, Ernst Fehr, Martin Flodén, Oliver Hart, Bengt Holmström, Erik Lindqvist, John
Moore, Anna Sjögren, Jean-Robert Tyran, Robert Östling, and especially Michael Kosfeld for helpful discussions. The
paper has also benefited from comments by many other seminar and conference participants. Errors are our own.

The wage level puzzle is documented by, among others, Ernst Fehr, Georg Kirchsteiger, and Arno Riedl (1993)
and Truman F. Bewley (1999). The incentive intensity puzzle, which has proven even harder to explain, is documented
by Bruno S. Frey and Felix Oberholzer-Gee (1997), Uri Gneezy and Aldo Rustichini (2000a, b), Iris Bohnet, Frey, and
Steffen Huck (2001), Fehr and Simon Gächter (2002), Fehr and Bettina Rockenbach (2003), Fehr and John A. List
(2004), and Armin Falk and Michael Kosfeld (2006), among others.

We here abstract from the effect of wealth changes on labor supply. In most of the empirical studies, wealth effects
can safely be assumed to be negligible due to low stakes and short horizons.

whether to impose control or not. Gary Charness (2004) finds that a high wage has a larger effect
on the agent’s performance when it is explicitly chosen by the principal than when it comes about
accidentally.
In this paper, we propose a model of motivation that is consistent with Theory Y and resolves
these empirical puzzles. Our otherwise conventional principal-agent model builds on two key
premises: First, some people care about social esteem. Second, the value of esteem depends on
the provider; people care relatively more about the approval of those that they themselves approve
of. The principal’s choice of incentive scheme wholly or partly reveals the principal’s character
to the agent, thereby affecting the agent’s esteem for the principal. Similarly, the agent’s choice
of action reveals the agent’s character to the principal, affecting the principal’s esteem for the
agent. Our main contribution is to show how an incentive that in isolation would have a positive
effect on the agent’s behavior has a negative effect on the behavior of some agent types because
of what the incentive tells the agent about the principal.
Before discussing the implications and novelty of our result, we emphasize that both our key
assumptions have considerable support. Adam Smith’s view on esteem is summarized in the
quote above. Geoffrey Brennan and Philip Pettit (2004, chap. 1) provide extensive references to
similar views on esteem held by other classical thinkers. These thinkers also widely agree that the
value of respect depends on its source. As David Hume (1739, book II, part I, sect. XI) expresses
in his account of humans’ fundamental love of fame: “tho fame in general be agreeable, yet we
receive a much greater satisfaction from the approbation of those, whom we ourselves esteem
and approve of, than those, whom we hate and despise.” Smith (1790, part II, sect. III, para. 10),
articulates the same idea: “What most of all charms us in our benefactor, is the concord between
his sentiments and our own, with regard to what interests us so nearly as the worth of our own
character, and the esteem that is due to us.” In modern times, psychologists from Abraham H.
Maslow (1943) onward have similarly argued that esteem is a fundamental source of motivation.
More importantly, there is ample evidence that people’s performance is affected by the presence
of others, and that much of the effect is due to concern about being evaluated by them. Our
two premises are also consistent with the assumptions of the seminal economics papers on peer
monitoring by Heinz Holländer (1990) and Eugene Kandel and Edward P. Lazear (1992).
As an illustration of our argument, consider a principal who decides whether or not to control
an agent—as in the experiment by Falk and Kosfeld (2006). By not controlling, the principal sig-
nals a prosocial attitude. While the lack of control is costly in case the agent is selfish, it will be
profitable in case the agent is prosocial, because a prosocial agent has a stronger desire to make
a good impression on a prosocial principal than on a selfish one. Depending on parameters, our
model generates either pooling equilibria, in which the principal always trusts the agent, or sepa-
rating equilibria, in which a prosocial principal trusts whereas a selfish principal controls.


This so-called social facilitation research goes back to experiments by Norman Triplett (1898); the emphasis on
evaluation apprehension as a key mechanism has been increasing since the work of Nickolas B. Cottrell et al. (1968).
For a survey of the social facilitation literature, see Jim Blascovich et al. (1999). Recent related work in a labor context
includes Falk and Andrea Ichino (2006) and Alexandre Mas and Enrico Moretti (forthcoming). Both find that low-pro-
ductivity workers put in more effort when observed by high-productivity peers. In the context of charitable giving, the
social esteem motive has been documented by, among others, William T. Harbaugh (1998) and Adriaan R. Soetevent
(2005). There are also considerable audience effects on personal hygiene, as demonstrated by Kristen Munger and
Shelby J. Harris (1989), and probably also on civic behaviors such as voting—see Patricia Funk (2007). Finally, the
decisions of participants in experiments are known to be affected by the experimenter’s ability to observe behavior,
as documented by Elizabeth Hoffman et al. (1994), or even subtle cues that someone might be watching; see Kevin J.
Haley and Daniel M. T. Fessler (2005).

Both these papers assume that people care about what others will think about their actions, but only Kandel and
Lazear explicitly discuss the role of audience identity. Both papers confine attention to horizontal interaction between
peers rather than the interaction between a principal and the agent, which is our focus.

The principal may signal a prosocial attitude in other ways, too. Offering a high wage or donat-
ing profits to charity are other credible signals that may induce higher effort from a ­prosocial
agent. Therefore, the model offers a microfoundation for George A. Akerlof’s (1982) gift exchange
hypothesis as well as for Timothy Besley and Maitreesh Ghatak’s (2005) argument that motivated
workers exert higher effort in mission-oriented (as opposed to profit-maximizing) enterprises.
Our paper is closely related to Roland Bénabou and Jean Tirole (2006), who also employ a
signaling model to show that material incentives may undermine esteem incentives. Bénabou
and Tirole’s analysis relies on the assumption that people have private information about multiple
personal characteristics, and that their materialism and altruism are not perfectly correlated.
Under these assumptions, material incentives may reduce the motivation of altruists. In the clas-
sic blood donation example, an altruist may donate less blood if there is a money payment,
because the incentive attracts materialists and thereby dilutes the signaling value of blood dona-
tion. Comparing assumptions, our model dispenses with the requirement of multidimensional
characteristics, imposing instead the assumption that the value of esteem depends on who gives
it. Concerning predictions, our main advance over Bénabou and Tirole (2006) is to explain why
material incentives are more likely to erode esteem incentives if the principal has a choice of
incentive scheme than if the scheme is exogenously imposed.
Dirk Sliwka (2007) is also quite closely related. Assuming that some people get utility from
acting conformistically while being unsure about the relative population shares of various types
of nonconformists, Sliwka shows that an employee may respond favorably to a generous con-
tract offer. The generous offer serves as a signal that generosity is relatively common, inducing
conformists to be generous. Like us, Sliwka can thus explain why the set of potential incentive
schemes matters—and not only the chosen incentive scheme. A conceptual difference between
the two models is that Sliwka focuses on internal rewards from norm adherence, whereas we
focus on esteem, which is an external reward. A difference in predictive scope is that our model
generates motivational crowding out even without any interpersonal differences in probability
assessments.
Since people’s behavior is predicted to depend on others’ unchosen options, our model offers
a new approach to the modelling of reciprocity. In the reciprocity literature, a major question
has been to explain the role that people ascribe to others’ intentions. Two previous explanations
have been formalized. David K. Levine (1998) suggests that people’s altruism or spite depends
on their beliefs about their opponents (see also Julio J. Rotemberg, forthcoming). For example,
if a player feels more altruistic toward other altruists, a generous action by an opponent may
be rewarded because it signals that the opponent’s altruism is high. If the situation is the same,


For a related idea, see Kjell Arne Brekke and Karine Nyborg (2004).

See David M. Kreps (1997), Jerker Denrell (1998), Maarten C.W. Janssen and Ewa Mendys-Kamphorst (2004),
and Paul Seabright (2004) for similar ideas. The underlying general theme that generous acts to some extent are moti-
vated by the desire to signal personal characteristics, such as generosity or wealth, is well understood by now; see, for
example, Colin F. Camerer (1988) and Amihai Glazer and Kai A. Konrad (1996) for early formal models, and James
Andreoni and B. Douglas Bernheim (2006) for a general analysis and an experimental test. Like all these papers, we
rely heavily on the assumption that players have private information about their characteristics, in contrast to the career
concerns model of Bengt Holmström (1982) where the principal and the agent learn at the same rate.

Richard Titmuss (1970) famously originated the idea that material incentives crowd out voluntary blood donation.
For a supportive field experiment, see Carl Mellström and Johannesson (forthcoming).

Our argument that the principal’s action conveys information about the principal’s type is conceptually related
to Kathryn E. Spier (1992), who demonstrated that a principal may want to leave contracts incomplete (or with weak
incentives) rather than introducing clauses that signal bad news to the agent. In recent work that ties in with the psy-
chological literature on motivation, Bénabou and Tirole (2003) and Hanming Fang and Giuseppe Moscarini (2005)
similarly assume that the principal has private information about the agent’s ability or the difficulty of the task. In these
models, too, a strengthening of incentives may induce lower effort.

See Joel Sobel (2005) for a comparison of different reciprocity models.

except the opponent lacks the opportunity to be selfish, altruism can no longer be inferred, and
the same action may go unrewarded. Our model is closely related to Levine’s inasmuch as we
also focus on signaling and the impact of opponents’ type on a player’s utility. One virtue of our
model is that it more readily explains symbolic generosity and punishment, as well as why some
people are generous even toward relatively selfish opponents. Yet, we believe that both forces
often operate simultaneously, and the most general version of our model therefore assumes that
prosociality is belief-contingent.
A second way to accommodate intention-based reciprocity is to let players care about oppo-
nents’ prior expectations. Analyzing the consequences of such preferences requires an extension
of the standard game theory model. John Geanakoplos, David Pearce, and Ennio Stacchetti
(1989) develop the necessary formal apparatus, called psychological game theory. Matthew
Rabin (1993) uses psychological game theory to study reciprocity. Rabin’s fairness equilibrium
concept formalizes the notion that people want to reward kind actions and punish unkind actions.
However, because an action’s kindness hinges on the actor’s beliefs about what the opponent will
do, the analysis becomes complex, and Rabin confines attention to simultaneous move games.
Martin Dufwenberg and Kirchsteiger (2004) and Falk and Urs Fischbacher (2006) provide
extensions of Rabin’s model to dynamic situations. It is easier to define kindness as a property of
players’ preferences, like we do, than to define it as a joint property of actions, available actions,
and beliefs, as Rabin does. Despite the introduction of incomplete information, our model is
therefore simpler.10
Some readers may object to the notion that social esteem motives can explain laboratory
evidence like Falk and Kosfeld’s, since their experiments involve anonymous interaction. Why
would people who interact anonymously be at all concerned about what their opponent thinks
about their behavior? Our view is that giving money under anonymous laboratory conditions is
similar to tourists’ tipping of waiters, taxi drivers, and other service workers that they will never
meet again. Of course, service workers usually see their customers, and sight probably affects
their customers’ feelings of pride and shame in connection to tipping. However, a recent labo-
ratory experiment by Jason D. Dana, Daylian M. Cain, and Robyn M. Dawes (2006) suggests
that social esteem concerns matter even under anonymous laboratory interaction. The study
investigates the behavior of Dictator Game players who, after making their allocation choices,
are offered the option to exit the game at the cost of keeping only nine out of the ten dollars they
were given to allocate between themselves and an anonymous potential recipient. The authors
compare two treatments. In one treatment, the recipient is informed about the game if the dicta-
tor chooses not to exit. In another treatment, information about the game is always private to
the dictator; the recipient gets any money that the dictator allocates, but never learns where the
money comes from. Dana, Cain, and Dawes find that there is considerable exit in the first treat-
ment (a finding that is replicated and refined by Tomas Broberg, Ellingsen, and Johannesson
2007), but that there is almost no exit in the second treatment, where generosity or lack thereof
is completely unobservable by the recipient. Dana, Cain, and Dawes (2006, 201) conclude: “Just
knowing that one is the anonymous dictator that the receiver will think badly of can be sufficient
to compel giving.”

10
Arguably, our model also better captures some of the original intuitions of Geanakoplos, Pearce, and Stacchetti
(1989). For example, they write that a player’s payoff “depends not only on what he does but also on what he thinks his
friends will think about his character” (66)—but in the psychological game model there is no (exogenous) variation in
character: for a given role, players have homogeneous preferences, and typically play games with complete informa-
tion. All the variation in behavior is caused either by multiplicity of equilibria or by the use of mixed strategies. Recent
work nonetheless indicates that psychological game theory may have more to offer. Charness and Dufwenberg (2006)
propose guilt aversion as an alternative driver of reciprocity, and Pierpaolo Battigalli and Dufwenberg (forthcoming)
extend psychological game theory in several promising directions.

Two anonymous referees note that the findings of Dana, Cain, and Dawes admit the compet-
ing interpretation that people dislike knowing that others are being disappointed or hurt. We
agree, but offer two remarks. First, the distinction between the two explanations is smaller than
it may seem. Formally, our model says only that the actor cares about what others will think
about “the person who committed my act.” That is, we do not have to interpret esteem in terms
of “what others will think about me.”11 Second, future experiments might distinguish between
the two explanations. The difference between them comes down to the question: do people care
only about whether others are being disappointed, or do they also care about the role that their
own action played in causing the disappointment? Our conjecture is that few dictators would be
willing to pay for the exit of other dictators in an experiment like that of Dana, Cain, and Dawes,
just like few diners would be willing to top up the neighboring table’s tip. We care more about
what the waiter thinks about our table’s tip. Such a concern for what others think about one’s
acts appears intimately related to the “warm glow” motives identified by Andreoni (1989, 1990).
Many donors get utility (warm glow) from their own contributions over and above the utility they
get from the recipient’s consumption, and the social esteem model pinpoints a possible psycho-
logical mechanism behind the warm glow motive.
The paper is organized as follows. Section I introduces the model. Section II shows that the
model can explain why the behavior of trustees depends on the trustors’ set of feasible actions,
as documented by Kevin A. McCabe, Mary L. Rigdon, and Vernon L. Smith (2003). Section III
matches the model to Falk and Kosfeld’s (2006) evidence on hidden costs of control. Section IV
shows how the model implements Akerlof’s (1982) gift exchange argument. Some final remarks
are collected in Section V.

I.  The Model

We confine attention to two-stage games between a principal, P, and an agent, A. The principal
moves first. The agent observes the principal’s move before making his own move. (In some of
the games we consider, the principal has no choice. In these games, the principal acts only as an
audience for the agent.)

A. Actions

The set of (pure) actions for player i is denoted A_i and a generic action is denoted a_i. For
example, A_P may be the set of feasible contracts and A_A the set of feasible effort levels. Mixed
actions are probability distributions over A_i. Let X_i denote player i’s set of mixed actions.

B. Types

Players are heterogeneous. For simplicity, we assume that heterogeneity is unidimensional,


and that there are only two types.12 A player’s type is denoted θ_i ∈ Θ = {θ_L, θ_H} ⊂ R. Each
player’s type is private information.

11
The concept of “me” or “self” is problematic in our context. One measure of anonymity would run from com-
plete invisibility on the one hand to close observation by an acquaintance on the other. Where on that scale would the
observer start to view “the self” (“me”)?
12
The two-type case greatly simplifies our analysis. A drawback is that in some games much of the action is in the
lower type interval, while in other games there is more action in the upper type interval. Thus, when we use the two-
type model to capture qualitative features of different games, we cannot hope to make any quantitative comparison
across them.

C. Beliefs

Let p⁰_i denote the prior probability that player i assigns to the event that the opponent is of type
H. In order to accommodate the systematic differences in beliefs that are observed in experi-
ments, we assume that players’ beliefs about the opponent’s type may be positively correlated
with their own type, that is, p⁰_H ≥ p⁰_L.13 Our main qualitative insights hold also in the special case
of homogeneous beliefs, p⁰_H = p⁰_L.
We assume that the priors are common knowledge and that players update their beliefs using
Bayes’s rule as the game progresses. Let hi denote the history of actions observed by player i
when it is i ’s turn to move, and let h̄ denote the history at the end of the game, i.e., the realization
(a_i, a_j). Finally, let p_i(h_i, θ_i) denote i’s conditional expectation that the opponent is of type H.

D. Preferences

Let m_i(a_i, a_j, θ_i) denote the material payoff to player i. We assume that prosociality takes the
form of altruism.14 The materialistic part of a player’s utility function is thus m_i + θ_i m_j. We
assume that 1 > θ_H > θ_L ≥ 0.
The crucial assumption of our model is that players take pride in what the opponent will think
about them.15 More precisely, they prefer being classified as type H (relatively altruistic) to being
classified as type L (relatively selfish).16 Let

θ_ij = E[θ_j | θ_i, h̄]

denote player i ’s (ultimate) esteem for player j. To capture the notion that the value of esteem
depends on the source, we write player i ’s feeling of being esteemed as

(1)  û_ji = E_{θ_j}[σ(θ_j) θ_ji | θ_i].

We refer to σ_j = σ(θ_j) as the salience of the opponent’s esteem, and assume that σ_H > σ_L > 0.
We refer to û_ji as player i’s pride.
To summarize, player i ’s utility function is17

(2)  u_i = m_i + θ_i m_j + û_ji.

13
It is well known that people tend to think that others are like them. Thus, it is commonly concluded that people
systematically overestimate the degree of similarity—creating a “false-consensus” effect; see Lee Ross, David Green,
and Pamela House (1977). However, as noted by Dawes (1989), it is rational to use information about one’s own inclina-
tions to infer the likely inclinations of others, so not all consensus effects are false.
14
An alternative would be to assume that players care about fairness, as in Fehr and Klaus M. Schmidt (1999) and
Gary E. Bolton and Axel Ockenfels (2000). As shown in an earlier version of our paper, the main results would be
similar. Altruism is mathematically simpler, and by not building reciprocity directly into the preferences, we highlight
more starkly the effect of esteem on behavior.
15
In principle, players may also care about what other spectators think. We return to this issue below.
16
According to Donald E. Brown (1991), approval of generosity is a universal phenomenon—it exists in all known
cultures. People may seek approval for its own sake or for instrumental reasons, and the distinction between the two
is blurred. Evolutionary anthropology suggests that altruism (as well as concern for approval) has evolved because of
social rewards; for evidence, see for example Kristen Hawkes and Rebecca Bliege Bird (2002).
17
The linearity of utility in monetary payoffs has the implication that pride becomes an insignificant concern as
material stakes get large; to fit data across experiments with very different stakes, a nonlinear specification is probably
preferable.

One plausible generalization, suggested by Gary Charness and Rabin (2002), is to let the
degree of altruism depend on whether m_i is greater or smaller than m_j. Another plausible gen-
eralization, in the spirit of Levine (1998), is to let player i’s altruism depend on (beliefs about)
the opponent’s type, e.g., on p_i.18 Although both extensions would help to explain some observed
empirical patterns, we choose to keep the analysis as simple as possible.
Players are assumed to be risk-neutral, so when payoffs are uncertain, player i maximizes the
expectation of ui.

E. Strategies and Solution Concepts

Since we confine attention to two-person sequential games in which each player moves (at most)
once, the set of possible histories for player i when moving can be written H_i = {A_j ∪ ∅}.
A strategy for player i is thus a mapping s_i : Θ × [0, 1] × H_i → X_i. In words, player i’s (mixed)
action depends on the own type, the belief about the opponent’s type, and any observed prior
actions by the opponent. We seek pairs of strategies (s*_P, s*_A) and beliefs (p*_P, p*_A) such that s*_i is a
best response to s*_j given the beliefs p*_i, and we require that p*_P and p*_A satisfy Bayes’s rule when-
ever it applies. In addition, we insist that the beliefs are “reasonable” off the equilibrium path in
the sense that they satisfy the Intuitive Criterion of In-Koo Cho and Kreps (1987). For brevity,
we refer to such refined perfect Bayesian equilibria as intuitive equilibria.
Observe that our assumption of common knowledge about priors together with Bayesian
updating implies that players’ beliefs about the opponent’s conditional (type-dependent) beliefs
are correct on the equilibrium path. In equilibrium, the pride of player i is thus

(4)  û_ji = p*_i(a*_i, a*_j, θ_i) σ_H [p*_j(a*_i, a*_j, θ_H) θ_H + (1 − p*_j(a*_i, a*_j, θ_H)) θ_L]
          + (1 − p*_i(a*_i, a*_j, θ_i)) σ_L [p*_j(a*_i, a*_j, θ_L) θ_H + (1 − p*_j(a*_i, a*_j, θ_L)) θ_L].

Clearly, both players have an incentive to impress the opponent in order to obtain social esteem
and thus feel proud. However, due to being the first mover, the principal has an added incentive:
because σ_H > σ_L, it is more valuable for the agent to impress the principal the more likely it is
that the principal is altruistic. Thus, if the principal conveys an altruistic impression, raising p_A,
that will typically motivate the agent.
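To fix ideas, here is a minimal Python sketch (ours, not part of the original analysis) of the utility function (2) and the equilibrium pride term (4). The parameter values, function names, and the illustrative call at the end are hypothetical.

```python
# Minimal numerical sketch of utility (2) and equilibrium pride (4).
# All parameter values below are illustrative, not taken from the paper's examples.

theta = {"L": 0.0, "H": 1/3}   # altruism weights: 1 > theta_H > theta_L >= 0
sigma = {"L": 1.0, "H": 6.0}   # salience of the opponent's esteem: sigma_H > sigma_L > 0

def esteem(p_j):
    """Opponent j's esteem for i: E[theta_i], given j's posterior p_j that i is type H."""
    return p_j * theta["H"] + (1 - p_j) * theta["L"]

def pride(p_i, p_j_if_H, p_j_if_L):
    """Equation (4): player i's expected pride.

    p_i      -- i's posterior that the opponent is type H
    p_j_if_H -- opponent's posterior that i is type H, if the opponent is type H
    p_j_if_L -- opponent's posterior that i is type H, if the opponent is type L
    """
    return (p_i * sigma["H"] * esteem(p_j_if_H)
            + (1 - p_i) * sigma["L"] * esteem(p_j_if_L))

def utility(m_i, m_j, own_type, p_i, p_j_if_H, p_j_if_L):
    """Equation (2): u_i = m_i + theta_i * m_j + pride."""
    return m_i + theta[own_type] * m_j + pride(p_i, p_j_if_H, p_j_if_L)

# Illustration: an agent believed altruistic with probability 0.8 by either principal type,
# facing a principal it believes to be altruistic with probability 0.5.
print(utility(m_i=25, m_j=25, own_type="H", p_i=0.5, p_j_if_H=0.8, p_j_if_L=0.8))
```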
Having completed the model description, we are now ready to demonstrate how the concern
for esteem can shed light on behaviors in a variety of experiments.

II.  The Trust Game

McCabe, Rigdon, and Smith (2003) show that subjects’ behavior in the “Voluntary Trust
Game” (VTG) depicted in Figure 1A is robustly different from their behavior in the “Involuntary
Trust Game” (ITG) depicted in Figure 1B.19 Although the agent has exactly the same choice set
in both situations, the frequency of strategy R—reward trust—is much greater in the voluntary
trust game, where the principal can choose whether to trust (T) or not (NT), than in the invol-
untary trust game. Regardless of players’ altruism or other social preferences, this behavior is
inconsistent with any model in which only material outcomes matter.

18
A more flexible version of our model thus involves the utility function
(3)    u_i = m_i + k f(p_i) θ_i m_j + û_ji,
where k = 1 if m_i > m_j, k < 1 otherwise; and f(p_i) is increasing.
19
For a closely related experiment, see James C. Cox (2004).

[Figure 1 (game trees) omitted. Panel A: the voluntary trust game (VTG); Panel B: the involuntary trust game (ITG).]

Figure 1. Voluntary and Involuntary Trust

We now show that there are parameters of our model such that it unambiguously predicts more
play of R in VTG than in ITG. To be specific, we can induce the altruistic principal to play T, the
selfish principal to play NT, and the agent to always play N in ITG while playing N or R in VTG,
depending on type. (Readers who seek maximum simplicity may confine attention to the case
θ_L = σ_L = 0 and p⁰_L = p⁰_H = p in the analysis below.)
To identify the parameters, let us start by analyzing the problem of the agent in ITG. For the
agent to always choose N regardless of type, it is sufficient that the altruistic agent (who has most
to gain by acquiring high esteem) chooses to play N even if play of N yields the lowest possible
evaluation and play of R yields the highest possible evaluation. Formally, the condition is

30 + 15θ_H + θ_L [p⁰_H σ_H + (1 − p⁰_H) σ_L] > 25 + 25θ_H + θ_H [p⁰_H σ_H + (1 − p⁰_H) σ_L],

or, equivalently,
(5)  p⁰_H σ_H + (1 − p⁰_H) σ_L < (5 − 10θ_H)/(θ_H − θ_L).

Turning to the voluntary trust game, the altruistic agent should play R and the selfish agent
should play N. Given the proposed equilibrium expectations, the altruistic agent plays R if

25 + 25θ_H + θ_H σ_H > 30 + 15θ_H + θ_L σ_H,

or, equivalently,
(6)  σ_H > (5 − 10θ_H)/(θ_H − θ_L).

The selfish agent plays N if

25 + 25θ_L + θ_H σ_H < 30 + 15θ_L + θ_L σ_H,



or, equivalently,

(7)  σ_H < (5 − 10θ_L)/(θ_H − θ_L).

Finally, the altruistic principal plays T if

p⁰_H(25 + 25θ_H + θ_H σ_H) + (1 − p⁰_H)(15 + 30θ_H + θ_H σ_L) > 20 + 20θ_H + p⁰_H θ_L σ_H + (1 − p⁰_H) θ_L σ_L,

or, equivalently,

(8)  p⁰_H > (5 − 10θ_H − (θ_H − θ_L)σ_L)/(10 − 5θ_H + (θ_H − θ_L)(σ_H − σ_L)),

whereas the selfish principal plays NT if

p⁰_L(25 + 25θ_L + θ_H σ_H) + (1 − p⁰_L)(15 + 30θ_L + θ_H σ_L) < 20 + 20θ_L + p⁰_L θ_L σ_H + (1 − p⁰_L) θ_L σ_L,

or, equivalently,
(9)  p⁰_L < (5 − 10θ_L − (θ_H − θ_L)σ_L)/(10 − 5θ_L + (θ_H − θ_L)(σ_H − σ_L)).

One example that simultaneously satisfies all constraints is p⁰_L = p⁰_H = 2/5, θ_L = σ_L = 0,
θ_H = 1/3, σ_H = 6; a numerical check is sketched below. The example is robust to small changes
in all parameters, so we are ready to establish our first result.
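As a quick arithmetic check (ours), the following Python sketch verifies that this parameter vector satisfies conditions (5) through (9); variable names are illustrative.

```python
# Check of conditions (5)-(9) at p0_L = p0_H = 2/5, theta_L = sigma_L = 0,
# theta_H = 1/3, sigma_H = 6.

p0_H = p0_L = 2/5
theta_L, theta_H = 0.0, 1/3
sigma_L, sigma_H = 0.0, 6.0

exp_salience = p0_H * sigma_H + (1 - p0_H) * sigma_L   # expected salience in the ITG

checks = {
    "(5) ITG: both agent types play N":
        exp_salience < (5 - 10*theta_H) / (theta_H - theta_L),
    "(6) VTG: altruistic agent plays R after T":
        sigma_H > (5 - 10*theta_H) / (theta_H - theta_L),
    "(7) VTG: selfish agent plays N after T":
        sigma_H < (5 - 10*theta_L) / (theta_H - theta_L),
    "(8) VTG: altruistic principal plays T":
        p0_H > (5 - 10*theta_H - (theta_H - theta_L)*sigma_L)
               / (10 - 5*theta_H + (theta_H - theta_L)*(sigma_H - sigma_L)),
    "(9) VTG: selfish principal plays NT":
        p0_L < (5 - 10*theta_L - (theta_H - theta_L)*sigma_L)
               / (10 - 5*theta_L + (theta_H - theta_L)*(sigma_H - sigma_L)),
}

for label, holds in checks.items():
    print(label, holds)   # every condition evaluates to True at these parameters
```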

Proposition 1: There exists an open set of parameters such that, in the unique intuitive equi-
librium outcomes of the two games: (i) no agent type ever plays R in the involuntary trust game,
but an altruistic agent plays R in the voluntary trust game, and (ii) an altruistic principal trusts
in the voluntary trust game; a selfish principal does not.

We have essentially already proved that there exist equilibria with the desired properties.
Uniqueness is proven in the Appendix.
The intuition for the result comes in two parts. First, the altruistic principal is relatively more
inclined to play T than is the selfish principal (if p⁰_L = p⁰_H, as in the example, we need θ_H > θ_L).
Second, when T is a reliable signal of altruism, an altruistic agent is, in turn, willing to play
R (i.e., we need σ_H sufficiently large). When T does not signal the principal’s type, the agent’s
esteem benefit from signaling altruism is too small, so both agent types play N (i.e., we need σ_L
and σ_H sufficiently small).
More generally, regardless of p⁰_H and p⁰_L, we can show that both θ_H > θ_L and σ_H > σ_L are
necessary conditions.

Proposition 2: Generically, any vector of parameters satisfying the conditions of Proposition
1 has the properties θ_H > θ_L and σ_H > σ_L.

Proof:
See the Appendix.

Three further observations are in order. First, for a given average degree of altruism, if we
would allow altruism to be greater toward an altruistic opponent as in (3), that would weaken the
parameter requirements by inducing altruistic agents to discriminate more between voluntarily
and involuntarily trusting principals. Second, according to the experiment of McCabe, Rigdon,
and Smith (2003), about a third of the agents play R even in the ITG, whereas about a third play
N in the VTG. Therefore, in this example, we think of type L as belonging to the bottom third and
type H as belonging to the middle third of the distribution of altruism. Finally, given our basic
assumptions, there are no parameters such that agents would be more willing to play R in the
ITG than in the VTG. While our model is flexible enough to fit the facts, it thus also rules out
some patterns that we do not observe.

III.  The Hidden Costs of Control

We now turn to the intriguing evidence of Falk and Kosfeld (2006). In their Control Game
(our label) experiment, an agent has an endowment of 120 money units and can make transfers
to the principal. For every unit that the agent gives up, the principal receives two units. Hence,
the principal can receive an amount of at least 0 and at most 240. Before the agent decides how
much to transfer voluntarily, the principal has the opportunity to impose a compulsory transfer
of 10 (receiving 20), leaving the agent free to choose voluntary transfers above this minimum
level. Observe that the principal’s choices resemble those in the VTG, but that the agent’s choice
set is richer. In particular, the agent still has considerable freedom following the least trusting
action by the principal.
With the exceptions of Levine (1998) and Sliwka (2007), all previous models erroneously
predict that the principal will always control the agent. The reason is that only a relatively selfish
agent would ever give less than ten and trusting a selfish agent makes no sense in any of these
models. In stark contrast, Falk and Kosfeld find that the majority of principals trust their agent,
abstaining from the compulsory transfer. Moreover, such trust on average pays significantly bet-
ter than distrust. The most important features of Falk and Kosfeld’s evidence can be summarized
as follows:

1. The agents’ average transfer is 17.5 with control and 23 without. There are few trans-
fers above 40, but without control a significant fraction of the agents transfer exactly 40.
Transfers of 40 are much less common when the principal controls. About half of the agents
choose to transfer exactly 10 if controlled. Roughly, control implies that transfers of 0 are
replaced by transfers of 10, whereas transfers of 40 are replaced by transfers of about 20.

2. If control is exogenously imposed, in the sense that the principal must leave at least 10 to
the agent, the negative effect of control vanishes.

3. About 30 percent of the principals choose to control. (The remaining 70 percent trust.)

4. Principals make roughly correct predictions about agents’ actions following their own con-
trol choice. Controlling principals severely underestimate what the average transfer would
have been had they trusted.

We first attempt to fit qualitative aspects of the evidence. Then, we discuss quantitative
properties.
Translated to our model, Falk and Kosfeld grant the principal two actions, T and NT. If the
principal chooses T, the agent’s set of actions is [0, 120]; if the principal chooses NT, the agent’s

set of actions is [10, 120].20 Letting a denote the agent’s action, the utility functions can be
written

u_P = 2a − aθ_P + û_AP

and

u_A = −a + 2aθ_A + û_PA.

Note that the full evidence cannot be replicated by any model that insists on correct expecta-
tions; Falk and Kosfeld show that subjects have biased beliefs. Indeed, with correct expecta-
tions, all principals would have to trust almost regardless of their preferences—since material
payoffs are higher than under control (and esteem must be at least as high). Heterogeneous priors
are therefore necessary to fit all aspects of the evidence.
We now construct a fully separating equilibrium with the following three features: (a) only
the altruistic principal trusts; (b) the selfish agent always transfers the smallest admitted amount
(0 and 10, respectively); (c) the altruistic agent transfers y ∈ (10, 120) if trusted and x ∈ (10, y) if
not trusted. (We focus on a fully separating equilibrium because of its simple structure. As we
shall see, it is possible to construct semi-separating equilibria that better match the quantitative
features of the data.) Note that the inequality y > x is the defining feature of Theory Y.
Let us list the conditions for a least costly separating equilibrium. Start with the selfish agent’s
problem. Since principals fully separate in the equilibrium under consideration, a “controlled”
selfish agent is indifferent between transferring the minimum amount of 10 and mimicking the
altruist’s transfer of x if

−10 + 20θ_L + σ_L θ_L = −x + 2xθ_L + σ_L θ_H,

or, equivalently,
(10)  x = (10 − 20θ_L + σ_L(θ_H − θ_L)) / (1 − 2θ_L).

Analogously, a trusted selfish agent is indifferent between transferring 0 and y if

σ_H θ_L = −y + 2yθ_L + σ_H θ_H,

or, equivalently,
(11)  y = σ_H(θ_H − θ_L) / (1 − 2θ_L).

Consider here how our desiderata on x and y bound the set of admissible parameters. First,
x > 10 is not restrictive, as it is an immediate consequence of (10). To have x < 120, note that a
controlled altruistic agent is content to transfer x, which suffices for signaling purposes, rather

20
Strictly speaking, this violates our assumption that the agent’s action set is independent of the principal’s action.
However, (a) our model is easily extended to cover this case, and (b) given our rationality assumptions, Falk and
Kosfeld’s setup is isomorphic to one in which the principal, instead of controlling, puts a penalty of at least 10 on any
agent action below 10.

than any higher amount if θ_H ≤ 1/2. In order to have y < 120, we additionally need that σ_H <
(120 − 240θ_L)/(θ_H − θ_L). Finally, y > x implies σ_H − σ_L > (10 − 20θ_L)/(θ_H − θ_L). Clearly, there
is a large set of parameters that simultaneously satisfy these conditions.
Turn now to the principal’s problem. Conditional on the agent behavior above, the selfish prin-
cipal prefers control to trust if

p⁰_L(2x + (120 − x)θ_L + σ_H θ_L) + (1 − p⁰_L)(20 + (120 − 10)θ_L + σ_L θ_L)

   ≥ p⁰_L(2y + (120 − y)θ_L + σ_H θ_H) + (1 − p⁰_L)(0 + 120θ_L + σ_L θ_H),

or, equivalently,
(12)  p⁰_L ≤ (20 − 10θ_L − σ_L(θ_H − θ_L)) / (20 − 10θ_L + (y − x)(2 − θ_L) + (θ_H − θ_L)(σ_H − σ_L)).

Observe that the critical value is strictly smaller than one, and positive for small enough θ_H and
θ_L. Finally, the altruistic principal prefers trust to control if

p⁰_H(2x + (120 − x)θ_H + σ_H θ_L) + (1 − p⁰_H)(20 + (120 − 10)θ_H + σ_L θ_L)

   ≤ p⁰_H(2y + (120 − y)θ_H + σ_H θ_H) + (1 − p⁰_H)(0 + (120 − 0)θ_H + σ_L θ_H),

or, equivalently,

(13)  p⁰_H ≥ (20 − 10θ_H − σ_L(θ_H − θ_L)) / (20 − 10θ_H + (y − x)(2 − θ_H) + (θ_H − θ_L)(σ_H − σ_L)).

Again, the critical value is strictly smaller than one, and positive for small enough θ_H and θ_L.
It is clear that many parameter vectors satisfy all our desiderata and equilibrium conditions;
just let θ_H and θ_L be small, let p⁰_H be close to 1 and p⁰_L close to 0, let σ_L be small and let σ_H belong
to the open interval (σ_L + (10 − 20θ_L)/(θ_H − θ_L), (120 − 240θ_L)/(θ_H − θ_L)). Moreover, it is
straightforward to check that the constructed equilibria satisfy the Intuitive Criterion.

Proposition 3: There exists an open set of parameters such that there are intuitive equi-
librium outcomes of the Control Game with the following properties: (i) a selfish agent always
transfers the minimum amount, i.e., 10 if uncontrolled, 0 if controlled; (ii) an altruistic agent
always transfers more than the minimum amount, yet less if controlled than if uncontrolled; (iii)
a selfish principal controls and an altruistic principal does not control.

Proof:
See Appendix for the remaining step.

To ensure that there are no other equilibria, notably pooling equilibria in which both types
of principal control, we merely need to strengthen our conditions on p⁰_L and p⁰_H. Pooling among
principal types obviously vanishes as we simultaneously let p⁰_L go to 0 and p⁰_H go to 1.
Our analysis is also consistent with other important qualitative features of the evidence. First,
exogenously imposed control does not have the same negative effect as endogenously imposed
control. With exogenously imposed control, the agent can no longer infer that the principal is

selfish, and therefore the esteem incentive is stronger than under endogenous control.21 Second,
when asked about their reaction to the principal’s decision to control them, subjects who reduce
their donations express feelings of being restricted and distrusted (Falk and Kosfeld 2006,
fig. 2). Our model captures this phenomenon inasmuch as both types of agents would prefer to
be trusted. This is most obvious for the type L agent, who loses 10 without any compensation in
terms of pride when controlled. The type H agent keeps more money when controlled, but the
monetary gain is more than offset by a reduced sense of pride.
Although our two-type example clearly cannot perfectly match the great heterogeneity of
behavior in Falk and Kosfeld’s data, even the fully separating equilibria described in Proposition 3
can quantitatively emulate some key features. For example, let θ_L = 0, θ_H = 1/3, σ_L = 30,
σ_H = 120, p⁰_L < 1/9, and p⁰_H > 1/3. Then, the separating equilibrium exists, with x = 20 and
y = 40, which roughly matches the observed aggregates (a numerical check is sketched below).
From a quantitative point of view, this example is still not quite satisfactory. In particular, it is
hardly a coincidence that so many agents in the experiment choose to transfer exactly 40; this
feature remains, even as Falk and Kosfeld vary the control that is available to the principal. Also,
the large fraction of principals choosing to trust suggests that at least some of them refrain from
controlling because they expect a trusting choice to be materially more rewarding.22
The principals’ behavior can be accommodated in our two-type model by relaxing the con-
straint on the selfish principal’s belief. Ceteris paribus, if the selfish principal gets more opti-
mistic than assumed in equation (12), full separation is no longer attainable. Instead, there is
semi-separation, with the selfish principal randomizing between control and no control. In order
to accommodate the prevalence of transfers of 40, however, we would have to admit the more
general version of our utility function, (3).23 Specifically, we may emulate the agents’ behavior
by assuming that altruistic players place a weight larger than 1/2 on the opponent’s payoff if they
believe sufficiently strongly that the opponent is altruistic and if the opponent is not materially
ahead, but a weight smaller than 1/2 otherwise. In this case, pride does not affect the altruistic
agent’s choice if the principal trusts: as long as the selfish agent does not consider mimicking,
the optimal transfer is a_H = 40. When the principal does not trust, the transfer drops to the level
that is just sufficient for signaling purposes.24

IV.  Gift Exchange

George Akerlof (1982) famously proposed that employers set high wages in the hope that
workers, acquiring a positive sentiment for the employer, will return the gift by working harder.

21
The exogenous control treatment implemented by Falk and Kosfeld limits the action set of the agent to [10, 120].
Recall that the donation of altruistic agents is given by the need to separate from selfish agents. Given that the agent
has no information about the principal, the separating equilibrium condition pinning down the type H agent’s donation
becomes

−10 + 20θ_L + σ_L θ_L = −a_H + 2a_H θ_L + θ_H (p⁰_L σ_H + (1 − p⁰_L) σ_L).

Under endogenous control, due to full separation of principal types, the condition was

−10 + 20θ_L + σ_L θ_L = −a_H + 2a_H θ_L + σ_L θ_H.

The solution (a_H) must be larger when control is exogenous than when it is endogenous, because σ_H > σ_L.
22
A final quantitative concern is that the parameters that allow us to explain the Control Game behavior generally
appear to be larger than those that allowed us to explain the Trust Game behavior. As noted above, however, the dis-
crepancy is partly due to the two-type simplification.
23
Alternatively, we could assume that people want to signal fair-mindedness rather than altruism, like Andreoni
and Bernheim (2006) do.
24
Observe here that the related model of Levine (1998) fails to explain why the altruistic agent’s transfer does not
drop to the minimum level when the agent suspects the principal to be selfish (or spiteful).

Fehr, Kirchsteiger, and Riedl (1993) found experimental support for Akerlof’s gift exchange
hypothesis, a finding that has later been replicated in a variety of experimental settings. Charness
(2004) is particularly interesting to us, as he documents the role that intentions play.
In a simple two-person Gift Exchange Game, our model straightforwardly reproduces the link
between the wage level and the agent’s effort choice. To see this, suppose that the principal can
choose any nonnegative fixed wage w and that the agent can choose any nonnegative effort a. Let
m_A(a) and m_P(a) be continuous functions measuring material costs and benefits exclusive of w,
with m_A decreasing and m_P increasing in a. We make the normalization m_A(0) = m_P(0) = 0. To
ensure that incentive compatibility constraints always bind, we assume that m_A(a) + θ_H m_P(a) is
decreasing in a and that m_P(a) + θ_H m_A(a) is increasing in a. Finally, we make the assumption
that players have identical priors p⁰_L = p⁰_H = p⁰.25
As before, we may look for an equilibrium in which altruistic players separate from selfish play-
ers at minimum cost. Such an equilibrium has the properties that the selfish agent chooses a = 0,
the selfish principal chooses w = 0, and the numbers (w*, a_1, a_2) describing the altruistic principal’s
pay and the altruistic agent’s effort under low and high pay, respectively, are derived as follows: we
find a_1 from the selfish agent’s incentive compatibility constraint upon seeing a low wage,

(14)  σ_L θ_L = m_A(a_1) + θ_L m_P(a_1) + σ_L θ_H,

and a_2 from the selfish agent’s incentive compatibility constraint upon seeing a high wage,

(15)  σ_H θ_L = m_A(a_2) + θ_L m_P(a_2) + σ_H θ_H.

Finally, we find w* from the selfish principal’s incentive compatibility constraint,

(16)  p⁰(m_P(a_1) + θ_L m_A(a_1) + σ_H θ_L) + (1 − p⁰) σ_L θ_L

    = p⁰(m_P(a_2) + θ_L m_A(a_2) + σ_H θ_H) + (1 − p⁰) σ_L θ_H − (1 − θ_L) w*.

Our assumption that m_A(a) + θ_H m_P(a) is continuously decreasing ensures that a_1 and a_2 exist
and are unique. Likewise, the assumption that m_P(a) + θ_H m_A(a) is continuously increasing in a
ensures existence and uniqueness of w*.

Proposition 4: Under our assumptions, the two-person Gift Exchange Game has a unique
intuitive equilibrium outcome. The outcome is: (i) an altruistic principal pays a wage w* > 0
and a selfish principal pays a wage w = 0; and (ii) a selfish agent exerts effort a = 0 regardless
of wage; an altruistic agent exerts effort a_1 > 0 if w = 0 and effort a_2 > a_1 if w = w*.

Intuitively, the employer who offers a high wage signals altruism. Workers value esteem from
such employers more than esteem from employers with low expectations, and hence altruistic
workers put in more effort following a high wage offer than following a low wage offer.
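As a numerical illustration (ours) of equations (14) through (16), the sketch below solves for a_1, a_2, and w* under one hypothetical choice of material payoff functions, m_A(a) = −a and m_P(a) = 2a, which satisfies the stated monotonicity assumptions whenever θ_H < 1/2; all parameter values are illustrative.

```python
# Gift Exchange Game: solve (14)-(16) for one hypothetical specification
# m_A(a) = -a, m_P(a) = 2a (our choice; the paper leaves these functions general).

theta_L, theta_H = 0.0, 1/3
sigma_L, sigma_H = 1.0, 6.0
p0 = 0.5                          # common prior that the opponent is type H

def m_A(a): return -a             # agent's material cost of effort
def m_P(a): return 2 * a          # principal's material benefit of effort

# Equations (14) and (15) are linear in effort for these forms, so they solve directly.
a1 = sigma_L * (theta_H - theta_L) / (1 - 2 * theta_L)
a2 = sigma_H * (theta_H - theta_L) / (1 - 2 * theta_L)

# Equation (16): the selfish principal's indifference condition pins down w*.
low  = p0 * (m_P(a1) + theta_L * m_A(a1) + sigma_H * theta_L) + (1 - p0) * sigma_L * theta_L
high = p0 * (m_P(a2) + theta_L * m_A(a2) + sigma_H * theta_H) + (1 - p0) * sigma_L * theta_H
w_star = (high - low) / (1 - theta_L)

print(a1, a2, w_star)   # roughly 0.33, 2.0, 2.83: a_2 > a_1 > 0 and w* > 0, as in Proposition 4
```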
Notice that in expectation the altruists here obtain strictly lower material payoff in equilib-
rium than do the egoists. This would appear to be a generic feature of any two-player model
with homogeneous expectations and enough room for signaling. In other words, the principal’s

25
With priors p⁰_L < p⁰_H, we have so far been unable to prove that there is a unique intuitive equilibrium outcome.
Briefly, the problem is that, following pooling by the principal, the altruistic agent does not necessarily have most to
gain from an upward deviation from a pooling agent outcome.

material gain from trusting the agent in the Trust Game and in the Control Game is due to the
heterogeneous beliefs and/or the restricted room for signaling in these settings.
It is worth pointing out that the principal might, in general, signal altruism in other ways than
through generous conditions for the agent. For example, an employer who commits to donating
a substantial share of the profits to charity, or to offering cheap service to clients, also signals
altruism. Therefore, we do not wish to claim that in reality altruistic employers necessarily offer
better wages or working conditions; instead, they may choose to incorporate as nonprofit enter-
prises or acquire a reputation as generous donors to worthy causes. However, a thorough analysis
of this question requires a separate paper.

V.  Final Remarks

We have shown that people’s desire to appear prosocial may help to explain a variety of
otherwise puzzling behavioral patterns. In particular, we emphasize that principals who trust their
agents, thereby leaving themselves open to exploitation by selfish agents, encourage higher effort
by prosocial agents. Conversely, the model explains why careful ex ante planning by one con-
tracting party “indicates a lack of trust and blunts the demands of friendship” as suggested by
Macaulay (1963, 64).
However, since we have deliberately chosen highly stylized examples as a test bed for the
model, much work remains before we can reliably assess the importance of prosocial pride for
behavior in the laboratory and in the field. Below is an incomplete list of more or less open
issues.

  • Is the concern for esteem less important in field settings than in the laboratory?
  • How do people choose to signal their characteristics in the field (if at all)?
  • To what extent do people take pride in other characteristics than their prosociality?
  • To what extent is concern for pride affected by social context?

Before ending, we warn against the mechanical interpretation of our model that taking social
esteem into account always entails a weakening of optimal material incentives. This implication
appears reasonably robust in simple two-person examples where the principal can signal a proso-
cial attitude only through the material incentive scheme, but may well be misleading otherwise.
We have already hinted at what might happen if the principal has access to multiple signals.
Other modifications of the model, such as introducing a larger audience, could overturn the result
more dramatically. When agents seek esteem from others than their principal, the presence of
esteem incentives could well strengthen the case for material incentives. For example, getting a
CEO to engage in heavy cost cutting is difficult if the CEO cares more about esteem from the
firm’s employees than from the firm’s owners. We hypothesize that this is one reason why corpo-
rate restructuring is often carried out by a new manager subject to strong material incentives.

Appendix

Proof of Proposition 1:
We have already proved existence of the perfect Bayesian equilibrium. The proposed outcome
also trivially satisfies the Intuitive Criterion. In the ITG, (5) ensures that the agent would not
choose R regardless of the principal’s beliefs; in the proposed equilibrium of the VTG, neither
the principal nor the agent has any out-of-equilibrium action, so there are no out-of-equilibrium
beliefs. It thus remains to prove uniqueness.

In the ITG, (5) ensures that N is the agent’s unique best response. In the VTG, (7) rules out
the possibility that the selfish agent plays R. Three candidate equilibria remain: (i) Pooling of
principal types at T. This is ruled out by the fact that selfish agents always play N; by (9) selfish
principals thus always play NT. (ii) Pooling of agent types at N. Since a selfish principal never
plays T, the Intuitive Criterion induces the out-of-equilibrium belief that T is played by an altru-
istic principal. With this belief, the altruistic agent will play R—a contradiction. (iii) Pooling of
principal types at NT. The altruistic principal plays NT only under the belief that the altruistic
agent with strictly positive probability pools with the selfish agent at N. But we have just proved
that such pooling violates the Intuitive Criterion.

Proof of Proposition 2:
We first prove (generic) necessity of the strict inequality θ_H > θ_L. In a separating equilibrium
of the VTG, agents of both type H and type L feel the same (decision dependent) pride follow-
ing play of T by the principal. If playing R, the pride is û_A = σ_H θ_H; if playing N, the pride is
û_A = σ_H θ_L. Suppose θ_H = θ_L = θ. Then the pride is independent of the agent’s action, and behav-
ior is dictated by preferences over allocations alone. If θ < 1/2, both types of agent play N. If θ > 1/2,
both types of agent play R. Either case is inconsistent with separation of agent types in the VTG.
(We abstract from the nongeneric case θ = 1/2.)
We next prove necessity of the strict inequality σ_H > σ_L. Neglecting pride, the agent faces the
same choice in VTG as in ITG. In order for an agent of type H to play R in VTG and N in ITG,
the pride û_A must differ in the two cases. More precisely, we must have σ_H > p⁰_H σ_H + (1 −
p⁰_H) σ_L. If σ_H = σ_L, this inequality is violated (except in the nongeneric case p⁰_H = 1).

Proof of Proposition 3:
Let us posit that, conditional on the principal’s action, any agent action a below (above) the
equilibrium action a*_H leads the principal to believe that the agent is of type L (H). These beliefs
satisfy the Intuitive Criterion, and they trivially imply that our proposed strategy is a best
response for the agent. Since the principal has no out-of-equilibrium action, the agent has no out-
of-equilibrium beliefs. Thus, the proposed outcome is an intuitive equilibrium.

Proof of Proposition 4:
Step 1: Existence.—Consider the following set of beliefs. (i) The agent believes that any wage
strictly below w* is set by a selfish principal and any wage weakly above w* is set by an altruistic
principal. (ii) Following w 5 0, the principal believes that any effort strictly below a1 indicates
that the agent is selfish and any effort weakly above a1 indicates that the agent is altruistic.
Following w 5 w*, the principal believes that any effort strictly below a2 indicates that the agent
is selfish and any effort weakly above a2 indicates that the agent is altruistic. It is straightfor-
ward to check both that these beliefs sustain the equilibrium and that they satisfy the Intuitive
Criterion. Comparing (14) to (15), we also see that a2 . a1 due to the fact that sH . sL .

Step 2: Uniqueness.—Part 1: Fully separating equilibria. All fully separating equilibria have
the feature that selfish principals set w 5 0 and selfish agents set a 5 0, because these strate-
gies maximize their payoffs conditional on types being revealed. Conditional on the principal’s
type being revealed, the altruistic agent cannot reveal his type using an effort level lower than
a1 and a2, respectively, because the selfish agent would then prefer to mimic. Fully separating
equilibria with an effort level â exceeding a1 or a2, respectively, require that the principal’s out-
of-­equilibrium beliefs following an observed effort level above a1 or a2 but below â assign posi-
tive probability to the effort level being chosen by a selfish agent (otherwise, choosing â would

be irrational). However, relative to the equilibrium payoff, the selfish agent would then make a
loss, so these beliefs violate the Intuitive Criterion.
Part 2: Pooling equilibria. Given identical priors and Bayesian updating, both agent types have
identical beliefs following any wage offer. Equilibria in which the two agent types pool at some
effort level are then ruled out by the fact that the agent’s payoff function satisfies the single
crossing condition ∂EuP[uA]/∂a∂uA > 0 on our continuous and unbounded set of actions. (There is
always a profitable deviation for the altruist, given the restriction on out-of-equilibrium beliefs.)
Likewise, given identical priors, both types of principal face the same distribution of a’s fol-
lowing any wage choice. Therefore, the single crossing condition ∂EuA[uP]/∂w∂uP > 0 rules out
pooling by the principal.
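In display form, the two single-crossing conditions invoked in Part 2 are (same notation as in the text; nothing new is assumed):
\[
\frac{\partial E_{u_P}[u_A]}{\partial a \, \partial u_A} > 0,
\qquad
\frac{\partial E_{u_A}[u_P]}{\partial w \, \partial u_P} > 0,
\]
that is, the marginal return to effort is strictly increasing in the agent's type uA and the marginal return to the wage is strictly increasing in the principal's type uP, which is what delivers the profitable separating deviation from any candidate pooling outcome.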

References
Akerlof, George A. 1982. “Labor Contracts as Partial Gift Exchange.” Quarterly Journal of Economics,
97(4): 543–69.
Andreoni, James. 1989. “Giving with Impure Altruism: Applications to Charity and Ricardian Equiva-
lence.” Journal of Political Economy, 97(6): 1447–58.
Andreoni, James. 1990. "Impure Altruism and Donations to Public Goods: A Theory of Warm-Glow
Giving." Economic Journal, 100(401): 464–77.
Andreoni, James, and B. Douglas Bernheim. 2006. "Social Image and the 50:50 Norm." Unpublished.
Battigalli, Pierpaolo, and Martin Dufwenberg. Forthcoming. “Dynamic Psychological Games.” Journal
of Economic Theory.
Bénabou, Roland, and Jean Tirole. 2003. “Intrinsic and Extrinsic Motivation.” Review of Economic Stud-
ies, 70(3): 489–520.
Bénabou, Roland, and Jean Tirole. 2006. “Incentives and Prosocial Behavior.” American Economic
Review, 96(5): 1652–78.
Besley, Timothy, and Maitreesh Ghatak. 2005. “Competition and Incentives with Motivated Agents.”
American Economic Review, 95(3): 616–36.
Bewley, Truman F. 1999. Why Wages Don’t Fall during a Recession. Cambridge, MA: Harvard Univer-
sity Press.
Blascovich, Jim, Wendy Berry Mendes, Sarah B. Hunter, and Kristen Salomon. 1999. "Social 'Facilitation'
as Challenge and Threat." Journal of Personality and Social Psychology, 77(1): 68–77.
Bohnet, Iris, Bruno S. Frey, and Steffen Huck. 2001. “More Order with Less Law: On Contract Enforce-
ment, Trust, and Crowding.” American Political Science Review, 95(1): 131–44.
Bolton, Gary E., and Axel Ockenfels. 2000. “ERC: A Theory of Equity, Reciprocity, and Competition.”
American Economic Review, 90(1): 166–93.
Brekke, Kjell Arne, and Karine Nyborg. 2004. “Moral Hazard and Moral Motivation: Corporate Social
Responsibility as Labor Market Screening.” University of Oslo Department of Economics Memoran-
dum 25/2004.
Brennan, Geoffrey, and Philip Pettit. 2004. The Economy of Esteem: An Essay on Civil and Political
Society. New York: Oxford University Press.
Broberg, Tomas, Tore Ellingsen, and Magnus Johannesson. 2007. “Is Generosity Involuntary?” Econom-
ics Letters, 94(1): 32–37.
Brown, Donald E. 1991. Human Universals. New York: McGraw-Hill.
Camerer, Colin F. 1988. "Gifts as Economic Signals and Social Symbols." American Journal of Sociology,
94(Supplement): S180–S214.
Camerer, Colin F. 2003. Behavioral Game Theory: Experiments in Strategic Interaction. Princeton, NJ:
Princeton University Press.
Charness, Gary. 2004. “Attribution and Reciprocity in an Experimental Labor Market.” Journal of Labor
Economics, 22(3): 665–88.
Charness, Gary, and Martin Dufwenberg. 2006. “Promises and Partnership.” Econometrica, 74(6): 1579–
1601.
Charness, Gary, and Matthew Rabin. 2002. “Understanding Social Preferences with Simple Tests.” Quar-
terly Journal of Economics, 117(3): 817–69.
Cho, In-Koo, and David M. Kreps. 1987. “Signaling Games and Stable Equilibria.” Quarterly Journal of
Economics, 102(2): 179–221.
Cox, James C. 2004. “How to Identify Trust and Reciprocity.” Games and Economic Behavior, 46(2):
260–81.
Cottrell, Nickolas B., Dennis L. Wack, Gary J. Sekerak, and Robert H. Rittle. 1968. “Social Facilitation
of Dominant Responses by the Presence of an Audience and the Mere Presence of Others.” Journal of
Personality and Social Psychology, 9(3): 245–50.
Dana, Jason D., Daylian M. Cain, and Robyn M. Dawes. 2006. “What You Don’t Know Won’t Hurt Me:
Costly but Quiet Exit in Dictator Games.” Organizational Behavior and Human Decision Processes,
100(2): 193–201.
Dawes, Robyn M. 1989. “Statistical Criteria for Establishing a Truly False Consensus Effect.” Journal of
Experimental Social Psychology, 25(1): 1–17.
Denrell, Jerker. 1998. “Essays on the Economic Effects of Vanity and Career Concerns.” PhD diss. Stock-
holm School of Economics.
Dufwenberg, Martin, and Georg Kirchsteiger. 2004. “A Theory of Sequential Reciprocity.” Games and
Economic Behavior, 47(2): 268–98.
Falk, Armin, and Urs Fischbacher. 2006. “A Theory of Reciprocity.” Games and Economic Behavior,
54(2): 293–315.
Falk, Armin, and Andrea Ichino. 2006. “Clean Evidence on Peer Effects.” Journal of Labor Economics,
24(1): 39–57.
Falk, Armin, and Michael Kosfeld. 2006. “The Hidden Costs of Control.” American Economic Review,
96(5): 1611–30.
Fang, Hanming, and Giuseppe Moscarini. 2005. “Morale Hazard.” Journal of Monetary Economics,
52(4): 749–77.
Fehr, Ernst, and Bettina Rockenbach. 2003. "Detrimental Effects of Sanctions on Human Altruism."
Nature, 422: 137–40.
Fehr, Ernst, and Simon Gächter. 2002. “Do Incentive Contracts Undermine Voluntary Cooperation?”
Institute for Empirical Research in Economics Working Paper 34.
Fehr, Ernst, Georg Kirchsteiger, and Arno Riedl. 1993. "Does Fairness Prevent Market Clearing? An
Experimental Investigation.” Quarterly Journal of Economics, 108(2): 437–59.
Fehr, Ernst, and John A. List. 2004. “The Hidden Costs and Returns of Incentives––Trust and Trustwor-
thiness among CEOs.” Journal of the European Economic Association, 2(5): 743–71.
Fehr, Ernst, and Klaus M. Schmidt. 1999. “A Theory of Fairness, Competition, and Cooperation.” Quar-
terly Journal of Economics, 114(3): 817–68.
Frey, Bruno S., and Felix Oberholzer-Gee. 1997. “The Cost of Price Incentives: An Empirical Analysis of
Motivation Crowding-Out.” American Economic Review, 87(4): 746–55.
Funk, Patricia. 2007. "Social Incentives and Voter Turnout: Theory and Evidence." Unpublished.
Geanakoplos, John, David Pearce, and Ennio Stacchetti. 1989. “Psychological Games and Sequential
Rationality.” Games and Economic Behavior, 1(1): 60–79.
Glazer, Amihai, and Kai A. Konrad. 1996. "A Signaling Explanation for Charity." American Economic
Review, 86(4): 1019–28.
Gneezy, Uri, and Aldo Rustichini. 2000a. “A Fine Is a Price.” Journal of Legal Studies, 29(1): 1–17.
Gneezy, Uri, and Aldo Rustichini. 2000b. “Pay Enough or Don’t Pay at All.” Quarterly Journal of Eco-
nomics, 115(3): 791–810.
Haley, Kevin J., and Daniel M. T. Fessler. 2005. “Nobody’s Watching? Subtle Cues Affect Generosity in
an Anonymous Economic Game.” Evolution and Human Behavior, 26(3): 245–56.
Harbaugh, William T. 1998. “What Do Donations Buy? A Model of Philanthropy Based on Prestige and
Warm Glow.” Journal of Public Economics, 67(2): 269–84.
Hawkes, Kristen, and Rebecca Bliege Bird. 2002. “Showing Off, Handicap Signaling, and the Evolution of
Men’s Work.” Evolutionary Anthropology, 11(1): 58–67.
Hoffman, Elizabeth, Kevin A. McCabe, Keith Shachat, and Vernon L. Smith. 1994. “Preferences,
Property Rights, and Anonymity in Bargaining Games.” Games and Economic Behavior, 7(3):
346–80.
Holländer, Heinz. 1990. “A Social Exchange Approach to Voluntary Cooperation.” American Economic
Review, 80(5): 1157–67.
Holmström, Bengt. 1999. “Managerial Incentive Problems: A Dynamic Perspective.” Review of Economic
Studies, 66(1): 169–82.
Hume, David. 1896. A Treatise of Human Nature. Ed. Lewis Amherst Selby-Bigge. 3 vols. (Orig. pub.
1739.)
Janssen, Maarten C. W., and Ewa Mendys-Kamphorst. 2004. “The Price of a Price: On the Crowding Out
of Social Norms.” Journal of Economic Behavior and Organization, 55(3): 377–95.
Kandel, Eugene, and Edward P. Lazear. 1992. “Peer Pressure and Partnerships.” Journal of Political
Economy, 100(4): 801–17.
Kreps, David M. 1997. “Intrinsic Motivation and Extrinsic Incentives.” American Economic Review, 87(2):
359–64.
Levine, David K. 1998. “Modeling Altruism and Spitefulness in Experiments.” Review of Economic
Dynamics, 1(3): 593–622.
Macaulay, Stewart. 1963. “Non-Contractual Relations in Business: A Preliminary Study.” American Soci-
ological Review, 28(1): 55–70.
Mas, Alexandre, and Enrico Moretti. Forthcoming. “Peers at Work.” American Economic Review.
Maslow, Abraham H. 1943. “A Theory of Human Motivation.” Psychological Review, 50(4): 370–96.
McCabe, Kevin A., Mary L. Rigdon, and Vernon L. Smith. 2003. “Positive Reciprocity and Intentions in
Trust Games.” Journal of Economic Behavior and Organization, 52(2): 267–75.
McGregor, Douglas. 1960. The Human Side of Enterprise. New York: McGraw-Hill.
Mellström, Carl, and Magnus Johannesson. Forthcoming. "Crowding Out in Blood Donation: Was Titmuss
Right?" Journal of the European Economic Association.
Rabin, Matthew. 1993. “Incorporating Fairness into Game Theory and Economics.” American Economic
Review, 83(5): 1281–1302.
Rotemberg, Julio J. Forthcoming. “Minimally Acceptable Altruism and the Ultimatum Game.” Journal of
Economic Behavior and Organization.
Ross, Lee, David Greene, and Pamela House. 1977. "The 'False-Consensus' Effect: An Egocentric Bias
in Social Perception and Attribution Processes.” Journal of Experimental Social Psychology, 13(3):
279–301.
Seabright, Paul. 2004. “Continuous Preferences Can Cause Discontinuous Choices: An Application to the
Impact of Incentives on Altruism.” Center for Economic Policy Research Discussion Paper 4322.
Sliwka, Dirk. 2007. “Trust as a Signal of a Social Norm and the Hidden Costs of Incentive Schemes.”
American Economic Review, 97(3): 999–1012.
Smith, Adam. 1790. The Theory of Moral Sentiments. 6th edition. (First edition 1759.) London: A. Millar.
Sobel, Joel. 2005. “Interdependent Preferences and Reciprocity.” Journal of Economic Literature, 43(2):
392–436.
Soetevent, Adriaan R. 2005. “Anonymity in Giving in a Natural Context: A Field Experiment in 30
Churches.” Journal of Public Economics, 89(11–12): 2301–23.
Spier, Kathryn E. 1992. “Incomplete Contracts and Signalling.” RAND Journal of Economics, 23(3):
432–43.
Titmuss, Richard. 1970. The Gift Relationship: From Human Blood to Social Policy. London: George
Allen and Unwin.
Triplett, Norman. 1898. “The Dynamogenic Factors in Pacemaking and Competition.” American Journal
of Psychology, 9(4): 507–33.