
Published by Blackwell Publishing, 9600 Garsington Road, Oxford OX4 2DQ, UK and 350 Main Street, Malden, MA 02148, USA.

B. Douglas Bernheim and Raphael Thomadsen

The introduction of memory imperfections into models of economic decision making creates a natural role for anticipatory emotions. Their combination has striking behavioural implications. The paper first shows that agents can rationally select apparently dominated strategies. We consider Newcomb's Paradox and the Prisoner's Dilemma. We provide a resolution for Newcomb's Paradox and argue that it requires the decision maker to ascribe only a tiny weight to anticipatory emotions. For some ranges of parameters, it is possible to obtain cooperation in the Prisoner's Dilemma with probability arbitrarily close to unity. The second half of the paper provides a theory of reminders.

This paper studies decision problems with two twists: first, memory is imperfect, and second, the decision maker cares about anticipatory emotions. Previous work in this area has treated these phenomena separately. Recent analyses of decisions with imperfect memory include Piccione and Rubinstein (1994, 1997a), Benabou and Tirole (2002) and Mullainathan (2002).1 Separately, the notion of anticipatory emotions has been studied by Loewenstein (1987), Elster and Loewenstein (1992), Caplin and Leahy (2001, 2004), Koszegi (2002, 2004) and others. Why combine these two disparate lines of work? After all, many behavioural alternatives to the standard model of decision making have been proposed, and the number of potential combinations is enormous. Slogging through every permutation seems at best a tedious task with uncertain prospects for useful insights. Our decision to focus on this particular combination (imperfect memory and anticipatory emotions) is, however, not an arbitrary one. As we argue below, imperfect memory creates a natural role in decision making for anticipatory emotions, one that does not exist when memory is perfect. In addition, we argue that the combination of imperfect memory and anticipatory emotions yields some striking and surprising implications for behaviour. The first half of the paper shows that agents can rationally select apparently dominated strategies. We consider two applications: Newcomb's Paradox and the Prisoner's Dilemma. We provide a resolution for Newcomb's Paradox and argue that it requires the decision maker to ascribe only a tiny weight to anticipatory emotions. We also demonstrate that, under relatively weak conditions, it is possible to obtain cooperation in the Prisoner's Dilemma with probability arbitrarily close to unity. The second half of the paper provides a theory of reminders.
It shows that people may prefer to be uninformed, or to have coarse information, in situations where this would not be the case were either memory imperfections or anticipatory emotions eliminated. We exhibit a mechanism whereby the opportunity to leave a reminder can improve a

1 See also the symposium in Games and Economic Behavior dedicated to discussions of the issues raised by Piccione and Rubinstein, including Aumann et al. (1997a,b), Battigalli (1997), Gilboa (1997), Grove and Halpern (1997), Halpern (1997), Lipman (1997) and Piccione and Rubinstein (1997b), as well as a later paper by Segal (2000).


concurrent decision, even though the reminder does not change the information available when the decision is made. We also provide endogenous explanations for as-if overoptimism and behaviour associated with cognitive dissonance.

There are, of course, other theories purporting to explain a number of the phenomena studied in this paper. Philosophers have proposed a variety of possible resolutions for Newcomb's Paradox (see e.g. the anthology edited by Campbell and Sowden (1985)), but the puzzle has received little attention among economists (with a few exceptions such as Geanakoplos (1996)). Cooperation in the one-shot Prisoner's Dilemma has been attributed to a variety of factors, such as altruism and concerns for fairness, social image, and self-image. A preference for ignorance arises in Carrillo and Mariotti (2000), Benabou and Tirole (2002) and Koszegi (2002). Explanations for overoptimism and overconfidence appear in Rabin and Schrag (1999), Koszegi (2000), Hvide (2002), Benabou and Tirole (2002), Postlewaite and Compte (2003) and Van den Steen (2003). Sources of cognitive dissonance have been studied by Akerlof and Dickens (1982) and others. To our knowledge, the mechanisms proposed here are, however, novel.

The remainder of the paper is organised as follows. Section 1 discusses some issues concerning the modelling of imperfect memory. Section 2 explains how memory imperfections create a natural role in decision-making for anticipatory emotions. Section 3 explains how a rational player with imperfect memory and anticipatory emotions can rationally justify the selection of an apparently dominated strategy. Section 4 examines the role of reminders. Section 5 concludes.

The literature on decision making with imperfect memory encompasses two fundamentally different approaches. In one strand, the decision maker is naive, and always acts as if he has forgotten nothing (Mullainathan, 2002). In the other strand, the decision maker is sophisticated, and draws rational inferences concerning things he may have forgotten, given his memory technology (Piccione and Rubinstein, 1997a; Benabou and Tirole, 2002). In practice, behaviour is probably marked by both naivety and sophistication. In this paper, we explore the behaviour of sophisticated decision makers.

As emphasised by Piccione and Rubinstein, various issues that are immaterial when modelling decisions with perfect recall emerge as important when analysing problems with imperfect recall. These issues include the role of an initial planning stage and the decision maker's ability to change a strategy during its execution. There are several coherent ways to resolve these issues and no single correct model of imperfect recall. In this paper, we model imperfect recall using an approach proposed by Piccione and Rubinstein (and adopted implicitly by Benabou and Tirole (2002)), which they call modified multiself consistency. The decision problem is viewed as a game played by multiple incarnations of the decision maker, where a new incarnation takes over whenever his memory fails. Behaviour corresponds to an equilibrium of this game. Each self has the ability to deviate from its prescribed equilibrium strategy, but it cannot choose to deviate for other incarnations.

© Royal Economic Society 2005


For a simple illustration of this concept, see Figure 1, taken from Piccione and Rubinstein (1994). This is a one-player decision problem with imperfect recall. The individual starts at node A, and must choose either Left (L), placing him at node B, or Right (R), placing him at node C. He immediately forgets this choice, so B and C lie in the same information set. He must then choose either left (l) or right (r). Payoffs are determined as shown in the figure. With perfect recall, the individual would choose (R, r). To find modified multiself consistent outcomes with imperfect recall, we imagine that there are two separate players, both of whom receive the same payoff; one makes the choice at node A, and the other makes the choice at the information set I containing B and C. One equilibrium of this two-player game involves the choice R at A and the choice r at I. Since the individual knows he will choose r at I, it is in his interests to choose R at A. Similarly, even though he forgets his actual choice at A once he reaches I, he remembers that his strategy is to select R, so he infers that he is at C, in which case r is optimal. This is not, however, the only equilibrium. Another involves the choice L at A and the choice l at I. Since the individual knows he will choose l at I, it is in his interests to choose L at A. Similarly, even though he forgets his actual choice at A once he reaches I, he remembers that his strategy is to select L, so he infers that he is at B, in which case l is optimal. Both (R, r) and (L, l) are multiself consistent outcomes.

[Figure 1: the one-player decision problem with imperfect recall, after Piccione and Rubinstein (1994); the individual moves at node A and again at the information set I containing nodes B and C]


This example is instructive in part because it highlights some key assumptions. The outcome (L, l) survives because, at node A, the individual is unable to change his strategy for the entire game. If there were an initial planning period in which he could select his strategy, he would clearly benefit from picking (R, r) rather than (L, l).

We interpret the Piccione-Rubinstein solution concept as follows. The equilibrium decision strategy is a norm, in the sense that it describes how the individual normally handles a certain type of decision problem. When he confronts one of these decision problems, he can choose to deviate from the norm. A deviation may include the selection of an action other than the one prescribed by the norm, as well as the adoption of a plan of contingent actions (a continuation strategy) that differs from the norm. However, any time he experiences a memory lapse, he forgets any decision he may have made to follow a continuation plan that departs from the norm. He remembers the norm, and assumes he has always intended to follow it.

An alternative approach would be to assume the individual can reformulate his strategy at any point during the decision process, and remember the reformulation at later stages even if he has forgotten actions and events. In this case, the outcome (L, l) would not survive. At node A, the individual would deviate not only to R, but also to a continuation plan prescribing the choice r at I. Upon reaching I, he would forget his actual choice, but would remember his decision to adopt the strategy (R, r) rather than the strategy (L, l). From this he would infer that he is at node C, in which case r is optimal. Anticipating this later response makes the deviation at A attractive. As we said, there is no right or wrong way to model imperfect recall. The Piccione-Rubinstein solution concept is one plausible approach.
We find it appealing because it describes circumstances in which, upon experiencing a memory lapse, the decision maker asks himself what he usually does in similar circumstances.

To clarify the connection between imperfect memory and anticipatory emotions, we consider a simple decision setting. A decision maker (abbreviated throughout this article as DM) makes a choice in period 0 based on his available information, I0. He then receives or, in the case of imperfect recall, loses information. He waits in period 1, forming a new expectation about his ultimate payoff based on his available information, I1. Finally, in period 2, he receives his payoff, U. In period 2, the individual's emotional well-being depends only on the payoff received, U. In period 1, his emotional well-being depends on his anticipated period 2 payoff, V = E(U | I1). Following Caplin and Leahy (2001), we assume that, in period 0, he cares about his future emotional well-being both in period 1 and in period 2. We summarise his preferences over his future emotional states by the function W(V, U). For the moment, let us suppose that W is linear: W(V, U) = V + δU. Here and elsewhere in this paper, δ is simply the weight attached to the ultimate payoff relative to anticipatory emotions, rather than a discount factor. From the perspective of period 0, his expected utility is


E[W(V, U) | I0] = E[E(U | I1) | I0] + δE(U | I0).

For the moment, let us also assume the individual has perfect recall (so that I1 includes all of the information in I0). In that case, the law of iterated expectations applies, and

E[E(U | I1) | I0] = E(U | I0).   (1)

In that case,

E[W(V, U) | I0] = (1 + δ)E(U | I0).   (2)

The implications of the preceding observation are important. We have formulated a model in which the individual cares about anticipatory emotions in period 1. However, he acts in period 0 exactly as he would if he ignored anticipatory emotions entirely and cared only about the final period 2 outcome. In this setting, the individual may care about future anticipatory emotions, but this plays no role in decision making. As Caplin and Leahy point out, anticipatory emotions can affect behaviour when W is nonlinear. Consider, for example, the case where W(V, U) = w(V) + δU, where w is strictly concave. In this case, the individual is averse to variation in future anticipatory emotions (Caplin and Leahy call this anxiety). Thus, any information received between periods 0 and 1 makes him worse off ex ante (from the perspective of period 0). Similarly, he prefers not to receive information prior to making a period 0 decision unless this allows him to select an action that sufficiently improves his outcome.

With imperfect memory, anticipatory emotions matter even if the function W is linear, because the law of iterated expectations breaks down. For (1) to hold, I1 must contain at least all of the information in I0. If it omits some information, the chain of logic leading to (2) does not hold. Consider the following illustration. Suppose the state of the world is either bad, in which case the individual's payoff is 0, or good, in which case his payoff is 10. The two states occur with equal probability. The individual observes the state in period 0, but forgets it before period 1. In that case, E(U | I1) = 5 and E[E(U | I1) | I0] = 5. However, E(U | I0 = bad) = 0 and E(U | I0 = good) = 10. Clearly, the law of iterated expectations fails. To see how this can affect decisions, suppose that, in the bad state only, the individual has the option to take an action that cuts his losses, in which case his payoff is 1 instead of 0.
Imagine also that he remembers his action in period 1, even though he forgets the state. Ignoring anticipatory emotions, the best choice is obviously to cut losses when the state is bad. However, with anticipatory emotions and δ < 4, he chooses not to


cut losses. The reason is that he does not want to remind himself that the outcome will be bad. If he chooses not to cut losses upon observing the bad state, his payoff (from the perspective of period 0) is 5 + δ·0 = 5. If he cuts losses, his payoff (from the perspective of period 0) is 1 + δ·1 = 1 + δ. So he is better off not cutting losses when δ < 4. If the weight on the final outcome (relative to anticipatory emotions) is sufficiently large (δ ≥ 9), then there is a modified multiself consistent outcome in which the DM chooses to cut losses. With a decision to cut losses, his payoff (from the perspective of period 0) is again 1 + δ·1 = 1 + δ. If he chooses not to cut losses, his payoff (from the perspective of period 0) is 10 + δ·0 = 10 (in this case, he falsely infers from his equilibrium strategy that the state must have been good). Thus, he cuts losses if δ ≥ 9. What happens when δ ∈ (4, 9)? It turns out that outcomes necessarily involve randomisations between cutting and not cutting losses. Suppose that, upon observing the bad state, he cuts losses with probability k. When he chooses to cut losses in the bad state, his period 0 expected payoff is again 1 + δ·1 = 1 + δ. When he chooses not to cut losses in the bad state, his period 0 expected payoff is [1/(2 − k)]·10 + δ·0 = 10/(2 − k). He is indifferent between these choices, and willing to randomise, when 10/(2 − k) = 1 + δ, or

k = 2 − 10/(1 + δ).

When δ = 9, k = 1 (he cuts losses with certainty). When δ = 4, k = 0 (he never cuts losses). As δ declines from 9 to 4, k declines monotonically from 1 to 0. Thus, for all δ ∈ (4, 9), there is a mixed strategy outcome, involving some loss cutting. At the ends of this interval, the mixed strategy outcome converges to a pure strategy outcome (not cutting losses for δ ≤ 4, and cutting losses for δ ≥ 9), so the outcome is continuous in δ.
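The loss-cutting example can be checked numerically. The following sketch is my own illustration (the payoffs 0, 1 and 10 and the mixing probability k are from the text; the function names are invented for clarity):

```python
# Sketch of the loss-cutting example: the bad state pays 0 (1 if losses are
# cut), the good state pays 10, and the two states are equally likely.

def not_cut_given_never_cut(delta):
    # In the "never cut" outcome the action reveals nothing, so the period-1
    # expectation is the prior mean 5; the realised bad-state payoff is 0.
    return 5 + delta * 0

def cut(delta):
    # Cutting is remembered, so anticipation and realisation both equal 1.
    return 1 + delta * 1

def not_cut_given_always_cut(delta):
    # Deviating in the "always cut" outcome: the agent infers the state was
    # good and anticipates 10, but actually receives 0.
    return 10 + delta * 0

# Not cutting survives when delta < 4; cutting survives when delta >= 9.
assert not_cut_given_never_cut(3) > cut(3)
assert cut(10) > not_cut_given_always_cut(10)

# Mixed outcome for delta in (4, 9): cut with probability
# k = 2 - 10/(1 + delta), which makes the agent indifferent:
# 10/(2 - k) = 1 + delta.
for delta in (5, 6, 8):
    k = 2 - 10 / (1 + delta)
    assert 0 < k < 1
    assert abs(10 / (2 - k) - (1 + delta)) < 1e-12
```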

The logic that prescribes avoidance of strictly dominated strategies rests on a simple and appealing assumption: a player's opponents' strategies are causally independent of his own strategy. Regardless of which strategies a player's opponents have chosen, these remain fixed when he alters his own choice. If, for some reason, he does not believe in causal independence, then he could justify playing a dominated strategy. For example, if in the Prisoner's Dilemma he believes that, by cooperating, he somehow makes it more likely that his opponent cooperates, he might be able to justify behaving cooperatively rather than opportunistically. The general idea pursued in this section is that players may induce ex post subjective correlation between their choices by conditioning on something that is commonly observed but then forgotten. Conceptually, conditioning on something that is commonly observed leads in the direction of correlated equilibria but, of course, correlated equilibria never involve dominated strategies. Here, the added twist is that each player cares about the payoff he


expects to receive after he has forgotten the signal but before the payoff is realised. Though my decision does not causally affect the decision of my opponent, in this setting it can causally affect the inference that I make in the future about my opponent's decision, thereby altering my anticipatory payoffs, potentially for the better.

There are at least two different ways to introduce the correlation. One is to assume that a parameter of the game is realised randomly, observed by all players, and then forgotten. Another is to introduce commonly observed but irrelevant signals (sunspots). We use the first approach in the context of Newcomb's Paradox, and the second in the context of the Prisoner's Dilemma. We begin with Newcomb's Paradox because the analysis is more straightforward.

3.1. Newcomb's Paradox

Newcomb's Paradox is due to the physicist William Newcomb, and was popularised by the philosopher Robert Nozick (1969). The paradox involves interaction between a human and a superior being. In some variants of the paradox, we are invited to think of the superior being as God, but for reasons discussed below we prefer to view this player as a psychic. The superior being asks the human to pick between two boxes. One is open, and the other is closed. The open box contains one thousand dollars. The closed box contains either one million dollars or nothing. The subject has two choices: (1) take only the closed box or (2) take both boxes. So far, it seems as though it is clearly better to take both boxes. But there is a twist. Before presenting the subject with this problem, the being predicts the subject's choice. If it predicts the subject will choose only the closed box, it puts $1 million inside. If it predicts the subject will choose both boxes, it puts nothing inside the closed box. Moreover, the being has presented this same choice to hundreds of thousands of humans.
Some have chosen both boxes, and some have chosen the closed box, but the being has always predicted the choice correctly. What should the subject do? Choosing both boxes is the right answer from the perspective of dominance. Yet many people say they would choose only the closed box. Their logic: if I were to choose both boxes, then the being would know that I am the kind of person who would do this, and would put no money in the closed box. Yet this seems to suffer from a mistaken view of causation. Suppose I am the kind of person who would tend to choose only the closed box, and that the being somehow knows this. Then it will put $1 million in the closed box. When I make my decision, the being's decision is a fait accompli. It is clearly in my interests to resist my natural tendencies and choose both boxes, regardless of what the being has done.

How then can we rationalise the dominated choices that many people seem to favour? Here we attempt to provide an explanation that neither endows the superior being with the ability to defy causality, nor attributes to the human a belief that the superior being possesses this ability. We assume that the being is superior only in the sense that it is extremely knowledgeable and endowed with an ability to discern nuances of another's preferences; hence we prefer to think of it as a human psychic rather than as a divinity.


The flavour of our explanation is as follows. There is a preference characteristic which, in equilibrium, is related to my choice, and which the being observes (this is why he can predict my choice). Once I make my choice, I recall my choice but do not recall the characteristic on which my choice was conditioned. From the choice I made, I can use my equilibrium strategy to infer my characteristic and thereby infer the being's choice. Thus, if I have chosen both boxes, I'll think it more likely that my type is such that the being has put nothing in the closed box. If I have chosen only the closed box, I'll think it more likely that my type is such that the being has put $1 million in the closed box. When I make my choice, I recognise that the being's decision is a fait accompli, and does not depend on what I do. However, I care about my anticipatory state of mind once I have made my choice and before I learn the ultimate outcome. I would rather anticipate receiving the million dollars, so I choose only the closed box. As we will see, for this argument to be valid, I need only place a very small amount of weight on my anticipatory emotions.

3.1.1. The game

Consider a game played by two agents, a human (H) and a superior being (S). Choices and events unfold as follows.

1. Nature randomly selects a preference parameter, δ ∈ ℝ (with CDF F), for H, which will govern the weight placed on the actual outcome relative to anticipatory emotions (see below). The value of this parameter is observed both by H and by S.
2. S chooses one of two actions: Z (for zero) and M (for million).
3. Without having observed S's choice, H selects one of two actions: C (for closed) and B (for both). We will use x to denote H's choice.
4. H forgets the value of δ but recalls his action.
5. H waits to learn the outcome and forms an expectation of what it will be.
6. Payoffs are realised.

Payoffs are determined according to the matrix illustrated in Figure 2.
In each cell of this matrix, the first entry is H's payoff; v is a von Neumann-Morgenstern utility function, and its argument is the monetary reward. When describing the problem, we imagined that a = 1,000,000 and b = 1,000. The second entry in each cell is S's payoff. Notice that S's payoffs imply that it wants to put one million dollars in the box when H chooses only the closed box, and it wants to leave the box empty when H chooses both boxes; S cares only about making the correct prediction and not about the monetary payment. In stage 3, when making his decision, H thinks about how he will feel at both stage 5 and stage 6. In stage 3, his expected stage 6 utility is E[v(y) | x, δ]. In stage 5, his expected stage 6 utility is E[v(y) | x], and this determines his stage 5 utility, which he takes into account at stage 3. He maximises a weighted average of these two terms:

E[v(y) | x] + δE[v(y) | x, δ].

Suppose for the moment that H has a perfect memory. In that case, he always chooses x to maximise E[v(y) | x, δ] (since this represents expected utility, as of stage 3, for both stage 5 and stage 6). The standard dominance argument applies.
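With perfect memory, the dominance argument is mechanical: whatever S has done, taking both boxes pays exactly b more. A minimal check, assuming linear utility and the monetary stakes from the text (the dictionary layout is my own):

```python
# Newcomb payoffs with v(y) = y: closed-box prize a, open-box amount b.
a, b = 1_000_000, 1_000

payoff = {                      # H's money given (H's choice, S's choice)
    ('C', 'M'): a,     ('C', 'Z'): 0,
    ('B', 'M'): a + b, ('B', 'Z'): b,
}

# B dominates C: for either choice by S, B pays exactly b more than C.
for s in ('M', 'Z'):
    assert payoff[('B', s)] == payoff[('C', s)] + b
```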


Figure 2. Payoffs (first entry: H's payoff; second entry: S's payoff)

                         S's choice
                         M              Z
H's choice   C       v(a), 1        v(0), 0
             B       v(a+b), 0      v(b), 1

The only subgame perfect equilibrium involves H choosing B and S choosing Z. In equilibrium, the superior being does predict the human's behaviour with perfect accuracy. However, the human never picks only the closed box. This cannot explain Newcomb's Paradox because, as part of the scenario, we are asked to imagine that the being has correctly predicted both choices in the past. Now we look for the equilibria of the game with imperfect memory. We use the Piccione-Rubinstein multiself approach discussed in Section 1, and study sequential equilibria.

3.1.2. The main result

When H forgets δ, there is always an equilibrium in which H chooses B and S chooses Z with probability one. To verify this, we have to describe H's beliefs when, in period 5, he recalls that he has chosen C (which occurs with probability zero on the equilibrium path). We posit that H thinks S has chosen Z in this case. H's choice of B is then clearly optimal regardless of δ. This is not, however, the only equilibrium. In stating our main result, we will specialise to the case of linear utility, where v(y) = y. After the theorem, we explain how the result is modified in the presence of concave utility.


Theorem 1: Let δ* = (a − b)/b, and suppose that δ* lies in the interior of the support of F. There exists an equilibrium in which (i) for δ ≥ δ*, H chooses B and S correctly predicts this (choosing Z), and (ii) for δ < δ*, H chooses C and S correctly predicts this (choosing M).

Proof. First imagine that δ ≥ δ*. H's prescribed choice is B, so S's prescribed choice of Z is optimal. Given S's prescribed choices, is B optimal for H? If H chooses B, his payoff is

E(y | B) + δE(y | B, δ).

Let us start with E(y | B, δ). Knowing δ ≥ δ*, H can infer from S's equilibrium strategy that S has chosen Z, so E(y | B, δ) = b. Now consider E(y | B). Recalling that he has chosen B, H can infer from his own equilibrium strategy that δ ≥ δ*, in which case he concludes that S has chosen Z, so E(y | B) = b. Thus,

E(y | B) + δE(y | B, δ) = (1 + δ)b.

If H chooses C, his payoff is

E(y | C) + δE(y | C, δ).

Let us start with E(y | C, δ). Knowing δ ≥ δ*, H can infer from S's equilibrium strategy that S has chosen Z, so E(y | C, δ) = 0. Now consider E(y | C). Recalling that he has chosen C, H will infer from his own equilibrium strategy (incorrectly) that δ < δ*, in which case he concludes that S has chosen M, so E(y | C) = a. Thus,

E(y | C) + δE(y | C, δ) = a.

Comparing these two payoffs, we see that it is indeed optimal for H to choose B provided that (1 + δ)b ≥ a, or

δ ≥ (a − b)/b = δ*.

But this is the case we are examining. Now imagine that δ < δ*. H's prescribed choice is C, so S's prescribed choice of M is optimal. Given S's prescribed choices, is C optimal for H? If H chooses C, his payoff is

E(y | C) + δE(y | C, δ).

Let us start with E(y | C, δ). Knowing δ < δ*, H can infer from S's equilibrium strategy that S has chosen M, so E(y | C, δ) = a. Now consider E(y | C). Recalling that he has chosen C, H will infer from his own equilibrium strategy that δ < δ*, in which case he concludes that S has chosen M, so E(y | C) = a. Thus,


E(y | C) + δE(y | C, δ) = (1 + δ)a.

If H chooses B, his payoff is

E(y | B) + δE(y | B, δ).

Let us start with E(y | B, δ). Knowing δ < δ*, H can infer from S's equilibrium strategy that S has chosen M, so E(y | B, δ) = a + b. Now consider E(y | B). Recalling that he has chosen B, H will infer from his own equilibrium strategy (incorrectly) that δ ≥ δ*, in which case he concludes that S has chosen Z, so E(y | B) = b. Thus,

E(y | B) + δE(y | B, δ) = b + δ(a + b).

Comparing these two payoffs, we see that it is indeed optimal for H to choose C provided that (1 + δ)a ≥ b + δ(a + b), or

δ ≤ (a − b)/b = δ*.

But this holds (with strict inequality) in the case we are examining.

The equilibrium mentioned in the theorem has the desired features. The human subject makes both choices (the closed box only, and both boxes) with positive probability. The superior being always predicts this choice correctly, putting the million dollars in the closed box when the human selects it alone, and leaving the closed box empty when the human picks both boxes.

What is the intuition for this result? In equilibrium, the parties create correlation between their choices by conditioning on something commonly observed. The thing that is commonly observed is subsequently forgotten by H. Choosing the closed box only does not change what's in the closed box. But, because of the equilibrium correlation, it does cause the subject to subsequently infer that S has predicted closed box only, and consequently that the money is in the box. Creating this inference is valuable because it improves the subject's anticipatory feelings, and is more valuable when δ is smaller.

This analysis illustrates the general principle that, with imperfect memory and anticipatory emotions, people can rationally choose apparently dominated strategies. Is it also a good resolution of Newcomb's Paradox? Perhaps. Think of δ as a characteristic about which the individual is only dimly (or possibly even subconsciously) aware to begin with. After making his choice, his reasons are at least partially obscure to himself. However, by hypothesis, he thinks the superior being understands these reasons perfectly. It is therefore reasonable for him to conclude that the choice he made is correlated with the being's decision.
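The payoff comparisons in the proof can be verified numerically. The sketch below is my own check, assuming linear utility and the stakes a = 1,000,000 and b = 1,000 from the text:

```python
# Equilibrium check for Theorem 1 with v(y) = y.
a, b = 1_000_000, 1_000
delta_star = (a - b) / b        # the threshold delta* = (a - b)/b = 999

def no_profitable_deviation(delta):
    if delta >= delta_star:
        # Prescribed B (S plays Z): B yields (1 + delta)*b; deviating to C
        # yields a (H anticipates a but receives 0).
        return (1 + delta) * b >= a
    # Prescribed C (S plays M): C yields (1 + delta)*a; deviating to B
    # yields b + delta*(a + b) (H anticipates b but receives a + b).
    return (1 + delta) * a >= b + delta * (a + b)

assert delta_star == 999
assert all(no_profitable_deviation(d) for d in (0, 100, 998, 999, 1000, 5000))
```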

3.1.3. The weight attached to anticipatory emotions

Does this theory require H to put an implausibly large amount of weight on anticipatory emotions relative to the actual outcome, to rationalise choosing only


the closed box? Some simple calculations shed light on this issue. For a = 1,000,000 and b = 1,000, we have δ* = 999. Thus, even an H who places, say, 998 times as much weight on actual outcomes as on anticipatory emotions chooses only the closed box. Notice that δ* rises with a and falls with b. This makes intuitive sense. The subject will be less willing to risk losing a (by choosing both boxes) when a is larger, and will be less willing to pass on b (by choosing only the closed box) when b is larger. How does this result extend to cases with a strictly concave utility function, v? Precisely the same reasoning identifies two thresholds, δ1 and δ2, defined as follows:

δ1 = [v(a) − v(b)] / [v(b) − v(0)]   and   δ2 = [v(a) − v(b)] / [v(a + b) − v(a)].

With v strictly concave, v(b) − v(0) > v(a + b) − v(a), so δ1 < δ2. We construct equilibria as follows. For δ < δ1, we have H choose C and S choose M. For δ > δ2, we have H choose B and S choose Z. For any δ ∈ [δ1, δ2], we can either have H choose C and S choose M, or have H choose B and S choose Z. This indeterminacy gives rise to a class of equilibria. Resolving choice for all δ ∈ [δ1, δ2] in favour of C results in C being chosen for a wider range of δ than for v linear. Likewise, resolving choice for all δ in this range of indeterminacy in favour of B results in C being chosen for a smaller range of δ than for v linear. To illustrate, imagine that v(y) = √y, a = 1,000,000, and b = 1,000. Then δ1 ≈ 30.6 and δ2 ≈ 1937. Resolving the indeterminacy in favour of C wherever possible, H does not choose both boxes unless he places nearly two thousand times as much weight on the actual outcome as on the anticipation. This last conclusion becomes even more striking as we increase the curvature: for a still more sharply concave choice of v, δ2 = 5.01 × 10^8, so there is an equilibrium in which putting 500 million times as much weight on the actual outcome as on the anticipatory emotion is still consistent with choosing the closed box only.

3.2. The Prisoner's Dilemma

In the previous subsection, we justified the selection of an apparently dominated strategy by a rational agent (with imperfect memory and anticipatory emotions) in a game where players could condition choices on a randomly drawn and commonly observed feature of the game. It is also possible to do something similar by allowing players to condition choices on a randomly drawn and commonly observed but otherwise irrelevant signal. We illustrate this possibility in the context of the one-shot Prisoner's Dilemma. As we show, under relatively weak conditions it is possible to obtain cooperation with probability arbitrarily close to unity.
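Before turning to the Prisoner's Dilemma, the two concave-utility thresholds of Section 3.1.3 are easy to verify numerically. The sketch below is my own, assuming v(y) = √y and the monetary stakes used throughout:

```python
import math

# Concave-utility thresholds for v(y) = sqrt(y), a = 1,000,000, b = 1,000.
a, b = 1_000_000, 1_000
v = math.sqrt

delta1 = (v(a) - v(b)) / (v(b) - v(0))      # below this, only C/M survives
delta2 = (v(a) - v(b)) / (v(a + b) - v(a))  # above this, only B/Z survives

# Strict concavity makes the early gain v(b) - v(0) large and the late
# gain v(a + b) - v(a) small, so delta1 < delta2.
assert v(b) - v(0) > v(a + b) - v(a)
assert round(delta1, 1) == 30.6
assert round(delta2) == 1937
```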

Royal Economic Society 2005


3.2.1. The game

Consider a game played by two agents, A and B. Choices and events unfold as follows.

1. Nature randomly selects a signal x, distributed uniformly over the interval [0, 1]. (We choose the uniform distribution here for notational simplicity. Any continuous distribution will clearly suffice, since we can transform the variable to make its distribution uniform. Specifically, if F is the CDF of x, then the realised value of F(x) has a uniform distribution.) The value of this parameter is observed both by A and by B.
2. A and B simultaneously choose one of two actions: C (for cooperate) and N (for not cooperate).
3. A and B both forget the value of x.
4. A and B wait to learn the outcome, and form expectations of what it will be.
5. Payoffs are realised.

Payoffs are determined according to the matrix illustrated in Figure 3. In each cell of this matrix, the first entry is A's payoff and the second is B's. We will refer to player i's decision as xi, and his payoff as ui. We impose two restrictions on the parameters:

                  B's choice
                   C        N
A's          C    a, a     b, c
choice       N    c, b     d, d

Figure 3. The payoff matrix.


Assumption P1: c > a > d > b.
Assumption P2: d − b > c − a.

Assumption P1 is what makes this game a Prisoner's Dilemma. It implies that N is a dominant strategy for each player, but both would do better if both played C. Assumption P2 says that the gain from playing N rather than C is greater when one's opponent plays N rather than C; in combination with Assumption P1, it implies c + b < 2a, which means that mutual cooperation produces the greatest aggregate payoff. As we will see, this assumption identifies circumstances in which cooperation is achievable.

When making a decision in stage 2, a player thinks about how he will feel both at stage 4 and at stage 5. In stage 2, his expected stage 5 utility is E(ui | xi, x). In stage 4, his expected stage 5 utility is E(ui | xi), and this determines his stage 4 utility, which he anticipates at stage 2. He maximises a weighted average of these two terms: E(ui | xi) + δE(ui | xi, x).

Suppose for the moment that A and B have perfect memory. In that case, they always choose xi to maximise E(ui | xi, x) (since, as of stage 2, this expression governs both expected stage 4 and expected stage 5 utility). The standard dominance argument applies. The only subgame perfect equilibrium involves A and B choosing N. No cooperation is observed.

Now we look for the equilibria of the game with imperfect memory. Again we use the Piccione-Rubinstein multiself approach discussed in Section 1 and study sequential equilibria.

3.2.2. The main result

When A and B forget x, there is always an equilibrium in which they both choose N. To verify this, we have to describe a player's beliefs when, in period 4, he recalls that he has chosen C (which occurs with probability zero on the equilibrium path). We posit that i thinks j has chosen N in this case. Choosing N is then clearly optimal regardless of x.

This is not, however, the only equilibrium. As the following result shows, provided δ is not too large, there exists an equilibrium for which cooperation occurs with probability arbitrarily close to unity.

Theorem 2: Suppose that δ < (a − d)/(c − a). Then, for all ε > 0, there exists an equilibrium for which the players cooperate (play C) with probability greater than 1 − ε.

The proof of Theorem 2 appears in the Appendix available on the Journal's website (http://www.res.org.uk). For δ ∈ [(a − d)/(d − b), (a − d)/(c − a)] (a non-empty interval by Assumption P2), there is a simpler proof, which we offer here in the text to help build intuition. Consider strategies of the following form: for some x̂ ∈ (0, 1),


If x ≤ x̂, play C;
If x > x̂, play N.

Is it an equilibrium for both players to use this strategy? We verify that the prescribed choices are optimal for all values of x. Suppose first that x ≤ x̂ is observed. If i chooses C as prescribed, his payoff is

E(ui | C) + δE(ui | C, x).

Let us start with E(ui | C, x). Knowing x ≤ x̂, i can infer from j's equilibrium strategy that j has chosen C, so

E(ui | C, x) = a.

Now consider E(ui | C). Recalling that he has chosen C, i can infer from his own equilibrium strategy that x ≤ x̂, in which case he concludes that j has chosen C, so E(ui | C) = a. Thus,

E(ui | C) + δE(ui | C, x) = (1 + δ)a.   (3)

If i instead chooses N, his payoff is E(ui | N) + δE(ui | N, x). Let us start with E(ui | N, x). Knowing x ≤ x̂, i can infer from j's equilibrium strategy that j has chosen C, so

E(ui | N, x) = c.

Now consider E(ui | N). In stage 4, i will recall that he has chosen N. Given his equilibrium strategy, he will conclude (mistakenly) that x > x̂; from j's equilibrium strategy, he then infers that j has chosen N, so E(ui | N) = d. Thus,

E(ui | N) + δE(ui | N, x) = d + δc.   (4)

Combining (3) and (4), we have

[E(ui | C) + δE(ui | C, x)] − [E(ui | N) + δE(ui | N, x)] = (1 + δ)a − (d + δc) = a − d − δ(c − a) ≥ 0,   (5)

where the last inequality follows from the assumption that δ ≤ (a − d)/(c − a). Thus, upon observing x ≤ x̂, it is in i's interest to play C, as prescribed.

Now suppose that x > x̂ is observed. If i chooses N as prescribed, his payoff is

E(ui | N) + δE(ui | N, x).


Let us start with E(ui | N, x). Knowing x > x̂, i can infer from j's equilibrium strategy that j has chosen N, so E(ui | N, x) = d. Now consider E(ui | N). In stage 4, i will recall that he has chosen N. Given his equilibrium strategy, he will conclude that x > x̂; from j's equilibrium strategy, he then infers that j has chosen N, so E(ui | N) = d. Thus,

E(ui | N) + δE(ui | N, x) = (1 + δ)d.   (6)

If i instead chooses C, his payoff is E(ui | C) + δE(ui | C, x). Let us start with E(ui | C, x). Knowing x > x̂, i can infer from j's equilibrium strategy that j has chosen N, so E(ui | C, x) = b. Now consider E(ui | C). Recalling that he has chosen C, i will infer (mistakenly) from his own equilibrium strategy that x ≤ x̂, in which case he concludes that j has chosen C, so E(ui | C) = a. Thus,

E(ui | C) + δE(ui | C, x) = a + δb.   (7)

Combining (6) and (7), we have

[E(ui | N) + δE(ui | N, x)] − [E(ui | C) + δE(ui | C, x)] = (1 + δ)d − (a + δb) = δ(d − b) − (a − d) ≥ 0,

where the last inequality follows from the assumption that δ ≥ (a − d)/(d − b). Thus, upon observing x > x̂, it is in i's interest to play N, as prescribed. Notice that this equilibrium arises for all values of x̂. In particular, by taking x̂ closer and closer to unity, we can construct an equilibrium where cooperation emerges with probability arbitrarily close to unity.

There is also a Perfect Bayesian equilibrium where both players select C with probability one, but it is somewhat problematic. For this equilibrium, there is zero probability that player i learns in stage 4 that he has previously chosen N. By assigning to this out-of-equilibrium event the belief that j has chosen N with probability one, we ensure that C is i's best choice for all x. However, if we select any sequence of completely mixed strategies converging uniformly to the equilibrium strategies, the implied posteriors when i chooses N will, as we pass to the limit, place nearly unitary probability on the event that j has chosen C. Thus, the


equilibrium strategies and beliefs are not consistent in the sense of Kreps and Wilson (1982), at least when the strategy space is endowed with the topology of uniform convergence.2

Theorem 2 also encompasses cases where δ < (a − d)/(d − b). For the equilibrium described above, we impose the condition δ ≥ (a − d)/(d − b) to make sure that a player does not place so much weight on anticipatory emotions that he is tempted to select C (thereby producing the subsequent inference that his opponent has also chosen C) even when he is supposed to play N. Without this condition, a more subtle argument is required; the proof involves mixed strategies instead of pure strategies, and uses a more complex limiting argument to establish that one can find an equilibrium with cooperation probabilities arbitrarily close to unity (see the Appendix for details).

Theorem 2 may seem counterintuitive. After all, since N is a dominant strategy, and since the signal is pure noise, one would not ordinarily expect the players to choose C. The intuition for the theorem has to do with the failure of the law of iterated expectations. In equilibrium, the parties create correlation between their choices by conditioning on the commonly observed signal that both subsequently forget. While playing cooperatively does not cause one's opponent to play cooperatively, it does cause the player to subsequently infer that his opponent has played cooperatively. Creating this inference is more valuable when the players attach more importance to anticipatory emotions (that is, when δ is sufficiently small).
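The incentive calculations in (3)-(7) can be spot-checked numerically. A small sketch (the payoff values and the helper name are my own illustration, chosen to satisfy Assumptions P1 and P2):

```python
# Spot-check of the incentive conditions (3)-(7) for the cutoff strategy.
# The payoff numbers below are an illustrative choice satisfying P1 and P2.
def supports_cutoff_equilibrium(a, b, c, d, delta):
    # After observing x <= xhat: C yields (1 + delta)a, deviating yields d + delta*c.
    coop_ok = (1 + delta) * a >= d + delta * c
    # After observing x > xhat: N yields (1 + delta)d, deviating yields a + delta*b.
    defect_ok = (1 + delta) * d >= a + delta * b
    return coop_ok and defect_ok

a, b, c, d = 3, 0, 4, 2                          # c > a > d > b and d - b > c - a
lo, hi = (a - d) / (d - b), (a - d) / (c - a)    # the interval [0.5, 1.0]
print([supports_cutoff_equilibrium(a, b, c, d, t) for t in (0.4, 0.7, 1.2)])
# -> [False, True, False]: both conditions hold only for delta in [lo, hi]
```

For delta below the interval the temptation to signal cooperation to one's later self breaks the N-prescription, and for delta above it the stage-5 dominance of N breaks the C-prescription, exactly as in the text.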

3.2.3. Relation to observed behaviour

Laboratory experiments consistently show that subjects cooperate in the one-shot Prisoner's Dilemma game with non-trivial frequency. Formal game theory lacks an explanation of cooperation in this setting, though various explanations have been offered involving other-regarding preferences.

On occasion, one hears informal explanations for cooperation in the one-shot Prisoner's Dilemma such as the following. Each player believes the other player is like himself. He expects the other player to go through the same thought process when choosing a strategy. Thus, if a player concludes that he ought to play C, then he thinks it is likely that the opponent also reaches the same conclusion. Similarly, if a player concludes that he ought to play N, he thinks it is likely the opponent also settles on N. Under these conditions, it is argued, C is the better choice.

We have always regarded the logic of this argument as highly suspect. After all, the opponent will choose what the opponent will choose; by changing his choice

2 This conclusion is sensitive to the choice of topology. To illustrate, choose some sequence of scalars x̂j ∈ (0, 1) converging to unity. We construct a sequence of strictly mixed strategy profiles as follows: for each j, each player selects N with probability (1 − x̂j)² (and C otherwise) if x ∈ [0, x̂j], and selects N with probability x̂j (and C otherwise) if x ∈ (x̂j, 1). Upon observing that one player has selected N, the conditional probability that the other has played N converges to unity as j → ∞, which corresponds to the beliefs used to construct the Perfect Bayesian Equilibrium. While this strategy profile does not converge uniformly to the equilibrium profile, it does converge pointwise.


from N to C, or from C to N, player i cannot affect j's choice. To assume he can ascribes causality in a setting where no causal link can possibly exist (as in Newcomb's Paradox). And yet, Theorem 2 captures some of this intuition. The signal induces correlation between the choices of the players. Ex ante, i understands that, given any signal, j's choice will not change just because i's choice changes. However, i also understands that, at a later date prior to observing the outcome, he will have forgotten his signal, and that, from this intermediate perspective, it will appear that j's choice is correlated with his. In particular, if i has chosen C, he will ascribe greater likelihood to the possibility that his opponent has chosen C; if i has chosen N, he will ascribe greater likelihood to the possibility that his opponent has chosen N.

We remain agnostic (indeed, we are at least somewhat sceptical) about the extent to which our theory accounts for experimental results. On the favourable side, work by Shafir and Tversky (1992) suggests that, as predicted by our model, uncertainty about an opponent's choice plays a positive and significant role in producing unselfish choices.3 We acknowledge, however, that this phenomenon may have other causes. Irrespective of its applicability to specific laboratory experiments, our analysis provides a potentially reasonable explanation for the selection of apparently dominated cooperative strategies in situations where the elapsed time between a decision and an outcome is substantial, and where the outcome is sufficiently significant to generate anticipation. It also raises the possibility that, in behaving unselfishly, experimental subjects may follow rules of thumb that have rational origins.

4. Reminders

When a decision maker suffers from imperfect recall, it is natural to think that he or she may attempt to improve or supplement memory. Some strategies for improving memory are internal (e.g. rehearsal), while others are external (reminders). Here we focus on external reminders, though one could model internal mechanisms similarly.

With standard preferences, the analysis of reminders is relatively straightforward. The decision maker wishes to be as well-informed in the future as possible, and trades off the gains from better information against the cost

3 In standard treatments, these authors found that subjects cooperate in the one-shot Prisoner's Dilemma game in roughly 37% of trials. In other treatments, subjects were informed of their opponents' choices before they made their own decisions. When informed that the opponent had played selfishly, only 3% played unselfishly. When informed that the opponent had played unselfishly, only 16% played unselfishly. Note that both of these figures are significantly lower than the 37% figure obtained when the opponent's choice was not revealed. Thus, uncertainty about the opponent's choice plays a positive and important role in producing unselfish play. Our analysis roughly fits this pattern; it predicts cooperation only if the opponent's choice is not known when a player makes his or her choice. It does not, however, explain the fact that 3% and 16% (respectively), rather than 0%, play unselfishly when the opponent's choice is known. In contrast, theories of reciprocity would predict, counterfactually, that the frequency of unselfish play should be highest when the subject knows the opponent has played unselfishly. Reciprocity may nevertheless help to explain the experimental results as a contributory factor, inasmuch as the frequency of unselfish play is strictly positive when the DM knows the opponent has played unselfishly, and significantly higher than when the DM knows the opponent has played selfishly.


of providing it. When we add anticipatory emotions to the mix, things become considerably more interesting. The decision maker must also consider the effects of reminders on future emotional states. As we will see, this leads to a variety of striking and, in some cases, surprising behavioural implications.

4.1. Types of Reminders

How do reminders work? We consider two different reminder technologies: hard reminders and soft reminders. A hard reminder consists of hard, unequivocal information. A soft reminder consists of a pure message. To make this distinction more concrete, let us consider a simple example.4 Once a year, you set aside time to take stock of your personal finances. You keep pertinent information in a file, and you always start by reviewing the file's contents. In making your plans for each year, you want to take into account the performance of your portfolio over the past year. This information appears on monthly statements, but you tend to glance at these quickly and forget most of the details. Consequently, you make a habit of including this information in your personal finance file. One possibility is to place asset statements in the file. These are hard reminders. Another possibility is to write yourself notes and leave them in the file. These are soft reminders.

We model hard reminders as hard information, much as in the literature on disclosure; see e.g. Grossman and Hart (1980), Grossman (1981), Milgrom (1981) and Dye (1985). We model soft reminders as cheap talk, as in the literature on pure communication (Crawford and Sobel, 1982). We acknowledge, however, that our treatment of soft reminders is potentially controversial, and may be inappropriate in some circumstances. Our analysis implicitly assumes that the individual can attempt to lie to himself through soft reminders, and that he interprets all soft reminders in light of his incentives to do so. Whether this is plausible depends on the technology of memory. If a soft reminder triggers a specific memory of hard evidence, then there is no difference between a soft reminder and a hard reminder. Alternatively, if an individual is not in the habit of lying to himself, then deceptive soft reminders may be self-defeating.
When leaving an inaccurate soft reminder is a significant departure from the individual's normal practice, receiving the reminder may jog a specific memory of his intent to deceive himself. In that case, soft reminders would again function much like hard information. Our treatment of soft reminders is appropriate in situations where the individual can conjure up no specific memory, either of the original information or of his thought process in leaving the reminder. We believe there are situations that fit this description, as well as situations that do not.

4 Another concrete and highly vivid illustration appears in the motion picture Memento (Newmarket Films, 2001). After incurring an injury that destroys his ability to form new memories, the main character leaves himself hard reminders by taking photographs, and soft reminders by annotating the photographs and tattooing messages on his body. As in our analysis, he sometimes seeks to manipulate his subsequent beliefs and behaviour by crafting potentially misleading reminders.


Events and choices unfold as follows.

1. A payoff-relevant state of nature, x ∈ [0, 1], is realised.
2. The DM observes information s. Unless specified otherwise, s = x (the DM observes the state of nature). He then makes two decisions. First, he chooses reminders. For the hard reminder technology, he selects h ∈ {0, 1}, where h = 1 causes him to recall s at a later date, whereas h = 0 does not. For the soft reminder technology, he selects a message m. We could in principle allow the message space to be arbitrarily complex, but in this setting all that will matter is that it has at least two elements, so we suppose that m ∈ {0, 1}. Second, he chooses an initial action x1 ∈ {0, 1}.
3. If the DM has not left a hard reminder, he forgets s. He recalls x1, m, and also s if he has left a hard reminder.
4. The DM selects a second action, x2 ∈ {0, 1}. Time passes.
5. The outcome is realised.

The setting described above is extremely simple. This is intentional, since our object is to illustrate basic ideas as transparently as possible. The analysis extends directly to cases in which recall is probabilistic, where information is observed probabilistically rather than with certainty, where reminders are probabilistically misplaced (and therefore have no effect), and where the sets of potential actions are continuous rather than dichotomous.

The DM's state of emotional well-being in Stage 5, U, is given by

U = u1(x1, x) + u2(x2, x) − ch,

where c represents the cost of leaving a hard reminder. We assume that soft reminders are costless.

Assumption R: ui is differentiable and strictly increasing in x, with ui(0, 0) > ui(1, 0), ui(1, 1) > ui(0, 1), and ∂ui(0, x)/∂x < ∂ui(1, x)/∂x.

Assumption R tells us that there is some cutoff state, xi*, such that the first-best rule (from the perspective of Stage 5 well-being) involves choosing xi = 0 when x < xi*, and xi = 1 when x > xi*. At this cutoff, ui(0, xi*) = ui(1, xi*).
The DM's state of emotional well-being in Stage 4, V, is given by his expected well-being in Stage 5, conditional on his information:

V = E(U | m, x1) without a hard reminder;
V = E(U | m, x1, s) with a hard reminder.

In Stage 2, his utility is given by W(U, V). We will focus on cases where W(U, V) = δU + w(V). Unless specified otherwise, we assume that w is the identity function, so W(U, V) = δU + V.
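To fix ideas, here is a toy parameterisation satisfying Assumption R; the functional forms are illustrative inventions of mine, not taken from the paper:

```python
# Toy payoffs satisfying Assumption R (an illustrative invention): both
# u(0, .) and u(1, .) strictly increase in the state x, u(0,0) > u(1,0),
# u(1,1) > u(0,1), and u(1, .) is the steeper of the two.
def u(action, x):
    return 1.0 + x if action == 0 else 0.5 + 2.0 * x

# First-best cutoff x*: the state at which the two actions tie
# (here 1 + x = 0.5 + 2x, so x* = 0.5); bisection recovers it.
lo, hi = 0.0, 1.0
for _ in range(60):
    mid = (lo + hi) / 2.0
    if u(1, mid) < u(0, mid):
        lo = mid
    else:
        hi = mid
x_star = (lo + hi) / 2.0
print(round(x_star, 6))  # -> 0.5
```

Below the cutoff the flatter action 0 is first-best; above it the steeper action 1 is, which is all that Assumption R is designed to deliver.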


In some cases, we will also add a decision in Stage 1 (whether to acquire information). In these cases, we assume the DM acts to maximise the expected value of W(U, V).

This model has a number of components: an initial (Stage 2) action, a delayed (Stage 4) action, hard reminders and soft reminders. We will focus on one or two components at a time, shutting the others off in each case. In one case we will also vary the informativeness of the signal, s. To provide baseline results, we start by analysing the role of reminders without actions. Next we will focus on the initial action, first examining choice without reminders, and then indicating how this choice changes in the presence of hard and soft reminders. Finally, we will focus on the delayed action, once again examining choice without reminders, and then indicating how reminders affect this decision.

For concreteness, we suggest the following life-cycle planning problem as an application. Suppose the DM switches jobs and must relocate to a new city. The state of nature determines the generosity of the new employer's defined benefit pension plan and/or post-retirement medical benefits. The DM receives information about these resources upon accepting the position. Though he processes this information, it is relatively complicated and the details are easily forgotten. To jog his memory, he can leave himself reminders by creating an easily accessible file containing plan descriptions, notes, correspondence with his employer's human resources department, and other materials, or he can throw these materials out. Upon relocating, he buys a new house, and may also invest in other durable goods such as automobiles (the initial action, x1). He makes other consumption decisions (the second action, x2) after his detailed recollection of the retirement plan fades but before reaching retirement.
Though his memory is imperfect, he continues to observe x1 (he lives in the house and drives the cars), and he occasionally consults any materials he may have filed. During the pre-retirement period, his well-being depends on consumption (both x1 and x2), and on his anticipated happiness in retirement, which in turn reflects his unspent income and the generosity of the retirement plan.

4.3. Reminders Without Actions

Suppose there is no initial or delayed action (equivalently, x1 and x2 are both degenerate). The payoff in Stage 5 is simply given by u(x) − ch. Without anticipatory emotions, reminders serve no purpose. With anticipatory emotions, reminders may be useful because they can influence the DM's anticipated emotional state in Stage 4.

In this setting, however, soft reminders are useless. Irrespective of which x is realised, the DM would like to convince his Stage 4 incarnation that x is as high as possible. If two distinct messages lead to different inferences about x, he will always choose the message leading to a higher inferred value. Thus, no degree of separation is sustainable. The only equilibrium outcome involves babbling: the message is uninformative.

The case of hard reminders is more interesting. An equilibrium consists of a mapping from states of the world to a binary choice set, rh : [0, 1] → {0, 1}, where rh = 1 (rh = 0) indicates that he leaves (does not leave) a hard reminder,


along with a mapping from choices to beliefs about types, where the second is derived from the first where possible, and where the first is optimal given the second for all states of nature.

The following result tells us several things. The DM leaves hard reminders in good states and does not leave hard reminders in bad states. The dividing line between good and bad states depends on the cost of leaving a hard reminder. When the cost is zero, the DM leaves hard reminders in all states. When the cost is sufficiently high, he leaves no reminders. The theorem also provides an expression for equilibrium payoffs.

Theorem 3: Suppose hard reminders are available, and that there are no actions (initial or delayed). An equilibrium exists. Moreover, in any equilibrium, there exists x̂h(c) ∈ [0, 1] with rh(x) = 0 for x < x̂h(c), and rh(x) = 1 for x > x̂h(c). Furthermore, x̂h(0) = 0; lim_{c↓0} x̂h(c) = 0, and x̂h(c) = 1 for c sufficiently large. The DM's expected equilibrium payoff, from the perspective of Stage 1, is

{E[u(x)] − Pr[x > x̂h(c)]·c}(1 + δ).

Proof. First we show that, in any equilibrium, rh is weakly increasing in the state of nature. Suppose the DM has observed x. Let V0 denote the Stage 4 emotional state experienced in equilibrium when he does not leave a reminder (clearly, this cannot depend on x). From the perspective of Stage 2, the net gain from leaving a reminder is

[u(x) − c](1 + δ) − [V0 + δu(x)] = u(x) − V0 − (1 + δ)c.   (8)

Since this expression is strictly increasing in x, the desired conclusion follows immediately. Thus, we look for a cutoff value x̂h(c) such that rh(x) = 0 for x < x̂h(c), and rh(x) = 1 for x > x̂h(c).

Suppose c = 0. We claim that there exists an equilibrium in which the DM always leaves a hard reminder, and where he believes that, in the absence of a reminder, the state is x = 0 (which implies V0 = u(0)). Using (8), we see that the gain from leaving a reminder in any state x′ is u(x′) − u(0), which is strictly positive for all x′ > 0 and zero for x′ = 0, as required. To see that there is no equilibrium with a cutoff value x̂ > 0 (with the DM leaving a reminder for x′ > x̂ and no reminder for x′ < x̂), note that this would imply V0 = E[u(x) | x ≤ x̂]. Using (8), we see that the gain from leaving a reminder in any state x′ would then be u(x′) − E[u(x) | x ≤ x̂]. Notice that this expression is strictly positive for x′ slightly below x̂, which implies that the DM would leave a reminder upon observing x′, a contradiction.

Now suppose

c ≥ [u(1) − u(0)]/(1 + δ) ≡ c̄.   (9)

We claim that there exists an equilibrium in which the DM never leaves a hard reminder (which implies V0 = E[u(x)]). Using (8), we see that the gain from leaving a reminder in any state x′ is


u(x′) − E[u(x)] − (1 + δ)c,

which is strictly negative, as required. To see that there is no equilibrium with a cutoff value x̂ < 1 (with the DM leaving a reminder for x′ > x̂ and no reminder for x′ < x̂), note that this would imply V0 = E[u(x) | x ≤ x̂]. Using (8), we see that the gain from leaving a reminder in any state x′ would then be

u(x′) − E[u(x) | x ≤ x̂] − (1 + δ)c ≤ u(x′) − u(1) + {u(0) − E[u(x) | x ≤ x̂]},

which is strictly negative for all x′ ∈ (x̂, 1], a contradiction.

Now suppose that c ∈ (0, c̄). We claim that an equilibrium exists, and that the cutoff, x̂h(c), lies in (0, 1]. Using (8), we see that, with a cutoff x̂ (which implies V0 = E[u(x) | x ≤ x̂]), the gain from leaving a reminder in state x′ is

φ(x′, x̂, c) = u(x′) − E[u(x) | x ≤ x̂] − (1 + δ)c.

If φ(1, 1, c) ≤ 0, then there is plainly an equilibrium for which the DM never leaves a reminder. Suppose instead that φ(1, 1, c) > 0. Trivially, φ(0, 0, c) < 0. Since φ is continuous, there exists x̂h ∈ (0, 1) for which φ(x̂h, x̂h, c) = 0. Since φ(x′, x̂h, c) > 0 for x′ > x̂h and φ(x′, x̂h, c) < 0 for x′ < x̂h, there is clearly an equilibrium for which the DM leaves a reminder in states x′ > x̂h and does not leave a reminder in states x′ < x̂h.

Next we show that lim_{c↓0} x̂h(c) = 0. We know that φ(x̂h(c), x̂h(c), c) ≤ 0 (with equality when x̂h(c) < 1). Thus, as c ↓ 0, u(x̂h(c)) − E[u(x) | x ≤ x̂h(c)] → 0. But this can occur only if x̂h(c) → 0.

In any of the equilibria described above, the DM's expected payoff from the perspective of Stage 2 upon observing state x′ is [u(x′) − c](1 + δ) if x′ > x̂h(c), and E[u(x) | x ≤ x̂h(c)] + δu(x′) for x′ < x̂h(c). Thus, his expected payoff prior to observing the state is

Pr[x′ < x̂h(c)]·E{E[u(x) | x ≤ x̂h(c)] + δu(x′) | x′ < x̂h(c)} + Pr[x′ > x̂h(c)]·E{[u(x′) − c](1 + δ) | x′ > x̂h(c)}
= Pr[x′ < x̂h(c)]·E[u(x′) | x′ < x̂h(c)](1 + δ) + Pr[x′ > x̂h(c)]·E[u(x′) − c | x′ > x̂h(c)](1 + δ)
= {E[u(x)] − Pr[x′ > x̂h(c)]·c}(1 + δ),

as claimed.

Given the close parallel to standard disclosure problems (Grossman and Hart, 1980; Grossman, 1981; Milgrom, 1981; Dye, 1985), nothing about this theorem is particularly surprising. However, in this context, it has three important implications.

First, when reminders are costless, the DM ends up with full information in Stage 4. This observation will be relevant when we discuss the effects of reminders on actions.

Second, when reminders are costly, the DM reminds himself of favourable states of nature, and does not remind himself of unfavourable states. Thus, our model endogenously produces a systematic memory bias: people tend to recall


favourable information (e.g., in our proposed application, about the generosity of an employer's pension plan), and forget unfavourable information.5

With this observation in mind, a small extension of our model endogenously produces a phenomenon associated with cognitive dissonance: the tendency to pay attention to information that confirms beliefs supporting prior decisions, and to ignore contradictory information (Akerlof and Dickens, 1982). In particular, imagine that in some initial stage, the DM must choose either left, in which case his payoffs are given by u(x) − ch, or right, in which case his payoffs are given by u(1 − x) − ch. In other words, the choice of right, rather than left, simply reverses which states are favourable and which are unfavourable. Given each choice, the solutions to the continuation problems are symmetric and described by Theorem 3. Thus, when the DM has chosen left, he endogenously forgets information when it tells him that x is low and that right would have been a better choice. Similarly, when he has chosen right, he endogenously forgets information when it tells him that x is high and that left would have been a better choice.

Third, equilibrium payoffs are non-monotonic in the cost of reminders. When reminders are free, the ex ante expected equilibrium payoff is E[u(x)]. Likewise, when reminders are sufficiently costly, the DM does not use them in any state of nature, and again the ex ante expected equilibrium payoff is E[u(x)]. For intermediate values of c, the DM uses reminders in some states of the world, and his expected equilibrium payoff is less than E[u(x)]. To understand why, note that leaving a free collection of state-specific reminders is a wash from an ex ante perspective: losses in some states exactly offset gains in others. Accordingly, ex ante, the DM's expected payoff falls by the expected cost of the reminders he leaves after learning x.
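This non-monotonicity can be seen in a closed-form special case. A sketch under my own parameterisation (u(x) = x, x uniform on [0, 1], so the indifference condition from (8) pins down the cutoff explicitly):

```python
# Closed-form illustration of the non-monotonic payoff in Theorem 3, under an
# illustrative parameterisation u(x) = x with x uniform on [0, 1]: the
# indifference condition from (8), u(xh) - E[u | x <= xh] - (1 + delta)c = 0,
# reduces to xh - xh/2 = (1 + delta)c, i.e. xh(c) = 2(1 + delta)c, capped at 1.
def cutoff(c, delta):
    return min(1.0, 2.0 * (1.0 + delta) * c)

def ex_ante_payoff(c, delta):
    # Theorem 3's expression: {E[u(x)] - Pr(x > xh(c)) * c} * (1 + delta).
    xh = cutoff(c, delta)
    return (0.5 - (1.0 - xh) * c) * (1.0 + delta)

delta = 1.0
for c in (0.0, 0.1, 0.2, 0.25, 0.5):
    print(c, cutoff(c, delta), ex_ante_payoff(c, delta))
# The payoff equals E[u(x)](1 + delta) = 1.0 at c = 0 and for c >= 0.25
# (no reminders are left in equilibrium), but dips below 1.0 in between.
```

The dip for intermediate c is exactly the welfare loss from the reminders the DM cannot help leaving once he has observed a favourable state.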
In this setting, he is better off not receiving the information to begin with; he would certainly not invest to acquire it, and would even pay to avoid it. If he nevertheless receives information, he is better off without a reminder technology.

This third implication is reminiscent of an existing result due to Caplin and Leahy (2001), who point out that a decision maker may prefer ignorance when he is averse to variation in future emotional states. To illustrate the implications of this point in our setting, imagine that w is strictly concave and that c = 0 (reminders are costless). Then once again the DM will leave reminders in all states (this is just the standard disclosure result). His expected equilibrium payoff (from the perspective of Stage 2) is

∫₀¹ w(u(x))f(x)dx + δ∫₀¹ u(x)f(x)dx < w(∫₀¹ u(x)f(x)dx) + δ∫₀¹ u(x)f(x)dx.
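The displayed comparison is just Jensen's inequality applied to the anticipatory term; a quick numerical check under illustrative choices of mine (u(x) = x, w(v) = √v, x uniform on [0, 1]):

```python
import math

# Jensen's-inequality check for the displayed comparison. The delta-terms on
# the two sides are identical, so it suffices to compare E[w(u(x))] with
# w(E[u(x)]). Illustrative choices: u(x) = x, w(v) = sqrt(v), x uniform on
# [0, 1], approximated by midpoint quadrature.
n = 100_000
grid = [(k + 0.5) / n for k in range(n)]
lhs = sum(math.sqrt(x) for x in grid) / n   # E[w(u(x))], about 2/3
rhs = math.sqrt(sum(grid) / n)              # w(E[u(x)]) = sqrt(0.5), about 0.7071
print(round(lhs, 4), round(rhs, 4), lhs < rhs)  # -> 0.6667 0.7071 True
```

With w strictly concave the gap is strict, so full revelation through costless reminders is strictly worse ex ante than ignorance, as the text argues.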

5 In a model where an excessively positive view of one's ability improves delayed choices from an ex ante perspective (by offsetting a distortion arising from present-biased preferences), Benabou and Tirole (2002) show that a decision maker will tend to repress unfavourable information. The mechanism is related to ours in that the decision maker attempts to forget less favourable information so his subsequent beliefs will be more positive.


The right-hand side of this expression would be his expected equilibrium payoff (from the perspective of Stage 2) if he either received no information or was able to refrain from leaving reminders. Once again, leaving reminders makes him worse off, but he does it anyway. This extends Caplin and Leahy's result by showing that ignorance is preferable when w is strictly concave, even though the decision maker could in principle disregard the information by leaving no reminders. Theorem 3 goes beyond this result and identifies conditions under which an agent may prefer ignorance even when he is not averse to variation in future emotional states.

The third implication sounds like an example of dynamically inconsistent preferences: in Stage 1, the DM would like to avoid leaving reminders in Stage 2, but is unable to follow through once Stage 2 arrives. However, it is a different phenomenon. Given the equilibrium inferences he will make in Stage 4, the DM's Stage 1 self concurs with his Stage 2 self's decision in each state of nature. He wishes to constrain his future actions not merely to change the actions themselves (as would be the case with dynamically inconsistent preferences), but also to change inferences (which is what makes the change in actions desirable).

Theorem 3 is somewhat related to a result in Caplin and Leahy's (2004) analysis of information transmission between doctors and patients, where doctors are empathetic and patients experience anticipatory emotions. In stage 2 of our model, the DM is in the position of informing his later self concerning likely outcomes, much as an empathetic doctor would inform a patient concerning diagnosis, necessary procedures, and prognosis. Caplin and Leahy consider doctor-patient communication through costless hard information (a verifiable diagnosis).
In a model with binary information, they demonstrate that, if anxiety primarily depends on pessimism rather than on the degree of uncertainty, information revelation is complete (see their Proposition 2).6 Specialising to the case where c = 0, Theorem 3 provides the same result in a setting with continuous information.

4.4. The Initial Action

Now let us suppose there is an initial action (but no delayed action). In formulating our model, we assumed that the initial action is recalled even though the signal is forgotten. Alternatively, one could imagine that the initial action is forgotten as well. In that case, anticipatory emotions in Stage 4 would not affect the decision, and the DM would simply make the first-best choice. The problem becomes more interesting when the initial action is recalled. In that case, the action also serves the role of a reminder. An equilibrium consists of a mapping from states of the world to choices, r₁ : [0, 1] → {0, 1} (not to be confused with r_h from the previous section), along with a mapping from choices to beliefs about types, where the second is derived from the first where possible, and where the first is optimal given the second for all states of nature. Since the

6 In their model, patients also provide doctors with information concerning their susceptibility to different sources of anxiety, but this aspect of their analysis has no parallel in the current paper because here the DM knows his true preferences.


problem involves signalling, we also rule out some implausible outcomes by imposing the intuitive criterion (Cho and Kreps, 1987).

Theorem 4: Suppose there is an initial action, no delayed action and no reminders. An equilibrium exists. Moreover, in any equilibrium, there exists x̂₁ ∈ [0, x₁*) with r₁(x) = 0 for x < x̂₁, and r₁(x) = 1 for x > x̂₁.

Proof. It is easy to check that, under Assumption R, the choice must be weakly increasing in the state of nature. Thus, we look for a cutoff value x̂₁ such that r₁(x) = 0 for x < x̂₁, and r₁(x) = 1 for x > x̂₁. If x̂₁ ∈ (0, 1), then the DM must, in state x̂₁, be indifferent between the two choices, so

E[u₁(0, x) | x ≤ x̂₁] + δu₁(0, x̂₁) = E[u₁(1, x) | x ≥ x̂₁] + δu₁(1, x̂₁).   (10)

Since u₁(1, x) − u₁(0, x) is strictly increasing in x, (10) implies that the DM strictly prefers x₁ = 0 when x < x̂₁ and x₁ = 1 when x > x̂₁, exactly as required in an equilibrium. This configuration trivially satisfies the intuitive criterion since no action is chosen with zero probability in equilibrium.

Consider the question of existence. Setting x̂₁ = 1, it is clear that the right-hand side of (10) exceeds the left (from this observation it is easy to check that applying the intuitive criterion rules out the possibility that x̂₁ = 1 with r₁(1) = 0). If

u₁(0, 0) + δu₁(0, 0) > E[u₁(1, x)] + δu₁(1, 0)   (11)

then there is clearly an intermediate solution to (10) on (0, 1). If (11) does not hold, then there is an equilibrium where r₁(x) = 1 for all x (and, upon recalling x₁ = 0, the DM infers that x = 0), that is, x̂₁ = 0. In this case, indifference may not hold for x = x̂₁, so we set r₁(0) = 1. If, upon observing x₁ = 0 (a zero-probability event), the DM infers that the state is x = 0, the intuitive criterion is satisfied (this is easy to check).

Now we show that x̂₁ < x₁*. For all x′ ≥ x₁*, we know that u₁(0, x′) ≤ u₁(1, x′), and therefore

E[u₁(0, x) | x ≤ x′] < u₁(0, x′) ≤ u₁(1, x′) ≤ E[u₁(1, x) | x ≥ x′].

It follows that (10) cannot hold for any such x′. Combining this observation with existence establishes the claim.

The fact that x̂₁ < x₁* is no great surprise. Self-signalling distorts choices in favour of alternatives that lead to more favourable inferences and therefore more positive Stage 4 anticipatory emotions. However, the result has two important implications. First, without reminders, the DM acts as if he is excessively optimistic. Observing only his choices (e.g. concerning housing and durable purchases in our proposed application), one could offer a rationalisation based on the assumption that he is overly optimistic (concerning the generosity of his employer's retirement plan in

Royal Economic Society 2005

2005 ]

297

the application), attaching too great a likelihood to favourable states of the world (since x̂₁ < x₁*). This is significant because the phenomenon of excessive optimism is reasonably well documented, and has recently received a significant amount of attention (Rabin and Schrag, 1999; Koszegi, 2000; Hvide, 2002; Benabou and Tirole, 2002; Postlewaite and Compte, 2003; Van den Steen, 2003). Our model produces excessively optimistic behaviour endogenously. Though the individual's expectations are, on average, correct, he acts as if he is excessively optimistic in an attempt to fool himself. Second, without reminders, equilibrium choices are not first-best. As with the case of a delayed action, there is a potential role for reminders. Both of these conclusions extend in a straightforward way to efficient separating equilibria in environments where the set of actions is continuous. They reflect a simple property of signalling equilibria: the sender distorts choices in the direction of the types he is trying to imitate (here, those with more favourable information). How do reminders affect decisions? The most obvious mechanism, which we study in the next subsection, is to equip the decision maker with more information at the point in time when he makes a choice. However, this is not the only mechanism. Here, we examine the influence of reminders on the initial action, assuming there is no delayed action. In our model, initial actions are taken in Stage 2 along with decisions to leave reminders. Consequently, reminders do not help inform these decisions. Nevertheless, as we show, reminders can influence concurrent decisions. The mechanism studied here is intuitive. Theorem 4 tells us that, when a reminder technology is not available, initial choices serve dual functions, as payoff-relevant actions and as reminders.
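As a concrete check on Theorem 4, the indifference condition (10) can be solved numerically in a parametric example. Everything in the sketch below is an illustrative assumption rather than something taken from the paper: a uniform state on [0, 1], linear payoffs u₁(0, x) = 0.5x and u₁(1, x) = x − 0.3, and a weight δ = 0.9 on the realised payoff term, with the anticipatory term E[u₁ | Stage-4 inference] entering with weight one.

```python
# Numerical illustration of the cutoff equilibrium of Theorem 4 (all
# functional forms are assumptions for illustration, not from the paper):
# x ~ Uniform[0,1], u1(0,x) = 0.5*x, u1(1,x) = x - 0.3, and the Stage-2
# objective E[u1 | Stage-4 inference] + DELTA * u1(action, true state).

DELTA = 0.9

def u1(a, x):
    return 0.5 * x if a == 0 else x - 0.3

def gap(c):
    """LHS minus RHS of the indifference condition (10) at cutoff c, using
    closed-form conditional expectations for the uniform distribution
    (u1 is linear, so E[u1(a,x)|x<=c] = u1(a, c/2), etc.)."""
    lhs = u1(0, c / 2) + DELTA * u1(0, c)        # E[u1(0,x)|x<=c] + delta*u1(0,c)
    rhs = u1(1, (1 + c) / 2) + DELTA * u1(1, c)  # E[u1(1,x)|x>=c] + delta*u1(1,c)
    return lhs - rhs

def solve_cutoff(lo=0.0, hi=1.0):
    """Bisection: gap() is decreasing in c, positive at 0, negative at 1."""
    for _ in range(200):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if gap(mid) > 0 else (lo, mid)
    return (lo + hi) / 2

x_hat = solve_cutoff()   # equilibrium cutoff
x_star = 0.6             # first-best cutoff: u1(1,x) > u1(0,x) iff x > 0.6
print(round(x_hat, 4), x_star)   # prints: 0.1 0.6
```

With these (assumed) parameters the equilibrium cutoff is 0.1, well below the first-best cutoff 0.6: the DM switches to the favourable-looking action too early, the as-if overoptimism discussed in the text.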
From the perspective of maximising ultimate payoffs, the choice of an action is distorted by concerns about the effects of choices (as reminders) on anticipatory well-being in Stage 4. When reminders are available, the decision maker no longer needs to use actions to serve two objectives. In principle, he can address concerns about anticipatory well-being in Stage 4 through reminders, leaving actions undistorted. Here we ask whether things work out this way in equilibrium. Soft reminders are unhelpful in this context. Since the DM wishes to induce the same favourable inferences regardless of the state of nature, only babbling emerges in equilibrium. Adding soft reminders has no effect on initial actions. Hard reminders have more interesting effects. For simplicity, we will focus on the case where c = 0 (reminders are costless).

Theorem 5: With an initial action, no delayed action and costless hard reminders, the DM chooses the first-best action in every state (that is, x₁ = 0 when x < x₁*, and x₁ = 1 when x > x₁*) and is perfectly informed about the state in Stage 4.

Proof. First we verify that there is an equilibrium with the properties described in the theorem. Suppose the DM leaves hard reminders in all states, chooses x₁ = 0 when x < x₁*, and chooses x₁ = 1 when x > x₁*. To complete the description of an equilibrium, we need to supplement this description of equilibrium actions with out-of-equilibrium beliefs. If in Stage 4 the DM does not receive a reminder,

he infers that x = 0. To see that prescribed choices are optimal given these beliefs, first notice that the DM cannot improve his payoff for any x by continuing to leave a reminder but choosing a different action. The only remaining question is whether he can improve his payoff by failing to leave a reminder. In that case, his payoff (from the perspective of Stage 2) is

u₁(x₁, 0) + δu₁(x₁, x) ≤ u₁(x₁, x) + δu₁(x₁, x) = (1 + δ)u₁(x₁, x) ≤ (1 + δ)u₁(r₁*(x), x)

(where r₁* assigns x₁ = 0 when x < x₁*, and x₁ = 1 when x > x₁*). Since the last expression is the DM's equilibrium payoff in state x, the deviation is not beneficial.

Next we demonstrate that this is the only possible equilibrium outcome. The first step is to show that the DM is perfectly informed about the state in Stage 4. Assume not. Then there is a non-empty set of states X in which the DM does not leave a hard reminder. Let X₀ ⊆ X be the set of states in X such that the DM chooses x₁ = 0, and let X₁ ⊆ X be the set of states in X such that the DM chooses x₁ = 1. Since the DM is, by assumption, not perfectly informed in Stage 4, either X₁ or X₀ must contain at least two states. Assume Xᵢ contains at least two states. Then there exists x′ ∈ Xᵢ such that u(i, x′) > E[u(i, x) | x ∈ Xᵢ] (this follows because Xᵢ contains at least two distinct states and u(i, x) is strictly increasing in x). The DM could do better in state x′ by leaving a hard reminder and choosing r₁*(x′), a contradiction.

Now assume there is some state x″ for which r₁(x″) ≠ r₁*(x″). The DM's equilibrium payoff in state x″ (from the perspective of Stage 2) is (1 + δ)u₁(r₁(x″), x″) < (1 + δ)u₁(r₁*(x″), x″), which immediately implies that he would be better off in state x″ by choosing r₁*(x″) and leaving a hard reminder. From this contradiction, we infer that r₁(x) = r₁*(x) for all x.

Though simple, Theorem 5 has a striking implication: the ability to leave hard reminders completely restores the DM's ability to make first-best decisions (in our proposed application, concerning housing and other durable consumption). This occurs even though the reminders do not improve the quality of the information on which decisions are based. Instead, the presence of reminders removes the temptation to influence anticipatory well-being in Stage 4 by distorting the choice of an action.
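The no-deviation argument in the proof can be spot-checked numerically. The sketch below is illustrative only: the payoff functions, the grid of states, and the weight δ = 0.9 on the realised payoff (with the Stage-4 anticipatory term entering with weight one) are all assumptions, not taken from the paper.

```python
# Spot-check of the Theorem 5 equilibrium: with costless hard reminders and
# the out-of-equilibrium belief x = 0 after a missing reminder, no deviation
# is profitable. Illustrative assumptions: x on a grid over [0,1],
# u1(0,x) = 0.5*x, u1(1,x) = x - 0.3, DELTA = 0.9.

DELTA = 0.9

def u1(a, x):
    return 0.5 * x if a == 0 else x - 0.3

def payoff(a, x, reminder):
    if reminder:
        # Stage 4 learns x exactly, so the anticipatory term equals u1(a, x).
        return u1(a, x) + DELTA * u1(a, x)
    # No reminder: Stage 4 infers x = 0, so the anticipatory term is u1(a, 0).
    return u1(a, 0) + DELTA * u1(a, x)

violations = 0
for i in range(1001):
    x = i / 1000
    a_star = 0 if u1(0, x) >= u1(1, x) else 1     # first-best action
    eq = payoff(a_star, x, reminder=True)         # candidate equilibrium payoff
    best_dev = max(payoff(a, x, r) for a in (0, 1) for r in (True, False))
    if eq < best_dev - 1e-12:
        violations += 1
print(violations)   # prints: 0
```

Every alternative (switching the action, withholding the reminder, or both) does weakly worse at every grid state, consistent with the proof's chain of inequalities.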

4.5. The Delayed Action

Now let us suppose there is a delayed action (but no initial action). Without reminders, the DM takes this action without information. Consequently, he chooses x₂ to maximise E[u₂(x₂, x)]. This outcome is clearly not first-best. In principle, reminders can improve decision making by improving the DM's information at the point in time when he makes the decision. With costless hard reminders, the DM leaves a reminder in all states (except possibly the lowest, where he is indifferent), and always selects the first-best action in Stage 4. This is in the spirit of Theorems 3 and 5, and the formal proof (omitted

to conserve space) invokes the same full-disclosure logic. Thus, with costless hard reminders, the first-best outcome is always achieved, and it makes no difference whether an action is immediate or delayed.7 Of course, this equivalence depends on the assumption that reminders are perfectly effective. The case of soft reminders (e.g. filed notes reflecting the DM's own interpretation of his retirement plan's generosity) is more interesting. In this setting, cheap talk can successfully convey information, at least for some parameter values. In an informative equilibrium, the DM partitions the state space into two non-empty segments, [0, x̂₂) and [x̂₂, 1]. One can place the boundary point, x̂₂, in either segment; here we place it in the higher segment by convention, but the choice is immaterial. When x ∈ [0, x̂₂), he chooses a message, m₀, that induces him to pick x₂ = 0 in Stage 4. When x ∈ [x̂₂, 1], he chooses a message, m₁, that induces him to pick x₂ = 1 in Stage 4. There are three requirements for this to be an equilibrium. First, in Stage 4, having received m₀, he must prefer to pick x₂ = 0:

E[u₂(0, x) | x < x̂₂] ≥ E[u₂(1, x) | x < x̂₂].   (12)

Second, in Stage 4, having received m₁, he must prefer to pick x₂ = 1:

E[u₂(1, x) | x > x̂₂] ≥ E[u₂(0, x) | x > x̂₂].   (13)

Third, given his subsequent responses, in Stage 2 he must prefer to send the message m₀ (leading to x₂ = 0) when x < x̂₂, and he must prefer to send the message m₁ (leading to x₂ = 1) when x > x̂₂. This requirement is satisfied provided that he is indifferent between these choices for x = x̂₂:

E[u₂(0, x) | x ≤ x̂₂] + δu₂(0, x̂₂) = E[u₂(1, x) | x ≥ x̂₂] + δu₂(1, x̂₂).   (14)
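Whether an informative equilibrium exists can be checked numerically in a parametric example: solve (14) for the cutoff, then verify the Stage-4 best-response conditions (12) and (13). The parametrisation below is an illustrative assumption, not from the paper: a uniform state, u₂(0, x) = 0.2x, u₂(1, x) = x − 0.35, and δ = 0.9 weighting the realised payoff term.

```python
# Solve the indifference condition (14) and check conditions (12)-(13)
# under illustrative assumptions: x ~ Uniform[0,1], u2(0,x) = 0.2*x,
# u2(1,x) = x - 0.35, DELTA = 0.9 on the realised payoff term.

DELTA, ALPHA, K = 0.9, 0.2, 0.35

def u2(a, x):
    return ALPHA * x if a == 0 else x - K

def e_below(a, c):   # E[u2(a,x) | x < c]; u2 is linear, so evaluate at E[x|x<c]
    return u2(a, c / 2)

def e_above(a, c):   # E[u2(a,x) | x > c]
    return u2(a, (1 + c) / 2)

def eq14_gap(c):
    """LHS minus RHS of the indifference condition (14) at cutoff c."""
    return (e_below(0, c) + DELTA * u2(0, c)) - (e_above(1, c) + DELTA * u2(1, c))

# Bisection; eq14_gap is decreasing in c, positive at 0, negative at 1.
lo, hi = 0.0, 1.0
for _ in range(200):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if eq14_gap(mid) > 0 else (lo, mid)
x2_hat = (lo + hi) / 2

cond12 = e_below(0, x2_hat) >= e_below(1, x2_hat)   # prefer x2 = 0 after m0
cond13 = e_above(1, x2_hat) >= e_above(0, x2_hat)   # prefer x2 = 1 after m1
print(round(x2_hat, 4), cond12, cond13)   # prints: 0.1473 True True
```

Here the cutoff 0.1473 lies below the first-best cutoff K/(1 − α) = 0.4375, and both best-response conditions hold, so an informative equilibrium exists for this parametrisation; for other parameters (13) can fail, in which case only babbling survives.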

Notice that (14) is identical to (10), except that the subscripts are 2s instead of 1s. Thus, when the same payoff function is used for initial actions and delayed actions (u₁ = u₂), the correspondence between states and choices is the same with an initial action and no reminders as with a delayed action and soft reminders (x̂₁ = x̂₂), provided of course that an informative equilibrium exists in the latter case (which depends on conditions (12) and (13)). It is worth mentioning that this equivalence would not hold with continuous choices (with no reminders, there will typically be an equilibrium with full separation, and this is impossible to achieve through cheap talk).

Given the preceding observation and Theorem 4, we know that x̂₂ < x₂* (the cutoff for first-best decisions, defined analogously to x₁*). Consequently, when (14) is satisfied, so is (12). An informative equilibrium therefore exists when the value of x̂₂ defined implicitly in (14) is strictly positive and also satisfies (13). These informative equilibria are always inefficient. Indeed, as the next result demonstrates, the DM always strictly prefers (from the perspective of Stage 1) to

7 With costly hard reminders, the DM behaves as in Theorem 3, perfectly informing delayed choices when the state is sufficiently good and leaving uncertainty when the state is sufficiently bad.


receive categorical information instead of continuous information. That is, he would strictly prefer to contract with an information provider in Stage 1 to tell him whether the state of nature is above or below some threshold (e.g. whether his retirement plan is either generous or miserly). Our model therefore gives rise (endogenously) to a taste for categorical information. (For this interpretation of the result to hold, we must assume the DM would remember whether he purchased categorical or continuous information.)

Theorem 6: Suppose there is a delayed action and no initial action, and that only soft reminders are available. Suppose also that there exists an informative equilibrium. Then there exists a dichotomous signal function, s : [0, 1] → {0, 1}, such that, given the choice in Stage 1 between this dichotomous signal and a fully informative signal, the DM will choose the dichotomous signal, and thereby achieve a higher level of well-being (evaluated in Stage 1).

Proof. Without worrying about incentive compatibility, let us assume the DM mechanically adheres to the following rule: for some arbitrarily selected x₂, send m₀ when x < x₂, send m₁ when x > x₂, choose x₂ = 0 upon receiving m₀, and choose x₂ = 1 upon receiving m₁. In that case, his expected payoff (from the perspective of Stage 1) is

∫₀^{x₂} {E[u₂(0, x′) | x′ < x₂] + δu₂(0, x)} f(x)dx + ∫_{x₂}^{1} {E[u₂(1, x′) | x′ > x₂] + δu₂(1, x)} f(x)dx
  = (1 + δ) [ ∫₀^{x₂} u₂(0, x)f(x)dx + ∫_{x₂}^{1} u₂(1, x)f(x)dx ].

Taking the derivative of this final expression with respect to x₂ gives us

(1 + δ)[u₂(0, x₂) − u₂(1, x₂)]f(x₂).

Since x̂₂ < x₂*, we know that this term is strictly positive evaluated at x₂ = x̂₂. Thus, a small increase in the cutoff state from its equilibrium value would improve the DM's equilibrium payoff. The question is: how do we make this incentive compatible? Suppose the signal s takes on only one of two values: s = s₀ when x < x₂, and s = s₁ when x > x₂. Let us attempt to construct an equilibrium with the following properties: in Stage 2, the DM sends one message, m₀, upon receiving signal s₀, and a different message, m₁, upon receiving signal s₁; in Stage 4, he chooses x₂ = 0 upon receiving m₀, and chooses x₂ = 1 upon receiving m₁. There are four requirements for this to be an equilibrium. First, in Stage 4, having received m₀, he must prefer to pick x₂ = 0:


E[u₂(0, x) | x < x₂] ≥ E[u₂(1, x) | x < x₂].   (15)

Second, in Stage 4, having received m₁, he must prefer to pick x₂ = 1:

E[u₂(1, x) | x > x₂] ≥ E[u₂(0, x) | x > x₂].   (16)

Third, given his subsequent responses, in Stage 2 he must prefer to send the message m₀ (leading to x₂ = 0) when learning that x < x₂:

E[u₂(0, x) | x < x₂] + δE[u₂(0, x) | x < x₂] ≥ E[u₂(1, x) | x > x₂] + δE[u₂(1, x) | x < x₂].   (17)

Fourth, he must prefer to send the message m₁ (leading to x₂ = 1) when learning that x > x₂:

E[u₂(1, x) | x > x₂] + δE[u₂(1, x) | x > x₂] ≥ E[u₂(0, x) | x < x₂] + δE[u₂(0, x) | x > x₂].   (18)

Let us evaluate each of these constraints at x₂ = x̂₂. When (14) holds with equality, both (17) and (18) hold with strict inequality. Therefore, (17) and (18) continue to hold for x₂ slightly larger than x̂₂. So does (15), provided x₂ ∈ (x̂₂, x₂*]. Finally, if (16) holds for x₂ = x̂₂, it also holds for slightly larger values of x₂. To see this, note that

(d/dx₂) ∫_{x₂}^{1} [u₂(1, x) − u₂(0, x)]f(x)dx = [u₂(0, x₂) − u₂(1, x₂)]f(x₂),

which is strictly positive for x̂₂ ≤ x₂ < x₂*. Thus, we have an equilibrium, and the DM is strictly better off.

Intuitively, why does this result hold? When the DM chooses a cheap-talk message, he is concerned both with the quality of the subsequent decision and with inducing a favourable inference in Stage 4. Creating a favourable inference may help in one particular state, but it cannot change the overall ex ante expectation concerning the state: what he gains in one state, he loses in another. Consequently, it cannot increase his expected payoff prior to making his decision. Thus, the decision is distorted with no offsetting gain from an ex ante perspective. Reversing this distortion therefore improves the ex ante expected payoff.

Theorem 6 bears some resemblance to a result by Fischer and Stocken (2001), who show, in a special parametric case of Crawford and Sobel's (1982) cheap-talk model, that reducing the quality of the information the sender receives can increase the amount of information communicated (as measured by the number of distinct inferences drawn from all messages in equilibrium). Our focus here is not on the amount of information communicated but on the quality of the decision made, and we demonstrate that it is possible to improve the quality of the decision without increasing the informativeness of the equilibrium, in the sense of Fischer and Stocken.
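The welfare gain in Theorem 6 can be made concrete by evaluating the ex ante payoff of a mechanical threshold rule at several cutoffs. The sketch below is illustrative only: the parametrisation (uniform state, u₂(0, x) = 0.2x, u₂(1, x) = x − 0.35, δ = 0.9), the objective taken to be (1 + δ) times the expected flow payoff, and the hard-coded equilibrium cutoff are all assumptions, not values from the paper.

```python
# Ex-ante payoff of a threshold rule with cutoff c, computed as
# (1 + DELTA) * [integral_0^c u2(0,x) dx + integral_c^1 u2(1,x) dx]
# under illustrative assumptions: x ~ Uniform[0,1], u2(0,x) = 0.2*x,
# u2(1,x) = x - 0.35, DELTA = 0.9.

DELTA, ALPHA, K = 0.9, 0.2, 0.35

def welfare(c, n=100_000):
    """Midpoint Riemann sum of the ex-ante payoff of cutoff rule c."""
    total = 0.0
    for i in range(n):
        x = (i + 0.5) / n
        u = ALPHA * x if x < c else x - K        # u2 of the action taken
        total += u / n
    return (1 + DELTA) * total

x2_hat = 0.165 / 1.12        # cheap-talk cutoff from (14) in this example
x2_star = K / (1 - ALPHA)    # first-best cutoff (0.4375)

w_eq = welfare(x2_hat)
w_up = welfare(x2_hat + 0.05)   # a slightly higher, still suboptimal cutoff
w_fb = welfare(x2_star)
print(w_eq < w_up < w_fb)   # prints: True
```

Raising the cutoff from its cheap-talk value strictly improves ex ante welfare, which is exactly why the DM would pay for a dichotomous signal drawn at a better threshold.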
In addition, our result does not appear to require special parametric assumptions, even when a continuum of actions is available. Some of the analysis in this section is also related to Koszegi's (2004) analysis of information transmission between doctors and patients. Like Caplin and Leahy


(2004) (discussed in Section 5.3), Koszegi assumes that doctors are empathetic and patients experience anticipatory emotions; in addition, he also assumes that patients take actions (treatment) after receiving information from doctors. The doctor's decision to advise the patient is analogous to that of an informed DM who, in our model, advises his later uninformed self concerning a delayed action. Koszegi considers doctor-patient communication both through hard information (a verifiable diagnosis) and through cheap talk (a treatment recommendation). For the case of hard information, he demonstrates that revelation is incomplete, which occurs in his model because there is some probability that the doctor is uninformed. For the case of soft information, he shows that the doctor distorts his recommendation toward the course of treatment that is appropriate when the patient is relatively healthy. In our framework, this corresponds to the observation that x̂₂ < x₂*.

5. Conclusion

In this paper, we have argued that the introduction of memory imperfections into models of economic decision making creates a natural role for anticipatory emotions. The combination of memory imperfections and anticipatory emotions has striking behavioural implications. In the first half of the paper, we showed that agents can rationally select apparently dominated strategies. We considered two applications: Newcomb's Paradox and the Prisoner's Dilemma. We provided a resolution for Newcomb's Paradox and argued that it requires the decision maker to ascribe only a tiny weight to anticipatory emotions. We also demonstrated that, for some ranges of parameters, it is possible to obtain cooperation in the Prisoner's Dilemma with probability arbitrarily close to unity. The second half of the paper provided a theory of reminders. It showed that people may prefer to be uninformed, or to have coarse information, in situations where, eliminating either memory imperfections or anticipatory emotions, this would not be the case. We exhibited a mechanism whereby the opportunity to leave a reminder can improve a concurrent decision, even though the reminder does not change the information available when the decision is made. We also provided endogenous explanations for as-if overoptimism and for behaviours associated with cognitive dissonance.

Stanford University and NBER

Columbia University

Date of receipt of first submission: April 2004
Date of receipt of final typescript: May 2004

A Technical Appendix is available for this paper: http://www.res.org.uk/economic/ta/tahome.asp

References

Akerlof, George A. and Dickens, William T. (1982). 'The economic consequences of cognitive dissonance', American Economic Review, vol. 72(3), (June), pp. 307-19.
Aumann, Robert J., Hart, Sergiu and Perry, Motty (1997a). 'The absent-minded driver', Games and Economic Behavior, vol. 20, pp. 102-16.

2005 ]

303

Aumann, Robert J., Hart, Sergiu and Perry, Motty (1997b). 'The forgetful passenger', Games and Economic Behavior, vol. 20, pp. 117-20.
Battigalli, Pierpaolo (1997). 'Dynamic consistency and imperfect recall', Games and Economic Behavior, vol. 20, pp. 31-50.
Benabou, Roland and Tirole, Jean (2002). 'Self-confidence and personal motivation', Quarterly Journal of Economics, vol. 117, pp. 871-915.
Caplin, Andrew and Leahy, John (2001). 'Psychological expected utility theory and anticipatory feelings', Quarterly Journal of Economics, vol. 116(1), (February), pp. 55-79.
Caplin, Andrew and Leahy, John (2004). 'The supply of information by a concerned expert', Economic Journal, vol. 114, pp. 487-505.
Carrillo, J. and Mariotti, T. (2000). 'Strategic ignorance as a self-disciplining device', Review of Economic Studies, vol. 66, pp. 529-44.
Campbell, Richmond and Sowden, Lanning (eds.) (1985). Paradoxes of Rationality and Cooperation: Prisoner's Dilemma and Newcomb's Problem, Vancouver: University of British Columbia Press.
Cho, In-Koo and Kreps, David (1987). 'Signaling games and stable equilibria', Quarterly Journal of Economics, vol. 102(2), pp. 179-221.
Crawford, Vincent and Sobel, Joel (1982). 'Strategic information transmission', Econometrica, vol. 50, pp. 1431-51.
Dye, Ronald (1985). 'Disclosure of nonproprietary information', Journal of Accounting Research, vol. 23(1), pp. 123-45.
Elster, Jon and Loewenstein, George (1992). 'Utility from memory and anticipation', in (George Loewenstein and Jon Elster, eds.), Choice Over Time, pp. 213-34, New York: Russell Sage Foundation.
Fischer, Paul E. and Stocken, Phillip C. (2001). 'Imperfect information and credible communication', Journal of Accounting Research, vol. 39(1), (June), pp. 119-34.
Geanakoplos, John (1996). 'The Hangman's Paradox and Newcomb's Paradox as psychological games', Cowles Foundation Discussion Paper No. 1128.
Gilboa, Itzhak (1997). 'A comment on the absent-minded driver paradox', Games and Economic Behavior, vol. 20, pp. 25-30.
Grossman, Sanford (1981). 'The informational role of warranties and private disclosure about product quality', Journal of Law and Economics, vol. 24, pp. 461-83.
Grossman, Sanford and Hart, Oliver (1980). 'Disclosure laws and takeover bids', Journal of Finance, vol. 35, pp. 323-34.
Grove, Adam J. and Halpern, Joseph Y. (1997). 'On the expected value of games with absentmindedness', Games and Economic Behavior, vol. 20, pp. 51-65.
Halpern, Joseph Y. (1997). 'On ambiguities in the interpretation of game trees', Games and Economic Behavior, vol. 20, pp. 66-96.
Hvide, H.K. (2002). 'Pragmatic beliefs and overconfidence', Journal of Economic Behavior and Organization, vol. 48, pp. 15-28.
Kreps, David and Wilson, Robert (1982). 'Sequential equilibria', Econometrica, vol. 50, pp. 863-94.
Koszegi, Botond (2000). 'Ego utility, overconfidence and task choice', mimeo, UC Berkeley.
Koszegi, Botond (2002). 'Anticipation in observable behavior', mimeo, UC Berkeley.
Koszegi, Botond (2004). 'Emotional agency: the case of the doctor-patient relationship', mimeo, UC Berkeley.
Lipman, Bart (1997). 'More absentmindedness', Games and Economic Behavior, vol. 20, pp. 97-101.
Loewenstein, George (1987). 'Anticipation and the valuation of delayed consumption', Economic Journal, vol. 97, (September), pp. 666-84.
Milgrom, Paul (1981). 'Good news and bad news: representation theorems and applications', Bell Journal of Economics, vol. 12, pp. 380-91.
Mullainathan, Sendhil (2002). 'A memory-based model of bounded rationality', Quarterly Journal of Economics, vol. 117(3), (August), pp. 735-74.
Nozick, Robert (1969). 'Newcomb's problem and two principles of choice', in (N. Rescher et al., eds.), Essays in Honor of Carl G. Hempel, pp. 114-46, Dordrecht: D. Reidel Publishing Company.
Piccione, Michele and Rubinstein, Ariel (1994). 'On the interpretation of decision problems with imperfect recall', Working Paper, Tel Aviv, March 22.
Piccione, Michele and Rubinstein, Ariel (1997a).
'On the interpretation of decision problems with imperfect recall', Games and Economic Behavior, vol. 20, pp. 3-24.
Piccione, Michele and Rubinstein, Ariel (1997b). 'The absent-minded driver's paradox: synthesis and responses', Games and Economic Behavior, vol. 20, pp. 121-30.
Postlewaite, Andrew and Compte, Olivier (2003). 'Confidence-enhanced performance', mimeo, University of Pennsylvania, April.
Rabin, Matthew and Schrag, Joel (1999). 'First impressions matter: a model of confirmation bias', Quarterly Journal of Economics, vol. 114(1), pp. 37-82.

Segal, Uzi (2000). 'Don't fool yourself to believe you won't fool yourself again', Economics Letters, vol. 67, pp. 1-3.
Shafir, Eldar and Tversky, Amos (1992). 'Thinking through uncertainty: nonconsequential reasoning and choice', Cognitive Psychology, vol. 24(4), pp. 449-74.
Van den Steen, Eric (2003). 'Rational overoptimism (and other biases)', mimeo, MIT, July.
