
Microeconomic Theory II

FDPE Spring 2014


Pauli Murto / Juuso Välimäki

Microeconomic Theory II focuses on strategic interactions and informational asymmetries. The main methodology is game theory. The emphasis is on analytical techniques and modelling issues. Together with Microeconomic Theory I, this course will offer the basic analytical framework for other, more specialized courses.

Lectures and study material:

The course consists of 42 lecture hours. The first half is taught by Murto, and focuses on game
theory. The second half is taught by Välimäki, and focuses on information economics.

The main text is:

- Mas-Colell, Whinston and Green: “Microeconomic Theory”, Oxford University Press (MWG).

This book covers much of the material of the lectures. However, there are many more specialized
books that can be helpful as supplementary material.

The first half of the course focuses on game theory. There are at least three well-established graduate-level textbooks on game theory:

- Fudenberg and Tirole: “Game Theory”, MIT Press.


- Osborne and Rubinstein: “A Course in Game Theory”, MIT Press.
- Myerson: “Game Theory. Analysis of Conflict”, Harvard University Press.

In addition, a new, comprehensive game theory book is now available:

- Maschler, Solan, and Zamir: “Game Theory”, Cambridge University Press.

The second half of the course focuses on information economics. The following books are useful
for that:

- Salanie: “The Economics of Contracts: A Primer”, MIT Press.


- Bolton and Dewatripont: “Contract Theory”, MIT Press.

We will post lecture notes on the course website as the course proceeds. These will give a good idea of the contents of the lectures, but they will not be self-contained: proofs, illustrations, further discussion, etc. will be done in class. Some further readings are also pointed out in the lecture notes.

Exercises:

Working on exercises should be an essential part of your studying. There will be 8 exercise
sessions, each with a problem set that will be posted on the course page well in advance.
FDPE microeconomic theory, spring 2014
Lecture notes I: Basic concepts of game theory
Pauli Murto

1 Introduction
1.1 Background
- In Part I of the course, the focus was on individual decision making
- Collective decisions appeared only in the context of general equilibrium:

  – Every agent optimizes against the prices
  – Prices equate supply and demand
  – No need for the agents to understand where prices come from

- In Part II, we will be explicitly concerned with strategic interaction between the agents
- This requires more sophisticated reasoning from agents: how are the other players expected to behave?
- The first half of the course studies the methodology of modeling interactive decision making: game theory

1.2 Some classifications of game theory

- Non-cooperative vs. cooperative game theory

  – In non-cooperative games, individual players and their optimal actions are the primitives
  – In cooperative games, coalitions of players and their joint actions are the primitives
  – In this course, we concentrate on non-cooperative games

- Static vs. dynamic games

  – We start with static games but then move to dynamic games

- Games with complete vs. incomplete information

  – We start with games with complete information but then move to incomplete information

1.3 From decision theory to game theory

- We maintain the decision theoretic framework familiar from Part I of the course
- In particular, the decision maker is rational:

  – Let A denote the set of available actions
  – An exhaustive set of possible consequences C
  – A consequence function g : A → C specifying which actions lead to which consequences
  – A complete and transitive preference relation ≿ on C

- The preference relation can be represented by a real-valued utility function u on C
- To model decision making under uncertainty, we assume that the preferences also satisfy the von Neumann-Morgenstern axioms.
- This leads to expected utility maximization:

  – Let the consequence depend not only on the decision maker's action but also on a state in some set Ω.
  – If the decision maker takes action a, and state ω ∈ Ω materializes, then the decision maker's payoff is u(g(a, ω)).
  – If the uncertainty is captured by a probability distribution p on Ω, the decision maker maximizes his expected payoff

     ∑_{ω∈Ω} p(ω) u(g(a, ω)).

- In decision theory, uncertainty concerns the parameters of the environment or events that take place during the decision making process
- In strategic situations, the consequence g depends not only on the DM's own action, but also on the actions of the other DMs
- Hence, uncertainty may stem from random actions of other players or from the reasoning of other players
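
The expected utility maximization above can be sketched in a few lines of Python (the umbrella example, and all names and payoff numbers in it, are illustrative and not part of the text):

```python
# Expected utility maximization: pick the action a maximizing
# sum over states w of p(w) * u(g(a, w)).

def expected_utility(action, states, p, g, u):
    """Expected payoff of `action` given beliefs p over states."""
    return sum(p[w] * u(g(action, w)) for w in states)

def optimal_action(actions, states, p, g, u):
    """The action with the highest expected utility."""
    return max(actions, key=lambda a: expected_utility(a, states, p, g, u))

# A toy decision problem: carry an umbrella or not, state is the weather.
actions = ["umbrella", "no umbrella"]
states = ["rain", "sun"]
p = {"rain": 0.3, "sun": 0.7}
g = lambda a, w: (a, w)                    # consequence = (action, state)
u = {("umbrella", "rain"): 1, ("umbrella", "sun"): 0,
     ("no umbrella", "rain"): -2, ("no umbrella", "sun"): 2}.__getitem__

print(optimal_action(actions, states, p, g, u))  # -> no umbrella
```

Here "no umbrella" wins because its expected payoff 0.3·(−2) + 0.7·2 = 0.8 exceeds the umbrella's 0.3.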

2 Basic concepts

2.1 A strategic form game

A game in strategic (or normal) form consists of:

1. A set I = {1, ..., I} of players
2. A pure strategy space S_i for each i ∈ I
3. A von Neumann-Morgenstern utility u_i for each i ∈ I:

   u_i : S → ℝ,

   where S := ∏_{i=1}^I S_i.

- That is, u_i(s) gives the utility of i for strategy profile s := (s_1, ..., s_I).
- We write u := (u_1, ..., u_I)
- We also write s = (s_i, s_{-i}), where s_{-i} ∈ S_{-i} := ∏_{j≠i} S_j
- We often (but not always) assume that the S_i are finite sets.
- The game is hence defined by ⟨I, {S_i}_{i∈I}, {u_i}_{i∈I}⟩
- In the standard table presentation, player 1 chooses the row and player 2 chooses the column. Each cell contains the payoffs, with player 1's payoff given first.
- Some classic 2x2 games that highlight different strategic aspects:
Prisoner’s dilemma:

            Cooperate   Defect
Cooperate     3, 3       0, 4
Defect        4, 0       1, 1

Coordination game ("stag hunt"):

       A      B
A    2, 2   0, 1
B    1, 0   1, 1

Battle of sexes:

            Ballet   Football
Ballet       2, 1     0, 0
Football     0, 0     1, 2

Hawk-Dove:

         Dove   Hawk
Dove     3, 3   1, 4
Hawk     4, 1   0, 0

Matching pennies:

          Head     Tail
Head     1, −1    −1, 1
Tail     −1, 1    1, −1

2.1.1 Mixed strategies

- The players may also randomize their actions, i.e. use mixed strategies
- Suppose that the strategy sets S_i are finite

Definition 1 A mixed strategy for player i, σ_i : S_i → [0, 1], assigns to each pure strategy s_i ∈ S_i a probability σ_i(s_i) ≥ 0 that it will be played, such that

   ∑_{s_i ∈ S_i} σ_i(s_i) = 1.

- The mixed strategy space Σ_i for i is a simplex over the pure strategies:

   Σ_i := { (σ_i(s_1), ..., σ_i(s_n)) ∈ ℝⁿ : σ_i(s_k) ≥ 0 ∀k, ∑_{k=1}^n σ_i(s_k) = 1 },

  where n is the number of pure strategies.

- (We alternatively use the notation Δ(S_i) to denote the space of probability distributions over S_i)
- σ := (σ_1, ..., σ_I) is a mixed strategy profile
- If the players choose their strategies simultaneously and independently of each other, a given pure strategy profile (s_1, ..., s_I) is chosen with probability

   ∏_{i=1}^I σ_i(s_i).

- Player i's payoff to profile σ is

   u_i(σ) = ∑_{s∈S} ( ∏_{i=1}^I σ_i(s_i) ) u_i(s).

- Note that here we utilize the von Neumann-Morgenstern utility representation
- Mixed strategies over continuous pure strategy spaces are defined analogously
- The game ⟨I, {Σ_i}_{i∈I}, {u_i}_{i∈I}⟩ is sometimes called the mixed extension of the game ⟨I, {S_i}_{i∈I}, {u_i}_{i∈I}⟩
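
The payoff formula above can be sketched directly in Python. As an illustration, the payoff tables below are those of the Prisoner's dilemma from the previous section; all function names are choices made here:

```python
# Expected payoff of a mixed profile sigma in a finite game:
# u_i(sigma) = sum over pure profiles s of (prod_j sigma_j(s_j)) * u_i(s).
from itertools import product

def mixed_payoff(i, sigma, strategies, u):
    """Expected payoff of player i under the mixed profile sigma."""
    total = 0.0
    for profile in product(*strategies):
        prob = 1.0
        for j, s_j in enumerate(profile):
            prob *= sigma[j][s_j]        # independent randomization
        total += prob * u[i][profile]
    return total

strategies = [["C", "D"], ["C", "D"]]
u = [{("C","C"): 3, ("C","D"): 0, ("D","C"): 4, ("D","D"): 1},   # player 1
     {("C","C"): 3, ("C","D"): 4, ("D","C"): 0, ("D","D"): 1}]   # player 2
sigma = [{"C": 0.5, "D": 0.5}, {"C": 0.5, "D": 0.5}]             # both mix 50-50

print(mixed_payoff(0, sigma, strategies, u))  # (3 + 0 + 4 + 1) / 4 = 2.0
```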

2.2 Extensive form

- Strategic form seems to miss some essential features of strategic situations: dynamics and information
- Consider a simple card game example:

  – Players 1 and 2 put one dollar each in a pot
  – Player 1 draws a card from a stack and observes it privately
  – Player 1 decides whether to raise or fold
  – If fold, then the game ends, and player 1 takes the money if the card is red, while player 2 takes the money if black
  – If raise, then player 1 adds another dollar in the pot, and player 2 must decide whether to meet or pass
  – If player 2 passes, the game ends and player 1 takes the money in the pot
  – If player 2 meets, he adds another dollar in the pot. Then player 1 shows the card, and the game ends. Again, player 1 takes the money in the pot if the card is red, while player 2 takes the money if black

- To formalize this game, we must specify:

  – Who moves when?
  – What do players know when they move?
  – What are the payoffs under all contingencies?

Formally, a finite game in extensive form consists of the following elements:

1. The set of players, I = {1, ..., I}.

2. A directed graph, i.e. a set of nodes X and arrows connecting the nodes. This must form a tree, which means:

   (a) There is a single initial node x_0, i.e. a node with no arrows pointing towards it.
   (b) For each node, there is a uniquely determined path of arrows connecting it to the initial node. (This is called the path to the node.)

3. The nodes are divided into:

   (a) Terminal nodes Z, i.e. nodes with no outward pointing arrows.
   (b) Decision nodes X\Z, i.e. nodes with outward pointing arrows.

4. Each decision node is labeled as belonging to a player in the game (the player to take the decision). This labeling is given by a function ι : X\Z → I.

5. Each arrow represents an action available to the player at the decision node at the origin of the arrow. The actions available at node x are A(x). If there is a path of arrows from x to x′, then we say that x′ is a successor of x, and we write x′ ∈ s(x).

6. Payoffs assign a utility number to each terminal node (and thus also to each path through the game tree). Each player i has a payoff function u_i : Z → ℝ.

7. A partition H of the decision nodes (i.e. H = {h_1, ..., h_K} such that h_k ⊆ X\Z for all k, h_k ∩ h_l = ∅ for k ≠ l, and ∪_k h_k = X\Z) into information sets h_k. These are collections of nodes such that:

   (a) The same player acts at each node within the information set. (I.e. ι(x) = ι(x′) if ∃k such that x, x′ ∈ h_k.)
   (b) The same actions must be available at all nodes within the information set. (I.e. A(x) = A(x′) if ∃k such that x, x′ ∈ h_k.)

8. If Nature moves, the probability that she takes each available action must be specified.
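
The card game above can be sketched as a small computation. This is a hand-coded tree rather than the general formalism; the names and the normalization of payoffs to player 1's net winnings (the game is zero-sum) are choices made here:

```python
# The card game: Nature deals red/black with prob 1/2 each; player 1 (who
# sees the card) raises or folds; after a raise, player 2 (who does not see
# the card) meets or passes. Payoffs are player 1's net dollar winnings.

def payoff1(card, a1, a2):
    win = 1 if card == "red" else -1      # who takes the pot at a showdown
    if a1 == "fold":
        return win                        # one dollar changes hands
    if a2 == "pass":
        return 1                          # player 1 takes the pot
    return 2 * win                        # raised pot: two dollars change hands

def expected_payoff1(strategy1, a2):
    """strategy1 maps the observed card to raise/fold; a2 is 2's response."""
    return 0.5 * payoff1("red", strategy1["red"], a2) + \
           0.5 * payoff1("black", strategy1["black"], a2)

# Always raising against "meet" is a fair bet; raising only on red wins
# half a dollar on average if player 2 meets:
print(expected_payoff1({"red": "raise", "black": "raise"}, "meet"))  # 0.0
print(expected_payoff1({"red": "raise", "black": "fold"}, "meet"))   # 0.5
```

Note how player 1's strategy must specify an action for each card he might observe, while player 2 has a single information set and hence a single action choice.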

2.2.1 Remarks

- Simultaneous actions can be modeled by an extensive form where one player moves first, but so that all nodes resulting from her actions are in a single information set for the other player
- Asymmetric information can be modeled by moves by Nature and appropriately chosen information sets (in particular, Bayesian games, to be considered later)

2.2.2 Classifications

- Games with perfect information

  – If all information sets are singletons, then the game has perfect information.
  – Otherwise the game has imperfect information.
  – Note that since only one player moves at each node, games of perfect information do not allow simultaneous actions

- Multi-stage games with observed actions

  – There are "stages" k = 1, 2, ... such that:

    1. In each stage k every player knows all the actions taken in previous stages (including actions taken by Nature)
    2. Each player moves at most once within a given stage
    3. No information set contained in stage k provides information about play in that stage

  – In these games, all actions taken before stage k can be summarized in a public history h^k

- Bayesian games, or games of incomplete information

  – Nature chooses a "type" for each player according to a common prior
  – Each player observes her own type but not those of the others

- Games of perfect recall

  – A game is of perfect recall when no player forgets information that he once knew
  – A formal definition involves some restrictions on information sets
  – All the games that we will consider are of perfect recall

2.2.3 Strategies of extensive form games

- Let H_i be the set of player i's information sets, and let A(h_i) be the set of actions available at h_i ∈ H_i
- The set of all actions for i is then A_i := ∪_{h_i ∈ H_i} A(h_i)

Definition 2 A pure strategy for i is a map

   s_i : H_i → A_i

with s_i(h_i) ∈ A(h_i) for all h_i ∈ H_i.

- Important: a strategy must define the action for i at all contingencies defined in the game
- The set of pure strategies for i is

   S_i = ∏_{h_i ∈ H_i} A(h_i).

- In a finite game, this is a finite set.

2.2.4 Mixed strategies

- Having defined pure strategies, we can define mixed strategies just as in the case of the strategic form:

Definition 3 A mixed strategy for player i, σ_i : S_i → [0, 1], assigns to each pure strategy s_i ∈ S_i a probability σ_i(s_i) ≥ 0 that it will be played, such that

   ∑_{s_i ∈ S_i} σ_i(s_i) = 1.

- There is another, more convenient way to define mixed strategies, called behavior strategies.
- With those strategies, mixing takes place independently at each information set:

Definition 4 A behavior strategy for player i specifies for each h_i ∈ H_i a probability distribution on the set of available actions A(h_i). That is,

   b_i ∈ ∏_{h_i ∈ H_i} Δ(A(h_i)),

where Δ(A(h_i)) is a simplex over A(h_i).

- Every mixed strategy generates a unique behavior strategy (see e.g. Fudenberg-Tirole for the construction)
- In games of perfect recall (all relevant games for our purpose), it makes no difference whether we use mixed or behavior strategies:

Theorem 5 (Kuhn 1953) In a game of perfect recall, mixed and behavior strategies are equivalent.

- More precisely: every mixed strategy is equivalent to the unique behavior strategy it generates, and each behavior strategy is equivalent to every mixed strategy that generates it.
- Therefore, there is no loss in using behavior strategies

2.2.5 From extensive form to strategic form

- Recall that the extensive form defines a payoff u_i : Z → ℝ for each player on the terminal nodes.
- Since each strategy profile leads to a probability distribution over the terminal nodes Z, we may directly associate payoffs with strategy profiles (utilizing the expected utility formulation):

   u_i : S → ℝ,

  where S := ∏_{i=1}^I S_i.

- Now ⟨I, {S_i}_{i∈I}, {u_i}_{i∈I}⟩ meets our definition of a strategic form game
- This is the strategic-form representation of our extensive form game
- To see how this works, take any 2x2 game, formulate its extensive form assuming sequential moves, and then move back to strategic form (and you get a 2x4 game)
- Every extensive form game may be represented in strategic form
- However, as will be made clear later, we will need the extensive form to refine solution concepts suitable for dynamic situations

3 Basic solution concepts in strategic form games

- We now develop the basic solution concepts for strategic form games
- This is the simple game form normally used for analysing static interactions
- But any extensive form game may be represented in strategic form, so the concepts that we develop here apply to those as well

3.1 Implications of rationality

- Rationality means that each of the players chooses s_i in order to maximize her expectation of u_i
- But what should players expect of the other players' actions?
- Standard game theory is built on the assumption that the rationality of the players is common knowledge.
- This means that all the players are rational, all the players know that all the players are rational, all the players know that all the players know that all the players are rational, and so on...
- We start by asking what implications common knowledge of rationality has for the players' behavior
- This will lead us to the concept of rationalizable strategies

3.1.1 Dominant strategies

Let us start with the most straightforward concepts:

Definition 6 A strategy s_i is a dominant strategy for i if for all s_{-i} ∈ S_{-i} and for all s′_i ≠ s_i,

   u_i(s_i, s_{-i}) > u_i(s′_i, s_{-i}).

Definition 7 A strategy s_i is a weakly dominant strategy for i if for all s′_i ≠ s_i,

   u_i(s_i, s_{-i}) ≥ u_i(s′_i, s_{-i}) for all s_{-i} ∈ S_{-i}, and
   u_i(s_i, s_{-i}) > u_i(s′_i, s_{-i}) for some s_{-i} ∈ S_{-i}.

In a few cases, this is all we need.

Example: Prisoner’s dilemma

- Let I = {1, 2}, A_i = {C, D} for all i, and let payoffs be determined as follows:

       C      D
C    3, 3   0, 4
D    4, 0   1, 1

- Whatever strategy the other player chooses, it is strictly optimal for i to choose D and not C. Thus (D, D) is the dominant strategy equilibrium of this game.
- Thus, rational players should always play D (even if (C, C) would be better for both)

Example: Second-price auction

- A seller has one indivisible object for sale
- There are I potential buyers with valuations 0 ≤ v_1 ≤ ... ≤ v_I, and these valuations are common knowledge
- The bidders simultaneously submit bids s_i ≥ 0.
- The highest bidder wins the object and pays the second highest bid (if several bidders bid the highest price, then the good is allocated randomly among them)
- Exercise: show that for each player i, bidding s_i = v_i weakly dominates all other strategies
- Thus, s_i = v_i for all i is a weakly dominant strategy equilibrium
- Bidder I wins and has payoff v_I − v_{I−1}.
- Note that these strategies would remain dominant even if the players did not know each other's valuations
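
The exercise can be explored numerically. The sketch below checks weak dominance of truthful bidding against a single rival bid on a grid; ties are resolved against the bidder here for simplicity, which does not affect the comparison:

```python
# Second-price auction payoff against one rival bid: the winner pays the
# second-highest bid (here, the rival's bid).

def sp_payoff(my_bid, my_value, rival_bid):
    """Payoff from bidding my_bid when my valuation is my_value."""
    if my_bid > rival_bid:
        return my_value - rival_bid   # win, pay the rival's bid
    return 0.0                        # lose (ties resolved against us)

v = 5.0
rival_bids = [x / 2 for x in range(0, 21)]    # 0.0, 0.5, ..., 10.0
for b in [0.0, 2.0, 4.0, 6.0, 8.0]:           # alternative bids b != v
    # Bidding v does at least as well as b against every rival bid:
    assert all(sp_payoff(v, v, r) >= sp_payoff(b, v, r) for r in rival_bids)
print("bidding one's value is weakly dominant on this grid")
```

Underbidding loses profitable wins (payoff 0 instead of v − r > 0), while overbidding only adds wins at prices above one's value (payoff v − r < 0).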

3.1.2 Dominated strategies

- However, very few games have dominant strategies for all players
- Consider the following game:

       L      M      R
U    4, 3   5, 1   6, 2
M    2, 1   8, 4   3, 6
D    3, 0   9, 6   2, 8

- There are no dominant strategies, but M is dominated by R, thus a rational player 2 should not play M
- But if player 2 will not play M, then player 1 should play U
- But if player 1 will play U, player 2 should play L
- This process of elimination is called iterated strict dominance
- We say that a game is solvable by iterated strict dominance when the elimination process leaves each player with only a single strategy
- Note that a pure strategy may be strictly dominated by a mixed strategy even if it is not dominated by any pure strategy. Below, M is not dominated by U or D, but it is dominated by playing U with prob. 1/2 and D with prob. 1/2:

        L       R
U     2, 0    −1, 0
M     0, 0    0, 0
D    −1, 0    2, 0
Definition 8 Pure strategy s_i is strictly dominated for player i if there exists σ_i ∈ Σ_i such that

   u_i(σ_i, s_{-i}) > u_i(s_i, s_{-i}) for all s_{-i} ∈ S_{-i}.

Definition 9 The process of iterated deletion of strictly dominated strategies proceeds as follows: Set S_i^0 = S_i and Σ_i^0 = Σ_i. Define S_i^n recursively by

   S_i^n = { s_i ∈ S_i^{n−1} : ∄ σ_i ∈ Σ_i^{n−1} s.t. u_i(σ_i, s_{-i}) > u_i(s_i, s_{-i}) ∀ s_{-i} ∈ S_{-i}^{n−1} },

and

   Σ_i^n = { σ_i ∈ Σ_i : σ_i(s_i) > 0 only if s_i ∈ S_i^n }.

Set

   S_i^∞ = ∩_{n=0}^∞ S_i^n.

Let

   Σ_i^∞ = { σ_i ∈ Σ_i : ∄ σ′_i ∈ Σ_i s.t. u_i(σ′_i, s_{-i}) > u_i(σ_i, s_{-i}) ∀ s_{-i} ∈ S_{-i}^∞ }.

S_i^∞ is the set of player i's pure strategies that survive iterated deletion of strictly dominated strategies, and Σ_i^∞ is the set of mixed strategies that survive iterated strict dominance.

- The game is solvable by iterated (strict) dominance if, for each player i, S_i^∞ is a singleton.
- Strict dominance is attractive since it is directly implied by rationality: common knowledge of rationality means that players will only use strategies in S_i^∞.
- In the process defined here, one simultaneously deletes all dominated strategies for all players in each round. One can show that the details of the elimination process do not matter.
- One can also apply iterative deletion to weakly dominated strategies, but then the order of deletion matters
- Note that dominated mixed strategies are eliminated only at the end of the process. One would get the same result by eliminating all dominated mixed strategies at each round of the process.
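
The elimination process can be sketched in Python. For brevity, this sketch only checks domination by other pure strategies (domination by mixed strategies, as in Definition 8, is omitted), which suffices for the 3x3 example above:

```python
# Iterated deletion of pure strategies strictly dominated by another
# surviving pure strategy, for a finite two-player game.
from itertools import product

def dominated(i, s, surviving, u):
    """Is s strictly dominated for player i by some surviving pure strategy?"""
    others = list(product(*(S for j, S in enumerate(surviving) if j != i)))
    def payoff(si, s_minus):
        prof = list(s_minus)
        prof.insert(i, si)               # rebuild the full strategy profile
        return u[i][tuple(prof)]
    return any(all(payoff(t, sm) > payoff(s, sm) for sm in others)
               for t in surviving[i] if t != s)

def iterated_dominance(strategies, u):
    surviving = [list(S) for S in strategies]
    changed = True
    while changed:                       # repeat until nothing is deleted
        changed = False
        for i in range(len(surviving)):
            keep = [s for s in surviving[i] if not dominated(i, s, surviving, u)]
            if len(keep) < len(surviving[i]):
                surviving[i], changed = keep, True
    return surviving

strategies = [["U", "M", "D"], ["L", "M", "R"]]
u1 = {("U","L"):4,("U","M"):5,("U","R"):6,("M","L"):2,("M","M"):8,
      ("M","R"):3,("D","L"):3,("D","M"):9,("D","R"):2}
u2 = {("U","L"):3,("U","M"):1,("U","R"):2,("M","L"):1,("M","M"):4,
      ("M","R"):6,("D","L"):0,("D","M"):6,("D","R"):8}
print(iterated_dominance(strategies, [u1, u2]))  # [['U'], ['L']]
```

The run reproduces the argument in the text: player 2's M goes first (dominated by R), then player 1's M and D (dominated by U), then player 2's R (dominated by L).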

Example: Cournot model with linear demand

- Let us model the two-firm Cournot model as a game ⟨{1, 2}, (u_i), (S_i)⟩, where S_i = ℝ₊ and, for any (s_1, s_2) ∈ S_1 × S_2,

   u_1(s_1, s_2) = s_1 (1 − (s_1 + s_2)),
   u_2(s_1, s_2) = s_2 (1 − (s_1 + s_2)).

- Here s_i is to be interpreted as the quantity produced, and 1 − (s_1 + s_2) is the inverse demand function
- Taking the derivative gives the effect of a marginal increase in s_i on i's payoff:

   ∂u_i(s_i, s_j)/∂s_i = 1 − s_j − 2s_i.   (1)

- If (1) is positive (negative) under (s_i, s_j), then marginally increasing (decreasing) s_i increases i's payoff. If this holds throughout an interval [a, b] of i's choices under s_j, then moving s_i from a to b increases i's payoff.
- By (1), s_i = 1/2 strictly dominates any s_i > 1/2, given that s_j ≥ 0. Thus

   S_i^1 = { s_i : 0 ≤ s_i ≤ 1/2 }, i = 1, 2.

- By (1), s_i = 1/2 − (1/2)² strictly dominates any s_i < 1/2 − (1/2)², given that 0 ≤ s_j ≤ 1/2. Thus

   S_i^2 = { s_i : 1/2 − (1/2)² ≤ s_i ≤ 1/2 }, i = 1, 2.

- By (1), s_i = 1/2 − (1/2)² + (1/2)³ strictly dominates any s_i > 1/2 − (1/2)² + (1/2)³, given that 1/2 − (1/2)² ≤ s_j ≤ 1/2. Thus

   S_i^3 = { s_i : 1/2 − (1/2)² ≤ s_i ≤ 1/2 − (1/2)² + (1/2)³ }, i = 1, 2.

- Continuing this way for k (odd) steps, we get

   S_i^k = { s_i : 1/2 − (1/2)² + (1/2)³ − ... − (1/2)^{k−1} ≤ s_i ≤ 1/2 − (1/2)² + (1/2)³ − ... + (1/2)^k }.

- Letting k go to infinity, both end points of the interval converge to

   (1/2 − (1/2)²) / (1 − (1/2)²) = 1/3.

- Thus

   (1/3, 1/3)

  is the unique strategy pair that survives the iterated elimination of strictly dominated strategies.
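
The interval argument can be verified numerically. By (1), the best response to s_j is (1 − s_j)/2, so each round maps the rivals' surviving interval [lo, hi] into the interval of best responses [(1 − hi)/2, (1 − lo)/2]:

```python
# Iterate the surviving-interval bounds from the Cournot argument above.

def iterate_bounds(rounds):
    lo, hi = 0.0, 0.5                 # survivors after the first elimination
    for _ in range(rounds):
        # best responses to the rivals' interval [lo, hi]
        lo, hi = (1 - hi) / 2, (1 - lo) / 2
    return lo, hi

print(iterate_bounds(1))      # (0.25, 0.5)   = S^2
print(iterate_bounds(2))      # (0.25, 0.375) = S^3
lo, hi = iterate_bounds(100)
print(lo, hi)                 # both approach 1/3
```

The map is a contraction with factor 1/2, so the bounds collapse onto the fixed point x = (1 − x)/2, i.e. x = 1/3.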

3.1.3 Rationalizability

- Iterated strict dominance eliminates all the strategies that are dominated
- Perhaps we could be even more selective: eliminate all the strategies that are not best responses to a reasonable belief about the opponents' strategies
- This leads to the concept of rationalizability
- But it turns out that this concept is (almost) equivalent to the concept of iterated strict dominance
- We say that a strategy of player i is rationalizable when it is a best response to a "reasonable" belief of i concerning the other players' actions
- By a "belief" of i, we mean a probability distribution over the other players' pure strategies:

   μ_{-i} ∈ Δ(S_{-i}),

  where

   S_{-i} = ∏_{j ∈ I\{i}} S_j.

- There are two notions of rationalizability in the literature: either μ_{-i} is a joint probability distribution allowing the other players' actions to be correlated, or, more restrictively, the other players' actions are required to be independent
- Unlike MWG, we allow here correlation (this is sometimes called correlated rationalizability).
- What is a "reasonable" belief of i regarding the other players' actions?
- Building on the notion of "common knowledge of rationality", beliefs should put positive weight only on those other players' strategies which can in turn be rationalized
- Formally, this leads to the following definition (assume finite strategy sets).

Definition 10 The set of rationalizable strategies is the largest set ∏_{i∈I} Z_i, where Z_i ⊆ S_i, and each s_i ∈ Z_i is a best response to some belief μ_{-i} ∈ Δ(Z_{-i}).

- To make the link to iterated strict dominance, define:

Definition 11 A strategy s_i is a never-best response if it is not a best response to any belief μ_{-i} ∈ Δ(S_{-i}).

- We have:

Proposition 12 A strategy s_i is a never-best response if and only if it is strictly dominated.

Proof. It is clear that a strictly dominated strategy is never a best response. The challenge is to prove the converse, that a never-best response is strictly dominated. By contrapositive, we need to show that if strategy s_i is not strictly dominated, then it is a best response to some belief μ_{-i} ∈ Δ(S_{-i}).

Let u_i(σ_i) := (u_i(σ_i, s_{-i}))_{s_{-i} ∈ S_{-i}} denote a vector, where each component is i's payoff with mixed strategy σ_i, given a particular pure strategy profile s_{-i} of the other players. This vector contains i's value for all possible combinations of pure strategies of the other players. Let N denote the number of elements of that vector, so that u_i(σ_i) ∈ ℝᴺ. Given an arbitrary belief μ_{-i} ∈ Δ(S_{-i}), we can then write the payoff for strategy σ_i as:

   u(σ_i, μ_{-i}) := μ_{-i} · u_i(σ_i).

Consider the set of such vectors over all σ_i:

   U_i := { u_i(σ_i) }_{σ_i ∈ Σ_i}.

It is clear that U_i is a convex set.

Assume that s_i is not strictly dominated. Let U⁺(s_i) denote the set of payoff vectors that strictly dominate u_i(s_i):

   U⁺(s_i) := { u ∈ ℝᴺ : (u)_k ≥ (u_i(s_i))_k for all k = 1, ..., N and (u)_k > (u_i(s_i))_k for some k = 1, ..., N },

where (·)_k denotes the k-th component of a vector. U⁺(s_i) is a convex set, and since s_i is not strictly dominated, we have U_i ∩ U⁺(s_i) = ∅. By the separating hyperplane theorem, there exists some vector μ_{-i} ∈ ℝᴺ, μ_{-i} ≠ 0, such that

   μ_{-i} · (u_i(σ_i) − u_i(s_i)) ≤ 0 for all σ_i ∈ Σ_i, and   (2)

   μ_{-i} · (u − u_i(s_i)) ≥ 0 for all u ∈ U⁺(s_i).   (3)

By (3), each component of μ_{-i} must be nonnegative. We can also normalize μ_{-i} so that its components sum to one (without violating (2) or (3)), so that

   μ_{-i} ∈ Δ(S_{-i}).

Equation (2) can now be written as

   μ_{-i} · u_i(s_i) ≥ μ_{-i} · u_i(σ_i) for all σ_i ∈ Σ_i,

or

   u(s_i, μ_{-i}) ≥ u(σ_i, μ_{-i}) for all σ_i ∈ Σ_i,

so that s_i is a best response to belief μ_{-i} ∈ Δ(S_{-i}). ∎

- Given this result, the process of iteratively deleting those strategies that are not best responses to any belief over the other players' remaining strategies is equivalent to the process of deleting strictly dominated strategies. Therefore, we have the following result:

Proposition 13 The set of pure strategies that survive the iterated elimination of strictly dominated strategies is the same as the set of rationalizable strategies.

- Note: our definition of "never-best response" considers an arbitrary belief μ_{-i} ∈ Δ(S_{-i}) that allows i to believe that the other players' actions are correlated
- If correlation is not allowed, then the equivalence between "never-best response" and "strictly dominated" breaks down with more than two players: there are strategies that are never best responses to independent randomizations of the other players, yet are not strictly dominated
- Hence, the alternative notion of "rationalizability" (that rules out correlation) is somewhat stronger than iterated strict dominance
- But this difference is not relevant in two-player games (because correlation between the other players' strategies is not an issue)

3.1.4 Discussion

- Rationalizability is the ultimate implication of common knowledge of rationality in games
- But it generally makes weak predictions. In many interesting games it does not imply anything. For example, consider the "Battle of sexes" game:

            Ballet   Football
Ballet       2, 1     0, 0
Football     0, 0     1, 2

- Rationalizability allows all outcomes. For example, players could choose (F, B): F is optimal for player 1 who expects 2 to play F, and B is optimal for player 2 who expects 1 to play B.
- A way forward: require expectations to be mutually correct → Nash equilibrium

3.2 Nash equilibrium

- Rationalizability requires that each player's strategy is a best response to a reasonable conjecture about the other players' play
- Nash equilibrium is a more stringent condition on strategic behavior.
- It requires that players play a best response against a correct belief of each other's play.
- For pure strategies, Nash equilibrium is defined as:

Definition 14 A pure strategy profile s = (s_1, ..., s_I) constitutes a Nash equilibrium if for every i ∈ I,

   u_i(s_i, s_{-i}) ≥ u_i(s′_i, s_{-i})

for all s′_i ∈ S_i.

- Equivalently, s constitutes a Nash equilibrium if s_i is a best response against s_{-i} for all i.
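
For a finite two-player game, the definition can be checked directly by enumerating best responses. The sketch below applies it to the Battle of sexes payoffs from the text:

```python
# Enumerate pure strategy Nash equilibria by direct best-response checks.
from itertools import product

def pure_nash(S1, S2, u1, u2):
    """All pure profiles where neither player has a profitable deviation."""
    eq = []
    for s1, s2 in product(S1, S2):
        br1 = all(u1[(s1, s2)] >= u1[(t, s2)] for t in S1)
        br2 = all(u2[(s1, s2)] >= u2[(s1, t)] for t in S2)
        if br1 and br2:
            eq.append((s1, s2))
    return eq

S1 = S2 = ["Ballet", "Football"]
u1 = {("Ballet","Ballet"): 2, ("Ballet","Football"): 0,
      ("Football","Ballet"): 0, ("Football","Football"): 1}
u2 = {("Ballet","Ballet"): 1, ("Ballet","Football"): 0,
      ("Football","Ballet"): 0, ("Football","Football"): 2}
print(pure_nash(S1, S2, u1, u2))
# [('Ballet', 'Ballet'), ('Football', 'Football')]
```

Both coordination outcomes survive; the miscoordination profiles do not, since each player would deviate to match the other.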

3.2.1 Non-existence of pure strategy Nash equilibrium

- Consider the game of Matching Pennies (or think about the familiar Rock-Paper-Scissors game):

              2
          H        T
1  H    1, −1    −1, 1
   T    −1, 1    1, −1

- Clearly, whenever player i chooses a best response to j, j wants to change.
- There is no rest point for the best-response dynamics.
- Hence, there is no pure strategy Nash equilibrium
3.2.2 Nash equilibrium in mixed strategies

- This non-existence problem is avoided if we allow mixed strategies

Definition 15 A mixed strategy profile σ ∈ Σ constitutes a Nash equilibrium if for every i ∈ I,

   u_i(σ_i, σ_{-i}) ≥ u_i(s_i, σ_{-i})

for all s_i ∈ S_i.

- It is easy to check that playing H and T with prob. 1/2 each constitutes a Nash equilibrium of the matching pennies game
- Whenever σ is a Nash equilibrium, each player i is indifferent between all s_i for which σ_i(s_i) > 0. This is the key to solving for mixed strategy equilibria.
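
The indifference property can be illustrated with matching pennies: if player 2 plays H with probability q, player 1 is indifferent between H and T exactly at q = 1/2, which pins down player 2's equilibrium mixture (and symmetrically for player 1):

```python
# Player 1's expected payoff to each pure action in matching pennies,
# when player 2 plays H with probability q.

def u1_pure(a1, q):
    if a1 == "H":
        return q * 1 + (1 - q) * (-1)    # match on H, mismatch on T
    return q * (-1) + (1 - q) * 1        # mismatch on H, match on T

q = 0.5
print(u1_pure("H", q), u1_pure("T", q))  # 0.0 0.0 -> player 1 is indifferent

# Away from q = 1/2 player 1 has a strict best response, so no other q
# can support an equilibrium in which player 1 mixes:
assert u1_pure("H", 0.6) > u1_pure("T", 0.6)
assert u1_pure("H", 0.4) < u1_pure("T", 0.4)
```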

3.2.3 Discussion

- A Nash equilibrium strategy is a best response, so it is rationalizable
- Hence, if there is a unique rationalizable strategy profile, then this profile must be a Nash equilibrium
- Obviously also: dominance solvability implies Nash
- An attractive feature of Nash equilibrium is that if players agree on playing Nash, then no player has an incentive to deviate from the agreement
- Hence, Nash equilibrium can be seen as a potential outcome of preplay communication
- Keep in mind that a Nash equilibrium can be seriously inefficient (Prisoner's dilemma)

3.2.4 Interpretations of mixed strategy equilibrium

- Do people really "randomize" their actions? Or should we interpret the mixed strategy equilibrium in some "deeper" way?
- There are various interpretations:

1. Mixed strategies as objects of choice

   – This is the straightforward interpretation: people just randomize

2. Mixed strategy equilibrium as a steady state

   – Players interact in an environment where a similar situation repeats itself, without any strategic link between plays
   – Players know the frequencies with which actions were taken in the past

3. Mixed strategies as pure strategies in a perturbed game

   – Players' preferences are subject to small perturbations
   – Exact preferences are private information
   – Mixed strategy equilibrium arises as the limit of pure strategy equilibria of the perturbed game as the perturbation vanishes
   – This is the purification argument by Harsanyi (1973)

4. Mixed strategies as beliefs

   – Think of σ as a belief system such that σ_i is the common belief of all the other players about i's actions
   – Each action in the support of σ_i is a best response to σ_{-i}
   – Each player chooses just one action
   – Equilibrium is a steady state in the players' beliefs, not in their actions

3.3 Existence of Nash equilibrium

- There are many existence results that guarantee the existence of Nash equilibrium under varying conditions
- The best known applies to finite games, and was proved by Nash (1950):

Theorem 16 Finite games, i.e., games with I players and finite strategy sets S_i, i = 1, ..., I, have a mixed strategy Nash equilibrium.

- The proof relies on the upper hemicontinuity of the players' best-response correspondences and on Kakutani's fixed point theorem
- See MWG Appendix of Ch. 8 for the proof (and the mathematical appendix for upper hemicontinuity)
- In many applications, it is more natural to model the strategy space as a continuum
- Think about, e.g., Cournot oligopoly
- There is then no fully general existence theorem (it is easy to construct games without Nash equilibria)
- The simplest existence theorem assumes quasi-concave utilities:

Theorem 17 Assume that the S_i are nonempty compact convex subsets of a Euclidean space, and that u_i : S → ℝ is continuous for all i and quasiconcave in s_i for all i. Then the game has a Nash equilibrium in pure strategies.

- Again, see MWG Appendix of Ch. 8 for the proof.
- In fact, Nash's theorem (the previous theorem) is a special case of this
- Many other theorems apply to various situations where continuity and/or quasiconcavity fail

3.4 Multiplicity of Nash equilibria

- A more serious concern for game theory is the multiplicity of equilibria.
- Consider Battle of sexes

            Ballet   Football
Ballet       2, 1     0, 0
Football     0, 0     1, 2

  or Hawk-Dove

         Dove   Hawk
Dove     3, 3   1, 4
Hawk     4, 1   0, 0

- Both of these games have two pure strategy equilibria and one mixed strategy equilibrium (can you see this?)
- There is no obvious way to choose (especially between the two pure strategy equilibria)
- Sometimes equilibria can be Pareto ranked
- Consider stag hunt:

       A      B
A    2, 2   0, 1
B    1, 0   1, 1

- It can be argued that preplay communication helps to settle on the Pareto dominant equilibrium (A, A)
- But even this might not be obvious. Consider:

       A      B
A    9, 9   0, 8
B    8, 0   7, 7

- Now playing A seems a bit shaky... (what if the other player still chooses B?)
- Moreover, with preplay communication, players have an incentive to convince the other player that A will be played, even if they plan to play B. Is preplay communication credible?
- We conclude that generally there is no good answer for selecting among multiple equilibria

3.5 Correlated equilibrium

- Consider once again the Battle of sexes example
- There is a unique symmetric equilibrium in mixed strategies: each player takes her favourite action with a certain probability (compute this)
- But suppose that the players have a public randomization device (a coin, for example). Let both players adopt the following strategy: go to ballet if heads, and to football if tails.
- Exercise: Show that this is an "equilibrium" and gives a better payoff to both players than the symmetric mixed strategy equilibrium.
- A generalization of this idea is called correlated equilibrium (see Osborne-Rubinstein Ch. 3.3, Fudenberg-Tirole Ch. 2.2, or Myerson Ch. 6 for more details)
- Correlated equilibrium may be interpreted as a solution concept that implicitly accounts for communication
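
A quick computation for the exercise (payoffs as in the Battle of sexes table): in the symmetric mixed equilibrium each player puts probability 2/3 on her favourite action, which the coin-flip strategy strictly improves upon:

```python
# Battle of sexes: u1 favours (Ballet, Ballet) = 2, u2 favours
# (Football, Football) = 2; miscoordination gives both players 0.

def payoffs(p_ballet_1, p_ballet_2):
    """Expected payoffs when players choose Ballet independently."""
    pBB = p_ballet_1 * p_ballet_2
    pFF = (1 - p_ballet_1) * (1 - p_ballet_2)
    return (2 * pBB + 1 * pFF, 1 * pBB + 2 * pFF)

# Symmetric mixed equilibrium: player 1 plays Ballet w.p. 2/3,
# player 2 plays Ballet w.p. 1/3 (i.e. Football, her favourite, w.p. 2/3).
mixed = payoffs(2/3, 1/3)

# Public coin: (Ballet, Ballet) and (Football, Football), each w.p. 1/2.
coin = (0.5 * 2 + 0.5 * 1, 0.5 * 1 + 0.5 * 2)

print(mixed, coin)   # about (0.667, 0.667) vs (1.5, 1.5)
```

The coin-flip strategy avoids the miscoordination outcomes entirely, which is why both players gain.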

4 Zero-sum games
Let us end with a few words about a special class of games: zero-sum
games
A two-player game is a zero-sum game if u1(s) = -u2(s) for all s ∈ S.
This of course implies that u1(σ) = -u2(σ) for all σ ∈ Σ.
Matching pennies is a zero-sum game
Zero-sum games are the most "competitive" games: maximizing one's own payoff is equivalent to minimizing the opponent's payoff. There is absolutely no room for cooperation (should tennis players cooperate in the Wimbledon final?)
What is the largest payoff that player 1 can guarantee herself? This is obtained by choosing

    max_{σ1∈Σ1} min_{σ2∈Σ2} u1(σ1, σ2).

Similarly for player 2:

    max_{σ2∈Σ2} min_{σ1∈Σ1} u2(σ1, σ2).

But, because u1 = -u2, this is equivalent to

    min_{σ2∈Σ2} max_{σ1∈Σ1} u1(σ1, σ2).

The famous minmax theorem by von Neumann (1928) states that

    max_{σ1∈Σ1} min_{σ2∈Σ2} u1(σ1, σ2) = min_{σ2∈Σ2} max_{σ1∈Σ1} u1(σ1, σ2).

This means that the maxmin value must be the payoff of a player in any Nash equilibrium (can you see why?)
See e.g. Myerson Ch. 3.8 for more details.
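For matching pennies, both sides of von Neumann's identity can be checked numerically. Since the inner optimizer always has a pure best response against a fixed mixed strategy, it suffices to scan one mixing probability on a grid (a sketch, not a general linear-programming solver):

```python
# Matching pennies payoffs for player 1; u2 = -u1 (zero sum)
A = [[1, -1], [-1, 1]]

def maxmin(n=1000):
    # max over p = P(player 1 plays Head) of the worst case over
    # player 2's pure actions (a pure action is always a best reply)
    return max(min(p * A[0][j] + (1 - p) * A[1][j] for j in (0, 1))
               for p in (k / n for k in range(n + 1)))

def minmax(n=1000):
    # min over q = P(player 2 plays Head) of player 1's best-reply payoff
    return min(max(q * A[i][0] + (1 - q) * A[i][1] for i in (0, 1))
               for q in (k / n for k in range(n + 1)))

print(maxmin(), minmax())  # 0.0 0.0: the value of matching pennies
```

Both values equal 0, attained by mixing 1/2-1/2, which is also the unique Nash equilibrium payoff.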

FDPE microeconomic theory, spring 2014
Lecture notes 2: Sequential rationality in dynamic games
Pauli Murto

1 Outline
We now move to dynamic games
We focus especially on Nash equilibrium refinements induced by sequential rationality: subgame perfect equilibrium and sequential equilibrium
Material: MWG Chapter 9
Other relevant sources: Fudenberg-Tirole Ch. 3-4, 8.3, Osborne-Rubinstein
Ch. 6-7, 12, Myerson Ch. 4
Some motivating examples:

1.1 Example: predation


An entrant considers entry into an industry with a current incumbent firm
Entry costs 1 unit
Monopoly profit in the industry is 4
If entry takes place, the monopolist can either accommodate or fight
Accommodation splits the monopoly profits, whereas fighting gives zero profit to both firms
Will the entrant enter, and if so, will the incumbent fight or accommodate?
Normal form representation of the game:

              Fight if entry   Accommodate if entry
  Enter           -1, 0               1, 2
  Stay out         0, 4               0, 4

There are two Nash equilibria: (Enter, Accommodate) and (Stay out, Fight
if entry)
Is one of the two equilibria more plausible?

1.2 Example: quality game
A producer can produce an indivisible good, and choose either high or low quality
Producing high quality costs 1 and low quality 0
The buyer values high quality at 3 and low quality at 1
For simplicity, suppose that the good must be sold at the fixed price 2
Which quality will be produced, and will the buyer buy?
Normal form representation of the game:

                High quality   Low quality
  Buy               1, 1          -1, 2
  Do not buy        0, -1          0, 0

There is only one Nash equilibrium: (Do not buy, Low quality)


What if the seller moves first?
What if the buyer moves first?
What if the seller moves first, but quality is unobservable?

1.3 Example: Stackelberg vs. Cournot


Consider the quantity-setting duopoly with profit functions

    πi(q1, q2) = qi (1 - q1 - q2), for i = 1, 2.

Suppose the players set their quantities simultaneously (Cournot model).


The unique Nash equilibrium is:

    (q1, q2) = (1/3, 1/3),

which gives payoffs

    π1(1/3, 1/3) = π2(1/3, 1/3) = (1/3)(1 - 2/3) = 1/9.

What if player 1 moves first? (Stackelberg model)


After observing the quantity choice of player 1, player 2 chooses his quantity.

Given the observed q1, firm 2 chooses q2. The optimal choice is

    BR2(q1) = (1 - q1)/2.

Player 1 should then choose:

    max_{q1} u1(q1, BR2(q1)) = q1 (1 - q1 - (1 - q1)/2) = (1 - q1) q1 / 2.

This leads to

    q1 = 1/2,  q2 = BR2(q1) = 1/4

with payoffs

    π1(1/2, 1/4) = 1/8,  π2(1/2, 1/4) = 1/16.

1.4 Example: matching pennies


           Head    Tail
  Head    1, -1   -1, 1
  Tail   -1, 1    1, -1

There is a unique Nash equilibrium, where both players mix with probability 1/2 on each action

What if player 1 moves first?
What if player 1 moves first, but his choice is unobservable?

1.5 Discussion
These examples illustrate that the order of moves is crucial
Moving first may help (commitment to an action)
Moving first may also hurt (matching pennies)
The normal form representation misses the dynamic nature of events, so we need to utilize the extensive form representation
The key principle is sequential rationality, which means that a player should always use a continuation strategy that is optimal given the current situation
For example, once the entrant has entered, the incumbent should act optimally given this fact (accommodate)
This will lead us to refinements of Nash equilibrium, in particular subgame perfect equilibrium (SPE) and sequential equilibrium

2 Subgame perfect equilibrium
2.1 Subgame
Consider an extensive form game with perfect recall
A subgame is a subset of the original game tree that inherits information sets and payoffs from the original game, and which meets the following requirements:

1. There is a single node such that all the other nodes are successors of
this node (that is, there is a single initial node to the subgame)
2. Whenever a node belongs to the subgame, then all of its successor
nodes must also belong to the subgame.
3. Whenever a node belongs to the subgame, then all nodes in the same
information set must also belong to the subgame.

2.2 Subgame perfect equilibrium


Definition 1 A strategy profile of an extensive form game is a subgame perfect Nash equilibrium (SPE) if it induces a Nash equilibrium in every subgame of the original game.

Every subgame perfect equilibrium is a Nash equilibrium, but the converse is not true. Thus, subgame perfection is a refinement of the Nash equilibrium concept.
The idea is to get rid of equilibria that violate the sequential rationality principle.
It is instructive to go through the examples that we have done so far and identify the subgame perfect equilibria.

2.3 Backward induction in games of perfect information


In finite games with perfect information, subgame perfect equilibria are found by backward induction:
Consider the nodes whose immediate successors are terminal nodes
Specify that the player who moves at those nodes chooses the action that leads to the best terminal payoff for her (in case of a tie, make an arbitrary selection)
Then move one step back to the preceding nodes, and specify that the players who move at those nodes choose the action that leads to the best terminal payoff, taking into account the actions specified for the subsequent nodes

Continue this process until all actions in the game tree have been determined
This process is a multi-player generalization of the backward induction principle of dynamic programming

Theorem 2 A finite game of perfect information has a subgame perfect Nash equilibrium in pure strategies.

The proof is the backward induction argument outlined above
If the optimal actions are unique at every node, then there is a unique subgame perfect equilibrium
Note that the terminal nodes are needed to start the backward induction. It does not work for infinite games.
How about chess?

2.4 Example: chain store paradox


Note that the entry game (predation) discussed at the beginning of the lecture is a perfect information game
Could a firm build a reputation for fighting if it faces a sequence of entrants?
The chain store paradox considers an incumbent firm CS that has branches in cities 1, …, K.
In each city there is a potential entrant.
In period k, the entrant of city k enters or not. If it enters, the incumbent may fight or accommodate.
Payoffs for city k are as in the original entry game:
              Fight if entry   Accommodate if entry
  Enter           -1, 0               1, 2
  Stay out         0, 4               0, 4

The incumbent maximizes the sum of payoffs over all cities, while each entrant maximizes the profit of its own period.
An entrant only enters if it knows that CS does not fight.
Would it pay for CS to build a reputation of toughness if K is very large?
The paradox is that in SPE, CS cannot build such a reputation.
In the final stage, the optimal action of CS is to accommodate if the entrant enters.

The entrant knows this, and thus enters.
By backward induction, the same happens in all stages.
We find that the unique SPE is that all entrants enter and CS always accommodates.
To bring reputation effects to life, we would need to introduce incomplete information (later in this course)

2.5 Example: centipede game


The centipede game is a striking example of backward induction
Two players take turns to choose Continue (C) or Stop (S)
The game can continue for at most K stages (K can be arbitrarily large)
In stage 1, player 1 decides between C and S. If he chooses S, he gets 2 and player 2 gets 0. Otherwise the game moves to stage 2.
In stage 2, player 2 decides between C and S. If he chooses S, he gets 3 and player 1 gets 1. Otherwise the game moves to stage 3, and so on.
In general, if player i stops in stage k, he gets k + 1, while the other player j gets k - 1.
If no player ever stops, both players get K.
Draw the extensive form and solve by backward induction. What is the unique SPE?
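A sketch for checking the backward-induction answer: work from stage K down, tracking the payoff pair (current mover, other player) that optimal continuation play delivers:

```python
def centipede(K):
    """Backward induction in the K-stage centipede game described above."""
    # If play passes stage K, the game ends and both players get K;
    # cont = (payoff to whoever moves next, payoff to the other player).
    cont = (K, K)
    plan = {}
    for k in range(K, 0, -1):
        stop = (k + 1, k - 1)       # mover stops: she gets k+1, the other k-1
        go = (cont[1], cont[0])     # mover continues: roles swap at the next stage
        if stop[0] >= go[0]:
            plan[k], cont = "S", stop
        else:
            plan[k], cont = "C", go
    return plan, cont               # cont = (player 1's payoff, player 2's payoff)

plan, spe_payoffs = centipede(100)
print(spe_payoffs, all(a == "S" for a in plan.values()))  # (2, 0) True
```

Stopping is chosen at every stage, so player 1 stops immediately with payoffs (2, 0), even though continuing long would give both players close to K.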

3 Multi-stage games with observed actions


One restrictive feature of games of perfect information is that only one player moves at a time
A somewhat larger class of dynamic games is that of multi-stage games with observed actions
Many players may act simultaneously within each stage
We may summarize each node that begins stage t by the history h^t that contains all actions taken in previous stages: h^t := (a^0, a^1, …, a^{t-1}).
A pure strategy is a sequence of maps si^t from histories to actions ai^t ∈ Ai(h^t).
Payoff ui is a function of the terminal history h^{T+1}, where T is infinite in some applications.

3.1 One-step deviation principle
Since many players may act simultaneously within a stage, backward in-
duction argument can not be applied as easily as with games of perfect
information
However, the following principle that extends backward induction idea is
useful:

Theorem 3 In a finite multi-stage game with observed actions, a strategy profile s is a subgame perfect equilibrium if and only if there is no player i and no strategy si' that agrees with si except at a single t and h^t, and such that si' is a better response to s_{-i} than si conditional on history h^t being reached.

That is: to check whether s is a SPE, we only need to check whether any player can improve her payoff by a one-step deviation
Note that this result requires a finite horizon, just like backward induction
Importantly, the result carries over to infinite horizon games under an extra condition that essentially requires that distant events are relatively unimportant
In particular, if payoffs are discounted sums of per-period payoffs, and per-period payoffs are uniformly bounded, then this condition holds
The proof of the one-step deviation principle is essentially the principle of optimality of dynamic programming.

3.2 Example: repeated prisoner’s dilemma


Consider the prisoner's dilemma with payoffs:

              Cooperate   Defect
  Cooperate     1, 1      -1, 2
  Defect        2, -1      0, 0

Suppose that two players play the game repeatedly for T periods, with total payoff

    (1 - δ)/(1 - δ^T) Σ_{t=0}^{T-1} δ^t gi(a^t),

where gi gives the per-period payoff of action profile a^t as given in the table above
Players hence maximize the discounted sum of payoffs, where the term (1 - δ)/(1 - δ^T) is just a normalization factor that makes the payoffs of games with different horizons easily comparable

Suppose first that the game is played just once (T = 1). Then (Defect, Defect) is the unique Nash equilibrium (Defect is a dominant action)
Suppose next that T is finite. Now subgame perfection requires both players to defect in the last period, and backward induction implies that both players always defect.
Finally, suppose that T is infinite. Then backward induction cannot be applied, but the one-step deviation principle holds (discounting and bounded per-period payoffs)
"Both defect every period" is still a SPE
However, provided that δ is high enough, there are now other SPEs too
By utilizing the one-step deviation principle, show that the following is a SPE: "cooperate in the first period and continue cooperating as long as no player has ever defected. Once one of the players defects, defect in every period for the rest of the game".
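For this grim trigger profile, the one-step deviation check on the cooperative path pins down the exact patience threshold (in the punishment phase, Defect is a stage-game best reply against Defect, so no further check binds). A numeric sketch with the payoffs above:

```python
# Grim trigger in the repeated PD above: cooperating forever yields 1 per
# period (normalized payoff 1); the best one-step deviation yields 2 today
# and 0 forever after, since both players then defect permanently.
def no_profitable_deviation(delta):
    coop = 1.0
    deviate = (1 - delta) * 2 + delta * 0.0
    return coop >= deviate

threshold = min(d / 1000 for d in range(1001) if no_profitable_deviation(d / 1000))
print(threshold)  # 0.5
```

So the grim trigger profile is a SPE exactly when δ ≥ 1/2: the one-period gain of 1 is outweighed by losing the cooperative payoff forever.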

4 Sequential equilibrium
Recall that a subgame starts with an information set that consists of a single node
But in games of imperfect information, there may be few such nodes
For example, in Bayesian games, where nature chooses a type for each player, the only subgame is the whole game tree
In such situations the refinement of subgame perfect equilibrium has no bite
To evaluate sequential rationality at information sets with many nodes, we must consider the beliefs of the player who chooses her action
We define a belief system as:

Definition 4 A belief system μ assigns to each information set h a probability distribution over the nodes of that information set. In other words, μ_h(x) ∈ [0, 1] gives the probability of node x in information set h, where Σ_{x∈h} μ_h(x) = 1.

In words, μ_h expresses the beliefs of the player who moves at h about the nodes in h, conditional on reaching h.
Let b denote some behavior strategy

Let a_x be the path of actions that leads from the initial node x0 to x. Define

    P^b(x) = Π {b(a) | a ∈ a_x},

and

    P^b(h) = Σ {P^b(x) | x ∈ h}.

Let ui(b | h, μ_h) be the expected utility of player i given that information set h is reached, given that player i's beliefs over the nodes x ∈ h are given by μ_h, and given that the strategy profile b is played at all information sets that follow h
Sequential rationality can now be formally stated:

Definition 5 A behavior strategy profile b is sequentially rational (given belief system μ) if for all i and all h such that i moves at h,

    ui(b | h, μ_h) ≥ ui((ai, b_{i,-h}), b_{-i} | h, μ_h) for all ai ∈ Ai(h),

where (ai, b_{i,-h}) denotes the strategy that plays ai at h and follows bi at all other information sets.

By the one-step deviation principle, this definition implies expected utility maximization at each h, given the beliefs at h and given that all future decisions are taken according to b.
So far we have said nothing about how beliefs are formed
To connect beliefs to strategies, we require that they are obtained from the strategies using Bayes' rule:

    μ_h(x) = P^b(x) / P^b(h), whenever P^b(h) > 0.
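A tiny numeric illustration of this formula. The game here is hypothetical: player 1's actions L and M lead to the two nodes x1 and x2 of an information set h of player 2, while R avoids h:

```python
# Player 1's behavior strategy at the root (hypothetical example)
b1 = {"L": 0.3, "M": 0.6, "R": 0.1}

# P^b(x): product of the probabilities on the path to x (a single action here)
P = {"x1": b1["L"], "x2": b1["M"]}
P_h = sum(P.values())               # P^b(h) = 0.9 > 0, so Bayes' rule applies
mu = {x: P[x] / P_h for x in P}     # mu_h(x) = P^b(x) / P^b(h)

print(mu)  # beliefs roughly 1/3 on x1 and 2/3 on x2
```

If instead b1 put probability 1 on R, then P^b(h) = 0 and Bayes' rule would place no restriction on μ_h, which is exactly the off-equilibrium freedom discussed below Definition 6.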

We have:

Definition 6 A Perfect Bayesian Equilibrium (PBE) is a pair (b, μ) such that b is sequentially rational given μ, and μ is derived from b using Bayes' rule whenever applicable.

What to do with off-equilibrium paths, i.e. information sets such that P^b(h) = 0?
The version of Perfect Bayesian Equilibrium defined above gives full freedom in choosing those beliefs (this version is called weak PBE in MWG)
Why do off-equilibrium beliefs matter? Because they may induce off-equilibrium actions that in turn influence behavior on the equilibrium path
To make the concept of PBE more useful in applications, additional restrictions on off-equilibrium beliefs have been introduced (see e.g. Fudenberg-Tirole section 8.2, or MWG section 13.C), but this is not a general cure, as it may lead to non-existence problems

The solution concept called sequential equilibrium, introduced in Kreps and Wilson (1982, Econometrica), derives beliefs at off-equilibrium information sets as limits from strategies that put a positive but small probability on all actions (so that all information sets are reached with positive probability):

Definition 7 A pair (b, μ) is a Sequential Equilibrium if:

1) Sequential rationality: b is sequentially rational given μ
2) Consistency of beliefs: there exists a sequence of pairs (b^n, μ^n) → (b, μ), such that for all n, b^n puts positive probability on all available actions, and for any h and any x ∈ h, μ_h^n(x) = P^{b^n}(x) / P^{b^n}(h).

Every finite extensive form game with perfect recall has a sequential equilibrium
In practice, PBE is a popular solution concept in applications
Sequential equilibrium is important because:

– Existence is guaranteed (in finite games with perfect recall)
– Every sequential equilibrium is at the same time a (weak) perfect Bayesian equilibrium
– Also, if (b, μ) is a sequential equilibrium, then b is at the same time a subgame perfect equilibrium (this does not necessarily hold for a weak PBE).

A related concept is the extensive form trembling-hand perfect Nash equilibrium, which also always exists in finite games (see MWG Appendix B to Ch. 9). An extensive form trembling-hand perfect equilibrium is a sequential equilibrium, but the converse is not necessarily true.

FDPE microeconomic theory, spring 2014
Lecture notes 3
Pauli Murto

We consider first a well known application of perfect information games: alternating offers bargaining
Then we consider repeated games (with perfect monitoring)

Both models fall into the category of multi-stage games with observed actions, so we can use subgame perfect equilibrium as the solution concept

1 Alternating offers bargaining

1.1 One stage game
Start with the simplest case: the one-period ultimatum game
Two players share a pie of size 1.
Player 1 suggests a division x ∈ (0, 1).
Player 2 accepts or rejects.
In the former case, 1 gets x and 2 gets 1 - x. In the latter case, both get 0.
Given any x ∈ (0, 1), the strategy profile {a1 = x, a2 = (accept iff a1 ≤ x)} is a Nash equilibrium. So there are infinitely many Nash equilibria.
But once player 1 has made an offer, the optimal strategy for 2 is to accept any offer a1 < 1, and he is indifferent about accepting the offer a1 = 1.
What can you say about subgame perfect equilibria?

1.2 Two stages

Suppose next that after 2 rejects an offer, the roles are reversed
2 makes an offer x to player 1 and, if it is accepted, she gets 1 - x for herself
What is the SPE of this two-round bargaining game?
What if players are impatient, and the payoff in stage 2 is only worth δi < 1 times the payoff in stage 1 for player i?
What is the SPE of this game?

1.3 Generalization to longer horizons: alternating offers bargaining
The player who rejects an offer makes a counteroffer
Players discount every round of delay by factor δi, i = 1, 2
If there are T periods, we can solve for a SPE by backward induction.
Suppose we are in the last period, and player 1 makes the offer. Then he should demand the whole pie, x = 1, and player 2 should accept.
Suppose we are in period T - 1, where player 2 makes the offer. He knows that if his offer is not accepted, in the next period player 1 will demand everything. So he should offer the least amount that player 1 would accept, that is x = δ1.
Similarly, at period T - 2 player 1 should offer the division x = 1 - δ2 (1 - δ1), and so on

Can you show that as T → ∞, the player i who starts offers

    x = (1 - δj) / (1 - δi δj)

in the first period, and this offer is accepted?
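The limit can be checked numerically: starting from x = 1 (the last proposer takes everything), apply two backward steps at a time so that x always denotes the share of the same player i when proposing; the resulting map is a contraction with modulus δi δj (a sketch of the exercise above):

```python
def proposer_share(d_i, d_j, steps=200):
    # x_{k+1} = 1 - d_j * (1 - d_i * x_k): two rounds of backward induction,
    # starting from x = 1 in the final round
    x = 1.0
    for _ in range(steps):
        x = 1 - d_j * (1 - d_i * x)
    return x

x = proposer_share(0.9, 0.8)
print(round(x, 6))                   # 0.714286 = (1 - 0.8)/(1 - 0.72)
print(proposer_share(0.8, 0.9) < x)  # True: the more patient proposer gets more
```

The iteration converges to (1 - δj)/(1 - δi δj) from any starting value, which is why the horizon's parity stops mattering as T grows.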


Note that the more patient player is stronger.
What if the horizon is infinite? With no end point (i.e. all rejected offers are followed by a new proposal), backward induction is not possible

Use the concept of SPE

Notice that the subgame starting after two rejections looks exactly the same as the original game
Therefore the set of SPEs is also the same for the two games
The famous result proved by Rubinstein (1982) shows that the infinite horizon game also has a unique equilibrium

Theorem 1 A subgame perfect equilibrium in the infinite horizon alternating offer bargaining game results in immediate acceptance. The unique subgame perfect equilibrium payoff for player 1 is

    v = (1 - δ2) / (1 - δ1 δ2).

1.4 Proof of Rubinstein’s result:
The part on immediate acceptance is easy and left as an exercise.
Calculate first the largest subgame perfect equilibrium payoff v̄ for player 1 in the game.
Denote by v2(2) the continuation payoff to player 2 following a rejection in period 1. The largest payoff for 1 consistent with this continuation payoff to 2 is:

    1 - δ2 v2(2)

Hence the maximal payoff to 1 results from the minimal v2(2).

We also know that

    v2(2) = 1 - δ1 v1(3)

Hence v2(2) is minimized when v1(3) is maximized.

Notice next that the game starting after two rejections is the same game as the original one. Hence v̄ is also the maximal value for v1(3).
Hence, combining the equations, we have

    v̄ = 1 - δ2 (1 - δ1 v̄)

and hence

    v̄ = (1 - δ2) / (1 - δ1 δ2).

Denote by v̲ the smallest subgame perfect equilibrium payoff to 1. The same argument goes through exchanging everywhere the words minimal and maximal. Hence we have:

    v̲ = 1 - δ2 (1 - δ1 v̲)

and

    v̲ = (1 - δ2) / (1 - δ1 δ2).

Since the largest and smallest payoffs coincide, the result is proved.

1.5 Comments:
If δ1 = δ2 = δ → 1, then the SPE payoff converges to a 50-50 split
This is a theory that explains bargaining power by patience
It cannot explain why there are often delays in bargaining
It is hard to generalize to more than two players
It requires perfectly divisible offers
It is sensitive to the bargaining protocol
This model is based on Rubinstein (1982), "Perfect equilibrium in a bargaining model", Econometrica 50.

2 Repeated games
An important class of dynamic games
We only give some basic results and intuitions, and restrict attention here to the case of perfect monitoring (i.e. all players perfectly observe each other's previous actions)
An extensive textbook treatment: Mailath and Samuelson (2006), "Repeated games and reputations: long-run relationships", Oxford University Press
In these games, the same "stage game" is repeated over and over again
A player's payoff is most typically the discounted sum of the payoffs across stages
The underlying idea: players may punish other players' deviations from nice behavior by their future play
This may discipline behavior in the current period
As a result, more cooperative behavior is possible

2.1 Stage game

A stage game is a finite I-player simultaneous-move game
Denote by Ai, i = 1, …, I the action spaces within a stage
The stage-game payoff is given by

    gi : A → R.

In an infinite horizon repeated game, the same stage game is repeated forever

2.2 Strategies and payoffs
Players observe each other's actions in previous periods
Therefore, this is a multi-stage game with observed actions
Denote by a^t := (a1^t, …, aI^t) the action profile in stage t
As before, the history at stage t, h^t := (a^0, …, a^{t-1}) ∈ H^t, summarizes the actions taken in previous stages
A pure strategy is a sequence of maps si^t from histories to actions
A mixed (behavior) strategy σi is a sequence of maps from histories to probability distributions over actions:

    σi^t : H^t → Δ(Ai).

The payoffs are (normalized) discounted sums of stage payoffs:

    ui(σ) = E_σ [ (1 - δ) Σ_{t=0}^∞ δ^t gi(σ^t(h^t)) ],

where the expectation is taken over the possible infinite histories generated by σ

The term (1 - δ) just normalizes payoffs to "per-period" units
Note that every period begins a proper subgame
For any σ and h^t, we can compute the "continuation payoff" at the current stage:

    E_σ [ (1 - δ) Σ_{τ=t}^∞ δ^{τ-t} gi(σ(h^τ)) ].

A preliminary result:

Proposition 2 If α = (α1, …, αI) ∈ Δ(S1) × … × Δ(SI) is a Nash equilibrium of the stage game, then the strategy profile

    σi^t(h^t) = αi for all i ∈ I, h^t ∈ H^t, t = 0, 1, …

is a subgame perfect equilibrium of the repeated game. Moreover, if the stage game has m Nash equilibria α^1, …, α^m, then for any map j(t) from time periods to {1, …, m}, there is a subgame perfect equilibrium

    σ^t(h^t) = α^{j(t)},

i.e. every player plays according to the stage-game equilibrium α^{j(t)} in stage t.

Check that you understand why these strategies are subgame perfect equilibria of the repeated game
These equilibria are not very interesting. The point of analyzing repeated games is, of course, that more interesting equilibria exist too

2.3 Folk theorems
What kind of payoffs can be supported in equilibrium?
The main insight of the so-called folk theorems (various versions apply under different conditions) is that virtually any "feasible" and "individually rational" payoff profile can be enforced in an equilibrium, provided that discounting is sufficiently mild

Individually rational payoffs:

What is the lowest payoff that player i's opponents can impose on i?
Let

    v̲i := min_{α_{-i}} max_{αi} gi(αi, α_{-i}),

where αi ∈ Δ(Si) and α_{-i} ∈ Π_{j≠i} Δ(Sj)

It is easy to prove the following:

Proposition 3 Player i's payoff is at least v̲i in any Nash equilibrium of the repeated game, regardless of the level of the discount factor.

Hence, we call {(v1, …, vI) : vi ≥ v̲i for all i} the set of individually rational payoffs.

Feasible payoffs:

We want to identify the set of all payoff vectors that result from some feasible strategy profile
With independent strategies, the feasible payoff set is not necessarily convex (e.g. in the Battle of the Sexes, the payoff (3/2, 3/2) can only be attained by correlated strategies)
Also, with standard mixed strategies, deviations are not perfectly detected (only actions are observed, not the actual strategies)
But in repeated games, convex combinations can be attained by time-varying strategies (if the discount factor is large)
To sidestep this technical issue, we assume here that players can condition their actions on the outcome of a public randomization device in each period
This allows correlated strategies, where deviations are publicly detected
Then, the set of feasible payoffs is given by

    V = co {v : ∃ a ∈ A such that g(a) = v},

where co denotes the convex hull operator

Having defined individually rational and feasible payoffs, we may state the simplest folk theorem:

Theorem 4 For every feasible and strictly individually rational payoff vector v (i.e. an element of {v ∈ V : vi > v̲i for all i}), there exists a δ̄ < 1 such that for all δ ∈ (δ̄, 1) there is a Nash equilibrium of the repeated game with payoffs v.

The proof idea is simple and important: construct strategies where all the players play the stage-game strategies that give payoffs v as long as no player has deviated from this strategy. As soon as one player deviates, the other players turn to punishment strategies that "minmax" the deviating player forever after.
If the players are sufficiently patient, any finite one-period gain from deviating is outweighed by the loss caused by the punishment; therefore the strategies are best responses (check the details).
The problem with this theorem is that the Nash equilibrium constructed here is not necessarily subgame perfect
The reason is that punishment can be very costly, so once a deviation has occurred, it may not be optimal to carry out the punishment
However, if the minmax payoff profile itself is a stage-game Nash equilibrium, then the equilibrium is subgame perfect
This is the case in the repeated Prisoner's dilemma
The question arises: using less costly punishments, can we generalize the conclusion of the theorem to subgame perfect equilibria?
Naturally, we can use some low-payoff stage-game Nash equilibrium profile as a punishment:

Theorem 5 Let α be a stage-game Nash equilibrium with payoff profile e. Then, for any feasible payoff vector v with vi > ei for every i, there is a δ̄ < 1 such that for all δ ∈ (δ̄, 1) there is a subgame perfect Nash equilibrium of the repeated game with payoffs v.

The proof is easy and uses the same idea as the above theorem, except that here one uses the Nash equilibrium strategy profile α as the punishment for a deviation
Because play continues according to a Nash equilibrium even after a deviation, this is a subgame perfect equilibrium
Note that the conclusion of Theorem 5 is weaker than that of Theorem 4 in the sense that it only covers payoff profiles where each player gets more than in some stage-game Nash equilibrium

Is it possible to extend the result to cover all individually rational and feasible payoff profiles?
Fudenberg and Maskin (1986) show that the answer is positive: in fact, for any v ∈ V such that vi > v̲i for all i, there is a SPE with payoffs v given that δ is high enough (under an additional dimensionality condition on the payoff set: the dimension of the set V equals the number of players)
See the Fudenberg-Tirole book or the original article for the construction

2.4 Structure of equilibria

The various folk theorems show that repeated interaction makes cooperation feasible as δ → 1
At the same time, they show that the standard equilibrium concepts do little to predict actual play in repeated games: the proofs use just one strategy profile that works if δ is large enough
The set of possible equilibria is large. Is there a systematic way to characterize behavior in equilibrium for a given fixed δ?
What is the most effective way to punish deviations?
At the outset, the problem is complicated because the set of potential strategy profiles is very large (what to do after all possible deviations...)
Abreu (1988) shows that all subgame perfect equilibrium paths can be generated by simple strategy profiles
"Simple" means that these profiles consist of I + 1 equilibrium paths: the actual play path and I punishment paths.
A path is just a sequence of action profiles
This is a relatively simple object: it does not contain a description of players' behavior after deviations
The idea is that a deviation is punished by switching to the worst subgame perfect equilibrium path for the deviator:

– Take a path as a candidate for a subgame perfect equilibrium path. We want to define a simple strategy profile that is a SPE and supports this path.
– Find the worst subgame perfect equilibrium path for each player. These are used as "punishment" paths.
– Define players' behavior: follow the default path as long as no player deviates.
– If one player deviates, switch to the punishment path of the deviator.
– If there is another deviation from the punishment strategy, again switch to the path that punishes the new deviator.
– By the one-step deviation principle, this is a subgame perfect equilibrium that replicates the original equilibrium path (recall that the one-step deviation principle works for infinite horizon games with discounting)
– Note that once all players follow these strategies, there are no deviations, and hence punishments are never used along the equilibrium path

For a formalization of this, see Abreu (1988): "On the theory of infinitely repeated games with discounting", Econometrica 56 (2).

2.5 Example: oligopoly


Finding the worst possible SPE for each player, as the construction above requires, may be difficult
However, for symmetric games, finding the worst strongly symmetric pure-strategy equilibrium is much easier

A strategy profile is strongly symmetric if for all histories h^t and all players i and j, we have si(h^t) = sj(h^t)
Following the same idea as in Abreu (1988), we can construct the best strongly symmetric equilibrium by finding the worst punishment paths in the class of strongly symmetric equilibria
This works nicely in games where arbitrarily low stage-game payoffs may be induced by symmetric strategies
As an example, consider a quantity-setting oligopoly model (with continuum action spaces)
This originates from Abreu (1986), "Extremal equilibria of oligopolistic supergames", Journal of Economic Theory (here adapted from the Mailath-Samuelson book)
There are n firms producing homogeneous output with marginal cost c < 1
Firms maximize the discounted sum of stage payoffs with discount factor δ
Given outputs q1, …, qn, the stage payoff of firm i is

    ui(q1, …, qn) = qi (max{1 - Σ_{j=1}^n qj, 0} - c).

The stage game has a unique symmetric Nash equilibrium

    qi^N = (1 - c)/(n + 1) =: q^N, i = 1, …, n

with stage payoffs

    ui(q1^N, …, qn^N) = ((1 - c)/(n + 1))^2.

The symmetric output that maximizes joint profits is

    qi^m = (1 - c)/(2n) =: q^m

giving payoffs

    ui(q1^m, …, qn^m) = (1/n) ((1 - c)/2)^2.

Note that in this model, one subgame perfect equilibrium is trivially si(h^t) = q^N for all i and h^t
Therefore, if δ is high enough, optimal outputs are achieved by Nash-reversion strategies: play q^m as long as all the players do so, otherwise revert to playing q^N forever
However, cooperation at lower discount factors is possible with more effective punishments, as follows
Let π(q) denote the payoff under the symmetric output profile qi = q for i = 1, …, n:

    π(q) = q (max{1 - nq, 0} - c)

Let π^d(q) denote the maximal "deviation payoff" for firm i when the others produce q:

    π^d(q) = max_{q1} u1(q1, q, …, q)
           = (1/4)(1 - (n - 1)q - c)^2   if 1 - (n - 1)q - c ≥ 0
           = 0                           otherwise

Note that π(q) can be made arbitrarily low with high enough q, allowing severe punishments
Also, π^d(q) is decreasing in q, and π^d(q) = 0 for q high enough
Let v̲ denote the worst payoff achievable in a strongly symmetric equilibrium (it can be shown as part of the construction that a strategy profile achieving this minimum payoff exists)

Given this, the best payoff that can be achieved in a SPE is obtained by every player choosing q* given by

    q* = arg max_q π(q)    (1)

subject to

    π(q) ≥ (1 - δ) π^d(q) + δ v̲,    (2)

where the inequality constraint ensures that playing q (now and forever) is better than choosing the best deviation and obtaining the worst SPE payoff from that point on
How do we find v̲?
The basic insight is that we can obtain v by using a "carrot-and-stick"
punishment strategy with some "stick" output q s and "carrot" output q c
According to such strategy, choose output q s in the …rst period and there-
after play q c in every period, unless any player deviates from this plan,
which causes this prescription to be repeated from the beginning
Intuitively: q s leads to painfully low pro…ts (stick), but it has to be su¤ered
once in order for the play to resume to q c
To be a SPE, such a strategy must satisfy:

1. Players don't have an incentive to deviate from the "carrot":

    π(q^c) ≥ (1 - δ) π^d(q^c) + δ [(1 - δ) π(q^s) + δ π(q^c)], or

    π^d(q^c) - π(q^c) ≤ δ (π(q^c) - π(q^s))    (3)

2. Players don't have an incentive to deviate from the "stick":

    π^d(q^s) ≤ (1 - δ) π(q^s) + δ π(q^c)    (4)

To find the optimal "carrot-and-stick" punishment, we can proceed as follows:

First, guess that the joint optimum q^m can be supported in SPE. If that is the case, then let q^c = q^m, and let q^s be the worst "stick" that the players still want to carry out (knowing that this restores play to q^m), i.e. solve q^s from

    π^d(q^s) = (1 - δ) π(q^s) + δ π(q^m).

If

    π^d(q^m) - π(q^m) ≤ δ (π(q^m) - π(q^s)),

then no player indeed wants to deviate from q^m, and this carrot-and-stick strategy works, giving:

    v* = (1 - δ) π(q^s) + δ π(q^m)
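This construction can be carried out numerically. The sketch below uses illustrative duopoly values n = 2, c = 0.2, δ = 0.6 (all assumptions, not from the notes); it finds the harshest enforceable stick by grid search over the equality version of constraint (4), and checks that the resulting punishment both deters deviations from q^m and is harsher than Nash reversion:

```python
# Sketch: solve the carrot-and-stick punishment for an illustrative duopoly
# (n, c, delta are assumed parameter values).
n, c, delta = 2, 0.2, 0.6

def pi(q):       # stage payoff under the symmetric profile q_i = q
    return q * (max(1.0 - n * q, 0.0) - c)

def pi_d(q):     # best deviation payoff when the others produce q
    return (max(1.0 - (n - 1) * q - c, 0.0) / 2.0) ** 2

qm = (1 - c) / (2 * n)   # carrot: collusive output
qN = (1 - c) / (n + 1)   # static Nash output

def slack(q):
    """<= 0 exactly when the stick constraint (4) holds for stick q."""
    return pi_d(q) - (1 - delta) * pi(q) - delta * pi(qm)

# The worst enforceable stick is the largest q still satisfying (4).
grid = [i / 100000 for i in range(100001)]   # q in [0, 1]
qs = max(q for q in grid if slack(q) <= 0)

v = (1 - delta) * pi(qs) + delta * pi(qm)    # worst strongly symmetric payoff
# Carrot constraint (3): the punishment deters deviations from q^m
assert pi_d(qm) - pi(qm) <= delta * (pi(qm) - pi(qs))
# The punishment is harsher than Nash reversion to the stage Nash payoff
assert v < pi(qN)
```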
However, if

    π^d(q^m) - π(q^m) > δ (π(q^m) - π(q^s)),

then the worst possible punishment is not severe enough, and q^m cannot be implemented

In that case we want to find the lowest q^c > q^m for which there is some q^s such that (3) and (4) hold

This task is accomplished by finding the q^c and q^s that solve those two inequalities as equalities (both "incentive constraints" bind)

Note that this algorithm gives us the solution to (1) - (2): q* = q^c and v* = (1 - δ) π(q^s) + δ π(q*)
Is something lost by restricting to strongly symmetric punishment strategies? If v* = 0, then clearly there cannot be any better asymmetric punishments (every player guarantees zero by producing zero in every period). In that case restricting to strongly symmetric strategies is without loss

However, if v* > 0, then one could improve by adopting asymmetric punishment strategies
It can be shown that q* and v* are decreasing in the discount factor δ, and the corresponding stick output q^s is increasing in δ

That is, a higher discount factor improves the achievable stage payoff by making feasible punishments more severe

For a high enough discount factor, we have v* = 0 and q* = q^m
FDPE microeconomic theory, spring 2014
Lecture notes 4
Pauli Murto

1 Bayesian games
1.1 Modeling incomplete information
So far we have assumed that players know each other’s characteristics, in
particular, their preferences
It is clear that this is often unrealistic
How other players behave depends on their payoffs

Therefore, player i's optimal action depends on his beliefs about the others' payoffs

But then, the other players' optimal actions depend on what they believe about i's beliefs

... and so i's optimal action depends on his beliefs about the other players' beliefs about his beliefs, and so on
A full model of incomplete information should specify the belief hierarchy
containing beliefs of all orders (belief over payo¤s, beliefs over beliefs,
beliefs over beliefs over beliefs, and so on)
Harsanyi (1967-68) showed how such a model can be specified in a tractable way

His insight was to assume that a random variable, "nature's move", specifies the "type" of each player, where the "type" of player i contains the information about i's payoffs as well as i's beliefs of all orders

The probability distribution of this random variable is assumed to be commonly known among the players (common prior), and the players then use Bayes' rule to reason about probabilities
In applications, resulting belief hierarchies are typically very simple
Mertens and Zamir (1985) showed that in principle one can construct a
rich enough type space to model any situation of incomplete information
in such a manner
It should be mentioned that the assumption of a "common prior" underlying the Harsanyi model is critical

Note that the Harsanyi model in effect just models incomplete information as a standard extensive form game with imperfect information (nature takes the first move, and the players are asymmetrically informed about this move)
However, it is more practical to treat such models as a separate class of
games that are called Bayesian games

1.2 Bayesian game

A game of incomplete information, or a Bayesian game, is defined by:

1. Set of players, I = {1, 2, ..., I}.

2. Set of possible types for each player: Θ_i, i ∈ I. Let θ_i ∈ Θ_i denote a typical type of player i. We adopt the following notation: θ = (θ_1, ..., θ_I), θ_{-i} = (θ_1, ..., θ_{i-1}, θ_{i+1}, ..., θ_I), etc.

3. Nature's move: θ is drawn from a joint probability distribution F on Θ = Θ_1 × ... × Θ_I (common prior)

4. Set of actions available to each player: A_i, i ∈ I. Let a_i ∈ A_i denote a typical action taken by i.

5. Strategies, s_i : Θ_i → A_i, for i ∈ {1, 2, ..., I}. The action that type θ_i takes is then given by s_i(θ_i) ∈ A_i. Denote the strategy space of i by S_i.

6. Payoffs, u_i(a_1, ..., a_I; θ_1, ..., θ_I).

The game proceeds as follows:

– First, nature chooses θ according to F.

– Then, each player i observes her realized type θ̂_i and updates her beliefs about the other players' types based on F. Denote the distribution on Θ_{-i} conditional on θ̂_i by F_{-i}(θ_{-i} | θ̂_i).

– Players then choose their actions simultaneously (we may also interpret those as representing full action plans in extensive form games)

If the Θ_i are finite, then F is just a discrete probability distribution on Θ, and we may denote by p(θ_1, ..., θ_I) the probability of the realization θ = (θ_1, ..., θ_I).

Then the expected payoff of i given θ̂_i and profile s is simply

    E_{θ_{-i}} [ u_i(s_i(θ̂_i), s_{-i}(θ_{-i}); θ̂_i, θ_{-i}) | θ̂_i ]
        = Σ_{θ_{-i} ∈ Θ_{-i}} u_i(s_i(θ̂_i), s_{-i}(θ_{-i}); θ̂_i, θ_{-i}) p_i(θ_{-i} | θ̂_i)

In many applications the type space is continuous, and then the expected payoff is defined analogously using an integral instead of a summation

Some classifications of models:
– If for all i, u_i is independent of θ_j, j ≠ i, then we have private values. Otherwise, the model has interdependent values.

– If θ_i, i = 1, ..., I, are distributed independently, we have a model with independent types. Otherwise, types are correlated.

– The simplest case is independently distributed, private values

1.3 Bayesian Nash equilibrium

Bayesian Nash Equilibrium is just the Nash Equilibrium in the current context:

That is, each player chooses a strategy s_i that is a best response to the other players' strategies s_{-i}:

Definition 1 A strategy profile (s_1, ..., s_I) is a Bayesian Nash Equilibrium if, for all i,

    E[u_i(s_i(θ_i), s_{-i}(θ_{-i}); θ)] ≥ E[u_i(s'_i(θ_i), s_{-i}(θ_{-i}); θ)]

for all s'_i ∈ S_i.

Another way to think about Bayesian equilibrium is to note that if s_i is a best response to s_{-i}, then each possible type θ_i must be playing a best response to the conditional distribution of the other players' types.

Hence, the above definition can be restated as follows: a profile s is a Bayesian Nash equilibrium if and only if, for all i and all θ̂_i ∈ Θ_i occurring with positive probability,

    E_{θ_{-i}}[u_i(s_i(θ̂_i), s_{-i}(θ_{-i}); θ̂_i, θ_{-i}) | θ̂_i]
        ≥ E_{θ_{-i}}[u_i(s'_i(θ̂_i), s_{-i}(θ_{-i}); θ̂_i, θ_{-i}) | θ̂_i]

for all s'_i ∈ S_i.


We can also allow mixed strategies as before

1.3.1 Cutoff strategies

With many types and actions, strategy spaces are large

In many applications, best-response strategies are monotonic in types

This often leads to tractable models that exploit cutoff strategies

Consider a binary game with a high action a_H and a low action a_L

If u_i(a_H, a_{-i}; θ_i, θ_{-i}) - u_i(a_L, a_{-i}; θ_i, θ_{-i}) is increasing in θ_i for all a_{-i}, then all best responses take the form of a cutoff strategy:

    θ_i < θ*_i ⇒ a_i = a_L,
    θ_i ≥ θ*_i ⇒ a_i = a_H.

You have many examples based on this technique in problem set 4.
1.4 Some examples

1.4.1 Interpreting mixed strategies through incomplete information (purification)

We have often relied on mixed strategies in our analysis of games

Sometimes mixed strategies raise objections. Why would people randomize?

Harsanyi suggested that mixed strategies may be interpreted as pure strategies in an incomplete information game, in the limit where incomplete information vanishes

To see how this works, consider the following game

              L         R
    U       0, 0      0, -1
    D       1, 0     -1, 3

This has a unique equilibrium in mixed strategies: σ_1(U) = 3/4, σ_1(D) = 1/4, σ_2(L) = σ_2(R) = 1/2
Let us change the game so that the payoffs are given by

                 L               R
    U       εθ_1, εθ_2       εθ_1, -1
    D         1, εθ_2          -1, 3

where ε > 0 is a small number and θ_1 and θ_2 are independent random variables uniformly distributed over [0, 1]

θ_1 is private information to player 1 and θ_2 is private information to player 2

This is a Bayesian game with independent private values

Note that here u_1((U, a_2); θ_1) - u_1((D, a_2); θ_1) is increasing in θ_1 for all a_2, and similarly u_2((a_1, L); θ_2) - u_2((a_1, R); θ_2) is increasing in θ_2 for all a_1, so best responses are cutoff strategies

In particular, there should be some cutoff levels θ̄_1 and θ̄_2 such that player 1 chooses U iff θ_1 ≥ θ̄_1 and player 2 chooses L iff θ_2 ≥ θ̄_2

For such cutoff strategies to be an equilibrium, the cutoff types should be indifferent between the two actions, which holds if:

    ε θ̄_1 = (1 - θ̄_2) · 1 + θ̄_2 · (-1)   and
    ε θ̄_2 = (1 - θ̄_1) · (-1) + θ̄_1 · 3
Solving these gives the equilibrium cutoffs:

    θ̄_1 = (2 + ε) / (8 + ε²)
    θ̄_2 = (4 - ε) / (8 + ε²)

Check that all other types are then choosing a strictly optimal action

Ex ante, player 1 chooses U with probability 1 - (2 + ε)/(8 + ε²) and D with probability (2 + ε)/(8 + ε²), whereas player 2 chooses L with probability 1 - (4 - ε)/(8 + ε²) and R with probability (4 - ε)/(8 + ε²)

Let ε → 0, and note that these probabilities converge to the unique mixed strategy equilibrium probabilities of the original game
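A quick way to see this convergence is to evaluate the cutoff formulas for shrinking ε. A minimal sketch (the helper name `cutoffs` is purely illustrative):

```python
# Sketch: cutoff equilibrium of the perturbed game, and its convergence to
# the mixed equilibrium (sigma_1(U), sigma_2(L)) = (3/4, 1/2).
def cutoffs(eps):
    """Equilibrium cutoffs from the two indifference conditions."""
    t1 = (2 + eps) / (8 + eps ** 2)   # player 1 plays U iff theta_1 >= t1
    t2 = (4 - eps) / (8 + eps ** 2)   # player 2 plays L iff theta_2 >= t2
    return t1, t2

for eps in [0.5, 0.1, 0.01]:
    t1, t2 = cutoffs(eps)
    # verify the two indifference conditions defining the cutoffs
    assert abs(eps * t1 - ((1 - t2) * 1 + t2 * (-1))) < 1e-12
    assert abs(eps * t2 - ((1 - t1) * (-1) + t1 * 3)) < 1e-12

t1, t2 = cutoffs(1e-9)
assert abs((1 - t1) - 3 / 4) < 1e-6   # Pr(U) -> sigma_1(U) = 3/4
assert abs((1 - t2) - 1 / 2) < 1e-6   # Pr(L) -> sigma_2(L) = 1/2
```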

1.4.2 Electronic mail game


Recall that a type may entail not only what the payoffs of a player are, but also what players believe about each other's payoffs, what players believe about what other players believe about payoffs, and so on

In other words, a type should capture a belief hierarchy

In the example considered above (as in much of the literature), belief hierarchies are very simple: if types are independently distributed, then all beliefs of order two or higher are degenerate (i knows j's belief for sure, because the prior is common, and j's private information does not affect his belief over i's payoffs)
To demonstrate a more complicated case, consider the following game analyzed by Rubinstein (1989)

Each of the two players has to choose action A or B

With probability p < 1/2 the payoffs are given by game G_a and with probability 1 - p the payoffs are given by game G_b:

                   A        B                          A        B
    Game G_a: A   M, M    1, -L       Game G_b: A    0, 0    1, -L
              B  -L, 1    0, 0                  B   -L, 1    M, M

where L > M > 1.

In both games, it is mutually beneficial to choose the same action, but the best action depends on the game

Note that B is the riskier action: even if the true game is G_b, choosing B is bad if the other player chooses A
Assume first that player 1 knows the true game, but player 2 does not

Then this is a very simple game of incomplete information, where both players choose A in the unique Bayesian equilibrium (see why?)

On the other hand, if both players know the game, then there is an equilibrium where (A, A) is played in game G_a and (B, B) is played in game G_b

The interesting case is the following: suppose that only player 1 knows the true state, and the players communicate through a special protocol as follows

If the game is G_a, then there is no communication

If the game is G_b, then player 1's computer sends an automatic message to player 2

If player 2's computer receives a message, then it automatically sends a confirmation to player 1

If player 1's computer receives a confirmation, then it automatically sends a further confirmation to player 2, and so on

The confirmations are sent automatically, but in each transmission there is a small probability ε that the message does not get through

If a message does not get through, then communication ends

At the end of the communication phase each player sees on her computer screen exactly how many messages her computer has sent

To model this as a Bayesian game, define the type Q_1 of player 1 to be the number of messages her computer has sent, and the type Q_2 of player 2 to be the number of messages her computer has sent

Then, if Q_1 = 0, the game is G_a; otherwise it is G_b

Note that both players know the true game, except type Q_2 = 0 of player 2 (she is quite convinced that the game is G_a if ε is small)
This is a Bayesian game with payoffs

    u_1((Q_1, Q_2), (A, A)) = M if Q_1 = 0
                            = 0 if Q_1 > 0

... and so on

The common prior is:

    Pr((Q_1, Q_2) = (q_1, q_2)) = 1 - p                         if q_1 = q_2 = 0
                                = p ε (1 - ε)^(q_1 + q_2 - 1)   if q_1 ≥ 1 and (q_2 = q_1 - 1 or q_2 = q_1)
                                = 0                             otherwise
How to compute the players' beliefs?

Consider player 1 with Q_1 ≥ 1 (type Q_1 = 0 knows that the game is G_a and that Q_2 = 0). She knows her own type Q_1, and she knows that player 2's type is either Q_2 = Q_1 - 1 or Q_2 = Q_1 (depending on whether her own message, or the next message by player 2, failed to go through). Therefore, her beliefs about player 2's type are

    Pr(Q_2 = Q_1 - 1) = ε / (ε + (1 - ε)ε) > 1/2 and
    Pr(Q_2 = Q_1) = (1 - ε)ε / (ε + (1 - ε)ε) < 1/2

(and Pr(Q_2 = q) = 0 for all other q)

Similarly, player 2 with Q_2 ≥ 1 knows her own type Q_2 and knows that player 1's type is either Q_1 = Q_2 or Q_1 = Q_2 + 1

Hence, her beliefs about player 1's type are

    Pr(Q_1 = Q_2) = ε / (ε + (1 - ε)ε) > 1/2 and
    Pr(Q_1 = Q_2 + 1) = (1 - ε)ε / (ε + (1 - ε)ε) < 1/2

(and Pr(Q_1 = q) = 0 for all other q)
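These beliefs can be checked mechanically against the common prior. A sketch with assumed illustrative values p = 0.4 and ε = 0.1:

```python
# Sketch: derive the conditional beliefs directly from the common prior
# (p and eps are illustrative assumptions).
p, eps = 0.4, 0.1

def prior(q1, q2):
    if q1 == q2 == 0:
        return 1 - p
    if q1 >= 1 and q2 in (q1 - 1, q1):
        return p * eps * (1 - eps) ** (q1 + q2 - 1)
    return 0.0

# Player 1 of type Q1 = q >= 1: the opponent is q-1 or q
for q in [1, 2, 5]:
    total = prior(q, q - 1) + prior(q, q)
    belief_lower = prior(q, q - 1) / total
    assert abs(belief_lower - eps / (eps + (1 - eps) * eps)) < 1e-12
    assert belief_lower > 0.5   # the "lower" type is the more likely one

# Type Q2 = 0 of player 2 believes the game is G_a with high probability
pr_Ga = prior(0, 0) / (prior(0, 0) + prior(1, 0))
assert abs(pr_Ga - (1 - p) / (1 - p + p * eps)) < 1e-12
assert pr_Ga > 0.9
```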
What are the higher order beliefs?

Claim: there is a unique Bayesian equilibrium, in which all types of both players play A

To prove the result: show first that for type Q_1 = 0 it is a dominant action to choose A

Next, show that it is then optimal for Q_2 = 0 to choose A, then also for type Q_1 = 1, then Q_2 = 1, and so on... (check)

So, even when 256 messages have gone through, the players play (A, A) (no matter how large the number M is)
The problem is: even if both players know for sure that the game is G_b, it is not common knowledge

An event is common knowledge among the players if all the players know the event, all the players know that all the players know the event, all the players know that all the players know that all the players know the event, and so on

Can you see that in the electronic mail game the event "the game is G_b" is not common knowledge even when Q_1 = 256 and Q_2 = 256? (Think first about e.g. the case Q_1 = 2 and Q_2 = 1)
FDPE microeconomic theory, spring 2014
Lecture notes 5
Pauli Murto

1 Dynamic games of incomplete information


We consider here briefly some dynamic games of incomplete information: nature first draws types for the players, each player observes her own type, and then the players play some extensive form game

As before, the principle of sequential rationality requires modeling explicitly the dynamic structure of the game

With incomplete information, the subgame perfection refinement has no bite, so the relevant refinements used here are perfect Bayesian equilibrium and sequential equilibrium

The games we consider fall in the class of "multi-stage games with observed actions and incomplete information", or "Bayesian extensive games with observed actions"

This is a Bayesian variant of the multi-stage games with observed actions; the only uncertainty is about the types of the players
In such a model:

– First, nature chooses a private type for each player

– Then, players play a multi-stage game, where at the beginning of each stage, all previous actions are observed (except the initial move by nature)

– We assume here that the type distribution is more restricted than in general Bayesian games: types are independently distributed across the players

More precisely: the game starts with nature choosing a type θ_i ∈ Θ_i for every player, and the types are independent:

    p(θ) = p_1(θ_1) · ... · p_I(θ_I)

and this distribution is common knowledge.


Now we may summarize the information at stage t as the list of previous actions by all players:

    h^t = (a^1, ..., a^{t-1}).

Without loss of generality, we may assume that all the players have a move in each period (if not, then we may let i choose from a one-element set A_i(h^t) = {ā})
A behavior strategy profile now assigns to each h^t a distribution over actions that depends on the type: σ_i(a_i | h^t, θ_i) is the probability with which i chooses action a_i ∈ A_i(h^t) given h^t and her type θ_i

At the beginning of t, players know exactly all the actions taken in the past, so uncertainty concerns just the types of the other players. The belief system can therefore be summarized as

    μ(h^t) = ( μ_i(θ_{-i} | h^t) )_{i ∈ I},

defined for all h^t, where μ_i(θ_{-i} | h^t) is i's probability assessment of the other players' types
Following Fudenberg-Tirole, it is reasonable to define Perfect Bayesian Equilibrium in this context by imposing several natural extra requirements in addition to weak consistency for the belief system:

C1: At all h^t, players share a common belief on each others' types. That is, any players i and j have an identical belief, denoted μ(θ_k | h^t), on k's type:

    μ_i(θ_k | h^t) = μ_j(θ_k | h^t) := μ(θ_k | h^t) for all h^t, θ_k ∈ Θ_k, and i ≠ k, j ≠ k.

Moreover, these assessments remain independent across players throughout the game:

    μ(θ | h^t) = μ(θ_1 | h^t) · ... · μ(θ_I | h^t).
C2: Other players' beliefs about player i's type do not depend on the actions of j ≠ i (even if an unexpected action is taken):

    μ(θ_i | h^t, a^t) = μ(θ_i | h^t, â^t) whenever a_i^t = â_i^t.

C3: Bayes' rule is applied whenever possible. That is, for all i, h^t, and a_i^t ∈ A_i(h^t), if there exists θ_i with μ(θ_i | h^t) > 0 and σ_i(a_i^t | h^t, θ_i) > 0, then

    μ(θ_i | h^t, a^t) = μ(θ_i | h^t) σ_i(a_i^t | h^t, θ_i) / Σ_{θ'_i ∈ Θ_i} μ(θ'_i | h^t) σ_i(a_i^t | h^t, θ'_i).

Note that this applies also when the history h^t is reached with probability 0. In particular: players update their beliefs about player i using Bayes' rule until her behavior contradicts her strategy, at which point they form a new common belief about i's type. From then on, this new belief serves as the basis of future Bayesian updating

Bayes' rule also applies to beliefs about player i if some other player j takes an unexpected action (i.e. if σ_j(a_j^t | h^t, θ_j) = 0 for all θ_j ∈ Θ_j)
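Condition C3 is just Bayes' rule applied type by type. A minimal sketch (the type and action labels are purely illustrative):

```python
# Sketch: the C3 belief update. Beliefs about player i are updated from the
# common belief mu(.|h^t) and i's (possibly mixed) strategy sigma_i at h^t.
def update(mu, sigma_i, a_i):
    """mu: dict type -> prob; sigma_i: dict type -> dict action -> prob.
    Returns the posterior over i's types after observing action a_i, or
    None when Bayes' rule does not apply (the action has probability 0)."""
    denom = sum(mu[t] * sigma_i[t].get(a_i, 0.0) for t in mu)
    if denom == 0.0:
        return None
    return {t: mu[t] * sigma_i[t].get(a_i, 0.0) / denom for t in mu}

# Example: two types; the "high" type plays 'fight' more often
mu = {"low": 0.5, "high": 0.5}
sigma = {"low": {"fight": 0.2, "yield": 0.8},
         "high": {"fight": 1.0}}
post = update(mu, sigma, "fight")
# posterior on "high": 0.5*1.0 / (0.5*0.2 + 0.5*1.0) = 5/6
assert abs(post["high"] - 5 / 6) < 1e-12
```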
To summarize: conditions C1 - C3 basically say that after each h^t, all players j ≠ i share a common probability distribution over i's type, denoted by μ(θ_i | h^t), and this is derived from μ(θ_i | h^{t-1}) by Bayes' rule whenever that is applicable.

Moreover, if μ(θ_i | h^t) cannot be derived by Bayes' rule from μ(θ_i | h^{t-1}), it is independent of the actions a_j^t, j ≠ i

Sequential rationality is as before: σ is sequentially rational if for all i and all h^t,

    u_i(σ | h^t, θ_i, μ(h^t)) ≥ u_i(σ'_i, σ_{-i} | h^t, θ_i, μ(h^t)) for all σ'_i ∈ Σ_i.

A perfect Bayesian equilibrium (PBE) is a pair (σ, μ) such that σ is sequentially rational (given μ), and μ satisfies C1-C3.

The literature often uses the concept of PBE rather than sequential equilibrium in this class of games, because it is simpler and more easily checked

But PBE as defined here is closely related to sequential equilibrium:

– Every sequential equilibrium is a PBE

– By a result of Fudenberg-Tirole (1991), if either each player has at most two possible types, or if there are two periods, then the set of perfect Bayesian equilibria coincides with the set of sequential equilibria.

– These conditions apply to many applications, so in those cases there is no difference between sequential equilibrium and PBE.

1.1 Example: Spence's signalling model

A worker's talent (= her value to an employer) is either low or high: θ ∈ {θ_L, θ_H}, with θ_L < θ_H and Pr(θ = θ_H) = p > 0.

The worker knows her talent but the employer does not

The employer offers a wage w

Suppose that the employer minimizes (w - θ)², so that given her belief, the optimal wage is w = E(θ).

This is just a short-cut way to model a labor market where competition drives the wage to the expectation of talent

Before seeking a job, the worker chooses a level of education e ≥ 0

We assume that education does not affect productivity, but has cost e/θ
The payoff for the worker is w - e/θ if she accepts a job with wage w

The game proceeds as follows:

– The worker observes her type θ

– The worker chooses an education level e

– The employer offers a wage w

– The worker accepts or rejects, and the game ends

There are many equilibria. We will next look separately at pooling equilibria and separating equilibria.

Pooling equilibrium

In a pooling equilibrium, both types of worker choose the same education level e_L = e_H = e*

Since the employer learns nothing from the education level, she offers the wage w* = (1 - p) θ_L + p θ_H

For this to be an equilibrium, it cannot be optimal for the worker to choose some e ≠ e*

The easiest way to satisfy this is to consider a belief system for the employer in which she believes that any deviation from education level e* originates from worker type θ_L

Hence consider the employer's strategy w(e*) = w*, w(e) = θ_L for e ≠ e*

When is this an equilibrium?

Separating equilibrium

In a separating equilibrium the two types choose different education levels, and hence the employer can tell them apart

Clearly, type θ_L should choose e_L = 0

To ensure that neither type wants to mimic the other, we must have

    θ_L ≥ θ_H - e_H/θ_L and θ_H - e_H/θ_H ≥ θ_L,

or

    θ_L (θ_H - θ_L) ≤ e_H ≤ θ_H (θ_H - θ_L).

Since θ_H > θ_L, a continuum of feasible values of e_H exists

Check that you can complete these to a PBE

This model has a lot of PBE
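The two incentive constraints are easy to verify for concrete numbers. A sketch with assumed illustrative talents θ_L = 1 and θ_H = 2:

```python
# Sketch: check the separating incentive constraints of the Spence model
# (tL and tH are illustrative assumed talent values).
tL, tH = 1.0, 2.0

def payoff(wage, e, theta):
    return wage - e / theta

lo, hi = tL * (tH - tL), tH * (tH - tL)   # feasible range for e_H: [1, 2]

for eH in [lo, (lo + hi) / 2, hi]:
    # low type prefers (e = 0, wage tL) to mimicking (e = eH, wage tH)
    assert payoff(tL, 0.0, tL) >= payoff(tH, eH, tL) - 1e-12
    # high type prefers (e = eH, wage tH) to being taken for a low type
    assert payoff(tH, eH, tH) >= payoff(tL, 0.0, tH) - 1e-12

# outside the range, one of the constraints fails
assert payoff(tL, 0.0, tL) < payoff(tH, lo - 0.1, tL)   # mimicking pays
assert payoff(tH, hi + 0.1, tH) < payoff(tL, 0.0, tH)   # signalling too costly
```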
Note: this is a multi-stage game with observable actions and at most two types, so the sequential equilibria are the same as the PBE

There is a large literature that considers refinements of sequential equilibrium to narrow down the plausible predictions

In this case, the so-called intuitive criterion by Cho and Kreps (1987) selects the best separating equilibrium (see MWG Chapter 13, Appendix A)

1.2 Reputation effects: chain-store game with incomplete information

We consider here a simple model of reputation following the seminal papers by Kreps and Wilson (1982) and Milgrom and Roberts (1982)

Consider the following variant of the chain-store game

A single long-run incumbent firm faces potential entry by a series of short-run firms

Each entrant plays only once but observes all previous play

Payoff matrix (entrant's payoff first, incumbent's second):

                 Fight if entry    Accommodate if entry
    Enter           -1, -1                b, 0
    Stay out         0, a                 0, a

Note that an entrant enters if she considers the probability of the incumbent fighting to be less than b/(b + 1)

As we observed earlier, this game with a finite number of entrants has a unique subgame perfect equilibrium, where every entrant enters and the incumbent accommodates every time

With an infinite horizon, there are in fact many equilibria, including one where every entrant enters, and one where entry is deterred (if the discount factor is high enough). Which one should we expect to be played?

Introduce incomplete information: with probability p0 > 0 the incumbent is tough and prefers to fight (with the complementary probability the incumbent is weak and gets the payoffs shown in the table)

Assume also that each entrant is tough with probability q0 > 0 and prefers to enter no matter how the incumbent responds. A weak entrant gets the payoffs shown in the table.

Clearly, the one-period game has a sequential equilibrium where the weak entrant enters if p0 < b/(b + 1) and stays out if p0 > b/(b + 1)
(and the tough entrant enters with probability 1, the tough incumbent fights with probability 1, and the weak incumbent accommodates with probability 1)

Now consider a game with a finite number of periods, where the incumbent maximizes the sum of payoffs over the periods

If the incumbent could credibly commit to fighting every entrant, then it would be optimal to do so if a(1 - q0) > q0, that is

    q0 < a/(a + 1).

Assume that this inequality holds. We will see that under this condition, the incumbent gets close to that commitment behavior in PBE

The reason why a weak incumbent might fight is that this could make it look more likely to the entrants that the incumbent is tough

To see this, suppose that the entrants' belief that the incumbent is tough is p, and suppose that a weak incumbent fights with probability β

Then, if entry takes place and the incumbent fights, p jumps to

    p'(p, β) = p / (p + (1 - p) β)

by Bayes' rule.

Clearly, p'(p, β) is decreasing in β, with p'(p, 1) = p and p'(p, 0) = 1
That is, if the weak incumbent fights with probability 1, then the entrants learn nothing. And if the weak incumbent fights with probability 0, then the entrants learn the incumbent's type perfectly

Consider the second-to-last period of the game

If a(1 - q0) > 1, then the incumbent finds it worthwhile to fight in the current period if this deters entry (of the weak entrant) in the final period

For simplicity, suppose that this is the case, that is:

    q0 < (a - 1)/a.

The analysis would be qualitatively similar if this did not hold, but we would need more backward induction steps from the last period to get reputation effects to work

What does a weak entrant do? This depends on the probability with which the incumbent fights

Let p denote the current-period belief of the entrants about the incumbent's type
If p > b/(1 + b), then the weak entrant should not enter

This also means that a weak incumbent should fight (because this deters entry for the next period)

What if p < b/(1 + b)?

Suppose p < b/(1 + b), and consider the weak incumbent's equilibrium fighting probability β, which leads to the final-period belief p'(p, β)

Could we have β such that p' < b/(1 + b)? No, because then the next-period weak entrant enters, so it would be better to accommodate in the current period than to fight

Could we have β such that p' > b/(1 + b)? No, because then the next-period weak entrant does not enter, so it is better to fight than to accommodate

The only candidate for equilibrium is the β that leads to p' = b/(1 + b):

    β = p / ((1 - p) b)

This also requires that in the final period the weak entrant (who will then be indifferent between entering and not entering) enters with a probability that makes the incumbent indifferent

The total probability that entry is fought in the second-to-last period is therefore

    p · 1 + (1 - p) · p / ((1 - p) b) = p (b + 1) / b

and therefore the entrant stays out if p > [b/(1 + b)]².
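The mixing probability and the posterior it induces can be checked directly (b = 1 is an illustrative assumed value):

```python
# Sketch: the weak incumbent's mixing probability beta = p/((1-p)b) pushes
# the posterior after a fight exactly to b/(1+b).
b = 1.0
for p in [0.1, 0.3, 0.45]:          # any p < b/(1+b) = 1/2 here
    beta = p / ((1 - p) * b)
    posterior = p / (p + (1 - p) * beta)
    assert abs(posterior - b / (1 + b)) < 1e-12
    # total probability of a fight: tough type (prob p) plus weak mixing
    assert abs(p * 1 + (1 - p) * beta - p * (b + 1) / b) < 1e-12
```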
Now, what happens in the third-to-last period?

Continuing with the same backward induction logic, if p > [b/(1 + b)]², then the weak entrant stays out and the weak incumbent fights (to deter entry by the next period's weak entrant)

If [b/(1 + b)]³ < p < [b/(1 + b)]², then the weak entrant stays out and the weak incumbent randomizes

If p < [b/(1 + b)]³, then the weak entrant enters and the weak incumbent randomizes

More generally, in the k-th period from the end, the weak entrant stays out and the weak incumbent fights if p > [b/(1 + b)]^(k-1)

Note that [b/(1 + b)]^(k-1) goes geometrically to zero as k increases

Therefore, for a fixed prior probability p0, there is some k such that the incumbent fights with probability one for the first N - k periods
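The threshold logic can be sketched numerically (b, p0 and N below are illustrative assumptions):

```python
# Sketch: the smallest k with p0 > (b/(1+b))**(k-1); more than k periods
# from the end, the weak incumbent fights for sure and entry is deterred.
b, p0, N = 1.0, 0.05, 100
ratio = b / (1 + b)                 # = 1/2 with these values

k = next(k for k in range(1, N + 1) if p0 > ratio ** (k - 1))
assert ratio ** (k - 1) < p0 <= ratio ** (k - 2)
assert k == 6        # (1/2)**5 = 0.03125 < 0.05 <= (1/2)**4 = 0.0625
# the incumbent fights with probability one in the first N - k periods
assert N - k == 94
```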
Hence, for a fixed p0, as the total number of periods N increases, the total payoff of the incumbent converges to the payoff that it would obtain by committing to always fight

The posterior p stays constant at p0 for the first N - k periods, so the incumbent does not actually make the entrants believe that it is a tough type

Rather, reputational concerns make the incumbent behave as if it were tough

This is the unique sequential equilibrium of the game (= PBE in this case)