ECON2112: Week 1
DJ Thornton
1
Table of contents
1. Introduction
2. Normal Form Games
3. Nash Equilibrium
4. Mixed Strategies
2
Introduction
Game Theory (Informally)
• A solution is a set of recommendations about how to play the game, such that no
player would have an incentive not to follow this recommendation.
3
Example: The Prisoners’ dilemma
4
Example: The Prisoners’ dilemma
                          Prisoner 2
                   Not Confess    Confess
Prisoner 1  Not Confess   1, 1     9, 0
            Confess       0, 9     6, 6
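As a quick sketch, the stability check behind this table can be automated. The snippet below treats the entries as years in prison that each prisoner wants to minimise (an assumption consistent with the numbers above) and searches for profiles with no profitable unilateral deviation:

```python
# Pure-strategy Nash equilibria of the Prisoners' Dilemma, treating the
# matrix entries as years in prison that each player wants to MINIMISE.
years = {
    ("NC", "NC"): (1, 1), ("NC", "C"): (9, 0),
    ("C",  "NC"): (0, 9), ("C",  "C"): (6, 6),
}
strategies = ["NC", "C"]

def is_nash(s1, s2):
    y1, y2 = years[(s1, s2)]
    # no unilateral deviation may strictly reduce a player's own years
    ok1 = all(years[(d, s2)][0] >= y1 for d in strategies)
    ok2 = all(years[(s1, d)][1] >= y2 for d in strategies)
    return ok1 and ok2

nash = [(s1, s2) for s1 in strategies for s2 in strategies if is_nash(s1, s2)]
print(nash)  # [('C', 'C')]
```

Only (Confess, Confess) survives the check, even though both confessing is jointly worse than both staying silent.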
5
Normal Form Games
Normal form games: General case
• Each player must choose a strategy; therefore, for each player we need a strategy set.
• We need to know players' preferences over possible outcomes. Therefore, for each player and each strategy profile (a strategy for each player) we need to specify the payoff to that player if that strategy profile is played.
6
Normal form games: Formal definition
7
Example: Prisoners’ Dilemma
Of course, all this information can be summarised using the following normal form
(sometimes called matrix form) representation.
9
Example: Battle of the sexes
• There are two players, therefore n = 2. Let player 1 be the wife and player 2 be
the husband.
O F
O 2, 1 0, 0
F 0, 0 1, 2
11
Game Theory: Assumptions
• If players can communicate at no cost, then the messages they send are part of
their strategies and the game specifies what messages can be sent (not in this
course). Second part of the course deals with “costly” signaling/communication.
12
Nash Equilibrium
Nash equilibrium
• The aim of Game Theory is to provide at least one solution for every game.
• The solution has to be stable: no player has any reason to stray from the recommendation (that is, they cannot achieve a strictly higher payoff by straying).
13
Nash equilibrium. Formal definition
for every s_i ∈ S_i.
14
Nash equilibrium. A characterisation
Notation: We write
S_{-i} = S_1 × · · · × S_{i-1} × S_{i+1} × · · · × S_n,
and
s_{-i} = (s_1, . . . , s_{i-1}, s_{i+1}, . . . , s_n) ∈ S_{-i}.
So, S_{-i} is the set of all strategy profiles for players other than i. For example, in a two
player game,
S_{-1} = S_2,  S_{-2} = S_1.
15
Nash equilibrium. A characterisation
max_{s_i' ∈ S_i} u_i(s_1, . . . , s_{i-1}, s_i', s_{i+1}, . . . , s_n).
Furthermore, we denote by BR_i(s_{-i}) the set of player i's best responses against s_{-i}.
Sometimes you might also see the compact notation (s_i, s_{-i}) ≡ (s_1, s_2, . . . , s_n), so that
s_i' is a best response to s_{-i} if and only if
s_i' ∈ arg max_{s_i ∈ S_i} u_i(s_i, s_{-i}).
16
Nash equilibrium. A characterisation
17
Nash equilibrium
If the strategy profile s = (s_1, . . . , s_n) is not a Nash equilibrium, then there is a player i
and a strategy s_i' ∈ S_i such that
u_i(s_i', s_{-i}) > u_i(s_i, s_{-i}).
That is, if s is not a Nash equilibrium, at least one player has an incentive to deviate
from s.
18
Nash Equilibrium: Examples
19
Nash Equilibrium: Examples
• NE = {(O, O), (F , F )}
• Note: There’s actually another NE here, but it involves something called mixed
strategies (next week’s topic).
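The two pure equilibria can be confirmed by brute force. This is a sketch: the payoff dictionary below simply transcribes the Battle of the Sexes matrix from the earlier slide (payoffs to maximise):

```python
import itertools

# Pure-strategy Nash equilibria of the Battle of the Sexes.
payoffs = {
    ("O", "O"): (2, 1), ("O", "F"): (0, 0),
    ("F", "O"): (0, 0), ("F", "F"): (1, 2),
}
S = ["O", "F"]

def is_nash(s1, s2):
    u1, u2 = payoffs[(s1, s2)]
    # neither player can gain by a unilateral deviation
    return (all(payoffs[(d, s2)][0] <= u1 for d in S)
            and all(payoffs[(s1, d)][1] <= u2 for d in S))

pure_ne = [p for p in itertools.product(S, S) if is_nash(*p)]
print(pure_ne)  # [('O', 'O'), ('F', 'F')]
```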
20
Nash Equilibrium: Examples
21
Nash Equilibrium: Examples
(player 3 plays W)              (player 3 plays E)
      L          R                    L          R
T  1, 1, 1   0, 0, 0          T  1, 1, 3   0, 0, 0
B  0, 0, 0   1, 1, 1          B  2, 0, 0   0, 1, 0
23
• NE = {(B, R, W )}
Matching Pennies
A game is played between two players, player 1 and player 2. Each player has
a penny and must secretly turn the penny to heads or tails. The players then
reveal their choices simultaneously. If the pennies match (both heads or both
tails) player 1 keeps both pennies, so wins one from player 2 (+1 for player 1,
-1 for player 2). If the pennies do not match (one heads and one tails) player 2
keeps both pennies, so receives one from Player 1 (-1 for player 1, +1 for
player 2).
24
Matching Pennies
• NE = ?
25
Mixed Strategies
Mixed Strategies
      H        T
H   1, -1   -1, 1
T  -1, 1    1, -1
• That is, suppose that player 1 is allowed to play H with probability x, where 0 ≤ x ≤ 1,
and T with probability 1 - x.
26
Mixed Strategies
      H        T
H   1, -1   -1, 1
T  -1, 1    1, -1
• Suppose player 2 plays H with probability y and T with probability 1 - y.
• Also note that x and y are just arbitrary variable names; in particular, the players have the same strategy set, S_1 = S_2.
u_1(H, y) = y - (1 - y)
u_1(T, y) = -y + (1 - y).
28
Matching Pennies: Mixed Strategies
We now want to compute BR1 (y ): player 1’s best response to player 2 playing H with
probability y and T with probability (1-y).
Example: Comparing expected payoffs for player 1
u_1(H, y) = y - (1 - y)
u_1(T, y) = -y + (1 - y).
Player 1 strictly prefers H to T when
y - (1 - y) > -y + (1 - y),
that is, when y > 1/2.
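The comparison can be checked numerically. The payoffs below are the standard Matching Pennies values (+1 to player 1 on a match, −1 on a mismatch), which is how the matrix on the earlier slide is reconstructed here:

```python
# Expected payoffs for player 1 in Matching Pennies when player 2 plays
# H with probability y and T with probability 1 - y.
def u1(pure, y):
    if pure == "H":
        return y * 1 + (1 - y) * (-1)   # u1(H, y) = y - (1 - y) = 2y - 1
    return y * (-1) + (1 - y) * 1       # u1(T, y) = -y + (1 - y) = 1 - 2y

# Player 1 prefers H exactly when y > 1/2:
print(u1("H", 0.75) > u1("T", 0.75))  # True
print(u1("H", 0.25) > u1("T", 0.25))  # False
print(u1("H", 0.5) == u1("T", 0.5))   # True: indifferent at y = 1/2
```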
29
Matching Pennies: Mixed Strategies
• Player 1 strictly prefers H to T if and only if y > 1/2.
30
Mixed Strategies
[Figure: player 1's best-response correspondence BR_1(y): x = 0 for y < 1/2, any x ∈ [0, 1] at y = 1/2, and x = 1 for y > 1/2.]
31
Mixed Strategies
x        H    1, -1   -1, 1
(1 - x)  T   -1, 1    1, -1
u_2(H, x) = -x + (1 - x)
u_2(T, x) = x - (1 - x).
We find that player 2 strictly prefers H to T if x < 1/2.
Mixed Strategies
• Player 2 strictly prefers H to T if x < 1/2.
33
Mixed Strategies
[Figure: BR_1(y) and BR_2(x) plotted together on the unit square; the two correspondences cross only at x = y = 1/2.]
34
Mixed Strategies
Recall Theorem 1:
Theorem 1 (NE iff BR)
The strategy profile (s_1*, . . . , s_n*) is a Nash equilibrium if and only if for every
i = 1, . . . , n we have s_i* ∈ BR_i(s*_{-i}).
• Given y = 1/2, player 1 obtains an expected payoff equal to zero and does not have
an incentive to deviate.
• Given x = 1/2, player 2 obtains an expected payoff equal to zero and does not have
an incentive to deviate.
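A sketch of that verification, again using the reconstructed ±1 Matching Pennies payoffs: against y = 1/2, every mix for player 1 yields an expected payoff of exactly zero, so no deviation is strictly profitable (player 2's side is symmetric).

```python
# Expected payoff to player 1 when player 1 plays H with probability x
# and player 2 plays H with probability y (match -> +1, mismatch -> -1).
def u1_mixed(x, y):
    return (x * y * 1 + x * (1 - y) * (-1)
            + (1 - x) * y * (-1) + (1 - x) * (1 - y) * 1)

eq = u1_mixed(0.5, 0.5)
deviations = [u1_mixed(x, 0.5) for x in (0.0, 0.25, 0.75, 1.0)]
print(eq, deviations)  # 0.0 [0.0, 0.0, 0.0, 0.0]
```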
35
Mixed Strategies and Dominated Strategies
ECON2112: Week 2
DJ Thornton
Mixed Strategies
Mixed Strategies
1
Mixed strategies
Example 2
If player 1 has two strategies, e.g., S1 = {T , B}, examples of mixed strategies are:
(1/2)T + (1/2)B,
(1/3)T + (2/3)B,
αT + (1 - α)B for any 0 ≤ α ≤ 1.
2
Mixed strategies
Example 4
If player 1 has two strategies, e.g., S_1 = {T, B}, we could also write the mixed
strategy
p_1 = αT + (1 - α)B
as
p_1 = (α, 1 - α),
so long as we have imposed some order on the strategies. This has the advantage of
instantly resembling a coordinate in the Cartesian plane.
This notation tends to generalise more naturally when we have more strategies, but
for the purposes of this course we will stick to the former notation. Be aware of this
subtle difference if you consult other textbooks.
3
Utility of a mixed strategy
Consider a two-player game with strategy sets S_1 = {T, B} and S_2 = {L, R}. What is
player 2's utility if player 1 plays the mixed strategy p_1 = (1/3)T + (2/3)B?
We define u_2(p_1, L) as the expected utility under the strategy p_1, that is,
u_2(p_1, L) = u_2((1/3)T + (2/3)B, L) = (1/3)u_2(T, L) + (2/3)u_2(B, L).
What about if player 2 plays the mixed strategy p_2 = (1/2)L + (1/2)R? Then the expected
utility of (p_1, p_2) is
u_2(p_1, p_2) = (1/3)(1/2)u_2(T, L) + (1/3)(1/2)u_2(T, R) + (2/3)(1/2)u_2(B, L) + (2/3)(1/2)u_2(B, R).
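A short sketch of the same computation. The numeric values of u_2 below are made-up placeholders, since the slide leaves the payoffs abstract:

```python
# Expected utility of a mixed-strategy profile in a 2x2 game.
# The payoff numbers are illustrative placeholders, not from the slides.
u2 = {("T", "L"): 1.0, ("T", "R"): 0.0, ("B", "L"): 0.0, ("B", "R"): 2.0}

p1 = {"T": 1/3, "B": 2/3}   # player 1's mixed strategy
p2 = {"L": 1/2, "R": 1/2}   # player 2's mixed strategy

# sum over all pure profiles, weighted by the product of probabilities
expected = sum(p1[s1] * p2[s2] * u2[(s1, s2)]
               for s1 in p1 for s2 in p2)
print(expected)  # ≈ 0.8333 (= 1/6 + 2/3 = 5/6)
```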
4
Utility of a mixed strategy
The notation for the utility of a mixed strategy in an n-player game is a bit heavy. We
present the two-player case below and note that the more general case is similar.
Let's write S_1 = {s_11, s_12} and S_2 = {s_21, s_22} for player 1 and player 2's pure
strategies, and write p_1 = (p_11, p_12) and p_2 = (p_21, p_22) for arbitrary mixed strategies
by player 1 and player 2. Note that the probability that player 1 plays s_11 and player 2
plays s_21 is p_11 × p_21. So
u_1(p_1, p_2) = p_11 p_21 u_1(s_11, s_21) + p_12 p_21 u_1(s_12, s_21) + p_11 p_22 u_1(s_11, s_22) + p_12 p_22 u_1(s_12, s_22)
u_2(p_1, p_2) = p_11 p_21 u_2(s_11, s_21) + p_12 p_21 u_2(s_12, s_21) + p_11 p_22 u_2(s_11, s_22) + p_12 p_22 u_2(s_12, s_22).
5
Utility of a mixed strategy
In compact summation notation:
u_1(p_1, p_2) = Σ_{i=1}^{2} Σ_{j=1}^{2} p_{1i} p_{2j} u_1(s_{1i}, s_{2j})
u_2(p_1, p_2) = Σ_{i=1}^{2} Σ_{j=1}^{2} p_{1i} p_{2j} u_2(s_{1i}, s_{2j})
5
Nash equilibrium with mixed strategies
6
Nash equilibrium with mixed strategies
Note:
Each of the p_i*'s in the mixed strategy profile p* is itself a distribution over
player i's pure strategies (i = 1, . . . , n). So p_i* might look something like (1/2)T + (1/2)B (for
example).
6
Mixed Strategies
Proof.
Unfortunately well beyond the scope of this course... but for those interested, it
follows from a famous result about fixed points.
Note:
Every pure strategy si can be expressed as a mixed strategy that gives probability one
to si and probability zero to every other strategy.
7
Mixed Strategies and Nash equilibrium
u_i(p_i*, p*_{-i}) ≥ u_i(s_i, p*_{-i})
8
Mixed Strategies
Example 6
      H        T
H   1, -1   -1, 1
T  -1, 1    1, -1
9
Nash equilibrium: Equivalent definition
Definition 7
The set of player i's best responses against p_{-i} ∈ P_{-i} is the set of player i's
strategies that solve
max_{p_i' ∈ P_i} u_i(p_i', p_{-i}).
Furthermore, we denote by BR_i(p_{-i}) the set of player i's best responses against p_{-i}.
10
Nash equilibrium: Equivalent definition
Definition 8
The strategy profile (p_1*, p_2*, . . . , p_n*) is a Nash equilibrium if for every i = 1, . . . , n we
have p_i* ∈ BR_i(p*_{-i}).
11
Mixed Strategies
• The game has two Nash equilibria in pure strategies (T , L) and (B, R).
• What about Nash equilibria in mixed strategies?
12
Mixed Strategies
BR_1(y):  x = 1 if y > 1/3;  any x ∈ [0, 1] if y = 1/3;  x = 0 if y < 1/3.
BR_2(x):  y = 1 if x > 2/3;  any y ∈ [0, 1] if x = 2/3;  y = 0 if x < 2/3.
• Under what conditions can we say that a strategy is unequivocally better than
another?
• Certainly this should be the case if a strategy always gives a larger payoff than a
second strategy, no matter what the other players do.
14
Dominant Strategies
• For each player, whatever the other player does, Confess leads to a better outcome (fewer
years in prison) than Not Confess.
• We say that the strategy Not Confess is strictly dominated by Confess, or that
the strategy Confess strictly dominates the strategy Not Confess.
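The dominance check is mechanical. As before, this sketch treats the matrix entries as years in prison to be minimised (an assumption based on the numbers on the slide):

```python
# Check strict dominance for player 1 in the Prisoners' Dilemma
# (entries are years in prison, so fewer is better).
years = {
    ("NC", "NC"): (1, 1), ("NC", "C"): (9, 0),
    ("C",  "NC"): (0, 9), ("C",  "C"): (6, 6),
}

def strictly_dominates(a, b, opponent=("NC", "C")):
    # a strictly dominates b if a yields strictly fewer years than b
    # against EVERY opponent strategy
    return all(years[(a, s2)][0] < years[(b, s2)][0] for s2 in opponent)

print(strictly_dominates("C", "NC"))  # True
print(strictly_dominates("NC", "C"))  # False
```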
Normal Form Games
16
Equilibrium in Dominant Strategies: Example
17
Strictly Dominated Strategies: Properties
Theorem 2
Let G be an n-player normal form game such that player i has a strictly dominated
strategy in G . Let G 0 be the n-player normal form game that is obtained from G by
removing such a strictly dominated pure strategy. Then G and G 0 have the same set
of Nash equilibria.
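Theorem 2 licenses the following procedure. The function below is a sketch of iterated deletion for two-player games given as a dictionary from strategy profiles to payoff pairs; the Prisoners' Dilemma entries at the bottom are an illustrative re-scaling into payoffs to maximise (not the years-in-prison numbers from the slide):

```python
# Iterated deletion of strictly dominated pure strategies
# for a two-player game: payoffs maps (s1, s2) -> (u1, u2).
def iterated_deletion(payoffs, S1, S2):
    S1, S2 = list(S1), list(S2)
    changed = True
    while changed:
        changed = False
        # player 1: delete s if some a is strictly better against
        # every surviving strategy of player 2
        for s in list(S1):
            if any(all(payoffs[(a, t)][0] > payoffs[(s, t)][0] for t in S2)
                   for a in S1 if a != s):
                S1.remove(s)
                changed = True
        # player 2: symmetric
        for s in list(S2):
            if any(all(payoffs[(t, a)][1] > payoffs[(t, s)][1] for t in S1)
                   for a in S2 if a != s):
                S2.remove(s)
                changed = True
    return S1, S2

# Illustrative PD payoffs (to maximise):
pd = {("NC", "NC"): (5, 5), ("NC", "C"): (-3, 6),
      ("C", "NC"): (6, -3), ("C", "C"): (0, 0)}
print(iterated_deletion(pd, ["NC", "C"], ["NC", "C"]))  # (['C'], ['C'])
```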
18
Strictly Dominated Strategies: Properties
Example 14
3 player game
L C R L C R
T 1, 1, 1 0, 0, 1 1, 2, 1 T 1, 1, 0 1, 3, 0 2, 1, 0
B 0, 1, 1 2, 0, 1 0, 0, 1 B 1, 1, 0 2, 0, 0 2, 3, 0
X Y
19
Strictly Dominated Strategies: Properties
Example 14
3 player game
R
T 1, 2, 1
19
Strictly Dominated Strategies: Properties
20
Strictly Dominated Strategies: Properties
We have reduced the original game to the Battle of the Sexes between players 1 and 2.
20
Weakly Dominated Strategies: Motivation
21
Weakly Dominated strategies.
22
Weakly Dominated strategies.
u_i(p_i, s_{-i}) ≥ u_i(p_i', s_{-i}) for every s_{-i} ∈ S_{-i},
and there exists at least one pure strategy profile s_{-i}' such that
u_i(p_i, s_{-i}') > u_i(p_i', s_{-i}').
23
Weakly and Strictly Dominated Strategies: Comparison
u_i(p_i, s_{-i}) ≥ u_i(p_i', s_{-i}),
24
Weakly Dominated strategies.
To summarise
• Players will never use strictly dominated strategies in a Nash equilibrium.
• Therefore, if the objective is to compute Nash equilibria, it is safe to apply
iterated deletion of strictly dominated strategies.
• As we have seen, there are Nash equilibria where players play weakly dominated
strategies.
• Therefore, if the objective is to compute Nash equilibria, it is not safe to apply
iterated deletion of weakly dominated strategies.
26
Applications
Normal form games
27
Applications
• When the total quantity in the market is Q = q_1 + q_2, the market-clearing price is:
p = a - Q if Q < a (where a > 0), and p = 0 if Q ≥ a.
28
Cournot model of duopoly: The game.
• Strategy set of firm 1: any quantity greater than or equal to zero. That is, S_1 = [0, ∞).
• Strategy set of firm 2: any quantity greater than or equal to zero. That is, S_2 = [0, ∞).
• For each strategy profile (q_1, q_2) we need to specify payoffs to player (firm) 1 and
player (firm) 2.
• π_1(q_1, q_2) = pq_1 - cq_1 = (a - q_1 - q_2)q_1 - cq_1 if q_1 + q_2 < a, and -cq_1 if q_1 + q_2 ≥ a.
• π_2(q_1, q_2) = pq_2 - cq_2 = (a - q_1 - q_2)q_2 - cq_2 if q_1 + q_2 < a, and -cq_2 if q_1 + q_2 ≥ a.
29
Duopoly and Cournot competition: Solution.
• Fix the production of firm 2 to q2 . What is the best strategy that firm 1 can take?
30
Duopoly and Cournot competition: Solution.
31
Duopoly and Cournot competition: Solution.
32
Duopoly and Cournot competition: Solution.
• The monopoly quantity (a - c)/2 strictly dominates any larger quantity.
• Given this, (a - c)/4 dominates any lower quantity.
• S_1^2 = [(a - c)/4, (a - c)/2]; S_2^2 = [(a - c)/4, (a - c)/2].
• Given this, 3(a - c)/8 dominates any larger quantity.
33
Duopoly and Cournot competition: Solution.
• · · ·
• S_1^∞ = {(a - c)/3}; S_2^∞ = {(a - c)/3}.
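The shrinking intervals can be iterated numerically. The loop below is a sketch with illustrative parameter values a = 1, c = 0; both bounds converge to the Cournot quantity (a - c)/3:

```python
# Iterate the dominance bounds: start from [0, (a-c)/2] and repeatedly
# apply the best-reply map q -> (a - c - q)/2 to both endpoints.
a, c = 1.0, 0.0
lo, hi = 0.0, (a - c) / 2      # the monopoly quantity caps everything
for _ in range(50):
    lo, hi = (a - c - hi) / 2, (a - c - lo) / 2
print(round(lo, 6), round(hi, 6))  # both approach 0.333333 = (a - c)/3
```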
34
Applications
• The demand is going to absorb Q units of the good, no matter at what price.
• Each firm i has to choose a price out of the interval [0, ∞).
• Buyers will always buy from the firm with the lowest price.
• If both firms set the same price, each firm will sell Q/2 units.
35
Bertrand model of duopoly: The game.
• Two players.
36
Bertrand model of duopoly: The solution.
37
Applications
• Goats will have to share the same limited space; hence, the benefit of having a
goat decreases in the total number of goats.
• We will assume that the benefit of having a goat when the total number of goats is
G = Σ_{i=1}^{n} g_i is equal to A - G² (where A is a positive constant).
• We assume A > c.
• Number of players: n.
• Strategy sets: for each farmer (player), the interval [0, ∞).
39
The Problem of the commons: Solution.
• n(A - c) = (2 + n)(g_1 + · · · + g_n)².
• From this we can obtain the total number of goats in a Nash
equilibrium:
G* = g_1 + · · · + g_n = ( n(A - c) / (2 + n) )^{1/2}
40
The Problem of the commons: Social Optimum.
• From this we can obtain the socially optimal number of goats:
G** = ( (A - c) / 3 )^{1/2}
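The two formulas can be compared directly; A = 4, c = 1, n = 10 below are arbitrary illustrative values:

```python
# Equilibrium vs socially optimal total number of goats,
# G* = (n(A - c)/(2 + n))**0.5 and G** = ((A - c)/3)**0.5.
A, c, n = 4.0, 1.0, 10
G_star = (n * (A - c) / (2 + n)) ** 0.5
G_opt = ((A - c) / 3) ** 0.5
print(G_star, G_opt, G_star > G_opt)  # equilibrium over-grazes the commons
```

For any n > 1 we have n/(2 + n) > 1/3, so the equilibrium always has strictly more goats than the social optimum.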
Political competition (Hotelling): Assumptions
• The party with most votes wins the election. In case of a tie, a fair coin is tossed
to determine the winner.
• Each voter will cast her vote for the political party that is closest to her political
ideology.
42
Political competition (Hotelling): Assumptions
• Suppose that each “point” in [0, 1] is inhabited by a voter who has that ideology.
E.g. there is a voter with ideology π/4.
[Figure: the ideology line from 0 (Socialist Alternative) to 1 (UNSW Conservatives), with the points 0.5 and π/4 ≈ 0.785 marked.]
43
Hotelling model of Political competition: The game.
• Payoffs (with I_1 ≤ I_2):
• If (I_1 + I_2)/2 > 1/2, then Party 1 gets a payoff equal to 1 and Party 2 gets a payoff equal
to 0.
• If (I_1 + I_2)/2 = 1/2, then both parties get a payoff equal to 1/2.
44
Hotelling model of Political competition: The solution.
[Figure: ideologies I_1 = 0.3 and I_2 = 0.8 on the line [0, 1], with their midpoint (I_1 + I_2)/2 = 0.55 marked: every voter to the left of 0.55 votes for party 1.]
• In the above example, party 1 wins (though party 2 is not playing a best
response!).
• Nash equilibrium: Ideologies I1⇤ and I2⇤ such that no party can unilaterally increase
their chances of winning the election.
45
Applications
47
Bertrand model of duopoly (v 2): Solution.
ECON2112: Week 3
DJ Thornton
Introduction
1
Games of Perfect Information
Perfect Information
2, 1 0, 0
• The open circle is the initial node.
• The black circles and the open circle are called decision nodes.
• L, R , a and b are called actions.
• 1 and 2 are the names of the players.
• We specify a payoff vector in each final node.
2
Perfect Information
What is a strategy?
1
L R
2 2
a b c d
2, 1 0, 0 4, 0 1, 3
• L, R , a, b, c , d are called actions.
• What is player 1’s set of pure strategies?
• What is player 2’s set of pure strategies?
3
Pure, mixed and behavioral strategies
Mixed strategies
In an extensive form game, a player’s mixed strategy is a probability distribution
on the set of that player’s pure strategies.
Behavioral strategies
A player’s behavioral strategy at a decision node where the player has to move
is a probability distribution over the actions available at that decision node.
4
Pure strategies
Example 1
1
L R
2 2
a b c d
2, 1 0, 0 4, 0 1, 3
• Recall: L, R , a, b, c , d are the actions.
• Player 1’s set of pure strategies: S1 = {L, R }
• Player 2’s set of pure strategies: S2 = {ac , ad , bc , bd }
5
Pure strategies
Example 2
1
L R
2 2
a b c d
1
2, 1 0, 0 A B 1, 3
4, 0 1, 4
• Player 1’s set of pure strategies: S1 = {LA, LB , RA, RB }
• Player 2’s set of pure strategies: S2 = {ac , ad , bc , bd }.
6
Pure strategies
4, 0 2, 3
• We can introduce moves of Nature.
• Player 1's set of pure strategies: S1 = {L, R }.
• Player 2's set of pure strategies: S2 = {ac , ad , bc , bd }.
7
Behavioral strategies
Example 4
1
L R
2 2
a b c d
2, 1 0, 0 4, 0 1, 3
• Set of behavioral strategy profiles: the set of profiles that look like this
(xL + (1 − x )R , ya + (1 − y )b, zc + (1 − z )d )
where 0 ≤ x ≤ 1, 0 ≤ y ≤ 1, 0 ≤ z ≤ 1.
8
Behavioral strategies
Example 5
1
L R
2 2
a b c d
1
2, 1 0, 0 A B 1, 3
4, 0 1, 4
• Set of behavioral strategy profiles: the set of profiles that look like this
(xL + (1 − x )R , ya + (1 − y )b, zc + (1 − z )d , wA + (1 − w )B )
where 0 ≤ x ≤ 1, 0 ≤ y ≤ 1, 0 ≤ z ≤ 1, 0 ≤ w ≤ 1.
9
Behavioral strategies
4, 0 2, 3
• Set of behavioral strategy profiles: the set of profiles that look like this
(xL + (1 − x )R , ya + (1 − y )b, zc + (1 − z )d )
where 0 ≤ x ≤ 1, 0 ≤ y ≤ 1, 0 ≤ z ≤ 1.
10
Normal Form Representation
Back to pure strategies: Normal Form Representation
• Once we know what the set of pure strategies is for each player, we can write
down the normal form representation of an extensive form game.
Example 7
1
L R
2
a b 1, 3
a b
2, 1 0, 0 L 2, 1 0, 0
R 1, 3 1, 3
11
Normal Form Representation
Example 8
1
L R
2 2
a b c d
2, 1 0, 0 4, 0 1, 3
ac ad bc bd
L 2, 1 2, 1 0, 0 0, 0
R 4, 0 1, 3 4, 0 1, 3
12
Normal form Representation
Example 9
1
L R
2 2
a b c d
1
2, 1 0, 0 A B 1, 3
4, 0 1, 4
ac ad bc bd
LA 2, 1 2, 1 0, 0 0, 0
LB 2, 1 2, 1 0, 0 0, 0
RA 4, 0 1, 3 4, 0 1, 3
RB 1, 4 1, 3 1, 4 1, 3 13
Normal form Representation
Example 10
1
L R
2 N
1 1
a b 2 2
2
2, 1 0, 0 c d 0, 3
4, 0 2, 3
      ac       ad      bc       bd
L    2, 1    2, 1    0, 0    0, 0
R    2, 3/2  1, 3    2, 3/2  1, 3
14
Nash Equilibrium
Nash Equilibrium of an extensive form game
15
Nash Equilibrium
1
L R
a b
2
L 2, 1 0, 0
a b 1, 3
R 1, 3 1, 3
2, 1 0, 0
• The game has two Nash equilibria in pure strategies, (L, a) and (R , b).
• Under (R , b), player 2 is threatening player 1 with playing b if he plays L.
• But is this a credible threat?
16
Nash Equilibrium
1
L R
a b
2
L 2, 1 0, 0
a b 1, 3
R 1, 3 1, 3
2, 1 0, 0
18
Backwards Induction. Computation
19
Backwards Induction. Computation
Example 15
1
L R
2 2
a b c d
1
2, 1 0, 0 A B 1,3
4, 0 1, 4
• The optimal behavior prescribed by backwards induction at player 1's final decision node is A.
20
Backwards Induction. Computation
Example 15
1
L R
2 2
a b c d
2, 1 0, 0 4,0 1,3
20
Backwards Induction. Computation
Example 15
1
L R
2
2,1 c d
4,0 1,3
20
Backwards Induction. Computation
Example 15
1
L R
2,1 1,3
20
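The reduction steps in Example 15 can be carried out by a small recursive routine. The tree below is reconstructed from the slides, with player indices 0 and 1 standing for players 1 and 2:

```python
# Recursive backwards induction on a finite game tree of perfect
# information. A node is either a payoff tuple (a leaf) or a pair
# (player_index, {action: subtree}).
def solve(node):
    if isinstance(node, tuple) and all(isinstance(v, (int, float)) for v in node):
        return node  # leaf: just return the payoff vector
    player, actions = node
    # the mover picks the action whose solved subtree pays them most
    return max((solve(sub) for sub in actions.values()),
               key=lambda payoff: payoff[player])

game = (0, {  # player 1 (index 0) chooses L or R
    "L": (1, {"a": (2, 1), "b": (0, 0)}),              # then player 2
    "R": (1, {"c": (0, {"A": (4, 0), "B": (1, 4)}),    # then player 1 again
              "d": (1, 3)}),
})
print(solve(game))  # (2, 1), matching the reduction on the slides
```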
Example 16
1
L R
2 N
1 1
a b 2 2
2
2, 1 0, 0 c d 0, 3
4, 0 2, 3
• Moves that conform with Backwards Induction: d ,
21
Backwards Induction. Computation
Example 16
1
L R
2 N
1 1
a b 2 2
2, 1 0, 0 2, 3 0, 3
21
Backwards Induction. Computation
Example 16
1
L R
N
1 1
2, 1 2 2
2, 3 0, 3
21
Backwards Induction. Computation
Example 16
1
L R
2, 1 1, 3
• The consequence is that, although mixed strategies are more general, there
is no loss of generality in working only with behavioral strategies.
22
Perfect vs. imperfect information
• However, in general, a player might not know all the history of moves when it
is his turn to move.
23
Perfect vs. imperfect information
1
L R
2 2
a b c d
2, 1 0, 0 1, 3 1, 3
24
Perfect vs. imperfect information
1
L R
2
a b a b
2, 1 0, 0 1, 3 1, 3
• When player 2 has to move, she does not know whether player 1 chose L or
R.
25
Definitions and Terminology
Extensive form game definition
Definition
The extensive form representation of a game specifies:
1. the players in the game,
2. when each player has the move,
3. if relevant, Nature’s moves and the corresponding probabilities attached to
each move of Nature,
4. what each player can do at each of her opportunities to move,
5. what each player knows at each of her opportunities to move,
6. the payoff received by each player for each combination of moves that could
be chosen by the players.
26
Definitions and Terminology
[Figure: a three-player extensive form game. Nature (N) moves first with probabilities 1/3 and 2/3; player 1 then chooses L or R without observing Nature's move; player 2 chooses a or b; player 3 moves last at various information sets with actions among c, d, e, f. Payoff vectors such as (2,1,2) and (3,1,0) appear at the final nodes.]
• The game starts at the initial node (the open circle) and continues along the
branches according to Nature’s and the players’ moves.
• N represents Nature. 27
Definitions and Terminology
27
Definitions and Terminology
• Each information set is labelled by the player who moves at that information
set. 28
Definitions and Terminology
• An information set contains the decision nodes that the player who moves at
that information set cannot differentiate between.
• When player 1 has to move he does not know what has been Nature’s move.
(But he knows the probabilities.)
29
Definitions and Terminology
• When player 2 has to move, he does not know whether player 1 chose L or R .
(But he knows the outcome of Nature’s move)
29
Definitions and Terminology
• In his singleton information set, Player 3 knows Nature’s move, that player 1
chose R and that player 2 chose a.
29
Definitions and Terminology
• At every information set the extensive form specifies the set of actions
available to the player that moves.
30
Definitions and Terminology
• Note that decision nodes contained in the same information set have the same
set of actions available.
• There is a unique history of actions from the initial node to each decision
node.
31
Definitions and Terminology
• Likewise, there is a unique history of actions from the initial node to each
ending node.
31
Extensive form games and subgame-perfect equilibrium
ECON2112: Week 4
DJ Thornton
Information Sets
Information Sets
• Information sets tell us when players are moving.
• An information set contains all the information that a player possesses when
he has to move.
• When a player moves at some information set, this player cannot distinguish
between decision nodes (in other words, the histories leading to the decision
nodes) contained in that information set.
• Every decision node inside the same information set must have the same set
of choices available.
1
Perfect Recall
Perfect Recall
We only consider extensive form games with perfect recall. Perfect recall means
that whenever a player has to move, he remembers all his previous moves and
whatever he knew before.
Extensive form that does not satisfy perfect recall:
1
L R
2 2
a b c d
1
e f e f
2
Subgames
Subgames
A subgame is a part of an extensive form game that could be considered as a
separate game. In particular:
• If the subgame contains a decision node x then it also contains the whole
information set that x belongs to.
• If the subgame contains a decision node x then it also contains every choice,
decision node, and final node that comes after x .
Example 1
Every extensive form is also a subgame of itself.
Example 3
1
L R
1 1
A B C D
2 2
a b a b c d c d
2, 1 − 2, 0 − 2, 0 − 1, 4 4, 1 0, 0 0, 0 1, 4
• This game has three subgames.
4
Information Sets, Perfect Recall and Subgames
• This extensive form game has only one subgame, which is the entire game.
• Therefore, this game has no proper subgames.
5
Strategies
Strategies
[Figure: the three-player extensive form game with Nature from the Definitions and Terminology slides, repeated to illustrate what a strategy specifies at each information set.]
9
Nash Equilibrium in Extensive Form Games
1
T B
2
L R L R
L R
3, 1 0, 0 0, 0 1, 3 T 3, 1 0, 0
B 0, 0 1, 3
10
Nash Equilibrium in Extensive Form Games
Example 10
1
Out In
1
2,5 T B
2
L R L R L R
OutT 2, 5 2, 5
3, 1 0, 0 0, 0 1, 3 OutB 2, 5 2, 5
InT 3, 1 0, 0
InB 0, 0 1, 3
• However, when the extensive-form game has subgames, there is another
(stricter) equilibrium concept that we can use.
11
Backwards Induction
Backwards Induction
Backwards Induction
Players should make their choices in a way consistent with deductions about
other players’ rational behavior in the future.
12
Subgame Perfect Equilibrium
Subgame Perfect Equilibrium
13
Subgame Perfect Equilibrium
Example 12
1
Out In
1
2,5 T B
2
L R L R
3, 1 0, 0 0, 0 1, 3
• The last subgame has three Nash equilibria: (T , L), (B , R ) and
((3/4)T + (1/4)B , (1/4)L + (3/4)R ).
14
Subgame Perfect Equilibrium
Example 12
1
Out In
2,5 3, 1
Example 13
1
Out In
1
2,5 T B
2
L R L R
3, 1 0, 0 0, 0 1, 3
• The last subgame has three Nash equilibria: (T , L), (B , R ) and
((3/4)T + (1/4)B , (1/4)L + (3/4)R ).
15
Subgame Perfect Equilibrium
Example 13
1
Out In
2,5 1, 3
Example 14
1
Out In
1
2,5 T B
2
L R L R
3, 1 0, 0 0, 0 1, 3
• The last subgame has three Nash equilibria: (T , L), (B , R ) and
((3/4)T + (1/4)B , (1/4)L + (3/4)R ).
16
Subgame Perfect Equilibrium
Example 14
1
Out In
2,5   3/4, 3/4
• Subgame perfect equilibrium: ((Out, (3/4)T + (1/4)B ), (1/4)L + (3/4)R ).
16
Subgame Perfect Equilibrium
Example 15
1
L R
1 1
A B C D
2 2
a b a b c d c d
1, 1 − 2, 0 − 2, 0 − 1, 3 3, 1 0, 0 0, 0 1, 3
17
Example 16
1
L R
2 2
a b c d
1
2, 1 0, 0 A B 1, 3
4, 0 1, 4
Example 16
1
L R
2 2
a b c d
2, 1 0, 0 4, 0 1, 3
Example 16
1
L R
2
2, 1 c d
4, 0 1, 3
Example 16
1
L R
2, 1 1, 3
19
Applications
Applications
Stackelberg competition
Stackelberg model of duopoly competition.
P = P (Q ) = a − Q
Stackelberg model of duopoly competition.
Stackelberg competition
• It is a sequential move game.
• We have to compute the subgame perfect equilibrium of the game.
• We start analysing the last subgames.
21
Stackelberg model of duopoly competition.
Last stages
• Suppose firm 1 chose q1 = q¯1 .
• Firm 2 observes this and maximizes:
a − q̄1 − c
q2∗ (q̄1 ) = .
2
• Note that a strategy for firm 2 is a function q2∗ : [0, ∞) → [0, ∞) such that, given
a quantity produced by firm 1, q1 , tells how much to produce, q2∗ (q1 ).
22
Stackelberg model of duopoly competition.
First stage
• Firm 1 knows that if he produces q1 firm 2 will produce according to q2∗ (q1 ).
• Firm 1 will choose q1 to maximize π1(q1) = (a − (q1 + q2∗(q1)))q1 − cq1, i.e.
π1(q1) = ( a − ( q1 + (a − q1 − c)/2 ) ) q1 − cq1.
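The first-stage problem can be solved numerically as a sanity check. The parameter values a = 10, c = 2 below are illustrative; maximising π1 gives the Stackelberg leader quantity q1* = (a − c)/2:

```python
# Firm 1's first-stage problem, given firm 2's best reply
# q2*(q1) = (a - q1 - c)/2.
a, c = 10.0, 2.0

def profit1(q1):
    q2 = (a - q1 - c) / 2          # the follower's best reply
    return (a - q1 - q2) * q1 - c * q1

# coarse grid search over [0, a - c]
best_q1 = max((i * (a - c) / 10000 for i in range(10001)), key=profit1)
print(round(best_q1, 3))  # 4.0 == (a - c)/2
```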
23
Applications
Bargaining
Bargaining
A model of bargaining
Players 1 and 2 are bargaining over a dollar. They alternate making offers. In the
first stage, player 1 offers a split (s1 , 1 − s1 ) of the dollar to player 2. Player 2 can
either accept, in which case the game would end, or reject in which case the
game would continue to the second stage. In the second stage, player 2 offers a
split (s2 , 1 − s2 ) of the dollar to player 1. Player 1 can either accept, in which case
the game would end, or reject in which case the game would continue to the
third stage. In the third stage a settlement (s, 1 − s) is given exogenously and
the game ends. Players are impatient and have a discount factor equal to δ < 1.
25
Bargaining
t=2: player 1 accepts any split (s2 , 1 − s2 ) such that s2 ≥ δs.
• If player 2 offers s2 = δs, player 1 accepts and payoffs are (δs, 1 − δs).
• If player 2 offers s2 < δs, player 1 rejects and payoffs are (δs, δ(1 − s)).
26
Bargaining
t=1: player 2 accepts any split (s1 , 1 − s1 ) such that 1 − s1 ≥ δ(1 − δs).
• If player 1 offers 1 − s1 = δ(1 − δs), player 2 accepts and payoffs are
(1 − δ(1 − δs), δ(1 − δs)).
• If player 1 offers 1 − s1 < δ(1 − δs), player 2 rejects and payoffs are
(δ²s, δ(1 − δs)).
In the subgame perfect equilibrium, payoffs are (1 − δ(1 − δs), δ(1 − δs)).
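The equilibrium payoffs can be evaluated for concrete parameters; δ = 0.9 and s = 0.5 below are illustrative choices:

```python
# Subgame perfect equilibrium payoffs of the three-stage bargaining game:
# player 1's opening offer leaves player 2 exactly delta*(1 - delta*s).
delta, s = 0.9, 0.5

u2 = delta * (1 - delta * s)   # player 2's payoff from accepting at t = 1
u1 = 1 - u2                    # player 1 keeps the rest of the dollar
print(round(u1, 4), round(u2, 4))  # 0.505 0.495
```

Note that as δ → 1 the split converges to (1 − (1 − s), 1 − s) = (s, 1 − s): patient players effectively bargain in the shadow of the exogenous settlement.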
27
Repeated Games
ECON2112: Week 5
DJ Thornton
1
Table of contents
1. Introduction
2
Introduction
Prisoner’s Dilemma
L R
T 10, 10 0, 11
B 11, 0 3, 3
3
Twice Repeated Prisoner’s Dilemma
4
Twice Repeated Prisoner’s Dilemma
L R
1. Players play T 10, 10 0, 11
B 11, 0 3, 3
4
Twice Repeated Prisoner’s Dilemma
L R
1. Players play T 10, 10 0, 11
B 11, 0 3, 3
L R
3. Players play T 10, 10 0, 11
B 11, 0 3, 3
4
Twice Repeated Prisoner’s Dilemma
What is the subgame perfect equilibrium of the twice repeated Prisoner’s Dilemma?
5
Twice Repeated Prisoner’s Dilemma
What is the subgame perfect equilibrium of the twice repeated Prisoner’s Dilemma?
• It does not matter what was played in the first stage, we have a subgame that
resembles the prisoners dilemma.
• Prisoners dilemma:
L R
T 10, 10 0, 11
B 11, 0 3, 3
5
Twice Repeated Prisoner’s Dilemma
What is the subgame perfect equilibrium of the twice repeated Prisoner’s Dilemma?
• It does not matter what was played in the first stage, we have a subgame that
resembles the prisoners dilemma.
L R
T 20, 20 10, 21
B 21, 10 13, 13
What is the subgame perfect equilibrium of the twice repeated Prisoner’s Dilemma?
• It does not matter what was played in the first stage, we have a subgame that
resembles the prisoners dilemma.
L R
T 10, 21 0, 22
B 11, 11 3, 14
What is the subgame perfect equilibrium of the twice repeated Prisoner’s Dilemma?
• It does not matter what was played in the first stage, we have a subgame that
resembles the prisoners dilemma.
L R
T 21, 10 11, 11
B 22, 0 14, 3
What is the subgame perfect equilibrium of the twice repeated Prisoner’s Dilemma?
• It does not matter what was played in the first stage, we have a subgame that
resembles the prisoners dilemma.
L R
T 13, 13 3, 14
B 14, 3 6, 6
• Now go to the first stage and solve the game obtained by replacing each subgame with its corresponding equilibrium payoff:
      L        R
T    13, 13   3, 14
B    14, 3    6, 6
• Therefore, in the twice repeated Prisoner's Dilemma players play (B, R) in both stages.
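The backward-induction argument above can be checked mechanically: solve the second-stage subgame, add its equilibrium payoff to every cell of the stage game, and solve again. A small sketch (the helper names are mine):

```python
# Stage-game payoffs: key = (row, column), value = (player 1, player 2).
STAGE = {('T', 'L'): (10, 10), ('T', 'R'): (0, 11),
         ('B', 'L'): (11, 0),  ('B', 'R'): (3, 3)}

def pure_nash(game):
    """Return all pure-strategy Nash equilibria of a two-player game dict."""
    rows = {r for r, _ in game}
    cols = {c for _, c in game}
    eqs = []
    for r, c in game:
        best_row = all(game[(r, c)][0] >= game[(rr, c)][0] for rr in rows)
        best_col = all(game[(r, c)][1] >= game[(r, cc)][1] for cc in cols)
        if best_row and best_col:
            eqs.append((r, c))
    return eqs

# Second stage: whatever happened before, the subgame equilibrium is the
# stage-game equilibrium.
second = pure_nash(STAGE)
cont = STAGE[second[0]]  # continuation payoff (3, 3)

# First stage: add the continuation payoff to every cell and solve again.
first_game = {k: (v[0] + cont[0], v[1] + cont[1]) for k, v in STAGE.items()}
first = pure_nash(first_game)
print(second, first)  # (B, R) in both stages
```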
Finitely Repeated Prisoner’s
Dilemma
Finitely Repeated Prisoner’s Dilemma: T repetitions
7
Finitely Repeated Prisoner’s Dilemma: T repetitions
7
Finitely Repeated Prisoner’s Dilemma: T repetitions
• Go to the T th stage: Whatever has been played does not matter. The Nash
equilibrium of the subgame is (B, R).
7
Finitely Repeated Prisoner’s Dilemma: T repetitions
• Go to the T th stage: Whatever has been played does not matter. The Nash
equilibrium of the subgame is (B, R).
• Go to the (T 1)th stage: Whatever has been played does not matter, and in the
next stage players will always play (B, R). The Nash equilibrium of the subgame is
(B, R).
7
Finitely Repeated Prisoner’s Dilemma: T repetitions
• Go to the T th stage: Whatever has been played does not matter. The Nash
equilibrium of the subgame is (B, R).
• Go to the (T 1)th stage: Whatever has been played does not matter, and in the
next stage players will always play (B, R). The Nash equilibrium of the subgame is
(B, R).
• [. . .]
7
Finitely Repeated Prisoner’s Dilemma: T repetitions
• Go to the T th stage: Whatever has been played does not matter. The Nash
equilibrium of the subgame is (B, R).
• Go to the (T 1)th stage: Whatever has been played does not matter, and in the
next stage players will always play (B, R). The Nash equilibrium of the subgame is
(B, R).
• [. . .]
7
Finitely Repeated Prisoner’s Dilemma: T repetitions
• Go to the T th stage: Whatever has been played does not matter. The Nash
equilibrium of the subgame is (B, R).
• Go to the (T − 1)th stage: Whatever has been played does not matter, and in the
next stage players will always play (B, R). The Nash equilibrium of the subgame is
(B, R).
• [. . .]
Repeated Games
• No matter how often this game is repeated (as long as this is a finite number of
times).
• There is only one equilibrium: Play the one-shot equilibrium at every stage.
• The players know that they will always play the one-shot equilibrium in the last
stage.
• The second to the last round will be treated as a single game where they play the
one-shot equilibrium.
• etc.
• In most actual situations one cannot exclude the possibility of meeting the
opponents once more.
Infinitely Repeated Prisoner’s Dilemma
• We will solve this by discounting the future. Let δ ∈ [0, 1) be the discount factor.
Aside: Sum of a Geometric Series
Geometric Series
The sum of a geometric series is computed as follows:
M = 1 + δ + δ² + δ³ + ... + δᵗ + ...
  = 1 + δ(1 + δ + δ² + δ³ + ... + δᵗ + ...)
  = 1 + δM
⟹ M = 1/(1 − δ).
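The identity M = 1/(1 − δ) is easy to sanity-check numerically by truncating the series (names are mine):

```python
def truncated_geometric_sum(delta, terms):
    """Sum 1 + delta + delta**2 + ... + delta**(terms - 1)."""
    return sum(delta ** t for t in range(terms))

delta = 0.9
approx = truncated_geometric_sum(delta, 1000)
exact = 1 / (1 - delta)
print(approx, exact)  # the truncation error is delta**1000 / (1 - delta)
```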
Infinitely Repeated Prisoner’s Dilemma
• For that we need to specify what would happen if a player deviates, i.e. does not
cooperate in some stage.
12
Infinitely Repeated Prisoner's Dilemma
Players play infinitely many times:
      L        R
T    10, 10   0, 11
B    11, 0    3, 3
• If players cooperate in every stage, the total payoff for each of them is 10/(1 − δ).
• Consider the following strategy profile: play (T , L) for ever unless a player deviates; if a player deviates, play (B, R) for ever.
• If nobody deviates, payoffs are 10/(1 − δ) for each player.
Infinitely Repeated Prisoner's Dilemma
• No player wants to deviate as long as
11 + 3δ/(1 − δ) ≤ 10/(1 − δ)
⟺ δ ≥ 1/8
• If the discount factor is strictly larger than 1/8 we do not need to punish the deviator for ever.
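The δ ≥ 1/8 threshold can be verified by comparing the two discounted payoff streams directly. A sketch (helper names are mine):

```python
def discounted(today, per_period, delta, horizon=10_000):
    """Payoff today plus a constant per-period payoff discounted from t=1 on."""
    tail = sum(per_period * delta ** t for t in range(1, horizon))
    return today + tail

def deviation_profitable(delta):
    cooperate = discounted(10, 10, delta)  # 10 in every period
    deviate = discounted(11, 3, delta)     # 11 once, then punished with 3 for ever
    return deviate > cooperate

print(deviation_profitable(0.05))  # below 1/8: deviating pays
print(deviation_profitable(0.20))  # above 1/8: cooperation is sustained
```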
Infinitely Repeated Prisoner’s Dilemma
L R
Players play infinitely many times: T 10, 10 0, 11
B 11, 0 3, 3
Note:
• If a player does not want to deviate in the first stage, then he does not want to
deviate in any stage.
• In the described equilibrium we observe neither deviations nor punishments.
• This is because punishments are designed so that no player deviates.
• Cooperation and then punishment if somebody deviates is one equilibrium. Note
that players playing (B, R) for ever is also a subgame perfect equilibrium.
• (In general, in any finitely or infinitely repeated game, playing the one-shot
equilibrium in every stage is always a SPE.)
Finitely repeated (modified) Prisoner's Dilemma
• It has been argued that the Prisoner's Dilemma gives a distorted image of repeated games.
• The unique Nash equilibrium payoff coincides with the payoff that both players would obtain if they want to punish each other.
• In other words: the only way of punishing your opponent is to play the unique one-shot Nash equilibrium.
Repeated Games
      L        R       RR
T    10, 10   0, 11   −2, 0
B    11, 0    3, 3    −1, 0
BB   0, −2    0, −1   0, 0
• The modified game now has 3 Nash equilibria: (B, R), (BB, RR), and (1/4 B + 3/4 BB, 1/4 R + 3/4 RR).
• For the purposes of the example, let us focus on the two pure Nash equilibria.
Can we sustain cooperation, (T , L), in the first stage in a SPE?
Players play twice:
      L        R       RR
T    10, 10   0, 11   −2, 0
B    11, 0    3, 3    −1, 0
BB   0, −2    0, −1   0, 0
Can we sustain cooperation, (T , L), in the first T − 1 stages?
Players play T times:
      L        R       RR
T    10, 10   0, 11   −2, 0
B    11, 0    3, 3    −1, 0
BB   0, −2    0, −1   0, 0
21
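For the twice-played modified game, first-stage cooperation can be checked directly: play (T, L) in stage 1, then reward with the (B, R) equilibrium and punish any deviation with the (BB, RR) equilibrium. The sketch below uses the sign-corrected payoffs and names of my choosing:

```python
# Modified stage game: key = (row, column), value = (player 1, player 2).
GAME = {('T', 'L'): (10, 10), ('T', 'R'): (0, 11),  ('T', 'RR'): (-2, 0),
        ('B', 'L'): (11, 0),  ('B', 'R'): (3, 3),   ('B', 'RR'): (-1, 0),
        ('BB', 'L'): (0, -2), ('BB', 'R'): (0, -1), ('BB', 'RR'): (0, 0)}

REWARD = GAME[('B', 'R')]    # stage-2 equilibrium after cooperation: (3, 3)
PUNISH = GAME[('BB', 'RR')]  # stage-2 equilibrium after a deviation: (0, 0)

def total_payoff(stage1, player):
    """Two-stage payoff under the proposed strategy profile."""
    stage2 = REWARD if stage1 == ('T', 'L') else PUNISH
    return GAME[stage1][player] + stage2[player]

coop = total_payoff(('T', 'L'), 0)  # 10 + 3 = 13
best_dev_1 = max(total_payoff((r, 'L'), 0) for r in ('B', 'BB'))
best_dev_2 = max(total_payoff(('T', c), 1) for c in ('R', 'RR'))
print(coop, best_dev_1, best_dev_2)  # cooperation beats any one-shot deviation
```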
Repeated Games (Additional Examples)
ECON2112: Week 5
DJ Thornton
1
Table of contents
3. Environmental Agreements
4. International Trade
2
Repeated Bertrand Competition
Repeated Bertrand Competition
• For a given pi and pj (j ≠ i), demand for firm i's product is:
Q = a − pi        if pi < pj
Q = (a − pi )/2   if pi = pj
Q = 0             if pi > pj
• Monopoly price is pm = (a + c)/2
• Yes, if punishment and rewards are appropriately designed and if players are
patient.
4
Grim Trigger Strategy
1. A player deviated from the cooperative strategy (p1 , p2 ) = (pm , pm ) at some time prior to t.
• If player 2 observes that player 1 has deviated from the cooperative strategy, player 2 plays p2 = c in all future periods. A mutual best response from player 1 is to also play p1 = c in all future periods.
• If neither player has deviated, both players continue to play pm . We need to check this is a mutual best response.
5
Grim Trigger Strategy
• Suppose player 2 plays the trigger strategy and that no player has deviated up until time t. If player 1 plays p1 = pm − ε at time t (for ε arbitrarily small), their profit is
(a − c)²/4 + 0 + 0 + ...
• If δ ≥ 1/2, then the grim trigger strategy is a Nash equilibrium.
• In fact, it is a subgame perfect Nash equilibrium. The two types of subgames are:
• In case 1., continuing to play the grim trigger strategy is a NE in the subgame. In case 2., playing (c, c) in all future stages is a NE in the subgame.
7
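The δ ≥ 1/2 condition follows from comparing half the monopoly profit for ever against grabbing the whole monopoly profit once. A numerical sketch, assuming the two colluding firms split the monopoly profit equally (the demand intercept a and cost c below are illustrative values of my choosing):

```python
def bertrand_collusion_sustainable(delta, a=10.0, c=2.0):
    """Half the monopoly profit each period vs. undercutting once."""
    monopoly_profit = (a - c) ** 2 / 4  # profit at pm = (a + c) / 2
    cooperate = (monopoly_profit / 2) / (1 - delta)  # split for ever
    deviate = monopoly_profit  # undercut once, then zero profit for ever
    return cooperate >= deviate

print(bertrand_collusion_sustainable(0.6))  # above 1/2: sustainable
print(bertrand_collusion_sustainable(0.4))  # below 1/2: not sustainable
```

Note that the (a − c)²/4 factor cancels, which is why the threshold 1/2 does not depend on a or c.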
Repeated Cournot Competition
Repeated Cournot Competition (Exercise)
• Monopoly quantity is qm = (a − c)/2.
• Consider the infinitely repeated game based on this stage game: can the monopoly quantity be sustained in a SPE?
Environmental Agreements
Environmental Agreements
• Net benefit to country i: 40Ei − (Ei + Ej )²/2
       20          10
20    0, 0        350, −50
10    −50, 350    200, 200
• Ei = 20 is a dominant strategy.
• (E1 , E2 ) = (10, 10) can be sustained if δ ≥ 3/7.
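Both claims, that E = 20 is dominant in the stage game and that cooperation needs δ ≥ 3/7, can be verified numerically using the net-benefit formula from the slide (function names are mine):

```python
def net_benefit(e_i, e_j):
    """Country i's stage payoff: 40*Ei - (Ei + Ej)**2 / 2."""
    return 40 * e_i - (e_i + e_j) ** 2 / 2

# E = 20 is a dominant strategy in the stage game:
dominant = all(net_benefit(20, e_j) > net_benefit(10, e_j) for e_j in (10, 20))

# Grim trigger: cooperate at (10, 10) for ever vs. deviate to 20 once,
# then revert to the (20, 20) equilibrium for ever.
def cooperation_sustainable(delta):
    cooperate = net_benefit(10, 10) / (1 - delta)
    deviate = net_benefit(20, 10) + delta * net_benefit(20, 20) / (1 - delta)
    return cooperate >= deviate

print(dominant)                        # True
print(cooperation_sustainable(0.45))   # True: above the 3/7 threshold
print(cooperation_sustainable(0.40))   # False: below the 3/7 threshold
```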
International Trade
Setting up the game
10
Setting up the game
• Write FT (Free Trade) and PT (Prohibitive Tariff) for the strategy of each government.
• Write (qi1 , qi2 ) ∈ [0, ∞)² for the strategy of firm i ∈ {A, B}.
• Payoffs exhibit a tradeoff: profits for firms vs. welfare for governments.
Consider the second stage. Three things could have happened in the first stage:
Now we work out the stage 2 equilibrium for all possible outcomes in stage 1.
12
Subgame perfect equilibrium
• Country 1
• Equilibrium quantities are qA1 = qB1 = 4, profits are πA1 = πB1 = 16.
• Aggregate output: Q1 = 4 + 4 = 8, consumer surplus is CS1 = Q1²/2 = 32.
• Country 2 is identical.
Profits are πA = πA1 + πA2 = 32 and πB = πB1 + πB2 = 32.
Welfare is W1 = CS1 + πA = 64 and W2 = CS2 + πB = 64.
      FT        PT
FT   64, 64    ·, ·
PT   ·, ·      ·, ·
13
Subgame perfect equilibrium
• Country 1
• Equilibrium quantities are qA1 = 6, qB1 = 0, profits are πA1 = 36, πB1 = 0.
• Aggregate output: Q1 = 6 + 0 = 6, consumer surplus is CS1 = Q1²/2 = 18.
14
Subgame perfect equilibrium
• Country 1
• Equilibrium quantities are qA1 = qB1 = 4, profits are πA1 = πB1 = 16.
• Aggregate output: Q1 = 4 + 4 = 8, consumer surplus is CS1 = Q1²/2 = 32.
• Country 2
• Equilibrium quantities are qA2 = 0, qB2 = 6, profits are πA2 = 0, πB2 = 36.
• Aggregate output: Q2 = 0 + 6 = 6, consumer surplus is CS2 = Q2²/2 = 18.
FT PT
Prisoner’s dilemma strikes again! FT 64, 64 48, 70
PT 70, 48 54, 54
• SPE outcome is for both countries to choose PT in stage 1, and each firm to sell the monopoly quantity in its own market.
Now suppose countries play the two-stage game infinitely many times. Can they come
to a (tacit) free-trade agreement?
Check that there is an SPE in which countries play a trigger strategy and check that (FT , FT ) can be sustained for δ ≥ 3/8.
16
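Using the welfare numbers from the table (64 under mutual free trade, 70 from a one-shot deviation to PT, 54 in the tariff equilibrium), the 3/8 threshold can be sketched as follows (the function name is mine):

```python
def free_trade_sustainable(delta):
    """Grim trigger on welfare: (FT, FT) for ever vs. deviating to PT once."""
    cooperate = 64 / (1 - delta)             # welfare 64 in each period
    deviate = 70 + delta * 54 / (1 - delta)  # 70 once, then 54 for ever
    return cooperate >= deviate

print(free_trade_sustainable(0.5))  # True: above 3/8
print(free_trade_sustainable(0.3))  # False: below 3/8
```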
Games of Incomplete Information (Bayesian Games)
ECON2112: Week 7
DJ Thornton
1
Table of contents
2
Incomplete Information
• Static:
• Dynamic:
I don’t know who you really are—though I have some idea and I see what you did.
3
Introduction to incomplete information
A robbery
[Game tree: the first player chooses g or ng; after either choice, R chooses s or n.]
• Unique SPNE: (g , n, s)
A robbery
[The same game tree, shown twice side by side.]
From imperfect to incomplete information
A matter of language
• E.g. your grade is either 85 or 35. The lecturer knows it (it is her private
information), but it’s not really her type
8
Static games of incomplete information
Static games of incomplete information
• We begin with static games; normal form representation will be our focus for now
• What do we need?
• Players
• Types for each player, representing all possible different information they can privately know and that affects payoffs (this way includes "states")
• Payoffs that depend both on actions and types (potentially on the type of other players)
• In static games, this complication arises only if my type tells me something about
the other player’s type:
If good lecturers have good students, then if I am a good lecturer I must believe...
10
Prior and beliefs (technical note)
• By Bayes’ rule the belief pi (t i | ti ) that player i of type ti has about i’s type is
given by
p (t i , ti )
pi (t i | ti ) =
p (ti )
Pr (ti | t i ) p (t i )
=P
z2T i Pr (ti | z) p (z)
11
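The formula is just conditioning a joint prior on one's own type. A toy sketch with two types per player, where the joint prior is illustrative (it echoes the "good lecturers have good students" correlation, but the numbers are mine):

```python
# Joint prior over (t1, t2): correlated types.
PRIOR = {('g', 'g'): 0.4, ('g', 'b'): 0.1,
         ('b', 'g'): 0.1, ('b', 'b'): 0.4}

def belief(t1, t2):
    """Player 1 of type t1's belief that player 2 has type t2 (Bayes' rule)."""
    marginal = sum(p for (a, _), p in PRIOR.items() if a == t1)
    return PRIOR[(t1, t2)] / marginal

print(belief('g', 'g'))  # 0.4 / 0.5 = 0.8: knowing my type shifts my belief
```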
Static games of incomplete info: some examples
• The Prisoner’s Dilemma, with two possible types for one or both players
• If ti = s, payoffs are as normal. If ti = n, payoffs are such that the prisoner prefers to deny if the other is denying
12
Static games of incomplete info: some examples
• Battle of Sexes, with two possible types for one or both players
13
Static games of incomplete info: some examples
• Bertrand competition, with two possible types for one or both players
• One’s type is private info, prior distribution over types is common knowledge
14
Static games of incomplete info: some examples
• If θ = h, market demand is Q = ah − P.
15
Let’s go for a drink! (version 1)
Let’s go for a drink! (version 1)
• Then I need types: each player is either fun (type 1) or boring (type 0)—private
info!
1
• Pr (1) = Pr (0) = 2
• If both choose in, they get payo↵ equal to the other player’s type
17
Let’s go for a drink! (version 1)
18
Better!
       in         out
in    t2 , t1    0, 1/3
out   1/3, 0     1/3, 1/3
19
Strategies
• That is, we reduce the extensive form and define different strategies for each different player-type
20
Bayesian Nash Equilibrium
• The expectation is taken with respect to the set of types of all other players:
E [ui (ai , s−i (t−i ) | t) | ti ] ≡ Σ_{t−i ∈ T−i} ui (s1 (t1 ), . . . , si−1 (ti−1 ), ai , si+1 (ti+1 ), . . . , sn (tn ) | t) pi (t−i | ti )
21
Bayesian Nash Equilibrium
• That is, each player-type maximises the expected payoff conditional on her own type.
• In a BNE, no player wants to change their strategy, even if the change involves only one action by one type.
22
Let’s go for a drink (version 1)
       in         out
in    t2 , t1    0, 1/3
out   1/3, 0     1/3, 1/3
• And what about other symmetric strategy profiles? (symmetric means: all players
do the same—only allowed/interesting when the game is symmetric: if we invert
the names of the players, the game remains the same)
23
Prove/disprove the statement
• There exists a symmetric Bayesian Nash equilibrium in which all players play out
       in         out
in    t2 , t1    0, 1/3
out   1/3, 0     1/3, 1/3
24
Prove/disprove the statement
• There exists a symmetric Bayesian Nash equilibrium in which fun players play in
and boring players play out
       in         out
in    t2 , t1    0, 1/3
out   1/3, 0     1/3, 1/3
25
Prove/disprove the statement
• There exists a symmetric Bayesian Nash equilibrium in which fun players play out
and boring players play in
       in         out
in    t2 , t1    0, 1/3
out   1/3, 0     1/3, 1/3
26
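The prove/disprove statements above can be settled by brute force: a symmetric strategy assigns in or out to each type, and we check whether any single type gains by switching. A sketch with payoffs read off the version-1 matrix (helper names are mine):

```python
import itertools

TYPES = (0, 1)  # boring, fun; each with probability 1/2
OUT = 1 / 3     # payoff to a player who stays out

def payoff_in(opp_strategy):
    """Expected payoff of playing 'in' against an opponent using opp_strategy.

    If the opponent comes in, I get the opponent's type; if the opponent
    stays out, I get 0.
    """
    return sum(0.5 * (t if opp_strategy[t] == 'in' else 0) for t in TYPES)

def is_symmetric_bne(strategy):
    ev_in = payoff_in(strategy)
    for t in TYPES:
        mine = ev_in if strategy[t] == 'in' else OUT
        if mine < max(ev_in, OUT) - 1e-12:  # some type has a strict deviation
            return False
    return True

for profile in itertools.product(('in', 'out'), repeat=2):
    strategy = dict(zip(TYPES, profile))
    print(strategy, is_symmetric_bne(strategy))
```

Under these payoffs only "everyone in" and "everyone out" survive; the two type-separating profiles give some type a strict deviation.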
Let’s go for a drink! (version 2)
Let’s go for a drink! (version 2)
27
Let’s go for a drink! (version 2)
• Then I need types: each player is either fun (type 1) or boring (type 0)—private
info!!!
1
• Pr (1) = Pr (0) = 2
27
Let’s go for a drink! (version 2)
• Then I need types: each player is either fun (type 1) or boring (type 0)—private
info!!!
1
• Pr (1) = Pr (0) = 2
27
Let’s go for a drink! (version 2)
• Then I need types: each player is either fun (type 1) or boring (type 0)—private
info!!!
1
• Pr (1) = Pr (0) = 2
• If both choose in, they get payo↵ equal to the product of their types
28
Better!
       in              out
in    t1 t2 , t2 t1   0, 1/3
out   1/3, 0          1/3, 1/3
29
Let’s go for a drink (version 2)
       in              out
in    t1 t2 , t2 t1   0, 1/3
out   1/3, 0          1/3, 1/3
30
Prove/disprove the statement
• There exists a symmetric Bayesian Nash equilibrium in which all players play out
       in              out
in    t1 t2 , t2 t1   0, 1/3
out   1/3, 0          1/3, 1/3
31
Prove/disprove the statement
• There exists a symmetric Bayesian Nash equilibrium in which fun players play in
and boring players play out
       in              out
in    t1 t2 , t2 t1   0, 1/3
out   1/3, 0          1/3, 1/3
32
Prove/disprove the statement
• There exists a symmetric Bayesian Nash equilibrium in which fun players play out
and boring players play in
       in              out
in    t1 t2 , t2 t1   0, 1/3
out   1/3, 0          1/3, 1/3
33
Applications of Bayesian Nash Equilibrium
ECON2112: Week 8
DJ Thornton
1
Table of contents
1. Application: Auctions
2
Application: Auctions
An auction
• Two bidders, i = 1, 2
• i’s valuation is vi
3
An auction
• Actions: bi ∈ R+ ≡ [0, ∞)
• Types: Ti = [0, 1]
• Payoffs:
ui (bi , bj | vi ) = vi − bi         if bi > bj ;
ui (bi , bj | vi ) = (vi − bi ) /2   if bi = bj ;
ui (bi , bj | vi ) = 0               otherwise.
Strategies
• Recall that a pure strategy for player i in a static Bayesian game is a function that
for each possible type gives an action
• In our case we can write bi (vi ), i.e. the bid that player i would choose if type vi is drawn
5
(Bayesian Nash) Equilibrium
(Bayesian Nash) Equilibrium
• Suppose the opponent bids bj = α + vj . Player i solves
max_{bi} (vi − bi ) Pr (bi > α + vj ) + (1/2)(vi − bi ) Pr (bi = α + vj )
• Since vj is uniformly distributed on [0, 1], the probability that it is exactly equal to any number is 0:
max_{bi} (vi − bi ) Pr (bi > α + vj )
(Bayesian Nash) Equilibrium
• Notice that Pr (bi > α + vj ) = Pr (vj < bi − α) = bi − α, so the problem becomes
max_{bi} (vi − bi ) (bi − α)
F.O.C.: bi = (vi + α)/2
(Bayesian Nash) Equilibrium
• We get bi (vi ) = vi /2 + α/2
• How do we find α?
• Notice that no bidder should ever bid more than her valuation: bi (vi ) ≤ vi
9
• The last bidder to call out a bid wins, and pays his bid
• If the increment by which you can outbid the last person is infinitesimally small, who wins? What fraction of their valuation do they end up paying?
10
Conclusions
• As long as you can call out a higher bid and stay under/at your valuation, you
would do it (bidding exactly your valuation, you are indi↵erent)
• Each bidder drops out when bidding reaches their valuation, so the winner is the one
with the highest valuation, and will pay the second highest valuation
11
Conclusions
• What about the case of two bidders with vi ⇠ U (0, 1) competing via first-price
auction?
• Each bidder bids as if he has the highest valuation in the room (if he’s wrong, it
won’t matter anyway)
• Each bidder bids the expected second highest valuation in the room,
conditional on his being the highest
• When there are two bidders, that is vi /2. If there are 3 bidders, it is (2/3)vi . If there are n bidders, it is ((n − 1)/n) vi
• This is a general result in auction theory known as the revenue equivalence theorem
12
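Revenue equivalence is easy to check numerically for two uniform bidders: in the first-price auction each bids vi/2 and the seller collects max(v1, v2)/2; in the (second-price) English auction the seller collects min(v1, v2). Both average 1/3. A deterministic grid sketch (names mine):

```python
def expected_revenues(n=400):
    """Average first-price and second-price revenue over a grid of (v1, v2)."""
    step = 1.0 / n
    vals = [(i + 0.5) * step for i in range(n)]  # midpoints of [0, 1]
    fp = sum(max(v1, v2) / 2 for v1 in vals for v2 in vals) / n ** 2
    sp = sum(min(v1, v2) for v1 in vals for v2 in vals) / n ** 2
    return fp, sp

fp, sp = expected_revenues()
print(fp, sp)  # both close to 1/3
```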
Application: Another interpretation of mixed strategies
Mixed strategies revisited
Pat
Opera Fight
Opera 2, 1 0, 0
Chris
Fight 0, 0 1, 2
13
Mixed strategies revisited
• A player’s mixed strategy might represent her opponent’s uncertainty about her
choice of a pure strategy
Mixed strategies revisited
Pat
Opera Fight
Opera 2 + tc , 1 0, 0
Chris
Fight 0, 0 1, 2 + tp
• tc and tp are uniformly distributed over [0, x] and we will look at x → 0 (hence, a slight perturbation of the payoffs, not a big one)
15
Threshold strategies
• Notice that this implies that each plays their favourite option with probability (x − c) /x and (x − p) /x respectively
• That is, as incomplete information vanishes, the BNE in which each player-type takes
a single action for sure converges to a NE in which each player plays a mixed strategy
16
Expected payoffs
Pr (most pref. option) = 1 − p/x = 1 − (−3 + √(9 + 4x)) / (2x)
• As x → 0, this converges to 2/3, the probability in the mixed-strategy Nash equilibrium.
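The threshold p above is the root of the quadratic p² + 3p − x = 0 (an assumption of mine, consistent with the displayed expression). The sketch below checks that the resulting probability of playing the favourite option tends to 2/3 as x → 0:

```python
import math

def prob_favourite(x):
    """1 - p/x, where p = (-3 + sqrt(9 + 4x)) / 2 is the belief threshold."""
    p = (-3 + math.sqrt(9 + 4 * x)) / 2
    return 1 - p / x

for x in (1.0, 0.1, 0.001):
    print(x, prob_favourite(x))  # approaches 2/3 as the perturbation vanishes
```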
Dynamic Games of Incomplete Information
ECON2112: Week 9
DJ Thornton
1
Table of contents
2
Introduction and a new solution concept
Bayesian Nash equilibrium
3
Plan of the Lecture
Throw away the wallet
[Game tree: Y chooses g (payoffs (1, 3)), tf or tw; R then chooses n or s without observing which of tf and tw was chosen.]
Throw away the wallet
n s
g 1, 3 1, 3
tf 2, 2 0, 1
tw 1, 2 0, 0
• There are several ways to eliminate this equilibrium; we focus on one that will help us introduce a new equilibrium concept
6
Beliefs
[Game tree repeated.]
• Suppose R’s info set is reached—even if it was not on the equilibrium path,
shouldn’t she “believe” something about the relative probability of the two nodes?
7
Beliefs
8
Beliefs
Y g (1, 3)
tf tw
n s n s
9
Perfect Bayesian Equilibrium (PBE)
• A PBE is
10
PBE requirements
Requirements 1 and 2
Requirement 1 [R1] At any information set, the player with the move must have a
belief about which node in the information set has been reached.
Requirement 2 [R2] Given their beliefs, the players’ strategies must be sequentially
rational: at each information set the strategy of the player with the move
from that node onward must be optimal given the players’ beliefs at that
information set and the other players’ strategies from that node onward.
11
Sequential rationality
[Game tree repeated.]
• For any belief that R can hold, n is the only sequentially rational strategy
12
Full vs. empty wallet
[Game tree: nature draws the wallet, full (F) with probability p and empty (E) with probability 1 − p; each type of Y chooses g or ng; after ng, R chooses s or n.]
And off the path? (Requirement 4)
• See pp. 180 and 181 in the textbook for a clear example; the meaning of "where possible" might depend on circumstances
[R3b] At any information set, beliefs are determined by Bayes' rule and players' equilibrium strategies where possible
18
Solving full vs. empty wallet
Full vs. empty wallet
1. Y 's seq. rationality
2. R's seq. rationality
3. Consistency
20
Some notation
• σE : probability of ng if E
• σF : probability of ng if F
• σR : probability of s
Pooling and separating equilibria
22
Prove/disprove the statement
There is a separating equilibrium in which you give the wallet if and only if it is empty.
23
Prove/disprove the statement
2. R's seq. rationality → shoot is seq. rational if Pr (F | ng ) ≥ 2/5
Answer: yes, for example, you always give, the robber shoots if you do not give; if you do not give, the robber believes the wallet is full with probability .7.
Prove/disprove the statement
2. R's seq. rationality → not shoot is seq. rational if p ≤ 2/5
Answer: yes, but only if p ≤ 2/5. In this equilibrium, you never give, the robber never shoots; if you do not give, the robber believes the wallet is full with probability p.
34
Is there any separating equilibrium?
1. If a separating equilibrium exists, then σR = 0:
σR · 0 + (1 − σR ) · 1 ≥ 1 ⟹ σR = 0
(left side: exp. payoff of ng | E; right side: exp. payoff of g | E)
2. If a separating equilibrium exists, then σE > 0 (i.e. when the wallet is empty, you do not give it to the robber with positive probability)
36
Is there any separating equilibrium?
Proof: Suppose R > 0, then you will give the wallet if it is empty:
R · 0 + (1 R) · 1 < 1
|{z}
| {z }
exp. payo↵ of ng |E exp. payo↵ of g |E
contradicting E > 0.
37
Is there any separating equilibrium?
• σE > 0, σF = 1, σR = 0
• Consistency:
Pr (F | ng ) = σF p / (σF p + σE (1 − p)) = p / (p + σE (1 − p))
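The consistency condition is just Bayes' rule applied on the ng branch. A sketch computing Pr(F | ng) for the profile σF = 1, σE > 0 and comparing it to the robber's 2/5 shooting threshold (names and the prior value are mine):

```python
def pr_full_given_ng(p, sigma_e, sigma_f=1.0):
    """Bayes' rule: probability the wallet is full after observing ng."""
    return sigma_f * p / (sigma_f * p + sigma_e * (1 - p))

p = 0.3  # illustrative prior probability that the wallet is full
for sigma_e in (1.0, 0.5, 0.1):
    belief = pr_full_given_ng(p, sigma_e)
    verdict = "shoot rational" if belief >= 2 / 5 else "not shoot rational"
    print(sigma_e, round(belief, 3), verdict)
```

Note how lowering σE (the empty type rarely plays ng) pushes the posterior toward F, which is exactly why the robber's off-path behaviour depends on how the empty type is supposed to play.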