
The Unofficial Solution Manual to

A Primer in Game Theory


by RA Gibbons
Unfinished Draft

Navin Kumar
Delhi School of Economics

This version is an unreleased and unfinished manuscript.

The author can be reached at navin.ksrk@gmail.com

Last Updated: January 20, 2013

Typeset using LaTeX and the Tufte book class.

This work is not subject to copyright. Feel free to reproduce, distribute


or falsely claim authorship of it in part or whole.

This is strictly a beta version. Two thirds of it are missing and there are errors aplenty. You have been warned.

On a more positive note, if you do find an error, please email me at


navin.ksrk@gmail.com, or tell me in person.

- Navin Kumar
Static Games of Complete Information

Answer 1.1 See text.

Answer 1.2 B is strictly dominated by T. Once B is eliminated, C is strictly dominated by R. The strategies (T, M) and (L, R) survive the iterated elimination of strictly dominated strategies. The Nash equilibria are (T, R) and (M, L).

Answer 1.3 For whatever value Individual 1 chooses (denoted by S1), Individual 2's best response is S2 = B2(S1) = 1 − S1. Conversely, S1 = B1(S2) = 1 − S2. We know this because if S2 < 1 − S1, there is money left on the table and Individual 2 could increase his or her payoff by asking for more. If, however, S2 > 1 − S1, Individual 2 earns nothing and can increase his payoff by reducing his demand sufficiently. Thus the Nash equilibria are the pairs with S1 + S2 = 1.

Answer 1.4 The market price of the commodity is determined by the formula P = a − Q, in which Q = q1 + ... + qn. The cost for an individual company is given by Ci = c·qi. The profit made by a single firm is

πi = (p − c)·qi = (a − Q − c)·qi = (a − q1* − ... − qn* − c)·qi

where qj* is the profit-maximizing quantity produced by firm j in equilibrium. This profit is maximized at

dπi/dqi = (a − q1* − ... − qi* − ... − qn* − c) − qi* = 0

⇒ a − q1* − ... − 2·qi* − ... − qn* − c = 0

⇒ a − c = q1* + ... + 2·qi* + ... + qn*

for all i = 1, ..., n. We could solve this system using matrices and Cramer's rule, but a simpler method is to observe that since all firms are symmetric, their equilibrium quantities will be the same, i.e.

q1* = q2* = ... = qi* = ... = qn*

which means the preceding equation becomes

a − c = (n + 1)·qi* ⇒ qi* = (a − c)/(n + 1)

A similar argument applies to all other firms.
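As a quick sanity check (not part of the original solution), the sketch below verifies numerically, for illustrative values of a, c and n, that no firm can gain by deviating unilaterally from qi = (a − c)/(n + 1):

```python
# Minimal numerical check of the symmetric Cournot equilibrium q_i = (a - c)/(n + 1).
# The parameter values below are illustrative assumptions, not taken from the text.
a, c, n = 10.0, 2.0, 5
q_star = (a - c) / (n + 1)

def profit(q_i, q_others_total):
    price = a - q_i - q_others_total
    return (price - c) * q_i

others = (n - 1) * q_star                  # the other firms play the candidate equilibrium
grid = [k * 0.001 for k in range(2001)]    # candidate deviations on [0, 2]
best_reply = max(grid, key=lambda q: profit(q, others))
print(q_star, best_reply)                  # both ~1.333: no profitable unilateral deviation
```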

Answer 1.5 Let qm be the amount produced by a monopolist, i.e. qm = (a − c)/2. Thus, if the two firms were colluding, they'd each produce

q1m = q2m = qm/2 = (a − c)/4

In such a scenario, the price is P = a − Q = a − 2·(a − c)/4 = (a + c)/2, and the profit earned by Firm 1 (and, symmetrically, Firm 2) is

πmm = (P − c)·q1m = (a − (a − c)/2 − c)·q1m = ((a − c)/2)·((a − c)/4) = (a − c)²/8 ≈ 0.13·(a − c)²

If both are playing the Cournot equilibrium quantity q1c = q2c = (a − c)/3, the price is P = a − 2·(a − c)/3 = (a + 2c)/3, and the profit earned by Firm 1 (and Firm 2) is

πcc = (P − c)·q1c = ((a + 2c)/3 − c)·((a − c)/3) = ((a − c)/3)·((a − c)/3) = (a − c)²/9 ≈ 0.11·(a − c)²

What if one of the firms (say Firm 1) plays the Cournot quantity and the other plays the monopoly quantity? The price is then P = a − q1c − q2m = a − (a − c)/3 − (a − c)/4 = a − (7/12)·(a − c) = (5a + 7c)/12, so Firm 1's profit is

π1cm = (P − c)·q1c = ((5a + 7c)/12 − c)·((a − c)/3) = (5/36)·(a − c)² ≈ 0.14·(a − c)²

and Firm 2's profit is

π2cm = (P − c)·q2m = ((5a + 7c)/12 − c)·((a − c)/4) = (5/48)·(a − c)² ≈ 0.10·(a − c)²

For notational simplicity, let

α ≡ (a − c)²

The profits are reversed when the production choices are reversed. Thus, the payoffs are:

                            Player 2
                  qm                 qc
        qm   0.13α, 0.13α      0.10α, 0.14α
Player 1
        qc   0.14α, 0.10α      0.11α, 0.11α

As you can see, we have a classic Prisoner's Dilemma: regardless of the other firm's choice, each firm maximizes its payoff by choosing to produce the Cournot quantity. Each firm has a strictly dominated strategy (qm/2), and both are worse off in equilibrium (where they make 0.11·(a − c)² in profits) than they would have been had they cooperated by producing qm together (which would have earned them 0.13·(a − c)² each).
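The four profit cells can be reproduced numerically; the following sketch uses an illustrative normalization a = 1, c = 0 (an assumption, not from the text):

```python
# Recompute the collusion/Cournot payoff table of Answer 1.5 for a = 1, c = 0.
a, c = 1.0, 0.0
q_m = (a - c) / 4          # half of the monopoly quantity
q_c = (a - c) / 3          # Cournot quantity

def profit(q1, q2):
    return (a - q1 - q2 - c) * q1   # firm 1's profit

print(profit(q_m, q_m))  # 0.125  = (a-c)^2/8   ~ 0.13
print(profit(q_c, q_c))  # 0.1111 = (a-c)^2/9   ~ 0.11
print(profit(q_c, q_m))  # 0.1389 = 5(a-c)^2/36 ~ 0.14
print(profit(q_m, q_c))  # 0.1042 = 5(a-c)^2/48 ~ 0.10
```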

Answer 1.6 Price is determined by P = a − Q, where Q = q1 + q2. Thus the profit of Firm 1 is given by:

π1 = (P − c1)·q1 = (a − Q − c1)·q1 = (a − q1 − q2 − c1)·q1

At the maximum level of profit,

dπ1/dq1 = (a − c1 − q1 − q2) + q1·(−1) = 0 ⇒ q1 = (a − c1 − q2)/2

And by a similar deduction,

q2 = (a − c2 − q1)/2

Plugging the second equation into its predecessor,

q1 = (a − c1 − (a − c2 − q1)/2)/2 ⇒ q1 = (a − 2c1 + c2)/3

and, by a similar deduction,

q2 = (a − 2c2 + c1)/3

Now,

2c2 > a + c1 ⇒ 0 > a − 2c2 + c1 ⇒ 0 > (a − 2c2 + c1)/3 ⇒ 0 > q2 ⇒ q2 = 0

since quantities cannot be negative. Thus, under these conditions, a sufficient difference in costs drives one of the firms to shut down.

Answer 1.7 We know that

qi = a − pi          if pi < pj,
     (a − pi)/2      if pi = pj,
     0               if pi > pj.

We must now prove that pi = pj = c is the Nash equilibrium of this game. To this end, let's consider the alternatives exhaustively.

If pi > pj = c, then qi = 0 and πj = 0. In this scenario, Firm j can increase profits by charging pj + ε, where ε > 0 and ε < pi − pj. Thus this scenario is not a Nash equilibrium.

If pi > pj > c, then qi = 0 and πi = 0, and Firm i can make positive profits by charging pj − ε > c. Thus, this cannot be a Nash equilibrium.

If pi = pj > c, then πi = (pi − c)·(a − pi)/2. Firm i can charge pi − ε such that pj > pi − ε > c, grab the entire market and earn a larger profit, since for ε small enough

(pi − ε − c)·(a − pi + ε) > (pi − c)·(a − pi)/2

Therefore, this is not a Nash equilibrium.

If pi = pj = c, then πi = πj = 0. Neither firm has any reason to deviate: if Firm i were to reduce pi, πi would become negative. If Firm i were to raise pi, qi = πi = 0 and it would be no better off. Thus Firm i (and, symmetrically, Firm j) have no incentive to deviate, making this a Nash equilibrium.
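A discretized version of the game makes both steps easy to verify. The sketch below uses assumed values a = 10, c = 2: undercutting any common price above c is profitable, while no deviation from pi = pj = c pays:

```python
# Bertrand duopoly on a price grid: profit of firm i given the two prices.
a, c = 10.0, 2.0

def profit_i(p_i, p_j):
    if p_i > p_j:
        return 0.0
    q = (a - p_i) / 2 if p_i == p_j else (a - p_i)
    return (p_i - c) * q

# At a common price above cost, shaving the price slightly raises profit:
p = 6.0
print(profit_i(p, p), profit_i(p - 0.01, p))          # 8.0 < 15.99...

# At p_i = p_j = c, no deviation is profitable:
prices = [c + 0.01 * k for k in range(801)]
print(max(profit_i(p_dev, c) for p_dev in prices))    # 0.0
```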

Answer 1.8 The share of votes received by a candidate is given by

Si = xi + (xj − xi)/2             if xi < xj,
     1/2                          if xi = xj,
     (1 − xi) + (xi − xj)/2       if xi > xj.

We aim to prove that the Nash equilibrium is (1/2, 1/2). Let us exhaustively consider the alternatives.

Suppose xi = 1/2 and xj > 1/2, i.e. one candidate is a centrist while the other (Candidate j) isn't. In such a case, Candidate j can increase his share of the vote by moving to the left, i.e. reducing xj. If xj < 1/2, Candidate j can increase his share of the vote by moving to the right.

Suppose xi > xj > 1/2; Candidate i can gain a larger share by moving to a new position between 1/2 and xj. Thus this is not a Nash equilibrium.

Suppose xi = xj = 1/2; the shares of i and j are both 1/2. If Candidate i were to deviate to a point (say) xi > 1/2, his share of the vote would decline. Thus (1/2, 1/2) is the unique Nash equilibrium. This is the famous Median Voter Theorem, used extensively in the study of politics. It explains why, for example, presidential candidates in the US veer sharply to the center as election day approaches.
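A small grid check (with an assumed grid of platforms) confirms that no deviation from 1/2 pays when the opponent stands at 1/2, while any non-median platform can be beaten:

```python
# Vote share of candidate i as a function of the two platforms on [0, 1].
def share_i(x_i, x_j):
    if x_i < x_j:
        return x_i + (x_j - x_i) / 2
    if x_i > x_j:
        return (1 - x_i) + (x_i - x_j) / 2
    return 0.5

grid = [k / 100 for k in range(101)]
# Best reply to an opponent at 1/2: every deviation gives a share below 1/2.
print(max(share_i(x, 0.5) for x in grid if x != 0.5))                      # 0.495
# Against any non-median opponent, some reply beats 1/2, so only (1/2, 1/2) survives.
print(min(max(share_i(x, xj) for x in grid) for xj in grid if xj != 0.5))  # 0.505
```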

Answer 1.9 See text.

Answer 1.10 (a) Prisoner’s Dilemma

Player 2
( p) Mum (1 − p) Fink
(q) Mum −1, −1 −9, 0
Player 1
(1 − q) Fink 0, −9 −6, −6
Prisoner’s Dilemma

In a mixed strategy equilibrium, Player 1 would choose q such that Player 2 is indifferent between Mum and Fink, i.e. the expected payoffs from Mum and Fink must be equal:

−1·q + (−9)·(1 − q) = 0·q + (−6)·(1 − q) ⇒ q = 1.5

This is impossible, since q is a probability.

(b)

                       Player 2
            Left (q0)   Middle (q1)   Right (1 − q0 − q1)
  Up (p)       1, 0        1, 2            0, 1
Player 1
  Down (1 − p) 0, 3        0, 1            2, 0
Figure 1.1.1.

Here, Player 1 must set p so that Player 2 is indifferent between Left, Middle and Right. The payoffs from Left and Middle, for example, have to be equal, i.e.

p·0 + (1 − p)·3 = p·2 + (1 − p)·1
⇒ p = 0.5

Similarly, the payoffs from Middle and Right have to be equal:

2·p + 1·(1 − p) = 1·p + 0·(1 − p)
⇒ p + 1 = p

which no value of p can satisfy. Hence there is no mixed-strategy equilibrium in which Player 2 mixes over all three strategies.
(c)

                       Player 2
              L (q0)    C (q1)    R (1 − q0 − q1)
   T (p0)      0, 4      4, 0       5, 3
Player 1 M (p1)   4, 0      0, 4       5, 3
   B (1 − p0 − p1) 3, 5      3, 5       6, 6
Figure 1.1.4.

In a mixed equilibrium, Player 1 sets p0 and p1 so that Player 2 is indifferent between L, C and R. The payoffs to L and C must, for example, be equal, i.e.

4·p0 + 0·p1 + 5·(1 − p0 − p1) = 0·p0 + 4·p1 + 5·(1 − p0 − p1)
⇒ p0 = p1

Similarly,

0·p0 + 4·p1 + 5·(1 − p0 − p1) = 3·p0 + 3·p1 + 6·(1 − p0 − p1)
⇒ p1 = p0 + 0.5

which violates p0 = p1.

Answer 1.11 This game can be written as

Player 2
L ( q0 ) C ( q1 ) R (1 − q0 − q1 )
T ( p0 ) 2, 0 1, 1 4, 2
Player 1 M ( p1 ) 3, 4 1, 2 2, 3
B (1 − p0 − p1 ) 1, 3 0, 2 3, 0

From Answer 1.2, B is strictly dominated by T and, once B is removed, C is strictly dominated by R. A strictly dominated strategy is never a best response, so in any Nash equilibrium (pure or mixed) we must have 1 − p0 − p1 = 0 and q1 = 0: Player 1 mixes only over T and M, and Player 2 mixes only over L and R.

In a mixed Nash equilibrium, Player 1 sets p0 so that the expected payoffs to Player 2 from L and R are the same, i.e.

E2(L) = E2(R)
⇒ 0·p0 + 4·(1 − p0) = 2·p0 + 3·(1 − p0)
⇒ p0 = 1/3, p1 = 2/3

Similarly, Player 2 sets q0 such that

E1(T) = E1(M)
⇒ 2·q0 + 4·(1 − q0) = 3·q0 + 2·(1 − q0)
⇒ q0 = 2/3, 1 − q0 − q1 = 1/3

At these probabilities both players expect 8/3 from each strategy in their support, while B yields Player 1 only 5/3 and C yields Player 2 only 5/3, so neither dominated strategy is a profitable deviation. The mixed-strategy Nash equilibrium is therefore

(p0, p1, 1 − p0 − p1) = (1/3, 2/3, 0) and (q0, q1, 1 − q0 − q1) = (2/3, 0, 1/3),

in addition to the pure-strategy equilibria (T, R) and (M, L) found in Answer 1.2.
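The sketch below (a verification aid, not part of the original solution) checks the indifference conditions and confirms that the dominated strategies B and C are strictly worse at these probabilities:

```python
# Payoff bimatrix of Answer 1.11: A[i][j] = (player 1's payoff, player 2's payoff)
# for row i in (T, M, B) and column j in (L, C, R).
A = [[(2, 0), (1, 1), (4, 2)],
     [(3, 4), (1, 2), (2, 3)],
     [(1, 3), (0, 2), (3, 0)]]

p = [1/3, 2/3, 0]   # player 1: (T, M, B)
q = [2/3, 0, 1/3]   # player 2: (L, C, R)

# Expected payoff of each pure row against q, and each pure column against p.
rows = [sum(q[j] * A[i][j][0] for j in range(3)) for i in range(3)]
cols = [sum(p[i] * A[i][j][1] for i in range(3)) for j in range(3)]
print(rows)   # [2.666..., 2.666..., 1.666...]: T and M tie, B is strictly worse
print(cols)   # [2.666..., 1.666..., 2.666...]: L and R tie, C is strictly worse
```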

Answer 1.12

( q ) L2 (1 − q ) R2
( p) T1 2, 1 0, 2
(1 − p) B1 1, 2 3, 0

Player 1 will set p such that

E2(L) = E2(R)
⇒ 1·p + 2·(1 − p) = 2·p + 0·(1 − p)
⇒ p = 2/3

Player 2 will set q such that

E1(T) = E1(B)
⇒ 2·q + 0·(1 − q) = 1·q + 3·(1 − q)
⇒ q = 3/4

Answer 1.13

                          (q) Apply to Firm 1    (1 − q) Apply to Firm 2
(p) Apply to Firm 1          ½w1, ½w1                 w1, w2
(1 − p) Apply to Firm 2      w2, w1                   ½w2, ½w2

There are two pure-strategy Nash equilibria: (Apply to Firm 1, Apply to Firm 2) and (Apply to Firm 2, Apply to Firm 1). In a mixed-strategy equilibrium, Player 1 sets p such that Player 2 is indifferent between applying to Firm 1 and applying to Firm 2:

E2(Firm 1) = E2(Firm 2)

⇒ p·½w1 + (1 − p)·w1 = p·w2 + (1 − p)·½w2

⇒ p = (2w1 − w2)/(w1 + w2)

Since 2w1 > w2, the numerator 2w1 − w2 is positive and p > 0. For p < 1 to be true, it must be the case that

(2w1 − w2)/(w1 + w2) < 1 ⇒ ½w1 < w2

which is true. And since the payoffs are symmetric, a similar analysis reveals that

q = (2w1 − w2)/(w1 + w2)
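For concreteness, the indifference condition can be checked with illustrative wages satisfying ½w1 < w2 < 2w1 (the numbers are assumptions, not from the text):

```python
# Verify p = (2*w1 - w2)/(w1 + w2) makes worker 2 indifferent between the two firms.
w1, w2 = 3.0, 2.0            # illustrative wages satisfying w1/2 < w2 < 2*w1
p = (2 * w1 - w2) / (w1 + w2)

# Worker 2's expected wage from applying to firm 1 and to firm 2,
# when worker 1 applies to firm 1 with probability p.
apply_firm1 = p * (w1 / 2) + (1 - p) * w1
apply_firm2 = p * w2 + (1 - p) * (w2 / 2)
print(p, apply_firm1, apply_firm2)   # p = 0.8, both expected wages equal 1.8
```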

Answer 1.14
Dynamic Games of Complete Information

Answer 2.1 The total family income is given by

IC ( A) + IP ( A)

This is maximized at
d(IC(A) + IP(A))/dA = 0 ⇒ dIC(A)/dA = −dIP(A)/dA
The utility function of the parents is given by

V ( IP − B) + kU ( IC + B)

This is maximized at
k·dU(IC + B)/dB + dV(IP − B)/dB = 0

⇒ k·dU(IC + B)/d(IC + B) · d(IC + B)/dB + dV(IP − B)/d(IP − B) · d(IP − B)/dB = 0

⇒ kU′(IC + B) − V′(IP − B) = 0

⇒ V′(IP − B*) = kU′(IC + B*)

where B* is the maximizing level of the bequest. We know it exists because (a) there are no restrictions on B and (b) V(·) and U(·) are concave and increasing.
The child's utility is given by U(IC(A) + B*(A)). This is maximized at

dU(IC(A) + B*(A))/dA = 0

⇒ U′(IC(A) + B*(A)) · (dIC(A)/dA + dB*(A)/dA) = 0

⇒ dIC(A)/dA + dB*(A)/dA = 0 ⇒ IC′(A) = −B*′(A)
We now only have to prove that B*′(A) = IP′(A). Since

dV(IP(A) − B*(A))/dA = 0

⇒ V′(IP(A) − B*(A)) · (dIP(A)/dA − dB*(A)/dA) = 0

⇒ IP′(A) = B*′(A)

Combining the two results gives IC′(A) = −IP′(A), which is exactly the condition that maximizes total family income: the child chooses the family-income-maximizing action.

Answer 2.2 The utility function of the parent is given by V(IP − B) + k[U1(IC − S) + U2(B + S)]. This is maximized at

d{V(IP − B) + k[U1(IC − S) + U2(B + S)]}/dB = 0

⇒ −V′ + k[U1′·(−S′B) + U2′·(S′B + 1)] = 0 ⇒ V′ = k[U2′·(S′B + 1) − U1′·S′B]

The utility of the child is given by U1(IC − S) + U2(S + B). This is maximized at:

d[U1(IC − S) + U2(S + B)]/dS = 0 ⇒ U1′ = U2′·(1 + B′)

Total utility is given by

V(IP − B) + k(U1(IC − S) + U2(B + S)) + U1(IC − S) + U2(B + S)
= V(IP − B) + (1 + k)(U1(IC − S) + U2(B + S))

This is maximized (w.r.t. S) at:

V′·(−B′S) + (1 + k)·[U1′·(−1) + U2′·(1 + B′S)] = 0

⇒ U1′ = U2′·(1 + B′S) − V′B′S/(1 + k)

as opposed to U1′ = U2′·(1 + B′S), which is the equilibrium condition. Since V′B′S/(1 + k) > 0, the equilibrium U1′ is 'too high', which means that S, the level of savings, must be too low (since dU1′/dS < 0). It should be higher.

Answer 2.3 To be done

Answer 2.4 Suppose partner 2 contributes c2 = R − c1; his payoff is then V − (R − c1)². If c1 ≥ R, partner 2's best response is to put in 0 and pocket V. If c1 < R and partner 2 responds with some c2 such that c2 < R − c1, his payoff is −c2², whereas putting in nothing yields a payoff of zero; there is, therefore, no reason to put in such a low positive amount. There is obviously also no reason to put in any c2 > R − c1. He will put in R − c1 if doing so is better than putting in nothing, i.e.

V − (R − c1)² ≥ 0 ⇒ c1 ≥ R − √V

For player 1, any c1 > R − √V is dominated by c1 = R − √V. Player 1 will make this contribution if the benefit exceeds the cost:

δV ≥ (R − √V)²

⇒ δ ≥ (R/√V − 1)²

If R² ≥ 4V, δ would have to be at least one, which is impossible. Therefore, if δ ≥ (R/√V − 1)² and R² ≤ 4V (i.e. the cost of completing the project is not 'too high'), c1 = R − √V and c2 = √V. Otherwise, c1 = 0 and c2 = 0.

Answer 2.5 Let the ‘wage premium’ be p = w D − wE , where


p ∈ (−∞, ∞). In order to get the worker to acquire the skill, the
firm has to credibly promise to promote him if he acquires the skill
- and not promote him if he doesn’t.
Let’s say that he hasn’t acquired the skill. The firm will not pro-
mote him iff the returns to the firm are such that:

y D0 − w D ≤ y E0 − wE ⇒ y D0 − y E0 ≤ w D − wE = p

If he does acquire the skill, the firm will promote if the returns to
the firm are such that:

y DS − w D ≥ y ES − wE ⇒ y DS − y ES ≥ w D − wE = p

Thus the condition under which the firm behaves as it ought to in the desired equilibrium is:

yD0 − yE0 ≤ p ≤ yDS − yES

Given this condition, the worker will receive the promotion iff he acquires the skill. He will acquire the skill iff the benefit outweighs the cost, i.e.

wD − C ≥ wE ⇒ wE + p − C ≥ wE ⇒ p ≥ C

That is, the premium paid by the company must cover the cost of training. The company wishes (obviously) to minimize the premium, which occurs at:

wD − wE = p = C              if C ≥ yD0 − yE0,
              yD0 − yE0      if C < yD0 − yE0

A final condition is that the wages must be at least as good as the worker's alternative, i.e. wE ≥ 0 and wD ≥ 0. The firm seeks to maximize yij − wi, which happens at wE = 0 and wD = p.

Answer 2.6 The price of the good is determined by

P(Q) = a − q1 − q2 − q3

The profit earned by a firm is given by πi = (P − c)·qi. For Firm 2, for example,

π2 = (a − q1* − q2 − q3 − c)·q2

which is maximized at

dπ2/dq2 = (a − q1* − q2 − q3 − c) + q2·(−1) = 0

⇒ q2 = (a − q1* − q3 − c)/2

Symmetrically,

q3 = (a − q1* − q2 − c)/2

Putting these two together,

q2 = (a − q1* − (a − q1* − q2 − c)/2 − c)/2 ⇒ q2 = (a − c − q1*)/3

which, symmetrically, is equal to q3.

∴ π1 = (a − q1 − q2 − q3 − c)·q1 = (a − q1 − 2·(a − c − q1)/3 − c)·q1 = ((a − q1 − c)/3)·q1

This is maximized at

dπ1/dq1 = (a − q1 − c)/3 + (−q1)/3 = 0 ⇒ q1* = (a − c)/2

Plugging this into the previous equations, we get

q2 = q3 = (a − c)/6
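The backward-induction outcome can be confirmed numerically: for any q1, the followers produce (a − c − q1)/3 each, and a grid search over q1 recovers the leader's quantity (the values of a and c are assumptions, not from the text):

```python
# Stackelberg leader (firm 1) with two Cournot followers (firms 2 and 3).
a, c = 10.0, 2.0

def followers(q1):
    # Followers' simultaneous equilibrium given q1: q2 = q3 = (a - c - q1)/3.
    return (a - c - q1) / 3

def leader_profit(q1):
    q23 = followers(q1)
    return (a - q1 - 2 * q23 - c) * q1

best_q1 = max((k * 0.001 for k in range(10001)), key=leader_profit)
print(best_q1, (a - c) / 2)               # both 4.0
print(followers(best_q1), (a - c) / 6)    # both 1.333...
```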

Answer 2.7 The profit earned by firm i is

πi = (P − w)·Li

which is maximized at

dπi/dLi = d[(a − L1 − ... − Li − ... − Ln − w)·Li]/dLi = 0

⇒ (a − L1 − ... − Li − ... − Ln − w) + Li·(−1) = 0

⇒ L1 + ... + 2Li + ... + Ln = a − w    ∀ i = 1, ..., n

This generates a system of equations similar to the system in Question 1.4:

[2 1 ... 1]   [L1]   [a − w]
[1 2 ... 1] · [L2] = [a − w]
[⋮ ⋮ ⋱ ⋮]    [⋮ ]   [  ⋮  ]
[1 1 ... 2]   [Ln]   [a − w]

which resolves to

Li = (a − w)/(n + 1)

Thus, total labor demand is given by

L = L1 + L2 + ... + Ln = (a − w)/(n + 1) + ... + (a − w)/(n + 1) = n·(a − w)/(n + 1)

The labor union aims to maximize

U = (w − wa)·L = (w − wa)·n·(a − w)/(n + 1) = (n/(n + 1))·(aw − a·wa − w² + w·wa)

The union maximizes U by setting w. The maximum is at

dU/dw = (a − 2w + wa)·n/(n + 1) = 0 ⇒ w = (a + wa)/2

The subgame-perfect equilibrium is Li = (a − w)/(n + 1) and w = (a + wa)/2. Although the wage doesn't change with n, the union's utility U is an increasing function of n/(n + 1), which is an increasing function of n. This is so because the more firms there are in the market, the greater the quantity produced: more workers are hired to produce this larger quantity, increasing employment and the utility of the labor union.
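A numerical sketch with assumed values of a and wa confirms both that the optimal wage is (a + wa)/2 regardless of n and that the union's utility rises with n:

```python
# Monopoly union facing n Cournot firms: L(w) = n*(a - w)/(n + 1),
# union utility U(w) = (w - wa) * L(w).
a, wa = 10.0, 2.0

def union_utility(w, n):
    return (w - wa) * n * (a - w) / (n + 1)

for n in (1, 2, 5, 20):
    grid = [wa + 0.001 * k for k in range(8001)]
    w_star = max(grid, key=lambda w: union_utility(w, n))
    print(n, w_star, union_utility(w_star, n))
# w* stays at (a + wa)/2 = 6.0 for every n, while utility rises toward (a - wa)^2/4.
```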

Answer 2.8

Answer 2.9 From section 2.2.C, we know that exports from country i are driven to 0 iff

ei* = (a − c − 2tj)/3 = 0 ⇒ tj = (a − c)/2

which, symmetrically, is equal to ti. Note that in this model c = wi, and we will use wi from now on, for simplicity. What happens to domestic sales?

hi = (a − wi + ti)/3 = (a − wi + (a − wi)/2)/3 = (a − wi)/2

which is the monopoly quantity. Now, in the monopoly-union bargaining model, the quantity produced equals the labor demanded. Thus,

Li(e=0) = (a − wi)/2

The profit earned by firm i in this case is given by

πi(e=0) = (Pi − wi)·hi = (a − wi − hi)·hi = (a − wi − (a − wi)/2)·((a − wi)/2) = ((a − wi)/2)²

The union's payoff is

U(e=0) = (wi − wa)·Li = (wi − wa)·(a − wi)/2 = (a·wi − a·wa − wi² + wi·wa)/2

The union sets the wage to maximize this payoff:

dU(e=0)/dwi = (a − 2wi + wa)/2 = 0 ⇒ wi = (a + wa)/2

Now suppose the tariffs decline to zero. In this situation, tj = 0 and therefore

hj = hi = (a − wi)/3 and ej = ei = (a − wi)/3

Due to this, prices fall:

Pi = Pj = a − Q = a − hi − ej = a − (a − wi)/3 − (a − wi)/3 = (a + 2wi)/3

So what happens to profits?

πi(t=0) = (Pi − wi)·hi + (Pj − wi)·ej = ((a + 2wi)/3 − wi)·((a − wi)/3) + ((a + 2wi)/3 − wi)·((a − wi)/3)

⇒ πi(t=0) = 2·((a − wi)/3)² = (8/9)·πi(e=0)

so profits are lower at zero tariffs than under the prohibitive tariffs. What happens to employment?

Li(t=0) = qi = hi + ei = (2/3)·(a − wi) = (4/3)·((a − wi)/2) = (4/3)·Li(e=0)

Employment rises. And what happens to the wage? That depends on the payoff the union now faces:

U(t=0) = (wi − wa)·Li = (2/3)·(wi − wa)·(a − wi) = (2/3)·(a·wi − a·wa − wi² + wa·wi)

This is maximized at

dU(t=0)/dwi = (2/3)·(a − 2wi + wa) = 0 ⇒ wi = (a + wa)/2

which is the same as before.

Answer 2.10 Note that (P1, P2), (R1, R2) and (S1, S2) are Nash equilibria.

       P2      Q2      R2      S2
P1    2, 2    x, 0    −1, 0   0, 0
Q1    0, x    4, 4    −1, 0   0, 0
R1    0, 0    0, 0    0, 2    0, 0
S1    0, −1   0, −1   −1, −1  2, 0

So what are player 1's payoffs from playing the proposed strategy? Let PO_i^j(X1, X2) denote player i's payoff in round j when player 1 plays X1 and player 2 plays X2. If player 2 doesn't deviate, player 1 earns the sum of payoffs from two rounds of play:

PO_1^1(Q1, Q2) + PO_1^2(P1, P2) = 4 + 2 = 6

And if player 2 deviates from the strategy, player 1 earns:

PO_1^1(Q1, P2) + PO_1^2(S1, S2) = 0 + 2 = 2

Now, let's look at his payoff from deviating when player 2 doesn't:

PO_1^1(P1, Q2) + PO_1^2(R1, R2) = x + 0 = x

And when they both deviate:

PO_1^1(P1, P2) + PO_1^2(P1, P2) = 2 + 2 = 4

Thus, if player 2 deviates, player 1's best response is to deviate as well, for a payoff of 4 (as opposed to the 2 he'd get by not deviating). If, however, player 2 doesn't deviate, player 1 gets a payoff of 6 when playing the strategy and x when he doesn't. Thus he (player 1) will play the strategy iff x < 6. A symmetric argument applies to player 2.

Thus the condition under which the strategy is a subgame-perfect Nash equilibrium is

4 < x < 6

Answer 2.11 The only pure-strategy Nash equilibria of the stage game are (T, L) and (M, C).

      L      C      R
T    3,1    0,0    5,0
M    2,1    1,2    3,1
B    1,2    0,1    4,4

Unfortunately, the payoff (4,4), which comes from the actions (B,R), cannot be sustained in a single round: if player 1 plays B, player 2 would play R, but player 1 would then deviate to T to earn a payoff of 5. Consider, however, the following strategy for player 2:
• In Round 1, play R.
• In Round 2, if Round 1 was (B,R), play L. Else, play C.
Player 1's best response in Round 2 is obviously T or M, depending on what player 2 does. But what should he do in Round 1? (For the PO notation, see Answer 2.10.) If he plays T, his payoff is:

PO_1^1(T, R) + PO_1^2(M, C) = 5 + 1 = 6

If he plays B:

PO_1^1(B, R) + PO_1^2(T, L) = 4 + 3 = 7

Thus, as long as player 2 is following the strategy given above, we can induce (B,R) in the first round.

To get an intuitive idea of how we constructed the strategy, note that there are two Nash equilibria that player 2 can play in the second round: a "reward" equilibrium in which player 1 gets 3 and a "punishment" equilibrium in which player 1 gets 1. By (credibly) threatening to punish player 1 in Round 2, player 2 induces "good" behavior in Round 1.

Answer 2.12 See text.

Answer 2.13 The monopoly quantity is (a − c)/2. The monopoly price is, therefore:

P = a − Q = a − (a − c)/2 = (a + c)/2

The players can adopt the following strategy:
• In Round 1, play pi = (a + c)/2.
• In Round t ≠ 1, if the previous round had pj ≠ (a + c)/2, play pi = c (the Bertrand equilibrium). Else, play pi = (a + c)/2.

Now, if player i deviates by charging (a + c)/2 − ε, where ε > 0 is small, he secures the entire market and earns (approximately) the monopoly profit for one round and Bertrand profits (which are 0) for all future rounds, which totals

πdeviate = (a − (a − c)/2 − c)·((a − c)/2) + δ·0 + δ²·0 + ... = (a − c)²/4

If both firms stick to the strategy, each serves half the market (an output of (a − c)/4) every round, so the payoff is

πfollow = (a − (a − c)/2 − c)·((a − c)/4) + δ·(a − (a − c)/2 − c)·((a − c)/4) + ...

= (1/(1 − δ))·((a − c)/2)·((a − c)/4) = (1/(1 − δ))·((a − c)²/8)

The strategy is stable if

πdeviate ≤ πfollow

⇒ (a − c)²/4 ≤ (1/(1 − δ))·((a − c)²/8)

⇒ δ ≥ 1/2

Q.E.D.
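The threshold can be confirmed by comparing the two payoff streams directly; the values of a and c below are illustrative assumptions:

```python
# Collusion at the monopoly price in repeated Bertrand: deviate vs. follow.
a, c = 10.0, 2.0
monopoly_profit = (a - c) ** 2 / 4        # one-period gain from undercutting
half_monopoly   = (a - c) ** 2 / 8        # per-period profit from colluding

def follow(delta):
    return half_monopoly / (1 - delta)    # discounted collusive stream

def deviate(delta):
    return monopoly_profit                # one round of monopoly profit, then zero

for delta in (0.3, 0.5, 0.7):
    print(delta, follow(delta) >= deviate(delta))
# False at 0.3, True at 0.5 and 0.7: collusion is sustainable exactly when delta >= 1/2.
```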

Answer 2.14 The monopoly quantity when demand is high is (aH − c)/2, which makes the monopoly price

pH = aH − (aH − c)/2 = (aH + c)/2

This is the price that the firms have to maintain when demand is high. Conversely, when demand is low,

pL = (aL + c)/2

Let pM be the monopoly price, defined as

pM = pH if ai = aH
     pL if ai = aL

Consider the following strategy for firm i:
• In Round 1, set pi = pM.
• In Round t ≠ 1, if pj = pM in the previous round, play pM; else play pi = c.

The payoff received from deviating is the monopoly profit for one round (see the previous question for the derivation of one-round monopoly profits) and then zero profits in all future rounds:

πdeviate = (ai − c)²/4 + δ·0 + δ²·0 + ... = (ai − c)²/4

If the firm follows the strategy, it earns:

πfollow = (ai − c)²/8 + δ·[π·(aH − c)²/8 + (1 − π)·(aL − c)²/8] + ...

⇒ πfollow = (ai − c)²/8 + (δ/(1 − δ))·[π·(aH − c)²/8 + (1 − π)·(aL − c)²/8]

The strategy is stable if

πdeviate ≤ πfollow

⇒ (ai − c)²/4 ≤ (ai − c)²/8 + (δ/(1 − δ))·[π·(aH − c)²/8 + (1 − π)·(aL − c)²/8]

Answer 2.15 If the quantity produced by a monopolist is (a − c)/2, the quantity produced by a single company in a successful n-firm cartel is

qm_n = (a − c)/(2n)

Therefore, the profit earned by one of these companies is

πm = (P − c)·qm_n = (a − Q − c)·qm_n = (a − (a − c)/2 − c)·((a − c)/(2n)) = (1/n)·((a − c)/2)²

The Cournot oligopoly equilibrium quantity (see Answer 1.4) is (a − c)/(1 + n), which means that the profit earned at this equilibrium is

πc = (a − Q − c)·qc_n = (a − n·(a − c)/(1 + n) − c)·((a − c)/(1 + n)) = ((a − c)/(1 + n))²

A grim trigger strategy for a single company here is:
• In Round t = 1, produce qm_n.
• In Round t > 1, if the total quantity produced in t − 1 was n·qm_n, produce qm_n; else produce qc_n.

Now, the best response to everyone else producing qm_n is found by maximizing

π′ = (a − Q − c)·q′ = (a − ((a − c)/(2n))·(n − 1) − q′ − c)·q′ = ((n + 1)/(2n))·(a − c)·q′ − q′²

which is maximized at

dπ′/dq′ = ((n + 1)/(2n))·(a − c) − 2q′ = 0 ⇒ q′ = ((n + 1)/(4n))·(a − c)

The profit at q′ (the cheating gain) is

π′ = (a − c − ((a − c)/(2n))·(n − 1) − ((n + 1)/(4n))·(a − c))·((n + 1)/(4n))·(a − c) = (((n + 1)/(4n))·(a − c))²

If the firm deviates, it earns the cheating gain for one round and Cournot profits for all future rounds, i.e. the payoff from deviating from the strategy is

πdeviate = (((n + 1)/(4n))·(a − c))² + δ·((a − c)/(1 + n))² + δ²·((a − c)/(1 + n))² + ...

⇒ πdeviate = [((n + 1)/(4n))² + (δ/(1 − δ))·(1/(1 + n))²]·(a − c)²

If the firm follows the strategy, its payoff is πm in every round:

πfollow = (1/n)·((a − c)/2)² + δ·(1/n)·((a − c)/2)² + ... = (1/(1 − δ))·(1/n)·((a − c)/2)²

The strategy is stable if

πfollow ≥ πdeviate

⇒ (1/(1 − δ))·(1/n)·((a − c)/2)² ≥ [((n + 1)/(4n))² + (δ/(1 − δ))·(1/(n + 1))²]·(a − c)²

⇒ δ ≥ (n² + 2n + 1)/(n² + 6n + 1) ≡ δ*

Thus as n rises, δ* rises: the more firms there are, the harder it is to sustain the cartel. If you want to know more, see Rotemberg and Saloner (1986).
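The closed form for δ* can be cross-checked against a direct comparison of the two payoff streams (a sketch with a − c normalized to 1; not part of the original text):

```python
# Critical discount factor for an n-firm cartel with Cournot reversion.
a_minus_c = 1.0

def delta_star(n):
    pi_m = (1 / n) * (a_minus_c / 2) ** 2            # cartel profit per firm
    pi_c = (a_minus_c / (n + 1)) ** 2                # Cournot punishment profit
    pi_d = ((n + 1) / (4 * n) * a_minus_c) ** 2      # one-shot cheating profit
    # pi_m/(1-d) >= pi_d + d/(1-d)*pi_c  <=>  d >= (pi_d - pi_m)/(pi_d - pi_c)
    return (pi_d - pi_m) / (pi_d - pi_c)

for n in (2, 3, 5, 10):
    closed_form = (n * n + 2 * n + 1) / (n * n + 6 * n + 1)
    print(n, round(delta_star(n), 4), round(closed_form, 4))
# The two columns agree, and the critical delta rises with n.
```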

Answer 2.16

Answer 2.17

Answer 2.18 See text.

Answer 2.19 In a one period game, player 1 would get a payoff


of 1 and player 2 would get a payoff of 0 i.e. (1,0). In a two period

game, if the two players can’t agree, the game goes to the second
stage, at which point, player 2 gets 1 and player 1 gets 0. This pay-
off of 1 in the second round is worth δ to player 2 in the first round.
If player 1 offers δ to player 2 in the first round, player 2 will accept,
getting a payoff of 1 − δ i.e. (1 − δ, δ).
In a three period game, if player 2 rejects the offer in the first
round, they go on to the second round, at which point it becomes
a two period game and player 2 gets a payoff of 1 − δ and player 1
gets δ. This payoff (δ,1 − δ) is worth (δ2 ,δ[1 − δ]) to the players in
the first round. Thus, if player 1 makes an offer of (1 − δ[1 − δ],δ[1 −
δ])=(1 − δ + δ2 ,δ − δ2 ), player 2 would accept and player 1 would
secure a higher payoff.

Answer 2.20 In round 1, player A offers (1/(1 + δ), δ/(1 + δ)) to player B, which player B accepts, since δ/(1 + δ) ≥ δ·s, where s = 1/(1 + δ) is the share B would get as the proposer next round.

What if A deviates from the equilibrium, offers less, and B refuses? The game then goes into the next round and B offers A δ/(1 + δ), which will be accepted, leaving 1/(1 + δ) for B. This is worth δ·1/(1 + δ) to B in the first round (which is why B will refuse anything less than this amount) and δ²/(1 + δ) to A in the first round. This is less than the 1/(1 + δ) A would have made had he not deviated.
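The split can also be recovered by backward induction on a long finite horizon, which converges to the infinite-horizon shares (the horizon length and δ below are assumptions):

```python
# Alternating-offers bargaining: value to the current proposer of a pie of size 1.
delta = 0.9
T = 200                      # long finite horizon approximating the infinite game

proposer_share = 1.0         # in the last period the proposer takes everything
for _ in range(T):
    # Today's proposer offers the responder exactly delta * (responder's value
    # as tomorrow's proposer) and keeps the rest.
    proposer_share = 1 - delta * proposer_share

print(proposer_share, 1 / (1 + delta))   # both ~0.5263
```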

Answer 2.21

Answer 2.22 In the first round, investors can either withdraw (w) or not (d). A strategy can be represented as x1x2, where x1 is what the investor does in the first round and x2 is what the investor does in the second round. The game can be represented by the following table:

        ww        wd        dd        dw
ww     r, r      r, r      D, 2r−D   D, 2r−D
wd     r, r      r, r      D, 2r−D   D, 2r−D
dd     2r−D, D   2r−D, D   R, R      D, 2R−D
dw     2r−D, D   2r−D, D   2R−D, D   R, R

There are five Nash equilibria: (dw,dw), (ww,ww), (ww,wd), (wd,ww) and (wd,wd). Of these, (ww,wd), (wd,ww) and (wd,wd) are not subgame-perfect: withdrawing is the unique Nash equilibrium of the second-round subgame, so a strategy that prescribes not withdrawing in the second round cannot be part of a subgame-perfect equilibrium.

Answer 2.23 The optimal investment is given by

d(v + I − p − I²)/dI = 0 ⇒ I* = 1/2

If the buyer has invested, he will buy iff

v + I − p − I² ≥ −I² ⇒ p ≤ v + I

Thus, the highest price the buyer will pay at this point is p = v + I. If, however, the buyer hasn't invested, he will buy iff

v − p ≥ 0 ⇒ v ≥ p

so the highest price he would be willing to pay is p = v. Thus the investment is drawn from I ∈ {0, ½} and the price from p ∈ {v, v + ½}: there is no gain from charging anything other than these prices. The resulting payoffs (buyer, seller) are:

                    p = v + ½        p = v
I = ½, Accept       −¼, v + ½        ¼, v
I = ½, Reject       −¼, 0            −¼, 0
I = 0, Accept       −½, v + ½        0, v
I = 0, Reject       0, 0             0, 0

As you can see from this table, if I = ½, Accept weakly dominates Reject, and if I = 0, Reject weakly dominates Accept. Thus we can collapse the table into a simpler one:

                        (q) p = v + ½      (1 − q) p = v
(p) I = ½, Accept          −¼, v + ½           ¼, v
(1 − p) I = 0, Reject      0, 0                0, 0

The only pure-strategy Nash equilibrium is for the buyer not to invest and the seller to charge the high price v + ½.
Static Games of Incomplete Information

Answer 3.1 See text

Answer 3.2 Firm 1 aims to maximize

π1 = (p − c)·q1 = (ai − c − q1 − q2)·q1

which is done by setting

dπ1/dq1 = ai − c − q1 − q2 + q1·(−1) = 0 ⇒ q1 = (ai − c − q2)/2

Thus, the strategy for firm 1 is

q1 = (aH − c − q2)/2 if ai = aH
     (aL − c − q2)/2 if ai = aL

Now, firm 2 aims to maximize

π2 = (p − c)·q2 = (a − c − q1 − q2)·q2

This is maximized at

dπ2/dq2 = a − c − q1 − q2 + q2·(−1) = 0 ⇒ q2 = (θaH + (1 − θ)aL − c − q1)/2

where q1 here denotes firm 2's expectation of firm 1's output. Plugging in that expectation, we get

q2 = [θaH + (1 − θ)aL − c − (θaH + (1 − θ)aL − c − q2)/2] / 2

⇒ q2 = (θaH + (1 − θ)aL − c)/3

Now we need to find firm 1's output. If ai = aH,

q1H = (aH − c − (θaH + (1 − θ)aL − c)/3)/2 = ((3 − θ)aH − (1 − θ)aL − 2c)/6

But what if ai = aL?

q1L = (aL − c − (θaH + (1 − θ)aL − c)/3)/2 = ((2 + θ)aL − θaH − 2c)/6

Now, based on these results, the constraints for non-negativity are:

q2 ≥ 0 ⇒ θaH + (1 − θ)aL − c ≥ 0 ⇒ θ ≥ (c − aL)/(aH − aL)

For this lower bound to be at most 1, we also need aH ≥ c. Furthermore,

q1L ≥ 0 ⇒ ((2 + θ)aL − θaH − 2c)/6 ≥ 0 ⇒ θ ≤ 2·(aL − c)/(aH − aL)

And finally,

q1H ≥ 0 ⇒ θ ≤ (3aH − aL − 2c)/(aH − aL)
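The three equilibrium quantities can be checked by iterating best responses for assumed parameter values (a verification sketch, not part of the original solution):

```python
# Cournot duopoly in which firm 1 knows demand (a_H or a_L) and firm 2 only
# knows the probability theta of high demand.
a_H, a_L, c, theta = 12.0, 8.0, 2.0, 0.5

q1H = q1L = q2 = 1.0
for _ in range(500):
    q1H = (a_H - c - q2) / 2                       # firm 1's reply if demand is high
    q1L = (a_L - c - q2) / 2                       # firm 1's reply if demand is low
    q2 = (theta * (a_H - c - q1H) + (1 - theta) * (a_L - c - q1L)) / 2

a_bar = theta * a_H + (1 - theta) * a_L
print(q2, (a_bar - c) / 3)                                          # 2.666...
print(q1H, ((3 - theta) * a_H - (1 - theta) * a_L - 2 * c) / 6)     # 3.666...
print(q1L, ((2 + theta) * a_L - theta * a_H - 2 * c) / 6)           # 1.666...
```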

Answer 3.3 The profit earned by firm 1 is given by

π1 = (p1 − c)·q1 = (p1 − c)·(a − p1 − b1·p2)

Setting c = 0 for simplicity (as the rest of the derivation does), this is maximized at

dπ1/dp1 = a − p1 − b1·p2 + p1·(−1) = 0 ⇒ p1 = (a − b1·p2)/2 = (a − b1·[θpH + (1 − θ)pL])/2

Now, what if b1 = bH? To start with, p1 = pH and

pH = (a − bH·[θpH + (1 − θ)pL])/2 ⇒ pH = (a − (1 − θ)·bH·pL)/(2 + θbH)

And if b1 = bL:

pL = (a − bL·[θpH + (1 − θ)pL])/2 = (a − θbL·pH)/(2 + (1 − θ)bL)

which means that

pH = (a − (1 − θ)·bH·(a − θbL·pH)/(2 + (1 − θ)bL))/(2 + θbH)

⇒ pH = a·(2 + (1 − θ)bL − (1 − θ)bH) / (4 + 2(1 − θ)bL + 2θbH)

Similarly,

pL = a·(2 + θbH − θbL) / (4 + 2(1 − θ)bL + 2θbH)

Answer 3.4 Game 1 is played with probability 0.5:

         L      R
(q)T    1,1    0,0
(1-q)B  0,0    0,0

and Game 2 with probability 0.5:

         L      R
(q)T    0,0    0,0
(1-q)B  0,0    2,2

Player 1 observes which game has been drawn; player 2 does not. If nature picks Game 2, player 1 will always play B, since it weakly dominates T; if nature picks Game 1, player 1 will play T. Now, if player 2 plays L with probability p, his expected payoff against this strategy is

π2 = p·[½·1 + ½·0] + (1 − p)·[½·0 + ½·2] = 1 − ½·p

This is maximized at p = 0, i.e. player 2 will always play R. Thus the pure-strategy Bayesian Nash equilibrium is

PSNE = {(T if Game 1, B if Game 2), R}

Answer 3.5

Answer 3.6 The payoff is given by

ui = vi − bi        if bi > bj ∀ j ≠ i,
     (vi − bi)/m    if bi ties for the highest bid with m − 1 other bidders,
     0              if bi < bj for some j.

The beliefs are: vj is uniformly distributed on [0, 1]. Actions are given by bi ∈ [0, 1] and types by vi ∈ [0, 1]. The conjectured strategy is bi = ai + ci·vi. Thus, the aim is to maximize

πi = (vi − bi)·P(bi > bj ∀ j ≠ i) = (vi − bi)·[P(bi > bj)]^(n−1)

⇒ πi = (vi − bi)·[P(bi > aj + cj·vj)]^(n−1) = (vi − bi)·[P(vj < (bi − aj)/cj)]^(n−1) = (vi − bi)·((bi − aj)/cj)^(n−1)

This is maximized at

dπi/dbi = (−1)·((bi − aj)/cj)^(n−1) + (vi − bi)·((n − 1)/cj)·((bi − aj)/cj)^(n−2) = 0

⇒ ((bi − aj)/cj)^(n−2) · ((aj + (n − 1)·vi − n·bi)/cj) = 0

This requires that either

(bi − aj)/cj = 0 ⇒ bi = aj

or that

(aj + (n − 1)·vi − n·bi)/cj = 0 ⇒ bi = aj/n + ((n − 1)/n)·vi

Now, we know that bi = ai + ci·vi. Here, ci = (n − 1)/n and

ai = aj/n ⇒ a1 = a2/n = a3/n = ... = an/n

which is only possible if a1 = a2 = ... = an = 0. Thus,

bi = ((n − 1)/n)·vi
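A quick expected-payoff check (for an assumed n and a sample valuation) confirms that bidding (n − 1)·vi/n is a best response when the other bidders follow the same rule:

```python
# First-price auction with n bidders and valuations uniform on [0, 1].
# If everyone else bids b_j = (n-1)/n * v_j, bidder i wins with bid b
# whenever all v_j < n*b/(n-1), i.e. with probability (n*b/(n-1))^(n-1).
n, v_i = 4, 0.8

def expected_payoff(b):
    win_prob = min(1.0, n * b / (n - 1)) ** (n - 1)
    return (v_i - b) * win_prob

grid = [k / 1000 for k in range(1001)]
best_bid = max(grid, key=expected_payoff)
print(best_bid, (n - 1) / n * v_i)   # both 0.6
```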

Answer 3.7

Answer 3.8
Dynamic Games of Incomplete Information

Answer 4.1 (a)

       (q) L′    (1 − q) R′
  L     4, 1       0, 0
  M     3, 0       0, 1
  R     2, 2       2, 2

The Nash equilibria are (L, L′) and (R, R′), and both are subgame-perfect equilibria (the game has no proper subgames). Now, the payoff to player 2 from playing L′ is

π2(L′) = 1·p + 0·(1 − p) = p

The payoff from playing R′ is

π2(R′) = 0·p + 1·(1 − p) = 1 − p

Player 2 will always play L′ if

π2(L′) > π2(R′) ⇒ p > 1 − p ⇒ p > 1/2

The payoff to player 1 from playing L is

π1(L) = 4·q + 0·(1 − q) = 4q

and the payoff from playing M is

π1(M) = 3·q + 0·(1 − q) = 3q

Player 1 will always play L rather than M, since 4q ≥ 3q. Thus p = 1, in which case player 2 will always play L′. Thus the outcome (R, R′) violates Requirements 1 and 2.
(b)

       L′      M′      R′
  L   1,3     1,2     4,0
  M   4,0     0,2     3,3
  R   2,4     2,4     2,4

The expected values of the payoffs to player 2 are:

π2(L′) = 3·p + 0·(1 − p) = 3p
π2(M′) = 2·p + 2·(1 − p) = 2
π2(R′) = 0·p + 3·(1 − p) = 3 − 3p

And the payoffs to player 1 are:

π1(R) = 2
π1(L) = 1·q1 + 1·q2 + 4·(1 − q1 − q2) = 4 − 3q1 − 3q2
π1(M) = 4·q1 + 0·q2 + 3·(1 − q1 − q2) = 3 + q1 − 3q2

The only Nash equilibrium is (R, M′); it is also subgame-perfect. To be part of a perfect Bayesian equilibrium, player 2 must believe that

π2(M′) > π2(L′) ⇒ 2 > 3p ⇒ 2/3 > p

and

π2(M′) > π2(R′) ⇒ 2 > 3 − 3p ⇒ p > 1/3

Furthermore, player 1 must believe

π1(R) > π1(L) ⇒ 2 > 4 − 3q1 − 3q2 ⇒ q1 > 2/3 − q2

Since q1 > 0, this implies that

2/3 − q2 > 0 ⇒ 2/3 > q2

and

π1(R) > π1(M) ⇒ 2 > 3 + q1 − 3q2 ⇒ 3q2 − 1 > q1

which, in turn, requires

3q2 − 1 > 2/3 − q2 ⇒ q2 > 5/12

The pure-strategy perfect Bayesian equilibrium is

[(R, M′), 2/3 > p > 1/3, 3q2 − 1 > q1 > 2/3 − q2, 2/3 > q2 > 5/12]

Answer 4.2

            (q) L′    (1 − q) R′
(p) L        3,0        0,1
(1 − p) M    0,1        3,0
      R      2,2        2,2

As you can see, there is no pure-strategy Nash equilibrium. But we need a rigorous proof. A pure-strategy Nash equilibrium would require:
(a) Player 1 always picks either L or M. For example, player 1 will always play L if

π1(L) > π1(M) ⇒ 3·q + 0·(1 − q) > 0·q + 3·(1 − q) ⇒ q > 1/2

Thus if q > 0.5, p = 1.
(b) Player 2 always picks either L′ or R′; player 2 will always play L′ if

π2(L′) > π2(R′) ⇒ 0·p + 1·(1 − p) > 1·p + 0·(1 − p) ⇒ 1/2 > p

Thus, if p < 0.5, q = 1. This violates the condition we uncovered in part (a), proving that there is no such pure-strategy equilibrium.

In a mixed-strategy equilibrium, player 1 plays L with probability p and player 2 plays L′ with probability q. In equilibrium, player 2 is indifferent between L′ and R′:

π2(L′) = π2(R′) ⇒ 0·p + 1·(1 − p) = 1·p + 0·(1 − p) ⇒ p = 1/2

And similarly, for player 1:

π1(L) = π1(M) ⇒ 3·q + 0·(1 − q) = 0·q + 3·(1 − q) ⇒ q = 1/2

Thus, in the mixed-strategy equilibrium, player 1 plays L with probability p = 0.5 and player 2 plays L′ with probability q = 0.5.

Answer 4.3 (a) Let’s start with the pooling equilibrium ( R, R). In
this situation, p = 0.5. Now, the payoff to the receiver is

π R ( R, u) = 0.5 · (1) + 0.5 · (0) = 0.5

π R ( R, d) = 0.5 · (0) + 0.5 · (2) = 1

Thus, if the sender plays R, the receiver will play d. We have to test
two strategies for the receiver: (u, d) and (d, d). Under the strategy
(d, d)

π1 ( L, d) = 2 and π1 ( R, d) = 3

π2 ( L, d) = 3 and π2 ( R, d) = 2

There is no incentive for type 1 to deviate and play L, but there is


an incentive for type 2 to do so. Under the strategy (u, d),

π1 ( L, u) = 1 and π1 ( R, d) = 3

π2 ( L, u) = 0 and π2 ( R, d) = 2

Neither type 1 nor type 2 has any reason to play L instead of R. Thus,


we have the following pooling equilibrium:

[( R, R), (u, d), p = 0.5, 1 ≥ q ≥ 0]



(b) We must find a pooling equilibrium in which the sender


plays ( L, L, L). For the receiver, the payoffs are

π_R(L, u) = (1/3)·1 + (1/3)·1 + (1/3)·1 = 1

π_R(L, d) = (1/3)·0 + (1/3)·0 + (1/3)·0 = 0
There are two strategies: (u, u) and (u, d). Under (u, u):

π1 ( L, u) = 1 and π1 ( R, u) = 0

π2 ( L, u) = 2 and π2 ( R, u) = 1

π3 ( L, u) = 1 and π3 ( R, u) = 0

None of the three types have an incentive to send R instead of L.


Thus, we have the following equilibrium:

[(L, L, L), (u, u), p = 1/3, 1 ≥ q ≥ 0]

Answer 4.4 (a) Let’s examine pooling equilibrium ( L, L). p =


0.5. π R ( L, u) = π R ( L, d). Thus, it doesn’t matter for the receiver
whether he/she plays u or d.
• Under (u, u), π1 ( L, u) = 1 < π1 ( R, u) = 2, making it unsustain-
able.
• Under (u, d), π2 ( L, u) = 0 < π2 ( R, d) = 1, making it unsustain-
able.
• Under (d, d), π2 ( L, d) = 0 < π2 ( R, d) = 1, making it unsustain-
able.
• Under (d, u), π2 ( L, u) = 0 < π2 ( R, d) = 1, making it unsustain-
able.
Thus, ( L, L) is not a sustainable equilibrium.
Let's examine the separating equilibrium (L, R). The best response to this is (u, d) (since π_R(1, L, u) > π_R(1, L, d) and π_R(2, R, u) > π_R(2, R, d)). Let's see if either of the types has an incentive to deviate:

• For type 1, π1 ( L, u) = 1 > π1 ( R, d) = 0 i.e. no reason to play R


instead of L.
• For type 2, π2 ( L, u) = 0 < π2 ( R, d) = 1 i.e. no reason to play L
instead of R.
Let’s examine pooling equilibrium ( R, R). π R ( R, u) = 1 >
π R ( R, d) = 0.5. Therefore, the two strategies that can be followed by
the receiver are (u, u) and (d, u).
• Under (u, u), π ( L, u) < π ( R, u) for both types.
• Under (d, u), π ( L, d) ≤ π ( R, u) for both types.
Let's examine the separating equilibrium (R, L). The best response to this is (d, u) (since π_R(1, R, u) = 2 > π_R(1, R, d) = 0 and π_R(2, L, u) = 0 < π_R(2, L, d) = 1).
• For type 1, π1(L, d) = 2 = π1(R, u), so type 1 has no strict incentive to deviate.

• For type 2, π2 ( L, d) = 0 ≤ π2 ( R, u) = 1, i.e. will play R.



Thus the perfect Bayesian equilibrium are:

[( L, R), (u, d), p, q]

[( R, R), (u, u), p, q = 0.5]

[( R, R), (d, u), p, q = 0.5]

[( R, L), (d, u), p, q]

(b) Let's examine the pooling equilibrium (L, L). π_R(L, u) = 1.5 >


π R ( L, d) = 1, therefore player 2 will respond to L with u. The two
strategies are (u, u) and (u, d).
• Under (u, u), π ( L, u) > π ( R, u) for both types.
• Under (u, d), π1 ( L, u) = 3 < π1 ( R, d) = 4, making it unsustain-
able.
Let’s examine separating equilibrium ( L, R). The best response to
this is (d, u).
• For type 1, π1 ( L, d) = 1 > π1 ( R, u) = 0 i.e. type 1 will play L.
• For type 2, π2 ( L, d) = 0 < π2 ( R, u) = 1 i.e. type 2 will play R.
Let's examine the pooling equilibrium (R, R). π_R(R, u) = 1 > π_R(R, d) = 0.5, therefore player 2 will respond to R with u. The two
strategies are (u, u) and (d, u).
• Under (u, u), π1 ( L, u) = 3 > π1 ( R, u) = 0, making ( R, R)
unsustainable.
• Under (d, u), π1 ( L, d) = 1 > π1 ( R, u) = 0, making ( R, R)
unsustainable.
Let’s examine separating equilibrium ( R, L). The best response to
this is (d, u).
• For type 1, π1 ( L, d) = 1 > π1 ( R, u) = 0, making ( R, L)
unsustainable.
• For type 2, π2 ( L, d) = 0 < π2 ( R, u) = 1, which doesn’t conflict
with the equilibrium.
The perfect Bayesian Equilibria are

[( L, L), (u, u), p = 0.5, q]

[( L, R), (d, u), p = 1, q = 0]

Answer 4.5 Let’s examine 4.3(a). We’ve already tested equilib-


rium ( R, R). Let’s try another pooling equilibrium ( L, L). q = 0.5.
π R ( L, u) = 1 > π R ( L, d) = 0.5. Thus, the receiver’s response to
L will always be u. We have to test two strategies for the receiver:
(d, u) and (u, u).
• Under the strategy (d, u), π ( L, d) = 2 > π ( R, u) = 0 i.e. there is
no incentive for either type to play R instead of L.
• Under the strategy (u, u), π2 ( L, u) = 0 < π2 ( R, u) = 1, making
( L, L) unsustainable.

Let’s examine ( L, R). The best response to this is (u, d). In re-
sponse to this,
• For type 1, π1 ( L, u) = 1 < π1 ( R, d) = 3 i.e. type 1 will play R
which violates the equilibrium
• For type 2, π2 ( L, u) = 0 < π2 ( R, d) = 2 i.e. type 2 will play R
which doesn’t violate the equilibrium.
Let’s examine ( R, L). The best response to this is (u, d). In re-
sponse to this,
• For type 1, π1 ( L, u) = 1 < π1 ( R, d) = 3, i.e. type 1 will play R.
• For type 2, π2 ( L, u) = 0 < π2 ( R, d) = 2, i.e. type 2 will play R,
violating the equilibrium.
The perfect Bayesian Equilibrium is

[( L, L), (d, u), p, q = 0.5]

Now, let's examine 4.3(b). There is one pooling equilibrium other than (L, L, L): (R, R, R). There are also six separating profiles to check: 1. (L, L, R), 2. (L, R, L), 3. (R, L, L), 4. (L, R, R), 5. (R, L, R) and 6. (R, R, L).

Let's start with the pooling equilibrium (R, R, R). π_R(R, u) = 2/3 > π_R(R, d) = 1/3. Thus, the receiver will play the strategy (u, u) or (d, u).
• For strategy (u, u), π(L, u) > π(R, u) for all types, making it unsustainable.
• For strategy (d, u), π1(L, d) > π1(R, u), making the equilibrium unsustainable.

Let's now examine the various separating profiles.
1. (L, L, R). The best response to this is (u, d) (since π_R(3, R, u) = 0 < π_R(3, R, d) = 1 and 0.5·π_R(1, L, u) + 0.5·π_R(2, L, u) = 1 > 0.5·π_R(1, L, d) + 0.5·π_R(2, L, d) = 0).
• For type 1, π_S(1, L, u) = 1 > π_S(1, R, d) = 0, i.e. type 1 will play L.
• For type 2, π_S(2, L, u) = 2 > π_S(2, R, d) = 1, i.e. type 2 will play L.
• For type 3, π_S(3, L, u) = 1 < π_S(3, R, d) = 2, i.e. type 3 will play R.
Thus, this is a viable equilibrium.
2. (L, R, L). The best response to this is (u, u) (since π_R(2, R, u) = 1 > π_R(2, R, d) = 0 and 0.5·π_R(1, L, u) + 0.5·π_R(3, L, u) = 1 > 0.5·π_R(1, L, d) + 0.5·π_R(3, L, d) = 0).
• For type 1, π_S(1, L, u) = 1 > π_S(1, R, u) = 0, i.e. type 1 will play L.
• For type 2, π_S(2, L, u) = 2 > π_S(2, R, u) = 1, i.e. type 2 will play L instead of R.
• For type 3, π_S(3, L, u) = 1 > π_S(3, R, u) = 0, i.e. type 3 will play L.
Thus, this is not a viable equilibrium.
3. (R, L, L). The best response to this is (u, u) (since π_R(1, R, u) = 1 > π_R(1, R, d) = 0 and 0.5·π_R(2, L, u) + 0.5·π_R(3, L, u) = 1 > 0.5·π_R(2, L, d) + 0.5·π_R(3, L, d) = 0).
• For type 1, π_S(1, L, u) = 1 > π_S(1, R, u) = 0, i.e. type 1 will play L instead of R.
• For type 2, π_S(2, L, u) = 2 > π_S(2, R, u) = 1, i.e. type 2 will play L.
• For type 3, π_S(3, L, u) = 1 > π_S(3, R, u) = 0, i.e. type 3 will play L.
Thus, this is not a viable equilibrium.

4. (L, R, R). The best responses to this are (u, u) and (u, d) (since π_R(1, L, u) = 1 > π_R(1, L, d) = 0 and 0.5·π_R(2, R, u) + 0.5·π_R(3, R, u) = 0.5 = 0.5·π_R(2, R, d) + 0.5·π_R(3, R, d)).
Let's test (u, u):
• For type 1, π_S(1, L, u) = 1 > π_S(1, R, u) = 0, i.e. type 1 will play L.
• For type 2, π_S(2, L, u) = 2 > π_S(2, R, u) = 1, i.e. type 2 will play L instead of R.
• For type 3, π_S(3, L, u) = 1 > π_S(3, R, u) = 0, i.e. type 3 will play L instead of R.
Thus, this is not a viable equilibrium.
Let's test (u, d):
• For type 1, π_S(1, L, u) = 1 > π_S(1, R, d) = 0, i.e. type 1 will play L.
• For type 2, π_S(2, L, u) = 2 > π_S(2, R, d) = 1, i.e. type 2 will play L instead of R.
• For type 3, π_S(3, L, u) = 1 < π_S(3, R, d) = 2, i.e. type 3 will play R.
Thus, this is not a viable equilibrium.
5. (R, L, R). The best response to this is either (u, u) or (u, d) (since π_R(2, L, u) = 1 > π_R(2, L, d) = 0 and 0.5·π_R(2, L, u) + 0.5·π_R(3, L, u) = 0.5 = 0.5·π_R(2, L, d) + 0.5·π_R(3, L, d) = 0.5).
Let's test (u, u):
• For type 1, π_S(1, L, u) = 1 > π_S(1, R, u) = 0, i.e. type 1 will play L instead of R.
• For type 2, π_S(2, L, u) = 2 > π_S(2, R, u) = 1, i.e. type 2 will play L.
• For type 3, π_S(3, L, u) = 1 > π_S(3, R, u) = 0, i.e. type 3 will play L instead of R.
Thus, this is not a viable equilibrium.
Let's test (u, d):
• For type 1, π_S(1, L, u) = 1 > π_S(1, R, d) = 0, i.e. type 1 will play L instead of R.
• For type 2, π_S(2, L, u) = 2 > π_S(2, R, d) = 0, i.e. type 2 will play L.
• For type 3, π_S(3, L, u) = 1 = π_S(3, R, d) = 1, i.e. type 3 can play R.
Thus, this is not a viable equilibrium.
6. (R, R, L). The best response to this is (u, u) (since π_R(3, L, u) = 1 > π_R(3, L, d) = 0 and 0.5·π_R(1, R, u) + 0.5·π_R(2, R, u) = 0.5 > 0.5·π_R(1, R, d) + 0.5·π_R(2, R, d) = 0).
• For type 1, π_S(1, L, u) = 1 > π_S(1, R, u) = 0, i.e. type 1 will play L instead of R.
• For type 2, π_S(2, L, u) = 2 > π_S(2, R, u) = 1, i.e. type 2 will play L instead of R.
• For type 3, π_S(3, L, u) = 1 > π_S(3, R, u) = 0, i.e. type 3 will play L.
Thus, this is not a viable equilibrium.

The only other perfect Bayesian equilibrium is

[(L, L, R), (u, d), p, q0 = 1/3, q1 = 1/3]

Answer 4.6 Type 2 will always play R, since π_S(2, R, a) > π_S(2, R, u) and π_S(2, R, a) > π_S(2, R, d). Thus if the Receiver gets the message L, he knows that it can only be type 1. In such a case, the Receiver plays u (in fact, π_R(x, L, u) > π_R(x, L, d) for both types, so the Receiver will always play u after L), creating a payoff of (2, 1). This gives type 1 a higher payoff than if he played R, which would have given him a payoff of 1. Thus, the perfect Bayesian equilibrium is

[(L, R), (u, a), p = 1, q = 0]
