
Microeconomics II - problem set 1

M.M

November 2022

Exercise 1

Answer 1.

                  Player 2
               L        C        R
          U   2, 3     1, 1     1, 2
Player 1  M   1, 2     4, 1     0, 4
          D   1, 0     2, 0     3, 1

There exist two Nash equilibria in pure strategies, namely {U, L} and {D, R}, with respective payoff outcomes (2, 3) and (3, 1). This result is found by marking in the table above the best response of each player to every possible choice of the other player. Whenever the two best responses meet (both numbers in a cell are marked), that cell is a Nash equilibrium: the players are best responding to each other.
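As a side check (not part of the formal solution), the best-response argument can be replicated by brute force in a few lines of Python; the payoff dictionary below is simply a transcription of the matrix above.

```python
# Sketch: brute-force check of pure Nash equilibria for the 3x3 game above.
# Row player is player 1, column player is player 2.
payoffs = {
    ("U", "L"): (2, 3), ("U", "C"): (1, 1), ("U", "R"): (1, 2),
    ("M", "L"): (1, 2), ("M", "C"): (4, 1), ("M", "R"): (0, 4),
    ("D", "L"): (1, 0), ("D", "C"): (2, 0), ("D", "R"): (3, 1),
}
rows, cols = ["U", "M", "D"], ["L", "C", "R"]

def is_nash(r, c):
    u1, u2 = payoffs[(r, c)]
    best1 = all(payoffs[(r2, c)][0] <= u1 for r2 in rows)   # player 1 cannot improve
    best2 = all(payoffs[(r, c2)][1] <= u2 for c2 in cols)   # player 2 cannot improve
    return best1 and best2

print([(r, c) for r in rows for c in cols if is_nash(r, c)])
# Expected output: [('U', 'L'), ('D', 'R')]
```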

Answer 2.
In order to characterize the equilibrium in mixed strategies, I first simplify the table above by eliminating those strategies that are strictly dominated and will therefore not be chosen with positive probability by a rational player. Such elimination is possible assuming common knowledge of rationality, which means that each player knows that her opponent will never play a strictly dominated strategy; this reasoning can be iterated an arbitrary (possibly infinite) number of times by both players. For player 2, strategy C is strictly dominated by strategy R, so I can remove C. Once C is removed, strategy M is strictly dominated by U for player 1, so I can remove M as well. The remaining table is the one I will consider for the computation of the mixed Nash equilibrium.

Let me now define pu as the probability that player 1 will choose strategy U and 1 − pu the proba-
bility that player 1 will choose strategy D. Similarly, pl is the probability that player 2 will choose
strategy L and 1 − pl the probability that player 2 will choose strategy R.

                  Player 2
               L        R
Player 1  U   2, 3     1, 2
          D   1, 0     3, 1

Now I compute, for both players, the expected utility of one player given:
1) each of her pure strategies (one at a time);
2) the probabilities with which the other player mixes her two available strategies.
E1(u|U) = 2pl + 1(1 − pl) = 1 + pl
E1(u|D) = 1pl + 3(1 − pl) = 3 − 2pl
A player's equilibrium mixing probability is determined by the condition that the other player must be indifferent between her pure strategies. So I equate the two lines above, obtaining pl = 2/3. Performing an analogous computation for the other player:
E2(u|L) = 3pu + 0(1 − pu) = 3pu
E2(u|R) = 2pu + 1(1 − pu) = 1 + pu
Equating these gives pu = 1/2.
Thus, the mixed Nash equilibrium is such that player 1 mixes U and D, choosing them with equal probability, and player 2 mixes L and R, choosing L 2/3 of the time. Written more formally: {(1/2 U, 1/2 D), (2/3 L, 1/3 R)}.
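As a quick side check (a sketch, not part of the formal argument), the two indifference conditions can be solved symbolically with sympy:

```python
# Sketch: solve the two indifference conditions of the reduced 2x2 game.
import sympy as sp

pl, pu = sp.symbols("pl pu")
# Player 1 indifferent between U and D pins down player 2's mixing probability pl.
eq1 = sp.Eq(2 * pl + 1 * (1 - pl), 1 * pl + 3 * (1 - pl))
# Player 2 indifferent between L and R pins down player 1's mixing probability pu.
eq2 = sp.Eq(3 * pu + 0 * (1 - pu), 2 * pu + 1 * (1 - pu))
print(sp.solve(eq1, pl), sp.solve(eq2, pu))   # [2/3] [1/2]
```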

Answer 3.

Figure 1: Representation of the two best response functions and the three Nash equilibria

From the graph above it is possible to observe the two pure Nash equilibria and the mixed Nash equilibrium, attained where the best response curves of players 1 and 2 intersect (black dots indicate the three equilibria in the graph above).
If pl > 2/3, player 1 will choose U with probability 1.
If pl < 2/3, player 1 will choose D with probability 1.
If pu > 1/2, player 2 will choose L with probability 1.
If pu < 1/2, player 2 will choose R with probability 1.
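These cutoffs can be verified numerically with a small sketch using exact fractions (a side check, not part of the formal argument):

```python
# Sketch: compare expected payoffs around the cutoffs pl = 2/3 and pu = 1/2.
from fractions import Fraction as F

E1_U = lambda pl: 2 * pl + (1 - pl)          # player 1 plays U
E1_D = lambda pl: pl + 3 * (1 - pl)          # player 1 plays D
E2_L = lambda pu: 3 * pu                     # player 2 plays L
E2_R = lambda pu: 2 * pu + (1 - pu)          # player 2 plays R

for pl in (F(3, 5), F(2, 3), F(3, 4)):
    print(pl, "U" if E1_U(pl) > E1_D(pl) else "D" if E1_U(pl) < E1_D(pl) else "indifferent")
for pu in (F(2, 5), F(1, 2), F(3, 5)):
    print(pu, "L" if E2_L(pu) > E2_R(pu) else "R" if E2_L(pu) < E2_R(pu) else "indifferent")
# Expected: D / indifferent / U, then R / indifferent / L
```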

Exercise 2

Answer 1.
I subsequently represent the game in extensive form.
[Game tree: Player 1 moves first, choosing U, M, or D; Player 2 observes the move and chooses L or R. Terminal payoffs (player 1, player 2): (U, L) = (3, 4), (U, R) = (2, 3), (M, L) = (2, 8), (M, R) = (5, 7), (D, L) = (2, 1), (D, R) = (0, 0).]

There exist four different sub-games: three proper sub-games, plus the whole game itself. Each of the three proper sub-games consists of one of player 2's decision nodes together with the pair of choices (left or right) available there. The whole game is represented in extensive form in the picture above. The same game can also be represented (uniquely) in normal form by means of a matrix, as below, where a strategy of player 2 such as rll specifies her action at the node following U, following M, and following D, respectively:

                                     Player 2
               lll     llr     lrr     rrr     rrl     rll     rlr     lrl
          U   3, 4    3, 4    3, 4    2, 3    2, 3    2, 3    2, 3    3, 4
Player 1  M   2, 8    2, 8    5, 7    5, 7    5, 7    2, 8    2, 8    5, 7
          D   2, 1    0, 0    0, 0    0, 0    2, 1    2, 1    0, 0    2, 1

Answer 2.
From the game above I can find (using the same method as in exercise 1 and in exercise 3 subsequently) five different Nash equilibria, leading to three different outcomes. The Nash equilibrium strategy profiles are {U, lll}, {U, llr}, {M, rll}, {M, rlr}, and {D, rll}. Thus, the Nash outcomes in terms of payoffs are (3, 4), (2, 8), and (2, 1).
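As a side check (not part of the formal solution), the following Python sketch rebuilds player 2's eight strategies from the tree payoffs and enumerates the pure Nash equilibria of the induced normal form by brute force:

```python
# Sketch: enumerate pure Nash equilibria of the normal form induced by the game tree.
# A strategy for player 2 is a triple (action after U, after M, after D), e.g. "rll".
from itertools import product

tree = {  # terminal payoffs (player 1, player 2) from the extensive form above
    ("U", "l"): (3, 4), ("U", "r"): (2, 3),
    ("M", "l"): (2, 8), ("M", "r"): (5, 7),
    ("D", "l"): (2, 1), ("D", "r"): (0, 0),
}
rows = ["U", "M", "D"]
cols = ["".join(s) for s in product("lr", repeat=3)]

def outcome(a1, s2):
    # the action actually played by player 2 depends only on player 1's move
    return tree[(a1, s2[rows.index(a1)])]

nash = []
for a1, s2 in product(rows, cols):
    u1, u2 = outcome(a1, s2)
    if all(outcome(r, s2)[0] <= u1 for r in rows) and \
       all(outcome(a1, c)[1] <= u2 for c in cols):
        nash.append((a1, s2))
print(nash)
# Expected: [('U', 'lll'), ('U', 'llr'), ('M', 'rll'), ('M', 'rlr'), ('D', 'rll')]
```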

Answer 3.
To find the sub-game perfect Nash equilibria I apply the method of backward induction. Thus, first
I consider the three sub-games taken individually. I find the Nash equilibria in each of the three
subgames. Since player 2 only cares about maximizing its payoff once she is at the node, in the
subgame where strategy L leads to a payoff of 4 and R to 3, she will go for L. In the subgame where
strategy L leads to 8 and R to 7 she will go for L. Finally, in the sub-game where L leads to 1 and
R to 0, she will choose L again. Note: R and L are two available strategies for player two at each
node, taking the whole game as the only choice between R and L that player 2 can make. So, the
best response for player 2 is {lll}. Thus, I may rewrite the game as:

[Reduced game: Player 1 chooses U, M, or D, leading to payoffs (3, 4), (2, 8), and (2, 1) respectively.]

In this reduced game I have kept only the terminal payoffs that player 1 can actually reach given player 2's optimal behaviour (notice that, not by chance, the outcomes of player 2's three optimal continuation choices are exactly the Nash outcomes of the game). Observing this new game, player 1 decides among the three strategies leading to payoffs of 3, 2, or 2. The preferred strategy is clearly the one leading to 3, which is strategy U. Thus, the unique SPNE (sub-game perfect Nash equilibrium) is {U, lll}, leading to the sub-game perfect Nash outcome (3, 4) for players 1 and 2.
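As a side check, backward induction on this tree can be reproduced in a few lines of Python (a sketch; the payoff dictionary simply transcribes the tree above):

```python
# Sketch: backward induction on the extensive form (no ties arise in this game).
tree = {
    ("U", "l"): (3, 4), ("U", "r"): (2, 3),
    ("M", "l"): (2, 8), ("M", "r"): (5, 7),
    ("D", "l"): (2, 1), ("D", "r"): (0, 0),
}
# Player 2 picks her best action at each of the three nodes.
plan2 = {a1: max("lr", key=lambda a2: tree[(a1, a2)][1]) for a1 in "UMD"}
# Player 1 anticipates that plan and picks the move with the highest own payoff.
move1 = max("UMD", key=lambda a1: tree[(a1, plan2[a1])][0])
print(move1, "".join(plan2[a1] for a1 in "UMD"), tree[(move1, plan2[move1])])
# Expected: U lll (3, 4)
```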

Exercise 3

Answer 1.
I will first proceed by eliminating all the strictly dominated strategies which will not be played by
any player. Moreover, since payoffs are visible to both players, common knowledge of rationality
(introduced in exercise 1) will also mean that the opponent will know that a strictly dominated
strategy will not be played by her opponent all she will rule it out.

                  Player 2
               a        b        c        d
          A   2, 5     2, 7     1, 2     1, 0
Player 1  B   2, 1     3, 2     2, 5     2, 1
          C   0, 1     1, 4     1, 3     1, 2
          D   1, 0     0, 1     3, 2     3, 3

It is easy to note that strategy C is strictly dominated by strategy B for player 1, given that each payoff of strategy B for player 1 is higher than the corresponding payoff of C (regardless of the choice made by player 2). The same reasoning can be applied to player 2 concerning strategy a, which is strictly dominated by strategy b. Thus, eliminating C and a, I obtain:

                  Player 2
               b        c        d
          A   2, 7     1, 2     1, 0
Player 1  B   3, 2     2, 5     2, 1
          D   0, 1     3, 2     3, 3

I can still notice that, for player 1, strategy A is strictly dominated by strategy B, so I can rule it out. The same applies (considering the remaining cells of the matrix) to strategy b with respect to strategy c, which strictly dominates it. I am thus left with the following 2x2 matrix:

                  Player 2
               c        d
Player 1  B   2, 5     2, 1
          D   3, 2     3, 3

Again, for player 1, strategy B is now strictly dominated by D. After ruling it out, I observe that strategy c is strictly dominated by d for player 2, which implies that, after ruling c out as well, the only remaining strategy profile is {D, d}. This also corresponds to the Nash equilibrium of the game.
Note: I could have reached the same conclusion by considering the initial matrix and highlighting the best responses of each player to the other player's moves (as done in exercise 1). The result does not change: the Nash equilibrium is {D, d}, with a payoff of 3 for both players.
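As a side check, the elimination procedure can be replicated mechanically with a short Python sketch; the payoff dictionary transcribes the original 4x4 matrix, and the elimination order it finds (C and a, then A and b, then B and c) matches the one described above.

```python
# Sketch: iterated elimination of strictly dominated (pure) strategies.
payoffs = {
    ("A", "a"): (2, 5), ("A", "b"): (2, 7), ("A", "c"): (1, 2), ("A", "d"): (1, 0),
    ("B", "a"): (2, 1), ("B", "b"): (3, 2), ("B", "c"): (2, 5), ("B", "d"): (2, 1),
    ("C", "a"): (0, 1), ("C", "b"): (1, 4), ("C", "c"): (1, 3), ("C", "d"): (1, 2),
    ("D", "a"): (1, 0), ("D", "b"): (0, 1), ("D", "c"): (3, 2), ("D", "d"): (3, 3),
}
rows, cols = ["A", "B", "C", "D"], ["a", "b", "c", "d"]

def strictly_dominated(s, own, other, idx):
    # idx = 0 for the row player (player 1), idx = 1 for the column player (player 2)
    def u(r, c):
        return payoffs[(r, c) if idx == 0 else (c, r)][idx]
    return any(all(u(t, o) > u(s, o) for o in other) for t in own if t != s)

changed = True
while changed:
    changed = False
    for s in rows[:]:
        if strictly_dominated(s, rows, cols, 0):
            rows.remove(s); changed = True
    for s in cols[:]:
        if strictly_dominated(s, cols, rows, 1):
            cols.remove(s); changed = True
print(rows, cols)   # Expected: ['D'] ['d']
```

A known property of strict-dominance elimination is that the surviving set does not depend on the order of elimination, so the loop above can remove strategies in any order and still end at {D, d}.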

Answer 2.
First, it should be noted that IESDS is not an equilibrium concept (whereas the Nash equilibrium is). However, in some instances it is a powerful tool for reaching conclusions about the Nash equilibria of a game. In particular, Nash equilibrium is a stronger concept than IESDS: Nash equilibrium strategies are never deleted by IESDS, since they cannot be strictly dominated. On the other hand, not all strategies that survive IESDS are necessarily part of a Nash equilibrium (it suffices to look at exercise 1, where, differently from this exercise, it is not possible to isolate the Nash equilibria by means of IESDS alone). To summarize these thoughts, I make two considerations:
1) If some strategies are part of a Nash equilibrium, then they cannot be eliminated by IESDS. The idea of the proof is by contradiction: if a Nash equilibrium strategy were deleted by IESDS, there would exist a strategy strictly dominating it; but then the first strategy could NOT be part of a Nash equilibrium, by the very definition of Nash equilibrium.
2) If IESDS eliminates all strategy profiles but one, then the surviving profile must be a Nash equilibrium of the game (as happens in this exercise). Again, if that were not the case, the procedure would at some point eliminate a Nash equilibrium strategy, which is impossible because of 1) above.

Exercise 4.

Answer 1.
This game is an instance of a game with an infinite strategy set. Because of that, a representation of the game in normal form using a matrix is not feasible, since the matrix would need infinitely many rows and columns. Thus, the normal form representation of the game takes the following form:
I = {1, 2}
Si = (0, +∞) for each player i = 1, 2. Note: if Si = S1, then S−i = S2.

ui(s1, s2) = si  if s1 + s2 ≤ 1,   and   ui(s1, s2) = 0  if s1 + s2 > 1.     (1)

Answer 2.
The pure-strategy Nash equilibria include all of the infinitely many strategy profiles such that the sum of the two values chosen by the players is 1, i.e. s1 + s2 = 1. At such points, neither player has an incentive to deviate (holding the choice of the other player fixed). This can be seen by writing down the best response functions of both players:

best responsei(s−i) = 1 − s−i  if s−i < 1,   and   best responsei(s−i) = (0, +∞)  if s−i ≥ 1.     (2)

where i = 1, 2
Indeed, if player 1 bets s1 < 1, the best response of player 2, given what player 1 did, is to bet 1 − s1. Moreover, another family of Nash equilibria is reached if both players bet a number in [1, +∞): in that case, neither player can change her strategy to increase her payoff, given the strategy chosen by the other player. Specifically, regardless of what a player does, if the opponent chooses a number greater than or equal to one, her payoff is always 0.
However, no other strategy profile can be a Nash equilibrium. In particular, if the sum of the two bets is smaller than one, then one player could increase her bet (given the bet of the other player) and thereby increase her payoff. Likewise, if one player bets a number greater than or equal to one while the other bets less than one, the player betting the higher number has an incentive to switch to a smaller bet, raising her payoff above zero whenever the resulting sum does not exceed one. If a player can unilaterally increase her payoff, the profile cannot be a Nash equilibrium.
To summarize: the Nash equilibria are all (si, s−i) such that si + s−i = 1, and all (si, s−i) such that si, s−i ≥ 1.
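As an illustrative side check (a rough sketch, not a proof, since it only scans a finite grid of candidate deviations), the following Python snippet compares a few profiles:

```python
# Sketch: spot-check profiles of the game u_i = s_i if s1 + s2 <= 1, else 0.
from fractions import Fraction as F

def payoff(si, sj):
    return si if si + sj <= 1 else F(0)

def profitable_deviation(si, sj, grid):
    # does player i have a strictly better bid against sj among the grid candidates?
    return any(payoff(d, sj) > payoff(si, sj) for d in grid)

grid = [F(k, 100) for k in range(1, 301)]        # candidate bids 0.01, 0.02, ..., 3.00
profiles = [(F(2, 5), F(3, 5)), (F(1), F(2)), (F(3, 10), F(3, 10)), (F(1, 2), F(2))]
for s1, s2 in profiles:
    ne = not profitable_deviation(s1, s2, grid) and not profitable_deviation(s2, s1, grid)
    print((s1, s2), "no profitable deviation on the grid" if ne else "profitable deviation exists")
# Expected: the first two profiles survive the check, the last two do not.
```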

Answer 3.
Among the infinitely many equilibria highlighted in Answer 2, the set with si, s−i ≥ 1 consists of Nash equilibria in weakly dominated strategies. Indeed, any bid si ≥ 1 yields a payoff of 0 regardless of the opponent's bid, while any bid below 1 yields a payoff that is at least as high, and strictly higher whenever the opponent bids low enough. However, it must be noted that these Nash equilibrium strategies are only weakly, not strictly, dominated; otherwise they could not be part of a Nash equilibrium (exercise 3, Answer 2, discusses these issues in more detail).

Exercise 5

Answer.
This game features three different players with the objective of maximizing their profits which are
given by the following formula:
πi (q1 , q2 , q3 ) = p(q1 , q2 , q3 )qi − c(qi )
where i = 1, 2, 3 indexes the three firms. The functional forms given by the exercise are p(q1, q2, q3) = 20 − (q1 + q2 + q3) for the price (which depends on the total quantity produced) and c(qi) = 30 + (1/2)(qi)^2 for the cost of any firm producing a strictly positive quantity, while a firm that produces nothing incurs no cost.
My first step is to maximize the profit of each firm (starting with firm 1), taking as given the quantities produced by firms 2 and 3. Mathematically, this translates into solving the following maximization problem:
max π1(q1, q2, q3), where q2 and q3 are given and q1 > 0.
This means evaluating
max π1(q1, q2, q3) = p(q1, q2, q3)q1 − c(q1) = [20 − (q1 + q2 + q3)]q1 − (30 + (1/2)(q1)^2).
So, I take the first partial derivative of the profit function of firm 1 with respect to q1 and set it equal to 0, obtaining the following expression (taking q2 and q3 as given, as in a best response setting):
20 − 2q1 − q2 − q3 − q1 = 20 − 3q1 − q2 − q3 = 0
Thus, given the quantities chosen by firms 2 and 3, firm 1's best response is given by the equation above.
Moreover, exploiting the symmetry of the game, I can notice that the other two best responses are:
20 − q1 − 3q2 − q3 = 0, for firm 2, given q1 and q3 and
20 − q1 − q2 − 3q3 = 0, for firm 3, given q1 and q2

Thus, I can now solve the system of best response functions to find the intersection:

20 − 3q1 − q2 − q3 = 0
20 − q1 − 3q2 − q3 = 0        (3)
20 − q1 − q2 − 3q3 = 0

The solution is q1 = q2 = q3 = 4.
However, if I substitute these values into the profit functions, I find that for qi = 4 profits are actually negative (−6 for each firm). This cannot be a Nash equilibrium, because each firm would prefer to produce nothing and earn zero. The key point is that the problem specifies that a firm producing an output of 0 suffers no loss: the 30 in the cost function is a high fixed cost of production, but it is not a sunk cost, so it is entirely avoidable by not starting production. Since a firm that produced q = 4 would regret it (given what the others did, it could have achieved a higher profit, here simply moving from −6 to 0, by producing nothing), I conclude that qi = 4 is not a Nash equilibrium.
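As a numerical side check of these claims (a sketch using sympy, not part of the formal argument), I can solve the three first-order conditions and evaluate the profit at the symmetric candidate:

```python
# Sketch: solve the FOC system (3) and evaluate profits at the symmetric solution.
import sympy as sp

q1, q2, q3 = sp.symbols("q1 q2 q3", nonnegative=True)
p = 20 - (q1 + q2 + q3)
profit = lambda qi: p * qi - (30 + sp.Rational(1, 2) * qi**2)

# First-order conditions 20 - 3*qi - qj - qk = 0, one per firm
focs = [sp.Eq(sp.diff(profit(q), q), 0) for q in (q1, q2, q3)]
sol = sp.solve(focs, (q1, q2, q3), dict=True)[0]
print(sol)                        # {q1: 4, q2: 4, q3: 4}
print(profit(q1).subs(sol))       # -6: each firm would rather produce 0 and earn 0
```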
However, it must be noted that q1 = q2 = q3 = 0 is not a Nash equilibrium either. Indeed, if q1 = q2 = q3 = 0, then firm 1 regrets not having entered the market with the monopoly quantity, which would yield positive profits (given that the other two players produce 0).
I will now properly formalize the consideration above and analyze the two cases that have not been
considered so far:
1) one company produces the monopolistic quantity and the other two firms produce 0
2) two companies produce the duopoly quantity and the third firm produces 0

Let me start with case 1)


The monopoly quantity in this game yields positive profits. It can be found from the condition MC = MR: taking the derivative of the cost function and of the revenue function gives the equation 20 − 2q = q, so that the profit-maximizing quantity is q = 20/3.
More directly, the same result can be obtained by substituting q2 = q3 = 0 in the first equation of system (3) above, finding q1 = 20/3. By symmetry, if firm 2 or firm 3 were the monopolist, it would produce the same amount (given that the other two firms produce 0). The profit from monopoly production (assuming the other two firms produce 0) is 110/3 ≈ 36.7.
I can thus conclude that q1 = q2 = q3 = 0 cannot be a Nash equilibrium, since a player has an incentive to produce the monopoly quantity when the other two players choose not to produce. Thus, I can highlight three Nash equilibria of the following type: {q1 = 20/3, q2 = 0, q3 = 0}, {q1 = 0, q2 = 20/3, q3 = 0}, {q1 = 0, q2 = 0, q3 = 20/3}.

Case 2) I now consider the duopoly case. I build a system from the first two equations of system (3), substituting q3 = 0, for the case where firms 1 and 2 are the duopolists; the same reasoning applies to the cases where firms 2 and 3, or firms 1 and 3, are the duopolists. The equilibrium quantity is qi = 5 for each of the two duopolists, which returns positive profits (7.5 each).
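The quantities and profits just mentioned, together with the best an excluded firm could do by entering, can be checked with a short sympy sketch (a side computation, not part of the formal solution):

```python
# Sketch: best entry quantity and profit for a firm facing a given total rival output.
import sympy as sp

q = sp.symbols("q", nonnegative=True)

def best_entry(q_others):
    # profit of a firm producing q > 0 when rivals supply q_others in total
    pi = (20 - q_others - q) * q - (30 + sp.Rational(1, 2) * q**2)
    q_star = sp.solve(sp.Eq(sp.diff(pi, q), 0), q)[0]
    return q_star, sp.simplify(pi.subs(q, q_star))

print(best_entry(0))                      # monopolist: (20/3, 110/3), profit ~ 36.7 > 0
print(best_entry(5))                      # duopolist facing 5: (5, 15/2), profit 7.5 > 0
print(best_entry(sp.Rational(20, 3)))     # entrant facing the monopolist: best profit < 0
print(best_entry(10))                     # entrant facing two duopolists: best profit < 0
```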

Conclusion: both the situation where one firm acts as a monopolist (with the other two producing 0) and the situation where two firms produce the duopoly quantity (with the third firm producing 0) are Nash equilibria. In the monopoly case, the two firms that did not enter have no incentive to deviate, since they would end up with negative profits: because of the high fixed cost, it is easy to compute that any small production cannot cover it. In the other case, if two firms act as duopolists, the third firm has no incentive to deviate since it would earn negative profits (again, fixed and variable costs are too high), and the two duopolists are not willing to move either, since doing so would diminish their profits.
Note: one could prove mathematically that in case 1) the two firms NOT in the monopoly cannot achieve non-negative profits by producing any quantity greater than 0: their profit function, as a function of their own q (given the monopoly quantity already on the market), is never positive. The same holds for the firm NOT participating in the duopoly in case 2).
πi = (20 − 20/3 − qi)qi − (30 + (1/2)(qi)^2) = 0 has NO real roots (case 1)
πi = (20 − (5 + 5) − qi)qi − (30 + (1/2)(qi)^2) = 0 has NO real roots (case 2)
Since these profit functions are downward-opening parabolas in qi with no real roots, they are strictly negative for every qi > 0. This proves more formally that the firms producing 0, in both the monopoly and the duopoly cases, cannot unilaterally deviate to improve their payoff. Thus, the game has 6 Nash equilibria in terms of quantities produced: {20/3, 0, 0}; {0, 20/3, 0}; {0, 0, 20/3}; {5, 5, 0}; {5, 0, 5}; {0, 5, 5}.
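As a final side check, the discriminants of the two profit quadratics above can be computed with sympy, confirming that they are negative and that the polynomials have no real roots:

```python
# Sketch: discriminants of the entrant's profit quadratics in cases 1 and 2.
import sympy as sp

q = sp.symbols("q")
for rivals in (sp.Rational(20, 3), 10):          # case 1: one monopolist; case 2: two duopolists
    pi = sp.expand((20 - rivals - q) * q - (30 + sp.Rational(1, 2) * q**2))
    print(rivals, sp.discriminant(pi, q), sp.real_roots(pi, q))
# Expected: both discriminants are negative and both root lists are empty.
```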
