
Applied Game Theory (應用賽局理論), Fall 110
Homework 3 Solutions
Due: 110/12/28
Student ID:                Name:

This assignment has 5 pages (including this page) and 6 problems, worth 100 points in total. Please submit your answers by 10:30 AM on the due date. Compile them into a single PDF and upload it to NTU COOL; name the PDF file "Homework X + Name". Late or make‐up submissions are not accepted, but any submission that is not blank will receive at least 60% of the points. Be sure to list the student IDs and names of all group members on the assignment; groups should consist of 3–4 people.

1. (20 points) Watson Chapter 10: Exercises 4

Solution:
(a) G solves max_x −y²x⁻¹ − xc⁴. This yields the first‐order condition y²/x² − c⁴ = 0. Rearranging, we find G's best‐response function to be x(ȳ) = ȳ/c². C solves max_y y^(1/2)(1 + xy)⁻¹. This yields the first‐order condition 1/(2y^(1/2)(1 + xy)) − y^(1/2)x/(1 + xy)² = 0. Rearranging, we find C's best‐response function to be y(x̄) = 1/x̄. Plotted in (x, y)‐space, x(ȳ) is the line y = c²x through the origin and y(x̄) is the hyperbola y = 1/x; the Nash equilibrium lies at their intersection.
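For completeness, here is a sketch of the algebra behind the two best‐response functions (using the payoffs as stated above):

\[
\frac{y^2}{x^2} - c^4 = 0 \;\Longrightarrow\; x^2 = \frac{y^2}{c^4} \;\Longrightarrow\; x(\bar{y}) = \frac{\bar{y}}{c^2},
\]
\[
\frac{1}{2y^{1/2}(1+xy)} = \frac{y^{1/2}x}{(1+xy)^2} \;\Longrightarrow\; 1 + xy = 2xy \;\Longrightarrow\; y(\bar{x}) = \frac{1}{\bar{x}},
\]

where the positive roots are taken since x, y > 0.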

(b) We find x and y such that x = y/c² and y = 1/x. The Nash equilibrium is
x = 1/c and y = c.
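As a quick check, substituting C's best response into G's:

\[
x = \frac{y}{c^2}, \quad y = \frac{1}{x}
\;\Longrightarrow\; x = \frac{1}{c^2 x}
\;\Longrightarrow\; x^2 = \frac{1}{c^2}
\;\Longrightarrow\; x = \frac{1}{c}, \quad y = c,
\]

taking the positive root.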

(c) As the cost of enforcement c increases, enforcement x decreases and criminal activity y increases.
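This follows directly from the equilibrium values found in part (b):

\[
x^*(c) = \frac{1}{c} \;\Longrightarrow\; \frac{dx^*}{dc} = -\frac{1}{c^2} < 0,
\qquad
y^*(c) = c \;\Longrightarrow\; \frac{dy^*}{dc} = 1 > 0.
\]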

2. (20 points) Watson Chapter 10: Exercises 14

Solution:

(a) We show that any member i has no incentive to deviate from the proposed equilibrium. If member i would have the median vote in equilibrium, then the
decision in equilibrium is d = xi . Since this is her ideal policy, any deviation
can only make her worse off.
If member i would not have the median vote in equilibrium, then she must
have either the highest vote or the lowest vote in equilibrium. Since everything is symmetric, consider just the case in which she has the lowest vote in
equilibrium. Then, in order to change the outcome, she would have to vote
higher than the equilibrium median voter. By doing so, either her vote would
become the median vote, or the highest equilibrium vote would become the
median vote. In either case, the committee’s decision would increase. Since
member i would have had the lowest vote in equilibrium, an increase in the
decision must make her worse off.

(b) The reasoning in part (a) considered member i’s incentives when the other voters
followed the equilibrium strategy of voting truthfully. Member i's incentives would not change if a committee member from part (a) who happened to have an ideal policy of xj = 0.5 (and thus a strategy of voting yj = 0.5) were replaced by a machine that always votes yj = 0.5. Hence, member i has no incentive to
deviate under the phantom median voter rule.

(c) It cannot be a Nash equilibrium for each voter to vote xi = yi , since in that
case the decision would be d = 0.6, and member 1 could profit by deviating to
y1′ = 0 and changing the decision to d′ = 0.5.
So we guess that member 1 votes y1∗ = 0. By the same intuition, we also guess
that member 3 votes y3∗ = 1. Then member 2’s best response is to vote y2∗ = 0.8,
in which case the decision is d∗ = 0.6. Since member 2 gets his ideal policy,
he has no incentive to deviate. Member 1 would like to reduce the policy, but
cannot vote any lower than 0. Similarly, member 3 would like to increase the
policy, but cannot vote any higher than 1. Hence, (0, 0.8, 1) is a Nash equilibrium.
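As an arithmetic check (assuming, consistent with the numbers above, that the decision is the average of the three votes and that votes lie in [0, 1]):

\[
d^* = \frac{0 + 0.8 + 1}{3} = 0.6,
\qquad
\frac{y_1 + 0.8 + 1}{3} \ge 0.6 \;\; \text{for all } y_1 \ge 0,
\qquad
\frac{0 + 0.8 + y_3}{3} \le 0.6 \;\; \text{for all } y_3 \le 1,
\]

so member 1 cannot push the decision below 0.6, member 3 cannot push it above 0.6, and member 2 already obtains his ideal policy.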
To show that this equilibrium is unique, observe that if the equilibrium decision is d > 0.3, then member 1 must be voting 0; otherwise he would want to reduce the decision by reducing his vote. Similarly, if the equilibrium decision is d < 0.9, then member 3 must be voting 1. So the only equilibrium that
reaches a decision d ∈ (0.3, 0.9) is the one described above.
There can be no equilibrium with d ≤ 0.3, since member 3 could deviate to
force a decision of at least 0.33 just by voting 1. Similarly, there can be no
equilibrium with d ≥ 0.9, since member 1 could deviate to force a decision of
at most 0.67 just by voting 0.

3. (10 points) Watson Chapter 14: Exercises 2

Solution: Suppose not. Then it must be that some pure‐strategy profile induces at
least two paths through the tree. Since a strategy profile specifies an action to be
taken in every contingency (at every node), having two paths induced by the same
pure‐strategy profile would require that Tree Rule 3 not hold.

4. (15 points) Watson Chapter 15: Exercises 8

Solution:

(a) Si = {A, B}×(0, ∞)×(0, ∞). Each player selects A or B, picks a positive number
when (A, B) is chosen, and picks a positive number when (B, A) is chosen.

(b) It is easy to see that 0 < (x1 + x2)/(1 + x1 + x2) < 1, and that (x1 + x2)/(1 + x1 + x2) approaches 1 as (x1 + x2) → ∞. Thus, each has a higher payoff when both
choose A. Further, B will never be selected in equilibrium. The Nash equilibria
of this game are given by (Ax1 , Ax2 ), where x1 and x2 are any positive numbers.
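The bounds used here can be seen by rewriting the payoff expression:

\[
\frac{x_1 + x_2}{1 + x_1 + x_2} = 1 - \frac{1}{1 + x_1 + x_2} \in (0, 1)
\quad \text{for } x_1, x_2 > 0,
\qquad
\lim_{x_1 + x_2 \to \infty} \left( 1 - \frac{1}{1 + x_1 + x_2} \right) = 1.
\]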

(c) There is no subgame perfect equilibrium because the subgames following (A, B)
and (B, A) have no Nash equilibria.

5. (15 points) Watson Chapter 22: Exercises 2

Solution:

(a) To support cooperation, δ must be such that 2/(1 − δ) ≥ 4 + δ/(1 − δ). Solving
for δ, we see that cooperation requires δ ≥ 2/3.
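Solving the inequality step by step (multiplying through by 1 − δ > 0):

\[
\frac{2}{1-\delta} \ge 4 + \frac{\delta}{1-\delta}
\;\Longleftrightarrow\; 2 \ge 4(1-\delta) + \delta
\;\Longleftrightarrow\; 3\delta \ge 2
\;\Longleftrightarrow\; \delta \ge \tfrac{2}{3}.
\]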

(b) To support cooperation by player 1, it must be that δ ≥ 1/2. To support cooperation by player 2, it must be that δ ≥ 3/5. Thus, we need δ ≥ 3/5.

(c) Cooperation by player 1 requires δ ≥ 4/5. Player 2 has no incentive to deviate in the short run. Thus, it must be that δ ≥ 4/5.

6. (20 points) Watson Chapter 22: Exercises 11

Solution:

(a) The efficient level of effort solves

    max_a 4a − a²,

which has a first‐order condition of 4 − 2a = 0, so we have a = 2 as the efficient level of effort.

(b) Since uP is strictly decreasing in p, p = 0 dominates all other strategies for the
principal. Similarly, since uA is strictly decreasing in a, a = 0 dominates all
other strategies for the agent. Thus the unique rationalizable strategy profile,
and therefore the unique Nash equilibrium, is a = p = 0.

(c) In the second period, subgame perfection requires that the players play a = 0
and p = 0. Since there is no way to condition behavior in the second period on
behavior in the first period so as to influence behavior in the first period, there
is only one subgame perfect Nash equilibrium, which has a = 0 and p = 0
played in both periods.

(d) Consider a grim‐trigger strategy profile in which a = 2 and p = 5 in every period along the equilibrium path, and a = p = 0 along the punishment path. The principal can gain 5 by deviating along the equilibrium path, but would lose 4 · 2 − 5 = 3 in every future period. The agent can gain 2² = 4 by deviating along the equilibrium path, but would lose 5 − 2² = 1 in every future period.
Thus the discount factor must satisfy both

    5 ≤ 3δ/(1 − δ)   and   4 ≤ δ/(1 − δ).

Rearranging and combining yields δ ≥ 4/5 > 5/8.
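In more detail, multiplying each inequality through by 1 − δ > 0:

\[
5 \le \frac{3\delta}{1-\delta} \;\Longleftrightarrow\; 5(1-\delta) \le 3\delta \;\Longleftrightarrow\; \delta \ge \tfrac{5}{8},
\qquad
4 \le \frac{\delta}{1-\delta} \;\Longleftrightarrow\; 4(1-\delta) \le \delta \;\Longleftrightarrow\; \delta \ge \tfrac{4}{5},
\]

so the binding constraint is δ ≥ 4/5.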
