Matthew Elliott
Cambridge
September 2020
Some Logistics
My office hour is Wednesday (7.15-8.15pm) via
Teams.
You have, or will be given soon, printed handouts.
Spare copies are available.
Everything, including slides and a recording of the
lectures, is on Moodle.
Please do your best to attend lectures live and ask
questions via text.
I'll periodically stop and review questions.
More interaction will make the lecture much more entertaining and interesting.
I normally rely on seeing puzzled faces to know when I'm being unclear.
Road Map
Myerson (1999).
Questions we’ll be able to think about
On the Brexit negotiations:
What difference does it make that the UK had to
get agreement from all EU member countries?
How could uncertainty about what they might have
been willing to accept affected the UK’s bargaining
position?
Who should you get to bargain on your behalf?
What impact might separating the trade deal part
of the negotiations have had?
...
Questions we’ll be able to think about
On business decisions:
Is a price matching “never knowingly undersold”
commitment good for consumers?
Should a board appoint a CEO that will maximize
profits or empire build?
When are mergers good for consumers?
...
Questions we’ll be able to think about
Example:
N = {1, 2}
Ai = {Q, F } for i = 1, 2
A = {(Q, Q), (F, F ), (Q, F ), (F, Q)}
(F, Q) ≻1 (Q, Q) ≻1 (F, F) ≻1 (Q, F)
(Q, F) ≻2 (Q, Q) ≻2 (F, F) ≻2 (F, Q)
Notation
Simultaneous move strategic game described by tuple:
(N, {Ai}i∈N, {≿i}i∈N)
where N is the set of players, {Ai}i∈N the action sets for all players, and {≿i}i∈N the preferences for all players.
Preferences are ordinal. They tell us preference
orderings.
They are not cardinal. They don’t tell us how much
people care.
Can represent preferences by a payoff function or
utility function ui : A → R such that a′ ≿i a if and
only if ui(a′) ≥ ui(a).
But many different utility functions represent the
same preferences.
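To see this concretely, here is a small Python sketch (the payoff numbers are illustrative, chosen to match the ordering in the example above): applying any strictly increasing transformation to a utility function leaves the preference ordering unchanged.

```python
from itertools import product

# Illustrative payoffs for player 1 respecting (F,Q) ≻1 (Q,Q) ≻1 (F,F) ≻1 (Q,F).
u = {("F", "Q"): 4, ("Q", "Q"): 3, ("F", "F"): 2, ("Q", "F"): 1}

# A different utility function: a strictly increasing transformation of u.
v = {a: u[a] ** 3 for a in u}

# Both functions represent the same ordinal preferences.
for a, b in product(u, u):
    assert (u[a] >= u[b]) == (v[a] >= v[b])
```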
Example 1
By convention:
row player is Player 1
column player is player 2
            Player 2
            Heads    Tails
Player 1
  Heads     1, −1    −1, 1
  Tails     −1, 1     1, −1
What is the game? Known in the literature as matching
pennies.
Modeling a Strategic Situation
Player 2
Quiet Fink
Quiet
Player 1
Fink
Modeling a Strategic Situation
Police offer both suspects the following deal, shown for
player 1.
If 1 finks and 2 keeps quiet, (F, Q), no charge;
If both keep quiet, (Q, Q), trespassing charge;
If both fink, (F, F), burglary charge but reduced
sentence for cooperating;
If 1 keeps quiet and 2 finks, (Q, F), burglary charge.
            Player 2
            Quiet    Fink
Player 1
  Quiet     a, a     b, c
  Fink      c, b     d, d
            Player 2
            Quiet    Fink
Player 1
  Quiet     2, 2     0, 3
  Fink      3, 0     1, 1
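To make "no player wants to deviate unilaterally" concrete, here is a short Python sketch that enumerates the stable action profiles of a 2×2 game (the 0 = Quiet, 1 = Fink encoding is assumed for illustration):

```python
def pure_equilibria_2x2(u1, u2):
    """Profiles where neither player gains by unilaterally deviating."""
    stable = []
    for a in (0, 1):
        for b in (0, 1):
            if u1[a][b] >= u1[1 - a][b] and u2[a][b] >= u2[a][1 - b]:
                stable.append((a, b))
    return stable

# The Prisoners' Dilemma above, encoding 0 = Quiet, 1 = Fink.
u1 = [[2, 0], [3, 1]]
u2 = [[2, 3], [0, 1]]
print(pure_equilibria_2x2(u1, u2))  # [(1, 1)], i.e. (Fink, Fink)
```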
Describe this game
Player 2
Left Right
Up 4, −2 1, 0
Player 1
Down 3, 1 −10, 10
N = {1, 2}
A1 = {Up, Down}; A2 = {Left, Right}
(U, L) ≻1 (D, L) ≻1 (U, R) ≻1 (D, R)
(D, R) ≻2 (D, L) ≻2 (U, R) ≻2 (U, L)
Familiar?
Relabel Actions
Player 2
Quiet Fink
Fink 4, −2 1, 0
Player 1
Quiet 3, 1 −10, 10
N = {1, 2}
A1 = {Fink, Quiet}; A2 = {Quiet, Fink}
(F, Q) ≻1 (Q, Q) ≻1 (F, F) ≻1 (Q, F)
(Q, F) ≻2 (Q, Q) ≻2 (F, F) ≻2 (F, Q)
Another representation of the Prisoners’ Dilemma.
Basketball or Soccer
Player 2
Basketball Soccer
Basketball 2, 1 0, 0
Player 1
Soccer 0, 0 1, 2
Stag Hunt
Player 2
Stag Hare
Stag 2, 2 0, 1
Player 1
Hare 1, 0 1, 1
Outline
1 Game Theory: Strategic Thinking
Introduction
Nash Equilibria
Applications: Nash equilibria
Dominant Actions
Applications: Dominant Actions
Iterated Deletion of Strictly Dominated Actions
Applications: Iterated Deletion of Strictly Dominated Actions
Additional Applications
So in a Nash Equilibrium:
Player 2
Quiet Fink
Quiet 2, 2 0, 3
Player 1
Fink 3, 0 1, 1
Prisoners’ Dilemma
Can represent best responses with arrows.
[Figure: the Prisoners' Dilemma matrix with best-response arrows; both players' arrows point from Quiet to Fink, so (Fink, Fink) is the only cell with no exiting arrows.]
Nash equilibria are all cells with no arrows exiting.
Stag Hunt
Player 2
Stag Hare
Stag 2, 2 0, 1
Player 1
Hare 1, 0 1, 1
Stag Hunt
[Figure: best-response arrows for the Stag Hunt; (Stag, Stag) and (Hare, Hare) have no exiting arrows, so both are pure-strategy Nash equilibria.]
Basketball or Soccer
Player 2
Basketball Soccer
Basketball 2, 1 0, 0
Player 1
Soccer 0, 0 1, 2
Basketball or Soccer
[Figure: best-response arrows; (Basketball, Basketball) and (Soccer, Soccer) have no exiting arrows, so both are pure-strategy Nash equilibria.]
Matching Pennies
Player 2
Heads Tails
Heads 1, −1 −1, 1
Player 1
Tails −1, 1 1, −1
Matching Pennies
[Figure: best-response arrows; every cell has an exiting arrow, so there is no pure-strategy Nash equilibrium.]
Formalising Best Responses
If players other than i play actions a−i , how do we
mathematically describe i’s best response?
Issue: i's best response may not be single valued.
Many actions may be equally good.
Need best response correspondences (i.e. set valued
functions).
Maximize profits: πi = qi P(Q).
max_{qi} qi(1 − qi − qj)
Generates best response functions: Bi(qj) = (1 − qj)/2.
[Figure: the two best response lines in (q1, q2) space.]
Is there a Nash equilibrium? At what quantities?
qi = (1 − qj)/2 and so qi = 1/2 − (1/2 − qi/2)/2 = 1/4 + qi/4, so qi = 1/3.
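A quick numerical check of this fixed point (a sketch; demand 1 − Q and zero marginal cost, as on the slide):

```python
def best_response(q_other):
    # B_i(q_j) = (1 - q_j) / 2 from the first-order condition
    return (1 - q_other) / 2

# Iterating the best response converges to the equilibrium quantity 1/3.
q = 0.0
for _ in range(60):
    q = best_response(q)

assert abs(q - 1 / 3) < 1e-12
assert abs(best_response(1 / 3) - 1 / 3) < 1e-12  # 1/3 is a fixed point
```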
Building a Road
Bergstrom, Blume and Varian (86).
Two villages want to build a connecting road.
Road quality improves as more is invested.
Both would benefit from the road.
How will they split the cost of building it in
equilibrium?
. . . in equilibrium both make individually optimal
contributions, given the other’s contribution.
Building a Road-model
b1(0) = b2(0) = 0.
b′1(q) > b′2(q) for all q ≥ 0.
b′2(0) > 1.
b″i(q) < 0 for i = 1, 2.
There exists a q̂ > 0 such that b′1(q̂) = 1.
Building a Road-analysis
How can we find best responses?
Player 1's problem (for a given a2): max_{a1≥0} b1(a1 + a2) − a1.
Form the Lagrangian L(a1) = b1(a1 + a2) − a1 + λa1, with λ ≥ 0.
Aside: Lagrangians
Example: Suppose
b1(q) = 2√q and b2(q) = √q
[Figure: player 2's objective b2(1 + a2) − a2 plotted against a2, for a1 = 1; the unconstrained maximum lies at a negative a2, outside the feasible set.]
Lagrangians
Can we change the objective so the first order conditions
yield the correct (constrained) maximum?
Need:
dL(a2)/da2 |_{a2=0} = 0
So set:
λ2 = 1 − b′2(a1)
[Figure, for a1 = 1: the initial objective and the adjusted objective plotted against a2; the adjusted objective attains its maximum at a2 = 0.]
Lagrangians–summary
Change the problem to:
max_{a2} L(a2) = [b2(1 + a2) − a2] + λ2 a2
where the bracketed term is the initial objective and λ2 a2 is the adjustment.
No, this implies b′1(0) < 1 and b′2(0) < 1.
Building a Road-analysis
No, this implies b′1(a∗1 + a∗2) < 1 and b′2(a∗1 + a∗2) = 1.
Player 2
𝐴 𝐵 𝐶 𝐷
𝑊 4,1 9,3 1,6 8,7
Player 1
Monopoly profits:
πi = qM(1 − qM − c) = ((1 − c)/2)((1 − c)/2) = (1 − c)²/4
Dominance Solvability
Proposition
The linear Cournot game with two identical firms is
dominance solvable. The only actions surviving the
iterated deletion of dominated strategies are
q1 = q2 = (1 − c)/3 = 2qm/3.
Proof
Lemma
Suppose qj ∈ [q̲, q̄]; then all qi > Bi(q̲) are strictly
dominated by qi = Bi(q̲), and all qi < Bi(q̄) are strictly
dominated by Bi(q̄).
Lemma Proof
Bi(q̄) = (1 − q̄ − c)/2 =: q̂i.
Need to show D(qj) > 0 for all qi < q̂i and all qj ∈ [q̲, q̄], where D(qj) is the payoff gain from playing q̂i rather than qi against qj.
Can we sign D(q̄)? Yes!
Because q̂i is the unique best response to qj = q̄,
D(q̄) > 0.
Applying the Lemma
[Figures: starting from the interval [0, qm] of undominated quantities, each application of the best response map shrinks the surviving interval: B(0) = qm, B(qm) = qm/2, B(qm/2) = 3qm/4, and so on.]
Applying the Lemma
B(0) = qm
B²(0) = qm/2 = qm − qm/2
B³(0) = 3qm/4 = qm − qm/2 + qm/4
B⁴(0) = 5qm/8 = qm − qm/2 + qm/4 − qm/8
What is B^t(0)?
B^t(0) = qm Σ_{k=0}^{t−1} (−1/2)^k
Applying the Lemma
lim_{t→∞} qm Σ_{k=0}^{t−1} (−1/2)^k = qm/(1 + 1/2) = 2qm/3.
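The same convergence can be checked numerically (a sketch; an illustrative cost c = 0.4 is assumed):

```python
c = 0.4
qm = (1 - c) / 2               # monopoly quantity

def B(q):
    return (1 - q - c) / 2     # best response

x = 0.0
for _ in range(60):            # iterate B, as in B^t(0)
    x = B(x)

# The iterates converge to 2*qm/3 = (1 - c)/3, the Cournot quantity.
assert abs(x - 2 * qm / 3) < 1e-12
assert abs(x - (1 - c) / 3) < 1e-12
```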
Outline
1 Game Theory: Strategic Thinking
Introduction
Nash Equilibria
Applications: Nash equilibria
Dominant Actions
Applications: Dominant Actions
Iterated Deletion of Strictly Dominated Actions
Applications: Iterated Deletion of Strictly Dominated Actions
Additional Applications
F.O.C: 1 − 2pi + c = 0.
So pi = (1 + c)/2 =: pM
πi(pi) = ((1 + c)/2 − c)(1 − (1 + c)/2) = ((1 − c)/2)² = (1 − c)²/4
Best Responses
What is the best response of firms i to a price pj ?
Bi(pj) =
  any pi > pj     if pj < c
  any pi ≥ c      if pj = c
  does not exist  if pj ∈ (c, pM]
  pM              if pj > pM
Proposition
There is a unique Nash equilibrium in which pi = pj = c.
Proof
[Figures: the two best response correspondences drawn in (pi, pj) space; they intersect only at pi = pj = c.]
Discussion: Cournot Vs Bertrand
Does price or quantity competition make a
difference for a monopolist?
Proposition
There is a unique Nash equilibrium of the two player
Hotelling model. Letting M be the solution to
∫_{t=0}^{M} f(t) dt = 1/2,
both candidates locate at M and the election is a tie.
Proof Outline
Locating at M guarantees a candidate does not lose.
Consumer utility: v − (l − xi)²t − pi.
The indifferent consumer l∗ satisfies v − (l∗)²t − p1 = v − (1 − l∗)²t − p2,
and so l∗ = (t + p2 − p1)/(2t).
Demands
[Figure: consumers in [0, l∗] buy from firm 1 (demand l∗); consumers in (l∗, 1] buy from firm 2 (demand 1 − l∗).]
Product Differentiation
p1 = (t + p2 + c)/2
p2 = (t + p1 + c)/2
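Iterating these best responses converges to the simultaneous-move prices p = t + c (a sketch, using the t = 1, c = 0.5 values that appear in the later figures):

```python
def br(p_other, t, c):
    # B_i(p_j) = (t + p_j + c) / 2
    return (t + p_other + c) / 2

t, c = 1.0, 0.5
p = 0.0
for _ in range(100):   # the map is a contraction with modulus 1/2
    p = br(p, t, c)

assert abs(p - (t + c)) < 1e-9  # fixed point: p = t + c
```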
Proposition
There is no Nash equilibrium of the three player
Hotelling model with entry.
Proof Outline
If no candidate enters, then entering is a profitable deviation.
If one candidate enters, then there is a profitable deviation for another to enter:
they can enter at M and at worst tie the election.
Rock, Paper, Scissors
Player 2
Rock Paper Scissors
Rock 0, 0 −1, 1 1, −1
Player 1 Paper 1, −1 0, 0 −1, 1
Scissors −1, 1 1, −1 0, 0
Is there a Nash equilibrium? No.
Key questions:
Let:
α(a) be the probability of an action profile a ∈ A,
u(a) be the utility of that outcome for sure.
Then the v-NM utility function is:
U(α) = Σ_{a∈A} α(a) u(a)
Example
Suppose there are two possible action profiles a and
a0 .
α(a)ui (a) + α(a0 )ui (a0 ) > β(a)ui (a) + β(a0 )ui (a0 ).
Discussion
Example:
50% chance of 0 dollars, 50% chance of 1000 dollars.
Prefer 500 dollars for sure to the lottery, but indifferent between the lottery and 450 dollars for sure.
u(0) = 0; u(1000) = 10; u(450) = 5; u(500) > 5.
Action sets
Suppose there are m actions in Ai .
A twist...
Player 2
Rock Paper Scissors
Rock 0, 0 −1, 1 2, −1
Player 1 Paper 1, −1 0, 0 −1, 1
Scissors −1, 2 1, −1 0, 0
What is different about this game?
What is the new MSNE?
A twist...
Suppose player i = 1, 2 plays:
rock with probability ri > 0
paper with probability pi > 0
scissors with probability si > 0,
We then have:
U1 (rock) = 0r2 − 1p2 + 2s2
U1 (paper) = 1r2 + 0p2 − s2
U1 (scissors) = −1r2 + 1p2 + 0s2
By the indifference property:
U1 (rock) = U1 (paper) = U1 (scissors).
A twist...
In this equilibrium:
ri = 1/3, pi = 5/12, si = 1/4.
Are there any other mixed strategy Nash equilibria in
which both i and j mix over all alternatives?
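The indifference property can be verified numerically (a sketch; player 1's payoff matrix is taken from the twisted game above):

```python
# Player 1's payoffs, rows/cols ordered (rock, paper, scissors).
M = [[0, -1, 2],
     [1, 0, -1],
     [-1, 1, 0]]

mix = [1 / 3, 5 / 12, 1 / 4]  # (r, p, s) claimed above
assert abs(sum(mix) - 1) < 1e-12

# Expected payoff to each pure action against the mix.
U = [sum(M[a][b] * mix[b] for b in range(3)) for a in range(3)]
assert max(U) - min(U) < 1e-12  # all three actions give the same payoff
```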
Conceptual Issues with Mixed Strategies
How realistic is randomization? Do people really
randomize?
[Payoff matrix: the Server serves down the line or across court; the Returner covers down the line or across court. The entries are the Server's win probabilities: 0.5 and 0.8 for a down-the-line serve against down-line and across-court coverage, 0.7 and 0.4 for an across-court serve.]
Let dr be the probability the Returner covers down the line.
The Server's expected payoff from serving down the line is:
0.5dr + 0.8(1 − dr).
The Server's expected payoff from choosing across court is:
0.7dr + 0.4(1 − dr).
The Server chooses ds ∈ [0, 1] to maximize expected payoff.
This is maximized by ds = 1 if dr < 2/3,
and maximized by ds = 0 if dr > 2/3.
Best Response Correspondences
[Figures: the server's best response Bs(dr) jumps at dr = 2/3; it crosses the returner's best response Br(ds) at dr = 2/3, the mixed strategy equilibrium.]
Tennis Serving
Bertrand Competition with Sunk Costs
Rearranging:
Fj(pi) = (1/πj) · (pi − c − k)/(pi − c)
Support is [c + k, v]. So Fj (c + k) = 0 (holds).
Moreover, Fj (v) = 1.
Thus πj = (v − c − k)/(v − c) and
Fj(p) =
  0                                          if p < c + k
  (v − c)(p − c − k)/((v − c − k)(p − c))    if p ∈ [c + k, v]
  1                                          if p > v
for j = 1, 2.
Equilibrium CDF
[Figure: the equilibrium CDF Fj(p), rising from 0 at p = c + k to 1 at p = v.]
How do we know this is an equilibrium?
Suppose player 2:
enters with probability π2 = (v − c − k)/(v − c)
and, conditional on entering, chooses a price
according to the CDF
F2(p) =
  0                                          if p < c + k
  (v − c)(p − c − k)/((v − c − k)(p − c))    if p ∈ [c + k, v]
  1                                          if p > v
Sociological explanations:
People who live in cities don’t care about each
other.
v − c = v(1 − (1 − p)^{n−1}).
Kitty Genovese
[Figure: the equilibrium probability that any individual helps falls as the number of bystanders n grows.]
Intuition for existence
[Figures: the best response functions B1(a2) and B2(a1) drawn on [0,1]²; continuous curves running between opposite sides of the square must cross, and any crossing is a Nash equilibrium.]
Existence
[Figure: B1(a2) on the unit square.]
Brouwer's fixed point theorem: Conditions
What can go wrong when we don't have a compact,
convex domain or a continuous function?
[Figures: with a discontinuous best response, B2(a1) can jump over B1(a2) so the curves never cross, and no fixed point exists.]
Applying Brouwer
How can we represent Nash equilibria as fixed
points?
Define B(a) = (B1(a−1), . . . , Bn(a−n)). Then a∗ is a Nash equilibrium if and only if B(a∗) = a∗.
Existence: Mixed Strategies
            Player 2
            l        r
Player 1
  u         8, 3     0, 3
  m         3, 9     3, 1
  d         0, 8     8, 9
Finding equilibria
Suppose you are player 1, would you ever play middle
(m)?
            Player 2
            l (p)    r (1 − p)
Player 1
  u         8, 3     0, 3
  m         3, 9     3, 1
  d         0, 8     8, 9
[Figure: player 1's payoffs π1(u) = 8p, π1(d) = 8(1 − p) and π1(m) = 3 as functions of p; m is never a strict best response to any p.]
[Figure: the 50/50 mixture of u and d earns 4 for every p, strictly above π1(m) = 3, so m is strictly dominated by the mixture.]
Useful Results
Proposition
Action ai is strictly dominated by a mixture of some
other actions if and only if ai is never a best response to
any mixed strategy profile played by the other players.
Proposition
No strictly dominated pure strategy is ever played with
positive probability in a mixed strategy Nash equilibrium.
Finding equilibria
Find all Nash equilibria—mixed and pure.
Player 2
𝐴 𝐵 𝐶 𝐷
𝑊 4,1 1,3 1,6 8,7
Player 1
            Player 2
            C (p)    D (1 − p)
Player 1
  W (q)     1, 6     8, 7
  Z (1 − q) 3, 9     7, 8
1's indifference: p·1 + (1 − p)·8 = p·3 + (1 − p)·7
2's indifference: q·6 + (1 − q)·9 = q·7 + (1 − q)·8
Finding equilibria
Find all Nash equilibria—mixed and pure.
            Player 2
            C (p)    D (1 − p)
Player 1
  W (q)     1, 6     8, 7
  Z (1 − q) 3, 9     7, 8
1’s indifference: 𝑝 = 1/3
2’s indifference: 𝑞 = 1/2
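A numerical check of these indifference conditions (a sketch using the payoffs in the table above):

```python
p, q = 1 / 3, 1 / 2

# Player 1's payoffs to W and Z when 2 plays C with probability p.
u1_W = p * 1 + (1 - p) * 8
u1_Z = p * 3 + (1 - p) * 7
assert abs(u1_W - u1_Z) < 1e-12  # 1 is indifferent

# Player 2's payoffs to C and D when 1 plays W with probability q.
u2_C = q * 6 + (1 - q) * 9
u2_D = q * 7 + (1 - q) * 8
assert abs(u2_C - u2_D) < 1e-12  # 2 is indifferent
```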
Outline
1 Game Theory: Strategic Thinking
Player 2
iPhone Android
iPhone 5, 5 4, 2
Player 1
Android 2, 4 6, 6
iPhone or Android
Suppose two mixes as shown:
Player 2
iPhone (p) Android (1 − p)
iPhone 5, 5 4, 2
Player 1
Android 2, 4 6, 6
U1 (iPhone) = 5p + 4(1 − p)
U1 (Android) = 2p + 6(1 − p)
1’s Payoff
We plot 1’s payoff from alternative pure strategies as a
function of 2’s mixing probability p.
[Figure: U1(iPhone) = 5p + 4(1 − p) and U1(Android) = 2p + 6(1 − p) cross at p = 0.4; iPhone is 1's best response for p > 0.4, Android for p < 0.4.]
Stability
Player 2
Hawk Dove
Hawk −1, −1 4, 0
Player 1
Dove 0, 4 2, 2
Hawk Dove Game
Player 2
Hawk (p) Dove (1 − p)
Hawk −1, −1 4, 0
Player 1
Dove 0, 4 2, 2
U1 (Hawk) = −p + 4(1 − p)
U1 (Dove) = 0 + 2(1 − p)
1’s Payoff
1’s payoff is shown for alternative pure strategies as a function of 2’s
mixing probability p. If there is a single population we are drawing
players from, 2/3rds playing Hawk is stable.
[Figure: U1(Hawk) = −p + 4(1 − p) and U1(Dove) = 2(1 − p) cross at p = 2/3.]
Outline
1 Game Theory: Strategic Thinking
[Game tree: the Entrant chooses In or Out; after In, the Incumbent chooses Acc or Fight.
Out → (0, 3); In, Acc → (1, 1); In, Fight → (−1, 0)]
More formally
A game tree is a graph (nodes and edges) in which:
There is a finite set of nodes.
[Game tree: 1 chooses In or Out; 2 then chooses h or l.
In, h → (1, 1); In, l → (−1, 0)
Out, h → (0, 3); Out, l → (0, 0)]
Normal Form Representation
Player 2
ℎ, ℎ ℎ, 𝑙 𝑙, ℎ 𝑙, 𝑙
Player 1
Entry Game
What are the pure strategy Nash equilibria of this game?
Threats and Credibility
Does the (out, fight) equilibrium seem strange?
Entry Game
What are the subgames? Which Nash equilibria are
subgame perfect?
[Subgame after entry: the Incumbent chooses Acc → (1, 1) or Fight → (−1, 0).]
Centipede Game
[Game tree: the players alternate choosing stop or continue; stopping at the successive nodes yields (3,0), (2,2), (4,1), (3,3), (5,2), (4,4), and (6,3) if nobody ever stops. Backward induction: at every node the mover prefers to stop, so play stops immediately at the first node with payoffs (3,0).]
Back to our questions
Firm 1 chooses q1 .
Firm 2 observes q1 and chooses q2 .
Market output: Q = q1 + q2
Inverse demand function: P (Q) = 1 − Q
Marginal cost: c
Using Backward Induction
Suppose firm 1 has chosen q1.
max_{q2≥0} q2(1 − Q − c)
From FOC: B2(q1) = (1 − q1 − c)/2.
From FOC: q1 = (1 − c)/2.
And so q2 = (1 − c)/4.
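The backward induction step can be checked numerically (a sketch with an illustrative c = 0): substituting B2 into firm 1's profit and searching over q1 recovers the leader quantity (1 − c)/2.

```python
c = 0.0

def follower(q1):
    # Firm 2's best response from its FOC
    return (1 - q1 - c) / 2

def leader_profit(q1):
    q2 = follower(q1)
    return q1 * (1 - q1 - q2 - c)

# Grid search over firm 1's quantity.
grid = [i / 1000 for i in range(1001)]
q1_star = max(grid, key=leader_profit)

assert abs(q1_star - (1 - c) / 2) < 1e-9        # leader: 1/2
assert abs(follower(q1_star) - (1 - c) / 4) < 1e-9  # follower: 1/4
```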
Stackelberg in Pictures
[Figures: firm 2's best response line in (q1, q2) space together with firm 1's isoprofit curves; firm 1's profit increases toward its monopoly point, and firm 1 picks the point on firm 2's best response line lying on its best isoprofit curve.]
Differentiated Bertrand: Sequential Moves
Recall our differentiated Bertrand Model:
[Figure: the indifferent consumer l∗ splits demand, l∗ to firm 1 and 1 − l∗ to firm 2.]
Differentiated Bertrand: Sequential Moves
Then 2 solves:
π1 = q1(p1, p2)(p1 − c) = 9t/16
π2 = q2(p1, p2)(p2 − c) = 25t/32
Differentiated Bertrand: Sequential Moves
With simultaneous moves p1 = p2 = pS = t + c.
With sequential moves p1 > p2 > pS for all t > 0,
and p1 = p2 = pS = c for t = 0.
Differentiated Bertrand: Sequential Moves
[Figures, for t = 1 and c = 0.5: the best response lines B1(p2) and B2(p1) in price space; the sequential-move outcome lies on B2(p1) at prices above the simultaneous-move intersection.]
Delegating to Maximizing Revenues
max_{q1} q1(1 − q1 − q2)
and so:
B1(q2) = (1 − q2)/2
Delegating to Maximizing Revenues
max_{q2} q2(1 − q1 − q2 − c)
and so:
B2(q1) = (1 − q1 − c)/2
Delegating to Maximizing Revenues
Then, in equilibrium
q1 = (1 + c)/3 and q2 = (1 − 2c)/3,
and
π1 = ((1 + c)/3)((1 − 2c)/3) and π2 = ((1 − 2c)/3)².
Recall that with both managers maximizing profits:
q1 = q2 = (1 − c)/3
and
π1 = π2 = (1 − c)²/9.
Delegation
max_{qi} qi(1 − qi − qj)
and so:
Bi(qj) = (1 − qj)/2
Delegation
In equilibrium
q1 = q2 = 1/3
and
π1 = π2 = (1 − 3c)/9.
Delegation
Have the following ordering of profits for Firm 1:
π1(revenues, profits) = (1 − c − 2c²)/9
> π1(profits, profits) = (1 − 2c + c²)/9
> π1(revenues, revenues) = (1 − 3c)/9
> π1(profits, revenues) = (1 − 4c + 4c²)/9.
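A numerical check of this ordering (a sketch with an illustrative c = 0.1; it uses the fact that a revenue-maximizing manager behaves like a profit maximizer with zero perceived cost, together with the closed-form Cournot solution for asymmetric costs):

```python
c = 0.1

def firm1_profit(obj1, obj2):
    # A manager maximizing revenue acts as if marginal cost were 0.
    c1 = 0 if obj1 == "revenues" else c
    c2 = 0 if obj2 == "revenues" else c
    # Cournot with asymmetric (perceived) costs: qi = (1 - 2ci + cj) / 3
    q1 = (1 - 2 * c1 + c2) / 3
    q2 = (1 - 2 * c2 + c1) / 3
    P = 1 - q1 - q2
    return q1 * (P - c)  # firm 1's true profit

rp = firm1_profit("revenues", "profits")
pp = firm1_profit("profits", "profits")
rr = firm1_profit("revenues", "revenues")
pr = firm1_profit("profits", "revenues")
assert rp > pp > rr > pr  # the ordering on the slide
```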
Delegation
Thus
pI = 3/4 and Q = 1/4.
Hence our guess that there is an interior solution is
validated.
Let a2 = 100 − a1.
[Game tree: 1 proposes a1 ∈ [0, 100]; 2 accepts → (a1, 100 − a1), or rejects → (0, 0).]
The n-Player Ultimatum Game
n players 1, 2, . . . , n.
Round 1:
1 makes proposal x ∈ [0, 100].
2 accepts or rejects.
If 2 accepts, payoffs are (x, 100 − x).
If 2 rejects, 2 makes a counter offer but the size of
the pie shrinks.
Alternating Offer Bargaining
Round 2:
Player 2 offers y ∈ [0, 80].
If player 1 accepts, payoffs are (80 − y, y).
If 1 rejects, 1 makes a counter offer but the size of
the pie shrinks.
Alternating Offer Bargaining
Round 3:
Player 1: x ∈ [0, 64].
Round 4:
Player 2: y ∈ [0, 51].
Round 5:
Player 1: x ∈ [0, 41].
Round 6:
Player 2: y ∈ [0, 33].
Alternating Offer Bargaining
Solve by backward induction. Suppose indifferent players
accept.
Round 6. Player 1 gets $0. Player 2 gets $33.
Round 5: Player 1 gets $8. Player 2 gets $33.
Round 4: Player 1 gets $8. Player 2 gets $43.
Round 3: Player 1 gets $21. Player 2 gets $43.
Round 2: Player 1 gets $21. Player 2 gets $59.
Round 1: Player 1 gets $41. Player 2 gets $59.
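This backward induction is mechanical enough to code (a sketch; the pie sizes 100, 80, 64, 51, 41, 33 are taken from the rounds above):

```python
pies = [100, 80, 64, 51, 41, 33]  # pie available in rounds 1..6

# Walk backwards: each proposer offers the responder exactly what the
# responder could secure as next round's proposer (0 after round 6).
cont = 0
for pie in reversed(pies):
    cont = pie - cont  # this round's proposer keeps the rest

p1, p2 = cont, 100 - cont  # player 1 proposes in round 1
assert (p1, p2) == (41, 59)
```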
Alternating Offer Bargaining
Have a unique SPNE with payoffs (41, 59). But not what
usually gets played.
Perhaps too hard to calculate.
But biases are systematically different.
Last round subgame is the Ultimatum game.
So play here might be expected to differ.
What if there was no round 6? Who’d get the
higher payoff?
Alternating Offer Bargaining
What if the game had no end? Would we get a similar
solution?
Thus,
V1 = 100 − δV2
V2 = 100 − δV1.
Alternating Offer Bargaining
Solving these equations,
V1 = 100/(1 + δ)
V2 = 100/(1 + δ)
So, there is a subgame perfect equilibrium in which 1
offers 2 100δ/(1 + δ) and 2 accepts.
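A quick consistency check of this solution (a sketch with an illustrative δ = 0.9):

```python
delta = 0.9

V = 100 / (1 + delta)               # proposer's value
offer = 100 * delta / (1 + delta)   # what the responder is offered

# The proposer's value solves V = 100 - delta * V.
assert abs(V - (100 - delta * V)) < 1e-9
# Offer plus proposer's share exhausts the pie.
assert abs(offer + V - 100) < 1e-9
```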
An alternative interpretation of δ:
The opportunity is lost with probability 1 − δ each period.
Alternating Offer Bargaining: Discussion
[Game tree: 2 proposes a2 ∈ [0, 100]; 1 accepts → (a2, 100 − a2), or rejects → (0, 0).]
Simultaneous Moves in Extensive Form
Seen how finite sequential move games can be
transformed into simultaneous move games.
We can go from an extensive form representation to
a normal form representation, but can we go the
other way?
Can we turn any simultaneous move game into a
sequential game?
How might we represent simultaneous moves in a
game tree?
Let players make moves that the other players do
not observe.
Matching Pennies in Extensive Form
Each node?
Subgames
What is a subgame now? What makes sense?
n players 1, 2, . . . , n.
Players N
Actions ai ∈ Ai for i ∈ N
Payoffs ui (a).
Formalisation
Define the T -period repeated game of G, with discount
factor δ as a extensive form game with:
Players N
Strategy for i: action ai ∈ Ai at every information
set for i.
Payoffs:
Ui(a¹, a², . . . , a^T) = Σ_{t=1}^{T} δ^{t−1} ui(a^t),
where a^t ∈ A.
Formalisation
(1 − δ) Σ_{t=1}^{∞} δ^{t−1} u(a) = (1 − δ) Σ_{t=0}^{∞} δ^t u(a) = u(a)
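The normalization can be checked numerically (a sketch with illustrative δ = 0.9 and u(a) = 5; the infinite sum is truncated far enough for the tail to be negligible):

```python
delta, u = 0.9, 5.0

total = sum(delta ** (t - 1) * u for t in range(1, 1000))
normalized = (1 - delta) * total

assert abs(normalized - u) < 1e-6  # (1 - delta) * sum equals u(a)
```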
Possible Payoffs
[Figure: the set of feasible payoff vectors (U1, U2) is the convex hull of the stage-game payoff vectors.]
Example: Alternating between outcomes a and b
[Figure: alternating between a and b achieves (for high δ) approximately the average of their payoff vectors.]
Nash Equilibrium Payoffs
Which sequences of actions can be played on path
in a Nash equilibrium (for sufficiently high δ)?
How about both cooperate in even periods and both
defect in odd periods? Yes.
Grim trigger with this cooperation phase.
Payoff from deviating goes to (Defect, Defect) payoff as δ → 1.
Possible Nash Equilibrium Payoffs
[Figure: the payoff vectors sustainable in Nash equilibrium for high δ.]
Nash Equilibrium Payoffs
Player 2
A B C
A (2,2) (0,3) (0,0)
Player 1
B (3,0) (1,1) (0,0)
C (0,0) (0,0) (0,0)
Possible Nash Equilibrium Payoffs
[Figures: the set of payoff vectors sustainable in Nash equilibrium for this game.]
New Game 2
Player 2
A B C
A (2,2) (0,3) (0,0)
Player 1
B (3,0) (1,1) (0,0)
Nash Equilibrium Payoffs
Player 2 can use C as a punishment.
[Figures: using C as a punishment enlarges the set of payoffs sustainable in equilibrium.]
Folk Theorem
Finding the payoffs that can be achieved comes
down to finding the harshest possible punishment.
Player 2
A B C
A (2,2) (0,3) (0,0)
Player 1
B (3,0) (1,1) (0,0)
Subgame Perfection
Player 2
A B C
A (2,2) (0,3) (0,0)
Player 1
B (3,0) (1,1) (0,0)
C (0,0) (0,0) (0,0)
Subgame Perfection
But, in New Game 1, it is credible for both players
to use C.
Player 2
A B C
A (2,2) (0,3) (-1,-1)
Player 1
B (3,0) (1,1) (-1,-1)
C (-1,-1) (-1,-1) (-1,-1)
Finite Repetitions: A Comment
Suppose both players played the following strategy:
Supermajority:
v =
  1   if Σi vi > k
  0   if Σi vi = k
  −1  otherwise
Dictatorship:
v =
  1   if vi = 1
  0   if vi = 0
  −1  otherwise
for a fixed i.
Social Decisions
A social decision can depend on identities.
Axiomatic Approach
Definition (Anonymity)
If there are two admissible voting profiles (v1 , . . . , vn )
and (w1 , . . . , wn ) that are permutations (reorderings) of
each other, then the same social decision is made in both
cases: f (v1 , . . . , vn ) = f (w1 , . . . , wn ).
Anonymity Example
three people
one vote for, one against and one abstention.
f(1's vote, 2's vote, 3's vote) ↦ social choice.
f (1, 0, −1) = f (1, −1, 0) = f (0, 1, −1) = f (0, −1, 1) = f (−1, 1, 0) = f (−1, 0, 1).
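Anonymity can be checked mechanically: a rule is anonymous exactly when it is constant across permutations of any profile. A sketch for simple majority (used here as an assumed example of an anonymous rule):

```python
from itertools import permutations

def majority(votes):
    s = sum(votes)
    return 1 if s > 0 else (-1 if s < 0 else 0)

# The example profile: one vote for, one against, one abstention.
profile = (1, 0, -1)
outcomes = {majority(v) for v in permutations(profile)}

assert outcomes == {0}  # the same decision for every reordering
```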
Axiomatic Approach
Definition (Neutrality)
If there are two admissible voting profiles (v1 , . . . , vn )
and (−v1 , . . . , −vn ), then the opposite social decisions
are made: f (v1 , . . . , vn ) = −f (−v1 , . . . , −vn )
Neutrality Example
Two people.
Three choices
I vote for a policy
I abstain
I vote against the policy.
Suppose the social choice rule has the following
properties:
f (1, 1, 0) = 1
f (1, 0, 0) = 1
Neutrality Example
By neutrality, f(1, 1, 0) = 1 and f(1, 0, 0) = 1 imply:
f(−1, −1, 0) = −1
f(−1, 0, 0) = −1.
Positive Responsiveness Examples
Consider the following profile of votes: (1, −1, 1, −1, 1)
and suppose f (1, −1, 1, −1, 1) = 1.
f(1,1,1,-1,1) = 1 Yes
f(1,1,-1,1,1) = 1 No
f(1,0,1,0,1) = 1 Yes
f(1,0,1,1,1) = 1 Yes
f(1,0,1,1,0) = 1 No
f(-1,-1,-1,-1,-1) = -1 No
Positive Responsiveness Examples
Consider the following profile of votes: (1, −1, 1, −1, 1)
and suppose f (1, −1, 1, −1, 1) = 0.
f(1,1,1,-1,1) = 1 Yes
f(1,0,1,1,0) = 1 No
f(1,0,1,0,1) = 1 Yes
f(1,-1,0,-1,1) = -1 Yes
f(1,-1,1,1,-1) = 0 No
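The pattern behind these Yes/No answers can be checked mechanically. Below is a small sketch (Python, illustrative only; the function name is my own) of the symmetric positive responsiveness condition used here: if the outcome at a profile is a win for 1 or a tie, and some voters move toward 1 while nobody moves away, the outcome must be 1; symmetrically for −1. Otherwise nothing is forced.

```python
def implied_outcome(base, base_value, new):
    """Outcome at profile `new` forced by positive responsiveness,
    given f(base) = base_value; None if nothing is forced.
    Votes are 1 (yes), -1 (no), 0 (abstain)."""
    deltas = [n - b for b, n in zip(base, new)]
    toward_1 = any(d > 0 for d in deltas) and all(d >= 0 for d in deltas)
    toward_m1 = any(d < 0 for d in deltas) and all(d <= 0 for d in deltas)
    if base_value >= 0 and toward_1:   # tie or win for 1, moves toward 1
        return 1
    if base_value <= 0 and toward_m1:  # tie or win for -1, moves toward -1
        return -1
    return None

base = (1, -1, 1, -1, 1)

# With f(base) = 1: which listed claims are forced?
print(implied_outcome(base, 1, (1, 1, 1, -1, 1)))   # 1    (Yes)
print(implied_outcome(base, 1, (1, 1, -1, 1, 1)))   # None (No: voter 3 moves away)
print(implied_outcome(base, 1, (1, 0, 1, 0, 1)))    # 1    (Yes)
print(implied_outcome(base, 1, (1, 0, 1, 1, 0)))    # None (No: voter 5 moves away)

# With f(base) = 0 (a tie), movement toward -1 now forces -1:
print(implied_outcome(base, 0, (1, -1, 0, -1, 1)))  # -1   (Yes)
print(implied_outcome(base, 0, (1, -1, 1, 1, -1)))  # None (No: mixed moves)
```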
Characterization
Assume:
1. Different judgements are stochastically independent,
given the true state.
2. Each individual’s judgement is better than random.
Definition (Competence)
For all i ∈ N and all states of the world x ∈ {−1, 1},
P r(Vi = x|X = x) = p > 1/2 (where p is constant
across states and individuals).
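Assumptions 1 and 2 drive the Condorcet Jury Theorem: a majority of the group is more likely to be right than any single individual, and its accuracy tends to 1 as the group grows. A quick check of this via the exact binomial probability (an illustrative Python sketch, not course code; `majority_accuracy` is my own name):

```python
from math import comb

def majority_accuracy(n, p):
    """Probability that a simple majority of n voters (n assumed odd,
    so there are no ties) matches the true state, when each votes
    correctly with probability p, independently given the state."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

# With competence p = 0.6 > 1/2, group accuracy beats individual
# accuracy and grows toward 1 as the group gets larger:
for n in (1, 11, 101):
    print(n, round(majority_accuracy(n, 0.6), 3))
```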
Competence Example
Suppose I fill a jar with pound coins and 104 fit in.
So we are done?
Notation:
Let x ≽i y represent person i weakly preferring the
alternative x to y.
If x ≽i y and y ≽i x, then we say i is indifferent
between x and y: x ∼i y.
If x ≽i y and y ⋡i x, then we say i strictly prefers x
to y: x ≻i y.
Recap: Rationality
We need axioms!
Axiomatic methodology
We could have:
a ≻ b
a ≻ c
b ≻ c
c ≻ b.
Definition (IIA)
For any two admissible individual preference profiles
(≽1, . . . , ≽n) and (≽′1, . . . , ≽′n) and any x, y ∈ X, if
x ≽i y if and only if x ≽′i y for all i ∈ N , then x ≽ y if
and only if x ≽′ y.
Definition (Non-dictatorship)
There does not exist an individual i ∈ N such that for all
admissible individual preference profiles (≽1, . . . , ≽n) and
all x, y ∈ X, x ≻ y if and only if x ≻i y.
Single Peakedness
Definition (Single Peakedness)
An individual preference profile (≽1, . . . , ≽n) satisfies
single peakedness if alternatives can be ordered such that
each individual has a most preferred choice and prefers
things closer to this ideal point.
d ≻1 c ≻1 b ≻1 a ≻1 e
c ≻2 b ≻2 a ≻2 d ≻2 e
a ≻3 b ≻3 c ≻3 d ≻3 e
c ≻4 d ≻4 b ≻4 e ≻4 a
e ≻5 d ≻5 c ≻5 b ≻5 a
Single Peakedness in Pictures
Suppose we assigned each person utilities to represent
their preferences:
[Figure: one utility curve per individual over the alternatives ordered A–E; each curve rises to a single peak and then falls]
Example 2
Suppose N = {1, . . . , 5} and X = {a, b, c, d, e}. If:
d ≻1 e ≻1 b ≻1 a ≻1 c
e ≻2 b ≻2 a ≻2 d ≻2 c
a ≻3 b ≻3 e ≻3 d ≻3 c
e ≻4 d ≻4 b ≻4 c ≻4 a
c ≻5 d ≻5 e ≻5 b ≻5 a
Single Peaked Now?
[Figure: utility curves over the alternatives ordered A–E; under this ordering the preferences are not single peaked]
Reordering the Domain
[Figure: utility curves replotted over the reordered alternatives A, B, E, D, C; every individual’s preferences are now single peaked]
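The reordering can be verified mechanically. The sketch below (Python, illustrative only; `is_single_peaked` is my own helper) scores each alternative by how many others it beats in a voter’s ranking, then checks that the scores rise to a single peak and fall again along the proposed ordering. It confirms that Example 2’s profile is not single peaked under a, b, c, d, e but is under a, b, e, d, c:

```python
def is_single_peaked(profile, ordering):
    """Check whether every ranking in `profile` is single peaked with
    respect to `ordering`. Rankings list alternatives from most to
    least preferred."""
    for ranking in profile:
        # score(x) = number of alternatives x beats in this ranking
        score = {x: len(ranking) - 1 - ranking.index(x) for x in ranking}
        seq = [score[x] for x in ordering]
        peak = seq.index(max(seq))
        rising = all(seq[i] < seq[i + 1] for i in range(peak))
        falling = all(seq[i] > seq[i + 1] for i in range(peak, len(seq) - 1))
        if not (rising and falling):
            return False
    return True

# Example 2's profile (most preferred first):
profile = [
    list("debac"),  # voter 1
    list("ebadc"),  # voter 2
    list("abedc"),  # voter 3
    list("edbca"),  # voter 4
    list("cdeba"),  # voter 5
]
print(is_single_peaked(profile, list("abcde")))  # False under a,b,c,d,e
print(is_single_peaked(profile, list("abedc")))  # True after reordering
```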
Condorcet Winners
A Condorcet winner is an alternative that beats every other alternative in a pairwise majority vote.
d ≻1 c ≻1 b ≻1 a ≻1 e
c ≻2 b ≻2 a ≻2 d ≻2 e
a ≻3 b ≻3 c ≻3 d ≻3 e
c ≻4 d ≻4 b ≻4 e ≻4 a
e ≻5 d ≻5 c ≻5 b ≻5 a
Median Voter Theorem
Single peakedness implies an ordering of the domain.
This implies an ordering of voters by their most preferred
choices.
Median voter—the voter with the median most preferred
choice.
To avoid technical difficulties, suppose |N| is odd.
Theorem: if preferences are single peaked and |N| is odd,
the median voter’s most preferred choice is a Condorcet
winner.
d ≻1 c ≻1 b ≻1 a ≻1 e
c ≻2 b ≻2 a ≻2 d ≻2 e
a ≻3 b ≻3 c ≻3 d ≻3 e
c ≻4 d ≻4 b ≻4 e ≻4 a
e ≻5 d ≻5 c ≻5 b ≻5 a
Same Example
Is there a Condorcet winner? If so which one?
d ≻1 c ≻1 b ≻1 a ≻1 e
c ≻2 b ≻2 a ≻2 d ≻2 e
a ≻3 b ≻3 c ≻3 d ≻3 e
c ≻4 d ≻4 b ≻4 e ≻4 a
e ≻5 d ≻5 c ≻5 b ≻5 a
Yes, c.
Same Example
Ordered, most preferred choices: (a, c, c, d, e).
Then Median(a, c, c, d, e) = c.
Check:
Who prefers c to a: {1, 2, 4, 5}
Who prefers c to b: {1, 2, 4, 5}
Who prefers c to d: {2, 3, 4}
Who prefers c to e: {1, 2, 3, 4}
So c is indeed a Condorcet winner.
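Both halves of this check can be automated. The sketch below (Python, illustrative only; helper names are mine) finds the Condorcet winner by brute-force pairwise majority comparisons and, separately, takes the median of the voters’ peaks along the single-peaked ordering; on this profile the two computations agree on c, as the Median Voter Theorem predicts:

```python
def condorcet_winner(profile):
    """Return the alternative beating every other alternative in a
    pairwise majority vote, or None if there is no such alternative.
    Rankings list alternatives from most to least preferred."""
    alts = profile[0]
    n = len(profile)
    for x in alts:
        if all(sum(r.index(x) < r.index(y) for r in profile) > n / 2
               for y in alts if y != x):
            return x
    return None

# The single-peaked profile from the slides (most preferred first):
profile = [
    list("dcbae"),  # voter 1
    list("cbade"),  # voter 2
    list("abcde"),  # voter 3
    list("cdbea"),  # voter 4
    list("edcba"),  # voter 5
]
print(condorcet_winner(profile))  # c

# Median voter: sort the voters' most preferred choices along the
# single-peaked ordering a, b, c, d, e and take the middle one.
ordering = list("abcde")
peaks = sorted((r[0] for r in profile), key=ordering.index)
print(peaks)                   # ['a', 'c', 'c', 'd', 'e']
print(peaks[len(peaks) // 2])  # c
```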
A2: Ordering
This required that the social choice rule output best
choices.
Plurality Rules
x ≽ y if and only if at least as many individuals rank x
first as rank y first.
Suppose:
34% of people, x ≻i y ≻i z.
33% of people, y ≻i z ≻i x.
33% of people, z ≻i y ≻i x.
Plurality Rules
But there are other problems.
Suppose:
34% of people, x ≻i y ≻i z.
33% of people, y ≻i z ≻i x.
33% of people, z ≻i y ≻i x.
Plurality ranks x top, yet 66% of people rank x last: a
majority prefers y to x and a majority prefers z to x.
Borda Count
x ≽ y if and only if
∑i∈N |{z ∈ X : x ≻i z}| ≥ ∑i∈N |{z ∈ X : y ≻i z}|
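On the 34/33/33 profile above, Borda and plurality disagree. A quick computation of both scores (illustrative Python, not from the lecture) shows Borda ranks y first while plurality’s winner x comes last:

```python
# Weighted profile: (% of people, ranking from most to least preferred)
profile = [(34, "xyz"), (33, "yzx"), (33, "zyx")]
alts = "xyz"

# Borda score of a: across individuals, how many alternatives a beats
borda = {a: sum(w * (len(r) - 1 - r.index(a)) for w, r in profile)
         for a in alts}
# Plurality score of a: how many individuals rank a first
plurality = {a: sum(w for w, r in profile if r[0] == a) for a in alts}

print(borda)                                      # {'x': 68, 'y': 133, 'z': 99}
print(sorted(alts, key=borda.get, reverse=True))  # ['y', 'z', 'x']
print(max(alts, key=plurality.get))               # x: plurality's winner is Borda's loser
```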