
TYBME

Game Theory

ADITYA KASHYAP
aditya@isme.co.in

Past, present, future


1. Game Theory
◦ Simple games
◦ Formal approach
◦ Mixed strategies
◦ Dynamic (sequential & repeated)
◦ Incomplete information

2. Risk & insurance

3. Incentives & institutions

Game Theory | Strategic behavior

Simple behavior
◦ No link between what you choose and what others choose
◦ Examples = perfect competition, monopoly

Strategic behavior
◦ Strategic interdependence: your choices will impact other people’s choices, and vice versa
◦ Need to consider “he-thinks-I-think” scenarios (multiple iterations)
◦ Examples = oligopoly, poker, war, most of life

Pick a number between 1 and 100
◦ Winner is the person who picks the number closest to 2/3rds of the average
◦ http://twothirdsofaverage.creativitygames.net
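The “he-thinks-I-think” iterations in this game can be sketched in a few lines of Python (an illustrative sketch, assuming players start from the naive average of 50 and best-respond level by level; the names are my own):

```python
# Level-0 players guess at random, so the average is about 50.
# A level-1 player targets 2/3 * 50; a level-2 player targets 2/3 of that, etc.
# Iterating the best response drives the "fully rational" guess toward 1.
target = 50.0
for level in range(1, 11):
    target = (2 / 3) * target
    print(f"level {level}: guess about {target:.2f}")
```

In practice most players stop after only a few levels of reasoning, so the winning number usually sits well above the fully iterated prediction.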

Strategic thinking:
Efficient markets hypothesis

Consider: everybody knows that the price of Reliance shares will double next Tuesday…

What will happen today?
Strategic thinking:
Random class quiz

We will have a quiz in class, but the quiz will be on a day that you don’t expect…

When is the quiz?

Brief history of Game Theory

Pre-history of game theory
◦ Letters from Waldegrave (1713) about a card game
◦ Games studied by Cournot (1838), Bertrand (1883) and Edgeworth (1925) in the context of oligopolistic pricing

John von Neumann and Oskar Morgenstern (1944)
◦ Extensive & strategic form representation, minimax solution for zero-sum games

John Nash (1950)
◦ Proposes concept to extend analysis to non-zero-sum games

Post-Nash world
◦ Reinhard Selten (1965) proposes sub-game perfect equilibria
◦ John Harsanyi (1967-68) introduced techniques to solve static games of incomplete information, where players are unsure about one another’s payoffs

Cooperative & non-cooperative games

Cooperative games
◦ The focus of early game theory – optimal strategies for groups of individuals, presuming that they can enforce agreements

Non-cooperative games
◦ Most common use of game theory is in situations where you have to make your decision without cooperation/enforcement
◦ Your optimal choice may depend on forecasting your opponent’s choice
◦ Does not mean you are working “against” the other player(s)
◦ This subject will only consider non-cooperative games

Practical uses of game theory

Game theory allows us to make predictions about the outcome of strategic situations
◦ When predictions are wrong, we learn about preferences (behavioural economics)
◦ Can also work backwards… create the rules of a game to get a preferred outcome (market design, institutional economics)

Generally seen as a sub-discipline within economics
◦ Applications in industrial organization, public economics, law and economics, international trade, political economy

Has also been applied to other sciences
◦ Biology, military science, computer science and logic, political science, philosophy

Careful not to over-interpret…
◦ Can be useful, but is often imprecise (recall the “pick a number” & EMH examples)

Process of game theory

Convert a story/situation into a game
◦ (1) how many players?
◦ (2) what can they do? (called “strategies”)
◦ (3) what are the consequences? (called “payoffs”)
◦ …make sure you know what assumptions are being made (type of game)

Solve for an equilibrium
◦ The equilibrium solution is the stable state (no incentive to change)
◦ Different types of equilibrium for different games (see next slide)

Interpret the solution
◦ Equilibrium is not necessarily optimal: how to improve?
◦ Equilibrium is only as good as the assumptions: are they accurate?
◦ Knowing the true “payoff” can be difficult in practice since people often have complex and changing preferences.
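The three ingredients above map directly onto a small data structure. A minimal sketch in Python, using the prisoners’ dilemma payoffs discussed in the slides that follow (the dictionary layout is my own convention):

```python
# A two-player normal-form game: players, strategies, and a payoff for every
# combination of strategies. payoffs[(a, b)] = (payoff to A, payoff to B).
game = {
    "players": ("A", "B"),
    "strategies": ["not confess", "confess"],  # same options for both players
    "payoffs": {
        ("not confess", "not confess"): (-10, -10),
        ("not confess", "confess"): (-50, -5),
        ("confess", "not confess"): (-5, -50),
        ("confess", "confess"): (-30, -30),
    },
}
print(game["payoffs"][("confess", "confess")])  # (-30, -30)
```

Every game in this deck can be written this way; solving it then means searching this table for a stable cell.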

Different equilibriums

          Complete information               Incomplete information
Static    1. Nash Equilibrium                3. Bayes Nash Equilibrium
Dynamic   2. Sub-game Perfect Equilibrium    4. Perfect Bayesian Equilibrium

Starting with the basics

Initially, we will assume…
◦ A dominant & dominated strategy exists
◦ Only a few players (generally two) and a few possible strategies
◦ Must choose only one strategy each
◦ All actions are simultaneous & once only (no time)
◦ Everybody knows all consequences (common knowledge = complete info)

Relaxing assumptions gives more complex games
◦ No dominant strategy (Nash equilibrium)
◦ Many players, strategies & payoffs (generalised approach)
◦ Mixed strategies & existence of equilibrium
◦ Sequential decisions (sub-game equilibrium) & repeated games
◦ Incomplete information & Bayesian equilibrium

But first… the famous prisoners’ dilemma

The prisoners’ dilemma actually happened

Over 50 years ago, Perry Smith & Dick Hickock robbed and murdered a family in Kansas for $50
◦ Caught in Las Vegas six weeks later
◦ Hard evidence for lesser crimes (parole violation & fraud)
◦ Weak evidence for the murders
◦ They were interrogated separately, not knowing what the other said

Consider the hypothetical options & outcomes...
◦ If neither confesses – both get 10 years in jail
◦ If both confess – both get 30 years in jail
◦ If only one confesses – the confessor gets 5 years & the other gets 50 years in jail
◦ Neither knows what decision the other person makes
◦ Should they confess?

Normal form game of prisoners’ dilemma

Define the prisoners’ dilemma (PD) game
◦ Two players = {player A, player B}
◦ Two strategies = {not confess, confess}
◦ Player A chooses the row & player B chooses the column
◦ Payoffs for each outcome in brackets (player A, player B)
◦ Note: payoffs are negative since players don’t like jail

GAME (PD)       Not confess   Confess
Not confess     (-10, -10)    (-50, -5)
Confess         (-5, -50)     (-30, -30)
(rows = Player A, columns = Player B)
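The dominance argument on the next slide can also be checked mechanically. A short sketch (function and variable names are my own):

```python
# Player A's payoffs in the PD: rows = A's choice, columns = B's choice.
payoff_A = {
    "not confess": {"not confess": -10, "confess": -50},
    "confess": {"not confess": -5, "confess": -30},
}
# "confess" is dominant for A if it beats "not confess" against every B choice.
dominant = all(
    payoff_A["confess"][b] > payoff_A["not confess"][b]
    for b in ("not confess", "confess")
)
print("confess is dominant for A:", dominant)  # True
```

By the symmetry of the payoffs, the same check holds for player B.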

Solution for prisoners’ dilemma

Consider the options for Player A:
◦ If Player B plays “not confess”, then A can “not confess” (-10) or “confess” (-5)
◦ If Player B plays “confess”, then A can “not confess” (-50) or “confess” (-30)
◦ In both situations, Player A does better by choosing “confess”

The incentives are the same for Player B
◦ So Player B should also choose “confess”

GAME (PD)       Not confess   Confess
Not confess     (-10, -10)    (-50, -5)
Confess         (-5, -50)     (-30, -30)
(rows = Player A, columns = Player B; equilibrium = (confess, confess))

Thinking through the prisoners’ dilemma

The equilibrium ≠ optimal outcome
◦ Equilibrium strategy {confess, confess} has a bad payoff (-30, -30)
◦ In the historical example, Smith & Hickock both confessed & were executed
◦ Both prisoners would do better if both chose “not confess” (-10, -10)
◦ This game is bad for prisoners, but good for law enforcement
◦ Are there ways to change the outcome? (enforcement, incentives)

The “confess” strategy was dominant
◦ If “strategy x” dominates “strategy y”, then we can say that “strategy y” is dominated by “strategy x”; so from above, “not confess” was dominated
◦ Dominant strategy = no matter what the other person does, you should always choose the dominant strategy
◦ Dominated strategy = no matter what the other person does, you should never choose the dominated strategy

Dominant & weakly dominant

Dominant strategy
◦ A strategy is dominant if the payoff for that strategy is always better than alternative strategies, no matter what the other player does

GAME (Dominant)   Good      Bad
Good              (10, 7)   (9, 2)
Bad               (3, 6)    (2, 1)

Weakly dominant strategy
◦ A strategy is weakly dominant if its payoff is at least as good as alternative strategies in every situation, and strictly better in at least one

GAME (Weak)       Weakly good   Meh
Weakly good       (10, 7)       (5, 2)
Meh               (3, 6)        (5, 6)
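The two definitions differ only in whether ties are allowed. A sketch of both checks (names are my own), run on the “Weak” game above:

```python
def dominates(payoffs, mine, other, strict=True):
    """Does strategy `mine` dominate `other` for the row player?
    payoffs[row][col] is the row player's payoff; columns are the opponent's
    strategies. Strict dominance demands strictly better in every column;
    weak dominance allows ties but needs at least one strict improvement."""
    diffs = [payoffs[mine][c] - payoffs[other][c] for c in payoffs[mine]]
    if strict:
        return all(d > 0 for d in diffs)
    return all(d >= 0 for d in diffs) and any(d > 0 for d in diffs)

# Row player's payoffs in GAME (Weak):
weak = {"weakly good": {"weakly good": 10, "meh": 5},
        "meh": {"weakly good": 3, "meh": 5}}
print(dominates(weak, "weakly good", "meh", strict=True))   # False (ties at 5)
print(dominates(weak, "weakly good", "meh", strict=False))  # True
```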

Easy games:
Dominant & dominated

The prisoners’ dilemma had a dominant strategy
◦ In all situations, “confess” gives a better result than “not confess”

However, often no obvious dominant strategy
◦ Below, player A prefers “up” sometimes and “down” sometimes
◦ Below, player B prefers “centre” sometimes and “left” sometimes
◦ No dominant strategy

GAME (1)   Left     Centre   Right
Up         (1, 0)   (1, 2)   (0, 1)
Down       (0, 3)   (0, 1)   (2, 0)
(rows = Player A, columns = Player B)

Easy games:
Dominant & dominated

Solve by iterative removal of dominated strategies
◦ For player B, “centre” dominates “right” – so “right” is dominated
◦ Therefore, we can remove that column from the game
◦ We now have a (2 x 2) matrix
◦ Note: still no dominant strategy for player B

GAME (1)   Left     Centre   Right (removed)
Up         (1, 0)   (1, 2)   (0, 1)
Down       (0, 3)   (0, 1)   (2, 0)
(rows = Player A, columns = Player B)

Easy games:
Dominant & dominated

Solve by iterative removal of dominated strategies
◦ Now just considering the smaller (2 x 2) matrix
◦ For player A, “up” dominates “down” (can also say “down” is dominated)
◦ Therefore, we can remove “down” from the game
◦ We now only have two options left = (up, left) or (up, centre)

GAME (1)           Left     Centre   Right (removed)
Up                 (1, 0)   (1, 2)   (0, 1)
Down (removed)     (0, 3)   (0, 1)   (2, 0)
(rows = Player A, columns = Player B)

Easy games:
Dominant & dominated

Solve by iterative removal of dominated strategies
◦ Two options remaining = (up, left) or (up, centre)
◦ For player B, “centre” dominates “left”
◦ Equilibrium strategy = (up, centre) with payoff = (1, 2)

GAME (1)           Left (removed)   Centre   Right (removed)
Up                 (1, 0)           (1, 2)   (0, 1)
Down (removed)     (0, 3)           (0, 1)   (2, 0)
(rows = Player A, columns = Player B)
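The three elimination rounds above can be automated. A sketch of iterated elimination of strictly dominated strategies for game (1) (the helper names are my own):

```python
# Iterated elimination of strictly dominated strategies, applied to game (1).
# payoffs[(a, b)] = (payoff to A, payoff to B); names follow the slides.
payoffs = {
    ("up", "left"): (1, 0), ("up", "centre"): (1, 2), ("up", "right"): (0, 1),
    ("down", "left"): (0, 3), ("down", "centre"): (0, 1), ("down", "right"): (2, 0),
}
A, B = ["up", "down"], ["left", "centre", "right"]

def strictly_dominated(player):
    """Return a strictly dominated strategy for `player` (0 = A, 1 = B), or None."""
    own, other = (A, B) if player == 0 else (B, A)
    cell = lambda mine, o: (mine, o) if player == 0 else (o, mine)
    for s in own:
        for t in own:
            if t != s and all(payoffs[cell(t, o)][player] > payoffs[cell(s, o)][player]
                              for o in other):
                return s
    return None

progress = True
while progress:
    progress = False
    for player, own in ((0, A), (1, B)):
        s = strictly_dominated(player)
        if s is not None:
            own.remove(s)
            progress = True

print(A, B)  # ['up'] ['centre']
```

The loop reproduces the slides’ order of play: “right” falls first, then “down”, then “left”, leaving (up, centre).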

Easy games:
Another example

Is there a dominant strategy?
◦ Below, player A prefers “up” sometimes and “down” sometimes
◦ Below, player B prefers “right”, “left” & “centre” in different scenarios
◦ No dominant strategy

GAME (2)   Left      Centre   Right
Up         (4, 11)   (3, 6)   (5, 12)
Middle     (3, 4)    (2, 8)   (4, 6)
Down       (3, 10)   (4, 6)   (3, 8)
(rows = Player A, columns = Player B)

Easy games:
Another example

Solve by iterative removal of dominated strategies
◦ For player A, “middle” is dominated by “up”; so “middle” is removed
◦ Still leaves no dominant strategy for player A or player B
◦ Next round…

GAME (2)             Left      Centre   Right
Up                   (4, 11)   (3, 6)   (5, 12)
Middle (removed)     (3, 4)    (2, 8)   (4, 6)
Down                 (3, 10)   (4, 6)   (3, 8)
(rows = Player A, columns = Player B)

Easy games:
Another example

Solve by iterative removal of dominated strategies
◦ For player B, “centre” is dominated; that column can be removed
◦ Leaves a (2 x 2) matrix
◦ Next round…

GAME (2)             Left      Centre (removed)   Right
Up                   (4, 11)   (3, 6)             (5, 12)
Middle (removed)     (3, 4)    (2, 8)             (4, 6)
Down                 (3, 10)   (4, 6)             (3, 8)
(rows = Player A, columns = Player B)

Easy games:
Another example

Solve by iterative removal of dominated strategies
◦ For player A, “down” is dominated by “up”; so “down” can be removed
◦ Leaves only two options = (up, left) and (up, right)
◦ Next round…

GAME (2)             Left      Centre (removed)   Right
Up                   (4, 11)   (3, 6)             (5, 12)
Middle (removed)     (3, 4)    (2, 8)             (4, 6)
Down (removed)       (3, 10)   (4, 6)             (3, 8)
(rows = Player A, columns = Player B)

Easy games:
Another example

Solve by iterative removal of dominated strategies
◦ Only two options remaining = (up, left) and (up, right)
◦ For player B, “right” dominates “left”
◦ Equilibrium strategy = (up, right) with payoff = (5, 12)

GAME (2)             Left (removed)   Centre (removed)   Right
Up                   (4, 11)          (3, 6)             (5, 12)
Middle (removed)     (3, 4)           (2, 8)             (4, 6)
Down (removed)       (3, 10)          (4, 6)             (3, 8)
(rows = Player A, columns = Player B)

Too easy…

The above games had dominated strategies
◦ Relatively easy to solve with iterative elimination of dominated strategies
◦ However, in many instances there are no dominated strategies
◦ Can still find the “Nash equilibrium”

Problem:
no dominated strategy

In the below example there is no dominant or dominated strategy
◦ Player A prefers “up”, “middle” & “down” in different scenarios
◦ Player B prefers “left”, “centre” & “right” in different scenarios
◦ Cannot use iterative elimination to find equilibrium
◦ … but an equilibrium still exists

GAME (3)   Left     Centre   Right
Up         (0, 4)   (4, 0)   (5, 3)
Middle     (4, 0)   (0, 4)   (5, 3)
Down       (3, 5)   (3, 5)   (6, 6)
(rows = Player A, columns = Player B)

Nash equilibrium

A Nash equilibrium exists when all players pick the best strategy given the other players’ strategies
◦ No incentive to unilaterally change strategy
◦ The “no regret” or “stable state” strategy

All dominant strategy equilibriums are Nash equilibriums
◦ In the earlier games (PD, 1, 2) the solutions were all Nash equilibriums
◦ Check for yourself – neither player has an incentive to unilaterally change

But not all Nash equilibriums are dominant strategy equilibriums
◦ Can have a unique Nash equilibrium despite no clear dominant strategy
◦ The Nash equilibrium concept is more general than dominant strategy equilibrium
◦ Consider the example in game (3)

Nash equilibrium w/o dominant strategy

Game (3) has no dominant or dominated strategy
◦ Cannot use iterative elimination to find equilibrium
◦ Nash equilibrium is (down, right) with payoff (6, 6)
◦ From (down, right), player A has no incentive to change to up or middle
◦ From (down, right), player B has no incentive to change to left or centre

GAME (3)   Left     Centre   Right
Up         (0, 4)   (4, 0)   (5, 3)
Middle     (4, 0)   (0, 4)   (5, 3)
Down       (3, 5)   (3, 5)   *(6, 6)
(rows = Player A, columns = Player B; * = Nash equilibrium)

How to find Nash equilibrium

The manual approach…
◦ Check every payoff individually to see whether either player has an incentive to move. No incentive to move = Nash equilibrium.
◦ For small games (2 x 2) this is easy; for large games it is time consuming

The “if-then” approach
◦ For each player, find the best response to each strategy of the other player(s), and underline that payoff
◦ Any payoff cell that is chosen by all players is a Nash equilibrium

The mathematical approach
◦ For games with many players and/or strategies
◦ Will discuss later…
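The “if-then” approach is easy to mechanise: a cell is a pure-strategy Nash equilibrium exactly when each payoff in it is a best response. A sketch (function and variable names are my own), run on game (3) from the previous slides:

```python
def pure_nash(payoffs, rows, cols):
    """Return all cells where neither player can gain by deviating alone."""
    equilibria = []
    for r in rows:
        for c in cols:
            a, b = payoffs[(r, c)]
            best_for_A = all(a >= payoffs[(r2, c)][0] for r2 in rows)
            best_for_B = all(b >= payoffs[(r, c2)][1] for c2 in cols)
            if best_for_A and best_for_B:
                equilibria.append((r, c))
    return equilibria

# Game (3): payoffs[(A's strategy, B's strategy)] = (payoff to A, payoff to B).
g3 = {("up", "left"): (0, 4), ("up", "centre"): (4, 0), ("up", "right"): (5, 3),
      ("middle", "left"): (4, 0), ("middle", "centre"): (0, 4),
      ("middle", "right"): (5, 3), ("down", "left"): (3, 5),
      ("down", "centre"): (3, 5), ("down", "right"): (6, 6)}
print(pure_nash(g3, ["up", "middle", "down"], ["left", "centre", "right"]))
# [('down', 'right')]
```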

If everyone chooses to collude, all students get 10 bonus points in the final exam.
If everyone chooses to collude, but one person defects, the defector gets 50 bonus points and no other student gets any points.
If more than one person chooses to defect, no student gets any points.

“If-then” approach to find Nash equilibrium

For player A, what is the best response to each strategy by player B?
◦ If player B chooses “left”, player A should choose “middle”
◦ Mark the relevant payoff (shown with * below)

GAME (3)   Left      Centre   Right
Up         (0, 4)    (4, 0)   (5, 3)
Middle     (*4, 0)   (0, 4)   (5, 3)
Down       (3, 5)    (3, 5)   (6, 6)
(rows = Player A, columns = Player B; * = best response)

“If-then” approach to find Nash equilibrium

For player A, what is the best response to each strategy by player B?
◦ If player B chooses “centre”, player A should choose “up”
◦ Mark the relevant payoff (shown with * below)

GAME (3)   Left      Centre    Right
Up         (0, 4)    (*4, 0)   (5, 3)
Middle     (*4, 0)   (0, 4)    (5, 3)
Down       (3, 5)    (3, 5)    (6, 6)
(rows = Player A, columns = Player B; * = best response)

“If-then” approach to find Nash equilibrium

For player A, what is the best response to each strategy by player B?
◦ If player B chooses “right”, player A should choose “down”
◦ Mark the relevant payoff (shown with * below)

GAME (3)   Left      Centre    Right
Up         (0, 4)    (*4, 0)   (5, 3)
Middle     (*4, 0)   (0, 4)    (5, 3)
Down       (3, 5)    (3, 5)    (*6, 6)
(rows = Player A, columns = Player B; * = best response)

“If-then” approach to find Nash equilibrium

For player B, what is the best response to each strategy by player A?
◦ If player A chooses “up”, player B should choose “left”
◦ Mark the relevant payoff (shown with * below)

GAME (3)   Left      Centre    Right
Up         (0, *4)   (*4, 0)   (5, 3)
Middle     (*4, 0)   (0, 4)    (5, 3)
Down       (3, 5)    (3, 5)    (*6, 6)
(rows = Player A, columns = Player B; * = best response)

“If-then” approach to find Nash equilibrium

For player B, what is the best response to each strategy by player A?
◦ If player A chooses “middle”, player B should choose “centre”
◦ Mark the relevant payoff (shown with * below)

GAME (3)   Left      Centre     Right
Up         (0, *4)   (*4, 0)    (5, 3)
Middle     (*4, 0)   (0, *4)    (5, 3)
Down       (3, 5)    (3, 5)     (*6, 6)
(rows = Player A, columns = Player B; * = best response)

“If-then” approach to find Nash equilibrium

For player B, what is the best response to each strategy by player A?
◦ If player A chooses “down”, player B should choose “right”
◦ Mark the relevant payoff (shown with * below)
◦ After all “if-then” answers, there is one cell marked by both players: (down, right) with payoff (6, 6) = Nash equilibrium

GAME (3)   Left      Centre    Right
Up         (0, *4)   (*4, 0)   (5, 3)
Middle     (*4, 0)   (0, *4)   (5, 3)
Down       (3, 5)    (3, 5)    (*6, *6)
(rows = Player A, columns = Player B; * = best response)

“If-then” approach to Nash: another example

Game (4) = similar game with different payoffs
◦ Confirm that there is no dominant (or dominated) strategy
◦ For each player, find the best response to each strategy of the other player, and mark that payoff

GAME (4)   Left     Centre   Right
Up         (7, 2)   (1, 3)   (3, 4)
Middle     (4, 2)   (6, 4)   (4, 0)
Down       (2, 8)   (3, 6)   (9, 5)
(rows = Player A, columns = Player B)

“If-then” approach to Nash: another example

For player A, what is the best response to each strategy by player B?
◦ If player B chooses “left”, player A chooses “up”
◦ If player B chooses “centre”, player A chooses “middle”
◦ If player B chooses “right”, player A chooses “down”
◦ Mark the relevant payoffs (shown with * below)

GAME (4)   Left      Centre    Right
Up         (*7, 2)   (1, 3)    (3, 4)
Middle     (4, 2)    (*6, 4)   (4, 0)
Down       (2, 8)    (3, 6)    (*9, 5)
(rows = Player A, columns = Player B; * = best response)

“If-then” approach to Nash: another example

For player B, what is the best response to each strategy by player A?
◦ If player A chooses “up”, player B chooses “right”
◦ If player A chooses “middle”, player B chooses “centre”
◦ If player A chooses “down”, player B chooses “left”
◦ Mark the relevant payoffs (shown with * below)
◦ Nash equilibrium = (middle, centre) with payoff = (6, 4)

GAME (4)   Left      Centre     Right
Up         (*7, 2)   (1, 3)     (3, *4)
Middle     (4, 2)    (*6, *4)   (4, 0)
Down       (2, *8)   (3, 6)     (*9, 5)
(rows = Player A, columns = Player B; * = best response)

Some things to note…

Nash equilibrium (middle, centre) = payoff (6, 4)
◦ But the strategy (down, right) = better payoff (9, 5)
◦ Remember that equilibrium does not necessarily mean optimal

Consider: starting at (down, right)
◦ Player B would prefer “left” to “right”, resulting in (down, left)… then player A would prefer “up” to “down”, resulting in (up, left), etc. = not stable

GAME (4)   Left      Centre     Right
Up         (*7, 2)   (1, 3)     (3, *4)
Middle     (4, 2)    (*6, *4)   (4, 0)
Down       (2, *8)   (3, 6)     (*9, 5)
(rows = Player A, columns = Player B; * = best response)
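The stability claim above can be verified directly: check each cell for profitable unilateral deviations. A sketch for game (4) (function name is my own):

```python
# Game (4): payoffs[(A's strategy, B's strategy)] = (payoff to A, payoff to B).
g4 = {("up", "left"): (7, 2), ("up", "centre"): (1, 3), ("up", "right"): (3, 4),
      ("middle", "left"): (4, 2), ("middle", "centre"): (6, 4),
      ("middle", "right"): (4, 0), ("down", "left"): (2, 8),
      ("down", "centre"): (3, 6), ("down", "right"): (9, 5)}
rows, cols = ["up", "middle", "down"], ["left", "centre", "right"]

def is_nash(r, c):
    """True if neither player can gain by unilaterally deviating from (r, c)."""
    a, b = g4[(r, c)]
    return (all(a >= g4[(r2, c)][0] for r2 in rows)
            and all(b >= g4[(r, c2)][1] for c2 in cols))

print(is_nash("middle", "centre"))  # True: the equilibrium
print(is_nash("down", "right"))     # False: B would deviate to "left" (8 > 5)
```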

Multiple equilibriums

Nash concept can generate multiple equilibriums

Consider the game (Cards) below:
◦ Two players have two cards (King, Ace) and must choose one card
◦ Play is simultaneous
◦ If both play Ace, both receive $2… if both play King, both receive $1
◦ If they play different cards, both receive $0

GAME (Cards)   Ace      King
Ace            (2, 2)   (0, 0)
King           (0, 0)   (1, 1)
(rows = Player A, columns = Player B)

Multiple equilibriums

In this game there are two Nash equilibriums
◦ Either (Ace, Ace) or (King, King)
◦ Simple to show using the manual approach or “if-then” approach
◦ There is actually a third equilibrium (mixed strategy = discussed later)

Pareto dominance
◦ An equilibrium Pareto-dominates another equilibrium if at least one player would be better off and no other player worse off… here, (Ace, Ace)

GAME (Cards)   Ace       King
Ace            *(2, 2)   (0, 0)
King           (0, 0)    *(1, 1)
(rows = Player A, columns = Player B; * = Nash equilibrium)
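Enumerating best responses confirms both equilibria, and a direct comparison shows the Pareto ranking. A sketch (names are my own):

```python
# GAME (Cards): payoffs[(A's card, B's card)] = (payoff to A, payoff to B).
cards = {("ace", "ace"): (2, 2), ("ace", "king"): (0, 0),
         ("king", "ace"): (0, 0), ("king", "king"): (1, 1)}
strats = ["ace", "king"]
nash = [(r, c) for r in strats for c in strats
        if all(cards[(r, c)][0] >= cards[(r2, c)][0] for r2 in strats)
        and all(cards[(r, c)][1] >= cards[(r, c2)][1] for c2 in strats)]
print(nash)  # [('ace', 'ace'), ('king', 'king')]

# (ace, ace) Pareto-dominates (king, king): nobody worse off, someone better off.
a1, b1 = cards[("ace", "ace")]
a2, b2 = cards[("king", "king")]
print(a1 >= a2 and b1 >= b2 and (a1 > a2 or b1 > b2))  # True
```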

Non-existence of pure equilibrium

Nash equilibrium concept might not find a “pure-strategy” equilibrium
◦ “Pure strategy” means players must choose only one strategy
◦ Will consider “mixed strategies” later

Consider the game (Pennies) below:
◦ Two players have a penny each
◦ They put the penny down showing either “heads” or “tails”
◦ Play is simultaneous
◦ If the two pennies match, then player B pays $1 to player A
◦ If the two pennies are different, then player A pays $1 to player B
◦ Note: this is a pure luck game, and a zero-sum game

Non-existence of pure equilibrium

In this game (Pennies), there is no “pure strategy” Nash equilibrium
◦ Simple to show using the manual approach or “if-then” approach
◦ Consider starting at (heads, heads)… then player B would switch to “tails”, resulting in (heads, tails)… then player A would switch to “tails”, resulting in (tails, tails), etc. = no stable strategy
◦ There is a mixed strategy Nash equilibrium = discussed later

GAME (Pennies)   Heads     Tails
Heads            (1, -1)   (-1, 1)
Tails            (-1, 1)   (1, -1)
(rows = Player A, columns = Player B)
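Running the same pure-equilibrium search on (Pennies) returns nothing, which is exactly the point of this slide. A sketch:

```python
# Matching pennies: payoffs[(A's face, B's face)] = (payoff to A, payoff to B).
pennies = {("heads", "heads"): (1, -1), ("heads", "tails"): (-1, 1),
           ("tails", "heads"): (-1, 1), ("tails", "tails"): (1, -1)}
strats = ["heads", "tails"]
nash = [(r, c) for r in strats for c in strats
        if all(pennies[(r, c)][0] >= pennies[(r2, c)][0] for r2 in strats)
        and all(pennies[(r, c)][1] >= pennies[(r, c2)][1] for c2 in strats)]
print(nash)  # [] - no pure-strategy Nash equilibrium exists
```

The mixed-strategy equilibrium (each player randomising 50/50) is covered later in the course.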

Nash equilibrium and the real world

Before making our games harder, we can use what we know to consider some famous games and some real-world applications
1. Free-riding problem (prisoners’ dilemma)
2. Conflict & brinksmanship (chicken game & hawk/dove)
3. Coordination (battle of the sexes)
4. Trust & assurance (stag hunt)

Application 1:
Free riding & public goods

Since at least Hume (1740), political philosophers have known about the “free rider” problem
◦ “Public goods” and “common resources” are non-excludable, so the incentive for each person is to over-use and/or under-pay

Consider people buying security for their street
◦ Imagine a street with crime; residents consider paying for a security guard
◦ Assume two people: each can choose to “buy” or “not buy”
◦ Security cost is fixed = $100 if one “buys”, or $50 each if both “buy”
◦ Security is a public good: if anybody buys security, both benefit (value $80 each)
◦ If neither person buys security, there is no security and no benefit
◦ Will the players buy security?

Application 1:
Free riding & public goods

The game (Security) is the same as the prisoners’ dilemma
◦ Two players = {player A, player B} & two strategies = {buy, not buy}
◦ If both buy: benefit is $80, cost is $50, net benefit is $30 each
◦ If neither buys: no benefit or cost, net benefit is $0 each
◦ If one buys: the buyer’s benefit is $80, cost is $100, net benefit is -$20
◦ If one buys: the non-buyer’s benefit is $80, cost is $0, net benefit is $80
◦ Best outcome is (buy, buy) with payoff (30, 30)

GAME (Security)   Buy         Not buy
Buy               (30, 30)    (-20, 80)
Not buy           (80, -20)   (0, 0)
(rows = Player A, columns = Player B)

Application 1:
Free riding & public goods

Consider the incentives
◦ If player B plays “buy”, then player A should “not buy” ($80 v $30)
◦ If player B plays “not buy”, then player A should “not buy” ($0 v -$20)
◦ Nash equilibrium is (not buy, not buy) – check manually or using “if-then”
◦ Conclusion… people will try to “free ride” on other people buying public goods, and therefore public goods will be under-supplied

GAME (Security)   Buy         Not buy
Buy               (30, 30)    (-20, 80)
Not buy           (80, -20)   *(0, 0)
(rows = Player A, columns = Player B; * = Nash equilibrium)
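The payoff matrix above can be derived from the slide’s parameters rather than written by hand, which makes the free-rider logic explicit. A sketch (the constant and function names are my own):

```python
# Parameters from the slides: each resident values security at $80; it costs
# $100 for a lone buyer, or $50 each if both buy.
BENEFIT, COST_ALONE, COST_SHARED = 80, 100, 50

def payoff(me_buys, other_buys):
    """Net benefit to one resident, given both purchase decisions."""
    if not me_buys and not other_buys:
        return 0  # no security, no cost
    if me_buys:
        return BENEFIT - (COST_SHARED if other_buys else COST_ALONE)
    return BENEFIT  # free ride: the good is non-excludable

print(payoff(True, True), payoff(True, False), payoff(False, True))  # 30 -20 80
# "Not buy" is a dominant strategy: better no matter what the other resident does.
print(all(payoff(False, b) > payoff(True, b) for b in (True, False)))  # True
```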
Application 1:
Free riding & public goods

Mixed evidence from reality & experiments
◦ Historically, some public goods have been provided privately
◦ When people are given public-good “prisoners’ dilemma” scenarios in experiments, some people choose to “buy” – altruism? Ignorance?
◦ More communication with (or concern for) other players = more people “buy”

Ways to solve the prisoners’ dilemma?
◦ Government forces the correct outcome (e.g. tax for public goods)
◦ Binding contracts (e.g. body corporate buys security)
◦ Change the incentives (change the rules of the game)

Change the game
◦ Punish or pressure “free-riders” (experiments show that people pay for revenge)
◦ Increase benefits (status symbols for contributing, social/emotional reward)
◦ Repeated games (will consider soon in “dynamic game theory”)
◦ Institutional solutions (will consider in a different topic: “incentives & institutions”)

Application 2:
Chicken game & Hawk/Dove

Chicken game
◦ Two players drive towards each other at high speed
◦ Two strategies {swerve, don’t swerve}
◦ If neither swerves, they crash and suffer injuries (-3, -3)
◦ If both swerve, they are safe but didn’t “win” (2, 2)
◦ If one swerves, the swerver loses (0) and the straight driver wins (3)

GAME (Chicken)   Swerve   Don’t swerve
Swerve           (2, 2)   (0, 3)
Don’t swerve     (3, 0)   (-3, -3)
(rows = Player A, columns = Player B)

Application 2:
Chicken game & Hawk/Dove

Chicken game outcome
◦ Two Nash equilibriums = (swerve, don’t swerve) & (don’t swerve, swerve)
◦ The worst outcome is a crash; both players have an incentive to change
◦ Mutual swerving might give a good outcome (2, 2), but if you think the other person will swerve then you get a higher benefit by not swerving, and so you have an incentive to change
◦ Each player would prefer to play “don’t swerve” against “swerve”

GAME (Chicken)   Swerve    Don’t swerve
Swerve           (2, 2)    *(0, 3)
Don’t swerve     *(3, 0)   (-3, -3)
(rows = Player A, columns = Player B; * = Nash equilibrium)

Application 2:
Chicken game & Hawk/Dove

Hawk/Dove (HD) game
◦ A variant of the “chicken game” originally used in biology
◦ Two players {A, B} are competing for a resource & can be aggressive or passive, so there are two strategies {hawk, dove}
◦ If both are aggressive (hawks), they fight and suffer injuries (-5, -5)
◦ If both are passive (doves), they are safe and share the benefit (10, 10)
◦ If they differ (hawk, dove), the hawk wins (20) & the dove misses out (0)

GAME (HD)   Hawk       Dove
Hawk        (-5, -5)   (20, 0)
Dove        (0, 20)    (10, 10)
(rows = Player A, columns = Player B)

Application 2:
Chicken game & Hawk/Dove

Hawk/Dove outcome = chicken game outcome
◦ Two Nash equilibriums = (hawk, dove) & (dove, hawk)
◦ Worst outcome is (hawk, hawk); both players have an incentive to change
◦ Cooperation (dove, dove) might give a good outcome, but if you think the other person will choose “dove” then you get a higher benefit by choosing “hawk”, and so you have an incentive to change
◦ Each player would prefer to play “hawk” against “dove”

GAME (HD)   Hawk       Dove
Hawk        (-5, -5)   *(20, 0)
Dove        *(0, 20)   (10, 10)
(rows = Player A, columns = Player B; * = Nash equilibrium)

Application 2:
Hawk/Dove & Prisoners’ dilemma

Note: careful not to change the game
◦ For the “hawk/dove” game, the outcome from fighting (hawk, hawk) must be worse than the outcome from being a dove against a hawk
◦ If the fighting outcome is better than the outcome from being a dove against a hawk, then the “hawk/dove” game turns into a “prisoners’ dilemma”
◦ See below that if (hawk, hawk) has payoff (1, 1) then it becomes the sole Nash equilibrium in a prisoners’ dilemma

GAME (HD as PD)   Hawk      Dove
Hawk              *(1, 1)   (20, 0)
Dove              (0, 20)   (10, 10)
(rows = Player A, columns = Player B; * = Nash equilibrium)

Application 2:
Hawk/Dove & Brinkmanship

In “hawk/dove”, both players would prefer the other player to choose “dove”
◦ Communication, reputation & credible threats become important
◦ If the other player is certain you will play “hawk”, then their optimal strategy is to play “dove”
◦ In experiments where one player was able to make credible threats (lock in the “hawk” strategy), they gained an advantage
◦ More generally, if you can make your opponent think you are irrational, crazy or suicidal, then they are more likely to play “dove”

Application 2:
Modified hawk/dove game

Modified Hawk/Dove (MHD) game
◦ Modified version of hawk/dove has the same payoffs for all strategies except (dove, dove), which now has a higher payoff for “pacifist” player A
◦ There is now a single Nash equilibrium (dove, hawk) with payoff (0, 20); this is an ideal situation for player B, but player A would prefer (dove, dove)
◦ The “pacifist problem”: player A can try to create a credible threat of choosing “hawk”, to scare player B into choosing “dove”, but everybody knows they are really a pacifist. Is the threat credible?

GAME (MHD)   Hawk       Dove
Hawk         (-5, -5)   (20, 0)
Dove         *(0, 20)   (30, 10)
(rows = Player A, columns = Player B; * = Nash equilibrium)
Application 2:
Modified hawk/dove game

Modified Hawk/Dove (MHD) game
◦ For the threat to be credible, there must be at least a chance of war. This could be achieved by ambiguity (or randomness) in one of the payoffs – for example below, the (dove, dove) payoff is ( ? , 10). This is an example of “incomplete information”, which we will discuss later
◦ The “peace-loving warrior”: with ambiguity, player A can try to create a credible threat of choosing “hawk”, to scare player B into choosing “dove” (threaten war to try to prevent war). Is the threat credible now?

GAME (MHD)   Hawk       Dove
Hawk         (-5, -5)   (20, 0)
Dove         (0, 20)    ( ? , 10)
(rows = Player A, columns = Player B)

Application 2:
Modified HD – Golden Balls

British game show “Golden Balls”
◦ Two players, who each have two strategies {steal, split}
◦ Similar to “hawk/dove” except the conflict option (steal, steal) is costless, so no incentive to change. Therefore, three (weak) Nash equilibriums.
◦ Cooperation (split, split) might give a good outcome, but if you think the other person will choose “split” then you get a higher benefit by choosing “steal”, and so you have an incentive to change

GAME (Balls)   Steal       Split
Steal          *(0, 0)     *(100, 0)
Split          *(0, 100)   (50, 50)
(rows = Player A, columns = Player B; * = Nash equilibrium)

Application 2:
Modified HD – Golden Balls

British game show “Golden Balls”
◦ Each player would prefer the other player to choose “split”
◦ An altruistic player might get a benefit (+60) from sharing, but they face the problem that their opponent might not feel the same (pacifist problem)
◦ Players have two minutes to discuss what they should do… what would you do?
◦ https://www.youtube.com/watch?v=S0qjK3TWZE8

GAME (Balls)   Steal       Split
Steal          *(0, 0)     *(100, 0)
Split          *(0, 100)   (50, 50)
(rows = Player A, columns = Player B; * = Nash equilibrium)

Application 2:
Brinkmanship summary

In “anti-coordination” games like hawk/dove, there are generally two Nash equilibriums
◦ Either (hawk, dove) or (dove, hawk)
◦ Both players prefer to be the “hawk” and the opponent to be the “dove”
◦ Players want to provide a credible threat (communication or reputation) that they will play “hawk”, to force the other player to choose “dove”

Hawk/dove – threats as a fighting strategy
◦ During the Cuban Missile Crisis, the US government took actions that indicated an intention to invade Cuba (threat of “hawk”) to encourage the Russians to remove their missiles from Cuba
◦ One argument in favour of the 2003 Iraq war was that it showed the US was a dangerous and unpredictable “hawk”, therefore giving other nations an incentive to play “dove” in future disagreements
◦ A “tough guy” can build a reputation for fighting, which forces other people to the “dove” position, therefore winning future conflicts without needing to fight

Application 3:
Coordination game (easy)

Recall the earlier game (Cards):
◦ Two players have two cards (King, Ace) and must choose one card
◦ If both play Ace, both receive $2… if both play King, both receive $1
◦ If they play different cards, both receive $0
◦ Two Nash equilibriums = (Ace, Ace) & (King, King)
◦ One solution (Ace, Ace) is Pareto dominant; higher payoff for both

GAME (Cards)   Ace       King
Ace            *(2, 2)   (0, 0)
King           (0, 0)    *(1, 1)
(rows = Player A, columns = Player B; * = Nash equilibrium)

Application 3:
Coordination games

Battle of the sexes (1950s version)
◦ Two players (A = wife, B = husband) choose where to go for the evening
◦ Two strategies {ballet, boxing}
◦ They want to be together, so opposite choices give no benefit (0, 0)
◦ If they both go to the ballet, then the payoff is (2, 1)
◦ If they both go to the boxing, then the payoff is (1, 2)

GAME (Battle)   Ballet   Boxing
Ballet          (2, 1)   (0, 0)
Boxing          (0, 0)   (1, 2)
(rows = Player A: wife, columns = Player B: husband)

Application 3:
Coordination games

Battle of the sexes outcome
◦ Two Nash equilibriums = (ballet, ballet) & (boxing, boxing)
◦ The worst outcome is picking opposite strategies; incentive to change
◦ The wife would prefer (ballet, ballet) and the husband (boxing, boxing)
◦ But no Pareto dominant solution… so how to choose?

GAME (Battle)   Ballet    Boxing
Ballet          *(2, 1)   (0, 0)
Boxing          (0, 0)    *(1, 2)
(rows = Player A: wife, columns = Player B: husband; * = Nash equilibrium)

Application 3:
Coordination games

Two good equilibriums but no way to choose a priori
◦ This is the heart of coordination problems
◦ Pure coordination games: meeting a friend in NYC but you don’t know where, or two people trying to guess the same number for a shared prize
◦ Minority games: people want to go to a bar, but only if it’s not too crowded
◦ Anti-coordination games: chicken and hawk/dove as discussed above

No easy solution
◦ Can be solved in sequential games = discussed later
◦ There is also a mixed strategy = discussed later
◦ In static pure games = need a signal, heuristic, reputation, or focal point
◦ A focal point (Schelling point) is an assumed outcome in the absence of communication, such as friends in NYC guessing that they will meet at the same place they met last time

Application 4:
Stag hunt & trust games
Stag hunt
◦ ​Two players are hunting for food & have two strategies {stag, rabbit}
◦ ​Catching the stag requires two people – if both people hunt the stag then they both get the biggest reward (4, 4)
◦ ​If one person hunts the stag while the other chases a rabbit, the stag-hunter gets nothing, while the rabbit-chaser gets some benefit (0, 3)
◦ ​If both people chase the rabbit, they both get a small benefit (2, 2)
GAME (stag)      Hunt stag    Chase rabbit
Hunt stag        (4, 4)       (0, 3)
Chase rabbit     (3, 0)       (2, 2)
(rows = Player A; columns = Player B)

65

Application 4:
Stag hunt & trust games
Stag hunt outcome
◦ ​Two Nash equilibria = (stag, stag) & (rabbit, rabbit)
◦ ​The worst outcome for each player is hunting the stag when the other person is chasing the rabbit; incentive to change
◦ ​The optimal outcome is (stag, stag), but it requires trust in the other person
◦ ​This is a type of coordination game, but there is a temptation to betray (choose “rabbit” when the other chooses “stag”) if players are risk averse & lack trust
GAME (stag)      Hunt stag    Chase rabbit
Hunt stag        (4, 4)       (0, 3)
Chase rabbit     (3, 0)       (2, 2)
(rows = Player A; columns = Player B)

66

Application 4:
Stag hunt, risk & trust
Stag hunt & risk aversion
◦ ​Choosing “stag” gives possible outcomes of (4) or (0) – risky
◦ ​Choosing “rabbit” gives possible outcomes of (3) or (2) – low risk
◦ ​Without a view on the other player’s strategy, “rabbit” is the safer option
◦ ​We say that (stag, stag) is “payoff dominant”, while (rabbit, rabbit) is “risk dominant”
GAME (stag)      Hunt stag                   Chase rabbit
Hunt stag        (4, 4) – payoff dominant    (0, 3)
Chase rabbit     (3, 0)                      (2, 2) – risk dominant
(rows = Player A; columns = Player B)

67
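Risk dominance can be made concrete. A short sketch (the 50/50 belief is an illustrative assumption, not from the slides): compare each strategy’s expected payoff when player A has no information about B’s choice.

```python
# Sketch (assumption: uniform 50/50 belief about the other player's move).
# Player A's payoffs: strategy -> (payoff if B hunts stag, payoff if B chases rabbit)
stag_payoffs = {"stag": (4, 0), "rabbit": (3, 2)}

expected = {s: 0.5 * vs_stag + 0.5 * vs_rabbit
            for s, (vs_stag, vs_rabbit) in stag_payoffs.items()}
print(expected)  # {'stag': 2.0, 'rabbit': 2.5} -- "rabbit" is safer a priori
```

So under complete uncertainty, the risk-dominant (rabbit, rabbit) equilibrium is the natural prediction, even though (stag, stag) pays more.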

Application 4:
Stag hunt, risk & trust
Stag hunt, social cooperation & safety
◦ ​(stag, stag) = “payoff dominant”; (rabbit, rabbit) = “risk dominant”
◦ ​Players need to choose whether they will “risk trusting other people in search of the biggest benefit” or “not trust and take the safe result”
◦ ​This is sometimes interpreted as a choice between “social cooperation” and “safety”. More trust → more cooperation → better outcome.

How to increase social cooperation?
◦ ​Government forces cooperation (no need for trust)
◦ ​Binding contracts (trust through law)
◦ ​Allow communication… in one experiment, without communication 97% of games ended at the safe equilibrium (rabbit, rabbit); with communication, 91% of games reached the optimal equilibrium (stag, stag)
◦ ​Repeated games (trust through reputation) = discussed later
◦ ​Change the incentives (change the rules or payoffs in the game)
68

Application 4:
Stag hunt & prisoners’ dilemma
Note: careful not to change the game
◦ ​For “stag hunt”, the outcome from betrayal (choosing “rabbit” when the other chooses “stag”) must be worse than the outcome from (stag, stag)
◦ ​If the outcome from betrayal is better than the outcome from (stag, stag) cooperation, then the “stag hunt” game turns into a “prisoners’ dilemma”
◦ ​See below: if the benefit from betrayal increases to (5), then (rabbit, rabbit) becomes the sole Nash equilibrium of the prisoners’ dilemma
GAME (stag as PD)    Hunt stag    Chase rabbit
Hunt stag            (4, 4)       (0, 5)
Chase rabbit         (5, 0)       (2, 2)
(rows = Player A; columns = Player B)

69
Application 4:
Changing games on purpose
The prisoners’ dilemma equilibrium is sub-optimal
GAME (PD)        Not confess    Confess
Not confess      (-10, -10)     (-50, -5)
Confess          (-5, -50)      (-30, -30)
(rows = Player A; columns = Player B)

Can convert into stag hunt with punishment
◦ ​Decrease the benefit from betrayal (choosing “confess” when the other chooses “not confess”), so that cooperation (not confess, not confess) is preferable. There is now the possibility of an optimal Nash equilibrium.
GAME (PD as stag)    Not confess    Confess
Not confess          (-10, -10)     (-50, -15)
Confess              (-15, -50)     (-30, -30)
(rows = Player A; columns = Player B)

70
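The conversion can be verified numerically. A sketch (not from the slides): enumerate the pure equilibria of both payoff matrices and confirm that lowering the betrayal payoff from -5 to -15 adds a cooperative equilibrium.

```python
# Sketch (not from the slides): verify that lowering the betrayal payoff
# from -5 to -15 converts the prisoners' dilemma into a stag-hunt-style
# game with a second, cooperative equilibrium.
# Indices: 0 = not confess, 1 = confess.

def pure_nash(p):
    """Pure Nash equilibria of a 2x2 game; p[i][j] = (A's payoff, B's payoff)."""
    eq = []
    for i in range(2):
        for j in range(2):
            if (p[i][j][0] >= p[1 - i][j][0]            # A cannot gain by switching row
                    and p[i][j][1] >= p[i][1 - j][1]):  # B cannot gain by switching column
                eq.append((i, j))
    return eq

pd      = [[(-10, -10), (-50,  -5)], [( -5, -50), (-30, -30)]]
pd_stag = [[(-10, -10), (-50, -15)], [(-15, -50), (-30, -30)]]

print(pure_nash(pd))       # [(1, 1)]: only (confess, confess)
print(pure_nash(pd_stag))  # [(0, 0), (1, 1)]: cooperation is now an equilibrium
```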

Some more games…


Dictator game
◦ ​Player A has $10 and can decide how to split it with player B. This is not a “game” in the game-theoretic sense, since player B doesn’t act
◦ ​If we assume that money = utility, then the Nash equilibrium is to keep the $10 & give nothing to the other player
◦ ​In experiments, people often give; interestingly, the final allocation depends on many things (starting allocation, information about the recipient, reason for the game, etc)
◦ ​Conclusion = people care about more than just money

Collapsing solution games


◦ ​We already considered one of these with “pick a number”, where multiple iterations mean the solution collapses to the lowest number
◦ ​Another famous (and counter-intuitive) example is the “travellers’ dilemma”, where two players will be paid the amount of money they ask for, but the offer of a small bonus for bidding low results in them making the lowest possible bid – we will consider some similar games later

71
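The collapse in “pick a number” can be simulated in a few lines. A sketch (the 50.0 starting average is an illustrative assumption): if everyone best-responds to the previous round’s average, guesses shrink geometrically toward zero.

```python
# Sketch (not from the slides): why iterated reasoning makes the
# "2/3 of the average" game collapse toward the lowest number.
guess = 50.0            # a plausible first-round average (assumed)
for _ in range(30):     # 30 rounds of "he-thinks-I-think" iteration
    guess = (2 / 3) * guess
print(guess < 0.001)    # True: the target is driven toward the floor
```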

Robust Nash equilibrium


Weakly dominated Nash equilibrium
◦ ​It is possible to have a Nash equilibrium that includes weakly dominated strategies
◦ ​In the game below, “up” weakly dominates “down” for player A, and “left” weakly dominates “right” for player B, yet (down, right) is a Nash equilibrium
◦ ​Robust Nash equilibrium (generally known as “trembling hand” Nash equilibrium) introduces an additional requirement that eliminates weakly dominated strategies
GAME (weak Nash)    Left       Right
Up                  (1, 1)     (0, -3)
Down                (-3, 0)    (0, 0)
(rows = Player A; columns = Player B)

72

Robust Nash equilibrium


Robust equilibrium: trembling-hand perfection
◦ ​We allow for small deviations in strategies, analogous to a player with a trembling hand who might make a minor mistake
◦ ​Once each player has some small probability of accidentally playing any pure strategy, the weakly dominated equilibrium (down, right) becomes unstable, and is therefore not a robust “trembling hand” Nash equilibrium
◦ ​Strategies that are not weakly dominated (up, left) remain stable despite small deviations in strategy and so pass the requirements for a robust “trembling hand” Nash equilibrium
GAME (weak Nash)    Left       Right
Up                  (1, 1)     (0, -3)
Down                (-3, 0)    (0, 0)
(rows = Player A; columns = Player B)

73
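The trembling-hand test can be checked numerically. A sketch (the tremble size eps = 0.01 is an illustrative assumption): if B plays “left” by mistake with small probability, “down” stops being a best response for A.

```python
# Sketch (assumption): a numerical version of the trembling-hand test.
# Player A's payoffs in GAME (weak Nash): rows = up/down, columns = left/right.
a_payoff = [[1, 0], [-3, 0]]

eps = 0.01  # small probability that B "trembles" and plays left instead of right
up   = eps * a_payoff[0][0] + (1 - eps) * a_payoff[0][1]   # expected payoff of "up"
down = eps * a_payoff[1][0] + (1 - eps) * a_payoff[1][1]   # expected payoff of "down"
print(up > down)  # True: against a trembling opponent, "down" is no longer
                  # a best response, so (down, right) fails trembling-hand perfection
```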

Phew…
The above games show how we can find a Nash equilibrium in some situations
◦ ​Prisoners’ dilemma; hawk & dove; cooperation; stag hunt
◦ ​Simple games = still relatively easy to solve

With many players and/or strategies, games become more difficult to solve
◦ ​Using the formal approach (with the associated maths) allows us to solve these more difficult problems
◦ ​Takes us back to the first game theory problem: oligopoly

74

That’s enough for now:


Still to come…
Formal approach
◦ ​Using basic algebra and calculus to derive Nash equilibrium

Mixed strategies
◦ ​Allow players to choose a mix of strategies instead of a “pure” strategy
◦ ​Existence of Nash equilibrium

Dynamic games
◦ ​Repeated games (finite & infinite)
◦ ​Sequential games & extensive form (game tree)
◦ ​Sequential games with imperfect information

Incomplete information
◦ ​Reasons for incomplete information (mixed strategies, genuine uncertainty, private information,
deception)
◦ ​Modeling incomplete information and imperfect information
◦ ​Bayesian updating & Bayesian equilibrium

75
