
Questions:

If the spy game has no Nash equilibrium, isn't that contradictory to von Neumann's theorem?
Dominant strategies, Slide 18: I don't understand the definition of a strategy profile; what would be an example?
Slide 21: What is an information set?

Shorthand notation: we write $s_{-i}$ to denote $(s_1, \dots, s_{i-1}, s_{i+1}, \dots, s_n)$, the strategies of all players other than $i$.

Given two strategies $s_i, s_i'$ of player $i$, we say that $s_i$ dominates $s_i'$ if and only if

$u_i(s_i, s_{-i}) \ge u_i(s_i', s_{-i})$ for all $s_{-i}$.

($s_i$ dominates $s_i'$ when the utility for using $s_i$ is at least as high as the utility for using $s_i'$, for every choice $s_{-i}$ of the other players.)

We say that $s_i$ strictly dominates $s_i'$ if the above inequality is strictly satisfied for all $s_{-i}$.

If $s_i$ dominates $s_i'$, then $s_i$ is at least as good as $s_i'$ for player $i$ regardless of what the other players decide.
Prisoner's dilemma: Defection is a strictly dominant strategy for each player! It's better for both to defect whatever the other does: if player 1 cooperates, player 2 should defect to get 4 points; if player 1 defects, player 2 should also defect to get at least 2 points. But cooperate-cooperate yields higher payoffs for both! cf. Problem Set 2

Harmony: The Harmony game is the same but with cooperation: cooperation is a strictly dominant strategy for each player.

Pareto efficiency/optimality is a situation that cannot be modified so as to make any one party better off without making at least one other party worse off.
Here, in the Harmony game, (4,4) is the Pareto optimum; in the Prisoner's dilemma the equilibrium outcome (2,2) is not Pareto optimal, since both players improve by moving to (3,3).
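As a minimal illustration of the dominance check in the Prisoner's dilemma (the payoff numbers match the matrix on the "Dilemmas revisited" slide below; the Python function itself is ours, a sketch rather than anything from the slides):

```python
# Prisoner's dilemma payoffs for player 2, indexed by (player 1's move, player 2's move);
# the numbers match the matrix on the "Dilemmas revisited" slide below.
u2 = {("C", "C"): 3, ("C", "D"): 4, ("D", "C"): 1, ("D", "D"): 2}

def strictly_dominates(mine, other, opponent_moves=("C", "D")):
    """True if `mine` gives player 2 strictly more than `other` against every move of player 1."""
    return all(u2[(s1, mine)] > u2[(s1, other)] for s1 in opponent_moves)

print(strictly_dominates("D", "C"))  # True: defection strictly dominates cooperation for player 2
```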
A dominant strategy doesn't always exist

What about the other two?

Stag hunt, Chicken game: the individually best decision depends on the opponent's decision.

Ex. Stag hunt: if player 1 cooperates, player 2 should also cooperate to get (4,4), but if player 1 defects, player 2 should also defect to get (2,2). The chicken game is similar.

→ no dominant strategies
→ need an alternative solution concept: the Nash equilibrium


Nash equilibrium (preliminary): pure strategies

A Nash equilibrium of a normal form game is a strategy profile $(s_1^*, \dots, s_n^*)$ such that for each player $i$, $s_i^*$ is a best reply to the strategies of the other players. That is, for all $s_i \in S_i$,

$u_i(s_i^*, s_{-i}^*) \ge u_i(s_i, s_{-i}^*)$

or, equivalently, $s_i^* \in \arg\max_{s_i \in S_i} u_i(s_i, s_{-i}^*)$.

(For each player, the strategy used is at least as good against the other players' strategies as any alternative.)

Remarks
 Colloquial formulations:
− Strategies in a Nash equilibrium are mutual best replies: they are a fixed point of a ‘mutual best replies map’
− In a Nash equilibrium, no player has an incentive to deviate
 If (iterated) elimination of dominated strategies leads to a single strategy profile, it is a Nash equilibrium (why?)
→ Nash equilibrium is a generalization of the approach using elimination of dominated strategies
→ It is also a generalization of von Neumann’s approach to zero-sum games (not treated specifically in this lecture)

John Forbes Nash (1928-2015)

We check for each player (one at a time) whether he can deviate and whether this benefits him. If this is not the case for any player, then we have a Nash equilibrium. Simultaneous deviations by several players are not allowed.
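This "no profitable unilateral deviation" check can be made concrete with a short Python sketch (the function and data structure below are ours, not from the slides): it enumerates all pure strategy profiles of a two-player game and keeps those where neither player gains by deviating alone.

```python
from itertools import product

def pure_nash_equilibria(payoffs):
    """All pure-strategy Nash equilibria of a two-player game.

    `payoffs` maps a profile (s1, s2) to the payoff pair (u1, u2); a profile is
    an equilibrium if neither player can gain by deviating unilaterally.
    """
    s1_set = {s1 for s1, _ in payoffs}
    s2_set = {s2 for _, s2 in payoffs}
    equilibria = []
    for s1, s2 in product(s1_set, s2_set):
        u1, u2 = payoffs[(s1, s2)]
        if all(payoffs[(d, s2)][0] <= u1 for d in s1_set) and \
           all(payoffs[(s1, d)][1] <= u2 for d in s2_set):
            equilibria.append((s1, s2))
    return equilibria
```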
Dilemmas revisited: a structured representation

C: Cooperate, D: Defect

General form:
        C     D
   C   a,a   b,d
   D   d,b   c,c

a > c: mutual cooperation better than mutual defection
(Highlighted cells on the slide mark the pure strategy Nash equilibria.)

Stag Hunt                    Prisoner's Dilemma
        C     D                      C     D
   C   4,4   1,3                C   3,3   1,4
   D   3,1   2,2                D   4,1   2,2

Harmony                      Chicken Game
        C     D                      C     D
   C   4,4   2,3                C   3,3   2,4
   D   3,2   1,1                D   4,2   1,1
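Assuming the pure_nash_equilibria sketch above has been run, applying it to the four matrices (row player first) reproduces the equilibria highlighted on the slide:

```python
games = {
    "Stag Hunt":          {("C", "C"): (4, 4), ("C", "D"): (1, 3), ("D", "C"): (3, 1), ("D", "D"): (2, 2)},
    "Prisoner's Dilemma": {("C", "C"): (3, 3), ("C", "D"): (1, 4), ("D", "C"): (4, 1), ("D", "D"): (2, 2)},
    "Harmony":            {("C", "C"): (4, 4), ("C", "D"): (2, 3), ("D", "C"): (3, 2), ("D", "D"): (1, 1)},
    "Chicken Game":       {("C", "C"): (3, 3), ("C", "D"): (2, 4), ("D", "C"): (4, 2), ("D", "D"): (1, 1)},
}
for name, payoffs in games.items():
    print(name, sorted(pure_nash_equilibria(payoffs)))
# Stag Hunt: (C,C) and (D,D); Prisoner's Dilemma: (D,D);
# Harmony: (C,C); Chicken Game: (C,D) and (D,C).
```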
Quiz

What is the Nash equilibrium of this game?

“Spy interception game”: player 1 is a spy, player 2 wants to catch the spy.
Payoffs: -1 for the spy if he gets caught; -1 for the security agent if he doesn't catch the spy.
Zero-sum game.

→ The spy game has no pure-strategy Nash equilibrium: every player always has an incentive to deviate. It does, however, have a mixed-strategy equilibrium.

Stag hunt, Chicken game → each of these has one more equilibrium (beyond the pure ones) once mixed strategies are allowed.

→ Mixed strategies


Uncertain outcomes: lotteries

In a normal form game, a mixed strategy $\sigma_i$ for a player $i$ is given by a probability distribution over $S_i$.

(We attribute one probability to every pure strategy so that we get a probability distribution.)

 For finite $S_i$, a mixed strategy is given by $\sigma_i = (\sigma_i(s_i))_{s_i \in S_i}$, with $\sigma_i(s_i) \ge 0$ and $\sum_{s_i \in S_i} \sigma_i(s_i) = 1$
 Allowing for mixed strategies amounts to giving each player a random device. The player can only program the probabilities of outcomes with the device, but not the specific outcome
 If an opponent plays a mixed strategy, the outcome is uncertain: a “lottery”. What is the utility of a lottery?


Utility of lotteries

Expected Utility Hypothesis
The utility of a lottery is the expected utility of the possible outcomes of that lottery.

 For a finite lottery over outcomes $x_1, \dots, x_k$ with probabilities $p_1, \dots, p_k$ this means (with some abuse of notation)

$u(\text{lottery}) = \sum_{j=1}^{k} p_j \, u(x_j)$  (“espérance mathématique”)

 Caveat: If the $x_j$ are numbers (e.g. monetary outcomes of a lottery), we can build the expected value of the lottery $\sum_j p_j x_j$, but note that in general

$\sum_j p_j u(x_j) \ne u\big(\textstyle\sum_j p_j x_j\big)$,

i.e. the expected utility of the lottery differs from the utility of the expected value of the lottery.

This is related to risk preferences (not treated here).
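A small numeric illustration of this caveat (the lottery and the square-root utility function below are our own illustrative choices, not from the slides):

```python
import math

# A lottery: CHF 0 or CHF 100, each with probability 1/2.
outcomes = [0.0, 100.0]
probs = [0.5, 0.5]

def u(x):
    """An illustrative concave (risk-averse) utility function."""
    return math.sqrt(x)

expected_utility = sum(p * u(x) for p, x in zip(probs, outcomes))            # E[u(X)] = 5
utility_of_expected_value = u(sum(p * x for p, x in zip(probs, outcomes)))   # u(E[X]) = sqrt(50) ≈ 7.07

print(expected_utility, utility_of_expected_value)  # 5.0 7.07...: the two differ in general
```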



Nash equilibrium II: mixed strategies

The complete definition of the Nash equilibrium is identical to the one given above, but instead of only admitting pure strategies $s_i$, general mixed strategies $\sigma_i$ are admitted, i.e.

$(\sigma_1^*, \dots, \sigma_n^*)$ is a Nash equilibrium if and only if for any $i$ it holds that $u_i(\sigma_i^*, \sigma_{-i}^*) \ge u_i(\sigma_i, \sigma_{-i}^*)$ for all $\sigma_i$.

(No player benefits from a unilateral deviation.)

 Note that e.g. the right hand side above can be written as an expectation over pure strategy profiles (expected utility hypothesis), etc.
 The following observation is useful to compute mixed strategy equilibria:
If $\sigma_i^*$ is played in equilibrium, then any pure strategy $s_i$ with $\sigma_i^*(s_i) > 0$ must be a best reply to $\sigma_{-i}^*$. If not, decreasing $\sigma_i^*(s_i)$ in favour of an actual best-reply pure strategy would lead to a strictly better reply than $\sigma_i^*$, contradicting the equilibrium property of $\sigma_i^*$.

→ In equilibrium, player $i$ must be indifferent between all $s_i$ with $\sigma_i^*(s_i) > 0$.


Example

Let $\sigma_1 = (p, 1-p)$ and $\sigma_2 = (q, 1-q)$ denote mixed strategies for players 1 and 2 in the chicken game, where $p$ is the probability that player 1 cooperates ($1-p$ that player 1 defects), and similarly $q$ for player 2.

Indifference for player 1 (cooperating and defecting must have the same expected utility against $\sigma_2$):
$3q + 2(1-q) = 4q + 1(1-q)$ ⇒ $q = 1/2$

Indifference for player 2:
$3p + 2(1-p) = 4p + 1(1-p)$ ⇒ $p = 1/2$

Expected utility in the mixed strategy equilibrium (same abuse of notation used earlier):
$u_1(\sigma_1^*, \sigma_2^*) = \tfrac{1}{4}(3 + 2 + 4 + 1) = 2.5$, similarly for player 2.

! Note that the indifference condition for Player 1 determines Player 2’s strategy and vice versa !
Equilibrium is not a single-player situation: there is a mutual dependence.
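A quick numerical check of these indifference conditions (a minimal Python sketch; the helper function is ours):

```python
def indifference_prob(u_cc, u_cd, u_dc, u_dd):
    """Opponent's probability of playing C that makes a player indifferent between C and D.

    u_xy is this player's payoff from playing x while the opponent plays y;
    solves u_cc*q + u_cd*(1-q) == u_dc*q + u_dd*(1-q) for q.
    """
    return (u_dd - u_cd) / ((u_cc - u_cd) - (u_dc - u_dd))

# Chicken game, row player's payoffs: CC=3, CD=2, DC=4, DD=1.
q = indifference_prob(3, 2, 4, 1)   # player 2's equilibrium probability of cooperating
p = indifference_prob(3, 2, 4, 1)   # symmetric game: same calculation gives player 1's probability
expected_u1 = p*q*3 + p*(1-q)*2 + (1-p)*q*4 + (1-p)*(1-q)*1
print(p, q, expected_u1)            # 0.5 0.5 2.5
```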
Illustration: fixed point of best replies

Chicken game: plot player 2's expected utility when player 1 plays a mixed strategy and player 2 cooperates or defects; this gives player 2's best reply as a function of player 1's mixing probability, and analogously for player 1.

[Figure: best-reply correspondences of player 1 and player 2 plotted against each other; they intersect in 3 points.]

→ 3 fixed points of best replies (Nash equilibria)


The mixed-strategy equilibrium in the chicken game is sometimes likened to what Thomas Schelling refers to as
“brinkmanship”:

(We have encountered Schelling before → strategic realism.)
Brinkmanship (also brinksmanship) is the practice of trying to achieve an advantageous outcome by
pushing dangerous events to the brink of active conflict. It occurs in international politics, foreign
policy, labor relations, and (in contemporary settings) military strategy involving the threat of nuclear
weapons, and high-stakes litigation. This maneuver of pushing a situation with the opponent to the
brink succeeds by forcing the opponent to back down and make concessions.

 Brinkmanship is a threat that leaves something to chance


 Schelling:
If "brinkmanship" means anything, it means manipulating the shared risk of
war. It means exploiting the danger that somebody may inadvertently go over
the brink, dragging the other with him.

 The concept of brinkmanship, however, goes beyond the pure chicken game. We will encounter this later when we discuss the Cuban missile crisis of 1962.



Another example: “Spy interception game”

The indifference condition for player 1 gives the probability distribution for player 2, and vice versa; the indifference conditions and the expected utility in the mixed-strategy equilibrium are computed exactly as for the chicken game above.
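A sketch of that computation in Python. The exact payoff matrix of the spy game is not written out in the notes, so the numbers and strategy labels below are an illustrative assumption (spy picks route A or B, agent guards A or B), not the lecture's matrix:

```python
# Assumed payoffs: the spy gets -1 when intercepted (same route, 0 otherwise),
# the agent gets -1 when the spy slips through (different routes, 0 otherwise).
u_spy   = {("A", "A"): -1, ("A", "B"):  0, ("B", "A"):  0, ("B", "B"): -1}
u_agent = {("A", "A"):  0, ("A", "B"): -1, ("B", "A"): -1, ("B", "B"):  0}

def expected(u, s, g):
    """Expected payoff when the spy takes A with prob. s and the agent guards A with prob. g."""
    prob = {("A", "A"): s * g, ("A", "B"): s * (1 - g),
            ("B", "A"): (1 - s) * g, ("B", "B"): (1 - s) * (1 - g)}
    return sum(prob[k] * u[k] for k in prob)

# Indifference for the spy: -1*g + 0*(1-g) == 0*g + (-1)*(1-g)  =>  g = 1/2;
# the same argument for the agent gives s = 1/2.
s, g = 0.5, 0.5
print(expected(u_spy, s, g), expected(u_agent, s, g))  # -0.5 -0.5
```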

Homework: same exercise for the stag hunt and the other games.


Existence of a Nash equilibrium

Nash existence theorem*

Every normal form game with finite strategy spaces has a Nash equilibrium.

 Von Neumann, after Nash (still a student) presented the idea to him:
“That's trivial, you know. That's just a fixed point theorem.”
 Indeed, proofs rely on fixed point theorems (Kakutani, Brouwer) and are very short (~1 page).
 Resulted in the Nobel Prize in Economics for Nash; many applications.
 The theorem extends von Neumann’s “minimax solution” to non-constant-sum games.

*Nash, J. F. (1950). Equilibrium points in n-person games. Proceedings of the National Academy of Sciences, 36(1), 48-49.
Multiplicity of equilibria
Note that Nash’s existence theorem makes no assertion regarding uniqueness, indeed:
 the Chicken Game has 3 Nash equilibria
 the Stag Hunt has 3 Nash equilibria

“Which one is the realistic one?” → 2 answers

1. The mathematical view: There are many refinement concepts of the Nash equilibrium (strict
equilibrium, payoff vs. risk dominant equilibrium, trembling-hand equilibrium etc.). Pick your favorite…
2. The contextual view: Add elements beyond the pure mathematical structure of the game, for example:
Focal points (Thomas Schelling): you and your friend cannot communicate but you have to meet
tomorrow in New York City.
→ What would you do in real life?
Schelling’s answer: the Empire State Building at noon is focal! In war games (e.g. chicken), a small
perturbation of payoffs can render one equilibrium focal.

Schelling:
One cannot, without empirical evidence, deduce what understandings can be perceived in a
non-zero sum game of maneuver any more than one can prove, by purely formal deduction,
that a particular joke is bound to be funny.



2.4.6 Dynamic games
Sequential move games

Restricting ourselves to normal form games is often too limiting → dynamic games

Dynamic games:
− Sequential move games
− Repeated games
− Mixed/more general forms […]

In a sequential move game, one player moves after the other, and so on.
(We will talk a little bit about repeated games in the next chapter.)



Sequential move games are best represented by a game tree (“extensive form representation”):

“Sequential Chicken Game”: the tree shows the actions available to player 1 at the root, player 2's choice at each of the two resulting nodes, and the payoffs for player 1 and player 2 at the leaves.

! Important distinction !
Action: possible choices for a player at a given node where that player can choose.
Strategy: complete plan of action for a player for all nodes where that player can choose.*
(Player 2 has a strategy for both branches even though player 1's choice eliminates one of them.)

Player 1’s actions: c, d
Player 2’s actions: c, d at the left node; c, d at the right node
Player 1’s strategies: c, d
Player 2’s strategies: (c,c), (c,d), (d,c), (d,d)

*More precisely, this includes nodes that are theoretically unreachable by a player’s earlier choice, cf. Problem Set 2.
Backward induction and subgame perfect equilibrium

Naïve solution attempt: transform to a normal form game by ignoring the information and move structure.

(We cannot reduce this table to 2x2, because we would lose information: player 2 has to know what player 1 decided.)

→ 3 Nash equilibria in pure strategies! But some are unrealistic equilibria:
− Empty threat by player 2 (“P2 always defects”): “I will defect if you defect.”
− Empty promise by player 2 (“P2 always cooperates”): “I will cooperate if you cooperate.” Player 2 would never actually do that.

The Nash equilibrium applied to sequential move games can make implausible predictions.


Backward induction: solve the game backwards from each terminal node to avoid non-credible strategies.

(The highlighted branch in the tree is the “equilibrium path”.)

 The implausible Nash equilibria correspond to non-credible choices off the equilibrium path.
 Subgame = any initial singleton node together with all of its successors.

(In the sequential chicken game there are three subgames: the whole game plus the two “proper” subgames starting at player 2's decision nodes.)

A strategy profile is a subgame perfect equilibrium if it represents a Nash equilibrium of every subgame of the original game.

Reinhard Selten (1930-2016); Nobel Prize 1994
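A minimal backward-induction sketch for the sequential chicken game (the payoffs come from the chicken matrix above; the tree encoding and the function are ours, assuming player 1 moves first and player 2 observes that move):

```python
# Tree: a decision node is (player, {action: subtree}); a leaf is a payoff tuple.
sequential_chicken = (1, {
    "c": (2, {"c": (3, 3), "d": (2, 4)}),
    "d": (2, {"c": (4, 2), "d": (1, 1)}),
})

def backward_induction(node, path=()):
    """Solve a finite perfect-information game tree from the terminal nodes backwards.

    Returns the payoffs on the equilibrium path and a dict mapping each decision
    node (identified by the action path leading to it) to the action chosen there.
    """
    if not (isinstance(node, tuple) and isinstance(node[1], dict)):
        return node, {}                        # leaf: just the payoff tuple
    player, children = node
    choices, best_action, best_payoffs = {}, None, None
    for action, subtree in children.items():
        payoffs, sub_choices = backward_induction(subtree, path + (action,))
        choices.update(sub_choices)
        if best_payoffs is None or payoffs[player - 1] > best_payoffs[player - 1]:
            best_action, best_payoffs = action, payoffs   # mover maximizes own payoff
    choices[path] = best_action
    return best_payoffs, choices

payoffs, choices = backward_induction(sequential_chicken)
print(payoffs)   # (4, 2): player 1 defects, player 2 then (credibly) cooperates
print(choices)   # {('c',): 'd', ('d',): 'c', (): 'd'}: player 2's credible plan is (d, c)
```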
2.4.7 Adding uncertainty
Generally, two kinds of uncertainty exist:

1. Imperfect information. Uncertainty regarding previous moves: the set of possible nodes where a player can be at a given stage of the game is his/her information set at that stage (connected by a dashed line).
2. Incomplete information. Players are not sure which game they are playing: the payoffs come with uncertainty.
   Example: poker, or two companies that don't know each other's production costs and each want to undercut the other; we don't know the other players' utilities/payoffs.

Example for 1: here player 2 doesn't know what player 1 chose; that's why there is a dashed line. The two nodes linked by the dashed line are in the same information set: player 2 doesn't know at which of them he is.
A normal form game is strategically equivalent to a dynamic game of imperfect information.

 There is a trick (Harsanyi) to “transform type 2 into type 1”, see later.
 Games with uncertainty can get quite complicated and may require refined equilibrium concepts involving probability distributions over information sets, Bayesian updating, etc. We will have a glimpse at this later.*
 Sometimes there is a clear combination of sequential and simultaneous moves that allows for an easy application of backward induction.

*For a detailed account, attend a dedicated game theory course.
Case in point: 5G mobile network investment decisions
Two mobile carriers (e.g. Salt and Sunrise) consider an investment into a new antenna
network to enable 5G technology (stylized).

 The cost of such an investment is estimated at CHF 4 billion.


 Both make their investment decision simultaneously: invest (i) vs. do not invest (ni)
 If one invests and the other does not, the investor makes a pricing decision:
− High (h): CHF 10 billion operating profit
− Low (l): CHF 6 billion operating profit
 If both firms invest, they have to make a pricing decision without letting the other
know:
− Both pick h: CHF 5 billion operating profit each
− Both pick l: CHF 3 billion operating profit each
− One picks l, one picks h: all customers go to the one that picked l, profit is CHF 10 billion.

→ Imperfect information: both carriers choose a price without knowing the other's choice, which is modeled with an information set.



One proper subgame where both carriers have invested, two more where only one carrier has invested; together with the whole game, 4 subgames in total.

For general information sets, a subgame is determined by the following 3 criteria:


1. The initial node is in a singleton information set.
2. If a node is contained in the subgame, then so are all of its successors.
3. If a node in a particular information set is in the subgame, then all members of that information set belong to the subgame.
In the pricing subgame where both have invested, playing “l” is a dominant strategy (if it weren't, backward induction would be trickier). The game can therefore be simplified: the last step (choosing l) is not shown, since it is a dominant strategy.

The reduced investment game between Salt and Sunrise is a Chicken game! It has two pure-strategy subgame perfect equilibria (one in which Salt “chickens out” of investing, one in which Sunrise does), plus one mixed-strategy equilibrium (check!).
3 subgame perfect equilibria (tuple ordering according to the information set order on the previous slide):

Equilibrium            Salt equilibrium strategy      Sunrise equilibrium strategy
“Salt chickens”
“Sunrise chickens”
“Mixed strategy”
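As a rough check of the reduced game's structure, here is a Python sketch. It assumes, since the notes don't state it, that a carrier keeping the high price earns CHF 0 billion when the other undercuts:

```python
# Reduced 5G investment game, after substituting the dominant pricing choices.
# Payoffs are operating profits minus the CHF 4 billion investment cost.
INVEST_COST = 4
reduced = {
    ("ni", "ni"): (0, 0),
    ("i", "ni"): (10 - INVEST_COST, 0),                  # lone investor prices high: 10 - 4 = 6
    ("ni", "i"): (0, 10 - INVEST_COST),
    ("i", "i"): (3 - INVEST_COST, 3 - INVEST_COST),      # both invest -> both price low: 3 - 4 = -1
}

def is_nash(profile, payoffs, strategies=("i", "ni")):
    """True if no carrier gains by unilaterally switching its investment choice."""
    s1, s2 = profile
    u1, u2 = payoffs[profile]
    return (all(payoffs[(d, s2)][0] <= u1 for d in strategies)
            and all(payoffs[(s1, d)][1] <= u2 for d in strategies))

print([p for p in reduced if is_nash(p, reduced)])
# [('i', 'ni'), ('ni', 'i')]: each carrier invests only if the other backs off -- a chicken game.
```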

In the next chapter (“strategic elements of conflicts and their resolution”) we will also discuss models of incomplete
information.

