Mergers and Acquisitions
Strategic Games
A (short) Introduction to Game Theory
(based on M.J. Osborne – An Introduction to Game
Theory – Oxford University Press – 2004)
Pedagogic approach
More seriously
I assume “nothing known”
Any question is welcome (if possible, in English)
The goal is to “enter the field”, not to cover as much material as
possible (I don’t care about reaching the end of the announced
program, but I do care about in-depth understanding)
The mathematical level of the lecture should not be too challenging
(basic equation solving, some mathematical optimization)
We will “play” many games during the lecture
You MUST work each week to prepare the lecture (read the slides
in advance, prepare questions, review concept definitions, …)
I rely on you to correct my numerous mistakes
This is a theoretical lecture!
Game Theory  A (Short) Introduction 2 9/12/2011
Outline
1 Introduction
1.1 What is game theory?
1.2 The theory of rational choice
1.3 Coming attractions: interacting decision-makers
2 Nash Equilibrium Theory (perfect information)
2.1 Strategic games
2.2 Example: the Prisoner’s Dilemma
2.3 Example: Bach or Stravinsky?
2.4 Example: Matching Pennies
2.5 Example: the Stag Hunt
2.6 Nash equilibrium
2.7 Examples of Nash equilibrium
2.8 Best response functions
2.9 Dominated actions
2.10 Equilibrium in a single population: symmetric games and
symmetric equilibria
3 Nash Equilibrium: Illustrations
3.5 Auctions
4 Mixed Strategy Equilibrium (probabilistic behavior)
4.1 Introduction
4.2 Strategic games in which players may randomize
4.3 Mixed strategy Nash equilibrium
4.4 Dominated actions
4.5 Pure equilibria when randomization is allowed
4.7 Equilibrium in a single population
4.9 The formation of players’ beliefs
4.10 Extension: finding all mixed strategy Nash equilibria
4.11 Extension: games in which each player has a continuum of
actions
4.12 Appendix: Representing preferences by expected payoffs
9 Bayesian Games (imperfect information)
9.1 Motivational examples
9.2 General definitions
9.3 Two examples concerning information
9.6 Illustration: auctions
5 Extensive Games (Perfect Information): Theory
5.1 Extensive games with perfect information
5.2 Strategies and outcomes
5.3 Nash equilibrium
5.4 Subgame perfect equilibrium
5.5 Finding subgame perfect equilibria of finite horizon games:
backward induction
10 Extensive Games (Imperfect Information)
10.1 Extensive games with imperfect information
10.2 Strategies
10.3 Nash equilibrium
10.4 Beliefs and sequential equilibrium
10.5 Signaling games
10.8 Illustration: strategic information transmission
1 Introduction
1.1 What is game theory?
Game theory aims to help understand situations in which
decision-makers interact.
The main fields of application are:
Economic analysis
Social analysis
Politics
Biology
Typical applications:
Competing firms
Bidders in auctions
Main tool: model development. This involves a trade-off between:
Realistic assumptions
Simplicity
An outline of the history of game theory
First major development in the 1920s
Emile Borel
John von Neumann
Decisive publication: “Theory of Games and Economic Behavior”,
von Neumann and Morgenstern (1944)
Early 1950s: John Nash
Nash equilibrium
Game-theoretic study of bargaining
1994 Nobel Prize in Economic Sciences:
Harsanyi (1920-2000): Bayesian games (the Harsanyi doctrine)
Nash (1928): Nash equilibrium
Selten (1930): bounded rationality, extensive games
Modeling process
Step 1: selecting the aspects of a given situation that appear to be
relevant and incorporating them into a model. This step is mostly
an “art”
Step 2: model analysis (using logic and mathematics)
Step 3: studying the model’s implications to determine whether our
ideas make sense. This may point towards a revision of the
model’s assumptions in order to better capture “stylized facts”.
1.2 The theory of rational choice
Rational choice:
The decision-maker chooses the best action, according to her preferences, among
all the actions available to her
No qualitative restriction is placed on preferences
Rationality means consistency of her decisions when faced with different sets of
available actions.
The theory is based on two components: Actions and
Preferences
1.2.1 Actions
A set A consisting of all the actions that, under some circumstances, are available to the
decision-maker
In any given situation, the decision-maker knows the subset of available choices,
and takes it as given (the subset is not influenced by the decision-maker’s
preferences)
1.2.2 Preferences and payoff functions
We assume that the decision-maker, when presented with any pair of
actions, knows which of the pair she prefers
We assume further that these preferences are consistent (if she prefers
a to b and b to c, then she prefers a to c).
Preferences representation: preferences can be represented by a
payoff function:
the payoff function associates a number with each action in such a way
that actions with higher numbers are preferred.
More precisely:
u(a) > u(b) if and only if the decision-maker prefers a to b
(Economists often speak of a utility function)
Exercise 5.3
Person 1 cares about both her income and person 2’s income.
Precisely, the value she attaches to each unit of her own income is
the same as the value she attaches to any two units of person 2’s
income. For example, she is indifferent between a situation in
which her income is 1 and person 2’s is 0, and one in which her
income is 0 and person 2’s is 2. How do her preferences order the
outcomes (1,4), (2,1) and (3,0), where the first component in each
case is her income and the second component is person 2’s
income? Give a payoff function consistent with these preferences.
Note that, as the decision-maker’s preferences convey only ordinal
information, the payoff function also conveys only ordinal information.
E.g.: if u(a)=0, u(b)=1 and u(c)=100, it doesn’t mean that the decision-
maker likes c a lot more than b! A payoff function contains no such
information.
Note that, as a consequence, a decision-maker’s preferences can be
represented by many different payoff functions.
If u represents a decision-maker’s preferences and v is another payoff
function for which
v(a) > v(b) if and only if u(a) > u(b)
then v also represents the decision-maker’s preferences.
More succinctly: if u represents a decision-maker’s preferences, then
any increasing function of u also represents these preferences.
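Since only the ordering matters, whether two payoff functions represent the same preferences can be checked mechanically. A minimal sketch in Python (the function name and example values are my own):

```python
from itertools import combinations

def same_preferences(u, v, actions):
    """True iff payoff functions u and v rank every pair of actions
    identically, i.e. represent the same ordinal preferences."""
    return all((u[a] > u[b]) == (v[a] > v[b]) and
               (u[a] == u[b]) == (v[a] == v[b])
               for a, b in combinations(actions, 2))

u = {"a": 0, "b": 1, "c": 100}
v = {"a": -5, "b": 0, "c": 0.1}   # an increasing transformation of u
w = {"a": 0, "b": 0, "c": 1}      # ties a and b: different preferences
print(same_preferences(u, v, "abc"))  # True
print(same_preferences(u, w, "abc"))  # False
```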
Exercise 6.1
A decision-maker’s preferences over the set A={a,b,c} are
represented by the payoff function u for which u(a)=0, u(b)=1 and
u(c)=4. Are they also represented by the function v for which
v(a)=-1, v(b)=0, and v(c)=2? How about the function w for which
w(a)=w(b)=0 and w(c)=8?
1.2.3 The theory of rational choice
The theory of rational choice is that the action chosen by a decision-
maker is at least as good, according to her preferences, as every
other available action.
Note that not every collection of choices for different sets of
available actions is consistent with the theory.
E.g.: we observe that a decision-maker chooses a whenever she faces the set {a,b}, but
sometimes chooses b when facing the set {a,b,c}. This is inconsistent:
 always choosing a when facing {a,b} means that the decision-maker prefers a to
b
 when facing {a,b,c}, she must therefore choose a or c, never b.
(Independence of irrelevant alternatives)
1.3 Coming attractions
Up to now, the decision-maker cares only about her own choice.
In the real world, a decision-maker often does not control all the
variables that affect her.
Game theory studies situations in which some of the variables that
affect a decision-maker are the actions of other decision-
makers.
2 Nash Equilibrium:
Theory
2.1 Strategic games
Terminology:
we refer to decisionmakers as players
each player has a set of possible actions
an action profile is a list of all the players’ actions
each player has preferences about the action profiles
Definition 13.1 (Strategic game with ordinal preferences)
A strategic game with ordinal preferences consists of
a set of players
for each player, a set of actions
for each player, preferences over the set of action profiles
2.1 Strategic games
Note that:
This allows us to model a very wide range of situations:
players = firms, actions = prices, preferences = profits
players = animals, actions = fighting for a prey, preferences =
winning or losing
It is frequently convenient to specify the players’ preferences by
giving payoff functions that represent them. Keep however in
mind that a strategic game with ordinal preferences is defined by
the players’ preferences, not by the payoffs that represent these
preferences
Time is absent from the model: each player chooses her action
once and for all, and the players choose their actions
simultaneously (no player is informed of the action chosen by any
other player)
2.2 Example: the Prisoner’s Dilemma
Example 14.1
Two suspects in a major crime are held in separate cells. There is
enough evidence to convict each of them of a minor offense, but
not enough evidence to convict either of them of the major crime
unless one of them acts as an informer against the other (finks). If
they both stay quiet, each will be convicted of the minor offense
and spend one year in prison. If one and only one of them finks, she
will be freed and used as a witness against the other, who will
spend four years in prison. If they both fink, each will spend three
years in prison.
Model this situation as a strategic game.
Solution
Players: the two suspects
Actions: Each player’s set of actions is {Quiet, Fink}
Preferences: Suspect 1’s ordering of the action profiles (from
best to worst):
(Fink,Quiet) free
(Quiet,Quiet) one year in prison
(Fink,Fink) three years in prison
(Quiet,Fink) four years in prison
(and symmetrically for suspect 2)
We can adopt a payoff function for each player:
u1(Fink,Quiet) > u1(Quiet,Quiet) > u1(Fink,Fink) > u1(Quiet,Fink)
E.g.:
u1(Fink,Quiet)=3, u1(Quiet,Quiet)=2, u1(Fink,Fink)=1, u1(Quiet,Fink)=0
Graphically, the situation is the following
(numbers are the payoffs of the players):

           Suspect 2
           Quiet   Fink
Suspect 1
Quiet      2,2     0,3
Fink       3,0     1,1

The Prisoner’s Dilemma models a situation in which there are gains from cooperation
(each player prefers the outcome in which both choose Quiet to the one in which both
choose Fink) but each player has an incentive to free ride whatever the other player does.
2.2.1 Working on a joint project
You are working with a friend on a joint project. Each of you
can either work hard or goof off. If your friend works hard, then
you prefer to goof off (the outcome of the project would be
better if you worked hard too, but the increment in its value to
you is not worth the extra effort). You prefer the outcome of
your both working hard to the outcome of your both goofing off
(in which case nothing gets accomplished), and the worst
outcome for you is that you work hard and your friend goofs off
(you hate to be exploited).
Model this situation as a strategic game.
2.2.2 Duopoly
In a simple model of a duopoly, two firms produce the same
good, for which each firm charges either a low price or a high
price. Each firm wants to achieve the highest possible profit. If
both firms choose High, then each earns a profit of $1000. If
one firm chooses High and the other chooses Low, then the firm
choosing High obtains no customers and makes a loss of $200,
whereas the firm choosing Low earns a profit of $1200 (its unit
profit is low, but its volume is high). If both firms choose Low,
then each earns a profit of $600. Each firm cares only about its
profit.
Model this situation as a strategic game.
Exercise 17.1
Determine whether each of the following games differs from the
Prisoner’s Dilemma only in the names of the players’ actions:

        X       Y                X       Y
X      3,3     1,5         X    2,1     0,5
Y      5,1     0,0         Y    3,2     1,1
(rows: player 1; columns: player 2)
An application to M&As: the Grossman & Hart free riding argument.
2.3 Example: Bach or Stravinsky? (Battle of the Sexes or BoS)
Situation:
Players agree that it is better to cooperate
Players disagree about the best outcome
Example 18.2
Two people wish to go out together. Two concerts are available:
one of music by Bach, and one of music by Stravinsky. One person
prefers Bach and the other prefers Stravinsky. If they go to different
concerts, each of them is equally unhappy listening to the music of
either composer.
Model this situation as a strategic game.
An application to merging banks: two banks are merging. Both
agree that they will be better off using the same information
system technology but they disagree on which one to choose.
Google versus Microsoft/Yahoo
Solution

            Player 2
            Bach    Stravinsky
Player 1
Bach        2,1     0,0
Stravinsky  0,0     1,2
2.4 Example: Matching Pennies
Situation:
A purely conflictual situation
Example 19.1
Two people choose, simultaneously, whether to show the head or
the tail of a coin. If they show the same side, person 2 pays person
1 a dollar. If they show different sides, person 1 pays person 2 a
dollar. Each person cares only about the amount of money she
receives (and is a profit maximizer!).
Model this situation as a strategic game.
An application to choices of appearances for new products by an established
producer and a new entrant in a market of fixed size: the established producer
prefers the newcomer’s product to look different from its own (to avoid confusion)
while the newcomer prefers that the products look alike.
iPhone iOS versus Android
Solution

        Player 2
        Head    Tail
Player 1
Head    1,-1    -1,1
Tail    -1,1    1,-1
2.5 Example: the Stag Hunt
Situation:
Cooperation is better for both but not credible.
Example 20.2
Each of a group of hunters has two options: she may remain
attentive to the pursuit of a stag, or she may catch a hare. If all
hunters pursue the stag, they catch it and share it equally. If any
hunter devotes her energy to catching a hare, the stag escapes,
and the hare belongs to the defecting hunter alone. Each hunter
prefers a share of the stag to a hare.
Model this situation as a strategic game.
Solution

        Player 2
        Stag    Hare
Player 1
Stag    2,2     0,1
Hare    1,0     1,1
2.6 Nash equilibrium
Question:
What actions will be chosen by players in a strategic game?
(assuming that each player chooses the best available action)
Answer:
To make a choice, each player must form a belief about other players’
action.
Assumption:
We assume in strategic games that players’ beliefs are derived from
their past experience playing the game:
they know how their opponents will behave.
note however that they do not know which specific opponent they are facing and
so, they cannot condition their behavior on facing a specific opponent.
Beliefs are about “typical” opponents, not any specific set of opponents.
In this setup, a Nash equilibrium is an action profile a* with the
property that no player i can do better by choosing an action
different from a*_i, given that every other player j adheres to a*_j.
Note:
A Nash equilibrium corresponds to a steady state: if, whenever the
game is played, the action profile is the same Nash equilibrium a*,
then no player has a reason to choose any action different from her
component of a*.
Players’ beliefs about each other’s actions are (assumed to be)
correct. This implies, in particular, that two players’ beliefs about a
third player’s action are the same (expectations are coordinated –
Harsanyi Doctrine).
Two key ingredients: rational choices and correct beliefs
Notation and formal definition:
Let a_i be the action of player i
Let a be an action profile: a = (a_1, a_2, …, a_n)
Let a'_i be any action of player i (different from a_i)
Let (a'_i, a_-i) be the action profile in which every player j except i
chooses her action a_j as specified by a, whereas player i chooses
a'_i (the subscript -i stands for “except i”).
(a'_i, a_-i) is the action profile in which all the players other than i
adhere to a while i “deviates” to a'_i.
Note that if a'_i = a_i, then (a'_i, a_-i) = (a_i, a_-i) = a
Definition 23.1 (Nash equilibrium of a strategic game with ordinal
preferences)
The action profile a* in a strategic game with ordinal
preferences is a Nash equilibrium if, for every player i and every
action a_i of player i, a* is at least as good according to player i’s
preferences as the action profile (a_i, a*_-i) in which player i
chooses a_i while every other player j chooses a*_j.
Equivalently:
u_i(a*) ≥ u_i(a_i, a*_-i) for every action a_i of player i
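For a finite two-player game, Definition 23.1 can be checked by brute force, enumerating every action profile and every deviation. A sketch (function and variable names are my own):

```python
from itertools import product

def pure_nash_equilibria(payoffs, actions):
    """Profiles a* with u_i(a*) >= u_i(a_i, a*_-i) for every player i and
    every deviation a_i (Definition 23.1), in a two-player game.
    payoffs[(a1, a2)] = (u1, u2)."""
    equilibria = []
    for a1, a2 in product(actions[0], actions[1]):
        u1, u2 = payoffs[(a1, a2)]
        no_dev1 = all(payoffs[(d, a2)][0] <= u1 for d in actions[0])
        no_dev2 = all(payoffs[(a1, d)][1] <= u2 for d in actions[1])
        if no_dev1 and no_dev2:
            equilibria.append((a1, a2))
    return equilibria

# Prisoner's Dilemma payoffs (section 2.2)
pd = {("Quiet", "Quiet"): (2, 2), ("Quiet", "Fink"): (0, 3),
      ("Fink", "Quiet"): (3, 0), ("Fink", "Fink"): (1, 1)}
print(pure_nash_equilibria(pd, [["Quiet", "Fink"], ["Quiet", "Fink"]]))
# [('Fink', 'Fink')]
```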
Note:
This definition implies neither that a strategic game necessarily has a
Nash equilibrium, nor that it has at most one.
This definition is designed to model a steady state among experienced
players. An alternative approach (called “rationalizability”) is:
to assume that players know each others’ preferences
to consider what each player can deduce about the other players’ action
from their rationality and their knowledge of each other’s rationality
Nash equilibrium has been studied experimentally.
The keys to designing a suitable experiment are:
to ensure that players are experienced in playing the game
to ensure that players do not repeatedly face the same opponents (as each
game must be played in isolation)
The key to correctly interpreting results is to remember that Nash
equilibrium is about equilibrium: the outcome must have converged (and
the theory says nothing about the process by which convergence
occurs).
2.7 Examples of Nash equilibrium
2.7.1 Prisoner’s Dilemma

           Suspect 2
           Quiet   Fink
Suspect 1
Quiet      2,2     0,3
Fink       3,0     1,1
Detailed explanation
(Fink, Fink) is a Nash equilibrium because:
given that player 2 chooses Fink, player 1 is better off choosing
Fink than Quiet
given that player 1 chooses Fink, player 2 is better off choosing
Fink than Quiet
No other action profile is a Nash equilibrium. E.g., (Quiet, Quiet) is
not a Nash equilibrium because:
if player 2 chooses Quiet, player 1 is better off choosing Fink
(moreover), if player 1 chooses Quiet, player 2 is also better off
choosing Fink
The incentive to free ride eliminates the possibility that
the mutually desirable outcome (Quiet, Quiet) occurs.
Note that:
in the present case, the Nash equilibrium action is the best action
for each player:
if the other player chooses her equilibrium action (Fink)
but also if the other player chooses her other action (Quiet)
In this sense, this equilibrium is highly robust. But, this is not a
requirement of the Nash equilibrium. Only the first condition
must be met.
Exercise 27.1
Each of two players has two possible actions, Quiet and Fink;
each action pair results in the players’ receiving amounts of
money equal to the numbers corresponding to that action pair in
the following figure:
          Player 2
          Quiet   Fink
Player 1
Quiet     2,2     0,3
Fink      3,0     1,1
Players are not “selfish”: the preferences of each player i are
represented by the payoff function m_i(a) + α m_j(a), where m_i(a) is
the amount of money received by player i, j is the other player,
and α is a given nonnegative number. Player 1’s payoff to the
action pair (Quiet,Quiet) is, for example, 2 + 2α.
1. Formulate the strategic game that models this situation in the case
α=1. Is this game the Prisoner’s dilemma?
2. Find the range of values of α for which the resulting game is the
Prisoner’s dilemma. For values of α for which the game is not the
Prisoner’s dilemma, find the Nash equilibria.
2.7.2 BoS

            Player 2
            Bach    Stravinsky
Player 1
Bach        2,1     0,0
Stravinsky  0,0     1,2
Nash equilibria are (B,B) and (S,S). Why?
Note that this means that BoS has two steady states!
2.7.3 Matching Pennies

          Player 2
          Head    Tail
Player 1
Head      1,-1    -1,1
Tail      -1,1    1,-1
There is no Nash equilibrium. Why?
2.7.4 The Stag Hunt

          Player 2
          Stag    Hare
Player 1
Stag      2,2     0,1
Hare      1,0     1,1

Nash equilibria are (S,S) and (H,H). Why?
Note that, although (S,S) is better for both players than (H,H), this
has no bearing on the equilibrium status of (H,H).
Exercise 30.1 (extension to n players)
Consider the variant of the n-hunter Stag Hunt in which only m
hunters, with 2≤m≤n, need to pursue the stag in order to catch it
(continue to assume that there is a single stag). Assume that a
captured stag is shared only by the hunters who catch it. Under
each of the following assumptions on the hunters’ preferences, find
the Nash equilibria of the strategic game that models the
situation.
a. As before, each hunter prefers the fraction 1/m of the stag to
a hare;
b. Each hunter prefers a fraction 1/k of the stag to a hare, but
prefers a hare to any smaller fraction of the stag, where k is an
integer with m≤k≤n.
Note
In games with many Nash equilibria, the theory isolates more than
one steady state but says nothing about which one is more likely to
appear.
In some games, however, some of these equilibria seem more
likely to attract the players’ attention than others. These equilibria
are called focal.
Example: (B,B) seems here more “likely” than (S,S)

            Player 2
            Bach    Stravinsky
Player 1
Bach        2,2     0,0
Stravinsky  0,0     1,1
2.7.8 Strict and nonstrict equilibria
Definition 23.1 requires only that the outcome of a deviation (by
a player) be no better for the deviant than the equilibrium outcome.
An equilibrium is strict if each player’s equilibrium action is better
than all her other actions, given the other players’ actions:
u_i(a*) > u_i(a_i, a*_-i) for every action a_i ≠ a*_i of player i
(Note the strict inequality, contrasting with definition 23.1)
2.8 Best Response Functions
2.8.1 Definition
In more complicated games, analyzing each action profile one by
one quickly becomes intractable.
Let us denote the set of player i’s best actions, when the list of the
other players’ actions is a_-i, by B_i(a_-i) or, more precisely:
B_i(a_-i) = { a_i in A_i : u_i(a_i, a_-i) ≥ u_i(a'_i, a_-i) for all a'_i in A_i }
Any action in B_i(a_-i) is at least as good for player i as
every other action of player i when the other players’
actions are given by a_-i.
2.8.2 Using best response functions to define Nash equilibrium
Proposition 36.1: The action profile a* is a Nash equilibrium of a
strategic game with ordinal preferences if and only if every player’s
action is a best response to the other players’ actions:
a*_i is in B_i(a*_-i) for every player i
If each player i has a single best response to each list a_-i
(B_i(a_-i) = {b_i(a_-i)}), then this is equivalent to:
a*_i = b_i(a*_-i) for every player i
The Nash equilibrium is then characterized by a set of n equations in
the n unknowns a*_i:
a*_1 = b_1(a*_2, …, a*_n)
…
a*_n = b_n(a*_1, …, a*_{n-1})
2.8.3 Using the best response functions to find Nash equilibria
Procedure:
1. find the best response function of each player
2. find the action profiles that satisfy proposition 36.1
Exercise 37.1.b
Find the Nash equilibria of the game in Figure 38.1.
Represent the solution graphically.

        L       C       R
T      2,2     1,3     0,1
M      3,1     0,0     0,0
B      1,0     0,0     0,0
(rows: player 1; columns: player 2)
Solution
(a star marks a best response; a cell in which both payoffs are starred
is a Nash equilibrium)

        L       C       R
T      2,2     1*,3*   0*,1
M      3*,1*   0,0     0*,0
B      1,0*    0,0*    0*,0*

[Figure: the two players’ best response functions, with actions T, M, B
and L, C, R on the axes]
Example 39.1
Two individuals are involved in a synergistic relationship. If both
individuals devote more effort to the relationship, they are both
better off. For any given effort of individual j, the return to individual
i’s effort first increases, then decreases. Specifically, an effort level
is a nonnegative number, and individual i’s preferences (for i=1,2)
are represented by the payoff function a_i(c + a_j - a_i), where a_i is
i’s effort level, a_j is the other individual’s effort level, and c>0 is a
constant.
Questions:
Model the situation as a strategic game
Find the players’ best response functions
Find the Nash equilibrium
Represent the situation graphically
Strategic game:
Players: the two individuals
Actions: each player’s set of actions is the set of effort levels
(nonnegative numbers)
Preferences: player i’s preferences are represented by the payoff
function a_i(c + a_j - a_i), for i=1,2
Note that each player has infinitely many actions, so the game cannot
be represented by a payoff matrix, as previously.
Best response function:
Intuitive construction
Given a_j, individual i’s payoff is a quadratic function of a_i that is
zero when a_i = 0 and when a_i = c + a_j. As quadratic functions are
symmetric, this implies that player i’s best response to a_j is:
b_i(a_j) = (1/2)(c + a_j)

[Figure: player i’s payoff as a function of a_i, equal to zero at a_i = 0
and at a_i = c + a_j, with its maximum halfway between]
Mathematical construction
u_i(a_i, a_j) = a_i(c + a_j - a_i) = c a_i + a_i a_j - a_i^2
FOC: c + a_j - 2 a_i = 0
a_i* = (1/2)(c + a_j)
Nash equilibrium:
To find the Nash equilibrium, following proposition 36.1, we have to
solve the following system of equations:
a_1 = (1/2)(c + a_2)
a_2 = (1/2)(c + a_1)
By substitution, we get:
a_1 = (1/2)(c + (1/2)(c + a_1)) = (3/4)c + (1/4)a_1
So: a_1 = c
The unique Nash equilibrium is (c,c)
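As a quick numerical check, iterating the two best response functions b_i(a_j) = (1/2)(c + a_j) from an arbitrary starting point converges to the fixed point (c, c). A sketch (the iteration scheme is my own illustration, not part of the equilibrium definition):

```python
def best_response(c, a_other):
    # b_i(a_j) = (1/2)(c + a_j), as derived above
    return 0.5 * (c + a_other)

c = 3.0
a1 = a2 = 0.0
for _ in range(60):   # repeated mutual best responding
    a1, a2 = best_response(c, a2), best_response(c, a1)
print(round(a1, 6), round(a2, 6))  # 3.0 3.0, i.e. the equilibrium (c, c)
```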
Graphical representation

[Figure: the best response functions b_1(a_2) and b_2(a_1) in the
(a_1, a_2) plane; each is a line with intercept (1/2)c, and they cross
at the Nash equilibrium (c, c)]
Note that:
The best response of a player to the actions of the other players need
not be unique. If a player has many best responses to some of the
other players’ actions, then her best response function is “thick” (a
surface) at some points;
A Nash equilibrium need not exist: the best response functions
may not cross;
If the best response functions are not linear, the Nash equilibrium
need not be unique;
Best response functions can be discontinuous, generating another
set of difficulties
Exercise 42.1
Find the Nash equilibria of the two-player strategic game in which
each player’s set of actions is the set of nonnegative numbers and
the players’ payoff functions are u_1(a_1, a_2) = a_1(a_2 - a_1) and
u_2(a_1, a_2) = a_2(1 - a_1 - a_2)
2.9 Dominated actions
2.9.1 Strict domination
In any game, a player’s action “strictly dominates” another action if
it is superior, no matter what the other players do.
Definition 45.1 (Strict domination): in a strategic game with ordinal
preferences, player i’s action a''_i strictly dominates her action a'_i if:
u_i(a''_i, a_-i) > u_i(a'_i, a_-i) for every list a_-i of the other players’ actions
Action a'_i is said to be strictly dominated.
Example: in the Prisoner’s Dilemma,
the action Fink strictly dominates
the action Quiet

          Quiet   Fink
Quiet     2,2     0,3
Fink      3,0     1,1
Note that, as a strictly dominated action is not a best response to
any actions of the other players, a strictly dominated action is not
used in any Nash equilibrium.
When looking for Nash equilibria of a game, we can therefore
eliminate from consideration all strictly dominated actions.
2.9.2 Weak domination
In any game, a player’s action weakly dominates another action if
the first action is at least as good as the second action, no matter
what the other players do, and is better than the second action for
some actions of the other players.
Definition 46.1 (Weak domination): in a strategic game with ordinal
preferences, player i’s action a''_i weakly dominates her action a'_i if:
u_i(a''_i, a_-i) ≥ u_i(a'_i, a_-i) for every list a_-i of the other players’ actions
and
u_i(a''_i, a_-i) > u_i(a'_i, a_-i) for some list a_-i of the other players’ actions
Note that in a strict Nash equilibrium, no player’s equilibrium action
is weakly dominated, but in a nonstrict Nash equilibrium, an action
can be weakly dominated.
Exercise 47.1 (Strict equilibria and dominated actions)
For the game in Figure 48.1, determine, for each player,
whether any action is strictly dominated or weakly dominated.
Find the Nash equilibria of the game. Determine whether any
equilibrium is strict.

        L       C       R
T      0,0     1,0     1,1
M      1,1     1,1     3,0
B      1,1     2,1     2,2
(rows: player 1; columns: player 2)
2.9.4 Illustration: collective decision-making
The members of a group of people are affected by a policy,
modeled as a number. Each person i has a favorite policy, denoted
x*_i. She prefers the policy y to the policy z if and only if y is closer to
x*_i than is z. The number n of people is odd. The following
mechanism is used to choose the policy:
each person names a policy
the policy chosen is the median of those named
E.g.: if there are five people, and they name the policies -2, 0, 0.6, 5
and 10, the policy 0.6 is chosen.
Questions:
Model this situation as a strategic game
Find the equilibrium strategy of the players
Does anyone have an incentive to name her favorite policy?
Strategic game:
   Players: the n people
   Actions: each person's set of actions is the set of policies (numbers)
   Preferences: each person i prefers the action profile a to the action profile a' if and only if the median policy named in a is closer to x*_i than is the median policy named in a'.
Equilibrium strategy of the players:
   Claim: for each player i, the action of naming her favorite policy x*_i weakly dominates all her other actions.
   Why?
Proof:
Take x_i > x*_i (reporting a higher policy than the preferred one).
a. For all actions of the other players, player i is at least as well off naming x*_i as she is naming x_i:
   For any list of actions of the players other than player i, denote the value of the ½(n-1)th highest action by a and the value of the ½(n+1)th highest action by a+ (so that half of the remaining players' actions are at most a and half of them are at least a+).
   If x*_i ≥ a+: the median policy is the same whether player i names x*_i or x_i (as x_i > x*_i).
   If x_i ≤ a: the same holds true (as x*_i < x_i).
   If x*_i < a+ and x_i > a, then:
      when player i names x*_i, the median policy is at most the greater of x*_i and a;
      when player i names x_i, the median policy is at least the lesser of x_i and a+.
   Thus, player i is at least as well off naming x*_i as naming x_i.
b. For some actions of the other players, player i is better off naming x*_i than she is naming x_i:
   Suppose that half of the remaining players name policies less than x*_i and half of them name policies greater than x_i. Then the outcome is x*_i if player i names x*_i, and x_i if she names x_i. Thus player i is better off naming x*_i than she is naming x_i.
A symmetric argument applies when x_i < x*_i.
Telling the truth weakly dominates all other actions.
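The weak dominance claim can be probed numerically. The sketch below (numbers and the helper `outcome` are ours, purely illustrative) compares a truthful report with a few deviations against several sample lists of the other players' reports; truth-telling is never strictly worse.

```python
# Numeric illustration of the median-policy mechanism: for a player with
# favorite policy x*, the chosen median is never farther from x* when she
# reports truthfully than when she deviates (checked on sample profiles).
import statistics

def outcome(reports):
    return statistics.median(reports)

x_star = 3.0                                   # player i's favorite (assumed)
sample_others = [[0, 1, 6, 10], [4, 5, 6, 7],  # sample reports of the
                 [0, 0, 0, 0], [10, 10, 2, 2]] # other four players (assumed)

for others in sample_others:
    truthful = abs(outcome(others + [x_star]) - x_star)
    for deviation in [0.0, 2.0, 5.0, 8.0]:
        deviated = abs(outcome(others + [deviation]) - x_star)
        # Truthful reporting is at least as good as the deviation.
        assert truthful <= deviated
```

A full proof of weak dominance quantifies over all profiles, as in the argument above; the code only samples a few.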
2.10 Equilibrium in a single population: symmetric games
We focus here on cases where we want to model the interaction between members of a single homogeneous population of players. Players interact anonymously and symmetrically.

Definition 51.1 (Symmetric two-player game with ordinal preferences)
A two-player strategic game with ordinal preferences is symmetric if the players' sets of actions are the same and the players' preferences are represented by payoff functions u_1 and u_2 for which u_1(a_1, a_2) = u_2(a_2, a_1) for every action pair (a_1, a_2).

Definition 52.1 (Symmetric Nash equilibrium)
An action profile a* in a strategic game with ordinal preferences in which each player has the same set of actions is a symmetric Nash equilibrium if it is a Nash equilibrium and a*_i is the same for every player i.
Exercise 52.2
Find all the Nash equilibria of the game in Figure 53.1. Which of the equilibria, if any, correspond to a steady state if the game models pairwise interactions between the members of a single population?

             A      B      C
   A        1,1    2,1    4,1
   B        1,2    5,5    3,6
   C        1,4    6,3    0,0
3 Nash Equilibrium: Illustrations
3.5 Auctions
3.5.1 Introduction
Auctions are used to allocate significant economic resources, from works of art to short-term government bonds to radio spectrum …
Auctions come in many forms:
   sequential or sealed-bid (simultaneous)
   first-price or second-price
   ascending (English) or descending (Dutch)
   single-unit or multi-unit
   with or without a reservation price
   with or without entry costs
   …
Auctions:
   have existed for a long time (annual auctions of marriageable women in Babylonian villages)
   and remain up to date (eBay on the Internet)
Main questions:
   Which designs are likely to be the most effective at allocating resources?
   Which designs are likely to raise the most revenue?
Main assumption: we discuss here auctions in which every
buyer knows her own valuation and every other buyer’s
valuation of the item being sold
Buyers are perfectly informed.
This assumption will be dropped in Chapter 9.
3.5.2 Second-price sealed-bid auctions
In a common form of auction, people sequentially submit increasing bids for an object. When no one wishes to submit a higher bid than the current bid, the person making the current bid obtains the object at the price she bid.
Given that every person is certain of her valuation of the object before the bidding begins (perfect valuation), no one can learn anything relevant to her actions during the bidding.
Thus we can model the auction by assuming that each person decides, before bidding begins, the most she is willing to bid (her maximal bid).
During the bidding, eventually, only the person with the highest maximal bid and the one with the second highest maximal bid will be left competing against each other.
To win, the person with the highest maximal bid therefore needs to bid only slightly more than the second highest maximal bid.
We can therefore model such an ascending auction as a strategic game in which each player chooses an amount of money (the maximal amount she is willing to bid), and the player who chooses the highest amount obtains the object and pays a price equal to the second highest amount.
This game also models a situation in which the people simultaneously put bids in sealed envelopes, and the person who submits the highest bid wins and pays a price equal to the second highest bid.
In a perfect-information context, ascending (English) auctions and second-price sealed-bid auctions are modeled by the same strategic game.
Notations:
   v_i: the value player i attaches to the object
   p: price paid for the object
   v_i - p: the winning player's payoff
   n: number of players
   number the players such that v_1 > v_2 > … > v_n > 0
   b_i: sealed bid submitted by player i
Rules:
   Each player submits a sealed bid b_i.
   If b_i is the highest bid, player i wins the auction, gets the object, and pays the second highest bid, say b_j. In that case, player i's payoff is v_i - b_j.
   In case of a tie, the player with the smallest number (the highest valuation) wins. She pays her own bid (as there is a tie).
Strategic game representation:
   Players: the n bidders, where n ≥ 2
   Actions: the set of actions of each player is the set of possible bids (nonnegative numbers)
   Preferences: denote by b_i the bid of player i and by b+ the highest bid submitted by a player other than i. If either b_i > b+, or b_i = b+ and the number of every other player who bids b+ is greater than i, then player i's payoff is v_i - b+. Otherwise player i's payoff is 0.
Nash equilibrium
The game has many Nash equilibria:
One equilibrium is (b_1, b_2, …, b_n) = (v_1, v_2, …, v_n): each player's bid is equal to her valuation of the object:
   because v_1 > v_2 > … > v_n, the outcome is that player 1 obtains the object and pays b_2. Her payoff is v_1 - b_2. Every other player's payoff is zero.
   if player 1 changes her bid to some other price at least equal to b_2, the outcome does not change. If she changes her bid to a price less than b_2, she loses and obtains a zero payoff.
   if some other player lowers her bid, or raises her bid to some price at most equal to b_1, then she remains a loser. If she raises her bid above b_1, then she wins but, in paying the price b_1, she makes a loss (because her valuation is less than b_1).
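The no-profitable-deviation argument can be spot-checked in code. Below is a sketch with assumed valuations (10, 8, 5) and an assumed helper `payoff` implementing the slide's rules (second-highest price, ties broken in favor of the lowest-numbered bidder); it verifies that bidding one's valuation is an equilibrium against a grid of deviations.

```python
# Second-price sealed-bid auction payoff, with the tie-breaking rule of
# the slides (the lowest-numbered bidder among the highest bids wins).
def payoff(i, bids, values):
    top = max(bids)
    winner = bids.index(top)          # index() returns the lowest-numbered tie
    if i != winner:
        return 0
    second = max(b for j, b in enumerate(bids) if j != winner)
    return values[i] - second

values = [10, 8, 5]                   # v1 > v2 > v3 (assumed numbers)
bids = list(values)                   # everyone bids her valuation

# No unilateral deviation (from a sample grid) improves any player's payoff.
for i in range(3):
    base = payoff(i, bids, values)
    for dev in [0, 3, 6, 9, 12, 20]:
        trial = bids[:]
        trial[i] = dev
        assert payoff(i, trial, values) <= base
```

The grid of deviations is of course finite; the slide's case analysis covers all bids.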
Another equilibrium is (b_1, b_2, …, b_n) = (v_1, 0, …, 0): player 1 obtains the object and pays 0. A sad outcome for the auctioneer …
Another equilibrium is (b_1, b_2, …, b_n) = (v_2, v_1, 0, …, 0): player 2 bids v_1 and obtains the object at price v_2, and every player's payoff is zero:
   if player 1 raises her bid to v_1 or more, she wins the object but her payoff remains zero (she pays the price v_1, bid by player 2)
   if player 2 changes her bid to some other price greater than v_2, the outcome does not change. If she changes her bid to v_2 or less, she loses, and her payoff remains zero.
   if any other player raises her bid to at most v_1, the outcome does not change. If she raises her bid above v_1, then she wins but gets a negative payoff.
Note that, in this equilibrium, player 2 bids more than her valuation. This might seem strange. It reflects the fact that, in a Nash equilibrium, a player does not consider the "risk" that another player will take an action different from her equilibrium action. Each player simply chooses an action that is optimal, given the other players' actions.
This however suggests that this equilibrium is less plausible as an outcome of the auction than the equilibrium in which each bidder bids her valuation.
This is due to the fact that:
   in a second-price sealed-bid auction (with perfect information), a player's bid equal to her valuation weakly dominates all her other bids.
That is:
   for any bid b_i ≠ v_i, player i's bid v_i is at least as good as b_i, no matter what the other players bid, and is better than b_i for some actions of the other players.
The precise argument is given by Figure 85.1.
(Figure 85.1: three panels plotting player i's payoff as a function of the highest of the other players' bids b+, for the bids v_i, b'_i < v_i, and b''_i > v_i; the regions where v_i does better than b'_i and b''_i are marked.)
The figure compares player i's payoff to the bid v_i (left panel) with her payoff to a bid b'_i < v_i (middle panel) and with her payoff to a bid b''_i > v_i (right panel), as a function of the highest of the other players' bids (b+).
We see that:
   for all values of b+, player i's payoff to the bid v_i is at least as large as her payoff to any other bid;
   for some values of b+, her payoff to v_i exceeds her payoff to any other bid.
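The same comparison can be run numerically. The sketch below (assumed valuation v = 10, assumed helper `payoff_vs`) sweeps the highest other bid b+ over a grid and checks that bidding v is never beaten by any alternative bid; ties are resolved against player i here, which only makes the check harder to pass.

```python
# Payoff to `bid` when the player's valuation is `value` and the highest
# other bid is b_plus, resolving ties against the player (conservative).
def payoff_vs(bid, value, b_plus):
    return value - b_plus if bid > b_plus else 0

v = 10  # assumed valuation
for b_plus in [b / 2 for b in range(0, 41)]:    # b+ sweeps 0.0 .. 20.0
    for alt in [0, 4, 8, 9.5, 10.5, 12, 20]:    # alternative bids != v
        # Bidding one's valuation is at least as good as any other bid.
        assert payoff_vs(v, v, b_plus) >= payoff_vs(alt, v, b_plus)
```

The inequality is tight for many b+, and strict e.g. when alt = 12 and b+ = 11 (a loss of 1 instead of 0), matching the figure's "better in this region" annotations.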
Exercise 84.1
Find a Nash equilibrium of a second-price sealed-bid auction in which player n obtains the object.
Exercise 86.1 (Auctioning the right to choose)
An action affects each of two people. The right to choose the
action is sold in a secondprice auction. That is, the two people
simultaneously submit bids, and the one who submits the higher
bid chooses her favorite action and pays (to a third party) the
amount bid by the other person, who pays nothing. Assume that if
the bids are the same, person 1 is the winner.
For i = 1, 2, the payoff of person i when the action is a and person i pays m is u_i(a) - m.
In the game that models this situation, find for each player a bid
that weakly dominates all the player’s other bids (and thus find a
Nash equilibrium in which each player’s equilibrium action weakly
dominates all her other actions).
3.5.3 First-price sealed-bid auctions
Difference from a second-price auction: the winner pays the price she bids.
Strategic game representation:
   Players: the n bidders, where n ≥ 2
   Actions: the set of actions of each player is the set of possible bids (nonnegative numbers)
   Preferences: denote by b_i the bid of player i and by b+ the highest bid submitted by a player other than i. If either (a) b_i > b+ or (b) b_i = b+ and the number of every other player who bids b+ is greater than i, then player i's payoff is v_i - b_i. Otherwise, player i's payoff is 0.
Note that this game models:
   a sealed-bid auction where the highest bid wins
but also
   a dynamic auction in which the auctioneer begins by announcing a high price, which she gradually lowers until someone indicates her willingness to buy the object (a Dutch auction)
   (this equivalence is even, in some sense, stronger than the one between an ascending auction and a second-price sealed-bid auction – it does not depend on private values).
Nash equilibrium
One Nash equilibrium is (b_1, b_2, …, b_n) = (v_2, v_2, v_3, …, v_n), in which player 1's bid is player 2's valuation and every other player's bid is her own valuation. The outcome is that player 1 obtains the object at price v_2.
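A quick numeric check of this claim, with assumed valuations (10, 8, 5) and an assumed helper `fp_payoff` (winner pays her own bid, ties go to the lowest-numbered bidder):

```python
# First-price sealed-bid auction payoff, ties broken in favor of the
# lowest-numbered bidder, as on the slides.
def fp_payoff(i, bids, values):
    top = max(bids)
    winner = bids.index(top)          # lowest-numbered bidder among ties
    return values[i] - bids[i] if i == winner else 0

values = [10, 8, 5]                   # assumed v1 > v2 > v3
bids = [8, 8, 5]                      # player 1 bids v2; others bid their values

assert fp_payoff(0, bids, values) == 2     # player 1 wins at price v2

# No unilateral deviation (from a sample grid) improves any player's payoff.
for i in range(3):
    base = fp_payoff(i, bids, values)
    for dev in [0, 3, 6, 7.9, 8.1, 9, 12]:
        trial = bids[:]
        trial[i] = dev
        assert fp_payoff(i, trial, values) <= base
```

As before, the grid only samples deviations; the full argument is Exercise 86.2 below.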
Exercise 86.2
Show that (b_1, b_2, …, b_n) = (v_2, v_2, v_3, …, v_n) is a Nash equilibrium of a first-price sealed-bid auction.
A first-price sealed-bid auction has many other equilibria, but in all equilibria the winner is the player who values the object most highly (player 1), by the following argument:
   in any action profile (b_1, …, b_n) in which some player i ≠ 1 wins, we have b_i > b_1:
      if b_i > v_2, then player i's payoff is negative, so she can do better by reducing her bid to 0;
      if b_i ≤ v_2, then player 1 can increase her payoff from 0 to v_1 - b_i by bidding b_i, in which case she wins.
Exercise 87.1 (First-price sealed-bid auction)
Show that in a Nash equilibrium of a first-price sealed-bid auction the two highest bids are the same, one of these bids is submitted by player 1, and the highest bid is at least v_2 and at most v_1. Show also that any action profile satisfying these conditions is a Nash equilibrium.
As in the second-price sealed-bid auction, the potential "riskiness" to player i of a bid b_i > v_i is reflected in the fact that it is weakly dominated by the bid v_i, as shown by the following argument:
   if the other players' bids are such that player i loses when she bids b_i, then the outcome is the same whether she bids b_i or v_i;
   if the other players' bids are such that player i wins when she bids b_i, then her payoff is negative when she bids b_i and zero when she bids v_i (regardless of whether this bid wins).
However, unlike in a second-price auction, in a first-price auction a bid b_i < v_i of player i is not weakly dominated by the bid v_i (it is in fact not weakly dominated by any bid):
   it is not weakly dominated by a bid b'_i < b_i, because if the other players' highest bid is between b'_i and b_i, then b'_i loses whereas b_i wins and yields player i a positive payoff;
   it is not weakly dominated by a bid b'_i > b_i, because if the other players' highest bid is less than b_i, then both b_i and b'_i win and b_i yields a lower price.
Note also that, though the bid v_i weakly dominates higher bids, this bid is itself weakly dominated by a lower bid! The argument is the following:
   if player i bids v_i, her payoff is 0 regardless of the other players' bids;
   whereas, if she bids less than v_i, her payoff is either 0 (if she loses) or positive (if she wins).
In a first-price sealed-bid auction (with perfect information), a player's bid of at least her valuation is weakly dominated, and a bid of less than her valuation is not weakly dominated.
Note finally that this property of the equilibria depends on the assumption that a bid may be any number. In the variant of the game in which bids and valuations are restricted to be multiples of some discrete monetary unit ε:
   an action profile (v_2 - ε, v_2 - ε, b_3, …, b_n) with b_j ≤ v_j - ε for j = 3, …, n is a Nash equilibrium in which no player's bid is weakly dominated;
   further, every equilibrium in which no player's bid is weakly dominated takes this form.
If ε is small, this is very close to (v_2, v_2, b_3, …, b_n): this equilibrium is therefore (on a somewhat ad hoc basis) considered as the distinguished equilibrium of a first-price sealed-bid auction.
One conclusion of this analysis is that, while both second-price and first-price auctions have many Nash equilibria, their distinguished equilibria yield the same outcome: in every distinguished equilibrium of each game, the object is sold to player 1 at the price v_2. This notion of revenue equivalence is a cornerstone of auction theory and will be analyzed in depth later.
3.5.4 Variants
Uncertain valuations: we have assumed that each bidder is certain of both her own valuation and every other bidder's valuation, which is highly unrealistic. We will study the case of imperfect information in Chap. 9 (in the framework of Bayesian games).
Interdependent/common valuations: in some auctions, the main difference between bidders is not that they value the object differently but that they have different information about its value (e.g., oil tract auctions). As this also involves informational considerations, we will again study it in Chap. 9.
All-pay auctions: in some auctions, every bidder pays, not only the winner (e.g., competition of lobby groups for government attention).
Multi-unit auctions: in some auctions, many units of an object are available (e.g., US Treasury bill auctions) and each bidder may value more than one unit positively. Each bidder therefore chooses a bid profile (b_1, b_2, …, b_k) if there are k units for sale. Different auction mechanisms exist, characterized by the rule governing the price paid by the winner:
   Discriminatory auction: the price paid for each unit is the winning bid for that unit.
   Uniform-price auction: the price paid for each unit is the same, equal to the highest rejected bid among all the bids for all units.
   Vickrey auction (named after Nobel laureate William Vickrey): a bidder who wins k objects pays the sum of the k highest rejected bids submitted by the other bidders.
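The three pricing rules can be compared on one small example (all bids assumed, purely illustrative): two units for sale, three bidders A, B, C with bid profiles (10, 8), (9, 4), (7, 3). The two highest bids overall (A's 10 and B's 9) each win one unit.

```python
# Multi-unit auction pricing rules on an assumed example with k = 2 units.
all_bids = {"A": [10, 8], "B": [9, 4], "C": [7, 3]}
k = 2
flat = sorted(((b, who) for who, bs in all_bids.items() for b in bs),
              reverse=True)
winning, rejected = flat[:k], flat[k:]

# Discriminatory: each winner pays her own winning bid.
discriminatory = {who: b for b, who in winning}
assert discriminatory == {"A": 10, "B": 9}

# Uniform price: everyone pays the highest rejected bid (A's 8).
uniform = max(b for b, _ in rejected)
assert uniform == 8

# Vickrey: a winner of one unit pays the highest rejected bid
# submitted by the *other* bidders.
vickrey = {who: max(b for b, w in rejected if w != who)
           for _, who in winning}
assert vickrey == {"A": 7, "B": 8}
```

Note how the rules diverge: A pays 10, 8, or 7 for the same unit depending on the mechanism.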
4. Mixed Strategy Equilibrium
4.1. Introduction
4.1.1. Stochastic steady state
Nash Equilibrium in a strategic game: action profile in which every
player’s action is optimal given every other player’s action (see def.
23.1)
This corresponds to a steady state of the game:
every player’s behavior is the same whenever she plays
the game
no player wishes to change her behavior, knowing (from
experience) the other players’ behavior
In such a framework, the outcome of every play of the game is
the same Nash equilibrium
More general notions of steady state exist:
players' choices are allowed to vary:
   different members of a given population may choose different actions, each player choosing the same action whenever she plays the game
   each individual may, on each occasion she plays the game, choose her action probabilistically according to the same, unchanging, distribution
these situations are equivalent:
   in the first case, a fraction p of the population representing player i chooses the action a
   in the second case, each member of the population representing player i chooses the action a with probability p
These notions of (stochastic) steady state are modeled as mixed strategy Nash equilibria.
4.1.2 Example: Matching Pennies

                   Head       Tail
   Head          (1,-1)     (-1,1)
   Tail          (-1,1)     (1,-1)

(rows: Player 1, columns: Player 2)

The game has no Nash equilibrium: no pair of actions is compatible with a steady state.
The game has, however, a stochastic steady state in which each player chooses each of her actions with probability ½:
   Suppose that player 2 chooses each of her actions with probability ½.
   If player 1 chooses Head with probability p and Tail with probability (1-p), then:
      each of the outcomes (Head,Head) and (Head,Tail) occurs with probability p × ½
      each of the outcomes (Tail,Head) and (Tail,Tail) occurs with probability (1-p) × ½
   Thus, the probability that the outcome is either (Head,Head) or (Tail,Tail) (in which case player 1 wins $1) is ½ p + ½ (1-p) = ½.
   The other two outcomes, (Head,Tail) and (Tail,Head) (which correspond to a loss of $1), also have total probability ½.
   the probability distribution over outcomes is independent of p!
   every value of p is optimal (in particular ½)!
   the same analysis holds for player 2. We conclude that the game has a stochastic steady state in which each player chooses each action with probability ½.
Moreover (under a reasonable assumption on the players' preferences), the game has no other steady state:
   Assumption: each player wants the probability of her gaining $1 to be as large as possible (maximization of expected profit).
   Denote by q the probability with which player 2 chooses Head (she chooses Tail with probability (1-q)).
   If player 1 chooses Head with probability p, she gains $1 with probability pq + (1-p)(1-q) (outcomes (Head,Head) or (Tail,Tail)) and she loses $1 with probability (1-p)q + p(1-q).
Note that:
   Player 1 wins $1 with probability pq + (1-p)(1-q) = 1 - q + p(2q - 1)
   Player 1 loses $1 with probability (1-p)q + p(1-q) = q + p(1 - 2q)
If q < ½, the first probability (winning $1) is decreasing in p and the second probability (losing $1) is increasing in p. Player 1 therefore chooses p = 0.
Thus, if player 2 chooses Head with probability less than ½, the best response of player 1 is to choose Tail with certainty.
A similar argument shows that if player 2 chooses Head with probability greater than ½, the best response of player 1 is to choose Head with certainty.
We have already shown that if one player chooses a given action with certainty, there is no steady state (the game has no pure Nash equilibrium).
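The case analysis above reduces to one algebraic fact, which can be sketched in a few lines (helper name `win_prob` is ours):

```python
# Player 1's winning probability in Matching Pennies when she plays Head
# with probability p and player 2 plays Head with probability q.
def win_prob(p, q):
    return p * q + (1 - p) * (1 - q)

# q = 1/2: every p yields winning probability exactly 1/2.
assert all(abs(win_prob(p / 10, 0.5) - 0.5) < 1e-12 for p in range(11))
# q < 1/2: the winning probability decreases in p, so the best response is p = 0.
assert win_prob(0.0, 0.3) > win_prob(1.0, 0.3)
# q > 1/2: it increases in p, so the best response is p = 1.
assert win_prob(1.0, 0.7) > win_prob(0.0, 0.7)
```

This mirrors the three cases in the argument: only q = ½ makes player 1 indifferent, and symmetrically only p = ½ makes player 2 indifferent, giving the unique stochastic steady state.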
4.1.3 Generalizing the analysis: expected payoffs
The Matching Pennies case is particularly simple because it has only two outcomes for each player, allowing us to deduce players' preferences regarding lotteries (probability distributions) over outcomes from their preferences regarding deterministic outcomes:
   if a player prefers a to b and if p > q, she prefers a lottery in which a occurs with probability p (and b with probability 1-p) to a lottery in which a occurs with probability q (and b with probability 1-q).
To deal with more general cases (e.g., more than two outcomes), we need to add to the model a description of each player's preferences regarding lotteries (probability distributions) over outcomes.
The standard approach is to restrict attention to preferences regarding lotteries (probability distributions) over outcomes that may be represented by the expected value of a payoff function over deterministic outcomes:
   for every player i, there is a payoff function u_i with the property that player i prefers one probability distribution over outcomes to another if and only if, according to u_i, the expected value of the first probability distribution exceeds the expected value of the second probability distribution.
E.g.:
   three outcomes: a, b, c
   two prob. dist.: P = (p_a, p_b, p_c) and Q = (q_a, q_b, q_c)
   for each player i, prob. dist. P is preferred to prob. dist. Q if and only if

      p_a u_i(a) + p_b u_i(b) + p_c u_i(c) > q_a u_i(a) + q_b u_i(b) + q_c u_i(c)

Preferences that can be represented by the expected value of a payoff function over deterministic outcomes are called vNM (von Neumann–Morgenstern) preferences.
A payoff function whose expected value represents such preferences is called a Bernoulli payoff function.
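A minimal numeric instance of this comparison (payoff values and probabilities assumed for illustration):

```python
# Comparing two lotteries over outcomes a, b, c by the expected value of
# an assumed Bernoulli payoff function u_i.
u = {"a": 3.0, "b": 1.0, "c": 0.0}

def expected_payoff(dist):
    return sum(prob * u[x] for x, prob in dist.items())

P = {"a": 0.5, "b": 0.3, "c": 0.2}
Q = {"a": 0.2, "b": 0.6, "c": 0.2}

# P is preferred to Q iff its expected payoff is larger: 1.8 > 1.2.
assert expected_payoff(P) > expected_payoff(Q)
```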
The restrictions on preferences regarding prob. dist. over outcomes required for them to be represented by the expected value of a payoff function are NOT innocuous (see the violation examples on page 104). They are however commonly accepted in game theory.
However, these restrictions do not constrain players' attitudes to risk. E.g.:
   suppose that a, b, and c are three outcomes and a person prefers a to b to c. If the person is very averse to risky outcomes, she prefers to obtain b for sure rather than to face a prob. dist. in which a occurs with probability p and c with probability 1-p, even if p is relatively large.
   such preferences can be represented by the expected value of a payoff function u for which u(a) is close to u(b), both being much larger than u(c) (a concave payoff function).
(Figure 103.1: a concave payoff function over the outcomes c, b, a, with u(a) close to u(b) and both much larger than u(c).)
• Note that if the outcomes are amounts of money and the preferences are represented by the expected value of the amount of money, the player is risk neutral.
• Two classic utility functions: CARA & CRRA.
• In reality:
   • the fact that people buy insurance (the expected payout is less than the insurance fee) shows that economic agents are risk averse;
   • the fact that people buy lottery tickets shows that, in some circumstances, they can be risk preferring (small investment, extremely high payoff).
   In both cases, the preferences can be represented by the expected value of a payoff function:
   • concave in the case of risk aversion
   • convex in the case of risk preference
• Note finally that, given preferences, many different payoff functions can be used to represent them. It is the ordering that matters.
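The link between curvature and risk attitude can be sketched with two assumed payoff functions and an assumed 50/50 lottery over $0 and $100:

```python
# Risk attitudes via curvature of the Bernoulli payoff function.
import math

lottery = [(0.5, 0.0), (0.5, 100.0)]                # assumed 50/50 lottery
expected_money = sum(p * x for p, x in lottery)     # = 50.0

def eu(u):
    """Expected utility of the lottery under payoff function u."""
    return sum(p * u(x) for p, x in lottery)

concave = math.sqrt                 # a risk-averse payoff function
convex = lambda x: x * x            # a risk-loving payoff function

# The risk-averse player prefers the sure expected value to the lottery;
# the risk lover prefers the lottery.
assert eu(concave) < concave(expected_money)        # 5.0 < sqrt(50)
assert eu(convex) > convex(expected_money)          # 5000.0 > 2500.0
```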
4.2 Strategic games in which players may randomize
Definition 106.1 (Strategic game with vNM preferences)
A strategic game with vNM preferences consists of
   a set of players
   for each player, a set of actions
   for each player, preferences regarding prob. dist. over action profiles that may be represented by the expected value of a (Bernoulli) payoff function over action profiles.
Representation: a two-player strategic game with vNM preferences in which each player has finitely many actions may be represented in a table as in Chapter 2. However, the interpretation of the numbers is different:
   in Chapter 2, the numbers are values of payoff functions that represent the players' preferences over deterministic outcomes;
   here, the numbers are values of (Bernoulli) payoff functions whose expected values represent the players' preferences over prob. dist.
The change is subtle but important (Figure 107.1):

   Left game:              Right game:
         Q     F                 Q     F
   Q    2,2   0,3          Q    3,3   0,4
   F    3,0   1,1          F    4,0   1,1

The two games represent the same game with ordinal preferences (the Prisoner's Dilemma).
However, they represent different strategic games with vNM preferences:
   left game: player 1's payoff to (Q,Q) is the same as her expected payoff to the prob. dist. that yields (F,Q) with probability ½ and (F,F) with probability ½ (2 = ½·3 + ½·1);
   right game: her payoff to (Q,Q) is higher than her expected payoff to this prob. dist. (3 > ½·4 + ½·1).
4.3 Mixed strategy Nash equilibrium
4.3.1 Mixed strategies
We now allow each player to choose a probability distribution over her set of actions (rather than restricting her to choose a single deterministic action).
Definition 107.1 (Mixed strategy)
A mixed strategy of a player in a strategic game is a probability distribution over the player's actions.
Notations:
   α: profile of mixed strategies (matrix)
   α_i(a_i): probability assigned by player i's mixed strategy α_i to her action a_i
E.g.: in Matching Pennies, the strategy of player 1 that assigns probability ½ to each action is the strategy with α_1(Head) = ½ and α_1(Tail) = ½.
Shortcut: mixed strategies are often written as a list of probabilities (one for each action), in the order the actions are given in the table (see Figure 107.1).
E.g.: (½, ½) assigns, in Figure 107.1, probability ½ to Q and probability ½ to F.
Note that a mixed strategy may assign probability 1 to a single action. Such a strategy is referred to as a pure strategy.
4.3.2 Equilibrium
Mixed strategy Nash equilibrium extends the concept of Nash equilibrium to the probabilistic setup.
Definition 108.1 (Mixed strategy Nash equilibrium of a strategic game with vNM preferences)
The mixed strategy profile α* in a strategic game with vNM preferences is a mixed strategy Nash equilibrium if, for each player i and every mixed strategy α_i of player i, the expected payoff to player i of α* is at least as large as the expected payoff to player i of (α_i, α*_-i), according to a payoff function whose expected value represents player i's preferences over prob. dist.:

   U_i(α*) ≥ U_i(α_i, α*_-i) for every mixed strategy α_i of player i,

where U_i(α) is player i's expected payoff to the mixed strategy profile α.
4.3.3 Best response functions
Notation: B_i is player i's best response function.
For a strategic game with ordinal preferences, B_i(a_-i) is the set of player i's best actions when the list of the other players' actions is a_-i.
For a strategic game with vNM preferences, B_i(α_-i) is the set of player i's best mixed strategies when the list of the other players' mixed strategies is α_-i.
   The mixed strategy profile α* is a mixed strategy Nash equilibrium if and only if α*_i is in B_i(α*_-i) for every player i.
E.g.: in Matching Pennies, the set of best responses to a mixed strategy of the other player is either a single pure strategy or the set of all mixed strategies.
Two-player, two-action games:
   Player 1 has actions T and B.
   Player 2 has actions L and R.
   u_i (i = 1, 2) denotes a Bernoulli payoff function for player i (a payoff function over action pairs whose expected value represents player i's preferences regarding prob. dist. over action pairs).
   Player 1's mixed strategy α_1 assigns probability α_1(T) to her action T (denoted p) and probability α_1(B) to her action B (denoted 1-p), with α_1(T) + α_1(B) = 1.
   Similarly, denote by q the probability that player 2's mixed strategy assigns to L, and by 1-q the probability it assigns to R.
   We take the players' choices to be independent: when the players choose the mixed strategies α_1 and α_2, the probability of any action pair (a_1, a_2) is the product of the corresponding probabilities assigned by the mixed strategies.
From this probability distribution, we can compute player 1’s
expected payoff to the mixed strategy pair (α
1
, α
2
):
which can be written as:
So, the probabilities of the four outcomes are:
T(p)
B(1p)
L(q) R(1q)
pq
(1p)q
p(1q)
(1p)(1q)
(Figure 109.1)
) , ( ) 1 )( 1 ( ) , ( ) 1 ( ) , ( ) 1 ( ) , (
1 1 1 1
R B u q p L B u q p R T u q p L T u pq × ÷ ÷ + × ÷ + × ÷ + ×
Game Theory  A (Short) Introduction 113 9/12/2011
4.3 Mixed strategy Nash
equilibrium
which can be written more compactly as:

p[q·u1(T,L) + (1−q)·u1(T,R)] + (1−p)[q·u1(B,L) + (1−q)·u1(B,R)]

The first bracketed term is player 1’s expected payoff when she uses the pure strategy that assigns probability 1 to T and player 2 uses the mixed strategy α2; the second is her expected payoff when she assigns probability 1 to B. Denoting these E1(T,α2) and E1(B,α2), the expression becomes:

p·E1(T,α2) + (1−p)·E1(B,α2)

Player 1’s expected payoff to the mixed strategy pair (α1,α2) is thus a weighted average of her expected payoffs to T and B when player 2 uses the mixed strategy α2, with weights equal to the probabilities assigned to T and B by α1.
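This identity is easy to check numerically. A minimal sketch (the payoff numbers in u1 are invented for illustration, not taken from the text):

```python
# Check that the four-outcome sum equals the weighted average
# p*E1(T, alpha2) + (1-p)*E1(B, alpha2). Payoffs below are hypothetical.
u1 = {("T", "L"): 3, ("T", "R"): 1, ("B", "L"): 0, ("B", "R"): 2}

def expected_payoff_sum(p, q):
    """Player 1's expected payoff as the probability-weighted sum
    over the four action pairs (the table of Figure 109.1)."""
    return (p * q * u1[("T", "L")] + p * (1 - q) * u1[("T", "R")]
            + (1 - p) * q * u1[("B", "L")]
            + (1 - p) * (1 - q) * u1[("B", "R")])

def expected_payoff_weighted(p, q):
    """The same payoff written as p*E1(T,alpha2) + (1-p)*E1(B,alpha2)."""
    E_T = q * u1[("T", "L")] + (1 - q) * u1[("T", "R")]
    E_B = q * u1[("B", "L")] + (1 - q) * u1[("B", "R")]
    return p * E_T + (1 - p) * E_B

assert abs(expected_payoff_sum(0.4, 0.7) - expected_payoff_weighted(0.4, 0.7)) < 1e-12
```

The two functions agree for every (p, q), which is exactly the weighted-average rewriting above.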
In particular, player 1’s expected payoff p·E1(T,α2) + (1−p)·E1(B,α2) is a linear function of p: it runs from E1(B,α2) at p = 0 to E1(T,α2) at p = 1. (Figure 110.1 plots the case E1(T,α2) > E1(B,α2), an upward-sloping line.)
A significant implication of this linearity of player 1’s expected payoff is that there are only three possibilities for her best response to a given mixed strategy of player 2:
player 1’s unique best response is the pure strategy T (if E1(T,α2) > E1(B,α2)): see Figure 110.1
player 1’s unique best response is the pure strategy B (if E1(T,α2) < E1(B,α2)): see Figure 110.1 with a downward-sloping line
all mixed strategies of player 1 yield the same expected payoff, and hence all are best responses (if E1(T,α2) = E1(B,α2)): see Figure 110.1 with a horizontal line
In particular, a mixed strategy (p, 1−p) for which 0 < p < 1 is never a unique best response.
Example: Matching Pennies revisited
Represent each player’s preferences by the expected value of a payoff function that assigns the payoff 1 to a gain of $1 and the payoff −1 to a loss of $1. The resulting strategic game with vNM preferences is (Figure 111.1):

                      Player 2
                      Head       Tail
Player 1   Head       1,−1       −1,1
           Tail       −1,1       1,−1
Denote by p the probability that player 1’s mixed strategy assigns to Head and by q the probability that player 2’s mixed strategy assigns to Head.
Player 1’s expected payoff to the pure strategy Head, given player 2’s mixed strategy, is: q·1 + (1−q)·(−1) = 2q − 1
Her expected payoff to Tail is: q·(−1) + (1−q)·1 = 1 − 2q
(Figure 112.1: best response functions of the two players, plotted in the (p,q)-square.)
Thus:
if q < ½, player 1’s expected payoff to Tail exceeds her expected payoff to Head (and hence also exceeds her expected payoff to any mixed strategy that assigns positive probability to Head)
similarly, if q > ½, her expected payoff to Head exceeds her expected payoff to Tail
if q = ½, both Head and Tail (and all her mixed strategies) lead to the same payoff
We conclude that player 1’s best responses to player 2’s strategy are: the mixed strategy that assigns probability 0 to Head if q < ½, the mixed strategy that assigns probability 1 to Head if q > ½, and all her mixed strategies if q = ½.
The best response function of player 2 is similar (see figure 112.1)
The set of mixed strategy Nash equilibria corresponds (as before)
to the set of intersections of the best response functions in figure
112.1.
Matching Pennies has no Nash equilibrium if the players are not allowed to randomize!
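The derivation above can be sketched in a few lines. A minimal illustration, assuming only the Matching Pennies payoffs of Figure 111.1:

```python
# Player 1's expected payoffs to Head and Tail as functions of q
# (player 2's probability of Head), and the resulting best responses.
def payoff_head(q):
    return 2 * q - 1          # q*1 + (1-q)*(-1)

def payoff_tail(q):
    return 1 - 2 * q          # q*(-1) + (1-q)*1

def best_response_p(q):
    """Player 1's best-response probabilities of playing Head against q."""
    if payoff_head(q) > payoff_tail(q):
        return {1.0}                      # play Head for sure
    if payoff_head(q) < payoff_tail(q):
        return {0.0}                      # play Tail for sure
    return "any p in [0,1]"               # indifferent: all mixes optimal

assert best_response_p(0.25) == {0.0}
assert best_response_p(0.75) == {1.0}
assert best_response_p(0.5) == "any p in [0,1]"
```

The jump at q = ½ is exactly the step in player 1’s best response function in Figure 112.1; player 2’s is symmetric, and the functions intersect only at the mixed equilibrium.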
Exercise 114.2
Find all the mixed strategy Nash equilibria of the strategic games in Figure 114.2:

        L      R                L      R
T      6,0    0,6        T     0,1    0,2
B      3,2    6,0        B     2,2    0,1

(Figure 114.2)
Exercise 114.3
Two people can perform a task if, and only if, they both exert effort. They are both better off if they both exert effort and perform the task than if neither exerts effort (and nothing is accomplished); the worst outcome for each person is that she exerts effort and the other person does not (in which case again nothing is accomplished). Specifically, the players’ preferences are represented by the expected value of the payoff functions in Figure 115.1, where c is a positive number less than 1 that can be interpreted as the cost of exerting effort. Find all the mixed strategy Nash equilibria of this game. How do the equilibria change as c increases? Explain the reasons for the changes.

              No effort    Effort
No effort     0,0          0,−c
Effort        −c,0         1−c,1−c

(Figure 115.1)
4.3.4 A useful characterization of mixed strategy Nash equilibrium
The method used up to now to find mixed strategy Nash equilibria involves constructing the players’ best response functions. In complicated games, this method may be intractable. There is a characterization of mixed strategy Nash equilibria that is an invaluable tool in the study of general games.
The key is the following observation: a player’s expected payoff to a mixed strategy profile α is a weighted average of her expected payoffs to all profiles of the type (ai, α−i), where the weight attached to each profile (ai, α−i) is the probability αi(ai) assigned to the action ai by the player’s mixed strategy αi (see Section 4.3.3).
Symbolically:

Ui(α) = Σ over ai in Ai of αi(ai)·Ei(ai, α−i)

where:
Ai is player i’s set of actions (pure strategies)
Ei(ai, α−i) is her expected payoff when she uses the pure strategy that assigns probability 1 to ai and every other player j uses her mixed strategy αj.
This leads to the following analysis:
Let α* be a mixed strategy Nash equilibrium and denote by E*i player i’s expected payoff in the equilibrium.
Because α* is an equilibrium, player i’s expected payoff, given α*−i, to each of her strategies (including all her pure strategies) is at most E*i.
But E*i is a weighted average of player i’s expected payoffs to the pure strategies to which α*i assigns positive probability.
Thus, player i’s expected payoffs to these pure strategies are all equal to E*i (if any were smaller, the weighted average would be smaller!).
We conclude that:
the expected payoff to each action to which α*i assigns positive probability is E*i
the expected payoff to every other action is at most E*i

Proposition 116.2
A mixed strategy profile α* in a strategic game with vNM preferences in which each player has finitely many actions is a mixed strategy Nash equilibrium if and only if, for each player i:
the expected payoff, given α*−i, to every action to which α*i assigns positive probability is the same
the expected payoff, given α*−i, to every action to which α*i assigns zero probability is at most the expected payoff to any action to which α*i assigns positive probability
Each player’s expected payoff in an equilibrium is her expected payoff to any of her actions that she uses with positive probability.
This proposition allows us to check whether a mixed strategy profile is an equilibrium.
Example 117.1 (Figure 117.1):

              L(0)     C(1/3)    R(2/3)
T(3/4)        ·,2      3,3       1,1
M(0)          ·,·      0,·       2,·
B(1/4)        ·,4      5,1       0,7
For the game in Figure 117.1 (in which the dots indicate irrelevant payoffs), the indicated pair of strategies ((3/4, 0, 1/4) for player 1 and (0, 1/3, 2/3) for player 2) is a mixed strategy Nash equilibrium.
To verify this claim it suffices, by Proposition 116.2, to study each player’s expected payoffs to her three pure strategies. For player 1, these payoffs are:

T: (1/3)·3 + (2/3)·1 = 5/3
M: (1/3)·0 + (2/3)·2 = 4/3
B: (1/3)·5 + (2/3)·0 = 5/3
Player 1’s mixed strategy assigns positive probability to T and B and probability zero to M. So the two conditions of Proposition 116.2 are satisfied for player 1.
The same verification is easily done for player 2. Note, however, that for player 2 the action L (which she uses with probability 0) has the same expected payoff as her other two actions. This equality is consistent with Proposition 116.2 (“no greater than”).
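The verification for player 1 can also be sketched mechanically. A minimal check of the two conditions of Proposition 116.2 (the dots, being irrelevant payoffs, are set to 0 here; they are multiplied by player 2’s zero probability on L, so their value never matters):

```python
from fractions import Fraction as F

# Player 1's payoffs in Figure 117.1 against columns L, C, R
# (the "." entries of the figure are replaced by 0).
u1 = {"T": [0, 3, 1], "M": [0, 0, 2], "B": [0, 5, 0]}
alpha2 = [F(0), F(1, 3), F(2, 3)]                  # player 2's mix over L, C, R
alpha1 = {"T": F(3, 4), "M": F(0), "B": F(1, 4)}   # player 1's mix over T, M, B

# Expected payoff of each pure strategy of player 1 against alpha2.
E = {a: sum(p * u for p, u in zip(alpha2, u1[a])) for a in u1}
support = [a for a in alpha1 if alpha1[a] > 0]

# Condition 1: actions in the support all yield the same expected payoff.
assert len({E[a] for a in support}) == 1 and E["T"] == F(5, 3)
# Condition 2: actions outside the support do no better (E["M"] = 4/3).
assert all(E[a] <= E["T"] for a in u1 if alpha1[a] == 0)
```

Exact rational arithmetic (`fractions`) avoids any floating-point doubt about the equalities 5/3 = 5/3 and 4/3 ≤ 5/3.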
Exercise 117.2 (Choosing numbers)
Players 1 and 2 each choose a positive integer up to K. If the
players choose the same number, then player 2 pays $1 to
player 1; otherwise no payment is made. Each player’s
preferences are represented by her expected monetary payoff.
Show that the game has a mixed strategy Nash equilibrium in
which each player chooses each positive integer up to K with
probability 1/K
Show that the game has no other mixed strategy Nash equilibria
(Deduce from the fact that player 1 assigns positive probability to
some action k that player 2 must do so; then look at the implied
restriction on player 1’s equilibrium strategy)
Note finally that
an implication of Proposition 116.2 is that a nondegenerate mixed
strategy equilibrium (a mixed strategy equilibrium that is not also a
pure strategy equilibrium) is never a strict Nash equilibrium: every
player whose mixed strategy assigns a positive probability to more
than one action is indifferent between her equilibrium mixed
strategy and every action to which this mixed strategy assigns
positive probability.
The theory of mixed Nash equilibrium does not state that players
consciously choose their strategies at random given the equilibrium
probabilities. Rather, the conditions for equilibrium are designed to
ensure that it is consistent with a steady state. The question of how
a steady state may come about remains to be studied at this stage.
4.3.5 Existence of equilibrium in finite games
Proposition 119.1 (Existence of mixed strategy Nash
equilibrium in finite games)
Every strategic game with vNM preferences in which each player
has finitely many actions has a mixed strategy Nash equilibrium.
This proposition does not help to find the equilibrium but it is a
useful fact.
Note also that:
the finiteness of the number of actions is a sufficient condition for the existence of an equilibrium, not a necessary one
a player’s strategy in a mixed strategy Nash equilibrium may assign probability 1 to a single action.
4.4 Dominated actions
Definition 120.1 (Strict domination)
In a strategic game with vNM preferences, player i’s mixed strategy αi strictly dominates her action a’i if:

Ui(αi, a−i) > ui(a’i, a−i) for every list a−i of the other players’ actions

where ui is a Bernoulli payoff function and Ui(αi, a−i) is player i’s expected payoff under ui when she uses the mixed strategy αi and the actions chosen by the other players are given by a−i.
An action not strictly dominated by any pure strategy may be strictly dominated by a mixed strategy (Figure 120.1 shows only player 1’s payoffs):

        L    R
T       1    1
M       4    0
B       0    3

(Figure 120.1)
The action T of player 1 is not strictly (or weakly) dominated by M or B, but it is strictly dominated by the mixed strategy that assigns probability ½ to M and probability ½ to B.
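This claim is easy to verify numerically; a minimal sketch using only player 1’s payoffs from Figure 120.1:

```python
# Player 1's payoffs in Figure 120.1.
u1 = {"T": {"L": 1, "R": 1}, "M": {"L": 4, "R": 0}, "B": {"L": 0, "R": 3}}

def mixed_payoff(probs, column):
    """Expected payoff of a mixed strategy over player 1's rows,
    against a fixed column of player 2."""
    return sum(p * u1[action][column] for action, p in probs.items())

mix = {"M": 0.5, "B": 0.5}
# No pure strategy dominates T: each of M and B does worse in some column.
assert u1["M"]["R"] < u1["T"]["R"] and u1["B"]["L"] < u1["T"]["L"]
# The 50/50 mix beats T in every column: 2 > 1 against L, 1.5 > 1 against R.
assert all(mixed_payoff(mix, c) > u1["T"][c] for c in ("L", "R"))
```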
Exercise 120.2 (Strictly dominated mixed strategy)
In Figure 120.1, the mixed strategy that assigns probability ½ to M and ½ to B is not the only mixed strategy that strictly dominates T. Find all the mixed strategies that do so.
Exercise 120.3 (Strict domination for mixed strategies)
Determine whether each of the following statements is true or false:
A mixed strategy that assigns positive probability to a strictly dominated action is strictly dominated.
A mixed strategy that assigns positive probability only to actions that are not strictly dominated is not strictly dominated.
A strictly dominated action is not a best response to any collection of mixed strategies of the other players:
Suppose that player i’s action a’i is strictly dominated by her mixed strategy αi.
Player i’s expected payoff Ui(αi, α−i) when she uses the mixed strategy αi and the other players use the mixed strategies α−i is a weighted average of her payoffs Ui(αi, a−i) as a−i varies over all the collections of actions for the other players, with the weight on each a−i equal to the probability with which it occurs when the other players’ mixed strategies are α−i.
Player i’s expected payoff when she uses the action a’i and the other players use the mixed strategies α−i is a similar weighted average; the weights are the same, but the terms take the form ui(a’i, a−i) rather than Ui(αi, a−i).
The fact that a’i is strictly dominated by αi means that Ui(αi, a−i) > ui(a’i, a−i) for every collection a−i of the other players’ actions.
Hence player i’s expected payoff when she uses the mixed strategy αi exceeds her expected payoff when she uses the action a’i, given α−i.
Consequently, a strictly dominated action is not used with positive probability in any mixed strategy Nash equilibrium.
Definition 121.1 (Weak domination)
In a strategic game with vNM preferences, player i’s mixed strategy αi weakly dominates her action a’i if:

Ui(αi, a−i) ≥ ui(a’i, a−i) for every list a−i of the other players’ actions

and

Ui(αi, a−i) > ui(a’i, a−i) for some list a−i of the other players’ actions

where ui is a Bernoulli payoff function and Ui(αi, a−i) is player i’s expected payoff under ui when she uses the mixed strategy αi and the actions chosen by the other players are given by a−i.
Note that, since a weakly dominated action may be used in a Nash equilibrium, a weakly dominated action may also be used with positive probability in a mixed strategy equilibrium. We therefore cannot eliminate weakly dominated actions from consideration when finding mixed strategy equilibria.
However:
Proposition 122.1 (Existence of mixed strategy Nash
equilibrium with no weakly dominated strategies in finite games)
Every strategic game with vNM preferences in which each
player has finitely many actions has a mixed strategy Nash
equilibrium in which no player’s strategy is weakly dominated.
4.5 Pure equilibria when randomization is allowed
Equilibria when the players are not allowed to randomize remain equilibria when they are allowed to randomize.
Proposition 122.2 (Pure strategy equilibria survive when randomization is allowed)
Let a* be a Nash equilibrium of G and, for each player i, let α*i be the mixed strategy of player i that assigns probability one to the action a*i. Then α* is a mixed strategy Nash equilibrium of G’.
Any pure equilibria that exist when the players are allowed to randomize are equilibria when they are not allowed to randomize.
Proposition 123.1 (Pure strategy equilibria survive when randomization is prohibited)
Let α* be a mixed strategy Nash equilibrium of G’ in which the mixed strategy of each player i assigns probability one to the single action a*i. Then a* is a Nash equilibrium of G.
To establish these two propositions, let N be a set of players and let Ai, for each player i, be a set of actions.
Consider the following two games:
G: the strategic game with ordinal preferences in which the set of players is N, the set of actions of each player i is Ai, and the preferences of each player i are represented by the payoff function ui
G’: the strategic game with vNM preferences in which the set of players is N, the set of actions of each player i is Ai, and the preferences of each player i are represented by the expected value of ui
Proposition 122.2:
Let a* be a Nash equilibrium of G and, for each player i, let α*i be the mixed strategy that assigns probability 1 to a*i. Since a* is a Nash equilibrium of G, we know that in G’ no player i has an action that yields her a payoff higher than does a*i when all other players adhere to α*−i. Thus α* satisfies the two conditions in Proposition 116.2, so it is a mixed strategy Nash equilibrium of G’.
Proposition 123.1:
Let α* be a mixed strategy Nash equilibrium of G’ in which every player’s mixed strategy is pure. For each player i, denote by a*i the action to which α*i assigns probability one. Then no mixed strategy of player i yields her a payoff higher than does α*i. Thus a* is a Nash equilibrium of G.
4.7 Equilibrium in a single population
Definition 129.1 (Symmetric two-player strategic game with vNM preferences)
A two-player strategic game with vNM preferences is symmetric if the players’ sets of actions are the same and the players’ preferences are represented by the expected values of payoff functions u1 and u2 for which u1(a1,a2) = u2(a2,a1) for every action pair (a1,a2).
Definition 129.2 (Symmetric mixed strategy Nash equilibrium)
A profile α* of mixed strategies in a strategic game with vNM preferences in which each player has the same set of actions is a symmetric mixed strategy Nash equilibrium if it is a mixed strategy Nash equilibrium and α*i is the same for every player i.
Game of approaching pedestrians (Figure 129.1):

           Left    Right
Left       1,1     0,0
Right      0,0     1,1

This game has two deterministic steady states, (Left,Left) and (Right,Right), corresponding to the two symmetric Nash equilibria in pure strategies.
The game also has a symmetric mixed strategy Nash equilibrium, in which each player assigns probability ½ to Left and probability ½ to Right.
This equilibrium corresponds to a steady state in which half of all encounters result in collisions!
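Both claims can be checked in a few lines. A minimal sketch, assuming only the payoffs of the pedestrian game:

```python
# Payoffs of the symmetric pedestrian game: 1 when the actions match, 0 otherwise.
u = {("Left", "Left"): 1, ("Left", "Right"): 0,
     ("Right", "Left"): 0, ("Right", "Right"): 1}

def expected(own_action, q_left):
    """A player's expected payoff when the opponent goes Left with prob q_left."""
    return q_left * u[(own_action, "Left")] + (1 - q_left) * u[(own_action, "Right")]

# With the opponent mixing 1/2-1/2, both actions yield the same payoff,
# so mixing 1/2-1/2 is itself a best response (Proposition 116.2).
assert expected("Left", 0.5) == expected("Right", 0.5) == 0.5
# Probability of a collision (mismatched actions) in this steady state:
p = q = 0.5
assert p * (1 - q) + (1 - p) * q == 0.5
```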
Exercise 130.3 (Bargaining)
Pairs of players from a single population bargain over the division of a
pie of size 10. The members of a pair simultaneously make demands.
The possible demands are nonnegative even integers up to 10.
If the demands sum to 10, then each player receives her demand. If the
demands sum to less than 10, then each player receives her demand
plus half of the pie that remains after both demands have been
satisfied. If the demands sum to more than 10, then neither player
receives any payoff.
Find all the symmetric mixed strategy Nash equilibria in which each player assigns positive probability to at most two demands (many situations in which each player assigns positive probability to two actions – say a’ and a’’ – can be ruled out as equilibria, because when one player uses such a strategy, some action yields the other player a payoff higher than does one or both of the actions a’ and a’’).
4.9 The formation of players’ beliefs
In a Nash equilibrium, each player chooses a strategy that
maximizes her expected payoff, knowing the other players’
strategies.
The idea underlying the previous analysis is that the players
have learned each other’s strategies from their experience
playing the game.
The idealized situation is the following:
for each player in the game, there is a large population of
individuals who may take the role of that player
in any play of the game, one participant is drawn randomly from
each population
In this situation, a new individual who joins a population can
learn the other players’ strategies by observing their actions
over many plays of the game.
As long as the number of new players is low, existing players’
encounters with neophytes (who may use nonequilibrium
strategies) will be sufficiently rare that their beliefs about the
steady state will not be disturbed. So, a new player’s problem is
simply to learn the other players’ actions.
But what might happen if new players simultaneously join more than one population in numbers large enough that the probability of encountering them is no longer small? In particular, can we expect a steady state to be reached if no one has experience?
4.9.1 Eliminating dominated actions
In some games, players may reasonably be expected to choose
their Nash equilibrium actions from an introspective analysis of
the game:
At the extreme (e.g., the Prisoner’s Dilemma), each player’s best action is independent of the other players’ actions.
In a less extreme case, some player’s best action may depend on
the other players’ actions, but the actions the other players will
choose may be clear because each of these players has an action
that strictly dominates all others.
e.g., in the game in Figure 135.1, player 2’s action R strictly dominates L. So, no matter what player 2 thinks player 1 will play, she should choose R. Consequently player 1, who can deduce by this argument that player 2 will choose R, may reason that she should choose B, even without any experience of the game.
T
B
L R
1,1
0,0
0,0
1,1
(Figure 135.1)
4.9.2 Learning
Another approach to the question of how a steady state might be reached assumes that each player learns:
she starts with an unexplained “prior” belief about the other players’ actions
she changes these beliefs in response to information she receives
Two theories are:
Best response dynamics: a simple theory assumes that in each period after the first, each player believes that the other players will choose the actions they chose in the previous period:
in the first period, each player chooses a best response to an arbitrary deterministic belief about the other players’ actions
in every subsequent period, each player chooses a best response to the other players’ actions in the previous period
An action profile that remains the same from period to period (a steady state) is then a pure Nash equilibrium of the game. The two questions are then:
does the process converge to a steady state?
how long does it take to converge?
e.g., the BoS game (Example 18.2), for some initial beliefs, does not converge: if player 1 initially believes that player 2 will choose Stravinsky and player 2 initially believes that player 1 will choose Bach, then the players’ choices will subsequently alternate indefinitely between the action pairs (Bach, Stravinsky) and (Stravinsky, Bach).
Fictitious play: under best response dynamics, a player’s belief does not admit the possibility that her opponents’ actions are realizations of mixed strategies. Under fictitious play, players consider the actions in all previous periods when forming a belief about their opponents’ strategies, and treat these actions as realizations of mixed strategies:
each player begins with an arbitrary probabilistic belief about the other player’s actions
then, in any period, she adopts the belief that her opponent is using a mixed strategy in which the probability of each action is proportional to the frequency with which her opponent chose that action in the previous periods.
Note that:
in any two-player game in which each player has two actions (e.g., Matching Pennies), this process converges to a mixed strategy Nash equilibrium from any initial beliefs;
for other games, there are initial beliefs for which the process does not converge.
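Fictitious play is straightforward to simulate. A minimal sketch for Matching Pennies (the payoff table and the arbitrary initial beliefs are the only inputs; the empirical frequencies approach the ½–½ mixed equilibrium):

```python
# Fictitious play in Matching Pennies: each player best-responds to the
# empirical frequencies of the opponent's past actions.
u1 = {("H", "H"): 1, ("H", "T"): -1, ("T", "H"): -1, ("T", "T"): 1}
u2 = {(a2, a1): -u1[(a1, a2)] for (a1, a2) in u1}   # zero-sum opponent

def best_response(payoffs, opponent_counts):
    """Best pure reply to the opponent's empirical action frequencies."""
    def value(a):
        return sum(opponent_counts[b] * payoffs[(a, b)] for b in "HT")
    return max("HT", key=value)

counts1 = {"H": 1, "T": 0}   # observed actions of player 1 (arbitrary start)
counts2 = {"H": 0, "T": 1}   # observed actions of player 2 (arbitrary start)
for _ in range(20000):
    a1 = best_response(u1, counts2)   # both respond to beliefs formed so far
    a2 = best_response(u2, counts1)
    counts1[a1] += 1
    counts2[a2] += 1

freq_H1 = counts1["H"] / sum(counts1.values())
assert abs(freq_H1 - 0.5) < 0.02   # empirical play approaches 1/2-1/2
```

Play itself cycles through the four action pairs in ever-longer runs; only the empirical frequencies converge, which is exactly the sense of convergence the theory claims.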
4.10 Extension: finding all mixed strategy Nash equilibria
The following systematic method can be used to find all mixed strategy Nash equilibria of a game:
For each player i, choose a subset Si of her set Ai of actions.
Check whether there exists a mixed strategy profile α such that:
(i) the set of actions to which each strategy αi assigns positive probability is Si
(ii) α satisfies the conditions of Proposition 116.2
Repeat the analysis for every collection of subsets of the players’ sets of actions.
The shortcoming of the method is that for games in which each player has several actions, or in which there are several players, the number of possibilities to examine is huge. In a two-player game in which each player has three actions:
each player’s set of actions has seven nonempty subsets (three consisting of a single action, three consisting of two actions, and one consisting of the entire set of actions)
so there are 49 (7×7) possible collections of subsets to check.
Example 138.1: finding all mixed strategy equilibria of a two-player game in which each player has two actions.
Denote the actions and payoffs as in Figure 139.1:

        L          R
T      u11,v11    u12,v12
B      u21,v21    u22,v22

Each player’s set of actions has three nonempty subsets: two each consisting of a single action and one consisting of both actions. Thus there are nine (3×3) pairs of subsets of the players’ action sets.
For each pair (S1,S2), we check whether there is a pair (α1,α2) of mixed strategies such that each strategy αi assigns positive probability only to actions in Si and the conditions in Proposition 116.2 are satisfied:
checking the four pairs of subsets in which each player’s subset consists of a single action amounts to checking whether any of the four pairs of actions is a pure strategy equilibrium
consider the pair of subsets {T,B} for player 1 and {L} for player 2:
the second condition in Proposition 116.2 is automatically satisfied for player 1 (she has no action to which she assigns probability 0)
the first condition in Proposition 116.2 is automatically satisfied for player 2 (she assigns positive probability to only one action).
Thus, for there to be a mixed strategy equilibrium in which player 1’s probability of using T is p, we need:
u11 = u21: player 1’s payoffs to her two actions must be equal
p·v11 + (1−p)·v21 ≥ p·v12 + (1−p)·v22: L must be at least as good as R for player 2, given player 1’s mixed strategy.
A similar argument applies to the three other pairs of subsets in which one player’s subset consists of both her actions and the other player’s subset consists of a single action.
Finally, to check whether there is a mixed strategy equilibrium in which the subsets are {T,B} for player 1 and {L,R} for player 2, we need to find a pair of mixed strategies that satisfies the first condition of Proposition 116.2 (the second condition being automatically satisfied, since no action has probability 0). That is, we need to find p and q such that:

q·u11 + (1−q)·u12 = q·u21 + (1−q)·u22
p·v11 + (1−p)·v21 = p·v12 + (1−p)·v22
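These two indifference equations can be solved in closed form. A minimal sketch (the function name and the Matching Pennies test values are illustrative, not from the text):

```python
# Solve the indifference equations of a 2x2 game for an interior mixed
# equilibrium. Returns None when no solution lies strictly between 0 and 1.
def interior_equilibrium(u, v):
    """u[r][c], v[r][c]: players 1 and 2 payoffs, rows T=0/B=1, cols L=0/R=1."""
    # q*u11 + (1-q)*u12 = q*u21 + (1-q)*u22  =>  q*(u11-u12-u21+u22) = u22-u12
    den_q = u[0][0] - u[0][1] - u[1][0] + u[1][1]
    # p*v11 + (1-p)*v21 = p*v12 + (1-p)*v22  =>  p*(v11-v21-v12+v22) = v22-v21
    den_p = v[0][0] - v[1][0] - v[0][1] + v[1][1]
    if den_q == 0 or den_p == 0:
        return None                    # degenerate case: no unique solution
    q = (u[1][1] - u[0][1]) / den_q
    p = (v[1][1] - v[1][0]) / den_p
    return (p, q) if 0 < p < 1 and 0 < q < 1 else None

# Matching Pennies has the interior equilibrium p = q = 1/2:
u = [[1, -1], [-1, 1]]
v = [[-1, 1], [1, -1]]
assert interior_equilibrium(u, v) == (0.5, 0.5)
```

Any (p, q) the function returns still has to be combined with the pure and partial-support cases of the enumeration above to obtain all equilibria.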
Example 139.2: find all the mixed strategy equilibria of a variant of BoS (Figure 139.2):

        B      S      X
B      4,2    0,0    0,1
S      0,0    2,4    1,3

Exercise 141.1: find all the mixed strategy equilibria of the two-player game in Figure 141.1:

        L      M      R
T      2,2    0,3    1,3
B      3,2    1,1    0,2
Exercise 142.1: find the mixed strategy Nash equilibria of the three-player game in Figure 142.1 (each player has two actions):

        A        B
A      1,1,1    0,0,0
B      0,0,0    0,0,0

(Figure 142.1)
4.11 Extension: games in which each player has a continuum of actions
Consider now the case of a continuum of actions: the principles involved in finding mixed strategy equilibria are the same as for games with finitely many actions, though the techniques are different.
In a game in which a player has a continuum of actions, a mixed strategy of a player is determined by the probabilities it assigns to sets of actions.
Proposition 116.2 becomes:
Proposition 142.2 (Characterization of mixed strategy Nash equilibrium)
A mixed strategy profile α* in a strategic game with vNM preferences is a mixed strategy Nash equilibrium if and only if, for each player i:
α*i assigns probability zero to the set of actions ai for which the action profile (ai, α*−i) yields player i an expected payoff less than her expected payoff to α*
for no action ai does the action profile (ai, α*−i) yield player i an expected payoff greater than her expected payoff to α*.
Games with a continuum of actions can be very complex to analyze. A significant class consists of games in which each player’s set of actions is a one-dimensional interval of numbers:
Consider such a game with two players.
Let player i’s set of actions be an interval of numbers, for i = 1,2.
Identify each player’s mixed strategy with a cumulative probability distribution on this interval: the mixed strategy of each player i is a nondecreasing function Fi for which 0 ≤ Fi(ai) ≤ 1 for every action ai. The number Fi(ai) is the probability that player i’s action is at most ai.
The form of a mixed strategy Nash equilibrium in such a game can be very complex, but some such games have equilibria of a particularly simple form, in which each player’s equilibrium mixed strategy assigns probability zero except in an interval.
The mixed strategies (F1,F2) satisfy the following conditions for i = 1,2:
There are numbers xi and yi such that player i’s mixed strategy Fi assigns probability zero except in the interval from xi to yi: Fi(z) = 0 for z < xi, and Fi(z) = 1 for z ≥ yi.
Player i’s expected payoff when her action is ai and the other player uses her mixed strategy Fj satisfies:

= ci for xi ≤ ai ≤ yi
≤ ci for ai < xi and for ai > yi

where ci is a constant.
Example 143.1 (All-pay auction)
Two people submit sealed bids for an object worth $K to each of them. Each person’s bid may be any nonnegative number up to $K. The winner is the person whose bid is higher. In the event of a tie, each person receives half of the object (which she values at $K/2). Each person pays her bid, regardless of whether she wins, and has preferences represented by the expected amount of money she receives.
This situation may be modeled by the following strategic game:
Players: the two bidders
Actions: each player’s set of actions is the set of possible bids
(nonnegative numbers up to K)
Payoff functions: Each player i’s preferences are represented by
the expected value of the payoff function given by:
u_i(a_1, a_2) = −a_i         if a_i < a_j
u_i(a_1, a_2) = K/2 − a_i    if a_i = a_j
u_i(a_1, a_2) = K − a_i      if a_i > a_j
e.g.: a competition between two firms to develop a new product by
some deadline, where the firm that spends the most develops a
better product, which captures the entire market.
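The payoff function above can be written out directly (a minimal sketch; the value K = 10 is an arbitrary choice for illustration):

```python
def allpay_payoff(i, a, K):
    """Payoff of player i (0 or 1) in the two-bidder all-pay auction
    with bid profile a = (a0, a1): each bidder pays her own bid, the
    higher bid wins the object worth K, and a tie splits it."""
    ai, aj = a[i], a[1 - i]
    if ai < aj:
        return -ai           # lose: pay your bid, get nothing
    if ai == aj:
        return K / 2 - ai    # tie: half the object, still pay the bid
    return K - ai            # win: the object minus your bid

# With K = 10: bidding 4 against 6 loses 4; bidding 6 against 4 nets 4.
```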
4.11 Extension: games in which each player has a continuum of actions
An all-pay auction has no pure strategy Nash equilibrium, by the
following argument:
No pair of actions (x, x) with x < K is a Nash equilibrium, because
either player can increase her payoff by slightly increasing her bid.
(K, K) is not a Nash equilibrium, because either player can increase
her payoff from −K/2 to 0 by reducing her bid to 0.
No pair of actions (a_1, a_2) with a_1 ≠ a_2 is a Nash equilibrium, because
the player whose bid is higher can increase her payoff by reducing
her bid (and the player whose bid is lower can, if her bid is positive,
increase her payoff by reducing her bid to 0).
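The same deviations rule out every pure profile on a discretized bid grid as well (a sketch we add for illustration; the grid of integer bids and K = 10 are our own choices, and the payoff rule restates the function from the earlier slide):

```python
from fractions import Fraction

K = 10
bids = range(0, K + 1)  # discretized bids 0, 1, ..., K

def payoff(ai, aj):
    # all-pay auction payoff of a bidder who bids ai against aj
    if ai < aj:
        return Fraction(-ai)
    if ai == aj:
        return Fraction(K, 2) - ai
    return Fraction(K - ai)

def is_pure_nash(a1, a2):
    best1 = max(payoff(b, a2) for b in bids)
    best2 = max(payoff(b, a1) for b in bids)
    return payoff(a1, a2) == best1 and payoff(a2, a1) == best2

# exhaustive search over all pure profiles finds no equilibrium
equilibria = [(a1, a2) for a1 in bids for a2 in bids if is_pure_nash(a1, a2)]
```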
4.11 Extension: games in which each player has a continuum of actions
Consider the possibility that the game has a mixed strategy Nash
equilibrium. Denote by F_i player i's mixed strategy (a cumulative
distribution function over the interval of possible bids).
We look for an equilibrium in which neither mixed strategy
assigns positive probability to any single bid (there are infinitely
many possible bids, and for a continuous random variable,
Prob(x = c) = 0).
In that case, F_i(a_i) is both the probability that player i bids at most a_i
and the probability that she bids less than a_i.
We restrict our attention to strategy pairs (F_1, F_2) for which, for
i = 1, 2, there are numbers x_i and y_i such that F_i assigns positive
probability only to the interval from x_i to y_i.
4.11 Extension: games in which each player has a continuum of actions
To investigate the possibility of such an equilibrium, consider
player 1's expected payoff when she uses the action a_1, given
player 2's mixed strategy F_2:
if a_1 < x_2, then a_1 is less than player 2's bid with probability one, so that
player 1's payoff is −a_1
if a_1 > y_2, then a_1 exceeds player 2's bid with probability one, so that
player 1's payoff is K − a_1
if x_2 ≤ a_1 ≤ y_2, then player 1's expected payoff is computed as follows:
with probability F_2(a_1), player 2's bid is less than a_1, in which case player
1's payoff is K − a_1
with probability 1 − F_2(a_1), player 2's bid exceeds a_1, in which case player
1's payoff is −a_1
by assumption, the probability that player 2's bid is exactly equal to a_1 is
zero
Player 1's expected payoff is (K − a_1) F_2(a_1) + (−a_1)(1 − F_2(a_1)) = K F_2(a_1) − a_1
4.11 Extension: games in which each player has a continuum of actions
We need to find values of x_1 and y_1 and a strategy F_2 such that
player 1's expected payoff satisfies the condition of Proposition
142.2: it is a constant c_1 on the interval from x_1 to y_1 and less
than this constant outside this interval.
(Figure 144.1: player 1's expected payoff as a function of a_1,
equal to c_1 between x_1 and y_1)
4.11 Extension: games in which each player has a continuum of actions
The conditions are therefore:
K F_2(a_1) − a_1 = c_1 for x_1 ≤ a_1 ≤ y_1, for some constant c_1
F_2(x_2) = 0 and F_2(y_2) = 1
F_2 must be nondecreasing (it is a CDF)
and analogous conditions must hold for x_2, y_2, and F_1.
Solution: if x_1 = x_2 = 0, y_1 = y_2 = K, and F_1(z) = F_2(z) = z/K
for all z with 0 ≤ z ≤ K, these conditions are fulfilled. Each
player's expected payoff is then constant and equal to 0 for all
her actions.
Thus, the game has a mixed strategy Nash equilibrium in which
each player randomizes uniformly over all her actions. Proving
that it is the only mixed strategy Nash equilibrium is more
complex.
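This solution can be checked directly (a sketch with K = 10, an arbitrary choice): under F_2(z) = z/K, the expected payoff K·F_2(a_1) − a_1 of every bid a_1 in [0, K] is identically zero.

```python
K = 10.0

def F(z):
    # equilibrium mixed strategy: the uniform CDF on [0, K]
    return z / K

def expected_payoff(a1):
    # K * F2(a1) - a1, the expression derived on the previous slides
    return K * F(a1) - a1

# every bid on a grid over [0, K] yields (numerically) the same payoff, 0
payoffs = [expected_payoff(0.5 * k) for k in range(0, 21)]
```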
4.12 Appendix: representing
preferences by expected payoffs
4.12.1 Expected payoffs
Suppose that a decision-maker has preferences over a set of
deterministic outcomes and that each of her actions results in a
lottery (probability distribution) over these outcomes.
To determine the action she chooses, we need her preferences
over lotteries.
We cannot derive these preferences from her preferences over
deterministic outcomes. So, assume we are given preferences over
lotteries.
Under fairly weak assumptions, we can represent these
preferences by a payoff function: we can find a function, say U,
over lotteries (p_1, …, p_K) such that U(p_1, …, p_K) > U(p'_1, …, p'_K)
if and only if the decision-maker prefers (p_1, …, p_K) to (p'_1, …, p'_K),
where p_k is the probability of the kth outcome.
4.12 Appendix: representing
preferences by expected payoffs
In most cases, however, we need more structure to go farther in the
analysis. The standard approach, developed by von Neumann and
Morgenstern (1944), is to impose an additional assumption (known as the
"independence assumption") that allows us to conclude that the decision-
maker's preferences can be represented by an expected payoff function.
Under this assumption, there is a payoff function u over deterministic
outcomes such that the decision-maker's preference relation over lotteries
is represented by the function

U(p_1, …, p_K) = Σ_{k=1..K} p_k u(a_k)

where a_k is the kth outcome of the lottery, and

Σ_{k=1..K} p_k u(a_k) > Σ_{k=1..K} p'_k u(a_k)

if and only if the decision-maker prefers the lottery (p_1, …, p_K) to (p'_1, …, p'_K).
4.12 Appendix: representing
preferences by expected payoffs
This sort of payoff function (for which the decision-maker's
preferences are represented by the expected value of the
payoffs) is known as a Bernoulli payoff function.
e.g.: suppose that there are three possible deterministic
outcomes: the decision-maker may receive $0, $1 or $5 (and
naturally prefers $5 to $1 to $0). Suppose that she prefers the
lottery (1/2, 0, 1/2) to the lottery (0, 3/4, 1/4), where probabilities
are given for the outcomes $0, $1 and $5. This preference is
consistent with preferences represented by the expected value
of a payoff function u for which u(0) = 0, u(1) = 1 and u(5) = 4:

(1/2)·0 + (1/2)·4 > (3/4)·1 + (1/4)·4
4.12 Appendix: representing
preferences by expected payoffs
The great advantage of a Bernoulli payoff function is that preferences are
completely specified by the payoff function: once we know u(a_k) for each
possible outcome a_k, we know the decision-maker's preferences among
all lotteries.
A Bernoulli payoff function must not, however, be confused with a payoff
function that merely represents the decision-maker's preferences over
deterministic outcomes:
if u is a Bernoulli payoff function, it certainly is a payoff function that
represents the decision-maker's preferences over deterministic
outcomes
however, the converse is not true.
e.g.: suppose a decision-maker prefers $5 to $1 to $0 and prefers the lottery
(1/2, 0, 1/2) to (0, 3/4, 1/4). Define u as u(0) = 0, u(1) = 3 and u(5) = 4. u is
compatible with her preferences over deterministic outcomes. However, it is
not compatible with her preferences over lotteries:

(1/2)·0 + (1/2)·4 < (3/4)·3 + (1/4)·4
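Both numerical examples can be checked mechanically (a small sketch; lotteries are probability triples over the outcomes $0, $1, $5, and a payoff function is a triple of its values on those outcomes):

```python
def expected_utility(u, lottery):
    """Expected payoff of a lottery (p0, p1, p5) over outcomes $0, $1, $5
    under the Bernoulli payoff function u = (u(0), u(1), u(5))."""
    return sum(p * ux for p, ux in zip(lottery, u))

p = (0.5, 0.0, 0.5)    # the lottery the decision-maker prefers
q = (0.0, 0.75, 0.25)

u = (0, 1, 4)   # consistent with preferring p to q
assert expected_utility(u, p) > expected_utility(u, q)   # 2.0 > 1.75

w = (0, 3, 4)   # same ordinal ranking of outcomes, but not consistent
assert expected_utility(w, p) < expected_utility(w, q)   # 2.0 < 3.25
```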
4.12 Appendix: representing
preferences by expected payoffs
4.12.2 Equivalent Bernoulli payoff functions
Lemma 148.1 (Equivalence of Bernoulli payoff functions)
Suppose that there are at least three possible outcomes. The
expected values of the Bernoulli payoff functions u and v
represent the same preferences over lotteries if and only if there
exist numbers k and m (with m > 0) such that u(x) = k + m v(x), for
all x.
Exercise 149.2 (Normalized Bernoulli payoff functions)
Suppose that a decision-maker's preferences can be
represented by the expected value of the Bernoulli payoff
function u. Find a Bernoulli payoff function whose expected
value represents the decision-maker's preferences and assigns
a payoff of 1 to the best outcome and a payoff of 0 to the worst
outcome.
4.12 Appendix: representing
preferences by expected payoffs
4.12.3 Equivalent strategic games with vNM preferences
the three games of Figure 150.1 represent the same strategic game
with deterministic preferences
only the left and middle tables represent the same strategic game
with vNM preferences. The reason is that the payoff functions in the
middle table are linear functions of the payoff functions in the left
table, whereas the payoff functions in the right table are not.
(Figure 150.1: three 2×2 payoff tables with actions B and S)
4.12 Appendix: representing
preferences by expected payoffs
Denote by u_i, v_i, w_i the Bernoulli payoff functions of the three games.
Then v_1(a) = 2 u_1(a) and v_2(a) = 3 + u_2(a). But w_1 is not a linear
function of u_1: there are no constants μ and θ such that w_1(a) = μ + θ u_1(a),
because the system

μ + θ·0 = 0
μ + θ·1 = 1
μ + θ·2 = 3

has no solution.
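The non-existence of μ and θ can be verified by fitting the first two equations and checking the third (a sketch using the (u_1, w_1) value pairs (0, 0), (1, 1), (2, 3) from the system above):

```python
from fractions import Fraction

pairs = [(0, 0), (1, 1), (2, 3)]   # (u1(a), w1(a)) for the three outcomes

# Fit w1 = mu + theta * u1 through the first two pairs...
(u_a, w_a), (u_b, w_b) = pairs[0], pairs[1]
theta = Fraction(w_b - w_a, u_b - u_a)   # = 1
mu = w_a - theta * u_a                   # = 0

# ...and check the third: 0 + 1*2 = 2, but w1 = 3, so no affine map exists.
assert mu + theta * pairs[2][0] != pairs[2][1]

# By contrast, v1 = 2*u1 passes the same test on all three pairs:
v_pairs = [(u, 2 * u) for u, _ in pairs]
(u_a, v_a), (u_b, v_b) = v_pairs[0], v_pairs[1]
theta_v = Fraction(v_b - v_a, u_b - u_a)
mu_v = v_a - theta_v * u_a
assert all(mu_v + theta_v * u == v for u, v in v_pairs)
```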
4.12 Appendix: representing
preferences by expected payoffs
Exercise 150.1 (Games equivalent to the Prisoner's Dilemma)
Which of the right tables in Figure 150.2 represents the same
strategic game with vNM preferences as the Prisoner's
Dilemma, as specified in the left table?

        C     D             C     D             C     D
C      2,2   0,3      C    3,3   0,4      C    6,0   0,2
D      3,0   1,1      D    4,0   2,2      D    9,4   3,2

(Figure 150.2)
9. Bayesian Games
Framework
An assumption underlying the notion of Nash equilibrium is that
each player holds the correct belief about the other players'
actions. To do so, a player must know the other players'
preferences.
However, in many situations, players are not perfectly informed
about their opponents' characteristics (e.g.: firms may not know
each other's cost functions).
In this chapter, we generalize the notion of a strategic game to
allow the analysis of situations in which each player is
imperfectly informed about an aspect of her environment that
is relevant to her choice of action.
9.1 Motivational examples
We start with one example to illustrate the main ideas of Bayesian
games.
Example 273.1 (Variant of BoS with imperfect information)
Consider a variant of BoS in which player 1 is unsure whether
player 2 prefers to go out with her or prefers to avoid her,
whereas player 2 (as before) knows player 1's preferences.

2 wishes to meet 1 (prob. 1/2)     2 wishes to avoid 1 (prob. 1/2)
        B     S                            B     S
B      2,1   0,0                    B     2,0   0,2
S      0,0   1,2                    S     0,1   1,0

(Figure 274.1)
9.1 Motivational examples
Specifically, suppose player 1 thinks that with probability ½ player 2
wants to go out with her, and with probability ½ player 2 wants to
avoid her (see Figure 274.1).
Because probabilities are involved, an analysis of the situation
requires us to know the players' preferences over lotteries.
We can represent the situation as having two states. In state 1, the
Bernoulli payoffs are given in the left table; in state 2, in the right
table. Player 1 assigns probability ½ to each state.
The notion of Nash equilibrium must be generalized to this new
setting:
from player 1's point of view, player 2 has two possible types
(one whose preferences are given by the left table of Figure
274.1 and the other, by the right table).
9.1 Motivational examples
Player 1 does not know player 2's type. So, to choose an action
rationally, she needs to form a belief about the action of each
type of player 2.
Given these beliefs and her belief about the likelihood of each
type, she can calculate her expected payoff for each of her
actions.
For example, if player 1, conditionally on choosing B, thinks
that type 1 of player 2 will choose B and type 2 of player 2 will
choose S, then she thinks that B will yield her a payoff of 2 with
probability ½ and of 0 with probability ½. So, in this case, her
expected payoff from B is (½ · 2 + ½ · 0) = 1. Similar calculations
lead to Figure 275.1.

       (B,B)   (B,S)   (S,B)   (S,S)    (type 1's action, type 2's action)
B        2       1       1       0
S        0      1/2     1/2      1

(Figure 275.1)
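The expected-payoff table of Figure 275.1 can be reproduced from the two payoff tables (a sketch; actions are indexed B = 0, S = 1, and the encoding of the tables is ours):

```python
from fractions import Fraction
from itertools import product

HALF = Fraction(1, 2)
B, S = 0, 1

# player 1's payoffs u1[state][a1][a2]; they are the same in both states
u1 = {"meet":  [[2, 0], [0, 1]],
      "avoid": [[2, 0], [0, 1]]}

def expected_payoff_1(a1, a_meet, a_avoid):
    """Player 1's expected payoff when the meet type plays a_meet
    and the avoid type plays a_avoid, each state having probability 1/2."""
    return HALF * u1["meet"][a1][a_meet] + HALF * u1["avoid"][a1][a_avoid]

table = {(a1, (am, av)): expected_payoff_1(a1, am, av)
         for a1 in (B, S) for am, av in product((B, S), repeat=2)}

assert table[(B, (B, S))] == 1      # the example computed above
assert table[(S, (B, S))] == HALF
```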
9.1 Motivational examples
For this situation, we define a pure strategy Nash equilibrium to
be a triple of actions (one for player 1 and one for each type of
player 2) with the property that:
the action of player 1 is optimal, given the actions of the two
types of player 2 (and player 1's belief about the state)
the action of each type of player 2 is optimal, given the action
of player 1
Note that in a Nash equilibrium:
player 1's action is a best response in Figure 275.1 to the pair
of actions of the two types of player 2
the action of the type of player 2 who wishes to meet player 1
is a best response in the left table of Figure 274.1 to the action
of player 1
the action of the type of player 2 who wishes to avoid player 1
is a best response in the right table of Figure 274.1 to the
action of player 1
9.1 Motivational examples
Why should player 2, who knows her own type, have to plan
what to do in both cases?
She does not!
However, as analysts, we need to consider what she would do
in both cases. The reason is that to determine her best action,
player 1, who does not know player 2's type, needs to form a belief
about the action each type of player 2 would take, and we wish to
impose the equilibrium condition that these beliefs are correct.
(B, (B, S)) is a Nash equilibrium, where B is the action of player 1,
the first B is the action of player 2 type 1, and S is the action of
player 2 type 2.
9.1 Motivational examples
Proof:
given that the actions of the two types of player 2 are (B, S), player
1's action B is optimal (see Figure 275.1)
given that player 1 chooses B, B is optimal for player 2 type 1 and
S is optimal for player 2 type 2 (see Figure 274.1)
We interpret the equilibrium as follows:
type 1 of player 2 chooses B and type 2 of player 2 chooses S,
each anticipating that player 1 will choose B
player 1, who does not know whether player 2 is of type 1 or of type 2,
believes that if player 2 is of type 1 she will choose B, and if player
2 is of type 2 she will choose S.
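The two parts of this proof can be checked mechanically (a sketch; player 2's payoffs are read from Figure 274.1 with B = 0, S = 1, and the dictionary layout is ours):

```python
B, S = 0, 1

# player 2's payoffs by type: u2[type][a1][a2]
u2 = {"meet":  [[1, 0], [0, 2]],
      "avoid": [[0, 2], [1, 0]]}

# player 1's expected payoffs against the type profile (B, S), as in Figure 275.1
exp_payoff_1 = {B: 0.5 * 2 + 0.5 * 0, S: 0.5 * 0 + 0.5 * 1}

# player 1's action B is optimal against (B, S)
assert exp_payoff_1[B] >= exp_payoff_1[S]

# given player 1 plays B: B is optimal for the meet type, S for the avoid type
assert u2["meet"][B][B] >= u2["meet"][B][S]     # 1 >= 0
assert u2["avoid"][B][S] >= u2["avoid"][B][B]   # 2 >= 0
```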
9.1 Motivational examples
We can interpret the actions of the two types of player 2 as
reflecting player 2's intentions in the hypothetical situation before
she knows the state. This corresponds to the following situation:
initially, player 2 does not know the state; she will be informed of it
by a signal that depends on the state;
before receiving this signal, she plans an action for each possible
state;
the same story holds for player 1, but player 1 receives an
uninformative signal (the same signal in each state).
Note that in such a setup, a Nash equilibrium is a list of
actions, one for each type of each player, such that the action
of each type of each player is a best response to the
actions of all the types of the other player, given the player's
beliefs about the state after she observes her signal.
9.1 Motivational examples
Exercise 276.1 (Equilibria of a variant of BoS with imperfect
information)
(i) Show that there is no pure strategy Nash equilibrium of this
game in which player 1 chooses S.
(ii) Find the mixed strategy Nash equilibria of the game (first
check whether there is an equilibrium in which both types of
player 2 use pure strategies; then look for equilibria in which
one or both of these types randomize).
9.2 General definitions
9.2.1 Bayesian games
A strategic game with imperfect information is called a Bayesian
game.
A key component in the specification of the imperfect information is
the set of states: each state is a complete description of one
collection of the players' relevant characteristics (preferences,
information, …). For every collection of characteristics that some
player believes to be possible, there must be a state.
At the start of the game a state is realized. The players do not
observe this state. Rather, each player receives a signal that may
give her some information about the state. We denote the signal
player i receives in state ω by τ_i(ω). The function τ_i(.) is called
player i's signal function. Note that this is a deterministic function:
in each state, a given signal is received.
9.2 General definitions
The states that generate any given signal t_i are said to be consistent
with the signal t_i.
The size of the set of states consistent with each player i's signal
reflects the quality of player i's information. The two extreme cases
are:
if τ_i(ω) is different for each value of ω, then player i knows,
given her signal, the state that has occurred: she is perfectly
informed about all the players' relevant characteristics.
if τ_i(ω) is the same for all states, then player i's signal conveys
no information: she is perfectly uninformed.
We refer to player i in the event that she receives the signal t_i as
type t_i of player i. Each type of each player holds a belief about the
likelihood of the states consistent with her signal (e.g.: if
t_i = τ_i(ω_1) = τ_i(ω_2), then type t_i of player i assigns probabilities
to ω_1 and ω_2).
9.2 General definitions
Each player may care about the actions chosen by the other
players and about the state. We therefore need to specify her
preferences regarding probability distributions over pairs (a, ω),
consisting of an action profile a and a state ω.
We assume that each player's preferences over such probability
distributions are represented by the expected value of a Bernoulli
payoff function. We therefore specify player i's preferences by giving
a Bernoulli payoff function u_i over pairs (a, ω).
9.2 General definitions
Definition 279.1 (Bayesian game)
A Bayesian game consists of
a set of players
a set of states
and for each player
a set of actions
a set of signals that she may receive and a signal function that
associates a signal with each state
for each signal that she may receive, a belief about the states
consistent with the signal (a probability distribution over the set of
states with which the signal is associated)
a Bernoulli payoff function over pairs (a,ω), where a is an action
profile and ω is a state, the expected value of which represents the
player’s preferences.
9.2 General definitions
Note that the set of actions of each player is independent of the
state: each player may care about the state, but the set of
actions available to her is the same in every state.
Application to Example 273.1:
players: the pair of people
states: {meet, avoid}
actions: for each player, {B, S}
signals: player 1 may receive a single signal, say z; her signal function τ_1
satisfies τ_1(meet) = τ_1(avoid) = z. Player 2 receives one of two signals (m
and v); her signal function τ_2 satisfies τ_2(meet) = m and τ_2(avoid) = v.
beliefs: player 1 assigns probability ½ to each state after receiving the
signal z. Player 2 assigns probability 1 to state "meet" after receiving the
signal m, and probability 1 to state "avoid" after receiving the signal v.
payoffs: the payoffs u_i(a, meet) of each player i for all possible action
pairs are given in the left table of Figure 274.1, and the payoffs
u_i(a, avoid) in the right table.
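The components just listed can be written down literally as data (a sketch; the dictionary layout and names such as `signal_fn` are ad hoc, not from the text, and the Bernoulli payoff tables are omitted for brevity):

```python
bayesian_bos = {
    "players": (1, 2),
    "states": ("meet", "avoid"),
    "actions": {1: ("B", "S"), 2: ("B", "S")},
    # deterministic signal functions: player 1 gets the same signal in both
    # states (uninformative), player 2's signal reveals the state
    "signal_fn": {1: lambda state: "z",
                  2: lambda state: "m" if state == "meet" else "v"},
    # beliefs over states, conditional on each possible signal
    "beliefs": {"z": {"meet": 0.5, "avoid": 0.5},
                "m": {"meet": 1.0, "avoid": 0.0},
                "v": {"meet": 0.0, "avoid": 1.0}},
}

# player 2's signal distinguishes the states; player 1's does not
assert bayesian_bos["signal_fn"][2]("meet") != bayesian_bos["signal_fn"][2]("avoid")
assert bayesian_bos["signal_fn"][1]("meet") == bayesian_bos["signal_fn"][1]("avoid")
```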
9.2 General definitions
9.2.2 Nash equilibrium
In a Bayesian game, each player chooses a collection of actions:
one for each signal she may receive (each type of each player
chooses an action).
In a Nash equilibrium of such a game, the action chosen by each
type of each player is optimal, given the actions chosen by every
type of every other player.
We define a Nash equilibrium of a Bayesian game to be a Nash
equilibrium of a strategic game in which each player is one of the
types of one of the players in the Bayesian game.
9.2 General definitions
Notation:
Pr(ω | t_i): the probability assigned by the belief of type t_i of player i to
state ω.
a(j, t_j): the action taken by type t_j of player j.
τ_j(ω): player j's signal in state ω. Her action in this state is
a(j, τ_j(ω)); we denote â_j(ω) = a(j, τ_j(ω)).
With this notation, the expected payoff of type t_i of player i
when she chooses action a_i is:

Σ_{ω ∈ Ω} Pr(ω | t_i) u_i((a_i, â_{−i}(ω)), ω)

where Ω is the set of states and (a_i, â_{−i}(ω)) is the action profile in
which player i chooses the action a_i and every other player j
chooses â_j(ω).
9.2 General definitions
Definition 281.1 (Nash equilibrium of a Bayesian game)
A Nash equilibrium of a Bayesian game is a Nash equilibrium of
the strategic game (with vNM preferences) defined as follows:
players: the set of all pairs (i, t_i) in which i is a player in the
Bayesian game and t_i is one of the signals that i may receive;
actions: the set of actions of each player (i, t_i) is the set of actions
of player i in the Bayesian game;
preferences: the Bernoulli payoff function of each player (i, t_i) is
given by

Σ_{ω ∈ Ω} Pr(ω | t_i) u_i((a_i, â_{−i}(ω)), ω)
9.2 General definitions
Exercise 282.3 (Adverse selection)
Firm A (the “acquirer”) is considering taking over firm T (the “target”). It
does not know firm T’s value; it believes that this value, when firm T is
controlled by its own management, is at least $0 and at most $100, and
assigns equal probability to each of the 101 dollar values in this range
(uniform distribution). Firm T will be worth 50% more under firm A’s
management than it is under its own management. Suppose that firm A
bids y to take over firm T, and firm T is worth x (under its own
management). Then if T accepts A’s offer, A’s payoff is (3/2 x – y) and
T’s payoff is y. If T rejects A’s offer, A’s payoff is 0 and T’s payoff is x.
Model this situation as a Bayesian game in which firm A chooses how
much to offer and firm T decides the lowest offer to accept. Find the
Nash equilibrium (equilibria?). Explain why the logic behind the
equilibrium is called adverse selection.
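One way to start exploring the exercise numerically (a sketch only, not the requested model or its solution): compute firm A's expected payoff for each bid y, under the assumption that firm T accepts exactly the offers of at least its value x.

```python
from fractions import Fraction

values = range(101)  # firm T's value x is uniform on {0, 1, ..., 100}

def expected_payoff_A(y):
    """Firm A's expected payoff from bidding y, if T accepts whenever y >= x."""
    total = Fraction(0)
    for x in values:
        if y >= x:                        # offer accepted
            total += Fraction(3, 2) * x - y
    return total / len(values)

payoffs = {y: expected_payoff_A(y) for y in values}
# accepted offers come disproportionately from low-value targets, so
# every positive bid loses money on average: adverse selection
assert all(payoffs[y] < 0 for y in values if y > 0)
assert payoffs[0] == 0
```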
9.3 Example concerning
information
9.3.1 More information may hurt
A decision-maker in a single-person decision problem cannot be
worse off if she has more information: if she wishes, she can ignore
the information. In a game, this is not true.

State ω_1 (probability ½)        State ω_2 (probability ½)
     L      M      R                  L      M      R
T   1,2ε   1,0    1,3ε          T   1,2ε   1,3ε   1,0
B   2,2    0,0    0,3           B   2,2    0,3    0,0

(Figure 283.1: neither player observes the state; each assigns
probability ½ to each state)
9.3 Example concerning
information
Consider the two-player game in Figure 283.1, where 0 < ε < ½. In this
game, there are two states, and neither player knows the state.
Player 2's unique best response to each action of player 1 is L:
if player 1 chooses T:
L yields 2ε
M and R each yield (3/2)ε
if player 1 chooses B:
L yields 2
M and R each yield 3/2.
Player 1's unique best response to L is B.
Thus, (B, L) is the unique Nash equilibrium. Each player gets a
payoff of 2. The game has no other mixed strategy Nash equilibrium.
9.3 Example concerning
information
Consider now the case in which player 2 is informed of the state:
player 2's signal function satisfies τ_2(ω_1) ≠ τ_2(ω_2).
In this game, (T, (R, M)) is the unique Nash equilibrium (each type of
player 2 has a strictly dominant action, to which T is player 1's
unique best response).
In this game, player 2's payoff is 3ε (in each state). She is therefore
worse off when she knows the state!
To understand this result: R is good only in state ω_1 and M is good
only in state ω_2, while L is a compromise. Knowing the state leads
player 2 to choose either R or M, which induces player 1 to choose
T. There is no steady state in which player 2 chooses L to induce
player 1 to choose B.
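The payoff comparisons on the last two slides can be verified numerically (a sketch; ε = 0.25 is an arbitrary value in (0, ½), and the encoding u[state][row][column] = (payoff 1, payoff 2) follows Figure 283.1):

```python
eps = 0.25
T, B_, L, M, R = "T", "B", "L", "M", "R"

u = {  # (player 1, player 2) payoffs by state
    1: {T: {L: (1, 2*eps), M: (1, 0),     R: (1, 3*eps)},
        B_: {L: (2, 2),    M: (0, 0),     R: (0, 3)}},
    2: {T: {L: (1, 2*eps), M: (1, 3*eps), R: (1, 0)},
        B_: {L: (2, 2),    M: (0, 3),     R: (0, 0)}},
}

def exp_u2(a1, a2):
    # player 2's expected payoff when she does not observe the state
    return 0.5 * u[1][a1][a2][1] + 0.5 * u[2][a1][a2][1]

# uninformed: L is player 2's unique best response to either action of player 1
for a1 in (T, B_):
    assert exp_u2(a1, L) > exp_u2(a1, M) and exp_u2(a1, L) > exp_u2(a1, R)

# informed: R (state 1) and M (state 2) give her 3*eps against T,
# which is worse than the 2 she gets at (B, L) when uninformed
assert u[1][T][R][1] == 3*eps and u[2][T][M][1] == 3*eps
assert 3*eps < 2
```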
9.6 Illustration: auctions
9.6.1 Introduction
In section 3.5, every bidder knows every other bidder's valuation of
the object for sale. This is highly unrealistic!
Assume that a single object is for sale. Each bidder independently
receives some information (a signal) about the value of the
object to her:
if each bidder's signal is simply her valuation, we say that the
bidders' valuations are private (e.g.: a work of art whose beauty
interests the buyers);
if each bidder's valuation depends on other bidders' signals as
well as her own, we say that the valuations are common
(e.g.: an oil tract containing unknown reserves on which each bidder
has conducted a test).
We will consider models in which bids for a single object are
submitted simultaneously (bids are sealed) and the participant
who submits the highest bid obtains the object.
9.6 Illustration: auctions
We will consider both first-price (the winner pays the price she
bids) and second-price (the winner pays the highest of the
remaining bids) auctions.
Note that the argument that the second-price rule corresponds to
an open ascending auction (English auction) depends on the
bidders' valuations being private. In a common-valuation setup, the
open ascending format reveals information to bidders that they do
not have access to in a sealed-bid procedure.
9.6.2 Independent private values
Each bidder knows that all other bidders' valuations are at least v−
(where v− ≥ 0) and at most v+. She believes that the probability
that any given bidder's valuation is at most v is F(v), independent of
all other bidders' valuations, where F is a continuous increasing
function (CDF).
9.6 Illustration: auctions
The preferences of a bidder whose valuation is v are represented by
a Bernoulli payoff function that assigns 0 to the outcome in which
she does not win the object and v − p to the outcome in which she
wins the object and pays the price p (a quasilinear payoff function).
This amounts to assuming that the bidder is risk-neutral.
We assume that the expected payoff of a bidder whose bid is tied
for first place is (v − p)/m, where m is the number of tied winning
bids.
We denote by P(b) the price paid by the winner of the auction when
the profile of bids is b:
for a first-price auction, P(b) is the winning bid (the largest b_i)
for a second-price auction, P(b) is the highest bid made by a
bidder different from the winner.
9.6 Illustration: auctions
The Bayesian game that models first- and second-price auctions
with independent private valuations is therefore:
players: the set of bidders 1, …, n
states: the set of all profiles (v_1, …, v_n) of valuations, where
v− ≤ v_i ≤ v+ for all i
actions: each player's set of actions is the set of possible bids
(nonnegative numbers)
signals: the set of signals that each player may observe is the
set of possible valuations (the signal function is τ_i(v_1, …, v_n) = v_i)
beliefs: each type v_i of player i assigns probability
F(v_1) F(v_2) … F(v_{i−1}) F(v_{i+1}) … F(v_n) to the event that the
valuation of every other player j is at most v_j.
9.6 Illustration: auctions
payoff functions: player i's Bernoulli payoff is

u_i(b, (v_1, …, v_n)) = (v_i − P(b))/m   if b_j ≤ b_i for all j, and b_j = b_i for m players j
u_i(b, (v_1, …, v_n)) = 0                if b_j > b_i for some j

Nash equilibrium in a second-price sealed-bid auction: in a
second-price sealed-bid auction with imperfect information about
valuations (as in the perfect information setup), a player's bid equal
to her valuation weakly dominates all her other bids:
consider some type v_i of some player i, and let b_i be a bid not equal to v_i
for all bids by all types of all the other players, the expected payoff of
type v_i of player i is at least as high when she bids v_i as it is when she
bids b_i, and for some bids by the various types of the other players, her
expected payoff is greater when she bids v_i than it is when she bids b_i
9.6 Illustration: auctions
Exercise 294.1 (Weak domination in a second-price sealed-bid
auction)
Show that for each type v_i of each player i in a second-price
sealed-bid auction with imperfect information about valuations, the
bid v_i weakly dominates all other bids.
We conclude that a second-price sealed-bid auction with imperfect
information about valuations has a Nash equilibrium in which every
type of every player bids her valuation.
Exercise 294.2 (Nash equilibria of a second-price sealed-bid
auction)
For every player i, find a Nash equilibrium of a second-price
sealed-bid auction in which player i wins.
9.6 Illustration: auctions
Nash equilibrium in a first-price sealed-bid auction
In the case of perfect information, the bid v_i by type v_i of player i
weakly dominates any bid greater than v_i, does not weakly
dominate bids less than v_i, and is itself weakly dominated by
any such lower bid.
So, the game under imperfect information may have a Nash
equilibrium in which each bidder bids less than her valuation.
Take the case of two bidders, each player's valuation being
distributed uniformly between 0 and 1 (this assumption means
that the fraction of valuations less than v is exactly v, so that
F(v) = v for all v with 0 ≤ v ≤ 1).
Denote by β_i(v) the bid of type v of player i.
In this case, the game has a (symmetric) Nash equilibrium in
which the function β_i is the same for both players, with β_i(v) =
½ v for all v (each type of each player bids exactly half her
valuation).
9.6 Illustration: auctions
Proof:
suppose that each type of bidder 2 bids in this way;
as far as player 1 is concerned, player 2's bids are then
uniformly distributed between 0 and ½;
thus, if player 1 bids more than ½, she surely wins. If she
bids b_1 ≤ ½, the probability that she wins is the probability
that player 2's valuation is less than 2b_1, which is 2b_1;
consequently, her expected payoff as a function of her bid is:

2 b_1 (v_1 − b_1)   if 0 ≤ b_1 ≤ ½
v_1 − b_1           if b_1 > ½

(Figure 295.1: player 1's expected payoff as a function of her bid b_1;
the maximum is at b_1 = ½ v_1)
9.6 Illustration: auctions
This function is maximized at b_1 = ½ v_1 (this can be seen
graphically in Figure 295.1 or established mathematically).
Both players are identical, so player 2 also bids half her valuation,
conditional on player 1 bidding half her valuation.
Thus, the game has a Nash equilibrium in which each player bids
half her valuation.
When the number n of bidders exceeds 2, a similar analysis shows that
the game has a (symmetric) Nash equilibrium in which every bidder bids
the fraction 1 − 1/n of her valuation.
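The maximization step of the proof can be checked numerically (a sketch; the valuation v_1 = 0.7 and the grid resolution are arbitrary choices): against an opponent who bids half her uniform valuation, the best bid on a fine grid is half one's own valuation.

```python
def expected_payoff(b1, v1):
    """Player 1's expected payoff from bid b1 when player 2's bid is
    uniform on [0, 1/2] (player 2 bids half her uniform valuation)."""
    win_prob = min(2 * b1, 1.0)
    return win_prob * (v1 - b1)

v1 = 0.7
grid = [k / 1000 for k in range(1001)]           # bids 0, 0.001, ..., 1
best_bid = max(grid, key=lambda b: expected_payoff(b, v1))
assert abs(best_bid - v1 / 2) < 1e-9             # best response: half the valuation
```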
Interpretation: in this example (but also for any distribution F satisfying
our assumptions):
choose n−1 valuations randomly and independently, each
according to the cumulative distribution F;
the highest of these n−1 valuations is a random variable; denote it
X;
fix a valuation v. Some values of X are less than v and others are
greater.
9.6 Illustration: auctions
Consider the distribution of X in those cases in which it is less than
v. The expected value of this distribution is:
Then, the following proposition holds:
Application for the case of 2 bidders and uniform distribution:
for any valuation v of player 1, the cases in which player 2’s
valuation is less than v are distributed uniformly between 0
and v;
so the expected value of player 2’s valuation conditional on
being less than v is ½ v.
E(X | X < v)
If each bidder's valuation is drawn independently from the
same continuous and increasing cumulative distribution, a
first-price sealed-bid auction (with imperfect information
about valuations) has a (symmetric) Nash equilibrium in
which each type v of each player bids E(X | X < v), the
expected value of the highest of the other players' valuations
conditional on v being higher than all the other valuations.
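A quick numerical illustration of the proposition for the uniform two-bidder case (a sketch; the sample size and the test valuation 0.8 are arbitrary): the conditional mean E(X | X < v) comes out at ½v.

```python
import random

def cond_mean_below(v, trials=100_000, seed=1):
    """Monte Carlo estimate of E(X | X < v) for X uniform on [0, 1]
    (X is the single rival valuation when n = 2)."""
    rng = random.Random(seed)
    below = [x for x in (rng.random() for _ in range(trials)) if x < v]
    return sum(below) / len(below)

# for the uniform case this equals v/2, the equilibrium bid of type v
```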
9.6 Illustration: auctions
Comparing equilibria of first- and second-price auctions
As in the case of perfect information, under the assumptions of
this section, first- and second-price auctions are revenue
equivalent;
Consider the equilibrium of a second-price auction in which
every player bids her valuation:
the expected price paid by the bidder with valuation v who
wins is the expectation of the highest of the other n−1
valuations, conditional on this maximum being less than
v;
in notation, this is E(X | X < v);
we have just seen that this is precisely the bid made by a player
with valuation v in a first-price auction (and hence the
amount paid by such a player if she wins);
as in both cases the bidder with the highest valuation wins,
both auctions yield the auctioneer the same revenue!
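The revenue-equivalence claim is easy to check by simulation (a sketch; n = 4, the trial count, and the seed are arbitrary): under the symmetric equilibria, the two auction formats yield the same expected revenue, (n − 1)/(n + 1) for uniform valuations.

```python
import random

def revenues(n=4, trials=100_000, seed=2):
    """Expected auctioneer revenue when all n bidders (valuations U[0,1])
    play the symmetric equilibrium: first-price -> bid (1 - 1/n)*v,
    second-price -> bid v (winner pays the second-highest bid)."""
    rng = random.Random(seed)
    first = second = 0.0
    for _ in range(trials):
        vals = sorted(rng.random() for _ in range(n))
        first += (1 - 1/n) * vals[-1]  # winner pays her shaded bid
        second += vals[-2]             # winner pays second-highest valuation
    return first / trials, second / trials

fp, sp = revenues()
# both are close to (n - 1)/(n + 1) = 0.6 for n = 4
```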
9.6 Illustration: auctions
Exercise 296.1 (Auctions with risk-averse bidders)
Consider a variant of the Bayesian game defined earlier in this
section in which the players are risk averse. Specifically, suppose
each of the n players' preferences are represented by the expected
value of the Bernoulli payoff function x^(1/m), where x is the player's
monetary payoff and m > 1. Suppose also that each player's
valuation is distributed uniformly between 0 and 1. Show that the
Bayesian game that models a first-price sealed-bid auction under
these assumptions has a (symmetric) Nash equilibrium in which
each type vi of each player i bids:

bi = (1 − 1/(m(n − 1) + 1)) vi

Note that the solution of the problem max_b [ b^k (v − b)^l ] is b = kv/(k + l).
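For the two-bidder case the claimed bid can be checked by brute force (a sketch; m = 2, v = 0.9, and the grid are arbitrary choices): if the rival plays the candidate equilibrium c·v2 with c = 1 − 1/(m(n−1)+1), the grid-search best reply of type v is exactly c·v.

```python
def expected_utility(b, v, m=2):
    """Expected Bernoulli utility of bid b for type v against one rival
    who bids c*v2 with v2 ~ U[0,1]: win prob min(b/c, 1), utility (v-b)^(1/m)."""
    c = 1 - 1 / (m * (2 - 1) + 1)       # candidate equilibrium slope, n = 2
    return min(b / c, 1.0) * (v - b) ** (1 / m)

v, m = 0.9, 2
best = max((i / 1000 for i in range(1, 900)),
           key=lambda b: expected_utility(b, v, m))
# best equals c*v = (2/3)*0.9 = 0.6, matching the formula above
```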
9.6 Illustration: auctions
Compare the auctioneer's revenue in this equilibrium with her
revenue in the symmetric Nash equilibrium of a second-price
sealed-bid auction in which each player bids her valuation (note
that the equilibrium of the second-price auction does not depend on
the players' payoff functions).
9.6.3 Interdependent valuations
In this setup, each player’s valuation depends on the other players’
signals as well as her own.
Denote the function that gives player i's valuation by g_i, and
assume that it is increasing in all the signals.
Let P(b) be the function that determines the price paid by the
winner as a function of the profile b of bids.
9.6 Illustration: auctions
The following Bayesian game models first- and second-price
auctions with common valuations:
players: the set of bidders {1, …, n}
states: the set of all profiles (t1, …, tn) of signals that the players
may receive
actions: each player's set of actions is the set of possible bids
(nonnegative numbers)
signals: the set of signals of each player is the set of possible
valuations; the signal function of each player i is τi(t1, …, tn) = ti
(each player observes her own signal).
beliefs: each type of each player believes that the signal of
every type of every other player is independent of all the other
players' signals.
9.6 Illustration: auctions
payoff functions:
Nash equilibrium in a secondprice sealedbid auction
We analyze the case of two bidders, where each bidder's signal is
uniformly distributed from 0 to 1 and the valuation of each
bidder i is vi = α ti + γ tj, where j is the other player and α ≥ γ ≥
0 (the case α = 1 and γ = 0 is the private-value case, and the
case α = γ is called pure common value).
The assumption is that a bidder does not know any other
player's signal but, as the analysis will show, other players'
bids contain some information about their signals.
ui(b, (t1, …, tn)) = (gi(t1, …, tn) − P(b)) / m   if bj ≤ bi for all j, and bj = bi for m players j
ui(b, (t1, …, tn)) = 0                            if bj > bi for some j
9.6 Illustration: auctions
Under these assumptions, a second-price sealed-bid auction has a
Nash equilibrium in which each type ti of each player i bids (α+γ) ti.
Proof: to determine the expected payoff of type t1 of player 1, we need
to find:
the probability with which she wins
the expected price she pays
the expected value of player 2’s signal if she wins
Probability that player 1 wins:
given that player 2's bidding function is (α+γ) t2, player 1's bid b1
wins only if b1 ≥ (α+γ) t2, i.e. if:

t2 ≤ b1 / (α+γ)

t2 is distributed uniformly between 0 and 1. So, the probability that
t2 is at most b1 / (α+γ) is b1 / (α+γ). Thus, a bid b1 by player 1 wins
with probability b1 / (α+γ).
9.6 Illustration: auctions
Expected price player 1 pays if she wins:
the price she pays is equal to player 2's bid;
player 2's bid, conditional on being less than b1, is distributed
uniformly between 0 and b1. Thus, the expected value of player
2's bid, given that it is less than b1, is ½ b1.
Expected value of player 2's signal if player 1 wins:
player 2's bid, given her signal t2, is (α+γ) t2. So, the expected value of
the signals that yield a bid less than b1 is ½ b1 / (α+γ).
Her expected payoff if she bids b1 is the difference between her
expected valuation (given her signal t1 and the fact that she wins) and
the expected price she pays, multiplied by her probability of
winning. Using the previous results, we get:

( b1 / (α+γ) ) ( α t1 + γ b1 / (2(α+γ)) − ½ b1 ) = α b1 ( 2(α+γ) t1 − b1 ) / ( 2(α+γ)² )
9.6 Illustration: auctions
This function is maximized at b1 = (α+γ) t1: so, if each type t2 of
player 2 bids (α+γ) t2, any type t1 of player 1 optimally bids
(α+γ) t1.
The arguments are symmetric for player 2. We therefore get a
symmetric Nash equilibrium.
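The maximization can be checked numerically (a sketch; the values α = 2, γ = 1, t1 = 0.4 and the grid are arbitrary): the grid-search argmax of the expected payoff above is (α+γ)t1.

```python
def expected_payoff(b1, t1, a=2.0, g=1.0):
    """Type t1's expected payoff in the second-price auction when player 2
    bids (a+g)*t2: win prob b1/(a+g), expected price b1/2, and expected
    rival signal (conditional on winning) b1/(2*(a+g))."""
    return (b1 / (a + g)) * (a * t1 + g * b1 / (2 * (a + g)) - b1 / 2)

a, g, t1 = 2.0, 1.0, 0.4
best = max((i / 1000 for i in range(0, 3001)),
           key=lambda b: expected_payoff(b, t1, a, g))
# best equals (a + g) * t1 = 1.2
```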
Exercise 299.1 (Asymmetric Nash equilibria of second-price
sealed-bid common value auctions)
Show that when α=γ=1, for any value λ > 0, the game has an
(asymmetric) Nash equilibrium in which each type t1 of player 1
bids (1+λ) t1 and each type t2 of player 2 bids (1 + 1/λ) t2.
9.6 Illustration: auctions
Note that when player 1 calculates her expected value of the
object, she finds the expected value of player 2's signal given that
her bid wins. The fact that her bid wins is, in fact, bad news
about the other player's valuation. A bidder who does not
take account of this fact is said to suffer from the winner's curse.
Nash equilibrium in a first-price sealed-bid auction
A first-price sealed-bid auction has a Nash equilibrium in which
each type ti of each player i bids ½ (α+γ) ti.
Exercise 299.2 (First-price sealed-bid auction with common
values)
Verify that a first-price sealed-bid auction has a Nash equilibrium in
which the bid of each type ti of each player i is ½ (α+γ) ti.
9.6 Illustration: auctions
Comparing equilibria of first- and second-price auctions:
The revenue equivalence of first- and second-price auctions
holds also under common valuations:
in each case, the expected price paid by the winner (in
the symmetric equilibrium) is ½ (α+γ) ti;
in each case, the bidder with the highest valuation wins
(that is, each bidder wins with the same probability in both auctions).
In fact, the revenue equivalence principle holds much more
generally (see the Myerson lemma).
9.8 Appendix: auctions with an
arbitrary distribution of valuations
9.8.1 First-price sealed-bid auctions
We construct here a symmetric equilibrium of a first-price sealed-bid
auction for a generic distribution F of valuations that satisfies
the assumptions in Section 9.6.2 and is differentiable on (v−, v+).
Denote the bid of type v of bidder i by βi(v).
In a symmetric equilibrium, every player uses the same bidding
function (so βi = β for some function β).
Assume:
β is increasing in valuation (seems reasonable)
β is differentiable.
Then:
there is a condition that β must satisfy in any symmetric
equilibrium
exactly one function β satisfies this condition
this function is increasing.
9.8 Appendix: auctions with an
arbitrary distribution of valuations
Suppose that all n−1 players other than i bid according to the
increasing differentiable function β.
Then, given the assumption on F, the probability of a tie is zero.
Hence, for any bid b, the expected payoff of player i when her
valuation is v and she bids b is:
(v − b) Pr(b is the highest bid) = (v − b) Pr(all n−1 other bids ≤ b)
A player bidding according to the function β bids at most b, for β(v−)
≤ b ≤ β(v+), if her valuation is at most β⁻¹(b) (the inverse of β evaluated
at b).
9.8 Appendix: auctions with an
arbitrary distribution of valuations
Thus, the probability that the bids of the n−1 other players are all at
most b is the probability that the highest of n−1 randomly
selected valuations (denoted X in Section 9.6.2) is at most β⁻¹(b).
Denoting the CDF of X by H, the expected payoff is thus:
(v − b) H(β⁻¹(b))   if β(v−) ≤ b ≤ β(v+)
0                   if b < β(v−)
v − b               if b > β(v+)
9.8 Appendix: auctions with an
arbitrary distribution of valuations
In a symmetric equilibrium in which every player bids according to
β, we have β(v) ≤ v if v > v−, and β(v−) = v−:
if v > v− and β(v) > v, then a player with valuation v wins with
positive probability (players with valuations less than v bid less
than β(v) because β is increasing);
if she wins, she obtains a negative payoff, while she obtains a
payoff of 0 by bidding v. So, for equilibrium, we need β(v) ≤ v if
v > v−;
given that β satisfies this condition, if β(v−) > v−, then a player
with valuation v− wins with positive probability and obtains a
negative payoff. Thus, β(v−) ≤ v−. But if β(v−) < v−, then (because
β is continuous) players with valuations slightly greater than v−
also bid less than v−; so a player with such a valuation who
increases her bid slightly wins with positive probability and
obtains a positive payoff by doing so. We conclude that β(v−) = v−.
9.8 Appendix: auctions with an
arbitrary distribution of valuations
The expected payoff of a player of type v when every other player
uses the bidding function β is differentiable on (v−, β(v+)):
given that β is increasing and differentiable;
given that β(v−) = v−;
and, if v > v−, it is increasing at v.
Thus, the derivative of this expected payoff with respect to b is
zero at any best response less than β(v+):
knowing that the derivative of β⁻¹ at the point b is 1 / β′(β⁻¹(b)),
the F.O.C. is:

−H(β⁻¹(b)) + (v − b) H′(β⁻¹(b)) / β′(β⁻¹(b)) = 0
9.8 Appendix: auctions with an
arbitrary distribution of valuations
In a symmetric equilibrium in which every player bids according to β,
the best response of type v of any given player to the other players'
strategies is β(v). Because β is increasing, we have β(v) < β(v+) for
v < v+. So, β(v) must satisfy the F.O.C. whenever v− < v < v+.
If b = β(v), then β⁻¹(b) = v; so, substituting b = β(v) into the F.O.C.
and multiplying by β′(v) yields:

β′(v) H(v) + β(v) H′(v) = v H′(v)   for v− < v < v+

The left-hand side of this differential equation is the derivative with
respect to v of β(v) H(v). Thus, for some constant C:

β(v) H(v) = ∫_{v−}^{v} x H′(x) dx + C   for v− < v < v+
9.8 Appendix: auctions with an
arbitrary distribution of valuations
The function β is bounded (as it is differentiable), so considering the
limit as v approaches v−, we deduce that C = 0.
We conclude that if the game has a symmetric Nash equilibrium in
which each player's bidding function is increasing and
differentiable on (v−, v+), then this function is defined by:

β*(v) = ( ∫_{v−}^{v} x H′(x) dx ) / H(v)   for v− < v < v+,  and β*(v−) = v−

Note that the function H is the CDF of X, the highest of n−1
independently drawn valuations. Thus β*(v) is the expected value
of X conditional on its being less than v:

β*(v) = E(X | X < v)
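The formula can be evaluated numerically for a non-uniform distribution (a sketch; F(v) = v² on [0, 1] and n = 2 are arbitrary choices, used only because they give a closed form to compare against):

```python
def beta_star(v, F, n, steps=10_000):
    """beta*(v) = v - (integral of F(x)^(n-1) from v- to v) / F(v)^(n-1),
    with v- = 0 here, using a simple trapezoid rule."""
    h = v / steps
    ys = [F(i * h) ** (n - 1) for i in range(steps + 1)]
    integral = h * (sum(ys) - 0.5 * (ys[0] + ys[-1]))
    return v - integral / F(v) ** (n - 1)

F = lambda x: x * x   # CDF of a non-uniform distribution on [0, 1]
# closed form for this F with n = 2: beta*(v) = v - (v**3/3)/v**2 = 2*v/3
```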
9.8 Appendix: auctions with an
arbitrary distribution of valuations
Note finally that, using integration by parts, the numerator in the
expression for β*(v) is:

v H(v) − ∫_{v−}^{v} H(x) dx

Given H(v) = (F(v))^(n−1) (the probability that each of n−1 valuations
is at most v), we have:

β*(v) = v − ( ∫_{v−}^{v} H(x) dx ) / H(v) = v − ( ∫_{v−}^{v} (F(x))^(n−1) dx ) / (F(v))^(n−1)   for v− < v < v+

We see that β*(v) < v for v− < v < v+.
9.8 Appendix: auctions with an
arbitrary distribution of valuations
Exercise 309.2 (Property of the bidding function in a first-price
auction)
Show that the bidding function β* is increasing.
5. Extensive Games
with Perfect
Information: Theory
Framework
Strategic games suppress the sequential structure of decision-making:
everything is about ex-ante anticipations and
simultaneous decisions.
An extensive game describes explicitly the sequential structure of
decision-making, allowing us to study situations in which each
decision-maker is free to change her mind as events unfold.
In this setup, perfect information means that each decision-maker
is fully informed about all previous actions.
5.1 Extensive games with
perfect information
5.1.1 Definition
We add to players and preferences the order of the players' moves
and the actions each player may take at each point.
Each possible sequence of actions forms a terminal history.
The function that gives the player who moves at each point in each
terminal history is the player function.
So, the components of an extensive game are:
The players
The terminal histories
The player function
The preferences for the players
5.1 Extensive games with
perfect information
Example 154.1: Entry game
An incumbent faces the possibility of entry by a challenger (e.g. a
new entrant in an industry). The challenger may enter or not. If
it enters, the incumbent may either acquiesce or fight.
Extensive game components:
Players: Incumbent, Challenger
Terminal histories: (In, Acquiesce), (In, Fight), (Out)
Player function: P(∅) = Challenger, P(In) = Incumbent
Preferences: ?
5.1 Extensive games with
perfect information
Note that the set of actions available to each player is NOT
part of the game description. But it can be deduced from the
description of the game (after any sequence of events, a player
chooses an action).
Entry Game
Actions
Challenger: {In, Out}
Incumbent: {Acquiesce, Fight}
5.1 Extensive games with
perfect information
Terminal histories are a set of sequences:
The first element of the sequence starts the game
The order of the sequence depicts the order of actions by players
Entry game
{(In, Acquiesce), (In, Fight), (Out)}
Define the subhistories of a finite sequence (a1, a2, …, ak) of actions to be:
The empty sequence of no actions (the empty history,
representing the start of the game)
All sequences of the form (a1, a2, …, am), where 1 ≤ m ≤ k.
The entire sequence is a subhistory of itself.
A subhistory NOT equal to the entire sequence is called a proper
subhistory.
5.1 Extensive games with
perfect information
Entry game:
The subhistories of (In, Acquiesce) are the empty history and the
sequences (In) and (In, Acquiesce).
The proper subhistories are the empty history and the sequence
(In).
Definition 155.1 (Extensive game with perfect information)
An extensive game with perfect information consists of
A set of players
A set of sequences (terminal histories) with the property that no sequence
is a proper subhistory of any other sequences
A function (the player function) that assigns a player to every sequence that
is a proper subhistory of some terminal history
For each player, preferences over the set of terminal histories
The set of terminal histories is the set of all sequences of actions that may occur. Terminal
histories represent outcomes of the game.
If the length of the longest terminal history is finite, we say that the game has a finite
horizon. If the game has a finite horizon and finitely many terminal histories, we say that the
game is finite.
5.1 Extensive games with
perfect information
Entry game:
Suppose that the best outcome for the challenger is that it
enters and the incumbent acquiesces, and the worst outcome is
that it enters and the incumbent fights, whereas the best
outcome for the incumbent is that the challenger stays out, and
the worst outcome is that it enters and there is a fight.
The situation is modeled as follows:
Players: {Challenger, Incumbent}
Terminal histories: (In, Acquiesce), (In, Fight), (Out)
Player function: P(∅) = Challenger, P(In) = Incumbent
Preferences:
Challenger: u(In, Acquiesce) = 2, u(Out) = 1, u(In, Fight) = 0
Incumbent: u(Out) = 2, u(In, Acquiesce) = 1, u(In, Fight) = 0
5.1 Extensive games with
perfect information
(Game tree diagram: each node is labeled with the player who moves at it, branches are actions, and terminal nodes carry the payoffs.)
The sets of actions can be deduced from the set of terminal histories and the player function:
A(h) = {a : (h, a) is a history}
where h is some nonterminal history and a is one of the actions available to the
player who moves after h.
E.g.: A(In) = {Acquiesce, Fight}
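The rule A(h) = {a : (h, a) is a history} is mechanical enough to code; a sketch for the entry game, with histories represented as tuples:

```python
terminal_histories = [("In", "Acquiesce"), ("In", "Fight"), ("Out",)]

def actions(h):
    """A(h): the actions available after nonterminal history h, deduced
    from the terminal histories alone."""
    return {th[len(h)] for th in terminal_histories
            if len(th) > len(h) and th[:len(h)] == h}

# actions(()) -> {"In", "Out"}; actions(("In",)) -> {"Acquiesce", "Fight"}
```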
5.1 Extensive games with
perfect information
Exercise 156.2
a. Represent in a diagram the two-player
extensive game with perfect information in
which the terminal histories are (C,E),
(C,F), (D,G), and (D,H), the player
function is given by P(∅) = 1 and P(C) =
P(D) = 2, player 1 prefers (C,F) to (D,G)
to (D,H) and player 2 prefers (D,G) to
(C,F) to (C,E).
b. Write down the set of players, the set of
terminal histories, player function, and
players’ preferences for the game
represented on the right side of the slide.
5.1 Extensive games with
perfect information
An extensive game with perfect information models a situation
in which each player, when choosing an action, knows all
actions chosen previously and always moves alone. Typical
situations modeled this way are:
A race between firms developing a new technology;
A race between directors to become CEO;
Games like chess or tic-tac-toe.
Two extensions of extensive games with perfect information are:
Allowing players to move simultaneously
Allowing arbitrary patterns of information
5.1 Extensive games with
perfect information
Entry game solution:
Solution: the challenger will enter and the incumbent will acquiesce
Analysis:
The challenger sees that, if it enters, the incumbent will
acquiesce
As the incumbent will acquiesce in case of entry, the
challenger is better off entering than staying out
Backward induction cannot always be used to solve extensive
games:
For infinite horizon games, there is no end point from which to start
the induction
But it may also fail in some finite horizon games
This line of argument is called backward induction.
5.1 Extensive games with
perfect information
Example: in this game, the Challenger sees that the Incumbent
is indifferent between Acquiesce and Fight if it enters. The
question of whether to enter or not remains open.
5.1 Extensive games with
perfect information
Another approach to defining equilibrium takes off from the
notion of Nash equilibrium: it seeks steady states.
In games in which backward induction is well-defined, this
approach turns out to lead to the backward induction outcome.
So, there is no conflict between the two approaches.
5.2 Strategies and outcomes
5.2.1 Strategies
Definition 159.1 ((full) strategy)
A (full) strategy of player i in an extensive game with perfect
information is a function that assigns to each history h after which it
is player i’s turn to move (P(h) = i, where P is the player function)
an action in A(h), the set of actions available after h.
Player 1 has 2 strategies: C and D
Player 2 has 4 strategies:

   Strategy   Action assigned   Action assigned
              to history C      to history D
   1          E                 G
   2          E                 H
   3          F                 G
   4          F                 H
5.2 Strategies and outcomes
Notation
Player 1 strategies: C, D
Player 2 strategies: EG, EH, FG, FH
Actions are written in the order in which they occur in the
game.
If actions are available at the same stage of the game, they are
written from left to right as they appear in the game diagram.
Each player's full strategy is more than a "plan of action" or "contingency
plan": it specifies what the player does for each of the possible choices of
the other player.
In other words, if the player appoints an agent to play the game for her
and tells the agent her strategy, then the agent has enough information
to carry out her wishes, whatever actions the other players take.
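Enumerating full strategies is just a Cartesian product over the histories at which a player moves. A sketch for the game of this section (histories as tuples; the two dictionaries encode the player function and the action sets):

```python
from itertools import product

player = {(): 1, ("C",): 2, ("D",): 2}
actions = {(): ["C", "D"], ("C",): ["E", "F"], ("D",): ["G", "H"]}

def full_strategies(i):
    """All full strategies of player i: one action for every history
    at which she moves."""
    hs = [h for h in player if player[h] == i]
    return [dict(zip(hs, choice))
            for choice in product(*(actions[h] for h in hs))]

# player 1 has 2 strategies (C, D); player 2 has 4 (EG, EH, FG, FH)
```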
5.2 Strategies and outcomes
Exercise:
Determine the strategies of the player 1 in the following game:
5.2 Strategies and outcomes
Solution: CG, CH, DG, DH
Each (full) strategy specifies an action after history (C,E)
even if it specifies the action D at the beginning of the
game!
A (full) strategy must specify an action for every history
after which it is the player's turn to move, even for histories
that, if the strategy is followed, do not occur (this is the
difference between a "plan of action" and a "full
strategy").
A way to interpret a (full) strategy is that it is a plan of action
that specifies the player's actions even if she makes mistakes.
E.g.: DG may read as "I choose D but, if I make a mistake
and play C, then I will play G if the other player
plays E."
5.2 Strategies and outcomes
5.2.2 Outcomes
A strategy profile is the vector of strategies played by the players.
It determines the terminal history that occurs. We denote a strategy
profile by s. The terminal history associated with the strategy
profile s is the outcome of s and is denoted O(s).
Example:
The strategy profile (DG, E) is associated with the terminal
history (D)
The strategy profile (CH, E) is associated with the terminal
history (C, E, H)
Note that the outcome O(s) of the strategy profile s depends
only on the players' plans of action, not their full strategies.
5.3 Nash Equilibrium
Definition 161.2 (Nash equilibrium of extensive game with
perfect information)
The strategy profile s* in an extensive game with perfect
information is a Nash equilibrium if, for every player i and
every strategy ri of player i, the terminal history O(s*) generated
by s* is at least as good according to player i's preferences as
the terminal history O(ri, s*−i) generated by the strategy profile
(ri, s*−i) in which player i chooses ri while every other player j
chooses s*j. Equivalently, for each player i:

ui(O(s*)) ≥ ui(O(ri, s*−i))   for every strategy ri of player i
5.3 Nash Equilibrium
One way to find the Nash equilibria of an extensive game in
which each player has finitely many strategies is:
To list each player's (full) strategies;
To combine the strategies of all players to list the strategy profiles;
To find the outcome of each strategy profile;
To analyze this information as a strategic game.
This is known as the strategic form of the extensive game.
The set of Nash equilibria of any extensive game
with perfect information is the set of Nash equilibria
of its strategic form.
5.3 Nash Equilibrium
Example 162.1: the entry game
Player 1 (Challenger) strategies: {In, Out}
Player 2 (Incumbent) strategies: {Acquiesce, Fight}
Strategic form of the game (Challenger chooses the row, Incumbent the column; * marks a best-response payoff):

          Acquiesce   Fight
   In      2*, 1*     0, 0
   Out     1, 2*      1*, 2*
Nash equilibria
(In, Acquiesce): the one identified by
backward induction
(Out, Fight): this is also a steady state; no
player has an incentive to deviate.
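The brute-force search over the strategic form can be sketched in a few lines (payoffs as in the strategic form of the entry game):

```python
u = {("In", "Acquiesce"): (2, 1), ("In", "Fight"): (0, 0),
     ("Out", "Acquiesce"): (1, 2), ("Out", "Fight"): (1, 2)}
A1, A2 = ["In", "Out"], ["Acquiesce", "Fight"]

def nash_equilibria():
    """A profile is a Nash equilibrium if no player gains by a
    unilateral deviation."""
    return [(a1, a2) for a1 in A1 for a2 in A2
            if all(u[(a1, a2)][0] >= u[(b1, a2)][0] for b1 in A1)
            and all(u[(a1, a2)][1] >= u[(a1, b2)][1] for b2 in A2)]

# finds both (In, Acquiesce) and (Out, Fight)
```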
5.3 Nash Equilibrium
How to interpret the Nash equilibrium (Out, Fight)?
This situation is never observed in the extensive game
A way to escape from this difficulty is to consider a slightly
perturbed steady state in which, on rare occasions, nonequilibrium
actions are taken:
Players make mistakes or deliberately experiment
Perturbations allow each player eventually to observe every
other player's action after every history
Another important point to note is that the extensive game
embodies the assumption that the incumbent cannot commit, at
the beginning of the game, to fight if the challenger enters. If
such a commitment were credible, the challenger would stay
out. But the threat is not credible (because it is irrational to
fight after entry).
5.3 Nash Equilibrium
Exercise 163.1 (Nash equilibria of extensive games)
Find the Nash equilibria of the extensive game represented in
the figure (when constructing the strategic form of each game,
be sure to include all the strategies of each player).
5.4 Subgame perfect equilibrium
5.4.1 Definition
The notion of Nash equilibrium ignores the sequential structure of
an extensive game. This may lead to steady states that are not
robust (in the sense that they do not appear as such in the
extensive game).
We now consider a new notion of equilibrium that models a robust
steady state. This notion requires that each player's strategy be
optimal after every possible history.
Subgame: for any nonterminal history h, the subgame following h
is the part of the game that remains after h has occurred.
Example: in the entry game, the subgame following the history In is the game
in which the incumbent is the only player and there are two terminal histories:
Acquiesce and Fight.
5.4 Subgame perfect equilibrium
Definition 164.1 (Subgame of extensive game with perfect
information)
Let Γ be an extensive game with perfect information, with
player function P. For any nonterminal history h of Γ, the
subgame Γ(h) following the history h is the following
extensive game:
Players: the players in Γ
Terminal histories: the set of all sequences h′ of actions such that
(h, h′) is a terminal history of Γ
Player function: the player P(h, h′) is assigned to each proper
subhistory h′ of a terminal history
Preferences: each player prefers h′ to h′′ if she prefers (h, h′) to
(h, h′′) in Γ.
Note that the subgame following the empty history is the entire game.
5.4 Subgame perfect equilibrium
A subgame perfect equilibrium is a strategy profile s* with the
property that in no subgame can any player i do better by
choosing a strategy different from s*i, given that every other player j
adheres to s*j.
Example: in the entry game, the Nash equilibrium (Out, Fight) is
not a subgame perfect equilibrium because in the subgame
following the history In, the strategy Fight is not optimal for the
incumbent: in this subgame, the incumbent is
better off choosing Acquiesce than Fight.
Notation: let h be a history and s a strategy profile to which the
players adhere after h. We denote by O_h(s) the outcome generated
in the subgame following h by the strategy profile induced by s.
5.4 Subgame perfect equilibrium
Example: the entry game
Let s be the strategy profile (Out, Fight)
Let h be the history In
If h occurs and, afterwards, the players adhere to s, the resulting
terminal history is O_h(s) = (In, Fight)
5.4 Subgame perfect equilibrium
Definition 166.1 (Subgame perfect equilibrium of extensive
game with perfect information)
The strategy profile s* in an extensive game with perfect
information is a subgame perfect equilibrium if, for every player
i, every history h after which it is player i's turn to move (P(h) = i),
and every strategy ri of player i, the terminal history O_h(s*)
generated by s* after the history h is at least as good according
to player i's preferences as the terminal history O_h(ri, s*−i)
generated by the strategy profile (ri, s*−i):

ui(O_h(s*)) ≥ ui(O_h(ri, s*−i))   for every strategy ri of player i
The key point is that each player's strategy is required to be optimal for every
history after which it is the player's turn to move, not only at the
start of the game (as in the definition of a Nash equilibrium)
5.4 Subgame perfect equilibrium
5.4.2 Subgame perfect equilibrium and Nash equilibrium
Every subgame perfect equilibrium is a Nash equilibrium (because in
a subgame perfect equilibrium, every player's strategy is optimal, in
particular after the empty history)
A subgame perfect equilibrium generates a Nash equilibrium in every
subgame
A Nash equilibrium is optimal only in subgames that are reached when
the players follow their strategies.
Subgame perfect equilibrium requires, moreover, that each player's
strategy be optimal after histories that do not occur if the players follow
their strategies.
5.4 Subgame perfect equilibrium
Example 167.2 (Variant of the entry game)
Consider the variant of the entry game in which the incumbent
is indifferent between fighting and acquiescing if the challenger
enters. Find the subgame perfect equilibria.
5.4 Subgame perfect equilibrium
Solution: both Nash equilibria (In, Acquiesce) and (Out, Fight)
are subgame perfect equilibria because, after the history In, both
Fight and Acquiesce are optimal for the incumbent.
Exercise 168.1
Which of the Nash equilibria of the following game are subgame
perfect?
5.4 Subgame perfect equilibrium
5.4.4 Interpretation
A Nash equilibrium corresponds to a steady state in an idealized
setting in which each player's long experience leads her to correct
beliefs about the other players' actions.
A subgame perfect equilibrium of an extensive game corresponds
to a slightly perturbed steady state in which all players, on rare
occasions, take nonequilibrium actions. Thus, players know how
the other players will behave in every subgame.
A subgame perfect equilibrium is a plan of action specifying
players' actions:
Not only after histories consistent with the strategy
But also after histories that result when some player chooses
arbitrary alternative actions.
5.4 Subgame perfect equilibrium
Alternative interpretation:
Consider an extensive game with perfect information in which:
each player has a unique best action at every history after
which it is her turn to move;
the horizon is finite.
In such a game, a player who knows the other players' preferences
(e.g. profit maximization) and knows that the other players are
rational may use backward induction to deduce her optimal
strategy.
The subgame perfect equilibrium is the outcome of the players'
rational calculations about each other's strategies. Note that:
this interpretation is not tenable in games in which some player has more
than one optimal action after some history;
but an extension of the procedure of backward induction can be used to find
all subgame perfect equilibria of finite horizon games.
5.5 Finding subgame perfect equilibria
of finite horizon games: backward
induction
In a game with finite horizon, the set of subgame perfect
equilibria may be found more directly by using an extension of
the procedure of backward induction.
Define the length of a subgame to be the length of the longest
history in the subgame.
The procedure of backward induction works as follows:
(i) Start by finding the optimal actions of the players who move in
the last subgames (stage k);
(ii) Next, find the optimal actions of the players who move at stage
k−1, given the optimal actions already found at stage k;
(iii) Continue the procedure back to stage 1.
Example
We first deduce that in the subgame of length 1 following history
(C,E), player 1 chooses G;
Then, at the start of the subgame of length 2 following the history
C, player 2 chooses E;
Then, at the start of the whole game, player 1 chooses D.
In any game in which this procedure selects
a single action for the player who moves at
the start of each subgame, the strategy
profile thus selected is the unique subgame
perfect equilibrium of the game.
What happens in a game in which, at the start of some
subgames, more than one action is optimal?
The solution is to trace back separately the
implications for behavior in the longer
subgames of every combination of optimal
actions in the shorter subgames.
Example 172.1
The game has three subgames of length 1, in each of which
player 2 moves:
In subgames following the histories C and D, player 2 is indifferent
between her two actions;
In the subgame following the history E, player 2’s unique optimal
action is K.
There are four combinations of player 2's
optimal actions in the subgames of length 1:
• FHK
• FIK
• GHK
• GIK
The game has a single subgame of length 2, namely the whole
game, in which player 1 moves first. We now consider player
1’s optimal action in this game for every combination of optimal
actions of player 2 in the subgame of length 1:
For the combinations FHK and FIK of optimal actions of player 2,
player 1’s optimal action at the start of the game is C;
For the combination GHK of optimal actions of player 2, the actions
C, D, and E are optimal for player 1;
For the combination GIK of optimal actions of player 2, player 1’s
optimal action at the start of the game is D.
The strategy pairs isolated by the procedure are (C,FHK),
(C,FIK), (C,GHK), (D,GHK), (E,GHK), and (D,GIK)
The set of strategy profiles that this procedure yields for the whole
game is the set of subgame perfect equilibria of the game.
Two important propositions:
PROPOSITION 172.1 (Subgame perfect equilibrium of finite
horizon games and backward induction)
The set of subgame perfect equilibria of a finite horizon
extensive game with perfect information is equal to the set of
strategy profiles isolated by the procedure of backward
induction.
PROPOSITION 173.1 (Existence of subgame perfect
equilibrium)
Every finite extensive game with perfect information has a
subgame perfect equilibrium.
Exercise 173.2
Find the subgame perfect equilibria of this game:
Exercise 176.1 (Dollar auction)
An object that two people each value at v, a positive integer, is
sold in an auction. In the auction, the people take turns bidding;
a bid must be a positive integer greater than the previous bid.
On her turn, a player may pass rather than bid, in which case
the game ends and the other player receives the object; both
players pay their last bids (if any) (if player 1 passes initially, for
example, player 2 receives the object and makes no payment; if
player 1 bids 1, player 2 bids 3 and then player 1 passes, player
2 obtains the object and pays 3, and player 1 pays 1). Each
person’s wealth is w, which exceeds v. Neither player may bid
more than her wealth. For v=2 and w=3, model the auction as
an extensive game and find its subgame perfect equilibria.
Exercise 176.2 (A synergistic relationship)
Two individuals are involved in a synergistic relationship.
Suppose that the players choose their effort levels sequentially
(rather than simultaneously). First individual 1 chooses her effort
level a1. Then individual 2 chooses her effort level a2. An effort
level is a nonnegative number, and individual i's preferences
(for i = 1, 2) are represented by the payoff function ai(c + aj − ai),
where j is the other individual and c > 0 is a constant.
Find the subgame perfect equilibria.
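A numerical backward-induction check can be sketched as follows. The closed-form best response for individual 2 comes from the first-order condition of her payoff; the grid bounds and step are arbitrary choices made for illustration.

```python
# Numerical backward-induction check for Exercise 176.2 (a sketch).
# Individual 2 maximizes a2*(c + a1 - a2); the first-order condition
# gives her best response a2 = (c + a1)/2. Individual 1 then maximizes
# her induced payoff a1*(c + (c + a1)/2 - a1). We locate the maximizer
# by a fine grid search.

def br2(a1, c):
    return (c + a1) / 2.0                 # individual 2's best response

def payoff1(a1, c):
    return a1 * (c + br2(a1, c) - a1)     # individual 1's induced payoff

c = 1.0
grid = [i / 10000.0 for i in range(0, 50001)]   # a1 in [0, 5]
a1_star = max(grid, key=lambda a1: payoff1(a1, c))
a2_star = br2(a1_star, c)
print(a1_star, a2_star)   # → 1.5 1.25, i.e. 3c/2 and 5c/4
```

This agrees with the backward-induction outcome a1 = 3c/2, a2 = 5c/4.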
Exercise 174.2 (An entry game with a financially constrained
firm)
An incumbent in an industry faces the possibility of entry by a challenger. First the
challenger chooses whether to enter. If it does not enter, neither firm has any
further action; the incumbent’s payoff is TM (it obtains the profit M in each of the
following T ≥ 1 periods). The challenger’s payoff is 0. If the challenger enters, it
pays the entry costs f > 0, and in each of T periods the incumbent first commits to
fight or cooperate with the challenger in that period, then the challenger chooses
whether to stay in the industry or to exit. If, in any period, the challenger stays in,
each firm obtains in that period the profit –F < 0 if the incumbent fights and C >
max {F,f} if it cooperates. If, in any period, the challenger exits, both firms obtain
the profit zero in that period (regardless of the incumbent’s action); the incumbent
obtains the profit M > 2 C and the challenger the profit 0 in every subsequent
period. Once the challenger exits, it cannot reenter. Each firm cares about the sum
of its profits.
Find the subgame perfect equilibria of the extensive game.
10. Extensive Games
with Imperfect
Information
Framework
In this chapter we keep the extensive game setup: an extensive
game describes explicitly the sequential structure of decision
making, allowing us to study situations in which each decision
maker is free to change her mind as events unfold.
In this imperfect information setup, each player, when
choosing her action, may not be informed of the other players’
previous actions.
10.1 Extensive games with
imperfect information
To describe an extensive game with perfect information, we
need to specify the set of players, the set of terminal histories,
the player function and the players’ preferences.
To describe an extensive game with imperfect information, we
need to add a specification of each player’s information about
the history at every point at which she moves:
Denote by H_i the set of histories after which player i moves.
We specify player i's information by partitioning H_i into a collection
of information sets (the collection is called player i's information
partition).
When making her decision, the player is informed of the
information set that has occurred, but not of which history within
that set has occurred.
Example
Suppose player i moves after histories h1, h2, and h3
(H_i = {h1, h2, h3}) and is informed only that the history is h1,
or that it is either h2 or h3.
The player's information partition consists of the two information
sets {h1} and {h2, h3}.
Note that if the player is not informed at all, her information
partition contains a unique information set {h1, h2, h3}.
Important restriction
Denote by A(h) the set of actions available to the player who
moves after history h.
We allow two histories h and h′ to be in the same information set
only if A(h) = A(h′).
Why?
Note that we allow moves of chance. So an outcome is a lottery
(a probability distribution) over the set of terminal
histories.
Definition 314.1 (Extensive game with imperfect information)
A set of players
A set of sequences (terminal histories) having the property that no
sequence is a proper subhistory of any other sequence in the set
A function (the player function) that assigns either a player or “chance” to
every sequence that is a proper subhistory of some terminal history
A function that assigns to each history that the player function assigns to
chance a probability distribution over the actions available after that history
(each probability distribution is independent of every other distribution).
For each player, a partition (information partition) of the set of histories
assigned to that player by the player function.
For each player, preferences over the set of lotteries over terminal histories.
Example 314.2: BoS as an extensive game
Games in which each player moves once and no player, when
moving, is informed of any other player’s action, may be modeled
as strategic games or extensive games with imperfect information.
BoS:
Each of two people chooses whether to go to a Bach or a
Stravinsky concert;
Neither person, when choosing a concert, knows the one
chosen by the other person.
Model this game as an extensive game with imperfect information.
Solution:
Players: the two people, say 1 and 2
Terminal histories: (B,B), (B,S), (S,B), (S,S)
Player function: P(∅) = 1, P(B) = P(S) = 2
Chance moves: none
Information partitions
Player 1: {∅} (a single information set: player 1 has a single
move and, when she moves, she is informed that the
game is beginning)
Player 2: {B, S} (player 2 has a single move and, when she
moves, she is not informed whether the history is B or S)
Preferences: given in the game description
Figure 315.1
(The line connecting the histories indicates that they are in the
same information set.)
Example 317.1: Variant of Entry Game (the challenger, before
entering, takes an action that the incumbent does not observe)
An incumbent faces the possibility of entry by a challenger (see example
154.1)
The challenger has three choices:
Stay out
Prepare itself for combat and enter (preparation is costly but reduces
loss from fight)
Enter without preparations
A fight is less costly for the incumbent if the entrant is unprepared. But
regardless of the entrant's readiness, the incumbent prefers acquiescing to
fighting.
The incumbent observes whether the challenger enters but not whether it is
prepared.
Model (graphically by a tree) this game as an extensive game with imperfect
information.
Figure 317.1
10.2 Strategies
A strategy specifies the action the player takes whenever it is her turn to
move.
Definition 318.1 (Strategy in extensive game)
A (pure) strategy of player i in an extensive game is a function that assigns
to each of player i's information sets I_i an action in A(I_i) (the set of actions
available to player i at the information set I_i).
In the BoS game, each player has a single information set at which two actions (B
or S) are available. Thus, each player has two possible strategies: B or S. If a
player has several information sets, a strategy specifies the list of actions at
each information set in the form (a1, a2, …).
Definition 318.3 (Mixed Strategy in extensive game)
A mixed strategy of a player in an extensive game is a probability
distribution over the player’s pure strategies.
With mixed strategies, players are allowed to choose their actions randomly.
10.3 Nash equilibrium
Definition 318.4 (Nash equilibrium of extensive game)
Intuition: a strategy profile is a Nash equilibrium if no player has an
alternative strategy that increases her payoff, given the other
players' strategies.
Formal definition: the mixed strategy profile α* in an extensive
game is a (mixed strategy) Nash equilibrium if, for each player i
and every mixed strategy α_i of player i, player i's expected payoff
to α* is at least as large as her expected payoff to (α_i, α*_−i),
according to a payoff function whose expected value represents
player i's preferences over lotteries.
Notes:
an equilibrium in which no player's strategy entails any randomization (every player's
strategy assigns probability 1 to a single action at each information set) is a pure Nash
equilibrium.
One way to find a Nash equilibrium of an extensive game is to construct the strategic form
of the game and analyze it as a strategic game.
Example 319.1: BoS as an extensive game
Each player has two strategies: B and S
The strategic form of the game is given in Figure 19.1
Thus the game has two pure Nash equilibria:
(B, B)
(S, S)
In the BoS game, player 2 is not informed of the action chosen by player 1 when
taking an action (her information set contains both the history B and the history S).
However, player 2's experience playing the game tells her which history to expect.
E.g., in a steady state in which every person who plays the role of either player
chooses B, each player knows (by experience) that the other player will choose
B.
How may we extend the idea of subgame perfect equilibrium to
extensive games with imperfect information, to deal with
situations in which the notion of Nash equilibrium is not
adequate?
Example 322.1: Entry game
The strategic form of the entry game in Example 317.1 is the
following:
            Acquiesce    Fight
Ready       3, 2*        1, 1
Unready     4*, 3*       0, 2
Out         2, 4*        2*, 4*

(Stars mark payoffs of best responses.)
The game has two Nash equilibria:
(Unready, Acquiesce)
(Out, Fight)
(The game also has a mixed strategy Nash equilibrium in which the
challenger uses the pure strategy Out and the probability assigned by the
incumbent to Acquiesce is at most 1/2.)
As in Chapter 5 (perfect information), the Nash equilibrium (Out, Fight) is not
plausible. The notion of subgame perfect equilibrium eliminates this strategy
profile by requiring that each player's strategy be optimal, given the other players'
strategies, for every history after which she moves, regardless of whether
that history occurs if the players adhere to their strategies.
The natural extension of this idea to games with imperfect information
requires that each player's strategy be optimal at each of her information
sets.
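The two pure equilibria of the strategic form above can be checked mechanically by testing mutual best responses; the table encoding below simply transcribes the payoffs from Example 322.1.

```python
# Find the pure Nash equilibria of the entry game's strategic form
# (payoffs transcribed from the table in Example 322.1) by checking
# that each player's action is a best response to the other's.

payoffs = {  # (challenger action, incumbent action): (challenger, incumbent)
    ("Ready",   "Acquiesce"): (3, 2), ("Ready",   "Fight"): (1, 1),
    ("Unready", "Acquiesce"): (4, 3), ("Unready", "Fight"): (0, 2),
    ("Out",     "Acquiesce"): (2, 4), ("Out",     "Fight"): (2, 4),
}
rows = ["Ready", "Unready", "Out"]
cols = ["Acquiesce", "Fight"]

equilibria = []
for r in rows:
    for c in cols:
        best_row = payoffs[(r, c)][0] >= max(payoffs[(r2, c)][0] for r2 in rows)
        best_col = payoffs[(r, c)][1] >= max(payoffs[(r, c2)][1] for c2 in cols)
        if best_row and best_col:
            equilibria.append((r, c))

print(equilibria)   # [('Unready', 'Acquiesce'), ('Out', 'Fight')]
```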
In Example 322.1, the incumbent's action Fight is unambiguously suboptimal
at its information set, because the incumbent prefers Acquiesce if the
challenger enters, regardless of whether the challenger is ready. So, any
equilibrium that assigns a positive probability to Fight does not satisfy the
additional requirement introduced by the notion of subgame perfect
equilibrium.
However, the implementation of the idea in other games may be less
straightforward, because the optimality of an action at an information set may
depend on the history that has occurred. Consider, for example, a variant of the
entry game in which the incumbent prefers fighting to accommodating an
unprepared entrant (see Figure 323.1).
Figure 323.1
As in the original game, (Out, Fight) is a Nash equilibrium. But:
given that fighting is now optimal if the challenger enters
unprepared, the reasonableness of Fight in the modified game
depends on the history the incumbent believes has occurred;
and the challenger's strategy Out gives the incumbent no basis
on which to form such a belief.
So, to study this situation, we must specify players’ beliefs.
10.4 Beliefs and sequential
equilibrium
A Nash equilibrium of a strategic game with imperfect
information is characterized by two requirements:
Each player chooses her best action given her belief about the
other players' actions;
Each player's belief is correct.
The notion of equilibrium we define here:
Embodies these two requirements;
Insists that they hold at each point at which a player has to choose
an action (like subgame perfect equilibrium in extensive games with
perfect information).
10.4.1 Beliefs
We assume that at an information set that contains more than one
history, the player whose turn it is to move forms a belief about the
history that has occurred;
We model this belief as a probability distribution over the histories
in the information set;
We call a collection of beliefs (one for each information set) a belief
system.
Definition 324.1
A belief system in an extensive game is a function that
assigns to each information set a probability distribution over
the histories in that information set.
Example: the entry game (317.1)
The belief system consists of a pair of probability distributions:
One assigns probability 1 to the empty history (the
challenger's belief at the start of the game);
The other assigns probabilities to the histories Ready and
Unready (the incumbent's belief after the challenger enters).
10.4.2 Strategies
Definition 324.2 (Behavioral strategy in extensive game)
A behavioral strategy of player i in an extensive game is a
function that assigns to each of player i's information sets I_i a
probability distribution over the actions in A(I_i), with the property
that each probability distribution is independent of every other
distribution.
Note:
A behavioral strategy that assigns probability one to a single
action is equivalent to a pure strategy.
Behavioral strategies assign probabilities to actions at
information sets, while mixed strategies assign probabilities to
complete pure strategies.
In all the games that we study, behavioral strategies and
mixed strategies are equivalent, but behavioral strategies are
easier to deal with.
Example: the BoS game (314.2)
Each player has a single information set;
So, a behavioral strategy for each player is a single probability
distribution over her actions.
In this game, the set of behavioral strategies is identical to the
set of mixed strategies.
10.4.3 Equilibrium
Definition 325.1 (Assessment)
An assessment is a pair consisting of a profile of behavioral
strategies and a belief system. An assessment is an equilibrium
if it satisfies the following two requirements:
Sequential rationality: each player's strategy is optimal
whenever she has to move, given her belief and the other players'
strategies;
Consistency of beliefs with strategies: each player's belief is
consistent with the strategy profile.
Sequential rationality generalizes the requirement of subgame perfect
equilibrium: each player's strategy must be optimal in the part of the game
that follows each of her information sets, given the strategy profile and given
the player's belief about the history in the information set that has occurred,
regardless of whether the information set is reached if the players follow their
strategies.
Example 325 and Figure 326.1
Player 1's strategy is indicated by the red branches:
She selects E at the start of the game;
She selects J after the history (C,F).
Player 2's belief at her information set (the numbers in brackets) is that
the history C has occurred with probability 2/3 and the history D has
occurred with probability 1/3.
Sequential rationality requires that player 2's strategy be optimal at her
information set, given the subsequent behavior specified by player 1's
strategy, even though this set is not reached if player 1 follows her
strategy. Player 2's expected payoff in the part of the game starting at
her information set is:
Strategy F: (2/3 × 0) + (1/3 × 1) = 1/3
Strategy G: (2/3 × 1) + (1/3 × 0) = 2/3
Sequential rationality therefore requires player 2 to select G.
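The computation above can be sketched directly; the payoff entries are taken from the two expected-payoff lines just given.

```python
# Sequential-rationality check at player 2's information set in
# Figure 326.1: her payoff after each (history, action) pair is taken
# from the expected-payoff computation in the text.

belief = {"C": 2/3, "D": 1/3}    # player 2's belief at her information set
payoff2 = {                      # player 2's payoff by (history, action)
    ("C", "F"): 0, ("C", "G"): 1,
    ("D", "F"): 1, ("D", "G"): 0,
}

def expected_payoff(action):
    return sum(p * payoff2[(h, action)] for h, p in belief.items())

best = max(["F", "G"], key=expected_payoff)
print(expected_payoff("F"), expected_payoff("G"), best)
# F yields 1/3 and G yields 2/3, so sequential rationality selects G
```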
Sequential rationality also requires that player 1's strategy be optimal at
each of her two (one-element) information sets, given player 2's
strategy:
Player 1's optimal action after the history (C,F) is J;
If player 2's strategy is G, player 1's optimal actions at the start of the
game are D and E;
Thus, given player 2's strategy G, player 1 has two optimal strategies:
DJ and EJ.
Sequential rationality requirements (more formal definition)
Denote by (β, μ) an assessment (β is a profile of behavioral strategies and μ is a
belief system);
Let I_i be an information set of player i;
Denote by O_{I_i}(β, μ) the probability distribution over terminal histories that results
if each history in I_i occurs with the probability assigned to it by player i's belief μ
(not necessarily the probability with which it occurs if the players adhere to β)
and, subsequently, the players adhere to the strategy profile β;
In Figure 326.1:
For the information set {C, D}, this probability distribution assigns probability 2/3 to the
terminal history (C,G) and probability 1/3 to (D,G)
Sequential rationality requires, for each player i and each of her
information sets I_i, that her expected payoff to O_{I_i}(β, μ) be at least as
large as her expected payoff to O_{I_i}((γ_i, β_−i), μ), for each of her
behavioral strategies γ_i.
Consistency of beliefs with strategies is a new requirement. In
a steady state, each player's belief must be correct: the probability it
assigns to any history must be the probability with which that history
occurs if the players adhere to their strategies.
The implementation of this idea is somewhat unclear at an information
set that is not reached if the players follow their strategies: every history
in such a set has probability 0 if the players follow their strategies. We deal
with this difficulty by allowing the player who moves at such an information
set to hold any belief at that information set.
The consistency requirement thus restricts the belief system only at information
sets reached with positive probability if every player adheres to her
strategy.
By the Bayes’ rule, this probability is:
Pr (ℎ
∗
according to )
ℎ according to
ℎ∈
Game Theory  A (Short) Introduction 300 9/12/2011
Precisely, the consistency requirement imposes that the probability
assigned to every history ℎ
∗
in a information set reached with
positive probability by the belief of the player who moves at that
information set to be equal to the probability that ℎ
∗
occurs according
to the strategy profile, conditional on the information set’s being
reached.
Figure 326.1
If player 1's behavioral strategy assigns probability 1 to action E at
the start of the game, the consistency requirement places no
restriction on player 2's belief (player 2's information set is not reached
if player 1 adheres to her strategy);
If player 1's strategy at the start of the game assigns positive
probability to C or D, the consistency requirement comes into play:
Denote by p the probability assigned to C by player 1's strategy and
by q the probability assigned to D;
Consistency requires that player 2's belief assign probability
p/(p + q) to C and q/(p + q) to D.
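This Bayes'-rule computation can be sketched as a small helper; the function name and the sample probabilities are illustrative choices.

```python
# Consistency of beliefs for Figure 326.1 (a sketch): given the
# probabilities with which player 1's strategy reaches C and D,
# player 2's belief at her information set {C, D} follows Bayes' rule.

def consistent_belief(strategy_probs):
    """strategy_probs: {history: probability of reaching it under the strategy}."""
    total = sum(strategy_probs.values())
    if total == 0:
        return None   # information set unreached: any belief is allowed
    return {h: p / total for h, p in strategy_probs.items()}

print(consistent_belief({"C": 0.2, "D": 0.1}))   # ≈ {'C': 2/3, 'D': 1/3}
print(consistent_belief({"C": 0.0, "D": 0.0}))   # None: belief unrestricted
```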
Example 327.4: Consistency of beliefs in the entry game (Figures
317.1 and 323.1)
Denote by p_r, p_u, and p_o the probabilities that the challenger's
strategy assigns to Ready, Unready, and Out.
If p_o = 1, the consistency condition does not restrict the incumbent's
belief.
Otherwise, the condition requires that the incumbent assign
probability p_r/(p_r + p_u) to Ready and p_u/(p_r + p_u) to Unready.
Definition 328.1 (Weak sequential equilibrium)
An assessment (β, μ) (consisting of a behavioral strategy profile β
and a belief system μ) is a weak sequential equilibrium if it
satisfies sequential rationality and weak consistency of
beliefs with strategies.
Figure 326.1
In this game, player 1's strategy EJ is sequentially rational given player 2's
strategy G, and player 2's strategy G is sequentially rational given the belief
indicated in the figure and player 1's strategy EJ.
The belief is consistent with the strategy profile (EJ, G), because this profile
does not lead to player 2's information set.
Thus the game has a weak sequential equilibrium.
Note:
In an extensive game with perfect information, only one belief system is
possible (each player believes, at each information set, that the single
compatible history has occurred with probability 1);
Therefore, in an extensive game with perfect information, the strategy profile
in any weak sequential equilibrium is a subgame perfect equilibrium.
The strategy profile in any weak sequential equilibrium is a Nash equilibrium
(if an assessment is a weak sequential equilibrium, then each player's
strategy in the assessment is optimal at the beginning of the game, given the
other players' strategies).
How to find weak sequential equilibria?
We can use a combination of the techniques for finding subgame
perfect equilibria of extensive games with perfect information and
for finding Nash equilibria of strategic games;
We can find all the Nash equilibria of the game and then check
which of these equilibria are associated with weak sequential
equilibria.
Figure 326.1
Does the game have a weak sequential equilibrium in which player 1
chooses E?
If player 1 chooses E, player 2's belief is not restricted by consistency;
We therefore need to ask:
Whether any strategy of player 2 makes E optimal for player 1;
Whether there is a belief of player 2 that makes any such strategy
optimal.
We see that:
E is optimal for player 1 if and only if player 2 chooses F with
probability at most 2/3:
Any such strategy of player 2 is optimal if player 2
believes the history is C with probability 1/2;
The strategy of choosing F with probability 0 (i.e., G) is
optimal if player 2 believes the history is C with any
probability of at least 1/2.
Thus: an assessment is a weak sequential equilibrium if player
1's strategy is EJ and player 2:
Either chooses F with probability at most 2/3 and believes
that the history is C with probability 1/2;
Or chooses G and believes that the history is C with
probability at least 1/2.
Example 330.1 (Weak sequential equilibria of the entry game, Example
317.1)
The entry game has two pure strategy Nash equilibria: (Unready, Acquiesce)
and (Out, Fight).
Consider (Unready, Acquiesce):
Consistency requires that the incumbent believe, at its information
set, that the history is Unready (because the challenger's strategy
is Unready), making Acquiesce optimal;
The game has a weak sequential equilibrium in which the strategy
profile is (Unready, Acquiesce) and the incumbent believes that the
history is Unready.
Consider (Out, Fight):
Regardless of the incumbent's belief at its information set, Fight is
not an optimal action in the remainder of the game (for every belief,
Acquiesce yields a higher payoff than Fight);
So no assessment in which the strategy profile is (Out, Fight) is both
sequentially rational and consistent.
Why weak sequential equilibrium?
The consistency condition's limitation to information sets reached with
positive probability generates, in some games, a relatively large set
of equilibrium assessments;
Some of these equilibrium assessments do not plausibly correspond
to steady states. Consider the following variant of the entry game:
In this variant, Ready is better than Unready for the challenger,
regardless of the incumbent's action;
This game has a weak sequential equilibrium in which the
challenger's strategy is Out, the incumbent's strategy is Fight, and the
incumbent believes at its information set that the history is Unready
(with probability one);
In this equilibrium, the incumbent believes that the challenger has
chosen Unready, although this action is dominated by Ready for
the challenger. This belief does not seem reasonable.
10.5 Signaling games
In many interactions, information is asymmetric: some parties
are better informed than others.
In one interesting class of situations, the informed parties have
the opportunity to take actions observed by the uninformed parties
before the uninformed parties take actions that affect everyone: the
informed parties' actions may "signal" their information.
Example 332.1: Entry as a signaling game.
The challenger is strong with probability p and weak with probability
1 − p (with 0 < p < 1).
The challenger knows its type but the incumbent does not.
The challenger may either ready itself for battle or remain unready.
The incumbent observes the challenger's readiness but not its type,
and chooses either to fight or to acquiesce.
An unready challenger's payoff is 5 if the incumbent acquiesces to its
entry.
Preparations cost a strong challenger 1 unit of payoff and a weak
one 3 units, and fighting entails a loss of 2 units for each type.
The incumbent prefers to fight (payoff 1) rather than acquiesce to
(payoff 0) a weak challenger, and prefers to acquiesce to (payoff 2)
rather than fight (payoff −1) a strong one.
Figure 333.1
Figure 333.1 models this situation:
The empty history is in the center of the diagram;
The first move is made by chance (which determines the
challenger's type);
Both types have two actions (so the challenger has four
strategies);
The incumbent has two information sets, at each of which it
has two actions (A and F), and thus also has four strategies.
Searching for pure weak sequential equilibria
Note that a weak challenger prefers Unready to Ready,
regardless of the incumbent's actions (even if the incumbent
acquiesces to a ready challenger and fights an unready one). Thus,
in any weak sequential equilibrium, a weak challenger chooses
Unready.
10.5 Signaling games
Consider each possible action of a strong challenger.
Strong challenger chooses Ready
Both of the incumbent’s information sets are reached, so the
consistency condition restricts its beliefs at each set:
At the top information set, the incumbent must believe that the
history was (Strong, Ready) with probability one (because a weak
challenger never chooses Ready), and hence choose A;
At the bottom information set, the incumbent must believe that the
history was (Weak, Unready), and hence choose F;
Thus, if the challenger deviates and chooses Unready when he is
strong, he is worse off (he gets 3 rather than 4);
We conclude that the game has a weak sequential equilibrium in
which the challenger chooses Ready when he is strong and Unready
when he is weak. The incumbent acquiesces when he sees Ready
and fights when he sees Unready.
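As a sanity check, this separating profile can be verified numerically. The sketch below is my own, not Osborne’s; it only encodes the payoff description of Example 332.1 (unready entry is worth 5 under acquiescence, fighting costs the challenger 2, readying costs the strong type 1 and the weak type 3), and all names are illustrative.

```python
# Challenger payoffs in Example 332.1.
READY_COST = {"strong": 1, "weak": 3}
BASE = {"A": 5, "F": 3}  # unready payoff under acquiesce (A) / fight (F)

def challenger_payoff(ctype, action, response):
    pay = BASE[response]
    if action == "Ready":
        pay -= READY_COST[ctype]
    return pay

# Separating profile: strong readies, weak stays unready; the incumbent
# acquiesces to Ready and fights Unready.
strategy = {"strong": "Ready", "weak": "Unready"}
response = {"Ready": "A", "Unready": "F"}

# Neither type gains by deviating to the other action.
for ctype in ("strong", "weak"):
    chosen = strategy[ctype]
    other = "Unready" if chosen == "Ready" else "Ready"
    assert (challenger_payoff(ctype, chosen, response[chosen])
            >= challenger_payoff(ctype, other, response[other]))
```

A strong challenger gets 4 from Ready versus 3 from deviating to Unready, matching the text above.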
10.5 Signaling games
Strong challenger chooses Unready
At his bottom information set, the incumbent believes, by
consistency, that the history was (Strong, Unready) with
probability p and (Weak, Unready) with probability 1 − p.
Thus, his expected payoffs are:
to A: p · 2 + (1 − p) · 0 = 2p
to F: p · (−1) + (1 − p) · 1 = 1 − 2p
A is therefore optimal if p ≥ 1/4 and F is optimal if p ≤ 1/4.
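The p = 1/4 threshold is a two-line computation. A minimal sketch (my own naming; the incumbent payoffs assumed are 2/0 for acquiescing to and −1/1 for fighting a strong/weak challenger, as stated in the example):

```python
# Incumbent's expected payoff when both challenger types choose Unready;
# p is the probability that the challenger is strong.
PAYOFF = {"A": (2, 0), "F": (-1, 1)}  # (vs strong, vs weak)

def expected_payoff(action, p):
    vs_strong, vs_weak = PAYOFF[action]
    return p * vs_strong + (1 - p) * vs_weak

# A yields 2p and F yields 1 - 2p, so the two cross at p = 1/4.
assert expected_payoff("A", 0.25) == expected_payoff("F", 0.25)
assert expected_payoff("A", 0.5) > expected_payoff("F", 0.5)      # A optimal
assert expected_payoff("F", 0.125) > expected_payoff("A", 0.125)  # F optimal
```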
10.5 Signaling games
Suppose that p ≥ 1/4 and the incumbent chooses A in
response to Unready:
A strong challenger who chooses Unready obtains
a payoff of 5;
If he switches to Ready, his payoff is less than 5
regardless of the incumbent’s action;
Thus, if p ≥ 1/4, the game has a weak sequential
equilibrium in which both types of challenger choose
Unready and the incumbent acquiesces to an
unready challenger. The incumbent may hold any
belief about the type of a ready challenger, and,
depending on his belief, may fight or acquiesce.
10.5 Signaling games
Now suppose that p ≤ 1/4 and the incumbent chooses F in response to
Unready:
A strong challenger who chooses Unready obtains a payoff of 3. If he
switches to Ready, his payoff is 2 if the incumbent fights and 4 if he
acquiesces. Thus, for an equilibrium, the incumbent must fight a ready
challenger (otherwise a strong challenger would deviate).
If the incumbent believes that a ready challenger is weak with high
enough probability (at least 3/4), fighting is indeed optimal.
Is such a belief consistent? Yes: the consistency condition does
not restrict the incumbent’s belief upon observing Ready, because this
action is not taken when the challenger follows his strategy of choosing
Unready regardless of his type.
Thus, if p ≤ 1/4, the game has a weak sequential equilibrium in which:
Both types of challenger choose Unready;
The incumbent fights regardless of the challenger’s action;
The incumbent assigns probability at least 3/4 to the challenger
being weak if it observes that the challenger is ready for battle.
10.5 Signaling games
This example shows that two kinds of pure strategy equilibria
may exist in signaling games:
Separating equilibrium: each type of sender (of the signal)
chooses a different action, so that, upon observing the sender’s
action, the receiver (of the signal) knows the sender’s type;
Pooling equilibrium: all types of the sender choose the same
action, so that the sender’s action gives the receiver no clue to the
sender’s type.
Note: if the sender has more than two types, mixtures of these
kinds of equilibrium may exist (the set of types may be divided
into groups, within each of which all types choose the same
action and between which the actions differ).
10.8 Strategic information
transmission
The situation
You research the market for a new product and submit a report to
your boss, who decides which product to develop;
Your preferences differ from those of your boss:
You are interested in promoting the interests of your division;
Your boss is interested in promoting the interests of the whole
firm.
If you report the results of your research without distortion, the
product your boss will choose is not the best one for you.
If you systematically distort your findings, your boss will be able to
unravel your report and deduce your actual findings.
Obfuscation therefore seems a more promising route.
10.8 Strategic information
transmission
The model
A sender (you) observes the state t, a number between 0 and 1,
that a receiver (the boss) cannot see;
The state is uniformly distributed: Pr(t ≤ x) = x;
The sender submits a report r (a number) to the receiver;
The receiver observes the report and takes an action y (a number);
The payoff functions are:
Sender: −(y − (t + b))²
Receiver: −(y − t)²
where b (the sender’s bias) is a fixed number that reflects the
divergence between the sender’s and the receiver’s preferences.
Note that the receiver’s optimal action is y = t and the sender’s
optimal action is y = t + b (see Figure 343.1).
10.8 Strategic information
transmission
Figure 343.1 (players’ payoff functions)
10.8 Strategic information
transmission
10.8.1 Perfect information transmission?
Consider an equilibrium in which the sender accurately reports the
state he observes: r(t) = t for all t.
Given this strategy, the consistency condition requires that the
receiver believe (correctly) that the state is t when the sender
reports t. The receiver hence optimally chooses the action y = t
(the maximizer of −(y − t)²).
Is the sender’s strategy a best response to the receiver’s strategy?
Not if b > 0. Suppose the state is t. If the sender reports t, the
receiver chooses y = t and the sender’s payoff is −b². If the sender
instead reports t + b, the receiver chooses y = t + b and the
sender’s payoff is 0.
So, unless the sender’s and the receiver’s preferences are the same
(b = 0), the game has no equilibrium in which the sender accurately
reports the state.
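The deviation argument is easy to verify numerically. A sketch (my own; nothing here is from the slides beyond the sender’s payoff function, and the state and bias are dyadic so the floating-point comparisons are exact):

```python
# Sender payoff -(y - (t + b))^2 when the receiver best-responds to reports.
def sender_payoff(y, t, b):
    return -(y - (t + b)) ** 2

b, t = 0.125, 0.25
# Truthful report: the receiver chooses y = t, so the sender gets -b^2.
truthful = sender_payoff(t, t, b)
# Deviation to reporting t + b: the receiver chooses y = t + b, payoff 0.
deviation = sender_payoff(t + b, t, b)
assert truthful == -b ** 2
assert deviation == 0 and deviation > truthful
```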
10.8 Strategic information
transmission
10.8.2 No information transmission?
Consider an equilibrium in which the sender reports a constant
value: r(t) = k for all t.
The consistency condition requires that if the receiver observes the
report k, his belief must remain the same as it was initially (state
uniformly distributed between 0 and 1). The expected value of t is
then 1/2 and his optimal action (the action that maximizes his
expected payoff) is y = 1/2.
The consistency condition does not constrain the receiver’s belief
about the state upon receiving a report different from k: such a
report does not occur when the sender follows his strategy.
Note also that if the receiver simply ignores the sender’s report
completely, his optimal action remains the same. Because the
sender’s report has no effect on the receiver’s optimal action, any
constant report is optimal for him and, in particular, r(t) = k is optimal.
10.8 Strategic information
transmission
In summary, for every value of b, the game has a weak sequential
equilibrium in which the sender’s report conveys no information
(constant report), and the receiver ignores the report (he maintains
his initial belief about the state) and takes the action that maximizes
his expected payoff.
If b is small, this equilibrium is not very attractive for either the
sender or the receiver. For example, if b = 1/4, then for any t with
0 ≤ t ≤ 1/4, both the sender and the receiver are better off if the
receiver’s action is t + b.
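The claim in the last bullet can be checked on a grid of states. A small sketch of my own, with b = 1/4 and the two quadratic payoff functions from the model:

```python
b = 0.25  # sender bias

def sender_payoff(y, t):
    return -(y - (t + b)) ** 2

def receiver_payoff(y, t):
    return -(y - t) ** 2

# For every state t in [0, 1/4], the action y = t + b is (weakly) better
# for BOTH players than the babbling action y = 1/2.
for i in range(5):
    t = i * 0.0625  # t = 0, 1/16, ..., 1/4
    assert sender_payoff(t + b, t) >= sender_payoff(0.5, t)
    assert receiver_payoff(t + b, t) >= receiver_payoff(0.5, t)
```

At t = 1/4 the receiver is exactly indifferent; below it he strictly prefers t + b.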
10.8 Strategic information
transmission
10.8.3 Some information transmission
Does the game have equilibria in which some information is
transmitted?
Suppose the sender makes one of two reports:
r1 if 0 ≤ t ≤ t1
r2 if t1 ≤ t ≤ 1
with r1 ≠ r2.
Consider the receiver’s optimal response to this strategy:
If he sees the report r1, the consistency condition requires that
he now believe that t is uniformly distributed between 0 and t1.
His optimal action is then y = t1/2.
Similarly, if he sees the report r2, the consistency condition
requires that he now believe that t is uniformly distributed
between t1 and 1. His optimal action is then y = (1 + t1)/2.
10.8 Strategic information
transmission
The consistency condition does not restrict the receiver’s belief if
he sees a report other than r1 or r2. Assume therefore that, for
any such report, the receiver’s belief is one of the two beliefs he
holds if he sees r1 or r2 (so the optimal action is either y = t1/2
or y = (1 + t1)/2).
Now, for equilibrium, we need the sender’s report r1 to be
optimal if 0 ≤ t ≤ t1 and his report r2 to be optimal if t1 ≤ t ≤ 1,
given the receiver’s strategy.
By changing his report, the sender can change the receiver’s
optimal action from t1/2 to (1 + t1)/2. So, for the report r1 to be
optimal when 0 ≤ t ≤ t1, the sender must like t1/2 at least as
much as (1 + t1)/2 (and vice versa for the report r2).
In particular, in state t1, the sender must be indifferent
between the two actions t1/2 and (1 + t1)/2:
10.8 Strategic information
transmission
This indifference implies that t1 + b (the sender’s preferred
action) is midway between t1/2 and (1 + t1)/2 (the receiver’s
optimal actions). So (see Figure 346.1):
t1 + b = (1/2) [ t1/2 + (1 + t1)/2 ]
t1 = 1/2 − 2b
Figure 346.1
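The cutoff and the indifference condition can be verified directly. A sketch (my own, with an illustrative bias b = 1/16 < 1/4; the values are dyadic, so the indifference comparison is exact):

```python
b = 0.0625                            # sender bias, must be below 1/4
t1 = 0.5 - 2 * b                      # cutoff from the indifference condition
y_low, y_high = t1 / 2, (1 + t1) / 2  # receiver's two equilibrium actions

def sender_payoff(y, t):
    return -(y - (t + b)) ** 2

# In state t1 the sender is exactly indifferent between the two actions...
assert sender_payoff(y_low, t1) == sender_payoff(y_high, t1)
# ...below t1 he strictly prefers the low action, above t1 the high one.
assert sender_payoff(y_low, t1 - 0.1) > sender_payoff(y_high, t1 - 0.1)
assert sender_payoff(y_low, t1 + 0.1) < sender_payoff(y_high, t1 + 0.1)
```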
10.8 Strategic information
transmission
We need t1 > 0: this condition is satisfied only if b < 1/4. If b ≥ 1/4,
the game has no equilibrium in which the sender makes two
different reports. Put differently, if preferences diverge too
much, there is no point in asking the sender to submit a report:
the receiver should simply take the best action for himself
given his prior belief.
t1 = 1/2 − 2b is not only a necessary condition for equilibrium
but also a sufficient condition. Indeed, in such a case:
in every state t with 0 ≤ t < t1, the sender optimally reports r1;
in every state t with t1 ≤ t ≤ 1, the sender optimally reports r2 ≠ r1.
This follows from the shape of the payoff function, which is
symmetric (see Figure 346.2).
10.8 Strategic information
transmission
Figure 346.2
10.8 Strategic information
transmission
This equilibrium is better for both the receiver and the sender
than the one in which no information is transmitted. Consider
the receiver:
If no information is transmitted, he takes the action 1/2 in all
states, and his payoff in state t is −(1/2 − t)².
In the two-report equilibrium, his payoff is:
−(t1/2 − t)² for 0 ≤ t < t1
−((t1 + 1)/2 − t)² for t1 ≤ t ≤ 1
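The improvement for the receiver can also be seen ex ante by averaging −(y − t)² over a grid of states. A numeric sketch (my own; the grid average is only an approximation of the integral):

```python
def receiver_payoff(y, t):
    return -(y - t) ** 2

def expected_payoff(action_of, n=10000):
    """Average receiver payoff over a uniform grid of states in [0, 1]."""
    return sum(receiver_payoff(action_of((i + 0.5) / n), (i + 0.5) / n)
               for i in range(n)) / n

b = 0.0625
t1 = 0.5 - 2 * b  # two-report cutoff
babbling = expected_payoff(lambda t: 0.5)
two_report = expected_payoff(lambda t: t1 / 2 if t < t1 else (1 + t1) / 2)
# The two-report equilibrium raises the receiver's expected payoff
# (here roughly -0.025 versus roughly -1/12 under babbling).
assert two_report > babbling
```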
10.8 Strategic information
transmission
10.8.4 How much information transmission?
For b < 1/4, does the game have equilibria in which more information
is transmitted than in the two-report equilibrium?
Consider an equilibrium in which the sender makes one of K
reports, depending on the state. Specifically, the sender’s report is:
r_1 if 0 ≤ t < t_1
r_2 if t_1 ≤ t < t_2
…
r_K if t_{K−1} ≤ t ≤ 1
where r_j ≠ r_k for j ≠ k.
The equilibrium analysis follows the same lines as for the
two-report equilibrium.
10.8 Strategic information
transmission
Specifically:
If the receiver observes the report r_k, then the consistency
condition requires that he believe the state to be uniformly
distributed between t_{k−1} and t_k. Therefore, he optimally takes
the action (t_{k−1} + t_k)/2.
If he observes a report different from every r_k, the consistency
condition does not restrict his belief. We assume that his belief
in such a case is one of the beliefs he holds upon receiving some
report r_k.
Now, for equilibrium, we need the sender’s report r_k to be
optimal when the state is t with t_{k−1} ≤ t < t_k, for k = 1, …, K.
A sufficient condition for optimality is that, in each state t_k,
k = 1, …, K − 1, the sender be indifferent between the
reports r_k and r_{k+1} and, therefore, between the receiver’s
actions (t_{k−1} + t_k)/2 and (t_k + t_{k+1})/2.
10.8 Strategic information
transmission
This indifference implies that t_k + b is equal to the average of
(t_{k−1} + t_k)/2 and (t_k + t_{k+1})/2:
t_k + b = (1/2) [ (t_{k−1} + t_k)/2 + (t_k + t_{k+1})/2 ]
or
t_{k+1} − t_k = t_k − t_{k−1} + 4b
That is to say, the interval of states for which the
sender’s report is r_{k+1} is longer by 4b than the interval for
which the report is r_k.
The length of the first interval, from 0 to t_1, is t_1. The sum of
the lengths of all the intervals must be equal to one:
t_1 + (t_1 + 4b) + ⋯ + (t_1 + (K − 1)·4b) = 1
or
K·t_1 + 4b·(1 + 2 + ⋯ + (K − 1)) = 1
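The cutoffs can be computed for any K and b for which a positive first length exists, since the interval-length equation in closed form is K·t_1 + 2bK(K − 1) = 1. A sketch of the construction (my own; with t_0 = 0 and t_K = 1):

```python
# Cutoffs t_1 < ... < t_K of a K-report equilibrium: interval lengths grow
# by 4b, and summing them to one gives K*t1 + 2*b*K*(K-1) = 1.
def cutoffs(K, b):
    t1 = (1 - 2 * b * K * (K - 1)) / K
    if t1 <= 0:
        raise ValueError("no K-report equilibrium for this bias")
    ts, t, length = [], 0.0, t1
    for _ in range(K):
        t += length
        ts.append(t)
        length += 4 * b
    return ts

b = 0.05  # 1/24 <= b < 1/12, so three reports are sustainable
t = cutoffs(3, b)
assert abs(t[0] - (1 / 3 - 4 * b)) < 1e-12  # t_1 = 1/3 - 4b
assert abs(t[1] - (2 / 3 - 4 * b)) < 1e-12  # t_2 = 2/3 - 4b
assert abs(t[2] - 1) < 1e-12                # the last cutoff is t_K = 1
```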
10.8 Strategic information
transmission
The sum of the first K − 1 positive integers is (K − 1)K/2, so:
K·t_1 + 2bK(K − 1) = 1
If b is small enough that 2bK(K − 1) < 1, there is a positive value
of t_1 that satisfies the equation:
if 1/24 ≤ b < 1/12, the inequality is satisfied for K ≤ 3.
So, in the equilibrium in which the most information is transmitted,
the sender chooses one of three reports.
From K·t_1 + 2bK(K − 1) = 1 with K = 3, we have
t_1 = 1/3 − 4b and t_2 = 2/3 − 4b.
Figure 348.2 shows the equilibrium action taken by the
receiver as a function of the state t.
The values of the reports r_k do not matter as long as no two
are the same (we can think of them as words in a language).
10.8 Strategic information
transmission
Figure 348.2
10.8 Strategic information
transmission
In summary:
If there is a positive value of t_1 that satisfies
K·t_1 + 2bK(K − 1) = 1, then the game has a weak sequential
equilibrium in which the sender submits one of K different
reports, depending on the state.
For any given value of b, the largest value of K for which an
equilibrium exists is the largest value for which 2bK(K − 1) < 1.
If 2bK(K − 1) = 1, the quadratic formula gives
K = (1/2) (1 + √(1 + 2/b)).
Thus the larger the value of b, the smaller the largest
value of K possible in an equilibrium.
The greater the difference between the sender’s and the receiver’s
preferences, the coarser the information transmitted in the
equilibrium with the largest number of steps (the most informative
equilibrium).
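The largest sustainable K follows from solving 2bK(K − 1) = 1 with the quadratic formula, as above. A sketch (my own; biases for which the root is exactly an integer are avoided here because of floating-point rounding):

```python
import math

def max_reports(b):
    """Largest K with 2*b*K*(K-1) < 1: the most informative equilibrium."""
    K_star = (1 + math.sqrt(1 + 2 / b)) / 2  # root of 2*b*K*(K-1) = 1
    return math.ceil(K_star) - 1             # largest integer strictly below

assert max_reports(1 / 20) == 3   # 1/24 <= b < 1/12, as in the text
assert max_reports(1 / 5) == 2    # two reports still sustainable
assert max_reports(0.3) == 1      # b >= 1/4: only babbling remains
```

A larger bias b shrinks the most informative partition, matching the summary above.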
Pedagogic approach
More seriously
I assume “nothing known” Any question is welcome (if possible, in English) The goal is to “enter into the field”, not to cover as much as possible stuff (I don’t care about going to the end of the announced program but I care a lot on an indepth understanding) The mathematical level of the lecture should not be too challenging (basic equations resolution, some mathematical optimization) We will “play” many games during the lecture You MUST work each week to prepare the lecture (read the slides in advance, prepare questions, review concept definitions, …) I rest on you to correct my numerous mistakes This is a theoretical lecture !
9/12/2011
Game Theory  A (Short) Introduction
2
Outline
1 Introduction
1.1 What is game theory? 1.2 The theory of rational choice 1.3 Coming attractions: interacting decisionmakers
2.1 Strategic games 2.2 Example: the Prisoner’s Dilemma 2.3 Example: Bach or Stravinsky? 2.4 Example: Matching Pennies 2.5 Example: the Stag Hunt 2.6 Nash equilibrium 2.7 Examples of Nash equilibrium 2.8 Best response functions
Game Theory  A (Short) Introduction 3
2 Nash Equilibrium Theory (perfect information)
9/12/2011
Outline
2.9 Dominated actions 2.10 Equilibrium in a single population: symmetric games and symmatric equilibria
3 Nash Equilibrium: Illustrations
3.5 Auctions 4.1 Introduction 4.2 Strategic games in which players may randomize 4.3 Mixed strategy Nash equilibrium 4.4 Dominated actions 4.5 Pure equilibria when randomization is allowed 4.7 Equilibrium in a single population
4 Mixed Strategy Equilibrium (probabilistic behavior)
9/12/2011
Game Theory  A (Short) Introduction
4
Outline
4.9 The formation of player’s beliefs 4.10 Extension: finding all mixed strategy Nash equilibria 4.11 Extension: games in which each player has a continuum of actions 4.12 Appendix: Representing preferences by expected payoffs
9 Bayesian Games (imperfect information)
9.1 Motivational examples 9.2 General definitions 9.3 Two examples concerning information 9.6 Illustration: auctions
9/12/2011
Game Theory  A (Short) Introduction
5
Outline
5 Extensive Games (Perfect Information): Theory
5.1 Extensive games with perfect information 5.2 Strategies and outcomes 5.3 Nash equilibrium 5.4 Subgame perfect equilibrium 5.5 Finding subgame perfect equilibria of finite horizon games: backward induction
10.1 Extensive games with imperfect information 10.2 Strategies 10.3 Nash equilibrium 10.4 Beliefs and sequential equilibrium 10.5 Signaling games 10.8 Illustration: strategic information transmission
Game Theory  A (Short) Introduction 6
10 Extensive Games (Imperfect Information)
9/12/2011
1 Introduction .
The main fields of applications are: Economic analysis Social analysis Politic Biology Typical applications: Competing firms Bidders in auctions Realistic assumptions Simplicity Game Theory .1.A (Short) Introduction 8 Main tool: model development. This is an arbitrage between: 9/12/2011 .1 What is game theory? Game theory aims to help understand situations in which decisionmakers interact.
von Neumann and Morgenstern (1944) Early 1950s: John Nash Nash equilibrium Gametheoric study of bargaining 1994 Nobel Prize in Economic Sciences Harsanyi (19202000) Bayesian games (Harsanyi doctrine) Nash (1928) Nash equilibrium Selten (1930) Bounded rationality. extensive games 9/12/2011 Game Theory .A (Short) Introduction 9 .1.1 What is game theory? An outline of the history of game theory First major development in the 1920s Emile Borel John von Neumann Decisive publication: “Theory of Games and Economic Behavior”.
1.A (Short) Introduction 10 . 9/12/2011 Game Theory . This may point towards a revision of the model’s assumptions in order to better capture “stylized facts”.1 What is game theory? Modeling process Step 1: selecting aspects of a given situation (that appear to be relevant) and incorporating them into a model. This step is mostly an “art” Step 2: model analysis (using logic and mathematic) Step 3: studying model’s implications to determine whether our ideas make sense.
and takes it as given (the subset is not influenced by the decisionmaker preferences) Game Theory . The theory is based on two components: Actions and Preferences 1. under some circumstances.1.2 The theory of rational choice Rational choice: The decisionmaker chooses the best action according to her preferences. the decisionmaker knows the subset of available choices.2.1 Actions Set A consisting of all actions that.A (Short) Introduction 11 9/12/2011 . among all the actions available to her No qualitative restriction is place on preferences Rationality means consistency of her decisions when faced with different sets of available actions. are available to the decisionmaker In any given situation.
when presented with any pair of actions.2.A (Short) Introduction 12 .1. Preferences representation: preferences can be represented by a payoff function: the payoff function associates a number with each action in such a way that actions with higher numbers are preferred.2 Preferences and payoff functions We assume that the decisionmaker.2 The theory of rational choice 1. then a > c). More precisely: u(a) > u(b) if and only if the decisionmaker prefers a to b (Economists often speak about utility function) 9/12/2011 Game Theory . knows which of the pair she prefers We assume further that these preferences are consistent (if a > b and b > c.
the value she attaches to each unit of her own income is the same as the value she attaches to any two units of person 2’s income. she is indifferent between a situation in which her income is 1 and person 2’s is 0.A (Short) Introduction 13 . 9/12/2011 Game Theory .1) and (3. and one in which her income is 0 and person 2’s is 2. (2. where the first component in each case is her income and the second component is person 2’s income? Give a payoff function consistent with these preferences.3 Person 1 cares about both her income and person 2’ income. How do her preferences order the outcomes (1.4). Precisely. For example.2 The theory of rational choice Exercise 5.0).1.
2 The theory of rational choice Note that. If u represents a decisionmaker’s preferences and v is another payoff function for which v(a) > v(b) if and only if u(a) > u(b) then v also represents the decisionmaker’s preferences. Eg.A (Short) Introduction 14 . it doesn’t mean that the decisionmaker likes c a lot more than b! A payoff function contains no such information. Note that. 9/12/2011 Game Theory . a decisionmaker’s preferences can be represented by many different payoff functions. u(b)=1 and u(c)=100.: if u(a)=0. the payoff function also conveys only ordinal preference. then any increasing function of u also represents these preferences. as decisionmaker’s preferences convey only ordinal information.1. as a consequence. More succinctly: if u represents a decisionmarker’s preferences.
and v(c)=2? How about the function w for which w(a)=w(b)=0 and w(c)=8? 9/12/2011 Game Theory . Are they also represented by the function v for which v(a)=1.1 A decisionmaker’s preferences over the set A={a.1.A (Short) Introduction 15 .2 The theory of rational choice Exercice 6.v(b)=0. u(b)=1 and u(c)=4.c} are represented by the payoff function u for which u(a)=0.b.
as every other available action. according her preferences.2. This is inconsistent: .when facing {a.A (Short) Introduction 16 . : we observe that a decision chooses a whenever she faces the set {a.c}.b} means that the decisionmaker prefers a to b .b. but sometimes chooses b when facing the {a. she must choose a or c.2 The theory of rational choice 1.always choosing a when facing {a.3 The theory of rational choice The theory of rational choice is the action chosen by a decisionmaker is at least as good.b}. Eg. (Independence of irrelevant alternatives) 9/12/2011 Game Theory .b.1.c}. Note that not every collection of choices for different sets of available actions is consistent with the theory.
In the real world.2 The theory of rational choice 1. 9/12/2011 Game Theory . the decisionmaker cares only about her own choice. Game theory studies situations in which some of the variables that affect the decisionmarker are the actions of other decisionmarkers.A (Short) Introduction 17 .1. a decisionmaker often does not control all the variables that affect her.3 Coming attractions Up to now.
2 Nash Equilibrium: Theory .
A (Short) Introduction 19 . a set of actions for each player.1 Strategic games Terminology: we refer to decisionmakers as players each player has a set of possible actions the action profile is the list of all players’ actions each player has preferences about the action profiles Definition 13.1 (Strategic game with ordinal preferences) A strategic game with ordinal preferences consists of a set of players for each player. preferences over the set of action profiles 9/12/2011 Game Theory .2.
preferences = profits players = animals. actions = prices. preferences = winning or loosing It is frequently convenient to specify the payers’ preferences by giving payoff functions that represent them.1 Strategic games Note that: This allows to model a very wide range of situations: players = firms. not by the payoffs that represent these preferences Time is absent from the model : each player chooses her action once and for all and the players choose their actions simultaneously (no player is informed of the action chosen by any other player) 9/12/2011 Game Theory . Keep however in mind that a strategic game with ordinal preferences is defined by the players’ preferences.2. actions = fighting for a prey.A (Short) Introduction 20 .
but not enough evidence to convict either of them of the major crime unless one of them acts as an informer against the other (finks).2 Example: the Prisoner’s Dilemma Example 14. Model this situation as a strategic game. she will be freed and used as a witness against the other. who will spend four years in prison. 9/12/2011 Game Theory . each will be convicted of the minor offense and spend one year in prison. If one and only one of the finks.2. If they both stay quiet. If the both fink. There is enough evidence to convict each of them of a minor offense. each will spend three years in prison.A (Short) Introduction 21 .1 Two suspects in a major crime are held in separate cells.
A (Short) Introduction 22 . F 0 1Q.Quiet)>u1(Quiet.2.Q 2 1Q.Fink)>u1(Quiet.Fink) three years in prison (Quiet. Fink} Preferences: Suspect 1’s ordering of the action profiles (from best to worse): (Fink. F Game Theory .Quiet) free (Quiet.Quiet)>u1(Fink.: 9/12/2011 3 1F .Quiet) one year in prison (Fink.Fink) four years in prison (and viceversa for player 2) We can adopt a payoff function for each player: u1(Fink.2 Example: the Prisoner’s Dilemma Solution Players: the two suspects Actions: Each player’s set of actions is {Quiet.Fink) Eg.Q 11F .
the situation is the following : (numbers are payoffs of payers) Suspect 2 Quiet (2. 9/12/2011 Game Theory .2 Example: the Prisoner’s Dilemma Graphically.2) Fink (0.A (Short) Introduction 23 .3) Quiet Suspect 1 Fink (3.0) (1.1) The prisoner’s dilemma models a situation in which there are gains from cooperation (each player prefers that both players choose Quiet than they both choose Fink) but each player has an incentive to free ride whatever the other play does.2.
A (Short) Introduction 24 .2.2 Example: the Prisoner’s Dilemma 2. but the increment in its value to you is not worth the extra effort). Each of you can either work hard or goof off.1 Working on a joint project You are working with a friend on a joint project. and the worst outcome for you is that you work hard and your friend goofs off (you hate to be exploited). 9/12/2011 Game Theory . Model this situation as a strategic game. You prefer the outcome of your both working hard to the outcome of your both goofing off (in which case nothing gets accomplished). If your friend works hard.2. then you prefer to goof off (the outcome of the project would be better if you worked hard too.
the each earns a profit of $600.A (Short) Introduction 25 . If one firm chooses High and the other chooses Low. If both firms choose Low. two firms produce the same good. for which each firm charges either a low price or a high price.2 Duopoly In a simple model of a duopoly. then each earns a profit of $1000. but its volume is high).2.2. Model this situation as a strategic game. If both firms choose High.2 Example: the Prisoner’s Dilemma 2. Each firm wants to achieve the highest possible profit. 9/12/2011 Game Theory . whereas the firm choosing Low earns a profit of $1200 (its unit profit is low. Each firm cares only about its profit. then the firm choosing High obtains no customers and makes a loss of $200.
A (Short) Introduction 26 .5 1.1 Determine whether each of the following games differs from the Prisoner’s Dilemma only in the names of the players’ actions X X Y 3.2 Example: the Prisoner’s Dilemma Exercise 17.1 Y 1.1 3.2 Y 0.1 An application to M&As: the Grossman & Hart free riding argument.3 5. 9/12/2011 Game Theory .0 X Y X 2.5 0.2.
Model this situation as a strategic game. Two concerts are available: one of music by Bach. 9/12/2011 An application to merging banks: two banks are merging. and one of music by Stravisky.2.3 Example: Back or Stravinsky? (Battle of the Sexes or BoS) Situation: Players agree that it is better to cooperate Players disagree about the best outcome Example 18. If they go to different concerts. each of them is equally unhappy listening to the music of either composer. Both agree that they Theory .2 Two people wish to go out together. .A better off using the same information 27 will be (Short) Introduction Game system technology but they disagree on which one to choose. One person prefers Bach and the other prefers Stravinsky.
Google versus Microsoft/Yahoo 9/12/2011 Game Theory .A (Short) Introduction 28 .
1) Stravinsky (0.2.A (Short) Introduction 29 .2) 9/12/2011 Game Theory .3 Example: Back or Stravinsky? (Battle of the Sexes or BoS) Solution Player 2 Bach (2.0) (1.0) Bach Player 1 Stravinsky (0.
If they show the same side. whether to show the head or the tail of a coin. . I they show different sides. Model this situation as a strategic game. simultaneously. person 2 pays person 1 a dollar. person 1 pays person 2 a dollar.2.1 Two people choose.A (Short) Introduction 30 while the newcomer prefers that the products look alike.4 Example: Matching Pennies Situation: A purely conflictual situation Example 19. Each person cares only about the amount of money she receives (and is a profit maximizer!). 9/12/2011 An application to choices of appearances for new products by an established produced and a new entrant in a market of fixed size: the established produced prefers the newcomer’s product to look different from its own (to avoid confusion) Game Theory .
A (Short) Introduction 31 .IPhone iOS versus Android 9/12/2011 Game Theory .
1) 9/12/2011 Game Theory .4 Example: Matching Pennies Solution Player 2 Head Tail Head (1.1) (1.1) Player 1 Tail (1.2.A (Short) Introduction 32 .1) (1.
Example 20.2. If all hunters pursue the stag. 9/12/2011 Game Theory .2 Each of a group of hunters has two options: she may remain attentive to the pursuit of a stag. Each hunter prefers a share of the stag to a hare.A (Short) Introduction 33 . the stag escapes. they catch it and share it equally. and the hare belongs to the defecting hunter alone. If any hunter devotes her energy to catching a hare.5 Example: the stag Hunt Situation: Cooperation is better for both but not credible. Model this situation as a strategic game. or she may catch a hare.
1) 9/12/2011 Game Theory .5 Example: the stag Hunt Solution Player 2 Stag (2.2) Hare (0.A (Short) Introduction 34 .1) Stag Player 1 Hare (1.0) (1.2.
Beliefs are about “typical” opponents. Assumption: We assume in strategic games that players’ beliefs are derived from their past experience playing the game: they know how their opponent will behave.2.A (Short) Introduction 35 9/12/2011 . Game Theory .6 Nash equilibrium Question: What actions will be chosen by players in a strategic game? (assuming that each player chooses the best available action) Answer: To make a choice. not any specific set of opponents. note however that they do not know which specific opponent they are faced to and so. they can not condition their behavior on being faced to a specific opponent. each player must form a belief about other players’ action.
Two key ingredients: rational choices and correct beliefs 9/12/2011 Game Theory . Players’ beliefs about each other’s actions are (assumed to be) correct.2. in particular. the action profile is the same Nash equilibrium a*. This implies. then no player has a reason to choose any action different from her component of a*. whenever the game is played. Note: A Nash equilibrium corresponds to a steady state: if. a Nash equilibrium is action profile a* with the property that no player i can do better by choosing an action different from a*i. given that every other player j adheres to a*j. that two players’ beliefs about a third player’s action are the same (expectations are coordinated – Harsanyi Doctrine).6 Nash equilibrium In this setup.A (Short) Introduction 36 .
6 Nash equilibrium Notations and formal definition: Let ai be the action of player i Let a be an action profile: a=(a1. whereas player i chooses a’i (the subscript –i stands for “except i”).ai) is the action profile in which all the players other than i adhere to a while i “deviates” to a’i. … an) Let a’i be any action of player i (different from ai ) Let (a’i. (a’i. Note that if a’i=ai.A (Short) Introduction 37 . a2.ai) =a 9/12/2011 Game Theory .ai) = (ai.2.ai) be the action profile in which every player j except i chooses her action aj as specified by a. then (a’i.
Definition 23.1 (Nash equilibrium of a strategic game with ordinal preferences)
The action profile a* in a strategic game with ordinal preferences is a Nash equilibrium if, for every player i and every action a_i of player i, a* is at least as good according to player i's preferences as the action profile (a_i, a*_−i) in which player i chooses a_i while every other player j chooses a*_j. Equivalently:
u_i(a*) ≥ u_i(a_i, a*_−i) for every action a_i of player i
2.6 Nash equilibrium

Notes:
This definition implies neither that a strategic game necessarily has a Nash equilibrium, nor that it has at most one.
This definition is designed to model a steady state among experienced players. An alternative approach (called "rationalizability") is to assume that players know each other's preferences, and to consider what each player can deduce about the other players' actions from their rationality and their knowledge of each other's rationality.
Nash equilibrium has been studied experimentally. The keys to designing suitable experiments are to ensure that players are experienced at playing the game, and to ensure that players do not repeatedly face the same opponents (as each play of the game must be in isolation).
The key to interpreting results correctly is to remember that Nash equilibrium is about equilibrium: the outcome must have converged (and the theory says nothing about the process necessary for convergence to appear).
2.7 Examples of Nash equilibrium

2.7.1 Prisoner's Dilemma

                      Suspect 2
                      Quiet     Fink
Suspect 1   Quiet     (2,2)     (0,3)
            Fink      (3,0)     (1,1)
2.7 Examples of Nash equilibrium

Detailed explanation:
(Fink, Fink) is a Nash equilibrium because:
given that player 2 chooses Fink, player 1 is better off choosing Fink than Quiet;
given that player 1 chooses Fink, player 2 is better off choosing Fink than Quiet.
No other action profile is a Nash equilibrium. E.g., (Quiet, Quiet) is not a Nash equilibrium because:
if player 2 chooses Quiet, player 1 is better off choosing Fink; (moreover) if player 1 chooses Quiet, player 2 is also better off choosing Fink.
The incentive to free ride eliminates the possibility that the mutually desirable outcome (Quiet, Quiet) occurs.
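The cell-by-cell reasoning above can be automated. The following is a minimal sketch (my own illustration, not from the slides) that brute-forces the Nash condition of Definition 23.1 over every action profile of a two-player game, applied to the Prisoner's Dilemma payoffs:

```python
# Hedged sketch: brute-force check of u_i(a*) >= u_i(a_i, a*_-i) for
# every profile of a finite two-player game. Payoffs are a dict mapping
# action pairs to (u1, u2).

def nash_equilibria(actions1, actions2, payoff):
    """Return all pure-strategy Nash equilibria of a two-player game."""
    equilibria = []
    for a1 in actions1:
        for a2 in actions2:
            u1, u2 = payoff[(a1, a2)]
            # No profitable deviation for player 1 ...
            best1 = all(payoff[(d, a2)][0] <= u1 for d in actions1)
            # ... and none for player 2.
            best2 = all(payoff[(a1, d)][1] <= u2 for d in actions2)
            if best1 and best2:
                equilibria.append((a1, a2))
    return equilibria

# Prisoner's Dilemma payoffs from the slides.
pd = {("Quiet", "Quiet"): (2, 2), ("Quiet", "Fink"): (0, 3),
      ("Fink", "Quiet"): (3, 0), ("Fink", "Fink"): (1, 1)}

print(nash_equilibria(["Quiet", "Fink"], ["Quiet", "Fink"], pd))
# → [('Fink', 'Fink')]
```

The same helper works for any finite two-player payoff table, e.g. BoS or the Stag Hunt below.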
2.7 Examples of Nash equilibrium

Note that, in the present case, the Nash equilibrium action is the best action for each player: if the other player chooses her equilibrium action (Fink), but also if the other player chooses her other action (Quiet). In this sense, this equilibrium is highly robust. But this is not a requirement of Nash equilibrium: only the first condition must be met.
2.7 Examples of Nash equilibrium

Exercise 27.1
Each of two players has two possible actions, Quiet and Fink; each action pair results in the players receiving amounts of money equal to the numbers corresponding to that action pair in the following figure:

                     Player 2
                     Quiet     Fink
Player 1   Quiet     (2,2)     (0,3)
           Fink      (3,0)     (1,1)
2.7 Examples of Nash equilibrium

Players are not "selfish": the preferences of each player i are represented by the payoff function m_i(a) + α m_j(a), where m_i(a) is the amount of money received by player i, j is the other player, and α is a given nonnegative number. Player 1's payoff to the action pair (Quiet, Quiet) is, for example, 2 + 2α.
1. Formulate the strategic game that models this situation in the case α = 1. Is this game the Prisoner's Dilemma?
2. Find the range of values of α for which the resulting game is the Prisoner's Dilemma. For values of α for which the game is not the Prisoner's Dilemma, find the Nash equilibria.
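The payoff transformation in this exercise is easy to experiment with. A hedged, self-contained sketch (my own illustration, not the textbook's solution) that builds the transformed game u_i(a) = m_i(a) + α·m_j(a) and checks its equilibria:

```python
# Hedged sketch of Exercise 27.1's payoff transformation: each player's
# payoff is her own money plus alpha times the other player's money.

money = {("Quiet", "Quiet"): (2, 2), ("Quiet", "Fink"): (0, 3),
         ("Fink", "Quiet"): (3, 0), ("Fink", "Fink"): (1, 1)}
actions = ["Quiet", "Fink"]

def transformed(alpha):
    """Payoffs when each player also values the other's money at weight alpha."""
    return {a: (m1 + alpha * m2, m2 + alpha * m1)
            for a, (m1, m2) in money.items()}

def equilibria(payoff):
    """Pure Nash equilibria of the two-player game given by `payoff`."""
    eqs = []
    for a1 in actions:
        for a2 in actions:
            u1, u2 = payoff[(a1, a2)]
            if all(payoff[(d, a2)][0] <= u1 for d in actions) and \
               all(payoff[(a1, d)][1] <= u2 for d in actions):
                eqs.append((a1, a2))
    return eqs

print(equilibria(transformed(0)))   # alpha = 0: the ordinary Prisoner's Dilemma
print(equilibria(transformed(1)))   # alpha = 1: sufficiently altruistic players
```

With α = 0 this recovers (Fink, Fink); with α = 1 the altruism is strong enough that the equilibrium changes, which is exactly the phenomenon the exercise asks you to delimit as α varies.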
2.7 Examples of Nash equilibrium

2.7.2 BoS

                        Player 2
                        Bach      Stravinsky
Player 1   Bach         (2,1)     (0,0)
           Stravinsky   (0,0)     (1,2)

Nash equilibria are (B,B) and (S,S). Why? Note that this means that BoS has two steady states!
2.7 Examples of Nash equilibrium

2.7.3 Matching Pennies

                     Player 2
                     Head       Tail
Player 1   Head      (1,−1)     (−1,1)
           Tail      (−1,1)     (1,−1)

There is no Nash equilibrium. Why?
2.7 Examples of Nash equilibrium

2.7.4 The Stag Hunt

                     Player 2
                     Stag      Hare
Player 1   Stag      (2,2)     (0,1)
           Hare      (1,0)     (1,1)

Nash equilibria are (S,S) and (H,H). Why? Note that, despite (S,S) being better for both players than (H,H), this has no bearing on the equilibrium status of (H,H).
2.7 Examples of Nash equilibrium

Exercise 30.1 (extension to n players)
Consider the variant of the n-hunter Stag Hunt in which only m hunters, with 2 ≤ m ≤ n, need to pursue the stag in order to catch it (continue to assume that there is a single stag). Assume that a captured stag is shared only by the hunters who catch it. Under each of the following assumptions on the hunters' preferences, find the Nash equilibria of the strategic game that models the situation.
a. As before, each hunter prefers the fraction 1/m of the stag to a hare.
b. Each hunter prefers a fraction 1/k of the stag to a hare, where k is an integer with m ≤ k ≤ n, but prefers a hare to any smaller fraction of the stag.
2.7 Examples of Nash equilibrium

Note
In games with many Nash equilibria, the theory isolates more than one steady state but says nothing about which one is more likely to appear. In some games, however, some of these equilibria seem more likely to attract the players' attention than others. These equilibria are called focal.
Example: (B,B) seems here more "likely" than (S,S).

                        Player 2
                        Bach      Stravinsky
Player 1   Bach         (2,2)     (0,0)
           Stravinsky   (0,0)     (1,1)
2.7 Examples of Nash equilibrium

Strict and nonstrict equilibria
Definition 23.1 requires only that the outcome of a deviation (by a player) be no better for the deviant than the equilibrium outcome. An equilibrium is strict if each player's equilibrium action is better than all her other actions, given the other players' actions:
u_i(a*) > u_i(a_i, a*_−i) for every action a_i ≠ a*_i of player i
(Note the strict inequality, contrasting with Definition 23.1.)
2.8 Best response functions

2.8.1 Definition
In more complicated games, analyzing each action profile one by one quickly becomes intractable. Let us denote the set of player i's best actions, when the list of the other players' actions is a_−i, by B_i(a_−i), or, more precisely:
B_i(a_−i) = { a_i in A_i : u_i(a_i, a_−i) ≥ u_i(a'_i, a_−i) for all a'_i in A_i }
Any action in B_i(a_−i) is at least as good for player i as every other action of player i when the other players' actions are given by a_−i.
2.8 Best response functions

2.8.2 Using best response functions to define Nash equilibrium
Proposition 36.1: The action profile a* is a Nash equilibrium of a strategic game with ordinal preferences if and only if every player's action is a best response to the other players' actions:
a*_i is in B_i(a*_−i) for every player i
If each player i has a single best response to each list a_−i (B_i(a_−i) = {b_i(a_−i)}), then this is equivalent to:
a*_i = b_i(a*_−i) for every player i
The Nash equilibrium is then characterized by a set of n equations in the n unknowns a*_i:
a*_1 = b_1(a*_2, …, a*_n)
…
a*_n = b_n(a*_1, …, a*_{n−1})
2.8 Best response functions

2.8.3 Using best response functions to find Nash equilibria
Procedure:
1. find the best response function of each player
2. find the action profiles that satisfy Proposition 36.1

Exercise 37.1
a. Find the Nash equilibria of the game in Figure 38.1 (a 3×3 game in which player 1 chooses among T, M, B and player 2 among L, C, R).
b. Represent the solution graphically.
2.8 Best response functions

Solution
(Figure: the same 3×3 payoff matrix with each player's best-response payoffs starred; the Nash equilibria are the cells in which both players' payoffs are starred.)
2.8 Best response functions

Example 39.1
Two individuals are involved in a synergistic relationship. If both individuals devote more effort to the relationship, they are both better off. For any given effort of individual j, the return to individual i's effort first increases, then decreases. Specifically, an effort level is a nonnegative number, and individual i's preferences (for i = 1, 2) are represented by the payoff function a_i(c + a_j − a_i), where a_i is i's effort level, a_j is the other individual's effort level, and c > 0 is a constant.
Questions:
Model the situation as a strategic game
Find the players' best response functions
Find the Nash equilibrium
Represent the situation graphically
2.8 Best response functions

Strategic game:
Players: the two individuals
Actions: each player's set of actions is the set of effort levels (nonnegative numbers)
Preferences: player i's preferences are represented by the payoff function a_i(c + a_j − a_i), for i = 1, 2
Note that each player has infinitely many actions, so the game cannot, as previously, be represented by a matrix of payoffs.
2.8 Best response functions

Best response function: intuitive construction
Given a_j, individual i's payoff is a quadratic function of a_i that is zero when a_i = 0 and when a_i = c + a_j. As quadratic functions are symmetric, this implies that player i's best response to a_j is:
b_i(a_j) = (1/2)(c + a_j)
(The payoff, plotted against a_i, is an inverted parabola with roots at 0 and c + a_j, so the maximum lies halfway between them.)
2.8 Best response functions

Mathematical construction
u_i = a_i(c + a_j − a_i) = c a_i + a_j a_i − a_i²
FOC: ∂u_i/∂a_i = c + a_j − 2a_i = 0
a*_i = (1/2)(c + a_j)
2.8 Best response functions

Nash equilibrium:
To find the Nash equilibrium, following Proposition 36.1, we have to solve the following system of equations:
a_1 = (1/2)(c + a_2)
a_2 = (1/2)(c + a_1)
By substitution, we get:
a_1 = (1/2)(c + (1/2)(c + a_1)) = (3/4)c + (1/4)a_1
So: a_1 = c
The unique Nash equilibrium is (c, c).
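The algebra above can be checked numerically. A small sketch (my own illustration, not from the slides): iterate the best response b_i(a_j) = (c + a_j)/2 from an arbitrary starting point and watch both effort levels converge to the Nash equilibrium (c, c):

```python
# Hedged numerical sketch of Example 39.1: repeated best responses
# converge to the fixed point (c, c) computed on the slide.

def best_response(c, a_other):
    """Best response in the synergistic-relationship game: (c + a_j) / 2."""
    return 0.5 * (c + a_other)

def iterate(c, a1=0.0, a2=0.0, rounds=50):
    """Simultaneously update both players' efforts `rounds` times."""
    for _ in range(rounds):
        a1, a2 = best_response(c, a2), best_response(c, a1)
    return a1, a2

a1, a2 = iterate(c=2.0)
print(a1, a2)   # both approach c = 2
```

Because each update halves the distance to the fixed point, convergence is fast from any starting efforts; this mirrors the graphical picture of the two best-response lines crossing at (c, c).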
2.8 Best response functions

Graphical representation
(Figure: the best response functions b_1(a_2) and b_2(a_1) plotted in the (a_1, a_2) plane; each is a line with intercept c/2 and slope 1/2, and they cross at the Nash equilibrium (c, c).)
2.8 Best response functions

Note that:
The best response of a player to the actions of the other players need not be unique.
If best response functions are not linear, the Nash equilibria need not be unique.
A Nash equilibrium need not exist: the best response functions may not cross.
If a player has many best responses to some of the other players' actions, then her best response function is "thick" (a surface) at some points.
Best response functions can be discontinuous, generating another set of difficulties.
2.8 Best response functions

Exercise 42.1
Find the Nash equilibria of the two-player strategic game in which each player's set of actions is the set of nonnegative numbers and the players' payoff functions are u_1(a_1, a_2) = a_1(a_2 − a_1) and u_2(a_1, a_2) = a_2(1 − a_1 − a_2).
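One way to attack this exercise numerically (a hedged sketch of my own, not the official solution): derive each best response from the first-order conditions, u_1 gives b_1(a_2) = a_2/2 and u_2 gives b_2(a_1) = (1 − a_1)/2, then iterate to a fixed point as in Example 39.1:

```python
# Hedged sketch for Exercise 42.1: iterate the best responses implied by
# the first-order conditions and read off the candidate equilibrium.
#   u1 = a1*(a2 - a1)      ->  b1(a2) = a2 / 2
#   u2 = a2*(1 - a1 - a2)  ->  b2(a1) = (1 - a1) / 2   (for a1 <= 1)

def b1(a2):
    return a2 / 2

def b2(a1):
    return max(0.0, (1 - a1) / 2)   # efforts must stay nonnegative

a1, a2 = 0.0, 0.0
for _ in range(100):
    a1, a2 = b1(a2), b2(a1)

print(round(a1, 4), round(a2, 4))   # → 0.2 0.4, i.e. (1/5, 2/5)
```

The iteration settles at (1/5, 2/5), which you can confirm analytically by solving a_1 = a_2/2 and a_2 = (1 − a_1)/2 simultaneously.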
2.9 Dominated actions

2.9.1 Strict domination
In any game, a player's action "strictly dominates" another action if it is superior, no matter what the other players do.
Definition 45.1 (Strict domination): in a strategic game with ordinal preferences, player i's action a''_i strictly dominates her action a'_i if:
u_i(a''_i, a_−i) > u_i(a'_i, a_−i) for every list a_−i of the other players' actions
Action a'_i is then said to be strictly dominated.
Example: in the Prisoner's Dilemma, the action Fink strictly dominates the action Quiet:

                     Quiet     Fink
           Quiet     (2,2)     (0,3)
           Fink      (3,0)     (1,1)
2.9 Dominated actions

Note that, as a strictly dominated action is not a best response to any actions of the other players, a strictly dominated action is not used in any Nash equilibrium. When looking for Nash equilibria of a game, we can therefore eliminate from consideration all strictly dominated actions.

2.9.2 Weak domination
In any game, a player's action weakly dominates another action if the first action is at least as good as the second action, no matter what the other players do, and is better than the second action for some actions of the other players.
2.9 Dominated actions

Definition 46.1 (Weak domination): in a strategic game with ordinal preferences, player i's action a''_i weakly dominates her action a'_i if:
u_i(a''_i, a_−i) ≥ u_i(a'_i, a_−i) for every list a_−i of the other players' actions
u_i(a''_i, a_−i) > u_i(a'_i, a_−i) for some list a_−i of the other players' actions
Note that in a strict Nash equilibrium no player's equilibrium action is weakly dominated, but in a nonstrict Nash equilibrium an equilibrium action can be weakly dominated.
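Definitions 45.1 and 46.1 translate directly into code. A minimal sketch (my own illustration) that classifies one action against another for player 1 of a two-player game:

```python
# Hedged sketch: does action `a` strictly dominate, weakly dominate, or
# not dominate action `b` for the given player, per Defs. 45.1 / 46.1?

def dominates(payoff, others, a, b, player=0):
    """Return 'strict', 'weak', or None."""
    diffs = [payoff[(a, o)][player] - payoff[(b, o)][player] for o in others]
    if all(d > 0 for d in diffs):
        return "strict"
    if all(d >= 0 for d in diffs) and any(d > 0 for d in diffs):
        return "weak"
    return None

pd = {("Quiet", "Quiet"): (2, 2), ("Quiet", "Fink"): (0, 3),
      ("Fink", "Quiet"): (3, 0), ("Fink", "Fink"): (1, 1)}

print(dominates(pd, ["Quiet", "Fink"], "Fink", "Quiet"))  # → strict
```

As expected, Fink strictly dominates Quiet in the Prisoner's Dilemma, while the reverse comparison returns no domination at all.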
2.9 Dominated actions

Exercise 47.1 (Strict equilibria and dominated actions)
For the game in Figure 48.1 (a 3×3 game in which player 1 chooses among T, M, B and player 2 among L, C, R):
Find the Nash equilibria of the game.
Determine, for each player, whether any action is strictly dominated or weakly dominated.
Determine whether any equilibrium is strict.
2.9 Dominated actions

2.9.4 Illustration: collective decision-making
The members of a group of people are affected by a policy, modeled as a number. The number n of people is odd. Each person i has a favorite policy, denoted x*_i. She prefers the policy y to the policy z if and only if y is closer to x*_i than is z.
The following mechanism is used to choose the policy:
each person names a policy
the policy chosen is the median of those named
E.g., if there are five people, the chosen policy is the median, i.e. the third-highest, of the five policies named.
Questions:
Model this situation as a strategic game
Find the equilibrium strategy of the players
Does anyone have an incentive to name a policy other than her favorite policy?
2.9 Dominated actions

Strategic game:
Players: the n people
Actions: each person's set of actions is the set of policies (numbers)
Preferences: each person i prefers the action profile a to the action profile a' if and only if the median policy named in a is closer to x*_i than is the median policy named in a'.
Equilibrium strategy of the players:
Claim: for each player i, the action of naming her favorite policy x*_i weakly dominates all her other actions. Why?
2.9 Dominated actions

Proof: Take x_i > x*_i (naming a higher policy than the favorite one). Among the actions of the players other than i, denote the value of the ½(n−1)-th highest by a− and the value of the ½(n+1)-th highest by a+ (so that half of the remaining players' actions are at most a− and half of them are at least a+).
a. For any list of actions of the players other than player i, player i is at least as well off naming x*_i as she is naming x_i:
if x*_i ≥ a+: the median policy is the same (namely a+) whether player i names x*_i or x_i (as x_i > x*_i);
if x*_i < a+ and x_i > a−: when player i names x*_i the median is the greater of x*_i and a−, while when she names x_i the median is the lesser of x_i and a+; both are at least x*_i and the first is at most the second, so player i is at least as well off naming x*_i;
if x_i ≤ a−: the same holds true (as x*_i < x_i ≤ a−), since the median policy is a− whichever of the two she names.
2.9 Dominated actions

b. For some actions of the other players, player i is better off naming x*_i than she is naming x_i:
Suppose that half of the remaining players name policies less than x*_i and half of them name policies greater than x_i. Then the outcome is x*_i if player i names x*_i, and x_i if she names x_i. Thus player i is better off naming x*_i than she is naming x_i.
A symmetric argument applies when x_i < x*_i. Telling the truth weakly dominates all other actions.
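The weak-dominance claim can also be stress-tested by simulation. A hedged sketch (my own illustration, with arbitrary illustrative numbers): draw many random profiles of the other players' policies, and check that no deviation ever produces a median strictly closer to player i's favorite policy than truth-telling does:

```python
# Hedged simulation sketch of the median-policy mechanism: truthful
# naming of the favorite policy should never be beaten by a deviation.

import random

def median(xs):
    ys = sorted(xs)
    return ys[len(ys) // 2]          # n is odd, so this is the true median

random.seed(0)
favorite = 4.0                        # player i's favorite policy (illustrative)
deviation_ever_better = False
for _ in range(1000):
    others = [random.uniform(0, 10) for _ in range(4)]   # 4 others, n = 5
    deviation = random.uniform(0, 10)
    truthful = median(others + [favorite])
    deviated = median(others + [deviation])
    if abs(deviated - favorite) < abs(truthful - favorite):
        deviation_ever_better = True

print(deviation_ever_better)   # → False, matching the weak-dominance proof
```

A simulation cannot prove the claim, of course, but failing to find a profitable deviation over many random profiles is a useful sanity check on the proof above.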
2.10 Equilibrium in a single population: symmetric games

We focus here on cases where we want to model the interaction between members of a single homogeneous population of players. Players interact anonymously and symmetrically.
Definition 51.1 (Symmetric two-player game with ordinal preferences): A two-player strategic game with ordinal preferences is symmetric if the players' sets of actions are the same and the players' preferences are represented by payoff functions u_1 and u_2 for which u_1(a_1, a_2) = u_2(a_2, a_1) for every action pair (a_1, a_2).
Definition 52.1 (Symmetric Nash equilibrium): An action profile a* in a strategic game with ordinal preferences in which each player has the same set of actions is a symmetric Nash equilibrium if it is a Nash equilibrium and a*_i is the same for every player i.
2.10 Equilibrium in a single population: symmetric games

Exercise 52.2
Find all the Nash equilibria of the game in Figure 53.1 (a symmetric game in which each player chooses among the actions A, B, C). Which of the equilibria, if any, correspond to a steady state if the game models pairwise interactions between the members of a single population?
3 Nash Equilibrium: Illustrations
3.5 Auctions

3.5.1 Introduction
Auctions are used to allocate significant economic resources, from works of art to short-term government bonds to radio spectrum. They have existed since long ago (annual auctions of marriageable women in Babylonian villages) and remain up to date (eBay on the Internet).
Auctions take many forms:
Sequential or sealed-bid (simultaneous)
First- or second-price
Ascending (English) or descending (Dutch)
Single- or multi-unit
With or without a reservation price
With or without entry costs
…
Main questions:
What designs are likely to be the most effective at allocating resources?
What designs are likely to raise the most revenue?
3.5 Auctions

Main assumption: we discuss here auctions in which every buyer knows her own valuation and every other buyer's valuation of the item being sold. Buyers are perfectly informed. This assumption will be dropped in Chapter 9.
3.5 Auctions

3.5.2 Second-price sealed-bid auctions
In a common form of auction, people sequentially submit increasing bids for an object. When no one wishes to submit a higher bid than the current bid, the person making the current bid obtains the object at the price she bid.
Given that every person is certain of her valuation of the object before the bidding begins, no one can learn anything relevant to her actions during the bidding. Thus we can model the auction by assuming that each person decides, before bidding begins, the most she is willing to bid (her maximal bid).
During the bidding, eventually, only the person with the highest maximal bid and the one with the second-highest maximal bid will be left competing against each other. To win, the person with the highest maximal bid therefore needs to bid slightly more than the second-highest maximal bid.
3.5 Auctions

We can therefore model such an ascending auction as a strategic game in which each player chooses an amount of money (the maximal amount she is willing to bid), and the player who chooses the highest amount obtains the object and pays a price equal to the second-highest amount.
This game also models a situation in which the people simultaneously put bids in sealed envelopes, and the person who submits the highest bid wins and pays a price equal to the second-highest bid. In a perfect-information context, ascending auctions (or English auctions) and second-price sealed-bid auctions are modeled by the same strategic game.
3.5 Auctions

Notation:
v_i: the value player i attaches to the object
p: price paid for the object
v_i − p: winning player's payoff
n: number of players; number the players such that v_1 > v_2 > … > v_n > 0
b_i: sealed bid submitted by player i
Rules:
Each player submits a sealed bid b_i.
If b_i is the highest bid, player i wins the auction, gets the object, and pays the second-highest bid (say b_j); player i's payoff is v_i − b_j.
In case of a tie, it is the player with the smallest number (the highest valuation) who wins. In such a case, she pays her own bid (as there is a tie).
3.5 Auctions

Strategic game representation:
Players: the n bidders, where n ≥ 2
Actions: the set of actions of each player is the set of possible bids (nonnegative numbers)
Preferences: denote by b_i the bid of player i and by b+ the highest bid submitted by a player other than i. If either b_i > b+, or b_i = b+ and the number of every other player who bids b+ is greater than i, then player i's payoff is v_i − b+. Otherwise player i's payoff is 0.
3.5 Auctions

Nash equilibrium
The game has many Nash equilibria. One equilibrium is (b_1, b_2, …, b_n) = (v_1, v_2, …, v_n): each player's bid is equal to her valuation of the object. Because v_1 > v_2 > … > v_n, the outcome is that player 1 obtains the object and pays b_2; her payoff is v_1 − b_2. Every other player's payoff is zero:
if player 1 changes her bid to some other price at least equal to b_2, then the outcome does not change; if she changes her bid to a price less than b_2, then she loses and obtains a zero payoff;
if some other player lowers her bid or raises her bid to some price at most equal to b_1, then she remains a loser; if she raises her bid above b_1, then she wins but, in paying the price b_1, she makes a loss (because her valuation is less than b_1).
3.5 Auctions

Another equilibrium is (b_1, b_2, …, b_n) = (v_1, 0, …, 0): player 1 obtains the object and pays 0. Note that, if any other player raises her bid to at most v_1, the outcome does not change; if she raises her bid above v_1, then she wins but gets a negative payoff. A sad issue for the auctioneer…
Another equilibrium is (b_1, b_2, …, b_n) = (v_2, v_1, 0, …, 0): player 2 bids v_1 and obtains the object at price v_2, and every player's payoff is zero:
if player 1 raises her bid to v_1 or more, she wins the object but her payoff remains zero (she pays the price v_1, bid by player 2); if she changes her bid to v_2 or less, she loses, and her payoff remains zero;
if player 2 changes her bid to some other price greater than v_2, the outcome does not change.
In this equilibrium, player 2 bids more than her valuation. This might seem strange. This is due to the fact that, in a Nash equilibrium, a player does not consider the "risk" that another player will take an action different from her equilibrium action: each player simply chooses an action that is optimal, given the other players' actions. This however suggests that this equilibrium is less plausible as an outcome of the auction than the equilibrium in which each bidder bids her valuation.
3.5 Auctions

This is due to the fact that, in a second-price sealed-bid auction (with perfect information), a player's bid equal to her valuation weakly dominates all her other bids. That is, for any bid b_i ≠ v_i, player i's bid v_i is at least as good as b_i, no matter what the other players bid, and is better than b_i for some actions of the other players.
3.5 Auctions

The precise argument is given by Figure 85.1, which compares player i's payoff to the bid v_i (left panel) with her payoff to a bid b'_i < v_i (middle panel) and with her payoff to a bid b''_i > v_i (right panel), each as a function of the highest of the other players' bids (b+). We see that:
for all values of b+, player i's payoff to the bid v_i is at least as large as her payoff to any other bid;
for some values of b+, her payoff to v_i exceeds her payoff to any other bid (v_i is better than b'_i in the region b'_i < b+ < v_i, and better than b''_i in the region v_i < b+ < b''_i).
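The Figure 85.1 argument is easy to verify mechanically. A hedged sketch (my own illustration; ties are ignored for simplicity): compute the second-price payoff as a function of a player's bid and the highest competing bid, and check on a grid that bidding one's valuation weakly dominates every other bid:

```python
# Hedged sketch of the second-price dominance argument of Figure 85.1.

def payoff(v, b, b_plus):
    """Second-price payoff: win and pay b_plus iff your bid is highest."""
    return v - b_plus if b > b_plus else 0.0

v = 10.0
grid = [x / 2 for x in range(0, 41)]      # candidate bids / opposing bids in [0, 20]
dominated = all(
    payoff(v, v, bp) >= payoff(v, b, bp)
    for b in grid for bp in grid
)
print(dominated)   # → True: no bid ever beats bidding one's valuation
```

Lower bids forgo profitable wins when b'_i < b+ < v_i, and higher bids buy losing wins when v_i < b+ < b''_i, exactly as the three panels of the figure show.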
3.5 Auctions

Exercise 84.1
Find a Nash equilibrium of a second-price sealed-bid auction in which player n obtains the object.
Exercise 86.1 (Auctioning the right to choose)
An action affects each of two people. The right to choose the action is sold in a second-price auction: the two people simultaneously submit bids, and the one who submits the higher bid chooses her favorite action and pays (to a third party) the amount bid by the other person, who pays nothing. Assume that if the bids are the same, person 1 is the winner. In the game that models this situation, the payoff of person i (for i = 1, 2) when the action is a and person i pays m is u_i(a) − m. Find for each player a bid that weakly dominates all the player's other bids (and thus find a Nash equilibrium in which each player's equilibrium action weakly dominates all her other actions).
3.5 Auctions

3.5.3 First-price sealed-bid auctions
Difference with a second-price auction: the winner pays the price she bids.
Strategic game representation:
Players: the n bidders, where n ≥ 2
Actions: the set of actions of each player is the set of possible bids (nonnegative numbers)
Preferences: denote by b_i the bid of player i and by b+ the highest bid submitted by a player other than i. If either (a) b_i > b+, or (b) b_i = b+ and the number of every other player who bids b+ is greater than i, then player i's payoff is v_i − b_i. Otherwise, player i's payoff is 0.
3.5 Auctions

Note that this game models a sealed-bid auction in which the highest bid wins, but also a dynamic auction in which the auctioneer begins by announcing a high price, which she gradually lowers until someone indicates her willingness to buy the object (a Dutch auction). (This equivalence is even, in some sense, stronger than the one between an ascending auction and a second-price sealed-bid auction: it does not depend on private values.)
Nash equilibrium
One Nash equilibrium is (b_1, b_2, …, b_n) = (v_2, v_2, …, v_n), in which player 1's bid is player 2's valuation and every other player's bid is her own valuation. The outcome is that player 1 obtains the object at price v_2.
3.5 Auctions

Exercise 86.2
Show that (b_1, b_2, …, b_n) = (v_2, v_2, …, v_n) is a Nash equilibrium of a first-price sealed-bid auction.
A first-price sealed-bid auction has many other equilibria, but in all equilibria the winner is the player who values the object most highly (player 1), by the following argument: in any action profile (b_1, b_2, …, b_n) in which some player i ≠ 1 wins, we have b_i > b_1.
If b_i > v_2, then i's payoff is negative, so that she can do better by reducing her bid to 0.
If b_i ≤ v_2, then player 1 can increase her payoff from 0 to v_1 − b_i by bidding b_i, in which case she wins.
3.5 Auctions
Exercise 87.1 (First-price sealed-bid auction)
Show that in a Nash equilibrium of a first-price sealed-bid auction the two highest bids are the same, one of these bids is submitted by player 1, and the highest bid is at least v_2 and at most v_1. Show also that any action profile satisfying these conditions is a Nash equilibrium.
3.5 Auctions
As in the second-price sealed-bid auction, the potential "riskiness" to player i of a bid b_i > v_i is reflected in the fact that it is weakly dominated by the bid v_i, as shown by the following argument:
if the other players' bids are such that player i loses when she bids b_i, then the outcome is the same whether she bids b_i or v_i;
if the other players' bids are such that player i wins when she bids b_i, then her payoff is negative when she bids b_i and zero when she bids v_i (regardless of whether this bid wins).
However, unlike in a second-price auction, in a first-price auction a bid b_i < v_i of player i is not weakly dominated by the bid v_i (it is in fact not weakly dominated by any bid):
it is not weakly dominated by a bid b'_i < b_i, because if the other players' highest bid is between b'_i and b_i, then b'_i loses whereas b_i wins and yields player i a positive payoff;
it is not weakly dominated by a bid b'_i > b_i, because if the other players' highest bid is less than b_i, then both b_i and b'_i win, and b_i yields a lower price.
3.5 Auctions
Note also that, though the bid v_i weakly dominates higher bids, this bid is itself weakly dominated by a lower bid! The argument is the following: if player i bids v_i, her payoff is 0 regardless of the other players' bids, whereas, if she bids less than v_i, her payoff is either 0 (if she loses) or positive (if she wins).
In a first-price sealed-bid auction (with perfect information), a player's bid of at least her valuation is weakly dominated, and a bid of less than her valuation is not weakly dominated.
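This contrast with the second-price case can be checked directly. A hedged sketch (my own illustration, again ignoring ties): under the first-price payoff rule, bidding one's valuation always yields 0, so any fixed lower bid is at least as good for every opposing bid and strictly better for some:

```python
# Hedged sketch of the first-price argument: a bid below the valuation
# weakly dominates bidding the valuation itself.

def payoff(v, b, b_plus):
    """First-price payoff: win and pay your own bid iff it is highest."""
    return v - b if b > b_plus else 0.0

v = 10.0
b_lower = 8.0                                 # an arbitrary bid below v
grid = [x / 2 for x in range(0, 41)]          # possible highest opposing bids
at_least_as_good = all(payoff(v, b_lower, bp) >= payoff(v, v, bp) for bp in grid)
sometimes_better = any(payoff(v, b_lower, bp) > payoff(v, v, bp) for bp in grid)
print(at_least_as_good and sometimes_better)  # → True: 8 weakly dominates 10
```

Note that no single lower bid dominates all others (the slide's argument about b'_i), so there is no dominant bid here, only the one-sided dominance of v_i by lower bids.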
3.5 Auctions
Note finally that this property of the equilibria depends on the assumption that a bid may be any number. In the variant of the game in which bids and valuations are restricted to be multiples of some discrete monetary unit ε, an action profile (v_2 − ε, v_2 − ε, b_3, …, b_n), for any b_j ≤ v_j − ε for j = 3, …, n, is a Nash equilibrium in which no player's bid is weakly dominated; further, every equilibrium in which no player's bid is weakly dominated takes this form. If ε is small, this is very close to (v_2, v_2, b_3, …, b_n): this equilibrium is therefore (on a somewhat ad hoc basis) considered the distinguished equilibrium of a first-price sealed-bid auction.
One conclusion of this analysis is that, while both second-price and first-price auctions have many Nash equilibria, their distinguished equilibria yield the same outcome: in every distinguished equilibrium of each game, the object is sold to player 1 at the price v_2. This notion of revenue equivalence is a cornerstone of auction theory and will be analyzed in depth later.
3.5 Auctions
3.5.4 Variants
Uncertain valuations: we have assumed that each bidder is certain of both her own valuation and every other bidder's valuation, which is highly unrealistic. We will study the case of imperfect information in Chap. 9 (in the framework of Bayesian games).
Interdependent/common valuations: in some auctions, the main difference between bidders is not that they value the object differently but that they have different information about its value (e.g., oil-tract auctions). As this also involves informational considerations, we will again study this in Chap. 9.
All-pay auctions: in some auctions, every bidder pays, not only the winner (e.g., competition of lobby groups for government attention).
3.5 Auctions
Multi-unit auctions: in some auctions, many units of an object are available (e.g., US Treasury bill auctions) and each bidder may value more than one unit positively. Each bidder therefore chooses a bid profile (b_1, b_2, …, b_k) if there are k units for sale. Different auction mechanisms exist, characterized by the rule governing the price paid by the winner:
Discriminatory auction: the price paid for each unit is the winning bid for that unit.
Uniform-price auction: the price paid for each unit is the same, equal to the highest rejected bid among all the bids for all units.
Vickrey auction (named after the Nobel prize winner): a bidder who wins k objects pays the sum of the k highest rejected bids submitted by the other bidders.
4. Mixed Strategy Equilibrium
4.1 Introduction

4.1.1 Stochastic steady states
Nash equilibrium in a strategic game: an action profile in which every player's action is optimal given every other player's action (see Def. 23.1). This corresponds to a steady state of the game:
every player's behavior is the same whenever she plays the game;
no player wishes to change her behavior, knowing (from experience) the other players' behavior.
In such a framework, the outcome of every play of the game is the same Nash equilibrium. More general notions of steady state exist.
Suppose now that players' choices are allowed to vary:
  different members of a given population may choose different actions, each player choosing the same action whenever she plays the game
  or each individual may, on each occasion she plays the game, choose her action probabilistically according to the same, unchanging, distribution
These situations are equivalent: in the first case, a fraction p of the population representing player i chooses the action a; in the second case, each member of the population representing player i chooses the action a with probability p. These notions of (stochastic) steady state are modeled as mixed strategy Nash equilibria.
4.1.2 Example: Matching Pennies

              Player 2
             Head       Tail
  Head      (1,-1)     (-1,1)
  Tail      (-1,1)     (1,-1)

(Player 1 chooses the row; the entries are the players' monetary outcomes)

The game has no Nash equilibrium: no pair of actions is compatible with a steady state.
The game has however a stochastic steady state in which each player chooses each of her actions with probability ½:
  Suppose that player 2 chooses each of her actions with probability ½
  If player 1 chooses Head with probability p and Tail with probability (1-p), then:
    each outcome (Head,Head) and (Head,Tail) occurs with probability p × ½
    each outcome (Tail,Head) and (Tail,Tail) occurs with probability (1-p) × ½
  Thus the probability that the outcome is either (Head,Head) or (Tail,Tail) (in which case player 1 wins $1) is ½p + ½(1-p) = ½. The other two outcomes, (Head,Tail) and (Tail,Head) (which correspond to a loss of $1), also have probability ½
Moreover (under a reasonable assumption on the players' preferences), the game has no other steady state:
  Assumption: each player wants the probability of her gaining $1 to be as large as possible (maximization of expected gain)
  Denote by q the probability with which player 2 chooses Head (she chooses Tail with probability (1-q))
  If player 1 chooses Head with probability p, she gains $1 with probability pq + (1-p)(1-q) (outcomes (Head,Head) or (Tail,Tail)) and she loses $1 with probability (1-p)q + p(1-q)
  If q = ½, the probability distribution over outcomes is independent of p: every value of p is optimal (in particular ½). The same analysis holds for player 2. We conclude that the game has a stochastic steady state in which each player chooses each action with probability ½
Note that:
  Player 1 wins $1 with probability pq + (1-p)(1-q) = 1 - q + p(2q - 1)
  Player 1 loses $1 with probability (1-p)q + p(1-q) = q + p(1 - 2q)
If q < ½, the first probability (winning $1) is decreasing in p and the second probability (losing $1) is increasing in p. Thus, if player 2 chooses Head with probability less than ½, the best response of player 1 is to choose Tail with certainty: player 1 chooses p = 0. A similar argument shows that if player 2 chooses Head with probability greater than ½, the best response of player 1 is to choose Head with certainty. We have already shown that if one player chooses a given action with certainty, there is no steady state.
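This algebra is easy to check numerically; a minimal sketch (not part of the original slides):

```python
# A quick numerical check (a sketch, not from the slides): player 1 gains
# $1 when the pennies match, loses $1 otherwise.

def win_prob(p, q):
    """P(player 1 gains $1) = pq + (1-p)(1-q) = 1 - q + p(2q - 1)."""
    return p * q + (1 - p) * (1 - q)

def best_response_p(q):
    """Player 1's optimal probabilities of Head against q = P(2 plays Head)."""
    if q < 0.5:
        return [0.0]               # Tail with certainty
    if q > 0.5:
        return [1.0]               # Head with certainty
    return "any p in [0, 1]"       # every mixed strategy is optimal

# the two algebraic forms of the winning probability agree
assert abs(win_prob(0.3, 0.7) - (1 - 0.7 + 0.3 * (2 * 0.7 - 1))) < 1e-12
# at q = 1/2 the winning probability is 1/2 whatever p is
assert all(abs(win_prob(i / 10, 0.5) - 0.5) < 1e-12 for i in range(11))
print(best_response_p(0.4))   # [0.0]: choose Tail with certainty
```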
4.1.3 Generalizing the analysis: expected payoffs

The Matching Pennies case is particularly simple because it has only two outcomes for each player, allowing us to deduce players' preferences regarding lotteries (probability distributions) over outcomes from their preferences regarding deterministic outcomes: if a player prefers a to b and if p > q, she prefers a lottery in which a occurs with probability p (and b with probability (1-p)) to a lottery in which a occurs with probability q (and b with probability (1-q)). To deal with more general cases (eg, more than two outcomes), we need to add to the model a description of each player's preferences regarding lotteries (probability distributions) over outcomes.
The standard approach is to restrict attention to preferences regarding lotteries (probability distributions) over outcomes that may be represented by the expected value of a payoff function over deterministic outcomes: for every player i there is a payoff function ui with the property that player i prefers one probability distribution over outcomes to another if and only if, according to ui, the expected value of the first probability distribution exceeds the expected value of the second.
  eg: three outcomes a, b, c and two probability distributions P = (pa, pb, pc) and Q = (qa, qb, qc); for each player i, P is preferred to Q if and only if
    pa·ui(a) + pb·ui(b) + pc·ui(c) > qa·ui(a) + qb·ui(b) + qc·ui(c)
Preferences that can be represented by the expected value of a payoff function over deterministic outcomes are called vNM (von Neumann-Morgenstern) preferences. A payoff function whose expected value represents such preferences is called a Bernoulli payoff function.
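The comparison can be sketched in a few lines; the payoff numbers and lottery probabilities below are made up for illustration:

```python
# A minimal sketch of the expected-payoff comparison; u, P and Q are
# invented illustrative values, not taken from the lecture.

u = {"a": 3.0, "b": 2.0, "c": 0.0}          # hypothetical Bernoulli payoffs

def expected_payoff(lottery, u):
    """lottery: dict outcome -> probability, summing to 1."""
    assert abs(sum(lottery.values()) - 1) < 1e-12
    return sum(prob * u[outcome] for outcome, prob in lottery.items())

P = {"a": 0.5, "b": 0.3, "c": 0.2}
Q = {"a": 0.4, "b": 0.4, "c": 0.2}
# P is preferred to Q iff the expected payoff under P is larger
print(expected_payoff(P, u) > expected_payoff(Q, u))   # True
```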
The restrictions on preferences regarding probability distributions over outcomes required for them to be represented by the expected value of a payoff function are NOT innocuous (see the violation example on page 104). They are however commonly accepted in game theory. Note that these restrictions do not restrict players' attitudes to risk:
  eg: suppose that a, b and c are three outcomes and that a person prefers a to b to c. If the person is very averse to risky outcomes, she prefers to obtain b for sure rather than to face a probability distribution in which a occurs with probability p and c with probability (1-p), even if p is relatively large. Such preferences can be represented by the expected value of a payoff function u for which u(a) is close to u(b), both being much larger than u(c) (a concave payoff function; see Figure 103.1)
Note that if the outcomes are amounts of money and the preferences are represented by the expected value of the amount of money, the player is risk neutral. In both other cases, the preferences can be represented by the expected value of a payoff function:
  concave in case of risk aversion
  convex in case of risk preference
Two classic utility functions: CARA & CRRA. In reality:
  the fact that people buy insurance (the expected payoff is inferior to the insurance fee) shows that economic agents are risk averse
  the fact that people buy lottery tickets shows that, in some circumstances, they can be risk preferring (small investment, extremely high payoff)
Note finally that, given preferences, many different payoff functions can be used to represent them. It is the ordering that matters.
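A quick illustration of how curvature encodes risk attitude; the monetary amounts and the square-root utility are illustrative choices, not from the slides:

```python
import math

# A sketch (amounts and the sqrt utility are illustrative choices) of how
# concavity of the payoff function encodes risk aversion.

def expected_utility(lottery, u):
    """lottery: list of (monetary outcome, probability) pairs."""
    return sum(p * u(x) for x, p in lottery)

sure_50 = [(50, 1.0)]
risky   = [(100, 0.5), (0, 0.5)]       # same expected money, 50

risk_averse  = math.sqrt               # concave payoff function
risk_neutral = lambda x: float(x)      # linear: only the mean matters

# the risk-averse player strictly prefers the sure amount
print(expected_utility(sure_50, risk_averse) >
      expected_utility(risky, risk_averse))          # True: sqrt(50) > 5
# the risk-neutral player is indifferent
print(expected_utility(sure_50, risk_neutral) ==
      expected_utility(risky, risk_neutral))         # True: 50 == 50
```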
4.2 Strategic games in which players may randomize

Definition 106.1 (Strategic game with vNM preferences)
A strategic game with vNM preferences consists of:
  a set of players
  for each player, a set of actions
  for each player, preferences regarding probability distributions over action profiles that may be represented by the expected value of a (Bernoulli) payoff function over action profiles
Representation: a two-player strategic game with vNM preferences in which each player has finitely many actions may be represented in a table like in Chapter 2. However, the interpretation of the numbers is different:
  in Chapter 2, the numbers are values of payoff functions that represent the players' preferences over deterministic outcomes
  here, the numbers are values of (Bernoulli) payoff functions whose expected values represent the players' preferences over probability distributions
The change is subtle but important (Figure 107.1):

         Q      F                    Q      F
  Q     2,2    0,3            Q     3,3    0,4
  F     3,0    1,1            F     4,0    1,1

The two games represent the same game with ordinal preferences (the Prisoner's Dilemma). However, they represent different strategic games with vNM preferences:
  left game: player 1's payoff to (Q,Q) is the same as her expected payoff to the probability distribution that yields (F,Q) with probability ½ and (F,F) with probability ½ (2 = ½·3 + ½·1)
  right game: her payoff to (Q,Q) is higher than her expected payoff to this probability distribution (3 > ½·4 + ½·1)
4.3 Mixed strategy Nash equilibrium
4.3.1 Mixed strategies

We now allow each player to choose a probability distribution over her set of actions, rather than restricting her to choose a single deterministic action.

Definition 107.1 (Mixed strategy)
A mixed strategy of a player in a strategic game is a probability distribution over the player's actions.

Notations:
  α: profile of mixed strategies
  αi(ai): probability assigned by player i's mixed strategy αi to her action ai
eg: in Matching Pennies, the strategy of player 1 that assigns probability ½ to each action is the strategy α1(Head) = ½ and α1(Tail) = ½.
Shortcut: mixed strategies are often written as a list of probabilities (one for each action), in the order the actions are given in the table; eg, in Table 107.1, (½, ½) assigns probability ½ to Q and probability ½ to F.
Note that a mixed strategy may assign probability 1 to a single action; such a strategy is referred to as a pure strategy.
4.3.2 Equilibrium

The notion of mixed strategy Nash equilibrium extends the concept of Nash equilibrium to the probabilistic setup.

Definition 108.1 (Mixed strategy Nash equilibrium of a strategic game with vNM preferences)
The mixed strategy profile α* in a strategic game with vNM preferences is a mixed strategy Nash equilibrium if, for each player i and every mixed strategy αi of player i, the expected payoff to player i of α* is at least as large as the expected payoff to player i of (αi, α*-i):

  Ui(α*) ≥ Ui(αi, α*-i) for every mixed strategy αi of player i

where Ui(α) is player i's expected payoff to the mixed strategy profile α, according to a payoff function whose expected value represents player i's preferences over probability distributions.
4.3.3 Best response functions

Notation: Bi is player i's best response function.
  For a strategic game with ordinal preferences, Bi(a-i) is the set of player i's best actions when the list of the other players' actions is a-i
  For a strategic game with vNM preferences, Bi(α-i) is the set of player i's best mixed strategies when the list of the other players' mixed strategies is α-i
The mixed strategy profile α* is a mixed strategy Nash equilibrium if and only if α*i is in Bi(α*-i) for every player i.
eg: in Matching Pennies, the set of best responses to a mixed strategy of the other player is either a single pure strategy or the set of all mixed strategies.
Two players – two actions games

Player 1 has actions T and B; player 2 has actions L and R. ui (i = 1,2) denotes a Bernoulli payoff function for player i (a payoff function over action pairs whose expected value represents player i's preferences regarding probability distributions over action pairs).
Player 1's mixed strategy α1 assigns probability α1(T) to her action T (denoted p) and probability α1(B) to her action B (denoted 1-p), with α1(T) + α1(B) = 1. Similarly, denote by q the probability that player 2's mixed strategy assigns to L and by 1-q the probability it assigns to R.
We take the players' choices to be independent: when the players choose the mixed strategies α1 and α2, the probability of any action pair (a1,a2) is the product of the corresponding probabilities assigned by the mixed strategies.
So the probabilities of the four outcomes are (Figure 109.1):

            L (q)       R (1-q)
  T (p)     pq          p(1-q)
  B (1-p)   (1-p)q      (1-p)(1-q)

From this probability distribution, we can compute player 1's expected payoff to the mixed strategy pair (α1, α2):

  pq·u1(T,L) + p(1-q)·u1(T,R) + (1-p)q·u1(B,L) + (1-p)(1-q)·u1(B,R)

which can be written as:
  p[q·u1(T,L) + (1-q)·u1(T,R)] + (1-p)[q·u1(B,L) + (1-q)·u1(B,R)]

The first bracketed term is player 1's expected payoff when she uses the pure strategy that assigns probability 1 to T and player 2 uses the mixed strategy α2; the second bracketed term is her expected payoff when she uses the pure strategy that assigns probability 1 to B and player 2 uses the mixed strategy α2. This can be written more compactly as:

  p·E1(T, α2) + (1-p)·E1(B, α2)

Player 1's expected payoff to the mixed strategy pair (α1, α2) is thus a weighted average of her expected payoffs to T and B when player 2 uses the mixed strategy α2, with weights equal to the probabilities assigned to T and B by α1.
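This decomposition is easy to verify numerically; the Bernoulli payoffs below are arbitrary illustrative numbers, not taken from the lecture:

```python
# A numerical check of the weighted-average decomposition for a 2x2 game
# (the payoff function u1 is an arbitrary illustrative choice).

u1 = {("T", "L"): 3, ("T", "R"): 0, ("B", "L"): 1, ("B", "R"): 2}

def E1(a1, q):
    """Expected payoff to the pure strategy a1 when player 2 plays L with
    probability q and R with probability 1 - q."""
    return q * u1[(a1, "L")] + (1 - q) * u1[(a1, "R")]

def U1(p, q):
    """Expected payoff to the mixed strategy pair, summing over outcomes."""
    return (p * q * u1[("T", "L")] + p * (1 - q) * u1[("T", "R")]
            + (1 - p) * q * u1[("B", "L")] + (1 - p) * (1 - q) * u1[("B", "R")])

# the four-outcome sum equals the weighted average of E1(T, .) and E1(B, .)
for p, q in [(0.0, 0.5), (0.6, 0.25), (1.0, 0.9)]:
    assert abs(U1(p, q) - (p * E1("T", q) + (1 - p) * E1("B", q))) < 1e-12
```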
In particular, player 1's expected payoff p·E1(T, α2) + (1-p)·E1(B, α2) is a linear function of p: as p runs from 0 to 1, the expected payoff moves along a straight line from E1(B, α2) to E1(T, α2) (Figure 110.1).
A significant implication of this linearity of player 1's expected payoff is that there are only three possibilities for her best response to a given mixed strategy of player 2:
  player 1's unique best response is the pure strategy T (if E1(T,α2) > E1(B,α2)): see Figure 110.1 with an upward sloping line
  player 1's unique best response is the pure strategy B (if E1(T,α2) < E1(B,α2)): see Figure 110.1 with a downward sloping line
  all mixed strategies of player 1 yield the same expected payoff, hence all are best responses (if E1(T,α2) = E1(B,α2)): see Figure 110.1 with a horizontal line
In particular, a mixed strategy (p, 1-p) for which 0 < p < 1 is never a unique best response.
Example: Matching Pennies revisited

Represent each player's preferences by the expected value of a payoff function that assigns the payoff 1 to a gain of $1 and the payoff -1 to a loss of $1. The resulting strategic game with vNM preferences is (Figure 111.1):

              Player 2
             Head       Tail
  Head      1,-1       -1,1
  Tail      -1,1       1,-1

Denote by p the probability that player 1's mixed strategy assigns to Head and by q the probability that player 2's mixed strategy assigns to Head.
Player 1's expected payoff to the pure strategy Head, given player 2's mixed strategy, is: q·1 + (1-q)·(-1) = 2q - 1
Her expected payoff to Tail is: q·(-1) + (1-q)·1 = 1 - 2q

Thus:
  if q < ½, player 1's expected payoff to Tail exceeds her expected payoff to Head (and hence also exceeds her expected payoff to any mixed strategy that assigns positive probability to Head)
  similarly, if q > ½, her expected payoff to Head exceeds her expected payoff to Tail
  if q = ½, both Head and Tail (and all her mixed strategies) lead to the same payoff

We conclude that player 1's best responses to player 2's strategy are: her mixed strategy that assigns probability 0 to Head if q < ½; her mixed strategy that assigns probability 1 to Head if q > ½; and all her mixed strategies if q = ½.

(Figure 112.1: the two best response functions in the (p,q) square)
The best response function of player 2 is similar (see Figure 112.1). The set of mixed strategy Nash equilibria corresponds (as before) to the set of intersections of the best response functions in Figure 112.1: the unique intersection is p = q = ½. Recall that Matching Pennies has no Nash equilibrium if the players are not allowed to randomize!
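The intersection can also be located mechanically; a small grid-search sketch (not from the slides):

```python
# A sketch: intersect the two best response correspondences of Matching
# Pennies on a probability grid to find the mixed equilibrium.

GRID = [i / 100 for i in range(101)]

def br1(q):
    """Player 1 wants the pennies to match."""
    if q < 0.5: return {0.0}       # Tail with certainty
    if q > 0.5: return {1.0}       # Head with certainty
    return set(GRID)               # indifferent: everything is optimal

def br2(p):
    """Player 2 wants the pennies NOT to match."""
    if p < 0.5: return {1.0}
    if p > 0.5: return {0.0}
    return set(GRID)

equilibria = [(p, q) for p in GRID for q in GRID
              if p in br1(q) and q in br2(p)]
print(equilibria)   # [(0.5, 0.5)]
```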
Exercise 114.2
Find all the mixed strategy Nash equilibria of the strategic games in Figure 114.1.
(Figure 114.1: two two-player games in which player 1 chooses between T and B and player 2 between L and R; the payoff tables are given in the original figure.)
Exercise 114.3
Two people can perform a task if, and only if, they both exert effort. They are both better off if they both exert effort and perform the task than if neither exerts effort (and nothing is accomplished); the worst outcome for each person is that she exerts effort and the other person does not (in which case again nothing is accomplished). Specifically, the players' preferences are represented by the expected value of the payoff functions in Figure 115.1, where c is a positive number less than 1 that can be interpreted as the cost of exerting effort. Find all the mixed strategy Nash equilibria of this game. How do the equilibria change as c increases? Explain the reasons for the changes.

(Figure 115.1)
                No effort     Effort
  No effort     0,0           0,-c
  Effort        -c,0          1-c,1-c
4.3.4 A useful characterization of mixed strategy Nash equilibrium

The method used up to now to find mixed strategy Nash equilibria involves constructing the players' best response functions. In complicated games this method may be intractable. There is a characterization of mixed strategy Nash equilibria that is an invaluable tool in the study of general games.
The key is the following observation: a player's expected payoff to the mixed strategy profile α is a weighted average of her expected payoffs to all pure strategy profiles of the type (ai, α-i), where the weight attached to each (ai, α-i) is the probability αi(ai) assigned to ai by player i's mixed strategy αi (see Section 4.3.3).

Symbolically:

  Ui(α) = Σ_{ai in Ai} αi(ai)·Ei(ai, α-i)

where Ai is player i's set of actions (pure strategies) and Ei(ai, α-i) is her expected payoff when she uses the pure strategy that assigns probability 1 to ai and every other player j uses her mixed strategy αj.

This leads to the following analysis:
  Let α* be a mixed strategy Nash equilibrium and denote by E*i player i's expected payoff in the equilibrium
  Because α* is an equilibrium, player i's expected payoff, given α*-i, to each of her strategies (including all her pure strategies) is at most E*i
  But E*i is a weighted average of player i's expected payoffs to the pure strategies to which α*i assigns positive probability
  Thus player i's expected payoffs to these pure strategies are all equal to E*i (if any were smaller, the weighted average would be smaller)

We conclude that:
  the expected payoff to each action to which α*i assigns positive probability is E*i
  the expected payoff to every other action is at most E*i

Proposition 116.2
A mixed strategy profile α* in a strategic game with vNM preferences in which each player has finitely many actions is a mixed strategy Nash equilibrium if and only if, for each player i:
  the expected payoff, given α*-i, to every action to which α*i assigns positive probability is the same
  the expected payoff, given α*-i, to every action to which α*i assigns zero probability is at most the expected payoff to any action to which α*i assigns positive probability
Each player's expected payoff in an equilibrium is her expected payoff to any of her actions that she uses with positive probability.
This proposition allows us to check whether a mixed strategy profile is an equilibrium.

Example 117.1 (Figure 117.1; the dots indicate irrelevant payoffs, each action is labeled with the probability the candidate strategy pair assigns to it, and only player 1's payoffs against C and R are needed below)

              L (0)     C (1/3)    R (2/3)
  T (3/4)     .,.       3,.        1,.
  M (0)       .,.       0,.        2,.
  B (1/4)     .,.       5,.        0,.

The indicated pair of strategies ((3/4, 0, 1/4) for player 1 and (0, 1/3, 2/3) for player 2) is a mixed strategy Nash equilibrium. To verify this claim it suffices, by Proposition 116.2, to study each player's expected payoffs to her three pure strategies. For player 1 these payoffs are:

  T: (1/3)·3 + (2/3)·1 = 5/3
  M: (1/3)·0 + (2/3)·2 = 4/3
  B: (1/3)·5 + (2/3)·0 = 5/3

Player 1's mixed strategy assigns positive probability to T and B and probability zero to M. The expected payoffs to T and B are equal (5/3) and the expected payoff to M (4/3) is no greater, so the two conditions of Proposition 116.2 are satisfied for player 1. The same verification is easily done for player 2. Note that for player 2 the action L (which she uses with probability 0) has the same expected payoff as her other two actions; this equality is consistent with Proposition 116.2, which requires only "no greater".
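The check for player 1 can be written out directly from Proposition 116.2; a sketch using only the payoffs visible on the slide:

```python
from fractions import Fraction as F

# The Proposition 116.2 check for player 1 in the game of Figure 117.1
# (a sketch; the dotted entries of the figure are irrelevant here).

u1 = {("T", "C"): 3, ("T", "R"): 1,
      ("M", "C"): 0, ("M", "R"): 2,
      ("B", "C"): 5, ("B", "R"): 0}

alpha1 = {"T": F(3, 4), "M": F(0), "B": F(1, 4)}   # player 1's strategy
alpha2 = {"C": F(1, 3), "R": F(2, 3)}              # player 2's (L gets 0)

def E1(a1):
    """Expected payoff to the pure action a1 against alpha2."""
    return sum(q * u1[(a1, a2)] for a2, q in alpha2.items())

support = [a for a, p in alpha1.items() if p > 0]
star = E1(support[0])
# every action in the support earns the same expected payoff (T and B: 5/3)
assert all(E1(a) == star for a in support)
# every unused action earns at most that (M: 4/3)
assert all(E1(a) <= star for a, p in alpha1.items() if p == 0)
print(star)   # 5/3
```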
Exercise 117.2 (Choosing numbers)
Players 1 and 2 each choose a positive integer up to K. If the players choose the same number, then player 2 pays $1 to player 1; otherwise no payment is made. Each player's preferences are represented by her expected monetary payoff.
  Show that the game has a mixed strategy Nash equilibrium in which each player chooses each positive integer up to K with probability 1/K
  Show that the game has no other mixed strategy Nash equilibria (deduce from the fact that player 1 assigns positive probability to some action k that player 2 must do so, then look at the implied restriction on player 1's equilibrium strategy)
Note finally that an implication of Proposition 116.2 is that a nondegenerate mixed strategy equilibrium (a mixed strategy equilibrium that is not also a pure strategy equilibrium) is never a strict Nash equilibrium: every player whose mixed strategy assigns positive probability to more than one action is indifferent between her equilibrium mixed strategy and every action to which this mixed strategy assigns positive probability.
The theory of mixed strategy Nash equilibrium does not state that players consciously choose their strategies at random with the equilibrium probabilities. Rather, the conditions for equilibrium are designed to ensure that it is consistent with a steady state. The question of how a steady state may come about remains to be studied at this stage.
4.3.5 Existence of equilibrium in finite games

Proposition 119.1 (Existence of mixed strategy Nash equilibrium in finite games)
Every strategic game with vNM preferences in which each player has finitely many actions has a mixed strategy Nash equilibrium.

This proposition does not help to find equilibria, but it is a useful fact. Note also that:
  the finiteness of the number of actions is a sufficient condition for the existence of an equilibrium, not a necessary one
  a player's strategy in a mixed strategy Nash equilibrium may assign probability 1 to a single action
4.4 Dominated actions

Definition 120.1 (Strict domination)
In a strategic game with vNM preferences, player i's mixed strategy αi strictly dominates her action a'i if

  Ui(αi, a-i) > ui(a'i, a-i) for every list a-i of the other players' actions

where ui is a Bernoulli payoff function and Ui(αi, a-i) is player i's expected payoff under ui when she uses the mixed strategy αi and the actions chosen by the other players are given by a-i.
An action not strictly dominated by any pure strategy may be strictly dominated by a mixed strategy (Figure 120.1; only player 1's payoffs are shown):

         L    R
  T      1    1
  M      4    0
  B      0    3

The action T of player 1 is not strictly (or weakly) dominated by M or B, but it is strictly dominated by the mixed strategy that assigns probability ½ to M and probability ½ to B.
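A direct check of this claim, using player 1's Figure 120.1 payoffs:

```python
# Check that the mixed strategy (0, 1/2, 1/2) over (T, M, B) strictly
# dominates T in Figure 120.1 (player 1's payoffs only).

u1 = {("T", "L"): 1, ("T", "R"): 1,
      ("M", "L"): 4, ("M", "R"): 0,
      ("B", "L"): 0, ("B", "R"): 3}

def U1(mix, a2):
    """Player 1's expected payoff to the mixed strategy `mix` against a2."""
    return sum(p * u1[(a1, a2)] for a1, p in mix.items())

mix = {"T": 0.0, "M": 0.5, "B": 0.5}
# strictly better than T against EVERY action of player 2: 2 > 1 and 1.5 > 1
print(all(U1(mix, a2) > u1[("T", a2)] for a2 in ("L", "R")))   # True
```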
Exercise 120.2 (Strictly dominated mixed strategy)
In Figure 120.1, the mixed strategy that assigns probability ½ to M and ½ to B is not the only mixed strategy that strictly dominates T. Find all the mixed strategies that do so.

Exercise 120.3 (Strict domination for mixed strategies)
Determine whether each of the following statements is true or false:
  a mixed strategy that assigns positive probability to a strictly dominated action is strictly dominated
  a mixed strategy that assigns positive probability only to actions that are not strictly dominated is not strictly dominated
A strictly dominated action is not a best response to any collection of mixed strategies of the other players:
  Suppose that player i's action a'i is strictly dominated by her mixed strategy αi
  Player i's expected payoff Ui(αi, α-i) when she uses the mixed strategy αi and the other players use the mixed strategies α-i is a weighted average of her payoffs Ui(αi, a-i) as a-i varies over all the collections of actions of the other players, with the weight on each a-i equal to the probability with which it occurs when the other players' mixed strategies are α-i
  Player i's expected payoff when she uses the action a'i and the other players use the mixed strategies α-i is a similar weighted average: the weights are the same, but the terms take the form ui(a'i, a-i) rather than Ui(αi, a-i)
  The fact that a'i is strictly dominated by αi means that Ui(αi, a-i) > ui(a'i, a-i) for every collection a-i of the other players' actions
  Hence player i's expected payoff when she uses the mixed strategy αi exceeds her expected payoff when she uses the action a'i
Consequently, a strictly dominated action is not used with positive probability in any mixed strategy Nash equilibrium.

Definition 121.1 (Weak domination)
In a strategic game with vNM preferences, player i's mixed strategy αi weakly dominates her action a'i if

  Ui(αi, a-i) ≥ ui(a'i, a-i) for every list a-i of the other players' actions, and
  Ui(αi, a-i) > ui(a'i, a-i) for some list a-i of the other players' actions

where ui is a Bernoulli payoff function and Ui(αi, a-i) is player i's expected payoff under ui when she uses the mixed strategy αi and the actions chosen by the other players are given by a-i.
Note that, since a weakly dominated action may be used in a Nash equilibrium, a weakly dominated action may be used with positive probability in a mixed strategy equilibrium. We therefore cannot eliminate weakly dominated actions from consideration when finding mixed strategy equilibria. However:

Proposition 122.1 (Existence of mixed strategy Nash equilibrium with no weakly dominated strategies in finite games)
Every strategic game with vNM preferences in which each player has finitely many actions has a mixed strategy Nash equilibrium in which no player's strategy is weakly dominated.
4.5 Pure equilibria when randomization is allowed

Equilibria when the players are not allowed to randomize remain equilibria when they are allowed to randomize:

Proposition 122.2 (Pure strategy equilibria survive when randomization is allowed)
Let a* be a Nash equilibrium of G and, for each player i, let α*i be the mixed strategy of player i that assigns probability one to the action a*i. Then α* is a mixed strategy Nash equilibrium of G'.
Any pure equilibria that exist when the players are allowed to randomize are equilibria when they are not allowed to randomize:

Proposition 123.1 (Pure strategy equilibria survive when randomization is prohibited)
Let α* be a mixed strategy Nash equilibrium of G' in which the mixed strategy of each player i assigns probability one to the single action a*i. Then a* is a Nash equilibrium of G.
To establish these two propositions, let N be a set of players and, for each player i, let Ai be a set of actions. Consider the following two games:
  G: the strategic game with ordinal preferences in which the set of players is N, the set of actions of each player i is Ai, and the preferences of each player i are represented by the payoff function ui
  G': the strategic game with vNM preferences in which the set of players is N, the set of actions of each player i is Ai, and the preferences of each player i are represented by the expected value of ui
Proposition 122.2: Let a* be a Nash equilibrium of G and, for each player i, let α*i be the mixed strategy that assigns probability 1 to a*i. Since a* is a Nash equilibrium of G, we know that in G' no player i has an action that yields her a payoff higher than does a*i when all the other players adhere to α*-i. Thus α* satisfies the two conditions in Proposition 116.2, so it is a mixed strategy equilibrium of G'.
Proposition 123.1: Let α* be a mixed strategy Nash equilibrium of G' in which every player's mixed strategy is pure. For each player i, denote by a*i the action to which α*i assigns probability one. In G', no mixed strategy of player i yields her a payoff higher than does α*i; in particular, no action yields her a payoff higher than does a*i. Thus a* is a Nash equilibrium of G.
4.7 Equilibrium in a single population

Definition 129.1 (Symmetric two-player strategic game with vNM preferences)
A two-player strategic game with vNM preferences is symmetric if the players' sets of actions are the same and the players' preferences are represented by the expected values of payoff functions u1 and u2 for which u1(a1,a2) = u2(a2,a1) for every action pair (a1,a2).

Definition 129.2 (Symmetric mixed strategy Nash equilibrium)
A profile α* of mixed strategies in a strategic game with vNM preferences in which each player has the same set of actions is a symmetric mixed strategy Nash equilibrium if it is a mixed strategy Nash equilibrium and α*i is the same for every player i.
Game of approaching pedestrians (Figure 129.1):

            Left     Right
  Left      1,1      0,0
  Right     0,0      1,1

This game has two deterministic steady states ((Left,Left) and (Right,Right)), corresponding to the two symmetric Nash equilibria in pure strategies. The game also has a symmetric mixed strategy Nash equilibrium, in which each player assigns probability ½ to Left and probability ½ to Right. This equilibrium corresponds to a steady state in which half of all encounters result in collisions!
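A sketch of the symmetric mixed equilibrium and its collision probability:

```python
# The pedestrian game: payoff 1 when both choose the same side, 0 on a
# collision. A sketch checking the symmetric mixed equilibrium at q = 1/2.

u = {("Left", "Left"): 1, ("Left", "Right"): 0,
     ("Right", "Left"): 0, ("Right", "Right"): 1}

def E(a, q):
    """Expected payoff to side a when the opponent goes Left with prob. q."""
    return q * u[(a, "Left")] + (1 - q) * u[(a, "Right")]

q = 0.5
# at q = 1/2 both sides yield the same expected payoff, so mixing is optimal
assert E("Left", q) == E("Right", q) == 0.5
# probability that the two pedestrians pick different sides and collide
collision = q * (1 - q) + (1 - q) * q
print(collision)   # 0.5: half of all encounters end in a collision
```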
Exercise 130.3 (Bargaining)
Pairs of players from a single population bargain over the division of a pie of size 10. The members of a pair simultaneously make demands; the possible demands are the nonnegative even integers up to 10. If the demands sum to 10, then each player receives her demand. If the demands sum to less than 10, then each player receives her demand plus half of the pie that remains after both demands have been satisfied. If the demands sum to more than 10, then neither player receives any payoff.
Find all the symmetric mixed strategy Nash equilibria in which each player assigns positive probability to at most two demands. (Many situations in which each player assigns positive probability to two actions, say a' and a'', can be ruled out as equilibria because, when one player uses such a strategy, some action yields the other player a payoff higher than does one or both of the actions a' and a''.)
4.9 The formation of players' beliefs

In a Nash equilibrium, each player chooses a strategy that maximizes her expected payoff, knowing the other players' strategies. The idea underlying the previous analysis is that the players have learned each other's strategies from their experience playing the game. The idealized situation is the following: for each player in the game, there is a large population of individuals who may take the role of that player in any play of the game, and one participant is drawn randomly from each population. In this situation, a new individual who joins a population can learn the other players' strategies by observing their actions over many plays of the game.
4.A (Short) Introduction 146 . So. But. what might happen if new players simultaneously join more than one population in sufficient numbers. can we expect a steady state to be reached if no one has experience? 9/12/2011 Game Theory .9 The formation of players’ beliefs As long as the number of new players is low. existing players’ encounters with neophytes (who may use nonequilibrium strategies) will be sufficiently rare that their beliefs about the steady state will not be disturbed. such that the probability that they encounter is not anymore small? In particular. a new player’s problem is simply to learn the other players’ actions.
4.9.1 Eliminating dominated actions

In some games, players may reasonably be expected to choose their Nash equilibrium actions from an introspective analysis of the game:
- at the extreme (e.g. the Prisoner's Dilemma), each player's best action is independent of the other players' actions
- in a less extreme case, some player's best action may depend on the other players' actions, but the actions the other players will choose may be clear because each of these players has an action that strictly dominates all others.
e.g.: in the game in Figure 135.1, player 2's action R strictly dominates L. So, no matter what player 2 thinks player 1 will play, she should choose R. Consequently player 1, who can deduce by this argument that player 2 will choose R, may reason that she should choose B, even without any experience of the game.

         L      R
    T    1,0    0,1
    B    0,0    1,1

(Figure 135.1)
4.9.2 Learning

Another approach to the question of how a steady state might be reached assumes that each player learns:
- she starts with an unexplained "prior" belief about the other players' actions
- she changes these beliefs in response to information she receives.

Best response dynamics: a simple theory assumes that
- in the first period, each player chooses a best response to an arbitrary deterministic belief about the other players' actions
- in each subsequent period, each player believes that the other players will choose the actions they chose in the previous period, and chooses a best response to those actions.

An action profile that remains the same from period to period (a steady state) is then a pure Nash equilibrium of the game. The two questions are then: does the game converge to a steady state? how long does it take to converge?
e.g.: the BoS game (Example 18.2) does not converge for some initial beliefs: if player 1 initially believes that player 2 will choose Stravinsky and player 2 initially believes that player 1 will choose Bach, then the players' choices will subsequently alternate indefinitely between the action pairs (Bach, Stravinsky) and (Stravinsky, Bach).

Fictitious play: under best response dynamics, a player's belief does not admit the possibility that her opponents' actions are realizations of mixed strategies. Under the fictitious play theory, players consider the actions in all the previous periods when forming a belief about their opponents' strategies, and treat these actions as realizations of mixed strategies:
- each player begins with an arbitrary probabilistic belief about the other player's action
- then, in any period, she adopts the belief that her opponent is using a mixed strategy in which the probability of each action is proportional to the frequency with which her opponent chose that action in the previous periods.
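The alternating cycle is easy to reproduce. The sketch below assumes the standard BoS payoffs ((Bach,Bach) → (2,1), (Stravinsky,Stravinsky) → (1,2), 0 otherwise) and the initial beliefs of the example:

```python
# Best response dynamics in BoS; B = Bach, S = Stravinsky.
U1 = {("B", "B"): 2, ("B", "S"): 0, ("S", "B"): 0, ("S", "S"): 1}
U2 = {("B", "B"): 1, ("B", "S"): 0, ("S", "B"): 0, ("S", "S"): 2}

def best_response_1(belief_about_2):
    return max("BS", key=lambda a: U1[(a, belief_about_2)])

def best_response_2(belief_about_1):
    return max("BS", key=lambda a: U2[(belief_about_1, a)])

# Initial beliefs from the example: player 1 thinks player 2 will play S,
# player 2 thinks player 1 will play B.
belief1, belief2 = "S", "B"
history = []
for _ in range(6):
    a1 = best_response_1(belief1)
    a2 = best_response_2(belief2)
    history.append((a1, a2))
    belief1, belief2 = a2, a1  # next period: believe the opponent repeats

print(history)  # alternates between ('S', 'B') and ('B', 'S') forever
```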
Note that:
- in any two-player game in which each player has two actions (e.g. Matching Pennies), the fictitious play process converges to a mixed strategy Nash equilibrium from any initial beliefs
- for other games, there are initial beliefs for which the process does not converge.
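A simulation illustrates the first point for Matching Pennies (a sketch; the initial belief counts and the deterministic tie-breaking rule are my choices, and the empirical frequencies approach the equilibrium probability ½ only slowly):

```python
# Fictitious play in Matching Pennies: player 1 wants to match the
# opponent's coin (H or T), player 2 wants to mismatch. Each player
# best-responds to the empirical frequency of her opponent's past actions.
counts1 = {"H": 1, "T": 0}  # player 1's arbitrary initial belief about 2
counts2 = {"H": 0, "T": 1}  # player 2's arbitrary initial belief about 1

def br1(counts):   # match the action the opponent has played more often
    return "H" if counts["H"] >= counts["T"] else "T"

def br2(counts):   # mismatch the action the opponent has played more often
    return "T" if counts["H"] >= counts["T"] else "H"

plays1 = {"H": 0, "T": 0}
for _ in range(20000):
    a1, a2 = br1(counts1), br2(counts2)
    plays1[a1] += 1
    counts1[a2] += 1   # player 1 updates her belief about player 2
    counts2[a1] += 1   # and vice versa

freq_H = plays1["H"] / 20000
print(freq_H)  # close to 1/2, the mixed equilibrium probability
```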
4.10 Extension: finding all mixed strategy Nash equilibria

The following systematic method can be used to find all mixed strategy Nash equilibria of a game:
- for each player i, choose a subset Si of her set Ai of actions
- check whether there exists a mixed strategy profile α such that (i) the set of actions to which each strategy αi assigns positive probability is Si, and (ii) α satisfies the conditions of Proposition 116.2
- repeat the analysis for every collection of subsets of the players' sets of actions.

The shortcoming of the method is that for games in which each player has several actions, or in which there are several players, the number of possibilities to examine is huge. In a two-player game in which each player has three actions, each player's set of actions has seven nonempty subsets (three consisting of a single action, three consisting of two actions, and one consisting of the entire set of actions), so that there are 49 (7×7) possible collections of subsets to check.
Example 138.1: finding all mixed strategy equilibria of a two-player game in which each player has two actions. Denote the actions and payoffs as in Figure 139.1:

         L           R
    T    u11,v11     u12,v12
    B    u21,v21     u22,v22

Each player's set of actions has three nonempty subsets: two each consisting of a single action, and one consisting of both actions. Thus, there are nine (3×3) pairs of subsets of the players' action sets.
For each pair (S1,S2), we check if there is a pair (α1,α2) of mixed strategies such that each strategy αi assigns positive probability only to actions in Si and the conditions in Proposition 116.2 are satisfied:
- checking the four pairs of subsets in which each player's subset consists of a single action amounts to checking whether any of the four pairs of actions is a pure strategy equilibrium
- consider the pair of subsets {T,B} for player 1 and {L} for player 2: the first condition in Proposition 116.2 is automatically satisfied for player 1 (she has no actions to which she assigns probability 0), and the second condition is automatically satisfied for player 2 (she assigns positive probability to only one action).
Thus, for there to be a mixed strategy equilibrium in which player 1's probability of using T is p, we need:
- u11 = u21 : player 1's payoffs to her two actions must be equal
- p v11 + (1−p) v21 ≥ p v12 + (1−p) v22 : L must be at least as good as R for player 2, given player 1's mixed strategy.

A similar argument applies to the three other pairs of subsets in which one player's subset consists of both her actions and the other player's subset consists of a single action.

To check finally whether there is a mixed strategy equilibrium in which the subsets are {T,B} for player 1 and {L,R} for player 2, we need to find a pair of mixed strategies that satisfies the first condition of Proposition 116.2 (the second condition being automatically satisfied, no action having probability 0). That is, denoting by q player 2's probability of using L, we need to find p and q such that:
- q u11 + (1−q) u12 = q u21 + (1−q) u22 : player 1 is indifferent between T and B
- p v11 + (1−p) v21 = p v12 + (1−p) v22 : player 2 is indifferent between L and R.
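For a 2×2 game the whole procedure fits in a few lines. The sketch below (function name mine) checks the four singleton supports directly and solves the two indifference equations for the full support; for brevity it ignores the non-generic mixed/singleton cases, which arise only when two payoffs coincide:

```python
from fractions import Fraction as F

# u[i][j], v[i][j]: payoffs of players 1 and 2 when player 1 uses her
# i-th action (0 = T) and player 2 her j-th action (0 = L), as in
# Figure 139.1. Mixed strategies are summarized by p = Pr(T), q = Pr(L).
def equilibria_2x2(u, v):
    found = []
    # Singleton supports: the four candidate pure equilibria.
    for i in (0, 1):
        for j in (0, 1):
            if u[i][j] >= u[1 - i][j] and v[i][j] >= v[i][1 - j]:
                found.append((F(1 - i), F(1 - j)))
    # Full supports {T,B} x {L,R}: solve the two indifference equations.
    du = u[0][0] - u[0][1] - u[1][0] + u[1][1]
    dv = v[0][0] - v[0][1] - v[1][0] + v[1][1]
    if du != 0 and dv != 0:
        q = F(u[1][1] - u[0][1], du)  # makes player 1 indifferent
        p = F(v[1][1] - v[1][0], dv)  # makes player 2 indifferent
        if 0 < p < 1 and 0 < q < 1:
            found.append((p, q))
    return found

# BoS with T = L = Bach: payoffs (2,1), (0,0) / (0,0), (1,2).
print(equilibria_2x2([[2, 0], [0, 1]], [[1, 0], [0, 2]]))
# the two pure equilibria plus the mixed one (p, q) = (2/3, 1/3)
```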
Example 139.1: find all mixed strategy equilibria of the two-player game in Figure 139.1 (player 1's actions T, B; player 2's actions L, M, R).

Exercise 141.2: find all mixed strategy equilibria of the variant of BoS in Figure 141.2, in which player 2 has a third action X.
Exercise 142.1: find the mixed strategy Nash equilibria of the three-player game in Figure 142.1 (each player has two actions, A and B).
4.11 Extension: games in which each player has a continuum of actions

Consider now the case of a continuum of actions: the principles involved in finding mixed strategy equilibria of such games are the same as for games with finitely many actions, though the techniques are different. In a game in which a player has a continuum of actions, a mixed strategy of a player is determined by the probabilities it assigns to sets of actions. Proposition 116.2 becomes:

Proposition 142.2 (Characterization of mixed strategy Nash equilibrium): a mixed strategy profile α* in a strategic game with vNM preferences is a mixed strategy Nash equilibrium if and only if, for each player i:
- for no action ai does the action profile (ai, α*−i) yield player i an expected payoff greater than her expected payoff to α*
- α*i assigns probability zero to the set of actions ai for which the action profile (ai, α*−i) yields player i an expected payoff less than her expected payoff to α*.
Games with a continuum of actions can be very complex to analyze: in general, the form of a mixed strategy Nash equilibrium in such a game can be very complex. Some such games, however, have equilibria of a particularly simple form, in which each player's equilibrium mixed strategy assigns probability zero except in an interval.

A significant class of games consists of games in which each player's set of actions is a one-dimensional interval of numbers:
- consider such a game with two players
- let player i's set of actions be the interval from a−i to a+i, for i=1,2
- identify each player's mixed strategy with a cumulative probability distribution on this interval: the mixed strategy of each player i is a nondecreasing function Fi for which 0 ≤ Fi(ai) ≤ 1; the number Fi(ai) is the probability that player i's action is at most ai.
The mixed strategy pair (F1,F2) satisfies the following conditions for i=1,2:
- there are numbers xi and yi such that player i's mixed strategy Fi assigns probability zero except in the interval from xi to yi: Fi(z) = 0 for z < xi, and Fi(z) = 1 for z ≥ yi
- player i's expected payoff when her action is ai and the other player uses her mixed strategy Fj takes the form
    = ci for xi ≤ ai ≤ yi
    ≤ ci for ai < xi and ai > yi
  where ci is a constant.
Example 143.1 (All-pay auction): two people submit sealed bids for an object worth $K to each of them. Each person's bid may be any nonnegative number up to $K. The winner is the person whose bid is higher; in the event of a tie, each person receives half of the object (which she values at $K/2). Each person pays her bid, regardless of whether she wins, and has preferences represented by the expected amount of money she receives.
This situation may be modeled by the following strategic game:
- players: the two bidders
- actions: each player's set of actions is the set of possible bids (nonnegative numbers up to K)
- payoff functions: each player i's preferences are represented by the expected value of the payoff function given by

    ui(a1,a2) = −ai        if ai < aj
              = K/2 − ai   if ai = aj
              = K − ai     if ai > aj

e.g.: a competition between two firms to develop a new product by some deadline, where the firm that spends the most develops a better product, which captures the entire market.
An all-pay auction has no pure strategy Nash equilibrium, by the following argument:
- no pair of actions (x,x) with x < K is a Nash equilibrium, because either player can increase her payoff by slightly increasing her bid
- (K,K) is not a Nash equilibrium, because either player can increase her payoff from −K/2 to 0 by reducing her bid to 0
- no pair of actions (a1,a2) with a1 ≠ a2 is a Nash equilibrium, because the player whose bid is higher can increase her payoff by reducing her bid (and the player whose bid is lower can, if her bid is positive, increase her payoff by reducing her bid to 0).
Consider the possibility that the game has a mixed strategy Nash equilibrium. Denote by Fi player i's mixed strategy (a cumulative distribution function over the interval of possible bids). We look for an equilibrium in which neither mixed strategy assigns positive probability to any single bid (there are infinitely many possible bids, and for a continuous random variable Prob(x=c) = 0). In that case, Fi(ai) is both the probability that player i bids at most ai and the probability that she bids less than ai. We restrict our attention to strategy pairs (F1,F2) for which, for i=1,2, there are numbers xi and yi such that Fi assigns positive probability only to the interval from xi to yi.
To investigate the possibility of such an equilibrium, consider player 1's expected payoff when she uses the action a1, given player 2's mixed strategy F2:
- if a1 < x2, then a1 is less than player 2's bid with probability one, so that player 1's payoff is −a1
- if a1 > y2, then a1 exceeds player 2's bid with probability one, so that player 1's payoff is K − a1
- if x2 ≤ a1 ≤ y2, then: with probability F2(a1), player 2's bid is less than a1, in which case player 1's payoff is K − a1; with probability 1 − F2(a1), player 2's bid exceeds a1, in which case player 1's payoff is −a1 (the probability that player 2's bid is exactly equal to a1 is zero). Player 1's expected payoff is thus

    (K − a1) F2(a1) + (−a1)(1 − F2(a1)) = K F2(a1) − a1.
We need to find values of x1 and y1 and a strategy F2 such that player 1's expected payoff satisfies the condition of Proposition 142.2: it is a constant c1 on the interval from x1 to y1, and less than this constant outside this interval (see Figure 144.1).
The conditions are therefore:
- K F2(a1) − a1 = c1 for x1 ≤ a1 ≤ y1, for some constant c1
- F2(x2) = 0 and F2(y2) = 1
- F2 must be nondecreasing (it is a CDF)
and analogous conditions must hold for x2, y2, and F1.

Solution: if x1 = x2 = 0, y1 = y2 = K, and F1(z) = F2(z) = z/K for all z with 0 ≤ z ≤ K, these conditions are fulfilled. Thus, the game has a mixed strategy Nash equilibrium in which each player randomizes "uniformly" over all her actions. Each player's expected payoff is then constant, and equal to 0, for all her actions. Proving that it is the only mixed strategy Nash equilibrium is more complex.
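The zero-payoff property is easy to confirm by simulation (a sketch; K = 10 is an arbitrary choice, and ties, which have probability zero against a uniform opponent, are ignored):

```python
import random

# Against an opponent bidding uniformly on [0, K] (i.e. F(z) = z/K),
# every bid a in [0, K] yields expected payoff K*F(a) - a = 0.
K = 10.0

def expected_payoff(a, n=200000, seed=1):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        b = rng.uniform(0.0, K)            # opponent's equilibrium bid
        total += (K - a) if a > b else -a  # win K and pay a, or just pay a
    return total / n

for a in (0.0, 2.5, 5.0, 10.0):
    # analytic value: (K - a)*(a/K) + (-a)*(1 - a/K) = 0 for every a
    assert abs(expected_payoff(a)) < 0.1
```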
4.12 Appendix: representing preferences by expected payoffs

4.12.1 Expected payoffs

Suppose that a decision-maker has preferences over a set of deterministic outcomes and that each of her actions results in a lottery (probability distribution) over these outcomes. To determine the action she chooses, we need her preferences over lotteries; we cannot derive these preferences from her preferences over deterministic outcomes. So, assume we are given preferences over lotteries. Under fairly weak assumptions, we can represent these preferences by a payoff function: we can find a function, say U, over lotteries (p1, …, pK), where each outcome ak occurs with probability pk, such that U(p1, …, pK) > U(p'1, …, p'K) if and only if the decision-maker prefers (p1, …, pK) to (p'1, …, p'K).
In most cases, however, we need more structure to go farther in the analysis. The standard approach, developed by von Neumann and Morgenstern (1944), is to impose an additional assumption (known as the "independence assumption") that allows us to conclude that the decision-maker's preferences can be represented by the expected payoff function. Under this assumption, there is a payoff function u over deterministic outcomes such that the decision-maker's preference relation over lotteries is represented by the function

    U(p1, …, pK) = Σ_{k=1}^{K} pk u(ak)

where ak is the kth outcome of the lottery: the decision-maker prefers the lottery (p1, …, pK) to (p'1, …, p'K) if and only if Σ_{k=1}^{K} pk u(ak) > Σ_{k=1}^{K} p'k u(ak).
This sort of payoff function (for which the decision-maker's preferences are represented by the expected value of the payoffs) is known as a Bernoulli payoff function.

e.g.: suppose that there are three possible deterministic outcomes: the decision-maker may receive $0, $1 or $5 (and naturally prefers $5 to $1 to $0). Suppose that she prefers the lottery (1/2, 0, 1/2) to the lottery (0, 3/4, 1/4), where the probabilities are given for the outcomes $0, $1 and $5 in that order. This preference is consistent with preferences represented by the expected value of a payoff function u for which u(0)=0, u(1)=1 and u(5)=4:

    ½·0 + ½·4 = 2 > ¾·1 + ¼·4 = 7/4.
The great advantage of a Bernoulli payoff function is that preferences are completely specified by the payoff function: once we know u(ak) for each possible outcome ak, we know the decision-maker's preferences among all lotteries.

A Bernoulli payoff function must however not be confused with a payoff function that represents the decision-maker's preferences over deterministic outcomes: if u is a Bernoulli payoff function, it certainly is a payoff function that represents the decision-maker's preferences over deterministic outcomes; however, the converse is not true.

e.g.: suppose a decision-maker prefers $5 to $1 to $0, and prefers the lottery (1/2, 0, 1/2) to (0, 3/4, 1/4). Define u by u(0)=0, u(1)=3 and u(5)=4. u is compatible with her preferences over deterministic outcomes, but it is not compatible with her preferences over lotteries:

    ½·0 + ½·4 = 2 < ¾·3 + ¼·4 = 13/4.
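Both examples can be checked mechanically, which makes the distinction concrete (a sketch using exact rational arithmetic):

```python
from fractions import Fraction as F

def expected_u(lottery, u):
    """Expected payoff of a lottery, given as {outcome: probability}."""
    return sum(p * u[outcome] for outcome, p in lottery.items())

# The two lotteries over the outcomes $0, $1, $5.
L1 = {0: F(1, 2), 1: F(0), 5: F(1, 2)}
L2 = {0: F(0), 1: F(3, 4), 5: F(1, 4)}

u_good = {0: 0, 1: 1, 5: 4}  # consistent with preferring L1 to L2
u_bad = {0: 0, 1: 3, 5: 4}   # orders the outcomes correctly, not the lotteries

assert expected_u(L1, u_good) == 2 and expected_u(L2, u_good) == F(7, 4)
assert expected_u(L1, u_good) > expected_u(L2, u_good)   # consistent
assert expected_u(L1, u_bad) == 2 and expected_u(L2, u_bad) == F(13, 4)
assert expected_u(L1, u_bad) < expected_u(L2, u_bad)     # inconsistent
```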
4.12.2 Equivalent Bernoulli payoff functions

Lemma 148.1 (Equivalence of Bernoulli payoff functions): suppose that there are at least three possible outcomes. The expected values of the Bernoulli payoff functions u and v represent the same preferences over lotteries if and only if there exist numbers k and m (with m > 0) such that u(x) = k + m v(x) for all x.

Exercise 149.2 (Normalized Bernoulli payoff functions): suppose that a decision-maker's preferences can be represented by the expected value of the Bernoulli payoff function u. Find a Bernoulli payoff function whose expected value represents the decision-maker's preferences and assigns a payoff of 1 to the best outcome and a payoff of 0 to the worst outcome.
4.12.3 Equivalent strategic games with vNM preferences

(Figure 150.1: three payoff tables for a BoS-type game, each with actions B and S for both players)

The three games of Figure 150.1 represent the same strategic game with deterministic preferences; only the left and middle tables represent the same strategic game with vNM preferences. The reason is that the payoff functions in the middle table are linear functions of the payoff functions in the left table, whereas the payoff functions in the right table are not.
Denote by ui, vi, wi the Bernoulli payoff functions of the three games. Then v1(a) = 2u1(a) and v2(a) = 3 + u2(a). But w1 is not a linear function of u1: there are no constants μ and θ such that w1(a) = μ + θu1(a) for every a (the system of equations these conditions impose on μ and θ has no solution).
Exercise 150.1 (Games equivalent to the Prisoner's Dilemma): which of the right tables in Figure 150.2 represents the same strategic game with vNM preferences as the Prisoner's Dilemma specified in the left panel?

(Figure 150.2: the Prisoner's Dilemma and two candidate payoff tables, each with actions C and D)
9 Bayesian Games
Framework

An assumption underlying the notion of Nash equilibrium is that each player holds the correct belief about the other players' actions. To do so, the player must know the other players' preferences. However, in many situations players are not perfectly informed about their opponents' characteristics (e.g.: firms may not know each other's cost functions). In this chapter, we generalize the notion of strategic game to allow the analysis of situations in which each player is imperfectly informed about an aspect of her environment that is relevant to her choice of action.
9.1 Motivational examples

We start with one example to illustrate the main ideas of Bayesian games.

Example 273.1 (Variant of BoS with imperfect information): consider a variant of BoS in which player 1 is unsure whether player 2 prefers to go out with her or prefers to avoid her, whereas player 2 (as before) knows player 1's preferences.

(Figure 274.1)

    2 wishes to meet 1 (prob ½)      2 wishes to avoid 1 (prob ½)
         B      S                         B      S
    B    2,1    0,0                  B    2,0    0,2
    S    0,0    1,2                  S    0,1    1,0
Specifically, suppose player 1 thinks that with probability ½ player 2 wants to go out with her, and with probability ½ player 2 wants to avoid her (see Figure 274.1). We can represent the situation as one with two states: in state 1, the Bernoulli payoffs are given in the left table; in state 2, they are given in the right table. Player 1 assigns probability ½ to each state. Because probabilities are involved, an analysis of the situation requires us to know the players' preferences over lotteries. The notion of Nash equilibrium must be generalized to this new setting: from player 1's point of view, player 2 has two possible types (one whose preferences are given by the left table of Figure 274.1, and the other by the right table).
Player 1 does not know player 2's type. So, to choose an action rationally, she needs to form a belief about the action of each player 2 type. Given these beliefs and her belief about the likelihood of each type, she can calculate her expected payoff to each of her actions. For example, if player 1 thinks that type 1 of player 2 will choose B and type 2 of player 2 will choose S, then she thinks that B will yield her a payoff of 2 with probability ½ and of 0 with probability ½, so her expected payoff to B is ½·2 + ½·0 = 1. Similar calculations lead to Figure 275.1 (columns give the actions of type 1 and type 2 of player 2):

         (B,B)    (B,S)    (S,B)    (S,S)
    B      2        1        1        0
    S      0        ½        ½        1
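These expected payoffs can be generated mechanically (a sketch, using the standard BoS payoffs for player 1, which depend only on the action pair, not on player 2's type):

```python
from fractions import Fraction as F
from itertools import product

# Player 1's Bernoulli payoffs in the BoS variant.
u1 = {("B", "B"): 2, ("B", "S"): 0, ("S", "B"): 0, ("S", "S"): 1}
half = F(1, 2)

def payoff1(a1, a2_type1, a2_type2):
    # The two states (player 2's types) each have probability 1/2.
    return half * u1[(a1, a2_type1)] + half * u1[(a1, a2_type2)]

# Reproduce the table of Figure 275.1: one column per pair of type actions.
table = {(t1, t2): (payoff1("B", t1, t2), payoff1("S", t1, t2))
         for t1, t2 in product("BS", repeat=2)}
print(table)
# e.g. against (B, S): B yields 1 and S yields 1/2, as in the text
```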
For this situation, we define a pure strategy Nash equilibrium to be a triple of actions (one for player 1 and one for each type of player 2) with the property that:
- the action of player 1 is optimal, given the actions of the two types of player 2 (and player 1's belief about the state)
- the action of each type of player 2 is optimal, given the action of player 1.

Note that in a Nash equilibrium:
- player 1's action is a best response in Figure 275.1 to the pair of actions of the two types of player 2
- the action of the type of player 2 who wishes to meet player 1 is a best response in the left table of Figure 274.1 to the action of player 1
- the action of the type of player 2 who wishes to avoid player 1 is a best response in the right table of Figure 274.1 to the action of player 1.
Why should player 2, who knows her own type, have to plan what to do in both cases? She does not! However, we, as analysts, need to consider what she would do in both cases. The reason is that to determine her best action, player 1, who does not know player 2's type, needs to form a belief about the action each type of player 2 would take, and we wish to impose the equilibrium condition that these beliefs are correct.

(B, (B,S)) is a Nash equilibrium, where B is the action of player 1 and (B,S) are the actions of type 1 and type 2 of player 2.
Proof: given that the actions of the two types of player 2 are (B,S), player 1's action B is optimal (see Figure 275.1); given that player 1 chooses B, B is optimal for type 1 of player 2 and S is optimal for type 2 of player 2 (see Figure 274.1).

We interpret the equilibrium as follows: type 1 of player 2 chooses B and type 2 of player 2 chooses S, inferring that player 1 will choose B; player 1, who does not know whether player 2 is of type 1 or of type 2, believes that if player 2 is of type 1 she will choose B, and if player 2 is of type 2 she will choose S.
We can interpret the actions of the two types of player 2 as reflecting player 2's intentions in the hypothetical situation before she knows the state. This corresponds to the following situation: initially, player 2 does not know the state; she will be informed of it by a signal that depends on the state; before receiving this signal, she plans an action for each possible state. The same story is valid for player 1, except that player 1 receives an uninformative signal (the same signal in each state). Note that in such a setup, a Nash equilibrium is a list of actions, one for each type of each player, such that the action of each type of each player is a best response to the actions of all the types of the other player, given the player's beliefs about the state after she observes her signal.
Exercise 276.1 (Equilibria of a variant of BoS with imperfect information):
(i) Show that there is no pure strategy Nash equilibrium of this game in which player 1 chooses S.
(ii) Find the mixed strategy Nash equilibria of the game (first check whether there is an equilibrium in which both types of player 2 use pure strategies, then look for equilibria in which one or both of these types randomize).
9.2 General definitions

9.2.1 Bayesian games

A strategic game with imperfect information is called a Bayesian game. A key component in the specification of the imperfect information is the set of states: each state is a complete description of one collection of the players' relevant characteristics (preferences, information, …). For every collection of characteristics that some player believes to be possible, there must be a state. At the start of the game a state is realized. The players do not observe this state; rather, each player receives a signal that may give her some information about the state. We denote the signal player i receives in state ω by τi(ω). The function τi(.) is called player i's signal function. Note that this is a deterministic function: in each state, a given signal is received.
A state that generates a given signal ti is said to be consistent with the signal ti. The size of the set of states consistent with each player i's signal reflects the quality of player i's information. The two extreme cases are:
- if τi(ω) is different for each value of ω, then player i knows, given her signal, the state that has occurred: she is perfectly informed about all the players' relevant characteristics
- if τi(ω) is the same for all states, then player i's signal conveys no information: she is perfectly uninformed.

We refer to player i in the event that she receives the signal ti as type ti of player i. Each type of each player holds a belief about the likelihood of the states consistent with her signal (e.g.: if ti = τi(ω1) = τi(ω2), then type ti of player i assigns probabilities to ω1 and ω2).
Each player (may) care about the actions chosen by the other players and about the state. We therefore need to specify the players' preferences regarding probability distributions over pairs (a, ω), consisting of an action profile a and a state ω. We assume that each player's preferences over such probability distributions are represented by the expected value of a Bernoulli payoff function, and we therefore specify player i's preferences by giving a Bernoulli payoff function ui over pairs (a, ω).
Definition 279.1 (Bayesian game): a Bayesian game consists of
- a set of players
- a set of states
and for each player
- a set of actions
- a set of signals that she may receive, and a signal function that associates a signal with each state
- for each signal that she may receive, a belief about the states consistent with the signal (a probability distribution over the set of states with which the signal is associated)
- a Bernoulli payoff function over pairs (a, ω), where a is an action profile and ω is a state, the expected value of which represents the player's preferences.
Application to Example 273.1:
- players: the pair of people
- states: {meet, avoid}
- actions: for each player, {B, S}
- signals: player 1 may receive a single signal, say z; her signal function τ1 satisfies τ1(meet) = τ1(avoid) = z. Player 2 receives one of two signals, m and v; her signal function τ2 satisfies τ2(meet) = m and τ2(avoid) = v
- beliefs: player 1 assigns probability ½ to each state after receiving the signal z; player 2 assigns probability 1 to the state meet after receiving the signal m, and probability 1 to the state avoid after receiving the signal v
- payoffs: the payoffs ui(a, meet) of each player i for all possible action pairs are given in the left panel of Figure 274.1, and the payoffs ui(a, avoid) in the right panel.

Note that the set of actions of each player is independent of the state: each player may care about the state, but the set of actions available to her is the same in every state.
9.2.2 Nash equilibrium

In a Bayesian game, each player chooses a collection of actions, one for each signal she may receive: each type of each player chooses an action. In a Nash equilibrium of such a game, the action chosen by each type of each player is optimal, given the actions chosen by every type of every other player. We define a Nash equilibrium of a Bayesian game to be a Nash equilibrium of a strategic game in which each player is one of the types of one of the players in the Bayesian game.
Notations:
- Pr(ω | ti): the probability assigned to state ω by the belief of type ti of player i
- τj(ω): player j's signal in state ω
- a(j, tj): the action taken by type tj of player j; player j's action in state ω is thus a(j, τj(ω)), which we denote âj(ω).

With these notations, the expected payoff of type ti of player i when she chooses action ai is

\sum_{ω ∈ Ω} Pr(ω | ti) · ui((ai, â−i(ω)), ω)

where Ω is the set of states and (ai, â−i(ω)) is the action profile in which player i chooses the action ai and every other player j chooses âj(ω).
Definition 281.1 (Nash equilibrium of a Bayesian game): A Nash equilibrium of a Bayesian game is a Nash equilibrium of the strategic game (with vNM preferences) defined as follows:
- players: the set of all pairs (i, ti) in which i is a player in the Bayesian game and ti is one of the signals that i may receive
- actions: the set of actions of each player (i, ti) is the set of actions of player i in the Bayesian game
- preferences: the Bernoulli payoff function of each player (i, ti) is given by \sum_{ω ∈ Ω} Pr(ω | ti) · ui((ai, â−i(ω)), ω).
Exercise 282.3 (Adverse selection): Firm A (the "acquirer") is considering taking over firm T (the "target"). It does not know firm T's value: it believes that this value, when firm T is controlled by its own management, is at least $0 and at most $100, and assigns equal probability to each of the 101 dollar values in this range (uniform distribution). Firm T will be worth 50% more under firm A's management than it is under its own management. Suppose that firm A bids y to take over firm T, and that firm T is worth x (under its own management). Then if T accepts A's offer, A's payoff is 3/2·x − y and T's payoff is y; if T rejects A's offer, A's payoff is 0 and T's payoff is x. Model this situation as a Bayesian game in which firm A chooses how much to offer and firm T decides the lowest offer to accept. Find the Nash equilibrium (equilibria?). Explain why the logic behind the equilibrium is called adverse selection.
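The core calculation of this exercise can be sketched numerically (an illustration, not the formal solution; it assumes that each type x of firm T accepts exactly the offers y ≥ x, which is T's weakly dominant behavior):

```python
# Assumption (firm T's weakly dominant strategy): type x accepts iff y >= x.
def expected_payoff(y):
    # x is uniform on {0, 1, ..., 100}; A pays y and gets 3/2 * x when x <= y.
    return sum(1.5 * x - y for x in range(y + 1)) / 101

payoffs = {y: expected_payoff(y) for y in range(101)}
best_bid = max(payoffs, key=payoffs.get)
print(best_bid, payoffs[100])  # 0 -25.0: every positive bid loses on average
```

The mechanism: among the targets that accept a bid y, the average value is only y/2, so firm A's conditional value 3/2 · (y/2) = 3y/4 falls short of its payment y. Winning selects the below-average targets, which is the adverse selection.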
9.3 Example concerning information
9.3.1 More information may hurt

A decision-maker in a single-person decision problem cannot be worse off if she has more information: if she wishes, she can ignore the information. In a game, this is not true.

Figure 283.1 (each state has probability ½, and 0 < ε < ½):

State ω1:
        L       M       R
  T   1, 2ε   1, 0    1, 3ε
  B   2, 2    0, 0    0, 3

State ω2:
        L       M       R
  T   1, 2ε   1, 3ε   1, 0
  B   2, 2    0, 3    0, 0
Consider the two-player game in Figure 283.1: there are two states, and neither player knows the state. Each player assigns probability ½ to each state, and ε satisfies 0 < ε < ½. Player 2's unique best response to each action of player 1 is L:
- if player 1 chooses T, L yields 2ε while M and R each yield 3/2·ε
- if player 1 chooses B, L yields 2 while M and R each yield 3/2.
Player 1's unique best response to L is B. Thus, (B, L) is the unique Nash equilibrium (the game has no additional mixed strategy Nash equilibrium). Each player gets a payoff of 2.
Consider now the case in which player 2 is informed of the state: player 2's signal function satisfies τ2(ω1) ≠ τ2(ω2). In this game, (T, (R, M)) is the unique Nash equilibrium (each type of player 2 has a strictly dominant action, to which T is player 1's unique best response). Player 2's payoff is 3ε (in each state): she is therefore worse off when she knows the state! To understand this result: R is good only in state ω1 and M is good only in state ω2, while L is a compromise. There is no steady state in which player 2 chooses L to induce player 1 to choose B: knowing the state leads player 2 to choose either R or M, which induces player 1 to choose T.
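This example can be checked computationally (an illustration using one admissible value, ε = 1/4; the payoff tables are those of Figure 283.1):

```python
from fractions import Fraction

eps = Fraction(1, 4)  # any value with 0 < eps < 1/2 works

# Payoff pairs (u1, u2): rows T/B, columns L/M/R, one table per state.
w1 = {('T', 'L'): (1, 2 * eps), ('T', 'M'): (1, 0), ('T', 'R'): (1, 3 * eps),
      ('B', 'L'): (2, 2),       ('B', 'M'): (0, 0), ('B', 'R'): (0, 3)}
w2 = {('T', 'L'): (1, 2 * eps), ('T', 'M'): (1, 3 * eps), ('T', 'R'): (1, 0),
      ('B', 'L'): (2, 2),       ('B', 'M'): (0, 3),       ('B', 'R'): (0, 0)}

def u2_uninformed(a1, a2):
    # Player 2's expected payoff when she does not observe the state.
    return Fraction(1, 2) * (w1[(a1, a2)][1] + w2[(a1, a2)][1])

# Uninformed: L is player 2's unique best response to each action of
# player 1, and the equilibrium (B, L) gives her a payoff of 2.
for a1 in 'TB':
    assert max('LMR', key=lambda a2: u2_uninformed(a1, a2)) == 'L'

# Informed: player 2 plays her dominant action per state (R in w1, M in w2),
# player 1's best response is then T, and player 2's payoff falls to 3*eps.
payoff_informed = Fraction(1, 2) * (w1[('T', 'R')][1] + w2[('T', 'M')][1])
print(payoff_informed < 2)  # True: knowing the state hurts player 2
```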
9.6 Illustration: auctions
9.6.1 Introduction

In Section 3.5, every bidder knows every other bidder's valuation of the object for sale. This is highly unrealistic! Assume that a single object is for sale, and that each bidder independently receives some information (a signal) about the value of the object to her:
- if each bidder's signal is simply her valuation, we say that the bidders' valuations are private (e.g. a work of art whose beauty interests the buyers)
- if each bidder's valuation depends on other bidders' signals as well as her own, we say that the valuations are common (e.g. an oil tract containing unknown reserves, on which each bidder has conducted a test).

We will consider models in which bids for a single object are submitted simultaneously (bids are sealed) and the participant who submits the highest bid obtains the object.
We will consider both first-price (the winner pays the price she bids) and second-price (the winner pays the highest of the remaining bids) auctions. Note that the argument that the second-price rule corresponds to an open ascending auction (English auction) depends upon the bidders' valuations being private: in a common valuation setup, the open ascending auction reveals information to bidders that they do not have access to in a sealed-bid procedure.

9.6.2 Independent private values

Each bidder knows that all other bidders' valuations are at least v− (where v− ≥ 0) and at most v+. She believes that the probability that any given bidder's valuation is at most v is F(v), independent of all other bidders' valuations, where F is a continuous increasing function (a CDF).
The preferences of a bidder whose valuation is v are represented by a Bernoulli payoff function that assigns 0 to the outcome in which she does not win the object and v − p to the outcome in which she wins the object and pays the price p (quasilinear payoff function). This amounts to considering that the bidder is risk-neutral. We denote by P(b) the price paid by the winner of the auction when the profile of bids is b:
- for a first-price auction, P(b) is the winning bid (the largest bi)
- for a second-price auction, P(b) is the highest bid made by a bidder different from the winner.

We assume that the expected payoff of a bidder whose bid is tied for first place is (v − p)/m, where m is the number of tied winning bids.
The Bayesian game that models first- and second-price auctions with independent private valuations is therefore:
- players: the set of bidders 1, …, n
- states: the set of all profiles (v1, …, vn) of valuations, where v− ≤ vi ≤ v+ for all i
- actions: each player's set of actions is the set of possible bids (nonnegative numbers)
- signals: the set of signals that each player may observe is the set of possible valuations (the signal function is τi(v1, …, vn) = vi)
- beliefs: every type vi of player i assigns probability F(v1) · … · F(vi−1) · F(vi+1) · … · F(vn) to the event that the valuation of every other player j is at most vj.
- payoff functions:

ui(b, (v1, …, vn)) = (vi − P(b))/m if bj ≤ bi for all j ≠ i and bj = bi for m players
ui(b, (v1, …, vn)) = 0 if bj > bi for some j ≠ i

Nash equilibrium in a second-price sealed-bid auction: in a second-price sealed-bid auction with imperfect information about valuations (as in the perfect information setup), a player's bid equal to her valuation weakly dominates all her other bids. Consider some type vi of some player i and let bi be a bid not equal to vi:
- for all bids by all types of all the other players, the expected payoff of type vi of player i is at least as high when she bids vi as it is when she bids bi
- for some bids by the various types of the other players, her expected payoff is greater when she bids vi than it is when she bids bi.
We conclude that a second-price sealed-bid auction with imperfect information about valuations has a Nash equilibrium in which every type of every player bids her valuation.

Exercise 294.1 (Weak domination in a second-price sealed-bid auction): Show that for each type vi of each player i in a second-price sealed-bid auction with imperfect information about valuations, the bid vi weakly dominates all other bids.

Exercise 294.2 (Nash equilibria of a second-price sealed-bid auction): For every player i, find a Nash equilibrium of a second-price sealed-bid auction in which player i wins.
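The weak-domination claim can be checked by brute force on a small discrete sketch (two bidders, integer bids 0 to 10, own valuation v = 5; this is an illustration, not a proof):

```python
# Numeric sketch: in a two-bidder second-price sealed-bid auction, bidding
# one's valuation v weakly dominates any other bid b.
def payoff(v, own_bid, other_bid):
    if own_bid > other_bid:
        return v - other_bid          # win: pay the other (second-highest) bid
    if own_bid == other_bid:
        return (v - other_bid) / 2    # tie: win with probability 1/2
    return 0                          # lose: payoff 0

v = 5
weakly_better = strictly_better_somewhere = True
for b in [x for x in range(11) if x != v]:
    diffs = [payoff(v, v, other) - payoff(v, b, other) for other in range(11)]
    weakly_better &= all(d >= 0 for d in diffs)              # never worse
    strictly_better_somewhere &= any(d > 0 for d in diffs)   # sometimes better
print(weakly_better, strictly_better_somewhere)  # True True
```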
Nash equilibrium in a first-price sealed-bid auction: in the case of perfect information, the bid vi by type vi of player i weakly dominates any bid greater than vi, does not weakly dominate bids less than vi, and is itself weakly dominated by any such lower bid. So, the game under imperfect information may have a Nash equilibrium in which each bidder bids less than her valuation. Take the case of two bidders, each player's valuation being distributed uniformly between 0 and 1 (this assumption means that the fraction of valuations less than v is exactly v, so that F(v) = v for all v with 0 ≤ v ≤ 1). Denote by βi(v) the bid of type v of player i. In this case, the game has a (symmetric) Nash equilibrium in which the function βi is the same for both players, with βi(v) = ½·v for all v (each type of each player bids exactly half her valuation).
Proof: suppose that each type of bidder 2 bids in this way. Then, as far as player 1 is concerned, player 2's bids are uniformly distributed between 0 and ½; thus, if player 1 bids more than ½, she surely wins. If she bids b1 ≤ ½, the probability that she wins is the probability that player 2's valuation is less than 2b1, which is 2b1. Consequently, player 1's expected payoff as a function of her bid is:

2b1(v1 − b1) if 0 ≤ b1 ≤ ½
v1 − b1 if b1 > ½

(Figure 295.1 plots this expected payoff as a function of b1.)
This function is maximized at b1 = ½·v1 (this can easily be seen graphically in Figure 295.1, or established mathematically). Thus, conditional on player 2 bidding half her valuation, player 1 optimally bids half her valuation. Both players are identical, so the game has a Nash equilibrium in which each player bids half her valuation. When the number n of bidders exceeds 2, a similar analysis shows that the game has a (symmetric) Nash equilibrium in which every bidder bids the fraction 1 − 1/n of her valuation.

Interpretation (in this example, but also for any distribution F satisfying our assumptions): choose n − 1 valuations randomly and independently, each according to the cumulative distribution F. The highest of these n − 1 valuations is a random variable; denote it X. Fix a valuation v. Some values of X are less than v and others are greater.
Consider the distribution of X in those cases in which it is less than v. The expected value of this distribution is E(X | X < v): the expected value of the highest of the other players' valuations, conditional on v being higher than all the other valuations. The following proposition holds: if each bidder's valuation is drawn independently from the same continuous and increasing cumulative distribution, a first-price sealed-bid auction (with imperfect information about valuations) has a (symmetric) Nash equilibrium in which each type v of each player bids E(X | X < v).

Application to the case of 2 bidders and a uniform distribution: for any valuation v of player 1, the cases in which player 2's valuation is less than v are distributed uniformly between 0 and v, so the expected value of player 2's valuation conditional on being less than v is ½·v.
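A quick grid check of the two-bidder uniform case (illustrative): with player 2 bidding half her valuation, player 1's expected payoff from a bid b is 2b(v − b) for b ≤ ½ and v − b above ½, and the maximizer is indeed v/2 = E(X | X < v):

```python
# Player 2 bids half her valuation; player 1's expected payoff from bid b:
def expected_payoff(b, v):
    return 2 * b * (v - b) if b <= 0.5 else v - b

v = 0.8
grid = [i / 10000 for i in range(10001)]
best = max(grid, key=lambda b: expected_payoff(b, v))
print(best)  # 0.4, i.e. v/2 = E(X | X < v) in the uniform two-bidder case
```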
Comparing equilibria of first- and second-price auctions: as in the case of perfect information, under the assumptions of this section, first- and second-price auctions are revenue equivalent. Consider the equilibrium of a second-price auction in which every player bids her valuation: the expected price paid by the bidder with valuation v who wins is the expectation of the highest of the other n − 1 valuations, conditional on this maximum being less than v; in notation, this is E(X | X < v). We have just seen that this is precisely the bid of a player with valuation v in a first-price auction (and hence the amount paid by such a player if she wins). As in both cases the bidder with the highest valuation wins, both auctions yield the auctioneer the same revenue!
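Revenue equivalence can be illustrated by simulation (a sketch; bidders follow the equilibrium strategies above, with valuations uniform on [0, 1], so both expected revenues should be close to (n − 1)/(n + 1), i.e. 1/3 for two bidders):

```python
import random

random.seed(0)                      # reproducible sketch
n, trials = 2, 200_000
fp = sp = 0.0
for _ in range(trials):
    vals = sorted(random.random() for _ in range(n))
    fp += (1 - 1 / n) * vals[-1]    # first-price: winner pays her shaded bid
    sp += vals[-2]                  # second-price: winner pays 2nd-highest value
print(round(fp / trials, 2), round(sp / trials, 2))  # both close to 0.33
```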
Exercise 296.1 (Auctions with risk-averse bidders): Consider a variant of the Bayesian game defined earlier in this section in which the players are risk averse. Specifically, suppose each of the n players' preferences are represented by the expected value of the Bernoulli payoff function x^(1/m), where x is the player's monetary payoff and m > 1. Suppose also that each player's valuation is distributed uniformly between 0 and 1. Show that the Bayesian game that models a first-price sealed-bid auction under these assumptions has a (symmetric) Nash equilibrium in which each type vi of each player i bids

bi = (1 − 1/(m(n − 1) + 1)) · vi

Note that the solution of the problem max_b [b^k (v − b)^l] is b = kv/(k + l).
Compare the auctioneer's revenue in this equilibrium with her revenue in the symmetric Nash equilibrium of a second-price sealed-bid auction in which each player bids her valuation (note that the equilibrium of the second-price auction does not depend on the players' payoff functions).

9.6.3 Interdependent valuations

In this setup, each player's valuation depends on the other players' signals as well as her own. Denote the function that gives player i's valuation by gi, and assume that it is increasing in all the signals. Let P(b) be the function that determines the price paid by the winner as a function of the profile b of bids.
The following Bayesian game models first- and second-price auctions with common valuations:
- players: the set of bidders 1, …, n
- states: the set of all profiles (t1, …, tn) of signals that the players may receive
- actions: each player's set of actions is the set of possible bids (nonnegative numbers)
- signals: the signal function τi of each player i satisfies τi(t1, …, tn) = ti (each player observes her own signal)
- beliefs: each type of each player believes that the other players' signals are independently distributed.
- payoff functions:

ui(b, (t1, …, tn)) = (gi(t1, …, tn) − P(b))/m if bj ≤ bi for all j ≠ i and bj = bi for m players
ui(b, (t1, …, tn)) = 0 if bj > bi for some j ≠ i

The assumption is that a bidder does not know any other player's signal; but, as the analysis will show, the other players' bids contain some information about the other players' signals.

Nash equilibrium in a second-price sealed-bid auction: we analyze the case of two bidders in which each bidder's signal is uniformly distributed from 0 to 1 and the valuation of each bidder i is vi = α·ti + γ·tj, where j is the other player and α ≥ γ ≥ 0 (the case α = 1 and γ = 0 is the private value case; the case α = γ is called pure common value).
Under these assumptions, a second-price sealed-bid auction has a Nash equilibrium in which each type ti of each player i bids (α + γ)·ti.

Proof: to determine the expected payoff of type t1 of player 1, we need to find:
- the probability with which she wins
- the expected price she pays
- the expected value of player 2's signal if she wins.

Probability that player 1 wins: given that player 2's bidding function is (α + γ)·t2, player 1's bid of b1 wins only if b1 ≥ (α + γ)·t2, or if t2 ≤ b1/(α + γ). As t2 is distributed uniformly between 0 and 1, the probability that it is at most b1/(α + γ) is b1/(α + γ). Thus, a bid b1 by player 1 wins with probability b1/(α + γ).
Expected price player 1 pays if she wins: the price she pays is equal to player 2's bid, (α + γ)·t2, which, conditional on being less than b1, is distributed uniformly between 0 and b1. So the expected value of player 2's bid, given that it is less than b1, is ½·b1.

Expected value of player 2's signal if player 1 wins: the expected value of a signal t2 that yields a bid less than b1 is ½·b1/(α + γ).

Player 1's expected payoff if she bids b1 is the difference between her expected valuation (given her signal t1 and the fact that she wins) and the expected price she pays, multiplied by her probability of winning. Using the previous results, we get:

(α·t1 + γ·b1/(2(α + γ)) − ½·b1) · b1/(α + γ)
This function is maximized at b1 = (α + γ)·t1: so, if each type t2 of player 2 bids (α + γ)·t2, any type t1 of player 1 optimally bids (α + γ)·t1. The argument is symmetric for player 2. We therefore get a symmetric Nash equilibrium.

Exercise 299.1 (Asymmetric Nash equilibria of second-price sealed-bid common value auctions): Show that when α = γ = 1, for any value λ > 0, the game has an (asymmetric) Nash equilibrium in which each type t1 of player 1 bids (1 + λ)·t1 and each type t2 of player 2 bids (1 + 1/λ)·t2.
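The symmetric equilibrium of the common-value second-price auction can be checked numerically (a sketch using one illustrative parameter choice, α = 1, γ = 0.5, t1 = 0.4):

```python
# With player 2 bidding (alpha + gamma) * t2, player 1's expected payoff from
# a bid b (derived above) is (b/c) * (alpha*t1 + gamma*b/(2c) - b/2),
# where c = alpha + gamma. Grid-maximize to locate the best reply.
def expected_payoff(b, t1, alpha, gamma):
    c = alpha + gamma
    return (b / c) * (alpha * t1 + gamma * b / (2 * c) - b / 2)

alpha, gamma, t1 = 1.0, 0.5, 0.4
c = alpha + gamma
grid = [i * c / 10000 for i in range(10001)]   # candidate bids from 0 to c
best = max(grid, key=lambda b: expected_payoff(b, t1, alpha, gamma))
print(round(best, 4))  # 0.6, i.e. (alpha + gamma) * t1
```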
Note that when player 1 calculates her expected value of the object, she finds the expected value of player 2's signal given that her bid wins. The fact that her bid wins is, in fact, bad news about the level of the other player's valuation. A bidder who does not take account of this fact is said to suffer from the winner's curse.

Nash equilibrium in a first-price sealed-bid auction: a first-price sealed-bid auction has a Nash equilibrium in which each type ti of each player i bids ½·(α + γ)·ti.

Exercise 299.2 (First-price sealed-bid auction with common values): Verify that a first-price sealed-bid auction has a Nash equilibrium in which the bid of each type ti of each player i is ½·(α + γ)·ti.
Comparing equilibria of first- and second-price auctions: the revenue equivalence of first- and second-price auctions also holds under common valuations. In each case the bidder with the highest signal wins (that is to say, each bidder wins with the same probability), and in each case the expected price paid by the winner with signal ti (in the symmetric equilibrium) is ½·(α + γ)·ti. In fact, the revenue equivalence principle holds much more generally (see Myerson's lemma).
9.8 Appendix: auctions with an arbitrary distribution of valuations
9.8.1 First-price sealed-bid auctions

We construct here a symmetric equilibrium of a first-price sealed-bid auction for a generic distribution F of valuations that satisfies the assumptions of Section 9.6.2. Denote the bid of type v of bidder i by βi(v). In a symmetric equilibrium, every player uses the same bidding function (βi = β for some function β). Assume that:
- β is increasing in valuation (which seems reasonable)
- β is differentiable on (v−, v+).

Then:
- there is a condition that β must satisfy in any symmetric equilibrium
- exactly one function β satisfies this condition
- this function is increasing.
Suppose that all n − 1 players other than i bid according to the increasing differentiable function β. Then, given the assumption on F, the probability of a tie is zero. Hence, for any bid b with β(v−) ≤ b ≤ β(v+), the expected payoff of player i when her valuation is v and she bids b is:

(v − b) · Pr(b is the highest bid) = (v − b) · Pr(all n − 1 other bids ≤ b)

A player bidding according to the function β bids at most b if her valuation is at most β⁻¹(b) (the inverse of β evaluated at b).
Thus, the probability that the bids of the n − 1 other players are all at most b is the probability that the highest of n − 1 randomly selected valuations (the random variable denoted X in Section 9.6.2) is at most β⁻¹(b). Denoting the CDF of X by H, the expected payoff is thus:

(v − b) · H(β⁻¹(b)) if β(v−) ≤ b ≤ β(v+)
0 if b < β(v−)
v − b if b > β(v+)
In a symmetric equilibrium in which every player bids according to β, we have β(v) ≤ v if v > v− and β(v−) = v−:
- if β(v) > v for some v > v−, then a player with valuation v wins with positive probability (players with valuations less than v bid less than β(v) because β is increasing) and, if she wins, obtains a negative payoff, while she obtains a payoff of 0 by bidding v. So, for equilibrium, we need β(v) ≤ v if v > v−, and hence (because β is continuous) β(v−) ≤ v−.
- if β(v−) > v−, then a player with valuation v− wins with positive probability and obtains a negative payoff.
- if β(v−) < v−, then players with valuations slightly greater than v− also bid less than v−, so a player with valuation v− who increases her bid slightly wins with positive probability and obtains a positive payoff if she does so.

We conclude that β(v−) = v−.
The expected payoff of a player of type v, when every other player uses the bidding function β, is differentiable on (v−, β(v+)), given that β is increasing and differentiable and that β(v−) = v−. Thus, if v > v−, the derivative of this expected payoff with respect to b is zero at any best response less than β(v+). Knowing that the derivative of β⁻¹ at the point b is 1/β′(β⁻¹(b)), the first-order condition (F.O.C.) is:

(v − b) · H′(β⁻¹(b)) / β′(β⁻¹(b)) − H(β⁻¹(b)) = 0
C.9. If b = β(v). So. β(v) must satisfy the F. whenever v. we have β(v)< β(v+) for v < v+. then β1(v) = v. so that substituting b = β(v) into the F. Because β is increasing.C.O. for some constant C: (v) H (v) xH ' ( x)dx C for v v v v 9/12/2011 Game Theory .O. So that substituting b= β(v). then β1(b)=v. and multiplying by β’(v) yields: ' (v) H (v) (v) H ' (v) vH ' (v) for v v v The lefthand side of the differential equation is the derivative with respect to v of β(v) H(v).8 Appendix: auctions with an arbitrary distribution of valuations In a symmetric equilibrium in which every player bids according β.A (Short) Introduction 223 v .< v < v+. the best response of type v of any given player to the other players’ strategies is β(v). Thus.
The function β is bounded (as it is differentiable), so, considering the limit as v approaches v−, we deduce that C = 0. We conclude that if the game has a symmetric Nash equilibrium in which each player's bidding function is increasing and differentiable on (v−, v+), then this function is defined by:

β*(v) = (1/H(v)) · ∫_{v−}^{v} x·H′(x) dx for v− < v < v+

Note that, the function H being the CDF of X (the highest of n − 1 independently drawn valuations), β*(v) is the expected value of X conditional on its being less than v:

β*(v) = E(X | X < v)
Note finally that, using integration by parts, the numerator in the expression of β*(v) is:

v·H(v) − ∫_{v−}^{v} H(x) dx

Given H(v) = (F(v))^(n−1) (the probability that each of n − 1 valuations is at most v), we have:

β*(v) = v − ∫_{v−}^{v} H(x) dx / H(v) = v − ∫_{v−}^{v} (F(x))^(n−1) dx / (F(v))^(n−1) for v− < v < v+

We see that β*(v) < v for v− < v < v+.
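The formula β*(v) = E(X | X < v) can be checked numerically (a sketch for F uniform on [0, 1], where H(x) = x^(n−1) and β*(v) should reduce to the earlier result (1 − 1/n)·v):

```python
# Midpoint-rule evaluation of beta*(v) = (1/H(v)) * integral of x * H'(x) dx,
# for F uniform on [0, 1]: H(x) = x**(n-1), H'(x) = (n-1) * x**(n-2).
def beta_star(v, n, steps=100_000):
    h = v / steps
    integral = 0.0
    for i in range(steps):
        x = (i + 0.5) * h                         # midpoint of the i-th slice
        integral += x * (n - 1) * x ** (n - 2) * h
    return integral / v ** (n - 1)

for n in (2, 3, 5):
    print(round(beta_star(0.8, n), 4))  # 0.4, 0.5333, 0.64: (1 - 1/n) * 0.8
```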
Exercise 309.2 (Property of the bidding function in a first-price auction): Show that the bidding function β* is increasing.
5. Extensive Games with Perfect Information: Theory
Framework: strategic games suppress the sequential structure of decision-making: everything is about ex-ante anticipations and simultaneous decisions. An extensive game describes explicitly the sequential structure of decision-making, allowing us to study situations in which each decision-maker is free to change her mind as events unfold. In this setup, perfect information means that each decision-maker is fully informed about all previous actions.
5.1 Extensive games with perfect information
5.1.1 Definition

To players and preferences, we add the order of the players' moves and the actions each player may take at each point. Each possible sequence of actions forms a terminal history. The function that gives the player who moves at each point in each terminal history is the player function. So, the components of an extensive game are:
- the players
- the terminal histories
- the player function
- the players' preferences
Example 154.1 (Entry game): An incumbent faces the possibility of entry by a challenger (e.g. a new entrant in an industry). The challenger may enter or not; if it enters, the incumbent may either acquiesce or fight. Extensive game components:
- Players: Challenger, Incumbent
- Terminal histories: (In, Acquiesce), (In, Fight), (Out)
- Player function: P(∅) = Challenger, P(In) = Incumbent
- Preferences: ?
Note that the set of actions available to each player is NOT part of the game description, but it can be deduced from the description of the game (after any sequence of events, a player chooses an action).

Entry game actions:
- Challenger: {In, Out}
- Incumbent: {Fight, Acquiesce}
Terminal histories are a set of sequences:
- the first element of the sequence starts the game
- the order of the sequence depicts the order of the players' actions.

Entry game: {(In, Acquiesce), (In, Fight), (Out)}.

Define the subhistories of a finite sequence (a1, a2, …, ak) of actions to be:
- the empty sequence consisting of no actions (the empty history ∅, representing the start of the game)
- all sequences of the form (a1, a2, …, am), where 1 ≤ m ≤ k.

The entire sequence is a subhistory of itself. A subhistory NOT equal to the entire sequence is called a proper subhistory.
Entry game: the subhistories of (In, Acquiesce) are the empty history and the sequences (In) and (In, Acquiesce); the proper subhistories are the empty history and the sequence (In).

Definition 155.1 (Extensive game with perfect information): An extensive game with perfect information consists of
- a set of players
- a set of sequences (terminal histories) with the property that no sequence is a proper subhistory of any other sequence
- a function (the player function) that assigns a player to every sequence that is a proper subhistory of some terminal history
- for each player, preferences over the set of terminal histories.

The set of terminal histories represents the outcomes of the game: it is the set of all sequences of actions that may occur. If the length of the longest terminal history is finite, we say that the game has a finite horizon. If the game has a finite horizon and finitely many terminal histories, we say that the game is finite.
Entry game: suppose that the best outcome for the challenger is that it enters and the incumbent acquiesces, and the worst outcome is that it enters and the incumbent fights; whereas the best outcome for the incumbent is that the challenger stays out, and the worst outcome is that the challenger enters and there is a fight. The situation is modeled as follows:
- Players: {Challenger, Incumbent}
- Terminal histories: (In, Acquiesce), (In, Fight), (Out)
- Player function: P(∅) = Challenger, P(In) = Incumbent
- Preferences:
  Challenger: u(In, Acquiesce) = 2, u(Out) = 1, u(In, Fight) = 0
  Incumbent: u(Out) = 2, u(In, Acquiesce) = 1, u(In, Fight) = 0
(Game tree diagram: the start of the game, the player moving at each node, the actions, and the payoffs at the terminal histories.)

The sets of actions can be deduced from the set of terminal histories and the player function:

A(h) = {a : (h, a) is a history}

where h is some nonterminal history and a is one of the actions available to the player who moves after h. E.g.: A(In) = {Acquiesce, Fight}.
G).G) to (C. (D. the player function is given by P(0) = 1 and P(C) = P(D) = 2.1 Extensive games with perfect information Exercise 156. b.F). player function.E). (C. player 1 prefers (C.H) and player 2 prefers (D.5. the set of terminal histories.F) to (D. and players’ preferences for the game represented on the right side of the slide.A (Short) Introduction 236 .G) to (D. 9/12/2011 Game Theory .H).F) to (C.E). Represent in a diagram the twoplayer extensive game with perfect information in which the terminal histories are (C. Write down the set of players. and (D.2 a.
An extensive game with perfect information models a situation in which each player, when choosing an action, knows all actions chosen previously and always moves alone. Typical situations modeled this way are:
- a race between firms developing a new technology
- a race between directors to become CEO
- games like chess or tic-tac-toe.

Two extensions of extensive games with perfect information exist:
- allowing players to move simultaneously
- allowing arbitrary patterns of information.
Entry game solution:
- Solution: the challenger will enter and the incumbent will acquiesce.
- Analysis: the challenger sees that, if he enters, the incumbent will acquiesce. As the incumbent will acquiesce in case of entry, the challenger is better off entering than staying out. This line of argument is a backward induction.

Backward induction cannot always be used to solve extensive games:
- for an infinite horizon game, there is no end point from which to start the induction
- but problems arise even for some finite horizon games.
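The backward-induction argument can be sketched as code (an illustrative encoding; the tree representation and names are ours, not the text's):

```python
def backward_induction(node):
    # Leaves are payoff dicts (no 'player' key); internal nodes record the
    # mover and a dict of action -> subtree.
    if 'player' not in node:
        return node
    best_action, best_payoffs = None, None
    for action, child in node['moves'].items():
        payoffs = backward_induction(child)
        if best_payoffs is None or payoffs[node['player']] > best_payoffs[node['player']]:
            best_action, best_payoffs = action, payoffs
    node['choice'] = best_action          # record the optimal action here
    return best_payoffs

entry_game = {'player': 'Challenger', 'moves': {
    'In':  {'player': 'Incumbent', 'moves': {
        'Acquiesce': {'Challenger': 2, 'Incumbent': 1},
        'Fight':     {'Challenger': 0, 'Incumbent': 0}}},
    'Out': {'Challenger': 1, 'Incumbent': 2}}}

backward_induction(entry_game)
print(entry_game['choice'], entry_game['moves']['In']['choice'])  # In Acquiesce
```

Note that this sketch breaks ties in favor of the first action examined, which is precisely the kind of indifference that leaves backward induction inconclusive in the next example.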
Example: in this game, the Challenger sees that the Incumbent is indifferent between Acquiesce and Fight if he enters. The question of whether to enter or not remains open.
Another approach to defining equilibrium takes off from the notion of Nash equilibrium: it seeks steady states. In games in which backward induction is well-defined, this approach turns out to lead to the backward induction outcome. So, there is no conflict between the two approaches.
5.2 Strategies and outcomes
5.2.1 Strategies

Definition 159.1 ((full) strategy): A (full) strategy of player i in an extensive game with perfect information is a function that assigns to each history h after which it is player i's turn to move (P(h) = i, where P is the player function) an action in A(h), the set of actions available after h.

In the game of the diagram, player 1 has 2 strategies: C and D. Player 2 has 4 strategies:

              Action assigned to history C    Action assigned to history D
Strategy 1    E                               G
Strategy 2    E                               H
Strategy 3    F                               G
Strategy 4    F                               H
5.2 Strategies and outcomes
Notation: Player 1 strategies: C, D. Player 2 strategies: EG, EH, FG, FH. Actions are written in the order in which they occur in the game; if actions are available at the same stage of the game, they are written from left to right as they appear in the game diagram.
Each player's full strategy is more than a "plan of action" or "contingency plan": it specifies what the player does for each of the possible choices of the other player, whatever actions the other players take. In other words, if the player appoints an agent to play the game for her and tells the agent her strategy, then the agent has enough information to carry out her wishes.
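Definition 159.1 amounts to taking a Cartesian product over the player's decision points: one available action per history where she moves. A minimal sketch (the dictionary encoding of histories and actions is my own, not from the slides):

```python
from itertools import product

def full_strategies(moves):
    """moves: {history: available actions}; returns every pure strategy as a dict."""
    histories = sorted(moves)
    return [dict(zip(histories, choice))
            for choice in product(*(moves[h] for h in histories))]

# Player 2 moves after histories C and D, as in the table above.
p2 = full_strategies({"C": ["E", "F"], "D": ["G", "H"]})
labels = ["".join(s[h] for h in sorted(s)) for s in p2]
# labels → ['EG', 'EH', 'FG', 'FH']
```

Applied to player 2's two decision points, this reproduces the four strategies EG, EH, FG, FH listed above.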
5.2 Strategies and outcomes
Exercise: Determine the strategies of player 1 in the following game:
5.2 Strategies and outcomes
Solution: CG, CH, DG, DH.
Each (full) strategy specifies an action after history (C,E) even if it specifies the action D at the beginning of the game! A (full) strategy must specify an action for every history after which it is the player's turn to move, even for histories that, if the strategy is followed, do not occur (this is the difference between a "plan of action" and a "full strategy").
A way to interpret a (full) strategy is that it is a plan of action that specifies the player's actions even if she makes mistakes. E.g.: DG may be read as "I choose D but, if I make a mistake and play C, then I will play G if the other player plays E."
5. not their full strategies.H) Note that the outcome O(s) of the strategy profile s depends only on the players’ plans of action. We denote strategy profile by s. The terminal history associated with the strategy profile s is the outcome of s and is denoted O(s). 9/12/2011 Game Theory .A (Short) Introduction 245 .E. It determines the terminal history that occurs.E) is associated to terminal history D The strategy profile (CH.E) is associated to terminal history (C.2 Outcomes A strategy profile is the vector of strategies played by each player.2. Example: The outcome The strategy profile (DG.2 Strategies and outcomes 5.
5.3 Nash Equilibrium
Definition 161.2 (Nash equilibrium of extensive game with perfect information) The strategy profile s* in an extensive game with perfect information is a Nash equilibrium if, for every player i and every strategy ri of player i, the terminal history O(s*) generated by s* is at least as good according to player i's preferences as the terminal history O(ri, s*−i) generated by the strategy profile (ri, s*−i) in which player i chooses ri while every other player j chooses s*j. Equivalently, for each player i:
ui(O(s*)) ≥ ui(O(ri, s*−i)) for every strategy ri of player i.
5.3 Nash Equilibrium
One way to find the Nash equilibria of an extensive game in which each player has finitely many strategies is:
to list each player's (full) strategies;
to combine the strategies of all players to list the strategy profiles;
to find the outcome of each strategy profile;
to analyze this information as a strategic game.
This is known as the strategic form of the extensive game. The set of Nash equilibria of any extensive game with perfect information is the set of Nash equilibria of its strategic form.
5.3 Nash Equilibrium
Example 162.1: the entry game
Player 1 (Challenger) strategies: {In, Out}. Player 2 (Incumbent) strategies: {Acquiesce, Fight}.
Strategic form of the game (* marks a best response):

                    Incumbent
                Acquiesce   Fight
Challenger In     2*,1*      0,0
           Out    1,2*       1*,2*

Nash equilibria:
(In, Acquiesce): the one identified by backward induction.
(Out, Fight): this is also a steady state; no player has an incentive to deviate.
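The procedure just described — build the strategic form, then check every profile for profitable unilateral deviations — can be sketched as follows (the dictionary encoding of the payoff table is my own, not from the slides):

```python
from itertools import product

# Strategic form of the entry game: rows = Challenger, cols = Incumbent.
payoffs = {
    ("In", "Acquiesce"): (2, 1), ("In", "Fight"): (0, 0),
    ("Out", "Acquiesce"): (1, 2), ("Out", "Fight"): (1, 2),
}
rows, cols = ["In", "Out"], ["Acquiesce", "Fight"]

def pure_nash(payoffs, rows, cols):
    """A profile is a pure Nash equilibrium if neither player gains by deviating alone."""
    equilibria = []
    for r, c in product(rows, cols):
        u1, u2 = payoffs[(r, c)]
        if (all(payoffs[(r2, c)][0] <= u1 for r2 in rows)
                and all(payoffs[(r, c2)][1] <= u2 for c2 in cols)):
            equilibria.append((r, c))
    return equilibria

equilibria = pure_nash(payoffs, rows, cols)
# equilibria → [('In', 'Acquiesce'), ('Out', 'Fight')], the two equilibria named above
```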
5.3 Nash Equilibrium
How to interpret the Nash Equilibrium (Out,Fight)?
This situation is never observed in the extensive game. A way to escape this difficulty is to consider a slightly perturbed steady state in which, on rare occasions, non-equilibrium actions are taken: players make mistakes or deliberately experiment. Perturbations allow each player eventually to observe every other player's action after every history.
Another important point to note is that the extensive game embodies the assumption that the incumbent cannot commit, at the beginning of the game, to fight if the challenger enters. If such a commitment were credible, the challenger would stay out. But the threat is not credible (because it is irrational to fight after entry).
5.3 Nash Equilibrium
Exercise 163.1 (Nash equilibria of extensive games)
Find the Nash equilibria of the extensive game represented by the figure (when constructing the strategic form of each game, be sure to include all the strategies of each player).
5.4 Subgame perfect equilibrium
5.4.1 Definition
The notion of Nash equilibrium ignores the sequential structure of an extensive game. This may lead to steady states that are not robust (in the sense that they do not appear as such in the extensive game). We now consider a new notion of equilibrium that models a robust steady state. This notion requires (i) that each player's strategy be optimal, (ii) after every possible history.
Subgame: for any nonterminal history h, the subgame following h is the part of the game that remains after h has occurred.
Example: in the entry game, the subgame following the history In is the game in which the incumbent is the only player and there are two terminal histories : Acquiesce and Fight.
5.4 Subgame perfect equilibrium
Definition 164.1 (Subgame of extensive game with perfect information) Let Γ be an extensive game with perfect information, with player function P. For any nonterminal history h of Γ, the subgame Γ(h) following the history h is the following extensive game:
Players: the players in Γ.
Terminal histories: the set of all sequences h′ of actions such that (h, h′) is a terminal history of Γ.
Player function: the player P(h, h′) is assigned to each proper subhistory h′ of a terminal history.
Preferences: each player prefers h′ to h″ if she prefers (h, h′) to (h, h″) in Γ.
Note that the subgame following the empty history is the entire game.
5.4 Subgame perfect equilibrium
A subgame perfect equilibrium is a strategy profile s* with the property that in no subgame can any player i do better by choosing a strategy different from s*i, given that every other player j adheres to s*j.
Example: in the entry game, the Nash equilibrium (Out,Fight) is not a subgame perfect equilibrium because in the subgame following the history In, the strategy Fight is not optimal for the incumbent: in this subgame (the In subgame), the incumbent is better off choosing Acquiesce than it is choosing Fight.
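The argument for why (Out, Fight) fails subgame perfection can be checked mechanically: in the subgame following In, only the incumbent moves, so optimality reduces to comparing her two payoffs there. A sketch (payoff tuples (challenger, incumbent) encoded by hand from the game description):

```python
# Subgame following the history In: only the incumbent chooses.
subgame_after_in = {"Acquiesce": (2, 1), "Fight": (0, 0)}

incumbent_payoff = {a: u[1] for a, u in subgame_after_in.items()}
fight_is_optimal = incumbent_payoff["Fight"] >= max(incumbent_payoff.values())
# fight_is_optimal is False: Fight is not optimal in this subgame, so the
# Nash equilibrium (Out, Fight) is not subgame perfect.
```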
Notation: Let h be a history and s a strategy profile to which the players adhere after h. We denote by Oh(s) the outcome generated in the subgame following h by the strategy profile induced by s.
5.4 Subgame perfect equilibrium
Example: the entry game. Let s be the strategy profile (Out, Fight) and let h be the history In. If h occurs and, afterwards, the players adhere to s, the resulting terminal history is Oh(s) = (In, Fight).
5.4 Subgame perfect equilibrium
Definition 166.1 (Subgame perfect equilibrium of extensive game with perfect information) The strategy profile s* in an extensive game with perfect information is a subgame perfect equilibrium if, for every player i, every history h after which it is player i's turn to move (P(h) = i), and every strategy ri of player i, the terminal history Oh(s*) generated by s* after the history h is at least as good according to player i's preferences as the terminal history Oh(ri, s*−i) generated by the strategy profile (ri, s*−i). Equivalently:
ui(Oh(s*)) ≥ ui(Oh(ri, s*−i)) for every strategy ri of player i.
The key point is that each player's strategy is required to be optimal for every history after which it is the player's turn to move, not only at the start of the game (as in the definition of a Nash equilibrium).
5.4 Subgame perfect equilibrium
5.4.2 Subgame perfect equilibrium and Nash equilibrium
Every subgame perfect equilibrium is a Nash equilibrium (because in a subgame perfect equilibrium, every player’s strategy is optimal, in particular after the empty history)
A subgame perfect equilibrium generates a Nash equilibrium in every subgame. A Nash equilibrium is required to be optimal only in the subgames that are reached when the players follow their strategies. Subgame perfect equilibrium requires moreover that each player's strategy be optimal after histories that do not occur if the players follow their strategies.
5.4 Subgame perfect equilibrium
Example 167.2 (Variant of the entry game)
Consider the variant of the entry game in which the incumbent is indifferent between fighting and acquiescing if the challenger enters. Find the subgame perfect equilibria.
5.4 Subgame perfect equilibrium
Solution: both Nash equilibria (In,Acquiesce) and (Out,Fight) are subgame perfect equilibria because, after history In, both Fight and Acquiesce are optimal for the incumbent.
Exercise 168.1
Which of the Nash equilibria of the following game are subgame perfect?
5.4 Subgame perfect equilibrium
5.4.4 Interpretation
A Nash equilibrium corresponds to a steady state in an idealized setting in which the players' long experience leads them to correct beliefs about the other players' actions. A subgame perfect equilibrium of an extensive game corresponds to a slightly perturbed steady state in which all players, on rare occasions, take non-equilibrium actions. Thus, players know how the other players will behave in every subgame. A subgame perfect equilibrium is a plan of action specifying players' actions:
not only after histories consistent with the strategies,
but also after histories that result when a player chooses arbitrary alternative actions.
5.4 Subgame perfect equilibrium
Alternative interpretation:
Consider an extensive game with perfect information in which:
each player has a unique best action at every history after which it is her turn to move;
the horizon is finite.
In such a game, a player who knows the other players' preferences (e.g. profit maximization) and knows that the other players are rational may use backward induction to deduce her optimal strategy. The subgame perfect equilibrium is the outcome of the players' rational calculations about each other's strategies.
Note that this interpretation is not tenable in games in which some player has more than one optimal action after some history; but an extension of the procedure of backward induction can be used to find all subgame perfect equilibria of finite horizon games.
5.5 Finding subgame perfect equilibria of finite horizon games: backward induction
In a game with finite horizon, the set of subgame perfect equilibria may be found more directly by using an extension of the procedure of backward induction.
Define the length of a subgame to be the length of the longest history in the subgame.
The procedure of backward induction works as follows:
(i) Start by finding the optimal actions of the players who move in the last subgames (stage k);
(ii) Next, find the optimal actions of the players who move at stage k−1, given the optimal actions already found in all subgames of stage k;
(iii) Continue the procedure up to stage 1.
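When optimal actions are unique, steps (i)–(iii) can be sketched as a recursion over subgames. The tree encoding below is my own (a terminal history is a payoff tuple; a decision point is a pair (player index, {action: subtree})); ties keep the first action found, so games with multiple optima need the extended procedure discussed below.

```python
def backward_induction(node):
    """Return (payoff profile, strategy), strategy mapping histories to actions."""
    if not isinstance(node[1], dict):        # payoff tuple → terminal history
        return node, {}
    player, branches = node
    best_action, best_payoffs, strategy = None, None, {}
    for action, subtree in branches.items():
        payoffs, sub = backward_induction(subtree)
        strategy.update({(action,) + h: a for h, a in sub.items()})
        if best_payoffs is None or payoffs[player] > best_payoffs[player]:
            best_action, best_payoffs = action, payoffs
    strategy[()] = best_action
    return best_payoffs, strategy

# Entry game: challenger (player 0) moves first, incumbent (player 1) after In.
entry_game = (0, {"In": (1, {"Acquiesce": (2, 1), "Fight": (0, 0)}),
                  "Out": (1, 2)})
payoffs, strategy = backward_induction(entry_game)
# payoffs → (2, 1); strategy → {(): 'In', ('In',): 'Acquiesce'}
```

This reproduces the backward induction outcome of the entry game: the challenger enters and the incumbent acquiesces.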
5.5 Finding subgame perfect equilibria of finite horizon games: backward induction
Example: We first deduce that in the subgame of length 1 following the history (C,E), player 1 chooses G. Then, at the start of the subgame of length 2 following the history C, player 2 chooses E. Then, at the start of the whole game, player 1 chooses D.
In any game in which this procedure selects a single action for the player who moves at the start of each subgame, the strategy profile thus selected is the unique subgame perfect equilibrium of the game.
5.5 Finding subgame perfect equilibria of finite horizon games: backward induction
What happens in a game in which, at the start of some subgames, more than one action is optimal? The solution is to trace back separately the implications for behavior in the longer subgames of every combination of optimal actions in the shorter subgames.
5.5 Finding subgame perfect equilibria of finite horizon games: backward induction
Example 172.1
5.5 Finding subgame perfect equilibria of finite horizon games: backward induction
The game has three subgames of length 1, in each of which player 2 moves:
In the subgames following the histories C and D, player 2 is indifferent between her two actions.
In the subgame following the history E, player 2's unique optimal action is K.
There are four combinations of player 2's optimal actions in the subgames of length 1: FHK, FIK, GHK, GIK.
5.5 Finding subgame perfect equilibria of finite horizon games: backward induction
The game has a single subgame of length 2, namely the whole game, in which player 1 moves first. We now consider player 1's optimal action in this game for every combination of optimal actions of player 2 in the subgames of length 1:
For the combinations FHK and FIK of optimal actions of player 2, player 1's optimal action at the start of the game is C.
For the combination GHK of optimal actions of player 2, both C and D are optimal for player 1.
For the combination GIK of optimal actions of player 2, player 1's optimal action at the start of the game is D.
The strategy pairs isolated by the procedure are (C,FHK), (C,FIK), (C,GHK), (D,GHK), and (D,GIK). The set of strategy profiles that this procedure yields for the whole game is the set of subgame perfect equilibria of the game.
5.5 Finding subgame perfect equilibria of finite horizon games: backward induction
Two important propositions:
PROPOSITION 172.1 (Existence of subgame perfect equilibrium) Every finite extensive game with perfect information has a subgame perfect equilibrium.
PROPOSITION 173.1 (Subgame perfect equilibrium of finite horizon games and backward induction) The set of subgame perfect equilibria of a finite horizon extensive game with perfect information is equal to the set of strategy profiles isolated by the procedure of backward induction.
5.5 Finding subgame perfect equilibria of finite horizon games: backward induction
Exercise 173.2: Find the subgame perfect equilibria of this game:
5.5 Finding subgame perfect equilibria of finite horizon games: backward induction
Exercise 176.1 (Dollar auction) An object that two people each value at v, a positive integer, is sold in an auction. In the auction, the people take turns bidding; a bid must be a positive integer greater than the previous bid. On her turn, a player may pass rather than bid, in which case the game ends and the other player receives the object; both players pay their last bids (if any). (If player 1 passes initially, player 2 receives the object and makes no payment; if, for example, player 1 bids 1, player 2 bids 3, and then player 1 passes, player 2 obtains the object and pays 3, and player 1 pays 1.) Each person's wealth is w, which exceeds v; neither player may bid more than her wealth. For v = 2 and w = 3, model the auction as an extensive game and find its subgame perfect equilibria.
5.5 Finding subgame perfect equilibria of finite horizon games: backward induction
Exercise 176.2 (A synergistic relationship) Two individuals are involved in a synergistic relationship. Suppose that the players choose their effort levels sequentially (rather than simultaneously). First individual 1 chooses her effort level a1; then individual 2 chooses her effort level a2. An effort level is a nonnegative number, and individual i's preferences (for i = 1, 2) are represented by the payoff function ai(c + aj − ai), where j is the other individual and c > 0 is a constant. Find the subgame perfect equilibria.
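For readers checking their answer, the backward-induction calculus for this exercise can be sketched as follows (spoiler; it presumes the payoff function ai(c + aj − ai) stated above, and the grid search is only a numerical sanity check):

```python
# Stage 2: given a1, player 2 maximizes a2 * (c + a1 - a2), a concave
# quadratic maximized at a2 = (c + a1) / 2.
def best_response_2(a1, c):
    return (c + a1) / 2

# Stage 1: substituting, player 1 maximizes a1 * (c + (c + a1)/2 - a1)
#        = a1 * (3c - a1) / 2, maximized at a1 = 3c/2.
def spe(c):
    a1 = 3 * c / 2
    return a1, best_response_2(a1, c)

# Numerical sanity check for c = 2: grid-search player 1's reduced payoff.
c = 2
u1 = lambda a1: a1 * (c + best_response_2(a1, c) - a1)
grid_best = max((i / 100 for i in range(1001)), key=u1)
a1_star, a2_star = spe(c)   # → (3.0, 2.5); grid_best agrees
```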
5.5 Finding subgame perfect equilibria of finite horizon games: backward induction
Exercise 174.2 (An entry game with a financially constrained firm) An incumbent in an industry faces the possibility of entry by a challenger. First the challenger chooses whether to enter; if it enters, it pays the entry cost f > 0. Then, in each of T periods, the incumbent first commits to fight or cooperate with the challenger in that period; then the challenger chooses whether to stay in the industry or to exit. Once the challenger exits, it cannot reenter, and neither firm has any further action. If, in any period, the challenger stays in, each firm obtains in that period the profit −F < 0 if the incumbent fights and C > max{F, f} if it cooperates. If, in any period, the challenger exits, both firms obtain the profit zero in that period (regardless of the incumbent's action), and the incumbent obtains the profit M > 2C and the challenger the profit 0 in every subsequent period. If the challenger does not enter, its payoff is 0 and the incumbent's payoff is TM (it obtains the profit M in each of the following T ≥ 1 periods). Each firm cares about the sum of its profits. Find the subgame perfect equilibria of the extensive game.
10. Extensive Games with Imperfect Information
Framework
We keep in this chapter the extensive game setup: an extensive game describes explicitly the sequential structure of decision-making, allowing us to study situations in which each decision-maker is free to change her mind as events unfold. In this imperfect information setup, each player, when choosing her action, may not be informed of the other players' previous actions.
10.1 Extensive games with imperfect information
To describe an extensive game with perfect information, we need to specify the set of players, the set of terminal histories, the player function and the players' preferences. To describe an extensive game with imperfect information, we need to add a specification of each player's information about the history at every point at which she moves:
Denote by 𝐻𝑖 the set of histories after which player 𝑖 moves.
We specify player 𝑖's information by partitioning 𝐻𝑖 into a collection of information sets (the collection is called the information partition).
When making her decision, the player is informed of the information set that has occurred, but not of which history within that set has occurred.
10.1 Extensive games with imperfect information
Example: Suppose player 𝑖 moves after histories 𝐶, 𝐷 and 𝐸 (𝐻𝑖 = {𝐶, 𝐷, 𝐸}). If player 𝑖 is informed only that the history is 𝐶, or that it is either 𝐷 or 𝐸, her information partition is the two information sets {𝐶} and {𝐷, 𝐸}. Note that if the player is not informed at all, her information partition contains a unique information set {𝐶, 𝐷, 𝐸}.
Important restriction: Denote by 𝐴(ℎ) the set of actions available to the player who moves after history ℎ. We allow two histories ℎ and ℎ′ to be in the same information set only if 𝐴(ℎ) = 𝐴(ℎ′). Why?
10.1 Extensive games with imperfect information
Note that we allow moves of chance.
Definition 314.1 (Extensive game with imperfect information) An extensive game with imperfect information consists of:
A set of players.
A set of sequences (terminal histories) having the property that no sequence is a proper subhistory of some terminal history.
A function (the player function) that assigns either a player or "chance" to every sequence that is a proper subhistory of some terminal history.
A function that assigns to each history that the player function assigns to chance a probability distribution over the actions available after that history (each probability distribution being independent of every other distribution).
For each player, a partition (information partition) of the set of histories assigned to that player by the player function.
For each player, preferences over the set of lotteries over terminal histories.
So an outcome is a lottery (a probability distribution) over the set of terminal histories.
10.1 Extensive games with imperfect information
Example 314.2: BoS as an extensive game
Games in which each player moves once and no player, when moving, is informed of any other player's action may be modeled as strategic games or as extensive games with imperfect information.
BoS: Each of two people chooses whether to go to a Bach or a Stravinsky concert. Neither person, when choosing a concert, knows the concert chosen by the other person. Model this game as an extensive game with imperfect information.
Solution:
Players: the two people, say 1 and 2.
Terminal histories: (𝐵, 𝐵), (𝐵, 𝑆), (𝑆, 𝐵), (𝑆, 𝑆).
Player function: 𝑃(∅) = 1, 𝑃(𝐵) = 𝑃(𝑆) = 2.
Chance moves: none.
Information partitions:
Player 1: {∅} (a single information set: player 1 has a single move and when she moves, she is informed that the game is beginning).
Player 2: {𝐵, 𝑆} (player 2 has a single move and when she moves, she is not informed whether the history is 𝐵 or 𝑆).
Preferences: given in the game description.
10.1 Extensive games with imperfect information
Figure 315.1. The line linking the two histories indicates that they are in the same information set.
10.1 Extensive games with imperfect information
Example 317.1: Variant of the entry game (the challenger, before entering, takes an action that the incumbent does not observe)
An incumbent faces the possibility of entry by a challenger (see Example 154.1). The challenger has three choices:
stay out;
prepare itself for combat and enter (preparation is costly but reduces the loss from a fight);
enter without preparation.
A fight is less costly for the incumbent if the entrant is unprepared. But regardless of the entrant's readiness, the incumbent prefers to acquiesce than to fight. The incumbent observes whether the challenger enters but not whether it is prepared. Model (graphically, by a tree) this game as an extensive game with imperfect information.
10.1 Extensive games with imperfect information
Figure 317.1
10.2 Strategies
A strategy specifies the action the player takes whenever it is her turn to move.
Definition 318.1 (Strategy in extensive game) A (pure) strategy of player 𝑖 in an extensive game is a function that assigns to each of 𝑖's information sets 𝐼𝑖 an action in 𝐴(𝐼𝑖) (the set of actions available to player 𝑖 at the information set 𝐼𝑖).
In the BoS game, each player has a single information set at which two actions (𝐵 or 𝑆) are available. Thus, each player has two possible strategies: 𝐵 or 𝑆. If players have several information sets, a strategy specifies the list of actions at each information set, in the form (𝐴𝑐𝑡𝑖𝑜𝑛𝐼1, 𝐴𝑐𝑡𝑖𝑜𝑛𝐼2, …).
With mixed strategies, players are allowed to choose their actions randomly. Definition 318.3 (Mixed strategy in extensive game) A mixed strategy of a player in an extensive game is a probability distribution over the player's pure strategies.
10.3 Nash equilibrium
Definition 318.4 (Nash equilibrium of extensive game)
Intuition: a strategy profile is a Nash equilibrium if no player has an alternative strategy that increases her payoff, given the other players' strategies.
Formal definition: The mixed strategy profile 𝛼∗ in an extensive game is a (mixed strategy) Nash equilibrium if, for each player 𝑖 and every mixed strategy 𝛼𝑖 of player 𝑖, player 𝑖's expected payoff to 𝛼∗ is at least as large as her expected payoff to (𝛼𝑖, 𝛼∗−𝑖), according to a payoff function whose expected value represents player 𝑖's preferences over lotteries.
Notes: an equilibrium in which no player's strategy entails any randomization (every player's strategy assigns probability 1 to a single action at each information set) is a pure Nash equilibrium. One way to find a Nash equilibrium of an extensive game is to construct the strategic form of the game and analyze it as a strategic game.
10.3 Nash equilibrium
Example 319.1: BoS as an extensive game
Each player has two strategies: 𝐵 and 𝑆. The strategic form of the game is given in Figure 19.1. Thus the game has two pure Nash equilibria: (𝐵, 𝐵) and (𝑆, 𝑆).
In the BoS game, player 2 is not informed of the action chosen by player 1 when taking an action (her information set contains both the history 𝐵 and the history 𝑆). However, player 2's experience playing the game tells her the history to expect. E.g.: in a steady state in which every person who plays the role of either player chooses 𝐵, each player knows (by experience) that the other player will choose 𝐵.
10.3 Nash equilibrium
How may we extend the idea of subgame perfect equilibrium to extensive games with imperfect information, to deal with situations in which the notion of Nash equilibrium is not adequate?
Example 322.1: Entry game. The strategic form of the entry game in Example 317.1 is the following (* marks a best response):

             Acquiesce   Fight
Ready          3,3*       1,1
Unready        4*,3*      0,2
Out            2,4*       2*,4*
The game has two Nash equilibria: (Unready, Acquiesce) and (Out, Fight). (The game also has a mixed strategy Nash equilibrium in which the challenger uses the pure strategy Out and the probability assigned by the incumbent to Acquiesce is at most 1/2.)
The Nash equilibrium (𝑂𝑢𝑡, 𝐹𝑖𝑔ℎ𝑡) is not plausible. As in Chapter 5 (perfect information), the notion of subgame perfect equilibrium eliminates this strategy by requiring that each player's strategy be optimal, given the other players' strategies, for every history after which she moves, regardless of whether the history occurs if the players adhere to their strategies. The natural extension of this idea to games with imperfect information requires that each player's strategy be optimal at each of her information sets.
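The claim about the mixed strategy equilibrium can be checked directly from the strategic form above: if the incumbent acquiesces with probability p after entry, Out is a best response for the challenger exactly when p ≤ 1/2. A sketch:

```python
# Challenger's expected payoffs against an incumbent who plays
# Acquiesce with probability p, read off the payoff matrix.
def challenger_payoffs(p):
    return {"Ready": 3 * p + 1 * (1 - p),
            "Unready": 4 * p + 0 * (1 - p),
            "Out": 2}

at_half = challenger_payoffs(0.5)    # → {'Ready': 2.0, 'Unready': 2.0, 'Out': 2}
above_half = challenger_payoffs(0.6) # Unready now pays 2.4 > 2: Out no longer best
```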
10.3 Nash equilibrium
In Example 322.1, the incumbent's action 𝐹𝑖𝑔ℎ𝑡 is unambiguously suboptimal at its information set, because the incumbent prefers 𝐴𝑐𝑞𝑢𝑖𝑒𝑠𝑐𝑒 if the challenger enters, regardless of whether the challenger is ready. So any equilibrium that assigns a positive probability to 𝐹𝑖𝑔ℎ𝑡 does not satisfy the additional requirement introduced by the notion of subgame perfect equilibrium. However, the implementation of the idea in other games may be less straightforward, because the optimality of an action at an information set may depend on the history that has occurred. Consider for example a variant of the entry game in which the incumbent prefers to fight rather than to accommodate an unprepared entrant (see Figure 323.1).
10.3 Nash equilibrium
Figure 323.1
Like the original game, this modified game has the Nash equilibrium (𝑂𝑢𝑡, 𝐹𝑖𝑔ℎ𝑡). But given that fighting is now optimal if the challenger enters unprepared, the reasonableness of this equilibrium depends on the history the incumbent believes has occurred, and the challenger's strategy 𝑂𝑢𝑡 gives the incumbent no basis on which to form such a belief. So, to study this situation, we must specify the players' beliefs.
10.4 Beliefs and sequential equilibrium
A Nash equilibrium of a strategic game is characterized by two requirements:
each player chooses her best action given her belief about the other players;
each player's belief is correct.
The notion of equilibrium we define here:
embodies these two requirements;
insists that they hold at each point at which a player has to choose an action (like subgame perfect equilibrium in extensive games with perfect information).
10.4 Beliefs and sequential equilibrium
10.4.1 Beliefs
We assume that at an information set that contains more than one history, the player whose turn it is to move forms a belief about the history that has occurred. We model this belief as a probability distribution over the histories in the information set. We call a collection of beliefs (one for each information set) a belief system.
Definition 324.1 A belief system in an extensive game is a function that assigns to each information set a probability distribution over the histories in that information set.
10.4 Beliefs and sequential equilibrium
Example: the entry game (317.1). The belief system consists of a pair of probability distributions:
one assigns probability 1 to the empty history (the challenger's belief at the start of the game);
the other assigns probabilities to the histories Ready and Unready (the incumbent's belief after the challenger enters).
10.4.2 Strategies
Definition 324.2 (Behavioral strategy in extensive game) A behavioral strategy of player 𝑖 in an extensive game is a function that assigns to each of 𝑖's information sets 𝐼𝑖 a probability distribution over the actions in 𝐴(𝐼𝑖), with the property that each probability distribution is independent of every other distribution.
10.4 Beliefs and sequential equilibrium
Notes:
A behavioral strategy that assigns probability one to a single action is equivalent to a pure strategy.
Behavioral strategies assign probabilities to actions at information sets, while mixed strategies assign probabilities to complete pure strategies (possible combinations of actions).
In all the games that we study, behavioral strategies and mixed strategies are equivalent, but behavioral strategies are easier to deal with.
Example: the BoS game (314.2). Each player has a single information set. In this game, a behavioral strategy for each player is a single probability distribution over her actions. So the set of behavioral strategies is identical to the set of mixed strategies.
10.4 Beliefs and sequential equilibrium
10.4.3 Equilibrium
Definition 325.1 (Assessment) An assessment is an equilibrium if it satisfies the following two requirements:
Sequential rationality: each player's strategy is optimal whenever she has to move, given the other players' strategies and given her belief about the history in the information set that has occurred, regardless of whether the information set is reached if the players follow their strategies.
Consistency of beliefs with strategies: each player's belief is consistent with the strategy profile.
Sequential rationality generalizes the requirement of subgame perfect equilibrium: each player's strategy must be optimal in the part of the game that follows each of her information sets.
10.4 Beliefs and sequential equilibrium
Example 325 and Figure 326.1
10.4 Beliefs and sequential equilibrium
Player 1's strategy is indicated by the red branches: she selects E at the start of the game and J after the history (C,F).
Player 2's belief at her information set (the numbers in brackets) is that the history C has occurred with probability 2/3 and the history D with probability 1/3.
Sequential rationality requires that player 2's strategy be optimal at her information set, even though this set is not reached if player 1 follows her strategy.
Given the subsequent behavior specified by player 1's strategy, player 2's expected payoff in the part of the game starting at her information set is:
Strategy F: (2/3 × 0) + (1/3 × 1) = 1/3
Strategy G: (2/3 × 1) + (1/3 × 0) = 2/3
Sequential rationality therefore requires player 2 to select G.
10.4 Beliefs and sequential equilibrium
Sequential rationality also requires that player 1's strategy be optimal at each of her two (one-element) information sets, given player 2's strategy:
Player 1's optimal action after the history (C,F) is J.
Given player 2's strategy G, player 1's optimal actions at the start of the game are D and E.
Thus, if player 2's strategy is G, player 1 has two optimal strategies: DJ and EJ.
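The expected-payoff comparison at player 2's information set can be sketched numerically. A minimal illustration: the belief and payoff numbers are taken from the example in the text, while the dictionary layout and function names are my own, not Osborne's notation.

```python
# Player 2's sequential-rationality check at her information set in
# Figure 326.1.  Belief: history C with probability 2/3, D with 1/3.
belief = {"C": 2 / 3, "D": 1 / 3}
payoff = {                          # player 2's payoff by (history, action)
    ("C", "F"): 0, ("D", "F"): 1,   # playing F
    ("C", "G"): 1, ("D", "G"): 0,   # playing G
}

def expected_payoff(action):
    """Player 2's expected payoff at the information set, given her belief."""
    return sum(belief[h] * payoff[(h, action)] for h in belief)

# expected_payoff("F") = 1/3 and expected_payoff("G") = 2/3, so the
# sequentially rational action is G.
best_action = max(["F", "G"], key=expected_payoff)
```

The same comparison with a belief of (1/2, 1/2) would make player 2 indifferent, which is exactly the case used later when searching for equilibria in which player 2 mixes.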
10.4 Beliefs and sequential equilibrium
Sequential rationality requirement (more formal definition):
Denote by (𝛽, 𝜇) an assessment (𝛽 is a profile of behavioral strategies and 𝜇 is a belief system). Let 𝐼𝑖 be an information set of player 𝑖.
Denote by 𝑂𝐼𝑖(𝛽, 𝜇) the probability distribution over terminal histories that results if each history in 𝐼𝑖 occurs with the probability assigned to it by player 𝑖's belief 𝜇𝑖 (not necessarily the probability with which it occurs if the players adhere to 𝛽) and subsequently the players adhere to the strategy profile 𝛽.
Sequential rationality requires, for each player 𝑖 and each of her information sets 𝐼𝑖, that her expected payoff to 𝑂𝐼𝑖(𝛽, 𝜇) be at least as large as her expected payoff to 𝑂𝐼𝑖((𝛾𝑖, 𝛽−𝑖), 𝜇) for each of her behavioral strategies 𝛾𝑖.
In Figure 326.1, for the information set {𝐶, 𝐷}, the probability distribution assigns probability 2/3 to the terminal history (C,G) and probability 1/3 to (D,G).
10.4 Beliefs and sequential equilibrium
Consistency of beliefs with strategies is a new requirement. In a steady state, each player's belief must be correct: the probability it assigns to any history must be the probability with which that history occurs if the players adhere to their strategies.
The implementation of this idea is unclear at an information set that is not reached if the players follow their strategies: every history in such a set then has probability 0. We deal with this difficulty by allowing the player who moves at such an information set to hold any belief there.
The consistency requirement therefore restricts the belief system only at information sets reached with positive probability when every player adheres to her strategy.
10.4 Beliefs and sequential equilibrium
Precisely, the consistency requirement imposes that the probability assigned to each history ℎ* in an information set reached with positive probability, by the belief of the player who moves at that information set, equal the probability that ℎ* occurs according to the strategy profile, conditional on the information set's being reached. By Bayes' rule, this probability is:

Pr(ℎ* according to 𝛽) / Σ_{ℎ ∈ 𝐼𝑖} Pr(ℎ according to 𝛽)
10.4 Beliefs and sequential equilibrium
Figure 326.1
If player 1's behavioral strategy assigns probability 1 to action E at the start of the game, the consistency requirement places no restriction on player 2's belief (player 2's information set is not reached if player 1 adheres to her strategy).
If player 1's strategy at the start of the game assigns positive probability to C or D, the consistency requirement enters into play. Denote by 𝑝 the probability assigned to C by player 1's strategy and by 𝑞 the probability assigned to D. Consistency requires that player 2's belief assign probability 𝑝/(𝑝 + 𝑞) to C and 𝑞/(𝑝 + 𝑞) to D.
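The Bayes' rule computation above can be sketched as follows. The helper function and the numeric strategy profile are my own illustration (histories C, D, E as in Figure 326.1).

```python
# Consistency of beliefs with strategies: conditional on the information
# set {C, D} being reached, the belief over its histories follows from
# Bayes' rule.  `strategy` gives player 1's behavioral-strategy
# probabilities p (on C), q (on D) and the remainder (on E).

def consistent_belief(strategy, info_set):
    """Belief over histories in info_set, conditional on reaching it;
    None when the set is reached with probability 0 (consistency is
    then silent and any belief is allowed)."""
    reach_prob = sum(strategy[h] for h in info_set)
    if reach_prob == 0:
        return None
    return {h: strategy[h] / reach_prob for h in info_set}

# Example: p = 0.2 on C, q = 0.3 on D (probability 0.5 on E)
belief = consistent_belief({"C": 0.2, "D": 0.3, "E": 0.5}, ["C", "D"])
# belief is {"C": p/(p+q), "D": q/(p+q)} = {"C": 0.4, "D": 0.6}
unrestricted = consistent_belief({"C": 0, "D": 0, "E": 1}, ["C", "D"])
# unrestricted is None: the information set is never reached
```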
10.4 Beliefs and sequential equilibrium
Example 327.4: Consistency of beliefs in the entry game (Figures 317.1 and 323.1)
Denote by 𝑝𝑅, 𝑝𝑈 and 𝑝𝑂 the probabilities that the challenger assigns to Ready, Unready and Out.
If 𝑝𝑂 = 1, the consistency condition does not restrict the incumbent's belief. Otherwise, the condition requires that the incumbent assign probability 𝑝𝑅/(𝑝𝑅 + 𝑝𝑈) to Ready and 𝑝𝑈/(𝑝𝑅 + 𝑝𝑈) to Unready.
Definition 328.1 (Weak sequential equilibrium) An assessment (𝛽, 𝜇) (consisting of a behavioral strategy profile 𝛽 and a belief system 𝜇) is a weak sequential equilibrium if it satisfies sequential rationality and weak consistency of beliefs with strategies.
10.4 Beliefs and sequential equilibrium
Figure 326.1
In this game, player 1's strategy EJ is sequentially rational given player 2's strategy G, and player 2's strategy G is sequentially rational given the belief indicated in the Figure and player 1's strategy EJ. The belief is consistent with the strategy profile (EJ,G), because this profile does not lead to player 2's information set. Thus the game has a weak sequential equilibrium.
The strategy profile in any weak sequential equilibrium is a Nash equilibrium (if an assessment is a weak sequential equilibrium, then each player's strategy in the assessment is optimal at the beginning of the game, given the other players' strategies).
Note: In an extensive game with perfect information, only one belief system is possible (each player believes at each information set that a single compatible history has occurred with probability 1). Therefore, in an extensive game with perfect information, the strategy profile in any weak sequential equilibrium is a subgame perfect equilibrium.
10.4 Beliefs and sequential equilibrium
How to find weak sequential equilibria? We can use a combination of the techniques for finding subgame perfect equilibria of extensive games with perfect information and those for finding Nash equilibria of strategic games: find all the Nash equilibria of the game, and then check which of these equilibria are associated with weak sequential equilibria.
Figure 326.1
Does the game have a weak sequential equilibrium in which player 1 chooses E? If player 1 chooses E, player 2's belief is not restricted by consistency. We therefore need to ask:
Whether any strategy of player 2 makes E optimal for player 1.
Whether there is a belief of player 2 that makes any such strategy optimal.
10.4 Beliefs and sequential equilibrium
We see that E is optimal if and only if player 2 chooses F with probability at most 2/3:
Any such strategy of player 2 is optimal if player 2 believes the history is C with probability ½.
The strategy of choosing F with probability 0 is optimal if player 2 believes the history is C with any probability of at least ½.
Thus an assessment is a weak sequential equilibrium if player 1's strategy is EJ and player 2:
Either chooses F with probability at most 2/3 and believes that the history is C with probability ½,
Or chooses G and believes that the history is C with probability at least ½.
10.4 Beliefs and sequential equilibrium
Example 330.1 (Weak sequential equilibria of the entry game, Example 317.1)
The entry game has two pure strategy Nash equilibria: (Unready, Acquiesce) and (Out, Fight).
Consider (Unready, Acquiesce): consistency requires that the incumbent believe that the history is Unready at its information set (because the challenger's strategy is Unready), and this belief makes Acquiesce optimal. The game therefore has a weak sequential equilibrium in which the strategy profile is (Unready, Acquiesce) and the incumbent's belief is that the history is Unready.
Consider (Out, Fight): regardless of the incumbent's belief at its information set, Fight is not an optimal action in the remainder of the game, because Acquiesce yields a higher payoff than Fight for every belief. No assessment in which the strategy profile is (Out, Fight) is both sequentially rational and consistent.
10.4 Beliefs and sequential equilibrium
Why "weak" sequential equilibrium? The consistency condition's limitation to information sets reached with positive probability generates, in some games, a relatively large set of equilibrium assessments. Some of these equilibrium assessments do not plausibly correspond to steady states. Consider the following variant of the entry game:
10.4 Beliefs and sequential equilibrium
In this variant, Ready is better than Unready for the challenger, regardless of the incumbent's action.
The game nevertheless has a weak sequential equilibrium in which the challenger's strategy is Out, the incumbent's strategy is Fight, and the incumbent believes at its information set that the history is Unready (with probability one).
In this equilibrium, the incumbent believes that the challenger has chosen Unready, although this action is dominated by Ready for the challenger. Such a belief does not seem reasonable.
10.5 Signaling games
In many interactions, information is asymmetric: some parties are more informed than others. In one interesting class of situations, the informed parties have the opportunity to take actions observed by the uninformed parties before the uninformed parties take actions that affect everyone: the informed parties' actions may "signal" their information.
10.5 Signaling games
Example 332.1: Entry as a signaling game
The challenger is strong with probability 𝑝 and weak with probability 1 − 𝑝 (with 0 < 𝑝 < 1). The challenger knows its type, but the incumbent does not.
The challenger may either ready itself for battle or remain unready. Preparations cost a strong challenger 1 unit of payoff and a weak one 3 units.
The incumbent observes the challenger's readiness but not its type, and chooses either to fight or to acquiesce.
An unready challenger's payoff is 5 if the incumbent acquiesces to its entry, and fighting entails a loss of 2 units for each type.
The incumbent prefers to fight (payoff 1) rather than acquiesce to (payoff 0) a weak challenger, and prefers to acquiesce to (payoff 2) rather than fight (payoff −1) a strong one.
10.5 Signaling games
Figure 333.1
10.5 Signaling games
Figure 333.1 models this situation:
The empty history is in the center of the diagram.
The first move is made by chance (which determines the challenger's type).
Each type has two actions, so the challenger has four strategies.
The incumbent has two information sets, at each of which it has two actions (A and F), and thus also four strategies.
Searching for pure weak sequential equilibria: note that a weak challenger prefers Unready to Ready, regardless of the incumbent's actions (even if the incumbent acquiesces to a ready challenger and fights an unready one). Thus, in any weak sequential equilibrium, a weak challenger chooses Unready.
10.5 Signaling games
Consider each possible action of a strong challenger.
Strong challenger chooses Ready:
Both of the incumbent's information sets are reached, so the consistency condition restricts its beliefs at each set.
At the top information set, the incumbent must believe that the history was (Strong, Ready) with probability one (because a weak challenger never chooses Ready), and hence chooses A.
At the bottom information set, the incumbent must believe that the history was (Weak, Unready) with probability one, and hence chooses F.
Thus, if the challenger deviates and chooses Unready when he is strong, he is worse off (he gets 3 rather than 4).
We conclude that the game has a weak sequential equilibrium in which the challenger chooses Ready when he is strong and Unready when he is weak, and the incumbent acquiesces when it sees Ready and fights when it sees Unready.
10.5 Signaling games
Strong challenger chooses Unready:
At its bottom information set, the incumbent believes, by consistency, that the history was (Strong, Unready) with probability 𝑝 and (Weak, Unready) with probability 1 − 𝑝. Thus its expected payoffs are:
To A: 𝑝(2) + (1 − 𝑝)(0) = 2𝑝
To F: 𝑝(−1) + (1 − 𝑝)(1) = 1 − 2𝑝
A is therefore optimal if 𝑝 ≥ 1/4, and F is optimal if 𝑝 ≤ 1/4.
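This threshold can be checked with a short numerical sketch. The payoff numbers follow the example as reconstructed here, including the reading (implied by the 1 − 2𝑝 expression) that fighting a strong challenger yields the incumbent −1.

```python
# The incumbent's expected payoffs after Unready when both challenger
# types pool on it; p is the probability that the challenger is strong.
def acquiesce_payoff(p):
    return 2 * p + 0 * (1 - p)      # 2 vs strong, 0 vs weak: 2p

def fight_payoff(p):
    return -1 * p + 1 * (1 - p)     # -1 vs strong, 1 vs weak: 1 - 2p

# Acquiesce is optimal iff 2p >= 1 - 2p, i.e. p >= 1/4.
assert acquiesce_payoff(0.25) == fight_payoff(0.25)   # indifferent at 1/4
assert acquiesce_payoff(0.5) > fight_payoff(0.5)      # A optimal above
assert fight_payoff(0.1) > acquiesce_payoff(0.1)      # F optimal below
```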
10.5 Signaling games
Suppose that 𝑝 ≥ 1/4 and the incumbent chooses A in response to Unready:
A strong challenger who chooses Unready obtains the payoff of 5. If he switches to Ready, his payoff is less than 5 regardless of the incumbent's action.
The incumbent may hold any belief about the type of a ready challenger and, depending on this belief, may fight or acquiesce.
Thus, if 𝑝 ≥ 1/4, the game has a weak sequential equilibrium in which both types of challenger choose Unready and the incumbent acquiesces to an unready challenger.
10.5 Signaling games
Now suppose that 𝑝 ≤ 1/4 and the incumbent chooses F in response to Unready:
A strong challenger who chooses Unready obtains the payoff of 3. If he switches to Ready, his payoff is 2 if the incumbent fights and 4 if it acquiesces. Thus, for an equilibrium, the incumbent must fight a ready challenger.
If the incumbent believes that a ready challenger is weak with high enough probability (at least ¾), fighting is indeed optimal.
Is such a belief consistent? Yes: the consistency condition does not restrict the incumbent's belief upon observing Ready, because this action is not taken when the challenger follows his strategy of choosing Unready regardless of his type.
Thus, if 𝑝 ≤ 1/4, the game has a weak sequential equilibrium in which:
Both types of challenger choose Unready.
The incumbent fights regardless of the challenger's action.
The incumbent assigns probability at least ¾ to the challenger's being weak if it observes that the challenger is ready for battle.
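The off-path belief needed to support this pooling equilibrium can be verified the same way (again using the reading that the incumbent earns −1 from fighting a strong challenger; the function name is my own).

```python
# At the off-path Ready information set the incumbent may hold any
# belief.  Fight is a best reply there iff the belief puts enough
# weight on the challenger being weak.
def fight_optimal_at_ready(mu_strong):
    """mu_strong = probability the incumbent assigns to a strong type."""
    fight = -1 * mu_strong + 1 * (1 - mu_strong)   # = 1 - 2*mu_strong
    acquiesce = 2 * mu_strong
    return fight >= acquiesce

# Fight is optimal iff mu_strong <= 1/4, i.e. the incumbent believes
# a ready challenger is weak with probability at least 3/4.
assert fight_optimal_at_ready(0.25)
assert fight_optimal_at_ready(0.0)
assert not fight_optimal_at_ready(0.3)
```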
10.5 Signaling games
This example shows that two kinds of pure strategy equilibrium may exist in signaling games:
Separating equilibrium: each type of sender (of the signal) chooses a different action, so that, upon observing the sender's action, the receiver (of the signal) knows the sender's type.
Pooling equilibrium: all types of sender choose the same action, so that the sender's action gives the receiver no clue to the sender's type.
Note: if the sender has more than two types, mixtures of these kinds of equilibrium may exist (the set of types may be divided into groups, within each of which all types choose the same action and between which the actions are different).
10.8 Strategic information transmission
The situation: You research the market for a new product and submit a report to your boss, who decides which product to develop.
Your preferences differ from those of your boss: you are interested in promoting the interests of your division, while your boss is interested in promoting the interests of the whole firm.
If you report the results of your research without distortion, the product your boss will choose is not the best one for you. If you systematically distort your findings, your boss will be able to unravel your report and deduce your actual findings. Obfuscation therefore seems a more promising route.
10.8 Strategic information transmission
The model:
A sender (you) observes the state 𝑡, a number between 0 and 1, that a receiver (the boss) cannot see. The distribution of the state is uniform: Pr(𝑡 ≤ 𝑧) = 𝑧.
The sender submits a report 𝑟 (a number) to the receiver. The receiver observes the report and takes an action 𝑦 (a number).
The payoff functions are:
Sender: −(𝑦 − (𝑡 + 𝑏))²
Receiver: −(𝑦 − 𝑡)²
where 𝑏 (the sender's bias) is a fixed number that reflects the divergence between the sender's and the receiver's preferences. Note that the receiver's optimal action is 𝑦 = 𝑡 and the sender's optimal action is 𝑦 = 𝑡 + 𝑏 (see Figure 343.1).
10.8 Strategic information transmission
Figure 343.1 (players' payoff functions)
10.8 Strategic information transmission
10.8.1 Perfect information transmission?
Consider a candidate equilibrium in which the sender accurately reports the state she observes: 𝑟(𝑡) = 𝑡 for all 𝑡.
Given this strategy, the consistency condition requires that the receiver believe (correctly) that the state is 𝑡 when the sender reports 𝑡. The receiver hence optimally chooses the action 𝑦 = 𝑡 (the maximum of −(𝑦 − 𝑡)²).
Is the sender's strategy a best response to the receiver's strategy? Not if 𝑏 > 0. Suppose the state is 𝑡. If the sender reports 𝑡, the receiver chooses 𝑦 = 𝑡 and the sender's payoff is −𝑏². If the sender instead reports 𝑡 + 𝑏, the receiver chooses 𝑦 = 𝑡 + 𝑏 and the sender's payoff is 0.
So, unless the sender's and the receiver's preferences are the same (𝑏 = 0), the game has no equilibrium in which the sender accurately reports the state.
10.8 Strategic information transmission
10.8.2 No information transmission?
Consider an equilibrium in which the sender reports a constant value: 𝑟(𝑡) = 𝑐 for all 𝑡.
The consistency condition requires that if the receiver observes the report 𝑐, his belief remain the same as it was initially (state uniformly distributed between 0 and 1). The expected value of 𝑡 is then 𝐸(𝑡) = 1/2, and the receiver's optimal action (the action that maximizes his expected payoff) is 𝑦 = 1/2.
The consistency condition does not constrain the receiver's belief about the state upon receiving a report different from 𝑐: such a report does not occur if the sender follows her strategy. Note also that if the receiver simply ignores the sender's report completely, his optimal action remains the same.
Because the sender's report has no effect on the receiver's action, any constant report is optimal for the sender; in particular, 𝑟(𝑡) = 𝑐 is optimal.
10.8 Strategic information transmission
In summary, for every value of 𝑏, the game has a weak sequential equilibrium in which the sender's report conveys no information (a constant report): the receiver ignores the report (he maintains his initial belief about the state) and takes the action that maximizes his expected payoff.
If 𝑏 is small, this equilibrium is not very attractive for either the sender or the receiver. For example, if 𝑏 = 1/4, then for any 𝑡 with 0 ≤ 𝑡 < 1/4 both the sender and the receiver are better off if the receiver's action is 𝑡 + 𝑏 rather than 1/2.
10.8 Strategic information transmission
10.8.3 Some information transmission
Does the game have equilibria in which some information is transmitted? Suppose the sender makes one of two reports:
𝑟1 if 0 ≤ 𝑡 ≤ 𝑡1
𝑟2 if 𝑡1 ≤ 𝑡 ≤ 1
with 𝑟1 ≠ 𝑟2.
Consider the receiver's optimal response to this strategy:
If he sees the report 𝑟1, the consistency condition requires that he believe the state to be uniformly distributed between 0 and 𝑡1. His optimal action is then 𝑦 = 𝑡1/2.
Similarly, if he sees the report 𝑟2, the consistency condition requires that he believe the state to be uniformly distributed between 𝑡1 and 1. His optimal action is then 𝑦 = (1 + 𝑡1)/2.
10.8 Strategic information transmission
The consistency condition does not restrict the receiver's belief if he sees a report other than 𝑟1 or 𝑟2. Assume therefore that for any such report, the receiver's belief is one of the two beliefs he holds upon seeing 𝑟1 or 𝑟2 (so the optimal action is either 𝑦 = 𝑡1/2 or 𝑦 = (1 + 𝑡1)/2).
Now, for equilibrium, we need the sender's report 𝑟1 to be optimal when 0 ≤ 𝑡 ≤ 𝑡1 and her report 𝑟2 to be optimal when 𝑡1 ≤ 𝑡 ≤ 1, given the receiver's strategy. By changing her report, the sender can change the receiver's optimal action from 𝑡1/2 to (1 + 𝑡1)/2, and vice versa.
In particular, for the report 𝑟1 to be optimal when 0 ≤ 𝑡 ≤ 𝑡1, the sender must, in state 𝑡1, like the action 𝑡1/2 at least as much as (1 + 𝑡1)/2 (and vice versa for the report 𝑟2). So, in state 𝑡1, the sender must be indifferent between the two actions 𝑡1/2 and (1 + 𝑡1)/2:
10.8 Strategic information transmission
This indifference implies that 𝑡1 + 𝑏 (the sender's preferred action in state 𝑡1) is midway between 𝑡1/2 and (1 + 𝑡1)/2 (the receiver's two optimal actions). So (see Figure 346.1):

𝑡1 + 𝑏 = ½ (𝑡1/2 + (1 + 𝑡1)/2)

which gives 𝑡1 = ½ − 2𝑏.

Figure 346.1
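The indifference at 𝑡1 can be verified numerically. A minimal sketch with the quadratic-loss payoff from the model; the function names are mine.

```python
# At the state t1 = 1/2 - 2b the sender is indifferent between the two
# receiver actions t1/2 and (1 + t1)/2: both are at distance 1/4 from
# her preferred point t1 + b.
def sender_payoff(y, t, b):
    return -(y - (t + b)) ** 2

def indifference_gap(b):
    """Sender's payoff difference between the two actions at state t1."""
    t1 = 0.5 - 2 * b
    y_low, y_high = t1 / 2, (1 + t1) / 2
    return sender_payoff(y_low, t1, b) - sender_payoff(y_high, t1, b)

# Any bias below 1/4 gives a valid two-report equilibrium threshold.
for b in (0.0, 0.05, 0.1, 0.2):
    assert abs(indifference_gap(b)) < 1e-12
```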
10.8 Strategic information transmission
We need 𝑡1 > 0: this condition is satisfied only if 𝑏 < 1/4. If 𝑏 ≥ 1/4, the game has no equilibrium in which the sender makes two different reports. Put differently, if preferences diverge too much, there is no point in asking the sender to submit a report: the receiver should simply take the best action for himself given his prior belief.
𝑡1 = ½ − 2𝑏 is not only a necessary condition for equilibrium but also a sufficient one. Indeed, in that case:
In every state with 0 ≤ 𝑡 < 𝑡1, the sender optimally reports 𝑟1.
In every state with 𝑡1 ≤ 𝑡 ≤ 1, the sender optimally reports 𝑟2 ≠ 𝑟1.
This follows from the shape of the payoff function, which is symmetric (see Figure 346.2).
10.8 Strategic information transmission
Figure 346.2
10.8 Strategic information transmission
This equilibrium is better for both the receiver and the sender than the one in which no information is transmitted. Consider the receiver:
If no information is transmitted, he takes the action ½ in all states, and his payoff in each state 𝑡 is −(½ − 𝑡)².
In the two-report equilibrium, his payoff is:
−(𝑡1/2 − 𝑡)² for 0 ≤ 𝑡 < 𝑡1
−((1 + 𝑡1)/2 − 𝑡)² for 𝑡1 ≤ 𝑡 ≤ 1
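To see the receiver's gain ex ante, one can integrate these payoffs over the uniform state. A simple numerical sketch (the midpoint-sum discretization and names are my own; the text's comparison is state by state, but it implies the ex-ante ranking checked here):

```python
# Compare the receiver's ex-ante expected payoff under the babbling
# (no-information) equilibrium and the two-report equilibrium with
# t1 = 1/2 - 2b, using a midpoint Riemann sum over t ~ U[0, 1].
def receiver_expected_payoff(action_of, n=100_000):
    total = 0.0
    for i in range(n):
        t = (i + 0.5) / n               # midpoint of the i-th subinterval
        y = action_of(t)
        total += -(y - t) ** 2
    return total / n

b = 0.1
t1 = 0.5 - 2 * b                         # = 0.3
babbling = receiver_expected_payoff(lambda t: 0.5)
two_report = receiver_expected_payoff(
    lambda t: t1 / 2 if t < t1 else (1 + t1) / 2)

# babbling is about -1/12; the two-report equilibrium does strictly better
assert two_report > babbling
```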
10.8 Strategic information transmission
10.8.4 How much information transmission?
For 𝑏 < 1/4, does the game have equilibria in which more information is transmitted than in the two-report equilibrium? Consider an equilibrium in which the sender makes one of 𝐾 reports, depending on the state. Specifically, the sender's report is:
𝑟1 if 0 ≤ 𝑡 < 𝑡1
𝑟2 if 𝑡1 ≤ 𝑡 < 𝑡2
…
𝑟𝐾 if 𝑡𝐾−1 ≤ 𝑡 ≤ 1
where 𝑟𝑖 ≠ 𝑟𝑗 for 𝑖 ≠ 𝑗.
The equilibrium analysis follows the same lines as for the two-report equilibrium.
10.8 Strategic information transmission
Specifically:
If the receiver observes the report 𝑟𝑘, the consistency condition requires that he believe the state to be uniformly distributed between 𝑡𝑘−1 and 𝑡𝑘. He therefore optimally takes the action ½(𝑡𝑘−1 + 𝑡𝑘).
If he observes a report different from every 𝑟𝑘, the consistency condition does not restrict his belief. We assume that his belief in that case is one of the beliefs he holds upon receiving one of the reports 𝑟𝑘.
Now, for equilibrium, we need the sender's report 𝑟𝑘 to be optimal when the state is 𝑡 with 𝑡𝑘−1 ≤ 𝑡 < 𝑡𝑘, for 𝑘 = 1, …, 𝐾. A sufficient condition for optimality is that, in each state 𝑡𝑘, 𝑘 = 1, …, 𝐾 − 1, the sender be indifferent between the reports 𝑟𝑘 and 𝑟𝑘+1 and, therefore, between the receiver's actions ½(𝑡𝑘−1 + 𝑡𝑘) and ½(𝑡𝑘 + 𝑡𝑘+1).
10.8 Strategic information transmission
This indifference implies that 𝑡𝑘 + 𝑏 is equal to the average of ½(𝑡𝑘−1 + 𝑡𝑘) and ½(𝑡𝑘 + 𝑡𝑘+1):

𝑡𝑘 + 𝑏 = ½ (½(𝑡𝑘−1 + 𝑡𝑘) + ½(𝑡𝑘 + 𝑡𝑘+1))

or

𝑡𝑘+1 − 𝑡𝑘 = 𝑡𝑘 − 𝑡𝑘−1 + 4𝑏

That is, the interval of states for which the sender's report is 𝑟𝑘+1 is longer by 4𝑏 than the interval for which the report is 𝑟𝑘.
The length of the first interval, from 0 to 𝑡1, is 𝑡1. The sum of the lengths of all the intervals must equal one:

𝑡1 + (𝑡1 + 4𝑏) + ⋯ + (𝑡1 + (𝐾 − 1)4𝑏) = 1

or

𝐾𝑡1 + 4𝑏(1 + 2 + ⋯ + (𝐾 − 1)) = 1
10.8 Strategic information transmission
The sum of the first 𝑛 positive integers is ½𝑛(𝑛 + 1), so the condition becomes:

𝐾𝑡1 + 2𝑏𝐾(𝐾 − 1) = 1

If 𝑏 is small enough that 2𝑏𝐾(𝐾 − 1) < 1, there is a positive value of 𝑡1 that satisfies this equation.
If 1/24 ≤ 𝑏 < 1/12, the inequality is satisfied only for 𝐾 ≤ 3. So, in the equilibrium in which the most information is transmitted, the sender chooses one of three reports; from 𝐾𝑡1 + 2𝑏𝐾(𝐾 − 1) = 1 we then have 𝑡1 = 1/3 − 4𝑏 and 𝑡2 = 2/3 − 4𝑏.
Figure 348.2 shows the equilibrium action 𝑦 taken by the receiver as a function of the state 𝑡. The values of the reports 𝑟𝑘 do not matter as long as no two are the same (we think of them as words in a language).
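These formulas can be put together in a short sketch that computes the most informative partition for a given bias (the function names are my own, but the formulas are the ones derived above).

```python
# Most informative cheap-talk partition for bias b (assumes b > 0):
# max K with 2*b*K*(K-1) < 1, then thresholds from
# t1 = (1 - 2*b*K*(K-1)) / K and t_{k+1} - t_k = t_k - t_{k-1} + 4*b.
def max_reports(b):
    K = 1
    while 2 * b * (K + 1) * K < 1:   # is K + 1 still feasible?
        K += 1
    return K

def thresholds(b, K):
    """Return [0, t1, ..., t_{K-1}, 1], the interval endpoints."""
    t1 = (1 - 2 * b * K * (K - 1)) / K
    ts, length = [0.0], t1
    for _ in range(K):
        ts.append(ts[-1] + length)
        length += 4 * b              # each interval is 4b longer
    return ts

b = 1 / 20                           # 1/24 <= b < 1/12, so K = 3
K = max_reports(b)
ts = thresholds(b, K)
# K == 3 and ts is approximately [0, 1/3 - 4b, 2/3 - 4b, 1]
```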
10.8 Strategic information transmission
Figure 348.2
10.8 Strategic information transmission
In summary: if there is a positive value of 𝑡1 that satisfies 𝐾𝑡1 + 2𝑏𝐾(𝐾 − 1) = 1, then the game has a weak sequential equilibrium in which the sender submits one of 𝐾 different reports, depending on the state.
For any given value of 𝑏, the largest value of 𝐾 for which an equilibrium exists is the largest value for which 2𝑏𝐾(𝐾 − 1) < 1. Setting 2𝑏𝐾(𝐾 − 1) = 1 and using the quadratic formula, we have 𝐾 = ½(1 + √(1 + 2/𝑏)).
Thus the larger the value of 𝑏, the smaller the largest value of 𝐾 possible in an equilibrium: the greater the difference between the sender's and the receiver's preferences, the coarser the information transmitted in the equilibrium with the largest number of steps (the most informative equilibrium).