Mergers and Acquisitions - Strategic Games

A (Short) Introduction to Game Theory
(based on M.J. Osborne, An Introduction to Game Theory, Oxford University Press, 2004)
Pedagogic approach
More seriously
 I assume “nothing known”
 Any question is welcome (if possible, in English)
 The goal is to “enter into the field”, not to cover as much material as
possible (I do not care about reaching the end of the announced
program, but I care a lot about an in-depth understanding)
 The mathematical level of the lecture should not be too challenging
(basic equations resolution, some mathematical optimization)
 We will “play” many games during the lecture
 You MUST work each week to prepare the lecture (read the slides
in advance, prepare questions, review concept definitions, …)
 I rely on you to correct my numerous mistakes
 This is a theoretical lecture !


Game Theory - A (Short) Introduction 3 9/12/2011
Outline
 1 Introduction
 1.1 What is game theory?
 1.2 The theory of rational choice
 1.3 Coming attractions: interacting decision-makers
 2 Nash Equilibrium Theory (perfect information)
 2.1 Strategic games
 2.2 Example: the Prisoner’s Dilemma
 2.3 Example: Bach or Stravinsky?
 2.4 Example: Matching Pennies
 2.5 Example: the Stag Hunt
 2.6 Nash equilibrium
 2.7 Examples of Nash equilibrium
 2.8 Best response functions
Game Theory - A (Short) Introduction 4 9/12/2011
Outline
 2.9 Dominated actions
 2.10 Equilibrium in a single population: symmetric games and
symmetric equilibria
 3 Nash Equilibrium: Illustrations
 3.5 Auctions
 4 Mixed Strategy Equilibrium (probabilistic behavior)
 4.1 Introduction
 4.2 Strategic games in which players may randomize
 4.3 Mixed strategy Nash equilibrium
 4.4 Dominated actions
 4.5 Pure equilibria when randomization is allowed
 4.7 Equilibrium in a single population
Outline
 4.9 The formation of players' beliefs
 4.10 Extension: finding all mixed strategy Nash equilibria
 4.11 Extension: games in which each player has a continuum of
actions
 4.12 Appendix: Representing preferences by expected payoffs
 9 Bayesian Games (imperfect information)
 9.1 Motivational examples
 9.2 General definitions
 9.3 Two examples concerning information
 9.6 Illustration: auctions


Game Theory - A (Short) Introduction 5 9/12/2011
Outline
 5 Extensive Games (Perfect Information): Theory
 5.1 Extensive games with perfect information
 5.2 Strategies and outcomes
 5.3 Nash equilibrium
 5.4 Subgame perfect equilibrium
 5.5 Finding subgame perfect equilibria of finite horizon games:
backward induction
 10 Extensive Games (Imperfect Information)
 10.1 Extensive games with imperfect information
 10.2 Strategies
 10.3 Nash equilibrium
 10.4 Beliefs and sequential equilibrium
 10.5 Signaling games
 10.8 Illustration: strategic information transmission

Game Theory - A (Short) Introduction 6 9/12/2011
1 Introduction
Game Theory - A (Short) Introduction 8 9/12/2011
1.1 What is game theory?
 Game theory aims to help understand situations in which
decision-makers interact.
 The main fields of applications are:
 Economic analysis
 Social analysis
 Politics
 Biology
 Typical applications:
 Competing firms
 Bidders in auctions
 Main tool: model development. This involves a trade-off between:
 Realistic assumptions
 Simplicity

Game Theory - A (Short) Introduction 9 9/12/2011
1.1 What is game theory?
 An outline of the history of game theory
 First major development in the 1920s
 Emile Borel
 John von Neumann
 Decisive publication: “Theory of Games and Economic Behavior”,
von Neumann and Morgenstern (1944)
 Early 1950s: John Nash
 Nash equilibrium
 Game-theoretic study of bargaining
 1994 Nobel Prize in Economic Sciences
 Harsanyi (1920-2000) → Bayesian games (Harsanyi doctrine)
 Nash (1928-) → Nash equilibrium
 Selten (1930-) → Bounded rationality, extensive games
Game Theory - A (Short) Introduction 10 9/12/2011
1.1 What is game theory?
 Modeling process
 Step 1: selecting aspects of a given situation (that appear to be
relevant) and incorporating them into a model. This step is mostly
an “art”
 Step 2: model analysis (using logic and mathematics)
 Step 3: studying the model's implications to determine whether our
ideas make sense. This may point towards a revision of the
model's assumptions in order to better capture “stylized facts”.
Game Theory - A (Short) Introduction 11 9/12/2011
1.2 The theory of rational choice
 Rational choice:
 The decision-maker chooses the best action according to her preferences, among
all the actions available to her
 No qualitative restriction is placed on preferences
Rationality means consistency of the decision-maker's choices when faced with different sets of
available actions.

 The theory is based on two components: Actions and
Preferences

 1.2.1 Actions
 Set A consisting of all actions that, under some circumstances, are available to the
decision-maker
 In any given situation, the decision-maker knows the subset of available choices,
and takes it as given (the subset is not influenced by the decision-maker's
preferences)
Game Theory - A (Short) Introduction 12 9/12/2011
1.2 The theory of rational choice
 1.2.2 Preferences and payoff functions
 We assume that the decision-maker, when presented with any pair of
actions, knows which of the pair she prefers
 We assume further that these preferences are consistent (if a > b and
b > c, then a > c).

 Preferences representation: preferences can be represented by a
payoff function:
the payoff function associates a number with each action in such a way
that actions with higher numbers are preferred.
More precisely:
u(a) > u(b) if and only if the decision-maker prefers a to b

(Economists often speak about utility function)
Game Theory - A (Short) Introduction 13 9/12/2011
1.2 The theory of rational choice
 Exercise 5.3

 Person 1 cares about both her income and person 2's income.
Precisely, the value she attaches to each unit of her own income is
the same as the value she attaches to any two units of person 2’s
income. For example, she is indifferent between a situation in
which her income is 1 and person 2’s is 0, and one in which her
income is 0 and person 2’s is 2. How do her preferences order the
outcomes (1,4), (2,1) and (3,0), where the first component in each
case is her income and the second component is person 2’s
income? Give a payoff function consistent with these preferences.
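A worked illustration (my own sketch in Python, not the official solution): one payoff function consistent with these preferences values each unit of person 1's own income as much as two units of person 2's income.

# Hypothetical payoff function for Exercise 5.3: one unit of own income is
# worth as much as two units of person 2's income, so u(x1, x2) = x1 + x2/2.
def u(own_income, other_income):
    return own_income + other_income / 2

for outcome in [(1, 4), (2, 1), (3, 0)]:
    print(outcome, u(*outcome))
# (1, 4) -> 3.0, (2, 1) -> 2.5, (3, 0) -> 3.0:
# (1, 4) and (3, 0) are indifferent, and both are preferred to (2, 1).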
Game Theory - A (Short) Introduction 14 9/12/2011
1.2 The theory of rational choice
 Note that, as decision-maker’s preferences convey only ordinal
information, the payoff function also conveys only ordinal preference.
Eg.: if u(a)=0, u(b)=1 and u(c)=100, it doesn’t mean that the decision-
maker likes c a lot more than b! A payoff function contains no such
information.

 Note that, as a consequence, a decision-maker’s preferences can be
represented by many different payoff functions.
If u represents a decision-maker’s preferences and v is another payoff
function for which
v(a) > v(b) if and only if u(a) > u(b)
then v also represents the decision-maker’s preferences.

More succinctly: if u represents a decision-maker's preferences, then
any increasing function of u also represents these preferences.
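A minimal sketch (mine, not from the slides) of this ordinal-equivalence idea; the same check can be applied to Exercise 6.1 below.

from itertools import combinations

def same_ordinal_preferences(u, v, actions):
    # u and v are dicts mapping actions to payoffs; they represent the same
    # preferences iff they rank every pair of actions identically.
    return all(
        (u[a] > u[b]) == (v[a] > v[b]) and (u[a] == u[b]) == (v[a] == v[b])
        for a, b in combinations(actions, 2)
    )

u = {"a": 0, "b": 1, "c": 4}
v = {"a": -1, "b": 0, "c": 2}   # an increasing transformation of u
w = {"a": 0, "b": 0, "c": 8}    # ties a and b, which u does not
print(same_ordinal_preferences(u, v, "abc"))  # True
print(same_ordinal_preferences(u, w, "abc"))  # False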
Game Theory - A (Short) Introduction 15 9/12/2011
1.2 The theory of rational choice
 Exercise 6.1

 A decision-maker’s preferences over the set A={a,b,c} are
represented by the payoff function u for which u(a)=0, u(b)=1 and
u(c)=4. Are they also represented by the function v for which v(a)=-
1,v(b)=0, and v(c)=2? How about the function w for which
w(a)=w(b)=0 and w(c)=8?
Game Theory - A (Short) Introduction 16 9/12/2011
1.2 The theory of rational choice
 1.2.3 The theory of rational choice

The theory of rational choice is that the action chosen by a decision-
maker is at least as good, according to her preferences, as every
other available action.

Note that not every collection of choices for different sets of
available actions is consistent with the theory.
Eg.: we observe that a decision-maker chooses a whenever she faces the set {a,b}, but
sometimes chooses b when facing {a,b,c}. This is inconsistent:
- always choosing a when facing {a,b} means that the decision-maker prefers a to b
- so, when facing {a,b,c}, she must choose either a or c.
(Independence of irrelevant alternatives)
Game Theory - A (Short) Introduction 17 9/12/2011
1.2 The theory of rational choice
 1.3 Coming attractions
 Up to now, the decision-maker cares only about her own choice.
 In the real world, a decision-maker often does not control all the
variables that affect her.

Game theory studies situations in which some of the variables that
affect the decision-maker are the actions of other decision-
makers.
2 Nash Equilibrium:
Theory
Game Theory - A (Short) Introduction 19 9/12/2011
2.1 Strategic games
 Terminology:
 we refer to decision-makers as players
 each player has a set of possible actions
 the action profile is the list of all players’ actions
 each player has preferences about the action profiles

 Definition 13.1 (Strategic game with ordinal preferences)
A strategic game with ordinal preferences consists of
 a set of players
 for each player, a set of actions
 for each player, preferences over the set of action profiles
Game Theory - A (Short) Introduction 20 9/12/2011
2.1 Strategic games
 Note that:
 This allows us to model a very wide range of situations:
 players = firms, actions = prices, preferences = profits
 players = animals, actions = fighting for a prey, preferences =
winning or losing
 It is frequently convenient to specify the players' preferences by
giving payoff functions that represent them. Keep however in
mind that a strategic game with ordinal preferences is defined by
the players' preferences, not by the payoffs that represent these
preferences
 Time is absent from the model : each player chooses her action
once and for all and the players choose their actions
simultaneously (no player is informed of the action chosen by any
other player)
Game Theory - A (Short) Introduction 21 9/12/2011
2.2 Example: the Prisoner’s
Dilemma

 Example 14.1
Two suspects in a major crime are held in separate cells. There is
enough evidence to convict each of them of a minor offense, but
not enough evidence to convict either of them of the major crime
unless one of them acts as an informer against the other (finks). If
they both stay quiet, each will be convicted of the minor offense
and spend one year in prison. If one and only one of them finks, she
will be freed and used as a witness against the other, who will
spend four years in prison. If the both fink, each will spend three
years in prison.

Model this situation as a strategic game.
Game Theory - A (Short) Introduction 22 9/12/2011
2.2 Example: the Prisoner’s
Dilemma
 Solution
 Players: the two suspects
 Actions: Each player’s set of actions is {Quiet, Fink}
 Preferences: Suspect 1's ordering of the action profiles (from
best to worst):
 (Fink, Quiet) → free
 (Quiet, Quiet) → one year in prison
 (Fink, Fink) → three years in prison
 (Quiet, Fink) → four years in prison
(and vice-versa for player 2)

We can adopt a payoff function for each player:
u_1(Fink, Quiet) > u_1(Quiet, Quiet) > u_1(Fink, Fink) > u_1(Quiet, Fink)
Eg.:
u_1(Fink, Quiet) = 3, u_1(Quiet, Quiet) = 2, u_1(Fink, Fink) = 1, u_1(Quiet, Fink) = 0
Game Theory - A (Short) Introduction 23 9/12/2011
2.2 Example: the Prisoner’s
Dilemma
Graphically, the situation is the following (the numbers are the players' payoffs):

                           Suspect 2
                        Quiet      Fink
Suspect 1    Quiet      (2,2)      (0,3)
             Fink       (3,0)      (1,1)
The prisoner's dilemma models a situation in which there are gains from cooperation
(each player prefers the outcome in which both choose Quiet to the one in which both
choose Fink) but each player has an incentive to free ride whatever the other player does.
Game Theory - A (Short) Introduction 24 9/12/2011
2.2 Example: the Prisoner’s
Dilemma
 2.2.1 Working on a joint project
You are working with a friend on a joint project. Each of you
can either work hard or goof off. If your friend works hard, then
you prefer to goof off (the outcome of the project would be
better if you worked hard too, but the increment in its value to
you is not worth the extra effort). You prefer the outcome of
your both working hard to the outcome of your both goofing off
(in which case nothing gets accomplished), and the worst
outcome for you is that you work hard and your friend goofs off
(you hate to be exploited).

Model this situation as a strategic game.
Game Theory - A (Short) Introduction 25 9/12/2011
2.2 Example: the Prisoner’s
Dilemma
 2.2.2 Duopoly

In a simple model of a duopoly, two firms produce the same
good, for which each firm charges either a low price or a high
price. Each firm wants to achieve the highest possible profit. If
both firms choose High, then each earns a profit of $1000. If
one firm chooses High and the other chooses Low, then the firm
choosing High obtains no customers and makes a loss of $200,
whereas the firm choosing Low earns a profit of $1200 (its unit
profit is low, but its volume is high). If both firms choose Low,
then each earns a profit of $600. Each firm cares only about its
profit.

Model this situation as a strategic game.
Game Theory - A (Short) Introduction 26 9/12/2011
2.2 Example: the Prisoner’s
Dilemma
 Exercise 17.1
Determine whether each of the following games differs from the
Prisoner’s Dilemma only in the names of the players’ actions
Game 1:
            X        Y
   X       3,3      1,5
   Y       5,1      0,0

Game 2:
            X        Y
   X       2,1      0,5
   Y       3,-2     1,-1
An application to M&As: the Grossman & Hart free riding argument.
Game Theory - A (Short) Introduction 27 9/12/2011
2.3 Example: Bach or Stravinsky?
(Battle of the Sexes or BoS)
 Situation:
 Players agree that it is better to cooperate
 Players disagree about the best outcome

Example 18.2
Two people wish to go out together. Two concerts are available:
one of music by Bach, and one of music by Stravinsky. One person
prefers Bach and the other prefers Stravinsky. If they go to different
concerts, each of them is equally unhappy listening to the music of
either composer.

Model this situation as a strategic game.
An application to merging banks: two banks are merging. Both
agree that they will be better off using the same information
system technology but they disagree on which one to choose.
Google versus Microsoft/Yahoo
Game Theory - A (Short) Introduction 29 9/12/2011
2.3 Example: Bach or Stravinsky?
(Battle of the Sexes or BoS)
Solution

                            Player 2
                       Bach       Stravinsky
Player 1   Bach        (2,1)        (0,0)
           Stravinsky  (0,0)        (1,2)
Game Theory - A (Short) Introduction 30 9/12/2011
2.4 Example: Matching Pennies
 Situation:
 A purely conflictual situation

Example 19.1

Two people choose, simultaneously, whether to show the head or
the tail of a coin. If they show the same side, person 2 pays person
1 a dollar. If they show different sides, person 1 pays person 2 a
dollar. Each person cares only about the amount of money she
receives (and is a profit maximizer!).

Model this situation as a strategic game.

 An application to choices of appearances for new products by an established
producer and a new entrant in a market of fixed size: the established producer
prefers the newcomer's product to look different from its own (to avoid confusion)
while the newcomer prefers that the products look alike.
iPhone iOS versus Android
Game Theory - A (Short) Introduction 32 9/12/2011
2.4 Example: Matching Pennies
Solution

                        Player 2
                     Head      Tail
Player 1   Head     (1,-1)    (-1,1)
           Tail     (-1,1)    (1,-1)
Game Theory - A (Short) Introduction 33 9/12/2011
2.5 Example: the Stag Hunt
 Situation:
 Cooperation is better for both but not credible.

Example 20.2
Each of a group of hunters has two options: she may remain
attentive to the pursuit of a stag, or she may catch a hare. If all
hunters pursue the stag, they catch it and share it equally. If any
hunter devotes her energy to catching a hare, the stag escapes,
and the hare belongs to the defecting hunter alone. Each hunter
prefers a share of the stag to a hare.

Model this situation as a strategic game.
Game Theory - A (Short) Introduction 34 9/12/2011
2.5 Example: the Stag Hunt
 Solution

                        Player 2
                     Stag      Hare
Player 1   Stag     (2,2)     (0,1)
           Hare     (1,0)     (1,1)
Game Theory - A (Short) Introduction 35 9/12/2011
2.6 Nash equilibrium
 Question:
What actions will be chosen by players in a strategic game?
(assuming that each player chooses the best available action)

 Answer:
To make a choice, each player must form a belief about other players’
action.

 Assumption:
We assume in strategic games that players' beliefs are derived from
their past experience playing the game:
 they know how their opponents will behave.
 note however that they do not know which specific opponent they are facing and
so, they cannot condition their behavior on being matched with a specific opponent.
Beliefs are about “typical” opponents, not any specific set of opponents.
Game Theory - A (Short) Introduction 36 9/12/2011
2.6 Nash equilibrium
 In this setup, a Nash equilibrium is an action profile a* with the
property that no player i can do better by choosing an action
different from a*_i, given that every other player j adheres to a*_j.

 Note:
 A Nash equilibrium corresponds to a steady state: if, whenever the
game is played, the action profile is the same Nash equilibrium a*,
then no player has a reason to choose any action different from her
component of a*.
 Players’ beliefs about each other’s actions are (assumed to be)
correct. This implies, in particular, that two players’ beliefs about a
third player’s action are the same (expectations are coordinated –
Harsanyi Doctrine).
Two key ingredients: rational choices and correct beliefs
Game Theory - A (Short) Introduction 37 9/12/2011
2.6 Nash equilibrium
 Notations and formal definition:
 Let a_i be the action of player i
 Let a be an action profile: a = (a_1, a_2, …, a_n)
 Let a'_i be any action of player i (different from a_i)
 Let (a'_i, a_-i) be the action profile in which every player j except i
chooses her action a_j as specified by a, whereas player i chooses
a'_i (the subscript -i stands for “except i”).
 (a'_i, a_-i) is the action profile in which all the players other than i
adhere to a while i “deviates” to a'_i.
Note that if a'_i = a_i, then (a'_i, a_-i) = (a_i, a_-i) = a
Game Theory - A (Short) Introduction 38 9/12/2011
2.6 Nash equilibrium
Definition 23.1 (Nash equilibrium of strategic game with ordinal
preferences)

The action profile a* in a strategic game with ordinal
preferences is a Nash equilibrium if, for every player i and every
action a_i of player i, a* is at least as good according to player i's
preferences as the action profile (a_i, a*_-i) in which player i
chooses a_i while every other player j chooses a*_j.

Equivalently:

u_i(a*) ≥ u_i(a_i, a*_-i) for every action a_i of player i
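A small Python sketch (mine, not from the slides) of Definition 23.1 for two-player games given as a payoff table: an action profile is a Nash equilibrium if neither player can gain by a unilateral deviation.

from itertools import product

def pure_nash_equilibria(payoffs):
    # payoffs: dict mapping (action1, action2) -> (u1, u2)
    rows = sorted({a1 for a1, _ in payoffs})
    cols = sorted({a2 for _, a2 in payoffs})
    equilibria = []
    for a1, a2 in product(rows, cols):
        u1, u2 = payoffs[(a1, a2)]
        best_1 = all(payoffs[(d1, a2)][0] <= u1 for d1 in rows)  # no profitable deviation for player 1
        best_2 = all(payoffs[(a1, d2)][1] <= u2 for d2 in cols)  # no profitable deviation for player 2
        if best_1 and best_2:
            equilibria.append((a1, a2))
    return equilibria

# Prisoner's Dilemma from the slides: the unique equilibrium is (Fink, Fink).
pd = {("Quiet", "Quiet"): (2, 2), ("Quiet", "Fink"): (0, 3),
      ("Fink", "Quiet"): (3, 0), ("Fink", "Fink"): (1, 1)}
print(pure_nash_equilibria(pd))  # [('Fink', 'Fink')]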
Game Theory - A (Short) Introduction 39 9/12/2011
2.6 Nash equilibrium
 Note:
 This definition implies neither that a strategic game necessarily has a
Nash equilibrium, nor that it has at most one.
 This definition is designed to model a steady state among experienced
players. An alternative approach (called “rationalizability”) is:
 to assume that players know each others’ preferences
 to consider what each player can deduce about the other players’ action
from their rationality and their knowledge of each other’s rationality
 Nash equilibrium has been studied experimentally.
 The keys to designing a suitable experiment are:
 to ensure that players are experienced at playing the game
 to ensure that players do not repeatedly face the same opponents (each
play of the game must be in isolation)
 The key to correctly interpreting results is to remember that Nash
equilibrium is about equilibrium: the outcome must have converged (and
the theory says nothing about the process by which convergence
occurs).
Game Theory - A (Short) Introduction 40 9/12/2011
2.7 Examples of Nash
equilibrium
 2.7.1 Prisoner’s Dilemma
                           Suspect 2
                        Quiet      Fink
Suspect 1    Quiet      (2,2)      (0,3)
             Fink       (3,0)      (1,1)
Game Theory - A (Short) Introduction 41 9/12/2011
2.7 Examples of Nash
equilibrium
 Detailed explanation
 (Fink, Fink) is a Nash equilibrium because:
 given that player 2 chooses Fink, player 1 is better off choosing
Fink than Quiet
 given that player 1 chooses Fink, player 2 is better off choosing
Fink than Quiet
 No other action profile is a Nash equilibrium. Eg, (Quiet, Quiet) is
not a Nash equilibrium because:
 if player 2 chooses Quiet, player 1 is better off choosing Fink
 (moreover), if player 1 chooses Quiet, player 2 is also better off
choosing Fink
The incentive to free ride eliminates the possibility that
the mutually desirable outcome (Quiet, Quiet) occurs.
Game Theory - A (Short) Introduction 42 9/12/2011
2.7 Examples of Nash
equilibrium
 Note that:
 in the present case, the Nash equilibrium action is the best action
for each player:
 if the other player chooses her equilibrium action (Fink)
 but also if the other player chooses her other action (Quiet)
In this sense, this equilibrium is highly robust. But, this is not a
requirement of the Nash equilibrium. Only the first condition
must be met.

Game Theory - A (Short) Introduction 43 9/12/2011
2.7 Examples of Nash
equilibrium
Exercise 27.1
Each of two players has two possible actions, Quiet and Fink;
each action pair results in the players’ receiving amounts of
money equal to the numbers corresponding to that action pair in
the following figure:
                           Player 2
                        Quiet      Fink
Player 1     Quiet      (2,2)      (0,3)
             Fink       (3,0)      (1,1)
Game Theory - A (Short) Introduction 44 9/12/2011
2.7 Examples of Nash
equilibrium
Players are not “selfish”: the preferences of each player i are
represented by the payoff function m_i(a) + α·m_j(a), where m_i(a) is
the amount of money received by player i, j is the other player,
and α is a given non-negative number. Player 1's payoff to the
action pair (Quiet,Quiet) is, for example, 2 + 2α.

1. Formulate the strategic game that models this situation in the case
α=1. Is this game the Prisoner’s dilemma?
2. Find the range of values of α for which the resulting game is the
Prisoner’s dilemma. For values of α for which the game is not the
Prisoner’s dilemma, find the Nash equilibria.
Game Theory - A (Short) Introduction 45 9/12/2011
2.7 Examples of Nash
equilibrium
 2.7.2 BoS

                            Player 2
                       Bach       Stravinsky
Player 1   Bach        (2,1)        (0,0)
           Stravinsky  (0,0)        (1,2)
Nash equilibria are (B,B) and (S,S). Why?
Note that this means that BoS has two steady states!
Game Theory - A (Short) Introduction 46 9/12/2011
2.7 Examples of Nash
equilibrium
 2.7.3 Matching Pennies
                        Player 2
                     Head      Tail
Player 1   Head     (1,-1)    (-1,1)
           Tail     (-1,1)    (1,-1)
There is no Nash equilibrium. Why?
Game Theory - A (Short) Introduction 47 9/12/2011
2.7 Examples of Nash
equilibrium
 2.7.4 The Stag Hunt
                        Player 2
                     Stag      Hare
Player 1   Stag     (2,2)     (0,1)
           Hare     (1,0)     (1,1)
Nash equilibria are (S,S) and (H,H). Why?
Note that, although (S,S) is better for both players than (H,H), this
has no bearing on the equilibrium status of (H,H).
Game Theory - A (Short) Introduction 48 9/12/2011
2.7 Examples of Nash
equilibrium
Exercise 30.1 (extension to n players)
Consider the variant of the n-hunter Stag Hunt in which only m
hunters, with 2 ≤ m ≤ n, need to pursue the stag in order to catch it
(continue to assume that there is a single stag). Assume that a
captured stag is shared only by the hunters who catch it. Under
each of the following assumptions on the hunters' preferences, find
the Nash equilibria of the strategic game that models the
situation.
a. As before, each hunter prefers the fraction 1/m of the stag to
a hare;
b. Each hunter prefers a fraction 1/k of the stag to a hare, but
prefers a hare to any smaller fraction of the stag, where k is an
integer with m≤k≤n.
Game Theory - A (Short) Introduction 49 9/12/2011
2.7 Examples of Nash
equilibrium
 Note
In games with many Nash equilibria, the theory isolates more than
one steady state but says nothing about which one is more likely to
appear.
In some games, however, some of these equilibria seem more
likely to attract the players' attention than others. These equilibria
are called focal.
Example: (B,B) seems here more “likely” than (S,S)
                            Player 2
                       Bach       Stravinsky
Player 1   Bach        (2,2)        (0,0)
           Stravinsky  (0,0)        (1,1)
Game Theory - A (Short) Introduction 50 9/12/2011
2.7 Examples of Nash
equilibrium
 2.7.8 Strict and nonstrict equilibria
 The definition 23.1 requires only that the outcome of a deviation (by
a player) be no better for the deviant than the equilibrium outcome.
 An equilibrium is strict if each player's equilibrium action is better
than all her other actions, given the other players' actions:

u_i(a*) > u_i(a_i, a*_-i) for every action a_i ≠ a*_i of player i

(Note the strict inequality, contrasting with definition 23.1)

Game Theory - A (Short) Introduction 51 9/12/2011
2.8 Best Response Functions
 2.8.1 Definition
 In more complicated games, analyzing each action profile one by
one quickly becomes intractable.
 Let us denote the set of player i's best actions when the list of the
other players' actions is a_-i by B_i(a_-i) or, more precisely:

B_i(a_-i) = { a_i in A_i : u_i(a_i, a_-i) ≥ u_i(a'_i, a_-i) for all a'_i in A_i }

Any action in B_i(a_-i) is at least as good for player i as
every other action of player i when the other players'
actions are given by a_-i.
Game Theory - A (Short) Introduction 52 9/12/2011
2.8 Best Response Functions
 2.8.2 Using best response functions to define Nash equilibrium
 Proposition 36.1: The action profile a* is a Nash equilibrium of a
strategic game with ordinal preferences if and only if every player's
action is a best response to the other players' actions:

a*_i is in B_i(a*_-i) for every player i

 If each player i has a single best response to each list a_-i
(B_i(a_-i) = {b_i(a_-i)}), then this is equivalent to:

a*_i = b_i(a*_-i) for every player i

 The Nash equilibrium is then characterized by a set of n equations in
the n unknowns a*_i:

a*_1 = b_1(a*_2, …, a*_n)
…
a*_n = b_n(a*_1, …, a*_(n-1))
Game Theory - A (Short) Introduction 53 9/12/2011
2.8 Best Response Functions
 2.8.3 Using the best response functions to find Nash equilibria
 Procedure:
 1. find the best response function of each player
 2. find the action profiles that satisfy proposition 36.1
 Exercise 37.1.b
 Find the Nash equilibria of the game in Figure 38.1
 Represent the solution graphically

Figure 38.1:
            L        C        R
   T       2,2      1,3      0,1
   M       3,1      0,0      0,0
   B       1,0      0,0      0,0
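A Python sketch (mine, not from the slides) of the two-step procedure of Section 2.8.3 applied to the game of Figure 38.1: compute each player's best responses, then keep the profiles at which both players are best-responding.

game = {("T", "L"): (2, 2), ("T", "C"): (1, 3), ("T", "R"): (0, 1),
        ("M", "L"): (3, 1), ("M", "C"): (0, 0), ("M", "R"): (0, 0),
        ("B", "L"): (1, 0), ("B", "C"): (0, 0), ("B", "R"): (0, 0)}
rows, cols = ["T", "M", "B"], ["L", "C", "R"]

# Step 1: best response correspondences B1(column) and B2(row)
B1 = {c: {r for r in rows if game[(r, c)][0] == max(game[(r2, c)][0] for r2 in rows)}
      for c in cols}
B2 = {r: {c for c in cols if game[(r, c)][1] == max(game[(r, c2)][1] for c2 in cols)}
      for r in rows}

# Step 2: profiles at which each action is a best response to the other
print([(r, c) for r in rows for c in cols if r in B1[c] and c in B2[r]])
# [('T', 'C'), ('M', 'L'), ('B', 'R')]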
Game Theory - A (Short) Introduction 54 9/12/2011
2.8 Best Response Functions
 Solution (each player's best responses are marked with *)

            L          C          R
   T       2,2       1*,3*      0*,1
   M      3*,1*       0,0       0*,0
   B       1,0*       0,0*      0*,0*

The cells in which both payoffs are starred are the Nash equilibria: (T,C), (M,L) and (B,R).
(The original slide also represents the best responses graphically.)
Game Theory - A (Short) Introduction 55 9/12/2011
2.8 Best Response Functions
 Example 39.1
 Two individuals are involved in a synergistic relationship. If both
individuals devote more effort to the relationship, they are both
better off. For any given effort of individual j, the return to individual
i's effort first increases, then decreases. Specifically, an effort level
is a nonnegative number, and individual i's preferences (for i = 1, 2)
are represented by the payoff function a_i(c + a_j - a_i), where a_i is i's
effort level, a_j is the other individual's effort level, and c > 0 is a constant.
 Questions:
 Model the situation as a strategic game
 Find the players' best response functions
 Find the Nash equilibrium
 Represent the situation graphically
Game Theory - A (Short) Introduction 56 9/12/2011
2.8 Best Response Functions
 Strategic game:
 Players: the two individuals
 Actions: each player's set of actions is the set of effort levels
(nonnegative numbers)
 Preferences: player i's preferences are represented by the payoff
function a_i(c + a_j - a_i), for i = 1, 2

 Note that each player has infinitely many actions, so the game cannot
be represented by a payoff matrix, as previously.


Game Theory - A (Short) Introduction 57 9/12/2011
2.8 Best Response Functions
 Best response function:
 Intuitive construction
 Given a_j, individual i's payoff is a quadratic function of a_i that is
zero when a_i = 0 and when a_i = c + a_j. As a quadratic function is
symmetric about its maximum, player i's best response to a_j is:

b_i(a_j) = (1/2)(c + a_j)

(The original slide shows the payoff as a hill-shaped curve in a_i,
equal to zero at a_i = 0 and at a_i = c + a_j.)

Game Theory - A (Short) Introduction 58 9/12/2011
2.8 Best Response Functions
 Mathematical construction

Π_i = a_i(c + a_j - a_i) = c·a_i + a_i·a_j - a_i²

FOC:  ∂Π_i/∂a_i = c + a_j - 2a_i = 0

so  a*_i = (1/2)(c + a_j)
Game Theory - A (Short) Introduction 59 9/12/2011
2.8 Best Response Functions
 Nash equilibrium:
 To find the Nash equilibrium, following proposition 36.1, we have to
solve the following system of equations:

a_1 = (1/2)(c + a_2)
a_2 = (1/2)(c + a_1)

 By substitution, we get:

a_1 = (1/2)(c + (1/2)(c + a_1)) = (3/4)c + (1/4)a_1

So: a_1 = c

The unique Nash equilibrium is (c, c)
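A quick verification of these computations (my own sketch, assuming sympy is available):

import sympy as sp

a1, a2, c = sp.symbols("a1 a2 c", positive=True)
payoff1 = a1 * (c + a2 - a1)
payoff2 = a2 * (c + a1 - a2)

# Best responses from the first-order conditions
b1 = sp.solve(sp.diff(payoff1, a1), a1)[0]   # (a2 + c)/2
b2 = sp.solve(sp.diff(payoff2, a2), a2)[0]   # (a1 + c)/2

# Nash equilibrium: a1 = b1(a2) and a2 = b2(a1)
print(sp.solve([sp.Eq(a1, b1), sp.Eq(a2, b2)], [a1, a2]))   # {a1: c, a2: c}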
Game Theory - A (Short) Introduction 60 9/12/2011
2.8 Best Response Functions
 Graphical representation: in the (a_1, a_2) plane, the best response
functions b_1(a_2) and b_2(a_1) are straight lines with intercept (1/2)c;
they intersect at the Nash equilibrium (c, c). (Figure omitted.)
Game Theory - A (Short) Introduction 61 9/12/2011
2.8 Best Response Functions
 Note that:
 The best response of a player to the actions of the other players need not
be unique. If a player has many best responses to some of the
other players' actions, then her best response function is “thick” (a
surface) at some points;
 A Nash equilibrium need not exist: the best response functions
may not cross;
 If the best response functions are not linear, the Nash equilibrium need
not be unique;
 Best response functions can be discontinuous, generating another
set of difficulties
Game Theory - A (Short) Introduction 62 9/12/2011
2.8 Best Response Functions
 Exercise 42.1
 Find the Nash equilibria of the two-player strategic game in which
each player's set of actions is the set of nonnegative numbers and
the players' payoff functions are u_1(a_1, a_2) = a_1(a_2 - a_1) and
u_2(a_1, a_2) = a_2(1 - a_1 - a_2)
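A possible starting point for this exercise (my own sketch, not the textbook solution): applying the best-response method of Section 2.8.3 to the interior first-order conditions with sympy.

import sympy as sp

a1, a2 = sp.symbols("a1 a2", nonnegative=True)
u1 = a1 * (a2 - a1)
u2 = a2 * (1 - a1 - a2)

b1 = sp.solve(sp.diff(u1, a1), a1)[0]        # a2/2
b2 = sp.solve(sp.diff(u2, a2), a2)[0]        # (1 - a1)/2
print(sp.solve([sp.Eq(a1, b1), sp.Eq(a2, b2)], [a1, a2]))
# {a1: 1/5, a2: 2/5} -- the interior candidate; boundary profiles still need checking.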
Game Theory - A (Short) Introduction 63 9/12/2011
2.9 Dominated actions
 2.9.1 Strict domination
 In any game, a player's action “strictly dominates” another action if
it is superior, no matter what the other players do.
 Definition 45.1 (Strict domination): in a strategic game with ordinal
preferences, player i's action a''_i strictly dominates her action a'_i if:

u_i(a''_i, a_-i) > u_i(a'_i, a_-i) for every list a_-i of the other players' actions

Action a'_i is said to be strictly dominated.

 Example: in the Prisoner's Dilemma,
the action Fink strictly dominates
the action Quiet

                   Quiet      Fink
        Quiet      (2,2)      (0,3)
        Fink       (3,0)      (1,1)
Game Theory - A (Short) Introduction 64 9/12/2011
2.9 Dominated actions
 Note that, as a strictly dominated action is not a best response to
any actions of the other players, a strictly dominated action is not
used in any Nash equilibrium.
When looking for Nash equilibria of a game, we can therefore
eliminate from consideration all strictly dominated actions.

 2.9.2 Weak domination
 In any game, a player’s action weakly dominates another action if
the first action is at least as good as the second action, no matter
what the other players do, and is better than the second action for
some actions of the other players.
Game Theory - A (Short) Introduction 65 9/12/2011
2.9 Dominated actions
 Definition 46.1 (Weak domination): In a strategic game with ordinal
preferences, player i's action a''_i weakly dominates her action a'_i if:

u_i(a''_i, a_-i) ≥ u_i(a'_i, a_-i) for every list a_-i of the other players' actions
and
u_i(a''_i, a_-i) > u_i(a'_i, a_-i) for some list a_-i of the other players' actions

 Note that in a strict Nash equilibrium, no player's equilibrium action
is weakly dominated, but in a nonstrict Nash equilibrium, an equilibrium
action can be weakly dominated.
Game Theory - A (Short) Introduction 66 9/12/2011
2.9 Dominated actions
 Exercise 47.1 (Strict equilibria and dominated actions)
For the game in Figure 48.1, determine, for each player,
whether any action is strictly dominated or weakly dominated.
Find the Nash equilibria of the game. Determine whether any
equilibrium is strict.

Figure 48.1:
            L        C        R
   T       0,0      1,0      1,1
   M       1,1      1,1      3,0
   B       1,1      2,1      2,2
Game Theory - A (Short) Introduction 67 9/12/2011
2.9 Dominated actions
 2.9.4 Illustration: collective decision-making
 The members of a group of people are affected by a policy,
modeled as a number. Each person i has a favorite policy, denoted
x*_i. She prefers the policy y to the policy z if and only if y is closer to
x*_i than is z. The number n of people is odd. The following
mechanism is used to choose the policy:
 each person names a policy
 the policy chosen is the median of those named
Eg.: if there are five people, and they name the policies -2, 0, 0.6, 5
and 10, the policy 0.6 is chosen.
 Questions:
 Model this situation as a strategic game
 Find the equilibrium strategy of the players
 Does anyone have an incentive to name her favorite policy?
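A minimal illustration (mine, not from the slides) of the mechanism: every person names a policy and the median of the named policies is implemented.

import statistics

def chosen_policy(named_policies):
    # n is assumed odd, so the median is one of the named policies
    return statistics.median(named_policies)

print(chosen_policy([-2, 0, 0.6, 5, 10]))  # 0.6, as in the example above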
Game Theory - A (Short) Introduction 68 9/12/2011
2.9 Dominated actions
 Strategic game:
 Players: n people
 Actions: each person’s set of actions is the set of policies
(numbers)
 Preferences: each person i prefers the action profile a to the action
profile a' if and only if the median policy named in a is closer to x*_i
than is the median policy named in a'.

 Equilibrium strategy of the players:
 Claim: for each player i, the action of naming her favorite policy x*_i
weakly dominates all her other actions.
 Why?


Game Theory - A (Short) Introduction 69 9/12/2011
2.9 Dominated actions
 Proof:
 Take x_i > x*_i (reporting a higher policy than the preferred one)

 a. for all actions of the other players, player i is at least as well off
naming x*_i as she is naming x_i
 for any list of actions of the players other than player i, denote the value
of the ½(n-1)th highest action by a- and the value of the ½(n+1)th highest
action by a+ (so that half of the remaining players' actions are at most a-
and half of them are at least a+).
 if x*_i ≥ a+: the median policy is the same whether player i names x*_i or x_i
(as x_i > x*_i).
 if x_i ≤ a-: the same holds true (as x*_i < x_i)
 if x*_i < a+ and x_i > a-, then
 when player i names x*_i, the median policy is at most the greater of
x*_i and a-
 when player i names x_i, the median policy is at least the lesser of x_i
and a+.
Thus, player i is no better off naming x_i than naming x*_i.
Game Theory - A (Short) Introduction 70 9/12/2011
2.9 Dominated actions
 b. for some actions of the other players, player i is better of naming x*
i

than she is naming x
i

 Suppose that half of the remaining players name policies less than x*
i

and half of them name policies greater than x
i
. Then the outcome is x*
i
if
player i names x*
i
and x
i
if she names x
i
. Thus player i is better off
naming x
i
than she is naming x*
i
.

A symmetric argument applies when x
i
< x*
i
.
Telling the truth weakly dominates all other action.
Game Theory - A (Short) Introduction 71 9/12/2011
2.10 Equilibrium in a single
population: symmetric games
 We focus here on cases where we want to model the interaction
between members of a single homogeneous population of players.
Players interact anonymously and symmetrically.

 Definition 51.1 (Symmetric two-player game with ordinal preferences)
A two-player strategic game with ordinal preferences is symmetric if the
players' sets of actions are the same and the players' preferences are
represented by payoff functions u_1 and u_2 for which u_1(a_1, a_2) = u_2(a_2, a_1)
for every action pair (a_1, a_2)

 Definition 52.1 (Symmetric Nash equilibrium)
An action profile a* in a strategic game with ordinal preferences in which
each player has the same set of actions is a symmetric Nash
equilibrium if it is a Nash equilibrium and a*_i is the same for every
player i.
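A quick check of Definition 51.1 (my own sketch) for a two-player game stored as a dict (a1, a2) -> (u1, u2), assuming both players have the same action set:

def is_symmetric(game):
    actions = {a for pair in game for a in pair}
    return all(game[(a1, a2)][0] == game[(a2, a1)][1]
               for a1 in actions for a2 in actions)

pd = {("Q", "Q"): (2, 2), ("Q", "F"): (0, 3),
      ("F", "Q"): (3, 0), ("F", "F"): (1, 1)}
bos = {("B", "B"): (2, 1), ("B", "S"): (0, 0),
       ("S", "B"): (0, 0), ("S", "S"): (1, 2)}
print(is_symmetric(pd))   # True: the Prisoner's Dilemma is symmetric
print(is_symmetric(bos))  # False: BoS is not symmetric in the sense of 51.1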
Game Theory - A (Short) Introduction 72 9/12/2011
2.10 Equilibrium in a single
population: symmetric games
 Exercise 52.2
Find all the Nash equilibria of the game in Figure 53.1. Which of
the equilibria, if any, correspond to a steady state if the game
models pairwise interactions between the members of a single
population?
Figure 53.1:
            A        B        C
   A       1,1      2,1      4,1
   B       1,2      5,5      3,6
   C       1,4      6,3      0,0
3 Nash Equilibrium:
Illustrations
Game Theory - A (Short) Introduction 74 9/12/2011
3.5 Auctions
 3.5.1 Introduction
 Auctions are used to allocate significant economic resources, from works of art to
short-term government bonds to radio spectrum …
 Auctions come in many forms:
 Sequential or sealed bid (simultaneous)
 First or Second price
 Ascending (English) or Descending (Dutch)
 Single or Multi-Units
 With or without reservation price
 With or without entry costs
 …
 Auctions:
 have existed for a long time (e.g., the annual auction of marriageable women in
Babylonian villages)
 and remain up-to-date (eBay on the Internet)
 Main questions
 Which designs are likely to be the most effective at allocating resources?
 Which designs are likely to raise the most revenue?


Game Theory - A (Short) Introduction 75 9/12/2011
3.5 Auctions
 Main assumption: we discuss here auctions in which every
buyer knows her own valuation and every other buyer’s
valuation of the item being sold

Buyers are perfectly informed.

 This assumption will be dropped in Chapter 9.
Game Theory - A (Short) Introduction 76 9/12/2011
3.5 Auctions
 3.5.2 Second-price sealed-bid auctions
 In a common form of auction, people sequentially submit increasing
bids for an object. When no one wishes to submit a higher bid than the
current bid, the person making the current bid obtains the object at the
price she bid.
 Given that every person is certain of her valuation of the object before
the bidding begins, no one can learn anything relevant to her actions
during the bidding.
 Thus we can model the auction by assuming that each person decides,
before bidding begins, the most she is willing to bid (her maximal bid).
 During the bidding, eventually, only the person with the maximal bid and
the one with the second highest maximal bid will be left competing
against each other.
 To win, the person with the highest maximal bid needs therefore to bid
slightly more than the second highest maximal bid.
Game Theory - A (Short) Introduction 77 9/12/2011
3.5 Auctions
 We can therefore model such an ascending auction as a strategic
game in which each player chooses an amount of money (the
maximal amount she is willing to bid) and the player who chooses
the highest amount obtains the object and pays a price equal to the
second highest amount.


 This game also models a situation in which the people
simultaneously put bids in sealed envelopes, and the person who
submits the highest bid wins and pays a price equal to the second
highest bid.

In a perfect information context, ascending auctions (or English
auctions) and second-price sealed bid auction are modeled by the
same strategic game.
Game Theory - A (Short) Introduction 78 9/12/2011
3.5 Auctions
 Notations
 v_i: the value player i attaches to the object
 p: price paid for the object
 v_i - p: the winning player's payoff
 n: number of players
 number the players such that v_1 > v_2 > … > v_n > 0
 b_i: sealed bid submitted by player i

 Rules
 Each player submits a sealed bid b_i
 If b_i is the highest bid, player i wins the auction, gets the object and pays
the second highest bid (say b_j). In such a case, player i's payoff is v_i - b_j
 In case of a tie, the player with the smallest number (the highest
valuation) wins. She pays her own bid (as there is a tie)
Game Theory - A (Short) Introduction 79 9/12/2011
3.5 Auctions
 Strategic game representation:

 Players: the n bidders, where n ≥ 2

 Actions: the set of actions of each player is the set of possible bids
(nonnegative numbers)

 Preferences: denote by b_i the bid of player i and by b+ the highest
bid submitted by a player other than i. If either b_i > b+, or b_i = b+ and
the number of every other player who bids b+ is greater than i, then
player i's payoff is v_i - b+. Otherwise player i's payoff is 0.
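A sketch (mine, following the rules above) of the second-price sealed-bid auction outcome, with ties broken in favour of the lowest-indexed highest bidder:

def second_price_outcome(bids, valuations):
    # bids, valuations: lists indexed by player (player 1 is index 0)
    highest = max(bids)
    winner = bids.index(highest)                 # lowest index among the highest bids
    price = max(b for i, b in enumerate(bids) if i != winner)  # highest bid of the others
    payoffs = [0.0] * len(bids)
    payoffs[winner] = valuations[winner] - price
    return winner, price, payoffs

# The "truthful" profile (b_i = v_i) with valuations v_1 > v_2 > v_3:
print(second_price_outcome([10, 7, 3], [10, 7, 3]))  # player 1 wins and pays 7
# The profile (v_2, v_1, 0): player 2 wins at price v_2 and every payoff is 0
print(second_price_outcome([7, 10, 0], [10, 7, 3]))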
Game Theory - A (Short) Introduction 80 9/12/2011
3.5 Auctions
 Nash equilibrium
 The game has many Nash equilibria:
 One equilibrium is (b_1, b_2, …, b_n) = (v_1, v_2, …, v_n): each player's bid is equal to
her valuation of the object:

 because v_1 > v_2 > … > v_n, the outcome is that player 1 obtains the object and
pays b_2. Her payoff is v_1 - b_2. Every other player's payoff is zero.

 if player 1 changes her bid to some other price at least equal to b_2, then the
outcome does not change. If she changes her bid to a price less than b_2, then
she loses and obtains a zero payoff

 if some other player lowers her bid or raises her bid to some price at most
equal to b_1, then she remains a loser. If she raises her bid above b_1, then she
wins but, in paying the price b_1, she makes a loss (because her valuation is
less than b_1).
Game Theory - A (Short) Introduction 81 9/12/2011
3.5 Auctions
 Another equilibrium is (b_1, b_2, …, b_n) = (v_1, 0, …, 0): player 1 obtains the
object and pays 0. A sad outcome for the auctioneer …
 Another equilibrium is (b_1, b_2, …, b_n) = (v_2, v_1, 0, …, 0): player 2 bids v_1
and obtains the object at price v_2, and every player's payoff is zero:
 if player 1 raises her bid to v_1 or more, she wins the object but her payoff
remains zero (she pays the price v_1, bid by player 2)
 if player 2 changes her bid to some other price greater than v_2, the outcome
does not change. If she changes her bid to v_2 or less, she loses, and her
payoff remains zero.
 if any other player raises her bid to at most v_1, the outcome does not change.
If she raises her bid above v_1, then she wins but gets a negative payoff.
Note that, in this equilibrium, player 2 bids more than her valuation. This
might seem strange. It is due to the fact that, in a Nash equilibrium, a
player does not consider the “risk” that another player will take an action
different from her equilibrium action. Each player simply chooses an
action that is optimal, given the other players' actions.
This however suggests that this equilibrium is less plausible as an
outcome of the auction than the equilibrium in which each bidder bids
her valuation.
Game Theory - A (Short) Introduction 82 9/12/2011
3.5 Auctions
This is due to the fact that:

in a second-price sealed-bid auction (with perfect information),
a player's bid equal to her valuation weakly dominates all her
other bids.

That is:

for any bid b_i ≠ v_i, player i's bid v_i is at least as good as b_i, no
matter what the other players bid, and is better than b_i for
some actions of the other players.

Game Theory - A (Short) Introduction 83 9/12/2011
3.5 Auctions
The precise argument is given by Figure 85.1 (three payoff diagrams, not reproduced
here). The figure compares player i's payoff to the bid v_i (left panel) with her payoff
to a bid b'_i < v_i (middle panel) and to a bid b''_i > v_i (right panel), each as a
function of the highest of the other players' bids, b+. In the middle panel, v_i is
better than b'_i when b'_i < b+ < v_i; in the right panel, v_i is better than b''_i when
v_i < b+ < b''_i.

We see that:
- for all values of b+, player i's payoff to the bid v_i is at least as large as her payoff to any other bid;
- for some values of b+, her payoff to v_i exceeds her payoff to any given other bid.
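A numeric companion to Figure 85.1 (my own illustration, not the original figure): player i's payoff as a function of the highest competing bid b+, for the truthful bid and for two deviations.

def payoff(bid, v_i, b_plus):
    # ties are treated as losses here, which only penalises the truthful bid
    return v_i - b_plus if bid > b_plus else 0.0

v_i = 10.0
for b_plus in [2.0, 6.0, 9.0, 11.0, 14.0]:
    print(b_plus, {bid: payoff(bid, v_i, b_plus) for bid in (v_i, 5.0, 13.0)})
# For every b_plus the truthful bid 10.0 does at least as well as 5.0 or 13.0;
# it does strictly better when 5.0 < b_plus < 10.0 (5.0 would lose) and
# when 10.0 < b_plus < 13.0 (13.0 would win at a price above the valuation).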
Game Theory - A (Short) Introduction 84 9/12/2011
3.5 Auctions
 Exercise 84.1
 Find a Nash equilibrium of a second-price sealed bid auction in
which player n obtains the object.

 Exercise 86.1 (Auctioning the right to choose)
 An action affects each of two people. The right to choose the
action is sold in a second-price auction. That is, the two people
simultaneously submit bids, and the one who submits the higher
bid chooses her favorite action and pays (to a third party) the
amount bid by the other person, who pays nothing. Assume that if
the bids are the same, person 1 is the winner.
 For i = 1, 2, the payoff of person i when the action is a and person i
pays m is u_i(a) - m.
 In the game that models this situation, find for each player a bid
that weakly dominates all the player’s other bids (and thus find a
Nash equilibrium in which each player’s equilibrium action weakly
dominates all her other actions).


Game Theory - A (Short) Introduction 85 9/12/2011
3.5 Auctions
 3.5.3 First-price sealed-bid auctions
 Difference with a second-price auction: the winner pays the price
she bids

 Strategic game representation:
 Players: the n bidders, where n ≥ 2
 Actions: the set of actions of each player is the set of possible
bids (nonnegative numbers)
 Preferences: denote by b_i the bid of player i and by b+ the
highest bid submitted by a player other than i. If either (a) b_i > b+
or (b) b_i = b+ and the number of every other player who
bids b+ is greater than i, then player i's payoff is v_i - b_i. Otherwise,
player i's payoff is 0.

Game Theory - A (Short) Introduction 86 9/12/2011
3.5 Auctions
 Note that this game models:
 a sealed-bid auction where the highest bid wins
 but also
 a dynamic auction in which the auctioneer begins by
announcing a high price, which she gradually lowers until
someone indicates her willingness to buy the object (a
Dutch auction)
(this equivalence is even, in some sense, stronger than
the one between an ascending auction and a second-price
sealed-bid auction: it does not depend on private values).
 Nash equilibrium
 One Nash equilibrium is (b_1, b_2, …, b_n) = (v_2, v_2, …, v_n), in which
player 1's bid is player 2's valuation and every other player's bid is
her own valuation. The outcome is that player 1 obtains the
object at price v_2.
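A brute-force check (my own sketch, with deviations restricted to a grid) that no player gains by deviating unilaterally from (v_2, v_2, v_3) in a first-price auction with valuations 10, 7, 3:

def first_price_payoffs(bids, valuations):
    winner = bids.index(max(bids))           # ties go to the lowest index
    payoffs = [0.0] * len(bids)
    payoffs[winner] = valuations[winner] - bids[winner]
    return payoffs

valuations = [10.0, 7.0, 3.0]
profile = [7.0, 7.0, 3.0]                    # (v_2, v_2, v_3)
base = first_price_payoffs(profile, valuations)
deviation_found = False
for i in range(3):
    for bid in [x / 2 for x in range(0, 30)]:        # candidate deviations 0, 0.5, ..., 14.5
        trial = list(profile)
        trial[i] = bid
        if first_price_payoffs(trial, valuations)[i] > base[i] + 1e-9:
            deviation_found = True
print(deviation_found)   # False: no profitable deviation on this grid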
Game Theory - A (Short) Introduction 87 9/12/2011
3.5 Auctions
 Exercise 86.2
Show that (b_1, b_2, …, b_n) = (v_2, v_2, …, v_n) is a Nash equilibrium
of a first-price sealed-bid auction.

 A first-price sealed-bid auction has many other equilibria, but in all
equilibria the winner is the player who values the object most highly
(player 1), by the following argument:
 in any action profile (b_1, …, b_n) in which some player i ≠ 1 wins,
we have b_i > b_1.
 If b_i > v_2, then player i's payoff is negative, so that she can do
better by reducing her bid to 0
 if b_i ≤ v_2, then player 1 can increase her payoff from 0 to
v_1 - b_i by bidding b_i, in which case she wins.
Game Theory - A (Short) Introduction 88 9/12/2011
3.5 Auctions
 Exercise 87.1 (First-price sealed-bid auction)
Show that in a Nash equilibrium of a first-price sealed-bid
auction the two highest bids are the same, one of these
bids is submitted by player 1, and the highest bid is at
least v_2 and at most v_1. Show also that any action profile
satisfying these conditions is a Nash equilibrium.
Game Theory - A (Short) Introduction 89 9/12/2011
3.5 Auctions
 As in the second-price sealed-bid auction, the potential
“riskiness” to player i of a bid b_i > v_i is reflected in the fact that it is
weakly dominated by the bid v_i, as shown by the following argument:
 if the other players' bids are such that player i loses when she bids
b_i, then the outcome is the same whether she bids b_i or v_i
 if the other players' bids are such that player i wins when she bids
b_i, then her payoff is negative when she bids b_i and zero when she
bids v_i (regardless of whether this bid wins)
 However, unlike in a second-price auction, in a first-price auction a bid
b_i < v_i of player i is not weakly dominated by the bid v_i (it is in fact not
weakly dominated by any bid):
 it is not weakly dominated by a bid b'_i < b_i because if the other
players' highest bid is between b'_i and b_i, then b'_i loses whereas b_i
wins and yields player i a positive payoff
 it is not weakly dominated by a bid b'_i > b_i because if the other
players' highest bid is less than b_i, then both b_i and b'_i win and b_i
yields a lower price.
Game Theory - A (Short) Introduction 90 9/12/2011
3.5 Auctions
 Note also that, though the bid v_i weakly dominates higher bids,
this bid is itself weakly dominated by a lower bid! The
argument is the following:
 if player i bids v_i, her payoff is 0 regardless of the other
players' bids
 whereas, if she bids less than v_i, her payoff is either 0 (if
she loses) or positive (if she wins)
In a first-price sealed-bid auction (with perfect information), a player's
bid of at least her valuation is weakly dominated, and a bid of less than
her valuation is not weakly dominated.


Game Theory - A (Short) Introduction 91 9/12/2011
3.5 Auctions
 Note finally that this property of the equilibria depends on the
assumption that a bid may be any number. In the variant of the game in
which bids and valuations are restricted to be multiples of some discrete
monetary unit ε,
 an action profile (v_2 - ε, v_2 - ε, b_3, …, b_n) with b_j ≤ v_j - ε for j = 3, …, n
is a Nash equilibrium in which no player's bid is weakly dominated.
 further, every equilibrium in which no player's bid is weakly
dominated takes this form.
If ε is small, this is very close to (v_2, v_2, b_3, …, b_n): this equilibrium is
therefore (on a somewhat ad-hoc basis) considered as the distinguished
equilibrium of a first-price sealed-bid auction.
One conclusion of this analysis is that, while both second-price and first-price
auctions have many Nash equilibria, their distinguished equilibria yield the
same outcome: in every distinguished equilibrium of each game, the object is
sold to player 1 at the price v_2. This notion of revenue equivalence is a
cornerstone of auction theory and will be analyzed in depth later.
Game Theory - A (Short) Introduction 92 9/12/2011
3.5 Auctions
 3.5.4 Variants
 Uncertain valuation: we have assumed that each bidder is certain
of both her own valuation and every other bidder’s valuation, which
is highly unrealistic. We will study the case of imperfect information
in Chap. 9 (in the framework of Bayesian games)
 Interdependent/Common valuations: in some auctions, the main
difference between bidders is not that they value the object
differently but that they have different information about its value
(eg, oil tract auctions). As this also involves informational
considerations, we will again study this in Chap. 9.
 All-pay auctions: in some auctions, every bidder pays, not only the
winner (eg, competition of lobby groups for government attention).
Game Theory - A (Short) Introduction 93 9/12/2011
3.5 Auctions
 Multiunit auctions: in some auctions, many units of an object are
available (eg, US Treasury bill auctions) and each bidder may
value positively more than one unit. Each bidder therefore chooses
a bid profile (b_1, b_2, …, b_k) if there are k units for sale. Different auction
mechanisms exist and are characterized by the rule governing the
price paid by the winners:
 Discriminatory auction: the price paid for each unit is the
winning bid for that unit
 Uniform-price auction: the price paid for each unit is the
same, equal to the highest rejected bid among all the bids for
all units
 Vickrey auction (named after the Nobel prize winner): a bidder
who wins k objects pays the sum of the k highest rejected bids
submitted by the other bidders.
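A sketch (my own reading of the three pricing rules above) for a k-unit sealed-bid auction in which each bidder submits one bid per unit demanded:

def multiunit_auction(bids, k):
    # bids: dict bidder -> list of bids; returns units won and payments per rule
    all_bids = sorted(((b, name) for name, bs in bids.items() for b in bs), reverse=True)
    winning, rejected = all_bids[:k], all_bids[k:]
    won = {name: sum(1 for _, w in winning if w == name) for name in bids}

    discriminatory = {name: sum(b for b, w in winning if w == name) for name in bids}

    uniform_price = rejected[0][0] if rejected else 0.0      # highest rejected bid overall
    uniform = {name: won[name] * uniform_price for name in bids}

    vickrey = {}
    for name in bids:
        others = sorted((b for b, w in rejected if w != name), reverse=True)
        vickrey[name] = sum(others[:won[name]])              # highest rejected bids of the others
    return won, discriminatory, uniform, vickrey

bids = {"A": [10, 6], "B": [8, 4], "C": [5]}
print(multiunit_auction(bids, 3))
# 3 units: winning bids 10, 8, 6 (A wins 2, B wins 1); rejected bids 5 and 4
# discriminatory: A pays 16, B pays 8; uniform (price 5): A pays 10, B pays 5
# Vickrey: A pays 5 + 4 = 9, B pays 5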
4. Mixed Strategy
Equilibrium
Game Theory - A (Short) Introduction 95 9/12/2011
4.1. Introduction
 4.1.1. Stochastic steady state
 Nash Equilibrium in a strategic game: action profile in which every
player’s action is optimal given every other player’s action (see def.
23.1)
 This corresponds to a steady state of the game:
 every player’s behavior is the same whenever she plays
the game
 no player wishes to change her behavior, knowing (from
experience) the other players’ behavior
 In such a framework, the outcome of every play of the game is
the same Nash equilibrium

 A more general notion of steady state exists

Game Theory - A (Short) Introduction 96 9/12/2011
4.1. Introduction
 players’ choices are allowed to vary:
 different members of a given population may choose
different actions, each player choosing the same action
whenever she plays the game
 each individual may, on each occasion she plays the
game, choose her action probabilistically according to
the same, unchanging, distribution
 these situations are equivalent:
 in the first case, a fraction p of the population
representing player i chooses the action a
 in the second case, each member of the population
representing player i chooses the action a with
probability p

These notions of (stochastic) steady state are
modeled as mixed strategy Nash equilibria
Game Theory - A (Short) Introduction 97 9/12/2011
4.1. Introduction
 4.1.2 Example: Matching Pennies
                        Player 2
                     Head      Tail
Player 1   Head     (1,-1)    (-1,1)
           Tail     (-1,1)    (1,-1)

The game has no Nash equilibrium: no pair of
actions is compatible with a steady state.
Game Theory - A (Short) Introduction 98 9/12/2011
4.1. Introduction
 The game has, however, a stochastic steady state in which each
player chooses each of her actions with probability 1/2:

 Suppose that player 2 chooses each of her actions with probability ½
 If player 1 chooses Head with probability p and Tail with probability (1-
p), then:
 each outcome (Head,Head) and (Head,Tail) occurs with
probability p x ½
 each outcome (Tail,Head) and (Tail,Tail) occurs with
probability (1-p) x ½
 Thus, the probability that the outcome is either (Head,Head) or
(Tail,Tail) (in which case player 1 wins 1$) is ½·p + ½·(1-p) = ½.
 The other two outcomes (Head,Tail) and (Tail,Head) (which correspond
to a loss of 1$) also have probability ½

Game Theory - A (Short) Introduction 99 9/12/2011
4.1. Introduction
 the probability distribution over outcomes is independent of p!
 every value of p is optimal (in particular ½)!
 the same analysis holds for player 2. We conclude that the
game has a stochastic steady state in which each player
chooses each action with probability ½.

 Moreover (under a reasonable assumption on the players’
preferences), the game has no other steady state :
 Assumption: each player wants the probability of her gaining
1$ to be as large as possible (maximization of expected profit)
 Denote q the probability with which player 2 chooses Head
(she chooses Tail with probability (1-q) )
 If player 1 chooses Head with probability p, she gains 1$ with
probability pq + (1-p)(1-q) (outcomes (Head,Head) or (Tail,Tail))
and she loses 1$ with probability (1-p)q + p(1-q).


Game Theory - A (Short) Introduction 100 9/12/2011
4.1. Introduction
 Note that:
 Player 1 wins 1$ with probability pq + (1-p)(1-q) = 1 - q + p(2q-1)
 Player 1 loses 1$ with probability (1-p)q + p(1-q) = q + p(1-2q)
 If q < ½, the first probability (winning 1$) is decreasing in p and
the second probability (losing 1$) is increasing in p. Player 1
therefore chooses p = 0.
 Thus, if player 2 chooses Head with probability less than ½,
the best response of player 1 is to choose Tail with certainty.
 A similar argument shows that if player 2 chooses Head with
probability greater than ½, the best response of player 1 is to
choose Head with certainty.
 We have already shown that if one player chooses a given
action with certainty (a pure Nash equilibrium), there is no steady
state.
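A small numeric illustration of this argument (mine, not from the slides): player 1's probability of winning as a function of p for a few values of q.

def prob_win(p, q):
    # P((Head,Head) or (Tail,Tail)) = pq + (1-p)(1-q)
    return p * q + (1 - p) * (1 - q)

for q in (0.3, 0.5, 0.7):
    print(q, {p: round(prob_win(p, q), 2) for p in (0.0, 0.5, 1.0)})
# q = 0.3: winning probability falls in p  -> best response p = 0 (Tail)
# q = 0.5: winning probability is 1/2 for every p -> any p is a best response
# q = 0.7: winning probability rises in p -> best response p = 1 (Head)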
Game Theory - A (Short) Introduction 101 9/12/2011
4.1. Introduction
 4.1.3 Generalizing the analysis: expected payoffs
 The matching pennies case is particularly simple because it has
only two outcomes for each player, allowing us to deduce players'
preferences regarding lotteries (probability distributions) over
outcomes from their preferences regarding deterministic
outcomes:
 if a player prefers a to b and if p > q, she prefers a
lottery in which a occurs with probability p (and b with probability
(1-p)) to a lottery in which a occurs with probability q (and b with
probability (1-q))

 To deal with more general cases (eg, more than two outcomes),
we need to add to the model a description of the players' preferences
regarding lotteries (probability distributions) over outcomes
Game Theory - A (Short) Introduction 102 9/12/2011
4.1. Introduction
 The standard approach is to restrict attention to preferences
regarding lotteries (probability distribution) over outcomes that may
be represented by the expected value of a payoff function over
deterministic outcomes:
 for every player i, there is a payoff function u_i, with the
property that player i prefers one probability distribution over
outcomes to another if and only if, according to u_i, the
expected value of the first probability distribution exceeds the
expected value of the second probability distribution.
 eg.:
 three outcomes: a, b, c
 two prob. dist.: P = (p_a, p_b, p_c) and Q = (q_a, q_b, q_c)
 for each player i, prob. dist. P is preferred to prob. dist. Q if and only if
p_a·u_i(a) + p_b·u_i(b) + p_c·u_i(c) > q_a·u_i(a) + q_b·u_i(b) + q_c·u_i(c)

Preferences that can be represented by the expected value of a
payoff function over deterministic outcomes are called vNM (von
Neumann – Morgenstern) preferences.
A payoff function whose expected value represents such
preferences is called a Bernoulli payoff function.
Game Theory - A (Short) Introduction 103 9/12/2011
4.1. Introduction
 The restrictions on preferences regarding prob. dist. over
outcomes required for them to be represented by expected
value of a payoff function are NOT innocuous (see violations
example on page 104). They are however commonly accepted
in game theory.

 However, these restrictions do not restrict players' attitudes toward risk:
 eg. :
 suppose that a, b and c are three outcomes and that a person prefers a to b to c.
If the person is very averse to risk, she then prefers to obtain
b for sure rather than to face a lottery in which a occurs with
probability p and c with probability 1-p, even if p is relatively large.
 such preferences can be represented by the expected value of a payoff
function u for which u(a) is close to u(b), both being much larger than u(c)
(a concave payoff function)
(Figure 103.1: a concave payoff function u over the outcomes c, b, a, with u(b) close to u(a) and both much larger than u(c))
Game Theory - A (Short) Introduction 104 9/12/2011
4.1. Introduction
• Note that if the outcomes are amounts of money and the preferences are
represented by the expected value of the amount of money, the player is risk
neutral.

• Two classic utility functions: CARA & CRRA

• In reality:
• the fact that people buy insurance (the expected payout is less than the
insurance premium) shows that economic agents are risk averse.
• the fact that people buy lottery tickets shows that, in some circumstances, they
can be risk loving (small stake, extremely high possible payoff).
 in both cases, the preferences can be represented by the expected value of a
payoff function:
• concave in the case of risk aversion
• convex in the case of risk preference

• Note finally that, for given preferences, many different payoff functions can be used
to represent them; for vNM preferences, any increasing affine transformation of a
Bernoulli payoff function represents the same preferences (see Lemma 148.1 in
Section 4.12).
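For reference (standard forms, not spelled out on the slide), the two classic families mentioned above are usually written as

$$u_{\text{CARA}}(x) = -e^{-\alpha x}\ (\alpha > 0), \qquad u_{\text{CRRA}}(x) = \begin{cases}\dfrac{x^{1-\rho}}{1-\rho} & \rho \neq 1\\[4pt] \ln x & \rho = 1\end{cases}$$

where α is the coefficient of absolute risk aversion and ρ the coefficient of relative risk aversion.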
Game Theory - A (Short) Introduction 105 9/12/2011
4.2 Strategic games in which
players may randomize
 Definition 106.1 (Strategic game with vNM preferences)
A strategic game with vNM preferences consists of
 a set of players
 for each player, a set of actions
 for each player, preferences regarding prob. dist. over action
profiles that may be represented by the expected value of a
(Bernoulli) payoff function over action profiles.

 Representation: a two-player strategic game with vNM preferences in
which each player has finitely many actions may be represented in a
table like in Chapter 2. However, the interpretation of the numbers is
different:
 in Chapter 2, the numbers are values of payoff functions that represent the
players' preferences over deterministic outcomes
 here, the numbers are values of (Bernoulli) payoff functions whose expected
values represent the players' preferences over prob. dist.
Game Theory - A (Short) Introduction 106 9/12/2011
4.2 Strategic games in which
players may randomize
 The change is subtle but important (figure 107.1)






The 2 games represent the same game with ordinal preferences (the
prisoner’s dilemma).
However, the 2 games represent different strategic games with vNM
preferences:
 left game: player’s 1 payoff to (Q,Q) is the same as her expected payoff
to the prob. dist. that yield (F,Q) with probability ½ and (F,F) with
probability ½
 right game: her payoff to (Q,Q) is higher than her expected payoff to this
prob. dist.
            Q        F                      Q        F
Q          2,2      0,3          Q         3,3      0,4
F          3,0      1,1          F         4,0      1,1
(Figure 107.1: left and right games)
Game Theory - A (Short) Introduction 107 9/12/2011
4.3 Mixed strategy Nash
equilibrium
 4.3.1 Mixed strategies
 We allow now each player to choose a probability distribution over
her set of actions (rather than restricting her to choose a single
deterministic action)

 Definition 107.1 (Mixed strategy)
A mixed strategy of a player in a strategic game is a probability
distribution over the player’s actions.

 Notations:
 α: profile of mixed strategies (one mixed strategy per player)
 α_i(a_i): probability assigned by player i's mixed strategy α_i to her
action a_i
Game Theory - A (Short) Introduction 108 9/12/2011
4.3 Mixed strategy Nash
equilibrium
 eg: in Matching Pennies, the strategy of player 1 that assigns
probability ½ to each action is the strategy α_1(Head) = ½ and
α_1(Tail) = ½.
 Shortcut: mixed strategies are often written as a list of
probabilities (one for each action), in the order the actions are
given in the table (see table 107.1).
eg.: ( ½ , ½ ) assigns, in table 107.1, probability ½ to Q and
probability ½ to F.

 Note that a mixed strategy may assign probability 1 to a single
action. In that case, such a strategy is referred to as a pure
strategy.
Game Theory - A (Short) Introduction 109 9/12/2011
4.3 Mixed strategy Nash
equilibrium
 4.3.2 Equilibrium
The mixed strategy Nash equilibrium extends the concept of
Nash equilibrium to the probabilistic setup.

Definition 108.1 (Mixed strategy Nash equilibrium of a strategic
game with vNM preferences)
The mixed strategy profile α* in a strategic game with vNM
preferences is a mixed strategy Nash equilibrium if, for each
player i and every mixed strategy α_i of player i, the expected
payoff to player i of α* is at least as large as the expected
payoff to player i of (α_i, α*_{-i}), according to a payoff function
whose expected value represents player i's preferences over
prob. dist.

$$U_i(\alpha^*) \ge U_i(\alpha_i, \alpha^*_{-i}) \quad \text{for every mixed strategy } \alpha_i \text{ of player } i,$$

where U_i(α) is player i's expected payoff to the mixed strategy profile α.
Game Theory - A (Short) Introduction 110 9/12/2011
4.3 Mixed strategy Nash
equilibrium
 4.3.3 Best response functions

 Notation: B_i is player i's best response function

 For a strategic game with ordinal preferences: B_i(a_{-i}) is the set of
player i's best actions when the list of the other players' actions is a_{-i}

 For a strategic game with vNM preferences, B_i(α_{-i}) is the set of
player i's best mixed strategies when the list of the other players'
mixed strategies is α_{-i}.
the mixed strategy profile α* is a mixed strategy Nash
equilibrium if and only if α*_i is in B_i(α*_{-i}) for every player i

eg.: in Matching Pennies, the set of best responses to a mixed
strategy of the other player is either a single pure strategy or the set of all
mixed strategies.
Game Theory - A (Short) Introduction 111 9/12/2011
4.3 Mixed strategy Nash
equilibrium
 Two players – two actions games
 Player 1 has actions T and B
 Player 2 has actions L and R
 u_i (i = 1,2) denotes a Bernoulli payoff function for player i (a payoff
function over action pairs whose expected value represents player i's
preferences regarding prob. dist. over action pairs)
 Player 1's mixed strategy α_1 assigns probability α_1(T) to her
action T (denoted p) and probability α_1(B) to her action B
(denoted 1-p), with α_1(T) + α_1(B) = 1.
 Similarly, denote by q the probability that player 2's mixed
strategy assigns to L, and 1-q the probability it assigns to R.
 We take the players' choices to be independent: when the players
choose the mixed strategies α_1 and α_2, the probability of any
action pair (a_1, a_2) is the product of the corresponding
probabilities assigned by the mixed strategies.
Game Theory - A (Short) Introduction 112 9/12/2011
4.3 Mixed strategy Nash
equilibrium








 So, the probabilities of the four outcomes are (Figure 109.1):

              L (q)        R (1-q)
  T (p)       pq           p(1-q)
  B (1-p)     (1-p)q       (1-p)(1-q)

 From this probability distribution, we can compute player 1's
expected payoff to the mixed strategy pair (α_1, α_2):

$$pq\,u_1(T,L) + p(1-q)\,u_1(T,R) + (1-p)q\,u_1(B,L) + (1-p)(1-q)\,u_1(B,R)$$
Game Theory - A (Short) Introduction 113 9/12/2011
4.3 Mixed strategy Nash
equilibrium








which can be written more compactly as:

$$p\,[\,q\,u_1(T,L) + (1-q)\,u_1(T,R)\,] + (1-p)\,[\,q\,u_1(B,L) + (1-q)\,u_1(B,R)\,]$$

The first bracket is player 1's expected payoff when she uses the pure
strategy that assigns probability 1 to T and player 2 uses the mixed
strategy α_2, denoted E_1(T, α_2); the second bracket is her expected
payoff when she assigns probability 1 to B, denoted E_1(B, α_2). Thus
the expression equals

$$p\,E_1(T,\alpha_2) + (1-p)\,E_1(B,\alpha_2):$$

player 1's expected payoff to the mixed strategy pair (α_1, α_2) is a
weighted average of her expected payoffs to T and B when player 2
uses the mixed strategy α_2, with weights equal to the probabilities
assigned to T and B by α_1.
Game Theory - A (Short) Introduction 114 9/12/2011
4.3 Mixed strategy Nash
equilibrium
In particular, player 1’s expected payoff is a linear function of p
0
p 1
| | | |
2 1 2 1
, ) 1 ( , o o B E p T pE ÷ +
| |
2 1
,o B E
| | | |
2 1 2 1
, , o o B E T E >
| |
2 1
,o T E
(Figure 110.1)
Game Theory - A (Short) Introduction 115 9/12/2011
4.3 Mixed strategy Nash
equilibrium
 A significant implication of this linearity is that there are only three
possibilities for her best response to a given mixed strategy of player 2:
 player 1's unique best response is the pure strategy T (if
E_1(T,α_2) > E_1(B,α_2)): see Figure 110.1
 player 1's unique best response is the pure strategy B (if
E_1(T,α_2) < E_1(B,α_2)): see Figure 110.1 with a downward-sloping line
 all mixed strategies of player 1 yield the same expected payoff,
hence all are best responses (if E_1(T,α_2) = E_1(B,α_2)): see Figure
110.1 with a horizontal line
 in particular, a mixed strategy (p, 1-p) with 0 < p < 1 is never
a unique best response.

Game Theory - A (Short) Introduction 116 9/12/2011
4.3 Mixed strategy Nash
equilibrium
 Example: Matching Pennies revisited
 Represent each player’s preferences by the expected value of a
payoff unction that assigns the payoff 1 to a gain of $1 and the
payoff -1 to a loss of $1. The resulting strategic game with vNM
preferences is (figure 111.1)
                        Player 2
                   Head        Tail
Player 1   Head    1,-1        -1,1
           Tail    -1,1         1,-1
(Figure 111.1)
Game Theory - A (Short) Introduction 117 9/12/2011
4.3 Mixed strategy Nash
equilibrium
 Denote by p the probability that player 1’s mixed strategy assigns
to Head and q the probability that player 2’s mixed strategy assigns
to Head.

 Player 1’s expected payoff to pure strategy Head, given player 2
mixed strategy is : q . 1 + (1-q) .(-1) = 2q – 1

 Her expected payoff to Tail is : q . (-1) + (1-q) . 1 = 1 – 2q
(Figure 112.1: best response functions of the two players in the (p, q)-square, where p and q are the probabilities assigned to Head; each best response function jumps at ½.)
Game Theory - A (Short) Introduction 118 9/12/2011
4.3 Mixed strategy Nash
equilibrium
 Thus:
 if q < ½, player 1’ expected payoff to Tail exceeds her
expected payoff to Head (and hence exceeds also her
expected payoff to any mixed strategy that assigns a positive
probability to Head)
 similarly, if q > ½, her expected payoff to Head exceeds her
expected payoff to Tail.
 if q = ½, then both Head and Tail (and all her mixed strategies)
lead to the same payoff.

we conclude that player 1's best response to player 2's
strategy is the mixed strategy that assigns probability 0 to
Head if q < ½, the mixed strategy that assigns probability 1 to
Head if q > ½, and any of her mixed strategies if q = ½.

Game Theory - A (Short) Introduction 119 9/12/2011
4.3 Mixed strategy Nash
equilibrium
 The best response function of player 2 is similar (see figure 112.1)

 The set of mixed strategy Nash equilibria corresponds (as before)
to the set of intersections of the best response functions in figure
112.1.

Matching Pennies has no Nash Equilibrium if players are not
allowed to randomize !
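As a quick sanity check (not part of the original slides; the payoff matrix and helper name are illustrative), the following Python sketch evaluates player 1's expected payoffs in the game of Figure 111.1 and reproduces the best response behaviour derived above:

# Matching Pennies (Figure 111.1): player 1's Bernoulli payoffs, rows/columns = Head, Tail.
u1 = [[1, -1], [-1, 1]]

def expected_payoff(u, p, q):
    # expected payoff of the row player when she plays Head with probability p
    # and the column player plays Head with probability q
    return sum([p, 1 - p][i] * [q, 1 - q][j] * u[i][j] for i in range(2) for j in range(2))

print(expected_payoff(u1, 1, 0.5), expected_payoff(u1, 0, 0.5))  # 0.0 0.0 -> indifferent at q = 1/2
print(expected_payoff(u1, 1, 0.4), expected_payoff(u1, 0, 0.4))  # -0.2 0.2 -> Tail is the unique best response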
Game Theory - A (Short) Introduction 120 9/12/2011
4.3 Mixed strategy Nash
equilibrium
 Exercise 114.2

Find all the mixed strategy Nash equilibria of the strategic
games in Figure 114.2
         L        R                       L        R
T       6,0      0,6            T        0,1      0,2
B       3,2      6,0            B        2,2      0,1
(Figure 114.2)
Game Theory - A (Short) Introduction 121 9/12/2011
4.3 Mixed strategy Nash
equilibrium
 Exercise 114.3
Two people can perform a task if, and only if, they both exert effort. They are both
better off if they both exert effort and perform the task than if neither exerts effort
(and nothing is accomplished); the worst outcome for each person is that she
exerts effort and the other person does not (in which case again nothing is
accomplished). Specifically, the players’ preferences are represented by the
expected value of the payoff functions in Figure 115.1, which c is a positive
number less than 1 than can be interpreted as the cost of exerting effort. Find all
the mixed strategy Nash equilibria of this game. How do the equilibria change as c
increase? Explain the reasons for the changes.
               No Effort      Effort
No Effort        0,0           0,-c
Effort          -c,0          1-c,1-c
(Figure 115.1)
Game Theory - A (Short) Introduction 122 9/12/2011
4.3 Mixed strategy Nash
equilibrium
 4.3.4 A useful characterization of mixed strategy Nash
equilibrium
 The method used up to now to find mixed strategy Nash equilibria
involves constructing the players' best response functions. In
complicated games, this method may be intractable. There is a
characterization of mixed strategy Nash equilibria that is an
invaluable tool in the study of general games.
 The key is the following observation: a player's expected payoff
to a mixed strategy profile α is a weighted average of her
expected payoffs to the pure strategy profiles of the type (a_i, α_{-i}),
where the weight attached to each (a_i, α_{-i}) is the probability
α_i(a_i) assigned to the action a_i by the player's mixed
strategy α_i (see Section 4.3.3).
Game Theory - A (Short) Introduction 123 9/12/2011
4.3 Mixed strategy Nash
equilibrium
 Symbolically:

$$U_i(\alpha) = \sum_{a_i \in A_i} \alpha_i(a_i)\, E_i(a_i, \alpha_{-i})$$

where:
 A_i is player i's set of actions (pure strategies)
 E_i(a_i, α_{-i}) is her expected payoff when she uses the pure strategy
that assigns probability 1 to a_i and every other player j uses her
mixed strategy α_j.
Game Theory - A (Short) Introduction 124 9/12/2011
4.3 Mixed strategy Nash
equilibrium
 This leads to the following analysis:
 Let α* be a mixed strategy Nash equilibrium
 Denote by E*_i player i's expected payoff in the equilibrium

 Because α* is an equilibrium, player i's expected payoff, given α*_{-i},
to each of her strategies (including all her pure strategies) is at most E*_i

 But E*_i is a weighted average of player i's expected payoffs to the
pure strategies to which α*_i assigns positive probability

Thus, player i's expected payoffs to these pure strategies are all
equal to E*_i (if any were smaller, the weighted average would be
smaller!).

Game Theory - A (Short) Introduction 125 9/12/2011
4.3 Mixed strategy Nash
equilibrium
 We conclude that:
 the expected payoff to each action to which α*_i assigns positive
probability is E*_i
 the expected payoff to every other action is at most E*_i

 Proposition 116.2
A mixed strategy profile α* in a strategic game with vNM
preferences in which each player has finitely many actions is a
mixed strategy Nash equilibrium if and only if, for each player i,
 the expected payoff, given α*_{-i}, to every action to which α*_i assigns
positive probability is the same
 the expected payoff, given α*_{-i}, to every action to which α*_i assigns
zero probability is at most the expected payoff to any action to
which α*_i assigns positive probability
Each player's expected payoff in an equilibrium is her expected
payoff to any of her actions that she uses with positive
probability
Game Theory - A (Short) Introduction 126 9/12/2011
4.3 Mixed strategy Nash
equilibrium
 This proposition allows us to check whether a mixed strategy
profile is an equilibrium.

 Example 117.1

              L (0)       C (1/3)      R (2/3)
T (3/4)       ·,2          3,3          1,1
M (0)         ·,·          0,·          2,·
B (1/4)       ·,4          5,1          0,7
(Figure 117.1)
Game Theory - A (Short) Introduction 127 9/12/2011
4.3 Mixed strategy Nash
equilibrium
For the game in Figure 117.1 (in which the dots indicate
irrelevant payoffs), the indicated pair of strategies ((3/4,0,1/4)
for player 1 and (0,1/3,2/3) for player 2) is a mixed strategy
Nash equilibrium.

To verify this claim, it suffices, by proposition 116.2, to study
each player’s expected payoffs to her three pure strategies. For
player 1, these payoffs are:
$$T:\ \tfrac{1}{3}\cdot 3 + \tfrac{2}{3}\cdot 1 = \tfrac{5}{3} \qquad\quad
M:\ \tfrac{1}{3}\cdot 0 + \tfrac{2}{3}\cdot 2 = \tfrac{4}{3} \qquad\quad
B:\ \tfrac{1}{3}\cdot 5 + \tfrac{2}{3}\cdot 0 = \tfrac{5}{3}$$
Game Theory - A (Short) Introduction 128 9/12/2011
4.3 Mixed strategy Nash
equilibrium
Player 1’s mixed strategy assigns positive probability to T and B
and probability zero to M. So, the two conditions of proposition
116.2 are satisfied for player 1.

The same verification is easily done for player 2. Note, however,
that for player 2 the action L (which she uses with probability 0)
has the same expected payoff as her other two actions. This
equality is consistent with Proposition 116.2 (which requires only
that it be no greater).
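A small Python sketch (illustrative; it uses only the payoffs visible in Figure 117.1, since the dotted entries never receive positive weight) confirms both conditions of Proposition 116.2 for this strategy pair:

from fractions import Fraction as F

p1 = {'T': F(3, 4), 'M': F(0), 'B': F(1, 4)}      # player 1's mixed strategy
p2 = {'L': F(0), 'C': F(1, 3), 'R': F(2, 3)}      # player 2's mixed strategy

# Player 1's payoffs u1[row][col]; entries under L are never weighted (L has probability 0).
u1 = {'T': {'C': 3, 'R': 1}, 'M': {'C': 0, 'R': 2}, 'B': {'C': 5, 'R': 0}}
# Player 2's payoffs u2[row][col]; row M is never weighted (M has probability 0).
u2 = {'T': {'L': 2, 'C': 3, 'R': 1}, 'B': {'L': 4, 'C': 1, 'R': 7}}

e1 = {a: sum(p2[c] * v for c, v in u1[a].items()) for a in u1}          # T: 5/3, M: 4/3, B: 5/3
e2 = {c: sum(p1[r] * u2[r][c] for r in u2) for c in ('L', 'C', 'R')}    # L, C, R all equal 5/2
print(e1, e2)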
Game Theory - A (Short) Introduction 129 9/12/2011
4.3 Mixed strategy Nash
equilibrium
 Exercise 117.2 (Choosing numbers)
Players 1 and 2 each choose a positive integer up to K. If the
players choose the same number, then player 2 pays $1 to
player 1; otherwise no payment is made. Each player’s
preferences are represented by her expected monetary payoff.
 Show that the game has a mixed strategy Nash equilibrium in
which each player chooses each positive integer up to K with
probability 1/K
 Show that the game has no other mixed strategy Nash equilibria
(Deduce from the fact that player 1 assigns positive probability to
some action k that player 2 must do so; then look at the implied
restriction on player 1’s equilibrium strategy)
Game Theory - A (Short) Introduction 130 9/12/2011
4.3 Mixed strategy Nash
equilibrium
 Note finally that
 an implication of Proposition 116.2 is that a nondegenerate mixed
strategy equilibrium (a mixed strategy equilibrium that is not also a
pure strategy equilibrium) is never a strict Nash equilibrium: every
player whose mixed strategy assigns a positive probability to more
than one action is indifferent between her equilibrium mixed
strategy and every action to which this mixed strategy assigns
positive probability.
 The theory of mixed Nash equilibrium does not state that players
consciously choose their strategies at random given the equilibrium
probabilities. Rather, the conditions for equilibrium are designed to
ensure that it is consistent with a steady state. The question of how
a steady state may come about remains to be studied at this stage.


Game Theory - A (Short) Introduction 131 9/12/2011
4.3 Mixed strategy Nash
equilibrium
 4.3.5 Existence of equilibrium in finite games

Proposition 119.1 (Existence of mixed strategy Nash
equilibrium in finite games)
Every strategic game with vNM preferences in which each player
has finitely many actions has a mixed strategy Nash equilibrium.

This proposition does not help us find an equilibrium, but it is a
useful fact.
Note also that:
 the finiteness of the number of actions is a sufficient condition for
the existence of an equilibrium, not a necessary one
 a player's strategy in a mixed strategy Nash equilibrium may
assign probability 1 to a single action.
Game Theory - A (Short) Introduction 132 9/12/2011
4.4 Dominated actions
 Definition 120.1 (Strict Domination)
In a strategic game with vNM preferences, player i's mixed
strategy α_i strictly dominates her action a'_i if

$$U_i(\alpha_i, a_{-i}) > u_i(a'_i, a_{-i}) \quad \text{for every list } a_{-i} \text{ of the other players' actions,}$$

where u_i is a Bernoulli payoff function and U_i(α_i, a_{-i}) is player i's
expected payoff under u_i when she uses the mixed strategy α_i
and the actions chosen by the other players are given by a_{-i}.
Game Theory - A (Short) Introduction 133 9/12/2011
4.4 Dominated actions
 An action not strictly dominated by any pure strategy may be
strictly dominated by a mixed strategy (see Figure 120.1)
          L       R
T         1       1
M         4       0
B         0       3
(Figure 120.1: only player 1's payoffs are shown)

The action T of player 1 is not strictly (or weakly) dominated
by M or B, but it is strictly dominated by the mixed strategy
that assigns probability ½ to M and probability ½ to B.
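A quick check of Definition 120.1 with these numbers (a step the slide leaves implicit):

$$\tfrac12\cdot 4 + \tfrac12\cdot 0 = 2 > 1 = u_1(T,L), \qquad \tfrac12\cdot 0 + \tfrac12\cdot 3 = \tfrac32 > 1 = u_1(T,R),$$

so the strict inequality holds against both actions of player 2.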
Game Theory - A (Short) Introduction 134 9/12/2011
4.4 Dominated actions
 Exercise 120.2 (Strictly dominated mixed strategy)
In Figure 120.1, the mixed strategy that assigns probability ½ to
M and ½ to B is not the only mixed strategy that strictly
dominates T. Find all the mixed strategies that do so.

 Exercise 120.3 (Strict domination for mixed strategies)
Determine whether each of the following statements is true or
false:
 A mixed strategy that assigns positive probability to a strictly
dominated action is strictly dominated.
 A mixed strategy that assigns positive probability only to actions
that are not strictly dominated is not strictly dominated.
Game Theory - A (Short) Introduction 135 9/12/2011
4.4 Dominated actions
 A strictly dominated action is not a best response to any
collection of mixed strategies of the other players:
 Suppose that player i's action a'_i is strictly dominated by her mixed
strategy α_i
 Player i's expected payoff U_i(α_i, α_{-i}) when she uses the mixed strategy
α_i and the other players use the mixed strategies α_{-i} is a weighted
average of her payoffs U_i(α_i, a_{-i}) as a_{-i} varies over all the collections of
actions for the other players, with the weight on each a_{-i} equal to the
probability with which it occurs when the other players' mixed
strategies are α_{-i}.
 Player i's expected payoff when she uses the action a'_i and the other
players use the mixed strategies α_{-i} is a similar weighted average; the
weights are the same, but the terms take the form u_i(a'_i, a_{-i}) rather than
U_i(α_i, a_{-i}).
 The fact that a'_i is strictly dominated by α_i means that
U_i(α_i, a_{-i}) > u_i(a'_i, a_{-i}) for every collection a_{-i} of the other players' actions.
 Hence player i's expected payoff when she uses the mixed strategy α_i
exceeds her expected payoff when she uses the action a'_i, given α_{-i}.

Game Theory - A (Short) Introduction 136 9/12/2011
4.4 Dominated actions
Consequently, a strictly dominated action is not used with
positive probability in any mixed strategy Nash equilibrium.

Definition 121.1 (Weak domination)
In a strategic game with vNM preferences, player i's mixed
strategy α_i weakly dominates her action a'_i if

$$U_i(\alpha_i, a_{-i}) \ge u_i(a'_i, a_{-i}) \quad \text{for every list } a_{-i} \text{ of the other players' actions}$$

and

$$U_i(\alpha_i, a_{-i}) > u_i(a'_i, a_{-i}) \quad \text{for some list } a_{-i} \text{ of the other players' actions,}$$

where u_i is a Bernoulli payoff function and U_i(α_i, a_{-i}) is player i's
expected payoff under u_i when she uses the mixed strategy α_i
and the actions chosen by the other players are given by a_{-i}.
Game Theory - A (Short) Introduction 137 9/12/2011
4.4 Dominated actions
 Note that, just as a weakly dominated action may be used in a Nash
equilibrium, a weakly dominated action may be used with
positive probability in a mixed strategy equilibrium. We can
therefore not eliminate weakly dominated actions from
consideration when finding mixed strategy equilibria.

However:

Proposition 122.1 (Existence of mixed strategy Nash
equilibrium with no weakly dominated strategies in finite games)
Every strategic game with vNM preferences in which each
player has finitely many actions has a mixed strategy Nash
equilibrium in which no player’s strategy is weakly dominated.
Game Theory - A (Short) Introduction 138 9/12/2011
4.5 Pure equilibria when
randomization is allowed
 Equilibria when the players are not allowed to randomize
remain equilibria when they are allowed to randomize

Proposition 122.2 (Pure strategy equilibria survive when
randomization is allowed)
Let a* be a Nash equilibrium of G and, for each player i, let α*_i
be the mixed strategy of player i that assigns probability one to
the action a*_i. Then α* is a mixed strategy Nash equilibrium of
G'.




Game Theory - A (Short) Introduction 139 9/12/2011
4.5 Pure equilibria when
randomization is allowed
 Any pure equilibria that exist when the players are allowed to
randomize are equilibria when they are not allowed to
randomize.

Proposition 123.1 (Pure strategy equilibria survive when
randomization is prohibited)
Let α* be a mixed strategy Nash equilibrium of G' in which the
mixed strategy of each player i assigns probability one to the
single action a*_i. Then a* is a Nash equilibrium of G.

Game Theory - A (Short) Introduction 140 9/12/2011
4.5 Pure equilibria when
randomization is allowed
 To establish these two propositions, let N be a set of players
and let A_i, for each player i, be a set of actions.

 Consider the following two games:
 G: the strategic game with ordinal preferences in which the set of
players is N, the set of actions of each player i is A_i, and the
preferences of each player i are represented by the payoff function u_i
 G': the strategic game with vNM preferences in which the set of
players is N, the set of actions of each player i is A_i, and the
preferences of each player i are represented by the expected
value of u_i
Game Theory - A (Short) Introduction 141 9/12/2011
4.5 Pure equilibria when
randomization is allowed
 Proposition 122.2
Let a* be a Nash equilibrium of G and, for each player i, let α*_i
be the mixed strategy that assigns probability 1 to a*_i. Since a* is a Nash
equilibrium of G, we know that in G' no player i has an action
that yields her a payoff higher than does a*_i when all other
players adhere to α*_{-i}. Thus α* satisfies the two conditions in
Proposition 116.2, so it is a mixed strategy equilibrium of G'.
 Proposition 123.1
Let α* be a mixed strategy Nash equilibrium of G' in which every
player's mixed strategy is pure. For each player i, denote by a*_i the
action to which α*_i assigns probability one. Then no mixed
strategy of player i yields her a payoff higher than does α*_i. Thus
a* is a Nash equilibrium of G.

Game Theory - A (Short) Introduction 142 9/12/2011
4.7 Equilibrium in a single
population
 Definition 129.1 (Symmetric two-player strategic game with
vNM preferences)
A two-player strategic game with vNM preferences is symmetric
if the players' sets of actions are the same and the players'
preferences are represented by the expected values of payoff
functions u_1 and u_2 for which u_1(a_1, a_2) = u_2(a_2, a_1) for every
action pair (a_1, a_2).

 Definition 129.2 (Symmetric mixed strategy Nash equilibrium)
A profile α* of mixed strategies in a strategic game with vNM
preferences in which each player has the same set of actions is
a symmetric mixed strategy Nash equilibrium if it is a mixed
strategy Nash equilibrium and α*_i is the same for every player i.

Game Theory - A (Short) Introduction 143 9/12/2011
4.7 Equilibrium in a single
population
 Game of approaching pedestrians (Figure 129.1)







This game has two deterministic steady states ( (Left,Left) and
(Right,Right) ), corresponding to the two symmetric Nash equilibria in
pure strategies.
The game has also a symmetric mixed strategy Nash equilibrium, in
which each player assigns probability ½ to Left and probability ½ to
Right.
This equilibrium corresponds to a steady state in which half of all
encounters result in collisions!
           Left      Right
Left       1,1       0,0
Right      0,0       1,1
(Figure 129.1)
Game Theory - A (Short) Introduction 144 9/12/2011
4.7 Equilibrium in a single
population
 Exercise 130.3 (Bargaining)
Pairs of players from a single population bargain over the division of a
pie of size 10. The members of a pair simultaneously make demands.
The possible demands are nonnegative even integers up to 10.
If the demands sum to 10, then each player receives her demand. If the
demands sum to less than 10, then each player receives her demand
plus half of the pie that remains after both demands have been
satisfied. If the demands sum to more than 10, then neither player
receives any payoff.

Find all the symmetric mixed strategy Nash equilibria in which each
player assigns positive probability to at most two demands (many
situations in which each player assigns positive probability to two
actions – say a' and a'' – can be ruled out as equilibria because when
one player uses such a strategy, some action yields the other player a
payoff higher than does one or both of the actions a' and a'').
Game Theory - A (Short) Introduction 145 9/12/2011
4.9 The formation of players’
beliefs
 In a Nash equilibrium, each player chooses a strategy that
maximizes her expected payoff, knowing the other players’
strategies.
 The idea underlying the previous analysis is that the players
have learned each other’s strategies from their experience
playing the game.
 The idealized situation is the following:
 for each player in the game, there is a large population of
individuals who may take the role of that player
 in any play of the game, one participant is drawn randomly from
each population
In this situation, a new individual who joins a population can
learn the other players’ strategies by observing their actions
over many plays of the game.
Game Theory - A (Short) Introduction 146 9/12/2011
4.9 The formation of players’
beliefs
 As long as the number of new players is low, existing players’
encounters with neophytes (who may use nonequilibrium
strategies) will be sufficiently rare that their beliefs about the
steady state will not be disturbed. So, a new player’s problem is
simply to learn the other players’ actions.

 But what might happen if new players simultaneously join more
than one population in sufficient numbers, so that the
probability that they encounter one another is no longer small? In
particular, can we expect a steady state to be reached if no one
has experience?
Game Theory - A (Short) Introduction 147 9/12/2011
4.9 The formation of players’
beliefs
 4.9.1 Eliminating dominated actions
In some games, players may reasonably be expected to choose
their Nash equilibrium actions from an introspective analysis of
the game:
 At the extreme (eg, the Prisoner's Dilemma), each player's best action
is independent of the other players' actions.
 In a less extreme case, some player’s best action may depend on
the other players’ actions, but the actions the other players will
choose may be clear because each of these players has an action
that strictly dominates all others.
Game Theory - A (Short) Introduction 148 9/12/2011
4.9 The formation of players’
beliefs
eg.: in the game in Figure 135.1, player 2's action R strictly
dominates L. So, no matter what player 2 thinks player 1 will
play, she should choose R. Consequently player 1, who can
deduce by this argument that player 2 will choose R, may
reason that she should choose B, even without any experience
of the game.
(Figure 135.1: a two-player game with actions T and B for player 1 and L and R for player 2)
Game Theory - A (Short) Introduction 149 9/12/2011
4.9 The formation of players’
beliefs
 4.9.2 Learning
Another approach to the question of how a steady state might be
reached assumes that players learn:
 each player starts with an unexplained "prior" belief about the other players'
actions
 she changes these beliefs in response to information she receives
 Two theories are:
 Best response dynamics (illustrated in the sketch below): a simple theory
assumes that in each period after the first, each player believes that the other
players will choose the actions they chose in the previous period:
 in the first period, each player chooses a best response to an arbitrary
deterministic belief about the other players' actions
 in every subsequent period, each player chooses a best response to the
other players' actions in the previous period
An action profile that remains the same from period to period (a steady
state) is then a pure Nash equilibrium of the game. The two questions
are then:
 does the process converge to a steady state?
 how long does it take to converge?
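A minimal Python sketch of best response dynamics (illustrative; the example game, tie-breaking and variable names are assumptions, not taken from the slides):

# Best response dynamics in a 2x2 game; u1[r][c] is the row player's payoff, u2[r][c] the column player's.
u1 = [[2, 0], [3, 1]]   # Prisoner's Dilemma payoffs for player 1 (rows/cols: Q=0, F=1)
u2 = [[2, 3], [0, 1]]   # payoffs for player 2

def br_row(c):  # row player's best response to column action c
    return max((0, 1), key=lambda r: u1[r][c])

def br_col(r):  # column player's best response to row action r
    return max((0, 1), key=lambda c: u2[r][c])

r, c = 0, 0     # arbitrary first-period actions (best responses to arbitrary initial beliefs)
for t in range(5):
    r, c = br_row(c), br_col(r)   # simultaneous updating on last period's actions
    print(t, r, c)                # settles at (1, 1) = (F, F), the game's pure Nash equilibrium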
Game Theory - A (Short) Introduction 150 9/12/2011
4.9 The formation of players’
beliefs
 eg.: the BoS game (example 18.2), for some initial beliefs,
does not converge:
 if player 1 initially believes that player 2 will choose
Stravinsky and player 2 believes that player 1 will
initially choose Bach, then the players’ choices will
subsequently alternate in definitively between the
action pairs (Bach, Stravinsky) and
(Stravinsky,Bach).
 Fictitious play: under the Best Response Dynamics, a player’s
belief does not admit the possibility that her opponents’ actions are
realizations of mixed strategies. Under the Fictitious play theory,
players consider actions in all the previous periods when forming a
belief about their opponents’ strategies. They treat these actions as
realizations of mixed strategies:
 each player begins with an arbitrary probabilistic belief about
the other player’s action
 then, in any period, she adopts the belief that her opponent is
using a mixed strategy in which the probability of each action is
proportional to the frequency with which her opponent chose
that action in the previous periods.
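A corresponding sketch of fictitious play in Matching Pennies (illustrative; the tie-breaking rule and the initial counts are arbitrary assumptions):

# Fictitious play in Matching Pennies: each player best-responds to the empirical
# frequency of the opponent's past actions (Head = 0, Tail = 1).
u1 = [[1, -1], [-1, 1]]        # player 1's payoffs; player 2's are the negatives
counts1 = [1, 1]               # player 2's counts of player 1's past actions (arbitrary prior)
counts2 = [1, 1]               # player 1's counts of player 2's past actions (arbitrary prior)

def br(u, counts):             # best response to the empirical mixed strategy given by counts
    total = sum(counts)
    return max((0, 1), key=lambda a: sum(counts[b] / total * u[a][b] for b in (0, 1)))

for t in range(1000):
    a1 = br(u1, counts2)
    a2 = br([[-x for x in row] for row in u1], counts1)  # -u1; works here because u1 is symmetric
    counts1[a1] += 1
    counts2[a2] += 1

print(counts1[0] / sum(counts1), counts2[0] / sum(counts2))  # both frequencies approach 1/2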
Game Theory - A (Short) Introduction 151 9/12/2011
4.9 The formation of players’
beliefs
 Note that:
 in any two-player game in which each player has two
actions (eg, Matching Pennies), this process
converges to a mixed strategy Nash equilibrium from any
initial beliefs;
 for other games, there are initial beliefs for which the
process does not converge.
Game Theory - A (Short) Introduction 152 9/12/2011
4.10 Extension: finding all
mixed strategy Nash equilibria
 The following systematic method can be used to find all mixed strategy
Nash equilibria of a game:
 For each player i, choose a subset S_i of her set A_i of actions
 Check whether there exists a mixed strategy profile α such that:
 (i) the set of actions to which strategy α_i assigns positive probability is S_i
 (ii) α satisfies the conditions of Proposition 116.2
 Repeat the analysis for every collection of subsets of the players' sets
of actions.

 The shortcoming of the method is that for games in which each player
has several actions, or in which there are several players, the
number of possibilities to examine is huge. In a two-player game in
which each player has three actions:
 each player's set of actions has seven non-empty subsets (three
consisting of a single action, three consisting of two actions, and one
consisting of the entire set of actions),
 so that there are 49 (7×7) possible collections of subsets to check.
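The following Python sketch applies this procedure to a two-player game in which each player has two actions (illustrative only; the function name and payoff conventions are assumptions, and the five support pairs in which exactly one player mixes are omitted because they yield equilibria only in degenerate games where a player is indifferent against a pure action):

from fractions import Fraction as F

def mixed_equilibria_2x2(u, v):
    """Support enumeration for a 2x2 game; u, v are payoffs of players 1 and 2 indexed [row][col].
    Returns (p, q) = (Prob(T), Prob(L)) for each equilibrium found."""
    eq = []
    # supports consisting of a single action for each player: check the four action pairs
    for r in (0, 1):
        for c in (0, 1):
            if u[r][c] >= u[1 - r][c] and v[r][c] >= v[r][1 - c]:
                eq.append((F(1 - r), F(1 - c)))        # p = 1 if r is T(=0), etc.
    # both players mix over both actions: indifference conditions of Proposition 116.2
    du = u[0][0] - u[0][1] - u[1][0] + u[1][1]
    dv = v[0][0] - v[0][1] - v[1][0] + v[1][1]
    if du != 0 and dv != 0:
        q = F(u[1][1] - u[0][1], du)                   # makes player 1 indifferent between T and B
        p = F(v[1][1] - v[1][0], dv)                   # makes player 2 indifferent between L and R
        if 0 < p < 1 and 0 < q < 1:
            eq.append((p, q))
    return eq

# Example: Matching Pennies has only the mixed equilibrium (1/2, 1/2).
print(mixed_equilibria_2x2([[1, -1], [-1, 1]], [[-1, 1], [1, -1]]))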
Game Theory - A (Short) Introduction 153 9/12/2011
4.10 Extension: finding all
mixed strategy Nash equilibria
 Example 138.1: Finding all mixed strategy equilibria of a two-
player game in which each player has two actions.






 Denote the actions and payoffs as in Figure 139.1.
 Each player’s set of actions has three nonempty subsets:
 two each consisting of a single action
 one consisting of both action
 Thus, there are nine (3x3) pairs of subsets of the players’ action
sets.
           L              R
T       u11,v11        u12,v12
B       u21,v21        u22,v22
(Figure 139.1)
Game Theory - A (Short) Introduction 154 9/12/2011
4.10 Extension: finding all
mixed strategy Nash equilibria
 For each pair (S_1, S_2), we check whether there is a pair (α_1, α_2) of mixed
strategies such that each strategy α_i assigns positive probability
only to actions in S_i and the conditions in Proposition 116.2 are
satisfied:
 checking the four pairs of subsets in which each player's
subset consists of a single action amounts to checking whether
any of the four pairs of actions is a pure strategy equilibrium.
 consider the pair of subsets {T,B} for player 1 and {L} for player
2:
 the second condition in Proposition 116.2 is automatically
satisfied for player 1 (she has no actions to which she
assigns probability 0)
 the first condition in Proposition 116.2 is automatically
satisfied for player 2 (she assigns positive probability to
only one action).

Game Theory - A (Short) Introduction 155 9/12/2011
4.10 Extension: finding all
mixed strategy Nash equilibria
 Thus, for there to be a mixed strategy equilibrium in which
player 1's probability of using T is p, we need:
 u11 = u21 : player 1's payoffs to her two actions must be
equal
 p v11 + (1-p) v21 ≥ p v12 + (1-p) v22 : L must be at least as
good as R for player 2, given player 1's mixed strategy.
 A similar argument applies to the three other pairs of subsets
in which one player's subset consists of both her actions and
the other player's subset consists of a single action.
 Finally, to check whether there is a mixed strategy equilibrium
in which the subsets are {T,B} for player 1 and {L,R} for player
2, we need to find a pair of mixed strategies that satisfies the
first condition of Proposition 116.2 (the second condition being
automatically satisfied, since no action has probability 0). That is,
we need to find p and q such that:
 q u11 + (1-q) u12 = q u21 + (1-q) u22
 p v11 + (1-p) v21 = p v12 + (1-p) v22
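Solving these two indifference conditions (a routine step not spelled out on the slide; it requires the denominators below to be nonzero and the resulting values to lie strictly between 0 and 1) gives

$$q = \frac{u_{22}-u_{12}}{u_{11}-u_{12}-u_{21}+u_{22}}, \qquad p = \frac{v_{22}-v_{21}}{v_{11}-v_{12}-v_{21}+v_{22}}.$$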
Game Theory - A (Short) Introduction 156 9/12/2011
4.10 Extension: finding all
mixed strategy Nash equilibria
 Example 139.2: Find all mixed strategy equilibria of a variant of
BoS






 Exercise 141.1: Find all mixed strategy equilibria of a two-
player game
(Figure 139.2)
          B        S        X
B        4,2      0,0      0,1
S        0,0      2,4      1,3

(Figure 141.1)
          L        M        R
T        2,2      0,3      1,3
B        3,2      1,1      0,2
Game Theory - A (Short) Introduction 157 9/12/2011
4.10 Extension: finding all
mixed strategy Nash equilibria
 Exercise 142.1: Find the mixed strategy Nash equilibria of the
three-player game in Figure 142.1 (each player has two actions)

          A          B
A       1,1,1      0,0,0
B       0,0,0      0,0,0
(Figure 142.1)
Game Theory - A (Short) Introduction 158 9/12/2011
4.11 Extension: games in which
each player has a continuum of
actions
 Consider now the case of a continuum of actions: the principles involved
in finding mixed strategy equilibria are the same as for games
with finitely many actions, though the techniques are different.
 In a game in which a player has a continuum of actions, a mixed
strategy of a player is determined by the probabilities it assigns to sets
of actions.
 Proposition 116.2 becomes
Proposition 142.2 (Characterization of mixed strategy Nash
equilibrium)
A mixed strategy profile α* in a strategic game with vNM preferences is
a mixed strategy Nash equilibrium if and only if, for each player i,
 α*_i assigns probability zero to the set of actions a_i for which the action
profile (a_i, α*_{-i}) yields player i an expected payoff less than her expected
payoff to α*
 for no action a_i does the action profile (a_i, α*_{-i}) yield player i an expected
payoff greater than her expected payoff to α*.

Game Theory - A (Short) Introduction 159 9/12/2011
4.11 Extension: games in which
each player has a continuum of
actions
 Games with a continuum of actions can be very complex to
analyze. A significant class consists of games in which
each player's set of actions is a one-dimensional interval of
numbers:
 Consider such a game with two players
 Let player i's set of actions be an interval of numbers, for i = 1, 2
 Identify each player's mixed strategy with a cumulative probability
distribution on this interval: the mixed strategy of each player i is a
nondecreasing function F_i for which 0 ≤ F_i(a_i) ≤ 1 for every action a_i.
The number F_i(a_i) is the probability that player i's action is at most a_i.
 the form of a mixed strategy Nash equilibrium in such a game can
be very complex, but some such games have equilibria of
a particularly simple form, in which each player's equilibrium mixed
strategy assigns probability zero outside some interval.

Game Theory - A (Short) Introduction 160 9/12/2011
4.11 Extension: games in which
each player has a continuum of
actions
 The mixed strategies (F_1, F_2) satisfy the following conditions, for
i = 1, 2:
 There are numbers x_i and y_i such that player i's mixed strategy
F_i assigns probability zero except in the interval from x_i to y_i:
F_i(z) = 0 for z < x_i, and F_i(z) = 1 for z ≥ y_i.
 Player i's expected payoff when her action is a_i and the other
player uses her mixed strategy F_j takes the form:
 = c_i for x_i ≤ a_i ≤ y_i
 ≤ c_i for a_i < x_i and for a_i > y_i
where c_i is a constant.


Game Theory - A (Short) Introduction 161 9/12/2011
4.11 Extension: games in which
each player has a continuum of
actions
 Example 143.1 (All-pay auction)
Two people submit sealed bids for an object worth $K to each of
them. Each person's bid may be any nonnegative number up to
$K. The winner is the person whose bid is higher. In the event
of a tie, each person receives half of the object (which she
values at $K/2). Each person pays her bid, regardless of
whether she wins, and has preferences represented by the
expected amount of money she receives.
Game Theory - A (Short) Introduction 162 9/12/2011
4.11 Extension: games in which
each player has a continuum of
actions
 This situation may be modeled by the following strategic game:
 Players: the two bidders
 Actions: each player's set of actions is the set of possible bids
(nonnegative numbers up to K)
 Payoff functions: each player i's preferences are represented by
the expected value of the payoff function given by

$$u_i(a_1,a_2) = \begin{cases} -a_i & \text{if } a_i < a_j \\[2pt] \tfrac{K}{2} - a_i & \text{if } a_i = a_j \\[2pt] K - a_i & \text{if } a_i > a_j \end{cases}$$

eg.: a competition between two firms to develop a new product by
some deadline, where the firm that spends the most develops a
better product, which captures the entire market.
Game Theory - A (Short) Introduction 163 9/12/2011
4.11 Extension: games in which
each player has a continuum of
actions
 An all-pay auction has no pure strategy Nash equilibrium, by the
following argument:
 No pair of actions (x,x) with x < K is a Nash equilibrium because
either player can increase her payoff by slightly increasing her bid
 (K,K) is not a Nash equilibrium because either player can increase
her payoff from –K/2 to 0 by reducing her bid to 0
 No pair of actions (a_1, a_2) with a_1 ≠ a_2 is a Nash equilibrium, because
the player whose bid is higher can increase her payoff by reducing
her bid (and the player whose bid is lower can, if her bid is positive,
increase her payoff by reducing her bid to 0)


Game Theory - A (Short) Introduction 164 9/12/2011
4.11 Extension: games in which
each player has a continuum of
actions
 Consider the possibility that the game has a mixed strategy Nash
equilibrium. Denote by F_i player i's mixed strategy (a cumulative
distribution function over the interval of possible bids).
 We look for an equilibrium in which neither mixed strategy
assigns positive probability to any single bid (there are infinitely
many possible bids and, for a continuous random variable,
Prob(x = c) = 0).
 In that case, F_i(a_i) is both the probability that player i bids at most a_i
and the probability that she bids less than a_i.
 We restrict our attention to strategy pairs (F_1, F_2) for which, for
i = 1, 2, there are numbers x_i and y_i such that F_i assigns positive
probability only to the interval from x_i to y_i.
Game Theory - A (Short) Introduction 165 9/12/2011
4.11 Extension: games in which
each player has a continuum of
actions
 To investigate the possibility of such an equilibrium, consider
player 1's expected payoff when she uses the action a_1, given
player 2's mixed strategy F_2:
 if a_1 < x_2, then a_1 is less than player 2's bid with probability one, so that
player 1's payoff is -a_1
 if a_1 > y_2, then a_1 exceeds player 2's bid with probability one, so that
player 1's payoff is K - a_1
 if x_2 ≤ a_1 ≤ y_2, then player 1's expected payoff is computed as follows:
 with probability F_2(a_1), player 2's bid is less than a_1, in which case player
1's payoff is K - a_1
 with probability 1 - F_2(a_1), player 2's bid exceeds a_1, in which case player
1's payoff is -a_1
 by assumption, the probability that player 2's bid is exactly equal to a_1 is
zero
Player 1's expected payoff is therefore
(K - a_1) F_2(a_1) + (-a_1)(1 - F_2(a_1)) = K F_2(a_1) - a_1
Game Theory - A (Short) Introduction 166 9/12/2011
4.11 Extension: games in which
each player has a continuum of
actions
 We need to find values of x_1 and y_1 and a strategy F_2 such that
player 1's expected payoff satisfies the condition of Proposition
142.2: it is a constant c_1 on the interval from x_1 to y_1 and less than
this constant outside this interval.

(Figure 144.1: player 1's expected payoff as a function of her bid over
her action interval, equal to the constant c_1 on [x_1, y_1] and below c_1
elsewhere.)
Game Theory - A (Short) Introduction 167 9/12/2011
4.11 Extension: games in which
each player has a continuum of
actions
 The conditions are therefore:
 K F_2(a_1) - a_1 = c_1 for x_1 ≤ a_1 ≤ y_1, for some constant c_1
 F_2(x_2) = 0 and F_2(y_2) = 1
 F_2 must be nondecreasing (it is a CDF)
and analogous conditions must hold for x_2, y_2, and F_1.

 Solution: we see that if x_1 = x_2 = 0, y_1 = y_2 = K, and
F_1(z) = F_2(z) = z/K for all z with 0 ≤ z ≤ K, these conditions are
fulfilled. Each player's expected payoff is then constant and equal
to 0 for all her actions.

 Thus, the game has a mixed strategy Nash equilibrium in which
each player randomizes uniformly over all her actions. Proving
that it is the only mixed strategy Nash equilibrium is more
complex.
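A small numerical sanity check (illustrative only; the value K = 10 and the sample size are arbitrary): with player 2 bidding uniformly on [0, K], player 1's expected payoff K·F_2(a_1) − a_1 is, up to sampling noise, zero for every bid a_1.

import random

K = 10.0
random.seed(0)
samples = [random.uniform(0, K) for _ in range(200_000)]   # player 2's uniform bids

def payoff1(a1, a2):
    # all-pay auction payoff of player 1 (ties have probability zero here)
    if a1 > a2:
        return K - a1
    if a1 < a2:
        return -a1
    return K / 2 - a1

for a1 in (0.0, 2.5, 5.0, 7.5, 10.0):
    avg = sum(payoff1(a1, a2) for a2 in samples) / len(samples)
    print(a1, round(avg, 3))    # all values are close to 0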
Game Theory - A (Short) Introduction 168 9/12/2011
4.12 Appendix: representing
preferences by expected payoffs
 4.12.1 Expected payoffs
 Suppose that a decision-maker has preferences over a set of
deterministic outcomes and that each of her actions results in a
lottery (probability distribution) over these outcomes
 To determine the action she chooses, we need her preferences
over lotteries
 We cannot derive these preferences from her preferences over
deterministic outcomes. So, assume we are given her preferences over
lotteries.
 Under fairly weak assumptions, we can represent these
preferences by a payoff function: we can find a function, say U,
over lotteries (p_1, ..., p_K) such that U(p_1, ..., p_K) > U(p'_1, ..., p'_K)
if and only if the decision-maker prefers (p_1, ..., p_K) to (p'_1, ..., p'_K),
where p_k is the probability with which the kth outcome occurs.
Game Theory - A (Short) Introduction 169 9/12/2011
4.12 Appendix: representing
preferences by expected payoffs
 In most cases, however, we need more structure to go further in the
analysis. The standard approach, developed by von Neumann and
Morgenstern (1944), is to impose an additional assumption (known as the
"independence assumption") that allows us to conclude that the decision-
maker's preferences can be represented by an expected payoff function.
Under this assumption, there is a payoff function u over deterministic
outcomes such that the decision-maker's preference relation over lotteries
is represented by the function

$$U(p_1,\dots,p_K) = \sum_{k=1}^{K} p_k\, u(a_k)$$

where a_k is the kth outcome of the lottery; that is,

$$\sum_{k=1}^{K} p_k\, u(a_k) > \sum_{k=1}^{K} p'_k\, u(a_k)$$

if and only if the decision-maker prefers the lottery (p_1, ..., p_K) to (p'_1, ..., p'_K).
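A minimal Python sketch of this representation (the outcomes, payoffs and lotteries are those of the example on the next slide; the function name is an assumption):

def expected_payoff(lottery, u):
    """Expected value of the Bernoulli payoff function u under a lottery
    given as a list of probabilities over the outcomes (same order as u)."""
    return sum(p * x for p, x in zip(lottery, u))

u = [0, 1, 4]                                 # Bernoulli payoffs for outcomes $0, $1, $5
print(expected_payoff([0.5, 0.0, 0.5], u))    # 2.0
print(expected_payoff([0.0, 0.75, 0.25], u))  # 1.75 -> the first lottery is preferred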
Game Theory - A (Short) Introduction 170 9/12/2011
4.12 Appendix: representing
preferences by expected payoffs
 This sort of payoff function (for which the decision-maker's
preferences are represented by the expected value of the
payoffs) is known as a Bernoulli payoff function.

eg.: suppose that there are three possible deterministic
outcomes: the decision-maker may receive $0, $1 or $5 (and
naturally prefers $5 to $1 to $0). Suppose that she prefers the
lottery (1/2,0,1/2) to the lottery (0,3/4,1/4), where probabilities
are given for outcomes $0, $1 and $5. This preference is
consistent with preferences represented by the expected value
of a payoff function u for which u(0)=0, u(1)=1 and u(5)=4:
$$\tfrac12\cdot 0 + \tfrac12\cdot 4 \;>\; \tfrac34\cdot 1 + \tfrac14\cdot 4$$
Game Theory - A (Short) Introduction 171 9/12/2011
4.12 Appendix: representing
preferences by expected payoffs
 The great advantage of Bernoulli payoff functions is that preferences are
completely specified by the payoff function: once we know u(a_k) for each
possible outcome a_k, we know the decision-maker's preferences among
all lotteries.
 A Bernoulli payoff function must not, however, be confused with a payoff
function that merely represents the decision-maker's preferences over
deterministic outcomes:
 if u is a Bernoulli payoff function, it certainly is a payoff function that
represents the decision-maker's preferences over deterministic
outcomes
 however, the converse is not true.
eg.: suppose a decision-maker prefers $5 to $1 to $0 and prefers the lottery
(1/2, 0, 1/2) to (0, 3/4, 1/4). Define u by u(0) = 0, u(1) = 3 and u(5) = 4. u is
compatible with her preferences over deterministic outcomes. However, it is
not compatible with her preferences over lotteries:

$$\tfrac12\cdot 0 + \tfrac12\cdot 4 \;<\; \tfrac34\cdot 3 + \tfrac14\cdot 4$$
Game Theory - A (Short) Introduction 172 9/12/2011
4.12 Appendix: representing
preferences by expected payoffs
 4.12.2 Equivalent Bernoulli payoff functions
Lemma 148.1 (Equivalence of Bernoulli payoff functions)
Suppose that there are at least three possible outcomes. The
expected values of the Bernoulli payoff functions u and v
represent the same preferences over lotteries if and only if there
exist numbers k and m (with m > 0) such that u(x) = k + m v(x), for
all x.

 Exercise 149.2 (Normalized Bernoulli payoff functions)
Suppose that a decision-marker’s preferences can be
represented by the expected value of the Bernoulli payoff
function u. Find a Bernoulli payoff function whose expected
value represents the decision-maker’s preferences and assigns
a payoff of 1 to the best outcome and a payoff of 0 to the worst
outcome.
Game Theory - A (Short) Introduction 173 9/12/2011
4.12 Appendix: representing
preferences by expected payoffs
 4.12.3 Equivalent strategic games with vNM preferences







 the three games of Figure 150.1 represent the same strategic game
with deterministic preferences
 only the left and middle tables represent the same strategic game
with vNM preferences. The reason is that the payoff functions in the
middle table are linear functions of the payoff functions in the left
table, whereas the payoff functions in the right table are not.
(Figure 150.1: three payoff tables for a game with actions B and S)
Game Theory - A (Short) Introduction 174 9/12/2011
4.12 Appendix: representing
preferences by expected payoffs
 Denote by u_i, v_i, w_i the Bernoulli payoff functions of the three games.
Then v_1(a) = 2u_1(a) and v_2(a) = -3 + u_2(a). But w_1 is not a linear
function of u_1: there are no constants μ and θ such that
w_1(a) = μ + θ u_1(a), because the system

$$0 = \mu + \theta\cdot 0, \qquad 1 = \mu + \theta\cdot 1, \qquad 2 = \mu + \theta\cdot 3$$

has no solution.
Game Theory - A (Short) Introduction 175 9/12/2011
4.12 Appendix: representing
preferences by expected payoffs
 Exercise 150.1 (Games equivalent to the Prisoner’s Dilemma)
Which of the right tables in Figure 150.2 represents the same
strategic game with vNM preferences as the Prisoner’s
Dilemma as specified in the left panel?
        C        D                  C        D                  C        D
C      2,2      0,3          C     3,3      0,4          C     6,0      0,2
D      3,0      1,1          D     4,0      2,2          D     9,-4     3,-2
(Figure 150.2)
9. Bayesian Games
Game Theory - A (Short) Introduction 177 9/12/2011
Framework
 An assumption underlying the notion of Nash equilibrium is that
each player holds the correct belief about the other players'
actions. To do so, a player must know the other players'
preferences.
 However, in many situations players are not perfectly informed
about their opponents' characteristics (eg: firms may not know
each other's cost functions).
 In this chapter, we generalize the notion of strategic game to
allow the analysis of situations in which each player is
imperfectly informed about an aspect of her environment that
is relevant to her choice of action.
Game Theory - A (Short) Introduction 178 9/12/2011
9.1 Motivational examples
 We start with one example to illustrate the main ideas of Bayesian
games
 Example 273.1 (Variant of BoS with imperfect information)
Consider a variant of BoS in which player 1 is unsure whether
player 2 prefers to go out with her or prefers to avoid her,
whereas player 2 (as before) knows player 1’s preferences.
2 wishes to meet 1 (Prob 1/2)          2 wishes to avoid 1 (Prob 1/2)
        B        S                             B        S
B      2,1      0,0                     B     2,0      0,2
S      0,0      1,2                     S     0,1      1,0
(Figure 274.1)
Game Theory - A (Short) Introduction 179 9/12/2011
9.1 Motivational examples
 Specifically, suppose player 1 thinks that with probability ½ player 2
wants to go out with her, and with probability ½ player 2 wants to
avoid her (see figure 274.1)
 Because probabilities are involved, an analysis of the situation
requires us to know the players’ preferences over lotteries.
 We can represent the situation as one with two states. In state 1,
the Bernoulli payoffs are given in the left table. In state 2, the
Bernoulli payoffs are given in the right table. Player 1 assigns
probability ½ to each state.
 The notion of Nash equilibrium must be generalized to this new
setting:
 from player 1’s point of view, player 2 has two possible types
(one whose preferences are given by the left table of Figure
274.1 and the other, by the right table).
Game Theory - A (Short) Introduction 180 9/12/2011
9.1 Motivational examples
 Player 1 does not know player 2's type. So, to choose an action
rationally, she needs to form a belief about the action of each
type of player 2.
 Given these beliefs and her belief about the likelihood of each
type, she can calculate her expected payoff to each of her
actions.
 For example, if player 1 thinks that type 1 of player 2 will choose
B and type 2 of player 2 will choose S, then she thinks that
choosing B will yield her a payoff of 2 with probability ½ and of 0
with probability ½. So, in this case, her expected payoff to B is
½·2 + ½·0 = 1. Similar calculations lead to Figure 275.1.
               (B,B)    (B,S)    (S,B)    (S,S)     (actions of the two types of player 2)
B                2        1        1        0
S                0        ½        ½        1
(Figure 275.1: player 1's expected payoffs to B and S)
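A quick Python sketch (illustrative; names are assumptions) recomputes Figure 275.1 from the Bernoulli payoffs of Figure 274.1:

# Player 1's Bernoulli payoffs (identical in both states): keys are (her action, player 2's action).
u1 = {('B', 'B'): 2, ('B', 'S'): 0, ('S', 'B'): 0, ('S', 'S'): 1}
prob = {'meet': 0.5, 'avoid': 0.5}          # player 1's belief about player 2's type

def expected_payoff(a1, a2_meet, a2_avoid):
    # player 1's expected payoff to a1 when the 'meet' type plays a2_meet and the 'avoid' type plays a2_avoid
    return prob['meet'] * u1[(a1, a2_meet)] + prob['avoid'] * u1[(a1, a2_avoid)]

for a1 in ('B', 'S'):
    row = [expected_payoff(a1, x, y) for x in ('B', 'S') for y in ('B', 'S')]
    print(a1, row)    # B: [2.0, 1.0, 1.0, 0.0]   S: [0.0, 0.5, 0.5, 1.0]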
Game Theory - A (Short) Introduction 181 9/12/2011
9.1 Motivational examples
 For this situation, we define a pure strategy Nash equilibrium to
be a triple of actions (one for player 1 and one for each type of
player 2) with the property that:
 the action of player 1 is optimal, given the actions of the two
types of player 2 (and player 1’s belief about the state)
 the action of each type of player 2 is optimal, given the action
of player 1

 Note that in a Nash equilibrium:
 player 1's action is a best response in Figure 275.1 to the pair
of actions of the two types of player 2
 the action of the type of player 2 who wishes to meet player 1
is a best response in the left table of Figure 274.1 to the action
of player 1
 the action of the type of player 2 who wishes to avoid player 1
is a best response in the right table of Figure 274.1 to the
action of player 1
Game Theory - A (Short) Introduction 182 9/12/2011
9.1 Motivational examples
 Why should player 2, who knows his own type, have to plan
what to do in both cases?
 She does not!
 However, as analysts, we need to consider what she would do
in both cases. The reason is that to determine her best action,
player 1, who does not know player 2's type, needs to form a belief
about the action each type of player 2 would take, and we wish to
impose the equilibrium condition that these beliefs are correct.
 (B, (B, S)) is a Nash equilibrium, where the first component is player 1's
action, the second is the action of player 2's type 1, and the third is the
action of player 2's type 2.
Game Theory - A (Short) Introduction 183 9/12/2011
9.1 Motivational examples
 Proof:
 given that the actions of the two types of player 2 are (B,S), player
1's action B is optimal (see Figure 275.1)
 given that player 1 chooses B, B is optimal for player 2's type 1 and
S is optimal for player 2's type 2 (see Figure 274.1)

 We interpret the equilibrium as follows:
 type 1 of player 2 chooses B and type 2 of player 2 chooses S,
inferring that player 1 will choose B
 player 1, who does not know whether player 2 is of type 1 or of type 2,
believes that if player 2 is of type 1 she will choose B, and if player
2 is of type 2 she will choose S.
Game Theory - A (Short) Introduction 184 9/12/2011
9.1 Motivational examples
 We can interpret the actions of the two types of player 2 to
reflect player 2’s intentions in the hypothetical situation before
she knows the state. This corresponds to the following situation:
 initially, player 2 does not know the state; she will be informed by a
signal that depends on the state;
 before receiving this signal, she plans an action for each possible
state;
 the same story is valid for player 1 but player 1 will receive an
uninformative signal (same signal in each state)

 Note that in such a setup, a Nash equilibrium is a list of
actions, one for each type of each player, such that the action
of each type of each player is a best response to the
actions of all the types of the other player, given the player's
beliefs about the state after she observes her signal.
Game Theory - A (Short) Introduction 185 9/12/2011
9.1 Motivational examples
 Exercise 276.1 (Equilibria of a variant of BoS with imperfect
information)
(i) Show that there is no pure strategy Nash equilibrium of this game
in which player 1 chooses S.
(ii) Find the mixed strategy Nash equilibria of the game (First
check whether there is an equilibrium in which both types of
player 2 use pure strategies; then look for equilibria in which
one or both of these types randomize).
Game Theory - A (Short) Introduction 186 9/12/2011
9.2 General definitions
 9.2.1 Bayesian games
 A strategic game with imperfect information is called a Bayesian
game.
 A key component in the specification of the imperfect information is
the set of states: each state is a complete description of one
collection of the players' relevant characteristics (preferences,
information, …). For every collection of characteristics that some
player believes to be possible, there must be a state.
 At the start of the game, a state is realized. The players do not
observe this state. Rather, each player receives a signal that may
give her some information about the state. We denote the signal
player i receives in state ω by τ_i(ω). The function τ_i(·) is called
player i's signal function. Note that this is a deterministic function:
for each state, a given signal is received.

Game Theory - A (Short) Introduction 187 9/12/2011
9.2 General definitions
 Any state that generates a given signal t_i is said to be consistent
with the signal t_i.
 The size of the set of states consistent with each player i's signal
reflects the quality of player i's information. The two extreme cases
are:
 if τ_i(ω) is different for each value of ω, then player i knows,
given her signal, the state that has occurred: she is perfectly
informed about all the players' relevant characteristics.
 if τ_i(ω) is the same for all states, then player i's signal conveys
no information: she is perfectly uninformed.
 We refer to player i in the event that she receives t_i as type t_i of
player i. Each type of each player holds a belief about the
likelihood of the states consistent with her signal (e.g.: if
t_i = τ_i(ω_1) = τ_i(ω_2), then type t_i of player i assigns probabilities
to ω_1 and ω_2).

Game Theory - A (Short) Introduction 188 9/12/2011
9.2 General definitions
 Each player may care about the actions chosen by the other
players and about the state. We therefore need to specify her
preferences regarding probability distributions over pairs (a, ω),
consisting of an action profile a and a state ω.
 We assume that each player's preferences over such probability
distributions are represented by the expected value of a Bernoulli
payoff function. We therefore specify player i's preferences by giving a
Bernoulli payoff function u_i over pairs (a, ω).
Game Theory - A (Short) Introduction 189 9/12/2011
9.2 General definitions
 Definition 279.1 (Bayesian game)
A Bayesian game consists of
 a set of players
 a set of states
and for each player
 a set of actions
 a set of signals that she may receive and a signal function that
associates a signal with each state
 for each signal that she may receive, a belief about the states
consistent with the signal (a probability distribution over the set of
states with which the signal is associated)
 a Bernoulli payoff function over pairs (a,ω), where a is an action
profile and ω is a state, the expected value of which represents the
player’s preferences.
Game Theory - A (Short) Introduction 190 9/12/2011
9.2 General definitions
 Note that the set of actions of each player is independent of the
state: each player may care about the state, but the set of
actions available to her is the same in every state.

 Application to Example 273.1
 players: the pair of people
 states: {meet, avoid}
 actions: for each player {B,S}
 signals: player 1 may receive a single signal, say z. Her signal function τ_1
satisfies τ_1(meet) = τ_1(avoid) = z. Player 2 receives one of two signals (m
and v). Her signal function τ_2 satisfies τ_2(meet) = m and τ_2(avoid) = v.
 beliefs: player 1 assigns probability ½ to each state after receiving the
signal z. Player 2 assigns probability 1 to state "meet" after receiving the
signal m, and probability 1 to state "avoid" after receiving the signal v.
 payoffs: the payoffs u_i(a, meet) of each player i for all possible action
pairs are given in the left panel of Figure 274.1 (the payoffs u_i(a, avoid)
in the right panel).
Game Theory - A (Short) Introduction 191 9/12/2011
9.2 General definitions
 9.2.2 Nash equilibrium
 In a Bayesian game, each player chooses a collection of actions:
one for each signal she may receive (each type of each player
chooses an action).
 In a Nash equilibrium of such a game, the action chosen by each
type of each player is optimal, given the actions chosen by every
type of every other player.

We define a Nash equilibrium of a Bayesian game to be a Nash
equilibrium of a strategic game in which each player is one of the
types of one of the players in the Bayesian game.
Game Theory - A (Short) Introduction 192 9/12/2011
9.2 General definitions
 Notations:
 Pr(ω|t_i): probability assigned by the belief of type t_i of player i to
state ω.
 a(j,t_j): action taken by type t_j of player j.
 τ_j(ω): player j's signal in state ω. Her action in this state is
a(j, τ_j(ω)). We denote â_j(ω) = a(j, τ_j(ω)).

With these notations, the expected payoff of type t_i of player i
when she chooses action a_i is:

  Σ_{ω∈Ω} Pr(ω|t_i) · u_i((a_i, â_{-i}(ω)), ω)

where:
 Ω is the set of states
 (a_i, â_{-i}(ω)) is the action profile in which player i chooses the
action a_i and every other player j chooses â_j(ω)
Game Theory - A (Short) Introduction 193 9/12/2011
9.2 General definitions
 Definition 281.1 (Nash equilibrium of Bayesian game)
A Nash equilibrium of Bayesian game is a Nash equilibrium of
the strategic game (with vNM preferences) defined as follows:
 players: the set of all pairs (i,t_i) in which i is a player in the
Bayesian game and t_i is one of the signals that i may receive;
 actions: the set of actions of each player (i,t_i) is the set of actions
of player i in the Bayesian game;
 preferences: the Bernoulli payoff function of each player (i,t_i) is
given by

  Σ_{ω∈Ω} Pr(ω|t_i) · u_i((a(i,t_i), â_{-i}(ω)), ω)
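A minimal sketch (not from the slides) of this construction for Example 273.1: the code below builds the induced strategic game whose players are player 1 and the two types of player 2, and checks the equilibrium claimed earlier. The payoff numbers are the standard BoS values assumed from Figure 274.1 (player 1's payoffs do not depend on the state; the "meet" type of player 2 wants to match, the "avoid" type wants to mismatch).

```python
# Sketch (not from the slides): the induced strategic game of Example 273.1,
# assuming the standard BoS payoffs of Figure 274.1.
from itertools import product

ACTIONS = ["B", "S"]
P_MEET = 0.5  # player 1's belief that player 2 wants to meet (state "meet")

def u1(a1, a2):                      # player 1's payoff (same in both states)
    return {"B": 2, "S": 1}[a1] if a1 == a2 else 0

def u2(a1, a2, state):               # player 2's payoff depends on the state
    if state == "meet":
        return {"B": 1, "S": 2}[a2] if a1 == a2 else 0
    return {"B": 1, "S": 2}[a2] if a1 != a2 else 0   # the "avoid" type wants to mismatch

def payoffs(a1, a2_meet, a2_avoid):
    """Expected payoffs of (player 1, type 'meet' of player 2, type 'avoid' of player 2)."""
    e1 = P_MEET * u1(a1, a2_meet) + (1 - P_MEET) * u1(a1, a2_avoid)
    return e1, u2(a1, a2_meet, "meet"), u2(a1, a2_avoid, "avoid")

def is_nash(profile):
    base = payoffs(*profile)
    for i in range(3):               # each "player" of the induced game is a type
        for dev in ACTIONS:
            alt = list(profile); alt[i] = dev
            if payoffs(*alt)[i] > base[i]:
                return False
    return True

for prof in product(ACTIONS, repeat=3):
    if is_nash(prof):
        print("Nash equilibrium:", prof)   # expect ('B', 'B', 'S')
```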
Game Theory - A (Short) Introduction 194 9/12/2011
9.2 General definitions
 Exercise 282.3 (Adverse selection)
Firm A (the “acquirer”) is considering taking over firm T (the “target”). It
does not know firm T’s value; it believes that this value, when firm T is
controlled by its own management, is at least $0 and at most $100, and
assigns equal probability to each of the 101 dollar values in this range
(uniform distribution). Firm T will be worth 50% more under firm A’s
management than it is under its own management. Suppose that firm A
bids y to take over firm T, and firm T is worth x (under its own
management). Then if T accepts A’s offer, A’s payoff is (3/2 x – y) and
T’s payoff is y. If T rejects A’s offer, A’s payoff is 0 and T’s payoff is x.
Model this situation as a Bayesian game in which firm A chooses how
much to offer and firm T decides the lowest offer to accept. Find the
Nash equilibrium (equilibria?). Explain why the logic behind the
equilibrium is called adverse selection.
Game Theory - A (Short) Introduction 195 9/12/2011
9.3 Example concerning
information
 9.3.1 More information may hurt
 A decision-maker in a single-person decision problem cannot be
worse off if she has more information: if she wishes, she can ignore
the information. In a game, this is not true.
State ω_1 (probability ½):
         L       M       R
   T   1, 2ε   1, 0    1, 3ε
   B   2, 2    0, 0    0, 3

State ω_2 (probability ½):
         L       M       R
   T   1, 2ε   1, 3ε   1, 0
   B   2, 2    0, 3    0, 0

Each player assigns probability ½ to each state.
(Figure 283.1)
Game Theory - A (Short) Introduction 196 9/12/2011
9.3 Example concerning
information
 Consider the two-player game in Figure 283.1, where 0 < ε < ½. In this
game, there are two states and neither player knows the state.
 Player 2's unique best response to each action of player 1 is L:
 if player 1 chooses T:
 L yields 2ε
 M and R each yield 3/2 ε
 if player 1 chooses B:
 L yields 2
 M and R each yield 3/2.
 Player 1's unique best response to L is B.
 Thus, (B,L) is the unique Nash equilibrium. Each player gets a
payoff of 2. The game has no other (mixed strategy) Nash equilibrium.
Game Theory - A (Short) Introduction 197 9/12/2011
9.3 Example concerning
information
 Consider now that player 2 is informed of the state: player 2's
signal function satisfies τ_2(ω_1) ≠ τ_2(ω_2).
 In this game, (T,(R,M)) is the unique Nash equilibrium (each type of
player 2 has a strictly dominant action, to which T is player 1's
unique best response).
 In this game, player 2's payoff is 3ε (in each state). She is therefore
worse off when she knows the state!
 To understand this result: R is good only in state ω_1 and M is good
only in state ω_2, while L is a compromise. Knowing the state leads
player 2 to choose either R or M, which induces player 1 to choose
T. There is no steady state in which player 2 chooses L to induce
player 1 to choose B.
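A minimal sketch (not from the slides) that recomputes these expected payoffs; the value of ε is an arbitrary choice in (0, ½).

```python
# Sketch (not from the slides): expected payoffs in the game of Figure 283.1,
# checking that an uninformed player 2 prefers L, while an informed player 2
# prefers R in state w1 and M in state w2 (and then earns only 3*eps).
EPS = 0.25  # any 0 < eps < 1/2

# payoffs[state][(a1, a2)] = (payoff of player 1, payoff of player 2)
payoffs = {
    "w1": {("T", "L"): (1, 2 * EPS), ("T", "M"): (1, 0),       ("T", "R"): (1, 3 * EPS),
           ("B", "L"): (2, 2),       ("B", "M"): (0, 0),       ("B", "R"): (0, 3)},
    "w2": {("T", "L"): (1, 2 * EPS), ("T", "M"): (1, 3 * EPS), ("T", "R"): (1, 0),
           ("B", "L"): (2, 2),       ("B", "M"): (0, 3),       ("B", "R"): (0, 0)},
}

def expected_u2(a1, a2):
    """Player 2's expected payoff when she does not observe the state."""
    return 0.5 * payoffs["w1"][(a1, a2)][1] + 0.5 * payoffs["w2"][(a1, a2)][1]

for a1 in ("T", "B"):
    best = max("LMR", key=lambda a2: expected_u2(a1, a2))
    print(f"uninformed best response to {a1}: {best}")   # L in both cases

for state in ("w1", "w2"):
    best = max("LMR", key=lambda a2: payoffs[state][("T", a2)][1])
    print(f"informed best response in {state} (vs T): {best},",
          f"payoff {payoffs[state][('T', best)][1]} vs 2 when uninformed")
```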
Game Theory - A (Short) Introduction 198 9/12/2011
9.6 Illustration: auctions
 9.6.1 Introduction
 In section 3.5, every bidder knows every other bidder’s valuation of
the object for sale. This is highly unrealistic!
 Assume that a single object is for sale. Each bidder receives
independently some information (a signal) about the value of the
object to her:
 if each bidder’s signal is simply her valuation, we say that the
bidders’ valuation are private (eg.: work of art whose beauty
interests the buyers);
 if each bidder’s valuation depends on other bidders’ signals as
well as her own, we say that the valuations are common
(eg.:oil tract containing unknown reserves on which each bidder
has conducted a test)
 We will consider models in which bids for a single object are
submitted simultanesously (bids are sealed) and the participant
who submits the highest bid obtains the object.
Game Theory - A (Short) Introduction 199 9/12/2011
9.6 Illustration: auctions
 We will consider both first-price (the winner pays the price she
bids) and second-price (the winner pays the highest of the
remaining bids) auctions.
 Note that the argument that the second-price rule corresponds to
an open ascending auction (English auction) depends upon the
bidders' valuations being private. In a common valuation setup, the
open ascending auction reveals information to bidders that they do
not have access to in a sealed-bid procedure.

 9.6.2 Independent private values
 Each bidder knows that all other bidders’ valuations are at least v-
(where v- ≥ 0) and at most v+. She believes that the probability
that any given bidder’s valuation is at most v is F(v), independent of
all other bidders’ valuations, where F is a continuous increasing
function (CDF).
Game Theory - A (Short) Introduction 200 9/12/2011
9.6 Illustration: auctions
 The preferences of a bidder whose valuation is v are represented by
a Bernoulli payoff function that assigns 0 to the outcome in which
she does not win the object and v-p to the outcome in which she
wins the object and pays the price p (quasi-linear payoff function).
This amounts to assuming that the bidder is risk neutral.
 We assume that the expected payoff of a bidder whose bid is tied
for first place is (v-p)/m, where m is the number of tied winning
bids.
 We denote P(b) the price paid by the winner of the auction when
the profile of bids is b:
 for a first-price auction, P(b) is the winning bid (the largest b_i)
 for a second-price auction, P(b) is the highest bid made by a
bidder different from the winner
Game Theory - A (Short) Introduction 201 9/12/2011
9.6 Illustration: auctions
 The Bayesian game that models first- and second-price auctions
with independent private valuations is therefore:
 players: the set of bidders 1, …, n
 states: the set of all profiles (v_1, …, v_n) of valuations, where
v- ≤ v_i ≤ v+ for all i
 actions: each player's set of actions is the set of possible bids
(nonnegative numbers)
 signals: the set of signals that each player may observe is the
set of possible valuations (the signal function is τ_i(v_1, …, v_n) = v_i)
 beliefs: each type v_i of player i assigns probability
F(v_1) F(v_2) … F(v_{i-1}) · F(v_{i+1}) … F(v_n) to the event that the
valuation of every other player j is at most v_j.
Game Theory - A (Short) Introduction 202 9/12/2011
9.6 Illustration: auctions
 payoff functions:

  u_i(b, (v_1, …, v_n)) = (v_i − P(b))/m   if b_j ≤ b_i for all j, and b_j = b_i for m players j
                        = 0               if b_j > b_i for some j

 Nash equilibrium in a second-price sealed-bid auction: in a
second-price sealed-bid auction with imperfect information about
valuations (as in the perfect information setup), a player's bid equal
to her valuation weakly dominates all her other bids:
 consider some type v_i of some player i and let b_i be a bid not equal to v_i
 for all bids by all types of all the other players, the expected payoff of
type v_i of player i is at least as high when she bids v_i as it is when she
bids b_i, and for some bids by the various types of the other players, her
expected payoff is greater when she bids v_i than it is when she bids b_i
Game Theory - A (Short) Introduction 203 9/12/2011
9.6 Illustration: auctions
 Exercise 294.1 (Weak domination in a second-price sealed-bid
auction)
Show that for each type v_i of each player i in a second-price
sealed-bid auction with imperfect information about valuations the
bid v_i weakly dominates all other bids.

 We conclude that a second-price sealed-bid auction with imperfect
information about valuations has a Nash equilibrium in which every
type of every player bids her valuation.
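A brute-force sketch of this weak-dominance claim on a small grid of integer valuations and bids (not from the slides; the grid and the tie-breaking rule are illustrative assumptions).

```python
# Sketch (not from the slides): a brute-force check, on a small grid of bids and
# valuations, that bidding one's valuation weakly dominates any other bid in a
# two-bidder second-price sealed-bid auction.
from itertools import product

GRID = range(0, 6)  # possible valuations/bids: 0..5 (a coarse discretization)

def payoff(my_bid, other_bid, my_value):
    """Second-price rule; ties split the surplus equally."""
    if my_bid > other_bid:
        return my_value - other_bid          # win, pay the other bid
    if my_bid == other_bid:
        return (my_value - other_bid) / 2    # tie broken with probability 1/2
    return 0

def weakly_dominates(bid_a, bid_b, value):
    """True if bid_a does at least as well as bid_b against every rival bid,
    and strictly better against at least one."""
    diffs = [payoff(bid_a, ob, value) - payoff(bid_b, ob, value) for ob in GRID]
    return all(d >= 0 for d in diffs) and any(d > 0 for d in diffs)

ok = all(weakly_dominates(v, b, v)
         for v, b in product(GRID, GRID) if b != v)
print("bidding own valuation weakly dominates every other bid:", ok)  # True
```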

 Exercise 294.2 (Nash equilibria of a second-price sealed-bid
auction)
For every player i, find a Nash equilibrium of a second-price
sealed-bid auction in which player i wins.
Game Theory - A (Short) Introduction 204 9/12/2011
9.6 Illustration: auctions
 Nash equilibrium in a first-price sealed-bid auction
 as in the case of perfect information, the bid v_i by type v_i of player i
weakly dominates any bid greater than v_i, does not weakly
dominate bids less than v_i, and is itself weakly dominated by
any such lower bid.
 So, the game under imperfect information may have a Nash
equilibrium in which each bidder bids less than her valuation.
 Take the case of two bidders and each player's valuation being
distributed uniformly between 0 and 1 (this assumption means
that the fraction of valuations less than v is exactly v, so that
F(v) = v for all v with 0 ≤ v ≤ 1).
 Denote by β_i(v) the bid of type v of player i.
 In this case, the game has a (symmetric) Nash equilibrium in
which the function β_i is the same for both players, with β_i(v) =
½ v for all v (each type of each player bids exactly half her
valuation).
Game Theory - A (Short) Introduction 205 9/12/2011
9.6 Illustration: auctions
 Proof:
 suppose that each type of bidder 2 bids in this way;
 as far as player 1 is concerned, player 2’s bids are
uniformly distributed between 0 and ½;
 thus, if player 1 bids more than ½, she surely wins. If she
bids b_1 ≤ ½, the probability that she wins is the probability
that player 2's valuation is less than 2b_1, which is 2b_1;
 consequently, her expected payoff, as a function of her bid, is:

  2b_1 (v_1 − b_1)   if 0 ≤ b_1 ≤ ½
  v_1 − b_1          if b_1 > ½

(Figure 295.1: player 1's expected payoff as a function of her bid b_1;
it is maximized at b_1 = ½ v_1.)
Game Theory - A (Short) Introduction 206 9/12/2011
9.6 Illustration: auctions
 This function is maximized at b_1 = ½ v_1 (this can easily be seen
graphically in Figure 295.1, or established mathematically).
 The two players are symmetric, so player 2 also bids half her
valuation when player 1 bids half her valuation.
 Thus, the game has a Nash equilibrium in which each player bids
half her valuation.
 When the number n of bidders exceeds 2, a similar analysis shows that
the game has a (symmetric) Nash equilibrium in which every bidder bids the
fraction 1 – 1/n of her valuation.
 Interpretation: in this example (but also for any distribution F satisfying
our assumptions):
 choose n-1 valuations randomly and independently, each
according to the cumulative distribution F
 the highest of these n-1 valuations is a random variable. Denote it
X;
 Fix a valuation v. Some values of X are less than v and others are
greater.


Game Theory - A (Short) Introduction 207 9/12/2011
9.6 Illustration: auctions
 Consider the distribution of X in those cases in which it is less than
v. The expected value of this distribution is E(X | X < v).

 Then, the following proposition holds:

If each bidder's valuation is drawn independently from the
same continuous and increasing cumulative distribution, a
first-price sealed-bid auction (with imperfect information
about valuations) has a (symmetric) Nash equilibrium in
which each type v of each player bids E(X | X < v), the
expected value of the highest of the other players' valuations
conditional on v being higher than all the other valuations.

 Application for the case of 2 bidders and the uniform distribution:
 for any valuation v of player 1, the cases in which player 2's
valuation is less than v are distributed uniformly between 0
and v;
 so the expected value of player 2's valuation conditional on
being less than v is ½ v.
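A minimal Monte Carlo sketch (not from the slides) of the two-bidder uniform case: it estimates E(X | X < v) and searches a bid grid for the best response when the rival bids half her valuation; the valuation v1 = 0.7, the seed and the sample size are arbitrary choices.

```python
# Sketch (not from the slides): Monte Carlo check of the two-bidder uniform case.
import bisect
import random

random.seed(0)
N = 200_000
v1 = 0.7
rivals = sorted(random.random() for _ in range(N))       # player 2's valuations ~ U[0,1]

below = rivals[:bisect.bisect_left(rivals, v1)]          # valuations below v1
print("E(X | X < v) ~", round(sum(below) / len(below), 3), " theory v/2 =", v1 / 2)

def expected_payoff(bid):
    """Player 1's expected payoff against the bidding function beta(x) = x/2."""
    wins = bisect.bisect_left(rivals, 2 * bid)            # rival valuations below 2*bid
    return (v1 - bid) * wins / N

best = max((b / 100 for b in range(101)), key=expected_payoff)
print("best bid on a grid ~", best, " theory v/2 =", v1 / 2)
```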
Game Theory - A (Short) Introduction 208 9/12/2011
9.6 Illustration: auctions
 Comparing equilibria of first- and second-price auctions
 As in the case of perfect information, under the assumptions of
this section, first- and second-price auctions are revenue
equivalent;
 Consider the equilibrium of a second-price auction in which
every player bids her valuation:
 the expected price paid by the bidder with valuation v who
wins is the expectation of the highest of the other n-1
valuations, conditional on this maximum being less than
v;
 in notation, this is E(X|X<v);
 we have just seen that this is precisely the bid of a player
with valuation v in a first-price auction (and hence, the
amount paid by such a player if she wins);
 as in both cases the bidder with the highest valuation wins,
both auctions yield the auctioneer the same expected revenue!
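A Monte Carlo sketch (not from the slides) illustrating this revenue equivalence for uniform valuations; the number of bidders, the seed and the number of draws are arbitrary choices.

```python
# Sketch (not from the slides): Monte Carlo illustration of revenue equivalence
# with n bidders and valuations uniform on [0,1]. In the second-price auction each
# bidder bids her valuation; in the first-price auction each bids (1 - 1/n) * valuation.
import random

random.seed(1)
N_DRAWS, N_BIDDERS = 100_000, 3
rev_first = rev_second = 0.0

for _ in range(N_DRAWS):
    values = sorted(random.random() for _ in range(N_BIDDERS))
    rev_second += values[-2]                       # winner pays the second-highest value
    rev_first += (1 - 1 / N_BIDDERS) * values[-1]  # winner pays her own (shaded) bid

print("second-price expected revenue ~", round(rev_second / N_DRAWS, 3))
print("first-price  expected revenue ~", round(rev_first / N_DRAWS, 3))
# Both should be close to (n-1)/(n+1) = 0.5 for n = 3.
```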
Game Theory - A (Short) Introduction 209 9/12/2011
9.6 Illustration: auctions
 Exercise 296.1 (Auctions with risk-averse bidders)
Consider a variant of the Bayesian game defined earlier in this
section in which the players are risk averse. Specifically, suppose
each of the n players' preferences are represented by the expected
value of the Bernoulli payoff function x^(1/m), where x is the player's
monetary payoff and m > 1. Suppose also that each player's
valuation is distributed uniformly between 0 and 1. Show that the
Bayesian game that models a first-price sealed-bid auction under
these assumptions has a (symmetric) Nash equilibrium in which
each type v_i of each player i bids:

  b_i = (1 − 1/(m(n − 1) + 1)) v_i

Note that the solution of the problem max_b [b^k (v − b)^l] is kv/(k + l).
Game Theory - A (Short) Introduction 210 9/12/2011
9.6 Illustration: auctions
Compare the auctioneer’s revenue in this equilibrium with her
revenue in the symmetric Nash equilibrium of a second-price
sealed-bid auction in which each player bids her valuation (note
that the equilibrium of the second-price auction does not depend on
the players’ payoff functions).

 9.6.3 Interdependent valuations
 In this setup, each player’s valuation depends on the other players’
signals as well as her own.
 Denote the function that gives player i's valuation by g_i, and
assume that it is increasing in all the signals.
 Let P(b) be the function that determines the price paid by the
winner as a function of the profile b of bids.
Game Theory - A (Short) Introduction 211 9/12/2011
9.6 Illustration: auctions
 The following Bayesian game models first- and second-price
auctions with common valuations:
 players: the set of bidders 1, …, n
 states: the set of all profiles (t_1, …, t_n) of signals that the players
may receive
 actions: each player's set of actions is the set of possible bids
(nonnegative numbers)
 signals: the set of signals that each player may observe is the set of
possible signal values, and the signal function is τ_i(t_1, …, t_n) = t_i
(each player observes her own signal)
 beliefs: each type of each player believes that every other
player's signal is distributed independently of all the other
players' signals.

Game Theory - A (Short) Introduction 212 9/12/2011
9.6 Illustration: auctions
 payoff functions:

  u_i(b, (t_1, …, t_n)) = (g_i(t_1, …, t_n) − P(b))/m   if b_j ≤ b_i for all j, and b_j = b_i for m players j
                        = 0                           if b_j > b_i for some j

 Nash equilibrium in a second-price sealed-bid auction
 We analyze the case of two bidders, where each bidder's signal is
uniformly distributed from 0 to 1 and the valuation of each
bidder i is v_i = α t_i + γ t_j, where j is the other player and α ≥ γ ≥ 0
(the case α = 1 and γ = 0 is the private value case and the
case α = γ is called pure common value).
 The assumption is that a bidder does not know any other
player's signal but, as the analysis will show, other players'
bids contain some information about their signals.
Game Theory - A (Short) Introduction 213 9/12/2011
9.6 Illustration: auctions
 Under these assumptions, a second-price sealed-bid auction has a
Nash equilibrium in which each type t
i
of each player i bids (α+γ) t
i
.
 Proof: to determine the expected payoff of type t1 of player 1, we need
to find:
 the probability with which she wins
 the expected price she pays
 the expected value of player 2’s signal if she wins
 Probability that player 1 win:
 given that player 2’s bidding function is (α+γ) t
2
, player 1’s bid of b
1

wins only if b
1
≥ (α+γ) t
2
, or if :



t
2
is distributed uniformly between 0 and 1. So, the probability that
is is at most b
1
/ (α+γ) is b
1
/ (α+γ). Thus, a bid b
1
by player 1 wins
with probability b
1
/ (α+γ).
) (
1
2
¸ o +
s
b
t
Game Theory - A (Short) Introduction 214 9/12/2011
9.6 Illustration: auctions
 Expected price player 1 pays if she wins:
 the price she pays is equal to player 2's bid;
 player 2's bid, conditional on being less than b_1, is distributed
uniformly between 0 and b_1. Thus, the expected value of player
2's bid, given that it is less than b_1, is ½ b_1.
 Expected value of player 2's signal if player 1 wins:
 player 2's bid, given her signal t_2, is (α+γ) t_2. So, the expected value of
the signals that yield a bid less than b_1 is ½ b_1/(α+γ).
 The expected payoff if she bids b_1 is the difference between her
expected valuation (given her signal t_1 and the fact that she wins) and
the expected price she pays, multiplied by her probability of
winning. Using the previous results, we get:

  (b_1/(α+γ)) · ( α t_1 + γ · ½ b_1/(α+γ) − ½ b_1 )  =  α b_1 (2(α+γ) t_1 − b_1) / (2(α+γ)²)
Game Theory - A (Short) Introduction 215 9/12/2011
9.6 Illustration: auctions
 This function is maximized at b_1 = (α+γ) t_1: so, if each type t_2 of
player 2 bids (α+γ) t_2, any type t_1 of player 1 optimally bids
(α+γ) t_1.
 The arguments are symmetric for player 2. We therefore get a
symmetric Nash equilibrium.
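A numeric sketch (not from the slides) of this best-response property; the values of α, γ and t_1 are arbitrary, and the integration and bid grids are illustrative.

```python
# Sketch (not from the slides): numeric check that, when player 2 bids (alpha+gamma)*t2,
# player 1's expected payoff in the second-price common-value auction is maximized at
# b1 = (alpha + gamma) * t1.
ALPHA, GAMMA, T1 = 2.0, 1.0, 0.4   # arbitrary values with alpha >= gamma >= 0
N_GRID = 2_000                     # integration grid for player 2's signal t2 ~ U[0,1]

def expected_payoff(b1):
    """Player 1's expected payoff from bid b1 (midpoint-rule integration over t2)."""
    total = 0.0
    for k in range(N_GRID):
        t2 = (k + 0.5) / N_GRID
        bid2 = (ALPHA + GAMMA) * t2
        if b1 > bid2:                              # player 1 wins and pays player 2's bid
            total += ALPHA * T1 + GAMMA * t2 - bid2
    return total / N_GRID

candidates = [i / 100 for i in range(0, 301)]      # bids 0.00, 0.01, ..., 3.00
best = max(candidates, key=expected_payoff)
print("best bid:", best, " theory:", (ALPHA + GAMMA) * T1)   # both 1.2
```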

 Exercise 299.1 (Asymmetric Nash equilibria of second-price
sealed-bid common value auctions)
Show that when α=γ=1, for any value λ > 0, the game has an
(asymmetric) Nash equilibrium in which each type t_1 of player 1
bids (1+λ) t_1 and each type t_2 of player 2 bids (1 + 1/λ) t_2.
Game Theory - A (Short) Introduction 216 9/12/2011
9.6 Illustration: auctions
 Note that when player 1 calculates her expected value of the
object, she finds the expected value of player 2's signal given that
her own bid wins. The fact that her bid wins is, in fact, bad news
about the other player's valuation. A bidder who does not
take account of this fact is said to suffer from the winner's curse.

 Nash equilibrium in a first-price sealed-bid auction
A first-price sealed-bid auction has a Nash equilibrium in which
each type t_i of each player i bids ½ (α+γ) t_i.

 Exercise 299.2 (First-price sealed-bid auction with common
values)
Verify that a first-price sealed-bid auction has a Nash equilibrium in
which the bid of each type t_i of each player i is ½ (α+γ) t_i.
Game Theory - A (Short) Introduction 217 9/12/2011
9.6 Illustration: auctions
 Comparing equilibria of first- and second-price auctions:
 The revenue equivalence of first- and second-price auctions
holds also under common valuations:
 in each case, the expected price paid by the winner (in
the symmetric equilibrium) is ½ (α+γ) t_i;
 in each case, the bidder wins if she has the highest
valuation (that is to say, with the same probability in both auctions).
 In fact, the revenue equivalence principle holds much more
generally (see Myerson's lemma).
Game Theory - A (Short) Introduction 218 9/12/2011
9.8 Appendix: auctions with an
arbitrary distribution of valuations
 9.8.1 First-price sealed bid auctions
 We construct here a symmetric equilibrium of a first-price sealed
bid auction for a generic distribution F of valuations that satisfies
the assumptions in Section 9.6.2 and is differentiable on (v-, v+).
 Denote the bid of type v of bidder i by β_i(v).
 In a symmetric equilibrium, every player uses the same bidding
function (so β_i = β for some function β).
 Assume:
 β is increasing in valuation (seems reasonable)
 β is differentiable.
 Then:
 then there is a condition that β must satisfy in any symmetric
equilibrium
 exactly one function β satisfies this condition
 this function is increasing.
Game Theory - A (Short) Introduction 219 9/12/2011
9.8 Appendix: auctions with an
arbitrary distribution of valuations
 Suppose that all n-1 players other than i bid according to the
increasing differentiable function β.
 Then, given the assumption on F, the probability of a tie is zero.
 Hence, for any bid b, the expected payoff of player i when her
valuation is v and she bids b is :

(v – b) Pr(Highest bid is b) = (v-b) Pr(All n-1 other bids ≤ b)

 A player bidding according to the function β bids at most b, for β(v-)
≤ b ≤ β(v+), if her valuation is at most β^(-1)(b) (the inverse of β
evaluated at b).
Game Theory - A (Short) Introduction 220 9/12/2011
9.8 Appendix: auctions with an
arbitrary distribution of valuations
 Thus, the probability that the bids of the n-1 other players are all at
most b is the probability that the highest of n-1 randomly
selected valuations (denoted X in Section 9.6.2) is at most β^(-1)(b).
 Denoting the CDF of X by H, the expected payoff is thus:

  (v – b) H(β^(-1)(b))   if β(v-) ≤ b ≤ β(v+)

and 0 if b < β(v-), v – b if b > β(v+)


Game Theory - A (Short) Introduction 221 9/12/2011
9.8 Appendix: auctions with an
arbitrary distribution of valuations
 In a symmetric equilibrium in which every player bids according to
β, we have β(v) ≤ v if v > v- and β(v-)=v-:
 if v > v- and β(v) > v, then a player with valuation v wins with
positive probability (players with valuations less than v bid less
than β(v) because β is increasing);
 if she wins, she obtains a negative payoff while she obtains a
payoff of 0 by bidding v. So, for equilibrium, we need β(v) ≤ v if
v > v-.
 given that β satisfies this condition, if β(v-) > v-, then a player
with valuation v- wins with positive probability and obtains a
negative payoff. Thus, β(v-) ≤ v-. But if β(v-) < v-, then
players with valuations slightly greater than v- also bid less
than v- (because β is continuous), so that a player with
valuation v- who increases her bid slightly wins with positive
probability and obtains a positive payoff if she does so. We
conclude that β(v-) = v-.
Game Theory - A (Short) Introduction 222 9/12/2011
9.8 Appendix: auctions with an
arbitrary distribution of valuations
 The expected payoff of a player of type v when every other player
uses the bidding function β is differentiable on (v-, β(v+)):
 given that β is increasing and differentiable
 given that β(v-) = v-
and, if v > v-, this expected payoff is increasing at b = v-.
 Thus, the derivative of this expected payoff with respect to b is
zero at any best response less than β(v+):

  F.O.C.:  −H(β^(-1)(b)) + (v − b) · H'(β^(-1)(b)) / β'(β^(-1)(b)) = 0

knowing that the derivative of β^(-1) at the point b is 1 / β'(β^(-1)(b)).
Game Theory - A (Short) Introduction 223 9/12/2011
9.8 Appendix: auctions with an
arbitrary distribution of valuations
 In a symmetric equilibrium in which every player bids according to β,
the best response of type v of any given player to the other players'
strategies is β(v). Because β is increasing, we have β(v) < β(v+) for
v < v+. So, β(v) must satisfy the F.O.C. whenever v- < v < v+.
 If b = β(v), then β^(-1)(b) = v. Substituting b = β(v) into the F.O.C.
and multiplying by β'(v) yields:

  β'(v) H(v) + β(v) H'(v) = v H'(v)   for v- < v < v+

 The left-hand side of this differential equation is the derivative with
respect to v of β(v) H(v). Thus, for some constant C:

  β(v) H(v) = ∫_{v-}^{v} x H'(x) dx + C   for v- < v < v+
Game Theory - A (Short) Introduction 224 9/12/2011
9.8 Appendix: auctions with an
arbitrary distribution of valuations
 The function β is bounded (as it is differentiable), so considering the
limit as v approaches v-, we deduce that C = 0.
 We conclude that if the game has a symmetric Nash equilibrium in
which each player's bidding function is increasing and
differentiable on (v-, v+), then this function is defined by:

  β*(v) = ( ∫_{v-}^{v} x H'(x) dx ) / H(v)   for v- < v < v+,  and β*(v-) = v-

Note that the function H is the CDF of X, the highest of n-1
independently drawn valuations. Thus β*(v) is the expected value
of X conditional on its being less than v:

  β*(v) = E(X | X < v)
Game Theory - A (Short) Introduction 225 9/12/2011
9.8 Appendix: auctions with an
arbitrary distribution of valuations
 Note finally that, using integration by parts, the numerator in the
expression for β*(v) is:

  v H(v) − ∫_{v-}^{v} H(x) dx

Given H(v) = (F(v))^(n-1) (the probability that all n-1 valuations are
at most v), we have:

  β*(v) = v − ( ∫_{v-}^{v} H(x) dx ) / H(v)
        = v − ( ∫_{v-}^{v} (F(x))^(n-1) dx ) / (F(v))^(n-1)   for v- < v < v+

We see that β*(v) < v for v- < v < v+.
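A small numeric sketch (not from the slides) that evaluates β*(v) from the last formula for a chosen distribution F (here the uniform CDF, as an assumption) and compares it with the closed form (1 − 1/n)·v derived earlier for the uniform case.

```python
# Sketch (not from the slides): compute beta*(v) = v - (integral of F(x)^(n-1) dx
# from v- to v) / F(v)^(n-1) by numeric integration, and check it against the
# closed form (1 - 1/n) * v for the uniform distribution.
N = 4                      # number of bidders
V_LO, V_HI = 0.0, 1.0      # support of the valuations

def F(v):                  # CDF of each valuation; uniform on [0,1] as an example
    return (v - V_LO) / (V_HI - V_LO)

def beta_star(v, steps=10_000):
    """Trapezoid-rule approximation of the equilibrium bid of a player with valuation v."""
    if v <= V_LO:
        return V_LO
    h = (v - V_LO) / steps
    xs = [V_LO + k * h for k in range(steps + 1)]
    integral = h * (sum(F(x) ** (N - 1) for x in xs)
                    - 0.5 * (F(xs[0]) ** (N - 1) + F(xs[-1]) ** (N - 1)))
    return v - integral / F(v) ** (N - 1)

for v in (0.25, 0.5, 0.9):
    print(f"beta*({v}) = {beta_star(v):.4f}   closed form (1-1/n)v = {(1 - 1/N) * v:.4f}")
```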
Game Theory - A (Short) Introduction 226 9/12/2011
9.8 Appendix: auctions with an
arbitrary distribution of valuations
 Exercise 309.2 (Property of the bidding function in a first-price
auction)
Show that the bidding function β* is increasing.
5. Extensive Games
with Perfect
Information: Theory
Framework

 Strategic games suppress the sequential structure of decision-
making: everything is about ex-ante anticipations and
simultaneous decisions.

 An extensive game describes explicitly the sequential structure of
decision-making, allowing us to study situations in which each
decision-maker is free to change her mind as events unfold.

 In this setup, perfect information means that each decision-
maker is fully informed about all previous actions.
Game Theory - A (Short) Introduction 228 9/12/2011
5.1 Extensive games with
perfect information
 5.1.1 Definition
 We add to the players and preferences the order of the players' moves
and the actions each player may take at each point.
 Each possible sequence of actions forms a terminal history.
 The function that gives the player who moves at each point in each
terminal history is the player function.
 So, the components of an extensive game are:
 The players
 The terminal histories
 The player function
 The preferences for the players
Game Theory - A (Short) Introduction 229 9/12/2011
5.1 Extensive games with
perfect information
 Example 154.1: Entry game
An incumbent faces the possibility of entry by a challenger (eg.:
new entrant in an industry). The challenger may enter or not. If
it enters, the incumbent may either acquiesce or fight.

Extensive game components:
 Players: Incumbent, Challenger
 Terminal histories: (In, Acquiesce), (In, Fight), Out
 Player function : player(Start) = Challenger, player(In) =
Incumbent
 Preferences : ?


Game Theory - A (Short) Introduction 230 9/12/2011
5.1 Extensive games with
perfect information
 Note that the set of actions available to each player is NOT
part of the game description. But it can be deduced from the
description of the game (after any sequence of events, a player
chooses an action).

 Entry Game
Game Theory - A (Short) Introduction 231 9/12/2011
(Figure: the entry game tree. Actions — Challenger: {In, Out}; Incumbent: {Acquiesce, Fight}.)
5.1 Extensive games with
perfect information
 Terminal histories are a set of sequences:
 The first element of the sequence starts the game
 The order of the sequence depicts the order of actions by players
Entry game
{(In, Acquiesce), (In, Fight), (Out) }

 Define:
 Subhistories of a finite sequence (a_1, a_2, …, a_k) of actions to be:
 The empty sequence of no actions (the empty history,
representing the start of the game)
 All sequences of the form (a_1, a_2, …, a_m), where 1 ≤ m ≤ k.
 The entire sequence is a subhistory of itself.
 A subhistory NOT equal to the entire sequence is called a proper
subhistory.


Game Theory - A (Short) Introduction 232 9/12/2011
5.1 Extensive games with
perfect information
Entry game:
 The subhistories of (In, Acquiesce) are the empty history and the
sequences (In) and (In, Acquiesce).
 The proper subhistories are the empty history and the sequence
(In).
 Definition 155.1 (Extensive game with perfect information)
An extensive game with perfect information consists of
 A set of players
 A set of sequences (terminal histories) with the property that no sequence
is a proper subhistory of any other sequence
 A function (the player function) that assigns a player to every sequence that
is a proper subhistory of some terminal history
 For each player, preferences over the set of terminal histories
Game Theory - A (Short) Introduction 233 9/12/2011
The set of terminal histories is the set of all sequences of actions that may occur. Terminal
histories represent outcomes of the game.
If the length of the longest terminal history is finite, we say that the game has a finite
horizon. If the game has a finite horizon and finitely many terminal histories, we say that the
game is finite.
5.1 Extensive games with
perfect information
Entry game:
Suppose that the best outcome for the challenger is that it
enters and the incumbent acquiesces, and the worst outcome is
that it enters and the incumbent fights, whereas the best
outcome for the incumbent is that the challenger stays out, and
the worst outcome is that it enters and there is a fight.
The situation is modeled as follows:
 Players: {Challenger, Incumbent}
 Terminal histories: (In,Acquiesce), (In, Fight), (Out)
 Player function: P(0) = Challenger, P(In) = Incumbent
 Preferences:
 Challenger: u(In,Acquiesce) = 2, u(Out) = 1, u(In,Fight) = 0
 Incumbent: u(Out) = 2, u(In,Acquiesce) = 1, u(In,Fight) = 0
Game Theory - A (Short) Introduction 234 9/12/2011
5.1 Extensive games with
perfect information
Game Theory - A (Short) Introduction 235 9/12/2011
(Figure: the entry game tree, labeling the player who moves at each node, the start of the game, the actions, and the payoffs.)
The sets of actions can be deduced from the set of terminal histories and the player function :
A(h) = {a: (h,a) is a history}
Where h is some nonterminal history, (h,a) is a history, a is one of the actions available to the
player who moves after h.
Eg.: A(In) = {Acquiesce, Fight}
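A minimal sketch (not from the slides) of this representation in Python: the entry game's terminal histories, player function and payoffs are taken from the slides, and A(h) is deduced exactly as described in the note above (the dictionary encoding is an illustrative choice).

```python
# Sketch (not from the slides): a minimal representation of the entry game as an
# extensive game with perfect information, with the action sets A(h) deduced from
# the terminal histories rather than specified.
TERMINAL = [("In", "Acquiesce"), ("In", "Fight"), ("Out",)]
PLAYER = {(): "Challenger", ("In",): "Incumbent"}        # player function on nonterminal histories
PAYOFF = {("In", "Acquiesce"): {"Challenger": 2, "Incumbent": 1},
          ("In", "Fight"):     {"Challenger": 0, "Incumbent": 0},
          ("Out",):            {"Challenger": 1, "Incumbent": 2}}

def actions(h):
    """A(h): actions available after nonterminal history h, deduced from TERMINAL."""
    return sorted({t[len(h)] for t in TERMINAL
                   if len(t) > len(h) and t[:len(h)] == h})

print(actions(()))        # ['In', 'Out']
print(actions(("In",)))   # ['Acquiesce', 'Fight']
```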
5.1 Extensive games with
perfect information
 Exercise 156.2
a. Represent in a diagram the two-player
extensive game with perfect information in
which the terminal histories are (C,E),
(C,F), (D,G), and (D,H), the player
function is given by P(0) = 1 and P(C) =
P(D) = 2, player 1 prefers (C,F) to (D,G)
to (D,H) and player 2 prefers (D,G) to
(C,F) to (C,E).

b. Write down the set of players, the set of
terminal histories, player function, and
players’ preferences for the game
represented on the right side of the slide.
Game Theory - A (Short) Introduction 236 9/12/2011
5.1 Extensive games with
perfect information
 An extensive game with perfect information models a situation
in which each player, when choosing an action, knows all
actions chosen previously and always moves alone. Typical
situations modeled this way are:
 A race between firms developing a new technology;
 A race between directors to become CEO;
 Games like chess, tic-tac-toe, …

 Two extensions of extensive games with perfect information are:
 Allowing players to move simultaneously
 Allowing arbitrary patterns of information
Game Theory - A (Short) Introduction 237 9/12/2011
5.1 Extensive games with
perfect information
 Entry game solution:
 Solution: the challenger will enter and the incumbent will acquiesce
 Analysis:
 The challenger sees that, if he enters, the incumbent will
acquiesce
 As the incumbent will acquiesce in case of entry, the
challenger is better off entering than staying out


 Backward induction cannot always be used to solve extensive
games:
 For infinite horizon games, there is no end point from which to start
the induction
 But problems arise even for some finite horizon games (see the
next example)
Game Theory - A (Short) Introduction 238 9/12/2011
This line of argument is a backward induction.
5.1 Extensive games with
perfect information
Example: in this game, the Challenger sees that the Incumbent
is indifferent between Acquiesce and Fight if he enters. The
question of whether to enter or not remains open.
Game Theory - A (Short) Introduction 239 9/12/2011
5.1 Extensive games with
perfect information
 Another approach to defining equilibrium takes off from the
notion of Nash equilibrium: it seeks steady states.

 In games in which backward induction is well-defined, this
approach turns out to lead to the backward induction outcome.
So, there is no conflict between the two approaches.
Game Theory - A (Short) Introduction 240 9/12/2011
5.2 Strategies and outcomes
 5.2.1 Strategies
 Definition 159.1 ((full) strategy)
A (full) strategy of player i in an extensive game with perfect
information is a function that assigns to each history h after which it
is player i’s turn to move (P(h) = i, where P is the player function)
an action in A(h), the set of actions available after h.
Game Theory - A (Short) Introduction 241 9/12/2011
Player 1 has 2 strategies: C and D.
Player 2 has 4 strategies:

             Action assigned    Action assigned
             to history C       to history D
Strategy 1         E                  G
Strategy 2         E                  H
Strategy 3         F                  G
Strategy 4         F                  H
5.2 Strategies and outcomes
 Notation
 Player 1 strategies: C, D
 Player 2 strategies: EG, EH, FG, FH
 Actions are written in the order in which they occur in the
game.
 If actions are available at the same stage of the game, they are
written from left to right as they appear in the game diagram.
Game Theory - A (Short) Introduction 242 9/12/2011
Each player's full strategy is more than a "plan of action" or "contingency
plan": it specifies what the player does for each possible choice of
the other players.

In other words, if the player appoints an agent to play the game for her
and tells the agent her strategy, then the agent has enough information
to carry out her wishes, whatever actions the other players take.
5.2 Strategies and outcomes
 Exercise:
Determine the strategies of the player 1 in the following game:
Game Theory - A (Short) Introduction 243 9/12/2011
5.2 Strategies and outcomes
 Solution: CG, CH, DG, DH
Game Theory - A (Short) Introduction 244 9/12/2011
Each (full) strategy specifies an action after history (C,E)
even if it specifies the action D at the beginning of the
game!

A (full) strategy must specify an action for every history
after which it is the player's turn to move, even for histories
that, if the strategy is followed, do not occur (this is the
difference between a "plan of action" and a "full
strategy").

A way to interpret a (full) strategy is that it is a plan of action
that specifies a player's actions even if she makes mistakes.
E.g.: DG may be read as "I choose D but, if I make a mistake
and play C, then I will play G if the other player
plays E."
5.2 Strategies and outcomes
 5.2.2 Outcomes
 A strategy profile is the vector of strategies played by each player.
It determines the terminal history that occurs. We denote strategy
profile by s. The terminal history associated with the strategy
profile s is the outcome of s and is denoted O(s).

 Example:
 The strategy profile (DG,E) is associated with the terminal
history (D)
 The strategy profile (CH,E) is associated with the terminal
history (C,E,H)
 Note that the outcome O(s) of the strategy profile s depends
only on the players’ plans of action, not their full strategies.
Game Theory - A (Short) Introduction 245 9/12/2011
5.3 Nash Equilibrium
Definition 161.2 (Nash Equilibrium of extensive game with
perfect information)
The strategy profile s* in an extensive game with perfect
information is a Nash equilibrium if, for every player i and
every strategy r_i of player i, the terminal history O(s*) generated
by s* is at least as good according to player i's preferences as
the terminal history O(r_i, s*_{-i}) generated by the strategy profile
(r_i, s*_{-i}) in which player i chooses r_i while every other player j
chooses s*_j. Equivalently, for each player i:

  u_i(O(s*)) ≥ u_i(O(r_i, s*_{-i}))   for every strategy r_i
Game Theory - A (Short) Introduction 246 9/12/2011
5.3 Nash Equilibrium
 One way to find the Nash equilibria of an extensive game in
which each player has finitely many strategies is :
 To list each player’s (full) strategies;
 To combine strategies of all players to list strategies profiles;
 To find the outcome of each strategy profile;
 To analyze this information as a strategic game.
This is known as the strategic form of the extensive game.
Game Theory - A (Short) Introduction 247 9/12/2011
The set of Nash equilibria of any extensive game
with perfect information is the set of Nash equilibria
of its strategic form.
5.3 Nash Equilibrium
 Example 162.1: the entry game
Game Theory - A (Short) Introduction 248 9/12/2011
Challenger's strategies: {In, Out}
Incumbent's strategies: {Acquiesce, Fight}

Strategic form of the game (Challenger chooses the row, Incumbent the
column; * marks a best response):

            Acquiesce   Fight
   In         2*,1*      0,0
   Out        1,2*       1*,2*

Nash equilibria:
- (In, Acquiesce): the one identified by backward induction
- (Out, Fight): this is also a steady state; no player has an incentive to
  deviate.
5.3 Nash Equilibrium
 How to interpret the Nash Equilibrium (Out,Fight)?
 This situation is never observed in the extensive game
 A solution to escape from this difficulty is to consider a slightly
perturbed steady state in which, on rare occasions, nonequilibrium
actions are taken:
 Players make mistakes or deliberately experiment
 Perturbations allow each player eventually to observe every
other player's action after every history

 Another important point to note is that the extensive game
embodies the assumption that the incumbent cannot commit, at
the beginning of the game, to fight if the challenger enters. If
such a commitment were credible, the challenger would stay
out. But the threat is not credible (because it is irrational to
fight after entry).
Game Theory - A (Short) Introduction 249 9/12/2011
5.3 Nash Equilibrium
 Exercise 163.1 (Nash equilibria of extensive games)
Find the Nash equilibria of the extensive game represented by
the figure (when constructing the strategic form of each game,
be sure to include all the strategies of each player).
Game Theory - A (Short) Introduction 250 9/12/2011
5.4 Subgame perfect equilibrium
 5.4.1 Definition
 The notion of Nash equilibrium ignores the sequential structure of
an extensive game. This may lead to steady states that are not
robust (in the sense that they do not appear as such in the
extensive game).
 We consider now a new notion of equilibrium that models a robust
steady state. This notion requires:
 (i) That each player’s strategy to be optimal
 (ii) After every possible history

 Subgame: for any nonterminal history h, the subgame following h
is the part of the game that remains after h has occurred.
Example: in the entry game, the subgame following the history In is the game
in which the incumbent is the only player and there are two terminal histories
: Acquiesce and Fight.
Game Theory - A (Short) Introduction 251 9/12/2011
5.4 Subgame perfect equilibrium
Definition 164.1 (Subgame of extensive game with perfect
information)
Let Gamma be an extensive game with perfect information, with
player function P. For any nonterminal history h of Gamma, the
subgame Gamma(h) following the history h is the following
extensive game:
 Players: the players in Gamma
 Terminal histories: the set of all sequences h’ of actions such that
(h,h’) is a terminal history of Gamma
 Player function: the player P(h,h’) is assigned to each proper
subhistory h’ of a terminal history
 Preferences: each player prefers h’ to h’’ if she prefers (h,h’) to
(h,h’’) in Gamma.

Note that the subgame following the empty history is the entire game.
Game Theory - A (Short) Introduction 252 9/12/2011
5.4 Subgame perfect equilibrium
 A subgame perfect equilibrium is a strategy profile s* with the
property that in no subgame can any player i do better by
choosing a strategy different from s*_i, given that every other player j
adheres to s*_j.

Example: in the entry game, the Nash equilibrium (Out,Fight) is
not a subgame perfect equilibrium because in the subgame
following the history In, the strategy Fight is not optimal for the
incumbent: in this subgame (the In subgame), the incumbent is
better off choosing Acquiesce than it is choosing Fight.

 Notation: Let h be a history and s a strategy profile to which the
players adhere after h. We denote by O_h(s) the terminal history
consisting of h followed by the outcome generated in the subgame
following h by the strategy profile induced by s.
Game Theory - A (Short) Introduction 253 9/12/2011
5.4 Subgame perfect equilibrium
Example: the entry game
 Let s be the strategy profile (Out,Fight)
 Let h be the history In
 If h occurs and, afterwards, the players adhere to s, the resulting
terminal history is O_h(s) = (In, Fight)
Game Theory - A (Short) Introduction 254 9/12/2011
5.4 Subgame perfect equilibrium
Definition 166.1 (Subgame perfect equilibrium of extensive
game with perfect information)
The strategy profile s* in an extensive game with perfect
information is a subgame perfect equilibrium if, for every player
i, every history h after which it is player i's turn to move (P(h)=i),
and every strategy r_i of player i, the terminal history O_h(s*)
generated by s* after the history h is at least as good according
to player i's preferences as the terminal history O_h(r_i, s*_{-i})
generated by the strategy profile (r_i, s*_{-i}):

  u_i(O_h(s*)) ≥ u_i(O_h(r_i, s*_{-i}))   for every strategy r_i of player i
The key point is that each player's strategy is required to be optimal for every
history after which it is the player's turn to move, not only at the
start of the game (as in the definition of a Nash equilibrium)
5.4 Subgame perfect equilibrium
 5.4.2 Subgame perfect equilibrium and Nash equilibrium

  Every subgame perfect equilibrium is a Nash equilibrium (because in
a subgame perfect equilibrium, every player’s strategy is optimal, in
particular after the empty history)

  A subgame perfect equilibrium generates a Nash equilibrium in every
subgame

  A Nash equilibrium is optimal in any subgame that is reached when
the players follow their strategies.

  Subgame perfect equilibrium requires moreover that each player's
strategy is optimal after histories that do not occur if the players follow
their strategies.
Game Theory - A (Short) Introduction 256 9/12/2011
5.4 Subgame perfect equilibrium
 Example 167.2 (Variant of the entry game)
Consider the variant of the entry game in which the incumbent
is indifferent between fighting and acquiescing if the challenger
enters. Find the subgame perfect equilibria.

Game Theory - A (Short) Introduction 257 9/12/2011
5.4 Subgame perfect equilibrium
Solution: both Nash equilibria (In,Acquiesce) and (Out,Fight)
are subgame perfect equilibria because, after history In, both
Fight and Acquiesce are optimal for the incumbent.

 Exercise 168.1
Which of the Nash equilibria of the following game are subgame
perfect?
Game Theory - A (Short) Introduction 258 9/12/2011
5.4 Subgame perfect equilibrium
 5.4.4 Interpretation
 A Nash equilibrium corresponds to a steady state in an idealized
setting in which the players' long experience leads them to correct
beliefs about the other players' actions.
 A subgame perfect equilibrium of an extensive game corresponds
to a slightly perturbed steady state in which all players, on rare
occasions, take nonequilibrium actions. Thus, players know how
the other players will behave in every subgame.

  A subgame perfect equilibrium is a plan of action specifying the
players' actions:
 Not only after histories consistent with the strategy
 But also after histories that result when some player chooses
arbitrary alternative actions.
Game Theory - A (Short) Introduction 259 9/12/2011
5.4 Subgame perfect equilibrium
 Alternative interpretation:
 Consider an extensive game with perfect information in which:
 each player has a unique best action at every history after
which it is her turn to move;
 horizon is finite;
 In such a game, a player who knows the other players’ preferences
(eg: profit maximization) and knows that the other players are
rational may use backward induction to deduce her optimal
strategy.

The subgame perfect equilibrium is the outcome of the players’
rational calculations about each other’s strategies. Note that:
 this interpretation is not tenable in games in which some player has more
than one optimal action after some history;
 But an extension of the procedure of backward induction can be used to find
all subgame perfect equilibria of finite horizon games.


Game Theory - A (Short) Introduction 260 9/12/2011
5.5 Finding subgame perfect equilibria
of finite horizon games: backward
induction
 In a game with finite horizon, the set of subgame perfect
equilibria may be found more directly by using an extension of
the procedure of backward induction.

 Define the length of a subgame to be the length of the longest
history in the subgame.

 The procedure of backward induction works as follows:
 (i) Start by finding the optimal actions of the players who move in
the last subgames (stage k);
 (ii) Next, find the optimal actions of the players who move at stage
k-1, given the optimal actions already found at stage k;
 (iii) Continue the procedure up to stage 1.
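A minimal sketch (not from the slides) of this procedure for games in which every mover has a unique optimal action; the game tree and payoffs are illustrative placeholders chosen so that the optimal actions match the example on the next slide (G after (C,E), E after C, D at the start).

```python
# Sketch (not from the slides): generic backward induction for a finite game tree
# with perfect information; payoffs are placeholders consistent with the example below.
PLAYER = {(): 1, ("C",): 2, ("C", "E"): 1}                   # player function (1-indexed)
ACTIONS = {(): ["C", "D"], ("C",): ["E", "F"], ("C", "E"): ["G", "H"]}
PAYOFF = {("C", "E", "G"): (2, 1), ("C", "E", "H"): (0, 0),  # (u1, u2) at terminal histories
          ("C", "F"): (1, 0), ("D",): (3, 2)}
CHOICE = {}                                                  # optimal action at each history

def solve(h=()):
    """Payoff vector of the subgame following h, filling CHOICE along the way."""
    if h in PAYOFF:
        return PAYOFF[h]
    mover = PLAYER[h] - 1                                    # index into the payoff vectors
    CHOICE[h] = max(ACTIONS[h], key=lambda a: solve(h + (a,))[mover])
    return solve(h + (CHOICE[h],))

solve()
print(CHOICE)   # {('C','E'): 'G', ('C',): 'E', (): 'D'} -> subgame perfect equilibrium (DG, E)
```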
Game Theory - A (Short) Introduction 261 9/12/2011
5.5 Finding subgame perfect equilibria
of finite horizon games: backward
induction
 Example
 We first deduce that in the subgame of length 1 following history
(C,E), player 1 chooses G;
 Then, at the start of the subgame of length 2 following the history
C, player 2 chooses E;
 Then, at the start of the whole game, player 1 chooses D.
Game Theory - A (Short) Introduction 262 9/12/2011
In any game in which this procedure selects
a single action for the player who moves at
the start of each subgame, the strategy
profile thus selected is the unique subgame
perfect equilibrium of the game.
5.5 Finding subgame perfect equilibria
of finite horizon games: backward
induction
 What happens in a game in which at the start of some
subgames, more than one action is optimal ?

Game Theory - A (Short) Introduction 263 9/12/2011
The solution is to trace back separately the
implications for behavior in the longer
subgames of every combination of optimal
actions in the shorter subgames.
5.5 Finding subgame perfect equilibria
of finite horizon games: backward
induction
 Example 172.1
Game Theory - A (Short) Introduction 264 9/12/2011
5.5 Finding subgame perfect equilibria
of finite horizon games: backward
induction
 The game has three subgames of length 1, in each of which
player 2 moves:
 In subgames following the histories C and D, player 2 is indifferent
between her two actions;
 In the subgame following the history E, player 2’s unique optimal
action is K.
Game Theory - A (Short) Introduction 265 9/12/2011
There are four combinations of player 2's
optimal actions in the subgames of length 1:
•FHK
•FIK
•GHK
•GIK
5.5 Finding subgame perfect equilibria
of finite horizon games: backward
induction
 The game has a single subgame of length 2, namely the whole
game, in which player 1 moves first. We now consider player
1’s optimal action in this game for every combination of optimal
actions of player 2 in the subgame of length 1:
 For the combinations FHK and FIK of optimal actions of player 2,
player 1’s optimal action at the start of the game is C;
 For the combination GHK of optimal actions of player 2, the actions
C, D, and E are optimal for player 1;
 For the combination GIK of optimal actions of player 2, player 1’s
optimal action at the start of the game is D.
Game Theory - A (Short) Introduction 266 9/12/2011
The strategy pairs isolated by the procedure are (C,FHK),
(C,FIK), (C,GHK), (D,GHK) and (D,GIK)
The set of strategy profiles that this procedure yields for the whole
game is the set of subgame perfect equilibria of the game.
5.5 Finding subgame perfect equilibria
of finite horizon games: backward
induction
 Two important propositions:
Game Theory - A (Short) Introduction 267 9/12/2011
PROPOSITION 172.1 (Subgame perfect equilibrium of finite
horizon games and backward induction)

The set of subgame perfect equilibria of a finite horizon
extensive game with perfect information is equal to the set of
strategy profiles isolated by the procedure of backward
induction.
PROPOSITION 173.1 (Existence of subgame perfect
equilibrium)

Every finite extensive game with perfect information has a
subgame perfect equilibrium.
5.5 Finding subgame perfect equilibria
of finite horizon games: backward
induction
 Exercise 173.2
Find the subgame perfect equilibria of this game:
Game Theory - A (Short) Introduction 268 9/12/2011
5.5 Finding subgame perfect equilibria
of finite horizon games: backward
induction
 Exercise 176.1 (Dollar auction)
An object that two people each value at v, a positive integer, is
sold in an auction. In the auction, the people take turns bidding;
a bid must be a positive integer greater than the previous bid.
On her turn, a player may pass rather than bid, in which case
the game ends and the other player receives the object; both
players pay their last bids (if any) (if player 1 passes initially, for
example, player 2 receives the object and makes no payment; if
player 1 bids 1, player 2 bids 3 and then player 1 passes, player
2 obtains the object and pays 3, and player 1 pays 1). Each
person’s wealth is w, which exceeds v. Neither player may bid
more than her wealth. For v=2 and w=3, model the auction as
an extensive game and find its subgame perfect equilibria.

Game Theory - A (Short) Introduction 269 9/12/2011
5.5 Finding subgame perfect equilibria
of finite horizon games: backward
induction
 Exercise 176.2 (A synergistic relationship)
Two individuals are involved in a synergistic relationship.
Suppose that the players choose their effort levels sequentially
(rather than simultaneously). First, individual 1 chooses her effort
level a_1. Then individual 2 chooses her effort level a_2. An effort
level is a nonnegative number, and individual i's preferences
(for i = 1,2) are represented by the payoff function a_i (c + a_j − a_i),
where j is the other individual and c > 0 is some constant.

Find the subgame perfect equilibria.

Game Theory - A (Short) Introduction 270 9/12/2011
5.5 Finding subgame perfect equilibria
of finite horizon games: backward
induction
 Exercise 174.2 (An entry game with a financially constrained
firm)
An incumbent in an industry faces the possibility of entry by a challenger. First the
challenger chooses whether to enter. If it does not enter, neither firm has any
further action; the incumbent’s payoff is TM (it obtains the profit M in each of the
following T ≥ 1 periods). The challenger’s payoff is 0. If the challenger enters, it
pays the entry costs f > 0, and in each of T periods the incumbent first commits to
fight or cooperate with the challenger in that period, then the challenger chooses
whether to stay in the industry or to exit. If, in any period, the challenger stays in,
each firm obtains in that period the profit –F < 0 if the incumbent fights and C >
max {F,f} if it cooperates. If, in any period, the challenger exits, both firms obtain
the profit zero in that period (regardless of the incumbent’s action); the incumbent
obtains the profit M > 2 C and the challenger the profit 0 in every subsequent
period. Once the challenger exits, it cannot reenter. Each firm cares about the sum
of its profits.

Find the subgame perfect equilibria of the extensive game.
Game Theory - A (Short) Introduction 271 9/12/2011
10. Extensive Games
with Imperfect
Information
Framework

 In this chapter we keep the extensive game setup: an extensive
game describes explicitly the sequential structure of decision-
making, allowing us to study situations in which each decision-
maker is free to change her mind as events unfold.

 In this imperfect information setup, each player, when
choosing her action, may not be informed of the other players’
previous actions.
Game Theory - A (Short) Introduction 273 9/12/2011
10.1 Extensive games with
imperfect information
 To describe an extensive game with perfect information, we
need to specify the set of players, the set of terminal histories,
the player function and the players’ preferences.

 To describe an extensive game with imperfect information, we
need to add a specification of each player’s information about
the history at every point at which she moves:
  Denote by H_i the set of histories after which player i moves.
  We specify player i's information by partitioning H_i into a collection of information sets (the collection is called player i's information partition).
  When making her decision, the player is informed of the information set that has occurred, but not of which history within that set has occurred.

Game Theory - A (Short) Introduction 274 9/12/2011
10.1 Extensive games with
imperfect information
  Example
  Suppose player i moves after histories h1, h2 and h3 (so H_i = {h1, h2, h3})
  and is informed only that the history is h1, or that it is either h2 or h3.
Then player i's information partition consists of the two information sets {h1} and {h2, h3}.
Note that if the player is not informed at all, her information partition contains a single information set {h1, h2, h3}.
  Important restriction
  Denote by A(h) the set of actions available to the player who moves after history h.
  We allow two histories h and h′ to be in the same information set only if A(h) = A(h′).
  Why? (Otherwise the player could deduce, from the set of actions available to her, which history had occurred.)
Game Theory - A (Short) Introduction 275 9/12/2011
10.1 Extensive games with
imperfect information
  Note that we allow moves of chance. So an outcome is a lottery (a probability distribution) over the set of terminal histories.

 Definition 314.1 (Extensive game with imperfect information)
 A set of players
  A set of sequences (terminal histories) having the property that no sequence is a proper subhistory of any other sequence
 A function (the player function) that assigns either a player or “chance” to
every sequence that is a proper subhistory of some terminal history
 A function that assigns to each history that the player function assigns to
chance a probability distribution over the actions available after that history
(each probability distribution is independent of every other distribution).
 For each player, a partition (information partition) of the set of histories
assigned to that player by the player function.
 For each player, preferences over the set of lotteries over terminal histories.

Game Theory - A (Short) Introduction 276 9/12/2011
10.1 Extensive games with
imperfect information
 Example 314.2: BoS as an extensive game
 Games in which each player moves once and no player, when
moving, is informed of any other player’s action, may be modeled
as strategic games or extensive games with imperfect information.
 BoS :
  Each of two people chooses whether to go to a Bach or a Stravinsky concert
 Neither person, when choosing a concert, knows the one
chosen by the other person.
 Model this game as an extensive game with imperfect information.

Game Theory - A (Short) Introduction 277 9/12/2011
10.1 Extensive games with
imperfect information
  Solution:
  Players: the two people, say 1 and 2
  Terminal histories: (B, B), (B, S), (S, B), (S, S) (B = Bach, S = Stravinsky)
  Player function: P(∅) = 1, P(B) = P(S) = 2
  Chance moves: none
  Information partitions
  Player 1: {∅} (a single information set: player 1 has a single move and, when she moves, she is informed that the game is beginning)
  Player 2: {B, S} (player 2 has a single move and, when she moves, she is not informed whether the history is B or S)
  Preferences: given in the game description
Game Theory - A (Short) Introduction 278 9/12/2011
10.1 Extensive games with
imperfect information
  Figure 315.1 (the link between player 2's two histories indicates that they are in the same information set)
Game Theory - A (Short) Introduction 279 9/12/2011
10.1 Extensive games with
imperfect information
 Example 317.1: Variant of Entry Game (the challenger, before
entering, takes an action that the incumbent does not observe)
 An incumbent faces the possibility of entry by a challenger (see example
154.1)
 The challenger has three choices:
 Stay out
 Prepare itself for combat and enter (preparation is costly but reduces
loss from fight)
 Enter without preparations
  A fight is less costly for the incumbent if the entrant is unprepared. But regardless of the entrant's readiness, the incumbent prefers to acquiesce rather than to fight.
  The incumbent observes whether the challenger enters but not whether it is prepared.
 Model (graphically by a tree) this game as an extensive game with imperfect
information.


Game Theory - A (Short) Introduction 280 9/12/2011
10.1 Extensive games with
imperfect information
 Figure 317.1
Game Theory - A (Short) Introduction 281 9/12/2011
10.2 Strategies
 A strategy specifies the action the player takes whenever it is her turn to
move.
  Definition 318.1 (Strategy in extensive game)
  A (pure) strategy of player i in an extensive game is a function that assigns to each of player i's information sets I_i an action in A(I_i) (the set of actions available to player i at the information set I_i).

  In the BoS game, each player has a single information set, at which two actions (B or S) are available. Thus each player has two possible strategies: B or S. If a player has several information sets, a strategy specifies the list of actions taken at each of them, in the form (a_{I_1}, a_{I_2}, …).

 Definition 318.3 (Mixed Strategy in extensive game)
 A mixed strategy of a player in an extensive game is a probability
distribution over the player’s pure strategies.

 With mixed strategies, players are allowed to choose their actions randomly.

Game Theory - A (Short) Introduction 282 9/12/2011
10.3 Nash equilibrium
 Definition 318.4 (Nash equilibrium of extensive game)
 Intuition: a strategy profile is a Nash equilibrium if no player has an
alternative strategy that increases her payoff, given the other
player’s strategies.
  Formal definition: The mixed strategy profile α* in an extensive game is a (mixed strategy) Nash equilibrium if, for each player i and every mixed strategy α_i of player i, player i's expected payoff to α* is at least as large as her expected payoff to (α_i, α*_{−i}), according to a payoff function whose expected value represents player i's preferences over lotteries.

Notes:
 an equilibrium in which no player’s strategy entails any randomization (every player’s
strategy assigns probability 1 to a single action at each information set) is a pure Nash
equilibrium.
 One way to find a Nash equilibrium of an extensive game is to construct the strategic form
of the game and analyze it as a strategic game.
Game Theory - A (Short) Introduction 283 9/12/2011
10.3 Nash equilibrium
  Example 319.1: BoS as an extensive game
  Each player has two strategies: B and S
  The strategic form of the game is given in Figure 19.1
  Thus the game has two pure Nash equilibria:
  (B, B)
  (S, S)

In the BoS game, player 2 is not informed of the action chosen by player 1 when taking an action (her information set contains both the history B and the history S). However, player 2's experience playing the game tells her which history to expect.

E.g.: in a steady state in which every person who plays the role of either player chooses B, each player knows (by experience) that the other player will choose B.
Game Theory - A (Short) Introduction 284 9/12/2011
10.3 Nash equilibrium
 How may we extend the idea of subgame perfect equilibrium to
extensive game with imperfect information to deal with
situations in which the notion of Nash equilibrium is not
adequate?

 Example 322.1: Entry game
 The strategic form of the entry game in Example 317.1 is the
following:
Game Theory - A (Short) Introduction 285 9/12/2011
              Acquiesce      Fight
Ready           3, 2*         1, 1
Unready         4*, 3*        0, 2
Out             2, 4*         2*, 4*

(The challenger chooses a row, the incumbent a column; a * marks a best response.)
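 A small Python sketch (not from the slides) that enumerates the pure Nash equilibria of the strategic form above by checking best responses; the payoff numbers are copied from the table.

# Challenger is the row player, incumbent the column player.
payoffs = {
    ('Ready',   'Acquiesce'): (3, 2), ('Ready',   'Fight'): (1, 1),
    ('Unready', 'Acquiesce'): (4, 3), ('Unready', 'Fight'): (0, 2),
    ('Out',     'Acquiesce'): (2, 4), ('Out',     'Fight'): (2, 4),
}
rows, cols = ['Ready', 'Unready', 'Out'], ['Acquiesce', 'Fight']

for r in rows:
    for c in cols:
        u1, u2 = payoffs[(r, c)]
        row_best = all(u1 >= payoffs[(r2, c)][0] for r2 in rows)  # challenger cannot gain
        col_best = all(u2 >= payoffs[(r, c2)][1] for c2 in cols)  # incumbent cannot gain
        if row_best and col_best:
            print('pure Nash equilibrium:', (r, c))
# prints (Unready, Acquiesce) and (Out, Fight)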
10.3 Nash equilibrium
  The game has two pure Nash equilibria:
  (Unready, Acquiesce)
  (Out, Fight)
(The game also has mixed strategy Nash equilibria in which the challenger uses the pure strategy Out and the probability assigned by the incumbent to Acquiesce is at most 1/2.)

  As in Chapter 5 (perfect information), the Nash equilibrium (Out, Fight) is not plausible. The notion of subgame perfect equilibrium eliminates this strategy profile by requiring that each player's strategy be optimal, given the other players' strategies, for every history after which she moves, regardless of whether that history occurs if the players adhere to their strategies.

  The natural extension of this idea to games with imperfect information requires that each player's strategy be optimal at each of her information sets.
Game Theory - A (Short) Introduction 286 9/12/2011
10.3 Nash equilibrium
  In Example 322.1, the incumbent's action Fight is unambiguously suboptimal at its information set, because the incumbent prefers to acquiesce if the challenger enters, regardless of whether the challenger is ready. So any equilibrium that assigns positive probability to Fight does not satisfy the additional requirement introduced by the notion of subgame perfect equilibrium.

  However, the implementation of the idea in other games may be less straightforward, because the optimality of an action at an information set may depend on the history that has occurred. Consider for example a variant of the entry game in which the incumbent prefers to fight rather than to accommodate an unprepared entrant (see Figure 323.1).
Game Theory - A (Short) Introduction 287 9/12/2011
10.3 Nash equilibrium
Game Theory - A (Short) Introduction 288 9/12/2011
Figure 323.1
10.3 Nash equilibrium
  Like in the original game, (Out, Fight) is a Nash equilibrium. But:
  given that fighting is now optimal if the challenger enters unprepared, the optimality of the incumbent's strategy in the modified game depends on the history the incumbent believes has occurred;
  and the challenger's strategy (Out) gives the incumbent no basis on which to form such a belief.

So, to study this situation, we must specify players' beliefs.
Game Theory - A (Short) Introduction 289 9/12/2011
10.4 Beliefs and sequential
equilibrium
  A Nash equilibrium of a strategic game is characterized by two requirements:
  Each player chooses her best action given her belief about the other players' actions
  Each player's belief is correct

 The notion of equilibrium we define here:
 Embodies these two requirements;
 Insists that they hold at each point at which a player has to choose
an action (like subgame perfect equilibrium in extensive games with
perfect information).
Game Theory - A (Short) Introduction 290 9/12/2011
10.4 Beliefs and sequential
equilibrium
 10.4.1 Beliefs
 We assume that at an information set that contains more than one
history, the player whose turn it is to move forms a belief about the
history that has occurred;
 We model this belief as a probability distribution over the histories
in the information set;
 We call a collection of beliefs (one for each information set) a belief
system.

 Definition 324.1
 A belief system in an extensive game is a function that
assigns to each information set a probability distribution over
the histories in that information set.

Game Theory - A (Short) Introduction 291 9/12/2011
10.4 Beliefs and sequential
equilibrium
 Example: the entry game (317.1)
 The belief system consists of a pair of probability distributions:
  One assigns probability 1 to the empty history (the challenger's belief at the start of the game)
  The other assigns probabilities to the histories Ready and Unready (the incumbent's belief after the challenger enters)

 10.4.2 Strategies
  Definition 324.2 (Behavioral strategy in extensive game)
  A behavioral strategy of player i in an extensive game is a function that assigns to each of player i's information sets I_i a probability distribution over the actions in A(I_i), with the property that each probability distribution is independent of every other distribution.

Game Theory - A (Short) Introduction 292 9/12/2011
10.4 Beliefs and sequential
equilibrium
  Note:
  A behavioral strategy that assigns probability one to a single action is equivalent to a pure strategy.
  A behavioral strategy assigns probabilities to the actions at each information set, whereas a mixed strategy assigns probabilities to complete pure strategies (combinations of actions, one for each information set).
  In all the games that we study, behavioral strategies and mixed strategies are equivalent, but behavioral strategies are easier to work with.
 Example: the BoS game (314.2)
 Each player has a single information set;
 So, a behavioral strategy for each player is a single probability
distribution over her actions.
 In this game, the set of behavioral strategies is identical to the
set of mixed strategies.
Game Theory - A (Short) Introduction 293 9/12/2011
10.4 Beliefs and sequential
equilibrium
 10.4.3 Equilibrium
 Definition 325.1 (Assessment)
 An assessment is an equilibrium if it satisfies the following two
requirements:
 Sequential rationality: each player’s strategy is optimal
whenever she has to move, given her belief and the other players’
strategies;
 Consistency of beliefs with strategies: each players’ belief is
consistent with the strategy profile.

 The sequential rationality generalizes the requirement of subgame perfect
equilibrium: each player’s strategy must be optimal in the part of the game
that follows each of her information sets, given the strategy profile and given
the player’s belief about the history in the information set that has occurred,
regardless of whether the information set is reached if the players follow their
strategies..
Game Theory - A (Short) Introduction 294 9/12/2011
10.4 Beliefs and sequential
equilibrium
 Example 325 and Figure 326.1
Game Theory - A (Short) Introduction 295 9/12/2011
10.4 Beliefs and sequential
equilibrium
  Player 1's strategy is indicated by the red branches:
  She selects E at the start of the game;
  She selects J after the history (C, F).
  Player 2's belief at her information set (the numbers in brackets) is that the history C has occurred with probability 2/3 and the history D has occurred with probability 1/3.

Sequential rationality requires that player 2's strategy be optimal at her information set, given the subsequent behavior specified by player 1's strategy, even though this set is not reached if player 1 follows her strategy. Player 2's expected payoff in the part of the game starting at her information set is:
  Strategy F: (2/3 × 0) + (1/3 × 1) = 1/3
  Strategy G: (2/3 × 1) + (1/3 × 0) = 2/3
Sequential rationality therefore requires player 2 to select G.



Game Theory - A (Short) Introduction 296 9/12/2011
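 A tiny Python check (not from the slides) of player 2's expected payoffs at her information set, using the belief (2/3, 1/3) and the payoffs quoted in the computation above:

belief = {'C': 2/3, 'D': 1/3}            # player 2's belief at her information set
payoff = {'F': {'C': 0, 'D': 1},         # player 2's payoff to each action after each history,
          'G': {'C': 1, 'D': 0}}         # read off the expected-payoff computation above

for action in ('F', 'G'):
    value = sum(belief[h] * payoff[action][h] for h in belief)
    print(action, round(value, 3))       # F: 0.333, G: 0.667 -> sequential rationality picks G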
10.4 Beliefs and sequential
equilibrium
Sequential rationality also requires that player 1's strategy be optimal at each of her two (one-element) information sets, given player 2's strategy:
  Player 1's optimal action after the history (C, F) is J;
  If player 2's strategy is G, player 1's optimal actions at the start of the game are D and E.

Thus, given player 2's strategy G, player 1 has two optimal strategies: DJ and EJ.


Game Theory - A (Short) Introduction 297 9/12/2011
10.4 Beliefs and sequential
equilibrium
  Sequential rationality requirement (more formal definition)
  Denote by (β, μ) an assessment (β is a profile of behavioral strategies and μ is a belief system);
  Let I_i be an information set of player i;
  Denote by O_{I_i}(β, μ) the probability distribution over terminal histories that results if each history in I_i occurs with the probability assigned to it by player i's belief μ(I_i) (not necessarily the probability with which it occurs if the players adhere to β) and, subsequently, the players adhere to the strategy profile β.

Sequential rationality requires that, for each player i and each of her information sets I_i, her expected payoff to O_{I_i}(β, μ) be at least as large as her expected payoff to O_{I_i}((γ_i, β_{−i}), μ) for each of her behavioral strategies γ_i.

In Figure 326.1:
  For the information set {C, D}, the distribution O_{I_2}(β, μ) assigns probability 2/3 to the terminal history (C, G) and probability 1/3 to (D, G).
Game Theory - A (Short) Introduction 298 9/12/2011
10.4 Beliefs and sequential
equilibrium
  The consistency of beliefs with strategies is a new requirement. In a steady state, each player's belief must be correct: the probability it assigns to any history must be the probability with which that history occurs if the players adhere to their strategies.

  The implementation of this idea is somewhat unclear at an information set that is not reached if the players follow their strategies: every history in it then has probability 0. We deal with this difficulty by allowing the player who moves at such an information set to hold any belief there.

The consistency requirement therefore restricts the belief system only at information sets reached with positive probability if every player adheres to her strategy.
Game Theory - A (Short) Introduction 299 9/12/2011
10.4 Beliefs and sequential
equilibrium






Precisely, the consistency requirement imposes that, at every information set reached with positive probability under β, the probability assigned by the belief of the player who moves there to each history h* in that information set be equal to the probability that h* occurs according to the strategy profile, conditional on the information set being reached.

  By Bayes' rule, this probability is:

    μ(h*) = Pr(h* occurs according to β) / Σ_{h ∈ I_i} Pr(h occurs according to β)

Game Theory - A (Short) Introduction 300 9/12/2011
10.4 Beliefs and sequential
equilibrium
  Figure 326.1
  If player 1's behavioral strategy assigns probability 1 to action E at the start of the game, the consistency requirement places no restriction on player 2's belief (player 2's information set is not reached if player 1 adheres to her strategy);
  If player 1's behavioral strategy at the start of the game assigns positive probability to C or D, the consistency requirement comes into play:
  Denote by p the probability assigned to C by player 1's strategy and by q the probability assigned to D;
  Consistency requires that player 2's belief assign probability p/(p + q) to C and q/(p + q) to D.
Game Theory - A (Short) Introduction 301 9/12/2011
10.4 Beliefs and sequential
equilibrium
  Example 327.4: Consistency of beliefs in the entry game (Figures 317.1 and 323.1)
  Denote by p_r, p_u and p_o the probabilities that the challenger's strategy assigns to Ready, Unready and Out.
  If p_o = 1, the consistency condition does not restrict the incumbent's belief.
  Otherwise, the condition requires that the incumbent assign probability p_r/(p_r + p_u) to Ready and p_u/(p_r + p_u) to Unready.

  Definition 328.1 (Weak sequential equilibrium)
  An assessment (β, μ) (consisting of a behavioral strategy profile β and a belief system μ) is a weak sequential equilibrium if it satisfies sequential rationality and the (weak) consistency of beliefs with strategies.
Game Theory - A (Short) Introduction 302 9/12/2011
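 A hedged Python sketch (not from the slides) of the weak consistency computation in Example 327.4; the function and argument names are illustrative:

def incumbent_belief(p_r, p_u, p_o):
    """Weak consistency: belief over {Ready, Unready} after entry, given the
    challenger's behavioral strategy (p_r, p_u, p_o). Returns None if the
    information set is unreached (p_o = 1), in which case any belief is allowed."""
    reached = p_r + p_u
    if reached == 0:
        return None
    return p_r / reached, p_u / reached

print(incumbent_belief(0.2, 0.3, 0.5))   # (0.4, 0.6)
print(incumbent_belief(0.0, 0.0, 1.0))   # None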
10.4 Beliefs and sequential
equilibrium
 Figure 326.1
  In this game, player 1's strategy EJ is sequentially rational given player 2's strategy G, and player 2's strategy G is sequentially rational given the belief indicated in the figure and player 1's strategy EJ.
  The belief is consistent with the strategy profile (EJ, G), because this profile does not lead to player 2's information set.
  Thus the game has a weak sequential equilibrium.

 Note:
 In an extensive game with perfect information, only one belief system is
possible (each player believes at each information set that a single
compatible history has occurred with probability 1);
 Therefore, in an extensive game with perfect information, the strategy profile
in any weak sequential equilibrium is a subgame perfect equilibrium.
 The strategy profile in any weak sequential equilibrium is a Nash equilibrium
(if an assessment is a weak sequential equilibrium, then each player’s
strategy in the assessment is optimal at the beginning of the game, given the
other players’ strategies).
Game Theory - A (Short) Introduction 303 9/12/2011
10.4 Beliefs and sequential
equilibrium
 How to find weak sequential equilibria?
  We can use a combination of the techniques for finding subgame perfect equilibria of extensive games with perfect information and for finding Nash equilibria of strategic games;
 We can find all the Nash equilibria of the game, and then check
which of these equilibria are associated with weak sequential
equilibria.

 Figure 326.1
 Does the game have a weak sequential equilibrium in which player 1
chooses E?
  If player 1 chooses E, player 2's belief is not restricted by consistency;
 We need therefore to ask:
 Whether any strategy of player 2 makes E optimal for player 1;
 Whether there is a belief of player 2 that makes any such strategy
optimal.
Game Theory - A (Short) Introduction 304 9/12/2011
10.4 Beliefs and sequential
equilibrium
 We see that:
 E is optimal if and only if player 2 chooses F with probability at
most 2/3:
 Any such strategy of player 2 is optimal if Player 2
believes the history is C with probability ½
 The strategy of choosing F with probability 0 is optimal if
player 2 believes the history is C with any probability of at
least ½
  Thus: an assessment is a weak sequential equilibrium if player 1's strategy is EJ and player 2:
 Either chooses F with probability at most 2/3 and believes
that the history is C with probability ½
 Or chooses G and believes that the history is C with
probability at least ½

Game Theory - A (Short) Introduction 305 9/12/2011
10.4 Beliefs and sequential
equilibrium
 Example 330.1 (Weak sequential equilibria of entry game, example
317.1)
  The entry game has two pure strategy Nash equilibria: (Unready, Acquiesce) and (Out, Fight)
  Consider (Unready, Acquiesce):
  Consistency requires that the incumbent believe that the history is Unready at its information set (since the challenger's strategy is Unready), making Acquiesce optimal;
  The game therefore has a weak sequential equilibrium in which the strategy profile is (Unready, Acquiesce) and the incumbent believes that the history is Unready;
  Consider (Out, Fight):
  Regardless of the incumbent's belief at its information set, Fight is not an optimal action in the remainder of the game (Acquiesce yields a higher payoff than Fight for every belief).
  No assessment in which the strategy profile is (Out, Fight) is both sequentially rational and consistent.


Game Theory - A (Short) Introduction 306 9/12/2011
10.4 Beliefs and sequential
equilibrium
  Why "weak" sequential equilibrium?
  The consistency condition's limitation to information sets reached with positive probability generates, in some games, a relatively large set of equilibrium assessments;
  Some of these equilibrium assessments do not plausibly correspond to steady states. Consider the following variant of the entry game:
307 9/12/2011
10.4 Beliefs and sequential
equilibrium
  In this variant, Ready is better than Unready for the challenger, regardless of the incumbent's action;
  This game has a weak sequential equilibrium in which the challenger's strategy is Out, the incumbent's strategy is Fight, and the incumbent believes at its information set that the history is Unready (with probability one);
  In this equilibrium, the incumbent believes that the challenger has chosen Unready, although this action is dominated by Ready for the challenger. Such a belief does not seem reasonable.
Game Theory - A (Short) Introduction 308 9/12/2011
10.5 Signaling games
  In many interactions, information is asymmetric: some parties are better informed than others.

 In one interesting class of situations, the informed parties have
the opportunity to take actions observed by uninformed parties
before uninformed parties take actions that affect everyone: the
informed parties’ actions may “signal” their information.

Game Theory - A (Short) Introduction 309 9/12/2011
10.5 Signaling games
 Example 332.1: Entry as a signaling game.
  The challenger is strong with probability p and weak with probability 1 − p (with 0 < p < 1).
  The challenger knows its type but the incumbent does not.
  The challenger may either ready itself for battle or remain unready. The incumbent observes the challenger's readiness but not its type, and chooses either to fight or to acquiesce.
  An unready challenger's payoff is 5 if the incumbent acquiesces to its entry.
  Preparations cost a strong challenger 1 unit of payoff and a weak one 3 units, and fighting entails a loss of 2 units for each type.
  The incumbent prefers to fight (payoff 1) rather than acquiesce to (payoff 0) a weak challenger, and prefers to acquiesce to (payoff 2) rather than fight (payoff −1) a strong one.

Game Theory - A (Short) Introduction 310 9/12/2011
10.5 Signaling games
 Figure 333.1
Game Theory - A (Short) Introduction 311 9/12/2011
10.5 Signaling games
 The Figure 333.1 models this situation:
 The empty history is in the center of the diagram
 The first move is made by chance (which determines the
challenger type)
 Both types have two actions (so the challenger has four
strategies)
 The incumbent has two information sets, at each of which it
has two actions (A and F), and thus also four strategies
 Searching for pure weak sequential equilibria
  Note that a weak challenger prefers Unready to Ready, regardless of the incumbent's actions (even if the incumbent acquiesces to a ready challenger and fights an unready one). Thus, in any weak sequential equilibrium, a weak challenger chooses Unready.

Game Theory - A (Short) Introduction 312 9/12/2011
10.5 Signaling games
 Consider each possible action of a strong challenger
  Strong challenger chooses Ready
  Both of the incumbent's information sets are reached, so the consistency condition restricts its beliefs at each set;
  At the top information set, the incumbent must believe that the history was (Strong, Ready) with probability one (because a weak challenger never chooses Ready), and hence chooses Acquiesce;
  At the bottom information set, the incumbent must believe that the history was (Weak, Unready), and hence chooses Fight;
  Thus, if the challenger deviates and chooses Unready when he is strong, he is worse off (he gets 3 rather than 4);
  We conclude that the game has a weak sequential equilibrium in which the challenger chooses Ready when he is strong and Unready when he is weak, and the incumbent acquiesces when it sees Ready and fights when it sees Unready.

Game Theory - A (Short) Introduction 313 9/12/2011
10.5 Signaling games
  Strong challenger chooses Unready
  At his bottom information set, the incumbent believes, by consistency, that the history was (Strong, Unready) with probability p and (Weak, Unready) with probability 1 − p. Thus, his expected payoff is:
  to A (Acquiesce): p·2 + (1 − p)·0 = 2p
  to F (Fight): p·(−1) + (1 − p)·1 = 1 − 2p
A is therefore optimal if p ≥ 1/4 and F is optimal if p ≤ 1/4.

Game Theory - A (Short) Introduction 314 9/12/2011
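 A small Python sketch (not from the slides) of the incumbent's decision at the bottom information set, using the payoffs of Example 332.1:

def incumbent_after_unready(p):
    """Incumbent's best action at the bottom information set when both challenger
    types choose Unready; p is the prior probability that the challenger is strong."""
    acquiesce = p * 2 + (1 - p) * 0
    fight = p * (-1) + (1 - p) * 1
    return 'Acquiesce' if acquiesce >= fight else 'Fight'

for p in (0.1, 0.25, 0.5):
    print(p, incumbent_after_unready(p))  # Fight below p = 1/4, Acquiesce at or above it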
10.5 Signaling games
  Suppose that p ≥ 1/4 and the incumbent chooses A in response to Unready:
  A strong challenger who chooses Unready obtains the payoff of 5;
  If he switches to Ready, his payoff is less than 5 regardless of the incumbent's action;
  Thus, if p ≥ 1/4, the game has a weak sequential equilibrium in which both types of challenger choose Unready and the incumbent acquiesces to an unready challenger. The incumbent may hold any belief about the type of a ready challenger and, depending on this belief, may fight or acquiesce.
Game Theory - A (Short) Introduction 315 9/12/2011
10.5 Signaling games
  Now suppose that p ≤ 1/4 and the incumbent chooses F in response to Unready:
  A strong challenger who chooses Unready obtains the payoff of 3. If he switches to Ready, his payoff is 2 if the incumbent fights and 4 if it acquiesces. Thus, for an equilibrium, the incumbent must fight a ready challenger.
  If the incumbent believes that a ready challenger is weak with high enough probability (at least 3/4), fighting is indeed optimal.
  Is such a belief consistent? Yes: the consistency condition does not restrict the incumbent's belief upon observing Ready, because this action is not taken when the challenger follows his strategy of choosing Unready regardless of his type.
  Thus, if p ≤ 1/4, the game has a weak sequential equilibrium in which:
  Both types of challenger choose Unready
  The incumbent fights regardless of the challenger's action
  The incumbent assigns probability at least 3/4 to the challenger being weak if it observes that the challenger is ready for battle.
Game Theory - A (Short) Introduction 316 9/12/2011
10.5 Signaling games
 This example shows that two kinds of pure strategy equilibrium
may exist in signaling games:

 Separating equilibrium : each type of sender (of the signal)
chooses a different action so that, upon observing the sender’s
action, the receiver (of the signal) knows the sender’s type;

 Pooling equilibrium : all types of the sender choose the same
action, so that the sender’s action gives the receiver no clue to the
sender’s type.

 Note: if the sender has more than two types, mixtures of these
types of equilibrium may exist (the set of types may be divided
into groups, within each of which all types choose the same
action and between which the actions are different).
Game Theory - A (Short) Introduction 317 9/12/2011
10.8 Strategic information
transmission
 The situation
You research the market for a new product and submit a report to your boss, who decides which product to develop;
  Your preferences differ from those of your boss:
  You are interested in promoting the interests of your division;
  Your boss is interested in promoting the interests of the whole firm.
  If you report the results of your research without distortion, the product your boss will choose is not the best one for you.
  If you systematically distort your findings, your boss will be able to unravel your report and deduce your actual findings.
  Obfuscation therefore seems a more promising route.
Game Theory - A (Short) Introduction 318 9/12/2011
10.8 Strategic information
transmission
  The model
  A sender (you) observes the state t, a number between 0 and 1, that a receiver (the boss) cannot see;
  The state is uniformly distributed: Pr(t ≤ x) = x;
  The sender submits a report r (a number) to the receiver;
  The receiver observes the report and takes an action y (a number);
  The payoff functions are:
  Sender: −(y − (t + b))²
  Receiver: −(y − t)²
where b (the sender's bias) is a fixed number that reflects the divergence between the sender's and the receiver's preferences. Note that the receiver's optimal action is y = t and the sender's optimal action is y = t + b (see Figure 343.1).
Game Theory - A (Short) Introduction 319 9/12/2011
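 A hedged Python sketch (not from the slides) of the payoff functions just defined; the symbols t, y, b follow the notation above:

def sender_payoff(y, t, b):
    return -(y - (t + b)) ** 2            # maximized at y = t + b

def receiver_payoff(y, t):
    return -(y - t) ** 2                  # maximized at y = t

# Why truthful reporting fails when b > 0 (section 10.8.1 below): if the receiver
# plays y = r, reporting r = t yields -b**2, while reporting r = t + b yields 0.
t, b = 0.4, 0.1
print(round(sender_payoff(t, t, b), 3), sender_payoff(t + b, t, b))   # -0.01  0.0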
10.8 Strategic information
transmission
 Figure 343.1 (players’ payoff functions)
Game Theory - A (Short) Introduction 320 9/12/2011
10.8 Strategic information
transmission
  10.8.1 Perfect information transmission?
  Consider an equilibrium in which the sender accurately reports the state he observes: r = t for every t.
  Given this strategy, the consistency condition requires that the receiver believe (correctly) that the state is t when the sender reports t. The receiver hence optimally chooses the action y = t (the maximizer of −(y − t)²).
  Is the sender's strategy a best response to the receiver's strategy? Not if b > 0. Suppose the state is t. If the sender reports t, the receiver chooses y = t and the sender's payoff is −b². If the sender instead reports t + b, the receiver chooses y = t + b and the sender's payoff is 0.
  So, unless the sender's and the receiver's preferences are the same (b = 0), the game has no equilibrium in which the sender accurately reports the state.
Game Theory - A (Short) Introduction 321 9/12/2011
10.8 Strategic information
transmission
  10.8.2 No information transmission?
  Consider an equilibrium in which the sender's report is constant: r = k for every t.
  The consistency condition requires that if the receiver observes the report k, his belief must remain the same as it was initially (the state uniformly distributed between 0 and 1). The expected value of t is then 1/2, and his optimal action (the action that maximizes his expected payoff) is y = 1/2.
  The consistency condition does not constrain the receiver's belief about the state upon receiving a report different from k: such a report does not occur if the sender follows her strategy.
  Note also that if the receiver simply ignores the sender's report, his optimal action remains the same. Because the sender's report has no effect on the receiver's optimal action, any constant report is optimal for the sender and, in particular, r = k is optimal.
Game Theory - A (Short) Introduction 322 9/12/2011
10.8 Strategic information
transmission
  In summary, for every value of b, the game has a weak sequential equilibrium in which the sender's report conveys no information (a constant report), and the receiver ignores the report (he maintains his initial belief about the state) and takes the action that maximizes his expected payoff.
  If b is small, this equilibrium is not very attractive for either the sender or the receiver. For example, if b = 1/4, for any t with 0 ≤ t < 1/4, both the sender and the receiver are better off if the receiver's action is t + b.


Game Theory - A (Short) Introduction 323 9/12/2011
10.8 Strategic information
transmission
  10.8.3 Some information transmission
  Does the game have equilibria in which some information is transmitted?
  Suppose the sender makes one of two reports:
  r_1 if 0 ≤ t ≤ t_1
  r_2 if t_1 ≤ t ≤ 1
with r_1 ≠ r_2.
  Consider the receiver's optimal response to this strategy:
  If he sees the report r_1, the consistency condition requires that he now believe that t is uniformly distributed between 0 and t_1. His optimal action is then y = t_1/2.
  Similarly, if he sees the report r_2, the consistency condition requires that he now believe that t is uniformly distributed between t_1 and 1. His optimal action is then y = (1 + t_1)/2.


Game Theory - A (Short) Introduction 324 9/12/2011
10.8 Strategic information
transmission
  The consistency condition does not restrict the receiver's belief if he sees a report other than r_1 or r_2. Assume therefore that for any such report, the receiver's belief is one of the two beliefs he holds if he sees r_1 or r_2 (so his optimal action is either y = t_1/2 or y = (1 + t_1)/2).
  Now, for equilibrium, we need the sender's report r_1 to be optimal if 0 ≤ t ≤ t_1 and his report r_2 to be optimal if t_1 ≤ t ≤ 1, given the receiver's strategy.
  By changing his report, the sender can change the receiver's action from t_1/2 to (1 + t_1)/2 (or back). So, for the report r_1 to be optimal when 0 ≤ t ≤ t_1, the sender must like the action t_1/2 at least as much as (1 + t_1)/2 (and vice versa for the report r_2).
  In particular, in state t_1, the sender must be indifferent between the two actions t_1/2 and (1 + t_1)/2:

Game Theory - A (Short) Introduction 325 9/12/2011
10.8 Strategic information
transmission
  This indifference implies that t_1 + b (the sender's preferred action in state t_1) is midway between t_1/2 and (1 + t_1)/2 (the receiver's optimal actions). So (see Figure 346.1):

    t_1 + b = (1/2) [ t_1/2 + (1 + t_1)/2 ],   hence   t_1 = 1/2 − 2b
Game Theory - A (Short) Introduction 326 9/12/2011
Figure 346.1
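 A small Python sketch (not from the slides) of the two-report equilibrium: it computes the cutoff t_1 = 1/2 − 2b and the receiver's two actions, and illustrates the sender's indifference at t_1:

def two_report_equilibrium(b):
    """Cutoff and receiver actions of the two-report equilibrium, for bias b < 1/4."""
    if b >= 0.25:
        return None                       # no two-report equilibrium exists
    t1 = 0.5 - 2 * b                      # cutoff state
    return t1, t1 / 2, (1 + t1) / 2       # cutoff, action after r_1, action after r_2

b = 0.1
t1, a_low, a_high = two_report_equilibrium(b)
print(t1, a_low, a_high)                  # 0.3 0.15 0.65
# The sender's ideal action in state t1 is t1 + b = 0.4, midway between 0.15 and 0.65,
# so the sender is indeed indifferent between the two reports there.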
10.8 Strategic information
transmission
  We need t_1 > 0: this condition is satisfied only if b < 1/4. If b ≥ 1/4, the game has no equilibrium in which the sender makes two different reports. Put differently, if preferences diverge too much, there is no point in asking the sender to submit a report: the receiver should simply take the best action for himself given his prior belief.
  t_1 = 1/2 − 2b is not only a necessary condition for equilibrium but also a sufficient one. Indeed, in that case:
  In every state t with 0 ≤ t < t_1: the sender optimally reports r_1
  In every state t with t_1 ≤ t ≤ 1: the sender optimally reports r_2
This follows from the shape of the payoff function, which is symmetric around its maximum (see Figure 346.2)
Game Theory - A (Short) Introduction 327 9/12/2011
10.8 Strategic information
transmission
  Figure 346.1
Game Theory - A (Short) Introduction 328 9/12/2011
10.8 Strategic information
transmission
 This equilibrium is better for both the receiver and the sender
than the one in which no information is transmitted. Consider
the receiver:
  If no information is transmitted, he takes the action 1/2 in every state, and his payoff in state t is −(1/2 − t)²
  In the two-report equilibrium, his payoff is:
  −(t_1/2 − t)² for 0 ≤ t < t_1
  −((t_1 + 1)/2 − t)² for t_1 ≤ t ≤ 1
Game Theory - A (Short) Introduction 329 9/12/2011
10.8 Strategic information
transmission
  10.8.4 How much information transmission?
  For b < 1/4, does the game have equilibria in which more information is transmitted than in the two-report equilibrium?
  Consider an equilibrium in which the sender makes one of K reports, depending on the state. Specifically, the sender's report is:
  r_1 if 0 ≤ t < t_1
  r_2 if t_1 ≤ t < t_2
  …
  r_K if t_{K−1} ≤ t < 1
where r_j ≠ r_k for j ≠ k.
  The equilibrium analysis follows the same lines as for the two-report equilibrium.
Game Theory - A (Short) Introduction 330 9/12/2011
10.8 Strategic information
transmission
  Specifically:
  If the receiver observes the report r_k, the consistency condition requires that he believe the state to be uniformly distributed between t_{k−1} and t_k (with the conventions t_0 = 0 and t_K = 1). Therefore, he optimally takes the action (1/2)(t_{k−1} + t_k).
  If he observes a report different from every r_k, the consistency condition does not restrict his belief. We assume that his belief in such a case is one of the beliefs he holds upon receiving one of the reports r_k.
  Now, for equilibrium, we need the sender's report r_k to be optimal when the state is t with t_{k−1} ≤ t < t_k, for k = 1, …, K.
  A sufficient condition for optimality is that, in each cutoff state t_k, k = 1, …, K − 1, the sender be indifferent between the reports r_k and r_{k+1}, and therefore between the receiver's actions (1/2)(t_{k−1} + t_k) and (1/2)(t_k + t_{k+1}).
Game Theory - A (Short) Introduction 331 9/12/2011
10.8 Strategic information
transmission
  This indifference implies that t_k + b is equal to the average of (1/2)(t_{k−1} + t_k) and (1/2)(t_k + t_{k+1}):

    t_k + b = (1/2) [ (1/2)(t_{k−1} + t_k) + (1/2)(t_k + t_{k+1}) ]

or

    t_{k+1} − t_k = t_k − t_{k−1} + 4b

That is, the interval of states for which the sender's report is r_{k+1} is longer by 4b than the interval for which the report is r_k.
  The length of the first interval, from 0 to t_1, is t_1. The sum of the lengths of all the intervals must equal one:

    t_1 + (t_1 + 4b) + ⋯ + (t_1 + (K − 1)·4b) = 1

or

    K·t_1 + 4b (1 + 2 + ⋯ + (K − 1)) = 1
Game Theory - A (Short) Introduction 332 9/12/2011
10.8 Strategic information
transmission
The sum of the first K − 1 positive integers is (1/2)(K − 1)K, so:

    K·t_1 + 2b·K(K − 1) = 1

If b is small enough that 2b·K(K − 1) < 1, there is a positive value of t_1 that satisfies the equation:
  If 1/24 ≤ b < 1/12, the inequality is satisfied for K ≤ 3
  So, in the equilibrium in which the most information is transmitted, the sender chooses one of three reports.
  From K·t_1 + 2b·K(K − 1) = 1 with K = 3, we have t_1 = 1/3 − 4b and t_2 = 2/3 − 4b.
  Figure 348.2 shows the equilibrium action taken by the receiver as a function of the state t.
  The values of the reports r_k do not matter as long as no two are the same (we think of them as words in a language).

Game Theory - A (Short) Introduction 333 9/12/2011
10.8 Strategic information
transmission
 Figure 348.2
Game Theory - A (Short) Introduction 334 9/12/2011
10.8 Strategic information
transmission
  In summary:
  If there is a positive value of t_1 that satisfies K·t_1 + 2b·K(K − 1) = 1, then the game has a weak sequential equilibrium in which the sender submits one of K different reports, depending on the state.
  For any given value of b, the largest value of K for which such an equilibrium exists is the largest value of K for which 2b·K(K − 1) < 1.
  If 2b·K(K − 1) = 1, the quadratic formula gives K = (1/2)(1 + √(1 + 2/b)). Thus the larger the value of b, the smaller the largest value of K possible in an equilibrium.

Game Theory - A (Short) Introduction 335 9/12/2011
The greater the difference between the sender's and the receiver's preferences, the coarser the information transmitted in the equilibrium with the largest number of steps (the most informative equilibrium).
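 A hedged Python sketch (not from the slides) that, for a given bias b, computes the largest number of reports K and the corresponding cutoffs t_1, …, t_{K−1}, 1 from the formulas above (floating-point edge cases ignored):

import math

def max_reports(b):
    # Largest K with 2*b*K*(K-1) < 1, i.e. K below (1 + sqrt(1 + 2/b)) / 2.
    return math.ceil((1 + math.sqrt(1 + 2 / b)) / 2) - 1

def cutoffs(b):
    K = max_reports(b)
    t1 = (1 - 2 * b * K * (K - 1)) / K       # from K*t1 + 2*b*K*(K-1) = 1
    ts, t = [], 0.0
    for k in range(K):                       # interval k+1 has length t1 + 4*b*k
        t += t1 + 4 * b * k
        ts.append(round(t, 6))
    return ts                                # the last cutoff is 1 (the end of the state space)

b = 0.05
print(max_reports(b), cutoffs(b))            # 3 [0.133333, 0.466667, 1.0]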

Pedagogic approach
 More seriously
 

 


I assume “nothing known” Any question is welcome (if possible, in English) The goal is to “enter into the field”, not to cover as much as possible stuff (I don’t care about going to the end of the announced program but I care a lot on an in-depth understanding) The mathematical level of the lecture should not be too challenging (basic equations resolution, some mathematical optimization) We will “play” many games during the lecture You MUST work each week to prepare the lecture (read the slides in advance, prepare questions, review concept definitions, …) I rest on you to correct my numerous mistakes This is a theoretical lecture !

9/12/2011

Game Theory - A (Short) Introduction

2

Outline
 1 Introduction
  

1.1 What is game theory? 1.2 The theory of rational choice 1.3 Coming attractions: interacting decision-makers
2.1 Strategic games 2.2 Example: the Prisoner’s Dilemma 2.3 Example: Bach or Stravinsky? 2.4 Example: Matching Pennies 2.5 Example: the Stag Hunt 2.6 Nash equilibrium 2.7 Examples of Nash equilibrium 2.8 Best response functions
Game Theory - A (Short) Introduction 3

 2 Nash Equilibrium Theory (perfect information)


     

9/12/2011

Outline
 

2.9 Dominated actions 2.10 Equilibrium in a single population: symmetric games and symmatric equilibria

 3 Nash Equilibrium: Illustrations

3.5 Auctions 4.1 Introduction 4.2 Strategic games in which players may randomize 4.3 Mixed strategy Nash equilibrium 4.4 Dominated actions 4.5 Pure equilibria when randomization is allowed 4.7 Equilibrium in a single population

 4 Mixed Strategy Equilibrium (probabilistic behavior)

   

9/12/2011

Game Theory - A (Short) Introduction

4

Outline
  

4.9 The formation of player’s beliefs 4.10 Extension: finding all mixed strategy Nash equilibria 4.11 Extension: games in which each player has a continuum of actions 4.12 Appendix: Representing preferences by expected payoffs

 9 Bayesian Games (imperfect information)

  

9.1 Motivational examples 9.2 General definitions 9.3 Two examples concerning information 9.6 Illustration: auctions

9/12/2011

Game Theory - A (Short) Introduction

5

Outline
 5 Extensive Games (Perfect Information): Theory
    

5.1 Extensive games with perfect information 5.2 Strategies and outcomes 5.3 Nash equilibrium 5.4 Subgame perfect equilibrium 5.5 Finding subgame perfect equilibria of finite horizon games: backward induction
10.1 Extensive games with imperfect information 10.2 Strategies 10.3 Nash equilibrium 10.4 Beliefs and sequential equilibrium 10.5 Signaling games 10.8 Illustration: strategic information transmission
Game Theory - A (Short) Introduction 6

 10 Extensive Games (Imperfect Information)
  


 

9/12/2011

1 Introduction .

 The main fields of applications are:     Economic analysis Social analysis Politic Biology  Typical applications:   Competing firms Bidders in auctions Realistic assumptions Simplicity Game Theory .1.A (Short) Introduction 8  Main tool: model development.1 What is game theory?  Game theory aims to help understand situations in which decision-makers interact. This is an arbitrage between:   9/12/2011 .

1 What is game theory?  An outline of the history of game theory     First major development in the 1920s  Emile Borel  John von Neumann Decisive publication: “Theory of Games and Economic Behavior”.1. extensive games 9/12/2011 Game Theory . von Neumann and Morgenstern (1944) Early 1950s: John Nash  Nash equilibrium  Game-theoric study of bargaining 1994 Nobel Prize in Economic Sciences  Harsanyi (1920-2000)  Bayesian games (Harsanyi doctrine)  Nash (1928-)  Nash equilibrium  Selten (1930-)  Bounded rationality.A (Short) Introduction 9 .

1.1 What is game theory?  Modeling process    Step 1: selecting aspects of a given situation (that appear to be relevant) and incorporating them into a model. 9/12/2011 Game Theory . This step is mostly an “art” Step 2: model analysis (using logic and mathematic) Step 3: studying model’s implications to determine whether our ideas make sense. This may point towards a revision of the model’s assumptions in order to better capture “stylized facts”.A (Short) Introduction 10 .

and takes it as given (the subset is not influenced by the decision-maker preferences) Game Theory .  The theory is based on two components: Actions and Preferences  1. the decision-maker knows the subset of available choices. under some circumstances.2.1 Actions   Set A consisting of all actions that. among all the actions available to her No qualitative restriction is place on preferences Rationality means consistency of her decisions when faced with different sets of available actions.2 The theory of rational choice  Rational choice:   The decision-maker chooses the best action according to her preferences.1.A (Short) Introduction 11 9/12/2011 . are available to the decision-maker In any given situation.

then a > c). More precisely: u(a) > u(b) if and only if the decision-maker prefers a to b (Economists often speak about utility function) 9/12/2011 Game Theory .1.2 Preferences and payoff functions  We assume that the decision-maker. when presented with any pair of actions. knows which of the pair she prefers  We assume further that these preferences are consistent (if a > b and b > c.2 The theory of rational choice  1.A (Short) Introduction 12 .2.  Preferences representation: preferences can be represented by a payoff function: the payoff function associates a number with each action in such a way that actions with higher numbers are preferred.

9/12/2011 Game Theory . Precisely. she is indifferent between a situation in which her income is 1 and person 2’s is 0.2 The theory of rational choice  Exercise 5. (2.4). and one in which her income is 0 and person 2’s is 2. How do her preferences order the outcomes (1. the value she attaches to each unit of her own income is the same as the value she attaches to any two units of person 2’s income.A (Short) Introduction 13 .0).1) and (3.1.3  Person 1 cares about both her income and person 2’ income. For example. where the first component in each case is her income and the second component is person 2’s income? Give a payoff function consistent with these preferences.

u(b)=1 and u(c)=100. then any increasing function of u also represents these preferences. the payoff function also conveys only ordinal preference. Note that. Eg. it doesn’t mean that the decisionmaker likes c a lot more than b! A payoff function contains no such information. as a consequence.  9/12/2011 Game Theory . More succinctly: if u represents a decision-marker’s preferences.2 The theory of rational choice  Note that.: if u(a)=0. as decision-maker’s preferences convey only ordinal information.1. a decision-maker’s preferences can be represented by many different payoff functions.A (Short) Introduction 14 . If u represents a decision-maker’s preferences and v is another payoff function for which v(a) > v(b) if and only if u(a) > u(b) then v also represents the decision-maker’s preferences.

A (Short) Introduction 15 . Are they also represented by the function v for which v(a)=1.2 The theory of rational choice  Exercice 6.1. u(b)=1 and u(c)=4. and v(c)=2? How about the function w for which w(a)=w(b)=0 and w(c)=8? 9/12/2011 Game Theory .b.c} are represented by the payoff function u for which u(a)=0.v(b)=0.1  A decision-maker’s preferences over the set A={a.

b}.A (Short) Introduction 16 . (Independence of irrelevant alternatives) 9/12/2011 Game Theory .b.always choosing a when facing {a.c}. Note that not every collection of choices for different sets of available actions is consistent with the theory.b.c}.2.when facing {a. : we observe that a decision chooses a whenever she faces the set {a. she must choose a or c.1. This is inconsistent: . as every other available action.2 The theory of rational choice  1. Eg.b} means that the decision-maker prefers a to b .3 The theory of rational choice The theory of rational choice is the action chosen by a decisionmaker is at least as good. but sometimes chooses b when facing the {a. according her preferences.

3 Coming attractions   Up to now. a decision-maker often does not control all the variables that affect her. In the real world. Game theory studies situations in which some of the variables that affect the decision-marker are the actions of other decisionmarkers. the decision-maker cares only about her own choice.2 The theory of rational choice  1.1.A (Short) Introduction 17 . 9/12/2011 Game Theory .

2 Nash Equilibrium: Theory .

preferences over the set of action profiles 9/12/2011 Game Theory .A (Short) Introduction 19 .2. a set of actions for each player.1 (Strategic game with ordinal preferences) A strategic game with ordinal preferences consists of    a set of players for each player.1 Strategic games  Terminology:     we refer to decision-makers as players each player has a set of possible actions the action profile is the list of all players’ actions each player has preferences about the action profiles  Definition 13.

actions = fighting for a prey. preferences = winning or loosing It is frequently convenient to specify the payers’ preferences by giving payoff functions that represent them.1 Strategic games  Note that:    This allows to model a very wide range of situations:  players = firms.A (Short) Introduction 20 . preferences = profits  players = animals. not by the payoffs that represent these preferences Time is absent from the model : each player chooses her action once and for all and the players choose their actions simultaneously (no player is informed of the action chosen by any other player) 9/12/2011 Game Theory . actions = prices. Keep however in mind that a strategic game with ordinal preferences is defined by the players’ preferences.2.

2 Example: the Prisoner’s Dilemma  Example 14. each will be convicted of the minor offense and spend one year in prison. but not enough evidence to convict either of them of the major crime unless one of them acts as an informer against the other (finks).1 Two suspects in a major crime are held in separate cells. who will spend four years in prison. If one and only one of the finks. each will spend three years in prison. If they both stay quiet.A (Short) Introduction 21 . Model this situation as a strategic game. she will be freed and used as a witness against the other.2. 9/12/2011 Game Theory . There is enough evidence to convict each of them of a minor offense. If the both fink.

Quiet)  one year in prison  (Fink.Quiet)  free  (Quiet.Fink) Eg.Fink)  four years in prison (and vice-versa for player 2) We can adopt a payoff function for each player: u1(Fink.Fink)>u1(Quiet.Q  2 1Q.: 9/12/2011 3 1F . Fink}  Preferences: Suspect 1’s ordering of the action profiles (from best to worse):  (Fink.A (Short) Introduction 22 .Fink)  three years in prison  (Quiet.Q  11F . F Game Theory .2 Example: the Prisoner’s Dilemma  Solution  Players: the two suspects  Actions: Each player’s set of actions is {Quiet. F  0 1Q.Quiet)>u1(Quiet.Quiet)>u1(Fink.2.

2.3) Quiet Suspect 1 Fink (3.A (Short) Introduction 23 .2) Fink (0.0) (1. the situation is the following : (numbers are payoffs of payers) Suspect 2 Quiet (2.1) The prisoner’s dilemma models a situation in which there are gains from cooperation (each player prefers that both players choose Quiet than they both choose Fink) but each player has an incentive to free ride whatever the other play does. 9/12/2011 Game Theory .2 Example: the Prisoner’s Dilemma Graphically.

Each of you can either work hard or goof off.2 Example: the Prisoner’s Dilemma  2. Model this situation as a strategic game.2. but the increment in its value to you is not worth the extra effort). You prefer the outcome of your both working hard to the outcome of your both goofing off (in which case nothing gets accomplished).1 Working on a joint project You are working with a friend on a joint project.A (Short) Introduction 24 . 9/12/2011 Game Theory . and the worst outcome for you is that you work hard and your friend goofs off (you hate to be exploited). then you prefer to goof off (the outcome of the project would be better if you worked hard too.2. If your friend works hard.

2 Duopoly In a simple model of a duopoly.2.2 Example: the Prisoner’s Dilemma  2. If one firm chooses High and the other chooses Low. the each earns a profit of $600. then each earns a profit of $1000.A (Short) Introduction 25 . whereas the firm choosing Low earns a profit of $1200 (its unit profit is low. but its volume is high). Each firm cares only about its profit. for which each firm charges either a low price or a high price. 9/12/2011 Game Theory . If both firms choose High. Model this situation as a strategic game. then the firm choosing High obtains no customers and makes a loss of $200. two firms produce the same good. Each firm wants to achieve the highest possible profit.2. If both firms choose Low.

5 0.3 5.2.1 Y 1.5 1.1 Determine whether each of the following games differs from the Prisoner’s Dilemma only in the names of the players’ actions X X Y 3.2 Example: the Prisoner’s Dilemma  Exercise 17.0 X Y X 2. 9/12/2011 Game Theory .A (Short) Introduction 26 .-1 An application to M&As: the Grossman & Hart free riding argument.1 3.-2 Y 0.

Model this situation as a strategic game. One person prefers Bach and the other prefers Stravinsky.2 Two people wish to go out together.3 Example: Back or Stravinsky? (Battle of the Sexes or BoS)  Situation:   Players agree that it is better to cooperate Players disagree about the best outcome Example 18. Two concerts are available: one of music by Bach.2. . If they go to different concerts. 9/12/2011 An application to merging banks: two banks are merging. Both agree that they Theory . and one of music by Stravisky.A better off using the same information 27 will be (Short) Introduction Game system technology but they disagree on which one to choose. each of them is equally unhappy listening to the music of either composer.

A (Short) Introduction 28 .Google versus Microsoft/Yahoo 9/12/2011 Game Theory .

0) Bach Player 1 Stravinsky (0.A (Short) Introduction 29 .1) Stravinsky (0.0) (1.2.2) 9/12/2011 Game Theory .3 Example: Back or Stravinsky? (Battle of the Sexes or BoS) Solution Player 2 Bach (2.

person 2 pays person 1 a dollar.1 Two people choose. whether to show the head or the tail of a coin. Model this situation as a strategic game. 9/12/2011 An application to choices of appearances for new products by an established produced and a new entrant in a market of fixed size: the established produced prefers the newcomer’s product to look different from its own (to avoid confusion) Game Theory .2. Each person cares only about the amount of money she receives (and is a profit maximizer!). If they show the same side. person 1 pays person 2 a dollar. I they show different sides. simultaneously. .4 Example: Matching Pennies  Situation:  A purely conflictual situation Example 19.A (Short) Introduction 30 while the newcomer prefers that the products look alike.

IPhone iOS versus Android 9/12/2011 Game Theory .A (Short) Introduction 31 .

1) Player 1 Tail (-1.1) (1.A (Short) Introduction 32 .2.4 Example: Matching Pennies Solution Player 2 Head Tail Head (1.-1) (-1.-1) 9/12/2011 Game Theory .

or she may catch a hare. and the hare belongs to the defecting hunter alone.5 Example: the stag Hunt  Situation:  Cooperation is better for both but not credible. If any hunter devotes her energy to catching a hare. Example 20.2.2 Each of a group of hunters has two options: she may remain attentive to the pursuit of a stag. they catch it and share it equally. 9/12/2011 Game Theory . the stag escapes. Each hunter prefers a share of the stag to a hare. Model this situation as a strategic game. If all hunters pursue the stag.A (Short) Introduction 33 .

5 Example: the stag Hunt  Solution Player 2 Stag (2.2) Hare (0.0) (1.2.A (Short) Introduction 34 .1) Stag Player 1 Hare (1.1) 9/12/2011 Game Theory .

they can not condition their behavior on being faced to a specific opponent. Assumption: We assume in strategic games that players’ beliefs are derived from their past experience playing the game:     they know how their opponent will behave.2. note however that they do not know which specific opponent they are faced to and so.6 Nash equilibrium  Question: What actions will be chosen by players in a strategic game? (assuming that each player chooses the best available action) Answer: To make a choice. not any specific set of opponents. Beliefs are about “typical” opponents.A (Short) Introduction 35 9/12/2011 . Game Theory . each player must form a belief about other players’ action.

Players’ beliefs about each other’s actions are (assumed to be) correct. Two key ingredients: rational choices and correct beliefs 9/12/2011 Game Theory . This implies. given that every other player j adheres to a*j. in particular.  Note:   A Nash equilibrium corresponds to a steady state: if.A (Short) Introduction 36 . that two players’ beliefs about a third player’s action are the same (expectations are coordinated – Harsanyi Doctrine).2. then no player has a reason to choose any action different from her component of a*. whenever the game is played.6 Nash equilibrium  In this setup. a Nash equilibrium is action profile a* with the property that no player i can do better by choosing an action different from a*i. the action profile is the same Nash equilibrium a*.

2.6 Nash equilibrium
 Notations and formal definition:
 Let ai be the action of player i
 Let a be an action profile: a = (a1, a2, …, an)
 Let a'i be any action of player i (different from ai)
 Let (a'i, a-i) be the action profile in which every player j except i chooses her action aj as specified by a, whereas player i chooses a'i (the subscript -i stands for "except i"):
 (a'i, a-i) is the action profile in which all the players other than i adhere to a while i "deviates" to a'i
 Note that if a'i = ai, then (a'i, a-i) = (ai, a-i) = a

2.6 Nash equilibrium
Definition 23.1 (Nash equilibrium of strategic game with ordinal preferences) The action profile a* in a strategic game with ordinal preferences is a Nash equilibrium if, for every player i and every action ai of player i, a* is at least as good according to player i's preferences as the action profile (ai, a*-i) in which player i chooses ai while every other player j chooses a*j.
Equivalently: ui(a*) ≥ ui(ai, a*-i) for every action ai of player i

2.6 Nash equilibrium
 Note:
 This definition implies neither that a strategic game necessarily has a Nash equilibrium, nor that it has at most one.
 This definition is designed to model a steady state among experienced players. An alternative approach (called "rationalizability") is:
 to assume that players know each other's preferences
 to consider what each player can deduce about the other players' actions from their rationality and their knowledge of each other's rationality
 Nash equilibrium has been studied experimentally. The keys to conceiving suitable experiments are:
 to ensure that players are experienced in playing the game
 to ensure that players do not repeatedly face the same opponents (as each game must be played in isolation)
 The key to correctly interpreting results is to remember that Nash equilibrium is about equilibrium: the outcome must have converged (and the theory says nothing about the conditions necessary for convergence to appear).

2.7 Examples of Nash equilibrium
 2.7.1 Prisoner's Dilemma
                     Suspect 2
                  Quiet      Fink
 Suspect 1 Quiet  (2,2)     (0,3)
           Fink   (3,0)     (1,1)

2.7 Examples of Nash equilibrium
 Detailed explanation
 (Fink, Fink) is a Nash equilibrium because:
 given that player 2 chooses Fink, player 1 is better off choosing Fink than Quiet
 given that player 1 chooses Fink, player 2 is better off choosing Fink than Quiet
 No other action profile is a Nash equilibrium. E.g., (Quiet, Quiet) is not a Nash equilibrium because:
 if player 2 chooses Quiet, player 1 is better off choosing Fink
 (moreover) if player 1 chooses Quiet, player 2 is also better off choosing Fink
 The incentive to free ride eliminates the possibility that the mutually desirable outcome (Quiet, Quiet) occurs.
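 This reasoning can also be checked mechanically. Below is a minimal sketch (not part of the original slides; the function and variable names are illustrative) that encodes the Prisoner's Dilemma payoffs from the table above and tests the condition of Definition 23.1 for every action profile.

```python
# Minimal sketch: checking the Nash condition of Definition 23.1 by enumeration.
# Payoffs are those of the Prisoner's Dilemma table above: (row payoff, column payoff).
payoffs = {
    ("Quiet", "Quiet"): (2, 2),
    ("Quiet", "Fink"):  (0, 3),
    ("Fink",  "Quiet"): (3, 0),
    ("Fink",  "Fink"):  (1, 1),
}
actions = ["Quiet", "Fink"]

def is_nash(a1, a2):
    """True if no player gains by deviating unilaterally from (a1, a2)."""
    u1, u2 = payoffs[(a1, a2)]
    no_dev_1 = all(payoffs[(d, a2)][0] <= u1 for d in actions)   # player 1 cannot improve
    no_dev_2 = all(payoffs[(a1, d)][1] <= u2 for d in actions)   # player 2 cannot improve
    return no_dev_1 and no_dev_2

for a1 in actions:
    for a2 in actions:
        print((a1, a2), "Nash" if is_nash(a1, a2) else "not Nash")
# Only (Fink, Fink) is reported as a Nash equilibrium, matching the discussion above.
```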

2.7 Examples of Nash equilibrium
 Note that:
 in the present case, the Nash equilibrium action is the best action for each player:
 if the other player chooses her equilibrium action (Fink)
 but also if the other player chooses her other action (Quiet)
 In this sense, this equilibrium is highly robust.
 But this is not a requirement of Nash equilibrium. Only the first condition must be met.

2.7 Examples of Nash equilibrium
Exercise 27.1 Each of two players has two possible actions, Quiet and Fink; each action pair results in the players' receiving amounts of money equal to the numbers corresponding to that action pair in the following figure:
                    Player 2
                 Quiet      Fink
 Player 1 Quiet  (2,2)     (0,3)
          Fink   (3,0)     (1,1)

2.7 Examples of Nash equilibrium
Players are not "selfish": the preferences of each player i are represented by the payoff function mi(a) + α mj(a), where mi(a) is the amount of money received by player i, j is the other player, and α is a given non-negative number. Player 1's payoff to the action pair (Quiet, Quiet) is, for example, 2 + 2α.
 1. Formulate the strategic game that models this situation in the case α = 1. Is this game the Prisoner's Dilemma?
 2. Find the range of values of α for which the resulting game is the Prisoner's Dilemma. For values of α for which the game is not the Prisoner's Dilemma, find the Nash equilibria.

2.7 Examples of Nash equilibrium
 2.7.2 BoS
                         Player 2
                     Bach       Stravinsky
 Player 1 Bach       (2,1)        (0,0)
          Stravinsky (0,0)        (1,2)
 Nash equilibria are (B,B) and (S,S). Why? Note that this means that BoS has two steady states!

2.7 Examples of Nash equilibrium
 2.7.3 Matching Pennies
                    Player 2
                 Head       Tail
 Player 1 Head  (1,-1)     (-1,1)
          Tail  (-1,1)     (1,-1)
 There is no Nash equilibrium. Why?

2.7 Examples of Nash equilibrium
 2.7.4 The Stag Hunt
                   Player 2
                Stag      Hare
 Player 1 Stag  (2,2)     (0,1)
          Hare  (1,0)     (1,1)
 Nash equilibria are (S,S) and (H,H). Why? Note that, despite (S,S) being better for both players than (H,H), this has no bearing on the equilibrium status of (H,H).

2.7 Examples of Nash equilibrium
Exercise 30.1 (extension to n players) Consider the variant of the n-hunter Stag Hunt in which only m hunters, with 2 ≤ m ≤ n, need to pursue the stag in order to catch it (continue to assume that there is a single stag). Assume that a captured stag is shared only by the hunters who catch it. Under each of the following assumptions on the hunters' preferences, find the Nash equilibria of the strategic game that models the situation:
 a. As before, each hunter prefers the fraction 1/m of the stag to a hare.
 b. Each hunter prefers the fraction 1/k of the stag to a hare, but prefers a hare to any smaller fraction of the stag, where k is an integer with m ≤ k ≤ n.

2.7 Examples of Nash equilibrium
 Note
 In games with many Nash equilibria, the theory isolates more than one steady state but says nothing about which one is more likely to appear. In some games, however, some of these equilibria seem more likely to attract the players' attention than others. These equilibria are called focal.
 Example: (B,B) seems here more "likely" than (S,S).
                         Player 2
                     Bach       Stravinsky
 Player 1 Bach       (2,2)        (0,0)
          Stravinsky (0,0)        (1,1)

2.7 Examples of Nash equilibrium
 Strict and nonstrict equilibria
 Definition 23.1 requires only that the outcome of a deviation (by a player) be no better for the deviant than the equilibrium outcome.
 An equilibrium is strict if each player's equilibrium action is better than all her other actions, given the other players' actions:
ui(a*) > ui(ai, a*-i) for every action ai ≠ a*i of player i
(Note the strict inequality, contrasting with Definition 23.1.)

2.8 Best Response Functions
 2.8.1 Definition
 In more complicated games, analyzing each action profile one by one quickly becomes intractable.
 Let us denote the set of player i's best actions, when the list of the other players' actions is a-i, by Bi(a-i) or, more precisely:
Bi(a-i) = {ai in Ai : ui(ai, a-i) ≥ ui(a'i, a-i) for all a'i in Ai}
 Any action in Bi(a-i) is at least as good for player i as every other action of player i when the other players' actions are given by a-i.

2.8 Best Response Functions
 2.8.2 Using best response functions to define Nash equilibrium
 Proposition 36.1: The action profile a* is a Nash equilibrium of a strategic game with ordinal preferences if and only if every player's action is a best response to the other players' actions:
a*i is in Bi(a*-i) for every player i
 If each player i has a single best response to each list a-i (Bi(a-i) = {bi(a-i)}), then this is equivalent to:
a*i = bi(a*-i) for every player i
 The Nash equilibrium is then characterized by a set of n equations in the n unknowns a*i:
a*1 = b1(a*2, …, a*n)
…
a*n = bn(a*1, …, a*n-1)

2.8 Best Response Functions
 2.8.3 Using the best response functions to find Nash equilibria
 Procedure:
 1. find the best response function of each player
 2. find the action profiles that satisfy Proposition 36.1
 Exercise 37.1
 Find the Nash equilibria of the game in Figure 38.1
 Represent the solution graphically
[Figure 38.1: a 3×3 payoff table with row actions T, M, B for player 1 and column actions L, C, R for player 2]

2.8 Best Response Functions
 Solution
[Figure 38.1 with each player's best-response payoffs marked by a star; the Nash equilibria are the action profiles in which both payoffs are starred]
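 The two-step procedure above is easy to automate for any finite two-player game. The sketch below is illustrative only: it uses a small made-up 3×3 game (not the game of Figure 38.1, whose payoffs are not reproduced here), stars each player's best responses, and keeps the cells where both payoffs are starred.

```python
import numpy as np

# Hypothetical 3x3 game used only to illustrate the procedure
# (rows T, M, B for player 1; columns L, C, R for player 2).
U1 = np.array([[3, 1, 0],
               [2, 0, 0],
               [1, 0, 3]])
U2 = np.array([[3, 1, 0],
               [0, 1, 0],
               [0, 2, 4]])

# Step 1: best response sets (allowing ties).
br1 = U1 == U1.max(axis=0, keepdims=True)   # for each column, player 1's best rows
br2 = U2 == U2.max(axis=1, keepdims=True)   # for each row, player 2's best columns

# Step 2: a cell is a Nash equilibrium iff both payoffs are "starred".
rows, cols = ["T", "M", "B"], ["L", "C", "R"]
equilibria = [(rows[i], cols[j]) for i in range(3) for j in range(3)
              if br1[i, j] and br2[i, j]]
print(equilibria)   # [('T', 'L'), ('B', 'R')] for this hypothetical game
```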

2.8 Best Response Functions
 Example 39.1
 Two individuals are involved in a synergistic relationship. If both individuals devote more effort to the relationship, they are both better off. For any given effort of individual j, the return to individual i's effort first increases, then decreases. Specifically, an effort level is a nonnegative number, and individual i's preferences (for i = 1, 2) are represented by the payoff function ai(c + aj - ai), where ai is i's effort level, aj is the other individual's effort level, and c > 0 is a constant.
 Questions:
 Model the situation as a strategic game
 Find the players' best response functions
 Find the Nash equilibrium
 Represent the situation graphically

2.8 Best Response Functions
 Strategic game:
 Players: the two individuals
 Actions: each player's set of actions is the set of effort levels (nonnegative numbers)
 Preferences: player i's preferences are represented by the payoff function ai(c + aj - ai), for i = 1, 2
 Note that each player has infinitely many actions, so the game cannot be represented by a matrix of payoffs, as previously.

2.8 Best Response Functions
 Best response function:
 Intuitive construction
 Given aj, individual i's payoff is a quadratic function of ai that is zero when ai = 0 and when ai = c + aj. As quadratic functions are symmetric, this implies that player i's best response to aj is:
bi(aj) = ½ (c + aj)
[Figure: player i's payoff as a function of ai, equal to zero at ai = 0 and at ai = c + aj]

2.8 Best Response Functions
 Mathematical construction
ai(c + aj - ai) = c ai + aj ai - ai²
d/dai [c ai + aj ai - ai²] = c + aj - 2ai
FOC: c + aj - 2ai = 0
a*i = ½ (c + aj)

2.8 Best Response Functions
 Nash equilibrium:
 To find the Nash equilibrium, following Proposition 36.1, we have to solve the following system of equations:
a1 = ½ (c + a2)
a2 = ½ (c + a1)
 By substitution, we get:
a1 = ½ (c + ½ (c + a1))
a1 = ¾ c + ¼ a1
So: a1 = c
 The unique Nash equilibrium is (c, c).
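 The same system can also be solved symbolically. Here is a small sketch (assuming sympy is available; the symbol names are just illustrative) which reproduces the unique equilibrium (c, c).

```python
import sympy as sp

a1, a2, c = sp.symbols("a1 a2 c", positive=True)

# Best response functions derived above from the first-order conditions.
b1 = (c + a2) / 2
b2 = (c + a1) / 2

# A Nash equilibrium solves a1 = b1(a2) and a2 = b2(a1) simultaneously.
solution = sp.solve([sp.Eq(a1, b1), sp.Eq(a2, b2)], [a1, a2], dict=True)
print(solution)   # [{a1: c, a2: c}]
```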

2.8 Best Response Functions
 Graphical representation
[Figure: the best response functions b1(a2) and b2(a1) in the (a1, a2) plane; each has intercept ½c and slope ½, and they intersect at the Nash equilibrium (c, c)]

2.8 Best Response Functions
 Note that:
 The best response of a player to the actions of the other players need not be unique.
 If a player has many best responses to some of the other players' actions, then her best response function is "thick" (a surface) at some points.
 If the best response functions are not linear, the Nash equilibrium need not be unique.
 A Nash equilibrium need not exist: the best response functions may not cross.
 Best response functions can be discontinuous, generating another set of difficulties.

2.8 Best Response Functions
 Exercise 42.1
 Find the Nash equilibria of the two-player strategic game in which each player's set of actions is the set of nonnegative numbers and the players' payoff functions are u1(a1,a2) = a1(a2 - a1) and u2(a1,a2) = a2(1 - a1 - a2).

2.9 Dominated actions
 2.9.1 Strict domination
 In any game, a player's action "strictly dominates" another action if it is superior, no matter what the other players do.
Definition 45.1 (Strict domination): In a strategic game with ordinal preferences, player i's action a''i strictly dominates her action a'i if:
ui(a''i, a-i) > ui(a'i, a-i) for every list a-i of the other players' actions
Action a'i is said to be strictly dominated.
 Example: in the Prisoner's Dilemma, the action Fink strictly dominates the action Quiet:
                 Quiet      Fink
 Quiet          (2,2)      (0,3)
 Fink           (3,0)      (1,1)

2.9 Dominated actions
 Note that, as a strictly dominated action is not a best response to any actions of the other players, a strictly dominated action is not used in any Nash equilibrium. When looking for Nash equilibria of a game, we can therefore eliminate from consideration all strictly dominated actions.
 2.9.2 Weak domination
 In any game, a player's action weakly dominates another action if the first action is at least as good as the second action, no matter what the other players do, and is better than the second action for some actions of the other players.

2.9 Dominated actions
Definition 46.1 (Weak domination): In a strategic game with ordinal preferences, player i's action a''i weakly dominates her action a'i if:
ui(a''i, a-i) ≥ ui(a'i, a-i) for every list a-i of the other players' actions
and
ui(a''i, a-i) > ui(a'i, a-i) for some list a-i of the other players' actions
 Note that in a strict Nash equilibrium, no player's equilibrium action is weakly dominated, but in a nonstrict Nash equilibrium, an action can be weakly dominated.
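 Both definitions can be tested by direct enumeration in a finite game. The sketch below is illustrative code (not from the slides): it checks strict and weak domination between two actions of the row player for an arbitrary two-player payoff table, using the Prisoner's Dilemma as the test case.

```python
# Checking Definitions 45.1 and 46.1 for the row player of a finite two-player game.
# u1[a][b] is the row player's payoff when she plays a and the column player plays b.

def strictly_dominates(u1, a2, a1):
    """True if row action a2 strictly dominates row action a1."""
    return all(u1[a2][b] > u1[a1][b] for b in u1[a1])

def weakly_dominates(u1, a2, a1):
    """True if row action a2 weakly dominates row action a1."""
    at_least_as_good = all(u1[a2][b] >= u1[a1][b] for b in u1[a1])
    sometimes_better = any(u1[a2][b] > u1[a1][b] for b in u1[a1])
    return at_least_as_good and sometimes_better

# Prisoner's Dilemma payoffs of the row player (Quiet/Fink vs Quiet/Fink).
u1 = {"Quiet": {"Quiet": 2, "Fink": 0},
      "Fink":  {"Quiet": 3, "Fink": 1}}

print(strictly_dominates(u1, "Fink", "Quiet"))   # True: Fink strictly dominates Quiet
print(weakly_dominates(u1, "Fink", "Quiet"))     # True as well (strict implies weak)
```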

2.9 Dominated actions
 Exercise 47.1 (Strict equilibria and dominated actions)
For the game in Figure 48.1, determine, for each player, whether any action is strictly dominated or weakly dominated. Find the Nash equilibria of the game. Determine whether any equilibrium is strict.
[Figure 48.1: a payoff table with row actions T, M, B for player 1 and column actions L, C, R for player 2]

2.9 Dominated actions
 2.9.4 Illustration: collective decision-making
 The members of a group of n people are affected by a policy, modeled as a number. The number n of people is odd. Each person i has a favorite policy, denoted x*i. She prefers the policy y to the policy z if and only if y is closer to x*i than is z.
 The following mechanism is used to choose the policy:
 each person names a policy
 the policy chosen is the median of those named
 E.g.: if there are five people, and they name the policies -2, 0, 0.6, 5 and 10, the policy 0.6 is chosen.
 Questions:
 Model this situation as a strategic game
 Find the equilibrium strategy of the players
 Does anyone have an incentive to name a policy other than her favorite policy?

2.9 Dominated actions
 Strategic game:
 Players: the n people
 Actions: each person's set of actions is the set of policies (numbers)
 Preferences: each person i prefers the action profile a to the action profile a' if and only if the median policy named in a is closer to x*i than is the median policy named in a'.
 Equilibrium strategy of the players:
 Claim: for each player i, the action of naming her favorite policy x*i weakly dominates all her other actions.
 Why?

2.9 Dominated actions
 Proof:
 Take xi > x*i (reporting a higher policy than the preferred one).
 a. For any list of actions of the players other than player i, player i is at least as well off naming x*i as she is naming xi:
 denote the value of the ½(n-1)th highest of the other players' actions by a- and the value of the ½(n+1)th highest by a+ (so that half of the remaining players' actions are at most a- and half of them are at least a+)
 if x*i ≥ a+: the median policy is the same whether player i names x*i or xi (as xi > x*i)
 if xi ≤ a-: the same holds true (as x*i < xi)
 if x*i < a+ and xi > a-: when player i names x*i, the median policy is at most the greater of x*i and a-; when she names xi, the median policy is at least the lesser of xi and a+
 Thus, for all actions of the other players, player i is at least as well off naming x*i as she is naming xi.

2.9 Dominated actions
 b. For some actions of the other players, player i is better off naming x*i than she is naming xi:
 Suppose that half of the remaining players name policies less than x*i and half of them name policies greater than xi. Then the outcome is x*i if player i names x*i and xi if she names xi. Thus player i is better off naming x*i than she is naming xi.
 A symmetric argument applies when xi < x*i. Telling the truth weakly dominates all other actions.
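 The weak-dominance claim can also be checked numerically on small examples. The sketch below is illustrative only (the favorite policy, the alternative report and the number of voters are made-up values): it compares, over random reports of the other voters, the median obtained by reporting the favorite policy x*i with the median obtained from an alternative report xi.

```python
import random
import statistics

def median_outcome(my_report, others):
    """Policy chosen by the mechanism: the median of all named policies."""
    return statistics.median(others + [my_report])

random.seed(0)
favorite = 3.0          # player i's favorite policy x*_i (hypothetical value)
alternative = 5.0       # an alternative report x_i > x*_i (hypothetical value)

worse, better = 0, 0
for _ in range(10_000):
    others = [random.uniform(0, 10) for _ in range(4)]   # 4 other voters, so n = 5 (odd)
    d_truth = abs(median_outcome(favorite, others) - favorite)
    d_alt = abs(median_outcome(alternative, others) - favorite)
    if d_truth > d_alt:
        worse += 1       # truthful reporting did strictly worse: should never happen
    elif d_truth < d_alt:
        better += 1      # truthful reporting did strictly better: happens for some profiles

print(worse, better)     # expected output: 0 and a positive count
```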

2.10 Equilibrium in a single population: symmetric games
 We focus here on cases where we want to model the interaction between members of a single homogeneous population of players. Players interact anonymously and symmetrically.
Definition 51.1 (Symmetric two-player game with ordinal preferences) A two-player strategic game with ordinal preferences is symmetric if the players' sets of actions are the same and the players' preferences are represented by payoff functions u1 and u2 for which u1(a1,a2) = u2(a2,a1) for every action pair (a1,a2).
Definition 52.1 (Symmetric Nash equilibrium) An action profile a* in a strategic game with ordinal preferences in which each player has the same set of actions is a symmetric Nash equilibrium if it is a Nash equilibrium and a*i is the same for every player i.

2.10 Equilibrium in a single population: symmetric games
 Exercise 52.2
 Find all the Nash equilibria of the game in Figure 53.1. Which of the equilibria, if any, correspond to a steady state if the game models pairwise interactions between the members of a single population?
[Figure 53.1: a 3×3 payoff table with actions A, B, C for both players]

3 Nash Equilibrium: Illustrations

3.5 Auctions
 3.5.1 Introduction
 Auctions are used to allocate significant economic resources, from works of art to short-term government bonds to radio spectrum.
 Auctions:
 exist since long ago (annual auctions of marriageable women in Babylonian villages) and remain up-to-date (eBay on the Internet)
 Auctions come in many forms:
 Sequential or sealed-bid (simultaneous)
 First or second price
 Ascending (English) or descending (Dutch)
 Single or multi-unit
 With or without reservation price
 With or without entry costs
 …
 Main questions:
 What designs are likely to be the most effective at allocating resources?
 What designs are likely to raise the most revenue?

3.5 Auctions
 Main assumption: we discuss here auctions in which every buyer knows her own valuation and every other buyer's valuation of the item being sold. Buyers are perfectly informed.
 This assumption will be dropped in Chapter 9.

3.5 Auctions
 3.5.2 Second-price sealed-bid auctions
 In a common form of auction, people sequentially submit increasing bids for an object. When no one wishes to submit a higher bid than the current bid, the person making the current bid obtains the object at the price she bid.
 Given that every person is certain of her valuation of the object (perfect valuation) before the bidding begins, during the bidding no one can learn anything relevant to her actions.
 Thus we can model the auction by assuming that each person decides, before bidding begins, the most she is willing to bid (her maximal bid).
 During the bidding, eventually only the person with the highest maximal bid and the one with the second highest maximal bid will be left competing against each other. To win, the person with the highest maximal bid therefore needs to bid slightly more than the second highest maximal bid.

3.5 Auctions
 We can therefore model such an ascending auction as a strategic game in which each player chooses an amount of money (the maximal amount she is willing to bid), and the player who chooses the highest amount obtains the object and pays a price equal to the second highest amount.
 This game also models a situation in which the people simultaneously put bids in sealed envelopes, and the person who submits the highest bid wins and pays a price equal to the second highest bid. In a perfect information context, ascending auctions (or English auctions) and second-price sealed-bid auctions are modeled by the same strategic game.

3.5 Auctions
 Notations
 vi: the value player i attaches to the object
 p: price paid for the object
 vi - p: winning player's payoff
 n: number of players
 number the players such that v1 > v2 > … > vn > 0
 bi: sealed bid submitted by player i
 Rules
 Each player submits a sealed bid bi
 If bi is the highest bid, player i wins the auction, gets the object and pays the second highest bid (say bj). In such a case, player i's payoff is vi - bj
 In case of a tie, it is the player with the smallest number (the highest valuation) who wins. She pays her own bid (as there is a tie).

3.5 Auctions
 Strategic game representation:
 Players: the n bidders, where n ≥ 2
 Actions: the set of actions of each player is the set of possible bids (nonnegative numbers)
 Preferences: denote by bi the bid of player i and by b+ the highest bid submitted by a player other than i. If either bi > b+, or bi = b+ and the number of every other player who bids b+ is greater than i, then player i's payoff is vi - b+. Otherwise player i's payoff is 0.

3.5 Auctions
 Nash equilibrium
 The game has many Nash equilibria:
 One equilibrium is (b1, b2, …, bn) = (v1, v2, …, vn): each player's bid is equal to her valuation of the object:
 because v1 > v2 > … > vn, the outcome is that player 1 obtains the object and pays b2; her payoff is v1 - b2, and every other player's payoff is zero
 if player 1 changes her bid to some other price at least equal to b2, then the outcome does not change; if she changes her bid to a price less than b2, then she loses and obtains a zero payoff
 if some other player lowers her bid or raises her bid to some price at most equal to b1, then the outcome does not change (she remains a loser); if she raises her bid above b1, then she wins but, in paying the price b1, she makes a loss (because her valuation is less than b1).

3.5 Auctions
 Another equilibrium is (b1, b2, …, bn) = (v1, 0, …, 0): player 1 obtains the object and pays 0. A sad issue for the auctioneer…
 Another equilibrium is (b1, b2, …, bn) = (v2, v1, 0, …, 0): player 2 bids v1 and obtains the object at price v2, and every player's payoff is zero:
 if player 1 raises her bid to v1 or more, she wins the object but her payoff remains zero (she pays the price v1, bid by player 2); if she changes her bid to v2 or less, the outcome does not change
 if player 2 changes her bid to some other price greater than v2, the outcome does not change; if she changes her bid to a price at most v2, she loses, and her payoff remains zero
 if any other player raises her bid to at most v1, the outcome does not change; if she raises her bid above v1, then she wins but gets a negative payoff
 Note that, in this equilibrium, player 2 bids more than her valuation. This might seem strange. This is due to the fact that, in a Nash equilibrium, a player does not consider the "risk" that another player will take an action different from her equilibrium action. Each player simply chooses an action that is optimal, given the other players' actions. This however suggests that this equilibrium is less plausible as an outcome of the auction than the equilibrium in which each bidder bids her valuation.

3.5 Auctions
This is due to the fact that, in a second-price sealed-bid auction (with perfect information), a player's bid equal to her valuation weakly dominates all her other bids. That is: for any bid bi ≠ vi, no matter what the other players bid, player i's bid vi is at least as good as bi, and is better than bi for some actions of the other players.

9/12/2011 Game Theory . player i payoff to a bid vi is a least as large as her payoffs to any other bid.1 vi is better than b’i in this region vi-b+ 0 vi b+ vi-b+ 0 b’i vi b+ vi-b+ 0 vi is better than b’’i in this region b’’i vi b+ The Figure compares player i payoffs to the bid vi (left panel) with her payoff to a bid b’i < vi (middle panel) and with her payoff to a bid b’’i > vi. We see that: -for all value of b+.A (Short) Introduction 83 . -for some values of the b+.5 Auctions The precise argument is given by Figure 85. her payoffs to vi exceed her payoff to any other bid. as a function of the highest of the other players’ bids (b+).3.

3.5 Auctions
 Exercise 84.1
 Find a Nash equilibrium of a second-price sealed-bid auction in which player n obtains the object.
 Exercise 86.1 (Auctioning the right to choose)
 An action affects each of two people. The right to choose the action is sold in a second-price auction: the two people simultaneously submit bids, and the one who submits the higher bid chooses her favorite action and pays (to a third party) the amount bid by the other person, who pays nothing. Assume that if the bids are the same, person 1 is the winner. For i = 1, 2, the payoff of person i when the action is a and person i pays m is ui(a) - m.
 In the game that models this situation, find for each player a bid that weakly dominates all the player's other bids (and thus find a Nash equilibrium in which each player's equilibrium action weakly dominates all her other actions).

3. player i payoff is 0.5.5 Auctions  3.3 First-price sealed-bid auctions  Difference with as second-price auction: the winner pays the price she bids Strategic game representation:  Players: the n bidders. then player i payoff is vi-bi. Otherwise.  9/12/2011 Game Theory . where n≥2  Actions: the set of actions of each player is the set of possible bids (nonnegative numbers)  Preferences: denote by bi the bid of player i and by b+ the highest bid submitted by a player other than i. If either (a) bi > b+ or (b) bi = b+ and the number of every other player who bids b+ is greater than i.A (Short) Introduction 85 .

3.5 Auctions
 Note that this game models:
 a sealed-bid auction in which the highest bid wins
 but also
 a dynamic auction in which the auctioneer begins by announcing a high price, which she gradually lowers until someone indicates her willingness to buy the object (a Dutch auction)
 (this equivalence is, in some sense, even stronger than the one between an ascending auction and a second-price sealed-bid auction – it does not depend on private values).
 Nash equilibrium
 One Nash equilibrium is (b1, b2, …, bn) = (v2, v2, …, vn), in which player 1's bid is player 2's valuation and every other player's bid is her own valuation. The outcome is that player 1 obtains the object at price v2.

3.5 Auctions
 Exercise 86.2 Show that (b1, b2, …, bn) = (v2, v2, …, vn) is a Nash equilibrium of a first-price sealed-bid auction.
 A first-price sealed-bid auction has many other equilibria, but in all equilibria the winner is the player who values the object most highly (player 1), by the following argument:
 in any action profile (b1, …, bn) in which some player i ≠ 1 wins, we have bi > b1
 if bi > v2, then i's payoff is negative, so that she can do better by reducing her bid to 0
 if bi ≤ v2, then player 1 can increase her payoff from 0 to v1 - bi by bidding bi, in which case she wins.

3.5 Auctions

Exercise 87.1 (First-price sealed-bid auction)

Show that in a Nash equilibrium of a first-price sealed-bid auction the two highest bids are the same, one of these bids is submitted by player 1, and the highest bid is at least v2 and at most v1. Show also that any action profile satisfying these conditions is a Nash equilibrium.




3.5 Auctions

As in the second-price sealed-bid auction, the potential "riskiness" to player i of a bid bi > vi is reflected in the fact that it is weakly dominated by the bid vi, as shown by the following argument:
 if the other players' bids are such that player i loses when she bids bi, then the outcome is the same whether she bids bi or vi
 if the other players' bids are such that player i wins when she bids bi, then her payoff is negative when she bids bi and zero when she bids vi (regardless of whether this bid wins)
However, unlike in a second-price auction, in a first-price auction a bid bi < vi of player i is not weakly dominated by the bid vi (it is in fact not weakly dominated by any bid):
 it is not weakly dominated by a bid b'i < bi, because if the other players' highest bid is between b'i and bi, then b'i loses whereas bi wins and yields player i a positive payoff
 it is not weakly dominated by a bid b'i > bi, because if the other players' highest bid is less than bi, then both bi and b'i win and bi yields a lower price.




3.5 Auctions

Note also that, though the bid vi weakly dominates higher bids, this bid is itself weakly dominated by a lower bid! The argument is the following:  if player i bids vi, her payoff is 0 regardless of the other players’ bids  whereas, if she bids less than vi, her payoff is either 0 (if she loses) or positive (if she wins)
In a first-price sealed-bid auction (with perfect information), a player’s bid of at least her valuation is weakly dominated, and a bid of less than her valuation is not weakly dominated.
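 A small numerical check of this statement, under illustrative values (a valuation of 10 and a grid of competing bids): in a first-price auction, bidding the valuation is weakly dominated by a slightly lower bid, while two lower bids do not dominate one another.

```python
def first_price_payoff(value, bid, highest_other):
    """Payoff in a first-price auction; ties are resolved against the bidder here."""
    return value - bid if bid > highest_other else 0.0

v = 10.0
competing = [x / 10 for x in range(0, 201)]   # grid of highest competing bids, 0..20

def compare(bid_a, bid_b):
    """Return (bid_a always at least as good as bid_b, bid_a sometimes strictly better)."""
    pa = [first_price_payoff(v, bid_a, b) for b in competing]
    pb = [first_price_payoff(v, bid_b, b) for b in competing]
    return all(x >= y for x, y in zip(pa, pb)), any(x > y for x, y in zip(pa, pb))

print(compare(9.0, 10.0))   # (True, True): bidding below v weakly dominates bidding v
print(compare(9.0, 8.0))    # (False, True): 9 does not weakly dominate 8 ...
print(compare(8.0, 9.0))    # (False, True): ... and 8 does not weakly dominate 9
```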




3.5 Auctions

Note finally that this property of the equilibria depends on the assumption that a bid may be any number. In the variant of the game in which bids and valuations are restricted to be multiples of some discrete monetary unit ε:
 an action profile (v2-ε, v2-ε, b3, …, bn), for any bj ≤ vj-ε for j = 3, …, n, is a Nash equilibrium in which no player's bid is weakly dominated
 further, every equilibrium in which no player's bid is weakly dominated takes this form.
If ε is small, this is very close to (v2, v2, b3, …, bn): this equilibrium is therefore (on a somewhat ad hoc basis) considered the distinguished equilibrium of a first-price sealed-bid auction.

One conclusion of this analysis is that, while both second-price and first-price auctions have many Nash equilibria, their distinguished equilibria yield the same outcome: in every distinguished equilibrium of each game, the object is sold to player 1 at the price v2. This notion of revenue equivalence is a cornerstone of auction theory and will be analyzed in depth later.

3.5 Auctions
 3.5.4 Variants

 Uncertain valuations: we have assumed that each bidder is certain of both her own valuation and every other bidder's valuation, which is highly unrealistic. We will study the case of imperfect information in Chapter 9 (in the framework of Bayesian games).
 Interdependent/common valuations: in some auctions, the main difference between bidders is not that they value the object differently but that they have different information about its value (e.g., oil tract auctions). As this also involves informational considerations, we will again study this in Chapter 9.
 All-pay auctions: in some auctions, every bidder pays, not only the winner (e.g., competition of lobby groups for government attention).




3.5 Auctions

Multi-unit auctions: in some auctions, many units of an object are available (e.g., US Treasury bill auctions) and each bidder may value positively more than one unit. Each bidder therefore chooses a bid profile (b1, b2, …, bk) if there are k units to sell. Different auction mechanisms exist and are characterized by the rule governing the price paid by the winners:
 Discriminatory auction: the price paid for each unit is the winning bid for that unit
 Uniform-price auction: the price paid for each unit is the same, equal to the highest rejected bid among all the bids for all units
 Vickrey auction (named after the Nobel prize winner William Vickrey): a bidder who wins k objects pays the sum of the k highest rejected bids submitted by the other bidders.
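 To make the three pricing rules concrete, here is a small sketch using hypothetical bids (two units for sale, three bidders); it allocates the units to the highest bids and then computes the prices implied by the discriminatory, uniform-price and Vickrey rules as described above.

```python
# k = 2 identical units for sale; each bidder submits one bid per unit.
# Hypothetical bid profiles, highest bid first for each bidder.
bids = {"bidder1": [10, 7], "bidder2": [9, 3], "bidder3": [6, 1]}
k = 2

# Allocate the units to the k highest bids overall.
all_bids = sorted(((b, name) for name, bs in bids.items() for b in bs), reverse=True)
winners = all_bids[:k]                      # [(10, 'bidder1'), (9, 'bidder2')]
rejected = [b for b, _ in all_bids[k:]]     # [7, 6, 3, 1]

# Discriminatory: each winning bid is paid as such.
discriminatory = {name: b for b, name in winners}

# Uniform price: every unit is sold at the highest rejected bid.
uniform_price = max(rejected)

# Vickrey: a bidder winning m units pays the m highest rejected bids of the OTHER bidders.
vickrey = {}
for name in bids:
    m = sum(1 for _, w in winners if w == name)
    if m:
        others_rejected = sorted((b for b, w in all_bids[k:] if w != name), reverse=True)
        vickrey[name] = sum(others_rejected[:m])

print(discriminatory)    # {'bidder1': 10, 'bidder2': 9}
print(uniform_price)     # 7
print(vickrey)           # {'bidder1': 6, 'bidder2': 7}
```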




4 Mixed Strategy Equilibrium

4.1 Introduction
 4.1.1 Stochastic steady state
 Nash equilibrium in a strategic game: action profile in which every player's action is optimal given every other player's action (see Definition 23.1).
 This corresponds to a steady state of the game:
 every player's behavior is the same whenever she plays the game
 no player wishes to change her behavior, knowing (from experience) the other players' behavior
 In such a framework, the outcome of every play of the game is the same Nash equilibrium.
 A more general notion of steady state exists.

4.1 Introduction
 Players' choices are allowed to vary:
 different members of a given population may choose different actions, each player choosing the same action whenever she plays the game
 each individual may, on each occasion she plays the game, choose her action probabilistically according to the same, unchanging, distribution
 these situations are equivalent:
 in the first case, a fraction p of the population representing player i chooses the action a
 in the second case, each member of the population representing player i chooses the action a with probability p
 These notions of (stochastic) steady state are modeled as mixed strategy Nash equilibrium.

4.1 Introduction
 4.1.2 Example: Matching Pennies
                    Player 2
                 Head       Tail
 Player 1 Head  (1,-1)     (-1,1)
          Tail  (-1,1)     (1,-1)
 Outcomes
 The game has no Nash equilibrium: no pair of actions is compatible with a steady state.

4.1 Introduction
 The game has however a stochastic steady state in which each player chooses each of her actions with probability ½:
 Suppose that player 2 chooses each of her actions with probability ½
 If player 1 chooses Head with probability p and Tail with probability (1-p), then:
 each outcome (Head, Head) and (Head, Tail) occurs with probability p × ½
 each outcome (Tail, Head) and (Tail, Tail) occurs with probability (1-p) × ½
 Thus, the probability that the outcome is either (Head, Head) or (Tail, Tail) (in which case player 1 wins $1) is ½ p + ½ (1-p) = ½. The other two outcomes, (Head, Tail) and (Tail, Head) (which correspond to a loss of $1), also have probability ½.

4.1 Introduction
 the probability distribution over outcomes is independent of p!
 every value of p is optimal (in particular ½)!
 the same analysis holds for player 2. We conclude that the game has a stochastic steady state in which each player chooses each action with probability ½.
 Moreover (under a reasonable assumption on the players' preferences), the game has no other steady state:
 Assumption: each player wants the probability of her gaining $1 to be as large as possible (maximization of expected profit)
 Denote by q the probability with which player 2 chooses Head (she chooses Tail with probability (1-q))
 If player 1 chooses Head with probability p, she gains $1 with probability pq + (1-p)(1-q) (outcomes Head,Head or Tail,Tail) and she loses $1 with probability (1-p)q + p(1-q).

4.1 Introduction
 Note that:
 Player 1 wins $1: pq + (1-p)(1-q) = 1 - q + p(2q-1)
 Player 1 loses $1: (1-p)q + p(1-q) = q + p(1-2q)
 If q < ½, the first probability (winning $1) is decreasing in p and the second probability (losing $1) is increasing in p. Thus, if player 2 chooses Head with probability less than ½, the best response of player 1 is to choose Tail with certainty. Player 1 therefore chooses p = 0.
 A similar argument shows that if player 2 chooses Head with probability greater than ½, the best response of player 1 is to choose Head with certainty.
 We have already shown that if one player chooses a given action with certainty (a pure Nash equilibrium candidate), there is no steady state.

4.1 Introduction
 4.1.3 Generalizing the analysis: expected payoffs
 The Matching Pennies case is particularly simple because it has only two outcomes for each player, allowing us to deduce players' preferences regarding lotteries (probability distributions) over outcomes from their preferences regarding deterministic outcomes:
 if a player prefers a to b and if p > q, she most likely prefers a lottery in which a occurs with probability p (and b with probability 1-p) to a lottery in which a occurs with probability q (and b with probability 1-q)
 To deal with more general cases (e.g., more than two outcomes), we need to add to the model a description of her preferences regarding lotteries (probability distributions) over outcomes.

4.1 Introduction
 The standard approach is to restrict attention to preferences regarding lotteries (probability distributions) over outcomes that may be represented by the expected value of a payoff function over deterministic outcomes:
 for every player i, there is a payoff function ui with the property that player i prefers one probability distribution over outcomes to another if and only if, according to ui, the expected value of the first probability distribution exceeds the expected value of the second probability distribution
 e.g.: three outcomes a, b, c; two probability distributions P = (pa, pb, pc) and Q = (qa, qb, qc); P is preferred to Q if and only if pa ui(a) + pb ui(b) + pc ui(c) > qa ui(a) + qb ui(b) + qc ui(c)
 Preferences that can be represented by the expected value of a payoff function over deterministic outcomes are called vNM (von Neumann–Morgenstern) preferences. A payoff function whose expected value represents such preferences is called a Bernoulli payoff function.
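 As a tiny illustration (with made-up payoff values and probabilities), comparing two lotteries under a Bernoulli payoff function reduces to comparing expected values:

```python
# Three outcomes a, b, c with a hypothetical Bernoulli payoff function u.
u = {"a": 3.0, "b": 1.0, "c": 0.0}

P = {"a": 0.5, "b": 0.3, "c": 0.2}   # lottery P
Q = {"a": 0.4, "b": 0.5, "c": 0.1}   # lottery Q

expected = lambda lottery: sum(p * u[x] for x, p in lottery.items())
print(expected(P), expected(Q), expected(P) > expected(Q))
# 1.8 1.7 True: a player with vNM preferences represented by u prefers P to Q.
```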

4.1 Introduction
 The restrictions on preferences regarding probability distributions over outcomes required for them to be represented by the expected value of a payoff function are NOT innocuous (see the violation examples on page 104). They are however commonly accepted in game theory.
 Note that these restrictions do not restrict players' attitudes to risk:
 e.g.: suppose that a, b and c are three outcomes and a person prefers a to b to c. If the person is very averse to risky outcomes, she prefers to obtain b for sure rather than to face a probability distribution in which a occurs with probability p and c with probability (1-p), even if p is relatively large. Such preferences can be represented by the expected value of a payoff function u for which u(a) is close to u(b), which is much larger than u(c) (concave payoff function).
[Figure 103.1: a concave payoff function with u(a) close to u(b) and both much larger than u(c)]

4.1 Introduction
 Note that if the outcomes are amounts of money and if the preferences are represented by the expected value of the amount of money, the player is risk neutral.
 In reality:
 the fact that people buy insurance (the expected payoff is inferior to the insurance fee) shows that economic agents are risk averse
 the fact that people buy lottery tickets shows that, in some circumstances, they can be risk preferring (small investment, extremely high payoff)
 In both cases, the preferences can be represented by the expected value of a payoff function:
 concave in case of risk aversion
 convex in case of risk preference
 Two classic utility functions: CARA & CRRA
 Note finally that, given preferences, many different payoff functions can be used to represent them. It is the ordering that matters.

4.2 Strategic games in which players may randomize
Definition 106.1 (Strategic game with vNM preferences) A strategic game with vNM preferences consists of
 a set of players
 for each player, a set of actions
 for each player, preferences regarding probability distributions over action profiles that may be represented by the expected value of a (Bernoulli) payoff function over action profiles.
 Representation: a two-player strategic game with vNM preferences in which each player has finitely many actions may be represented in a table like in Chapter 2. However, the interpretation of the numbers is different:
 in Chapter 2, the numbers are values of payoff functions that represent the players' preferences over deterministic outcomes
 here, the numbers are values of (Bernoulli) payoffs whose expected values represent the players' preferences over probability distributions.

4.2 Strategic games in which players may randomize
 The change is subtle but important (Figure 107.1).
 Left-hand game (the standard Prisoner's Dilemma payoffs):
           Q        F
 Q       (2,2)    (0,3)
 F       (3,0)    (1,1)
[Right-hand table of Figure 107.1: same ordinal ranking of the outcomes, but different Bernoulli payoffs]
 The two games represent the same game with ordinal preferences (the Prisoner's Dilemma). However, the two games represent different strategic games with vNM preferences:
 left game: player 1's payoff to (Q,Q) is the same as her expected payoff to the probability distribution that yields (F,Q) with probability ½ and (F,F) with probability ½
 right game: her payoff to (Q,Q) is higher than her expected payoff to this probability distribution.

4.3 Mixed strategy Nash equilibrium
 4.3.1 Mixed strategies
 We now allow each player to choose a probability distribution over her set of actions (rather than restricting her to choose a single deterministic action).
Definition 107.1 (Mixed strategy) A mixed strategy of a player in a strategic game is a probability distribution over the player's actions.
 Notations:
 α: profile of mixed strategies
 αi(ai): probability assigned by player i's mixed strategy αi to her action ai

4.3 Mixed strategy Nash equilibrium
 E.g.: in Matching Pennies, the strategy of player 1 that assigns probability ½ to each action is the strategy α1(Head) = ½ and α1(Tail) = ½.
 Shortcut: mixed strategies are often written as a list of probabilities (one for each action), in the order the actions are given in the table. E.g.: (½, ½) assigns, in Table 107.1, probability ½ to Q and probability ½ to F.
 Note that a mixed strategy may assign probability 1 to a single action. In that case, such a strategy is referred to as a pure strategy.

4.3 Mixed strategy Nash equilibrium
 4.3.2 Equilibrium
 The mixed strategy Nash equilibrium extends the concept of Nash equilibrium to the probabilistic setup: for each player i and every mixed strategy αi of player i, the expected payoff to player i of α* must be at least as large as the expected payoff to player i of (αi, α*-i), according to a payoff function whose expected value represents player i's preferences over probability distributions.
Definition 108.1 (Mixed strategy Nash equilibrium of strategic game with vNM preferences) The mixed strategy profile α* in a strategic game with vNM preferences is a mixed strategy Nash equilibrium if
Ui(α*) ≥ Ui(αi, α*-i) for every player i and every mixed strategy αi of player i,
where Ui(α) is player i's expected payoff to the mixed strategy profile α.

4.3 Mixed strategy Nash equilibrium
 4.3.3 Best response functions
 Notation: Bi is player i's best response function.
 For a strategic game with ordinal preferences: Bi(a-i) is the set of player i's best actions when the list of the other players' actions is a-i.
 For a strategic game with vNM preferences: Bi(α-i) is the set of player i's best mixed strategies when the list of the other players' mixed strategies is α-i.
 The mixed strategy profile α* is a mixed strategy Nash equilibrium if and only if α*i is in Bi(α*-i) for every player i.
 E.g.: in Matching Pennies, the set of best responses to a mixed strategy of the other player is either a single pure strategy or the set of all mixed strategies.

4.3 Mixed strategy Nash equilibrium
 Two players – two actions games
 Player 1 has actions T and B
 Player 2 has actions L and R
 ui (i = 1, 2) denotes a Bernoulli payoff function for player i (a payoff over action pairs whose expected value represents player i's preferences regarding probability distributions over action pairs)
 Player 1's mixed strategy α1 assigns probability α1(T) to her action T (denoted p) and probability α1(B) to her action B (denoted 1-p), with α1(T) + α1(B) = 1.
 Similarly, denote by q the probability that player 2's mixed strategy assigns to L and by 1-q the probability it assigns to R.
 We take the players' choices to be independent (when the players choose the mixed strategies α1 and α2, the probability of any action pair (a1,a2) is the product of the corresponding probabilities assigned by the mixed strategies).

4.3 Mixed strategy Nash equilibrium
 So, the probabilities of the four outcomes are (Figure 109.1):
               L (q)        R (1-q)
 T (p)         pq           p(1-q)
 B (1-p)       (1-p)q       (1-p)(1-q)
 From this probability distribution, we can compute player 1's expected payoff to the mixed strategy pair (α1, α2):
pq u1(T,L) + p(1-q) u1(T,R) + (1-p)q u1(B,L) + (1-p)(1-q) u1(B,R)

4.3 Mixed strategy Nash equilibrium
pq u1(T,L) + p(1-q) u1(T,R) + (1-p)q u1(B,L) + (1-p)(1-q) u1(B,R)
  = p [q u1(T,L) + (1-q) u1(T,R)] + (1-p) [q u1(B,L) + (1-q) u1(B,R)]
where
 q u1(T,L) + (1-q) u1(T,R) is player 1's expected payoff when she uses the pure strategy that assigns probability 1 to T and player 2 uses the mixed strategy α2
 q u1(B,L) + (1-q) u1(B,R) is player 1's expected payoff when she uses the pure strategy that assigns probability 1 to B and player 2 uses the mixed strategy α2
This can be written more compactly as:
p E1(T, α2) + (1-p) E1(B, α2)
Player 1's expected payoff to the mixed strategy pair (α1, α2) is thus a weighted average of her expected payoffs to T and B when player 2 uses the mixed strategy α2, with weights equal to the probabilities assigned to T and B by α1.

 2  0 p (Figure 110.  2  E1 B. player 1’s expected payoff is a linear function of p E1 T .  2   E1B.  2  E1T .  2   (1  p) E1B.3 Mixed strategy Nash equilibrium In particular.1) 1 9/12/2011 Game Theory .  2  pE1T .4.A (Short) Introduction 114 .

4.3 Mixed strategy Nash equilibrium
 A significant implication of this linearity of player 1's expected payoff is that there are only three possibilities for her best response to a given mixed strategy of player 2:
 player 1's unique best response is the pure strategy T (if E1(T,α2) > E1(B,α2)): see Figure 110.1
 player 1's unique best response is the pure strategy B (if E1(T,α2) < E1(B,α2)): see Figure 110.1 with a downward sloping line
 all mixed strategies of player 1 yield the same expected payoff, hence all are best responses (if E1(T,α2) = E1(B,α2)): see Figure 110.1 with a horizontal line
 In particular, a mixed strategy (p, 1-p) for which 0 < p < 1 is never a unique best response.

4.3 Mixed strategy Nash equilibrium
 Example: Matching Pennies revisited
 Represent each player's preferences by the expected value of a payoff function that assigns the payoff 1 to a gain of $1 and the payoff -1 to a loss of $1. The resulting strategic game with vNM preferences is (Figure 111.1):
                    Player 2
                 Head       Tail
 Player 1 Head  (1,-1)     (-1,1)
          Tail  (-1,1)     (1,-1)

4.3 Mixed strategy Nash equilibrium
 Denote by p the probability that player 1's mixed strategy assigns to Head and by q the probability that player 2's mixed strategy assigns to Head.
 Player 1's expected payoff to the pure strategy Head, given player 2's mixed strategy, is: q·1 + (1-q)·(-1) = 2q - 1
 Her expected payoff to Tail is: q·(-1) + (1-q)·1 = 1 - 2q
[Figure 112.1: the two players' best response functions in the (p, q) square]

4.3 Mixed strategy Nash equilibrium
 Thus:
 if q < ½, player 1's expected payoff to Tail exceeds her expected payoff to Head (and hence also exceeds her expected payoff to any mixed strategy that assigns a positive probability to Head)
 similarly, if q > ½, her expected payoff to Head exceeds her expected payoff to Tail
 if q = ½, then both Head and Tail (and all her mixed strategies) lead to the same payoff
 We conclude that player 1's best responses to player 2's strategy are: her mixed strategy that assigns probability 0 to Head if q < ½, her mixed strategy that assigns probability 1 to Head if q > ½, and all her mixed strategies if q = ½.

4.3 Mixed strategy Nash equilibrium
 The best response function of player 2 is similar (see Figure 112.1).
 The set of mixed strategy Nash equilibria corresponds (as before) to the set of intersections of the best response functions in Figure 112.1.
 Matching Pennies has no Nash equilibrium if players are not allowed to randomize!
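 The best response analysis above can be reproduced numerically. The sketch below (illustrative code, using the Bernoulli payoffs of Figure 111.1) computes player 1's expected payoffs to Head and Tail for a few values of q and shows that only q = ½ makes her willing to mix; the symmetric argument for player 2 gives p = ½.

```python
# Player 1's Bernoulli payoffs in Matching Pennies (she wins when the sides match).
def e1_head(q):  # expected payoff to Head when player 2 plays Head with probability q
    return q * 1 + (1 - q) * (-1)          # = 2q - 1

def e1_tail(q):  # expected payoff to Tail
    return q * (-1) + (1 - q) * 1          # = 1 - 2q

for q in (0.2, 0.5, 0.8):
    h, t = e1_head(q), e1_tail(q)
    best = "Head" if h > t else "Tail" if t > h else "any mixture"
    print(f"q = {q}: E(Head) = {h:+.1f}, E(Tail) = {t:+.1f}, best response: {best}")
# Only q = 0.5 leaves player 1 indifferent, so she can assign positive probability
# to both actions only if q = 0.5; by symmetry the same holds for p, giving (1/2, 1/2).
```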

4.3 Mixed strategy Nash equilibrium
 Exercise 114.2
 Find all the mixed strategy Nash equilibria of the strategic games in Figures 114.1 and 114.2.
[Figures 114.1 and 114.2: two small payoff tables with row actions T, B for player 1 and column actions L, R for player 2]

4.3 Mixed strategy Nash equilibrium
 Exercise 114.3
 Two people can perform a task if, and only if, they both exert effort. They are both better off if they both exert effort and perform the task than if neither exerts effort (and nothing is accomplished); the worst outcome for each person is that she exerts effort and the other person does not (in which case again nothing is accomplished). Specifically, the players' preferences are represented by the expected value of the payoff functions in Figure 115.1, where c is a positive number less than 1 that can be interpreted as the cost of exerting effort. Find all the mixed strategy Nash equilibria of this game. How do the equilibria change as c increases? Explain the reasons for the changes.
              No Effort      Effort
 No Effort     (0,0)         (0,-c)
 Effort        (-c,0)        (1-c,1-c)
(Figure 115.1)

4.3 Mixed strategy Nash equilibrium
 4.3.4 A useful characterization of mixed strategy Nash equilibrium
 The method used up to now to find mixed strategy Nash equilibria involves constructing players' best response functions. In complicated games, this method may be intractable.
 There is a characterization of mixed strategy Nash equilibria that is an invaluable tool in the study of general games.
 The key is the following observation: a player's expected payoff to a mixed strategy profile α is a weighted average of her expected payoffs to all pure strategy profiles of the type (ai, α-i), where the weight attached to each (ai, α-i) is the probability αi(ai) assigned to the action ai by the player's mixed strategy αi (see Section 4.3.3).

4.3 Mixed strategy Nash equilibrium
 Symbolically:
Ui(α) = Σ over ai in Ai of αi(ai) Ei(ai, α-i)
where:
 Ai is player i's set of actions (pure strategies)
 Ei(ai, α-i) is her expected payoff when she uses the pure strategy that assigns probability 1 to ai and every other player j uses her mixed strategy αj.

4.3 Mixed strategy Nash equilibrium
 This leads to the following analysis:
 Let α* be a mixed strategy Nash equilibrium
 Denote by E*i player i's expected payoff in the equilibrium
 Because α* is an equilibrium, player i's expected payoff, given α*-i, to all her strategies (including all her pure strategies) is at most E*i
 But E*i is a weighted average of player i's expected payoffs to the pure strategies to which α*i assigns a positive probability
 Thus, player i's expected payoffs to these pure strategies are all equal to E*i (if any were smaller, the weighted average would be smaller!).

4.3 Mixed strategy Nash equilibrium
 We conclude that:
 the expected payoff to each action to which α*i assigns positive probability is E*i
 the expected payoff to every other action is at most E*i
Proposition 116.2 A mixed strategy profile α* in a strategic game with vNM preferences in which each player has finitely many actions is a mixed strategy Nash equilibrium if and only if, for each player i,
 the expected payoff, given α*-i, to every action to which α*i assigns a positive probability is the same
 the expected payoff, given α*-i, to every action to which α*i assigns a zero probability is at most the expected payoff to any action to which α*i assigns a positive probability.
 Each player's expected payoff in an equilibrium is her expected payoff to any of her actions that she uses with positive probability.

4.3 Mixed strategy Nash equilibrium
 This proposition allows us to check whether a mixed strategy profile is an equilibrium.
 Example 117.1
              L (0)      C (1/3)     R (2/3)
 T (3/4)      ·,·         3,·         1,·
 M (0)        ·,·         0,·         2,·
 B (1/4)      ·,·         5,·         0,·
(Figure 117.1, in which the dots indicate payoffs that are irrelevant for the verification below)

4.3 Mixed strategy Nash equilibrium
For the game in Figure 117.1 (in which the dots indicate irrelevant payoffs), the indicated pair of strategies ((3/4, 0, 1/4) for player 1 and (0, 1/3, 2/3) for player 2) is a mixed strategy Nash equilibrium. To verify this claim, it suffices, by Proposition 116.2, to study each player's expected payoffs to her three pure strategies. For player 1, these payoffs are:
T: (1/3)·3 + (2/3)·1 = 5/3
M: (1/3)·0 + (2/3)·2 = 4/3
B: (1/3)·5 + (2/3)·0 = 5/3

4.3 Mixed strategy Nash equilibrium
Player 1's mixed strategy assigns positive probability to T and B and probability zero to M. So, the two conditions of Proposition 116.2 are satisfied for player 1. The same verification is easily done for player 2. Note however that, for player 2, the action L (which she uses with probability 0) has the same expected payoff as her other two actions. This equality is consistent with Proposition 116.2, which requires only "no greater than".
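 Player 1's side of this verification can be coded directly from Proposition 116.2, using the payoffs of Figure 117.1 that enter the computation above (player 2's side is analogous). This is an illustrative sketch only; exact rational arithmetic is used to avoid rounding issues.

```python
from fractions import Fraction as F

# Player 1's Bernoulli payoffs against columns C and R (from Figure 117.1);
# column L is irrelevant here because player 2 assigns it probability 0.
u1 = {"T": {"C": 3, "R": 1},
      "M": {"C": 0, "R": 2},
      "B": {"C": 5, "R": 0}}
alpha1 = {"T": F(3, 4), "M": F(0), "B": F(1, 4)}   # candidate strategy of player 1
alpha2 = {"C": F(1, 3), "R": F(2, 3)}              # candidate strategy of player 2 (L has prob. 0)

expected = {a: sum(q * u1[a][b] for b, q in alpha2.items()) for a in u1}
support = [a for a, p in alpha1.items() if p > 0]
payoffs_on_support = {expected[a] for a in support}

print(expected)                                   # {'T': 5/3, 'M': 4/3, 'B': 5/3}
print(len(payoffs_on_support) == 1)               # True: condition 1 of Prop. 116.2
print(all(expected[a] <= max(payoffs_on_support) for a in u1))  # True: condition 2
```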

4.3 Mixed strategy Nash equilibrium
 Exercise 117.2 (Choosing numbers)
 Players 1 and 2 each choose a positive integer up to K. If the players choose the same number, then player 2 pays $1 to player 1; otherwise no payment is made. Each player's preferences are represented by her expected monetary payoff.
 Show that the game has a mixed strategy Nash equilibrium in which each player chooses each positive integer up to K with probability 1/K.
 Show that the game has no other mixed strategy Nash equilibria. (Deduce from the fact that player 1 assigns positive probability to some action k that player 2 must do so; then look at the implied restriction on player 1's equilibrium strategy.)

4.3 Mixed strategy Nash equilibrium
 Note finally that:
 an implication of Proposition 116.2 is that a nondegenerate mixed strategy equilibrium (a mixed strategy equilibrium that is not also a pure strategy equilibrium) is never a strict Nash equilibrium: every player whose mixed strategy assigns positive probability to more than one action is indifferent between her equilibrium mixed strategy and every action to which this mixed strategy assigns positive probability
 the theory of mixed strategy Nash equilibrium does not state that players consciously choose their strategies at random given the equilibrium probabilities. Rather, the conditions for equilibrium are designed to ensure that it is consistent with a steady state. The question of how a steady state may come about remains to be studied at this stage.

4.3 Mixed strategy Nash equilibrium
 4.3.5 Existence of equilibrium in finite games
Proposition 119.1 (Existence of mixed strategy Nash equilibrium in finite games) Every strategic game with vNM preferences in which each player has finitely many actions has a mixed strategy Nash equilibrium.
 This proposition does not help to find the equilibrium, but it is a useful fact.
 Note also that:
 the finiteness of the number of actions is a sufficient condition for the existence of an equilibrium, not a necessary one
 a player's strategy in a mixed strategy Nash equilibrium may assign probability 1 to a single action.

4.4 Dominated actions
 Definition 120.1 (Strict domination): In a strategic game with vNM preferences, player i's mixed strategy αi strictly dominates her action a'i if
   Ui(αi, a-i) > ui(a'i, a-i) for every list a-i of the other players' actions,
 where ui is a Bernoulli payoff function and Ui(αi, a-i) is player i's expected payoff under ui when she uses the mixed strategy αi and the actions chosen by the other players are given by a-i.

4.4 Dominated actions
 An action not strictly dominated by any pure strategy may be strictly dominated by a mixed strategy (see Figure 120.1, which shows player 1's payoffs only):
         L    R
   T     1    1
   M     4    0
   B     0    3
 (Figure 120.1)
 The action T of player 1 is not strictly (or weakly) dominated by M or by B, but it is strictly dominated by the mixed strategy that assigns probability ½ to M and probability ½ to B.
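 A quick check of this claim, assuming the payoffs of Figure 120.1 above (player 1's payoffs only):

```python
# Sketch: the mixture (1/2 M, 1/2 B) strictly dominates T, while neither M nor B alone does.
from fractions import Fraction as F

u1 = {("T", "L"): 1, ("T", "R"): 1,
      ("M", "L"): 4, ("M", "R"): 0,
      ("B", "L"): 0, ("B", "R"): 3}

def mixed_payoff(mix, col):
    """Expected payoff of the mixed strategy `mix` (action -> probability) against column col."""
    return sum(p * u1[(a, col)] for a, p in mix.items())

def strictly_dominates_T(mix):
    return all(mixed_payoff(mix, col) > u1[("T", col)] for col in ("L", "R"))

assert strictly_dominates_T({"M": F(1, 2), "B": F(1, 2)})   # the half-half mixture dominates T
assert not strictly_dominates_T({"M": F(1)})                # M alone does not
assert not strictly_dominates_T({"B": F(1)})                # B alone does not
```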

4.4 Dominated actions
 Exercise 120.2 (Strictly dominated mixed strategy): In Figure 120.1, the mixed strategy that assigns probability ½ to M and ½ to B is not the only mixed strategy that strictly dominates T. Find all the mixed strategies that do so.
 Exercise 120.3 (Strict domination for mixed strategies): Determine whether each of the following statements is true or false:
 a mixed strategy that assigns positive probability to a strictly dominated action is strictly dominated;
 a mixed strategy that assigns positive probability only to actions that are not strictly dominated is not strictly dominated.

4.4 Dominated actions
 A strictly dominated action is not a best response to any collection of mixed strategies of the other players:
 suppose that player i's action a'i is strictly dominated by her mixed strategy αi;
 player i's expected payoff Ui(αi, α-i) when she uses the mixed strategy αi and the other players use the mixed strategies α-i is a weighted average of her payoffs Ui(αi, a-i) as a-i varies over all the collections of actions of the other players, with the weight on each a-i equal to the probability with which it occurs when the other players' mixed strategies are α-i;
 player i's expected payoff when she uses the action a'i and the other players use the mixed strategies α-i is a similar weighted average: the weights are the same, but the terms take the form ui(a'i, a-i) rather than Ui(αi, a-i);
 the fact that a'i is strictly dominated by αi means that Ui(αi, a-i) > ui(a'i, a-i) for every collection a-i of the other players' actions;
 hence player i's expected payoff when she uses the mixed strategy αi exceeds her expected payoff when she uses the action a'i.

4.4 Dominated actions
 Consequently, a strictly dominated action is not used with positive probability in any mixed strategy Nash equilibrium.
 Definition 121.1 (Weak domination): In a strategic game with vNM preferences, player i's mixed strategy αi weakly dominates her action a'i if
   Ui(αi, a-i) ≥ ui(a'i, a-i) for every list a-i of the other players' actions, and
   Ui(αi, a-i) > ui(a'i, a-i) for some list a-i of the other players' actions,
 where ui is a Bernoulli payoff function and Ui(αi, a-i) is player i's expected payoff under ui when she uses the mixed strategy αi and the actions chosen by the other players are given by a-i.

4.4 Dominated actions
 Note that a weakly dominated action may be used with positive probability in a mixed strategy Nash equilibrium. We can therefore not eliminate weakly dominated actions from consideration when finding mixed strategy equilibria. However:
 Proposition 122.1 (Existence of mixed strategy Nash equilibrium with no weakly dominated strategies in finite games): Every strategic game with vNM preferences in which each player has finitely many actions has a mixed strategy Nash equilibrium in which no player's strategy is weakly dominated.

4.5 Pure equilibria when randomization is allowed
 Equilibria when the players are not allowed to randomize remain equilibria when they are allowed to randomize.
 Proposition 122.2 (Pure strategy equilibria survive when randomization is allowed): Let a* be a Nash equilibrium of G and, for each player i, let α*i be the mixed strategy of player i that assigns probability one to the action a*i. Then α* is a mixed strategy Nash equilibrium of G'.

4.5 Pure equilibria when randomization is allowed
 Any pure equilibria that exist when the players are allowed to randomize are also equilibria when they are not allowed to randomize.
 Proposition 123.1 (Pure strategy equilibria survive when randomization is prohibited): Let α* be a mixed strategy Nash equilibrium of G' in which the mixed strategy of each player i assigns probability one to the single action a*i. Then a* is a Nash equilibrium of G.

4.5 Pure equilibria when randomization is allowed
 To establish these two propositions, let N be a set of players and, for each player i, let Ai be a set of actions. Consider the following two games:
 G: the strategic game with ordinal preferences in which the set of players is N, the set of actions of each player i is Ai, and the preferences of each player i are represented by the payoff function ui;
 G': the strategic game with vNM preferences in which the set of players is N, the set of actions of each player i is Ai, and the preferences of each player i are represented by the expected value of ui.

4.5 Pure equilibria when randomization is allowed
 Proposition 122.2: Let a* be a Nash equilibrium of G and, for each player i, let α*i be the mixed strategy that assigns probability 1 to a*i. Since a* is a Nash equilibrium of G, we know that in G' no player i has an action that yields her a payoff higher than does a*i when all other players adhere to α*-i. So no mixed strategy of player i yields her a payoff higher than does α*i. Thus α* is a mixed strategy equilibrium of G'.
 Proposition 123.1: Let α* be a mixed strategy Nash equilibrium of G' in which every player's mixed strategy is pure; for each player i, denote by a*i the action to which α*i assigns probability one. Then α* satisfies the two conditions in Proposition 116.2, so no action of any player i yields her a payoff higher than does a*i when the other players adhere to a*-i. Thus a* is a Nash equilibrium of G.

4.7 Equilibrium in a single population
 Definition 129.1 (Symmetric two-player strategic game with vNM preferences): A two-player strategic game with vNM preferences is symmetric if the players' sets of actions are the same and the players' preferences are represented by the expected values of payoff functions u1 and u2 for which u1(a1, a2) = u2(a2, a1) for every action pair (a1, a2).
 Definition 129.2 (Symmetric mixed strategy Nash equilibrium): A profile α* of mixed strategies in a strategic game with vNM preferences in which each player has the same set of actions is a symmetric mixed strategy Nash equilibrium if it is a mixed strategy Nash equilibrium and α*i is the same for every player i.

4.7 Equilibrium in a single population
 Game of approaching pedestrians (Figure 129.1, based on Figure 115.1):
             Left    Right
   Left      1,1     0,0
   Right     0,0     1,1
 (Figure 129.1)
 This game has two deterministic steady states, (Left, Left) and (Right, Right), corresponding to the two symmetric Nash equilibria in pure strategies.
 The game also has a symmetric mixed strategy Nash equilibrium, in which each player assigns probability ½ to Left and probability ½ to Right. This equilibrium corresponds to a steady state in which half of all encounters result in collisions!
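 A quick check, assuming the coordination payoffs reconstructed in Figure 129.1 above, that the half-half strategy is a symmetric mixed strategy equilibrium and that it produces a collision in half of all encounters:

```python
# Sketch: indifference check for the symmetric mixed equilibrium of the pedestrian game.
from fractions import Fraction as F

u = {("Left", "Left"): 1, ("Left", "Right"): 0,
     ("Right", "Left"): 0, ("Right", "Right"): 1}   # row player's payoff (the game is symmetric)

p = F(1, 2)                                          # probability that the opponent chooses Left
payoff_left  = p * u[("Left", "Left")]  + (1 - p) * u[("Left", "Right")]
payoff_right = p * u[("Right", "Left")] + (1 - p) * u[("Right", "Right")]
assert payoff_left == payoff_right                   # each action yields 1/2, so mixing is optimal

collision_prob = p * (1 - p) + (1 - p) * p           # the players choose different sides
print(collision_prob)                                # 1/2
```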

4.7 Equilibrium in a single population
 Exercise 130.3 (Bargaining): Pairs of players from a single population bargain over the division of a pie of size 10. The members of a pair simultaneously make demands; the possible demands are the nonnegative even integers up to 10. If the demands sum to 10, then each player receives her demand. If the demands sum to less than 10, then each player receives her demand plus half of the pie that remains after both demands have been satisfied. If the demands sum to more than 10, then neither player receives any payoff.
 Find all the symmetric mixed strategy Nash equilibria in which each player assigns positive probability to at most two demands. (Many situations in which each player assigns positive probability to two actions, say a' and a'', can be ruled out as equilibria because when one player uses such a strategy, some action yields the other player a payoff higher than does one or both of the actions a' and a''.)

4.9 The formation of players' beliefs
 In a Nash equilibrium, each player chooses a strategy that maximizes her expected payoff, knowing the other players' strategies.
 The idea underlying the previous analysis is that the players have learned each other's strategies from their experience playing the game.
 The idealized situation is the following:
 for each player in the game, there is a large population of individuals who may take the role of that player in any play of the game;
 in each play of the game, one participant is drawn randomly from each population.
 In this situation, a new individual who joins a population can learn the other players' strategies by observing their actions over many plays of the game.

4.9 The formation of players' beliefs
 As long as the number of new players is small, existing players' encounters with neophytes (who may use non-equilibrium strategies) will be sufficiently rare that their beliefs about the steady state will not be disturbed. So a new player's problem is simply to learn the other players' actions.
 But what might happen if new players simultaneously join more than one population in sufficient numbers that the probability of their meeting one another is no longer small? In particular, can we expect a steady state to be reached if no one has experience?

4.9 The formation of players' beliefs
 4.9.1 Eliminating dominated actions
 In some games, players may reasonably be expected to choose their Nash equilibrium actions from an introspective analysis of the game:
 at the extreme (e.g. the Prisoner's Dilemma), each player's best action is independent of the other players' actions;
 in a less extreme case, some player's best action may depend on the other players' actions, but the actions the other players will choose may be clear because each of these players has an action that strictly dominates all others.

4.9 The formation of players' beliefs
 e.g.: in the game in Figure 135.1 (a 2×2 game with rows T, B for player 1 and columns L, R for player 2), player 2's action R strictly dominates L: no matter what player 2 thinks player 1 will play, she should choose R. Consequently, player 1, who can deduce by this argument that player 2 will choose R, may reason that she should choose B, even without any experience of the game.

4.9 The formation of players' beliefs
 4.9.2 Learning
 Another approach to the question of how a steady state might be reached assumes that a player learns:
 she starts with an unexplained "prior" belief about the other players' actions;
 she changes these beliefs in response to information she receives.
 Two such theories are best response dynamics and fictitious play.
 Best response dynamics: a simple theory which assumes that in the first period each player chooses a best response to an arbitrary deterministic belief about the other players' actions, and that in each subsequent period each player chooses a best response to the other players' actions in the previous period (i.e. each player believes that the other players will choose the actions they chose in the previous period).
 An action profile that remains the same from period to period (a steady state) is then a pure Nash equilibrium of the game. The two questions are then: does the game converge to a steady state? how long does it take to converge?

4.9 The formation of players' beliefs
 e.g.: in the BoS game (Example 18.2), best response dynamics do not converge for some initial beliefs: if player 1 initially believes that player 2 will choose Stravinsky and player 2 initially believes that player 1 will choose Bach, then the players' choices will subsequently alternate indefinitely between the action pairs (Bach, Stravinsky) and (Stravinsky, Bach).
 Under best response dynamics, a player's belief does not admit the possibility that her opponents' actions are realizations of mixed strategies.
 Fictitious play: players consider the actions chosen in all the previous periods when forming a belief about their opponents' strategies, and they treat these actions as realizations of mixed strategies:
 each player begins with an arbitrary probabilistic belief about the other player's action;
 then, in any period, she adopts the belief that her opponent is using a mixed strategy in which the probability of each action is proportional to the frequency with which her opponent chose that action in the previous periods.
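 A minimal sketch of the best response dynamics for BoS described above, assuming the standard BoS payoffs ((Bach, Bach) gives (2,1), (Stravinsky, Stravinsky) gives (1,2), mismatches give (0,0)). Starting from the initial beliefs of the example, the play alternates between (S, B) and (B, S) and never settles:

```python
# Sketch: best response dynamics in BoS with the initial beliefs of the example above.
U1 = {("B", "B"): 2, ("B", "S"): 0, ("S", "B"): 0, ("S", "S"): 1}   # player 1's payoffs (assumed)
U2 = {("B", "B"): 1, ("B", "S"): 0, ("S", "B"): 0, ("S", "S"): 2}   # player 2's payoffs (assumed)
ACTIONS = ("B", "S")

def best_response(payoffs, role, opponent_action):
    """Best response of the player in seat `role` (0 = row, 1 = column) to the opponent's action."""
    def payoff(a):
        profile = (a, opponent_action) if role == 0 else (opponent_action, a)
        return payoffs[profile]
    return max(ACTIONS, key=payoff)

belief_about_2, belief_about_1 = "S", "B"    # player 1 expects Stravinsky, player 2 expects Bach
for period in range(6):
    a1 = best_response(U1, 0, belief_about_2)
    a2 = best_response(U2, 1, belief_about_1)
    print(period, a1, a2)                    # alternates (S, B), (B, S), (S, B), ... : no convergence
    belief_about_1, belief_about_2 = a1, a2
```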

4.9 The formation of players' beliefs
 Note that:
 in any two-player game in which each player has two actions (e.g. Matching Pennies), fictitious play converges to a mixed strategy Nash equilibrium from any initial beliefs;
 for other games, there are initial beliefs for which the process does not converge.

4.10 Extension: finding all mixed strategy Nash equilibria
 The following systematic method can be used to find all mixed strategy Nash equilibria of a game:
 for each player i, choose a subset Si of her set Ai of actions;
 check whether there exists a mixed strategy profile α such that (i) the set of actions to which each strategy αi assigns positive probability is Si and (ii) α satisfies the conditions of Proposition 116.2;
 repeat the analysis for every collection of subsets of the players' sets of actions.
 The shortcoming of the method is that for games in which each player has several actions, or in which there are several players, the number of possibilities to examine is huge. In a two-player game in which each player has three actions, each player's set of actions has seven non-empty subsets (three consisting of a single action, three consisting of two actions, and one consisting of the entire set of actions), so there are 49 (7×7) possible collections of subsets to check.

4.10 Extension: finding all mixed strategy Nash equilibria
 Example 138.1: finding all mixed strategy equilibria of a two-player game in which each player has two actions. Denote the actions and payoffs as in Figure 139.1:
             L            R
   T      u11, v11     u12, v12
   B      u21, v21     u22, v22
 (Figure 139.1)
 Each player's set of actions has three nonempty subsets: two each consisting of a single action, and one consisting of both actions. Thus there are nine (3×3) pairs of subsets of the players' action sets.

4.10 Extension: finding all mixed strategy Nash equilibria
 For each pair (S1, S2), we check whether there is a pair (α1, α2) of mixed strategies such that each strategy αi assigns positive probability only to actions in Si and the conditions in Proposition 116.2 are satisfied:
 checking the four pairs of subsets in which each player's subset consists of a single action amounts to checking whether any of the four pairs of actions is a pure strategy equilibrium;
 consider the pair of subsets {T,B} for player 1 and {L} for player 2:
 the second condition in Proposition 116.2 is automatically satisfied for player 1 (she has no actions to which she assigns probability 0);
 the first condition in Proposition 116.2 is automatically satisfied for player 2 (she assigns positive probability to only one action).

4.10 Extension: finding all mixed strategy Nash equilibria
 Thus, for there to be a mixed strategy equilibrium in which player 1's probability of using T is p and player 2 uses L with probability one, we need:
 u11 = u21: player 1's payoffs to her two actions must be equal;
 p·v11 + (1-p)·v21 ≥ p·v12 + (1-p)·v22: L must be at least as good as R for player 2, given player 1's mixed strategy.
 A similar argument applies to the three other pairs of subsets in which one player's subset consists of both her actions and the other player's subset consists of a single action.
 To check finally whether there is a mixed strategy equilibrium in which the subsets are {T,B} for player 1 and {L,R} for player 2, we need to find a pair of mixed strategies that satisfies the first condition of Proposition 116.2 (the second condition is automatically satisfied, since no action has probability 0). That is, we need to find p and q such that:
   q·u11 + (1-q)·u12 = q·u21 + (1-q)·u22
   p·v11 + (1-p)·v21 = p·v12 + (1-p)·v22
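 A sketch of part of this procedure for the two-action case, using the notation of Figure 139.1. It checks the four pure support pairs and the pair in which both supports contain both actions; the four support pairs in which exactly one player mixes are omitted for brevity. The BoS payoffs used to illustrate it are an assumption consistent with the lecture's earlier BoS example.

```python
# Sketch: support enumeration for a two-player, two-action game (rows T/B, columns L/R).
from fractions import Fraction as F

def equilibria_2x2(u, v):
    """u[r][c], v[r][c]: payoffs of players 1 and 2, r in {0:T, 1:B}, c in {0:L, 1:R}."""
    found = []
    # Pure support pairs: (r, c) is an equilibrium if neither player gains by deviating.
    for r in (0, 1):
        for c in (0, 1):
            if u[r][c] >= u[1 - r][c] and v[r][c] >= v[r][1 - c]:
                found.append(("pure", (r, c)))
    # Both supports equal {T,B} and {L,R}: solve the two indifference equations above,
    # with q = P(player 2 plays L) and p = P(player 1 plays T).
    dq = u[0][0] - u[0][1] - u[1][0] + u[1][1]
    dp = v[0][0] - v[0][1] - v[1][0] + v[1][1]
    if dq != 0 and dp != 0:
        q = F(u[1][1] - u[0][1], dq)
        p = F(v[1][1] - v[1][0], dp)
        if 0 < q < 1 and 0 < p < 1:
            found.append(("mixed", (p, q)))
    return found

# BoS (assumed payoffs): (B,B) -> (2,1), (S,S) -> (1,2), mismatches -> (0,0).
print(equilibria_2x2([[2, 0], [0, 1]], [[1, 0], [0, 2]]))
# [('pure', (0, 0)), ('pure', (1, 1)), ('mixed', (Fraction(2, 3), Fraction(1, 3)))]
```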

4.10 Extension: finding all mixed strategy Nash equilibria
 Example 139.1 (Figure 139.2): find all mixed strategy Nash equilibria of a two-player game.
 Exercise 141.2 (Figure 141.1): find all mixed strategy Nash equilibria of a variant of BoS.

4.10 Extension: finding all mixed strategy Nash equilibria
 Exercise 142.1: find the mixed strategy Nash equilibria of the three-player game in Figure 142.1, in which each player has two actions, A and B.

4.11 Extension: games in which each player has a continuum of actions
 Consider now the case of a continuum of actions: the principles involved in finding mixed strategy equilibria of such games are the same as for games with finitely many actions, though the techniques are different.
 In a game in which a player has a continuum of actions, a mixed strategy of a player is determined by the probabilities it assigns to sets of actions.
 Proposition 116.2 becomes Proposition 142.2 (Characterization of mixed strategy Nash equilibrium): A mixed strategy profile α* in a strategic game with vNM preferences is a mixed strategy Nash equilibrium if and only if, for each player i,
 α*i assigns probability zero to the set of actions ai for which the action profile (ai, α*-i) yields player i an expected payoff less than her expected payoff to α*, and
 for no action ai does the action profile (ai, α*-i) yield player i an expected payoff greater than her expected payoff to α*.

4.11 Extension: games in which each player has a continuum of actions
 Games with a continuum of actions can be very complex to analyze, and the form of a mixed strategy Nash equilibrium in such a game can be very complex. Some such games, however, have equilibria of a particularly simple form, in which each player's equilibrium mixed strategy assigns probability zero except in an interval.
 A significant class of games consists of games in which each player's set of actions is a one-dimensional interval of numbers:
 consider such a game with two players;
 let player i's set of actions be the interval from a-i to a+i, for i = 1, 2;
 identify each player's mixed strategy with a cumulative probability distribution on this interval: the mixed strategy of player i is a nondecreasing function Fi for which 0 ≤ Fi(ai) ≤ 1 for every action ai; the number Fi(ai) is the probability that player i's action is at most ai.

4. and Fi(z)=1.A (Short) Introduction 160 .F2) satisfies the following conditions for i=1.11 Extension: games in which each player as a continuum of actions  The mixed strategies (F1.2:  There are numbers xi and yi such that player i’s mixed strategy Fi assigns probability zero except in the interval from xi to yi: Fi(z)=0 for z<xi. for z ≥ yi. 9/12/2011 Game Theory .  Player i’s expected payoff when her action is ai and the other player uses her mixed strategy Fj takes the form:  = ci for xi ≤ ai ≤ yi  ≤ ci for ai < xi and ai > yi where ci is a constant.

4.11 Extension: games in which each player has a continuum of actions
 Example 143.1 (All-pay auction): Two people submit sealed bids for an object worth $K to each of them. Each person's bid may be any nonnegative number up to $K. The winner is the person whose bid is higher; in the event of a tie, each person receives half of the object (which she values at $K/2). Each person pays her bid, regardless of whether she wins, and has preferences represented by the expected amount of money she receives.

4.11 Extension: games in which each player has a continuum of actions
 This situation may be modeled by the following strategic game:
 players: the two bidders;
 actions: each player's set of actions is the set of possible bids (nonnegative numbers up to K);
 payoff functions: each player i's preferences are represented by the expected value of the payoff function
   ui(a1, a2) = -ai        if ai < aj
              = K/2 - ai   if ai = aj
              = K - ai     if ai > aj
 e.g.: a competition between two firms to develop a new product by some deadline, where the firm that spends the most develops the better product, which captures the entire market.

4.11 Extension: games in which each player has a continuum of actions
 An all-pay auction has no pure strategy Nash equilibrium, by the following argument:
 no pair of actions (x, x) with x < K is a Nash equilibrium, because either player can increase her payoff by slightly increasing her bid;
 (K, K) is not a Nash equilibrium, because either player can increase her payoff from -K/2 to 0 by reducing her bid to 0;
 no pair of actions (a1, a2) with a1 ≠ a2 is a Nash equilibrium, because the player whose bid is higher can increase her payoff by reducing her bid (and the player whose bid is lower can, if her bid is positive, increase her payoff by reducing her bid to 0).

4.11 Extension: games in which each player has a continuum of actions
 Consider the possibility that the game has a mixed strategy Nash equilibrium. Denote by Fi player i's mixed strategy (a cumulative distribution function over the interval of possible bids).
 We restrict attention to strategy pairs (F1, F2) for which, for i = 1, 2, there are numbers xi and yi such that Fi assigns positive probability only to the interval from xi to yi.
 We look for an equilibrium in which neither mixed strategy assigns positive probability to any single bid (there are infinitely many possible bids, and for a continuous random variable Prob(x = c) = 0). In that case, Fi(ai) is both the probability that player i bids at most ai and the probability that she bids less than ai.

4.11 Extension: games in which each player has a continuum of actions
 To investigate the possibility of such an equilibrium, consider player 1's expected payoff when she uses the action a1, given player 2's mixed strategy F2:
 if a1 < x2, then a1 is less than player 2's bid with probability one, so player 1's payoff is -a1;
 if a1 > y2, then a1 exceeds player 2's bid with probability one, so player 1's payoff is K - a1;
 if x2 ≤ a1 ≤ y2, then player 1's expected payoff is: with probability F2(a1), player 2's bid is less than a1, in which case player 1's payoff is K - a1; with probability 1 - F2(a1), player 2's bid exceeds a1, in which case player 1's payoff is -a1 (by assumption, the probability that player 2's bid is exactly equal to a1 is zero). Player 1's expected payoff is thus (K - a1)·F2(a1) + (-a1)·(1 - F2(a1)) = K·F2(a1) - a1.

4.11 Extension: games in which each player has a continuum of actions
 We need to find values of x1 and y1 and a strategy F2 such that player 1's expected payoff satisfies the condition of Proposition 142.2: it is a constant c1 on the interval from x1 to y1 and less than this constant outside this interval.
 (Figure 144.1: player 1's expected payoff as a function of her bid, equal to c1 between x1 and y1 and lower elsewhere on the interval from a-1 to a+1.)

4.11 Extension: games in which each player has a continuum of actions
 The conditions are therefore:
 K·F2(a1) - a1 = c1 for x1 ≤ a1 ≤ y1, for some constant c1;
 F2(x2) = 0 and F2(y2) = 1;
 F2 must be nondecreasing (it is a CDF);
 and analogous conditions must hold for x2, y2, and F1.
 Solution: if x1 = x2 = 0, y1 = y2 = K, and F1(z) = F2(z) = z/K for all z with 0 ≤ z ≤ K, these conditions are fulfilled. Each player's expected payoff is then constant and equal to 0 for all her actions.
 Thus the game has a mixed strategy Nash equilibrium in which each player randomizes "uniformly" over all her actions. Proving that it is the only mixed strategy Nash equilibrium is more complex.
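 A quick numerical check of this solution, assuming K = 10 for illustration: against an opponent whose bids are uniform on [0, K] (so F2(z) = z/K), every bid a1 in [0, K] yields the same expected payoff K·F2(a1) - a1 = 0.

```python
# Sketch: expected payoff of any bid against a uniform opponent in the all-pay auction.
import random

K = 10.0

def expected_payoff(a1, n_samples=200_000):
    """Monte Carlo estimate: draw opponent bids uniformly on [0, K] and average the all-pay payoff."""
    total = 0.0
    for _ in range(n_samples):
        a2 = random.uniform(0.0, K)
        total += (K - a1) if a1 > a2 else (-a1)      # ties have probability zero
    return total / n_samples

for a1 in (0.0, 2.5, 5.0, 7.5, 10.0):
    exact = K * (a1 / K) - a1                         # K*F2(a1) - a1 = 0
    print(a1, exact, round(expected_payoff(a1), 2))   # the simulated value is close to 0
```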

4.12 Appendix: representing preferences by expected payoffs
 4.12.1 Expected payoffs
 Suppose that a decision-maker has preferences over a set of deterministic outcomes and that each of her actions results in a lottery (probability distribution) over these outcomes.
 To determine the action she chooses, we need her preferences over lotteries. We cannot derive these preferences from her preferences over deterministic outcomes.
 Under fairly weak assumptions, we can represent these preferences by a payoff function: we can find a function, say U, over lotteries (p1, ..., pk), where each outcome occurs with probability pi, such that U(p1, ..., pk) > U(p'1, ..., p'k) if and only if the decision-maker prefers (p1, ..., pk) to (p'1, ..., p'k).

4.12 Appendix: representing preferences by expected payoffs
 In most cases, however, we need more structure to go further in the analysis. The standard approach, developed by von Neumann and Morgenstern (1944), is to impose an additional assumption (known as the "independence assumption") that allows us to conclude that the decision-maker's preferences can be represented by an expected payoff function.
 Under this assumption, there is a payoff function u over deterministic outcomes such that the decision-maker's preference relation over lotteries is represented by the function
   U(p1, ..., pK) = p1·u(a1) + ... + pK·u(aK),
 where ak is the kth outcome of the lottery: the decision-maker prefers the lottery (p1, ..., pK) to (p'1, ..., p'K) if and only if p1·u(a1) + ... + pK·u(aK) > p'1·u(a1) + ... + p'K·u(aK).

4.12 Appendix: representing preferences by expected payoffs
 This sort of payoff function (for which the decision-maker's preferences are represented by the expected value of the payoffs) is known as a Bernoulli payoff function.
 e.g.: suppose that there are three possible deterministic outcomes: the decision-maker may receive $0, $1 or $5 (and naturally prefers $5 to $1 to $0). Suppose that she prefers the lottery (1/2, 0, 1/2) to the lottery (0, 3/4, 1/4), where the probabilities are given for the outcomes $0, $1 and $5 in that order. This preference is consistent with preferences represented by the expected value of a payoff function u for which u(0) = 0, u(1) = 1 and u(5) = 4:
   (1/2)·0 + (1/2)·4 = 2 > (3/4)·1 + (1/4)·4 = 7/4.

4.12 Appendix: representing preferences by expected payoffs
 The great advantage of a Bernoulli payoff function is that preferences are completely specified by the payoff function: once we know u(ak) for each possible outcome ak, we know the decision-maker's preferences among all lotteries.
 A Bernoulli payoff function must however not be confused with a payoff function that merely represents the decision-maker's preferences over deterministic outcomes:
 if u is a Bernoulli payoff function, it certainly is a payoff function that represents the decision-maker's preferences over deterministic outcomes;
 however, the converse is not true.
 e.g.: suppose a decision-maker prefers $5 to $1 to $0 and prefers the lottery (1/2, 0, 1/2) to (0, 3/4, 1/4). Define u by u(0) = 0, u(1) = 3 and u(5) = 4. Then u is compatible with her preferences over deterministic outcomes, but it is not compatible with her preferences over lotteries:
   (1/2)·0 + (1/2)·4 = 2 < (3/4)·3 + (1/4)·4 = 13/4.
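 A small check of the two numerical examples above:

```python
# Sketch: the same two lotteries ranked under the two payoff functions discussed above.
from fractions import Fraction as F

outcomes = (0, 1, 5)                       # dollar amounts
lottery_A = (F(1, 2), F(0), F(1, 2))       # probabilities of $0, $1, $5
lottery_B = (F(0), F(3, 4), F(1, 4))

def expected_payoff(u, lottery):
    return sum(p * u[x] for p, x in zip(lottery, outcomes))

u_bernoulli = {0: 0, 1: 1, 5: 4}           # consistent with preferring lottery A
u_ordinal   = {0: 0, 1: 3, 5: 4}           # same ordinal ranking of $0, $1, $5

print(expected_payoff(u_bernoulli, lottery_A), expected_payoff(u_bernoulli, lottery_B))  # 2 > 7/4
print(expected_payoff(u_ordinal, lottery_A), expected_payoff(u_ordinal, lottery_B))      # 2 < 13/4
```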

4.12 Appendix: representing preferences by expected payoffs
 4.12.2 Equivalent Bernoulli payoff functions
 Lemma 148.1 (Equivalence of Bernoulli payoff functions): Suppose that there are at least three possible outcomes. The expected values of the Bernoulli payoff functions u and v represent the same preferences over lotteries if and only if there exist numbers k and m (with m > 0) such that u(x) = k + m·v(x) for all x.
 Exercise 149.2 (Normalized Bernoulli payoff functions): Suppose that a decision-maker's preferences can be represented by the expected value of the Bernoulli payoff function u. Find a Bernoulli payoff function whose expected value represents the decision-maker's preferences and that assigns a payoff of 1 to the best outcome and a payoff of 0 to the worst outcome.

4.12 Appendix: representing preferences by expected payoffs
 4.12.3 Equivalent strategic games with vNM preferences
 (Figure 150.1: three payoff tables for a BoS-like game, each with actions B and S for both players.)
 The three games of Figure 150.1 represent the same strategic game with ordinal (deterministic) preferences, but only the left and middle tables represent the same strategic game with vNM preferences. The reason is that the payoff functions in the middle table are (increasing) linear functions of the payoff functions in the left table, whereas the payoff functions in the right table are not.

4.12 Appendix: representing preferences by expected payoffs
 Denote by ui, vi, wi the Bernoulli payoff functions of the three games (left, middle, right). Then v1(a) = 2·u1(a) and v2(a) = -3 + u2(a).
 But w1 is not a linear function of u1: there are no constants μ and θ such that w1(a) = μ + θ·u1(a), since the system
   0 = μ + θ·0,  1 = μ + θ·1,  3 = μ + θ·2
 has no solution.

4.12 Appendix: representing preferences by expected payoffs
 Exercise 150.1 (Games equivalent to the Prisoner's Dilemma): Which of the right tables in Figure 150.2 represents the same strategic game with vNM preferences as the Prisoner's Dilemma specified in the left panel? (Figure 150.2: the Prisoner's Dilemma and two candidate payoff tables, each with actions C and D.)

9 Bayesian Games

Framework
 An assumption underlying the notion of Nash equilibrium is that each player holds the correct belief about the other players' actions. To do so, a player must know the other players' preferences.
 However, in many situations players are not perfectly informed about their opponents' characteristics (e.g.: firms may not know each other's cost functions).
 In this chapter, we generalize the notion of strategic game to allow the analysis of situations in which each player is imperfectly informed about an aspect of her environment that is relevant to her choice of action.

9.1 Motivational examples
 We start with one example to illustrate the main ideas of Bayesian games.
 Example 273.1 (Variant of BoS with imperfect information): Consider a variant of BoS in which player 1 is unsure whether player 2 prefers to go out with her or prefers to avoid her, whereas player 2 (as before) knows player 1's preferences.
 (Figure 274.1: two payoff tables with actions B and S, one for the state in which player 2 wishes to meet player 1 and one for the state in which player 2 wishes to avoid player 1; each state has probability 1/2.)

9.1 Motivational examples
 Specifically, suppose player 1 thinks that with probability 1/2 player 2 wants to go out with her, and with probability 1/2 player 2 wants to avoid her (see Figure 274.1).
 We can represent the situation as one with two states: player 2 has two possible types, one whose preferences are given by the left table of Figure 274.1 and the other by the right table. In state 1, the Bernoulli payoffs are given by the left table; in state 2, by the right table. Player 1 assigns probability 1/2 to each state.
 Because probabilities are involved, an analysis of the situation requires us to know the players' preferences over lotteries.
 The notion of Nash equilibrium must be generalized to this new setting.

9.1 Motivational examples
 Player 1 does not know player 2's type. So, to choose an action rationally, she needs to form a belief about the action of each type of player 2. Given these beliefs and her belief about the likelihood of each type, she can calculate her expected payoff to each of her actions.
 For example, if player 1 thinks that type 1 of player 2 will choose B and type 2 of player 2 will choose S, then, conditional on choosing B herself, she thinks that B will yield her a payoff of 2 with probability 1/2 and of 0 with probability 1/2; so, in this case, her expected payoff to B is (1/2)·2 + (1/2)·0 = 1. Similar calculations lead to Figure 275.1 (each column is a pair of actions of type 1 and type 2 of player 2):
          (B,B)   (B,S)   (S,B)   (S,S)
   B        2       1       1       0
   S        0      1/2     1/2      1
 (Figure 275.1)
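 The table can be reproduced with a few lines of code. Player 1's payoffs are assumed here to be the usual BoS payoffs (2 if both choose B, 1 if both choose S, 0 otherwise, the same in both states), which is consistent with the (1/2)·2 + (1/2)·0 = 1 computation above.

```python
# Sketch: player 1's expected payoff for each pair of actions of the two types of player 2.
from fractions import Fraction as F
from itertools import product

u1 = {("B", "B"): 2, ("B", "S"): 0, ("S", "B"): 0, ("S", "S"): 1}   # assumed BoS payoffs for player 1
prob = {"type1": F(1, 2), "type2": F(1, 2)}                          # player 1's belief about player 2's type

for a_type1, a_type2 in product("BS", repeat=2):                     # actions of the two types of player 2
    row = {}
    for a1 in "BS":                                                  # player 1's action
        row[a1] = prob["type1"] * u1[(a1, a_type1)] + prob["type2"] * u1[(a1, a_type2)]
    print((a_type1, a_type2), row)    # e.g. ('B', 'S') -> {'B': Fraction(1, 1), 'S': Fraction(1, 2)}
```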

9.1 Motivational examples
 For this situation, we define a pure strategy Nash equilibrium to be a triple of actions (one for player 1 and one for each type of player 2) with the property that:
 the action of player 1 is optimal, given the actions of the two types of player 2 (and player 1's belief about the state);
 the action of each type of player 2 is optimal, given the action of player 1.
 Note that in a Nash equilibrium:
 player 1's action is a best response in Figure 275.1 to the pair of actions of the two types of player 2;
 the action of the type of player 2 who wishes to meet player 1 is a best response in the left table of Figure 274.1 to the action of player 1;
 the action of the type of player 2 who wishes to avoid player 1 is a best response in the right table of Figure 274.1 to the action of player 1.

9.1 Motivational examples
 (B, (B, S)) is a Nash equilibrium: player 1 chooses B, type 1 of player 2 chooses B, and type 2 of player 2 chooses S.
 Why should player 2, who knows her own type, have to plan what to do in both cases?
 She does not! However, we, as analysts, need to consider what she would do in both cases: to determine her best action, player 1, who does not know player 2's type, needs to form a belief about the action each type of player 2 would take, and we wish to impose the equilibrium condition that these beliefs are correct.

9.1 Motivational examples
 Proof:
 given that the actions of the two types of player 2 are (B, S), player 1's action B is optimal (see Figure 275.1);
 given that player 1 chooses B, B is optimal for type 1 of player 2 and S is optimal for type 2 of player 2 (see Figure 274.1).
 We interpret the equilibrium as follows:
 type 1 of player 2, inferring that player 1 will choose B, chooses B, and type 2 of player 2, making the same inference, chooses S;
 player 1, who does not know whether player 2 is of type 1 or of type 2, believes that if player 2 is of type 1 she will choose B, and if player 2 is of type 2 she will choose S.

9.1 Motivational examples
 We can interpret the actions of the two types of player 2 as reflecting player 2's intentions in the hypothetical situation before she knows the state. This corresponds to the following situation:
 initially, player 2 does not know the state;
 she will be informed of the state by a signal that depends on the state;
 before receiving this signal, she plans an action for each possible state.
 The same story is valid for player 1, except that player 1 receives an uninformative signal (the same signal in each state).
 Note that in such a setup, a Nash equilibrium is a list of actions, one for each type of each player, such that the action of each type of each player is a best response to the actions of all the types of the other player, given the player's beliefs about the state after she observes her signal.

9.1 Motivational examples
 Exercise 276.1 (Equilibria of a variant of BoS with imperfect information):
 (i) Show that there is no pure strategy Nash equilibrium of this game in which player 1 chooses S.
 (ii) Find the mixed strategy Nash equilibria of the game. (First check whether there is an equilibrium in which both types of player 2 use pure strategies, then look for equilibria in which one or both of these types randomize.)

9.2 General definitions
 9.2.1 Bayesian games
 A strategic game with imperfect information is called a Bayesian game.
 A key component in the specification of the imperfect information is the set of states: each state is a complete description of one collection of the players' relevant characteristics (preferences, information, ...). For every collection of characteristics that some player believes to be possible, there must be a state.
 At the start of the game a state is realized. The players do not observe this state. Rather, each player receives a signal that may give her some information about the state.
 We denote the signal player i receives in state ω by τi(ω). Note that this is a deterministic function: in each state, a given signal is received. The function τi(·) is called player i's signal function.

9.2 General definitions
 A state that generates a given signal ti is said to be consistent with the signal ti. The size of the set of states consistent with each of player i's signals reflects the quality of player i's information. The two extreme cases are:
 if τi(ω) is different for each value of ω, then player i knows, given her signal, the state that has occurred: she is perfectly informed about all the players' relevant characteristics;
 if τi(ω) is the same for all states, then player i's signal conveys no information: she is perfectly uninformed.
 We refer to player i in the event that she receives the signal ti as type ti of player i. Each type of each player holds a belief about the likelihood of the states consistent with her signal (e.g.: if ti = τi(ω1) = τi(ω2), then type ti of player i assigns probabilities to ω1 and ω2).

9.2 General definitions
 Each player (may) care about the actions chosen by the other players and about the state. We therefore need to specify her preferences regarding probability distributions over pairs (a, ω), consisting of an action profile a and a state ω.
 We assume that each player's preferences over such probability distributions are represented by the expected value of a Bernoulli payoff function. We therefore specify player i's preferences by giving a Bernoulli payoff function ui over pairs (a, ω).

9.2 General definitions
 Definition 279.1 (Bayesian game): A Bayesian game consists of
 a set of players,
 a set of states,
 and for each player:
 a set of actions,
 a set of signals that she may receive and a signal function that associates a signal with each state,
 for each signal that she may receive, a belief about the states consistent with the signal (a probability distribution over the set of states with which the signal is associated),
 a Bernoulli payoff function over pairs (a, ω), where a is an action profile and ω is a state, the expected value of which represents the player's preferences.

9.2 General definitions
 Application to Example 273.1:
 players: the pair of people;
 states: {meet, avoid};
 actions: for each player, {B, S};
 signals: player 1 may receive a single signal, say z; her signal function τ1 satisfies τ1(meet) = τ1(avoid) = z. Player 2 receives one of two signals, m and v; her signal function τ2 satisfies τ2(meet) = m and τ2(avoid) = v;
 beliefs: player 1 assigns probability 1/2 to each state after receiving the signal z; player 2 assigns probability 1 to the state meet after receiving the signal m, and probability 1 to the state avoid after receiving the signal v;
 payoffs: the payoffs ui(a, meet) of each player i for all possible action pairs are given in the left panel of Figure 274.1, and the payoffs ui(a, avoid) in the right panel.
 Note that the set of actions of each player is independent of the state: each player may care about the state, but the set of actions available to her is the same in every state.

9.2 General definitions
 9.2.2 Nash equilibrium
 In a Bayesian game, each player chooses a collection of actions, one for each signal she may receive (each type of each player chooses an action).
 We define a Nash equilibrium of a Bayesian game to be a Nash equilibrium of the strategic game in which each player is one of the types of one of the players in the Bayesian game. In a Nash equilibrium of such a game, the action chosen by each type of each player is optimal, given the actions chosen by every type of every other player.

ai ( )) is the action profile in which player i chooses the action ai ˆ and every other player j chooses a j ( ) 9/12/2011 Game Theory .9. the expected payoff of type ti of player i when she chooses action ai is:  ˆ  Pr( t )u ((a .  ) where : Ω is the set of states ˆ (ai . With these notations.A (Short) Introduction 192 . a i i i i ( )).2 General definitions  Notations:  Pr(ω|ti): probability assigned by the belief ot type ti of player i to state ω.  τj(ω): player j’s signal in state ω.  a(j. We denote âj(ω)=a(j. τj(ω)).tj): action taken by each type tj of each player j. τj(ω)). Her action is this state is a(j.

9.2 General definitions
 Definition 281.1 (Nash equilibrium of a Bayesian game): A Nash equilibrium of a Bayesian game is a Nash equilibrium of the strategic game (with vNM preferences) defined as follows:
 players: the set of all pairs (i, ti) in which i is a player in the Bayesian game and ti is one of the signals that i may receive;
 actions: the set of actions of each player (i, ti) is the set of actions of player i in the Bayesian game;
 preferences: the Bernoulli payoff of each player (i, ti) is her expected payoff as defined above, Σω∈Ω Pr(ω|ti) · ui((a(i, ti), â-i(ω)), ω).

9.2 General definitions
 Exercise 282.3 (Adverse selection): Firm A (the "acquirer") is considering taking over firm T (the "target"). It does not know firm T's value; it believes that this value, when firm T is controlled by its own management, is at least $0 and at most $100, and assigns equal probability to each of the 101 dollar values in this range (uniform distribution). Firm T will be worth 50% more under firm A's management than it is under its own management.
 Suppose that firm A bids y to take over firm T, and that firm T is worth x (under its own management). Then if T accepts A's offer, A's payoff is (3/2)·x - y and T's payoff is y; if T rejects A's offer, A's payoff is 0 and T's payoff is x.
 Model this situation as a Bayesian game in which firm A chooses how much to offer and firm T decides the lowest offer to accept. Find the Nash equilibrium (equilibria?). Explain why the logic behind the equilibrium is called adverse selection.

9.3 Examples concerning information
 9.3.1 More information may hurt
 A decision-maker in a single-person decision problem cannot be worse off if she has more information: if she wishes, she can ignore the information. In a game, this is not true.
 (Figure 283.1: a game with two states, each of probability 1/2; player 1 chooses T or B, player 2 chooses L, M or R.)
 State ω1 (probability 1/2):
            L        M        R
   T     1, 2ε     1, 0     1, 3ε
   B     2, 2      0, 0     0, 3
 State ω2 (probability 1/2):
            L        M        R
   T     1, 2ε     1, 3ε    1, 0
   B     2, 2      0, 3     0, 0

9.3 Examples concerning information
 Consider the two-player game in Figure 283.1, where 0 < ε < 1/2. There are two states, and neither player knows the state.
 Player 2's unique best response to each action of player 1 is L:
 if player 1 chooses T, L yields 2ε while M and R each yield (3/2)ε;
 if player 1 chooses B, L yields 2 while M and R each yield 3/2.
 Player 1's unique best response to L is B.
 Thus (B, L) is the unique Nash equilibrium; the game has no additional mixed strategy Nash equilibrium. Each player gets a payoff of 2.

9.3 Examples concerning information
 Consider now the case in which player 2 is informed of the state: player 2's signal function satisfies τ2(ω1) ≠ τ2(ω2).
 In this game, each type of player 2 has a strictly dominant action, so (T, (R, M)) is the unique Nash equilibrium: type ω1 of player 2 chooses R, type ω2 chooses M, and T is player 1's unique best response to these choices.
 In this equilibrium, player 2's payoff is 3ε (in each state). She is therefore worse off when she knows the state!
 To understand this result: R is good for player 2 only in state ω1 and M is good only in state ω2, while L is a compromise. But it is precisely the compromise L that induces player 1 to choose B; once player 2 knows the state, there is no steady state in which she chooses L, player 1 is led to choose T, and player 2 ends up with only 3ε.
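 A numerical illustration of this comparison, assuming the payoffs of Figure 283.1 as reconstructed above and ε = 0.1:

```python
# Sketch: player 2's equilibrium payoff with and without information about the state.
eps = 0.1

# payoffs[state][(a1, a2)] = (player 1's payoff, player 2's payoff)
payoffs = {
    1: {("T", "L"): (1, 2 * eps), ("T", "M"): (1, 0),       ("T", "R"): (1, 3 * eps),
        ("B", "L"): (2, 2),       ("B", "M"): (0, 0),       ("B", "R"): (0, 3)},
    2: {("T", "L"): (1, 2 * eps), ("T", "M"): (1, 3 * eps), ("T", "R"): (1, 0),
        ("B", "L"): (2, 2),       ("B", "M"): (0, 3),       ("B", "R"): (0, 0)},
}

def exp_u2(a1, a2):
    """Player 2's expected payoff when she cannot observe the state (each state has probability 1/2)."""
    return 0.5 * payoffs[1][(a1, a2)][1] + 0.5 * payoffs[2][(a1, a2)][1]

# Uninformed player 2: L beats M and R against either action of player 1, so (B, L) is played.
assert all(exp_u2(a1, "L") > max(exp_u2(a1, "M"), exp_u2(a1, "R")) for a1 in "TB")
print("uninformed: player 2 gets", exp_u2("B", "L"))           # 2.0

# Informed player 2: she plays R in state 1 and M in state 2, and player 1 switches to T.
print("informed: player 2 gets", payoffs[1][("T", "R")][1])    # 3*eps = 0.3 (up to floating point)
```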

9.6 Illustration: auctions
 9.6.1 Introduction
 In Section 3.5, every bidder knows every other bidder's valuation of the object for sale. This is highly unrealistic!
 Assume that a single object is for sale. Each bidder independently receives some information (a signal) about the value of the object to her:
 if each bidder's signal is simply her valuation, we say that the valuations are private (e.g.: a work of art whose beauty interests the buyers);
 if each bidder's valuation depends on other bidders' signals as well as her own, we say that the valuations are common (e.g.: an oil tract containing unknown reserves, on which each bidder has conducted a test).
 We will consider models in which bids for a single object are submitted simultaneously (sealed bids) and the participant who submits the highest bid obtains the object.

9.6 Illustration: auctions
 We will consider both first-price (the winner pays the price she bid) and second-price (the winner pays the highest of the remaining bids) auctions.
 Note that the argument that the second-price rule corresponds to an open ascending (English) auction depends upon the bidders' valuations being private: in a common valuation setup, the open ascending format reveals information to bidders that they do not have access to in a sealed-bid procedure.
 9.6.2 Independent private values
 Each bidder knows that all other bidders' valuations are at least v- (where v- ≥ 0) and at most v+. She believes that the probability that any given bidder's valuation is at most v is F(v), independent of all other bidders' valuations, where F is a continuous increasing function (a CDF).

9.6 Illustration: auctions
 The preferences of a bidder whose valuation is v are represented by a Bernoulli payoff function that assigns 0 to the outcome in which she does not win the object, and v - p to the outcome in which she wins the object and pays the price p (a quasi-linear payoff function). This amounts to assuming that the bidder is risk-neutral. We assume that the expected payoff of a bidder whose bid is tied for first place is (v - p)/m, where m is the number of tied winning bids.
 We denote by P(b) the price paid by the winner of the auction when the profile of bids is b:
 for a first-price auction, P(b) is the winning bid (the largest bi);
 for a second-price auction, P(b) is the highest bid made by a bidder different from the winner.

9.6 Illustration: auctions
 The Bayesian game that models first- and second-price auctions with independent private valuations is therefore:
 players: the set of bidders 1, ..., n;
 states: the set of all profiles (v1, ..., vn) of valuations, where v- ≤ vi ≤ v+ for all i;
 actions: each player's set of actions is the set of possible bids (nonnegative numbers);
 signals: the set of signals that each player may observe is the set of possible valuations; the signal function is τi(v1, ..., vn) = vi;
 beliefs: each type vi of player i assigns probability F(v1)·...·F(vi-1)·F(vi+1)·...·F(vn) to the event that the valuation of every other player j is at most vj.

9.vn ))   0 if b j  bi for some j  i   Nash equilibrium in a second-price sealed-bid auction: in a second-price sealed-bid auction with imperfect information about valuations (as in the perfect information setup). a player’s bid equal to her valuation weakly dominates all her other bids:   consider some type vi of some player i and let bi be a bid not equal to vi for all bids by all types of all the other players.6 Illustration: auctions  payoff functions: (vi  P(b)) / m if b j  bi for all j  i and b j  bi for m players ui (b... the expected payoff of type vi of player i is at least as high when she bids vi as it is when she bids bi. and for some bids by the various types of the other players. (v1 .. her expected payoff is greater when she bids vi than it is when she bids bi Game Theory .A (Short) Introduction 202 9/12/2011 .

9.6 Illustration: auctions
 We conclude that a second-price sealed-bid auction with imperfect information about valuations has a Nash equilibrium in which every type of every player bids her valuation.
 Exercise 294.1 (Weak domination in a second-price sealed-bid auction): Show that for each type vi of each player i in a second-price sealed-bid auction with imperfect information about valuations, the bid vi weakly dominates all other bids.
 Exercise 294.2 (Nash equilibria of a second-price sealed-bid auction): For every player i, find a Nash equilibrium of a second-price sealed-bid auction in which player i wins.

9.6 Illustration: auctions
 Nash equilibrium in a first-price sealed-bid auction:
 as in the case of perfect information, the bid vi by type vi of player i weakly dominates any bid greater than vi, does not weakly dominate bids less than vi, and is itself weakly dominated by any such lower bid;
 so the game under imperfect information may have a Nash equilibrium in which each bidder bids less than her valuation.
 Take the case of two bidders, with each player's valuation distributed uniformly between 0 and 1 (this assumption means that the fraction of valuations less than v is exactly v, so that F(v) = v for all v with 0 ≤ v ≤ 1). Denote by βi(v) the bid of type v of player i.
 In this case, the game has a (symmetric) Nash equilibrium in which the function βi is the same for both players, with βi(v) = (1/2)·v for all v (each type of each player bids exactly half her valuation).

9.6 Illustration: auctions
 Proof: suppose that each type of bidder 2 bids in this way (β2(v) = v/2).
 As far as player 1 is concerned, player 2's bids are then distributed uniformly between 0 and 1/2; consequently, if player 1 bids more than 1/2, she surely wins.
 If she bids b1 ≤ 1/2, the probability that she wins is the probability that player 2's valuation is less than 2·b1, which is 2·b1.
 Thus player 1's expected payoff as a function of her bid is:
   2·b1·(v1 - b1)  if 0 ≤ b1 ≤ 1/2
   v1 - b1         if b1 > 1/2
 (Figure 295.1: player 1's expected payoff as a function of her bid b1; the maximum is at b1 = v1/2.)
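 A quick numerical check of this best response, for an arbitrary assumed valuation:

```python
# Sketch: against an opponent who bids half her valuation, the payoff-maximizing bid is v1/2.
v1 = 0.8                                             # assumed valuation in (0, 1)

def expected_payoff(b):
    return 2 * b * (v1 - b) if b <= 0.5 else v1 - b

bids = [i / 1000 for i in range(1001)]               # grid of bids in [0, 1]
best_bid = max(bids, key=expected_payoff)
print(best_bid, expected_payoff(best_bid))           # 0.4 = v1/2, with payoff 0.32 = v1**2/2
```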

9.6 Illustration: auctions
 This function is maximized at b1 = v1/2 (this can easily be seen graphically in Figure 295.1 or established mathematically).
 So, conditional on player 2 bidding half her valuation, player 1 optimally bids half her valuation. Both players are identical, so player 2 also optimally bids half her valuation. Thus the game has a Nash equilibrium in which each player bids half her valuation.
 When the number n of bidders exceeds 2, a similar analysis shows that the game has a (symmetric) Nash equilibrium in which every bidder bids the fraction 1 - 1/n of her valuation.
 Interpretation: in this example (but also for any distribution F satisfying our assumptions):
 fix a valuation v;
 choose n-1 valuations randomly and independently, each according to the cumulative distribution F;
 the highest of these n-1 valuations is a random variable; denote it X. Some values of X are less than v and others are greater.

9.6 Illustration: auctions
 Consider the distribution of X in those cases in which it is less than v, and denote its expected value by E(X | X < v): the expected value of the highest of the other players' valuations, conditional on v being higher than all the other valuations.
 Then the following proposition holds: if each bidder's valuation is drawn independently from the same continuous and increasing cumulative distribution, a first-price sealed-bid auction (with imperfect information about valuations) has a (symmetric) Nash equilibrium in which each type v of each player bids E(X | X < v).
 Application to the case of 2 bidders and the uniform distribution: for any valuation v of player 1, the cases in which player 2's valuation is less than v are distributed uniformly between 0 and v; so the expected value of player 2's valuation, conditional on its being less than v, is v/2.

9.6 Illustration: auctions
 Comparing equilibria of first- and second-price auctions:
 as in the case of perfect information, in both auctions the bidder with the highest valuation wins;
 consider the equilibrium of a second-price auction in which every player bids her valuation: the expected price paid by the winner with valuation v is the expectation of the highest of the other n-1 valuations, conditional on this maximum being less than v; in our notation, this is E(X | X < v);
 we have just seen that this is precisely the bid of a player with valuation v in a first-price auction (and hence the amount paid by such a player if she wins);
 so, under the assumptions of this section, first- and second-price auctions are revenue equivalent: both auctions yield the auctioneer the same expected revenue!
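 A Monte Carlo illustration of revenue equivalence for two bidders with valuations uniform on [0, 1]: in the first-price equilibrium each bids half her valuation (revenue max(v1, v2)/2), in the second-price equilibrium each bids her valuation (revenue min(v1, v2)); both have expected value 1/3.

```python
# Sketch: simulated auctioneer revenue under the two equilibria described above.
import random

n = 200_000
first_price_total = second_price_total = 0.0
for _ in range(n):
    v1, v2 = random.random(), random.random()
    first_price_total += max(v1, v2) / 2     # winner pays her own bid, half her valuation
    second_price_total += min(v1, v2)        # winner pays the loser's bid, the loser's valuation
print(first_price_total / n, second_price_total / n)   # both close to 1/3
```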

9.6 Illustration: auctions
 Exercise 296.1 (Auctions with risk-averse bidders): Consider a variant of the Bayesian game defined earlier in this section in which the players are risk averse. Specifically, suppose each of the n players' preferences are represented by the expected value of the Bernoulli payoff function x^(1/m), where x is the player's monetary payoff and m > 1. Suppose also that each player's valuation is distributed uniformly between 0 and 1.
 Show that the Bayesian game that models a first-price sealed-bid auction under these assumptions has a (symmetric) Nash equilibrium in which each type vi of each player i bids
   bi = (1 - 1/(m(n-1)+1))·vi.
 (Note that the solution of the problem max_b [b^k·(v-b)^l] is b = k·v/(k+l).)

9.6 Illustration: auctions
 Compare the auctioneer's revenue in this equilibrium with her revenue in the symmetric Nash equilibrium of a second-price sealed-bid auction in which each player bids her valuation (note that the equilibrium of the second-price auction does not depend on the players' payoff functions).
 9.6.3 Interdependent valuations
 In this setup, each player's valuation depends on the other players' signals as well as her own. Denote the function that gives player i's valuation by gi, and assume that it is increasing in all the signals.
 Let P(b) be the function that determines the price paid by the winner as a function of the profile b of bids.

9.6 Illustration: auctions
 The following Bayesian game models first- and second-price auctions with common valuations:
 players: the set of bidders 1, ..., n;
 states: the set of all profiles (t1, ..., tn) of signals that the players may receive;
 actions: each player's set of actions is the set of possible bids (nonnegative numbers);
 signals: the set of signals each player may observe is the set of possible signal values; the signal function of each player i is τi(t1, ..., tn) = ti (each player observes her own signal);
 beliefs: each type of each player believes that the signal of every other player is independent of all the other players' signals.

9.6 Illustration: auctions
 payoff functions:
   ui(b, (t1, …, tn)) = (gi(t1, …, tn) − P(b)) / m   if bj ≤ bi for all j ≠ i and bj = bi for m players
   ui(b, (t1, …, tn)) = 0                             if bj > bi for some j ≠ i
 Nash equilibrium in a second-price sealed-bid auction
  We analyze the case of two bidders, where each bidder's signal is uniformly distributed from 0 to 1 and the valuation of each bidder i is vi = α ti + γ tj, where j is the other player and α ≥ γ ≥ 0 (the case α = 1 and γ = 0 is the private value case and the case α = γ is called pure common value).
  The assumption is that a bidder does not know any other player's signal but, as the analysis will show, other players' bids contain some information about the other players' signals.

9.6 Illustration: auctions
 Under these assumptions, a second-price sealed-bid auction has a Nash equilibrium in which each type ti of each player i bids (α+γ) ti.
 Proof: to determine the expected payoff of type t1 of player 1, we need to find:
  the probability with which she wins
  the expected price she pays
  the expected value of player 2's signal if she wins
 Probability that player 1 wins:
  given that player 2's bidding function is (α+γ) t2, player 1's bid b1 wins only if b1 ≥ (α+γ) t2, i.e. if t2 ≤ b1/(α+γ);
  t2 is distributed uniformly between 0 and 1, so the probability that it is at most b1/(α+γ) is b1/(α+γ);
  thus, a bid b1 by player 1 wins with probability b1/(α+γ).

9.6 Illustration: auctions
 Expected price player 1 pays if she wins:
  the price she pays is player 2's bid, (α+γ) t2;
  player 2's bid, conditional on being less than b1, is distributed uniformly between 0 and b1;
  so, the expected value of player 2's bid, given that it is less than b1, is ½ b1.
 Expected value of player 2's signal if player 1 wins:
  the expected value of a signal t2 that yields a bid less than b1 is ½ b1/(α+γ).
 The expected payoff if she bids b1 is the difference between her expected valuation (given her signal t1 and the fact that she wins) and the expected price she pays, multiplied by her probability of winning. Using the previous results, we get:
   (α t1 + γ · ½ b1/(α+γ) − ½ b1) · b1/(α+γ) = α b1 (2(α+γ) t1 − b1) / (2(α+γ)²)

9.6 Illustration: auctions
 This function is maximized at b1 = (α+γ) t1: so, if each type t2 of player 2 bids (α+γ) t2, any type t1 of player 1 optimally bids (α+γ) t1. The arguments are symmetric for player 2. We therefore get a symmetric Nash equilibrium.
 Exercise 299.1 (Asymmetric Nash equilibria of second-price sealed-bid common value auctions)
  Show that when α = γ = 1, for any value λ > 0, the game has an (asymmetric) Nash equilibrium in which each type t1 of player 1 bids (1+λ) t1 and each type t2 of player 2 bids (1 + 1/λ) t2.
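As a numerical sanity check of the maximization above (my own sketch, not from the slides), the code below evaluates player 1's expected payoff over a grid of bids, using the expression just derived, and confirms that the maximizer is close to (α+γ) t1.

def expected_payoff(b1, t1, alpha, gamma):
    """Player 1's expected payoff against the bidding function (alpha+gamma)*t2."""
    s = alpha + gamma
    win_prob = b1 / s                           # valid for 0 <= b1 <= alpha + gamma
    expected_valuation = alpha * t1 + gamma * 0.5 * b1 / s   # E[v1 | win]
    expected_price = 0.5 * b1                                # E[(alpha+gamma) t2 | win]
    return (expected_valuation - expected_price) * win_prob

if __name__ == "__main__":
    alpha, gamma, t1 = 1.0, 0.5, 0.6
    bids = [i / 1000 for i in range(0, 1501)]   # grid over [0, alpha + gamma]
    best = max(bids, key=lambda b: expected_payoff(b, t1, alpha, gamma))
    print(f"numerical maximizer ≈ {best:.3f}, theory (alpha+gamma)*t1 = {(alpha + gamma) * t1:.3f}")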

9.6 Illustration: auctions
 Note that when player 1 calculates her expected value of the object, she finds the expected value of player 2's signal given that her bid wins. The fact that her bid wins is, in fact, bad news about the level of the other player's valuation. A bidder who does not take account of this fact is said to suffer from the winner's curse.
 Nash equilibrium in a first-price sealed-bid auction
  A first-price sealed-bid auction has a Nash equilibrium in which each type ti of each player i bids ½ (α+γ) ti.
 Exercise 299.2 (First-price sealed-bid auction with common values)
  Verify that a first-price sealed-bid auction has a Nash equilibrium in which the bid of each type ti of each player i is ½ (α+γ) ti.

9.and second-price auctions holds also under common valuations:  in each case. the expected price paid by the winner (for the symmetric equilibrium) is ½ (α+γ) ti.A (Short) Introduction 217 .and second-price auctions:  The revenue equivalence of first.6 Illustration: auctions  Comparing equilibria of first. 9/12/2011 Game Theory .  in each case. the bidder wins if she has the highest valuation (this is to say.  In fact. with the same probability). the revenue equivalence principle holds much more generally (see Meyrson Lemma).

9.8 Appendix: auctions with an arbitrary distribution of valuations
 9.8.1 First-price sealed-bid auctions
 We construct here a symmetric equilibrium of a first-price sealed-bid auction for a generic distribution F of valuations that satisfies the assumptions in Section 9.6.2 and is differentiable on (v-, v+).
 Denote the bid of type v of bidder i by βi(v). In a symmetric equilibrium, every player uses the same bidding function (so βi(v) = β(v) for some function β).
 Assume:
  β is increasing in valuation (seems reasonable)
  β is differentiable
 Then:
  there is a condition that β must satisfy in any symmetric equilibrium
  exactly one function β satisfies this condition
  this function is increasing

9.8 Appendix: auctions with an arbitrary distribution of valuations
 Suppose that all n-1 players other than i bid according to the increasing differentiable function β.
 A player bidding according to the function β bids at most b if her valuation is at most β⁻¹(b) (the inverse of β evaluated at b).
 Given the assumption on F, the probability of a tie is zero. Hence, for any bid b with β(v-) ≤ b ≤ β(v+), the expected payoff of player i when her valuation is v and she bids b is:
   (v − b) Pr(b is the highest bid) = (v − b) Pr(all n−1 other bids ≤ b)

9.8 Appendix: auctions with an arbitrary distribution of valuations
 Thus, the probability that the bids of the n-1 other players are all at most b is the probability that the highest of n-1 randomly selected valuations (denoted X in Section 9.6.2) is at most β⁻¹(b).
 Denoting the CDF of X by H, the expected payoff is thus:
   (v − b) H(β⁻¹(b))   if β(v-) ≤ b ≤ β(v+)
   0                   if b < β(v-)
   v − b               if b > β(v+)

9.8 Appendix: auctions with an arbitrary distribution of valuations
 In a symmetric equilibrium in which every player bids according to β, we have β(v) ≤ v if v > v- and β(v-) = v-:
  if v > v- and β(v) > v, then a player with valuation v wins with positive probability (players with valuations less than v bid less than β(v) because β is increasing); if she wins, she obtains a negative payoff, while she obtains a payoff of 0 by bidding v-. So, for equilibrium, we need β(v) ≤ v if v > v-.
  given that β satisfies this condition, β(v-) ≤ v-: if β(v-) > v-, a player with valuation v- wins with positive probability and, when she wins, obtains a negative payoff.
  if β(v-) < v-, then players with valuations slightly greater than v- also bid less than v- (because β is continuous); so a player with valuation v- who increases her bid slightly (while still bidding less than v-) wins with positive probability and obtains a positive payoff if she does so.
 We conclude that β(v-) = v-.

Thus.9. the derivative of this expected payoff with respect to b is zero at any best response less than β(v+) : (v  b) H ' (  1 (b)) F.O.8 Appendix: auctions with an arbitrary distribution of valuations   The expected payoff of a player of type v when every other player uses the bidding function β is differentiable on (v-.A (Short) Introduction 222 . if v > v-. is increasing at v-.β(v+))  given that β is increasing and differentiable  given then β(v-) = vand.C.:  H (  (b))  0  ' (  1 (b)) 1 knowing that the derivative of β-1 at the point b is 1  ' (  1 (b)) 9/12/2011 Game Theory .

9.8 Appendix: auctions with an arbitrary distribution of valuations
 In a symmetric equilibrium in which every player bids according to β, the best response of type v of any given player to the other players' strategies is β(v). So, β(v) must satisfy the F.O.C. whenever v- < v < v+.
 Because β is increasing, we have β(v) < β(v+) for v < v+. If b = β(v), then β⁻¹(b) = v, so that substituting b = β(v) into the F.O.C. and multiplying by β′(v) yields:
   β′(v) H(v) + β(v) H′(v) = v H′(v)   for v- < v < v+
 The left-hand side of the differential equation is the derivative with respect to v of β(v) H(v). Thus, for some constant C:
   β(v) H(v) = ∫ from v- to v of x H′(x) dx + C   for v- < v < v+

9.8 Appendix: auctions with an arbitrary distribution of valuations
 The function β is bounded (as it is differentiable), so, considering the limit as v approaches v-, we deduce that C = 0.
 We conclude that if the game has a symmetric Nash equilibrium in which each player's bidding function is increasing and differentiable on (v-, v+), then this function is defined by β*(v-) = v- and:
   β*(v) = (∫ from v- to v of x H′(x) dx) / H(v)   for v- < v ≤ v+
 Note that, the function H being the CDF of X, the highest of n-1 independently drawn valuations, β*(v) is the expected value of X conditional on its being less than v:
   β*(v) = E(X | X < v)

9.8 Appendix: auctions with an arbitrary distribution of valuations
 Note finally that, using integration by parts, the numerator in the expression for β*(v) is:
   v H(v) − ∫ from v- to v of H(x) dx
 Given H(v) = (F(v))^(n−1) (the probability that each of the n−1 valuations is at most v), we have:
   β*(v) = v − (∫ from v- to v of H(x) dx) / H(v) = v − (∫ from v- to v of (F(x))^(n−1) dx) / (F(v))^(n−1)   for v- < v ≤ v+
 We see that β*(v) < v for v- < v < v+.
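As an illustration of this formula (my own sketch, not part of the slides), the code below evaluates β*(v) by numerical integration for an arbitrary CDF F; for the uniform distribution on [0, 1] it reproduces the familiar bid (n−1)v/n.

def equilibrium_bid(v, F, n, v_lo=0.0, steps=10_000):
    """beta*(v) = v - (integral of F(x)**(n-1) dx from v_lo to v) / F(v)**(n-1)."""
    if v <= v_lo:
        return v_lo
    dx = (v - v_lo) / steps
    # Midpoint rule for the integral of H(x) = F(x)**(n-1).
    integral = sum(F(v_lo + (i + 0.5) * dx) ** (n - 1) for i in range(steps)) * dx
    return v - integral / F(v) ** (n - 1)

if __name__ == "__main__":
    uniform_cdf = lambda x: x          # F(x) = x on [0, 1]
    v = 0.8
    for n in (2, 3, 5):
        print(f"n = {n}: beta*({v}) ≈ {equilibrium_bid(v, uniform_cdf, n):.4f}, "
              f"closed form (n-1)v/n = {(n - 1) * v / n:.4f}")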

9.8 Appendix: auctions with an arbitrary distribution of valuations
 Exercise 309.2 (Property of the bidding function in a first-price auction)
  Show that the bidding function β* is increasing.

5. Extensive Games with Perfect Information: Theory

Framework
 Strategic games suppress the sequential structure of decision-making: everything is about ex-ante anticipations and simultaneous decisions.
 Extensive games describe explicitly the sequential structure of decision-making, allowing us to study situations in which each decision-maker is free to change her mind as events unfold.
 In this setup, perfect information means that each decision-maker is fully informed about all previous actions.

5.1 Extensive games with perfect information
 5.1.1 Definition
 We add to players and preferences the order of the players' moves and the actions each player may take at each point.
 Each possible sequence of actions forms a terminal history.
 The function that gives the player who moves at each point in each terminal history is the player function.
 So, the components of an extensive game are:
  The players
  The terminal histories
  The player function
  The preferences for the players

5.1 Extensive games with perfect information
 Example 154.1: Entry game
 An incumbent faces the possibility of entry by a challenger (e.g., a new entrant in an industry). The challenger may enter or not. If it enters, the incumbent may either acquiesce or fight.
 Extensive game components:
  Players: Challenger, Incumbent
  Terminal histories: (In, Acquiesce), (In, Fight), Out
  Player function: player(Start) = Challenger, player(In) = Incumbent
  Preferences: ?

5.1 Extensive games with perfect information
 Note that the set of actions available to each player is NOT part of the game description. But it can be deduced from the description of the game (after any sequence of events, a player chooses an action).
 Entry game actions:
  Challenger: {In, Out}
  Incumbent: {Fight, Acquiesce}

5.1 Extensive games with perfect information
 Terminal histories are a set of sequences:
  The first element of the sequence starts the game
  The order of the sequence depicts the order of actions by players
 Entry game: {(In, Acquiesce), (In, Fight), (Out)}
 Define the subhistories of a finite sequence (a1, a2, …, ak) of actions to be:
  The empty sequence of no actions (the empty history, representing the start of the game)
  All sequences of the form (a1, a2, …, am), where 1 ≤ m ≤ k
 The entire sequence is a subhistory of itself. A subhistory NOT equal to the entire sequence is called a proper subhistory.

5.1 Extensive games with perfect information
 Entry game:
  The subhistories of (In, Acquiesce) are the empty history and the sequences (In) and (In, Acquiesce).
  The proper subhistories are the empty history and the sequence (In).
 Definition 155.1 (Extensive game with perfect information) An extensive game with perfect information consists of
  A set of players
  A set of sequences (terminal histories) with the property that no sequence is a proper subhistory of any other sequence
  A function (the player function) that assigns a player to every sequence that is a proper subhistory of some terminal history
  For each player, preferences over the set of terminal histories
 The set of terminal histories is the set of all sequences of actions that may occur; terminal histories represent the outcomes of the game.
 If the length of the longest terminal history is finite, we say that the game has a finite horizon. If the game has a finite horizon and finitely many terminal histories, we say that the game is finite.

5.1 Extensive games with perfect information
 Entry game: Suppose that the best outcome for the challenger is that it enters and the incumbent acquiesces, and the worst outcome is that it enters and the incumbent fights; whereas the best outcome for the incumbent is that the challenger stays out, and the worst outcome is that the challenger enters and there is a fight. The situation is modeled as follows:
  Players: {Challenger, Incumbent}
  Terminal histories: (In, Acquiesce), (In, Fight), (Out)
  Player function: P(∅) = Challenger, P(In) = Incumbent
  Preferences:
   Challenger: u(In, Acquiesce) = 2, u(Out) = 1, u(In, Fight) = 0
   Incumbent: u(Out) = 2, u(In, Acquiesce) = 1, u(In, Fight) = 0

5.1 Extensive games with perfect information
 [Game tree diagram: players, start of the game, actions, payoffs]
 The sets of actions can be deduced from the set of terminal histories and the player function:
   A(h) = {a: (h, a) is a history}
 where h is some nonterminal history and a is one of the actions available to the player who moves after h.
 E.g.: A(In) = {Acquiesce, Fight}
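To make the components concrete, here is a minimal sketch (my own; the dictionary-based representation is an assumption, not the book's notation) of the entry game, with the available actions A(h) deduced from the terminal histories exactly as in the formula above.

# Entry game: terminal histories and payoffs (Challenger, Incumbent).
TERMINAL_HISTORIES = {
    ("In", "Acquiesce"): (2, 1),
    ("In", "Fight"): (0, 0),
    ("Out",): (1, 2),
}
PLAYER_FUNCTION = {(): "Challenger", ("In",): "Incumbent"}

def available_actions(h):
    """A(h) = {a : (h, a) is a history}, deduced from the terminal histories."""
    return {t[len(h)] for t in TERMINAL_HISTORIES if t[:len(h)] == h and len(t) > len(h)}

if __name__ == "__main__":
    print(available_actions(()))       # {'In', 'Out'}
    print(available_actions(("In",)))  # {'Acquiesce', 'Fight'}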

5.1 Extensive games with perfect information
 Exercise 156.2
  a. Write down the set of players, the set of terminal histories, the player function, and the players' preferences for the game represented on the right side of the slide.
  b. Represent in a diagram the two-player extensive game with perfect information in which the terminal histories are (C,E), (C,F), (D,G), and (D,H), the player function is given by P(∅) = 1 and P(C) = P(D) = 2, player 1 prefers (C,F) to (D,G) to (C,E) to (D,H), and player 2 prefers (D,G) to (C,F) to (D,H) to (C,E).

5.1 Extensive games with perfect information
 An extensive game with perfect information models a situation in which each player, when choosing an action, knows all actions chosen previously and always moves alone.
 Typical situations modeled this way are:
  A race between firms developing a new technology
  A race between directors to become CEO
  Games like chess, ticktacktoe, …
 Two extensions of extensive games with perfect information are:
  Allowing players to move simultaneously
  Allowing arbitrary patterns of information

5.1 Extensive games with perfect information
 Entry game solution:
  Solution: the challenger will enter and the incumbent will acquiesce.
  Analysis:
   The challenger sees that, if it enters, the incumbent will acquiesce.
   As the incumbent will acquiesce in case of entry, the challenger is better off entering than staying out.
  This line of argument is a backward induction.
 Backward induction cannot always be used to solve extensive games:
  For infinite horizon games, there is no end point from which to start the induction.
  But difficulties can arise even for finite horizon games (see the example on the next slide).

5.1 Extensive games with perfect information
 Example: in this game, the Challenger sees that the Incumbent is indifferent between Acquiesce and Fight if it enters. The question of whether to enter or not remains open.

5.1 Extensive games with perfect information
 Another approach to defining equilibrium takes off from the notion of Nash equilibrium: it seeks steady states.
 In games in which backward induction is well-defined, this approach turns out to lead to the backward induction outcome. So, there is no conflict between the two approaches.

5.2 Strategies and outcomes
 5.2.1 Strategies
 Definition 159.1 ((full) strategy) A (full) strategy of player i in an extensive game with perfect information is a function that assigns to each history h after which it is player i's turn to move (P(h) = i, where P is the player function) an action in A(h), the set of actions available after h.
 Player 1 has 2 strategies: C and D.
 Player 2 has 4 strategies:
   Strategy     Action assigned to history C   Action assigned to history D
   Strategy 1   E                               G
   Strategy 2   E                               H
   Strategy 3   F                               G
   Strategy 4   F                               H

5.2 Strategies and outcomes
 Notation
  Player 1 strategies: C, D
  Player 2 strategies: EG, EH, FG, FH
 Actions are written in the order in which they occur in the game. If actions are available at the same stage of the game, they are written from left to right as they appear in the game diagram.
 Each player's full strategy is more than a "plan of action" or "contingency plan": it specifies what the player does for each of the possible choices of the other player. In other words, if the player appoints an agent to play the game for her and tells the agent her strategy, then the agent has enough information to carry out her wishes, whatever actions the other players take.
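A full strategy is just one choice of action per history at which the player moves, so the set of strategies is the Cartesian product of the action sets at those histories. The sketch below (my own illustration) enumerates them for the game above.

from itertools import product

def enumerate_strategies(histories_with_actions):
    """All full strategies: one action chosen at each history where the player moves."""
    histories = list(histories_with_actions)
    for combo in product(*(histories_with_actions[h] for h in histories)):
        yield dict(zip(histories, combo))

if __name__ == "__main__":
    # Player 2 moves after histories C and D.
    player2 = {"C": ["E", "F"], "D": ["G", "H"]}
    for strategy in enumerate_strategies(player2):
        print("".join(strategy.values()))  # EG, EH, FG, FH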

5.2 Strategies and outcomes
 Exercise: Determine the strategies of player 1 in the following game:

5.2 Strategies and outcomes
 Solution: CG, CH, DG, DH.
 Each (full) strategy specifies an action after history (C,E) even if it specifies the action D at the beginning of the game! A (full) strategy must specify an action for every history after which it is the player's turn to move, even for histories that, if the strategy is followed, do not occur (this is the difference between a "plan of action" and a "full strategy").
 A way to interpret a (full) strategy is that it is a plan of action that specifies the player's actions even if she makes mistakes. E.g.: DG may be read as "I choose D but, if I make a mistake and play C, then I will play G if the other player plays E".

5.2 Strategies and outcomes
 5.2.2 Outcomes
 A strategy profile is the vector of strategies played by each player. We denote a strategy profile by s. It determines the terminal history that occurs; the terminal history associated with the strategy profile s is the outcome of s and is denoted O(s).
 Example:
  The strategy profile (DG, E) is associated with the terminal history D.
  The strategy profile (CH, E) is associated with the terminal history (C, E, H).
 Note that the outcome O(s) of the strategy profile s depends only on the players' plans of action, not their full strategies.

5.3 Nash Equilibrium
 Definition 161.2 (Nash equilibrium of extensive game with perfect information) The strategy profile s* in an extensive game with perfect information is a Nash equilibrium if, for every player i and every strategy ri of player i, the terminal history O(s*) generated by s* is at least as good according to player i's preferences as the terminal history O(ri, s*-i) generated by the strategy profile (ri, s*-i) in which player i chooses ri while every other player j chooses s*j.
 Equivalently, for each player i:
   ui(O(s*)) ≥ ui(O(ri, s*-i))   for every strategy ri

5.3 Nash Equilibrium
 One way to find the Nash equilibria of an extensive game in which each player has finitely many strategies is:
  To list each player's (full) strategies
  To combine the strategies of all players to list the strategy profiles
  To find the outcome of each strategy profile
  To analyze this information as a strategic game
 This is known as the strategic form of the extensive game.
 The set of Nash equilibria of any extensive game with perfect information is the set of Nash equilibria of its strategic form.

5.3 Nash Equilibrium
 Example 162.1: the entry game
 Player 1 (Challenger) strategies: {In, Out}
 Player 2 (Incumbent) strategies: {Acquiesce, Fight}
 Strategic form of the game (payoffs: Challenger, Incumbent; * marks a best response):

                        Incumbent
                        Acquiesce   Fight
   Challenger   In      2*,1*       0,0
                Out     1,2*        1*,2*

 Nash equilibria:
  (In, Acquiesce): the one identified by backward induction
  (Out, Fight): this is also a steady state; no player has an incentive to deviate.
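The sketch below (my own illustration, not from the slides) builds this 2×2 strategic form and checks each strategy profile for the Nash property by looking for profitable unilateral deviations; it recovers both (In, Acquiesce) and (Out, Fight).

from itertools import product

# Payoffs (Challenger, Incumbent) in the strategic form of the entry game.
PAYOFFS = {
    ("In", "Acquiesce"): (2, 1), ("In", "Fight"): (0, 0),
    ("Out", "Acquiesce"): (1, 2), ("Out", "Fight"): (1, 2),
}
STRATEGIES = (("In", "Out"), ("Acquiesce", "Fight"))

def is_nash(profile):
    """No player can gain by deviating unilaterally."""
    for player, options in enumerate(STRATEGIES):
        for alternative in options:
            deviation = list(profile)
            deviation[player] = alternative
            if PAYOFFS[tuple(deviation)][player] > PAYOFFS[profile][player]:
                return False
    return True

if __name__ == "__main__":
    print([p for p in product(*STRATEGIES) if is_nash(p)])
    # [('In', 'Acquiesce'), ('Out', 'Fight')]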

5.3 Nash Equilibrium
 How to interpret the Nash Equilibrium (Out,Fight)?

This situation is never observed in the extensive game. A solution to escape from this difficulty is to consider a slightly perturbed steady state in which, on rare occasions, nonequilibrium actions are taken:
 Players make mistakes or deliberately experiment
 Perturbations allow each player eventually to observe every other player's action after every history

 Another important point to note is that the extensive game embodies the assumption that the incumbent cannot commit, at the beginning of the game, to fight if the challenger enters. If such a commitment were credible, the challenger would stay out. But the threat is not credible (because it is irrational to fight after entry).

5.3 Nash Equilibrium
 Exercise 163.1 (Nash equilibria of extensive games)

Find the Nash equilibria of the extensive game represented by the figure (when constructing the strategic form of each game, be sure to include all the strategies of each player).


5.4 Subgame perfect equilibrium
 5.4.1 Definition

The notion of Nash equilibrium ignores the sequential structure of an extensive game. This may lead to steady states that are not robust (in the sense that they do not appear as such in the extensive game). We consider now a new notion of equilibrium that models a robust steady state. This notion requires:
 (i) that each player's strategy be optimal
 (ii) after every possible history
Subgame: for any nonterminal history h, the subgame following h is the part of the game that remains after h has occurred.
Example: in the entry game, the subgame following the history In is the game in which the incumbent is the only player and there are two terminal histories : Acquiesce and Fight.


5.4 Subgame perfect equilibrium
Definition 164.1 (Subgame of extensive game with perfect information) Let Gamma be an extensive game with perfect information, with player function P. For any nonterminal history h of Gamma, the subgame Gamma(h) following the history h is the following extensive game:
 Players: the players in Gamma
 Terminal histories: the set of all sequences h' of actions such that (h, h') is a terminal history of Gamma
 Player function: the player P(h, h') is assigned to each proper subhistory h' of a terminal history
 Preferences: each player prefers h' to h'' if she prefers (h, h') to (h, h'') in Gamma

Note that the subgame following the empty history is the entire game.

5.4 Subgame perfect equilibrium
 A subgame perfect equilibrium is a strategy profile s* with the property that in no subgame can any player i do better by choosing a strategy different from s*i, given that every other player j adheres to s*j.
Example: in the entry game, the Nash equilibrium (Out,Fight) is not a subgame perfect equilibrium because in the subgame following the history In, the strategy Fight is not optimal for the incumbent: in this subgame (the In subgame), the incumbent is better off choosing Acquiesce than it is choosing Fight.
 Notation: Let h be a history and s a strategy profile to which the players adhere after h. We denote by Oh(s) the outcome generated in the subgame following h by the strategy profile induced by s.

5.4 Subgame perfect equilibrium
 Example: the entry game
  Let s be the strategy profile (Out, Fight)
  Let h be the history In
  If h occurs and, afterwards, the players adhere to s, the resulting terminal history is Oh(s) = (In, Fight)

5.4 Subgame perfect equilibrium
 Definition 166.1 (Subgame perfect equilibrium of extensive game with perfect information) The strategy profile s* in an extensive game with perfect information is a subgame perfect equilibrium if, for every player i, every history h after which it is player i's turn to move (P(h) = i), and every strategy ri of player i, the terminal history Oh(s*) generated by s* after the history h is at least as good according to player i's preferences as the terminal history Oh(ri, s*-i) generated by the strategy profile (ri, s*-i):
   ui(Oh(s*)) ≥ ui(Oh(ri, s*-i))   for every strategy ri of player i
 The key point is that each player's strategy is required to be optimal for every history after which it is the player's turn to move, not only at the start of the game (as in the definition of a Nash equilibrium).

5.4 Subgame perfect equilibrium
 5.4.2 Subgame perfect equilibrium and Nash equilibrium

 Every subgame perfect equilibrium is a Nash equilibrium (because in a subgame perfect equilibrium, every player’s strategy is optimal, in particular after the empty history)

 A subgame perfect equilibrium generates a Nash equilibrium in every subgame.
 A Nash equilibrium is only required to be optimal in subgames that are reached when the players follow their strategies.
 Subgame perfect equilibrium requires moreover that each player's strategy is optimal after histories that do not occur if the players follow their strategies.

5.4 Subgame perfect equilibrium
 Example 167.2 (Variant of the entry game)

Consider the variant of the entry game in which the incumbent is indifferent between fighting and acquiescing if the challenger enters. Find the subgame perfect equilibria.


5.4 Subgame perfect equilibrium
Solution: both Nash equilibria (In,Acquiesce) and (Out,Fight) are subgame perfect equilibria because, after history In, both Fight and Acquiesce are optimal for the incumbent.
 Exercise 168.1

Which of the Nash equilibria of the following game are subgame perfect?


5.4 Subgame perfect equilibrium
 5.4.4 Interpretation

A Nash equilibrium corresponds to a steady state in an idealized setting in which players' long experience leads them to correct beliefs about the other players' actions. A subgame perfect equilibrium of an extensive game corresponds to a slightly perturbed steady state in which all players, on rare occasions, take nonequilibrium actions. Thus, players know how the other players will behave in every subgame.
 A subgame perfect equilibrium specifies players' actions:
  Not only after histories consistent with the strategies
  But also after histories that result when a player chooses arbitrary alternative actions.


5.4 Subgame perfect equilibrium
 Alternative interpretation:

Consider an extensive game with perfect information in which:
 each player has a unique best action at every history after which it is her turn to move;
 the horizon is finite.
In such a game, a player who knows the other players' preferences (e.g., profit maximization) and knows that the other players are rational may use backward induction to deduce her optimal strategy. The subgame perfect equilibrium is the outcome of the players' rational calculations about each other's strategies. Note that:
 this interpretation is not tenable in games in which some player has more than one optimal action after some history;
 but an extension of the procedure of backward induction can be used to find all subgame perfect equilibria of finite horizon games.

5.5 Finding subgame perfect equilibria of finite horizon games: backward induction
 In a game with finite horizon, the set of subgame perfect equilibria may be found more directly by using an extension of the procedure of backward induction.
 Define the length of a subgame to be the length of the longest history in the subgame.
 The procedure of backward induction works as follows:
  (i) Start by finding the optimal actions of the players who move in the last subgames (stage k);
  (ii) Next, find the optimal actions of the players who move at stage k-1, given the optimal actions we have found in all subgames of stage k;
  (iii) Continue the procedure up to stage 1.
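As an illustration (my own sketch; the tree representation is an assumption, not the book's notation), the code below runs this procedure on a game tree stored as nested dictionaries, returning the payoffs of the induced outcome and the action chosen at each decision point when optima are unique.

def backward_induction(node, history=()):
    """Backward induction on a finite game tree with unique optima.

    A terminal node is a tuple of payoffs; a decision node is
    {"player": index, "moves": {action: subtree}}.
    Returns (payoff profile of the induced outcome, {history: optimal action}).
    """
    if isinstance(node, tuple):                 # terminal history: just payoffs
        return node, {}
    player = node["player"]
    plan, best_action, best_payoffs = {}, None, None
    for action, subtree in node["moves"].items():
        payoffs, subplan = backward_induction(subtree, history + (action,))
        plan.update(subplan)                    # keep optimal actions in every subgame
        if best_payoffs is None or payoffs[player] > best_payoffs[player]:
            best_action, best_payoffs = action, payoffs
    plan[history] = best_action
    return best_payoffs, plan

if __name__ == "__main__":
    # Entry game: Challenger (player 0) moves first, Incumbent (player 1) after In.
    entry_game = {"player": 0, "moves": {
        "Out": (1, 2),
        "In": {"player": 1, "moves": {"Acquiesce": (2, 1), "Fight": (0, 0)}},
    }}
    print(backward_induction(entry_game))
    # ((2, 1), {('In',): 'Acquiesce', (): 'In'})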


5.5 Finding subgame perfect equilibria of finite horizon games: backward induction
 Example
  We first deduce that in the subgame of length 1 following history (C,E), player 1 chooses G.
  Then, at the start of the subgame of length 2 following the history C, player 2 chooses E.
  Then, at the start of the whole game, player 1 chooses D.
 In any game in which this procedure selects a single action for the player who moves at the start of each subgame, the strategy profile thus selected is the unique subgame perfect equilibrium of the game.

5.A (Short) Introduction 263 .5 Finding subgame perfect equilibria of finite horizon games: backward induction  What happens in a game in which at the start of some subgames. 9/12/2011 Game Theory . more than one action is optimal ? The solution is to traces back separately the implications for behavior in the longer subgames of every combination of optimal actions in the shorter subgames.

5.5 Finding subgame perfect equilibria of finite horizon games: backward induction
 Example 172.1

5. In the subgame following the history E. player 2’s unique optimal action is K. player 2 is indifferent between her two actions.5 Finding subgame perfect equilibria of finite horizon games: backward induction  The game has three subgames of length 1. in each of which player 2 moves:   In subgames following the histories C and D.A (Short) Introduction 265 . There are four combinations of player 2’s optima actions in the subgame of length 1: •FHK •FIK •GHK •GIK 9/12/2011 Game Theory .

5.5 Finding subgame perfect equilibria of finite horizon games: backward induction
 The game has a single subgame of length 2, namely the whole game, in which player 1 moves first. We now consider player 1's optimal action in this game for every combination of optimal actions of player 2 in the subgames of length 1:
  For the combinations FHK and FIK of optimal actions of player 2, player 1's optimal action at the start of the game is C.
  For the combination GHK of optimal actions of player 2, player 1's optimal action at the start of the game is D.
  For the combination GIK of optimal actions of player 2, the actions C, D, and E are all optimal for player 1.
 The strategy pairs isolated by the procedure are (C,FHK), (C,FIK), (D,GHK), (C,GIK), (D,GIK), and (E,GIK). The set of strategy profiles that this procedure yields for the whole game is the set of subgame perfect equilibria of the game.

5.5 Finding subgame perfect equilibria of finite horizon games: backward induction
 Two important propositions:
  PROPOSITION 172.1 (Subgame perfect equilibrium of finite horizon games and backward induction) The set of subgame perfect equilibria of a finite horizon extensive game with perfect information is equal to the set of strategy profiles isolated by the procedure of backward induction.
  PROPOSITION 173.1 (Existence of subgame perfect equilibrium) Every finite extensive game with perfect information has a subgame perfect equilibrium.

5.5 Finding subgame perfect equilibria of finite horizon games: backward induction
 Exercise 173.2
  Find the subgame perfect equilibria of this game:

5.5 Finding subgame perfect equilibria of finite horizon games: backward induction
 Exercise 176.1 (Dollar auction)
  An object that two people each value at v, a positive integer, is sold in an auction. In the auction, the people take turns bidding; a bid must be a positive integer greater than the previous bid. On her turn, a player may pass rather than bid, in which case the game ends and the other player receives the object; both players pay their last bids (if any). (If player 1 passes initially, player 2 receives the object and makes no payment; if, for example, player 1 bids 1, player 2 bids 3, and then player 1 passes, player 2 obtains the object and pays 3, and player 1 pays 1.) Each person's wealth is w, which exceeds v; neither player may bid more than her wealth.
  For v = 2 and w = 3, model the auction as an extensive game and find its subgame perfect equilibria.

5.5 Finding subgame perfect equilibria of finite horizon games: backward induction
 Exercise 176.2 (A synergistic relationship)
  Two individuals are involved in a synergistic relationship. Suppose that the players choose their effort levels sequentially (rather than simultaneously). First individual 1 chooses her effort level a1; then individual 2 chooses her effort level a2. An effort level is a nonnegative number, and individual i's preferences (for i = 1, 2) are represented by the payoff function ai (c + aj − ai), where j is the other individual and c > 0 is a constant. Find the subgame perfect equilibria.
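For this exercise, backward induction on the continuous action sets gives player 2's best response a2 = (c + a1)/2; substituting it into player 1's payoff and maximizing gives a1 = 3c/2, and then a2 = 5c/4. The sketch below (my own, for illustration only) checks this numerically on a grid.

def best_response_2(a1, c):
    """Player 2 maximizes a2*(c + a1 - a2); the maximizer is (c + a1)/2."""
    return (c + a1) / 2

def player1_payoff(a1, c):
    """Player 1's payoff anticipating player 2's best response."""
    a2 = best_response_2(a1, c)
    return a1 * (c + a2 - a1)

if __name__ == "__main__":
    c = 1.0
    grid = [i / 1000 for i in range(0, 5001)]          # effort levels 0..5
    a1_star = max(grid, key=lambda a1: player1_payoff(a1, c))
    print(f"a1* ≈ {a1_star:.3f} (theory 3c/2 = {1.5 * c}), "
          f"a2* ≈ {best_response_2(a1_star, c):.3f} (theory 5c/4 = {1.25 * c})")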

5.5 Finding subgame perfect equilibria of finite horizon games: backward induction
 Exercise 174.2 (An entry game with a financially constrained firm)
  An incumbent in an industry faces the possibility of entry by a challenger. First the challenger chooses whether to enter. If it does not enter, neither firm has any further action; the incumbent's payoff is TM (it obtains the profit M in each of the following T ≥ 1 periods) and the challenger's payoff is 0.
  If the challenger enters, it pays the entry cost f > 0, and in each of T periods the incumbent first commits to fight or cooperate with the challenger in that period, then the challenger chooses whether to stay in the industry or to exit. (Once the challenger exits, it cannot reenter.)
  If, in any period, the challenger stays in, each firm obtains in that period the profit −F < 0 if the incumbent fights and C > max{F, f} if it cooperates. If, in any period, the challenger exits, both firms obtain the profit zero in that period (regardless of the incumbent's action), and the incumbent obtains the profit M > 2C and the challenger the profit 0 in every subsequent period.
  Each firm cares about the sum of its profits. Find the subgame perfect equilibria of the extensive game.

10. Extensive Games with Imperfect Information

Framework
 We keep in this chapter the extensive game setup: an extensive game describes explicitly the sequential structure of decision-making, allowing us to study situations in which each decision-maker is free to change her mind as events unfold.
 In this imperfect information setup, each player, when choosing her action, may not be informed of the other players' previous actions.

10.1 Extensive games with imperfect information
 To describe an extensive game with perfect information, we need to specify the set of players, the set of terminal histories, the player function and the players' preferences.
 To describe an extensive game with imperfect information, we need to add a specification of each player's information about the history at every point at which she moves:
  Denote by Hi the set of histories after which player i moves.
  We specify player i's information by partitioning Hi into a collection of information sets (the collection is called the information partition).
  When making her decision, the player is informed of the information set that has occurred, but not of which history within that set has occurred.

10.1 Extensive games with imperfect information
 Example
  Suppose player i:
   moves after histories C, D and E (Hi = {C, D, E})
   is informed only that the history is C or that it is either D or E
  The player's information partition consists of the two information sets {C} and {D, E}.
  Note that if the player is not informed at all, her information partition contains a unique information set {C, D, E}.
 Important restriction
  Denote by A(h) the set of actions available to the player who moves after history h. We allow two histories h and h' to be in the same information set only if A(h) = A(h'). Why?

10.1 Extensive games with imperfect information
 Note that we allow moves of chance. So an outcome is a lottery (a probability distribution) over the set of terminal histories.
 Definition 314.1 (Extensive game with imperfect information) An extensive game with imperfect information consists of:
  A set of players
  A set of sequences (terminal histories) having the property that no sequence is a proper subhistory of any other sequence
  A function (the player function) that assigns either a player or "chance" to every sequence that is a proper subhistory of some terminal history
  A function that assigns to each history that the player function assigns to chance a probability distribution over the actions available after that history (each probability distribution is independent of every other distribution)
  For each player, a partition (the information partition) of the set of histories assigned to that player by the player function
  For each player, preferences over the set of lotteries over terminal histories

10.1 Extensive games with imperfect information
 Example 314.2: BoS as an extensive game
  Games in which each player moves once and no player, when moving, is informed of any other player's action may be modeled as strategic games or as extensive games with imperfect information.
  BoS:
   Each of two people chooses whether to go to a Bach or Stravinsky concert.
   Neither person, when choosing a concert, knows the one chosen by the other person.
  Model this game as an extensive game with imperfect information.

10.1 Extensive games with imperfect information
 Solution:
  Players: the two people, say 1 and 2
  Terminal histories: (B, B), (B, S), (S, B), (S, S)
  Player function: P(∅) = 1, P(B) = P(S) = 2
  Chance moves: none
  Information partitions:
   Player 1: {∅} (a single information set: player 1 has a single move and, when she moves, she is informed that the game is beginning)
   Player 2: {B, S} (player 2 has a single move and, when she moves, she is not informed whether the history is B or S)
  Preferences: given in the game description

10.1 Extensive games with imperfect information
 Figure 315.1 (the link between the two histories indicates that they are in the same information set)

10.1 Extensive games with imperfect information
 Example 317.1: Variant of the Entry Game (the challenger, before entering, takes an action that the incumbent does not observe)
  An incumbent faces the possibility of entry by a challenger (see Example 154.1).
  The challenger has three choices:
   Stay out
   Prepare itself for combat and enter (preparation is costly but reduces the loss from a fight)
   Enter without preparation
  A fight is less costly for the incumbent if the entrant is unprepared. But regardless of the entrant's readiness, the incumbent prefers to acquiesce rather than to fight.
  The incumbent observes whether the challenger enters but not whether it is prepared.
  Model (graphically by a tree) this game as an extensive game with imperfect information.

10.A (Short) Introduction 281 .1 9/12/2011 Game Theory .1 Extensive games with imperfect information  Figure 317.

10.2 Strategies
 A strategy specifies the action the player takes whenever it is her turn to move.
 Definition 318.1 (Strategy in extensive game) A (pure) strategy of player i in an extensive game is a function that assigns to each of i's information sets Ii an action in A(Ii) (the set of actions available to player i at the information set Ii).
 In the BoS game, each player has a single information set at which two actions (B or S) are available. Thus, each player has two possible strategies: B or S.
 If players have several information sets, a strategy specifies the list of actions at each information set, in the form (Action at I1, Action at I2, …).
 Definition 318.3 (Mixed strategy in extensive game) A mixed strategy of a player in an extensive game is a probability distribution over the player's pure strategies.
 With mixed strategies, players are allowed to choose their actions randomly.

10.3 Nash equilibrium
 Definition 318.4 (Nash equilibrium of extensive game)
  Intuition: a strategy profile is a Nash equilibrium if no player has an alternative strategy that increases her payoff, given the other players' strategies.
  Formal definition: the mixed strategy profile α* in an extensive game is a (mixed strategy) Nash equilibrium if, for each player i and every mixed strategy αi of player i, player i's expected payoff to α* is at least as large as her expected payoff to (αi, α*-i), according to a payoff function whose expected value represents player i's preferences over lotteries.
 Notes:
  An equilibrium in which no player's strategy entails any randomization (every player's strategy assigns probability 1 to a single action at each information set) is a pure Nash equilibrium.
  One way to find a Nash equilibrium of an extensive game is to construct the strategic form of the game and analyze it as a strategic game.

10.3 Nash equilibrium
 Example 319.1: BoS as an extensive game
  Each player has two strategies: B and S.
  The strategic form of the game is given in Figure 19.1.
  Thus the game has two pure Nash equilibria: (B, B) and (S, S).
  In the BoS game, player 2 is not informed of the action chosen by player 1 when taking an action (her information set contains both the history B and the history S). However, player 2's experience playing the game tells her the history to expect. E.g.: in a steady state in which every person who plays the role of either player chooses B, each player knows (by experience) that the other player will choose B.

10.3 Nash equilibrium
 How may we extend the idea of subgame perfect equilibrium to extensive games with imperfect information, to deal with situations in which the notion of Nash equilibrium is not adequate?
 Example 322.1: Entry game
 The strategic form of the entry game in Example 317.1 is the following (payoffs: Challenger, Incumbent; * marks a best response):

                          Incumbent
                          Acquiesce   Fight
   Challenger   Ready     3,3*        1,1
                Unready   4*,3*       0,2
                Out       2,4*        2*,4*

10.3 Nash equilibrium
 The game has two pure Nash equilibria:
  (Unready, Acquiesce)
  (Out, Fight)
 (The game also has a mixed strategy Nash equilibrium in which the challenger uses the pure strategy Out and the probability assigned by the incumbent to Acquiesce is at most ½.)
 As in Chapter 5 (perfect information), the Nash equilibrium (Out, Fight) is not plausible. The notion of subgame perfect equilibrium eliminates this strategy by requiring that each player's strategy be optimal, for every history after which she moves, regardless of whether the history occurs if the players adhere to their strategies.
 The natural extension of this idea to games with imperfect information requires that each player's strategy be optimal at each of her information sets, given the other players' strategies.

10.3 Nash equilibrium
 In Example 322.1, the incumbent's action Fight is unambiguously suboptimal at its information set, because the incumbent prefers Acquiesce if the challenger enters, regardless of whether the challenger is ready. So, any equilibrium that assigns a positive probability to Fight does not satisfy the additional requirement introduced by the notion of subgame perfect equilibrium.
 However, the implementation of the idea in other games may be less straightforward, because the optimality of an action at an information set may depend on the history that has occurred.
 Consider for example a variant of the entry game in which the incumbent prefers to fight rather than to accommodate an unprepared entrant (see Figure 323.1).

10.1 9/12/2011 Game Theory .A (Short) Introduction 288 .3 Nash equilibrium Figure 323.

10.3 Nash equilibrium
 Like the original game, the modified game has a Nash equilibrium (Out, Fight). But:
  given that now fighting is optimal if the challenger enters unprepared, the reasonableness of this equilibrium of the modified game depends on the history the incumbent believes has occurred;
  and the challenger's strategy Out gives the incumbent no basis on which to form such a belief.
 So, to study this situation, we must specify the players' beliefs.

10.4 Beliefs and sequential equilibrium
 A Nash equilibrium of a strategic game with imperfect information is characterized by two requirements:
  Each player chooses her best action given her belief about the other players
  Each player's belief is correct
 The notion of equilibrium we define here:
  Embodies these two requirements
  Insists that they hold at each point at which a player has to choose an action (like subgame perfect equilibrium in extensive games with perfect information)

10.4 Beliefs and sequential equilibrium
 10.4.1 Beliefs
  We assume that at an information set that contains more than one history, the player whose turn it is to move forms a belief about the history that has occurred.
  We model this belief as a probability distribution over the histories in the information set.
  We call a collection of beliefs (one for each information set) a belief system.
 Definition 324.1
  A belief system in an extensive game is a function that assigns to each information set a probability distribution over the histories in that information set.

10.4 Beliefs and sequential equilibrium
 Example: the entry game (317.1)
  The belief system consists of a pair of probability distributions:
   One assigns probability 1 to the empty history (the challenger's belief at the start of the game)
   The other assigns probabilities to the histories Ready and Unready (the incumbent's belief after the challenger enters)
 10.4.2 Strategies
 Definition 324.2 (Behavioral strategy in extensive game)
  A behavioral strategy of player i in an extensive game is a function that assigns to each of i's information sets Ii a probability distribution over the actions in A(Ii), with the property that each probability distribution is independent of every other distribution.

10.4 Beliefs and sequential equilibrium
 Note:
  A behavioral strategy that assigns probability one to a single action is equivalent to a pure strategy.
  Behavioral strategies assign probabilities to actions at each information set, while mixed strategies assign probabilities to complete pure strategies.
 Example: the BoS game (314.2)
  Each player has a single information set.
  So, in this game, a behavioral strategy for each player is a single probability distribution over her actions, and the set of behavioral strategies is identical to the set of mixed strategies.
 In all the games that we study, behavioral strategies and mixed strategies are equivalent, but behavioral strategies are easier to deal with.

10.4 Beliefs and sequential equilibrium
 10.4.3 Equilibrium
 Definition 325.1 (Assessment) An assessment is an equilibrium if it satisfies the following two requirements:
  Sequential rationality: each player's strategy is optimal whenever she has to move, given her belief about the history in the information set that has occurred and given the other players' strategies, regardless of whether the information set is reached if the players follow their strategies.
  Consistency of beliefs with strategies: each player's belief is consistent with the strategy profile.
 The sequential rationality requirement generalizes the requirement of subgame perfect equilibrium: each player's strategy must be optimal in the part of the game that follows each of her information sets.

10.1 9/12/2011 Game Theory .4 Beliefs and sequential equilibrium  Example 325 and Figure 326.A (Short) Introduction 295 .

10.4 Beliefs and sequential equilibrium
 Player 1's strategy is indicated by the red branches:
  Select E at the start of the game
  Select J after the history (C, F)
 Player 2's belief at her information set (numbers in brackets) is that the history C has occurred with probability 2/3 and the history D has occurred with probability 1/3.
 Sequential rationality requires that player 2's strategy be optimal at her information set, even though this set is not reached if player 1 follows her strategy.
 Player 2's expected payoff in the part of the game starting at her information set, given the subsequent behavior specified by player 1's strategy, is:
  Strategy F: (2/3 × 0) + (1/3 × 1) = 1/3
  Strategy G: (2/3 × 1) + (1/3 × 0) = 2/3
 Sequential rationality requires player 2 to select G.
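The sequential-rationality check at player 2's information set is just an expected-payoff comparison under her belief. Below is a minimal sketch of that computation (my own illustration; the payoff numbers are those quoted on the slide).

def expected_payoff_at_information_set(belief, payoffs, action):
    """Expected payoff of an action given a belief over the histories in the set."""
    return sum(prob * payoffs[(history, action)] for history, prob in belief.items())

if __name__ == "__main__":
    belief = {"C": 2 / 3, "D": 1 / 3}
    # Player 2's payoffs after each (history, action) pair, as read from the slide.
    payoffs = {("C", "F"): 0, ("D", "F"): 1, ("C", "G"): 1, ("D", "G"): 0}
    for action in ("F", "G"):
        value = expected_payoff_at_information_set(belief, payoffs, action)
        print(f"action {action}: expected payoff = {value:.3f}")
    # F gives 1/3, G gives 2/3, so sequential rationality selects G.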

10.4 Beliefs and sequential equilibrium
 Sequential rationality also requires that player 1's strategy be optimal at each of her two (one-element) information sets, given player 2's strategy:
  Player 1's optimal action after the history (C, F) is J.
  If player 2's strategy is G, player 1's optimal actions at the start of the game are D and E.
 Thus, given player 2's strategy G, player 1 has two optimal strategies: DJ and EJ.

10.4 Beliefs and sequential equilibrium
 Sequential rationality requirement (more formal definition)
  Denote by (β, μ) an assessment (β is a profile of behavioral strategies and μ is a belief system).
  Let Ii be an information set of player i. Denote by OIi(β, μ) the probability distribution over terminal histories that results if each history in Ii occurs with the probability assigned to it by player i's belief μi (not necessarily the probability with which it occurs if the players adhere to β) and, subsequently, the players adhere to the strategy profile β.
  In Figure 326.1: for the information set {C, D}, this probability distribution assigns probability 2/3 to the terminal history (C,G) and probability 1/3 to (D,G).
  Sequential rationality requires, for each player i and each of her information sets Ii, that her expected payoff to OIi(β, μ) be at least as large as her expected payoff to OIi((γi, β-i), μ) for each of her behavioral strategies γi.

10.4 Beliefs and sequential equilibrium
 The consistency of beliefs with strategies is a new requirement. In a steady state, each player's belief must be correct: the probability it assigns to any history must be the probability with which that history occurs if the players adhere to their strategies.
 The implementation of this idea is somewhat unclear at an information set not reached if the players follow their strategies: every history in such a set has probability 0 if the players follow their strategies. We deal with this difficulty by allowing the player who moves at such an information set to hold any belief at that information set.
 The consistency requirement therefore restricts the belief system only at information sets reached with positive probability if every player adheres to her strategy.

10.4 Beliefs and sequential equilibrium
 Precisely, the consistency requirement imposes that the probability assigned to every history h* in an information set reached with positive probability, by the belief of the player who moves at that information set, be equal to the probability that h* occurs according to the strategy profile, conditional on the information set's being reached.
 By Bayes' rule, this probability is:
   Pr(h* according to β) / Σ over h in Ii of Pr(h according to β)

10.4 Beliefs and sequential equilibrium
 Figure 326.1
  If player 1's behavioral strategy assigns probability 1 to action E at the start of the game, the consistency requirement places no restriction on player 2's belief (player 2's information set is not reached if player 1 adheres to her strategy).
  If player 1's action at the start of the game assigns positive probability to C or D, the consistency requirement comes into play:
   Denote by p the probability assigned to C by player 1's strategy and by q the probability assigned to D.
   Consistency requires that player 2's belief assign probability p/(p+q) to C and q/(p+q) to D.
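The consistency computation is a one-line application of Bayes' rule. Here is a minimal sketch (my own) that derives the belief at an information set from the probabilities a strategy profile assigns to the histories in that set.

def consistent_belief(reach_probabilities):
    """Belief over histories in an information set, given their reach probabilities.

    Returns None when the set is reached with probability zero: consistency
    then places no restriction on the belief.
    """
    total = sum(reach_probabilities.values())
    if total == 0:
        return None
    return {history: prob / total for history, prob in reach_probabilities.items()}

if __name__ == "__main__":
    # Player 1 plays C with probability p and D with probability q.
    p, q = 0.2, 0.6
    print(consistent_belief({"C": p, "D": q}))      # {'C': 0.25, 'D': 0.75}
    print(consistent_belief({"C": 0.0, "D": 0.0}))  # None: any belief is allowed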

10.4 Beliefs and sequential equilibrium
 Example 327.4: Consistency of beliefs in the entry game (Figures 317.1 and 323.1)
  Denote by pR, pU and pO the probabilities that the challenger assigns to Ready, Unready and Out.
  If pO = 1, the consistency condition does not restrict the incumbent's belief.
  Otherwise, the condition requires that the incumbent assign probability pR/(pR+pU) to Ready and pU/(pR+pU) to Unready.
 Definition 328.1 (Weak sequential equilibrium)
  An assessment (β, μ) (consisting of a behavioral strategy profile β and a belief system μ) is a weak sequential equilibrium if it satisfies sequential rationality and weak consistency of beliefs with strategies.

10.4 Beliefs and sequential equilibrium
 Figure 326.1
  In this game, player 1's strategy EJ is sequentially rational given player 2's strategy G, and player 2's strategy G is sequentially rational given the beliefs indicated in the figure and player 1's strategy EJ.
  The belief is consistent with the strategy profile (EJ, G), because this profile does not lead to player 2's information set.
  Thus the game has a weak sequential equilibrium.
 Notes:
  The strategy profile in any weak sequential equilibrium is a Nash equilibrium (if an assessment is a weak sequential equilibrium, then each player's strategy in the assessment is optimal at the beginning of the game, given the other players' strategies).
  In an extensive game with perfect information, only one belief system is possible (each player believes at each information set that the single compatible history has occurred with probability 1).
  Therefore, in an extensive game with perfect information, the strategy profile in any weak sequential equilibrium is a subgame perfect equilibrium.

10.4 Beliefs and sequential equilibrium
 How to find weak sequential equilibria?
  We can use a combination of the techniques for finding subgame perfect equilibria of extensive games with perfect information and for finding Nash equilibria of strategic games.
  We can find all the Nash equilibria of the game, and then check which of these equilibria are associated with weak sequential equilibria.
 Figure 326.1
  Does the game have a weak sequential equilibrium in which player 1 chooses E?
  If player 1 chooses E, player 2's belief is not restricted by consistency.
  We therefore need to ask:
   Whether any strategy of player 2 makes E optimal for player 1.
   Whether there is a belief of player 2 that makes any such strategy optimal.

10.4 Beliefs and sequential equilibrium
 We see that:
 E is optimal if and only if player 2 chooses F with probability at most 2/3.
 Any such strategy of player 2 is optimal if player 2 believes the history is C with probability ½.
 The strategy of choosing F with probability 0 is optimal if player 2 believes the history is C with any probability of at least ½.
 Thus: an assessment is a weak sequential equilibrium if player 1's strategy is EJ and player 2:
 Either chooses F with probability at most 2/3 and believes that the history is C with probability ½,
 Or chooses G and believes that the history is C with probability at least ½.

10.4 Beliefs and sequential equilibrium
 Example 330.1 (Weak sequential equilibria of the entry game, Example 317.1)
 The entry game has two pure strategy Nash equilibria: (Unready, Acquiesce) and (Out, Fight).
 Consider (Unready, Acquiesce):
 Consistency requires that the incumbent believe that the history is Unready at its information set (because Unready is the challenger's strategy), making Acquiesce optimal.
 The game has a weak sequential equilibrium in which the strategy profile is (Unready, Acquiesce) and the incumbent's belief is that the history is Unready.
 Consider (Out, Fight):
 Regardless of the incumbent's belief at its information set, Fight is not an optimal action in the remainder of the game: for every belief, Acquiesce yields a higher payoff than Fight.
 No assessment in which the strategy profile is (Out, Fight) is both sequentially rational and consistent.

10.4 Beliefs and sequential equilibrium
 Why "weak" sequential equilibrium?
 The consistency condition's limitation to information sets reached with positive probability generates, in some games, a relatively large set of equilibrium assessments.
 Some of these equilibrium assessments do not plausibly correspond to steady states.
 Consider the following variant of the entry game:

10.4 Beliefs and sequential equilibrium
 In this variant, Ready is better than Unready for the challenger, regardless of the incumbent's action.
 This game has a weak sequential equilibrium in which the challenger's strategy is Out, the incumbent's strategy is Fight, and the incumbent believes at its information set that the history is Unready (with probability one).
 In this equilibrium, the incumbent believes that the challenger has chosen Unready, even though this action is dominated by Ready for the challenger. This belief does not seem reasonable.

10.5 Signaling games
 In many interactions, information is asymmetric: some parties are better informed than others.
 In one interesting class of situations, the informed parties have the opportunity to take actions observed by the uninformed parties before the uninformed parties take actions that affect everyone: the informed parties' actions may "signal" their information.

10.5 Signaling games
 Example 332.1: Entry as a signaling game
 The challenger is strong with probability 𝑝 and weak with probability 1 − 𝑝 (with 0 < 𝑝 < 1).
 The challenger knows its type but the incumbent does not.
 The challenger may either ready itself for battle or remain unready. Preparations cost a strong challenger 1 unit of payoff and a weak one 3 units.
 The incumbent observes the challenger's readiness but not its type and chooses either to fight or to acquiesce.
 An unready challenger's payoff is 5 if the incumbent acquiesces to its entry, and fighting entails a loss of 2 units for each type.
 The incumbent prefers to fight (payoff 1) rather than acquiesce to (payoff 0) a weak challenger, and prefers to acquiesce to (payoff 2) rather than fight (payoff −1) a strong one.

10.5 Signaling games
 Figure 333.1

10.5 Signaling games
 Figure 333.1 models this situation:
 The empty history is in the center of the diagram.
 The first move is made by chance (which determines the challenger's type).
 Both types have two actions (so the challenger has four strategies).
 The incumbent has two information sets, at each of which it has two actions (A and F), and thus also four strategies.
 Searching for pure weak sequential equilibria:
 Note that a weak challenger prefers Unready to Ready, regardless of the incumbent's actions (even if the incumbent acquiesces to a ready challenger and fights an unready one). Thus, in any weak sequential equilibrium, a weak challenger chooses Unready (the sketch below reconstructs the payoffs and checks this).
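A minimal sketch of these payoffs, reconstructed from the verbal description in Example 332.1; the function names and data layout are my own, not the book's notation.

```python
# Challenger: baseline payoff of 5, minus the preparation cost (1 if strong
# and ready, 3 if weak and ready, 0 if unready), minus 2 if the incumbent fights.
# Incumbent: 1 for fighting a weak challenger, 0 for acquiescing to it,
#            2 for acquiescing to a strong challenger, -1 for fighting it.

PREP_COST = {"Strong": 1, "Weak": 3}

def challenger_payoff(ctype, readiness, inc_action):
    pay = 5
    if readiness == "Ready":
        pay -= PREP_COST[ctype]
    if inc_action == "Fight":
        pay -= 2
    return pay

def incumbent_payoff(ctype, inc_action):
    if ctype == "Weak":
        return 1 if inc_action == "Fight" else 0
    return -1 if inc_action == "Fight" else 2

# A weak challenger prefers Unready to Ready whatever the incumbent does:
for a_unready in ("Fight", "Acquiesce"):
    for a_ready in ("Fight", "Acquiesce"):
        assert challenger_payoff("Weak", "Unready", a_unready) > \
               challenger_payoff("Weak", "Ready", a_ready)
```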

10.5 Signaling games
 Consider each possible action of a strong challenger.
 Strong challenger chooses Ready
 Both of the incumbent's information sets are reached, so the consistency condition restricts its beliefs at each set:
 At the top information set, the incumbent must believe that the history was (Strong, Ready) with probability one (because a weak challenger never chooses Ready), and hence choose A.
 At the bottom information set, the incumbent must believe that the history was (Weak, Unready), and hence choose F.
 Thus, if the challenger deviates and chooses Unready when he is strong, he is worse off (he gets 3 rather than 4).
 We conclude that the game has a weak sequential equilibrium in which the challenger chooses Ready when he is strong and Unready when he is weak, and the incumbent acquiesces when he sees Ready and fights when he sees Unready.

10.5 Signaling games
 Strong challenger chooses Unready
 At its bottom information set, the incumbent believes, by consistency, that the history was (Strong, Unready) with probability 𝑝 and (Weak, Unready) with probability 1 − 𝑝. Thus his expected payoff is:
 To A: 𝑝(2) + (1 − 𝑝)(0) = 2𝑝
 To F: 𝑝(−1) + (1 − 𝑝)(1) = 1 − 2𝑝
 A is therefore optimal if 𝑝 ≥ 1/4 and F is optimal if 𝑝 ≤ 1/4.
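A quick numeric check of this cutoff (the helper below is my own sketch, using the incumbent payoffs stated in Example 332.1):

```python
# Incumbent's expected payoff against a pooling challenger (both types Unready),
# when it assigns probability p to the challenger's being strong.
def incumbent_expected(action, p):
    strong = 2 if action == "Acquiesce" else -1
    weak = 0 if action == "Acquiesce" else 1
    return p * strong + (1 - p) * weak

for p in (0.1, 0.25, 0.6):
    a, f = incumbent_expected("Acquiesce", p), incumbent_expected("Fight", p)
    best = "Acquiesce" if a > f else "Fight" if f > a else "indifferent"
    print(p, best)   # 0.1 -> Fight, 0.25 -> indifferent, 0.6 -> Acquiesce
```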

10.5 Signaling games
 Suppose that 𝑝 ≥ 1/4 and the incumbent chooses A in response to Unready:
 A strong challenger who chooses Unready obtains a payoff of 5.
 If he switches to Ready, his payoff is less than 5 regardless of the incumbent's action.
 The incumbent may hold any belief about the type of a ready challenger and, depending on his belief, may fight or acquiesce.
 Thus, if 𝑝 ≥ 1/4, the game has a weak sequential equilibrium in which both types of challenger choose Unready and the incumbent acquiesces to an unready challenger.

10.5 Signaling games
 Now suppose that 𝑝 ≤ 1/4 and the incumbent chooses F in response to Unready:
 A strong challenger who chooses Unready obtains a payoff of 3. If he switches to Ready, his payoff is 2 if the incumbent fights and 4 if he acquiesces. Thus, for an equilibrium, the incumbent must fight a ready challenger.
 If the incumbent believes that a ready challenger is weak with high enough probability (at least ¾, as the short derivation below confirms), fighting is indeed optimal.
 Is such a belief consistent with equilibrium? Yes: the consistency condition does not restrict the incumbent's belief upon observing Ready, because this action is not taken when the challenger follows his strategy of choosing Unready regardless of his type.
 Thus, if 𝑝 ≤ 1/4, the game has a weak sequential equilibrium in which:
 Both types of challenger choose Unready
 The incumbent fights regardless of the challenger's action
 The incumbent assigns probability of at least ¾ to the challenger's being weak if it observes that the challenger is ready for battle.
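The ¾ threshold can be verified directly from the incumbent's payoffs in Example 332.1; a short derivation (with 𝜇 denoting, in my notation, the probability the incumbent assigns to a ready challenger's being weak):

```latex
% mu = probability the incumbent assigns to a ready challenger's being weak
\begin{align*}
  \mathbb{E}[u_I(\text{Fight})]     &= \mu \cdot 1 + (1-\mu)\cdot(-1) = 2\mu - 1,\\
  \mathbb{E}[u_I(\text{Acquiesce})] &= \mu \cdot 0 + (1-\mu)\cdot 2   = 2 - 2\mu.
\end{align*}
% Fight is optimal iff 2\mu - 1 \ge 2 - 2\mu, i.e. iff \mu \ge 3/4.
```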

10.5 Signaling games
 This example shows that two kinds of pure strategy equilibrium may exist in signaling games:
 Separating equilibrium: each type of sender (of the signal) chooses a different action, so that, upon observing the sender's action, the receiver (of the signal) knows the sender's type.
 Pooling equilibrium: all types of the sender choose the same action, so that the sender's action gives the receiver no clue to the sender's type.
 Note: if the sender has more than two types, mixtures of these two kinds of equilibrium may exist (the set of types may be divided into groups, within each of which all types choose the same action and between which the actions are different).

10.8 Strategic information transmission
 The situation
 You research the market for a new product and submit a report to your boss, who decides which product to develop.
 Your preferences differ from those of your boss:
 You are interested in promoting the interests of your division.
 Your boss is interested in promoting the interests of the whole firm.
 If you report the results of your research without distortion, the product your boss will choose is not the best one for you.
 If you systematically distort your findings, your boss will be able to unravel your report and deduce your actual findings.
 Obfuscation therefore seems a more promising route.

10.8 Strategic information transmission
 The model
 A sender (you) observes the state 𝑡, a number between 0 and 1, that a receiver (the boss) cannot see.
 The distribution of the state is uniform: Pr(𝑡 ≤ 𝑧) = 𝑧.
 The sender submits a report 𝑟 (a number) to the receiver.
 The receiver observes the report and takes an action 𝑦 (a number).
 The payoff functions are:
 Sender: −(𝑦 − 𝑡 − 𝑏)²
 Receiver: −(𝑦 − 𝑡)²
 where 𝑏 (the sender's bias) is a fixed number that reflects the divergence between the sender's and the receiver's preferences.
 Note that the receiver's optimal action is 𝑦 = 𝑡 and the sender's optimal action is 𝑦 = 𝑡 + 𝑏 (see Figure 343.1).
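A minimal sketch of the model's payoffs and best responses (the function names are mine, not the book's):

```python
def sender_payoff(y, t, b):
    return -(y - (t + b)) ** 2   # maximized at y = t + b

def receiver_payoff(y, t):
    return -(y - t) ** 2         # maximized at y = t

def receiver_best_action(lo, hi):
    """If the receiver believes t is uniform on [lo, hi], the action that
    maximizes his expected payoff -(y - t)^2 is the posterior mean."""
    return (lo + hi) / 2

print(receiver_best_action(0.0, 1.0))   # 0.5: the action under the prior
```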

10.8 Strategic information transmission
 Figure 343.1 (players' payoff functions)

10.8 Strategic information transmission
 10.8.1 Perfect information transmission?
 Consider an equilibrium in which the sender accurately reports the state he observes: 𝑟(𝑡) = 𝑡 for all 𝑡.
 Given this strategy, the consistency condition requires that the receiver believe (correctly) that the state is 𝑡 when the sender reports 𝑡. The receiver hence optimally chooses the action 𝑡 (the maximizer of −(𝑦 − 𝑡)²).
 Is the sender's strategy a best response to the receiver's strategy? Not if 𝑏 > 0. Suppose the state is 𝑡:
 If the sender reports 𝑡, the receiver chooses 𝑦 = 𝑡 and the sender's payoff is −𝑏².
 If the sender instead reports 𝑡 + 𝑏, the receiver chooses 𝑦 = 𝑡 + 𝑏 and the sender's payoff is 0.
 So, unless the sender's and the receiver's preferences are the same (𝑏 = 0), the game has no equilibrium in which the sender accurately reports the state.

10.8 Strategic information transmission
 10.8.2 No information transmission?
 Consider an equilibrium in which the sender reports a constant value: 𝑟(𝑡) = 𝑐 for all 𝑡.
 The consistency condition requires that if the receiver observes the report 𝑐, his belief remain the same as it was initially (the state is uniformly distributed between 0 and 1). The expected value of 𝑡 is then 𝐸(𝑡) = 1/2, and his optimal action (the action that maximizes his expected payoff) is 𝑦 = 1/2.
 The consistency condition does not constrain the receiver's belief about the state upon receiving a report different from 𝑐: such a report does not occur if the sender follows her strategy. Note also that if the receiver simply ignores the sender's report completely, his optimal action remains the same.
 Because the sender's report has no effect on the receiver's optimal action, any constant report is optimal for him and, in particular, 𝑟(𝑡) = 𝑐 is optimal.

10.8 Strategic information transmission
 In summary, for every value of 𝑏, the game has a weak sequential equilibrium in which the sender's report conveys no information (constant report): the receiver ignores the report (he maintains his initial belief about the state) and takes the action that maximizes his expected payoff.
 If 𝑏 is small, this equilibrium is not very attractive for either the sender or the receiver. For example, if 𝑏 = 1/4, then for any 𝑡 with 0 ≤ 𝑡 < 1/4 both the sender and the receiver are better off if the receiver's action is 𝑡 + 𝑏 (the quick check below illustrates this).
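A quick numeric check of this claim (my own, with 𝑏 = 1/4 and 𝑡 = 0.1 as an illustration):

```python
def sender_payoff(y, t, b):
    return -(y - (t + b)) ** 2

def receiver_payoff(y, t):
    return -(y - t) ** 2

b, t = 0.25, 0.1
# No-information equilibrium action is 1/2; compare it with y = t + b.
print(receiver_payoff(t + b, t), ">", receiver_payoff(0.5, t))    # -0.0625 > -0.16
print(sender_payoff(t + b, t, b), ">", sender_payoff(0.5, t, b))  # 0.0 > -0.0225
```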

10.8 Strategic information transmission
 10.8.3 Some information transmission
 Does the game have equilibria in which some information is transmitted?
 Suppose the sender makes one of two reports:
 𝑟1 if 0 ≤ 𝑡 ≤ 𝑡1
 𝑟2 if 𝑡1 ≤ 𝑡 ≤ 1
 with 𝑟1 ≠ 𝑟2.
 Consider the receiver's optimal response to this strategy:
 If he sees the report 𝑟1, the consistency condition requires that he believe the state is uniformly distributed between 0 and 𝑡1. His optimal action is then 𝑦 = (1/2)𝑡1.
 Similarly, if he sees the report 𝑟2, the consistency condition requires that he believe the state is uniformly distributed between 𝑡1 and 1. His optimal action is then 𝑦 = (1/2)(1 + 𝑡1).

10.8 Strategic information transmission
 The consistency condition does not restrict the receiver's belief if he sees a report other than 𝑟1 or 𝑟2. Assume therefore that for any such report, the receiver's belief is one of the two beliefs he holds if he sees 𝑟1 or 𝑟2 (so his optimal action is either 𝑦 = (1/2)𝑡1 or 𝑦 = (1/2)(1 + 𝑡1)).
 Now, for equilibrium, we need the sender's report 𝑟1 to be optimal if 0 ≤ 𝑡 ≤ 𝑡1 and his report 𝑟2 to be optimal if 𝑡1 ≤ 𝑡 ≤ 1.
 By changing his report, the sender can change the receiver's optimal action from (1/2)𝑡1 to (1/2)(1 + 𝑡1). In particular, for the report 𝑟1 to be optimal when 0 ≤ 𝑡 ≤ 𝑡1, the sender must like the action (1/2)𝑡1 at least as much as (1/2)(1 + 𝑡1) (and vice-versa for the report 𝑟2).
 So, in state 𝑡1, given the receiver's strategy, the sender must be indifferent between the two actions (1/2)𝑡1 and (1/2)(1 + 𝑡1):

10.8 Strategic information transmission
 This indifference implies that 𝑡1 + 𝑏 (the sender's preferred action) is midway between (1/2)𝑡1 and (1/2)(1 + 𝑡1) (the receiver's optimal actions). So (see Figure 346.1):
 𝑡1 + 𝑏 = (1/2)[(1/2)𝑡1 + (1/2)(1 + 𝑡1)] = (1/2)𝑡1 + 1/4
 hence 𝑡1 = 1/2 − 2𝑏
 Figure 346.1
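A numerical check of this indifference (my own, with 𝑏 = 0.1 as an illustration):

```python
def sender_payoff(y, t, b):
    return -(y - (t + b)) ** 2

b = 0.1
t1 = 0.5 - 2 * b                  # 0.3
low, high = t1 / 2, (1 + t1) / 2  # 0.15 and 0.65
assert abs(sender_payoff(low, t1, b) - sender_payoff(high, t1, b)) < 1e-12
# t1 + b = 0.4 lies exactly midway between 0.15 and 0.65, hence the indifference.
```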

10.8 Strategic information transmission

 We need 𝑡1 > 0: this condition is satisfied only if 𝑏 < 1/4. If 𝑏 ≥ 1/4, the game has no equilibrium in which the sender makes two different reports.
 Put differently, if preferences diverge too much, there is no point in asking the sender to submit a report: the receiver should simply take the best action for himself, given his prior belief.
 𝑡1 = 1/2 − 2𝑏 is not only a necessary condition for equilibrium but also a sufficient one. Indeed, in such a case:
 In every state with 0 ≤ 𝑡 < 𝑡1, the sender optimally reports 𝑟1.
 In every state with 𝑡1 ≤ 𝑡 ≤ 1, the sender optimally reports 𝑟2 ≠ 𝑟1.
 This follows from the shape of the payoff function, which is symmetric (see Figure 346.2).

10.8 Strategic information transmission
Figure 346.1


10.8 Strategic information transmission
 This equilibrium is better for both the receiver and the sender than the one in which no information is transmitted. Consider the receiver:
 If no information is transmitted, he takes action ½ in all states, and his payoff in state 𝑡 is −(1/2 − 𝑡)².
 In the two-report equilibrium, his payoff is:
 −((1/2)𝑡1 − 𝑡)² for 0 ≤ 𝑡 < 𝑡1
 −((1/2)(𝑡1 + 1) − 𝑡)² for 𝑡1 ≤ 𝑡 ≤ 1
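The comparison is in ex ante (expected payoff) terms; a rough numeric check on the receiver's side (my own sketch, with 𝑏 = 0.1 assumed):

```python
# Receiver's ex ante expected payoff: no-information equilibrium (action 1/2
# in every state) versus the two-report equilibrium with t1 = 1/2 - 2b.
b = 0.1
t1 = 0.5 - 2 * b                           # 0.3
N = 100_000
grid = [(i + 0.5) / N for i in range(N)]   # states t, uniform on [0, 1]

def receiver_payoff(y, t):
    return -(y - t) ** 2

no_info = sum(receiver_payoff(0.5, t) for t in grid) / N
two_report = sum(
    receiver_payoff(t1 / 2 if t < t1 else (1 + t1) / 2, t) for t in grid
) / N
print(no_info, two_report)   # about -0.083 vs -0.031: two reports do better
```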

10.8 Strategic information transmission
 10.8.4 How much information transmission?

 For 𝑏 < 1/4, does the game have equilibria in which more information is transmitted than in the two-report equilibrium?
 Consider an equilibrium in which the sender makes one of 𝐾 reports, depending on the state. Specifically, the sender's report is:
 𝑟1 if 0 ≤ 𝑡 < 𝑡1
 𝑟2 if 𝑡1 ≤ 𝑡 < 𝑡2
 …
 𝑟𝐾 if 𝑡𝐾−1 ≤ 𝑡 < 1
 where 𝑟𝑖 ≠ 𝑟𝑗 for 𝑖 ≠ 𝑗.
 The equilibrium analysis follows the same lines as for the two-report equilibrium.

10.8 Strategic information transmission

 Specifically:
 If the receiver observes the report 𝑟𝑘, then the consistency condition requires that he believe the state to be uniformly distributed between 𝑡𝑘−1 and 𝑡𝑘. Therefore, he optimally takes the action (1/2)(𝑡𝑘−1 + 𝑡𝑘).
 If he observes a report different from any 𝑟𝑘, the consistency condition does not restrict his belief. We assume that his belief in such a case is the belief he holds upon receiving one of the reports 𝑟𝑘.
 Now, for equilibrium, we need the sender's report 𝑟𝑘 to be optimal when the state is 𝑡 with 𝑡𝑘−1 ≤ 𝑡 < 𝑡𝑘, for 𝑘 = 1, …, 𝐾.
 A sufficient condition for optimality is that, in each boundary state 𝑡𝑘, 𝑘 = 1, …, 𝐾 − 1, the sender be indifferent between the reports 𝑟𝑘 and 𝑟𝑘+1 and, therefore, between the receiver's actions (1/2)(𝑡𝑘−1 + 𝑡𝑘) and (1/2)(𝑡𝑘 + 𝑡𝑘+1).

10.8 Strategic information transmission

 This indifference implies that 𝑡𝑘 + 𝑏 is equal to the average of (1/2)(𝑡𝑘−1 + 𝑡𝑘) and (1/2)(𝑡𝑘 + 𝑡𝑘+1):
 𝑡𝑘 + 𝑏 = (1/2)[(1/2)(𝑡𝑘−1 + 𝑡𝑘) + (1/2)(𝑡𝑘 + 𝑡𝑘+1)]
 or 𝑡𝑘+1 − 𝑡𝑘 = 𝑡𝑘 − 𝑡𝑘−1 + 4𝑏
 That is to say, the interval of states for which the sender's report is 𝑟𝑘+1 is longer by 4𝑏 than the interval for which the report is 𝑟𝑘.
 The length of the first interval, from 0 to 𝑡1, is 𝑡1. The sum of the lengths of all the intervals must be equal to one:
 𝑡1 + (𝑡1 + 4𝑏) + ⋯ + (𝑡1 + (𝐾 − 1)4𝑏) = 1
 or 𝐾𝑡1 + 4𝑏(1 + 2 + ⋯ + (𝐾 − 1)) = 1

10.8 Strategic information transmission
 The sum of the first 𝑛 positive integers is 𝑛(𝑛 + 1)/2, so:
 𝐾𝑡1 + 2𝑏𝐾(𝐾 − 1) = 1
 If 𝑏 is small enough that 2𝑏𝐾(𝐾 − 1) < 1, there is a positive value of 𝑡1 that satisfies this equation.
 If 1/24 ≤ 𝑏 < 1/12, the inequality is satisfied for 𝐾 ≤ 3. So, in the equilibrium in which the most information is transmitted, the sender chooses one of three reports. From 𝐾𝑡1 + 2𝑏𝐾(𝐾 − 1) = 1, we have 𝑡1 = 1/3 − 4𝑏 and 𝑡2 = 2/3 − 4𝑏.
 The values of the reports 𝑟𝑘 do not matter as long as no two are the same (we think of them as words in a language).
 Figure 348.2 shows the equilibrium action 𝑦 taken by the receiver as a function of the state 𝑡.
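A small sanity check of the three-report case (my own, with 𝑏 = 1/20, which lies in [1/24, 1/12)):

```python
b = 1 / 20
t1 = 1 / 3 - 4 * b                 # 0.1333...
t2 = 2 / 3 - 4 * b                 # 0.4666...
# Each interval is longer than the previous one by 4b, and the lengths sum to 1.
assert abs((t2 - t1) - (t1 + 4 * b)) < 1e-12
assert abs((1 - t2) - (t1 + 8 * b)) < 1e-12
assert abs(t1 + (t2 - t1) + (1 - t2) - 1) < 1e-12
```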

10.8 Strategic information transmission
 Figure 348.2

10.8 Strategic information transmission
 In summary:
 If there is a positive value of 𝑡1 that satisfies 𝐾𝑡1 + 2𝑏𝐾(𝐾 − 1) = 1, then the game has a weak sequential equilibrium in which the sender submits one of 𝐾 different reports, depending on the state.
 For any given value of 𝑏, the largest value of 𝐾 for which an equilibrium exists is the largest value for which 2𝑏𝐾(𝐾 − 1) < 1. If 2𝑏𝐾(𝐾 − 1) = 1, then, using the quadratic formula, 𝐾 = (1/2)(1 + √(1 + 2/𝑏)).
 Thus the larger the value of 𝑏, the smaller the largest value of 𝐾 possible in an equilibrium: the greater the difference between the sender's and the receiver's preferences, the coarser the information transmitted in the equilibrium with the largest number of steps (the most informative equilibrium).
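As a sketch of how these two conditions pin down the most informative equilibrium, the helper below (its name `most_informative_partition` is my own) computes the largest feasible 𝐾 and the corresponding thresholds for a given bias 𝑏:

```python
import math

def most_informative_partition(b):
    """Thresholds 0 = t0 < t1 < ... < tK = 1 of the equilibrium with the
    largest number of reports, using K*t1 + 2*b*K*(K-1) = 1 and
    t_{k+1} - t_k = t_k - t_{k-1} + 4*b."""
    # Largest K with 2*b*K*(K-1) < 1, i.e. K strictly below (1 + sqrt(1 + 2/b)) / 2.
    K = math.ceil((1 + math.sqrt(1 + 2 / b)) / 2) - 1
    t1 = (1 - 2 * b * K * (K - 1)) / K
    cuts, t, step = [0.0], 0.0, t1
    for _ in range(K):
        t += step
        cuts.append(t)
        step += 4 * b
    return cuts  # cuts[-1] == 1 up to rounding

print(most_informative_partition(1 / 20))  # [0.0, 0.1333..., 0.4666..., 1.0]
print(most_informative_partition(0.3))     # [0.0, 1.0]: only the babbling report
```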
