BARGAINING
A THESIS
submitted in partial fulfillment of the requirements
for the award of the dual degree of
MATHEMATICS
by
DEPARTMENT OF MATHEMATICS
INDIAN INSTITUTE OF SCIENCE EDUCATION AND
RESEARCH BHOPAL
BHOPAL - 462066
April 2018
CERTIFICATE
I hereby declare that this MS-Thesis is my own work and, to the best of
my knowledge, that it contains no material previously published or written
by another person, and no substantial proportions of material which have
been accepted for the award of any other degree or diploma at IISER Bhopal
or any other educational institution, except where due acknowledgement is
made in the document.
ACKNOWLEDGEMENT
ABSTRACT
Certificate . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . i
Acknowledgement . . . . . . . . . . . . . . . . . . . . . . . . . . . iii
Abstract . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . iv
4. Preliminaries . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
4.1 The Extensive Form Game . . . . . . . . . . . . . . . . . . . . 22
4.1.1 Game Trees . . . . . . . . . . . . . . . . . . . . . . . . 23
4.2 Strategies and Nash Equilibrium . . . . . . . . . . . . . . . . 25
4.2.1 Pure Strategies . . . . . . . . . . . . . . . . . . . . . . 26
4.2.2 Mixed versus Behavioral Strategies . . . . . . . . . . . 27
5. Sequential Rationality . . . . . . . . . . . . . . . . . . . . . . 29
5.1 Subgame Perfect Equilibrium . . . . . . . . . . . . . . . . . . 30
6. Strategic Bargaining . . . . . . . . . . . . . . . . . . . . . . . . 34
6.1 The Ultimatum Game . . . . . . . . . . . . . . . . . . . . . . 36
6.2 Finitely Many Rounds of Bargaining . . . . . . . . . . . . . . 38
Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
Part I
The motivation for studying this topic is to understand a method for systematically
selecting among possible choices based on reason and facts. In this
section, we study the fundamental ideas related to the decision problem. The
decision problem consists of three features:
1. Actions (𝐴) are all the alternatives from which the player can choose.
2. Outcomes (𝑋) are the possible consequences that can result from any
of the actions.
3. Preferences describe how the player ranks the set of possible outcomes,
from the most desired to the least desired.
1.2 Rationality
Based on the above observations, we can now define various important concepts
used for further development of the theory of rational decision making. Some
important definitions are as follows:
Definition 1.3. Let 𝑢(𝑥) be the player's payoff function over outcomes in
𝑋 = {𝑥_1, 𝑥_2, …, 𝑥_n}, and let 𝑝 = (𝑝_1, 𝑝_2, …, 𝑝_n) be a lottery over 𝑋 such that
𝑝_k = Pr{𝑥 = 𝑥_k}. Then, we define the player's expected payoff from the
lottery p as
𝐸[𝑢(𝑥)|𝑝] = ∑_{k=1}^{n} 𝑝_k 𝑢(𝑥_k) = 𝑝_1 𝑢(𝑥_1) + 𝑝_2 𝑢(𝑥_2) + … + 𝑝_n 𝑢(𝑥_n)
A player facing a decision problem with a payoff function 𝑣(·) over actions
is rational if he chooses an action 𝑎 ∈ 𝐴 that maximizes his payoff. That is,
𝑎* ∈ 𝐴 is chosen if and only if 𝑣(𝑎*) ≥ 𝑣(𝑎) for all 𝑎 ∈ 𝐴.
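Definition 1.3 and the notion of rational choice can be made concrete with a small computational sketch. This is my own illustration, not from the text: the outcomes, payoffs, and lotteries below are hypothetical, chosen only to show the expected-payoff calculation and the argmax step.

```python
from fractions import Fraction as F

# A toy decision problem (hypothetical numbers, for illustration only):
# a payoff function u over outcomes, and a lottery over outcomes for
# each available action.
u = {'sun': 10, 'rain': 0}
lotteries = {
    'beach':  {'sun': F(3, 4), 'rain': F(1, 4)},
    'cinema': {'sun': F(1, 2), 'rain': F(1, 2)},
}

def expected_payoff(p):
    # E[u(x) | p] = sum_k p_k u(x_k)
    return sum(p_k * u[x] for x, p_k in p.items())

v = {a: expected_payoff(p) for a, p in lotteries.items()}
best = max(v, key=v.get)                 # the rational choice maximizes v
print(v['beach'], v['cinema'], best)     # 15/2 5 beach
```

Exact rational arithmetic (`Fraction`) is used so the expected payoffs come out as exact values rather than floats.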
2. STATIC GAMES OF COMPLETE
INFORMATION
In this chapter, we discuss the definition of a normal-form game. We also
define the notions of dominated and dominant strategies, and study the
consequences of assuming rationality and common knowledge of rationality.
𝑣1 (𝑀 , 𝑀 ) = 𝑣2 (𝑀 , 𝑀 ) = −2
𝑣1 (𝐹 , 𝐹 ) = 𝑣2 (𝐹 , 𝐹 ) = −4
𝑣1 (𝑀 , 𝐹 ) = 𝑣2 (𝐹 , 𝑀 ) = −5
𝑣1 (𝐹 , 𝑀 ) = 𝑣2 (𝑀 , 𝐹 ) = −1.
𝑣𝑖 (𝑠𝑖 , 𝑠−𝑖 ) > 𝑣𝑖 (𝑠′𝑖 , 𝑠−𝑖 ) for all 𝑠′𝑖 ∈ 𝑆𝑖 , 𝑠′𝑖 ≠ 𝑠𝑖 , and all 𝑠−𝑖 ∈ 𝑆−𝑖 .
2.3.2 IESDS
Consider the following example of a normal-form game in matrix representation:
Player 2
𝐿 𝐶 𝑅
𝑈 5,4 6,2 7,3
Player 1 𝑀 3,2 9,5 4,7
𝐷 4,1 10,7 3,9
Note that there is no strictly dominated strategy for player 1. There is,
however, a strictly dominated strategy for player 2: the strategy C is strictly
dominated by R because 3 > 2 (row 𝑈), 7 > 5 (row 𝑀), and 9 > 7 (row 𝐷).
Thus, because this is common knowledge, both players know that we can
effectively eliminate the strategy C from player 2's strategy set, which results
in the following reduced game:
Player 2
𝐿 𝑅
𝑈 5,4 7,3
Player 1 𝑀 3,2 4,7
𝐷 4,1 3,9
In this reduced game, observe that both 𝑀 and 𝐷 are strictly dominated by
strategy 𝑈 for player 1, allowing us to perform a second round of eliminating
strategies but for player 1 this time. Eliminating these two strategies yields
the following trivial game:
Player 2
𝐿 𝑅
Player 1 𝑈 5,4 7,3
Observe that player 2 now has a strictly dominated strategy, 𝑅. This
process of Iterated Elimination of Strictly Dominated Strategies (IESDS)
yields a unique prediction: the strategy profile we expect these players
to play is (𝑈, 𝐿), giving the players the payoffs (5, 4).
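The elimination procedure just described is mechanical enough to automate. The following sketch (my own illustration, not from the text) performs IESDS with pure-strategy domination on this example and recovers the prediction (𝑈, 𝐿):

```python
# Iterated elimination of strictly dominated strategies (pure-strategy
# domination only) for a two-player game given as a payoff dictionary.
def iesds(payoffs, rows, cols):
    rows, cols = list(rows), list(cols)
    changed = True
    while changed:
        changed = False
        for r in rows[:]:   # eliminate rows strictly dominated for player 1
            if any(all(payoffs[(r2, c)][0] > payoffs[(r, c)][0] for c in cols)
                   for r2 in rows if r2 != r):
                rows.remove(r); changed = True
        for c in cols[:]:   # eliminate columns strictly dominated for player 2
            if any(all(payoffs[(r, c2)][1] > payoffs[(r, c)][1] for r in rows)
                   for c2 in cols if c2 != c):
                cols.remove(c); changed = True
    return rows, cols

# The 3x3 example from the text.
game = {('U','L'): (5,4), ('U','C'): (6,2), ('U','R'): (7,3),
        ('M','L'): (3,2), ('M','C'): (9,5), ('M','R'): (4,7),
        ('D','L'): (4,1), ('D','C'): (10,7), ('D','R'): (3,9)}
print(iesds(game, 'UMD', 'LCR'))   # (['U'], ['L'])
```

For strict domination, the surviving set is independent of the elimination order, so the simple repeated scan above is enough.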
2.3.3 Rationalizability
Definition 2.6. The strategy 𝑠𝑖 ∈ 𝑆𝑖 is player 𝑖’𝑠 best response to his
opponents’ strategies s−𝑖 ∈ 𝑆−𝑖 if
After eliminating all the strategies that are never a best response, and
employing this reasoning again and again in a way similar to what we did
for IESDS, the strategies that remain are called the set of rationalizable
strategies and the solution concept is known as rationalizability.
          Chris
           𝑂     𝐹
Alex  𝑂   2,1   0,0
      𝐹   0,0   1,2
Definition 2.8. The pure-strategy profile 𝑠* = (𝑠*_1, 𝑠*_2, ..., 𝑠*_n) ∈ 𝑆 is a Nash
equilibrium if 𝑠*_i is a best response to 𝑠*_{−i}, for all 𝑖 ∈ 𝑁, that is,
Similarly, the mixed-strategy profile 𝜎* = (𝜎*_1, 𝜎*_2, ..., 𝜎*_n) is a mixed-strategy
Nash equilibrium if 𝜎*_i is a best response to 𝜎*_{−i}, for all 𝑖 ∈ 𝑁, that is,

𝑣_i(𝜎*_i, 𝜎*_{−i}) ≥ 𝑣_i(𝜎_i, 𝜎*_{−i}) for all 𝜎_i ∈ △𝑆_i and all 𝑖 ∈ 𝑁.
Observe that the pure-strategy Nash equilibria in the Battle of the Sexes
game are (𝑂, 𝑂) and (𝐹, 𝐹). The mixed strategy ((2/3, 1/3), (1/3, 2/3))
is also a Nash equilibrium in the above game.
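The indifference property behind this mixed equilibrium is easy to verify by direct computation. The sketch below (my own illustration) computes each player's expected payoff from each pure strategy against the opponent's equilibrium mixture; since both payoffs coincide, neither player has a profitable deviation.

```python
from fractions import Fraction as F

# Payoff functions of the Battle of the Sexes from the text.
u_alex  = {('O', 'o'): 2, ('O', 'f'): 0, ('F', 'o'): 0, ('F', 'f'): 1}
u_chris = {('O', 'o'): 1, ('O', 'f'): 0, ('F', 'o'): 0, ('F', 'f'): 2}
p_alex  = {'O': F(2, 3), 'F': F(1, 3)}   # candidate equilibrium mixtures
p_chris = {'o': F(1, 3), 'f': F(2, 3)}

# Expected payoff of each pure strategy against the opponent's mixture.
ev_alex  = {a: sum(p_chris[c] * u_alex[(a, c)] for c in 'of') for a in 'OF'}
ev_chris = {c: sum(p_alex[a] * u_chris[(a, c)] for a in 'OF') for c in 'of'}

print(ev_alex['O'], ev_alex['F'], ev_chris['o'], ev_chris['f'])  # 2/3 2/3 2/3 2/3
```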
3. EXISTENCE OF NASH
EQUILIBRIUM
In this chapter, we will prove the famous Nash existence theorem using some
important results from algebraic topology, including the Kakutani fixed point
theorem. A solution concept is valuable insofar as it applies to a wide variety
of games, and not just to a small and particular family of games. That is why
a solution concept should apply generally and should not be developed in an
ad hoc way that is specific to a certain situation or game. Therefore, when
we apply our solution concept to different games, we require it to result in the
existence of an equilibrium solution. It turns out that, under quite
general conditions, games have at least one Nash equilibrium. This fact
gives the Nash solution concept its power: like IESDS and rationalizability,
the solution concept of Nash is widely applicable, and it usually
leads to more refined predictions than those of IESDS and rationalizability.
Let us now state some important results required in proving Nash Existence
theorem.
Sperner’s Lemma
of 𝑛 + 1 points in a general position. That is, for given vertices 𝑣1 , ..., 𝑣𝑛+1 ,
the simplex would be
𝑆 = { ∑_{i=1}^{n+1} 𝛼_i 𝑣_i : 𝛼_i ≥ 0, ∑_{i=1}^{n+1} 𝛼_i = 1 }.   (3.1)
Proof. [2] Call a cell of the subdivision a rainbow cell if its vertices are
assigned all different colors. Apart from proving the above statement, we
will also prove that the number of rainbow cells is odd for any proper coloring,
which is an even stronger statement.
– Over cells of the subdivision: Notice that each 𝑄-type cell contributes
2 edges colored (1, 2), while each 𝑅-type cell contributes precisely 1 such
edge. Summing over cells counts inner edges of type (1, 2) twice and
boundary edges only once. Hence, we have 2𝑄 + 𝑅 = 𝑋 + 2𝑌.
– Over the boundary of 𝑇: Edges colored (1, 2) can only be found
on the side of triangle 𝑇 whose endpoints are colored 1 and 2. By the
one-dimensional case, that side contains an odd number of edges
colored (1, 2). Hence we can conclude that 𝑋 is odd, which in
turn implies that 𝑅 is also odd.
– Notice that each cell of type 𝑅 contributes exactly one face
colored with {1, 2, …, 𝑛}, whereas each cell of type 𝑄 contributes
two such faces. Inside faces appear in two cells,
whereas boundary faces appear in one cell. Hence, we can
conclude that 2𝑄 + 𝑅 = 𝑋 + 2𝑌.
– On the boundary, notice that the only (𝑛 − 1)-dimensional faces
colored with 1, 2, ..., 𝑛 can lie on the face 𝐹 ⊂ 𝑆 whose
vertices are colored by 1, 2, ..., 𝑛. By the induction hypothesis,
𝐹 (which forms a properly colored (𝑛 − 1)-dimensional subdivision)
contains an odd number of rainbow (𝑛 − 1)-dimensional cells.
Therefore 𝑋 is odd, implying that 𝑅 is odd as well.
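The one-dimensional case invoked on the boundary can be checked by direct computation. Here is a small sketch (my own illustration): along a subdivided segment whose endpoints are colored 1 and 2, the color must switch an odd number of times, so the count of (1, 2)-edges is odd.

```python
# One-dimensional Sperner's lemma: color the vertices of a subdivided
# segment with {1, 2}. A (1,2)-edge is an edge whose endpoints have
# different colors; if the segment's endpoints get colors 1 and 2, the
# number of such edges is odd.
def rainbow_edges(coloring):
    # coloring: colors of the subdivision vertices from left to right
    return sum(1 for a, b in zip(coloring, coloring[1:]) if a != b)

for coloring in ([1, 2], [1, 1, 2, 1, 2, 2, 2], [1, 2, 2, 1, 1, 2]):
    assert rainbow_edges(coloring) % 2 == 1
print("odd in every properly colored example")
```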
Proof. [2] We show how Sperner's lemma is used to prove this theorem.
For convenience, we work with a simplex instead of a ball, since the two are
homeomorphic. Specifically, let 𝑆 be a simplex embedded in
ℝ^{n+1} so that the vertices of 𝑆 are 𝑣_1 = (1, 0, ..., 0), 𝑣_2 = (0, 1, ..., 0), ..., and
𝑣_{n+1} = (0, 0, ..., 1). Let 𝑓 : 𝑆 → 𝑆 be a continuous map and assume that it
has no fixed point.
Now let us construct a sequence of subdivisions of 𝑆 denoted by 𝑆_1, 𝑆_2, 𝑆_3, …,
where each 𝑆_j is a subdivision of 𝑆_{j−1}, so that the size of each cell tends to
zero as 𝑗 → ∞. To define a coloring of 𝑆_j, assign a color 𝑐(𝑥) ∈ [𝑛 + 1] to
each vertex 𝑥 ∈ 𝑆_j such that (𝑓(𝑥))_{c(x)} < 𝑥_{c(x)}. To check that this is feasible, note
that for each point 𝑥 ∈ 𝑆, ∑ 𝑥_i = 1 and ∑ (𝑓(𝑥))_i = 1. Therefore, unless
𝑓(𝑥) = 𝑥, there is at least one coordinate 𝑖 such that (𝑓(𝑥))_i < 𝑥_i.
In case there are multiple coordinates with (𝑓(𝑥))_i < 𝑥_i, we pick
the smallest 𝑖.
Before applying Sperner's lemma, we have to verify that the coloring we
have assigned is a proper coloring in the sense of Sperner's lemma. For the vertices of
𝑆, 𝑣_i = (0, …, 1, …, 0), we have 𝑐(𝑣_i) = 𝑖, because the only coordinate where
(𝑓(𝑣_i))_i < (𝑣_i)_i is possible is the 𝑖-th coordinate. Similarly, for a face of
𝑆, e.g. 𝑥 ∈ conv{𝑣_i : 𝑖 ∈ 𝐴}, the only coordinates where (𝑓(𝑥))_i < 𝑥_i is
possible are those with 𝑖 ∈ 𝐴, and hence 𝑐(𝑥) ∈ 𝐴.
With the help of Sperner's lemma, we can claim that there exists a
rainbow cell with vertices 𝑥^{(j,1)}, …, 𝑥^{(j,n+1)} ∈ 𝑆_j for which (𝑓(𝑥^{(j,i)}))_i < 𝑥^{(j,i)}_i for
each 𝑖 ∈ [𝑛 + 1]. Since this is true for each 𝑆_j, we obtain a sequence of
points {𝑥^{(j,i)}} inside the compact set 𝑆, which has a convergent subsequence. By
passing to this subsequence, assume that {𝑥^{(j,i)}}
itself converges. As the size of the cells in 𝑆_j tends to zero, the limits
lim_{j→∞} 𝑥^{(j,i)} are the same for all 𝑖 ∈ [𝑛 + 1]. Let us call this common limit
point 𝑥* = lim_{j→∞} 𝑥^{(j,i)}.
As we have assumed that there is no fixed point, 𝑓(𝑥*) ≠ 𝑥*.
This implies that (𝑓(𝑥*))_i > 𝑥*_i for some coordinate 𝑖. But from the
discussion above, (𝑓(𝑥^{(j,i)}))_i < 𝑥^{(j,i)}_i for all 𝑗
and lim_{j→∞} 𝑥^{(j,i)} = 𝑥*, so by continuity (𝑓(𝑥*))_i ≤ 𝑥*_i. The
assumption that there is no fixed point is hence contradicted.
Then, 𝑄 has a fixed point, that is, there exists some 𝑥 ∈ 𝑋, such that
𝑥 ∈ 𝑄(𝑥).
and we get

𝑓^{(p)}(𝑥) = ∑_{j=0}^{n} 𝜆_j^{(p)} 𝑓^{(p)}(𝑎_j^{(p)})   (3.3)
Note that, since the barycentric coordinates of points are unique, if 𝑥 lies on
a common face, the two definitions coincide on the common face.
Now it is clear that the various maps 𝑓^{(p)} are continuous maps of the simplex
𝑋 into itself. Hence the Brouwer theorem guarantees that each has a fixed
point, say a point 𝑥_*^{(p)} such that 𝑓^{(p)}(𝑥_*^{(p)}) = 𝑥_*^{(p)}. If any of these
fixed points is a vertex, then it is a fixed point of 𝑄 by construction and
the proof is complete.
On the other hand, if none of these points is a vertex, then, for a given 𝑝,
we have

𝑥_*^{(p)} = ∑_{j=0}^{n} 𝜆_j^{(p)} 𝑎_j^{(p)}   (3.4)

and so, using the definition of 𝑓^{(p)} and the fact that 𝑥_*^{(p)} is its fixed point,
we have

𝑥_*^{(p)} = ∑_{j=0}^{n} 𝜆_j^{(p)} 𝑦_j^{(p)}   (3.5)
where

𝑥_*^{(p)} → 𝑥_* as 𝑝 → ∞   (3.7)
𝜆_j^{(p)} → 𝜆_j as 𝑝 → ∞, 𝑗 = 1, …, 𝑛   (3.8)
𝑦_j^{(p)} → 𝑦_j as 𝑝 → ∞, 𝑗 = 1, …, 𝑛.   (3.9)
Theorem 3.7 (Nash, 1950). Any game with finitely many players, each having
finitely many strategies, has at least one Nash equilibrium in mixed strategies.
△𝑆 = ∏_{i∈N} △𝑆_i
The preceding relations imply that for all 𝜆 ∈ [0, 1], we have

𝜆𝑢_i(𝜎_i′, 𝜎_{−i}) + (1 − 𝜆)𝑢_i(𝜎_i″, 𝜎_{−i}) ≥ 𝑢_i(𝜏_i, 𝜎_{−i}) for all 𝜏_i ∈ △𝑆_i.
By the linearity of 𝑢_i,

By the continuity of 𝑢_i and the fact that 𝜎_{−i}^n → 𝜎_{−i}, we have, for
sufficiently large 𝑛,

𝑢_i(𝜎_i′, 𝜎_{−i}^n) ≥ 𝑢_i(𝜎_i′, 𝜎_{−i}) − 𝜖

𝑢_i(𝜎_i′, 𝜎_{−i}^n) > 𝑢_i(𝜎̂_i, 𝜎_{−i}) + 2𝜖 ≥ 𝑢_i(𝜎̂_i^n, 𝜎_{−i}^n) + 𝜖
The existence of the fixed point then follows from Kakutani’s theorem. If
𝜎∗ ∈ 𝐵(𝜎∗ ), then by definition 𝜎∗ is a mixed strategy Nash equilibrium.
Part II
DYNAMIC GAMES OF
COMPLETE INFORMATION
4. PRELIMINARIES
1. Set of players 𝑁
Now, to overcome the limitations of normal-form games and capture
sequential play, we must specify not only what players
can do, as before, but also when they can do it. Thus in general we need
two more components to capture sequential play:
3. Order of moves
As some players move after choices are made by other players, we should
be able to describe the knowledge the players have about the history of the
game when it is their turn to move. Therefore, we add a fifth component to
the description of an extensive-form game:
Finally, we must account for the possibility that some random event, called
a move of Nature, can happen during the game. We will call such events
exogenous events, because the predetermined probability distribution of
Nature's choices is independent of the choices made by the strategic players.
Thus we represent actions of Nature as our sixth component:
To be able to analyze these situations with the methods and concepts
to which we have already been introduced, we add a final and familiar
requirement:
With these components in place, only one question arises: what kind of
notation can we use to put all this together? For this we borrow
the familiar concept of a decision tree and expand it to capture multiplayer
strategic situations.
[Game tree: player 1 chooses 𝑁 or 𝑇; 𝑁 ends the game with payoffs (0, 0), while after 𝑇 player 2 chooses 𝐶 or 𝐷, with payoffs (1, 1) and (−1, 2) respectively.]
The root of the tree, denoted by 𝑥_0, is a special node that precedes
every other 𝑥 ∈ 𝑋. Nodes that do not precede other nodes are called terminal
nodes, denoted by the set 𝑍 ⊂ 𝑋. Payoffs, which describe the outcomes of the
game, are attached to terminal nodes. Every node 𝑥 that is not a terminal
node is assigned either to a player 𝑖(𝑥), with the action set 𝐴_i(𝑥), or to
Nature.
Let us consider the familiar example of the Battle of the Sexes game discussed
in Section 2.4, but with a slight modification. Suppose that Alex finishes work
at 2:00 p.m. while Chris finishes work at 5:30 p.m. This gives Alex ample
time to decide whether to go to the football game or the opera and then to call
Chris at 5:00 p.m. to let him know where she actually is. Now Chris has to
decide where to go. If he chooses the venue where Alex is waiting, then
Chris gets a positive payoff (1 if Alex is at the opera and 2 if Alex
is at the football game). If Chris chooses the other venue, then
he gets 0. Hence a rational Chris should go to the same venue that Alex
did. Anticipating this, a rational Alex ought to choose the opera, because
then Alex gets 2 instead of the 1 she would get from football.
We will call this game the sequential-move Battle of the Sexes game; the
game tree representing the conditions above is depicted in Figure 4.2.
[Figure 4.2: the sequential-move Battle of the Sexes. Alex chooses 𝑂 or 𝐹; Chris observes the choice and picks 𝑜 or 𝑓, yielding payoffs (2, 1), (0, 0), (0, 0), and (1, 2).]
where a pure strategy "𝑎𝑏" means "player 2 will play 𝑎 if player 1 plays 𝑂
and 𝑏 if player 1 plays 𝐹". The set of pure strategies for player 1 remains {𝑂, 𝐹}.
Note: Assume that player 𝑖 has 𝑘 > 1 information sets, the first with
𝑚_1 actions to choose from, the second with 𝑚_2, and so on up to 𝑚_k.
Then

|𝑆_i| = 𝑚_1 × 𝑚_2 × … × 𝑚_k,

where |𝑆_i| denotes the number of elements in 𝑆_i, i.e., the total number of pure
strategies player 𝑖 has. For example, a player with 3 information sets, with 3
actions in the first, 3 in the second, and 5 in the third, will have a total of 45
pure strategies.
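This product formula, and the enumeration behind it, can be sketched computationally (my own illustration): a pure strategy picks one action per information set, so listing pure strategies is a Cartesian product.

```python
from itertools import product
from math import prod

# |S_i| is the product of the number of actions at each information set.
action_counts = [3, 3, 5]          # the example from the note above
print(prod(action_counts))         # 45

# Enumerating the pure strategies explicitly: with two information sets
# and actions {o, f} at each, the strategies are the Cartesian product.
strategies = [''.join(s) for s in product('of', repeat=2)]
print(strategies)                  # ['oo', 'of', 'fo', 'ff']
```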
[Figure 4.4: the sequential-move Battle of the Sexes with a behavioral strategy for player 2: at the information set reached after 𝑂 he plays 𝑜 with probability 1/3 and 𝑓 with probability 2/3; at the information set reached after 𝐹 he plays 𝑜 and 𝑓 with probability 1/2 each.]
which we will call ℎ_2^𝑂 and ℎ_2^𝐹. In each information set he has two actions to choose
from, i.e., 𝐴_2 = {𝑜, 𝑓}. Observe that player 2 must have (2 × 2 =) 4 pure strategies;
therefore, 𝑆_2 = {𝑜𝑜, 𝑜𝑓, 𝑓𝑜, 𝑓𝑓}. A mixed strategy would be a probability
distribution (𝑝_oo, 𝑝_of, 𝑝_fo, 𝑝_ff), where each probability is nonnegative and 𝑝_oo + 𝑝_of + 𝑝_fo + 𝑝_ff = 1.
Behavioral strategies, in contrast, are denoted 𝜎_2(𝑜(ℎ_2^𝑂)), 𝜎_2(𝑓(ℎ_2^𝑂)),
𝜎_2(𝑜(ℎ_2^𝐹)) and 𝜎_2(𝑓(ℎ_2^𝐹)), where 𝜎_2(𝑜(ℎ_2^𝑂)) + 𝜎_2(𝑓(ℎ_2^𝑂)) = 𝜎_2(𝑜(ℎ_2^𝐹)) + 𝜎_2(𝑓(ℎ_2^𝐹)) = 1.
In Figure 4.4, we have used 𝜎_2(𝑜(ℎ_2^𝑂)) = 1/3, 𝜎_2(𝑓(ℎ_2^𝑂)) = 2/3,
and 𝜎_2(𝑜(ℎ_2^𝐹)) = 𝜎_2(𝑓(ℎ_2^𝐹)) = 1/2.
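Under perfect recall, such a behavioral strategy is realization-equivalent to a mixed strategy (Kuhn's theorem): the probability of the pure strategy 𝑎𝑏 is the product of the probability of 𝑎 at ℎ_2^𝑂 and of 𝑏 at ℎ_2^𝐹. A quick sketch of this conversion (my own illustration):

```python
from fractions import Fraction as F

# Player 2's behavioral strategy from Figure 4.4.
behavior = {
    'O': {'o': F(1, 3), 'f': F(2, 3)},   # at information set h2^O
    'F': {'o': F(1, 2), 'f': F(1, 2)},   # at information set h2^F
}

# The realization-equivalent mixed strategy over S2 = {oo, of, fo, ff}:
# multiply the probabilities of the chosen action at each information set.
mixed = {a + b: behavior['O'][a] * behavior['F'][b] for a in 'of' for b in 'of'}
assert sum(mixed.values()) == 1
print(mixed['oo'], mixed['of'], mixed['fo'], mixed['ff'])   # 1/6 1/6 1/3 1/3
```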
5. SEQUENTIAL RATIONALITY
Consider the sequential Battle of the Sexes game shown in Figure 5.1.
The matrix representation of the game gives us the strategy profiles that are
Nash equilibria.
Chris
𝑜𝑜 𝑜𝑓 𝑓𝑜 𝑓𝑓
𝑂 2, 1 2, 1 0, 0 0, 0
Alex
𝐹 0, 0 1, 2 0, 0 1, 2
The strategy profiles resulting in Nash equilibrium are (𝑂, 𝑜𝑜), (𝑂, 𝑜𝑓), and (𝐹, 𝑓𝑓).
But this result does not tell us what player 2's best response is in
each of his information sets. It is obvious that if player 1
played 𝑂 then player 2 should play 𝑜, and if player 1 played 𝐹 then player 2
should play 𝑓. As Nash equilibrium is not enough to answer such questions,
we define a new concept called sequential rationality.
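The three equilibria listed above can be verified by a brute-force scan over the strategic form; the sketch below is my own illustration of that check.

```python
from itertools import product

# Strategic form of the sequential Battle of the Sexes: Alex picks O or F,
# Chris picks a contingent plan 'ab' (a if O was played, b if F was played).
payoffs = {
    ('O', 'oo'): (2, 1), ('O', 'of'): (2, 1), ('O', 'fo'): (0, 0), ('O', 'ff'): (0, 0),
    ('F', 'oo'): (0, 0), ('F', 'of'): (1, 2), ('F', 'fo'): (0, 0), ('F', 'ff'): (1, 2),
}
alex_moves, chris_strats = ['O', 'F'], ['oo', 'of', 'fo', 'ff']

def is_nash(a, c):
    # neither player can gain by a unilateral deviation
    ua, uc = payoffs[(a, c)]
    return all(payoffs[(a2, c)][0] <= ua for a2 in alex_moves) and \
           all(payoffs[(a, c2)][1] <= uc for c2 in chris_strats)

equilibria = [(a, c) for a, c in product(alex_moves, chris_strats) if is_nash(a, c)]
print(equilibria)   # [('O', 'oo'), ('O', 'of'), ('F', 'ff')]
```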
Definition 5.1. Given strategies 𝜎−𝑖 ∈ △𝑆−𝑖 of 𝑖’s opponents, we say that
[Figure 5.1: the sequential-move Battle of the Sexes game tree, as in Figure 4.2.]
Corollary 5.3. Any finite game of perfect information has at least one
sequentially rational Nash equilibrium in pure strategies. Furthermore, if no
two terminal nodes prescribe the same payoff to any player, then the game
has a unique sequentially rational Nash equilibrium.
[Figure 5.2: a game of perfect information.]
Consider Figures 5.2 and 5.3, depicting a game with perfect information
and one with imperfect information, respectively. The above definition enables us to
state an important concept used to cope with the limitation of backward
induction in games of imperfect information.
[Figure 5.3: a game of imperfect information.]
Only one of these Nash equilibria, namely the strategy profile (𝑂, 𝑜𝑓), satisfies the condition
of subgame-perfect equilibrium. This is because when we restrict the other two
equilibria to the proper subgames where player 2 has to make his choice,
they do not satisfy the condition of Nash equilibrium in one of the
proper subgames.
6. STRATEGIC BARGAINING
Bargaining is one of the first situations that comes to mind when we discuss
strategic interactions. In this chapter, we discuss an important example of an
extensive-form game: strategic bargaining. We will follow a particular
model, described as follows:
Assuming a constant discount factor 𝛿 for both players, we can summarize
the bargaining game as follows:
In the first round:
• Player 1 offers shares (𝑥, 1 − 𝑥), where player 1 gets 𝑥 and player 2
receives the remaining pie, i.e., 1 − 𝑥.
• Player 2 then chooses to accept player 1's offer or to reject it. Accepting
the offer ends the game with payoffs 𝑣_1 = 𝑥 and 𝑣_2 = 1 − 𝑥,
whereas by rejecting, player 2 gets the chance to make an offer to player
1, causing the game to proceed to the next stage.
In stage 2:
• A share of the pie, say (1 − 𝛿), is lost; the players now
have to bargain over the remaining 𝛿 portion of the total pie.
[Figure 6.1: the alternating-offers bargaining game. In odd periods player 1 proposes a split and player 2 accepts (𝐴) or rejects (𝑅); in even periods the roles are reversed. Agreement on (𝑥, 1 − 𝑥) in period 𝑡 yields discounted payoffs 𝛿^{t−1}(𝑥, 1 − 𝑥); perpetual rejection yields (0, 0).]
• If player 1 accepts the offer, the game ends with payoffs
𝑣_1 = 𝛿𝑥 and 𝑣_2 = 𝛿(1 − 𝑥); if he rejects, the game moves to the
third round.
In stage 𝑡:
The strategic bargaining game can be visualized in the game tree shown in Figure
6.1. We will discuss three important cases of strategic bargaining.
First, we analyze the most trivial case of bargaining, the ultimatum
game, where 𝑡 = 1. Advancing our discussion, we will analyze the case where
the bargaining must end by a time period 𝑡. There we will observe that the
bargaining should end in the first stage itself, given that the players are sequentially
rational. Finally, we will move to our goal of proving the existence of a Perfect
Equilibrium Partition (P.E.P.) in the infinite-horizon case where 𝑡 → ∞.
[The ultimatum game: player 1 proposes (𝑥, 1 − 𝑥); player 2 accepts (𝐴), ending the game with payoffs (𝑥, 1 − 𝑥), or rejects (𝑅), yielding (0, 0).]
As we usually do, we start our analysis by trying to find the paths of play
supported by Nash equilibrium. The result is quite surprising:
Proof. Let us construct a pair of strategies that are mutual best responses
and that lead to (𝑥*, 1 − 𝑥*) as the partition of the pie. Suppose that
player 1's strategy is to propose 𝑥*, and let player 2's strategy be to accept any
offer 𝑥 ≤ 𝑥* and reject any offer 𝑥 > 𝑥*. It is easy to observe that these
strategies are mutual best responses to each other, and this holds for any
value of 𝑥* ∈ [0, 1].
This proposition makes one thing clear: Nash equilibrium is too weak
a tool to analyze this case. Surprisingly, the same holds in the
further cases where 𝑡 is finite as well as when 𝑡 → ∞, the proofs of which
will be discussed in later sections. Therefore, we resort to the concepts we have
developed in the previous chapter. Will sequential rationality give us a more
precise solution to our problem? The following proposition is expected
to answer this question.
Proof. We have already argued that player 2 must accept any offer that gives him
a positive share, i.e., any 𝑥 with 0 < 𝑥 < 1. Player 2 is indifferent between accepting and rejecting 𝑥 = 1,
as in both cases he gets a payoff of 0. If he accepts 𝑥 = 1, the proposed
strategy is sequentially rational, and the unique best response of player 1
to player 2's strategy is to offer 𝑥 = 1. Observe that the only other sequentially
rational strategy available to player 2 is to accept any positive offer
and to reject 𝑥 = 1 (which gives him 0). But player 1 has no best response
to this strategy, because player 1's best-response correspondence is
discontinuous at 𝑥 = 1. So it cannot be part of a subgame-perfect
equilibrium.
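The discontinuity at 𝑥 = 1 is easiest to see on a discretized version of the game. The sketch below is my own illustration (the function name and grid are hypothetical): on a finite grid of offers a best demand always exists, but as the grid refines, the best demand against the "reject zero shares" policy approaches 1 without ever reaching it.

```python
# A discretized ultimatum game: player 1 demands a share x from a finite
# grid; player 2's acceptance policy decides on his share 1 - x.
# Rejection gives both players 0.
def best_demand(grid_size, responder_accepts):
    offers = [k / grid_size for k in range(grid_size + 1)]
    acceptable = [x for x in offers if responder_accepts(1 - x)]
    return max(acceptable) if acceptable else None

# Responder accepts any share (resolving the indifference at 0 by accepting):
print(best_demand(100, lambda share: share >= 0))   # 1.0
# Responder rejects a zero share: on the grid the best demand is 0.99, but
# in the continuum no best response exists, since the best-response
# correspondence is discontinuous at x = 1.
print(best_demand(100, lambda share: share > 0))    # 0.99
```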
• Let 𝑆 = [0, 1], where 𝑠 ∈ [0, 1] is the partition of the pie awarded to
player 1.
• That is, 𝐹 is the set of all strategies of the player who starts the
bargaining.
• Whereas 𝐺 is the set of all strategies of the player who in the first
move has to respond to the other player's offer.
• Assuming that player 1 starts the game, 𝑃(𝑓̂, 𝑔̂) is the partition
player 1 will get if he plays 𝑓̂ and player 2 plays 𝑔̂.
The above result is true for any horizon, that is, for 𝑡 finite and even when 𝑡 →
∞. However, (𝑓̂, 𝑔̂) as described in the previous proposition need not be a perfect
equilibrium. For instance, take 𝑠 = 0.5 with fixed bargaining costs 𝑐_1 = 0.1
and 𝑐_2 = 0.2. Observe that player 2 plans to reject a possible offer of 0.6 by
player 1, i.e., 𝑔̂_1(0.6) = 𝑁. After such a rejection the players are expected to agree
on 0.5, i.e., 𝑃(𝑓̂|0.6, 𝑔̂|0.6) = 0.5. Therefore, player 2 will get 0.5 − 0.2 = 0.3
after the first round, violating sequential rationality, as he can get 0.4 in the
initial round itself.
Another valuable property of finitely many rounds of bargaining is
that the bargaining should end in the first stage itself.
Proof. Suppose the agreement is reached at a later stage with payoffs (𝑣_1′, 𝑣_2′).
Discounting implies that

But then player 1 could deviate and offer 𝑥 = 1 − 𝑣_2′ − 𝜖 for some small 𝜖 > 0,
which guarantees player 2 the payoff 𝑣_2′ + 𝜖 in the first round.
Sequential rationality implies that player 2 should accept this offer immediately,
and for 𝜖 small enough this gives player 1 a payoff greater than 𝑣_1′.
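The backward-induction logic behind this result can be sketched numerically for a 𝑇-round game with a common discount factor 𝛿 (my own illustration; the function name and parameter values are hypothetical). In the last round the proposer keeps the whole pie; one round earlier the proposer offers the responder exactly 𝛿 times the responder's continuation share, and so on back to the first round, where agreement is reached immediately.

```python
# Backward induction in T-round alternating-offers bargaining with a
# common discount factor delta. Returns the first proposer's share.
def proposer_share(T, delta):
    s = 1.0                      # last-round proposer keeps the whole pie
    for _ in range(T - 1):
        s = 1.0 - delta * s      # leave the responder his continuation value
    return s

print(proposer_share(1, 0.5))    # 1.0  (the ultimatum game)
print(proposer_share(2, 0.5))    # 0.5
print(proposer_share(3, 0.5))    # 0.75
# As T grows, the share tends to 1/(1 + delta), matching the
# infinite-horizon formula (1 - d)/(1 - d^2) with equal discount factors.
print(abs(proposer_share(200, 0.5) - 2 / 3) < 1e-9)   # True
```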
A similar proposition holds in the infinite-horizon case, except that the
agreement is reached in at most two stages. Before that, it is important
to prove the existence of such a perfect equilibrium partition (P.E.P.), which will
be done in the next chapter.
7. THE INFINITE HORIZON GAME
In this chapter, we will follow the Rubinstein bargaining model and discuss the
existence of a P.E.P. in bargaining with 𝑡 → ∞. We will start by finding
some relations between the set of P.E.P.s when player 1 starts the game and the
set of P.E.P.s when player 2 starts the game. Advancing the discussion, we will
prove that the set of P.E.P.s is nonempty and that the player who starts the
game has a first-mover advantage. To conclude this chapter, we will calculate
the P.E.P. under two kinds of discounting, i.e., fixed bargaining costs and fixed
discounting factors.
7.1 Preliminaries
Let us define some new notation that will be used throughout this chapter.
Let 𝜎(𝑓, 𝑔) be the sequence of offers when player 1 starts the bargaining and follows
𝑓 ∈ 𝐹 while player 2 adopts 𝑔 ∈ 𝐺. Let 𝑇(𝑓, 𝑔) be the length of 𝜎(𝑓, 𝑔), which may
be infinite. Let 𝐷(𝑓, 𝑔) be the element of 𝜎(𝑓, 𝑔) at the terminal node (if
such an element exists); call 𝐷(𝑓, 𝑔) the partition induced by (𝑓, 𝑔). The
outcome function 𝑃(𝑓, 𝑔) of the game is defined by:
𝑃(𝑓, 𝑔) = (𝐷(𝑓, 𝑔), 𝑇(𝑓, 𝑔)) if 𝑇(𝑓, 𝑔) < ∞, and 𝑃(𝑓, 𝑔) = (0, ∞) if 𝑇(𝑓, 𝑔) = ∞.
Similarly, consider the case when player 2 starts the game. We define 𝜎(𝑔, 𝑓), 𝑇(𝑔, 𝑓),
𝐷(𝑔, 𝑓) and 𝑃(𝑔, 𝑓) analogously; here player 2 starts the game and adopts
𝑓 ∈ 𝐹, while player 1 adopts 𝑔 ∈ 𝐺.
Before proceeding further, let us recall the preference relation on the set
of outcomes. We assume that player 𝑖 has a preference relation ≳_i that is
complete, reflexive and transitive, defined on the set (𝑆 × 𝑁) ∪ {(0, ∞)},
where 𝑁 is the set of natural numbers.
We assume that the following assertions are satisfied by the preference
relation:
(A-2) if 𝑠_i > 0 and 𝑡_2 > 𝑡_1, then (𝑠, 𝑡_1) >_i (𝑠, 𝑡_2) >_i (0, ∞)
After defining the types of discounting, let us define two sets: (𝐴) the set of
all P.E.P.s in a game in which player 1 starts the bargaining and makes
the first offer, i.e., {𝑠 ∈ 𝑆 | there is a P.E. (𝑓, 𝑔) ∈ 𝐹 × 𝐺 such that 𝑠 =
𝐷(𝑓, 𝑔)}; and (𝐵) the set of all P.E.P.s in a game in which player 2 starts
the bargaining, i.e., {𝑠 ∈ 𝑆 | there is a P.E. (𝑔, 𝑓) ∈ 𝐹 × 𝐺 such that 𝑠 =
𝐷(𝑔, 𝑓)}. In the following lemmas, we establish some relations
between these two sets.
7. The Infinite Horizon Game 43
Lemma 7.1. For all 𝑎 ∈ 𝐴 and for all 𝑏 ∈ 𝑆 such that 𝑏 > 𝑎, there is 𝑐 ∈ 𝐵
such that (𝑐, 1) ≳_2 (𝑏, 0).
Proof. Informally, this lemma says that if 𝑎 ∈ 𝐴, then player 1 cannot
profitably claim any larger share 𝑏 > 𝑎: player 2 must reject such an
offer, and to be willing to reject it, he must expect some better payoff in the future,
that is, there must be some 𝑐 ∈ 𝐵 such that (𝑐, 1) ≳_2 (𝑏, 0).
Formally, let (𝑓̂, 𝑔̂) be a P.E. such that 𝐷(𝑓̂, 𝑔̂) = 𝑎, and let 𝑏 ∈ 𝑆 with
𝑏 > 𝑎. Observe that 𝑔̂_1(𝑏) = 𝑁: otherwise, if 𝑓_1 = 𝑏, then 𝑃(𝑓, 𝑔̂) = (𝑏, 1) >_1
(𝑎, 1) ≳_1 (𝑎, 𝑇(𝑓̂, 𝑔̂)) = 𝑃(𝑓̂, 𝑔̂), so that 𝑃(𝑓, 𝑔̂) >_1 𝑃(𝑓̂, 𝑔̂), which
violates the property of perfect equilibrium. Also, 𝑃(𝑓̂|𝑏, 𝑔̂|𝑏) ≳_2 (𝑏, 0);
thus (𝐷(𝑓̂|𝑏, 𝑔̂|𝑏), 𝑇(𝑓̂|𝑏, 𝑔̂|𝑏)) ≳_2 (𝑏, 0), and (𝐷(𝑓̂|𝑏, 𝑔̂|𝑏), 1) ≳_2 (𝑏, 0) by (A-2).
Therefore 𝐷(𝑓̂|𝑏, 𝑔̂|𝑏) is the desired 𝑐.
Lemma 7.2. For all 𝑎 ∈ 𝐵 and for all 𝑏 ∈ 𝑆 such that 𝑏 < 𝑎, there is 𝑐 ∈ 𝐴
such that (𝑐, 1) ≳1 (𝑏, 0).
Lemma 7.3. For all 𝑎 ∈ 𝐴 and for all 𝑏 ∈ 𝑆 such that (𝑏, 1) >2 (𝑎, 0) there
is 𝑐 ∈ 𝐴 such that (𝑐, 1) ≳1 (𝑏, 0).
Proof. This lemma says that player 1 should have a strong reason to reject
any offer from player 2; knowing this, player 2 will accept the partition offered
by player 1 originally. Let (𝑓̂, 𝑔̂) be a P.E. such that 𝐷(𝑓̂, 𝑔̂) = 𝑎. Now consider
the following possibilities:
Case A: 𝑔̂_1(𝑓̂_1) = 𝑁. Let 𝑓̂_1 = 𝑠. Then 𝐷(𝑓̂|𝑠, 𝑔̂|𝑠) = 𝑎 and 𝑎 ∈ 𝐵. From
(A-1) and (A-2), we know that if (𝑏, 2) >_2 (𝑎, 1) then 𝑏 < 𝑎. Therefore,
by Lemma 7.2, there is 𝑐 ∈ 𝐴 such that (𝑐, 1) ≳_1 (𝑏, 0).
Case B: Let 𝑓̂_1 = 𝑎 and 𝑔̂_1(𝑎) = 𝑌. Suppose that 𝑏 satisfies (𝑏, 1) >_2 (𝑎, 0).
Then 𝑓̂_2(𝑎, 𝑏) = 𝑁: if not, then for any 𝑓 ∈ 𝐹 satisfying 𝑓_1 = 𝑏, 𝑃(𝑓̂|𝑎, 𝑓) =
(𝑏, 1) >_2 (𝑎, 0), which contradicts the definition of perfect equilibrium.
Also 𝑃(𝑓̂|𝑎, 𝑏, 𝑔̂|𝑎, 𝑏) ≳_1 (𝑏, 0). Therefore (𝐷(𝑓̂|𝑎, 𝑏, 𝑔̂|𝑎, 𝑏), 1) ≳_1 (𝑏, 0)
and 𝐷(𝑓̂|𝑎, 𝑏, 𝑔̂|𝑎, 𝑏) ∈ 𝐴.
Lemma 7.4. For all 𝑎 ∈ 𝐵 and for all 𝑏 ∈ 𝑆 such that (𝑏, 1) >1 (𝑎, 0) there
is 𝑐 ∈ 𝐵 such that (𝑐, 1) ≳2 (𝑏, 0).
Proof. The theorem will be proved in three stages. Starting with our first
claim, we prove the following statement:
𝑓̂^t ≡ 𝑥,   𝑔̂^t(𝑠^1 … 𝑠^t) = 𝑌 if 𝑠^t ≦ 𝑥, and 𝑁 if 𝑠^t > 𝑥;

𝑓̂^t(𝑠^1 … 𝑠^t) = 𝑌 if 𝑠^t ≧ 𝑦, and 𝑁 if 𝑠^t < 𝑦,   𝑔̂^t ≡ 𝑦.
Claim 2: △ ≠ ∅.
This claim also implies that the sets 𝐴 and 𝐵 are not empty. Define

𝑑_1(𝑥) = 0 if (𝑦, 0) >_1 (𝑥, 1) for all 𝑦, and 𝑑_1(𝑥) = 𝑦 if there exists 𝑦 with (𝑦, 0) ∼_1 (𝑥, 1);

and

𝑑_2(𝑦) = 1 if (𝑥, 0) >_2 (𝑦, 1) for all 𝑥, and 𝑑_2(𝑦) = 𝑥 if there exists 𝑥 with (𝑥, 0) ∼_2 (𝑦, 1).
Observe that 𝑑_1(𝑥) is the smallest 𝑦 such that (𝑦, 0) ≳_1 (𝑥, 1), and 𝑑_2(𝑦) is
the largest 𝑥 such that (𝑥, 0) ≳_2 (𝑦, 1). Since both players
try to maximize their respective payoffs at each stage due to sequential
rationality, we get
It is easy to show that 𝑑_1 and 𝑑_2 are well-defined, continuous and increasing
functions. Moreover, 𝑑_1 and 𝑑_2 are strictly increasing wherever 𝑑_1(𝑥) > 0 and
𝑑_2(𝑦) < 1, respectively.
Define 𝐷(𝑥) = 𝑑_2(𝑑_1(𝑥)). Thus △ = {(𝑥, 𝑦) | 𝑦 = 𝑑_1(𝑥) and 𝑥 = 𝑑_2(𝑑_1(𝑥))}.
Notice that 𝐷(1) ≦ 1 and 𝐷(0) ≧ 0. By the continuity of the function 𝐷, there
exists a fixed point 𝑥_0 such that 𝐷(𝑥_0) = 𝑥_0. Hence (𝑥_0, 𝑑_1(𝑥_0)) ∈ △.
7.3 Conclusion
Having proved that a P.E.P. always exists in an infinite-horizon bargaining
game, let us calculate the P.E.P. for the two discounting models discussed before.
Corollary 7.6. Suppose that both players have fixed bargaining costs
(𝑐_1, 𝑐_2). Then:
(1) If 𝑐_1 > 𝑐_2, (𝑐_2, 1 − 𝑐_2) is the only P.E.P. (Perfect Equilibrium Partition).
[Figure 7.1: one round of bargaining with fixed costs. (a) If player 1 starts: player 1 offers 𝑥; player 2 accepts (𝑌), giving (𝑥, 1 − 𝑥), or rejects (𝑁) and offers 𝑦, which player 1 may accept, giving (𝑦 − 𝑐_1, 1 − 𝑦 − 𝑐_2). (b) If player 2 starts: player 2 offers 𝑦; player 1 accepts, giving (𝑦, 1 − 𝑦), or rejects and offers 𝑥, which player 2 may accept, giving (𝑥 − 𝑐_1, 1 − 𝑥 − 𝑐_2).]
Observe that if player 1 starts the bargaining, then the game tree looks
like Figure 7.1(a), and like Figure 7.1(b) if player 2 starts the game.
Suppose that player 1 starts the game. In an infinite-horizon game,
the subgame starting from the node where player 2 has to offer the partition is
the same as the game in which player 2 starts the bargaining. Because of this,
the players should reach an agreement either in the first stage or in the second
stage. If not, suppose that the game ends at a later stage with payoffs (𝑣, 1 − 𝑣);
then, by reasoning similar to Proposition 6.5, this cannot
happen with sequentially rational players. Due to discounting, each player
wants to end the game immediately, in the first stage. Hence both
players try to convince the other player not to reject the offer. This is possible
by making an offer at least as good as the partition the other player can secure
by rejecting and bargaining in the next round.
1 − 𝑥 ≥ 1 − (𝑦 + 𝑐2 ) ⟹ 𝑥 ≤ 𝑦 + 𝑐2
𝑦 ≥ 𝑥 − 𝑐1
Given the above conditions, both players also want to maximize their own partitions.
Thus △, the set of all P.E.P.s, is the set of all solutions to the
equations 𝑦 = max{𝑥 − 𝑐_1, 0} and 𝑥 = min{𝑦 + 𝑐_2, 1}. The solution is
implied by the three diagrams of Figure 7.3, corresponding to the cases (1) 𝑐_1 > 𝑐_2,
(2) 𝑐_1 = 𝑐_2, and (3) 𝑐_1 < 𝑐_2.
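The three cases can be explored numerically. The following sketch (my own illustration; the grid search and tolerance are arbitrary choices) scans a grid of 𝑥 values for solutions of the system 𝑦 = max{𝑥 − 𝑐_1, 0}, 𝑥 = min{𝑦 + 𝑐_2, 1}:

```python
# Scan a grid of x values for solutions of the fixed-cost P.E.P. system
#   y = max(x - c1, 0),  x = min(y + c2, 1).
def pep_fixed_costs(c1, c2, n=10000):
    sols = []
    for k in range(n + 1):
        x = k / n
        y = max(x - c1, 0.0)
        if abs(x - min(y + c2, 1.0)) < 1e-12:   # x is consistent with y
            sols.append((round(x, 6), round(y, 6)))
    return sols

print(pep_fixed_costs(0.2, 0.1))        # c1 > c2: unique, player 1 gets c2
print(pep_fixed_costs(0.1, 0.2))        # c1 < c2: unique, player 1 gets 1
print(len(pep_fixed_costs(0.1, 0.1)))   # c1 = c2: a continuum of solutions
```

The outputs match the corollary: with 𝑐_1 > 𝑐_2 the only solution gives player 1 the share 𝑐_2; with 𝑐_1 < 𝑐_2 player 1 takes the whole pie; with 𝑐_1 = 𝑐_2 there are multiple solutions.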
We will only consider the case when at least one of the 𝛿_i is strictly
less than 1 and at least one of them is strictly positive; then the only P.E.P.
is 𝑃 = (1 − 𝛿_2)/(1 − 𝛿_1𝛿_2). This is because in the case where 𝛿_1 = 𝛿_2 = 1,
both players can keep playing the game forever, as there is
no threat of losing any of the partition after a rejection. Similarly, the case
𝛿_1 = 𝛿_2 = 0 is equivalent to the ultimatum game: both parties
get zero payoff if rejection occurs immediately after the first stage.
Corollary 7.7. Suppose that both players have fixed discounting factors
(𝛿_1, 𝛿_2). If at least one of the 𝛿_i is strictly less than 1 and at least one
of them is strictly positive, then the only P.E.P. is 𝑃 = (1 − 𝛿_2)/(1 − 𝛿_1𝛿_2).
Proof. Similarly to the previous proof, △ in this case
is the set of all solutions of the equations 𝑥 = 1 − 𝛿_2(1 − 𝑦) and 𝑦 = 𝛿_1𝑥. The
solution of the equation 𝑥 = 1 − 𝛿_2(1 − 𝛿_1𝑥) is 𝑥 = (1 − 𝛿_2)/(1 − 𝛿_1𝛿_2). The
result follows from Figure 7.4.
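The fixed point in this proof can also be found numerically. The sketch below (my own illustration) iterates the composed map 𝑥 ↦ 1 − 𝛿_2(1 − 𝛿_1𝑥) to its fixed point and compares the result with the closed form:

```python
# Iterate x -> 1 - d2*(1 - d1*x); its fixed point is player 1's share
# in the unique P.E.P. The map is a contraction with factor d1*d2 < 1,
# so the iteration converges from any starting point.
def rubinstein_share(d1, d2, iters=200):
    x = 0.5                       # arbitrary starting guess
    for _ in range(iters):
        x = 1 - d2 * (1 - d1 * x)
    return x

closed_form = lambda d1, d2: (1 - d2) / (1 - d1 * d2)

print(abs(rubinstein_share(0.9, 0.8) - closed_form(0.9, 0.8)) < 1e-9)  # True
print(closed_form(0.5, 0.5))   # equals 1/(1 + delta) for equal discount factors
```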
[3] J.R. Munkres. Topology. Featured Titles for Topology Series. Prentice
Hall, Incorporated, 2000.