
GAME THEORY AND STRATEGIC

BARGAINING

A THESIS
submitted in partial fulfillment of the requirements
for the award of the dual degree of

Bachelor of Science-Master of Science


in

MATHEMATICS
by

MOHAMMED RAMEEZ QURESHI


(13088)

DEPARTMENT OF MATHEMATICS
INDIAN INSTITUTE OF SCIENCE EDUCATION AND
RESEARCH BHOPAL
BHOPAL - 462066

April 2018

CERTIFICATE

This is to certify that Mohammed Rameez Qureshi, BS-MS (Dual Degree) student in the Department of Mathematics, has completed bona fide work on the dissertation entitled ‘Game Theory and Strategic Bargaining’ under my supervision and guidance.

April 2018 Dr. Nikita Agarwal


IISER Bhopal

Committee Member Signature Date



ACADEMIC INTEGRITY AND


COPYRIGHT DISCLAIMER

I hereby declare that this MS-Thesis is my own work and, to the best of
my knowledge, that it contains no material previously published or written
by another person, and no substantial proportions of material which have
been accepted for the award of any other degree or diploma at IISER Bhopal
or any other educational institution, except where due acknowledgement is
made in the document.

I certify that all copyrighted material incorporated into this document is


in compliance with the Indian Copyright (Amendment) Act (2012) and that
I have received written permission from the copyright owners for my use of
their work, which is beyond the scope of that law. I agree to indemnify and
safeguard IISER Bhopal from any claims that may arise from any copyright
violation.

April 2018 Mohammed Rameez Qureshi


IISER Bhopal

ACKNOWLEDGEMENT

I would like to express my sincere gratitude to my project co-adviser Dr.


Arya Kumar Srustidhar Chand for the continuous support throughout my
project and for his forbearance, indulgence and immense knowledge.
Besides my adviser, I would like to give my sincere regards to other
project evaluation committee members Dr. Nikita Agarwal and Dr. Kashyap
Rajeevsarathy for their discerning suggestions and inputs.
In the pursuit of this project, nobody has been more important than my
family members. I would like to thank my parents for their unconditional
love and support. I’m grateful to my elder brother, Rehan, due to whom I
don’t have to find a role model elsewhere and to my younger brother, Rizwan,
for his encouragement and motivation. I am thankful to Tahseen Bhabhijaan
and Shekhu for sharing such a special bond with me that I cherish forever.
Last but not the least, I would like to thank my friends who supported me
in my ups and downs during the span of five long years at IISER Bhopal.

Mohammed Rameez Qureshi



ABSTRACT

This study aims to analyze the mathematical framework of game theory and its applications. The goal is to study models in which players interact strategically in various interesting settings. Starting with the basic concepts of rational decision making, the notions of normal form games and equilibrium are defined and explored through examples. The much-celebrated Nash equilibrium and its existence are introduced in the subsequent sections, using examples that highlight its advantages and disadvantages. The definitions follow the book Game Theory: An Introduction by Steven Tadelis [6].

In the second part of this study, the aim is to analyze the bargaining model given by A. Rubinstein [5] for the infinite horizon case. For this, dynamic games of complete information are discussed, in which the actions of players may change as the game unfolds over time. Finally, this study provides an overview of the existence of a perfect equilibrium in the bargaining model with infinitely many stages.
CONTENTS

Certificate
Academic Integrity and Copyright Disclaimer
Acknowledgement
Abstract

Part I: Static Games of Complete Information

1. Rational Decision Making
   1.1 Rational Preference Relation
   1.2 Rationality

2. Static Games of Complete Information
   2.1 Normal Form Games
       2.1.1 Example: Prisoner's Dilemma
   2.2 Mixed Strategies
   2.3 Solution/Equilibrium Concepts
       2.3.1 Strictly Dominant Strategy Equilibrium
       2.3.2 IESDS
       2.3.3 Rationalizability
   2.4 Nash Equilibrium

3. Existence of Nash Equilibrium

Part II: Dynamic Games of Complete Information

4. Preliminaries
   4.1 The Extensive Form Game
       4.1.1 Game Trees
   4.2 Strategies and Nash Equilibrium
       4.2.1 Pure Strategies
       4.2.2 Mixed versus Behavioral Strategies

5. Sequential Rationality
   5.1 Subgame Perfect Equilibrium

6. Strategic Bargaining
   6.1 The Ultimatum Game
   6.2 Finitely Many Rounds of Bargaining

7. The Infinite Horizon Game
   7.1 Preliminaries
   7.2 Existence of Perfect Equilibrium Partition
   7.3 Conclusion

Bibliography
Part I

STATIC GAMES OF COMPLETE


INFORMATION
1. RATIONAL DECISION MAKING

The motivation for studying this topic is to understand a method for systematically selecting among possible choices based on reason and facts. In this
section, we study the fundamental ideas related to the decision problem. The
decision problem consists of three features:

1. Actions (𝐴) are all the alternatives from which the player can choose.

2. Outcomes (𝑋) are the possible consequences that can result from any
of the actions.

3. Preferences describe how the player ranks the set of possible outcomes,
from the most desired to least desired.

The preference relation ⪰ is defined as a binary relation that describes the


player’s preferences, and the notation 𝑥 ⪰ 𝑦 means “𝑥 is preferred over 𝑦” or
“𝑥 is at least as good as 𝑦”.

1.1 Rational Preference Relation


In order to define rational preference relation, we discuss some important
axioms including:

The Completeness Axiom The preference relation ⪰ is complete if any


two outcomes 𝑥, 𝑦 ∈ 𝑋 can be ranked by the preference relation, so
that either 𝑥 ⪰ 𝑦 or 𝑦 ⪰ 𝑥 or both.

The Transitivity Axiom The preference relation ⪰ is transitive if for any


three outcomes 𝑥, 𝑦, 𝑧 ∈ 𝑋, if 𝑥 ⪰ 𝑦 and 𝑦 ⪰ 𝑧 then 𝑥 ⪰ 𝑧.

We define a preference relation that is complete and transitive as a rational


preference relation.

Example 1.1. The ≥ relation over real numbers is a rational preference


relation.

1.2 Rationality
Based on the above observations, we can now define various important concepts
used for further development of the theory of rational decision making. Some
important definitions are as follows:

Definition 1.2. A payoff/utility function 𝑢 ∶ 𝑋 → ℝ represents the preference


relation ⪰ if for any pair 𝑥, 𝑦 ∈ 𝑋 , 𝑢(𝑥) ≥ 𝑢(𝑦) if and only if 𝑥 ⪰ 𝑦.

Definition 1.3. Let 𝑢(𝑥) be the player’s payoff function over outcomes in 𝑋 = {𝑥1 , 𝑥2 , … , 𝑥𝑛 }, and let 𝑝 = (𝑝1 , 𝑝2 , … , 𝑝𝑛 ) be a lottery over 𝑋 such that 𝑝𝑘 = Pr{𝑥 = 𝑥𝑘 }. Then we define the player’s expected payoff from the lottery 𝑝 as

𝐸[𝑢(𝑥)|𝑝] = ∑_{𝑘=1}^{𝑛} 𝑝𝑘 𝑢(𝑥𝑘 ) = 𝑝1 𝑢(𝑥1 ) + 𝑝2 𝑢(𝑥2 ) + … + 𝑝𝑛 𝑢(𝑥𝑛 ).

A player facing a decision problem with a payoff function 𝑣(⋅) over actions is rational if he chooses an action 𝑎 ∈ 𝐴 that maximizes his payoff. That is, 𝑎∗ ∈ 𝐴 is chosen if and only if 𝑣(𝑎∗ ) ≥ 𝑣(𝑎) for all 𝑎 ∈ 𝐴.
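As a concrete illustration (with hypothetical numbers, not taken from the text), the following Python sketch computes an expected payoff as in Definition 1.3 and selects a payoff-maximizing action.

    # A minimal sketch (illustrative values) of Definition 1.3 and of
    # rational choice as payoff maximization.

    def expected_payoff(u, p):
        """E[u(x) | p] = sum_k p_k * u(x_k) for a lottery p over the outcomes."""
        return sum(pk * uk for pk, uk in zip(p, u))

    u = [10, 4, 0]          # hypothetical payoffs u(x_1), u(x_2), u(x_3)
    p = [0.2, 0.5, 0.3]     # a lottery over the three outcomes
    print(expected_payoff(u, p))   # 0.2*10 + 0.5*4 + 0.3*0 = 4.0

    # A rational player chooses a* with v(a*) >= v(a) for all a in A.
    v = {"a1": 3.0, "a2": 7.0, "a3": 5.0}   # hypothetical payoffs over actions
    print(max(v, key=v.get))                # 'a2'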
2. STATIC GAMES OF COMPLETE
INFORMATION

Under this chapter, we discuss the definition of a normal form game. We also
define notions of dominated and dominant strategies, and study topics like
consequences of assuming rationality and common knowledge of rationality.

2.1 Normal Form Games


We define a normal-form game as a game which includes three components
as follows:
1. A finite set of players, 𝑁 = {1, 2, ..., 𝑛}.

2. A collection of sets of pure strategies, {𝑆1 , 𝑆2 , ..., 𝑆𝑛 }.

3. A set of payoff functions, {𝑣1 , 𝑣2 , ..., 𝑣𝑛 }, each assigning a payoff


value to each combination of chosen strategies; that is, a set of functions 𝑣𝑖 ∶ 𝑆1 × 𝑆2 × ... × 𝑆𝑛 → ℝ for each 𝑖 ∈ 𝑁 .
An event 𝐸 is common knowledge if (1) everyone knows 𝐸, (2) everyone
knows that everyone knows 𝐸, and so on.
A normal form game of complete information requires that the above three
components are common knowledge among all the players of the game.

2.1.1 Example: Prisoner’s Dilemma


Players: 𝑁 = {1, 2}.
Strategy sets: 𝑆𝑖 = {𝑀 , 𝐹 } for 𝑖 ∈ {1, 2}.

Payoffs: Let 𝑣𝑖 (𝑠1 , 𝑠2 ) be the payoff to player 𝑖 if player 1 chooses 𝑠1 and


player 2 chooses 𝑠2 . We can then write payoffs as

𝑣1 (𝑀 , 𝑀 ) = 𝑣2 (𝑀 , 𝑀 ) = −2
𝑣1 (𝐹 , 𝐹 ) = 𝑣2 (𝐹 , 𝐹 ) = −4
𝑣1 (𝑀 , 𝐹 ) = 𝑣2 (𝐹 , 𝑀 ) = −5
𝑣1 (𝐹 , 𝑀 ) = 𝑣2 (𝑀 , 𝐹 ) = −1.

The matrix representation of the Prisoner’s Dilemma game is as follows:

                     Player 2
                      M        F
    Player 1   M    −2, −2   −5, −1
               F    −1, −5   −4, −4

2.2 Mixed Strategies


After observing various examples of normal form games, we discuss the
concept of mixed strategies.
Let 𝑆𝑖 = {𝑠𝑖1 , 𝑠𝑖2 , ..., 𝑠𝑖𝑚 } be player 𝑖’𝑠 finite set of pure strategies. Define
△𝑆𝑖 as the simplex of 𝑆𝑖 , which is the set of all probability distributions
over 𝑆𝑖 .

Definition 2.1. A mixed strategy for player 𝑖 is an element 𝜎𝑖 ∈ △𝑆𝑖 , so


that 𝜎𝑖 = {𝜎𝑖 (𝑠𝑖1 ), 𝜎𝑖 (𝑠𝑖2 ), ..., 𝜎𝑖 (𝑠𝑖𝑚 )} is a probability distribution over 𝑆𝑖 ,
where 𝜎𝑖 (𝑠𝑖𝑗 ) is the probability that player 𝑖 plays 𝑠𝑖𝑗 .

Remark. A mixed strategy for player 𝑖 is just a probability distribution over


his pure strategies.

2.3 Solution/Equilibrium Concepts


In the previous sections, we have focused on how to describe a game formally
and fit it into a well-defined structure. Our next aim is to be able to either
advise players on how to play or try to predict how players will play. To
accomplish this, we need some method to solve the game, and in this section

we outline some criteria that will be helpful in evaluating potential methods


to analyze and solve games.

2.3.1 Strictly Dominant Strategy Equilibrium


Definition 2.2. Let 𝑠𝑖 ∈ 𝑆𝑖 and 𝑠′𝑖 ∈ 𝑆𝑖 be possible strategies for player 𝑖.
We say that 𝑠′𝑖 is strictly dominated by 𝑠𝑖 if for any possible combination
of the other players’ strategies, 𝑠−𝑖 ∈ 𝑆−𝑖 , player 𝑖’𝑠 payoff from 𝑠′𝑖 is strictly
less than that from 𝑠𝑖 . That is,

𝑣𝑖 (𝑠𝑖 , 𝑠−𝑖 ) > 𝑣𝑖 (𝑠′𝑖 , 𝑠−𝑖 ) for all 𝑠−𝑖 ∈ 𝑆−𝑖 .

We will write 𝑠𝑖 ≻𝑖 𝑠𝑖 ′ to denote that 𝑠𝑖 ′ is strictly dominated by 𝑠𝑖 .

Definition 2.3. 𝑠𝑖 ∈ 𝑆𝑖 is a strictly dominant strategy for 𝑖 if every


other strategy of 𝑖 is strictly dominated by it, that is,

𝑣𝑖 (𝑠𝑖 , 𝑠−𝑖 ) > 𝑣𝑖 (𝑠′𝑖 , 𝑠−𝑖 ) for all 𝑠′𝑖 ∈ 𝑆𝑖 , 𝑠′𝑖 ≠ 𝑠𝑖 , and all 𝑠−𝑖 ∈ 𝑆−𝑖 .

Definition 2.4. A strategy profile 𝑠^𝐷 ∈ 𝑆 is a strictly dominant strategy equilibrium if 𝑠^𝐷_𝑖 ∈ 𝑆𝑖 is a strictly dominant strategy for all 𝑖 ∈ 𝑁 .

Example 2.5. (𝐹 , 𝐹 ) in Prisoner’s Dilemma is a strictly dominant strategy


equilibrium.
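The claim can be verified mechanically; the following Python sketch (an illustration, not part of the thesis) checks that 𝐹 strictly dominates 𝑀 for each player in the payoff matrix above.

    # v[(s1, s2)] = (payoff to player 1, payoff to player 2)
    v = {("M", "M"): (-2, -2), ("M", "F"): (-5, -1),
         ("F", "M"): (-1, -5), ("F", "F"): (-4, -4)}

    # Player 1: v1(F, s2) > v1(M, s2) for every s2 in {M, F}.
    assert all(v[("F", s2)][0] > v[("M", s2)][0] for s2 in ("M", "F"))
    # Player 2 (the game is symmetric): v2(s1, F) > v2(s1, M) for every s1.
    assert all(v[(s1, "F")][1] > v[(s1, "M")][1] for s1 in ("M", "F"))
    print("(F, F) is a strictly dominant strategy equilibrium")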

2.3.2 IESDS
Consider the following example of a normal form game in matrix representation:

                     Player 2
                      L       C       R
               U     5,4     6,2     7,3
    Player 1   M     3,2     9,5     4,7
               D     4,1    10,7     3,9

Note that there is no strictly dominated strategy for player 1. There is, however, a strictly dominated strategy for player 2: the strategy 𝐶 is strictly dominated by 𝑅 because 3 > 2 (row 𝑈), 7 > 5 (row 𝑀), and 9 > 7 (row 𝐷). Because this is common knowledge, both players know that we can effectively eliminate the strategy 𝐶 from player 2’s strategy set, which results in the following reduced game:
                     Player 2
                      L       R
               U     5,4     7,3
    Player 1   M     3,2     4,7
               D     4,1     3,9
In this reduced game, observe that both 𝑀 and 𝐷 are strictly dominated by strategy 𝑈 for player 1, allowing us to perform a second round of elimination, this time for player 1. Eliminating these two strategies yields the following trivial game:
                     Player 2
                      L       R
    Player 1   U     5,4     7,3
Observe that player 2 now has a strictly dominated strategy: 𝑅 is strictly dominated by 𝐿. This process of Iterated Elimination of Strictly Dominated Strategies (IESDS) yields the unique prediction that the players will play the strategy profile (𝑈 , 𝐿), giving them the payoffs (5, 4).
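The elimination procedure is easy to mechanize. The following Python sketch (an illustration, not part of the thesis) runs IESDS on the 3 × 3 game above and recovers the profile (𝑈 , 𝐿).

    # A[i][j], B[i][j] are the payoffs of players 1 and 2 when player 1
    # plays row i and player 2 plays column j.
    A = [[5, 6, 7], [3, 9, 4], [4, 10, 3]]   # player 1's payoffs (rows U, M, D)
    B = [[4, 2, 3], [2, 5, 7], [1, 7, 9]]    # player 2's payoffs (cols L, C, R)
    rows, cols = ["U", "M", "D"], ["L", "C", "R"]

    def dominated_row(A):
        """Index of a row strictly dominated by another row, or None."""
        for i in range(len(A)):
            for k in range(len(A)):
                if k != i and all(A[k][j] > A[i][j] for j in range(len(A[0]))):
                    return i
        return None

    def dominated_col(B):
        """Index of a column strictly dominated by another column, or None."""
        for j in range(len(B[0])):
            for k in range(len(B[0])):
                if k != j and all(B[i][k] > B[i][j] for i in range(len(B))):
                    return j
        return None

    while True:
        i = dominated_row(A)
        if i is not None:
            del A[i], B[i], rows[i]
            continue
        j = dominated_col(B)
        if j is None:
            break
        for M in (A, B):
            for row in M:
                del row[j]
        del cols[j]

    print(rows, cols)   # ['U'] ['L'] -- IESDS leaves the profile (U, L)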

2.3.3 Rationalizability
Definition 2.6. The strategy 𝑠𝑖 ∈ 𝑆𝑖 is player 𝑖’𝑠 best response to his
opponents’ strategies s−𝑖 ∈ 𝑆−𝑖 if

𝑣𝑖 (𝑠𝑖 , 𝑠−𝑖 ) ≥ 𝑣𝑖 (𝑠′𝑖 , 𝑠−𝑖 ) ∀𝑠′𝑖 ∈ 𝑆𝑖 .

Therefore, a strategy 𝑠𝑖 ∈ 𝑆𝑖 is never a best response if there is no belief 𝑠−𝑖 ∈ 𝑆−𝑖 of player 𝑖 for which 𝑠𝑖 ∈ 𝐵𝑅𝑖 (𝑠−𝑖 ). Here, a belief of player 𝑖 is a possible profile of his opponents’ strategies, 𝑠−𝑖 ∈ 𝑆−𝑖 . Now let us define what we call the best-response correspondence:

Definition 2.7. The best-response correspondence of player 𝑖 selects for


each 𝑠−𝑖 ∈ 𝑆−𝑖 a subset 𝐵𝑅𝑖 (𝑠−𝑖 ) ⊂ 𝑆𝑖 where each strategy 𝑠𝑖 ∈ 𝐵𝑅𝑖 (𝑠−𝑖 )
is a best response to 𝑠−𝑖 .

After eliminating all the strategies that are never a best response, and
employing this reasoning again and again in a way similar to what we did
for IESDS, the strategies that remain are called the set of rationalizable
strategies and the solution concept is known as rationalizability.

2.4 Nash Equilibrium


Let us take a classic example in game theory known as the Battle of the Sexes game:

                     Chris
                      O       F
    Alex       O     2,1     0,0
               F     0,0     1,2

As we can observe, the concepts of IESDS and rationalizability suggest that anything can happen in the above example, while a dominant strategy equilibrium does not exist. In this section we discuss a much more demanding concept, that of the Nash equilibrium. This concept was first put forth by John Nash (1951) [4], who received the Nobel Prize in Economics for this achievement. We define a Nash equilibrium as a profile of strategies in which each player is choosing a best response to the strategies of all other players. Formally, we have

Definition 2.8. The pure-strategy profile 𝑠∗ = (𝑠∗1 , 𝑠∗2 , ..., 𝑠∗𝑛 ) ∈ 𝑆 is a Nash equilibrium if 𝑠∗𝑖 is a best response to 𝑠∗−𝑖 for all 𝑖 ∈ 𝑁 , that is,

𝑣𝑖 (𝑠∗𝑖 , 𝑠∗−𝑖 ) ≥ 𝑣𝑖 (𝑠′𝑖 , 𝑠∗−𝑖 ) for all 𝑠′𝑖 ∈ 𝑆𝑖 and all 𝑖 ∈ 𝑁 .

Similarly, the mixed-strategy profile 𝜎∗ = (𝜎∗1 , 𝜎∗2 , ..., 𝜎∗𝑛 ) is a mixed-strategy Nash equilibrium if 𝜎∗𝑖 is a best response to 𝜎∗−𝑖 for all 𝑖 ∈ 𝑁 , that is,

𝑣𝑖 (𝜎∗𝑖 , 𝜎∗−𝑖 ) ≥ 𝑣𝑖 (𝜎𝑖 , 𝜎∗−𝑖 ) for all 𝜎𝑖 ∈ △𝑆𝑖 and all 𝑖 ∈ 𝑁 .

Observe that the pure-strategy Nash equilibria in the Battle of the Sexes game are the profiles (𝑂, 𝑂) and (𝐹 , 𝐹 ). The mixed-strategy profile ((2/3, 1/3), (1/3, 2/3)) is also a Nash equilibrium of this game.
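As a quick sanity check, the following Python sketch (illustrative, not part of the thesis) verifies the indifference conditions behind this mixed equilibrium: against the opponent's mix, each player's two pure strategies yield the same expected payoff, so neither can profit by deviating.

    from fractions import Fraction as F

    # Payoffs (Alex, Chris) with Alex choosing a row in {O, F}, Chris a column.
    v = {("O", "o"): (2, 1), ("O", "f"): (0, 0),
         ("F", "o"): (0, 0), ("F", "f"): (1, 2)}

    p = {"O": F(2, 3), "F": F(1, 3)}   # Alex's mix
    q = {"o": F(1, 3), "f": F(2, 3)}   # Chris's mix

    u_alex = {s1: sum(q[s2] * v[(s1, s2)][0] for s2 in q) for s1 in p}
    u_chris = {s2: sum(p[s1] * v[(s1, s2)][1] for s1 in p) for s2 in q}
    print(u_alex)    # both pure strategies of Alex earn 2/3: indifferent
    print(u_chris)   # both pure strategies of Chris earn 2/3: indifferent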
3. EXISTENCE OF NASH
EQUILIBRIUM

In this chapter, we will prove the celebrated Nash existence theorem using some important results from algebraic topology, including the Kakutani fixed point theorem. A solution concept is valuable insofar as it applies to a wide variety of games, and not just to a small and particular family of games. That is why a solution concept should apply generally and should not be developed in an ad hoc way that is specific to a certain situation or game. Therefore, when we apply our solution concept to different games, we require it to result in the existence of an equilibrium solution. It turns out that under quite general conditions games have at least one Nash equilibrium. This fact gives the Nash solution concept its power: like IESDS and rationalizability, it is widely applicable, and it usually leads to more refined predictions than those of IESDS and rationalizability. Let us now state some important results required to prove the Nash existence theorem.

Sperner’s Lemma

A surprisingly simple proof of Brouwer’s fixed point theorem was given by Emanuel Sperner in 1928, who proved the theorem using a combinatorial lemma. Following are some definitions we will need to prove Brouwer’s theorem.

Definition 3.1. The points 𝑥0 , 𝑥1 , ..., 𝑥𝑛 ∈ ℝ^𝑁 , 𝑛 ≤ 𝑁 , are said to be in general position if the vectors 𝑥1 − 𝑥0 , 𝑥2 − 𝑥0 , ..., 𝑥𝑛 − 𝑥0 are linearly independent.

Definition 3.2. An 𝑛-dimensional simplex is the set of all convex linear combinations of 𝑛 + 1 points in general position. That is, for given vertices 𝑣1 , ..., 𝑣𝑛+1 , the simplex is

𝑆 = { ∑_{𝑖=1}^{𝑛+1} 𝛼𝑖 𝑣𝑖 ∶ 𝛼𝑖 ≥ 0, ∑_{𝑖=1}^{𝑛+1} 𝛼𝑖 = 1 }.   (3.1)

A simplicial subdivision of an 𝑛-dimensional simplex 𝑆 is a partition of


𝑆 into small simplices (“cells”) such that any two cells are either disjoint, or
they share a full face of a certain dimension.

Definition 3.3. A proper coloring of a simplicial subdivision is an assignment


of 𝑛 + 1 colors to the vertices of the subdivision, so that the vertices of 𝑆
receive all different colors, and points on each face of 𝑆 use only the colors
of the vertices defining the respective face of 𝑆.

For example, for 𝑛 = 2 we have a subdivision of a triangle 𝑇 into


triangular cells. A proper coloring of 𝑇 assigns different colors to the 3
vertices of 𝑇 , and inside vertices on each edge of 𝑇 use only the two colors
of the respective endpoints. (Note that it is not necessary that endpoints of
an edge receive different colors.)

Lemma 3.4 (Sperner, 1928). Every properly colored simplicial subdivision


contains a cell whose vertices have all different colors.

Proof. [2] Call a cell of the subdivision a rainbow cell if its vertices are assigned all different colors. Besides the statement above, we will prove that the number of rainbow cells is odd for any proper coloring, which is an even stronger statement.

Case 𝑛 = 1. In the 1-dimensional case, we have a line segment [𝑎, 𝑏] that is subdivided into smaller segments, and the vertices of the subdivision are colored with two colors. By the definition of a proper coloring, 𝑎 and 𝑏 receive different colors. Therefore, going from 𝑎 to 𝑏, the color must switch an odd number of times so that 𝑏 ends up with a different color from 𝑎. Hence the number of small segments whose endpoints receive two different colors is odd.
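The parity claim of this one-dimensional case can be checked numerically; the following Python sketch (illustrative, not part of the thesis) colors random subdivisions and confirms that the count of bichromatic segments is always odd.

    import random

    for _ in range(1000):
        n = random.randint(1, 20)     # number of interior vertices
        # endpoints a and b get different colors, as a proper coloring requires
        colors = [1] + [random.choice((1, 2)) for _ in range(n)] + [2]
        bichromatic = sum(colors[i] != colors[i + 1]
                          for i in range(len(colors) - 1))
        assert bichromatic % 2 == 1   # the parity claim of the lemma
    print("odd in every trial")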

Case 𝑛 = 2. Here we have a properly colored simplicial subdivision of a triangle 𝑇 . Let 𝑄 denote the number of cells colored (1, 2, 2) or (1, 1, 2), 𝑅 the number of rainbow cells, that is, cells colored (1, 2, 3), 𝑋 the number of boundary edges colored (1, 2), and 𝑌 the number of interior edges colored (1, 2). We count the (1, 2)-edges in two different ways:

– Over cells of the subdivision: each 𝑄-type cell contributes 2 edges colored (1, 2), and each 𝑅-type cell contributes exactly 1 such edge. In this count interior edges are counted twice, whereas boundary edges are counted only once. Hence 2𝑄 + 𝑅 = 𝑋 + 2𝑌 .

– Over the boundary of 𝑇 : edges colored (1, 2) can only lie on the side of 𝑇 whose endpoints are colored 1 and 2. By the 1-dimensional case, that side contains an odd number of edges colored (1, 2). Hence 𝑋 is odd, which by the identity above implies that 𝑅 is also odd.

General case. We proceed by induction on 𝑛. Suppose we have a proper coloring of a simplicial subdivision of 𝑆 using 𝑛 + 1 colors. Again let 𝑅 denote the number of rainbow cells using all 𝑛 + 1 colors, and let 𝑄 be the number of cells colored with all colors except 𝑛 + 1, i.e., colored using {1, 2, … , 𝑛} with one of these colors used twice. Consider the (𝑛 − 1)-dimensional faces that use exactly the colors {1, 2, … , 𝑛}; let 𝑋 be the number of such faces on the boundary of 𝑆 and 𝑌 the number of such faces inside 𝑆. Again we count in two different ways:

– Each cell of type 𝑅 contributes exactly one face colored with {1, 2, … , 𝑛}, whereas each cell of type 𝑄 contributes two such faces. Interior faces belong to two cells, whereas boundary faces belong to one cell. Hence 2𝑄 + 𝑅 = 𝑋 + 2𝑌 .

– On the boundary, the only (𝑛 − 1)-dimensional faces colored with 1, 2, ..., 𝑛 lie on the face 𝐹 ⊂ 𝑆 whose vertices are colored 1, 2, ..., 𝑛. By the induction hypothesis applied to 𝐹 (which carries a properly colored (𝑛 − 1)-dimensional subdivision), 𝐹 contains an odd number of rainbow (𝑛 − 1)-dimensional cells. Therefore 𝑋 is odd, implying that 𝑅 is odd as well.

Brouwer’s Fixed Point Theorem

Theorem 3.5 (Brouwer, 1911). Let 𝐵𝑛 denote an 𝑛-dimensional ball. For


any continuous map 𝑓 ∶ 𝐵𝑛 → 𝐵𝑛 , there is a point 𝑥 ∈ 𝐵𝑛 such that
𝑓(𝑥) = 𝑥.

Proof. [2] We show how Sperner’s lemma is used to prove this theorem. For convenience, we work with a simplex instead of a ball, since the two are homeomorphic. Specifically, let 𝑆 be the simplex embedded in ℝ^{𝑛+1} with vertices 𝑣1 = (1, 0, ..., 0), 𝑣2 = (0, 1, ..., 0), ..., 𝑣𝑛+1 = (0, 0, ..., 1). Let 𝑓 ∶ 𝑆 → 𝑆 be a continuous map and assume that it has no fixed point.

Now construct a sequence of subdivisions 𝑆1 , 𝑆2 , 𝑆3 , … of 𝑆, where each 𝑆𝑗 is a subdivision of 𝑆𝑗−1 , such that the size of each cell tends to zero as 𝑗 → ∞. To define a coloring of 𝑆𝑗 , assign to each vertex 𝑥 ∈ 𝑆𝑗 a color 𝑐(𝑥) ∈ {1, … , 𝑛 + 1} such that (𝑓(𝑥))𝑐(𝑥) < 𝑥𝑐(𝑥) . To check that this is feasible, note that for each point 𝑥 ∈ 𝑆 we have ∑ 𝑥𝑖 = 1 and ∑(𝑓(𝑥))𝑖 = 1. Therefore, unless 𝑓(𝑥) = 𝑥, there is at least one coordinate 𝑖 with (𝑓(𝑥))𝑖 < 𝑥𝑖 . In case there are multiple such coordinates, we pick the smallest 𝑖.

Before applying Sperner’s lemma, we have to verify that the coloring we have assigned is proper. For the vertices 𝑣𝑖 = (0, … , 1, … , 0) of 𝑆, we have 𝑐(𝑣𝑖 ) = 𝑖, because the only coordinate where (𝑓(𝑣𝑖 ))𝑖 < (𝑣𝑖 )𝑖 is possible is the 𝑖th coordinate. Similarly, for a point 𝑥 on a face of 𝑆, say 𝑥 ∈ conv{𝑣𝑖 ∶ 𝑖 ∈ 𝐴}, the only coordinates where (𝑓(𝑥))𝑖 < 𝑥𝑖 is possible are those with 𝑖 ∈ 𝐴, and hence 𝑐(𝑥) ∈ 𝐴.

By Sperner’s lemma, each 𝑆𝑗 contains a rainbow cell with vertices 𝑥^{(𝑗,1)} , … , 𝑥^{(𝑗,𝑛+1)} ∈ 𝑆𝑗 such that (𝑓(𝑥^{(𝑗,𝑖)}))𝑖 < 𝑥^{(𝑗,𝑖)}_𝑖 for each 𝑖 ∈ {1, … , 𝑛 + 1}. Since 𝑆 is compact, the sequence of points {𝑥^{(𝑗,𝑖)}} has a convergent subsequence; by removing the elements outside this subsequence, we may assume that {𝑥^{(𝑗,𝑖)}} itself converges. As the size of the cells in 𝑆𝑗 tends to zero, the limits lim_{𝑗→∞} 𝑥^{(𝑗,𝑖)} are the same for all 𝑖 ∈ {1, … , 𝑛 + 1}; call this common limit point 𝑥∗ .

Since we have assumed that there is no fixed point, 𝑓(𝑥∗ ) ≠ 𝑥∗ , and because both 𝑓(𝑥∗ ) and 𝑥∗ have coordinates summing to 1, this implies that (𝑓(𝑥∗ ))𝑖 > 𝑥∗𝑖 for some coordinate 𝑖. But we concluded above that (𝑓(𝑥^{(𝑗,𝑖)}))𝑖 < 𝑥^{(𝑗,𝑖)}_𝑖 for all 𝑗, and lim_{𝑗→∞} 𝑥^{(𝑗,𝑖)} = 𝑥∗ , so by continuity (𝑓(𝑥∗ ))𝑖 ≤ 𝑥∗𝑖 . The assumption that there is no fixed point is hence contradicted.

Kakutani’s Fixed Point Theorem

Theorem 3.6 (Kakutani, 1941). Let 𝑋 be a non-empty subset of a finite dimensional Euclidean space, and let 𝑄 ∶ 𝑋 ⇉ 𝑋 be a correspondence, with 𝑥 ∈ 𝑋 ↦ 𝑄(𝑥) ⊆ 𝑋, satisfying the following conditions:

• 𝑋 is a compact and convex set;

• 𝑄(𝑥) is non-empty for all 𝑥 ∈ 𝑋;

• 𝑄 is a convex-valued correspondence: for all 𝑥 ∈ 𝑋, 𝑄(𝑥) is a convex set;

• 𝑄 has a closed graph: if 𝑥𝑛 → 𝑥 and 𝑦𝑛 → 𝑦 with 𝑦𝑛 ∈ 𝑄(𝑥𝑛 ), then 𝑦 ∈ 𝑄(𝑥).

Then 𝑄 has a fixed point, that is, there exists some 𝑥 ∈ 𝑋 such that 𝑥 ∈ 𝑄(𝑥).

Proof. [1] We prove the theorem for 𝑋 a non-degenerate simplex in ℝ𝑛 , say 𝑋 = [𝑎0 , 𝑎1 , ..., 𝑎𝑛 ]. For each integer 𝑝 we consider the 𝑝th barycentric subdivision of 𝑋 and define a continuous function 𝑓^{(𝑝)} as follows: if 𝑥 is a vertex of any cell in the subdivision, let 𝑦 be an arbitrary point of 𝑄(𝑥) and set 𝑓^{(𝑝)}(𝑥) = 𝑦 ∈ 𝑄(𝑥). If 𝑥 is not such a vertex, then 𝑥 lies in some cell of the subdivision, say 𝑥 ∈ [𝑎^{(𝑝)}_0 , ..., 𝑎^{(𝑝)}_𝑛 ]. Then 𝑥 is a convex combination of these vertices, say

𝑥 = ∑_{𝑗=0}^{𝑛} 𝜆^{(𝑝)}_𝑗 𝑎^{(𝑝)}_𝑗 , 𝜆^{(𝑝)}_𝑗 ≥ 0, ∑_{𝑗=0}^{𝑛} 𝜆^{(𝑝)}_𝑗 = 1,   (3.2)

and we set

𝑓^{(𝑝)}(𝑥) = ∑_{𝑗=0}^{𝑛} 𝜆^{(𝑝)}_𝑗 𝑓^{(𝑝)}(𝑎^{(𝑝)}_𝑗 ).   (3.3)

Note that, since the barycentric coordinates of a point are unique, if 𝑥 lies on a face common to two cells, the two definitions coincide on the common face. It is now clear that the maps 𝑓^{(𝑝)} are continuous maps of the simplex 𝑋 into itself. Hence Brouwer’s theorem guarantees that each has a fixed point, say a point 𝑥^{(𝑝)}_∗ such that 𝑓^{(𝑝)}(𝑥^{(𝑝)}_∗ ) = 𝑥^{(𝑝)}_∗ . If any of these fixed points is a vertex of a subdivision, then it is a fixed point of 𝑄 by construction and the proof is complete.

On the other hand, if none of these points is a vertex then, for a given 𝑝, we have

𝑥^{(𝑝)}_∗ = ∑_{𝑗=0}^{𝑛} 𝜆^{(𝑝)}_𝑗 𝑎^{(𝑝)}_𝑗   (3.4)

and so, using the definition of 𝑓^{(𝑝)} and the fact that 𝑥^{(𝑝)}_∗ is its fixed point, we have

𝑥^{(𝑝)}_∗ = ∑_{𝑗=0}^{𝑛} 𝜆^{(𝑝)}_𝑗 𝑦^{(𝑝)}_𝑗 ,   (3.5)

where

𝑦^{(𝑝)}_𝑗 = 𝑓^{(𝑝)}(𝑎^{(𝑝)}_𝑗 ) ∈ 𝑄(𝑎^{(𝑝)}_𝑗 ), 𝑗 = 0, 1, ..., 𝑛.   (3.6)

We now have 2(𝑛 + 1) sequences, all of which lie in compact subsets of Euclidean space: the sequence of fixed points {𝑥^{(𝑝)}_∗ }, the sequences of their barycentric coordinates {𝜆^{(𝑝)}_𝑗 } for 𝑗 = 0, 1, ..., 𝑛, and the sequences {𝑦^{(𝑝)}_𝑗 } for 𝑗 = 0, 1, ..., 𝑛. The first and last lie in the simplex 𝑋, which is closed and bounded, while the sequences of barycentric coordinates lie in the unit simplex. By a standard application of the Bolzano–Weierstrass theorem, we may assume that all these sequences converge as 𝑝 → ∞. Thus

𝑥^{(𝑝)}_∗ → 𝑥∗ as 𝑝 → ∞,   (3.7)

𝜆^{(𝑝)}_𝑗 → 𝜆𝑗 as 𝑝 → ∞, 𝑗 = 0, 1, ..., 𝑛,   (3.8)

𝑦^{(𝑝)}_𝑗 → 𝑦𝑗 as 𝑝 → ∞, 𝑗 = 0, 1, ..., 𝑛.   (3.9)

Now, as the diameter of the subcells approaches 0 as 𝑝 → ∞, the convergence of the fixed points to 𝑥∗ implies that the vertices 𝑎^{(𝑝)}_𝑗 → 𝑥∗ as 𝑝 → ∞ for all 𝑗 = 0, 1, ..., 𝑛. Moreover, passing to the limit in (3.5), we must have

𝑥∗ = ∑_{𝑗=0}^{𝑛} 𝜆𝑗 𝑦𝑗 .   (3.10)

As the graph of 𝑄 is closed by hypothesis, and

𝑦^{(𝑝)}_𝑗 ∈ 𝑄(𝑎^{(𝑝)}_𝑗 ), 𝑎^{(𝑝)}_𝑗 → 𝑥∗ , 𝑦^{(𝑝)}_𝑗 → 𝑦𝑗 ,   (3.11)

we must have 𝑦𝑗 ∈ 𝑄(𝑥∗ ). But 𝑄(𝑥∗ ) is convex, and 𝑥∗ is a convex combination of the 𝑦𝑗 . Hence 𝑥∗ ∈ 𝑄(𝑥∗ ).

Nash Existence Theorem

The statement of Nash Existence theorem is as follows:

Theorem 3.7 (Nash, 1950). Any game with a finite number of players, each of whom has finitely many pure strategies, has at least one Nash equilibrium in mixed strategies.

Proof. Recall that a mixed-strategy profile 𝜎∗ is a Nash equilibrium if 𝑣𝑖 (𝜎∗𝑖 , 𝜎∗−𝑖 ) ≥ 𝑣𝑖 (𝜎𝑖 , 𝜎∗−𝑖 ) for all 𝜎𝑖 ∈ △𝑆𝑖 and all 𝑖 ∈ 𝑁 . In other words, 𝜎∗ is a Nash equilibrium if and only if 𝜎∗𝑖 ∈ 𝐵𝑅𝑖 (𝜎∗−𝑖 ) for all 𝑖, where 𝐵𝑅𝑖 (𝜎−𝑖 ) is the set of best responses of player 𝑖 given that the other players’ strategies are 𝜎−𝑖 .
We define the best-response correspondence 𝐵 ∶ △𝑆 ⇉ △𝑆 such that for all 𝜎 ∈ △𝑆,

𝐵(𝜎) = [𝐵𝑅𝑖 (𝜎−𝑖 )]𝑖∈𝑁 .

Claim: a mixed-strategy profile 𝜎∗ ∈ △𝑆 is a Nash equilibrium if and only if it is a fixed point of the best-response correspondence, 𝜎∗ ∈ 𝐵(𝜎∗ ). To find a fixed point of 𝐵, we apply Kakutani’s theorem to 𝐵 ∶ △𝑆 ⇉ △𝑆, verifying that 𝐵 satisfies its conditions.

• △𝑆 is compact, convex and non-empty. By definition,

△𝑆 = ∏𝑖∈𝑁 △𝑆𝑖 ,

where each △𝑆𝑖 = {𝑥 ∶ ∑𝑗 𝑥𝑗 = 1, 𝑥𝑗 ≥ 0} is a simplex of dimension |𝑆𝑖 | − 1. Each △𝑆𝑖 is closed and bounded, thus compact, and the product of compact sets is compact. Also, each △𝑆𝑖 is the convex hull of player 𝑖’s pure strategies, so △𝑆 is convex.

• 𝐵(𝜎) is non-empty. By definition,

𝐵𝑅𝑖 (𝜎−𝑖 ) = arg max_{𝑥∈△𝑆𝑖 } 𝑢𝑖 (𝑥, 𝜎−𝑖 ),

where △𝑆𝑖 is non-empty and compact, and 𝑢𝑖 is linear, hence continuous, in 𝑥. By the Weierstrass theorem the maximum is attained, so 𝐵(𝜎) is non-empty.

• 𝐵(𝜎) is a convex-valued correspondence. Equivalently, 𝐵(𝜎) ⊂ △𝑆 is convex if and only if 𝐵𝑅𝑖 (𝜎−𝑖 ) is convex for all 𝑖. Let 𝜎′𝑖 , 𝜎″𝑖 ∈ 𝐵𝑅𝑖 (𝜎−𝑖 ). Then

𝑢𝑖 (𝜎′𝑖 , 𝜎−𝑖 ) ≥ 𝑢𝑖 (𝜏𝑖 , 𝜎−𝑖 ) for all 𝜏𝑖 ∈ △𝑆𝑖 ,

𝑢𝑖 (𝜎″𝑖 , 𝜎−𝑖 ) ≥ 𝑢𝑖 (𝜏𝑖 , 𝜎−𝑖 ) for all 𝜏𝑖 ∈ △𝑆𝑖 .

The preceding relations imply that for all 𝜆 ∈ [0, 1],

𝜆𝑢𝑖 (𝜎′𝑖 , 𝜎−𝑖 ) + (1 − 𝜆)𝑢𝑖 (𝜎″𝑖 , 𝜎−𝑖 ) ≥ 𝑢𝑖 (𝜏𝑖 , 𝜎−𝑖 ) for all 𝜏𝑖 ∈ △𝑆𝑖 .

By the linearity of 𝑢𝑖 ,

𝑢𝑖 (𝜆𝜎′𝑖 + (1 − 𝜆)𝜎″𝑖 , 𝜎−𝑖 ) ≥ 𝑢𝑖 (𝜏𝑖 , 𝜎−𝑖 ) for all 𝜏𝑖 ∈ △𝑆𝑖 .

Therefore 𝜆𝜎′𝑖 + (1 − 𝜆)𝜎″𝑖 ∈ 𝐵𝑅𝑖 (𝜎−𝑖 ), showing that 𝐵(𝜎) is convex-valued.

• 𝐵(𝜎) has a closed graph. Suppose, to obtain a contradiction, that 𝐵 does not have a closed graph. Then there exists a sequence (𝜎^𝑛 , 𝜎̂^𝑛 ) → (𝜎, 𝜎̂) with 𝜎̂^𝑛 ∈ 𝐵(𝜎^𝑛 ), but 𝜎̂𝑖 ∉ 𝐵𝑅𝑖 (𝜎−𝑖 ) for some 𝑖. Then there exist some 𝜎′𝑖 ∈ △𝑆𝑖 and some 𝜖 > 0 such that

𝑢𝑖 (𝜎′𝑖 , 𝜎−𝑖 ) > 𝑢𝑖 (𝜎̂𝑖 , 𝜎−𝑖 ) + 3𝜖.

By the continuity of 𝑢𝑖 and the fact that 𝜎^𝑛_{−𝑖} → 𝜎−𝑖 , we have for sufficiently large 𝑛

𝑢𝑖 (𝜎′𝑖 , 𝜎^𝑛_{−𝑖}) ≥ 𝑢𝑖 (𝜎′𝑖 , 𝜎−𝑖 ) − 𝜖.

Combining the preceding two relations, we obtain

𝑢𝑖 (𝜎′𝑖 , 𝜎^𝑛_{−𝑖}) > 𝑢𝑖 (𝜎̂𝑖 , 𝜎−𝑖 ) + 2𝜖 ≥ 𝑢𝑖 (𝜎̂^𝑛_𝑖 , 𝜎^𝑛_{−𝑖}) + 𝜖,

where the second inequality follows, for 𝑛 sufficiently large, from the continuity of 𝑢𝑖 and the convergence 𝜎̂^𝑛 → 𝜎̂. This contradicts the assumption that 𝜎̂^𝑛_𝑖 ∈ 𝐵𝑅𝑖 (𝜎^𝑛_{−𝑖}), and completes the verification.

The existence of a fixed point now follows from Kakutani’s theorem: there exists 𝜎∗ ∈ △𝑆 with 𝜎∗ ∈ 𝐵(𝜎∗ ), and by the claim above 𝜎∗ is a mixed-strategy Nash equilibrium.
Part II

DYNAMIC GAMES OF
COMPLETE INFORMATION
4. PRELIMINARIES

We have seen that the normal-form representation puts a formal structure on strategic situations. It allowed us to analyze a variety of games and to reach conclusions about the possible strategic interactions between players and their outcomes. But for games that unfold over time, the normal form structure we have used so far fails to capture the essence of “sequential rationality”. This chapter lays out a framework to analyze such sequential strategic situations and applies strategic reasoning to these newly defined representations.

4.1 The Extensive Form Game


In this section, we define the most common representation for games that
unfold over time and in which players move after they learn the actions of
other players. As with the normal form games, two elements must be part
of any extensive form game’s representation:

1. Set of players 𝑁

2. Players’ payoffs as function of outcomes, {𝑣𝑖 (.)}𝑖∈𝑁 .

Now to overcome the limitations of normal form games and capture the
sequential play, we introduce two more parts for actions: First, what players
can do, as before, and second, when they can do it. Thus in general we need
two more components to capture sequential play:

3. Order of moves

4. Actions of players when they can move.



As some players move after choices are made by other players, we should be able to describe the knowledge the players have about the history of the game when it is their turn to move. Therefore, we add a fifth component to the description of an extensive form game:

5. The knowledge that players have when they can move.

Finally, we must account for the possibility that some random event, called a move of Nature, happens during the game. We will call such events exogenous events, because the predetermined probability distribution of Nature’s choice is independent of the choices made by the strategic players. Thus we represent the actions of Nature as our sixth component:

6. Probability distribution over exogenous events.

To be able to analyze these situations with the methods and concepts already introduced, we add a final and familiar requirement:

7. The structure of the extensive form game represented by 1-6 is common


knowledge among all the players.

With these components in place, only one question remains: what kind of notation can we use to put all this together? For this we borrow the familiar concept of a decision tree and expand it to capture multiplayer strategic situations.

4.1.1 Game Trees


Consider a very common game which falls under the category of a “trust game”. Player 1 first chooses whether to ask for the services of player 2: he can trust player 2 (𝑇 ) or not trust him (𝑁 ), the latter choice giving both players a payoff of 0. If player 1 plays 𝑇 , then player 2 can choose to cooperate (𝐶), which represents offering player 1 some fair level of service, or defect (𝐷), by which he basically cheats player 1 with an inferior, less costly to provide service. Assume that if player 2 cooperates then both players get a payoff of 1, while if player 2 defects then player 1 gets a payoff of −1 and player 2 gets a payoff of 2. This game depicts various real-life trading situations, be it a driver who trusts a mechanic to be honest and perform the right service for his vehicle rather than rip him off, or a buyer bidding in an auction on an online website and hoping to get the product in good condition by paying up front. A simple way to depict this game is with the game tree depicted in Figure 4.1.

[Fig. 4.1: Trust Game — player 1 chooses 𝑁 (payoffs 0, 0) or 𝑇 ; after 𝑇 , player 2 chooses 𝐶 (payoffs 1, 1) or 𝐷 (payoffs −1, 2).]

Definition 4.1. A game tree is a set of nodes 𝑥 ∈ 𝑋 with a precedence relation 𝑥 > 𝑥′ , which means “𝑥 precedes 𝑥′ ”. Every node in a game tree (except the root) has exactly one predecessor, and the precedence relation satisfies the following conditions:

• it is transitive (𝑥 > 𝑥′ and 𝑥′ > 𝑥″ imply 𝑥 > 𝑥″ );

• asymmetric (𝑥 > 𝑥′ implies not 𝑥′ > 𝑥);

• incomplete (not every pair of nodes 𝑥, 𝑦 can be ordered).

The root of the tree, denoted 𝑥0 , is a special node that precedes every other 𝑥 ∈ 𝑋. Nodes that do not precede other nodes are called terminal nodes, denoted by the set 𝑍 ⊂ 𝑋. Payoffs, which describe the outcomes of the game, are associated with terminal nodes. Every node 𝑥 that is not a terminal node is assigned either to a player 𝑖(𝑥), with the action set 𝐴𝑖 (𝑥), or to Nature.

Let us consider the familiar example of the Battle of the Sexes game discussed in Section 2.4, but with a slight modification. Suppose that Alex finishes work at 2:00 p.m. while Chris finishes work at 5:30 p.m. This gives Alex ample time to decide whether to go to the football game or the opera, and then to call Chris at 5:00 p.m. to let him know where she actually is. Now Chris has to decide where to go. If he chooses the venue where Alex is waiting, Chris gets some payoff (1 if Alex is at the opera and 2 if Alex is at the football game); if he chooses the other venue, he gets 0. Hence a rational Chris should go to the same venue that Alex did. Anticipating this, a rational Alex ought to choose the opera, because then Alex gets 2 instead of the 1 she would get from football. We will call this game the sequential-move Battle of the Sexes game; the game tree representing these conditions is depicted in Figure 4.2.

[Fig. 4.2: The sequential-move Battle of the Sexes game — Alex chooses 𝑂 or 𝐹 ; Chris observes her choice and picks 𝑜 or 𝑓 ; payoffs: (𝑂, 𝑜) = (2, 1), (𝑂, 𝑓) = (0, 0), (𝐹 , 𝑜) = (0, 0), (𝐹 , 𝑓) = (1, 2).]

4.2 Strategies and Nash Equilibrium


Given a well-defined extensive form game and a proper game tree representation, we can observe that the notion of a strategy is more involved than the one we discussed for normal form games. We start this section by defining a pure strategy in extensive-form games.

4.2.1 Pure Strategies


Consider the Battle of the Sexes game described in Figure 4.2. Player 1 has a single information set with one node, so he has two possible strategies: “play 𝑂” or “play 𝐹 ”. For player 2, however, things are a bit more complex. Player 2 has two information sets, each associated with a different action of player 1. Hence the two simple statements “play 𝑜” and “play 𝑓” do not exhaust all the possibilities for player 2. In particular, player 2 can choose the following strategy: “If player 1 plays 𝑂 then I will play 𝑜, while if player 1 plays 𝐹 then I will play 𝑓 .”
Definition 4.2. A pure strategy for player i is a complete plan of play that
describes which pure action player i will choose at each of his information
sets.
If we consider the simultaneous-move Battle of the Sexes game depicted in Figure 4.3, the pure strategies for player 1 are 𝑆1 = {𝑂, 𝐹 } and those for player 2 are 𝑆2 = {𝑜, 𝑓}.

[Fig. 4.3: The simultaneous-move Battle of the Sexes game — the same tree as Figure 4.2, but both of player 2’s nodes lie in a single information set, so he cannot condition his choice on player 1’s action.]

In contrast, in the sequential-move Battle of the Sexes game depicted in Figure 4.2, the set of pure strategies for player 2 is

𝑆2 = {𝑜𝑜, 𝑜𝑓, 𝑓𝑜, 𝑓𝑓},

where a pure strategy “𝑎𝑏” means “player 2 will play 𝑎 if player 1 plays 𝑂 and 𝑏 if player 1 plays 𝐹 ”. The set of pure strategies for player 1 remains the same, i.e., 𝑆1 = {𝑂, 𝐹 }.


To define pure strategies formally, let us introduce some notation that builds on what we have already developed. Let 𝐻𝑖 be the collection of all information sets at which player 𝑖 plays, and let ℎ𝑖 ∈ 𝐻𝑖 be one of 𝑖’s information sets. Let 𝐴𝑖 (ℎ𝑖 ) be the set of actions that player 𝑖 can take at ℎ𝑖 , and let 𝐴𝑖 be the set of all actions of player 𝑖, so that 𝐴𝑖 = ∪ℎ𝑖∈𝐻𝑖 𝐴𝑖 (ℎ𝑖 ).

Note: Assume that player 𝑖 has 𝑘 > 1 information sets, the first with 𝑚1 actions to choose from, the second with 𝑚2 , and so on up to 𝑚𝑘 . Then

|𝑆𝑖 | = 𝑚1 × 𝑚2 × … × 𝑚𝑘 ,

where |𝑆𝑖 | denotes the number of elements of 𝑆𝑖 , i.e., the total number of pure strategies player 𝑖 has. For example, a player with 3 information sets, with 3 actions in the first, 3 in the second, and 5 in the third, has a total of 45 pure strategies.
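Since a pure strategy is just one action per information set, the strategy set is the Cartesian product of the action sets. The following Python sketch (illustrative, not part of the thesis) enumerates player 2's strategies in the sequential-move game and checks the counting rule.

    from itertools import product

    # Player 2's two information sets in the sequential-move game, each with
    # actions {o, f}.
    H2 = {"after_O": ["o", "f"], "after_F": ["o", "f"]}
    S2 = ["".join(plan) for plan in product(*H2.values())]
    print(S2)        # ['oo', 'of', 'fo', 'ff']
    print(len(S2))   # 4 = 2 x 2, matching |S_i| = m_1 x ... x m_k

    # The counting rule from the note: 3 information sets with 3, 3, 5 actions.
    total = 1
    for mk in (3, 3, 5):
        total *= mk
    print(total)     # 45 pure strategies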

4.2.2 Mixed versus Behavioral Strategies


As we defined pure strategies in the previous section, the definition of mixed strategies follows immediately, similar to what we have seen in normal form games.

Definition 4.3. A mixed strategy for player 𝑖 is a probability distribution


over his pure strategies 𝑠𝑖 ∈ 𝑆𝑖 .

That is, if a mixed strategy is played, the player randomly chooses a plan out of his available pure strategies just before the game is played and then follows that particular plan. If the player instead wants to randomize his choice at some nodes as the game unfolds, independently of his earlier actions, the notion of a mixed strategy falls short. To deal with situations in which the player makes his random choices as the game unfolds, we define a new concept:

Definition 4.4. A behavioral strategy specifies for each information set


ℎ𝑖 ∈ 𝐻𝑖 an independent probability distribution over 𝐴𝑖 (ℎ𝑖 ) and is denoted
by 𝜎𝑖 ∶ 𝐻𝑖 → △𝐴𝑖 (ℎ𝑖 ), where 𝜎𝑖 (𝑎𝑖 (ℎ𝑖 )) is the probability that player 𝑖 plays
action 𝑎𝑖 (ℎ𝑖 ) ∈ 𝐴𝑖 (ℎ𝑖 ) in information set ℎ𝑖 .

To observe the difference between these two kinds of strategies, let us consider the sequential-move Battle of the Sexes, depicted in the game tree of Figure 4.4. Here player 2 has two information sets, containing the nodes 𝑥1 and 𝑥2 , which we will call ℎ2^𝑂 and ℎ2^𝐹 .

[Fig. 4.4: The sequential-move Battle of the Sexes game with a behavioral strategy for player 2 — after 𝑂, he plays 𝑜 with probability 1/3 and 𝑓 with probability 2/3; after 𝐹 , he plays 𝑜 and 𝑓 with probability 1/2 each.]

In each information set he has two actions to choose from, i.e., 𝐴2 = {𝑜, 𝑓}, so player 2 has 2 × 2 = 4 pure strategies: 𝑆2 = {𝑜𝑜, 𝑜𝑓, 𝑓𝑜, 𝑓𝑓}. A mixed strategy is a probability distribution (𝑝𝑜𝑜 , 𝑝𝑜𝑓 , 𝑝𝑓𝑜 , 𝑝𝑓𝑓 ) with 𝑝𝑠2 ≥ 0 and 𝑝𝑜𝑜 + 𝑝𝑜𝑓 + 𝑝𝑓𝑜 + 𝑝𝑓𝑓 = 1. A behavioral strategy, in contrast, is given by the probabilities 𝜎2 (𝑜(ℎ2^𝑂 )), 𝜎2 (𝑓(ℎ2^𝑂 )), 𝜎2 (𝑜(ℎ2^𝐹 )) and 𝜎2 (𝑓(ℎ2^𝐹 )), where 𝜎2 (𝑜(ℎ2^𝑂 )) + 𝜎2 (𝑓(ℎ2^𝑂 )) = 𝜎2 (𝑜(ℎ2^𝐹 )) + 𝜎2 (𝑓(ℎ2^𝐹 )) = 1. In Figure 4.4 we have used 𝜎2 (𝑜(ℎ2^𝑂 )) = 1/3, 𝜎2 (𝑓(ℎ2^𝑂 )) = 2/3, and 𝜎2 (𝑜(ℎ2^𝐹 )) = 𝜎2 (𝑓(ℎ2^𝐹 )) = 1/2.
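Under perfect recall, Kuhn's theorem says that every behavioral strategy has an outcome-equivalent mixed strategy. The following Python sketch (illustrative, not part of the thesis) computes the mixed strategy induced by the behavioral strategy of Figure 4.4 by multiplying probabilities across information sets.

    from itertools import product

    behavioral = {"h_O": {"o": 1/3, "f": 2/3},   # after player 1 plays O
                  "h_F": {"o": 1/2, "f": 1/2}}   # after player 1 plays F

    # Each pure plan "ab" gets probability sigma(a | h_O) * sigma(b | h_F).
    mixed = {a + b: behavioral["h_O"][a] * behavioral["h_F"][b]
             for a, b in product("of", repeat=2)}
    print(mixed)                  # {'oo': 1/6, 'of': 1/6, 'fo': 1/3, 'ff': 1/3}
    print(sum(mixed.values()))    # 1.0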
5. SEQUENTIAL RATIONALITY

Consider the sequential-move Battle of the Sexes game shown in Figure 5.1. The matrix representation of the game gives us the strategy profiles that are Nash equilibria:

                     Chris
                      oo      of      fo      ff
    Alex       O     2,1     2,1     0,0     0,0
               F     0,0     1,2     0,0     1,2

The strategy profiles forming Nash equilibria are (𝑂, 𝑜𝑜), (𝑂, 𝑜𝑓) and (𝐹 , 𝑓𝑓). But this result does not tell us precisely what player 2’s best responses are in each of his information sets, although it is obvious that if player 1 played 𝑂 then player 2 should play 𝑜, and if player 1 played 𝐹 then player 2 should play 𝑓. As the Nash equilibrium is not enough to answer such questions, we define a new concept called sequential rationality.

[Fig. 5.1: The sequential-move Battle of the Sexes game — the same tree as in Figure 4.2.]

Definition 5.1. Given the strategies 𝜎−𝑖 ∈ △𝑆−𝑖 of 𝑖’s opponents, we say that 𝜎𝑖 is sequentially rational if and only if 𝑖 is playing a best response to 𝜎−𝑖 in each of his information sets.

Considering the above example, a sequentially rational player 2 should choose a strategy that is a best response at each of his nodes, and he has a unique such strategy: (𝑜, 𝑓). Now move back to the node where player 1 has to choose between 𝑂 and 𝐹 . Taking into account the behavior of a sequentially rational player 2, player 1 will prefer to go with 𝑂, giving her a payoff of 2 rather than the 1 she could have got by playing 𝐹 . This is how one can predict the behavior of players in a dynamic game. This procedure of predicting the actions of players, which starts at the nodes directly preceding the terminal nodes and then moves inductively backward through the game tree, is widely termed backward induction. Applying this method to finite games of perfect information results in a profile of strategies, one for each player, that are sequentially rational.
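The backward induction procedure is easy to express recursively. The following Python sketch (an illustration, not part of the thesis) encodes the sequential-move Battle of the Sexes as a small tree and computes the sequentially rational outcome.

    # A node is either a payoff tuple (terminal) or a pair (player, moves),
    # where player 0 is Alex, player 1 is Chris, and moves maps each action
    # to a subtree.
    bos = (0, {"O": (1, {"o": (2, 1), "f": (0, 0)}),
               "F": (1, {"o": (0, 0), "f": (1, 2)})})

    def backward_induction(node):
        """Return the payoff vector reached under sequentially rational play."""
        if not isinstance(node[1], dict):      # terminal node: a payoff tuple
            return node
        player, moves = node
        # the mover picks the subtree whose induced payoff is best for himself
        return max((backward_induction(child) for child in moves.values()),
                   key=lambda payoff: payoff[player])

    print(backward_induction(bos))  # (2, 1): Alex plays O and Chris answers o,
                                    # the outcome of the profile (O, of)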

Proposition 5.2. Any finite game of perfect information has a backward induction solution that is sequentially rational. Furthermore, if no two terminal nodes prescribe the same payoffs to any player, then the backward induction solution is unique.

By the construction of the backward induction procedure, each player


will necessarily play a best response to the actions of the other players who
come after him.

Corollary 5.3. Any finite game of perfect information has at least one sequentially rational Nash equilibrium in pure strategies. Furthermore, if no two terminal nodes prescribe the same payoffs to any player, then the game has a unique sequentially rational Nash equilibrium.

5.1 Subgame Perfect Equilibrium


In the previous section we developed concepts arguing that players should act according to sequential rationality, and we elaborated the method of backward induction, by which one can find sequentially rational Nash equilibria in finite games of perfect information.

[Fig. 5.2: Subgames in a game with perfect information — every non-terminal node is a singleton information set, so each decision node starts a proper subgame.]

But as soon as we try to extend our approach to games of imperfect information, the process of backward induction runs into serious problems: when we encounter two nodes sharing the same information set, backward induction fails, because the best response is not well defined at information sets that are not singletons. Strengthening the concept of sequential rationality to deal with such issues, we advance the following definition:

Definition 5.4. A proper subgame 𝐺 of an extensive-form game Γ consists of a single node and all its successors in Γ, with the property that if 𝑥 ∈ 𝐺 and 𝑥′ ∈ ℎ(𝑥) then 𝑥′ ∈ 𝐺. The subgame 𝐺 is itself a game tree, with its information sets and payoffs inherited from Γ.

Consider Figures 5.2 and 5.3, depicting a game with perfect information and one with imperfect information, respectively. The above definition enables us to state an important concept used to cope with the limitation of backward induction in games of imperfect information.

[Fig. 5.3: Subgames in a game with imperfect information — nodes that share an information set cannot start a proper subgame.]

Definition 5.5. Let Γ be an 𝑛-player extensive-form game. A strategy


profile 𝜎∗ = (𝜎1∗ , 𝜎2∗ , … , 𝜎𝑛∗ ) is a subgame-perfect (Nash) equilibrium
if for every proper subgame 𝐺 of Γ the restriction of 𝜎∗ to 𝐺 is a Nash
equilibrium in 𝐺.

This concept was introduced by Reinhard Selten, the second of the three Nobel Laureates who shared the 1994 prize for the development of game theory. This equilibrium concept successfully builds sequential rationality into the static Nash equilibrium solution concept. Using the terminology developed before, subgame perfection requires not only that a Nash equilibrium profile of strategies consists of best responses on the equilibrium path, but also that it specifies mutual best responses off the equilibrium path.

Notice that the subgame-perfect equilibrium is a stronger concept than the Nash equilibrium: by construction, every subgame-perfect equilibrium is a Nash equilibrium, but the converse does not hold. For example, consider our usual example, the sequential-move Battle of the Sexes game. Observe in Figure 5.4 that there exist three proper subgames. As we saw earlier, this game has three Nash equilibria, (𝑂, 𝑜𝑜), (𝑂, 𝑜𝑓) and (𝐹 , 𝑓𝑓). But

[Fig. 5.4: Subgames in the sequential-move Battle of the Sexes game — the whole game plus the two proper subgames starting at Chris’s decision nodes.]

only one of these Nash equilibria, namely (𝑂, 𝑜𝑓), satisfies the condition of subgame-perfect equilibrium. This is because when we restrict the other two equilibria to the proper subgames where player 2 makes his choice, they fail to be Nash equilibria in at least one of these subgames.
6. STRATEGIC BARGAINING

Bargaining is one of the situations that comes to mind when we discuss strategic interactions. In this chapter we discuss an important example of an extensive form game: strategic bargaining. We will follow a particular model to study strategic bargaining, as follows:

• Two parties/players need to split a “pie”.

• The pie is assumed to have a total value normalized to 1.

• The parties bargain over how to split the pie so as to maximize their own shares.

• “Time is money”: there may be a fixed cost of progressing from one round to another, or a part of the pie may be removed every time a rejection occurs.

Assuming a constant discount factor 𝛿 common to both players, we can summarize the bargaining game as follows. In the first round:

• Player 1 offers shares (𝑥, 1 − 𝑥), where player 1 gets 𝑥 and player 2 receives the remaining pie, 1 − 𝑥.

• Player 2 then chooses to accept player 1’s offer or to reject it. Accepting ends the game with payoffs 𝑣1 = 𝑥 and 𝑣2 = 1 − 𝑥, whereas by rejecting, player 2 gets the chance to make an offer to player 1, and the game proceeds to the next stage.

In stage 2:

• A share of the pie is removed, say (1 − 𝛿), so the players now bargain over a 𝛿 portion of the total pie.

[Fig. 6.1: An odd 𝑡-period alternating-offer bargaining game — player 1 offers in odd periods and player 2 in even periods; an agreement on 𝑥 in period 𝑘 yields payoffs (𝛿^{𝑘−1}𝑥, 𝛿^{𝑘−1}(1 − 𝑥)), and a rejection in the final period yields (0, 0).]

• Player 2 offers shares (𝑥, 1 − 𝑥) to player 1.

• If player 1 accepts the offer, the game ends with payoffs 𝑣1 = 𝛿𝑥 and 𝑣2 = 𝛿(1 − 𝑥); if he rejects, the game moves to the third round.

In stage 𝑡:

• The game proceeds in a similar manner: following a rejection in an odd stage, player 2 offers shares in the next (even) stage, and vice versa.

• Each additional period implies a further penalty in terms of the share of the pie, so that in period 𝑡 the pie is worth 𝛿^{𝑡−1} .

The strategic bargaining game is visualized in the game tree shown in Figure 6.1. We will discuss three important cases of strategic bargaining. First, we analyze the most trivial case, the ultimatum game, where 𝑡 = 1. Advancing our discussion, we then analyze the case where the bargaining must end by some time period 𝑡 < ∞; there we will observe that the bargaining should end in the first stage, given that the players are sequentially rational. Finally, we move to our goal of proving the existence of a perfect equilibrium partition (P.E.P.) in the infinite horizon case, where 𝑡 → ∞.

6.1 The Ultimatum Game

[Fig. 6.2: An ultimatum game — player 1 proposes shares (𝑥, 1 − 𝑥); player 2 accepts (𝐴), yielding payoffs (𝑥, 1 − 𝑥), or rejects (𝑅), yielding (0, 0).]

As discussed above, we consider the case 𝑡 = 1, in which the game ends after one round. Player 1 makes a take-it-or-leave-it offer to player 2. Player 2 then has to decide whether to accept the offer and take the partition offered to him, or to reject it, ending the game with zero payoffs for both players. The game tree of this case is shown in Figure 6.2.

As usual, we start our analysis by finding the paths of play that can be supported by a Nash equilibrium. The result is quite surprising:

Proposition 6.1. In the bargaining game with 𝑡 = 1, any division of the surplus 𝑥∗ ∈ [0, 1], (𝑣1 , 𝑣2 ) = (𝑥∗ , 1 − 𝑥∗ ), can be supported as a Nash equilibrium.

Proof. Let us construct a pair of strategies that are mutual best responses and that lead to (𝑥∗ , 1 − 𝑥∗ ) as the partition of the pie. Suppose player 1’s strategy is to propose 𝑥∗ and player 2’s strategy is to accept any offer 𝑥 ≤ 𝑥∗ and reject any offer 𝑥 > 𝑥∗ . It is easy to check that these strategies are mutual best responses, regardless of the value of 𝑥∗ ∈ [0, 1].

This proposition makes one thing clear: the Nash equilibrium is a weak tool for analyzing this case. Surprisingly, the same holds for the further cases where 𝑡 is finite, as well as when 𝑡 → ∞, as will be shown in later sections. Therefore, we resort to the concepts developed in the previous chapter. Will sequential rationality give us a more precise solution to our problem? The following proposition answers this question.

Proposition 6.2. The bargaining game with 𝑡 = 1 admits a unique subgame-


perfect equilibrium in which player 1 offers 𝑥 = 1 and player 2 accepts any
offer 𝑥 ≤ 1.

Proof. Sequential rationality requires player 2 to accept any share 𝑥 < 1, since accepting yields 1 − 𝑥 > 0 while rejecting yields 0. Player 2 is indifferent between accepting and rejecting 𝑥 = 1, as in both cases he gets a payoff of 0; accepting every offer, including 𝑥 = 1, is therefore sequentially rational, and player 1’s unique best response to this strategy is to offer 𝑥 = 1. The only other sequentially rational strategy available to player 2 is to accept any offer leaving him a positive share and reject 𝑥 = 1. But player 1 has no best response to this strategy, because his best-response correspondence is discontinuous at 𝑥 = 1: he would like to offer 𝑥 as close to 1 as possible but strictly below it. So this strategy cannot be part of a subgame-perfect equilibrium.

This result outlines the importance of sequential rationality in extensive form games. Considering only mutual best responses does not yield any meaningful prediction, whereas sequential rationality predicts a unique and extreme outcome. We can also observe that player 1 enjoys a first-mover advantage and can offer the whole pie to himself, knowing that player 2 will accept his offer.

6.2 Finitely Many Rounds of Bargaining


In this section we extend the ultimatum game to finitely many stages, 𝑡 < ∞. In this case too, the game must end after a certain number of stages, and a rejection at the last stage leads to zero payoffs for both players. Just as in the ultimatum game, we can construct strategies supported by a Nash equilibrium, independent of the value of the partition, for any horizon, including the infinite horizon. To prove this result, let us introduce some notation:

• Let 𝑆 = [0, 1], where 𝑠 ∈ 𝑆 is the partition of the pie awarded to player 1.

• Let 𝐹 be the set of all sequences of functions 𝑓 = {𝑓^𝑡 }_{𝑡=1}^∞ , where 𝑓^1 ∈ 𝑆, for 𝑡 > 1 odd 𝑓^𝑡 ∶ 𝑆^{𝑡−1} → 𝑆, and for 𝑡 even 𝑓^𝑡 ∶ 𝑆^𝑡 → {𝑌 , 𝑁 }.

• Similarly, let 𝐺 be the set of all sequences of functions 𝑔 = {𝑔^𝑡 }_{𝑡=1}^∞ , where for 𝑡 odd 𝑔^𝑡 ∶ 𝑆^𝑡 → {𝑌 , 𝑁 }, and for 𝑡 even 𝑔^𝑡 ∶ 𝑆^{𝑡−1} → 𝑆.

• That is, 𝐹 is the set of all strategies of the player who starts the bargaining, whereas 𝐺 is the set of all strategies of the player who, in the first move, has to respond to the other player’s offer.

• Assuming that player 1 starts the game, 𝑃 (𝑓̂, 𝑔̂) is the partition player 1 gets if he plays 𝑓̂ and player 2 plays 𝑔̂.

Proposition 6.3. For every 𝑠 ∈ 𝑆, 𝑠 is a partition induced by a Nash equilibrium.

Proof. Define 𝑓̂ ∈ 𝐹 and 𝑔̂ ∈ 𝐺 as follows:

for 𝑡 odd: 𝑓̂^𝑡 ≡ 𝑠, and 𝑔̂^𝑡 (𝑠^1 , … , 𝑠^𝑡 ) = 𝑌 if 𝑠^𝑡 ≤ 𝑠, and 𝑁 if 𝑠^𝑡 > 𝑠;

for 𝑡 even: 𝑔̂^𝑡 ≡ 𝑠, and 𝑓̂^𝑡 (𝑠^1 , … , 𝑠^𝑡 ) = 𝑌 if 𝑠^𝑡 ≥ 𝑠, and 𝑁 if 𝑠^𝑡 < 𝑠.

Observe that (𝑓̂, 𝑔̂) is a Nash equilibrium and 𝑃 (𝑓̂, 𝑔̂) = 𝑠.

The above result holds for any horizon, that is, for 𝑡 finite and even when 𝑡 → ∞. However, (𝑓̂, 𝑔̂) as described in the previous proposition is not a perfect equilibrium. For instance, take 𝑠 = 0.5 with fixed bargaining costs 𝑐1 = 0.1 and 𝑐2 = 0.2. Observe that player 2 plans to reject a possible offer of 0.6 by player 1, i.e., 𝑔̂^1 (0.6) = 𝑁 . After such a rejection the players are expected to agree on 0.5, i.e., 𝑃 (𝑓̂|0.6, 𝑔̂|0.6) = 0.5. Therefore player 2 would get 0.5 − 0.2 = 0.3 after the first round, violating sequential rationality, as he can get 0.4 in the initial round itself.

Another valuable property of the finite rounds of bargaining is that the bargaining should end in the first stage itself.

Proposition 6.4. Any subgame-perfect equilibrium must have the players


reach an agreement in the first round.

Proof. Suppose the agreement is reached at a later stage with payoffs (𝑣′1 , 𝑣′2 ). Discounting implies that

𝑣′1 + 𝑣′2 < 1.

But then player 1 could deviate and offer 𝑥 = 1 − 𝑣′2 − 𝜖 for some small 𝜖 > 0, which guarantees player 2 the payoff 𝑣′2 + 𝜖 in the first round. Sequential rationality implies that player 2 should accept this offer immediately, and for 𝜖 small enough this gives player 1 a payoff greater than 𝑣′1 .
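To make the backward-induction logic concrete, here is a minimal Python sketch (an illustration under the stated assumptions, not the thesis's formal model) that computes the first proposer's subgame-perfect share in the 𝑡-round game with a common discount factor 𝛿, where a rejection in the last round yields (0, 0). Folding back one round at a time, the proposer in round 𝑘 keeps 𝑥𝑘 = 1 − 𝛿𝑥𝑘+1 , since the responder accepts anything at least as good as becoming next round's proposer on a pie discounted by 𝛿.

    def proposer_share(t, delta):
        x = 1.0                      # in the last round the proposer keeps all
        for _ in range(t - 1):       # fold back to round 1
            x = 1.0 - delta * x
        return x

    for t in (1, 2, 3, 10, 200):
        print(t, round(proposer_share(t, 0.9), 4))
    # t=1 gives 1.0 (the ultimatum game); as t grows, the share approaches
    # 1/(1 + delta) ~ 0.5263, the first mover's partition in the infinite
    # horizon model studied in Chapter 7.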

A similar proposition holds in the infinite horizon case, except that the agreement is reached in at most two stages. Before that, it is important to prove the existence of such a perfect equilibrium partition (P.E.P.), which requires more involved concepts. We therefore dedicate the next chapter to proving that perfect equilibrium partitions exist in the case of infinite horizon bargaining.
7. THE INFINITE HORIZON
GAME

In this chapter we follow the Rubinstein bargaining model and discuss the existence of a P.E.P. in bargaining with 𝑡 → ∞. We start by finding some relations between the set of P.E.P.'s when player 1 starts the game and the set of P.E.P.'s when player 2 starts the game. Advancing the discussion, we prove that the solution set containing the P.E.P.'s of both players is nonempty, and that the player who starts the game has a first-mover advantage. To conclude this chapter, we calculate the P.E.P. under two kinds of discounting: fixed bargaining costs and fixed discounting factors.

7.1 Preliminaries
Let us define some new notation that will be used throughout this chapter. Let 𝜎(𝑓, 𝑔) be the sequence of offers when player 1 starts the bargaining and follows 𝑓 ∈ 𝐹 while player 2 adopts 𝑔 ∈ 𝐺. Let 𝑇 (𝑓, 𝑔) be the length of 𝜎(𝑓, 𝑔), which may be ∞. Let 𝐷(𝑓, 𝑔) be the last element of 𝜎(𝑓, 𝑔) (if such an element exists); we call 𝐷(𝑓, 𝑔) the partition induced by (𝑓, 𝑔). The outcome function 𝑃 (𝑓, 𝑔) of the game is defined by

𝑃 (𝑓, 𝑔) = (𝐷(𝑓, 𝑔), 𝑇 (𝑓, 𝑔)) if 𝑇 (𝑓, 𝑔) < ∞, and 𝑃 (𝑓, 𝑔) = (0, ∞) if 𝑇 (𝑓, 𝑔) = ∞.

Thus, we denote the outcome by (𝑠, 𝑡), interpreted as reaching the agreement 𝑠 at time 𝑡, while (0, ∞) denotes disagreement between the parties. For a complete understanding of the game, we also have to consider the case when player 2 starts the game. We define 𝜎(𝑔, 𝑓), 𝑇(𝑔, 𝑓), 𝐷(𝑔, 𝑓) and 𝑃(𝑔, 𝑓) similarly; here player 2 starts the game and adopts 𝑓 ∈ 𝐹, and player 1 adopts 𝑔 ∈ 𝐺.
Before proceeding further, let us recall the preference relation on the set of outcomes. We assume that player 𝑖 has a preference relation ≳ᵢ that is complete, reflexive and transitive. It is defined on the set 𝑆 × 𝑁 ∪ {(0, ∞)}, where 𝑁 is the set of natural numbers. We assume that the preference relation satisfies the following assertions:

(A-1) if 𝑟ᵢ > 𝑠ᵢ, then (𝑟, 𝑡) >ᵢ (𝑠, 𝑡);

(A-2) if 𝑠ᵢ > 0 and 𝑡₂ > 𝑡₁, then (𝑠, 𝑡₁) >ᵢ (𝑠, 𝑡₂) >ᵢ (0, ∞);

(A-3) (𝑟, 𝑡₁) ≳ᵢ (𝑠, 𝑡₁ + 1) ⇔ (𝑟, 𝑡₂) ≳ᵢ (𝑠, 𝑡₂ + 1);

(A-4) if 𝑟ₙ → 𝑟 and (𝑟ₙ, 𝑡₁) ≳ᵢ (𝑠, 𝑡₂), then (𝑟, 𝑡₁) ≳ᵢ (𝑠, 𝑡₂); if 𝑟ₙ → 𝑟 and (𝑟ₙ, 𝑡₁) ≳ᵢ (0, ∞), then (𝑟, 𝑡₁) ≳ᵢ (0, ∞);

(A-5) if (𝑠 + 𝜖, 1) ∼ᵢ (𝑠, 0), (𝑠′ + 𝜖′, 1) ∼ᵢ (𝑠′, 0), and 𝑠ᵢ < 𝑠′ᵢ, then 𝜖ᵢ ≦ 𝜖′ᵢ.

As discussed before, two families of discounting models satisfying the above assumptions are the following (both are implemented in the sketch after the list):

A. Fixed bargaining costs: a number 𝑐ᵢ is assigned to each player 𝑖 such that (𝑠, 𝑡₁) ≳ᵢ (𝑠′, 𝑡₂) ⇔ 𝑠ᵢ − 𝑐ᵢ ⋅ 𝑡₁ ≧ 𝑠′ᵢ − 𝑐ᵢ ⋅ 𝑡₂.

B. Fixed discounting factors: a number 0 < 𝛿ᵢ ≦ 1 is assigned to each player 𝑖 such that (𝑠, 𝑡₁) ≳ᵢ (𝑠′, 𝑡₂) ⇔ 𝑠ᵢ𝛿ᵢ^{𝑡₁} ≧ 𝑠′ᵢ𝛿ᵢ^{𝑡₂}.
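
Both orderings are straightforward to express in code; a minimal sketch (the function names are ours):

    # Minimal sketch (ours) of the two preference relations over outcomes
    # (s, t): player i's share s_i of the pie, agreed upon at time t.

    def prefers_fixed_cost(si, t1, si_prime, t2, ci):
        """Model A: (s, t1) >=_i (s', t2)  iff  s_i - c_i*t1 >= s'_i - c_i*t2."""
        return si - ci * t1 >= si_prime - ci * t2

    def prefers_fixed_discount(si, t1, si_prime, t2, di):
        """Model B: (s, t1) >=_i (s', t2)  iff  s_i*d_i**t1 >= s'_i*d_i**t2."""
        return si * di ** t1 >= si_prime * di ** t2

    # With c_i = 0.1, a share of 0.5 now beats 0.55 one period later:
    assert prefers_fixed_cost(0.5, 0, 0.55, 1, 0.1)
    # With delta_i = 0.9, 0.5 now also beats 0.55 next period (0.55*0.9 = 0.495):
    assert prefers_fixed_discount(0.5, 0, 0.55, 1, 0.9)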

After defining the types of discounting, let us define two sets: (𝐴) the set of all P.E.P.'s in a game in which player 1 starts the bargaining and makes the first offer, i.e., {𝑠 ∈ 𝑆 | there is a P.E. (𝑓, 𝑔) ∈ 𝐹 × 𝐺 such that 𝑠 = 𝐷(𝑓, 𝑔)}; and (𝐵) the set of all P.E.P.'s in a game in which player 2 starts the bargaining, i.e., {𝑠 ∈ 𝑆 | there is a P.E. (𝑔, 𝑓) with 𝑔 ∈ 𝐺, 𝑓 ∈ 𝐹 such that 𝑠 = 𝐷(𝑔, 𝑓)}. In the following lemmas, we establish some relations between these two sets.

Lemma 7.1. For all 𝑎 ∈ 𝐴 and for all 𝑏 ∈ 𝑆 such that 𝑏 > 𝑎, there is 𝑐 ∈ 𝐵 such that (𝑐, 1) ≳₂ (𝑏, 0).

Proof. Informally, the lemma says that if 𝑎 ∈ 𝐴, then player 2 must reject any offer 𝑏 > 𝑎; otherwise 𝑎 could not be an equilibrium partition, since player 1 would rather offer 𝑏. For this rejection to be credible, player 2 must expect a better outcome in the future, that is, there must be some 𝑐 ∈ 𝐵 such that (𝑐, 1) ≳₂ (𝑏, 0).
Formally, let (𝑓̂, 𝑔̂) be a P.E. such that 𝐷(𝑓̂, 𝑔̂) = 𝑎. Let 𝑏 ∈ 𝑆 with 𝑏 > 𝑎. Observe that 𝑔̂¹(𝑏) = 𝑁; otherwise, player 1 could deviate to a strategy 𝑓 with 𝑓¹ = 𝑏, obtaining 𝑃(𝑓, 𝑔̂) = (𝑏, 1) >₁ (𝑎, 1) ≳₁ (𝑎, 𝑇(𝑓̂, 𝑔̂)) = 𝑃(𝑓̂, 𝑔̂). This implies 𝑃(𝑓, 𝑔̂) >₁ 𝑃(𝑓̂, 𝑔̂), which violates the equilibrium property. Also, since player 2's rejection of 𝑏 must be sequentially rational, 𝑃(𝑓̂|𝑏, 𝑔̂|𝑏) ≳₂ (𝑏, 0); thus (𝐷(𝑓̂|𝑏, 𝑔̂|𝑏), 𝑇(𝑓̂|𝑏, 𝑔̂|𝑏)) ≳₂ (𝑏, 0), and (𝐷(𝑓̂|𝑏, 𝑔̂|𝑏), 1) ≳₂ (𝑏, 0) by (A-2). Therefore 𝐷(𝑓̂|𝑏, 𝑔̂|𝑏) is the desired 𝑐.

Similarly, we can prove the following lemma.

Lemma 7.2. For all 𝑎 ∈ 𝐵 and for all 𝑏 ∈ 𝑆 such that 𝑏 < 𝑎, there is 𝑐 ∈ 𝐴
such that (𝑐, 1) ≳1 (𝑏, 0).

Lemma 7.3. For all 𝑎 ∈ 𝐴 and for all 𝑏 ∈ 𝑆 such that (𝑏, 1) >2 (𝑎, 0) there
is 𝑐 ∈ 𝐴 such that (𝑐, 1) ≳1 (𝑏, 0).

Proof. The lemma implies that player 1 must have a strong reason to reject any offer from player 2; knowing this, player 2 will accept the partition originally offered by player 1. Let (𝑓̂, 𝑔̂) be a P.E. such that 𝐷(𝑓̂, 𝑔̂) = 𝑎. Consider the following possibilities:

Case A: 𝑔̂¹(𝑓̂¹) = 𝑁. Let 𝑓̂¹ = 𝑠. Then 𝐷(𝑓̂|𝑠, 𝑔̂|𝑠) = 𝑎 and 𝑎 ∈ 𝐵. From (A-1) and (A-2), we know that if (𝑏, 2) >₂ (𝑎, 1) then 𝑏 < 𝑎. Therefore, by Lemma 7.2, there is 𝑐 ∈ 𝐴 such that (𝑐, 1) ≳₁ (𝑏, 0).

Case B: 𝑓̂¹ = 𝑎 and 𝑔̂¹(𝑎) = 𝑌. Suppose 𝑏 satisfies (𝑏, 1) >₂ (𝑎, 0). Then 𝑓̂²(𝑎, 𝑏) = 𝑁; if not, player 2 could deviate by rejecting 𝑎 and offering 𝑏, obtaining the outcome (𝑏, 1) >₂ (𝑎, 0), which contradicts the definition of perfect equilibrium. Also, 𝑃(𝑓̂|𝑎, 𝑏, 𝑔̂|𝑎, 𝑏) ≳₁ (𝑏, 0). Therefore (𝐷(𝑓̂|𝑎, 𝑏, 𝑔̂|𝑎, 𝑏), 1) ≳₁ (𝑏, 0) and 𝐷(𝑓̂|𝑎, 𝑏, 𝑔̂|𝑎, 𝑏) ∈ 𝐴.

The following lemma can be shown in a similar manner.

Lemma 7.4. For all 𝑎 ∈ 𝐵 and for all 𝑏 ∈ 𝑆 such that (𝑏, 1) >1 (𝑎, 0) there
is 𝑐 ∈ 𝐵 such that (𝑐, 1) ≳2 (𝑏, 0).

7.2 Existence of Perfect Equilibrium Partition
Let

\[
\triangle = \left\{ (x, y) \in S \times S \;\middle|\; \begin{array}{l} y \text{ is the smallest number such that } (x, 1) \lesssim_1 (y, 0),\\ x \text{ is the largest number such that } (y, 1) \lesssim_2 (x, 0) \end{array} \right\},
\]

△1 = {𝑥 ∈ 𝑆| there is 𝑦 ∈ 𝑆 such that (𝑥, 𝑦) ∈ △} , and

△2 = {𝑦 ∈ 𝑆| there is 𝑥 ∈ 𝑆 such that (𝑥, 𝑦) ∈ △} .

Theorem 7.5. 𝐴 = △₁ ≠ ∅ and, similarly, 𝐵 = △₂ ≠ ∅.

Proof. The theorem will be proved in three stages. Starting with our first
claim, we prove the following statement:

Claim 1: If (𝑥, 𝑦) ∈ △, then 𝑥 ∈ 𝐴 and 𝑦 ∈ 𝐵.


Consider (𝑓̂, 𝑔̂) defined as follows:
\[
\text{for } t \text{ odd: } \hat f^t \equiv x, \quad \hat g^t(s^1 \dots s^t) = \begin{cases} Y, & s^t \leqq x,\\ N, & s^t > x; \end{cases}
\]
\[
\text{for } t \text{ even: } \hat f^t(s^1 \dots s^t) = \begin{cases} Y, & s^t \geqq y,\\ N, & s^t < y, \end{cases} \quad \hat g^t \equiv y.
\]

It is easy to check that the above strategies form a perfect equilibrium, inducing the partition 𝑥 when player 1 starts and 𝑦 when player 2 starts.

Claim 2: △ ≠ ∅.
Together with Claim 1, this implies that the sets 𝐴 and 𝐵 are not empty. Define

\[
d_1(x) = \begin{cases} 0, & \text{if for all } y,\ (y, 0) >_1 (x, 1),\\ y, & \text{if there exists } y \text{ such that } (y, 0) \sim_1 (x, 1), \end{cases}
\]

and

\[
d_2(y) = \begin{cases} 1, & \text{if for all } x,\ (x, 0) >_2 (y, 1),\\ x, & \text{if there exists } x \text{ such that } (x, 0) \sim_2 (y, 1). \end{cases}
\]

Observe that 𝑑₁(𝑥) is the smallest 𝑦 such that (𝑦, 0) ≳₁ (𝑥, 1), and 𝑑₂(𝑦) is the largest 𝑥 such that (𝑥, 0) ≳₂ (𝑦, 1). Since both players try to maximize their respective payoffs at each stage due to sequential rationality, we get

△ = {(𝑥, 𝑦) | 𝑦 = 𝑑₁(𝑥) and 𝑥 = 𝑑₂(𝑦)}.

It is easy to show that 𝑑₁ and 𝑑₂ are well defined, continuous and increasing functions; moreover, 𝑑₁ is strictly increasing wherever 𝑑₁(𝑥) > 0, and 𝑑₂ is strictly increasing wherever 𝑑₂(𝑦) < 1.
Define 𝐷(𝑥) = 𝑑₂(𝑑₁(𝑥)). Thus △ = {(𝑥, 𝑦) | 𝑦 = 𝑑₁(𝑥) and 𝑥 = 𝑑₂(𝑑₁(𝑥))}. Notice that 𝐷(1) ≦ 1 and 𝐷(0) ≧ 0, so the continuous function 𝐷(𝑥) − 𝑥 changes sign on [0, 1]; by the intermediate value theorem there exists a fixed point 𝑥₀ such that 𝐷(𝑥₀) = 𝑥₀. Hence, (𝑥₀, 𝑑₁(𝑥₀)) ∈ △.
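
For the fixed discounting factors of model B, the functions take the explicit forms 𝑑₁(𝑥) = 𝛿₁𝑥 and 𝑑₂(𝑦) = 1 − 𝛿₂(1 − 𝑦), so the fixed point of 𝐷 can be located by bisection; a sketch under these assumptions:

    # Sketch: fixed point of D(x) = d2(d1(x)) for fixed discounting factors,
    # found by bisection on [0, 1] (D(x) - x is continuous and changes sign).

    def fixed_point(delta1, delta2, tol=1e-12):
        d1 = lambda x: delta1 * x
        d2 = lambda y: 1 - delta2 * (1 - y)
        D = lambda x: d2(d1(x))
        lo, hi = 0.0, 1.0              # D(0) >= 0 and D(1) <= 1
        while hi - lo > tol:
            mid = (lo + hi) / 2
            if D(mid) >= mid:          # fixed point lies to the right of mid
                lo = mid
            else:
                hi = mid
        return lo

    delta1, delta2 = 0.9, 0.8
    x0 = fixed_point(delta1, delta2)
    # Agrees with the closed form of Corollary 7.7: (1 - d2)/(1 - d1*d2).
    assert abs(x0 - (1 - delta2) / (1 - delta1 * delta2)) < 1e-9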

Claim 3: If 𝑎 ∈ 𝐴, then 𝑎 ∈ △₁, and if 𝑏 ∈ 𝐵, then 𝑏 ∈ △₂. That is, we claim that the converse of Claim 1 also holds. Suppose △₁ = [𝑥₁, 𝑥₂], △₂ = [𝑦₁, 𝑦₂] and 𝑠 = sup{𝑎 ∈ 𝐴}. Assuming 𝑥₂ < 𝑠, we get 𝑑₂(𝑑₁(𝑠)) < 𝑠. Take 𝑎 ∈ 𝐴 such that 𝑟 = 𝑑₂(𝑑₁(𝑠)) < 𝑎 < 𝑠 and 𝑏 ∈ 𝑆 satisfying 𝑑₂⁻¹(𝑎) > 𝑏 > 𝑑₁(𝑠). Therefore 𝑎 > 𝑑₂(𝑏) and (𝑏, 1) >₂ (𝑎, 0). We know from Lemma 7.3 that there exists 𝑐 ∈ 𝐴 such that (𝑐, 1) ≳₁ (𝑏, 0), so there exists 𝑐 ∈ 𝐴 which satisfies 𝑑₁(𝑐) ≧ 𝑏. As 𝑑₁ is an increasing continuous function and 𝑑₁(𝑐) ≧ 𝑏 > 𝑑₁(𝑠), this implies 𝑐 > 𝑠, which contradicts the definition of 𝑠; hence 𝑥₂ = sup{𝑎 ∈ 𝐴}. Similarly, we can prove that 𝑦₁ = inf{𝑏 ∈ 𝐵} with the help of Lemma 7.4. Using Lemmas 7.1 and 7.2, we get 𝑥₁ = inf{𝑎 ∈ 𝐴} and 𝑦₂ = sup{𝑏 ∈ 𝐵}.

7.3 Conclusion
Having proved that a P.E.P. always exists in an infinite horizon bargaining game, let us calculate the P.E.P. for the two discounting models discussed before.

Fixed Bargaining Cost

Corollary 7.6. Suppose that both players have fixed bargaining costs (𝑐₁, 𝑐₂). Then:

(1) If 𝑐₁ > 𝑐₂, (𝑐₂, 1 − 𝑐₂) is the only P.E.P. (perfect equilibrium partition).

(2) If 𝑐₁ = 𝑐₂, (𝑥, 1 − 𝑥) is a P.E.P. for every 𝑐₁ ≦ 𝑥 ≦ 1.

(3) If 𝑐₁ < 𝑐₂, (1, 0) is the only P.E.P.

Proof. We have already proved that a P.E.P. exists; consider the game trees in Figure 7.1, where:

Fig. 7.1: Infinite horizon bargaining game: (a) if Player 1 starts; (b) if Player 2 starts.

• 𝑥 is the offer Player 1 makes to himself in equilibrium if Player 1 starts the bargaining. That is, 𝑥 ∈ 𝐴.

• 𝑦 is the offer Player 2 makes to Player 1 in equilibrium if Player 2 starts the bargaining. That is, 𝑦 ∈ 𝐵.

Observe that if player 1 starts the bargaining, the game tree looks like Figure 7.1 (a), and like Figure 7.1 (b) if player 2 starts. Suppose that player 1 starts the game. In an infinite horizon game, the subgame starting from the node where player 2 has to offer the partition is the same as the game in which player 2 starts the bargaining. From this, one thing is clear: the players should reach an agreement either in the first stage or in the second stage. If not, suppose the game ends at a later stage with payoffs (𝑣, 1 − 𝑣); then, by reasoning similar to Proposition 6.4, this cannot happen with sequentially rational players. Due to discounting, each player wants to end the game immediately, in the first stage. Hence, each player tries to convince the other not to reject his offer, which is possible by offering at least as much as the partition the other player would get after a rejection (see the subgame in Fig. 7.2).
Fig. 7.2: Subgames in the infinite horizon bargaining game.

Therefore, player 1 offers 𝑥 such that:

1 − 𝑥 ≥ 1 − (𝑦 + 𝑐2 ) ⟹ 𝑥 ≤ 𝑦 + 𝑐2

Similarly, player 2 offers 𝑦 such that:

𝑦 ≥ 𝑥 − 𝑐1

Given the above conditions, both players also want to maximize their own partitions. Thus △, the set of all P.E.P.'s, is the set of all solutions to the equations 𝑦 = max{𝑥 − 𝑐₁, 0} and 𝑥 = min{𝑦 + 𝑐₂, 1}. The solution is read off from the three diagrams of Figure 7.3, corresponding to the cases (1) 𝑐₁ > 𝑐₂, (2) 𝑐₁ = 𝑐₂, and (3) 𝑐₁ < 𝑐₂.

Fig. 7.3: Solving equations 𝑦 = 𝑚𝑎𝑥{𝑥 − 𝑐1 , 0} and 𝑥 = 𝑚𝑖𝑛{𝑦 + 𝑐2 , 1}.
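
The three cases can also be recovered numerically by iterating the pair of equations to a fixed point; a sketch (ours, with an arbitrary starting partition):

    # Sketch: iterate y = max(x - c1, 0), x = min(y + c2, 1) to a fixed point
    # and compare with the three cases of Corollary 7.6.

    def pep_fixed_costs(c1, c2, x=0.5, iters=1000):
        for _ in range(iters):
            y = max(x - c1, 0.0)
            x = min(y + c2, 1.0)
        return x

    print(pep_fixed_costs(0.3, 0.1))   # c1 > c2: converges to c2 = 0.1
    print(pep_fixed_costs(0.1, 0.3))   # c1 < c2: converges to 1.0
    print(pep_fixed_costs(0.2, 0.2))   # c1 = c2: stays at the start, 0.5,
                                       # reflecting the continuum of P.E.P.'s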

Fixed Discounting Factor

We will only consider the case where at least one of the 𝛿ᵢ is strictly less than 1 and at least one of them is strictly positive; in this case the only P.E.P. is 𝑃 = (1 − 𝛿₂)/(1 − 𝛿₁𝛿₂). The excluded cases are degenerate: when 𝛿₁ = 𝛿₂ = 1, the players can keep playing to infinite stages, as there is no threat of losing any part of the partition when an offer is rejected; and when 𝛿₁ = 𝛿₂ = 0, the game is equivalent to the ultimatum game, since both parties get zero payoffs if the offer is rejected immediately after the first stage.

Corollary 7.7. Suppose that both players have fixed discounting factors (𝛿₁, 𝛿₂), where at least one of the 𝛿ᵢ is strictly less than 1 and at least one of them is strictly positive. Then the only P.E.P. is 𝑃 = (1 − 𝛿₂)/(1 − 𝛿₁𝛿₂).

Proof. Arguing as in the previous proof, △ in this case is the set of all solutions of the equations 𝑥 = 1 − 𝛿₂(1 − 𝑦) and 𝑦 = 𝛿₁ ⋅ 𝑥. The solution of the equation 𝑥 = 1 − 𝛿₂(1 − 𝛿₁ ⋅ 𝑥) is 𝑥 = (1 − 𝛿₂)/(1 − 𝛿₁𝛿₂). The result follows from Figure 7.4.

Fig. 7.4: Solving equations 𝑥 = 1 − 𝛿2 (1 − 𝑦) and 𝑦 = 𝑥 ⋅ 𝛿1 .
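
As a quick check of the algebra with sample factors (the values are our own):

    # Check: x = 1 - d2*(1 - y) and y = d1*x are solved simultaneously by
    # x = (1 - d2)/(1 - d1*d2), for several sample discounting factors.
    for d1, d2 in [(0.9, 0.8), (0.5, 0.99), (0.0, 0.7)]:
        x = (1 - d2) / (1 - d1 * d2)
        y = d1 * x
        assert abs(x - (1 - d2 * (1 - y))) < 1e-12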


BIBLIOGRAPHY

[1] Tom Angell. Notes on fixed point theorems. http://www.math.udel.edu/~angell/.

[2] Jacob Fox. Sperner's lemma and Brouwer's theorem. http://math.mit.edu/~fox/.

[3] J. R. Munkres. Topology. Prentice Hall, 2000.

[4] John Nash. Non-cooperative games. Annals of Mathematics, pages 286–295, 1951.

[5] Ariel Rubinstein. Perfect equilibrium in a bargaining model. Econometrica: Journal of the Econometric Society, pages 97–109, 1982.

[6] Steven Tadelis. Game Theory: An Introduction. Princeton University Press, 2013.
