Part I

THE CLASSICAL THEORY OF
GAMES
1 Static games of complete information
In this chapter we consider games of the following simple form: first the players simultaneously choose actions, then the players receive payoffs that depend on the combination of actions just chosen. Within the class of such static (or simultaneous-move) games we restrict attention to games of complete information. That is, each player's payoff function (the function that determines the player's payoff from the combination of actions chosen by the players) is common knowledge among all the players.
1.1 Zero-sum two-person games
We consider a game with two players, player 1 and player 2. What player 1 wins is just what player 2 loses, and vice versa.
In order to gain an intuitive understanding of such games, we introduce some related basic ideas through a few simple examples.
Example 1.1. (Matching pennies)
Each of two participants (players) puts down a coin on the table without letting the other player see it. If the coins match, that is, if both coins show heads or both show tails, player 1 wins the two coins. If they do not match, player 2 wins the two coins. In other words, in the first case player 1 receives a payment of 1 from player 2, and in the second case player 1 receives a payment of −1.
These outcomes can be listed in the following table:

                     Player 2
                 1 (heads)  2 (tails)
Player 1
1 (heads)            1         −1
2 (tails)           −1          1

They can also be written as the payoff matrices of the two players:

H_1 = (  1  −1
        −1   1 ),

H_2 = ( −1   1
         1  −1 ).
We say that each player has two strategies (actions, moves). In the matrix H_1 the first row represents the first strategy of player 1, and the second row represents the second strategy of player 1. If player 1 chooses his strategy 1, it means that his coin shows heads up. Strategy 2 means tails up. Similarly, the first and the second columns of the matrix H_1 correspond respectively to the first and the second strategies of player 2. In H_2 we have the same situation, but for player 2.
Remark 1.1. This gambling contest is a zero-sum two-person game. Briefly speaking, a game is a set of rules that specifically describe the entire procedure of the competition (or contest, or struggle), including the players, the strategies, and the outcome after each play of the game is over.
Remark 1.2. The entries in the above table form the payoff matrix of player 1, that is, H_1. The matrix H_2 is the payoff matrix of player 2, and we have

H_1 + H_2^t = O_2,

where H_2^t is the transpose of H_2 and O_2 is the 2×2 zero matrix.
Remark 1.3. The payoff is a function of the strategies of the two players. If, for instance, player 1's coin shows heads up (strategy 1) and player 2's coin also shows heads up (strategy 1), then the element h_11 = 1 denotes the amount which player 1 receives from player 2. Again, if player 1 chooses strategy 2 (tails) and player 2 chooses strategy 1 (heads), then the element h_21 = −1 is the payment that player 1 receives. In this case the payment that player 1 receives is a negative number. This means that player 1 loses one unit, that is, player 1 pays one unit to player 2.
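The zero-sum relationship of Remark 1.2 is easy to check numerically; a minimal sketch (using numpy) with the matrices of Example 1.1:

```python
import numpy as np

# Payoff matrices for matching pennies (Example 1.1):
# rows = strategies of player 1, columns = strategies of player 2.
H1 = np.array([[1, -1],
               [-1, 1]])
H2 = np.array([[-1, 1],
               [1, -1]])

# Zero-sum property: H1 + H2^t is the zero matrix (Remark 1.2).
print(np.array_equal(H1 + H2.T, np.zeros((2, 2))))  # True
```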
Example 1.2. (Stone-paper-scissors)
Scissors defeats paper, paper defeats stone, and stone in turn defeats scissors. There are two players: 1 and 2. Each player has three strategies. Let strategies 1, 2, 3 represent stone, paper, scissors respectively. If we suppose that the winner wins one unit from the loser, then the payoff table is

               Player 2
             1    2    3
Player 1
1            0   −1    1
2            1    0   −1
3           −1    1    0
Remark 1.4. The payoff matrices for the two players are:

H_1 = (  0  −1   1
         1   0  −1
        −1   1   0 ),

H_2 = (  0  −1   1
         1   0  −1
        −1   1   0 ).

We have H_1 = H_2 and H_1 + H_2^t = O_3.
Example 1.3. We consider a zero-sum two-person game for which the payoff table is the following:

               Player 2
  p\q        0    1    2
Player 1
  0          0    1    4
  1         −1    2    7
  2         −4    1    8
  3         −9   −2    7

We have the payoff matrices:

H_1 = (  0   1   4
        −1   2   7
        −4   1   8
        −9  −2   7 ),

H_2 = (  0   1   4   9
        −1  −2  −1   2
        −4  −7  −8  −7 ).

Player 1 has four strategies, while player 2 has three strategies.
Remark 1.5. The payoff of player 1 (that is, the amount that player 2 pays to player 1) can be determined by the function

f : {0, 1, 2, 3} × {0, 1, 2} → Z,  f(p, q) = q^2 − p^2 + 2pq.
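As a quick check, the payoff table of Example 1.3 can be regenerated from this formula; a minimal sketch:

```python
# Regenerate the payoff table of Example 1.3 from f(p, q) = q^2 - p^2 + 2pq,
# where p indexes the strategies of player 1 and q those of player 2.
def f(p, q):
    return q**2 - p**2 + 2*p*q

table = [[f(p, q) for q in range(3)] for p in range(4)]
for row in table:
    print(row)
# Rows: [0, 1, 4], [-1, 2, 7], [-4, 1, 8], [-9, -2, 7]
```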
In each of the above examples there are two players, namely player 1 and player 2, and a payoff matrix H_1 (there is an H_2 too, such that H_1 + H_2^t = O). Each player has several strategies. The strategies of player 1 are represented by the rows of the payoff matrix H_1, and those of player 2 by the columns of the payoff matrix H_1. (The strategies of player 2 are represented by the rows of the payoff matrix H_2, and those of player 1 by the columns of the payoff matrix H_2.)
Player 1 chooses a strategy from his strategy set, and player 2, independently, chooses a strategy from his strategy set. After the two choices have been made, player 2 pays an amount to player 1 as the outcome of this particular play of the game. The amount is shown in the payoff matrix. This amount may be positive, zero, or negative. If the payoff is positive, player 1 receives a positive amount from player 2, that is, player 1 wins an amount from player 2. If the payoff is negative, player 1 receives a negative amount from player 2, that is, player 1 loses an amount to player 2 (player 2 wins an amount from player 1). The gain of player 1 equals the loss of player 2: what player 1 wins is just what player 2 loses, and vice versa. For this reason, such a game is called a zero-sum game.
1.2 Matrix games
In what follows we suppose that player 1 has m strategies and player 2 has n strategies. We denote by a_{ij}, i = 1, ..., m, j = 1, ..., n, the payoff which player 1 gains from player 2 if player 1 chooses strategy i and player 2 chooses strategy j. So, we obtain the payoff matrix H_1 (= A):

A = (a_{ij}) = ( a_{11}  a_{12}  ...  a_{1n}
                 ...     ...     ...  ...
                 a_{m1}  a_{m2}  ...  a_{mn} ).   (1)
Definition 1.1. We call the game which is completely determined by the above matrix A a matrix game.
To solve the game, that is, to find its solution (the maximum payoff that player 1 can guarantee and the strategies chosen by both players to achieve it), we examine the elements of the matrix A.
In this game, player 1 wishes to gain a payoff a_{ij} as large as possible, while player 2 will do his best to keep the value a_{ij} as small as possible. The interests of the two players are completely conflicting.
If player 1 chooses strategy i, he can be sure to obtain at least the payoff

min_{1≤j≤n} a_{ij}.   (2)

This is the minimum of the elements of the i-th row of the payoff matrix A.
Since player 1 wishes to maximize his payoff, he can choose strategy i so as to make the value in (2) as large as possible. That is to say, player 1 can choose strategy i in order to receive a payoff not less than

max_{1≤i≤m} min_{1≤j≤n} a_{ij}.   (3)

In other words, if player 1 makes his best choice, the payoff which player 1 receives cannot be less than the value given in (3).
Similarly, if player 2 chooses his strategy j, he will lose at most

max_{1≤i≤m} a_{ij}.   (4)

Now, player 2 wishes to minimize his loss, so he will try to choose strategy j so as to obtain the minimum of the value in (4). Namely, player 2 can choose j so as to have his loss not greater than

min_{1≤j≤n} max_{1≤i≤m} a_{ij}.   (5)

So, if player 2 makes his best choice, the payoff which player 1 receives cannot be greater than the value given by (5).
We have seen that player 1 can choose the strategy i to ensure a payoff which is at least

v_1 = max_{1≤i≤m} min_{1≤j≤n} a_{ij},

while player 2 can choose the strategy j to make player 1 get at most

v_2 = min_{1≤j≤n} max_{1≤i≤m} a_{ij}.

Is there any relationship between these two values, v_1 and v_2?
Lemma 1.1. The following inequality holds: v_1 ≤ v_2, that is,

v_1 = max_{1≤i≤m} min_{1≤j≤n} a_{ij} ≤ min_{1≤j≤n} max_{1≤i≤m} a_{ij} = v_2.   (6)
Proof. For every i we have

min_{1≤j≤n} a_{ij} ≤ a_{ij},  j = 1, ..., n,

and for every j we have

a_{ij} ≤ max_{1≤i≤m} a_{ij},  i = 1, ..., m.

Hence the inequality

min_{1≤j≤n} a_{ij} ≤ max_{1≤i≤m} a_{ij}

holds for all i = 1, ..., m and all j = 1, ..., n.
Since the left-hand side of the last inequality is independent of j, taking the minimum with respect to j on both sides we have

min_{1≤j≤n} a_{ij} ≤ min_{1≤j≤n} max_{1≤i≤m} a_{ij} = v_2,  i = 1, ..., m,

that is,

min_{1≤j≤n} a_{ij} ≤ v_2.

Since the right-hand side of the last inequality is independent of i, taking the maximum with respect to i on both sides we obtain

max_{1≤i≤m} min_{1≤j≤n} a_{ij} ≤ v_2,

that is, v_1 ≤ v_2, and the proof is complete.
Let us examine the three examples from Section 1.1.
In Example 1.1 we have m = 2, n = 2, therefore

v_1 = max_{1≤i≤2} min_{1≤j≤2} a_{ij} = max(−1, −1) = −1,
v_2 = min_{1≤j≤2} max_{1≤i≤2} a_{ij} = min(1, 1) = 1.

So, in Example 1.1 we have v_1 < v_2.
In Example 1.2, we have m = 3, n = 3, therefore

v_1 = max_{1≤i≤3} min_{1≤j≤3} a_{ij} = max(−1, −1, −1) = −1,
v_2 = min_{1≤j≤3} max_{1≤i≤3} a_{ij} = min(1, 1, 1) = 1.

So, in Example 1.2 we have v_1 < v_2.
In Example 1.3, we have m = 4, n = 3, therefore

v_1 = max_{1≤i≤4} min_{1≤j≤3} a_{ij} = max(0, −1, −4, −9) = 0,
v_2 = min_{1≤j≤3} max_{1≤i≤4} a_{ij} = min(0, 2, 8) = 0.

So, in Example 1.3 we have v_1 = v_2.
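The two quantities v_1 and v_2 are computed mechanically as row minima and column maxima; a small numpy sketch, applied to the matrix of Example 1.3:

```python
import numpy as np

def maximin_minimax(A):
    """Return (v1, v2): the lower and upper values of the matrix game A."""
    A = np.asarray(A)
    v1 = A.min(axis=1).max()  # max over rows of the row minima, eq. (3)
    v2 = A.max(axis=0).min()  # min over columns of the column maxima, eq. (5)
    return v1, v2

# Example 1.3: v1 = v2 = 0.
A = [[0, 1, 4], [-1, 2, 7], [-4, 1, 8], [-9, -2, 7]]
v1, v2 = maximin_minimax(A)
print(v1, v2)  # prints "0 0"
```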
1.3 Saddle points in pure strategies
There are situations in which v_1 = v_2. Consequently we give
Definition 1.2. If the elements of the payoff matrix A of a matrix game satisfy the equality

v_1 = max_{1≤i≤m} min_{1≤j≤n} a_{ij} = min_{1≤j≤n} max_{1≤i≤m} a_{ij} = v_2,   (7)

then the quantity v (= v_1 = v_2) is called the value of the game.
Remark 1.6. The value v is the common value of the quantities given in (3) and (5).
The value of the game in Example 1.3 is v = 0.
If the equality (7) holds, then there exist an i* and a j* such that

min_{1≤j≤n} a_{i*j} = max_{1≤i≤m} min_{1≤j≤n} a_{ij} = v

and

max_{1≤i≤m} a_{ij*} = min_{1≤j≤n} max_{1≤i≤m} a_{ij} = v.
Therefore

min_{1≤j≤n} a_{i*j} = max_{1≤i≤m} a_{ij*}.

But obviously we have

min_{1≤j≤n} a_{i*j} ≤ a_{i*j*} ≤ max_{1≤i≤m} a_{ij*}.

Thus

max_{1≤i≤m} a_{ij*} = a_{i*j*} = v = min_{1≤j≤n} a_{i*j}.

Therefore, for all i and all j,

a_{ij*} ≤ a_{i*j*} = v ≤ a_{i*j}.   (8)
Consequently, if player 1 chooses the strategy i*, then the payoff cannot be less than v if player 2 departs from the strategy j*; if player 2 chooses the strategy j*, then the payoff cannot exceed v if player 1 departs from the strategy i*.
Definition 1.3. We call i* and j* optimal strategies of players 1 and 2 respectively. The pair (i*, j*) is a saddle point (in pure strategies) of the game. We say that i = i*, j = j* is a solution (or Nash equilibrium) of the game.
Remark 1.7. The relationship (8) shows us that the payoff at the saddle point (i*, j*) (the solution of the game) is the value of the game. When player 1 sticks to his optimal strategy i*, he can hope to increase his payoff if player 2 departs from his optimal strategy j*. Similarly, if player 2 sticks to his optimal strategy j*, player 1's payoff may decrease if he departs from his optimal strategy i*. Thus if the game has a saddle point (i*, j*), then the equality (7) holds and a_{i*j*} = v.
Remark 1.8. A matrix game may have more than one saddle point. However, the payoffs at different saddle points are all equal, the common value being the value of the game.
Example 1.4. Consider the matrix game with the payoff matrix

A = ( 4  3  6  2
      1  2  0  0
      5  6  7  5 ).

We have for the minima of its rows

min(4, 3, 6, 2) = 2,  min(1, 2, 0, 0) = 0,  min(5, 6, 7, 5) = 5,

and then the maximum of these minima:

max(2, 0, 5) = 5 = v_1.

Now, we have for the maxima of its columns

max(4, 1, 5) = 5,  max(3, 2, 6) = 6,  max(6, 0, 7) = 7,  max(2, 0, 5) = 5,

and then the minimum of these maxima:

min(5, 6, 7, 5) = 5 = v_2.

Since v_1 = v_2 = 5, we have a saddle point. It is easy to verify that (3, 1) and (3, 4) are both saddle points because

a_{31} = a_{34} = v = 5.
Remark 1.9. If the matrix game has a saddle point (i*, j*), then it is very easy to find it. Indeed, by Definition 1.3 of a saddle point and by (8), the value a_{i*j*} is an element of the payoff matrix A = (a_{ij}) which is at the same time the minimum of its row and the maximum of its column.
In Example 1.3, (1, 1) is a saddle point of the game because a_{11} = 0 is the smallest element in the first row and at the same time the largest element in the first column. In Example 1.4, a_{31} = a_{34} = 5 are the two smallest elements in the third row, and at the same time the largest elements in the first and fourth columns, respectively.
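This characterization (an entry that is simultaneously a row minimum and a column maximum) translates directly into code; a minimal numpy sketch:

```python
import numpy as np

def pure_saddle_points(A):
    """All (i, j), 0-indexed, with a_ij = min of row i and max of column j."""
    A = np.asarray(A)
    row_min = A.min(axis=1, keepdims=True)   # shape (m, 1)
    col_max = A.max(axis=0, keepdims=True)   # shape (1, n)
    mask = (A == row_min) & (A == col_max)
    return [(int(i), int(j)) for i, j in zip(*np.nonzero(mask))]

# Example 1.4 (saddle points (3,1) and (3,4) in the text's 1-based indexing):
A = [[4, 3, 6, 2], [1, 2, 0, 0], [5, 6, 7, 5]]
print(pure_saddle_points(A))  # [(2, 0), (2, 3)]
```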
A matrix game can have several saddle points. In this case we can prove the following result:
Lemma 1.2. Let (i*, j*) and (i**, j**) be saddle points of a matrix game. Then (i*, j**) and (i**, j*) are also saddle points, and the values at all saddle points are equal, that is,

a_{i*j*} = a_{i**j**} = a_{i*j**} = a_{i**j*}.   (9)
Proof. We prove that (i*, j**) is a saddle point. The fact that (i**, j*) is a saddle point can be proved in a similar way.
Since (i*, j*) is a saddle point, we have

a_{ij*} ≤ a_{i*j*} ≤ a_{i*j}

for all i = 1, ..., m and all j = 1, ..., n. Since (i**, j**) is a saddle point, we have

a_{ij**} ≤ a_{i**j**} ≤ a_{i**j}

for all i = 1, ..., m and j = 1, ..., n. From these inequalities we obtain

a_{i*j*} ≤ a_{i*j**} ≤ a_{i**j**} ≤ a_{i**j*} ≤ a_{i*j*},

which proves (9). By (9) and the above inequalities, we have

a_{ij**} ≤ a_{i*j**} ≤ a_{i*j}

for all i = 1, ..., m and all j = 1, ..., n. Hence (i*, j**) is a saddle point.
From this lemma we see that a matrix game with saddle points has the following properties:
– the exchangeability (or rectangularity) of saddle points,
– the equality of the values at all saddle points.
Example 1.5. The game with the payoff matrix

A = (  3   0  −5
      −7  −1   4
       2   1   1 )

has the saddle point (3, 2) because v_1 = 1, v_2 = 1 and a_{32} = v = 1.
Example 1.6. The pair (3, 3) is a saddle point for the game with the payoff matrix

A = ( 0  −1  −1
      1   0  −1
      1   1   0 ).

We have v = 0.
Example 1.7. The pair (2, 3) is a saddle point for the game with the payoff matrix

A = (  2  −3  −1   4
       1   2   0   1
      −2   3  −1  −2 ).

The value of the game is v = 0.
Example 1.8. The game with the payoff matrix

A = (  4   1   1
       2   1   1
      −7  −1   4 )

has four saddle points, because we have (see Lemma 1.2)

a_{12} = a_{13} = a_{22} = a_{23} = 1 = v.
Example 1.9. The game with the payoff matrix

A = (  7   5   6
       0   9   4
      14   1   8 )

has no saddle point in the sense of Definition 1.2 because

v_1 = max(5, 0, 1) = 5

and

v_2 = min(14, 9, 8) = 8.
1.4 Mixed strategies
We have seen so far that there exist matrix games which have saddle points and matrix games which do not.
When a matrix game has no saddle point, that is, if

v_1 = max_{1≤i≤m} min_{1≤j≤n} a_{ij} < min_{1≤j≤n} max_{1≤i≤m} a_{ij} = v_2,   (10)

we cannot solve the game in the sense given in the previous section. The payoff matrix given in Example 1.2 (Stone-paper-scissors) has no saddle point because v_1 = −1 < 1 = v_2. The same situation occurs in Example 1.9, where v_1 = 5 < 8 = v_2.
About the game given in Example 1.2, with the payoff matrix

A = (  0  −1   1
       1   0  −1
      −1   1   0 ),

we can say the following.
Player 1 can be sure to gain at least v_1 = −1, and player 2 can guarantee that his loss is at most v_2 = 1. In this situation, player 1 will try to gain a payoff greater than −1, and player 2 will try to make the payoff (to player 1) less than 1. For these purposes, each player will make efforts to prevent his opponent from finding out his actual choice of strategy. To accomplish this, player 1 can use some chance device to determine which strategy he is going to choose; similarly, player 2 will also decide his choice of strategy by some chance method. This leads to the notion of mixed strategy that we introduce in this section.
We consider a matrix game with the payoff matrix A = (a_{ij}), where i = 1, ..., m, j = 1, ..., n.
Definition 1.4. A mixed strategy of player 1 is a set of m numbers x_i ≥ 0, i = 1, ..., m, satisfying the relationship Σ_{i=1}^m x_i = 1. A mixed strategy of player 2 is a set of n numbers y_j ≥ 0, j = 1, ..., n, satisfying Σ_{j=1}^n y_j = 1.
Remark 1.10. The numbers x_i and y_j are probabilities. Player 1 chooses his strategy i with probability x_i, and player 2 chooses his strategy j with probability y_j. Hence x_i y_j is the probability that player 1 chooses strategy i and player 2 chooses strategy j, with payoff a_{ij} for player 1 (and −a_{ij} for player 2).
In contrast to mixed strategies, the original strategies i and j are called pure strategies. The pure strategy i = i' is a special mixed strategy: x_{i'} = 1, x_i = 0 for i ≠ i'.
Let X = (x_1, x_2, ..., x_m) and Y = (y_1, y_2, ..., y_n) be mixed strategies of players 1 and 2, respectively.
Definition 1.5. The expected payoff of player 1 is the real number

Σ_{i=1}^m Σ_{j=1}^n a_{ij} x_i y_j,   (11)

which is obtained by multiplying every payoff a_{ij} by the corresponding probability x_i y_j and summing over all i and all j.
Player 1 wishes to maximize the expected payoff, while player 2 wants to minimize it.
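The double sum (11) is a bilinear form in X and Y; a minimal numpy sketch, using the stone-paper-scissors matrix with both players mixing uniformly:

```python
import numpy as np

def expected_payoff(A, X, Y):
    """Expected payoff of player 1: sum_i sum_j a_ij * x_i * y_j, eq. (11)."""
    return float(np.asarray(X) @ np.asarray(A) @ np.asarray(Y))

# Stone-paper-scissors (Example 1.2), both players uniform:
A = [[0, -1, 1], [1, 0, -1], [-1, 1, 0]]
X = Y = [1/3, 1/3, 1/3]
print(expected_payoff(A, X, Y))  # 0.0 (up to rounding)
```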
Let S_m and S_n be the sets of all X = (x_1, x_2, ..., x_m) and all Y = (y_1, y_2, ..., y_n), respectively, satisfying the conditions

x_i ≥ 0, i = 1, ..., m,  Σ_{i=1}^m x_i = 1;
y_j ≥ 0, j = 1, ..., n,  Σ_{j=1}^n y_j = 1.
If player 1 uses the mixed strategy X ∈ S_m, then his expected payoff is at least

min_{Y ∈ S_n} Σ_{i=1}^m Σ_{j=1}^n a_{ij} x_i y_j.   (12)

Player 1 can choose X ∈ S_m so as to obtain the maximum of the value in (12); that is, he can be sure of an expected payoff not less than

v_1 = max_{X ∈ S_m} min_{Y ∈ S_n} Σ_{i=1}^m Σ_{j=1}^n a_{ij} x_i y_j.   (13)
If player 2 chooses the strategy Y ∈ S_n, then the expected payoff of player 1 is at most

max_{X ∈ S_m} Σ_{i=1}^m Σ_{j=1}^n a_{ij} x_i y_j.   (14)

Player 2 can choose Y ∈ S_n so as to obtain the minimum of the value in (14); that is, he can prevent player 1 from gaining an expected payoff greater than

v_2 = min_{Y ∈ S_n} max_{X ∈ S_m} Σ_{i=1}^m Σ_{j=1}^n a_{ij} x_i y_j.   (15)
As in the case studied in Section 1.2 (Lemma 1.1), we have the following result:
Lemma 1.3. For all X = (x_1, x_2, ..., x_m) ∈ S_m and all Y = (y_1, y_2, ..., y_n) ∈ S_n the inequality v_1 ≤ v_2 holds, that is,

v_1 = max_{X ∈ S_m} min_{Y ∈ S_n} Σ_{i=1}^m Σ_{j=1}^n a_{ij} x_i y_j ≤ min_{Y ∈ S_n} max_{X ∈ S_m} Σ_{i=1}^m Σ_{j=1}^n a_{ij} x_i y_j = v_2.   (16)
Proof. For all X ∈ S_m and all Y ∈ S_n, we have

min_{Y ∈ S_n} Σ_{i=1}^m Σ_{j=1}^n a_{ij} x_i y_j ≤ Σ_{i=1}^m Σ_{j=1}^n a_{ij} x_i y_j.

Then, taking the maximum over all X ∈ S_m on both sides of the inequality, we get

v_1 = max_{X ∈ S_m} min_{Y ∈ S_n} Σ_{i=1}^m Σ_{j=1}^n a_{ij} x_i y_j ≤ max_{X ∈ S_m} Σ_{i=1}^m Σ_{j=1}^n a_{ij} x_i y_j.

This inequality holds for all Y ∈ S_n. Therefore,

v_1 = max_{X ∈ S_m} min_{Y ∈ S_n} Σ_{i=1}^m Σ_{j=1}^n a_{ij} x_i y_j ≤ min_{Y ∈ S_n} max_{X ∈ S_m} Σ_{i=1}^m Σ_{j=1}^n a_{ij} x_i y_j = v_2,

that is, v_1 ≤ v_2, and the proof is complete.
The main result of this chapter is the well-known fundamental theorem of the theory of matrix games, the minimax theorem. This is the aim of the following section.
1.5 The minimax theorem
J. von Neumann was the first to prove this theorem. We present here von Neumann's proof as given in [15].
Theorem 1.1. If the matrix game has the payoff matrix A = (a_{ij}), then v_1 = v_2, that is,

v_1 = max_{X ∈ S_m} min_{Y ∈ S_n} Σ_{i=1}^m Σ_{j=1}^n a_{ij} x_i y_j = min_{Y ∈ S_n} max_{X ∈ S_m} Σ_{i=1}^m Σ_{j=1}^n a_{ij} x_i y_j = v_2.   (17)
To prove this theorem we need some auxiliary notions and results.
Let A = (a_{ij}) be an m×n matrix, and let

a^(1) = (a_{11}, a_{21}, ..., a_{m1}),  a^(2) = (a_{12}, a_{22}, ..., a_{m2}),  ...,  a^(n) = (a_{1n}, a_{2n}, ..., a_{mn})

be the points obtained from the columns of the matrix A; they are n points in the m-dimensional Euclidean space R^m.
Definition 1.6. We call the convex hull (CH) of the n points a^(1), a^(2), ..., a^(n) the set CH = CH(a^(1), a^(2), ..., a^(n)) defined by

CH = { a | a ∈ R^m, a = t_1 a^(1) + t_2 a^(2) + ... + t_n a^(n), t_k ∈ R, t_k ≥ 0, k = 1, ..., n, Σ_{k=1}^n t_k = 1 }.
Remark 1.11. The elements of CH are expressed as convex linear combinations of the n points a^(1), a^(2), ..., a^(n). CH is a convex set; this can be easily verified by showing that every convex linear combination of two arbitrary points of CH also belongs to CH.
Lemma 1.4. Let CH be the convex hull of a^(1), a^(2), ..., a^(n). If 0 ∉ CH, then there exist m real numbers c_1, c_2, ..., c_m such that for every point a ∈ CH, a = (a_1, a_2, ..., a_m), we have

c_1 a_1 + c_2 a_2 + ... + c_m a_m > 0.
Proof. Since 0 ∉ CH, there exists a point c = (c_1, c_2, ..., c_m) ∈ CH, c ≠ 0, such that the distance |c| from c to 0 is the smallest. This is equivalent to the statement that c_1^2 + c_2^2 + ... + c_m^2 > 0 is the smallest.
Now, let a = (a_1, a_2, ..., a_m) be an arbitrary point in CH. Then

λa + (1 − λ)c ∈ CH,  0 ≤ λ ≤ 1,

and

|λa + (1 − λ)c|^2 ≥ |c|^2,

or

Σ_{i=1}^m [λa_i + (1 − λ)c_i]^2 = Σ_{i=1}^m [λ(a_i − c_i) + c_i]^2 = λ^2 Σ_{i=1}^m (a_i − c_i)^2 + 2λ Σ_{i=1}^m (a_i − c_i)c_i + Σ_{i=1}^m c_i^2 ≥ Σ_{i=1}^m c_i^2.

Thus, if λ ≠ 0, we obtain

λ Σ_{i=1}^m (a_i − c_i)^2 + 2 Σ_{i=1}^m (a_i c_i − c_i^2) ≥ 0.

Now let λ → 0; we get

Σ_{i=1}^m a_i c_i ≥ Σ_{i=1}^m c_i^2 > 0,

and the lemma is proved.
Remark 1.12. This result is usually referred to as the theorem of the supporting hyperplane. It states that if the origin 0 does not belong to the convex hull CH of the n points a^(1), a^(2), ..., a^(n), then there exists a supporting hyperplane p passing through 0 such that CH lies entirely on one side of p, that is, in one of the two half-spaces determined by p.
Lemma 1.5. Let A = (a_{ij}) be an arbitrary m×n matrix. Then either
(1) there exist numbers y_1, y_2, ..., y_n with

y_j ≥ 0, j = 1, ..., n,  Σ_{j=1}^n y_j = 1,

such that

Σ_{j=1}^n a_{ij} y_j = a_{i1} y_1 + a_{i2} y_2 + ... + a_{in} y_n ≤ 0,  i = 1, ..., m;
or
(2) there exist numbers x_1, x_2, ..., x_m with

x_i ≥ 0, i = 1, ..., m,  Σ_{i=1}^m x_i = 1,

such that

Σ_{i=1}^m a_{ij} x_i = a_{1j} x_1 + a_{2j} x_2 + ... + a_{mj} x_m > 0,  j = 1, ..., n.
Proof. We consider the convex hull of the n + m points

a^(1) = (a_{11}, a_{21}, ..., a_{m1}),  a^(2) = (a_{12}, a_{22}, ..., a_{m2}),  ...,  a^(n) = (a_{1n}, a_{2n}, ..., a_{mn}),
e^(1) = (1, 0, ..., 0),  e^(2) = (0, 1, 0, ..., 0),  ...,  e^(m) = (0, 0, ..., 1).

We denote by CH this convex hull. We distinguish two cases: (1) 0 ∈ CH and (2) 0 ∉ CH.
(1) Let 0 ∈ CH. Then there exist real numbers

t_1, t_2, ..., t_{n+m} ≥ 0,  Σ_{j=1}^{n+m} t_j = 1,

such that

t_1 a^(1) + t_2 a^(2) + ... + t_n a^(n) + t_{n+1} e^(1) + t_{n+2} e^(2) + ... + t_{n+m} e^(m) = 0,

that is, 0 is written as a convex linear combination of the above n + m points.
Expressed in terms of the components, the i-th equation (there are m equations) is

t_1 a_{i1} + t_2 a_{i2} + ... + t_n a_{in} + t_{n+i} · 1 = 0.

Hence

t_1 a_{i1} + t_2 a_{i2} + ... + t_n a_{in} = −t_{n+i} ≤ 0,  i = 1, ..., m.   (18)

It follows that t_1 + t_2 + ... + t_n > 0, for otherwise we would have

t_1 = t_2 = ... = t_n = 0 = t_{n+1} = ... = t_{n+m},

which contradicts Σ_{j=1}^{n+m} t_j = 1.
Dividing each inequality of (18) by t_1 + t_2 + ... + t_n > 0 and putting

y_1 = t_1 / (t_1 + ... + t_n),  y_2 = t_2 / (t_1 + ... + t_n),  ...,  y_n = t_n / (t_1 + ... + t_n),

we obtain

Σ_{j=1}^n a_{ij} y_j = a_{i1} y_1 + ... + a_{in} y_n ≤ 0,  i = 1, ..., m.
(2) Let 0 ∉ CH. By Lemma 1.4, there exists c = (c_1, ..., c_m) ∈ CH such that

c · a^(j) = c_1 a_{1j} + c_2 a_{2j} + ... + c_m a_{mj} > 0, j = 1, ..., n;  c · e^(i) = c_i > 0, i = 1, ..., m.   (19)

Dividing each inequality in (19) by c_1 + ... + c_m > 0 and putting

x_1 = c_1 / (c_1 + ... + c_m),  x_2 = c_2 / (c_1 + ... + c_m),  ...,  x_m = c_m / (c_1 + ... + c_m),

we obtain

Σ_{i=1}^m a_{ij} x_i = a_{1j} x_1 + a_{2j} x_2 + ... + a_{mj} x_m > 0,  j = 1, ..., n.

This completes the proof of the lemma.
Proof of Theorem 1.1. We have proved v_1 ≤ v_2 in Lemma 1.3, so it is sufficient to prove that v_1 ≥ v_2.
By Lemma 1.5, one of the following two statements holds.
(1) There exist y_1, y_2, ..., y_n ≥ 0, Σ_{j=1}^n y_j = 1, such that

Σ_{j=1}^n a_{ij} y_j ≤ 0,  i = 1, ..., m.

Hence, for any X = (x_1, x_2, ..., x_m) ∈ S_m we have

Σ_{i=1}^m ( Σ_{j=1}^n a_{ij} y_j ) x_i ≤ 0.

Therefore

max_{X ∈ S_m} Σ_{i=1}^m Σ_{j=1}^n a_{ij} x_i y_j ≤ 0.

It follows that

v_2 = min_{Y ∈ S_n} max_{X ∈ S_m} Σ_{i=1}^m Σ_{j=1}^n a_{ij} x_i y_j ≤ 0.   (20)
(2) There exist x_1, x_2, ..., x_m ≥ 0, Σ_{i=1}^m x_i = 1, such that

Σ_{i=1}^m a_{ij} x_i > 0,  j = 1, ..., n.

Hence, for any Y = (y_1, y_2, ..., y_n) ∈ S_n, we have

Σ_{j=1}^n ( Σ_{i=1}^m a_{ij} x_i ) y_j ≥ 0.

Therefore,

min_{Y ∈ S_n} Σ_{i=1}^m Σ_{j=1}^n a_{ij} x_i y_j ≥ 0.

It follows that

v_1 = max_{X ∈ S_m} min_{Y ∈ S_n} Σ_{i=1}^m Σ_{j=1}^n a_{ij} x_i y_j ≥ 0.   (21)
By (20) and (21) it follows that either v_1 ≥ 0 or v_2 ≤ 0, that is, we never have v_1 < 0 < v_2. We repeat the above argument with the new matrix B = (a_{ij} − k), where k is an arbitrary number. Because Σ_{i=1}^m x_i = 1 and Σ_{j=1}^n y_j = 1, the lower and upper values of B are v_1 − k and v_2 − k, so we never have v_1 − k < 0 < v_2 − k, that is, we never have v_1 < k < v_2. Therefore v_1 < v_2 is impossible, for otherwise there would be a number k satisfying v_1 < k < v_2, contradicting the statement "never v_1 < k < v_2". We have proved v_1 ≥ v_2. □
Remark 1.13. For another proof of the minimax theorem (an inductive proof) see [20]. There the statement of the minimax theorem is the following:
Let A = (a_{ij}) be an arbitrary m×n matrix, and let S_m and S_n be respectively the sets of points X = (x_1, x_2, ..., x_m) and Y = (y_1, y_2, ..., y_n) satisfying

x_i ≥ 0, i = 1, ..., m,  Σ_{i=1}^m x_i = 1;  y_j ≥ 0, j = 1, ..., n,  Σ_{j=1}^n y_j = 1.

Then we have

max_{X ∈ S_m} min_{1≤j≤n} Σ_{i=1}^m a_{ij} x_i = min_{Y ∈ S_n} max_{1≤i≤m} Σ_{j=1}^n a_{ij} y_j.   (22)
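Formulation (22) is the one used by numerical solvers: since the inner minimum runs over pure strategies j only, the left-hand side is a linear program in (x_1, ..., x_m, v). A sketch (assuming scipy is available) that maximizes v subject to Σ_i a_{ij} x_i ≥ v for every column j, illustrated on stone-paper-scissors:

```python
import numpy as np
from scipy.optimize import linprog

def solve_matrix_game(A):
    """Value and an optimal mixed strategy of player 1, via eq. (22)."""
    A = np.asarray(A, dtype=float)
    m, n = A.shape
    # Variables: (x_1, ..., x_m, v); maximize v  <=>  minimize -v.
    c = np.concatenate([np.zeros(m), [-1.0]])
    # For each column j: v - sum_i a_ij x_i <= 0, i.e. rows [-A^T | 1].
    A_ub = np.hstack([-A.T, np.ones((n, 1))])
    b_ub = np.zeros(n)
    # Probability constraint: sum_i x_i = 1 (v has coefficient 0).
    A_eq = np.concatenate([np.ones(m), [0.0]]).reshape(1, -1)
    b_eq = [1.0]
    bounds = [(0, None)] * m + [(None, None)]  # x_i >= 0, v free
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[:m], res.x[-1]

# Stone-paper-scissors: value 0, optimal strategy (1/3, 1/3, 1/3).
X, v = solve_matrix_game([[0, -1, 1], [1, 0, -1], [-1, 1, 0]])
print(np.round(X, 3), round(v, 6))
```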
1.6 Saddle points in mixed strategies
In this section we show that, for any matrix game, a saddle point in mixed strategies always exists.
Let A = (a_{ij}) be the payoff matrix of an m×n matrix game. If X = (x_1, x_2, ..., x_m) ∈ S_m and Y = (y_1, y_2, ..., y_n) ∈ S_n are respectively mixed strategies of players 1 and 2, then the expected payoff Σ_{i=1}^m Σ_{j=1}^n a_{ij} x_i y_j can be written in matrix notation as

Σ_{i=1}^m Σ_{j=1}^n a_{ij} x_i y_j = X A Y^t.
Definition 1.7. A pair (X*, Y*) ∈ S_m × S_n is called a saddle point (in mixed strategies) (or Nash equilibrium) of the matrix game A = (a_{ij}) if

X A Y*^t ≤ X* A Y*^t ≤ X* A Y^t   (23)

for all X ∈ S_m and all Y ∈ S_n.
The following important result establishes the equivalence between the existence of a saddle point and the minimax theorem.
Theorem 1.2. The m×n matrix game A = (a_{ij}) has a saddle point if and only if the numbers

max_{X ∈ S_m} min_{Y ∈ S_n} X A Y^t  and  min_{Y ∈ S_n} max_{X ∈ S_m} X A Y^t   (24)

exist and are equal.
Proof. "⟹" The two numbers in (24) both exist, obviously (they are optimal values of continuous functions defined on compact sets). Assume that the m×n matrix game has a saddle point (X*, Y*); that is to say, the inequalities in (23) hold for all X ∈ S_m and all Y ∈ S_n. From the first inequality in (23) we obtain

max_{X ∈ S_m} X A Y*^t ≤ X* A Y*^t,

hence

min_{Y ∈ S_n} max_{X ∈ S_m} X A Y^t ≤ X* A Y*^t.   (25)

Similarly, from the second inequality in (23), we have

X* A Y*^t ≤ min_{Y ∈ S_n} X* A Y^t ≤ max_{X ∈ S_m} min_{Y ∈ S_n} X A Y^t.   (26)

From (25) and (26) it follows that

v_2 = min_{Y ∈ S_n} max_{X ∈ S_m} X A Y^t ≤ max_{X ∈ S_m} min_{Y ∈ S_n} X A Y^t = v_1.

But it is known (see Lemma 1.3) that the reverse inequality v_1 ≤ v_2 holds. Therefore,

v_1 = max_{X ∈ S_m} min_{Y ∈ S_n} X A Y^t = min_{Y ∈ S_n} max_{X ∈ S_m} X A Y^t = v_2,

and the necessity of the condition is proved.
"⟸" Assume that the two values in (24) are equal. Let X* ∈ S_m and Y* ∈ S_n be such that

max_{X ∈ S_m} min_{Y ∈ S_n} X A Y^t = min_{Y ∈ S_n} X* A Y^t,   (27)
min_{Y ∈ S_n} max_{X ∈ S_m} X A Y^t = max_{X ∈ S_m} X A Y*^t.   (28)

By the definitions of minimum and maximum, we have

min_{Y ∈ S_n} X* A Y^t ≤ X* A Y*^t,  X* A Y*^t ≤ max_{X ∈ S_m} X A Y*^t.   (29)

Since the left-hand sides of (27) and (28) are equal, all terms in (27) through (29) are equal to each other. In particular, we have

max_{X ∈ S_m} X A Y*^t = X* A Y*^t.

Therefore, for all X ∈ S_m,

X A Y*^t ≤ X* A Y*^t.   (30)

Similarly, for all Y ∈ S_n,

X* A Y*^t ≤ X* A Y^t.   (31)

By (30) and (31), it follows that (X*, Y*) is a saddle point of X A Y^t, and the sufficiency of the condition is proved.
Definition 1.8. If (X*, Y*) is a saddle point (see Definition 1.7), then we say that X*, Y* are respectively optimal strategies of players 1 and 2, and v = X* A Y*^t is the value of the game. We also say that (X*, Y*) is a solution (or a Nash equilibrium) of the game.
Remark 1.14. By Theorem 1.2, the value v of the game is the common value of v_1 = max_{X ∈ S_m} min_{Y ∈ S_n} X A Y^t and v_2 = min_{Y ∈ S_n} max_{X ∈ S_m} X A Y^t.
The definition of a saddle point shows us that, as long as player 1 sticks to his optimal strategy X*, he can be sure to get at least the expected payoff v = X* A Y*^t no matter which strategy player 2 chooses; similarly, as long as player 2 sticks to his optimal strategy Y*, he can hold player 1's expected payoff down to at most v no matter how player 1 makes his choice of strategy.
Now we give some essential properties of optimal strategies. To do this, we first introduce some notation.
For the matrix A = (a_{ij}) we denote the i-th row vector of A by A_i and the j-th column vector of A by A^j. Thus

X A^j = Σ_{i=1}^m a_{ij} x_i,  A_i Y^t = Σ_{j=1}^n a_{ij} y_j,

and X A^j is the expected payoff when player 1 chooses the mixed strategy X and player 2 chooses the pure strategy j, while A_i Y^t is the expected payoff when player 2 chooses the mixed strategy Y and player 1 chooses the pure strategy i.
Lemma 1.6. Let A = (a_{ij}) be the payoff matrix of an m×n matrix game whose value is v. The following statements are true:
(1) If Y* is an optimal strategy of player 2 and A_i Y*^t < v, then x*_i = 0 in every optimal strategy X* of player 1.
(2) If X* is an optimal strategy of player 1 and X* A^j > v, then y*_j = 0 in every optimal strategy Y* of player 2.
Proof. We prove only (1); the proof of (2) is similar. Since Y* is an optimal strategy of player 2, we have A_i Y*^t ≤ v, i = 1, ..., m. We denote

S_1 = { i | A_i Y*^t < v },  S_2 = { i | A_i Y*^t = v }.

Then we can write

v = X* A Y*^t = Σ_{i=1}^m x*_i A_i Y*^t = Σ_{i ∈ S_1} x*_i A_i Y*^t + Σ_{i ∈ S_2} x*_i A_i Y*^t = Σ_{i ∈ S_1} x*_i A_i Y*^t + Σ_{i ∈ S_2} x*_i v.

Hence

v (1 − Σ_{i ∈ S_2} x*_i) = Σ_{i ∈ S_1} x*_i A_i Y*^t,

that is,

v Σ_{i ∈ S_1} x*_i = Σ_{i ∈ S_1} x*_i A_i Y*^t,  or  Σ_{i ∈ S_1} (v − A_i Y*^t) x*_i = 0.

Since i ∈ S_1 implies v − A_i Y*^t > 0, we have x*_i = 0.
Remark 1.15. This result states that if player 2 has an optimal strategy Y* in a matrix game with value v, and if player 1, by using the i-th pure strategy, cannot attain the expected payoff v, then the pure strategy i is a bad strategy and cannot appear with positive probability in any of his optimal mixed strategies.
Lemma 1.7. Let A = (a_{ij}) be the payoff matrix of an m×n matrix game whose value is v. The following statements are true:
(1) X* ∈ S_m is an optimal strategy of player 1 if and only if v ≤ X* A^j, j = 1, ..., n.
(2) Y* ∈ S_n is an optimal strategy of player 2 if and only if A_i Y*^t ≤ v, i = 1, ..., m.
Proof. We prove only (1); the proof of (2) is similar. Necessity ("⟹") of the condition follows directly from the definition of a saddle point.
To prove the sufficiency ("⟸") of the condition, assume that v ≤ X* A^j, j = 1, ..., n.
Let (X', Y') be a saddle point of the game, that is,

X A Y'^t ≤ X' A Y'^t ≤ X' A Y^t

for all X ∈ S_m and all Y ∈ S_n.
We prove that (X*, Y') is a saddle point of the game. Let Y = (y_1, y_2, ..., y_n) ∈ S_n be any mixed strategy of player 2. Multiplying both sides of the inequality v ≤ X* A^j, j = 1, ..., n, by y_j and summing over j = 1, ..., n, we obtain

v ≤ Σ_{j=1}^n X* A^j y_j = X* A Y^t.

In particular, v ≤ X* A Y'^t. But the definition of a saddle point implies X* A Y'^t ≤ X' A Y'^t = v. It follows that

X A Y'^t ≤ X* A Y'^t ≤ X* A Y^t,

which proves that (X*, Y') is a saddle point of the game. Hence X* is an optimal strategy of player 1.
Remark 1.16. If the value of a game is known, the above lemma can be used to check whether a given strategy X* of player 1 is optimal, or a given strategy Y* of player 2 is optimal.
Example 1.10. The matrix game with the payoff matrix
$$A=\begin{pmatrix} 2 & 3 & 1\\ 1 & 2 & 3\\ 3 & 1 & 2 \end{pmatrix}$$
has the value $v=2$, and $X^*=Y^*=\left(\frac13,\frac13,\frac13\right)$ are the optimal strategies of players 1 and 2. According to Remark 1.16 the pure strategy $x_2=1$, namely $X_2=(0,1,0)$, is a bad strategy. Indeed, we have
$$v - X_2A_1 = 2 - (0,1,0)\begin{pmatrix}2\\1\\3\end{pmatrix} = 2-1 = 1,$$
so $v > X_2A_1$. Thus the pure strategy $X_2=(0,1,0)$ is a bad strategy, and the same holds for the other pure strategies.
Also, according to Lemma 1.7, the strategy $X^*=(1/3,1/3,1/3)$ of player 1 is optimal. Indeed, we have $v=2$ and
$$X^*A_1 = \left(\tfrac13,\tfrac13,\tfrac13\right)\begin{pmatrix}2\\1\\3\end{pmatrix}=2,\quad X^*A_2 = \left(\tfrac13,\tfrac13,\tfrac13\right)\begin{pmatrix}3\\2\\1\end{pmatrix}=2,\quad X^*A_3 = \left(\tfrac13,\tfrac13,\tfrac13\right)\begin{pmatrix}1\\3\\2\end{pmatrix}=2,$$
therefore $v = 2 = X^*A_j$, $j=1,2,3$.
Moreover, we have
$$v - A_2Y^{*t} = 2 - (1,2,3)\begin{pmatrix}1/3\\1/3\\1/3\end{pmatrix} = 2-2 = 0,\qquad v - A_1Y^{*t} = 2 - (2,3,1)\begin{pmatrix}1/3\\1/3\\1/3\end{pmatrix} = 2-2 = 0,$$
and
$$v - A_3Y^{*t} = 2 - (3,1,2)\begin{pmatrix}1/3\\1/3\\1/3\end{pmatrix} = 2-2 = 0,$$
so $Y^*$ is an optimal strategy.
The game has no saddle point in pure strategies, because we have
$$v_1=\max_i \min_j a_{ij} = \max(1,1,1)=1,$$
while
$$v_2=\min_j \max_i a_{ij} = \min(3,3,3)=3.$$
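The optimality test of Lemma 1.7 used in this example is mechanical, so it can be sketched in code. The following is a small illustration (the function names are ours, not the book's): a candidate $X^*$ passes if $v \le X^*A_j$ for every column, and a candidate $Y^*$ passes if $A_iY^{*t} \le v$ for every row.

```python
import numpy as np

def is_optimal_row_strategy(A, x, v, tol=1e-9):
    """Lemma 1.7(1): x is optimal for player 1 iff v <= x @ A[:, j] for all j."""
    return bool(np.all(np.asarray(x) @ np.asarray(A) >= v - tol))

def is_optimal_col_strategy(A, y, v, tol=1e-9):
    """Lemma 1.7(2): y is optimal for player 2 iff A[i, :] @ y <= v for all i."""
    return bool(np.all(np.asarray(A) @ np.asarray(y) <= v + tol))

A = np.array([[2, 3, 1],
              [1, 2, 3],
              [3, 1, 2]])
v = 2
third = [1/3, 1/3, 1/3]
print(is_optimal_row_strategy(A, third, v))      # True
print(is_optimal_row_strategy(A, [0, 1, 0], v))  # False: the "bad" pure strategy
print(is_optimal_col_strategy(A, third, v))      # True
```

This reproduces the computations of Example 1.10: the uniform strategy passes both tests, while the pure strategy $(0,1,0)$ fails against the first column.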
1.7 Domination of strategies
There are situations in which an examination of the elements of the payoff matrix shows us that player 1 will never use a certain pure strategy, since each element of its row is smaller than the corresponding element in another row (pure strategy). For example, consider the matrix game whose payoff matrix is
$$A=\begin{pmatrix} 2 & -1 & 1\\ 0 & 1 & -1\\ 1 & -2 & 0 \end{pmatrix}.$$
In this matrix $A$ the elements of the third row are smaller than the corresponding elements of the first row. Consequently, regardless of which strategy player 2 chooses, player 1 gains more by choosing strategy 1 than by choosing strategy 3, so player 1 will never use his third strategy. Strategy 3 of player 1 can appear in his optimal mixed strategies only with probability zero.
Thus, in order to solve the matrix game with the payoff matrix $A$, the third row can be deleted and we need to consider only the resulting matrix
$$A'=\begin{pmatrix} 2 & -1 & 1\\ 0 & 1 & -1 \end{pmatrix}.$$
Now, in this matrix $A'$ each element of the first column is greater than the corresponding element of the third column. So player 2 will lose less by choosing strategy 3 than by choosing strategy 1. Thus the first strategy of player 2 will never be included in any of his optimal mixed strategies with positive probability.
Therefore, the first column of the matrix $A'$ can be deleted to obtain
$$A''=\begin{pmatrix} -1 & 1\\ 1 & -1 \end{pmatrix}.$$
It is easy to verify that this $2\times 2$ matrix game has the mixed strategy solution $X^*=Y^*=\left(\frac12,\frac12\right)$ and $v=0$.
Returning to the original $3\times 3$ matrix game with payoff matrix $A$, its solution is
$$X^*=\left(\frac12,\frac12,0\right),\qquad Y^*=\left(0,\frac12,\frac12\right),\qquad v=0.$$
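The row and column deletions described above can be iterated automatically. The following sketch is our own illustration (restricted, as an assumption, to strict domination by pure strategies only; the book also allows domination by convex combinations): it repeatedly deletes a row strictly dominated by another row, and a column on which player 2 always loses strictly more than on another column.

```python
import numpy as np

def eliminate_strictly_dominated(A):
    """Iterated deletion of strictly dominated pure strategies.

    Returns the surviving row indices, column indices, and reduced matrix.
    """
    A = np.asarray(A, dtype=float)
    rows = list(range(A.shape[0]))
    cols = list(range(A.shape[1]))
    changed = True
    while changed:
        changed = False
        # row k strictly dominates row l if A[k, j] > A[l, j] for every kept j
        for l in range(len(rows)):
            sub = A[np.ix_(rows, cols)]
            if any(k != l and np.all(sub[k] > sub[l]) for k in range(len(rows))):
                del rows[l]; changed = True; break
        # column k strictly dominates column l for player 2 (smaller losses)
        # if A[i, k] < A[i, l] for every kept i
        for l in range(len(cols)):
            sub = A[np.ix_(rows, cols)]
            if any(k != l and np.all(sub[:, k] < sub[:, l]) for k in range(len(cols))):
                del cols[l]; changed = True; break
    return rows, cols, A[np.ix_(rows, cols)]

A = [[2, -1, 1],
     [0, 1, -1],
     [1, -2, 0]]
rows, cols, reduced = eliminate_strictly_dominated(A)
print(rows, cols)   # [0, 1] [1, 2]
print(reduced)      # the 2x2 matrix [[-1, 1], [1, -1]]
```

On the example above this deletes row 3 and then column 1, leaving exactly the matrix $A''$ of the text.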
Remark 1.17. We have seen that in the matrix game with the payoff matrix $A$, player 1 will never use his strategy 3, since strategy 1 gives him a greater payoff than strategy 3. Similarly, in the matrix game with the payoff matrix $A'$, player 2 will never use his strategy 1, since it always costs him a greater loss than strategy 3. Therefore strictly dominated strategies will not be played by a rational player, whether player 1 or player 2, so they can be eliminated.
Definition 1.9. Let $A=(a_{ij})$ be the payoff matrix of an $m\times n$ matrix game. If
$$a_{kj} \ge a_{lj},\quad j=1,\dots,n, \tag{32}$$
we say that player 1's strategy $k$ dominates strategy $l$.
If
$$a_{ik} \le a_{il},\quad i=1,\dots,m, \tag{33}$$
we say that player 2's strategy $k$ dominates strategy $l$.
If the inequalities in (32) or (33) are replaced by strict inequalities, we say that the strategy $k$ of player 1 or 2 strictly dominates his strategy $l$.
Remark 1.18. It can be proved that when a pure strategy is strictly dominated by a pure strategy (or by a convex linear combination of several other pure strategies), we can delete the row or column of the payoff matrix corresponding to the dominated pure strategy and solve the reduced matrix game. The optimal strategies of the original matrix game can be obtained from those of the reduced one by assigning probability zero to the pure strategy corresponding to the deleted row or column.
Remark 1.19. If the domination is not strict, we can still obtain a solution of the original game from that of the reduced game. However, the deletion of a row or column may involve the loss of some optimal strategies of the original game.
Example 1.11. Let the payoff matrix of a matrix game be
$$A=\begin{pmatrix} 2 & 1 & 4\\ 3 & 1 & 2\\ 1 & 0 & 3 \end{pmatrix}.$$
Strategy 3 of player 2 is dominated by his strategy 2, so we can delete the third column of the payoff matrix, and we obtain
$$A'=\begin{pmatrix} 2 & 1\\ 3 & 1\\ 1 & 0 \end{pmatrix}.$$
Then strategy 1 of player 2 is dominated by his strategy 2, so the first column can be deleted; one obtains
$$A''=\begin{pmatrix} 1\\ 1\\ 0 \end{pmatrix}.$$
Strategy 3 of player 1 is dominated by his strategy 2 (or 1), so we delete the third row, and it results in
$$A'''=\begin{pmatrix} 1\\ 1 \end{pmatrix}.$$
The reduced game has the pure strategies $X^*_1=(1,0)$, $X^*_2=(0,1)$, $Y^*=(1)$; hence the original game has the pure strategies $X^*_1=(1,0,0)$, $X^*_2=(0,1,0)$, $Y^*=(0,1,0)$. The value of the game is $v=1$.
Remark 1.20. The game in Example 1.11 has the saddle points $(1,2)$ and $(2,2)$. The optimal strategies of this game are $X^*=(t_1,t_2,0)$, $Y^*=(0,1,0)$, where $t_1,t_2\ge 0$, $t_1+t_2=1$; that is, $X^*$ is a convex linear combination of the pure strategies $X^*_1$ and $X^*_2$.
Example 1.12. In the matrix game with the payoff matrix
$$A=\begin{pmatrix} 3 & 2 & 4 & 0\\ 3 & 4 & 2 & 3\\ 4 & 3 & 4 & 2\\ 0 & 4 & 0 & 8 \end{pmatrix},$$
we can delete the dominated strategies, and so we get the reduced game with the matrix $\begin{pmatrix} 4 & 2\\ 0 & 8 \end{pmatrix}$. It is easy to verify that the optimal strategies of this $2\times 2$ matrix game are $X^*=\left(\frac45,\frac15\right)$, $Y^*=\left(\frac35,\frac25\right)$, and the value of the game is $v=\frac{16}{5}$. Therefore
$$X^*=\left(0,0,\frac45,\frac15\right),\qquad Y^*_1=\left(0,0,\frac35,\frac25\right)$$
are optimal strategies of the original matrix game, and $v=\frac{16}{5}$. There also exists the optimal strategy $Y^*_2=\left(0,\frac{8}{15},\frac13,\frac{2}{15}\right)$.
Remark 1.21. In the $4\times 4$ matrix game and in the $3\times 2$ matrix game obtained above we used domination of a strategy by a convex linear combination with $t_1=t_2=\frac12$.
Remark 1.22. The deletion of a certain row or column of a payoff matrix using non-strict domination of strategies may result in a reduced game whose complete set of solutions does not lead to the complete set of solutions of the original larger game. That is, the solution procedure may lose some optimal strategies of the original game. This situation appears, for example, for the matrix game with payoff matrix
$$A=\begin{pmatrix} 3 & 5 & 3\\ 4 & -3 & 2\\ 3 & 2 & 3 \end{pmatrix}.$$
We get the reduced game with the matrix
$$A''=\begin{pmatrix} 5 & 3\\ -3 & 2 \end{pmatrix},$$
which has the optimal mixed strategies $X^*_1=\left(\frac13,\frac23\right)$, $Y^*_1=\left(\frac12,\frac12\right)$. Thus the original game has the optimal mixed strategies $X^*_1=\left(\frac13,\frac23,0\right)$, $Y^*_1=\left(0,\frac12,\frac12\right)$.
But we also have the optimal pure strategies $X^*_2=(1,0,0)$, $Y^*_2=(0,0,1)$. In fact, all convex linear combinations of $X^*_1$ and $X^*_2$ are optimal (mixed) strategies of player 1, and, respectively, all convex linear combinations of $Y^*_1$ and $Y^*_2$ are optimal strategies of player 2.
1.8 Solution of $2\times 2$ matrix game
Writing these equations in terms of the elements of the payoff matrix, we have:
$$ap + c(1-p) = v,\qquad aq + b(1-q) = v,\qquad bp + d(1-p) = v,\qquad cq + d(1-q) = v.$$
The equations in $p$ give us $p^*=\dfrac{d-c}{a+d-b-c}$, and the equations in $q$ give us $q^*=\dfrac{d-b}{a+d-b-c}$. Then $v=\dfrac{ad-bc}{a+d-b-c}$.
Remark 1.23. The above formulae are also valid for the case $a>b$, $a>c$, $d>b$, $d>c$.
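The closed-form solution of this section is easy to mechanize. The following sketch (our own helper, not from the book) evaluates the three formulas with exact rational arithmetic, assuming the game has no saddle point, so that $a+d-b-c\ne 0$:

```python
from fractions import Fraction

def solve_2x2(a, b, c, d):
    """Mixed-strategy solution of the 2x2 game [[a, b], [c, d]]
    without a saddle point (requires a + d - b - c != 0)."""
    denom = Fraction(a + d - b - c)
    p = Fraction(d - c) / denom          # player 1 plays row 1 with prob. p
    q = Fraction(d - b) / denom          # player 2 plays column 1 with prob. q
    v = Fraction(a * d - b * c) / denom  # value of the game
    return p, q, v

p, q, v = solve_2x2(-1, 1, 1, -1)   # the matrix A'' from Section 1.7
print(p, q, v)                       # 1/2 1/2 0
```

Called on the matrix of Example 1.14 below, `solve_2x2(3, 2, 0, 5)` returns $p^*=5/6$, $q^*=1/2$ and $v=5/2$, matching the computation there.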
Example 1.13. The $2\times 2$ matrix game with the payoff matrix
$$A=\begin{pmatrix} 5 & 3\\ -3 & 2 \end{pmatrix}$$
has a solution in pure strategies: $X^*=(1,0)$, $Y^*=(0,1)$, $v=3$. We have $v_1=\max(3,-3)=3$, $v_2=\min(5,3)=3$ and $a_{12}=3$.
Example 1.14. The $2\times 2$ matrix game with the payoff matrix
$$A=\begin{pmatrix} 3 & 2\\ 0 & 5 \end{pmatrix}$$
has no solution in pure strategies. We have $v_1=\max(2,0)=2$, $v_2=\min(3,5)=3$. Thus we obtain
$$p^*=\frac{5-0}{3+5-2-0}=\frac56,\qquad q^*=\frac{5-2}{3+5-2-0}=\frac36=\frac12,$$
hence $X^*=\left(\frac56,\frac16\right)$, $Y^*=\left(\frac12,\frac12\right)$. Then the value of the game is $v=\frac{15-0}{6}=\frac52$.
Indeed, we have
$$v = X^*AY^{*t} = \left(\frac56,\frac16\right)\begin{pmatrix} 3 & 2\\ 0 & 5 \end{pmatrix}\begin{pmatrix} 1/2\\ 1/2 \end{pmatrix} = \left(\frac{15}{6},\frac{15}{6}\right)\begin{pmatrix} 1/2\\ 1/2 \end{pmatrix} = \frac{15}{6} = \frac52.$$
Remark 1.24. For the $2\times 2$ matrix game with no saddle point, an interesting solution technique is described by Williams. Let the payoff matrix be
$$A=\begin{pmatrix} a & b\\ c & d \end{pmatrix}.$$
First, subtract each element of the second column from the corresponding element of the first column: $a-b$ and $c-d$. Then take the absolute values of the two differences and reverse the order of the absolute values: $|c-d|$ and $|a-b|$. The ratio $\frac{|c-d|}{|a-b|}$ is the ratio of $x_1$ to $x_2$ in player 1's optimal strategy $X^*=(x_1,x_2)=(p,1-p)$. Hence $\frac{x_1}{x_2}=\frac{|c-d|}{|a-b|}$, and since $x_1+x_2=1$, we get $x_1$ and $x_2$.
The same technique, applied to the rows, leads us to $Y^*=(y_1,y_2)=(q,1-q)$.
Example 1.15. In the case of Example 1.14 we have
$$A=\begin{pmatrix} 3 & 2\\ 0 & 5 \end{pmatrix},$$
hence $3-2$ and $0-5$, that is, $1$ and $-5$, in the first step.
Then, taking the absolute values and reversing their order, we get $5$ and $1$ in the second step.
In the end we have $\frac{x_1}{x_2}=5$, hence $x_1=5x_2$. Since $x_1+x_2=1$ we obtain $6x_2=1$, that is, $x_2=\frac16$, $x_1=\frac56$. Thus $X^*=\left(\frac56,\frac16\right)$.
With the elements of the rows we have, in the first step, $3-0$ and $2-5$, that is, $3$ and $-3$. Then, in the second step, we take the absolute values of the two differences and reverse their order: $3$ and $3$.
The ratio $3/3$ is the ratio of $y_1$ to $y_2$ in player 2's optimal strategy, hence $y_1=y_2$. So we obtain $y_1=y_2=\frac12$, that is, $Y^*=\left(\frac12,\frac12\right)$. These results are the same as those of Example 1.14.
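Williams' shortcut from Remark 1.24 can also be written as a small routine. This is our own sketch (assuming, as the remark does, a $2\times 2$ game with no saddle point): the reversed absolute column differences are proportional to player 1's probabilities, and the reversed absolute row differences to player 2's.

```python
def williams_2x2(a, b, c, d):
    """Williams' oddments method for a 2x2 game with no saddle point."""
    r1, r2 = abs(c - d), abs(a - b)   # column differences, order reversed
    s1, s2 = abs(b - d), abs(a - c)   # row differences, order reversed
    x = (r1 / (r1 + r2), r2 / (r1 + r2))   # player 1's optimal strategy
    y = (s1 / (s1 + s2), s2 / (s1 + s2))   # player 2's optimal strategy
    return x, y

x, y = williams_2x2(3, 2, 0, 5)   # the matrix of Examples 1.14 and 1.15
print(x)   # (0.8333..., 0.1666...), i.e. (5/6, 1/6)
print(y)   # (0.5, 0.5)
```

This reproduces $X^*=(5/6,1/6)$ and $Y^*=(1/2,1/2)$ obtained by hand in Example 1.15.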
1.9 Graphical solution of $2\times n$ and $m\times 2$ matrix games
In the case of $2\times n$ and $m\times 2$ matrix games we can present a graphical method for finding the solution. We illustrate the method on a $3\times 2$ matrix game.
Suppose that the payoff matrix $A$ is
$$A=\begin{pmatrix} a & b\\ c & d\\ e & f \end{pmatrix}.$$
Denote player 1's pure strategies by $T$, $M$, $B$ and player 2's pure strategies by $L$, $R$. Assume that player 2 uses the mixed strategy $Y=(y_1,y_2)=(y,1-y)$, where $0\le y\le 1$, and suppose that $y=1$ and $y=0$ represent the pure strategies $L$ and $R$ respectively. So we can write the matrix with its rows labelled $T$, $M$, $B$ and its columns labelled $L$ (probability $y$) and $R$ (probability $1-y$).
If player 2 chooses the pure strategy $L$, that is, $y=1$, and if player 1 chooses the pure strategy $T$, the payoff is $a$, as shown in Fig. 1.1. If player 2 chooses the pure strategy $R$, that is, $y=0$, the payoff corresponding to $T$ is $b$. We join the line $ab$ in Fig. 1.1.
Figure 1.1: Mixed strategy Y
Now we suppose that player 2 chooses a mixed strategy $Y=(y,1-y)$, represented by the point $P$ in the figure. Then it can be seen that the height $PQ$ represents the expected payoff when player 2 uses $Y$ and player 1 uses $T$. This amount is
$$A_1Y^{t} = ay + b(1-y).$$
Similarly, corresponding to player 1's strategies $M$ and $B$ we have the lines $cd$ and $ef$, and the amounts are
$$A_2Y^{t} = cy + d(1-y),\qquad A_3Y^{t} = ey + f(1-y).$$
The heights of the points on these lines represent the expected payoffs if player 2 uses $Y$ while player 1 uses $M$ and $B$, respectively.
For any mixed strategy $Y$ of player 2, his expected loss is at most the maximum of the three ordinates on the lines $ab$, $cd$, $ef$ at the point $y$, that is,
$$\max_{1\le i\le 3} A_iY^{t} = \max_{1\le i\le 3} \sum_{j=1}^{2} a_{ij}y_j. \tag{34}$$
The graph of this function is represented by the heavy black line in Fig. 1.1.
Player 2 wishes to choose a $Y$ so as to minimize the maximum function in (34). We see from the figure that he should choose the mixed strategy corresponding to the point $A'$. At this point the expected payoff is
$$A'Y^{t} = \min_{Y\in S_2}\max_{1\le i\le 3} \sum_{j=1}^{2} a_{ij}y_j,$$
and $A'Y^{t}$ is the value of the game.
The graphical solution of a $2\times n$ matrix game is similar. We explain it for the case $n=3$, and let the payoff matrix $A$ of the game be
$$A=\begin{pmatrix} a & b & c\\ d & e & f \end{pmatrix}.$$
Denote player 1's pure strategies by $U$, $D$ and player 2's pure strategies by $L$, $M$, $R$.
Assume that player 1 uses the mixed strategy $X=(x_1,x_2)=(x,1-x)$, where $0\le x\le 1$. Suppose that $x=1$ represents the pure strategy $U$ and $x=0$ represents the pure strategy $D$.
If player 1 chooses the pure strategy $U$, that is, when $x=1$, and if player 2 chooses the pure strategy $L$, the payoff is $a$, as shown in Fig. 1.2. If player 1 chooses $D$, that is, $x=0$, the payoff corresponding to $L$ is $d$. We join the line $ad$ in the figure.
Now suppose that player 1 chooses a mixed strategy $X=(x,1-x)$, represented by the point $P$ in the figure. Then it can be seen that the height $PQ$ represents the expected payoff when player 1 uses $X$ and player 2 uses $L$. The amount is
$$E_1 = \sum_{i=1}^{2} a_{i1}x_i = ax + d(1-x).$$
Similarly, corresponding to player 2's strategies $M$ and $R$ we have the lines $be$ and $cf$. The heights of the points on these lines represent the expected payoffs if player 1 uses $X$ while player 2 uses $M$ and $R$ respectively.
Figure 1.2: Mixed strategy X
For any mixed strategy $X$ of player 1, his expected payoff is at least the minimum of the three ordinates on the lines $ad$, $be$, $cf$ at the point $x$, that is,
$$\min_{1\le j\le 3} E_j = \min_{1\le j\le 3} \sum_{i=1}^{2} a_{ij}x_i. \tag{35}$$
The graph of this function is represented by the heavy black line in the figure.
Player 1 wishes to choose an $X$ so as to maximize the minimum function in (35). We see from the figure that he should choose the mixed strategy corresponding to the point $A'$. At this point the expected payoff is
$$A'Y^{t} = \max_{X\in S_2}\min_{1\le j\le 3} \sum_{i=1}^{2} a_{ij}x_i = \max_{X\in S_2}\min_{1\le j\le 3} E_j,$$
which is the value of the game.
We note that the point $P'$ in Fig. 1.2 is the intersection of the lines $ad$ and $cf$. The abscissa $x=x^*$ of the point $A'$ and the value of $A'Y^{t}$ can be evaluated by solving a system of two linear equations in two unknowns.
Remark 1.25. The graph also shows us that player 2's optimal strategy does not involve his pure strategy $M$. Therefore, the solution of the $2\times 3$ matrix game can be obtained from the solution of the $2\times 2$ matrix game
$$\begin{pmatrix} a & c\\ d & f \end{pmatrix}.$$
The graphical method described above can be used to solve any $2\times n$ matrix game.
Example 1.16. Find the solution of the $2\times 4$ matrix game with the payoff matrix
$$A=\begin{pmatrix} 1 & 5 & 5 & 3\\ 4 & 1 & 3 & 2 \end{pmatrix}.$$
The third column is dominated by the fourth column, and so it can be eliminated. We have the payoff matrix, with columns labelled $L$, $M$, $R$ and rows labelled $U$, $D$:
$$A=\begin{pmatrix} 1 & 5 & 3\\ 4 & 1 & 2 \end{pmatrix}.$$
Now suppose that player 1 chooses a mixed strategy $X=(x,1-x)$. In Figure 1.3 we have the lines $ad$, $be$ and $cf$ corresponding to player 2's strategies $L$, $M$ and $R$.
Figure 1.3: X for Example 1.16.
We see from the figure that player 1 should choose the mixed strategy corresponding to the point $A'$. The abscissa $x=x^*$ of the point $A'$ and the value of $A'Y^{t}$ can be evaluated by solving the system of two linear equations corresponding to strategies $L$ and $R$. The system of linear equations is
$$\begin{cases} 3x + y = 4 & (L)\\ x - y = -2 & (R) \end{cases}$$
and the solution is $x=\frac12$, $y=\frac52$. Thus the optimal mixed strategy of player 1 is $X=\left(\frac12,\frac12\right)$, and the value of the game is $v=\frac52$. To find the optimal mixed strategy of player 2 we set $Y=(q_1,q_2,q_3)$ and use the equality
$$\left(\frac12,\frac12\right)\begin{pmatrix} 1 & 5 & 3\\ 4 & 1 & 2 \end{pmatrix}\begin{pmatrix} q_1\\ q_2\\ q_3 \end{pmatrix} = \frac52.$$
So we obtain $\frac52 q_1 + 3q_2 + \frac52 q_3 = \frac52$, and because $q_1+q_2+q_3=1$ we get $q_2=0$ and $q_1+q_3=1$. Thus we have $Y=(q,0,1-q)$, where $q=q_1\in[0,1]$.
For the original matrix game the optimal strategies of player 2 are $Y=(q,0,0,1-q)$, $q\in[0,1]$. The value of the game is $v=\frac52$.
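The geometric reasoning of this section can be imitated numerically: for a $2\times n$ game the lower envelope of the $n$ payoff lines attains its maximum at $x=0$, at $x=1$, or at a crossing of two lines. The sketch below is our own brute-force illustration of that observation (the helper name is an assumption, not the book's notation):

```python
import itertools

def solve_2xn(a):
    """Value and player 1's optimal x for a 2xn game given as [row1, row2].

    Payoff against column j is the line f_j(x) = a[0][j]*x + a[1][j]*(1-x);
    the maximin is attained at x = 0, x = 1, or an intersection of two lines.
    """
    n = len(a[0])
    f = lambda j, x: a[0][j] * x + a[1][j] * (1 - x)
    candidates = {0.0, 1.0}
    for j, k in itertools.combinations(range(n), 2):
        denom = (a[0][j] - a[1][j]) - (a[0][k] - a[1][k])
        if denom != 0:
            x = (a[1][k] - a[1][j]) / denom   # where line j crosses line k
            if 0 <= x <= 1:
                candidates.add(x)
    best = max(candidates, key=lambda x: min(f(j, x) for j in range(n)))
    v = min(f(j, best) for j in range(n))
    return best, v

x, v = solve_2xn([[1, 5, 3],
                  [4, 1, 2]])   # the reduced matrix of Example 1.16
print(x, v)                     # 0.5 2.5
```

On Example 1.16 this finds $x^*=\frac12$ and $v=\frac52$, the intersection of the lines for strategies $L$ and $R$.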
1.10 Solution of $3\times 3$ matrix game
To obtain the solution of a $3\times 3$ matrix game we use the fact that a linear function on a convex polygon can reach its maximum (minimum) only at a vertex of the polygon.
Consider the payoff matrix of an arbitrary $3\times 3$ matrix game given by
$$A=\begin{pmatrix} a_{11} & a_{12} & a_{13}\\ a_{21} & a_{22} & a_{23}\\ a_{31} & a_{32} & a_{33} \end{pmatrix}.$$
A mixed (pure) strategy of player 1 has the form $X=(x_1,x_2,x_3)$ with $x_1,x_2,x_3\ge 0$ and $x_1+x_2+x_3=1$. The value of the game is
$$v = \max_{X\in S_3}\min_{1\le j\le 3} E_j = \max_{X\in S_3}\min\{XA_1, XA_2, XA_3\}. \tag{36}$$
Consider the equations
$$XA_1 = XA_2,\qquad XA_2 = XA_3,\qquad XA_3 = XA_1. \tag{37}$$
Each equation represents a straight line which divides the whole plane into two half-planes.
The conditions $x_1,x_2,x_3\ge 0$, $x_1+x_2+x_3=1$ show us that $x_1,x_2,x_3$ are barycentric coordinates of the point $X=(x_1,x_2,x_3)$. The set of all points of the closed equilateral triangle $123$ with the vertices $(1,0,0)$, $(0,1,0)$, $(0,0,1)$ is the simplex $S_3$.
The numbers $x_1,x_2,x_3$ satisfying the above conditions represent the distances from $X$ to the sides of the triangle $S_3$ opposite the vertices 1, 2, 3 respectively. The equations of the three sides 23, 31, 12 of the triangle are $x_1=0$, $x_2=0$, $x_3=0$, respectively (see Fig. 1.4).
Figure 1.4: Baricentric coordinates
The equation $XA_1 = XA_2$, for instance, divides the whole plane into two half-planes. The points $X$ in one half-plane satisfy the condition $XA_1 < XA_2$, while those in the other half-plane satisfy the condition $XA_1 > XA_2$. The same situation holds for the other two equations in (37).
The three lines (37) either intersect at one point or are parallel to each other. In both cases these lines divide the whole plane into three regions $R_1$, $R_2$, $R_3$; see Fig. 1.5. (The points outside the triangle can be regarded as points with one or two of the three coordinates $x_1,x_2,x_3$ assuming negative values.)
Figure 1.5: The three regions
In the region $R_1$ we have $\min_{1\le j\le 3} E_j = XA_1$, in the region $R_2$ we have $\min_{1\le j\le 3} E_j = XA_2$, and in the region $R_3$ we have $\min_{1\le j\le 3} E_j = XA_3$.
Therefore, the value of the game (36) can be written as
$$v = \max_{X\in S_3}\min_{1\le j\le 3} E_j = \max\left\{ \max_{X\in S_3\cap R_1} E_1,\ \max_{X\in S_3\cap R_2} E_2,\ \max_{X\in S_3\cap R_3} E_3 \right\}. \tag{38}$$
To determine the value $v$, we should first compute $\max_{X\in S_3\cap R_j} E_j$, $j=1,2,3$.
Each of the sets $S_3\cap R_j$, $j=1,2,3$, is a convex polygon. It is sufficient to evaluate the values of $XA_j$ at the relevant vertices of this polygon and to make a comparison between these values. The maximum value must be $v$. The optimal strategies of player 1 can be determined by the same comparison.
The optimal strategies of player 2 can be determined in a similar manner, after the value $v$ of the game is determined. We have
$$v = \min_{Y\in S_3}\max_{1\le i\le 3} A_iY^{t} = \min_{Y\in S_3}\max\{A_1Y^{t}, A_2Y^{t}, A_3Y^{t}\} = \min\left\{ \min_{Y\in S_3\cap T_1} A_1Y^{t},\ \min_{Y\in S_3\cap T_2} A_2Y^{t},\ \min_{Y\in S_3\cap T_3} A_3Y^{t} \right\},$$
where $T_i$ is the region in which the linear function $A_iY^{t}$ satisfies
$$A_iY^{t} = \max_{1\le i\le 3} A_iY^{t},\qquad i=1,2,3.$$
It suffices to compute the values of $A_iY^{t}$ at the vertices of the convex polygons and to make a comparison between them. The minimum value must be $v$, and the vertices $Y$ at which the minimum is attained are the points corresponding to the optimal strategies of player 2.
Remark 1.26. To simplify the computation we can add a convenient constant to each element of the initial matrix.
Example 1.17. Let us compute the value of the game and find the optimal strategies of the game whose payoff matrix is
$$B=\begin{pmatrix} 4 & 2 & 3\\ 3 & 4 & 2\\ 4 & 0 & 8 \end{pmatrix}.$$
To simplify the computation we add the constant $-4$ to each element of the matrix. The result is the matrix
$$A=\begin{pmatrix} 0 & -2 & -1\\ -1 & 0 & -2\\ 0 & -4 & 4 \end{pmatrix}.$$
For this matrix game $A$ we have, with a mixed strategy $X=(x_1,x_2,x_3)$,
$$XA_1 = -x_2,\qquad XA_2 = -2x_1 - 4x_3,\qquad XA_3 = -x_1 - 2x_2 + 4x_3.$$
The equation of the line $XA_1 = XA_2$ is
$$2x_1 - x_2 + 4x_3 = 0,\quad\text{or}\quad 3x_1 + 5x_3 = 1.$$
The equation of the line $XA_2 = XA_3$ is
$$-x_1 + 2x_2 - 8x_3 = 0,\quad\text{or}\quad -3x_1 - 10x_3 = -2.$$
The equation of the line $XA_3 = XA_1$ is
$$-x_1 - x_2 + 4x_3 = 0,\quad\text{or}\quad 5x_3 = 1.$$
The regions $R_1$, $R_2$, $R_3$ in which $\min_{1\le j\le 3} E_j$ equals $XA_1$, $XA_2$, $XA_3$ respectively are shown in Fig. 1.6.
Figure 1.6: The three regions for Example 1.17
We evaluate $XA_1$ at the point $\left(0,\frac45,\frac15\right)$. It results in
$$\left(0,\frac45,\frac15\right)\begin{pmatrix} 0\\ -1\\ 0 \end{pmatrix} = -\frac45.$$
The values of $XA_2$ at the points $\left(\frac23,\frac13,0\right)$, $(1,0,0)$, $(0,0,1)$ and $\left(0,\frac45,\frac15\right)$ are:
$$\left(\frac23,\frac13,0\right)\begin{pmatrix} -2\\ 0\\ -4 \end{pmatrix} = -\frac43,\qquad (1,0,0)\begin{pmatrix} -2\\ 0\\ -4 \end{pmatrix} = -2,$$
$$(0,0,1)\begin{pmatrix} -2\\ 0\\ -4 \end{pmatrix} = -4,\qquad \left(0,\frac45,\frac15\right)\begin{pmatrix} -2\\ 0\\ -4 \end{pmatrix} = -\frac45.$$
The values of $XA_3$ at the points $\left(\frac23,\frac13,0\right)$, $(0,1,0)$ and $\left(0,\frac45,\frac15\right)$ are
$$\left(\frac23,\frac13,0\right)\begin{pmatrix} -1\\ -2\\ 4 \end{pmatrix} = -\frac43,\qquad (0,1,0)\begin{pmatrix} -1\\ -2\\ 4 \end{pmatrix} = -2,$$
and
$$\left(0,\frac45,\frac15\right)\begin{pmatrix} -1\\ -2\\ 4 \end{pmatrix} = -\frac45.$$
By comparison of the above values $\left(-\frac45, -\frac43, -2 \text{ and } -4\right)$, we see that the maximum value of the matrix game is $v=-\frac45$, and the vertex at which the maximum is reached is $X^*=\left(0,\frac45,\frac15\right)$. Thus $X^*=\left(0,\frac45,\frac15\right)$ is the optimal strategy of player 1.
We proceed in a similar way to find the optimal strategy of player 2. We get that the vertices $Y^*_1=\left(0,\frac35,\frac25\right)$, $Y^*_2=\left(\frac{8}{15},\frac13,\frac{2}{15}\right)$ represent optimal strategies of player 2. Hence $Y^*=\lambda Y^*_1 + (1-\lambda)Y^*_2$, $0\le\lambda\le 1$. Coming back to the original matrix game with the payoff matrix $B$, we obtain the value $v_B = v_A + 4$, that is, $v_B=\frac{16}{5}$. The optimal strategies are
$$X^* = \left(0,\frac45,\frac15\right),\qquad Y^* = \left(\frac{8(1-\lambda)}{15},\ \frac{3\lambda}{5}+\frac{1-\lambda}{3},\ \frac{2\lambda}{5}+\frac{2(1-\lambda)}{15}\right),\quad 0\le\lambda\le 1.$$
Remark 1.27. We have obtained the same result as in Example 1.12, where we used the elimination of dominated strategies.
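The barycentric construction above can be cross-checked numerically. The sketch below is our own assumption-laden verification, not the book's method: it scans a fine grid of the simplex $S_3$ and takes the largest value of $\min_j XA_j$ for the shifted matrix of Example 1.17, which should approach $v=-\frac45$ near $X^*=\left(0,\frac45,\frac15\right)$.

```python
import itertools
import numpy as np

# shifted matrix A of Example 1.17 (original B minus 4 in every entry)
A = np.array([[0, -2, -1],
              [-1, 0, -2],
              [0, -4, 4]], dtype=float)

N = 200  # grid resolution on the simplex
best_v, best_x = -np.inf, None
for i, j in itertools.product(range(N + 1), repeat=2):
    if i + j <= N:
        x = np.array([i, j, N - i - j]) / N
        val = (x @ A).min()        # worst case over player 2's columns
        if val > best_v:
            best_v, best_x = val, x

print(round(best_v, 3))   # -0.8, i.e. v = -4/5
print(best_x)             # close to [0, 0.8, 0.2]
```

Because the optimal vertex $(0, 4/5, 1/5)$ happens to lie on this grid, the scan recovers the exact value; in general a grid search only approximates it.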
1.11 Matrix games and linear programming
Next, we formulate the matrix game problem as a linear programming problem. Let $A=(a_{ij})$ be the payoff matrix of a matrix game. It is no restriction to assume that $a_{ij}>0$ for all $i=1,\dots,m$ and all $j=1,\dots,n$; then the value $v$ of the game must be a positive number.
By choosing a mixed strategy $X\in S_m$ player 1 can get at least the expected payoff
$$\min_{1\le j\le n} E_j = u.$$
Therefore, we have $XA_j \ge u$, $j=1,\dots,n$, that is,
$$\sum_{i=1}^{m} a_{ij}x_i \ge u,\quad j=1,\dots,n,$$
with
$$\sum_{i=1}^{m} x_i = 1,\qquad x_i\ge 0,\quad i=1,\dots,m.$$
We denote $\frac{x_i}{u}=x'_i$, $i=1,\dots,m$. Then the above problem becomes
$$\sum_{i=1}^{m} a_{ij}x'_i \ge 1,\quad j=1,\dots,n,\qquad \sum_{i=1}^{m} x'_i = \frac1u,\qquad x'_i\ge 0,\quad i=1,\dots,m.$$
Player 1 wishes to maximize $u$ (this maximum is the value $v$ of the game); that is, he wishes to minimize $\frac1u$. Thus the problem reduces to the following linear programming problem:
$$[\min] f = x'_1 + x'_2 + \dots + x'_m,\qquad \sum_{i=1}^{m} a_{ij}x'_i \ge 1,\ j=1,\dots,n,\qquad x'_i\ge 0,\ i=1,\dots,m. \tag{39}$$
Similarly, player 2, by choosing a mixed strategy $Y\in S_n$, can keep player 1 from getting more than
$$\max_{1\le i\le m} A_iY^{t} = w.$$
So we have $A_iY^{t} \le w$, $i=1,\dots,m$, that is,
$$\sum_{j=1}^{n} a_{ij}y_j \le w,\quad i=1,\dots,m,$$
where
$$\sum_{j=1}^{n} y_j = 1,\qquad y_j\ge 0,\quad j=1,\dots,n.$$
We denote $\frac{y_j}{w}=y'_j$, $j=1,\dots,n$.
Since player 2 wishes to minimize $w$ (this minimum is also the value $v$ of the game), that is, he wishes to maximize $\frac1w$, the above problem reduces to the following linear programming problem, which is the dual of (39):
$$[\max] g = y'_1 + y'_2 + \dots + y'_n,\qquad \sum_{j=1}^{n} a_{ij}y'_j \le 1,\ i=1,\dots,m,\qquad y'_j\ge 0,\ j=1,\dots,n. \tag{40}$$
Thus the solution of a matrix game is equivalent to the problem of solving a pair of dual linear programming problems.
Remark 1.28. Due to the duality theorem, well known in linear programming, it is enough to solve one of the two problems above.
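The reduction (39)-(40) can be handed to any LP solver. The following sketch assumes SciPy is available; the function name and the shifting step are our own conveniences. It solves (39) for player 1 and (40) for player 2, then rescales by $u=1/f_{\min}$ and undoes the shift that made the matrix positive.

```python
import numpy as np
from scipy.optimize import linprog

def solve_matrix_game(A):
    """Solve a matrix game via the LP pair (39)-(40)."""
    A = np.asarray(A, dtype=float)
    shift = max(0.0, 1.0 - A.min())    # make every entry positive
    B = A + shift
    m, n = B.shape
    # (39): minimize sum(x') subject to B^t x' >= 1, x' >= 0
    res = linprog(c=np.ones(m), A_ub=-B.T, b_ub=-np.ones(n), bounds=(0, None))
    u = 1.0 / res.fun                  # value of the shifted game
    x = res.x * u                      # optimal mixed strategy of player 1
    # (40), the dual: maximize sum(y') subject to B y' <= 1, y' >= 0
    res2 = linprog(c=-np.ones(n), A_ub=B, b_ub=np.ones(m), bounds=(0, None))
    y = res2.x * u                     # optimal mixed strategy of player 2
    return x, y, u - shift             # undo the shift for the value

x, y, v = solve_matrix_game([[4, 2, 3],
                             [3, 4, 2],
                             [4, 0, 8]])   # the matrix B of Example 1.18
print(np.round(x, 3))   # [0.  0.8 0.2]
print(round(v, 3))      # 3.2, i.e. v = 16/5
```

Note that player 2's strategy returned by the solver may be either of the two optimal vertices found in Example 1.18, since the optimum is not unique.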
Example 1.18. We consider the same matrix game as in Example 1.17. Thus we have
$$B=\begin{pmatrix} 4 & 2 & 3\\ 3 & 4 & 2\\ 4 & 0 & 8 \end{pmatrix}.$$
To obtain $a_{ij}>0$ we add the constant 1 to each element of the matrix $B$, and so we obtain
$$A=\begin{pmatrix} 5 & 3 & 4\\ 4 & 5 & 3\\ 5 & 1 & 9 \end{pmatrix}.$$
The corresponding linear programming problem (40) is
$$[\max] g = y'_1+y'_2+y'_3,\qquad \begin{cases} 5y'_1+3y'_2+4y'_3 \le 1\\ 4y'_1+5y'_2+3y'_3 \le 1\\ 5y'_1+y'_2+9y'_3 \le 1\\ y'_1, y'_2, y'_3 \ge 0 \end{cases}$$
In order to solve this problem we use the simplex method. The simplex tableau can be written successively
$$\begin{pmatrix} 5 & 3 & 4 & 1 & 0 & 0 & 1\\ 4 & 5 & 3 & 0 & 1 & 0 & 1\\ 5 & 1 & 9 & 0 & 0 & 1 & 1\\ 1 & 1 & 1 & 0 & 0 & 0 & 0 \end{pmatrix} \to \begin{pmatrix} 0 & 2 & -5 & 1 & 0 & -1 & 0\\ 0 & 21/5 & -21/5 & 0 & 1 & -4/5 & 1/5\\ 1 & 1/5 & 9/5 & 0 & 0 & 1/5 & 1/5\\ 0 & 4/5 & -4/5 & 0 & 0 & -1/5 & -1/5 \end{pmatrix} \to$$
$$\to \begin{pmatrix} 0 & 1 & -5/2 & 1/2 & 0 & -1/2 & 0\\ 0 & 0 & 63/10 & -21/10 & 1 & 13/10 & 1/5\\ 1 & 0 & 23/10 & -1/10 & 0 & 3/10 & 1/5\\ 0 & 0 & 6/5 & -2/5 & 0 & 1/5 & -1/5 \end{pmatrix} \to \begin{pmatrix} 0 & 1 & 0 & -1/3 & 25/63 & 1/63 & 5/63\\ 0 & 0 & 1 & -1/3 & 10/63 & 13/63 & 2/63\\ 1 & 0 & 0 & 2/3 & -23/63 & -11/63 & 8/63\\ 0 & 0 & 0 & 0 & -4/21 & -1/21 & -5/21 \end{pmatrix}$$
Thus the solution is $g_{\max}=\frac{5}{21}$, $y'_1=\frac{8}{63}$, $y'_2=\frac{5}{63}$, $y'_3=\frac{2}{63}$, $y'_4=0$, $y'_5=0$, $y'_6=0$, and, from the last row, $x'_1=0$, $x'_2=\frac{4}{21}$, $x'_3=\frac{1}{21}$.
We have $g_{\max}=\frac1w$, hence $w=\frac{21}{5}$ is the value of the game with the matrix $A$. Also,
$$y_1 = y'_1 w = \frac{8}{63}\cdot\frac{21}{5} = \frac{8}{15},\qquad y_2=\frac13,\qquad y_3=\frac{2}{15},\qquad x_1=0,\qquad x_2=\frac45,\qquad x_3=\frac15.$$
The problem has yet another solution, because one more pivoting step gives
$$\begin{pmatrix} 1/2 & 1 & 0 & 0 & 3/14 & -1/14 & 3/21\\ 1/2 & 0 & 1 & 0 & -1/42 & 5/42 & 2/21\\ 3/2 & 0 & 0 & 1 & -23/42 & -11/42 & 4/21\\ 0 & 0 & 0 & 0 & -4/21 & -1/21 & -5/21 \end{pmatrix}$$
Therefore $y'_1=0$, $y'_2=\frac{3}{21}$, $y'_3=\frac{2}{21}$, $y'_4=\frac{4}{21}$, $y'_5=0$, $y'_6=0$, thus
$$y_1=0,\qquad y_2=\frac35,\qquad y_3=\frac25.$$
In conclusion, the solution of the matrix game with payoff matrix $B$ is:
$$v = w-1 = \frac{21}{5}-1 = \frac{16}{5},\qquad X^* = \left(0,\frac45,\frac15\right),\qquad Y^*_1 = \left(\frac{8}{15},\frac13,\frac{2}{15}\right),\qquad Y^*_2 = \left(0,\frac35,\frac25\right),$$
hence
$$Y^* = \lambda Y^*_1 + (1-\lambda)Y^*_2,\qquad 0\le\lambda\le 1.$$
Remark 1.29. In a later section we will present another approach to this kind of problem.
1.12 Definition of the non-cooperative game
For each game there are $n$ players, $n\in\mathbb{N}$, $n\ge 2$. In our mathematical considerations what is important is the existence of the players and the possibility of identifying them and distinguishing them from one another. The set of players $I$ is identified with the set of the first $n$ non-zero natural numbers, $I=\{1,2,\dots,n\}$.
Each player $i\in I$ can apply several strategies. In the case of an effective game the player $i$, at the moments of decision during the game, may choose from a set of variants $S_i$. We consider that $S_i$ is a finite set, for every $i$. Because from the mathematical point of view the concrete nature of the variants is not essential, only the possibility of identifying them, we denote generally $S_i=\{1,\dots,m_i\}$ and consider in what follows the general notation $S_i=\{s_i\}$, $i=1,\dots,n$, where for each fixed $i$, $s_i=1,\dots,m_i$. If we take one strategy of each player, then we obtain a situation (strategy profile) of the game, $s=(s_1,\dots,s_n)$, which is an element of the Cartesian product $S_1\times\dots\times S_n=\prod_{i\in I}S_i$. For every situation $s$, each player $i$ obtains a payoff $H_i(s)$. So $H_i$ is a function defined on the set of all situations $s$, and we call it the payoff matrix of player $i$.
Definition 1.10. The ensemble $\Gamma=\langle I, \{S_i\}, \{H_i\}, i\in I\rangle$ is called a non-cooperative game. Here $I$ and $S_i$ are sets which contain natural numbers, and $H_i=H_i(s)$, $i\in I$, are real functions defined on the set $S$, $s\in S$, $S=\prod_{i\in I}S_i$.
Remark 1.30. We call the function $H_i$ the payoff matrix because its set of values can be effectively written as an $n$-dimensional matrix of type $m_1\times\dots\times m_n$. So we can accept the name matrix game when we want to emphasize that the game is given by an $n$-dimensional matrix.
Example 1.19. Two players each put a coin of the same kind on the table. If both players choose the same face, then the first player takes the two coins; in the contrary case, the second player takes the two coins. (See Example 1.1.)
The first player is denoted by 1 and the second by 2, so $I=\{1,2\}$. Each player has two strategies, $S_1=S_2=\{1,2\}$. If $s_1=1$ or $s_1=2$, then player 1 chose "heads" respectively "tails". The values $s_2=1$ and $s_2=2$ have similar meanings for player 2. It follows that $S=S_1\times S_2=\{(1,1),(1,2),(2,1),(2,2)\}$. Then this game is
$$\Gamma=\langle I, S_1, S_2, H_1, H_2\rangle.$$
The payoff matrix $H_1(s)$ of player 1 can be written as
$$H_1(s)=\begin{pmatrix} H_1(1,1) & H_1(1,2)\\ H_1(2,1) & H_1(2,2) \end{pmatrix} = \begin{pmatrix} 1 & -1\\ -1 & 1 \end{pmatrix},$$
where the rows correspond to the strategies of player 1 and the columns to the strategies of player 2.
The payoff matrix of player 2 is
$$H_2(s)=\begin{pmatrix} H_2(1,1) & H_2(2,1)\\ H_2(1,2) & H_2(2,2) \end{pmatrix} = \begin{pmatrix} -1 & 1\\ 1 & -1 \end{pmatrix},$$
and here the rows correspond to the strategies of player 2 and the columns correspond to the strategies of player 1.
Remark 1.31. A general notation for the payoff matrices is given by the following table:

  Situation            Payoff matrices
  $s_1 \dots s_n$      $H_1 \dots H_n$

So, for the game considered in Example 1.19 the payoff matrices are

  $s_1$  $s_2$    $H_1$  $H_2$
    1      1        1     -1
    1      2       -1      1
    2      1       -1      1
    2      2        1     -1
1.13 Definition of the equilibrium point
Let us consider a non-cooperative game
$$\Gamma=\langle I, \{S_i\}, \{H_i\}, i\in I\rangle.$$
We suppose that the game is repeated many times.
Example 1.19 shows us that it is not advantageous for a player to apply the same strategy all the time. If, for example, player 1 applies only strategy 1, then player 2 observes this and applies strategy 2, and so player 1 will lose all the time.
The situation is similar if player 1 applies strategy 2 all the time, and similarly for player 2.
So it follows that in every situation $s=(s_1,s_2)$ some player can choose a situation $s'$ he prefers to the situation $s$ existing at that moment of time. This situation is obtained by modifying only the strategy of that player:

  Given situation $s$    Preferred situation $s'$    for the player
      (1,1)                   (1,2)                        2
      (1,2)                   (2,2)                        1
      (2,1)                   (1,1)                        1
      (2,2)                   (2,1)                        2
Hence, by repeating the game, it is necessary to apply each strategy $s_i$ with some probability (relative frequency) $p_{is_i}$, in order to obtain, for every player, a payoff as large as possible over all the games played, that is, to ensure the best possible average value of the game for every player. For the row matrix of all probabilities $p_{is_i}$, $s_i=1,\dots,m_i$, which correspond to the player $i$, we use the notation $P_i=[p_{i1},\dots,p_{im_i}]$. The vector $P_i$, for any values of the probabilities, is called the mixed strategy of the player $i$. If only one probability of the vector $P_i$ is different from 0, and hence equal to 1, then $P_i$ is the pure strategy $s_i$ of the player. If all strategies $P_i$, $i=1,\dots,n$, are pure strategies, then $P=(P_1,\dots,P_n)$ is a pure strategy (the situation $s=(s_1,\dots,s_n)$) of the whole game.
We denote by $J_i$ the row matrix whose components are all equal to 1, and so we can write $P_iJ_i^{t}=1$, where $t$ is the symbol for the transposed matrix.
We denote by $P_{\bar i}$ the mixed strategy of all players except the player $i$. We suppose that each player fixes his strategy independently of those of the other players: $P_{\bar i}=\prod_{j\ne i}P_j$, where we consider this product of the vectors as a Cartesian product (each component with each component). When we write the elements of the vector $P_{\bar i}$ we consider the lexicographic ordering of the elements. For example, if $P_1=[p_{11},p_{12}]$, $P_2=[p_{21},p_{22},p_{23}]$, $P_3=[p_{31},p_{32},p_{33},p_{34}]$ are the mixed strategies of the players $I=\{1,2,3\}$, then we have
$$P_{\bar 1}=P_2\times P_3=[p_{21}p_{31}, p_{21}p_{32}, p_{21}p_{33}, p_{21}p_{34}, p_{22}p_{31}, p_{22}p_{32}, p_{22}p_{33}, p_{22}p_{34}, p_{23}p_{31}, p_{23}p_{32}, p_{23}p_{33}, p_{23}p_{34}],$$
$$P_{\bar 2}=P_1\times P_3=[p_{11}p_{31}, p_{11}p_{32}, p_{11}p_{33}, p_{11}p_{34}, p_{12}p_{31}, p_{12}p_{32}, p_{12}p_{33}, p_{12}p_{34}],$$
$$P_{\bar 3}=P_1\times P_2=[p_{11}p_{21}, p_{11}p_{22}, p_{11}p_{23}, p_{12}p_{21}, p_{12}p_{22}, p_{12}p_{23}].$$
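The lexicographic Cartesian product just described maps directly onto `itertools.product`. The sketch below is our own illustration (the helper names are assumptions): it builds $P_{\bar i}$ as the componentwise products of the other players' mixed strategies, in lexicographic order.

```python
import itertools

def prod_of(components):
    """Product of the numbers in an iterable."""
    out = 1.0
    for c in components:
        out *= c
    return out

def product_strategy(strategies, i):
    """P_bar_i: products of all P_j with j != i, in lexicographic order."""
    others = [p for j, p in enumerate(strategies) if j != i]
    return [prod_of(combo) for combo in itertools.product(*others)]

P1 = [0.5, 0.5]
P2 = [0.2, 0.3, 0.5]
P3 = [0.1, 0.2, 0.3, 0.4]
P_not_1 = product_strategy([P1, P2, P3], 0)
print(len(P_not_1))   # 12 components p_{2s} p_{3t}, as in the example above
print(sum(P_not_1))   # ~1.0: the product is again a probability vector
```

Since each $P_j$ sums to 1, every such product vector also sums to 1, which is what makes $P_{\bar i}$ a probability distribution over the opponents' joint pure strategies.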
Definition 1.11. We say that the non-cooperative game is solved if we can determine those mixed strategies (solutions) $P_i$, $P_iJ_i^{t}=1$, $i=1,\dots,n$, for which, the vector $P_{\bar i}$ being considered constant, the payoff function $F_i=P_iH_iP_{\bar i}^{t}$ attains its maximum value, for every $i=1,\dots,n$.
We denote the strategies $P_i$, $i=1,\dots,n$, by $P=(P_1,\dots,P_n)$.
The mathematical object obtained here is called the equilibrium point (Nash equilibrium) of the game.
Example 1.20. The data of the non-cooperative game from Example 1.19 can be represented in the following form ($P_{\bar 1}=P_2$, $P_{\bar 2}=P_1$):

  $F_1=P_1H_1P_{\bar 1}^{t}$:               $F_2=P_2H_2P_{\bar 2}^{t}$:

             $p_{21}$  $p_{22}$                        $p_{11}$  $p_{12}$
  $p_{11}$      1        -1             $p_{21}$         -1         1
  $p_{12}$     -1         1             $p_{22}$          1        -1
In this case the corresponding system is:
$$p_{11}+p_{12}=1,\qquad p_{21}+p_{22}=1,$$
$$F_1=(p_{21}-p_{22})p_{11} + (-p_{21}+p_{22})p_{12},$$
$$F_2=(-p_{11}+p_{12})p_{21} + (p_{11}-p_{12})p_{22}.$$
If $P_1, P_2$ is the solution of the problem, then there is no probability vector $P_1^0=[p_{11}^0,p_{12}^0]$ for which $F_1^0=F_1(P_1^0,P_2) > F_1=F_1(P_1,P_2)$, where
$$F_1^0=(p_{21}-p_{22})p_{11}^0 + (-p_{21}+p_{22})p_{12}^0,$$
and there is no probability vector $P_2^0=[p_{21}^0,p_{22}^0]$ for which $F_2^0=F_2(P_1,P_2^0) > F_2=F_2(P_1,P_2)$, where
$$F_2^0=(-p_{11}+p_{12})p_{21}^0 + (p_{11}-p_{12})p_{22}^0.$$
For example, $P_1=\left(\frac12,\frac12\right)$, $P_2=\left(\frac12,\frac12\right)$ is a solution of the game, and we have $F_1=F_2=0$.
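The no-profitable-deviation condition just stated can be tested directly. The following is our own brute-force sketch (a grid scan over unilateral deviations, not the text's procedure) for the matching-pennies game of Example 1.20:

```python
H1 = [[1, -1], [-1, 1]]   # payoff matrix of player 1
H2 = [[-1, 1], [1, -1]]   # payoff matrix of player 2

def F1(p11, p21):
    """Expected payoff of player 1 for P1 = (p11, 1-p11), P2 = (p21, 1-p21)."""
    p12, p22 = 1 - p11, 1 - p21
    return (H1[0][0]*p21 + H1[0][1]*p22)*p11 + (H1[1][0]*p21 + H1[1][1]*p22)*p12

def F2(p11, p21):
    """Expected payoff of player 2."""
    p12, p22 = 1 - p11, 1 - p21
    return (H2[0][0]*p11 + H2[0][1]*p12)*p21 + (H2[1][0]*p11 + H2[1][1]*p12)*p22

def is_equilibrium(p11, p21, steps=100, tol=1e-9):
    """No unilateral deviation on the grid improves either payoff."""
    grid = [k / steps for k in range(steps + 1)]
    no_better_1 = all(F1(d, p21) <= F1(p11, p21) + tol for d in grid)
    no_better_2 = all(F2(p11, d) <= F2(p11, p21) + tol for d in grid)
    return no_better_1 and no_better_2

print(is_equilibrium(0.5, 0.5))   # True:  P1 = P2 = (1/2, 1/2)
print(is_equilibrium(1.0, 1.0))   # False: player 2 gains by deviating
```

This confirms that the uniform pair is an equilibrium point, while the pure situation $(1,1)$ is not, exactly as the deviation table of Section 1.13 suggested.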
1.14 The establishing of the equilibrium points of a non-cooperative game
The solution of the game from Example 1.20 was obtained by an ad-hoc procedure, using the elements of the matrices of this particular game. It does not give us a general method to solve every non-cooperative game, nor to find every solution of a game.
In order to solve the non-cooperative game as in Definition 1.11, we suppose that we have obtained the mixed strategies $P_i$, $i=1,\dots,n$, and we write the payoff functions $F_i$ in the matrix form $F_i=P_i\tilde F_i^{t}$, where $\tilde F_i=[F_i,\dots,F_i]$ is a row matrix with $m_i$ components equal to $F_i$.
We recall that $J_i$ is a row vector with $m_i$ components all equal to 1. So, by the given definition, it results that we can write $P_iH_iP_{\bar i}^{t}=P_i\tilde F_i^{t}$, hence
$$P_i\left(H_iP_{\bar i}^{t}-\tilde F_i^{t}\right)=0,\quad\text{where } P_i\ge 0,\ i=1,\dots,n,\ \text{and}\ H_iP_{\bar i}^{t}-\tilde F_i^{t}\le 0.$$
Indeed, if the $j$-th component of the vector $H_iP_{\bar i}^{t}-\tilde F_i^{t}$ were positive, then, by multiplying on the left with the vector $P_i^*$ whose components are all equal to 0 except the $j$-th component, which is equal to 1, it would result that $F_i^*=P_i^*H_iP_{\bar i}^{t} > F_i$. But this contradicts Definition 1.11, which shows us that $F_i$ is the maximum value of the expression $P_iH_iP_{\bar i}^{t}$, for fixed $P_{\bar i}$.
For any values of the probabilities p_{i s_i}, 0 <= p_{i s_i} <= 1 (that is, in
particular for the solution that gives the maximum), the maximum value of the
payoff function F_i is attained (among other values) for a strategy s_i for which

   H_i(s_i) R_i^t = max_{s'_i} H_i(s'_i) R_i^t.

Here s'_i is an arbitrary strategy. We denote by H_i(s_i) R_i^t, respectively
H_i(s'_i) R_i^t, the element with row index s_i, respectively s'_i, of the matrix
H_i R_i^t.
By introducing, for every player i, a row matrix T_i = [t_{i1}, ..., t_{i m_i}]
with independent non-negative variables t_{i s_i}, s_i = 1, ..., m_i, we can write a
matrix equation equivalent to the inequality H_i R_i^t - E_i^t <= 0, namely
H_i R_i^t - E_i^t + T_i^t = 0, or E_i^t - H_i R_i^t = T_i^t.
We have

Theorem 1.3. The determination of the equilibrium points of a non-
cooperative game consists in solving, in non-negative numbers, the system of
multilinear equations

   P_i J_i^t = 1,  H_i R_i^t - E_i^t + T_i^t = 0,  P_i T_i^t = 0,  i = 1, ..., n.
Remark 1.32. We consider that each unknown real value F_i has been
written as a difference of two non-negative values F'_i and F''_i, F_i = F'_i - F''_i,
in order to have all the unknowns as non-negative numbers.

Remark 1.33. To solve the problem formulated in Theorem 1.3, we can
apply a method for solving systems of equations and inequalities of arbitrary
degree in non-negative numbers. Such a method is the complete elimination
method.

Remark 1.34. Because the determination of the equilibrium points of a
non-cooperative game consists in solving a system of multilinear equations, we
can call this theory "the theory of multilinear games".

The previous presentation does not show that the set of solutions is nonempty.
So the following theorem of Nash is important:

Theorem 1.4. Every non-cooperative game has a nonempty set of solutions.

We do not present the proof of this theorem here.
Example 1.21. By Theorem 1.3, the problem given in Example 1.20 is
equivalent to the problem of solving, in non-negative numbers P_1 >= 0, P_2 >= 0,
T_1 >= 0, T_2 >= 0, the system of multilinear equations

   P_1 J_1^t = 1,  H_1 R_1^t - E_1^t + T_1^t = 0,  P_1 T_1^t = 0,
   P_2 J_2^t = 1,  H_2 R_2^t - E_2^t + T_2^t = 0,  P_2 T_2^t = 0,

where

   P_1 = [p11, p12],  P_2 = [p21, p22],
   R_1 = P_2,  R_2 = P_1,  J_1 = J_2 = [1, 1],
   T_1 = [t11, t12],  T_2 = [t21, t22],

   H_1 = |  1  -1 |      H_2 = | -1   1 |
         | -1   1 |,           |  1  -1 |,

   E_1 = [F'_1 - F''_1, F'_1 - F''_1],  E_2 = [F'_2 - F''_2, F'_2 - F''_2],

and

   F'_1 >= 0, F''_1 >= 0, F'_2 >= 0, F''_2 >= 0,  F_1 = F'_1 - F''_1,  F_2 = F'_2 - F''_2,
or, in the developed form:

   p11 + p12 = 1,  p21 + p22 = 1,
   p21 - p22 - F'_1 + F''_1 + t11 = 0,  -p21 + p22 - F'_1 + F''_1 + t12 = 0,
   -p11 + p12 - F'_2 + F''_2 + t21 = 0,  p11 - p12 - F'_2 + F''_2 + t22 = 0,
   p11 t11 = 0,  p12 t12 = 0,  p21 t21 = 0,  p22 t22 = 0.
By solving this system with the complete elimination method we obtain the
same solution as the one obtained by the ad-hoc procedure of Example 1.20.
Remark 1.35. Because of the non-negativity of the unknowns we have the
following equivalences:

   p11 t11 + p12 t12 = 0  <=>  p11 t11 = 0, p12 t12 = 0,
   p21 t21 + p22 t22 = 0  <=>  p21 t21 = 0, p22 t22 = 0,

and so the equation P_i T_i^t = 0 can be replaced by the m_i equations of the
form p_{i s_i} t_{i s_i} = 0, s_i = 1, ..., m_i, for every i = 1, ..., n.
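In small cases the system of Theorem 1.3 can be attacked directly: by the complementarity conditions every equilibrium has a support pattern, and for a 2 x 2 bimatrix game the fully mixed pattern reduces to two linear indifference equations. A sketch for the matching-pennies data of Example 1.21 (player 2's payoffs are stored with rows indexed by player 1's strategies; all names are ours):

```python
# Fully mixed equilibrium of a 2x2 bimatrix game (A, B): A[i][j], B[i][j] are
# the payoffs of players 1 and 2 when player 1 plays row i, player 2 column j.
A = [[1, -1], [-1, 1]]     # matching pennies, player 1
B = [[-1, 1], [1, -1]]     # matching pennies, player 2

def mixed_equilibrium(A, B):
    """Solve the two indifference equations of the full-support pattern."""
    # player 2's mix (q, 1-q) makes player 1 indifferent between his rows:
    #   A[0][0] q + A[0][1] (1-q) = A[1][0] q + A[1][1] (1-q)
    denom_q = (A[0][0] - A[0][1]) - (A[1][0] - A[1][1])
    q = (A[1][1] - A[0][1]) / denom_q
    # player 1's mix (p, 1-p) makes player 2 indifferent between his columns:
    denom_p = (B[0][0] - B[1][0]) - (B[0][1] - B[1][1])
    p = (B[1][1] - B[1][0]) / denom_p
    return (p, 1 - p), (q, 1 - q)

P1, P2 = mixed_equilibrium(A, B)
print(P1, P2)   # (0.5, 0.5) (0.5, 0.5)
```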
1.15 Establishing the equilibrium points of a bi-matrix game

Definition 1.12. A non-cooperative game with two players is called a bi-matrix
game.

Such a game can be solved more easily. Because R_1 = P_2 and R_2 = P_1, the
problem given by Theorem 1.3 can be decomposed into three independent
subproblems. Subproblem (41) consists in solving, in non-negative numbers,
for P_2, the system of linear equations

   P_2 J_2^t = 1,
   H_1 P_2^t - E_1^t + T_1^t = 0;                (41)

subproblem (42) consists in solving, in non-negative numbers, for P_1, the
system of equations

   P_1 J_1^t = 1,
   H_2 P_1^t - E_2^t + T_2^t = 0;                (42)

and both subproblems can be solved by the simplex method. Because the general
solution is a convex combination of the basic solutions, we must select those
basic solutions (P_1, P_2) which also verify subproblem (43), given by the system
of equations

   P_1 T_1^t = 0,
   P_2 T_2^t = 0.                                (43)

If, for an arbitrary index s_1, 1 <= s_1 <= m_1, the unknown t_{1 s_1} is a component
of a basic solution of subproblem (41) and t_{1 s_1} != 0 (t_{1 s_1} = 0 can occur
only in the degenerate case), then p_{1 s_1} = 0. So, in all cases with t_{1 s_1} != 0
we have p_{1 s_1} = 0. Similarly, if t_{2 s_2} != 0, then p_{2 s_2} = 0; this property
lets us find the solutions which verify the system (43).

The general solution can be obtained as a convex combination of all basic
solutions P_1 corresponding to a fixed P_2, together with a convex combination of
all basic solutions P_2 corresponding to a fixed P_1.
Example 1.22. The problem given in Example 1.21 refers to a bi-matrix
game. The three systems are the following:

   p21 + p22 = 1,
   p21 - p22 - F'_1 + F''_1 + t11 = 0,           (41')
   -p21 + p22 - F'_1 + F''_1 + t12 = 0;

   p11 + p12 = 1,
   -p11 + p12 - F'_2 + F''_2 + t21 = 0,          (42')
   p11 - p12 - F'_2 + F''_2 + t22 = 0;

   p11 t11 = 0,  p12 t12 = 0,  p21 t21 = 0,  p22 t22 = 0.   (43')
To subproblem (41') there corresponds the simplex matrix given below; the
row corresponding to the objective function (to be minimized) is identically 0.

   S_1 = |  1   1   0   0   0   0   1 |
         |  1  -1  -1   1   1   0   0 |
         | -1   1  -1   1   0   1   0 |
         |  0   0   0   0   0   0   0 |

We obtain the following basic solutions:

   X_11 = [1/2, 1/2, 0, 0, 0, 0],  X_12 = [1, 0, 1, 0, 0, 2],  X_13 = [0, 1, 1, 0, 2, 0].

Here we use the symbol X in order to have a uniform notation for the unknowns:

   x_1 = p21, x_2 = p22, x_3 = F'_1, x_4 = F''_1, x_5 = t11, x_6 = t12.

Such uniformizations will be used in what follows whenever they are useful
to us. To subproblem (42') there corresponds the following simplex matrix:

   S_2 = |  1   1   0   0   0   0   1 |
         | -1   1  -1   1   1   0   0 |
         |  1  -1  -1   1   0   1   0 |
         |  0   0   0   0   0   0   0 |

and it has the basic solutions

   X_21 = [1/2, 1/2, 0, 0, 0, 0],  X_22 = [1, 0, 1, 0, 2, 0],  X_23 = [0, 1, 1, 0, 0, 2].

We denote by X'_ij, i = 1, 2, j = 1, 2, 3, the vectors obtained by omitting the
components F'_i, F''_i, i = 1, 2. We obtain

   X'_11 = [1/2, 1/2, 0, 0],  X'_12 = [1, 0, 0, 2],  X'_13 = [0, 1, 2, 0],
   X'_21 = [1/2, 1/2, 0, 0],  X'_22 = [1, 0, 2, 0],  X'_23 = [0, 1, 0, 2].

To establish the pairs (X'_1i, X'_2j) that are solutions of the bi-matrix game,
they must satisfy the conditions t_{1 s_1} != 0 ==> p_{1 s_1} = 0 and
t_{2 s_2} != 0 ==> p_{2 s_2} = 0.

We observe that there exists only one solution: P_1 = P_2 = (1/2, 1/2), obtained for
t11 = t12 = t21 = t22 = 0 and F'_1 = F''_1 = F'_2 = F''_2 = 0. So F_1 = F_2 = 0.
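The pairing test can be mechanized: for each pair of basic solutions we check that t_{1s} != 0 forces p_{1s} = 0 and t_{2s} != 0 forces p_{2s} = 0. A sketch over the basic solutions X'_ij listed in the text (vector layout as in the text, two probabilities followed by two t's; names are ours):

```python
# Basic solutions X'_1j of subproblem (41'): [p21, p22, t11, t12]
X1 = [[0.5, 0.5, 0, 0], [1, 0, 0, 2], [0, 1, 2, 0]]
# Basic solutions X'_2j of subproblem (42'): [p11, p12, t21, t22]
X2 = [[0.5, 0.5, 0, 0], [1, 0, 2, 0], [0, 1, 0, 2]]

def compatible(sol1, sol2):
    """t_{1s} != 0 must force p_{1s} = 0, and t_{2s} != 0 must force p_{2s} = 0.

    The t's of player 1 (in sol1) pair with the probabilities of player 1,
    which live in sol2, and conversely."""
    p2, t1 = sol1[:2], sol1[2:]
    p1, t2 = sol2[:2], sol2[2:]
    ok1 = all(p == 0 for p, t in zip(p1, t1) if t != 0)
    ok2 = all(p == 0 for p, t in zip(p2, t2) if t != 0)
    return ok1 and ok2

pairs = [(i, j) for i, s1 in enumerate(X1) for j, s2 in enumerate(X2)
         if compatible(s1, s2)]
print(pairs)   # [(0, 0)]: only the pair X'_11, X'_21 survives
```

Only the fully mixed pair passes, matching the unique solution found above.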
1.16 Establishing the equilibrium points of an antagonistic game

Definition 1.13. We call antagonistic game a bi-matrix game whose matrices
H_1 and H_2 satisfy the relationship H_1 + H_2^t = 0, where 0 is the zero matrix.

Because every equality is equivalent to two inequalities, the systems (41),
(42) and (43) from 1.15 can be written as:

   H_1 P_2^t - E_1^t <= 0,
   -J_2 P_2^t <= -1,                             (44)
   J_2 P_2^t <= 1;

   P_1 H_1 - E_2 >= 0,
   P_1 J_1^t <= 1,                               (45)
   -P_1 J_1^t <= -1;

   P_1 (H_1 P_2^t - E_1^t) = 0,
   (P_1 H_1 - E_2) P_2^t = 0,                    (46)

where E_1 contains as elements one and the same value F_1, and E_2 contains
one and the same value -F_2.

From subproblem (46) it results that P_1 E_1^t = E_2 P_2^t, hence -F_2 = F_1. We
can consider these values as a minimax value (the minimum of some maximum
values), obtained by minimization of the function F_1 (whose maximum on the
feasible set is infinite), respectively a maximin value (the maximum of some
minimum values), obtained by maximization of the function -F_2 = F_1 (whose
minimum is infinite). Adding to system (44) the objective function
F_1 = F'_1 - F''_1 and to system (45) the objective function -F_2 = -F'_2 + F''_2,
we obtain two linear programming problems, which are dual problems.
Certainly, we can use the simplified notation F = F_1 = -F_2 and consider
that we determine F = MIN by using the system (44) and F = MAX by using
the system (45). So at least one particular solution of the antagonistic game
can be obtained by solving only one of the systems (44), (45). The antagonistic
game may have, as a bi-matrix game, other solutions, which result by solving
the systems (41), (42) and (43) with -F_2 = F_1 = F.

Because of the symmetry of the systems (44) and (45), setting F = F_1 = -F_2,
the following theorem relative to antagonistic games results (the von
Neumann-Morgenstern theorem):

Theorem 1.5. The minimum with respect to P_2 of the maximum (minimax)
of the function F(P_1, P_2) with respect to P_1, for fixed P_2, is equal to the
maximum with respect to P_1 of the minimum (maximin) of the function
F(P_1, P_2) with respect to P_2, for fixed P_1, namely

   min_{P_2} max_{P_1} F(P_1, P_2) = max_{P_1} min_{P_2} F(P_1, P_2).   (47)

Remark 1.36. In the case of a bi-matrix game, as a generalization of
the condition which appears in the antagonistic game, we can formulate the
question: for which solution (P_1, P_2) does the function F_1 + F_2 reach its
minimum value?

This formulation leads us to the problem of cooperation: when is there
cooperation and when is there not? For the antagonistic game we have
F_1 + F_2 = 0.
Example 1.23. The bi-matrix game given in Example 1.22 is an antagonistic
game. Because it has only one solution, this solution can be obtained by
minimizing the function F = F_1, namely F = F'_1 - F''_1, over the system (44);
to do this, we replace the all-zero row of the simplex matrix S_1 by the row
corresponding to the function to be minimized, [0, 0, 1, -1, 0, 0, 0]. So we
obtain the simplex matrix

   |  1   1   0   0   0   0   1 |
   |  1  -1  -1   1   1   0   0 |
   | -1   1  -1   1   0   1   0 |
   |  0   0   1  -1   0   0   0 |

which, by reduction, leads us to the same solution P_1 = P_2 = (1/2, 1/2), for
which F'_1 = F''_1 = 0. So MIN = 0.
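The minimization described here can also be handed to an off-the-shelf LP solver: player 2 picks q minimizing v subject to H_1 q <= v 1. A sketch assuming scipy is available (the variable order [q1, q2, v] is our choice):

```python
# Solve the antagonistic (zero-sum) game with matrix H1 as a linear program:
# minimize v subject to H1 q <= v * 1, q >= 0, q1 + q2 = 1.
import numpy as np
from scipy.optimize import linprog

H1 = np.array([[1, -1], [-1, 1]], dtype=float)

c = [0.0, 0.0, 1.0]                       # objective: minimize v
A_ub = np.hstack([H1, -np.ones((2, 1))])  # H1 q - v <= 0, row by row
b_ub = [0.0, 0.0]
A_eq = [[1.0, 1.0, 0.0]]                  # q1 + q2 = 1
b_eq = [1.0]
bounds = [(0, None), (0, None), (None, None)]  # v is a free variable

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
q1, q2, v = res.x
```

For matching pennies the optimum is v = 0 with q = (1/2, 1/2), agreeing with the simplex computation above.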
1.17 Applications in economics

In what follows we give some applications of games in economics. We will
evaluate the payoff function of each player, which will depend on the players'
strategies. Here the sets of strategies are real intervals.
1.17.1 Cournot model of duopoly [21]

We consider a very simple version of Cournot's model. Let q_1 and q_2 denote the
quantities of a homogeneous product produced by firms 1 and 2, respectively.
Let P(Q) = a - Q be the market-clearing price when the aggregate quantity on
the market is Q = q_1 + q_2. Hence we have

   P(Q) = a - Q  for Q < a,
   P(Q) = 0      for Q >= a.

Assume that the total cost to firm i of producing quantity q_i is C_i(q_i) = c q_i.
That is, there are no fixed costs and the marginal cost is constant at c, where
we assume c < a. Suppose that the firms choose their quantities simultaneously.

We first translate the problem into a "continuous" game. For this, we specify:
the players in the game (the two firms), the strategies available to each player
(the different quantities it might produce), and the payoff received by each player
for each combination of strategies that could be chosen by the players (the firm's
payoff is its profit). We will assume that output is continuously divisible and
that negative outputs are not feasible. Thus each firm's strategy space is
S_i = [0, ∞), the non-negative real numbers, in which case a typical strategy s_i
is a quantity choice, q_i >= 0. Because P(Q) = 0 for Q >= a, neither firm will
produce a quantity q_i > a.

The payoff to firm i, as a function of the strategies chosen by it and by the
other firm (that is, its profit), can be written as

   π_i(q_i, q_j) = q_i [a - (q_i + q_j) - c].
As we know, an equilibrium point (Nash equilibrium) is a pair (q*_1, q*_2)
where q*_i, for each firm i, solves the optimization problem

   max_{0 <= q_i < ∞} π_i(q_i, q*_j) = max_{0 <= q_i < ∞} q_i [a - (q_i + q*_j) - c].

Assuming q*_j < a - c (as will be shown to be true), the first-order condition
for firm i's optimization problem is both necessary and sufficient; it yields

   q_i = (a - q*_j - c)/2.                       (48)

Thus, if the quantity pair (q*_1, q*_2) is to be a Nash equilibrium, the firms'
quantity choices must satisfy

   q*_1 = (a - q*_2 - c)/2,   q*_2 = (a - q*_1 - c)/2.

Solving this pair of equations yields q*_1 = q*_2 = (a - c)/3, which is indeed less
than a - c, as assumed.
The intuition behind this equilibrium is simple.

Each firm would of course like to be a monopolist in this market, in which
case it would choose q_i to maximize π_i(q_i, 0) = q_i (a - q_i - c); it would produce
the monopoly quantity q_m = (a - c)/2 and earn the monopoly profit
π_i(q_m, 0) = (a - c)^2/4. Given that there are two firms, aggregate profits for the
duopoly would be maximized by setting the aggregate quantity q_1 + q_2 equal to
the monopoly quantity q_m, as would occur if q_i = q_m/2 for each i, for example.
The problem with this arrangement is that each firm has an incentive to deviate:
because the monopoly quantity is low, the associated price P(q_m) is high, and at
this price each firm would like to increase its quantity, in spite of the fact that
such an increase in production drives down the market-clearing price. To see this
formally, use (48) to check that q_m/2 is not firm 2's best response to the choice of
q_m/2 by firm 1.

In the Cournot equilibrium, in contrast, the aggregate quantity is higher, so
the associated price is lower, so the temptation to increase output is reduced:
reduced by just enough that each firm is just deterred from increasing its output
by the realization that the market-clearing price will fall.
Remark 1.37. Rather than solving for the Nash equilibrium in the Cournot
game algebraically, one could instead proceed graphically, using the best-response
functions of the firms:

   R_2(q_1) = (a - q_1 - c)/2  (firm 2's best response), and
   R_1(q_2) = (a - q_2 - c)/2  (firm 1's best response).

A third way to solve for this Nash equilibrium is to apply the process of
iterated elimination of strictly dominated strategies (see [7]).
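The pair of best-response equations also defines an iteration q_i <- (a - q_j - c)/2 that converges to the Cournot equilibrium. A sketch with parameter values of our choosing:

```python
# Cournot duopoly: iterate the best-response map and compare with the
# closed form (a - c)/3. The values of a and c are illustrative.
a, c = 10.0, 4.0

def best_response(q_other):
    return max(0.0, (a - q_other - c) / 2)

q1 = q2 = 0.0
for _ in range(100):
    q1, q2 = best_response(q2), best_response(q1)

q_star = (a - c) / 3          # Cournot equilibrium quantity
q_m = (a - c) / 2             # monopoly quantity
deviation = best_response(q_m / 2)   # best reply to q_m/2 is NOT q_m/2
```

The iteration is a contraction with factor 1/2, so it converges quickly; the last line illustrates formally that splitting the monopoly quantity is not an equilibrium.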
1.17.2 Bertrand model of duopoly [21]

Bertrand's model is based on the suggestion that firms actually choose prices,
rather than quantities as in Cournot's model. Bertrand's model is a different
game than Cournot's model: the strategy spaces are different and the payoff
functions are different. Thus we obtain a different equilibrium point, but the
equilibrium concept used is the Nash equilibrium defined in the previous
sections.

We consider the case of differentiated products. If firms 1 and 2 choose prices
p_1 and p_2, respectively, the quantity that consumers demand from firm i is

   q_i(p_i, p_j) = a - p_i + b p_j,

where b > 0 reflects the extent to which firm i's product is a substitute for
firm j's product. This is an unrealistic demand function, because demand for
firm i's product is positive even when firm i charges an arbitrarily high price,
provided firm j also charges a high enough price. We assume that there are no
fixed costs of production, that marginal costs are constant at c, where c < a,
and that the firms act (that is, choose their prices) simultaneously. We translate
the economic problem into a non-cooperative game. There are again two players.
This time, however, the strategies available to each firm are the different prices
it might charge, rather than the different quantities it might produce. We will
assume that negative prices are not feasible but that any non-negative price
can be charged; there is no restriction to prices denominated in pennies. Thus
each firm's strategy space can again be represented as S_i = [0, ∞), and a typical
strategy s_i is now a price choice, p_i >= 0.
We will again assume that the payoff function of each firm is just its profit.
The profit to firm i when it chooses the price p_i and its rival chooses the price
p_j is

   π_i(p_i, p_j) = q_i(p_i, p_j)(p_i - c) = (a - p_i + b p_j)(p_i - c).

Thus, the price pair (p*_1, p*_2) is a Nash equilibrium if, for each firm i, p*_i solves
the problem

   max_{0 <= p_i < ∞} π_i(p_i, p*_j) = max_{0 <= p_i < ∞} (a - p_i + b p*_j)(p_i - c).

The solution to firm i's optimization problem is

   p*_i = (a + b p*_j + c)/2.

Therefore, if the price pair (p*_1, p*_2) is to be a Nash equilibrium, the firms'
price choices must satisfy

   p*_1 = (a + b p*_2 + c)/2  and  p*_2 = (a + b p*_1 + c)/2.

Solving this pair of equations yields

   p*_1 = p*_2 = (a + c)/(2 - b).
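The fixed-point property of p* = (a + c)/(2 - b) is easy to check numerically, together with the fact that unilateral price deviations do not raise profit. A sketch with parameter values of our choosing:

```python
# Bertrand duopoly with differentiated products: p* is a fixed point of the
# best-response map p_i <- (a + b p_j + c)/2. Parameter values are illustrative.
a, b, c = 10.0, 0.5, 2.0

def best_response(p_other):
    return (a + b * p_other + c) / 2

def profit(p_i, p_j):
    return (a - p_i + b * p_j) * (p_i - c)

p_star = (a + c) / (2 - b)    # = 8.0 for these parameters
```

Best responding to p_star returns p_star itself, and small unilateral deviations reduce profit.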
1.17.3 Final-offer arbitration [6]

Many public-sector workers are forbidden to strike; instead, wage disputes are
settled by binding arbitration. Many other disputes, including medical
malpractice cases and claims by shareholders against their stockbrokers, also
involve arbitration. The two major forms of arbitration are conventional and
final-offer arbitration. In final-offer arbitration, the two sides make wage offers
and then the arbitrator picks one of the offers as the settlement. In conventional
arbitration, in contrast, the arbitrator is free to impose any wage as the
settlement.

We now derive the Nash equilibrium wage offers in a model of final-offer
arbitration.

Suppose the parties to the dispute are a firm and a union and the dispute
concerns wages. First, the firm and the union simultaneously make offers,
denoted by w_f and w_u, respectively. Second, the arbitrator chooses one of the
two offers as the settlement. Assume that the arbitrator has an ideal settlement
she would like to impose, denoted by x. Assume, further, that, after observing
the parties' offers w_f and w_u, the arbitrator simply chooses the offer that is
closer to x: provided that w_f < w_u, the arbitrator chooses w_f if x < (w_f + w_u)/2,
chooses w_u if x > (w_f + w_u)/2, and chooses w_f or w_u if x = (w_f + w_u)/2.
The arbitrator knows x but the parties do not. The parties believe that x is
randomly distributed according to a probability distribution denoted by F, with
associated probability density function denoted by f. Thus, the parties believe
that the probabilities Prob{w_f chosen} and Prob{w_u chosen} depend on the
arbitrator's behavior, and can be expressed as

   Prob{w_f chosen} = Prob{ x < (w_f + w_u)/2 } = F((w_f + w_u)/2)

and

   Prob{w_u chosen} = 1 - F((w_f + w_u)/2).
Thus, the expected wage settlement is

   w_f Prob{w_f chosen} + w_u Prob{w_u chosen} =
   = w_f F((w_f + w_u)/2) + w_u [1 - F((w_f + w_u)/2)].

We assume that the firm wants to minimize the expected wage settlement
imposed by the arbitrator and the union wants to maximize it.

If the pair of offers (w*_f, w*_u) is to be a Nash equilibrium of the game between
the firm and the union, w*_f must solve the optimization problem

   min_{w_f} { w_f F((w_f + w*_u)/2) + w*_u [1 - F((w_f + w*_u)/2)] }

and w*_u must solve the optimization problem

   max_{w_u} { w*_f F((w*_f + w_u)/2) + w_u [1 - F((w*_f + w_u)/2)] }.
Thus, the wage-offer pair (w*_f, w*_u) must solve the first-order conditions of
these optimization problems:

   (w*_u - w*_f) (1/2) f((w*_f + w*_u)/2) = F((w*_f + w*_u)/2)   and   (49)
   (w*_u - w*_f) (1/2) f((w*_f + w*_u)/2) = 1 - F((w*_f + w*_u)/2).

It results that

   F((w*_f + w*_u)/2) = 1/2,                     (50)

that is, the average of the offers must equal the median of the arbitrator's
preferred settlement. Substituting (50) into either of the first-order conditions
then yields

   w*_u - w*_f = 1 / f((w*_f + w*_u)/2).         (51)
Remark 1.38. Suppose that the arbitrator's preferred settlement is normally
distributed with mean m and variance σ^2, in which case the density function is
given by

   f(x) = (1/(σ sqrt(2π))) e^(-(x - m)^2 / (2σ^2)),  m, σ ∈ R, σ > 0.

We know that, in the case of the normal distribution, the median of the
distribution equals the mean m of the distribution. Thus, (50) and (51) become:

   (w*_f + w*_u)/2 = m,  w*_u - w*_f = 1/f(m) = σ sqrt(2π),

and the Nash equilibrium offers are

   w*_u = m + σ sqrt(π/2),  w*_f = m - σ sqrt(π/2).
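These formulas can be verified numerically for concrete parameters: the average of the two offers equals m, and the gap w*_u - w*_f equals 1/f(m) = σ sqrt(2π). A sketch with values of our choosing:

```python
# Final-offer arbitration with a normally distributed arbitrator ideal:
# check conditions (50) and (51) at the equilibrium offers. The values of
# m and sigma are illustrative.
import math

m, sigma = 50.0, 5.0

def f(x):
    """Normal density with mean m and standard deviation sigma."""
    return math.exp(-(x - m) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

w_u = m + sigma * math.sqrt(math.pi / 2)
w_f = m - sigma * math.sqrt(math.pi / 2)

avg = (w_f + w_u) / 2      # (50): equals the median m of the distribution
gap = w_u - w_f            # (51): equals 1 / f(avg)
```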
1.17.4 The problem of the commons [9]

Consider the n farmers in a village. Each summer, all the farmers graze their
goats on the village green. Denote the number of goats the i-th farmer owns
by g_i and the total number of goats in the village by G = g_1 + g_2 + ... + g_n.
The cost of buying and caring for a goat is c, independent of how many goats a
farmer owns. The value to a farmer of grazing a goat on the green when a total
of G goats are grazing is v(G) per goat. Since a goat needs at least a certain
amount of grass in order to survive, there is a maximum number of goats that
can be grazed on the green, G_max: v(G) > 0 for G < G_max but v(G) = 0 for
G >= G_max. Also, since the first few goats have plenty of room to graze, adding
one more does little harm to those already grazing, but when so many goats are
grazing that they are all just barely surviving (that is, G is just below G_max),
then adding one more dramatically harms the rest. Formally: for G < G_max,
v'(G) < 0 and v''(G) < 0.

During the spring, the farmers simultaneously choose how many goats to
own. Assume goats are continuously divisible. A strategy for farmer i is the
choice of a number of goats to graze on the village green, g_i. Assuming that
the strategy space [0, ∞) covers all the choices that could possibly be of
interest to the farmer, [0, G_max) would also suffice. The payoff to farmer i from
grazing g_i goats, when the numbers of goats grazed by the other farmers are
(g_1, ..., g_{i-1}, g_{i+1}, ..., g_n), is

   g_i v(g_1 + ... + g_{i-1} + g_i + g_{i+1} + ... + g_n) - c g_i.   (52)
Thus, if (g*_1, ..., g*_n) is to be a Nash equilibrium then, for each i, g*_i must
maximize (52) given that the other farmers choose (g*_1, ..., g*_{i-1}, g*_{i+1}, ..., g*_n).
The first-order condition for this optimization problem is

   v(g_i + g*_1 + ... + g*_{i-1} + g*_{i+1} + ... + g*_n) +          (53)
   + g_i v'(g_i + g*_1 + ... + g*_{i-1} + g*_{i+1} + ... + g*_n) - c = 0.

Substituting g*_i into (53), summing over all n farmers' first-order conditions,
and then dividing by n, yields

   v(G*) + (1/n) G* v'(G*) - c = 0,              (54)

where G* = g*_1 + ... + g*_n.
The first-order condition (53) reflects the incentives faced by a farmer who is
already grazing g_i goats but is considering adding one more (or a tiny fraction of
one more). The value of the additional goat is v(g_i + g*_1 + ... + g*_{i-1} + g*_{i+1} + ... + g*_n)
and its cost is c. The harm to the farmer's existing goats is
v'(g_i + g*_1 + ... + g*_{i-1} + g*_{i+1} + ... + g*_n) per goat, or
g_i v'(g_i + g*_1 + ... + g*_{i-1} + g*_{i+1} + ... + g*_n) in total. The common
resource is over-utilized because each farmer considers only his or her own
incentives, not the effect of his or her actions on the other farmers; hence the
presence of G* v'(G*)/n in (54).
Remark 1.39. The social optimum, denoted by G**, solves the problem
max_{0 <= G < ∞} G v(G) - G c, the first-order condition for which is

   v(G**) + G** v'(G**) - c = 0.

We have G* > G**.
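The conclusion G* > G** can be illustrated with a concrete valuation function satisfying v' < 0 and v'' < 0, for example v(G) = sqrt(G_max - G); both first-order conditions are then solved by bisection. A sketch with parameter values of our choosing:

```python
# The commons: compare the Nash total G* solving v(G) + G v'(G)/n - c = 0
# with the social optimum G** solving v(G) + G v'(G) - c = 0.
# The choice v(G) = sqrt(Gmax - G) and all parameter values are illustrative.
import math

Gmax, c, n = 100.0, 1.0, 10

def v(G):
    return math.sqrt(Gmax - G)

def dv(G):
    return -1.0 / (2.0 * math.sqrt(Gmax - G))

def bisect(h, lo, hi, iters=200):
    """Find a root of h on [lo, hi] given a sign change."""
    for _ in range(iters):
        mid = (lo + hi) / 2
        if h(lo) * h(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

G_nash = bisect(lambda G: v(G) + G * dv(G) / n - c, 0.0, Gmax - 1e-9)
G_soc = bisect(lambda G: v(G) + G * dv(G) - c, 0.0, Gmax - 1e-9)
```

With these values G_nash is markedly larger than G_soc: the green is over-grazed in equilibrium.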
1.18 Solved exercises and problems

1. Let there be a zero-sum two-person game with the payoff matrix

   H_1 = A = | 9  3   1 |
             | 6  5   8 |
             | 3  4  10 |
             | 6  5   6 |

What is the payoff matrix of player 2? What strategies do players 1 and 2
have, respectively?

Solution. The payoff matrix of player 2 is

   H_2 = | -9  -6   -3  -6 |
         | -3  -5   -4  -5 |
         | -1  -8  -10  -6 |

because H_1 + H_2^t = O_{4,3}.

Player 1 has four strategies, because the matrix A has four rows. Player 2
has three strategies, because the matrix A has three columns.
2. Two players independently write down one of the numbers 1, 2 or 3. If
they have written the same number, then player 1 pays player 2 this number of
monetary units. In the contrary case, player 2 pays player 1 the number of
monetary units that player 1 has written. What is the payoff matrix of this
game?

Solution. We easily get that the payoff matrix of player 1 is

   A = | -1   1   1 |
       |  2  -2   2 |
       |  3   3  -3 |
3. Which of the games in the previous problems has a saddle point?

Solution. For the first game we have

   v_1 = max_{1<=i<=4} min_{1<=j<=3} a_ij = max(1, 5, 3, 5) = 5,
   v_2 = min_{1<=j<=3} max_{1<=i<=4} a_ij = min(9, 5, 10) = 5.

Since v_1 = v_2 = 5, it results that the first game has a saddle point. It is easy
to verify that (2,2) and (4,2) are both saddle points, because a_22 = a_42 = v = 5.
Thus i* = 2 and i** = 4 are optimal strategies of player 1, and j* = 2 is the
optimal strategy of player 2.

For the second game we have

   v_1 = max_{1<=i<=3} min_{1<=j<=3} a_ij = max(-1, -2, -3) = -1,
   v_2 = min_{1<=j<=3} max_{1<=i<=3} a_ij = min(3, 3, 2) = 2.

Thus the second game has no saddle point in the sense of pure strategies,
because v_1 = -1 < 2 = v_2.
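The computation of v_1 and v_2 is mechanical and can be sketched as follows (the two matrices of problems 1 and 2; names are ours):

```python
# Compute v1 = max_i min_j a_ij and v2 = min_j max_i a_ij for both games.
A1 = [[9, 3, 1], [6, 5, 8], [3, 4, 10], [6, 5, 6]]   # problem 1
A2 = [[-1, 1, 1], [2, -2, 2], [3, 3, -3]]            # problem 2

def lower_value(A):
    return max(min(row) for row in A)

def upper_value(A):
    return min(max(row[j] for row in A) for j in range(len(A[0])))

v1a, v2a = lower_value(A1), upper_value(A1)
v1b, v2b = lower_value(A2), upper_value(A2)
print(v1a, v2a, v1b, v2b)   # 5 5 -1 2
```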
4. What are the expected payoffs of player 1 in the previous games?

Solution. For the first game, let X = (x_1, x_2, x_3, x_4), Y = (y_1, y_2, y_3) be
the mixed strategies of players 1 and 2, respectively. Then the expected payoff
of player 1 is

   Σ_{i=1..4} Σ_{j=1..3} a_ij x_i y_j = 9 x_1 y_1 + 3 x_1 y_2 + x_1 y_3 +
   + 6 x_2 y_1 + 5 x_2 y_2 + ... + 6 x_4 y_3.

For the second game, let X = (x_1, x_2, x_3), Y = (y_1, y_2, y_3) be the mixed
strategies of players 1 and 2, respectively. Then the expected payoff of player 1
is

   Σ_{i=1..3} Σ_{j=1..3} a_ij x_i y_j = -x_1 y_1 + x_1 y_2 + x_1 y_3 +
   + 2 x_2 y_1 - 2 x_2 y_2 + 2 x_2 y_3 + 3 x_3 y_1 + 3 x_3 y_2 - 3 x_3 y_3.
5. Using iterated elimination of strictly dominated strategies, solve the
matrix game with the payoff matrix

   A = | 0  -1  -1 |
       | 1   0  -1 |
       | 1   1   0 |

Solution. In the matrix A the elements of the first row are smaller than the
corresponding elements of the third row. Consequently, player 1 will never use
his first strategy, and the first row can be eliminated. We obtain the payoff
matrix

   A' = | 1  0  -1 |
        | 1  1   0 |

Now, in the matrix A' each element of the first column is greater than the
corresponding element of the third column. Thus the first strategy of player 2
will never be included in any of his optimal mixed strategies; therefore, the first
column of the matrix A' can be deleted, to obtain

   A'' = | 0  -1 |
         | 1   0 |

Similarly, we successively obtain

   A''' = | 1  0 |  and  A^IV = | 0 |.

Thus the optimal (pure) strategies are X* = (0, 0, 1), Y* = (0, 0, 1) and the
value of the game is v = 0. We have, actually, a saddle point (i*, j*) = (3, 3),
because a_33 = v = 0.
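The elimination procedure can be written as a small routine that repeatedly drops strictly dominated rows (bad for the maximizing player 1) and strictly dominated columns (bad for the minimizing player 2). A sketch (names are ours; indices are 0-based):

```python
# Iterated elimination of strictly dominated strategies for a matrix game.
A = [[0, -1, -1], [1, 0, -1], [1, 1, 0]]

def eliminate(A):
    rows = list(range(len(A)))
    cols = list(range(len(A[0])))
    changed = True
    while changed:
        changed = False
        for r in rows[:]:
            # row r is strictly dominated if some row s beats it everywhere
            if any(all(A[r][j] < A[s][j] for j in cols) for s in rows if s != r):
                rows.remove(r)
                changed = True
        for c in cols[:]:
            # column c is strictly dominated if some column d is smaller everywhere
            if any(all(A[i][c] > A[i][d] for i in rows) for d in cols if d != c):
                cols.remove(c)
                changed = True
    return rows, cols

rows, cols = eliminate(A)
print(rows, cols)   # [2] [2]: the surviving pair is (3, 3) in 1-based indexing
```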
6. Find the optimal strategies of the matrix games with the payoff matrices

   a) A = | 2  0 |    b) A = | 1  2 |    c) A = |  1  -1 |
          | 1  3 |;          | 2  0 |;          | -1   1 |.

Solution. These games are 2 x 2 matrix games. Thus, writing

   A = | a  b |
       | c  d |,

we can use the mixed strategies X* = (p, 1 - p), Y* = (q, 1 - q), where

   p* = (d - c)/(a + d - b - c),  q* = (d - b)/(a + d - b - c),
   v = (ad - bc)/(a + d - b - c).

a) We obtain

   p* = (3 - 1)/(2 + 3 - 0 - 1) = 1/2,  q* = (3 - 0)/(2 + 3 - 0 - 1) = 3/4,
   v = (2·3 - 0·1)/(2 + 3 - 0 - 1) = 3/2,

hence X* = (1/2, 1/2), Y* = (3/4, 1/4), v = 3/2.

b) We have

   p* = (0 - 2)/(1 + 0 - 2 - 2) = 2/3,  q* = (0 - 2)/(1 + 0 - 2 - 2) = 2/3,
   v = (1·0 - 2·2)/(1 + 0 - 2 - 2) = 4/3,

hence X* = (2/3, 1/3), Y* = (2/3, 1/3), v = 4/3.

c) We obtain

   p* = (1 - (-1))/(1 + 1 - (-1) - (-1)) = 1/2,  q* = 1/2,
   v = (1·1 - (-1)·(-1))/4 = 0,

hence X* = Y* = (1/2, 1/2), v = 0.
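The closed-form 2 x 2 solution can be wrapped in a small helper and checked with exact rational arithmetic on all three games. A sketch (names are ours):

```python
# Closed-form mixed solution of a completely mixed 2x2 game [[a, b], [c, d]]:
# p* = (d - c)/(a + d - b - c), q* = (d - b)/(a + d - b - c),
# v  = (a d - b c)/(a + d - b - c).
from fractions import Fraction as Fr

def solve_2x2(a, b, c, d):
    den = a + d - b - c
    return Fr(d - c, den), Fr(d - b, den), Fr(a * d - b * c, den)

p, q, v = solve_2x2(2, 0, 1, 3)       # problem 6 a)
print(p, q, v)   # 1/2 3/4 3/2
```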
7. Solve problem 6 with the procedure described in Remark 1.24 (the
Williams method).

Solution. Let X = (x_1, x_2), Y = (y_1, y_2) be the mixed strategies of players
1 and 2, respectively. Here x_1 + x_2 = 1, x_1/x_2 = |c - d|/|a - b|, and
y_1 + y_2 = 1, y_1/y_2 = |d - b|/|c - a|.

a) We obtain x_1 + x_2 = 1, x_1/x_2 = |1 - 3|/|2 - 0| = 1, hence x_1 = x_2, so
x_1 = x_2 = 1/2; respectively y_1 + y_2 = 1, y_1/y_2 = |3 - 0|/|1 - 2| = 3, hence
y_1 = 3 y_2, so y_1 = 3/4, y_2 = 1/4. Thus X* = (1/2, 1/2), Y* = (3/4, 1/4) and

   v* = (1/2, 1/2) | 2  0 | | 3/4 |  =  (3/2, 3/2) | 3/4 |  =  9/8 + 3/8 = 12/8 = 3/2.
                   | 1  3 | | 1/4 |                | 1/4 |

b) We have x_1 + x_2 = 1, x_1/x_2 = |2 - 0|/|1 - 2| = 2, hence x_1 = 2 x_2, so
x_1 = 2/3, x_2 = 1/3; respectively y_1 + y_2 = 1, y_1/y_2 = |0 - 2|/|2 - 1| = 2, so
y_1 = 2/3, y_2 = 1/3. Thus X* = Y* = (2/3, 1/3) and

   v* = (2/3, 1/3) | 1  2 | | 2/3 |  =  (4/3, 4/3) | 2/3 |  =  8/9 + 4/9 = 12/9 = 4/3.
                   | 2  0 | | 1/3 |                | 1/3 |

c) We have x_1 + x_2 = 1, x_1/x_2 = |-1 - 1|/|1 - (-1)| = 1, hence x_1 = x_2, so
x_1 = x_2 = 1/2; respectively y_1 + y_2 = 1, y_1/y_2 = |1 - (-1)|/|-1 - 1| = 1, hence
y_1 = y_2 = 1/2. Thus X* = Y* = (1/2, 1/2) and

   v* = (1/2, 1/2) |  1  -1 | | 1/2 |  =  (0, 0) | 1/2 |  =  0.
                   | -1   1 | | 1/2 |            | 1/2 |
8. Solve problem 6 with the graphical method described for 2 x n and
m x 2 matrix games.

Solution. Let X = (x, 1 - x), Y = (y, 1 - y) be, respectively, the mixed
strategies of players 1 and 2. The lines ac, bd, and ab, cd, respectively, are
represented in an illustrative figure.

a) The payoff matrix is

   A = | 2  0 |
       | 1  3 |,

thus we have the lines shown in:

Figure 1.7: The problem 8. a)

The intersection points have, respectively, the abscissae x = 1/2, y = 3/4, hence
X* = (1/2, 1/2), Y* = (3/4, 1/4), v = 3/2.

b) The payoff matrix is

   A = | 1  2 |
       | 2  0 |,

thus we have the lines shown in:

Figure 1.8: The problem 8. b)

The intersection points have, respectively, the abscissae x = 2/3, y = 2/3, hence
X* = Y* = (2/3, 1/3), v = 4/3.

c) The payoff matrix is

   A = |  1  -1 |
       | -1   1 |,

thus we have the lines shown in:

Figure 1.9: The problem 8. c)

The intersection points have, respectively, the abscissae x = 1/2, y = 1/2, hence
X* = Y* = (1/2, 1/2), v = 0.
9. Using the graphical method, solve the matrix games with the payoff
matrices:

   a) A = | 2  9  6  3 |    b) A = | 5  6 |
          | 8  3  7  5 |;          | 9  4 |
                                   | 1  8 |.

Solution. a) Let X = (x, 1 - x) be the mixed strategy of player 1. The four
lines corresponding to the columns of A are represented in the following figure.

Figure 1.10: The problem 9.a)

The abscissa x = x* of the point A', and the value A'B' = v, can be evaluated
by solving the system of the two linear equations corresponding to strategies
two and four of player 2. These linear equations correspond to the lines which
pass through the points (1,3), (0,5) and respectively (1,9), (0,3) (see the heavy
black line in the figure), that is,

   2x + y = 5,
   6x - y = -3.

The solution is x = 1/4, y = 9/2. Hence X = (1/4, 3/4), v = 9/2.

For the mixed strategy of player 2 we have Y = (q_1, q_2, q_3, q_4) and the
equality

   (1/4, 3/4) | 2  9  6  3 | Y^t = 9/2.
              | 8  3  7  5 |

We obtain

   (26/4, 18/4, 27/4, 18/4) Y^t = 9/2,

hence

   (26/4) q_1 + (18/4) q_2 + (27/4) q_3 + (18/4) q_4 = 9/2.

Thus we have

   26 q_1 + 18 q_2 + 27 q_3 + 18 q_4 = 18,
   q_1 + q_2 + q_3 + q_4 = 1,
   q_j >= 0, j = 1, ..., 4,

with the solution q_1 = 0, q_3 = 0, q_2 = q, q_4 = 1 - q.

The optimal strategies of player 2 are Y = (0, q, 0, 1 - q), where q ∈ [0, 1].
b) Let Y = (y, 1 - y) be the mixed strategy of player 2. The three lines
corresponding to the rows of A are represented in the following figure.

Figure 1.11: The problem 9. b)

The linear equations correspond to the lines which pass through the points
(0,4), (1,9) and (0,8), (1,1) (see the heavy black line in the figure), that is,

   -5y + z = 4,
   7y + z = 8.

The solution is y = 1/3, z = 17/3. Hence Y = (1/3, 2/3), v = 17/3.

For the mixed strategy of player 1 we have X = (p_1, p_2, p_3) and the equality

   (p_1, p_2, p_3) | 5  6 | | 1/3 |  =  17/3.
                   | 9  4 | | 2/3 |
                   | 1  8 |

We obtain

   (p_1, p_2, p_3) | 17/3 |  =  17/3,
                   | 17/3 |
                   | 17/3 |

hence

   (17/3) p_1 + (17/3) p_2 + (17/3) p_3 = 17/3.

Thus we have

   17 p_1 + 17 p_2 + 17 p_3 = 17,
   p_1 + p_2 + p_3 = 1,
   p_i >= 0, i = 1, 2, 3.

For p_1 = 0 we obtain p_2 = 7/12, p_3 = 5/12 from the equality X A_{:1} = X A_{:2},
namely 5 p_1 + 9 p_2 + p_3 = 6 p_1 + 4 p_2 + 8 p_3 = 17/3. The optimal strategy of
player 1 is X = (0, 7/12, 5/12).
10. Solve the matrix game with the payoff matrix

   A = | 6   0  3 |
       | 8  -2  3 |
       | 4   6  5 |

Solution. We use the method described for the 3 x 3 matrix game. Thus we
have, with the mixed strategy X = (x_1, x_2, x_3),

   X A_1 = 6 x_1 + 8 x_2 + 4 x_3,  X A_2 = -2 x_2 + 6 x_3,
   X A_3 = 3 x_1 + 3 x_2 + 5 x_3.

The equation of the line X A_1 = X A_2 is 6 x_1 + 8 x_2 + 4 x_3 = -2 x_2 + 6 x_3, or
6 x_1 + 10 x_2 - 2 x_3 = 0. But x_1 + x_2 + x_3 = 1, so we get 2 x_1 + 6 x_3 = 5.

The equation of the line X A_2 = X A_3 is -2 x_2 + 6 x_3 = 3 x_1 + 3 x_2 + 5 x_3, or
3 x_1 + 5 x_2 - x_3 = 0, that is, 2 x_1 + 6 x_3 = 5.

The equation of the line X A_3 = X A_1 is 3 x_1 + 3 x_2 + 5 x_3 = 6 x_1 + 8 x_2 + 4 x_3,
or 3 x_1 + 5 x_2 - x_3 = 0, that is, 2 x_1 + 6 x_3 = 5.

We obtain only the equation 2 x_1 + 6 x_3 = 5, hence the solutions are x_1 = p,
x_2 = (1 - 4p)/6, x_3 = (5 - 2p)/6, p ∈ [0, 1/4]. Thus X = (p, (1 - 4p)/6, (5 - 2p)/6),
p ∈ [0, 1/4]. The values X A_1, X A_2, X A_3 are all equal to (28 - 4p)/6, and this
is maximal, equal to 14/3, when p = 0. Hence X* = (0, 1/6, 5/6) is the optimal
strategy of player 1.
For player 2, let Y = (y1, y2, y3) be a mixed strategy. We have

  A_1 Y^t = 6y1 + 3y3,  A_2 Y^t = 8y1 - 2y2 + 3y3,  A_3 Y^t = 4y1 + 6y2 + 5y3.

The equation of the line A_1 Y^t = A_2 Y^t is 6y1 + 3y3 = 8y1 - 2y2 + 3y3 or 2y1 - 2y2 = 0, hence y1 = y2.
The equation of the line A_2 Y^t = A_3 Y^t is 8y1 - 2y2 + 3y3 = 4y1 + 6y2 + 5y3, or 4y1 - 8y2 - 2y3 = 0, hence 2y1 - 4y2 = y3. Using y1 + y2 + y3 = 1, this equation is 3y1 - 3y2 = 1, and it means that the lines y1 = y2 and 3y1 - 3y2 = 1 are parallel.
The equation of the line A_3 Y^t = A_1 Y^t is 4y1 + 6y2 + 5y3 = 6y1 + 3y3, or 2y1 - 6y2 - 2y3 = 0, hence y1 - 3y2 - y3 = 0. Using y1 + y2 + y3 = 1, this equation is 2y1 - 2y2 = 1, and thus this line is also parallel to y1 = y2.
The line 3y1 - 3y2 = 1 is essential, because the intersection of the two regions is this line, and the second region is empty. So we must consider the values A_i Y^t in the cases Y1 = (2/3, 1/3, 0) and Y2 = (1/3, 0, 2/3). We get that
Y*1 = (2/3, 1/3, 0), Y*2 = (1/3, 0, 2/3)
are the optimal strategies for the player 2. Hence
Y* = λY*1 + (1 - λ)Y*2, λ ∈ [0, 1], is the solution for player 2 and also, v = 14/3.
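Solutions of this kind are easy to verify mechanically: X* must guarantee at least v against every pure strategy (column) of player 2, and Y* must hold player 1 to at most v on every row. A minimal sketch in Python, using exact rational arithmetic, for the game of Problem 10:

```python
from fractions import Fraction as F

# Payoff matrix of Problem 10, as reconstructed above.
A = [[F(6), F(0), F(3)],
     [F(8), F(-2), F(3)],
     [F(4), F(6), F(5)]]

X = [F(0), F(1, 6), F(5, 6)]      # claimed optimal strategy of player 1
Y1 = [F(2, 3), F(1, 3), F(0)]     # claimed optimal strategy of player 2
v = F(14, 3)                      # claimed value of the game

# X guarantees at least v: every column payoff X·A_:j must be >= v.
col_payoffs = [sum(X[i] * A[i][j] for i in range(3)) for j in range(3)]
# Y1 holds player 1 to at most v: every row payoff A_i·Y1 must be <= v.
row_payoffs = [sum(A[i][j] * Y1[j] for j in range(3)) for i in range(3)]

assert min(col_payoffs) >= v and max(row_payoffs) <= v
```

Both extremes here equal 14/3, which confirms that the pair (X*, Y*1) is a saddle point in mixed strategies.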
11. Using the linear programming problem, solve the following matrix games with the payoff matrices:

  a) A = [ 0 2 ]      b) A = [ 6  0 3 5 ]
         [ 5 1 ],            [ 8 -2 3 9 ]
                             [ 4  6 5 4 ].

Solution. We consider the linear programming problem (40)

  [max] g = y'1 + y'2 + ... + y'n,
  Σ_{j=1}^{n} a_ij y'_j ≤ 1, i = 1,m,
  y'_j ≥ 0, j = 1,n.

a) We have

  [max] g = y'1 + y'2,
  2y'2 ≤ 1,
  5y'1 + y'2 ≤ 1,
  y'1, y'2 ≥ 0
and so the simplex matrix is

  [ 0 2 1 0 1 ]      [ 0  2    1   0    1    ]      [ 0 1  1/2    0    1/2  ]
  [ 5 1 0 1 1 ]  →   [ 1  1/5  0   1/5  1/5  ]  →   [ 1 0 -1/10   1/5  1/10 ]
  [ 1 1 0 0 0 ]      [ 0  4/5  0  -1/5 -1/5  ]      [ 0 0 -2/5   -1/5 -3/5  ].

Thus g_max = 3/5 = 1/v ⇒ v = 5/3, y'1 = 1/10 ⇒ y1 = 1/6, y'2 = 1/2 ⇒ y2 = 5/6, y'3 = 0 ⇒ y3 = 0, y'4 = 0 ⇒ y4 = 0, x'1 = 2/5 ⇒ x1 = 2/3, x'2 = 1/5 ⇒ x2 = 1/3. We have X* = (2/3, 1/3), Y* = (1/6, 5/6) and v = 5/3.
b) The simplex matrix in this case is

  [ 6  0 3 5 1 0 0 1 ]      [ 0  3/2 3/4 -7/4 1 -3/4 0  1/4 ]
  [ 8 -2 3 9 0 1 0 1 ]  →   [ 1 -1/4 3/8  9/8 0  1/8 0  1/8 ]
  [ 4  6 5 4 0 0 1 1 ]      [ 0  7   7/2 -1/2 0 -1/2 1  1/2 ]
  [ 1  1 1 1 0 0 0 0 ]      [ 0  5/4 5/8 -1/8 0 -1/8 0 -1/8 ]

  →  [ 0 0 0   -23/14 1 -9/14 -3/14  1/7  ]
     [ 1 0 1/2  31/28 0  3/28  1/28  1/7  ]
     [ 0 1 1/2 -1/14  0 -1/14  1/7   1/14 ]
     [ 0 0 0   -1/28  0 -1/28 -5/28 -3/14 ]

We have g_max = 3/14 = 1/v, hence v = 14/3, y'1 = 1/7 ⇒ y1 = 2/3, y'2 = 1/14 ⇒ y2 = 1/3, y'3 = 0 ⇒ y3 = 0, y'4 = 0 ⇒ y4 = 0, x'1 = 0 ⇒ x1 = 0, x'2 = 1/28 ⇒ x2 = 1/6, x'3 = 5/28 ⇒ x3 = 5/6.
So, an optimal solution is X* = (0, 1/6, 5/6), Y*1 = (2/3, 1/3, 0, 0), v = 14/3.
There exists another optimal solution because we have the matrix

  [ 0  0 0 -23/14 1 -9/14 -3/14  1/7  ]
  [ 1 -1 0  33/28 0  5/28 -3/28  1/14 ]
  [ 0  2 1  -1/7  0 -1/7   2/7   1/7  ]
  [ 0  0 0 -1/28  0 -1/28 -5/28 -3/14 ].

Thus y'1 = 1/14 ⇒ y1 = 1/3, y'2 = 0 ⇒ y2 = 0, y'3 = 1/7 ⇒ y3 = 2/3, y'4 = 0 ⇒ y4 = 0, and so we have Y*2 = (1/3, 0, 2/3, 0).
The optimal solution of the matrix game is X* = (0, 1/6, 5/6), Y* = λY*1 + (1 - λ)Y*2, λ ∈ [0, 1], where Y*1 = (2/3, 1/3, 0, 0), Y*2 = (1/3, 0, 2/3, 0), and v = 14/3.
12. The payoff matrix in general representation. Three firms use, for technological purposes, water from the same source. Each firm has two strategies: the firm builds a water-purification station (strategy 1) or it uses unpurified water (strategy 2). We suppose that if at most one firm uses unpurified water, then the existing water is good enough for it and this firm has no expenses. If at least two firms use unpurified water, then every firm that uses the water loses 3 monetary units (u.m.). Using the purification station costs 1 u.m. for the firm that runs it.
Write the payoff matrix of this game.
Solution. The payoff matrix is given in Table 1. Let us consider, for example, the situation (1,2,2). Firms 2 and 3 use unpurified water. So, every firm has an expense of 3 u.m. (negative payoff). For firm 1, which runs the purification station, there is an additional expense equal to 1 u.m.
Table 1
  Situation    Payoff matrix
  s1 s2 s3     H1  H2  H3
1 1 1 -1 -1 -1
1 1 2 -1 -1 0
1 2 1 -1 0 -1
1 2 2 -4 -3 -3
2 1 1 0 -1 -1
2 1 2 -3 -4 -3
2 2 1 -3 -3 -4
2 2 2 -3 -3 -3
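The rules of the problem can also be encoded directly; the helper below is a hypothetical reconstruction of the cost rules stated above, and it reproduces Table 1:

```python
from itertools import product

# Hypothetical helper for Problem 12:
# strategy 1 = build a purification station (costs 1 u.m.),
# strategy 2 = use unpurified water; if at least two firms use
# unpurified water, every firm additionally loses 3 u.m.
def payoffs(situation):
    polluters = sum(1 for s in situation if s == 2)
    result = []
    for s in situation:
        h = 0
        if s == 1:
            h -= 1          # cost of running the purification station
        if polluters >= 2:
            h -= 3          # everybody uses the spoiled water
        result.append(h)
    return tuple(result)

table = {s: payoffs(s) for s in product((1, 2), repeat=3)}
assert table[(1, 2, 2)] == (-4, -3, -3)   # the situation discussed above
assert table[(2, 1, 1)] == (0, -1, -1)
```

Every row of Table 1 is recovered in the same way.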
13. The payoff matrix in bi-dimensional representation. Two factories produce the same type of product, A and B respectively, in two assortments: A1 and A2, respectively B1 and B2. The products are interchangeable. A preliminary market test gives the preferences, in percentages, in the following representation:

  A\B   B1  B2
  A1    40  90
  A2    70  20

The percentages given in the above table refer to the first factory (first product); the percentages for the second factory (second product) are the complementary percentages (with respect to the total percentage 100%). Write the payoff matrix.
Solution.
We have the general representation:
  Situation    Payoff matrix
  s1 s2        H1   H2
1 1 40 60
1 2 90 10
2 1 70 30
2 2 20 80
which is equivalent to the following bi-dimensional representation:

  H1 = [ 40 90 ]      H2 = [ 60 30 ]
       [ 70 20 ],          [ 10 80 ].
14. Solving of the bi-matrix game. We consider Problem 13 with the following modification of the purchasing conditions: we remark that 50% of those buyers that buy the product A2, respectively B2, also buy the product B2, respectively A2. Expressing the sales in absolute value, considering 1000 units sold under the conditions of the first version of the problem, we ask:
1. to express the payoff matrix;
2. to solve the non-cooperative bi-matrix game.
Solution. We have the table:
  Situation    Payoff matrix
  s1 s2        H1   H2
1 1 400 600
1 2 900 100
2 1 700 300
2 2 600 900
where in the situation (2,2) there are

  600 = 200 + (1/2)·800,   900 = 800 + (1/2)·200.

The bi-dimensional writing is:

  H1 = [ 400 900 ]      H2 = [ 600 300 ]
       [ 700 600 ],          [ 100 900 ].
2. The corresponding simplex matrices are:

  S_A:
  i\j   1    2    3   4  5
  1     1    1    0   0  1   =
  2    400  900  -1   1  0   ≥
  3    700  600  -1   1  0   ≥
  4     0    0    0   0  0   MIN

  S_B:
  i\j   1    2    3   4  5
  1     1    1    0   0  1   =
  2    600  300  -1   1  0   ≥
  3    100  900  -1   1  0   ≥
  4     0    0    0   0  0   MIN
By solving the linear programming problems we obtain the following solutions:

  P    p1    p2    p3      p4  p5   p6
  P1   0,55  0,45  463,64  0   0    0
  P2   1     0     600     0   0    500
  P3   0     1     900     0   600  0

  Q    q1    q2    q3     q4  q5   q6
  Q1   0,5   0,5   650    0   0    0
  Q2   1     0     700    0   300  0
  Q3   0     1     900    0   0    300

The value of the game for the first factory is F_A = 650 and for the second it is F_B = 463,64. These pairs of solutions (P, Q) are equilibrium points which verify the condition: p_{4+i} ≠ 0 ⇒ q_i = 0 and q_{4+i} ≠ 0 ⇒ p_i = 0. We observe that the single equilibrium point is (P1, Q1): P1 = [0,55; 0,45], Q1 = [0,5; 0,5].
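The equilibrium values can be recomputed from the indifference conditions of the bi-matrix game; the following sketch (exact fractions; the variable names are ours) recovers P1, Q1 and the two values F_A and F_B:

```python
from fractions import Fraction as F

# Payoff matrices of Problem 14: H1 rows = factory A's strategies,
# H2 rows = factory B's strategies (as written above).
H1 = [[400, 900], [700, 600]]
H2 = [[600, 300], [100, 900]]

# B is indifferent between its two strategies when A plays P = (p, 1-p):
#   600p + 300(1-p) = 100p + 900(1-p)  =>  p = 6/11 ≈ 0,55.
p = F(6, 11)
assert 600*p + 300*(1-p) == 100*p + 900*(1-p)
FB = 600*p + 300*(1-p)            # 5100/11 ≈ 463,64

# A is indifferent between its two rows when B plays Q = (q, 1-q):
#   400q + 900(1-q) = 700q + 600(1-q)  =>  q = 1/2.
q = F(1, 2)
assert 400*q + 900*(1-q) == 700*q + 600*(1-q)
FA = 400*q + 900*(1-q)            # = 650
```

The tabulated entries 0,55; 0,45 and 463,64 are the rounded values of 6/11, 5/11 and 5100/11.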
15. Let us consider the game of Problem 13, which is an antagonistic game with constant sum 100%.
1. Solve the game.
2. Write the structure matrices.
Solution. 1. The simplex table corresponding to this game is:

  1   1   1   0   0   1   =
  2   40  90  -1  1   0   ≥
  3   70  20  -1  1   0   ≥
  4   0   0   1  -1   0   MIN

By solving the linear programming problem we obtain: P = [0,5; 0,5], Q = [0,7; 0,3]. The value of the game is 55%.
2. The structure matrices of the game are:

  Ā:
  A\B    B1     B2
  A1     14     13,5    27,5
  A2     24,5   3       27,5
         38,5   16,5    55

  B̄:
  B\A    A1     A2
  B1     21     10,5    31,5
  B2     1,5    12      13,5
         22,5   22,5    45

So, we can see that the synthetic situation, expressed in percentages, regarding the structure of the types of production is the following:

  A1: 27,5%,  A2: 27,5%,  B1: 31,5%,  B2: 13,5%,

if the production of both factories is 100%. Because of the antagonistic market competition the second factory realizes smaller sales than the first, namely 45% of all sales.
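Because the game of Problem 13 is constant-sum, part 1 can also be checked with the usual 2 x 2 equalization argument; a short sketch:

```python
from fractions import Fraction as F

# First factory's matrix from Problem 13/15 (constant-sum game, sum 100%).
A = [[F(40), F(90)], [F(70), F(20)]]

# For a 2x2 game without a saddle point the mixed solution equalizes
# the two column payoffs (for P) and the two row payoffs (for Q):
#   40x + 70(1-x) = 90x + 20(1-x)  =>  x = 1/2
#   40y + 90(1-y) = 70y + 20(1-y)  =>  y = 7/10
x = F(1, 2)
y = F(7, 10)
assert A[0][0]*x + A[1][0]*(1-x) == A[0][1]*x + A[1][1]*(1-x)
assert A[0][0]*y + A[0][1]*(1-y) == A[1][0]*y + A[1][1]*(1-y)
v = A[0][0]*y + A[0][1]*(1-y)    # value of the game, 55%
```

This reproduces P = [0,5; 0,5], Q = [0,7; 0,3] and v = 55%.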
16. Relation between information and income. Let us consider Problem 15, supposing that the second factory, at the moment of choosing its strategy, knows the strategy applied by the first factory.
We ask:
1. Write the matrix of the game.
2. Solve the game.
3. Compare the results obtained here with those obtained by solving Problem 15 and interpret the difference between these two solutions.
Solution. 1. Because the second factory knows the strategy applied by the first factory, it can apply another two strategies, obtained by combining the strategies B1 and B2:
strategy B1 responds to the strategy A1;
strategy B2 responds to the strategy A2;
strategy B2 responds to the strategy A1;
strategy B1 responds to the strategy A2.
We denote by Q' = [q'1, q'2, q'3, q'4] the strategy of the second factory over the four combined strategies which respond to the two strategies P = [p1, p2] of the first factory.
We denote by V' the value of the new game. The matrix of the game is given by the following table:

  A\B   B1B1  B1B2  B2B1  B2B2
  A1    40    40    90    90
  A2    70    20    70    20

2. By elimination of the dominated column 3, the corresponding simplex table is:

  i\j   1   2   3   4   5   6
  1     1   1   1   0   0   1   =
  2     40  40  90  -1  1   0   ≥
  3     70  20  20  -1  1   0   ≥
  4     0   0   0   1  -1   0   MIN

Solving this linear programming problem we obtain: P = [1; 0], Q' = λ1[0; 1; 0; 0] + λ2[0,4; 0,6; 0; 0], λ1, λ2 ≥ 0, λ1 + λ2 = 1, V' = 40%.
3. To compare the results with those obtained in Problem 15, we write the structure matrices of the game (rows and columns for which the strategy is equal to zero are left empty).

  Ā:
  A\B     B1:B1    B1:B2
  A1      16λ2     40λ1 + 24λ2     40
          16λ2     40λ1 + 24λ2

  B̄:
  B\A            A1
  B1:B1          24λ2             24λ2
  B1:B2          60λ1 + 36λ2      60λ1 + 36λ2
                 60               60

We remark a decrease equal to V' - V = -15% for the first factory and an increase equal to 15% for the second factory, as a result of the fact that the latter owns a piece of information important to it.
How is the total production 100% of both factories divided?
The first factory produces only the assortment A1, as 40%, and the second factory produces only the assortment B1, as 24λ2% (inside the strategy B1:B1) and (60λ1 + 36λ2)% (inside the strategy B1:B2), namely a total of 60%.
1.19 Unsolved exercises and problems
1. Let a zero-sum two-person game be given with the payoff matrix

  H1 = A = [ 3  6 9 6 ]
           [ 10 6 1 3 ]
           [ 4  5 3 5 ].

Which is the payoff matrix of player 2?
What strategies have the player 1 and the player 2?
2. (The Morra game) Two players simultaneously show one or two fingers of the left hand and at the same time each calls out the number of fingers he believes the opponent shows. If a player guesses the number of fingers shown by the opponent, he receives as many monetary units as the total number of fingers shown by the two players together. If both players guess, or neither guesses, then neither receives anything. Which is the payoff matrix of this game?
3. Which of the games in the previous problems has a saddle point?
4. Which are the expected payoffs of player 1 in the previous games?
5. Using the iterated elimination of strictly dominated strategies solve the matrix game with the payoff matrix

  A = [ 1 -1 -2 0 ]
      [ 3  0  2 4 ]
      [ 4  5  1 5 ]
      [ 2  3 -1 3 ].

6. Find the optimal strategies of the following matrix games with the payoff matrices:

  a) A = [ 2 5 ]   b) A = [ 6  4 ]   c) A = [ 2 3 ]
         [ 3 2 ],         [ -1 5 ],         [ 4 1 ].

7. Solve the problem 6 with the Williams method.
8. Solve the problem 6 with the graphical method for 2 x n and m x 2 matrix games.
9. Using the graphical method, solve the following matrix games with the payoff matrices:

  a) A = [ 2 1 4 ]   b) A = [ 2 4 ]
         [ 3 5 1 ],         [ 3 1 ]
                            [ 1 6 ]
                            [ 5 0 ].
10. Solve the matrix game with the payoff matrix

  A = [ 1 -1 -2 ]
      [ -1 1  1 ]
      [ 2 -1  0 ].

11. Using the linear programming problem solve the following matrix games with the payoff matrices:

  a) A = [ 2  3  0 ]   b) A = [ 7  5 6 ]
         [ -1 3 -3 ]          [ 9  9 4 ]
         [ 0 -1  2 ],         [ 14 1 8 ].
12. A factory produces three types of products A: A1, A2, A3. To produce one unit of product, three types of prime materials B are used: B1 - metal, B2 - wooden material, B3 - plastic material. The expenses with prime materials for a unit of production are given in the table:

  B\A   A1  A2  A3
  B1    4   4   6
  B2    3   5   3
  B3    5   2   4

Write the matrix of the game in general representation.
13. Two branches have to make investments in four objectives. The strategy i consists in financing the objective i, i = 1,4. Taking into account all considerations, the payoffs of the first branch are given by the matrix:

  A = [ 0  1 -1  2 ]
      [ -1 0  3  2 ]
      [ 0  1  2 -1 ]
      [ 2  0  0  0 ].

We suppose that the two branches' payoffs are opposed: what the first wins the second loses, and what the first loses the second wins.
Write the matrix of the game in general representation.
14. Let us consider two persons playing a bi-matrix non-cooperative game, given by the matrices

  A = [ 1 7 ]    B = [ 1 3 ]
      [ 3 4 ],       [ 7 3 ].

Solve the game.
15. In order to get an economical and social development of a town, the problem appears whether to build or not two economical objectives. There are two strategies for the corresponding ministry and for the leaders of the town: 1 - the building of the first objective; 2 - the building of the second objective. The people that represent the town may have two strategies: 1 - they agree with the proposal of the Ministry; 2 - they don't agree with it. The strategies are applied independently. The payoffs are given by the matrices:

  A = [ -10  2 ]    B = [ 5 -2 ]
      [ 1   -1 ],       [ -1 1 ].

Solve the non-cooperative game.
16. Let us consider Problem 12, and we ask:
16.1. What are the percentages p1 : p2 : p3 in which we have to make the supply in advance (supply before knowing the volume of the contracts for the next period of time) with prime materials, in order that the stock will surely be used and a maximum value of the production is ensured?
16.2. Find a production plan corresponding to a total production of 4 millions u.m.
17. Solve the antagonistic game given in Problem 13.
Answers
1. H2 =

  [ -3 -10 -4 ]
  [ -6 -6  -5 ]
  [ -9 -1  -3 ]
  [ -6 -3  -5 ];

three strategies for player 1 and four strategies for player 2.
2. H1 = A =

  [ 0  2 -3  0 ]
  [ -2 0  0  3 ]
  [ 3  0  0 -4 ]
  [ 0 -3  4  0 ]

The rows are: F11, F12, F21, F22, where F11 means 1 finger shown, 1 yelled; F12 - 1 finger, 2 yelled; F21 - 2 fingers, 1 yelled; F22 - 2 fingers, 2 yelled.
3. v1 = max min a_ij = 3, v2 = min max a_ij = 6; there isn't a saddle point, in pure strategies, for the first game; v1 = -2, v2 = 2, there isn't a saddle point, in pure strategies, for the second game.
4. Σ_{i=1}^{3} Σ_{j=1}^{4} a_ij x_i y_j = 3x1y1 + 6x1y2 + ... + 5x3y4;
   Σ_{i=1}^{4} Σ_{j=1}^{4} a_ij x_i y_j = 2x1y2 - 3x1y3 + ... + 4x4y3.
5. X* = (0, 2/3, 1/3, 0), Y* = (0, 1/6, 5/6, 0), v = 5/3.
6. a) X* = (1/4, 3/4), Y* = (3/4, 1/4), v = 11/4
   b) X* = (3/4, 1/4), Y* = (1/8, 7/8), v = 17/4
   c) X* = (3/4, 1/4), Y* = (1/2, 1/2), v = 5/2.
9. a) X* = (1/2, 1/2), Y* = (3/4, 0, 1/4), v = 5/2.
   b) X* = (0, 1/3, 0, 2/3), Y* = (1/3, 2/3), v = 1/3.
10. X* = (0, 3/5, 2/5), Y* = (2/5, 3/5, 0), v = 1/5.
11. a) X* = (1/2, 0, 1/2), Y*1 = (1/2, 0, 1/2), Y*2 = (0, 1/3, 2/3), v = 1
    b) X* = (0, 7/12, 5/12), Y* = (0, 1/3, 2/3), v = 17/3.
14. First solution: (P, Q), P = [1; 0], Q = [0; 1], F_A = F_B = 7.
Second solution: (P, Q): P = [0; 1], Q = λ1 Q1 + λ2 Q2, Q1 = [0,6; 0,4], Q2 = [1; 0], λ1 ≥ 0, λ2 ≥ 0, λ1 + λ2 = 1, F_A = 3,4λ1 + 3λ2, F_B = 3.
15. (P, Q), P = [0,33; 0,67], Q = [0,21; 0,79], F_A = -0,57, F_B = 0,33.
16. 16.1. 1 : 0 : 0
16.2. A1: a = 2680000λ1 + 2000000λ2 u.m.
      A2: b = 1320000λ1 + 2000000λ2 u.m.
      λ1, λ2 ≥ 0, λ1 + λ2 = 1.
17. P = [0,3; 0,11; 0,26; 0,33], Q = [0,28; 0,38; 0,17; 0,17].
The first factory wins 0,56 u.m.
1.20 References
1. Blaga, P., Mureșan, A.S., Lupaș, Al., Applied mathematics, Vol. II, Ed. Promedia Plus, Cluj-Napoca, 1999 (In Romanian)
2. Ciucu, G., Craiu, V., Ștefănescu, A., Mathematical statistics and operational research, Ed. Did. Ped., București, 1978 (In Romanian)
3. Craiu, I., Mihoc, Gh., Craiu, V., Mathematics for economists, Ed. Științifică, București, 1971 (In Romanian)
4. Dani, E., Numerical methods in games theory, Ed. Dacia, Cluj-Napoca, 1983 (In Romanian)
5. Dani, E., Mureșan, A.S., Applied mathematics in economy, Lito. Univ. Babeș-Bolyai, Cluj-Napoca, 1981 (In Romanian)
6. Faber, H., An analysis of final-offer arbitration, J. of Conflict Resolution, 35, 1980, 683-705
7. Gibbons, R., Games theory for applied economists, Princeton University Press, New Jersey, 1992
8. Guiașu, S., Malița, M., Games with three players, Ed. Științifică, București, 1973 (In Romanian)
9. Hardin, G., The tragedy of the commons, Science, 162, 1968, 1243-1248
10. Mureșan, A.S., Operational research, Lito. Univ. Babeș-Bolyai, Cluj-Napoca, 1996 (In Romanian)
11. Mureșan, A.S., Applied mathematics in finance, banks and exchanges, Ed. Risoprint, Cluj-Napoca, 2000 (In Romanian)
12. Mureșan, A.S., Blaga, P., Applied mathematics in economy, Vol. II, Ed. Transilvania Press, Cluj-Napoca, 1996 (In Romanian)
13. Mureșan, A.S., Rahman, M., Applied mathematics in finance, banks and exchanges, Vol. I, Ed. Risoprint, Cluj-Napoca, 2001 (In Romanian)
14. Mureșan, A.S., Rahman, M., Applied mathematics in finance, banks and exchanges, Vol. II, Ed. Risoprint, Cluj-Napoca, 2002 (In Romanian)
15. von Neumann, J., Morgenstern, O., Theory of games and economic behavior (3rd edn), Princeton University Press, New Jersey, 1953
16. Onicescu, O., Strategy of games with applications to linear programming, Ed. Academiei, București, 1971 (In Romanian)
17. Owen, G., Game theory (2nd edn), Academic Press, New York, 1982
18. Schatteles, T., Strategic games and economic analysis, Ed. Științifică, București, 1969 (In Romanian)
19. Tirole, J., The theory of industrial organization, MIT Press, 1988
20. Wang, J., An inductive proof of von Neumann's minimax theorem, Chinese J. of Operations Research, 1 (1987), 68-70
21. Wang, J., The theory of games, Clarendon Press, Oxford, 1988
2 Static games of incomplete information
In this chapter we consider games of incomplete information (Bayesian games), that is, games in which at least one player is uncertain about another player's payoff function. One common example of a static game of incomplete information is a sealed-bid auction: each bidder knows his own valuation for the good being sold but doesn't know any other bidder's valuation; bids are submitted in sealed envelopes, so the players' moves can be thought of as simultaneous.
2.1 Static Bayesian games and Bayesian Nash equilibrium
In this section we define the normal-form representation of a static Bayesian game and a Bayesian Nash equilibrium in such a game. Since these definitions are abstract and a bit complex, we introduce the main ideas with a simple example, namely Cournot competition under asymmetric information. Consider a Cournot duopoly model with inverse demand given by P(Q) = a - Q, where Q = q1 + q2 is the aggregate quantity on the market. Firm 1's cost function is C1(q1) = c·q1. Firm 2's cost function is C2(q2), which has the probabilistic distribution

  C2(q2) : ( cL·q2    cH·q2 )
           ( 1 - θ    θ     ),

where cL < cH. Furthermore, information is asymmetric because firm 2 knows its cost function and firm 1's, but firm 1 knows its cost function and only that firm 2's marginal cost c has the probabilistic distribution

  c : ( cL       cH )
      ( 1 - θ    θ  ).
This situation may arise when firm 2 could be a new entrant to the industry, or could have just invented a new technology. All of this is common knowledge: firm 1 knows that firm 2 has superior information, firm 2 knows that firm 1 knows this, and so on. Naturally, firm 2 may want to choose a different (and presumably lower) quantity if its marginal cost is high than if it is low. Firm 1, for its part, should anticipate that firm 2 may tailor its quantity to its cost in this way. Let q2*(c) denote firm 2's quantity choice as a function of its cost, that is

  q2* = { q2*(cL), if c = cL
        { q2*(cH), if c = cH.                    (41)
Let q1* denote firm 1's single quantity choice. If firm 2's cost is low, it will choose q2*(cL) to solve the problem

  max_{q2} [(a - q1* - q2) - cL] q2.

Similarly, if firm 2's cost is high, q2*(cH) will solve the problem

  max_{q2} [(a - q1* - q2) - cH] q2.
Firm 1 knows that firm 2's cost is low with probability 1 - θ and should anticipate that firm 2's quantity choice will be q2*(cL) or q2*(cH), depending on firm 2's cost. Thus firm 1 chooses q1* to solve the problem

  max_{q1} (1 - θ)[(a - q1 - q2*(cL)) - c] q1 + θ[(a - q1 - q2*(cH)) - c] q1
so as to maximize expected profit. The first-order conditions for these three optimization problems are

  q2*(cL) = (a - q1* - cL)/2,   q2*(cH) = (a - q1* - cH)/2,

and

  q1* = [(1 - θ)(a - q2*(cL) - c) + θ(a - q2*(cH) - c)]/2.
Assume that these first-order conditions characterize the solutions to the earlier optimization problems. Then, the solutions to the three first-order conditions are

  q2*(cL) = (a - 2cL + c)/3 - (θ/6)(cH - cL),
  q2*(cH) = (a - 2cH + c)/3 + ((1 - θ)/6)(cH - cL),

and

  q1* = [a - 2c + (1 - θ)cL + θcH]/3.
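These closed-form quantities can be checked against the three first-order conditions for any parameter values; in the sketch below the numbers a, c, cL, cH and θ are illustrative assumptions, not data from the text:

```python
# Numerical check of the asymmetric-information Cournot solution above.
# The parameter values are assumptions chosen only for illustration.
a, c, cL, cH, theta = 10.0, 2.0, 1.0, 3.0, 0.4

q2L = (a - 2*cL + c)/3 - theta/6*(cH - cL)
q2H = (a - 2*cH + c)/3 + (1 - theta)/6*(cH - cL)
q1 = (a - 2*c + (1 - theta)*cL + theta*cH)/3

# The three first-order conditions must hold simultaneously:
assert abs(q2L - (a - q1 - cL)/2) < 1e-9
assert abs(q2H - (a - q1 - cH)/2) < 1e-9
assert abs(q1 - ((1 - theta)*(a - q2L - c) + theta*(a - q2H - c))/2) < 1e-9
```

With these values q1* = 2.6, q2*(cL) = 3.2 and q2*(cH) = 2.2, and all three conditions are satisfied exactly.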
Compare q2*(cL), q2*(cH) and q1* to the Cournot equilibrium under complete information with costs c1 and c2. Assuming that the values of c1 and c2 are such that both firms' equilibrium quantities are positive, firm i produces qi* = (a - 2ci + cj)/3 in this complete-information case. In the incomplete-information case, in contrast, q2*(cH) is greater than (a - 2cH + c)/3 and q2*(cL) is less than (a - 2cL + c)/3. This occurs because firm 2 not only tailors its quantity to its cost but also responds to the fact that firm 1 cannot do so. If firm 2's cost is high, for example, it produces less because its cost is high but also produces more because it knows that firm 1 will produce a quantity that maximizes its expected profit and thus is smaller than firm 1 would produce if it knew firm 2's cost to be high.
2.2 Normal-form representation of static Bayesian games
Recall that the ensemble Γ = ⟨I, {S_i}, {H_i}⟩, i ∈ I, is a non-cooperative game (see Definition 1.10), where S_i is player i's strategy space and H_i is player i's payoff, hence H_i(s) = H_i(s1, s2, ..., sn) is player i's payoff when the players choose the strategies (s1, s2, ..., sn).
Remark 2.1. The non-cooperative game can also be described as Γ = ⟨I, {A_i}, {H_i}⟩, i ∈ I, where A_i is player i's action space and H_i is player i's payoff, hence H_i(a) = H_i(a1, a2, ..., an) is player i's payoff when the players choose the actions a = (a1, a2, ..., an). In a simultaneous-move game of complete information a strategy for a player is simply an action, but in a dynamic game of complete information (finitely or infinitely repeated game) a strategy can be different from an action. A player's strategy is a complete plan of action - it specifies a feasible action for the player in every contingency in which the player might be called upon to act. Hence, in a dynamic game a strategy is more complicated. □
To prepare for our description of the timing of a static game of incomplete information, we describe the timing of a static game of complete information as follows: (1) the players simultaneously choose actions (player i chooses a_i from the feasible set A_i), and then (2) payoffs H_i(a1, a2, ..., an) are received.
Now we want to develop the normal-form representation of a static Bayesian game, namely a simultaneous-move game of incomplete information.
The first step is to represent the idea that each player knows his own payoff function but may be uncertain about the other players' payoff functions. Let player i's possible payoff functions be represented by H_i(a1, a2, ..., an; t_i), where t_i is called player i's type and belongs to a set of possible types (or type space) T_i. Each type t_i corresponds to a different payoff function that player i might have.
Given this definition of a player's type, saying that player i knows his own payoff function is equivalent to saying that player i knows his type. Likewise, saying that player i may be uncertain about the other players' payoff functions is equivalent to saying that player i may be uncertain about the types of the other players, denoted by t-i = (t1, ..., t_{i-1}, t_{i+1}, ..., tn).
We use T-i to denote the set of all possible values of t-i, and we use the probability distribution p_i(t-i | t_i) to denote player i's belief about the other players' types, t-i, given player i's knowledge of his own type, t_i.
Remark 2.2. In most applications the players' types are independent, in which case p_i(t-i | t_i) doesn't depend on t_i, so we can write player i's belief as p_i(t-i).
Definition 2.1. The normal-form representation of an n-player static Bayesian game specifies the players' action spaces A1, A2, ..., An, their type spaces T1, T2, ..., Tn, their beliefs p1, p2, ..., pn, and their payoff functions H1, H2, ..., Hn.
Remark 2.3. We use Γ = ⟨I, {A_i}, {T_i}, {p_i}, {H_i}⟩, i ∈ I, to denote an n-player static Bayesian game.
Remark 2.4. Player i's type, t_i, is privately known by player i, determines player i's payoff function, H_i(a1, a2, ..., an; t_i), and is a member of the set of possible types T_i. Player i's belief p_i(t-i | t_i) describes i's uncertainty about the n - 1 other players' possible types, t-i, given i's own type, t_i.
Example 2.1. In the Cournot game the firms' actions are their quantity choices, q1 and q2. Firm 2 has two possible cost functions and thus two possible profit or payoff functions:

  H2(q1, q2; cL) = [(a - q1 - q2) - cL] q2

and

  H2(q1, q2; cH) = [(a - q1 - q2) - cH] q2.

Firm 1 has only one possible payoff function

  H1(q1, q2; c) = [(a - q1 - q2) - c] q1.

Thus, firm 1's type space is T1 = {c}, and firm 2's type space is T2 = {cL, cH}.
Example 2.2. Suppose that player i has two possible payoff functions. We would say that player i has two types, t_{i1} and t_{i2}, that player i's type space is T_i = {t_{i1}, t_{i2}}, and that player i's two payoff functions are H_i(a1, a2, ..., an; t_{i1}) and H_i(a1, a2, ..., an; t_{i2}). We can use the idea that each of a player's types corresponds to a different payoff function the player might have to represent the possibility that the player might have different sets of feasible actions, as follows. Suppose that player i's set of feasible actions is {a, b} with probability q and {a, b, c} with probability 1 - q. Then we can say that i has two types and we can define i's feasible set of actions to be {a, b, c} for both types, but define the payoff from taking action c to be -∞ for type t_{i1}.
Remark 2.5. The timing of a static Bayesian game is as follows:
(1) nature draws a type vector t = (t1, t2, ..., tn), where t_i is drawn from the set of possible types T_i;
(2) nature reveals t_i to player i but not to any other player;
(3) the players simultaneously choose actions, player i choosing a_i from the feasible set A_i;
(4) payoffs H_i(a1, a2, ..., an; t_i) are received.
Because nature reveals player i's type to player i but not to player j in step (2), player j doesn't know the complete history of the game when actions are chosen in step (3).
Remark 2.6. There are games in which player i has private information not only about his own payoff function but also about another player's payoff function. We capture this possibility by allowing player i's payoff to depend not only on the actions (a1, a2, ..., an) but also on all the types (t1, t2, ..., tn). We write this payoff as H_i(a1, a2, ..., an; t1, t2, ..., tn).
Remark 2.7. The second technical point involves the beliefs, p_i(t-i | t_i). We will assume that it is common knowledge that in step (1) of the timing of a static Bayesian game, nature draws a type vector t = (t1, t2, ..., tn) according to the prior probability distribution p(t). When nature then reveals t_i to player i, he can compute the belief p_i(t-i | t_i) using Bayes' rule:

  p_i(t-i | t_i) = p(t-i, t_i) / p(t_i) = p(t-i, t_i) / Σ_{t-i ∈ T-i} p(t-i, t_i).

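The belief computation in Remark 2.7 is a direct application of Bayes' rule; in the following sketch the joint prior over two players' binary types is an assumed example, not data from the text:

```python
# Joint prior p(t1, t2) over two players' types in {0, 1} (assumed example).
prior = {(0, 0): 0.3, (0, 1): 0.2, (1, 0): 0.1, (1, 1): 0.4}

def belief(ti):
    """Player 1's belief p1(t2 | t1 = ti), computed from the prior by Bayes' rule."""
    marginal = sum(p for (t1, _), p in prior.items() if t1 == ti)  # p(t1 = ti)
    return {t2: prior[(ti, t2)] / marginal
            for (t1, t2) in prior if t1 == ti}

assert abs(belief(0)[1] - 0.4) < 1e-9   # 0.2 / (0.3 + 0.2)
assert abs(belief(1)[1] - 0.8) < 1e-9   # 0.4 / (0.1 + 0.4)
```

If the types were independent, belief(0) and belief(1) would coincide, which is the situation of Remark 2.2.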
2.3 Definition of Bayesian Nash equilibrium
First, we define the players' strategy spaces in static Bayesian games. We know that a player's strategy is a complete plan of action, specifying a feasible action in every contingency in which the player might be called on to act. Given the timing of a static Bayesian game, in which nature begins the game by drawing the players' types, a (pure) strategy for player i must specify a feasible action for each of player i's possible types.
Definition 2.2. In the static Bayesian game Γ, a strategy for player i is a function s_i, where for each type t_i ∈ T_i, s_i(t_i) specifies the action from the feasible set A_i that type t_i would choose if drawn by nature.
The strategy spaces aren't given in the normal-form representation of the Bayesian game. Instead, in a static Bayesian game the strategy spaces are constructed from the type and action spaces: player i's set of possible (pure) strategies, S_i, is the set of all possible functions with domain T_i and range A_i.
Remark 2.8. In the discussion of dynamic games of incomplete information we will make a distinction between two categories of strategies. Thus, in a separating strategy each type t_i ∈ T_i chooses a different action a_i ∈ A_i. In a pooling strategy all types choose the same action. We introduce the distinction here only to help describe the wide variety of strategies that can be constructed from a given pair of type and action spaces, T_i and A_i.
Example 2.3. In the asymmetric-information Cournot game of Example 2.1 the solution consists of three quantity choices: q2*(cL), q2*(cH) and q1*. In terms of Definition 2.2 of a strategy, the pair (q2*(cL), q2*(cH)) is firm 2's strategy and q1* is firm 1's strategy. Firm 2 will choose different quantities depending on its cost. It is important to note, however, that firm 1's single quantity choice should take into account that firm 2's quantity will depend on firm 2's cost in this way. Thus, if our equilibrium concept is to require that firm 1's strategy be a best response to firm 2's strategy, then firm 2's strategy must be a pair of quantities, one for each possible cost type, else firm 1 simply cannot compute whether its strategy is indeed a best response to firm 2's.
Given the definition of a strategy in a Bayesian game, we turn next to the definition of a Bayesian Nash equilibrium. The central idea is both simple and familiar: each player's strategy must be a best response to the other players' strategies. That is, a Bayesian Nash equilibrium is simply a Nash equilibrium in a Bayesian game.
Definition 2.3. In the static Bayesian game Γ the strategies s* = (s1*, s2*, ..., sn*) are a (pure-strategy) Bayesian Nash equilibrium if for each player i and for each of i's types t_i ∈ T_i, s_i*(t_i) solves the problem

  max_{a_i ∈ A_i} Σ_{t-i ∈ T-i} H_i(s1*(t1), ..., s*_{i-1}(t_{i-1}), a_i, s*_{i+1}(t_{i+1}), ..., sn*(tn); t) p_i(t-i | t_i). □

Remark 2.9. In a Bayesian Nash equilibrium no player wants to change his strategy, even if the change involves only one action by one type.
Remark 2.10. One can show that in a finite static Bayesian game there exists a Bayesian Nash equilibrium, perhaps in mixed strategies.
2.4 The revelation principle
An important tool for designing games when the players have private informa-
tion, due to Myerson [ ], in context of Bayesian games, is the revelation principle.
It can be applied in the auction and bilateral-trading problems described in the
previous sections, as well as in a wide variety of other problems. Before we state
and prove the revelation principle for static Bayesian games, we sketch the way
the revelation principle is used in the auction and bilateral-trading problems.
Consider a seller who wishes to design an auction to maximize his expected
revenue. In the auction considered so far, the highest bidder paid money to the
seller and received the good, but there are many other possibilities. The bidders
might have to pay an entry fee. More generally, some of the losing bidders might
have to pay money, perhaps in amounts that depend on their own and others' bids.
Also, the seller might set a reservation price - a floor below which bids will
not be accepted. More generally, the good might stay with the seller with some
probability, and might not always go to the highest bidder when the seller does
release it.
The seller can use the revelation principle to simplify this problem in two
ways. First, the seller can restrict attention to the following class of games:
1) The bidders simultaneously make claims about their types (their valuations).
Bidder $i$ can claim to be any type $\tau_i$ from $i$'s set of feasible types $T_i$, no
matter what $i$'s true type, $t_i$.
2) Given the bidders' claims $(\tau_1, \tau_2, \dots, \tau_n)$, bidder $i$ pays $x_i(\tau_1, \tau_2, \dots, \tau_n)$
and receives the good with probability $q_i(\tau_1, \tau_2, \dots, \tau_n)$.
For each possible combination of claims $(\tau_1, \tau_2, \dots, \tau_n)$, the sum of the
probabilities $q_1(\tau_1, \dots, \tau_n) + \dots + q_n(\tau_1, \dots, \tau_n)$ must be less than or equal to
one. The second way the seller can use the revelation principle is to restrict
attention to those direct mechanisms in which it is a Bayesian Nash equilibrium
for each bidder to tell the truth - that is, payment and probability functions
$$x_1(\tau_1, \dots, \tau_n), \dots, x_n(\tau_1, \dots, \tau_n); \quad q_1(\tau_1, \dots, \tau_n), \dots, q_n(\tau_1, \dots, \tau_n)$$
such that each player $i$'s equilibrium strategy is to claim $\tau_i(t_i) = t_i$ for each
$t_i \in T_i$.
Definition 2.4. A static Bayesian game in which each player's only action
is to submit a claim about his type is called a direct mechanism. A direct
mechanism in which truth-telling is a Bayesian Nash equilibrium is called
incentive-compatible.
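As a concrete illustration of an incentive-compatible direct mechanism (an example of ours, not from the text), consider a sealed-bid second-price auction on a discretized type space: each bidder reports a valuation claim, the high report wins, the winner pays the second-highest report, and ties are broken by a fair coin. A brute-force check confirms that truth-telling is a best response for every type when the opponent reports truthfully.

```python
from fractions import Fraction as F

# Discretized uniform type space: valuations 0, 1/4, 1/2, 3/4, 1 (illustration).
TYPES = [F(k, 4) for k in range(5)]

def expected_utility(v, report):
    """Expected payoff of `report` for a bidder with value v, against an
    opponent who reports truthfully with a uniformly distributed valuation."""
    total = F(0)
    for vj in TYPES:                      # opponent's truthful report
        if report > vj:
            total += v - vj               # win, pay the second-highest report vj
        elif report == vj:
            total += F(1, 2) * (v - vj)   # tie: win with probability 1/2
    return total / len(TYPES)

# Incentive compatibility: truth-telling maximizes every type's expected payoff.
truthful_is_best = all(
    expected_utility(v, v) == max(expected_utility(v, r) for r in TYPES)
    for v in TYPES
)
print(truthful_is_best)
```

The exact `Fraction` arithmetic avoids any floating-point ambiguity in the comparison.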
Remark 2.11. Outside the context of auction design, the revelation principle
can again be used in these two ways. Any Bayesian Nash equilibrium of any
Bayesian game can be represented by a new Bayesian Nash equilibrium in an
appropriately chosen new Bayesian game, where by "represented" we mean that
for each possible combination of the players' types $(t_1, t_2, \dots, t_n)$, the players'
actions and payoffs in the new equilibrium are identical to those in the old
equilibrium. No matter what the original game, the new Bayesian game is always a
direct mechanism; no matter what the original equilibrium, the new equilibrium
in the new game is always truth-telling.
The following result holds.
Theorem 2.1. (The revelation principle)
Any Bayesian Nash equilibrium of any Bayesian game can be represented by
an incentive-compatible direct mechanism.
Proof. Consider the Bayesian Nash equilibrium $s^* = (s_1^*, s_2^*, \dots, s_n^*)$ in the
Bayesian game $\Gamma = \langle I, \{A_i\}, \{T_i\}, \{p_i\}, \{H_i\}, i \in I \rangle$. We will construct
a direct mechanism with a truth-telling equilibrium that represents $s^*$. The
appropriate direct mechanism is a static Bayesian game with the same type
spaces and beliefs as $\Gamma$ but with new action spaces and new payoff functions.
The new action spaces are simple. Player $i$'s feasible actions in the direct
mechanism are claims about $i$'s possible types. That is, player $i$'s action space
is $T_i$. The new payoff functions are more complicated. They depend not only on the
original game $\Gamma$, but also on the original equilibrium in that game, $s^*$. The idea
is to use the fact that $s^*$ is an equilibrium in $\Gamma$ to ensure that truth-telling is an
equilibrium of the direct mechanism, as follows. The fact that $s^*$ is a Bayesian
Nash equilibrium of $\Gamma$ means that for each player $i$, $s_i^*$ is $i$'s best response to
the other players' strategies $(s_1^*, \dots, s_{i-1}^*, s_{i+1}^*, \dots, s_n^*)$.
Hence, for each of $i$'s types $t_i \in T_i$, $s_i^*(t_i)$ is the best action for $i$ to choose
from $A_i$, given that the other players' strategies are $(s_1^*, \dots, s_{i-1}^*, s_{i+1}^*, \dots, s_n^*)$.
Thus, if $i$'s type is $t_i$ and we allow $i$ to choose an action from a subset of $A_i$
that includes $s_i^*(t_i)$, then $i$'s optimal choice remains $s_i^*(t_i)$. The payoff
functions in the direct mechanism are chosen so as to confront each
player with a choice of exactly this kind.
We define the payoffs in the direct mechanism by substituting the players'
type reports in the new game, $\tau = (\tau_1, \tau_2, \dots, \tau_n)$, into their equilibrium strate-
gies from the old game, $s^*$, and then substituting the resulting actions in the
old game, $s^*(\tau) = (s_1^*(\tau_1), s_2^*(\tau_2), \dots, s_n^*(\tau_n))$, into the payoff functions from
the old game. Formally, $i$'s payoff function is
$$v_i(\tau, t) = H_i(s^*(\tau); t),$$
where $t = (t_1, t_2, \dots, t_n)$.
We conclude the proof by showing that truth-telling is a Bayesian Nash
equilibrium of this direct mechanism. By claiming to be type $\tau_i$ from $T_i$, player $i$
is in effect choosing to take the action $s_i^*(\tau_i)$ from $A_i$. If all the other players tell
the truth, then they are in effect playing the strategies $(s_1^*, \dots, s_{i-1}^*, s_{i+1}^*, \dots, s_n^*)$.
But we argued earlier that if they play these strategies, then when $i$'s type is $t_i$
the best action for $i$ to choose is $s_i^*(t_i)$. Thus, if the other players tell the truth,
then when $i$'s type is $t_i$ the best type to claim to be is $t_i$. That is, truth-telling
is an equilibrium. Hence, it is a Bayesian Nash equilibrium of the static
Bayesian game $\Gamma' = \langle I, \{T_i\}, \{T_i\}, \{p_i\}, \{v_i\}, i \in I \rangle$ for each player $i$ to play
the truth-telling strategy $\tau_i(t_i) = t_i$ for every $t_i \in T_i$. $\square$
In [4], Harsanyi suggested that player $j$'s mixed strategy represents player
$i$'s uncertainty about $j$'s choice of a pure strategy, and that $j$'s choice in turn
depends on the realization of a small amount of private information.
A mixed-strategy Nash equilibrium in a game of complete information can
be interpreted as a pure-strategy Bayesian Nash equilibrium in a closely related
game with a little bit of incomplete information. The crucial feature of a mixed-
strategy Nash equilibrium is not that player $j$ chooses a strategy randomly, but
rather that player $i$ is uncertain about player $j$'s choice; this uncertainty can arise
either because of randomization or because of a little incomplete information,
as in the following example.
Example 2.4. Consider a bi-matrix game (like the Battle of the Sexes) in
which players 1 and 2, although they have known each other for quite some time,
are not quite sure of each other's payoffs. We suppose that player 1's
payoff if both choose the first strategy is $2 + t_1$, where $t_1$ is privately known by
player 1; player 2's payoff if both choose the second strategy is $2 + t_2$, where $t_2$
is privately known by player 2; and $t_1, t_2$ are independent draws from a uniform
distribution on $[0, x]$.
In terms of the static Bayesian game in normal form $\Gamma = \langle \{1, 2\}, A_1, A_2, T_1, T_2, p_1, p_2, H_1, H_2 \rangle$,
the action spaces are $A_1 = A_2 = \{1, 2\}$, the type spaces are $T_1 = T_2 = [0, x]$, the
beliefs are $p_1(t_2) = p_2(t_1) = \frac{1}{x}$ for all $t_1$ and $t_2$, and the payoffs are as follows
in the table:

Situation        Payoffs
$s_1$   $s_2$    $H_1$      $H_2$
 1       1       $2 + t_1$   1
 1       2       0           0
 2       1       0           0
 2       2       1           $2 + t_2$
We will construct a pure-strategy Bayesian Nash equilibrium of this incom-
plete-information static game, in which player 1 plays strategy 1 if $t_1$ exceeds
a critical value, $c_1$, and plays strategy 2 otherwise, and player 2 plays strategy
2 if $t_2$ exceeds a critical value, $c_2$, and plays strategy 1 otherwise. In such an
equilibrium, player 1 plays strategy 1 with probability $\frac{x - c_1}{x}$ and player 2
plays strategy 2 with probability $\frac{x - c_2}{x}$. We will show that as the incom-
plete information disappears, that is, as $x$ approaches zero, the players' behavior
in this pure-strategy Bayesian Nash equilibrium approaches their behavior in the
mixed-strategy Nash equilibrium in the original game of complete information. The
original game has the payoff matrices
$$H_1 = \begin{bmatrix} 2 & 0 \\ 0 & 1 \end{bmatrix}, \quad H_2 = \begin{bmatrix} 1 & 0 \\ 0 & 2 \end{bmatrix}$$
and there are two pure-strategy Nash equilibria, $(1, 1)$ and $(2, 2)$, and a mixed-
strategy Nash equilibrium in which player 1 plays strategy 1 with probability
$\frac{2}{3}$ and player 2 plays strategy 2 with probability $\frac{2}{3}$. Indeed, both
probabilities $\frac{x - c_1}{x}$ and $\frac{x - c_2}{x}$ approach $\frac{2}{3}$ as $x$ approaches zero.
Suppose that players 1 and 2 play the strategies just described. For a given
value of $x$, we will determine values of $c_1$ and $c_2$ such that these strategies are
a Bayesian Nash equilibrium. Given player 2's strategy, player 1's expected
payoffs from playing strategy 1 and from playing strategy 2 are
$$\frac{c_2}{x}(2 + t_1) + \left(1 - \frac{c_2}{x}\right) \cdot 0 = \frac{c_2}{x}(2 + t_1)$$
and
$$\frac{c_2}{x} \cdot 0 + \left(1 - \frac{c_2}{x}\right) \cdot 1 = 1 - \frac{c_2}{x},$$
respectively. Thus playing strategy 1 is optimal if and only if
$$t_1 \ge \frac{x}{c_2} - 3 = c_1.$$
Similarly, given player 1's strategy, player 2's expected payoffs from playing
strategy 2 and from playing strategy 1 are
$$\left(1 - \frac{c_1}{x}\right) \cdot 0 + \frac{c_1}{x}(2 + t_2) = \frac{c_1}{x}(2 + t_2)$$
and
$$\left(1 - \frac{c_1}{x}\right) \cdot 1 + \frac{c_1}{x} \cdot 0 = 1 - \frac{c_1}{x},$$
respectively. Thus, playing strategy 2 is optimal if and only if
$$t_2 \ge \frac{x}{c_1} - 3 = c_2.$$
The above relationships yield $c_2 = c_1$ and $c_2^2 + 3c_2 - x = 0$. Solving
the quadratic then shows that the probability that player 1 plays strategy 1,
namely $\frac{x - c_1}{x}$, and the probability that player 2 plays strategy 2, namely $\frac{x - c_2}{x}$,
both equal
$$1 - \frac{-3 + \sqrt{9 + 4x}}{2x},$$
which approaches $\frac{2}{3}$ as $x$ approaches zero. Thus, as the incomplete information
disappears, the players' behavior in this pure-strategy Bayesian Nash equilib-
rium of the incomplete-information game approaches their behavior in the mixed-
strategy Nash equilibrium in the original game of complete information.
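The limit is easy to verify numerically. Taking the positive root of the quadratic $c^2 + 3c - x = 0$ and evaluating $(x - c)/x$ for shrinking $x$ shows the equilibrium probability approaching $2/3$ (a sanity check of ours, not part of the original text):

```python
import math

def critical_value(x):
    """Positive root of c**2 + 3*c - x = 0, the common critical type c1 = c2."""
    return (-3 + math.sqrt(9 + 4 * x)) / 2

def prob_strategy_1(x):
    """Probability that player 1 plays strategy 1 (and player 2 plays strategy 2)."""
    return (x - critical_value(x)) / x

# As the private information vanishes (x -> 0), the pure-strategy Bayesian
# equilibrium reproduces the 2/3 mixing probability of the original game.
for x in (1.0, 0.1, 0.001):
    print(x, prob_strategy_1(x))
```

For $x = 1$ the probability is already about $0.697$, and it decreases toward $2/3$ as $x \to 0$.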
2.5 Exercises and problems solved
1. (An auction) There are two bidders, $i = 1, 2$. Bidder $i$ has a valuation $v_i$ for
the good - that is, if bidder $i$ gets the good and pays the price $p$, then $i$'s payoff is
$v_i - p$. The two bidders' valuations are independently and uniformly distributed
on $[0, 1]$. Bids are constrained to be nonnegative. The bidders simultaneously
submit their bids. The higher bidder wins the good and pays the price she
bid; the other bidder gets and pays nothing. In the case of a tie, the winner
is determined by a flip of a coin. The bidders are risk-neutral. All of this is
common knowledge. Formulate this problem as a static Bayesian game, and
find a Bayesian Nash equilibrium.
Solution. In terms of a static Bayesian game $\Gamma = \langle I, A_1, A_2, T_1, T_2, p_1, p_2, H_1, H_2 \rangle$,
where $I = \{1, 2\}$, the action space is $A_i = [0, \infty)$; that is, player $i$'s action is
to submit a nonnegative bid, $b_i$, and his type is his valuation $v_i$, hence the
type space is $T_i = [0, 1]$. We must identify the beliefs and the payoff functions.
Because the valuations are independent, player $i$ believes that $v_j$ is uniformly
distributed on $[0, 1]$, no matter what the value of $v_i$. Player $i$'s payoff function
$H_i : A_1 \times A_2 \times T_1 \times T_2 \to \mathbb{R}$ is given by the relationship
$$H_i(b_1, b_2, v_1, v_2) = \begin{cases} v_i - b_i, & \text{if } b_i > b_j \\ \frac{v_i - b_i}{2}, & \text{if } b_i = b_j \\ 0, & \text{if } b_i < b_j. \end{cases} \tag{42}$$
(42)
To derive a Bayesian Nash equilibrium of this game we construct the players'
strategy spaces. We know that in a static Bayesian game a strategy is a function
from the type space to the action space, $b_i : T_i \to A_i$, $v_i \mapsto b_i(v_i)$, where $b_i(v_i)$ specifies
the bid that each of $i$'s types (valuations) would choose. In a Bayesian Nash
equilibrium, player 1's strategy $b_1(v_1)$ is a best response to player 2's strategy
$b_2(v_2)$, and vice versa. The pair of strategies $(b_1(v_1), b_2(v_2))$ is a Bayesian Nash
equilibrium if for each $v_i$ in $[0, 1]$, $b_i(v_i)$ solves the problem
$$\max_{b_i} \, (v_i - b_i) P\{b_i > b_j(v_j)\} + \frac{1}{2}(v_i - b_i) P\{b_i = b_j(v_j)\}, \quad i = 1, 2.$$
We simplify the exposition and calculations by looking for a linear equilib-
rium $b_1(v_1) = a_1 + c_1 v_1$ and $b_2(v_2) = a_2 + c_2 v_2$. For a given value of $v_i$, player
$i$'s best response solves the problem
$$\max_{b_i} \, (v_i - b_i) P\{b_i > a_j + c_j v_j\},$$
where we have used the fact that $P\{b_i = b_j(v_j)\} = 0$, because $b_j(v_j) = a_j + c_j v_j$
and $v_j$ is uniformly distributed, so $b_j$ is uniformly distributed. Since it
is pointless for player $i$ to bid above $j$'s maximum bid, we have $a_j \le b_i \le a_j + c_j$,
so
$$P\{b_i > a_j + c_j v_j\} = P\left\{v_j < \frac{b_i - a_j}{c_j}\right\} = \frac{b_i - a_j}{c_j}.$$
Player $i$'s best response is therefore
$$b_i(v_i) = \begin{cases} \frac{v_i + a_j}{2}, & \text{if } v_i \ge a_j \\ a_j, & \text{if } v_i < a_j. \end{cases} \tag{43}$$
We prove that $a_j \le 0$. If we had $0 < a_j < 1$, then there would be some values of
$v_i$ such that $v_i < a_j$, in which case $b_i(v_i)$ is not linear; rather, it is flat at first
and positively sloped later. Since we are looking for a linear equilibrium, we
therefore rule out $0 < a_j < 1$, focusing instead on $a_j \ge 1$ and $a_j \le 0$. But the
former cannot occur in equilibrium: since it is optimal for a higher type to bid
at least as much as a lower type's optimal bid, we have $c_j \ge 0$, but then $a_j \ge 1$
would imply that $b_j(v_j) \ge v_j$, which cannot be optimal. Thus, if $b_i(v_i)$ is to
be linear, then we must have $a_j \le 0$, in which case $b_i(v_i) = \frac{v_i + a_j}{2}$, so $a_i = \frac{a_j}{2}$
and $c_i = \frac{1}{2}$. We can repeat the same analysis for player $j$ under the assumption
that player $i$ adopts the strategy $b_i(v_i) = a_i + c_i v_i$. This yields $a_i \le 0$, $a_j = \frac{a_i}{2}$,
and $c_j = \frac{1}{2}$. Combining these two sets of results then yields $a_i = a_j = 0$ and
$c_i = c_j = \frac{1}{2}$. That is, $b_i(v_i) = \frac{v_i}{2}$.
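A quick numerical check of this result (an illustrative sketch of ours, not from the text): against an opponent who bids half her valuation, with $v_j$ uniform on $[0, 1]$, a bid $b$ wins with probability $\min(2b, 1)$, so the expected payoff is $(v_i - b)\min(2b, 1)$ and a grid search should return $b = v_i/2$.

```python
def expected_payoff(v_i, b):
    # Opponent bids v_j / 2 with v_j uniform on [0, 1], so P(win) = min(2*b, 1).
    return (v_i - b) * min(2 * b, 1.0)

def best_response(v_i, grid=10001):
    """Grid search for the payoff-maximizing bid on [0, 1]."""
    bids = [k / (grid - 1) for k in range(grid)]
    return max(bids, key=lambda b: expected_payoff(v_i, b))

for v in (0.2, 0.5, 0.8):
    print(v, best_response(v))
```

Each printed best response agrees with $v_i/2$ up to the grid resolution, consistent with the fixed point $b_i(v_i) = v_i/2$ derived above.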
Remark 2.12. Note well that we aren't restricting the players' strategy
spaces to include only linear strategies. Rather, we allow the players to choose
arbitrary strategies but ask whether there is an equilibrium that is linear. It
turns out that because the players' valuations are uniformly distributed, a linear
equilibrium not only exists but is unique. We found that $b_i(v_i) = \frac{v_i}{2}$; that
is, each player submits a bid equal to half her valuation. Such a bid reflects the
fundamental trade-off a bidder faces in an auction: the higher the bid, the more
likely the bidder is to win; the lower the bid, the larger the gain if the bidder
does win.
Remark 2.13. One might wonder whether there are other Bayesian Nash
equilibria of the game treated in the first problem, and also how equilibrium bidding
changes as the distribution of the bidders' valuations changes. Neither of these
questions can be answered using the technique just applied: it is fruitless to
try to guess all the functional forms other equilibria of this game might have,
and a linear equilibrium doesn't exist for any other distribution of valuations.
We derive, next, a symmetric Bayesian Nash equilibrium (namely, the players'
strategies are identical: there is a single function $b(v_i)$ such that player 1's strat-
egy $b_1(v_1)$ is $b(v_1)$ and player 2's strategy $b_2(v_2)$ is $b(v_2)$, and this single strategy
is a best response to itself), again for the case of uniformly distributed valua-
tions. Under the assumption that the players' strategies are strictly increasing
and differentiable, we show that the unique symmetric Bayesian Nash equilib-
rium is the linear equilibrium. The technique we use can easily be extended to
a broad class of valuation distributions, as well as to the case of $n$ bidders.
2. Are there other Bayesian Nash equilibria in the game of problem 1?
Derive a symmetric Bayesian Nash equilibrium.
Solution. As we have just mentioned in Remark 2.13, it is fruitless to try
to guess all the functional forms other equilibria might have. Suppose player $j$ adopts the
strategy $b$, and assume that $b$ is strictly increasing and differentiable. Then for
a given value of $v_i$, player $i$'s optimal bid solves the problem
$$\max_{b_i} \, (v_i - b_i) P\{b_i > b(v_j)\}.$$
Let $b^{-1}(b_j)$ denote the valuation that bidder $j$ must have in order to bid
$b_j$; that is, $b^{-1}(b_j) = v_j$ if $b_j = b(v_j)$. Since $v_j$ is uniformly distributed on $[0, 1]$,
$P\{b_i > b(v_j)\} = P\{b^{-1}(b_i) > v_j\} = b^{-1}(b_i)$. The first-order condition for
player $i$'s optimization problem is therefore
$$-b^{-1}(b_i) + (v_i - b_i) \frac{d}{db_i} b^{-1}(b_i) = 0.$$
This first-order condition is an implicit equation for bidder $i$'s best response
to the strategy $b$ played by bidder $j$, given that bidder $i$'s valuation is $v_i$. If the
strategy $b$ is to be a symmetric Bayesian Nash equilibrium, we require that the
solution to the first-order condition be $b(v_i)$: that is, for each of bidder $i$'s
possible valuations, bidder $i$ doesn't wish to deviate from the strategy $b$, given
that bidder $j$ plays this strategy. To impose this requirement, we substitute
$b_i = b(v_i)$ into the first-order condition, yielding
$$-b^{-1}(b(v_i)) + (v_i - b(v_i)) \frac{d}{db_i} b^{-1}(b(v_i)) = 0.$$
Of course, $b^{-1}(b(v_i)) = v_i$. Furthermore,
$$\frac{d}{db_i}\left(b^{-1}(b(v_i))\right) = \frac{1}{b'(v_i)}.$$
That is, $\frac{d}{db_i} b^{-1}(b_i)$ measures how much bidder $i$'s valuation must change to
produce a unit change in the bid, whereas $b'(v_i)$ measures how much the bid
changes in response to a unit change in the valuation. Thus, $b$ must satisfy the
first-order differential equation
$$-v_i + (v_i - b(v_i)) \frac{1}{b'(v_i)} = 0,$$
which is more conveniently expressed as $v_i b'(v_i) + b(v_i) = v_i$. The left-hand
side of this differential equation is $\frac{d}{dv_i}(v_i b(v_i))$. Integrating both sides of the
equation therefore yields
$$v_i b(v_i) = \frac{1}{2} v_i^2 + k,$$
where $k$ is a constant of integration. To eliminate $k$, we need a bounda-
ry condition. Fortunately, simple economic reasoning provides one: no player
should bid more than his valuation. Thus, we require $b(v_i) \le v_i$ for every $v_i$. In
particular, we require $b(0) \le 0$. Since bids are constrained to be nonnegative,
this implies that $b(0) = 0$, so $k = 0$ and $b(v_i) = \frac{v_i}{2}$, as claimed.
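The boundary condition can also be checked numerically (a sketch of ours, not from the text): Euler-integrating $\frac{d}{dv}(v\,b(v)) = v$ from $b(0) = 0$ recovers $b(v) = v/2$ without assuming the answer in advance.

```python
def b_numeric(v, steps=100_000):
    """Euler-integrate y' = v with y(0) = 0, where y(v) = v * b(v); return b(v)."""
    y, dv = 0.0, v / steps
    for k in range(steps):
        y += (k * dv) * dv        # one Euler step on y' = v
    return y / v

print(b_numeric(1.0))             # close to 1/2
print(b_numeric(0.6))             # close to 0.3
```

The discretization error of the left Riemann sum is of order $1/\text{steps}$, so the output matches $v/2$ to several decimal places.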
3. (A double auction) Consider a trading game called a double auction.
The seller names an asking price, $p_s$, and the buyer simultaneously names an
offer price, $p_b$. If $p_b \ge p_s$, then trade occurs at price $p = \frac{p_b + p_s}{2}$; if $p_b < p_s$, then
no trade occurs. The buyer's valuation for the seller's good is $v_b$, the seller's
is $v_s$. These valuations are private information and are drawn from independent
uniform distributions on $[0, 1]$. If the buyer gets the good for price $p$, then the
buyer's utility is $v_b - p$; if there is no trade, then the buyer's utility is zero. If
the seller sells the good for price $p$, then the seller's utility is $p - v_s$; if there is no
trade, then the seller's utility is zero. Find the Bayesian Nash equilibria.
Solution. In this static Bayesian game, a strategy for the buyer is a func-
tion $p_b$ specifying the price the buyer will offer for each of the buyer's possible
valuations, namely $p_b(v_b)$. Likewise, a strategy for the seller is a function $p_s$ speci-
fying the price the seller will demand for each of the seller's valuations, namely
$p_s(v_s)$. A pair of strategies $(p_b(v_b), p_s(v_s))$ is a Bayesian Nash equilibrium if the
following two conditions hold. For each $v_b$ in $[0, 1]$, $p_b(v_b)$ solves the problem
$$\max_{p_b} \left[ v_b - \frac{p_b + E[p_s(v_s) \mid p_b \ge p_s(v_s)]}{2} \right] P(p_b \ge p_s(v_s)),$$
where $E[p_s(v_s) \mid p_b \ge p_s(v_s)]$ is the expected price the seller will demand, con-
ditional on the demand being less than the buyer's offer of $p_b$. For each $v_s$ in $[0, 1]$,
$p_s(v_s)$ solves the problem
$$\max_{p_s} \left[ \frac{p_s + E[p_b(v_b) \mid p_b(v_b) \ge p_s]}{2} - v_s \right] P(p_b(v_b) \ge p_s),$$
where $E[p_b(v_b) \mid p_b(v_b) \ge p_s]$ is the expected price the buyer will offer, con-
ditional on the offer being greater than the seller's demand of $p_s$.
There are many Bayesian Nash equilibria of this game. Consider the fol-
lowing one-price equilibrium, for example, in which trade occurs at a single
price if it occurs at all. For any value $x$ in $[0, 1]$, let the buyer's strategy be
to offer $x$ if $v_b \ge x$ and to offer zero otherwise, and let the seller's strategy be to
demand $x$ if $v_s \le x$ and to demand one otherwise. Given the buyer's strategy,
the seller's choices amount to trading at $x$ or not trading, so the seller's strategy
is a best response to the buyer's because the seller types who prefer trading at
$x$ to not trading do so, and vice versa. The analogous argument shows that the
buyer's strategy is a best response to the seller's, so these strategies are indeed
a Bayesian Nash equilibrium. In this equilibrium, trade occurs for the $(v_s, v_b)$
pairs with $v_b \ge x$ and $v_s \le x$, which can be indicated in a figure; trade would be
efficient for all $(v_s, v_b)$ pairs such that $v_b \ge v_s$, but it doesn't occur in the two
regions for which $v_b \ge v_s$ but $v_b < x$, or $v_b \ge v_s$ but $v_s > x$.
We now derive a linear Bayesian Nash equilibrium of the double auction.
As in the previous problem, we aren't restricting the players' strategy spaces to
include only linear strategies. Rather, we allow the players to choose arbitrary
strategies but ask whether there is an equilibrium that is linear.
Many other equilibria exist besides the one-price equilibria and the linear
equilibrium, but the linear equilibrium has interesting efficiency properties,
which we describe later.
Suppose the seller's strategy is $p_s(v_s) = a_s + c_s v_s$. Then $p_s$ is uniformly
distributed on $[a_s, a_s + c_s]$, so the first maximization problem becomes
$$\max_{p_b} \left[ v_b - \frac{1}{2}\left(p_b + \frac{a_s + p_b}{2}\right) \right] \frac{p_b - a_s}{c_s},$$
the first-order condition for which yields
$$p_b = \frac{2}{3} v_b + \frac{1}{3} a_s.$$
Thus, if the seller plays a linear strategy, then the buyer's best response is
also linear. Analogously, suppose the buyer's strategy is $p_b(v_b) = a_b + c_b v_b$.
Then $p_b$ is uniformly distributed on $[a_b, a_b + c_b]$, so the second maximization
problem becomes
$$\max_{p_s} \left[ \frac{1}{2}\left(p_s + \frac{p_s + a_b + c_b}{2}\right) - v_s \right] \frac{a_b + c_b - p_s}{c_b},$$
the first-order condition for which yields
$$p_s = \frac{2}{3} v_s + \frac{1}{3}(a_b + c_b).$$
Thus, if the buyer plays a linear strategy, then the seller's best response
is also linear. If the players' linear strategies are to be best responses to each
other, the relationship for $p_b$ implies that $c_b = \frac{2}{3}$ and $a_b = \frac{a_s}{3}$, and the relationship
for $p_s$ implies that $c_s = \frac{2}{3}$ and $a_s = \frac{a_b + c_b}{3}$. Therefore, the linear equilibrium
strategies are
$$p_b(v_b) = \frac{2}{3} v_b + \frac{1}{12}$$
and
$$p_s(v_s) = \frac{2}{3} v_s + \frac{1}{4}.$$
Recall that trade occurs in the double auction if and only if $p_b \ge p_s$. The
last relationship shows that trade occurs in the linear equilibrium if and
only if $v_b \ge v_s + \frac{1}{4}$.
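The buyer's side of this equilibrium can be verified by a grid search (an illustrative check of ours, not from the text). Against the seller's linear strategy, $p_s$ is uniform on $[1/4, 11/12]$, so an offer $p$ in that range trades with probability $(p - 1/4)/(2/3)$ at expected price $\frac{1}{2}(p + \frac{1/4 + p}{2})$; the maximizer should be $p_b(v_b) = \frac{2}{3}v_b + \frac{1}{12}$.

```python
def buyer_payoff(v_b, p):
    """Buyer's expected payoff from offering p against p_s(v_s) = (2/3)v_s + 1/4."""
    if p < 0.25:
        return 0.0                              # no seller type accepts
    p = min(p, 11 / 12)                         # offers above 11/12 are dominated
    prob_trade = (p - 0.25) / (2 / 3)
    exp_price = 0.5 * (p + 0.5 * (0.25 + p))    # average of p and E[p_s | p_s <= p]
    return (v_b - exp_price) * prob_trade

def buyer_best_offer(v_b, grid=20001):
    offers = [k / (grid - 1) for k in range(grid)]
    return max(offers, key=lambda p: buyer_payoff(v_b, p))

for v in (0.5, 0.7, 0.9):
    print(v, buyer_best_offer(v), 2 * v / 3 + 1 / 12)
```

For each valuation the grid maximizer matches the linear formula up to the grid resolution; the seller's side can be checked symmetrically.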
A figure of this situation reveals that seller types above $\frac{3}{4}$ make demands
above the buyer's highest offer, $p_b(1) = \frac{3}{4}$, and buyer types below $\frac{1}{4}$ make offers
below the seller's lowest demand, $p_s(0) = \frac{1}{4}$. Comparing which valuation pairs
trade in the one-price and linear equilibria, respectively: in both cases, the
most valuable possible trade, namely $v_s = 0$ and $v_b = 1$, does occur. But the one-
price equilibrium misses some valuable trades (such as $v_s = 0$ and $v_b = x - \varepsilon$,
where $\varepsilon$ is small) and achieves some trades that are worth next to nothing,
such as $v_s = x - \varepsilon$ and $v_b = x + \varepsilon$. The linear equilibrium, in contrast, misses
all trades worth next to nothing but achieves all trades worth at least $\frac{1}{4}$. This
suggests that the linear equilibrium may dominate the one-price equilibria, in
terms of the expected gains the players receive, but it also raises the possibility
that the players might do even better in an alternative equilibrium.
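A Monte Carlo comparison of expected total gains from trade, $E[(v_b - v_s)\mathbf{1}\{\text{trade}\}]$, supports this suggestion (a check of ours, not from the text): the analytic values are $9/64 \approx 0.141$ for the linear equilibrium and $1/8$ for the best one-price equilibrium, $x = 1/2$.

```python
import random

# Trade rules: linear equilibrium trades iff v_b >= v_s + 1/4;
# the one-price equilibrium with x = 1/2 trades iff v_b >= 1/2 >= v_s.
rng = random.Random(0)
n = 200_000
linear = one_price = 0.0
for _ in range(n):
    v_b, v_s = rng.random(), rng.random()
    gain = v_b - v_s
    if v_b >= v_s + 0.25:
        linear += gain
    if v_b >= 0.5 >= v_s:
        one_price += gain
linear /= n
one_price /= n
print(linear, one_price)
```

With 200,000 draws the standard error is well below 0.001, so the gap between the two equilibria is clearly visible in the simulated means.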
In [9] Myerson and Satterthwaite show that, for the uniform valuation dis-
tributions considered here, the linear equilibrium yields higher expected gains
for the players than any other Bayesian Nash equilibrium of the double auction
(including but far from limited to the one-price equilibria). This implies that
there is no Bayesian Nash equilibrium of the double auction in which trade oc-
curs if and only if it is efficient, that is, if and only if $v_b \ge v_s$.
They also show that this latter result is very general: if $v_b$ is continuously
distributed on $[x_b, y_b]$ and $v_s$ is continuously distributed on $[x_s, y_s]$, where $y_b > x_s$
and $y_s > x_b$, then there is no bargaining game the buyer and seller would
willingly play that has a Bayesian Nash equilibrium in which trade occurs if and
only if it is efficient.
Remark 2.14. The revelation principle can be used to prove this general
result, which can then be translated into Hall and Lazear's employment model:
if the firm has private information about the worker's marginal product ($m$) and
the worker has private information about his outside opportunity ($v$), then there
is no bargaining game that the firm and the worker would willingly play that
produces employment if and only if it is efficient, that is, if and only if $m \ge v$.
2.6 Exercises and problems unsolved
1. Consider a Cournot duopoly operating in a market with inverse demand
$P(Q) = a - Q$, where $Q = q_1 + q_2$ is the aggregate quantity on the market. Both
firms have total costs $C_i(q_i) = c q_i$, but demand is uncertain: it is low, $a = a_L$,
with probability $1 - \theta$, and high, $a = a_H$, with probability $\theta$. Furthermore,
information is asymmetric: firm 1 knows whether demand is high or low, but
firm 2 doesn't. All of this is common knowledge. The two firms simultaneously
choose quantities. What are the strategy spaces for the two firms? Make
assumptions concerning $a_H$, $a_L$, $\theta$, and $c$ such that all equilibrium quantities are
positive. What is the Bayesian Nash equilibrium of this game?
2. Consider the following asymmetric-information model of Bertrand
duopoly with differentiated products. Demand for firm $i$ is $q_i(p_i, p_j) = a - p_i - b_i p_j$.
Costs are zero for both firms. The sensitivity of firm $i$'s demand to firm
$j$'s price is either high or low. That is, $b_i$ is either $b_H$ or $b_L$, where $b_H > b_L > 0$.
For each firm, $b_i = b_H$ with probability $\theta$ and $b_i = b_L$ with probability $1 - \theta$,
independent of the realization of $b_j$. Each firm knows its own $b_i$ but not its
competitor's. All of this is common knowledge. What are the action spaces,
type spaces, beliefs, and utility functions in this game? What are the strategy
spaces? What conditions define a symmetric pure-strategy Bayesian Nash
equilibrium of this game? Solve for such an equilibrium.
3. Find all the pure-strategy Bayesian Nash equilibria in the following
static Bayesian game:
1. Nature determines whether the payoffs are as in Game 1 or as in Game
2, each game being equally likely.
2. Player 1 learns whether nature has drawn Game 1 or Game 2, but player
2 doesn't.
3. Player 1 chooses either T or B; player 2 simultaneously chooses either L
or R.
4. Payoffs are given by the game drawn by nature.

Game 1            Game 2
    L     R           L     R
T  1,1   0,0      T  0,0   0,0
B  0,0   0,0      B  0,0   2,2
4. Recall from Section 1.1 of Chapter 1 that Matching pennies has no pure-
strategy Nash equilibrium but has one mixed-strategy Nash equilibrium: each
player plays H with probability 1/2.

Player 2
Player 1
      H       T
H   1,-1    -1,1
T   -1,1    1,-1

Provide a pure-strategy Bayesian Nash equilibrium of a corresponding game
of incomplete information such that as the incomplete information disappears,
the players' behavior in the Bayesian Nash equilibrium approaches their behav-
ior in the mixed-strategy Nash equilibrium in the original game of complete
information.
5. Consider a first-price, sealed-bid auction in which the bidders' val-
uations are independently and uniformly distributed on $[0, 1]$. Show that if
there are $n$ bidders, then the strategy of bidding $\frac{n-1}{n}$ times one's valuation is a
symmetric Bayesian Nash equilibrium of this auction.
6. Consider a first-price, sealed-bid auction in which the bidders' val-
uations are independently and identically distributed according to the strictly
positive density $f(v_i)$ on $[0, 1]$. Compute a symmetric Bayesian Nash equilib-
rium for the two-bidder case.
7. Reinterpret the buyer and seller in the double auction analyzed in problem
3 (A double auction) from Section 2.5 as a firm that knows a worker's marginal
product ($m$) and a worker who knows his outside opportunity ($v$), respectively.
In this context, trade means that the worker is employed by the firm, and the
price at which the parties trade is the worker's wage $w$. If there is trade, then the
firm's payoff is $m - w$ and the worker's is $w$; if there is no trade, then the firm's
payoff is zero and the worker's is $v$. Suppose that $m$ and $v$ are independent draws
from a uniform distribution on $[0, 1]$, as in the text. For purposes of comparison,
compute the players' expected payoffs in the linear equilibrium of the double
auction. Now consider the following two trading games as alternatives to the
double auction.
Game I: Before the parties learn their private information, they sign a
contract specifying that if the worker is employed by the firm then the worker's
wage will be $w$, but also that either side can escape from the employment
relationship at no cost. After the parties learn the values of their respective
pieces of private information, they simultaneously announce either that they
Accept the wage $w$ or that they Reject that wage. If both announce Accept,
then trade occurs; otherwise it doesn't. Given an arbitrary value of $w$ from
$[0, 1]$, what is the Bayesian Nash equilibrium of this game? Draw a diagram
showing the type pairs that trade. Find the value of $w$ that maximizes the sum
of the players' expected payoffs and compute this maximized sum.
Game II: Before the parties learn their private information, they sign a
contract specifying that the following dynamic game will be used to determine
whether the worker joins the firm and, if so, at what wage. After the parties learn
the values of their respective pieces of private information, the firm chooses a
wage $w$ to offer the worker, which the worker then accepts or rejects. Try to
analyze this game using backwards induction. Given $w$ and $v$, what will the
worker do? If the firm anticipates what the worker will do, then given $m$, what
will the firm do? What is the sum of the players' expected payoffs?
2.7 References
1. Dani, E., Numerical method in games theory, Ed. Dacia, Cluj-Napoca, 1983
2. Dani, E., Mureşan, A.S., Applied mathematics in economy, Lito. Univ. Babeş-Bolyai, Cluj-Napoca, 1981
3. Gibbons, R., Game theory for applied economists, Princeton University Press, New Jersey, 1992
4. Harsanyi, J., Games with randomly distributed payoffs: A new rationale for mixed strategy equilibrium points, International Journal of Game Theory, 2, 1973, 1-23
5. Mureşan, A.S., Operational research, Lito. Univ. Babeş-Bolyai, Cluj-Napoca, 1996
6. Mureşan, A.S., Applied mathematics in finance, banks and exchanges, Ed. Risoprint, Cluj-Napoca, 2000
7. Mureşan, A.S., Applied mathematics in finance, banks and exchanges, Vol. I, Ed. Risoprint, Cluj-Napoca, 2001
8. Mureşan, A.S., Applied mathematics in finance, banks and exchanges, Vol. II, Ed. Risoprint, Cluj-Napoca, 2002
9. Myerson, R., Satterthwaite, M., Efficient mechanisms for bilateral trading, Journal of Economic Theory, 28, 1983, 265-281
10. Owen, G., Game theory (2nd edn.), Academic Press, New York, 1982
11. Wang, J., The theory of games, Clarendon Press, Oxford, 1988
Part II
THE ABSTRACT THEORY OF
GAMES
3 Generalized games and abstract economies
Fixed point theorems are the basic mathematical tools in showing the existence
of solutions in game theory and economics. While I have tried to integrate the
mathematics and applications, this chapter isn't a comprehensive introduction to
either general equilibrium theory or game theory. Here only finite-dimensional
spaces are used. While many of the results presented here are true in arbi-
trary locally convex spaces, no attempt has been made to cover the infinite-
dimensional results.
The main bibliographical source for this chapter is Border's book [10],
which I have used in my lectures with the students in Computer Science
from the Faculty of Mathematics and Informatics. Also, we use recent
results obtained by Aliprantis, Tourky, Yannelis, Maugeri, Ray, D'Agata, Oetli,
Schlager, Agarwal, O'Regan, Rim, Kim, Husai, Tarafdar, Llinares, Muresan,
and so on.
3.1 Introduction
The fundamental idealization made in modelling an economy is the notion of a commodity. We suppose that it is possible to classify all the different goods and services in the world into a finite number, m, of commodities, which are available in infinitely divisible units. The commodity space is then ℝ^m. A vector in ℝ^m specifies a list of quantities of each commodity. It is commodity vectors that are exchanged, manufactured and consumed in the course of economic activity, not individual commodities; although a typical exchange involves a zero quantity of most commodities. A price vector lists the value of a unit of each commodity and so also belongs to ℝ^m. Thus the value of the commodity vector x at the price vector p is

∑_{i=1}^m p_i x_i = p·x.
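As a small numeric companion to the formula above, the value of a commodity vector is just a dot product; the prices and quantities below are invented purely for illustration.

```python
# Value of a commodity vector x at prices p: p.x = sum_i p_i * x_i.
# The numbers are made up; a negative entry in x would denote a
# commodity the consumer supplies rather than consumes.

def value(p, x):
    """Return the value p.x of commodity vector x at price vector p."""
    assert len(p) == len(x)
    return sum(pi * xi for pi, xi in zip(p, x))

p = [2.0, 1.0, 4.0]   # unit prices of m = 3 commodities
x = [1.0, 3.0, 0.0]   # a commodity vector (zero quantity of commodity 3)

print(value(p, x))    # 2*1 + 1*3 + 4*0 = 5.0
```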
The principal participants in an economy are the consumers. We will assume that there is a given finite number of consumers. Not every commodity vector is admissible as a final consumption for a consumer. The set X_i ⊂ ℝ^m of all admissible consumption vectors for consumer i is his consumption set. There are a variety of restrictions that might be embodied in the consumption set. One possible restriction that might be placed on admissible consumption vectors is that they be nonnegative. Without this restriction, negative quantities of a commodity in a final consumption vector mean that the consumer is supplying the commodity as a service.
In a private ownership economy consumers are also partially characterized by their initial endowment of commodities. This is represented as a point w_i in the commodity space. In a market economy a consumer must purchase his consumption vector at the market prices. The set of admissible commodity vectors that he can afford at prices p given an income M_i is called his budget set and is just {x ∈ X_i | p·x ≤ M_i}. The budget set might well be empty. The problem faced by a consumer in a market economy is to choose a consumption vector, or a set of them, from the budget set. To do this, the consumer must have some criterion for choosing. One way to formalize the criterion is to assume that the consumer has a utility index, that is, a real-valued function u_i : X_i → ℝ, x ↦ u_i(x). The idea is that a consumer would prefer to consume vector x rather than vector y if u_i(x) > u_i(y) and would be indifferent if u_i(x) = u_i(y). The solution to the consumer's problem is then to find all vectors x which maximize u_i on the budget set. The set of solutions to a consumer's problem for given prices is his demand set.
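The consumer's problem above can be sketched by brute force once the consumption set is discretized. This is a minimal sketch: the grid consumption set and the utility index chosen below are illustrative assumptions, not objects fixed by the text.

```python
# Brute-force demand: maximize a utility index u_i over the budget set
# {x in X_i | p.x <= M_i}. Here X_i is a small finite grid standing in
# for the (infinitely divisible) consumption set.

from itertools import product

def demand_set(X_i, u_i, p, M_i):
    """All budget-feasible bundles maximizing u_i -- the demand set."""
    budget = [x for x in X_i
              if sum(pi * xi for pi, xi in zip(p, x)) <= M_i]
    if not budget:                  # the budget set might well be empty
        return []
    best = max(u_i(x) for x in budget)
    return [x for x in budget if u_i(x) == best]

X_i = list(product(range(5), repeat=2))    # grid consumption set in R^2
u_i = lambda x: (x[0] + 1) * (x[1] + 1)    # an illustrative utility index

print(demand_set(X_i, u_i, p=[1.0, 2.0], M_i=6.0))   # -> [(4, 1)]
```

With these prices the unique maximizer spends the whole income; with an unaffordable income level the function returns the empty demand set.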
The supplier's problem is simple. Suppliers are motivated by profits. Each supplier j has a production set Y_j of technologically feasible supply vectors. A supply vector specifies the quantities of each commodity supplied and the amount of each commodity used as an input. Inputs are denoted by negative quantities and outputs by positive ones. The profit or net income associated with the supply vector y at the price vector p is just ∑_{i=1}^m p_i y_i = p·y. The supplier's problem is then to choose a y from the set of technologically feasible supply vectors which maximizes the associated profit. The set of profit maximizing production vectors is the supply set.
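The supplier's problem mirrors the consumer's one: the supply set is the set of maximizers of p·y over Y_j. A sketch for a made-up finite technology (the production plans and prices are assumptions for illustration):

```python
# Profit maximization: the supply set collects the plans y in Y_j that
# maximize p.y. Inputs carry negative signs, outputs positive ones.

def supply_set(Y_j, p):
    """Production plans in Y_j maximizing profit p.y (the supply set)."""
    profit = lambda y: sum(pi * yi for pi, yi in zip(p, y))
    best = max(profit(y) for y in Y_j)
    return [y for y in Y_j if profit(y) == best]

Y_j = [(0.0, 0.0),     # shut down
       (-1.0, 2.0),    # use 1 unit of good 1 to make 2 units of good 2
       (-3.0, 5.0)]    # a larger, less efficient plan
p = [2.0, 1.5]

print(supply_set(Y_j, p))   # profits 0.0, 1.0, 1.5 -> [(-3.0, 5.0)]
```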
A variation on the notion of a noncooperative game is that of an abstract economy. In an abstract economy, the set of strategies available to a player depends on the strategy choices of the other players. Consider, for example, the problem of finding an equilibrium price vector for a market economy. This can be converted into a game where the strategy sets of consumers are their consumption sets and those of suppliers are their production sets.
3.2 Equilibrium of excess demand correspondences
There is a fundamental theorem for proving the existence of a market equilibrium of an abstract economy [10].
If ζ is the excess demand multivalued mapping, then p is an equilibrium price if 0 ∈ ζ(p). The price p is a free disposal equilibrium price if there is a z ∈ ζ(p) such that z ≤ 0.
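For intuition, an equilibrium price can be computed directly in a toy case where the excess demand is single-valued, so 0 ∈ ζ(p) reduces to ζ(p) = 0. The two-good exchange economy below (two Cobb-Douglas consumers with expenditure shares 0.3 and 0.6 and unit endowments) is an illustrative assumption, not an example from the text; since excess demand for good 1 is strictly decreasing in its price, bisection finds the equilibrium.

```python
# Toy excess demand for good 1, prices normalized to (p1, 1 - p1).
# Consumer 1: share 0.3, endowment (1, 0); consumer 2: share 0.6,
# endowment (0, 1). Walras' law then pins down good 2 as well.

def z1(p1):
    """Aggregate excess demand for good 1 at prices (p1, 1 - p1)."""
    p2 = 1.0 - p1
    income1, income2 = 1.0 * p1, 1.0 * p2
    demand = 0.3 * income1 / p1 + 0.6 * income2 / p1
    return demand - 1.0            # total endowment of good 1 is 1

# z1 is strictly decreasing on (0, 1): bisect for the root.
lo, hi = 1e-6, 1.0 - 1e-6
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if z1(mid) > 0 else (lo, mid)

p1 = 0.5 * (lo + hi)
print(p1)    # about 0.4615 = 0.6 / 1.3, where z1 vanishes
```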
Theorem 3.1. (Gale-Debreu-Nikaido Lemma). Let ζ : Δ ↠ ℝ^m be an upper hemi-continuous multivalued mapping with nonempty compact convex values such that for all p ∈ Δ

p·z ≤ 0 for each z ∈ ζ(p).

Put N = −ℝ^m_+. Then the set {p ∈ Δ | N ∩ ζ(p) ≠ ∅} of free disposal equilibrium prices is nonempty and compact.
Proof. For each p ∈ Δ set

U(p) = {q | q·z > 0, (∀) z ∈ ζ(p)}.

Then U(p) is convex for each p and p ∉ U(p), and we have that U⁻¹(p) is open for each p.
For if q ∈ U⁻¹(p), we have that p·z > 0 for all z ∈ ζ(q). Then, since ζ is upper hemi-continuous, ζ⁺[{x | p·x > 0}] is a neighborhood of q in U⁻¹(p).
Now p is U-maximal if and only if

for each q ∈ Δ, there is a z ∈ ζ(p) with q·z ≤ 0.

It is known that "if C ⊂ ℝ^m is a closed convex cone and K ⊂ ℝ^m is compact and convex, then K ∩ C° ≠ ∅ if and only if"

(∀) q ∈ C, (∃) z ∈ K : q·z ≤ 0.

So p is U-maximal if and only if ζ(p) ∩ N ≠ ∅. Thus, by a Sonnenschein's theorem, {p | ζ(p) ∩ N ≠ ∅} is nonempty and compact.
Theorem 3.2. (Neuefeind's Lemma). Let S = {p | p ∈ ℝ^m, p ≫ 0, ∑_{i=1}^m p_i = 1}. Let ζ : S ↠ ℝ^m be upper hemi-continuous with nonempty closed convex values and satisfy the strong form of Walras' law

p·z = 0 for all z ∈ ζ(p)

and the boundary condition

there is a p* ∈ S and a neighborhood V of Δ \ S in Δ such that for all p ∈ V ∩ S, p*·z > 0 for all z ∈ ζ(p).

Then the set {p | p ∈ S, 0 ∈ ζ(p)} of equilibrium prices for ζ is compact and nonempty.
Proof. Define the binary relation U on Δ by

p ∈ U(q) iff {p·z > 0 for all z ∈ ζ(q) and p, q ∈ S} or {p ∈ S, q ∈ Δ \ S}.
First show that the U-maximal elements are precisely the equilibrium prices. Suppose that p is U-maximal, that is, U(p) = ∅. Since U(q) = S for all q ∈ Δ \ S, it follows that p ∈ S. Since p ∈ S and U(p) = ∅,

for each q ∈ S, there is a z ∈ ζ(p) with q·z ≤ 0.  (*)

Now (*) implies 0 ∈ ζ(p). Suppose by way of contradiction that 0 ∉ ζ(p). Then, since {0} is compact and convex and ζ(p) is closed and convex, by the separating hyperplane theorem there is a p* ∈ ℝ^m satisfying

p*·z > 0 for all z ∈ ζ(p).

Put p_λ = λp* + (1−λ)p. Then for z ∈ ζ(p), p_λ·z = λp*·z + (1−λ)p·z = λp*·z > 0 for λ > 0. (Recall that p·z = 0 for z ∈ ζ(p) by Walras' law.) For λ > 0 small enough, p_λ ≫ 0, so that the normalized price vector q_λ = (∑_i (p_λ)_i)⁻¹ p_λ ∈ S and q_λ·z > 0 for all z ∈ ζ(p), which violates (*).
Conversely, if p is an equilibrium price, then 0 ∈ ζ(p), and since q·0 = 0 for all q, it follows that U(p) = ∅.
Next verify that U satisfies the hypotheses of Sonnenschein's theorem.
(ia) p ∉ U(p): For p ∈ S this follows from Walras' law. For p ∈ Δ \ S, p ∉ S = U(p).
(ib) U(p) is convex: For p ∈ S, let q_1, q_2 ∈ U(p), that is, q_1·z > 0, q_2·z > 0 for z ∈ ζ(p). Then [λq_1 + (1−λ)q_2]·z > 0 as well. For p ∈ Δ \ S, U(p) = S, which is convex.
(ii) If q ∈ U⁻¹(p), then there is a p′ with q ∈ int U⁻¹(p′): There are two cases: (a) q ∈ S and (b) q ∈ Δ \ S.
(iia) q ∈ S ∩ U⁻¹(p). Then p·z > 0 for all z ∈ ζ(q). Let H = {x | p·x > 0}, which is open. Then, by upper hemi-continuity, ζ⁺[H] is a neighborhood of q contained in U⁻¹(p).
(iib) q ∈ (Δ \ S) ∩ U⁻¹(p). By the boundary condition in the statement of the theorem, q ∈ int U⁻¹(p*).
Theorem 3.3. (Grandmont's Lemma). Let S = {p | p ∈ ℝ^m, p ≫ 0, ∑_{i=1}^m p_i = 1}. Let ζ : S ↠ ℝ^m be upper hemi-continuous with nonempty compact convex values and satisfy the strong form of Walras' law

p·z = 0 for all z ∈ ζ(p)

and the boundary condition

for every sequence q_n → q ∈ Δ \ S and z_n ∈ ζ(q_n), there is a p ∈ S (which may depend on {z_n}) such that p·z_n > 0 for infinitely many n.

Then ζ has an equilibrium price p, that is, 0 ∈ ζ(p).
Proof. Set K_n = co{x | x ∈ S, dist(x, Δ \ S) ≥ 1/n}. Then {K_n} is an increasing family of compact convex sets and S = ∪_n K_n. Let C_n be the cone generated by K_n. Use a Debreu's theorem to conclude that for each n there is q_n ∈ K_n such that ζ(q_n) ∩ C_n° ≠ ∅. Let z_n ∈ ζ(q_n) ∩ C_n°. Suppose that q_n → q ∈ Δ \ S. Then, by the boundary condition, there is a p ∈ S such that p·z_n > 0 infinitely often. But for large enough n, p ∈ K_n ⊂ C_n. Since z_n ∈ C_n°, it follows that p·z_n ≤ 0, a contradiction.
It follows then that no subsequence of q_n converges to a point in Δ \ S. Since Δ is compact, some subsequence must converge to some p ∈ S. Since ζ is upper hemi-continuous with compact values, by the sequential characterization of hemi-continuity there is a subsequence of z_n converging to z̄ ∈ ζ(p). This z̄ lies in ∩_n C_n° = −ℝ^m_+. This fact together with the strong form of Walras' law implies that z̄ = 0.
3.3 Existence of equilibrium for abstract economies
3.3.1 Preliminaries
Let A be a subset of a topological space X. We shall denote by 2^A the family of all subsets of A and by cl A the closure of A in X. If A is a subset of a vector space, we shall denote by co A the convex hull of A. If A is a nonempty subset of a topological vector space X and S, T : A → 2^X are multivalued mappings, then co T, cl T, T ∩ S : A → 2^X are the multivalued mappings defined by (co T)(x) = co T(x), (cl T)(x) = cl T(x) and (T ∩ S)(x) = T(x) ∩ S(x) for each x ∈ A, respectively. Let B be a nonempty subset of A. Denote the restriction of T to B by T|_B.
Let X be a nonempty subset of a topological vector space and x ∈ X. Let φ : X → 2^X be a given multivalued mapping. A multivalued mapping φ_x : X → 2^X is said to be an O-majorant of φ at x if there exists an open neighborhood N_x of x in X such that
(a) for each z ∈ N_x, φ(z) ⊂ φ_x(z),
(b) for each z ∈ N_x, z ∉ cl co φ_x(z), and
(c) φ_x|_{N_x} has open graph in N_x × X.
The multivalued mapping φ is said to be O-majorised if for each x ∈ X with φ(x) ≠ ∅, there exists an O-majorant of φ at x.
It is clear that every multivalued mapping φ having an open graph with x ∉ cl co φ(x) for each x ∈ X is an O-majorised multivalued mapping. However, the following simple multivalued mapping is O-majorised without having an open graph. The multivalued mapping φ : X = (0, 1) → 2^X is defined by

φ(x) = (0, x²] for each x ∈ X.

Then φ doesn't have open graph, but φ_x(z) = (0, z) for all z ∈ X is an O-majorant of φ at any x ∈ X.
We now state the following definition.
Definition 3.1. Let X and Y be two topological spaces. Then a multivalued mapping T : X → 2^Y is said to be upper semicontinuous (respectively, almost upper semicontinuous) if for each x ∈ X and each open set V in Y with T(x) ⊂ V, there exists an open neighborhood U of x in X such that T(y) ⊂ V (respectively, T(y) ⊂ cl V) for each y ∈ U.
Remark 3.1. An upper semicontinuous multivalued mapping is clearly almost upper semicontinuous. From the definition, if T is almost upper semicontinuous, then cl T is also almost upper semicontinuous. And it should be noted that we don't need any closedness assumption on T(x), x ∈ X, in the definitions.
The following example shows us an almost upper semicontinuous multivalued mapping which isn't upper semicontinuous.
Example 3.1. Let X = [0, ∞) and φ : X → 2^X be defined by

φ(2) = (1, 3), and φ(x) = [1, 3] if x ≠ 2.

Then φ isn't upper semicontinuous at 2, since for the open neighborhood (1, 3) of φ(2) there doesn't exist any neighborhood U of 2 such that φ(y) ⊂ (1, 3) for all y ∈ U; however, φ(y) ⊂ [1, 3] = cl (1, 3) for all y in any neighborhood of 2. Therefore φ is almost upper semicontinuous.
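Example 3.1 can be checked mechanically by encoding intervals with open/closed endpoint flags; the encoding is a hypothetical convenience, not notation from the text.

```python
# Numeric companion to Example 3.1: phi(2) = (1,3) open, phi(x) = [1,3]
# otherwise. Intervals are (lo, hi, lo_open, hi_open) tuples.

def phi(x):
    return (1.0, 3.0, True, True) if x == 2 else (1.0, 3.0, False, False)

def subset(a, b):
    """Is interval a contained in interval b?"""
    alo, ahi, alo_open, ahi_open = a
    blo, bhi, blo_open, bhi_open = b
    left = alo > blo or (alo == blo and (alo_open or not blo_open))
    right = ahi < bhi or (ahi == bhi and (ahi_open or not bhi_open))
    return left and right

V = (1.0, 3.0, True, True)        # open neighborhood of phi(2)
clV = (1.0, 3.0, False, False)    # its closure [1, 3]

# Upper semicontinuity fails: points y != 2 arbitrarily close to 2
# have phi(y) = [1,3], which is NOT inside V = (1,3) ...
assert not subset(phi(2.001), V)
# ... but phi(y) IS inside cl V = [1,3]: almost usc holds at 2.
assert subset(phi(2.001), clV) and subset(phi(2), clV)
print("phi is almost usc at 2 but not usc")
```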
Now we give the following general definitions of equilibrium theory in mathematical economics. Let I be a finite set of agents. For each i ∈ I, let X_i be a nonempty set of actions.
Definition 3.2. An abstract economy (or generalized game) Γ = (X_i, A_i, B_i, P_i)_{i∈I} is defined as a family of ordered quadruples (X_i, A_i, B_i, P_i) where X_i is a nonempty topological vector space (a choice set), A_i, B_i : ∏_{j∈I} X_j → 2^{X_i} are constraint multivalued mappings and P_i : ∏_{j∈I} X_j → 2^{X_i} is a preference multivalued mapping. An equilibrium for Γ (Shafer-Sonnenschein type) is a point x̂ ∈ X = ∏_{i∈I} X_i such that for each i ∈ I, x̂_i ∈ cl B_i(x̂) and P_i(x̂) ∩ A_i(x̂) = ∅.
Remark 3.2. When A_i = B_i for each i ∈ I, our definitions of an abstract economy and an equilibrium coincide with the standard definitions of Shafer-Sonnenschein.
For each i ∈ I, P′_i : X → 2^X will denote the multivalued mapping defined by P′_i(x) = {y | y ∈ X, y_i ∈ P_i(x)} (= π_i⁻¹(P_i(x)), where π_i : X → X_i is the i-th projection).
And we shall use the following notation: X^i = ∏_{j∈I, j≠i} X_j, and let π_i : X → X_i, π^i : X → X^i be the projections of X onto X_i and X^i, respectively. For any x ∈ X, we simply denote π^i(x) ∈ X^i by x^i and write x = (x^i, x_i).
In [28] Greenberg introduced a further generalized concept of equilibrium as follows: under the same settings as above, let w = {φ_i}_{i∈I} be a family of functions φ_i : X → ℝ_+, i ∈ I.
Definition 3.3. A w-quasi-equilibrium for Γ is a point x̂ ∈ X such that for all i ∈ I,
(1) x̂_i ∈ cl A_i(x̂),
(2) P_i(x̂) ∩ A_i(x̂) = ∅ and/or φ_i(x̂) = 0.
Remark 3.3. Quasi-equilibria can be of special interest for economies with a tax authority, to which the result of Shafer-Sonnenschein cannot be applied.
Next we give another definition of equilibrium for an abstract economy given by utility functions. Following Debreu, an abstract economy Γ = (X_i, A_i, F_i)_{i∈I} is defined as a family of ordered triples (X_i, A_i, F_i) where X_i is a nonempty topological vector space (a choice set), A_i : ∏_{j∈I} X_j = X → 2^{X_i} is a constraint multivalued mapping and F_i : ∏_{j∈I} X_j → ℝ is a utility function (payoff function).
Definition 3.4. An equilibrium for Γ (Nash type) is a point x̂ ∈ X such that for each i ∈ I, x̂_i ∈ cl A_i(x̂) and

F_i(x̂) = F_i(x̂^i, x̂_i) = inf{F_i(x̂_1, ..., x̂_{i−1}, z, x̂_{i+1}, ...) | z ∈ cl A_i(x̂)}.
Remark 3.4. It should be noted that if A_i(x) = X_i for all x ∈ X, then the concept of an equilibrium for Γ coincides with the well-known Nash equilibrium. The two types of equilibrium points coincide when the preference multivalued mapping P_i is defined by

P_i(x) = {z_i ∈ X_i | F_i(x^i, z_i) < F_i(x)} for each x ∈ X.
3.3.2 A generalization of Himmelberg’s …xed point theorem
We begin with the following lemma.
Lemma 3.1. Let X be a nonempty subset of a topological space and D be a nonempty compact subset of X. Let T : X → 2^D be an almost upper semicontinuous multivalued mapping such that for each x ∈ X, T(x) is closed. Then T is upper semicontinuous.
Proof. For any x ∈ X, let U be an open neighborhood of T(x) in D. Since T(x) is closed in D, there exists an open neighborhood V of T(x) such that

T(x) ⊂ V ⊂ cl V ⊂ U.

Since T is almost upper semicontinuous at x, for such an open neighborhood V of T(x) we can find an open neighborhood W of x such that T(y) ⊂ cl V ⊂ U for all y ∈ W. Therefore T is upper semicontinuous at x.
Remark 3.5. For an upper semicontinuous multivalued mapping T : X → 2^Y, co T and cl co T aren't necessarily upper semicontinuous in general, even if X = Y is compact convex in a locally convex Hausdorff topological vector space.
However, almost upper semicontinuity is preserved, as follows:
Lemma 3.2. Let X be a convex subset of a locally convex Hausdorff topological vector space E and D be a nonempty compact subset of X. Let T : X → 2^D be an almost upper semicontinuous multivalued mapping such that for each x ∈ X, co T(x) ⊂ D. Then cl co T is almost upper semicontinuous.
Proof. For any x ∈ X, let U be an open set containing cl co T(x). Since cl co T(x) is closed in D, we can find an open convex neighborhood N of 0 such that

cl co T(x) + N ⊂ cl(cl co T(x) + N) = cl co T(x) + cl N ⊂ U.

Clearly V = cl co T(x) + N is an open convex set containing cl co T(x) and V ⊂ U. Since T is almost upper semicontinuous, there exists an open neighborhood W of x in X such that T(y) ⊂ cl V for all y ∈ W. Since cl V is convex, cl co T(y) ⊂ cl V ⊂ cl U for all y ∈ W. Therefore cl co T is almost upper semicontinuous.
Remark 3.6. In Lemma 3.2 we don't know whether the multivalued mapping co T is almost upper semicontinuous, even when T is upper semicontinuous.
We now prove the following generalization of Himmelberg's fixed point theorem.
Theorem 3.4. Let X be a convex subset of a locally convex Hausdorff topological vector space E and D be a nonempty compact subset of X. Let S, T : X → 2^D be almost upper semicontinuous multivalued mappings such that
(1) for each x ∈ X, ∅ ≠ co S(x) ⊂ T(x),
(2) for each x ∈ X, T(x) is closed.
Then there exists a point x̂ ∈ D such that x̂ ∈ T(x̂).
Proof. For each x ∈ X, since co S(x) ⊂ T(x) and T(x) is closed, we have cl co S(x) ⊂ T(x). By Lemma 3.2, the multivalued mapping cl co S : X → 2^D is also almost upper semicontinuous, so that by Lemma 3.1, cl co S is upper semicontinuous with closed convex values in D. Therefore, by Himmelberg's fixed point theorem, there exists a point x̂ ∈ D such that x̂ ∈ cl co S(x̂) ⊂ T(x̂), which completes the proof.
Corollary 3.1. Let X be a convex subset of a locally convex Hausdorff topological vector space E and D be a nonempty compact subset of X. Let S : X → 2^D be an almost upper semicontinuous multivalued mapping such that for each x ∈ X, co S(x) is a nonempty subset of D. Then there exists a point x̂ ∈ D such that x̂ ∈ cl co S(x̂).
Proof. We define a multivalued mapping T : X → 2^D by T(x) = cl co S(x) for all x ∈ X. Then, by Lemma 3.2, T is almost upper semicontinuous. Clearly the pair (S, T) satisfies all conditions of Theorem 3.4, so that there exists a point x̂ ∈ D such that x̂ ∈ T(x̂).
When S = T in Theorem 3.4, we obtain Himmelberg's fixed point theorem as a corollary:
Corollary 3.2. Let X be a convex subset of a locally convex Hausdorff topological vector space and D be a nonempty compact subset of X. Let T : X → 2^D be an upper semicontinuous multivalued mapping such that for each x ∈ X, T(x) is a nonempty closed convex subset of D. Then there exists a point x̂ ∈ D such that x̂ ∈ T(x̂).
3.3.3 Existence of equilibria in abstract economies
In this section we consider both kinds of economy described in the preliminaries (that is, an abstract economy given by preference multivalued mappings (Shafer-Sonnenschein type) in a compact setting, and an abstract economy given by utility functions (Nash type) in non-compact settings) and prove the existence of equilibrium points or quasi-equilibrium points in either case by using the fixed point theorems of the previous section.
First, using O-majorised multivalued mappings, we shall prove an equilibrium existence theorem for a compact abstract economy which generalizes the powerful result of Shafer-Sonnenschein. For simplicity, we may assume that A_i = B_i for each i ∈ I in the abstract economy.
Theorem 3.5. Let Γ = (X_i, A_i, P_i)_{i∈I} be an abstract economy, where I is a countable set, such that for each i ∈ I,
(1) X_i is a nonempty compact convex subset of a metrisable locally convex Hausdorff topological vector space,
(2) for each x ∈ X = ∏_{i∈I} X_i, A_i(x) is nonempty convex,
(3) the multivalued mapping cl A_i : X → 2^{X_i} is continuous,
(4) the multivalued mapping P_i is O-majorised.
Then Γ has an equilibrium choice x̂ ∈ X, that is, for each i ∈ I, x̂_i ∈ cl A_i(x̂) and A_i(x̂) ∩ P_i(x̂) = ∅.
Proof. Let i ∈ I be fixed. Since P_i is O-majorised, for each x ∈ X there exist a multivalued mapping φ_x : X → 2^{X_i} and an open neighborhood U_x of x in X such that P_i(z) ⊂ φ_x(z) and z_i ∉ cl co φ_x(z) for each z ∈ U_x, and φ_x|_{U_x} has an open graph in U_x × X_i. By compactness of X, the open cover {U_x | x ∈ X} of X contains a finite subcover {U_{x_j} | j ∈ J}, where J = {1, 2, ..., n}. For each j ∈ J we now define φ_j : X → 2^{X_i} by

φ_j(z) = φ_{x_j}(z) if z ∈ U_{x_j}, and φ_j(z) = X_i if z ∉ U_{x_j},  (44)

and next we define Φ_i : X → 2^{X_i} by

Φ_i(z) = ∩_{j∈J} φ_j(z) for each z ∈ X.

For each z ∈ X there exists k ∈ J such that z ∈ U_{x_k}, so that z_i ∉ cl co φ_{x_k}(z) = cl co φ_k(z); thus z_i ∉ cl co Φ_i(z). We now show that the graph of Φ_i is open in X × X_i. For each (z, x) in the graph of Φ_i, since X = ∪_{j∈J} U_{x_j}, there exists {i_1, ..., i_k} ⊂ J such that z ∈ U_{x_{i_1}} ∩ ... ∩ U_{x_{i_k}}. Then we can find an open neighborhood U of z in X such that U ⊂ U_{x_{i_1}} ∩ ... ∩ U_{x_{i_k}}. Since φ_{x_{i_1}}(z) ∩ ... ∩ φ_{x_{i_k}}(z) is an open subset of X_i containing x, there exists an open neighborhood V of x in X_i such that x ∈ V ⊂ φ_{x_{i_1}}(z) ∩ ... ∩ φ_{x_{i_k}}(z). Therefore we have an open neighborhood U × V of (z, x) such that U × V is contained in the graph of Φ_i, so that the graph of Φ_i is open in X × X_i. And it is clear that P_i(z) ⊂ Φ_i(z) for each z ∈ X.
Next, since X × X_i is compact and metrisable, it is perfectly normal. Since the graph of Φ_i is open in X × X_i, by a result of Dugundji there exists a continuous function g_i : X × X_i → [0, 1] such that g_i(x, y) = 0 for all (x, y) not in the graph of Φ_i and g_i(x, y) ≠ 0 for all (x, y) in the graph of Φ_i. For each i ∈ I we define a multivalued mapping F_i : X → 2^{X_i} by

F_i(x) = {y | y ∈ cl A_i(x), g_i(x, y) = max_{z∈cl A_i(x)} g_i(x, z)}.

Then, by a result of Aubin and Ekeland, F_i is upper semicontinuous and, for each x ∈ X, F_i(x) is nonempty closed. Then the multivalued mapping G : X → 2^X defined by G(x) = ∏_{i∈I} F_i(x) is also upper semicontinuous by a result of Fan, and G(x) is a nonempty compact subset of X for each x ∈ X. Therefore, by Corollary 3.1, there exists a point x̂ ∈ X such that x̂ ∈ cl co G(x̂); that is, x̂ ∈ cl co G(x̂) ⊂ ∏_{i∈I} cl co F_i(x̂). Since F_i(x̂) ⊂ cl A_i(x̂) and A_i(x̂) is convex, cl co F_i(x̂) ⊂ cl A_i(x̂). Therefore x̂_i ∈ cl A_i(x̂) for each i ∈ I. It remains to show that A_i(x̂) ∩ P_i(x̂) = ∅. If z_i ∈ A_i(x̂) ∩ P_i(x̂) ≠ ∅, then g_i(x̂, z_i) > 0, so that g_i(x̂, z′_i) > 0 for all z′_i ∈ F_i(x̂). This implies that F_i(x̂) ⊂ Φ_i(x̂), which implies x̂_i ∈ cl co F_i(x̂) ⊂ cl co Φ_i(x̂): this is a contradiction. So the theorem is proved.
Remark 3.7. In a finite-dimensional space, for a compact set A, co A is compact and convex. Therefore, when X_i is a subset of ℝⁿ, we can relax assumption (b) of the definition of an O-majorant as follows, without affecting the conclusion of Theorem 3.5:
(b′) for each z ∈ N_x, z ∉ co φ_x(z).
And in this case Theorem 3.5 generalizes a Shafer-Sonnenschein theorem in two respects, that is, (i) P_i need not have open graph and (ii) the index set I may not be finite.
Using the concept of w-quasi-equilibrium described in the preliminaries, we further generalize Theorem 3.5 as follows:
Theorem 3.6. Let Γ = (X_i, A_i, P_i)_{i∈I} be an abstract economy, where I is a countable set, such that for each i ∈ I,
(1) X_i is a nonempty compact convex subset of a metrisable locally convex Hausdorff topological vector space,
(2) φ_i : X = ∏_{i∈I} X_i → ℝ_+ is a nonnegative real-valued lower semicontinuous function,
(3) for each x ∈ X, A_i(x) is nonempty convex,
(4) the multivalued mapping cl A_i : X → 2^{X_i} is continuous at all x with φ_i(x) > 0 and is almost upper semicontinuous at all x with φ_i(x) = 0,
(5) the multivalued mapping P_i is O-majorised.
Then Γ has a w-quasi-equilibrium choice x̂ ∈ X, that is, for each i ∈ I,
(a) x̂_i ∈ cl A_i(x̂),
(b) A_i(x̂) ∩ P_i(x̂) = ∅ and/or φ_i(x̂) = 0.
Proof. We can repeat the proof of Theorem 3.5. In the proof of Theorem 3.5, for each i ∈ I we replace the multivalued mapping F_i by a new multivalued mapping F*_i : X → 2^{X_i} defined by

F*_i(x) = {y | y ∈ cl A_i(x), g_i(x, y)·φ_i(x) = max_{z∈cl A_i(x)} g_i(x, z)·φ_i(x)} for each x ∈ X.

Since {x | x ∈ X, φ_i(x) > 0} is open, F*_i is also almost upper semicontinuous. In fact, let V be any open set containing F*_i(x). If φ_i(x) = 0, then F*_i(x) = cl A_i(x) ⊂ V; since cl A_i is almost upper semicontinuous, there exists an open neighborhood W of x such that F*_i(y) ⊂ cl A_i(y) ⊂ cl V for all y ∈ W. If φ_i(x) > 0, then by a result of Aubin and Ekeland, F*_i(x) = F_i(x) is upper semicontinuous at x, so that there exists an open neighborhood W of x such that F_i(y) ⊂ V for each y ∈ W; then W′ = W ∩ {z | z ∈ X, φ_i(z) > 0} is an open neighborhood of x such that F*_i(y) ⊂ V for each y ∈ W′. Therefore F*_i is almost upper semicontinuous.
Then G = ∏_{i∈I} F*_i : X → 2^X is also almost upper semicontinuous by a result of Fan, and G(x) is a nonempty compact subset of X for each x ∈ X. Therefore, by the same proof as in Theorem 3.5, there exists a point x̂ ∈ X such that x̂_i ∈ cl A_i(x̂) for each i ∈ I. Finally, if φ_i(x̂) = 0, then conclusion (b) holds. In case φ_i(x̂) > 0, if z_i ∈ A_i(x̂) ∩ P_i(x̂) ≠ ∅, then g_i(x̂, z_i) > 0, so that g_i(x̂, z′_i) > 0 for all z′_i ∈ F*_i(x̂). This implies that F*_i(x̂) ⊂ Φ_i(x̂), which implies x̂_i ∈ cl co F*_i(x̂) ⊂ cl co Φ_i(x̂); this is a contradiction. Therefore we have A_i(x̂) ∩ P_i(x̂) = ∅.
In most results on the existence of equilibria for abstract economies, the underlying spaces (commodity spaces or choice sets) are compact and convex. However, in recent papers the underlying spaces aren't always compact, and it should be noted that we encounter many kinds of multivalued mappings in various economic situations; so it is important to consider several types of multivalued mappings and to obtain existence results in non-compact settings. We now prove a quasi-equilibrium existence theorem for a Nash-type non-compact abstract economy.
Theorem 3.7. Let I be any (possibly uncountable) index set and, for each i ∈ I, let X_i be a convex subset of a locally convex Hausdorff topological vector space E_i and D_i be a nonempty compact subset of X_i. For each i ∈ I, let F_i : X = ∏_{i∈I} X_i → ℝ be a continuous function and φ_i : X → ℝ_+ be a nonnegative real-valued lower semicontinuous function. For each i ∈ I, let S_i : X → 2^{D_i} be a multivalued mapping, continuous at all x ∈ X with φ_i(x) > 0 and almost upper semicontinuous at all x ∈ X with φ_i(x) = 0, such that
(1) S_i(x) is a nonempty closed convex subset of D_i,
(2) x_i ↦ F_i(x^i, x_i) is quasi-convex on S_i(x).
Then there exists an equilibrium point x̂ ∈ D = ∏_{i∈I} D_i such that for each i ∈ I,
(a) x̂_i ∈ S_i(x̂),
(b) F_i(x̂^i, x̂_i) = inf_{z∈S_i(x̂)} F_i(x̂^i, z) and/or φ_i(x̂) = 0.
Proof. For each i ∈ I we now define a multivalued mapping W_i : X → 2^{X_i} by

W_i(x) = {y | y ∈ S_i(x), F_i(x^i, y)·φ_i(x) = inf_{z∈S_i(x)} F_i(x^i, z)·φ_i(x)}.

Since {x | x ∈ X, φ_i(x) > 0} is open, for each x ∈ X with φ_i(x) > 0, W_i is upper semicontinuous at x by a result of Aubin and Ekeland and the same argument as in the proof of Theorem 3.6; and for each x ∈ X with φ_i(x) = 0, W_i(x) = S_i(x), so that W_i is also upper semicontinuous at x. Therefore, for each x ∈ X, W_i is upper semicontinuous at x and W_i(x) is nonempty compact and convex.
Now we define W : X → 2^D by

W(x) = ∏_{i∈I} W_i(x) for each x ∈ X.

Then, by a result of Fan, W is also upper semicontinuous, and W(x) is a nonempty compact convex subset of D for each x ∈ X. Therefore, by Corollary 3.2, there exists a point x̂ ∈ D such that x̂ ∈ W(x̂), that is, for each i ∈ I we have
(a) x̂_i ∈ W_i(x̂) ⊂ S_i(x̂) and
(b) F_i(x̂^i, x̂_i) = inf_{z∈S_i(x̂)} F_i(x̂^i, z) and/or φ_i(x̂) = 0.
3.3.4 Nash equilibrium of games and abstract economies
Each strategy vector determines an outcome (which may be a lottery in some models). Players have preferences over outcomes, and these induce preferences over strategy vectors. For convenience, we will work with preferences over strategy vectors. There are two ways we might do this. The first is to describe player i's preferences by a binary relation P_i defined on X. Then P_i(x) is the set of all strategy vectors preferred to x. Since player i only has control over the i-th component of x, we will find it more useful to describe player i's preferences in terms of the good reply set. Given a strategy vector x ∈ X and a strategy y_i ∈ X_i, let x|y_i denote the strategy vector obtained from x when player i chooses y_i and the other players keep their choices fixed. Let us say that y_i is a good reply for player i to the strategy vector x if x|y_i ∈ P_i(x). This defines a multivalued mapping U_i : X ↠ X_i, called the good reply multivalued mapping, by U_i(x) = {y_i | y_i ∈ X_i, x|y_i ∈ P_i(x)}. It will be convenient to describe preferences in terms of the good reply multivalued mapping U_i rather than the preference relation P_i. Note, however, that we lose some information by doing this: given a good reply multivalued mapping U_i, it will not generally be possible to reconstruct the preference relation P_i unless we know that P_i is transitive, and we will not make this assumption. Thus a game in strategic form is a tuple (I, (X_i), (U_i)), where each U_i : ∏_{j∈I} X_j ↠ X_i.
A shortcoming of this model of a game is that frequently there are situations in which the choices of players cannot be made independently. A simplified example is the pumping of oil out of a common oil field by several producers. Each producer chooses an amount x_i to pump out and sell. The price depends on the total amount sold. Thus each producer has partial control of the price and hence of their profits. But the x_i cannot be chosen independently, because their sum cannot exceed the total amount of oil in the ground. To take such possibilities into account we introduce a multivalued mapping F_i : X ↠ X_i which tells which strategies are actually feasible for player i, given the strategy vector of the others. (We have written F_i as a function of the strategies of all the players, including i, as a technical convenience. In modelling most situations, F_i will be independent of player i's choice.) The jointly feasible strategy vectors are thus the fixed points of the multivalued mapping F = ∏_{i∈I} F_i : X ↠ X. A game with the added feasibility or constraint multivalued mapping is called a generalized game or abstract economy. It is specified by a tuple (I, (X_i), (F_i), (U_i)), where F_i : X ↠ X_i and U_i : X ↠ X_i.
A Nash equilibrium of a strategic form game or abstract economy is a strategy vector x for which no player has a good reply. For a game, an equilibrium is an x ∈ X such that U_i(x) = ∅ for each i. For an abstract economy, an equilibrium is an x ∈ X such that x ∈ F(x) and U_i(x) ∩ F_i(x) = ∅ for each i.
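For a finite game the good-reply characterization of Nash equilibrium can be checked by enumeration: x is an equilibrium exactly when every U_i(x) is empty. The payoffs below are a made-up two-player coordination game, not one taken from the text.

```python
# Good-reply check in a finite two-player game: a strategy vector x is
# a Nash equilibrium iff every player's good reply set U_i(x) is empty.

from itertools import product

# payoff[i][x] = payoff to player i at strategy vector x
payoff = [
    {(0, 0): 2, (0, 1): 0, (1, 0): 0, (1, 1): 1},   # player 0
    {(0, 0): 2, (0, 1): 0, (1, 0): 0, (1, 1): 1},   # player 1
]
strategies = [(0, 1), (0, 1)]

def good_replies(i, x):
    """U_i(x): strategies y_i giving player i a strictly better payoff."""
    deviate = lambda y_i: tuple(y_i if j == i else x[j] for j in range(2))
    return [y_i for y_i in strategies[i]
            if payoff[i][deviate(y_i)] > payoff[i][x]]

equilibria = [x for x in product(*strategies)
              if all(not good_replies(i, x) for i in range(2))]
print(equilibria)   # both coordination outcomes: [(0, 0), (1, 1)]
```

Adding constraint mappings F_i would simply restrict both the enumeration and the deviations to feasible strategies, matching the abstract economy definition.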
Nash proves the existence of equilibria for games where the players' preferences are representable by continuous quasi-concave utilities and the strategy sets are simplexes. Debreu proves the existence of equilibrium for abstract economies. He assumes that the strategy sets are contractible polyhedra, that the feasibility multivalued mappings have closed graph, that the maximized utility is continuous, and that the set of utility maximizers over each constraint set is contractible. These assumptions are joint assumptions on utility and feasibility; the simplest way to make separate assumptions is to assume that strategy sets are compact and convex, that utilities are continuous and quasi-concave, and that the constraint multivalued mappings are continuous with compact convex values. Then the maximum theorem guarantees the continuity of the maximized utility, while the convexity of the feasible sets and quasi-concavity imply the convexity (and hence contractibility) of the set of maximizers. Arrow and Debreu used Debreu's result to prove the existence of Walrasian equilibrium of an economy and coined the term abstract economy.
Gale and Mas-Colell prove a lemma which allows them to prove the existence of equilibrium for a game without ordered preferences. They assume that the strategy sets are compact convex sets and that the good reply multivalued mappings are convex valued and have open graph. Shafer and Sonnenschein prove the existence of equilibria for abstract economies without ordered preferences. They assume that the good reply multivalued mappings have open graph and satisfy the convexity/irreflexivity condition x_i ∉ co U_i(x). They also assume that the feasibility multivalued mappings are continuous with compact convex values. This result doesn't strictly generalize Debreu's result, since convexity rather than contractibility assumptions are made.
Theorem 3.8. (Gale, Mas-Colell). Let X = ∏_{i∈I} X_i, X_i being a nonempty, compact, convex subset of ℝ^{k_i}, and let U_i : X ↠ X_i be a multivalued mapping satisfying
(i) U_i(x) is convex for all x ∈ X,
(ii) U_i⁻({x_i}) is open in X for all x_i ∈ X_i.
Then there exists x ∈ X such that for each i, either x_i ∈ U_i(x) or U_i(x) = ∅.
Proof. Let W_i = {x | U_i(x) ≠ ∅}. Then W_i is open by (ii), and U_i|_{W_i} : W_i ↠ X_i satisfies the hypotheses of the selection theorem, so there is a continuous function f_i : W_i → X_i with f_i(x) ∈ U_i(x). Define the multivalued mapping γ_i : X ↠ X_i by

γ_i(x) = {f_i(x)} if x ∈ W_i, and γ_i(x) = X_i if x ∉ W_i.  (45)

Then γ_i is upper hemi-continuous with nonempty compact and convex values, and thus so is γ = ∏_{i∈I} γ_i : X ↠ X. Thus, by the Kakutani theorem, γ has a fixed point x̄. If γ_i(x̄) ≠ X_i, then x̄_i ∈ γ_i(x̄) implies x̄_i = f_i(x̄) ∈ U_i(x̄). If γ_i(x̄) = X_i, then it must be that U_i(x̄) = ∅. (Unless, of course, X_i is a singleton, in which case {x̄_i} = γ_i(x̄).)
Remark 3.8. The previous theorem possesses a trivial extension. Each $U_i$ is assumed to satisfy (i) and (ii) so that the selection theorem may be employed. If some $U_i$ is already a singleton-valued multivalued mapping, then the selection problem is trivial. Thus we may allow some of the $U_i$'s to be continuous singleton-valued multivalued mappings instead, and the conclusion follows. The next corollary is derived from Theorem 3.8 by assuming each $x_i \notin U_i(x)$; it concludes that there exists some $x$ such that $U_i(x) = \varnothing$ for each $i$. Assuming that $U_i(x)$ is never empty yields a result equivalent to a result of Fan.
Corollary 3.3. For each $i$, let $U_i : X \rightrightarrows X_i$ have open graph and satisfy $x_i \notin \operatorname{co} U_i(x)$ for each $x$. Then there exists $x \in X$ with $U_i(x) = \varnothing$ for all $i$.
Proof. Because $X_i$ is a convex set, the multivalued mappings $\operatorname{co} U_i$ satisfy the hypotheses of Theorem 3.8, so there is $x \in X$ such that for each $i$, $x_i \in \operatorname{co} U_i(x)$ or $\operatorname{co} U_i(x) = \varnothing$. Since $x_i \notin \operatorname{co} U_i(x)$ by hypothesis, we have $\operatorname{co} U_i(x) = \varnothing$, so $U_i(x) = \varnothing$.
Theorem 3.9 (Shafer-Sonnenschein). Let $(N, (X_i), (F_i), (U_i))$ be an abstract economy such that for each $i$,
(i) $X_i \subset \mathbb{R}^{k_i}$ is nonempty, compact and convex,
(ii) $F_i$ is a continuous multivalued mapping with nonempty compact convex values,
(iii) $\operatorname{Gr} U_i$ is open in $X \times X_i$,
(iv) $x_i \notin \operatorname{co} U_i(x)$ for all $x \in X$.
Then there is an equilibrium.
Proof. Define $v_i : X \times X_i \to \mathbb{R}_+$ by $v_i(x, y_i) = \operatorname{dist}[(x, y_i), (\operatorname{Gr} U_i)^c]$. Then $v_i(x, y_i) > 0$ if and only if $y_i \in U_i(x)$, and $v_i$ is continuous since $\operatorname{Gr} U_i$ is open. Define $H_i : X \rightrightarrows X_i$ by

$$H_i(x) = \{y_i \in X_i \mid y_i \text{ maximizes } v_i(x, \cdot) \text{ on } F_i(x)\}.$$

Then $H_i$ has nonempty compact values and is upper hemi-continuous and hence closed. (To see that $H_i$ is upper hemi-continuous, apply the maximum theorem to the multivalued mapping $(x, y_i) \rightrightarrows \{x\} \times F_i(x)$ and the function $v_i$.) Define $G : X \rightrightarrows X$ by $G(x) = \prod_{i \in I} \operatorname{co} H_i(x)$. Then by well known results, $G$ is upper hemi-continuous with compact convex values and so satisfies the hypotheses of the Kakutani fixed point theorem, so there is $\bar{x} \in X$ with $\bar{x} \in G(\bar{x})$. Since $H_i(\bar{x}) \subset F_i(\bar{x})$, which is convex, $\bar{x}_i \in G_i(\bar{x}) = \operatorname{co} H_i(\bar{x}) \subset F_i(\bar{x})$. We now show $U_i(\bar{x}) \cap F_i(\bar{x}) = \varnothing$. Suppose not, that is, there is $z_i \in U_i(\bar{x}) \cap F_i(\bar{x})$. Then since $z_i \in U_i(\bar{x})$ we have $v_i(\bar{x}, z_i) > 0$, and since $H_i(\bar{x})$ consists of the maximizers of $v_i(\bar{x}, \cdot)$ on $F_i(\bar{x})$, we have that $v_i(\bar{x}, y_i) > 0$ for all $y_i \in H_i(\bar{x})$. This says that $y_i \in U_i(\bar{x})$ for all $y_i \in H_i(\bar{x})$. Thus $H_i(\bar{x}) \subset U_i(\bar{x})$, so $\bar{x}_i \in G_i(\bar{x}) = \operatorname{co} H_i(\bar{x}) \subset \operatorname{co} U_i(\bar{x})$, which contradicts (iv). Thus $U_i(\bar{x}) \cap F_i(\bar{x}) = \varnothing$. $\square$
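The distance trick in the proof — replacing the open-graph preference $U_i$ by the continuous function $v_i(x, y_i) = \operatorname{dist}[(x, y_i), (\operatorname{Gr} U_i)^c]$ and maximizing it over the feasible set — can be checked numerically. A minimal one-player sketch in Python (the preference $U(x) = \{y \in [0,1] : y > x\}$ and feasible set $F(x) = [0,1]$ are illustrative choices, not from the text):

```python
import numpy as np

# Illustrative preference on [0,1]: U(x) = {y : y > x}, whose graph
# {(x, y) : y > x} is open in [0,1] x [0,1].
# Distance from (x, y) to the complement {(x, y) : y <= x} is
# max(0, y - x) / sqrt(2): the nearest point lies on the diagonal y = x.
def v(x, y):
    return max(0.0, y - x) / np.sqrt(2.0)

def H(x, grid):
    """Maximizers of v(x, .) over a grid approximating F(x) = [0, 1]."""
    vals = np.array([v(x, y) for y in grid])
    return grid[vals == vals.max()]

grid = np.linspace(0.0, 1.0, 1001)

# v is positive exactly where y lies in U(x), and continuous everywhere.
assert v(0.3, 0.7) > 0 and v(0.7, 0.3) == 0.0

# For x < 1 the maximizer over [0,1] is y = 1, which indeed lies in U(x),
# mirroring the step H_i(x) subset of U_i(x) whenever U_i(x) meets F_i(x).
assert H(0.3, grid)[0] == 1.0
```

The point of the sketch is only that maximizing the continuous proxy $v$ lands in the (discontinuous, open-valued) preference set whenever that set meets the feasible set.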
Remark 3.9. The multivalued mappings $H_i$ used in the proof of the previous theorem are not natural constructions, which is the cleverness of Shafer and Sonnenschein's proof. The natural approach would be to use the best reply multivalued mappings, $x \rightrightarrows \{x_i \mid U_i(x \mid x_i) \cap F_i(x) = \varnothing\}$. These multivalued mappings are compact-valued and upper hemi-continuous. They may fail to be convex-valued, however. Mas-Colell gives an example for which the best reply multivalued mapping has no connected-valued submultivalued mapping. Taking the convex hull of the best reply multivalued mapping does not help, since a fixed point of the convex hull multivalued mapping may fail to be an equilibrium.
Another natural approach would be to use the good reply multivalued mapping $x \rightrightarrows \operatorname{co} U_i(x) \cap F_i(x)$. This multivalued mapping, while convex-valued, is not closed-valued, and so the Kakutani theorem does not apply. What Shafer and Sonnenschein do is choose a multivalued mapping that is a submultivalued mapping of the good reply set when it is nonempty and equal to the whole feasible strategy set otherwise. Under stronger assumptions on the $F_i$ multivalued mappings this approach can be made to work without taking a proper subset of the good reply set. The additional assumptions on $F_i$ are the following. First, $F_i(x)$ is assumed to be topologically regular for each $x$, that is, $F_i(x) = \operatorname{cl}[\operatorname{int} F_i(x)]$. Second, the multivalued mapping $x \rightrightarrows \operatorname{int} F_i(x)$ is assumed to have open graph. The requirement of open graph is stronger than lower hemi-continuity. These assumptions were used by Borglin and Keiding, who reduced the multi-player abstract economy to a 1-person game. The proof below adds an additional player to the abstract economy by introducing an "abstract auctioneer", and incorporates the feasibility constraints into the preferences, which converts it into a game. Both the topological regularity and open graph assumptions are satisfied by budget multivalued mappings, provided income is always greater than the minimum consumption expenditure on the consumption set. The proof is closely related to the arguments used by Gale and Mas-Colell to reduce an economy to a noncooperative game.
Theorem 3.10 (A special case of the Shafer-Sonnenschein theorem). Let $(N, (X_i), (F_i), (U_i))$ be an abstract economy such that for each $i$ we have
(i) $X_i \subset \mathbb{R}^{k_i}$ is nonempty, compact and convex,
(ii) $F_i$ is an upper hemi-continuous multivalued mapping with nonempty compact convex values satisfying, for all $x$, $F_i(x) = \operatorname{cl}[\operatorname{int} F_i(x)]$, and $x \rightrightarrows \operatorname{int} F_i(x)$ has open graph,
(iii) $\operatorname{Gr} U_i$ is open in $X \times X_i$,
(iv) for all $x$, $x_i \notin \operatorname{co} U_i(x)$.
Then there is an equilibrium, that is, an $x^* \in X$ such that for each $i$,

$$x_i^* \in F_i(x^*) \quad \text{and} \quad U_i(x^*) \cap F_i(x^*) = \varnothing.$$
Proof. We define a game as follows. Put $Z_0 = \prod_{i \in I} X_i$. For $i \in I$ put $Z_i = X_i$, and set $Z = Z_0 \times \prod_{i \in I} Z_i$. A typical element of $Z$ will be denoted $(x, y)$, where $x \in Z_0$ and $y \in \prod_{i \in I} Z_i$. Define preference multivalued mappings $\rho_i : Z \rightrightarrows Z_i$ as follows. Define $\rho_0$ by $\rho_0(x, y) = \{y\}$, and for $i \in I$ set

$$\rho_i(x, y) = \begin{cases} \operatorname{int} F_i(x), & \text{if } y_i \notin F_i(x) \\ \operatorname{co} U_i(y) \cap \operatorname{int} F_i(x), & \text{if } y_i \in F_i(x). \end{cases} \qquad (46)$$

Note that $\rho_0$ is continuous and never empty-valued and that for $i \in I$ the multivalued mapping $\rho_i$ is convex-valued and satisfies $y_i \notin \rho_i(x, y)$. Also for $i \in I$, the graph of $\rho_i$ is open. To see this set

$$A_i = \{(x, y, z_i) \mid z_i \in \operatorname{int} F_i(x)\}, \quad B_i = \{(x, y, z_i) \mid y_i \notin F_i(x)\},$$
$$C_i = \{(x, y, z_i) \mid z_i \in \operatorname{co} U_i(y)\},$$

and note that

$$\operatorname{Gr} \rho_i = (A_i \cap B_i) \cup (A_i \cap C_i).$$

The set $A_i$ is open because $\operatorname{int} F_i$ has open graph, and $C_i$ is open by hypothesis (iii). The set $B_i$ is also open. If $y_i \notin F_i(x)$, then there is a closed neighborhood $V$ of $y_i$ such that $F_i(x) \subset V^c$, and upper hemi-continuity of $F_i$ then gives the desired result.
Thus the hypothesis of Remark 3.8 is satisfied and so there exists $(x^*, y^*) \in Z$ such that

$$x^* \in \rho_0(x^*, y^*), \qquad (*)$$

and for $i \in I$

$$\rho_i(x^*, y^*) = \varnothing. \qquad (**)$$

Now (*) implies $x^* = y^*$; and since $\operatorname{int} F_i(x)$ is never empty (by topological regularity and nonemptiness of $F_i(x)$), (**) forces $y_i^* \in F_i(x^*)$, so that (**) becomes

$$\operatorname{co} U_i(x^*) \cap \operatorname{int} F_i(x^*) = \varnothing \quad \text{for } i \in I.$$

Thus $U_i(x^*) \cap \operatorname{int} F_i(x^*) = \varnothing$. But $F_i(x^*) = \operatorname{cl}[\operatorname{int} F_i(x^*)]$ and $U_i(x^*)$ is open, so $U_i(x^*) \cap F_i(x^*) = \varnothing$; that is, $x^*$ is an equilibrium.
3.3.5 Walrasian equilibrium of an economy
We now have several tools for proving the existence of a Walrasian equilibrium of an economy. We will focus on two approaches: the excess demand approach and the abstract economy approach. The excess demand approach utilizes the Debreu-Gale-Nikaido lemma, namely Theorem 3.1. The abstract economy approach converts the problem of finding a Walrasian equilibrium of the economy into the problem of finding a Nash equilibrium of an associated abstract economy.
The central difficulty of the excess demand approach involves proving the upper hemi-continuity of the excess demand multivalued mapping.
The abstract economy approach explicitly introduces a fictitious agent, the "auctioneer", into the picture and models the economy as an abstract economy or generalized game. The strategies of consumers are consumption vectors, the strategies of suppliers are production vectors, and the strategies of the auctioneer are prices. The auctioneer's preference is to increase the value of excess demand. A Nash equilibrium of the abstract economy corresponds to a Walrasian equilibrium of the original economy. The principal difficulty to overcome in applying the existence theorems for abstract economies is the fact that they require compact strategy sets, while the consumption and production sets are not compact. This problem is dealt with by showing that any equilibrium must lie in a compact set, then truncating the consumption and production sets and showing that a Nash equilibrium of the truncated abstract economy is a Walrasian equilibrium of the original economy.
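For intuition about the excess demand approach, here is a minimal Python sketch (a toy example of my own, not from the text): a two-good, two-consumer Cobb–Douglas exchange economy, for which excess demand is explicit, Walras's law $p \cdot z(p) = 0$ can be verified at any price, and a market-clearing price is found by bisection on the simplex.

```python
# Toy exchange economy (illustrative data, not from the text):
# two goods, two Cobb-Douglas consumers u_i(x) = x1^a_i * x2^(1-a_i)
# with endowments w_i; demand is x1 = a_i*m_i/p1, x2 = (1-a_i)*m_i/p2.
a = [0.3, 0.7]
w = [(1.0, 0.0), (0.0, 1.0)]

def excess_demand(p1):
    p2 = 1.0 - p1                      # prices normalized on the simplex
    z1 = z2 = 0.0
    for ai, (w1, w2) in zip(a, w):
        m = p1 * w1 + p2 * w2          # income = value of endowment
        z1 += ai * m / p1 - w1
        z2 += (1 - ai) * m / p2 - w2
    return z1, z2

# Walras's law: p . z(p) = 0 at every price, equilibrium or not.
z1, z2 = excess_demand(0.25)
assert abs(0.25 * z1 + 0.75 * z2) < 1e-12

# Bisect on p1: z1 is strictly decreasing in p1 for this economy.
lo, hi = 1e-6, 1.0 - 1e-6
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if excess_demand(mid)[0] > 0 else (lo, mid)

p_star = 0.5 * (lo + hi)
assert abs(p_star - 0.5) < 1e-9       # clearing price p* = (1/2, 1/2)
assert abs(excess_demand(p_star)[0]) < 1e-8
```

In this smooth single-valued case no fixed point machinery is needed; the multivalued theorems of this section are what replaces the bisection step when demand is a correspondence.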
We now recall some notation and definitions needed in what follows.
Let $\mathbb{R}^m$ denote the commodity space. For $i = 1, 2, \dots, n$ let $X_i \subset \mathbb{R}^m$ denote the $i$-th consumer's consumption set, $w_i \in \mathbb{R}^m$ his private endowment, and $U_i$ his preference relation on $X_i$. For $j = 1, 2, \dots, k$ let $Y_j$ denote the $j$-th supplier's production set. Set $X = \sum_{i=1}^n X_i$, $w = \sum_{i=1}^n w_i$, and $Y = \sum_{j=1}^k Y_j$. Let $a_{ij}$ denote the share of consumer $i$ in the profits of supplier $j$. An economy is then described by a tuple $((X_i, w_i, U_i), (Y_j), (a_{ij}))$.
Definition 3.5. An attainable state of the economy is a tuple $((x_i), (y_j)) \in \prod_{i=1}^n X_i \times \prod_{j=1}^k Y_j$ satisfying

$$\sum_{i=1}^n x_i - \sum_{j=1}^k y_j - w = 0.$$
Let $F$ denote the set of attainable states and let

$$M = \Big\{((x_i), (y_j)) \in (\mathbb{R}^m)^{n+k} \;\Big|\; \sum_{i=1}^n x_i - \sum_{j=1}^k y_j - w = 0\Big\}.$$

Then $F = (\prod_{i=1}^n X_i \times \prod_{j=1}^k Y_j) \cap M$. Let $X_i'$ be the projection of $F$ on $X_i$, and let $Y_j'$ be the projection of $F$ on $Y_j$.
Definition 3.6. A Walrasian free disposal equilibrium is a price $p^* \in \Delta$ together with an attainable state $((x_i^*), (y_j^*))$ satisfying:
(i) For each $j = 1, 2, \dots, k$,

$$p^* \cdot y_j^* \geq p^* \cdot y_j \quad \text{for all } y_j \in Y_j.$$

(ii) For each $i = 1, 2, \dots, n$,

$$x_i^* \in B_i \quad \text{and} \quad U_i(x_i^*) \cap B_i = \varnothing,$$

where

$$B_i = \Big\{x_i \in X_i \;\Big|\; p^* \cdot x_i \leq p^* \cdot w_i + \sum_{j=1}^k a_{ij}\,(p^* \cdot y_j^*)\Big\}.$$
Lemma 3.3. Let the economy $((X_i, w_i, U_i), (Y_j), (a_{ij}))$ satisfy:
For $i = 1, 2, \dots, n$,
(1) $X_i$ is closed, convex and bounded from below, and $w_i \in X_i$.
For $j = 1, 2, \dots, k$,
(2) $Y_j$ is closed, convex and $0 \in Y_j$.
(3) $AY \cap \mathbb{R}^m_+ = \{0\}$.
(4) $Y \cap (-Y) = \{0\}$.
Then the set $F$ of attainable states is compact and nonempty. Furthermore, $0 \in Y_j'$, $j = 1, 2, \dots, k$.
Suppose, in addition, that the following two assumptions hold. For each $i = 1, 2, \dots, n$,
(5) there is some $x_i' \in X_i$ satisfying $x_i' \ll w_i$,
(6) $Y \supset -\mathbb{R}^m_+$.
Then $x_i' \in X_i'$, $i = 1, 2, \dots, n$.
Proof. Clearly $((w_i), (0_j)) \in F$, so $F$ is nonempty and $0 \in Y_j'$. The set $F$ of attainable states is clearly closed, being the intersection of two closed sets. So, it suffices to show that $AF = \{0\}$, where $AF$ is the asymptotic cone of $F$ (the set of all possible limits of sequences of the form $\{\lambda_n x_n\}$, where each $x_n \in F$ and $\lambda_n \downarrow 0$). By a well known result, we have

$$AF \subset A\Big(\prod_{i=1}^n X_i \times \prod_{j=1}^k Y_j\Big) \cap AM.$$

Also, we have

$$A\Big(\prod_{i=1}^n X_i \times \prod_{j=1}^k Y_j\Big) \subset \prod_{i=1}^n (AX_i) \times \prod_{j=1}^k (AY_j).$$

Since each $X_i$ is bounded below there is some $b_i \in \mathbb{R}^m$ such that $X_i \subset b_i + \mathbb{R}^m_+$. Thus $AX_i \subset A(b_i + \mathbb{R}^m_+) = A\mathbb{R}^m_+ = \mathbb{R}^m_+$. Also, we have $AY_j \subset AY$. Again, since $M - w$ is a cone, $AM = M - w$. Thus we can show $AF = \{0\}$ if we can show that

$$\Big(\prod_{i=1}^n \mathbb{R}^m_+ \times \prod_{j=1}^k AY\Big) \cap (M - w) = \{0\}.$$

In other words, we need to show that if $x_i \in \mathbb{R}^m_+$, $i = 1, 2, \dots, n$, and $y_j \in AY$, $j = 1, 2, \dots, k$, and $\sum_{i=1}^n x_i - \sum_{j=1}^k y_j = 0$, then $x_1 = \dots = x_n = y_1 = \dots = y_k = 0$. Now $\sum_{i=1}^n x_i \geq 0$, so that $\sum_{j=1}^k y_j \geq 0$ too. Since $AY$ is a convex cone, $\sum_{j=1}^k y_j \in AY$. Since $AY \cap \mathbb{R}^m_+ = \{0\}$, $\sum_{i=1}^n x_i - \sum_{j=1}^k y_j = 0$ implies $\sum_{i=1}^n x_i = 0 = \sum_{j=1}^k y_j$. Now $x_i \geq 0$ and $\sum_{i=1}^n x_i = 0$ clearly imply that $x_i = 0$, $i = 1, 2, \dots, n$. Rewriting $\sum_{j=1}^k y_j = 0$ yields $y_i = -(\sum_{j \neq i} y_j)$. Both $y_i$ and this last sum belong to $Y$, as $AY \subset Y$. Thus $y_i \in Y \cap (-Y)$, so $y_i = 0$. This is true for all $i = 1, 2, \dots, k$.
Now assume that (5) and (6) hold. By (5), $\sum_{i=1}^n x_i' < \sum_{i=1}^n w_i$. Set $y' = \sum_{i=1}^n x_i' - \sum_{i=1}^n w_i$. Then $y' < 0$, so by (6) there are $y_j' \in Y_j$, $j = 1, 2, \dots, k$, satisfying $y' = \sum_{j=1}^k y_j'$. Then $((x_i'), (y_j')) \in F$, so $x_i' \in X_i'$.
Under the hypotheses of Lemma 3.3 the set $F$ of attainable states is compact. Thus for each consumer $i$ there is a compact convex set $K_i$ containing $X_i'$ in its interior. Set $X_i'' = K_i \cap X_i$. Then $X_i' \subset X_i''$. Likewise, for each supplier $j$ there is a compact convex set $C_j$ containing $Y_j'$ in its interior. Set $Y_j'' = C_j \cap Y_j$.
Theorem 3.11. Let the economy $((X_i, w_i, U_i), (Y_j), (a_{ij}))$ satisfy:
For $i = 1, 2, \dots, n$,
(1) $X_i$ is closed, convex and bounded from below, and $w_i \in X_i$.
(2) There is some $x_i' \in X_i$ satisfying $x_i' \ll w_i$.
(3) (a) $U_i$ has open graph, (b) $x_i \notin \operatorname{co} U_i(x_i)$, (c) $x_i \in \operatorname{cl} U_i(x_i)$.
For each $j = 1, 2, \dots, k$,
(4) $Y_j$ is closed and convex and $0 \in Y_j$.
(5) $AY \cap \mathbb{R}^m_+ = \{0\}$.
(6) $Y \cap (-Y) = \{0\}$.
(7) $Y \supset -\mathbb{R}^m_+$.
Then there is a free disposal equilibrium of the economy.
Proof. Define an abstract economy as follows. Player 0 is the auctioneer. His strategy set is $\Delta^{m-1}$, the closed standard $(m-1)$-simplex. These strategies will be price vectors. The strategy set of consumer $i$ will be $X_i''$. The strategy set of supplier $j$ is $Y_j''$. A typical strategy vector is thus of the form $(p, (x_i), (y_j))$.
The auctioneer's preferences are represented by the multivalued mapping $U_0 : \Delta \times \prod_{i \in I} X_i'' \times \prod_{j \in J} Y_j'' \rightrightarrows \Delta$ defined by

$$U_0(p, (x_i), (y_j)) = \Big\{q \in \Delta \;\Big|\; q \cdot \Big(\sum_{i \in I} x_i - \sum_{j \in J} y_j - w\Big) > p \cdot \Big(\sum_{i \in I} x_i - \sum_{j \in J} y_j - w\Big)\Big\}.$$
Thus the auctioneer prefers to raise the value of excess demand. Observe that $U_0$ has open graph, convex upper contour sets and $p \notin U_0(p, (x_i), (y_j))$.
Supplier $j$'s preferences are represented by the multivalued mapping $V_j : \Delta \times \prod_{i \in I} X_i'' \times \prod_{j \in J} Y_j'' \rightrightarrows Y_j''$ defined by

$$V_j(p, (x_i), (y_j)) = \{y_j'' \in Y_j'' \mid p \cdot y_j'' > p \cdot y_j\}.$$

Thus suppliers prefer larger profits. These multivalued mappings have open graph, convex upper contour sets and satisfy $y_j \notin V_j(p, (x_i), (y_j))$.
The preferences of consumer $i$ are represented by the multivalued mapping $U_i' : \Delta \times \prod_{i \in I} X_i'' \times \prod_{j \in J} Y_j'' \rightrightarrows X_i$ defined by

$$U_i'(p, (x_i), (y_j)) = \operatorname{co} U_i(x_i).$$

This multivalued mapping has open graph, convex upper contour sets and satisfies $x_i \notin U_i'(p, (x_i), (y_j))$.
The feasibility multivalued mappings are as follows. For suppliers and the auctioneer, they are constant multivalued mappings whose values are equal to their entire strategy sets. Thus they are continuous with compact convex values. For consumers things are more complicated. Start by setting $\pi_j(p) = \max_{y_j \in Y_j''} p \cdot y_j$. By the maximum theorem this is a continuous function. Since $0 \in Y_j'$, $\pi_j(p)$ is always nonnegative. Set

$$F_i(p, (x_i), (y_j)) = \Big\{x_i'' \in X_i'' \;\Big|\; p \cdot x_i'' \leq p \cdot w_i + \sum_{j=1}^k a_{ij}\,\pi_j(p)\Big\}.$$

Since $\pi_j(p)$ is nonnegative and $x_i' \ll w_i$ with $x_i' \in X_i''$, we have $p \cdot x_i' < p \cdot w_i$ for any $p \in \Delta$. Thus $F_i$ is lower hemi-continuous and nonempty-valued. Since $X_i''$ is compact, $F_i$ is upper hemi-continuous, since it clearly has closed graph. Thus for each consumer, the feasibility multivalued mapping is a continuous multivalued mapping with nonempty compact convex values.
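Concretely, when the truncated production set is a polytope, the profit function $\pi_j$ and the consumer's budget set are easy to evaluate: a linear profit is maximized at a vertex, and membership in the feasibility set is a single inequality. A Python sketch with made-up data (the vertex list, endowment and share are illustrative assumptions, not from the text):

```python
import numpy as np

# Illustrative truncated production set Y''_j: the polytope spanned by
# these vertices (a linear profit p . y is maximized at a vertex).
Y_vertices = np.array([[0.0, 0.0], [2.0, -1.0], [1.0, -2.0]])

def profit(p):
    return max(Y_vertices @ p)        # pi_j(p) = max over Y''_j of p . y

w_i = np.array([1.0, 1.0])            # endowment (illustrative)
share = 0.5                           # consumer's share a_ij in firm j

def in_budget(x, p):
    """Membership test for F_i: p.x <= p.w_i + a_ij * pi_j(p)."""
    return p @ x <= p @ w_i + share * profit(p) + 1e-12

p = np.array([0.6, 0.4])
# pi_j is nonnegative because 0 belongs to the production set.
assert profit(p) >= 0.0
assert in_budget(w_i, p)              # the endowment is always affordable
assert not in_budget(np.array([10.0, 10.0]), p)
```

Since $0$ is a vertex here, $\pi_j(p) \geq 0$ and the endowment always satisfies the budget inequality, mirroring the nonemptiness argument above.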
The abstract economy so constructed satisfies all the hypotheses of the Shafer-Sonnenschein theorem and so has a Nash equilibrium. Translating the definition of Nash equilibrium to the case at hand yields the existence of $(p^*, (x_i^*), (y_j')) \in \Delta \times \prod_{i \in I} X_i'' \times \prod_{j \in J} Y_j''$ satisfying
(i) $q \cdot (\sum_{i \in I} x_i^* - \sum_{j \in J} y_j' - w) \leq p^* \cdot (\sum_{i \in I} x_i^* - \sum_{j \in J} y_j' - w)$ for all $q \in \Delta$,
(ii) $p^* \cdot y_j' \geq p^* \cdot y_j$ for all $y_j \in Y_j''$, $j = 1, 2, \dots, k$,
(iii) $x_i^* \in B_i$ and $\operatorname{co} U_i(x_i^*) \cap B_i = \varnothing$, $i = 1, 2, \dots, n$, where

$$B_i = \Big\{x_i \in X_i'' \;\Big|\; p^* \cdot x_i \leq p^* \cdot w_i + \sum_{j=1}^k a_{ij}\,(p^* \cdot y_j')\Big\}.$$
Let $M_i = p^* \cdot w_i + \sum_{j=1}^k a_{ij}(p^* \cdot y_j')$. Then in fact, each consumer spends all his income, so that we have the budget equality $p^* \cdot x_i^* = M_i$. Suppose not. Then since $U_i(x_i^*)$ is open and $x_i^* \in \operatorname{cl} U_i(x_i^*)$, it would follow that $U_i(x_i^*) \cap B_i \neq \varnothing$, a contradiction.
Summing up the budget equalities and using $\sum_{i=1}^n a_{ij} = 1$ for each $j$ yields $p^* \cdot \sum_{i=1}^n x_i^* = p^* \cdot (\sum_{j=1}^k y_j' + w)$, so that

$$p^* \cdot \Big(\sum_{i \in I} x_i^* - \sum_{j \in J} y_j' - w\Big) = 0.$$

This and (i) yield

$$\sum_{i \in I} x_i^* - \sum_{j \in J} y_j' - w \leq 0.$$
We next show that $p^* \cdot y_j' \geq p^* \cdot y_j$ for all $y_j \in Y_j$. Suppose not, and let $\hat{y}_j \in Y_j$ satisfy $p^* \cdot \hat{y}_j > p^* \cdot y_j'$. Since $Y_j$ is convex, $\lambda \hat{y}_j + (1 - \lambda) y_j' \in Y_j$, and it too yields a higher profit than $y_j'$. But for $\lambda$ small enough, $\lambda \hat{y}_j + (1 - \lambda) y_j' \in Y_j''$, because $Y_j'$ is in the interior of $C_j$. This contradicts (ii).
By (7), $z^* = \sum_{i \in I} x_i^* - \sum_{j \in J} y_j' - w \in Y$, so that there exist $\tilde{y}_j \in Y_j$, $j = 1, 2, \dots, k$, satisfying $z^* = \sum_{j \in J} \tilde{y}_j$. Set $y_j^* = y_j' + \tilde{y}_j$. Since each $y_j'$ maximizes $p^* \cdot y_j$ over $Y_j$, then $\sum_{j \in J} y_j'$ maximizes $p^* \cdot y$ over $Y$. But since $p^* \cdot z^* = 0$, $\sum_{j \in J} y_j^*$ also maximizes $p^* \cdot y$ over $Y$. But then each $y_j^*$ must also maximize $p^* \cdot y_j$ over $Y_j$. Thus we have so far shown that $p^* \cdot y_j^* \geq p^* \cdot y_j$ for all $y_j \in Y_j$, $j = 1, 2, \dots, k$. By construction, we have that $((x_i^*), (y_j^*)) \in F$. To show that $(p^*, (x_i^*), (y_j^*))$ is indeed a Walrasian free disposal equilibrium it remains to be proven that for each $i$,

$$U_i(x_i^*) \cap \Big\{x_i \in X_i \;\Big|\; p^* \cdot x_i \leq p^* \cdot w_i + \sum_{j=1}^k a_{ij}\,(p^* \cdot y_j^*)\Big\} = \varnothing.$$

Suppose that there is some $\hat{x}_i$ belonging to this intersection. Then for small enough $\lambda > 0$, $\lambda \hat{x}_i + (1 - \lambda) x_i^* \in X_i''$, and since $x_i^* \in \operatorname{cl} U_i(x_i^*)$, $\lambda \hat{x}_i + (1 - \lambda) x_i^* \in \operatorname{co} U_i(x_i^*) \cap B_i$, contradicting (iii). Thus $(p^*, (x_i^*), (y_j^*))$ is a Walrasian free disposal equilibrium.
Theorem 3.12. Let the economy $((X_i, w_i, U_i), (Y_j), (a_{ij}))$ satisfy the hypotheses of Theorem 3.11, and further assume that there is a continuous quasi-concave utility $u_i$ satisfying $U_i(x_i) = \{x_i' \in X_i \mid u_i(x_i') > u_i(x_i)\}$. Then the economy has a Walrasian free disposal equilibrium.
Proof. Let $Y_j''$ be as in the proof of the previous theorem. We define the multivalued mapping $\eta_j : \Delta \rightrightarrows Y_j''$ by

$$\eta_j(p) = \{y_j \in Y_j'' \mid p \cdot y_j \geq p \cdot y_j'' \text{ for all } y_j'' \in Y_j''\}.$$

Define $\pi_j : \Delta \to \mathbb{R}$ by $\pi_j(p) = \max_{y_j \in Y_j''} p \cdot y_j$. By the maximum theorem, $\eta_j$ is upper hemi-continuous with nonempty compact values and $\pi_j$ is continuous. Since $0 \in Y_j''$, $\pi_j$ is nonnegative. Since $Y_j''$ is convex, $\eta_j(p)$ is convex too.
Let $X_i''$ be as in the proof of the previous theorem and define $\beta_i : \Delta \rightrightarrows X_i''$ by

$$\beta_i(p) = \Big\{x_i \in X_i'' \;\Big|\; p \cdot x_i \leq p \cdot w_i + \sum_{j \in J} a_{ij}\,\pi_j(p)\Big\}.$$

As in the proof of the previous theorem, the existence of $x_i' \ll w_i$ in $X_i''$ implies that $\beta_i$ is a continuous multivalued mapping with nonempty values. Since $X_i''$ is compact and convex, $\beta_i$ has compact convex values. Define $d_i : \Delta \rightrightarrows X_i''$ by

$$d_i(p) = \{x_i \in \beta_i(p) \mid u_i(x_i) \geq u_i(x_i'') \text{ for all } x_i'' \in \beta_i(p)\}.$$

By a theorem of Berge, $d_i$ is an upper hemi-continuous multivalued mapping with nonempty compact values. Since $u_i$ is quasi-concave, $d_i$ has convex values. Set

$$Z(p) = \sum_{i=1}^n d_i(p) - \sum_{j=1}^k \eta_j(p) - w.$$

This $Z$ is upper hemi-continuous and has nonempty compact convex values. Also, for any $z \in Z(p)$, $p \cdot z \leq 0$. To see this just add up the budget constraints for each consumer. By Theorem 3.1, there is some $p^* \in \Delta$ and $z^* \in Z(p^*)$ satisfying $z^* \leq 0$. Thus there are $x_i^* \in d_i(p^*)$ and $y_j^* \in \eta_j(p^*)$ such that

$$\sum_{i=1}^n x_i^* - \sum_{j=1}^k y_j^* - w \leq 0.$$

It follows just as in the proof of the previous theorem that $((x_i^*), (y_j^*))$, together with $p^*$, is a Walrasian free disposal equilibrium.
Remark 3.10. The literature on Walrasian equilibrium is enormous. Two standard texts in the field are Debreu and Arrow-Hahn.
3.3.6 Equilibria for abstract economies
The object of this subsection is to use new fixed-point theorems of the authors Agarwal and O'Regan to establish the existence of equilibrium points of abstract economies. These results improve, extend and complement those in the literature.
Throughout this subsection, $I$ will be a countable set of agents, and we describe an abstract economy by $\Gamma = (Q_i, F_i, P_i)_{i \in I}$, where for each $i \in I$, $Q_i$ is a choice (or strategy) set, $F_i : \prod_{i \in I} Q_i = Q \to 2^{Q_i}$ (the nonempty subsets of $Q_i$) is a constraint multivalued mapping, and $P_i : Q \to 2^{Q_i}$ is a preference multivalued mapping; here $Q_i$ will be a subset of a Fréchet space (a complete, metrizable, locally convex topological vector space) $E_i$ for each $i \in I$. A point $x \in Q$ is called an equilibrium point of $\Gamma$ if for each $i \in I$ we have

$$x_i \in F_i(x) \quad \text{and} \quad P_i(x) \cap F_i(x) = \varnothing;$$

here $x_i$ is the projection of $x$ on $E_i$.
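For finite strategy sets the equilibrium condition can be checked by brute force, which helps fix the definition. A hedged Python sketch with a made-up two-agent economy (the particular $F_i$ and $P_i$ below are illustrative, not from the text):

```python
from itertools import product

# Two agents with finite choice sets (illustrative example).
Q = [("a", "b"), ("c", "d")]

# Constraint maps F_i and preference maps P_i, each profile -> set.
F = [lambda x: {"a", "b"},
     lambda x: {"c"} if x[0] == "a" else {"c", "d"}]
P = [lambda x: {"b"} if x[0] == "a" else set(),  # agent 0: b beats a
     lambda x: {"c"} if x[1] == "d" else set()]  # agent 1: c beats d

def is_equilibrium(x):
    # x_i must be feasible, and no preferred-and-feasible choice exists.
    return all(x[i] in F[i](x) and not (F[i](x) & P[i](x))
               for i in range(len(Q)))

equilibria = [x for x in product(*Q) if is_equilibrium(x)]
assert equilibria == [("b", "c")]
```

The two clauses of `is_equilibrium` are exactly $x_i \in F_i(x)$ and $P_i(x) \cap F_i(x) = \varnothing$; the theorems below replace this exhaustive search with fixed point arguments in Fréchet spaces.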
Theorem 3.13. Let $W$ be a closed, convex subset of a Fréchet space $E$ with $x_0 \in W$. Suppose that there is an upper semicontinuous map $F : W \to CK(W)$ (here $CK(W)$ denotes the family of nonempty, compact, convex subsets of $W$) with the following condition holding:

$A \subseteq W$, $A = \operatorname{co}(\{x_0\} \cup F(A))$ with $A = \bar{C}$ and $C \subseteq A$ countable, implies $A$ is compact.  (*)

Then $F$ has a fixed point in $W$.
Remark 3.11. Suppose in addition, in Theorem 3.13, we assume that

for any $A \subseteq W$ we have $F(\bar{A}) \subseteq F(A)$;

then we could replace (*) with

$C \subseteq W$ countable, $C = \operatorname{co}(\{x_0\} \cup F(C))$ implies $C$ is compact,  (**)

and the result in Theorem 3.13 is again true.
Now Theorem 3.13 together with Remark 3.11 yields the following theorem of Mönch type for single valued maps.
Theorem 3.14. Let $W$ be a closed, convex subset of a Fréchet space $E$ with $x_0 \in W$. Suppose that there is a continuous map $F : W \to W$ with the following condition holding:

$C \subseteq W$ countable, $C = \operatorname{co}(\{x_0\} \cup F(C))$ implies $C$ is compact.

Then $F$ has a fixed point in $W$.
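In $\mathbb{R}^n$ every closed bounded set is compact, so the countable-set condition of Theorem 3.14 is automatic and the theorem reduces to Schauder's; when $F$ is moreover a contraction, the fixed point can even be computed by iteration. A minimal Python illustration (the particular map is an arbitrary example of mine):

```python
import math

# F maps the closed convex set W = [0, 1] into itself and is a
# contraction (|F'(x)| = 0.5*|sin x| <= 1/2), so Picard iteration
# from any x_0 in W converges to the unique fixed point.
def F(x):
    return 0.5 * math.cos(x)

x = 0.0                         # x_0 in W
for _ in range(100):
    x = F(x)

assert 0.0 <= x <= 1.0
assert abs(F(x) - x) < 1e-12    # x is (numerically) the fixed point
```

The theorems of this subsection are needed precisely when such finite-dimensional compactness, or contractivity, is unavailable.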
Next we present a fixed point result of Furi-Pera type.
Theorem 3.15. Let $E$ be a Fréchet space with $Q$ a closed, convex subset of $E$ and $0 \in Q$. Suppose $F : Q \to CK(E)$ is a compact upper semicontinuous map with the following condition holding:

if $\{(x_n, \lambda_n)\}_{n \geq 1}$ is a sequence in $\partial Q \times [0, 1]$ converging to $(x, \lambda)$ with $x \in \lambda F(x)$ and $0 \leq \lambda < 1$, then there exists $n_0 \in \{1, 2, \dots\}$ with $\{\lambda_n F(x_n)\} \subseteq Q$ for $n \geq n_0$.

Then $F$ has a fixed point in $Q$.
Remark 3.12. In Theorem 3.15, if $E$ is a Hilbert space, then one could replace $F : Q \to CK(E)$, a compact map, in Theorem 3.15 with $F : Q \to CK(E)$ a 1-set contractive, condensing map with $F(Q)$ a bounded set in $E$.
Let $Z$ be a subset of a Hausdorff topological space $E_1$ and $W$ a subset of a topological vector space $E_2$. We say $F \in DKT(Z, W)$ if $W$ is convex and there exists a map $T : Z \to 2^W$ with

$\operatorname{co}(T(x)) \subseteq F(x)$ for all $x \in Z$, $T(x) \neq \varnothing$ for each $x \in Z$,

and the fibres

$$T^{-1}(y) = \{z \in Z \mid y \in T(z)\}$$

open (in $Z$) for each $y \in W$.
The following selection theorem holds.
Theorem 3.16. Let $Z$ be a nonempty, paracompact Hausdorff topological space and $W$ a nonempty, convex subset of a Hausdorff topological vector space. Suppose $F \in DKT(Z, W)$. Then $F$ has a continuous selection, that is, there exists a continuous single valued map $f : Z \to W$ with $f(x) \in F(x)$ for each $x \in Z$.
The following result is a fixed point theorem of Furi-Pera type for DKT maps.
Theorem 3.17. Let $I$ be a countable index set and $\{Q_i\}_{i \in I}$ a family of nonempty closed, convex sets, each in a Fréchet space $E_i$. Let $Q = \prod_{i \in I} Q_i$ and assume $0 \in Q$. For each $i \in I$, let $F_i \in DKT(Q, E_i)$ be a compact map. Let $F : Q \to 2^E$ (here $E = \prod_{i \in I} E_i$) be given by

$$F(x) = \prod_{i \in I} F_i(x) \quad \text{for } x \in Q,$$

and suppose the following condition holds:

if $\{(x_n, \lambda_n)\}_{n \geq 1}$ is a sequence in $\partial Q \times [0, 1]$ converging to $(x, \lambda)$ with $x \in \lambda F(x)$ and $0 \leq \lambda < 1$, then there exists $n_0 \in \{1, 2, \dots\}$ with $\{\lambda_n F(x_n)\} \subseteq Q$ for $n \geq n_0$.

Then $F$ has a fixed point in $Q$.
Remark 3.13. In Theorem 3.17, if $E_i$ is a Hilbert space for each $i \in I$, then one could replace $F_i$, a compact map for each $i \in I$, in Theorem 3.17 with $F : Q \to 2^E$ a 1-set contractive, condensing map with $F(Q)$ a bounded set in $E$.
We will now use the above fixed point results to obtain equilibrium theorems for an abstract economy.
Theorem 3.18. Let $I$ be a countable set and $\Gamma = (Q_i, F_i, P_i)_{i \in I}$ an abstract economy such that for each $i \in I$, the following conditions hold:
(1) $Q_i$ is a nonempty closed, convex subset of a Fréchet space $E_i$,
(2) $F_i : Q \to CK(Q_i)$ is upper semicontinuous; here $CK(Q_i)$ denotes the family of nonempty, compact, convex subsets of $Q_i$,
(3) $U_i = \{x \in Q \mid F_i(x) \cap P_i(x) \neq \varnothing\}$ is open in $Q$,
(4) $P_i|_{U_i} : U_i \to 2^{E_i}$ is upper semicontinuous with $P_i(x)$ closed and convex for each $x \in U_i$,
(5) $x_i \notin F_i(x) \cap P_i(x)$ for each $x \in Q$; here $x_i$ is the projection of $x$ on $E_i$.
In addition, suppose $x_0 \in Q$ with
(6) $A \subseteq Q$, $A \subseteq \operatorname{co}(\{x_0\} \cup F(A))$ with $A = \bar{C}$ and $C \subseteq A$ countable, implies $A$ is compact
holding; here $F : Q \to 2^Q$ is given by

$$F(x) = \prod_{i \in I} F_i(x) \quad \text{for } x \in Q.$$

Then $\Gamma$ has an equilibrium point. That is, for each $i \in I$, we have

$$x_i \in F_i(x) \quad \text{and} \quad F_i(x) \cap P_i(x) = \varnothing;$$

here $x_i$ is the projection of $x$ on $E_i$.
Proof. Fix $i \in I$. Let $G_i : U_i \to 2^{Q_i}$ be given by

$$G_i(x) = F_i(x) \cap P_i(x),$$

which is upper semicontinuous. Let $H_i : Q \to 2^{Q_i}$ be defined by

$H_i(x) = G_i(x)$ if $x \in U_i$, and $H_i(x) = F_i(x)$ if $x \notin U_i$,

which is upper semicontinuous with nonempty, compact, convex values (note $G_i(x) \subseteq F_i(x)$ for $x \in U_i$).
Let $H : Q \to 2^Q$ be defined by

$$H(x) = \prod_{i \in I} H_i(x).$$

We have that $H : Q \to CK(Q)$ is upper semicontinuous. We wish to apply Theorem 3.13 to $H$. To see this, let $A \subseteq Q$ with $A = \operatorname{co}(\{x_0\} \cup H(A))$, $A = \bar{C}$ and $C \subseteq A$ countable. Then since

$H(x) \subseteq F(x)$ for $x \in A$

(note $H_i(x) \subseteq F_i(x)$ for $x \in Q$), we have

$$A \subseteq \operatorname{co}(\{x_0\} \cup F(A)).$$

Now (6) guarantees that $A$ is compact. Theorem 3.13 guarantees that there exists $x \in Q$ with $x \in H(x)$. From (5), we have $x \notin U_i$ for each $i \in I$ (if $x \in U_i$, then $x_i \in H_i(x) = G_i(x) = F_i(x) \cap P_i(x)$, contradicting (5)). As a result, for each $i \in I$, we have $x_i \in F_i(x)$ and $F_i(x) \cap P_i(x) = \varnothing$; here $x_i$ is the projection of $x$ on $E_i$.
Remark 3.14. If $F(\bar{B}) \subseteq F(B)$ for any $B \subseteq Q$, then one could replace (6) in Theorem 3.18 with (see Remark 3.11)

$C \subseteq Q$ countable, $C \subseteq \operatorname{co}(\{x_0\} \cup F(C))$ implies $C$ is compact.
Theorem 3.19. Let $I$ be a countable set and $\Gamma = (Q_i, F_i, P_i)_{i \in I}$ an abstract economy such that for each $i \in I$, the following conditions hold:
(1) $Q_i$ is a nonempty closed, convex subset of a Fréchet space $E_i$,
(2) $F_i : Q \to CK(Q_i)$ is an upper semicontinuous, compact map,
(3) $U_i = \{x \in Q \mid F_i(x) \cap P_i(x) \neq \varnothing\}$ is open in $Q$,
(4) $P_i|_{U_i} : U_i \to 2^{E_i}$ is upper semicontinuous with $P_i(x)$ closed and convex for each $x \in U_i$,
(5) $x_i \notin F_i(x) \cap P_i(x)$ for each $x \in Q$; here $x_i$ is the projection of $x$ on $E_i$.
In addition, suppose $0 \in Q$ with
(6) if $\{(x_n, \lambda_n)\}_{n \geq 1}$ is a sequence in $\partial Q \times [0, 1]$ converging to $(x, \lambda)$ with $x \in \lambda F(x)$ and $0 \leq \lambda < 1$, then there exists $n_0 \in \{1, 2, \dots\}$ with $\{\lambda_n F(x_n)\} \subseteq Q$ for $n \geq n_0$
holding; here $F : Q \to 2^E$ (here $E = \prod_{i \in I} E_i$) is given by

$$F(x) = \prod_{i \in I} F_i(x).$$

Then $\Gamma$ has an equilibrium point $x \in Q$. That is, for each $i \in I$, we have

$$x_i \in F_i(x) \quad \text{and} \quad F_i(x) \cap P_i(x) = \varnothing;$$

here $x_i$ is the projection of $x$ on $E_i$.
Proof. Fix $i \in I$ and let $H_i$ be as in the proof of Theorem 3.18. The same reasoning as in Theorem 3.18 guarantees that $H_i : Q \to CK(E_i)$ is upper semicontinuous. Let $H : Q \to 2^E$ be as in the proof of the previous theorem. Notice $H : Q \to CK(E)$ is an upper semicontinuous, compact map (use (2) with $H_i(x) \subseteq F_i(x)$ for $x \in Q$). We wish to apply Theorem 3.15. To see this, suppose $\{(x_n, \lambda_n)\}_{n \geq 1}$ is a sequence in $\partial Q \times [0, 1]$ converging to $(x, \lambda)$ with $x \in \lambda H(x)$ and $0 \leq \lambda < 1$. Then since $H(x) \subseteq F(x)$ for $x \in Q$, we have $x \in \lambda F(x)$. Now (6) guarantees that there exists $n_0 \in \{1, 2, \dots\}$ with $\{\lambda_n F(x_n)\} \subseteq Q$ for each $n \geq n_0$. Consequently, $\{\lambda_n H(x_n)\} \subseteq Q$ for each $n \geq n_0$. Theorem 3.15 guarantees that there exists $x \in Q$ with $x \in H(x)$, and it is easy to check, as in Theorem 3.18, that $x$ is an equilibrium point of $\Gamma$.
Remark 3.15. Notice (6) can be replaced by

if $\{(x_n, \lambda_n)\}_{n \geq 1}$ is a sequence in $\partial Q \times [0, 1]$ converging to $(x, \lambda)$ with $x \in \lambda H(x)$ and $0 \leq \lambda < 1$, then there exists $n_0 \in \{1, 2, \dots\}$ with $\{\lambda_n H(x_n)\} \subseteq Q$ for $n \geq n_0$,

where $H$ is given in the proof of Theorem 3.18, and the result in Theorem 3.19 is again true.
Remark 3.16. If $E_i$ is a Hilbert space for each $i \in I$, then one could replace $F_i : Q \to CK(E_i)$, a compact map for each $i \in I$, in (2) with $F : Q \to 2^E$ a 1-set contractive, condensing map with $F(Q)$ a bounded set in $E$.
Next we present a generalization of Theorems 3.18 and 3.19.
Theorem 3.20. Let $I$ be a countable set and $\Gamma = (Q_i, F_i, P_i)_{i \in I}$ an abstract economy. Assume for each $i \in I$ that (1), (2), (3), (5) and (6) of Theorem 3.18 hold (with $x_0 \in Q$). In addition, suppose for each $i \in I$ that there exists an upper semicontinuous selector

$\psi_i : U_i \to 2^{Q_i}$ of $F_i \cap P_i|_{U_i} : U_i \to 2^{Q_i}$  (7)

with $\psi_i(x)$ closed and convex for each $x \in U_i$. Then $\Gamma$ has an equilibrium point $x \in Q$. That is, for each $i \in I$, we have

$$x_i \in F_i(x) \quad \text{and} \quad F_i(x) \cap P_i(x) = \varnothing;$$

here $x_i$ is the projection of $x$ on $E_i$.
Proof. Fix $i \in I$. Let $H_i : Q \to 2^{Q_i}$ be defined by

$H_i(x) = \psi_i(x)$ if $x \in U_i$, and $H_i(x) = F_i(x)$ if $x \notin U_i$.

This $H_i : Q \to CK(Q_i)$ is upper semicontinuous (note $\psi_i(x) \subseteq F_i(x)$ for $x \in U_i$). Essentially the same reasoning as in Theorem 3.18 onwards establishes the result.
Remark 3.17. If $P_i|_{U_i} : U_i \to 2^{E_i}$ is upper semicontinuous with $P_i(x)$ closed and convex for each $x \in U_i$, then of course (7) holds.
If $F_i \cap P_i|_{U_i} : U_i \to 2^{Q_i}$ is lower semicontinuous with $F_i(x) \cap P_i(x)$ closed and convex for each $x \in U_i$, then (7) also holds.
Theorem 3.21. Let $I$ be a countable set and $\Gamma = (Q_i, F_i, P_i)_{i \in I}$ an abstract economy. Assume for each $i \in I$ that (1), (2), (3) and (5) of Theorem 3.19 hold. In addition, suppose for each $i \in I$ that there exists an upper semicontinuous selector

$\psi_i : U_i \to 2^{E_i}$ of $F_i \cap P_i|_{U_i} : U_i \to 2^{E_i}$  (8)

with $\psi_i(x)$ closed and convex for each $x \in U_i$. Also assume $0 \in Q$ with (6) holding. Then $\Gamma$ has an equilibrium point.
Proof. Fix $i \in I$ and let $H_i$ be as in Theorem 3.20. Essentially the same reasoning as in Theorem 3.19 establishes the result.
The theorems so far in this subsection assume $U_i$ is open in $Q$. Our next two results consider the case when $U_i$ is closed in $Q$.
Theorem 3.22. Let $I$ be a countable set and $\Gamma = (Q_i, F_i, P_i)_{i \in I}$ an abstract economy such that for each $i \in I$, the following conditions hold:
(1) $Q_i$ is a nonempty, closed, convex subset of a Fréchet space $E_i$,
(2) $F_i : Q \to CK(Q_i)$ is lower semicontinuous,
(3) $U_i = \{x \in Q \mid F_i(x) \cap P_i(x) \neq \varnothing\}$ is closed in $Q$,
(4) there exists a lower semicontinuous selector $\psi_i : U_i \to 2^{Q_i}$ of $F_i \cap P_i|_{U_i} : U_i \to 2^{Q_i}$ with $\psi_i(x)$ closed and convex for each $x \in U_i$,
and
(5) $x_i \notin F_i(x) \cap P_i(x)$ for each $x \in Q$; here $x_i$ is the projection of $x$ on $E_i$.
In addition, suppose $x_0 \in Q$ with
(6) $A \subseteq Q$, $A \subseteq \operatorname{co}(\{x_0\} \cup F(A))$ with $A = \bar{C}$ and $C \subseteq A$ countable, implies $A$ is compact
holding; here $F : Q \to 2^Q$ is given by

$$F(x) = \prod_{i \in I} F_i(x).$$

Then $\Gamma$ has an equilibrium point.
Proof. Fix $i \in I$ and let $H_i : Q \to 2^{Q_i}$ be given by

$H_i(x) = \psi_i(x)$ if $x \in U_i$, and $H_i(x) = F_i(x)$ if $x \notin U_i$.

This $H_i : Q \to CK(Q_i)$ is lower semicontinuous. Then, there exists an upper semicontinuous selector $T_i : Q \to CK(Q_i)$ of $H_i$. Let $T : Q \to 2^Q$ be given by

$$T(x) = \prod_{i \in I} T_i(x) \quad \text{for } x \in Q.$$

Now $T : Q \to CK(Q)$ is upper semicontinuous. We wish to apply Theorem 3.13 to $T$. To see this, let $A \subseteq Q$ with $A = \operatorname{co}(\{x_0\} \cup T(A))$, $A = \bar{C}$ and $C \subseteq A$ countable. Then since

$T(x) \subseteq F(x)$ for $x \in Q$

(note $T_i(x) \subseteq H_i(x) \subseteq F_i(x)$ for $x \in Q$), we have

$$A \subseteq \operatorname{co}(\{x_0\} \cup F(A)).$$

Now (6) guarantees that $A$ is compact. Theorem 3.13 guarantees that there exists $x \in Q$ with $x \in T(x)$. Now if $x \in U_i$ for some $i \in I$, then

$x_i \in T_i(x) \subseteq H_i(x) = \psi_i(x)$

(here $x_i$ is the projection of $x$ on $E_i$), and so $x_i \in F_i(x) \cap P_i(x)$, a contradiction of (5). As a result, $x \notin U_i$ for each $i \in I$, so $x_i \in F_i(x)$ and $F_i(x) \cap P_i(x) = \varnothing$. $\square$
Remark 3.18. If $F_i \cap P_i|_{U_i} : U_i \to 2^{Q_i}$ is lower semicontinuous with $F_i(x) \cap P_i(x)$ closed and convex for each $x \in U_i$, then (4) is clearly satisfied.
Theorem 3.23. Let $I$ be a countable set and $\Gamma = (Q_i, F_i, P_i)_{i \in I}$ an abstract economy such that for each $i \in I$, the following conditions hold:
(1) $Q_i$ is a nonempty, closed, convex subset of a Fréchet space $E_i$,
(2) $F_i : Q \to CK(E_i)$ is a lower semicontinuous, compact map,
(3) $U_i = \{x \in Q \mid F_i(x) \cap P_i(x) \neq \varnothing\}$ is closed in $Q$,
(4) there exists a lower semicontinuous selector $\psi_i : U_i \to 2^{E_i}$ of $F_i \cap P_i|_{U_i} : U_i \to 2^{E_i}$ with $\psi_i(x)$ closed and convex for each $x \in U_i$,
and
(5) $x_i \notin F_i(x) \cap P_i(x)$ for each $x \in Q$; here $x_i$ is the projection of $x$ on $E_i$.
In addition, suppose $0 \in Q$ with
(6) if $\{(x_n, \lambda_n)\}_{n \geq 1}$ is a sequence in $\partial Q \times [0, 1]$ converging to $(x, \lambda)$ with $x \in \lambda F(x)$ and $0 \leq \lambda < 1$, then there exists $n_0 \in \{1, 2, \dots\}$ with $\{\lambda_n F(x_n)\} \subseteq Q$ for $n \geq n_0$
holding; here $F : Q \to 2^E$ (here $E = \prod_{i \in I} E_i$) is given by

$$F(x) = \prod_{i \in I} F_i(x).$$

Then $\Gamma$ has an equilibrium point $x \in Q$.
Proof. Fix $i\in I$ and let $H_i$ be as in Theorem 3.22. The same reasoning as in Theorem 3.22 guarantees that $H_i:Q\to CK(E_i)$ is lower semicontinuous and that there exists an upper semicontinuous selector $T_i:Q\to CK(E_i)$ of $H_i$. Let $T:Q\to 2^{E}$ be as in the proof of the previous theorem. Notice $T:Q\to CK(E)$ is an upper semicontinuous, compact map (use (2) with $T_i(x)\subseteq F_i(x)$ for $x\in Q$). We wish to apply Theorem 3.15. To see this, suppose $\{(x_n,\lambda_n)\}_{n\ge 1}$ is a sequence in $\partial Q\times[0,1]$ converging to $(x,\lambda)$ with $x\in\lambda T(x)$ and $0\le\lambda<1$. Then since $T(x)\subseteq F(x)$ for $x\in Q$, we have $x\in\lambda F(x)$. Now (6) guarantees that there exists $n_0\in\{1,2,\dots\}$ with $\{\lambda_n F(x_n)\}\subseteq Q$ for each $n\ge n_0$. Consequently, $\{\lambda_n T(x_n)\}\subseteq Q$ for each $n\ge n_0$. Theorem 3.15 guarantees that there exists $x\in Q$ with $x\in T(x)$, and it is easy to check, as in Theorem 3.22, that $x$ is an equilibrium point of $\Gamma$.
Next we discuss an abstract economy $\Gamma=(Q_i,F_i,G_i,P_i)_{i\in I}$ (here $I$ is countable) where, for each $i\in I$, $Q_i\subseteq E_i$ is the choice set, $F_i,G_i:\prod_{i\in I}Q_i=Q\to 2^{E_i}$ are constraint multivalued mappings, and $P_i:Q\to 2^{E_i}$ is a preference multivalued mapping. A point $x\in Q$ is called an equilibrium point of $\Gamma$ if for each $i\in I$ we have
\[ x_i\in cl_{E_i}\,G_i(x)=\overline{G}_i(x) \quad\text{and}\quad F_i(x)\cap P_i(x)=\emptyset \]
(here $x_i$ is the projection of $x$ on $E_i$). The results which follow improve those of O'Regan, Ding, Kim, Tan, Yannelis and Prabhakar. We first establish a new fixed point result for DKT maps.
Theorem 3.24. Let $I$ be a countable index set and $\{Q_i\}_{i\in I}$ a family of nonempty, closed, convex sets, each in a Fréchet space $E_i$. For each $i\in I$, let $G_i\in DKT(Q,Q_i)$, where $Q=\prod_{i\in I}Q_i$. Assume $x_0\in Q$ and suppose $G:Q\to 2^{Q}$, defined by $G(x)=\prod_{i\in I}G_i(x)$ for $x\in Q$, satisfies the following condition:
\[ C\subseteq Q \text{ countable},\ C\subseteq\mathrm{co}(\{x_0\}\cup G(C)) \text{ implies } \overline{C} \text{ is compact}. \]
Then $G$ has a fixed point in $Q$.
Proof. Since $Q$ is a subset of the metrizable space $E=\prod_{i\in I}E_i$, $Q$ is paracompact. Fix $i\in I$. Then $G_i\in DKT(Q,Q_i)$, together with Theorem 3.16, guarantees that there exists a continuous selector $f_i:Q\to Q_i$ of $G_i$. Let $f:Q\to Q$ be defined by
\[ f(x)=\prod_{i\in I}f_i(x), \quad\text{for } x\in Q. \]
Notice $f:Q\to Q$ is continuous and $f$ is a selector of $G$. We now show:
\[ \text{if } C\subseteq Q \text{ is countable and } C=\mathrm{co}(\{x_0\}\cup f(C)), \text{ then } \overline{C} \text{ is compact}. \]
To see this, notice that if $C\subseteq Q$ is countable and $C=\mathrm{co}(\{x_0\}\cup f(C))$, then, since $f$ is a selector of $G$, we have
\[ C\subseteq\mathrm{co}(\{x_0\}\cup G(C)). \]
Now the condition in the statement of the theorem implies $\overline{C}$ is compact. Theorem 3.14 guarantees that there exists $x\in Q$ with $x=f(x)$. That is,
\[ x=f(x)=\prod_{i\in I}f_i(x)\in\prod_{i\in I}G_i(x)=G(x). \] □
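The selection theorems invoked above (Theorem 3.16 for DKT-type maps) are used as black boxes. The construction that typically underlies them — a partition of unity subordinate to the open lower sections $G^{-1}(y)$, combined with convexity of the values — can be illustrated numerically in one dimension. The sketch below is our own toy illustration under assumed data (the center function `f`, the radius `R`, and the finite grid `YS` are not objects from the text): the map $G(x)=(f(x)-R,\,f(x)+R)$ has convex values and open lower sections, and a convex combination weighted by bump functions yields a continuous single-valued selector.

```python
# Sketch: a continuous selector of a multivalued map with open lower
# sections, built from a partition of unity (the idea behind the
# selection theorems for DKT-type maps).  All concrete choices below
# (f, R, the candidate grid YS) are illustrative assumptions.

def f(x):                          # "center" of the multivalued map
    return 0.5 + 0.4 * x * (1 - x)

R = 0.3                            # G(x) = (f(x)-R, f(x)+R): open convex values

YS = [j / 20 for j in range(21)]   # finite set of candidate points y_j

def phi(j, x):
    # Bump supported exactly on the open lower section G^{-1}(y_j).
    return max(0.0, R - abs(YS[j] - f(x)))

def selector(x):
    # Convex combination of those y_j lying in G(x); since G(x) is an
    # interval (convex), the combination stays inside G(x).
    ws = [phi(j, x) for j in range(len(YS))]
    total = sum(ws)
    return sum(w * y for w, y in zip(ws, YS)) / total

# Selection property: selector(x) lies in G(x) for every x on a grid.
for k in range(101):
    x = k / 100
    assert abs(selector(x) - f(x)) < R
```

The same pattern, with paracompactness supplying the partition of unity, is what makes the continuous selector $f_i$ of $G_i$ in the proof above available.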
Theorem 3.25. Let $I$ be a countable set and $\Gamma=(Q_i,F_i,G_i,P_i)_{i\in I}$ an abstract economy such that for each $i\in I$ the following conditions hold:
(1) $Q_i$ is a nonempty, closed, convex subset of a Fréchet space $E_i$,
(2) for each $x\in Q$, $F_i(x)\neq\emptyset$ and $\mathrm{co}(F_i(x))\subseteq G_i(x)$,
(3) for each $y_i\in Q_i$, the set $[(\mathrm{co}\,P_i)^{-1}(y_i)\cup M_i]\cap F_i^{-1}(y_i)$ is open in $Q$; here $M_i=\{x\in Q : F_i(x)\cap P_i(x)=\emptyset\}$,
(4) $G_i:Q\to 2^{Q_i}$, and
(5) $x_i\notin\mathrm{co}(P_i(x))$ for each $x\in Q$; here $x_i$ is the projection of $x$ on $E_i$.
In addition, suppose $x_0\in Q$ with
(6) $C\subseteq Q$ countable, $C\subseteq\mathrm{co}(\{x_0\}\cup G(C))$ implies $\overline{C}$ is compact
holding; here $G:Q\to 2^{Q}$ is given by
(7) $G(x)=\prod_{i\in I}G_i(x)$, for $x\in Q$.
Then $\Gamma$ has an equilibrium point $x\in Q$. That is, for each $i\in I$, we have
\[ x_i\in G_i(x) \quad\text{and}\quad F_i(x)\cap P_i(x)=\emptyset; \]
here $x_i$ is the projection of $x$ on $E_i$.
Proof. For each $i\in I$, let
\[ N_i=\{x\in Q : F_i(x)\cap P_i(x)\neq\emptyset\} \]
(so $N_i=Q\setminus M_i$, with $M_i$ as in (3)), and for each $x\in Q$, let
\[ B(x)=\{i\in I : F_i(x)\cap P_i(x)\neq\emptyset\}. \]
For each $i\in I$, define multivalued mappings $A_i,S_i:Q\to 2^{Q_i}$ by
\[ A_i(x)=\mathrm{co}\,P_i(x)\cap F_i(x) \ \text{ if } i\in B(x) \ (\text{that is, } x\in N_i), \qquad A_i(x)=F_i(x) \ \text{ if } i\notin B(x), \]
and
\[ S_i(x)=\mathrm{co}\,P_i(x)\cap G_i(x) \ \text{ if } i\in B(x) \ (\text{that is, } x\in N_i), \qquad S_i(x)=G_i(x) \ \text{ if } i\notin B(x). \]
It is easy to see (use (2) and the definition of $B(x)$) that for each $i\in I$ and $x\in Q$,
\[ \mathrm{co}(A_i(x))\subseteq S_i(x) \quad\text{and}\quad A_i(x)\neq\emptyset. \]
Also, for each $i\in I$ and $y_i\in Q_i$, we have
\[ A_i^{-1}(y_i)=\{x\in Q : y_i\in A_i(x)\}=\{x\in N_i : y_i\in\mathrm{co}\,P_i(x)\cap F_i(x)\}\cup\{x\in M_i : y_i\in F_i(x)\} \]
\[ =[\{x\in N_i : y_i\in\mathrm{co}\,P_i(x)\}\cap\{x\in N_i : y_i\in F_i(x)\}]\cup\{x\in M_i : y_i\in F_i(x)\} \]
\[ =\{[(\mathrm{co}\,P_i)^{-1}(y_i)\cap F_i^{-1}(y_i)]\cap N_i\}\cup[F_i^{-1}(y_i)\cap M_i] \]
\[ =[(\mathrm{co}\,P_i)^{-1}(y_i)\cap F_i^{-1}(y_i)]\cup[F_i^{-1}(y_i)\cap M_i] \]
\[ =[(\mathrm{co}\,P_i)^{-1}(y_i)\cup M_i]\cap F_i^{-1}(y_i), \]
which is open in $Q$ by (3). Thus $S_i\in DKT(Q,Q_i)$. Let $S:Q\to 2^{Q}$ be defined by
\[ S(x)=\prod_{i\in I}S_i(x), \quad\text{for } x\in Q. \]
We now show:
\[ C\subseteq Q \text{ countable},\ C\subseteq\mathrm{co}(\{x_0\}\cup S(C)) \text{ implies } \overline{C} \text{ is compact}. \]
To see this, let $C\subseteq Q$ be countable with $C\subseteq\mathrm{co}(\{x_0\}\cup S(C))$. Now since $S(x)\subseteq G(x)$ for $x\in Q$ (note for each $i\in I$ that $S_i(x)\subseteq G_i(x)$ for $x\in Q$), we have
\[ C\subseteq\mathrm{co}(\{x_0\}\cup G(C)). \]
Now (6) implies $\overline{C}$ is compact, so we have the above implication. Theorem 3.24 guarantees that there exists $x\in Q$ with $x\in S(x)$, that is, $x_i\in S_i(x)$ for each $i\in I$. Note that if $i\in B(x)$ for some $i\in I$, then $F_i(x)\cap P_i(x)\neq\emptyset$, and so $x_i\in\mathrm{co}(P_i(x))\cap G_i(x)$. In particular $x_i\in\mathrm{co}(P_i(x))$, and this contradicts (5). Thus $i\notin B(x)$ for all $i\in I$. Consequently, $F_i(x)\cap P_i(x)=\emptyset$ and $x_i\in G_i(x)$ for all $i\in I$. □
Theorem 3.26. Let $I$ be a countable set and $\Gamma=(Q_i,F_i,G_i,P_i)_{i\in I}$ an abstract economy such that for each $i\in I$ the following conditions hold:
(1) $Q_i$ is a nonempty, closed, convex subset of a Fréchet space $E_i$,
(2) for each $x\in Q$, $F_i(x)\neq\emptyset$ and $\mathrm{co}(F_i(x))\subseteq G_i(x)$,
(3) for each $y_i\in E_i$, the set $[(\mathrm{co}\,P_i)^{-1}(y_i)\cup M_i]\cap F_i^{-1}(y_i)$ is open in $Q$; here $M_i=\{x\in Q : F_i(x)\cap P_i(x)=\emptyset\}$,
(4) $G_i:Q\to 2^{E_i}$ is a compact map, and
(5) $x_i\notin\mathrm{co}(P_i(x))$ for each $x\in Q$; here $x_i$ is the projection of $x$ on $E_i$.
In addition, suppose $0\in Q$ with
(6) if $\{(x_n,\lambda_n)\}_{n\ge 1}$ is a sequence in $\partial Q\times[0,1]$ converging to $(x,\lambda)$ with $x\in\lambda G(x)$ and $0\le\lambda<1$, then there exists $n_0\in\{1,2,\dots\}$ with $\{\lambda_n G(x_n)\}\subseteq Q$ for $n\ge n_0$
holding; here $G:Q\to 2^{E}$ (here $E=\prod_{i\in I}E_i$) is given by
\[ G(x)=\prod_{i\in I}G_i(x). \]
Then $\Gamma$ has an equilibrium point $x\in Q$. That is, for each $i\in I$, we have
\[ x_i\in G_i(x) \quad\text{and}\quad F_i(x)\cap P_i(x)=\emptyset; \]
here $x_i$ is the projection of $x$ on $E_i$.
Proof. For each i ÷ 1, let ·
i
. ¹
i
and 1
i
be as in Theorem 3.25. Essentially
the same reasoning as in Theorem 3.25 guarantees that 1
i
÷ 1T1(Q. 1
i
) for
each i ÷ 1. Also note that 1
i
is a compact map for each i ÷ 1. Let 1 : Q ÷2
E
be as in proof of previous theorem. We wish to apply Theorem 3.17. To see
this, suppose ¦(r
n
. `
n

n`1
is a sequence in 0Q [0. 1[ converging to (r. `)
with r ÷ `1(r) and 0 _ ` < 1. Then, since 1(r) _ G(r) for r ÷ Q, we
have r ÷ `G(r). Now (6) guarantees that there exists :
0
÷ ¦1. 2. ...¦ with
¦`
n
G(r
n
)¦ _ Q for each : _ :
0
. Consequently, ¦`
n
1(r
n
)¦ _ Q for each
: _ :
0
. Theorem 3.17 guarantees that there exists r ÷ Q with r ÷ 1(r). and it
is easy to check, as in Theorem 3.25, that r is an equilibrium point of I.
Remark 3.19. If $E_i$ is a Hilbert space for each $i\in I$, then one could replace "$G_i:Q\to 2^{E_i}$ a compact map for each $i\in I$" in (4) with "$G:Q\to 2^{E}$ a 1-set contractive, condensing map with $G(Q)$ a bounded set in $E$".
Finally, in this subsection, we present two more results for upper semicontinuous maps which extend well-known results in the literature.
Theorem 3.27. Let $I$ be a countable set and $\Gamma=(Q_i,F_i,G_i,P_i)_{i\in I}$ an abstract economy such that for each $i\in I$ the following conditions hold:
(1) $Q_i$ is a nonempty, closed, convex subset of a Fréchet space $E_i$,
(2) $F_i:Q\to 2^{Q_i}$ is such that $\mathrm{co}(F_i(x))\subseteq G_i(x)$,
(3) $G_i:Q\to 2^{Q_i}$ and $G_i(x)$ is convex for each $x\in Q$,
(4) the multivalued mapping $\overline{G}_i:Q\to CK(Q_i)$, defined by $\overline{G}_i(x)=cl_{Q_i}\,G_i(x)$, is upper semicontinuous,
(5) for each $y_i\in Q_i$, $F_i^{-1}(y_i)$ is open in $Q$,
(6) for each $y_i\in Q_i$, $P_i^{-1}(y_i)$ is open in $Q$, and
(7) $x_i\notin\mathrm{co}(P_i(x))$ for each $x\in Q$; here $x_i$ is the projection of $x$ on $E_i$.
In addition, suppose $x_0\in Q$ with
(8) $A\subseteq Q$, $A\subseteq\mathrm{co}(\{x_0\}\cup\overline{G}(A))$ with $A=\overline{C}$ and $C\subseteq A$ countable, implies $A$ is compact
holding; here $\overline{G}:Q\to 2^{Q}$ is given by
(9) $\overline{G}(x)=\prod_{i\in I}\overline{G}_i(x)$, for $x\in Q$.
Then $\Gamma$ has an equilibrium point $x\in Q$. That is, for each $i\in I$, we have
\[ x_i\in\overline{G}_i(x) \quad\text{and}\quad F_i(x)\cap P_i(x)=\emptyset. \]
Proof. Fix $i\in I$ and let $\phi_i:Q\to 2^{Q_i}$ be defined by
\[ \phi_i(x)=\mathrm{co}(F_i(x))\cap\mathrm{co}(P_i(x)), \quad\text{for } x\in Q, \]
and
\[ U_i=\{x\in Q : \phi_i(x)\neq\emptyset\}. \]
Now (5) and (6), together with a result of Yannelis and Prabhakar, imply for each $y\in Q_i$ that $(\mathrm{co}\,F_i)^{-1}(y)$ and $(\mathrm{co}\,P_i)^{-1}(y)$ are open in $Q$. As a result, for each $y\in Q_i$ we have that
\[ \phi_i^{-1}(y)=(\mathrm{co}\,F_i)^{-1}(y)\cap(\mathrm{co}\,P_i)^{-1}(y) \]
is open in $Q$. Now it is easy to check that
\[ U_i=\bigcup_{y\in Q_i}\phi_i^{-1}(y), \]
and as a result we have that $U_i$ is open in $Q$. Since $U_i$ is a subset of the metrizable space $E=\prod_{i\in I}E_i$, we have that $U_i$ is paracompact. Notice as well that $\tilde\phi_i=\phi_i|_{U_i}:U_i\to 2^{Q_i}$ has convex values. Also, for $y\in Q_i$, we have
\[ \tilde\phi_i^{-1}(y)=\{x\in U_i : y\in\phi_i(x)\}=\{x\in Q : y\in\phi_i(x)\}\cap U_i=\phi_i^{-1}(y)\cap U_i, \]
so $\tilde\phi_i^{-1}(y)$ is open in $U_i$. Theorem 3.16 guarantees that there exists a continuous selection $f_i:U_i\to Q_i$ of $\tilde\phi_i$. For each $i\in I$, let $H_i:Q\to 2^{Q_i}$ be given by
\[ H_i(x)=\{f_i(x)\} \ \text{ if } x\in U_i, \qquad H_i(x)=\overline{G}_i(x) \ \text{ if } x\notin U_i. \]
This $H_i$ is upper semicontinuous (note for each $x\in U_i$ that $\{f_i(x)\}\subseteq\phi_i(x)\subseteq\mathrm{co}(F_i(x))\subseteq G_i(x)$). Also notice (4) guarantees that $H_i:Q\to CK(Q_i)$. Let $H:Q\to 2^{Q}$ be given by
\[ H(x)=\prod_{i\in I}H_i(x), \quad\text{for } x\in Q. \]
This $H:Q\to CK(Q)$ is upper semicontinuous. We wish to apply Theorem 3.13 to $H$. To see this, let $A\subseteq Q$ with $A=\mathrm{co}(\{x_0\}\cup H(A))$, $A=\overline{C}$, and $C\subseteq A$ countable. Then since $H_i(x)\subseteq\overline{G}_i(x)$ for each $x\in Q$, we have
\[ H(x)\subseteq\prod_{i\in I}\overline{G}_i(x)=\overline{G}(x), \quad\text{for } x\in Q. \]
Thus
\[ A\subseteq\mathrm{co}(\{x_0\}\cup\overline{G}(A)), \]
so (8) guarantees that $A$ is compact. Theorem 3.13 guarantees that there exists $x\in Q$ with $x\in H(x)$. If $x\in U_i$ for some $i$, then
\[ x_i=f_i(x)\in\mathrm{co}(F_i(x))\cap\mathrm{co}(P_i(x))\subseteq\mathrm{co}(P_i(x)). \]
This contradicts (7). Thus, for each $i\in I$, we must have $x\notin U_i$, so $x_i\in\overline{G}_i(x)$ and $\mathrm{co}(F_i(x))\cap\mathrm{co}(P_i(x))=\emptyset$. Our result follows since
\[ F_i(x)\cap P_i(x)\subseteq\mathrm{co}(F_i(x))\cap\mathrm{co}(P_i(x)). \] □
Remark 3.20. Notice that (5) and (6) in the last theorem could be replaced by
(10) for each $i\in I$ and each $y_i\in Q_i$, $(\mathrm{co}\,F_i)^{-1}(y_i)\cap(\mathrm{co}\,P_i)^{-1}(y_i)$ is open in $Q$,
and the result is again true.
Theorem 3.28. Let $I$ be a countable set and $\Gamma=(Q_i,F_i,G_i,P_i)_{i\in I}$ an abstract economy such that for each $i\in I$ the following conditions hold:
(1) $Q_i$ is a nonempty, closed, convex subset of a Fréchet space $E_i$,
(2) $F_i:Q\to 2^{E_i}$ is such that $\mathrm{co}(F_i(x))\subseteq G_i(x)$,
(3) $G_i:Q\to 2^{E_i}$ and $G_i(x)$ is convex for each $x\in Q$,
(4) the multivalued mapping $\overline{G}_i:Q\to CK(E_i)$, defined by $\overline{G}_i(x)=cl_{E_i}\,G_i(x)$, is upper semicontinuous,
(5) for each $y_i\in E_i$, $F_i^{-1}(y_i)$ is open in $Q$,
(6) for each $y_i\in E_i$, $P_i^{-1}(y_i)$ is open in $Q$, and
(7) $x_i\notin\mathrm{co}(P_i(x))$ for each $x\in Q$; here $x_i$ is the projection of $x$ on $E_i$.
In addition, suppose $0\in Q$ with
(8) if $\{(x_n,\lambda_n)\}_{n\ge 1}$ is a sequence in $\partial Q\times[0,1]$ converging to $(x,\lambda)$ with $x\in\lambda\overline{G}(x)$ and $0\le\lambda<1$, then there exists $n_0\in\{1,2,\dots\}$ with $\{\lambda_n\overline{G}(x_n)\}\subseteq Q$ for $n\ge n_0$
holding; here $\overline{G}:Q\to 2^{E}$ (here $E=\prod_{i\in I}E_i$) is given by
\[ \overline{G}(x)=\prod_{i\in I}\overline{G}_i(x). \]
Then $\Gamma$ has an equilibrium point $x\in Q$. That is, for each $i\in I$, we have
\[ x_i\in\overline{G}_i(x) \quad\text{and}\quad F_i(x)\cap P_i(x)=\emptyset; \]
here $x_i$ is the projection of $x$ on $E_i$.
Proof. Let $\phi_i$, $U_i$, $H_i$, and $H$ be as in the previous theorem. Essentially the same reasoning as in the previous theorem guarantees that $H:Q\to CK(E)$ is upper semicontinuous. Notice also that $H$ is compact. We wish to apply Theorem 3.15 to $H$. To see this, suppose $\{(x_n,\lambda_n)\}_{n\ge 1}$ is a sequence in $\partial Q\times[0,1]$ converging to $(x,\lambda)$ with $x\in\lambda H(x)$ and $0\le\lambda<1$. Then, since $H(x)\subseteq\overline{G}(x)$ for $x\in Q$, we have $x\in\lambda\overline{G}(x)$. Now (8) guarantees that there exists $n_0\in\{1,2,\dots\}$ with $\{\lambda_n\overline{G}(x_n)\}\subseteq Q$ for each $n\ge n_0$. Consequently, $\{\lambda_n H(x_n)\}\subseteq Q$ for each $n\ge n_0$. Theorem 3.15 guarantees that there exists $x\in Q$ with $x\in H(x)$.
Theorem 3.29. Let $I$ be a countable set and $\Gamma=(Q_i,D_i,F_i,P_i)_{i\in I}$ an abstract economy such that for each $i\in I$ the following conditions hold:
(1) $Q_i$ is convex,
(2) $D_i$ is a nonempty compact subset of $Q_i$,
(3) for each $x\in Q$, $F_i(x)$ is a nonempty convex subset of $D_i$,
(4) for each $x_i\in D_i$, the set $\{P_i^{-1}(x_i)\cup U_i\}\cap F_i^{-1}(x_i)$ contains a relatively open subset $O_{x_i}$ of $\mathrm{co}\,D$ such that $\bigcup_{x_i\in D_i}O_{x_i}=\mathrm{co}\,D$, where $U_i=\{x\in Q : F_i(x)\cap P_i(x)=\emptyset\}$ and $D=\prod_{i\in I}D_i$;
(5) for each $x=\{x_i\}\in Q$, $x_i\notin\mathrm{co}\,P_i(x)$.
Then $\Gamma$ has an equilibrium point.
Proof. For each $i\in I$, let
\[ G_i=\{x\in Q : F_i(x)\cap P_i(x)\neq\emptyset\} \]
and for each $x\in Q$, let
\[ B(x)=\{i\in I : F_i(x)\cap P_i(x)\neq\emptyset\}. \]
Now for each $i\in I$ we define a multivalued mapping $T_i:Q\to 2^{D_i}$ by
\[ T_i(x)=\mathrm{co}\,P_i(x)\cap F_i(x) \ \text{ if } i\in B(x), \qquad T_i(x)=F_i(x) \ \text{ if } i\notin B(x). \]
Clearly, for each $x\in Q$, $T_i(x)$ is a nonempty convex subset of $D_i$. Also, for each $y_i\in D_i$,
\[ T_i^{-1}(y_i)=[\{(\mathrm{co}\,P_i)^{-1}(y_i)\cap F_i^{-1}(y_i)\}\cap G_i]\cup[F_i^{-1}(y_i)\cap U_i] \]
\[ \supseteq[\{P_i^{-1}(y_i)\cap F_i^{-1}(y_i)\}\cap G_i]\cup[F_i^{-1}(y_i)\cap U_i] \]
\[ =[P_i^{-1}(y_i)\cap F_i^{-1}(y_i)]\cup[F_i^{-1}(y_i)\cap U_i]=[P_i^{-1}(y_i)\cup U_i]\cap F_i^{-1}(y_i). \]
We note that the first inclusion follows from the fact that for each $y_i\in D_i$, $P_i^{-1}(y_i)\subseteq(\mathrm{co}\,P_i)^{-1}(y_i)$, because $P_i(x)\subseteq(\mathrm{co}\,P_i)(x)$ for each $x\in Q$. Furthermore, by virtue of (4), for each $y_i\in D_i$, $T_i^{-1}(y_i)$ contains a relatively open set $O_{y_i}$ of $\mathrm{co}\,D$ such that $\bigcup_{y_i\in D_i}O_{y_i}=\mathrm{co}\,D$. Hence, by a result of Hussain and Tarafdar, there exists a point $x=\{x_i\}$ such that $x_i\in T_i(x)$ for each $i\in I$. By condition (5) and the definition of $T_i$, it now easily follows that $x\in Q$ is an equilibrium point of $\Gamma$.
Corollary 3.4. Let $I$ be a countable set and $\Gamma=(Q_i,D_i,F_i,P_i)_{i\in I}$ an abstract economy such that for each $i\in I$ the following conditions hold:
(1) $Q_i$ is convex,
(2) $D_i$ is a nonempty compact subset of $Q_i$,
(3) for each $x\in Q$, $F_i(x)$ is a nonempty convex subset of $D_i$,
(4) the set $G_i=\{x\in Q : F_i(x)\cap P_i(x)\neq\emptyset\}$ is a closed subset of $Q$,
(5) for each $y_i\in D_i$, $P_i^{-1}(y_i)$ is a relatively open subset of $G_i$ and $F_i^{-1}(y_i)$ is a relatively open subset of $Q$,
(6) for each $x=\{x_i\}\in Q$, $x_i\notin\mathrm{co}\,P_i(x)$.
Then there is an equilibrium point of the economy $\Gamma$.
Proof. Since $P_i^{-1}(y_i)$ is relatively open in $G_i$, there is an open subset $W_i$ of $Q$ with $P_i^{-1}(y_i)=G_i\cap W_i$. Hence, for $y_i\in D_i$,
\[ P_i^{-1}(y_i)\cup U_i=(G_i\cap W_i)\cup U_i=Q\cap(W_i\cup U_i)=W_i\cup U_i. \]
Thus
\[ \{P_i^{-1}(y_i)\cup U_i\}\cap F_i^{-1}(y_i)=(W_i\cup U_i)\cap F_i^{-1}(y_i)=O_{y_i}, \]
say, is a relatively open subset of $Q$ for each $y_i\in D_i$, since $W_i$, $U_i$ and $F_i^{-1}(y_i)$ are open subsets of $Q$. Now it follows that $\bigcup_{y_i\in D_i}O_{y_i}=\mathrm{co}\,D$. The corollary is thus a consequence of Theorem 3.29.
3.4 Existence of first-order locally consistent equilibria
3.4.1 Introduction
A first-order locally consistent equilibrium (1-LCE) of a game is a configuration of strategies at which the first-order condition for payoff maximization is simultaneously satisfied for all players. The economic motivation for introducing this equilibrium concept is that oligopolistic firms do not know their effective demand function; rather, "at any given status quo each firm knows only the linear approximation of its demand curve and believes it to be the demand curve it faces". In what follows, in order to distinguish between the abstract concept of a 1-LCE, that is, a configuration of a game in which the first-order condition for payoff maximization is satisfied for all players, and its economic interpretation, that is, a profit-maximizing configuration in a market or in an economy in which firms know only the linear approximation of their demand functions, the latter equilibrium concept will be called a first-order locally consistent economic equilibrium (1-LCEE) (see [1], [22], [23]).
3.4.2 First-order equilibria for non-cooperative games
Consider the following non-cooperative game $\Gamma=(I,(S_i),(H_i))_{i\in I}$, where $I=\{1,2,\dots,n\}$ is the index set of players, $S_i$ is the strategy set of player $i$, and $H_i$ is the payoff function of player $i$. Set $S=\prod_{i\in I}S_i$ and $S_{-i}=\prod_{j\in I,\,j\neq i}S_j$. The generic element of the set $S$ (respectively $S_{-i}$, respectively $S_i$) is denoted by $x$ (resp. $x_{-i}$, resp. $x_i$). Denote by $D_{x_i}H_i$ the derivative of $H_i$ with respect to $x_i$. The derivative of $H_i$ with respect to $x_i$ calculated at the point $x$ is denoted by $D_{x_i}H_i(x)$.
A.1. For every $i\in I$, $S_i$ is a convex and compact subset of a Banach space.
A.2. For every $i\in I$, the function $H_i:S\to\mathbb{R}$ is continuous; moreover, for every $x\in S$, the derivative $D_{x_i}H_i$ exists and is continuous, that is, there exists an open set $V_i^0\supseteq S_i$ and an extension of the function $H_i$ to $V_i^0$ which is continuously differentiable with respect to $x_i$.
Definition 3.7. A 1-LCE for the game $\Gamma$ is a configuration $x^*\in S$ such that:
(i) if $x_i^*\in S_i\setminus\partial S_i$, then $D_{x_i}H_i(x^*)=0$;
(ii) if $x_i^*\in\partial S_i$, then there exists a neighborhood $N(x_i^*)$ of $x_i^*$ in $S_i$ such that
\[ D_{x_i}H_i(x^*)(x_i-x_i^*)\le 0, \quad\text{for every } x_i\in N(x_i^*). \]
Condition (ii) means that if $x_i^*$ belongs to the boundary of the strategy set, then either it satisfies the first-order condition for payoff maximization or it is a local maximum. Notice that Definition 3.7 is in line with the usual idea that at 1-LCEs players carry out local experiments by employing the linear approximations of some appropriate function, in this case the payoff function.
Given a configuration $x^0\in S$, interpreted as the status quo, define the function $L_i:S\times S_i\to\mathbb{R}$ as follows: $L_i(x^0,x_i)=H_i(x^0)+D_{x_i}H_i(x^0)(x_i-x_i^0)$. With some abuse of language, the following fictitious $n$-person non-cooperative game $\Gamma^c=(I,(S_i),(H_i),(L_i))_{i\in I}$ will be associated to the game $\Gamma$. In the game $\Gamma^c$, given the status quo $x^0$, the best strategy for player $i$ is the solution to the following problem:
\[ (P_i) \quad \max\ L_i(x^0,x_i), \quad\text{such that } x_i\in S_i. \]
Denote by $B_i(x^0)$ the set of solutions to problem $(P_i)$.
If we interpret the game $\Gamma^c$ as an oligopolistic game among firms which choose, for example, the level of production, then the behavioral hypothesis underlying problem $(P_i)$ is that, given the status quo, firms maximize the linear approximation of their profit functions.
Definition 3.8. An equilibrium for the game $\Gamma^c$ is a configuration $x^*\in S$ such that $x_i^*\in B_i(x^*)$ for every $i\in I$. Denote by $E(\Gamma^c)$ the set of equilibria of the game $\Gamma^c$, and by $LCE(\Gamma)$ the set of 1-LCEs of the game $\Gamma$.
Theorem 3.30. Under A.1 and A.2, $LCE(\Gamma)\neq\emptyset$.
Proof. First we show that $E(\Gamma^c)=LCE(\Gamma)$. Suppose that $x^*\in E(\Gamma^c)$. It is sufficient to show that $x^*$ satisfies the following conditions:
(i) if $x_i^*\in S_i\setminus\partial S_i$, then $D_{x_i}H_i(x^*)=0$;
(ii) if $x_i^*\in\partial S_i$, then $D_{x_i}H_i(x^*)(x_i-x_i^*)\le 0$ for every $x_i\in S_i$.
To this end, suppose that $x_i^*\in S_i\setminus\partial S_i$ but $D_{x_i}H_i(x^*)\neq 0$. Since $x_i^*$ is an interior point of $S_i$, the linearity of $L_i$ implies that there exists a point $x_i^l\in\partial S_i$ such that $L_i(x^*,x_i^l)>L_i(x^*,x_i^*)$, which is a contradiction. Suppose now that $x_i^*\in\partial S_i$ and $D_{x_i}H_i(x^*)(x_i-x_i^*)>0$ for some $x_i\in S_i$. Clearly, $x_i^*$ does not solve problem $(P_i)$, a contradiction. Summarizing, $x^*\in LCE(\Gamma)$.
Finally, suppose that $x^*\in LCE(\Gamma)$. Then $x^*$ satisfies conditions (i) and (ii) in Definition 3.7. If $x_i^*\in S_i\setminus\partial S_i$, then $D_{x_i}H_i(x^*)=0$; therefore $L_i(x^*,x_i)=H_i(x^*)$ for every $x_i\in S_i$. It follows that $x_i^*$ solves problem $(P_i)$. Consider now the case $x_i^*\in\partial S_i$ with $D_{x_i}H_i(x^*)(x_i-x_i^*)\le 0$ for every $x_i$ in some neighborhood $N(x_i^*)$ of $x_i^*$. By linearity, one obtains that $D_{x_i}H_i(x^*)(x_i-x_i^*)\le 0$ for every $x_i\in S_i$. Thus, also in this case, $x_i^*$ solves problem $(P_i)$. Therefore $x^*\in E(\Gamma^c)$.
Now it is sufficient to show that $E(\Gamma^c)\neq\emptyset$. By A.2 it follows that the function $L_i:S\times S_i\to\mathbb{R}$ is continuous. Thus, by Berge's maximum theorem, the multivalued mapping $B_i:S\multimap S_i$ is upper hemicontinuous. It is also convex-valued, because of the linearity of $L_i$. Define the multivalued mapping $B:S\multimap S$ as follows: $B=\prod_{i\in I}B_i$. Because of A.1, Bohnenblust and Karlin's fixed point theorem ensures that there exists $x^*\in S$ such that $x^*\in B(x^*)$. Thus $x^*\in E(\Gamma^c)$.
3.4.3 Existence of a first-order economic equilibrium
Next, in the following example, we prove the existence of a first-order locally consistent economic equilibrium in a model of monopolistic competition similar to that of Bonanno and Zeeman.
Example 3.2. We consider a monopolistically competitive market with $n$ price-making firms, $i\in I$, $I=\{1,2,\dots,n\}$. The cost function of firm $i$ is $C_i(q_i)=c_iq_i$, where $q_i$ is the level of output of firm $i$ and $c_i$ is a positive number. We assume that firm $i$ can choose any price in the interval $J_i=[c_i,b_i]$. Set $J=\prod_{i\in I}J_i$ and $J_{-i}=\prod_{j\in I,\,j\neq i}J_j$. The price set by firm $i$ is denoted by $p_i$. Denote by $p_{-i}$ the $(n-1)$-dimensional vector whose elements are the prices set by all firms except the $i$-th one. Set $p=(p_i,p_{-i})$. The function $D_i:J\to\mathbb{R}$ is the demand function of firm $i$, and it is indicated by $D_i(p)$. The true profits of the firms are given by
\[ H_i(p)=D_i(p)(p_i-c_i). \] □
Next, we show that there exists a first-order locally consistent economic equilibrium for the above monopolistic market.
We suppose that:
A.1. For every $i\in I$, the function $D_i$ is continuous on $J$, and the derivative $\partial D_i/\partial p_i:J\to\mathbb{R}$ exists and is continuous.
A.2. For every $p_{-i}\in J_{-i}$, if $D_i(p_i',p_{-i})=0$ for some $p_i'\in J_i\setminus\{b_i\}$, then $(\partial D_i/\partial p_i)(p_i',p_{-i})\le 0$ and $D_i(p_i'',p_{-i})=0$ for every $p_i''\ge p_i'$.
Here it is possible that for every price in $J_i$, firm $i$'s market demand is zero.
Remark 3.21. We shall assume that firms maximize their conjectural profit function, calculated by taking into account the linear approximation of their demand function. Given the status quo $p^0\in J$, the conjectural demand of firm $i$ is
\[ \Delta_i(p_i,p^0):=D_i(p^0)+(\partial D_i/\partial p_i)(p^0)(p_i-p_i^0), \]
and the conjectural profit is
\[ H_i^*(p_i,p^0):=\Delta_i(p_i,p^0)(p_i-c_i). \]
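The formulas of Remark 3.21 can be made concrete with a linear demand specification of our own choosing (the coefficients $a$, $b$, $c$ and the unit cost below are assumptions, not part of the text). For linear demand, the conjectural demand coincides with the true demand at every price, so the conjectural-profit maximizer admits the closed form $p_i=(a+c\,p_j+b\,c_i)/(2b)$.

```python
# Illustration of Remark 3.21 with a hypothetical linear duopoly demand
# D_i(p) = a - b p_i + c p_j, cost C_i(q) = cost * q.

a, b, c, cost = 10.0, 2.0, 1.0, 1.0

def D(pi, pj):                        # true demand of firm i
    return a - b * pi + c * pj

def conj_demand(pi, p0i, p0j):        # Delta_i(p_i, p^0)
    return D(p0i, p0j) + (-b) * (pi - p0i)

def conj_profit(pi, p0i, p0j):        # H_i^*(p_i, p^0)
    return conj_demand(pi, p0i, p0j) * (pi - cost)

# Linear demand: the conjectural demand equals the true demand exactly.
assert abs(conj_demand(3.7, 2.0, 4.0) - D(3.7, 4.0)) < 1e-12

# Maximizing H_i^* over p_i gives p_i = (a + c p_j + b cost) / (2 b).
p0j = 4.0
best = (a + c * p0j + b * cost) / (2 * b)
eps = 1e-4
assert conj_profit(best, best, p0j) >= conj_profit(best - eps, best, p0j)
assert conj_profit(best, best, p0j) >= conj_profit(best + eps, best, p0j)
```

In the linear case the distinction between 1-LCEE and ordinary Bertrand-style equilibrium disappears; the distinction matters precisely when $D_i$ is nonlinear.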
Definition 3.9. A first-order locally consistent economic equilibrium is a vector $p^*\in J$ such that for every $i\in I$ we have
\[ H_i^*(p_i^*,p^*)\ge H_i^*(p_i,p^*), \quad\text{for every } p_i\in J_i. \] □
Definition 3.9 means that at equilibrium firms are maximizing their conjectural profit function. It is easily seen that if $p^*$ is a first-order locally consistent economic equilibrium, then:
i) $\Delta_i(p_i^*,p^*)=D_i(p^*)$, and
ii) $(\partial\Delta_i/\partial p_i)(p^*)=(\partial D_i/\partial p_i)(p^*)$.
Condition i) means that at equilibrium the conjectural demand must be equal to the true demand. Condition ii) means that at equilibrium the slope of the true demand function is equal to the slope of the conjectural demand.
We have
Theorem 3.31. Under A.1 and A.2 there exists a first-order locally consistent economic equilibrium.
Proof. By setting $S_i=J_i$ and $x_i=p_i$, $i\in I$, the industry we are considering reduces to the game $\Gamma$ considered above. Under A.1 and A.2 the game $\Gamma$ clearly has a first-order locally consistent equilibrium $x^*=(x_i^*)_{i\in I}$. Set $p_i^*=x_i^*$, $i\in I$. Thus, to prove Theorem 3.31 it is sufficient to prove that if $(p_i^*)_{i\in I}$ is a first-order locally consistent equilibrium, then it satisfies the condition in Definition 3.9. We have to consider three possible cases:
a) $p_i^*=b_i$, b) $p_i^*=c_i$, c) $p_i^*\in J_i\setminus\partial J_i$, $i\in I$.
Case a). $p_i^*=b_i$. Assumption A.2 ensures that $D_i(p^*)=(\partial D_i/\partial p_i)(p^*)=0$. It follows that $\Delta_i(p_i,p^*)=0$, $p_i\in J_i$. Therefore
\[ H_i^*(p_i^*,p^*)=H_i^*(p_i,p^*)=0, \quad p_i\in J_i. \]
Thus, the condition in Definition 3.9 is satisfied.
Case b). $p_i^*=c_i$. Two cases can occur:
$b_1$) $(\partial H_i/\partial p_i)(p^*)=0$;
$b_2$) $(\partial H_i/\partial p_i)(p^*)\neq 0$.
In case $b_1$) it is not possible that $D_i(p^*)>0$. In fact, if it were so, one would have $(\partial H_i/\partial p_i)(p^*)=D_i(p^*)>0$, which is a contradiction. If $D_i(p^*)=0$, then $H_i^*(p_i^*,p^*)\ge H_i^*(p_i,p^*)$ for $p_i\in J_i$, since $H_i^*(p_i^*,p^*)=0$ and
\[ H_i^*(p_i,p^*)=((\partial D_i/\partial p_i)(p^*)(p_i-p_i^*))(p_i-c_i)\le 0, \]
because $p_i^*=c_i$ and $(\partial D_i/\partial p_i)(p^*)\le 0$ from assumption A.2.
In case $b_2$), by the fact that $p^*$ is a first-order locally consistent equilibrium, it must satisfy the condition
\[ (D_i(p^*)+(\partial D_i/\partial p_i)(p^*)(p_i^*-c_i))(p_i-p_i^*)\le 0, \quad p_i\in N(c_i), \]
where $N(c_i)$ is a right neighborhood of $c_i$. Because $p_i^*=c_i$, one has $D_i(p^*)(p_i-p_i^*)\le 0$, $p_i\in J_i$. This implies that $D_i(p^*)=0$, and therefore, by A.2, that $(\partial D_i/\partial p_i)(p^*)\le 0$ and that $D_i(p_i,p_{-i}^*)=0$ for every $p_i\in J_i\setminus\{c_i\}$.
We shall prove that $H_i^*(p_i^*,p^*)\ge H_i^*(p_i,p^*)$ for $p_i\in J_i$. In fact, $H_i^*(p_i^*,p^*)=0$, while
\[ H_i^*(p_i,p^*)=(D_i(p^*)+(\partial D_i/\partial p_i)(p^*)(p_i-p_i^*))(p_i-c_i)=(\partial D_i/\partial p_i)(p^*)(p_i-c_i)^2\le 0 \]
for every $p_i\in J_i\setminus\{c_i\}$, from the above argument. Thus, also in this case, the condition of Definition 3.9 is satisfied.
Case c). $p_i^*\in J_i\setminus\partial J_i$. By the definition of a first-order locally consistent equilibrium, one must have $(\partial H_i/\partial p_i)(p^*)=0$.
Two cases can occur:
$c_1$) $D_i(p^*)>0$, and
$c_2$) $D_i(p^*)=0$.
In case $c_1$), by noticing that $(\partial H_i/\partial p_i)(p^*)=0$ implies $(\partial D_i/\partial p_i)(p^*)<0$, and that $(\partial^2 H_i^*/\partial p_i^2)(p_i^*,p^*)=2(\partial D_i/\partial p_i)(p^*)$, one can conclude that $(\partial H_i/\partial p_i)(p^*)=0$ implies $(\partial^2 H_i^*/\partial p_i^2)(p_i^*,p^*)<0$. Thus the condition in Definition 3.9 is satisfied.
In case $c_2$), if we prove that $(\partial D_i/\partial p_i)(p^*)=0$, we have completed the proof, because in this case $H_i^*(p_i^*,p^*)=H_i^*(p_i,p^*)=0$, $p_i\in J_i$. Suppose, on the contrary, that $(\partial D_i/\partial p_i)(p^*)<0$; then
\[ (\partial H_i/\partial p_i)(p^*)(p_i-p_i^*)=(D_i(p^*)+(\partial D_i/\partial p_i)(p^*)(p_i^*-c_i))(p_i-p_i^*)=(\partial D_i/\partial p_i)(p^*)(p_i^*-c_i)(p_i-p_i^*)>0 \]
for $p_i<p_i^*$, contradicting the hypothesis that $p^*$ is a first-order locally consistent equilibrium. Thus, also in this last case, the condition in Definition 3.9 is satisfied. The proof is complete.
Remark 3.22. In [9], Bonanno and Zeeman provided a general existence result for first-order locally consistent equilibria in an abstract game-theoretic setting, and they employed their existence result to prove the existence of a first-order locally consistent equilibrium in a monopolistically competitive industry with price-making firms.
3.4.4 First-order equilibria for an abstract economy
We consider an abstract economy with production, with $m$ firms and $n$ goods, given by
\[ E=(G,I,J,(u_i)_{i\in I},(X_i)_{i\in I},(\omega_i)_{i\in I},(\theta_i)_{i\in I},(Y_j)_{j\in J}), \]
where $G$, $I$, respectively $J$, are the index sets of goods, households, respectively firms.
Given the production profile $y=(y_1,y_2,\dots,y_m)\in Y$, where $Y=\prod_{j\in J}Y_j$, the intermediate endowment of consumer $i$ is $\omega_i^0(y)=\omega_i+\sum_{j\in J}\theta_{ij}y_j$. We denote by $f_i(p,y)$ the individual demand mapping, and by $z(p,y)$ the aggregate excess demand mapping of the economy at the price $p\in\Delta\subset\mathbb{R}^n_+$, given the production profile $y$. The symbol $W(y)$ indicates the set of Walrasian prices associated with the production profile $y$, that is, $W(y)=\{p\in\Delta : z(p,y)=0\}$.
We set $\Omega=\{y : \omega_i^0(y)>0,\ i\in I\}$.
We suppose that:
A1. For all $i\in I$, $u_i$ is such that $f_i(p,y)$ is single-valued, strictly positive and of class $C^\infty$ on $\mathbb{R}^n_{++}\times\Omega$.
A2. $Y\subset\Omega$. Moreover, $Y$ is compact, and $Y_j$ is a convex set, $j\in J$.
A3. If $W(y)$ is nonempty, then $W(y)$ is a singleton.
A4. For all $y\in\Omega$, the rank of $D_p\hat z[p(y),y]$ is $n-1$, where $\hat z$ is the function $z$ without its last component, $p(y)\in W(y)$, and $D_p$ is the derivative with respect to the first $n-1$ components of $p$.
Producer $j$ calculates his profits on the basis of the linear approximation of the effective demand function
\[ p_j^*(y_j,y^0)=p(y^0)+(y_j-y_j^0)D_{y_j}p(y^0)^T, \]
where $y^0$ is a status quo, $D_{y_j}$ denotes the derivative with respect to $y_j$, and the symbol $T$ indicates the operation of transposition for matrices.
Definition 3.10. A first-order locally consistent economic equilibrium for the economy $E$ is a configuration $(p^*,(y_j^*)_{j\in J})\in\Delta\times Y$ such that
\[ p_j^*(y_j^*,y^*)y_j^*\ge p_j^*(y_j,y^*)y_j, \quad y_j\in Y_j,\ j\in J. \] □
This definition means that at a first-order locally consistent economic equilibrium firms are maximizing their profits according to their perceived demand functions. It is easily seen that if $(p^*,(y_j^*)_{j\in J})$ is a first-order locally consistent economic equilibrium, then
(a) $p_j^*(y_j^*,y^*)=p(y^*)$, $j\in J$, and
(b) $D_{y_j}p_j^*(y_j^*,y^*)=D_{y_j}p(y^*)$, $j\in J$.
Condition (a) means that at a first-order locally consistent economic equilibrium the perceived prices are equal to the true ones, while condition (b) means that the slopes of the perceived demand curves are equal to the slopes of the true demand curves.
We have
Theorem 3.32. If the assumptions A1-A4 hold and $vD_{y_j}p(y)v^T\le 0$ for every $y\in Y$ and for every $v\in\mathbb{R}^n$, then the economy $E$ has a first-order locally consistent economic equilibrium.
Proof. If we set $S_j=Y_j$, $x_j=y_j$ and $H_j(y_j,y)=p_j^*(y_j,y)y_j$, $j\in J$, the economy $E$ reduces to the game $\Gamma$ introduced in the first subsection of this section. Under assumptions A1-A4, $p(y)$ is $C^1$, and this game clearly has a first-order locally consistent equilibrium $(x_j^*)_{j\in J}$. We set $y_j^*=x_j^*$, $j\in J$. In order to prove the theorem it is sufficient to prove that $y^*=(y_j^*)_{j\in J}$ satisfies the condition in Definition 3.10.
To this end, note that since $y^*$ is a first-order locally consistent equilibrium, it must satisfy the condition
\[ [p(y^*)+y_j^*D_{y_j}p(y^*)^T](y_j-y_j^*)\le 0, \]
or, equivalently,
\[ p(y^*)(y_j-y_j^*)\le y_j^*D_{y_j}p(y^*)^T(y_j^*-y_j), \quad y_j\in Y_j. \]
We prove the assertion if we show that $y^*$ satisfies the following condition:
\[ p(y^*)y_j^*\ge p(y^*)y_j+(y_j-y_j^*)D_{y_j}p(y^*)^Ty_j, \quad y_j\in Y_j, \]
that is,
\[ [p(y^*)+y_jD_{y_j}p(y^*)^T](y_j-y_j^*)\le 0, \quad y_j\in Y_j. \]
From the first member of the last relationship, and by taking into account the previous relationship, one obtains
\[ p(y^*)(y_j-y_j^*)+y_jD_{y_j}p(y^*)^T(y_j-y_j^*)\le y_j^*D_{y_j}p(y^*)^T(y_j^*-y_j)-y_jD_{y_j}p(y^*)^T(y_j^*-y_j) \]
\[ =(y_j^*-y_j)D_{y_j}p(y^*)^T(y_j^*-y_j)\le 0, \quad y_j\in Y_j, \]
by assumption. This ends the proof.
3.5 Existence of equilibrium in generalized games with non-convex strategy spaces
3.5.1 Introduction
The generalized game concept (or abstract economy) extends the notion of a Nash non-cooperative game, in which each player's strategy set depends on the choices of all the other players. This concept was introduced by Debreu, who proved the existence of equilibrium in generalized games under general assumptions. Arrow and Debreu applied this result to obtain the existence of competitive equilibrium by considering convex strategy subsets of a finite-dimensional space and a finite number of agents with continuous quasi-concave utility functions. Since then, the Arrow-Debreu result has been extended in several directions by assuming weaker conditions on strategy spaces, agent preferences, and so on. We should mention the works of Gale and Mas-Colell, who consider preference relations which are not transitive or complete; Shafer and Sonnenschein, and Border, who modify the continuity conditions on the constraint and preference multivalued mappings; as well as Borglin and Keiding, Yannelis and Prabhakar, and Tarafdar, who consider infinite-dimensional strategy spaces or an infinite number of agents. Most of these existence theorems are proven by assuming convexity conditions on the strategy spaces as well as on the constraint multivalued mappings, which allow one to apply well-known fixed point theorems, such as those of Brouwer, Kakutani or Browder. The purpose of this section is to present generalizations of some of these results on the existence of equilibrium in generalized games by relaxing the convexity conditions. In order to do that, we make use of a new abstract convexity notion, called mc-spaces, which generalizes usual convexity as well as other abstract convexity structures. These results cover situations in which neither the strategy spaces nor the preferences are convex.
3.5.2 Abstract convexity
This subsection is devoted to introducing the new notion of abstract convexity, mc-spaces, which will be used throughout the section. Formally, an abstract convexity on a set X is a family 𝒜 = {A_i}_{i∈I} of subsets of X stable under arbitrary intersections, that is, ∩_{i∈J} A_i ∈ 𝒜 for all J ⊂ I, and containing the empty set and the total set, ∅, X ∈ 𝒜. The notion of mc-space is based on the idea of replacing the linear segments which join any pair of points (or the convex hull of a finite set of points) in the usual convexity by a path (respectively, a set) that will play its role.

Definition 3.11. A topological space X is an mc-space, or has an mc-structure, if for any nonempty finite subset A ⊂ X there exist an ordering of it, namely A = {a_0, a_1, ..., a_n}, a set of elements {b_0, b_1, ..., b_n} ⊂ X (not necessarily different), and a family of functions

P^A_i : X × [0,1] → X, i = 0, 1, ..., n,

such that

1. P^A_i(x, 0) = x, P^A_i(x, 1) = b_i, ∀ x ∈ X;

2. the function G^A : [0,1]^n → X given by

G^A(t_0, t_1, ..., t_{n−1}) = P^A_0(...(P^A_{n−1}(P^A_n(b_n, 1), t_{n−1}), ...), t_0)

is continuous.
Remark 3.23. Note that if P^A_i(x, t) is continuous in t, then P^A_i(x, [0,1]) represents a continuous path which joins x and b_i. These paths depend, in some sense, on the points which are considered, as well as on the finite subset A which contains them. Thus, the function G^A can be interpreted as follows: P^A_{n−1}(b_n, t_{n−1}) = y_{n−1} represents a point of the path which joins b_n with b_{n−1}; P^A_{n−2}(y_{n−1}, t_{n−2}) = y_{n−2} is a point in the path which joins y_{n−1} with b_{n−2}; etc. So G^A can be seen as a composition of these paths and can be considered as an abstract convex combination of the finite set A.
Given an mc-structure, it is possible to consider the abstract convexity given by the family of those sets which are stable under the function G^A. In order to define this convexity, we need some preliminary concepts.
Definition 3.12. If X is an mc-space, Z a subset of X, and we denote by ⟨X⟩ the family of nonempty finite subsets of X, then for all A ∈ ⟨X⟩ such that A ∩ Z ≠ ∅, A ∩ Z = {a_{i_0}, a_{i_1}, ..., a_{i_m}} (i_0 < i_1 < ... < i_m), we define the restriction of the function G^A to Z as follows:

G^{A|Z} : [0,1]^m → X,
G^{A|Z}(t) = P^A_{i_0}(...(P^A_{i_{m−1}}(P^A_{i_m}(b_{i_m}, 1), t_{i_{m−1}}), ...), t_{i_0}),

where the P^A_{i_k} are the functions associated with the elements a_{i_k} ∈ A ∩ Z.
By making use of this notion, we can define mc-sets, which generalize usual convex sets.

Definition 3.13. A subset Z of an mc-space X is an mc-set if and only if

∀ A ∈ ⟨X⟩ with A ∩ Z ≠ ∅: G^{A|Z}([0,1]^m) ⊂ Z,

where m = |A ∩ Z| − 1.

Since the family of mc-sets is stable under arbitrary intersections, it defines an abstract convexity on X. Furthermore, we can define the mc-hull operator in the usual way:

C_{mc}(Z) = ∩ {B | Z ⊂ B, B is an mc-set}.

Then it is obvious that

∀ A ∈ ⟨X⟩ with A ∩ Z ≠ ∅: G^{A|Z}([0,1]^m) ⊂ C_{mc}(Z).
Remark 3.24. If X is a convex subset of a topological vector space, for any finite subset A = {a_0, a_1, ..., a_n} we can define the functions P^A_i(x, t) = (1 − t)x + t a_i, which represent the segment joining a_i and x when t ∈ [0,1]. In this case, the image of the composition, G^A([0,1]^n), coincides with the convex hull of A, so mc-sets generalize convex sets. Other abstract convexity structures which are generalized by the notion of mc-structure are simplicial convexity, c-spaces (or H-spaces), G-convex spaces, and so on.
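In this classical case the abstract convex combination G^A can be written out and tested directly. The following sketch (an illustration under the assumptions of Remark 3.24, where b_i = a_i and P^A_i(x, t) = (1 − t)x + t a_i) iterates the functions P^A_i and tracks the induced weights, confirming that G^A(t) is a convex combination of the points of A:

```python
# G^A in the convex case of Remark 3.24: start at a_n and repeatedly
# move toward a_i along the segment P_i^A(x, t) = (1 - t) x + t a_i,
# for i = n-1, ..., 0.  The weight vector w records the resulting
# convex combination, so G^A(t) lies in the convex hull of A.

def G(A, ts):
    """A: list of points (tuples); ts: parameters t_0, ..., t_{n-1} in [0, 1]."""
    n = len(A) - 1
    assert len(ts) == n
    y = A[n]                            # P_n^A(b_n, 1) = a_n here
    w = [0.0] * (n + 1)
    w[n] = 1.0
    for i in range(n - 1, -1, -1):      # compose P_{n-1}^A, ..., P_0^A
        t = ts[i]
        y = tuple((1 - t) * yc + t * ac for yc, ac in zip(y, A[i]))
        w = [(1 - t) * wk for wk in w]
        w[i] += t
    return y, w

A = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
y, w = G(A, [0.3, 0.5])
# Nonnegative weights summing to 1 (here 0.3, 0.35, 0.35 up to rounding):
# y = (0.35, 0.35) is the corresponding convex combination of A.
```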
It is important to point out that in some applications the space is required to satisfy local properties, so we also introduce the notion of local convexity in the context of mc-spaces.

Definition 3.14. A metric mc-space (X, d) is a locally mc-space if and only if for all ε > 0, B(K, ε) = {x | x ∈ X, d(x, K) < ε} is an mc-set whenever K is an mc-set.

It is not hard to prove that the product of mc-spaces is an mc-space, and that the product of a countable quantity of locally mc-spaces is also a locally mc-space.
Next, the notions of KF*-multivalued mapping and KF*-majorized multivalued mapping, introduced by Borglin and Keiding, are defined in the context of mc-spaces.

Definition 3.15. If X is an mc-space, then an mc-set valued multivalued mapping φ : X ⇉ X is a KF*-multivalued mapping if for all x ∈ X, φ^{−1}(x) is open and x ∉ φ(x). A multivalued mapping P : X ⇉ X is called KF*-majorized if there is a KF*-multivalued mapping φ : X ⇉ X (a majorant) such that for all x ∈ X, P(x) ⊂ φ(x).

The local version of KF*-multivalued mappings is defined as follows.

Definition 3.16. If X is an mc-space, then a multivalued mapping φ : X ⇉ X is a locally KF*-multivalued mapping if for all x ∈ X such that φ(x) ≠ ∅, there exist an open neighborhood V_x of x and a KF*-multivalued mapping F_x : X ⇉ X such that

∀ z ∈ V_x, φ(z) ⊂ F_x(z).
3.5.3 Fixed point results

We present now some fixed point results which will be applied to prove the existence of equilibrium in generalized games. The following lemma of Llinares states the existence of a continuous selection, with a fixed point, of the mc-hull of a multivalued mapping defined on an mc-space.

Lemma 3.4. If X is a compact topological mc-space and F : X ⇉ X is a nonempty valued multivalued mapping such that if y ∈ F^{−1}(x), then there exists some x' ∈ X such that y ∈ int F^{−1}(x'), then there exist a nonempty finite subset A of X and a continuous function f : X → X satisfying:

1. (∃) x* ∈ X such that x* = f(x*);
2. (∀) x ∈ X, f(x) ∈ G^{A|F(x)}([0,1]^m).

The next result is an extension of Browder's theorem. The proof is obtained immediately by applying Lemma 3.4.
Theorem 3.33. If X is a compact topological mc-space and F : X ⇉ X is a multivalued mapping with open inverse images and nonempty mc-set values, then F has a continuous selection and a fixed point.

A consequence of Theorem 3.33 is that any KF*-multivalued mapping defined from a compact topological mc-space into itself has a point with empty image. In the context of binary relations, the existence of points with empty images in the multivalued mapping of upper contour sets is equivalent to the existence of a maximal element (it is enough to consider F(x) as the set of alternatives better than x).

Corollary 3.5. If X is a compact topological mc-space and F : X ⇉ X is a KF*-multivalued mapping, then there exists x* ∈ X such that F(x*) = ∅.
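The remark above has a transparent finite analogue. The sketch below is a toy illustration of my own (not the topological statement of Corollary 3.5): for a strict preference on a finite set, the points whose upper contour set is empty are exactly the maximal elements.

```python
# Finite analogue of "empty image = maximal element": better[x] plays
# the role of F(x), the set of alternatives strictly better than x.

X = ["a", "b", "c", "d"]
better = {
    "a": {"b"},
    "b": {"c"},
    "c": set(),        # nothing is preferred to c
    "d": {"b", "c"},
}

# The maximal elements are the points with empty image.
maximal = [x for x in X if not better[x]]
# maximal == ["c"]
```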
In order to extend the previous result to locally KF*-majorized multivalued mappings, we first present the following lemma.

Lemma 3.5. If X is a compact topological mc-space and P : X ⇉ X is a locally KF*-majorized multivalued mapping, then there exists a KF*-multivalued mapping φ : X ⇉ X such that (∀) x ∈ X, P(x) ⊂ φ(x).

Proof. Consider D = {x | x ∈ X, P(x) ≠ ∅} and for each x ∈ D choose a KF*-multivalued mapping φ_x, a majorant of P at x, and an open neighborhood G_x of x. The set G = ∪_{x∈D} G_x is paracompact, so the open covering {G_x}_{x∈D} of G has a closed locally finite refinement {G'_x}.

For each x ∈ D define the set

J(x) = {x_i | x ∈ G'_{x_i}}

and the following multivalued mapping:

φ(x) = ∩_{x_i∈J(x)} φ_{x_i}(x) if x ∈ G, respectively φ(x) = ∅ if x ∉ G.

Now we are going to see that the multivalued mapping φ is the required KF*-multivalued mapping. It is clear that φ has no fixed point, since the φ_{x_i} are KF*-multivalued mappings; φ has mc-set values by construction and satisfies P(x) ⊂ φ(x) for all x ∈ X. Finally, to see that φ has open lower sections, consider

x ∈ φ^{−1}(y) ⟺ y ∈ φ(x) = ∩_{x_i∈J(x)} φ_{x_i}(x),

that is, y ∈ φ_{x_i}(x), (∀) x_i ∈ J(x).

Since the φ_{x_i} are KF*-multivalued mappings, they have open lower sections, so for each x_i ∈ J(x) there exists an open neighborhood V^i_x of x such that

V^i_x ⊂ φ^{−1}_{x_i}(y), (∀) x_i ∈ J(x).

By considering V'_x = ∩_{x_i∈J(x)} V^i_x, V'_x is an open neighborhood of x, since J(x) is finite. Moreover,

x ∉ ∪_{x_i∉J(x)} G'_{x_i}

(which is closed, since {G'_{x_i}} is a locally finite refinement). Therefore, there exists an open set V*_x containing x such that

V*_x ∩ (∪_{x_i∉J(x)} G'_{x_i}) = ∅,

so J(w) ⊂ J(x) for each w ∈ V*_x. But then

V'_x ∩ V*_x = V_x ⊂ φ^{−1}_{x_i}(y), (∀) x_i ∈ J(x),

and

y ∈ φ_{x_i}(w), (∀) w ∈ V_x and (∀) x_i ∈ J(x),

that is,

y ∈ ∩_{x_i∈J(x)} φ_{x_i}(w) ⊂ ∩_{x_i∈J(w)} φ_{x_i}(w) = φ(w), (∀) w ∈ V_x;

therefore V_x ⊂ φ^{−1}(y), and we conclude that φ has open lower sections. ∎
As a consequence of Lemma 3.5 we now state the extension of Corollary 3.5 to locally KF*-majorized multivalued mappings.

Theorem 3.34. If X is a compact topological mc-space and P : X ⇉ X is a locally KF*-majorized multivalued mapping, then there exists x* ∈ X such that P(x*) = ∅.
3.5.4 Existence of equilibrium

In this subsection we analyze the existence of equilibrium for generalized games in the context of mc-spaces, under conditions similar to those of Borglin and Keiding and of Tulcea. We use the well-known notation for generalized games. The first result is a version of Borglin and Keiding's result in the context of mc-spaces.

Lemma 3.6. If X is a compact topological mc-space, F : X ⇉ X a nonempty mc-set valued multivalued mapping such that for all x ∈ X, F^{−1}(x) is an open set, and P : X ⇉ X a locally KF*-majorized multivalued mapping, then there exists x* ∈ X such that

x* ∈ F(x*) and F(x*) ∩ P(x*) = ∅.
Proof. From Lemma 3.5, without loss of generality we can assume that the multivalued mapping P is a KF*-multivalued mapping. Define the multivalued mapping φ : X ⇉ X by

φ(x) = F(x) if x ∉ F(x),
φ(x) = F(x) ∩ P(x) if x ∈ F(x).

In order to see that the multivalued mapping φ is a KF*-multivalued mapping, consider x ∈ X such that φ(x) ≠ ∅ (if φ(x) = ∅, we have the conclusion). It is easy to see that φ has no fixed points and has mc-set values. To see that it has open lower sections, consider x ∈ φ^{−1}(y), that is, y ∈ φ(x).

On the one hand, if x ∉ F(x), then it is possible to choose a neighborhood W_x of x such that

(∀) z ∈ W_x, z ∉ F(z).

Moreover, since y ∈ φ(x) = F(x), that is, x ∈ F^{−1}(y), which is open, there exists an open set V_x containing x such that V_x ⊂ F^{−1}(y). If we take U = W_x ∩ V_x, then U ⊂ φ^{−1}(y).

On the other hand, if x ∈ F(x), then y ∈ φ(x) = F(x) ∩ P(x), so

x ∈ F^{−1}(y) ∩ P^{−1}(y),

which are open sets; therefore, there exists an open set V_x containing x such that

V_x ⊂ F^{−1}(y) ∩ P^{−1}(y) ⊂ φ^{−1}(y).

So the multivalued mapping φ is a KF*-multivalued mapping, and by applying Corollary 3.5 we obtain the conclusion. ∎
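On a finite strategy set, the equilibrium notion of Lemma 3.6 can be found by direct search. The sketch below is my own finite analogue with hypothetical constraint and preference mappings F and P; it looks for x with x ∈ F(x) and F(x) ∩ P(x) = ∅, that is, a feasible point with no feasible strictly preferred alternative.

```python
# Toy finite analogue of the equilibrium condition in Lemma 3.6.

X = [0, 1, 2, 3]
F = {x: {y for y in X if abs(y - x) <= 1} for x in X}   # constraint mapping
P = {x: {y for y in X if y > x} for x in X}             # prefers larger values

equilibria = [x for x in X if x in F[x] and not (F[x] & P[x])]
# equilibria == [3]: from 3 the only feasible moves are {2, 3},
# and nothing feasible is preferred to 3.
```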
The next result shows that the previous lemma remains valid in the case of a generalized game with a finite number of agents.

Lemma 3.7. If for each i = 1, 2, ..., n, X_i is a compact topological mc-space, X = ∏_{i=1}^{n} X_i, F_i : X ⇉ X_i is a nonempty mc-set valued multivalued mapping with open lower sections, and P_i : X ⇉ X_i is a locally KF*-majorized multivalued mapping, then there exists x* ∈ X such that

x*_i ∈ F_i(x*) and F_i(x*) ∩ P_i(x*) = ∅, i = 1, 2, ..., n.

Proof. Consider the multivalued mapping F : X ⇉ X defined as follows:

y ∈ F(x) if and only if y_i ∈ F_i(x), i = 1, 2, ..., n,

that is, F(x) = ∏_{i=1}^{n} F_i(x). So the multivalued mapping F has nonempty mc-set values and open lower sections.

From Lemma 3.5, and without loss of generality, we can assume that the multivalued mappings P_i are KF*-multivalued mappings. Moreover, for each i = 1, 2, ..., n, we define the following multivalued mappings:

a) P*_i : X ⇉ X such that y ∈ P*_i(x) if and only if y_i ∈ P_i(x);

b) P : X ⇉ X in the following way:

P(x) = ∩_{i∈I(x)} P*_i(x) if I(x) ≠ ∅,
P(x) = ∅ if I(x) = ∅,

where I(x) = {i | i ∈ I, F_i(x) ∩ P_i(x) ≠ ∅}.

Next, we are going to see that P is KF*-majorized. To do that, consider x ∈ X such that P(x) ≠ ∅; then there exists i_0 ∈ I(x) such that F_{i_0}(x) ∩ P_{i_0}(x) ≠ ∅. Since the set

{x | x ∈ X, F_{i_0}(x) ∩ P_{i_0}(x) ≠ ∅}

is an open set, there exists a neighborhood V of x such that (∀) z ∈ V, F_{i_0}(z) ∩ P_{i_0}(z) ≠ ∅, that is, i_0 ∈ I(z), so

(∀) z ∈ V, P(z) = ∩_{i∈I(z)} P*_i(z) ⊂ P*_{i_0}(z).

Moreover, since P_{i_0} is a KF*-multivalued mapping, P*_{i_0} is a KF*-multivalued mapping, and therefore the multivalued mapping P is majorized by P*_{i_0}. Therefore, by applying the previous lemma to the multivalued mappings F and P, we obtain the conclusion. ∎
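The product construction used in the proof of Lemma 3.7 can likewise be illustrated on a finite two-agent toy game (the mappings F_i and P_i below are hypothetical choices of mine): a profile x is an equilibrium when each coordinate x_i is feasible, x_i ∈ F_i(x), and no feasible coordinate is strictly preferred, F_i(x) ∩ P_i(x) = ∅.

```python
# Two-agent finite analogue of Lemma 3.7's equilibrium conditions.

from itertools import product

S = [0, 1, 2]                       # common strategy set for both agents

def F_i(i, x):
    """Constraint of agent i: stay within distance 1 of the rival's move."""
    return {s for s in S if abs(s - x[1 - i]) <= 1}

def P_i(i, x):
    """Preference of agent i: any strictly higher own move."""
    return {s for s in S if s > x[i]}

eq = [x for x in product(S, repeat=2)
      if all(x[i] in F_i(i, x) and not (F_i(i, x) & P_i(i, x))
             for i in range(2))]
# eq == [(2, 2)]: at (2, 2) both moves are feasible and no higher
# feasible move exists for either agent.
```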
In order to analyze the existence of equilibrium with a countable number of agents, we use the following approximation result.

Lemma 3.8. Let X be a compact topological metric space and Y a locally mc-space. If F : X ⇉ Y is an upper hemicontinuous multivalued mapping with mc-set values, then (∀) ε > 0 there exists an mc-set valued multivalued mapping H_ε : X ⇉ Y with open graph such that

Gr(F) ⊂ Gr(H_ε) ⊂ B(Gr(F), ε).

Proof. By applying the upper hemicontinuity of the multivalued mapping F, we know that

(∀) ε > 0, (∃) 0 < δ(x) < ε such that (∀) z ∈ B(x, δ(x)), F(z) ⊂ B(F(x), ε/2);

so the family {B(x, δ(x)/2)}_{x∈X} is an open covering of X, which is compact, and thus there exists a finite subcovering {B(x_i, δ(x_i)/2)}_{i=1}^{n}. Consider δ_i = δ(x_i)/2, define for all x ∈ X, I(x) = {i | i ∈ I, x ∈ B(x_i, δ_i)}, and the following multivalued mapping:

H_ε(x) = ∩_{i∈I(x)} B(F(x_i), ε/2).

It is clear that H_ε is mc-set valued, and moreover it has an open graph. Indeed, every x satisfies x ∉ ∪_{i∉I(x)} B(x_i, δ_i), which is a closed set; therefore, there exists ρ > 0 such that

B(x, ρ) ∩ (∪_{i∉I(x)} B(x_i, δ_i)) = ∅,

so I(z) ⊂ I(x) for all z ∈ B(x, ρ), and

H_ε(x) = ∩_{i∈I(x)} B(F(x_i), ε/2) ⊂ ∩_{i∈I(z)} B(F(x_i), ε/2) = H_ε(z),

and H_ε(x) is open because it is a finite intersection of open sets.

Moreover, since

(∀) (z, y) ∈ B(x, ρ) × H_ε(x), H_ε(x) ⊂ H_ε(z),

then B(x, ρ) × H_ε(x) ⊂ Gr(H_ε), that is, Gr(H_ε) is open.

Furthermore, Gr(F) ⊂ Gr(H_ε) ⊂ B(Gr(F), ε), since for all x ∈ X,

F(x) ⊂ B(F(x_i), ε/2) for each i ∈ I(x),

therefore

F(x) ⊂ ∩_{i∈I(x)} B(F(x_i), ε/2) = H_ε(x);

thus Gr(F) ⊂ Gr(H_ε), and it is easy to see that Gr(H_ε) ⊂ B(Gr(F), ε). ∎
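The construction of H_ε can be carried out explicitly in a one-dimensional example. The sketch below is my own simplification: X = Y = [0, 1], F(x) = [0, x] (upper hemicontinuous with interval values), a uniform modulus δ_i = ε/8 (valid for this particular F; the proof allows δ to depend on x_i), and a uniform grid as the finite subcover. It checks that Gr(F) ⊂ Gr(H_ε) and that H_ε(x) stays within ε of F(x).

```python
# Sketch of Lemma 3.8's approximation for F(x) = [0, x] on [0, 1]:
# H_eps(x) = intersection over i in I(x) of B(F(x_i), eps/2), with
# I(x) = {i : |x - x_i| < delta_i} and a uniform delta_i (assumed
# simplification for this F).

eps = 0.2
delta = eps / 8
centers = [k * delta for k in range(int(1 / delta) + 2)]   # covers [0, 1]

def H(x):
    """H_eps(x) as an open interval (lo, hi)."""
    tops = [c + eps / 2 for c in centers if abs(x - c) < delta]
    return (-eps / 2, min(tops))

# Every point (x, y) of Gr(F), i.e. 0 <= y <= x, lies in Gr(H_eps),
# and the upper end of H_eps(x) exceeds x by less than eps.
ok = all(H(x)[1] > x and H(x)[1] - x < eps
         for x in [k / 100 for k in range(101)])
# ok is True
```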
Theorem 3.35. If X is a compact locally mc-space, F : X ⇉ X is a nonempty mc-set valued multivalued mapping with closed graph, P : X ⇉ X is a locally KF*-majorized multivalued mapping, and the set {x | x ∈ X, F(x) ∩ P(x) = ∅} is closed in X, then there exists x* ∈ X such that

x* ∈ F(x*) and F(x*) ∩ P(x*) = ∅.

Proof. By applying Lemma 3.8, we have that (∀) ε > 0, (∃) H_ε such that

Gr(F) ⊂ Gr(H_ε) ⊂ B(Gr(F), ε),

where H_ε is an open graph multivalued mapping whose values are mc-sets. If we consider (X, H_ε, P) and apply Lemma 3.6, we can ensure that there exists an element x_ε such that

x_ε ∈ H_ε(x_ε) and H_ε(x_ε) ∩ P(x_ε) = ∅.

Let {ε_n} be a sequence which converges to 0; by reasoning as above we obtain a sequence {x_{ε_n}}_{n∈N} such that

(∀) n ∈ N, F(x_{ε_n}) ∩ P(x_{ε_n}) ⊂ H_{ε_n}(x_{ε_n}) ∩ P(x_{ε_n}) = ∅,

and since

x_{ε_n} ∈ {x | x ∈ X, F(x) ∩ P(x) = ∅},

which is a compact set, there exists a subsequence convergent to a point x*, which is an element of this set since the set is closed.

In order to prove that x* is a fixed point of F, note that (∀) n ∈ N,

(x_{ε_n}, x_{ε_n}) ∈ Gr(H_{ε_n}) ⊂ B(Gr(F), ε_n),

and since Gr(F) is a compact set, (x_{ε_n}, x_{ε_n}) converges to (x*, x*) ∈ Gr(F). ∎
Next, a result on the existence of equilibrium in generalized games with a countable number of agents is presented.

Theorem 3.36. Let Γ = (S_i, F_i, P_i)_{i∈I} be a generalized game such that I is a countable set of indexes and for each i ∈ I it is satisfied that: S_i is a nonempty compact locally mc-space; F_i is a closed graph multivalued mapping such that F_i(x) is a nonempty mc-set (∀) x ∈ X; P_i is a locally KF*-majorized multivalued mapping; and the set {x | x ∈ X, F_i(x) ∩ P_i(x) = ∅} is closed in X. Then there exists an equilibrium for the generalized game.

Proof. Consider the multivalued mapping F : X ⇉ X defined as follows:

y ∈ F(x) if and only if y_i ∈ F_i(x), (∀) i ∈ I,

that is, F(x) = ∏_{i∈I} F_i(x). So the multivalued mapping F has closed graph and nonempty mc-set values. Moreover, for each i ∈ I we define the following multivalued mappings:

a) P*_i : X ⇉ X such that y ∈ P*_i(x) if and only if y_i ∈ P_i(x);

b) P : X ⇉ X in the following way:

P(x) = ∩_{i∈I(x)} P*_i(x) if I(x) ≠ ∅, and P(x) = ∅ if I(x) = ∅,

where I(x) = {i | i ∈ I, F_i(x) ∩ P_i(x) ≠ ∅}.

In order to see that P is KF*-majorized, consider x ∈ X such that P(x) ≠ ∅; then there exists i_0 ∈ I(x) such that F_{i_0}(x) ∩ P_{i_0}(x) ≠ ∅. Since the set

{x | x ∈ X, F_{i_0}(x) ∩ P_{i_0}(x) ≠ ∅}

is an open set, there exists a neighborhood V of x such that (∀) z ∈ V, F_{i_0}(z) ∩ P_{i_0}(z) ≠ ∅, that is, i_0 ∈ I(z), so

(∀) z ∈ V, P(z) = ∩_{i∈I(z)} P*_i(z) ⊂ P*_{i_0}(z).

Moreover, from Lemma 3.5, and without loss of generality, we can assume that the multivalued mapping P_{i_0} is a KF*-multivalued mapping, so P*_{i_0} is the KF*-multivalued mapping which majorizes P.

Finally, we show that the set {x | x ∈ X, F(x) ∩ P(x) = ∅} is closed. For each i ∈ I we define the following multivalued mapping Q_i : X ⇉ S_i:

Q_i(x) = F_i(x) ∩ P_i(x) if i ∈ I(x), and Q_i(x) = F_i(x) if i ∉ I(x).

It is clear that

F(x) ∩ P(x) = ∏_{i∈I} Q_i(x) if I(x) ≠ ∅, and F(x) ∩ P(x) = ∅ otherwise.

The multivalued mappings Q_i : X ⇉ S_i have nonempty values, thus F(x) ∩ P(x) = ∅ if and only if I(x) = ∅. Therefore, we have

{x | x ∈ X, F(x) ∩ P(x) = ∅} = {x | x ∈ X, I(x) = ∅} = ∩_{i∈I} {x | x ∈ X, F_i(x) ∩ P_i(x) = ∅}.

Hence {x | x ∈ X, F(x) ∩ P(x) = ∅} is closed, because it is an intersection of closed sets. So, by applying the previous theorem, we obtain that there exists an element x* ∈ X such that

x* ∈ F(x*) and F(x*) ∩ P(x*) = ∅,

so I(x*) = ∅, and finally

x*_i ∈ F_i(x*) and F_i(x*) ∩ P_i(x*) = ∅, (∀) i ∈ I. ∎
3.6 References

1. D'Agata, A., Existence of first-order locally consistent equilibria, Annales d'Economie et de Statistique, 43 (1996), 171-179
2. Agarwal, R.P., O'Regan, D., A note on equilibria for abstract economies, Mathematical and Computer Modelling, 34 (2001), 331-343
3. Aliprantis, C.D., Tourky, R., Yannelis, N.C., Cone conditions in general equilibrium theory, Journal of Economic Theory, 92 (2000), 96-121
4. Aliprantis, C.D., Tourky, R., Yannelis, N.C., The Riesz-Kantorovich formula and general equilibrium theory, Journal of Mathematical Economics, 34 (2000), 55-76
5. Arrow, K.J., Debreu, G., Existence of an equilibrium for a competitive economy, Econometrica, 22 (1954), 265-290
6. Arrow, K.J., Hahn, F., General competitive analysis, Holden-Day, San Francisco, 1971
7. Aubin, J.P., Ekeland, I., Applied nonlinear analysis, John Wiley and Sons, New York, 1984
8. Berge, C., Topological spaces, Macmillan, New York, 1963
9. Bonanno, G., Zeeman, E.C., Limited knowledge of demand and oligopoly equilibria, J. Econom. Theory, 35 (1985), 276-283
10. Border, K.C., Fixed point theorems with applications to economics and game theory, Cambridge University Press, 1985
11. Borglin, A., Keiding, H., Existence of equilibrium actions and of equilibrium: A note on the new existence theorems, J. Math. Econom., 3 (1976), 313-316
12. Browder, F.E., The fixed point theory of multi-valued mappings in topological vector spaces, Math. Annalen, 177 (1968), 283-301
13. Debreu, G., New concepts and techniques for equilibrium analysis, International Economic Review, 3 (1962), 257-273
14. Ding, X., Kim, W., Tan, K., A selection theorem and its applications, Bull. Austral. Math. Soc., 46 (1992), 205-212
15. Gale, D., Mas-Colell, A., An equilibrium existence theorem for a general model without ordered preferences, J. Math. Econom., 2 (1975), 9-15
16. Grandmont, J.M., Temporary general equilibrium theory, Econometrica, 45 (1977), 535-572
17. Himmelberg, C.J., Fixed points of compact multifunctions, J. Math. Anal. Appl., 38 (1972), 205-207
18. Husain, T., Tarafdar, E., A selection and a fixed point theorem and an equilibrium of an abstract economy, Internat. J. Math. and Math. Sci., 18, 1 (1995), 179-184
19. Kakutani, S., A generalization of Brouwer's fixed point theorem, Duke Mathematical Journal, 8 (1941), 416-427
20. Llinares, J.V., Existence of equilibrium in generalized games with non-convex strategy spaces, CEPREMAP, No. 9801 (1998), 1-14
21. Maugeri, A., Time dependent generalized equilibrium problems, Rendiconti del Circolo Matematico di Palermo, 58 (1999), 197-204
22. Mureşan, A.S., First-order equilibria for an abstract economy, I, Bull. Ştiinţ. Univ. Baia Mare, Ser. B, Matematică-Informatică, Vol. XIV, 2 (1998), 191-196
23. Mureşan, A.S., First-order equilibria for an abstract economy, II, Acta Technica Napocensis, Ser. Applied Mathematics and Mechanics, 41 (1998), 201-204
24. Neuefeind, W., Notes on existence of equilibrium proofs and the boundary behavior of supply, Econometrica, 48 (1980), 1831-1837
25. Nikaido, H., Convex structures and economic theory, Academic Press, New York, 1968
26. Oettli, W., Schlager, D., Generalized vectorial equilibria and generalized monotonicity, in Functional analysis with current applications in science, technology and industry (Aligarh, 1996), 145-154, Pitman Res. Notes Math. Ser., 377, Longman, Harlow, 1998
27. Petruşel, A., Multifunctions and applications, Cluj University Press, Cluj-Napoca, 2002 (in Romanian)
28. Ray, I., On games with identical equilibrium payoffs, Economic Theory, 17 (2001), 223-231
29. Rim, D.I., Kim, W.K., A fixed point theorem and existence of equilibrium for abstract economies, Bull. Austral. Math. Soc., 45 (1992), 385-394
30. Rus, A.I., Generalized contractions and applications, Cluj University Press, Cluj-Napoca, 2001
31. Rus, A.I., Iancu, C., Mathematical modelling, Transilvania Press, Cluj-Napoca, 2000 (in Romanian)
32. Shafer, W.J., Sonnenschein, H., Equilibrium in abstract economies without ordered preferences, J. Math. Econom., 2 (1975), 345-348
33. Tarafdar, E., A fixed point theorem and equilibrium point of an abstract economy, J. Math. Econom., 20 (1991), 211-218
34. Tulcea, C.I., On the approximation of upper semi-continuous correspondences and equilibrium of generalized games, J. Math. Anal. Appl., 136 (1988), 267-289
35. Yannelis, N., Prabhakar, N., Existence of maximal elements and equilibria in linear topological spaces, J. Math. Econom., 12 (1983), 233-246
139

1.1

Zero-sum two-person games

We consider a game with two players, the player 1 and the player 2. What player 1 wins is just what player 2 loses, and vice versa. In order to have an intuitive understanding of a such game we introduce some related basic ideas through a few simple examples. Example 1.1. (Matching pennies) Each of two participants (players) puts down a coin on the table without letting the other player see it. If the coins match, that is, if both coins show heads or both show tails, player 1 wins the two coins. If they do not match, player 2 wins the two coins. In other words, in the …rst case, player 1 receives a payment of 1 from player 2, and, in the second case, player 1 receives a payment of 1. These outcomes can be listed in the following table: Player 2 1 (heads) 1 1 2 (tails) 1 1

Player 1

1 (heads) 2 (tails)

Also, they can be written in the payo¤s matrices for the two players: H1 = 1 1 1 1 ; H2 = 1 1 1 1 :

We say that each player has two strategies (actions, moves). In the matrix H1 the …rst row represents the …rst strategy of player 1, the second row represents the second strategy of player 1. If player 1 chooses his strategy 1, it means that his coin shows heads up. Strategy 2 means tails up. Similarly, the …rst and the second columns of matrix H1 correspond respectively to the …rst and the second strategies of player 2. In H2 we have the same situations, but for player 2. Remark 1.1. This gambling contest is a zero-sum two-person game. Brie‡ speaking, a game is a set of rules, in which the regulations of the entire y procedure of competition (or contest, or struggle), including players, strategies, and the outcome after each play of the game is over, etc., are speci…cally described. Remark 1.2. The entries in above table form a payo¤ matrix (of player 1, that is H1 ). The matrix H2 is the payo¤ matrix of player 2, and we have
t H1 + H2 = O2 ; t where H2 is the transpose of H2 . Remark 1.3. The payo¤ is a function of the strategies of the two players. If, for instance, players 1’ coin shows heads up (strategy 1) and player 2’ coin s s also shows heads up (strategy 1), then the element h11 = 1 denotes the amount which player 1 receives from player 2. Again, if player 1 chooses strategy 2 (tails) and player 2 chooses strategy 1 (heads), then the element h21 = 1 is

2

the payment that player 1 receives. In this case, the payment that player 1 receives is a negative number. This means that player 1 loses one unit, that is, player 1 pays one unit to player 2. Example 1.2. (Stone-paper-scissors) Scissors defeats paper, paper defeats stone, and stone in turn defeats scissors. There are two players: 1 and 2. Each player has three strategies. Let strategies 1, 2, 3 represent stone, paper, scissors respectively. If we suppose that the winner wins one unit from the loser, then the payo¤ matrix is Player 2 1 2 0 1 1 0 1 1 3 1 1 0

Player 1

1 2 3

Remark 1.4. The payo¤ matrices for the two players are: 0 H1 = 4 1 1 2 1 0 1 3 1 1 5 , 0 0 H2 = 4 1 1 2 1 0 1 3 1 1 5: 0

t We have H1 = H2 and H1 + H2 = O3 . Example 1.3. We consider zero-sum two-person game for which the payo¤ matrix is given in the following table:

Player 1

pnq 0 1 2 3

Player 2 0 1 0 1 -1 2 -4 1 -9 -2

2 4 7 8 7

Player 1 has four strategies, while player 2 has three strategies. Remark 1.5. The payo¤ of player 1 (that is, the amount that player 2 pays to player 1) can be determined by the function f : f0; 1; 2; 3g f0; 1; 2g ! Z; f (p; q) = q 2 p2 + 2pq:

We have the payo¤ matrices: 2 3 0 1 4 6 1 2 7 7 7 H1 = 6 4 4 1 8 5; 9 2 7

0 H2 = 4 1 4

2

1 2 7

4 1 8

3 9 2 5 7

In each of the above examples there are two players, namely player 1 and t player 2, and a payo¤ matrix, H1 (there is H2 too such that H1 +H2 = 0). Each 3

that is. such a game is called a zero-sum game. Since player 1 wishes to maximize his payo¤ he can choose strategy i so as to make the value in (2) as large as it is possible. If the payo¤ is positive. that is. player 1 wishes to gain a payo¤ aij as large as it is possible. So.i = 1. The gain of player 1 equals the loss of player 2. If player 1 chooses strategy i he can be sure to obtain at least the payo¤ 1 j n min aij : (2) This is the minimum of the ith -row element in the payo¤ matrix A. and those of player 2 by the columns of the payo¤ matrix H1 . To solve the game. or negative value. we obtain the payo¤ matrix H1 ( A): A = (aij ) = a11 =4 ::: am1 2 a12 ::: am2 ::: a1n ::: ::: 5 ::: amn 3 (1) De…nition 1. The strategies of player 1 are represented by the rows of the payo¤ matrix H1 . In this game.2 Matrix games In what follows we suppose that player 1 has m strategies and player 2 has n strategies. What player 1 wins is just what player 2 loses. independently. For this. This amount may be with positive. After the two choices have been made. If the payo¤ is negative. 1. n. player 1 receives a positive amount from player 2. The amount is shown in the payo¤ matrix. (The strategies of player 2 are represented by the rows of the payo¤ matrix H2 . That is to say.) The player 1 chooses a strategy from his strategy set. and vice versa. player 1 can choose strategy i in order to receive a payo¤ not less than 1 i m1 j n max min aij : 4 (3) . while player 2 will do his best to reach a value aij as small as it is possible. j = 1. player 1 receives a negative amount from player 2. and those of player 1 by the columns of the payo¤ matrix H2 . chooses a strategy from his strategy set. player 2 pays an amount to player 1 as the outcome of this particular play of the game. m.player has several strategies. player 1 wins an amount from player 2.1. that is. We call matrix game the game which is completely determined by above matrix A. 
The interests of the two players are completely con‡ icting. the payo¤ which player 1 gains from player 2 if player 1 chooses strategy i and player 2 chooses strategy j. to …nd out the solution (what maximum payo¤ has the player 1 and what strategies are chosen by both players to do this) we examine the elements of matrix A. We denote by aij . 0. player 1 loses an amount to player 2 (player 2 wins an amount from player 1). and player 2.

i = 1. player 2 wishes to minimize his lose so. Namely. if player 2 makes his best choice. player 2 can choose j so as to have his loss not greater than 1 j n1 i m min max aij : (5) So. Since the left-hand side of the last inequality is independent of j. he will try to choose strategy j so as to obtain the minimum of the value in (4). the payo¤ which player 1 receives cannot be less than the value given in (3).1. m: min aij 1 i m max aij holds. taking the minimum with respect to j on both sides we have 1 j n min aij 1 j n1 i m min max aij = v2 . n. For every i we have 1 j n min aij aij . 1 j n min aij 5 v2 : . he will lose at most 1 i m max aij : (4) Now. m and all j = 1. for all i = 1. n. i = 1. and for every j we have aij Hence the inequality 1 j n 1 i m max aij . that is. if player 1 makes his best choice. j = 1. We have seen that player 1 can choose the strategy i to ensure a payo¤ which is at least v1 = max min aij . the payo¤ which player 1 receives cannot be greater than the value given by (5). m. v1 and v2 ? Lemma 1.In other words. that is v1 = max min aij 1 i m1 j n 1 j n1 i m min max aij = v2 : (6) Proof. Similarly. if player 2 chooses his strategy j. 1 i m1 j n while player 2 can choose the strategy j to make player 1 get at most v2 = min max aij : 1 j n1 i m Is there any relationship between these two values. The following inequality holds: v1 v2 .

3 we have v1 = v2 . 9) = 0.3 is v = 0. 1 i 41 j 3 v2 = min max aij = min(0. 2.1 we have v1 < v2 .Since the right-hand side of the last inequality is independent of i. 1. 1) = 1 i 31 j 3 1. 1 i m1 j n and 1 i m max aij = min max aij = v: 1 j n1 i m 6 . in Example 1. n = 3. 1. 1) = 1 i 21 j 2 1. The value v is the common value of those given in (3) and (5). 4. In Example 1. and the proof is completed.2.3 Saddle points in pure strategies There are situations in which v1 = v2 . then there exist an i and a j such that 1 j n min ai j = max min aij = v. 1. Let us examine the three examples from the section 1. The value of the game in Example 1. therefore v1 = max min aij = max( 1. n = 3. in Example 1. If the elements of the payo¤ matrix A of a matrix game satisfy the following equality v1 = max min aij = min max aij = v2 . If the equality (7) holds. v2 = min max aij = min(1.1 we have m = 2. we have m = 3. 1) = 1: 1 j 31 i 3 So. taking the maximum with respect to i on both sides we obtain 1 i m1 j n max min aij v2 . 1. In Example 1. Consequently we give De…nition 1.2. v1 v2 . we have m = 4. n = 2.6.1. Remark 1. that is. in Example 1. 1 i m1 j n 1 j n1 i m (7) then the quantity v(= v1 = v2 ) is called the value of the game. 8) = 0: 1 j 31 i 4 So. In Example 1. v2 = min max aij = min(1.2 we have v1 < v2 . therefore v1 = max min aij = max( 1.3. therefore v1 = max min aij = max(0. 1) = 1: 1 j 21 i 2 So.

If the equality (7) holds, then there exist an i* and a j* such that

min_{1<=j<=n} a_{i*j} = max_{1<=i<=m} min_{1<=j<=n} aij = v,
max_{1<=i<=m} a_{ij*} = min_{1<=j<=n} max_{1<=i<=m} aij = v.

Therefore min_{1<=j<=n} a_{i*j} = max_{1<=i<=m} a_{ij*}. But, obviously, we have

min_{1<=j<=n} a_{i*j} <= a_{i*j*} <= max_{1<=i<=m} a_{ij*}.

Thus

max_{1<=i<=m} a_{ij*} = a_{i*j*} = v = min_{1<=j<=n} a_{i*j}.

Therefore, for all i and all j,

a_{ij*} <= a_{i*j*} = v <= a_{i*j}.   (8)

Definition 1.3. The pair (i*, j*) is a saddle point (in pure strategies) of the game. We call i* and j* optimal strategies of players 1 and 2 respectively. We say that i = i*, j = j* is a solution (or Nash equilibrium) of the game.

Remark 1.7. The relationship (8) shows us that the payoff at the saddle point (i*, j*) (the solution of the game) is the value of the game. Thus, if the game has a saddle point (i*, j*), then the equality (7) holds and a_{i*j*} = v.

Remark 1.8. When player 1 sticks to his optimal strategy i*, the payoff cannot be less than v if player 2 departs from the strategy j*; that is, player 1 can hope to increase his payoff if player 2 departs from his optimal strategy j*. Similarly, if player 2 sticks to his optimal strategy j*, the payoff cannot exceed v if player 1 departs from the strategy i*; player 1's payoff may decrease if he departs from his optimal strategy i*.

A matrix game may have more than one saddle point; as we shall see, the payoffs at different saddle points are all equal, the common value being the value of the game.

Example 1.4. Consider the matrix game with the payoff matrix

A = | 4 3 6 2 |
    | 1 2 0 0 |
    | 5 6 7 5 |

We have for the minima of its rows

min(4, 3, 6, 2) = 2, min(1, 2, 0, 0) = 0, min(5, 6, 7, 5) = 5,

and then the maximum of these minima:

max(2, 0, 5) = 5 = v1.

Now, we have for the maxima of its columns

max(4, 1, 5) = 5, max(3, 2, 6) = 6, max(6, 0, 7) = 7, max(2, 0, 5) = 5,

and then the minimum of these maxima:

min(5, 6, 7, 5) = 5 = v2.

Since v1 = v2 = 5, the game has a saddle point. It is easy to verify that (3, 1) and (3, 4) are both saddle points, because a31 = a34 = v = 5.

Remark 1.9. If the matrix game has a saddle point (i*, j*), then it is very easy to find it. Indeed, by the definition (8) of a saddle point, the value a_{i*j*} is an element of the payoff matrix A = (aij) which is at the same time the minimum of its row and the maximum of its column. In Example 1.3, (1, 1) is a saddle point of the game because a11 = 0 is the smallest element in the first row and at the same time the largest element in the first column. In Example 1.4, a31 = a34 = 5 are the smallest elements in the third row, and at the same time the largest elements in the first and fourth columns, respectively.

A matrix game can have several saddle points. In this case we can prove the following result:

Lemma 1.2. Let (i1, j1) and (i2, j2) be saddle points of a matrix game. Then (i1, j2) and (i2, j1) are also saddle points, and the values at all saddle points are equal, that is,

a_{i1 j1} = a_{i2 j2} = a_{i1 j2} = a_{i2 j1}.   (9)

Proof. We prove that (i1, j2) is a saddle point; the fact that (i2, j1) is a saddle point can be proved in a similar way. Since (i1, j1) is a saddle point, we have

a_{i j1} <= a_{i1 j1} <= a_{i1 j}

for all i = 1,...,m and all j = 1,...,n. Since (i2, j2) is a saddle point, we have

a_{i j2} <= a_{i2 j2} <= a_{i2 j}

for all i = 1,...,m and all j = 1,...,n. From these inequalities we obtain

a_{i1 j1} <= a_{i1 j2} <= a_{i2 j2} <= a_{i2 j1} <= a_{i1 j1},

which proves (9). By (9) and the above inequalities, we have

a_{i j2} <= a_{i1 j2} <= a_{i1 j}

for all i = 1,...,m and all j = 1,...,n. Hence (i1, j2) is a saddle point.

From this lemma we see that a matrix game with saddle points has the following properties:
– the exchangeability or rectangular property of saddle points,

– the equality of the values at all saddle points.

Example 1.5. The game with the payoff matrix

A = | 3  0  5 |
    | 1 -4  7 |
    | 2  1  1 |

has the saddle point (3, 2), because v1 = 1, v2 = 1 and a32 = v = 1.

Example 1.6. The pair (3, 3) is a saddle point for the game with the payoff matrix

A = | 0 1 -1 |
    | 1 1  0 |
    | 1 1  0 |

We have v = 0.

Example 1.7. The pair (2, 3) is a saddle point for the game with the payoff matrix

A = | 2  3 -1 |
    | 4  2  0 |
    | 1 -1 -2 |

The value of the game is v = 0.

Example 1.8. The game with the payoff matrix

A = | 4  1  1 |
    | 2  1  1 |
    | 7 -1 -4 |

has four saddle points, because we have (see Lemma 1.2)

a12 = a13 = a22 = a23 = 1 = v.

Example 1.9. The game with the payoff matrix

A = |  7 5 6 |
    |  0 9 4 |
    | 14 1 8 |

hasn't a saddle point in the sense of Definition 1.2, because

v1 = max(5, 0, 1) = 5 and v2 = min(14, 9, 8) = 8.
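The search rule described above (an entry that is at once the minimum of its row and the maximum of its column) translates directly into code. A small sketch (the function name is ours), checked against the 3x4 game of Example 1.4:

```python
def saddle_points(a):
    """All 1-based pairs (i, j) with a[i][j] minimal in row i
    and maximal in column j."""
    m, n = len(a), len(a[0])
    return [(i + 1, j + 1)
            for i in range(m) for j in range(n)
            if a[i][j] == min(a[i])
            and a[i][j] == max(a[r][j] for r in range(m))]

print(saddle_points([[4, 3, 6, 2], [1, 2, 0, 0], [5, 6, 7, 5]]))  # [(3, 1), (3, 4)]
```

An empty list means the game has no saddle point in pure strategies, as happens for the matrix of Example 1.9.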

1.4 Mixed strategies

We have seen so far that there exist matrix games which have saddle points and matrix games that don't. When a matrix game hasn't a saddle point, that is, if

v1 = max_{1<=i<=m} min_{1<=j<=n} aij < min_{1<=j<=n} max_{1<=i<=m} aij = v2,   (10)

we cannot solve the game in the sense given in the previous section. The payoff matrix given in Example 1.2 (stone-paper-scissors) hasn't a saddle point, because v1 = -1 < 1 = v2. The same situation occurs in Example 1.9, where v1 = 5 < 8 = v2.

About the game given in Example 1.2, with the payoff matrix

A = |  0  1 -1 |
    | -1  0  1 |
    |  1 -1  0 |

we can say the following. Player 1 can be sure to gain at least v1 = -1, and player 2 can guarantee that his loss is at most v2 = 1. In this situation, player 1 will try to gain a payoff greater than -1, while player 2 will try to make the payoff (to player 1) less than 1. For these purposes, each player will make efforts to prevent his opponent from finding out his actual choice of strategy. To accomplish this, player 1 can use some chance device to determine which strategy he is going to choose; similarly, player 2 will also decide his choice of strategy by some chance method. This is the mixed strategy that we introduce in this section.

We consider a matrix game with the payoff matrix A = (aij), i = 1,...,m, j = 1,...,n.

Definition 1.4. A mixed strategy of player 1 is a set of m numbers xi >= 0, i = 1,...,m, satisfying Σ_{i=1}^{m} xi = 1. A mixed strategy of player 2 is a set of n numbers yj >= 0, j = 1,...,n, satisfying Σ_{j=1}^{n} yj = 1.

Remark 1.10. The numbers xi and yj are probabilities: player 1 chooses his strategy i with probability xi, and player 2 chooses his strategy j with probability yj. Hence xi yj is the probability that player 1 chooses strategy i and player 2 chooses strategy j, with payoff aij for player 1 (and -aij for player 2). In contrast to mixed strategies, the strategies in the saddle points are called pure strategies. The pure strategy i = i0 is a special mixed strategy: xi0 = 1 and xi = 0 for i != i0.

Let X = (x1, x2, ..., xm) and Y = (y1, y2, ..., yn) be the mixed strategies of players 1 and 2, respectively.

Definition 1.5. The expected payoff of player 1 is the real number

Σ_{i=1}^{m} Σ_{j=1}^{n} aij xi yj,   (11)

which is obtained by multiplying every payoff aij by the corresponding probability xi yj and summing over all i and all j. Player 1 wishes to maximize the expected payoff, while player 2 wants to minimize it.
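Formula (11) is a plain double sum; a small sketch (our own illustration, using exact rational arithmetic):

```python
from fractions import Fraction

def expected_payoff(a, x, y):
    """Sum of a[i][j] * x[i] * y[j], formula (11)."""
    return sum(a[i][j] * x[i] * y[j]
               for i in range(len(x)) for j in range(len(y)))

# Stone-paper-scissors: if both players mix uniformly, the expectation is 0.
rps = [[0, 1, -1], [-1, 0, 1], [1, -1, 0]]
u = [Fraction(1, 3)] * 3
print(expected_payoff(rps, u, u))  # 0
```

Pure strategies are the special case where one probability is 1 and the rest are 0, in which case the formula reduces to a single entry aij.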

Let Sm and Sn be the sets of all X = (x1, x2, ..., xm) and Y = (y1, y2, ..., yn) respectively, satisfying the following conditions:

xi >= 0, i = 1,...,m, Σ_{i=1}^{m} xi = 1;  yj >= 0, j = 1,...,n, Σ_{j=1}^{n} yj = 1.

If player 1 uses the mixed strategy X ∈ Sm, then his expected payoff is at least

min_{Y∈Sn} Σ_{i=1}^{m} Σ_{j=1}^{n} aij xi yj.   (12)

Player 1 can choose X ∈ Sm so as to obtain the maximum of the value in (12); that is, he can be sure of an expected payoff not less than

v1 = max_{X∈Sm} min_{Y∈Sn} Σ_{i=1}^{m} Σ_{j=1}^{n} aij xi yj.   (13)

If player 2 chooses the strategy Y ∈ Sn, then the expected payoff of player 1 is at most

max_{X∈Sm} Σ_{i=1}^{m} Σ_{j=1}^{n} aij xi yj.   (14)

Player 2 can choose Y ∈ Sn so as to obtain the minimum of the value in (14); that is, he can prevent player 1 from gaining an expected payoff greater than

v2 = min_{Y∈Sn} max_{X∈Sm} Σ_{i=1}^{m} Σ_{j=1}^{n} aij xi yj.   (15)

As in the case studied in section 1.2 (Lemma 1.1), we have the following result:

Lemma 1.3. The following inequality holds: v1 <= v2, that is,

v1 = max_{X∈Sm} min_{Y∈Sn} Σ_{i=1}^{m} Σ_{j=1}^{n} aij xi yj <= min_{Y∈Sn} max_{X∈Sm} Σ_{i=1}^{m} Σ_{j=1}^{n} aij xi yj = v2.   (16)

Proof. For all X ∈ Sm and all Y ∈ Sn we have

min_{Y∈Sn} Σ_{i=1}^{m} Σ_{j=1}^{n} aij xi yj <= Σ_{i=1}^{m} Σ_{j=1}^{n} aij xi yj.

a(2) . a2n . This is the aim of the following section. 1. a(n) = (a1n . Therefore.Then. CH is a convex set. 12 . If the matrix game has the payo¤ matrix A = (aij ).6. and the proof is completed. taking the maximum for all X 2 Sm on both sides of the inequality. a21 . The elements of CH are expressed as a convex linear combination of the n points a(1) . am2 ). : : : . that is. : : : . v1 v2 . k=1 n X o tk = 1 : Remark 1. we get m n n n XX XX v1 = max min aij xi yj max aij xi yj : X2Sm Y 2Sn i=1 j=1 X2Sm i=1 j=1 This inequality holds for all Y 2 Sn . v1 = max min m n XX i=1 j=1 X2Sm Y 2Sn aij xi yj = min max Y 2Sn X2Sm m n XX i=1 j=1 aij xi yj = v2 : (17) To prove this theorem we need some auxiliary notions and results. : : : . am1 ). We call the convex hull (CH) of the n points a(1) . a(2) . v1 = max min X2Sm Y 2Sn m n XX i=1 j=1 aij xi yj Y 2Sn X2Sm min max m n XX i=1 j=1 aij xi yj = v2 . a(n) .5 The minimax theorem J. We present here von Neumann’ proof given in [15]. tk 0. a22 . k = 1. s Theorem 1. that is. tk 2 R. Let A = (aij ) be a m n matrix. a(2) = (a12 . n. that are n points in the m-dimensional Euclidean space Rm . von Neumann was the …rst which proved this theorem. : : : .1. : : : . this can be easy veri…ed by showing that every convex linear combination of two arbitrary points of CH also belongs to CH. : : : . : : : . the minimax theorem. a(2) .11. The main result of this chapter is the well-known fundamental theorem of the theory of matrix game. and a(1) = (a11 . amn ) obtained by using the columns of matrix A. a(n) the set CH = CH(a(1) . then v1 = v2 . a = t1 a(1) + t2 a(2) + + tn a(n) . a(n) ) de…ned by n CH = aj a 2 Rm . De…nition 1.

: : : .Lemma 1. Since 0 62 CH. a = (a1 . Lemma 1. This is equivalent to the statement that 2 + 2 + + 2 > 0 is the smallest. m 1 2 Now. Then a + (1 and or j a + (1 m X ) 2 CH. such that the distance j j from to 0 is the smallest. : : : .5. and the lemma is proved. i = 1. a2 . n. This result is usually referred to as the theorem of the supporting hyperplanes. a(2) . there exists a point = ( 1 . n X j=1 yj = 1. m such that for every point a 2 CH. a(n) . Then either (1) there exist numbers y1 . : : : . a(2) . m.12. a(n) . am ) be an arbitrary point in CH. am ) we have 1 a1 + 2 a2 + + m am > 0: Proof. : : : . yn with yj such that n X j=1 0. if 2 6= 0. 6= 0. Let CH be the convex hull of a(1) . Remark 1. that is. 0 ) j2 m X i=1 1. i) [ ai + (1 (ai i) 2 ) i ]2 = +2 m X i=1 [ (ai i) i + 2 i 2 i] = m X i=1 2 i: = Thus. 2 . let a = (a1 . If 0 62 CH. j j2 . m ) 2 CH. then there exist m real numbers 1 . : : : . It states that if the origin 0 doesn’ belong to the t convex hull CH of the n points a(1) . then there exists a supporting hyperplane p passing through 0 such that CH lies entirely in one side of p. 13 . : : : . aij yj = ai1 y1 + ai2 y2 + + ain yn 0. Let A = (aij ) be an arbitrary m n matrix. : : : . we get m X i=1 (ai i 2 i) 0: m X i=1 ai i m X i=1 2 i > 0. 2 . we obtain m X i=1 i=1 m X i=1 (ai + m X i=1 (ai 2 i) + 2 Now let ! 0. y2 . j = 1. a2 .4. in one of the two half-spaces formed by p.

or
(2) there exist numbers x1, x2, ..., xm with xi >= 0, Σ_{i=1}^{m} xi = 1, such that

Σ_{i=1}^{m} aij xi = a1j x1 + a2j x2 + ... + amj xm > 0, j = 1,...,n.

Proof. We consider the convex hull CH of the n + m points

a(1) = (a11, ..., am1), ..., a(n) = (a1n, ..., amn),
e(1) = (1, 0, ..., 0), e(2) = (0, 1, ..., 0), ..., e(m) = (0, 0, ..., 1).

We distinguish two cases: (1) 0 ∈ CH, respectively (2) 0 not in CH.

(1) Let 0 ∈ CH. Then there exist real numbers t1, ..., tn+m such that

t1 a(1) + t2 a(2) + ... + tn a(n) + tn+1 e(1) + tn+2 e(2) + ... + tn+m e(m) = 0,

with tj >= 0 and Σ_{j=1}^{n+m} tj = 1; that is, 0 is written as a convex linear combination of the above n + m points. Expressed in terms of the components, the ith equation (there are m equations) is

t1 ai1 + t2 ai2 + ... + tn ain + tn+i · 1 = 0.

Hence

t1 ai1 + t2 ai2 + ... + tn ain = -tn+i <= 0, i = 1,...,m.   (18)

It follows that t1 + t2 + ... + tn > 0, for otherwise we would have t1 = t2 = ... = tn = 0 = tn+1 = ... = tn+m, which contradicts Σ_{j=1}^{n+m} tj = 1. Dividing each inequality of (18) by t1 + t2 + ... + tn > 0 and putting

y1 = t1/(t1 + ... + tn), y2 = t2/(t1 + ... + tn), ..., yn = tn/(t1 + ... + tn),

we obtain

Σ_{j=1}^{n} aij yj = ai1 y1 + ... + ain yn <= 0, i = 1,...,m,

with yj >= 0 and Σ_{j=1}^{n} yj = 1; that is, statement (1) holds.

(2) Let 0 not be in CH. By Lemma 1.4, there exists η = (η1, ..., ηm) such that for every point a ∈ CH we have η1 a1 + ... + ηm am > 0. In particular, for the points a(j) ∈ CH we have

η1 a1j + η2 a2j + ... + ηm amj > 0, j = 1,...,n,   (19)

and for the points e(i) ∈ CH we have ηi > 0, i = 1,...,m. Hence η1 + η2 + ... + ηm > 0. Dividing each inequality in (19) by η1 + ... + ηm > 0 and putting

x1 = η1/(η1 + ... + ηm), ..., xm = ηm/(η1 + ... + ηm),

we obtain

Σ_{i=1}^{m} aij xi = a1j x1 + a2j x2 + ... + amj xm > 0, j = 1,...,n,

with xi >= 0 and Σ_{i=1}^{m} xi = 1; that is, statement (2) holds. This completes the proof of the lemma.

Proof of Theorem 1.1. We have proved that v1 <= v2 in Lemma 1.3, so it is sufficient to give the proof for v1 >= v2. By Lemma 1.5, one of the following two statements holds.

(1) There exist y1, ..., yn >= 0 with Σ_{j=1}^{n} yj = 1 such that Σ_{j=1}^{n} aij yj <= 0, i = 1,...,m. Hence, for any X = (x1, ..., xm) ∈ Sm we have

Σ_{i=1}^{m} ( Σ_{j=1}^{n} aij yj ) xi <= 0.

Therefore

max_{X∈Sm} Σ_{i=1}^{m} Σ_{j=1}^{n} aij xi yj <= 0.

It follows that

v2 = min_{Y∈Sn} max_{X∈Sm} Σ_{i=1}^{m} Σ_{j=1}^{n} aij xi yj <= 0.   (20)

(2) There exist x1, ..., xm >= 0 with Σ_{i=1}^{m} xi = 1 such that Σ_{i=1}^{m} aij xi > 0, j = 1,...,n.

Hence, for any Y = (y1, ..., yn) ∈ Sn we have

Σ_{j=1}^{n} ( Σ_{i=1}^{m} aij xi ) yj >= 0.

Therefore

min_{Y∈Sn} Σ_{i=1}^{m} Σ_{j=1}^{n} aij xi yj >= 0.

It follows that

v1 = max_{X∈Sm} min_{Y∈Sn} Σ_{i=1}^{m} Σ_{j=1}^{n} aij xi yj >= 0.   (21)

By (20) and (21), either v2 <= 0 or v1 >= 0; that is, we never have v1 < 0 < v2. We now repeat the above judgement with the new matrix B = (aij - k), where k is an arbitrary number. Because Σ_{i=1}^{m} xi = 1 and Σ_{j=1}^{n} yj = 1, the two values of the game B are v1 - k and v2 - k, so we obtain: never v1 - k < 0 < v2 - k, that is, never v1 < k < v2. We have proved v1 <= v2. Therefore v1 < v2 is impossible, for otherwise there would be a number k satisfying v1 < k < v2, contradicting the statement "never v1 < k < v2". Hence v1 = v2, and the proof is completed.

Remark 1.13. For another proof of the minimax theorem (an inductive proof) see [20]. There the statement of the minimax theorem is the following: Let A = (aij) be an arbitrary m x n matrix, and let Sm and Sn be respectively the sets of points X = (x1, ..., xm) and Y = (y1, ..., yn) satisfying

xi >= 0, i = 1,...,m, Σ_{i=1}^{m} xi = 1;  yj >= 0, j = 1,...,n, Σ_{j=1}^{n} yj = 1.

Then we have

max_{X∈Sm} min_{1<=j<=n} Σ_{i=1}^{m} aij xi = min_{Y∈Sn} max_{1<=i<=m} Σ_{j=1}^{n} aij yj.   (22)

1.6 Saddle points in mixed strategies

In this section we show that for any matrix game a saddle point in mixed strategies always exists. If X = (x1, ..., xm) ∈ Sm and Y = (y1, ..., yn) ∈ Sn are respectively mixed strategies of players 1 and 2, then the expected payoff Σ_{i=1}^{m} Σ_{j=1}^{n} aij xi yj can be written in matrix notation:

Σ_{i=1}^{m} Σ_{j=1}^{n} aij xi yj = XAY^t.
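Theorem 1.1 can be illustrated numerically. The sketch below (our own illustration, not from the text) evaluates XAY^t for matching pennies over a grid of mixed strategies with exact rational arithmetic; the maximin and the minimax of the expected payoff coincide at 0, attained at p = q = 1/2.

```python
from fractions import Fraction

# Matching pennies: A = [[1, -1], [-1, 1]], X = (p, 1-p), Y = (q, 1-q).
A = [[1, -1], [-1, 1]]

def xay(p, q):
    """Expected payoff X A Y^t for this 2x2 game."""
    return (A[0][0]*p*q + A[0][1]*p*(1-q)
            + A[1][0]*(1-p)*q + A[1][1]*(1-p)*(1-q))

grid = [Fraction(k, 20) for k in range(21)]   # p, q in {0, 1/20, ..., 1}
v1 = max(min(xay(p, q) for q in grid) for p in grid)
v2 = min(max(xay(p, q) for p in grid) for q in grid)
print(v1, v2)   # 0 0
```

The grid contains the optimal point p = q = 1/2, which is why the two values agree exactly here; on an arbitrary grid they would only agree approximately.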

min_{Y∈Sn} max_{X∈Sm} XAY^t = max_{X∈Sm} XAY*^t.   (28)

By the definitions of minimum and maximum, we have

min_{Y∈Sn} X*AY^t <= X*AY*^t <= max_{X∈Sm} XAY*^t.   (29)

Since the left-hand sides of (27) and (28) are equal, all terms in (27) through (29) are equal to each other. In particular, for all Y ∈ Sn we have

X*AY^t >= min_{Y∈Sn} X*AY^t = X*AY*^t,   (30)

and for all X ∈ Sm we have

XAY*^t <= max_{X∈Sm} XAY*^t = X*AY*^t.   (31)

By (30) and (31) it results that (X*, Y*) is a saddle point of XAY^t, and the sufficiency of the condition is proved.

Definition 1.8. If (X*, Y*) is a saddle point (see Definition 1.7), then we say that X* and Y* are respectively optimal strategies of players 1 and 2, and v = X*AY*^t is the value of the game. We also say that (X*, Y*) is a solution (or a Nash equilibrium) of the game.

Remark 1.14. The definition of a saddle point shows us that, as long as player 1 sticks to his optimal strategy X*, he can be sure to get at least the expected payoff v = X*AY*^t no matter which strategy player 2 chooses; similarly, as long as player 2 sticks to his optimal strategy Y*, he can hold player 1's expected payoff down to at most v no matter how player 1 makes his choice of strategy. By Theorem 1.2, the value v of the game is the common value of

v1 = max_{X∈Sm} min_{Y∈Sn} XAY^t and v2 = min_{Y∈Sn} max_{X∈Sm} XAY^t.

Now we give some essential properties of optimal strategies. To do this, we first introduce some notation. For the matrix A = (aij) we denote the ith row vector of A by Ai and the jth column vector of A by A.j. Thus

XA.j = Σ_{i=1}^{m} aij xi

is the expected payoff when player 1 chooses the mixed strategy X and player 2 chooses the pure strategy j, and

Ai Y^t = Σ_{j=1}^{n} aij yj

is the expected payoff when player 2 chooses the mixed strategy Y and player 1 chooses the pure strategy i.

Lemma 1.6. Let A = (aij) be the payoff matrix of an m x n matrix game whose value is v. The following statements are true:
(1) If Y* is an optimal strategy of player 2 and Ai Y*^t < v, then xi* = 0 in every optimal strategy X* of player 1.

(2) If X* is an optimal strategy of player 1 and X*A.j > v, then yj* = 0 in every optimal strategy Y* of player 2.

Proof. We prove only (1); the proof of (2) is similar. Let (X*, Y*) be a saddle point of the game. Since Y* is an optimal strategy of player 2, we have Ai Y*^t <= v, i = 1,...,m. We denote

S1 = { i | Ai Y*^t < v }, S2 = { i | Ai Y*^t = v }.

Then we can write

v = X*AY*^t = Σ_{i=1}^{m} xi* Ai Y*^t = Σ_{i∈S1} xi* Ai Y*^t + Σ_{i∈S2} xi* Ai Y*^t = Σ_{i∈S1} xi* Ai Y*^t + v Σ_{i∈S2} xi*.

On the other hand, since Σ_{i=1}^{m} xi* = 1, we also have v = v Σ_{i∈S1} xi* + v Σ_{i∈S2} xi*. Subtracting the two expressions,

Σ_{i∈S1} (v - Ai Y*^t) xi* = 0.

Since i ∈ S1 implies v - Ai Y*^t > 0, we have xi* = 0 for all i ∈ S1, and the proof is completed.

Remark 1.15. This result states that if player 2 has an optimal strategy Y* in a matrix game with value v, and if player 1, by using the ith pure strategy, cannot attain the expected payoff v, then the pure strategy i is a bad strategy and cannot appear with positive probability in any of his optimal mixed strategies.

Lemma 1.7. Let A = (aij) be the payoff matrix of an m x n matrix game whose value is v. The following statements are true:
(1) X* ∈ Sm is an optimal strategy of player 1 if and only if v <= X*A.j, j = 1,...,n.
(2) Y* ∈ Sn is an optimal strategy of player 2 if and only if Ai Y*^t <= v, i = 1,...,m.

Proof. We prove only (1); the proof of (2) is similar. The necessity ("=>") of the condition follows directly from the definition of a saddle point. To prove the sufficiency ("<="), assume that v <= X*A.j, j = 1,...,n. Let Y = (y1, ..., yn) ∈ Sn be any mixed strategy of player 2. Multiplying both sides of the inequality v <= X*A.j by yj and summing for j = 1,...,n, we obtain

v <= Σ_{j=1}^{n} X*A.j yj = X*AY^t.

By the minimax theorem the game has a value v, so player 2 has an optimal strategy Y*; by the necessity part of statement (2), XAY*^t <= v for all X ∈ Sm. In particular X*AY*^t <= v, and by the above also v <= X*AY*^t; hence X*AY*^t = v. It follows that

XAY*^t <= X*AY*^t <= X*AY^t

for all X ∈ Sm and all Y ∈ Sn, which proves that (X*, Y*) is a saddle point of the game; hence X* is an optimal strategy of player 1.

Remark 1.16. If the value of a game is known, the above lemma can be used to examine whether a given strategy X of player 1, or a given strategy Y of player 2, is optimal.

Example 1.10. The matrix game with the payoff matrix

A = | 2 3 1 |
    | 1 2 3 |
    | 3 1 2 |

has the value v = 2, and X* = Y* = (1/3, 1/3, 1/3) are optimal strategies for players 1 and 2. Indeed, according to Lemma 1.7, the strategy X* = (1/3, 1/3, 1/3) of player 1 is optimal: we have

X*A.1 = (1/3, 1/3, 1/3)(2, 1, 3)^t = 2,
X*A.2 = (1/3, 1/3, 1/3)(3, 2, 1)^t = 2,
X*A.3 = (1/3, 1/3, 1/3)(1, 3, 2)^t = 2,

therefore v = 2 <= X*A.j, j = 1, 2, 3. Hence X* is an optimal strategy of player 1.

On the other hand, the pure strategy x2 = 1, namely X2 = (0, 1, 0), is not an optimal strategy. Indeed, we have

v - X2A.1 = 2 - (0, 1, 0)(2, 1, 3)^t = 2 - 1 = 1,

so v > X2A.1, and X2 does not satisfy the condition of Lemma 1.7. The same holds for the other pure strategies of player 1.
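The optimality test of Lemma 1.7 applied to Example 1.10 can be checked mechanically; a small sketch with exact rational arithmetic (the variable names are ours):

```python
from fractions import Fraction

A = [[2, 3, 1], [1, 2, 3], [3, 1, 2]]
v = 2
third = Fraction(1, 3)
X = Y = [third, third, third]

# X A.j for each column j, and Ai Y^t for each row i.
col_payoffs = [sum(A[i][j] * X[i] for i in range(3)) for j in range(3)]
row_payoffs = [sum(A[i][j] * Y[j] for j in range(3)) for i in range(3)]
print(col_payoffs, row_payoffs)   # all six values equal 2 = v
```

Since v <= X A.j for every j and Ai Y^t <= v for every i, both strategies pass the criterion, confirming that (X*, Y*) is a saddle point.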

Similarly, the strategy Y* = (1/3, 1/3, 1/3) of player 2 is optimal. Indeed, we have

v - A1Y*^t = 2 - (2, 3, 1)(1/3, 1/3, 1/3)^t = 2 - 2 = 0,
v - A2Y*^t = 2 - (1, 2, 3)(1/3, 1/3, 1/3)^t = 2 - 2 = 0,
v - A3Y*^t = 2 - (3, 1, 2)(1/3, 1/3, 1/3)^t = 2 - 2 = 0,

so Ai Y*^t <= v, i = 1, 2, 3. Hence Y* is an optimal strategy of player 2, which proves that (X*, Y*) is a saddle point of the game. Note that this game hasn't a saddle point in pure strategies, because

v1 = max min aij = max(1, 1, 1) = 1, while v2 = min max aij = min(3, 3, 3) = 3.

1.7 Domination of strategies

There are situations in which, regardless of which strategy his opponent chooses, a player does better with one of his strategies than with another. For example, we consider the matrix game whose payoff matrix is

A = | 2 -1  1 |
    | 0  1 -1 |
    | 1 -2  0 |

In this matrix A the elements of the third row are smaller than the corresponding elements of the first row. So, an examination of the elements of the payoff matrix shows us that player 1 will never use his third pure strategy, since each element of this row is smaller than the corresponding element in the other row: regardless of which strategy player 2 chooses, player 1 will gain more by choosing strategy 1 than by choosing strategy 3. Strategy 3 of player 1 can only appear in his optimal mixed strategies with probability zero. Thus, the third row can be deleted, and we need to consider only the resulting matrix

A' = | 2 -1  1 |
     | 0  1 -1 |

Now, in this matrix A' each element of the first column is greater than the corresponding element of the third column. So, player 2 will lose less by choosing strategy 3 than by choosing strategy 1. Consequently, the first strategy of player 2 will never be included in any of his optimal mixed strategies with positive probability.

Therefore, the first column of the matrix A' can be deleted, to obtain

A'' = |  1 -1 |
      | -1  1 |

It is easy to verify that this 2x2 matrix game has the mixed strategy solution X = Y = (1/2, 1/2) and v = 0. Returning to the original 3x3 matrix game with payoff matrix A, its solution is

X* = (1/2, 1/2, 0), Y* = (0, 1/2, 1/2), v = 0.

Remark 1.17. We have seen that in the matrix game with the payoff matrix A, player 1 will never use his strategy 3, since strategy 1 always gives him a greater payoff, and player 2 will never use his strategy 1, since it always costs him a greater loss than strategy 3. This motivates the following definition.

Definition 1.9. Let A = (aij) be the payoff matrix of an m x n matrix game. If

akj >= alj, j = 1,...,n,   (32)

we say that player 1's strategy k dominates his strategy l. Similarly, if

aik <= ail, i = 1,...,m,   (33)

we say that player 2's strategy k dominates his strategy l. If the inequalities in (32) or (33) are replaced by strict inequalities, we say that the strategy k of player 1, respectively of player 2, strictly dominates his strategy l.

Remark 1.18. Strictly dominated strategies will not be played by a rational player 1, and strictly dominated strategies will not be played by a rational player 2, so they can be eliminated. It can be proved that when a pure strategy is strictly dominated by a pure strategy (or by a convex linear combination of several other pure strategies), we can delete the row or column of the payoff matrix corresponding to the dominated pure strategy and solve the reduced matrix game. The optimal strategies of the original matrix game are then obtained from those of the reduced one by assigning probability zero to the pure strategies corresponding to the deleted rows or columns.

Remark 1.19. If the domination isn't strict, the deletion of a row or column may involve the loss of some optimal strategies of the original game. But we can still obtain a solution for the original game from that of the reduced game.
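Strict domination checks are easy to mechanize. A minimal sketch (function names are ours), run on the 3x3 matrix that opened this section, with the signs as read here:

```python
def strictly_dominated_rows(a):
    """0-based rows i such that some other row k has a[k][j] > a[i][j]
    for every j (player 1 never plays such a row)."""
    rows = range(len(a))
    return [i for i in rows
            if any(all(a[k][j] > a[i][j] for j in range(len(a[0])))
                   for k in rows if k != i)]

def strictly_dominated_cols(a):
    """0-based columns j such that some other column k has a[i][k] < a[i][j]
    for every i (player 2 never plays such a column)."""
    cols = range(len(a[0]))
    return [j for j in cols
            if any(all(a[i][k] < a[i][j] for i in range(len(a)))
                   for k in cols if k != j)]

A = [[2, -1, 1], [0, 1, -1], [1, -2, 0]]
print(strictly_dominated_rows(A))                         # [2]: the third row
print(strictly_dominated_cols([[2, -1, 1], [0, 1, -1]]))  # [0]: the first column
```

Iterating the two checks until nothing is dominated reproduces the reduction carried out by hand above.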

Example 1.11. Let the payoff matrix of a matrix game be

A = | 2 1 4 |
    | 3 1 2 |
    | 1 0 3 |

Strategy 3 of player 2 is dominated by his strategy 2, so we can delete the third column of the payoff matrix; we obtain

A' = | 2 1 |
     | 3 1 |
     | 1 0 |

Then, in the matrix game with the payoff matrix A', strategy 1 of player 2 is dominated by his strategy 2, so the first column can be deleted; one obtains

A'' = | 1 |
      | 1 |
      | 0 |

Strategy 3 of player 1 is dominated by his strategy 2 (or by strategy 1), so we delete the third row, and the result is

A''' = | 1 |
       | 1 |

The reduced game has the pure strategies X1 = (1, 0), X2 = (0, 1), Y = (1); hence the original game has the pure strategies X1 = (1, 0, 0), X2 = (0, 1, 0), Y = (0, 1, 0). The value of the game is v = 1. The optimal strategies of the original game are X = (t1, t2, 0), where t1, t2 >= 0, t1 + t2 = 1; that is, every convex linear combination of the pure strategies X1 and X2 is an optimal (mixed) strategy of player 1.

Remark 1.20. The game in Example 1.11 has the saddle points (1, 2) and (2, 2).

Example 1.12. In the matrix game with the payoff matrix

A = | 3 2 4 0 |
    | 3 4 2 3 |
    | 4 3 4 2 |
    | 0 4 0 8 |

we can successively delete dominated strategies, and so we get the reduced game with the matrix

| 4 2 |
| 0 8 |

It is easy to verify that the optimal strategies of this 2x2 matrix game are X = (4/5, 1/5), Y = (3/5, 2/5), and the value of the game is v = 16/5. Therefore

X = (0, 0, 4/5, 1/5), Y = (0, 0, 3/5, 2/5)

are optimal strategies of the original matrix game, and v = 16/5. In the 3x3 matrix game obtained after the first two deletions we used the domination of a column by a convex linear combination of the other two columns, and in the 3x2 matrix game obtained above we used the domination of a row by a convex linear combination with t1 = t2 = 1/2.

Remark 1.21. The deletion of a certain row or column of a payoff matrix using non-strict domination of strategies may result in a reduced game whose complete set of solutions does not lead to the complete set of solutions of the original larger game.

Remark 1.22. The situation described in Remark 1.21 appears, for example, for the matrix game with payoff matrix

A = | 3 5 3 |
    | 4 3 2 |

Here the first column is dominated (not strictly) by the third column, so it can be deleted. We get the reduced game with the matrix

A'' = | 5 3 |
      | 3 2 |

which has the pure strategy solution X = (1, 0), Y = (0, 1) and v = 3. Hence, for the original game we obtain the optimal strategies X = (1, 0), Y1 = (0, 0, 1) and the value v = 3. But there exists the optimal strategy Y2 = (1/2, 0, 1/2) of the original game too, and all convex linear combinations of Y1 and Y2 are optimal strategies of player 2; the reduced game yields only Y1.

1.8 Solution of the 2x2 matrix game

Let the payoff matrix be

A = | a b |
    | c d |

and suppose that the game has no saddle point. Then the optimal mixed strategies X = (p, 1-p) of player 1 and Y = (q, 1-q) of player 2 make the opponent's expected payoff the same whichever pure strategy the opponent uses. Writing these equations in terms of the elements of the payoff matrix, we have

ap + c(1-p) = v, bp + d(1-p) = v,
aq + b(1-q) = v, cq + d(1-q) = v.

The equations in p give us

p = (d - c)/(a + d - b - c),

and the equations in q give us

q = (d - b)/(a + d - b - c).

Then

v = (ad - bc)/(a + d - b - c).

Example 1.13. The 2x2 matrix game with the payoff matrix

A = | 5 3 |
    | 3 2 |

has a solution in pure strategies: X = (1, 0), Y = (0, 1), v = 3. Indeed, we have v1 = max(3, 2) = 3, v2 = min(5, 3) = 3, and a12 = 3 is a saddle point; the above formulae are not needed (and do not apply).

Example 1.14. The 2x2 matrix game with the payoff matrix

A = | 3 2 |
    | 0 5 |

hasn't a solution in pure strategies: we have v1 = max(2, 0) = 2 < 3 = min(3, 5) = v2. Applying the formulae, we obtain

p = (5 - 0)/(3 + 5 - 2 - 0) = 5/6, q = (5 - 2)/(3 + 5 - 2 - 0) = 3/6 = 1/2,

hence X = (5/6, 1/6), Y = (1/2, 1/2), and the value of the game is

v = (3·5 - 2·0)/(3 + 5 - 2 - 0) = 15/6 = 5/2.

Indeed, we have v = XAY^t = 5/2.

Remark 1.23. The formulae for p, q and v were derived for a game without a saddle point; they are valid both in the case a > b, a > c, d > b, d > c and in the symmetric case b > a, c > a, b > d, c > d.

Remark 1.24. For the 2x2 matrix game with no saddle point, an interesting technique of solution is described by Williams. Let the payoff matrix be

A = | a b |
    | c d |
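The formulae for p, q and v fit in a few lines of code; a sketch (the function name is ours) using exact rational arithmetic:

```python
from fractions import Fraction

def solve_2x2(a, b, c, d):
    """Mixed solution of [[a, b], [c, d]] assuming no saddle point:
    p, q are the probabilities of row 1 and column 1, v is the value."""
    den = a + d - b - c
    return (Fraction(d - c, den),
            Fraction(d - b, den),
            Fraction(a * d - b * c, den))

print(solve_2x2(3, 2, 0, 5))   # (5/6, 1/2, 5/2): the no-saddle game above
```

For a game with a saddle point the denominator may vanish or the "probabilities" may leave [0, 1], so the no-saddle check must be done first.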

First, subtract each element of the second column from the corresponding element of the first column: a - b and c - d. Then take the absolute values of the two differences and reverse their order: |c - d| and |a - b|. The ratio |c - d| : |a - b| is the ratio of x1 to x2 in player 1's optimal strategy X = (x1, x2) = (p, 1 - p). The similar technique, but with the elements of the rows, leads us to player 2's optimal strategy Y = (y1, y2) = (q, 1 - q).

Example 1.15. In the case of Example 1.14 we have

A = | 3 2 |
    | 0 5 |

In the first step we have 3 - 2 and 0 - 5, that is, 1 and -5; in the second step,

1 and -5 → 5 and 1.

Hence x1 = 5x2, and since x1 + x2 = 1 we obtain 6x2 = 1, that is, x2 = 1/6 and x1 = 5/6. Thus X = (5/6, 1/6). With the elements of the rows: in the first step we have 3 - 0 and 2 - 5, that is, 3 and -3; in the second step,

3 and -3 → 3 and 3.

The ratio 3/3 is the ratio of y1 to y2 in player 2's optimal strategy, hence y1 = y2, and since y1 + y2 = 1 we obtain y1 = y2 = 1/2, that is, Y = (1/2, 1/2). These results are the same as those of Example 1.14.
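Williams' oddments procedure, as described above, can also be written out in code; a sketch (the function name is ours):

```python
from fractions import Fraction

def oddments_2x2(a, b, c, d):
    """Williams' oddments for [[a, b], [c, d]] with no saddle point:
    column differences, absolute values swapped, give player 1's ratio;
    row differences treated the same way give player 2's ratio."""
    x1, x2 = abs(c - d), abs(a - b)   # reversed |column differences|
    y1, y2 = abs(b - d), abs(a - c)   # reversed |row differences|
    return ((Fraction(x1, x1 + x2), Fraction(x2, x1 + x2)),
            (Fraction(y1, y1 + y2), Fraction(y2, y1 + y2)))

X, Y = oddments_2x2(3, 2, 0, 5)
print(X, Y)   # X = (5/6, 1/6), Y = (1/2, 1/2)
```

This reproduces the hand computation of Example 1.15 and agrees with the formulae of section 1.8.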

1.9 Graphical solution of 2xn and mx2 matrix games

In the case of 2xn and mx2 matrix games we can present a graphical method for finding the solution. We illustrate the method on a 3x2 matrix game. Suppose that the payoff matrix is

A = | a b |
    | c d |
    | e f |

Denote player 1's pure strategies by T, M, B and player 2's pure strategies by L, R. Assume that player 2 uses the mixed strategy Y = (y1, y2) = (y, 1 - y), where 0 <= y <= 1; y = 1 and y = 0 represent the pure strategies L and R respectively. So we can write

      L     R
      y     1-y
T   | a     b |
M   | c     d |
B   | e     f |

If player 2 chooses the pure strategy L, that is, y = 1, and player 1 chooses the pure strategy T, the payoff is a; if player 2 chooses the pure strategy R, that is, y = 0, the payoff corresponding to T is b. Now suppose that player 2 chooses a mixed strategy Y = (y, 1 - y), represented by the point P in Fig. 1.1. We join the points a and b by the line ab; then the height PQ represents the expected payoff when player 2 uses Y and player 1 uses T. This amount is

A1 Y^t = ay + b(1 - y).

Similarly, corresponding to player 1's strategies M and B we have the lines cd and ef, and the amounts

A2 Y^t = cy + d(1 - y), A3 Y^t = ey + f(1 - y).

The heights of the points on these lines represent the expected payoffs if player 2 uses Y while player 1 uses M and B, respectively.

Figure 1.1: Mixed strategy Y

For any mixed strategy Y of player 2, if player 1 answers with his best pure strategy, player 2's expected loss is the maximum of the three ordinates on the lines ab, cd, ef at the point y, that is,

max_{1<=i<=3} Ai Y^t = max_{1<=i<=3} Σ_{j=1}^{2} aij yj.   (34)

The graph of this function is represented by the heavy black line in Fig. 1.1. Player 2 wishes to choose a Y so as to minimize the maximum function in (34). We see from the figure that he should choose the mixed strategy corresponding to the point A'. At this point the expected payoff is

A'B' = min_{Y∈S2} max_{1<=i<=3} Σ_{j=1}^{2} aij yj,

and A'B' is the value of the game.

The graphical solution of a 2xn matrix game is similar. We explain it for the case n = 3; let the payoff matrix A of the game be

A = | a b c |
    | d e f |

Denote player 1's pure strategies by U, D and player 2's pure strategies by L, M, R. Suppose that x = 1 represents the pure strategy U and x = 0 represents the pure strategy D. Now suppose that player 1 chooses a mixed strategy X = (x1, x2) = (x, 1 - x), where 0 <= x <= 1, represented by the point P in the figure. If player 1 chooses the pure strategy U, that is, x = 1, and player 2 chooses the pure strategy L, the payoff is a; if player 1 chooses D, that is, x = 0, the payoff corresponding to L is d. We join the points a and d by the line ad; the height PQ represents the expected payoff when player 1 uses X and player 2 uses L. The amount is

XA.1 = Σ_{i=1}^{2} ai1 xi = ax + d(1 - x).

Similarly, corresponding to player 2's strategies M and R we have the lines be and cf. The heights of the points on these lines represent the expected payoffs if player 1 uses X while player 2 uses M and R, respectively.

Figure 1.2: Mixed strategy X

For any mixed strategy X of player 1, his expected payoff is at least the minimum of the three ordinates on the lines ad, be, cf at the point x, that is,

min_{1<=j<=3} XA.j = min_{1<=j<=3} Σ_{i=1}^{2} aij xi.   (35)

The graph of this function is represented by the heavy black line in the figure. Player 1 wishes to choose an X so as to maximize the minimum function in (35). We see from the figure that he should choose the mixed strategy corresponding to the point A'. At this point the expected payoff is

A'B' = max_{X∈S2} min_{1<=j<=3} XA.j,

which is the value of the game.

We note that the point B' in Fig. 1.2 is the intersection of the lines ad and cf. The abscissa x = x̄ of the point A' and the value A'B' can be evaluated by solving a system of two linear equations in two unknowns.

Remark 1.25. Therefore, the solution of the 2 × 3 matrix game can be obtained from the solution of the 2 × 2 matrix game

[ a  c ]
[ d  f ]

The graphical method described above can be used to solve all 2 × n matrix games.

Example 1.16. Find the solution of the 2 × 4 matrix game with the payoff matrix

A = [ 1  5  5  3 ]
    [ 4  1  3  2 ]

The third column is dominated by the fourth column, so it can be eliminated. We obtain the payoff matrix

A = [ 1  5  3 ]
    [ 4  1  2 ]

with columns L, M, R. Now suppose that player 1 chooses a mixed strategy X = (x, 1−x). In Figure 1.3 we have the lines ad, be and cf corresponding to player 2's strategies L, M and R.

Figure 1.3: X for Example 1.16

We see from the figure that player 1 should choose the mixed strategy corresponding to the point A'. The abscissa x = x̄ of the point A' and the value A'B' can be evaluated by solving the system of two linear equations corresponding to the strategies L and R:

3x + y = 4   (L)
−x + y = 2   (R)

and the solution is x̄ = 1/2, ȳ = 5/2. Thus the optimal mixed strategy of player 1 is X = (1/2, 1/2), and the value of the game is v = 5/2.

The graph also shows us that player 2's optimal strategy doesn't involve his pure strategy M. To find the optimal mixed strategy of player 2 we have Y = (q1, q2, q3) and the equality

[ 1  5  3 ] [ q1 ]   [ 5/2 ]
[ 4  1  2 ] [ q2 ] = [ 5/2 ]
            [ q3 ]

So we obtain q1 + 5 q2 + 3 q3 = 5/2, 4 q1 + q2 + 2 q3 = 5/2, and, because q1 + q2 + q3 = 1, we get q2 = 0, q1 = 1/4 and q3 = 3/4. Thus the optimal mixed strategy of player 2 is Y = (1/4, 0, 3/4). For the original 2 × 4 matrix game the optimal strategy of player 2 is Y = (1/4, 0, 0, 3/4).
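The claims of Example 1.16 can be checked mechanically with exact arithmetic: X must guarantee at least v against every column of the reduced matrix, and Y must concede at most v against every row. A small sketch:

```python
from fractions import Fraction as F

# Reduced payoff matrix of Example 1.16 (columns L, M, R).
A = [[F(1), F(5), F(3)],
     [F(4), F(1), F(2)]]

# Intersection of the value lines for L and R:  4 - 3x = 2 + x.
x = F(1, 2)
v = 2 + x                      # value 5/2
X = [x, 1 - x]

# Player 2: q2 = 0, and q1 + 3*q3 = 5/2 with q1 + q3 = 1.
Y = [F(1, 4), F(0), F(3, 4)]

cols = [sum(X[i] * A[i][j] for i in range(2)) for j in range(3)]
rows = [sum(A[i][j] * Y[j] for j in range(3)) for i in range(2)]
print(v, cols, rows)           # cols all >= 5/2, rows all <= 5/2
```

The columns give (5/2, 3, 5/2) and the rows give (5/2, 5/2), confirming the optimality of X and Y.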

1.10 Solution of 3 × 3 matrix games

To obtain the solution of a 3 × 3 matrix game we use the fact that a linear function on a convex polygon can reach its maximum (minimum) only at a vertex of the polygon. Consider the payoff matrix of an arbitrary 3 × 3 matrix game, given by

A = [ a11  a12  a13 ]
    [ a21  a22  a23 ]
    [ a31  a32  a33 ]

A mixed (pure) strategy for the player 1 has the form X = (x1, x2, x3) with x1, x2, x3 ≥ 0 and x1 + x2 + x3 = 1. The set of all points in the closed equilateral triangle 123 with the vertices (1, 0, 0), (0, 1, 0), (0, 0, 1) is the simplex S3; see Fig. 1.4. The equations of the three sides 23, 31, 12 of the triangle are x1 = 0, x2 = 0, x3 = 0, respectively. The numbers x1, x2, x3, with the above conditions, represent the distances from X to the sides of the triangle S3 with the vertices 1, 2, 3, respectively (see Fig. 1.4); they are the barycentric coordinates of the point X = (x1, x2, x3). (The points outside the triangle can be regarded as points with one or two of the three coordinates x1, x2, x3 assuming negative values.)

Figure 1.4: Barycentric coordinates

The value of the game is

v = max_{X∈S3} min_{1≤j≤3} XA_j = max_{X∈S3} min{ XA_1, XA_2, XA_3 }.   (36)

Consider the equations

XA_1 = XA_2,  XA_2 = XA_3,  XA_3 = XA_1.   (37)

Each equation represents a straight line which divides the whole plane into two half-planes. The equation XA_1 = XA_2, for instance, divides the whole plane into two half-planes: the points X in one half-plane satisfy the condition XA_1 < XA_2, while those in the other half-plane satisfy the condition XA_1 > XA_2. The same situation holds for the other two equations in (37). The three lines (37) either intersect at one point or are parallel to each other. In both cases these lines divide the whole plane into three regions R1, R2, R3, respectively (see Fig. 1.5).

Figure 1.5: The three regions

In the region R1 we have min_{1≤j≤3} XA_j = XA_1, in the region R2 we have min_{1≤j≤3} XA_j = XA_2, and in the region R3 we have min_{1≤j≤3} XA_j = XA_3. Therefore, the value of the game (36) can be written as

v = max_{X∈S3} min_{1≤j≤3} XA_j
  = max { max_{X∈S3∩R1} XA_1, max_{X∈S3∩R2} XA_2, max_{X∈S3∩R3} XA_3 }.   (38)

To determine the value v, we should first compute max_{X∈S3∩Rj} XA_j, j = 1, 2, 3. Each of the sets S3∩Rj, j = 1, 2, 3, is a convex polygon. It is sufficient to evaluate the values of XA_j at the relevant vertices of these polygons and to make a comparison between these values. The maximum value must be v, and the optimal strategies of player 1 can be determined by this comparison.

The optimal strategies of player 2 can be determined in a similar manner. We have

v = min_{Y∈S3} max_{1≤i≤3} A_i Y^t = min_{Y∈S3} max{ A_1 Y^t, A_2 Y^t, A_3 Y^t }
  = min { min_{Y∈S3∩T1} A_1 Y^t, min_{Y∈S3∩T2} A_2 Y^t, min_{Y∈S3∩T3} A_3 Y^t },

where Ti is the region in which the linear function A_i Y^t satisfies A_i Y^t = max_{1≤i≤3} A_i Y^t. It suffices to compute the values of A_i Y^t at the vertices of the convex polygons S3∩Ti and to make a comparison between them. The minimum value must be v, and the vertices Y at which the minimum is assumed are the points corresponding to the optimal strategies of player 2.

Remark 1.26. To simplify the computation we can add a convenient constant to each element of the initial matrix; after the value v of the game is determined, we subtract the same constant from it.
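The quantity (38) can also be approximated directly, without finding the regions, by sampling X on a grid over the simplex S3 and keeping the largest min_j XA_j. A rough sketch, on a hypothetical 3 × 3 matrix (not one from the text):

```python
# Brute-force check of (38): sample X on a grid over the simplex S3
# and take the largest value of min_j XA_j.

def lower_value(A, steps=40):
    best = None
    for i in range(steps + 1):
        for j in range(steps + 1 - i):
            x = (i / steps, j / steps, (steps - i - j) / steps)
            worst = min(sum(x[r] * A[r][c] for r in range(3))
                        for c in range(3))
            if best is None or worst > best[0]:
                best = (worst, x)
    return best

A = [[1, -1, 0], [-1, 1, 0], [0, 0, 1]]   # made-up example matrix
v, X = lower_value(A)
print(v, X)
```

For this matrix the first two columns sum to zero, so min_j XA_j can never exceed 0, and 0 is attained; the grid search reports the value 0.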

Example 1.17. Let us compute the value of the game and find the optimal strategies of the game for which the payoff matrix is

B = [ 4  2  3 ]
    [ 3  4  2 ]
    [ 4  0  8 ]

To simplify the computation we add the constant −4 to each element of the matrix. The result is the matrix

A = [  0  −2  −1 ]
    [ −1   0  −2 ]
    [  0  −4   4 ]

For this matrix game A we have, with a mixed strategy X = (x1, x2, x3),

XA_1 = −x2,  XA_2 = −2x1 − 4x3,  XA_3 = −x1 − 2x2 + 4x3.

The equation of the line XA_1 = XA_2 is 2x1 − x2 + 4x3 = 0, or, using x2 = 1 − x1 − x3, 3x1 + 5x3 = 1. The equation of the line XA_2 = XA_3 is −x1 + 2x2 − 8x3 = 0, or 3x1 + 10x3 = 2. The equation of the line XA_3 = XA_1 is −x1 − x2 + 4x3 = 0, or 5x3 = 1.

The regions R1, R2, R3 in which min_{1≤j≤3} XA_j is equal to XA_1, XA_2, XA_3, respectively, are shown in Fig. 1.6.

Figure 1.6: The three regions for Example 1.17

We evaluate XA_1 at the point (0, 4/5, 1/5):

(0, 4/5, 1/5) (0, −1, 0)^t = −4/5.

The values of XA_2 at the points (1, 0, 0) and (0, 0, 1) are:

(1, 0, 0) (−2, 0, −4)^t = −2,  (0, 0, 1) (−2, 0, −4)^t = −4.

The values of XA_3 at the points (0, 1, 0) and (1/3, 2/3, 0) are:

(0, 1, 0) (−1, −2, 4)^t = −2,  (1/3, 2/3, 0) (−1, −2, 4)^t = −5/3.

By comparison of the above five values, we see that the maximum value of the matrix game is v = −4/5, and the vertex at which the maximum is reached is X = (0, 4/5, 1/5). Thus X = (0, 4/5, 1/5) is the optimal strategy of player 1.

We proceed in a similar way to find the optimal strategy of player 2. We get that the vertices Y1 = (0, 3/5, 2/5) and Y2 = (8/15, 1/3, 2/15) represent optimal strategies of player 2. Hence every

Y = λ Y1 + (1−λ) Y2,  0 ≤ λ ≤ 1,

is an optimal strategy of player 2, that is,

Y = ( 8(1−λ)/15, 3λ/5 + (1−λ)/3, 2λ/5 + 2(1−λ)/15 ),  0 ≤ λ ≤ 1.

By coming back to the original matrix game with the payoff matrix B we obtain the value vB = vA + 4, that is, vB = 16/5. We have the same result as that obtained in Example 1.12, where we used the elimination of dominated strategies.

1.11 Matrix games and linear programming

Next, we formulate the matrix game problem as a linear programming problem.

Remark 1.27. Let A = (a_ij) be the payoff matrix of a matrix game. It isn't a restriction to assume that a_ij > 0 for all i = 1, ..., m and all j = 1, ..., n; then the value v of the game must be a positive number.
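Before moving on, the solution of Example 1.17 can be verified directly with exact arithmetic: X must guarantee at least v against every column of the shifted matrix A, and each of Y1, Y2 must concede at most v against every row. A small check:

```python
from fractions import Fraction as F

# Shifted matrix A of Example 1.17 (original B minus 4 in every entry).
A = [[F(0), F(-2), F(-1)],
     [F(-1), F(0), F(-2)],
     [F(0), F(-4), F(4)]]

v = F(-4, 5)                       # value of the shifted game
X = [F(0), F(4, 5), F(1, 5)]       # optimal strategy of player 1
Y1 = [F(0), F(3, 5), F(2, 5)]      # optimal strategies of player 2
Y2 = [F(8, 15), F(1, 3), F(2, 15)]

cols = [sum(X[i] * A[i][j] for i in range(3)) for j in range(3)]
print(cols)                        # every entry equals v = -4/5
for Y in (Y1, Y2):
    rows = [sum(A[i][j] * Y[j] for j in range(3)) for i in range(3)]
    print(rows)                    # every entry is <= v
```

Every column payoff of X and every row payoff against Y1 or Y2 turns out to be at most −4/5, and the original value is indeed −4/5 + 4 = 16/5.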

By choosing a mixed strategy X ∈ Sm, player 1 can get at least the expected payoff

min_{1≤j≤n} XA_j = u.

Therefore we have XA_j ≥ u, that is,

Σ_{i=1}^{m} a_ij x_i ≥ u,  j = 1, ..., n,  with  Σ_{i=1}^{m} x_i = 1,  x_i ≥ 0,  i = 1, ..., m.

We denote x'_i = x_i / u, i = 1, ..., m. Then the above problem becomes

Σ_{i=1}^{m} a_ij x'_i ≥ 1,  j = 1, ..., n,  Σ_{i=1}^{m} x'_i = 1/u,  x'_i ≥ 0,  i = 1, ..., m.

Player 1 wishes to maximize u (this maximum is the value v of the game), that is, he wishes to minimize 1/u. Thus the problem reduces to the following linear programming problem:

[min] f = x'_1 + x'_2 + ... + x'_m
Σ_{i=1}^{m} a_ij x'_i ≥ 1,  j = 1, ..., n   (39)
x'_i ≥ 0,  i = 1, ..., m.

Similarly, by choosing a mixed strategy Y ∈ Sn, player 2 can keep player 1 from getting more than

max_{1≤i≤m} A_i Y^t = w.

So we have A_i Y^t ≤ w, that is,

Σ_{j=1}^{n} a_ij y_j ≤ w,  i = 1, ..., m,  with  Σ_{j=1}^{n} y_j = 1,  y_j ≥ 0,  j = 1, ..., n.

We denote y'_j = y_j / w, j = 1, ..., n. Since player 2 wishes to minimize w (this minimum is also the value v of the game), he wishes to maximize 1/w, and the above problem reduces to the following linear programming problem, which is the dual of (39):

[max] g = y'_1 + y'_2 + ... + y'_n
Σ_{j=1}^{n} a_ij y'_j ≤ 1,  i = 1, ..., m   (40)
y'_j ≥ 0,  j = 1, ..., n.

Thus the solution of a matrix game is equivalent to the problem of solving a pair of dual linear programming problems.

Remark 1.28. Due to the duality theorem, well known in linear programming, it is enough to solve one of the above problems.
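For a 2 × 2 game the dual problem (40) is small enough to solve by enumerating the vertices of its feasible region; the value of the game is then 1/g. A sketch with a made-up positive matrix (not one from the text) whose value is known to be 2:

```python
from fractions import Fraction as F
from itertools import combinations

A = [[F(3), F(1)], [F(1), F(3)]]   # hypothetical positive payoff matrix

# Problem (40): maximize y1' + y2'  subject to  A y' <= 1, y' >= 0.
# Each constraint as a half-plane  a*y1' + b*y2' <= c:
halfplanes = [(A[0][0], A[0][1], F(1)),
              (A[1][0], A[1][1], F(1)),
              (F(-1), F(0), F(0)),      # y1' >= 0
              (F(0), F(-1), F(0))]      # y2' >= 0

def intersect(h1, h2):
    (a1, b1, c1), (a2, b2, c2) = h1, h2
    det = a1 * b2 - a2 * b1
    if det == 0:
        return None
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

vertices = [p for h1, h2 in combinations(halfplanes, 2)
            for p in [intersect(h1, h2)]
            if p is not None and all(a * p[0] + b * p[1] <= c
                                     for a, b, c in halfplanes)]
g = max(y1 + y2 for y1, y2 in vertices)
w = 1 / g                 # value of the game
print(g, w)               # 1/2 and 2
```

The feasible vertices are (1/4, 1/4), (1/3, 0), (0, 1/3) and (0, 0); the maximum g = 1/2 is reached at (1/4, 1/4), so Y = w y' = (1/2, 1/2) and the value is 2.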

Example 1.18. We consider the same matrix game as in Example 1.17. Thus we have

B = [ 4  2  3 ]
    [ 3  4  2 ]
    [ 4  0  8 ]

To obtain a_ij > 0 we add the constant 1 to each element of the matrix B and so we obtain

A = [ 5  3  4 ]
    [ 4  5  3 ]
    [ 5  1  9 ]

The corresponding linear programming problem (40) is

[max] g = y'_1 + y'_2 + y'_3
5 y'_1 + 3 y'_2 + 4 y'_3 ≤ 1
4 y'_1 + 5 y'_2 + 3 y'_3 ≤ 1
5 y'_1 + y'_2 + 9 y'_3 ≤ 1
y'_1, y'_2, y'_3 ≥ 0.

In order to solve this problem we use the simplex method. The simplex matrix can be written successively:

[ 5    3      4      1  0  0 |  1   ]
[ 4    5      3      0  1  0 |  1   ]
[ 5    1      9      0  0  1 |  1   ]
[ 1    1      1      0  0  0 |  0   ]

→

[ 0    2     −5      1  0  −1   |  0    ]
[ 0   21/5  −21/5    0  1  −4/5 |  1/5  ]
[ 1   1/5    9/5     0  0   1/5 |  1/5  ]
[ 0   4/5   −4/5     0  0  −1/5 | −1/5  ]

→

[ 0  1  −5/2    1/2    0  −1/2  |  0    ]
[ 0  0  63/10  −21/10  1  13/10 |  1/5  ]
[ 1  0  23/10  −1/10   0   3/10 |  1/5  ]
[ 0  0   6/5   −2/5    0   1/5  | −1/5  ]

→

[ 0  1  0  −1/3   25/63    1/63  |  5/63 ]
[ 0  0  1  −1/3   10/63   13/63  |  2/63 ]
[ 1  0  0   2/3  −23/63  −11/63  |  8/63 ]
[ 0  0  0   0    −4/21   −1/21   | −5/21 ]

All the entries of the objective row are now ≤ 0, so the optimum is reached. Thus the solution is g_max = 5/21, y'_1 = 8/63, y'_2 = 5/63, y'_3 = 2/63, with the slack variables y'_4 = y'_5 = y'_6 = 0, and, reading the dual solution from the objective row under the slack columns, x'_1 = 0, x'_2 = 4/21, x'_3 = 1/21.
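The tableau computations above can be reproduced by a tiny dense simplex in exact rational arithmetic. This is only a sketch (Dantzig pivoting, no unboundedness handling), not a production solver, but on the LP of Example 1.18 it recovers the same optimum:

```python
from fractions import Fraction as F

def simplex_max(A, b, c):
    """Tiny dense simplex for  max c*y  s.t.  A y <= b, y >= 0, b >= 0.
    Returns (optimal value, primal solution), in exact fractions."""
    m, n = len(A), len(A[0])
    # Tableau rows [A | I | b]; objective row [-c | 0 | 0].
    T = [[F(A[i][j]) for j in range(n)] +
         [F(1) if k == i else F(0) for k in range(m)] + [F(b[i])]
         for i in range(m)]
    z = [F(-c[j]) for j in range(n)] + [F(0)] * m + [F(0)]
    basis = list(range(n, n + m))
    while True:
        col = min(range(n + m), key=lambda j: z[j])  # entering variable
        if z[col] >= 0:
            break                                    # optimal
        rows = [i for i in range(m) if T[i][col] > 0]
        row = min(rows, key=lambda i: T[i][-1] / T[i][col])  # ratio test
        piv = T[row][col]
        T[row] = [x / piv for x in T[row]]
        for i in range(m):
            if i != row and T[i][col] != 0:
                f = T[i][col]
                T[i] = [x - f * y for x, y in zip(T[i], T[row])]
        f = z[col]
        z = [x - f * y for x, y in zip(z, T[row])]
        basis[row] = col
    y = [F(0)] * n
    for i, bv in enumerate(basis):
        if bv < n:
            y[bv] = T[i][-1]
    return z[-1], y

A = [[5, 3, 4], [4, 5, 3], [5, 1, 9]]
g, y = simplex_max(A, [1, 1, 1], [1, 1, 1])
print(g, y)   # 5/21 and [8/63, 5/63, 2/63]
```

The pivot sequence differs from the hand computation, but the optimal basis is the same: g = 5/21 with y' = (8/63, 5/63, 2/63).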

29. y3 = 2=21. y1 = y1 w = 63 21 = 15 . Y2 = )Y2 . . hence w = 21 is the value of game with the matrix A. In our mathematical considerations it is important the existence of the players and the possibility to identify and to distinguish them between the others players. . Q Hi = Hi (s). y4 = 4=21. : : : . Here I and Si are sets which contain natural numbers. y5 = 0. n 2. If we take a strategy of each player then we obtain a situation (strategy) of the gameQ = (s1 . 35 . X = 5 . For every situation s. may choose from a set of variants Si . The set of the players I is identi…ed with the set of …rst n non zero natural numbers I = f1. for every i. 8 3 5 5 The problem has still another solution because we have 2 3 1=2 1 0 0 3=14 13=126 3=21 6 1=2 0 1 0 1=42 37=126 2=21 7 6 7 4 3=2 0 0 1 23=42 11=42 4=21 5 0 0 0 0 4=21 1=21 5=21 v=w 1= 21 5 1= 16 . n 2 N. So.0 0 0 0 0 0 Therefore y1 = 0. each player i obtains a payo¤ Hi (s). 0 4 1 0. : : : . ng. mi g and we consider in what follows the general notation Si = fsi g. mi . we denote generally Si = f1. 5 5 . fSi g. 2. 5 5 1: Y = Y1 + (1 Remark 1. The ensemble = < I. We consider that Si is a …nite set. y2 = 3=21. De…nition 1. Y1 = hence 8 1 2 . In conclusion. y6 = 0. sn ) which it is s an element of the cartezian product S1 Sn = i2I Si . si = 1.12 De…nition of the non-cooperative game For each game there are n players. In a next section we will do an another approach for this kind of problems. y3 = 2=5. x3 = 1 . thus y1 = 0. in the moments of the decision during the game. Each player i.10. Because from mathematical point of view the concrete nature of the variants isn’ t essential but the possibility to identify them is important. y3 = 15 . : : : . y2 = 3=5. H is a function de…ned on the set of all situations s and we call it the payo¤ matrix of the player i. 1. In the case of an e¤ective game the player i. x1 = 0. i 2 I. 5 8 2 8 0 Also. n and for each …xed i. are real functions de…ned on the set S. . x2 = 4 . fHi g. i = 1. 
i 2 I > is called noncooperative game. y2 = 1 . s 2 S. S = i2I Si . i 2 I can apply many strategies. 15 3 15 3 2 0. the solution of matrix game with payo¤ matrix B is: 1 We have gmax = w .

Remark 1.30. We call the function Hi the payoff matrix because its set of values can be effectively written as an n-dimensional matrix of type (m1, ..., mn). So, we can accept the name matrix game when we want to underline that this game is given by an n-dimensional matrix.

A general notation for the payoff matrices is given by the following table:

Situation            Payoff matrix
s1, ..., sn          H1, ..., Hn

Example 1.19. Two players put on the table a coin of the same kind. Each player has two strategies. If both players choose the same face, then the first player takes the two coins; in the contrary case, the second player takes the two coins. The first player is denoted by 1 and the second by 2. If s1 = 1 or s1 = 2, then player 1 chose "heads" respectively "tails". Similar are the values s2 = 1, respectively s2 = 2, for player 2. (See Example 1.1.)

Then this game is Γ = < I, S1, S2, H1, H2 >, where I = {1, 2} and S1 = S2 = {1, 2}. It follows that S = S1 × S2 = {(1,1), (1,2), (2,1), (2,2)}.

The payoff matrix H1(s) of the player 1 can be written as:

H1 = [ H1(1,1)  H1(1,2) ]  =  [  1  −1 ]
     [ H1(2,1)  H1(2,2) ]     [ −1   1 ]

where the rows correspond to the strategies of player 1 and the columns to the strategies of player 2. The payoff matrix of the player 2 is

H2 = [ H2(1,1)  H2(2,1) ]  =  [ −1   1 ]
     [ H2(1,2)  H2(2,2) ]     [  1  −1 ]

and here the rows correspond to the strategies of player 2 and the columns correspond to the strategies of player 1.

Remark 1.31. So, for the game considered in Example 1.19 the payoff matrices are

Situation        Payoff matrix
s1   s2          H1    H2
1    1            1    −1
1    2           −1     1
2    1           −1     1
2    2            1    −1
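When the players use mixed strategies (introduced below), player 1's expected payoff in Example 1.19 is the bilinear form P1 H1 P2^t. A small sketch of this computation:

```python
H1 = [[1, -1], [-1, 1]]   # payoff matrix of player 1 in Example 1.19

def expected_payoff(P1, H, P2):
    """P1 H P2^t for row-vector mixed strategies P1 and P2."""
    return sum(P1[i] * H[i][j] * P2[j]
               for i in range(len(P1)) for j in range(len(P2)))

print(expected_payoff([1, 0], H1, [1, 0]))          # both play heads: 1
print(expected_payoff([0.5, 0.5], H1, [0.3, 0.7]))  # 0, whatever P2 is
```

With P1 = (1/2, 1/2) the expected payoff is 0 against every strategy of player 2, which is the value of this game.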

Example 1.19 shows us that it isn't advantageous for a player to apply the same strategy all the time. If, for example, player 1 applies only the strategy 1 in all games which are played, then player 2 observes this and applies the strategy 2, so player 1 will be a loser all the time. The same holds for player 2. So, by repeating the game, in order to obtain a payoff as large as possible, it is necessary for every player to apply each strategy si with some probability (relative frequency) p_{i si}. That is how every player can ensure the best possible average value of the game.

For the row matrix of all probabilities p_{i si} which correspond to the player i we use the notation Pi = [p_{i1}, ..., p_{i mi}]. The vector Pi is called the mixed strategy of the player i. If only one probability from the vector Pi is different from 0, and then it is equal to 1, then Pi is the pure strategy si of the player.

We denote by Ji the row matrix which contains only 1's; so, for every player, we can write Pi Ji^t = 1, where t is the symbol for the transposed matrix.

We denote by P̄i the mixed strategy of all players except the player i. We suppose that each player fixes his strategy independently of those of the other players, so P̄i = ∏_{j≠i} Pj, where we consider this product of vectors as a cartesian product (each component with each component). When we write the elements of the vector P̄i we consider the lexicographic ordering of the elements.
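The cartesian-product construction of P̄i, with lexicographic ordering of its components, can be sketched in a few lines:

```python
from itertools import product
from functools import reduce

def outer(*mixed):
    """Componentwise product of mixed strategies, lexicographic order."""
    return [reduce(lambda a, b: a * b, combo) for combo in product(*mixed)]

P2 = [0.2, 0.3, 0.5]            # illustrative mixed strategies
P3 = [0.1, 0.2, 0.3, 0.4]
P1_bar = outer(P2, P3)          # [p21*p31, p21*p32, ..., p23*p34]
print(len(P1_bar), sum(P1_bar)) # 12 components, summing to 1
```

The result has m2 · m3 = 12 components, and, being a product of probability vectors, it again sums to 1.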
For example, let

P1 = [p11, p12],  P2 = [p21, p22, p23],  P3 = [p31, p32, p33, p34]

be the mixed strategies of the players I = {1, 2, 3}. Then

P̄1 = P2 × P3 = [p21 p31, p21 p32, p21 p33, p21 p34, p22 p31, p22 p32, p22 p33, p22 p34, p23 p31, p23 p32, p23 p33, p23 p34].

1.13 Definition of the equilibrium point

Let us consider a non-cooperative game Γ = < I, {Si}, {Hi}, i ∈ I >. We suppose that the game repeats itself many times. Then, in every situation s = (s1, ..., sn) which exists at a given moment of time, each player can choose a preferred situation s0, in opposition with the situation s, in order to obtain a payoff as large as possible. This preferred situation can be obtained by modifying only the strategy of that player. So, in the game of Example 1.19 we have:

Given situation s     Preferred situation s0     for the player
(1, 1)                (1, 2)                     2
(1, 2)                (2, 2)                     1
(2, 1)                (1, 1)                     1
(2, 2)                (2, 1)                     2

Similarly,

P̄2 = P1 × P3 = [p11 p31, p11 p32, p11 p33, p11 p34, p12 p31, p12 p32, p12 p33, p12 p34],
P̄3 = P1 × P2 = [p11 p21, p11 p22, p11 p23, p12 p21, p12 p22, p12 p23].

We say that the non-cooperative game is solved if we can determine those mixed strategies (solutions) Pi, i = 1, ..., n, for which, considering the vector P̄i as fixed, the payoff function Fi = Pi Hi P̄i^t has the maximum value, for every i = 1, ..., n. The payoff functions Fi are written in matriceal form by using the elements of the matrices of this game. The mathematical object obtained here is called the equilibrium point (Nash equilibrium) of the game.

Definition 1.11. The strategies P = (P1, ..., Pn), Pi Ji^t = 1, i = 1, ..., n, form an equilibrium point of the game if, for every i = 1, ..., n, there isn't any vector with probabilities P'i for which F'i = Fi(P'i, P̄i) > Fi = Fi(Pi, P̄i).

Example 1.20. The data of the non-cooperative game from Example 1.19 can be represented in the following form (P̄1 = P2, P̄2 = P1):

P1 \ P̄1   p21   p22          P2 \ P̄2   p11   p12
p11         1    −1            p21        −1     1
p12        −1     1            p22         1    −1

In this case the corresponding system is:

p11 + p12 = 1,  p21 + p22 = 1,
F1 = (p21 − p22) p11 + (−p21 + p22) p12,
F2 = (−p11 + p12) p21 + (p11 − p12) p22.

If P1, P2 is the solution of the problem, then there isn't any vector with probabilities P'1 = [p'11, p'12] for which F'1 = F1(P'1, P2) > F1 = F1(P1, P2), where F'1 = (p21 − p22) p'11 + (−p21 + p22) p'12, and there isn't any vector with probabilities P'2 = [p'21, p'22] for which F'2 = F2(P1, P'2) > F2 = F2(P1, P2), where F'2 = (−p11 + p12) p'21 + (p11 − p12) p'22.

For example, P1 = P2 = [1/2, 1/2] is a solution of the game, and we have F1 = F2 = 0.

1.14 The establishing of the equilibrium points of a non-cooperative game

The solution of the game from Example 1.20 has been obtained by a private procedure. But we don't have a general method to solve every non-cooperative game and to find every solution of the game.
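Definition 1.11 can at least be checked numerically: because Fi is linear in Pi for fixed P̄i, it is enough that no pure strategy improves on Pi. A sketch for a two-player game with two strategies each:

```python
def payoff(P1, H, P2):
    return sum(P1[i] * H[i][j] * P2[j]
               for i in range(len(P1)) for j in range(len(P2)))

def is_equilibrium(P1, P2, H1, H2, eps=1e-9):
    """No pure deviation of either player may increase his payoff."""
    f1, f2 = payoff(P1, H1, P2), payoff(P2, H2, P1)
    best1 = max(payoff(e, H1, P2) for e in ([1, 0], [0, 1]))
    best2 = max(payoff(e, H2, P1) for e in ([1, 0], [0, 1]))
    return best1 <= f1 + eps and best2 <= f2 + eps

H1 = [[1, -1], [-1, 1]]
H2 = [[-1, 1], [1, -1]]   # rows: strategies of player 2
print(is_equilibrium([0.5, 0.5], [0.5, 0.5], H1, H2))  # True
print(is_equilibrium([1, 0], [1, 0], H1, H2))          # False
```

For the game of Example 1.19 this confirms that P1 = P2 = (1/2, 1/2) is an equilibrium point while the pure situation (1, 1) is not.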

From the previous presentation we don't yet obtain that the set of solutions is effective and nonempty. Here the following theorem of Nash is important:

Theorem 1.3. Every non-cooperative game has a nonempty set of solutions.

We don't present here the proof of this theorem.

Remark 1.32. Because the payoff functions Fi = Pi Hi P̄i^t are multilinear in the probabilities p_{i si}, 0 ≤ p_{i si} ≤ 1, si = 1, ..., mi, i = 1, ..., n, and, as we shall see, the determination of the equilibrium points of a non-cooperative game consists in solving a system of multilinear equations, we can call this theory "the theory of multilinear games".

We have

Theorem 1.4. For fixed P̄i, the maximum value of the payoff function Fi is obtained (among other values) for a pure strategy si for which

Hi(si) P̄i^t = max_{s'i} Hi(s'i) P̄i^t.

Here s'i is an arbitrary strategy, and we denote by Hi(si) P̄i^t, respectively Hi(s'i) P̄i^t, the element with row index si, respectively s'i, of the matrix Hi P̄i^t.

Because of Definition 1.11, which shows us that Fi is the maximum value of the expression Pi Hi P̄i^t, it results that, for a solution, we can write

Pi Hi P̄i^t = Pi Θi^t,  that is  Pi (Hi P̄i^t − Θi^t) = 0,  with  Hi P̄i^t − Θi^t ≤ 0,  Pi ≥ 0,

where Θi = [Fi, ..., Fi] is a row matrix with mi components all equal to Fi. Indeed, if the j-component of the vector Hi P̄i^t − Θi^t were positive, then, by multiplying it on the left with the vector P'i having all components equal to 0 except the j-component, which is equal to 1, it would result that F'i = P'i Hi P̄i^t > Fi. But this is in opposition with Definition 1.11.

By introducing a row matrix Ti with independent non-negative variables t_{i si}, si = 1, ..., mi, we can write a matriceal equation that is equivalent with the inequation Hi P̄i^t − Θi^t ≤ 0, namely Hi P̄i^t − Θi^t + Ti^t = 0, Ti ≥ 0. So, the determination of the equilibrium points of a non-cooperative game consists in solving, in non-negative numbers, the system of multilinear equations:

Pi Ji^t = 1,  Hi P̄i^t − Θi^t + Ti^t = 0,  Pi Ti^t = 0,  i = 1, ..., n.

Remark 1.33. We consider that the unknown real values Fi have been written as a difference between two non-negative values, Fi = F'i − F''i, in order to have all the unknowns as non-negative numbers.

Remark 1.34. To solve the problem formulated by Theorem 1.4 we can apply a method for solving the systems of equations and inequations of an arbitrary degree in non-negative numbers. Such a method can be the complete elimination method.

Example 1.21. The problem given in Example 1.20 is equivalent with the problem of solving, in non-negative numbers P1 ≥ 0, P2 ≥ 0, T1 ≥ 0, T2 ≥ 0, of the system of multilinear equations:

P1 J1^t = 1,  P2 J2^t = 1,
H1 P2^t − Θ1^t + T1^t = 0,  H2 P1^t − Θ2^t + T2^t = 0,
P1 T1^t = 0,  P2 T2^t = 0.

Here P1 = [p11, p12], P2 = [p21, p22], J1 = J2 = [1, 1], T1 = [t11, t12], T2 = [t21, t22],

H1 = [  1  −1 ]      H2 = [ −1   1 ]
     [ −1   1 ]           [  1  −1 ]

Θ1 = [F1, F1], Θ2 = [F2, F2], F1 = F'1 − F''1, F2 = F'2 − F''2, with F'1, F''1, F'2, F''2 ≥ 0, or, in the developed form:

p11 + p12 = 1,
p21 + p22 = 1,
p21 − p22 − F'1 + F''1 + t11 = 0,
−p21 + p22 − F'1 + F''1 + t12 = 0,
−p11 + p12 − F'2 + F''2 + t21 = 0,
p11 − p12 − F'2 + F''2 + t22 = 0,
p11 t11 = 0,  p12 t12 = 0,  p21 t21 = 0,  p22 t22 = 0.

Because of the non-negativity of the unknowns we have the following equivalences:

p11 t11 + p12 t12 = 0 ⟹ p11 t11 = 0, p12 t12 = 0,
p21 t21 + p22 t22 = 0 ⟹ p21 t21 = 0, p22 t22 = 0,

and so each equation Pi Ti^t = 0 can be replaced by mi equations of the form p_{i si} t_{i si} = 0. By solving this system with the complete elimination method we obtain the same solution as that obtained by the private procedure given in Example 1.20.

Remark 1.35. Such a game lets us solve it easily, because P̄1 = P2 and P̄2 = P1.

1.15 The establishing of the equilibrium points of a bi-matrix game

Definition 1.12. The non-cooperative game for two players is called bi-matrix game.

The problem given by Theorem 1.4 can be decomposed, for a bi-matrix game, in three problems that are independent. The subproblem (41) consists in solving in non-negative numbers P2 of a system with linear equations:

P2 J2^t = 1,
H1 P2^t − Θ1^t + T1^t = 0.   (41)

The subproblem (42) consists in solving in non-negative numbers P1 of the system of equations:

P1 J1^t = 1,
H2 P1^t − Θ2^t + T2^t = 0.   (42)

The third subproblem is given by the system of equations:

P1 T1^t = 0,
P2 T2^t = 0.   (43)

Both subproblems (41) and (42) can be solved by the simplex method; the row corresponding to the objective function (that will be minimized) is equal to 0. If, for an arbitrary index s1, 1 ≤ s1 ≤ m1, the unknown t_{1 s1} is a component of a basic solution of subproblem (41) and t_{1 s1} ≠ 0 (t_{1 s1} = 0 only in a degenerate case), then p_{1 s1} = 0. Similarly, if t_{2 s2} ≠ 0 then it results p_{2 s2} = 0. So, in all cases t_{1 s1} ≠ 0 we have p_{1 s1} = 0: this is the property that lets us find the solutions which verify the system (43). The general solution can be obtained by linear convex combination of all basic solutions P1 corresponding to a fixed P2, and by linear convex combination of all basic solutions P2 which correspond to a P1. Because the general solution is a linear convex combination of the basic solutions, it results that we must select those basic solutions (P1, P2) for which the subproblem (43) is verified too.

Example 1.22. The problem given in Example 1.21 refers to a bi-matrix game. The three systems are the following:

(41'):
p21 + p22 = 1,
p21 − p22 − F'1 + F''1 + t11 = 0,
−p21 + p22 − F'1 + F''1 + t12 = 0;

(42'):
p11 + p12 = 1,
−p11 + p12 − F'2 + F''2 + t21 = 0,
p11 − p12 − F'2 + F''2 + t22 = 0;

(43'):
p11 t11 = 0,  p12 t12 = 0,  p21 t21 = 0,  p22 t22 = 0.

To subproblem (41') it corresponds the simplex matrix given below. The row corresponding to the objective function (that will be minimized) is equal to 0:

S1 = [  1   1   0  0  0  0 | 1 ]
     [  1  −1  −1  1  1  0 | 0 ]
     [ −1   1  −1  1  0  1 | 0 ]
     [  0   0   0  0  0  0 | 0 ]

With the uniformized notation x1 = p21, x2 = p22, x3 = F'1, x4 = F''1, x5 = t11, x6 = t12 (here we use the symbol X to have a uniformized notation of the unknowns; such uniformizations will be used in what follows every time they are useful to us), the subproblem (41') has the basic solutions

X11 = [1/2, 1/2, 0, 0, 0, 0],  X12 = [1, 0, 1, 0, 0, 2],  X13 = [0, 1, 1, 0, 2, 0].

To subproblem (42') it corresponds the following simplex matrix:

S2 = [  1   1   0  0  0  0 | 1 ]
     [ −1   1  −1  1  1  0 | 0 ]
     [  1  −1  −1  1  0  1 | 0 ]
     [  0   0   0  0  0  0 | 0 ]

and it has the basic solutions

X21 = [1/2, 1/2, 0, 0, 0, 0],  X22 = [1, 0, 1, 0, 2, 0],  X23 = [0, 1, 1, 0, 0, 2].

We denote by X'ij, i = 1, 2, j = 1, 2, 3, the vectors obtained by omission of the components F'i, F''i. We obtain

X'11 = [1/2, 1/2, 0, 0],  X'12 = [1, 0, 0, 2],  X'13 = [0, 1, 2, 0],
X'21 = [1/2, 1/2, 0, 0],  X'22 = [1, 0, 2, 0],  X'23 = [0, 1, 0, 2].

To establish the pairs of solutions (X'1i, X'2j) that are the solutions of the bi-matrix game, the pairs must satisfy the condition t_{1 s1} ≠ 0 ⟹ p_{1 s1} = 0 and t_{2 s2} ≠ 0 ⟹ p_{2 s2} = 0. We observe that there exists only one solution, P1 = P2 = [1/2, 1/2], obtained for t11 = t12 = t21 = t22 = 0 and F'1 = F''1 = F'2 = F''2 = 0. So F1 = F2 = 0.

1.16 The establishing of equilibrium points of an antagonistic game

Definition 1.13. We call antagonistic game a bi-matrix game with the bidimensional matrices H1 and H2 for which the following relationship is satisfied:

H1 + H2^t = 0,

where 0 is the zero matrix.
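Returning to the selection step of Example 1.22: it can be mimicked in a few lines by pairing every basic solution of (41') with every basic solution of (42') and keeping the pairs that satisfy (43'). A sketch, with the X' vectors hard-coded (components ordered p, p, t, t):

```python
# Basic solutions of (41'): components (p21, p22, t11, t12).
X1 = [(0.5, 0.5, 0, 0), (1, 0, 0, 2), (0, 1, 2, 0)]
# Basic solutions of (42'): components (p11, p12, t21, t22).
X2 = [(0.5, 0.5, 0, 0), (1, 0, 2, 0), (0, 1, 0, 2)]

solutions = []
for p21, p22, t11, t12 in X1:
    for p11, p12, t21, t22 in X2:
        # Condition (43'): p11*t11 = p12*t12 = p21*t21 = p22*t22 = 0.
        if p11 * t11 == p12 * t12 == p21 * t21 == p22 * t22 == 0:
            solutions.append(((p11, p12), (p21, p22)))
print(solutions)   # only ((0.5, 0.5), (0.5, 0.5)) survives
```

Of the nine candidate pairs, only the pairing of X'11 with X'21 passes the complementarity test, reproducing the unique solution P1 = P2 = (1/2, 1/2).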

Because every equality is equivalent with two inequalities, the systems (41), (42) and (43) from Section 1.15 can be written, for an antagonistic game, as:

H1 P2^t − Θ1^t ≤ 0,
J2 P2^t ≤ 1,   (44)
J2 P2^t ≥ 1;

P1 H1 − Θ2 ≥ 0,
P1 J1^t ≤ 1,   (45)
P1 J1^t ≥ 1;

P1 (H1 P2^t − Θ1^t) = 0,
(P1 H1 − Θ2) P2^t = 0,   (46)

where Θ1 contains as elements one and the same value F1, and Θ2 contains one and the same value F2 (here (45) is obtained from (42) by replacing H2 = −H1^t and eliminating the slack variables). From subproblem (46) it results that P1 Θ1^t = Θ2 P2^t, hence F2 = F1. So, we can use the simplified notation F = F1 = F2 and we can consider that we determine F = MIN by using the system (44) and F = MAX by using the system (45). We can consider these values as a minimax value (the minimum of some maximum values), obtained by minimization of the function F1 (otherwise the maximum is equal to infinity), respectively a maximin value (the maximum of some minimum values), obtained by maximization of the function F2 (otherwise the minimum is equal to infinity).

Adding to the system (44) the objective function F1 = F'1 − F''1 and to the system (45) the objective function F2 = F'2 − F''2, it results two linear programming problems that are dual problems. Because of the symmetry of the systems (44) and (45), and setting F = F1 = F2, it results the following theorem relative to antagonistic games (the von Neumann-Morgenstern theorem):

Theorem 1.5. The minimum with respect to P2 of the maximum (minimax) of the function F(P1, P2) with respect to P1, for fixed P2, is equal to the maximum with respect to P1 of the minimum (maximin) of the function F(P1, P2) with respect to P2, for fixed P1, namely

min_{P2} max_{P1} F(P1, P2) = max_{P1} min_{P2} F(P1, P2).   (47)

Remark 1.36. The antagonistic game may have, as a bi-matrix game, other solutions, which result by solving the systems (41), (42) and (43). Certainly, at least a particular solution of the antagonistic game can be obtained by solving only one of the systems (44), (45). In the case of a bi-matrix game, as a generalization of the condition which appears in the antagonistic game, we can formulate the question: for which solution (P1, P2) does the function F1 + F2 reach the minimum value?
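Theorem 1.5 can be illustrated numerically on the antagonistic game of Example 1.19: sample both mixed strategies on a grid and compare the two sides of (47). A rough sketch:

```python
H1 = [[1, -1], [-1, 1]]   # payoff matrix of Example 1.19

def F(p, q):
    """Expected payoff of player 1 for P1 = (p, 1-p), P2 = (q, 1-q)."""
    return (p * q * H1[0][0] + p * (1 - q) * H1[0][1]
            + (1 - p) * q * H1[1][0] + (1 - p) * (1 - q) * H1[1][1])

grid = [k / 100 for k in range(101)]
minimax = min(max(F(p, q) for p in grid) for q in grid)
maximin = max(min(F(p, q) for q in grid) for p in grid)
print(minimax, maximin)   # both are 0, the value of the game
```

Here F(p, q) = (2p − 1)(2q − 1), so the inner maximum is |2q − 1| and the inner minimum is −|2p − 1|; both optima are reached at 1/2, giving minimax = maximin = 0.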

This formulation leads us to the problem of cooperation: when there is cooperation and when there isn't? For the antagonistic game we have F1 + F2 = 0, for all values of the probabilities.

Example 1.23. The bi-matrix game given in Example 1.22 is an antagonistic game. Because it has only one solution, this solution can be obtained if we minimize the function F = F'1 − F''1. For this, supposing that we use the system (44), we replace the row that contains only zeros in the simplex matrix S1 by the row corresponding to the function to minimize, [0, 0, 1, −1, 0, 0 | 0]. Thus we obtain the simplex matrix

[  1   1   0   0  0  0 | 1 ]
[  1  −1  −1   1  1  0 | 0 ]
[ −1   1  −1   1  0  1 | 0 ]
[  0   0   1  −1  0  0 | 0 ]

that, by reduction, leads us to the same solution P1 = P2 = [1/2, 1/2], F'1 = F''1 = F'2 = F''2 = 0. So F_MIN = 0.

1.17 Applications in economics

In this sequel we give some applications of games in economics.

1.17.1 Cournot model of duopoly [21]

We consider a very simple version of Cournot's model. Let q1 and q2 denote the quantities of a homogeneous product produced by firms 1 and 2, respectively. Let P(Q) = a − Q be the market-clearing price when the aggregate quantity on the market is Q = q1 + q2. Hence we have

P(Q) = a − Q,  for Q < a,
P(Q) = 0,      for Q ≥ a.

Assume that the total cost to firm i of producing quantity qi is Ci(qi) = c qi. That is, there are no fixed costs and the marginal cost is constant at c, where we assume c < a. Suppose that the firms choose their quantities simultaneously.

We first translate the problem into a "continuous" game. For this, we specify: the players in the game (the two firms); the strategies available to each player (the different quantities it might produce); the payoff received by each player for each combination of strategies that could be chosen by the players (the firm's payoff is its profit). We will assume that output is continuously divisible and that negative outputs are not feasible. Here the sets of strategies are real intervals: each firm's strategy space is Si = [0, ∞), the nonnegative real numbers, in which case a typical strategy si is a quantity choice qi ≥ 0. Because P(Q) = 0 for Q ≥ a, neither firm will produce a quantity qi > a.

Let q1 and q2 denote the quantities of a homogeneous product produced by firms 1 and 2, respectively, and let P(Q) = a - Q be the market-clearing price when the aggregate quantity on the market is Q = q1 + q2 (more precisely, P(Q) = a - Q for Q < a and P(Q) = 0 for Q >= a). The cost to firm i of producing quantity qi is Ci(qi) = c qi, where c < a: there are no fixed costs and the marginal cost is constant at c. The firms choose their quantities simultaneously.

Each firm's strategy space is Si = [0, infinity), a typical strategy si being a quantity choice qi >= 0. (A quantity qi > a could be excluded, since the price is then zero, but allowing it does no harm.) The payoff to firm i, its profit, is a function of the strategies chosen by it and by the other firm:

pi_i(qi, qj) = qi [P(qi + qj) - c] = qi [a - (qi + qj) - c].

As we know, an equilibrium point (Nash equilibrium) is a pair (q1*, q2*) such that, for each firm i, qi* solves the optimization problem

max over 0 <= qi < infinity of pi_i(qi, qj*) = qi [a - (qi + qj*) - c].

Assuming qj* < a - c (as will be shown to be true), the first-order condition for firm i's optimization problem is both necessary and sufficient, and gives

qi = (a - qj* - c)/2.   (48)

Thus, if the quantity pair (q1*, q2*) is to be a Nash equilibrium, the firms' quantity choices must satisfy q1* = (a - q2* - c)/2 and q2* = (a - q1* - c)/2. Solving this pair of equations yields q1* = q2* = (a - c)/3, which is indeed less than a - c, as assumed.

The intuition behind this equilibrium is simple. Each firm would of course like to be a monopolist in this market, in which case it would choose qi to maximize pi_i(qi, 0) = qi (a - qi - c): it would produce the monopoly quantity qm = (a - c)/2 and earn the monopoly profit pi_i(qm, 0) = (a - c)^2 / 4. Given that there are two firms, aggregate profits for the duopoly would be maximized by setting the aggregate quantity q1 + q2 equal to the monopoly quantity qm, as would occur if qi = qm/2 for each i. The problem with this arrangement is that each firm has an incentive to deviate: because the monopoly quantity is low, the associated price P(qm) is high, and at this price each firm would like to increase its quantity, in spite of the fact that such an increase in production drives down the market-clearing price. (To see this formally, use (48) to check that qm/2 is not firm 2's best response to the choice of qm/2 by firm 1.) In the Cournot equilibrium, in contrast, the aggregate quantity is higher, so the associated price is lower, so the temptation to increase output is reduced: reduced by just enough that each firm is just deterred from increasing its output by the realization that the market-clearing price will fall.

Remark 1.37. Rather than solving for the Nash equilibrium in the Cournot game algebraically, one could instead proceed graphically, using the firms' best-response functions R2(q1) = (a - q1 - c)/2 (firm 2's best response) and R1(q2) = (a - q2 - c)/2 (firm 1's best response). A third way to solve for this Nash equilibrium is to apply the process of iterated elimination of strictly dominated strategies (see [7]).
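The convergence of the best-response iteration (48) to the Cournot equilibrium can be checked numerically. A minimal sketch, with illustrative parameter values a = 10, c = 1 (the function names are ours, not the text's):

```python
# Hedged sketch: iterate the best-response map (48), q_i = (a - q_j - c)/2,
# which is a contraction, so it converges to q1* = q2* = (a - c)/3.
def best_response(q_other, a, c):
    return max(0.0, (a - q_other - c) / 2)

def cournot_iterate(a=10.0, c=1.0, steps=200):
    q1 = q2 = 0.0
    for _ in range(steps):
        q1, q2 = best_response(q2, a, c), best_response(q1, a, c)
    return q1, q2

q1, q2 = cournot_iterate()
print(q1, q2)   # both quantities approach (a - c)/3 = 3
```

With these parameters the monopoly quantity is qm = 4.5, strictly above the equilibrium aggregate quantity divided per firm, illustrating the deviation incentive discussed above.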

1.17.2 Bertrand model of duopoly [21]

Bertrand's model is based on the suggestion that firms actually choose prices, rather than quantities as in Cournot's model. We consider the case of differentiated products. If firms 1 and 2 choose the prices p1 and p2, respectively, the quantity that consumers demand from firm i is

qi(pi, pj) = a - pi + b pj,

where b > 0 reflects the extent to which firm i's product is a substitute for firm j's product. (This is an unrealistic demand function, because demand for firm i's product is positive even when firm i charges an arbitrarily high price, provided firm j also charges a high enough price.) We assume that there are no fixed costs of production, that marginal costs are constant at c, where c < a, and that the firms act simultaneously (choose their prices).

We translate the economic problem into a non-cooperative game. There are again two players, but this time the strategies available to each firm are the different prices it might charge, rather than the different quantities it might produce. We will assume that negative prices are not feasible but that any non-negative price can be charged: there is no restriction, for instance, to prices denominated in pennies. Thus each firm's strategy space can again be represented as Si = [0, infinity), a typical strategy si now being a price choice pi >= 0.

We will again assume that the payoff function for each firm is just its profit. The profit to firm i when it chooses the price pi and its rival chooses the price pj is

pi_i(pi, pj) = qi(pi, pj)(pi - c) = (a - pi + b pj)(pi - c).

Thus, the price pair (p1*, p2*) is a Nash equilibrium if, for each firm i, pi* solves the problem

max over 0 <= pi < infinity of pi_i(pi, pj*) = (a - pi + b pj*)(pi - c).

The solution to firm i's optimization problem is

pi = (a + b pj* + c)/2.

Therefore, if the price pair (p1*, p2*) is to be a Nash equilibrium, the firms' price choices must satisfy p1* = (a + b p2* + c)/2 and p2* = (a + b p1* + c)/2. Solving this pair of equations yields

p1* = p2* = (a + c)/(2 - b).

Remark. Bertrand's model is a different game from Cournot's model: the strategy spaces are different and the payoff functions are different, but the equilibrium concept used is the Nash equilibrium defined in the previous sections.
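The Bertrand equilibrium price can likewise be confirmed by iterating the best-response map. A minimal sketch with illustrative values a = 10, b = 0.5, c = 1 (note the implicit assumption b < 2, under which the map is a contraction and the formula (a + c)/(2 - b) is well defined):

```python
# Hedged sketch: iterate p_i = (a + b*p_j + c)/2; for b < 2 this converges
# to the equilibrium price (a + c)/(2 - b).
def bertrand_fixed_point(a=10.0, b=0.5, c=1.0, steps=200):
    p1 = p2 = 0.0
    for _ in range(steps):
        p1, p2 = (a + b * p2 + c) / 2, (a + b * p1 + c) / 2
    return p1, p2

p1, p2 = bertrand_fixed_point()
print(p1, p2)   # both prices approach (a + c)/(2 - b) = 11/1.5
```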

1.17.3 Final-offer arbitration [6]

Many public-sector workers are forbidden to strike; instead, wage disputes are settled by binding arbitration. Many other disputes, including medical malpractice cases and claims by shareholders against their stockbrokers, also involve arbitration. The two major forms of arbitration are conventional and final-offer arbitration. In conventional arbitration, the arbitrator is free to impose any wage as the settlement. In final-offer arbitration, the two sides make wage offers and then the arbitrator picks one of the offers as the settlement.

We now derive the Nash equilibrium wage offers in a model of final-offer arbitration. Suppose the parties to the dispute are a firm and a union and the dispute concerns wages. First, the firm and the union simultaneously make offers, denoted by wf and wu, respectively. Second, the arbitrator chooses one of the two offers as the settlement. Assume that the arbitrator has an ideal settlement she would like to impose, denoted by x, and that, after observing the parties' offers, she simply chooses the offer that is closer to x: provided that wf < wu, the arbitrator chooses wf if x < (wf + wu)/2, chooses wu if x > (wf + wu)/2, and chooses wf or wu if x = (wf + wu)/2.

The arbitrator knows x, but the parties do not. The parties believe that x is randomly distributed according to a probability distribution denoted by F, with associated probability density function denoted by f. Thus, the parties believe that the probabilities P{wf chosen} and P{wu chosen} depend on the arbitrator's behavior, and can be expressed as

P{wf chosen} = F((wf + wu)/2)  and  P{wu chosen} = 1 - F((wf + wu)/2).

Thus the expected wage settlement is

wf P{wf chosen} + wu P{wu chosen} = wf F((wf + wu)/2) + wu [1 - F((wf + wu)/2)].

We assume that the firm wants to minimize the expected wage settlement imposed by the arbitrator and the union wants to maximize it. If the pair of offers (wf*, wu*) is to be a Nash equilibrium of the game between the firm and the union, wf* must solve the optimization problem

min over wf of wf F((wf + wu*)/2) + wu* [1 - F((wf + wu*)/2)]

and wu* must solve the optimization problem

max over wu of wf* F((wf* + wu)/2) + wu [1 - F((wf* + wu)/2)].

Thus, the wage-offer pair (wf*, wu*) must solve the first-order conditions for these optimization problems:

(wu* - wf*) (1/2) f((wf* + wu*)/2) = F((wf* + wu*)/2)   (49)

and

(wu* - wf*) (1/2) f((wf* + wu*)/2) = 1 - F((wf* + wu*)/2).

It results that

F((wf* + wu*)/2) = 1/2,   (50)

that is, the average of the offers must equal the median of the arbitrator's preferred settlement. Substituting (50) into either of the first-order conditions then yields

wu* - wf* = 1 / f((wf* + wu*)/2).   (51)

Remark 1.38. Suppose that the arbitrator's preferred settlement is normally distributed with mean m and variance sigma^2, in which case the density function is

f(x) = (1/sqrt(2 pi sigma^2)) e^{-(x - m)^2 / (2 sigma^2)},  m in R, sigma > 0.

We know that, in the case of the normal distribution, the median of the distribution equals the mean m of the distribution. Thus, (50) and (51) become

wf* + wu* = 2m  and  wu* - wf* = 1/f(m) = sqrt(2 pi sigma^2),

and the Nash equilibrium offers are

wu* = m + sqrt(pi sigma^2 / 2),  wf* = m - sqrt(pi sigma^2 / 2).
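A hedged numerical check of Remark 1.38, with illustrative values m = 50, sigma = 5 (the grid search is only a verification device, not part of the model): the equilibrium offers straddle the mean, the midpoint of the offers has F = 1/2, and the firm's offer indeed minimizes the expected settlement against the union's equilibrium offer.

```python
import math

def F(x, m, s):   # normal cumulative distribution function
    return 0.5 * (1 + math.erf((x - m) / (s * math.sqrt(2))))

def expected_settlement(wf, wu, m, s):
    mid = (wf + wu) / 2
    return wf * F(mid, m, s) + wu * (1 - F(mid, m, s))

m, s = 50.0, 5.0
gap = math.sqrt(math.pi * s**2 / 2)
wf, wu = m - gap, m + gap        # equilibrium offers from (50)-(51)

# deviation check: wf should (approximately) minimize over a fine grid
grid = [m - 15 + 0.01 * k for k in range(3001)]
best_wf = min(grid, key=lambda w: expected_settlement(w, wu, m, s))
print(abs(best_wf - wf) < 0.02)   # True
```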

1.17.4 The problem of the commons [9]

Consider the n farmers in a village. Each summer, all the farmers graze their goats on the village green. Denote the number of goats the i-th farmer owns by gi and the total number of goats in the village by G = g1 + g2 + ... + gn. The cost of buying and caring for a goat is c, independent of how many goats a farmer owns. The value to a farmer of grazing a goat on the green when a total of G goats are grazing is v(G) per goat. Since a goat needs at least a certain amount of grass in order to survive, there is a maximum number of goats that can be grazed on the green, Gmax: v(G) > 0 for G < Gmax but v(G) = 0 for G >= Gmax. Also, since the first few goats have plenty of room to graze, adding one more does little harm to those already grazing, but when so many goats are grazing that they are all just barely surviving (that is, G is just below Gmax), then adding one more dramatically harms the rest. Formally: for G < Gmax, v'(G) < 0 and v''(G) < 0.

During the spring, the farmers simultaneously choose how many goats to own. Assume goats are continuously divisible. A strategy for farmer i is the choice of a number of goats to graze on the village green, gi. Assuming that the strategy space is [0, infinity) covers all the choices that could possibly be of interest to the farmer; [0, Gmax) would also suffice. The payoff to farmer i from grazing gi goats, when the numbers of goats grazed by the other farmers are (g1, ..., g_{i-1}, g_{i+1}, ..., gn), is

gi v(g1 + ... + g_{i-1} + gi + g_{i+1} + ... + gn) - c gi.   (52)

Thus, if (g1*, ..., gn*) is to be a Nash equilibrium then, for each i, gi* must maximize (52) given that the other farmers choose (g1*, ..., g_{i-1}*, g_{i+1}*, ..., gn*). The first-order condition for this optimization problem is

v(gi + g1* + ... + g_{i-1}* + g_{i+1}* + ... + gn*) + gi v'(gi + g1* + ... + g_{i-1}* + g_{i+1}* + ... + gn*) - c = 0.   (53)

Substituting gi* into (53), summing over all n farmers' first-order conditions, and then dividing by n, yields

v(G*) + (1/n) G* v'(G*) - c = 0,   (54)

where G* = g1* + ... + gn*.

The first-order condition (53) reflects the incentives faced by a farmer who is already grazing gi goats but is considering adding one more (or, strictly speaking, a tiny fraction of one more). The value of the additional goat is v(gi + g1* + ... + g_{i-1}* + g_{i+1}* + ... + gn*) and its cost is c. The harm to the farmer's existing goats is v'(gi + g1* + ... + g_{i-1}* + g_{i+1}* + ... + gn*) per goat, or gi v'(gi + g1* + ... + g_{i-1}* + g_{i+1}* + ... + gn*) in total. The common resource is overutilized because each farmer considers only his or her own incentives,

not the effect of his or her actions on the other farmers.

The social optimum, denoted by G**, solves the problem

max over 0 <= G < infinity of G v(G) - G c,

the first-order condition for which is

v(G**) + G** v'(G**) - c = 0.   (55)

Comparing (54) to (55), we have G* > G**: too many goats are grazed in the Nash equilibrium, compared to the social optimum; hence the presence of G* v'(G*)/n in (54), but of G** v'(G**) in (55).

Remark 1.39. The same comparison G* > G** holds in the particular case c = 0.

1.18 Exercises and problems solved

1. Let there be a zero-sum two-person game with the payoff matrix

H1 = A = [9 3 1; 6 5 8; 3 4 10; 6 5 6].

Which is the payoff matrix of player 2? What strategies have players 1 and 2, respectively?

Solution. The payoff matrix of player 2 is

H2 = -H1^t = -[9 6 3 6; 3 5 4 5; 1 8 10 6],

because H1 + H2^t = O_{4,3}. Player 1 has four strategies, because the matrix A has four rows; player 2 has three strategies, because in the matrix A there are three columns.

2. Two players write independently one of the numbers 1, 2 or 3. If they have written the same number, then player 1 pays player 2 the equivalent of this number in monetary units. In the contrary case, player 2 pays player 1 the number of monetary units that player 1 has chosen. Which is the payoff matrix of this game?

Solution. We easily get that the payoff matrix of player 1 is

A = [-1 1 1; 2 -2 2; 3 3 -3].

3. Which game in the previous problems has a saddle point?

Solution. For the first game we have

v1 = max_i min_j aij = max(1, 5, 3, 5) = 5,  v2 = min_j max_i aij = min(9, 5, 10) = 5.

Since v1 = v2 = 5, the first game has a saddle point. It is easy to verify that (2,2) and (4,2) are both saddle points, because a22 = a42 = v = 5. Thus i = 2 and i = 4 are optimal strategies of player 1, and j = 2 is the optimal strategy of player 2.

For the second game we have

v1 = max_i min_j aij = max(-1, -2, -3) = -1,  v2 = min_j max_i aij = min(3, 3, 2) = 2.

Thus the second game has no saddle point in the sense of pure strategies, because v1 = -1 < 2 = v2.

4. Which are the expected payoffs of player 1 in the previous games?

Solution. For the first game, let X = (x1, x2, x3, x4) and Y = (y1, y2, y3) be the mixed strategies of players 1 and 2, respectively. Then the expected payoff of player 1 is

sum over i = 1..4, j = 1..3 of aij xi yj = 9x1y1 + 3x1y2 + x1y3 + 6x2y1 + 5x2y2 + 8x2y3 + 3x3y1 + 4x3y2 + 10x3y3 + 6x4y1 + 5x4y2 + 6x4y3.

For the second game, let X = (x1, x2, x3) and Y = (y1, y2, y3) be the mixed strategies of players 1 and 2, respectively. Then the expected payoff of player 1 is

sum over i, j = 1..3 of aij xi yj = -x1y1 + x1y2 + x1y3 + 2x2y1 - 2x2y2 + 2x2y3 + 3x3y1 + 3x3y2 - 3x3y3.

5. Using the iterated elimination of strictly dominated strategies, solve the matrix game with the payoff matrix

A = [0 -1 -1; 1 0 -1; 1 1 0].

Solution. In the matrix A, each element of the first row is smaller than the corresponding element of the third row; therefore, player 1 will never use his first strategy, and the first row will be eliminated. We obtain the payoff matrix

A' = [1 0 -1; 1 1 0].

Now, in the matrix A' each element of the first column is greater than the corresponding element of the third column. Consequently, the first strategy of player 2 will never be included in any of his optimal mixed strategies; therefore, the first column of the matrix A' can be deleted, to obtain

A'' = [0 -1; 1 0].

Similarly, the first row of A'' is dominated by the second, and then the first column of the remainder by the second, so that A''' = [1 0] and A'''' = [0]. Thus, the optimal (pure) strategies are X = (0, 0, 1), Y = (0, 0, 1), and the value of the game is v = 0, because a33 = v = 0.

6. Find the optimal strategies of the following matrix games with the payoff matrices:

a) A = [2 0; 1 3],  b) A = [1 2; 2 0],  c) A = [1 -1; -1 1].

Solution. These are 2 x 2 matrix games without saddle points. Let X = (x1, x2) and Y = (y1, y2) be the mixed strategies of players 1 and 2, respectively.

a) We have x1 + x2 = 1 and, equalizing player 1's expected payoffs against the two columns, 2x1 + x2 = 3x2, hence x1 = x2 and X = (1/2, 1/2). Similarly, y1 + y2 = 1 and 2y1 = y1 + 3y2, hence y1 = 3y2, so Y = (3/4, 1/4), and v = 2 (3/4) + 0 (1/4) = 3/2.

b) We have x1 + x2 = 1 and x1 + 2x2 = 2x1, hence x1 = 2x2 and X = (2/3, 1/3); similarly, y1 + y2 = 1 and y1 + 2y2 = 2y1, hence y1 = 2y2, so Y = (2/3, 1/3), and v = (2/3) 1 + (1/3) 2 = 4/3.

c) We obtain x1 = x2 and y1 = y2, hence X = Y = (1/2, 1/2) and v = 0.

7. Solve problem 6 with the procedure described in Remark 1.24 (the Williams method).

Solution. For a 2 x 2 matrix game A = [a b; c d] without a saddle point, we can use the mixed strategies X = (p, 1 - p), Y = (q, 1 - q), where

p = (d - c)/(a + d - b - c),  q = (d - b)/(a + d - b - c),  v = (ad - bc)/(a + d - b - c).

a) We obtain p = (3 - 1)/(2 + 3 - 0 - 1) = 1/2, q = (3 - 0)/4 = 3/4, v = (2*3 - 0*1)/4 = 3/2; hence X = (1/2, 1/2), Y = (3/4, 1/4), v = 3/2.

b) We have p = (0 - 2)/(1 + 0 - 2 - 2) = 2/3, q = (0 - 2)/(-3) = 2/3, v = (1*0 - 2*2)/(-3) = 4/3; hence X = Y = (2/3, 1/3), v = 4/3.

c) We obtain p = (1 - (-1))/(1 + 1 - (-1) - (-1)) = 1/2, q = 1/2, v = (1*1 - (-1)(-1))/4 = 0; hence X = Y = (1/2, 1/2), v = 0.

8. Solve problem 6 with the graphical method described for 2 x n and m x 2 matrix games.

Solution. Let X = (x, 1 - x) and Y = (y, 1 - y) be the mixed strategies of players 1 and 2, respectively; in each case the payoff lines are represented in an illustrative figure.

a) The payoff matrix is A = [2 0; 1 3]. Against the two columns, player 1's expected payoffs are the lines 2x + (1 - x) = x + 1 and 3(1 - x) = 3 - 3x. The intersection point has the abscissa x = 1/2, hence X = (1/2, 1/2); similarly Y = (3/4, 1/4), and v = 3/2.

Figure 1.7: The problem 8. a)

b) The payoff matrix is A = [1 2; 2 0], and the lines are x + 2(1 - x) = 2 - x and 2x. The intersection point has the abscissa x = 2/3, hence X = (2/3, 1/3), Y = (2/3, 1/3), and v = 4/3.

Figure 1.8: The problem 8. b)

c) The payoff matrix is A = [1 -1; -1 1], and the lines are

x - (1 - x) = 2x - 1 and -x + (1 - x) = 1 - 2x. The intersection point has the abscissa x = 1/2, hence X = Y = (1/2, 1/2), and v = 0.

Figure 1.9: The problem 8. c)

9. Using the graphical method, solve the following matrix games with the payoff matrices:

a) A = [2 9 6 3; 8 3 7 5],  b) A = [5 6; 9 4; 3 7; 1 8].

Solution. a) Let X = (x, 1 - x) be the mixed strategy of player 1. The lines corresponding to the four columns pass through the points (1, 2) and (0, 8); (1, 9) and (0, 3); (1, 6) and (0, 7); (1, 3) and (0, 5), respectively. The abscissa x = x* of the point A' (the highest point of the lower envelope; see the heavy black line in the figure) can be evaluated by solving the system of the two linear equations corresponding to strategies two and four of player 2:

6x - y = -3,  2x + y = 5.

The solution is x = 1/4, y = 9/2. Hence X = (1/4, 3/4) and v = 9/2.

Figure 1.10: The problem 9. a)

For the mixed strategy Y = (q1, q2, q3, q4) of player 2 we have the equality (the columns' expected payoffs against X)

(26/4) q1 + (18/4) q2 + (27/4) q3 + (18/4) q4 = 9/2,  q1 + q2 + q3 + q4 = 1,  qj >= 0, j = 1, ..., 4,

with the solution q1 = 0, q3 = 0 (since 26/4 > 9/2 and 27/4 > 9/2). Equalizing the two rows' expected payoffs (both rows belong to the support of X), 9q2 + 3q4 = 3q2 + 5q4 = 9/2, we obtain q2 = 1/4, q4 = 3/4. Thus the optimal strategy of player 2 is Y = (0, 1/4, 0, 3/4).

b) Let Y = (y, 1 - y) be the mixed strategy of player 2. The lines corresponding to the four rows pass through the points (1, 5) and (0, 6); (1, 9) and (0, 4); (1, 3) and (0, 7); (1, 1) and (0, 8), respectively. The lowest point of the upper envelope (see the heavy black line in the figure) is found from the system corresponding, for instance, to rows two and four of player 1:

5y - z = -4,  7y + z = 8.

The solution is y = 1/3, z = 17/3. Hence Y = (1/3, 2/3) and v = 17/3. (In fact all four lines pass through the point (1/3, 17/3).) For player 1, an optimal mixed strategy supported on rows two and four is obtained from 9x2 + x4 = 17/3 with x2 + x4 = 1, which gives x2 = 7/12, x4 = 5/12; hence X = (0, 7/12, 0, 5/12).

Figure 1.11: The problem 9. b)

10. Solve the matrix game with the payoff matrix

A = [6 0 3; 8 -2 3; 4 6 5].

Solution. We use the method described for 3 x 3 matrix games. For player 1, with the mixed strategy X = (x1, x2, x3), we have

XA_1 = 6x1 + 8x2 + 4x3,  XA_2 = -2x2 + 6x3,  XA_3 = 3x1 + 3x2 + 5x3.

The equation of the line XA_1 = XA_2 is 6x1 + 8x2 + 4x3 = -2x2 + 6x3, or 6x1 + 10x2 - 2x3 = 0, that is 3x1 + 5x2 - x3 = 0. The equation of the line XA_2 = XA_3 is -2x2 + 6x3 = 3x1 + 3x2 + 5x3, or 3x1 + 5x2 - x3 = 0 again. The equation of the line XA_3 = XA_1 is 3x1 + 3x2 + 5x3 = 6x1 + 8x2 + 4x3, and this is the same equation, 3x1 + 5x2 - x3 = 0. So we obtain only the equation 3x1 + 5x2 = x3. But x1 + x2 + x3 = 1, so we get 2x1 + 6x3 = 5. Hence the solutions are

x1 = p,  x2 = (1 - 4p)/6,  x3 = (5 - 2p)/6,  p in [0, 1/4].

The values XA_1, XA_2, XA_3 are all equal to (28 - 4p)/6, and this is maximum, 14/3, when p = 0. Thus X = (0, 1/6, 5/6) is the optimal strategy of player 1, and v = 14/3.

For player 2, let Y = (y1, y2, y3) be a mixed strategy. We have

A1 Y^t = 6y1 + 3y3,  A2 Y^t = 8y1 - 2y2 + 3y3,  A3 Y^t = 4y1 + 6y2 + 5y3.

The equation of the line A1 Y^t = A2 Y^t is 6y1 + 3y3 = 8y1 - 2y2 + 3y3, or 2y1 - 2y2 = 0, hence y1 = y2. The equation of the line A2 Y^t = A3 Y^t is 8y1 - 2y2 + 3y3 = 4y1 + 6y2 + 5y3, or 4y1 - 8y2 - 2y3 = 0, hence 2y1 - 4y2 = y3. The equation of the line A3 Y^t = A1 Y^t is 4y1 + 6y2 + 5y3 = 6y1 + 3y3, hence y1 - 3y2 - y3 = 0. Using y1 + y2 + y3 = 1, the condition A2 Y^t = A3 Y^t = 14/3 becomes 3y1 - 3y2 = 1; this line is essential (the intersection of the regions R1 and R2 is this line), and it is parallel to the line y1 = y2. On it we get the extreme solutions

Y1 = (2/3, 1/3, 0)  and  Y2 = (1/3, 0, 2/3),

and along the whole segment A1 Y^t = 4 <= 14/3. Hence every

Y = lambda Y1 + (1 - lambda) Y2,  lambda in [0, 1],

is an optimal strategy for player 2.

11. Using the linear programming method, solve the following matrix games with the payoff matrices:

a) A = [0 2 4; 5 1 1],  b) A = [6 0 3 5; 8 -2 3 9; 4 6 5 4].

Solution. a) Let Y = (y1, y2, y3) be a mixed strategy of player 2. We consider the linear programming problem (40):

[max] g = y1' + y2' + y3', subject to sum over j of aij yj' <= 1 for each row i, and yj' >= 0, j = 1, 2, 3.
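As a numeric cross-check of the linear programming approach, the value of a zero-sum matrix game can also be approximated by fictitious play (the Brown-Robinson iteration), a method not used in this text but convenient for verification. A minimal sketch, assuming matrix b) above and its value 14/3:

```python
# Hedged sketch: fictitious play for a zero-sum matrix game. Each player
# best-responds to the opponent's empirical history; the running upper and
# lower bounds both converge to the game value.
def fictitious_play_value(A, iters=150000):
    m, n = len(A), len(A[0])
    row_payoff = [0.0] * m   # cumulative payoff of each row vs column history
    col_payoff = [0.0] * n   # cumulative payoff of each column vs row history
    i = j = 0
    for _ in range(iters):
        for k in range(n):
            col_payoff[k] += A[i][k]
        for k in range(m):
            row_payoff[k] += A[k][j]
        i = max(range(m), key=lambda k: row_payoff[k])   # player 1 maximizes
        j = min(range(n), key=lambda k: col_payoff[k])   # player 2 minimizes
    upper = max(row_payoff) / iters
    lower = min(col_payoff) / iters
    return (upper + lower) / 2

A = [[6, 0, 3, 5], [8, -2, 3, 9], [4, 6, 5, 4]]
v = fictitious_play_value(A)
print(v)   # close to the exact value 14/3 = 4.666...
```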

The simplex matrix is

[0 2 4 | 1 0 | 1; 5 1 1 | 0 1 | 1; -1 -1 -1 | 0 0 | 0],

and after pivoting we obtain g_max = 3/5 = 1/w, hence w = 5/3. From the final tableau, y1' = 1/10 => y1 = 1/6, y2' = 1/2 => y2 = 5/6, y3' = 0 => y3 = 0, and, from the dual values, x1' = 2/5 => x1 = 2/3, x2' = 1/5 => x2 = 1/3. Thus X = (2/3, 1/3), Y = (1/6, 5/6, 0), and v = w = 5/3.

b) The simplex matrix in this case is

[6 0 3 5 | 1 0 0 | 1; 8 -2 3 9 | 0 1 0 | 1; 4 6 5 4 | 0 0 1 | 1; -1 -1 -1 -1 | 0 0 0 | 0].

We obtain g_max = 3/14 = 1/w, hence w = v = 14/3. From the final tableau, y1' = 1/7 => y1 = 2/3, y2' = 1/14 => y2 = 1/3, y3' = 0 => y3 = 0, y4' = 0 => y4 = 0, and x1' = 0 => x1 = 0, x2' = 1/28 => x2 = 1/6, x3' = 5/28 => x3 = 5/6. So an optimal solution of the matrix game is X = (0, 1/6, 5/6), Y1 = (2/3, 1/3, 0, 0), v = 14/3. There exists another optimal solution, because the final tableau admits an alternative optimum: y1' = 1/14 => y1 = 1/3, y3' = 1/7 => y3 = 2/3, giving Y2 = (1/3, 0, 2/3, 0). Hence

Y = lambda Y1 + (1 - lambda) Y2,  lambda in [0, 1],

and we recover the solutions of problem 10.

12. As a technological input, three firms use water from the same source. Each firm has two strategies: to build a station that makes the water pure (strategy 1), or to use

water that is not pure (strategy 2). We suppose that if at most one firm uses water that is not pure, then the existing water is good enough for it, and that firm has no expenses. If at least two firms use water that is not pure, then every firm that uses the water loses 3 monetary units (u.m.). For a firm that builds the station to make the water pure, there is an additional expense equal to 1 u.m. Write the payoff matrix of this game.

Solution. The payoff matrix, in general representation, is given in Table 1.

Table 1
Situation (s1, s2, s3) | H1 H2 H3
1 1 1 | -1 -1 -1
1 1 2 | -1 -1  0
1 2 1 | -1  0 -1
1 2 2 | -4 -3 -3
2 1 1 |  0 -1 -1
2 1 2 | -3 -4 -3
2 2 1 | -3 -3 -4
2 2 2 | -3 -3 -3

13. Two factories produce the same types of products, A and B respectively, in two assortments: A1, A2 and B1, B2. The products are interchangeable. By making a test in advance, we obtain that the buyers' preferences, given in percentages, have the following representation:

A\B | B1 | B2
A1 | 40 | 90
A2 | 70 | 20

The percentages given in the above table refer to the first factory (first production); the percentages for the second factory (second production) are the complementary percentages (with respect to the total percentage, 100%). Write the payoff matrix.

Solution. We have the general representation:

Situation (s1, s2) | H1 H2
1 1 | 40 60
1 2 | 90 10
2 1 | 70 30
2 2 | 20 80

which is equivalent to the following bi-dimensional representation:

H1 = [40 90; 70 20],  H2 = [60 10; 30 80].

14. We consider problem 13 with the following modification of the purchasing conditions: we remark that 50% of those buyers who buy the product A2 buy the product B2 too, and conversely. By expressing the sales in absolute value, considering the 1000 units that have been sold under the conditions of the first version of the problem, we ask:

1. to express the payoff matrix;
2. to solve the non-cooperative bi-matrix game.

Solution. 1. We have the table:

Situation (s1, s2) | H1 H2
1 1 | 400 600
1 2 | 900 100
2 1 | 700 300
2 2 | 600 900

where in the situation (2,2) there are 600 = 200 + (1/2) 800 and 900 = 800 + (1/2) 200 units, respectively. The bi-dimensional writing is:

H1 = [400 900; 700 600],  H2 = [600 100; 300 900].

2. Solving of the bi-matrix game. The corresponding simplex matrices are:

SA:
i\j |   1 |   2 |  3 | 4 | 5
1   |   1 |   1 |  0 | 0 | 1
2   | 400 | 900 | -1 | 1 | 0
3   | 700 | 600 | -1 | 1 | 0
4   |   0 |   0 |  0 | 0 | = MIN

SB:
i\j |   1 |   2 |  3 | 4 | 5
1   |   1 |   1 |  0 | 0 | 1
2   | 600 | 300 | -1 | 1 | 0
3   | 100 | 900 | -1 | 1 | 0
4   |   0 |   0 |  0 | 0 | = MIN
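The mixed equilibrium of this 2 x 2 bi-matrix game can also be obtained from the indifference conditions (each factory's mixture leaves the other factory indifferent between its two strategies); a minimal sketch, used here only as a cross-check of the simplex computation:

```python
# Hedged check of problem 14's mixed equilibrium via indifference conditions.
from fractions import Fraction as Fr

H1 = [[400, 900], [700, 600]]   # first factory's payoffs
H2 = [[600, 100], [300, 900]]   # second factory's payoffs

# p on A1 makes factory 2 indifferent between B1 and B2
p = Fr(H2[1][1] - H2[1][0], H2[0][0] - H2[0][1] - H2[1][0] + H2[1][1])
# q on B1 makes factory 1 indifferent between A1 and A2
q = Fr(H1[1][1] - H1[0][1], H1[0][0] - H1[0][1] - H1[1][0] + H1[1][1])

FA = H1[0][0] * q + H1[0][1] * (1 - q)   # factory 1's value: 650
FB = H2[0][0] * p + H2[1][0] * (1 - p)   # factory 2's value: 5100/11 ~ 463.64
print(p, q, FA, FB)
```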

60 . A2 : 27. 0. 5]. knows the strategy applied by the …rst factory. Q1 ): P1 = [0. 16.5 3 16. 5. that it is 45% from all sales. 2. 0. B1 : 31. 0.45 463. 7. Solution. Q = [0. we can see that the syntetique situation expressed in percentages about the structure of the types of production are the following: A1 : 27. 1. Solve the game. 2. 45]. 15. 0. 3]. Write the structure matrices.5 0.64 0 0 0 1 0 600 0 0 500 0 1 900 0 600 0 Q q1 q2 q3 q4 q5 q6 Q1 0. 5]. at the moment of choosing its strategy. The structures matrices of the game are: AnB A1 = A2 BnA B1 = B2 B1 14 24.5 650 0 0 0 Q2 1 0 700 0 300 0 Q3 0 1 900 0 0 300 The value of the game for the …rst factory is FA = 650 and for the second is FB = 463.5 A2 10.5 13. 5. Q) are equilibrium points that verify the condition: p4+i 6= 0 ) qi = 0 and q4+i 6= 0 ) pi = 0.5 12 22. Because of antagonistic market competition the second factory realizes a less sale that the …rst. Relation between information and income.5% if the production of both factories is 100%.5%. by supposing that the second factory. that is a antagonistic game with constant sum 100%.55 0. Q1 = [0. B2 : 13.5%. Let us consider Problem 15. 54.5 45 A B So.5 27. Let us consider the game 13. 55.5%.5 38. We observe that the single equilibrium point is (P1 . The simplex table corresponding to this game is: 1 2 3 4 1 1 40 70 0 2 1 90 20 0 3 0 -1 -1 1 4 0 1 1 -1 5 1 0 0 0 = MIN By solving the linear programming problem we obtain: P = [0. These pairs of solutions (P.5 22. The value of the game is 55%.P P1 P2 P3 p1 p2 p3 p4 p5 p6 0.5 B2 13.5 27.5 A1 21 1. 1.5 55 31.

We ask:

1. Write the matrix of the game.
2. Solve the game; we denote by V' the value of the new game.
3. Compare the results obtained here with those obtained by solving problem 15 and interpret the difference between these two solutions.

Solution. 1. Because the second factory knows the strategy applied by the first factory, it can apply four strategies, obtained by combining the strategies B1 and B2: B1:B1 (respond with B1 to A1 and with B1 to A2), B1:B2 (respond with B1 to A1 and with B2 to A2), B2:B1, and B2:B2. The matrix of the game is given by the following table:

A\B | B1:B1 | B1:B2 | B2:B1 | B2:B2
A1  | 40    | 40    | 90    | 90
A2  | 70    | 20    | 70    | 20

2. After the elimination of the dominated column 3, the corresponding simplex table is:

i\j | 1 | 2  | 3  | 4
1   | 1 | 40 | 70 | 0
2   | 1 | 40 | 20 | 0
3   | 1 | 90 | 20 | 0
4   | 0 | -1 | -1 | 1
5   | 0 | 1  | 1  | -1
6   | 1 | 0  | 0  | 0  = MIN

Solving this linear programming problem we obtain P = [1, 0], v' = 40%, and

Q' = lambda1 [1, 0, 0, 0] + lambda2 [0, 1, 0, 0],  lambda1 + lambda2 = 1

(with 70 lambda1 + 20 lambda2 <= 40, that is lambda1 <= 2/5, so that the first factory is not tempted to deviate to A2). Thus the first factory produces only the assortment A1, as 40%, and the second factory produces only the assortment B1 (its response to A1 under both strategies in the support of Q'), as 60% in total: namely 60 lambda1 % inside the strategy B1:B1 and 60 lambda2 % inside the strategy B1:B2.

8. b) A = 6 4 1 6 5: 3 5 1 5 0 62 . as a result of the fact that it owns an information important to it. Solve the problem 6 with the graphical method for 2 n and m 2 matrix games. as 24 2 % (inside the strategy B1 : B1 ). If the both players forecast or neither forecast no then neither receives nothing. namely a total of 60%. How is separated all production 100% of both factories? The …rst factory produces only the assortment A1 as 40% and the second factory produces only the assortment B1 . If a player forecasts the number of …ngers showed by the opponent. 9. Using the graphical method. What game in previous problems has the saddle point? 4. solve the following matrix games with the payo¤ matrices: 2 3 2 4 6 3 1 7 2 1 4 7 a) A = . Solve the problem 6 with the Williams method.We remark a decreasing equal V 0 V = 15% for the …rst factory and an increasing equal 15% for the second factory. b) A = . 1. c) A = : 5 2 4 5 3 1 7. Which are the expected payo¤s of player 1 in the previous games? 5. Using the iterated elimination of strictly dominated strategies solve the matrix game with the payo¤ matrix 3 2 1 1 2 0 6 3 0 2 4 7 7: A=6 4 4 5 1 5 5 2 3 1 3 6. Find the optimal strategies of the following matrix game with the payo¤ matrix: 2 3 6 1 2 4 a) A = .19 Exercises and problems unsolved the payo¤ matrix 3 6 9 6 6 1 8 5: 5 3 5 Let be a zero-sum two-person game with 2 3 H1 = A = 4 10 4 Which is the payo¤ matrix of player 2? What strategies have the player 1 and the player 2? 2. Which is the payo¤ matrix of this game? 3. he receives so many unities monetary as much as …ngers they showed together. (60 1 + 36 2 )% (inside the strategy B1 : B2 ). (The Morra game) Two players show simultaneous one or two …ngers from the left hand and in the same time yells the number of …ngers that the believe that shows the opponent.

10. Solve the matrix game with the payoff matrix

A = [2 1 1; 1 2 1; 1 1 0].

11. Using the linear programming method, solve the following matrix games with the payoff matrices:

a) A = [3 0 7; 1 8 0; 1 2 14],  b) A = [5 6 3; 0 9 4; 1 8 12].

12. A factory produces three types of production A: A1, A2, A3. To produce one unit of product, three types of raw materials B are used: B1 (metal), B2 (wooden material), B3 (plastic material). The expenses with the raw materials for a unit of production are given in the table:

A\B | B1 | B2 | B3
A1  | 4  | 3  | 5
A2  | 4  | 5  | 2
A3  | 6  | 3  | 4

Write the matrix of the game in general representation.

13. Two branches have to make investments in four objectives. The strategy i consists in financing the objective i, i = 1, ..., 4. In accordance with all the considerations, the payoffs of the first branch are given by the matrix:

A = [0 1 1 2; 1 0 3 2; 0 1 2 1; 2 0 0 0].

We suppose that every branch materializes its payoff in agreement with the other one: what the first wins the second loses, and what the first loses the second wins. Write the matrix of the game in general representation.

14. Let us consider two persons playing a bi-matrix non-cooperative game, given by the matrices

A = [1 3; 7 4],  B = [1 7; 8 8].

Solve the game.

15. In order to achieve an economic and social development of a town, the problem appears of building or not building two economic objectives. There are two strategies for the corresponding ministry and for the leaders of the town: 1 – the building of the first objective, 2 – the building of the second objective. The people that represent the town may have two strategies: 1 – they agree

with the proposal of the Ministry, 2 – they do not agree with it. The strategies are applied independently. The payoffs are given by the matrices

A = [10 1; 2 1],  B = [5 1; 2 1].

Solve the non-cooperative game.

16. Let us consider problem 12, and we ask:

16.1. Find a production plan corresponding to a total production of 4 million u.m.

16.2. What are the percentages p1 : p2 : p3 in which we have to make the supply in advance (the supply made before knowing the volume of the contracts for the next period of time) with raw materials, in order that the stock will surely be used and the value of the production is maximum?

17. Solve the antagonistic game given in problem 13 in pure strategies.

Answers

1. H2 = -H1^t; player 1 has three strategies and player 2 has three strategies.

2. The payoff matrix is

H1 = A = [0 2 -3 0; -2 0 0 3; 3 0 0 -4; 0 -3 4 0],

the rows being L11, L12, L21, L22, where L11 means: 1 finger shown, 1 yelled; L12: 1 finger, 2 yelled; L21: 2 fingers, 1 yelled; L22: 2 fingers, 2 yelled. For player 2, H2 = -H1.

3. For the first game v1 = max_i min_j aij = 3 and v2 = min_j max_i aij = 6; for the second game v1 = -2 and v2 = 2. In both cases v1 < v2, so there is no saddle point in pure strategies.

4. For the first game, sum of aij xi yj = 3x1y1 + 6x1y2 + 9x1y3 + 10x2y1 + 6x2y2 + x2y3 + 8x3y1 + 5x3y2 + 3x3y3; for the second game, sum of aij xi yj = 2x1y2 - 3x1y3 - 2x2y1 + 3x2y4 + 3x3y1 - 4x3y4 - 3x4y2 + 4x4y3.

5.-13. a) X = (1/4, 3/4), v = 1; b) X = (0, 1), v = 5/2; a) X = (1/2, 1/2), Y = (1/8, 7/8), v = 17/4; c) X = (3/4, 1/4), Y = (3/4, 1/4), v = 5/2; X = (0, 3/5, 2/5), Y = (2/5, 3/5, 0), v = 17/3; a) v = 1/3; b) v = 1/5; v = 5/3; X = (1/3, 2/3), Y = (2/3, 1/3), v = 2; Y1 = (1/2, 1/2), Y2 = (0, 1).

14. First solution: (P, Q), with FA = FB = 7; second solution: (P, Q) with Q = lambda1 Q1 + lambda2 Q2, lambda1 + lambda2 = 1, FA = 3, FB = 8.

15. P = [1, 0], Q = [0, 1].

16.1. A1: a = 2680000 lambda1 + 2000000 lambda2 u.m., A2: b = 1320000 lambda1 + 2000000 lambda2 u.m., with lambda1 + lambda2 = 1; the first factory wins 0.56 u.m.

16.2. p1 : p2 : p3 = 1 : 0 : 0.

1.20 References

1. Blaga, P., Lupaş, A., Mureşan, A.S., Applied mathematics, Promedia Plus, Cluj-Napoca, 1999 (In Romanian)
2. Ciucu, G., Craiu, V., Ştefănescu, A., Mathematical statistics and operational research, Bucureşti, 1978 (In Romanian)
3. Craiu, V., Mihoc, Gh., Mathematics for economists, Bucureşti, 1971 (In Romanian)
4. Dani, E., Numerical methods in games theory, Ed. Dacia, Cluj-Napoca, 1983 (In Romanian)
5. Dani, E., Operational research, Lito. Univ. Babeş-Bolyai, Cluj-Napoca, 1981 (In Romanian)
6. Farber, H., An analysis of final-offer arbitration, J. of Conflict Resolution, 1980, 683-705
7. Gibbons, R., Game theory for applied economists, Princeton University Press, New Jersey, 1992
8. Guiaşu, S., Maliţa, M., Games with three players, Ed. Ştiinţifică, Bucureşti, 1973 (In Romanian)
9. Hardin, G., The tragedy of the commons, Science, 162, 1968, 1243-1248
10. Mureşan, A.S., Applied mathematics in economy, Vol. I, Transilvania Press, Cluj-Napoca, 1996 (In Romanian)
11. Mureşan, A.S., Applied mathematics in finance, banks and exchanges, Vol. I, Ed. Risoprint, Cluj-Napoca, 2000 (In Romanian)
12. Mureşan, A.S., Blaga, P., Applied mathematics in economy, Vol. II, Transilvania Press, Cluj-Napoca, 1996 (In Romanian)
13. Mureşan, A.S., Rahman, M., Applied mathematics in finance, banks and exchanges, Vol. II, Ed. Risoprint, Cluj-Napoca, 2001 (In Romanian)
14. Mureşan, A.S., Rahman, M., Applied mathematics in finance, banks and exchanges, Vol. III, Ed. Risoprint, Cluj-Napoca, 2002 (In Romanian)
15. von Neumann, J., Morgenstern, O., Theory of games and economic behavior (3rd edn), Princeton University Press, Princeton, New Jersey, 1953
16. Onicescu, O., Strategy of games with applications to linear programming, Ed. Academiei, Bucureşti, 1971 (In Romanian)
17. Owen, G., Game theory (2nd edn), Academic Press, New York, 1982

18. Schatteles, T., Strategic games and economic analysis, Ed. Ştiinţifică, Bucureşti, 1969 (In Romanian)
19. Tirole, J., The theory of industrial organization, MIT Press, 1988
20. Wang, J., An inductive proof of von Neumann's minimax theorem, Chinese J. of Operations Research, 1 (1987), 68-70
21. Wang, J., The theory of games, Clarendon Press, Oxford, 1988

2 Static games of incomplete information

In this chapter we consider games of incomplete information (Bayesian games), that is, games in which at least one player is uncertain about another player's payoff function. One common example of a static game of incomplete information is a sealed-bid auction: each bidder knows his own valuation for the good being sold but doesn't know any other bidder's valuation; bids are submitted in sealed envelopes, so the players' moves can be thought of as simultaneous.

2.1 Static Bayesian games and Bayesian Nash equilibrium

In this section we define the normal-form representation of a static Bayesian game and a Bayesian Nash equilibrium in such a game. Since these definitions are abstract and a bit complex, we introduce the main ideas with a simple example, namely Cournot competition under asymmetric information.

Consider a Cournot duopoly model with inverse demand given by P(Q) = a - Q, where Q = q1 + q2 is the aggregate quantity on the market. Firm 1's cost function is C1(q1) = cq1. Firm 2's cost function is C2(q2), which has the probabilistic distribution

C2(q2) = cL q2 with probability 1 - θ, and C2(q2) = cH q2 with probability θ,

where cL < cH. Equivalently, firm 2's marginal cost c has the probabilistic distribution: c = cL with probability 1 - θ and c = cH with probability θ. This situation may arise when firm 2 could be a new entrant to the industry, or could have just invented a new technology. Information is asymmetric: firm 2 knows its cost function and firm 1's, but firm 1 knows its own cost function and only the probability distribution of firm 2's marginal cost. All of this is common knowledge: firm 1 knows that firm 2 has superior information, firm 2 knows that firm 1 knows this, and so on.

Naturally, firm 2 may want to choose a different (and presumably lower) quantity if its marginal cost is high than if it is low. Firm 1, for its part, should anticipate that firm 2 may tailor its quantity to its cost in this way. Let q2*(c) denote firm 2's quantity choice as a function of its cost, that is,

q2* = q2*(cL), if c = cL;   q2* = q2*(cH), if c = cH,   (41)

depending on firm 2's cost. Let q1* denote firm 1's single quantity choice. If firm 2's cost is high, it will choose q2*(cH) to solve

max_{q2} [(a - q1* - q2) - cH] q2.

Similarly, if firm 2's cost is low, q2*(cL) will solve

max_{q2} [(a - q1* - q2) - cL] q2.

Firm 1 knows that firm 2's cost is low with probability 1 - θ and should anticipate that firm 2's quantity choice will be q2*(cL) or q2*(cH), depending on firm 2's cost. Thus firm 1 chooses q1* to solve

max_{q1} (1 - θ)[(a - q1 - q2*(cL)) - c] q1 + θ[(a - q1 - q2*(cH)) - c] q1,

so as to maximize expected profit. The first-order conditions for these three optimization problems are

q2*(cH) = (a - q1* - cH)/2,
q2*(cL) = (a - q1* - cL)/2,
q1* = [(1 - θ)(a - q2*(cL) - c) + θ(a - q2*(cH) - c)]/2.

Assume that these first-order conditions characterize the solutions to the earlier optimization problems. Then the solutions to the three first-order conditions are

q2*(cH) = (a - 2cH + c)/3 + ((1 - θ)/6)(cH - cL),
q2*(cL) = (a - 2cL + c)/3 - (θ/6)(cH - cL),
q1* = (a - 2c + θ cH + (1 - θ) cL)/3.

Compare q2*(cL), q2*(cH) and q1* to the Cournot equilibrium under complete information with costs c1 and c2: assuming that the values of c1 and c2 are such that both firms' equilibrium quantities are positive, firm i produces qi* = (a - 2ci + cj)/3 in the complete-information case. In the incomplete-information case, in contrast, q2*(cH) is greater than (a - 2cH + c)/3 and q2*(cL) is less than (a - 2cL + c)/3. This occurs because firm 2 not only tailors its quantity to its cost but also responds to the fact that firm 1 cannot do so. If firm 2's cost is high, for example, it produces less because its cost is high but also produces more because it knows that firm 1 will produce a quantity that maximizes its expected profit, and thus is smaller than firm 1 would produce if it knew firm 2's cost to be high.
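The closed-form solution above can be verified against the three first-order conditions numerically. A minimal sketch (the parameter values a, c, cL, cH and θ below are illustrative assumptions, not from the text):

```python
# Numerical check of the Bayesian Cournot equilibrium formulas above.
# The parameter values are illustrative choices, not from the text.

def bayes_cournot(a, c, cL, cH, theta):
    """Closed-form equilibrium; theta is the probability of high cost cH."""
    q1 = (a - 2*c + theta*cH + (1 - theta)*cL) / 3
    q2H = (a - 2*cH + c) / 3 + (1 - theta) * (cH - cL) / 6
    q2L = (a - 2*cL + c) / 3 - theta * (cH - cL) / 6
    return q1, q2H, q2L

a, c, cL, cH, theta = 10.0, 2.0, 1.5, 3.0, 0.4
q1, q2H, q2L = bayes_cournot(a, c, cL, cH, theta)

# First-order conditions: each quantity is a best response.
assert abs(q2H - (a - q1 - cH) / 2) < 1e-12           # firm 2, high cost
assert abs(q2L - (a - q1 - cL) / 2) < 1e-12           # firm 2, low cost
br1 = (theta * (a - q2H - c) + (1 - theta) * (a - q2L - c)) / 2
assert abs(q1 - br1) < 1e-12                          # firm 1, expected profit

# Firm 2's quantities bracket the complete-information values (a - 2ci + cj)/3.
assert q2H > (a - 2*cH + c) / 3 and q2L < (a - 2*cL + c) / 3
print(round(q1, 6), round(q2H, 6), round(q2L, 6))     # 2.7 2.15 2.9
```

With these numbers firm 2's high-cost output 2.15 indeed exceeds the complete-information value 2.0, and its low-cost output 2.9 falls short of 3.0, as the comparison above predicts.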

2.2 Normal-form representation of static Bayesian games

Recall that the ensemble Γ = < I, {Ai}, {Hi}, i ∈ I > is a non-cooperative game (see Definition 1.10), where Ai is player i's action space and Hi is player i's payoff; hence Hi(a) = Hi(a1, a2, ..., an) is player i's payoff when the players choose the actions a = (a1, a2, ..., an).

Remark 2.1. In a simultaneous-move game of complete information a strategy for a player is simply an action, but in a dynamic game of complete information (a finitely or infinitely repeated game) a strategy can differ from an action; in a dynamic game a strategy is more complicated.

Remark 2.2. A player's strategy is a complete plan of action: it specifies a feasible action for the player in every contingency in which the player might be called upon to act.

Remark 2.3. The non-cooperative game can also be described as Γ = < I, {Si}, {Hi}, i ∈ I >, where Si is player i's strategy space and Hi is player i's payoff; hence Hi(s) = Hi(s1, s2, ..., sn) is player i's payoff when the players choose the strategies (s1, s2, ..., sn).

To prepare for our description of the timing of a static game of incomplete information, we describe the timing of a static game of complete information as follows: (1) the players simultaneously choose actions (player i chooses ai from the feasible set Ai); (2) payoffs Hi(a1, a2, ..., an) are received.

Now we want to develop the normal-form representation of a static Bayesian game, namely a simultaneous-move game of incomplete information. The first step is to represent the idea that each player knows his own payoff function but may be uncertain about the other players' payoff functions. Let player i's possible payoff functions be represented by Hi(a1, a2, ..., an, ti), where ti is called player i's type and belongs to a set of possible types (or type space) Ti. Each type ti corresponds to a different payoff function that player i might have. Player i's type, ti, is privately known by player i and determines player i's payoff function. Saying that player i may be uncertain about the other players' payoff functions is equivalent to saying that player i may be uncertain about the types of the other players, denoted by t-i = (t1, ..., ti-1, ti+1, ..., tn). We use T-i to denote the set of all possible values of t-i, and we use the probability distribution pi(t-i | ti) to denote player i's belief about the other players' types, t-i, given player i's knowledge of his own type, ti. Likewise, saying that player i knows his own payoff function is equivalent to saying that player i knows his type.

Remark 2.4. In most applications the players' types are independent, in which case pi(t-i | ti) doesn't depend on ti, so we can write player i's belief as pi(t-i).

Definition 2.1. The normal-form representation of an n-player static Bayesian game specifies the players' action spaces A1, A2, ..., An, their type spaces T1, T2, ..., Tn, their beliefs p1, p2, ..., pn, and their payoff functions H1, H2, ..., Hn. We use Γ = < I, {Ai}, {Ti}, {pi}, {Hi}, i ∈ I > to denote an n-player static Bayesian game. Player i's payoff when the players choose the actions (a1, a2, ..., an) and i's type is ti is Hi(a1, a2, ..., an, ti), where ti is a member of the set of possible types Ti.

Remark 2.5. Suppose that player i has two possible payoff functions, Hi(a1, a2, ..., an, ti1) and Hi(a1, a2, ..., an, ti2). We would say that player i has two types, ti1 and ti2, and that player i's type space is Ti = {ti1, ti2}.

Example 2.1. In the Cournot game the firms' actions are their quantity choices, q1 and q2. Firm 2 has two possible cost functions and thus two possible profit (or payoff) functions:

H2(q1, q2, cL) = [(a - q1 - q2) - cL] q2 and H2(q1, q2, cH) = [(a - q1 - q2) - cH] q2.

Firm 1 has only one possible payoff function,

H1(q1, q2, c) = [(a - q1 - q2) - c] q1.

Thus, firm 1's type space is T1 = {c}, and firm 2's type space is T2 = {cL, cH}.

We can use the idea that each of a player's types corresponds to a different payoff function the player might have to represent the possibility that the player might have different sets of feasible actions, as follows.

Example 2.2. Suppose that player i's set of feasible actions is {a, b} with probability q and {a, b, c} with probability 1 - q. Then we can say that i has two types, and we can define i's feasible set of actions to be {a, b, c} for both types but define the payoff from taking action c to be -∞ for type ti1.

Remark 2.6. There are games in which player i has private information not only about his own payoff function but also about another player's payoff function. We capture this possibility by allowing player i's payoff to depend not only on the actions (a1, a2, ..., an) but also on all the types (t1, t2, ..., tn). We write this payoff as Hi(a1, a2, ..., an, t1, t2, ..., tn).

The timing of a static Bayesian game is as follows: (1) nature draws a type vector t = (t1, t2, ..., tn), where ti is drawn from the set of possible types Ti; (2) nature reveals ti to player i but not to any other player; (3) the players simultaneously choose actions, player i choosing ai from the feasible set Ai; (4) payoffs Hi(a1, a2, ..., an, t) are received.

Remark 2.7. Because nature reveals player i's type to player i but not to player j in step (2), player j doesn't know the complete history of the game when actions are chosen in step (3).

The second technical point involves the beliefs, pi(t-i | ti). We will assume that it is common knowledge that in step (1) of the timing of a static Bayesian game, nature draws a type vector t = (t1, t2, ..., tn) according to the prior probability distribution p(t). When nature then reveals ti to player i, he can compute the belief pi(t-i | ti) using Bayes' rule:

pi(t-i | ti) = p(t-i, ti) / p(ti) = p(t-i, ti) / Σ_{t-i ∈ T-i} p(t-i, ti).
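The Bayes'-rule computation above is mechanical and easy to sketch in code. The two-player prior below is an illustrative assumption (with correlated types, so the posterior genuinely depends on ti):

```python
# Sketch of the belief computation above: given a prior p(t) over type
# profiles, compute player i's posterior over the profile via Bayes' rule.
# The two-player correlated prior below is a made-up illustration.

def belief(prior, i, ti):
    """pi(t_-i | ti) for a dict prior mapping type profiles t to p(t)."""
    joint = {t: p for t, p in prior.items() if t[i] == ti}
    p_ti = sum(joint.values())                      # marginal p(ti)
    return {t: p / p_ti for t, p in joint.items()}  # p(t_-i, ti) / p(ti)

prior = {("H", "H"): 0.4, ("H", "L"): 0.1,
         ("L", "H"): 0.1, ("L", "L"): 0.4}

b = belief(prior, 0, "H")            # player 1's belief after learning t1 = "H"
print(b[("H", "H")], b[("H", "L")])  # 0.8 0.2
```

Here learning t1 = "H" shifts player 1's belief toward t2 = "H"; with an independent prior the posterior would not depend on t1, matching Remark 2.4.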

2.3 Definition of Bayesian Nash equilibrium

First, we define the players' strategy spaces in the static Bayesian game. Given the timing of a static Bayesian game, in which nature begins the game by drawing the players' types, a (pure) strategy for player i must specify a feasible action for each of player i's possible types. We know that a player's strategy is a complete plan of action, specifying a feasible action in every contingency in which the player might be called on to act.

Definition 2.2. In the static Bayesian game Γ = < I, {Ai}, {Ti}, {pi}, {Hi}, i ∈ I >, a strategy for player i is a function si : Ti → Ai, where for each type ti ∈ Ti, si(ti) specifies the action from the feasible set Ai that type ti would choose if drawn by nature.

Remark 2.8. The strategy spaces aren't given in the normal-form representation of the Bayesian game. Instead, in a static Bayesian game the strategy spaces are constructed from the type and action spaces: player i's set of possible (pure) strategies, Si, is the set of all possible functions with domain Ti and range Ai. In a separating strategy each type ti ∈ Ti chooses a different action ai ∈ Ai, whereas in a pooling strategy all types choose the same action. This distinction will matter in the discussion of dynamic games of incomplete information; we introduce it here only to help describe the wide variety of strategies that can be constructed from a given pair of type and action spaces, Ti and Ai.

Example 2.3. In the asymmetric-information Cournot game in Example 2.1 the solution consists of three quantity choices: q2*(cL), q2*(cH) and q1*. In terms of Definition 2.2, the pair (q2*(cL), q2*(cH)) is firm 2's strategy and q1* is firm 1's strategy. Firm 2 will choose a different quantity depending on its cost. It is important to note, however, that firm 1's single quantity choice should take into account that firm 2's quantity will depend on firm 2's cost in this way. Thus, if our equilibrium concept is to require that firm 1's strategy be a best response to firm 2's strategy, then firm 2's strategy must be a pair of quantities, one for each possible cost type, or else firm 1 simply cannot compute whether its strategy is indeed a best response to firm 2's.

Given the definition of a strategy in a Bayesian game, we turn next to the definition of a Bayesian Nash equilibrium. The central idea is both simple and familiar: each player's strategy must be a best response to the other players' strategies.

Definition 2.3. In the static Bayesian game Γ = < I, {Ai}, {Ti}, {pi}, {Hi}, i ∈ I >, the strategies s* = (s1*, ..., sn*) are a (pure-strategy) Bayesian Nash equilibrium if for each player i and for each of i's types ti ∈ Ti, si*(ti) solves the problem

max_{ai ∈ Ai} Σ_{t-i ∈ T-i} Hi(s1*(t1), ..., s*_{i-1}(t_{i-1}), ai, s*_{i+1}(t_{i+1}), ..., sn*(tn), t) pi(t-i | ti).

Remark 2.9. In a Bayesian Nash equilibrium no player wants to change his strategy, even if the change involves only one action by one type. That is, in the context of Bayesian games, a Bayesian Nash equilibrium is simply a Nash equilibrium in a Bayesian game.
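Definition 2.3 can be checked by brute force in a small finite game, enumerating each player's strategies as functions from types to actions. The game below (actions, types, prior, payoffs) is a made-up illustration, not from the text:

```python
# Brute-force check of Definition 2.3 on a tiny finite Bayesian game.
# Player 1 has one type; player 2 has two types drawn with equal probability.
from itertools import product

A = ["L", "R"]                       # both players' action sets
T2 = ["x", "y"]                      # player 2's types
prior = {"x": 0.5, "y": 0.5}

def u1(a1, a2):                      # player 1 wants to match player 2
    return 1.0 if a1 == a2 else 0.0

def u2(a1, a2, t2):                  # type "x" matches, type "y" mismatches
    m = 1.0 if a1 == a2 else 0.0
    return m if t2 == "x" else 1.0 - m

def is_bne(a1, s2):                  # s2 is a dict: type -> action
    ev = lambda a: sum(prior[t] * u1(a, s2[t]) for t in T2)
    if any(ev(a) > ev(a1) + 1e-12 for a in A):      # player 1 best-responds
        return False                                 # in expectation over t2
    return all(u2(a1, s2[t], t) >= max(u2(a1, a, t) for a in A) - 1e-12
               for t in T2)                          # each type of player 2

equilibria = [(a1, s2) for a1 in A
              for s2 in (dict(zip(T2, v)) for v in product(A, repeat=2))
              if is_bne(a1, s2)]
print(equilibria)  # [('L', {'x': 'L', 'y': 'R'}), ('R', {'x': 'R', 'y': 'L'})]
```

Note that player 2's strategy space has |A|^|T2| = 4 elements, exactly as Remark 2.8 describes, and each equilibrium strategy here is separating: the two types choose different actions.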

Remark 2.10. We can show that in a finite static Bayesian game there exists a Bayesian Nash equilibrium, perhaps in mixed strategies.

2.4 The revelation principle

An important tool for designing games when the players have private information is the revelation principle, due to Myerson [ ]. It can be applied in the auction and bilateral-trading problems described in the previous sections, as well as in a wide variety of other problems. Before we state and prove the revelation principle for static Bayesian games, we sketch the way the revelation principle is used in the auction and bilateral-trading problems.

Consider a seller who wishes to design an auction to maximize his expected revenue. In the auction analyzed earlier, the highest bidder paid money to the seller and received the good, but there are many other possibilities. The bidders might have to pay an entry fee. More generally, some of the losing bidders might have to pay money, perhaps in amounts that depend on their own and others' bids. Also, the seller might set a reservation price, a floor below which bids will not be accepted. More generally, the good might stay with the seller with some probability, and might not always go to the highest bidder when the seller does release it.

The seller can use the revelation principle to simplify this problem in two ways. First, the seller can restrict attention to the following class of games: 1) the bidders simultaneously make claims about their types (their valuations); bidder i can claim to be any type τi from i's set of feasible types Ti, no matter what i's true type, ti; 2) given the bidders' claims (τ1, ..., τn), bidder i pays xi(τ1, ..., τn) and receives the good with probability qi(τ1, ..., τn). For each possible combination of claims (τ1, ..., τn), the sum of the probabilities q1(τ1, ..., τn) + ... + qn(τ1, ..., τn) must be less than or equal to one.

Definition 2.4. A static Bayesian game in which each player's only action is to submit a claim about his type is called a direct mechanism. A direct mechanism in which truth-telling is a Bayesian Nash equilibrium is called incentive-compatible.

The second way the seller can use the revelation principle is to restrict attention to those direct mechanisms in which it is a Bayesian Nash equilibrium for each bidder to tell the truth, that is, to payment and probability functions x1(τ1, ..., τn), ..., xn(τ1, ..., τn), q1(τ1, ..., τn), ..., qn(τ1, ..., τn) such that each player i's equilibrium strategy is to claim τi(ti) = ti for each ti ∈ Ti.

Remark 2.11. Outside the context of auction design, the revelation principle can again be used in these two ways. Any Bayesian Nash equilibrium of any Bayesian game can be represented by a Bayesian Nash equilibrium in an appropriately chosen new Bayesian game, where by "represented" we mean that for each possible combination of the players' types (t1, t2, ..., tn), the players'

actions and payoffs in the new equilibrium are identical to those in the old equilibrium. The following result holds.

Theorem 2.1 (The revelation principle). Any Bayesian Nash equilibrium of any Bayesian game can be represented by an incentive-compatible direct mechanism.

Proof. Consider the Bayesian Nash equilibrium s* = (s1*, ..., sn*) in the Bayesian game Γ = < I, {Ai}, {Ti}, {pi}, {Hi}, i ∈ I >. We will construct a direct mechanism with a truth-telling equilibrium that represents s*. The appropriate direct mechanism is a static Bayesian game with the same type spaces and beliefs as Γ but with new action spaces and new payoff functions. The new action spaces are simple: player i's feasible actions in the direct mechanism are claims about i's possible types; that is, player i's action space is Ti. The new payoff functions are more complicated: they depend not only on the original game Γ, but also on the original equilibrium in that game, s*. The idea is to use the fact that s* is an equilibrium in Γ to ensure that truth-telling is an equilibrium of the direct mechanism, as follows. We define the payoffs in the direct mechanism by substituting the players' type reports in the new game, τ = (τ1, τ2, ..., τn), into their equilibrium strategies from the old game, s*(τ) = (s1*(τ1), s2*(τ2), ..., sn*(τn)), and then substituting the resulting actions in the old game into the payoff functions from the old game. Formally, player i's payoff function is

vi(τ, t) = Hi(s*(τ), t),

where t = (t1, t2, ..., tn).

We conclude the proof by showing that truth-telling is a Bayesian Nash equilibrium of this direct mechanism. By claiming to be type τi from Ti, player i is in effect choosing to take the action si*(τi) from Ai. If all the other players tell the truth, then they are in effect playing the strategies (s1*, ..., s*_{i-1}, s*_{i+1}, ..., sn*). The fact that s* is a Bayesian Nash equilibrium of Γ means that, for each player i, si* is i's best response to the other players' strategies (s1*, ..., s*_{i-1}, s*_{i+1}, ..., sn*). In particular, for each of i's types ti ∈ Ti, si*(ti) is the best action for i to choose from Ai, given that the other players' strategies are (s1*, ..., s*_{i-1}, s*_{i+1}, ..., sn*). Furthermore, if i's type is ti and we allow i to choose an action from a subset of Ai that includes si*(ti), then i's optimal choice remains si*(ti). Thus, if the other players tell the truth, then when i's type is ti the best action for i to choose is si*(ti), and hence the best type to claim to be is ti, again assuming that the other functions in the direct mechanism are chosen so as to confront each player with a choice of exactly this kind. That is, truth-telling is an equilibrium. Hence, it is a Bayesian Nash equilibrium of the static

Bayesian game Γ' = < I, {Ti}, {Ti}, {pi}, {vi}, i ∈ I > for each player i to play the truth-telling strategy τi(ti) = ti for every ti ∈ Ti. No matter what the original game, the new Bayesian game is always a direct mechanism; no matter what the original equilibrium, the new equilibrium in the new game is always truth-telling.

In [4], Harsanyi suggested that player j's mixed strategy represents player i's uncertainty about j's choice of a pure strategy, and that j's choice in turn depends on the realization of a small amount of private information. A mixed-strategy Nash equilibrium in a game of complete information can then be interpreted as a pure-strategy Bayesian Nash equilibrium in a closely related game with a little bit of incomplete information. The crucial feature of a mixed-strategy Nash equilibrium is not that player j chooses a strategy randomly, but rather that player i is uncertain about player j's choice; this uncertainty can arise either because of randomization or because of a little incomplete information, as in the following example.

Example 2.4. Consider a bi-matrix game (like Battle of the Sexes) in which the players, although they have known each other for quite some time, aren't quite sure of each other's payoffs. Suppose that player 1's payoff if both players choose the first strategy is 2 + t1, where t1 is privately known by player 1; that player 2's payoff if both players choose the second strategy is 2 + t2, where t2 is privately known by player 2; and that t1 and t2 are independent draws from a uniform distribution on [0, x]. The payoffs are as follows:

  Situation (s1, s2)   H1       H2
  (1, 1)               2 + t1   1
  (1, 2)               0        0
  (2, 1)               0        0
  (2, 2)               1        2 + t2

In terms of the static Bayesian game in normal form Γ = < {1, 2}, A1, A2, T1, T2, p1, p2, H1, H2 >, the action spaces are A1 = A2 = {1, 2}, the type spaces are T1 = T2 = [0, x], and the beliefs are p1(t2) = p2(t1) = 1/x for all t1 and t2.

We will construct a pure-strategy Bayesian Nash equilibrium of this incomplete-information static game in which player 1 plays strategy 1 if t1 exceeds a critical value, c1, and plays strategy 2 otherwise, and in which player 2 plays strategy 2 if t2 exceeds a critical value, c2, and plays strategy 1 otherwise. We will show that, as the incomplete information disappears (that is, as x approaches zero), the players' behavior in this pure-strategy Bayesian Nash equilibrium approaches their behavior in the mixed-strategy Nash equilibrium in the original game of complete information. The original game has the payoff matrices

  H1 = [ 2  0 ; 0  1 ],   H2 = [ 1  0 ; 0  2 ],

and there are two pure-strategy Nash equilibria, (1, 1) and (2, 2), and a mixed-strategy Nash equilibrium in which player 1 plays strategy 1 with probability 2/3 and player 2 plays strategy 2 with probability 2/3.

Suppose that players 1 and 2 play the strategies just described. For a given value of x, we will determine values of c1 and c2 such that these strategies are a Bayesian Nash equilibrium. In such an equilibrium, player 1 plays strategy 1 with probability (x - c1)/x and player 2 plays strategy 2 with probability (x - c2)/x. Given player 2's strategy, player 1's expected payoffs from playing strategy 1 and from playing strategy 2 are

(c2/x)(2 + t1) + (1 - c2/x) · 0 = (c2/x)(2 + t1)  and  (c2/x) · 0 + (1 - c2/x) · 1 = 1 - c2/x,

respectively. Thus playing strategy 1 is optimal if and only if

t1 ≥ x/c2 - 3 = c1.

Similarly, given player 1's strategy, player 2's expected payoffs from playing strategy 2 and from playing strategy 1 are

(c1/x)(2 + t2) + (1 - c1/x) · 0 = (c1/x)(2 + t2)  and  (1 - c1/x) · 1 + (c1/x) · 0 = 1 - c1/x,

respectively. Thus playing strategy 2 is optimal if and only if

t2 ≥ x/c1 - 3 = c2.

These relationships yield c1 = c2 and c1² + 3c1 - x = 0. Solving the quadratic then shows that the probability that player 1 plays strategy 1, namely (x - c1)/x, and the probability that player 2 plays strategy 2, namely (x - c2)/x, both equal

1 - (-3 + √(9 + 4x)) / (2x),

which approaches 2/3 as x approaches zero. Thus, as the incomplete information disappears, the players' behavior in this pure-strategy Bayesian Nash equilibrium of the incomplete-information game approaches their behavior in the mixed-strategy Nash equilibrium in the original game of complete information.
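The quadratic for the cutoff and the limit 2/3 can be checked numerically; a small sketch:

```python
# Numeric check of the cutoff equilibrium above: c1 = c2 = c solves
# c^2 + 3c - x = 0, and the probability (x - c)/x of playing the
# mixed-equilibrium action tends to 2/3 as x -> 0.
import math

def cutoff(x):
    # positive root of c^2 + 3c - x = 0
    return (-3 + math.sqrt(9 + 4 * x)) / 2

def prob(x):
    # probability that t_i exceeds the cutoff, i.e. (x - c)/x
    return 1 - cutoff(x) / x

c = cutoff(1.0)
assert abs(c * c + 3 * c - 1.0) < 1e-12   # c really solves the quadratic
assert 0 < c < 1.0                        # cutoff lies inside [0, x]

for x in (1.0, 0.1, 0.001):
    print(round(prob(x), 4))              # 0.6972, 0.6703, 0.6667
```

The printed probabilities decrease monotonically toward 2/3 ≈ 0.6667 as x shrinks, matching the limit derived above.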

2.5 Exercises and problems solved

1. (An auction) There are two bidders, i = 1, 2. Bidder i has a valuation vi for the good; that is, if bidder i gets the good and pays the price p, then i's payoff is vi - p. The two bidders' valuations are independently and uniformly distributed on [0, 1]. Bids are constrained to be nonnegative. The bidders simultaneously submit their bids. The higher bidder wins the good and pays the price she bid; the other bidder gets and pays nothing. In the case of a tie, the winner is determined by a flip of a coin. The bidders are risk-neutral. All of this is common knowledge. Formulate this problem as a static Bayesian game, and find a Bayesian Nash equilibrium.

Solution. In terms of a static Bayesian game Γ = < I, A1, A2, T1, T2, p1, p2, H1, H2 >, where I = {1, 2}, player i's action is to submit a (nonnegative) bid, bi, and his type is his valuation, vi; hence the action space is Ai = [0, ∞) and the type space is Ti = [0, 1]. We must identify the beliefs and the payoff functions. Because the valuations are independent, player i believes that vj is uniformly distributed on [0, 1], no matter what the value of vi. Player i's payoff function Hi : A1 × A2 × T1 × T2 → R is given by

Hi(b1, b2, v1, v2) = vi - bi, if bi > bj;  (vi - bi)/2, if bi = bj;  0, if bi < bj.   (42)

We know that in a static Bayesian game a strategy is a function from the type space to the action space: here a strategy is a function bi : Ti → Ai, vi ↦ bi(vi), where bi(vi) specifies the bid that each of i's types (valuations) would choose. In a Bayesian Nash equilibrium, player 1's strategy b1(v1) is a best response to player 2's strategy b2(v2), and vice versa. That is, the pair of strategies (b1(v1), b2(v2)) is a Bayesian Nash equilibrium if for each vi in [0, 1], bi(vi) solves the problem

max_{bi} (vi - bi) P{bi > bj(vj)} + (1/2)(vi - bi) P{bi = bj(vj)}.

We simplify the exposition and calculations by looking for a linear equilibrium, b1(v1) = a1 + c1 v1 and b2(v2) = a2 + c2 v2. For a given value of vi, player i's best response solves the problem

max_{bi} (vi - bi) P{bi > aj + cj vj},

where we have used the fact that P{bi = bj(vj)} = 0: because bj(vj) = aj + cj vj and vj is uniformly distributed, bj is uniformly distributed. Since it is pointless for player i to bid below j's minimum bid or above j's maximum bid, we have aj ≤ bi ≤ aj + cj, so

P{bi > aj + cj vj} = P{vj < (bi - aj)/cj} = (bi - aj)/cj.

Player i's best response is therefore

bi(vi) = (vi + aj)/2, if vi ≥ aj;  bi(vi) = aj, if vi < aj.   (43)

We prove that we must have aj ≤ 0. If 0 < aj < 1, then there are some values of vi such that vi < aj, in which case bi(vi) isn't linear; rather, it is flat at first and positively sloped later. Since we are looking for a linear equilibrium, we therefore rule out 0 < aj < 1, focusing instead on aj ≥ 1 and aj ≤ 0. But the former cannot occur in equilibrium: since it is optimal for a higher type to bid at least as much as a lower type's optimal bid, we have cj ≥ 0, but then aj ≥ 1 would imply that bj(vj) ≥ vj, which cannot be optimal. Thus, if bi(vi) is to be linear, then we must have aj ≤ 0, in which case bi(vi) = (vi + aj)/2, so ai = aj/2 and ci = 1/2. Repeating the same analysis for player j under the assumption that player i adopts the strategy bi(vi) = ai + ci vi yields aj = ai/2 and cj = 1/2. Combining these two sets of results yields ai = aj = 0 and ci = cj = 1/2. That is,

bi(vi) = vi/2:

each player submits a bid equal to half her valuation.

Remark 2.12. Such a bid reflects the fundamental trade-off a bidder faces in an auction: the higher the bid, the more likely the bidder is to win; the lower the bid, the larger the gain if the bidder does win.

Remark 2.13. One might wonder whether there are other Bayesian Nash equilibria of this game, and also how equilibrium bidding changes as the distribution of the bidders' valuations changes. Neither of these questions can be answered using the technique just applied: it is fruitless to try to guess all the functional forms other equilibria of this game might have.

2. In the game of problem 1, are there other Bayesian Nash equilibria? Derive a symmetric Bayesian Nash equilibrium.

Solution. As we have just mentioned in Remark 2.13, it is fruitless to try to guess all the functional forms other equilibria of this game might have. Here we derive a symmetric Bayesian Nash equilibrium, again for the case of uniformly distributed valuations. Note well that we aren't restricting the players' strategy spaces to include only linear strategies; rather, we allow the players to choose arbitrary strategies but ask whether there is an equilibrium that is linear. It turns out that, because the players' valuations are uniformly distributed, a linear equilibrium not only exists but is unique (a linear equilibrium doesn't exist for any other distribution of valuations), and we show that the unique symmetric Bayesian Nash equilibrium is the linear equilibrium bi(vi) = vi/2. The technique we use can easily be extended to a broad class of valuation distributions, as well as to the case of n bidders.

A symmetric Bayesian Nash equilibrium is an equilibrium in which there is a single function b(vi) such that player 1's strategy b1(v1) is b(v1) and player 2's strategy b2(v2) is b(v2), and this single strategy is a best response to itself; because the players' valuations are identically distributed, it is natural to look for such an equilibrium. Suppose player j adopts the strategy b, and assume that b is strictly increasing and differentiable. Then for a given value of vi, player i's optimal bid solves the problem

max_{bi} (vi - bi) P{bi > b(vj)}.

Let b⁻¹(bj) denote the valuation that bidder j must have in order to bid bj; that is, b⁻¹(bj) = vj if bj = b(vj). Since vj is uniformly distributed on [0, 1], we have P{bi > b(vj)} = P{b⁻¹(bi) > vj} = b⁻¹(bi). The first-order condition for player i's optimization problem is therefore

-b⁻¹(bi) + (vi - bi) (d/dbi) b⁻¹(bi) = 0.

This first-order condition is an implicit equation for bidder i's best response to the strategy b played by bidder j, given that bidder i's valuation is vi. If the strategy b is to be a symmetric Bayesian Nash equilibrium, we require that the solution to the first-order condition be b(vi): that is, for each of bidder i's possible valuations, bidder i doesn't wish to deviate from the strategy b, given that bidder j plays this strategy. To impose this requirement, we substitute bi = b(vi) into the first-order condition, yielding

-b⁻¹(b(vi)) + (vi - b(vi)) (d/dbi) b⁻¹(b(vi)) = 0.

We have b⁻¹(b(vi)) = vi and (d/dbi) b⁻¹(b(vi)) = 1/b'(vi); that is, (d/dbi) b⁻¹(bi) measures how much bidder i's valuation must change to produce a unit change in the bid, whereas b'(vi) measures how much the bid changes in response to a unit change in the valuation. Hence

-vi + (vi - b(vi)) · (1/b'(vi)) = 0,

which is more conveniently expressed as

vi b'(vi) + b(vi) = vi.

The left-hand side of this differential equation is (d/dvi)(vi b(vi)). Integrating both sides of the equation therefore yields

vi b(vi) = (1/2) vi² + k,

where k is a constant of integration. To eliminate k, we need a boundary condition. Fortunately, simple economic reasoning provides one: no player should bid more than his valuation. Thus we require b(vi) ≤ vi for every vi; in particular, we require b(0) ≤ 0. Since bids are constrained to be nonnegative, this implies that b(0) = 0, so k = 0 and b(vi) = vi/2, as claimed.

3. (A double auction) Consider a trading game called a double auction. The seller names an asking price, ps, and the buyer simultaneously names an offer price, pb. If pb ≥ ps, then trade occurs at price p = (pb + ps)/2; if pb < ps, then no trade occurs. The buyer's valuation for the seller's good is vb, the seller's is vs. These valuations are private information and are drawn from independent uniform distributions on [0, 1]. If the buyer gets the good for price p, then the

buyer's utility is vb - p; if there isn't trade, the buyer's utility is zero. If the seller sells the good for price p, then the seller's utility is p - vs; if there isn't trade, the seller's utility is zero. Find the Bayesian Nash equilibria.

Solution. In this static Bayesian game, a strategy for the buyer is a function pb(vb) specifying the price the buyer will offer for each of the buyer's possible valuations, and a strategy for the seller is a function ps(vs) specifying the price the seller will demand for each of the seller's valuations. A pair of strategies (pb(vb), ps(vs)) is a Bayesian Nash equilibrium if the following two conditions hold. For each vb in [0, 1], pb(vb) solves the problem

max_{pb} [vb - (pb + M[ps(vs) | pb ≥ ps(vs)])/2] P{pb ≥ ps(vs)},

where M[ps(vs) | pb ≥ ps(vs)] is the expected price the seller will demand, conditional on the demand being less than the buyer's offer of pb. For each vs in [0, 1], ps(vs) solves the problem

max_{ps} [(ps + M[pb(vb) | pb(vb) ≥ ps])/2 - vs] P{pb(vb) ≥ ps},

where M[pb(vb) | pb(vb) ≥ ps] is the expected price the buyer will offer, conditional on the offer being greater than the seller's demand of ps.

There are many Bayesian Nash equilibria of this game. Consider the following one-price equilibrium, in which trade occurs at a single price if it occurs at all. For any value x in [0, 1], let the buyer's strategy be to offer x if vb ≥ x and to offer zero otherwise, and let the seller's strategy be to demand x if vs ≤ x and to demand one otherwise. Given the buyer's strategy, the seller's choices amount to trading at x or not trading, so the seller's strategy is a best response to the buyer's because the seller-types who prefer trading at x to not trading do so, and vice versa. The analogous argument shows that the buyer's strategy is a best response to the seller's, so these strategies are indeed a Bayesian Nash equilibrium. In this equilibrium, trade occurs for the (vs, vb) pairs such that vb ≥ x ≥ vs, which can be indicated in a figure. Trade would be efficient for all (vs, vb) pairs such that vb ≥ vs, but doesn't occur in the two regions for which vb ≥ vs and vb < x, or vb ≥ vs and vs > x.

We now derive a linear Bayesian Nash equilibrium of the double auction. As in the previous problem, we aren't restricting the players' strategy spaces to include only linear strategies; rather, we allow the players to choose arbitrary strategies but ask whether there is an equilibrium that is linear. Many other equilibria exist besides the one-price equilibria and the linear equilibrium, but the linear equilibrium has interesting efficiency properties, which we describe later.

Suppose the seller's strategy is ps(vs) = as + cs vs. Then ps is uniformly distributed on [as, as + cs], so the buyer's problem (the first problem above) becomes

max_{pb} [vb - (1/2)(pb + (as + pb)/2)] (pb - as)/cs,

the first-order condition for which yields pb(vb) = (2/3) vb + (1/3) as.

pb(vb) = (2/3)vb + (1/3)as.

Thus, if the seller plays a linear strategy, then the buyer's best response is also linear. Analogously, suppose the buyer's strategy is pb(vb) = ab + cb vb. Then pb is uniformly distributed on [ab, ab + cb], so the second relationship (the second problem) becomes

max_{ps} [(ps + (ps + ab + cb)/2)/2 - vs] (ab + cb - ps)/cb,

the first-order condition for which yields

ps(vs) = (2/3)vs + (1/3)(ab + cb).

Thus, if the buyer plays a linear strategy, then the seller's best response is also linear. If the players' linear strategies are to be best responses to each other, the relationship for pb implies that cb = 2/3 and ab = as/3, and the relationship for ps implies that cs = 2/3 and as = (ab + cb)/3. Therefore, the linear equilibrium strategies are

ps(vs) = (2/3)vs + 1/4 and pb(vb) = (2/3)vb + 1/12.

Recall that trade occurs in the double auction if and only if pb ≥ ps. The last relationship shows that trade occurs in the linear equilibrium if and only if vb ≥ vs + 1/4. A figure with this situation reveals that seller-types above 3/4 make demands above the buyer's highest offer, pb(1) = 3/4, and buyer-types below 1/4 make offers below the seller's lowest demand, ps(0) = 1/4.

The depictions of which valuation pairs trade in the one-price and linear equilibria show that, in both cases, the most valuable possible trade (namely vs = 0 and vb = 1) does occur. But the one-price equilibrium misses some valuable trades (such as vs = 0 and vb = x - ε, where ε is small) and achieves some trades that are worth next to nothing (such as vs = x and vb = x + ε). The linear equilibrium, in contrast, misses all trades worth next to nothing but achieves all trades worth at least 1/4. This suggests that the linear equilibrium may dominate the one-price equilibria in terms of the expected gains the players receive, but also raises the possibility that the players might do even better in an alternative equilibrium.

In [9] Myerson and Satterthwaite show that, for the uniform valuation distributions considered here, the linear equilibrium yields higher expected gains for the players than any other Bayesian Nash equilibrium of the double auction (including but far from limited to the one-price equilibria).
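The first-order condition can be checked numerically. The sketch below (not part of the original text) fixes the seller's equilibrium strategy ps(vs) = 1/4 + (2/3)vs and searches a grid of offers for the buyer's best response; the grid maximizer should match pb(vb) = (2/3)vb + 1/12.

```python
import numpy as np

AS, CS = 1/4, 2/3          # seller's linear strategy ps(vs) = 1/4 + (2/3) vs

def buyer_payoff(p, vb):
    """Expected payoff of a buyer with valuation vb offering p,
    when ps is uniform on [AS, AS + CS] and the trade price is (p + ps)/2."""
    if p <= AS:
        return 0.0                            # offer below the lowest demand: no trade
    prob = min((p - AS) / CS, 1.0)            # P(ps <= p)
    exp_ps = (AS + min(p, AS + CS)) / 2       # E[ps | ps <= p]
    return (vb - (p + exp_ps) / 2) * prob

for vb in (0.5, 0.75, 1.0):
    grid = np.linspace(0.0, 1.0, 100001)
    best = grid[np.argmax([buyer_payoff(p, vb) for p in grid])]
    print(vb, best, 2 * vb / 3 + 1 / 12)      # grid optimum matches (2/3)vb + 1/12
```

The same check with the roles reversed confirms the seller's strategy.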

Myerson and Satterthwaite also show that there isn't a Bayesian Nash equilibrium of the double auction in which trade occurs if and only if vb ≥ vs, that is, if and only if trade is efficient. They also show that this latter result is very general: if vb is continuously distributed on [xb, yb] and vs is continuously distributed on [xs, ys], where yb > xs and ys > xb, then there isn't a bargaining game the buyer and seller would willingly play that has a Bayesian Nash equilibrium in which trade occurs if and only if it is efficient. The revelation principle can be used to prove this general result.

Remark 2.14. The result can be translated into Hall and Lazear's employment model. If the firm has private information about the worker's marginal product (m) and the worker has private information about his outside opportunity (v), then there isn't a bargaining game that the firm and the worker would willingly play that produces employment if and only if it is efficient, that is, if and only if m ≥ v.
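The efficiency comparison between the two kinds of equilibria can be illustrated by simulation. Assuming the uniform valuations above, the sketch below estimates the expected gains from trade in the linear equilibrium and in a one-price equilibrium with the hypothetical choice x = 1/2.

```python
import numpy as np

rng = np.random.default_rng(0)
vs = rng.random(1_000_000)
vb = rng.random(1_000_000)

# Linear equilibrium: trade iff vb >= vs + 1/4; realized gain is vb - vs.
gains_linear = np.where(vb >= vs + 0.25, vb - vs, 0.0).mean()

# One-price equilibrium at x = 1/2: trade iff vb >= x and vs <= x.
x = 0.5
gains_one_price = np.where((vb >= x) & (vs <= x), vb - vs, 0.0).mean()

print(gains_linear, gains_one_price)
```

The estimates are close to the exact values 9/64 ≈ 0.141 and 1/8 = 0.125, so the linear equilibrium indeed yields higher expected gains than the one-price equilibrium, as claimed.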

2.6 Exercises and problems unsolved

1. Find all the pure-strategy Bayesian Nash equilibria in the following static Bayesian game:
1. Nature determines whether the payoffs are as in Game 1 or as in Game 2, each game being equally likely.
2. Player 1 learns whether nature has drawn Game 1 or Game 2, but player 2 doesn't.
3. Player 1 chooses either T or B; player 2 simultaneously chooses either L or R.
4. Payoffs are given by the game drawn by nature.

              Player 2
            L        R
Player 1  T 1,1      0,0
          B 0,0      0,0
             Game 1

            L        R
          T 0,0      0,0
          B 0,0      2,2
             Game 2

2. Consider a Cournot duopoly operating in a market with inverse demand P(Q) = a - Q, where Q = q1 + q2 is the aggregate quantity on the market. Both firms have total costs Ci(qi) = c qi, but demand is uncertain: it is high, a = aH, with probability θ, and low, a = aL, with probability 1 - θ. Information is asymmetric: firm 1 knows whether demand is high or low, but firm 2 doesn't. All of this is common knowledge. The two firms simultaneously choose quantities. What are the strategy spaces for the two firms? Make assumptions concerning aH, aL, θ, and c such that all equilibrium quantities are positive. What is the Bayesian Nash equilibrium of this game?

3. Consider the following asymmetric-information model of Bertrand duopoly with differentiated products. Demand for firm i is qi(pi, pj) = a - pi - bi pj. Costs are zero for both firms. The sensitivity of firm i's demand to firm j's price is either high or low: bi is either bH or bL, where bH > bL > 0. For each firm, bi = bH with probability θ and bi = bL with probability 1 - θ, independent of the realization of bj. Each firm knows its own bi but not its competitor's. All of this is common knowledge. What are the action spaces, type spaces, beliefs, and utility functions in this game? What are the strategy spaces? What conditions define a symmetric pure-strategy Bayesian Nash equilibrium of this game? Solve for such an equilibrium.

4. Recall from Chapter 1 that Matching pennies has no pure-strategy Nash equilibrium but has one mixed-strategy Nash equilibrium: each player plays H with probability 1/2.

              Player 2
            H        T
Player 1  H 1,-1    -1,1
          T -1,1     1,-1

Provide a pure-strategy Bayesian Nash equilibrium of a corresponding game of incomplete information such that, as the incomplete information disappears, the players' behavior in the Bayesian Nash equilibrium approaches their behavior in the mixed-strategy Nash equilibrium in the original game of complete information.

5. Consider a first-price, sealed-bid auction in which the bidders' valuations are independently and uniformly distributed on [0, 1]. Show that if there are n bidders, then the strategy of bidding (n-1)/n times one's valuation is a symmetric Bayesian Nash equilibrium of this auction.

6. Consider a first-price, sealed-bid auction in which the bidders' valuations are independently and identically distributed according to the strictly positive density f(vi) on [0, 1]. Compute a symmetric Bayesian Nash equilibrium for the two-bidder case.

7. Reinterpret the buyer and seller in the double auction analyzed in problem 3 (A double auction) from Section 2.5 as a firm that knows a worker's marginal product (m) and a worker who knows his outside opportunity (v). In this context, trade means that the worker is employed by the firm, and the price at which the parties trade is the worker's wage w. If there is trade, then the firm's payoff is m - w and the worker's is w; if there isn't trade, then the firm's payoff is zero and the worker's is v. Suppose that m and v are independent draws from a uniform distribution on [0, 1], as in the text. For purposes of comparison, compute the players' expected payoffs in the linear equilibrium of the double auction. Now consider the following two trading games as alternatives to the double auction.

Game I: Before the parties learn their private information, they sign a contract specifying that if the worker is employed by the firm then the worker's wage will be w, but also that either side can escape from the employment relationship at no cost. After the parties learn the values of their respective pieces of private information, they simultaneously announce either that they Accept the wage w or that they Reject that wage. If both announce Accept, then trade occurs; otherwise it doesn't. Given an arbitrary value of w from [0, 1], what is the Bayesian Nash equilibrium of this game? Draw a diagram showing the type-pairs that trade. Find the value of w that maximizes the sum of the players' expected payoffs and compute this maximized sum.

Game II: Before the parties learn their private information, they sign a contract specifying that the following dynamic game will be used to determine whether the worker joins the firm and, if so, at what wage. After the parties learn the values of their respective pieces of private information, the firm chooses a wage w to offer the worker, which the worker then accepts or rejects. Try to analyze this game using backwards induction. Given w and v, what will the worker do? If the firm anticipates what the worker will do, then given m what will the firm do? What is the sum of the players' expected payoffs?
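The claim in problem 5 can be sanity-checked numerically. Under the stated uniform-valuation assumption, if the other n - 1 bidders bid (n-1)/n times their valuations, then a bidder's best response found by grid search should again be (n-1)/n times his own valuation.

```python
import numpy as np

def expected_payoff(b, v, n):
    """Payoff of bidding b with valuation v when the other n-1 bidders
    bid (n-1)/n times their U[0,1] valuations (first-price auction)."""
    win = min(b * n / (n - 1), 1.0) ** (n - 1)   # P(all rival bids < b)
    return (v - b) * win

for n in (2, 3, 5):
    for v in (0.4, 0.8):
        grid = np.linspace(0.0, v, 50001)
        best = grid[np.argmax([expected_payoff(b, v, n) for b in grid])]
        print(n, v, best, (n - 1) * v / n)       # best response matches (n-1)v/n
```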

2.7

References

1. Dani, E., Numerical methods in game theory, Ed. Dacia, Cluj-Napoca, 1983
2. Dani, E., Mureşan, A.S., Applied mathematics in economy, Lito. Univ. Babeş-Bolyai, Cluj-Napoca, 1981
3. Gibbons, R., Game theory for applied economists, Princeton University Press, New Jersey, 1992
4. Harsanyi, J., Games with randomly distributed payoffs: A new rationale for mixed strategy equilibrium points, International Journal of Game Theory, 2, 1973, 1-23
5. Mureşan, A.S., Operational research, Lito. Univ. Babeş-Bolyai, Cluj-Napoca, 1996
6. Mureşan, A.S., Applied mathematics in finance, banks and exchanges, Ed. Risoprint, Cluj-Napoca, 2000
7. Mureşan, A.S., Applied mathematics in finance, banks and exchanges, Vol. I, Ed. Risoprint, Cluj-Napoca, 2001
8. Mureşan, A.S., Applied mathematics in finance, banks and exchanges, Vol. II, Ed. Risoprint, Cluj-Napoca, 2002
9. Myerson, R., Satterthwaite, M., Efficient mechanisms for bilateral trading, Journal of Economic Theory, 28, 1983, 265-281
10. Owen, G., Game theory (2nd edn.), Academic Press, New York, 1982
11. Wang, J., The theory of games, Clarendon Press, Oxford, 1988


Part II

THE ABSTRACT THEORY OF GAMES


3

Generalized games and abstract economies

Fixed point theorems are the basic mathematical tools for showing the existence of solutions in game theory and economics. While I have tried to integrate the mathematics and applications, this chapter isn't a comprehensive introduction to either general equilibrium theory or game theory. Only finite-dimensional spaces are used here: while many of the results presented are true in arbitrary locally convex spaces, no attempt has been made to cover the infinite-dimensional results. The main bibliographical source for this chapter is Border's book [10], which I have used in my lectures with the students in Computer Science from the Faculty of Mathematics and Informatics. We also use recent results obtained by Aliprantis, Tourky, Yannelis, Maugeri, Ray, D'Agata, Oetli, Schlager, Agarwal, O'Regan, Rim, Kim, Husain, Tarafdar, Llinares, Muresan, and so on.

3.1

Introduction

The fundamental idealization made in modelling an economy is the notion of a commodity. We suppose that it is possible to classify all the different goods and services in the world into a finite number, m, of commodities, which are available in infinitely divisible units. The commodity space is then R^m. A vector in R^m specifies a list of quantities of each commodity. It is commodity vectors that are exchanged, manufactured and consumed in the course of economic activity, not individual commodities, although a typical exchange involves a zero quantity of most commodities. A price vector lists the value of a unit of each commodity and so belongs to R^m. Thus the value of commodity vector x at price p is Σ_{i=1}^m pi xi = p·x.

The principal participants in an economy are the consumers. We will assume that there is a given finite number of consumers. Not every commodity vector is admissible as a final consumption for a consumer. The set Xi ⊂ R^m of all admissible consumption vectors for consumer i is his consumption set. There are a variety of restrictions that might be embodied in the consumption set. One possible restriction that might be placed on admissible consumption vectors is that they be nonnegative; under another interpretation, negative quantities of a commodity in a final consumption vector mean that the consumer is supplying the commodity as a service.

In a private ownership economy consumers are also partially characterized by their initial endowment of commodities. This is represented as a point wi in the commodity space. In a market economy a consumer must purchase his consumption vector at the market prices. The set of admissible commodity vectors that he can afford at prices p given an income Mi is called his budget set and is just {x ∈ Xi | p·x ≤ Mi}. The budget set might well be empty. The problem faced by a consumer in a market economy is to choose a consumption vector, or a set of them, from the budget set.
To do this, the consumer must have some criterion for choosing. One way to formalize the criterion is to assume that

the consumer has a utility index, that is, a real-valued function ui : Xi → R, x → ui(x). The idea is that a consumer would prefer to consume vector x rather than vector y if ui(x) > ui(y) and would be indifferent if ui(x) = ui(y). The solution to the consumer's problem is then to find all vectors x which maximize ui on the budget set. The set of solutions to a consumer's problem for given prices is his demand set.

Suppliers are motivated by profits. Each supplier j has a production set Yj of technologically feasible supply vectors. A supply vector specifies the quantities of each commodity supplied and the amount of each commodity used as an input; inputs are denoted by negative quantities and outputs by positive ones. The profit or net income associated with supply vector y at price p is just Σ_{i=1}^m pi yi = p·y. The supplier's problem is then to choose a y from the set of technologically feasible supply vectors which maximizes the associated profit. The set of profit-maximizing production vectors is the supply set.

Consider, for example, the problem of finding an equilibrium price vector for a market economy. This can be converted into a game where the strategy sets of consumers are their consumption sets and those of suppliers are their production sets. A variation on the notion of a noncooperative game is that of an abstract economy. In an abstract economy, the set of strategies available to a player depends on the strategy choices of the other players.
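As a small illustration of the consumer's problem (the utility index, prices, and income below are hypothetical, with a Cobb-Douglas index assumed), one can search the budget set {x ∈ Xi | p·x ≤ Mi} on a grid for a maximizer of u.

```python
import itertools, math

# Toy consumer problem: two commodities, assumed index u(x) = sqrt(x1 * x2).
p = (2.0, 1.0)        # price vector
M = 10.0              # income
u = lambda x: math.sqrt(x[0] * x[1])

# The budget set {x >= 0 : p.x <= M}, discretized with step 0.1.
grid = [i / 10 for i in range(0, 101)]
budget = [(x1, x2) for x1, x2 in itertools.product(grid, grid)
          if p[0] * x1 + p[1] * x2 <= M]

# The demand set (here a single grid point) maximizes u on the budget set.
demand = max(budget, key=u)
print(demand)   # (2.5, 5.0): half of income is spent on each good
```

The maximizer (2.5, 5.0) spends half of income on each good, as Cobb-Douglas demand predicts.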

3.2 Equilibrium of excess demand correspondences

There is a fundamental theorem for proving the existence of a market equilibrium of an abstract economy [10]. Let Δ = {p | p ∈ R^m, p ≥ 0, Σ_{i=1}^m pi = 1} be the closed price simplex and let ζ be the excess demand multivalued mapping. The price p is an equilibrium price if 0 ∈ ζ(p); the price p is a free disposal equilibrium price if there is a z ∈ ζ(p) such that z ≤ 0.

Theorem 3.1. (Gale-Debreu-Nikaido Lemma). Let ζ : Δ → 2^{R^m} be an upper hemi-continuous multivalued mapping with nonempty compact convex values such that for all p ∈ Δ,

p·z ≤ 0 for each z ∈ ζ(p).

Then the set {p ∈ Δ | ζ(p) ∩ (-R^m_+) ≠ ∅} of free disposal equilibrium prices is nonempty and compact.

Proof. Put N = -R^m_+. For each p ∈ Δ set

U(p) = {q | q·z > 0 for all z ∈ ζ(p)}.

Then U(p) is convex for each p, and p ∉ U(p). If q ∈ U^{-1}(p), that is, p·z > 0 for all z ∈ ζ(q), then, since ζ is upper hemi-continuous, ζ^+[{x | p·x > 0}] is a neighborhood of q in U^{-1}(p); hence U^{-1}(p) is open for each p. Now p is U-maximal if and only if

for each q ∈ Δ, there is a z ∈ ζ(p) with q·z ≤ 0.

It is known that if C ⊂ R^m is a closed convex cone and K ⊂ R^m is compact and convex, then K ∩ C ≠ ∅ if and only if for each p in the polar cone of C there is a z ∈ K with p·z ≤ 0. Taking K = ζ(p) and C = N, whose polar cone is generated by Δ, we get: p is U-maximal if and only if ζ(p) ∩ N ≠ ∅. Thus, by a Sonnenschein theorem, the set {p | ζ(p) ∩ N ≠ ∅} is nonempty and compact.

Theorem 3.2. (Neuefeind Lemma). Let S = {p | p ∈ R^m, p > 0, Σ_{i=1}^m pi = 1}. Let ζ : S → 2^{R^m} be upper hemi-continuous with nonempty closed convex values and satisfy the strong form of Walras' law,

p·z = 0 for all z ∈ ζ(p),

and the boundary condition: there is a p' ∈ S and a neighborhood V of Δ\S in Δ such that for all p ∈ V ∩ S, p'·z > 0 for all z ∈ ζ(p). Then the set {p | p ∈ S, 0 ∈ ζ(p)} of equilibrium prices for ζ is compact and nonempty.

Proof. Extend the relation U of the preceding proof to Δ by setting U(p) = {q ∈ S | q·z > 0 for all z ∈ ζ(p)} for p ∈ S and U(p) = S for p ∈ Δ\S.

First show that the U-maximal elements are precisely the equilibrium prices. Suppose that p is U-maximal, that is,

for each q ∈ S, there is a z ∈ ζ(p) with q·z ≤ 0. (*)

Now (*) implies 0 ∈ ζ(p). Suppose by way of contradiction that 0 ∉ ζ(p). Then, since {0} is compact and convex and ζ(p) is closed and convex, by the Separating hyperplane theorem there is a p̄ ∈ R^m satisfying p̄·z > 0 for all z ∈ ζ(p). Put p̄_λ = λp̄ + (1-λ)p. Then, for z ∈ ζ(p),

p̄_λ·z = λp̄·z + (1-λ)p·z = λp̄·z > 0 for λ > 0

(recall that p·z = 0 for z ∈ ζ(p) by Walras' law). For λ > 0 small enough, p̄_λ > 0, so that the normalized price vector q = (Σ_i (p̄_λ)_i)^{-1} p̄_λ ∈ S and q·z > 0 for all z ∈ ζ(p), which violates (*), a contradiction. Conversely, if p is an equilibrium price, then 0 ∈ ζ(p), and since q·0 = 0 for all q, condition (*) holds, that is, p is U-maximal. Since U(p) = S ≠ ∅ for all p ∈ Δ\S, the U-maximal elements lie in S.

Next verify that U satisfies the hypotheses of Sonnenschein's theorem.
(ia) p ∉ U(p): for p ∈ S this follows from Walras' law; for p ∈ Δ\S, p ∉ S = U(p).
(ib) U(p) is convex: for p ∈ S, let q1, q2 ∈ U(p), that is, q1·z > 0 and q2·z > 0 for z ∈ ζ(p); then [λq1 + (1-λ)q2]·z > 0 as well. For p ∈ Δ\S, U(p) = S, which is convex.
(ii) U^{-1}(p) is open: if q ∈ U^{-1}(p),
then there is a p0 with q 2 intU 1 (p0 ): There are two cases: (a) q 2 S and (b) q 2 n S: T (iia) q 2 S U 1 (p): Then p:z > 0 for all z 2 (q): Let H = fxjp:x > 0g. Let : S ( Rm be upper hemi-continuous with nonempty compact convex values and satisfy the strong form of Walras’ law p:z = 0 for all z 2 (p) and the boundary condition for every sequence q n ! q 2 n S and z n 2 (q n ). which violates (*). + [H] is a neighborhood of q contained in U 1 (p). if p is an equilibrium price. there is a z 2 (p) with q:z 0: (*) Now (*) implies 0 2 (p). dist(x. s (ia) p 2 U (p): For p 2 S this follows from Walras’law. by Separating hyperplane theorem.: Since U (p) = S for all p 2 nS. by sequential characterization of hemi-continuity. there is a p 2 S (which may depend on fz n g) such that p:z n > 0 for in…nitely many n. that is. q 2 :z > 0 for z 2 (p): Then [ q 1 + (1 )q 2 ]:z > 0 as well. Let S = fpj p 2 Rm .

clT. then clT is also almost semicontinuous. z) for all z 2 X is a . However = the following simple multivalued mapping shows a . if T is almost semicontinuous. there exists an open neighborhood U of x in X such that T (y) V (respectively. Let B be a nonempty subset of A.majorised multivalued mapping which doesn’ have an open graph: t The multivalued mapping : X = (0.majorised multivalued mapping. 1) ! 2X is de…ned by (x) = (0. 3) . Let X = [0. De…nition 3. If A is a nonempty subset of a topological vector space X and S. t Example 3.1.1 Existence of equilibrium for abstract economies Preliminaries Let A a subset of a topological space X. It is clear that every multivalued mapping having an open graph with x 2 cl co (x) for each x 2 X is a .3 3. And it should be noted that we don’ t need the closedness assumption of T (x) for each x 2 X in the de…nitions.. there exists a . almost upper semicontinuous) if for each x 2 X and each open set V in Y with T (x) V . (z) x (z).majorant of at x.majorant t of at any x 2 X: We now state the following de…nition. T : A ! 2X are multivalued T mappings. T S : A ! 2X are multivalued mappings de…ned by T T (coT )(x) = coT (x).majorant of at x if there exists an open x : X ! 2 neighborhood Nx of x in X such that (a) for each z 2 Nx . Remark 3. (clT )(x) = clT (x) and (T S)(x) = T (x) S(x) for each x 2 A. If A is a subset of a vector space. Let X and Y be two topological spaces.1. From the de…nition. A multivalued mapping X is said to be a . The following example shows us an almost upper semicontinuous multivalued mapping which isn’ upper semicontinuous.T lies in n Cn = Rm : This fact together with the strong form of Walras’ law + imply that z = 0: 3. z 2 cl co x (z) and = (c) x jNx has open graph in Nx X: The multivalued mapping is said to be -majorised if for each x 2 X with (x) 6= . An upper semicontinuous multivalued mapping is clearly almost upper semicontinuous. 
Denote the restriction of T on B by T jB : Let X be a nonempty subset of a topological vector space and x 2 X: Let : X ! 2X be a given multivalued mapping. we shall denote by coA the convex hull of A. Then a multivalued mapping T : X ! 2Y is said to be upper semicontinuous (respectively. respectively. x2 ] for each x 2 X: Then hasn’ open graph but x (z) = (0.3. 3] if x 6= 2. T (y) clV ) for each y 2 U . 90 . We shall denote by 2A the family of all subsets of A and by clA the closure of A in X. then coT. (b) for each z 2 Nx . and (x) = [1.1. 1) and : X ! 2X be de…ned by (2) = (1.

The mapping T of Example 3.1 isn't upper semicontinuous at 2, since for the open neighborhood (1, 3) of T(2) there doesn't exist any desired neighborhood U of 2 such that T(y) ⊂ (1, 3) for all y ∈ U: indeed T(y) = [1, 3] for all y ≠ 2 in any neighborhood of 2. However, T(y) ⊂ cl(1, 3) = [1, 3] for all y, so T is almost upper semicontinuous.

Let I be a finite set of agents. For each i ∈ I, let Xi be a nonempty set of actions. We shall use the following notation:

X = Π_{j∈I} Xj, X^i = Π_{j∈I, j≠i} Xj,

and let π_i : X → Xi and π^i : X → X^i be the projections of X onto Xi and X^i, respectively. For any x ∈ X, we simply denote π^i(x) ∈ X^i by x^i and write x = (x^i, xi).

Definition 3.2. By following Debreu, an abstract economy (or generalized game) Γ = (Xi, Ai, fi)_{i∈I} is defined as a family of ordered triples (Xi, Ai, fi), where Xi is a nonempty topological vector space (a choice set), Ai : Π_{j∈I} Xj = X → 2^{Xi} is a constraint multivalued mapping, and fi : Π_{j∈I} Xj → R is a utility function (payoff function).

Definition 3.3. An abstract economy Γ = (Xi, Ai, Bi, Pi)_{i∈I} is defined as a family of ordered quadruples (Xi, Ai, Bi, Pi), where Xi is a nonempty topological vector space (a choice set), Ai, Bi : Π_{j∈I} Xj → 2^{Xi} are constraint multivalued mappings, and Pi : Π_{j∈I} Xj → 2^{Xi} is a preference multivalued mapping. An equilibrium for Γ (Shafer-Sonnenschein type) is a point x̂ ∈ X = Π_{i∈I} Xi such that for each i ∈ I,
(1) x̂i ∈ clBi(x̂), and
(2) Pi(x̂) ∩ Ai(x̂) = ∅.
When Ai = Bi for each i ∈ I, these definitions coincide with the standard definitions of Shafer-Sonnenschein.

In [28] Greenberg introduced a further generalized concept of equilibrium as follows. Under the same settings as above, let α = {α_i}_{i∈I} be a family of functions α_i : X → R_+ for each i ∈ I.

Definition 3.4. An α-quasi-equilibrium for Γ is a point x̂ ∈ X such that for all i ∈ I, x̂i ∈ clAi(x̂), and Pi(x̂) ∩ Ai(x̂) = ∅ and/or α_i(x̂) = 0.

Remark 3.2. Quasi-equilibrium can be of special interest for economies with a tax authority, where the result of Shafer-Sonnenschein cannot be applied.
Now we given the following general de…nitions of equilibrium theory in mathematical economics. an abstract economy = (Xi . let Xi be a nonempty set of actions. (1) xi 2 clAi (^). Ai .2. De…nition 3. and/or i (^) = 0: x x x Remark 3.3) of (2) there doesn’ exists any desired neighborhood U of 2 such that t T (y) (1. An equilibrium for (Nash type) is a point x 2 X ^ such that for each i 2 I. When Ai = Bi for each i 2 I. Pi ) Q where Xi is a nonempty topological vector space (a choice set). Bi . Ai .

Let T : X ! 2D be an almost upper semicontinuous multivalued mapping such that for each x 2 X.fi (^) = fi (^i . For any x 2 X.3. Proof. zi ) < fi (x)g f or each x 2 X: 3. we can …nd an open convex neighborhood N of 0 such that cl co T (x) + N cl(cl co T (x) + N ) = cl co T (x) + clN U: Clearly V = cl co T (x) + N is an open convex set containing cl co T (x) and V U: Since T is almost upper semicontinuous. let U be an open neighborhood of T (x) in D. T (x) is closed. :::)jz 2 clAi (^)g: x Remark 3. for such open neighborhood V of T (x). Then T is upper semicontinuous. Since T (x) is closed in D. Since V is 92 . It should be noted that if Ai (x) = Xi for all x 2 X.4.5. Remark 3. For any x 2 X. z. xi x x ^ x ^ ^ 1 . xi ) = inf ffi (^1 . coT and cl co T aren’ necessarily upper semicontinuous in general even if t X = Y is compact convex in a locally convex Hausdor¤ topological vector space. Let X be a convex subset of a locally convex Hausdor¤ topological vector space E and D be a nonempty compact subset of X. Proof. xi+1 . we can …nd an open neighborhood W of x such that T (y) clV U for all y 2 W: Therefore T is upper semicontinuous at x. then the concept of an equilibrium for coincides with the well-known Nash equilibrium. Let T : X ! 2D be an almost upper semicontinuous multivalued mapping such that for each x 2 X. :::. Since cl co T (x) is closed in D. there exists an open neighborhood W of x in X such that T (y) clV for all y 2 W . For any upper semicontinuous multivalued mapping T : X ! 2Y . The two types of equilibrium points coincide when the preference multivalued mapping Pi can be de…ned by Pi (x) = fzi 2 Xi jfi (xi . However the almost upper semicontinuty can be preserved as follows: Lemma 3.2 A generalization of Himmelberg’ …xed point theorem s We begin with the following lemma. there exists an open neighborhood V of T (x) such that T (x) V clV U: Since T is almost upper semicontinuous at x. 
3.3.2 A generalization of Himmelberg's fixed point theorem

We begin with the following lemma.

Lemma 3.1. Let X be a nonempty subset of a topological space and D be a nonempty compact subset of X. Let T : X → 2^D be an almost upper semicontinuous multivalued mapping such that for each x ∈ X, T(x) is closed. Then T is upper semicontinuous.

Proof. For any x ∈ X, let U be an open neighborhood of T(x) in D. Since T(x) is closed in D, there exists an open neighborhood V of T(x) such that T(x) ⊂ V ⊂ clV ⊂ U. Since T is almost upper semicontinuous at x, for this open neighborhood V of T(x) we can find an open neighborhood W of x such that T(y) ⊂ clV ⊂ U for all y ∈ W. Therefore T is upper semicontinuous at x.

Remark 3.5. For an upper semicontinuous multivalued mapping T : X → 2^Y, the mappings coT and cl co T aren't necessarily upper semicontinuous in general, even if X = Y is compact convex in a locally convex Hausdorff topological vector space; moreover, we don't know whether coT is almost upper semicontinuous even when T is upper semicontinuous. However, almost upper semicontinuity is preserved by cl co, as follows.

Lemma 3.2. Let X be a convex subset of a locally convex Hausdorff topological vector space E and D be a nonempty compact subset of X. Let T : X → 2^D be an almost upper semicontinuous multivalued mapping such that for each x ∈ X, coT(x) ⊂ D. Then cl co T is almost upper semicontinuous.

Proof. For any x ∈ X, let U be an open set containing cl co T(x). Since cl co T(x) is closed in D, we can find an open convex neighborhood N of 0 such that

cl co T(x) + N ⊂ cl(cl co T(x) + N) = cl co T(x) + clN ⊂ U.

Clearly V = cl co T(x) + N is an open convex set containing cl co T(x), and V ⊂ U. Since T is almost upper semicontinuous, for this open neighborhood V of T(x) there exists an open neighborhood W of x in X such that T(y) ⊂ clV for all y ∈ W.

Since V is convex, cl co T(y) ⊂ clV ⊂ clU for all y ∈ W. Therefore cl co T is almost upper semicontinuous.

We now prove the following generalization of Himmelberg's fixed point theorem.

Theorem 3.4. Let X be a convex subset of a locally convex Hausdorff topological vector space E and D be a nonempty compact subset of X. Let S, T : X →
2D be almost upper semicontinuous multivalued mappings such that (1) for each x 2 X.1. which completes x the proof. Let S. . In the Lemma 3. Clearly the pair (S. Remark 3. we have cl co S(x) T (x). T is almost upper semicontinuous. Let T : X ! 2D be an upper semicontinuous multivalued mapping such that for each x 2 X. T ) satis…es all conditions of Theorem 3. We de…ne a multivalued mapping T : X ! 2D by T (x) = cl co S(x) for all x 2 X.1.6. Let S : X ! 2D be an almost upper semicontinuous multivalued mapping such that for each x 2 X. Then there exists a point x 2 D such that x 2 cl co S(^): ^ ^ x Proof.

y) 2 = graph of i and Ci (x. so that the graph of i is open in X Xi : And it is clear that Pi (z) i (z) for each z 2 X: Next. we may assume that Ai = Bi for each i 2 I in a abstract economy. y) 2 graph of i : For each i 2 I. which generalizes the powerful result of Shafer-Sonnenschein. there exists a multivalued mapping x : X ! 2Xi and an open neighborhood Ux of x in X such that Pi (z) = x (z) and zi 2 cl co x (z) for each z 2 Ux . ng. Ai (x) is nonempty convex. (4) the multivalued mapping Pi is -majorised. For each z 2 X.5.majorised. Ci (x. since X Xi is compact and metrisable. since X = j2J Uxj . there exists a continuous function Ci : X Xi ! [0. For each j 2 J. T T we can …nd Then an open neighborhood U of z in X such that U Uxi1 ::: Uxik . if z 2 Uxj if z 2 Uxj : = (44) and next we de…ne i : X ! 2Xi by \ i (z) = j (z) j2J for each z 2 X. Xi . Since T T ::: xi1 (z) xik (z) is an open subset of Xi containing x. and x jUx has an open graph in Ux Xi . For each (z. for each x 2 X.First. Then T an equilibrium choice x 2 X. using -majorised multivalued mappings we shall prove an equilibrium existence of a compact abstract economy. we now de…ne j : X ! 2Xi by j (z) = xj (z). thus zi 2 cl co i (z). (3) the multivalued mapping clAi : X ! 2Xi is continuous. Pi )i2I be an abstract economy where I is a countable set such that for each i 2 I. x) 2 graph of i . For simplicity. Let i 2 I be …xed. y) = 94 z2clAi (x) max Ci (x. by a result of Dugundji. where J = f1. there exists k 2 J such that z 2 Uxk so that zi 2 = cl co xk (z) = cl co k (z). there exists an open T T neighborhood V of x in Xi such that x 2 V ::: xi1 (z) xik (z): Therefore we have an open neighborhood U V of (z. (1) Xi is a nonempty compact convex subset of a metrisable locally convex Hausdor¤ topological vector space. z)g: . x) such that U V graph of i . the family {Ux jx 2 X} of an open cover of X contains a …nite subcover {Uxj jj 2 J}. Q (2) for each x 2 X = i2I Xi . :::. T T there exists fi1 . that is. 
Theorem 3. We now show that S graph = the of i is open in X Xi . we de…ne a multivalued mapping Fi : X ! 2Xi by Fi (x) = fyjy 2 clAi (x). Let = (Xi . y) = 0 for all (x. so is perfectly normal. xi 2 clAi (^) has ^ ^ x and Ai (^) Pi (^) = . for each i 2 I. Ai .: x x Proof. By compactness of X. y) 6= 0 for all (x. Since Pi is . 2. Since the graph of i is open in X Xi . :::ik g J such that z 2 Uxi1 ::: Uxik . 1] such that Ci (x.

Then, by a result of Aubin and Ekeland, Fi is upper semicontinuous, and for each x ∈ X, Fi(x) is a nonempty closed subset of Xi. The multivalued mapping G : X → 2^X defined by G(x) = Π_{i∈I} Fi(x) is also upper semicontinuous, by a result of Fan, and G(x) is a nonempty compact subset of X for each x ∈ X. Therefore, by Corollary 3.1, there exists a point x̂ ∈ X such that x̂ ∈ cl co G(x̂) ⊂ Π_{i∈I} cl co Fi(x̂). Since Fi(x̂) ⊂ clAi(x̂) and Ai(x̂) is convex, x̂i ∈ cl co Fi(x̂) ⊂ clAi(x̂); therefore x̂i ∈ clAi(x̂) for each i ∈ I. It remains to show that Ai(x̂) ∩ Pi(x̂) = ∅. If zi ∈ Ai(x̂) ∩ Pi(x̂) ≠ ∅, then, since Pi(x̂) ⊂ Φ_i(x̂), Ci(x̂, zi) > 0, so that Ci(x̂, y) > 0 for all y ∈ Fi(x̂). This implies that Fi(x̂) ⊂ Φ_i(x̂), which implies x̂i ∈ cl co Fi(x̂) ⊂ cl co Φ_i(x̂); this is a contradiction, since x̂i ∉ cl co Φ_i(x̂). So the theorem is proved.

Remark 3.6. Theorem 3.5 generalizes a Shafer-Sonnenschein theorem in two aspects: (i) Pi need not have an open graph, and (ii) the index set I may not be finite. In a finite-dimensional space, coA is compact and convex for a compact set A. Therefore, when Xi is a subset of R^n, we can relax assumption (b) of the definition of a Φ-majorant as follows, without affecting the conclusion of Theorem 3.5: (b') for each z ∈ N_x, z ∉ co φ_x(z).

Using the concept of α-quasi-equilibrium described in the preliminaries, we further generalize Theorem 3.5 as follows.

Theorem 3.6. Let Γ = (Xi, Ai, Pi)_{i∈I} be an abstract economy, where I is a countable set, such that for each i ∈ I:
(1) Xi is a nonempty compact convex subset of a metrisable locally convex Hausdorff topological vector space;
(2) α_i : X = Π_{i∈I} Xi → R_+ is a nonnegative real-valued lower semicontinuous function;
(3) for each x ∈ X, Ai(x) is nonempty convex;
(4) the multivalued mapping clAi : X → 2^{Xi} is continuous for all x with α_i(x) > 0 and is almost upper semicontinuous for all x with α_i(x) = 0;
(5) the multivalued mapping Pi is Φ-majorised.
Then Γ has an α-quasi-equilibrium choice x̂ ∈ X, that is, for each i ∈ I, x̂i ∈ clAi(x̂), and Ai(x̂) ∩ Pi(x̂) = ∅ and/or α_i(x̂) = 0.
Fi is upper semicontinuous and for each x 2 X. Ai . z) i (x)g for each x 2 X. we can relax the assumption (b) of the de…nition of -majorant as follows without a¤ecting the conclusion of Theorem 3. Since fxjx 2 X. It remains to show x x ^ T x that Ai (^) Pi (^) = . Ci (x.5.5 as follows: Theorem 3. Theorem 3. for a compact set A. co A is compact and convex. Q (2) i : X = i2I Xi ! R+ is a nonnegative real-valued lower semicontinuous function. ^ ^ x Q x 2 cl co G(^) ^ x cl co Fi (^): Since Fi (^) clAi (^) and Ai (^) is convex.. In fact.5: (b’ for each z 2 Nx . that is. so that there exists an open neighborhood W of x such that Fi (y) V for each y 2 W .5 again. i (x) > 0g is open. Since cl Ai is upper semicontinuous. z 2 co x (z): ) = And in this case. then by a result of Aubin and Ekeland. Ai (x) is nonempty convex. Fi is also upper semicontinuous.1. 95 .7. Fi (x) = Fi (x) is also upper semicontinuous at x. and/or i (^) = 0: x x x Proof. (i) Pi need not have open graph and (ii) an index set I may not be …nite. In the proof of Theorem 3.6.

Then F_i is upper semicontinuous and, for each x ∈ X, F_i(x) is nonempty and closed. Indeed, if θ_i(x) > 0, then by a result of Aubin and Ekeland F_i is upper semicontinuous at x, and for each x with θ_i(x) = 0, F_i(x) = cl A_i(x), so that F_i is also upper semicontinuous at x. Then G = Π_{i∈I} F_i : X → 2^X is also upper semicontinuous by a result of Fan, and G(x) is a nonempty compact subset of X for each x ∈ X. Therefore, by the same proof as in Theorem 3.5, there exists a point x̂ ∈ X such that x̂_i ∈ cl A_i(x̂) for each i ∈ I. In case θ_i(x̂) > 0, if z_i ∈ A_i(x̂) ∩ P_i(x̂) ≠ ∅, then C_i(x̂, z_i) > 0, so that C_i(x̂, z_i') > 0 for all z_i' ∈ F_i(x̂). This implies that F_i(x̂) ⊆ φ_i(x̂), which implies x̂_i ∈ cl co F_i(x̂) ⊆ cl co φ_i(x̂); this is a contradiction. Therefore we have A_i(x̂) ∩ P_i(x̂) = ∅ whenever θ_i(x̂) > 0. Finally, if θ_i(x̂) = 0, then the conclusion (b) holds.

In most results on the existence of equilibria for abstract economies the underlying spaces (commodity spaces or choice sets) are compact and convex. However, in recent papers the underlying spaces aren't always compact, and it should be noted that we will encounter many kinds of multivalued mappings in various economic situations; so it is important that we consider several types of multivalued mappings and obtain some existence results in non-compact settings. Now we prove a quasi-equilibrium existence theorem of Nash type for a non-compact abstract economy.

Theorem 3.7. Let I be any (possibly uncountable) index set and, for each i ∈ I, let X_i be a convex subset of a locally convex Hausdorff topological vector space E_i and D_i a nonempty compact subset of X_i. For each i ∈ I, let f_i : X = Π_{i∈I} X_i → R be a continuous function, θ_i : X → R_+ a nonnegative real-valued lower semicontinuous function, and S_i : X → 2^{D_i} a multivalued mapping which is continuous at all x ∈ X with θ_i(x) > 0 and almost upper semicontinuous at all x ∈ X with θ_i(x) = 0, such that
(1) S_i(x) is a nonempty closed convex subset of D_i for each x ∈ X;
(2) x_i ↦ f_i(x^i, x_i) is quasi-convex on S_i(x), where x^i denotes the profile of components of x other than the i-th.
Then there exists an equilibrium point x̂ ∈ D = Π_{i∈I} D_i such that, for each i ∈ I,
(a) x̂_i ∈ S_i(x̂);
(b) f_i(x̂^i, x̂_i) ≤ inf_{z ∈ S_i(x̂)} f_i(x̂^i, z) + θ_i(x̂) and/or θ_i(x̂) = 0.

Proof. For each i ∈ I, we now define a multivalued mapping V_i : X → 2^{X_i} by V_i(x) = {y ∈ S_i(x) : f_i(x^i, y) − θ_i(x) ≤ inf_{z ∈ S_i(x)} f_i(x^i, z)} for each x ∈ X with θ_i(x) > 0, and V_i(x) = S_i(x) for each x ∈ X with θ_i(x) = 0. Since {x ∈ X : θ_i(x) > 0} is open, for each x with θ_i(x) > 0, V_i is upper semicontinuous at x, by a result of Aubin and Ekeland and the same argument as in the proof of Theorem 3.5, and V_i(x) is nonempty, compact and convex (by (1) and (2)). For each x with θ_i(x) = 0, V_i(x) = S_i(x), so that V_i is also upper semicontinuous at x. Now we define V : X → 2^D by V(x) = Π_{i∈I} V_i(x) for each x ∈ X. Then, by a result of Fan, V is also upper semicontinuous, and V(x) is a nonempty compact convex subset of D for each x ∈ X. Therefore, by Corollary 3.2, there exists a point x̂ ∈ D such that x̂ ∈ V(x̂). That is, for each i ∈ I, we have (a) x̂_i ∈ V_i(x̂) ⊆ S_i(x̂), and (b) f_i(x̂^i, x̂_i) ≤ inf_{z ∈ S_i(x̂)} f_i(x̂^i, z) + θ_i(x̂) and/or θ_i(x̂) = 0.

3.4 Nash equilibrium of games and abstract economies

Each strategy vector determines an outcome (which may be a lottery in some models). Players have preferences over outcomes, and these induce preferences over strategy vectors. For convenience we will work with preferences over strategy vectors. There are two ways we might do this. The first is to describe player i's preferences by a binary relation, also denoted U_i, defined on X; then U_i(x) is the set of all strategy vectors preferred to x. In modelling most situations, however, we will find it more useful to describe player i's preferences in terms of the good reply set. Given a strategy vector x ∈ X and a strategy y_i ∈ X_i, let x|y_i denote the strategy vector obtained from x when player i chooses y_i and the other players keep their choices fixed. Let us say that y_i is a good reply for player i to strategy vector x if x|y_i ∈ U_i(x). This defines a multivalued mapping U_i : X ⇒ X_i, called the good reply multivalued mapping, by U_i(x) = {y_i ∈ X_i : x|y_i ∈ U_i(x)}. It will be convenient to describe preferences in terms of the good reply multivalued mapping U_i rather than the preference relation. Note however that we lose some information by doing this: given a good reply multivalued mapping U_i it will not generally be possible to reconstruct the preference relation, unless we know that it is transitive, and we will not make this assumption. Thus a game in strategic form is specified by a tuple (I, (X_i), (U_i)), where each U_i : Π_{j∈I} X_j ⇒ X_i.

A shortcoming of this model of a game is that frequently there are situations in which the choices of players cannot be made independently. A simplified example is the pumping of oil out of a common oil field by several producers. Each producer chooses an amount x_i to pump out and sell. The price depends on the total amount sold. Thus each producer has partial control of the price and hence of their profits. But the x_i cannot be chosen independently, because their sum cannot exceed the total amount of oil in the ground. To take such possibilities into account we introduce a multivalued mapping F_i : X ⇒ X_i which tells which strategies are actually feasible for player i, given the strategy vector of the others. (We have written F_i as a function of the strategies of all the players, including i, as a technical convenience. Since player i only has control over the i-th component of x, F_i will be independent of player i's choice.) The jointly feasible strategy vectors are thus the fixed points of the multivalued mapping F = Π_{i∈I} F_i : X ⇒ X. A game with the added feasibility or constraint multivalued mapping is called a generalized game or abstract economy. It is specified by a tuple (I, (X_i), (F_i), (U_i)), where F_i : X ⇒ X_i and U_i : X ⇒ X_i.

A Nash equilibrium of a strategic form game or abstract economy is a strategy vector x for which no player has a good reply. For a game, an equilibrium is an x ∈ X such that U_i(x) = ∅ for each i. For an abstract economy, an equilibrium is an x ∈ X such that x ∈ F(x) and U_i(x) ∩ F_i(x) = ∅ for each i.
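In a finite game the good reply sets U_i(x) can be enumerated directly, and the equilibrium definition above can be checked mechanically. The sketch below is not from the text; it simply encodes the matching-pennies payoffs of Example 1.1 as dictionaries (the only assumed data) and confirms that no pure strategy vector is a Nash equilibrium.

```python
# Two players, two pure strategies each; payoffs for matching pennies.
import itertools

payoffs = {
    0: {(0, 0): 1, (0, 1): -1, (1, 0): -1, (1, 1): 1},   # player 1
    1: {(0, 0): -1, (0, 1): 1, (1, 0): 1, (1, 1): -1},   # player 2
}
strategies = [0, 1]

def good_replies(i, x):
    """U_i(x): strategies y_i strictly preferred by player i, others fixed."""
    replies = []
    for y in strategies:
        alt = list(x)
        alt[i] = y                      # the strategy vector x | y_i
        if payoffs[i][tuple(alt)] > payoffs[i][x]:
            replies.append(y)
    return replies

# a Nash equilibrium is an x at which every good reply set is empty
equilibria = [x for x in itertools.product(strategies, repeat=2)
              if all(not good_replies(i, x) for i in (0, 1))]
print(equilibria)   # matching pennies has no pure-strategy Nash equilibrium
```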

Nash proves the existence of equilibria for games in which the players' preferences are representable by continuous quasi-concave utilities and the strategy sets are simplexes. Debreu proves the existence of equilibrium for abstract economies. He assumes that the strategy sets are contractible polyhedra, that the feasibility multivalued mappings have closed graph, that the maximized utility is continuous, and that the set of utility maximizers over each constraint set is contractible. These are joint assumptions on utility and feasibility; the simplest way to make separate assumptions is to assume that the strategy sets are compact and convex, that the utilities are continuous and quasi-concave, and that the constraint multivalued mappings are continuous with compact convex values. Then the maximum theorem guarantees continuity of the maximized utility, and convexity of the feasible sets together with quasi-concavity of utility implies convexity (and hence contractibility) of the set of maximizers. Arrow and Debreu used Debreu's result to prove the existence of Walrasian equilibrium of an economy, and coined the term abstract economy.

Gale and Mas-Colell prove a lemma which allows them to prove the existence of equilibrium for a game without ordered preferences. They assume that the good reply multivalued mappings have open graph and satisfy the convexity/irreflexivity condition x_i ∉ co U_i(x). This result doesn't strictly generalize Debreu's result, since convexity rather than contractibility assumptions are made. Shafer and Sonnenschein prove the existence of equilibria for abstract economies without ordered preferences. They assume that the strategy sets are compact convex sets, that the good reply multivalued mappings are convex valued and have open graph, and that the feasibility multivalued mappings are continuous with compact convex values.

Theorem 3.8 (Gale, Mas-Colell). Let X = Π_{i∈I} X_i, with each X_i a nonempty, compact, convex subset of R^{k_i}, and let U_i : X ⇒ X_i be a multivalued mapping satisfying
(i) U_i(x) is convex for all x ∈ X;
(ii) U_i^{-1}({x_i}) is open in X for all x_i ∈ X_i.
Then there exists x ∈ X such that for each i, either x_i ∈ U_i(x) or U_i(x) = ∅.

Proof. Let W_i = {x ∈ X : U_i(x) ≠ ∅}. Then W_i is open by (ii), and U_i|W_i : W_i ⇒ X_i satisfies the hypotheses of the selection theorem (each U_i is assumed to satisfy (i) and (ii), so that the selection theorem may be employed); so there is a continuous function f_i : W_i → X_i with f_i(x) ∈ U_i(x). Define the multivalued mapping μ_i : X ⇒ X_i by

μ_i(x) = {f_i(x)}, if x ∈ W_i; μ_i(x) = X_i, if x ∉ W_i. (45)

Then μ_i is upper hemi-continuous with nonempty compact and convex values, and thus so is μ = Π_{i∈I} μ_i : X ⇒ X. Thus, by the Kakutani theorem, μ has a fixed point x. If μ_i(x) ≠ X_i, then x_i ∈ μ_i(x) implies x_i = f_i(x) ∈ U_i(x). If μ_i(x) = X_i, then it must be that U_i(x) = ∅. (Unless of course X_i is a singleton, in which case {x_i} = μ_i(x).)

Remark 3.8. The previous theorem possesses a trivial extension.
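The fixed point delivered by the Kakutani theorem in the proof above is non-constructive, but when payoffs are strictly concave the best reply map is single-valued and can often be located by simple iteration. The following numerical companion is illustrative only: the linear Cournot demand p = a − q1 − q2 and the cost figure are invented for the sketch and do not come from the text.

```python
# Cournot duopoly: each firm's best reply to the other's quantity maximizes
# (a - q - q_other - c) * q over q >= 0, giving q = max(0, (a - c - q_other) / 2).
a, c = 10.0, 1.0          # assumed demand intercept and unit cost

def best_reply(q_other):
    return max(0.0, (a - c - q_other) / 2.0)

q1 = q2 = 0.0
for _ in range(100):      # iterate the (simultaneous) best-reply map
    q1, q2 = best_reply(q2), best_reply(q1)

# the iteration contracts toward the Cournot-Nash equilibrium (a - c) / 3 each
print(round(q1, 6), round(q2, 6))
```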

If some U_i is already a singleton-valued multivalued mapping, then the selection problem is trivial. Thus we may allow some of the U_i's to be continuous singleton-valued multivalued mappings instead.

The next corollary is derived from Theorem 3.8 by assuming in addition that each x_i ∉ co U_i(x); it concludes that there exists some x such that U_i(x) = ∅ for each i. (Assuming instead that U_i(x) is never empty yields a result equivalent to a result of Fan.)

Corollary 3.1. For each i, let U_i : X ⇒ X_i have open graph and satisfy x_i ∉ co U_i(x) for each x. Then there exists x ∈ X with U_i(x) = ∅ for all i.

Proof. Because X_i is a convex set, the multivalued mappings co U_i satisfy the hypotheses of Theorem 3.8. Thus there is x ∈ X such that for each i, either x_i ∈ co U_i(x) or co U_i(x) = ∅. Since x_i ∉ co U_i(x) by hypothesis, we have co U_i(x) = ∅, so U_i(x) = ∅.

Theorem 3.9 (Shafer-Sonnenschein). Let (I, (X_i), (F_i), (U_i)) be an abstract economy such that for each i:
(i) X_i ⊆ R^{k_i} is nonempty, compact and convex;
(ii) F_i is a continuous multivalued mapping with nonempty compact convex values;
(iii) Gr U_i is open in X × X_i;
(iv) x_i ∉ co U_i(x) for all x ∈ X.
Then there is an equilibrium.

Proof. For each i, define λ_i : X × X_i → R_+ by λ_i(x, y_i) = dist[(x, y_i), (Gr U_i)^c]. Then λ_i(x, y_i) > 0 if and only if y_i ∈ U_i(x), and λ_i is continuous since Gr U_i is open. Define H_i : X ⇒ X_i by H_i(x) = {y_i ∈ X_i : y_i maximizes λ_i(x, ·) on F_i(x)}. Then H_i has nonempty compact values and is upper hemi-continuous, hence closed. (To see that H_i is upper hemi-continuous, apply the maximum theorem to the multivalued mapping (x, y_i) ↦ {x} × F_i(x) and the function λ_i.) Define G : X ⇒ X by G(x) = Π_{i∈I} co H_i(x).

G is upper hemi-continuous with compact convex values and so satisfies the hypotheses of the Kakutani fixed point theorem, so there is x ∈ X with x ∈ G(x). For each i, x_i ∈ G_i(x) = co H_i(x) ⊆ F_i(x), since H_i(x) ⊆ F_i(x), which is convex. We now show U_i(x) ∩ F_i(x) = ∅. Suppose not; then for some i there is z_i ∈ U_i(x) ∩ F_i(x). Since z_i ∈ U_i(x), we have λ_i(x, z_i) > 0, and since H_i(x) consists of the maximizers of λ_i(x, ·) on F_i(x), we have λ_i(x, y_i) > 0 for all y_i ∈ H_i(x). This says that y_i ∈ U_i(x) for all y_i ∈ H_i(x). Thus H_i(x) ⊆ U_i(x), so x_i ∈ co H_i(x) ⊆ co U_i(x), which contradicts (iv). Thus U_i(x) ∩ F_i(x) = ∅ for each i, and the conclusion follows.

Remark 3.9. The multivalued mappings H_i used in the proof of the previous theorem aren't natural constructions, which is the cleverness of Shafer and Sonnenschein's proof. The natural approach would be to use the best reply multivalued mappings x ↦ {x_i : U_i(x|x_i) ∩ F_i(x) = ∅}. These multivalued mappings are compact-valued and upper hemi-continuous. They may fail to be convex-valued, however; Mas-Colell gives an example for which the best reply multivalued mapping has no connected-valued submultivalued mapping.
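The distance function λ_i in the proof above can be computed explicitly in simple cases. The toy example below is entirely hypothetical (one-dimensional strategies, with the open graph taken to be {(x, y) : y > x} in the unit square); it only checks the key property that the distance to the complement of an open graph is positive exactly on the graph.

```python
import math

def lam(x, y):
    # distance from (x, y) to the closed set {(u, v) : v <= u};
    # the nearest boundary point lies on the line v = u, at distance (y - x)/sqrt(2)
    return max(0.0, (y - x) / math.sqrt(2.0))

assert lam(0.2, 0.8) > 0        # (0.2, 0.8) lies in the graph: y > x
assert lam(0.8, 0.2) == 0.0     # outside the graph the distance is zero
```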

Taking the convex hull of the best reply multivalued mapping doesn't help, since a fixed point of the convex hull multivalued mapping may fail to be an equilibrium. What Shafer and Sonnenschein do is choose a multivalued mapping that is a submultivalued mapping of the good reply multivalued mapping when the good reply set is nonempty, and equal to the whole feasible strategy set otherwise. Another natural approach would be to use the good reply multivalued mapping x ↦ co U_i(x) ∩ F_i(x). This multivalued mapping, while convex-valued, isn't closed-valued, and so the Kakutani theorem doesn't apply.

Under stronger assumptions on the F_i multivalued mappings, this approach can be made to work without taking a proper subset of the good reply set. The additional assumptions on F_i are the following. First, F_i(x) is assumed to be topologically regular for each x, that is, F_i(x) = cl [int F_i(x)]. Second, the multivalued mapping x ↦ int F_i(x) is assumed to have open graph. The requirement of open graph is stronger than lower hemi-continuity. Both the topological regularity and open graph assumptions are satisfied by budget multivalued mappings, provided income is always greater than the minimum consumption expenditure on the consumption set. These assumptions were used by Borglin and Keiding, who reduced the multi-player abstract economy to a 1-person game. The proof below instead adds an additional player to the abstract economy and incorporates the feasibility constraints into the preferences, which converts it into a game; it is closely related to the arguments used by Gale and Mas-Colell to reduce an economy to a noncooperative game.

Theorem 3.10 (A special case of the Shafer-Sonnenschein theorem). Let (I, (X_i), (F_i), (U_i)) be an abstract economy such that for each i we have:
(i) X_i ⊆ R^{k_i} is nonempty, compact and convex;
(ii) F_i is an upper hemi-continuous multivalued mapping with nonempty compact convex values satisfying, for all x, F_i(x) = cl [int F_i(x)], and x ↦ int F_i(x) has open graph;
(iii) Gr U_i is open in X × X_i;
(iv) x_i ∉ co U_i(x) for all x.
Then there is an equilibrium, that is, an x* ∈ X such that for each i, x_i* ∈ F_i(x*) and U_i(x*) ∩ F_i(x*) = ∅.

Proof. We define a game as follows. Put Z_0 = Π_{i∈I} X_i. For i ∈ I put Z_i = X_i, and set Z = Z_0 × Π_{i∈I} Z_i. A typical element of Z will be denoted (x, y), where x ∈ Z_0 and y ∈ Π_{i∈I} Z_i. Define preference multivalued mappings φ_i : Z ⇒ Z_i as follows. Define φ_0 by φ_0(x, y) = {y}, and for i ∈ I set

φ_i(x, y) = co U_i(y) ∩ int F_i(x), if y_i ∈ F_i(x); φ_i(x, y) = int F_i(x), if y_i ∉ F_i(x). (46)

Note that φ_0 is continuous and never empty-valued, and that for i ∈ I the multivalued mapping φ_i is convex-valued and satisfies y_i ∉ φ_i(x, y).

Also, for i ∈ I, the graph of φ_i is open. To see this, set

A_i = {(x, y, z_i) : y_i ∉ F_i(x)},
B_i = {(x, y, z_i) : z_i ∈ co U_i(y)},
C_i = {(x, y, z_i) : z_i ∈ int F_i(x)},

and note that Gr φ_i = (A_i ∩ C_i) ∪ (B_i ∩ C_i). The set C_i is open because int F_i has open graph, and B_i is open by hypothesis (iii). The set A_i is also open: if y_i ∉ F_i(x), then there is a closed neighborhood W of y_i such that F_i(x) ⊆ W^c, and upper hemi-continuity of F_i then gives the desired result.

Thus the hypothesis of Remark 3.8 is satisfied, and so there exists (x*, y*) ∈ Z such that

x* ∈ φ_0(x*, y*), (*)

and, for i ∈ I, φ_i(x*, y*) = ∅ (**) (the alternative y_i* ∈ φ_i(x*, y*) is ruled out, since y_i ∉ φ_i(x, y) for all (x, y)).

Now (*) implies x* = y*. Also, for i ∈ I, y_i* ∈ F_i(x*): otherwise φ_i(x*, y*) = int F_i(x*), which is nonempty since F_i(x*) is nonempty and F_i(x*) = cl [int F_i(x*)], contradicting (**). Thus x_i* = y_i* ∈ F_i(x*) for each i ∈ I, and (**) becomes

co U_i(x*) ∩ int F_i(x*) = ∅, for i ∈ I. (***)

In particular U_i(x*) ∩ int F_i(x*) = ∅. But F_i(x*) = cl [int F_i(x*)] and U_i(x*) is open, so U_i(x*) ∩ F_i(x*) = ∅ for i ∈ I. That is, x* is an equilibrium.

3.5 Walrasian equilibrium of an economy

We now have several tools for proving the existence of a Walrasian equilibrium of an economy. We will focus on two approaches: the excess demand approach and the abstract economy approach. The excess demand approach utilizes the Debreu-Gale-Nikaido lemma, namely Theorem 3.1. The central difficulty of the excess demand approach involves proving the upper hemi-continuity of the excess demand multivalued mapping. The abstract economy approach explicitly introduces a fictitious agent, the "auctioneer", into the picture and models the economy as an abstract economy, or generalized game. The strategies of consumers are consumption vectors, the strategies of suppliers are production vectors, and the strategies of the auctioneer are prices. The auctioneer's preferences are to increase the value of excess demand. A Nash equilibrium of the abstract economy corresponds to a Walrasian equilibrium of the original economy; the abstract economy approach thus converts the problem of finding a Walrasian equilibrium of the economy into the problem of finding the Nash equilibrium of an associated abstract economy. The principal difficulty to overcome in applying the existence theorems for abstract economies is the fact that they require compact strategy sets, and the consumption and production sets aren't compact.
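Before the formal development, a concrete toy instance of the excess demand approach may help. In a two-good exchange economy with Cobb-Douglas consumers, excess demand is single-valued and continuous, Walras' law p·z(p) = 0 holds identically, and the market-clearing price can be found by bisection. All endowments and utility weights below are invented for the illustration.

```python
# Consumer i has utility a_i*log(x1) + (1 - a_i)*log(x2), so she spends the
# fraction a_i of her wealth p·w_i on good 1 (standard Cobb-Douglas demand).
a = [0.3, 0.6]                      # utility weights (assumed)
w = [(1.0, 2.0), (2.0, 1.0)]        # endowments (assumed)

def excess_demand(p1):
    p = (p1, 1.0 - p1)              # prices normalized to the simplex
    z = [0.0, 0.0]
    for ai, wi in zip(a, w):
        wealth = p[0] * wi[0] + p[1] * wi[1]
        z[0] += ai * wealth / p[0] - wi[0]
        z[1] += (1.0 - ai) * wealth / p[1] - wi[1]
    return z

# z1 is strictly decreasing in p1 here, so bisection finds the clearing price
lo, hi = 1e-6, 1.0 - 1e-6
for _ in range(80):
    mid = (lo + hi) / 2.0
    if excess_demand(mid)[0] > 0:
        lo = mid
    else:
        hi = mid

p1 = (lo + hi) / 2.0
z = excess_demand(p1)
print(round(p1, 4))                  # equilibrium relative price of good 1
```

By Walras' law, once the first market clears the second clears automatically, which is why bisecting on a single coordinate of z suffices in two dimensions.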

This problem is dealt with by showing that any equilibrium must lie in a compact set, then truncating the consumption and production sets and showing that a Nash equilibrium of the truncated abstract economy is a Walrasian equilibrium of the original economy.

We now recall some notation and definitions needed in what follows. Let R^m denote the commodity space. For i = 1, 2, ..., n let X_i ⊆ R^m denote the i-th consumer's consumption set, U_i his preference relation on X_i, and w_i ∈ R^m his private endowment. For j = 1, 2, ..., k let Y_j denote the j-th supplier's production set. Set X = Π_{i=1}^n X_i, w = Σ_{i=1}^n w_i, and Y = Σ_{j=1}^k Y_j. Let a_j^i denote the share of consumer i in the profits of supplier j. An economy is then described by a tuple ((X_i, w_i, U_i), (Y_j), (a_j^i)).

Definition 3.5. An attainable state of the economy is a tuple ((x_i), (y_j)) ∈ Π_{i=1}^n X_i × Π_{j=1}^k Y_j satisfying

Σ_{i=1}^n x_i − Σ_{j=1}^k y_j − w = 0.

Let F denote the set of attainable states and let

M = {((x_i), (y_j)) ∈ (R^m)^{n+k} : Σ_{i=1}^n x_i − Σ_{j=1}^k y_j − w = 0}.

Then F = (Π_{i=1}^n X_i × Π_{j=1}^k Y_j) ∩ M. Let X_i^0 be the projection of F on X_i, and let Y_j^0 be the projection of F on Y_j.

Definition 3.6. A Walrasian free disposal equilibrium is a price p ∈ Δ together with an attainable state ((x_i), (y_j)) satisfying:
(i) For each j = 1, 2, ..., k, p·y_j ≥ p·y_j' for all y_j' ∈ Y_j.
(ii) For each i = 1, 2, ..., n, x_i ∈ B_i and U_i(x_i) ∩ B_i = ∅, where B_i = {x_i ∈ X_i : p·x_i ≤ p·w_i + Σ_{j=1}^k a_j^i (p·y_j)}.

Lemma 3.3. Let the economy ((X_i, w_i, U_i), (Y_j), (a_j^i)) satisfy:
for i = 1, 2, ..., n,
(1) X_i is closed, convex and bounded from below, and w_i ∈ X_i;
and for j = 1, 2, ..., k,
(2) Y_j is closed, convex and 0 ∈ Y_j;
0 2 Yj0 . (5) there is some x0 2 Xi satisfying wi > x0 : i i (6) Y Rm : + 0 Then x0 2 Xi . we have AF Also. The set F of attainable states is clearly closed. k: Pn Pn 0 0 PnNow assume that (5) and (6) hold. 0 Thus for each consumer i. 2. where each xn 2 F and n # 0:) By a well known result. so x0 2 Xi : i i Under the hypotheses of Lemma 3. k: Suppose in addition. we have AYj AY: + + + + Again. we have A( n Y A( i=1 n Y Xi j=1 k Y Yj ) \ AM: Xi i=1 j=1 Since each Xi is bounded below there is some bi 2 Rm such that Xi bi + Rm : Thus AXi A(bi + Rm ) = ARm = Rm : Also. Thus yi 2 Y ( Y ) so yi = 0: This is true for all i = 1. i=1 Pk P i = 1. :::. For each i = 1. Furthermore. :::. n. Clearly ((wi ).T (3) AY Rm = f0g: + T (4) Y ( Y ) = f0g: Then the set F of attainable states is compact and nonempty. j=1 yj 2 AY: Since AY R+ = f0g. i = 1. So. and yj 2 AY. we need to show that if xi 2 Rm . :::. :::. so by (6) there are yj . 2. that the following two assumptions hold. 2. being the intersection of two closed sets. n. n: i Proof. Since AY is a convex Pk Pk T m Pn cone. :::. there is a compact convex set Ki containing Xi in k Y AY ) \ (M w) = f0g: 103 . i=1 xi < i=1 wi : Set y = Pn 0 0 0 i=1 xi i=1 wi : Then y < 0. then x1 = ::: = xn = y1 = ::: = j=1 Pn Pk yk = 0: Now i=1 xi 0. where AF is asymptotic cone of F (the set of all possible limits of sequences of the form f n xn g. (0j )) 2 F. (yj )) 2 F. 2. satisfying Pk 0 0 0 0 y = j=1 yj : Then ((x0 ). k. j.3 the set F of attainable states is compact. n: Rewriting j=1 yj = 0 yields yi =T ( j6=i yj ): Both yi and this last sum belong to Y as AY Y . j = 1. 2. AM = M w: Thus we can show AF = f0g if we can show that ( n Y k Y Yj ) i=1 n Y (AXi ) j=1 k Y (AYj ): Rm + i=1 j=1 In other words. it is su¢ ces to show that AF = f0g. 2. 2. :::. By (5). so that j=1 yj 0 too. + Pn Pk j = 1. i=1 xi j=1 yj = 0 implies Pn Pk Pn xi = 0 = j=1 yj : Now xi 0 and i=1 xi = 0 clearly imply that xi = 0. i = 1. since M w is a cone. :::. 1. 2. 
k and i=1 xi yj = 0. so F is nonempty and 0 2 Yj0 . :::.

Set X_i'' = K_i ∩ X_i; then X_i^0 ⊆ int X_i''. Likewise, since 0 ∈ Y_j^0, for each supplier j there is a compact convex set C_j containing Y_j^0 in its interior. Set Y_j'' = C_j ∩ Y_j.

Theorem 3.11. Let the economy ((X_i, w_i, U_i), (Y_j), (a_j^i)) satisfy:
for i = 1, 2, ..., n,
(1) X_i is closed, convex and bounded from below, and w_i ∈ X_i;
(2) there is some x_i^0 ∈ X_i satisfying w_i > x_i^0;
(3) (a) U_i has open graph, (b) x_i ∉ co U_i(x_i), (c) x_i ∈ cl U_i(x_i);
and for each j = 1, 2, ..., k,
(4) Y_j is closed and convex and 0 ∈ Y_j;
(5) Y ∩ R^m_+ = {0};
(6) Y ∩ (−Y) = {0};
(7) Y ⊇ −R^m_+.
Then there is a free disposal equilibrium of the economy.

Proof. Define an abstract economy as follows. Player 0 is the auctioneer. His strategy set is Δ^{m−1}, the closed standard (m−1)-simplex; these strategies will be price vectors. The strategy set of consumer i will be X_i''. The strategy set of supplier j is Y_j''. A typical strategy vector is thus of the form (p, (x_i), (y_j)).

The auctioneer's preferences are represented by the multivalued mapping U_0 : Δ × Π_i X_i'' × Π_j Y_j'' ⇒ Δ defined by

U_0(p, (x_i), (y_j)) = {q ∈ Δ : q·(Σ_i x_i − Σ_j y_j − w) > p·(Σ_i x_i − Σ_j y_j − w)}.

Thus the auctioneer prefers to raise the value of excess demand. Observe that U_0 has open graph, convex upper contour sets, and satisfies p ∉ U_0(p, (x_i), (y_j)).

Supplier j's preferences are represented by the multivalued mapping V_j : Δ × Π_i X_i'' × Π_j Y_j'' ⇒ Y_j'' defined by

V_j(p, (x_i), (y_j)) = {y_j'' ∈ Y_j'' : p·y_j'' > p·y_j}.

Thus suppliers prefer larger profits. These multivalued mappings have open graph, convex upper contour sets, and satisfy y_j ∉ V_j(p, (x_i), (y_j)).

The preferences of consumer i are represented by the multivalued mapping U_i' : Δ × Π_i X_i'' × Π_j Y_j'' ⇒ X_i'' defined by

U_i'(p, (x_i), (y_j)) = co U_i(x_i).

This multivalued mapping has open graph, convex upper contour sets, and satisfies x_i ∉ U_i'(p, (x_i), (y_j)).

The feasibility multivalued mappings are as follows. For suppliers and the auctioneer, they are constant multivalued mappings, and the values are equal to their entire strategy sets; thus they are continuous with compact convex values. For consumers, things are more complicated. Start by setting π_j(p) = max_{y_j ∈ Y_j''} p·y_j. By the maximum theorem, this is a continuous function. Since 0 ∈ Y_j'', π_j(p) is always nonnegative. Set

F_i(p, (x_i), (y_j)) = {x_i'' ∈ X_i'' : p·x_i'' ≤ p·w_i + Σ_{j=1}^k a_j^i π_j(p)}.

Since π_j(p) is nonnegative and there is x_i^0 < w_i in X_i'', we have p·x_i^0 < p·w_i for any p ∈ Δ. Thus F_i is lower hemi-continuous and nonempty-valued. F_i is upper hemi-continuous as well, since it clearly has closed graph and X_i'' is compact. Thus for each consumer the feasibility multivalued mapping is a continuous multivalued mapping with nonempty compact convex values.

The abstract economy so constructed satisfies all the hypotheses of the Shafer-Sonnenschein theorem and so has a Nash equilibrium. Translating the definition of Nash equilibrium to the case at hand yields the existence of (p*, (x_i*), (y_j*)) ∈ Δ × Π_i X_i'' × Π_j Y_j'' satisfying:
(i) q·(Σ_i x_i* − Σ_j y_j* − w) ≤ p*·(Σ_i x_i* − Σ_j y_j* − w) for all q ∈ Δ;
(ii) p*·y_j* ≥ p*·y_j for all y_j ∈ Y_j'', j = 1, 2, ..., k;
(iii) x_i* ∈ B_i'' and co U_i(x_i*) ∩ B_i'' = ∅, where B_i'' = {x_i ∈ X_i'' : p*·x_i ≤ p*·w_i + Σ_{j=1}^k a_j^i (p*·y_j*)}.

Let M_i = p*·w_i + Σ_{j=1}^k a_j^i (p*·y_j*). Then in fact each consumer spends all his income, so that we have the budget equality p*·x_i* = M_i. Suppose not. Then, since U_i(x_i*) is open and x_i* ∈ cl U_i(x_i*), it would follow that U_i(x_i*) ∩ B_i'' ≠ ∅, a contradiction of (iii).

Summing up the budget equalities and using Σ_{i=1}^n a_j^i = 1 for each j yields p*·(Σ_i x_i*) = p*·(Σ_j y_j* + w), so that

p*·(Σ_i x_i* − Σ_j y_j* − w) = 0.

This and (i) yield Σ_i x_i* − Σ_j y_j* − w ≤ 0.

We next show that p*·y_j* ≥ p*·y_j for all y_j ∈ Y_j, not just for y_j ∈ Y_j''. Suppose not, so that there exists y_j' ∈ Y_j with p*·y_j' > p*·y_j*. Since Y_j is convex, εy_j' + (1−ε)y_j* ∈ Y_j for 0 < ε < 1, and it too yields a higher profit than y_j*. But for ε small enough, εy_j' + (1−ε)y_j* ∈ Y_j'', because Y_j^0 is in the interior of C_j. This contradicts (ii). Thus we have so far shown that p*·y_j* ≥ p*·y_j for all y_j ∈ Y_j, j = 1, 2, ..., k. But then Σ_j y_j* maximizes p*·y over Y.

By (7), z = Σ_i x_i* − Σ_j y_j* − w ∈ −R^m_+ ⊆ Y, so that there exist y_j^z ∈ Y_j, j = 1, 2, ..., k, satisfying z = Σ_j y_j^z. Set ŷ_j = y_j* + y_j^z. Since p*·z = 0, Σ_j ŷ_j also maximizes p*·y over Y. But then each ŷ_j must also maximize p*·y_j over Y_j. By construction, Σ_i x_i* − Σ_j ŷ_j − w = 0, so we have that ((x_i*), (ŷ_j)) ∈ F. To show that

(p*, (x_i*), (ŷ_j)) is indeed a Walrasian free disposal equilibrium, it remains to be proven that, for each i,

U_i(x_i*) ∩ {x_i ∈ X_i : p*·x_i ≤ p*·w_i + Σ_{j=1}^k a_j^i (p*·ŷ_j)} = ∅.

Suppose that there is some x_i' belonging to this intersection. Then for ε > 0 small enough, εx_i' + (1−ε)x_i* ∈ X_i'', and, since x_i* ∈ cl U_i(x_i*), εx_i' + (1−ε)x_i* ∈ co U_i(x_i*) ∩ B_i'', contradicting (iii). Thus ((x_i*), (ŷ_j)) together with p* is a Walrasian free disposal equilibrium.

Theorem 3.12. Let the economy ((X_i, w_i, U_i), (Y_j), (a_j^i)) satisfy the hypotheses of Theorem 3.11, and further assume that there is a continuous quasi-concave utility u_i satisfying U_i(x_i) = {x_i' ∈ X_i : u_i(x_i') > u_i(x_i)}. Then the economy has a Walrasian free disposal equilibrium.

Proof. Let X_i'' and Y_j'' be as in the proof of the previous theorem. We define the supply multivalued mapping σ_j : Δ ⇒ Y_j'' by

σ_j(p) = {y_j ∈ Y_j'' : p·y_j ≥ p·y_j'' for all y_j'' ∈ Y_j''},

and define π_j : Δ → R by π_j(p) = max_{y_j ∈ Y_j''} p·y_j. By the maximum theorem, σ_j is upper hemi-continuous with nonempty compact values and π_j is continuous. Since Y_j'' is convex, σ_j(p) is convex too. Since 0 ∈ Y_j'', π_j is nonnegative.

Define the budget multivalued mapping β_i : Δ ⇒ X_i'' by

β_i(p) = {x_i ∈ X_i'' : p·x_i ≤ p·w_i + Σ_{j=1}^k a_j^i π_j(p)}.

As in the proof of the previous theorem, the existence of x_i^0 < w_i in X_i'' implies that β_i is a continuous multivalued mapping with nonempty values. Define the demand multivalued mapping δ_i : Δ ⇒ X_i'' by

δ_i(p) = {x_i ∈ β_i(p) : u_i(x_i) ≥ u_i(x_i'') for all x_i'' ∈ β_i(p)}.

By a theorem of Berge, δ_i is an upper hemi-continuous multivalued mapping with nonempty compact values. Since X_i'' is compact and convex and u_i is quasi-concave, δ_i has compact convex values.

Set

Z(p) = Σ_{i=1}^n δ_i(p) − Σ_{j=1}^k σ_j(p) − w.

This Z is upper hemi-continuous and has nonempty compact convex values. Also, for any z ∈ Z(p), p·z ≤ 0; to see this, just add up the budget constraints of the consumers. By Theorem 3.1, there is some p* ∈ Δ and z* ∈ Z(p*) satisfying z* ≤ 0. Thus there are x_i* ∈ δ_i(p*) and y_j* ∈ σ_j(p*) such that

Σ_{i=1}^n x_i* − Σ_{j=1}^k y_j* − w ≤ 0.

It follows just as in the proof of the previous theorem that ((x_i*), (y_j*)) together with p* is a Walrasian free disposal equilibrium.

The literature on Walrasian equilibrium is enormous. Two standard texts in the field are Debreu and Arrow-Hahn.

3.6 Equilibria for abstract economies

The object of this subsection is to use new fixed-point theorems of Agarwal and O'Regan to establish the existence of equilibrium points of abstract economies. These results improve, extend and complement those in the literature. Throughout this subsection, I will be a countable set of agents, and we describe an abstract economy by Γ = (Q_i, F_i, P_i)_{i∈I}, where for each i ∈ I, Q_i is a choice (or strategy) set, F_i : Π_{i∈I} Q_i = Q → 2^{Q_i} (nonempty subsets of Q_i) is a constraint multivalued mapping, and P_i : Q → 2^{Q_i} is a preference multivalued mapping; here Q_i will be a subset of a Fréchet space (complete, metrizable locally convex topological vector space) E_i for each i ∈ I. A point x ∈ Q is called an equilibrium point of Γ if, for each i ∈ I, we have x_i ∈ F_i(x) and F_i(x) ∩ P_i(x) = ∅; here x_i is the projection of x on E_i.

Theorem 3.13. Let Ω be a closed, convex subset of a Fréchet space E with x_0 ∈ Ω. Suppose that there is an upper semicontinuous map F : Ω → CK(Ω) (here CK(Ω) denotes the family of nonempty, compact, convex subsets of Ω) with the following condition holding:

A ⊆ Ω, A = co({x_0} ∪ F(A)), with A = cl C and C ⊆ A countable, implies A is compact. (*)

Then F has a fixed point in Ω.

Remark 3.10. In Theorem 3.13, if we assume in addition that F(cl A) ⊆ cl F(A) for any A ⊆ Ω, then we could replace (*) with

C ⊆ Ω countable, cl C = co({x_0} ∪ F(C)) implies cl C is compact,

and the result in Theorem 3.13 is again true.

Remark 3.11. Theorem 3.13 together with Remark 3.10 yields a fixed point theorem of Mönch type for single valued maps (Theorem 3.14 below).
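The fixed points promised by these topological theorems are non-constructive in general, but in the special case of a contraction on a closed convex subset of R (a Fréchet space) the Mönch-type condition holds automatically, since bounded subsets of R are relatively compact, and Picard iteration actually computes the point. A minimal sketch, with the map cos chosen arbitrarily for the illustration:

```python
import math

# f(x) = cos(x) maps the closed convex set [0, 1] into itself and is a
# contraction there (|f'| <= sin(1) < 1), so the fixed point is unique and
# the iterates x_{n+1} = f(x_n) converge to it.
x = 0.0
for _ in range(200):
    x = math.cos(x)

assert abs(x - math.cos(x)) < 1e-12   # x is (numerically) a fixed point
print(round(x, 6))
```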

Theorem 3.14. Let Ω be a closed, convex subset of a Fréchet space E with x_0 ∈ Ω. Suppose that there is a continuous map f : Ω → Ω with the following condition holding:

C ⊆ Ω countable, cl C = co({x_0} ∪ f(C)) implies cl C is compact.

Then f has a fixed point in Ω.

Next we present a fixed point result of Furi-Pera type.

Theorem 3.15. Let E be a Fréchet space with Q a closed, convex subset of E and 0 ∈ Q. Suppose F : Q → CK(E) is a compact upper semicontinuous map with the following condition holding: if {(x_n, λ_n)}_{n≥1} is a sequence in ∂Q × [0, 1] converging to (x, λ) with x ∈ λF(x) and 0 ≤ λ < 1, then there exists n_0 ∈ {1, 2, ...} with {λ_n F(x_n)} ⊆ Q for n ≥ n_0. Then F has a fixed point in Q.

Remark 3.12. In Theorem 3.15, if E is a Hilbert space, then one could replace "F : Q → CK(E) a compact map" with "F : Q → CK(E) a one-set contractive, condensing map with F(Q) a bounded set in E".

Let Z be a subset of a Hausdorff topological space E_1 and W a subset of a topological vector space E_2. We say F ∈ DTK(Z, W) if W is convex and there exists a map B : Z → 2^W with co B(x) ⊆ F(x) and B(x) ≠ ∅ for each x ∈ Z, and with the fibres B^{-1}(y) = {z ∈ Z : y ∈ B(z)} open (in Z) for each y ∈ W.

The following selection theorem holds.

Theorem 3.16. Let Z be a nonempty, paracompact Hausdorff topological space and W a nonempty, convex subset of a Hausdorff topological vector space. Suppose F ∈ DTK(Z, W). Then F has a continuous selection; that is, there exists a continuous single valued map f : Z → W with f(x) ∈ F(x) for each x ∈ Z.

The following result is a fixed point theorem of Furi-Pera type for DTK maps.

Theorem 3.17. Let I be a countable index set and {Q_i}_{i∈I} a family of nonempty closed, convex sets, each in a Fréchet space E_i. Let Q = Π_{i∈I} Q_i and assume 0 ∈ Q. For each i ∈ I, let F_i ∈ DTK(Q, E_i) be a compact map, and let F : Q → 2^E (here E = Π_{i∈I} E_i) be given by

F(x) = Π_{i∈I} F_i(x), for x ∈ Q.

Suppose the following condition holds: if {(x_n, λ_n)}_{n≥1} is a sequence in ∂Q × [0, 1] converging to (x, λ) with x ∈ λF(x) and 0 ≤ λ < 1, then there exists n_0 ∈ {1, 2, ...} with {λ_n F(x_n)} ⊆ Q for n ≥ n_0. Then F has a fixed point in Q.

Remark 3.13. In Theorem 3.17, if E_i is a Hilbert space for each i ∈ I, then one could replace "F_i a compact map for each i ∈ I" with "F : Q → 2^E a one-set contractive, condensing map with F(Q) a bounded set in E".

We will now use the above fixed point results to obtain equilibrium theorems for an abstract economy.

Theorem 3.18. Let I be a countable set and Γ = (Q_i, F_i, P_i)_{i∈I} an abstract economy such that, for each i ∈ I, the following conditions hold:
(1) Q_i is a nonempty closed, convex subset of a Fréchet space E_i;
(2) F_i : Q → CK(Q_i) is upper semicontinuous (here CK(Q_i) denotes the family of nonempty, compact, convex subsets of Q_i);
(3) U_i = {x ∈ Q : F_i(x) ∩ P_i(x) ≠ ∅} is open in Q;
(4) P_i|U_i : U_i → 2^{E_i} is upper semicontinuous with P_i(x) closed and convex for each x ∈ U_i;
(5) x_i ∉ F_i(x) ∩ P_i(x) for each x ∈ Q; here x_i is the projection of x on E_i.
In addition, suppose x_0 ∈ Q with
(6) A ⊆ Q, A ⊆ co({x_0} ∪ F(A)), with A = cl C and C ⊆ A countable, implies A is compact; here F : Q → 2^Q is given by F(x) = Π_{i∈I} F_i(x) for x ∈ Q.
Then Γ has an equilibrium point; that is, for each i ∈ I, we have x_i ∈ F_i(x) and F_i(x) ∩ P_i(x) = ∅.

Proof. Fix i ∈ I. Let G_i : U_i → 2^{Q_i} be given by G_i(x) = F_i(x) ∩ P_i(x), for x ∈ U_i, which is upper semicontinuous with nonempty, compact, convex values. Let H_i : Q → 2^{Q_i} be defined by
Let H_i : Q → 2^{Q_i} be defined by

H_i(x) = G_i(x) if x ∈ U_i, and H_i(x) = F_i(x) if x ∉ U_i,

which is upper semicontinuous with nonempty, compact, convex values (note G_i(x) ⊆ F_i(x) for x ∈ U_i). Let H : Q → 2^Q be defined by

H(x) = ∏_{i∈I} H_i(x).

We have that H : Q → CK(Q) is upper semicontinuous. We wish to apply Theorem 3.13 to H. To see this, let A ⊆ Q with A = co({x_0} ∪ H(A)). Then, since H(x) ⊆ F(x) for x ∈ Q (note H_i(x) ⊆ F_i(x) for x ∈ Q), we have

A ⊆ co({x_0} ∪ F(A)).

Now (6) guarantees that cl A is compact. Theorem 3.13 guarantees that there exists x ∈ Q with x ∈ H(x). From (5) we have x ∉ U_i for each i ∈ I (note that if x ∈ U_i for some i, then x_i ∈ G_i(x) = F_i(x) ∩ P_i(x), contradicting (5)). As a result, for each i ∈ I, we have

x_i ∈ F_i(x) and F_i(x) ∩ P_i(x) = ∅.

Remark 3.15. If F(cl B) ⊆ cl F(B) for any B ⊆ Q, then one could replace (6) in Theorem 3.18 with (see Remark 3.11)

C ⊆ Q countable, C ⊆ co({x_0} ∪ F(C)) implies cl C is compact.

Theorem 3.19. Let I be a countable set and Γ = (Q_i, F_i, P_i)_{i∈I} an abstract economy such that for each i ∈ I the following conditions hold:
(1) Q_i is a nonempty, closed, convex subset of a Fréchet space E_i;
(2) F_i : Q → CK(Q_i) is a compact, upper semicontinuous map;
(3) U_i = {x | x ∈ Q, F_i(x) ∩ P_i(x) ≠ ∅} is open in Q;
(4) P_i|U_i : U_i → 2^{E_i} is upper semicontinuous with P_i(x) closed and convex for each x ∈ U_i; and
(5) x_i ∉ F_i(x) ∩ P_i(x) for each x ∈ Q; here x_i is the projection of x on E_i.
In addition, suppose 0 ∈ Q with
(6) if {(x_n, λ_n)}_{n≥1} is a sequence in ∂Q × [0, 1] converging to (x, λ) with x ∈ λF(x) and 0 ≤ λ < 1, then there exists n_0 ∈ {1, 2, ...} with {λ_n F(x_n)} ⊆ Q for n ≥ n_0
holding; here F : Q → 2^Q is given by F(x) = ∏_{i∈I} F_i(x) for x ∈ Q.
Then Γ has an equilibrium point x ∈ Q; that is, for each i ∈ I, we have x_i ∈ F_i(x) and F_i(x) ∩ P_i(x) = ∅.

Proof. Fix i ∈ I and let H_i be as in Theorem 3.18, and let H : Q → 2^E be given by H(x) = ∏_{i∈I} H_i(x). Notice H : Q → CK(E) is an upper semicontinuous, compact map (use (2) together with H_i(x) ⊆ F_i(x) for x ∈ Q). We wish to apply Theorem 3.15 to H. To see this, suppose {(x_n, λ_n)}_{n≥1} is a sequence in ∂Q × [0, 1] converging to (x, λ) with x ∈ λH(x) and 0 ≤ λ < 1. Then, since H(x) ⊆ F(x) for x ∈ Q, we have x ∈ λF(x). Now (6) guarantees that there exists n_0 ∈ {1, 2, ...} with {λ_n F(x_n)} ⊆ Q for each n ≥ n_0; consequently {λ_n H(x_n)} ⊆ Q for each n ≥ n_0. Theorem 3.15 guarantees that there exists x ∈ Q with x ∈ H(x), and it is easy to check, as in Theorem 3.18, that x is an equilibrium point of Γ.

Remark 3.16. Notice that (6) can be replaced by: if {(x_n, λ_n)}_{n≥1} is a sequence in ∂Q × [0, 1] converging to (x, λ) with x ∈ λH(x) and 0 ≤ λ < 1, then there exists n_0 ∈ {1, 2, ...} with {λ_n H(x_n)} ⊆ Q for n ≥ n_0, where H is as given in the proof of Theorem 3.19, and the result in Theorem 3.19 is again true.

Next we present a generalization of Theorems 3.18 and 3.19.

Theorem 3.20. Let I be a countable set and Γ = (Q_i, F_i, P_i)_{i∈I} an abstract economy. Assume for each i ∈ I that (1), (2), (3) and (5) of Theorem 3.18 hold. In addition, suppose for each i ∈ I that there exists an upper semicontinuous selector

(7) φ_i : U_i → 2^{Q_i} of F_i ∩ P_i|U_i : U_i → 2^{Q_i}

with φ_i(x) closed and convex for each x ∈ U_i. Also suppose x_0 ∈ Q with (6) of Theorem 3.18 holding, where F : Q → 2^Q is given by F(x) = ∏_{i∈I} F_i(x). Then Γ has an equilibrium point x ∈ Q; that is, for each i ∈ I, we have x_i ∈ F_i(x) and F_i(x) ∩ P_i(x) = ∅.

Proof. Fix i ∈ I and let H_i : Q → 2^{Q_i} be defined by

H_i(x) = φ_i(x) if x ∈ U_i, and H_i(x) = F_i(x) if x ∉ U_i.

This H_i : Q → CK(Q_i) is upper semicontinuous (note φ_i(x) ⊆ F_i(x) for x ∈ U_i). Essentially the same reasoning as in Theorem 3.18 from here onwards establishes the result.

Theorem 3.21. Let I be a countable set and Γ = (Q_i, F_i, P_i)_{i∈I} an abstract economy. Assume for each i ∈ I that (1), (2), (3) and (5) of Theorem 3.19 hold. In addition, suppose for each i ∈ I that there exists an upper semicontinuous selector

(8) φ_i : U_i → 2^{E_i} of F_i ∩ P_i|U_i : U_i → 2^{E_i}

with φ_i(x) closed and convex for each x ∈ U_i. Also assume 0 ∈ Q with (6) of Theorem 3.19 holding. Then Γ has an equilibrium point.

Proof. Fix i ∈ I and let H_i be as in Theorem 3.20. Essentially the same reasoning as in Theorem 3.19 establishes the result.

Remark 3.17. If P_i|U_i : U_i → 2^{E_i} is upper semicontinuous with P_i(x) closed and convex for each x ∈ U_i, then of course (7) holds. Also, if F_i ∩ P_i|U_i : U_i → 2^{Q_i} is lower semicontinuous with P_i(x) closed and convex for each x ∈ U_i, then (7) holds.

Remark 3.18. The theorems so far in this subsection assume that U_i is open in Q. Our next two results consider the case when U_i is closed in Q.

Theorem 3.22. Let I be a countable set and Γ = (Q_i, F_i, P_i)_{i∈I} an abstract economy such that for each i ∈ I the following conditions hold:
(1) Q_i is a nonempty, closed, convex subset of a Fréchet space E_i;
(2) F_i : Q → CK(Q_i) is lower semicontinuous;
(3) U_i = {x | x ∈ Q, F_i(x) ∩ P_i(x) ≠ ∅} is closed in Q;
(4) there exists a lower semicontinuous selector φ_i : U_i → 2^{Q_i} of F_i ∩ P_i|U_i : U_i → 2^{Q_i} with φ_i(x) closed and convex for each x ∈ U_i; and
(5) x_i ∉ F_i(x) ∩ P_i(x) for each x ∈ Q; here x_i is the projection of x on E_i.
In addition, suppose x_0 ∈ Q with
(6) A ⊆ Q, A ⊆ co({x_0} ∪ F(A)) with cl A = cl C and C ⊆ A countable, implies cl A is compact
holding; here F : Q → 2^Q is given by F(x) = ∏_{i∈I} F_i(x) for x ∈ Q.
Then Γ has an equilibrium point.

Remark 3.19. If F_i ∩ P_i|U_i : U_i → 2^{Q_i} is lower semicontinuous with P_i(x) closed and convex for each x ∈ U_i, then (4) is clearly satisfied.

Proof of Theorem 3.22. Fix i ∈ I and let H_i : Q → 2^{Q_i} be given by

H_i(x) = φ_i(x) if x ∈ U_i, and H_i(x) = F_i(x) if x ∉ U_i.

This H_i : Q → CK(Q_i) is lower semicontinuous. Then there exists an upper semicontinuous selector θ_i : Q → CK(Q_i) of H_i. Let Θ : Q → 2^Q be given by

Θ(x) = ∏_{i∈I} θ_i(x), for x ∈ Q.

Now Θ : Q → CK(Q) is upper semicontinuous. We wish to apply Theorem 3.13 to Θ. To see this, let A ⊆ Q with A = co({x_0} ∪ Θ(A)). Then, since θ_i(x) ⊆ H_i(x) ⊆ F_i(x) for x ∈ Q, we have

A ⊆ co({x_0} ∪ F(A)).

Now (6) guarantees that cl A is compact. Theorem 3.13 guarantees that there exists x ∈ Q with x ∈ Θ(x). Now if x ∈ U_i for some i ∈ I, then

x_i ∈ θ_i(x) ⊆ φ_i(x) ⊆ F_i(x) ∩ P_i(x),

a contradiction with (5). As a result, x ∉ U_i for each i ∈ I, and so x_i ∈ F_i(x) and F_i(x) ∩ P_i(x) = ∅; that is, x is an equilibrium point of Γ.

Theorem 3.23. Let I be a countable set and Γ = (Q_i, F_i, P_i)_{i∈I} an abstract economy such that for each i ∈ I the following conditions hold:
(1) Q_i is a nonempty, closed, convex subset of a Fréchet space E_i;
(2) F_i : Q → CK(E_i) is a compact, lower semicontinuous map;
(3) U_i = {x | x ∈ Q, F_i(x) ∩ P_i(x) ≠ ∅} is closed in Q;
(4) there exists a lower semicontinuous selector φ_i : U_i → 2^{E_i} of F_i ∩ P_i|U_i : U_i → 2^{E_i} with φ_i(x) closed and convex for each x ∈ U_i; and
(5) x_i ∉ F_i(x) ∩ P_i(x) for each x ∈ Q; here x_i is the projection of x on E_i.
In addition, suppose 0 ∈ Q with
(6) if {(x_n, λ_n)}_{n≥1} is a sequence in ∂Q × [0, 1] converging to (x, λ) with x ∈ λF(x) and 0 ≤ λ < 1, then there exists n_0 ∈ {1, 2, ...} with {λ_n F(x_n)} ⊆ Q for n ≥ n_0
holding; here F : Q → 2^E (here E = ∏_{i∈I} E_i) is given by F(x) = ∏_{i∈I} F_i(x) for x ∈ Q.
Then Γ has an equilibrium point x ∈ Q. Notice that if F_i ∩ P_i|U_i : U_i → 2^{E_i} is lower semicontinuous with P_i(x) closed and convex for each x ∈ U_i, then (4) is clearly satisfied.

Proof. Fix i ∈ I and let H_i be as in Theorem 3.22. The same reasoning as in Theorem 3.22 guarantees that there exists an upper semicontinuous selector θ_i : Q → CK(E_i) of H_i. Let Θ : Q → 2^E be as in the proof of the previous theorem. Notice Θ : Q → CK(E) is an upper semicontinuous, compact map (use (2) together with θ_i(x) ⊆ F_i(x) for x ∈ Q). We wish to apply Theorem 3.15 to Θ. To see this, suppose {(x_n, λ_n)}_{n≥1} is a sequence in ∂Q × [0, 1] converging to (x, λ) with x ∈ λΘ(x) and 0 ≤ λ < 1. Then, since Θ(x) ⊆ F(x) for x ∈ Q, we have x ∈ λF(x). Now (6) guarantees that there exists n_0 ∈ {1, 2, ...} with {λ_n F(x_n)} ⊆ Q for each n ≥ n_0; consequently {λ_n Θ(x_n)} ⊆ Q for each n ≥ n_0. Theorem 3.15 guarantees that there exists x ∈ Q with x ∈ Θ(x), and it is easy to check, as in Theorem 3.22, that x is an equilibrium point of Γ.

The results which follow improve those of O'Regan, of Ding, Kim and Tan, and of Yannelis and Prabhaker. We first establish a new fixed point result for DTK maps.

Theorem 3.24. Let I be a countable index set and {Q_i}_{i∈I} a family of nonempty, closed, convex sets, each in a Fréchet space E_i. For each i ∈ I, let G_i ∈ DTK(Q, Q_i), where Q = ∏_{i∈I} Q_i. Assume x_0 ∈ Q and suppose G : Q → 2^Q, defined by G(x) = ∏_{i∈I} G_i(x) for x ∈ Q, satisfies the following condition:

C ⊆ Q countable, C ⊆ co({x_0} ∪ G(C)) implies cl C is compact.

Then G has a fixed point in Q.

Proof. Since Q is a subset of the metrizable space E = ∏_{i∈I} E_i, we have that Q is paracompact. For each i ∈ I, G_i ∈ DTK(Q, Q_i) together with Theorem 3.16 guarantees that there exists a continuous selector g_i : Q → Q_i of G_i. Let g : Q → Q be defined by

g(x) = ∏_{i∈I} g_i(x).

Notice g : Q → Q is continuous and g is a selector of G. We now show that if C ⊆ Q is countable and C ⊆ co({x_0} ∪ g(C)), then cl C is compact. To see this, notice that since g is a selector of G we have

C ⊆ co({x_0} ∪ G(C)),

and now the condition in the statement of the theorem implies cl C is compact. Theorem 3.14 guarantees that there exists x ∈ Q with x = g(x); that is,

x = g(x) = ∏_{i∈I} g_i(x) ∈ ∏_{i∈I} G_i(x) = G(x).

Next we discuss an abstract economy Γ = (Q_i, F_i, G_i, P_i)_{i∈I} (here I is countable) where, for each i ∈ I, Q_i ⊆ E_i is the choice set, F_i, G_i : ∏_{i∈I} Q_i = Q → 2^{E_i} are constraint multivalued mappings, and P_i : Q → 2^{E_i} is a preference multivalued mapping. A point x ∈ Q is called an equilibrium point of Γ if, for each i ∈ I, we have

x_i ∈ cl_{E_i} G_i(x) and F_i(x) ∩ P_i(x) = ∅;

here x_i is the projection of x on E_i.

Theorem 3.25. Let I be a countable set and Γ = (Q_i, F_i, G_i, P_i)_{i∈I} an abstract economy such that for each i ∈ I the following conditions hold:
(1) Q_i is a nonempty, closed, convex subset of a Fréchet space E_i;
(2) for each x ∈ Q, F_i(x) ≠ ∅ and co(F_i(x)) ⊆ G_i(x);
(3) for each y_i ∈ Q_i, the set [(co P_i)^{-1}(y_i) ∪ M_i] ∩ F_i^{-1}(y_i) is open in Q; here M_i = {x | x ∈ Q, F_i(x) ∩ P_i(x) = ∅};
(4) G_i : Q → 2^{Q_i} is a compact map; and
(5) x_i ∉ co(P_i(x)) for each x ∈ Q; here x_i is the projection of x on E_i.
In addition, suppose x_0 ∈ Q with
(6) C ⊆ Q countable, C ⊆ co({x_0} ∪ G(C)) implies cl C is compact
holding; here G : Q → 2^Q is given by
(7) G(x) = ∏_{i∈I} G_i(x), for x ∈ Q.
Then Γ has an equilibrium point x ∈ Q; that is, for each i ∈ I, we have

x_i ∈ cl_{E_i} G_i(x) and F_i(x) ∩ P_i(x) = ∅.

Proof. For each i ∈ I, let

N_i = {x | x ∈ Q, F_i(x) ∩ P_i(x) ≠ ∅},

and for each x ∈ Q let I(x) = {i | i ∈ I, x ∈ N_i}. For each i ∈ I, define multivalued mappings A_i, B_i : Q → 2^{Q_i} by

A_i(x) = co P_i(x) ∩ F_i(x) if i ∈ I(x), and A_i(x) = F_i(x) if i ∉ I(x),

and

B_i(x) = co P_i(x) ∩ G_i(x) if i ∈ I(x), and B_i(x) = G_i(x) if i ∉ I(x).

It is easy to see (use (2) and the definition of I(x)) that for each i ∈ I and x ∈ Q we have co(A_i(x)) ⊆ B_i(x) and A_i(x) ≠ ∅. Also, for each i ∈ I and y_i ∈ Q_i, we have

A_i^{-1}(y_i) = {x | x ∈ N_i, y_i ∈ co P_i(x) ∩ F_i(x)} ∪ {x | x ∈ M_i, y_i ∈ F_i(x)}
= [(co P_i)^{-1}(y_i) ∩ F_i^{-1}(y_i) ∩ N_i] ∪ [F_i^{-1}(y_i) ∩ M_i]
= [(co P_i)^{-1}(y_i) ∪ M_i] ∩ F_i^{-1}(y_i),

which is open in Q by (3). Thus B_i ∈ DTK(Q, Q_i). Let B : Q → 2^Q be defined by

B(x) = ∏_{i∈I} B_i(x), for x ∈ Q.

We now show that

C ⊆ Q countable, C ⊆ co({x_0} ∪ B(C)) implies cl C is compact.

To see this, let C ⊆ Q be countable with C ⊆ co({x_0} ∪ B(C)). Now, since B(x) ⊆ G(x) for x ∈ Q (note for each i ∈ I that B_i(x) ⊆ G_i(x) for x ∈ Q), we have

C ⊆ co({x_0} ∪ G(C)).

Now (6) implies cl C is compact. Theorem 3.24 guarantees that there exists x ∈ Q with x ∈ B(x). To see that x is an equilibrium point, note that if i ∈ I(x) for some i ∈ I, then x_i ∈ B_i(x) = co P_i(x) ∩ G_i(x) ⊆ co(P_i(x)), and this contradicts (5). Thus i ∉ I(x) for all i ∈ I; that is, for each i ∈ I, F_i(x) ∩ P_i(x) = ∅ and x_i ∈ B_i(x) = G_i(x) ⊆ cl_{E_i} G_i(x), so x is an equilibrium point of Γ.

Theorem 3.26. Let I be a countable set and Γ = (Q_i, F_i, G_i, P_i)_{i∈I} an abstract economy such that for each i ∈ I the following conditions hold:
(1) Q_i is a nonempty, closed, convex subset of a Fréchet space E_i;
(2) for each x ∈ Q, F_i(x) ≠ ∅ and co(F_i(x)) ⊆ G_i(x);
(3) for each y_i ∈ E_i, the set [(co P_i)^{-1}(y_i) ∪ M_i] ∩ F_i^{-1}(y_i) is open in Q; here M_i = {x | x ∈ Q, F_i(x) ∩ P_i(x) = ∅};
(4) G_i : Q → 2^{E_i} is a compact map; and
(5) x_i ∉ co(P_i(x)) for each x ∈ Q; here x_i is the projection of x on E_i.
In addition, suppose 0 ∈ Q with
(6) if {(x_n, λ_n)}_{n≥1} is a sequence in ∂Q × [0, 1] converging to (x, λ) with x ∈ λG(x) and 0 ≤ λ < 1, then there exists n_0 ∈ {1, 2, ...} with {λ_n G(x_n)} ⊆ Q for n ≥ n_0
holding; here G : Q → 2^E (here E = ∏_{i∈I} E_i) is given by

G(x) = ∏_{i∈I} G_i(x).

Then Γ has an equilibrium point x ∈ Q; that is, for each i ∈ I, we have

x_i ∈ cl_{E_i} G_i(x) and F_i(x) ∩ P_i(x) = ∅.

Proof. For each i ∈ I, let N_i, A_i and B_i be as in Theorem 3.25. Essentially the same reasoning as in Theorem 3.25 guarantees that B_i ∈ DTK(Q, E_i) for each i ∈ I. Also note that B_i is a compact map for each i ∈ I, so B, defined as in Theorem 3.25, is a compact map. We wish to apply Theorem 3.17. To see this, suppose {(x_n, λ_n)}_{n≥1} is a sequence in ∂Q × [0, 1] converging to (x, λ) with x ∈ λB(x) and 0 ≤ λ < 1. Then, since B(x) ⊆ G(x) for x ∈ Q, we have x ∈ λG(x). Now (6) guarantees that there exists n_0 ∈ {1, 2, ...} with {λ_n G(x_n)} ⊆ Q for each n ≥ n_0; consequently {λ_n B(x_n)} ⊆ Q for each n ≥ n_0. Theorem 3.17 guarantees that there exists x ∈ Q with x ∈ B(x), and, as in Theorem 3.25, x is an equilibrium point of Γ.

Remark 3.20. If E_i is a Hilbert space for each i ∈ I, then one could replace "G_i : Q → 2^{E_i} a compact map for each i ∈ I" in (4) with "G : Q → 2^E a one-set contractive, condensing map with G(Q) a bounded set in E".

Finally, in this subsection we present two more results for upper semicontinuous maps which extend well-known results in the literature.

Theorem 3.27. Let I be a countable set and Γ = (Q_i, F_i, G_i, P_i)_{i∈I} an abstract economy such that for each i ∈ I the following conditions hold:
(1) Q_i is a nonempty, closed, convex subset of a Fréchet space E_i;
(2) F_i : Q → 2^{Q_i} is such that co(F_i(x)) ⊆ G_i(x) and F_i(x) ≠ ∅ for each x ∈ Q;
(3) G_i : Q → 2^{Q_i} and G_i(x) is convex for each x ∈ Q;
(4) the multivalued mapping cl G_i : Q → CK(Q_i), defined by (cl G_i)(x) = cl_{Q_i} G_i(x), is upper semicontinuous;
(5) for each y_i ∈ Q_i, F_i^{-1}(y_i) is open in Q;
(6) for each y_i ∈ Q_i, P_i^{-1}(y_i) is open in Q; and
(7) x_i ∉ co(P_i(x)) for each x ∈ Q; here x_i is the projection of x on E_i.
In addition, suppose x_0 ∈ Q with
(8) A ⊆ Q, A ⊆ co({x_0} ∪ G(A)) with cl A = cl C and C ⊆ A countable, implies cl A is compact
holding; here G : Q → 2^Q is given by
(9) G(x) = ∏_{i∈I} (cl G_i)(x), for x ∈ Q.
Then Γ has an equilibrium point x ∈ Q; that is, for each i ∈ I, we have

x_i ∈ cl_{Q_i} G_i(x) and F_i(x) ∩ P_i(x) = ∅.

Proof. Fix i ∈ I, let Φ_i : Q → 2^{Q_i} be defined by

Φ_i(x) = co(F_i(x)) ∩ co(P_i(x)),

and let U_i = {x | x ∈ Q, Φ_i(x) ≠ ∅}. Now (5) and (6), together with a result of Yannelis and Prabhaker, imply for each y ∈ Q_i that (co F_i)^{-1}(y) and (co P_i)^{-1}(y) are open in Q. Now it is easy to check that, for each y ∈ Q_i,

Φ_i^{-1}(y) = (co F_i)^{-1}(y) ∩ (co P_i)^{-1}(y)

is open in Q, and as a result

U_i = ⋃_{y ∈ Q_i} Φ_i^{-1}(y)

is open in Q. Since U_i is a subset of the metrizable space E = ∏_{i∈I} E_i, we have that U_i is paracompact. Notice as well that Φ_i|U_i : U_i → 2^{Q_i} has convex values and, for y ∈ Q_i,

(Φ_i|U_i)^{-1}(y) = {x | x ∈ U_i, y ∈ Φ_i(x)} = Φ_i^{-1}(y) ∩ U_i,

so (Φ_i|U_i)^{-1}(y) is open in U_i. Theorem 3.16 guarantees that there exists a continuous selection f_i : U_i → Q_i of Φ_i|U_i. For each i ∈ I, let H_i : Q → 2^{Q_i} be given by

H_i(x) = {f_i(x)} if x ∈ U_i, and H_i(x) = cl_{Q_i} G_i(x) if x ∉ U_i.

This H_i is upper semicontinuous (note for each x ∈ U_i that {f_i(x)} ⊆ co(F_i(x)) ⊆ G_i(x)); also notice (4) guarantees that H_i : Q → CK(Q_i). Let H : Q → 2^Q be given by

H(x) = ∏_{i∈I} H_i(x), for x ∈ Q.

This H : Q → CK(Q) is upper semicontinuous. We wish to apply Theorem 3.13 to H. To see this, let A ⊆ Q with A = co({x_0} ∪ H(A)), cl A = cl C, and C ⊆ A countable. Then, since H_i(x) ⊆ cl_{Q_i} G_i(x) for each x ∈ Q, we have H(x) ⊆ G(x) for x ∈ Q, and thus

A ⊆ co({x_0} ∪ G(A)),

so (8) guarantees that cl A is compact. Theorem 3.13 guarantees that there exists x ∈ Q with x ∈ H(x). If x ∈ U_i for some i, then

x_i = f_i(x) ∈ co(F_i(x)) ∩ co(P_i(x)) ⊆ co(P_i(x)).

This contradicts (7). Thus we must have x ∉ U_i for each i ∈ I, so x_i ∈ cl_{Q_i} G_i(x) and co(F_i(x)) ∩ co(P_i(x)) = ∅ for each i ∈ I. Our result follows since

F_i(x) ∩ P_i(x) ⊆ co(F_i(x)) ∩ co(P_i(x)).

Remark 3.21. Notice that (5) and (6) in the last theorem could be replaced by

(10) for each i ∈ I, for each y_i ∈ Q_i, (co F_i)^{-1}(y_i) ∩ (co P_i)^{-1}(y_i) is open in Q,

and the result is again true.

Theorem 3.28. Let I be a countable set and Γ = (Q_i, F_i, G_i, P_i)_{i∈I} an abstract economy such that for each i ∈ I the following conditions hold:
(1) Q_i is a nonempty, closed, convex subset of a Fréchet space E_i;
(2) F_i : Q → 2^{E_i} is such that co(F_i(x)) ⊆ G_i(x);
(3) G_i : Q → 2^{E_i} and G_i(x) is convex for each x ∈ Q;
(4) the multivalued mapping cl G_i : Q → CK(E_i), defined by (cl G_i)(x) = cl_{E_i} G_i(x), is upper semicontinuous;
(5) for each y_i ∈ E_i, F_i^{-1}(y_i) is open in Q;
(6) for each y_i ∈ E_i, P_i^{-1}(y_i) is open in Q; and
(7) x_i ∉ co(P_i(x)) for each x ∈ Q; here x_i is the projection of x on E_i.
In addition, suppose 0 ∈ Q with
(8) if {(x_n, λ_n)}_{n≥1} is a sequence in ∂Q × [0, 1] converging to (x, λ) with x ∈ λG(x) and 0 ≤ λ < 1, then there exists n_0 ∈ {1, 2, ...} with {λ_n G(x_n)} ⊆ Q for n ≥ n_0
holding; here G : Q → 2^E (here E = ∏_{i∈I} E_i) is given by

G(x) = ∏_{i∈I} (cl G_i)(x).

Then Γ has an equilibrium point x ∈ Q; that is, for each i ∈ I, we have

x_i ∈ cl_{E_i} G_i(x) and F_i(x) ∩ P_i(x) = ∅.

Proof. Let Φ_i, U_i, H_i and H be as in the previous theorem. Essentially the same reasoning as in the previous theorem guarantees that H : Q → CK(E) is upper semicontinuous. Notice also that H is compact. We wish to apply Theorem 3.15 to H. To see this, suppose {(x_n, λ_n)}_{n≥1} is a sequence in ∂Q × [0, 1] converging to (x, λ) with x ∈ λH(x) and 0 ≤ λ < 1. Then, since H(x) ⊆ G(x) for x ∈ Q, we have x ∈ λG(x). Now (8) guarantees that there exists n_0 ∈ {1, 2, ...} with {λ_n G(x_n)} ⊆ Q for each n ≥ n_0; consequently {λ_n H(x_n)} ⊆ Q for each n ≥ n_0. Theorem 3.15 guarantees that there exists x ∈ Q with x ∈ H(x), and x is an equilibrium point of Γ.

Theorem 3.29. Let I be a countable set and Γ = (Q_i, F_i, P_i)_{i∈I} an abstract economy such that for each i ∈ I the following conditions hold:
(1) Q_i is convex;
(2) D_i is a nonempty compact subset of Q_i;
(3) for each x ∈ Q, F_i(x) is a nonempty convex subset of D_i;
(4) for each y_i ∈ D_i, {P_i^{-1}(y_i) ∪ U_i} ∩ F_i^{-1}(y_i) contains a relatively open subset O_{y_i} of Q such that ⋃_{y_i ∈ D_i} O_{y_i} = co D, where U_i = {x | x ∈ Q, P_i(x) ∩ F_i(x) = ∅} and D = ∏_{i∈I} D_i; and
(5) for each x = {x_i} ∈ Q, x_i ∉ co P_i(x).
Then Γ has an equilibrium point.

Proof. For each i ∈ I, let

G_i = {x | x ∈ Q, P_i(x) ∩ F_i(x) ≠ ∅},

and for each x ∈ Q let

I(x) = {i | i ∈ I, P_i(x) ∩ F_i(x) ≠ ∅}.

Now for each i ∈ I we define a multivalued mapping T_i : Q → 2^{D_i} by

T_i(x) = co P_i(x) ∩ F_i(x) if i ∈ I(x), and T_i(x) = F_i(x) if i ∉ I(x).

Clearly, for each x ∈ Q, T_i(x) is a nonempty convex subset of D_i. Also, for each y_i ∈ D_i,

T_i^{-1}(y_i) = [(co P_i)^{-1}(y_i) ∩ F_i^{-1}(y_i) ∩ G_i] ∪ [F_i^{-1}(y_i) ∩ U_i]
⊇ [P_i^{-1}(y_i) ∩ F_i^{-1}(y_i) ∩ G_i] ∪ [F_i^{-1}(y_i) ∩ U_i]
= [P_i^{-1}(y_i) ∪ U_i] ∩ F_i^{-1}(y_i).

We note that the inclusion follows from the fact that, for each y_i ∈ D_i, P_i^{-1}(y_i) ⊆ (co P_i)^{-1}(y_i), because P_i(x) ⊆ (co P_i)(x) for each x ∈ Q; the last equality uses G_i ∪ U_i = Q. Furthermore, by virtue of (4), for each y_i ∈ D_i the set T_i^{-1}(y_i) contains a relatively open set O_{y_i} of Q such that ⋃_{y_i ∈ D_i} O_{y_i} = co D. Hence, by a result of Hussain and Tarafdar, there exists a point x = {x_i} such that x_i ∈ T_i(x) for each i ∈ I. By condition (5) and the definition of T_i, it now easily follows that x ∈ Q is an equilibrium point of Γ: if i ∈ I(x) for some i, then x_i ∈ co P_i(x) ∩ F_i(x) ⊆ co P_i(x), contradicting (5); hence P_i(x) ∩ F_i(x) = ∅ and x_i ∈ T_i(x) = F_i(x) for each i ∈ I.

Corollary 3.4. Let I be a countable set and Γ = (Q_i, F_i, P_i)_{i∈I} an abstract economy such that for each i ∈ I the following conditions hold:
(1) Q_i is convex;
(2) D_i is a nonempty compact subset of Q_i;
(3) for each x ∈ Q, F_i(x) is a nonempty convex subset of D_i;
(4) the set G_i = {x | x ∈ Q, P_i(x) ∩ F_i(x) ≠ ∅} is a closed subset of Q;
(5) for each y_i ∈ D_i, P_i^{-1}(y_i) is a relatively open subset of G_i, and F_i^{-1}(y_i) is a relatively open subset of Q; and
(6) for each x = {x_i} ∈ Q, x_i ∉ co P_i(x).
Then there is an equilibrium point of the economy Γ.

Proof. Since P_i^{-1}(y_i) is relatively open in G_i, there is an open subset V_i of Q with P_i^{-1}(y_i) = G_i ∩ V_i. Hence, for y_i ∈ D_i,

P_i^{-1}(y_i) ∪ U_i = (G_i ∩ V_i) ∪ U_i = V_i ∪ U_i,

since G_i ∪ U_i = Q. Thus

{P_i^{-1}(y_i) ∪ U_i} ∩ F_i^{-1}(y_i) = (V_i ∪ U_i) ∩ F_i^{-1}(y_i) = O_{y_i},

say, which is a relatively open subset of Q for each y_i ∈ D_i, since V_i, U_i and F_i^{-1}(y_i) are open subsets of Q. Now it follows that ⋃_{y_i ∈ D_i} O_{y_i} = co D. The corollary is thus a consequence of Theorem 3.29.
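The equilibrium notion used throughout these theorems, a point x with x_i ∈ F_i(x) and F_i(x) ∩ P_i(x) = ∅ for each agent i, can be checked by brute force on a finite toy example. The data below are entirely hypothetical (two agents, two strategies each, F_i making everything feasible and P_i collecting the strictly better strategies of an assumed coordination payoff); the sketch only illustrates the definition, not the topological theorems.

```python
# Hypothetical finite abstract economy: an equilibrium point is a profile x
# with x_i in F_i(x) and F_i(x) disjoint from P_i(x), for each agent i.
from itertools import product

Q = [(0, 1), (0, 1)]            # Q_i: strategy set of agent i (assumed data)

def u(i, x):
    """Assumed payoffs: each agent prefers to match the other."""
    return 1 if x[0] == x[1] else 0

def F(i, x):
    """Constraint map: here every strategy is feasible."""
    return set(Q[i])

def P(i, x):
    """Preference map: strategies strictly better than the current one."""
    better = set()
    for s in Q[i]:
        y = list(x); y[i] = s
        if u(i, tuple(y)) > u(i, x):
            better.add(s)
    return better

equilibria = [x for x in product(*Q)
              if all(x[i] in F(i, x) and not (F(i, x) & P(i, x))
                     for i in range(2))]
print(equilibria)   # [(0, 0), (1, 1)]
```

With F_i equal to the whole strategy set, the condition F_i(x) ∩ P_i(x) = ∅ reduces to "no strictly better strategy exists", so the equilibria found are exactly the Nash equilibria of the assumed coordination game.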

3.4 Existence of first-order locally consistent equilibria

3.4.1 Introduction

A first-order locally consistent equilibrium (1-LCE) of a game is a configuration of strategies at which the first-order condition for payoff maximization is simultaneously satisfied for all players. The economic motivation for introducing this equilibrium concept is that oligopolistic firms do not know their effective demand function, but "at any given status quo each firm knows only the linear approximation of its demand curve and believes it to be the demand curve it faces". In what follows, in order to distinguish between the abstract concept of 1-LCE, that is, a configuration of a game in which the first-order condition for payoff maximization is satisfied for all players, and its economic interpretation, that is, a profit-maximizing configuration in a market or in an economy in which firms know only the linear approximation of their demand functions, the latter equilibrium concept will be called a first-order locally consistent economic equilibrium (1-LCEE) (see [1], [22], [23]).

3.4.2 First-order equilibria for non-cooperative games

Consider the following non-cooperative game Γ = (I, (S_i), (H_i))_{i∈I}, where I = {1, 2, ..., n} is the index set of players, S_i is the strategy set of player i, and H_i is the payoff function of player i. Set S = ∏_{i∈I} S_i and S_{-i} = ∏_{j∈I, j≠i} S_j. The generic element of the set S (respectively S_{-i}, respectively S_i) is denoted by x (resp. x_{-i}, resp. x_i). Denote by D_{x_i}H_i the derivative of H_i with respect to x_i; the derivative of H_i with respect to x_i calculated at the point x is denoted by D_{x_i}H_i(x).

A.1. For all i ∈ I, S_i is a convex and compact subset of a Banach space.

A.2. For all i ∈ I, the function H_i : S → R is continuous; moreover, for every x ∈ S the derivative D_{x_i}H_i exists and is continuous; that is, there exists an open set W_i^0 ⊇ S_i and an extension of the function H_i to W_i^0 which is continuously differentiable with respect to x_i.

Definition 3.7. A 1-LCE for the game Γ is a configuration x* ∈ S such that:
(i) if x_i* ∈ S_i \ ∂S_i, then D_{x_i}H_i(x*) = 0;
(ii) if x_i* ∈ ∂S_i, then there exists a neighborhood N(x_i*) of x_i* in S_i such that D_{x_i}H_i(x*)(x_i - x_i*) ≤ 0 for every x_i ∈ N(x_i*).
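Definition 3.7 can be verified directly in a smooth concrete game. The sketch below is a hypothetical illustration, not an example from the text: a Cournot-style duopoly with S_i = [0, 1], payoff H_i(x) = x_i(1 - x_1 - x_2) - c x_i and assumed cost c = 0.1. It solves the interior first-order conditions and checks condition (i).

```python
# Hypothetical smooth game: S_i = [0, 1], H_i(x) = x_i*(1 - x1 - x2) - c*x_i.
# An interior 1-LCE (Definition 3.7(i)) requires D_{x_i} H_i(x*) = 0 for all i.
c = 0.1

def dH(i, x):
    """Partial derivative of player i's payoff with respect to x_i."""
    return 1.0 - 2.0 * x[i] - x[1 - i] - c

# The system 1 - 2*x_i - x_j - c = 0 has the symmetric solution x_i = (1-c)/3,
# which lies in the interior of [0, 1], so condition (i) applies.
x_star = [(1.0 - c) / 3.0] * 2
print(x_star)                    # the symmetric interior 1-LCE, x_i = 0.3
for i in (0, 1):
    print(dH(i, x_star))         # each ≈ 0 (up to floating point)
```

Since the payoffs here are concave in each player's own strategy, this 1-LCE is also a Nash equilibrium; in general, Definition 3.7 only encodes the first-order information.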

Condition (ii) means that if x_i* belongs to the boundary of the strategy set, then either it satisfies the first-order condition for payoff maximization, or it is a local maximum. Notice that Definition 3.7 is in line with the usual idea that at 1-LCEs players carry out local experiments by employing the linear approximations of some appropriate function.

Given a configuration x^0 ∈ S, interpreted as the status quo, define the function π_i : S × S_i → R as follows:

π_i(x^0, x_i) = H_i(x^0) + D_{x_i}H_i(x^0)(x_i - x_i^0).

With some abuse of language, to the game Γ = (I, (S_i), (H_i))_{i∈I} and the status quo x^0 will be associated the following fictitious n-person non-cooperative game Γ^c = (I, (S_i), (π_i))_{i∈I}. In game Γ^c the behavioral hypothesis is that, given the status quo, the best strategy for player i is the solution to the following problem:

(P_i) max_{x_i ∈ S_i} π_i(x^0, x_i).

Denote by Φ_i(x^0) the set of solutions to problem (P_i). If we interpret game Γ^c as an oligopolistic game among firms which choose, for example, the level of production, then the behavioral hypothesis underlying problem (P_i) is that, given the status quo, firms maximize the linear approximation of their profit functions.

Definition 3.8. An equilibrium for the game Γ^c is a configuration x* ∈ S such that x_i* ∈ Φ_i(x*) for every i ∈ I.

Denote by E(Γ^c) the set of equilibria of game Γ^c and by LCE(Γ) the set of 1-LCEs of game Γ.

Theorem 3.30. Under A.1 and A.2, LCE(Γ) ≠ ∅.

Proof. First we show that E(Γ^c) = LCE(Γ). Suppose that x* ∈ E(Γ^c). It is sufficient to show that x* satisfies conditions (i) and (ii) in Definition 3.7. First, suppose that x_i* ∈ S_i \ ∂S_i but D_{x_i}H_i(x*) ≠ 0. Since x_i* is an interior point of S_i, the linearity of π_i implies that there exists a point x_i' ∈ ∂S_i such that π_i(x*, x_i') > π_i(x*, x_i*); thus x_i* does not solve problem (P_i), a contradiction. Suppose now that x_i* ∈ ∂S_i and D_{x_i}H_i(x*)(x_i - x_i*) > 0 for some x_i arbitrarily close to x_i* in S_i. Clearly π_i(x*, x_i) > π_i(x*, x_i*), which is again a contradiction. Summarizing, x* satisfies conditions (i) and (ii) in Definition 3.7, so x* ∈ LCE(Γ).

Conversely, suppose that x* ∈ LCE(Γ). If x_i* ∈ S_i \ ∂S_i, then D_{x_i}H_i(x*) = 0, and π_i(x*, x_i) = H_i(x*) for every x_i ∈ S_i; it follows that x_i* solves problem (P_i). Consider now the case x_i* ∈ ∂S_i with D_{x_i}H_i(x*)(x_i - x_i*) ≤ 0 for every x_i in some neighborhood N(x_i*). By linearity, one obtains that D_{x_i}H_i(x*)(x_i - x_i*) ≤ 0 for every x_i ∈ S_i; therefore, also in this case x_i* solves problem (P_i). Thus x* ∈ E(Γ^c).

Now it is sufficient to show that E(Γ^c) ≠ ∅. By A.1 and A.2 and by Berge's maximum theorem, the multivalued mapping Φ_i : S → 2^{S_i} is upper hemicontinuous; it is also convex-valued because of the linearity of π_i. Define the multivalued mapping Φ : S → 2^S by Φ = ∏_{i∈I} Φ_i. By A.1 and A.2, a Bohnenblust and Karlin fixed point theorem ensures that there exists x* ∈ S such that x* ∈ Φ(x*). Thus x* ∈ E(Γ^c), and therefore x* ∈ LCE(Γ).

3.4.3 Existence of a first-order economic equilibrium

Next, we prove the existence of a first-order locally consistent economic equilibrium in a model of monopolistic competition similar to that of Bonanno and Zeeman.

Example 3.2. We consider a monopolistically competitive market with n price-making firms, I = {1, 2, ..., n}. The cost function of firm i is C_i(q_i) = c_i q_i, where q_i is the level of output of firm i and c_i is a positive number. We assume that firm i chooses a price in the interval J_i = [c_i, P_i]. Set J = ∏_{i∈I} J_i and J_{-i} = ∏_{j∈I, j≠i} J_j. The price set by firm i is denoted by p_i, and p_{-i} denotes the (n-1)-dimensional vector whose elements are the prices set by all firms except the i-th one; set p = (p_i, p_{-i}). The demand function of firm i is D_i : J → R, and its value at p is indicated by D_i(p). The true profits of firms are given by

H_i(p) = D_i(p)(p_i - c_i).

We suppose that:

A.1. For every p_{-i} ∈ J_{-i}, the function D_i is continuous on J, and the derivative ∂D_i/∂p_i : J → R exists and is continuous.

A.2. For every i ∈ I and p_{-i} ∈ J_{-i}, if D_i(p_i^0, p_{-i}) = 0 for some p_i^0 ∈ J_i \ {P_i}, then (∂D_i/∂p_i)(p_i^0, p_{-i}) = 0 and D_i(p_i'', p_{-i}) = 0 for every p_i'' ≥ p_i^0. Here it is possible that, for every price in J_i, firm i's market demand is zero.

Given the status quo p^0 ∈ J, the conjectural demand of firm i is

δ_i(p_i, p^0) := D_i(p^0) + (∂D_i/∂p_i)(p^0)(p_i - p_i^0),

and the conjectural profit is

H~_i(p_i, p^0) := δ_i(p_i, p^0)(p_i - c_i).

We shall assume that firms maximize their conjectural profit function, calculated by taking into account the linear approximation of their demand function.

Definition 3.9. A first-order locally consistent economic equilibrium is a vector p* ∈ J such that for every i ∈ I we have

H~_i(p_i*, p*) ≥ H~_i(p_i, p*), for every p_i ∈ J_i.

Remark 3.21. Definition 3.9 means that at equilibrium firms are maximizing their conjectural profit function. It is easily seen that if p* is a first-order locally consistent economic equilibrium, then: i) δ_i(p_i*, p*) = D_i(p*), and ii) (∂δ_i/∂p_i)(p*) = (∂D_i/∂p_i)(p*). Condition i) means that at equilibrium the conjectural demand must be equal to the true demand; condition ii) means that at equilibrium the slope of the true demand function is equal to the slope of the conjectural demand.

We have:

Theorem 3.31. Under A.1 and A.2 there exists a first-order locally consistent economic equilibrium.

Proof. By setting S_i = J_i and x_i = p_i, i ∈ I, the industry we are considering reduces to the game Γ considered above. Under A.1 and A.2 the game Γ clearly has a first-order locally consistent equilibrium x* = (x_i*)_{i∈I}; set p_i* = x_i*, i ∈ I. To prove Theorem 3.31 it is sufficient to prove that if (p_i*)_{i∈I} is a first-order locally consistent equilibrium, then it satisfies the condition in Definition 3.9. We have to consider three possible cases: a) p_i* = P_i; b) p_i* = c_i; c) p_i* ∈ J_i \ ∂J_i.

Case a), p_i* = P_i. Assumption A.2 ensures that D_i(p*) = (∂D_i/∂p_i)(p*) = 0. It follows that δ_i(p_i, p*) = 0, and hence H~_i(p_i, p*) = 0 = H~_i(p_i*, p*) for every p_i ∈ J_i; thus the condition in Definition 3.9 is satisfied.

Case b), p_i* = c_i. Two cases can occur: b1) (∂H_i/∂p_i)(p*) = 0, and b2) (∂H_i/∂p_i)(p*) ≠ 0. In case b1) it is not possible that D_i(p*) > 0: in fact, if it were so, since p_i* = c_i, one would have (∂H_i/∂p_i)(p*) = D_i(p*) > 0, which is a contradiction. If D_i(p*) = 0, then H~_i(p_i*, p*) = 0 and H~_i(p_i, p*) = 0 for every p_i ∈ J_i, and the condition in Definition 3.9 is satisfied. In case b2), by the fact that p* is a first-order locally consistent equilibrium, one has (∂H_i/∂p_i)(p*)(p_i - p_i*) ≤ 0 for p_i ∈ N(c_i), where N(c_i) is a right neighborhood of c_i. Because p_i* = c_i, one has D_i(p*)(p_i - p_i*) ≤ 0 for p_i ∈ N(c_i), which implies D_i(p*) = 0; thus H~_i(p_i*, p*) = 0, while

H~_i(p_i, p*) = (D_i(p*) + (∂D_i/∂p_i)(p*)(p_i - p_i*))(p_i - c_i) = (∂D_i/∂p_i)(p*)(p_i - c_i)^2 ≤ 0

for every p_i ∈ J_i \ {c_i}, because p_i* = c_i and (∂D_i/∂p_i)(p*) ≤ 0 from assumption A.2. Thus also in this case the condition of Definition 3.9 is satisfied.

Case c), p_i* ∈ J_i \ ∂J_i. By definition of first-order locally consistent equilibrium one must have (∂H_i/∂p_i)(p*) = 0. Two cases can occur: c1) D_i(p*) > 0, and c2) D_i(p*) = 0. In case c1), by noticing that (∂H_i/∂p_i)(p*) = 0 and D_i(p*) > 0 imply (∂D_i/∂p_i)(p*) < 0, and that (∂²H~_i/∂p_i²)(p_i, p*) = 2(∂D_i/∂p_i)(p*), one can conclude that (∂²H~_i/∂p_i²)(p_i, p*) < 0; since H~_i(·, p*) is then concave with (∂H~_i/∂p_i)(p_i*, p*) = (∂H_i/∂p_i)(p*) = 0, the condition in Definition 3.9 is satisfied. In case c2), if we prove that (∂D_i/∂p_i)(p*) = 0 we have completed the proof, because in this case H~_i(p_i, p*) = H~_i(p_i*, p*) = 0, p_i ∈ J_i. Suppose, on the contrary, that (∂D_i/∂p_i)(p*) < 0. Then

(∂H_i/∂p_i)(p*) = D_i(p*) + (∂D_i/∂p_i)(p*)(p_i* - c_i) = (∂D_i/∂p_i)(p*)(p_i* - c_i) < 0,

contradicting the hypothesis that p* is a first-order locally consistent equilibrium. This implies that (∂D_i/∂p_i)(p*) = 0, so also in this last case the condition in Definition 3.9 is satisfied. The proof is complete.
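Definition 3.9 can be illustrated with an assumed linear demand, for which the conjectural demand coincides exactly with the true demand at every status quo. The numbers below (two symmetric firms with a = 10, b = 2, d = 1 and c_i = 1) are hypothetical, not from the text; the sketch solves the first-order condition D_i(p*) - b(p_i* - c_i) = 0 and confirms that p* maximizes the conjectural profit.

```python
# Hypothetical linear-demand illustration of Definition 3.9:
# two firms, D_i(p) = a - b*p_i + d*p_j (j != i), constant marginal cost c.
# Conjectural profit at status quo p0: (D_i(p0) - b*(p_i - p0_i)) * (p_i - c).
a, b, d, c = 10.0, 2.0, 1.0, 1.0

def D(i, p):
    return a - b * p[i] + d * p[1 - i]

def conj_profit(i, p_i, p0):
    return (D(i, p0) - b * (p_i - p0[i])) * (p_i - c)

# First-order condition D_i(p*) - b*(p*_i - c) = 0, solved symmetrically:
p_star = [(a + b * c) / (2 * b - d)] * 2      # [4.0, 4.0] for these numbers
print(p_star)

# p* maximizes each firm's conjectural profit over a grid of prices:
grid = [c + 0.01 * k for k in range(1001)]    # prices in [1, 11]
best = max(conj_profit(0, q, p_star) for q in grid)
print(best, conj_profit(0, p_star[0], p_star))   # equal up to the grid step
```

Because the demand is linear here, δ_i(p_i, p*) equals D_i(p_i, p*_{-i}) identically, so conditions i) and ii) of Remark 3.21 hold automatically and the equilibrium found is also a Bertrand equilibrium of the true game.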

Remark 3.22. In [9], Bonanno and Zeeman have provided a general existence result for a first-order locally consistent equilibrium in an abstract game-theoretic model, and they employ their existence result to prove the existence of a first-order locally consistent equilibrium in a monopolistically competitive industry with price-making firms.

3.4.4 First-order equilibria for an abstract economy

We consider an abstract economy with production, with m firms and n goods, given by Γ = (G, I, J, (X_i)_{i∈I}, (u_i)_{i∈I}, (ω_i)_{i∈I}, (θ_i)_{i∈I}, (Y_j)_{j∈J}), where G, I and J are the index sets of goods, households and firms, respectively. Given the production profile y = (y_1, y_2, ..., y_m) ∈ Y, where Y = ∏_{j∈J} Y_j, the intermediate endowment of consumer i is

ω_i^0(y) = ω_i + Σ_{j∈J} θ_ij y_j.

We denote by F_i(p, y) the individual demand mapping and by z(p, y) the aggregate excess demand mapping of the economy at price p ∈ R_+^n, given the production profile y. The symbol W(y) indicates the set of Walrasian prices associated with the production profile y; that is,

W(y) = {p | p ∈ Δ, z(p, y) = 0}.

We set V = {y | y ∈ R_+^m, W(y) ≠ ∅}. If W(y) is nonempty, then W(y) is a singleton, W(y) = {p(y)}.

We suppose that:

A1. For all i ∈ I, u_i is such that F_i(p, y) is single-valued, strictly positive, and of class C^1 in R_+^n × V.

A2. Y is compact, and Y_j is a convex set, j ∈ J.

A3. Y ⊆ V; moreover, z(p, y) is single-valued.

A4. For all y ∈ V, the rank of D_{p^{-n}} z^[p(y), y] is n - 1, where z^ is the function z without its last component and D_{p^{-n}} is the derivative with respect to the first n - 1 components of p.

The producer j calculates his profits on the basis of the linear approximation of the effective demand function:

p~_j(y_j, y^0) = p(y^0) + (y_j - y_j^0) D_{y_j} p(y^0)^T,

where y^0 is a status quo, D_{y_j} denotes the derivative with respect to y_j, and the symbol T indicates the operation of transposition for matrices.

Definition 3.10. A first-order locally consistent economic equilibrium for the economy Γ is a configuration (p*, (y_j*)_{j∈J}) with (y_j*)_{j∈J} ∈ Y such that

p~_j(y_j*, y*) y_j* ≥ p~_j(y_j, y*) y_j, y_j ∈ Y_j, j ∈ J.

This definition means that at a first-order locally consistent economic equilibrium firms are maximizing their profits according to their perceived demand functions. It is easily seen that if (p*, (y_j*)_{j∈J}) is a first-order locally consistent economic equilibrium, then (a) p~_j(y_j*, y*) = p(y*), and (b) D_{y_j} p~_j(y_j*, y*) = D_{y_j} p(y*), j ∈ J. Condition (a) means that at a first-order locally consistent economic equilibrium perceived prices are equal to the true ones, while condition (b) means that the slopes of the perceived demand curves are equal to the slopes of the true demand curves.

To this end. by assumption.If the assumptions A1-A4 holds and Dyj p(y) 0 for every y 2 Y and for every 2 Rn . and (b) Dyj pj (yj . j 2 J: Condition (a) means that at …rst-order locally consistent economic equilibrium perceived prices are equal to the true ones. y) = pj (yj . xj = yj and Hi (yj . yj 2 Yj . yj 2 Yj : p(y )yj + (yj yj )Dyj p(y )T yj . It is easily seen that if (p . then the economy has a …rst-order locally consistent economic equilibrium. yj 2 Yj .32. This ends the proof. We have Theorem 3. j 2 J. From the …rst member of last relationship and by taking into account previous relationship.This de…nition means that at a …rst-order locally consistent economic equilibrium …rms are maximizing their pro…ts according their perceived demand functions. Proof. j 2 J.9. y ) = Dyj p(y ). [p(y ) + yj Dyj p(y )](yj yj ) 0. note that since y is a …rst-order locally consistent equilibrium. If use set Sj = Yj . 127 . Under assumptions A1-A4. (yj )j2J ) is a …rst-order locally consistent economic equilibrium then (a) pj (yj . y)yj . j 2 J: In order to prove the theorem it is su¢ cient to prove that y = (yj )j2J satis…es condition in De…nition 3. one obtains p(y )(yj yj ) + yj Dyj p(y )(yj yj ) yj ) yj ) = yj Dyj p(y )(yj = (yj yj Dyj p(y )(yj yj ) yj )Dyj p(y )(yj 0. p(y) is C 1 and this game has clearly a …rstorder locally consistent equilibrium (xj )j2J : We set yj = xj . while condition (b) means that the slopes of the perceived demand curves are equal to the slopes of the true demand curves. yj 2 Yj : p(y )yj + [p(y ) + yj Dyj p(y )](yj yj ) We prove the assertion if we show that y satis…es the following condition p(y )yj that is. then it must satisfy the condition p(y )yj or p(y )(yj yj ) yj Dyj p(y )(yj yj ). the economy reduces to the game introduced in the …rst subsection of this section. y ) = p(y ).
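The role of the perceived demand p_j(y_j, y^0) = p(y^0) + (y_j − y_j^0) D_{y_j} p(y^0)^T can be checked numerically in a one-good, one-firm toy case. Everything in the sketch below, the inverse demand p(y) = 12 − 2y, the zero production cost and the grid of deviations, is a hypothetical choice made for the illustration, not data from the text.

```python
# Hypothetical one-good, one-firm illustration of the perceived demand
# p_j(y_j, y0) = p(y0) + (y_j - y0) * Dp(y0) used above.

def p(y):
    return 12.0 - 2.0 * y            # assumed true (Walrasian) price at production y

def dp(y):
    return -2.0                      # slope D p(y) of the assumed true demand

def perceived_price(yj, y0):
    # first-order (linear) approximation of the price around the status quo y0
    return p(y0) + (yj - y0) * dp(y0)

def perceived_profit(yj, y0):
    return perceived_price(yj, y0) * yj   # profit, with zero cost assumed

# y* = 3 solves the first-order condition p(y) + y * Dp(y) = 12 - 4y = 0,
# so [p(y*) + y* Dp(y*)](y_j - y*) <= 0 for every feasible y_j.
y_star = 3.0
profit_star = perceived_profit(y_star, y_star)
best_deviation = max(perceived_profit(k / 10.0, y_star) for k in range(61))

print(profit_star)                            # perceived profit at the equilibrium
print(best_deviation <= profit_star + 1e-9)   # no deviation does better
```

Because the assumed demand is itself linear, the perceived and the true profit coincide here; with a nonlinear p the approximation would agree with the truth only in level and slope at y*, which is exactly what conditions (a) and (b) express.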

3.5 Existence of equilibrium in generalized games with non-convex strategy spaces

3.5.1 Introduction

The concept of a generalized game (or abstract economy) extends the notion of Nash non-cooperative game to situations in which the strategy set of each player depends on the choices of all the other players. This concept was introduced by Debreu, who proved the existence of equilibrium in generalized games under general assumptions. Arrow and Debreu applied this result to obtain the existence of competitive equilibrium by considering convex strategy subsets of a finite dimensional space and a finite number of agents with continuous quasi-concave utility functions. Since then, the Arrow-Debreu result has been extended in several directions by assuming weaker conditions on strategy spaces, agent preferences, and so on. We mention the works of Gale and Mas-Colell, of Shafer and Sonnenschein and of Border, who consider preference relations which are not transitive or complete; of Yannelis and Prabhakar and of Tarafdar, who consider infinite dimensional strategy spaces or an infinite number of agents; as well as of Borglin and Keiding, who modify the continuity conditions on the constraint and preference multivalued mappings.

Most of these existence theorems are proven by assuming convexity conditions on the strategy spaces as well as on the constraint multivalued mapping, which allow the application of very well known fixed point theorems, such as those of Brouwer, Kakutani or Browder. The purpose of this section is to present generalizations of some of these results on the existence of equilibrium in generalized games by relaxing the convexity conditions. These results cover situations in which neither strategy spaces nor preferences are convex. In order to do that, we make use of a new abstract convexity notion called mc-spaces, which generalizes usual convexity as well as other abstract convexity structures.

3.5.2 Abstract convexity

This subsection is devoted to introducing the new notion of abstract convexity, which will be used throughout the section. Formally, an abstract convexity on a set X is a family C = {A_i}_{i∈I} of subsets of X which is stable under arbitrary intersections, that is, for all J ⊂ I, ∩_{i∈J} A_i ∈ C, and which contains the empty set and the total set, ∅, X ∈ C.

The notion of mc-spaces is based on the idea of replacing the linear segments which join any pair of points (or the convex hull of a finite set of points) in the usual convexity by a path (respectively, a set) that will play its role.

Definition 3.12. A topological space X is an mc-space, or has an mc-structure, if for any nonempty finite subset of X there exist an ordering on it, namely A = {a_0, a_1, ..., a_n} ⊂ X, a set of elements {b_0, b_1, ..., b_n} ⊂ X (not necessarily different) and a family of functions P_i^A : X × [0, 1] → X, i = 0, 1, ..., n, such that

1. P_i^A(x, 0) = x and P_i^A(x, 1) = b_i, for all x ∈ X;

2. the function G_A : [0, 1]^n → X given by

G_A(t_0, t_1, ..., t_{n−1}) = P_0^A(...(P_{n−1}^A(P_n^A(b_n, 1), t_{n−1}), ...), t_0)

is a continuous function.

Note that if P_i^A(x, t) is continuous in t, then P_i^A(x, [0, 1]) represents a continuous path which joins x and b_i. These paths depend, in some sense, on the points which are considered, as well as on the finite subset A which contains them. Thus, the function G_A can be interpreted as follows: P_{n−1}^A(b_n, t_{n−1}) = p_{n−1} represents a point of the path which joins b_n with b_{n−1}; P_{n−2}^A(p_{n−1}, t_{n−2}) = p_{n−2} is a point in the path which joins p_{n−1} with b_{n−2}, etc. So G_A can be seen as a composition of these paths, and can be considered as an abstract convex combination of the finite set A.

If X is a convex subset of a topological vector space, then for any finite subset A = {a_0, a_1, ..., a_n} we can define the functions P_i^A(x, t) = (1 − t)x + t a_i, which represent the segment joining x and a_i when t runs over [0, 1]. In this case the image of the composition, G_A([0, 1]^n), coincides with the convex hull of A, so mc-sets, introduced next, generalize usual convex sets.

Definition 3.13. Let X be an mc-space, Z a subset of X, and denote by ⟨X⟩ the family of nonempty finite subsets of X. If A ∈ ⟨X⟩ satisfies A ∩ Z ≠ ∅, with A ∩ Z = {a_{i_0}, a_{i_1}, ..., a_{i_m}} (i_0 < i_1 < ... < i_m), we define the restriction of the function G_A to Z as follows: G_{A|Z} : [0, 1]^m → X,

G_{A|Z}(t) = P_{i_0}^A(...(P_{i_{m−1}}^A(P_{i_m}^A(b_{i_m}, 1), t_{i_{m−1}}), ...), t_{i_0}),

where the P_{i_k}^A are the functions associated with the elements a_{i_k} ∈ A ∩ Z and m = |A ∩ Z| − 1. A subset Z of the mc-space X is an mc-set if and only if it is satisfied that

for all A ∈ ⟨X⟩ with A ∩ Z ≠ ∅, G_{A|Z}([0, 1]^m) ⊂ Z.

Since the family of mc-sets is stable under arbitrary intersections, it defines an abstract convexity on X: given an mc-structure, the family of those sets which are stable under the functions G_A is an abstract convexity. Furthermore, we can define the mc-hull operator in the usual way,

C_mc(Z) = ∩ {B | Z ⊂ B, B is an mc-set}.

Then it is obvious that for all A ∈ ⟨X⟩ such that A ∩ Z ≠ ∅, G_{A|Z}([0, 1]^m) ⊂ C_mc(Z).
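The convex case just mentioned can be written out as a small worked example; this is only the standard convexity of a vector space, restated in the notation of the definition of mc-spaces.

```latex
% Worked example: a convex subset C of a topological vector space carries
% an mc-structure, with the ordering of A arbitrary and b_i = a_i.
\begin{align*}
A &= \{a_0,\dots,a_n\}\subset C, \qquad b_i = a_i,\\
P_i^A(x,t) &= (1-t)\,x + t\,a_i \qquad (t\in[0,1]),
\end{align*}
so that $P_i^A(x,0)=x$ and $P_i^A(x,1)=a_i$. The composition
\[
G_A(t_0,\dots,t_{n-1})
  = P_0^A\bigl(\cdots P_{n-1}^A\bigl(P_n^A(a_n,1),\,t_{n-1}\bigr)\cdots,\,t_0\bigr)
\]
is continuous, and for $n=1$ it gives
$G_A(t_0)=(1-t_0)\,a_1+t_0\,a_0$, the segment $[a_0,a_1]$;
by induction on $n$, $G_A([0,1]^n)=\operatorname{co}\{a_0,\dots,a_n\}$.
```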

It is not hard to prove that the product of mc-spaces is an mc-space. Other abstract convexity structures which are generalized by the notion of mc-structure are the simplicial convexity, c-spaces or H-spaces, G-convex spaces, and so on.

3.5.3 Fixed point results

We present now some fixed point results which will be applied to prove the existence of equilibrium in generalized games. It is important to point out that in some applications the space is required to satisfy local properties, so we also introduce the notion of local convexity in the context of mc-spaces.

Definition 3.14. A metric mc-space (X, d) is a locally mc-space if and only if for all ε > 0 the set B(E, ε) = {x | x ∈ X, d(x, E) < ε} is an mc-set whenever E is an mc-set.

It is not hard to prove that the product of a countable quantity of locally mc-spaces is also a locally mc-space. Next, the notions of KF-multivalued mapping and of KF-majorized multivalued mapping, introduced by Borglin and Keiding, are defined in the context of mc-spaces.

Definition 3.15. If X is an mc-space, then an mc-set valued multivalued mapping φ : X ⇉ X is a KF-multivalued mapping if for all x ∈ X, φ^{−1}(x) is open and x ∉ φ(x). A multivalued mapping P : X ⇉ X is called KF-majorized if there is a KF-multivalued mapping φ : X ⇉ X (a majorant) such that for all x ∈ X, P(x) ⊂ φ(x).

The local version of a KF-multivalued mapping is defined as follows.

Definition 3.16. If X is an mc-space, then a multivalued mapping φ : X ⇉ X is a locally KF-majorized multivalued mapping if for all x ∈ X such that φ(x) ≠ ∅ there exist an open neighborhood V_x of x and a KF-multivalued mapping φ_x : X ⇉ X (a majorant of φ at x) such that for all z ∈ V_x, φ(z) ⊂ φ_x(z).

The following lemma of Llinares states the existence of a continuous selection, with a fixed point, of the mc-hull of a multivalued mapping defined on an mc-space.

Lemma 3.4. If X is a compact topological mc-space and φ : X ⇉ X is a nonempty multivalued mapping such that x ∈ int φ^{−1}(y) whenever y ∈ φ(x), then there exists a continuous function f : X → X satisfying:

1. for each x ∈ X there exists a nonempty finite subset A of X such that f(x) ∈ G_{A|φ(x)}([0, 1]^m);

2. there exists x* ∈ X such that x* = f(x*).

The next result is an extension of Browder's theorem; its proof is immediately obtained by applying Lemma 3.4.

Theorem 3.33. If X is a compact topological mc-space and φ : X ⇉ X is a multivalued mapping with open inverse images and nonempty mc-set values, then φ has a continuous selection and a fixed point.
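The fixed points guaranteed by such Browder-type results are obtained topologically, not constructively. As a purely illustrative sketch, when a continuous selection f happens, in addition, to be a contraction of the compact set [0, 1] (an extra assumption that the theorems above do not make), its fixed point can even be located by simple iteration; the map f below is made up for the example.

```python
# Toy illustration: locating the fixed point x* = f(x*) of a contraction
# f on [0, 1] by Banach iteration.  f is an arbitrary assumed example.

def f(x):
    return 0.5 * x + 0.25        # contraction with Lipschitz constant 1/2

x = 0.0
for _ in range(60):              # iterate x_{n+1} = f(x_n)
    x = f(x)

print(abs(x - f(x)) < 1e-12)     # x is numerically a fixed point
print(x)                         # the exact fixed point solves x = 0.5x + 0.25
```

The iteration converges because the error is halved at each step; for a general continuous selection on an mc-space no such algorithm is available, which is why the existence arguments above go through selections and coverings instead.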

A consequence of Theorem 3.33 is that any KF-multivalued mapping defined from a compact topological mc-space into itself has a point with empty image.

Corollary 3.5. If X is a compact topological mc-space and φ : X ⇉ X is a KF-multivalued mapping, then there exists x* ∈ X such that φ(x*) = ∅.

In the context of binary relations, the existence of points with empty images in the multivalued mapping of upper contour sets is equivalent to the existence of a maximal element (it is enough to consider φ(x) as the set of alternatives better than x).

In order to extend the previous result to locally KF-majorized multivalued mappings, we first present the following lemma.

Lemma 3.5. If X is a compact topological mc-space and P : X ⇉ X is a locally KF-majorized multivalued mapping, then there exists a KF-multivalued mapping φ : X ⇉ X such that for all x ∈ X, P(x) ⊂ φ(x).

Proof. Consider D = {x | x ∈ X, P(x) ≠ ∅} and, for each x ∈ D, choose a KF-multivalued mapping φ_x which is a majorant of P at x and an open neighborhood G_x of x such that P(z) ⊂ φ_x(z) for all z ∈ G_x. The set G = ∪_{x∈D} G_x is paracompact, so the open covering {G_x}_{x∈D} of G has a closed locally finite refinement {G'_x}. For each x ∈ G define the set J(x) = {x_i | x ∈ G'_{x_i}} and the following multivalued mapping:

φ(x) = ∩_{x_i ∈ J(x)} φ_{x_i}(x), if x ∈ G, and φ(x) = ∅, if x ∉ G.

It is clear that φ has no fixed point, since the φ_{x_i} are KF-multivalued mappings; moreover, φ has mc-set values by construction and satisfies P(x) ⊂ φ(x) for all x ∈ X. Finally, we are going to see that φ is a KF-multivalued mapping; it remains to prove that φ has open lower sections. Consider x ∈ φ^{−1}(y), that is,

y ∈ φ(x) = ∩_{x_i ∈ J(x)} φ_{x_i}(x),

so y ∈ φ_{x_i}(x) for all x_i ∈ J(x). Since the φ_{x_i} are KF-multivalued mappings, they have open lower sections, so for each x_i ∈ J(x) there exists an open neighborhood W_x^i of x such that W_x^i ⊂ φ_{x_i}^{−1}(y). By considering W_x^0 = ∩_{x_i ∈ J(x)} W_x^i, we obtain that W_x^0 is an open neighborhood of x, since J(x) is finite.

Moreover, since {G'_x} is a locally finite refinement, there exists an open set W_x containing x such that

W_x ∩ [∪_{x_i ∉ J(x)} G'_{x_i}] = ∅

(the union ∪_{x_i ∉ J(x)} G'_{x_i} is closed, since {G'_{x_i}} is a locally finite refinement); therefore J(w) ⊂ J(x) for each w ∈ W_x. But then, for every w ∈ W_x ∩ W_x^0 we have y ∈ φ_{x_i}(w) for all x_i ∈ J(x), hence

y ∈ ∩_{x_i ∈ J(w)} φ_{x_i}(w) = φ(w),

that is, W_x ∩ W_x^0 ⊂ φ^{−1}(y), and we conclude that φ has open lower sections.

As a consequence of Lemma 3.5 we can now state the extension of Corollary 3.5 to locally KF-majorized multivalued mappings.

Theorem 3.34. If X is a compact topological mc-space and φ : X ⇉ X is a locally KF-majorized multivalued mapping, then there exists x* ∈ X such that φ(x*) = ∅.

3.5.4 Existence of equilibrium

In this subsection we analyze the existence of equilibrium for generalized games in the context of mc-spaces, by considering conditions similar to those of Borglin and Keiding and of Tulcea. In order to do this, we utilize the well known notation of generalized games. The first result is a version of Borglin and Keiding's result in the context of mc-spaces.

Lemma 3.6. If X is a compact topological mc-space, F : X ⇉ X is a nonempty mc-set valued multivalued mapping such that F^{−1}(x) is an open set for all x ∈ X, and P : X ⇉ X is a locally KF-majorized multivalued mapping, then there exists x* ∈ X such that

x* ∈ F(x*) and F(x*) ∩ P(x*) = ∅.

Proof. From Lemma 3.5, without loss of generality we can assume that the multivalued mapping P is a KF-multivalued mapping. Define the multivalued mapping φ : X ⇉ X by

φ(x) = P(x) ∩ F(x), if x ∈ F(x), and φ(x) = F(x), if x ∉ F(x).

It is easy to see that φ has no fixed point and has mc-set values. In order to see that φ is a KF-multivalued mapping, it remains to see that φ has open lower sections. Consider x ∈ φ^{−1}(y), that is, y ∈ φ(x). On the one hand, if x ∈ F(x), then y ∈ φ(x) = P(x) ∩ F(x), so x ∈ P^{−1}(y) ∩ F^{−1}(y), which are open sets; therefore there exists an open set W_x containing x such that

W_x ⊂ P^{−1}(y) ∩ F^{−1}(y) ⊂ φ^{−1}(y).

On the other hand, if x ∉ F(x), then it is possible to choose a neighborhood V_x of x such that z ∉ F(z) for all z ∈ V_x. Moreover, since y ∈ φ(x) = F(x), we have x ∈ F^{−1}(y), which is open, so there exists an open set W_x containing x such that W_x ⊂ F^{−1}(y). If we take U = W_x ∩ V_x, then U ⊂ φ^{−1}(y). Hence the multivalued mapping φ is a KF-multivalued mapping, and by applying Corollary 3.5 there exists x* ∈ X with φ(x*) = ∅. If x* ∉ F(x*), then φ(x*) = F(x*) ≠ ∅, a contradiction; so x* ∈ F(x*) and we have the conclusion.

The next result shows that the previous lemma remains valid in the case of a generalized game with a finite quantity of agents.

Lemma 3.7. If for each i = 1, 2, ..., n, X_i is a compact topological mc-space, F_i : X ⇉ X_i is a nonempty mc-set valued multivalued mapping with open lower sections, and P_i : X ⇉ X_i is a locally KF-majorized multivalued mapping, where X = ∏_{i=1}^n X_i, then there exists x* ∈ X such that, for every i = 1, 2, ..., n,

x_i* ∈ F_i(x*) and F_i(x*) ∩ P_i(x*) = ∅.

Proof. Consider the multivalued mapping F : X ⇉ X defined as follows: y ∈ F(x) if and only if y_i ∈ F_i(x), i = 1, 2, ..., n, that is, F(x) = ∏_{i=1}^n F_i(x). The multivalued mapping F has nonempty mc-set values and open lower sections. From Lemma 3.5, without loss of generality we can assume that the multivalued mappings P_i are KF-multivalued mappings. Next, for each i = 1, 2, ..., n we define the following multivalued mappings:

a) P̄_i : X ⇉ X such that y ∈ P̄_i(x) if and only if y_i ∈ P_i(x);

b) P : X ⇉ X in the following way:

P(x) = ∩_{i ∈ I(x)} P̄_i(x), if I(x) ≠ ∅, and P(x) = ∅, if I(x) = ∅,

where I(x) = {i | P_i(x) ∩ F_i(x) ≠ ∅}.

We are going to see that P is KF-majorized. Consider x ∈ X such that P(x) ≠ ∅; then there exists i_0 ∈ I(x), that is, P_{i_0}(x) ∩ F_{i_0}(x) ≠ ∅. Since the set {x | x ∈ X, P_{i_0}(x) ∩ F_{i_0}(x) ≠ ∅} is an open set, there exists a neighborhood V of x such that for all z ∈ V, P_{i_0}(z) ∩ F_{i_0}(z) ≠ ∅, that is, i_0 ∈ I(z); hence for all z ∈ V,

P(z) = ∩_{i ∈ I(z)} P̄_i(z) ⊂ P̄_{i_0}(z).

Moreover, since P_{i_0} is a KF-multivalued mapping, P̄_{i_0} is a KF-multivalued mapping, and therefore the multivalued mapping P is majorized by P̄_{i_0}. Therefore, by applying the previous lemma to the multivalued mappings F and P, we obtain the conclusion.

In order to analyze the existence of equilibrium with a countable quantity of agents, we use the next approximation result.

Lemma 3.8. Let X be a compact topological metric space and Y a locally mc-space. If φ : X ⇉ Y is an upper hemicontinuous multivalued mapping with mc-set values, then for every ε > 0 there exists an mc-set valued multivalued mapping H_ε : X ⇉ Y with open graph such that

Gr(φ) ⊂ Gr(H_ε) ⊂ B(Gr(φ), ε).

Proof. Since φ is upper hemicontinuous, for every x ∈ X and every ε > 0 there exists 0 < δ(x) < ε such that φ(z) ⊂ B(φ(x), ε/2) for all z ∈ B(x, δ(x)). The family {B(x, δ(x)/2)}_{x∈X} is an open covering of X, which is compact; thus there exists a finite subcovering {B(x_i, δ(x_i)/2)}_{i=1}^n. Consider δ_i = δ(x_i)/2 and define, for all x ∈ X,

I(x) = {i | x ∈ B(x_i, δ_i)}

and the following multivalued mapping:

H_ε(x) = ∩_{i ∈ I(x)} B(φ(x_i), ε/2).

It is clear that H_ε is mc-set valued. Moreover it has open graph: for every x ∈ ∩_{i∈I(x)} B(x_i, δ_i) there exists ρ > 0 such that B(x, ρ) ⊂ ∩_{i∈I(x)} B(x_i, δ_i), and I(z) ⊂ I(x) for all z ∈ B(x, ρ), so

H_ε(x) = ∩_{i ∈ I(x)} B(φ(x_i), ε/2) ⊂ ∩_{i ∈ I(z)} B(φ(x_i), ε/2) = H_ε(z);

then B(x, ρ) × H_ε(x) ⊂ Gr(H_ε), that is, Gr(H_ε) is open. Furthermore, for all x ∈ X and each i ∈ I(x) we have x ∈ B(x_i, δ_i) ⊂ B(x_i, δ(x_i)), so φ(x) ⊂ B(φ(x_i), ε/2); therefore

φ(x) ⊂ ∩_{i ∈ I(x)} B(φ(x_i), ε/2) = H_ε(x),

thus Gr(φ) ⊂ Gr(H_ε), and it is easy to see that Gr(H_ε) ⊂ B(Gr(φ), ε).

Theorem 3.35. If X is a compact locally mc-space, F : X ⇉ X is a nonempty mc-set valued multivalued mapping with closed graph, P : X ⇉ X is a locally KF-majorized multivalued mapping, and the set {x | x ∈ X, P(x) ∩ F(x) = ∅} is closed in X, then there exists x* ∈ X such that

x* ∈ F(x*) and F(x*) ∩ P(x*) = ∅.

Proof. If we consider (X, F, P) and we apply Lemma 3.8, we have that for every ε > 0 there exists H_ε such that

Gr(F) ⊂ Gr(H_ε) ⊂ B(Gr(F), ε),

where H_ε is an open graph multivalued mapping whose values are mc-sets. By applying Lemma 3.6 we can ensure that there exists an element x_ε such that

x_ε ∈ H_ε(x_ε) and H_ε(x_ε) ∩ P(x_ε) = ∅.

Let {ε_n} be a sequence which converges to 0; by reasoning as above we obtain a sequence {x_{ε_n}}_{n∈N} such that, for every n ∈ N,

[F(x_{ε_n}) ∩ P(x_{ε_n})] ⊂ [H_{ε_n}(x_{ε_n}) ∩ P(x_{ε_n})] = ∅

and (x_{ε_n}, x_{ε_n}) ∈ Gr(H_{ε_n}) ⊂ B(Gr(F), ε_n). Since

x_{ε_n} ∈ {x | x ∈ X, P(x) ∩ F(x) = ∅},

and since this sequence belongs to a compact set, there exists a subsequence converging to a point x*, which will be an element of this set, since it is closed. In order to prove that x* is a fixed point of F, note that (x_{ε_n}, x_{ε_n}) converges to (x*, x*) and, since Gr(F) is a compact set and the distance from (x_{ε_n}, x_{ε_n}) to Gr(F) tends to zero, we obtain (x*, x*) ∈ Gr(F). Thus x* ∈ F(x*) and F(x*) ∩ P(x*) = ∅.

Next, a result on the existence of equilibrium in generalized games with a countable number of agents is presented.

Theorem 3.36. Let Γ = (S_i, F_i, P_i)_{i∈I} be a generalized game such that I is a countable set of indexes and, for each i ∈ I, it is satisfied that S_i is a nonempty compact locally mc-space, F_i is a closed graph multivalued mapping such that F_i(x) is a nonempty mc-set for all x ∈ X, P_i is a locally KF-majorized multivalued mapping, and the set {x | x ∈ X, P_i(x) ∩ F_i(x) = ∅} is closed in X. Then there exists an equilibrium for the generalized game, that is, a point x* ∈ X such that, for every i ∈ I,

x_i* ∈ F_i(x*) and F_i(x*) ∩ P_i(x*) = ∅.

Proof. Consider the multivalued mapping F : X ⇉ X defined as follows: y ∈ F(x) if and only if y_i ∈ F_i(x) for every i ∈ I, that is, F(x) = ∏_{i∈I} F_i(x). The multivalued mapping F has closed graph and nonempty mc-set values. For each i ∈ I we define the following multivalued mappings:

a) P̄_i : X ⇉ X such that y ∈ P̄_i(x) if and only if y_i ∈ P_i(x);

b) P : X ⇉ X in the following way:

P(x) = ∩_{i ∈ I(x)} P̄_i(x), if I(x) ≠ ∅, and P(x) = ∅, if I(x) = ∅,

where I(x) = {i | P_i(x) ∩ F_i(x) ≠ ∅}.

In order to see that P is KF-majorized, consider x ∈ X such that P(x) ≠ ∅; then there exists i_0 ∈ I(x) such that P_{i_0}(x) ∩ F_{i_0}(x) ≠ ∅. Since the set {x | x ∈ X, P_{i_0}(x) ∩ F_{i_0}(x) = ∅} is closed in X, there exists a neighborhood V of x such that for all z ∈ V, P_{i_0}(z) ∩ F_{i_0}(z) ≠ ∅, that is, i_0 ∈ I(z); hence for all z ∈ V,

P(z) = ∩_{i ∈ I(z)} P̄_i(z) ⊂ P̄_{i_0}(z).

From Lemma 3.5, without loss of generality we can assume that the multivalued mapping P_{i_0} is a KF-multivalued mapping; so P̄_{i_0} is the KF-multivalued mapping which majorizes P.

Finally, we show that the set {x | x ∈ X, P(x) ∩ F(x) = ∅} is closed. For each i ∈ I define the multivalued mapping Q_i : X ⇉ S_i by

Q_i(x) = P_i(x) ∩ F_i(x), if i ∈ I(x), and Q_i(x) = F_i(x), otherwise.

The multivalued mappings Q_i have nonempty values, and it is clear that

P(x) ∩ F(x) = ∏_{i∈I} Q_i(x), if I(x) ≠ ∅,

so P(x) ∩ F(x) = ∅ if and only if I(x) = ∅. Hence

{x | x ∈ X, P(x) ∩ F(x) = ∅} = {x | x ∈ X, I(x) = ∅} = ∩_{i∈I} {x | x ∈ X, P_i(x) ∩ F_i(x) = ∅},

which is closed, because it is the intersection of closed sets. Therefore, by applying the previous theorem, we obtain that there exists an element x* ∈ X such that x* ∈ F(x*) and F(x*) ∩ P(x*) = ∅, that is, I(x*) = ∅; and finally

x_i* ∈ F_i(x*) and F_i(x*) ∩ P_i(x*) = ∅, for every i ∈ I.
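The hypotheses of the countable-agents theorem can be checked by hand on a minimal convex (hence locally mc) example; the two-agent game below is a hypothetical illustration, not taken from the text.

```latex
% A two-agent generalized game satisfying the hypotheses above,
% with the equilibria computable by hand.
\[
S_1=S_2=[0,1],\qquad
F_1(x)=[0,\,1-x_2],\quad F_2(x)=[0,\,1-x_1],\qquad
P_i(x)=\{\,y_i\in S_i \mid y_i>x_i\,\}.
\]
Each $F_i$ has closed graph and nonempty convex values; each $P_i$ has
open lower sections, convex values and $x_i\notin P_i(x)$, hence is a
KF-multivalued mapping; and
$\{x \mid P_i(x)\cap F_i(x)=\emptyset\}=\{x \mid x_i\ge 1-x_{3-i}\}$
is closed. The equilibrium conditions $x_i^*\in F_i(x^*)$ and
$F_i(x^*)\cap P_i(x^*)=\emptyset$ together force $x_i^*=1-x_{3-i}^*$,
so the equilibria are exactly the points with $x_1^*+x_2^*=1$, for
instance $x^*=\left(\tfrac12,\tfrac12\right)$.
```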

3.6 References

1. Agarwal, R.P., O'Regan, D., A note on equilibria for abstract economies, Mathematical and Computer Modelling, 34 (2001), 331-343
2. Aliprantis, C.D., Tourky, R., Yannelis, N.C., Cone conditions in general equilibrium theory, Journal of Economic Theory, 92 (2000), 96-121
3. Aliprantis, C.D., Tourky, R., Yannelis, N.C., The Riesz-Kantorovich formula and general equilibrium theory, Journal of Mathematical Economics, 34 (2000), 55-76
4. Arrow, K.J., Debreu, G., Existence of an equilibrium for a competitive economy, Econometrica, 22 (1954), 265-290
5. Arrow, K.J., Hahn, F.H., General competitive analysis, Holden-Day, San Francisco, 1971
6. Aubin, J.P., Ekeland, I., Applied nonlinear analysis, John Wiley and Sons, New York, 1984
7. Berge, C., Topological spaces, Macmillan, New York, 1963
8. Border, K.C., Fixed point theorems with applications to economics and game theory, Cambridge University Press, Cambridge, 1985
9. Bonanno, G., Zeeman, E.C., Limited knowledge of demand and oligopoly equilibria, Journal of Economic Theory, 35 (1985), 276-283
10. Borglin, A., Keiding, H., Existence of equilibrium actions and of equilibrium: A note on the new existence theorems, Journal of Mathematical Economics, 3 (1976), 313-316
11. Browder, F.E., The fixed point theory of multi-valued mappings in topological vector spaces, Mathematische Annalen, 177 (1968), 283-301
12. D'Agata, A., Existence of first-order locally consistent equilibria, Annales d'Economie et de Statistique, 43 (1996), 171-179
13. Debreu, G., New concepts and techniques for equilibrium analysis, International Economic Review, 3 (1962), 257-273
14. Ding, X.P., Kim, W.K., Tan, K.K., A selection theorem and its applications, Bulletin of the Australian Mathematical Society, 46 (1992), 205-212
15. Gale, D., Mas-Colell, A., An equilibrium existence theorem for a general model without ordered preferences, Journal of Mathematical Economics, 2 (1975), 9-15
16. Grandmont, J.M., Temporary general equilibrium theory, Econometrica, 45 (1977), 535-572
17. Himmelberg, C.J., Fixed points of compact multifunctions, Journal of Mathematical Analysis and Applications, 38 (1972), 205-207
18. Husain, T., Tarafdar, E., A selection and a fixed point theorem and an equilibrium of an abstract economy, International Journal of Mathematics and Mathematical Sciences, 18 (1995), 179-184
19. Kakutani, S., A generalization of Brouwer's fixed point theorem, Duke Mathematical Journal, 8 (1941), 457-459
20. Llinares, J.V., Existence of equilibrium in generalized games with non-convex strategy spaces, CEPREMAP, No. 9801 (1998), 1-14
21. Maugeri, A., Time dependent generalized equilibrium problems, Rendiconti del Circolo Matematico di Palermo, Ser. II, 58 (1999)
22. Mureşan, A.S., First-order equilibria for an abstract economy, Buletinul Ştiinţific al Universităţii Baia Mare, Ser. B, Matematică-Informatică, XIV (1998), 191-196
23. Mureşan, A.S., First-order equilibria for an abstract economy, II, Acta Technica Napocensis, Ser. Applied Mathematics and Mechanics, 41 (1998), 197-204
24. Neuefeind, W., Notes on existence of equilibrium proofs and the boundary behavior of supply, Econometrica, 48 (1980), 1831-1837
25. Nikaido, H., Convex structures and economic theory, Academic Press, New York, 1968
26. Oettli, W., Schläger, D., Generalized vectorial equilibria and generalized monotonicity, in: Functional analysis with current applications in science, technology and industry (Aligarh, 1996), Pitman Res. Notes Math. Ser., 377, Longman, Harlow, 1998, 145-154
27. Petruşel, A., Multifunctions and applications, Cluj University Press, Cluj-Napoca, 2002 (In Romanian)
28. Ray, I., On games with identical equilibrium payoffs, Economic Theory, 17 (2001), 223-231
29. Rim, D.I., Kim, W.K., A fixed point theorem and existence of equilibrium for abstract economies, Bulletin of the Australian Mathematical Society, 45 (1992), 385-394
30. Rus, I.A., Generalized contractions and applications, Cluj University Press, Cluj-Napoca, 2001
31. Rus, I.A., Iancu, C., Mathematical modelling, Transilvania Press, Cluj-Napoca, 2000 (In Romanian)
32. Shafer, W., Sonnenschein, H., Equilibrium in abstract economies without ordered preferences, Journal of Mathematical Economics, 2 (1975), 345-348
33. Tarafdar, E., A fixed point theorem and equilibrium point of an abstract economy, Journal of Mathematical Economics, 20 (1991), 211-218
34. Tulcea, C.I., On the approximation of upper semi-continuous correspondences and the equilibriums of generalized games, Journal of Mathematical Analysis and Applications, 136 (1988), 267-289
35. Yannelis, N.C., Prabhakar, N.D., Existence of maximal elements and equilibria in linear topological spaces, Journal of Mathematical Economics, 12 (1983), 233-245
