
Games, Fixed Points and

Mathematical Economics

Dr. Christian-Oliver Ewald

School of Economics and Finance

University of St.Andrews

Electronic copy of this paper is available at: http://ssrn.com/abstract=976592

Abstract

These are my Lecture Notes for a course in Game Theory which I taught in the Winter term 2003/04 at the University of Kaiserslautern. I am aware that the notes are not yet free of errors and the manuscript needs further improvement. I am happy about any comments on the notes. Please send your comments via e-mail to ce16@st-andrews.ac.uk.

Contents

1 Games
  1.1 Introduction
  1.2 General Concepts of two Person Games
  1.3 The Duopoly Economy

2 Brouwer's Fixed Point Theorem and Nash's Equilibrium Theorem
  2.1 Simplices
  2.2 Sperner's Lemma
  2.3 Proof of Brouwer's Theorem
  2.4 Nash's Equilibrium Theorem
  2.5 Two Person Zero Sum Games and the Minimax Theorem

3 More general Equilibrium Theorems
  3.1 N-Person Games and Nash's generalized Equilibrium Theorem
  3.2 Correspondences
  3.3 Abstract Economies and the Walras Equilibrium
  3.4 The Maximum Theorem for Correspondences
  3.5 Approximation of Correspondences
  3.6 Fixed Point Theorems for Correspondences
  3.7 Generalized Games and an Equilibrium Theorem
  3.8 The Walras Equilibrium Theorem

4 Cooperative Games
  4.1 Cooperative Two Person Games
  4.2 Nash's Bargaining Solution
  4.3 N-person Cooperative Games

5 Differential Games
  5.1 Setup and Notation
  5.2 Stackelberg Equilibria for 2 Person Differential Games
  5.3 Some Results from Optimal Control Theory
  5.4 Necessary Conditions for Nash Equilibria in N-person Differential Games

Chapter 1

Games

1.1 Introduction

Game Theory is a formal approach to the study of games. We can think of games as conflicts in which a number of individuals (called players) take part, each one trying to maximize his utility from taking part in the conflict. Sometimes we allow the players to cooperate; in this case we speak of cooperative games, as opposed to non-cooperative games, where players are not allowed to cooperate. Game theory has many applications in subjects like economics, biology and psychology, but also in such an unpleasant subject as warfare. In this lecture we will concentrate on applications in economics. Many of our examples, though, are also motivated by classical games. However, reality is often far too complex, so we study simplified models (such as, for example, a simplified Poker which is played with only two cards, an Ace and a Two). The theory by itself can be quite abstract, and a lot of methods from Functional Analysis and Topology come in. However, all methods from these subjects will be explained during the course. To start with, we describe two games which will later help us to understand the abstract definitions coming in the next section.

Example 1. Simplified Poker : There are only two cards involved, an "Ace" and a "Two", and only two players, player 1 and player 2. At the beginning each one puts 1 Euro in the "pot". Both cards lie face down on the table and neither player knows which card is the "Ace" and which one is the "Two". Then player 2 draws one of the cards and takes a look. If it is the "Ace", player 2 has to say "Ace". If however he has drawn the "Two", he can say "Two" and then lose the game (in this case player 1 wins the 2 Euro in the "pot"), or he can "bluff" and say "Ace". In case player 2 says "Ace", he has to put another Euro in the "pot". Player 1 then has two choices. Either he believes player 2 and "folds", in which case player 2 wins the (now) 3 Euro in the "pot", or player 1 assumes that player 2 has "bluffed" and puts another Euro in the "pot". In this case player 2 has to show player 1 his card. If it is indeed the "Ace" then player 2 wins the (now) 4 Euro in the pot, but if it is the "Two" then player 1 wins the 4 Euro. In both cases, the game is finished.

Example 2. Nim(2,2) : There are two piles of two matches each, and two players are taking part in the game, player 1 and player 2. The players take alternating turns; player 1 takes the first turn. At each turn the player selects one pile which has at least one match left and removes at least one match from this pile. The game finishes when all matches are gone. The player who takes the last match loses.

Each of the games above has the following structure:

1. there is a number of players taking part in the game,

2. there are rules under which the players can choose their strategies (moves),

3. the outcome of the game is determined by the strategies chosen by each player and the rules.

This structure will be captured in the ("abstract") definition of a "Game" in the next section. The two games just presented, though, differ in some respects. In the second game, the players have at each time perfect knowledge about the actions of their opponent. This is not the case in the first game, where player 2 can "bluff". We speak respectively of games with perfect information and games with non-perfect information. Also, in the first game there is a chance element, which is missing in the second game. Such information will be considered as additional structure and treated individually. Usually a game can be represented as a tree, where the nodes are the states of the game and the edges represent the moves. For the Nim(2,2) game one has the following tree:

Example 3. Nim(2,2) Tree :

    1: 2/2
     |-- 1 --> 2: 1/2
     |          |-- 2 --> 4: 0/2
     |          |          |-- 1 --> 9: 0/1 -- 2 --> 15: 0/0    winner: 1
     |          |          |-- 1 --> 10: 0/0                    winner: 2
     |          |-- 2 --> 5: 1/1
     |          |          |-- 1 --> 11: 1/0 -- 2 --> 12: 0/0   winner: 1
     |          |-- 2 --> 6: 1/0
     |                     |-- 1 --> 13: 0/0                    winner: 2
     |-- 1 --> 3: 0/2
                |-- 2 --> 7: 0/1
                |          |-- 1 --> 14: 0/0                    winner: 2
                |-- 2 --> 8: 0/0                                winner: 1

Here a state a/b means that a matches are left in the first pile and b in the second; symmetric moves are drawn only once. The number at each edge indicates which player makes the move; the winner is given at the end of each branch.

Trees can be helpful in understanding the structure of the game. For mathematical purposes, though, they are often too bulky to work with. We would like to encode the structure of the game in a more condensed form. First let us consider the individual strategies the players in Nim(2,2) can choose:

The strategies of player 1 (his choices at nodes 1 and 4):

    s_1^1 : 1st turn 1 -> 2 ; if at 4, go to 9
    s_1^2 : 1st turn 1 -> 2 ; if at 4, go to 10
    s_1^3 : 1st turn 1 -> 3 ; (no further choice)

The strategies of player 2 (his choices at nodes 2 and 3):

    s_2^1 : if at 2, go to 4 ; if at 3, go to 7
    s_2^2 : if at 2, go to 5 ; if at 3, go to 7
    s_2^3 : if at 2, go to 6 ; if at 3, go to 7
    s_2^4 : if at 2, go to 4 ; if at 3, go to 8
    s_2^5 : if at 2, go to 5 ; if at 3, go to 8
    s_2^6 : if at 2, go to 6 ; if at 3, go to 8

Here the strategies of player 1 are denoted by $s_1^i$ for $i \in \{1, 2, 3\}$ and those of player 2 by $s_2^j$ for $j \in \{1, ..., 6\}$. If the players each decide for one of their strategies, the outcome of the game is already fixed. Let us denote the outcome of the game by $1$ if player 1 loses and by $-1$ if player 1 wins. Then the game is equivalent to the following game, described in matrix form:

$$ L := \begin{pmatrix} -1 & -1 & 1 & -1 & -1 & 1 \\ 1 & -1 & 1 & 1 & -1 & 1 \\ 1 & 1 & 1 & -1 & -1 & -1 \end{pmatrix} $$

The value $L(i, j)$ at position $(i, j)$ of this matrix is the outcome of the game Nim(2,2) if player 1 chooses his $i$-th strategy and player 2 chooses his $j$-th strategy. We see that player 2 has a strategy which guarantees him a win, namely $s_2^3$.
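This claim can be checked mechanically. The following small sketch (with the matrix hard-coded from above; the helper name is ours) looks for columns of $L$ that consist entirely of $+1$, i.e. strategies of player 2 that win against every strategy of player 1:

```python
# Outcome matrix of Nim(2,2) from the text: +1 if player 1 loses, -1 if he wins.
L = [
    [-1, -1, 1, -1, -1,  1],
    [ 1, -1, 1,  1, -1,  1],
    [ 1,  1, 1, -1, -1, -1],
]

def winning_columns(L):
    """Indices (0-based) of columns that are all +1, i.e. strategies of
    player 2 that win no matter what player 1 does."""
    return [j for j in range(len(L[0])) if all(row[j] == 1 for row in L)]

print(winning_columns(L))  # [2], i.e. the third strategy s_2^3
```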

Let us also discuss the simplified Poker in this way. The following table shows the possible strategies of the players:

    s_1^1 : believe player 2 when he says "Ace"
    s_1^2 : don't believe player 2 when he says "Ace"
    s_2^1 : say "Two" when you have the "Two"
    s_2^2 : say "Ace" when you have the "Two" ("bluff")

Since there is a chance element in the game ("Ace" or "Two", each with probability 1/2), the losses corresponding to pairs of strategies are not deterministic. Depending on which card player 2 draws, we have different losses:

$$ L^{Two} := \begin{pmatrix} -1 & 1 \\ -1 & -2 \end{pmatrix} \quad\text{and}\quad L^{Ace} := \begin{pmatrix} 1 & 1 \\ 2 & 2 \end{pmatrix} $$

One could now consider "Nature" as a third player and denote the losses in a three dimensional array, but one usually decides not to do this and instead denotes the expected losses in a matrix. Since each event, "Two" and "Ace", occurs with probability 1/2, we have for the expected losses:

$$ L := \frac{1}{2} L^{Two} + \frac{1}{2} L^{Ace} = \begin{pmatrix} 0 & 1 \\ 1/2 & 0 \end{pmatrix} $$
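The averaging can be verified entry by entry; a quick sketch (function name ours):

```python
# Expected losses for the simplified Poker: average the two loss matrices
# from the text entrywise, each card occurring with probability 1/2.
L_two = [[-1, 1], [-1, -2]]
L_ace = [[ 1, 1], [ 2,  2]]

def expected_loss(A, B, p=0.5):
    # p*A + (1-p)*B, entry by entry
    return [[p * a + (1 - p) * b for a, b in zip(ra, rb)]
            for ra, rb in zip(A, B)]

print(expected_loss(L_two, L_ace))  # [[0.0, 1.0], [0.5, 0.0]]
```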

The two examples we have had so far had the characteristic property that what one player loses, the other player wins. This is not always the case, as the following example shows:

Example 4. Battle of the Sexes : A married couple is trying to decide where to go for a night out. She would like to go to the theater; he would like to go to a football match. However, they have been married for only a couple of weeks, so they still like to spend their time together and enjoy the entertainment only if their partner is with them. Let's say the first strategy for each is to go to the theater and the second is to go to the football match. Then the individual losses can be denoted in matrix form as:

$$ L := \begin{pmatrix} (-1, -4) & (0, 0) \\ (0, 0) & (-4, -1) \end{pmatrix} $$

Here, in each entry, the first number indicates the loss for the man, the second one the loss for the woman.

The reader may find it unusual that we always speak about losses rather than gains (or wins). The reason is that in convex analysis one rather likes to determine minima instead of maxima (only for formal reasons), and so, since in fact we want to maximize the gains, we have to minimize the losses. The examples mentioned in this chapter will lead us directly to the formal definition of a (two person) game in the next section and henceforth serve to illustrate the theory, which from now on at some points tends to be very abstract.

1.2 General Concepts of two Person Games

Definition 1.2.1. A two person game $\mathcal{G}_2$ in normal form consists of the following data:

1. topological spaces $S_1$ and $S_2$, the so-called strategies for player 1 resp. player 2,

2. a topological subspace $U \subset S_1 \times S_2$ of allowed strategy pairs,

3. a biloss operator

$$ L : U \to \mathbb{R}^2 \qquad (1.1) $$
$$ (s_1, s_2) \mapsto (L_1(s_1, s_2), L_2(s_1, s_2)). \qquad (1.2) $$

$L_i(s_1, s_2)$ is the loss of player $i$ if the strategies $s_1$ and $s_2$ are played.

For the games considered in the Introduction, the spaces $S_i$ have been finite (with discrete topology) and $U$ has to be chosen as $S_1 \times S_2$. For the Nim(2,2) game and simplified Poker we have $L_2 = -L_1$. The main problem in Game Theory is to develop solution concepts and later on to find the solutions of a game. By solution concepts we mean characterizations of those strategies which are optimal in some sense. We will see a lot of different approaches and also extend the definition to n-person games, but for now we stick with two person games.

Definition 1.2.2. Given a two person game $\mathcal{G}_2$, we define its shadow minimum as

$$ \alpha = (\alpha_1, \alpha_2) \qquad (1.3) $$

where $\alpha_1 = \inf_{(s_1, s_2) \in U} L_1(s_1, s_2)$ and $\alpha_2 = \inf_{(s_1, s_2) \in U} L_2(s_1, s_2)$. $\mathcal{G}_2$ is bounded from below if both $\alpha_1$ and $\alpha_2$ are finite.

The shadow minimum represents the minimal losses for both players if they do not think about strategies at all. For the Nim(2,2) game the shadow minimum is $\alpha = (-1, -1)$, for the simplified Poker it is $\alpha = (0, -1)$ and for the "Battle of the Sexes" it is $\alpha = (-4, -4)$. In case there exists

$$ (\tilde s_1, \tilde s_2) \in U \quad \text{s.t.} \quad L(\tilde s_1, \tilde s_2) = \alpha, $$

then a good choice for both players would be to choose the strategies $\tilde s_1$ and $\tilde s_2$, since they guarantee the minimal loss. However, in most games such strategies do not exist. For example, in the Nim(2,2) game there is no pair of strategies which gives biloss $(-1, -1)$.
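For finite games these infima are just minima over the matrix entries, so they can be read off in one line (the helper name below is ours, using the bilosses from the examples above):

```python
# Shadow minimum of a finite biloss game: componentwise infima of the
# two players' loss matrices.
def shadow_minimum(L1, L2):
    return (min(min(row) for row in L1), min(min(row) for row in L2))

# Battle of the Sexes bilosses from the text
print(shadow_minimum([[-1, 0], [0, -4]], [[-4, 0], [0, -1]]))  # (-4, -4)

# simplified Poker: player 2's losses are the negatives of player 1's
P = [[0, 1], [0.5, 0]]
print(shadow_minimum(P, [[-x for x in row] for row in P]))  # (0, -1)
```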

Let a game $\mathcal{G}_2$ be given. To keep things easy, for now we assume $U = S_1 \times S_2$.¹ We define functions $L^\sharp_1 : S_1 \to \mathbb{R}$ and $L^\sharp_2 : S_2 \to \mathbb{R}$ as follows:

$$ L^\sharp_1(s_1) = \sup_{s_2 \in S_2} L_1(s_1, s_2), \qquad L^\sharp_2(s_2) = \sup_{s_1 \in S_1} L_2(s_1, s_2). $$

¹ All coming definitions can be generalized to allow $U$ to be a proper subset of $S_1 \times S_2$.

$L^\sharp_1(s_1)$ is the worst loss that can happen to player 1 when he plays strategy $s_1$, and analogously $L^\sharp_2(s_2)$ is the worst loss that can happen to player 2 when he plays strategy $s_2$. If both players are highly risk averse, the following consideration is reasonable: player 1 should use a strategy which minimizes $L^\sharp_1$, i.e. minimizes his maximal loss, and analogously player 2 should use a strategy which minimizes $L^\sharp_2$.

Exercise 1.2.1. Compute the functions $L^\sharp_i$ for the games in the Introduction.

Definition 1.2.3. A strategy $s^\sharp_1$ which satisfies

$$ L^\sharp_1(s^\sharp_1) = v^\sharp_1 := \inf_{s_1 \in S_1} L^\sharp_1(s_1) = \inf_{s_1 \in S_1} \sup_{s_2 \in S_2} L_1(s_1, s_2) $$

is called a conservative strategy for player 1, and analogously $s^\sharp_2$ for player 2, if

$$ L^\sharp_2(s^\sharp_2) = v^\sharp_2 := \inf_{s_2 \in S_2} L^\sharp_2(s_2) = \inf_{s_2 \in S_2} \sup_{s_1 \in S_1} L_2(s_1, s_2). $$

The pair $v = (v^\sharp_1, v^\sharp_2)$ is called the conservative value of the game.
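For a finite game the conservative value is a min of row maxima (for player 1) and a min of column maxima of the second loss matrix (for player 2); a short sketch, with the function name ours:

```python
# Conservative value of a finite bimatrix game, following the definition:
# v1 = inf over s1 of sup over s2 of L1, and symmetrically for L2.
def conservative_value(L1, L2):
    v1 = min(max(row) for row in L1)
    v2 = min(max(L2[i][j] for i in range(len(L2))) for j in range(len(L2[0])))
    return (v1, v2)

# Battle of the Sexes: L1_sharp and L2_sharp vanish identically, so v = (0, 0)
print(conservative_value([[-1, 0], [0, -4]], [[-4, 0], [0, -1]]))  # (0, 0)
```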

An easy computation shows that for the "Battle of the Sexes" game we have $L^\sharp_1 \equiv 0 \equiv L^\sharp_2$. Hence $v^\sharp_1 = 0 = v^\sharp_2$ and any strategy is a conservative strategy. However, assume the man decides he wants to see the football match, hence chooses strategy $s^\sharp_1 = s_1^2$, and the woman decides she wants to go to the theater, which means she chooses strategy $s^\sharp_2 = s_2^1$. Then both have chosen conservative strategies, but both can do better:

$$ L_1(s^\sharp_1, s^\sharp_2) = 0 \geq -1 = L_1(s_1^1, s^\sharp_2) $$
$$ L_2(s^\sharp_1, s^\sharp_2) = 0 \geq -1 = L_2(s^\sharp_1, s_2^2). $$

We say the chosen pair of strategies is not individually stable.

Definition 1.2.4. A pair $(s^\sharp_1, s^\sharp_2)$ is called a non-cooperative equilibrium², or NCE for short, if

$$ L_1(s^\sharp_1, s^\sharp_2) \leq L_1(s_1, s^\sharp_2) \quad \forall\, s_1 \in S_1 $$
$$ L_2(s^\sharp_1, s^\sharp_2) \leq L_2(s^\sharp_1, s_2) \quad \forall\, s_2 \in S_2. $$

² sometimes also called individually stable

Clearly this means that

$$ L_1(s^\sharp_1, s^\sharp_2) = \min_{s_1 \in S_1} L_1(s_1, s^\sharp_2), \qquad L_2(s^\sharp_1, s^\sharp_2) = \min_{s_2 \in S_2} L_2(s^\sharp_1, s_2). $$

In words: a non-cooperative equilibrium is stable in the sense that if the players use such a pair, then no one has a reason to deviate from his strategy. For the "Battle of the Sexes" we have the non-cooperative equilibria $(s_1^1, s_2^1)$ and $(s_1^2, s_2^2)$. If the strategy sets are finite and $L$ is written as a matrix whose entries are the corresponding bilosses, then a non-cooperative equilibrium is a pair $(s^\sharp_1, s^\sharp_2)$ such that $L_1(s^\sharp_1, s^\sharp_2)$ is the minimum in its column (only taking into account the $L_1$ values) and $L_2(s^\sharp_1, s^\sharp_2)$ is the minimum in its row (only taking into account the $L_2$ values). Using this criterion, one can easily check that in the simplified Poker no non-cooperative equilibria exist. This leads us to a crucial point in modern game theory, the extension of the strategy sets by so-called mixed strategies.
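The row/column criterion is easy to automate; a sketch (helper name ours) applied to the two games just discussed:

```python
# (i, j) is an NCE iff L1[i][j] is minimal in its column and
# L2[i][j] is minimal in its row.
def nce(L1, L2):
    rows, cols = len(L1), len(L1[0])
    return [(i, j) for i in range(rows) for j in range(cols)
            if L1[i][j] == min(L1[k][j] for k in range(rows))
            and L2[i][j] == min(L2[i][k] for k in range(cols))]

# Battle of the Sexes: exactly the two equilibria named in the text
print(nce([[-1, 0], [0, -4]], [[-4, 0], [0, -1]]))  # [(0, 0), (1, 1)]

# simplified Poker (zero sum): no equilibrium in pure strategies
P = [[0, 1], [0.5, 0]]
print(nce(P, [[-x for x in row] for row in P]))  # []
```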

Definition 1.2.5. Let $X$ be an arbitrary set and $\mathbb{R}^X$ the vector space of real valued functions on $X$, supplied with the topology of pointwise convergence. For any $x \in X$ we define the corresponding Dirac measure $\delta_x$ as

$$ \delta_x : \mathbb{R}^X \to \mathbb{R}, \qquad f \mapsto f(x). $$

Any (finite) linear combination $m = \sum_{i=1}^n \lambda_i \delta_{x_i}$, which maps $f \in \mathbb{R}^X$ to

$$ m(f) = \sum_{i=1}^n \lambda_i f(x_i), $$

is called a discrete measure. We say that $m$ is positive if $\lambda_i \geq 0$ for all $i$. We call $m$ a discrete probability measure if it is positive and $\sum_{i=1}^n \lambda_i = 1$. We denote the set of discrete probability measures on $X$ by $\mathcal{M}(X)$. This space is equipped with the weak topology.³

One can easily check that the set $\mathcal{M}(X)$ is convex. Furthermore, we have a canonical embedding

$$ \delta : X \to \mathcal{M}(X), \qquad x \mapsto \delta_x. $$

³ This means $m_n \to m \Leftrightarrow m_n(f) \to m(f)$ for all $f \in \mathbb{R}^X$.

Let us now assume that we have a two person game $\mathcal{G}_2$ given by strategy sets $S_1$, $S_2$ and a biloss operator $L = (L_1, L_2)$. We define a new game $\tilde{\mathcal{G}}_2$ as follows: as strategy sets we take

$$ \tilde{S}_i := \mathcal{M}(S_i), \qquad i = 1, 2, $$

and as biloss operator $\tilde{L} = (\tilde{L}_1, \tilde{L}_2)$ with

$$ \tilde{L}_i : \tilde{S}_1 \times \tilde{S}_2 \to \mathbb{R}, \qquad \Big( \sum_{k=1}^{n} \lambda^1_k \delta_{s_1^k}, \; \sum_{j=1}^{m} \lambda^2_j \delta_{s_2^j} \Big) \mapsto \sum_{k=1}^{n} \sum_{j=1}^{m} \lambda^1_k \lambda^2_j \, L_i(s_1^k, s_2^j). $$
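The bilinear formula above is just a probability-weighted average of pure losses; a small sketch (function name ours, data from the simplified Poker):

```python
# Extended loss of a mixed-strategy pair: weight each pure loss L[i][j]
# by the product of the two players' probabilities.
def extended_loss(L, p, q):
    """L: pure-strategy loss matrix of one player; p, q: probability
    vectors over the pure strategies of player 1 resp. player 2."""
    return sum(p[i] * q[j] * L[i][j]
               for i in range(len(p)) for j in range(len(q)))

# player 1's expected losses in the simplified Poker, both mixing uniformly
L = [[0, 1], [0.5, 0]]
print(extended_loss(L, [0.5, 0.5], [0.5, 0.5]))  # 0.375
```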

Definition 1.2.6. The sets $\mathcal{M}(S_i)$ are called the mixed strategies of $\mathcal{G}_2$, and the game $\tilde{\mathcal{G}}_2$ is called the extension of $\mathcal{G}_2$ by mixed strategies. The strategies which are contained in the image of the canonical embedding

$$ \delta : S_1 \times S_2 \to \mathcal{M}(S_1) \times \mathcal{M}(S_2), \qquad (s_1, s_2) \mapsto (\delta_{s_1}, \delta_{s_2}), $$

are called pure strategies.

Exercise 1.2.2. Show that the extension of the simplified Poker game has non-cooperative equilibria.

We will later see that for any zero sum game with finitely many pure strategies, the extended game has non-cooperative equilibria.
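As a numerical hint for Exercise 1.2.2 (the candidate probabilities below are assumed, not derived here), one can verify the equilibrium conditions directly; by bilinearity it suffices to check pure deviations:

```python
# Candidate mixed NCE for the simplified Poker (assumed values):
# player 1 mixes (1/3, 2/3), player 2 mixes (2/3, 1/3).
L = [[0, 1], [0.5, 0]]  # player 1's expected losses; player 2's are -L

def loss1(p, q):
    return sum(p[i] * q[j] * L[i][j] for i in (0, 1) for j in (0, 1))

p, q = (1/3, 2/3), (2/3, 1/3)
base = loss1(p, q)
# no pure deviation of player 1 lowers his loss:
assert all(loss1(e, q) >= base - 1e-12 for e in [(1, 0), (0, 1)])
# no pure deviation of player 2 lowers his loss (i.e. raises player 1's):
assert all(-loss1(p, e) >= -base - 1e-12 for e in [(1, 0), (0, 1)])
print(base)  # player 1's equilibrium loss, 1/3
```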

How can we interpret mixed strategies? Often games are not just played once but repeated many times. If player 1 has, say, 2 pure strategies and the game is repeated, let's say, 100 times, then he can realize the mixed strategy $0.3\, \delta_{s_1^1} + 0.7\, \delta_{s_1^2}$ by playing the strategy $s_1^1$ 30 times and the strategy $s_1^2$ 70 times. Another interpretation is that every time he wants to play the mixed strategy $\lambda_1 \delta_{s_1^1} + \lambda_2 \delta_{s_1^2}$, he performs a random experiment which has two possible outcomes, one with probability $\lambda_1$ and the other one with probability $\lambda_2$. Then he decides for one of the pure strategies $s_1^1$ resp. $s_1^2$ corresponding to the outcome of the experiment. If there are only finitely many pure strategies, the mixed strategies also have a very nice geometric interpretation: say $S_1$ has $n+1$ elements $s^1, ..., s^{n+1}$. Then $\tilde{S}_1$ is homeomorphic to the closed standard $n$-simplex $\Delta^n := \{(\lambda_0, ..., \lambda_n) \in \mathbb{R}^{n+1} \mid \lambda_i \geq 0, \sum_{i=0}^n \lambda_i = 1\}$ by the map $(\lambda_0, ..., \lambda_n) \mapsto \sum_{i=1}^{n+1} \lambda_{i-1} \delta_{s^i}$. This relationship brings geometry into the game.

It can also happen that one has too many non-cooperative equilibria, in particular if one works with the extended game. In this case one would like to have a decision rule for which of the equilibria one should choose. This leads to the concept of strict solutions for a game $\mathcal{G}_2$.

Definition 1.2.7. A pair of strategies $(\tilde s_1, \tilde s_2)$ sub-dominates another pair $(s_1, s_2)$ if $L_1(\tilde s_1, \tilde s_2) \leq L_1(s_1, s_2)$ and $L_2(\tilde s_1, \tilde s_2) \leq L_2(s_1, s_2)$, with strict inequality in at least one case. A pair of strategies $(s_1, s_2)$ is called Pareto optimal if it is not sub-dominated.⁴

⁴ In this case one sometimes speaks of the collective stability property.

Definition 1.2.8. A two person game $\mathcal{G}_2$ has a strict solution if:

1. there is an NCE within the set of Pareto optimal pairs,

2. all Pareto optimal NCEs are interchangeable in the sense that if $(s_1, s_2)$ and $(\tilde s_1, \tilde s_2)$ are Pareto optimal NCEs, then so are $(s_1, \tilde s_2)$ and $(\tilde s_1, s_2)$.

The interpretation of the first condition is that the two players wouldn't choose an equilibrium strategy knowing that both can do better (and one can do strictly better) by choosing different strategies. One can easily see that interchangeable equilibria have the same biloss, and so the second condition implies that all solutions in the strict sense have the same biloss. We will later discuss other solution concepts. We end this section with a definition which is important in particular in the context of cooperative games.

Definition 1.2.9. The core of the game $\mathcal{G}_2$ is the subset of all Pareto optimal strategy pairs $(\tilde s_1, \tilde s_2)$ such that $L_1(\tilde s_1, \tilde s_2) \leq v^\sharp_1$ and $L_2(\tilde s_1, \tilde s_2) \leq v^\sharp_2$, where $v = (v^\sharp_1, v^\sharp_2)$ denotes the conservative value of the game.

At the end of this introductory chapter we demonstrate how the question of the existence of equilibria is related to the question of the existence of fixed points. Assume that there exist maps

$$ C : S_2 \to S_1, \qquad D : S_1 \to S_2 $$

such that the following equations hold:

$$ L_1(C(s_2), s_2) = \min_{s_1 \in S_1} L_1(s_1, s_2) \quad \forall\, s_2 \in S_2 $$
$$ L_2(s_1, D(s_1)) = \min_{s_2 \in S_2} L_2(s_1, s_2) \quad \forall\, s_1 \in S_1. $$

Such maps $C$ and $D$ are called optimal decision rules. Then any solution $(\tilde s_1, \tilde s_2)$ of the system

$$ C(\tilde s_2) = \tilde s_1, \qquad D(\tilde s_1) = \tilde s_2 $$

is a non-cooperative equilibrium. Denoting by $F$ the function

$$ F : S_1 \times S_2 \to S_1 \times S_2, \qquad (\tilde s_1, \tilde s_2) \mapsto (C(\tilde s_2), D(\tilde s_1)), $$

any fixed point⁵ $(\tilde s_1, \tilde s_2)$ of $F$ is a non-cooperative equilibrium. Hence we are in need of theorems about the existence of fixed points. The most famous one, the Banach fixed point theorem, does not apply in general, since the functions we consider are often not contractive.

⁵ In general, if one has a map $f : X \to X$, then any point $x \in X$ with $f(x) = x$ is called a fixed point.

The second most famous is probably the Brouwer fixed point theorem, which we will discuss in the next chapter. Later we will also consider more general fixed point theorems, which apply even in the framework of generalized functions, so-called correspondences.
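For finite games the decision rules $C$ and $D$ are just argmin maps over the loss matrices, and a fixed point can be searched for by iteration; a sketch with the Battle of the Sexes data from above (for this game the iteration stops immediately; in general it need not converge):

```python
# Discrete optimal decision rules: C(j) is a loss-minimizing reply of
# player 1 to strategy j, D(i) the other way round; a fixed point of
# (s1, s2) -> (C(s2), D(s1)) is a non-cooperative equilibrium.
L1 = [[-1, 0], [0, -4]]   # Battle of the Sexes bilosses
L2 = [[-4, 0], [0, -1]]

def C(j): return min(range(len(L1)), key=lambda i: L1[i][j])
def D(i): return min(range(len(L2[0])), key=lambda j: L2[i][j])

s1, s2 = 0, 0
while (C(s2), D(s1)) != (s1, s2):
    s1, s2 = C(s2), D(s1)
print((s1, s2))  # (0, 0): both go to the theater, one of the two NCEs
```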

1.3 The Duopoly Economy

In this section we try to illustrate the concepts of the previous section by applying them to one of the easiest models in economics, the so-called duopoly economy. In this economy we have two producers which compete on the market, i.e. they produce and sell the same product. We consider the producers as players in a game where the strategy sets are given by $S_i = \mathbb{R}_+$, and a strategy $s \in \mathbb{R}_+$ stands for the production of $s$ units of the product. It is reasonable to assume that the price of the product on the market is determined by demand. More precisely, we assume that there is an affine relationship of the form:

$$ p(s_1, s_2) = \alpha - \beta (s_1 + s_2) \qquad (1.4) $$

where $\alpha, \beta$ are positive constants. This relationship says, more or less, that if the total production exceeds $\alpha/\beta$, then no one wants to buy the product anymore. We assume that the individual cost functions for each producer are given by:

$$ c_1(s_1) = \gamma_1 s_1 + \delta_1, \qquad c_2(s_2) = \gamma_2 s_2 + \delta_2. $$

Here we interpret $\delta_1, \delta_2$ as fixed costs. The net costs for each producer are now given by

$$ L_1(s_1, s_2) = c_1(s_1) - p(s_1, s_2)\, s_1 = \beta s_1 \Big( s_1 + s_2 - \frac{\alpha - \gamma_1}{\beta} \Big) + \delta_1 $$
$$ L_2(s_1, s_2) = c_2(s_2) - p(s_1, s_2)\, s_2 = \beta s_2 \Big( s_1 + s_2 - \frac{\alpha - \gamma_2}{\beta} \Big) + \delta_2. $$

The biloss operator is then defined by $L = (L_1, L_2)$. Assume now player $i$ chooses a strategy $s_i \geq \frac{\alpha - \gamma_i}{\beta} - \delta_i =: u_i$. Then he has positive net costs. Since no producer will produce at positive net cost, we assume

$$ U = \{ (s_1, s_2) \in \mathbb{R}_+ \times \mathbb{R}_+ \mid s_1 \leq u_1, \; s_2 \leq u_2 \}. $$

For simplicity we will now assume that $\beta = 1$, $\delta_i = 0$ for $i = 1, 2$ and $\gamma_1 = \gamma_2$.⁶ Then also $u_1 = u_2 =: u$ and

$$ U = [0, u] \times [0, u]. $$

⁶ It is a very good exercise to work out the general case.

Let us first consider the conservative solutions of this game. We have

$$ L^\sharp_1(s_1) = \sup_{0 \leq s_2 \leq u} s_1 (s_1 + s_2 - u) = s_1^2, \qquad L^\sharp_2(s_2) = \sup_{0 \leq s_1 \leq u} s_2 (s_1 + s_2 - u) = s_2^2. $$

Hence $\inf_{0 \leq s_1 \leq u} L^\sharp_1(s_1) = 0 = \inf_{0 \leq s_2 \leq u} L^\sharp_2(s_2)$, and the conservative solution for this game is $(\tilde s_1, \tilde s_2) = (0, 0)$, which corresponds to the case where no one produces anything. For the conservative value of the game we obtain $v^\sharp = (0, 0)$. Obviously this is not the best choice. Let us now consider the set of Pareto optimal strategies. For the sum of the individual net costs we have

$$ L_1(s_1, s_2) + L_2(s_1, s_2) = (s_1 + s_2)(s_1 + s_2 - u) = z(z - u), $$

where we substituted $z := s_1 + s_2$. If $s_1, s_2$ range within $U$, then $z$ ranges between $0$ and $2u$, and hence the sum above ranges between $-u^2/4$ and $2u^2$. The image of $L$ is therefore contained in the set $\{ (x, y) \in \mathbb{R}^2 \mid -u^2/4 \leq x + y \leq 0 \} \cup \mathbb{R}_+ \times \mathbb{R}_+$. For strategy pairs $(\tilde s_1, \tilde s_2)$ such that $\tilde s_1 + \tilde s_2 = u/2$ we have

$$ L_1(\tilde s_1, \tilde s_2) + L_2(\tilde s_1, \tilde s_2) = (\tilde s_1 + \tilde s_2)(\tilde s_1 + \tilde s_2 - u) = -u^2/4. $$

Hence the set of Pareto optimal strategies is precisely the set

$$ \mathrm{Pareto} = \{ (\tilde s_1, \tilde s_2) \mid \tilde s_1 + \tilde s_2 = u/2 \}. $$
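A quick numerical confirmation of this computation (we fix $u = 1$ for the check; helper names ours):

```python
# Total net cost z(z - u) over z in [0, 2u] is minimized at z = u/2
# with value -u^2/4, matching the Pareto computation above.
u = 1.0

def total(s1, s2):
    return (s1 + s2) * (s1 + s2 - u)

zs = [2 * u * i / 1000 for i in range(1001)]       # grid over [0, 2u]
assert abs(min(z * (z - u) for z in zs) - (-u**2 / 4)) < 1e-9
print(total(0.1, 0.4))  # -0.25 = -u^2/4, since 0.1 + 0.4 = u/2
```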

This set is also the core of the game. Furthermore, we have $\alpha = \big( -\frac{u^2}{4}, -\frac{u^2}{4} \big)$ for the shadow minimum of the game. To see which strategy pairs are non-cooperative equilibria, we consider the optimal decision rules $C$, $D$ such that

$$ L_1(C(s_2), s_2) = \min_{s_1 \in S_1} L_1(s_1, s_2) = \min_{s_1 \in S_1} s_1 (s_1 + s_2 - u) $$
$$ L_2(s_1, D(s_1)) = \min_{s_2 \in S_2} L_2(s_1, s_2) = \min_{s_2 \in S_2} s_2 (s_1 + s_2 - u). $$

Differentiating with respect to $s_1$ resp. $s_2$ gives

$$ C(s_2) = \frac{u - s_2}{2}, \qquad D(s_1) = \frac{u - s_1}{2}. $$

From the last section we know that any fixed point of the map

$$ (s_1, s_2) \mapsto (C(s_2), D(s_1)) $$

is a non-cooperative equilibrium. Solving

$$ \tilde s_1 = C(\tilde s_2) = \frac{u - \tilde s_2}{2}, \qquad \tilde s_2 = D(\tilde s_1) = \frac{u - \tilde s_1}{2}, $$

we get $(\tilde s_1, \tilde s_2) = (u/3, u/3)$. This is the only non-cooperative equilibrium. Since $\tilde s_1 + \tilde s_2 = \frac{2}{3} u$, it is not Pareto optimal, and hence the duopoly game has no solution in the strict sense. However, these strategies yield each player a net loss of $-u^2/9$.

Assume now player 1 is sure that player 2 uses the optimal decision rule $D$ from above. Then he can choose his strategy $\tilde s_1$ so as to minimize

$$ s_1 \mapsto L_1(s_1, D(s_1)) = L_1 \Big( s_1, \frac{u - s_1}{2} \Big) = \frac{1}{2} s_1 (s_1 - u). $$

This yields $\tilde s_1 = u/2$. The second player then uses $\tilde s_2 = D(\tilde s_1) = \frac{u - u/2}{2} = u/4$. The net losses are then

$$ -\frac{1}{8} u^2 < -\frac{1}{9} u^2 = \text{NCE loss, for player 1} $$
$$ -\frac{1}{16} u^2 > -\frac{1}{9} u^2 = \text{NCE loss, for player 2}. $$

This puts the second player in a much worse position. The pair $(\tilde s_1, \tilde s_2)$ just computed is sometimes called the Stackelberg equilibrium. However, if player 2 has the same idea as player 1, then both play the strategy $u/2$, leading to a net loss of 0 for each player.
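The duopoly numbers can be checked numerically (again fixing $u = 1$; the iteration below exploits that the composed best-response map is a contraction with factor 1/4, an observation of ours):

```python
# Numerical check of the duopoly computations: best responses, the
# fixed point (u/3, u/3), and the Stackelberg losses.
u = 1.0

def L1(s1, s2): return s1 * (s1 + s2 - u)
def L2(s1, s2): return s2 * (s1 + s2 - u)
def C(s2): return (u - s2) / 2   # player 1's optimal decision rule
def D(s1): return (u - s1) / 2   # player 2's optimal decision rule

# iterate (s1, s2) -> (C(s2), D(s1)); it contracts towards the unique NCE
s1, s2 = 0.0, 0.0
for _ in range(100):
    s1, s2 = C(s2), D(s1)
print(round(s1, 6), round(L1(s1, s2), 6))  # 0.333333 -0.111111

# Stackelberg: player 1 commits to u/2, player 2 replies with D(u/2) = u/4
print(L1(u / 2, D(u / 2)), L2(u / 2, D(u / 2)))  # -0.125 -0.0625
```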

Chapter 2

Brouwer's Fixed Point Theorem and Nash's Equilibrium Theorem

The Brouwer fixed point theorem is one of the most important theorems in Topology. It can be seen as a multidimensional generalization of the intermediate value theorem in basic calculus, which implies that any continuous map $f : [a, b] \to [a, b]$ has a fixed point, that is, a point $x$ such that $f(x) = x$.

Theorem : Let $X \subset \mathbb{R}^m$ be convex and compact and let $f : X \to X$ be continuous. Then $f$ has a fixed point.
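The one-dimensional case can be made concrete: for continuous $f : [a, b] \to [a, b]$, the function $g(x) = f(x) - x$ satisfies $g(a) \geq 0$ and $g(b) \leq 0$, so bisection locates a fixed point. A sketch (this numerical approach is ours, not part of the proof given below):

```python
import math

def fixed_point(f, a, b, tol=1e-12):
    """Bisection on g(x) = f(x) - x; g(a) >= 0 and g(b) <= 0 hold
    because f maps [a, b] into itself."""
    g = lambda x: f(x) - x
    lo, hi = a, b
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if g(mid) >= 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

x = fixed_point(math.cos, 0.0, 1.0)
print(round(x, 6))  # 0.739085, the unique fixed point of cos on [0, 1]
```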

This generalization of the one-dimensional statement is harder to prove than it appears at first. There are very nice proofs using methods from either algebraic or differential Topology. However, we will give a proof which does not depend on these methods and only uses very basic ideas from combinatorics. Our main tool will be Sperner's lemma, which was proven in 1928 and uses the idea of a proper labeling of a simplicial complex. There are many applications of Brouwer's theorem. The most important one in the context of game theory is doubtless Nash's equilibrium theorem, which in its most elementary version guarantees the existence of non-cooperative equilibria for all games with finitely many pure strategies. This will be proven in the last part of this chapter.

2.1 Simplices

Definition 2.1.1. Let $x_0, ..., x_n \in \mathbb{R}^m$ be a set of linearly independent vectors. The simplex spanned by $x_0, ..., x_n$ is the set of all strictly positive convex combinations¹ of the $x_i$:

$$ x_0 \cdots x_n := \Big\{ \sum_{i=0}^{n} \lambda_i x_i : \lambda_i > 0 \text{ and } \sum_{i=0}^{n} \lambda_i = 1 \Big\}. \qquad (2.1) $$

The $x_i$ are called the vertices of the simplex, and each simplex of the form $x_{i_0} \cdots x_{i_k}$ is called a face of $x_0 \cdots x_n$.

¹ We consider the open simplex; note that some authors actually mean closed simplices when they speak of simplices.

If we refer to the dimension of the simplex, we also speak of an

n-simplex, where n is as in the deﬁnition above.

Example 2.1.1. Let $e_i = (0, ..., 1, ..., 0) \in \mathbb{R}^{n+1}$, where the 1 occurs at the $i$-th position and we start counting positions with 0. Then

$$ \Delta^n = e_0 \cdots e_n $$

is called the standard n-simplex.

We denote by $\overline{x_0 \cdots x_n}$ the closure of the simplex $x_0 \cdots x_n$. Then

$$ \overline{x_0 \cdots x_n} = \mathrm{co}(x_0, ..., x_n), $$

where the right hand side denotes the convex closure. For $y = \sum_{i=0}^n \lambda_i x_i \in \overline{x_0 \cdots x_n}$ we let

$$ \chi(y) = \{ i \mid \lambda_i > 0 \}. \qquad (2.2) $$

If $\chi(y) = \{i_0, ..., i_k\}$, then $y \in x_{i_0} \cdots x_{i_k}$. This face is called the carrier of $y$. It is the only face of $x_0 \cdots x_n$ which contains $y$. We have

$$ \overline{x_0 \cdots x_n} = \coprod_{\{i_0, ..., i_k\} \subset \{0, ..., n\}} x_{i_0} \cdots x_{i_k}, \qquad (2.3) $$

where $k$ runs from 0 to $n$ and $\coprod$ stands for the disjoint union. The numbers $\lambda_0, ..., \lambda_n$ are called the barycentric coordinates of $y$.
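The map $\chi$ is simple enough to state in code (helper name and tolerance ours):

```python
# chi(y) from equation (2.2): the indices with strictly positive
# barycentric coordinate; it singles out the carrier face of y.
def chi(lambdas, eps=1e-12):
    return {i for i, lam in enumerate(lambdas) if lam > eps}

# y = (x0 + x2)/2 lies on the (open) face x0 x2:
print(chi([0.5, 0.0, 0.5]))  # {0, 2}
```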

Exercise 2.1.1. Show that any n-simplex is homeomorphic to the standard n-simplex.

Definition 2.1.2. Let $T = x_0 \cdots x_n$ be an n-simplex. A simplicial subdivision of $T$ is a finite collection of simplices $\{ T_i \mid i \in I \}$ such that

$$ \coprod_{i \in I} T_i = \overline{T} $$

and for any pair $i, j \in I$ we have

$$ \overline{T_i} \cap \overline{T_j} = \begin{cases} \emptyset \\ \text{the closure of a common face.} \end{cases} $$

The mesh of a subdivision is the diameter of the largest simplex in the subdivision.

Example 2.1.2.

For any simplex $T = x_0 \cdots x_n$, the barycenter of $T$, denoted by $b(T)$, is the point

$$ b(T) = \frac{1}{n+1} \sum_{i=0}^{n} x_i. \qquad (2.4) $$
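Equation (2.4) in code (function name ours):

```python
# Barycenter of a simplex: the average of its n + 1 vertices.
def barycenter(vertices):
    n1 = len(vertices)              # n + 1 vertices span an n-simplex
    dim = len(vertices[0])
    return tuple(sum(v[d] for v in vertices) / n1 for d in range(dim))

# the standard 2-simplex e0 e1 e2 in R^3
print(barycenter([(1, 0, 0), (0, 1, 0), (0, 0, 1)]))
```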

For simplices $T_1, T_2$ define

$$ T_1 > T_2 \; :\Longleftrightarrow \; T_2 \text{ is a face of } T_1 \text{ and } T_2 \neq T_1. $$

Given a simplex $T$, let us consider the family of all simplices of the form

$$ b(T_0) \cdots b(T_k), \qquad \text{where } T \geq T_0 > T_1 > \cdots > T_k $$

and $k$ runs from 0 to $n$. This defines a simplicial subdivision of $T$. It is called the first barycentric subdivision. Higher barycentric subdivisions are defined recursively. Clearly, for any 0-simplex $v$ we have $b(v) = v$.

Example 2.1.3. ( Barycentric subdivision )

Definition 2.1.3. Let $T = x_0 \cdots x_n$ be simplicially subdivided, and let $V$ denote the collection of all the vertices of all simplices in the subdivision. A function

$$ \lambda : V \to \{0, ..., n\} \qquad (2.5) $$

satisfying $\lambda(v) \in \chi(v)$ is called a proper labeling of the subdivision. We call a simplex in the subdivision completely labeled if $\lambda$ assumes all the values $0, ..., n$ on its set of vertices, and almost completely labeled if $\lambda$ assumes exactly the values $0, ..., n-1$.

Let us study the situation in the following Example :

Example 2.1.4. ( Labeling and completely labeled simplex )

2.2 Sperner's Lemma

Theorem 2.2.1 (Sperner, 1928). Let $T = x_0 \cdots x_n$ be simplicially subdivided and properly labeled by the function $\lambda$. Then there is an odd number of completely labeled simplices in the subdivision.

Proof. The proof goes by induction on n. If n = 0, then T = x_0 and λ(x_0) ∈ χ(x_0) = {0}. That means T is the only simplex in the subdivision and it is completely labeled. Let us now assume the theorem is true for n−1. Let

    C = set of all completely labeled n-simplices in the subdivision
    A = set of all almost completely labeled n-simplices in the subdivision
    B = set of all (n−1)-simplices in the subdivision which lie on the boundary and bear all the labels 0, ..., n−1
    E = set of all (n−1)-simplices in the subdivision which bear all the labels 0, ..., n−1

The sets C, A, B are pairwise disjoint; B, however, is a subset of E. Furthermore all simplices in B are contained in the face x_0 ... x_{n−1}. An (n−1)-simplex either lies on the boundary, in which case it is the face of exactly one n-simplex in the subdivision, or it is the common face of two n-simplices.

A graph consists of edges and nodes such that each edge joins exactly two nodes. If e denotes an edge and d a node, we write d ∈ e if d is one of the two nodes joined by e, and d ∉ e if not. Let us now construct a graph in the following way:

    Edges := E
    Nodes := D := C ∪ A ∪ B

and for d ∈ D, e ∈ E we declare

    d ∈ e  :⇔  d ∈ A ∪ C and e is a face of d,  or  e = d ∈ B.

We have to check that this indeed defines a graph, i.e. that any edge e ∈ E joins exactly two nodes. Here we have to consider two cases:

1. e lies on the boundary and bears all labels 0, ..., n−1. Then d_1 := e ∈ B, and e is the face of exactly one n-simplex d_2 ∈ A ∪ C. Interpreting d_1 and d_2 as nodes, we see that by the definition above e joins d_1 and d_2 and no more.

2. e is a common face of two n-simplices d_1 and d_2. Then both belong to either A or C ( they are at least almost completely labeled, since one of their faces, e in fact, bears all the labels 0, ..., n−1 ), and hence by the definition above e joins d_1 and d_2 and no more.

For each node d ∈ D the degree is defined by

    δ(d) = number of edges e s.t. d ∈ e.

Let us compute the degree of each node in our graph. We have to consider the following three cases:

1. d ∈ A : Then exactly two vertices v_1 and v_2 of d have the same label, and exactly two faces e_1 and e_2 of d belong to E ( namely the two faces obtained by omitting v_1 or v_2; any face obtained by omitting a vertex with a unique label loses that label and hence does not belong to E ). Hence d ∈ e_1 and d ∈ e_2 but no more. Therefore δ(d) = 2.

2. d ∈ B : Then e := d ∈ E is the only edge such that d ∈ e, and therefore δ(d) = 1.

3. d ∈ C : Then d is completely labeled and hence has only one face which bears all the labels 0, ..., n−1. This means only one of the faces of d belongs to E, and hence using the definition we have δ(d) = 1.

Summarizing we get the following:

    δ(d) = 1 if d ∈ B ∪ C,
    δ(d) = 2 if d ∈ A.

In general, for a graph with nodes D and edges E, one has the following relationship:

    Σ_{d ∈ D} δ(d) = 2|E|.        (2.6)

This relationship holds since, when counting the edges at each node and summing up over the nodes, one counts each edge exactly twice. For our graph it follows that

    2|A| + |B| + |C| = Σ_{d ∈ D} δ(d) = 2|E|,

and this implies that |B| + |C| is even. The simplicial subdivision of T and the proper labeling function λ induce via restriction a simplicial subdivision and a proper labeling function for x_0 ... x_{n−1}. Then B is the set of completely labeled simplices in this simplicial subdivision with this proper labeling function. Hence by induction |B| is odd, and therefore |C| is odd, which was to be proved.

In the proof of the Brouwer fixed point theorem it will not be so important how many completely labeled simplices there are in the simplicial subdivision, but that there is at least one. This is the statement of the following corollary.

Corollary 2.2.1. Let T = x_0 ... x_n be simplicially subdivided and properly labeled. Then there exists at least one completely labeled simplex in the simplicial subdivision.

Proof. Zero is not an odd number.
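The parity statement is easy to check by machine in dimension one, where a proper labeling of a subdivided interval x_0 x_1 assigns label 0 to the left endpoint, label 1 to the right endpoint, and arbitrary labels to interior vertices; a completely labeled 1-simplex is an edge whose endpoints carry both labels. The following sketch is a hypothetical illustration (the function name is ours, not from the notes): it counts such edges for random proper labelings and confirms the count is always odd.

```python
import random

def completely_labeled_count(labels):
    # labels[i] is the label of the i-th vertex of a subdivided interval;
    # an edge (i, i+1) is completely labeled iff its endpoints bear {0, 1}
    return sum(1 for a, b in zip(labels, labels[1:]) if {a, b} == {0, 1})

random.seed(0)
for _ in range(1000):
    n = random.randint(1, 30)  # number of edges in the subdivision
    # proper labeling: endpoints are forced, interior vertices are free
    labels = [0] + [random.randint(0, 1) for _ in range(n - 1)] + [1]
    assert completely_labeled_count(labels) % 2 == 1  # Sperner: odd
print("all labelings produced an odd count")
```

The parity is visible directly: walking from label 0 to label 1, the label must switch an odd number of times.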

2.3 Proof of Brouwer's Theorem

We will first prove the following simplified version of the Brouwer fixed point theorem.

Proposition 2.3.1. Let f : ∆^n → ∆^n be continuous. Then f has a fixed point.

Proof. Let ε > 0. Since ∆^n is compact we can find a simplicial subdivision with mesh less than ε ( this is possible for example by using iterated barycentric subdivisions ). Let V be the set of vertices of this subdivision. Let us consider an arbitrary vertex v of the subdivision with v ∈ x_{i_0} ... x_{i_k}. Let v_i and f_i denote the components of v and f. Then

    {i_0, ..., i_k} ∩ {i | f_i(v) ≤ v_i} ≠ ∅,

since f_i(v) > v_i for all i ∈ {i_0, ..., i_k} would imply

    1 = Σ_{i=0}^{n} f_i(v) ≥ Σ_{j=0}^{k} f_{i_j}(v) > Σ_{j=0}^{k} v_{i_j} = Σ_{i=0}^{n} v_i = 1,

where the last equality holds because the barycentric coordinates of v vanish outside {i_0, ..., i_k}. We define a labeling function

    λ : V → {0, ..., n}

by choosing for each vertex v ∈ x_{i_0} ... x_{i_k} one element λ(v) ∈ {i_0, ..., i_k} ∩ {i | f_i(v) ≤ v_i}. Then λ is a proper labeling function. It follows from Corollary 2.2.1 that there exists a completely labeled simplex in the simplicial subdivision. This means there exists a simplex x_ε^0 ... x_ε^n such that for any i ∈ {0, 1, ..., n} there exists j s.t.

    f_i(x_ε^j) ≤ x_{ε,i}^j,        (2.7)

where x_{ε,i}^j denotes the i-th component of x_ε^j. We now let ε tend to zero. Then we get a sequence of completely labeled simplices x_ε^0 ... x_ε^n. Since the meshes of the subdivisions converge to 0 and furthermore ∆^n is compact, we can extract a convergent subsequence which converges to one point of ∆^n. We denote this point by x. This point is the common limit of all the sequences (x_ε^j) for ε tending to 0. Therefore, using equation (2.7) and the continuity of f, we get

    f_i(x) ≤ x_i  ∀ i ∈ {0, ..., n}.

Assume now that for one i ∈ {0, ..., n} we had f_i(x) < x_i. Then

    1 = Σ_{i=0}^{n} f_i(x) < Σ_{i=0}^{n} x_i = 1.

Therefore we must have f_i(x) = x_i for all i. This is the same as f(x) = x, and x is a fixed point.
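The proof is constructive enough to run. One can triangulate ∆^2 by a regular grid, label each grid vertex with an index i from its carrier for which f_i(v) ≤ v_i, and search for a completely labeled triangle; any of its vertices then approximately satisfies f(x) ≤ x coordinatewise and so approximates a fixed point. The sketch below is an illustration under these assumptions (all names are ours, and the test function is a column-stochastic linear map, which maps the simplex into itself; its fixed point is a Perron vector, not something the notes discuss).

```python
def approx_fixed_point(f, N=100):
    # Search a regular subdivision of the 2-simplex for a completely
    # labeled triangle under the labeling rule of Proposition 2.3.1.
    def point(i, j):
        # barycentric coordinates of the grid vertex (i, j)
        return (i / N, j / N, (N - i - j) / N)

    def label(v):
        fv = f(v)
        # pick a carrier index with f_i(v) <= v_i (exists by the proof)
        return min(i for i in range(3) if v[i] > 0 and fv[i] <= v[i] + 1e-12)

    for i in range(N):
        for j in range(N - i):
            up = [point(i, j), point(i + 1, j), point(i, j + 1)]
            if {label(v) for v in up} == {0, 1, 2}:
                return up[0]
            if i + j <= N - 2:
                down = [point(i + 1, j), point(i, j + 1), point(i + 1, j + 1)]
                if {label(v) for v in down} == {0, 1, 2}:
                    return down[0]
    return None

# column-stochastic map: f(x) = Ax maps the simplex into itself
A = [[0.5, 0.2, 0.3], [0.3, 0.6, 0.1], [0.2, 0.2, 0.6]]
f = lambda x: tuple(sum(A[r][c] * x[c] for c in range(3)) for r in range(3))
x = approx_fixed_point(f)
print(max(abs(f(x)[i] - x[i]) for i in range(3)))  # small residual, of order 1/N
```

Sperner's lemma guarantees the search succeeds, and the residual shrinks with the mesh, exactly as in the limit argument of the proof.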

In general a map φ : X → Y is called a homeomorphism if it is continuous, bijective, and its inverse φ^{−1} : Y → X is also continuous. In this case we write X ≈ Y. We have the following corollary:

Corollary 2.3.1. Let X ≈ ∆^n and f : X → X be continuous. Then f has a fixed point.

Proof. Let φ : X → ∆^n be a homeomorphism. Define

    f_φ : ∆^n → ∆^n,  y ↦ φ(f(φ^{−1}(y))).

Clearly f_φ is continuous and hence by Proposition 2.3.1 must have a fixed point ỹ, i.e. f_φ(ỹ) = ỹ. Define x̃ := φ^{−1}(ỹ). Then

    f(x̃) = f(φ^{−1}(ỹ)) = φ^{−1}(f_φ(ỹ)) = φ^{−1}(ỹ) = x̃.

The following proposition will help us to prove the Brouwer fixed point theorem in its general form.

Proposition 2.3.2. Let X ⊂ R^m be convex and compact. Then X ≈ D^n for some 0 ≤ n ≤ m, where D^n := {x ∈ R^n : ||x|| ≤ 1} is the n-dimensional unit ball.

Proof. Let us first assume that 0 is contained in the interior X° of X. Let v ∈ R^m, v ≠ 0, and consider the ray starting at the origin

    γ(t) := t v,  t ≥ 0.

We claim that this ray intersects the boundary ∂X in exactly one point. Since γ(0) ∈ X and X is compact, it is clear that it intersects ∂X in at least one point. Assume now that x, y ∈ ∂X ∩ γ([0, ∞)) and x ≠ y. Since x and y are collinear, w.l.o.g. we can assume that ||x|| > ||y||. Since 0 ∈ X°, there exists ε > 0 such that D_ε^m := {z ∈ R^m : ||z|| ≤ ε} ⊂ X°. Then the convex hull co(x, D_ε^m) contains an open neighborhood of y. Since X is convex and closed, we have co(x, D_ε^m) ⊂ X and hence y ∈ X°, which is a contradiction to y ∈ ∂X. Let us now consider the following function:

    f : ∂X → S^{m−1} := {z ∈ R^m : ||z|| = 1},  x ↦ x / ||x||.

Since X contains an open ball around the origin, we have 0 ∉ ∂X, so the map f is well defined; furthermore it is surjective. f is clearly continuous, and it follows from the discussion above that it is also injective ( otherwise two elements of ∂X would lie on the same ray from the origin ). Since ∂X is compact and S^{m−1} is Hausdorff, it follows that f is a homeomorphism ( result from topology: if f : A → B is continuous and bijective, A is compact and B is Hausdorff, then f is a homeomorphism ). Hence the inverse map

    f^{−1} : S^{m−1} → ∂X

is also continuous. Let us now define a map which is defined on the whole space X.

    k : D^m → X,
    k(x) := ||x|| f^{−1}(x / ||x||)  if x ≠ 0,
    k(0) := 0.

Since X is compact, there exists M ∈ R such that ||x|| ≤ M for all x ∈ X. Then also ||f^{−1}(x/||x||)|| ≤ M for all x ≠ 0 ( since f^{−1}(x/||x||) ∈ ∂X ⊂ X ), and hence

    ||k(x)|| ≤ ||x|| M.

It follows from this that the map k is continuous at 0. Continuity at all other points is clear, so k is a continuous map. Assume that x, y ∈ D^m and k(x) = k(y). Since f^{−1} never takes the value 0, we have k(x) = 0 iff x = 0, so we may assume x, y ≠ 0. Note that for any unit vector u the point f^{−1}(u) lies on the ray through u; therefore k(x) is a positive multiple of x, and k(x) = k(y) forces x/||x|| = y/||y||. But then the equation

    ||x|| f^{−1}(x/||x||) = ||y|| f^{−1}(y/||y||)

reduces to ||x|| = ||y||, which finally implies x = y. This shows that k is injective. k is also surjective: assume x ∈ X, x ≠ 0 ( for x = 0 we have k(0) = 0 ). Then, as in the first part of the proof, x can be written as x = t x̄ where x̄ ∈ ∂X and t ∈ (0, 1], and f(x̄) = x̄/||x̄||, which is equivalent to x̄ = f^{−1}(x̄/||x̄||). The point u := t x̄/||x̄|| lies in D^m, and

    k(u) = ||u|| f^{−1}(u/||u||) = t f^{−1}(x̄/||x̄||) = t x̄ = x.

Repeating the previous argument ( since now D^m is compact and X is Hausdorff ), we have that k is a homeomorphism. Let us now consider the general case. W.l.o.g. we can still assume that 0 ∈ X ( by translation; translations are homeomorphisms ), but we can no longer assume that 0 ∈ X°. However, we can find a maximal number of linearly independent vectors v_1, ..., v_n ∈ X. Then X ⊂ span(v_1, ..., v_n). Easy linear algebra shows that there exists a linear map

    φ : R^m → R^n

which maps span(v_1, ..., v_n) isomorphically onto R^n ( and its orthogonal complement to zero ). This map maps X homeomorphically onto φ(X) ⊂ R^n, and since linear maps preserve convexity, φ(X) is still convex. Moreover φ(X) has nonempty interior in R^n, so after a further translation we may assume 0 ∈ φ(X)°. Now we can apply our previous result to φ(X) and get

    X ≈ φ(X) ≈ D^n  ⇒  X ≈ D^n.

Corollary 2.3.2. Let X ⊂ R^m be convex and compact. Then X ≈ ∆^n for some 0 ≤ n ≤ m.

Proof. Since ∆^n is convex and compact for all n, we can use the preceding proposition to conclude that ∆^n ≈ D^n ( take the dimensions into account ). Also from the preceding proposition we know that there must exist n such that X ≈ D^n. Then X ≈ D^n ≈ ∆^n.

We are now able to prove the Brouwer fixed point theorem in its general form:

Theorem 2.3.1. Let X ⊂ R^m be convex and compact and let f : X → X be continuous. Then f has a fixed point.

Proof. By Corollary 2.3.2, X ≈ ∆^n. Hence the theorem follows by application of Corollary 2.3.1.

2.4 Nash's Equilibrium Theorem

Theorem 2.4.1. Let G_2 be a game with finite strategy sets S_1 and S_2 and U = S_1 × S_2. Then there exists at least one non-cooperative equilibrium for the extended game G̃_2.

Proof. Let L be the biloss-operator of G_2 and L̃ its extension. Furthermore let S_1 = {s_1^i : i ∈ {1, ..., n}}, S_2 = {s_2^j : j ∈ {1, ..., m}} and l_1 = −L̃_1, l_2 = −L̃_2. Clearly S̃_1 × S̃_2 ≈ ∆^{n−1} × ∆^{m−1} is convex and compact. Let s̃_1 ∈ S̃_1 and s̃_2 ∈ S̃_2 be given as

    s̃_1 = Σ_{i=1}^{n} λ_i^1 δ_{s_1^i},    s̃_2 = Σ_{j=1}^{m} λ_j^2 δ_{s_2^j}.

For 1 ≤ i ≤ n and 1 ≤ j ≤ m we define maps c_i, d_j : S̃_1 × S̃_2 → R as follows:

    c_i(s̃_1, s̃_2) = max( 0, l_1(s_1^i, s̃_2) − l_1(s̃_1, s̃_2) )
    d_j(s̃_1, s̃_2) = max( 0, l_2(s̃_1, s_2^j) − l_2(s̃_1, s̃_2) ).

Furthermore we define a map

    f : S̃_1 × S̃_2 → S̃_1 × S̃_2

    (s̃_1, s̃_2) ↦ ( Σ_{i=1}^{n} [ (λ_i^1 + c_i(s̃_1, s̃_2)) / (1 + Σ_{i=1}^{n} c_i(s̃_1, s̃_2)) ] δ_{s_1^i} ,
                     Σ_{j=1}^{m} [ (λ_j^2 + d_j(s̃_1, s̃_2)) / (1 + Σ_{j=1}^{m} d_j(s̃_1, s̃_2)) ] δ_{s_2^j} )

and denote the new weights by λ̃_i^1 and λ̃_j^2 respectively.

Clearly Σ_{i=1}^{n} λ̃_i^1 = 1 = Σ_{j=1}^{m} λ̃_j^2, and hence the right hand side of the expression indeed defines an element of S̃_1 × S̃_2. Obviously the map f is continuous, and hence by application of the Brouwer fixed point theorem there must exist a pair (s̃_1^♯, s̃_2^♯) such that

    (s̃_1^♯, s̃_2^♯) = f(s̃_1^♯, s̃_2^♯).

Writing s̃_1^♯ = Σ_{i=1}^{n} λ_i^{1♯} δ_{s_1^i} and s̃_2^♯ = Σ_{j=1}^{m} λ_j^{2♯} δ_{s_2^j}, we see that the equation above is equivalent to the following set of equations:

    λ_i^{1♯} = ( λ_i^{1♯} + c_i(s̃_1^♯, s̃_2^♯) ) / ( 1 + Σ_{i=1}^{n} c_i(s̃_1^♯, s̃_2^♯) ),
    λ_j^{2♯} = ( λ_j^{2♯} + d_j(s̃_1^♯, s̃_2^♯) ) / ( 1 + Σ_{j=1}^{m} d_j(s̃_1^♯, s̃_2^♯) )

    ∀ 1 ≤ i ≤ n, 1 ≤ j ≤ m.

Let us assume now that for all 1 ≤ i ≤ n we have l_1(s_1^i, s̃_2^♯) > l_1(s̃_1^♯, s̃_2^♯). Then, using that the extended biloss-operator and hence also l_1 and l_2 are bilinear, we have

    l_1(s̃_1^♯, s̃_2^♯) = Σ_{i=1}^{n} λ_i^{1♯} l_1(s_1^i, s̃_2^♯) > Σ_{i=1}^{n} λ_i^{1♯} l_1(s̃_1^♯, s̃_2^♯) = l_1(s̃_1^♯, s̃_2^♯),

which is a contradiction. Therefore there must exist i ∈ {1, ..., n} such that l_1(s_1^i, s̃_2^♯) ≤ l_1(s̃_1^♯, s̃_2^♯); since l_1(s̃_1^♯, s̃_2^♯) is the λ^{1♯}-weighted average of the values l_1(s_1^i, s̃_2^♯), such an i can moreover be chosen with λ_i^{1♯} > 0. For this i we have c_i(s̃_1^♯, s̃_2^♯) = 0, and hence it follows from our set of equations above that for this i

    λ_i^{1♯} = λ_i^{1♯} / ( 1 + Σ_{i=1}^{n} c_i(s̃_1^♯, s̃_2^♯) ).

Since λ_i^{1♯} > 0, this equation can only hold if Σ_{i=1}^{n} c_i(s̃_1^♯, s̃_2^♯) = 0, which by nonnegativity of the maps c_i can only be true if

    c_i(s̃_1^♯, s̃_2^♯) = 0 for all 1 ≤ i ≤ n.

By definition of the maps c_i this means nothing else than

    l_1(s_1^i, s̃_2^♯) ≤ l_1(s̃_1^♯, s̃_2^♯)  ∀ 1 ≤ i ≤ n.

Using again the bilinearity of l_1, we get for arbitrary s̃_1 = Σ_{i=1}^{n} λ_i^1 δ_{s_1^i}

    l_1(s̃_1, s̃_2^♯) = Σ_{i=1}^{n} λ_i^1 l_1(s_1^i, s̃_2^♯) ≤ Σ_{i=1}^{n} λ_i^1 l_1(s̃_1^♯, s̃_2^♯) = l_1(s̃_1^♯, s̃_2^♯).

A similar argument involving λ_j^{2♯}, d_j and l_2 shows that for arbitrary s̃_2 = Σ_{j=1}^{m} λ_j^2 δ_{s_2^j}

    l_2(s̃_1^♯, s̃_2) ≤ l_2(s̃_1^♯, s̃_2^♯).

Using L̃_1 = −l_1 and L̃_2 = −l_2 we get

    L̃_1(s̃_1, s̃_2^♯) ≥ L̃_1(s̃_1^♯, s̃_2^♯)
    L̃_2(s̃_1^♯, s̃_2) ≥ L̃_2(s̃_1^♯, s̃_2^♯),

which shows that (s̃_1^♯, s̃_2^♯) is a non-cooperative equilibrium for G̃_2.
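The map f in the proof can be computed directly for small bimatrix games. The sketch below is an illustration, not code from the notes: gains l_1, l_2 are assumed to come from payoff matrices, so that l for mixed strategies p, q is the usual bilinear form, and all function names are ours. It evaluates one step of the map for Matching Pennies and checks that the uniform mixed profile, the game's unique equilibrium, is a fixed point, because every improvement term c_i, d_j vanishes there.

```python
def nash_map(A, B, p, q):
    # A[i][j] = gain l_1 of player 1, B[i][j] = gain l_2 of player 2,
    # p and q are mixed strategies (the weights lambda^1, lambda^2)
    n, m = len(A), len(A[0])
    l1 = sum(p[i] * A[i][j] * q[j] for i in range(n) for j in range(m))
    l2 = sum(p[i] * B[i][j] * q[j] for i in range(n) for j in range(m))
    # improvement terms c_i and d_j from the proof
    c = [max(0.0, sum(A[i][j] * q[j] for j in range(m)) - l1) for i in range(n)]
    d = [max(0.0, sum(p[i] * B[i][j] for i in range(n)) - l2) for j in range(m)]
    # renormalized weights: one application of the map f
    p2 = [(p[i] + c[i]) / (1 + sum(c)) for i in range(n)]
    q2 = [(q[j] + d[j]) / (1 + sum(d)) for j in range(m)]
    return p2, q2

# Matching Pennies, written as gains: A = -B
A = [[1, -1], [-1, 1]]
B = [[-1, 1], [1, -1]]
p, q = nash_map(A, B, [0.5, 0.5], [0.5, 0.5])
print(p, q)  # the uniform profile is a fixed point of the map
```

Note that iterating this map is not in general a convergent algorithm for finding equilibria; the proof only uses the existence of a fixed point via Brouwer's theorem.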

2.5 Two Person Zero Sum Games and the Minimax Theorem

Definition 2.5.1. A two person game G_2 is called a zero sum game if L_1 = −L_2.

Nim(2,2) and "simplified poker" are zero sum games; the "Battle of the Sexes" is not. For zero sum games one usually writes down only L_1, since L_2 is then determined by the negative of L_1.

The application of Theorem 2.4.1 in this context yields a theorem which is called the MiniMax Theorem and has applications in many different parts of mathematics.

Theorem 2.5.1. (MiniMax Theorem) Let G_2 be a zero sum game with finite strategy sets. Then for the extended game G̃_2 we have

    max_{s̃_2 ∈ S̃_2} min_{s̃_1 ∈ S̃_1} L_1(s̃_1, s̃_2) = min_{s̃_1 ∈ S̃_1} max_{s̃_2 ∈ S̃_2} L_1(s̃_1, s̃_2),

and for any NCE (s̃_1^♯, s̃_2^♯) this value coincides with L_1(s̃_1^♯, s̃_2^♯). In particular all NCEs have the same biloss.

Proof. ( We denote the biloss operator of the extended game by L instead of L̃, just to keep the formulas readable. ) Clearly we have for all s̃_1 ∈ S̃_1, s̃_2 ∈ S̃_2 that

    min_{s̃_1 ∈ S̃_1} L_1(s̃_1, s̃_2) ≤ L_1(s̃_1, s̃_2).

Taking the maximum over all strategies s̃_2 ∈ S̃_2 on both sides, we get for all s̃_1 ∈ S̃_1

    max_{s̃_2 ∈ S̃_2} min_{s̃_1 ∈ S̃_1} L_1(s̃_1, s̃_2) ≤ max_{s̃_2 ∈ S̃_2} L_1(s̃_1, s̃_2).

Taking the minimum over all strategies s̃_1 ∈ S̃_1 on the right side of the last inequality, we get

    max_{s̃_2 ∈ S̃_2} min_{s̃_1 ∈ S̃_1} L_1(s̃_1, s̃_2) ≤ min_{s̃_1 ∈ S̃_1} max_{s̃_2 ∈ S̃_2} L_1(s̃_1, s̃_2).        (2.8)

It follows from Theorem 2.4.1 that there exists at least one NCE (s̃_1^♯, s̃_2^♯). Then

    L_1(s̃_1^♯, s̃_2^♯) = min_{s̃_1 ∈ S̃_1} L_1(s̃_1, s̃_2^♯)
    L_2(s̃_1^♯, s̃_2^♯) = min_{s̃_2 ∈ S̃_2} L_2(s̃_1^♯, s̃_2).

Using that L_2 = −L_1 and min(−·) = −max(·), we see that the second equation above is equivalent to

    L_1(s̃_1^♯, s̃_2^♯) = max_{s̃_2 ∈ S̃_2} L_1(s̃_1^♯, s̃_2).

Now we have

    min_{s̃_1 ∈ S̃_1} max_{s̃_2 ∈ S̃_2} L_1(s̃_1, s̃_2) ≤ max_{s̃_2 ∈ S̃_2} L_1(s̃_1^♯, s̃_2) = L_1(s̃_1^♯, s̃_2^♯)
    = min_{s̃_1 ∈ S̃_1} L_1(s̃_1, s̃_2^♯) ≤ max_{s̃_2 ∈ S̃_2} min_{s̃_1 ∈ S̃_1} L_1(s̃_1, s̃_2).

Together with (2.8) we get

    max_{s̃_2 ∈ S̃_2} min_{s̃_1 ∈ S̃_1} L_1(s̃_1, s̃_2) = L_1(s̃_1^♯, s̃_2^♯) = min_{s̃_1 ∈ S̃_1} max_{s̃_2 ∈ S̃_2} L_1(s̃_1, s̃_2).

Since (s̃_1^♯, s̃_2^♯) was an arbitrary NCE, the statement of the theorem follows.
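For a finite zero sum game both sides of the MiniMax Theorem can be evaluated numerically: by bilinearity, the inner optimum over the opponent's mixed strategies is always attained at a pure strategy, so for a 2×2 game each value reduces to a search over a single mixing weight. The sketch below is an illustration for 2×2 games (the function name and grid-search approach are ours, not from the notes); it checks the theorem for Matching Pennies, whose value is 0.

```python
def minmax_values(A, steps=1000):
    # A[i][j] = loss L_1 of player 1; each player has two pure strategies.
    # Returns (min_p max_q L_1, max_q min_p L_1) over mixed strategies,
    # searching a grid of mixing weights; inner optima are pure by bilinearity.
    grid = [t / steps for t in range(steps + 1)]
    def row_loss(p, j):   # L_1 of mix (p, 1-p) against pure column j
        return p * A[0][j] + (1 - p) * A[1][j]
    def col_loss(i, q):   # L_1 of pure row i against mix (q, 1-q)
        return q * A[i][0] + (1 - q) * A[i][1]
    minmax = min(max(row_loss(p, j) for j in (0, 1)) for p in grid)
    maxmin = max(min(col_loss(i, q) for i in (0, 1)) for q in grid)
    return minmax, maxmin

A = [[1, -1], [-1, 1]]  # Matching Pennies, losses of player 1
v1, v2 = minmax_values(A)
print(v1, v2)  # both equal the value of the game, here 0
```

In general, computing the value of a matrix game is a linear program; the grid search here is only viable because each player mixes over two strategies.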

Chapter 3

More general Equilibrium Theorems

3.1 N-Person Games and Nash's generalized Equilibrium Theorem

Definition 3.1.1. Let N = {1, 2, ..., n}. An N-person ( or n-person or n-player ) game G_n consists of the following data:

1. Topological spaces S_1, ..., S_n, the so called strategies for players 1 to n

2. A subset S(N) ⊂ S_1 × ... × S_n, the so called allowed or feasible multi strategies

3. A (multi-)loss operator L = (L_1, ..., L_n) : S_1 × ... × S_n → R^n

All of the definitions of chapter 1, given there in the framework of 2-player games, can be generalized to n-player games. This is in fact a very good exercise. We will restrict ourselves, though, to the reformulation of the concept of non-cooperative equilibria within n-player games.

Definition 3.1.2. A multi strategy s = (s_1, ..., s_n) is called a non-cooperative equilibrium ( in short NCE ) for the game G_n if for all i we have

    L_i(s) = min{ L_i(s̃) : s̃ ∈ ( {s_1} × ... × {s_{i−1}} × S_i × {s_{i+1}} × ... × {s_n} ) ∩ S(N) }.

The following theorem is a generalization of Theorem 2.4.1. It is also due to Nash.

Theorem 3.1.1. (Nash, general version) Given an N-player game as above, suppose that S(N) is convex and compact and that for any i ∈ {1, ..., n} the function L_i(s_1, ..., s_{i−1}, · , s_{i+1}, ..., s_n), considered as a function of the i-th variable with s_1, ..., s_{i−1}, s_{i+1}, ..., s_n fixed but arbitrary, is convex and continuous. Then there exists an NCE in G_n.

We have to build up some general concepts before we go into the proof; it will follow later. One should mention, though, that we do not assume that the game in question is the extended version of a game with only finitely many strategies, and also the biloss operator does not have to be the bilinearly extended version of a biloss operator for such a game.

3.2 Correspondences

The concept of correspondences is a generalization of the concept of functions. In the context of game theory it can be used on the one hand to define so called generalized games, and on the other hand it turns out to be very useful in many proofs. For an arbitrary set M we denote by P(M) the power set of M; this is the set which contains as elements the subsets of M.

Definition 3.2.1. Let X, Y be sets. A map γ : X → P(Y) is called a correspondence. We write γ : X ↠ Y. We denote by

    Gr(γ) := {(x, y) ∈ X × Y : y ∈ γ(x)} ⊂ X × Y

the graph of γ.

The following example shows how maps can be considered as correspondences, and that the definition above really is a generalization of the concept of maps.

Example 3.2.1. Any map f : X → Y can be considered as the correspondence f̃ : X ↠ Y where f̃(x) = {f(x)} ∈ P(Y). In general the inverse f^{−1} of f as a map is not defined. However, it makes sense to speak of the inverse correspondence f^{−1} : Y ↠ X, where y ∈ Y maps to the preimage f^{−1}({y}) ∈ P(X).

Definition 3.2.2. Let γ : X ↠ Y be a correspondence, E ⊂ Y and F ⊂ X. The image of F under γ is defined by

    γ(F) = ⋃_{x ∈ F} γ(x).

The upper inverse of E under γ is defined by

    γ^+[E] = {x ∈ X : γ(x) ⊂ E}.

The lower inverse of E under γ is defined by

    γ^−[E] = {x ∈ X : γ(x) ∩ E ≠ ∅}.

Furthermore, for y ∈ Y we set γ^{−1}(y) = {x ∈ X | y ∈ γ(x)} = γ^−[{y}].

It is easy to see that when γ has nonempty values one always has γ^+[E] ⊂ γ^−[E]; in general, though, there is no clear relation between the upper and the lower inverse unless the correspondence is given by a map. Then we have:

Example 3.2.2. Assume the correspondence f̃ : X ↠ Y is given by a map as in Example 3.2.1. Then

    f̃^+[E] = {x ∈ X : {f(x)} ⊂ E} = {x ∈ X : f(x) ∈ E} = f^{−1}(E)
    f̃^−[E] = {x ∈ X : {f(x)} ∩ E ≠ ∅} = {x ∈ X : f(x) ∈ E} = f^{−1}(E).

This means that for correspondences which are actually maps, upper and lower inverse coincide. For general correspondences this is however not the case.
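On finite sets the upper and lower inverse are plain set computations, so Example 3.2.2 and the remark before it can be checked by machine. The following sketch is an illustration (the representation of a correspondence as a dict of sets, and all names, are ours): it compares the two inverses for a correspondence coming from a map and for a genuinely set-valued one.

```python
def upper_inverse(gamma, E):
    # gamma^+[E] = {x : gamma(x) is a subset of E}
    return {x for x, vals in gamma.items() if vals <= E}

def lower_inverse(gamma, E):
    # gamma^-[E] = {x : gamma(x) meets E}
    return {x for x, vals in gamma.items() if vals & E}

# the correspondence induced by the map f(x) = x mod 2 on {0, 1, 2, 3}
f_tilde = {x: {x % 2} for x in range(4)}
print(upper_inverse(f_tilde, {0}) == lower_inverse(f_tilde, {0}))  # True

# a genuinely set-valued correspondence: the two inverses differ
gamma = {0: {0, 1}, 1: {1}}
print(upper_inverse(gamma, {1}), lower_inverse(gamma, {1}))  # {1} {0, 1}
```

For the map both inverses equal the preimage f^{−1}(E); for γ they differ because γ(0) meets {1} without being contained in it.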

Knowing that correspondences are generalized functions, we would like to generalize the definition of continuity to correspondences. We assume from now on that X and Y are topological spaces.

Definition 3.2.3. Let γ : X ↠ Y be a correspondence. Then γ is called upper hemi continuous, or short uhc, at x ∈ X if

    x ∈ γ^+[V] for V ⊂ Y open ⇒ ∃ open neighborhood U ⊂ X of x s.t. U ⊂ γ^+[V].

γ is called lower hemi continuous, or short lhc, at x ∈ X if

    x ∈ γ^−[V] for V ⊂ Y open ⇒ ∃ open neighborhood U ⊂ X of x s.t. U ⊂ γ^−[V].

γ is called uhc on X if γ is uhc at x for all x ∈ X, and lhc on X if γ is lhc at x for all x ∈ X.

As one can see directly from the definition, γ is uhc iff V ⊂ Y open ⇒ γ^+[V] ⊂ X open, and lhc iff V ⊂ Y open ⇒ γ^−[V] ⊂ X open. Example 3.2.1 now says that a map f is continuous ( as a map ) if and only if it is uhc ( considered as a correspondence ), and this is the case if and only if it is lhc ( considered as a correspondence ).

Definition 3.2.4. A correspondence γ : X ↠ Y is called continuous if it is uhc and lhc on X.

Definition 3.2.5. Let γ : X ↠ Y be a correspondence. Then γ is called closed at the point x ∈ X if

    x_n → x, y_n ∈ γ(x_n) ∀n and y_n → y  ⇒  y ∈ γ(x).

A correspondence is said to be closed if it is closed at every point x ∈ X. This is precisely the case if the graph Gr(γ) is closed as a subset of X × Y. γ is called open if Gr(γ) ⊂ X × Y is an open subset.

Definition 3.2.6. Let γ : X ↠ Y be a correspondence. We say γ has open ( closed ) sections if for each x ∈ X the set γ(x) is open ( closed ) in Y and for each y ∈ Y the set γ^−[{y}] is open ( closed ) in X.

Proposition 3.2.1. Let X ⊂ R^m, Y ⊂ R^k and γ : X ↠ Y a correspondence.

1. If γ is uhc and γ(x) is a closed subset of Y for all x ∈ X, then γ is closed.

2. If Y is compact and γ is closed, then γ is uhc.

3. If γ is open, then γ is lhc.

4. If |γ(x)| = 1 ∀x ( so that γ can actually be considered as a map ) and γ is uhc at x, then γ is continuous at x.

5. If γ has open lower sections ( i.e. γ^−[{y}] open in X ∀y ∈ Y ), then γ is lhc.

Proof. 1. We have to show that Gr(γ) ⊂ X × Y is closed, i.e. that its complement is open. Assume (x, y) ∉ Gr(γ), i.e. y ∉ γ(x). Since γ(x) is closed, and hence its complement open, there exists a closed neighborhood U of y s.t. U ∩ γ(x) = ∅. Then V = U^c is an open neighborhood of γ(x). Since γ is uhc and x ∈ γ^+[V], there exists an open neighborhood W of x s.t. W ⊂ γ^+[V]. We have γ(w) ⊂ V ∀w ∈ W. This implies that

    Gr(γ) ∩ (W × Y) ⊂ W × V = W × U^c

and hence Gr(γ) ∩ (W × U) = ∅. Since y has to be in the interior of U ( otherwise U wouldn't be a neighborhood of y ), we have that W × U° is an open neighborhood of (x, y) in Gr(γ)^c, which shows that Gr(γ)^c is open.

2. Suppose γ were not uhc. Then there would exist x ∈ X and an open neighborhood V of γ(x) s.t. for every neighborhood U of x we have U ⊄ γ^+[V], i.e. there exists z ∈ U s.t. γ(z) ⊄ V. By making U smaller and smaller we can find a sequence z_n → x and y_n ∈ γ(z_n) s.t. y_n ∉ V, i.e. y_n ∈ V^c. Since Y is compact, (y_n) has a convergent subsequence, and w.l.o.g. (y_n) itself is convergent. We denote by y = lim_n y_n its limit. Since V^c is closed, we must have y ∈ V^c. From the closedness of γ, however, it follows that y ∈ γ(x) ⊂ V, which is clearly a contradiction.

3. Exercise!

4. Exercise!

5. Exercise!

Proposition 3.2.2. (Sequential Characterization of Hemi Continuity) Let X ⊂ R^m, Y ⊂ R^k and γ : X ↠ Y be a correspondence.

1. Assume that γ(x) is compact for all x ∈ X. Then γ is uhc ⇔ for every sequence x_n → x and y_n ∈ γ(x_n) there exists a convergent subsequence y_{n_k} → y with y ∈ γ(x).

2. γ is lhc ⇔ x_n → x and y ∈ γ(x) imply that there exists a sequence y_n ∈ γ(x_n) with y_n → y.

Proof. 1. "⇒": Assume x_n → x and y_n ∈ γ(x_n). Since γ(x) is compact it has a bounded neighborhood V. Since γ is uhc at x there exists a neighborhood U of x s.t. γ(U) ⊂ V. Since x_n → x there exists n_0 ∈ N s.t. x_n ∈ U for all n ≥ n_0. Then, since y_n ∈ γ(x_n) ⊂ γ(U) ⊂ V for all n ≥ n_0 and V is bounded, (y_n) has a convergent subsequence y_{n_k} → y. Clearly y lies in the closure of V. By making V smaller and smaller ( for example one can take V_ε := ⋃_{z ∈ γ(x)} B_ε(z) and let ε go to zero ) we see that y lies in every neighborhood of γ(x). Since γ(x) is compact, hence also closed, we must have y ∈ γ(x).

"⇐": Suppose γ is not uhc. Then there exist x ∈ X and a neighborhood V of γ(x) s.t. for every open neighborhood U of x we have U ⊄ γ^+[V]. Making U smaller and smaller we get a sequence x_n → x s.t. γ(x_n) ⊄ V. By choosing y_n ∈ γ(x_n) ∩ V^c we get a sequence which never enters V. Since V is an open neighborhood of the compact set γ(x), such a sequence cannot have a subsequence converging to a limit y ∈ γ(x). This, however, is a contradiction to the sequential condition on the right side of 1.

2. Exercise!
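The sequential characterization turns hemi continuity into something one can probe numerically. A standard example (an illustration of ours, not taken from the notes) is γ(x) = {0} for x ≠ 0 and γ(0) = [−1, 1]: this γ is uhc at 0 but not lhc there, because for x_n = 1/n → 0 and the target y = 1 ∈ γ(0), every admissible sequence y_n ∈ γ(x_n) is identically 0 and cannot converge to 1. The sketch checks both directions on this γ, representing a value set by its interval endpoints.

```python
def gamma(x):
    # uhc but not lhc at 0: a set-valued "spike" at the origin
    return [-1.0, 1.0] if x == 0 else [0.0, 0.0]  # interval as [lo, hi]

def in_gamma(x, y):
    lo, hi = gamma(x)
    return lo <= y <= hi

xs = [1.0 / n for n in range(1, 1001)]  # x_n -> 0

# uhc direction: every y_n in gamma(x_n) equals 0, and its limit 0
# does lie in gamma(0), consistent with Proposition 3.2.2 (1)
assert in_gamma(0, 0.0)

# lhc fails: y = 1 lies in gamma(0), but no sequence y_n in gamma(x_n)
# can approach it, since gamma(x_n) = {0} for every n
best_gap = min(abs(1.0 - y) for x in xs for y in gamma(x))
print(best_gap)  # 1.0: the target value 1 is never approached
```

So the equivalence in part 2 correctly diagnoses the failure of lower hemi continuity at 0, while part 1 is satisfied.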

Definition 3.2.7. A convex set Y is a polytope if it is the convex hull of a finite set.

Example 3.2.3. Simplices are polytopes, but not the only ones.

Proposition 3.2.3. Let X ⊂ R^m and Y ⊂ R^k, where Y is a polytope, and let γ : X ↠ Y be a correspondence. If γ(x) is convex for all x ∈ X and γ has open sections, then γ has an open graph.

Proof. Let (x, y) ∈ Gr(γ), i.e. y ∈ γ(x). Since γ has open sections and Y is a polytope, there is a neighborhood U of y contained in γ(x) s.t. U is itself the interior of a polytope. Assume more precisely that U = ( co(y_1, ..., y_n) )°. Since γ has open sections, the sets V_i := γ^−[{y_i}] are open for all i. Clearly for all z ∈ V_i we have y_i ∈ γ(z), and furthermore x ∈ V_i for all i. The set V := ⋂_{i=1}^{n} V_i is nonempty and open and furthermore contains x. Hence W := V × U is open in X × Y. Let (x′, y′) ∈ W. Then y_i ∈ γ(x′) ∀i. Since γ(x′) is convex, we have

    y′ ∈ U = ( co(y_1, ..., y_n) )° ⊂ γ(x′)  ⇒  (x′, y′) ∈ Gr(γ).

Therefore W ⊂ Gr(γ), and W is an open neighborhood of (x, y) in Gr(γ).

Definition 3.2.8. Let γ : X ↠ Y be a correspondence. Then x ∈ X is called a fixed point of γ if x ∈ γ(x).

Proposition 3.2.4.

1. Let γ : X ↠ Y be uhc s.t. γ(x) is compact ∀x ∈ X, and let K be a compact subset of X. Then γ(K) is compact.

2. Let X ⊂ R^m and γ : X ↠ R^m be uhc with γ(x) closed ∀x. Then the set of all fixed points of γ is closed.

3. Let X ⊂ R^m and γ, µ : X ↠ R^m be uhc with γ(x), µ(x) closed ∀x. Then

    {x ∈ X | γ(x) ∩ µ(x) ≠ ∅} is closed in X.

4. Let X ⊂ R^m and γ : X ↠ R^m be lhc ( resp. uhc ). Then

    {x ∈ X | γ(x) ≠ ∅} is open ( resp. closed ) in X.

Proof. Exercise!

Definition 3.2.9. (Closure of a Correspondence) Let γ : X ↠ Y be a correspondence. Then

    γ̄ : X ↠ Y,  x ↦ cl( γ(x) )

is called the closure of γ.

Proposition 3.2.5. Let X ⊂ R^m, Y ⊂ R^k.

1. γ : X ↠ Y uhc at x ⇒ γ̄ : X ↠ Y uhc at x

2. γ : X ↠ Y lhc at x ⇔ γ̄ : X ↠ Y lhc at x

Proof. Exercise!

Definition 3.2.10. (Intersection of Correspondences) Let γ, µ : X ↠ Y be correspondences. Then their intersection is defined as

    γ ∩ µ : X ↠ Y,  x ↦ γ(x) ∩ µ(x).

Proposition 3.2.6. Let X ⊂ R^m, Y ⊂ R^k and γ, µ : X ↠ Y be correspondences. Suppose γ(x) ∩ µ(x) ≠ ∅ ∀x ∈ X.

1. If γ, µ are uhc at x and γ(z), µ(z) are closed for all z ∈ X, then γ ∩ µ is uhc at x.

2. If µ is closed at x, γ is uhc at x and γ(x) is compact, then γ ∩ µ is uhc at x.

3. If γ is lhc at x and µ has open graph, then γ ∩ µ is lhc at x.

Proof. Let U be an open neighborhood of γ(x) ∩ µ(x) and define C := γ(x) ∩ U^c.

1. In this case C is closed and µ(x) ∩ C = ∅. Therefore there exist open sets V_1, V_2 s.t.

    µ(x) ⊂ V_1,  C ⊂ V_2,  V_1 ∩ V_2 = ∅.

Since µ is uhc at x and x ∈ µ^+[V_1], there exists a neighborhood W_1 of x with

    µ(W_1) ⊂ V_1 ⊂ V_2^c.

We have that

    γ(x) = ( γ(x) ∩ U ) ∪ ( γ(x) ∩ U^c ) ⊂ U ∪ C ⊂ U ∪ V_2.

Since U ∪ V_2 is open, it follows from the upper hemi continuity of γ that there exists an open neighborhood W_2 of x s.t.

    γ(W_2) ⊂ U ∪ V_2.

We set W = W_1 ∩ W_2. Then W is a neighborhood of x, and for all z ∈ W we have

    γ(z) ∩ µ(z) ⊂ ( U ∪ V_2 ) ∩ V_2^c = U ∩ V_2^c ⊂ U.

Hence γ ∩ µ is uhc at x.

2. In this case C is compact and µ(x) ∩ C = ∅. For y ∉ µ(x) there cannot exist a sequence x_n → x, y_n ∈ µ(x_n) with y_n → y, because of the closedness of µ. This implies that there exist neighborhoods U_y of y and W_y of x s.t. µ(W_y) ⊂ U_y^c. Since C is compact we can find U_{y_1}, ..., U_{y_n}, W_{y_1}, ..., W_{y_n} as above such that C ⊂ V_2 := U_{y_1} ∪ ... ∪ U_{y_n}. We set W_1 := W_{y_1} ∩ ... ∩ W_{y_n}. Then µ(W_1) ⊂ V_2^c. Now we choose W_2 for x and γ as in 1. and proceed similarly as in 1.

3. Let y ∈ (γ ∩ µ)(x) ∩ U. Since µ has an open graph, there is a neighborhood W × V of (x, y) which is contained in Gr(µ). Since γ is lhc at x, we find that γ^−[U ∩ V] ∩ W is a neighborhood of x in X, and if z ∈ γ^−[U ∩ V] ∩ W then (γ ∩ µ)(z) ∩ U ≠ ∅. This, however, implies that γ ∩ µ is lhc at x.

As with ordinary maps, one can compose correspondences.

Definition 3.2.11. (Composition of Correspondences) Let µ : X ↠ Y and γ : Y ↠ Z be correspondences. Define

    γ ∘ µ : X ↠ Z,  x ↦ ⋃_{y ∈ µ(x)} γ(y).

γ ∘ µ is called the composition of γ and µ.

Proposition 3.2.7. Let γ, µ be as above. Then

1. γ, µ uhc ⇒ γ ∘ µ uhc

2. γ, µ lhc ⇒ γ ∘ µ lhc

Proof. Exercise!

Definition 3.2.12. (Products of Correspondences) Let γ_i : X ↠ Y_i for i = 1, ..., k be correspondences. Then the correspondence

    ∏_i γ_i : X ↠ ∏_i Y_i,  x ↦ ∏_i γ_i(x)

is called the product of the γ_i.

Proposition 3.2.8. Assume the γ_i are correspondences as above.

1. γ_i uhc at x and γ_i(z) compact ∀z ∈ X and ∀i ⇒ ∏_i γ_i uhc at x

2. γ_i lhc at x and γ_i(z) compact ∀z ∈ X and ∀i ⇒ ∏_i γ_i lhc at x

3. γ_i closed at x ∀i ⇒ ∏_i γ_i closed at x

4. γ_i has open graph ∀i ⇒ ∏_i γ_i has open graph

Proof. 1. and 2. follow directly from Proposition 3.2.2 ( sequential characterization of hemi continuity ) and the fact that a sequence in a product space converges if and only if all its component sequences converge. 3. and 4. are clear.

Definition 3.2.13. Let Y_i ⊂ R^k for i = 1, .., k and let γ_i : X →→ Y_i be correspondences. Then we define the sum

∑_i γ_i : X →→ ∑_i Y_i := {∑_i y_i : y_i ∈ Y_i}
x → ∑_i γ_i(x).

Proposition 3.2.9. Let γ_i : X →→ Y_i be as above.

1. γ_i uhc and γ_i compact valued ∀ i ⇒ ∑_i γ_i uhc and compact valued.
2. γ_i lhc ∀ i ⇒ ∑_i γ_i lhc
3. γ_i has open graph ∀ i ⇒ ∑_i γ_i has open graph

Proof. Follows again from Proposition 3.2.2.

Definition 3.2.14. (Convex Hull of a Correspondence) Let γ : X →→ Y be a correspondence and Y be convex. Then we define the convex hull of γ as

co(γ) : X →→ Y
x → co(γ(x)).

Proposition 3.2.10. Let γ : X →→ Y be a correspondence and Y be convex. Then

1. γ uhc at x and compact valued ⇒ co(γ) is uhc at x
2. γ lhc at x ⇒ co(γ) is lhc at x
3. γ has open graph ⇒ co(γ) has open graph.

Proof. Exercise!

Proposition 3.2.11. Let X ⊂ R^m and let Y ⊂ R^k be a polytope. If γ : X →→ Y has open sections, then co(γ) has open graph.

3.3 Abstract Economies and the Walras Equilibrium

In this chapter we build up a basic mathematical model for an economy. As mentioned before, such models can sometimes fail to be exact replicas of reality. However, they are good for gaining theoretical insight into how reality works.

We think of an economy with m commodities. Commodities can be products like oil, water and bread, but also services like teaching, health care etc. We let

R^m := "Commodity Space"

be the space which models our commodities. A commodity vector is a vector in this space. Such a vector (x_1, .., x_m) stands for x_1 units of the first commodity, x_2 units of the second commodity etc. In our economy commodity vectors are exchanged (traded), manufactured and consumed in the course of economic activity. A price vector

p = (p_1, .., p_m)^T

associates a price with each commodity. More precisely, p_i denotes the price of one unit of the i-th commodity. We assume that all prices are nonnegative, i.e. p_i ≥ 0 for all i. The price of the commodity vector x = (x_1, .., x_m) then computes as

p(x) = ∑_{i=1}^m x_i p_i = < p, x >.

One can interpret p as an element of the dual of the commodity space L(R^m, R). In the situation where the commodity space is finite dimensional (as in this course) this is not of much use; it is very helpful though if one studies infinite economies where the commodity space is an infinite-dimensional Hilbert space or even more general.

As participants in our economic model we have consumers, suppliers (also called producers) and sometimes auctioneers. Auctioneers determine the prices; one can think of a higher power like a government or a trade organization, but sometimes they are just an artificial construct, in the same way as in games with a random element, where one considers nature as an additional player. We will see later how this can be realized. The ultimate purpose of the economic organization is to provide commodity vectors for final consumption by the consumers. It is reasonable to assume that not every consumer can consume every commodity and not every supplier can produce every commodity. For this reason we model consumption sets resp. production sets for each individual consumer resp. supplier as

X_i ⊂ R^m : consumption set for consumer "i"
Y_j ⊂ R^m : production set for supplier "j"

where we assume that we have n consumers and k suppliers in our economy and i ∈ {1, .., n} as well as j ∈ {1, .., k}. Here X_i stands for the commodity vectors consumer "i" can consume and Y_j for the commodity vectors supplier "j" can produce.

We assume that each consumer has an initial endowment, that is, a commodity vector w_i ∈ R^m he owns at the initial time. Furthermore we assume that consumers have to buy their consumption at market prices. Each consumer has an income at some rate, which we denote by M_i. We assume that incomes are nonnegative, i.e. M_i ≥ 0, and that a consumer cannot purchase more than his income, that is, he cannot take any credit. This determines the budget set for consumer i:

{x ∈ X_i : p(x) ≤ M_i}

These budget sets depend on the two parameters price p and income M_i and hence can be interpreted as correspondences

b_i : R^m_+ × R_+ →→ X_i
(p, M_i) → {x ∈ X_i : p(x) ≤ M_i}
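The budget correspondence can be made concrete with a small numerical sketch. The grid, prices and income below are hypothetical choices for illustration only:

```python
# Sketch: the budget correspondence b_i(p, M_i) = {x in X_i : p(x) <= M_i}
# evaluated on a discretized consumption set X_i (the grid is a
# hypothetical stand-in for X_i, chosen only for illustration).
from itertools import product

def budget_set(p, M, grid):
    """Grid points of X_i affordable at prices p with income M."""
    return [x for x in grid if sum(pi * xi for pi, xi in zip(p, x)) <= M]

grid = list(product(range(4), repeat=2))   # X_i discretized as {0,1,2,3}^2
B = budget_set(p=(1.0, 2.0), M=4.0, grid=grid)
assert (2, 1) in B and (3, 1) not in B     # (2,1) costs 4 <= 4, (3,1) costs 5
```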

We now come to a very important point in game theory and mathematical economics, so-called utility. Utility stands for the personal gain some individual derives from the outcome of some event, be it a game, an investment in the stock market or something similar. Often the approach is to model utility in numbers. In these approaches one usually has a utility function u which associates to each outcome a real number, and outcome 1 is preferable to outcome 2 if it has a higher utility. The problem with this approach, though, is that it is often very difficult to measure utility in numbers. What does it mean that outcome 2 has half the utility of outcome 1? Is two times outcome 2 as good as one time outcome 1? However, given two outcomes one can always decide which of the two one prefers. This leads one to model utility as a correspondence, as we do here. We assume that each consumer has a utility correspondence

U_i : X_i →→ X_i
x → {y ∈ X_i : consumer "i" prefers y to x}

Here the word "prefers" is meant in the strict sense. In the case where one indeed has a utility function u_i : X_i → R, one gets the utility correspondence as

U_i(x) = {y ∈ X_i : u_i(y) > u_i(x)}.

In our economy each consumer wants to maximize his utility, i.e. find x ∈ b_i(p, M_i) such that

U_i(x) ∩ b_i(p, M_i) = ∅.

Such an x is called a demand vector and is a solution to the consumer's problem given prices p and income M_i. Since we interpret p and M_i as parameters, this gives us another correspondence, the so-called demand correspondence for consumer "i"

d_i : R^m_+ × R_+ →→ X_i
(p, M_i) → { demand vectors for consumer "i" given prices p and income M_i }.
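When a utility function u_i is available, a demand vector can be computed by maximizing u_i over the budget set. The following sketch does this on a discretized consumption set; the utility u(x) = x_1 x_2 and the grid are illustrative assumptions, not part of the notes:

```python
# Sketch: a demand correspondence d_i(p, M_i) obtained by maximizing a
# utility function u_i over the (discretized) budget set.  The utility
# u(x) = x1 * x2 is a hypothetical choice for illustration.
def demand(p, M, grid, u):
    budget = [x for x in grid if sum(pi * xi for pi, xi in zip(p, x)) <= M]
    best = max(u(x) for x in budget)
    # all maximizers: the demand "set" may contain several vectors
    return [x for x in budget if u(x) == best]

grid = [(a / 4, b / 4) for a in range(9) for b in range(9)]  # X_i = [0,2]^2 grid
d = demand(p=(1.0, 1.0), M=2.0, grid=grid, u=lambda x: x[0] * x[1])
assert d == [(1.0, 1.0)]   # with u(x) = x1*x2 the income is split evenly
```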

A supply vector¹ y ∈ Y_j for supplier "j" specifies the quantities of each commodity supplied (positive entries) and the amounts of each commodity used as inputs (negative entries). The profit or net income associated with the supply vector y = (y_1, .., y_m) given prices p is

p(y) = ∑_{i=1}^m p_i y_i = < p, y >.

The set of profit maximizing supply vectors is called the supply set. It depends of course on the prices p and is therefore considered as the so-called supply correspondence

s_j : R^m_+ →→ Y_j
p → { profit maximizing supply vectors for supplier "j", given prices p }.

In the so-called Walras Economy, which we will later study in detail, it is assumed that the consumers share some part of the profit of the suppliers as their income.² Let α_i^j denote consumer "i"'s share of the profit of supplier "j". If the suppliers produce y_1, .., y_k and prices are p, then the budget set for consumer "i" has the form

{x ∈ X_i : p(x) ≤ p(w_i) + ∑_{j=1}^k α_i^j p(y_j)}

¹ Commodity vectors in the context of a supplier are also called supply vectors.
² This can be through wages, dividends etc.

The set

E(p) = { ∑_{i=1}^n x_i − ∑_{j=1}^k y_j : x_i ∈ d_i(p, p(w_i) + ∑_{j=1}^k α_i^j p(y_j)), y_j ∈ s_j(p) }

is called the excess demand set. Since it depends on p, it is natural to consider it as a correspondence, the so-called excess demand correspondence

E : R^m_+ →→ R^m
p → E(p).

It would be a very good thing for the economy if the zero vector belonged to E(p). In fact this means that there is a combination of demand and supply vectors which add up to zero in the way indicated above. This means that the suppliers produce exactly the amount of commodities the consumers want to consume and furthermore the suppliers make maximum profit. A price vector p which satisfies 0 ∈ E(p) is called a Walrasian equilibrium. The question of course is: does such an equilibrium always exist? We will answer this question later. As in the case of the non-cooperative equilibrium for two-player games in chapter 2, this has to do with fixed points, but this time fixed points of correspondences. Let us briefly illustrate why. Instead of the excess demand correspondence E one can consider the correspondence

Ẽ : R^m_+ →→ R^m
p → p + E(p).

Then p is a Walrasian equilibrium if and only if p ∈ Ẽ(p), which is the case if and only if p is a fixed point of the correspondence Ẽ. There is a slightly more general definition of a Walrasian equilibrium, the so-called Walrasian free disposal equilibrium. We will later give the precise definition and a result about when such an equilibrium exists. However, before we can prove this result we need a result for correspondences which corresponds to the Brouwer fixed point theorem in the case of maps. This needs preparation and some further studies of correspondences, which will follow in the next two sections.
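The search for a price p with 0 ∈ E(p) can be illustrated in the simplest possible setting: a pure exchange economy with two goods, two consumers and no suppliers. The Cobb-Douglas demand functions and endowments below are hypothetical and serve only to show the fixed-point character of the problem:

```python
# Sketch: locating a zero of an excess demand function for a 2-good pure
# exchange economy (no suppliers).  Cobb-Douglas consumers with share a
# and endowment w are an illustrative assumption, not from the notes.
def excess_demand(p, consumers):
    # consumer = (a, w): demands x = (a*M/p1, (1-a)*M/p2) with income M = <p, w>
    z = [0.0, 0.0]
    for a, w in consumers:
        M = p[0] * w[0] + p[1] * w[1]
        z[0] += a * M / p[0] - w[0]
        z[1] += (1 - a) * M / p[1] - w[1]
    return z

consumers = [(0.5, (1.0, 0.0)), (0.5, (0.0, 1.0))]

# bisect on p1 with p1 + p2 = 1, using market 1's excess demand
lo, hi = 0.01, 0.99
for _ in range(60):
    mid = (lo + hi) / 2
    if excess_demand((mid, 1 - mid), consumers)[0] > 0:
        lo = mid          # good 1 over-demanded: raise its price
    else:
        hi = mid
p = ((lo + hi) / 2, 1 - (lo + hi) / 2)
z = excess_demand(p, consumers)
assert abs(z[0]) < 1e-6 and abs(z[1]) < 1e-6   # equilibrium near p = (1/2, 1/2)
```

Note that both markets clear at once, which reflects Walras' law: the value of excess demand is zero at every price vector.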

3.4 The Maximum Theorem for Correspondences

In the last section we learned about some correspondences which naturally occur in the framework of mathematical economics. To do analysis with those correspondences one has to know that they have certain analytic properties like upper hemi-continuity, lower hemi-continuity or open graphs. This section gives the theoretical background for this.

We studied for example the budget correspondence for consumer "i":

b_i : R^m_+ × R_+ →→ X_i
(p, M_i) → {x ∈ X_i : p(x) ≤ M_i}

The proof of the following proposition is easy and is left as an exercise:

Proposition 3.4.1. Let X_i ⊂ R^m be closed, convex and bounded from below. Then the budget correspondence b_i as defined above is uhc and furthermore, if there exists x ∈ X_i such that p(x) < M_i, then b_i is lhc at (p, M_i).

It is however more difficult to see that the demand correspondences

d_i : R^m_+ × R_+ →→ X_i
(p, M_i) → { demand vectors for consumer "i" given prices p and income M_i }

have similar analytic properties. The reason for this is that the demand correspondences are the result of an optimization problem (demand vectors are vectors in the budget set which have maximum utility). The following theorems will help us to overcome this problem.

Theorem 3.4.1. (Maximum Theorem 1) Let G ⊂ R^m, Y ⊂ R^k and γ : G →→ Y be a compact valued correspondence. Let f : Y → R be a continuous function. Define

µ : G →→ Y
x → {y ∈ γ(x) : y maximizes f on γ(x)}

and F : G → R with F(x) = f(y) for y ∈ µ(x). If γ is continuous at x, then µ is closed at x as well as uhc at x and F is continuous at x. Furthermore µ is compact valued.

Proof. Since γ is compact valued we have that µ(x) ≠ ∅ for all x. Furthermore µ(x) is closed and therefore compact (since it is a closed subset of a compact set). Let us first show that µ is closed at x. For this let x_n → x and y_n ∈ µ(x_n) with y_n → y. We have to show that y ∈ µ(x). Suppose y ∉ µ(x). Since γ is uhc and compact valued, it follows from Proposition 3.2.1 (first part) that γ is closed at x and therefore y ∈ γ(x). y ∉ µ(x) however implies that there exists z ∈ γ(x) s.t. f(z) > f(y). Since γ is also lhc at x, it follows from Proposition 3.2.2 (second part) that there exists a sequence z_n → z with z_n ∈ γ(x_n). Since y_n ∈ µ(x_n) we have f(z_n) ≤ f(y_n) for all n. Since f is continuous this implies that

f(z) = lim_n f(z_n) ≤ lim_n f(y_n) = f(y),

which is in contradiction to f(z) > f(y). So we must have y ∈ µ(x) and therefore µ is closed at x. Clearly

lim_n F(x_n) = lim_n f(y_n) = f(y) = F(x),

which shows that F is continuous at x. Now since µ = γ ∩ µ, Proposition 3.2.6 (second part) implies that µ is uhc at x.
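The content of the theorem can be observed numerically: for a continuous compact valued γ and a continuous f, the maximizer set µ(x) and the value F(x) behave well as x varies. The correspondence γ(x) = [0, x] and the function f(y) = y − y² below are hypothetical choices for illustration:

```python
# Sketch of Maximum Theorem 1: gamma(x) = [0, x] (continuous, compact
# valued) and f(y) = y - y**2.  The maximizer correspondence mu and the
# value function F are computed on a fine grid of each gamma(x).
def mu_and_F(x, n=10001):
    ys = [x * k / (n - 1) for k in range(n)]   # grid approximation of [0, x]
    f = lambda y: y - y * y
    F = max(f(y) for y in ys)
    return [y for y in ys if f(y) == F], F

# For x < 1/2 the maximum sits at the boundary y = x; for x >= 1/2 at y = 1/2.
m, F = mu_and_F(0.25)
assert m == [0.25] and abs(F - 0.1875) < 1e-12
m, F = mu_and_F(1.0)
assert abs(m[0] - 0.5) < 1e-3 and abs(F - 0.25) < 1e-6
```

Both the maximizer and the value move continuously with x, as the theorem predicts.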

This result corresponds to the case where the utility correspondence is in fact induced by a utility function. The following theorem corresponds to the case where the utility correspondence is a genuine correspondence.

Theorem 3.4.2. (Maximum Theorem 2) Let G ⊂ R^m, Y ⊂ R^k and γ : G →→ Y be a compact valued correspondence. Furthermore let U : Y × G →→ Y have open graph. Define

µ : G →→ Y
x → {y ∈ γ(x) : U(y, x) ∩ γ(x) = ∅}.

If γ is closed and lhc at x, then µ is closed at x. If in addition γ is uhc at x, then µ is uhc at x. Furthermore µ is compact valued (possibly empty).

Proof. Let x_n → x, y_n ∈ µ(x_n) with y_n → y. In order to show that µ is closed at x we have to show that y ∈ µ(x). Assume this were not the case, i.e. y ∉ µ(x). Since y_n ∈ µ(x_n) ⊂ γ(x_n) and γ is closed at x, we have that y ∈ γ(x). However, since y ∉ µ(x), there must exist z ∈ U(y, x) ∩ γ(x). Since γ is lhc at x, it follows from Proposition 3.2.2 that there exists a sequence z_n → z s.t. z_n ∈ γ(x_n). Then lim_n (y_n, x_n, z_n) = (y, x, z) ∈ Gr(U) (since z ∈ U(y, x)). Since Gr(U) is open there must exist n_0 ∈ N s.t. (y_{n_0}, x_{n_0}, z_{n_0}) ∈ Gr(U). This however means that

z_{n_0} ∈ U(y_{n_0}, x_{n_0}) ∩ γ(x_{n_0})

and therefore y_{n_0} ∉ µ(x_{n_0}), which is of course a contradiction. Therefore y ∈ µ(x) and µ is closed at x. If in addition γ is also uhc at x, then as in the previous proof Proposition 3.2.6 (second part) implies that µ = γ ∩ µ is uhc at x. Finally we show that µ is compact valued. Since µ(x̃) ⊂ γ(x̃) for all x̃ and γ is compact valued, it is enough to show that µ(x̃) is closed in γ(x̃) (in the relative topology). Equivalently we can show that γ(x̃) \ µ(x̃) is open in γ(x̃). Assume this were not the case. Then there would exist ỹ ∈ γ(x̃) with z ∈ U(ỹ, x̃) ∩ γ(x̃) (i.e. ỹ ∉ µ(x̃)) as well as a sequence y_n ∈ µ(x̃) s.t. lim_n y_n = ỹ and

U(y_n, x̃) ∩ γ(x̃) = ∅ ∀ n.

Clearly lim_n (y_n, x̃, z) = (ỹ, x̃, z) ∈ Gr(U). Since Gr(U) is open there must exist n_0 ∈ N s.t. (y_{n_0}, x̃, z) ∈ Gr(U), which implies that

z ∈ U(y_{n_0}, x̃) ∩ γ(x̃).

This however contradicts the previous equation, and so µ(x̃) must be closed in γ(x̃).

Proposition 3.4.2. Let G ⊂ R^m, Y ⊂ R^k and let U : Y × G →→ Y satisfy the following condition:

z ∈ U(y, x) ⇒ ∃ z′ ∈ U(y, x) s.t. (y, x) ∈ (U^−[{z′}])°.

We define µ : G →→ Y via µ(x) = {y ∈ Y : U(y, x) = ∅}. Then µ is closed.

Proof. Let x_n → x, y_n ∈ µ(x_n) and y_n → y. Suppose y ∉ µ(x). Then there exists z ∈ U(y, x) and by the hypothesis there exists z′ s.t.

(y, x) ∈ (U^−[{z′}])° = {(ỹ, x̃) : U(ỹ, x̃) ∩ {z′} ≠ ∅, i.e. z′ ∈ U(ỹ, x̃)}°.

Since lim_n (y_n, x_n) = (y, x), there must exist n_0 ∈ N s.t. (y_{n_0}, x_{n_0}) ∈ (U^−[{z′}])°, which implies that z′ ∈ U(y_{n_0}, x_{n_0}). This however is a contradiction to y_{n_0} ∈ µ(x_{n_0}).

Theorem 3.4.3. Let X_i ⊂ R^{k_i} for i = 1, .., n be compact and set X = ∏_{i=1}^n X_i. Let G ⊂ R^k and for each i let S_i : X × G →→ X_i be a continuous correspondence with compact values. Furthermore let U_i : X × G →→ X_i be correspondences with open graph. Define

E : G →→ X
g → {x = (x_1, .., x_n) ∈ X : for each i, x_i ∈ S_i(x, g) and U_i(x, g) ∩ S_i(x, g) = ∅}.

Then E has compact values, is closed and uhc.

Proof. Let us first show that E is closed. This is equivalent to showing that E has closed graph. Suppose (g, x) ∉ Gr(E), i.e. x ∉ E(g). Then for some i either x_i ∉ S_i(x, g) or U_i(x, g) ∩ S_i(x, g) ≠ ∅. By Proposition 3.2.1 (first part) S_i is closed. So in the first case there exists a neighborhood V in X × G × X_i of (x, g, x_i) ∉ Gr(S_i) which is disjoint from Gr(S_i). Then Ṽ = {(g, x) : (x, g, x_i) ∈ V} is an open neighborhood of (g, x) in G × X which cannot intersect Gr(E), since for all (g, x) ∈ Ṽ we have x_i ∉ S_i(x, g). In the second case there must exist i and z_i ∈ U_i(x, g) ∩ S_i(x, g). Since U_i has open graph, there exist neighborhoods V of z_i and W_1 of (x, g) s.t. W_1 × V ⊂ Gr(U_i). Since S_i is also lhc, there exists a neighborhood W_2 of (x, g) s.t. (x̃, g̃) ∈ W_2 implies that S_i(x̃, g̃) ∩ V ≠ ∅.³ Then W_1 ∩ W_2 is a neighborhood of (x, g) disjoint from Gr(E), since for all (x̃, g̃) ∈ W_1 ∩ W_2 we have U_i(x̃, g̃) ∩ S_i(x̃, g̃) ≠ ∅, for the simple reason that U_i(x̃, g̃) contains V. Since in both cases we can find neighborhoods of (g, x) which are still contained in Gr(E)^c, we have that Gr(E)^c is open and hence Gr(E) is closed. It now follows from the compactness of X and the closedness of E as well as Proposition 3.2.1 (second part) that E is uhc. That it has compact values is clear.

³ V is an open neighborhood of z_i; now apply the definition of lhc.

Proposition 3.4.3. Let K ⊂ R^m be compact, G ⊂ R^k and let γ : K × G →→ K be a closed correspondence. Define

F : G →→ K
g → {x ∈ K : x ∈ γ(x, g)}.

Then F : G →→ K has compact values, is closed and uhc.

Proof. By Proposition 3.2.1 (second part) it is enough to show that F is closed, but this follows immediately from the closedness of γ.

Proposition 3.4.4. Let K ⊂ R^m be compact, G ⊂ R^k and let γ : K × G →→ R^m be uhc with compact values. Define

Z : G →→ K
g → {x ∈ K : 0 ∈ γ(x, g)}.

Then Z has compact values, is closed and uhc.

Proof. Exercise!

3.5 Approximation of Correspondences

Lemma 3.5.1. Let γ : X →→ Y be an uhc correspondence with nonempty convex values. Furthermore let X ⊂ R^m be compact and Y ⊂ R^k be convex. For each δ > 0 define a correspondence

γ_δ : X →→ Y
x → co(∪_{z∈B_δ(x)} γ(z)).

Then for every ǫ > 0 there exists δ > 0 s.t.

Gr(γ_δ) ⊂ B_ǫ(Gr(γ)),

where B_ǫ(Gr(γ)) denotes the set of points in X × Y ⊂ R^m × R^k which have (Euclidean) distance from Gr(γ) less than ǫ.

Proof. Exercise!

In the proof of the next theorem we need a technique from topology called partition of unity. This technique is very helpful in general; however, we don't give a proof here.

Proposition 3.5.1. Let X be a topological space and (U_i)_{i∈I} an open covering of X, that is, each U_i is an open set and X = ∪_{i∈I} U_i. Then there exists a locally finite partition of unity subordinated to this covering, that is, a family of functions f_i : X → [0, ∞) s.t. for all x ∈ X we have f_i(x) > 0 only for finitely many i ∈ I, ∑_{i∈I} f_i ≡ 1 and supp(f_i) = {x : f_i(x) > 0} ⊂ U_i.

Theorem 3.5.1. (von Neumann Approximation Theorem) Let γ : X →→ Y be uhc with nonempty, compact and convex values. Then for any ǫ > 0 there is a continuous map f : X → Y s.t.

Gr(f) ⊂ B_ǫ(Gr(γ)).

Proof. Let ǫ > 0. By Lemma 3.5.1 there exists δ > 0 s.t. Gr(γ_δ) ⊂ B_ǫ(Gr(γ)). Since X is compact there exist x_1, .., x_n s.t. X ⊂ ∪_{i=1}^n B_δ(x_i). Choose y_i ∈ γ(x_i). Let f_1, .., f_n be a locally finite partition of unity subordinated to this covering. Then we have supp(f_i) ⊂ B_δ(x_i) and ∑_{i=1}^n f_i ≡ 1. We define the function f as follows:

f : X → Y
x → ∑_{i=1}^n f_i(x) y_i.

Clearly f is continuous. Since f_i(x) = 0 for all x ∉ B_δ(x_i), each f(x) is a convex linear combination of those y_i such that x ∈ B_δ(x_i). Since x ∈ B_δ(x_i) clearly implies x_i ∈ B_δ(x) and therefore y_i ∈ ∪_{z∈B_δ(x)} γ(z), we have

f(x) ∈ co({y_i : x_i ∈ B_δ(x)}) ⊂ co(∪_{z∈B_δ(x)} γ(z)) = γ_δ(x).

Hence for all x ∈ X we have (x, f(x)) ∈ Gr(γ_δ) ⊂ B_ǫ(Gr(γ)).
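The construction in the proof can be carried out explicitly in one dimension. In the sketch below, the jump correspondence and the hat-function partition of unity are illustrative choices; f(x) = ∑_i f_i(x) y_i is built exactly as in the proof:

```python
# Sketch of the approximation in Theorem 3.5.1 for the uhc correspondence
#   gamma(x) = {0} for x < 1/2,  [0,1] for x = 1/2,  {1} for x > 1/2
# on X = [0,1].  Hat functions on a delta-grid form the partition of unity,
# and f(x) = sum_i f_i(x) * y_i with y_i chosen from gamma(x_i).
def approx(delta):
    n = int(round(1 / delta))
    centers = [k * delta for k in range(n + 1)]
    ys = [0.0 if c < 0.5 else 1.0 for c in centers]   # y_i in gamma(x_i)
    def hat(c, x):                                    # supp(hat(c, .)) within B_delta(c)
        return max(0.0, 1.0 - abs(x - c) / delta)
    def f(x):
        w = [hat(c, x) for c in centers]
        s = sum(w)
        return sum(wi * yi for wi, yi in zip(w, ys)) / s
    return f

f = approx(0.0625)
assert f(0.2) == 0.0 and f(0.8) == 1.0   # away from the jump f matches gamma
assert 0.0 <= f(0.5) <= 1.0              # near the jump f interpolates within [0,1]
```

Shrinking delta pushes the graph of f into an arbitrarily small neighborhood of Gr(γ), as the theorem asserts.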

Definition 3.5.1. Let γ : X →→ Y be a correspondence. A selection of γ is a function f : X → Y s.t. Gr(f) ⊂ Gr(γ), i.e. f(x) ∈ γ(x) for all x ∈ X.

In the previous proof we constructed a continuous selection for the correspondence γ_δ (in fact for δ > 0 arbitrarily small). We will now show that under different assumptions we can do even better. From now on in this chapter all correspondences are assumed to be nonempty valued.

Theorem 3.5.2. (Browder) Let X ⊂ R^m and γ : X →→ R^k be convex valued s.t. for all y ∈ R^k the sets γ^{-1}(y) = {x ∈ X : y ∈ γ(x)} are open. Then there exists a continuous selection of γ.

Proof. Clearly X = ∪_{y∈R^k} γ^{-1}(y), so that (γ^{-1}(y))_{y∈R^k} is an open covering of X. Let f_y : X → [0, ∞) denote the maps belonging to a locally finite subordinated partition of unity, so ∑_{y∈R^k} f_y ≡ 1 and supp(f_y) ⊂ γ^{-1}(y). We define the map f as follows:

f : X → R^k
x → ∑_y f_y(x) y.

Then f is continuous and for each x ∈ X, f(x) is a convex combination of those y s.t. f_y(x) > 0, which can only hold if y ∈ γ(x). Since γ(x) is convex, this implies that f(x) ∈ γ(x).

We state the following proposition without proof (due to time constraints; it is not more difficult to prove than those before).

Proposition 3.5.2. Let X ⊂ R^m be compact and γ : X →→ R^k be lhc with closed and convex values. Then there exists a continuous selection of γ.

3.6 Fixed Point Theorems for Correspondences

One can interpret Brouwer's fixed point theorem as a special case of a fixed point theorem for correspondences, where the correspondence is in fact given by a map. That this is not the only case where fixed points of correspondences are guaranteed is shown in this section. The main fixed point theorem for correspondences is the Kakutani fixed point theorem. It will follow from the following theorem.

Theorem 3.6.1. Let K ⊂ R^m be compact, nonempty and convex and µ : K →→ K a correspondence. Suppose there is a closed correspondence γ : K →→ Y with nonempty, compact and convex values, where Y ⊂ R^k is also compact and convex. Furthermore assume that there exists a continuous map f : K × Y → K s.t. for all x ∈ K one has

µ(x) = {f(x, y) : y ∈ γ(x)}.

Then µ has a fixed point, i.e. there exists x̃ ∈ K s.t. x̃ ∈ µ(x̃).

Proof. By Theorem 3.5.1 there exists a sequence of continuous maps g_n : K → Y s.t.

Gr(g_n) ⊂ B_{1/n}(Gr(γ)).

We define maps h_n as follows:

h_n : K → K
x → f(x, g_n(x)).

It follows from Theorem 2.3.1 (Brouwer's fixed point theorem) that each h_n has a fixed point x_n ∈ K, i.e. a point x_n which satisfies

x_n = h_n(x_n) = f(x_n, g_n(x_n)).

Since K as well as Y are compact, we can extract convergent subsequences of (x_n) and of (g_n(x_n)), and w.l.o.g. we can assume that these two sequences already converge, with x̃ := lim_n x_n and ỹ := lim_n g_n(x_n). By the continuity of f we have

x̃ = f(x̃, ỹ).

Furthermore, since γ is closed, Gr(γ) is closed, and for all n we have (x_n, g_n(x_n)) ∈ B_{1/n}(Gr(γ)). Therefore (x̃, ỹ) = lim_n (x_n, g_n(x_n)) must lie in Gr(γ), and therefore ỹ ∈ γ(x̃). By the assumption on the correspondence µ we have x̃ = f(x̃, ỹ) ∈ µ(x̃), so that x̃ is a fixed point of µ.

Theorem 3.6.2. (Kakutani Fixed Point Theorem) Let K ⊂ R^m be compact and convex and γ : K →→ K be closed or uhc with nonempty, convex and compact values. Then γ has a fixed point.

Proof. If γ is uhc, then by Proposition 3.2.1 (first part) γ is also closed, so we can just assume that γ is closed. We can then apply the previous theorem to µ = γ and f defined by

f : K × K → K
(x, y) → y.

Then clearly µ(x) = γ(x) = {y = f(x, y) : y ∈ γ(x)} and therefore γ has a fixed point.
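A fixed point whose existence Kakutani's theorem guarantees can be located numerically in simple cases. The interval-valued correspondence below is a hypothetical example; a grid search finds all x with x ∈ γ(x):

```python
# Sketch: Kakutani on K = [0,1] with the compact convex valued
# correspondence gamma(x) = [max(0, 0.8 - x), min(1, 1.2 - x)].
# A grid search locates the points x with x in gamma(x).
def gamma(x):
    return max(0.0, 0.8 - x), min(1.0, 1.2 - x)

fixed = [k / 1000 for k in range(1001)
         if gamma(k / 1000)[0] <= k / 1000 <= gamma(k / 1000)[1]]
# here x in gamma(x)  <=>  0.8 - x <= x <= 1.2 - x  <=>  0.4 <= x <= 0.6
assert abs(min(fixed) - 0.4) < 1e-9 and abs(max(fixed) - 0.6) < 1e-9
```

Note that the fixed point set is a whole interval here; for correspondences, uniqueness is even rarer than for maps.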

We come to another fixed point theorem, which works in the setup where the correspondence is lhc.

Theorem 3.6.3. Let K ⊂ R^m be compact and convex and let γ : K →→ K be lhc with closed and convex values. Then γ has a fixed point.

Proof. By Proposition 3.5.2 there exists a continuous selection f : K → K s.t. f(x) ∈ γ(x) for all x ∈ K. Applying Theorem 2.3.1 (Brouwer's fixed point theorem) again, we see that f has a fixed point, i.e. there exists x̃ s.t. x̃ = f(x̃) ∈ γ(x̃). Clearly x̃ is also a fixed point of γ.

Theorem 3.6.4. (Browder) Let K ⊂ R^m be compact and convex and let γ : K →→ K be a correspondence with nonempty convex values s.t. γ^{-1}(y) is open for all y ∈ K. Then γ has a fixed point.

Proof. Follows in the same way as in the previous proof by application of Theorem 3.5.2 and Brouwer's fixed point theorem.

Lemma 3.6.1. Let X ⊂ R^k be nonempty, compact and convex and let U : X →→ X be a convex valued correspondence s.t.

1. x ∉ U(x) for all x ∈ X
2. U^{-1}(x) = {x′ ∈ X : x ∈ U(x′)} is open in X for all x ∈ X.

Then there exists x ∈ X s.t. U(x) = ∅.

Proof. Let us define W := {x ∈ X : U(x) ≠ ∅}. If x ∈ W then there exists y ∈ U(x), and since x ∈ U^{-1}(y) ⊂ W, by assumption 2.) U^{-1}(y) is an open neighborhood of x in W. Therefore W is an open subset of X. We consider the restriction of U to W, i.e.

U|_W : W →→ X ⊂ R^k.

This correspondence satisfies the assumptions of the Browder Selection Theorem (Theorem 3.5.2) and therefore admits a continuous selection

f : W → R^k,

which by definition of a selection has the property that f(x) ∈ U(x) for all x ∈ W. We define a new correspondence as follows:

γ : X →→ X
x → {f(x)} if x ∈ W and x → X if x ∉ W.

Then γ is convex and compact valued. It follows from Proposition 3.2.1 (second part) that γ is uhc if we can show that γ is closed. To show this, let x_n → x, y_n ∈ γ(x_n) and y_n → y; then we must show y ∈ γ(x). If x ∉ W this is clear, since then γ(x) = X. If however x ∈ W, then since W is open there must exist n_0 s.t. for all n ≥ n_0 we have x_n ∈ W. For those n we have y_n ∈ γ(x_n) = {f(x_n)}, i.e. y_n = f(x_n). Now it follows from the continuity of f that

y = lim_n y_n = lim_n f(x_n) = f(x)

and therefore y ∈ γ(x) = {f(x)}. Hence γ is uhc with nonempty, convex and compact values. By application of the Kakutani Fixed Point Theorem (Theorem 3.6.2), γ has a fixed point, that is, there exists x ∈ X s.t. x ∈ γ(x). If x ∈ W, then by definition of γ, x = f(x) ∈ U(x), which is a contradiction to assumption 1.) Therefore x ∉ W, and by definition of W, U(x) = ∅.

Definition 3.6.1. Let X ⊂ R^k and f : X → R. We say that f is lower semi-continuous if for all a ∈ R the sets f^{-1}((a, ∞)) are open. We call f quasi-concave if for all a ∈ R the sets f^{-1}([a, ∞)) are convex.

Theorem 3.6.5. (Ky Fan) Let X ⊂ R^k be nonempty, convex and compact and ϕ : X × X → R s.t.

1. ∀ y ∈ X the function ϕ(·, y), considered as a function in the first variable, is lower semi-continuous
2. ∀ x ∈ X the function ϕ(x, ·), considered as a function in the second variable, is quasi-concave
3. sup_{x∈X} ϕ(x, x) ≤ 0.

Then there exists x ∈ X s.t. sup_{y∈X} ϕ(x, y) ≤ 0, i.e. ϕ(x, y) ≤ 0 for all y ∈ X.

Proof. Define a correspondence

U : X →→ X
x → U(x) := {y ∈ X : ϕ(x, y) > 0}.

If we can find x ∈ X s.t. U(x) = ∅, then we are finished. The existence of such an x follows from Lemma 3.6.1 if we can show that U satisfies the required assumptions there. First let y_1, y_2 ∈ U(x) for an arbitrary x ∈ X. Then

ǫ := min(ϕ(x, y_1), ϕ(x, y_2)) > 0

and furthermore y_1, y_2 ∈ ϕ(x, ·)^{-1}([ǫ, ∞)), which by assumption 2.) is convex. Therefore for all λ ∈ [0, 1] we have that λ y_1 + (1 − λ) y_2 ∈ ϕ(x, ·)^{-1}([ǫ, ∞)), which implies ϕ(x, λ y_1 + (1 − λ) y_2) > 0 and therefore λ y_1 + (1 − λ) y_2 ∈ U(x). Hence U is convex valued. Furthermore for each y ∈ X we have that

U^{-1}(y) = {x ∈ X : y ∈ U(x)} = {x ∈ X : ϕ(x, y) > 0} = ϕ(·, y)^{-1}((0, ∞))

is open by assumption 1.) By assumption 3.) we also have that ϕ(x, x) ≤ 0 for all x, which by definition of U implies that x ∉ U(x) for all x ∈ X. Therefore U satisfies all the conditions of Lemma 3.6.1.

We are now in a position to prove the general version of the Nash Theorem:

Proof. (of Theorem 3.1.1) We define a map ϕ : S(N) × S(N) → R as follows: for s = (s_1, .., s_n), t = (t_1, .., t_n) ∈ S(N) we define

ϕ(s, t) := ∑_{i=1}^n [ L_i(s) − L_i(s_1, .., s_{i−1}, t_i, s_{i+1}, .., s_n) ].

It follows from the convexity of the L_i in the i-th variable that ϕ(s, ·) is quasi-concave, and from the continuity of the L_i that ϕ(·, t) is lower semi-continuous. Since S(N) is also convex and compact, we can apply the Ky Fan Theorem, which then implies the existence of an s ∈ S(N) s.t. ϕ(s, t) ≤ 0 for all t ∈ S(N). In particular for all t ∈ ({s_1} × .. × {s_{i−1}} × S_i × {s_{i+1}} × .. × {s_n}) ∩ S(N) we have

0 ≥ ϕ(s, t) = ∑_{j=1}^n [ L_j(s) − L_j(s_1, .., s_{j−1}, t_j, s_{j+1}, .., s_n) ]
= ∑_{j≠i} [ L_j(s) − L_j(s_1, .., s_{j−1}, t_j = s_j, s_{j+1}, .., s_n) ] + L_i(s) − L_i(s_1, .., s_{i−1}, t_i, s_{i+1}, .., s_n)
= L_i(s) − L_i(s_1, .., s_{i−1}, t_i, s_{i+1}, .., s_n),

where we used that by the choice of t, s and t only differ in the i-th component. This however implies that

L_i(s) ≤ L_i(s_1, .., s_{i−1}, t_i, s_{i+1}, .., s_n)

and shows that s is an NCE for the game.
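For concrete games, the NCE whose existence we just proved can often be computed by best-response iteration. The two-player game below, with quadratic losses on [0,1]², is a hypothetical example, and best-response iteration is a numerical device, not part of the proof above:

```python
# Sketch: computing an NCE for a hypothetical 2-player game on [0,1]^2
# with losses convex in each player's own variable:
#   L1(s) = (s1 - (1 - s2)/2)**2,   L2(s) = (s2 - (1 - s1)/2)**2.
# Best responses are br1(s2) = (1 - s2)/2 and br2(s1) = (1 - s1)/2;
# iterating them (a contraction here) converges to s = (1/3, 1/3).
s1, s2 = 0.9, 0.1
for _ in range(100):
    s1, s2 = (1 - s2) / 2, (1 - s1) / 2
assert abs(s1 - 1/3) < 1e-9 and abs(s2 - 1/3) < 1e-9

# at the equilibrium no unilateral deviation lowers player 1's loss:
L1 = lambda a, b: (a - (1 - b) / 2) ** 2
assert all(L1(s1, s2) <= L1(t, s2) + 1e-12 for t in [0.0, 0.25, 0.5, 1.0])
```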

3.7 Generalized Games and an Equilibrium Theorem

As before we consider a competitive environment where n players participate and act by choosing strategies which determine an outcome. We denote by N = {1, .., n} the set of players and for each i ∈ {1, .., n} by X_i the strategy set of player i. As before, let X = ∏_{i=1}^n X_i denote the multi-strategy set. We do not assume that there is some kind of multi-loss operator, and this is where we generalize the concept of games. Instead of a multi-loss operator we assume we have n correspondences U_i : X →→ X_i. We think of these correspondences as follows:

U_i(x) = {y_i ∈ X_i : (x_1, .., x_{i−1}, y_i, x_{i+1}, .., x_n) determines an outcome which player "i" prefers to the outcome determined by (x_1, .., x_i, .., x_n) = x}.

Note that the equation above is not a definition but an interpretation.⁴ The U_i can be interpreted as utilities (compare section 3.3).⁵ Furthermore we assume we have feasibility correspondences F_i : X →→ X_i for all 1 ≤ i ≤ n, where the interpretation is as follows:

F_i(x) = {y_i ∈ X_i : (x_1, .., x_{i−1}, y_i, x_{i+1}, .., x_n) is an allowed strategy}.

Then the jointly feasible multi-strategies are the fixed points of the correspondence

F = ∏_{i=1}^n F_i : X →→ X.

This correspondence generalizes the set S(N) in Definition 3.1.1.

Definition 3.7.1. A generalized game (sometimes also called abstract economy) is a quadruple (N, (X_i), (F_i), (U_i)) where the X_i, F_i, U_i are as above.

Though the situation is much more general now, it is not harder, in fact even more natural, to define what an equilibrium of a generalized game should be.

Definition 3.7.2. A non-cooperative equilibrium (short NCE) of a generalized game (N, (X_i), (F_i), (U_i)) is a multi-strategy x ∈ X s.t. x ∈ F(x) and

U_i(x) ∩ F_i(x) = ∅ ∀ i.

The emptiness of the intersection above means that for player i, given the strategies of the other players, there is no better (feasible or allowed) strategy. So in the sense of Nash, within an NCE none of the players has a reason to deviate from his strategy. We have the

⁴ The word "prefer" is always meant in the sense "strictly prefer".
⁵ The case of a classic game where a multi-loss operator is given fits into this concept by setting U_i(x) = {y_i ∈ X_i : L_i(x_1, .., x_{i−1}, y_i, x_{i+1}, .., x_n) < L_i(x_1, .., x_i, .., x_n)}.

following theorem, which states the existence of such equilibria under certain assumptions on the correspondences used in the game.

Theorem 3.7.1 (Shafer, Sonnenschein 1975). Let $\mathcal{G} = (N, (X_i), (F_i), (U_i))$ be a generalized game s.t. for all $i$:

1. $X_i \subset \mathbb{R}^{k_i}$ is nonempty, compact and convex.
2. $F_i$ is continuous with nonempty, compact and convex values.
3. $\mathrm{Gr}(U_i)$ is open in $X \times X_i$.
4. $x_i \notin \mathrm{co}(U_i(x))$ for all $x \in X$.

Then there exists a non-cooperative equilibrium for $\mathcal{G}$.

Proof. Let us define for each $i$ a map

$$\nu_i : X \times X_i \to \mathbb{R}_+, \qquad (x, y_i) \mapsto \mathrm{dist}\big((x, y_i), \mathrm{Gr}(U_i)^c\big).$$

Since $\mathrm{Gr}(U_i)^c$ is closed we have

$$\nu_i(x, y_i) > 0 \iff y_i \in U_i(x).$$

Clearly the maps $\nu_i$ are continuous. Furthermore we define

$$H_i : X \twoheadrightarrow X_i, \qquad x \mapsto \{\, y_i \in F_i(x) : y_i \text{ maximizes } \nu_i(x, \cdot) \text{ on } F_i(x) \,\}.$$

Since the correspondences $F_i$ are compact valued, by setting $\gamma : X \twoheadrightarrow X \times X_i$, $x \mapsto \{x\} \times F_i(x)$, we are in the situation of the Maximum Theorem 1 (Proposition 3.4.1), where we choose $G = X$, $Y = X \times X_i$ and $f = \nu_i$. Then the Maximum Theorem says that the correspondence

$$\mu_i(x) = \{\, z \in \gamma(x) : z \text{ maximizes } \nu_i \text{ on } \gamma(x) \,\} = \{\, (x, y_i) : y_i \in F_i(x) \text{ and } y_i \text{ maximizes } \nu_i(x, \cdot) \text{ on } F_i(x) \,\}$$

is uhc. $H_i$ is just the composition of the correspondence $\mu_i$ with the correspondence given by the continuous map $\mathrm{pr} : X \times X_i \to X_i$, $(x, y_i) \mapsto y_i$, and is therefore uhc by Proposition 3.2.7. Let us now define another correspondence

$$G : X \twoheadrightarrow X, \qquad x \mapsto \prod_{i=1}^{n} \mathrm{co}(H_i(x)).$$

Then by Proposition 3.2.8 and Proposition 3.2.10 the correspondence $G$ is uhc. Since furthermore $X$ is compact and convex, we are in the situation where we can apply the Kakutani Fixed Point Theorem (Theorem 3.6.2). Therefore there exists $\tilde{x} \in X$ s.t. $\tilde{x} \in G(\tilde{x})$. Since $H_i(\tilde{x}) \subset F_i(\tilde{x})$ and $F_i(\tilde{x})$ is convex, we have that

$$\tilde{x}_i \in G_i(\tilde{x}) = \mathrm{co}(H_i(\tilde{x})) \subset F_i(\tilde{x}).$$

Since this holds for all $i$ we have that $\tilde{x} \in F(\tilde{x})$, so that $\tilde{x}$ is a jointly feasible strategy. Let us now show that $U_i(\tilde{x}) \cap F_i(\tilde{x}) = \emptyset$ for all $i$. Suppose this were not the case. Then there would exist an $i$ and $z_i \in U_i(\tilde{x}) \cap F_i(\tilde{x})$. Since $z_i \in U_i(\tilde{x})$, it follows from above that $\nu_i(\tilde{x}, z_i) > 0$. Since furthermore $H_i(\tilde{x})$ consists of the maximizers of $\nu_i(\tilde{x}, \cdot)$ on $F_i(\tilde{x})$, we have

$$\nu_i(\tilde{x}, y_i) \ge \nu_i(\tilde{x}, z_i) > 0 \quad \text{for all } y_i \in H_i(\tilde{x}).$$

This however means that $y_i \in U_i(\tilde{x})$ for all $y_i \in H_i(\tilde{x})$, and hence $H_i(\tilde{x}) \subset U_i(\tilde{x})$. Therefore

$$\tilde{x}_i \in G_i(\tilde{x}) = \mathrm{co}(H_i(\tilde{x})) \subset \mathrm{co}(U_i(\tilde{x})),$$

which is a contradiction to assumption 4 of the theorem. Thus we must have $U_i(\tilde{x}) \cap F_i(\tilde{x}) = \emptyset$ for all $i$, and this means that $\tilde{x}$ is an NCE. □

Using this theorem we can now quite easily prove Theorem 3.1.1, which is the original Nash Theorem.

Proof (of Theorem 3.1.1). Let $\mathcal{G}_n = ((\mathcal{S}_i), \mathcal{S}(N), L)$ be an $N$-person game consisting of strategy sets $\mathcal{S}_i$ for each player $i$, a subset $\mathcal{S}(N)$ of allowed strategies and a multi-loss operator, which satisfy the conditions in Theorem 3.1.1. Let us make a generalized game out of this. First let us define utility correspondences $U_i : \prod_{j=1}^{n} \mathcal{S}_j \twoheadrightarrow \mathcal{S}_i$ as follows: if $s = (s_1, \dots, s_n) \in \mathcal{S}(N)$ then

$$U_i(s) := \{\, \tilde{s}_i \in \mathcal{S}_i : (s_1, \dots, \tilde{s}_i, \dots, s_n) \in \mathcal{S}(N) \text{ and } L_i(s_1, \dots, s_{i-1}, \tilde{s}_i, s_{i+1}, \dots, s_n) < L_i(s_1, \dots, s_i, \dots, s_n) \,\};$$

in the case where $s \notin \mathcal{S}(N)$ we define $U_i(s) = \emptyset$.
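For finite strategy sets, the NCE conditions $x \in F(x)$ and $U_i(x) \cap F_i(x) = \emptyset$ can be checked by brute force. The following sketch is my own illustration (the two-player game, its loss functions and all names are hypothetical, not taken from the notes); it builds the utility correspondences from a multi-loss operator exactly as in footnote 5:

```python
from itertools import product

# Hypothetical finite two-player game: each player picks strategy 0 or 1.
S = [[0, 1], [0, 1]]
L = [lambda x: -(5*x[0]*x[1] + 4 - 4*x[0] - 4*x[1]),   # loss of player 1
     lambda x: -(5*x[0]*x[1] + 1 - x[0] - x[1])]       # loss of player 2

def U(i, x):
    """U_i(x): strategies player i strictly prefers, all else held fixed."""
    return {y for y in S[i] if L[i](x[:i] + (y,) + x[i+1:]) < L[i](x)}

def F(i, x):
    """Feasibility correspondence; here every strategy is allowed."""
    return set(S[i])

def is_nce(x):
    """x is an NCE iff x is jointly feasible and U_i(x) & F_i(x) is empty."""
    return all(x[i] in F(i, x) and not (U(i, x) & F(i, x)) for i in range(len(S)))

nce = [x for x in product(*S) if is_nce(x)]
print(nce)   # [(0, 0), (1, 1)]
```

Here $F_i$ is the trivial feasibility correspondence; restricting it would model an allowed-strategy set $\mathcal{S}(N)$ as in the proof above.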

3.8 The Walras Equilibrium Theorem

In this section we reconsider the Walras economy of Section 3.3 and the so-called Walras Equilibrium Theorem. We refer to Section 3.3 for most of the notation and also for the economic interpretation.

Definition 3.8.1. A Walras economy is a five-tuple

$$\mathcal{E} = ((X_i), (w_i), (U_i), (Y_j), (\alpha^i_j))$$

consisting of consumption sets $X_i \subset \mathbb{R}^m$, supply sets $Y_j \subset \mathbb{R}^m$, initial endowments $w_i$, utility correspondences $U_i : X_i \twoheadrightarrow X_i$ and shares $\alpha^i_j \ge 0$. Furthermore the prices of the commodities $p_i$ satisfy $p_i \ge 0$ and $\sum_{i=1}^{m} p_i = 1$, so that the set of prices is given by the closed standard simplex $\Delta^{m-1}$. We denote $X = \prod_{i=1}^{n} X_i$, $Y = \sum_{j=1}^{k} Y_j$ and $w = \sum_{i=1}^{n} w_i$.

The assumption that the price vectors are elements of $\Delta^{m-1}$ might at first glance look very restrictive and unrealistic. However, we did not specify any currency. Since there is only a finite amount of money in the world, we can assume that all prices lie between zero and one. By introducing an $(m+1)$-st commodity into our economy which no one is able to consume or to produce, and which can be given the price $1 - \sum_{i=1}^{m} p_i$, we get an economy which is equivalent to the original one and whose prices satisfy the hypotheses in the definition above.

Definition 3.8.2. An attainable state of the Walras economy $\mathcal{E}$ is a tuple $((x_i), (y_j)) \in \prod_{i=1}^{n} X_i \times \prod_{j=1}^{k} Y_j$ such that

$$\sum_{i=1}^{n} x_i - \sum_{j=1}^{k} y_j - w = 0.$$

We denote the set of attainable states by $F$.
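Numerically, attainability is just the vector identity above. A minimal sketch (the endowment, production and consumption data below are hypothetical numbers of my own, purely for illustration):

```python
import numpy as np

def is_attainable(xs, ys, ws, tol=1e-12):
    """((x_i), (y_j)) is attainable iff sum_i x_i - sum_j y_j - w = 0,
    where w = sum_i w_i is the aggregate endowment."""
    excess = np.sum(xs, axis=0) - np.sum(ys, axis=0) - np.sum(ws, axis=0)
    return bool(np.linalg.norm(excess) < tol)

# Two consumers, one supplier, two commodities (hypothetical data).
ws = np.array([[1.0, 0.0], [0.0, 1.0]])   # endowments w_1, w_2
ys = np.array([[0.5, 0.5]])               # production y_1
xs = np.array([[1.0, 1.0], [0.5, 0.5]])   # consumption x_1, x_2
print(is_attainable(xs, ys, ws))          # True: demand exactly matches supply
```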

In words: an attainable state is a state where the production of the suppliers precisely fits the demand of the consumers. Let us introduce some notation which will later be of advantage. Let

$$M := \Big\{\, ((x_i), (y_j)) \in (\mathbb{R}^m)^{n+k} : \sum_{i=1}^{n} x_i - \sum_{j=1}^{k} y_j - w = 0 \,\Big\}.$$

Then the set of attainable states can be written as $F = M \cap \big( \prod_{i=1}^{n} X_i \times \prod_{j=1}^{k} Y_j \big)$. Let furthermore

$$\mathrm{pr}_i : F \to X_i, \qquad \widetilde{\mathrm{pr}}_j : F \to Y_j$$

denote the projections on the corresponding factors, and set

$$\tilde{X}_i := \mathrm{pr}_i(F), \qquad \tilde{Y}_j := \widetilde{\mathrm{pr}}_j(F).$$

Definition 3.8.3. A Walrasian free disposal equilibrium is a price $\tilde{p}$ together with an attainable state $((\tilde{x}_i), (\tilde{y}_j))$ such that

1. $\langle \tilde{p}, \tilde{y}_j \rangle \ge \langle \tilde{p}, y_j \rangle$ for all $y_j \in Y_j$ and all $j$;

2. $\tilde{x}_i \in b_i\big(\tilde{p}, \langle \tilde{p}, w_i \rangle + \sum_{j=1}^{k} \alpha^i_j \langle \tilde{p}, \tilde{y}_j \rangle\big)$ and

$$U_i(\tilde{x}_i) \cap b_i\Big(\tilde{p}, \langle \tilde{p}, w_i \rangle + \sum_{j=1}^{k} \alpha^i_j \langle \tilde{p}, \tilde{y}_j \rangle\Big) = \emptyset,$$

where $b_i$ denotes the budget correspondence for consumer $i$.

The first part of the definition above means that the suppliers make optimal profit, the second that the consumers get optimal utility from their consumption. The following theorem tells us about the existence of such an equilibrium under certain assumptions.

Theorem 3.8.1 (Walras Equilibrium Theorem). Assume the Walras economy $\mathcal{E}$ satisfies the following conditions for each $i = 1, \dots, n$ and $j = 1, \dots, k$:

1. $X_i$ is closed, convex, bounded from below, and $w_i \in X_i$;
2. there exists $x_i \in X_i$ s.t. $w_i > x_i$;⁶
3. $U_i$ has open graph, $x_i \notin \mathrm{co}(U_i(x_i))$ and $x_i \in \overline{U_i(x_i)}$;
4. $Y_j$ is closed and convex and $0 \in Y_j$;
5. $Y \cap (-Y) = \{0\}$ and $Y \cap \mathbb{R}^m_+ = \{0\}$;
6. $-\mathbb{R}^m_+ \subset Y$.

Then there exists a Walrasian free disposal equilibrium in the economy $\mathcal{E}$.

⁶This inequality between two vectors is meant componentwise.

Before we can prove this theorem we need to do some preparatory work.

Definition 3.8.4. A cone is a nonempty set $C \subset \mathbb{R}^m$ which is closed under multiplication by nonnegative scalars, i.e. $\lambda \ge 0$ and $x \in C$ imply $\lambda x \in C$.

The notion of a cone is well known; not so well known in general is the notion of an asymptotic cone.

Definition 3.8.5. Let $E \subset \mathbb{R}^m$. The asymptotic cone of $E$ is the set $A(E)$ of all possible limits of sequences of the form $(\lambda_j x_j)$, where $x_j \in E$ and $(\lambda_j)$ is a decreasing sequence of real numbers s.t. $\lim_j \lambda_j = 0$.

The asymptotic cone can be used to check whether a subset of $\mathbb{R}^m$ is bounded, as the following proposition shows:

Proposition 3.8.1. A set $E \subset \mathbb{R}^m$ is bounded if and only if $A(E) = \{0\}$.

Proof. If $E$ is bounded then there exists a constant $M$ s.t. $|x| \le M$ for all $x \in E$. If $z = \lim_j \lambda_j x_j \in A(E)$ then

$$0 \le |z| = \lim_j |\lambda_j| \, |x_j| \le M \lim_j |\lambda_j| = M \cdot 0 = 0.$$

Therefore in this case $A(E) = \{0\}$. If however $E$ is not bounded, then there exists a sequence $x_j \in E$ s.t. $|x_j|$ increases monotonically to infinity. We set $\lambda_j = \frac{1}{|x_j|}$ and obtain that the sequence $(\lambda_j)$ decreases monotonically to zero. We have $|\lambda_j x_j| = 1$ for all $j$. Therefore the sequence $(\lambda_j x_j)$ is a sequence on the unit sphere $S^{m-1} \subset \mathbb{R}^m$. Since this is compact, the sequence $(\lambda_j x_j)$ must contain a convergent subsequence, and w.l.o.g. we assume that $(\lambda_j x_j)$ converges itself to a point $z \in S^{m-1}$. Clearly $z \ne 0$ and $z \in A(E)$. Therefore in this case $A(E) \ne \{0\}$. □
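As a concrete illustration (an example of my own, not from the notes): for the closed convex set $E = \{(s, t) \in \mathbb{R}^2 : t \ge s^2\}$ one finds $A(E) = \{0\} \times \mathbb{R}_+$. If $\lambda_j (s_j, t_j) \to (a, b)$ with $t_j \ge s_j^2$ and $\lambda_j \downarrow 0$, then $b = \lim_j \lambda_j t_j \ge 0$, while the first component is forced to vanish:

```latex
|\lambda_j s_j| \le \lambda_j \sqrt{t_j}
              = \sqrt{\lambda_j}\,\sqrt{\lambda_j t_j}
              \longrightarrow 0 \cdot \sqrt{b} = 0,
\qquad\text{hence}\qquad
A(E) = \{0\} \times \mathbb{R}_+ .
```

Conversely $(0, b) = \lim_j \tfrac{1}{j}(0, jb)$ for every $b \ge 0$. In accordance with the proposition, $A(E) \ne \{0\}$ reflects that $E$ is unbounded, namely in the vertical direction.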

Intuitively the asymptotic cone tells one about the directions in which $E$ is unbounded. It might be difficult though to compute the asymptotic cone. For this the following rules are very helpful:

Lemma 3.8.1. Let $E, F, E_i \subset \mathbb{R}^m$ and $x \in \mathbb{R}^m$. Then

1. $A(E)$ is a cone;
2. $E \subset F \Rightarrow A(E) \subset A(F)$;
3. $A(E + x) = A(E)$;
4. $A\big(\bigcap_{i \in I} E_i\big) \subset \bigcap_{i \in I} A(E_i)$;
5. $A(E)$ is closed;
6. $E$ convex $\Rightarrow A(E)$ convex;
7. $E$ closed and convex and $x \in E$ $\Rightarrow$ $x + A(E) \subset E$; in particular, if $0 \in E$ and $E$ is convex, then $A(E) \subset E$;
8. $C \subset E$ and $C$ a cone $\Rightarrow C \subset A(E)$;
9. $A\big(\prod_{i \in I} E_i\big) \subset \prod_{i \in I} A(E_i)$.

We will use these rules to prove the following result about the compactness of the set $F$ introduced before. This compactness will later play a major role in the proof of the Walras Equilibrium Theorem.

Lemma 3.8.2. Assume the Walras economy $\mathcal{E}$ satisfies the following conditions for each $i = 1, \dots, n$ and $j = 1, \dots, k$:

1. $X_i$ is closed, convex, bounded from below, and $w_i \in X_i$;
2. $Y_j$ is closed and convex and $0 \in Y_j$;
3. $A(Y) \cap \mathbb{R}^m_+ = \{0\}$;
4. $Y \cap (-Y) = \{0\}$ and $Y \cap \mathbb{R}^m_+ = \{0\}$.

Then the set $F$ of attainable states is compact and nonempty. Furthermore $0 \in \tilde{Y}_j$ for $j = 1, \dots, k$. If moreover the following two assumptions hold:

1. for each $i$ there exists some $x_i \in X_i$ s.t. $w_i > x_i$;
2. $-\mathbb{R}^m_+ \subset Y$;

then $x_i \in \tilde{X}_i$.

Proof. Clearly $((w_i), (0_j)) \in F$, where $0_j$ denotes the zero vector in $Y_j$. This implies $F \ne \emptyset$. Furthermore $F$, as the intersection of the two closed sets $M$ and $\prod_{i=1}^{n} X_i \times \prod_{j=1}^{k} Y_j$, is closed. By Proposition 3.8.1, for the compactness of $F$ it suffices to show that $A(F) = \{0\}$. By parts 4 and 9 of the previous lemma we have

$$A(F) \subset A\Big( \prod_{i=1}^{n} X_i \times \prod_{j=1}^{k} Y_j \Big) \cap A(M) \subset \Big( \prod_{i=1}^{n} A(X_i) \times \prod_{j=1}^{k} A(Y_j) \Big) \cap A(M).$$

Since each of the $X_i$ is bounded from below, there exist vectors $b_i \in \mathbb{R}^m$ s.t. $X_i \subset b_i + \mathbb{R}^m_+$. Applying successively parts 2, 3, 7 and 8 of the previous lemma we get

$$A(X_i) \subset A(b_i + \mathbb{R}^m_+) = A(\mathbb{R}^m_+) = \mathbb{R}^m_+.$$

Also by part 2 and assumption 2 (which implies $Y_j \subset Y$) we have

$$A(Y_j) \subset A(Y).$$

Let us consider the vector $\tilde{w} = (w_1, \dots, w_n, 0, \dots, 0) \in (\mathbb{R}^m)^{n+k}$ and

$$\tilde{M} = \Big\{\, ((\tilde{x}_i), (\tilde{y}_j)) : \sum_{i=1}^{n} \tilde{x}_i - \sum_{j=1}^{k} \tilde{y}_j = 0 \,\Big\}.$$

Then $\tilde{M}$ is a vector space and therefore also a cone. Furthermore we have $\tilde{M} + \tilde{w} = M$, and hence by part 3 of the previous lemma

$$A(M) = A(\tilde{M} + \tilde{w}) = A(\tilde{M}) = \tilde{M} = M - \tilde{w}.$$

Therefore $A(F) = \{0\}$ would follow if

$$\Big( \prod_{i=1}^{n} \mathbb{R}^m_+ \times \prod_{j=1}^{k} A(Y) \Big) \cap \tilde{M} = \{0\}.$$

To prove the latter we have to show that whenever $y_j \in A(Y)$ for $j = 1, \dots, k$ and

$$\sum_{i=1}^{n} x_i - \sum_{j=1}^{k} y_j = 0 \tag{3.1}$$

for some $x_i \in \mathbb{R}^m_+$, then $x_1 = \dots = x_n = y_1 = \dots = y_k = 0$. Now since $\sum_{i=1}^{n} x_i \ge 0$ (componentwise, since in $\mathbb{R}^m_+$) we also must have

$$\sum_{j=1}^{k} y_j \ge 0.$$

Since $A(Y)$ is convex and a cone, one has $\sum_{j=1}^{k} y_j \in A(Y)$. Since however by assumption 3 we have $A(Y) \cap \mathbb{R}^m_+ = \{0\}$, we must have $\sum_{j=1}^{k} y_j = 0$ and therefore by equation (3.1) also $\sum_{i=1}^{n} x_i = 0$. Since $x_i \ge 0$ (componentwise) for all $i$, we must have $x_i = 0$ for all $i$. Now since $y_j \in A(Y)$ for all $j$, using assumption 4 we get for each $l = 1, \dots, k$

$$\sum_{j=1}^{k} y_j = 0 \;\Rightarrow\; \underbrace{y_l}_{\in A(Y) \subset Y} = - \underbrace{\textstyle\sum_{j \ne l} y_j}_{\in A(Y) \subset Y} \in Y \cap (-Y) = \{0\},$$

which finally proves that $A(F) = \{0\}$ and therefore that $F$ is compact. Let us now assume that in addition assumptions 1 and 2 of the second part of the lemma hold. Choosing $x_i \in X_i$ as in assumption 1 we get componentwise

$$\sum_{i=1}^{n} x_i < \sum_{i=1}^{n} w_i.$$

Let us set $y := \sum_{i=1}^{n} x_i - \sum_{i=1}^{n} w_i$. Then $y < 0$ and by assumption 2 we must have $y \in Y$. Therefore there must exist $y_j \in Y_j$ s.t.

$$y = \sum_{j=1}^{k} y_j.$$

Clearly $\sum_{i=1}^{n} x_i - \sum_{j=1}^{k} y_j - \underbrace{\textstyle\sum_{i=1}^{n} w_i}_{= w} = y - y = 0$, and therefore $((x_i), (y_j)) \in F$. This however implies that $x_i \in \tilde{X}_i$. □

Proof (of Theorem 3.8.1). Let us note first that assumption 5 together with part 7 of Lemma 3.8.1 implies that

$$A(Y) \cap \mathbb{R}^m_+ \subset Y \cap \mathbb{R}^m_+ = \{0\},$$

and therefore all assumptions in Lemma 3.8.2 are met. The set $F$ of attainable states is therefore compact. Since the image of a compact set under a continuous map is again compact, all the $\tilde{X}_i$, $\tilde{Y}_j$ are compact. Therefore we can choose compact, convex sets $K_i, C_j \subset \mathbb{R}^m$ s.t. $\tilde{X}_i \subset K_i^\circ$ and $\tilde{Y}_j \subset C_j^\circ$. We set

$$X_i' := K_i \cap X_i, \qquad Y_j' := C_j \cap Y_j.$$

We will set up a generalized game where these sets will serve as strategy sets for some of the participants. The participants or players will be:

1. An auctioneer: he is player 0 and his strategy set is the set of price vectors $\Delta^{m-1}$.
2. Consumers $1, \dots, n$ are players $1, \dots, n$ and their strategy sets are the $X_i'$.
3. Suppliers $1, \dots, k$ are players $n+1, \dots, n+k$ and their strategy sets are the sets $Y_j'$.

A typical multi-strategy therefore has the form $(p, (x_i), (y_j)) \in \Delta^{m-1} \times \prod_{i=1}^{n} X_i' \times \prod_{j=1}^{k} Y_j'$. The utility correspondences are given as follows. For the auctioneer:

$$U_0 : \Delta^{m-1} \times \prod_{i=1}^{n} X_i' \times \prod_{j=1}^{k} Y_j' \twoheadrightarrow \Delta^{m-1},$$
$$(p, (x_i), (y_j)) \mapsto \Big\{\, q \in \Delta^{m-1} : \Big\langle q, \sum_i x_i - \sum_j y_j - w \Big\rangle > \Big\langle p, \sum_i x_i - \sum_j y_j - w \Big\rangle \,\Big\}.$$

This means that the auctioneer prefers to raise the value of excess demand. The economic interpretation of this is that the prices go up if there is more demand than supply (i.e. $\sum_i x_i - \sum_j y_j - w \ge 0$), and the prices go down if there is more supply than demand (i.e. $\sum_i x_i - \sum_j y_j - w \le 0$). For the mathematics it is important to mention that the correspondence above has open graph, is convex valued, and $p \notin U_0(p, (x_i), (y_j))$. These properties follow more or less directly, since the inequality in the definition of $U_0$ is a strict one and the scalar product is bilinear. Let us define the utility correspondences for the suppliers. For supplier $l$:

$$V_l : \Delta^{m-1} \times \prod_{i=1}^{n} X_i' \times \prod_{j=1}^{k} Y_j' \twoheadrightarrow Y_l', \qquad (p, (x_i), (y_j)) \mapsto \{\, \tilde{y}_l \in Y_l' : \langle p, \tilde{y}_l \rangle > \langle p, y_l \rangle \,\}.$$

Thus suppliers prefer larger profits. As before it is easy to see that these correspondences have open graph and are convex valued. Furthermore $y_l \notin V_l(p, (x_i), (y_j))$. Finally, the utility correspondences for the consumers⁷ are as follows. For consumer $q$:

$$\tilde{U}_q : \Delta^{m-1} \times \prod_{i=1}^{n} X_i' \times \prod_{j=1}^{k} Y_j' \twoheadrightarrow X_q', \qquad (p, (x_i), (y_j)) \mapsto \mathrm{co}(U_q(x_q)) \cap X_q'.$$

This correspondence is indeed well defined by the convexity of $X_q$. Furthermore it follows from Proposition 3.2.10 part 3 and assumption 3 that the $\tilde{U}_q$ have open graphs and that $x_q \notin \tilde{U}_q(p, (x_i), (y_j))$. They are also convex valued by the convexity of $X_q'$. To complete the setup of our generalized game we need feasibility correspondences for each player. For the suppliers and the auctioneer this is very easy: we choose constant correspondences. In fact for supplier $l$ we define

$$G_l : \Delta^{m-1} \times \prod_{i=1}^{n} X_i' \times \prod_{j=1}^{k} Y_j' \twoheadrightarrow Y_l', \qquad (p, (x_i), (y_j)) \mapsto Y_l',$$

and for the auctioneer

$$F_0 : \Delta^{m-1} \times \prod_{i=1}^{n} X_i' \times \prod_{j=1}^{k} Y_j' \twoheadrightarrow \Delta^{m-1}, \qquad (p, (x_i), (y_j)) \mapsto \Delta^{m-1}.$$

Constant correspondences are clearly continuous, and in this case they are also compact and convex valued. For the feasibility correspondences of the consumers we have to work a little bit more. Let us first define functions $\pi_j$ for $j = 1, \dots, k$ by

$$\pi_j : \Delta^{m-1} \to \mathbb{R}, \qquad p \mapsto \max_{y_j \in Y_j'} \langle p, y_j \rangle.$$

Basically these maps compute the optimal profit for the suppliers. It is not hard to see directly that these functions are continuous; it follows however also from Theorem 3.4.1 (Maximum Theorem 1). It follows from Lemma 3.8.2 that $0 \in \tilde{Y}_j \subset Y_j'$, so we have

$$\pi_j(p) \ge 0 \quad \forall\, p, j.$$

⁷Within our generalized game; they have to be distinguished from the utility correspondences the consumers have in the Walras economy.
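When $Y_j'$ is a polytope, the maximum defining $\pi_j$ is attained at a vertex, so $\pi_j(p)$ can be computed by a linear scan over the vertices. A small sketch (the truncated supply set below is a hypothetical example of mine, not one of the sets $C_j \cap Y_j$ constructed above):

```python
import numpy as np

def profit(p, vertices):
    """pi_j(p) = max over Y'_j of <p, y>; for a polytope the maximum of a
    linear functional sits at a vertex, so scanning the vertices suffices."""
    return max(float(np.dot(p, v)) for v in vertices)

# Hypothetical truncated supply set in R^2; it contains 0, as required.
Yj = [np.array([0.0, 0.0]), np.array([2.0, -1.0]), np.array([-1.0, 2.0])]
p = np.array([0.5, 0.5])      # a price vector in the standard simplex
print(profit(p, Yj))          # 0.5, attained at (2,-1) and (-1,2)
```

Since $0 \in Y_j'$, the computed profit is indeed nonnegative for every $p$, matching the inequality above.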

We define the feasibility correspondence for consumer $q$ as follows:

$$F_q : \Delta^{m-1} \times \prod_{i=1}^{n} X_i' \times \prod_{j=1}^{k} Y_j' \twoheadrightarrow X_q',$$
$$(p, (x_i), (y_j)) \mapsto \Big\{\, \tilde{x}_q \in X_q' : \langle p, \tilde{x}_q \rangle \le \langle p, w_q \rangle + \sum_{j=1}^{k} \alpha^q_j \pi_j(p) \,\Big\}.$$

This correspondence in fact depends only on the price vector $p$; the $(x_i)$ and $(y_j)$ are redundant. Using assumption 2 and the second part of Lemma 3.8.2, there exists $x_q \in \tilde{X}_q \subset X_q'$ s.t. $x_q < w_q$. Since also $p \ge 0$ and $\pi_j(p) \ge 0$, we have

$$\langle p, x_q \rangle < \langle p, w_q \rangle + \sum_{j=1}^{k} \alpha^q_j \pi_j(p),$$

and therefore $F_q$ is nonempty valued. Furthermore it follows from Proposition 3.4.1 that $F_q$ is lhc. Since furthermore $X_q'$ is compact and $F_q$ clearly has closed graph, it follows from Proposition 3.2.1 that $F_q$ is uhc. Thus for each consumer the feasibility correspondence is continuous with nonempty convex values. The generalized game constructed therefore satisfies all the assumptions of the Shafer-Sonnenschein Theorem 3.7.1. Therefore there exists an NCE

$$(p^\sharp, (x_i^\sharp), (y_j^\sharp)) \in \Delta^{m-1} \times \prod_{i=1}^{n} X_i' \times \prod_{j=1}^{k} Y_j',$$

which by definition of an NCE satisfies

1. $\big\langle q, \sum_i x_i^\sharp - \sum_j y_j^\sharp - w \big\rangle \le \big\langle p^\sharp, \sum_i x_i^\sharp - \sum_j y_j^\sharp - w \big\rangle$ for all $q \in \Delta^{m-1}$;

2. $\langle p^\sharp, y_j^\sharp \rangle \ge \langle p^\sharp, y_j \rangle$ for all $y_j \in Y_j'$, i.e. $\langle p^\sharp, y_j^\sharp \rangle = \pi_j(p^\sharp)$;

3. $x_i^\sharp \in b_i(p^\sharp) = \big\{\, x_i \in X_i' : \langle p^\sharp, x_i \rangle \le \langle p^\sharp, w_i \rangle + \sum_{j=1}^{k} \alpha^i_j \langle p^\sharp, y_j^\sharp \rangle \,\big\}$, where $\langle p^\sharp, y_j^\sharp \rangle = \pi_j(p^\sharp)$, and

$$\mathrm{co}(U_i(x_i^\sharp)) \cap b_i(p^\sharp) = \mathrm{co}(U_i(x_i^\sharp)) \cap X_i' \cap b_i(p^\sharp) = \tilde{U}_i(p^\sharp, (x_i^\sharp), (y_j^\sharp)) \cap F_i(p^\sharp, (x_i^\sharp), (y_j^\sharp)) = \emptyset.$$

We are now going to construct a Walras equilibrium from the NCE $(p^\sharp, (x_i^\sharp), (y_j^\sharp))$. For notational convenience set

$$M_i := \langle p^\sharp, w_i \rangle + \sum_{j=1}^{k} \alpha^i_j \langle p^\sharp, y_j^\sharp \rangle$$

for the income of consumer $i$. We show that in the NCE each consumer spends all of his income. Suppose not, i.e. $\langle p^\sharp, x_i^\sharp \rangle < M_i$ for some $i$. Then since $U_i(x_i^\sharp)$ is open⁸ and by assumption 3 $x_i^\sharp \in \overline{U_i(x_i^\sharp)}$, it would follow that $U_i(x_i^\sharp) \cap b_i(p^\sharp) \ne \emptyset$ and therefore also $\mathrm{co}(U_i(x_i^\sharp)) \cap b_i(p^\sharp) \ne \emptyset$, which is a contradiction to property 3 of our NCE above. Therefore we have $\langle p^\sharp, x_i^\sharp \rangle = M_i$ for all $i$, or more precisely

$$\langle p^\sharp, x_i^\sharp \rangle = \langle p^\sharp, w_i \rangle + \sum_{j=1}^{k} \alpha^i_j \langle p^\sharp, y_j^\sharp \rangle \quad \text{for all } i.$$

Summing over $i$ and using the assumption on the Walras economy that $\sum_i \alpha^i_j = 1$ for each $j$ yields

$$\Big\langle p^\sharp, \sum_i x_i^\sharp \Big\rangle = \Big\langle p^\sharp, \sum_j y_j^\sharp + w \Big\rangle \;\Rightarrow\; \Big\langle p^\sharp, \sum_i x_i^\sharp - \sum_j y_j^\sharp - w \Big\rangle = 0.$$

By property 1 of our NCE we then have

$$\Big\langle q, \sum_i x_i^\sharp - \sum_j y_j^\sharp - w \Big\rangle \le 0 \quad \text{for all } q \in \Delta^{m-1},$$

which clearly implies that $\sum_i x_i^\sharp - \sum_j y_j^\sharp - w \le 0$. By assumption 6 we then have $z := \sum_i x_i^\sharp - \sum_j y_j^\sharp - w \in Y$. Therefore there must exist $y_j \in Y_j$ such that $z = \sum_j y_j$. We define

$$\tilde{y}_j := y_j^\sharp + y_j \quad \text{for all } j.$$

Then $\sum_i x_i^\sharp - \sum_j \tilde{y}_j - w = z - z = 0$, so that $\tilde{y}_j \in \tilde{Y}_j$. Furthermore we have

$$\langle p^\sharp, \tilde{y}_j \rangle = \langle p^\sharp, y_j^\sharp \rangle + \langle p^\sharp, y_j \rangle \quad \text{for all } j.$$

Summing these equations over $j$ we get

$$\sum_j \langle p^\sharp, \tilde{y}_j \rangle = \sum_j \langle p^\sharp, y_j^\sharp \rangle + \underbrace{\sum_j \langle p^\sharp, y_j \rangle}_{= \langle p^\sharp, z \rangle = 0} = \sum_j \langle p^\sharp, y_j^\sharp \rangle.$$

By property 2 of our NCE and $\tilde{y}_j \in \tilde{Y}_j \subset Y_j'$ we have $\langle p^\sharp, \tilde{y}_j \rangle \le \langle p^\sharp, y_j^\sharp \rangle$ for all $j$. Therefore the equality above can only hold if $\langle p^\sharp, \tilde{y}_j \rangle = \langle p^\sharp, y_j^\sharp \rangle$ for all $j$. We have therefore shown that

$$\langle p^\sharp, \tilde{y}_j \rangle \ge \langle p^\sharp, y_j \rangle \quad \text{for all } y_j \in Y_j'.$$

⁸This follows from the assumption that $U_i$ has open graph.

We will now show that this inequality holds even for all $y_j \in Y_j$. Suppose this were not the case, i.e. there would exist $y_j \in Y_j$ s.t. $\langle p^\sharp, y_j \rangle > \langle p^\sharp, \tilde{y}_j \rangle$. Since $Y_j$ is convex, we have $\lambda y_j + (1 - \lambda) \tilde{y}_j \in Y_j$ for all $\lambda \in [0, 1]$. Since $\tilde{y}_j \in \tilde{Y}_j \subset (Y_j')^\circ$, there exists $\lambda > 0$ s.t. $y_j' := \lambda y_j + (1 - \lambda) \tilde{y}_j \in Y_j'$. Then $\langle p^\sharp, y_j' \rangle > \langle p^\sharp, \tilde{y}_j \rangle$, which is a contradiction to the inequality above. By construction we have that $((x_i^\sharp), (\tilde{y}_j)) \in F$.

To show that $(p^\sharp, (x_i^\sharp), (\tilde{y}_j))$ is a Walrasian free disposal equilibrium, it remains to show that for each $i$

$$U_i(x_i^\sharp) \cap \Big\{\, x_i \in X_i : \langle p^\sharp, x_i \rangle \le \langle p^\sharp, w_i \rangle + \sum_j \alpha^i_j \langle p^\sharp, \tilde{y}_j \rangle \,\Big\} = \emptyset.$$

Suppose there were an $x_i$ in this intersection. Then since $X_i'$ is convex and $x_i^\sharp \in \tilde{X}_i \subset (X_i')^\circ$, there exists $\lambda > 0$ such that $\lambda x_i + (1 - \lambda) x_i^\sharp \in X_i'$. Since $x_i^\sharp \in \overline{U_i(x_i^\sharp)}$ by assumption 3, it follows from the convexity of $b_i(p^\sharp)$ that $\lambda x_i + (1 - \lambda) x_i^\sharp \in \mathrm{co}(U_i(x_i^\sharp)) \cap b_i(p^\sharp)$. This is a contradiction to property 3 of our NCE. Thus $(p^\sharp, (x_i^\sharp), (\tilde{y}_j))$ is indeed a Walrasian free disposal equilibrium. □

Chapter 4

Cooperative Games

4.1 Cooperative Two Person Games

In the setup of non-cooperative games the players choose their strategies independently of each other; then the game is played and delivers some output, which is measured either via a loss operator or a utility correspondence. In the setup of cooperative games the players are allowed to communicate before choosing their strategies and playing the game. They can agree, but also disagree, about a joint strategy. Let us recall the "Battle of the Sexes" game, where the strategies are given as follows:

man: $s_1$ = "go to theater", $s_2$ = "go to soccer";
woman: $\tilde{s}_1$ = "go to theater", $\tilde{s}_2$ = "go to soccer".

The corresponding bilosses are given by the matrix

$$L := \begin{pmatrix} (-1, -4) & (0, 0) \\ (0, 0) & (-4, -1) \end{pmatrix}.$$

The mixed strategies of this game look as follows:

$$x s_1 + (1 - x) s_2 \leftrightarrow x \in [0, 1], \qquad y \tilde{s}_1 + (1 - y) \tilde{s}_2 \leftrightarrow y \in [0, 1],$$

and the biloss of the mixed strategy $(x, y)$ is given by

$$L_1(x, y) = -(5xy + 4 - 4x - 4y), \qquad L_2(x, y) = -(5xy + 1 - x - y).$$

Since we have for all $x, y \in [0, 1]$ that

$$L_1(1, 1) = -1 \le -x = L_1(x, 1), \qquad L_2(1, 1) = -4 \le -4y = L_2(1, y),$$

we see that the pure strategy $(1, 1) \leftrightarrow (s_1, \tilde{s}_1)$ is an NCE. In the same way one can see that $(0, 0)$ is an NCE. All possible outcomes of the game when using mixed strategies are given by the shaded region in Figure 4.1.

Assume now the man and the woman decide to do the following: they throw a coin; if it shows heads, they both go to the theater, and if it shows tails, they both go to see the soccer match. The expected biloss of this strategy is

$$\tfrac{1}{2} (-1, -4) + \tfrac{1}{2} (-4, -1) = \big( -\tfrac{5}{2}, -\tfrac{5}{2} \big).$$

We call such a strategy a jointly randomized strategy. It involves a random experiment which the two players perform together. Note that the non-cooperative mixed strategy $(\tfrac{1}{2}, \tfrac{1}{2})$ is the outcome of two random experiments which the players perform independently of each other.

[Figure 4.1: Biloss region for "Battle of the Sexes", non-cooperative setup.]

The (expected) biloss of this strategy is

$$L_1\big(\tfrac{1}{2}, \tfrac{1}{2}\big) = -\tfrac{5}{4}, \qquad L_2\big(\tfrac{1}{2}, \tfrac{1}{2}\big) = -\tfrac{5}{4}.$$
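These values are easy to verify in code (my own check of the formulas above, using exact rational arithmetic):

```python
from fractions import Fraction as F

def L1(x, y):   # expected loss of the man under the mixed strategy (x, y)
    return -(5*x*y + 4 - 4*x - 4*y)

def L2(x, y):   # expected loss of the woman
    return -(5*x*y + 1 - x - y)

# (1,1) and (0,0) are NCEs: no unilateral deviation lowers a player's loss.
grid = [F(k, 10) for k in range(11)]
assert all(L1(1, 1) <= L1(x, 1) for x in grid)
assert all(L2(1, 1) <= L2(1, y) for y in grid)
assert all(L1(0, 0) <= L1(x, 0) for x in grid)
assert all(L2(0, 0) <= L2(0, y) for y in grid)

# Independent mixing (1/2, 1/2) versus the jointly randomized coin flip:
half = F(1, 2)
print(L1(half, half), L2(half, half))    # -5/4 -5/4
coin = (half*(-1) + half*(-4), half*(-4) + half*(-1))
print(coin[0], coin[1])                  # -5/2 -5/2
```

The coin-flip biloss $(-\tfrac{5}{2}, -\tfrac{5}{2})$ strictly dominates the independent mixture $(-\tfrac{5}{4}, -\tfrac{5}{4})$, which is exactly the point of the cooperative extension.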

One can also see that the biloss of our jointly randomized strategy is not in the biloss region of the non-cooperative game (see Figure 4.1). Hence such jointly randomized strategies give something completely new, and the concept of cooperative game theory is just the extension of the original concept by these jointly randomized strategies. It is not true that the jointly randomized strategy above is in every case better than any non-cooperative strategy: in fact, if both man and woman go to the soccer match, then the man is better off than with our jointly randomized strategy, and vice versa the woman is better off if both go to the theater. However, these two cases are unlikely to happen if both insist on their preference. The jointly randomized strategy is therefore in some sense a compromise. One of the main questions in cooperative game theory is to find the best compromise. Before giving a precise mathematical formulation of cooperative two person games, let us mention that there are more jointly randomized strategies for the "Battle of the Sexes" game. In fact, for $\lambda_0, \lambda_1, \lambda_2, \lambda_3 \in [0, 1]$ such that $\lambda_0 + \lambda_1 + \lambda_2 + \lambda_3 = 1$, we have the jointly randomized strategy

$$\lambda_0 (1, 0) + \lambda_1 (1, 1) + \lambda_2 (0, 1) + \lambda_3 (0, 0).$$

The expected bilosses of these strategies are

$$\lambda_0 (0, 0) + \lambda_1 (-1, -4) + \lambda_2 (0, 0) + \lambda_3 (-4, -1) = (-\lambda_1 - 4\lambda_3, \, -4\lambda_1 - \lambda_3).$$

The possible bilosses of jointly randomized strategies are given in Figure 4.2. As one can see immediately, this set is the convex hull of the biloss region of the non-cooperative game. In the following we restrict ourselves to games with only finite strategy sets.
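The formula for the expected bilosses can be checked directly (my own verification): averaging the pure bilosses with weights $\lambda_0, \dots, \lambda_3$ reproduces $(-\lambda_1 - 4\lambda_3, -4\lambda_1 - \lambda_3)$.

```python
from fractions import Fraction as F

# Pure-strategy bilosses, indexed as in the jointly randomized strategy
# lam0*(1,0) + lam1*(1,1) + lam2*(0,1) + lam3*(0,0) of "Battle of the Sexes".
bilosses = [(0, 0), (-1, -4), (0, 0), (-4, -1)]

def expected_biloss(lams):
    u = sum(l * b[0] for l, b in zip(lams, bilosses))
    v = sum(l * b[1] for l, b in zip(lams, bilosses))
    return (u, v)

lams = [F(1, 8), F(3, 8), F(1, 8), F(3, 8)]        # weights summing to 1
assert expected_biloss(lams) == (-lams[1] - 4*lams[3], -4*lams[1] - lams[3])
print(expected_biloss(lams))
```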

Definition 4.1.1. Let $\mathcal{G}_2$ be a (non-cooperative) two person game with finite strategy sets $\mathcal{S}_1$ and $\mathcal{S}_2$, and let $L = (L_1, L_2)$ be its biloss operator. Then the corresponding cooperative game is given by the biloss operator

$$\hat{L} : \Delta_{S_1 \times S_2} \to \mathbb{R}^2, \qquad \sum_{i,j} \lambda_{ij} (s_i, \tilde{s}_j) \mapsto \sum_{i,j} \lambda_{ij} L(s_i, \tilde{s}_j),$$

where $\Delta_{S_1 \times S_2} := \{\, \sum_{i,j} \lambda_{ij} (s_i, \tilde{s}_j) : \sum_{i,j} \lambda_{ij} = 1, \ \lambda_{ij} \in [0, 1] \,\}$ is the (formal) simplex spanned by the pure strategy pairs $(s_i, \tilde{s}_j)$.

[Figure 4.2: Biloss region for "Battle of the Sexes", cooperative setup.]

The image $\mathrm{im}(\hat{L})$ of $\hat{L}$ is called the biloss region of the cooperative game. By definition of $\hat{L}$ it is clear that it is always convex and in fact is the convex hull of the biloss region of the corresponding non-cooperative game.

Remark 4.1.1. If the strategy sets are not necessarily finite but probability spaces, then one can consider jointly randomized strategies as functions $f : \mathcal{S}_1 \times \mathcal{S}_2 \to \mathbb{R}_+$ such that

$$\int_{S_1 \times S_2} f(s, \tilde{s}) \, dP_{S_1} \, dP_{S_2} = 1.$$

Definition 4.1.2. Given a two person game $\mathcal{G}_2$, let $\hat{L}$ be the biloss operator of the corresponding cooperative game. A pair of losses $(u, v) \in \mathrm{im}(\hat{L})$ is called jointly sub-dominated by a pair $(u', v') \in \mathrm{im}(\hat{L})$ if $u' \le u$, $v' \le v$ and $(u', v') \ne (u, v)$. The pair $(u, v)$ is called Pareto optimal if it is not jointly sub-dominated.

Let us recall the definition of the conservative value of a two person game:

$$u^\sharp = \min_{s_1 \in S_1} \max_{s_2 \in S_2} L_1(s_1, s_2), \qquad v^\sharp = \min_{s_2 \in S_2} \max_{s_1 \in S_1} L_2(s_1, s_2).$$

These values are the losses the players can guarantee for themselves, no matter what the other player does, by choosing the corresponding conservative strategy (see Definition 1.2.3).
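Over the pure strategies these minimax values are a two-line computation. For "Battle of the Sexes" both conservative values come out as $u^\sharp = v^\sharp = 0$ (my own computation from the biloss matrix of the previous section):

```python
# Biloss matrix of "Battle of the Sexes": entry [i][j] = (L1, L2) when the
# man plays his strategy i and the woman plays her strategy j.
L = [[(-1, -4), (0, 0)],
     [(0, 0), (-4, -1)]]

# Conservative value: best (min) over own strategies of the worst-case
# (max over opponent strategies) own loss.
u_sharp = min(max(L[i][j][0] for j in range(2)) for i in range(2))
v_sharp = min(max(L[i][j][1] for i in range(2)) for j in range(2))
print(u_sharp, v_sharp)   # 0 0
```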

Definition 4.1.3. Given a two person game $\mathcal{G}_2$, let $\hat{L}$ be the biloss operator of the corresponding cooperative game. The set

$$B := \{\, (u, v) \in \mathrm{im}(\hat{L}) : u \le u^\sharp, \ v \le v^\sharp \text{ and } (u, v) \text{ Pareto optimal} \,\}$$

is called the bargaining set (sometimes also negotiation set).

The interpretation of the bargaining set is as follows: it contains all reasonable compromises the players can agree on. In fact, no player would accept a compromise $(u, v)$ where $u > u^\sharp$ resp. $v > v^\sharp$, because the losses $u^\sharp$ resp. $v^\sharp$ are guaranteed to him. In the same way, they would not agree on a strategy pair which is jointly sub-dominated, because then by switching to the other strategy pair they can both do at least as well and one of them can do strictly better. The main question however remains: which compromise in the bargaining set is the best one? Using some assumptions which can be economically motivated, the so-called Nash bargaining solution gives an answer to this question. This is the content of the next section.

4.2 Nash's Bargaining Solution

Let us denote by $\mathrm{conv}(\mathbb{R}^2)$ the set of compact and convex subsets of $\mathbb{R}^2$, and set

$$\mathcal{A} := \{\, ((u_0, v_0), P) : P \in \mathrm{conv}(\mathbb{R}^2) \text{ and } (u_0, v_0) \in P \,\}.$$

Definition 4.2.1. A bargaining function is a function

$$\psi : \mathcal{A} \to \mathbb{R}^2$$

s.t. $\psi((u_0, v_0), P) \in P$ for all $((u_0, v_0), P) \in \mathcal{A}$.

The economic interpretation of a bargaining function is as follows: we think of $P$ as the biloss region of some cooperative two person game and of $(u_0, v_0)$ as some status quo point (the outcome when the two players do not agree on a compromise). Then $\psi((u_0, v_0), P)$ gives the compromise. As status quo point one often chooses the conservative value of the game, but other choices are also possible.

Definition 4.2.2. A bargaining function $\psi : \mathcal{A} \to \mathbb{R}^2$ is called a Nash bargaining function if it satisfies the following conditions, where we denote $(u^*, v^*) := \psi((u_0, v_0), P)$:

1. $u^* \le u_0$, $v^* \le v_0$, i.e. the compromise is at least as good as the status quo;
2. $(u^*, v^*)$ is Pareto optimal, i.e. there does not exist $(u, v) \in P \setminus \{(u^*, v^*)\}$ with $u \le u^*$, $v \le v^*$;
3. If $P_1 \subset P$ and $(u^*, v^*) \in P_1$, then $(u^*, v^*) = \psi((u_0, v_0), P_1)$ (independence of irrelevant alternatives);
4. Let $P'$ be the image of $P$ under the affine linear transformation $u \mapsto au + b$, $v \mapsto cv + d$; then $\psi((au_0 + b, cv_0 + d), P') = (au^* + b, cv^* + d)$ for all $a, c > 0$ (invariance under affine linear transformations, i.e. invariance under rescaling utility);
5. If $P$ is symmetric, i.e. $(u, v) \in P \Leftrightarrow (v, u) \in P$, and $u_0 = v_0$, then $u^* = v^*$.

We are going to prove that there is precisely one Nash bargaining function $\psi : \mathcal{A} \to \mathbb{R}^2$. For this we need the following lemma.

Lemma 4.2.1. Let $((u_0, v_0), P) \in \mathcal{A}$. We define a function

$$f_{(u_0, v_0)} : P \cap \{ u \le u_0, v \le v_0 \} \to \mathbb{R}_+, \qquad (u, v) \mapsto (u_0 - u)(v_0 - v).$$

If there exists a pair $(u, v) \in P$ s.t. $u < u_0$ and $v < v_0$, then $f_{(u_0,v_0)}$ takes its maximum at a unique point

$$(u^*, v^*) := \operatorname{argmax} f_{(u_0,v_0)}(u, v) \tag{4.1}$$

and $u^* < u_0$, $v^* < v_0$.

Proof. As $f_{(u_0,v_0)}$ is defined on a compact set and is clearly continuous, it takes its global maximum at at least one point $(u^*, v^*)$. Let $M = f_{(u_0,v_0)}(u^*, v^*)$; then by our assumption and the definition of $f_{(u_0,v_0)}$ we have $M > 0$ and $u^* < u_0$, $v^* < v_0$. Assume now $f_{(u_0,v_0)}(\tilde u, \tilde v) = M$ with $(\tilde u, \tilde v) \in P \cap \{u \le u_0,\ v \le v_0\}$. We have to show $(\tilde u, \tilde v) = (u^*, v^*)$. Assume this were not the case. Since by definition of $f_{(u_0,v_0)}$ and $M$ we have
\[
(u_0 - u^*)(v_0 - v^*) = M = (u_0 - \tilde u)(v_0 - \tilde v),
\]
it follows that $u^* = \tilde u \Leftrightarrow v^* = \tilde v$, so we may assume $u^* \neq \tilde u$ and $v^* \neq \tilde v$. More precisely, there are exactly two cases:
\[
(u^* < \tilde u \text{ and } v^* > \tilde v) \qquad \text{or} \qquad (u^* > \tilde u \text{ and } v^* < \tilde v).
\]
Since $P$ is convex, it contains the point
\[
(u', v') := \tfrac{1}{2}(u^*, v^*) + \tfrac{1}{2}(\tilde u, \tilde v).
\]
Clearly $u' \le u_0$, $v' \le v_0$. An easy computation shows that
\[
M \ge f_{(u_0,v_0)}(u', v') = M + \underbrace{\tfrac{1}{4}(u^* - \tilde u)(\tilde v - v^*)}_{>0 \text{ in both cases}} > M,
\]
which is a contradiction. Therefore we must have $(u^*, v^*) = (\tilde u, \tilde v)$ and we are done.
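The lemma can be illustrated numerically. The following sketch (plain Python; the triangular region $P$ and the status quo point are hypothetical choices for illustration, not taken from the text) grid-searches the Nash product $f_{(u_0,v_0)}(u,v) = (u_0-u)(v_0-v)$ and finds a single maximizer on the Pareto frontier:

```python
# Grid-search sketch of Lemma 4.2.1 on a hypothetical compact convex region P:
# the triangle with vertices (0, 0), (-8, -2), (-2, -8), status quo (u0, v0) = (0, 0).
u0, v0 = 0.0, 0.0

def in_P(u, v):
    # (u, v) = s*(-8, -2) + t*(-2, -8) with s, t >= 0 and s + t <= 1
    s = (2 * v - 8 * u) / 60.0
    t = (2 * u - 8 * v) / 60.0
    return s >= -1e-9 and t >= -1e-9 and s + t <= 1 + 1e-9

best, argbest = -1.0, None
steps = 400
for i in range(steps + 1):
    for j in range(steps + 1):
        u, v = -8 + 8 * i / steps, -8 + 8 * j / steps
        if u <= u0 and v <= v0 and in_P(u, v):
            val = (u0 - u) * (v0 - v)
            if val > best:
                best, argbest = val, (u, v)
print(argbest, best)  # unique maximizer (-5.0, -5.0) with Nash product 25.0
```

Since the region and the status quo point are symmetric, the maximizer ends up on the diagonal, as property 5.) of a Nash bargaining function predicts.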

Theorem 4.2.1. There exists exactly one Nash bargaining function $\psi : \mathcal{B} \to \mathbb{R}^2$.

Proof. We define a function $\psi : \mathcal{B} \to \mathbb{R}^2$ as follows. Let $((u_0, v_0), P) \in \mathcal{B}$. If there exists $(u,v) \in P$ such that $u < u_0$, $v < v_0$, then using Lemma 4.2.1 we define
\[
(u^*, v^*) := \psi((u_0, v_0), P) := \operatorname{argmax} f_{(u_0,v_0)}(u,v).
\]
If there are no points $(u,v) \in P$ s.t. $u < u_0$, $v < v_0$, then the convexity of $P$ implies that exactly two cases can occur:

1. $P \subset \{(u, v_0) : u \le u_0\}$

2. $P \subset \{(u_0, v) : v \le v_0\}$.

In case 1.) we define $\psi((u_0, v_0), P) := (u^*, v_0)$, where $u^*$ is the minimal value such that $(u^*, v_0) \in P$. Similarly, in case 2.) we define $\psi((u_0, v_0), P) := (u_0, v^*)$, where $v^*$ is the minimal value such that $(u_0, v^*) \in P$. We will now show that the bargaining function $\psi$ defined above satisfies the five conditions of Definition 4.2.2. Let us first consider the case where there exists $(u,v) \in P$ such that $u < u_0$, $v < v_0$. Condition 1.) is trivially satisfied. To show that 2.) is satisfied, assume that $u \le u^*$, $v \le v^*$ and $(u,v) \in P \setminus \{(u^*, v^*)\}$. Then
\[
f_{(u_0,v_0)}(u,v) = (u_0 - u)(v_0 - v) > (u_0 - u^*)(v_0 - v^*) = M,
\]
which is a contradiction. Therefore $(u^*, v^*)$ is Pareto optimal and 2.) is satisfied. To show that condition 3.) holds, let us assume that $P_1 \subset P$ and $(u^*, v^*) = \psi((u_0, v_0), P) \in P_1$. Then $(u^*, v^*)$ maximizes the function $f_{(u_0,v_0)}$ over $P \cap \{(u,v) : u \le u_0, v \le v_0\}$ and therefore also over the smaller set $P_1 \cap \{(u,v) : u \le u_0, v \le v_0\}$. By definition of $\psi$ we have $(u^*, v^*) = \psi((u_0, v_0), P_1)$. Now consider the affine transformation
\[
u \mapsto au + b, \qquad v \mapsto cv + d,
\]
where $a, c > 0$, and let $P'$ be the image of $P$ under this transformation. Since $(u^*, v^*)$ maximizes $(u_0 - u)(v_0 - v)$ over $P \cap \{(u,v) : u \le u_0, v \le v_0\}$, it also maximizes
\[
ac(u_0 - u)(v_0 - v) = ((au_0 + b) - (au + b))((cv_0 + d) - (cv + d)).
\]
But this is equivalent to saying that $(au^* + b, cv^* + d)$ maximizes $(u_0' - u)(v_0' - v)$ over $P'$, where $u_0' = au_0 + b$ and $v_0' = cv_0 + d$. Hence by definition of $\psi$ we have $\psi((u_0', v_0'), P') = (au^* + b, cv^* + d)$, so 4.) holds. To show that 5.) is satisfied, assume that $P$ is symmetric and $u_0 = v_0$, but $u^* \neq v^*$. Then $(v^*, u^*) \in P$ and by convexity of $P$ also
\[
(u', v') := \tfrac{1}{2}(u^*, v^*) + \tfrac{1}{2}(v^*, u^*) \in P,
\]
and an easy computation shows that
\[
f_{(u_0,v_0)}(u', v') = \frac{u^{*2} + 2u^* v^* + v^{*2}}{4} - (u^* + v^*)u_0 + u_0^2,
\]
where we made use of $u_0 = v_0$. Since we have $(u^* - v^*)^2 > 0$, we know $u^{*2} + v^{*2} > 2u^* v^*$ and therefore
\[
f_{(u_0,v_0)}(u', v') > u^* v^* - (u^* + v^*)u_0 + u_0^2 = (u_0 - u^*)(v_0 - v^*) = f_{(u_0,v_0)}(u^*, v^*),
\]
which is a contradiction; therefore we must have $u^* = v^*$. We have thus shown that the function defined in the first part of the proof is a Nash bargaining function. It remains to show that whenever $\tilde\psi$ is another Nash bargaining function, then $\psi = \tilde\psi$. Let us therefore assume that $\tilde\psi$ is another bargaining function which satisfies conditions 1.) to 5.) of Definition 4.2.2. Denote $(\tilde u, \tilde v) = \tilde\psi((u_0, v_0), P)$. Let us use the affine transformation
\[
u' := \frac{u - u_0}{u_0 - u^*}, \qquad v' := \frac{v - v_0}{v_0 - v^*},
\]
and let $P'$ denote the image of $P$ under this transformation. We have
\[
(u_0, v_0) \mapsto (0, 0), \qquad (u^*, v^*) \mapsto (-1, -1), \qquad (\tilde u, \tilde v) \mapsto (\bar u, \bar v),
\]
where $(\bar u, \bar v)$ is defined by the relation above. Using property 4.) of $\psi$ we see that $(-1,-1)$ maximizes $f_{(0,0)}(u,v) = uv$ over $P' \cap \{(u,v) \mid u \le 0, v \le 0\}$. Clearly $f_{(0,0)}(-1,-1) = 1$. Let $(u,v) \in P'$ s.t. $u \le 0$, $v \le 0$, and assume that $u + v < -2$. Then there exist $\epsilon > 0$ and $x \in \mathbb{R}$ s.t.
\[
u = -1 - x, \qquad v = -1 + x - \epsilon.
\]
Then by joining $(-1,-1)$ with $(u,v)$ by a line we see that for all $\lambda \in [0,1]$ we have
\[
(u', v') := (1 - \lambda)(-1,-1) + \lambda(-1 - x, -1 + x - \epsilon) \in P'.
\]
Evaluating $f_{(0,0)}$ at this point gives
\[
f_{(0,0)}(u', v') = (1 + \lambda x)(1 + \lambda(\epsilon - x)) = 1 + \lambda\epsilon + \lambda^2 x(\epsilon - x). \tag{4.2}
\]
Consider the last expression as a function of $\lambda$. Evaluation at $\lambda = 0$ gives the value $1$. The function is clearly differentiable with respect to $\lambda$ and the derivative at $\lambda = 0$ is $\epsilon > 0$. Therefore there exists a $\lambda > 0$ such that the right hand side of equation (4.2) is strictly greater than one, which contradicts the maximality of $(-1,-1)$. Therefore $0 \ge u + v \ge -2$. Now let
\[
\tilde P = \{(u,v) \mid (u,v) \in P' \text{ or } (v,u) \in P'\}
\]
be the symmetric closure of $P'$. Clearly $\tilde P$ is compact, convex and symmetric, and $P' \subset \tilde P$. We still have that $u + v \ge -2$ for all pairs $(u,v) \in \tilde P$ with $u \le 0$, $v \le 0$. This means that whenever the point $(u,u)$ lies in $\tilde P \cap \{(u,v) \mid u \le 0, v \le 0\}$, we have $0 \ge u \ge -1$. Now let
\[
(\hat u, \hat v) := \tilde\psi((0,0), \tilde P);
\]
then by property 5.) of $\tilde\psi$ we have $\hat u = \hat v$. Since $(-1,-1) \in P' \subset \tilde P$, using the Pareto optimality of $(\hat u, \hat v)$ we must have $(\hat u, \hat v) = (-1,-1)$. But then $(\hat u, \hat v) \in P'$, and by using property 3.) of $\tilde\psi$ we must have $(\bar u, \bar v) = (-1,-1)$. Computing the inverse under the affine transformation shows $(\tilde u, \tilde v) = (u^*, v^*)$. Therefore $\psi = \tilde\psi$ and we are finished.

The consequence of Theorem 4.2.1 is that if the two players believe in the axioms of a Nash bargaining function, there is a unique way to settle the conflict once the status quo point is fixed. It follows directly from properties 1.) and 2.) that in the context of a cooperative game the value $(u^*, v^*) = \psi((u^\sharp, v^\sharp), \operatorname{Im}(\hat L))$ lies in the bargaining set.

Example 4.2.1. Consider a two person game $\Gamma_2$ where the biloss operator is given by the following matrix:
\[
L := \begin{pmatrix} (-1,-2) & (-8,-3) \\ (-4,-4) & (-2,-1) \end{pmatrix}
\]
A straightforward computation shows that the conservative values of this game are given by $u^\sharp = -3\tfrac{1}{3}$ and $v^\sharp = -2\tfrac{1}{2}$. The biloss region of the corresponding cooperative game is given in the following graphic. The bargaining set is
\[
B := \{(u,v) \mid v = -\tfrac{1}{4}u - 5,\ -8 \le u \le -4\}.
\]
Therefore, to compute the value $\psi((u^\sharp, v^\sharp), \operatorname{Im}(\hat L))$ we have to maximize the function
\[
\big(-3\tfrac{1}{3} - u\big)\big(-2\tfrac{1}{2} + \tfrac{1}{4}u + 5\big)
\]
over $-8 \le u \le -4$. This is easy calculus and gives the values
\[
u^* = -6\tfrac{2}{3}, \qquad v^* = -3\tfrac{1}{3},
\]
and therefore $\psi((u^\sharp, v^\sharp), \operatorname{Im}(\hat L)) = (-6\tfrac{2}{3}, -3\tfrac{1}{3})$.
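The small maximization of Example 4.2.1 can be verified numerically; the following sketch (plain Python) maximizes the Nash product along the bargaining line on a fine grid:

```python
# Check of Example 4.2.1: maximize the Nash product along the bargaining line
# v = -u/4 - 5 for -8 <= u <= -4, with status quo (u#, v#) = (-10/3, -5/2).
u_sharp, v_sharp = -10.0 / 3.0, -2.5

def nash_product(u):
    v = -u / 4.0 - 5.0
    return (u_sharp - u) * (v_sharp - v)

n = 12000
grid = [-8.0 + 4.0 * k / n for k in range(n + 1)]
u_star = max(grid, key=nash_product)
v_star = -u_star / 4.0 - 5.0
print(round(u_star, 4), round(v_star, 4))  # -6.6667 -3.3333, i.e. (-6 2/3, -3 1/3)
```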

The argument for the choice of the status quo point as the conservative value is reasonable but not completely binding, as the following example shows:

Example 4.2.2. Consider the two person game $\Gamma_2$ where the biloss operator is given by the following matrix:
\[
L := \begin{pmatrix} (-1,-4) & (1,4) \\ (4,1) & (-4,-1) \end{pmatrix}
\]
The conservative value is $(u^\sharp, v^\sharp) = (0,0)$; this follows after a straightforward computation. The biloss region of the corresponding cooperative game is given by the following graphic. Since we have $u_0 = u^\sharp = v^\sharp = v_0$ and the biloss region is obviously symmetric, we must have $u^* = v^*$. The only point in the bargaining set which satisfies this condition is $(u^*, v^*) = (-2\tfrac{1}{2}, -2\tfrac{1}{2})$. The question, however, is: is this compromise as good for player 1 as it is for player 2? Assume that the players are bankrupt if their losses exceed the value 3. We claim that the second player is in a stronger position than the first one and therefore deserves a bigger piece of the pie. If player 2 decides to play his first strategy, then if player 1 chooses his first strategy he wins 1, while if he chooses his second strategy he is bankrupt. So he is forced to play his first strategy although he knows that he does not make the optimal profit. Player 1 has no comparable strategy to offer. If player 1 chooses strategy 1, it could happen that player 2 goes bankrupt by choosing the second strategy, but clearly player 2 would not do this and would instead, by choosing strategy 1, be perfectly happy with his maximum profit. The compromise $(-2\tfrac{1}{2}, -2\tfrac{1}{2})$, though, gives both players the same and is in this sense not fair.

Such a strategy as player 2's first strategy is called a threat. We will develop a method which applies to the situation where the players threaten their opponents. This method, known as the threat bargaining solution (sometimes also called the Nash bargaining solution), gives a solution to this problem.

Definition 4.2.3. Let $\Gamma_2$ be a non-cooperative two person game with finite strategy sets $S_1$ and $S_2$. The corresponding threat game $T\Gamma_2$ is the non-cooperative two person game with mixed strategies $\Delta_{S_1}$ and $\Delta_{S_2}$ and biloss operator given as follows:
\[
TL(s, \tilde s) = \psi_{Nash}\big(L(s, \tilde s), \operatorname{Im}(\hat L)\big),
\]
where $\psi_{Nash}$ denotes the Nash bargaining function.

Basically the process is as follows: start with a non-cooperative game, consider the corresponding cooperative game, look for the compromises given by the Nash bargaining function in dependence on the status quo points, and you get back a non-cooperative two person game. However, to apply the methods from chapters 2 and 3, we must know something about the continuity as well as the convexity properties of this biloss operator.

Lemma 4.2.2. Let $\Gamma_2$ be a non-cooperative two person game with finite strategy sets $S_1$ and $S_2$ and biloss operator $L$. Then the function
\[
\Delta_{S_1} \times \Delta_{S_2} \to \mathbb{R}^2, \qquad (s, \tilde s) \mapsto \psi_{Nash}\big(L(s, \tilde s), \operatorname{Im}(\hat L)\big)
\]
is continuous. The biloss operator of the corresponding threat game $T\Gamma_2$ is therefore continuous, and it also satisfies the convexity assumption in Theorem 3.1.1.

Proof. The continuity of $\psi_{Nash}$ follows straightforwardly (if somewhat technically) from its construction. Since the biloss operator in the threat game is basically $\psi_{Nash}$, the continuity of the biloss operator is therefore clear. To show that it satisfies the convexity assumptions of Theorem 3.1.1 is a bit harder and we don't do it here.

Theorem 4.2.2. Let $\Gamma_2$ be a non-cooperative two person game with finite strategy sets and let $T\Gamma_2$ be the corresponding threat game. Then $T\Gamma_2$ has at least one non-cooperative equilibrium. Furthermore, the bilosses under $TL$ of all non-cooperative equilibria are the same.

Proof. The existence of non-cooperative equilibria follows from Theorem 3.1.1 and the previous lemma. That the bilosses under $TL$ of all non-cooperative equilibria are the same will follow from the following discussion. The threat game is a special case of a so-called purely competitive game.

Definition 4.2.4. A two person game $\Gamma_2$ is called a purely competitive game (sometimes also a pure conflict or antagonistic game) if all outcomes are Pareto optimal, i.e. if $(u_1, v_1)$ and $(u_2, v_2)$ are two possible outcomes and $u_2 \le u_1$, then $v_2 \ge v_1$.

Remark 4.2.1. The threat game is a purely competitive game. This is clear since the values of the Nash bargaining function are Pareto optimal.

The uniqueness of the Nash bargaining solution now follows from the following proposition.

Proposition 4.2.1. Let $\Gamma_2$ be a purely competitive two person game. Then the bilosses of all non-cooperative equilibria are the same.

Proof. Let $(s, \tilde s)$ and $(r, \tilde r)$ be non-cooperative equilibria. We have to prove
\[
L_1(s, \tilde s) = L_1(r, \tilde r), \qquad L_2(s, \tilde s) = L_2(r, \tilde r).
\]
W.l.o.g. we assume $L_1(s, \tilde s) \ge L_1(r, \tilde r)$. Since $(r, \tilde r)$ is an NCE, we have by definition of an NCE that
\[
L_2(r, \tilde s) \ge L_2(r, \tilde r).
\]
Since $\Gamma_2$ is purely competitive, this implies that
\[
L_1(r, \tilde s) \le L_1(r, \tilde r).
\]
Since, however, $(s, \tilde s)$ is also an NCE, we have that
\[
L_1(r, \tilde s) \ge L_1(s, \tilde s),
\]
and therefore, by combining the two previous inequalities, we get $L_1(s, \tilde s) \le L_1(r, \tilde r)$, so that in fact we have $L_1(s, \tilde s) = L_1(r, \tilde r)$. The same argument shows that $L_2(s, \tilde s) = L_2(r, \tilde r)$.

Remark 4.2.2. The proof above shows more. In fact it shows that also $L_1(r, \tilde s) = L_1(s, \tilde s)$ and $L_2(r, \tilde s) = L_2(s, \tilde s)$, and therefore that $(r, \tilde s)$ and similarly $(s, \tilde r)$ are non-cooperative equilibria. In words, this means that all non-cooperative equilibria in a purely competitive game are interchangeable.

Definition 4.2.5. Let $\Gamma_2$ be a non-cooperative two person game with finite strategy sets. Then the Nash bargaining solution is the unique biloss under $TL$ of any non-cooperative equilibrium of the corresponding threat game $T\Gamma_2$.

Strategies $s, \tilde s$ such that $(s, \tilde s)$ is an NCE of the threat game are called optimal threats. Using the biloss under $L$ of any pair of optimal threats as the status quo point for the Nash bargaining function delivers the Nash bargaining solution as a compromise. The Nash bargaining solution thus gives a compromise for any two person game with finite strategy sets which does not depend on status quo points. The difficulty, however, is still to find the NCE of the threat game. Let us first illustrate the method with an easy example. Assume the bargaining set is given as a line
\[
B := \{(u,v) \mid au + v = b,\ c_1 \le u \le c_2\}.
\]
Suppose player 1 threatens strategy $s$ (i.e. chooses $s$ when playing the threat game) and player 2 threatens strategy $\tilde s$. Then player 1's loss in the threat game is the $u^*$ that maximizes
\[
(L_1(s, \tilde s) - u)(L_2(s, \tilde s) - v) = (L_1(s, \tilde s) - u)(L_2(s, \tilde s) - b + au).
\]
If this $u^*$ happens to lie in the interior of $[c_1, c_2]$, then it can be computed by setting the derivative with respect to $u$ of the expression above equal to zero. This gives
\[
0 = \frac{d}{du}\big((L_1(s, \tilde s) - u)(L_2(s, \tilde s) - b + au)\big) = -(L_2(s, \tilde s) - b + au) + a(L_1(s, \tilde s) - u),
\]
and hence $u^* = \frac{1}{2a}\big(b + aL_1(s, \tilde s) - L_2(s, \tilde s)\big)$. Let us denote by
\[
L^{ij} = L(s_i, \tilde s_j)
\]
the biloss operator $L$ in (bi-)matrix form, where the $s_i$ resp. $\tilde s_j$ denote the pure strategies of the original non-cooperative game. Then we have for $s = \sum_i x_i s_i$ and $\tilde s = \sum_j y_j \tilde s_j$ that
\[
u^* = \frac{1}{2a}\Big(b + \sum_{i,j} x_i \big(aL_1^{ij} - L_2^{ij}\big) y_j\Big).
\]
So as to make $u^*$ as small as possible (recall that $u^*$ is the loss player 1 suffers when the corresponding compromise is established), player 1 has to choose $x = (x_1, \ldots, x_n) \in \Delta_n$ so as to minimize
\[
\sum_{i,j} x_i \big(aL_1^{ij} - L_2^{ij}\big) y_j \tag{4.3}
\]
against any $y$. Similarly, substituting $au + v = b$, we get
\[
v^* = \frac{1}{2}\Big(b - \sum_{i,j} x_i \big(aL_1^{ij} - L_2^{ij}\big) y_j\Big),
\]
and to minimize $v^*$ player 2 chooses $y = (y_1, \ldots, y_m) \in \Delta_m$ so as to maximize expression (4.3). Therefore the NCE of the threat game, i.e. the Nash bargaining solution, corresponds to the NCE of the zero sum game
\[
\tilde L = \big(aL_1^{ij} - L_2^{ij}\big) \cong \big(aL_1^{ij} - L_2^{ij},\ -(aL_1^{ij} - L_2^{ij})\big). \tag{4.4}
\]
If $w^*$ denotes the loss of player one when the NCE strategies of the game $\tilde L$ are implemented, then
\[
u^* = \frac{1}{2a}(b + w^*), \qquad v^* = \frac{1}{2}(b - w^*). \tag{4.5}
\]
This NCE can be computed with numerical methods and some linear algebra. In general the biloss region is a polygon and the bargaining set is piecewise linear. One can then apply the method described above to each line segment in the bargaining set. We have therefore proven the following proposition, which helps us to identify the Nash bargaining solution.

Proposition 4.2.2. Let $\Gamma_2$ be a two person game with finitely many pure strategies and bargaining set $B$. If the Nash bargaining solution lies in the interior of one of the line segments of the bargaining set, and this segment is given by the equation $au + v = b$ with $c_1 \le u \le c_2$, then the Nash bargaining solution $(u^*, v^*)$ is given by (4.5).

If one knows on which line segment the Nash bargaining solution lies, then one obtains the Nash bargaining solution with this method. Moreover, one also obtains the optimal threats which implement the Nash bargaining solution. In the following we present a graphical method to decide on which line segment the Nash bargaining solution can lie. In general, though, it does not always give a clear answer to the problem.

Lemma 4.2.3. Let $\Gamma_2$ be a non-cooperative game such that the corresponding bargaining set is a line. Let $(u^*, v^*) = \psi_{Nash}\big((u_0, v_0), \operatorname{Im}(\hat L)\big)$ be the compromise given by the Nash bargaining function with status quo point $(u_0, v_0) \in \operatorname{Im}(\hat L)$. Assume that $(u^*, v^*)$ is not one of the endpoints of the bargaining set. Then the slope of the line joining $(u_0, v_0)$ and $(u^*, v^*)$ is the negative of the slope of the bargaining set.

Proof. Suppose $(u^*, v^*)$ lies on the line $au + v = b$ with $c_1 < u < c_2$. Then it has to maximize the function $(u_0 - u)(v_0 - v)$ over this set, i.e. $u^*$ must maximize $(u_0 - u)(v_0 + au - b)$, and therefore
\[
0 = \frac{d}{du}\Big|_{u = u^*} (u_0 - u)(v_0 + au - b) = -(v_0 + au^* - b) + a(u_0 - u^*) = -v_0 - 2au^* + b + au_0.
\]
Therefore
\[
u^* = \frac{b - v_0 + au_0}{2a}, \qquad v^* = \frac{b + v_0 - au_0}{2}.
\]
The slope of the line from $(u_0, v_0)$ to $(u^*, v^*)$ is
\[
\frac{v^* - v_0}{u^* - u_0} = \frac{\frac{b + v_0 - au_0}{2} - v_0}{\frac{b - v_0 + au_0}{2a} - u_0} = \frac{\frac{b - v_0 - au_0}{2}}{\frac{b - v_0 - au_0}{2a}} = a.
\]
The slope of the bargaining set is clearly $-a$, since $au + v = b \Leftrightarrow v = -au + b$.
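The closed-form expressions for $u^*$ and $v^*$ from the proof can be checked numerically. The sketch below uses the line $u + v = -5$ of Example 4.2.2 (so $a = 1$, $b = -5$) with status quo point $(0,0)$; it reproduces the symmetric compromise found there and confirms that the joining line has slope $a$:

```python
# Numerical check of Lemma 4.2.3 for the line a*u + v = b with a = 1, b = -5
# (the bargaining line of Example 4.2.2) and status quo (u0, v0) = (0, 0).
a, b = 1.0, -5.0
u0, v0 = 0.0, 0.0
u_star = (b - v0 + a * u0) / (2 * a)
v_star = (b + v0 - a * u0) / 2.0
slope_joining = (v_star - v0) / (u_star - u0)
print(u_star, v_star, slope_joining)  # -2.5 -2.5 1.0; the bargaining line itself has slope -a
```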

Example 4.2.3.

Coming back to the point where we actually want to determine the Nash bargaining solution and the optimal threats when the biloss region is a polygon: we can now try to solve all the non-cooperative games (4.4) (one for each line segment in the bargaining set) and then check which of the computed solutions satisfy the condition in Lemma 4.2.3.

Example 4.2.4. Let us now reconsider Example 4.2.2, which looks symmetric at first glance, though a closer inspection reveals that player 2 is in a stronger position than player 1. The biloss operator is
\[
L := \begin{pmatrix} (-1,-4) & (1,4) \\ (4,1) & (-4,-1) \end{pmatrix}
\]
and the biloss region is drawn in Figure x.x., where the bargaining set can be identified with
\[
B := \{(u,v) \mid u + v = -5,\ -4 \le u \le -1\},
\]
i.e. $a = 1$ and $b = -5$. Since the bargaining set is obviously one line, we can apply the method proposed before and see that the optimal threats are the equilibria of the non-cooperative game with biloss operator given by
\[
\tilde L := \begin{pmatrix} 3 & -3 \\ 3 & -3 \end{pmatrix} \cong \begin{pmatrix} (3,-3) & (-3,3) \\ (3,-3) & (-3,3) \end{pmatrix}.
\]
Since the entry $3$ at position $(1,1)$ of the matrix above is the biggest in its row as well as the smallest in its column, all strategies of the form
\[
(\lambda s_1 + (1 - \lambda) s_2,\ \tilde s_1)
\]
are NCEs, and the value $w^*$ corresponds to the biloss of any of these. Therefore $w^* = 3$, and using (4.5) we have
\[
(u^*, v^*) = (-1, -4).
\]
Can we now say that this is the Nash bargaining solution? It is, but it does not follow directly from the argument above. As mentioned, the argument above assumes that $(u^*, v^*)$ is in the interior of $B$, but $(-1,-4)$ is not. What we can say, though, is that in no case can the Nash bargaining solution lie in the interior of $B$, and therefore it must be either $(-1,-4)$ or $(-4,-1)$. The second one is unlikely, because we already saw that the second player is in a stronger position within this game. For a precise argument, assume that $(-4,-1)$ were the Nash bargaining solution. The threat strategies leading to this value can be identified by using Lemma 4.2.3 as $(\lambda s_1 + (1 - \lambda) s_2,\ \tilde s_2)$. It is easy to see that these strategies are not NCE strategies of the threat game. Therefore the Nash bargaining solution is $(-1,-4)$, and we see that in fact it gives more to the second player than to the first.
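The reduction of Example 4.2.4 can be written out in a few lines of code. The sketch below assumes, as in the example, that the bargaining line is $au + v = b$ with $a = 1$, $b = -5$, and uses the saddle point of the auxiliary zero sum game (4.4) together with formula (4.5):

```python
# Sketch of the reduction (4.4)-(4.5) for Example 4.2.4.
L1 = [[-1, 1], [4, -4]]   # player 1's losses (rows: player 1, columns: player 2)
L2 = [[-4, 4], [1, -1]]   # player 2's losses
a, b = 1.0, -5.0          # bargaining line a*u + v = b

# auxiliary zero sum game (4.4): entries a*L1 - L2
Lt = [[a * L1[i][j] - L2[i][j] for j in range(2)] for i in range(2)]
print(Lt)  # [[3.0, -3.0], [3.0, -3.0]]

# entry (0, 0) is a saddle point (largest in its row, smallest in its column),
# so w* = 3 and (4.5) yields the Nash bargaining solution:
w = Lt[0][0]
u_star = (b + w) / (2 * a)
v_star = (b - w) / 2.0
print(u_star, v_star)  # -1.0 -4.0
```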


4.3 N-person Cooperative Games

Let us briefly reconsider the definition of an N-person game from chapter 3. An N-person game $\Gamma_n$ consists of a set $N = \{1, 2, \ldots, n\}$ of players and

1. Topological spaces $S_1, \ldots, S_n$, the so-called strategy sets of players $1$ to $n$

2. A subset $S(N) \subset S_1 \times \cdots \times S_n$, the so-called allowed or feasible multi-strategies

3. A (multi-)loss operator $L = (L_1, \ldots, L_n) : S_1 \times \cdots \times S_n \to \mathbb{R}^n$

In this section we assume that $S(N) = S_1 \times \cdots \times S_n$ and that $|S_i| < \infty$. We will think of the strategies as pure strategies and use mixed strategies, which can then be identified with simplices in the same way as in chapter 2.

Definition 4.3.1. A coalition is a subset $S \subset N$ which cooperates in the game. If $S = \{i_1, \ldots, i_k\}$, then by cooperating it can use jointly randomized strategies from the set
\[
\Delta_S := \Delta_{S_{i_1} \times \cdots \times S_{i_k}},
\]
and by employing the strategy $\tilde x \in \Delta_S$ against the strategies of its opponents it receives a joint loss of
\[
\sum_{i \in S} L_i(\tilde x, y).
\]

Writing $L_i(\tilde x, y)$ we mean the value of the multi-linearly extended version of $L_i$ where the components are in the right order, i.e. player $i$'s strategies stand at position $i$. We use the following notation:
\[
X_S := S_{i_1} \times \cdots \times S_{i_k}, \qquad Y_{N \setminus S} := S_{j_1} \times \cdots \times S_{j_{n-k}},
\]
where $N \setminus S = \{j_1, \ldots, j_{n-k}\}$. The worst that can happen to the coalition $S$ is that its opponents also build a coalition $N \setminus S$. The minimal loss the coalition $S$ can then guarantee for itself, i.e. the conservative value for the coalition $S$, is given by
\[
\tilde\nu(S) = \min_{\tilde x \in \Delta_S}\ \max_{\tilde y \in \Delta_{N \setminus S}}\ \sum_{i \in S} L_i(\tilde x, \tilde y).
\]

The following lemma says that in order to compute the conservative value one only has to use pure strategies.

Lemma 4.3.1. In the situation above one has
\[
\tilde\nu(S) = \min_{x \in X_S}\ \max_{y \in Y_{N \setminus S}}\ \sum_{i \in S} L_i(x, y).
\]

Proof. Since properly denoting the elements of the simplices $\Delta_S$ and $\Delta_{N \setminus S}$ requires a lot of indices, we do not give the proof here. Once the complex notation is managed, the proof is in fact straightforward; it only uses that the multi-loss operator used here is the multi-linear extension of the multi-loss operator for finitely many strategies.

Definition 4.3.2. Let $\Gamma_n$ be an N-person game. We define the characteristic function $\nu$ of $\Gamma_n$ via
\[
\nu : \mathcal{P}(N) \to [0, \infty), \qquad S \mapsto -\tilde\nu(S),
\]
with the convention that $\nu(\emptyset) = 0$.

Proposition 4.3.1. Let $\nu$ be the characteristic function of the N-person game $\Gamma_n$. If $S, T \subset N$ with $S \cap T = \emptyset$, then
\[
\nu(S \cup T) \ge \nu(S) + \nu(T).
\]

Proof. Since $\nu(S) = -\tilde\nu(S)$, we may as well prove $\tilde\nu(S \cup T) \le \tilde\nu(S) + \tilde\nu(T)$. Using Lemma 4.3.1 we have
\[
\tilde\nu(S \cup T) = \min_{x \in X_{S \cup T}}\ \max_{y \in Y_{N \setminus (S \cup T)}}\ \sum_{i \in S \cup T} L_i(x, y)
\le \min_{\alpha \in X_S}\ \min_{\beta \in X_T}\ \max_{y \in Y_{N \setminus (S \cup T)}}\ \sum_{i \in S \cup T} L_i(\alpha, \beta, y).
\]
Hence for each $\alpha \in X_S$, $\beta \in X_T$ we have
\[
\tilde\nu(S \cup T) \le \max_{y}\ \sum_{i \in S \cup T} L_i(\alpha, \beta, y)
\le \max_{y}\ \sum_{i \in S} L_i(\alpha, \beta, y) + \max_{y}\ \sum_{i \in T} L_i(\alpha, \beta, y),
\]
where all maxima are taken over $y \in Y_{N \setminus (S \cup T)}$. Enlarging the sets over which the maxima are taken (treating $\beta$ resp. $\alpha$ as part of the opposing coalition) gives, for all $\alpha \in X_S$ and $\beta \in X_T$,
\[
\tilde\nu(S \cup T) \le \max_{(\beta, y) \in Y_{N \setminus S}}\ \sum_{i \in S} L_i(\alpha, \beta, y) + \max_{(\alpha, y) \in Y_{N \setminus T}}\ \sum_{i \in T} L_i(\alpha, \beta, y).
\]
Taking the minimum over $\alpha$ in the first term and over $\beta$ in the second term, we obtain
\[
\tilde\nu(S \cup T) \le \min_{\alpha \in X_S}\ \max_{y \in Y_{N \setminus S}}\ \sum_{i \in S} L_i(\alpha, y) + \min_{\beta \in X_T}\ \max_{y \in Y_{N \setminus T}}\ \sum_{i \in T} L_i(\beta, y) = \tilde\nu(S) + \tilde\nu(T).
\]

Definition 4.3.3. An N-person game $\Gamma_n$ is called inessential if its characteristic function is additive, i.e. $\nu(S \cup T) = \nu(S) + \nu(T)$ for $S, T \subset N$ s.t. $S \cap T = \emptyset$. In this case one has $\nu(N) = \sum_{i=1}^n \nu(\{i\})$.

The economic interpretation of an inessential game is that in such a game it does not pay to build coalitions, since a coalition cannot guarantee more to its members than if the individual members act for themselves without cooperating.

Definition 4.3.4. An N-person game $\Gamma_n$ is called essential if it is not inessential.

Let us illustrate the concepts with the following example:

Example 4.3.1. Oil market game: Assume there are three countries, which we think of as the players in our game.

1. Country 1 has oil, and its industry can use the oil to achieve a profit of $a$ per unit.

2. Country 2 has no oil, but has an industry which can use the oil to achieve a profit of $b$ per unit.

3. Country 3 also has no oil, but has an industry which can use the oil to achieve a profit of $c$ per unit.

We assume that $a \le b \le c$. The strategies of the players are as follows. Country 2 and Country 3 each have only one strategy: buy the oil from Country 1 if it is offered to them. These are the only reasonable strategies for them, since without oil their industry does not work. Country 1, however, has three strategies:

$s_1$ = keep the oil and use it for its own industry
$s_2$ = sell the oil to Country 2
$s_3$ = sell the oil to Country 3.

Denoting by $\tilde s$, $\hat s$ the strategies of Country 2 resp. Country 3, the multi-loss operator $L$ of the game is given as follows (profits count as negative losses):
\[
L(s_1, \tilde s, \hat s) = (-a, 0, 0), \qquad
L(s_2, \tilde s, \hat s) = (0, -b, 0), \qquad
L(s_3, \tilde s, \hat s) = (0, 0, -c).
\]
Clearly the strategies $s_2$ and $s_3$ only make sense for Country 1 if it cooperates with the corresponding country and shares the losses (i.e. the gains). From this it is easy to compute the corresponding values of $\nu$; in fact we have
\[
0 = \nu(\emptyset) = \nu(\{2\}) = \nu(\{3\}) = \nu(\{2,3\}),
\]
\[
a = \nu(\{1\}), \qquad b = \nu(\{1,2\}), \qquad c = \nu(\{1,3\}) = \nu(\{1,2,3\}).
\]
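The values of $\nu$ above can be recomputed mechanically from the definition, using Lemma 4.3.1 (pure strategies suffice). The sketch below does this for hypothetical numbers $a, b, c = 1, 2, 3$; the coalition structure of the game is as in the example, the numbers are only illustrative:

```python
from itertools import product

# Characteristic function of the oil market game (Example 4.3.1) via Lemma 4.3.1,
# with hypothetical profit numbers a, b, c = 1, 2, 3.
a, b, c = 1.0, 2.0, 3.0
n_strats = (3, 1, 1)          # Country 1 has three pure strategies, Countries 2, 3 one each

def losses(profile):
    s1 = profile[0]           # only Country 1's choice matters in this game
    return [(-a, 0.0, 0.0), (0.0, -b, 0.0), (0.0, 0.0, -c)][s1]

def nu(S):
    """nu(S) = - min over the coalition's pure joint strategies of the
    max over the opponents' pure joint strategies of the coalition's total loss."""
    S = sorted(S)
    comp = sorted(set(range(3)) - set(S))
    worst_best = None
    for own in product(*(range(n_strats[i]) for i in S)):
        worst = None
        for opp in product(*(range(n_strats[i]) for i in comp)):
            prof = [0, 0, 0]
            for i, s in zip(S, own):
                prof[i] = s
            for i, s in zip(comp, opp):
                prof[i] = s
            tot = sum(losses(prof)[i] for i in S)
            worst = tot if worst is None else max(worst, tot)
        worst_best = worst if worst_best is None else min(worst_best, worst)
    return 0.0 - worst_best   # 0.0 - x instead of -x avoids printing -0.0

print([nu({1}), nu({2}), nu({1, 2})])                      # [0.0, 0.0, 0.0]
print(nu({0}), nu({0, 1}), nu({0, 2}), nu({0, 1, 2}))      # 1.0 2.0 3.0 3.0
```

With players indexed 0, 1, 2 for Countries 1, 2, 3, the output reproduces the table $\nu(\{1\}) = a$, $\nu(\{1,2\}) = b$, $\nu(\{1,3\}) = \nu(N) = c$ and $0$ for the coalitions without Country 1.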

As usual when working with numerical values of utility (as is always the case when working with loss or multi-loss operators), one wishes the concept to be independent of the scale of utility. For this purpose one introduces the notion of strategic equivalence.

Definition 4.3.5. Two characteristic functions $\nu, \nu'$ are called strategically equivalent if there exist $c > 0$ and $a_1, \ldots, a_n \in \mathbb{R}$ s.t. for all $S \in \mathcal{P}(N)$ one has
\[
\nu'(S) = c\,\nu(S) + \sum_{i \in S} a_i.
\]
Two N-person games are called strategically equivalent if their characteristic functions are strategically equivalent.

One can check that strategic equivalence is in fact an equivalence relation, i.e. reflexive, symmetric and transitive. We have the following proposition:

Proposition 4.3.2. Every essential N-person game $\Gamma_n$ is strategically equivalent to an N-person game $\hat\Gamma_n$ with characteristic function $\hat\nu$ satisfying
\[
\hat\nu(N) = 1, \qquad \hat\nu(\{i\}) = 0 \quad \text{for } i = 1, 2, \ldots, n.
\]

Proof. Let $\nu$ be the characteristic function of $\Gamma_n$. We define $\hat\nu$ as follows:
\[
\hat\nu(S) := c\,\nu(S) - c \sum_{i \in S} \nu(\{i\})
\]
with $c := \big(\nu(N) - \sum_{i=1}^n \nu(\{i\})\big)^{-1}$; since the game is essential, $c$ is well defined and positive. One can check that $\hat\nu$ satisfies the conditions stated in the proposition. Furthermore, it follows from the definition of a characteristic function that $\hat\nu$ is in fact the characteristic function of the game $\hat\Gamma_n$ which has the same strategies as $\Gamma_n$ but multi-loss operator given by $\hat L_i = c\,(L_i + \nu(\{i\}))$ for $i = 1, \ldots, n$.
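The normalization in the proof is easy to carry out in code. The following sketch applies it to the characteristic function of the oil market game with the same hypothetical numbers $a, b, c = 1, 2, 3$ as before:

```python
# Sketch of the 0-1 normalization of Proposition 4.3.2, applied to the
# oil market game's characteristic function with hypothetical a, b, c = 1, 2, 3.
def normalize(nu, n):
    singles = sum(nu[frozenset({i})] for i in range(n))
    c = 1.0 / (nu[frozenset(range(n))] - singles)   # > 0 for an essential game
    return {S: c * (v - sum(nu[frozenset({i})] for i in S)) for S, v in nu.items()}

nu = {frozenset(S): v for S, v in
      [((0,), 1.0), ((1,), 0.0), ((2,), 0.0),
       ((0, 1), 2.0), ((0, 2), 3.0), ((1, 2), 0.0), ((0, 1, 2), 3.0)]}
nu_hat = normalize(nu, 3)
print(nu_hat[frozenset({0, 1, 2})])                # 1.0
print([nu_hat[frozenset({i})] for i in range(3)])  # [0.0, 0.0, 0.0]
```

Here $c = 1/(3 - 1) = 1/2$, and e.g. $\hat\nu(\{1,3\}) = \tfrac12(c - a) = 1$, so the normalized grand coalition and the coalition of Countries 1 and 3 are equally strong, as expected.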

When the players join a coalition, they have to decide how they share their loss once the game is over. This leads to the definition of an imputation.

Definition 4.3.6. An imputation in an N-person game $\Gamma_n$ with characteristic function $\nu$ is a vector $x = (x_1, \ldots, x_n)^\top \in \mathbb{R}^n$ s.t.
\[
\sum_{i=1}^n x_i = \nu(N), \qquad x_i \ge \nu(\{i\}).
\]
We denote the set of all imputations of $\Gamma_n$, respectively of its characteristic function, by $E(\nu)$.

We interpret imputations as follows: $x_i$ is the $i$-th player's award (negative loss) after the game is played. The second condition says that whichever coalition player $i$ joins, he must do at least as well as if he acted alone. This is economically reasonable, since no one would enter a coalition if it did not pay for him. Furthermore, from an economic point of view it is clear that $\sum_{i=1}^n x_i \le \nu(N)$, because when the coalition $N$ is implemented all players work together, and the sum of the individual awards can be at most the collective award. If $\sum_{i=1}^n x_i$ were strictly less than $\nu(N)$, then the players in the game would be better off working together, using the strategies that implement $\nu(N)$, then giving $x_i$ to player $i$ and equally sharing the difference $\nu(N) - \sum_{i=1}^n x_i$. Then each player does strictly better. So the first condition in the definition of an imputation is in fact a Pareto condition.

Example 4.3.2. For the oil market game of Example 4.3.1 we have
\[
E(\nu) = \{(x_1, x_2, x_3)^\top \mid x_1 \ge a,\ x_2 \ge 0,\ x_3 \ge 0,\ x_1 + x_2 + x_3 = c\}.
\]

Can one imputation be better than another one? Assume we have two imputations $x = (x_1, \ldots, x_n)$ and $y = (y_1, \ldots, y_n)$; then
\[
\sum_{i=1}^n x_i = \nu(N) = \sum_{i=1}^n y_i.
\]
Therefore, if for one $i$ we have $x_i < y_i$, then there is also a $j$ s.t. $x_j > y_j$. So an imputation can't be better for everyone, but it is still possible that for a particular coalition $x$ is better than $y$. This leads to the idea of domination of one imputation by another.

Definition 4.3.7. Let $x, y$ be two imputations and $S \subset N$ be a coalition. We say that $x$ dominates $y$ over $S$ if
\[
x_i > y_i \ \text{ for all } i \in S, \qquad \sum_{i \in S} x_i \le \nu(S).
\]
In this case we write $x >_S y$.

The economic interpretation of the second condition above is that the coalition $S$ has enough payoff to ensure its members the awards $x$.

Definition 4.3.8. Let $x$ and $y$ be two imputations of an N-person game $\Gamma_n$. We say $x$ dominates $y$ if there exists a coalition $S$ s.t. $x >_S y$. In this case we write $x \succ y$.

The next definition concerns the core of an N-person game. We should distinguish the core defined in the context of cooperative N-person games from the core defined in section 1 (see Definition 1.2.9).

Definition 4.3.9. Let $\Gamma_n$ be a cooperative N-person game with characteristic function $\nu$. We define its core as the set of all imputations in $E(\nu)$ which are not dominated (for any coalition). Denoting the core by $C(\nu)$, this means that
\[
C(\nu) = \{x \in E(\nu) \mid \text{there exists no } y \text{ s.t. } y \succ x\}.
\]

Theorem 4.3.1. A vector $x \in \mathbb{R}^n$ is in the core $C(\nu)$ if and only if

1. $\sum_{i=1}^n x_i = \nu(N)$

2. $\sum_{i \in S} x_i \ge \nu(S)$ for all $S \subset N$.

Proof. Let us first assume that $x \in \mathbb{R}^n$ satisfies conditions 1.) and 2.) above. Then for $S = \{i\}$ condition 2.) implies that $x_i \ge \nu(\{i\})$, so it follows with condition 1.) that $x$ is in fact an imputation. Suppose now that $x$ were dominated by another imputation $y$. Then there exists $S \subset N$ s.t. $y >_S x$, i.e. $y_i > x_i$ for all $i \in S$ and $\nu(S) \ge \sum_{i \in S} y_i$. Condition 2.) would then imply that
\[
\nu(S) \ge \sum_{i \in S} y_i > \sum_{i \in S} x_i \ge \nu(S),
\]
which of course is a contradiction. Therefore the imputation $x$ is not dominated and hence belongs to the core $C(\nu)$. Assume now, on the other hand, that $x \in C(\nu)$. Then $x$ is an imputation, so condition 1.) must hold. Assume condition 2.) did not hold. Then there would exist $S \neq N$ s.t. $\sum_{i \in S} x_i < \nu(S)$. We define
\[
\epsilon := \frac{\nu(S) - \sum_{i \in S} x_i}{|S|} > 0
\]
and
\[
y_i := \begin{cases} x_i + \epsilon & \forall i \in S \\[4pt] \nu(\{i\}) + \dfrac{\nu(N) - \nu(S) - \sum_{j \in N \setminus S} \nu(\{j\})}{|N \setminus S|} & \forall i \in N \setminus S. \end{cases}
\]
Then $\sum_{i=1}^n y_i = \nu(N)$ and $y_i \ge \nu(\{i\})$: for $i \in N \setminus S$ this follows since $\nu(N) = \nu((N \setminus S) \cup S) \ge \nu(N \setminus S) + \nu(S)$ and therefore $\nu(N) - \nu(S) \ge \nu(N \setminus S) \ge \sum_{j \in N \setminus S} \nu(\{j\})$; for $i \in S$ it follows from $y_i > x_i \ge \nu(\{i\})$. So $y$ is an imputation. Moreover,
\[
\sum_{i \in S} y_i = \nu(S)
\]
and $y_i > x_i$ for all $i \in S$. Therefore $y >_S x$ and $x$ is dominated. This is a contradiction to $x \in C(\nu)$, and hence $x$ must satisfy condition 2.).

Let us remark that the core of an N-person cooperative game is always a convex and closed set. It has one disadvantage, though: in many cases it is empty. This is in particular the case for essential constant sum games, as we will see next.

Definition 4.3.10. An N-person game is called a constant sum game if ∑_{i∈N} L_i ≡ c where c is a constant and L_i denotes the loss operator of the i-th player.

⁴ Since ν(N) = ν((N \ S) ∪ S) ≥ ν(N \ S) + ν(S), we have ν(N) − ν(S) ≥ ν(N \ S) ≥ ∑_{i∈N\S} ν({i}); moreover x_i ≥ ν({i}) for i ∈ S.

Lemma 4.3.2. Let ν be the characteristic function of a constant sum game, then for all S ⊂ N we have

ν(N \ S) + ν(S) = ν(N).

Proof. This follows from the definition of ν by applying the MiniMax Theorem of section 2.5 to the two person zero sum game with strategy sets X_{N\S} and X_S, where the Loss-operator is given by

L = ∑_{i∈N\S} ( L_i − c/N )

where c = ∑_{i∈N} L_i.

Proposition 4.3.3. If ν is the characteristic function of an essential N-person constant sum game, then C(ν) = ∅.

Proof. Assume x ∈ C(ν). Then by condition 2.) in Theorem 4.3.1 and the previous lemma we have for any i ∈ N

∑_{j≠i} x_j ≥ ν(N \ {i}) = ν(N) − ν({i}).

Since x is an imputation we also have

x_i + ∑_{j≠i} x_j = ν(N).

Combining the two gives ν({i}) ≥ x_i for all i ∈ N. Using that the game is essential we get that

∑_{i=1}^{n} x_i ≤ ∑_{i=1}^{n} ν({i}) < ν(N)

which is a contradiction to x being an imputation.
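Proposition 4.3.3 can be illustrated on the three-person majority game, ν(S) = 1 if |S| ≥ 2 and 0 otherwise, which satisfies ν(S) + ν(N \ S) = ν(N) as in Lemma 4.3.2 and is essential since ∑_i ν({i}) = 0 < 1. A brute-force grid search over the imputations (a sketch of my own; a demonstration, not a proof) finds no point satisfying the core conditions:

```python
def nu(S):
    """Three-person majority game: a coalition wins iff it has >= 2 members."""
    return 1.0 if len(S) >= 2 else 0.0

def core_points(step=0.01, tol=1e-9):
    """Grid search over imputations x1 + x2 + x3 = 1, x_i >= 0, testing
    condition 2 of Theorem 4.3.1 for the three 2-player coalitions."""
    pts = []
    n = int(round(1 / step))
    for i in range(n + 1):
        for j in range(n + 1 - i):
            x1, x2 = i * step, j * step
            x3 = 1.0 - x1 - x2
            if (x1 + x2 >= 1 - tol and x1 + x3 >= 1 - tol
                    and x2 + x3 >= 1 - tol):
                pts.append((x1, x2, x3))
    return pts

print(len(core_points()))  # 0: the core is empty, as Proposition 4.3.3 predicts
```

Indeed, summing the three pairwise constraints gives 2(x1 + x2 + x3) ≥ 3, contradicting x1 + x2 + x3 = ν(N) = 1.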

Let us study the core of the oil-market game:

Example 4.3.3. By looking at Example 4.3.1 and Theorem 4.3.1 we see that x = (x_1, x_2, x_3)^⊤ ∈ C(ν) if and only if

1. x_1 + x_2 + x_3 = c

2. x_1 ≥ a, x_2 ≥ 0, x_3 ≥ 0, x_1 + x_2 ≥ b, x_2 + x_3 ≥ 0, x_1 + x_3 ≥ c.

This however is the case if and only if x_2 = 0, x_1 + x_3 = c and x_1 ≥ b, and therefore

C(ν) = {(x, 0, c − x) | b ≤ x ≤ c}.
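The characterization of C(ν) above is easy to test numerically. In the sketch below the characteristic function is read off from the six constraints of the example; the concrete values a = 1, b = 2, c = 3 are my own illustration:

```python
# Characteristic function of the oil market game as read off from the
# constraints of Example 4.3.3 (illustrative values a = 1, b = 2, c = 3).
a, b, c = 1.0, 2.0, 3.0
NU = {frozenset(): 0.0, frozenset({1}): a, frozenset({2}): 0.0,
      frozenset({3}): 0.0, frozenset({1, 2}): b, frozenset({1, 3}): c,
      frozenset({2, 3}): 0.0, frozenset({1, 2, 3}): c}

def in_core(x, tol=1e-9):
    """Conditions of Theorem 4.3.1 for the 3-player game NU."""
    if abs(sum(x.values()) - NU[frozenset({1, 2, 3})]) > tol:
        return False
    return all(sum(x[i] for i in S) >= NU[S] - tol for S in NU if S)

# Points of the claimed core {(x, 0, c - x) | b <= x <= c} pass the test ...
assert in_core({1: 2.5, 2: 0.0, 3: 0.5})
# ... while points violating x1 >= b or x2 = 0 fail.
assert not in_core({1: 1.5, 2: 0.0, 3: 1.5})
assert not in_core({1: 2.5, 2: 0.2, 3: 0.3})
```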

The fact that in a lot of cases the core is just the empty set leads to the question which other solution concepts for cooperative N-person games are reasonable. We discuss two more, the so called stable sets and the Shapley value.

Definition 4.3.11. A stable set S(ν) of an N-person game with characteristic function ν is any subset S(ν) ⊂ E(ν) of imputations satisfying

1. if x, y ∈ S(ν) then neither x ≻ y nor y ≻ x (internal stability)

2. if z ∉ S(ν) then there is an x ∈ S(ν) s.t. x ≻ z (external stability).

Remark 4.3.1. We have C(ν) ⊂ S(ν) ⊂ E(ν) for any stable set S(ν), because undominated imputations must be contained in any stable set. More precisely C(ν) ⊂ ∩_{S(ν) stable} S(ν). In general the inclusion is proper. It was for a long time an open question whether all cooperative games possess stable sets. The answer to this question is no (Lucas 1968).

Exercise 4.3.1. Compute a stable set for the oil market game.

Definition 4.3.12. An N-person game is called simple if for all S ⊂ N we have that ν(S) ∈ {0, 1}. A minimum winning coalition S is one where ν(S) = 1 and ν(S \ {i}) = 0 for all i ∈ S.

The following proposition leads to various examples of stable sets within simple games.

Proposition 4.3.4. Let ν be the characteristic function of a simple game and S a minimum winning coalition. Then

V_S = {x ∈ E(ν) | x_i = 0 ∀ i ∉ S}

is a stable set.

One of the more famous solution concepts in N-person cooperative games is the so called Shapley value. It should be compared to the Nash bargaining solution within two person cooperative games.

Theorem 4.3.2. (Shapley) Denote with V the set of all characteristic functions ν : 2^N → [0, ∞). Then there exists exactly one function

φ = (φ_i) : V → R^n

which satisfies the following conditions:

1. φ_i(ν) = φ_{π(i)}(πν) for all π ∈ Perm(N)⁵, where πν denotes the characteristic function of the game which is constructed from ν by reordering the numbers of the players corresponding to the permutation π.

2. ∑_{i=1}^{n} φ_i(ν) = ν(N) for all ν ∈ V.

3. µ, ν ∈ V ⇒ φ(µ + ν) = φ(µ) + φ(ν).

This unique function is given by

φ_i(ν) = ∑_{S⊂N : i∈S} ( (|S| − 1)! (n − |S|)! / n! ) ( ν(S) − ν(S \ {i}) ).

φ(ν) is called the Shapley value of ν.

We will not prove this theorem but instead motivate the formula and discuss the underlying idea. The idea of Shapley was as follows: the players arrive one after another at the negotiation table, but they arrive in random order. Each time a new player arrives at the negotiation table the negotiations will be extended to include the newly arrived player. If, when player i arrives, the players S \ {i} are already sitting at the negotiation table, then the award for player i from the new negotiations should be what he brings in for the extended coalition, namely ν(S) − ν(S \ {i}). The probability that player i arrives when exactly the players S \ {i} are already sitting at the negotiation table is

(|S| − 1)! (n − |S|)! / n!.

The Shapley value can therefore be considered as the expected award for player i. For the oil market game we get the following:

⁵ Here Perm(N) denotes the permutation-group of N, i.e. the group of bijective maps π : N → N.

Example 4.3.4. For the oil market game one computes

φ_1(ν) = (1/2)c + (1/3)a + (1/6)b,
φ_2(ν) = (1/6)b − (1/6)a,
φ_3(ν) = (1/2)c − (1/6)a − (1/3)b.
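The Shapley value can be computed directly from the random-arrival description, by averaging each player's marginal contribution over all arrival orders. The sketch below (with illustrative values a = 1, b = 2, c = 3 for the oil market game; my own choice of numbers) reproduces the formulas of Example 4.3.4 exactly, using rational arithmetic:

```python
import math
from itertools import permutations
from fractions import Fraction

def shapley(nu, players):
    """Shapley value via the random-arrival story: average each player's
    marginal contribution nu(S ∪ {i}) - nu(S) over all n! arrival orders."""
    phi = {i: Fraction(0) for i in players}
    for order in permutations(players):
        seated = frozenset()
        for i in order:
            phi[i] += nu(seated | {i}) - nu(seated)
            seated = seated | {i}
    n_fact = math.factorial(len(players))
    return {i: v / n_fact for i, v in phi.items()}

# Oil market game with illustrative values a = 1, b = 2, c = 3.
a, b, c = Fraction(1), Fraction(2), Fraction(3)
NU = {frozenset(): Fraction(0), frozenset({1}): a, frozenset({2}): Fraction(0),
      frozenset({3}): Fraction(0), frozenset({1, 2}): b, frozenset({1, 3}): c,
      frozenset({2, 3}): Fraction(0), frozenset({1, 2, 3}): c}
phi = shapley(lambda S: NU[frozenset(S)], (1, 2, 3))
assert phi[1] == c / 2 + a / 3 + b / 6   # = 13/6
assert phi[2] == b / 6 - a / 6           # = 1/6
assert phi[3] == c / 2 - a / 6 - b / 3   # = 2/3
```

Note that the three values sum to c = ν(N), as condition 2. of Theorem 4.3.2 demands.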

Chapter 5

Differential Games

So far we considered only games which are static in the sense that first the players choose their strategies, then the game is played with the chosen strategies and an outcome measured in loss or utility is determined by the rules of the game. In this chapter we will develop a concept of dynamic games which are played over a time interval [0, T] and in which the players can choose their strategies at time t depending on the information they obtained by playing the game up to this time.

5.1 Setup and Notation

Assume we have the following ingredients: We have players "i", i = 1, .., N, which can choose strategies

γ^i : [0, T] × Map([0, T], R^n) → R^{m_i}
(t, x(·)) → γ^i(t, x(·)).

We denote the set of player i's strategies with Γ^i. As before an element (γ^1, ..., γ^N) of Γ^1 × ... × Γ^N is called a multi-strategy. Furthermore we assume we have a function

f : R × R^n × R^{m_1} × ... × R^{m_N} → R^n
(t, x, u_1, ..., u_N) → f(t, x, u_1, ..., u_N).

Definition 5.1.1. A multi-strategy (γ^1, ..., γ^N) ∈ Γ^1 × ... × Γ^N is called admissible for the initial state x_0 ∈ R^n if for all i = 1, .., N

1. γ^i(t, x(·)) depends on x(·) only through the values x(s) for 0 ≤ s ≤ τ_i(t), where τ_i ∈ Map([0, T], [0, T])

2. γ^i(t, x(·)) is piecewise continuous

3. the following differential equation has a unique solution in x(·)

dx(t)/dt = f(t, x(t), γ^1(t, x(·)), ..., γ^N(t, x(·)))
x(0) = x_0.

The function τ_i is called player i's information structure.

Definition 5.1.1 has to be interpreted in the following way. The function f represents the rules of the game and, by the differential equation above, determines which strategies can actually be used by the players. The conditions concerning the information structure say how much information the players can use to choose their strategies at the corresponding time. We denote the set of admissible multi-strategies with Γ_adm. Clearly Γ_adm depends on the function f, the initial state x_0 and the information structures τ_i. Sometimes we use the notation

u_i(t) = γ^i(t, x(·))

which has to be interpreted in the way that (γ^1, ..., γ^N) ∈ Γ_adm and the function x(·) is the unique solution of the differential equation above. The payoff in our differential game is measured in values of utility (not in loss as in the previous chapters) and for a multi-strategy (γ^1, ..., γ^N) ∈ Γ_adm player i's utility is given by an expression of the form

J_i(γ^1, ..., γ^N) = ∫_0^T φ_i(t, x(t), u_1(t), ..., u_N(t)) dt + g_i(x(T)).

In this expression the time integral represents the utility taken from consumption over time, and the function g_i represents the utility which is determined by the final state of the game. In the following we will make use of the following technical assumptions:

1. For i = 1, .., N the functions φ_i(t, ·, u_1, ..., u_N) and g_i(·) are continuously differentiable in the state variable x

2. The function f is continuous in all its variables and also continuously differentiable in the state variable x.

Remark 5.1.1. If the function f is Lipschitz continuous, admissibility of strategies is a very mild condition, due to theorems from the theory of ordinary differential equations. In general though, in the theory of differential games one does not assume Lipschitz continuity of f.
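To make the setup concrete, here is a small simulation sketch (all names and the specific game are my own illustration, not from the notes): two players use the closed-loop strategies γ^i(t, x) = −x/2, the dynamics are f(t, x, u_1, u_2) = u_1 + u_2, so dx/dt = −x with known solution x(t) = x_0 e^{−t}, and the utility functional J_i is approximated by a Riemann sum:

```python
import math

def simulate(gammas, f, x0, T=1.0, steps=10_000):
    """Euler discretization of dx/dt = f(t, x, u_1, ..., u_N) with
    closed-loop strategies u_i(t) = gamma_i(t, x(t))."""
    dt = T / steps
    ts, xs = [0.0], [x0]
    for _ in range(steps):
        t, x = ts[-1], xs[-1]
        us = [g(t, x) for g in gammas]
        xs.append(x + dt * f(t, x, *us))
        ts.append(t + dt)
    return ts, xs

def payoff(phi_i, g_i, gammas, f, x0, T=1.0, steps=10_000):
    """Riemann-sum approximation of J_i = ∫ phi_i dt + g_i(x(T))."""
    dt = T / steps
    ts, xs = simulate(gammas, f, x0, T, steps)
    integral = sum(phi_i(t, x, *[g(t, x) for g in gammas]) * dt
                   for t, x in zip(ts[:-1], xs[:-1]))
    return integral + g_i(xs[-1])

# Both players play the closed-loop strategy u_i = -x/2, so dx/dt = -x.
gammas = [lambda t, x: -x / 2, lambda t, x: -x / 2]
f = lambda t, x, u1, u2: u1 + u2
ts, xs = simulate(gammas, f, x0=1.0)
print(abs(xs[-1] - math.exp(-1.0)))  # small: matches the exact solution e^{-T}
```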

The concept of Nash equilibria can now be directly applied within the context of differential games. We just repeat it, this time in the context of utility rather than loss:

Definition 5.1.2. An admissible multi-strategy (γ^1♯, ..., γ^N♯) ∈ Γ_adm is called a Nash Equilibrium if

J_1(γ^1♯, ..., γ^N♯) ≥ J_1(γ^1, γ^2♯, ..., γ^N♯) ∀γ^1 s.t. (γ^1, γ^2♯, ..., γ^N♯) ∈ Γ_adm
⋮
J_N(γ^1♯, ..., γ^N♯) ≥ J_N(γ^1♯, ..., γ^{N−1}♯, γ^N) ∀γ^N s.t. (γ^1♯, ..., γ^{N−1}♯, γ^N) ∈ Γ_adm.

We denote the values on the left side with J_i♯ and call (J_1♯, ..., J_N♯) the Nash-outcome.

We have discussed this equilibrium concept in detail before. The question is again whether such equilibria exist. We no longer have compactness of the strategy sets and therefore cannot apply the theorems of chapter 2 and chapter 3. More techniques from Functional Analysis and Differential Equations are necessary. We will address this problem later and in the meantime introduce another equilibrium concept which was also previously discussed in section 1.3, the so called Stackelberg equilibrium. We end this section by introducing the most important information structures; there are many more.

Definition 5.1.3. Player "i"'s information structure τ_i is called

1. open loop if τ_i(t) = 0 for all t, that is player "i" does not use any information on the state of the game when choosing his strategies.

2. closed loop if τ_i(t) = t for all t, that is player "i" uses all possible information he can gather by following the game.

3. ε-delayed closed loop if τ_i(t) = t − ε for t ∈ [ε, T] and τ_i(t) = 0 for t ∈ [0, ε).

5.2 Stackelberg Equilibria for 2 Person Differential Games

In the framework of differential games another equilibrium concept has proven successful, the concept of Stackelberg equilibria. It is normally applied within games where some players are in a better position than others. The dominant players are called the leaders and the sub-dominant players the followers. One situation where this concept is very convincing is when the players choose their strategies one after another and the player who makes the first move has an advantage. This interpretation however is not so convincing in continuous time, where players choose their strategies at each time, but one could think of one player having a larger information structure. However, from a mathematical point of view the concept is so successful because it leads to results. In this course we will only consider Stackelberg equilibria for 2 person differential games and first illustrate the concept in a special case.

Assume player "1" is the leader and player "2" is the follower. Suppose that player "2" chooses his strategies as a reaction to player "1"'s strategy, in the sense that when player "1" chooses strategy γ^1, player "2" chooses a strategy from which he gets optimal utility given that player "1" plays γ^1. We assume for a moment that there is such a strategy and that it is unique. We denote this strategy with γ^2 and get a map

T : Γ^1 → Γ^2
γ^1 → T(γ^1) = γ^2.

To maximize his own utility player "1" would then choose a strategy γ^1* which optimizes J_1(γ^1, T(γ^1)), i.e.

J_1(γ^1*, T(γ^1*)) ≥ J_1(γ^1, T(γ^1))

for all γ^1 ∈ Γ^1. γ^2* = T(γ^1*) would be the optimal strategy for player 2 under this consideration. However we have to make some assumptions on admissibility in the previous discussion, and also in general the existence and uniqueness of the strategy γ^2 = T(γ^1) is not guaranteed.

Definition 5.2.1. The optimal reaction set R^2(γ^1) of player "2" to player "1"'s strategy γ^1 is defined by

R^2(γ^1) = {γ ∈ Γ^2 | J_2(γ^1, γ) ≥ J_2(γ^1, γ^2) ∀γ^2 ∈ Γ^2 s.t. (γ^1, γ^2) ∈ Γ_adm}.

Definition 5.2.2. In a 2 person differential game with player "1" as a leader and player "2" as a follower, a strategy γ^1* ∈ Γ^1 is called a Stackelberg Equilibrium Solution for the leader if

min_{γ∈R^2(γ^1*)} J_1(γ^1*, γ) ≥ min_{γ∈R^2(γ^1)} J_1(γ^1, γ)

for all γ^1 in Γ^1, where we assume that min ∅ = ∞ and ∞ ≥ ∞. We denote the left hand side with J_1* and call it the Stackelberg payoff for the leader, and for any γ^2* ∈ R^2(γ^1*) we call the pair (γ^1*, γ^2*) a Stackelberg equilibrium solution and (J_1(γ^1*, γ^2*), J_2(γ^1*, γ^2*)) the Stackelberg equilibrium outcome of the game. Even if the Stackelberg solution for the leader and the optimal reaction for the follower are not unique, the Stackelberg equilibrium outcome is unique.
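The mechanics of these definitions are easiest to see in a static analogue (a toy sketch of my own; the payoffs J1, J2 below are invented for illustration): first compute the follower's reaction map T numerically, then let the leader optimize along it.

```python
def J1(u, v):
    return -(u - 1.0) ** 2 - v       # leader's utility

def J2(u, v):
    return -(v - u / 2.0) ** 2       # follower's utility, best reply v = u/2

def follower_reaction(u, grid):
    """T(u): the follower's best reply, found by brute force over a grid."""
    return max(grid, key=lambda v: J2(u, v))

grid = [k / 1000.0 for k in range(0, 1001)]   # strategies in [0, 1]
u_star = max(grid, key=lambda u: J1(u, follower_reaction(u, grid)))
v_star = follower_reaction(u_star, grid)
print(u_star, v_star)  # ≈ 0.75 and ≈ 0.375, up to grid resolution
```

Analytically T(u) = u/2, so the leader maximizes −(u − 1)² − u/2, giving u* = 3/4 and v* = 3/8, which the grid search reproduces.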

5.3 Some Results from Optimal Control Theory

One could say that optimal control problems are 1-person differential games, and most of the theory of differential games uses results from there. We will therefore briefly consider the optimal control problem and state the Pontryagin principle, which gives necessary conditions for a control to be optimal.

Suppose that the state of a dynamical system evolves according to a differential equation:

dx(t)/dt = f(t, x(t), u(t))
x(0) = x_0

where u(·) denotes a control function which can be chosen within the set of admissible controls, i.e. maps u : [0, T] → R^m s.t. the differential equation from above has a unique solution. The problem in optimal control theory is to choose u in a way that it maximizes a certain functional

J(u) = ∫_0^T φ(t, x(t), u(t)) dt + g(x(T)).

We assume that the functions f, φ and g satisfy similar conditions as in section 5.1. In the economic literature a pair (x(·), u(·)) which satisfies the differential equation from above is called a program, and an optimal program (x*(·), u*(·)) is one that maximizes J. The following principle is known as the Pontryagin principle and gives necessary conditions for a program to be an optimal program.

Theorem 5.3.1. Suppose (x*(·), u*(·)) is an optimal program. Then there exists an R^n-valued function

p : [0, T] → R^n

called the costate vector (or sometimes multiplier function) s.t. if we define the function

H(t, x, u, p) = φ(t, x, u) + <p, f(t, x, u)>

for all (t, x, u, p) ∈ [0, T] × R^n × R^m × R^n, then the following conditions hold for i = 1, .., n:

dx*_i(t)/dt = ∂H/∂p_i (t, x*(t), u*(t), p(t)) = f_i(t, x*(t), u*(t))
x_i(0) = x_{i0}

dp_i(t)/dt = −∂H/∂x_i (t, x*(t), u*(t), p(t))
p_i(T) = ∂g/∂x_i (x*(T))

and if U_adm ⊂ R^m denotes the set of vectors u s.t. u(·) ≡ u is admissible, then

H(t, x*(t), u*(t), p(t)) = max_{u∈U_adm} H(t, x*(t), u, p(t))

for almost all t.
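To see the principle at work, consider a toy problem solvable by hand (my own example, not from the notes): maximize J(u) = ∫_0^1 −u(t)²/2 dt + x(1) subject to dx/dt = u, x(0) = 0. Here H = −u²/2 + p·u, the costate equation dp/dt = −∂H/∂x = 0 with p(1) = g'(x(1)) = 1 gives p ≡ 1, and maximizing H in u yields u* ≡ 1. A brute-force search over constant controls agrees:

```python
def J(u, T=1.0, steps=1000):
    """Payoff of the constant control u for dx/dt = u, x(0) = 0:
    J = ∫ -u^2/2 dt + x(T), computed here by an Euler sum."""
    dt = T / steps
    x, total = 0.0, 0.0
    for _ in range(steps):
        total += -0.5 * u * u * dt
        x += u * dt
    return total + x

best = max((k / 100.0 for k in range(-200, 201)), key=J)
print(best)  # 1.0, matching the control u* = p = 1 from the Pontryagin conditions
```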

The method for finding the optimal control now works basically as when using the Lagrange multiplier method in standard calculus. If one knows from a theoretical consideration that there must exist an optimal program, then the necessary conditions above help to determine it.

We will illustrate in the following example from economics how the costate vector can be interpreted. We consider an economy with a constant labour force and capital K(t) at time t ∈ [0, T], where the labour force can use some technology F in the way that it produces a product worth

Y(t) = F(K(t))

where F(0) = 0.¹ For the function F we assume that it has a "decreasing return to scale" property, i.e.

dF(K)/dK > 0,  d²F/dK² < 0

which from an economic point of view is very reasonable and should be interpreted in the way that one can produce more given more capital, but that the effectivity becomes less the more capital is used in the production. We assume that the members of our economy consume some of the capital and the rate of consumption at time t will be denoted with C(t). Furthermore we assume that the capital depreciation is given by the constant µ. Then the evolution of capital in our economy is given by the following differential equation:

dK(t)/dt = F(K(t)) − C(t) − µK(t)
K(0) = K_0.

¹ This basically means you cannot produce anything out of nothing.

Here K_0 denotes the initial capital. Let U be the utility function which measures the utility taken from consumption. As always for utility functions we assume that

dU(C)/dC > 0,  d²U(C)/dC² < 0.

The members of our economy also benefit from the final capital stock and this benefit is given by g(K(T)), where g is another utility function. The total utility from choosing the consumption C(·) over [0, T] is then given by

W(C(·)) = ∫_0^T U(C(t)) dt + g(K(T)).

Our economy would now like to choose C(·) in a way that it maximizes this utility. Let us apply the Pontryagin principle. The Hamilton function for this problem is given by

H(t, K, C, p) = U(C) + p · (F(K) − C − µK).

The Pontryagin principle says that the optimal consumption rate C*(t) at time t satisfies

0 = ∂H/∂C (t, K*(t), C*(t), p(t)) = dU/dC (C*(t)) − p(t).

Furthermore

dp(t)/dt = −∂H/∂K (t, K*(t), C*(t), p(t)) = −p(t) dF(K*(t))/dK + µp(t)
p(T) = ∂g(K*(T))/∂K.

One can show that the function p satisfies

p(s) = ∂/∂K [ ∫_s^T U(C*(t)) dt + g(K*(T)) ] |_{K=K*(s)}

where C* is considered to be a function of K* in the way that C*(t) = C*(t, K*(t)). This representation can be obtained when solving for C*(·). The term on the right side can be interpreted as the increase in utility within the time interval [s, T] per unit of capital. In the economic literature p(s) is therefore called the shadow price of capital.
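For a quick numerical check of the costate equation (a sketch with invented toy values), take the linear technology F(K) = rK, so that dp/dt = −p·F'(K) + µp = (µ − r)p with terminal value p(T) = g'(K*(T)); the closed form p(s) = p(T)·e^{(r−µ)(T−s)} can be compared against a backward Euler integration:

```python
import math

r, mu, T = 0.05, 0.02, 10.0
pT = 1.0   # p(T) = g'(K*(T)), taken as 1 for illustration

def p_exact(s):
    """Closed-form costate for F(K) = r*K: dp/dt = (mu - r) p, p(T) = pT."""
    return pT * math.exp((r - mu) * (T - s))

# Integrate dp/dt = (mu - r) p backwards from t = T down to t = 0.
steps = 100_000
dt = T / steps
p = pT
for _ in range(steps):
    p -= dt * (mu - r) * p   # one backward step: p(t - dt) ≈ p(t) - p'(t) dt
print(abs(p - p_exact(0.0)))  # small: numerical and analytic costate agree
```

Since r > µ here, p(s) grows as s moves away from T: capital installed earlier has more time to generate utility, which matches the shadow-price interpretation.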

5.4 Necessary Conditions for Nash Equilibria in N-person Differential Games

The following theorem gives necessary conditions for a multi-strategy in a differential game to be a Nash equilibrium solution. The theorem heavily relies on Theorem 5.3.1.

Theorem 5.4.1. Consider an N-person differential game as formulated in the beginning of this chapter and let (γ^1♯, ..., γ^N♯) be a Nash equilibrium solution. Assume γ^i♯(t, x(·)) depends on x(·) only through x(t) and this dependence is C¹-differentiable. Then there exist N costate vectors p^i(t) ∈ R^n and N Hamilton functions, for i = 1, .., N,

H^i(t, x, u_1, ..., u_N, p^i) = φ_i(t, x, u_1, ..., u_N) + <p^i, f(t, x, u_1, ..., u_N)>

s.t. the following conditions are satisfied for k, i = 1, ..., N:

dx^♯(t)/dt = f(t, x^♯(t), u_1♯(t), ..., u_N♯(t))
x^♯(0) = x_0

dp^i_k(t)/dt = −∂H^i/∂x_k (t, x^♯(t), u_1♯(t), ..., u_N♯(t), p^i(t))
             − ∑_{j≠i} ∑_{l=1}^{m_j} ∂H^i/∂u_{j,l} (t, x^♯(t), u_1♯(t), ..., u_N♯(t), p^i(t)) · ∂γ^j_l/∂x_k (t, x^♯(t))

p^i_k(T) = ∂g_i/∂x_k (x^♯(T)).

Furthermore u_i♯(t) = γ^i♯(t, x^♯(t)) maximizes

H^i(t, x^♯(t), u_1♯(t), ..., u_{i−1}♯(t), u, u_{i+1}♯(t), ..., u_N♯(t), p^i(t))

for all u ∈ R^{m_i} s.t. u(t) ≡ u is admissible.


Bibliography

[Au] Aubin: Mathematical Methods of Game and Economic Theory

[Jo] Jones: Game Theory, Mathematical Models of Conflict

[Th] Thomas: Games, Theory and Applications

[Ow] Owen: Game Theory

[BO] Border: Fixed Point Theorems with Applications to Economics and Game Theory

[Du] Dugatkin: Game Theory and Animal Behavior

[Ma] Maynard-Smith: Evolutionary Game Theory

132

Abstract These are my Lecture Notes for a course in Game Theory which I taught in the Winter term 2003/04 at the University of Kaiserslautern. I am aware that the notes are not yet free of error and the manuscrip needs further improvement. I am happy about any comment on the notes. Please send your comments via e-mail to ce16@st-andrews.ac.uk.

Electronic copy of this paper is available at: http://ssrn.com/abstract=976592

Contents

1 Games 1.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . 1.2 General Concepts of two Person Games . . . . . . . . . . 1.3 The Duopoly Economy . . . . . . . . . . . . . . . . . . . . 2 Brouwer’s Fixed Point Theorem and Nash’s Equilibrium Theorem 2.1 Simplices . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2.2 Sperners Lemma . . . . . . . . . . . . . . . . . . . . . . . 2.3 Proof of Brouwer’s Theorem . . . . . . . . . . . . . . . . . 2.4 Nash’s Equilibrium Theorem . . . . . . . . . . . . . . . . 2.5 Two Person Zero Sum Games and the Minimax Theorem 3 More general Equilibrium Theorems 3.1 N-Person Games and Nash’s generalized Equilibrium Theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3.2 Correspondences . . . . . . . . . . . . . . . . . . . . . . . . 3.3 Abstract Economies and the Walras Equilibrium . . . . . 3.4 The Maximum Theorem for Correspondences . . . . . . . 3.5 Approximation of Correspondences . . . . . . . . . . . . . 3.6 Fixed Point Theorems for Correspondences . . . . . . . . 3.7 Generalized Games and an Equilibrium Theorem . . . . 3.8 The Walras Equilibrium Theorem . . . . . . . . . . . . . . 3 3 8 16

20 21 23 26 32 34 37 37 38 48 54 59 62 68 72

1

. . . . . . . . . 5.2 Stackelberg Equilibria for 2 Person Differential Games . . . . . . . . . . . . . . . . . . . . . . . . . . 5. 87 4. .1 Cooperative Two Person Games . . . .2 Nash’s Bargaining Solution . . 5. . .4 Cooperative Games 87 4. . .4 Necessary Conditions for Nash Equilibria in N-person Differential Games . . . 92 4. . . . . . . . . . . . .3 Some Results from Optimal Control Theory . . . . . . . . . . 121 121 124 126 130 2 . . . . . . . . . . . . . . . . . . . . . .3 N-person Cooperative Games .1 Setup and Notation . . . . . . . . 108 5 Differential Games 5.

Example 1. we describe two games. At the 3 . so we study simpliﬁed models ( such as for example simpliﬁed Poker which is played with only two cards.1 Introduction Game Theory is a formal approach to study games. However. psychology. in this case we speak of cooperative games as opposed to non-cooperative games. an “Ace” and a “Two” and only two players. biology. A lot of our examples though are also motivated by classical games. an Ace and a two ). Sometimes we allow the players to cooperate. However all methods from these subjects will be explained during the course. To start with. The theory by itself can be quite abstract and a lot of methods from Functional Analysis and Topology come in. reality is often far too complex. which will later help us to understand the abstract deﬁnitions coming in the next section. Simpliﬁed Poker : There are only two cards involved. player 1 and player 2. Game theory has many applications in subjects like Economy. but also in such an unpleasant subject as warfare. In this lecture we will concentrate on applications in Economics. where players are not allowed to cooperate.Chapter 1 Games 1. We can think of games as conﬂicts where some number of individuals ( called players ) take part and each one tries to maximize his utility in taking part in the conﬂict.

the outcome of the game is determined by the strategies chosen by each player and the rules This structure will be included in the (”abstract”) deﬁnition of a “Game” in Chapter 1. In this case Player 2 has to show player 1 his card. If it’s the “Ace” player 2 has to say “Ace”. Both cards lie face down on the table and none of the players knows. player 2 says “Ace” he has to put another Euro in the “pot”. Then player two draws one of the cards and takes a look. This in not the 4 . In the second game. If it is indeed the “Ace” then player 2 wins the ( now ) 4 Euro in the pot but if it is the “Two” then player 1 wins the 4 Euro. Either he believes player 2 and “folds” in. The two games just presented though differ under some aspect.beginning each one puts 1 Euro in the “pot”. he can say two and then loses the game ( in this case player 1 wins the 2 Euro in the “pot” ) or he can “bluff” and say “Ace”. there are rules under which the player can choose their strategies (moves) 3.2) : There are two piles of two matches each and two players are taking part in the game. The player who takes the last match looses. Nim(2. player 1 takes the ﬁrst turn. or player 1 assumes that player two has “bluffed” puts another Euro in the “pot”. the players have at each time perfect knowledge about the action of their opponent. In both cases. the game is ﬁnished. In the case . in this case player 2 wins the ( now ) 3 Euro in the “pot”. At each turn the player selects one pile which has at least one match left and removes at least one match from this pile. which card is the “Ace” and which one the “Two”. If however he has drawn the “Two”. there is a number of players taking part in the game 2. Player 1 then has two choices. The game ﬁnishes when all matches are gone. Example 2. player 1 and player 2. Each of the games above is of the following structure : 1. The players take alternating turns.

Usually a game can be considered as a tree. Trees can be helpful in understanding the structure of the game. For the Nim(2. which is missing in the second game. Such informations will be considered as additional structure and considered individually.2) can choose : 5 .case in the ﬁrst game. For mathematical purposes though. We would like to decode the structure of the game in a more condensed form. where the nodes are the states of the game and the edges represent the moves. First let us consider the individual strategies the players in Nim(2. Nim(2. they are often to bulky to work with.2) game one has the following tree : Example 3. where player 2 can “bluff”. the number on the bottom of the diagram indicates the winner. Also in the ﬁrst game there is a chance element.2) Tree : ||/|| H jjj HH 1 jjjj HH j HH jjjj j HH jjjj 1 $ jjj tj |/||2 J −/||3H JJ 2 HH 2 tt vv JJ HH tt vv JJ HH 2 tt2 vv2 HH JJ v tt zt $ zvv # |/|5 1 1 −/||4 −/|9 2 |/−6 1 −/|7 1 −/−8 t tt tt tt ztt 1 1 −/−10 |/−11 2 −/−12 −/−13 −/−14 1 2 −/−15 1 2 2 1 The number at each edge indicates which player does the move. We speak respectively of games with perfect information and games with non perfect information.

4. 2. 2 Let us also discuss the simpliﬁed Poker in this way. The following table shows the possible strategies of the players : 6 . If the players decide for one of their strategies. 5. which is described in matrix form : 1 −1 −1 1 −1 −1 L := 1 −1 1 1 −1 1 1 1 1 −1 −1 −1 The value L(i. namely s3 . 3} 1 j and those for player 2 as s2 for i ∈ {1. Let us denote the outcome of the game as 1 if player 1 loses and with −1 if player 1 wins.S1 s1 1 s2 1 s3 1 1st turn 1→2 1→2 1→3 if at 4 4 − go to 9 10 − S2 s1 2 s2 2 s3 2 s4 2 s5 2 s6 2 if at 2 3 2 3 2 3 2 3 2 3 2 3 go to 4 7 5 7 6 7 4 8 5 8 6 8 Here the strategies of player 1 are denoted with si for i ∈ {1. the outcome of the game is already ﬁxed. 2.2) if player 1 chooses his i-th strategy and player 2 chooses j-th strategy. 3. j) at position (i. Then the game is equivalent to the following game. We see that player 2 has a strategy which guarantees him to win. j) of this matrix is the outcome of the game Nim(2. 6}.

According to what card Player 2 draws. She would like to go to the theater. so they still like to spend their time together and enjoy the entertainment only if their partner is with them. Let’s say the ﬁrst strategy for each is to go to the theater and the second is to go to the football match. had the characteristic property that what one player loses.s1 1 s2 1 s1 2 s2 2 believe 2 when he says “Ace” don’t believe 2 when he says “Ace” say “Two” when you have “Two” say “Ace” when you have “Two” (“bluff”) Since their is a chance element in the game ( “Ace” or “Two” each with probability 1/2 ) the losses corresponding to pairs of strategies are not deterministic. the other player wins. This is not always the case as the following example shows : Example 4. Battle of the Sexes A married couple are trying to decide where to go for a night out. Then the individual losses for each can be denoted in matrix form as : 7 . he would like to go to a football match. we have different losses : LT wo := −1 1 −1 −2 and LAce := 1 1 2 2 One could now consider ”Nature” as a third player and denote the losses in a three dimensional array but one usually decides not to do this and instead denote the expected losses in a matrix. Since each event ”Two” and ”Ace” occurs with probability 1/2 we have for the expected losses : 1 1 L := LT wo + LAce = 2 2 0 1 1/2 0 The two examples we had so far. However they are just married since a couple of weeks.

the ﬁrst number indicates the loss for the man.1. 0) (−4. topological spaces S1 and S2 .L := (−1. and so since in fact we want to maximize the gains. a biloss operator L : U → R2 (s1 . The examples mentioned in this chapter will lead us directly to the formal deﬁnition of a ( Two Person ) Game in the next section and henceforth serve for illustrating the theory.2 General Concepts of two Person Games Deﬁnition 1. A two person game G2 in normal form consists of the following data : 1. the so called strategies for player 1 resp. −4) (0.The 8 (1. that we always speak about losses. which from now on at some points tends to be very abstract. The reader may ﬁnd it unusual. the spaces Si have been ﬁnite ( with discrete topology ) and U has to be chosen S1 × S2 . L2 (s1 . rather then gains ( or wins ). 1.1) (1.2. 0) (0. a topological subspace U ⊂ S1 × S1 of allowed strategy pairs 3.2) . s2 ) → (L1 (s1 . player 2. that in convex analysis one rather likes to determine minima instead of maxima ( only for formal reasons ). the second one the loss for the woman. For the games considered in the Introduction. 2. For the Nim(2. s2 ).2) game and simpliﬁed poker we have L2 = −L1 . we have to minimize the losses. s2 )) Li (s1 . s2 ) is the loss of player i if the strategies s1 and s2 are played. The reason is. −1) Here in each entry.

but for now we stick with two person games.main problem in Game Theory is to develop solution concepts and later on ﬁnd the solutions for the game. −1). L(˜1 .t. would be to choose the strategies s1 and s2 since they guarantee the minimal loss. s2 ).3) where α1 = inf (s1 .2. there is no pair of strategies which gives biloss (−1. α2 ) (1. With solution concepts we mean to characterize those strategies which are optimal in some sense. For the Nim(2. such strategies do not exist. Deﬁnition 1. We will see a lot of different approaches and also extend the deﬁnition to n-person games. s2 ) s2 ∈S2 sup L2 (s1 . Given a two person game G2 we deﬁne its shadow minimum as α = (α1 .2. s2 ).s2 )∈U L1 (s1 . −1) and for the “Battle of the Sexes” it is α = (−4. G2 is bounded from below. s2 ) = α s ˜ s ˜ then a good choice for both players. The shadow minimum represents the minimal losses for both players if they don’t think about strategies at all.1 We deﬁne functions L♯ : S1 → R and L♯ : S2 → R as follows : 1 2 L♯ (s1 ) = 1 L♯ (s2 ) = 2 1 sup L1 (s1 . for the simpliﬁed poker its α = (0.2) game the shadow-minimum is α = (−1. In case there exists (˜1 .2) game. However in most ˜ ˜ games. −4). s2 ) and α2 = inf (s1 . if both α1 and α2 are ﬁnite. s1 ∈S1 all coming deﬁnitions can be generalized to allow U to be a proper subset of S1 ×S2 9 . For example in the Nim(2.s2 )∈U L2 (s1 . Given a game G2 . −1). s2 ) ∈ U s. To keep things easy for now we assume U = S1 × S2 .

s2 ). 10 . s2 ) 1 1 1 s1 ∈S1 s1 ∈S1 s2 ∈S2 is called a conservative strategy for player 1 and analogously s♯ for 2 player 2. v2 ) is called the conservative value of the game. 2 Exercise 1. An easy computation shows that for the ”Battle of the Sexes” game ♯ ♯ we have L♯ ≡ 0 ≡ L♯ .1.3. hence chooses strategy s♯ = s2 and the woman de1 1 cides she wants to go to the theater. s♯ ) = 0 ≥ −1 = L1 (s1 .2. minimizes the maximal loss 1 and analogously player 2 should use a strategy which minimizes L♯ .2. Then both have chosen conservative strategies but both can do 2 2 better : L1 (s♯ . the following consideration is reasonable : Player 1 should use a strategy which minimizes L♯ . if ♯ L♯ (s♯ ) = v2 := inf L♯ (s2 ) = inf sup L2 (s1 . assume the man decides he wants to see the football match. s♯ ) = 0 ≥ −1 = L2 (s♯ . 1 2 1 2 We say the chosen pair of strategies is not individually stable. s♯ ) 1 2 1 2 L2 (s♯ . s2 ). Deﬁnition 1. If both players are highly risk aversive. i. Hence v1 = 0 = v2 and any strategy is a con1 2 servative strategy. 2 2 2 s2 ∈S2 s2 ∈S2 s1 ∈S1 ♯ ♯ The pair v = (v1 . which means she chooses strategy s♯ = s1 . Compute the function L♯ for the games in the Introduci tion. A strategy s♯ which satisﬁes 1 ♯ L♯ (s♯ ) = v1 := inf L♯ (s1 ) = inf sup L1 (s1 . However.L♯ (s1 ) is the worst loss that can happen for player 1 when he plays 1 strategy s1 and analogously L♯ (s2 ) is the worst loss that can happen 2 for player 2 when he plays strategy s2 .e.

Definition 1.2.4. A pair (s♯1, s♯2) is called a non-cooperative equilibrium, or short NCE², if

    L1(s♯1, s♯2) ≤ L1(s1, s♯2) ∀ s1 ∈ S1
    L2(s♯1, s♯2) ≤ L2(s♯1, s2) ∀ s2 ∈ S2

Clearly this means that

    L1(s♯1, s♯2) = min_{s1 ∈ S1} L1(s1, s♯2)
    L2(s♯1, s♯2) = min_{s2 ∈ S2} L2(s♯1, s2)

In words: a non-cooperative equilibrium is stable in the sense that if the players use such a pair, then no one has a reason to deviate from his strategy. If the strategy sets are finite and L is written as a matrix with entries the corresponding bilosses, then a non-cooperative equilibrium is a pair (s♯1, s♯2) such that L1(s♯1, s♯2) is the minimum in its column (only taking into account the L1 values) and L2(s♯1, s♯2) is the minimum in its row (only taking into account the L2 values). For the "Battle of the Sexes" we have the non-cooperative equilibria (s1^1, s2^1) and (s1^2, s2^2). Using this criterion, one can easily check that in the simplified poker no non-cooperative equilibria exist. This leads us to a crucial point in modern game theory, the extension of the strategy sets by so-called mixed strategies.

Definition 1.2.5. Let X be an arbitrary set and R^X the vector space of real valued functions on X, supplied with the topology of pointwise convergence. For any x ∈ X we define the corresponding Dirac measure δx as

² sometimes also called individually stable

    δx : R^X → R
    f ↦ f(x).

Any (finite) linear combination m = Σ_{i=1}^n λi δ_{xi}, which maps f ∈ R^X to

    m(f) = Σ_{i=1}^n λi f(xi),

is called a discrete measure. We say that m is positive if λi ≥ 0 for all i. We call m a discrete probability measure if it is positive and Σ_{i=1}^n λi = 1. We denote the set of discrete probability measures on X by M(X). One can easily check that the set M(X) is convex. This space is equipped with the weak topology.³ Furthermore we have a canonical embedding

    δ : X → M(X)
    x ↦ δx.

Let us assume now that we have a two person game G2 given by strategy sets S1, S2 and a biloss operator L = (L1, L2). We define a new game G̃2 as follows. As strategy sets we take

    S̃i := M(Si), i = 1, 2

and as biloss operator L̃ = (L̃1, L̃2) with

³ this means mn → m ⇔ mn(f) → m(f) ∀ f ∈ R^X

    L̃i : S̃1 × S̃2 → R
    (Σ_{i=1}^n λ1_i δ_{s1^i}, Σ_{j=1}^m λ2_j δ_{s2^j}) ↦ Σ_{i=1}^n Σ_{j=1}^m λ1_i λ2_j Li(s1^i, s2^j).

Definition 1.2.6. The sets M(Si) are called the mixed strategies of G2 and the game G̃2 is called the extension of G2 by mixed strategies. The strategies which are contained in the image of the canonical embedding

    δ : S1 × S2 → M(S1) × M(S2)
    (s1, s2) ↦ (δ_{s1}, δ_{s2})

are called pure strategies.

Exercise 1.2.2. Show that the extension of the simplified poker game has non-cooperative equilibria.

We will later see that for any zero sum game with finitely many pure strategies the extended game has non-cooperative equilibria. How can we interpret mixed strategies? Often it happens that games are not just played once, but repeated many times. If player one has, say, 2 pure strategies and the game is repeated, let's say, 100 times, then he can realize the mixed strategy 0.3 δ_{s1^1} + 0.7 δ_{s1^2} by playing the strategy s1^1 for 30 times and the strategy s1^2 for 70 times. Another interpretation is that every time he wants to play the mixed strategy λ1 δ_{s1^1} + λ2 δ_{s1^2} he does a random experiment which has two possible outcomes, one with probability λ1 and the other one with probability λ2. Then he decides for one of the pure strategies s1^1 resp. s1^2 corresponding to the outcome of the experiment. If there are only finitely many pure strategies, the mixed strategies also have a very nice geometric interpretation. Say S1 has n+1 elements. Then S̃1 is homeomorphic to the closed standard n-simplex ∆̄^n := {(λ0, ..., λn) ∈ R^{n+1} | λi ≥ 0, Σ_{i=0}^n λi = 1} by the map

    (λ0, ..., λn) ↦ Σ_{i=1}^{n+1} λ_{i−1} δ_{s1^i}.

This relationship brings the geometry into the game. It can also happen, in particular if one works with the extended game, that one has too many non-cooperative equilibria. In this case one would like to have some decision rule which of the equilibria one should choose. This leads to the concept of strict solutions for a game G2.

Definition 1.2.7. A pair of strategies (s̃1, s̃2) sub-dominates another pair (s1, s2) if L1(s̃1, s̃2) ≤ L1(s1, s2) and L2(s̃1, s̃2) ≤ L2(s1, s2), with strict inequality in at least one case. A pair of strategies (s1, s2) is called Pareto optimal if it is not sub-dominated.

Definition 1.2.8. A two person game G2 has a strict solution if:

1. there is an NCE within the set of Pareto optimal pairs
2. all Pareto optimal NCE's are interchangeable in the sense that if (s1, s2) and (s̃1, s̃2) are Pareto optimal NCE's, then so are (s1, s̃2) and (s̃1, s2).⁴

The interpretation of the first condition is that the two players wouldn't choose an equilibrium strategy knowing that both can do better (and one can do strictly better) by choosing different strategies. One can easily see that interchangeable equilibria have the same biloss, and so the second condition implies that all solutions in the strict sense have the same biloss. We will later discuss other solution concepts. We end this section with a definition which is important in particular in the context of cooperative games.

Definition 1.2.9. The core of the game G2 is the subset of all Pareto optimal strategies (s̃1, s̃2) such that L1(s̃1, s̃2) ≤ v♯1 and L2(s̃1, s̃2) ≤ v♯2, where v♯ = (v♯1, v♯2) denotes the conservative value of the game.

⁴ in this case one sometimes speaks of the collective stability property
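For finite games all of these concepts can be checked by direct enumeration. The sketch below finds the pure non-cooperative equilibria (via the row/column minimum criterion from Definition 1.2.4) and the Pareto optimal pairs for the same hypothetical "Battle of the Sexes" encoding used earlier; the entry values are an assumption, not taken from the notes.

```python
# Pure NCE's and Pareto optimal pairs of a finite biloss matrix (L1, L2).
BILOSS = [[(-1, -4), (0, 0)],
          [(0, 0), (-4, -1)]]

def nce(B):
    # (i,j) is an NCE iff L1(i,j) is minimal in its column and
    # L2(i,j) is minimal in its row
    out = []
    for i, row in enumerate(B):
        for j, (l1, l2) in enumerate(row):
            if l1 == min(B[k][j][0] for k in range(len(B))) and \
               l2 == min(x[1] for x in row):
                out.append((i, j))
    return out

def pareto(B):
    cells = [(i, j) for i in range(len(B)) for j in range(len(B[0]))]
    def dominated(i, j):
        # sub-dominated: some pair weakly better for both, strictly in one
        l1, l2 = B[i][j]
        return any(B[a][b][0] <= l1 and B[a][b][1] <= l2 and B[a][b] != (l1, l2)
                   for (a, b) in cells)
    return [(i, j) for (i, j) in cells if not dominated(i, j)]

print(nce(BILOSS))     # [(0, 0), (1, 1)]
print(pareto(BILOSS))  # [(0, 0), (1, 1)]: here both equilibria are Pareto optimal
```

With this matrix both equilibria are Pareto optimal, but they are not interchangeable (the off-diagonal pairs are not equilibria), so the first condition of Definition 1.2.8 holds while the second fails.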

At the end of this introductory chapter we demonstrate how the question of existence of equilibria is related to the question of the existence of fixed points. Assume that there exist maps

    C : S2 → S1
    D : S1 → S2

such that the following equations hold:

    L1(C(s2), s2) = min_{s1 ∈ S1} L1(s1, s2) ∀ s2 ∈ S2
    L2(s1, D(s1)) = min_{s2 ∈ S2} L2(s1, s2) ∀ s1 ∈ S1

Such maps C and D are called optimal decision rules. Then any solution (s̃1, s̃2) of the system

    C(s̃2) = s̃1
    D(s̃1) = s̃2

is a non-cooperative equilibrium. Denoting with F the function

    F : S1 × S2 → S1 × S2
    (s̃1, s̃2) ↦ (C(s̃2), D(s̃1)),

any fixed point⁵ (s̃1, s̃2) of F is a non-cooperative equilibrium. Hence we are in need of theorems about the existence of fixed points. The most famous one, the Banach fixed point theorem, in general does not apply, since the functions we consider are often not contractive.

⁵ if one has a map f : X → X, then any point x ∈ X with f(x) = x is called a fixed point
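For a finite game the optimal decision rules are just best replies, and the fixed point criterion can be checked by enumeration. The sketch again uses the hypothetical "Battle of the Sexes" matrix from before; note that when the minimizer is not unique, C and D involve a choice, so this only finds the equilibria compatible with the chosen tie-breaking.

```python
# Optimal decision rules C, D for a finite biloss matrix and the fixed points
# of (s1, s2) -> (C(s2), D(s1)), which are exactly non-cooperative equilibria.
BILOSS = [[(-1, -4), (0, 0)],
          [(0, 0), (-4, -1)]]

def C(j):  # player 1's best reply to column j (minimizes L1)
    return min(range(len(BILOSS)), key=lambda i: BILOSS[i][j][0])

def D(i):  # player 2's best reply to row i (minimizes L2)
    return min(range(len(BILOSS[0])), key=lambda j: BILOSS[i][j][1])

fixed_points = [(i, j) for i in range(2) for j in range(2)
                if C(j) == i and D(i) == j]
print(fixed_points)  # [(0, 0), (1, 1)]: the non-cooperative equilibria
```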

The second most famous is probably the Brouwer fixed point theorem, which we will discuss in the next chapter. Later we will also consider more general fixed point theorems which apply even in the framework of generalized functions, so-called correspondences.

1.3 The Duopoly Economy

In this section we try to illustrate the concepts of the previous section by applying them to one of the easiest models in economics, the so-called duopoly economy. In this economy we have two producers which compete on the market, i.e. they produce and sell the same product. We consider the producers as players in a game where the strategy sets are given by Si = R+ and a strategy s ∈ R+ stands for the production of s units of the product. It is reasonable to assume that the price of the product on the market is determined by demand. More precisely, we assume that there is an affine relationship of the form

    p(s1, s2) = α − β(s1 + s2)    (1.4)

where α, β are positive constants. This relationship says, more or less, that if the total production exceeds α/β, then no one wants to buy the product anymore. We assume that the individual cost functions for each producer are given by

    c1(s1) = γ1 s1 + δ1
    c2(s2) = γ2 s2 + δ2.

Here we interpret δ1, δ2 as fixed costs. The net costs for each producer are now given by

    L1(s1, s2) = c1(s1) − p(s1, s2) · s1 = βs1(s1 + s2 − (α − γ1)/β) + δ1
    L2(s1, s2) = c2(s2) − p(s1, s2) · s2 = βs2(s1 + s2 − (α − γ2)/β) + δ2.

The biloss operator is then defined by L = (L1, L2). Assume now player i chooses a strategy si ≥ (α − γi)/β =: ui. Then he has positive net costs. Obviously this is not the best choice. Since no producer will produce with positive net cost, we assume U = {(s1, s2) ∈ R+ × R+ | s1 ≤ u1, s2 ≤ u2}. For simplicity we will now assume that β = 1, δi = 0 for i = 1, 2 and γ1 = γ2.⁶ Then also u1 = u2 =: u and U = [0, u] × [0, u]. Let us first consider the conservative solutions of this game. We have

    L♯1(s1) = sup_{0≤s2≤u} s1(s1 + s2 − u) = s1²
    L♯2(s2) = sup_{0≤s1≤u} s2(s1 + s2 − u) = s2².

Hence inf_{0≤s1≤u} L♯1(s1) = 0 = inf_{0≤s2≤u} L♯2(s2), and the conservative solution for this game is (s̃1, s̃2) = (0, 0), which corresponds to the case where no one produces anything. For the conservative value of the game we obtain v♯ = (0, 0). Let us now consider the set of Pareto optimal strategies. For the sum of the individual net costs we have

    L1(s1, s2) + L2(s1, s2) = (s1 + s2)(s1 + s2 − u) = z(z − u)

⁶ it is a very good exercise to work out the general case

where we substituted z := s1 + s2. If s1, s2 range within U, z ranges between 0 and 2u, and hence the sum above ranges between −u²/4 and 2u². The image of L is therefore contained in the set

    {(x, y) ∈ R² | −u²/4 ≤ x + y ≤ 0} ∪ R+ × R+.

For strategy pairs (s̃1, s̃2) such that s̃1 + s̃2 = u/2 we have

    L1(s̃1, s̃2) + L2(s̃1, s̃2) = (s̃1 + s̃2)(s̃1 + s̃2 − u) = −u²/4.

Hence the set of Pareto optimal strategies is precisely the set

    Pareto = {(s̃1, s̃2) | s̃1 + s̃2 = u/2}.

This set is also the core of the game. Furthermore we have

    α = (−u²/4, −u²/4)

for the shadow minimum of the game. To see which strategy pairs are non-cooperative equilibria, we consider the optimal decision rules C, D with

    L1(C(s2), s2) = min_{s1 ∈ S1} L1(s1, s2) = min_{s1 ∈ S1} s1(s1 + s2 − u)
    L2(s1, D(s1)) = min_{s2 ∈ S2} L2(s1, s2) = min_{s2 ∈ S2} s2(s1 + s2 − u).

Differentiating with respect to s1 resp. s2 gives

    C(s2) = (u − s2)/2
    D(s1) = (u − s1)/2.

From the last section we know that any fixed point of the map (s1, s2) ↦ (C(s2), D(s1)) is a non-cooperative equilibrium. Solving

    s̃1 = C(s̃2) = (u − s̃2)/2
    s̃2 = D(s̃1) = (u − s̃1)/2

we get (s̃1, s̃2) = (u/3, u/3). This is the only non-cooperative equilibrium. These strategies yield the players a net loss of −u²/9 each. Since s̃1 + s̃2 = 2u/3, it is not Pareto optimal, and hence the duopoly game has no solution in the strict sense. Assume now player 1 is sure that player 2 uses the optimal decision rule D from above. Then he can choose his strategy s1 so as to minimize

    s1 ↦ L1(s1, D(s1)) = L1(s1, (u − s1)/2) = (1/2) s1(s1 − u).

This yields s1 = u/2. The second player then uses s̃2 = D(s̃1) = (u − u/2)/2 = u/4. The net losses are then

    −u²/8 < −u²/9 = NCE loss, for player 1
    −u²/16 > −u²/9 = NCE loss, for player 2.

This brings the second player in a much worse position. The pair (s̃1, s̃2) just computed is sometimes called the Stackelberg equilibrium. However, if player 2 has the same idea as player 1, then both play the strategy u/2, leading to a net loss of 0 for each player.
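The computations of this section can be verified numerically. The sketch below takes the simplified model (β = 1, δi = 0, γ1 = γ2), picks an arbitrary capacity u, iterates the best-reply map to locate the Cournot equilibrium (u/3, u/3), and evaluates the Stackelberg losses derived above.

```python
# Duopoly with L_i(s1, s2) = s_i * (s1 + s2 - u): best replies, Cournot NCE
# and Stackelberg losses.  u = 12 is an arbitrary choice of capacity.
u = 12.0

L1 = lambda s1, s2: s1 * (s1 + s2 - u)
L2 = lambda s1, s2: s2 * (s1 + s2 - u)
C = lambda s2: (u - s2) / 2          # player 1's best reply
D = lambda s1: (u - s1) / 2          # player 2's best reply

# iterate (s1, s2) -> (C(s2), D(s1)); the error halves each step,
# so the iteration converges to the unique fixed point (u/3, u/3)
s1, s2 = 0.0, 0.0
for _ in range(200):
    s1, s2 = C(s2), D(s1)

print(s1, s2)                        # u/3, u/3  (= 4.0, 4.0 here)
print(L1(u/3, u/3))                  # -u^2/9, the Cournot loss of each player
print(L1(u/2, u/4), L2(u/2, u/4))    # -u^2/8 and -u^2/16, the Stackelberg losses
```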

Chapter 2

Brouwer's Fixed Point Theorem and Nash's Equilibrium Theorem

The Brouwer fixed point theorem is one of the most important theorems in Topology. It can be seen as a multidimensional generalization of the intermediate value theorem in basic calculus, which implies that any continuous map f : [a, b] → [a, b] has a fixed point, that is a point x such that f(x) = x.

Theorem: Let X ⊂ Rm be convex and compact and let f : X → X be continuous. Then f has a fixed point.

This generalization is harder to prove than it appears at first. There are very nice proofs using methods from either algebraic or differential Topology. However, we will give a proof which does not depend on these methods and only uses very basic ideas from combinatorics. Our main tool will be Sperner's lemma, which was proven in 1928 and uses the idea of a proper labeling of a simplicial

complex. There are many applications of Brouwer's theorem. The most important one in the context of game theory is doubtless Nash's equilibrium theorem, which in its most elementary version guarantees the existence of non-cooperative equilibria for all games with finitely many pure strategies. This will be proven in the last part of this chapter.

2.1 Simplices

Definition 2.1.1. Let x0, ..., xn ∈ Rm be a set of linearly independent vectors.¹ The simplex spanned by x0, ..., xn, denoted x0...xn, is the set of all strictly positive convex combinations of the xi:

    x0...xn := {Σ_{i=0}^n λi xi : λi > 0 and Σ_{i=0}^n λi = 1}.    (2.1)

The xi are called the vertices of the simplex and each simplex of the form xi0...xik is called a face of x0...xn. We denote with x̄0...xn the closure of the simplex x0...xn. We have

    x̄0...xn = co(x0, ..., xn)

where the right hand side denotes the convex closure. If we refer to the dimension of the simplex, where n is as in the definition above, we also speak of an n-simplex.

Example 2.1.1. Let ei = (0, ..., 1, ..., 0) ∈ R^{n+1}, where the 1 occurs at the i-th position and we start counting positions with 0. Then ∆^n := e0...en is called the standard n-simplex.

For y = Σ_{i=0}^n λi xi ∈ x̄0...xn we define

¹ note that some authors actually mean closed simplexes when they speak of simplexes

    χ(y) = {i | λi > 0}.    (2.2)

The numbers λ0, ..., λn are called the barycentric coordinates of y. If χ(y) = {i0, ..., ik}, then y ∈ xi0...xik. This is the only face of x0...xn which contains y, and it is called the carrier of y.

Example 2.1.2. We have

    x̄0...xn = ⨿_{{i0,...,ik} ⊂ {0,...,n}} xi0...xik    (2.3)

where k runs from 0 to n and ⨿ stands for the disjoint union.

Exercise 2.1.1. Show that any n-simplex is homeomorphic to the standard n-simplex.

Definition 2.1.2. Let T = x̄0...xn be an n-simplex. A simplicial subdivision of T is a finite collection of simplices {Ti | i ∈ I} s.t. ∪_{i∈I} T̄i = T and for any pair i, j ∈ I with i ≠ j either T̄i ∩ T̄j = ∅ or T̄i ∩ T̄j is the closure of a common face. The mesh of a subdivision is the diameter of the largest simplex in the subdivision. For any simplex T = x0...xn the barycenter of T, denoted by b(T), is the point

    b(T) = 1/(n+1) Σ_{i=0}^n xi.    (2.4)

For simplices T1, T2 define T1 > T2 :⇔ T2 is a face of T1 and T2 ≠ T1. Given a simplex T, let us consider the family of all simplices of the form

b(T0) b(T1) ... b(Tk), where T ≥ T0 > T1 > ... > Tk and k runs from 0 to n. This defines a simplicial subdivision of T. It is called the first barycentric subdivision. Higher barycentric subdivisions are defined recursively. Clearly, for any 0-simplex v we have b(v) = v.

Example 2.1.3. (Barycentric subdivision)

Definition 2.1.3. Let T = x̄0...xn be simplicially subdivided. Let V denote the collection of all the vertices of all simplices in the subdivision. A function

    λ : V → {0, ..., n}    (2.5)

satisfying λ(v) ∈ χ(v) is called a proper labeling of the subdivision. We call a simplex in the subdivision completely labeled if λ assumes all values 0, ..., n on its set of vertices, and almost completely labeled if λ assumes exactly the values 0, ..., n−1. Let us study the situation in the following example:

Example 2.1.4. (Labeling and completely labeled simplex)

2.2 Sperner's Lemma

Theorem 2.2.1 (Sperner, 1928). Let T = x̄0...xn be simplicially subdivided and properly labeled by the function λ. Then there is an odd number of completely labeled simplices in the subdivision.

Proof. The proof goes by induction on n. If n = 0, then T = T̄ = x̄0 and λ(x0) ∈ χ(x0) = {0}. That means T is the only simplex in the subdivision and it is completely labeled. Let us now assume the theorem is true for n−1. Let

C = the set of all completely labeled n-simplices in the subdivision
A = the set of all almost completely labeled n-simplices in the subdivision
B = the set of all (n−1)-simplices in the subdivision which lie on the boundary and bear all the labels 0, ..., n−1
E = the set of all (n−1)-simplices in the subdivision which bear all the labels 0, ..., n−1

The sets C, A, B are pairwise disjoint; B however is a subset of E. Furthermore all simplices in B are contained in the face x0...xn−1. Let us now construct a graph in the following way. A graph consists of edges and nodes such that each edge joins exactly two nodes. If e denotes an edge and d a node, we write d ∈ e if d is one of the two nodes joined by e, and d ∉ e if not. We set

    Edges := E
    Nodes := D := C ∪ A ∪ B

and for d ∈ D, e ∈ E:

    d ∈ e :⇔ (d ∈ A ∪ C and e is a face of d) or (e = d ∈ B).

We have to check that this indeed defines a graph, i.e. that any edge e ∈ E joins exactly two nodes. An (n−1)-simplex lies either on the boundary, and is then the face of exactly one n-simplex in the subdivision, or it is the common face of two n-simplices. Here we have to consider two cases:

1. e lies on the boundary and bears all labels 0, ..., n−1. Then d1 :=

e ∈ B, and e is the face of exactly one n-simplex d2 ∈ A ∪ C. Interpreting d1 and d2 as nodes, we see that the definition above tells us that e joins d1 and d2 and no more.

2. e is a common face of two n-simplices d1 and d2. Then both belong to either A or C (they are at least almost completely labeled, since one of their faces, e in fact, bears all the labels 0, ..., n−1), and hence by the definition above e joins d1 and d2 and no more.

Let us compute the degree for each node in our graph. For each node d ∈ D the degree is defined by

    δ(d) = number of edges e s.t. d ∈ e.

We have to consider the following three cases:

1. d ∈ B: Then e := d ∈ E is the only edge such that d ∈ e, and therefore δ(d) = 1.

2. d ∈ C: Then d is completely labeled and hence has only one face which bears all the labels 0, ..., n−1. This means only one of the faces of d belongs to E, and hence using the definition we have δ(d) = 1.

3. d ∈ A: Then exactly two vertices v1 and v2 of d have the same label, and exactly two faces e1 and e2 of d belong to E (namely those which do not have both v1 and v2 as vertices; the remaining faces contain the duplicated label twice, bear at most n−1 different labels and hence do not belong to E). Hence d ∈ e1 and d ∈ e2 but no more. Therefore δ(d) = 2.

Summarizing we get the following:

    δ(d) = 1 if d ∈ B ∪ C
    δ(d) = 2 if d ∈ A.

In general, for a graph with nodes D and edges E one has the following relationship:

    Σ_{d∈D} δ(d) = 2|E|.    (2.6)

This relationship holds since, when counting the edges on each node and summing up over the nodes, one counts each edge exactly twice. For our graph it follows that

    2·|A| + |B| + |C| = Σ_{d∈D} δ(d) = 2|E|,

and this implies that |B| + |C| is even. The simplicial subdivision of T and the proper labeling function λ induce via restriction a simplicial subdivision and a proper labeling function for x̄0...xn−1. Then B is the set of completely labeled simplices in this simplicial subdivision with this proper labeling function. Hence by induction |B| is odd, and therefore |C| is odd, which was to prove.

In the proof of the Brouwer fixed point theorem it will not be so important how many completely labeled simplices there are in the simplicial subdivision, but that there is at least one. This is the statement of the following corollary.

Corollary 2.2.2. Let T = x̄0...xn be simplicially subdivided and properly labeled. Then there exists at least one completely labeled simplex in the simplicial subdivision.

Proof. Zero is not an odd number.

2.3 Proof of Brouwer's Theorem

We will first prove the following simplified version of the Brouwer fixed point theorem.

Proposition 2.3.1. Let f : ∆̄^n → ∆̄^n be continuous. Then f has a fixed point.
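Before turning to the proof, the parity statement of Sperner's lemma can be checked computationally in dimension one. There a subdivision of a 1-simplex is a partition of an interval; the two endpoints must keep labels 0 resp. 1, interior vertices may carry either label (their carrier is the whole simplex), and a completely labeled subinterval is one whose endpoints carry both labels. The sketch verifies the odd count for every proper labeling of a small subdivision.

```python
# Sperner's lemma in dimension 1: along any path of labels starting at 0 and
# ending at 1, the number of label changes 0<->1 is odd.
from itertools import product

def completely_labeled_count(labels):
    # labels = proper labeling of the subdivision vertices, left to right
    return sum(1 for a, b in zip(labels, labels[1:]) if {a, b} == {0, 1})

for interior in product([0, 1], repeat=4):   # every labeling of 4 interior vertices
    labels = [0, *interior, 1]               # endpoints forced by properness
    assert completely_labeled_count(labels) % 2 == 1

print("odd parity verified for all proper labelings")
```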

i (2.... .xik one element λ(v) ∈ {i0 ..7) j j Where xǫ. Since ∆ is compact we can ﬁnd a simplicial subdivision with mesh less than ǫ..i denotes the i-the component of xǫ . Then λ is a proper labeling function. 2 this is possible for example by using iterated barycentric subdivisions 27 .t..xn such ǫ ǫ that for any i ∈ {0..... .... ik } would imply n k k k n 1= i=0 fi (v) ≥ j=0 fij (v) > j=0 vij = i=0 vi = 1. This point is the common limit of all the sequences (xj ) for ǫ tending to 0. It follows from Corollary 2. Let us consider an arbitrary vertex v in the subdivision and v ∈ xi0 .. We now let ǫ tend to zero.7) and the continuity of f we get fi (x) ≤ xi ∀i ∈ {0. We denote this point with x. ik } ∩ {i|fi (v) ≤ vi } = ∅ since fi (v) > vi for all i ∈ {i0 . . Then {i0 ... 1.. Then we get a sequence of completely labeled simplexes x0 .. Let vi and fi denote the components of v and f .. This means. Let ǫ > 0. n}. ik }∩ {i|fi (v) ≤ vi }. n} there exists j s. .... . fi (xj ) ≤ xj ǫ ǫ..xik . Assume now that for one i ∈ {0. We deﬁne a labeling function λ : V → {0. Therefore ǫ using equation (2. n} by choosing for each vertex v ∈ xi0 .. n} we would have fi (x) < xi ..2 Let V be the set of vertices of this subdivision.. . ǫ ǫ n Since the mesh’ of the subdivisions converges to 0 and furthermore ∆ is compact we can extract a convergent subsequent which converges n to one point in ∆ .xn . there exists a simplex x0 ..Proof. .1 that there exists a completely labeled simplex in the simplicial subdivision.2.

    1 = Σ_{i=0}^n fi(x) < Σ_{i=0}^n xi = 1.

Therefore we must have fi(x) = xi for all i. This is the same as f(x) = x, and x is a fixed point.

We have the following corollary:

Corollary 2.3.2. Let X ≈ ∆̄^n and f : X → X be continuous. Then f has a fixed point.

In general a map φ : X → Y is called a homeomorphism if it is continuous, bijective and its inverse φ^{−1} : Y → X is also continuous. In this case we write X ≈ Y.

Proof. Let φ : X → ∆̄^n be a homeomorphism. Define

    fφ : ∆̄^n → ∆̄^n
    y ↦ φ(f(φ^{−1}(y))).

Clearly fφ is continuous and hence by Proposition 2.3.1 must have a fixed point ỹ, i.e. fφ(ỹ) = ỹ. Define x̃ := φ^{−1}(ỹ). Then

    f(x̃) = f(φ^{−1}(ỹ)) = φ^{−1}(fφ(ỹ)) = φ^{−1}(ỹ) = x̃.

The following proposition will help us to prove the Brouwer fixed point theorem in its general form.

Proposition 2.3.3. Let X ⊂ Rm be convex and compact. Then X ≈ Dn for some 0 ≤ n ≤ m, where Dn := {x ∈ Rn : ||x|| ≤ 1} is the n-dimensional unit ball.

Proof. Let us first assume that 0 is contained in the interior X° of X. Let v ∈ Rm, v ≠ 0. Consider the ray starting at the origin

    γ(t) := t · v, t ≥ 0.

We claim that this ray intersects the boundary ∂X in exactly one point. Since γ(0) ∈ X and X is compact, it is clear that it intersects ∂X in at least one point. Assume now that x, y ∈ ∂X ∩ γ([0, ∞)) and x ≠ y. Since x and y are collinear, w.l.o.g. we can assume that ||x|| > ||y||. Since 0 ∈ X° it follows that there exists ǫ > 0 such that

    D^m_ǫ := {z ∈ Rm : ||z|| ≤ ǫ} ⊂ X°.

Since X is convex and closed, we have co(x, D^m_ǫ) ⊂ X, and the convex hull co(x, D^m_ǫ) contains an open neighborhood of y. Hence y ∈ X°, which is a contradiction to y ∈ ∂X. Let us now consider the following function:

    f : ∂X → S^{m−1} := {z ∈ Rm : ||z|| = 1}
    x ↦ x/||x||.

Since X contains an open ball around the origin, 0 ∉ ∂X, so the map f is well defined; and since every ray from the origin meets ∂X, it is furthermore surjective. f is clearly continuous, and it follows from the discussion above that it is also injective (otherwise two elements in ∂X would lie on the same ray from the origin). Since ∂X is compact and S^{m−1} is Hausdorff, it follows that f is a homeomorphism.³ Hence the inverse map f^{−1} : S^{m−1} → ∂X is also continuous. Let us now define a map on the whole ball:

³ Result from Topology: if f : A → B is continuous and bijective, A compact and B Hausdorff, then f is a homeomorphism.

    k : Dm → X
    x ↦ ||x|| · f^{−1}(x/||x||) if x ≠ 0
    0 ↦ 0.

Since X is compact, there exists M ∈ R such that ||x|| ≤ M for all x ∈ X. Then also ||f^{−1}(x/||x||)|| ≤ M for all x ≠ 0 (note that f^{−1}(x/||x||) ∈ ∂X), and hence

    ||k(x)|| ≤ ||x|| · M.

It follows from this that the map k is continuous in 0. Continuity in all other points is clear, so that k is a continuous map. Assume that x, y ∈ Dm and k(x) = k(y). If k(x) = k(y) = 0 then x = y = 0, so let both be different from 0. Then

    ||x|| · f^{−1}(x/||x||) = ||y|| · f^{−1}(y/||y||).

Note that f^{−1}(z) is the point where the ray through z meets ∂X, so f^{−1}(x/||x||) and f^{−1}(y/||y||) are positive multiples of x/||x|| resp. y/||y||. Hence the equation above forces x/||x|| = y/||y||, therefore f^{−1}(x/||x||) = f^{−1}(y/||y||), and then the equation above gives ||x|| = ||y||, which finally implies that x = y. This shows that k is injective. k is also surjective: assume x ∈ X. Then, as in the first part of the proof, x can be written as x = t · x̄ where x̄ ∈ ∂X and t ∈ [0, 1], and f(x̄) = x̄/||x̄||, which is equivalent to x̄ = f^{−1}(x̄/||x̄||). Then

    x = t · x̄ = t · f^{−1}(x̄/||x̄||) = ||t · x̄/||x̄|| || · f^{−1}((t · x̄/||x̄||) / ||t · x̄/||x̄|| ||) = k(t · x̄/||x̄||),

where t · x̄/||x̄|| ∈ Dm. Repeating the previous argument, since now Dm is compact and X is Hausdorff, we have that k is a homeomorphism, and hence X ≈ Dm.

Let us now consider the general case. W.l.o.g. we can still assume that 0 ∈ X (by translation; translations are homeomorphisms), but we can no longer assume that 0 ∈ X°. However, we can find a maximal number of linearly independent vectors v1, ..., vn ∈ X. Then X ⊂ span(v1, ..., vn). Easy linear algebra shows that there exists a linear map φ : Rm → Rn which maps span(v1, ..., vn) isomorphically to Rn (and its orthogonal complement to zero). This map maps X homeomorphically to φ(X) ⊂ Rn, and since linear maps preserve convexity, φ(X) is still convex. Now we can apply our previous result to φ(X) and get X ≈ φ(X) ≈ Dn.

Corollary 2.3.4. Let X ⊂ Rm be convex and compact. Then X ≈ ∆̄^n for some 0 ≤ n ≤ m.

Proof. From the preceding proposition we know that there must exist n such that X ≈ Dn. Since for all n we have that ∆̄^n is convex and compact, we can use the preceding proposition to conclude that ∆̄^n ≈ Dn (take the dimensions into account). Then X ≈ Dn ≈ ∆̄^n.
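In dimension one the Brouwer theorem is exactly the intermediate value theorem applied to g(x) = f(x) − x, and a fixed point can be located constructively by bisection. A minimal sketch, using cos as an example of a continuous self-map of [0, 1]:

```python
# 1-dimensional Brouwer: g(x) = f(x) - x has g(0) >= 0 and g(1) <= 0 for any
# continuous f : [0,1] -> [0,1], so bisection on the sign of g finds a fixed point.
import math

def fixed_point(f, tol=1e-12):
    a, b = 0.0, 1.0
    while b - a > tol:
        m = (a + b) / 2
        if f(m) - m >= 0:   # keep the invariant g(a) >= 0 > g(b)
            a = m
        else:
            b = m
    return (a + b) / 2

x = fixed_point(math.cos)   # cos maps [0,1] into [cos 1, 1], a subset of [0,1]
print(round(x, 6))          # 0.739085, the solution of cos x = x
assert abs(math.cos(x) - x) < 1e-9
```

The same bisection idea does not generalize to higher dimensions, which is why the combinatorial machinery of Sperner's lemma is needed there.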

We are now able to prove the Brouwer fixed point theorem in its general form:

Theorem 2.3.5. Let X ⊂ Rm be convex and compact and let f : X → X be continuous. Then f has a fixed point.

Proof. By Corollary 2.3.4 we have X ≈ ∆̄^n. Hence the theorem follows by application of Corollary 2.3.2.

2.4 Nash's Equilibrium Theorem

Theorem 2.4.1. Let G2 be a game with finite strategy sets S1 and S2 and U = S1 × S2. Then there exists at least one non-cooperative equilibrium for the extended game G̃2.

Proof. Let L be the biloss operator of G2 and L̃ be its extension. Furthermore let S1 = {s1^i : i ∈ {1, ..., n}}, S2 = {s2^j : j ∈ {1, ..., m}} and l1 = −L̃1, l2 = −L̃2. Clearly S̃1 × S̃2 ≈ ∆̄^{n−1} × ∆̄^{m−1} is convex and compact. Let s̃1 ∈ S̃1 and s̃2 ∈ S̃2 be given as

    s̃1 = Σ_{i=1}^n λ1_i · δ_{s1^i}
    s̃2 = Σ_{j=1}^m λ2_j · δ_{s2^j}.

For 1 ≤ i ≤ n and 1 ≤ j ≤ m we define maps ci, dj : S̃1 × S̃2 → R as follows:

    ci(s̃1, s̃2) = max(0, l1(s1^i, s̃2) − l1(s̃1, s̃2))
    dj(s̃1, s̃2) = max(0, l2(s̃1, s2^j) − l2(s̃1, s̃2)).

Furthermore we define a map

    f : S̃1 × S̃2 → S̃1 × S̃2

    (s̃1, s̃2) ↦ ( Σ_{i=1}^n (λ1_i + ci(s̃1, s̃2)) / (1 + Σ_{i=1}^n ci(s̃1, s̃2)) · δ_{s1^i} , Σ_{j=1}^m (λ2_j + dj(s̃1, s̃2)) / (1 + Σ_{j=1}^m dj(s̃1, s̃2)) · δ_{s2^j} )

and denote the new coefficients by λ̃1_i resp. λ̃2_j. Clearly Σ_{i=1}^n λ̃1_i = 1 = Σ_{j=1}^m λ̃2_j, and hence the right side in the expression above indeed defines an element in S̃1 × S̃2. Obviously the map f is continuous, and hence by application of the Brouwer fixed point theorem there must exist a pair (s̃♯1, s̃♯2) such that

    (s̃♯1, s̃♯2) = f(s̃♯1, s̃♯2).

Writing s̃♯1 = Σ_{i=1}^n λ1♯_i · δ_{s1^i} and s̃♯2 = Σ_{j=1}^m λ2♯_j · δ_{s2^j}, we see that the equation above is equivalent to the following set of equations:

    λ1♯_i = (λ1♯_i + ci(s̃♯1, s̃♯2)) / (1 + Σ_{i=1}^n ci(s̃♯1, s̃♯2)) ∀ 1 ≤ i ≤ n
    λ2♯_j = (λ2♯_j + dj(s̃♯1, s̃♯2)) / (1 + Σ_{j=1}^m dj(s̃♯1, s̃♯2)) ∀ 1 ≤ j ≤ m.

Let us assume now that for all 1 ≤ i ≤ n with λ1♯_i > 0 we have l1(s1^i, s̃♯2) > l1(s̃♯1, s̃♯2). Then, using that the extended biloss operator and hence also l1 and l2 are bilinear, we have

    l1(s̃♯1, s̃♯2) = Σ_{i=1}^n λ1♯_i l1(s1^i, s̃♯2) > Σ_{i=1}^n λ1♯_i l1(s̃♯1, s̃♯2) = l1(s̃♯1, s̃♯2),

which is a contradiction. Therefore there must exist i ∈ {1, ..., n} with λ1♯_i > 0 such that l1(s1^i, s̃♯2) ≤ l1(s̃♯1, s̃♯2). For this i we have ci(s̃♯1, s̃♯2) = 0, and hence it follows from our set of equations above that for this i we have

    λ1♯_i = λ1♯_i / (1 + Σ_{i=1}^n ci(s̃♯1, s̃♯2)).

This equation, since λ1♯_i > 0, can only hold if Σ_{i=1}^n ci(s̃♯1, s̃♯2) = 0, which by positivity of the maps ci can only be true if ci(s̃♯1, s̃♯2) = 0 for all 1 ≤ i ≤ n. By definition of the maps ci this means nothing else than

    l1(s1^i, s̃♯2) ≤ l1(s̃♯1, s̃♯2) ∀ 1 ≤ i ≤ n.

Using again the bilinearity of l1, we get for arbitrary s̃1 = Σ_{i=1}^n λ1_i · δ_{s1^i}

    l1(s̃1, s̃♯2) = Σ_{i=1}^n λ1_i l1(s1^i, s̃♯2) ≤ Σ_{i=1}^n λ1_i l1(s̃♯1, s̃♯2) = l1(s̃♯1, s̃♯2).

A similar argument involving λ2♯_j and dj, l2 shows that for arbitrary s̃2 = Σ_{j=1}^m λ2_j · δ_{s2^j}

    l2(s̃♯1, s̃2) ≤ l2(s̃♯1, s̃♯2).

Using L̃1 = −l1 and L̃2 = −l2 we get

    L̃1(s̃1, s̃♯2) ≥ L̃1(s̃♯1, s̃♯2) ∀ s̃1 ∈ S̃1
    L̃2(s̃♯1, s̃2) ≥ L̃2(s̃♯1, s̃♯2) ∀ s̃2 ∈ S̃2,

which shows that (s̃♯1, s̃♯2) is a non-cooperative equilibrium for G̃2.

2.5 Two Person Zero Sum Games and the Minimax Theorem

Definition 2.5.1. A two person game G2 is called a zero sum game if L1 = −L2. Nim(2,2) and the "simplified poker" are zero sum games, the "Battle of the Sexes" is not.
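The map f from the proof of Theorem 2.4.1 can be written out concretely for a small zero sum game. The sketch below uses "matching pennies" with gain matrix l1 (so l2 = −l1) and checks that the mixed pair (1/2, 1/2) for each player is a fixed point of the map: all improvement terms ci, dj vanish there, so by the argument above it is an NCE.

```python
# The Nash map from the proof of Theorem 2.4.1 for matching pennies.
A = [[1, -1], [-1, 1]]                        # gains l1(s1^i, s2^j); l2 = -l1

def l1(p, q):
    return sum(p[i] * q[j] * A[i][j] for i in range(2) for j in range(2))

def l2(p, q):
    return -l1(p, q)                          # zero sum

def pure(i):
    return [1.0 if k == i else 0.0 for k in range(2)]

def nash_map(p, q):
    c = [max(0.0, l1(pure(i), q) - l1(p, q)) for i in range(2)]
    d = [max(0.0, l2(p, pure(j)) - l2(p, q)) for j in range(2)]
    return ([(p[i] + c[i]) / (1 + sum(c)) for i in range(2)],
            [(q[j] + d[j]) / (1 + sum(d)) for j in range(2)])

p, q = [0.5, 0.5], [0.5, 0.5]
print(nash_map(p, q))   # ([0.5, 0.5], [0.5, 0.5]): stationary, hence an NCE
```

Note that the proof only uses the Nash map to assert existence of a fixed point; simply iterating the map from an arbitrary starting point is not guaranteed to converge.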

The application of Theorem 2.4.1 in this context yields a theorem which is called the MiniMax Theorem and has applications in many different parts of mathematics.

Theorem 2.5.2 (MiniMax Theorem). Let G2 be a zero sum game with finite strategy sets. Then for the extended game G̃2 we have

    max_{s̃2 ∈ S̃2} min_{s̃1 ∈ S̃1} L1(s̃1, s̃2) = min_{s̃1 ∈ S̃1} max_{s̃2 ∈ S̃2} L1(s̃1, s̃2),

and for any NCE (s̃♯1, s̃♯2) this value coincides with L1(s̃♯1, s̃♯2). In particular all NCE's have the same biloss.⁴

Proof. Clearly we have for all s̃1 ∈ S̃1, s̃2 ∈ S̃2 that

    min_{s̃1 ∈ S̃1} L1(s̃1, s̃2) ≤ L1(s̃1, s̃2) ≤ max_{s̃2 ∈ S̃2} L1(s̃1, s̃2).

Taking the maximum over all strategies s̃2 ∈ S̃2 on both sides, we get for all s̃1 ∈ S̃1

    max_{s̃2 ∈ S̃2} min_{s̃1 ∈ S̃1} L1(s̃1, s̃2) ≤ max_{s̃2 ∈ S̃2} L1(s̃1, s̃2).

Taking the minimum over all strategies s̃1 ∈ S̃1 on the right side of the last equation, we get that

⁴ For zero sum games one usually only denotes L1, since then L2 is determined by the negative values of L1. We also denote the biloss operator for the extended game with L instead of L̃, just to make it readable and save energy.

    max_{s̃2 ∈ S̃2} min_{s̃1 ∈ S̃1} L1(s̃1, s̃2) ≤ min_{s̃1 ∈ S̃1} max_{s̃2 ∈ S̃2} L1(s̃1, s̃2).    (2.8)

It follows from Theorem 2.4.1 that there exists at least one NCE (s̃♯1, s̃♯2). Then

    L1(s̃♯1, s̃♯2) = min_{s̃1 ∈ S̃1} L1(s̃1, s̃♯2)
    L2(s̃♯1, s̃♯2) = min_{s̃2 ∈ S̃2} L2(s̃♯1, s̃2).

Using that L2 = −L1 and min(−...) = −max(...), we get that the second equation above is equivalent to

    L1(s̃♯1, s̃♯2) = max_{s̃2 ∈ S̃2} L1(s̃♯1, s̃2).

Now we have

    min_{s̃1 ∈ S̃1} max_{s̃2 ∈ S̃2} L1(s̃1, s̃2) ≤ max_{s̃2 ∈ S̃2} L1(s̃♯1, s̃2) = L1(s̃♯1, s̃♯2)
    = min_{s̃1 ∈ S̃1} L1(s̃1, s̃♯2) ≤ max_{s̃2 ∈ S̃2} min_{s̃1 ∈ S̃1} L1(s̃1, s̃2).

Together with (2.8) we get

    max_{s̃2 ∈ S̃2} min_{s̃1 ∈ S̃1} L1(s̃1, s̃2) = L1(s̃♯1, s̃♯2) = min_{s̃1 ∈ S̃1} max_{s̃2 ∈ S̃2} L1(s̃1, s̃2).

Since (s̃♯1, s̃♯2) was an arbitrary NCE, the statement of the theorem follows.
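The MiniMax theorem can be checked numerically for matching pennies. Since L1 is bilinear in the mixed strategies, the inner optimum is always attained at a pure strategy, so it suffices to scan a grid of mixed strategies for the outer player.

```python
# Numerical check of max-min = min-max for the matching pennies biloss matrix.
A = [[1, -1], [-1, 1]]            # L1(s1^i, s2^j); player 1 minimizes L1

def L1(p, q):
    return sum(p[i] * q[j] * A[i][j] for i in range(2) for j in range(2))

P0, P1 = [1.0, 0.0], [0.0, 1.0]                 # the pure strategies
grid = [k / 1000 for k in range(1001)]          # mixed strategies (t, 1-t)

# inner optimum over the opponent taken over pure strategies (bilinearity)
minmax = min(max(L1([t, 1 - t], P0), L1([t, 1 - t], P1)) for t in grid)
maxmin = max(min(L1(P0, [t, 1 - t]), L1(P1, [t, 1 - t])) for t in grid)
print(maxmin, minmax)             # both 0.0, the common value of the game
```

Both optima are attained at t = 1/2, reproducing the mixed NCE of matching pennies; note that over the pure strategies alone max-min and min-max would differ (−1 versus 1), which is exactly why the extension by mixed strategies is needed.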

Chapter 3

More general Equilibrium Theorems

3.1 N-Person Games and Nash's generalized Equilibrium Theorem

Definition 3.1.1. Let N = {1, ..., n}. An N-person (or n-person or n-player) game Gn consists of the following data:

1. Topological spaces S1, ..., Sn, the so-called strategy sets for players 1 to n
2. A subset S(N) ⊂ S1 × ... × Sn, the so-called allowed or feasible multi-strategies
3. A (multi-)loss operator L = (L1, ..., Ln) : S1 × ... × Sn → Rn.

All of the definitions in chapter 1 in the framework of 2-player games can be generalized to n-player games. This is in fact a very good exercise. We will restrict ourselves though to the reformulation of the concept of non-cooperative equilibria within n-player games.

Definition 3.1.2. A multi-strategy s = (s1, ..., sn) is called a non-cooperative equilibrium (in short NCE) for the game Gn if for all i we have

Li(s) = min{ Li(s̃) : s̃ ∈ ({s1} × ... × {si−1} × Si × {si+1} × ... × {sn}) ∩ S(N) }.

The following theorem is a generalization of Theorem 2.4.1. It is also due to Nash. One should mention though, that we do not assume that the game in question is the extended version of a game with only finitely many strategies and also the biloss operator does not have to be the bilinearly extended version of a biloss operator for such a game.

Theorem 3.1.1. Nash (general Version) : Given an N-player game as above. Suppose that S(N) is convex and compact and for any i ∈ {1, ..., n} the function Li(s1, ..., si−1, ·, si+1, ..., sn), considered as a function in the i-th variable when s1, ..., si−1, si+1, ..., sn is fixed but arbitrary, is convex and continuous. Then there exists an NCE in Gn.

The proof will follow later. Before we go into the proof we have to build up some general concepts.

3.2 Correspondences

The concept of correspondences is a generalization of the concept of functions. In the context of game theory it can be used on one side to define so called generalized games and on the other side it turns out to be very useful in many proofs. For an arbitrary set M we denote with P(M) the power set of M; this is the set which contains as elements the subsets of M.

Definition 3.2.1. Let X, Y be sets. A map γ : X → P(Y) is called a correspondence. We write γ : X →→ Y. We denote with

Gr(γ) := {(x, y) ∈ X × Y : y ∈ γ(x)} ⊂ X × Y

the graph of γ.

The following example shows how maps can be considered as correspondences and that the definition above really is a generalization of the concept of maps.

Example 3.2.1. Any map f : X → Y can be considered as the correspondence f̃ : X →→ Y where f̃(x) = {f(x)} ∈ P(Y). In general the inverse f−1 of f as a map is not defined. However it makes sense to speak of the inverse correspondence f−1 : Y →→ X where y ∈ Y maps to the preimage f−1({y}) ∈ P(X).

Definition 3.2.2. Let γ : X →→ Y be a correspondence, E ⊂ Y and F ⊂ X. The image of F under γ is defined by

γ(F) = ∪_{x ∈ F} γ(x).

The upper inverse of E under γ is defined by

γ+[E] = {x ∈ X : γ(x) ⊂ E}.

The lower inverse of E under γ is defined by

γ−[E] = {x ∈ X : γ(x) ∩ E ≠ ∅}.

Furthermore for y ∈ Y we set γ−1(y) = {x ∈ X | y ∈ γ(x)} = γ−[{y}].

Example 3.2.2. Assume the correspondence f̃ : X →→ Y is given by a map f as in Example 3.2.1. Then we have :

f̃+[E] = {x ∈ X : {f(x)} ⊂ E} = {x ∈ X : f(x) ∈ E} = f−1(E)
f̃−[E] = {x ∈ X : {f(x)} ∩ E ≠ ∅} = {x ∈ X : f(x) ∈ E} = f−1(E).

It is easy to see that when γ has nonempty values one always has γ+[E] ⊂ γ−[E]; in general though there is no clear relation between the upper and the lower inverse unless the correspondence is given by a map.
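For finite sets the upper and lower inverse of Definition 3.2.2 can be computed directly. The following sketch is our own illustration (the function names are ours, not from the notes); a correspondence is represented as a dict mapping each point x to the set γ(x):

```python
def upper_inv(gamma, E):
    """gamma^+[E] = {x : gamma(x) is a subset of E}."""
    return {x for x, vals in gamma.items() if vals <= E}

def lower_inv(gamma, E):
    """gamma^-[E] = {x : gamma(x) meets E}."""
    return {x for x, vals in gamma.items() if vals & E}

# A genuine correspondence: gamma(1) contains two points.
gamma = {0: {'a'}, 1: {'a', 'b'}, 2: {'b'}}
E = {'a'}
print(upper_inv(gamma, E))  # {0}: only gamma(0) lies entirely inside E
print(lower_inv(gamma, E))  # {0, 1}: gamma(0) and gamma(1) meet E

# For a correspondence induced by a map, both inverses coincide with
# the usual preimage f^{-1}(E), as computed in Example 3.2.2.
f = {0: {'a'}, 1: {'b'}, 2: {'b'}}
assert upper_inv(f, E) == lower_inv(f, E) == {0}
```

Note that the output also illustrates the inclusion γ+[E] ⊂ γ−[E] for nonempty valued correspondences.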

This means that if one considers correspondences which are actually maps upper- and lower inverse coincide. For general correspondences this is however not the case. Knowing that correspondences are generalized functions we would like to generalize the deﬁnition of continuity to correspondences. We assume from now on that X and Y are topological spaces. Deﬁnition 3.2.3. Let γ : X →→ Y be a correspondence. Then γ is called upper hemi continuous or short uhc at x ∈ X if

x ∈ γ + [V ] for V ⊂ Y open ⇒ ∃ open neighborhood U ⊂ X of x s.t. U ⊂ γ + [V ]. γ is called lower hemi continuous or short lhc at x ∈ X if

x ∈ γ−[V] for V ⊂ Y open ⇒ ∃ open neighborhood U ⊂ X of x s.t. U ⊂ γ−[V].

γ is called uhc on X if γ is uhc at x for all x ∈ X and lhc on X if γ is lhc at x for all x ∈ X. As one can see directly from the definition, γ is uhc iff

V ⊂ Y open ⇒ γ+[V] ⊂ X open

and lhc iff

V ⊂ Y open ⇒ γ−[V] ⊂ X open.

Example 3.2.2 now says that a map f is continuous ( as a map ) if and only if it is uhc ( considered as a correspondence ) and this is the case if and only if it is lhc ( considered as a correspondence ).

Definition 3.2.4. A correspondence γ : X →→ Y is called continuous if it is uhc and lhc on X.

Definition 3.2.5. Let γ : X →→ Y be a correspondence. Then γ is called closed at the point x ∈ X if

xn → x, yn ∈ γ(xn) ∀n and yn → y ⇒ y ∈ γ(x).


A correspondence is said to be closed if it is closed at every point x ∈ X. This is precisely the case if the graph Gr(γ) is closed as a subset of X × Y. γ is called open if Gr(γ) ⊂ X × Y is an open subset.

Definition 3.2.6. Let γ : X →→ Y be a correspondence. We say γ has open ( closed ) sections if for each x ∈ X the set γ(x) is open ( closed ) in Y and for each y ∈ Y the set γ−[{y}] is open ( closed ) in X.

Proposition 3.2.1. Let X ⊂ Rm, Y ⊂ Rk and γ : X →→ Y a correspondence.

1. If γ is uhc and ∀x ∈ X γ(x) is a closed subset of Y, then γ is closed.

2. If Y is compact and γ is closed, then γ is uhc.

3. If γ is open, then γ is lhc.

4. If |γ(x)| = 1 ∀x ( so that γ can actually be considered as a map ) and γ is uhc at x, then γ is continuous at x.

5. If γ has open lower sections ( i.e. γ−[{y}] open in X ∀y ∈ Y ), then γ is lhc.

Proof. 1. We have to show that Gr(γ) ⊂ X × Y is closed, i.e. its complement is open. Assume (x, y) ∉ Gr(γ), i.e. y ∉ γ(x). Since γ(x) is closed and hence its complement is open, there exists a closed neighborhood U of y s.t. U ∩ γ(x) = ∅. Then V = U^c is an open neighborhood of γ(x). Since γ is uhc and x ∈ γ+[V] there exists an open neighborhood W of x s.t. W ⊂ γ+[V]. We have γ(w) ⊂ V ∀w ∈ W. This implies that Gr(γ) ∩ (W × Y) ⊂ W × V = W × U^c and hence Gr(γ) ∩ (W × U) = ∅. Since y has to be in the interior of U ( otherwise U wouldn't be a neighborhood of y ), we have that W × U° is an open neighborhood of (x, y) contained in Gr(γ)^c, which shows that Gr(γ)^c is open.

2. Suppose γ were not uhc. Then there would exist x ∈ X and an open neighborhood V of γ(x) s.t. for all neighborhoods U of x we would have U ⊄ γ+[V], i.e. there exists z ∈ U s.t. γ(z) ⊄ V. By making U smaller and smaller we can find a sequence zn → x and yn ∈ γ(zn) s.t. yn ∉ V, i.e. yn ∈ V^c. Since Y is compact, (yn) has a convergent subsequence and w.l.o.g. (yn) itself is convergent. We denote with y = limn yn its limit. Since V^c is closed we must have y ∈ V^c. From the closedness of γ however it follows that y ∈ γ(x) ⊂ V, which is clearly a contradiction.

3. exercise !

4. exercise !

5. exercise !
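The closedness condition of Definition 3.2.5 can be tested on sequences. The following sketch is our own example: the correspondence γ(0) = {0}, γ(x) = [0, 1] for x ≠ 0 is lhc, but a sequence yn ∈ γ(xn) with xn → 0 can converge to a limit outside γ(0), so γ is neither closed nor uhc at 0.

```python
# Our own example: gamma(0) = {0} and gamma(x) = [0, 1] for x != 0,
# each value represented by its interval endpoints (lo, hi).
def gamma(x):
    return (0.0, 0.0) if x == 0 else (0.0, 1.0)

# A sequence x_n -> 0 together with y_n = 1 in gamma(x_n).
xs = [1.0 / n for n in range(1, 1001)]
ys = [gamma(x)[1] for x in xs]       # y_n = 1 for every n

y = ys[-1]                           # the constant sequence converges to 1
lo, hi = gamma(0.0)
print(lo <= y <= hi)                 # False: the limit y = 1 escapes gamma(0)
```

By part 1 of the proposition above, the failure of closedness here is consistent with γ failing to be uhc at 0, even though all values of γ are closed.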

Proposition 3.2.2. (Sequential Characterization of Hemi Continuity) Let X ⊂ Rm, Y ⊂ Rk and γ : X →→ Y be a correspondence.

1. Assume ∀x ∈ X that γ(x) is compact. Then γ uhc ⇔ for every sequence xn → x and yn ∈ γ(xn) there exists a convergent subsequence ynk → y and y ∈ γ(x).

2. γ is lhc ⇔ xn → x and y ∈ γ(x) implies there exists a sequence yn ∈ γ(xn) with yn → y.

Proof. 1. "⇒" : Assume xn → x and yn ∈ γ(xn). Since γ(x) is compact it has a bounded neighborhood V. Since γ is uhc at x there exists a neighborhood U of x s.t. γ(U) ⊂ V. Since xn → x there exists n0 ∈ N s.t. ∀n ≥ n0 we have xn ∈ U. Then since yn ∈ γ(xn) ⊂ γ(U) ⊂ V for all n ≥ n0 and V is bounded, (yn) has a convergent subsequence ynk → y. Clearly y lies in the closure of V. By making V smaller and smaller ( for example one can take Vǫ := ∪_{z ∈ γ(x)} Bǫ(z) and let ǫ go to zero ) we see that y lies in any neighborhood of γ(x). Since γ(x) is compact, hence also closed, we must have y ∈ γ(x).

"⇐" : Suppose γ is not uhc. Then there exists x ∈ X and a neighborhood V of γ(x) s.t. for any open neighborhood U of x we have U ⊄ γ+[V]. Making U smaller and smaller we get a sequence xn → x s.t. γ(xn) ⊄ V. By choosing yn ∈ γ(xn) ∩ V^c we get a sequence which does not enter into V. Since V is an open neighborhood of the compact set γ(x), such a sequence cannot converge to a limit y ∈ γ(x). This however is a contradiction to the assumption on the right side in 1.

2. exercise !

Definition 3.2.7. Let γ : X →→ Y be a correspondence. Then x ∈ X is called a fixed point of γ if x ∈ γ(x).

Definition 3.2.8. A convex set Y is a polytope, if it is the convex hull of a finite set.

Example 3.2.3. Examples of polytopes are, but not only, simplices.

Proposition 3.2.3. Let X ⊂ Rm and Y ⊂ Rk, where Y is a polytope, and let γ : X →→ Y be a correspondence. If for all x ∈ X the set γ(x) is convex and has open sections, then γ has an open graph.

Proof. Let (x, y) ∈ Gr(γ), i.e. y ∈ γ(x). Since γ has open sections and Y is a polytope, there is a neighborhood U of y contained in γ(x) s.t. U is itself the interior of a polytope. Assume more precisely that U = (co(y1, ..., yn))°. Since γ has open sections the sets Vi := γ−[{yi}] are open for all i. Clearly for all z ∈ Vi we have yi ∈ γ(z) and furthermore x ∈ Vi for all i. The set V := ∩_{i=1}^n Vi is nonempty and open and furthermore contains x. W := V × U is open in X × Y. Let (x′, y′) ∈ W. Then yi ∈ γ(x′) ∀i. Since γ(x′) is convex we have that y′ ∈ U = (co(y1, ..., yn))° ⊂ γ(x′) ⇒ (x′, y′) ∈ Gr(γ). Therefore W ⊂ Gr(γ) and W is an open neighborhood of (x, y).

Definition 3.2.9. (Closure of a Correspondence) Let γ : X →→ Y be a correspondence, then

γ̄ : X →→ Y
x → γ̄(x) := closure of γ(x) in Y

is called the closure of γ.

Proposition 3.2.4. Let X ⊂ Rm, Y ⊂ Rk and γ : X →→ Y a correspondence.

1. γ : X →→ Y uhc at x ⇒ γ̄ : X →→ Y uhc at x

2. γ : X →→ Y lhc at x ⇔ γ̄ : X →→ Y lhc at x

Proof. Exercise !

Proposition 3.2.5.

1. Let γ : X →→ Y be uhc s.t. γ(x) is compact ∀x ∈ X and let K be a compact subset of X. Then γ(K) is compact.

2. Let X ⊂ Rm and γ : X →→ Rm uhc with γ(x) closed ∀x. Then the set of all fixed points of γ is closed.

3. Let X ⊂ Rm and γ, µ : X →→ Rm uhc with γ(x), µ(x) closed ∀x. Then {x ∈ X | γ(x) ∩ µ(x) ≠ ∅} is closed in X.

4. Let X ⊂ Rm and γ : X →→ Rm lhc ( resp. uhc ). Then {x ∈ X | γ(x) ≠ ∅} is open ( resp. closed ) in X.

Proof. Exercise !

Definition 3.2.10. (Intersection of correspondences) Let γ, µ : X →→ Y be correspondences, then define their intersection as

γ ∩ µ : X →→ Y
x → γ(x) ∩ µ(x).

Proposition 3.2.6. Let X ⊂ Rm, Y ⊂ Rk and γ, µ : X →→ Y be correspondences.

1. If µ is closed at x, γ is uhc at x and γ(x) is compact, then γ ∩ µ is uhc at x.

2. If γ, µ are uhc at x and γ(z), µ(z) are closed for all z ∈ X, then γ ∩ µ is uhc at x.

3. If γ is lhc at x and if µ has open graph, then γ ∩ µ is lhc at x.

Proof. Suppose γ(z) ∩ µ(z) ≠ ∅ ∀z ∈ X.

1. Let U be an open neighborhood of γ(x) ∩ µ(x) and define C := γ(x) ∩ U^c. In this case C is compact and µ(x) ∩ C = ∅. For y ∈ C we have y ∉ µ(x), so because of the closedness of µ at x there cannot exist a sequence xn → x with yn ∈ µ(xn) and yn → y. This implies that there exist a neighborhood Uy of y and a neighborhood Wy of x s.t. µ(Wy) ⊂ Uy^c. Since C is compact we can find Uy1, ..., Uyn, Wy1, ..., Wyn as above such that C ⊂ V2 := Uy1 ∪ ... ∪ Uyn. We set W1 := Wy1 ∩ ... ∩ Wyn. Then µ(W1) ⊂ V2^c. We have that γ(x) = (γ(x) ∩ U) ∪ (γ(x) ∩ U^c) ⊂ U ∪ C ⊂ U ∪ V2. Since U ∪ V2 is open it follows from the upper hemi continuity of γ that there exists an open neighborhood W2 of x s.t. γ(W2) ⊂ U ∪ V2. We set W = W1 ∩ W2. Then W is a neighborhood of x and for all z ∈ W we have γ(z) ∩ µ(z) ⊂ (U ∪ V2) ∩ V2^c = U ∩ V2^c ⊂ U. Hence γ ∩ µ is uhc at x.

2. Let U and C := γ(x) ∩ U^c be as in 1. In this case C is closed and µ(x) ∩ C = ∅. Therefore there exist open sets V1, V2 s.t. µ(x) ⊂ V1, C ⊂ V2 and V1 ∩ V2 = ∅. Since µ is uhc at x and x ∈ µ+[V1] there exists a neighborhood W1 of x with µ(W1) ⊂ V1 ⊂ V2^c. Now we choose W2 for x and γ as in 1.) and proceed similarly as in 1.).

3. Let y ∈ (γ ∩ µ)(x) ∩ U for U ⊂ Y open. Since µ has an open graph, there is a neighborhood W × V of (x, y) which is contained in Gr(µ). Since γ is lhc at x we find that γ−[U ∩ V] ∩ W is a neighborhood of x in X, and if z ∈ γ−[U ∩ V] ∩ W then there exists y′ ∈ γ(z) ∩ U ∩ V; since (z, y′) ∈ W × V ⊂ Gr(µ) we get y′ ∈ (γ ∩ µ)(z) ∩ U, in particular (γ ∩ µ)(z) ∩ U ≠ ∅. This however implies that γ ∩ µ is lhc.

As one can do with ordinary maps one can compose correspondences.

Definition 3.2.11. (Composition of Correspondences) Let µ : X →→ Y, γ : Y →→ Z be correspondences. Define

γ ◦ µ : X →→ Z
x → ∪_{y ∈ µ(x)} γ(y).

γ ◦ µ is called the composition of γ and µ.

Proposition 3.2.7. Let γ, µ be as above, then

1. γ, µ uhc ⇒ γ ◦ µ uhc

2. γ, µ lhc ⇒ γ ◦ µ lhc

Proof. Exercise !

Definition 3.2.12. (Products of Correspondences) Let γi : X →→ Yi for i = 1, ..., k be correspondences. Then the correspondence

∏i γi : X →→ ∏i Yi
x → ∏i γi(x)

is called the product of the γi.

Proposition 3.2.8. Assume γi are correspondences as above.

1. γi uhc at x and γi(z) compact ∀z ∈ X and ∀i ⇒ ∏i γi uhc at x

2. γi lhc at x and γi(z) compact ∀z ∈ X and ∀i ⇒ ∏i γi lhc at x

3. γi closed at x ∀i ⇒ ∏i γi is closed at x

4. γi has open graph ∀i ⇒ ∏i γi has open graph

Proof. 1.) and 2.) follow directly from Proposition 3.2.2 ( sequential characterization of hemi continuity ) and the fact that a sequence in a product space converges if and only if all its component sequences converge. 3.) and 4.) are clear.

Definition 3.2.13. Let Yi ⊂ Rk for i = 1, ..., k and γi : X →→ Yi be correspondences. Then

∑i γi : X →→ ∑i Yi := { ∑i yi : yi ∈ Yi }
x → ∑i γi(x)

is called the sum of the γi.

Proposition 3.2.9. Let γi : X →→ Yi be as above. Then

1. γi uhc and γi compact valued ∀i ⇒ ∑i γi uhc and compact valued

2. γi lhc ∀i ⇒ ∑i γi lhc

3. γi has open graph ∀i ⇒ ∑i γi has open graph

Proof. Follows again from Proposition 3.2.2.

Definition 3.2.14. ( Convex Hull of a Correspondence ) Let γ : X →→ Y be a correspondence and Y be convex, then we define the convex hull of γ as

co(γ) : X →→ Y
x → co(γ(x)).

Proposition 3.2.10. Let γ : X →→ Y be a correspondence and Y be convex. Then

1. γ uhc at x and compact valued ⇒ co(γ) is uhc at x

2. γ lhc at x ⇒ co(γ) is lhc at x

3. γ has open graph ⇒ co(γ) has open graph.

Proof. Exercise !

Proposition 3.2.11. Let X ⊂ Rm, Y ⊂ Rk and Y be a polytope. If γ : X →→ Y has open sections, then co(γ) has open graph.

3.3 Abstract Economies and the Walras Equilibrium

In this chapter we build up a basic mathematical model for an Economy. As mentioned before, such models can sometimes fail to be exact

replicas of the reality. However they are good to get theoretical insight into how reality works.

We think of an economy where we have m commodities. Commodities can be products like oil, water, bread, but also services like teaching, health care etc. We let

Rm := "Commodity Space"

be the space which models our commodities. A commodity vector is a vector in this space. Such a vector x = (x1, ..., xm) stands for x1 units of the first commodity, x2 units of the second commodity etc. In our economy commodity vectors are exchanged ( traded ), manufactured and consumed in the course of economic activity.

A price vector p = (p1, ..., pm) associates prices to each commodity. More precisely pi denotes the price of one unit of the i-th commodity. We assume that all prices are nonnegative, i.e. pi ≥ 0 for all i. The price of the commodity vector x = (x1, ..., xm) then computes as

p(x) = ∑_{i=1}^m xi · pi = < p, x >.

One can interpret p as an element in the dual of the commodity space, L(Rm, R). In the situation where the commodity space is finite dimensional ( as in this course ) this is not so much of use, it is very helpful though if one studies infinite economies, where the commodity space is an infinite dimensional Hilbert space or even more general.

As participants in our economic model we have consumers, suppliers

( also called producers ) and sometimes auctioneers. The ultimate purpose of the economic organization is to provide commodity vectors for final consumption by the consumers. Auctioneers determine the prices. One can think of a higher power like a government or trade organization, but sometimes they are just an artificial construct, in the same way as in games with a random element, where one considers nature as an additional player.

It is reasonable to assume that not every consumer can consume every commodity and not every supplier can produce every commodity. For this reason we model consumption sets resp. production sets for each individual consumer resp. supplier as

Xi ⊂ Rm   consumption set for consumer "i"
Yj ⊂ Rm   production set for supplier "j"

where we assume that we have n consumers and k suppliers in our economy and i ∈ {1, ..., n} as well as j ∈ {1, ..., k}. Here Xi stands for the commodity vectors consumer "i" can consume and Yj for the commodity vectors supplier "j" can produce.

We assume that each consumer has an initial endowment, that is a commodity vector wi ∈ Rm he owns at initial time. Each consumer has an income at some rate which we denote with Mi. We assume that the incomes are nonnegative, i.e. Mi ≥ 0. We will see later how this can be realized. Furthermore we assume that consumers have to buy their consumption at market price and that a consumer cannot purchase more than his income, that is he cannot take any credit. This determines the budget set for consumer "i" :

{x ∈ Xi : p(x) ≤ Mi}

These budget sets depend on the two parameters price p and income Mi and hence can be interpreted as correspondences

bi : Rm+ × R+ →→ Xi
(p, Mi) → {x ∈ Xi | p(x) ≤ Mi}

We do now come to a very important point in game theory and mathematical economics, so called utility. Utility stands for the personal gain some individual has from the outcome of some event, let it be a game, an investment at the stock market or something similar. Often the approach is to model utility in numbers. In these approaches one usually has a utility function u which associates to the outcome a real number, and outcome 1 is preferable to outcome 2 if it has a higher utility. The problem in this approach though is that it is often very difficult to measure utility in numbers. What does it mean that outcome 2 has half the utility of outcome 1 ? Is two times outcome 2 as good as one times outcome 1 ? However, given two outcomes one can always decide which of the two one prefers. This leads one to model utility as a correspondence as we do here. We assume that each consumer has a utility correspondence

Ui : Xi →→ Xi
x → {y ∈ Xi : consumer "i" prefers y to x}

Here the word "prefers" is meant in the strict sense. In the case where one has indeed a utility function ui : Xi → R one gets the utility correspondence as

Ui(x) = {y ∈ Xi : ui(y) > ui(x)}.

In our economy each consumer wants to maximize his utility, i.e. find x ∈ bi(p, Mi) such that

Ui(x) ∩ bi(p, Mi) = ∅.

In the so called Walras Economy which we will later study in detail. 52 . Since we interpret p and Mi as parameters this gives us another correspondence. It depends of course on the prices p and therefore is considered as the so called supply correspondence sj : Rm →→ Yj + p → { proﬁt maximizing supply vectors for supplier “j”. it is assumed that the consumers share some part of the proﬁt of i the suppliers as their income. then the budget set for consumer “i” has the form 1 2 commodity vectors in the context of a supplier are also called supply vectors this can be through wages.Such an x is called a demand vector and is a solution to the consumers problem given prices p and income Mi .. Mi ) → { demand vectors for consumer ”i” given prices p and income Mi }. the so called demand correspondence for consumer ”i” di : Rm × R+ →→ Xi + (p. . ym ) given prices p is m p(y) = i=1 pi · yi =< p. given prices p}. y > . The proﬁt or net income associated with the supply vector y = (y1 . A supply vector1 y ∈ Yj for supplier ”j” speciﬁes the quantities of each commodity supplied ( positive entry ) and the amount of each commodity used as an input ( negative entry ). The set of proﬁt maximizing supply vectors is called the supply set. dividends etc.. If supplier “j” produces yj and prices are p.2 Let αj denote consumer ”i”’s share of the proﬁt of supplier j.

{x ∈ Xi : p(x) ≤ p(wi) + ∑_{j=1}^k αij p(yj)}.

The set

E(p) = { ∑_{i=1}^n xi − ∑_{j=1}^k yj : xi ∈ di(p, p(wi) + ∑_{j=1}^k αij p(yj)), yj ∈ sj(p) }

is called the excess demand set. Since it depends on p it is natural to consider it as a correspondence, the so called excess demand correspondence

E : Rm+ →→ Rm
p → E(p).

It would be a very good thing for the economy if the zero vector belongs to E(p). Let us briefly illustrate why. In fact this means that there is a combination of demand and supply vectors which add up to zero in the way indicated above. This however means that the suppliers produce exactly the amount of commodities the consumers want to consume and furthermore the suppliers make maximum profit. A price vector p which satisfies 0 ∈ E(p) is called a Walrasian equilibrium. The question of course is, does such an equilibrium always exist ? We will answer this question later. As in the case of the non cooperative equilibrium for two player games in chapter 2, this has to do with fixed points, but this time fixed points of correspondences. Instead of the excess demand correspondence E one can consider the correspondence

Ẽ : Rm+ →→ Rm
p → p + E(p).
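A Walrasian equilibrium can be computed by hand in very simple economies. The following sketch is our own illustration and deliberately much simpler than the general model above: a pure exchange economy without suppliers, two goods, two consumers with Cobb-Douglas utility functions ui(x) = ai log x1 + (1 − ai) log x2 and endowments wi, for which the demand correspondences are single valued, di(p, Mi) = (ai Mi/p1, (1 − ai) Mi/p2) with income Mi = p(wi).

```python
import numpy as np

# Our toy data: preference parameters a_i and endowments w_i (rows).
a = np.array([0.3, 0.7])
w = np.array([[1.0, 0.0],
              [0.0, 1.0]])

def excess_demand(p):
    incomes = w @ p                      # M_i = <p, w_i>
    demand = np.stack([np.array([a[i], 1 - a[i]]) * incomes[i] / p
                       for i in range(2)])
    return demand.sum(axis=0) - w.sum(axis=0)

# Normalize p_1 + p_2 = 1 and bisect on p_1 using the first market;
# by Walras' law the second market then clears automatically.
lo, hi = 1e-6, 1 - 1e-6
for _ in range(100):
    mid = 0.5 * (lo + hi)
    p = np.array([mid, 1 - mid])
    if excess_demand(p)[0] > 0:
        lo = mid        # good 1 over-demanded: raise its price
    else:
        hi = mid
p_star = np.array([0.5 * (lo + hi), 1 - 0.5 * (lo + hi)])
print(p_star, excess_demand(p_star))    # p* ~ (0.5, 0.5), excess demand ~ (0, 0)
```

Here the equilibrium condition 0 ∈ E(p) reduces to a one dimensional root finding problem; in the general set valued model no such reduction is available, which is why the fixed point machinery for correspondences developed below is needed.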

Then p is a Walrasian equilibrium if and only if p ∈ Ẽ(p), which is the case if and only if p is a fixed point of the correspondence Ẽ. There is a slightly more general definition of a Walrasian equilibrium, the so called Walrasian free disposal equilibrium. We will later give the precise definition and a result about when such an equilibrium exists. However before we can prove this result we need a result for correspondences which corresponds to the Brouwer fixed point theorem in the case of maps. To do analysis with those correspondences one has to know that these correspondences have some analytic properties like upper hemi continuity, lower hemi continuity or open graphs etc. This needs preparation and some further studies on correspondences, which will follow in the next two sections.

3.4 The Maximum Theorem for Correspondences

In the last section we learned about some correspondences which naturally occur in the framework of mathematical economics. This section gives the theoretical background for this. We studied for example the budget correspondence for consumer "i" :

bi : Rm+ × R+ →→ Xi
(p, Mi) → {x ∈ Xi | p(x) ≤ Mi}

The proof of the following proposition is easy and is left as an exercise :

Proposition 3.4.1. Let Xi ⊂ Rm be closed, convex and bounded from below. Then the budget correspondence bi as defined above is uhc and furthermore, if there exists x ∈ Xi such that p(x) < Mi, then bi is lhc at

(p, Mi).

It is however more difficult to see that the demand correspondences

di : Rm+ × R+ →→ Xi
(p, Mi) → { demand vectors for consumer "i" given prices p and income Mi }

have similar analytic properties. The reason for this is that the demand correspondences are the result of some optimization problem ( demand vectors are vectors in the budget set which have maximum utility ). The following theorems will help us to come over this problem.

Theorem 3.4.1. (Maximum Theorem 1) Let G ⊂ Rm, Y ⊂ Rk and γ : G →→ Y be a compact valued correspondence. Let f : Y → R be a continuous function. Define

µ : G →→ Y
x → {y ∈ γ(x) | y maximizes f on γ(x)}

and F : G → R with F(x) = f(y) for y ∈ µ(x). If γ is continuous at x, then µ is closed at x as well as uhc at x and F is continuous at x. Furthermore µ is compact valued.

Proof. Since γ is compact valued we have that µ(x) ≠ ∅ for all x. Furthermore µ(x) is closed and therefore compact ( since it is a closed subset of a compact set ). Let us first show that µ is closed at x. For this let xn → x and yn ∈ µ(xn) with yn → y. We have to show that y ∈ µ(x). Suppose y ∉ µ(x). Since γ is uhc and compact valued it follows from Proposition 3.2.1 ( first part ) that γ is closed at x and therefore y ∈ γ(x). y ∉ µ(x) however implies that there exists z ∈ γ(x) s.t. f(z) > f(y). Since γ is also lhc at x it follows from Proposition 3.2.2 ( second part )

that there exists a sequence zn → z with zn ∈ γ(xn). Since yn ∈ µ(xn) we have f(zn) ≤ f(yn) for all n. Since f is continuous this implies that

f(z) = limn f(zn) ≤ limn f(yn) = f(y)

which is in contradiction to f(z) > f(y). So we must have y ∈ µ(x) and therefore that µ is closed at x. Now since µ = γ ∩ µ, Proposition 3.2.6 ( second part ) implies that µ is uhc at x. Finally let xn → x and yn ∈ µ(xn) with yn → y. Clearly

limn F(xn) = limn f(yn) = f(y) = F(x)

which shows that F is continuous at x.

This result corresponds to the case where the utility correspondence is in fact induced by a utility function. The following theorem corresponds to the case where the utility correspondence is a genuine correspondence, not necessarily induced by a utility function.

Theorem 3.4.2. (Maximum Theorem 2) Let G ⊂ Rm, Y ⊂ Rk and γ : G →→ Y a compact valued correspondence. Furthermore let U : Y × G →→ Y have open graph. Define

µ : G →→ Y
x → {y ∈ γ(x) | U(y, x) ∩ γ(x) = ∅}.

If γ is closed and lhc at x then µ is closed at x; if in addition γ is uhc at x, then µ is uhc at x. Furthermore µ is compact valued ( possibly empty ).

Proof. Let xn → x and yn ∈ µ(xn) with yn → y. In order to show that µ is closed at x we have to show that y ∈ µ(x). Since yn ∈ µ(xn) ⊂ γ(xn) and γ is closed at x we have that y ∈ γ(x). Suppose y ∉ µ(x). Then there exists z ∈ U(y, x) ∩ γ(x). Since γ is lhc at x it follows from Proposition 3.2.2 that there exists a sequence zn → z s.t. zn ∈ γ(xn) ∀n. Then limn (yn, xn, zn) = (y, x, z) ∈ Gr(U). Since Gr(U) is open there must exist n0 ∈ N s.t. (yn0, xn0, zn0) ∈ Gr(U). This however means that zn0 ∈ U(yn0, xn0) ∩ γ(xn0) and therefore yn0 ∉ µ(xn0), which is of course a contradiction. Therefore y ∈ µ(x) and µ is closed at x.

If in addition γ is also uhc at x, then as in the previous proof Proposition 3.2.6 ( second part ) implies that µ = γ ∩ µ is uhc at x.

Finally we show that µ is compact valued. Since µ(x̃) ⊂ γ(x̃) for all x̃ and γ is compact valued it is enough to show that µ(x̃) is closed in γ(x̃) ( in the relative topology ). Equivalently we can show that γ(x̃)\µ(x̃) is open in γ(x̃). Assume this would not be the case. Then there would exist ỹ ∈ γ(x̃) with U(ỹ, x̃) ∩ γ(x̃) ≠ ∅ ( i.e. ỹ ∉ µ(x̃) ) as well as a sequence yn ∈ µ(x̃) s.t. limn yn = ỹ and U(yn, x̃) ∩ γ(x̃) = ∅ ∀n. Choose z̃ ∈ U(ỹ, x̃) ∩ γ(x̃). Since Gr(U) is open and limn (yn, x̃, z̃) = (ỹ, x̃, z̃) ∈ Gr(U), there must exist n0 ∈ N s.t. (yn0, x̃, z̃) ∈ Gr(U), which implies that z̃ ∈ U(yn0, x̃) ∩ γ(x̃). This however contradicts the previous equation and so µ(x̃) must be closed in γ(x̃).

Proposition 3.4.2. Let G ⊂ Rm, Y ⊂ Rk and let U : G × Y →→ Y satisfy the following condition :

z ∈ U(y, x) ⇒ ∃z′ ∈ U(y, x) s.t. (y, x) ∈ (U−[{z′}])°.

We define µ : G →→ Y via µ(x) = {y ∈ Y : U(y, x) = ∅}. Then µ is closed.

Proof. Let xn → x, yn ∈ µ(xn) and yn → y. Suppose y ∉ µ(x). Then there exists z ∈ U(y, x) and by the hypothesis there exists z′ ∈ U(y, x) s.t. (y, x) ∈ (U−[{z′}])°.

Since limn (yn, xn) = (y, x) ∈ (U−[{z′}])° = {(ỹ, x̃) | U(ỹ, x̃) ∩ {z′} ≠ ∅}°, there must exist n0 ∈ N s.t. (yn0, xn0) ∈ (U−[{z′}])°, which implies that z′ ∈ U(yn0, xn0). This however is a contradiction to yn0 ∈ µ(xn0).

Theorem 3.4.3. Let Xi ⊂ Rki for i = 1, ..., n be compact and set X = ∏_{i=1}^n Xi. Let G ⊂ Rk and for each i let Si : X × G →→ Xi be a continuous correspondence with compact values. Furthermore let Ui : X × G →→ Xi be correspondences with open graph. Define

E : G →→ X
g → {x = (x1, ..., xn) ∈ X : for each i, xi ∈ Si(x, g) and Ui(x, g) ∩ Si(x, g) = ∅}.

Then E has compact values, is closed and uhc.

Proof. Let us first show that E is closed. This is equivalent to showing that E has closed graph. Suppose (g, x) ∉ Gr(E), i.e. x ∉ E(g). Then for some i either xi ∉ Si(x, g) or Ui(x, g) ∩ Si(x, g) ≠ ∅. By Proposition 3.2.1 ( first part ) Si is closed. So in the first case there exists a neighborhood V in X × G × Xi of (x, g, xi) ∉ Gr(Si) which is disjoint from Gr(Si). Then Ṽ = {(g, x) : (x, g, xi) ∈ V} is an open neighborhood of (g, x) in G × X which cannot intersect Gr(E), since for all (g̃, x̃) ∈ Ṽ we have x̃i ∉ Si(x̃, g̃). In the second case there must exist i and zi ∈ Ui(x, g) ∩ Si(x, g). Since Ui has open graph there exist neighborhoods V of zi ( V an open neighborhood of zi ) and W1 of (x, g) s.t. W1 × V ⊂ Gr(Ui). Since Si is also lhc there exists a neighborhood W2 of (x, g) s.t. (x̃, g̃) ∈ W2 implies that Si(x̃, g̃) ∩ V ≠ ∅ ( definition of lhc ). Then W1 ∩ W2 is a neighborhood of (x, g) disjoint from Gr(E), since for all (x̃, g̃) ∈ W1 ∩ W2 we have V ⊂ Ui(x̃, g̃) and Si(x̃, g̃) ∩ V ≠ ∅, hence Ui(x̃, g̃) ∩ Si(x̃, g̃) ≠ ∅. Since in both cases we can

find neighborhoods of (g, x) which are still contained in Gr(E)^c, we have that Gr(E)^c is open and hence Gr(E) is closed. It follows now from the compactness of X and the closedness of E as well as Proposition 3.2.1 ( second part ) that E is uhc. That it has compact values is clear.

Proposition 3.4.3. Let K ⊂ Rm be compact, G ⊂ Rk and let γ : K × G →→ K be a closed correspondence. Define

F : G →→ K
g → {x ∈ K : x ∈ γ(x, g)}.

Then F : G →→ K has compact values, is closed and uhc.

Proof. By Proposition 3.2.1 ( second part ) it is enough to show that F is closed, but this follows immediately from the closedness of γ.

Proposition 3.4.4. Let K ⊂ Rm be compact, G ⊂ Rk and let γ : K × G →→ Rm have compact values and be uhc. Define

Z : G →→ K
g → {x ∈ K : 0 ∈ γ(x, g)}.

Then Z has compact values, is closed and uhc.

Proof. Exercise !

3.5 Approximation of Correspondences

Lemma 3.5.1. Let γ : X →→ Y be an uhc correspondence with nonempty convex values. Furthermore let X ⊂ Rm be compact and Y ⊂ Rk be convex. For each δ > 0 define a correspondence

γδ : X →→ Y
x → co( ∪_{z ∈ Bδ(x)} γ(z) ).

Then for every ǫ > 0 there exists δ > 0 s.t. Gr(γδ) ⊂ Bǫ(Gr(γ)), where Bǫ(Gr(γ)) denotes the set of points in X × Y ⊂ Rm × Rk which have ( Euclidean ) distance from Gr(γ) less than ǫ.

Proof. Exercise !

In the proof of the next theorem we need a technique from topology called partition of unity. This technique is very helpful in general, however we don't give a proof here.

Proposition 3.5.1. Let X be a topological space and (Ui)i∈I be an open covering of X, that is each Ui is an open set and X = ∪_{i∈I} Ui. Then there exists a locally finite subordinated partition of unity to this covering, that is a family of functions f^i : X → [0, ∞) s.t. for all x ∈ X we have f^i(x) > 0 only for finitely many i ∈ I, ∑_{i∈I} f^i ≡ 1 and supp(f^i) = {x | f^i(x) > 0} ⊂ Ui.

Theorem 3.5.1. (von Neumann Approximation Theorem) Let γ : X →→ Y be uhc with nonempty, compact and convex values. Then for any ǫ > 0 there is a continuous map f : X → Y s.t. Gr(f) ⊂ Bǫ(Gr(γ)).

Proof. Let ǫ > 0. By Lemma 3.5.1 there exists δ > 0 s.t. Gr(γδ) ⊂ Bǫ(Gr(γ)). Since X is compact there exist x1, ..., xn s.t. X ⊂ ∪_{i=1}^n Bδ(xi). Choose y^i ∈ γ(xi). Let f^1, ..., f^n be a locally finite partition of unity subordinated to this covering. Then we have supp(f^i) ⊂ Bδ(xi) and ∑_{i=1}^n f^i ≡ 1. We define the function f as follows :

f : X → Y
x → ∑_{i=1}^n f^i(x) y^i.

Clearly f is continuous. Since f^i(x) = 0 for all x ∉ Bδ(xi), each f(x) is a convex linear combination of those y^i such that x ∈ Bδ(xi). Since x ∈ Bδ(xi) clearly implies xi ∈ Bδ(x) and therefore y^i ∈ ∪_{z ∈ Bδ(x)} γ(z), we have

f(x) ∈ co( y^i | xi ∈ Bδ(x) ) ⊂ co( ∪_{z ∈ Bδ(x)} γ(z) ) = γδ(x).

Hence for all x ∈ X we have (x, f(x)) ∈ Gr(γδ) ⊂ Bǫ(Gr(γ)).

Definition 3.5.1. Let γ : X →→ Y be a correspondence. A selection of γ is a function f : X → Y s.t. Gr(f) ⊂ Gr(γ), i.e. f(x) ∈ γ(x) for all x ∈ X.

In the previous proof we constructed a continuous selection for the correspondence γδ ( for δ > 0 but arbitrarily small ). We will now show that under different assumptions we can do even better. From now on in this chapter all correspondences are assumed to be nonempty valued.

Theorem 3.5.2. ( Browder ) Let X ⊂ Rm and γ : X →→ Rk be convex valued s.t. for all y ∈ Rk the sets γ−1(y) = {x ∈ X | y ∈ γ(x)} are open. Then there exists a continuous selection of γ.

Proof. Clearly X = ∪_{y ∈ Rk} γ−1(y), so that (γ−1(y))_{y ∈ Rk} is an open covering of X. Let fy : X → [0, ∞) denote the maps belonging to a locally finite subordinated partition of unity, so ∑_{y ∈ Rk} fy ≡ 1 and supp(fy) ⊂ γ−1(y). We define the map f as follows :

t. It will follow from the following theorem. compact and convex values where Y ⊂ Rk is also compact and convex. it is not more difﬁcult to prove than those before ). Proposition 3.5. for all x ∈ K one has µ(x) = {f (x. Then there exists a continuous selection of γ. The main ﬁxed point theorem for correspondences is the Kakutani ﬁxed point theorem. Let X ⊂ Rm be compact and γ : X →→ Rk be lhc with closed and convex values. fy (x) > 0 which can only hold if y ∈ γ(x). Suppose there is a closed correspondence γ : K →→ Y with nonempty. We state the following proposition without proof ( due to time constraints.f : X → Rk x → y fy (x)y. Theorem 3. 3.6 Fixed Point Theorems for Correspondences One can interpret Brouwer’s ﬁxed point theorem as a special case of a ﬁxed point theorem for correspondences where the correspondence is in fact given by a map.6.t. Let K ⊂ Rm be compact. 62 .1. nonempty and convex and µ : K →→ K a correspondence. Since γ(x) is convex this implies that f (x) ∈ γ(x). Furthermore assume that there exists a continuous map f : K × Y → K s.2. y) : y ∈ γ(x)}. Then f is continuous and for each x ∈ X f (x) is a convex combination of those y s. That this is not the only case where ﬁxed points of correspondences are guaranteed is shown in this section.

Then µ has a fixed point, i.e. there exists x̃ ∈ K s.t. x̃ ∈ µ(x̃).

Proof. By the approximation theorem of section 3.5 there exists a sequence of continuous maps g^n : K → Y s.t. Gr(g^n) ⊂ B_{1/n}(Gr(γ)). We define maps h^n as follows:

h^n : K → K, x ↦ f(x, g^n(x)).

It follows from Brouwer's fixed point theorem that each h^n has a fixed point x^n ∈ K, i.e. a point x^n which satisfies x^n = h^n(x^n) = f(x^n, g^n(x^n)). Since K as well as Y are compact, we can extract a convergent subsequence of (x^n) as well as of (g^n(x^n)), and w.l.o.g. we can as well assume that these two sequences already converge; set x̃ := lim_n x^n and ỹ := lim_n g^n(x^n). By the continuity of f we have x̃ = f(x̃, ỹ). Furthermore, since γ is closed, Gr(γ) is closed, and for all n we have (x^n, g^n(x^n)) ∈ B_{1/n}(Gr(γ)). Therefore (x̃, ỹ) = lim_n (x^n, g^n(x^n)) must lie in Gr(γ), and therefore ỹ ∈ γ(x̃). By the assumption on the correspondence µ we have x̃ = f(x̃, ỹ) ∈ µ(x̃), so that x̃ is a fixed point of µ.

Theorem 3.6.2. (Kakutani Fixed Point Theorem) Let K ⊂ R^m be compact and convex and γ : K →→ K be closed or uhc with nonempty, convex and compact values. Then γ has a fixed point.

Proof. If γ is uhc, then by Proposition 3.2.1 (first part) γ is also closed, so we can just assume that γ is closed. We can then apply the previous theorem on µ = γ and f defined by

f : K × K → K, (x, y) ↦ y.

Then clearly µ(x) = γ(x) = {y = f(x, y) | y ∈ γ(x)}, and therefore γ has a fixed point.

We come to another fixed point theorem, which works in the setup where the correspondence is lhc.

Theorem 3.6.3. Let K ⊂ R^m be compact and convex and let γ : K →→ K be lhc with closed and convex values. Then γ has a fixed point.

Proof. By Proposition 3.5.3 there exists a continuous selection f : K → K s.t. f(x) ∈ γ(x) for all x ∈ K. Applying Brouwer's fixed point theorem again, we see that f has a fixed point, i.e. there exists x̃ s.t. x̃ = f(x̃) ∈ γ(x̃). Clearly x̃ is also a fixed point for γ.

Theorem 3.6.4. (Browder) Let K ⊂ R^m be compact and convex and let γ : K →→ K be a correspondence with nonempty convex values s.t. γ^{-1}(y) is open for all y ∈ K. Then γ has a fixed point.

Proof. Follows in the same way as in the previous proof, by application of Theorem 3.5.2 and Brouwer's fixed point theorem.

Lemma 3.6.5. Let X ⊂ R^k be nonempty, compact and convex and let U : X →→ X be a convex valued correspondence s.t.

1. x ∉ U(x) for all x ∈ X,
2. U^{-1}(x) = {x' ∈ X | x ∈ U(x')} is open in X for all x ∈ X.

Then there exists x ∈ X s.t. U(x) = ∅.

Proof. Let us define W := {x ∈ X | U(x) ≠ ∅}. If x ∈ W then there exists y ∈ U(x), and since x ∈ U^{-1}(y) ⊂ W, by assumption 2.) U^{-1}(y) is an

open neighborhood of x in W. Therefore W is an open subset of X. We consider the restriction of U to W, i.e.

U|_W : W →→ X ⊂ R^k.

This correspondence satisfies the assumptions in the Browder Selection Theorem (Theorem 3.5.2) and therefore admits a continuous selection f : W → R^k, which by definition of a selection has the property that f(x) ∈ U(x) for all x ∈ W. We define a new correspondence as follows:

γ : X →→ X, x ↦ {f(x)} if x ∈ W, and x ↦ X if x ∉ W.

Then γ is convex and compact valued. It follows from Proposition 3.2.1, second part, that γ is uhc if we can show that γ is closed. To show this, let x^n → x, y^n ∈ γ(x^n) and y^n → y; then we must show y ∈ γ(x). If x ∉ W this is clear, since then γ(x) = X. If however x ∈ W, then since W is open there must exist n_0 s.t. for all n ≥ n_0 we have x^n ∈ W. For those n we have y^n ∈ γ(x^n) = {f(x^n)}, i.e. y^n = f(x^n). Now it follows from the continuity of f that

y = lim_n y^n = lim_n f(x^n) = f(x),

and therefore y ∈ γ(x) = {f(x)}. Hence γ is uhc with nonempty, convex and compact values. By application of the Kakutani Fixed Point Theorem (Theorem 3.6.2), γ has a fixed point, that is, there exists x ∈ X s.t. x ∈ γ(x). If x ∈ W then by definition of γ, x = f(x) ∈ U(x), which is a contradiction to assumption 1.) Therefore x ∉ W, and therefore by

definition of W, U(x) = ∅.

Definition 3.6.6. Let X ⊂ R^k and f : X → R. We say that f is lower semi-continuous if for all a ∈ R the sets f^{-1}((a, ∞)) are open. We call f quasi concave if for all a ∈ R the sets f^{-1}([a, ∞)) are convex.

Theorem 3.6.7. (Ky-Fan) Let X ⊂ R^k be nonempty, convex and compact and ϕ : X × X → R s.t.

1. ∀y ∈ X the function ϕ(·, y), considered as a function in the first variable, is lower semi-continuous,
2. ∀x ∈ X the function ϕ(x, ·), considered as a function in the second variable, is quasi concave,
3. sup_{x ∈ X} ϕ(x, x) ≤ 0.

Then there exists x̃ ∈ X s.t. sup_{y ∈ X} ϕ(x̃, y) ≤ 0, i.e. ϕ(x̃, y) ≤ 0 for all y ∈ X.

Proof. Define a correspondence

U : X →→ X, x ↦ U(x) := {y ∈ X | ϕ(x, y) > 0}.

If we can find x ∈ X s.t. U(x) = ∅, then we are finished. The existence of such an x follows from Lemma 3.6.5 if we can show that U satisfies the required assumptions there. First let y_1, y_2 ∈ U(x) for an arbitrary x ∈ X. Then ε := min(ϕ(x, y_1), ϕ(x, y_2)) > 0, and furthermore y_1, y_2 ∈ ϕ(x, ·)^{-1}([ε, ∞)), which by assumption 2.) is convex. Therefore for all λ ∈ [0, 1] we have that λ·y_1 + (1 − λ)·y_2 ∈ ϕ(x, ·)^{-1}([ε, ∞)), which implies ϕ(x, λ·y_1 + (1 − λ)·y_2) > 0 and therefore

λ·y_1 + (1 − λ)·y_2 ∈ U(x); hence U is convex valued. Furthermore, for each y ∈ X we have that

U^{-1}(y) = {x ∈ X | y ∈ U(x)} = {x ∈ X | ϕ(x, y) > 0} = ϕ(·, y)^{-1}((0, ∞)),

which is open by assumption 1.) By assumption 3.) we also have ϕ(x, x) ≤ 0 for all x, which by definition of U implies that x ∉ U(x) for all x ∈ X. Therefore U satisfies all the conditions in Lemma 3.6.5.

We are now in the position to prove the general version of the Nash Theorem:

Proof. (of Theorem 3.1.1) We define a map ϕ : S(N) × S(N) → R as follows: For s = (s_1, ..., s_n), t = (t_1, ..., t_n) ∈ S(N) we define

ϕ(s, t) := Σ_{i=1}^{n} ( L_i(s) − L_i(s_1, ..., s_{i−1}, t_i, s_{i+1}, ..., s_n) ).

It follows from the convexity of the L_i in the i-th variable that ϕ(s, ·) is quasi concave, and from the continuity of the L_i that ϕ(·, t) is lower semi-continuous. Moreover ϕ(s, s) = 0 for all s ∈ S(N). Since S(N) is also convex and compact, we can apply the Ky-Fan Theorem, which then implies the existence of an s ∈ S(N) s.t. ϕ(s, t) ≤ 0 for all t ∈ S(N). In particular, for all t ∈ {s_1} × ... × {s_{i−1}} × S_i × {s_{i+1}} × ... × {s_n} ∩ S(N) we have

0 ≥ ϕ(s, t) = Σ_{j=1}^{n} ( L_j(s) − L_j(s_1, ..., s_{j−1}, t_j, s_{j+1}, ..., s_n) )
= Σ_{j≠i} ( L_j(s) − L_j(s_1, ..., s_{j−1}, t_j, s_{j+1}, ..., s_n) ) + L_i(s) − L_i(s_1, ..., s_{i−1}, t_i, s_{i+1}, ..., s_n)
= L_i(s) − L_i(s_1, ..., s_{i−1}, t_i, s_{i+1}, ..., s_n),

where we used that, by the choice of t, s and t only differ in the i-th component, i.e. t_j = s_j for j ≠ i. This however implies that

L_i(s) ≤ L_i(s_1, ..., s_{i−1}, t_i, s_{i+1}, ..., s_n),

and shows that s is an NCE for G_n.

3.7 Generalized Games and an Equilibrium Theorem

As before we consider a competitive environment where n players participate and act by choosing strategies which determine an outcome. We denote by N = {1, ..., n} the set of players and for each i ∈ {1, ..., n} by X_i the strategy set of player i. As before let X = Π_{i=1}^{n} X_i denote the multi strategy set. We do not assume that there is some kind of multi loss operator, and this is where we generalize the concept of games. Instead of a multi loss operator we assume we have n correspondences

U_i : X →→ X_i.

We think of these correspondences as follows:

U_i(x) = {y_i ∈ X_i : (x_1, ..., x_{i−1}, y_i, x_{i+1}, ..., x_n) determines an outcome which player "i" prefers to the outcome determined by (x_1, ..., x_n) = x}.

Note that the equation above is not a definition but an interpretation. The U_i can be interpreted as utilities (compare section 3.3), and the word "prefer" is always meant in the sense "strictly prefer". The case of a classic game where a multi-loss operator is given fits into this concept by setting

U_i(x) = {y_i ∈ X_i : L_i(x_1, ..., x_{i−1}, y_i, x_{i+1}, ..., x_n) < L_i(x_1, ..., x_n)}.

Furthermore we assume we have feasibility correspondences F_i : X →→ X_i for all 1 ≤ i ≤ n, where the interpretation is as follows:

F_i(x) = {y_i ∈ X_i : (x_1, ..., x_{i−1}, y_i, x_{i+1}, ..., x_n) is an allowed strategy}.

This correspondence generalizes the set S(N) in Definition 3.1.1. The jointly feasible multi-strategies are then the fixed points of the correspondence

F = Π_{i=1}^{n} F_i : Π_{i=1}^{n} X_i →→ Π_{i=1}^{n} X_i.

Definition 3.7.1. A generalized game (sometimes also called abstract economy) is a quadruple (N, (X_i), (F_i), (U_i)), where X_i, F_i, U_i are as above.

Though the situation is much more general now, it is not harder, in fact even more natural, to define what an equilibrium of a generalized game should be.

Definition 3.7.2. A non cooperative equilibrium (short NCE) of a generalized game (N, (X_i), (F_i), (U_i)) is a multi-strategy x ∈ X s.t. x ∈ F(x) and U_i(x) ∩ F_i(x) = ∅ for all i.

The emptiness of the intersection above means that for player i, given the strategies of the other players, there is no better (feasible or allowed) strategy. So, in the sense of Nash, within an NCE none of the players has a reason to deviate from his strategy. We have the

following theorem, which states the existence of such equilibria under certain assumptions on the correspondences used in the game.

Theorem 3.7.1. (Shafer, Sonnenschein 1975) Let G = (N, (X_i), (F_i), (U_i)) be a generalized game s.t. for all i:

1. X_i ⊂ R^{k_i} is nonempty, compact and convex,
2. F_i is continuous with nonempty, compact and convex values,
3. Gr(U_i) is open in X × X_i,
4. x_i ∉ co(U_i(x)) for all x ∈ X.

Then there exists a non cooperative equilibrium for G.

Proof. Let us define for each i a map

ν_i : X × X_i → R_+, (x, y_i) ↦ dist((x, y_i), Gr(U_i)^c).

Clearly the maps ν_i are continuous. Since Gr(U_i)^c is closed, we have ν_i(x, y_i) > 0 ⇔ y_i ∈ U_i(x). Furthermore we define

H_i : X →→ X_i, x ↦ {y_i ∈ F_i(x) : y_i maximizes ν_i(x, ·) on F_i(x)}.

Since the correspondences F_i are compact valued, by setting γ : X →→ X × X_i, x ↦ {x} × F_i(x), G = X, Y = X × X_i and f = ν_i we are in the situation of the Maximum Theorem 1 (Proposition 3.4.1). Then the Maximum Theorem says that the correspondence

µ_i(x) = {z ∈ γ(x) : z maximizes ν_i on γ(x)} = {(x, y_i) : y_i ∈ F_i(x) and y_i maximizes ν_i(x, ·) on F_i(x)}

is uhc. H_i is just the composition of the correspondence µ_i with the correspondence which is given by the continuous map pr : X × X_i → X_i, (x, y_i) ↦ y_i, and is therefore uhc by Proposition 3.2.8. Let us now define another correspondence

G : X →→ X, x ↦ Π_{i=1}^{n} co(H_i(x)).

By Proposition 3.2.8 and Proposition 3.2.10 the correspondence G is uhc. Since furthermore X is compact and convex, we are in the situation where we can apply the Kakutani Fixed Point Theorem (Theorem 3.6.2). Therefore there exists x̃ ∈ X s.t. x̃ ∈ G(x̃). Since H_i(x̃) ⊂ F_i(x̃) and F_i(x̃) is convex, we have that x̃_i ∈ G_i(x̃) = co(H_i(x̃)) ⊂ F_i(x̃). Since this holds for all i, we have that x̃ ∈ F(x̃), so that x̃ is a jointly feasible strategy. Let us now show that U_i(x̃) ∩ F_i(x̃) = ∅ for all i. Suppose this would not be the case. Then there would exist an i and z_i ∈ U_i(x̃) ∩ F_i(x̃). Since z_i ∈ U_i(x̃), it follows from above that ν_i(x̃, z_i) > 0. Since furthermore H_i(x̃) consists of the maximizers of ν_i(x̃, ·) on F_i(x̃), we have

ν_i(x̃, y_i) ≥ ν_i(x̃, z_i) > 0 for all y_i ∈ H_i(x̃).

This however means that y_i ∈ U_i(x̃) for all y_i ∈ H_i(x̃), and hence

H_i(x̃) ⊂ U_i(x̃). Therefore

x̃_i ∈ G_i(x̃) = co(H_i(x̃)) ⊂ co(U_i(x̃)).

The latter though is a contradiction to assumption 4.) in the theorem. Thus we must have U_i(x̃) ∩ F_i(x̃) = ∅ for all i, and this means that x̃ is an NCE.

Using this theorem we can now quite easily prove Theorem 3.1.1, which is the original Nash Theorem.

Proof. (of Theorem 3.1.1) Let G_n = ((S_i), S(N), L) be an N-person game consisting of strategy sets S_i for each player "i", a subset S(N) of allowed strategies and a multi-loss operator. Let us make a generalized game out of this. First let us define utility correspondences U_i : Π_{j=1}^{n} S_j →→ S_i as follows: If s = (s_1, ..., s_n) ∈ S(N), then

U_i(s) := {s̃_i ∈ S_i : (s_1, ..., s_{i−1}, s̃_i, s_{i+1}, ..., s_n) ∈ S(N) and L_i(s_1, ..., s̃_i, ..., s_n) < L_i(s_1, ..., s_i, ..., s_n)};

in the case where s ∉ S(N) we define U_i(s) = ∅. Together with suitable feasibility correspondences, these define a generalized game which satisfies the conditions in Theorem 3.7.1.

3.8 The Walras Equilibrium Theorem

In this section we reconsider the Walras economy of section 3.3 and prove the so called Walras Equilibrium Theorem. We refer to section 3.3 for most of the notation and also the economical interpretation.

Definition 3.8.1. A Walras Economy is a five tuple

WE = ((X_i), (w_i), (U_i), (Y_j), (α_j^i))

consisting of consumption sets X_i ⊂ R^m, initial endowments w_i ∈ R^m, utility correspondences U_i : X_i →→ X_i, supply sets Y_j ⊂ R^m and shares α_j^i ≥ 0. Furthermore the prices of the commodities satisfy p_i ≥ 0 and Σ_{i=1}^{m} p_i = 1, so that the set of prices is given by the closed standard simplex ∆^{m−1}.

The assumption that the price vectors are elements of ∆^{m−1} might look at first glance very restrictive and unrealistic. However, we didn't specify any currency or whatever. Since there is only a finite amount of money in the world, we can assume that all prices lie between zero and one. By introducing an (m+1)-st commodity in our economy, which no one is able to consume or to produce and which is given the price 1 − Σ_{i=1}^{m} p_i, we get an economy which is equivalent to the original one and such that the prices satisfy the hypotheses in the definition above.

Definition 3.8.2. An attainable state of the Walrasian economy WE is a tuple ((x_i), (y_j)) ∈ Π_{i=1}^{n} X_i × Π_{j=1}^{k} Y_j such that

Σ_{i=1}^{n} x_i − Σ_{j=1}^{k} y_j − w = 0.

In words: an attainable state is a state where the production of the suppliers precisely fits the demand of the consumers. We denote the set of attainable states with F.

Before we continue, let us introduce some notation which will later be of advantage. We denote X = Σ_{i=1}^{n} X_i, Y = Σ_{j=1}^{k} Y_j and w = Σ_{i=1}^{n} w_i. Let

M := {((x_i), (y_j)) ∈ (R^m)^{n+k} : Σ_{i=1}^{n} x_i − Σ_{j=1}^{k} y_j − w = 0}.

Then the set of attainable states can be written as

F = M ∩ ( Π_{i=1}^{n} X_i × Π_{j=1}^{k} Y_j ).
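A tiny numeric sketch of Definition 3.8.2 (all numbers are hypothetical illustration, not data from the text): a state is attainable exactly when aggregate consumption minus aggregate production equals the aggregate endowment, and any nonnegative price vector can be renormalized onto the simplex by appending a dummy commodity.

```python
w = [(2.0, 1.0), (1.0, 2.0)]          # endowments of two consumers, m = 2
x = [(2.5, 1.5), (1.5, 2.0)]          # consumption plans
y = [(1.0, 0.5)]                      # one supplier's production plan

# excess = sum_i x_i - sum_j y_j - sum_i w_i, computed per commodity
excess = tuple(sum(xi[c] for xi in x) - sum(yj[c] for yj in y)
               - sum(wi[c] for wi in w) for c in range(2))
assert excess == (0.0, 0.0)           # ((x_i), (y_j)) is an attainable state

# price renormalization: append the dummy commodity's price 1 - sum(p)
p = (0.2, 0.3)
p_hat = p + (1 - sum(p),)
assert abs(sum(p_hat) - 1.0) < 1e-12  # p_hat lies in the standard simplex
```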

Let furthermore

pr_i : F → X_i, pr_j : F → Y_j

denote the projections on the corresponding factors, and set X̃_i := pr_i(F) and Ỹ_j := pr_j(F).

Definition 3.8.3. A Walrasian free disposal equilibrium is a price p̃ together with an attainable state ((x̃_i), (ỹ_j)) such that

1. <p̃, ỹ_j> ≥ <p̃, y_j> for all y_j ∈ Y_j and all j,
2. x̃_i ∈ b_i(p̃) = {x_i ∈ X_i : <p̃, x_i> ≤ <p̃, w_i> + Σ_{j=1}^{k} α_j^i <p̃, ỹ_j>} and U_i(x̃_i) ∩ b_i(p̃) = ∅,

where b_i denotes the budget correspondence for consumer i. The first part in the definition above means that the suppliers make optimal profit, the second one that the consumers get optimal utility from their consumption within their budget. The following theorem tells us about the existence of such an equilibrium under certain assumptions.

Theorem 3.8.1. (Walras Equilibrium Theorem) Assume the Walras economy WE satisfies the following conditions: For each i = 1, ..., n and j = 1, ..., k:

1. X_i is closed, convex, bounded from below and w_i ∈ X_i,
2. there exists x_i ∈ X_i s.t. x_i < w_i (this inequality between two vectors is meant componentwise),
3. U_i has open graph, x_i ∉ co(U_i(x_i)) and x_i ∉ U_i(x_i),
4. Y_j is closed and convex and 0 ∈ Y_j,

5. Y ∩ (−Y) = {0} and Y ∩ R^m_+ = {0},
6. −R^m_+ ⊂ Y.

Then there exists a Walrasian free disposal equilibrium in the economy WE.

Before we can prove this theorem we need to do some preparational work. The notion of a cone is well known; not so well known in general is the notion of an asymptotic cone.

Definition 3.8.4. A cone is a nonempty set C ⊂ R^m which is closed under multiplication by nonnegative scalars, i.e. λ ≥ 0 and x ∈ C imply λ·x ∈ C.

Definition 3.8.5. Let E ⊂ R^m. The asymptotic cone of E is the set A(E) of all possible limits of sequences of the form (λ_j · x_j), where x_j ∈ E and (λ_j) is a decreasing sequence of real numbers s.t. lim_j λ_j = 0.

The asymptotic cone can be used to check whether a subset of R^m is bounded, as the following proposition shows:

Proposition 3.8.1. A set E ⊂ R^m is bounded if and only if A(E) = {0}.

Proof. If E is bounded, then there exists a constant M s.t. ||x|| ≤ M for all x ∈ E. If z = lim_j λ_j x_j ∈ A(E), then

0 ≤ ||z|| = lim_j |λ_j| ||x_j|| ≤ M · lim_j |λ_j| = M · 0 = 0.

Therefore in this case A(E) = {0}. If however E is not bounded, then there exists a sequence x_j ∈ E s.t. ||x_j|| converges monotonically increasing to infinity. We set λ_j = 1/||x_j|| and obtain that the sequence (λ_j) converges monotonically decreasing to zero. We have ||λ_j · x_j|| = 1 for all j. Therefore the sequence (λ_j · x_j) is a sequence on the unit sphere S^{m−1} ⊂ R^m. Since this is compact, the sequence (λ_j · x_j) must contain a convergent subsequence, and w.l.o.g. we assume that (λ_j · x_j) converges

itself to a point z ∈ S^{m−1}. Clearly z ≠ 0 and z ∈ A(E). Therefore in this case A(E) ≠ {0}.

Intuitively the asymptotic cone tells one about the directions in which E is unbounded. It might be difficult though to compute the asymptotic cone. For this the following rules are very helpful:

Lemma 3.8.1. Let E, F, E_i ⊂ R^m (i ∈ I) and x ∈ R^m. Then:

a.) A(E) is a cone,
b.) E ⊂ F ⇒ A(E) ⊂ A(F),
c.) A(E + x) = A(E),
d.) A(∩_{i∈I} E_i) ⊂ ∩_{i∈I} A(E_i),
e.) A(E) is closed,
f.) E convex ⇒ A(E) convex,
g.) E closed and convex and x ∈ E ⇒ x + A(E) ⊂ E; in particular, if 0 ∈ E and E is convex, then A(E) ⊂ E,
h.) C ⊂ E and C a cone ⇒ C ⊂ A(E),
i.) A(Π_{i∈I} E_i) ⊂ Π_{i∈I} A(E_i).

We will use these rules to prove the following result about the compactness of the set F of attainable states introduced before. This compactness will later play a major role in the proof of the Walras Equilibrium Theorem.

Lemma 3.8.2. Assume the Walras economy WE satisfies the following conditions: For each i = 1, ..., n and j = 1, ..., k:

1. X_i is closed, convex, bounded from below and w_i ∈ X_i,
2. Y_j is closed and convex and 0 ∈ Y_j,

3. Y ∩ (−Y) = {0} and Y ∩ R^m_+ = {0}.

Then the set F of attainable states is compact and nonempty. If moreover the following two assumptions hold:

a.) for each i there exists some x_i ∈ X_i s.t. x_i < w_i,
b.) −R^m_+ ⊂ Y,

then x_i ∈ X̃_i for the x_i from a.).

Proof. Clearly ((w_i), (0_j)) ∈ F, where 0_j denotes the zero vector 0 ∈ Y_j. This implies F ≠ ∅. Furthermore F, as the intersection of the two closed sets M and Π_{i=1}^{n} X_i × Π_{j=1}^{k} Y_j, is closed. By Proposition 3.8.1, for the compactness of F it therefore suffices to show that A(F) = {0}. By parts b.), d.) and i.) of the previous lemma we have

A(F) ⊂ A( Π_{i=1}^{n} X_i × Π_{j=1}^{k} Y_j ) ∩ A(M) ⊂ ( Π_{i=1}^{n} A(X_i) × Π_{j=1}^{k} A(Y_j) ) ∩ A(M).

Since each of the X_i is bounded from below, there exist vectors b_i ∈ R^m s.t. X_i ⊂ b_i + R^m_+. Applying successively parts b.) and c.) of the previous lemma we get

A(X_i) ⊂ A(b_i + R^m_+) = A(R^m_+) = R^m_+.

Also, since 0 ∈ Y_l for all l, we have Y_j ⊂ Y, and therefore by part b.) A(Y_j) ⊂ A(Y).

Let us consider the vector w̃ = (w_1, ..., w_n, 0, ..., 0) ∈ (R^m)^{n+k} and

M̃ := {((x̃_i), (ỹ_j)) ∈ (R^m)^{n+k} : Σ_{i=1}^{n} x̃_i − Σ_{j=1}^{k} ỹ_j = 0}.

Then M̃ is a vector space and therefore also a closed cone. Furthermore we have M̃ + w̃ = M, and hence by application of part c.) of the previous lemma

A(M) = A(M̃ + w̃) = A(M̃) = M̃ = M − w̃.

Therefore A(F) = {0} would follow if

( (R^m_+)^n × A(Y)^k ) ∩ M̃ = {0}.

To prove the latter we have to show that whenever y_j ∈ A(Y) for j = 1, ..., k and

Σ_{i=1}^{n} x_i − Σ_{j=1}^{k} y_j = 0    (3.1)

for some x_i ∈ R^m_+, then x_1 = ... = x_n = y_1 = ... = y_k = 0. Since A(Y) is convex and a cone, one has Σ_{j=1}^{k} y_j ∈ A(Y). Now, since Σ_{i=1}^{n} x_i ≥ 0 (componentwise), we also must have Σ_{j=1}^{k} y_j ≥ 0. Since however by assumption 3.) we have A(Y) ∩ R^m_+ = {0} (note that A(Y) ⊂ Y by part g.) of the previous lemma, as 0 ∈ Y), we must have Σ_{j=1}^{k} y_j = 0, and therefore by equation (3.1) also Σ_{i=1}^{n} x_i = 0. Since x_i ≥ 0 (componentwise) for all i, we must have x_i = 0 for all i. Now, since y_j ∈ A(Y) for all j, we get for all i = 1, ..., k

Σ_{j=1}^{k} y_j = 0 ⇒ y_i = − Σ_{j≠i} y_j ∈ Y ∩ (−Y) = {0},

since y_i ∈ A(Y) ⊂ Y and − Σ_{j≠i} y_j ∈ −A(Y) ⊂ −Y. This finally proves that A(F) = {0}, and therefore F is compact.

Let us now assume that in addition the assumptions a.) and b.) of the second part of the lemma hold. Choosing x_i ∈ X_i as in assumption a.), we get componentwise

Σ_{i=1}^{n} x_i < Σ_{i=1}^{n} w_i.

Let us set y := Σ_{i=1}^{n} x_i − Σ_{i=1}^{n} w_i. Then y < 0, and by assumption b.) we must have y ∈ Y. Therefore there must exist y_j ∈ Y_j s.t. y = Σ_{j=1}^{k} y_j. Clearly

Σ_{i=1}^{n} x_i − Σ_{j=1}^{k} y_j − w = y − y = 0,

and therefore ((x_i), (y_j)) ∈ F. This however implies that x_i ∈ X̃_i.

Proof. (of Theorem 3.8.1) Let us note first that assumption 5.), together with part g.) of Lemma 3.8.1, implies that

A(Y) ∩ R^m_+ ⊂ Y ∩ R^m_+ = {0},

and therefore all assumptions in Lemma 3.8.2 are met. The set F of attainable states is therefore compact, and since the image of compact sets under continuous maps is also compact, all the X̃_i and Ỹ_j are compact. Therefore we can choose compact, convex sets K_i, C_j ⊂ R^m s.t. X̃_i ⊂ K_i° and Ỹ_j ⊂ C_j°. We set

X_i' := K_i ∩ X_i, Y_j' := C_j ∩ Y_j.

We will set up a generalized game where these sets will serve as strategy sets for some of the participants. The participants or players will be:

1. An auctioneer: he is player "0" and his strategy set is the set of price vectors ∆^{m−1}.
2. Consumers 1, ..., n: they are players 1, ..., n and their strategy sets are the X_i'.
3. Suppliers 1, ..., k: they are players n+1, ..., n+k and their strategy sets are the Y_j'.

A typical multi-strategy therefore has the form

(p, (x_i), (y_j)) ∈ ∆^{m−1} × Π_{i=1}^{n} X_i' × Π_{j=1}^{k} Y_j'.

The utility correspondences are given as follows: For the auctioneer,

U_0 : ∆^{m−1} × Π_{i=1}^{n} X_i' × Π_{j=1}^{k} Y_j' →→ ∆^{m−1},
(p, (x_i), (y_j)) ↦ {q ∈ ∆^{m−1} : <q, Σ_i x_i − Σ_j y_j − w> > <p, Σ_i x_i − Σ_j y_j − w>}.

The economical interpretation of this is that the prices go up if there is more demand than supply (i.e. Σ_i x_i − Σ_j y_j − w ≥ 0), and that the prices go down if there is more supply than demand (i.e. Σ_i x_i − Σ_j y_j − w ≤ 0). This means that the auctioneer prefers to raise the value of excess demand. For the mathematics it is important to mention that the correspondence above has open graph, is convex valued, and that p ∉ U_0(p, (x_i), (y_j)) within our generalized game.
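As a concrete illustration of the auctioneer's preferences (the numbers below are hypothetical, not from the text): over the price simplex the objective <q, e>, for a fixed excess demand vector e, is maximized by concentrating all price weight on a commodity with maximal excess demand, which is the precise sense in which the auctioneer "raises the value of excess demand".

```python
# hypothetical excess demand vector e = sum_i x_i - sum_j y_j - w
e = [0.4, -0.1, 0.7]

def value(q, e):
    """The auctioneer's objective <q, e> at prices q."""
    return sum(qi * ei for qi, ei in zip(q, e))

p = [1/3, 1/3, 1/3]            # current prices
q = [0.0, 0.0, 1.0]            # all weight on the largest excess demand

# q belongs to U_0(p, (x_i), (y_j)): it strictly raises <., e>
assert value(q, e) > value(p, e)
# over the whole simplex, <q, e> is maximized at such a vertex
assert value(q, e) == max(e)
```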

Both properties follow more or less directly, since the inequality in the definition of U_0 is a strict one and the scalar product is bilinear. Let us define the utility correspondences for the suppliers: For supplier "l",

V_l : ∆^{m−1} × Π_{i=1}^{n} X_i' × Π_{j=1}^{k} Y_j' →→ Y_l',
(p, (x_i), (y_j)) ↦ {ỹ_l ∈ Y_l' : <p, ỹ_l> > <p, y_l>}.

Thus suppliers prefer larger profits. As before, it is easy to see that these correspondences have open graph and are convex valued, and that y_l ∉ V_l(p, (x_i), (y_j)). Finally, the utility correspondences for the consumers (they have to be distinguished from the utility correspondences the consumers have in the Walras economy) are: For consumer "q",

Ũ_q : ∆^{m−1} × Π_{i=1}^{n} X_i' × Π_{j=1}^{k} Y_j' →→ X_q',
(p, (x_i), (y_j)) ↦ co(U_q(x_q)) ∩ X_q'.

This correspondence is indeed well defined by the convexity of X_q'. Furthermore, it follows from Proposition 3.2.10 and assumption 3.) that the Ũ_q have open graphs and that x_q ∉ Ũ_q(p, (x_i), (y_j)). They are also convex valued by the convexity of X_q'.

To complete the setup of our generalized game we need feasibility correspondences for each player. For the suppliers and the auctioneer this is very easy: we choose constant correspondences. In fact, for supplier "l" we define

G_l : ∆^{m−1} × Π_{i=1}^{n} X_i' × Π_{j=1}^{k} Y_j' →→ Y_l', (p, (x_i), (y_j)) ↦ Y_l',

and for the auctioneer

F_0 : ∆^{m−1} × Π_{i=1}^{n} X_i' × Π_{j=1}^{k} Y_j' →→ ∆^{m−1}, (p, (x_i), (y_j)) ↦ ∆^{m−1}.

Constant correspondences are clearly continuous, and in this case they are also compact and convex valued. For the feasibility correspondences of the consumers we have to work a little bit more. Let us first define functions π_j for j = 1, ..., k by

π_j : ∆^{m−1} → R, p ↦ max_{y_j ∈ Y_j'} <p, y_j>.

Basically these maps compute the optimal profit the suppliers can achieve within the truncated supply sets. It is not hard to see (directly) that these functions are continuous; it follows however also from the Maximum Theorem (Proposition 3.4.1). Since ((w_i), (0_j)) ∈ F, we have 0 ∈ Ỹ_j ⊂ Y_j', so π_j(p) ≥ 0 for all p. We define the feasibility correspondence for consumer "q" as follows:

F_q : ∆^{m−1} × Π_{i=1}^{n} X_i' × Π_{j=1}^{k} Y_j' →→ X_q',
(p, (x_i), (y_j)) ↦ {x̃_q ∈ X_q' : <p, x̃_q> ≤ <p, w_q> + Σ_{j=1}^{k} α_j^q π_j(p)}.

This correspondence in fact only depends on the price vector p; the arguments (x_i) and (y_j) are redundant. Using assumption 2.) and the second part of

Lemma 3.8.2, there exists x_q ∈ X̃_q ⊂ X_q' s.t. x_q < w_q. Since also p ≥ 0 and π_j(p) ≥ 0, we have

<p, x_q> < <p, w_q> + Σ_{j=1}^{k} α_j^q π_j(p),

and therefore F_q is nonempty valued. Furthermore, it is not hard to see that F_q is lhc. Since furthermore X_q' is compact and F_q clearly has closed graph, it follows from Proposition 3.2.1 b.) that F_q is uhc. Thus for each consumer the feasibility correspondence is continuous with nonempty convex values. The generalized game constructed therefore satisfies all the assumptions in the Shafer-Sonnenschein Theorem (Theorem 3.7.1). Therefore there exists an NCE

(p^♯, (x_i^♯), (y_j^♯)) ∈ ∆^{m−1} × Π_{i=1}^{n} X_i' × Π_{j=1}^{k} Y_j',

which by definition of an NCE satisfies:

1. <q, Σ_i x_i^♯ − Σ_j y_j^♯ − w> ≤ <p^♯, Σ_i x_i^♯ − Σ_j y_j^♯ − w> for all q ∈ ∆^{m−1},
2. <p^♯, y_j^♯> ≥ <p^♯, y_j> for all y_j ∈ Y_j', i.e. <p^♯, y_j^♯> = π_j(p^♯),
3. x_i^♯ ∈ B_i(p^♯) := {x_i ∈ X_i' : <p^♯, x_i> ≤ <p^♯, w_i> + Σ_{j=1}^{k} α_j^i <p^♯, y_j^♯>} and co(U_i(x_i^♯)) ∩ B_i(p^♯) = Ũ_i(p^♯, (x_i^♯), (y_j^♯)) ∩ F_i(p^♯, (x_i^♯), (y_j^♯)) = ∅.
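The profit functions π_j, and the NCE property <p^♯, y_j^♯> = π_j(p^♯), can be illustrated with a tiny truncated supply set (a hypothetical finite set of production plans containing 0; this is illustration only, the Y_j' in the text are infinite):

```python
# hypothetical truncated supply set Y'_j containing the zero plan
Y_prime = [(0.0, 0.0), (2.0, -1.0), (-1.0, 3.0), (1.0, 1.0)]

def dot(p, y):
    return sum(pi * yi for pi, yi in zip(p, y))

def profit(p):
    """pi_j(p) = max over Y'_j of <p, y>."""
    return max(dot(p, y) for y in Y_prime)

p = (0.25, 0.75)                       # a price vector in the simplex
assert abs(sum(p) - 1.0) < 1e-12
assert profit(p) >= 0.0                # 0 in Y'_j forces pi_j(p) >= 0
assert abs(profit(p) - 2.0) < 1e-12    # attained at the plan (-1, 3)
```

The maximizing plan plays the role of y_j^♯: any NCE strategy of supplier j must earn exactly this maximal profit at the equilibrium prices.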

We are now going to construct a Walras equilibrium from the NCE (p^♯, (x_i^♯), (y_j^♯)). We first show that in the NCE each consumer spends all of his income. For notational convenience set

M_i := <p^♯, w_i> + Σ_{j=1}^{k} α_j^i <p^♯, y_j^♯>

for the income of consumer "i". Suppose not, i.e. <p^♯, x_i^♯> < M_i. Then, since U_i(x_i^♯) is open (this follows from the assumption that U_i has open graph) and by assumption 3.), it would follow that U_i(x_i^♯) ∩ B_i(p^♯) ≠ ∅ and therefore also co(U_i(x_i^♯)) ∩ B_i(p^♯) ≠ ∅, which is a contradiction to property 3.) of our NCE above. Therefore we have <p^♯, x_i^♯> = M_i for all i, or more explicitly

<p^♯, x_i^♯> = <p^♯, w_i> + Σ_{j=1}^{k} α_j^i <p^♯, y_j^♯> for all i.

Summing up over i and using the assumption on the Walras economy that Σ_{i=1}^{n} α_j^i = 1 for each j yields

<p^♯, Σ_i x_i^♯> = <p^♯, w> + Σ_j <p^♯, y_j^♯>, i.e. <p^♯, Σ_i x_i^♯ − Σ_j y_j^♯ − w> = 0.

By property 1.) of our NCE we then have

<q, Σ_i x_i^♯ − Σ_j y_j^♯ − w> ≤ 0 for all q ∈ ∆^{m−1},

which clearly implies that Σ_i x_i^♯ − Σ_j y_j^♯ − w ≤ 0. By assumption 6.) we then have that z := Σ_i x_i^♯ − Σ_j y_j^♯ − w ∈ Y. Therefore there must exist ŷ_j ∈ Y_j such that z = Σ_j ŷ_j. We define ỹ_j := y_j^♯ + ŷ_j for all j,

so that ỹ_j ∈ Y_j. Then we have

Σ_i x_i^♯ − Σ_j ỹ_j − w = z − z = 0.

By construction we therefore have that ((x_i^♯), (ỹ_j)) ∈ F. Furthermore, since <p^♯, z> = 0,

Σ_j <p^♯, ỹ_j> = Σ_j <p^♯, y_j^♯> + <p^♯, z> = Σ_j <p^♯, y_j^♯>.

By property 2.) of our NCE and ỹ_j ∈ Ỹ_j ⊂ Y_j', we have <p^♯, y_j^♯> ≥ <p^♯, ỹ_j> for all j. Therefore the equality above can only hold if

<p^♯, ỹ_j> = <p^♯, y_j^♯> for all j.

We have therefore shown that <p^♯, ỹ_j> ≥ <p^♯, y_j> for all y_j ∈ Y_j'. We will now show that this inequality holds even for all y_j ∈ Y_j. Suppose this would not be the case, i.e. there would exist y_j ∈ Y_j s.t. <p^♯, y_j> > <p^♯, ỹ_j>. Since Y_j is convex, we have λ·y_j + (1 − λ)·ỹ_j ∈ Y_j for all λ ∈ [0, 1], and since ỹ_j ∈ Ỹ_j ⊂ (C_j)°, there exists λ > 0 s.t. ŷ := λ·y_j + (1 − λ)·ỹ_j ∈ Y_j'. Then <p^♯, ŷ> > <p^♯, ỹ_j>, which is a contradiction to the inequality above.

To show that (p^♯, (x_i^♯), (ỹ_j)) is a Walrasian free disposal equilibrium, it remains to show that for each i

U_i(x_i^♯) ∩ {x_i ∈ X_i : <p^♯, x_i> ≤ <p^♯, w_i> + Σ_{j=1}^{k} α_j^i <p^♯, ỹ_j>} = ∅.

Suppose there would be an x_i in this intersection. Then, since X_i' is convex and x_i^♯ ∈ X̃_i ⊂ (X_i')°, there exists λ > 0 such that λ·x_i + (1 − λ)·x_i^♯ ∈ X_i'. Since x_i ∈ U_i(x_i^♯), it follows from the convexity of b_i(p^♯) that λ·x_i + (1 − λ)·x_i^♯ ∈ co(U_i(x_i^♯)) ∩ B_i(p^♯). This is a

contradiction to property 3.) of our NCE. Thus (p^♯, (x_i^♯), (ỹ_j)) is indeed a Walrasian free disposal equilibrium.
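The budget-summation step in the proof (each consumer spends the full income M_i, while the shares satisfy Σ_i α_j^i = 1 for every j) forces <p^♯, Σ_i x_i^♯ − Σ_j y_j^♯ − w> = 0. This can be checked numerically; all data below are hypothetical:

```python
p = (0.5, 0.5)
w = [(1.0, 2.0), (2.0, 1.0)]                  # endowments of two consumers
y = [(1.0, -0.5), (-0.5, 1.0)]                # plans of two suppliers
alpha = [[0.3, 0.6], [0.7, 0.4]]              # alpha[i][j]; columns sum to 1

def dot(a, b):
    return sum(u * v for u, v in zip(a, b))

# income of consumer i: M_i = <p, w_i> + sum_j alpha_j^i <p, y_j>
M = [dot(p, w[i]) + sum(alpha[i][j] * dot(p, y[j]) for j in range(2))
     for i in range(2)]

# let each consumer spend exactly M_i (here x_i = (M_i, M_i), so <p, x_i> = M_i)
x = [(M[i], M[i]) for i in range(2)]
excess = tuple(sum(xi[c] for xi in x) - sum(yj[c] for yj in y)
               - sum(wi[c] for wi in w) for c in range(2))
assert abs(dot(p, excess)) < 1e-12            # <p, sum x - sum y - w> = 0
```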

Chapter 4

Cooperative Games

4.1 Cooperative Two Person Games

In the setup of non-cooperative games the players choose their strategies independently from each other; then the game is played and delivers some output, which is measured either via a loss operator or via a utility correspondence. In the setup of cooperative games the players are allowed to communicate before choosing their strategies and playing the game. They can agree, but also disagree, about a joint strategy.

Let us recall the "Battle of the Sexes" game, where the strategies are given as follows:

man: s_1 = "go to theater", s_2 = "go to soccer"
woman: s̃_1 = "go to theater", s̃_2 = "go to soccer"

The corresponding bilosses are given by the matrix

L := ( (−1, −4)  (0, 0)
       (0, 0)    (−4, −1) ).

The mixed strategies of this game look as follows:

88 . In the same ˜ way one can see that (0. It involves a random experiment which the to players perform together. 1] y · s1 + (1 − y) · s2 ↔ x ∈ [0. 1) L2 (1. y) is given by L1 (x. y) = −(5xy + 4 − 4x − 4y) L2 (x. y ∈ [0. 1) = −1 ≤ −x = L1 (x. y) we see that the pure strategy (1. 1) ↔ (s1 . − ). s1 ) is a NCE. 2 2 2 2 We call such a strategy a jointly randomized strategy. y) = −(5xy + 1 − x − y) Since we have for all x. 1) = −4 ≤ −4y = L2 (1. 2 ) is the outcome of two ran2 dom experiments which the players do independently from each other. 1] that L1 (1. −4) + (−4. 1] ˜ ˜ and the biloss of the mixed strategy (x. All possible outcomes of the game when using mixed strategies are given by the shaded region in the following graphic: Assume now the man and woman decide to do the following : They throw a coin and if its head then they go both to the theater and if its number they go both to see the soccer match. −1) = (− . 0 is an NCE. Note that 1 the non-cooperative mixed strategies ( 1 . The expected biloss of this strategy is: 1 1 5 5 · (−1.x · s1 + (1 − x) · s2 ↔ x ∈ [0.

In fact if both man and woman go to the soccer match. ) = − . ) = − 2 2 4 1 1 5 L2 ( . then th man is better of than with our jointly randomized strategy and vice versa the woman is better of if both go to the 89 . Hence such jointly randomized strategies give something completely new and the concept of cooperative game theory is just the extension of the original concept by these jointly randomized strategies. 2 2 4 One can also see that the biloss of our jointly randomized strategy is not in the biloss region of the non-cooperative game ( see graphic ).1: Biloss region for “Battle of the Sexes” : –4 –3 –2 –1 0 –1 –2 –3 –4 non-cooperative setup The ( expected ) biloss of this strategy is 1 1 5 L1 ( . It is not true that the jointly randomized strategy above is in any case better than any non-cooperative strategy.Figure 4.

In the following we restrict ourself to games with only ﬁnite strategy sets.1. sj )| i. Let G2 b a ( non cooperative ) two person game with ﬁnite strategy sets S1 and S2 and let L = (L1 .theater. The expected bilosses of these strategies are λ0 · (0. 1) + λ3 · (0. sj ) ˜ where ∆S1 ×S2 := { i. ˜ 90 . if both want their will.j λij (si .j λij = 1. 0) + λ1 · (−1. However these to cases are unlikely to happen. Then the corresponding cooperative game is given by the biloss operator ˆ L : ∆S1 ×S2 → R × R λij (si . 0). −4) + λ2 · (0. λij ∈ [0. this set is th convex hull of the biloss region of th non-cooperative game. One of the main questions in cooperative game theory is to ﬁnd the best compromise. Before giving a precise mathematical formulation of cooperative two person games let us mention that there are more jointly randomized strategies for the ”‘Battle of the Sexes”’ game.j λij L(si . λ3 ∈ [0.1. In fact for λ0 . 0) + λ1 · (1. L2 ) be its biloss operator. 1] such that λ0 + λ1 + λ2 + λ3 = 1 we have the jointly randomized strategy λ0 · (1. −4λ1 − λ3 ). −1) = (−1 − 4λ4 . 1) + λ2 · (0. λ1 . 1]} is the ( for˜ mal ) simplex spanned by the pure strategy pairs (si . sj ). Deﬁnition 4. sj ) → ˜ i. The possible bilosses of jointly randomized strategies are given in the following graphic : As one can see immediately. 0) + λ3 · (−4. λ2 .j i. The jointly randomized strategy is therefore in some sense a compromise.
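The "Battle of the Sexes" computations above (the mixed-strategy biloss formulas, the two pure equilibria, the unattainability of the coin-toss biloss (−5/2, −5/2) by independent mixing, and the closed form for general jointly randomized strategies) can all be verified directly. A small sketch; the grid resolution is an arbitrary choice:

```python
def L1(x, y):                  # man's expected loss under independent mixing
    return -(5 * x * y + 4 - 4 * x - 4 * y)

def L2(x, y):                  # woman's expected loss
    return -(5 * x * y + 1 - x - y)

grid = [k / 100 for k in range(101)]

# (1, 1) <-> (s1, s~1) and (0, 0) <-> (s2, s~2) are NCEs:
assert all(L1(1, 1) <= L1(x, 1) for x in grid)
assert all(L2(1, 1) <= L2(1, y) for y in grid)
assert all(L1(0, 0) <= L1(x, 0) for x in grid)
assert all(L2(0, 0) <= L2(0, y) for y in grid)

# The coin toss gives biloss (-5/2, -5/2), but independent mixing cannot
# reach it: the larger of the two losses never gets down to -5/2.
coin = (0.5 * (-1) + 0.5 * (-4), 0.5 * (-4) + 0.5 * (-1))
assert coin == (-2.5, -2.5)
assert min(max(L1(x, y), L2(x, y)) for x in grid for y in grid) > -2.5

# Expected biloss of l0*(1,0) + l1*(1,1) + l2*(0,1) + l3*(0,0)
# equals (-l1 - 4*l3, -4*l1 - l3):
bilosses = {(1, 0): (0, 0), (1, 1): (-1, -4), (0, 1): (0, 0), (0, 0): (-4, -1)}
l0, l1, l2, l3 = 0.1, 0.4, 0.2, 0.3
weights = dict(zip([(1, 0), (1, 1), (0, 1), (0, 0)], (l0, l1, l2, l3)))
u = sum(wt * bilosses[pr][0] for pr, wt in weights.items())
v = sum(wt * bilosses[pr][1] for pr, wt in weights.items())
assert abs(u - (-l1 - 4 * l3)) < 1e-12 and abs(v - (-4 * l1 - l3)) < 1e-12
```

These checks mirror the calculations in the text; they do not replace the analytic argument.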

Figure 4.2: Biloss region for “Battle of the Sexes”: cooperative setup.

The image im(L̂) of L̂ is called the biloss region of the cooperative game. By definition of L̂ it is clear that it is always convex, and in fact it is the convex hull of the biloss region of the corresponding non-cooperative game.

Remark 4.1.1. If the strategy sets are not necessarily finite but probability spaces, then one can consider jointly randomized strategies as functions f : S1 × S2 → R+ such that ∫_{S1×S2} f(s, s̃) dP_{S1} dP_{S2} = 1.

Definition 4.1.2. Given a two person game G2, let L̂ be the biloss operator of the corresponding cooperative game. A pair of losses (u, v) ∈ im(L̂) is called jointly sub-dominated by a pair (u′, v′) ∈ im(L̂) if u′ ≤ u, v′ ≤ v and (u′, v′) ≠ (u, v). The pair (u, v) is called Pareto optimal if it is not jointly sub-dominated.

Let us recall the definition of the conservative value of a two person game:

u♯ = min_{s1∈S1} max_{s2∈S2} L1(s1, s2),
v♯ = min_{s2∈S2} max_{s1∈S1} L2(s1, s2).

These values are the losses the players can guarantee for themselves, no matter what the other player does, by choosing the corresponding conservative strategy (see Definition 1.2.3).

Definition 4.1.3. Given a two person game G2, let L̂ be the biloss operator of the corresponding cooperative game. The set

B := {(u, v) ∈ im(L̂) | u ≤ u♯, v ≤ v♯ and (u, v) Pareto optimal}

is called the bargaining set (sometimes also negotiation set).

The interpretation of the bargaining set is as follows: it contains all reasonable compromises the players can agree on. In fact, no player would accept a compromise (u, v) where u > u♯ resp. v > v♯, because the losses u♯ resp. v♯ are guaranteed to him. In the same way they would not agree on a strategy pair which is jointly sub-dominated, because then by switching to another strategy pair they can both do better and one of them can do strictly better. The main question however remains: which compromise in the bargaining set is the best one? Using some assumptions which can be economically motivated, the so-called Nash bargaining solution gives an answer to this question. This is the content of the next section.

4.2 Nash’s Bargaining Solution

Let us denote with conv(R2) the set of compact and convex subsets of R2 and with

A := {((u0, v0), P) | P ∈ conv(R2) and (u0, v0) ∈ P}.

Definition 4.2.1. A bargaining function is a function ψ : A → R2 s.t. ψ((u0, v0), P) ∈ P for all ((u0, v0), P) ∈ A.

The economical interpretation of a bargaining function is as follows: we think of P as the biloss region of some cooperative two person game and of (u0, v0) as some status quo point (the outcome when the two players do not agree on a compromise). Then ψ((u0, v0), P) gives the compromise. As status quo point one often chooses the conservative value of the game, but other choices are also possible.

Definition 4.2.2. A bargaining function ψ : A → R2 is called a Nash bargaining function if it satisfies the following conditions, where we denote (u*, v*) := ψ((u0, v0), P):

1. u* ≤ u0 and v* ≤ v0, i.e. the compromise is at least as good as the status quo.

2. (u*, v*) is Pareto optimal, i.e. there does not exist (u, v) ∈ P \ {(u*, v*)} s.t. u ≤ u* and v ≤ v*.

3. If P1 ⊂ P and (u*, v*) ∈ P1, then (u*, v*) = ψ((u0, v0), P1) (independence of irrelevant alternatives).

4. Let P′ be the image of P under the affine linear transformation u ↦ au + b, v ↦ cv + d with a, c > 0. Then ψ((au0 + b, cv0 + d), P′) = (au* + b, cv* + d) (invariance under affine linear transformations = invariance under rescaling utility).

5. If P is symmetric, i.e. (u, v) ∈ P ⇔ (v, u) ∈ P, and u0 = v0, then u* = v* (symmetry).

We are going to prove that there is precisely one Nash bargaining function ψ : A → R2. For this we need the following lemma.

Lemma 4.2.1. Let ((u0, v0), P) ∈ A. We define a function

f_{(u0,v0)} : P ∩ {u ≤ u0, v ≤ v0} → R+,  (u, v) ↦ (u0 − u)(v0 − v).

If there exists a pair (u, v) ∈ P s.t. u < u0 and v < v0, then f_{(u0,v0)} takes its maximum at a unique point (u*, v*).

Proof. f_{(u0,v0)} is defined on a compact set and is clearly continuous, i.e. it takes its global maximum at at least one point (u*, v*) := argmax f_{(u0,v0)}(u, v). Let M = f_{(u0,v0)}(u*, v*). By our assumption and the definition of f_{(u0,v0)} we have M > 0 and u* < u0, v* < v0. Assume now f_{(u0,v0)}(ũ, ṽ) = M with (ũ, ṽ) ∈ P ∩ {u ≤ u0, v ≤ v0}. We have to show (ũ, ṽ) = (u*, v*). Assume this would not be the case. Since by definition of f_{(u0,v0)} and M we have

(u0 − u*)(v0 − v*) = M = (u0 − ũ)(v0 − ṽ),

it follows that u* = ũ ⇔ v* = ṽ, and we can therefore assume that u* ≠ ũ and v* ≠ ṽ. More precisely, there are exactly two cases: (u* < ũ and v* > ṽ) or (u* > ũ and v* < ṽ). Since P is convex, it contains the point

(u′, v′) := ½ (u*, v*) + ½ (ũ, ṽ).  (4.1)

Clearly u′ ≤ u0 and v′ ≤ v0. An easy computation shows that

f_{(u0,v0)}(u′, v′) = ½ f_{(u0,v0)}(u*, v*) + ½ f_{(u0,v0)}(ũ, ṽ) + ¼ (ũ − u*)(v* − ṽ) = M + ¼ (ũ − u*)(v* − ṽ) > M,

since in both cases above (ũ − u*)(v* − ṽ) > 0. This is a contradiction to the maximality of M, hence (ũ, ṽ) = (u*, v*) and we are done.

Theorem 4.2.1. There exists exactly one Nash bargaining function ψ : A → R2.

Proof. We define a function ψ : A → R2 as follows. Let ((u0, v0), P) ∈ A. Let us first consider the case where there exists (u, v) ∈ P s.t. u < u0, v < v0. In this case, using Lemma 4.2.1, we define

(u*, v*) := ψ((u0, v0), P) := argmax f_{(u0,v0)}(u, v).

If there are no points (u, v) ∈ P s.t. u < u0, v < v0, then the convexity of P implies that exactly two cases can occur:

1. every (u, v) ∈ P with v ≤ v0 satisfies u ≥ u0;

2. every (u, v) ∈ P with u ≤ u0 satisfies v ≥ v0.

In case 1.) we define ψ((u0, v0), P) := (u0, v*), where v* is the minimal value such that (u0, v*) ∈ P; in case 2.) we define ψ((u0, v0), P) := (u*, v0), where u* is the minimal value such that (u*, v0) ∈ P.

We will now show that the bargaining function ψ defined above satisfies the five conditions of Definition 4.2.2. Condition 1.) is trivially satisfied. To show that condition 2.) is satisfied, assume that u ≤ u*, v ≤ v* and (u, v) ∈ P \ {(u*, v*)}. In the first case considered above, i.e. if there exists a point of P with u < u0, v < v0, we would get by Lemma 4.2.1 that

f_{(u0,v0)}(u, v) = (u0 − u)(v0 − v) > (u0 − u*)(v0 − v*) = M,

which is a contradiction. Similarly one argues in the two remaining cases. Therefore (u*, v*) is Pareto optimal and condition 2.) is satisfied.

To show that condition 3.) holds, let us assume that P1 ⊂ P and (u*, v*) = ψ((u0, v0), P) ∈ P1. By definition of ψ, (u*, v*) maximizes (u0 − u)(v0 − v) over P ∩ {(u, v) : u ≤ u0, v ≤ v0}, and therefore also over the smaller set P1 ∩ {(u, v) : u ≤ u0, v ≤ v0}. Hence (u*, v*) = ψ((u0, v0), P1).

Now consider the affine transformation u ↦ au + b, v ↦ cv + d, where a, c > 0, and let P′ be the image of P under this transformation. Since (u*, v*) maximizes (u0 − u)(v0 − v) over P ∩ {(u, v) : u ≤ u0, v ≤ v0}, it also maximizes

ac(u0 − u)(v0 − v) = ((au0 + b) − (au + b))((cv0 + d) − (cv + d)).

But this is equivalent to saying that (au* + b, cv* + d) maximizes (u0′ − u)(v0′ − v) over P′, where u0′ = au0 + b and v0′ = cv0 + d. Hence by definition of ψ we have ψ((u0′, v0′), P′) = (au* + b, cv* + d), so condition 4.) is satisfied.

To show that condition 5.) is satisfied, assume that P is symmetric and u0 = v0, but u* ≠ v*. Then (v*, u*) ∈ P, and by convexity of P also

(u′, v′) := ½ (u*, v*) + ½ (v*, u*) ∈ P.

An easy computation shows that

f_{(u0,v0)}(u′, v′) = (u*² + 2u*v* + v*²)/4 − (u* + v*)u0 + u0²,

where we made use of u0 = v0. Since (u* − v*)² > 0 we know u*² + v*² > 2u*v*, and therefore

f_{(u0,v0)}(u′, v′) > u*v* − (u* + v*)u0 + u0² = (u0 − u*)(v0 − v*) = f_{(u0,v0)}(u*, v*),

which is a contradiction, and therefore we must have u* = v*.

We have therefore shown that the function ψ defined in the first part of the proof is a Nash bargaining function. It remains to show that whenever ψ̃ is another Nash bargaining function, then ψ̃ = ψ. Let us therefore assume that ψ̃ is another bargaining function which satisfies conditions 1.) to 5.) of Definition 4.2.2. Denote (ũ, ṽ) = ψ̃((u0, v0), P) and (u*, v*) = ψ((u0, v0), P). Let us use the affine transformation

u′ := (u − u0)/(u0 − u*),  v′ := (v − v0)/(v0 − v*),

and let P′ denote the image of P under this transformation. We have

(u0, v0) ↦ (0, 0),  (u*, v*) ↦ (−1, −1).

Clearly f_{(0,0)}(−1, −1) = 1, and (−1, −1) maximizes f_{(0,0)}(u, v) = u · v over P′ ∩ {(u, v) | u ≤ 0, v ≤ 0}. We claim that u + v ≥ −2 for all such pairs. Let (u, v) ∈ P′ s.t. u ≤ 0, v ≤ 0 and assume that u + v < −2. Then there exist ε > 0 and x ∈ R s.t.

u = −1 − x,  v = −1 + x − ε.

Then, joining (−1, −1) with (u, v) by a line, we see that for all λ ∈ [0, 1] we have

(u′, v′) := (1 − λ)(−1, −1) + λ(−1 − x, −1 + x − ε) ∈ P′.

Evaluating f_{(0,0)} at this point gives

f_{(0,0)}(u′, v′) = (1 + λx)(1 + λ(ε − x)) = 1 + λε + λ²x(ε − x).  (4.2)

Consider the last expression as a function in λ. Evaluation at λ = 0 gives the value 1. The function is clearly differentiable with respect to λ, and the derivative at the point λ = 0 is ε > 0. Therefore there exists a λ > 0 such that the right hand side of equation (4.2) is strictly greater than one, which is a contradiction to the maximality of (−1, −1). We therefore have u + v ≥ −2 for all pairs (u, v) ∈ P′ with u ≤ 0, v ≤ 0. Now let

P̃ = {(u, v) | (u, v) ∈ P′ or (v, u) ∈ P′}

be the symmetric closure of P′. Clearly P̃ is compact, convex and symmetric, and P′ ⊂ P̃. Now let (û, v̂) := ψ̃((0, 0), P̃). By property 5.) of ψ̃ we have û = v̂. Whenever a point (u, u) lies in P̃ ∩ {(u, v) | u ≤ 0, v ≤ 0}, we have 0 ≥ u ≥ −1, since 0 ≥ u + u ≥ −2. Since (−1, −1) ∈ P′ ⊂ P̃, using the Pareto optimality of (û, v̂) we must have (û, v̂) = (−1, −1). But then (û, v̂) ∈ P′, and by using property 3.) of ψ̃ we get

(ũ, ṽ) = ψ̃((0, 0), P′) = (−1, −1).

Computing the inverse under the affine transformation shows (ũ, ṽ) = (u*, v*). Therefore ψ̃ = ψ and we are finished.

It follows directly from properties 1.) and 2.) that the value (u*, v*) = ψ((u♯, v♯), im(L̂)) in the context of a cooperative game lies in the bargaining set. The consequence of Theorem 4.2.1 is that if the two players believe in the axioms of a Nash bargaining function, there is a unique method to settle the conflict once the status quo point is fixed.
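The maximization that defines ψ can also be carried out numerically. The following sketch (an illustration only, not the construction used in the proof) approximates ψ((u0, v0), P) for a region P given as the convex hull of finitely many vertices, by sampling the segments between vertex pairs:

```python
import itertools

def nash_bargain(status_quo, vertices, steps=400):
    """Approximate argmax of (u0-u)(v0-v) over the convex hull of `vertices`,
    restricted to u <= u0, v <= v0, by sampling convex combinations."""
    u0, v0 = status_quo
    best, best_val = None, -1.0
    # sample pairs of vertices and points on the segment between them;
    # for a planar convex hull this covers the Pareto boundary well enough
    for (a, b) in itertools.combinations_with_replacement(vertices, 2):
        for k in range(steps + 1):
            t = k / steps
            u = (1 - t) * a[0] + t * b[0]
            v = (1 - t) * a[1] + t * b[1]
            if u <= u0 and v <= v0:
                val = (u0 - u) * (v0 - v)
                if val > best_val:
                    best_val, best = val, (u, v)
    return best

# "Battle of the Sexes" biloss region, status quo = conservative value (0, 0):
pt = nash_bargain((0, 0), [(0, 0), (-1, -4), (-4, -1)])
print(pt)  # close to (-2.5, -2.5)
```

For the “Battle of the Sexes” biloss region with status quo (0, 0) this recovers the symmetric compromise (−5/2, −5/2), in accordance with the symmetry axiom.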

Example 4.2.1. Consider the two person game G2 where the biloss operator is given by the following matrix:

L := ( (−1, −2)  (−8, −3)
       (−4, −4)  (−2, −1) )

A straightforward computation shows that the conservative values of this game are given by u♯ = −10/3 and v♯ = −5/2. The biloss region of the corresponding cooperative game is given in the following graphic. The bargaining set is

B := {(u, v) | v = −¼ u − 5, −8 ≤ u ≤ −4};

this follows after a straightforward computation. Therefore, to compute the value ψ((u♯, v♯), im(L̂)), we have to maximize the function

(−10/3 − u)(−5/2 + ¼ u + 5)

over −8 ≤ u ≤ −4. This is easy calculus and gives the values u* = −20/3, v* = −10/3, and therefore ψ((u♯, v♯), im(L̂)) = (−20/3, −10/3).

The argumentation for the choice of the status quo point as the conservative value is reasonable but not completely binding, as the following example shows.

Example 4.2.2. Consider the two person game G2 where the biloss operator is given by the following matrix:

L := ( (−1, −4)  (1, 4)
       (4, 1)  (−4, −1) )

The conservative value is (u♯, v♯) = (0, 0). The biloss region of the corresponding cooperative game is given in the following graphic.

Since we have u0 = u♯ = v♯ = v0 and the biloss region is obviously symmetric, we must have u* = v*. The only point in the bargaining set which satisfies this condition is (u*, v*) = (−5/2, −5/2). This compromise gives both players the same and seems in this sense fair. The question however is: is this compromise as good for player 1 as it is for player 2? We claim that the second player is in a stronger position than the first one and therefore deserves a bigger piece of the pie.

Assume that the players are bankrupt if their losses exceed the value 3. If player 2 decides to play his first strategy, then if player 1 chooses his first strategy he wins 1, while if he chooses his second strategy he is bankrupt. So player 1 is forced to play his first strategy, although he knows that he does not make the optimal profit. If player 1 chooses strategy 1, it could happen that player 2 goes bankrupt by choosing the second strategy; but clearly player 2 would not do this and instead, by choosing strategy 1, be perfectly happy with his maximal profit. A strategy such as player 2's first strategy is called a threat. Player 1 has no comparable strategy to offer.

We will develop a method which applies to the situation where the players threaten their opponents. This method, also known as the threat bargaining solution¹, gives a solution to this problem.

¹ sometimes also Nash bargaining solution

Definition 4.2.3. Let G2 be a non-cooperative two person game with finite strategy sets S1 and S2. The corresponding threat game TG2 is the non-cooperative two person game with mixed strategy sets ΔS1 and ΔS2 and biloss operator given by

TL(s, s̃) = ψ_Nash(L(s, s̃), im(L̂)),

where ψ_Nash denotes the Nash bargaining function.

Basically the process is as follows. Start with a non-cooperative

game, consider the corresponding cooperative game and look for the compromises given by the Nash bargaining function in dependence on the status quo points, and you get back a non-cooperative two person game. However, to apply the methods from chapters 2 and 3 we must know something about the continuity as well as the convexity properties of this biloss operator.

Lemma 4.2.2. Let G2 be a non-cooperative two person game with finite strategy sets S1 and S2 and biloss operator L. Then the function

ΔS1 × ΔS2 → R2,  (s, s̃) ↦ ψ_Nash(L(s, s̃), im(L̂))

is continuous. In particular, the biloss operator of the corresponding threat game TG2 is continuous, and it satisfies the convexity assumption of Theorem 3.1.1.

Proof. The continuity of ψ_Nash follows straightforwardly (but a bit technically) from its construction. Since the biloss operator of the threat game is basically ψ_Nash, the continuity of the biloss operator is therefore clear. To show that it satisfies the convexity assumptions of Theorem 3.1.1 is a bit harder and we do not do it here.

Theorem 4.2.2. Let G2 be a non-cooperative two person game with finite strategy sets and let TG2 be the corresponding threat game. Then TG2 has at least one non-cooperative equilibrium. Furthermore, the bilosses under TL of all non-cooperative equilibria are the same.

Proof. The existence of non-cooperative equilibria follows from Theorem 3.1.1 and the previous lemma. That the bilosses under TL of all non-cooperative equilibria are the same will follow from the following discussion.

The threat game is a special case of a so-called purely competitive game.

Definition 4.2.4. A two person game G2 is called a purely competitive game (sometimes also a pure conflict or antagonistic game) if all

outcomes are Pareto optimal, i.e. if (u1, v1) and (u2, v2) are two possible outcomes and u2 ≤ u1, then v2 ≥ v1.

Remark 4.2.1. The threat game is a purely competitive game. This is clear since the values of the Nash bargaining function are Pareto optimal.

The uniqueness of the Nash bargaining solution now follows from the following proposition.

Proposition 4.2.1. Let G2 be a purely competitive two person game. Then the bilosses of all non-cooperative equilibria are the same.

Proof. Let (s, s̃) and (r, r̃) be non-cooperative equilibria. We have to prove

L1 (s, s) = L1 (r, r) ˜ ˜ L2 (s, s) = L2 (r, r). ˜ ˜ W.l.o.g we assume L1 (s, s) ≥ L1 (r, r). Since (r, r) is an NCE we have ˜ ˜ ˜ by deﬁnition of an NCE that L2 (r, s) ≥ L2 (r, r). ˜ ˜ Since G2 is purely competitive this implies that L1 (r, s) ≤ L1 (r, r). ˜ ˜ Since however (s, s) is also an NCE we have that ˜ L1 (r, s) ≥ L1 (s, s) ˜ ˜ and therefore by composing the two previous inequalities we get L1 (s, s) ≤ L1 (r, r) so that in fact we have L1 (s, s) = L1 (r, r). The same ˜ ˜ ˜ ˜ argument shows that L2 (s, s) = L2 (r, r). ˜ ˜


Remark 4.2.2. The proof above shows more. In fact, it shows that also L1(r, s̃) = L1(s, s̃) and L2(r, s̃) = L2(s, s̃), and therefore that (r, s̃) and similarly (s, r̃) are non-cooperative equilibria. In words, this means that all non-cooperative equilibria in a purely competitive game are interchangeable.

Definition 4.2.5. Let G2 be a non-cooperative two person game with finite strategy sets. Then the Nash bargaining solution is the unique biloss under TL of any non-cooperative equilibrium of the corresponding threat game TG2. Strategies s, s̃ such that (s, s̃) is an NCE of the threat game are called optimal threats.

Using the biloss under L of any pair of optimal threats as the status quo point for the Nash bargaining function delivers the Nash bargaining solution as a compromise. The Nash bargaining solution now gives a compromise for any two person game with finite strategy sets which does not depend on status quo points. The difficulty however still is to find the NCE of the threat game. Let us illustrate the method first with an easy example. Assume the bargaining set is given as a line

B := {(u, v) | au + v = b, c1 ≤ u ≤ c2}.

Suppose player 1 threatens strategy s² and player 2 threatens strategy s̃. Then player 1's loss in the threat game is the u* that maximizes

(L1 (s, s) − u)(L2 (s, s) − v) = (L1 (s, s) − u)(L2 (s, s) − b + au). ˜ ˜ ˜ ˜ If this u∗ happens to be in the interior of [c1 , c2 ] then it can be computed by setting the derivative with respect to u of the expression above equal to zero. This gives


² this means player 1 chooses the strategy s when playing the threat game


0 = d/du ((L1(s, s̃) − u)(L2(s, s̃) − b + au)) = −(L2(s, s̃) − b + au) + a(L1(s, s̃) − u).

This gives

u* = (1/(2a)) (b + aL1(s, s̃) − L2(s, s̃)).  (4.3)

Similarly, substituting au + v = b, we get

v* = ½ (b − aL1(s, s̃) + L2(s, s̃)).  (4.4)

Let us denote with L_{ij} = L(s_i, s̃_j) the biloss operator L in (bi-)matrix form, where the s_i resp. s̃_j denote the pure strategies of the original non-cooperative game. Then we have for s = Σ_i x_i · s_i and s̃ = Σ_j y_j · s̃_j that

u* = (1/(2a)) (b + Σ_{i,j} x_i (aL1_{ij} − L2_{ij}) y_j).

So as to make u* as small as possible,³ player 1 has to choose x = (x1, ..., xn) ∈ Δn so as to minimize Σ_{i,j} x_i (aL1_{ij} − L2_{ij}) y_j against any y, and to minimize v* player 2 chooses y = (y1, ..., ym) ∈ Δm so as to maximize this expression. Therefore the NCE of the threat game, i.e. the Nash bargaining solution, corresponds to the NCE of the zero sum game

L̃ = (aL1_{ij} − L2_{ij}) ∼ (aL1_{ij} − L2_{ij}, −(aL1_{ij} − L2_{ij})).

If w* denotes the loss of player one when the NCE strategies of the game L̃ are implemented, then

³ remember that u* is the loss player 1 suffers when the corresponding compromise is established

u* = (1/(2a)) (b + w*),  v* = ½ (b − w*).  (4.5)

This NCE can be computed with numerical methods and some linear algebra. Moreover, one also gets the optimal threats which implement the Nash bargaining solution. We have therefore proven the following proposition, which helps us to identify the Nash bargaining solution.

Proposition 4.2.2. Let G2 be a non-cooperative game such that the corresponding bargaining set is a line, given by the equation au + v = b with c1 ≤ u ≤ c2. If the Nash bargaining solution lies in the interior of this line, then the Nash bargaining solution (u*, v*) is given by (4.5).

In general, the biloss region is a polygon and the bargaining set is piecewise linear. One can then apply the method described above to each line segment in the bargaining set. If one knows on which line segment the Nash bargaining solution lies, then one obtains the Nash bargaining solution with this method. In general though, this does not yet give a clear answer to the problem. In the following we present a graphical method to decide on which line segment the Nash bargaining solution can lie.

Lemma 4.2.3. Let G2 be a two person game with finitely many pure strategies and bargaining set B, and let (u*, v*) = ψ_Nash((u0, v0), im(L̂)) be the compromise given by the Nash bargaining function with status quo point (u0, v0) ∈ im(L̂). Assume that (u*, v*) is not one of the endpoints of the bargaining set. Then the slope of the line joining (u0, v0) and (u*, v*) is the negative of the slope of the bargaining set.

Proof. Suppose (u*, v*) lies on the line au + v = b for c1 < u < c2. Then it has to maximize the function (u0 − u)(v0 − v) over this set, i.e. u* must maximize (u0 − u)(v0 + au − b), and therefore

0 = d/du (u0 − u)(v0 + au − b) |_{u=u*} = −(v0 + au* − b) + a(u0 − u*) = −v0 − 2au* + b + au0,

therefore

u* = (b − v0 + au0)/(2a),  v* = (b + v0 − au0)/2.

The slope of the line from (u0, v0) to (u*, v*) is

(v* − v0)/(u* − u0) = ((b − v0 − au0)/2) / ((b − v0 − au0)/(2a)) = a.

The slope of the bargaining set is clearly −a, since au + v = b ⇔ v = −au + b.

Coming back to the point where we actually want to determine the Nash bargaining solution and the optimal threats when the biloss region is a polygon: we can now try to solve all the non-cooperative games (4.4) (one for each line segment in the bargaining set) and then check which of the computed solutions satisfy the condition in Lemma 4.2.3.

Example 4.2.3. Let us now reconsider Example 4.2.2, which looks symmetric at first glance; a closer inspection however reveals that player 2 is in a stronger position than player 1. The biloss operator is

L := ( (−1, −4)  (1, 4)
       (4, 1)  (−4, −1) )

and the biloss region is drawn in Figure x, where the bargaining set can be identified with

B := {(u, v) | u + v = −5, −4 ≤ u ≤ −1},

i.e. a = 1 and b = −5. Since the bargaining set is obviously one line, we can apply the method proposed before and see that the optimal threats are the equilibria of the non-cooperative game with biloss operator given by

L̃ := ( 3  −3
       3  −3 ) ∼ ( (3, −3)  (−3, 3)
                   (3, −3)  (−3, 3) ).

Since the entry 3 at position (1, 1) of the matrix above is the biggest in its row as well as the smallest in its column, we have that all strategies of the form (λs1 + (1 − λ)s2, s̃1) are NCE's, and the value w* corresponds to the biloss of any of these. Therefore w* = 3, and using (4.5) we have

(u*, v*) = (−1, −4),

and we see that it in fact gives more to the second player than to the first.

Can we now say that this is the Nash bargaining solution? It is, but it does not follow directly from the argumentation above: for the argument above we assumed that (u*, v*) is in the interior of B, but (−1, −4) is not. What we can say though is that in no case can the Nash bargaining solution lie in the interior of B, and therefore it must be either (−1, −4) or (−4, −1). The second one is unlikely, because we already saw that the second player is in a stronger position within this game. For a precise argument, assume that (−4, −1) were the Nash bargaining solution. The threat strategies leading to this value can be identified by using Lemma 4.2.3 as (λs1 + (1 − λ)s2, s̃2). It is easy to see that those strategies are no NCE strategies for the threat game. Therefore the Nash bargaining solution is (−1, −4).

4.3 N-Person Cooperative Games

Let us briefly reconsider the definition of an N-person game from chapter 3.

Definition 4.3.1. An N-person game Gn consists of a set N = {1, 2, ..., n} of players and

1. topological spaces S1, ..., Sn, the so-called strategy sets for players 1 to n,

2. a subset S(N) ⊂ S1 × ... × Sn, the so-called allowed or feasible multi strategies,

3. a (multi-)loss operator L = (L1, ..., Ln) : S1 × ... × Sn → Rn.

In this section we assume that S(N) = S1 × ... × Sn and that |Si| < ∞. We will think of the strategies as pure strategies and use mixed strategies, which can then be identified with simplices in the same way as in chapter 2. A coalition is a subset S ⊂ N which cooperates in the game. If S = {i1, ..., ik}, then by cooperating it can use jointly randomized strategies from the set ΔS := Δ_{S_{i1} × ... × S_{ik}}, and by implementing the strategy x̃ ∈ ΔS against the strategies ỹ of its opponents the coalition receives a joint loss of Σ_{i∈S} Li(x̃, ỹ). Writing Li(x̃, ỹ), we mean the value of the multi-linearly extended version of Li where the components are in the right order, i.e. player i's strategies stand at position i. We use the following notation:

XS := S_{i1} × ... × S_{ik}
Y_{N\S} := S_{j1} × ... × S_{j_{n−k}}

where N \ S = {j1, ..., j_{n−k}}. The worst that can happen for the coalition S is that its opponents also build a coalition N \ S. The minimal loss the coalition S can then guarantee for itself, i.e. the conservative value for coalition S, is given by

ν̃(S) = min_{x̃∈ΔS} max_{ỹ∈Δ_{N\S}} Σ_{i∈S} Li(x̃, ỹ).

Definition 4.3.2. Let Gn be an N-person game. We define the characteristic function ν of Gn via

ν : P(N) → [0, ∞),  S ↦ −ν̃(S),

with the convention that ν(∅) = 0.

The following lemma says that in order to compute the conservative value one only has to use pure strategies.

Lemma 4.3.1. In the situation above one has

ν̃(S) = min_{x∈XS} max_{y∈Y_{N\S}} Σ_{i∈S} Li(x, y).

Proof. Due to the fact that properly denoting the elements in the simplices ΔS and Δ_{N\S} requires a lot of indices, we do not prove the result here. Managing the complex notation, the proof is in fact a straightforward computation which only uses that the multi-loss operator used here is the multi-linear extension of the multi-loss operator for finitely many strategies.

Proposition 4.3.1. Let ν be the characteristic function of the N-person game Gn. If S, T ⊂ N with S ∩ T = ∅, then ν(S ∪ T) ≥ ν(S) + ν(T).

Proof. Since ν(S) = −ν̃(S), we can as well prove ν̃(S ∪ T) ≤ ν̃(S) + ν̃(T). Using Lemma 4.3.1 we have

ν̃(S ∪ T) = min_{x∈X_{S∪T}} max_{y∈Y_{N\(S∪T)}} Σ_{i∈S∪T} Li(x, y) ≤ min_{α∈XS} min_{β∈XT} max_{y∈Y_{N\(S∪T)}} Σ_{i∈S∪T} Li(α, β, y).

Hence for each α ∈ XS and β ∈ XT we have

ν̃(S ∪ T) ≤ max_{y∈Y_{N\(S∪T)}} Σ_{i∈S∪T} Li(α, β, y) ≤ max_{y∈Y_{N\(S∪T)}} Σ_{i∈S} Li(α, β, y) + max_{y∈Y_{N\(S∪T)}} Σ_{i∈T} Li(α, β, y).

Enlarging the sets over which the maxima are taken, and choosing in particular the α which minimizes the first summand on the right side and the β which minimizes the second one, we get

ν̃(S ∪ T) ≤ min_{α∈XS} max_{(β,y)∈Y_{N\S}} Σ_{i∈S} Li(α, β, y) + min_{β∈XT} max_{(α,y)∈Y_{N\T}} Σ_{i∈T} Li(α, β, y) = ν̃(S) + ν̃(T).

Definition 4.3.3. An N-person game Gn is called inessential if its characteristic function is additive, i.e. ν(S ∪ T) = ν(S) + ν(T) for all S, T ⊂ N s.t. S ∩ T = ∅. In this case one has ν(N) = Σ_{i=1}^n ν({i}).

The economical interpretation of an inessential game is that in such a game it does not pay to build coalitions, since a coalition cannot guarantee more to its members than what the individual members can achieve by acting for themselves without cooperating.

Definition 4.3.4. An N-person game Gn is called essential if it is not inessential.

Let us illustrate the concepts with the following example.

Example 4.3.1 (Oil market game). Assume there are three countries, which we think of as the players in our game.

1. Country 1 has oil, and its industry can use the oil to achieve a profit of a per unit.

2. Country 2 has no oil, but has an industry which can use the oil to achieve a profit of b per unit.

3. Country 3 also has no oil, but an industry which can use the oil to achieve a profit of c per unit.

We assume that a ≤ b ≤ c. The strategies of the players are as follows. Country 1 has three strategies:

s1 = keep the oil and use it for its own industry,
s2 = sell the oil to Country 2,
s3 = sell the oil to Country 3.

The only reasonable strategy of Country 2 and Country 3 is to buy the oil from Country 1 if it is offered to them, since without oil their industry does not work. Denoting with s̃ resp. ŝ the strategies of Country 2 resp. Country 3, the multi-loss operator L of the game is given as follows:

We have the following proposition : 112 . Two characteristic functions ν. 3}) As usually when working with numerical values of utility ( as is always the case when working with loss or multi-loss operators ) one wished the concept to be independent of the scale of utility. c). From this it is easy to compute the corresponding values for ν. Deﬁnition 4. 3}) a = ν({1}) b = ν({1.L(s1 . 2}) c = ν({1. symmetric. 0) ˜ ˆ L(s3 . 0) ˜ ˆ L(s2 .5. s. s. s. For this purpose one introduces the notion of strategic equivalence. i. 0.e.t. 3}) == ν({1. b. reﬂexive and transitive. ν are called strategiˆ cally equivalent if there exists c > 0 and ai ∈ RN s. 0.3. s) = (0. s) = (a. Two N -person games are called strategically equivalent if their characteristic functions are equivalent. One can check that strategical equivalence is in fact an equivalence relationship. in fact we have 0 = ν(∅) = ν({2}) = ν({3}) = ν({2. s) = (0. ∀S ∈ P(N ) one has ν ′ (S) = cν(S) + i∈S ai . 2. ˜ ˆ Clearly the strategies 2 and 3 for Country 1 do only make sense if it cooperate with the corresponding country and shares the losses ( wins ).

Proposition 4.3.2. Every essential N-person game Gn is strategically equivalent to an N-person game Ĝn with characteristic function ν̂ satisfying

ν̂(N) = 1,  ν̂({i}) = 0 for i = 1, ..., n.

Proof. Let ν be the characteristic function of Gn. We define ν̂ as follows:

ν̂(S) := c·ν(S) − c·Σ_{i∈S} ν({i}),  with c := (ν(N) − Σ_{i=1}^n ν({i}))^{−1}.

One can check that ν̂ satisfies the conditions stated in the proposition. Furthermore, it follows from the definition of a characteristic function that ν̂ is in fact the characteristic function of the game Ĝn which has the same strategies as Gn but with multi-loss operator given by L̂i = c · (Li + ν({i})) for i = 1, ..., n.

When the players join a coalition, they have to decide how they share their loss once the game is over. This leads to the definition of an imputation.

Definition 4.3.6. An imputation in an N-person game Gn with characteristic function ν is a vector x = (x1, ..., xn)⊤ ∈ Rn s.t.

Σ_{i=1}^n xi = ν(N) and xi ≥ ν({i}) for i = 1, ..., n.

We denote the set of all imputations of Gn, respectively of its characteristic function, with E(ν).

We interpret imputations as follows: xi is the i-th player's award (negative loss) after the game is played. From an economical point of view it is clear that Σ_{i=1}^n xi ≤ ν(N), because when the coalition N is implemented, all players work together and the sum of the individual awards cannot exceed the collective award. If Σ_{i=1}^n xi were strictly less than ν(N), then the players would be better off to work together using the strategies which implement ν(N), give xi to player i and share the difference ν(N) − Σ_{i=1}^n xi equally; then each player does strictly better. So the first condition in the definition of an imputation is in fact a Pareto condition. The second condition says that whichever coalition player i joins, he must do at least as well as if he would act alone. This is economically reasonable, since no one would enter a coalition if it did not pay for him.

Example 4.3.2. For the oil market game of Example 4.3.1 we have

E(ν) = {(x1, x2, x3)⊤ | x1 ≥ a, x2 ≥ 0, x3 ≥ 0, x1 + x2 + x3 = c}.

Can one imputation be better than another one? Assume we have two imputations x = (x1, ..., xn) and y = (y1, ..., yn). Then

Σ_{i=1}^n xi = ν(N) = Σ_{i=1}^n yi.

So an imputation cannot be better for everyone: if for one i we have xi < yi, then there is also a j s.t. xj > yj. It is however still possible that for a particular coalition x is better than y. This leads to the idea of domination of one imputation by another.

Definition 4.3.7. Let x, y be two imputations and S ⊂ N be a coalition. We say that x dominates y over S if

xi > yi for all i ∈ S and Σ_{i∈S} xi ≤ ν(S).

In this case we write x >_S y.

Definition 4.3.8. Let x and y be two imputations of an N-person game Gn. We say x dominates y if there exists a coalition S s.t. x >_S y. In this case we write x ≻ y.

The economical interpretation of the second condition above is that the coalition S has enough payoff to ensure its members the awards xi.

The next definition is about the core of an N-person game. We should distinguish the core defined in the context of cooperative N-person games from the core defined in section 1 (see Definition 1.2.9).

Definition 4.3.9. Let Gn be a cooperative N-person game with characteristic function ν. We define its core as the set of all imputations in E(ν) which are not dominated (over any coalition). Denoting the core with C(ν), this means that

C(ν) = {x ∈ E(ν) | there exists no y s.t. y ≻ x}.

Theorem 4.3.1. A vector x ∈ Rn is in the core C(ν) if and only if

1. Σ_{i=1}^n xi = ν(N),

2. Σ_{i∈S} xi ≥ ν(S) for all S ⊂ N.

Proof. Let us assume first that x ∈ Rn satisfies conditions 1.) and 2.). For S = {i}, condition 2.) implies that xi ≥ ν({i}), so together with condition 1.) it follows that x is in fact an imputation. Suppose now that x would be dominated by another imputation y. Then there exists

a coalition S s.t. y >_S x, i.e. y_i > x_i for all i ∈ S and ν(S) ≥ Σ_{i∈S} y_i. Condition 2.) would then imply

ν(S) ≥ Σ_{i∈S} y_i > Σ_{i∈S} x_i ≥ ν(S),

which of course is a contradiction. Therefore the imputation x is not dominated and hence belongs to the core C(ν).

Assume now on the other side that x ∈ C(ν). Then x is an imputation, so condition 1.) holds. Assume condition 2.) would not hold. Then there would exist S ≠ N s.t. Σ_{i∈S} x_i < ν(S). We define

ǫ := (ν(S) − Σ_{i∈S} x_i) / |S| > 0

and

y_i := x_i + ǫ for all i ∈ S,
y_i := ν({i}) + (ν(N) − ν(S) − Σ_{j∈N\S} ν({j})) / |N \ S| for all i ∈ N \ S.

Then Σ_{i=1}^n y_i = ν(N) and y_i ≥ ν({i}) for all i, since ν(N) = ν((N \ S) ∪ S) ≥ ν(N \ S) + ν(S), therefore ν(N) − ν(S) ≥ ν(N \ S) ≥ Σ_{j∈N\S} ν({j}), and x_i ≥ ν({i}) for i ∈ S. Hence y is an imputation. Moreover Σ_{i∈S} y_i = ν(S) and y_i > x_i for all i ∈ S. Therefore y >_S x and x is dominated. This is a contradiction to x ∈ C(ν), and hence x must satisfy condition 2.).

Let us remark that the core of an N-person cooperative game is always a convex and closed set. It has one disadvantage though: in many cases it is empty, as we will see next. This is in particular the case for essential constant sum games.

Definition 4.3.10. An N-person game is called a constant sum game if Σ_{i∈N} L^i ≡ c, where c is a constant and L^i denotes the loss operator of the i-th player.
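The two conditions of Theorem 4.3.1 are easy to check mechanically by enumerating all coalitions. The following Python sketch does this; the dictionary encoding of ν and the concrete values a ≤ b ≤ c for the oil market game of Example 4.3.1 are illustrative assumptions, not taken from the notes.

```python
from itertools import combinations

def in_core(x, nu, players, tol=1e-9):
    """Theorem 4.3.1: x lies in C(nu) iff sum_i x_i = nu(N) and
    sum_{i in S} x_i >= nu(S) for every coalition S."""
    if abs(sum(x[i] for i in players) - nu(frozenset(players))) > tol:
        return False                      # efficiency condition 1.) fails
    for r in range(1, len(players)):      # proper coalitions; S = N is condition 1.)
        for S in combinations(players, r):
            if sum(x[i] for i in S) < nu(frozenset(S)) - tol:
                return False              # coalition S blocks x
    return True

# Oil market game with illustrative values a <= b <= c (an assumption)
a, b, c = 1.0, 2.0, 4.0
nu = {frozenset(): 0.0, frozenset({1}): a, frozenset({2}): 0.0,
      frozenset({3}): 0.0, frozenset({1, 2}): b, frozenset({1, 3}): c,
      frozenset({2, 3}): 0.0, frozenset({1, 2, 3}): c}.__getitem__

print(in_core({1: 3.0, 2: 0.0, 3: 1.0}, nu, [1, 2, 3]))  # True
print(in_core({1: 2.0, 2: 1.0, 3: 1.0}, nu, [1, 2, 3]))  # False: S = {1,3} blocks
```

The True/False pattern agrees with the explicit description of the core of the oil market game computed later in this section.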

Lemma 4.3.5. If ν is the characteristic function of an N-person constant sum game, then for all S ⊂ N we have

ν(N \ S) + ν(S) = ν(N).

Proof. This follows from the definition of ν by applying the MiniMax Theorem of section 2.5 to the two person zero sum game where S_1 = Y_{N\S} and S_2 = X_S and the loss operator is given by

L = Σ_{i∈N\S} (L^i − c/N), where c = Σ_{i∈N} L^i.

Proposition 4.3.2. Let ν be the characteristic function of an essential N-person constant sum game. Then C(ν) = ∅.

Proof. Assume x ∈ C(ν). Then by condition 2.) in Theorem 4.3.1 and the previous lemma we have for any i ∈ N

Σ_{j≠i} x_j ≥ ν(N \ {i}) = ν(N) − ν({i}).

Since x is an imputation we also have x_i + Σ_{j≠i} x_j = ν(N). Combining the two gives ν({i}) ≥ x_i for all i ∈ N. Using that ν is an essential game we get that

Σ_{i=1}^n x_i ≤ Σ_{i=1}^n ν({i}) < ν(N),

which is a contradiction to x being an imputation.

Let us study the core of the oil-market game:

Example 4.3.3. By looking at Example 4.3.1 and Theorem 4.3.1 we see that x = (x_1, x_2, x_3)^⊤ ∈ C(ν) if and only if

1. x_1 ≥ a, x_2 ≥ 0, x_3 ≥ 0, x_1 + x_2 + x_3 = c,
2. x_1 + x_2 ≥ b, x_2 + x_3 ≥ 0, x_1 + x_3 ≥ c.

This however is the case if and only if x_2 = 0, x_1 + x_3 = c and x_1 ≥ b, and therefore

C(ν) = {(x, 0, c − x)^⊤ | b ≤ x ≤ c}.

The fact that in a lot of cases the core is just the empty set leads to the question which other solution concepts for cooperative N-person games are reasonable. We discuss two more, the so called stable sets and the Shapley value.

Definition 4.3.11. A stable set S(ν) of an N-person game with characteristic function ν is any subset S(ν) ⊂ E(ν) of imputations satisfying

1. if x, y ∈ S(ν) then neither x ≻ y nor y ≻ x (internal stability),
2. if z ∉ S(ν) then there is an x ∈ S(ν) s.t. x ≻ z (external stability).

Remark 4.3.1. We have C(ν) ⊂ S(ν) ⊂ E(ν) for any stable set S(ν), because undominated imputations must lie within any stable set. More precisely, C(ν) ⊂ ∩_{S(ν) stable} S(ν). In general the inclusion is proper. It was for a long time an open question whether all cooperative games contain stable sets. The answer to this question is no (Lucas 1968).

Exercise 4.3.1. Compute a stable set for the oil market game.

Definition 4.3.12. An N-person game is called simple if for all S ⊂ N we have that ν(S) ∈ {0, 1}. A minimum winning coalition S is one where ν(S) = 1 and ν(S \ {i}) = 0 for all i ∈ S.

The following proposition leads to various examples of stable sets within simple games.

Proposition 4.3.3. Let ν be the characteristic function of a simple game and S a minimum winning coalition. Then

V_S = {x ∈ E(ν) | x_i = 0 ∀ i ∉ S}

is a stable set.

One of the more famous solution concepts in N-person cooperative games is the so called Shapley value. It should be compared to the Nash bargaining solution within two person cooperative games.

Theorem 4.3.3 (Shapley). Denote with V the set of all characteristic functions ν : 2^N → [0, ∞). Then there exists exactly one function φ = (φ_i) : V → R^n which satisfies the following conditions:

1. Σ_{i=1}^n φ_i(ν) = ν(N) for all ν ∈ V,
2. φ_i(ν) = φ_{π(i)}(πν) for all π ∈ Perm(N), where πν denotes the characteristic function of the game which is constructed from ν by reordering the numbers of the players corresponding to the permutation π, and Perm(N) denotes the permutation group of N, i.e. the group of bijective maps π : N → N,
3. µ, ν ∈ V ⇒ φ(µ + ν) = φ(µ) + φ(ν).

This unique function is given by

φ_i(ν) = Σ_{S⊂N, i∈S} ((|S| − 1)!(n − |S|)! / n!) · (ν(S) − ν(S \ {i})).

φ(ν) is called the Shapley value of ν. We will not prove this theorem but instead motivate the formula and discuss the underlying idea. The idea of Shapley was as follows:

The players arrive one after another at the negotiation table, but they arrive in random order. Each time a new player arrives at the negotiation table the negotiations will be extended to include the newly arrived player. If, when player i arrives, the players S \ {i} are already sitting at the negotiation table, then the award for player i from the new negotiations should be what he brings in for the extended coalition, namely ν(S) − ν(S \ {i}). The probability that player i arrives when players S \ {i} are already sitting at the negotiation table is

(|S| − 1)!(n − |S|)! / n!.

The Shapley value can therefore be considered as the expected award for player i. For the oil market game we get the following:

Example 4.3.4. For the oil market game one computes

φ_1(ν) = (1/2)c + (1/3)a + (1/6)b, φ_2(ν) = (1/6)b − (1/6)a and φ_3(ν) = (1/2)c − (1/6)a − (1/3)b.
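Shapley's negotiation-table story translates directly into a small program: average each player's marginal contribution over all n! arrival orders. The concrete values of a, b, c below are again illustrative assumptions; the output can be compared with the closed-form expressions for the oil market game above.

```python
import math
from itertools import permutations

def shapley(players, nu):
    """Average each player's marginal contribution nu(S u {i}) - nu(S)
    over all n! arrival orders -- the negotiation-table idea above."""
    phi = {i: 0.0 for i in players}
    for order in permutations(players):
        seated = frozenset()
        for i in order:
            phi[i] += nu(seated | {i}) - nu(seated)
            seated = seated | {i}
    n_fact = math.factorial(len(players))
    return {i: v / n_fact for i, v in phi.items()}

# Oil market game with illustrative values a <= b <= c (an assumption)
a, b, c = 1.0, 2.0, 4.0
table = {frozenset(): 0.0, frozenset({1}): a, frozenset({2}): 0.0,
         frozenset({3}): 0.0, frozenset({1, 2}): b, frozenset({1, 3}): c,
         frozenset({2, 3}): 0.0, frozenset({1, 2, 3}): c}
phi = shapley([1, 2, 3], table.__getitem__)
print(phi)  # matches c/2 + a/3 + b/6, b/6 - a/6, c/2 - a/6 - b/3
```

By condition 1.) of the theorem the three values sum to ν(N) = c, which also serves as a quick consistency check.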

.. . x(·)). 5.. then the game is played with the chosen strategies and an output measured in loss or utility is determined by the rules of the game. in this chapter we will develop a concept of dynamic games which are played over a time interval [0.. T ] × M ap([0. We denote the i-th players strategies with Γi . T ] and in which the players can choose their strategies at time t depending on the information they obtained by playing the game up to this time.. γ N ) of Γ1 × . Rn ) → Rmi (t. N which can choose strategies γ i : [0. T ]. Furthermore we assume we have a function 121 . .1 Setup and Notation Assume we have the following ingredients : We have players “i” i = 1. As before an element (γ 1 . x(·)) → γ(t. × ΓN is called a multi-strategy ..Chapter 5 Differential Games So far we considered only games which are static in the way that ﬁrst the players choose their strategies..

f : R × R^n × R^{m_1} × ... × R^{m_N} → R^n, (t, x, u^1, ..., u^N) ↦ f(t, x, u^1, ..., u^N).

Definition 5.1.1. A multi-strategy (γ^1, ..., γ^N) ∈ Γ^1 × ... × Γ^N is called admissible for the initial state x_0 ∈ R^n if for all i = 1, ..., N

1. γ^i(t, x(·)) depends on x(·) only through the values x(s) for 0 ≤ s ≤ τ^i(t), where τ^i ∈ Map([0, T], [0, T]),
2. γ^i(t, x(·)) is piecewise continuous,
3. the following differential equation has a unique solution in x(·):

dx(t)/dt = f(t, x(t), γ^1(t, x(·)), ..., γ^N(t, x(·))), x(0) = x_0.

The function τ^i is called player i's information structure. Definition 5.1.1 has to be interpreted in the following way. The function f represents the rules of the game and, via the differential equation above, determines which strategies can actually be used by the players. The conditions concerning the information structures say how much information the players can use to choose their strategies at the corresponding time. We denote the set of admissible multi-strategies with Γ_adm. Clearly Γ_adm depends on the function f, the initial state x_0 and the information structures τ^i. Sometimes we use the notation u^i(t) = γ^i(t, x(·)), which has to be interpreted in the way that (γ^1, ..., γ^N) ∈ Γ_adm and the function x(·) is the unique solution of the differential equation above. The payoff in our differential game is measured in values of

utility (not in loss as in the previous chapter), and for a multi-strategy (γ^1, ..., γ^N) ∈ Γ_adm player i's utility is given by an expression of the form

J^i(γ^1, ..., γ^N) = ∫_0^T φ^i(t, x(t), u^1(t), ..., u^N(t)) dt + g^i(x(T)).

In this expression the time integral represents the utility taken from consumption over time, and the function g^i represents the utility which is determined by the final state of the game. In the following we will make use of the following technical assumptions:

1. For i = 1, ..., N the functions φ^i(t, ·, u^1, ..., u^N) and g^i(·) are continuously differentiable in the state variable x.
2. The function f is continuous in all its variables and also continuously differentiable in the state variable x.

Remark 5.1.1. If the function f is Lipschitz continuous, admissibility of strategies is a very mild condition, due to theorems from the theory of ordinary differential equations. In general though, in the theory of differential games one does not assume Lipschitz continuity of f.

The concept of Nash equilibria can now be directly applied within the context of differential games. We just repeat it, this time in the context of utility rather than loss:

Definition 5.1.2. An admissible multi-strategy (γ♯^1, ..., γ♯^N) ∈ Γ_adm is called a Nash equilibrium if

J^1(γ♯^1, ..., γ♯^N) ≥ J^1(γ^1, γ♯^2, ..., γ♯^N) ∀ γ^1 s.t. (γ^1, γ♯^2, ..., γ♯^N) ∈ Γ_adm
⋮
J^N(γ♯^1, ..., γ♯^N) ≥ J^N(γ♯^1, ..., γ♯^{N−1}, γ^N) ∀ γ^N s.t. (γ♯^1, ..., γ♯^{N−1}, γ^N) ∈ Γ_adm.

We denote the values on the left side with J♯^i and call (J♯^1, ..., J♯^N) the Nash outcome.

We discussed this equilibrium concept in detail. The question is again whether such equilibria exist. We no longer have compactness of the strategy sets and therefore cannot apply the theorems of chapter 2 and chapter 3; more techniques from functional analysis and differential equations are necessary. We will address this problem later and in the meantime introduce another equilibrium concept, which was also previously discussed in section 1.3: the so called Stackelberg equilibrium. We end this section by introducing the most important information structures; there are many more.

Definition 5.1.3. Player "i"'s information structure τ^i is called

1. open loop if τ^i(t) = 0 for all t, that is, player "i" does not use any information on the state of the game when choosing his strategies,
2. closed loop if τ^i(t) = t for all t, that is, player "i" uses all possible information he can gather by following the game,
3. ǫ-delayed closed loop if τ^i(t) = t − ǫ for t ∈ [ǫ, T] and τ^i(t) = 0 for t ∈ [0, ǫ).

5.2 Stackelberg Equilibria for 2 Person Differential Games

In the framework of differential games another equilibrium concept has proven successful: the concept of Stackelberg equilibria. It is normally applied within games where some players are in a better position than others. The dominant players are called the leaders and the sub-dominant players the followers. One situation where this concept is very convincing is when the players choose their strategies one after another and the player who makes the first move has an advantage. This interpretation however is not so convincing in continuous time, where players choose their strategies at each time, but one could think of one player having a larger information structure. From a mathematical point of view, however, the concept is successful because it leads to results.

In this course we will only consider Stackelberg equilibria for 2 person differential games, and we first illustrate the concept in a special case. Assume player "1" is the leader and player "2" is the follower. Suppose that player "2" chooses his strategies as a reaction to player "1"'s strategy, in the way that when player "1" chooses strategy γ^1, player "2" chooses a strategy from which he gets optimal utility given that player "1" plays γ^1. We assume for a moment that there is such a strategy and that it is unique. We denote this strategy with γ^2 and get a map

T : Γ^1 → Γ^2, γ^1 ↦ T(γ^1) = γ^2.

To maximize his own utility player "1" would then choose a strategy γ∗^1 which optimizes J^1(γ^1, T(γ^1)), i.e.

J^1(γ∗^1, T(γ∗^1)) ≥ J^1(γ^1, T(γ^1)) for all γ^1 ∈ Γ^1,

and γ∗^2 = T(γ∗^1) would be the optimal strategy for player "2" under this consideration. However, we had to make some assumptions on admissibility in the previous discussion, and in general the existence and uniqueness of the strategy γ^2 = T(γ^1) is not guaranteed.

Definition 5.2.1. The optimal reaction set R^2(γ^1) of player "2" to player "1"'s strategy γ^1 is defined by

R^2(γ^1) = {γ ∈ Γ^2 | J^2(γ^1, γ) ≥ J^2(γ^1, γ^2) ∀ γ^2 ∈ Γ^2 s.t. (γ^1, γ^2) ∈ Γ_adm}.

Definition 5.2.2. In a 2 person differential game with player "1" as a leader and player "2" as a follower, a strategy γ∗^1 ∈ Γ^1 is called a Stackelberg equilibrium solution for the leader if

min_{γ ∈ R^2(γ∗^1)} J^1(γ∗^1, γ) ≥ min_{γ ∈ R^2(γ^1)} J^1(γ^1, γ) for all γ^1 ∈ Γ^1,

where we assume that min ∅ = ∞ and ∞ ≥ ∞. We denote the left hand side with J∗^1 and call it the Stackelberg payoff for the leader, and for any γ∗^2 ∈ R^2(γ∗^1) we call the pair (γ∗^1, γ∗^2) a Stackelberg equilibrium solution and (J^1(γ∗^1, γ∗^2), J^2(γ∗^1, γ∗^2)) the Stackelberg equilibrium outcome of the game. Even if the Stackelberg solution for the leader and the optimal reaction for the follower are not unique, the Stackelberg equilibrium outcome is unique.

5.3 Some Results from Optimal Control Theory

One could say that optimal control problems are 1-person differential games, and most of the theory of differential games uses results from optimal control. We will therefore briefly consider the optimal control problem and state the Pontryagin principle, which gives necessary conditions for a control to be optimal. Suppose that the state of a dynamical system evolves according to a differential equation

dx(t)/dt = f(t, x(t), u(t)), x(0) = x_0,

where u(·) denotes a control function which can be chosen within the set of admissible controls, i.e. maps u : [0, T] → R^m s.t. the differential equation from above has a unique solution. The problem in optimal control theory is to choose u in a way that it maximizes a certain functional

J(u) = ∫_0^T φ(t, x(t), u(t)) dt + g(x(T)).

We assume that the functions f, φ and g satisfy similar conditions as in section 5.1. In the economic literature a pair (x(·), u(·)) which satisfies the differential equation from above is called a program, and an optimal program (x∗(·), u∗(·)) is one that maximizes J. The following principle is known as the Pontryagin principle and gives necessary conditions for a program to be an optimal program.

Theorem 5.3.1. Suppose (x∗(·), u∗(·)) is an optimal program. Then there exists an R^n-valued function p : [0, T] → R^n, called the costate vector (or sometimes multiplier function), s.t. if we define the function

H(t, x, u, p) = φ(t, x, u) + ⟨p, f(t, x, u)⟩

on [0, T] × R^n × R^m × R^n, then the following conditions hold for i = 1, ..., n:

dx∗_i(t)/dt = ∂H/∂p_i (t, x∗(t), u∗(t), p(t)) = f_i(t, x∗(t), u∗(t)), x_i(0) = x_{i0},
dp_i(t)/dt = −∂H/∂x_i (t, x∗(t), u∗(t), p(t)),
p_i(T) = ∂g/∂x_i (x∗(T)),

and, if U_adm ⊂ R^m denotes the set of vectors u s.t. the constant control u(·) ≡ u is admissible, then

H(t, x∗(t), u∗(t), p(t)) = max_{u ∈ U_adm} H(t, x∗(t), u, p(t))

for almost all t.
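To see the Pontryagin principle at work, here is a hedged toy example (an illustration, not from the notes): maximize J(u) = ∫_0^1 −u(t)²/2 dt + x(1) subject to dx/dt = u, x(0) = 0. Here H = −u²/2 + p·u, the costate equation gives dp/dt = −∂H/∂x = 0 with p(1) = g'(x(1)) = 1, so p ≡ 1, and maximizing H over u yields u∗ ≡ p = 1. A brute-force scan over constant controls confirms this prediction:

```python
def J(u_const, n=10000):
    """J(u) = integral_0^1 -u^2/2 dt + x(1) for dx/dt = u, x(0) = 0,
    evaluated for a constant control with a forward Euler scheme."""
    dt = 1.0 / n
    x, running = 0.0, 0.0
    for _ in range(n):
        running += -0.5 * u_const * u_const * dt  # running utility term
        x += u_const * dt                         # state dynamics dx/dt = u
    return running + x                            # plus terminal reward g(x(1)) = x(1)

# Pontryagin predicts u* = 1; a scan over constant controls agrees.
candidates = [k / 10 for k in range(-20, 21)]
best = max(candidates, key=J)
print(best)  # 1.0
```

The maximal value J(1) = −1/2 + 1 = 1/2 matches the hand computation.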

If one knows from a theoretical consideration that there must exist an optimal program, then the necessary conditions above help to determine it. The method for finding the optimal control now works basically as when using the Lagrange multiplier method in standard calculus. We will illustrate in the following example from economics how the costate vector can be interpreted.

We consider an economy with a constant labour force and capital K(t) at time t ∈ [0, T], where the labour force can use some technology F in the way that it produces a product worth Y(t) = F(K(t)), where F(0) = 0 (this basically means that you cannot produce anything out of nothing). For the function F we assume that it has a "decreasing returns to scale" property, i.e.

dF(K)/dK > 0, d²F/dK² < 0,

which from an economic point of view is very reasonable and should be interpreted in the way that one can produce more given more capital, but that the effectivity becomes less the more capital is used in the production. We assume that the members of our economy consume some of the capital, and the rate of consumption at time t will be denoted with C(t). Furthermore we assume that the capital depreciation is given by the constant µ. Then the evolution of capital in our economy is given by the following differential equation:

dK(t)/dt = F(K(t)) − C(t) − µK(t), K(0) = K_0,

where K_0 denotes the initial capital.
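The capital dynamics can be simulated with a simple Euler scheme. The technology F(K) = √K below is one hypothetical choice satisfying F(0) = 0, F′ > 0, F″ < 0; in the degenerate case C ≡ 0, µ = 0 the equation dK/dt = √K has the closed-form solution K(t) = (√K_0 + t/2)², which the scheme should reproduce:

```python
import math

def simulate_capital(K0, C, mu, T=1.0, n=100000, F=math.sqrt):
    """Forward Euler scheme for dK/dt = F(K) - C - mu*K with a constant
    consumption rate C; F = sqrt is an illustrative concave technology."""
    dt = T / n
    K = K0
    for _ in range(n):
        K += (F(K) - C - mu * K) * dt
    return K

# Sanity check against the closed form for C = 0, mu = 0, K0 = 1:
approx = simulate_capital(K0=1.0, C=0.0, mu=0.0)
exact = (1.0 + 0.5) ** 2  # K(1) = (sqrt(1) + 1/2)^2 = 2.25
print(approx)  # close to 2.25
```

With positive consumption and depreciation the same routine shows how capital is eroded, which is the trade-off the Pontryagin conditions below balance.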

Let u be the utility function which measures the utility taken from consumption. As always for utility functions we assume that

du(C)/dC > 0, d²u(C)/dC² < 0.

The members of our economy also benefit from the final capital stock, and this benefit is given by g(K(T)), where g is another utility function. The total utility from choosing the consumption C(·) over [0, T] is then given by

W(C(·)) = ∫_0^T u(C(t)) dt + g(K(T)).

Our economy would now like to choose C(·) in a way that it maximizes this utility. Let us apply the Pontryagin principle. The Hamilton function for this problem is given by

H(t, K, C, p) = u(C) + p · (F(K) − C − µK).

The Pontryagin principle says that the optimal consumption rate C∗(·) satisfies

0 = ∂H/∂C (t, K∗(t), C∗(t), p(t)) = du(C∗(t))/dC − p(t).

Furthermore

dp(t)/dt = −∂H/∂K (t, K∗(t), C∗(t), p(t)) = −p(t) · dF(K∗(t))/dK + µp(t),
p(T) = ∂g(K∗(T))/∂K.

One can show that the function p satisfies

p(s) = ∂/∂K (∫_s^T u(C∗(t)) dt + g(K∗(T))) |_{K=K∗(s)},

where C∗ is considered to be a function of K∗ in the way that C∗(t) =

C∗(t, K∗(t)). This representation can be obtained when solving for C∗(·). The term on the right side can be interpreted as the increase in utility within the time interval [s, T] per unit of capital. In the economic literature p(s) is therefore called the shadow price of capital.

5.4 Necessary Conditions for Nash Equilibria in N-person Differential Games

The following theorem gives necessary conditions for a multi-strategy in a differential game to be a Nash equilibrium solution. It heavily relies on Theorem 5.3.1.

Theorem 5.4.1. Consider an N-person differential game as formulated in the beginning of this chapter and let (γ♯^1, ..., γ♯^N) be a Nash equilibrium solution. Assume γ♯^i(t, x(·)) depends on x(·) only through x(t) and this dependence is C^1-differentiable. Then there exist N costate vectors p^i(t) ∈ R^n and N Hamilton functions

H^i(t, x, u^1, ..., u^N, p^i) = φ^i(t, x, u^1, ..., u^N) + ⟨p^i, f(t, x, u^1, ..., u^N)⟩, i = 1, ..., N,

s.t. the following conditions are satisfied for k = 1, ..., n and i = 1, ..., N:

dx♯(t)/dt = f(t, x♯(t), u♯^1(t), ..., u♯^N(t)), x♯(0) = x_0,

dp^i_k(t)/dt = −∂H^i/∂x_k (t, x♯(t), u♯^1(t), ..., u♯^N(t), p^i(t)) − Σ_{j≠i} Σ_{l=1}^{m_j} ∂H^i/∂u^j_l (t, x♯(t), u♯^1(t), ..., u♯^N(t), p^i(t)) · ∂γ^j_l/∂x_k (t, x♯(t)),

p^i_k(T) = ∂g^i/∂x_k (x♯(T)).

Furthermore, u♯^i(t) = γ♯^i(t, x♯(t)) maximizes

u ↦ H^i(t, x♯(t), u♯^1(t), ..., u♯^{i−1}(t), u, u♯^{i+1}(t), ..., u♯^N(t), p^i(t))

for all u ∈ R^{m_i} s.t. the constant control u(t) ≡ u is admissible.

Bibliography

[Au] Aubin: Mathematical Methods of Game and Economic Theory
[Jo] Jones: Game Theory, Mathematical Models of Conflict
[Th] Thomas: Games, Theory and Applications
[Ow] Owen: Game Theory
[BO] Border: Fixed Point Theorems with Applications to Economics and Game Theory
[Du] Dugatkin: Game Theory and Animal Behavior
[Ma] Maynard-Smith: Evolutionary Game Theory
