
STANFORD UNIVERSITY
DEPT. OF MANAGEMENT SCIENCE AND ENGINEERING
WINTER 2000
MS&E 241: ECONOMIC ANALYSIS

PRINCIPLES OF GAME THEORY

The lecture notes draw heavily on the following sources:
1. Fudenberg & Tirole, Game Theory (MIT Press, 1991).
2. Gibbons, Game Theory for Applied Economists (Princeton University Press, 1992).
LECTURE 1
Game theory is the study of the strategic interaction (conflict, cooperation, etc.) of two or
more decisionmakers, or "players," who are conscious that their actions affect each other.
When the only two publishers in a city choose prices for their newspapers, aware
that their sales are determined jointly, they are players in a game with each other. They are
not in a game with the readers who buy their newspapers, because each reader ignores his
or her effect on the publisher. Game theory is not useful or interesting when decisions are
made that ignore the reactions of others or treat them as impersonal market forces.
The following are examples of games:
1. OPEC members choosing their annual output. OPEC members are playing a game
because, for instance, Saudi Arabia knows that Kuwait's oil output is based on Kuwait's
forecast of Saudi output, and the output from both countries matters to the world price.
2. General Motors purchasing steel from USX. A significant portion of American trade in
steel is between GM and USX, and both companies realize that the quantities traded
between them affect the price. One wants the price high, the other wants it low, so that
this
is a game with conflict between the players.

3. Two manufacturers, one of nuts and one of bolts, deciding whether to use metric or
American standards. The nut and bolt manufacturers are not in conflict, but the actions of
one affect the desired actions of the other, so that the situation is nevertheless a game.
4. A board of directors setting up a stock option plan for the CEO. The board chooses a
stock option plan strategically, anticipating the effect of the plan on the actions of the
CEO.
[1] Definitions and Notation
[1.1] Structure of a typical game
The essential elements of a game are players, actions, information, strategies, outcomes,
payoffs, and equilibria. The players, actions and outcomes are collectively referred to as
the rules of the game. The game theorist's objective is to use the rules of the game to
predict the equilibrium outcome of the game.
1. The players are the individuals who make decisions. Each player chooses actions
to maximize his or her utility. In the OPEC example, we may specify the players to
be Saudi Arabia and Other Producers. An individual's investment decision problem
may, for instance, be modelled as a two-person game between the investor and "the
market," or "Nature." Nature is a pseudo-player who takes actions at specified
points in the game according to a specified probability distribution. We may, for
instance, assume that Nature "draws" the return on the market portfolio from a
lognormal distribution with specified parameters. In the OPEC example, Nature
may, at the beginning of the game, randomly decide whether oil demand will be
weak or strong, for instance with probabilities of 30 and 70 percent, respectively.
2. An action or move by player i, denoted ai, is a choice Player i can make whenever
she is called on to move or act. Player i's action set, Ai = {ai}, is the entire set of
actions available to her. An investment decisionmaker who has to decide what
fraction, α, to invest in the market portfolio, may have an action set Ai = [0, 1], i.e.,
α ∈ [0, 1]. An action profile specifies the actions taken by each of n players at a
given point in time. An action profile is an ordered set a = {aj, j = 1, 2, ..., n}; it
specifies that Player 1 took action a1, Player 2 action a2, etc.

In the investment example, if the Investor chooses α = 0.3, and Nature
"chooses" Rm = 10%, then a = {0.3; 10%}. In the OPEC example, the action set for
both players may be the set of output level decisions, namely {High, Low}.
3. Information in a game is modelled using the concept of the information set. We
will define it in a mathematically more precise way later, but for now think of a
player's information set as her knowledge at a particular point in time of the values
of different variables. The elements of an information set are the different values the
player thinks are possible. If the information set has many elements, then there are
many values the player cannot rule out. If it has one element, she knows the value
precisely. If, for instance, the player's information on the value of the Dow Jones the
next day is that it will be somewhere between zero and infinity, then the player has
very little information, compared to knowing the exact future value of the Dow. A
player's information set also includes information such as what actions have been
taken by other players.
Suppose, for instance, in the OPEC example, after Nature moves only Saudi
Arabia knows whether oil demand is weak or strong. Other producers do not know
and, hence, cannot rule out either possibility. The information sets are Saudi Arabia:
{Strong} or {Weak}, depending on demand. Other Producers: {Strong, Weak}.
4. A strategy, si, is a rule that tells player i what action to take at each instance of the
game, given her information set. si is a complete set of instructions for Player i that
tells her what actions to take in every conceivable situation. Hence, strategies, unlike
actions, are generally unobservable. One possible strategy for Saudi Arabia in the
OPEC example is "Produce Low if demand is Weak, otherwise produce High."
Player i's strategy set or strategy space, Si = {si} is the set of strategies available to
her. A strategy profile s = {sj, j = 1, 2, ..., n} is an ordered set consisting of one
strategy for each of the n players in the game.
5. By Player i's payoff, πi(s1, s2, ..., sn), we mean either:
(i) Actual payoff: The utility Player i receives after all players and Nature have
picked their strategies and the game has ended. In the two-player case, U1(s1, s2) is
the payoff to Player 1 if the players choose strategies (s1, s2). If, for instance, the
investor chooses α = 0.5, and the market "chooses" a stock market return of 5% and a
Government security return of 3%, the payoff to the investor will be a return of 4%.
The investor's payoff function is αRm + (1-α)r (a short computational sketch of this
example appears at the end of this subsection). In the OPEC example, we may

define the payoffs to the players as the sums of their oil revenues over two years of
production.
or
(ii) Expected payoff: The expected utility Player i receives as a function of the
strategies chosen by herself and the other players.
Two additional definitions related to Information.
Information is common knowledge if it is known to all players, if each player knows
that all the players know it, if each player knows that all the players know that all the
players know it, and so forth ad infinitum.
In a game of complete information, all players know the rules of the game, i.e., each
player knows his or her own payoffs and strategies and those of the other players.
6. The outcome is the set of interesting elements that the modeller picks from the
values of the actions, payoffs, and other variables after the game is played out. In the
OPEC example, we may be interested in the quantities of oil supplied, the state of
demand, the resulting revenues and market price, etc.
7. An equilibrium, s* = (s1*, s2*, ..., sn*), is a strategy profile consisting of a best
strategy for each of the n players of the game. For each i, si* is Player i's optimal
strategy, given the other players' strategies, (s1*, s2*, ..., si-1*, si+1*, ..., sn*).
An equilibrium outcome is the set of outcomes that would result from the optimal
strategies of the players.
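To make these definitions concrete, the following is a minimal Python sketch of the
investment example above. The function name and the particular numbers used for
Nature's draws are illustrative assumptions, not part of the formal definitions.

    # Investment example (illustrative numbers). The investor's action is the
    # fraction alpha in [0, 1] placed in the market portfolio; Nature's "action"
    # is the market return Rm.

    def investor_payoff(alpha, Rm, r):
        # Actual payoff: alpha*Rm + (1 - alpha)*r
        return alpha * Rm + (1 - alpha) * r

    # Action profile {alpha = 0.5; Rm = 5%} with a government security return of 3%:
    print(round(investor_payoff(0.5, 0.05, 0.03), 4))   # 0.04, i.e., a 4% return

    # Expected payoff when Nature draws Rm = 10% or 2% with equal probability
    # (purely illustrative probabilities):
    print(round(0.5 * investor_payoff(0.5, 0.10, 0.03)
                + 0.5 * investor_payoff(0.5, 0.02, 0.03), 4))   # 0.045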
[1.2] Representation of a Game
There are two ways of describing a game: (1) Strategic form (or, "normal form"),
and (2) Extensive form. In popular terms, the strategic-form representation can be
described as a "payoff diagram", and the extensive form as a "game tree."
Strategic Form
[1.2.1] Definition
The strategic-form representation of an n-player game
specifies (1) the players in the game, (2) the players' strategy spaces S1, ..., Sn
(strategies available to each player) and (3) their payoff functions, U1, ..., Un.
We denote this game by G = {S1, ..., Sn; U1, ..., Un}.

In the strategic-form representation of a game, each player simultaneously chooses a
strategy, and the combination of strategies chosen by the players determines each
player's payoff.
Examples
(1) Matching Pennies (Varian, pp. 260, 261; Luenberger, pp. 269, 270).
In this game two players each have a coin which they can display either heads-up or
tails-up. If the coins match, Player 1 gets a payoff of -1 and Player 2 gets +1. If the coins
do not match, the payoffs reverse: Player 1 gets +1 and Player 2 gets -1. Each
player's strategy space is {Heads, Tails}. The strategic-form representation of the
game is as shown in the diagram below.

                              Player 2
                           H          T
     Player 1   H       -1, 1       1, -1
                T        1, -1     -1, 1

(2) Prisoner's Dilemma


Two suspects are arrested and charged with a crime. The police lack sufficient
evidence to convict the suspects, unless at least one confesses. The police hold them
in separate cells and explain the consequences that will follow from the actions they
could take. If neither confesses, then both will be convicted of a minor offense and
sentenced to one month in jail. If both confess then both will be sentenced to jail for
six months. Finally, if one confesses but the other does not, then the confessor will
be released immediately, but the other will be sentenced to nine months in jail - six
for the crime and another three for obstructing justice.
The strategic form of the game can be represented in the form of the
following payoff matrix.

                                  Prisoner 2
                            Cooperate      Defect
     Prisoner 1  Cooperate    -1, -1       -9, 0
                 Defect        0, -9       -6, -6

Although we stated that in a strategic-form game the players choose their strategies
simultaneously, this does not imply that the parties necessarily act simultaneously: it
suffices that each chooses his or her action without knowledge of the other's choices,
as would be the case, for instance, if the prisoners reached decisions at arbitrary
times while in their separate cells.
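As a concrete illustration of how a strategic-form game G = {S1, S2; U1, U2} can be
stored and queried, here is a minimal Python sketch of the Prisoner's Dilemma above.
The encoding (a dictionary keyed by strategy profiles) and the variable names are
illustrative choices, not part of the formal definition.

    # Illustrative encoding of the Prisoner's Dilemma in strategic form.
    # Keys are strategy profiles (s1, s2); values are the payoff pairs (U1, U2).
    S1 = S2 = ["Cooperate", "Defect"]
    U = {
        ("Cooperate", "Cooperate"): (-1, -1),
        ("Cooperate", "Defect"):    (-9,  0),
        ("Defect",    "Cooperate"): ( 0, -9),
        ("Defect",    "Defect"):    (-6, -6),
    }

    # Payoffs when Prisoner 1 defects and Prisoner 2 cooperates:
    print(U[("Defect", "Cooperate")])   # (0, -9)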
Extensive Form
[1.2.2] Definition
The extensive-form representation of an n-player game specifies (1) the players in the
game, (2) when each player has the move, (3) what each player can do when he or she
has the move, (4) what each player knows when he or she has the move, and (5) the
payoff received by each player for each combination of moves that could possibly be
chosen by the players.
Example
Consider the following two-stage game:
1. Player 1 chooses an action a1 from the feasible set A1 = {L, R}.
2. Player 2 observes a1 and then chooses an action a2 from the set A2 = {L', R'}.
3. Payoffs are U1(a1, a2) and U2(a1, a2), as shown in the extensive form (game tree)
below.

Extensive form (game tree):
[Game tree: Player 1 moves first, choosing L or R; Player 2 observes this choice and
then chooses L' or R'. The payoffs (to Player 1, to Player 2) at the four terminal nodes
are: (L, L') -> 3, 1;  (L, R') -> 1, 2;  (R, L') -> 2, 1;  (R, R') -> 0, 0.]

We now turn to the strategic-form representation of the game. Note that in the payoff
matrix of the strategic form representation, the strategy space of each player must be
specified, i.e., each row (Player 1) or column (Player 2) corresponds to a particular
strategy, not a particular action.
Recall that a strategy is a complete specification of what the player will do in each
contingency in which she will be called on to act. Player 2 has two such contingencies,
namely (i) after Player 1 has played L, and (ii) after Player 1 has played R. Hence, a
strategy for Player 2 consists of a two-element vector, (s21, s22), where s21 specifies what
Player 2 will do if Player 1 plays L, and s22 specifies Player 2's action following Player 1's
play of R. Player 2's strategy space can be represented:
{(s21, s22): s21 ∈ {L', R'}, s22 ∈ {L', R'}} = {(L', L'), (L', R'), (R', L'), (R', R')}.
The strategic form representation is therefore as follows:

Strategic Form

                                      Player 2
                    (L', L')    (L', R')    (R', L')    (R', R')
     Player 1   L     3, 1        3, 1        1, 2        1, 2
                R     2, 1        0, 0        2, 1        0, 0
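The derivation of the strategic form from the extensive form can also be carried out
mechanically. Below is a minimal Python sketch (with illustrative names) that enumerates
Player 2's contingent strategies and fills in each cell of the payoff matrix from the
terminal payoffs of the game tree.

    from itertools import product

    # Terminal payoffs of the extensive form: (a1, a2) -> (U1, U2).
    terminal = {
        ("L", "L'"): (3, 1),
        ("L", "R'"): (1, 2),
        ("R", "L'"): (2, 1),
        ("R", "R'"): (0, 0),
    }
    A1 = ["L", "R"]
    A2 = ["L'", "R'"]

    # A strategy for Player 2 is a pair (s21, s22): the action taken after L
    # and the action taken after R, respectively.
    S2 = list(product(A2, repeat=2))   # [(L',L'), (L',R'), (R',L'), (R',R')]

    # Build the strategic-form payoff matrix: Player 1 plays a1, and Player 2's
    # strategy prescribes s21 if a1 = L and s22 if a1 = R.
    for a1 in A1:
        row = [terminal[(a1, s21 if a1 == "L" else s22)] for (s21, s22) in S2]
        print(a1, row)
    # L [(3, 1), (3, 1), (1, 2), (1, 2)]
    # R [(2, 1), (0, 0), (2, 1), (0, 0)]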

[2] Solving a Game Theoretic Problem


[2.1] Iterated Elimination of Strictly Dominated Strategies
In the Prisoner's Dilemma game, playing "defect" is a dominant strategy: whatever
the other player's strategy, each player is better off defecting. Playing "cooperate" is a
dominated strategy, for the opposite reason. If Prisoner 2 cooperates, then Prisoner
1 gets -1 from cooperating and 0 from defecting. If Prisoner 2 defects, then Prisoner
1 gets -9 from cooperating and -6 from defecting. Hence, whatever strategy Prisoner
2 pursues, Prisoner 1 is better off defecting. Defecting is therefore a dominant
strategy.
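The dominance argument above can also be checked mechanically. The following is a
minimal Python sketch; the encoding mirrors the payoff matrix given earlier, and the
variable names are illustrative.

    # Prisoner 1's payoffs, indexed by (Prisoner 1's strategy, Prisoner 2's strategy).
    U1 = {("Cooperate", "Cooperate"): -1, ("Cooperate", "Defect"): -9,
          ("Defect",    "Cooperate"):  0, ("Defect",    "Defect"): -6}

    # "Defect" strictly dominates "Cooperate" if it yields a strictly higher payoff
    # against every strategy Prisoner 2 might choose.
    defect_dominates = all(U1[("Defect", s2)] > U1[("Cooperate", s2)]
                           for s2 in ["Cooperate", "Defect"])
    print(defect_dominates)   # True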
Definition
In the strategic-form game G = {S1, ..., Sn; U1, ..., Un}, let s'i and
s"i be feasible strategies for player i (i.e., s'i and s"i are members of Si). Strategy s'i is
strictly dominated by strategy s"i if, for each feasible combination of the other
players' strategies, i's payoff from playing s'i is strictly less than i's payoff from
playing s"i:

Ui(s1, ..., si-1, s'i, si+1, ..., sn) < Ui(s1, ..., si-1, s"i, si+1, ..., sn) for each
(s1, ..., si-1, si+1, ..., sn) that can be constructed from the other players' strategy
spaces S1, ..., Si-1, Si+1, ..., Sn.
A rational player will never play a strictly dominated strategy, because there is no
belief that the player could hold about the strategies the other players will choose
such that it would be optimal to play a strictly dominated strategy.
Example: Consider the following game.

                              Player 2
                       Left      Middle      Right
     Player 1   Up     1, 0       1, 2        0, 1
                Down   0, 3       0, 1        2, 0

Player 1 has two strategies and Player 2 has three: S1 = {Up, Down} and S2 = {Left,
Middle, Right}. For Player 1, neither Up nor Down is strictly dominated: Up is
better than Down if Player 2 plays Left, but Down is better than Up if Player 2 plays Right.
For Player 2, however, Right is strictly dominated by Middle. Hence, a rational
Player 2 will not play Right. Thus, if Player 1 knows that Player 2 is rational, then
Player 1 can eliminate Right from Player 2's strategy space. That is, if Player 1
knows that Player 2 is rational, then Player 1 can play the following game as if it
were the original game:

                              Player 2
                       Left      Middle
     Player 1   Up     1, 0       1, 2
                Down   0, 3       0, 1

Down is now strictly dominated by Up for Player 1. Hence, if Player 1 is rational
(and Player 1 knows that Player 2 is rational, so that indeed this form of the game
applies) then Player 1 will not play Down. Player 2 can then eliminate Down from
Player 1's strategy space, leaving the following game:

                              Player 2
                       Left      Middle
     Player 1   Up     1, 0       1, 2

But now Left is strictly dominated by Middle for Player 2, leaving (Up, Middle) as
the outcome of the game.
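The elimination argument just carried out by hand can be written as a short routine.
The following is a minimal Python sketch (illustrative names, pure strategies only) of
iterated elimination of strictly dominated strategies, applied to the 2x3 game above;
it recovers (Up, Middle).

    # Payoffs of the 2x3 example: (s1, s2) -> (U1, U2).
    U = {
        ("Up",   "Left"): (1, 0), ("Up",   "Middle"): (1, 2), ("Up",   "Right"): (0, 1),
        ("Down", "Left"): (0, 3), ("Down", "Middle"): (0, 1), ("Down", "Right"): (2, 0),
    }
    S1 = ["Up", "Down"]
    S2 = ["Left", "Middle", "Right"]

    def dominated_1(s, S1, S2):
        # Is Player 1's strategy s strictly dominated by some other strategy in S1?
        return any(all(U[(t, s2)][0] > U[(s, s2)][0] for s2 in S2)
                   for t in S1 if t != s)

    def dominated_2(s, S1, S2):
        # Is Player 2's strategy s strictly dominated by some other strategy in S2?
        return any(all(U[(s1, t)][1] > U[(s1, s)][1] for s1 in S1)
                   for t in S2 if t != s)

    changed = True
    while changed:
        changed = False
        for s in [x for x in S1 if dominated_1(x, S1, S2)]:
            S1.remove(s); changed = True
        for s in [x for x in S2 if dominated_2(x, S1, S2)]:
            S2.remove(s); changed = True

    print(S1, S2)   # ['Up'] ['Middle']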
Example: Second-Price Auction

(1) The approach is based on the assumption that players are rational and that mutual
rationality is common knowledge;
(2) Often a game has no strictly dominated strategies, in which case this approach cannot
predict the outcome of the game. The concept of Nash equilibrium is a stronger
solution concept than iterated elimination of strictly dominated strategies: Players'
strategies in a Nash equilibrium always survive iterated elimination of strictly
dominated strategies, but the converse is not true. Later we will study refinements of
the concept of Nash equilibrium.
