Microeconomic Theory 3
Course Notes
by
LutzAlexander Busch
Dept.of Economics
University of Waterloo
Revised 2004
c LutzAlexander Busch, 1994,1995,1999,2001,2002,2004
Do not quote or redistribute without permission
Contents
1 Preliminaries 1
1.1 About Economists . . . . . . . . . . . . . . . . . . . . . . . . 2
1.2 What is (Micro—)Economics? . . . . . . . . . . . . . . . 2
1.3 Economics and Sex . . . . . . . . . . . . . . . . . . . . . . . . 5
Notation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
2 Review 9
2.1 Modelling Choice Behaviour
1
. . . . . . . . . . . . . . . . . . 9
2.1.1 Choice Rules . . . . . . . . . . . . . . . . . . . . . . . 10
2.1.2 Preferences . . . . . . . . . . . . . . . . . . . . . . . . 11
2.1.3 What gives? . . . . . . . . . . . . . . . . . . . . . . . . 13
2.2 Consumer Theory . . . . . . . . . . . . . . . . . . . . . . . . . 15
Special Utility Functions . . . . . . . . . . . . . . . . . . . . . 18
2.2.1 Utility Maximization . . . . . . . . . . . . . . . . . . . 20
Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
2.2.2 Expenditure Minimization and the Slutsky equation . . 25
2.3 General Equilibrium . . . . . . . . . . . . . . . . . . . . . . . 27
1
This material is based on MasColell, Whinston, Green, chapter 1
i
ii LA. Busch, Microeconomics May 2004
2.3.1 Pure Exchange . . . . . . . . . . . . . . . . . . . . . . 27
2.3.2 A simple production economy . . . . . . . . . . . . . . 33
2.4 Review Problems . . . . . . . . . . . . . . . . . . . . . . . . . 35
3 Intertemporal Economics 39
3.1 The consumer’s problem . . . . . . . . . . . . . . . . . . . . . 39
3.1.1 Deriving the budget set . . . . . . . . . . . . . . . . . 40
No Storage, No Investment, No Markets . . . . . . . . . . . . 40
Storage, No Investment, No Markets . . . . . . . . . . . . . . 40
Storage, Investment, No Markets . . . . . . . . . . . . . . . . 41
Storage, No Investment, Full Markets . . . . . . . . . . . . . . 42
Storage, Investment, Full Markets . . . . . . . . . . . . . . . . 43
3.1.2 Utility maximization . . . . . . . . . . . . . . . . . . . 44
3.2 Real Interest Rates . . . . . . . . . . . . . . . . . . . . . . . . 47
3.3 Riskfree Assets . . . . . . . . . . . . . . . . . . . . . . . . . . 47
3.3.1 Bonds . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
3.3.2 More on Rates of Return . . . . . . . . . . . . . . . . . 49
3.3.3 Resource Depletion . . . . . . . . . . . . . . . . . . . . 51
3.3.4 A Short Digression into Financial Economics . . . . . . 52
3.4 Review Problems . . . . . . . . . . . . . . . . . . . . . . . . . 54
4 Uncertainty 57
4.1 Risk Aversion . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
4.1.1 Comparing degrees of risk aversion . . . . . . . . . . . 65
Contents iii
4.2 Comparing gambles with respect to risk . . . . . . . . . . . . 67
4.3 A ﬁrst look at Insurance . . . . . . . . . . . . . . . . . . . . . 69
4.4 The StatePreference Approach . . . . . . . . . . . . . . . . . 72
4.4.1 Insurance in a State Model . . . . . . . . . . . . . . . . 74
4.4.2 Risk Aversion Again . . . . . . . . . . . . . . . . . . . 76
4.5 Asset Pricing . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
4.5.1 Diversiﬁcation . . . . . . . . . . . . . . . . . . . . . . . 77
4.5.2 Risk spreading . . . . . . . . . . . . . . . . . . . . . . 78
4.5.3 Back to Asset Pricing . . . . . . . . . . . . . . . . . . . 79
4.5.4 MeanVariance Utility . . . . . . . . . . . . . . . . . . 81
4.5.5 CAPM . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
4.6 Review Problems . . . . . . . . . . . . . . . . . . . . . . . . . 84
5 Information 91
5.1 Search . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
5.2 Adverse Selection . . . . . . . . . . . . . . . . . . . . . . . . . 98
5.3 Moral Hazard . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
5.4 The Principal Agent Problem . . . . . . . . . . . . . . . . . . 105
5.4.1 The Abstract PA Relationship . . . . . . . . . . . . . 105
6 Game Theory 113
6.1 Descriptions of Strategic Decision Problems . . . . . . . . . . 117
6.1.1 The Extensive Form . . . . . . . . . . . . . . . . . . . 117
6.1.2 Strategies and the Strategic Form . . . . . . . . . . . . 121
iv LA. Busch, Microeconomics May 2004
6.2 Solution Concepts for Strategic Decision Problems . . . . . . . 126
6.2.1 Equilibrium Concepts for the Strategic Form . . . . . . 127
6.2.2 Equilibrium Reﬁnements for the Strategic Form . . . . 131
6.2.3 Equilibrium Concepts and Reﬁnements for the Exten
sive Form . . . . . . . . . . . . . . . . . . . . . . . . . 134
Signalling Games . . . . . . . . . . . . . . . . . . . . . . . . . 138
6.3 Review Problems . . . . . . . . . . . . . . . . . . . . . . . . . 140
7 Review Question Answers 145
7.1 Chapter 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145
7.2 Chapter 3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155
7.3 Chapter 4 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159
7.4 Chapter 6 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 169
List of Figures
2.1 Consumer Optimum: The tangency condition . . . . . . . . . 22
2.2 Corner Solutions: Lack of tangency . . . . . . . . . . . . . . . 23
2.3 An Edgeworth Box . . . . . . . . . . . . . . . . . . . . . . . . 29
2.4 Pareto Optimal Allocations . . . . . . . . . . . . . . . . . . . 30
3.1 Storage without and with markets, no (physical) investment . 44
4.1 Risk Aversion . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
4.2 Risk Neutral and Risk Loving . . . . . . . . . . . . . . . . . . 63
4.3 The certainty equivalent to a gamble . . . . . . . . . . . . . . 64
4.4 Comparing two gambles with equal expected value . . . . . . . 67
4.5 An Insurance Problem in StateConsumption space . . . . . . 75
4.6 Risk aversion in the State Model . . . . . . . . . . . . . . . . 76
4.7 Eﬃcient Portfolios and the Market Portfolio . . . . . . . . . . 83
5.1 Insurance for Two types . . . . . . . . . . . . . . . . . . . . . 99
5.2 Impossibility of Pooling Equilibria . . . . . . . . . . . . . . . . 101
5.3 Separating Contracts could be possible . . . . . . . . . . . . . 102
5.4 Separating Contracts deﬁnitely impossible . . . . . . . . . . . 103
v
vi LA. Busch, Microeconomics May 2004
6.1 Examples of valid and invalid trees . . . . . . . . . . . . . . . 118
6.2 A Game of Perfect Recall and a Counterexample . . . . . . . 120
6.3 From Incomplete to Imperfect Information . . . . . . . . . . . 121
6.4 A Matrix game — game in strategic form . . . . . . . . . . . . 124
6.5 A simple Bargaining Game . . . . . . . . . . . . . . . . . . . . 125
6.6 The 4 standard games . . . . . . . . . . . . . . . . . . . . . . 126
6.7 Valid and Invalid Subgames . . . . . . . . . . . . . . . . . . . 135
6.8 The “Horse” . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137
6.9 A Signalling Game . . . . . . . . . . . . . . . . . . . . . . . . 139
6.10 A Minor Perturbation? . . . . . . . . . . . . . . . . . . . . . . 140
Chapter 1
Preliminaries
These notes have been written for Econ 401 as taught by me at the University
of Waterloo. As such the list of topics reﬂects the course material for that
particular course. It is assumed that the student has mastered the pre
requisites and little or no time is spent on them, aside from a review of
standard consumer economics and general equilibrium in chapter 1. I assume
that the student has access to a standard undergraduate micro theory text
book. Any of the books commonly used will do and will give introductions
to the topics covered here, as well as allowing for a review, if necessary, of
the material from the prerequisites.
These notes will not give references. The material covered is by now
fairly standard and can be found in one form or another in most micro texts.
I wish to acknowledge two books, however, which have served as references:
the most excellent book by MasColell, Whinston, and Green, Microeconomic
Theory, as well as the much more concise Jehle and Reny, Advanced Micro
economic Theory. I also would like to acknowledge my teachers Don Ferguson
and Glen MacDonald, who have done much to bring microeconomics alive
for me.
This preliminary chapter contains extensive quotes which I have found
informative, amusing, interesting, and thought provoking. Their sources have
been indicated.
1
2 LA. Busch, Microeconomics May2004
1.1 About Economists
By now you may have heard many jokes about economists and noticed that
modern economics has a bad reputation in some circles. If you mention your
ﬁeld of study in the bar, you are quite possibly forced to defend yourself
against various stereotypical charges (focusing on assumptions, mostly.) The
most eloquent quotes about economists that I know of are the following two
quotes reproduced from the Economist, Sept.4, 1993, p.25:
No real Englishman, in his secret soul, was ever sorry for the death
of a political economist, he is much more likely to be sorry for his
life. You might as well cry at the death of a cormorant. Indeed
how he can die is very odd. You would think a man who could
digest all that arid matter; who really preferred ‘sawdust without
butter’; who liked the tough subsistence of rigid formulae, might
defy by intensity of internal constitution all stomachic or lesser
diseases. However they do die, and people say that the dryness
of the Sahara is caused by a deposit of similar bones.
(Walter Bagehot (1855))
Are economists human? By overwhelming majority vote, the
answer would undoubtedly be No. This is a matter of sorrow for
them, for there is no body of men whose professional labours are
more conscientiously, or consciously, directed to promoting the
wealth and welfare of mankind. That they tend to be regarded
as bluenosed killjoys must be the result of a great misunder
standing. (Geoﬀrey Crowther (1952))
1.2 What is (Micro—)Economics?
In Introductory Economics the question of what economics is has received
some attention. Since then, however, this question may have received no
further coverage, and so I thought to collect here some material which I to
use to start a course. It is meant to provide a background for the ﬁeld as
well as a defense, of sorts, of the way in which micro economics is practiced.
Malinvaud sees economics as follows:
Preliminaries 3
Here we propose the alternative, more explicit deﬁnition: eco
nomics is the science which studies how scarce resources are em
ployed for the satisfaction of the needs of men living in society:
on the one hand, it is interested in the essential operations of
production, distribution and consumption of goods, and on the
other hand, in the institutions and activities whose object it is to
facilitate these operations. [...]
The main object of the theory in which we are interested is the
analysis of the simultaneous determination of prices and the quan
tities produced, exchanged and consumed. It is called microeco
nomics because, in its abstract formulations, it respects the in
dividuality of each good and each agent. This seems a necessary
condition a priori for logical investigation of the phenomena in
question. By contrast, the rest of economic theory is in most
cases macroeconomic, reasoning directly on the basis of aggre
gates of goods and agents.
[E. Malinvaud, Lectures on Microeconomic Theory, revised, NH,
1985, p.12.]
This gives us a nice description of what economics is, and in particular
what micro theory entails. In following the agenda laid out by Malinvaud a
certain amount of theoretical abstraction and rigor have been found neces
sary, and one key critique heard often is the “attempt at overblown rigor”
and the “unrealistic assumptions” which micro theory employs. Takayama
and Hildenbrand both address these criticisms in the opening pages of their
respective books. First Takayama:
The essential feature of modern economic theory is that it is an
alytical and mathematical. Mathematics is a language that facil
itates the honest presentation of a theory by making the assump
tions explicit and by making each step of the logical deduction
clear. Thus it provides a basis for further developments and ex
tensions. Moreover, it provides the possibility for more accurate
empirical testing. Not only are some assumptions hidden and ob
scured in the theories of the verbal and “curvebending” economic
schools, but their approaches provide no scope for accurate em
pirical testing, simply because such testing requires explicit and
mathematical representations of the propositions of the theories
to be tested.
[...] But yet, economics is a complex subject and involves many
things that cannot be expressed readily in terms of mathematics.
4 LA. Busch, Microeconomics May2004
Commenting on Max Planck’s decision not to study economics,
J.M. Keynes remarked that economics involves the “amalgam of
logic and intuition and wide knowledge of facts, most of which
are not precise.” In other words, economics is a combination
of poetry and hardboiled analysis accompanied by institutional
facts. This does not imply, contrary to what many poets and
institutionalists feel, that hardboiled analysis is useless. Rather,
it is the best way to express oneself honestly without being buried
in the millions of institutional facts. [...]
Mathematical economics is a ﬁeld that is concerned with com
plete and hardboiled analysis. The essence here is the method of
analysis and not the resulting collection of theorems, for actual
economies are far too complex to allow the ready application of
these theorems. J.M. Keynes once remarked that “the theory of
economics does not furnish a body of settled conclusions immedi
ately applicable to policy. It is a method rather than a doctrine,
an apparatus of the mind, a technique of thinking, which helps
its possessor to draw conclusions.”
An immediate corollary of this is that the theorems are useless
without explicit recognition of the assumptions and complete un
derstanding of the logic involved. It is important to get an intu
itive understanding of the theorems (by means of diagrams and
so on, if necessary), but this understanding is useless without a
thorough knowledge of the assumptions and proofs.
[Akira Takayama, Mathematical Economics, 2nd ed., Cambridge,
1985, p. xv.]
Hildenbrand oﬀers the following:
I cannot refrain from repeating here the quotation from Bertrand
Russell cited by F. Hahn in his inaugural lecture in Cambridge:
“Many people have a passionate hatred of abstraction, chieﬂy, I
think, because of its intellectual diﬃculty; but as they do not wish
to give this reason they invent all sorts of other that sound grand.
They say that all abstraction is falsiﬁcation, and that as soon as
you have left out any aspect of something actual you have exposed
yourself to the risk of fallacy in arguing from its remaining aspects
alone. Those who argue that way are in fact concerned with
matters quite other than those that concern science.” (footnote
2, p.2, with reference)
Preliminaries 5
Let me brieﬂy recall the main characteristics of an axiomatic the
ory of a certain economic phenomenon as formulated by Debreu:
First, the primitive concepts of the economic analysis are selected,
and then, each one of these primitive concepts is represented by
a mathematical object.
Second, assumptions on the mathematical representations of the
primitive concepts are made explicit and are fully speciﬁed. Math
ematical analysis then establishes the consequences of these as
sumptions in the form of theorems.
[Werner Hildenbrand, Twenty Papers of Gerard Debreu, Econo
metric Society Monograph 4, Cambridge, 1983, page 4, quoted
with omissions.]
1.3 Economics and Sex
I close this chapter with the following thought provoking excerpt from Mark
Perlman and Charles R. McCann, Jr., “Varieties of uncertainty,” in Uncer
tainty in Economic Thought, ed. Christian Schmidt, Edward Elgar 1996, p
910.
The problem as perceived
As this is an opening paper, let us begin with what was once an estab
lished cultural necessity, namely a reference to our religious heritage. What
we have in mind is the Biblical story of the Fall of Man, the details of which
we shall not bore you with. Rather, we open consideration of this diﬃcult
question by asking what was the point of that Book of Genesis story about
the inadequacy of Man.
We are told that apparently whatever were God’s expectations, He be
came disappointed with Man. Mankind and particularly Womankind
1
did
not live up to His expectations.
2
In any case, Adam and Eve were informed
1
Much has been made of the failure of women, perhaps that is because men wrote up
the history. We should add, in order to avoid deleterious political correctness (and thereby
cut oﬀ provocation and discussion), that since Eve was the proximate cause of the Fall,
and Eve represents sexual attraction or desire, some (particularly St Paul, whose opinion
of womankind was problematic) have considered that sexual attraction was in some way
even more responsible for the Fall than anything else. Put crudely, even if economics
is not a sexy subject, its origins were sexual.
2
What that says about His omniscience and/or omnipotence is, at the very least, para
6 LA. Busch, Microeconomics May2004
that they had ‘fallen’ from Grace, and all of us have been made to suﬀer ever
since.
From our analytical standpoint there are two crucial questions:
1. What was the sin; and
2. What was the punishment?
The sin seems to have been something combining (1) an inability to fol
low precise directions; (2) a willingness to be tempted, particularly when one
could assert that ‘one was only doing what everyone else (sic) was doing;
3
(3) a greed involving things (something forbidden) and time (instant grat
iﬁcation); (4) an inability to leave well enough alone; and (5) an excessive
Faustian curiosity. Naturally, as academic intellectuals, we fancy the ﬁfth
reason as best.
But what interests us directly is the second question. It is ‘What was
God’s punishment for Adam and Eve’s vicarious sin, for which all mankind
suﬀers?’ Purportedly a distinction has been made between what happened to
Man and Woman, but, the one clear answer, particularly as seen by Aquinas
and by most economists ever since, was that man is condemned to live with
the paradigm of scarcity of goods and services and with a schedule of ap
petites and incentives which are, at best, confusing.
In the more modern terms of William Stanly Jevons, ours is a world
of considerable pain and a few costly pleasures. We are driven to produce
so that we can consume, and production is done mostly by the ‘sweat of
the brow’ and the strength of the back. The study of economics — of the
production, distribution and even the consumption of goods and services —
it follows, is the result of the Original Sin. When Carlyle called Economics
the ‘Dismal Science’, he was, if anything, writing in euphemisms; Economics
per se, is the Punishment for Sin.
doxical.
3
Cf. Genesis, 3:912,16,17. [9] But the Lord God called to the man and said to him, ‘
Where are you?’ [10] He replied, ‘I heard the sound as you were walking the garden , and
I was afraid because I was naked, and I hid myself.’ [11] God answered, ‘ Who told you
that you were naked? Have you eaten from the tree which I forbade you?’ [12] The man
said, ‘The woman you gave me for a companion, she gave me fruit from the tree and I ate.’
[Note: The story, as recalled, suggests that Adam was dependent upon Eve (for what?),
and the price of that dependency was to be agreeable to Eve (‘It was really all her fault
— I only did what You [God] had laid out for me.’)] ([16] and [17] omitted) [Again, for
those civil libertarians amongst us, kindly note that God forced Adam to testify against
himself. Who says that the Bill of Rights is an inherent aspect of divine justice? Far from
it, in the Last Judgment, pleading the Fifth won’t do at all.]
Preliminaries 7
But, it is another line of analysis, perhaps novel, which we put to you.
Scarcity, as the paradigm, may not have been the greatest punishment, be
cause scarcity, as such, can usually be overcome. Scarcity simply means that
one has to allocate between one’s preferences, and the thinking man ought
to be able to handle the situation. We use our reasoning power, surely tied
up with Free Will, to allocate priorities and thereby overcome the greater
disasters of scarcity. What was the greater punishment, indeed the greatest
punishment, is more basic. Insofar as we are aware, it was identiﬁed early
on by another Aristotelian, one writing shortly before Aquinas, Moses Mai
monides. Maimonides suggested that God’s real punishment was to push
man admittedly beyond the limits of his reasoning power. Maimonides held
that prior to the Fall, Adam and Eve (and presumably mankind, generally)
knew everything concerning them; after the Fall they only had opinions.
4
Requisite to the wise use of power is understanding and full speciﬁcation;
what was lost was any such claim previously held by man to complete knowl
edge and the full comprehension of his surroundings. In other words, what
truly underlies the misery of scarcity is neither hunger nor thirst, but the
lack of knowledge of what one’s preference schedule will do to one’s happi
ness. For if one had complete knowledge (including foreknowledge) one could
compensate accordingly.
If one pursues Maimonides’ line of inquiry, it seems that uncertainty
(which is based not on ignorance of what can be known with study of data
collection, but also on ignorance tied to the unknowable) is the real punish
ment.
4
[omitted]
8 LA. Busch, Microeconomics May2004
Notation:
Blackboard versions of these symbols may diﬀer slightly.
I will not distinguish between vectors and scalars by notation. Generally all
small variables (x, y, p) are column vectors (even if written in the notes as row
vectors to save space.) The context should clarify the usage. Capital letters
most often denote sets, as in the consumption set X, or budget set B. Sets
of sets are denoted by capital script letters, such as A = ¦X
1
, X
2
, . . . , X
k
¦,
where X
i
= ¦x ∈ '
N
[
¸
n
x
2
n
= i¦.
I will use the usual partial derivative notation ∂f/∂x
1
= f
1
(), if no
confusion arises, and will often omit the arguments to a function but in
dicate that there is a function by the previous notation, i.e., f() denotes
f(x
1
, x
2
, . . . , x
n
), for example. Finally, the Reals ' will be written as IR on
the board, in the more familiar fashion.
' the real numbers, superscript denotes dimensionality
'
+
the nonnegative reals (include 0); ('
++
are the positive Reals)
∀ “for all”, for all elements in the set
∃ “there exists” or “there is at least one element”
“not”, or negation, the following statement is not true
dot product, as in x y = x
T
y =
¸
n
x
i
y
i
∈ “in”, or “element of”
÷ “such that”. Note: ∀x ∈ X ÷ A is equivalent to ¦x ∈ X [ A¦.
≤ less or equal to, component wise: x
n
≤ y
n
for all n = 1, . . . , N.
< strictly less than, component wise: ∀n ∈ N : x
n
< y
n
≥ greater or equal (component wise)
strictly greater (component wise)
~ strictly preferred to
_ weakly preferred to
∼ indiﬀerent to
∂ partial, as in partial derivatives
∇ gradient, the vector of partial derivatives of a function of a vector.
∇f(x) = [f
1
(), f
2
(), . . . , f
n
()]
T
D
x
derivative operator, the vector (matrix) of partial derivatives of a
vector of functions:
D
w
x(p, w) =
¸
∂x
1
()
∂w
,
∂x
2
()
∂w
, . . . ,
∂x
N
()
∂w
T
⇐⇒ “if and only if”, often written “iﬀ”
Chapter 2
Review
Consumer theory is concerned with modelling the choices of consumers and
predicting the behaviour of consumers in response to changes in their choice
environment (comparative statics.) This raises the question of how one can
model choice formally, or more precisely, how one can model decision making.
An extensive analysis of this topic would lead us way oﬀ course, but the two
possible approaches are both outlined in the next section. After that, we will
return to the standard way in which consumer decision making in particular
is addressed.
2.1 Modelling Choice Behaviour
1
Any model of choice behaviour at a minimum contains two elements: (1) a de
scription of the available choices/decisions. (2) a description of how/why de
cisions are made. The ﬁrst part of this is relatively easy. We specify some set
B which is the set of (mutually exclusive) alternatives. For a consumer this
will usually be the budget set, i.e., the set of amounts of commodities which
can be purchased at current prices and income levels. In principle this set can
be anything, however. A simple description of this set will be advantageous,
and thus we most often will have B = ¦x ∈ '
n
+
[px ≤ m¦, the familiar budget
set for a price taking consumer. Note that much of economics is concerned
with how choice changes as B changes. This is the standard question of com
parative statics, where we ask how choice adjusts as income, m or one of the
prices, p
i
, i ∈ ¦1, . . . , n¦, changes. It does require, however, some additional
1
This material is based on MasColell, Whinston, Green, chapter 1
9
10 LA. Busch, Microeconomics May2004
structure as to what kind of B are possible. So we get a set of sets, B, which
contains all sets B we could consider. For example, for our neoclassical con
sumer this is B =
¸
¦x ∈ '
n
+
[p x ≤ m¦, p ∈ S
n−1
, m ∈ ', m > 0
¸
. We also
need a description of the things which are assembled into these budget sets.
In consumer theory this would be a consumption bundle, that is, a vector x
indicating the quantities of all the goods. We normally let X be the set of
technically feasible choices, or consumption set. This set does not take into
account what is an economically feasible choice, but only what is in principle
possible (for example, it would contain 253 Lear Jets even if those are not
aﬀordable to the consumer in question, but it will not allow the consumer to
create time from income, say, since that would be physically impossible.)
For the second element, the description of choice itself, there are two
fundamental approaches we can take: (1) we can specify the choice for each
and every choice set, that is, we can specify choice rules directly. For our
application to the consumer this basically says that we specify the demand
functions of the consumer as the primitive notion in our theory.
Alternatively, (2), we can specify some more primitive notion of “pref
erences” and derive the choice for each choice set from those. Under this
approach a ranking of choices in X is speciﬁed, and then choice for a given
choice set B is determined by the rule that the highest ranked bundle in the
set is taken.
In either case we will need to specify what we mean by “rational”. This
is necessary since in the absence of such a restriction we would be able to
specify quite arbitrary choices. While that would allow our theory to match
any data, it also means that no refutable propositions can be derived, and
thus the theory would be useless. Rationality restrictions can basically be
viewed as consistency requirements: It should not be possible to get the
consumer to engage in behaviour that contradicts previous behaviour. More
precisely, if the consumer’s behaviour reveals to us that he seems to like some
alternative a better than another, b, by virtue of the consumer selecting
a when both are available, this conclusion should not be contradicted in
other observations. The way we introduce this restriction into the model will
depend on the speciﬁc approach taken.
2.1.1 Choice Rules
Since we will not, in the end, use this approach, I will give an abstract
description without explicit application to the consumer problem. We are
Review 11
to specify as the primitive of our theory the actual choice for each possible
set of choices. This is done by means of a choice structure. A choice
structure contains a set of nonempty subsets of X, denoted by B. For the
consumer this would be the collection of all possible budget sets which he
might encounter. For each actual budget set B ∈ B, we then specify a
choice rule (or correspondence). It assigns a set of chosen elements, denoted
C(B) ⊂ B. In other words, for each budget set B we have a “demand
function” C(B) speciﬁed directly.
The notion of rationality we impose on this is as follows:
Deﬁnition 1 A choice structure satisﬁes the weak axiom of revealed
preference if it satisﬁes the following restriction: If ever the alternatives x
and y are available and x is chosen, then it can never be the case that y is
chosen and x is not chosen, if both x and y are available. More concisely: If
∃B ∈ B ÷ x, y ∈ B ⇒x ∈ C(B) then ∀B
∈ B ÷ x, y ∈ B
and y ∈ C(B
) ⇒
x ∈ C(B
).
For example, suppose that X = ¦a, b, c, d¦, and that B = ¦ ¦a, b¦,
¦a, b, c¦, ¦b, c, d¦ ¦. Also suppose that C(¦a, b¦) = ¦a¦. While the above
criterion is silent on the relation of this choice to C(¦b, c, d¦) = ¦b¦, it does
rule out the choice C(¦a, b, c¦) = ¦b¦. Why? In the ﬁrst case, a is not an
element of both ¦a, b¦ and ¦b, c, d¦, so the deﬁnition is satisﬁed vacuously.
(Remember, any statement about the empty set is true.) In the second case,
the fact that C(¦a, b¦) = ¦a¦ has revealed a preference of the consumer for
a over b, after all both where available but only a was picked. A rational
consumer should preserve this ranking in other situations. So if the choice is
over ¦a, b, c¦, then b cannot be picked, only a or c, should c be even better
than a.
2
You can see how we are naturally lead to think of one choice being
better than, or worse than, another even when discussing this abstract notion
of direct speciﬁcation of choice rules. The second approach to modelling
decision making, which is by far the most common, does this explicitly.
2.1.2 Preferences
In this approach we specify a more primitive notion, that of the consumer
being able to rank any two bundles. We call such a ranking a preference
2
Just let x be a and y be b in the above deﬁnition!
12 LA. Busch, Microeconomics May2004
relation which is usually denoted by the symbol _. So, deﬁne a binary
relation _ on X, with the interpretation that for any x, y ∈ X, the statement
x _ y means that “x is at least as good as y”.
From this notion of weak preference we can easily deﬁne the notions
of strict preference (x ~ y ⇔ (x _ y and (y _ x))), and indiﬀerence
(x ∼ y ⇔(x _ y and y _ x)).
How is rationality imposed on preferences?
Deﬁnition 2 A preference relation _ is rational if it is a complete, reﬂexive,
and transitive binary relation.
complete: ∀x, y ∈ X, either x _ y or y _ x, or both;
reﬂexive: ∀x ∈ X, x _ x;
transitive ∀x, y, z ∈ X, x _ y and y _ z =⇒x _ z.
At this point we might want to consider how restrictive this requirement
of rational preferences is. Completeness simply says that any two feasible
choices can be ranked. This could be a problem, since what is required is the
ability to rank things which are very far from actual experience. For example,
the consumer is assumed to be able to rank two consumption bundles which
diﬀer only in the fact that in the ﬁrst he is to consume 2 Ducati 996R, 1
Boeing 737, 64 cans of Beluga caviar, and 2 bottles of Dom Perignon, while
the other contains 1 Ducati 996R, 1 Aprilia Mille RS, 1 Airbus 320, 57 cans
of Beluga caviar, and 2 bottles of Veuve Cliquot. For most consumers these
bundles may be diﬃcult to rank, since they may be unsure about the relative
qualities of the items. On the other hand, these bundles are very far from
the budget set of these consumers. For bundles close to being aﬀordable, it
may be much more reasonable to assume that any two can be ranked against
each other.
Transitivity could also be a problem. It is crucial to economics since
it rules out cycles of preference. This, among other important implications,
rules out “dutch books” — a sequence of trades in which the consumer pays
at each step in order to change his consumption bundle, only to return to
the initial bundle at the end. Yet, it can be shown that it is possible to have
real life consumers violate this assumption. One problem is that of “just
perceptible diﬀerences”. If the changes in the consumption bundle are small
enough, the consumer can be led through a sequence of bundles in which
he prefers any earlier bundle to a later bundle, but which closes up on the
ﬁrst, so that the last bundle in the sequence is preferred to the ﬁrst, even
though it was worse than all preceding bundles, including the ﬁrst. The
Review 13
other common way in which transitivity is violated is through the eﬀects
of the “framing problem”. The framing problem refers to the fact that a
consumer’s stated preference may depend on the way the question is asked.
A famous example of this are surveys about vaccinations. If the beneﬁts are
stressed (so many fewer crippled and dead) respondents are in favour, while
they are against vaccination programs if the costs are stressed (so many
deaths which would not have occurred otherwise.) It is worthwhile to note
that what looks like intransitivity can quite often be explained as the outcome
of transitive behaviour on more primitive characteristics of the problem. Also
note that addiction and habit formation commonly lead to what appears to
be intransitivity, but really involves a change in the underlying preferences,
or (more commonly) preferences which depend on past consumption bundles
as well. In any case, a thorough discussion of this topic is way beyond these
notes, and we will assume rational preferences henceforth.
While beautiful in it simplicity, this notion of preferences on sets of
choices is also cumbersome to work with for most (but note the beautiful
manuscript by Debreu, Theory of value). In particular, it would be nice if
we could use the abundance of calculus tools which have been invented to
deal with optimization problems. Thus the notion of a utility function is
useful.
Deﬁnition 3 A function u : X → ' is a utility function representing the
preference relation _ if ∀x, y ∈ X, (x _ y) ⇔(u(x) ≥ u(y)).
We will return to utility functions in more detail in the next section.
Choice now is modeled by specifying that the consumer will choose the
highest ranked available bundle, or alternatively, that the consumer will max
imize his utility function over the set of available alternatives, choosing the
one which leads to the highest value of the utility function (for details, see
the next section.)
2.1.3 What gives?
If there are two ways to model choice behaviour, which is “better”? How do
they compare? Are the two approaches equivalent? Why do economists not
just write down demand functions, if that is a valid method?
To answer these questions we need to investigate the relationship be
tween the two approaches. What is really important for us are the following
14 LA. Busch, Microeconomics May2004
two questions: If I have a rational preference relation, will it generate a choice
structure which satisﬁes the weak axiom? (The answer will be Yes.) If I have
a choice structure that satisﬁes the weak axiom, does there exist a rational
preference relation which is consistent with these choices? (The answer will
be Maybe.)
The following statements can be made:
Theorem 1 Suppose _ is rational. Then the choice structure (B, C()) =
(B, ¦x ∈ B[x _ y ∀y ∈ B¦) satisﬁes the weak axiom.
In other words, rational preferences always lead to a choice structure
which is consistent with the weak axiom (and hence satisﬁes the notion of
rationality for choice structures.)
Can this be reversed? Note ﬁrst that we only need to deal with B ∈ B,
not all possible subsets of X. Therefore in general more than one _ might do
the trick, since there could be quite a few combinations of elements for which
no choice is speciﬁed by the choice structure, and hence, for which we are free
to pick rankings. Clearly,
3
without the weak axiom there is no hope, since
the weak axiom has the ﬂavour of “no contradictions” which transitivity also
imposes for _. However, the following example shows that the weak axiom
is not enough: X = ¦a, b, c¦, B = ¦ ¦a, b¦, ¦b, c¦, ¦c, a¦ ¦, C(¦a, b¦) = ¦a¦,
C(¦b, c¦) = ¦b¦, C(¦c, a¦) = ¦c¦. This satisﬁes the weak axiom (vacuously)
but implies that a is better than b, b is better than c, and c is better than
a, which violates transitivity. Basically what goes wrong here is that the set
B is not rich enough to restrict C() much. However, if it were, there would
not be a problem as the following theorem asserts:
Theorem 2 If (B, C()) is a choice structure such that (i) the weak axiom
is satisﬁed, and (ii) B includes all 2 and 3 element subsets of X, then there
exists a rational preference ordering _ such that ∀B ∈ B, C(B) = ¦x ∈
B[x _ y ∀y ∈ B¦).
While this is encouraging, it is of little use to economists. The basic
problem is that we rarely if ever have collections of budget sets which satisfy
this restriction. Indeed, the typical set of consumer budget sets consists of
3
Words such as “clearly”, “therefore”, “thus”, etc. are a signal to the reader that
the argument should be thought about and is expected to be known, or at least easily
derivable.
Review 15
nested hyper planes. One way around this problem are the slightly stronger
axioms required by Revealed Preference Theory (see Varian’s Advanced Mi
croeconomic Theory for details.)
Hence, the standard approach to consumer theory employs preferences
as the fundamental concept. The next section will review this approach.
2.2 Consumer Theory
Consumers are faced with a consumption set X ⊂ '
n
+
which is the set
of all (nonnegative) vectors
4
x = (x
1
, x
2
, . . . , x
n
) where each coordinate x
i
indicates the amount desired/consumed of commodity i.
5
All commodities
are assumed inﬁnitely divisible for simplicity. Often we will have only two
commodities to allow simple graphing. A key concept to remember is that a
“commodity” is a completely speciﬁed good. That means that not only is it
a ‘car’, but its precise type, colour, and quality are speciﬁed. Furthermore,
it is speciﬁed when and where this car is consumable, and the circumstances
under which it can be consumed. In previous courses these latter dimensions
have been suppressed and the focus was on current consumption without
uncertainty, but in this course we will focus especially on these latter two.
So we will have, for example, today’s consumption versus tomorrow’s, or
consumption if there is an accident versus consumption if there is no accident
(i.e., consumption contingent on the state of the world.) The consumption set
incorporates all physical constraints (no negative consumption, no running
time backwards, etc) as well as all institutional constraints. It does not
include the economic constraints the consumer faces.
The economic constraints come chieﬂy in two forms. One is an assump
tion about what inﬂuence, if any, the consumer may have on the prices which
are charged in the market. The usual assumption is that of price taking,
which is to say that the consumer does not have any inﬂuence on price, or
more precisely, acts under the assumption of not having any inﬂuence on
price. (Note that there is a slight diﬀerence between those two statements!)
This assumption can be relaxed, of course, but then we have to specify how
price reacts to the consumer’s demands, or at least how the consumer thinks
4
While I write all vectors as row vectors in order to save space and notation, they
are really column vectors. Hence I should really write x = (x
1
, x
2
, . . . , x
n
)
T
where the T
operator indicates a transpose.
5
Given the equilibrium concept of market clearing a consumer’s desired consumption
will coincide, in equilibrium, with his actual consumption. The diﬀerence between the two
concepts, while crucial, is therefore seldom made.
16 LA. Busch, Microeconomics May2004
that they will react. We will deal with a simple version of that later under
the heading of bilateral monopoly, where there are only two consumers who
have to bargain over the price. In all but the most trivial settings, situations
in which the prices vary with the consumer’s actions and the consumer is
cognizant of this fact will be modelled using game theory.
More crucially, consumers cannot spend more money than what they
have. In other words, their consumption choices are limited to economi
cally feasible vectors. The set of economically feasible consumption vec
tors for a given consumer is termed the consumer’s budget set: B =
¦x ∈ X [ p x ≤ m¦. Here p is a vector of n prices, and m is a scalar de
noting the consumer’s monetary wealth. Recall that p x denotes a ‘dot
product’, so that p x =
¸
n
i=1
p
i
x
i
. The left hand side of the inequality
therefore gives total expenditure (cost) of the consumption bundle. To add
to the potential confusion, we normally do not actually have money in the
economy at all. Instead the consumer usually has an endowment — an ini
tial bundle of goods. The budget constraint then requires that the value of
the ﬁnal consumption bundle does not exceed the value of this endowment,
in other words B = ¦x ∈ X [ p x ≤ p ω¦. Here ω ∈ X is the endowment.
You can treat this as a two stage process during which prices stay constant:
First, sell all initially held goods at market prices: this generates income of
p ω = m. Then buy the set of ﬁnal goods under the usual budget constraint.
Note however that for comparative static purposes only true market trans
actions are important, as evidenced by the change in the Slutsky equation
when moving to the endowment model (p.46, for example.)
The behavioural assumption in consumer theory is that consumers
maximize utility. What we really mean by that is that consumers have
the ability to rank all consumption bundles x ∈ X according to what is
termed the consumer’s preferences, and will choose that bundle which is the
highest ranked (most preferred) among all the available options. We denote
preferences by the symbol _, which is a binary relation. That simply means
that it can be used to compare two vectors (not one, not three or more.)
The expression x _ y is read as “consumption bundle x is at least as good
as bundle y”, or “x is weakly preferred to y”.
These preferences are assumed to be:
(i) complete (∀x, y ∈ X : either x _ y or y _ x or both);
(ii) reﬂexive (∀x ∈ X : x _ x);
(iii) transitive (∀x, y, z ∈ X : (x _ y, y _ z) ⇒x _ z);
(iv) continuous (¦x [ x _ y¦ and ¦x [ y _ x¦ are closed sets.)
What this allows us to do is to represent the preferences by a function
Review 17
u : '
N
+
→', called a utility function, which has the property that
u(x) > u(y) ⇐⇒x ~ y,
i.e., that the value of the function evaluated at x is larger than that at y
if and only if x is strictly preferred to y. If preferences are also strongly
monotonic ((x ≥ y, x = y) ⇒ x ~ y) then the function u() can also be
chosen to be continuous. With some further assumptions it will furthermore
be continuously diﬀerentiable, and that is what we normally assume.
Note that the utility function u() representing some preferences _is not
unique! Indeed, any other function which is a monotonic increasing trans
formation of u(), say h(u()), h
() > 0, will represent the same preferences.
So, for example, the following functions all represent the same preferences
on '
2
:
x
α
1
x
β
2
; αlnx
1
+βlnx
2
; x
a
1
x
(1−a)
2
−10
4
, a =
α
α +β
; α, β > 1.
In terms of the diagram of the utility function this means that the function
has to be increasing, but that the rate of increase, and changes in the rate
of increase, are meaningless. Put diﬀerently, the “spacing” of indiﬀerence
curves, or more precisely, the labels of indiﬀerence curves, are arbitrary, as
long as they are increasing in the direction of higher quantities. Furthermore,
any two functions that have indiﬀerence curves of the same shape represent
the same preferences. This is in marked contrast to what we will have to
do later. Once we discuss uncertainty, the curvature (rate of change in the
rate of increase) of the function starts to matter, and we therefore will then
be restricted to using positive aﬃne transforms only (things of the form
a +bu(); b > 0).
Finally a few terms which will arise frequently enough to warrant deﬁ
nition here:
Deﬁnition 4 The preferences _ are said to be monotone if
∀x ∈ X x >> y ⇐⇒ x ~ y.
Deﬁnition 5 The preferences _ are said to be strongly monotone if
∀x ∈ X x ≥ y ⇐⇒ x ~ y.
Note in particular that Leontief preferences (‘L’shaped indiﬀerence curves)
are monotone but not strictly monotone.
18 LA. Busch, Microeconomics May2004
Deﬁnition 6 The preferences _ are said to be convex if
∀x ∈ X the set ¦y ∈ X[y _ x¦ is convex.
In other words, the set of bundles weakly preferred to a given bundle is
convex. Applied to the utility function u() representing _ this means that
the upper contour sets of the utility function are convex sets, which, if you
recall, is the deﬁnition of a quasiconcave function.
Deﬁnition 7 A function u : '
N
→ ' is quasiconcave if its upper contour
sets are convex. Alternatively, u is quasiconcave if
u(λx + (1 −λ)y) ≥ min¦u(x), u(y)¦, ∀λ ∈ [0, 1].
Note also that the lower boundary of an upper contour set is what we refer to
as an indiﬀerence curve. (Which exists due to our assumption of continuity
of _, which is why we had to make that particular assumption.)
Special Utility Functions
By and large utility functions and preferences can take many forms. However,
of particular importance in modelling applications and in many theories are
those which allow the derivation of the complete indiﬀerence map from just
one indiﬀerence curve. Two kinds of preferences for which this is true exist,
homothetic preferences and quasilinear preferences.
Deﬁnition 8 Preferences _ are said to be homothetic if
∀x, y ∈ X, α ∈ '
++
, x ∼ y ⇐⇒αx ∼ αy.
In other words, for any two bundles x and y which are indiﬀerent to one
another, we may scale them by the same factor, and the resulting bundles
will also be indiﬀerent. This is particularly clear in '
2
+
. A bundle x can be
viewed as a vector with its foot at the origin and its head at the coordinates
(x
1
, x
2
). αx, α > 0 deﬁnes a ray through the origin through that point. Now
consider some indiﬀerence curve and 2 points on it. These points deﬁne two
rays from the origin, and homotheticity says that if we scale the distance from
the origin by the same factor on these two rays, we will intersect the same
indiﬀerence curve on both. Put diﬀerently, indiﬀerence curves are related to
one another by “radial blowup”. The utility function which goes along with
such preferences is also called homothetic:
Review 19
Deﬁnition 9 A utility function u : '
N
+
→' is said to be homothetic if u()
is a positive monotonic transformation of a function which is homogenous of
degree 1.
How does this assumption aﬀect indiﬀerence curves? Recall that the key
about indiﬀerence curves is their slope, not the label attached to the level of
utility. The slope, or MRS (Marginal Rate of Substitution,) is u
1
(x)/u
2
(x).
Now let u(x) be homothetic, that is u(x) = h(l(x)) where h
() > 0 and l()
is homogeneous of degree 1. Then we get
u
1
(x)
u
2
(x)
=
h
(l(x))l
1
(x)
h
(l(x))l
2
(x)
=
l
1
(x)
l
2
(x)
by the chain rule. So what you ask? Recall that
Deﬁnition 10 A function h : '
N
→' is homogeneous of degree k if ∀λ > 0,
h(λx) = λ
k
h(x).
Therefore (in two dimensions, for simplicity) l(x
1
, x
2
) = x
1
l(1, x
2
/x
1
) =
x
1
ˆ
l(k), where k = x
2
/x
1
. But then l
1
(x) =
ˆ
l(k)+
ˆ
l
(k)k, and l
2
(x) =
ˆ
l
(k), and
therefore the marginal rate of substitution of a homothetic utility function is
only a function of the ratio of the consumption amounts, not a function of
the absolute amounts. But the ratio x
2
/x
1
is constant along any ray from the
origin. Therefore, the MRS of a homothetic utility function is constant along
any ray from the origin! You can verify this property for the CobbDouglas
utility function, for example.
The other class of preferences for which knowledge of one indiﬀerence
curve is enough to know all of them is called quasilinear.
Deﬁnition 11 The preference _ deﬁned on ''
N−1
+
is called quasilinear
with respect to the numeraire good 1, if ∀x, y ∈ ''
N−1
+
, α > 0, x ∼ y ⇐⇒
x +αe
1
∼ y +αe
1
. Here e
1
= (1, 0, 0, . . .) is the base vector of the numeraire
dimension.
In terms of the diagram in '
2
, we require that indiﬀerence between 2 con
sumption bundles is maintained if an equal amount of the numeraire com
modity is added (or subtracted) from both: Indiﬀerence curves are related
to one another via translation along the numeraire axis!
Deﬁnition 12 A utility function u : ''
N−1
+
→' is called quasilinear if
it can be written as u(x) = x
1
+ ˆ u(x
−1
).
20 LA. Busch, Microeconomics May2004
For example the functions u(x) = x
1
+
√
x
2
and u(x) = x
1
+ lnx
2
are quasi
linear with respect to commodity 1. Note that the MRS of a quasilinear
function is independent of the numeraire commodity:
u
1
(x)
u
2
(x)
=
1
ˆ u
(x
2
)
,
so that the indiﬀerence curves all have the same slope along a line parallel
to the numeraire axis (good 1 in this case.)
2.2.1 Utility Maximization
The problem addressed in consumer theory is to predict consumer behaviour
for a price taking consumer with rational preferences. The consumer’s be
haviour is often summarized by the Marshallian demand functions, which
give us the desired consumption bundle of the consumer for all prices and in
come. These demands correspond to the observable demands by consumers,
since they depend only on observable variables (prices and income.) This
is in contrast to the Hicksian demand functions, which we later derive from
the expenditure minimization problem. Marshallian, or ordinary, demand is
derived mathematically from a constrained optimization problem:
max
x∈X
¦u(x) s.t p x ≤ m¦.
The easiest way to solve this is to realize that (i) normally all commodities
are goods, that is, consumer preferences are monotonic, wherefore we can
replace the inequality with an equality constraint;
6
(ii) that normally we have
assumed a diﬀerentiable utility function so that we can now use a Lagrangian:
max
x∈X
L = u(x) +λ(m−p x)
(FOC
i
) L
i
=
∂
∂x
i
u(x) −λp
i
= u
i
() −p
i
λ = 0 ∀i = 1 . . . n
(FOC
λ
) L
λ
= m−p x = 0
Of course, there are also second order conditions, but if we know that prefer
ences are convex (the utility function is quasiconcave) and that the budget
is (weakly) concave, we won’t have to worry about them. If these conditions
6
In fact, Local Nonsatiation is enough to guarantee that the budget is exhausted.
Local Nonsatiation assumes that for any bundle there exists a strictly preferred bundle
arbitrarily close by. This is weaker than monotonicity since no direction is assumed.
Review 21
are not obviously satisﬁed in a problem you will have to check the second
order conditions!
The above ﬁrst order conditions are n +1 equations which have to hold
as identities at the optimum values of the n + 1 choice variables (the n
consumption levels of goods, and the multiplier λ.) That means a variety of
things, most importantly that the implicit function theorem can be applied
if need be (and it is needed if we want to do comparative statics!)
Let us interpret the above ﬁrst order conditions. The last equation
(FOC
λ
) states that the optimum is on the budget constraint. Since they are
identities, any two of the other ﬁrst order conditions can be combined in one
of the following ways:
u
i
() = λp
i
u
j
() = λp
j
⇒
u
i
()
u
j
()
=
p
i
p
j
⇒
u
i
()
p
i
=
u
j
()
p
j
These have the economic interpretation that the (negative of the) slope of
the indiﬀerence curve (the MRS) is equal to the ratio of prices. But since the
slope of the budget is the negative of the ratio of prices, this implies that a
tangency of an indiﬀerence curve to the budget must occur at the optimum.
7
The second form tells us that the marginal utility gained from the last
dollar’s worth of consumption must be equalized across all goods at the
optimum, another useful thing to keep in mind. Also note that λ
∗
= u
i
()/p
i
,
so that the level of the multiplier gives us the “marginal utility of budget”.
This can also be seen by considering the fact that ∂L/∂m = λ.
The marginal rate of substitution term depends on the amounts of the
goods consumed. In other words u
i
()/u
j
() is a function of x. Requiring that
this ratio take on some speciﬁc value thus implicitly deﬁnes a relationship
between the x
i
. This tangency condition thus translates into a locus of
consumption bundles where the slope of the indiﬀerence curve has the given
value. This locus is termed the Income Expansion path. This is due to the
fact that this locus would be traced out if we were to keep prices constant but
increased income, thus shifting the budget out (in a parallel fashion.) Since
the budget line provides another locus of consumption bundles, namely those
which are aﬀordable at a given income level, solving for the demands then
boils down to solving for the intersection of these lines. Now note that the
7
If you ever get confused about which term “goes on top” remember the following
derivation: An indiﬀerence curve is deﬁned by u(x, y) = u. Taking a total diﬀerential we
obtain u
x
(·)dx + u
y
(·)dy = 0, and hence dy/dx = −u
x
(·)/u
y
(·). The same works for the
budget. As a last resort you can verify that it is the term for the horizontal axis which is
“on top”.
22 LA. Busch, Microeconomics May2004
0
5
10
15
20
0 5 10 15 20
good 2
good 1
Figure 2.1: Consumer Optimum: The tangency condition
income expansion path for both homothetic and quasilinear preferences is
linear. Hence solving for demands is particularly simple, since, in geometric
terms, we are looking for the intersection of a line and a (hyper) plane.
In terms of a diagram, we have the usual picture of a set of indiﬀerence
curves superimposed on a budget set, given in Figure 2.1. The optimum
occurs on the highest indiﬀerence curve feasible for the budget, i.e., where
there is a tangency between the budget and an indiﬀerence curve.
Note a couple of things: For one, the usual characterization of “tangency
to budget” is only valid if we are not at a corner solution and can compute the
relevant slopes. If we are at a corner solution (and this might happen either
because our budget ‘stops’ before it reaches the axis or because our indiﬀer
ence curves intersect an axis, as for quasilinear preferences) then we really
need to consider “complementary slackness conditions”. Basically we will
have to consider all the nonnegativity constraints which we have currently
omitted, and their multipliers. We will not bother with that, however. In
this course such cases will be fairly obvious and can be solved by inspection.
The key thing to remember, which follows from the condition on per dollar
marginal utilities, is that if we are at a corner then the indiﬀerence curve
must lie outside the budget. That means that on the horizontal axis the
indiﬀerence curve is steeper than the budget, on the vertical axis it is ﬂatter.
Finally, if the indiﬀerence curve or the budget have a kink, then no tangency
exists in strict mathematical terms (since we do not have diﬀerentiability),
but we will often loosely refer to such a case as a “tangency” anyways.
Review 23
0
5
10
15
20
0 5 10 15 20
good 2
good 1
0
5
10
15
20
0 5 10 15 20
good 2
good 1
Figure 2.2: Corner Solutions: Lack of tangency
Properties
As mentioned, the above problem gives rise ﬁrst of all to the (ordinary)
demand functions. These tell us the relationship of demand behaviour to the
parameters of the problem, namely the price vector and the income:
x(p, m) = argmax
x∈X
¦ u(x) +λ(m−p x) ¦.
The other result of solving the optimization problem is the indirect utility
function, which tells us the highest level of utility actually achieved:
v(p, m) = max
x∈X
¦ u(x) +λ(m−p x) ¦.
This is useless per s`e, but the function is nevertheless useful for some appli
cations (duality!)
What properties can we establish for these? Two important properties
of the (ordinary) demand functions are
Theorem 3 The (ordinary) demand x(p, m) derived from a continuous util
ity function representing rational preferences which are monotonic satisﬁes:
homogeneous of degree zero: x(p, m) = x(αp, αm), ∀p, m; α > 0;
Walras’ Law: p x(p, m) = m, ∀p >> 0, m > 0.
Walras’ Law has two important implications known as Engel and Cournot
Aggregation. Both are derived by simple diﬀerentiation of Walras’ Law. En
gel Aggregation refers to the fact that a change in consumer income must
all be spent. Cournot Aggregation refers to the fact that a change in a price
may not change total expenditure.
Deﬁnition 13 Engel Aggregation:
¸
n
i=1
p
i
∂x
i
(·)
∂m
= 1, or more compactly p D
p
x(p, m) = 1.
24 LA. Busch, Microeconomics May2004
Deﬁnition 14 Cournot Aggregation:
¸
n
i=1
p
i
∂x
i
(·)
∂p
k
+x
k
(p, m) = 0, ∀k = 1, . . . , n, or p D
p
x(p, m) +x(p, m)
T
= 0
T
.
(Here 0
T
is a row vector of zeros.)
Sometimes it is more useful to write these in terms of elasticities (since that
is what much of econometrics estimates)
8
.
Deﬁne s
i
(p, m) = p
i
x
i
(p, m)/m, the expenditure share of commodity i
(note how for a CobbDouglas utility function this is constant!) Let
ia
()
denote the elasticity of demand for good i with respect to the variable a.
Then the above aggregation laws can be expressed as
n
¸
i=i
s
i
(p, m)
im
(p, m) = 1 and
n
¸
i=i
s
i
(p, m)
ik
(p, m)i +s
k
(p, m) = 0.
We will not bother here with the properties of the indirect utility func
tion aside from providing Roy’s Identity:
x(p, m) =
−1
∂v(p, m)/∂m
∇
p
v(p, m) =
¸
−
∂v()/∂p
l
∂v()/∂m
n×1
.
This is “just” an application of the envelope theorem, or can be derived
explicitly by using the ﬁrst order conditions. The reason this is useful is
that it is mathematically much easier to diﬀerentiate, compared to solving
L + 1 nonlinear equations. Hence a problem can be presented by stating an
indirect utility function and then diﬀerentiating to get demands, a technique
often employed in trade theory.
9
Finally, note that we have no statements about demand other than
the ones above. In particular, it is not necessarily true that (ordinary)
demand curves slope downwards (∂x(p, m)/∂p
i
≤ 0.) Similarly, the income
derivatives can take any sign, and we call a good normal if ∂x(p, m)/∂m ≥ 0,
while we call it inferior if ∂x(p, m)/∂m ≤ 0.
8
Recall that often the logarithms of variables are regressed. A coeﬃcient on the right
hand side thus represents
dlnx
dlny
=
dx
dy
y
x
9
In graduate courses the properties of the indirect utility function get considerable
attention for this reason. It can be shown that any function having the the requisite
properties can be the indirect utility function for some consumer.
Review 25
2.2.2 Expenditure Minimization and the Slutsky equa
tion
Another way to consider the consumer’s problem is to think of the least
expenditure needed to achieve a given level of utility. Basically, this reverses
the process from above. We now ﬁx the indiﬀerence curve we want to achieve
and minimize the budget subject to that:
e(p, u) = min
x∈X
¦ p x +λ(¯ u −u(x)) ¦.
The function which tells us the least expenditure for a given price vector and
utility level is called the expenditure function. The optimal choices for
this problem,
h(p, u) = argmin
x∈X
¦ p x +λ(¯ u −u(x)) ¦,
are the Hicksian demand functions. These are unobservable (since they
depend on the unobservable utility level) but extremely useful. The reason is
that the Hicksian demand for good i, h
i
(p, u), is (by the envelope theorem)
the ﬁrst partial derivative of the expenditure function with respect to p
i
. It
follows that the ﬁrst derivative of the Hicksian demand function with respect
to a price is the second partial derivative of the expenditure function. So
what, you ask? Consider the following:
Theorem 4 If u(x) is continuous and represents monotonic, rational prefer
ences and if p >> 0, then e(p, u) is homogeneous of degree 1 in prices;
continuous in u and p; concave in p; increasing in u and nonde
creasing in p.
We will not prove these properties, but they are quite intuitive. For concavity
in p consider the following: Fix an indiﬀerence curve and 2 tangent budget
lines to it. A linear combination of prices at the same income is another
budget line which goes through the intersection point of the two original
budgets and has intermediate axis intercepts. Clearly, it cannot touch or
intersect the indiﬀerence curve, indicating that higher expenditure (income)
would be needed to reach that particular utility level.
So, since the expenditure function is concave in prices, this means that
the matrix of second partials, D
p
∇
p
e(p, u), is negative semideﬁnite. Also,
by Young’s Theorem, this matrix is symmetric. But it is equal to the matrix
of ﬁrst partials of the Hicksian demands, which therefore is also symmetric
and negative semideﬁnite!
26 LA. Busch, Microeconomics May2004
Why is this useful information? It means that Hicksian demand curves
do not slope upwards, since negative semideﬁniteness requires negative main
diagonal entries. This is a statement which we could not make about ordinary
demand curves. Also, the crossprice eﬀects of any two goods are equal. This
certainly is unexpected, since it requires that the eﬀect of a change in the
price of gasoline on bananas is the same as the eﬀect of a change in the
price of bananas on gasoline. Finally, Hicksian demands are homogeneous of
degree zero in prices.
As mentioned, the Hicksian demand curves are unobservable. So why
should we be excited about having strong properties for them, if they can
never be tested? One key reason is the Slutsky equation, which links the
unobservable Hicksian demands to the observable ordinary demands. We
proceed as follows: The fact that we are talking about two ways to look at
the same optimal point means that
x
i
(p, e(p, u)) = h
i
(p, u),
where we recognize the fact that the income and expenditure level must
coincide in the two problems. Now taking a derivate of both sides we get
∂x
i
(p, e(p, u))
∂p
j
+
∂x
i
(p, e(p, u))
∂m
∂e(p, u)
∂p
j
=
∂h
i
(p, u)
∂p
j
,
and using the envelope theorem on the deﬁnition of the expenditure function
as well as utilizing the original equality we simplify this to
∂x
i
(p, e(p, u))
∂p
j
+
∂x
i
(p, e(p, u))
∂m
x
j
(p, u) =
∂h
i
(p, u)
∂p
j
.
The right hand side of this is a typical element of the symmetric and negative
semideﬁnite matrix of price partials of the Hicksian demands. The equality
then implies that the matrix of corresponding left hand sides, termed the
Slutsky matrix, is also symmetric and negative semideﬁnite.
Reordering any such typical element we get the Slutsky equation:
∂x
i
(p, e(p, u))
∂p
j
=
∂h
i
(p, u)
∂p
j
−
∂x
i
(p, e(p, u))
∂m
x
j
(p, e(p, u)).
This tells us that for any good the total response to a change in price is
composed of a substitution eﬀect and an income eﬀect. Therefore ordinary
demands can slope upward if there is a large enough negative income eﬀect
— which means that we need an inferior good.
Review 27
It bears repeating that homogeneous demands satisfying Walras’ Law
with a symmetric, negative semideﬁnite Slutsky matrix is all that our theory
can say about the consumer. Nothing more can be said unless one is willing
to assume particular functional forms.
2.3 General Equilibrium
What remains is to model the determination of equilibrium prices. This is
not really done in economics, however. Since in the standard model every
consumer (and every ﬁrm, in a production model) takes prices as given, there
is nobody to set prices. Instead a “Walrasian auctioneer” is imagined, who
somehow announces prices. The story (and it is nothing more) is that the
auctioneer announces prices, and checks if all markets clear at those prices.
If not, no trades occur but the prices are adjusted (in the way ﬁrst suggested
in Econ 101: markets with excess demands experience a price increase, those
with excess supply a price decrease, the “tˆ atonnement” process.) Of course,
all agents other than the auctioneer must be unaware of this process some
how — otherwise they might start to misstate demands and supplies in
order to aﬀect prices. In any case, there really is no model of equilibrium
price determination. This does, of course, not stop us from deﬁning what an
equilibrium price is! At its most cynical, equilibrium can be viewed simply as
a consistency condition which makes our model logically consistent. In par
ticular, it must be true that, in equilibrium, no consumer ﬁnds it impossible
to trade the amounts desired at the announced market prices. That is, a key
requirement of any equilibrium price vector will be that all economic agents
can maximize their objective functions and carry out the resulting plans.
Since aggregate demands and supplies are the sum of individual demands
and supplies it follows that aggregate demand and supply must balance in
equilibrium — all markets must clear!
In what follows we will only review 2 special cases of general equilibrium,
the pure exchange economy, and the one consumer, one producer economy.
2.3.1 Pure Exchange
Consider an economy with two goods, i = 1, 2, and two consumers, l = 1, 2.
Each consumer has an initial endowment ω
l
= (ω
l
1
, ω
l
2
) of the two goods,
and preferences _
j
on '
2
which can be represented by some utility function
u
j
(x) : '
2
+
→ '. The total endowment of good i in the economy is the
28 LA. Busch, Microeconomics May2004
sum of the consumers’ endowments, so there are ω
1
= ω
1
1
+ω
2
1
units of good
1, and ω
1
= ω
1
2
+ω
2
2
units of good 2.
We can now deﬁne the following:
Deﬁnition 15 An allocation x = (x
1
, x
2
) is an assignment of a nonnega
tive consumption vector x
l
∈ '
2
to each consumer.
Deﬁnition 16 An allocation is feasible if
¸
2
l=1
x
l
i
=
¸
2
l=1
ω
l
i
for i = 1, 2.
Note that feasibility implies that if consumer 1 consumes (x
1
1
, x
1
2
) in
some feasible allocation, then consumer 2 must consume (ω
1
− x
1
1
, ω
2
− x
1
2
).
This fact allows us to represent the set of feasible allocations by a box in
'
2
. Suppose we measure good 1 along the horizontal axis and good 2 along
the vertical axis. Then the feasible allocations lie anywhere in the rectangle
formed by the origin and the point (ω
1
, ω
2
). Furthermore, if we measure
consumer 1’s consumption from the usual origin (bottom left), then for any
consumption point x we can read oﬀ consumer 2’s consumption by measuring
from the right top ‘origin’ (the point ω.)
Also note that a budget line for consumer 1, deﬁned by p x = p ω
1
passes through both the endowment point and the allocation, and has a slope
of p
1
/p
2
. But if we consider consumer 2’s budget line at these prices we ﬁnd
that it passes through the same two points and has, measured from consumer
2’s origin on the top right, the same slope. Hence any price vector deﬁnes
a line through the endowment point which divides the feasible allocations
into the budget sets for the two consumers. These budget sets only have the
budget line in common.
As a ﬁnal piece, we may represent each consumer’s preferences by an
indiﬀerence map in the usual fashion, only that consumer 2’s consumption
set is measured from the top right origin (at ω in consumer 1’s coordinates.)
Note that both consumer’s indiﬀerence curves are not limited to lie within
the box. Feasibility is a problem for the economy, not the consumer. Indeed,
the consumer is assumed to be unaware of the actual amount of the goods
in the economy. Hence each consumer’s budget and indiﬀerence maps are
deﬁned in the usual fashion over the respective consumer’s consumption set,
which here is all of '
2
+
, measured from the usual origin for consumer 1 and
measured down and to the left from ω for consumer 2! (More bluntly, both
a consumer’s budget line and her indiﬀerence curves can “leave the box”.)
Review 29
Figure 2.3 represents an example of such a pure exchange economy.
Here we have assumed that u
1
(x) = min¦x
1
, x
2
¦; ω
1
= (10, 5), while u
2
(x) =
x
1
+x
2
; ω
1
= (10, 10).
O
1
x
1
1
` x
1
2
O
2
,
x
2
1
·
x
2
2
10 20
5
15
s
ω
p
Figure 2.3: An Edgeworth Box
Aside of the notion of feasibility the other key concept in general equi
librium analysis is that of Pareto eﬃciency, or Pareto optimality.
Deﬁnition 17 A feasible allocation x is Pareto Eﬃcient if there does not
exist a feasible allocation y such that y _
l
x, ∀j, and ∃j ÷ y ~
l
x.
A slightly weaker criterion also exists to deal with preferences that are
not strictly monotonic:
Deﬁnition 18 A feasible allocation x is weakly Pareto Eﬃcient if there
does not exist a feasible allocation y such that y ~
l
x, ∀j.
In words, Pareto eﬃciency (PO) requires that no feasible alternative
exist which leaves all consumers as well oﬀ and at least one better oﬀ. Weak
Pareto eﬃciency requires that no feasible allocation exist which makes all
consumers better oﬀ.
Note that these deﬁnitions can be tricky to apply since an allocation is
deﬁned as eﬃcient if something is not true. Moreover, there are inﬁnitely
30 LA. Busch, Microeconomics May2004
many other feasible allocations for any given one we wish to check. A brute
force approach therefore is not the smart thing to do. Instead we can proceed
as follows: Suppose we wish to check for PO of an allocation, say (ω
1
, ω
2
)
in Figure 2.3. Does there exist a feasible alternative which would make one
consumer better oﬀ and not hurt the other? The consumers’ indiﬀerence
curves through the allocation ω will tell us allocations for each consumer
which are at least as good. Allocations below a consumer’s indiﬀerence curve
make her worse oﬀ, those above better oﬀ. In Figure 2.4 these indiﬀerence
curves have been drawn in.
O
1
x
1
1
` x
1
2
O
2
,
x
2
1
·
x
2
2
10 20
5
15
s
ω
s
z
s
q
Figure 2.4: Pareto Optimal Allocations
Note that any allocation in the triangular region which contains alloca
tions above both indiﬀerence curves will do as the alternative that makes ω
not PO (and indeed not weakly PO either.) Next, note that ω was in no
way special: all allocations below the “kink line” of consumer 1 would give
rise to just the same diagram. Hence none of those can be PO either. The
allocations above the kink line are just mirror images, they are not PO either.
That leaves the kink line itself, with allocations such as z = ((11, 11), (9, 4)).
For such allocations the weakly preferred sets of the two consumers do not
intersect, hence it is not possible to make one better oﬀ while not reducing
the utility of the other. Neither is it possible to make both better oﬀ. Hence
these allocations are weakly PO and PO.
Finally, we need to consider the boundaries separately. The axes of
consumer 1 are no problem, they give rise to a diagram just like that for ω,
in fact. But how about points such as q, which lie on the top boundary of the
Review 31
box to the right of the intersection with the kink line? For such points both
consumers cannot be made better oﬀ, since there exists no feasible allocation
which makes consumer 1 better oﬀ. Thus, the allocation q is weakly PO.
However, by moving left we can make consumer 2 better oﬀ without reducing
the utility of consumer 1, so q is not PO in the strict sense.
The set of Pareto optimal allocations in Figure 2.4 therefore is the kink
line of consumer 1, that is all allocations
¦x ∈ '
4
[x
1
1
= x
1
2
, x
1
1
≤ 15, x
2
1
= 20 −x
1
1
, x
2
2
= 15 −x
1
2
, x
l
i
≥ 0 ∀i, l = 1, 2¦.
You can see from the above example that the diﬀerence between PO
and weak PO can only arise for preferences which have vertical or horizontal
sections that could coincide with a boundary.
Theorem 5 All Pareto Optimal allocations are weakly Pareto optimal. If all
preferences are strictly monotonic, then all weakly Pareto optimal allocations
are also Pareto optimal.
Finally note that for convex preferences and diﬀerentiable utility func
tions the condition that the weakly preferred sets be disjoint translates into
the familiar requirement that the indiﬀerence curves of the consumers must
be tangent, that is, an allocation is Pareto optimal if the consumers’ marginal
rates of substitution equal.
We are now ready to deﬁne equilibrium and to see how it relates to the
above diagram.
Deﬁnition 19 A Walrasian (competitive) equilibrium for a pure ex
change economy is a price vector p
∗
and allocation x
∗
such that
(1) (optimality) for each consumer l, (x
l
)
∗
_
l
x
l
∀x
l
÷ p
∗
x
l
= p
∗
ω
l
;
(2) (feasibility) for each good i,
¸
j
(x
l
i
)
∗
=
¸
j
ω
l
i
.
This deﬁnition is the most basic we can give, and at the same time
includes all others. For example, you may remember from introductory eco
nomics that a general equilibrium is achieved when “demand equals supply”.
But supply in this case is the sum of endowments. Demand means aggre
gate demand, that is, the sum of all consumers’ demands. But a consumer’s
demand has been deﬁned as a utility maximizing bundle at a given price —
hence the notion of demand includes the optimality condition above. We
therefore can deﬁne equilibrium in the following way:
32 LA. Busch, Microeconomics May2004
Deﬁnition 20 A Walrasian (competitive) equilibrium for a pure ex
change economy is a price vector p
∗
and allocation x
∗
such that x
∗
= x(p
∗
),
¸
j
x
l
i
(p
∗
) =
¸
j
ω
l
i
, for each good i, and for each consumer l, x
l
(p) =
argmax
x
¦u
j
(x) s.t. p x
l
= p ω
l
¦.
For an Edgeworth box we can also deﬁne equilibrium in yet a third
way — as the intersection of consumers’ oﬀer curves. An oﬀer curve is the
analogue of a demand curve, only drawn in commodity space, not commodity
price space.
Deﬁnition 21 The oﬀer curve is the locus of all optimal consumption
bundles (in X) as price changes:
¦x ∈ X[∃p ÷ ∀y with p y = p ω, x _
j
y, and p x = p ω¦ or alternatively
¦x ∈ X[∃p ÷ x = argmax¦u
j
(x) s.t. p x = p ω
l
¦ ¦.
Note how the notion of an oﬀer curve incorporates optimality, so that
the intersection of consumers’ oﬀer curves, lying on both oﬀer curves, is
optimal for both consumers. Since oﬀer curves cannot intersect outside the
Edgeworth box, feasibility is also guaranteed.
10
We thus have diﬀerent ways of ﬁnding an equilibrium allocation in a pure
exchange economy, and depending on the setting one may be more convenient
than another. For example, should both consumers have diﬀerentiable utility
functions we can normally most easily ﬁnd demands, and then set demand
equal to supply. This is simpliﬁed by Walras’ Law. Let z(p) denote the
aggregate excess demand vector: z(p) =
¸
j
(x
l
(p) −ω
l
). Then we have.
Deﬁnition 22 Walras’ Law: p z(p) = 0 ∀p ∈ '
n
+
.
Why is this so? It follows from Walras’ Law for each consumer, requiring
that each consumer l’s demands exhaust the budget, so that the total value
of excess demand for a consumer is zero, for all prices, by deﬁnition. Hence,
summing across all consumers the aggregate value of excess demand must also
be zero. In an equilibrium aggregate demand is equal to aggregate supply,
11
that is, summing across consumers the excess demand must equal zero for
each good, and hence the value must be zero. Hence, if n −1 markets clear
10
This argument assumes as given that oﬀer curves cannot intersect outside the feasible
set. Why? You should be able to argue/prove this!
11
Yet another deﬁnition of a Walrasian equilibrium is z(p) = 0!
Review 33
the value of the excess demand of the n
th
market must be zero, and this can
only be the case if either the price of the good is zero or the actual aggregate
excess demand itself. Thus it suﬃces to work with n−1 markets. The other
market will “take care of itself.”
In practice we are also free to choose one of the prices, since we have
already shown that (Walrasian) demand is homogeneous of degree zero in
prices for each consumer (remember that m
l
= pω
l
here), and thus aggregate
demand must also be homogeneous of degree zero.
So, suppose the economy consists of two CobbDouglas consumers with
parameters α and β and endowments ((ω
1
1
, ω
1
2
), (ω
2
1
, ω
2
2
)). Demands are
αp ω
1
p
1
,
(1 −α)p ω
1
p
2
, and
βp ω
2
p
1
,
(1 −β)p ω
2
p
2
.
Hence the equilibrium price ratio p
1
/p
2
is
p
∗
=
αω
1
2
+βω
2
2
(1 −α)ω
1
1
+ (1 −β)ω
2
1
.
The allocation then is obtained by substituting the price vector back into the
consumer’s demands and simplifying.
On the other hand, if we have nondiﬀerentiable preferences it is most
often more convenient to use the oﬀer curve approach. For example, a con
sumer with Leontief preferences will consume on the kink line no matter
what the price ratio. If such a consumer where paired with a consumer with
perfect substitute preferences, such as in Figure 2.3, we know the price ratio
by inspection to be equal to the perfect substitute consumer’s MRS (in Fig
ure 2.3 that is 1.) Why? Since a prefect substitute consumer will normally
consume only one of the goods, and only consumes both if the MRS equals
the price ratio. The allocation then can be derived as the intersection of
consumer 1’s kink line with the indiﬀerence curve through the endowment
point for consumer 2: in Figure 2.3 the equilibrium allocation thus would be
(x
1
, x
2
) = ((7.5, 7.5), (12.5, 7.5)).
2.3.2 A simple production economy
The second simple case of interest is a production economy with one con
sumer, one ﬁrm and 2 goods, leisure and a consumption good. This case is
also often used to demonstrate the second welfare theorem in its most simple
34 LA. Busch, Microeconomics May2004
setting. Recall that the second welfare theorem states that a Pareto eﬃcient
allocation can be “decentralized”, that is, can be achieved as a competitive
equilibrium provided that endowments are appropriately redistributed.
Let us therefore approach this problem as is often done in macro eco
nomics. Consider the social planner’s problem, which in this case is equiv
alent to the problem of the consumer who owns the production technology
directly. The consumer has preferences over the two goods, leisure and con
sumption, denoted by l and c respectively, represented by the utility function
u(c, l). It has the usual properties (that is, it has positive ﬁrst partials and
is quasiconcave.) The consumer has an endowment of leisure,
¯
L. Time not
consumed as leisure is work, x =
¯
L − l, and the consumption good c can
be produced according to a production function c = f(x). Assume that
it exhibits nonincreasing returns to scale (so f
() > 0; f
() ≤ 0.) The
consumer’s problem then is
max
c,l
¦u(c, l) s.t. c = f(
¯
L −l) ¦
or, substituting out for c
max
l
u(f(
¯
L −l), l).
The FOC for this problem is −u
c
()f
() + u
l
() = 0, or more usefully,
f
() = u
l
()/u
c
(). This tells us that the consumer will equate the marginal
product of spending more time at work with the Marginal Rate of Substi
tution between leisure time and consumption — which measures the cost of
leisure time to him.
Now consider a competitive private ownership economy in which the
consumer sells work time to a ﬁrm and buys the output of the ﬁrm. The
consumers owns all shares of the ﬁrm, so that ﬁrm proﬁts are part of consumer
income. Suppose the market price for work, x, is denoted by w (for wage),
and the price of the output c is denoted by p.
The ﬁrm’s problem is max
x
pf(x)−wx with ﬁrst order condition pf
(x
∗
) =
w. The consumer’s problem is max
c,l
¦u(c, l) s.t. π(p, w) + pc = w(
¯
L − l) ¦.
Here π(p, w) denotes ﬁrm proﬁts. The FOCs for this lead to the condition
u
l
()/u
c
() = w/p. Hence the same allocation is characterized as in the plan
ner’s problem, as promised by the second welfare theorem. (Review Problem
12 provides a numerical example of the above.)
Note that in this economy there is a unique Pareto optimal point if
f
() < 0. Since a competitive equilibrium must be PO, it suﬃces to ﬁnd
the PO allocation and you have also found the equilibrium! This “trick” is
often exploited in macro growth models, for example.
Review 35
2.4 Review Problems
It is useful exercise to derive both kinds of demand for the various functions
you might commonly encounter. The usual functions are
CobbDouglas: u(x
1
, x
2
) = x
α
1
x
(1−α)
2
, α ∈ (0, 1);
QuasiLinear: u(x
1
, x
2
) = f(x
1
) +x
2
, f
> 0, f
< 0;
Perfect Substitute: u(x
1
, x
2
) = ax
1
+x
2
, a > 0;
Leontief: u(x
1
, x
2
) = min¦ax
1
, x
2
¦, a > 0;
“Kinked”: u(x
1
, x
2
) = min¦ax
1
+x
2
, x
1
+bx
2
¦, a, b > 0.
The ﬁrst thing to do is to determine how the indiﬀerence curves for each of
these look like. I strongly recommend that you do this. Familiarity with the
standard utility functions is assumed in class and in exams.
Budgets are normally easy and are often straight lines. However, they
can have kinks. For example, consider the budget for the following problem:
A consumer has an endowment bundle of (10, 10) and if he wants to increase
his x
2
consumption he can trade 2 x
1
for 1 x
2
, while to increase x
1
consump
tion he can trade 2 x
2
for 1 x
1
. Such budgets will arise especially in inter
temporal problems, where consumers’ borrowing and lending rates may not
be the same.
You can now combine any kind of budget with any of the above utility
functions and solve for the income expansion paths (diagrammatically), and
the demands, both ordinary and Hicksian.
You may also want to refresh your knowledge on (price)oﬀer curves.
This is the locus of demands in (x
1
, x
2
) space which is traced out as one
price, say p
1
, is varied while p
2
and m stay constant.
Question 1: Let X = ¦x, y, z, w¦ and let (B, C()) be a choice structure
with B = ¦¦x, y, z¦, ¦x, z, w¦, ¦w, z, y¦, ¦x¦, ¦y, w¦, ¦z, x¦, ¦x, w¦¦.
a) Provide a C() which satisﬁes the weak axiom.
b) Does there exist a preference relation on X which rationalizes your
C() on B?
c) Is that preference relation rational?
d) Given this B and some choice rule which satisﬁes the weak axiom, are
we guaranteed that the choices could be derived from a rational preference
relation?
e) Since you probably ‘cheated’ in part a) by thinking of rational pref
erences ﬁrst and then getting the choice rule from that: write down a choice
rule C() for the above B which does satisfy the weak axiom but can NOT
be rationalized by a preference relation.
36 LA. Busch, Microeconomics May2004
Question 2: Consider a consumer endowed with 16 hours of leisure time and
$100 of initial wealth. There are only two goods, leisure and wealth/aggregate
consumption, so a consumption bundle can be denoted by (l, c). Extend the
deﬁnition of the budget set to be the set of all feasible (l, c) vectors, i.e.,
B = ¦(l, c) [ 0 ≤ l ≤ 16, c feasible¦, where feasibility of consumption c
will be determined by total income, consisting of endowed income and work
income. In particular note that the boundary of the budget set,
¯
B, is the set
of maximal feasible consumption for any given leisure level. Now, in view of
this, what is the budget set and its boundary in the following cases (you may
want to provide a well labelled diagram, and/or a deﬁnition of
¯
B instead of
a set deﬁnition for B.) Is the budget set convex?
a) There are two jobs available, each of which can be worked at for no
more than 8 hours. An additional restriction is that you have to work at one
job full time (8 hours) before you can start the other job. Job 1 pays $12 per
hour for the ﬁrst 6 hours and $16 per hour for the next 2 hours. Job 2 pays
$8 per hour for the ﬁrst two hours and $14 per hour for the next 6 hours.
b) The same situation as in [a] above, but job 2 now pays $14 2/3 per
hour for the next 6 hours (after the ﬁrst 2).
c) Suppose we drop the restriction that you have to work one job full
time before you can work the other. (So you could work one job for 5 hours
and the other for 3, for example, which was not possible previously.) Why
does it (does it not) matter?
d) What if instead there are four jobs, with job (i) paying $12 per hour
for up to 6 hours, job (ii) paying $16 per hour for up to 2 hours, (iii) paying
$8 per hour for up to 2 hours and (iv) paying $14 per hour for up to 6 hours?
You may work any combination of jobs, each up to its maximum number of
hours.
Question 3: A consumer’s preferences over 2 goods can be represented
by the utility function u(x) = x
0.3
1
x
0.6
2
. Solve for the consumer’s demand
function. If the consumer’s endowment is ω = (412, 72) and the price vector
is p = (3, 1), what is the quantity demanded?
Question 4: A consumer’s preferences are represented by the utility function
U(x
1
, x
2
) = min¦x
1
+ 5x
2
, 2x
1
+ 2x
2
, 5x
1
+x
2
¦.
Solve both the utility maximization and the expenditure minimization prob
lem for this consumer. (Note the the Walrasian and Hicksian demands are
not functions in this case.) Also draw the price oﬀer curve for p
1
variable,
m and p
2
ﬁxed. [Note: Solving the problems does not require diﬀerential
calculus techniques.]
Review 37
Question 5: The elasticity of substitution is deﬁned as
1,2
= −
∂ [x
1
(p, w)/x
2
(p, w)]
∂ [p
1
/p
2
]
p
1
/p
2
x
1
(p, w)/x
2
(p, w)
.
What does this elasticity measure? Why might it be of interest?
Question 6: The utility function u(x
1
, x
2
) = (x
ρ
1
+x
ρ
2
)
(1/ρ)
, where ρ < 1 and
ρ = 0, is called the Constant Elasticity of Substitution utility function. The
shape of its indiﬀerence curves depends on the parameter ρ. Demonstrate
that the function looks like a Leontief function min¦x
1
, x
2
¦ as ρ →−∞; like
a CobbDouglas function x
1
x
2
as ρ → 0; like a perfect substitute function
x
1
+x
2
as ρ →1.
Question 7: A consumer’s preferences over 3 goods can be represented by
the utility function u(x) = x
1
+ lnx
2
+ 2lnx
3
. Derive Marshallian (ordinary)
demand.
Question 8: Consider a 2 good, 2 consumer economy. Consumer A has
preferences represented by u
A
(x) = 4x
1
+ 3x
2
. Consumer B has preferences
represented by u
B
(x) = 3lnx
1
+ 4lnx
2
. Suppose consumer A’s endowment is
ω
A
= (12, 9) while consumer B’s endowment is ω
B
= (8, 11). What is the
competitive equilibrium of this economy?
Question 9: Consider a 2 good, 2 consumer economy. Consumer A has
preferences represented by u
A
(x) = min¦4x
1
+3x
2
, 3x
1
+4x
2
¦. Consumer B
has preferences represented by u
B
(x) = 3x
1
+ 4lnx
2
. Suppose consumer A’s
endowment is ω
A
= (12, 9) while consumer B’s endowment is ω
B
= (8, 11).
What is the competitive equilibrium of this economy?
Question 10
∗
: Suppose (B, C()) satisﬁes the weak axiom. Consider the
following two strict preference relations: ~ deﬁned by x~y ⇐⇒ ∃B ∈ B ÷
x, y ∈ B, x ∈ C(B), y / ∈ C(B); and ~
∗
deﬁned by x~
∗
y ⇐⇒ ∃B ∈ B ÷
x, y ∈ B, x ∈ C(B) and ∃B
∈ B ÷ x, y ∈ B
, y ∈ C(B
). Prove that these
two strict preference relations give the same relation on X. Is this still true
if the weak axiom is not satisﬁed by C()? (Counterexample suﬃces.)
Question 11
∗
: Consider a 2 good, 2 consumer economy. Consumer A has
preferences represented by u
A
(x) = αx
1
+ x
2
. Consumer B has preferences
represented by u
B
(x) = βlnx
1
+ lnx
2
.
a) Suppose consumer A’s endowment is ω
A
= (10, 10) while consumer
B’s endowment is ω
B
= (10, 10). What is the competitive equilibrium of this
economy?
b) Suppose α = 2, β = 1 and ﬁx the total size of the economy at 20
38 LA. Busch, Microeconomics May2004
units of each good as above. As you have seen above, there are two kinds
of equilibria, interior, and boundary. For which endowments will interior
equilibria obtain, for which ones boundary equilibria?
Question 12
∗
: Consider a privateownership production economy with one
consumption good, one ﬁrm and one consumer. Suppose technology is given
by c = f(x) = 4
√
x. Let consumer preferences be represented by u(c, l) =
lnc +
1
2
lnl and let the consumer have a leisure endowment of
¯
L = 16. a)
Solve the social planner’s problem. b) Solve for the competitive equi
librium of the private ownership economy.
Chapter 3
Intertemporal Economics
In this chapter we will relabel the basic model in order to focus the analysis
on the question of intertemporal choice. This will reveal information about
borrowing and savings behaviour and the relationship between interest rates
and the timepreferences of consumers. In particular, we will see that inter
est rates are just another way to express a price ratio between time dated
commodities.
3.1 The consumer’s problem
In what follows, we will consider the simple case of just two commodities,
consumption today — denoted c
1
— and consumption tomorrow — c
2
. These
take the place of the two commodities, say apples and oranges, in the stan
dard model of the previous chapter. Since it is somewhat meaningless to
speak of income in a two period model, indeed, part of our job here is to
ﬁnd out how to deal with and evaluate income streams, we will employ an
endowment model. The consumer is assumed to have an endowment of the
consumption good in each period, which we will denote by m
1
and m
2
, re
spectively. The consumer is assumed to have preferences over consumption
in both periods, represented by the utility function u(c
1
, c
2
).
39
40 LA. Busch, Microeconomics May2004
3.1.1 Deriving the budget set
Our ﬁrst job will be to determine how the consumer’s budget may be ex
pressed in this setting. That, of course, will depend on the technology for
storing the good and on what markets exist. Note ﬁrst of all that it is never
possible to consume something before you have it: the consumer will not be
able to consume in period 1 any of the endowment in period 2. In contrast,
it may be possible to store the good, so that quantities of the period 1 en
dowment not consumed in period 1 are available for consumption in period
2. Of course, such storage may be subject to losses (depreciation). Ideally,
the consumer has some investment technology (such as planting seeds) which
allows the consumer to “invest” (denoting that the good is used up, but not
in utility generating consumption) period 1 units in order to create period
2 units. Finally, the consumer may be able to access markets, which allow
lending and borrowing. We consider these in turn.
No Storage, No Investment, No Markets
If there is no storage and no investment, then anything not consumed today
will be lost forever and consumption cannot be postponed to tomorrow. You
may want to think of this as 100% depreciation of any stored quantity. No
markets means that there is also no way to trade with somebody else in order
to either move consumption forward or backward in time. The consumer
ﬁnds himself therefore in the economically trivial case where he is forced to
consume precisely the endowment bundle. The budget set then is just that
one point: B = ¦c ∈ '
2
+
[c
i
= m
i
, i = 1, 2¦.
Storage, No Investment, No Markets
This is a slightly more interesting case where consumption can be postponed.
Since there are no markets, no borrowing against future endowments is pos
sible. Storage is usually not perfect. Let δ ∈ (0, 1) denote the rate of depre
ciation — for example, our consumption good may spoil, so that the outside
layers of the meat will not be edible in the next period, or pests may eat
some of the stuﬀ while it is in storage (a serious problem in many countries.)
A typical budget set then is
B = ¦(c
1
, c
2
) [ c
2
≤ m
2
+ (1 −δ)(m
1
−c
1
), 0 ≤ c
1
≤ m
1
¦.
Intertemporal 41
The quantity (m
1
−c
1
) in this expression is the amount of period 1 endow
ment not consumed in period 1, usually called savings, and denoted S. We
can therefore express the consumer’s utility maximization problem in three
equivalent ways:
max
c
1
,c
2
∈B
u(c)
max
c
1
≤m
1
u(c
1
, m
2
+ (1 −δ)(m
1
−c
1
))
max
S≤m
1
u(m
1
−S, m
2
+ (1 −δ)S)
In the second case only the level of consumption in period 1, c
1
, is a choice
variable. It implies the consumption in period 2. The third line simply rela
bels that same maximization and has savings in period 1 — whatever is not
consumed — as the choice variable. All of these will give the same answer, of
course! The left diagram in Figure 3.1 gives the diagrammatic representation
of this optimization problem (with two diﬀerent (!) preferences indicated by
representative indiﬀerence curves.)
One thing to be careful about is the requirement that 0 ≤ c
1
≤ m
1
.
We could employ KuhnTucker conditions to deal with this constraint, but
usually it suﬃces to check after the fact. For example, a consumer with CD
preferences u(c
1
, c
2
) = c
1
c
2
faced with a price ratio of unity would like to
consume where c
1
= c
2
(You ought to verify this!) However, if the endow
ment happens to be (m
1
, m
2
) = (5, 25) then this is clearly impossible. We
therefore conclude that there is a corner solution and consumption occurs at
the endowment point m.
Storage, Investment, No Markets
How is the above case changed if investment is possible? Here I am thinking
of physical investment, such as planting, not “market investment”, as in
lending. Suppose then that the consumer has the same storage possibility as
previously, but also has access to an investment technology. We will model
this technology just as if it were a ﬁrm, and specify a function that gives
the returns for any investment level: y = f(x). This function must either
have constant returns to scale or decreasing returns to scale for things to
work easily (and for the second order conditions of the utility maximization
problem to hold.) Suppose the technology exhibits CRS. In that case the
marginal return of investment, f
(x), is constant. It either is larger or smaller
than the return of storage, which is 1 − δ. Since the consumer will want to
maximize the budget set, he will choose whichever has the higher return, so
if f
(x) > (1 −δ) the consumer will invest, and he will store otherwise. The
42 LA. Busch, Microeconomics May2004
budget then is
B =
¦(c
1
, c
2
) [ c
2
≤ m
2
+f(m
1
−c
1
), 0≤c
1
≤m
1
¦ if (1 −δ) < f
()
¦(c
1
, c
2
) [ c
2
≤ m
2
+ (1−δ)(m
1
−c
1
), 0≤c
1
≤m
1
¦ otherwise
What if the investment technology exhibits decreasing returns to scale?
Suppose the most interesting case, where f
(x) > (1 − δ) for low x, but the
opposite is true for high x. Suppose further that not only the marginal return
falls, but that the case is one where f(m
1
) < (1 − δ)m
1
, so that the total
return will be below that of storage if all ﬁrst period endowment is invested.
What will be the budget set? Clearly (?) any initial amounts not consumed
in period 1 should be invested, not stored. But when should the consumer
stop investing? The maximal investment x is deﬁned by f
(x) = (1−δ). Any
additional unit of foregone period 1 consumption should be stored, since the
marginal return of storage now exceeds that of further investment. The
budget then is
B =
¦(c
1
, c
2
) [ c
2
≤ m
2
+f(m
1
−c
1
), m
1
−x ≤ c
1
≤ m
1
¦
¦(c
1
, c
2
) [ c
2
≤ m
2
+f(x) + (1 −δ)(m
1
−x −c
1
), 0 ≤ c
1
≤ m
1
−x¦
where x is deﬁned by (1 −δ) = f
(x)
Storage, No Investment, Full Markets
We now allow the consumer to store consumption between periods 1 and
2, with some depreciation, and to trade consumption on markets. Instead
of the usual prices, which are an expression of the exchange ratio of, say
good 2 for good 1, we normally express things in terms of interest rates
when we deal with time. Of course, one can always convert between the two
without much trouble if some care is taken. The key idea is that a loan of P
dollars today will pay back the principal P plus some interest income. At an
interest rate r, this is an additional rP dollars. Thus, 1 dollar today “buys”
(1 + r) dollars tomorrow. Put diﬀerently, the price of 1 of today’s dollars
is (1 + r) of tomorrow’s. Similarly, for a payment of F dollars tomorrow,
how many dollars would somebody be willing to pay? F/(1 + r), of course,
since F/(1 +r) +rF/(1 +r) = F. We therefore have an equivalence between
P dollars today and F dollars tomorrow, provided that P(1 + r) = F, or
P = F/(1 + r). By convention the value P is called the present value of
F, while F is the future value of P.
Given these conventions, let us now derive the budget set of a consumer
who may borrow and lend, but has no (physical) investment technology.
Intertemporal 43
Storage is not an option in order to move consumption from period 2 to period
1. The consumer may, however, forgo some future consumption for current
consumption by purchasing the appropriate borrowing contract. Based on
what we have said above, if he is willing to pay, in period 2, an amount of
(m
2
−c
2
) > 0, then in period 1 he can receive at most (m
2
−c
2
)/(1+r) units.
Thus one constraint in the budget is m
1
≤ c
1
≤ m
1
+(m
2
−c
2
)/(1+r). On the
other hand, consumption can be postponed in two ways: storage and lending.
If the consumer stores the good his constraint on period 2 consumption is
as derived previously, c
2
≤ m
2
+ δ(m
1
− c
1
). If he uses the market instead,
his constraint becomes c
2
≤ m
2
+ (m
1
− c
1
)(1 + r). As long as δ and r
are both strictly positive the second of these is a strictly larger set than
the ﬁrst. Since consumption is a good (more is better) the consumer will
not use storage, and the eﬀective constraint on future consumption will be
c
2
≤ m
2
+(m
1
−c
1
)(1 +r). This is indicated in the right diagram in Figure
3.1, where there are two “budget lines” for increasing future consumption.
The lower of the two is the one corresponding to storage, the higher the one
corresponding to a positive interest rate on loans.
Manipulation of the two constraints shows that they are really identical.
Indeed, the consumer’s budget can be expressed as any of
B = ¦(c
1
, c
2
) [ c
2
≤ m
2
+ (1 +r)(m
1
−c
1
), c
1
, c
2
≥ 0¦,
= ¦(c
1
, c
2
) [ c
1
≤ m
1
+
(m
2
−c
2
)
1 +r
, c
1
, c
2
≥ 0¦,
= ¦(c
1
, c
2
) [ (m
2
−c
2
) + (1 +r)(m
1
−c
1
) ≥ 0, c
1
, c
2
≥ 0¦,
= ¦(c
1
, c
2
) [ (m
1
−c
1
) +
(m
2
−c
2
)
1 +r
≥ 0, c
1
, c
2
≥ 0¦.
The ﬁrst and third of these are expressed in future values, the second
and fourth in present values. It does not matter which you choose as long
as you adopt one perspective and stick to it! (If you recall, we are free to
pick a numeraire commodity. Here this means we can choose a period and
denominate everything in future or presentvalue equivalents for this period.
This, by the way, is the one key “trick” in doing intertemporal economics:
pick one time period as your “base”, convert everything into this time period,
and then stick to it! Finally note that we are interested in the budget line,
generally. Replacing inequalities with equalities in the above you note that
all are just diﬀerent ways of writing the equation of a straight line through
the endowment point (m
1
, m
2
) with a slope of (1 +r).
44 LA. Busch, Microeconomics May2004
0
5
10
15
20
0 5 10 15 20
good 2
good 1
+
+
0
5
10
15
20
0 5 10 15 20
good 2
good 1
+
+
+
+
Figure 3.1: Storage without and with markets, no (physical) investment
Storage, Investment, Full Markets
What if everything is possible? As above, storage will normally not ﬁgure
into the problem, so we will ignore it for a moment. How do (physical)
investment and market investment interact in determining the budget? As
in the case of storage and investment, the ﬁrst thing to realize is that the
(physical) investment technology will be used to the point where its marginal
(gross rate of) return f
() is equal to that of putting the marginal dollar
into the ﬁnancial market. That is, the optimal investment is determined by
f
(x) = (1 + r). The second key point is that with perfect capital markets
it is possible to borrow money to invest. The budget thus is bounded by a
straight line through the point (m
1
−x, m
2
+f(x)) with slope (1 +r).
3.1.2 Utility maximization
After the budget has been derived the consumer’s problem is solved in the
usual fashion, and all previous equations which characterize equilibrium ap
ply. Suppose the case of markets in which an interest rate of r is charged.
Then it is necessary that
u
1
(c)
u
2
(c)
=
(1 +r)
1
.
What is the interpretation of this equation? Well, on the right hand side we
have the slope of the budget line, which would usually be the price ratio p
1
/p
2
.
That is, the RHS gives the cost of future consumption in terms of current
consumption (recall that in terms of units of goods we have (1/p
2
)/(1/p
1
).)
On the left hand side we have the ratio of marginal utilities, that is, the
MRS. It tells us the consumer’s willingness to trade oﬀ current consumption
for future consumption. This will naturally depend on the consumer’s time
preferences.
Intertemporal 45
Let us take a concrete example. In macroeconomics we often use a so
called timeseparable utility function like u(c
1
, c
2
) = lnc
1
+βlnc
2
; β < 1.
This speciﬁcation says that consumption is ranked equally within each period
(the same subutility function applies within each period) but that future
utility is not as valuable as today’s and hence is discounted. One of the
key features of such a separable speciﬁcation is that the marginal utilities of
today’s consumption and tomorrow’s are independent. That is, if I consider
∂u(c
1
, c
2
)/∂c
i
, I ﬁnd that it is only a function of c
i
and not of c
j
. This
feature makes life much easier if the goal is to solve some particular model or
make some predictions. Note that the discount factor β is related to the
consumer’s discount rate ρ: β = 1/(1 + ρ). For this speciﬁc function, the
equation characterizing the consumer’s optimum then becomes
c
2
βc
1
= (1 +r) ⇒
c
2
c
1
= β(1 +r) ⇒
c
2
c
1
=
1 +r
1 +ρ
.
Some interesting observations follow from this. First of all, if the private
discount rate is identical to the market interest rate, ρ = r, the consumer
would prefer to engage in perfect consumption smoothing. The ideal
consumption path has the consumer consume the same quantity each period.
If the consumer is more impatient than the market, so that ρ > r, then
the consumer will favour current consumption, while a patient consumer for
whom ρ < r will postpone consumption. This is actually true in general with
additively separable functions, since u
(c
1
) = u
(c
2
) iﬀ c
1
= c
2
.
In order to achieve the preferred consumption path the consumer will
have to engage in the market. He will either be a lender (c
1
< m
1
) or a
borrower (c
1
> m
1
.) This, of course, depends on the desired consumption
mix compared to the consumption mix in the consumer’s endowment.
Finally, we can do the usual comparative statics. How will a change
in relative prices aﬀect the consumer’s wellbeing and consumption choices?
The usual facts from revealed preference theory apply: If the interest rate in
creases (i.e., current consumption becomes more expensive relative to future
consumption) then
• A lender will remain a lender.
• A borrower will borrow less (assuming normal goods) and may switch
to being a lender.
• A lender will become better oﬀ.
• A borrower who remains a borrower will become worse oﬀ.
46 LA. Busch, Microeconomics May2004
• A borrower who switches to become a lender may be worse or better
oﬀ.
These can be easily veriﬁed by drawing the appropriate diagram and observ
ing the restrictions implied by the weak axiom of revealed preference.
We can also see some of these implications by considering the Slutsky
equation for this case. In this case the demand function for period 1 con
sumption is c
1
(p
1
, p
2
, M), where M = p
1
m
1
+p
2
m
2
. It follows from the chain
rule that
dc
1
()
dp
1
=
∂c
1
()
∂p
1
+
∂c
1
()
∂M
∂M
∂p
1
,
but ∂M/∂p
1
= m
1
. The Slutsky equation tells us that
∂c
1
()
∂p
1
=
∂h
1
()
∂p
1
−c
1
()
∂c
1
()
∂M
and hence we obtain
dc
1
()
dp
1
=
∂h
1
()
∂p
1
−(c
1
() −m
1
)
∂c
1
()
∂M
.
This equation is easily remembered since it is really just the Slutsky equation
as usual, where the weighting of the income eﬀect is by market purchases
only. In the standard model without endowment, all good 1 consumption
is purchased, and hence subject to the income eﬀect. With an endowment,
only the amount traded on markets is subject to the income eﬀect.
Now to the analysis of this equation. We know that the substitution
eﬀect is negative. We also know that for a normal good the income eﬀect
is positive. The sign of the whole expression therefore depends on the term
in brackets, in other words on the lender/borrower position of the consumer.
A borrower has a positive bracketed term. Thus the whole expression is
certainly negative and a borrower will consume less if the price of current
consumption goes up. A lender will have a negative bracketed term, which
cancels the negative sign in the expression, and we know that the total eﬀect
is less negative than the substitution eﬀect. In fact, the (positive) income
eﬀect could be larger than the (negative) substitution eﬀect and current
consumption could go up!
Intertemporal 47
3.2 Real Interest Rates
So far we have used the usual economic notion of prices as exchange rates
of goods. In reality, prices are not denoted in terms of some numeraire
commodity, but in terms of money (which is not a good.) This may lead to
the phenomenon that there is a price ratio of goods to money, and that this
may change over time, an eﬀect called inﬂation if money prices of goods go
up (and deﬂation otherwise.) We can modify our model for this case by ﬁxing
the price of the numeraire at only one point in time. We then can account for
inﬂation/deﬂation. Doing so will require a diﬀerentiation between nominal
and real interest rates, however, because you get paid back the principal and
interest on an investment in units of money which has a diﬀerent value (in
terms of real goods) compared to the one you started out with. So, let p
1
= 1
be the (money) price of the consumption good in period 1 and let p
2
be the
(money) price of the consumption good in period 2. Let the nominal interest
rate be r, which means that you will get interest of r units of money per
unit. The budget constraint for the two period problem then becomes:
B :: p
2
c
2
= p
2
m
2
+ (1 +r)(m
1
−c
1
).
In other words, the total monetary value of second period consumption can at
most be the monetary value of second period endowment plus the monetary
value of foregone ﬁrst period consumption.
We can now ask two questions: what is the rate of inﬂation and what
is the real interest rate.
The rate of inﬂation is the rate π such that (1 + π)p
1
= p
2
, i.e., π =
(p
2
−p
1
)/p
1
. The real interest rate can now be derived by rewriting the above
budget to have the usual look:
c
2
= m
2
+
1 +r
p
2
(m
1
−c
1
) = m
2
+
1 +r
1 +π
(m
1
−c
1
) = m
2
+ (1 + ˆ r)(m
1
−c
1
).
Thus, the real interest rate is ˆ r = (r − π)/(1 + π), and as a rule of thumb
(for small π) it is common to simply use ˆ r ∼ (r −π). Note however that this
exaggerates the real interest rate.
3.3 Riskfree Assets
Another application of this methodology is for a ﬁrst look at assets. It is
easiest if we ﬁrst look at ﬁnancial assets only. A ﬁnancial asset is really just
48 LA. Busch, Microeconomics May2004
a promise to pay (sometimes called an IOU, from the phrase “I owe you”.)
If we assume that the promise is known to be kept, that the amount it will
pay back is known, and that the value of what it pays back when it does is
known, then we have a riskfree asset. While there are few such things,
a government bond comes fairly close. Note that assets do not have to be
ﬁnancial instruments: they can also be real, such as a dishwasher, car, or
plant (of either kind, actually.) By calling those things assets we focus on
the fact that they have a future value.
3.3.1 Bonds
A bond is a ﬁnancial instrument issued by governments or large corporations.
Bonds are, as far as their issuer is concerned, a loan. A bond has a so called
face value, which is what it promises to pay the owner of the bond at the
maturity date T of the the bond. A normal bond also has a a sequence
of coupons, which used to be literally coupons that were attached to the
bond and could be cut oﬀ and exchanged for money at indicated dates. If
we denote by F the face value of the bond, and by C the value of each
coupon, we can now compute the coupon rate of the bond. For simplicity,
assume a yearly coupon for now (we will see more ﬁnancial math later which
allows conversion of compounding rates into simple rates). In that case the
coupon rate is simply
C
F
100%. A strip bond is a bond which had all
its coupons removed (“stripped”). The coupons themselves then generate a
simple annuity — a ﬁxed yearly payment for a speciﬁed number of years
— which could be sold separately. Another special kind of bond is a consol,
which is a bond which pays its coupon rate forever, but never pays back any
face value.
We can now ask what the price of such a bond should be today. Since the
bond bestows on its owner the right to receive speciﬁed payments on speciﬁed
future dates, its value is the current value of all these future payments. Future
payments are, of course, converted into their present values by using the
appropriate interest rate:
PV =
1
(1 +r)
C +
1
(1 +r)
2
C +. . . +
1
(1 +r)
T
C +
1
(1 +r)
T
F.
Denote the coupon rate by c. Then we can simplify the above equation to
yield
PV = F
1
(1 +r)
T
+c
T
¸
i=1
1
(1 +r)
i
.
Intertemporal 49
From this equation follows one of the more important facts concerning the
price of bonds and the interest rate. Recall that once a bond is issued T,
F, and C are ﬁxed. That means that the present price of the bond needs
to adjust as the interest rate r varies. Since r appears in the denominator,
as the interest rate rises the present value of a bond falls. Bond prices and
interest rates are inversely related. Is the present price of a bond higher
or lower than the maturity value? In the latter case the bond is trading at a
discount, in the former at a premium. This will depend on the relationship
between the interest rate and the coupon rate. Intuitively, if the coupon rate
exceeds the interest rate the bond is more valuable, and thus its price will
be higher.
3.3.2 More on Rates of Return
You may happen to own an asset which, as mentioned previously, is the right
to a certain stream of income or beneﬁts (services). This asset happens to
also have a current market price (we will see later where that might come
from and what conditions it will have to satisfy.) The question you now have
is, what is this asset’s rate of return?
Let us start with the simple most case of no income/services, but an asset
that only has a current and a future price, p
0
and p
1
, respectively. Your rate of
return then is the money you gain (or loose) as a percentage of the cost of the
asset, in other words the per dollar rate of return is
p
1
−p
0
p
0
. We can now
ask what conditions such a rate of return might have to satisfy in equilibrium.
Assume, therefore, a case where there is complete certainty over all assets’
returns, i.e., future prices. All consumers would want to buy that asset which
has the highest rate of return, since that would move their individual budget
line out the most. In equilibrium, if a consumer is willing to hold more than
one asset, then it must be true that both assets have the same rate of return.
Call this rate r. Then we know that r =
p
1
−p
0
p
0
=
p
1
p
0
−1, or (1 + r) =
p
1
p
0
.
This condition must hold for all assets which are held, otherwise there would
be arbitrage opportunities. It is therefore also known as a zero arbitrage
condition. Recall that arbitrage refers to the activity of selling high while
buying low in order to make a proﬁt. For example, assume that there were
an asset for which p
1
/p
0
> 1 +r. Then what I should do is to borrow money
at the interest rate r, say I dollars, and use those funds to buy the good in
question, i.e., purchase I/p
0
units. Then wait and sell the goods in the next
period. That will yield Ip
1
/p
0
dollars. I also have to pay back my loan, at
50 LA. Busch, Microeconomics May2004
(1 +r)I dollars, and thus I have a proﬁt of p
1
/p
0
−(1 +r) per dollar of loan.
Note that the optimal loan size would be inﬁnite. However, the resulting large
demand would certainly drive up current prices (while also lowering future
prices, since everybody expects a ﬂood of the stuﬀ tomorrow), and this serves
to reduce the proﬁtability of the exercise. In a zeroarbitrage equilibrium we
therefore must have (1 + r) = p
1
/p
0
, or, more tellingly, p
0
=
p
1
1 +r
. The
correct market price for an asset is its discounted future value!
This discussion has an application to the debate about pricing during
supply or demand shocks. For example, gasoline prices during the Gulf war,
or the alleged pricegouging in the icestorm areas of Quebec and Ontario:
What should the price of an item be which is already in stock? Many people
argue that it is unfair to charge a higher price for instock items. Only the
replacement items, procured at higher cost, should be sold at the higher cost.
While this may be “ethical” according to some, it is easily demonstrated to
violate the above rule: The price of the good tomorrow will be determined by
demand and supply tomorrow, and apparently all are agreed that that price
might well be higher due to a large shift out in the demand and/or reduction
in supply. Currently I own that good, and have therefore the choice of selling
it tomorrow or selling it today. I would want to obtain the appropriate rate
of return on the asset, which has to be equal between the two options. Thus
I am only willing to part with it now if I am oﬀered a higher price which
foreshadows tomorrows higher price. Should I be forced not to do so I am
forced to give money away against my will and better judgment. This would
normally be considered unethical by most (just try and force them to give
you money.)
Of course, assets are not usually all the same, and we will see this later
when we introduce uncertainty. For example, a house worth $100,000 and
$100,000 cash are not equivalent, since the cash is immediately usable, while
the house may take a while to sell — it is less “liquid.” The same is true
for thinly traded stocks. Such assets may carry a liquidity premium —
an illiquidity punishment, really — and will have a higher rate of return in
order to compensate for the potential costs and problems in unloading them.
This can, of course, be treated in terms of risk, since the realization of the
house’s value is a random variable, at least in time, if not in the amount. Of
course, there are other kinds of risk as well, and in general the future price of
the asset is not known. (Note that bonds are an exception to some degree.
If you choose to hold the bond all the way to the maturity date you do know
the precise stream of payments. If you sell early, you face the uncertain sale
price which depends on the interest rate at that point in time.)
Intertemporal 51
Assets may also yield consumption returns while you hold them: a car or
house are examples, as are dividend payments of stocks or interest payments
of bonds. For one period this is still simple to deal with: The asset will
generate beneﬁt (say rent saved, or train tickets saved) of b and we thus
compute the rate of return as
p
1
−p
0
+b
p
0
. If the consumer holds multiple
assets in equilibrium, then we again require that this be equal to the rate
of return on other assets. Complicating things in the real world is the fact
that assets often diﬀer in their tax treatment. For example, if the house is a
principal residence any capital gains (the tax man’s term for p
1
−p
0
, and to
add insult to injury they ignore inﬂation) are tax free. For another asset, say
a painting, this is not true. Equilibrium requires, of course, that the rates
of return as perceived by the consumer are equalized, and thus we may have
to use an after tax rate for one asset and set it equal to an untaxed rate for
another.
3.3.3 Resource Depletion
The simple discounting rules above can also be applied to gain some ﬁrst
insights into resource economics. We can analyse the question of simple
resource depletion: at what rate should we use up a nonrenewable resource.
We can also analyse when a tree (or forest) should to be cut down.
Assume a nonrenewable resource currently available at quantity S. For
simplicity, ﬁrst assume a ﬁxed annual demand D. It follows that there are
S/D years left, after which we assume that an alternative has to be used
which costs C. Thus the price in the last year should be p
S/D
= C. Arbi
trage implies that p
t+1
= (1 + r)p
t
, so that p
0
= C/(1 + r)
S/D
. Note that
additional discoveries of supplies lower the price since they increase the time
to depletion, as do reductions in demand. Lowering the price of the alter
native also lowers the current price. Finally, increases in the discount rate
lower the price.
This approach has a major ﬂaw, however. It assumes demand and supply
to be independent of price. So instead, let us assume some current price p
0
as a starting value and let us focus on supply. When will the owner of the
resource be willing to sell? If the market rate of return on other assets is r
then the resource, which is just another asset, will also have to generate that
rate of return. Therefore p
1
= (1 + r)p
0
, and in general we’d have to expect
p
t
= (1 + r)
t
p
0
. Note that the price of the resource is therefore increasing
with time, which, in general equilibrium, means two things: demand will
52 LA. Busch, Microeconomics May2004
fall as customers switch to alternatives, and substitutes will become more
competitive. Furthermore we might expect more substitutes to be developed.
We will ultimately run out of the resource, but it is nearly always wrong to
simply use a linear projection of current use patterns. This fact has been
established over and over with various natural resources such as oil, tin,
copper, titanium, etc.
What about renewable resources? Consider ﬁrst the ‘European’ model
of privately owned land for timber production as an example. Here we have
a company who owns an asset — a forest — which it intends to manage
in order to maximize the present value of current and future proﬁts. When
should it harvest the trees? Each year there is the decision to harvest the
tree or not. If it is cut it generates revenue right away. If it continues to grow
it will not generate this revenue but instead generate more revenue tomorrow
(since it is growing and there will be more timber tomorrow.) It follows that
the two rates of return should be equalized, that is, the tree should be cut
once its growth rate divided by its current size has slowed to the market
interest rate. This fact has a few implications for forestry: Faster growing
trees are a better investment, and thus we see mostly fast growing species
replanted, instead of, say, oaks, which grow only slowly. (This discussion is
ceteris paribus — ignoring general equilibrium eﬀects.) Furthermore, what
if you don’t own the trees? What if you are the James Bond of forestry, with
a (timelimited) license to kill? In that case you will simply cut the trees
down either immediately or before the end of your license, depending on the
growth rate. Of course, in Canada most licenses are for mature forests, which
nearly by deﬁnition have slow or no growth — thus the thing to do is to clear
cut and get out of there. The Europeans, critical of clearcutting, forget that
they have long ago cut nearly all of their mature forests and are now in a
harvesting model with mostly high growth forests.
As a ﬁnal note, notice that lack of ownership will also impact the re
planting decision. As we will see later in the course, if we treat the logger as
an agent of the state, the state has serious incentive problems to overcome
within this principal agent framework.
3.3.4 A Short Digression into Financial Economics
I thought it might be useful to provide you with a short refresher or intro
duction to multiperiod present value and compound interest computations.
For starters, assume you put $1 in the bank at 5% interest, computed yearly,
and that all interest income is also reinvested at this 5% rate. How much
Intertemporal 53
money will you have in each of the following years? The answer is
1.05, 1.05
2
, 1.05
3
, . . . 1.05
t
.
The important fact about this is that a simple interest rate and a compounded
interest rate are not the same, since with compounding there is interest on
interest. For example, if you get a loan at 12%, it matters how often this is
compounded. Let us assume it is just simple interest; You then owe $1.12
for every dollar you borrowed at the end of one year. What you will quickly
ﬁnd out is that banks don’t normally do that. They at least compound semi
annually, and normally monthly. Monthly compounding would mean that
you will owe (1 +
.12
12
)
12
= 1.1268. On a million dollar loan this would be
a diﬀerence of $6825.03. In other words, you are really paying not a 12%
interest rate but a 12.6825% simple interest rate. It is therefore very impor
tant to be sure to know what interest rate applies and how compounding is
applied (semiannual, monthly, etc.?)
Here is a handy little device used in many circles: the rule of 72,
sometimes also referred to as the rule of 69. It is used to ﬁnd out how long
it will take to double your money at any given interest rate. The idea is that
it will approximately take 72/r periods to double your money at an interest
rate of r percent. The proof is simple: we want to solve for the t for which
1 +
r%
100
t
= 2 ⇒tln
1 +
r%
100
= ln2.
However, for small x we know that ln(1 +x) ∼ x, thus
t
r%
100
∼ ln2 ⇒t ∼
100ln2
r%
=
69.3147
r%
but of course 72 has more divisors and is much easier to work with.
The power of compounding also comes into play with mortgages or
other installment loans. A mortgage is a promise to pay every period for
a speciﬁed length (typically 25 years, i.e., 300 months) a certain payment
p. This is also known as a simple annuity. What is the value of such a
promise, i.e., its present value? We need to compute the value of the following
sum:
δp +δ
2
p +δ
3
p +. . . +δ
n
p.
Here δ = 1/(1 +r), where r is the interest rate we use per period. Thus
PV = δ
p +δp +δ
2
p +. . . +δ
n−1
p
= δp
1 +δ +δ
2
+. . . +δ
n−1
54 LA. Busch, Microeconomics May2004
= δp
1 −δ
n
1 −δ
PV =
1 −(1 +r)
−n
r
p
(Recall in the above derivation that for δ < 1 we have
¸
∞
i=1
δ
i
= 1/(1 −δ).)
The above equation relates four variables: the principal amount, the
payment amount, the payment periods, and the interest rate. If you ﬁx any
three this allows you to derive the fourth after only a little bit of manip
ulation. A ﬁnal note: In Canada a mortgage can be at most compounded
semiannually. Thus the eﬀective interest rate per month is derived by solv
ing (1+r/2)
2
= (1+r
m
)
12
. If you are quoted a 12% interest rate per year the
monthly rate is therefore (1.06)
1/6
−1 = 0.975879418%. The eﬀective yearly
interest rate in turn is (1.06)
2
− 1 = 12.36%, and by law the bank is sup
posed to tell you about that too. Given the above, and the fact that nearly
all mortgages are computed for a 25 year term (but seldom run longer than 5
years, these days), the monthly payment at a 10% yearly interest rate for an
additional $1000 on the mortgage is $8.95. Before you engage in mortgages
it would be a good idea to program your spreadsheet with these formulas
and convince yourself how biweekly payments reduce the total interest you
pay, how important a couple of percentage points oﬀ the interest rate are to
your monthly budget, etc.
3.4 Review Problems
Question 1: There are three time periods and one consumption good. The
consumer’s endowments are 4 units in the ﬁrst period, 20 units in the second,
and 1 unit in the third. The money price for the consumption good is known
to be p = 1 in all periods (no inﬂation.) Let r
ij
denote the (simple, nominal)
interest rate from period i to j.
a) State the restrictions on r
12
, r
23
and r
13
implied by zero arbitrage.
b) Write down the consumer’s budget constraint assuming the restric
tion in (a) holds. Explain why it is useful to have this condition hold (i.e.,
point out what would cause a potential problem in how you’ve written the
budget if the condition in (a) fails.
c) Draw a diagrammatic representation of the budget constraint in pe
riods 2 and 3, being careful to note how period 1 consumption inﬂuences this
diagram.
Question 2: There are two goods, consumption today and tomorrow. Joe
Intertemporal 55
has an initial endowment of (100, 100). There exists a credit market which
allows him to borrow or lend against his initial endowment at market interest
rates of 0%. A borrowing constraint exists which prevents him from borrow
ing against more than 60% of his period 2 endowment. Joe also possesses
an investment technology which is characterized by a production function
x
2
= 10
√
x
1
. That is, an investment of x
1
units in period 1 will lead to x
2
units in period 2.
a) What is Joe’s budget constraint? A very clearly drawn and well la
belled diagram suﬃces, or you can give it mathematically. Also give a short
explanatory paragraph how the set is derived.
b) Suppose that Joe’s preferences can be represented by the function
U(c
1
, c
2
) = exp(c
4
1
c
6
2
). (Here exp() denotes the exponential function.) What
is Joe’s ﬁnal consumption bundle, how much does he invest, and what are
his transactions in the credit market.
Question 3: Anna has preferences over her consumption levels in two peri
ods which can be represented by the utility function
u(c
1
, c
2
) = min
23
22
12
10
c
1
+c
2
,
13
10
c
1
+c
2
.
a) Draw a carefully labelled representation of her indiﬀerence curve
map.
b) What is her utility maximizing consumption bundle if her initial en
dowment is (9, 8) and the interest rate is 25%.
c) What is her utility maximizing consumption bundle if her initial en
dowment is (5, 12) and the interest rate is 25%.
d) Assume she can lend money at 22% and borrow at 28%. What would
her endowment have to be for her to be a lender, a borrower?
e) Assume she can lend money at 18% and borrow at 32%. Would Anna
ever trade at all? (Explain.)
Question 4: Alice has preferences over consumption in two periods repre
sented by the utility function u
A
(c
1
, c
2
) = lnc
1
+ αlnc
2
, and an endowment
of (12, 6). Bob has preferences over consumption in two periods represented
by the utility function u
B
(c
1
, c
2
) = c
1
+βc
2
, and an endowment of (8, 4).
a) Draw an appropriately labelled representation of this exchange econ
omy in order to “prime” your intuition. (Indicate the indiﬀerence maps and
the Contract Curve.)
b) Assuming, of course, that both α and β lie strictly between zero and
one, what is the equilibrium interest rate and allocation?
56 LA. Busch, Microeconomics May2004
Chapter 4
Uncertainty
So far, it has been assumed that consumers would know precisely what they
were buying and getting. In real life, however, it is often the case that
an action does not lead to a deﬁnite outcome, but instead to one of many
possible outcomes. Which of these occurs is outside the control of the decision
maker. It is determined by what is referred to as “nature.” These situations
are ones of uncertainty — it is uncertain what happens. Often, however,
the probabilities of the diﬀerent possibilities are known from past experience,
or can be estimated in some other way, or indeed are assumed based on some
personal (subjective) judgment. Economists then speak of risk.
Note that our “normal” model is already handling such cases if we take it
at its most general level: commodities in the model were supposed to be fully
speciﬁed, and could, in principle, be state contingent. We will develop that
interpretation further later on in this chapter. First, however, we will develop
a more simple model which is designed to bring the role of probabilities to
the fore.
One of the key facts about situations involving risk/uncertainty is that
the consumer’s wellbeing does not only depend on the various possible out
comes, and which occurs in the end, but also on how likely each outcome
is. The standard model of chapter 2 does not allow an explicit role for
such probabilities. They are somehow embedded in the utility function and
prices. In order to compare situations which diﬀer only in the probabilities,
for example, it would be nice to have probabilities explicitly in the model
formulation. A particularly simple model that does this holds the outcomes
ﬁxed, they will all be assumed to lie in some set of alternatives X, and fo
cuses on the diﬀerent probabilities with which they occur. We call such a list
57
58 LA. Busch, Microeconomics May2004
of the probabilities for each outcome a lottery.
Deﬁnition 1 A simple lottery is a list L = (p
1
, p
2
, . . . , p
N
) of probabilities
for the N diﬀerent outcomes in X, with p
i
≥ 0,
¸
N
i=1
p
i
= 1.
If we have a suitably deﬁned continuous space of outcomes, for example
'
+
for the outcome “wealth”, we can view a probability distribution as a
lottery. We will therefore, if it is convenient, use the cumulative distribution
function (cdf) F() or the probability density function (assuming it exists)
f(x) to denote lotteries over (onedimensional) continuous outcome spaces.
1
Of course, lotteries could have as outcomes other lotteries, and such
lotteries are called compound lotteries. However, any such compound
lottery will lead to a probability distribution over ﬁnal outcomes which is
equivalent to that of some simple lottery. In particular, if some compound
lottery leads with probability α
i
to some simple lottery i, and if that simple
lottery i in turn assigns probability p
i
n
to outcome n, then we have the total
probability of outcome n given by p
n
=
¸
i
α
i
p
i
n
.
Assumption 1 Only the probability distribution over “ﬁnal” outcomes mat
ters to the consumer =⇒ preferences are over simple lotteries only.
Note that this is a restrictive assumption. For example, it does not
allow the consumer to attach any direct value to the fact that he is involved
in a lottery, i.e., gambling pleasure (or dislike) in itself is not allowed under
this speciﬁcation. The consumer is also assumed not to care how a given
probability distribution over ﬁnal outcomes arises, only what it is. This
is clearly an abstraction which looses quite a bit of richness in consumer
behaviour. On the positive side stands the mathematical simplicity of this
setting. Probability distributions over some set are a fairly simple framework
to work with.
We now have assumed that a consumer cares only about the probability
distribution over a ﬁnite set of outcomes. Just as before, we will assume that
the consumer is able to rank any two such distributions, in the usual way.
1
Recall that a cumulative distribution function F(·) is always deﬁned and that F(x)
represents the probability of the underlying random variable taking on a value less or equal
to x. If the density exists (and it does not have to!) then we denote it as f(x) and have
the identity F(x) =
x
f(t)dt. We can then denote the mean, say, in two diﬀerent ways:
µ =
tf(t)dt =
tdF(t), depending on if we know that the pdf exists, or want to allow
that it does not.
Uncertainty 59
Assumption 2 The preferences over simple lotteries are rational and con
tinuous.
This latter assumption guarantees us the existence of a utility function
representing these preferences. Note that rationality is a strong assumption
in this case. In particular, rationality requires transitivity of the preferences,
that is, L _ L
, L
_
ˆ
L ⇒ L _
ˆ
L. For diﬀerent lotteries this may be
hard to believe, and there is some evidence that real life consumers violate
this assumption. These violations of transitivity are “more common” in
this setting compared to the model without risk/uncertainty. Yet, without
rationality the model would have no predictive power. We further assume
that the preferences satisfy the following assumption:
Assumption 3 Preferences are Independent of what other outcomes may
have been available: L _ L
⇒αL + (1 −α)
ˆ
L _ αL
+ (1 −α)
ˆ
L.
This seems sensible on the one hand, since outcomes are mutually ex
clusive — one and only one of the outcomes will happen in the end — but
is restrictive since consumers often express regret: having won $5000, the
evaluation of that win often depends on the fact if this was the top price
available or the consolation price, for example. The economic importance of
this assumption is that we have only one utility index over outcomes in which
preferences over lotteries (the distribution over available outcomes) does not
enter. Once we have done this, there is only one more assumption:
Assumption 4 The utility of a lottery is the expected value of the utilities
of its outcomes:
U(L) =
n
¸
i=1
p
i
u(x
i
)
U(L) =
u(x)dF(x)
This form of a utility function is called a von NeumannMorgenstern,
or vNM, utility function. Note that this name applies to U(), not u().
The latter is sometimes called a Bernoulli utility function. The vNM utility
function is unique only up to a positive aﬃne transformation, that is, the
same preferences over lotteries are expressed by V () and U() if and only
if V () = aU() + b, a > 0. We are allowed to scale the utility index and
to change its slope, but we are not allowed to change its curvature. The
reason for this should be clear. Suppose we compare two lotteries, L and
ˆ
L,
60 LA. Busch, Microeconomics May2004
which diﬀer only in that probability is shifted between outcomes k and j and
outcomes m and n. Suppose U(L) > U(
ˆ
L), so:
U(L) =
n
¸
i=1
p
i
u(x
i
) > U(
ˆ
L) =
n
¸
i=1
ˆ p
i
u(x
i
)
n
¸
i=1
p
i
u(x
i
) −
n
¸
i=1
ˆ p
i
u(x
i
) > 0
(p
k
− ˆ p
k
)u(x
k
) + (p
j
− ˆ p
j
)u(x
j
)(p
m
− ˆ p
m
)u(x
m
) + (p
n
− ˆ p
n
)u(x
n
) > 0
(p
k
− ˆ p
k
)(u(x
k
) −u(x
j
)) + (p
m
− ˆ p
m
)(u(x
m
) −u(x
n
)) > 0
This comparison clearly depends on both, the diﬀerences in probabilities as
well as the diﬀerences in the utility indices of the outcomes. If we multiply
u() by a constant, it will factor out of the last line above. If, however, we
were to transform the function u(), even by a monotonic transformation,
we would change the diﬀerence between outcome utilities, and this could
change the above comparison. In fact, as we shall see later, the curvature
of the Bernoulli utility index u() is crucial in determining the consumer’s
behaviour with respect to risk, and will be used to measure the consumer’s
risk aversion.
Before we proceed to that, some famous paradoxes relating to uncer
tainty and our assumptions.
Allais Paradox: The Allais paradox shows that consumers may not satisfy
the axioms we had assumed. It considers the following case: Consider a
space of outcomes for a lottery given by C = (25, 5, 0) in hundred thousands
of dollars. Subjects are then asked which of two lotteries they would prefer,
L
A
= (0, 1, 0) or L
B
= (.1, .89, .01).
Often consumers will indicate a preference for L
A
, probably because they
foresee that they would regret to have been greedy if they end up with nothing
under lottery B. On the other hand, if they are asked to choose between
L
C
= (0, .11, .89) or L
D
= (.1, 0, .9)
the same consumers often indicate a preference for lottery D. Note that there
is little regret possible here, you simply get a lot larger winning in exchange
for a slightly lower probability of winning under D. These choices, however,
violate our assumptions. This is easily checked by assuming the existence of
some u(): The preference for A over B then indicates that
u(5) > .1u(25) +.89u(5) +.01u(0)
.11u(5) > .1u(25) +.01u(0)
.11u(5) +.89u(0) > .1u(25) +.9u(0)
Uncertainty 61
and the last line indicates that lottery C is preferred to D!
Ellsberg Paradox: This paradox shows that consumers may not be con
sistent in their assessment of uncertainty. Consider an urn with 300 balls in
it, of which precisely 100 are known to be red. The other 200 are blue or
green in an unknown proportion (note that this is uncertainty: there is no
information as to the proportion available.) The consumer is again oﬀered
the choice between two pairs of gambles:
Choice 1 =
L
A
: $1000 if a drawn ball is red.
L
B
: $1000 if a drawn ball is blue.
Choice 2 : =
L
C
: $1000 if a drawn ball is NOT red.
L
D
: $1000 if a drawn ball is NOT blue.
Often consumers faced with these two choices will choose A over B and will
choose C over D. However, letting u(0) be zero for simplicity, this means that
p(R)u(1000) > p(B)u(1000) ⇒ p(R) > p(B) ⇒ (1 − p(R)) < (1 − p(B)) ⇒
p(R) < p(B) ⇒ p(R)u(1000) < p(B)u(1000). Thus the consumer
should prefer D to C if choice were consistent.
Other problems with expected utility also exist. One is the intimate
relation of risk aversion and time preference which is imposed by these pref
erences. There consequently is a fairly active literature which attempts to
ﬁnd a superior model for choice under uncertainty. These attempts mostly
come at the expense of much higher mathematical requirements, and many
still only address one or the other speciﬁc problem, so that they too are easily
‘refuted’ by a properly chosen experiment.
4.1 Risk Aversion
We will now restrict the outcome space X to be onedimensional. In particu
lar, assume that X is simply the wealth/total consumption of the consumer
in each outcome. With this simpliﬁcation, the basic attitudes of a consumer
concerning risk can be obtained by comparing two diﬀerent lotteries: one
that gives an outcome for certain (a degenerate lottery), and another that
has the same expected value, but is nondegenerate. So, let L be a lottery
on [0,
¯
X] given by the probability density f(x). It generates an expected
value of wealth of
¯
X
0
xf(x)dx = C. We can now compare the consumer’s
utility from obtaining C for certain, and that from the lottery L (which has
62 LA. Busch, Microeconomics May2004
expected wealth C.) Compare
U(L) =
¯
X
0
u(x)f(x)dx to u
¯
X
0
xf(x)dx
.
Deﬁnition 2 A riskaverse consumer is one for whom the expected utility
of any lottery is lower than the utility of the expected value of that lottery:
¯
X
0
u(x)f(x)dx < u
¯
X
0
xf(x)dx
.
0
0.5
1
1.5
2
2.5
3
0 5 10 15 20
Utility
wealth
w1 w2 E(w)
u(w1)
u(w2)
E(u())
u(E(w))
u(w)
♦
♦
♦
♦ ♦
♦ ♦
♦ ♦
♦
Figure 4.1: Risk Aversion
The astute reader may notice that this is Jensen’s inequality, which is
one way to deﬁne a concave function, in this case u() (see Fig. 4.1.) This
is also the reason why only aﬃne transformations were allowed for expected
utility functions. Any other transformation would aﬀect the curvature of the
Bernoulli utility function u(), and thus would change the riskaversion of
the consumer. Clearly, consumers with diﬀerent risk aversion do not have
the same preferences, however.
2
Note that a concave u() has a diminishing
marginal utility of wealth, an assumption which is quite familiar from intro
ductory courses. Risk aversion therefore implies (and is implied by) the fact
2
To belabour the point, consider preferences over wealth represented by u(w) = w.
In the standard framework of chapter 1 positive monotonic transformations are ok, so
that the functions w
2
and
√
w both represent identical preferences. It is easy to verify
that these two functions lead to a quite diﬀerent relationship between the expected utility
and the utility of the expected wealth than the initial one, however. Thus they cannot
represent the same preferences in a setting of uncertainty/risk.
Uncertainty 63
0
2
4
6
8
10
12
14
0 2 4 6 8 10 12 14
U
w
w1 w2 E(w)
u(w1)
u(w2)
E(u())=
u(E(w))
♦
♦
♦
♦ ♦
♦ ♦
0
20
40
60
80
100
120
0 1 2 3 4 5
U
w
w1 w2 E(w)
u(w1)
u(w2)
E(u())
u(E(w))
u(w)
♦♦
♦
♦ ♦
♦ ♦
♦ ♦
♦
Figure 4.2: Risk Neutral and Risk Loving
that additional units of wealth provide additional utility, but at a decreasing
rate. Of course, consumers do not have to be riskaverse.
Risk neutral and risk loving are deﬁned in the obvious way: The ﬁrst
requires that
u(x)f(x)dx = u
xf(x)dx
.
while the second requires
u(x)f(x)dx > u
xf(x)dx
.
There is a nice diagrammatic representation of these available if we
consider only two possible outcomes (Fig. 4.2).
There are two other ways in which we might deﬁne risk aversion, and
both reveal interesting facts about the consumer’s economic behaviour. The
ﬁrst is by using the concept of a certainty equivalent. It is the answer
to the question “how much wealth, received for certain, is equivalent (in the
consumer’s eyes, according to preferences) to a given gamble/lottery?” In
other words:
Deﬁnition 3 The certainty equivalent C(f, u) for a lottery with proba
bility distribution f() under the (Bernoulli) utility function u() is deﬁned
by the equation
u(C(f, u)) =
u(x)f(x)dx.
Again a diagram for the twooutcome case might help (Fig. 4.3).
64 LA. Busch, Microeconomics May2004
0
0.5
1
1.5
2
2.5
3
0 5 10 15 20
Utility
wealth
w1 w2 E(w)
C()
u(w1)
u(w2)
E(u())
u(w)
♦
♦
♦
♦ ♦
♦ ♦ ♦
♦
Figure 4.3: The certainty equivalent to a gamble
A risk averse consumer is one for whom the certainty equivalent of
any gamble is less than the expected value of that gamble. One useful
economic interpretation of this fact is that the consumer is willing to pay
(give up expected wealth) in order to avoid having to face the gamble. In
deed, the maximum amount which the consumer would pay is the diﬀerence
wf(w)dw − C(f, u). This observation basically underlies the whole insur
ance industry: riskaverse consumers are willing to pay in order to avoid risk.
A well diversiﬁed insurance company will be risk neutral, however, and there
fore is willing to provide insurance (assume the risk) as long as it guarantees
the consumer not more than the expected value of the gamble: Thus there
is room to trade, and insurance will be oﬀered. (More on that later.)
Another way to look at risk aversion is to ask the following question: If
I were to oﬀer a gamble to the consumer which would lead either to a win of
or a loss of , how much more than fair odds do I have to oﬀer so that the
consumer will take the bet? Note that a fair gamble would have an expected
value of zero (i.e., 50/50 odds), and thus would be rejected by the (risk
averse) consumer for sure. This idea leads to the concept of a probability
premium.
Deﬁnition 4 The probability premium π(u, , w) is deﬁned by
u(w) = (0.5 +π()) u(w +) + (0.5 −π()) u(w −).
A riskaverse consumer has a positive probability premium, indicating that
the consumer requires more than fair odds in order to accept a gamble.
Uncertainty 65
It can be shown that all three concepts are equivalent, that is, a con
sumer with preferences that have a positive probability premium will be one
for whom the certainty equivalent is less than the expected value of wealth
and for whom the expected utility is less than the utility of the expectation.
This is reassuring, since the certainty equivalent basically considers a con
sumer with a property right to a gamble, and asks what it would take for him
to trade to a certain wealth level, while the probability premium considers
a consumer with a property right to a ﬁxed wealth, and asks what it would
take for a gamble to be accepted.
4.1.1 Comparing degrees of risk aversion
One question we can now try to address is to see which consumer is more risk
averse. Since risk aversion apparently had to do with the concavity of the
(Bernoulli) utility function it would appear logical to attempt to measure its
concavity. This is indeed what Arrow and Pratt have done. However, simply
using the second derivative of u(), which after all measures curvature, will
not be such a good idea. The reason is that the second derivative will depend
on the units in which wealth and utility are measured.
3
Arrow and Pratt
have proposed two measures which largely avoid this problem:
Deﬁnition 5 The ArrowPratt measure of (absolute) risk aversion
is
r
A
= −
u
(w)
u
(w)
.
The ArrowPratt measure of relative risk aversion is
r
R
= −
u
(w)w
u
(w)
.
Note that the ﬁrst of these in eﬀect measures risk aversion with respect
to a ﬁxed amount of gamble (say, $1). The latter, in contrast, measures risk
aversion for a gamble over a ﬁxed percentage of wealth. These points can be
demonstrated as follows:
Consider a consumer with initial wealth w who is faced with a small
fair bet, i.e., a gain or loss of some small amount with equal probability.
3
You can easily verify this by thinking of the units attached to the second derivative.
If the ﬁrst derivative measures change in utility for change in wealth, then its units must
be u/w, while the second derivative is like a rate of acceleration. Its units are u/w
2
.
66 LA. Busch, Microeconomics May2004
How much would the consumer be willing to pay in order to avoid this bet?
Denoting this payment by I we need to consider (note that w − I is the
certainty equivalent)
0.5u(w +) + 0.5u(w −) = u(w −I).
Use a Taylor series expansion in order to approximate both sides:
0.5(u(w) +u
(w) + 0.5
2
u
(w)) + 0.5(u(w) −u
(w) + 0.5
2
u
(w))
∼ u(w) −Iu
(w) .
Collecting terms and simplifying gives us
0.5
2
u
(w)) ∼ −Iu
(w) ⇒ I ∼
2
2
−u
(w)
u
(w)
.
Thus the required payment is proportional to the absolute coeﬃcient of risk
aversion (and the dollar amount of the gamble.)
On the other hand,
u
w
u
=
du
dw
w
u
=
du
/u
dw/w
∼
%∆u
%∆w
.
Thus the relative coeﬃcient of riskaversion is nothing but the elasticity of
marginal utility with respect to wealth. That is, it measures the responsive
ness of the marginal utility to wealth changes.
Comparing across consumers, a consumer is said to be more risk averse
than another if (either) ArrowPratt coeﬃcient of risk aversion is larger. This
is equivalent to saying that he has a lower certainty equivalent for any given
gamble, or requires a higher probability premium.
We can also compare the risk aversion of a given consumer for diﬀerent
wealth levels. That is, we can compute these measures for the same u()
but diﬀerent initial wealth. After all, r
A
is a function of w. It is commonly
assumed that consumers have (absolute) risk aversion which is decreasing
with wealth. Sometimes the stronger assumption of decreasing relative risk
aversion is made, however. Note that a constant absolute risk aversion implies
increasing relative risk aversion. Finally, note also that the only functional
form for u() which has constant absolute risk aversion is u(w) = −e
(−aw)
.
You may wish to verify that a consumer exhibiting decreasing absolute
risk aversion will have a decreasing diﬀerence between initial wealth and the
certainty equivalent (a declining maximum price paid for insurance) on the
one hand, and a decreasing probability premium on the other.
Uncertainty 67
0
0.5
1
1.5
2
2.5
3
0 5 10 15 20
Utility
wealth
w1 w2 E(w) w3 w4
E(u(1,2))
E(u(3,4))
u(w)
♦
♦
♦
♦ ♦
♦ ♦ ♦
♦ ♦
♦
♦
♦ ♦
♦ ♦
Figure 4.4: Comparing two gambles with equal expected value
4.2 Comparing gambles with respect to risk
Another type of comparison of interest is not across consumers or wealth
levels, as above, but across diﬀerent gambles. Faced with two gambles, when
do we want to say that one is riskier than the other? We could try to
approach this question with purely statistical measures, such as comparisons
of the various moments of the two lotteries’ distributions. This has the major
problem, however, that the consumer may in general be expected to be willing
to trade oﬀ a higher expected return for higher variance, say. Because of
this, a deﬁnition based directly on consumer preferences is preferable. Two
such measures are commonly employed in economics, ﬁrst and second order
stochastic dominance.
Let us ﬁrst focus on lotteries with the same expected value. For example,
consider the two gambles depicted in Fig. 4.4. The ﬁrst is a gamble over w
1
and w
2
. The second is a gamble over w
3
and w
4
. Both have an identical
expected value of E(w). Nevertheless a risk averse consumer clearly will
prefer the second to the ﬁrst, as inspection of the diagram veriﬁes.
Note that in Fig. 4.4 E(w) − w
1
> E(w) − w
3
and w
2
− E(w) > w
4
−
E(w). This clearly indicates that the second lottery has a lower variance,
and thus that a risk averse consumer prefers to have less variability for a
given mean. With multiple possible outcomes the question is not so simple
anymore, however. One could construct an example with two lotteries that
have the same mean and variance, but which diﬀer in higher moments. What
are the “obvious” preferences of a risk averse consumer about skurtosis, say?
68 LA. Busch, Microeconomics May2004
This has lead to a more general deﬁnition for comparing distributions
which have the same mean:
Deﬁnition 6 Let F(x) and G(x) be two cumulative distribution functions
for a onedimensional random variable (wealth). Let F() have the same
mean as G(). F() is said to dominate G() according to second order
stochastic dominance if for every nondecreasing concave u(x):
u(x)dF(x) ≥
u(x)dG(x)
In words, a distribution second order stochastically dominates another
if they have the same mean and if the ﬁrst is preferred by all riskaverse
consumers.
This deﬁnition has economic appeal in its simplicity, but is one of those
deﬁnitions that are problematic to work with due to the condition that for
all possible concave functions something is true. In order to apply this
deﬁnition easily we need to ﬁnd other tests.
Lemma 1 Let F(x) and G(x) be two cumulative distribution functions for
a onedimensional random variable (wealth). F() dominates G() according
to second order stochastic dominance if
tg(t)dt =
tf(t)dt, and
x
0
G(t)dt ≥
x
0
F(t)dt, ∀x.
I.e., if they have the same mean and there is more area under the cdf G()
than under the cdf F() at any point of the distribution.
4
A concept related to second order stochastic dominance is that of a mean
preserving spread. Indeed it can be shown that the two are equivalent.
Deﬁnition 7 Let F(x) and G(x) be two cumulative distribution functions
for a onedimensional random variable (wealth). G() is a mean preserving
spread compared to F() if x is distributed according to F() and G() is the
distribution of the random variable x +z, where z is distributed according to
some H() with
zdH(z) = 0.
4
Note that the condition of identical means also implies a restriction on the total
areas below the cumulative distributions. After all,
x
x
tdF(t) = [tF(t)]
x
x
−
x
x
F(t)dt =
x −
x
x
F(t)dt.
Uncertainty 69
The above gives us an easy way to construct a second order stochastically
dominated distribution: Simply add a zero mean random variable to the given
one.
While it is nice to be able to rank distributions in this manner, the
condition of equal means is restrictive. Furthermore, it does not allow us to
address the economically interesting question of what the trade oﬀ between
mean and risk may be. The following concept is frequently employed in
economics to deal with such situations.
Deﬁnition 8 Let F(x) and G(x) be two cumulative distribution functions
for a onedimensional random variable (wealth). F() is said to dominate
G() according to ﬁrst order stochastic dominance if for every non
decreasing u(x):
u(x)dF(x) ≥
u(x)dG(x)
This is equivalent to the requirement that F(x) ≤ G(x), ∀x.
Note that this requires that any consumer, risk averse or not, would
prefer F to G. It is often useful to realize two facts: One, a ﬁrst order
stochastically dominating distribution F can be obtained form a distribution
G by shifting up outcomes randomly. Two, ﬁrst order stochastic dominance
implies a higher mean, but is stronger than just a requirement on the mean.
The other moments of the distribution get involved too. In other words,
just because the mean is higher for one distribution than another does not
mean that the ﬁrst dominates the second according to ﬁrst order stochastic
dominance!
4.3 A ﬁrst look at Insurance
Let us use the above model to investigate a simple model of insurance. To be
concrete, assume an individual with current wealth of $100,000 who faces a
25% probability to loose his $20,000 car through theft. Assume the individual
has vNM expected utility. The individual’s expected utility then is
U() = 0.75u(100, 000) +.25u(80, 000).
Now assume that the individual has access to an insurance plan. Insurance
works as follows: The individual decides on an amount of coverage, C. This
70 LA. Busch, Microeconomics May2004
coverage carries a premium of π per dollar. The contract speciﬁes that the
amount C will be paid out if the car has been stolen. (Assume that this
is all veriﬁable.) How would our individual choose the amount of coverage?
Simple: maximize expected utility. Thus
max
C
¦0.75u(100, 000 −πC) + 0.25u(80, 000 −πC +C)¦.
The ﬁrst order condition for this problem is
(−π)0.75u
(100, 000 −πC) + (1 −π)0.25u
(80, 000 −πC +C) = 0.
Before we further investigate this equation let us verify the second order
condition. It requires
(−π)
2
0.75u
(100, 000 −πC) + (1 −π)
2
0.25u
(80, 000 −πC +C) < 0.
Clearly this is only satisﬁed if u() is concave, in other words, if the consumer
is risk averse.
So, what does the ﬁrst order condition tell us? Manipulation yields the
condition that
u
(100, 000 −πC)
u
(80, 000 −πC +C)
=
(1 −π)
3π
which gives us a familiar looking equation in that the LHS is a ratio of
marginal utilities. It follows that total consumption under each circumstance
is set so as to set the ratio of marginal utility of wealth equal to some frac
tion which depends on price and the probabilities. Even without knowing
the precise function we can say something about the insurance behaviour,
however. To do so, let us compute the actuarially fair premium. The
expected loss is $5,000, so that an insurance premium which collects that
amount for the $20,000 insured value would lead to zero expected proﬁts for
the insurance ﬁrm: 0.75πC + 0.25(πC −C) = 0 ⇒π = 0.25. An actuarially
fair premium simply charges the odds (there is a 1 in 4 chance of a loss, after
all.) If we use this fair premium in the above ﬁrst order condition we obtain
u
(100, 000 −πC)
u
(80, 000 −πC +C)
= 1.
Since the utility function is strictly concave it can have the same slope only
at the same point, and we conclude that
5
(100, 000 −πC) = (80, 000 −πC +C) ⇒C = 20, 000.
5
Ok, read that sentence again. Do you understand the usage of the word ‘Since’ ? I
am not “cancelling” the u
terms, because those indicate a function. Instead the equation
tells us that numerator and denominator must be the same. But for what values of the
independent variable wealth does the function u(·) have the same derivative? For none, if
u(·) is strictly concave. Therefore the function must be evaluated at the same level of the
independent variable.
Uncertainty 71
This is one of the key results in the analysis of insurance: at actuarially
fair premiums a risk averse consumer will fully insure. Note that
the consumer will not bear any risk in this case: wealth will be $95,000
independent of if the car is stolen, since a $5,000 premium is due in either
case, and if the car is actually stolen it will be replaced. As we have seen
before, this will make the consumer much better oﬀ than if he is actually
bearing the gamble with this same expected wealth level. If you draw the
appropriate diagram you can verify that the consumer does not have to pay
any of the amount he would be willing to pay (the diﬀerence between the
expected value and the certainty equivalent.)
If we had a particular utility function we could now also compute the
maximal amount the consumer would be willing to pay. We have to be
careful, however, how we set up this problem, since simply increasing π will
reduce the amount of coverage purchased! So instead, let us approach the
question as follows: What fee would the consumer be willing to pay in order
to have access to actuarially fair insurance? Let F denote the fee. Then we
have the consumer choose between
u(95, 000 −F) and 0.25u(80, 000) + 0.75u(100, 000).
(Note that I have skipped a step by assuming full insurance. The left term is
the expected utility of a fully insured consumer who pays the fee, the right
term is the expected utility of an uninsured consumer. You should verify
that the lump sum fee does not stop the consumer from fully insuring at a
fair premium.) For example, if u() = ln() then simple manipulation yields
F ∼ 426.
It is important to note why we have set up the problem this way. Con
sider the alternative (based on these numbers and the logarithmic function)
and assume that the total payment of $5,426 which is made in the above case
of a fair premium plus fee, were expressed as a premium. Then we get that
π = 5426/20000 = 0.2713. The ﬁrst order condition for the choice of C then
requires that (recall that ∂ln(x)/∂x = 1/x)
(80, 000 + 0.7287C)
(100, 000 −0.2713C)
=
0.7287
0.8139
= 0.895318835 ⇒C = 9, 810.50.
As you can see, if the additional price is understood as a per dollar charge
for insured value, the consumer will not insure fully. Of course this is an
implication of the previous result — the consumer now faces a premium
which is not actuarially fair. Indeed, we could also compute the premium for
which the consumer will cease to purchase any insurance. For logarithmic
utility like this we would want to compute (remember, we are trying to ﬁnd
72 LA. Busch, Microeconomics May2004
when C = 0 is optimal)
80, 000
100, 000
=
1 −π
3π
⇒π = 0.2941.
As indicated before, there is room to trade between insurance providers and
risk averse consumers. Indeed, as you can verify in one of the questions
at the end of the chapter, there is room for trade between two risk averse
consumers if they face diﬀerent risk or if they diﬀer in their attitudes towards
risk (degree of risk aversion.)
4.4 The StatePreference Approach
While the above approach lets us focus quite well on the role of probabilities
in consumer choice, it is diﬀerent in character to the ‘maximize utility subject
to a budget constraint’ approach we have so much intuition about. In the
ﬁrst order condition for the insurance problem, for example, we had a ratio of
marginal utilities on the one side — but was that the slope of an indiﬀerence
curve?
As mentioned previously, we can actually treat consumption as involving
contingent commodities, and will do so now. Let us start by assuming that
the outcomes of any random event can be categorized as something we will
refer to as the states of the world. That is, there exists a set of mutually
exclusive states which are adequate to describe all randomness in the world.
In our insurance example above, for example, there were only two states of
the world which mattered: Either the car was stolen or it was not. Of course,
in more general settings we could think of many more states (such as the car
is stolen and not recovered, the car is stolen but recovered as a write oﬀ,
the car is stolen and recovered with minor damage, etc.) In accordance with
this view of the world we now will have to develop the idea of contingent
commodities. In the case of our concrete example with just two states, a
contingent commodity would be delivered only if a particular state (on which
the commodity’s delivery is contingent) occurs. So, if there are two states,
good and bad, then there could be two commodities, one which promises
consumption in the good state, and one which promises consumption in the
bad state. Notice that you would have to buy both of these commodities if
you wanted to consume in both states. Notice also that nothing requires that
the consumer purchase them in equal amounts. They are, after all, diﬀerent
commodities now, even if the underlying good which gets delivered in each
state is the same. Finally, note that if one of these commodities were missing
Uncertainty 73
you could not assure consumption in both states (which is why economists
make such a fuss about “complete markets” — which essentially means that
everything which is relevant can be traded. It does not have to be traded,
of course, that is up to people’s choices, but it should be available should
someone want to trade.) Of course, after the fact (ex post in the lingo)
only one of these states does occur, and thus only the set of commodities
contingent on that state are actually consumed. Before the fact (before the
uncertainty is resolved, called ex ante) there are two diﬀerent commodities
available, however.
Once we have this setting we can proceed pretty much as before in our
analysis. To be concrete let there be just two states, good and bad. We
will now index goods by a subscript b or g to indicate the state in which
they are delivered. We will further simplify things by having just one good,
consumption (or wealth). Given that there are two states, that means that
there are two distinct (contingent) commodities, c
g
and c
b
. We may now
assume that the consumer has our usual vNM expected utility.
6
If the
individual assessed a probability of π to the good state occurring, then we
would obtain an expected utility of consumption of U(c
g
, c
b
) = πu(c
g
) +(1 −
π)u(c
b
).
This expression gives us the expected utility of the consumer. The
consumers’ objective is to maximize expected utility, as before. It might
be useful at this point to assess the properties of this function. As long
as the utility index applied to consumption in each state, u(), is concave,
this is a concave function. It will be increasing in each commodity, but at
a decreasing rate. We can also ask what the marginal rate of substitution
between the commodities will be. This is easily derived by taking the total
derivative along an indiﬀerence curve and rearranging:
πu
(c
g
)dc
g
+ (1 −π)u
(c
b
)dc
b
= 0,
dc
b
dc
g
= −
πu
(c
g
)
(1 −π)u
(c
b
)
.
Note the fact that the MRS now depends not only on the marginal utility
of wealth but also on the (subjective) probabilities the consumer assesses
for each state! Even more importantly, we can consider what is known as
the certainty line, that is, the locus of points where c
g
= c
b
. Since the
marginal utility of consumption then is equal in both states (we have state
independent utility here, after all, which means that the same u() applies in
each state), it follows that the slope of an indiﬀerence curve on the certainty
6
Note that this is somewhat more onerous than before now: imagine the states are
indexed by good health and poor health. It is easy to imagine that an individual would
evaluate material wealth diﬀerently in these two cases.
74 LA. Busch, Microeconomics May2004
line only depends on the probability the consumer assesses for each state. In
this case, it is π/(1 −π).
The other ingredient is the budget line, of course. Since we have two
commodities, each might be expected to have a price, and we denote these
by p
g
, p
b
respectively. The consumer who has a total initial wealth of W
may therefore consume any combination which lies on the budget line p
g
c
g
+
p
b
c
b
= W, while a consumer who has an endowment of consumption given by
(w
g
, w
b
) may consume anything on the budget line p
g
c
g
+p
b
c
g
= p
g
w
g
+p
b
w
b
.
Where do these prices come from? As before, they will be determined by
general equilibrium conditions. But if contingent markets are well developed
and competitive, and there is general agreement on the likelihood of the
states, then we might expect that a dollar’s worth of consumption in a state
will cost its expected value, which is just the dollar times the probability
that it needs to be delivered. (I.e., a kind of zero proﬁt condition for state
pricing.) Thus we might expect that p
g
= π and p
b
= (1 −π).
The budget line also has a slope, of course, which is the rate at which
consumption in one state can be transferred into consumption in the other
state. Taking total derivatives of the budget we obtain that the budget slope
is dc
b
/dc
g
= p
g
/p
b
. Combining this with our condition on “fair” pricing in
the previous paragraph, we obtain that the budget allows transformation of
consumption in one state to the other according to the odds.
4.4.1 Insurance in a State Model
So let us reconsider our consumer who was in need of insurance in this frame
work. In order to make this problem somewhat neater, we will reformulate
the insurance premium into what is known as a net premium, which is a
payment which only accrues in the case there is no loss. Since the normal
insurance contract speciﬁes that a premium be paid in either case, we usu
ally have a payment of premiumAmount in order to obtain a net beneﬁt
of Amount −premiumAmount. One dollar of consumption added in the
state in which an accident occurs will therefore cost premium/(1−premium)
dollars in the no accident state. Thus, let p
b
= 1 and let p
g
= P, the net
premium. The consumer will then solve
max
c
b
,cg
¦πu(c
g
) + (1 −π)u(c
b
)¦ s.t. Pc
g
+c
b
= P(100, 000) + 80, 000.
The two ﬁrst order conditions for the consumption levels in this problem are
πu
(c
g
) −λP = 0 and (1 −π)u
(c
b
) −λ = 0.
Uncertainty 75
0
5
10
15
20
0 5 10 15 20
loss
no loss
@ endowment
cert.
Figure 4.5: An Insurance Problem in StateConsumption space
Combining them in the usual way we obtain
πu
(c
g
)
(1 −π)u
(c
b
)
= P.
Now, as we have just seen the LHS of this is the slope of an Indiﬀerence
curve. The RHS is the slope of the budget, and so this says nothing but the
familiar “there must be a tangency”.
We have also derived P = π/(1−π) for a fair net premium before. Thus
we get that
πu
(c
g
)
(1 −π)u
(c
b
)
=
π
1 −π
,
which requires that
u
(c
g
)
u
(c
b
)
= 1 ⇒
c
g
c
b
= 1.
Thus this model shows us, just as the previous one, that a risk averse con
sumer faced with a fair premium will choose to fully insure, that is, choose to
equalize consumption levels across the states. A diagrammatic representation
of this can be found in diagram 3.5, which is standard for insurance prob
lems. The consumer has an endowment which is oﬀ the certainty (45degree)
line. The fair premium deﬁnes a budget line along which the consumer can
reallocate consumption from the good (no loss) state to the bad (loss) state.
Optimum occurs where there is a tangency, which must occur on the cer
tainty line since then the slopes are equalized. The picture looks perfectly
“normal”, that is, just as we are used from introductory economics.
76 LA. Busch, Microeconomics May2004
4.4.2 Risk Aversion Again
Given the amount of time spent previously on riskaversion, it is interesting
to see how riskaversion manifests itself in this setting. Intuitively it might
be apparent that a more risk averse consumer will have indiﬀerence curves
which are more curved, that is, exhibit less substitutability (recall that a
straight line indiﬀerence curve means that the goods are perfect substitutes,
while a kinked Leontief indiﬀerence curve means perfect complements.) It
therefore stands to reason that we might be interested in the rate at which
the MRS is falling. It is, however, much easier to think along the lines of
certainty equivalents: Consider two consumers with diﬀerent risk aversion,
that is, curvature of indiﬀerence curves. For simplicity, let us consider a
point on the certainty line and the two indiﬀerence curves for our consumers
through that common point (see Fig. 4.6).
0
5
10
15
20
0 5 10 15 20
loss
no loss
cert.
B
A
Figure 4.6: Risk aversion in the State Model
Assume further that consumer B’s indiﬀerence curve lies everywhere else
above consumer A’s. We can now ask how much consumption we have to
add for each consumer in order to keep the consumer indiﬀerent between
the certain point and a consumption bundle with some given amount less in
the bad state. Clearly, consumer B will need more compensation in order
to accept the bad state reduction. Looked at it the other way around, this
means that consumer B is willing to give up more consumption in the good
state in order to increase bad state consumption. Note that both assess the
same probabilities on the certainty line, since the slopes of their ICs are the
same. How does this relate to certainty equivalents? Well, a budget line at
fair odds will have the slope −
π
1−π
. Consider three such budget lines which
are all parallel and go through the certain consumption point and the two
Uncertainty 77
gambles which are equivalent for the consumer to the certain point. Clearly
(from the picture) consumer B’s budget is furthest out, followed by consumer
A’s, and furthest in is the budget through the certain point. But we know
that parallel budgets diﬀer only in the income/wealth they embody. Thus
there is a larger reduction in wealth possible for B without reducing his
welfare, compared to A. The wealth reduction embodied in the lower budget
is the equivalent of the certainty equivalent idea before. (The expected value
of a given gamble on such a budget line is given by the point on the certainty
line and that budget, after all.)
4.5 Asset Pricing
Any discussion of models of uncertainty would be incomplete without some
coverage of the main area in which all of this is used, which is the pricing
of assets. As we have seen before, if there is only time to contend with but
returns or future prices are known, then asset pricing reduces to a condition
which says that the current price of an asset must relate to the future price
through discounting. In the “real world” most assets do not have a future
price which is known, or may otherwise have returns which are uncertain —
stocks are a good example, where dividends are announced each year and
their price certainly seems to ﬂuctuate. Our discussion so far has focused on
the avoidance of risk. Of course, even a risk averse consumer will accept some
risk in exchange for a higher return, as we will see shortly. First, however,
let us deﬁne two terms which often occur in the context of investments.
4.5.1 Diversiﬁcation
Diversiﬁcation refers to the idea that risk can be reduced by spreading one’s
investments across multiple assets. Contrary to popular misconceptions it is
not necessary that their price movements be negatively correlated (although
that certainly helps.) Let us consider these issues via a simple example.
Assume that there exists a project A which requires an investment of
$9,000 and which will either pay back $12,000 or $8,000, each with equal
probability. The expected value of this project is therefore $10,000. Now
assume that a second project exists which is just like this one, but (and
this is important) which is completely independent of the ﬁrst. How much
each pays back in no way depends on the other. Two investors now could
each invest $4,500 in each project. Each investor then has again a total
78 LA. Busch, Microeconomics May2004
investment of $9,000. How much do the projects pay back? Well, each will
pay an investor either $6,000 or $4,000, each with equal probability. Thus an
investor can receive either $12,000, $10,000, or $8,000. $12,000 or $8,000 are
received one quarter of the time, and half the time it is $10,000. The total
expected return thus is the same. BUT, there is less risk, since we know that
for a riskaverse consumer 0.5u(12)+0.5u(8) < 0.25u(12)+0.25u(8)+0.5u(10)
since 2(0.25u(12) + 0.25u(8)) < 2(0.5u(10)).
Should the investor have access to investments which have negatively
correlated returns (if one is up the other is down) risk may be able to be
eliminated completely. All that is needed is to assume that the second project
above will pay $8,000 when the ﬁrst pays $12,000, and that it will pay $12,000
if the ﬁrst pays $8,000. In that case an investor who invests half in each will
obtain either $6,000 and $4,000 or $4,000 and $6,000: $10,000 in either case.
The expected return has not increased, but there is no risk at all now, a
situation which a riskaverse consumer would clearly prefer.
4.5.2 Risk spreading
Risk spreading refers to the activity which lies at the root of insurance.
Assume that there are 1000 individuals with wealth of $35,000 and a 1%
probability of suﬀering a $10,000 loss. If the losses are independent of one
another then there will be an average of 10 losses per period, for a total
$100,000 loss for all of them. The expected loss of each individual is $100, so
that all individuals have an expected wealth of $34,900. A mutual insurance
would now collect $100 from each, and everybody would be reimbursed in full
for their loss. Thus we can guarantee the consumers their expected wealth
for certain.
Note that there is a new risk introduced now: in any given year more (or
less) than 10 losses may occur. We can get rid of some of this by charging the
$100 in all years and retaining any money which was not collected in order to
cover higher expenses in years in which more than 10 losses occur. However,
there may be a string of bad luck which might threaten the solvency of the
plan: but help is on the way! We could buy insurance for the insurance
company, in eﬀect insuring against the unlikely event that signiﬁcantly more
than the average number of losses occurs. This is called reinsurance. Since
an insurance company has a well diversiﬁed portfolio of (independent) risks,
the aggregate risk it faces itself is low and it will thus be able to get fairly
cheap insurance.
Uncertainty 79
These kind of considerations are also able to show why there may not
be any insurance oﬀered for certain losses. You may recall the lament on the
radio about the fact that homeowners in the Red River basin were not able
to purchase ﬂood insurance. Similarly, you can’t get earthquake insurance
in Vancouver, and certain other natural disasters (and manmade ones, such
as wars) are excluded from coverage. Why? The answer lies in the fact that
all insured individuals would have either a loss or no loss at the same time.
That would mean that our mutual insurance above would either require no
money (no losses) or $10,000,000. But the latter requires each participant to
pay $10,000, in which case you might as well not insure! (A note aside: often
the statement that no insurance is available is not literally correct: there may
well be insurance available, but only at such high rates that nobody would
buy it anyways. Even at low rates many people do not carry insurance, often
hoping that the government will bail them out after the fact, a ploy which
often works.)
4.5.3 Back to Asset Pricing
Before we look at a more general model of asset pricing, it may be useful to
verify that a riskaverse consumer will indeed hold nonnegative amounts of
risky assets if they oﬀer positive returns. To do so, let us assume the simple
most case, that of a consumer with a given wealth w who has access to a
risky asset which has a return of r
g
or r
b
< 0 < r
g
. Let x denote the amount
invested in the risky asset. Wealth then is a random variable and will be
either w
g
= (w − x) + x(1 + r
g
) or w
b
= (w − x) + x(1 + r
b
). Suppose the
good outcome occurs with probability π. What will be the choice of x?
max
0≤x≤w
¦πu(w +r
g
x) + (1 −π)u(w +r
b
x)¦ .
The ﬁrst and second order conditions are
r
g
πu
(w +r
g
x) +r
b
(1 −π)u
(w +r
b
x) = 0
r
2
g
πu
(w +r
g
x) +r
2
b
(1 −π)u
(w +r
b
x) < 0
The second order condition is satisﬁed trivially if the consumer is risk averse.
To show under what circumstances it is not optimal to have a zero investment
consider the FOC at x = 0:
r
g
πu
(w) +r
b
(1 −π)u
(w) ? 0.
The LHS is only positive if πr
g
+ (1 − π)r
b
> 0, that is, if expected returns
are positive. Notice also that in that case there will be some investment! Of
80 LA. Busch, Microeconomics May2004
course this is driven by the fact that not investing guarantees a zero rate of
return. Investing is a gamble, which the consumer dislikes, but also increases
returns. Even a riskaverse consumer will take some risk for that higher
return!
Now let us consider a more general model with many assets. Assume
that there is a riskfree asset (one which yields a certain return) and many
risky ones. Let the return for the riskfree asset be denoted by R
0
and the
returns for the risky assets be denoted by
˜
R
i
, each of which is a random
variable with some distribution. Initial wealth of the consumer is w. Finally,
we can let x
i
denote the fraction of wealth allocated to asset i = 0, . . . , n. In
the second period (we will ignore time discounting for simplicity and clarity)
wealth will be a random variable the distribution of which depends on how
much is invested in each asset. In particular, ˜ w = w
0
¸
n
i=0
x
i
˜
R
i
, with the
budget constraint that
¸
n
i=0
x
i
= 1. We can transform this expression as
follows:
˜ w = w
¸
(1 −
n
¸
i=1
x
i
)R
0
+
n
¸
i=1
x
i
˜
R
i
¸
= w
¸
R
0
+
n
¸
i=1
x
i
(
˜
R
i
−R
0
)
¸
.
The consumer’s goal, of course, is to maximize expected utility from this
wealth by choice of the investment fractions. That is,
max
{x}
i
¦Eu( ˜ w)¦ = max
{x}
i
Eu
w
¸
R
0
+
n
¸
i=1
x
i
(
˜
R
i
−R
0
)
¸¸
.
Diﬀerentiation yields the ﬁrst order conditions
Eu
( ˜ w)(
˜
R
i
−R
0
) = 0, ∀i.
Now we will do some manipulation of this to make it look more presentable
and informative. You may recall that the covariance of two random variables,
X, Y , is deﬁned as COV(X, Y ) = EXY −EXEY. It follows that Eu
( ˜ w)
˜
R
i
=
COV(u
,
˜
R
i
)+Eu
( ˜ w)E
˜
R
i
. Using this fact and distributing the subtraction in
the FOC across the equal sign, we obtain for each risky asset i the following
equation:
Eu
( ˜ w)R
0
= Eu
( ˜ w)E
˜
R
i
+ COV(u
( ˜ w),
˜
R
i
).
From this it follows that in equilibrium the expected return of asset i
must satisfy
E
˜
R
i
= R
0
−
COV(u
( ˜ w),
˜
R
i
)
Eu
( ˜ w)
.
Uncertainty 81
This equation has a nice interpretation. The ﬁrst term is clearly the risk
free rate of return. The second part therefore must be the riskpremium
which the asset must garner in order to be held by the consumer in a utility
maximizing portfolio. Note that if a return is positively correlated with
wealth — that is, if an asset will return much if the consumer is already rich
— then it is negatively correlated with the marginal utility of wealth, since
that is decreasing in wealth. Thus the expected return of such an asset must
exceed the risk free return if it is to be held. Of course, assets which pay
oﬀ when wealth otherwise would be low can have a lower return than the
riskfree rate since they, in a sense, provide insurance.
4.5.4 MeanVariance Utility
The above pricing model required us to know the covariance and expectation
of marginal utility, since, as we have seen before, it is the fact that marginal
utility diﬀers across outcomes which in some sense causes riskaversion. A
nice simpliﬁcation of the model is possible if we specify at the outset that
our consumer likes the mean but dislikes the variance of random returns,
i.e., the mean is a good, the variance is a bad. We can then specify a utility
function directly on those two characteristics of the distribution. (The normal
distribution, for example is completely described by these two moments. If
distributions diﬀer in higher moments, this formulation would not be able to
pick that up, however.)
Recall that for a set of outcomes (w
1
, w
2
, . . . , w
n
) with probabilities
(π
1
, π
2
, . . . , π
n
)
The mean is µ
w
=
n
¸
i=1
π
i
w
i
,
and the variance is σ
2
w
=
n
¸
i=1
π
i
(w
i
−µ
w
)
2
.
We now deﬁne utility directly on these: u(µ
w
, σ
w
), although it is standard
practice to actually use the standard deviation as I just have done. Risk
aversion is now expressed through the fact that we assume that
∂u()
∂µ
w
= u
1
() > 0 while
∂u()
∂σ
w
= u
2
() < 0.
We will now focus on two portfolios only (the validity of this approach
will be shown in a while.) The risk free asset has a return of r
f
, the risky
82 LA. Busch, Microeconomics May2004
asset (the “market portfolio”) has a return of m
s
with probability π
s
. Let
r
m
=
¸
π
s
m
s
and σ
m
=
¸
π
s
(m
s
−r
m
)
2
. Assume that a fraction x of
wealth is to be invested in the risky asset (the market).
The expected return for a fraction x invested will be
r
x
=
¸
π
s
(xm
s
+ (1 −x)r
f
) = (1 −x)r
f
+x
¸
π
s
m
s
.
The variance of this portfolio is
σ
2
x
=
¸
π
s
(xm
s
+ (1 −x)r
f
−r
x
)
2
=
¸
π
s
(xm
s
−xr
m
)
2
= x
2
σ
2
m
.
The investor/consumer will maximize utility by choice of x:
max
x
¦u(xr
m
+ (1 −x)r
f
, xσ
m
)¦
FOC u
1
()[r
m
−r
f
] +u
2
()σ
m
= 0
SOC u
11
()[r
m
−r
f
]
2
+u
22
()σ
2
m
+ 2σ
m
[r
m
−r
f
]u
12
() ≤ 0
Assuming that the second order condition holds, we note that we will require
[r
m
− r
f
] > 0 since u
2
() is negative by assumption. We may also note that
we can rewrite the FOC as
−u
2
()
u
1
()
=
r
m
−r
f
σ
m
.
The LHS of this expression is the MRS between the mean and the standard
deviation, that is, the slope of an indiﬀerence curve. The RHS can be seen to
be the slope of the budget line since the budget is a mix of two points, (r
f
, 0)
and (r
m
, σ
m
), which implies that the tradeoﬀ of mean for standard deviation
is rise over run: (r
m
−r
f
)/σ
m
.
In a general equilibrium everybody has access to the same market and
the same risk free asset. Thus, everybody who does hold any of the market
will have the same MRS — a result analogous to the fact that in our usual
general equilibria everybody will have the same MRS. Of course, this is just
a requirement of Pareto Optimality.
In this discussion we had but one risky asset. In reality there are many.
As promised, we derive here the justiﬁcation for considering only the socalled
market portfolio. The basic idea is simple. Assume a set of risky assets.
Since we are operating in a two dimensional space of mean versus standard
deviation, one can certainly ask what combination of assets (also known as a
portfolio) will yield the highest mean for a given standard deviation, or, which
is often easier to compute, the lowest standard deviation for a given mean.
Uncertainty 83
0
5
10
15
20
0 5 10 15 20
mean
standard dev.
risk free
market
risk free
market
risk free
market
risk free
market
risk free
market
risk free
market
risk free
market
risk free
market
risk free
market
risk free
market
risk free
market
risk free
market
budget
eﬃcient risk
Figure 4.7: Eﬃcient Portfolios and the Market Portfolio
Let x denote the vector of shares in each of the assets so that
¸
I
x = 1.
Deﬁne the mean and standard deviation of the portfolio x as µ(x) and σ(x).
Then it is possible to solve
max
x
¦µ(x) +λ(s −σ(x))¦ or
min
x
¦σ(x) +λ(m−µ(x)¦ .
For each value of the constraint there will be a portfolio (an allocation of
wealth across the diﬀerent risky assets) which achieves the optimum. It
turns out (for reasons which I do not want to get into here: take a ﬁnance
course or do the math) that this will lead to a frontier which is concave to
the standard variation axis.
Now, by derivation, any proportion of wealth which is held in risky assets
will be held according to one of these portfolios. But which one? Well, the
consumer can combine any one of these assets with the risk free asset in order
to arrive at the ﬁnal portfolio. Since this is just a linear combination, the
resulting “budget line” will be a straight line and have a positive slope of
(µ
x
− µ
f
)/σ
x
, where µ
x
, σ
x
are drawn from the eﬃcient frontier. A simple
diagram suﬃces to convince us that the highest budget will be that which
is just tangent to the eﬃcient frontier. The point of tangency deﬁnes the
market portfolio we used above!
Note a couple more things from this diagram. First of all it is clear from
the indiﬀerence curves drawn that the consumer gains from the availability of
risky assets on the one hand, and of the riskfree asset on the other. Second,
the market portfolio with which the risk free asset will be mixed will depend
on the return from the risk free asset. Imagine sliding the risk free return
84 LA. Busch, Microeconomics May2004
up in the diagram. The budget line would have to become ﬂatter and this
means a tangency to the eﬃciency locus further to the right. Finally, the
precise mix between market and risk free will depend on preferences. Indeed,
there might be people who would want to have more than their entire wealth
in the market portfolio, that is, the tangency to the budget would occur to
the right of the market portfolio. This requires a “leveraged” portfolio in
which the consumer is allowed to hold negative amounts of certain assets
(shortselling.)
4.5.5 CAPM
The above shows that all investors face the same price of risk (in terms of
the variance increase for a given increase in the mean.) It does not tell us
anything about the riskreturn tradeoﬀ for any given asset, however. Any
risk unrelated to the market risk can be diversiﬁed away in this setting,
however, so that any unsystematic risk will not attract any excess returns.
An asset must, however, earn higher returns to the extent that it contributes
to the market risk, since if it did not it would not be held in the market
portfolio. Consideration of this problem in more detail (assume a portfolio
with a small amount of this asset held and the rest in the market, compute
the inﬂuence of the asset on the portfolio return and variance, rearrange)
will yield the famous CAPM equation involving the asset’s ‘beta’ a number
which is published in the ﬁnancial press:
µ
x
= µ
f
+ (µ
m
−µ
f
)
σ
X,M
σ
2
M
.
Here x denotes asset x, m the market, f the risk free asset, and σ
X,M
is the
covariance between asset x and the market. The ratio
σ
X,M
σ
2
M
is referred to as
the asset’s beta.
4.6 Review Problems
Question 1: Demonstrate that all riskaverse consumers would prefer an
investment yielding wealth levels 24, 20, 16 with equal probability to one
with wealth levels 24 or 16 with equal probability.
Question 2: Compute the certainty equivalent for an expected utility maxi
mizing consumer with (Bernoulli) utility function u(w) =
√
w facing a gamble
over $3600 with probability α and $6400 with probability 1 −α.
Uncertainty 85
Question 3: Determine at what wealth levels a consumer with (Bernoulli)
utility function u(w) = lnw has the same absolute risk aversion as a consumer
with (Bernoulli) utility function u(w) = 2
√
w. How do their relative risk
aversions compare at that level. What does that mean?
Question 4: Consider an environment with two states — call them rain,
R, and shine, S, — with the probability of state R occurring known to
be π. Assume that there exist two consumers who are both riskaverse,
vNM expected utility maximizers. Assume further that the endowment of
consumer A is (10, 5) — denoting 10 units of the consumption good in the
case of state R and 5 units in the case of S — and that the endowment
of consumer B is (5, 10). What are the equilibrium allocation and price?
(Provide a well labelled diagram and supporting arguments for any assertions
you make.)
Question 5: Anna has $10,000 to invest and wants to invest it all in one or
both of the following assets: Asset G is genetechnology stock, while asset
B is stock in a bibleprinting business. There are only two states of nature
to worry about, both of which occur with equal probability. One is that
genetechnology is approved and ﬂourishes, in which case the return of asset
B is 0% while the return of asset G is 80%. The other is that religious
fundamentalism takes hold, and genetechnology is severely restricted. In
that case the return of asset B will be 40% but asset G has a return of (
40%). Anna is a riskaverse expectedutility maximizer with preferences over
wealth represented by u(w).
a) State Anna’s choice problem mathematically.
b) What proportion of the $10,000 will she invest in the genetechnology
stock? (I.e. solve the above maximization problem.)
Question 6
∗
: Assume that consumers can be described by the following
preferences: they are risk averse over wealth levels, but they enjoy gambling
for its consumption attributes (i.e., while they dislike the risk on wealth which
gambling implies, they get some utility out of partaking in the excitement
(say, they get utility out of daydreaming about what they could do if they
won.)) Let us further assume that consumers only diﬀer with respect to their
initial wealth level and their consumption utility from gambling, but that all
consumers have the same preferences over wealth. In order to simplify these
preferences further, assume that wealth and gambling are separable. We can
represent these preferences over wealth, w, and gambling, g ∈ ¦0, 1¦ by some
u(w, g) = v(w) +µ
i
g, where v(w) is strictly concave, and µ
i
is an individual
parameter for each consumer. Finally, they are assumed to be expected util
ity maximizers.
a) Assume for now that the gambling is engaged in via a lottery, in
86 LA. Busch, Microeconomics May2004
which consumers pay a ﬁxed price p for one lottery ticket, and the ticket is
either a “Try Again” (no prize) or “Winner” (Prize won.) Also assume for
simplicity that they can either gamble once or not at all (i.e., each consumer
can only buy one ticket.)
i) First verify that in such an environment the government can make
positive proﬁts. (I.e., verify that some consumers will buy a ticket even if
the ticket price exceeds the expected value of the ticket.
ii) If consumers have identical µ
i
, how does the participation of con
sumers then depend on their initial wealth level if their preferences exhibit
¦constant[ decreasing¦ ¦absolute [relative¦ risk aversion? (The above nota
tion means: consider all (sensible) permutations.)
iii) Assume now that preferences are characterized by decreasing ab
solute and constant relative risk aversion. Verify that the utility functions
v(w) = ln w and v(w) =
√
w satisfy this assumption. Also assume that the
consumption enjoyment of gambling is decreasing in wealth, that is, con
sumers with high initial wealth have low µ
i
. (They know what pain it is to
be rich and don’t daydream as much about it.) Who would gamble then?
Question 7
∗
: Assume that a worker only cares about the income he gen
erates from working and the eﬀort level he expends at work. Also assume
that the worker is risk averse over income generated and dislikes eﬀort. His
preferences can be represented by u(w, e) =
√
w −e
2
. The worker generates
income by accepting a contract which speciﬁes an eﬀort level e and the asso
ciated wage rate w(e). (Since leisure time does not enter in his preferences he
will work full time (supply work inelastically) and we can normalize the wage
rate to be per period income.) The eﬀort level of the worker is not observed
directly by the ﬁrm, and thus the worker has an incentive to expend as little
eﬀort as possible. The ﬁrm, however, cares about the eﬀort expended, since
it aﬀects the marginal product it gets from employing the worker. It can
conduct random tests of the worker’s eﬀort level with some probability α.
These tests reveal the true eﬀort level employed by the worker. If the worker
is not tested, then it is assumed that he did indeed work at the speciﬁed
eﬀort level and will receive the contracted wage. If he is tested and found
to have shirked (not supplied the correct eﬀort level) then a penalty p is
assessed and deducted from the wage of the worker.
a) What relationship has to hold between w(e), e, α and p in order for a
worker to provide the correct eﬀort level if p is a constant (i.e., not dependent
on either the contracted nor the observed eﬀort level)?
b) If we can make p depend on the actual deviation in eﬀort which we
observe, what relationship has to be satisﬁed then?
c) Is there any economic insight hidden in this? Think about how these
problems would change if the probability of detection somehow depended on
Uncertainty 87
the eﬀort level (i.e., the more I deviate from the correct eﬀort level, the more
likely I might be caught.)
Question 8: A consumer is risk averse. She is faced with an uncertain
consumption in the future, since she faces the possibility of an accident.
Accidents occur with probability π. If an accident occurs, it is either really
bad, in which case she looses B, or minor, in which case her loss is M < B.
Bad accidents represent only 1/5 of all accidents, all other accidents are
minor. Without the accident her consumption would be W > B.
a) Derive her optimal insurance purchases if the magnitude of the loss is
publicly observable and veriﬁable and insurance companies make zero proﬁts.
More explicitly, if the type of accident is veriﬁable then a contract can be
written contingent on the type of loss. The problem thus is equivalent to a
problem where there are two types of accident, each of which occurs with a
diﬀerent probability and can be insured for separately at fair premiums.
b) Now consider the case if the amount of loss is private information. In
this case only the fact that there was an accident can be veriﬁed (and hence
contracted on), but the insurer cannot verify if the loss was minor or major,
and hence pays only one ﬁxed amount for any kind of accident. Assume
zero proﬁts for the insurer, as before, and show that the consumer now over
insures for the minor loss but underinsures for the bad loss, and that her
utility thus is lowered. (Note that the informational distortion therefore leads
to a welfare loss.)
Question 9: Assume a meanvariance utility model, and let µ denote the
expected level of wealth, and σ its variance. Take the boundary of the eﬃcient
risky asset portfolios to be given by µ =
√
σ −16. Assume further that there
exists a riskfree asset which has mean zero and standard deviation zero (if
this bothers you, you can imagine that this is actually measuring the increase
in wealth above current levels.) Let there be two consumers who have mean
variance utility given by u(µ, σ) = µ−
σ
2
64
and u(µ, σ) = 3µ−
σ
2
64
respectively.
Derive their optimal portfolio choice and contrast their decisions.
Question 10: Fact 1: The asset pricing formula derived in class states that
E
˜
R
i
= R
0
−
Cov(U
( ˜ w),
˜
R
i
)
EU
( ˜ w)
.
Fact 2: A disability insurance contract can be viewed as an asset which pays
some amount of money in the case the insured is unable to generate income
from work.
Use Facts 1 and 2 above to explain why disability insurance can have
88 LA. Busch, Microeconomics May2004
a negative return, (that is, why the price of the contract may exceed the
expected value of the payment) if viewed as an asset in this way.
Question 11: TRUE/FALSE/UNCERTAIN: Provide justiﬁcation for
your answers via proof, counterexample or argument.
1) The optimal amount of insurance a riskloving consumer purchases
is characterized by the First Order Condition of his utility maximization
problem, which indicates a tangency between an indiﬀerence curve and a
budget line.
2) For a consumer who is initially a borrower, the utility level will
deﬁnitely fall if interest rates increase substantially.
3) The utility functions
(3000 ∗ lnx
1
+ 6000 ∗ lnx
2
)
12
+ 2462 and Exp(x
1/3
1
x
2/3
2
)
represent the same consumer preferences.
4) A riskaverse consumer will never pay more than the expected value
of a gamble for the right to participate in the gamble. A risklover would, on
the other hand.
5) The market rate of return is 15%. The stock of Gargleblaster Inc. is
known to increase to $117 next period, and is currently trading for $90. This
market (and the current stock price of Gargleblaster Inc.) is in equilibrium.
6) Under RiskVariance utility functions, all consumers who actually
hold both the riskfree asset and the risky asset will have the same Marginal
Rate of Substitution between the mean and the variance, but may not have
the same investment allocation.
Question 12
∗
: Suppose workers have identical preferences over wealth only,
which can be represented by the utility function u(w) = 2
√
w. Workers are
also known to be expected utility maximizers. There are three kinds of jobs
in the economy. One is a government desk job paying $40,000.00 a year. This
job has no risk of accidents associated with it. The second is a busdriver.
In this job there is a risk of accidents. The wage is $44,100.00 and if there is
an accident the monetary loss is $11,700.00. Finally a worker could work on
an oil rig. These jobs pay $122,500.00 and have a 50% accident probability.
These are all market wages, that is, all these jobs are actually performed in
equilibrium.
a) What is the probability of accidents in the bus driver occupation?
b) What is the loss suﬀered by an oil rig worker if an accident occurs
there?
c) Suppose now that the government institutes a workers’ compensation
scheme. This is essentially an insurance scheme where each job pays a fair
premium for its accident risk. Suppose that workers can buy this insurance
Uncertainty 89
in arbitrary amounts at these premiums. What will the new equilibrium
wages for busdrivers and oil rig workers be? Who gains from the workers’
compensation?
d) Now suppose instead that the government decides to charge only one
premium for everybody. Suppose that of the workers in risky jobs 40% are
oil rig workers and 60% are bus drivers. Suppose that they can buy as much
or little insurance as they wish. How much insurance do the two groups buy?
Who is better oﬀ, who is worse oﬀ in this case (at the old wages)? Can we
say what the new equilibrium wages would have to be?
Question 13
∗
: Prove that stateindependent expected utility is homothetic
if the consumer exhibits constant relative risk aversion. (This question arises
since indiﬀerence curves do have the same slope along the certainty line. So
could they have the same slope along any ray from the origin? In that case
they would be homothetic.)
90 LA. Busch, Microeconomics May2004
Chapter 5
Information
In the standard model of consumer choice discussed in chapter 1, as well as
the model of uncertainty developed in chapter 3, it was assumed that the
decision maker knows all relevant information. In chapter 1 this meant that
the consumer knows the price of all goods, as well as the precise features of
each good (all characteristics relevant to the consumer.) In chapter 3 this in
particular implied that the consumer has information about the probabilities
of states or outcomes. Not only that, this information is symmetric, so that
all parties to a transaction have the same information. Hence in chapter 3
the explicit assumption that the insurance provider has the same knowledge
of the probabilities as the consumer.
What if these assumptions fail? What if there is no complete and sym
metric information? Fundamentally, one of the key problems is asymmetric
information — when one party to a transaction knows something relevant
to the transaction which the other party does not know. This quite clearly
will lead to problems, since Pareto eﬃciency necessitates that all available
information is properly incorporated. Consider, for example, the famous
“Lemon’s Problem”:
1
Suppose a seller knows the quality of her used car,
which is either high or low. The seller attaches values of $5000 or $1000 to
the two types of car, respectively. Buyers do not know the quality of a used
car and have no way to determine it before purchase. Buyers value good used
cars at $6000 and bad used cars at $2000. Note that it is Pareto eﬃcient
in either case for the car to be sold. Will the market mechanism work in
this case? Suppose that it is known by everybody that half of all cars are
good, and half are bad. To keep it simple, suppose buyers and sellers are risk
1
This kind of example is due to Akerlof (1970) Quarterly Journal of Economics, a paper
which has changed economics.
91
92 LA. Busch, Microeconomics May2004
neutral. Buyers would then be willing to pay at most $4000 for a gamble
with even odds in which they either receive a good or a bad used car. Now
suppose that this were the market price for used cars. At this price only bad
cars are actually oﬀered for sale, since good car buyers rather keep theirs,
since their valuation of their own car exceeds the price. It follows that this
price cannot be an equilibrium price. It is fairly easy to verify that the only
equilibrium would have bad cars trade at a price somewhere between $1000
and $2000 while good cars are not traded at all. This is not Pareto eﬃcient.
Of course, it is not necessary that there be asymmetric information.
Suppose that there exist many restaurants, each oﬀering slightly diﬀerent
combinations of food and ambiance. Consumers have tastes over the char
acteristics of the meals (how spicy, what kind of meat, if any meat at all;
Italian, French, eastern European, Japanese, Egyptian, etc.) as well as the
kind of restaurant (formal, romantic, authentic, etc.) as well as the general
quality of the cooking within each category. In a Pareto eﬃcient general
equilibrium each consumer must frequent (subject to capacity constraints)
the most preferred restaurant, or if that is full the next preferred one. Can
we expect this to be the equilibrium?
2
To see why the general answer may
be “No” consider a risk averse consumer in an environment where a dining
experience is necessary in order to ﬁnd out all relevant characteristics of a
restaurant. Every visit to a new place carries with it the risk of a really
unpleasant experience. If the expected beneﬁt of ﬁnding a better place than
the one currently known does not exceed the expected cost of having a bad
experience, the consumer will not try a new restaurant, and hence will not
ﬁnd the best match!
What seems to be important then, are two aspects of information: One,
can all relevant information be acquired before the transactions is com
pleted?; Two, is the information symmetric or asymmetric?
Aside from a classiﬁcation of problems into asymmetric or symmetric in
formation, it is common to distinguish between three classes of goods, based
on the kind of informational problems they present: search goods, experience
goods, and (less frequently) faith goods. A search good is one for which the
consumer is lacking some information, be it price or some attribute of the
good, which can be fully determined before purchase of the good. Anytime
2
This is a question similar to one very important in labour economics: are workers
matched to the correct jobs, that is, are the characteristics of workers properly matched
to the required characteristics of the job? These kind of problems are analysed in the large
matching literature. In this literature you will ﬁnd interesting papers on stable marriages
— is everybody married to their most preferred partner, or could a small coalition swap
partners and increase their welfare?
Information 93
the consumer can discover all relevant aspects before purchasing we speak of
a search good. Supposing that search is costly, which seems reasonable, we
can then model the optimal search behaviour of consumers by considering
the (marginal) beneﬁts and (marginal) costs of search. Applying such think
ing to labour markets we can study the eﬃciency eﬀects of unemployment
insurance; or we can apply it to advertising, which for such goods focuses
on supplying the missing information, and is therefore possibly eﬃciency
enhancing.
For some goods such a determination of all relevant characteristics may
not be possible. Nobody can explain to you how something tastes, for ex
ample; you will have to consume the good to ﬁnd out. Similarly for issues
of quality. Inasmuch as this refers to how long a durable good lasts, this
can only be determined by consuming the good and seeing when (and if)
it breaks. Such goods are called experience goods. The consumer needs
to purchase the good, but the purchase and consumption of the good (or
service!) will fully inform the consumer. This situation naturally leads to
consumers trying a good, but maybe not necessarily ﬁnding the best match
for them. Advertising will be designed to make the consumer try the product
— free samples could be used.
3
Why would the consumer not necessarily ﬁnd
the best product? If there is a cost to unsatisfactory consumption experi
ences this will naturally arise. As in the restaurant example above. Similar
examples can be constructed for hair cuts, and many other services.
What if the consumer never ﬁnds out if the product performs its func
tion? This is the natural situation for many medical and religious services.
The consumer will not discover the full implications until it is (presumably)
too late. Such goods are termed faith goods and present tremendous prob
lems to markets. In our current society the spiritual health of consumers is
not judged to be important, and so the market failure for religious services is
not addressed.
4
Health, in contrast, is judged important — since consumers
value it, for one, and since there are large externalities in a society with a
social safety net — and thus there is extensive regulation for health care
services in most societies.
5
Since education also has certain attributes of a
3
It used to be legal for cigarette manufacturers to distribute “sample packs” for free,
allowing consumers to experience the good without cost. The fact that nicotine is addictive
to some is only helpful in this regard, as any local drug pusher knows: they also hand out
free sample packs in schools.
4
A convincing argument can be made that societies which prescribe one state religion
do so not in an attempt to maximize consumer welfare but tax revenue and control. The
Inquisition, for example, probably had little to do with a concern for the welfare of heretics.
Note also that I am speaking especially with respect to religions in which the “afterlife”
plays a large role. We will encounter them again in the chapter on game theory.
5
Interesting issues arise when the provision of health care services is combined with the
94 LA. Busch, Microeconomics May2004
faith good — in the sense that it is very costly to unlearn, or to repeat the
education — we also see strong regulation of education in most societies.
6
Note that religion and health care probably diﬀer in another dimen
sion: in health care there is information asymmetry between provider and
consumer; it may be argued that in religion both parties are equally unin
formed.
7
Presumably the fact that information is asymmetric makes it easier
to exploit the uninformed party on the one hand, and to regulate on the
other.
8
The government attempts to combat this informational asymmetry
by certifying the supply.
Aside from all the above, there are additional problems with informa
tion. These days everybody speaks of the “information economy”. Clearly
information is supposed to have some kind of value and therefore should
have a price. However, while that is clear conceptually, it is far from easy
to incorporate information into our models. Information is not a good like
most others. For one, it is hard to measure. There are, of course, mea
surements which have been derived in communications theory — but they
often measure the amount of meaningful signals versus some noise (as in the
transmission of messages.) These measurements measure if messages have
been sent, and how many. Economists, in contrast, are concerned with what
kind of message actually contains relevant information and what kind may be
vacuous. Much care is therefore taken to deﬁne the informational context of
any given decision problem (we will encounter this again in the game theory
part, where we will use the term information set to denote all situations
in which the same information has been gathered, loosely speaking.)
Aside from the problem of deﬁning and measuring information, it is also
a special good since it is not rivalrous (a concept you may have encountered
in ECON 301): the fact that I possess some information in no way impedes
your ability to have the same information. Furthermore, the fact that I
“consume” the information somehow (let’s say by acting on it) does not stop
you from consuming it or me from using it again later. There are therefore
provision of spiritual services.
6
This is regulation for economically justiﬁable reasons. Because education is so costly
to undo or repeat it is also frequently meddled with for “societal engineering” reasons.
7
I am not trying to belittle beliefs here, just pointing to the fact that while a medical
doctor may actually know if a prescribed treatment works (while the patient does not),
neither the religious oﬃcial or the follower of a faith know if the actions prescribed by the
faith will “work”.
8
Note the weight loss and aphrodisiac markets, or cosmetics, for example. Little dif
ference between these and the “snake oil cures” of the past seems to exist. Regulation can
take the simple form that only “veriﬁable” claims may be made.
Information 95
large public good aspects to information which require special consideration.
9
The long and short of this is that standard demandsupply models often don’t
work, and that markets will in general misallocate if information is involved,
which makes it even more important to have a good working model. A
complete study of this topic is outside the scope of these notes, however. In
what follows we will only outline some speciﬁc issues in further detail.
5.1 Search
One of the key breakthroughs in the economics of information was a simple
model of information acquisition. The basic idea is a simple one (as all
good ideas are.) A consumer lacks information — say about the price at
which a good may be bought. Clearly it is in the consumer’s interest not to
be “ﬂeeced,” which requires him to have some idea about what the market
price is. In general the consumer will not know what the lowest price for
the product is, but can go to diﬀerent stores and ﬁnd out what their price
is. The more stores the consumer visits the better his idea about what the
correct price might be — the better his information — but of course the
higher his cost, since he has to visit all these stores. An optimizing consumer
may be expected to continue to ﬁnd new (hopefully) lower prices as long
as the marginal beneﬁt of doing so exceeds the marginal cost of doing so.
Therefore we “just” need to deﬁne beneﬁts and costs and can then apply our
standard answer that the optimum is achieved if the marginal cost equals
the marginal beneﬁt!
The problem with this is the fact that we will have to get into sampling
distributions and other such details (most of which will not concern us here)
to do this right. The reason for this is that the best way to model this
kind of problem is as the consumer purchasing messages (normally: prices)
which are drawn from some distribution. For example: let us say that the
consumer knows that prices are distributed according to some cumulative
distribution function F(z) =
z
0
f(p)dp, where f(p) is the (known) density. If
the consumer where to obtain n price samples (draws) from this distribution
9
This is the problem in the Patent protection ﬁght: Once somebody has invented
(created the information for) a new drug the knowledge should be freely available — but
this ignores the general equilibrium question of where the information came from. The
incentives for its creation will depend on what property rights are enforceable later on.
For while two ﬁrms both may use the information, a ﬁrm may only really proﬁt from it if
it is the sole user.
96 LA. Busch, Microeconomics May2004
then the probability that any given sample (say p
0
) is the lowest will be
[1 −F(p
0
)]
n−1
f(p
0
).
(This formula should make sense to you from Econometrics or Statistics: we
are dealing with n independent random variables.) From this it follows that
the expected value of the lowest price after having taken a sample of n prices
is
p
n
low
=
∞
0
p[1 −F(p)]
n−1
f(p)dp.
Note that this expected lowest sample decreases as n increases, but at a
decreasing rate: the diﬀerence between sampling n times and n −1 times is
p
n
low
−p
n−1
low
= −
∞
0
pF(p)[1 −F(p)]
n−2
f(p)dp < 0.
So additional sampling leads to a beneﬁt (lower expected price) but with a
diminishing margin.
What about cost? Even with constant (marginal) cost of sampling we
would have a well deﬁned problem and it is easy to see that individuals
with higher search costs will have lower sample sizes and thus pay higher ex
pected prices. Also note that the lowest price paid is still a random variable,
and hence consumers do not buy at the same price (which is ineﬃcient, in
general!) Computing the variance you would observe that dispersion of the
lowest price is decreasing in n — that means that the lower the search costs
the ‘better’ the market can be expected to work. Indeed, competition policy
is concerned with this fact in some places and attempts to generate rules
which require that prices be posted (lowering the cost of search).
Any discussion of search would be lacking if we did not point out that
search is normally sequential. In the above approach n messages where
bought, with n predetermined. This is the equivalent of visiting n stores
for a quote irrespective of what the received quotes are. The dynamic prob
lem is the much more interesting one and has been quite well studied. We
will attempt to distill the most important point and demonstrate it by way
of a fairly simple example. The key insight into this kind of problem is that
it is often optimal to employ a stopping rule.
10
That is, to continue sam
pling until a price has been obtained which is below some preset limit, at
which point search is abandoned and the transaction occurs (a sort of “good
enough” attitude.) The price at which one decides to stop is the reservation
10
It does depend on the distributional assumptions we make on the samples — inde
pendence makes what follows true.
Information 97
price — the highest price one is willing to pay! In order to derive this price
we will have to specify if one can ‘go back’ to previous prices, or if the trade
will have fallen through if one walks away. The latter is the typical setup in
the matching literature in labour economics, the former is a bit easier and
we will consider it ﬁrst.
Suppose the cost of another sample (search eﬀort) is c. The outcome of
the sample is a new price p, which will only be of beneﬁt if it is lower than
the currently known minimum price. Evaluated at the optimal reservation
price p
R
, the expected gain from an additional sample is therefore the savings
p
R
−p, “expected over” all prices p < p
R
. If these expected savings are equal
to the cost of the additional sample, then the consumer is just indiﬀerent
between buying another sample or not, and thus the reservation price is
found:
p
R
satisﬁes
p
R
0
(p
r
−p)f(p)dp = c.
Next, consider the labour market, where unemployed workers are search
ing for jobs. This is the slightly more complex case where the consumer
cannot return to a past oﬀer. Also, the objective is to ﬁnd the highest price.
First determine the value of search, V , which is composed of the cost, the
expected gain above the wage currently on the table, and the fact that a
lower wage might arise which would indicate that another search is needed.
Thus, assuming a linear utility function for simplicity (no risk aversion), and
letting p stand for wages
V = −c +
∞
p
R
pf(p)dp +V
p
R
0
f(p)dp =⇒ V =
−c +
∞
p
R
pf(p)dp
1 −F(p
R
)
.
Note that I assume stationarity here and the fact that one p
R
will do (i.e.,
the fact that the reservation wage is independent of how long the search has
been going on.) All of these things ought to be shown for a formal model.
We now will ask what choice of p
R
will maximize the value above (which is
the expected utility from searching.) Taking a derivative we get
−p
R
f(p
R
)(1 −F(p
R
)) +f(p
R
)(−c +
∞
p
R
pf(p)dp)
(1 −F(p
R
))
2
= 0.
Simplifying and rearranging we obtain
∞
p
R
pf(p)dp −p
R
= c −p
R
F(p
R
).
The LHS of this expression is the expected increase in the wage, the RHS
is the cost of search, which consists of the actual cost of search and the fact
98 LA. Busch, Microeconomics May2004
that the current wage is foregone if a lower value is obtained. What now will
be the eﬀect of a decrease in the search cost c, for example if the government
decides to subsidize nonworking (searching) workers? This would lower c,
and the LHS would have to fall to compensate, which will occur only if a
higher p
R
is chosen. Of course, a higher p
R
lowers the RHS further. In
mathematical terms it is easy to compute that
dp
R
dc
=
−1
1 −F(p
R
)
.
Unemployment insurance will increase the reservation wage (and thus unem
ployment — note the pun: it ensures unemployment!) The reason is that our
workers can be more choosy and search for the “right” job. They become
more discriminating in their job search. Note that this is not necessarily
bad. If the quality of the match (suitability of worker and ﬁrm with each
other) is reﬂected in a higher wage, then this leads to better matches (fewer
Ph.D. cab drivers). This may well be desirable for the general equilibrium
eﬃciency properties of the model.
Now, this model is quite simplistic. More advanced models might take
into account eligibility rules. In those models unemployment insurance can
be shown to cause some workers to take bad jobs (because that way they can
qualify for more insurance later.) Similar models can also be used to analyse
other matching markets. The market for medical interns comes to mind, or
the marriage market.
In closing let us note that certain forms of advertising will lower search
costs (since consumers now can determine cheaply who has what for sale
at which price) and thus are eﬃciency enhancing (less search, less resources
spent on search, and lower price dispersion in the market.) Other forms of
advertising (image advertising) do not have this function, however, and will
have to be looked at in a diﬀerent framework. This is where the distinction
between search goods and experience goods comes in.
5.2 Adverse Selection
Most problems which will concern us in this course are actually of a diﬀerent
nature than the search for information above. What we are interested in
most are the problems which arise because information is asymmetric. This
means that two parties to a transaction do not have the same information,
as is the case if the seller knows more about the quality (or lack thereof) of
Information 99
0
5
10
15
20
0 5 10 15 20
l
o
s
s
no loss
endowment *
bad
good
cert.
Figure 5.1: Insurance for Two types
his good than the buyer, or if the buyer knows more about the value of the
good to himself than the seller. In these types of environments we run into
two well known problems, that of adverse selection (which you may think
of as “hidden property”) and moral hazard (“hidden action.”) We will deal
with the former in this section.
Adverse selection lies at the heart of Akerlof’s Lemons Problem. These
kind of markets lead naturally to the question if the informed party can
send some sort of signal to reveal information. But how could they do so?
Simply stating the fact that, say, the car is of good quality will not do, since
such a statement is free and would also be made by sellers of bad cars (it is
not incentive compatible.) Sometimes this problem can be ﬁxed via the
provision of a warranty, since a warranty makes the statement that the car is
good more costly to sellers of bad cars than of cars which are, in fact, good.
Let us examine these issues in our insurance model. Assume two states,
good and bad, and assume two individuals who both have wealth endowment
(w
g
, w
b
); w
g
> w
b
. Suppose that these individuals are indistinguishable to the
insurance company, but that one individual is a good risk type who has a
probability of the good state occurring of π
H
, while the other is a bad risk
type with probability of the good state of only π
L
< π
H
. To be stereotypical
and simplify the presentation below, assume that the good risk is female, the
bad risk male, so that grammatical pronouns can distinguish types in what
follows. As we have seen before, the individuals’ indiﬀerence curves will have
a slope of −π
i
/(1 −π
i
) on the certainty line.
100 LA. Busch, Microeconomics May2004
If there were full information both types could buy insurance at fair
premiums and would choose to be on the certainty line on their respective
budget lines (with the high risk type on a lower budget line and with a lower
consumption in each state.) However, with asymmetric information this is
not a possible outcome. The bad risk cannot be distinguished from the good
risk a priori and therefore can buy at the good risk premium. Now, if he were
to maximize at this premium we know that he would over insure — and this
action would then distinguish him from the good risk. The best he can do
without giving himself away is to buy the same insurance coverage that our
good risk would buy, in other words to mimic her. Thus both would attempt
to buy full insurance for a good type.
We now have to ask if this is a possible equilibrium outcome. The answer
is NO, since the insurance company now would make zero (expected) proﬁts
on her insurance contract, but would loose money on his insurance contract.
Consider the proﬁt function (and recall that p
i
= π
i
):
(1 −π
H
)(w
g
−w
b
) −(1 −π
L
)(w
g
−w
b
) < 0.
Foreseeing this fact, the insurance company would refuse to sell insurance at
these prices. Well then, what is the equilibrium in such a market? There
seem to be two options: either both types buy the same insurance (this
is called pooling behaviour) or they buy diﬀerent contracts (this is called
separating behaviour.)
Does there exist a pooling equilibrium in this market? Consider a larger
market with many individuals of each of the two types. Let f
H
denote the
proportion of the good types (π
H
) in the market. An insurer would be
making zero expected proﬁts from selling a common policy for coverage of I
at a premium of p to all types if and only if
f
H
(π
H
pI −(1 −π
H
)(1 −p)I) + (1 −f
H
)(π
L
pI −(1 −π
L
)(1 −p)I) = 0.
This requires a premium
p = (1 −π
L
) −f
H
(π
H
−π
L
).
Note that at this premium the good types subsidize the bad types, since the
former pay too high a premium, the latter too low a premium. In Figure
5.2, this premium is indicated by a zero proﬁt locus (identical to the con
sumers’ budget) at an intermediate slope (labelled ‘market’.) Any proposed
equilibrium with pooling would have to lie on this line and be better than
no insurance for both types. Such a point might be point M in the ﬁgure.
However, this cannot be an equilibrium. In order for it to be an equilibrium
Information 101
0
5
10
15
20
0 5 10 15 20
l
o
s
s
no loss
endowment *
bad
good
market
* D
M *
cert.
Figure 5.2: Impossibility of Pooling Equilibria
nobody must have any actions available which they could carry out and pre
fer to what is proposed. Consider, then, any point in the area to the right of
the zero proﬁt line, below the bad type’s indiﬀerence curve, and above the
good type’s indiﬀerence curve: a point such as D. This contract, if proposed
by an insurance company, oﬀers less insurance but at a better price. Only
the good type would be willing to take it (she would end up on a higher
indiﬀerence curve). Since it is not all the way over on the good type zero
proﬁt line, the insurance company would make strictly positive proﬁts. Of
course, all bad types would remain at the old contract M, and since this
is above the zero proﬁt line for bad types whoever sold them this insurance
would make losses. Notice that the same arguments hold whatever the initial
point. It follows that a pooling equilibrium cannot exist.
Well then, does a separating equilibrium exist? We now would need
two contracts, one of which is taken by all bad types and the other by all
good types. Insurance oﬀerers would have to make zero expected proﬁts. Of
course, since each type takes a diﬀerent contract the insurer will know who
is who from their behaviour. This suggests that we look at contracts which
insure the bad risks fully at fair prices. For the bad risk types to accept
this type of contract the contract oﬀered to the good risk type must be on
a lower indiﬀerence curve. It must also be on the fair odds line for good
types for there to be zero proﬁts. Finally it must be acceptable to the good
types. This suggests a point such as A in the Figure 5.3. There now are
two potential problems with this. One is that the insurer and insured of
good type would like to move to a diﬀerent point, such as B, after the type
102 LA. Busch, Microeconomics May2004
0
5
10
15
20
0 5 10 15 20
l
o
s
s
no loss
endowment *
bad
good
market
* A
* B
cert.
Figure 5.3: Separating Contracts could be possible
is revealed. But if that would indeed happen then it can’t, in equilibrium,
because the bad types would foresee it and pretend to be good in order then
to be mistaken for good. The other problem is that the stability of such
separating equilibria depends on the mix of types in the market. If, as in the
Figure 5.3, the market has a lot of bad types this kind of equilibrium works.
But what if there are only a few bad types? In that case an insurer could
deviate from our proposed contract and oﬀer a pooling contract which makes
strictly positive proﬁts and is accepted by all in favour over the separating
contract. This is point C in Figure 5.4. Of course, while this deviation
destroys our proposed equilibrium it is itself not an equilibrium (we already
know that no pooling contract can be an equilibrium.) This shows that a
small proportion of bad risks who can’t be identiﬁed can destroy the market
completely!
Notice the implications of these ﬁndings on life or health insurance mar
kets when there are people with terminal diseases such as AIDS or Hepatitis
C, or various forms of cancer. The patient may well know that he has this
problem, but the insurance company sure does not. Of course, it could re
quire testing in order to determine the risk it will be exposed to. Our model
shows that doing so would make sure that everybody has access to insur
ance at fair rates and that there is no crosssubsidization. This is clearly
eﬃcient. However, it will reduce the welfare of consumers with these dis
eases. Indeed, given that AIDS, for example, means a very high probability
of seriously expensive treatment, the insurance premiums would be very high
(justiﬁably so, by the way, since the risk is high.) What ought society do
Information 103
0
5
10
15
20
0 5 10 15 20
l
o
s
s
no loss
endowment *
bad
good
market
* A
C *
cert.
Figure 5.4: Separating Contracts deﬁnitely impossible
about this? Economics is not able to answer that question, but it is able
to show the implications of various policies. For example, banning testing
and forcing insurance companies to bear the risk of insuring such customers
might make the market collapse, in the worst scenario, or will at least lead
to ineﬃciency for the majority of consumers. It would also most likely tend
to lead to “alternative” screening devices — the insurance company might
start estimating consumers’ “lifestyle choices” and then discriminate based
on that information. Is that an improvement? An alternative would be to
let insurance operate at fair rates but to directly subsidize the income of the
aﬀected minority. (This may cause “moral” objections by certain segments
of the population.)
5.3 Moral Hazard
Another problem which can arise in insurance markets, and indeed in any
situation with asymmetric information, is that of hidden actions being taken.
In our insurance example it is often possible for the insured to inﬂuence the
probability of a loss: is the car locked? Are there antitheft devices installed?
Are there theft and ﬁre alarms, sprinklers? Do the tires have enough tread
depth, and are they the correct tire for the season (in the summer a dedicated
summer tire provides superior breaking and road holding to a M&S tire, while
in the winter a proper winter tire is superior.) Some of these actions are
observable and will therefore be included in the conditions of the insurance
104 LA. Busch, Microeconomics May2004
contract. For example, in Germany your insurance will refuse to pay if you
do not have “approved” tires, or if the car was not locked and this can
be established. Indeed, insurance for cars without antitheft devices is now
basically unaﬀordable since theft rates are so high since the ‘iron curtain’
opened.
11
If you insure your car as parked in a closed parking garage and
then leave it out over night your insurance may similarly refuse to pay! Other
actions are not so easily veriﬁed, however. How aggressively do you drive?
How many risks do you decide to take on a given day? This is often not
observable but nevertheless in your control. If the avoidance of accidents is
costly to the insured in any way, then he can be expected to pick the optimal
level of the (unobservable) action — optimal for himself, not the insurance
company, that is.
As a benchmark, let us consider an individual who faces the risk of loss L.
The probability of the loss occurring depends on the amount of preventive
action, A, taken and is denoted π(A). It would appear logical to assume
that π
(A) < 0, π
(A) > 0. The activity costs money, and cost is c(A) with
c
(A) > 0, c
(A) > 0. In the absence of insurance the individual would choose
A so as to
max
A
¦π(A)u(w −L −c(A)) + (1 −π(A))u(w −c(A))¦.
The ﬁrst order condition for this problem is
π
(A)u(w −L −c(A)) −c
(A)π(A)u
(w −L −c(A)) −
π
(A)u(w −c(A)) −c
(A)(1 −π(A))u
(w −c(A)) = 0.
Thus the optimal A
∗
satisﬁes
c
(A
∗
) =
π
(A
∗
)(u(w −L −c(A)) −u(w −c(A
∗
)))
π(A
∗
)u
(w −L −c(A
∗
)) −(1 −π(A
∗
))u
(w −c(A
∗
))
.
Now consider the consumer’s choice when insurance is available. To keep
it simple we will assume that the only available contract has a premium of
π(A
∗
) and is for the total loss L. Note that this would be a fair premium if the
consumer continues to choose the level A
∗
of abatement activity. However,
the maximization problem the consumer now faces becomes (the consumer
will assume the premium ﬁxed)
max
A
¦π(A)u(w −pL −c(A)) + (1 −π(A))u(w −pL −c(A))¦.
The ﬁrst order condition for this problem is
π
(A)u() −c
(A)π(A)u
() −π
(A)u() −c
(A)(1 −π(A))u
() = 0.
11
Yes, I am suggesting that most stolen cars end up in former eastern block countries
— it is a fact.
Information 105
Thus the optimal A
∗
satisﬁes
c
(
ˆ
A) = 0.
Clearly (and expectedly) the consumer will now not engage in the activity at
all. It follows that the probability of a loss is higher and that the insurance
company would loose money. If we were to recompute the level of A at a
fair premium for π(0) we would ﬁnd that the same is true: no eﬀort is taken.
The only way to elicit eﬀort is to expose the consumer to the right incentives:
there must be a reward for the hidden action. A deductible will accomplish
this to some extent and will lead to at least some eﬀort being taken. This
is due to the fact that the consumer continues to face risk (which is costly.)
The amount of eﬀort will in general not be the optimal, however.
5.4 The Principal Agent Problem
To conclude the chapter on information here is an outline of the leading
paradigm for analysing asymmetric information problems. Most times one
(uninformed) party wishes to inﬂuence the behavior of another (informed)
party, the problem can be considered a principalagent problem. This kind
of problem is so frequent that it deserved its own name and there are books
which nearly exclusively deal with it. We will obviously have only a much
shorter introduction to the key issues in what follows.
First note that this is a frequent problem in economics, to say the least.
Second, note that at the root of the problem lies asymmetric information.
We are, generically, concerned with situations in which one party — the
principal — has to rely on some other party — the agent — to do something
which aﬀects the payoﬀ of the ﬁrst party. This in itself is not the problem,
of course. What makes it interesting is that the principal cannot tell if the
agent did what he was asked to do, that is, there is a lack of information,
and that the agent has incentives not to do as asked. In short, there is a
moralhazard problem. The principalagent literature explores this problem
in detail.
5.4.1 The Abstract PA Relationship
We will frame this analysis in its usual setting, which is that of a riskneutral
principal who tries to maximize (expected) proﬁts, and one agent, who will
106 LA. Busch, Microeconomics May2004
have control over (unobservable) actions which inﬂuence proﬁts. Call the
principal the owner and the agent the manager. The agent/manager will be
assumed to have choice of a single item, what we shall call his eﬀort level
e ∈ [e, e]. This eﬀort level will be assumed to inﬂuence the level of proﬁts
before manager compensation, called grossproﬁts, which you may think of as
revenue if there are no other inputs. More general models could have multiple
dimensions at this stage. The manager’s preferences will be over monetary
reward (wages/income) and eﬀort: U(w, e). We assume that the manager is
an expected utility maximizer. Also assume that the manager is riskaverse
(that is u
1
(, ) > 0, u
11
(, ) < 0) and that eﬀort is a bad (u
2
(, ) < 0.)
The owner can not observe the manager’s eﬀort choice. Instead, only
the realization of gross proﬁts is observed. Note at this stage that this in
itself does not yet cause a moralhazard problem, since if the relationship
between eﬀort and proﬁt is known we can simply invert the proﬁt function
to deduce the eﬀort level. Therefore we need to introduce some randomness
into the model. Let ρ be a random variable which also inﬂuences proﬁts and
which is not directly observable by the owner either. It could be known to the
manager, but we will have to assume that it will become known only after the
fact, so that the manager cannot condition his eﬀort choice on the realization
of ρ. Let the relationship between eﬀort, proﬁts and the random variable be
denoted by Π(e, ρ). Note that Π(e, ρ) will be a random variable itself. All
expectations in what follows will be taken with respect to the distribution of
ρ. EΠ(e, ρ), for example, will be the expected proﬁts for eﬀort level e.
Since the owner can only observe proﬁts, the most general wage contract
he can oﬀer the manager will be a function w(Π). This formulation includes
a ﬁxed wage as well as all wage plus bonus schemes, or share contracts (where
the manager gets a ﬁxed share of proﬁts.)
Let us ﬁrst, as a benchmark, determine the eﬃcient level of eﬀort, which
would be provided under full information. In that case we would have to solve
the Pareto problem, that is, solve
max
e,w
¦EΠ(e, ρ) −w s.t. U(w, e) = u¦ .
Here u is a level of utility suﬃcient to make the manager accept the contract.
Note also that in formulating this problem like this I have already used the
knowledge that a riskneutral owner should fully insure a riskaverse manager
by oﬀering a constant wage.
Assuming no corner solutions, it is easy to see that we would want to
set the eﬀort level such that the marginal beneﬁt of eﬀort (in terms of higher
expected proﬁts) is related to the marginal cost (in terms of higher disutility
Information 107
of eﬀort, which will have to be compensated via the wages). In particular,
we need that
EΠ
e
(, ) =
w
U
w
(, )
U
e
(, ).
What if eﬀort is not observable? In that case we will solve the model
“backwards”: given any particular wage contract faced by the manager, we
will determine the manager’s choice. Then we will solve for the optimal
contract, “foreseeing” those choices. Assume in what follows that the owner
wants to support some eﬀort level ˆ e (which is derived from this process.)
So, our manager is faced with two decisions. One is to determine how
much eﬀort to provide given he accepts the contract:
max
e
¦EU(w(Π(e, ρ)), e)¦.
This leads to FOC
E[U
w
(, )w
()Π
e
(, ) +U
e
(, )] = 0,
which determines the optimal e
∗
. The other is the question if to accept the
contract at all, which requires that
max
e
¦EU(w(Π(e, ρ)), e)¦ = EU(w(Π(e
∗
, ρ)), e
∗
) ≥ U
0
.
Here U
0
is the level of utility the manager can obtain if he does not accept the
contract but instead engages in the next best alternative activity (in other
words, it is the manager’s opportunity cost.)
Both of these have special names and roles in the principalagent litera
ture. The latter one is called the participation constraint, or individual
rationality constraint. That is, any contract is constrained by the fact
that the manager must willingly participate in it. Thus, the contract w
∗
(Π)
must satisfy
IR : max
e
¦EU(w(Π(e, ρ)), e)¦ ≥ U
0
.
The other constraint is that the manager’s optimal choice should, in equilib
rium, correspond to what the owner wanted the manager to do. That is, if
the owner wants to elicit eﬀort level ˆ e, then it should be true that the man
ager, in equilibrium, actually supplies that level. This is called the incentive
compatibility constraint. Mathematically it says:
IC : ˆ e = argmax
e
¦EU(w(Π(e, ρ)), e)¦.
Here ‘argmax’ indicates the argument which maximizes. In other words, we
want ˆ e = e
∗
.
108 LA. Busch, Microeconomics May2004
Now we can write down the principal’s problem. The principal ‘simply’
wishes to maximize his own payoﬀ subject to both, the participation and
incentive compatibility constraints. This problem is easily stated
max
e,w(Π)
¦E[Π(e, ρ) −w(Π(e, ρ))
+λ
P
(U
0
−U(w(Π(e, ρ)), e))
+ λ
I
(U
w
(, )w
()Π
e
(, ) +U
e
(, ))]¦ ,
but hard to solve in general cases. We will, therefore, only look at three
special cases in which some answers are possible.
First, let us simplify by assuming a risk neutral manager. In that case
we can have U(w, e) = w − v(e), where v(e) is the disutility of eﬀort. Now
note that we would not have to insure the manager in this case since he
evaluates expected wealth the same as some ﬁxed payment. Also note that
there is no reason to over pay the manager compared to the outside option,
that is, he needs to be given only U
0
. Before we charge ahead and solve this
brute force, let us think for a moment longer about the situation. We will
not be insuring the manager. He will furthermore have the correct incentive
to expend eﬀort if he cares as much about proﬁts as the owner. Thus it
would stand to reason that if we were to sell him the proﬁts he would be the
owner and thus take the optimal action! But this is equivalent to paying him
a wage which is equal to the proﬁts less some ﬁxed return for the owners.
So, let us propose a wage contract of w = Π−p, and assume for the moment
that the IR constraint can be satisﬁed (by choice of the correct p.) With
such a contract the manager will choose an eﬀort level such that
Π
e
(e, ρ) −v
(e) = 0.
Note that this is the eﬃcient eﬀort level under full information. The owner’s
problem now simply is to choose the largest p such that the manager still
accepts the contract!
The other special case one can consider is that of an inﬁnitely riskaverse
manager. We can model this as a manager who has lexicographic preferences
over his minimum wealth and eﬀort. Independent of the eﬀort levels the
manager will prefer a higher minimum wealth, and for equal minimum wealth
the manager will prefer the lower eﬀort. For simplicity also assume that
the lowest possible proﬁt level is independent of eﬀort (that is, only the
probabilities of proﬁts depend on the eﬀort, not the level). In that case a
manager will always face the same lowest wage for any wage function and
thus will not be able to be enticed to provide more than the minimum level
of eﬀort.
Information 109
As you can see, we might expect anything between eﬃcient outcomes
and completely ineﬃcient outcomes, largely depending on the precise situ
ation. Let us return for a moment to the general setting above. We will
restrict it somewhat by assuming that the manager’s preferences are separa
ble: U(w, e) = u(w) −v(e) with the obvious assumptions of u
() > 0, u
() <
0, v
() ≥ 0, v
() > 0, v
(e) = 0, v
(e) = ∞. The last two are typical “Inada
conditions” designed to ensure an interior solution. We will also assume that
the cumulative distribution of proﬁts exists and depends on eﬀort, F(Π, e),
and that this distribution is over Π ∈ [Π, Π], has a density f(Π, e) > 0, and
satisﬁes ﬁrstorder stochastic dominance.
In this formulation the manager will solve
max
e
Π
Π
u(w(Π))f(Π, e)dΠ −v(e)
¸
.
This has ﬁrst order condition
Π
Π
u(w(Π))f
e
(Π, e)dΠ −v
(e) = 0.
We will ignore the SOC for a while. The participation constraint will be
Π
Π
u(w(Π))f(Π, e)dΠ −v(e) ≥ U
0
.
The owner will want to ﬁnd a wage structure and eﬀort level to
max
w(·),e
Π
Π
[(Π −w(Π))f(Π, e)
+λ
P
(u(w(Π)) −v(e) −U
0
)f(Π, e)
+ λ
C
(u(w(Π))f
e
(Π, e) −v
(e)f(Π, e))] dΠ
¸
.
This will have to be diﬀerentiated, which is quite messy and, as pointed out
before, hard to solve. However, consider the diﬀerentiation with respect to
the wage function:
−f(Π, e) +λ
P
f(Π, e)u
(w(Π)) +λ
C
f
e
(Π, e)u
(w(Π)) = 0.
Rewriting we get something which is informative:
1
u
(w(Π))
= λ
P
+λ
C
f
e
(Π, e)
f(Π, e)
.
110 LA. Busch, Microeconomics May2004
The left hand side of this is the inverse of the slope of the wealth utility
function. It is therefore increasing in the wage. The right hand side consists
of the constraint multipliers (which will both be positive since the constraints
are binding) multiplied by some factors. Of particular interest is the last
term, the ratio of the derivative of the density to the density. Let us further
specialize the problem and suppose that there are only two eﬀort levels, and
that the owner wants to get the high eﬀort level. In that case it is easy to
see that the above equation will become
1
u
(w(Π))
= λ
P
+λ
C
f
H
(Π) −f
L
(Π)
f
H
(Π)
= λ
P
+λ
C
1 −
f
L
(Π)
f
H
(Π)
.
The term
f
L
(Π)
f
H
(Π)
is called a likelihood ratio. It follows that the higher the
relative probability that the eﬀort was high for a given realized proﬁt level,
the higher the manager’s wage. Indeed, if the likelihood ratio is decreasing
then then the wage function must be increasing with realized proﬁts. What
is going on here is that higher proﬁts are a correct signal of higher eﬀort, and
thus we will make wages increase with the signal.
Finally, assume that there are only two proﬁt levels, with the high level
of proﬁts more likely under high eﬀort. In that case it can be shown that
0 <
w
H
−w
L
Π
H
−Π
L
< 1.
When does this principalagent relationship come into play? We already
mentioned the insurance problem, where the insurer wants to elicit the ap
propriate amount of care from the insured. It also occurs in politics, where
the electorate would like to insure that the elected oﬃcials actually carry out
the electorates’ will (witness the ‘recall’ eﬀorts in BC, presumably designed
to make the wage function steeper, since under the old system any punish
ment only occurred after 4 years.) In essence a principalagent relationship
with the attendant problems exists any time there is a delegation of responsi
bilities with less than perfect monitoring. Most frequently in economics this
is the case in the production process. Owners of ﬁrms delegate to managers,
who have superior information about market circumstances, technologies,
etc. Managers in turn delegate to subordinates — who often have superior
information about the details of the production process. (Do quality prob
lems arise inherently from the production process employed, or are they due
to negligent work?) Any time a ﬁrm subcontracts another ﬁrm there is an
other principalagent relationship set up. Indeed, even in our class there are
multiple principalagent relationships. The university wants to ensure that
I treat you appropriately, teach the appropriate material, and teach “well”.
Information 111
You, in some sense, are the principal of the university, having similar goals.
However, you inherently do not know what it is you “ought” to be taught.
You may also have no way to determine if you are taught well — especially
since there is a potential conﬂict between your future self (desiring rigorous
instruction) and your current self (desiring a ‘pleasant’ course.) There are
various incentive systems in place to attempt to address these problems, and
heated debate about how successful they are!
As an aside: the principalagent problems arising from delegation are of
ten held responsible for limits on ﬁrm size (i.e., are taken to cause decreasing
returns to scale.)
112 LA. Busch, Microeconomics May2004
Chapter 6
Game Theory
Most of the models you have encountered so far had one distinguishing fea
ture: the economic agent, be it ﬁrm or consumer, faced a simple decision
problem. Aside from the discussion of oligopoly, where the notion that one
ﬁrm may react to another’s actions was mentioned, none of our decision mak
ers had to take into account other’s decisions. For the price taking ﬁrm or
consumer prices are ﬁxed no matter what the consumer decides to do. Even
for the monopolist, where prices are not ﬁxed, the (inverse) demand curve is
ﬁxed. These models therefore can be treated as maximization problems in
the presence of an (exogenous) constraint.
Only when duopoly was introduced did we need to discuss what, if any,
eﬀect one ﬁrm’s actions might have on the other ﬁrm’s actions. Usually this
problem is avoided in the analysis of the problem, however. For example,
in the standard Cournot model we suppose a ﬁxed market demand schedule
and then determine one ﬁrm’s optimal (proﬁt maximizing) output under the
assumption that the other ﬁrm produces some ﬁxed output. By doing this
we get the optimal output for each possible output level by the other ﬁrm
(called reaction function) for each ﬁrm.
1
It then is argued that each ﬁrm must
correctly forecast the opponent’s output level, that is, that in equilibrium
each ﬁrm is on its reaction function. This determines the equilibrium output
level for both ﬁrms (and hence the market price).
2
1
So, suppose two ﬁxed marginal cost technology ﬁrms, and that inverse market demand
is p = A−BQ. Each ﬁrm i then solves max
qi
{(A−B(q
i
+q
−i
))q
i
−c
i
q
i
} which has FOC
A−Bq
−i
−2Bq
i
−c
i
= 0 and thus the optimal output level is q
i
= (A−c
i
)(2B)
−1
−0.5q
−i
.
2
To continue the above example: q
1
= (A−c
1
)(2B)
−1
−0.5((A−c
2
)(2B)
−1
−0.5q
1
) −→
q
1
= (A−2c
1
+c
2
)(3B)
−1
. Thus q
2
= (A−2c
1
+c
2
)(3B)
−1
and p = (A+c
1
+c
2
)(3)
−1
.
113
114 LA. Busch, Microeconomics May2004
In this kind of analysis we studiously avoid allowing a ﬁrm to consider
an opponent’s actions as somehow dependent on its own. Yet, we could easily
incorporate this kind of thinking into our model. We could not only call it a
reaction function, but actually consider it a reaction of some sort. Doing so
requires us to think about (or model) the reactions to reactions, etc.. Game
theory is the term given to such models. The object of game theory is the
analysis of strategic decision problems — situations where
• The outcome depends on the decisions of multiple (n ≥ 2) decision
makers, such that the outcome is not determined unilaterally;
• Everybody is aware of the above fact;
• Everybody assumes that everybody else conforms to fact 2;
• Everybody takes all these facts into account when formulating a course
of action.
These points especially interesting if there exists a conﬂict of interest
or a coordination problem. In the ﬁrst case, any payoﬀ gains to one player
imply payoﬀ losses to another. In the second case, both players’ payoﬀs rise
and fall together, but they cannot agree beforehand on which action to take.
Game theory provides a formal language for addressing such situations.
There are two major branches in game theory — Cooperative game
theory and Noncooperative game theory. They diﬀer in their approach,
assumptions, and solution concepts. Cooperative game theory is the most
removed from the actual physical situation/game at hand. The basis of anal
ysis is the set of feasible payoﬀs, and the payoﬀs players can obtain by not
participating in the ﬁrst place. Based upon this, and without any knowledge
about the underlying rules, certain properties which it is thought the solution
ought to satisfy are postulated — so called axioms. Based upon these axioms
the set of points which satisfy them are found. This set may be empty —
in which case the axioms are not compatible (for example Arrow’s Impossi
bility Theorem) — have one member, or have many members. The search
is on for the fewest axioms which lead to a unique solution, and which have
a “natural” interpretation, although that is often a more mathematical than
economic metric. One of the skills needed for this line of work is a pretty
solid foundation in functional analysis — a branch of mathematics concerned
with properties of functions. We will not talk much about this type of game
theory in this course, although it will pop up once later, when we talk about
bargaining (in the Nash Bargaining Solution). As a ﬁnal note on this sub
ject: this type of game theory is called “cooperative” not necessarily because
Game Theory 115
players will cooperate, but since it is assumed that players’ commitments
(threats, promises, agreements) are binding and can be enforced. Concepts
such as the Core of an exchange economy
3
or the Nash bargaining solution
are all cooperative game theory notions.
In this chapter we will deal exclusively with noncooperative game the
ory. In this branch the focus is more on the actual rules of the game — it is
thus a useful tool in the analysis of how the rules aﬀect the outcome. Indeed,
in the mechanism design and the implementation literature researchers ba
sically design games in order to achieve certain outcomes. Noncooperative
game theory can be applied to games in two broad categories diﬀering with
respect to the detail employed in modelling the situation at hand (in fact,
there are many ways in which one can categorize games; by the number of
players, the information possessed by them, or the question if there is room
for cooperation or not, among others.) The most detailed branch employs the
extensive form, while a more abstracted approach employs the strategic
form.
While we are ultimately concerned with economic problems, much of
what we do in the next pages deals with toy examples. Part of the reason for
the name game theory is the similarity of many strategic decision problems to
games. There are certain essential features which many economic situations
have in common with certain games, and a study of the simpliﬁed game
is thus useful preparation for the study of the economic situation. Some
recurring examples of particular games are the following (you will see that
many of these games have names attached to them which are used by all
researchers to describe situations of that kind.)
Matching Pennies: 2 players simultaneously
4
announce Head or Tails. If
the announcements match, then player 2 pays player 1 one dollar; if they
3
The ‘Core’ refers to the set of all Pareto Optimal allocations for which each
player/trader achieves at least the same level of utility as in the original allocation. For
a two player exchange economy it is the part of the Contract Curve which lies inside the
trading lens. The argument is that any trade, since it is voluntary, must improve each
player at least weakly, and that two rational players should not stop trading until all gains
from trade are exhausted. The result follows from this. It is not more speciﬁc, since we
do not know the trading procedures. A celebrated result in economics (due to Edgeworth)
is that the core converges to the competitive equilibrium allocation as the economy be
comes large (i.e., players are added, so that the number of players grows to inﬁnity.) This
demonstrates nicely that in a large economy no one trader has suﬃcient market power to
inﬂuence the market price.
4
This means that no information is or can be transmitted. It does not necessarily mean
actually simultaneous actions. While simultaneousness is certainly one way to achieve the
goal, the players could be in diﬀerent rooms without telephones or shared walls.
116 LA. Busch, Microeconomics May2004
do not match, then player 1 pays player 2 one dollar. This is an example
of a zerosum game, the focus of much early game theoretic analysis. The
distinguishing feature is that what is good for one is necessarily bad for the
other player. The game is one of pure conﬂict. It is also a game of incomplete
information.
Battle of the Sexes: Romeo and Juliet rather share an activity than not.
However, Romeo likes music better than sports while Juliet likes sports better
than music. They have to choose an activity without being able to commu
nicate with each other. This game is a coordination game — the players
want to coordinate their activities, since that increases their payoﬀs. There
is some conﬂict, however, since each player prefers coordination on a diﬀerent
activity.
Prisoners’ Dilemma: Probably one of the most famous (and abused)
games, it captures the situation in which two players have to choose to co
operate or not (called ‘defect’) simultaneously. One form is that both have
to announce one of two statements, either “give the other player $3000” or
“give me $1000.” The distinguishing feature of this game is that each player
is better oﬀ by not cooperating — independently of what the other player
does — but as a group they would be better oﬀ if both cooperated. This is a
frequent problem in economics (for example in duopolies, where we will use
it later.)
“Punishment Game:” This is not a standard name or game. I will use
it to get some ideas across, however, so it might as well get a name. It is a
game with sequential moves and perfect information: ﬁrst the child chooses
to either behave or not. Based on the observation of the behaviour, the
parents then decide to either punish the child, or not. The child prefers not
to behave, but punishment reduces its utility. The parents prefer if the child
behaves, but dislike punishing it.
There are many other examples which we will encounter during the
remainder of the course. There are repeated games, where the same so
called stage game is repeated over and over. While the stage game may be a
simultaneous move game, such as the Prisoners’ Dilemma, after each round
the players get to observe either payoﬀs or actual actions in the preceding
stage. This allows them to condition their play on the opponents’ play in
the past (a key mechanism by which cooperation and suitable behaviour is
enforced in most societies.)
Game Theory 117
6.1 Descriptions of Strategic Decision Prob
lems
6.1.1 The Extensive Form
We will start rather formally. In what follows we will use the correct and
formal way to deﬁne games and you will therefore not have to reformalize
some foggy notions later on — the down side is that this may be a bit unclear
on ﬁrst reading. Much of it is only jargon, however, so don’t be put oﬀ!
The ﬁrst task in writing down a game is to give a complete description of
the players, their possible actions, the timing of these actions, the information
available to the players when they take the actions, and of course the payoﬀs
they receive in the end. This information is summarized in a game tree.
Now, a game tree has to satisfy certain conditions in order to be sensible: It
has to be ﬁnite (otherwise, how would you write it down?), and it has to be
connected (so that we can get from one part of the tree to another), and it
has to be like a tree in that we do not have loops in it, where two branches
join up again.
5
All this is formally said in the following way:
Deﬁnition 1 A game tree Γ (also called a topological tree) is a ﬁnite col
lection of nodes, called vertices, connected by lines, called arcs, so as to
form a ﬁgure which is connected (there exists a set of arcs connecting any
one vertex to any other) and contains no simple closed curves (there does not
exist a set of arcs connecting a vertex to itself.)
In Figure 6.1 A, B, and C satisfy the deﬁnition, D, E, and F do not.
We also need a sense of where the game starts and where it ends up.
We will call the start of the game the distinguished node/vertex or more
commonly the root. We can then deﬁne the following:
Deﬁnition 2 Let Γ be a tree with root A. Vertex C follows vertex B if
the sequence of arcs connecting A to C passes through B. C follows B
immediately if C follows B and there is one arc connecting C to B. A
vertex is called terminal if it has no followers.
5
Finite can be dropped. Indeed we will consider inﬁnite games, such as the Cournot
duopoly game where each player has inﬁnitely many choices. The formalism to this ex
tension is left to more advanced courses and texts.
118 LA. Busch, Microeconomics May2004
A
c
/
/
/
/
`
`
`
`
s s s
B
c
/
/
/
/
`
`
`
`
s s s
s s
/
/
/
/
`
`
`
`
s s s
`
`
`
s s
`
`
`
s s
C
c
s
s s
s
D
c
s
c
s
s s
s
E
c
/
/
/
/
`
`
`
`
s s s
s s
/
/
/
/
`
`
`
`
s s
`
`
`
s
s
s
`
`
`
s s
F
c
`
`
`
`
/
/
/
/
s s s \
\
\
s s s
s
\
\
\
s s
Figure 6.1: Examples of valid and invalid trees
We are now ready to deﬁne a game formally by deﬁning a bunch of
objects and a game tree to which they apply. These objects and the tree will
capture all the information we need. What is this information? We need the
number of players, the order of play, i.e., which player plays after/together
with what other player(s), what each player knows when making a move,
what the possible moves are at each stage, what, if any, exogenous moves
exist, and what the probability distribution over them is, and of course the
payoﬀs at the end. So, formally, we get the following:
Deﬁnition 3 A nplayer game in extensive form comprises:
1. A game tree Γ with root A;
2. A function, called payoﬀ function, associating a vector of length n
with each terminal vertex of Γ;
3. A partition ¦S
0
, S
1
, . . . , S
n
¦ of the set of nonterminal nodes of Γ (the
player sets;)
4. For each vertex in S
0
a probability distribution over the set of immediate
followers;
5. for each i ∈ ¦1, 2, . . . , n¦ a partition of S
i
into subsets S
j
i
(informa
tion sets), such that ∀B, C ∈ S
j
i
, B and C have the same number of
immediate followers;
6. for each S
j
i
an index set I
j
i
and a 11 map from I
j
i
to the set of
immediate followers of each vertex in S
j
i
.
Game Theory 119
This is a rather exhaustive list. Notice in particular the following: (1) While
the game is for n players, we have (n + 1) player sets. The reason is that
“nature” gets a player set too. Nature will be a very useful concept. It
allows us to model a nonstrategic choice by the environment, most often the
outcome of some randomization (as when there either is an accident or not,
but none of the players is choosing this.) (2) Nature, since it is nonstrategic,
does not have a payoﬀ in the game, and it does not have any information sets.
(3) Our information sets capture the idea that a player can not distinguish
the nodes within them, since every node has the same number of possible
moves and they are the same (i.e., their labels are the same.)
Even with all the restrictions already implied, there are still various
ways one could draw such a game. Two very important issues deal with
assumptions on information — properties of the information sets, in other
words. The ﬁrst is a restriction on information sets which will capture the
idea that players do not forget any information they learn during the game.
Deﬁnition 4 A nperson game in extensive form is said to be a game of
perfect recall if all players never forget information once known, and if
they never forget any of their own previous moves: i.e., if x, x
∈ S
j
i
then
neither x nor x
is a predecessor of the other one, and if ˆ x is a predecessor
of x and the same player moves at x and ˆ x (i.e., ˆ x, x ∈ S
i
), then there exists
some ˜ x in the same S
j
i
as ˆ x which is a predecessor of x
, and which has the
same index on the arc leading to x
as that from ˆ x to x.
This deﬁnition bears close reading, but is quite intuitive in practice: if two
nodes are in a player’s information set, then one cannot follow the other —
else the player in eﬀect forgets that he himself moved previously in order to
get to the current situation. Furthermore, there is a restriction on predeces
sors: it must be true that either both nodes have the same predecessor in
a previous information set of this player, or if not, that “the same action”
was chosen at the two diﬀerent predecessor nodes. Otherwise, the player
should remember the index he chose previously and thus the current two
nodes cannot be indistinguishable.
We will always assume perfect recall. In other words, all our players will
recall all their own previous moves, and if they have learned something about
their opponents (such as that the opponent took move “Up” the second time
he moved) then they will not forget it. In practice, this means that players
learn something during the game, in some sense.
Games of perfect recall still allow for the possibility of players not know
120 LA. Busch, Microeconomics May2004
b
1
L R
>
>
>
>
>
>
r
r
1, 1
r
A B
r
r
1, 1
r
B A
p p p p p p p p p p p p p p p p p p p p
2
r
r
0, 0 1, 2
r
1
r
r
r
1, 2 0, 0
r
1
r
b
1
L R
r
r
1, 1
r
2
A B
r
r
1, 1
r
2
B A
r
r
0, 0 1, 2
r
r
r
r
1, 2 0, 0
r
r
p p p p p p p p p p p p p p p p p p p p
1
Figure 6.2: A Game of Perfect Recall and a Counterexample
ing something about the past of the game — for example in games where
players move simultaneously, a player would not know his opponent’s move
at that time. Such games are called games of imperfect information. The
opposite is a game of perfect information:
Deﬁnition 5 A nperson game in extensive form is said to be a game of
perfect information if all information sets are singletons.
It is important to note a crucial linguistic diﬀerence here: Games of
Incomplete Information are not the same as games of Imperfect Infor
mation. Incomplete information refers to the case when some feature of the
extensive form is not known to one (or more) of the players. For example the
player may not know the payoﬀ function, or even some of the possible moves,
or their order. In that case, the player could not write down the extensive
form at all! In the case of imperfect information the player can write down
the extensive form, but it contains nontrivial information sets.
A famous theorem due to Harsanyi shows that it is possible to trans
form a situation of incomplete information to one of imperfect information if
players are Bayesian.
6
In that case, uncertainty about the number of players,
6
By this we mean that they use Bayes’ formula to update their beliefs. Bayes’ formula
Game Theory 121
b
E
r
0, 10
r
out in
>
>
>
>
>
r
r
−2, ? 4, ?
r
M
accom
fight
>> b
Nature
wolf
lamb
>
>
>
>
>
r
r
0, 10
r
in out
r
r
0, 10
r
out in
p p p p p p p p p p p p p p p p
E
r
r
−2, 5 4, 3
r
M
f
ac
r
r
−2, 2 4, 4
r
M
ac
f
Figure 6.3: From Incomplete to Imperfect Information
available moves, outcomes, etc., can be transformed into uncertainty about
payoﬀs only. From that, one can construct a complete information game
with imperfect information, and the equilibria of these games will coincide.
For example, take the case of a market entrant who does not know if the mo
nopolist enjoys ﬁghting an entrant, or not. The entrant thus does not know
the payoﬀs of the game. However, suppose it is known that there are two
types of monopolist, one that enjoys a ﬁght, and one that does not. Nature
is assumed to choose which one is actually playing, and the probability of
that choice is set to coincide with the entrant’s priors.
7
After this transfor
mation we have a well speciﬁed game and can give the extensive form — as
in Figure 6.3. This turns out to be one of the more powerful results, since it
makes our tools very useful to lots of situations. If we could not transform
incomplete information into imperfect information, then we could not model
most interesting situations — which nearly all have some information that
players don’t know.
6.1.2 Strategies and the Strategic Form
In order to analyze the situation modelled by the extensive form, we employ
the concept of strategies. These are complete, contingent plans of behaviour
in the game, not just a “move,” which refers to the action taken at any
particular information set. You should think of a strategy as a complete
game plan, which could be given to a referee or a computer, and they would
then play the game for you according to these instructions, while you just
says that the probability of an event A occurring given that an event B has occurred will
be the probability of the event A occurring times the probability of B happening when A
does, all divided by the probability of B occurring:
P(AB) =
P(A∩ B)
P(B)
=
P(BA)P(A)
P(B)
.
7
A prior is the exante belief of a player. The expost probability is called a posterior.
122 LA. Busch, Microeconomics May2004
watch what happens and have no possibility to change your mind during the
actual play of the game. This is a very important point. The players have
to submit a complete plan before they start the game, and it has to cover
all eventualities — which translates into saying that the plan has to specify
moves for all information sets of the player, even those which prior moves of
the same player rule out!
Deﬁnition 6 A pure strategy for player i ∈ ¦1, 2, . . . , n¦ is a function σ
i
that associates every information set S
j
i
with one element of the index set I
j
i
,
σ
i
: S
j
i
→I
j
i
.
Alternatively, we can allow the player to randomize.
8
This randomiza
tion can occur on two levels, at the level of each information set, or at the
level of pure strategies:
Deﬁnition 7 A behavioural strategy for player i ∈ ¦1, 2, . . . , n¦ is a
function β
i
that associates every information set S
j
i
with a probability distri
bution over the elements of the index set I
j
i
.
Deﬁnition 8 A mixed strategy µ
i
for player i ∈ ¦1, 2, . . . , n¦ is a proba
bility distribution over the pure strategies σ
i
∈ Σ
i
.
Notice that these are not the same concepts, in general. However, under
perfect recall one can ﬁnd a behavioural strategy corresponding to each mixed
strategy, and so we will only deal with mixed strategies (which are more
properly associated with the strategic form, which we will introduce shortly.)
Mixed strategies are also decidedly easier to work with.
We can now consider what players’ payoﬀs from a game are. Consider
pure strategies only, for now. Each player has pure strategy σ
i
, giving rise to
a strategy vector σ = (σ
1
, σ
2
, . . . , σ
n
) = (σ
i
, σ
−i
). In general, σ does not de
termine the outcome fully, however, since there may be moves by nature. We
8
This is a somewhat controversial issue. Do people ﬂip coins when making decisions?
Nevertheless it is pretty much generally accepted. In some circumstances randomization
can be seen as a formal equivalent of bluﬃng: Take Poker for example. Sometimes you
fold with a pair, sometimes you stand, sometimes you even raise people. This could be
modelled as a coin ﬂip. In other instances the randomizing distribution is explained by
saying that while each person plays some deﬁnite strategy, a population may not, and the
randomization probabilities just correspond to the proportion of people in the population
who play that strategy. We will not worry about it, however, and assume randomization
as necessary (and sometimes it is, as we will see!)
Game Theory 123
therefore use von NeumannMorgenstern expected utility to evaluate things.
In general, the payoﬀs players receive from a strategy combination (vector)
are therefore expected payoﬀs. In the end, players will, of course, arrive at
precisely one terminal vertex and receive whatever the payoﬀ vector is at that
terminal vertex. Before the game is played, however, the presence of nature
or the use of mixed strategies implies a probability distribution over termi
nal vertices, and the game and strategies are thus evaluated using expected
payoﬀs. Deﬁne the following:
Deﬁnition 9 The expected payoﬀ of player i given σ = (σ
i
, σ
−i
), is π
i
(σ).
The vector of expected payoﬀs for all players is π(σ) = (π
1
(σ), . . . , π
n
(σ)).
Deﬁnition 10 The function π(σ) associated with the nperson game Γ in
extensive form is called the strategic form associated with Γ. (It is also
known as the “normal form,” but that language is coming out of use.)
We will treat the strategic form in this fashion — as an abbreviated
representation of the sometimes cumbersome extensive form. This is the
prevalent view nowadays, and this interpretation is stressed by the term
“strategic form.” There is a slightly diﬀerent viewpoint, however, since so
called matrixgames where actually analyzed ﬁrst. Thus, one can also see
the following deﬁnition:
Deﬁnition 11 A game G in strategic (normal) form is a 3tuple (N, S, U),
where N is the player set ¦1, . . . , n¦, S is the strategy set S = S
1
S
2
. . .S
n
,
where S
i
is player i’s strategy set, and U is the payoﬀ function U : S →1
n
;
and a set of rules of the game, which are implicit in the above.
This is a much more abstract viewpoint, where information is not only
suppressed, but not even mentioned, in general. The strategic form can be
represented by a matrix (hence the name matrix games.) Player 1 is taken
to choose the row, player 2 the column, and a third player would be choosing
among matrices. (For more than three players this representation clearly
looses some of its appeal.) Figure 6.4 provides an example of a three player
game in strategic form.
A related concept, which is even more abstract, is that of a game form.
Here, only outcomes are speciﬁed, not payoﬀs. To get a game we need a set
of utility functions for the players.
124 LA. Busch, Microeconomics May2004
Matrix A ←− Player 3 −→ Matrix B
1`2 L R C
U (1, 1, 1) (2, 1, 2) (1, 3, 2)
C (1, 2, 1) (1, 1, 1) (2, 3, 3)
D (2, 1, 2) (1, 1, 3) (3, 1, 1)
1`2 L R C
U (1, 1, 2) (2, 1, 3) (1, 3, 1)
C (1, 2, 2) (1, 1, 0) (2, 3, 4)
D (2, 1, 0) (1, 1, 5) (3, 1, 2)
Figure 6.4: A Matrix game — game in strategic form
Deﬁnition 12 A game form is a 3tuple (N, S, O) where N and S are as
deﬁned previously and O is the set of physical outcomes.
You may note a couple of things at this point. For one, diﬀerent exten
sive form games can give rise to the same strategic form. The games may
not even be closely related for this to occur. In principle, realizing that the
indices in the index sets are arbitrary, and that we can relabel everything
without loss of generality (does it matter if we call a move “UP” or “OP
TION 1”?), any extensive form game with, say, eight strategies for a player
will lead to a matrix with eight entries for that player. But we could have
one information set with eight moves, or we could have three information sets
with two moves each. We could also have two information sets, one with four
moves, one with two. The extensive forms would thus be widely diﬀerent,
and the games would be very diﬀerent indeed. Nevertheless, they could all
give rise to the same matrix. Does this matter? We will have more to say
about this later, when we talk about solution concepts. The main problem
is that one might want all games that give rise to the same strategic form to
have the same solution — which often they don’t. What is “natural” in one
game may not be “natural” in another.
The second point concerns the fact that the strategic form is not always
a more convenient representation. Figure 6.5 gives an example.
This is a simple bargaining game, where the ﬁrst player announces if
he wants 0, 50 or 100 dollars, then the second player does the same. If the
announcements add to $100 or less the players each get what they asked for,
if not, they each pay one dollar. While the extensive form is simple, the
strategic form of this game is a 3 27 matrix!
9
9
Why? Since a strategy for the second player is an announcement after each announce
ment by the ﬁrst player it is a 3tuple. For each of the elements there are three possible
moves, so that there are 3
3
diﬀerent vectors that can be constructed.
Game Theory 125
1`2 (0, 0, 0) (0, 0, 50) . . . (100, 100, 100)
0 (0, 0) (0, 0) . . . (0, 100)
50 (50, 0) (50, 0) . . . (−1, −1)
100 (100, 0) (−1, −1) . . . (−1, −1)
1
c
.
.
.
.
.
.
.
.
.
. .










s s s
0 50 100
2
2
2
0 50 100
0 50 100
0 50 100
0
0
0
50
0
100
50
0
50
50
−1
−1
100
0
−1
−1
−1
−1
Figure 6.5: A simple Bargaining Game
Before we go on, Figure 6.6 below and on the next page gives the four
games listed in the beginning in both their extensive and strategic forms.
Note that I am following the usual convention that in a matrix the ﬁrst
payoﬀ belongs to the row player, while in an extensive form payoﬀs are listed
by index.
Matching Pennies
1`2 H T
H (1, −1) (−1, 1)
T (−1, 1) (1, −1)
Battle of the Sexes
1`2 M S
M (50, 30) (5, 5)
S (1, 1) (30, 50)
Prisoners’ Dilemma
1`2 C D
C (3, 3) (0, 4)
D (4, 0) (1, 1)
b
1
H T
r
r
1, −1 −1, 1
r
H T
r
r
−1, 1 1, −1
r
H T
p p p p p p p p p p p p
2
b
1
M S
r
r
50, 30 5, 5
r
M S
r
r
1, 1 30, 50
r
M S
p p p p p p p p p p p p
2
b
1
C D
r
r
3, 3 0, 4
r
C D
r
r
4, 0 1, 1
r
C D
p p p p p p p p p p p p
2
126 LA. Busch, Microeconomics May2004
The “Education Game”
b
C
B NB
>
>
>
>
>
>
r
r
−3, 3 1, 5
r
P
P I
r
r
1, −1 5, 1
r
P
I P
P`C B NB
(P, P) (3, −3) (−1, 1)
(P, I) (3, −3) (1, 5)
(I, P) (5, 1) (−1, 1)
(I, I) (5, 1) (1, 5)
Figure 6.6: The 4 standard games
6.2 Solution Concepts for Strategic Decision
Problems
We have developed two descriptions of strategic decision problems (“games”.)
How do we now make a prediction as to the “likely” outcome of this?
10
We will employ a solution concept to “solve” the game. In the same
way in which we impose certain conditions in perfect competition (such as,
“markets clear”) which in essence say that the equilibrium is a situation
where everybody is able to carry out their planned actions (in that case, buy
and sell as much as they desire at the equilibrium price), we will impose
conditions on the strategies of players (their planned actions in a game).
Any combination of strategies which satisfy these conditions will be called an
equilibrium. Since there are many diﬀerent conditions one could impose, the
equilibrium is usually qualiﬁed by a name, such as “these strategies constitute
a Nash equilibrium (Bayes Nash equilibrium, perfect equilibrium, subgame
perfect equilibrium, the ChoKreps criterion,...) The equilibrium outcome is
determined by the equilibrium strategies (and moves by nature.) In general,
you will have to get used to the notion that there are many equilibrium
outcomes for a game. Indeed, in general there are many equilibria for one
game. This is part of the reason for the many equilibrium concepts, which
try to “reﬁne away” (lingo for “discard”) outcomes which do not appear to
be sensible. There are about 280 diﬀerent solution concepts — so we will
only deal with a select few which have gained wide acceptance and are easy
to work with (some of the others are diﬃcult to apply to any given game.)
10
It is sometimes not quite clear what we are trying to do: tell players how they should
play, or determine how they will play. There are some very interesting philosophical issues
at stake here, for a discussion of which we have neither the time nor the inclination!
However, let it be noted here that the view taken in this manual is that we are interested
in prediction only, and do not care one iota if players actually determine their actions in
the way we have modeled.
Game Theory 127
6.2.1 Equilibrium Concepts for the Strategic Form
We will start with equilibrium concepts for the strategic form. The ﬁrst
is a very persuasive idea, which is quite old: Why don’t we eliminate a
strategy of a player which is strictly worse than all his other strategies no
matter what his opponents do? This is known as Elimination of (Strictly)
Dominated Strategies. We will not formally deﬁne this since there are
various variants which use this idea (iterated elimination or not, of weakly
or strictly dominated strategies), but the general principle should be clear
from the above. What we will do, is to deﬁne what we mean by a dominated
strategy.
Deﬁnition 13 Strategy a strictly dominates strategy b if the payoﬀ to the
player is larger under a, independent of the opponents’ strategies:
a strictly dominates b if π
i
(a, s
−i
) > π
i
(b, s
−i
) ∀s
−i
∈ S
−i
.
A similar deﬁnition can be made for weakly dominates if the strict in
equality is replaced by a weak inequality. Other authors use the notion of a
dominated strategy instead:
Deﬁnition 14 Strategy a is is weakly (strictly) dominated if there exists
a mixed strategy α such that π
i
(α, s
−i
) ≥ (>)π
i
(a, s
−i
), ∀s
−i
∈ Σ
i
and
π
i
(α, s
−i
) > π
i
(a, s
−i
) for some s
−i
∈ Σ
i
.
If we have a 2 2 game, then elimination of dominated strategies may
narrow down our outcomes to one point. Consider the “Prisoners’ Dilemma”
game, for instance. ‘Defect’ strictly dominates ‘Cooperate’ for both players,
so we would expect both to defect. On the other hand, in “Battle of the
Sexes” there is no dominated (dominating) strategy, and we would still not
know what to predict. If a player has more than two strategies, we also do
not narrow down the ﬁeld much, even if there are dominated strategies. In
that case, we can use Successive Elimination of Dominated Strategies,
where we start with one player, then go to the other player, back to the ﬁrst,
and so on, until we can’t eliminate anything. For example, in the following
game
1`2 (l, l) (r, r) (l, r) (r, l)
L (2, 0) (2, −1) (2, 0) (2, −1)
R (1, 0) (3, 1) (3, 1) (1, 0)
128 LA. Busch, Microeconomics May2004
player 1 does not have a dominated strategy. Player 2 does, however, since
(r, l) is strictly dominated by (l, r). If we also eliminate weakly dominated
strategies, we can throw out (l, l) and (r, r) too, and then player 1 has a
dominated strategy in L. So we would predict, after successive elimination
of weakly dominated strategies, that the outcome of this game is (R, (l, r)).
There are some criticisms about this equilibrium concept, apart from
the fact that it may not allow any predictions. These are particularly strong
if one eliminates weakly dominated strategies, for which the argument that
a player should never choose those appears weak. For example you might
know that the opponent will play that strategy for which you are indiﬀerent
between two strategies. Why then would you eliminate one of these strategies
just because somewhere else in the game (where you will not be) one is worse
than the other?
Next, we will discuss the probably most widely used equilibrium concept
ever, Nash equilibrium.
11
This is the most universally accepted concept,
but it is also quite weak. All other concepts we will see are reﬁnements of
Nash, imposing additional constraints to those imposed by Nash equilibrium.
Deﬁnition 15 A Nash equilibrium in pure strategies is a set of strate
gies, one for each player, such that each player’s strategy maximizes that
player’s payoﬀ, taking the other players’ strategies as given:
σ
∗
is Nash iﬀ ∀i, ∀σ
i
∈ Σ
i
, π
i
(σ
∗
i
, σ
∗
−i
) ≥ π
i
(σ
i
, σ
∗
−i
).
Note that the crucial feature of this equilibrium concept: each player takes
the others’ actions as given and plays a best response to them. This is
the mutual best response property we ﬁrst saw in the Cournot equilibrium,
which we can now recognize as a Nash equilibrium.
12
Put diﬀerently, we
only check against deviations by one player at a time. We do not consider
mutual deviations! So in the Prisoners’ Dilemma game we see that one player
alone cannot gain from a deviation from the Nash equilibrium strategies
11
Nash received the Nobel price for economics in 1994 for this contribution. He extended
the idea of mutual best responses proposed by von Neumann and Morgenstern to n players.
He did this in his Ph.D. thesis. von Neumann and Morgenstern had thought this problem
too hard when they proposed it in their book Games and Economic Behaviour.
12
Formally we now have 2 players. Their strategies are q
i
∈ [0, P
−1
(0)]. Restricting
attention to pure strategies, their payoﬀ functions are π
i
(q
1
, q
2
), so the strategic form
is (π
1
(q), π
2
(q)). Denote by b
i
(q
−i
) the best response function we derived in footnote 1
of this chapter. The Nash equilibrium for this game is the strategy vector (q∗
1
, q∗
2
) =
(b
1
(q∗
2
), b
2
(q∗
1
)). This, of course, is just the computation performed in footnote 2.
Game Theory 129
(Defect,Defect). We do not allow or consider agreements by both players to
defect to (Cooperate,Cooperate), which would be better!
A Nash equilibrium in pure strategies may not exist, however. Consider,
for example, the “Matching Pennies” game: If player 2 plays ‘H’ player 1
wants to play ‘H’, but given that, player 2 would like ‘T’, but given that
1 would like ‘T’, ... We may need mixed strategies to be able to have a
Nash equilibrium. The deﬁnition for a mixed strategy Nash equilibrium is
analogous to the one above and will not be repeated. All that changes is
the deﬁnition of the strategy space. Since an equilibrium concept which
may fail to give an answer is not that useful (hence the general disregard for
elimination of dominated strategies) we will consider the question of existence
next.
Theorem 1 A Nash equilibrium in pure strategies exists for perfect infor
mation games.
Theorem 2 For ﬁnite games a Nash equilibrium exists (possibly in mixed
strategies.)
Theorem 3 For (N, S, U) with S ∈ 1
n
compact and convex and U
i
: S →1
continuous and strictly quasi concave in s
i
, a Nash equilibrium exists.
Remarks:
1. Nash equilibrium is a form of rational expectations equilibrium (ac
tually, a rational expectations equilibrium is a Nash equilibrium, for
mally.) As in a rational expectations equilibrium, the players can be
seen to “expect” their opponent(s) to play certain strategies, and in
equilibrium the opponents actually do, so that the expectation was
justiﬁed.
2. There is an apparent contradiction between the ﬁrst existence theorem
and the fact that Nash equilibrium is deﬁned on the strategic form.
However, you may want to think about the way in which assuming
perfect information restricts the strategic form so that matrices like
the one for matching pennies can not occur.
3. If a player is to mix over some set of pure strategies ¦σ
1
i
, σ
2
i
, . . . , σ
k
i
¦
in Nash equilibrium, then all the pure strategies in the set must lead
130 LA. Busch, Microeconomics May2004
to the same expected payoﬀ (else the player could increase his payoﬀ
from the mixed strategy by changing the distribution.) This in turn
implies that the fact that a player is to mix in equilibrium will impose
a restriction on the other players’ strategies! For example, consider the
matching pennies game:
1`2 H T
H (1, −1) (−1, 1)
T (−1, 1) (1, −1)
For player 1 to mix we will need that π
1
(H, µ
2
) = π
1
(T, µ
2
). If β denotes
the probability of player 2 playing H, then we need that β −(1 −β) =
−β + (1 −β), or 2β −1 = 1 −2β, in other words, β = 1/2. For player
1 to mix, player 2 must mix at a ratio of 1/2 : 1/2. Otherwise, player
1 will play a pure strategy. But now player 2 must mix. For him to
mix (the game is symmetric) we need that player 1 mixes also at a
ratio of 1/2 : 1/2. We have, by the way, just found the unique Nash
equilibrium of this game. There is no pure strategy Nash, and if there
is to be a mixed strategy Nash, then it must be this. (Notice that we
know there is a mixed strategy Nash, since this is a ﬁnite game!)
The next equilibrium concept we mention is Bayesian Nash Equi
librium (BNE). This will be for completeness sake only, since we will in
practice be able to use Nash Equilibrium. BNE concerns games of incom
plete information, which, as we have seen already, can be modelled as games
of imperfect information. The way this is done is by introducing “types” of
one (or more) player(s). The type of a player summarizes all information
which is not public (common) knowledge. It is assumed that each type actu
ally knows which type he is. It is common knowledge what distribution the
types are drawn from. In other words, the player in question knows who he
is and what his payoﬀs are, but opponents only know the distribution over
the various types which are possible, and do not observe the actual type of
their opponents (that is, do not know the actual payoﬀ vectors, but only
their own payoﬀs.) Nature is assumed to choose types. In such a game,
players’ expected payoﬀs will be contingent on the actual types who play the
game, i.e., we need to consider π(σ
i
, σ
−i
[t
i
, t
−i
), where t is the vector of type
realizations (potentially one for each player.) This implies that each player
type will have a strategy, so that player i of type t
i
will have strategy σ
i
(t
i
).
We then get the following:
Game Theory 131
Deﬁnition 16 A Bayesian Nash Equilibrium is a set of type contingent
strategies σ
∗
(t) = (σ
∗
i
(t
1
), . . . , σ
∗
n
(t
n
)) such that each player maximizes his ex
pected utility contingent on his type, taking other players’ strategies as given,
and using the priors in computing the expectation:
π
i
(σ
∗
i
(t
i
), σ
∗
−i
[t
i
) ≥ π
i
(σ
i
(t
i
), σ
∗
−i
[t
i
), ∀σ
i
(t
i
) = σ
∗
i
(t
i
), ∀i, ∀t
i
∈ T
i
.
What is the diﬀerence to Nash Equilibrium? The strategies in a Nash equi
librium are not conditional on type: each player formulates a plan of action
before he knows his own type. In the Bayesian equilibrium, in contrast, each
player knows his type when choosing a strategy. Luckily the following is true:
Theorem 4 Let G be an incomplete information game and let G
∗
be the
complete information game of imperfect information that is Bayes equivalent:
Then σ
∗
is a Bayes Nash equilibrium of the normal form of G if and only if
it is a Nash equilibrium of the normal form of G
∗
.
The reason for this result is straight forward: If I am to optimize the
expected value of something given the probability distribution over my types
and I can condition on my types, then I must be choosing the same as if I
wait for my type to be realized and maximize then. After all, the expected
value is just a weighted sum (hence linear) of the conditional on type payoﬀs,
which I maximize in the second case.
6.2.2 Equilibrium Reﬁnements for the Strategic Form
So how does Nash equilibrium do in giving predictions? The good news is
that, as we have seen, the existence of a Nash equilibrium is assured for a wide
variety of games.
13
The bad news is that we may get too many equilibria, and
that some of the strategies or outcomes make little sense from a “common
sense” perspective. We will deal with the ﬁrst issue ﬁrst. Consider the
following game, which is a variant of the Battle of the Sexes game:
13
One important game for which there is no Nash equilibrium is Bertrand competition
between 2 ﬁrms with diﬀerent marginal costs. The payoﬀ function for ﬁrm 1, say, is
π
1
(p
1
, p
2
) =
(p
1
−c
1
)Q(p
1
) if p
1
< p
2
α(p
1
−c
1
)Q(p
1
) if p
1
= p
2
0 otherwise
which is not continuous in p
2
and hence Theorem 3 does not apply.
132 LA. Busch, Microeconomics May2004
1`2 M S
M (6, 2) (0, 0)
S (0, 0) (2, 6)
This game has three Nash equilibria. Two are in pure strategies — (M, M)
and (S, S) — and one is a mixed strategy equilibrium where µ
1
(S) = 1/4 and
µ
2
(S) = 3/4. So what will happen? (Notice another interesting point about
mixed strategies here: The expected payoﬀ vector in the mixed strategy
equilibrium is (3/2, 3/2), but any of the four possible outcomes can occur in
the end, and the actual payoﬀ vector can be any of the three vectors in the
game.)
The problem of too many equilibria gave rise to reﬁnements, which
basically refers to additional conditions which will be imposed on top of
standard Nash. Most of these reﬁnements are actually applied to the exten
sive form (since one can then impose restrictions on how information must
be consistent, and so on.) However, there is one common reﬁnement on the
strategic form which is sometimes useful.
Deﬁnition 17 A dominant strategy equilibrium is a Nash equilibrium
in which each player’s strategy choice (weakly) dominates any other strategy
of that player.
You may notice a small problem with this: It may not exist! For example,
in the game above there are no dominating strategies, so that the set of
dominant strategy equilibria is empty. If such an equilibrium does exist, it
may be quite compelling, however.
There is another commonly used concept, that of normal form perfect
equilibrium. We will not use this much, since a similar perfection criterion
on the extensive form is more useful for what we want to do later. However,
it is included here for completeness. Basically, normal form perfect will
reﬁne away some equilibria which are “knife edge cases.” The problem with
Nash is that one takes strategies of the opponents as given, and can then be
indiﬀerent between one’s own strategies. Normal form perfect eliminates this
by forcing one to consider completely mixed strategies, and only allowing pure
strategies that survive after the limit of these completely mixed strategies is
taken. This eliminates many of the equilibria which are only brought about
by indiﬀerence. We ﬁrst deﬁne an “approximate” equilibrium for completely
mixed games, then take the limit:
Game Theory 133
Deﬁnition 18 A completely mixed strategy for player i is one that at
taches positive probability to every pure strategy of player i: µ
i
(s
i
) > 0 ∀s
i
∈
S
i
.
Deﬁnition 19 A ntuple µ() = (µ
1
, . . . , µ
n
) is an perfect equilibrium
of the normal form game G if µ
i
is completely mixed for all i ∈ ¦1, . . . , n¦,
and if
µ
i
(s
j
) ≤ if π
i
(s
j
, µ
−i
) ≤ π
i
(s
k
, µ
−i
), s
k
= s
j
, > 0.
Notice that this restriction implies that any strategies which are a poor
choice, in the sense of having lower payoﬀs than other strategies, must be
used very seldom. We can then take the limit as “seldom” becomes “never:”
Deﬁnition 20 A Perfect Equilibrium is the limit point of an perfect
equilibrium as →0.
To see how this works, consider the following game:
1`2 T B
t (100, 0) (−50, −50)
b (100, 0) (100, 0)
The pure strategy Nash equilibria of this game are (t, T), (b, B), and (b, T).
The unique normal form perfect equilibrium is (b, T). This can easily be seen
from the following considerations. Let α denote the probability with which
player 1 plays t, and let β denote the probability with which player 2 plays
T. 2’s payoﬀ from T is zero independent of α. 2’s payoﬀ from B is −50α,
which is less than zero as long as α > 0. So, in the perfect equilibrium we
have to set (1 −β) < , that is β > 1 − in any perfect equilibrium. Now
consider player 1. His payoﬀ from t will be 150β −50, while his payoﬀ from b
is 100. His payoﬀ from t is therefore less than from b for all β, and we require
that α < . As →0, both α and (1 −β) thus approach zero, and we have
(b, T) as the unique perfect equilibrium.
While the payoﬀs are the same in the perfect equilibrium and all the
Nash equilibria, the perfect equilibrium is in some sense more stable. Notice
in particular that a very small probability of making mistakes in announcing
or carrying out strategies will not aﬀect the nPE, but it would lead to a
potentially very bad payoﬀ in the other two Nash equilibria.
14
14
Note that an nPE is Nash, but not vice versa.
134 LA. Busch, Microeconomics May2004
6.2.3 Equilibrium Concepts and Reﬁnements for the
Extensive Form
Next, we will discus equilibrium concepts and reﬁnements for the extensive
form of a game. First of all, it should be clear that a Nash equilibrium of
the strategic form corresponds onetoone with a Nash equilibrium of the
extensive form. The deﬁnition we gave applies to both, indeed. Since our
extensive form game, as we have deﬁned it so far, is a ﬁnite game, we are
also assured existence of a Nash equilibrium as before. Consider the following
game, for example, here given in both its extensive and strategic forms:
b
1
U D
>
>
>
>
>
>
r
r
4, 6 0, 4
r
2
u d
r
r
2, 1 1, 8
r
2
d u
2`1 U D
(u, u) (6, 4) (1, 2)
(u, d) (6, 4) (8, 1)
(d, u) (4, 0) (1, 2)
(d, d) (4, 0) (8, 1)
This game has 3 pure strategy Nash equilibria: (D, (d, d)), (U, (u, u)), and
(U, (u, d)).
15
What is wrong with this? Consider the equilibrium (D, (d, d)).
Player 1 moves ﬁrst, and his move is observed by player 2. Would player 1
really believe that player 2 will play d if player 1 were to choose U, given
that player 2’s payoﬀ from going to u instead is higher? Probably not. This
is called an incredible threat. By threatening to play ‘down’ following an
‘Up’, player 2 makes his preferred outcome, D followed by d, possible, and
obtains his highest possible payoﬀ. Player 1, even though he moves ﬁrst,
ends up with one of his worst payoﬀs.
16
However, player 2, if asked to follow
his strategy, would rather not, and play u instead of d if he ﬁnds himself
after a move of U. The move d in this information set is only part of a best
reply because under the proposed strategy for 1, which is taken as given in
a Nash equilibrium, this information set is never reached, and thus it does
not matter (to player 2’s payoﬀ) which action is speciﬁed. This is a type
of behaviour which we may want to rule out. This is done most easily by
requiring all moves to be best replies for their part of the game, a concept
we will now make more formal. (See also Figure 6.7)
15
There are also some mixed strategy equilibria, namely (U, (P
1
2
(u) =1, P
2
2
(u) =α)) for
any α ∈ [0, 1], and (D, (P
1
2
(u)≤1/4, P
2
2
(u)=0)).
16
It is sometimes not clear if the term ‘incredible threat’ should be used only if there
is some actual threat, as for example in the education game when the parents threaten
to punish. The more general idea is that of an action that is not a best reply at an
information set. In this sense the action is not credible at that point in the game.
Game Theory 135
A
c
>
>
>
>
>
>
s
B
s
C
s
D
`
`
`
s
E
s
F
/
/
/
`
`
`
/
/
/
`
`
`
/
/
/
`
`
`
`
`
`
s
G
s
H
`
`
`
`
`
` s
I
s
K
`
`
`
`
`
`
Subgames start at:
A,E,F,G,D,H
No Subgames at:
B,C,I,K
Figure 6.7: Valid and Invalid Subgames
Deﬁnition 21 Let V be a nonterminal node in Γ, and let Γ
V
be the game
tree comprising V as root and all its followers. If all information sets in Γ
are either completely contained in Γ
V
or disjoint from Γ
V
, then Γ
V
is called
a subgame.
We can now deﬁne a subgame perfect equilibrium, which tries to exclude
incredible threats by assuring that all strategies are best replies in all proper
subgames, not only along the equilibrium path.
17
Deﬁnition 22 A strategy combination is a subgame perfect equilibrium
(SPE) if its restriction to every proper subgame is a subgame perfect equilib
rium.
In the example above, only (U, (u, d)) is a SPE. There are three proper sub
games, one starting at player 2’s ﬁrst information set, one starting at his
second information set, and one which is the whole game tree. Only u is a
best reply in the ﬁrst, only d in the second, and thus only U in the last.
Remarks:
1. Subgame Perfect Equilibria exist and are a strict subset of Nash Equi
libria.
2. Subgame Perfect equilibrium goes hand in hand with the famous “back
ward induction” procedure for ﬁnding equilibria. Start at the end of
the game, with the last information sets before the terminal nodes,
17
The equilibrium path is, basically, the sequence of actions implied by the equilibrium
strategies, in other words the implied path through the game tree (along some set of arcs.)
136 LA. Busch, Microeconomics May2004
and determine the optimal action there. Then back up one level in the
tree, and consider the information sets leading up to these last deci
sions. Since the optimal action in the last moves is now known, they
can be replaced by the resulting payoﬀs, and the second last level can
be determined in a similar fashion. This procedure is repeated until
the root node is reached. The resulting strategies are Subgame Perfect.
3. Incredible Threats are only eliminated if all information sets are sin
gletons, in other words, in games of perfect information. As a counter
example consider the following game:
1
c>
>
>
>
>
>
A
B
C
s
2
s
s
2
a
c
a
c
a
c
(0,0)
(0,1)
(1/2,0)
(1,1)
(1,0)
(1,1)
In this game there is no subgame starting with player 2’s information
set after 1 chose B or C, and therefore the equilibrium concept reverts
to Nash, and we get that (A, (c, c)) is a SPE, even though c is strictly
dominated by a in the nontrivial information set.
4. Notwithstanding the above, Subgame Perfection is a useful concept in
repeated games, where a simultaneous move game is repeated over and
over. In that setting a proper subgame starts in every period, and
thus at least incredible threats with regard to future retaliations are
eliminated.
5. Subgame Perfection and normal Form Perfect lead to diﬀerent equilib
ria. Consider the game we used before when we analyzed nPE:
b
1
t b
>
>
>
>
>
>
r
r
100, 0 −50, −50
r
T B
r
r
100, 0 100, 0
r
B T
p p p p p p p p p p p p p p p p p p p p
2 1`2 T B
t (100, 0) (−50, −50)
b (100, 0) (100, 0)
As we had seen before, the nPE is (b, T), but since there are no sub
games, the SPE are all the Nash equilibria, i.e., (b, T), (t, T) and (b, B).
Game Theory 137
c
1
A
s
2
a
(1, 1, 1)
D d
s s
3
´
´
´
´
´
l
`
`
`
`
`
r
´
´
´
´
´
l
`
`
`
`
`
r
(3, 2, 2) (0, 0, 0) (4, 4, 0) (0, 0, 1)
Figure 6.8: The “Horse”
As we can see, SPE does nothing to prevent incredible threats if there
are no proper subgames. In order to deal with this aspect, the following
equilibrium concept has been developed. For games of imperfect informa
tion we cannot use the idea of best replies in all subgames, since a player
may not know at which node in an information set he is. We would like to
ensure that the actions taken at an information set are best responses nev
ertheless. In order to do so, we have to introduce what the player believes
about his situation at that information set. By introducing a belief system
— which speciﬁes a probability distribution over all the nodes in each of
the player’s information sets — we can then require all actions to be best
responses given the belief system. Consider the example in Figure 6.8, which
is commonly called “The Horse.” This game has two pure strategy Nash
equilibria, (A, a, r) and (D, a, l). Both are subgame perfect since there are
no proper subgames at all. The second one, (D, a, l), is “stupid” however,
since player 2 could, if he is actually asked to move, play d, which would
improve his payoﬀ from 1 to 4. In other words, a is not a best reply for
player 2 if he actually gets to move.
Deﬁnition 23 A system of beliefs φ is a vector of beliefs for each player,
φ
i
, where φ
i
is a vector of probability distributions, φ
j
i
, over the nodes in each
of player i’s information sets S
j
i
:
φ
j
i
: x
j
k
→[0, 1],
K
¸
k=1
φ
j
i
(x
j
k
) = 1; ∀x
j
k
∈ S
j
i
.
Deﬁnition 24 An assessment is a system of beliefs and a set of strategies,
(σ
∗
, φ
∗
).
138 LA. Busch, Microeconomics May2004
Deﬁnition 25 An assessment (σ, φ) is sequentially rational if
E
φ
∗
π
i
(σ
∗
i
, σ
∗
−i
[S
j
i
)
≥ E
φ
∗
π
i
(σ
i
, σ
∗
−i
[S
j
i
)
, ∀i, ∀σ
i
∈ Σ
i
, ∀S
j
i
.
Deﬁnition 26 An assessment (σ
∗
, φ
∗
) is consistent if
(σ
∗
, φ
∗
) = lim
n→∞
(σ
n
, φ
n
), where σ
n
is a sequence of completely mixed be
havioural strategies and φ
n
are beliefs consistent with σ
n
being played (i.e.,
obtained by Bayesian updating.)
Deﬁnition 27 An assessment (σ
∗
, φ
∗
) is a sequential equilibrium if it is
consistent and sequentially rational.
As you can see, some work will be required in using this concept! Re
consider the horse in Figure 6.8. The strategy combination (A, a, r) is a
sequential equilibrium with beliefs α = 0, where α denotes player 3’s proba
bility assessment of being at the left node. You can see this by considering
the following sequence of strategies: 1 plays D with (1/n)
2
, which converges
to zero, as required. 2 plays d with (1/n), also converging to zero. The
consistent belief for three thus is given by (from Bayes’ Rule)
α(n) =
(1/n)
2
(1/n)
2
+ (1 −(1/n)
2
)(1/n)
=
n
n
2
+n −1
=
1
n + 1 −1/n
,
which converges to zero as n goes to inﬁnity. As usual, the tougher part is
to show that there is no sequence which can be constructed that will lead to
(D, a, l). Here is a short outline of what is necessary: In order for 3 to play
l we need beliefs which put at least a probability of 1/3 on being at the left
node. We thus need that 1 plays down almost surely, since player 2 will play
d any time 3 plays l with more than a 1/4 probability. But as weight shifts
to l for 3, and 2 plays d, sequential rationality for 1 requires him to play A
(4 > 3). This destroys our proposed setup.
Signalling Games
This is a type of game used in the analysis of quality choice, advertising,
warranties, or education and hiring. The general setup is that an informed
party tries to convey information to an uninformed party. For example,
the fact that I spend money advertising should convey the information that
my product is of high quality to consumers who are not informed about
the quality of my product. There are other sellers of genuinely low quality,
Game Theory 139
(3, 0)
(−4, −1)
(−3, −2)
(−2, −2)
(1, 2)
(−2, 1)
(−1, −1)
(2, 3)
l
l
r
r
l
l
r
r
L R
L R
1
1
¯
t
t
(1/2)
(1/2)
s s s
s s s `
`
`
`
`
`
`
`
`
`
`
`
`
`
e
Root: Nature
α β
Figure 6.9: A Signalling Game
however, and they will try to mimic my actions. In order to be credible my
action should therefore be hard (costly) to mimic. The equilibrium concept
used for this type of game will be sequential equilibrium, since we have to
model the beliefs of the uninformed party.
Consider the game in Figure 6.9. Nature determines if player 1 is a
high type or a low type. Player 1 moves left or right, knowing his type.
Player 2 then, without knowing 1’s true type, moves left or right also. The
payoﬀs are as indicated. This type of game is called a game of asym
metric information, since one of the parties is completely informed while
the other is not. The usual question is if the informed party can convey
its private information or not. In the above game, the Nash equilibria
are the following: Let player 1’s strategy vector (S
1
, S
2
) indicate 1’s action
if he is the low and high type, respectively, while player 2’s strategy vector
(s
1
, s
2
) indicates 2’s response if he observes L and R, respectively. Then we
get that the pure strategy Nash equilibria are ((R, R), (r, r)), ((R, L), (r, r)),
and ((L, R), (l, l)). Now introduce player 2’s beliefs. Let 2’s belief of facing
a low type player 1 be denoted by α if 2 observes L, and by β if 2 observes
R. We then can get two sequential equilibria: ((L, R), (l, r), (α = 1, β = 0))
and ((R, R), (r, r), (α = 0, β = 0.5)) (It goes without saying that you should
try to verify this claim!). The ﬁrst of these is a separating equilibrium.
The action of player 1 completely conveys the private information of player
1. Only low types move Left, only high types move Right, and the move
thus reveals the type of player. The second equilibrium, in contrast, is a
pooling equilibrium. Both types take the same move in equilibrium, and
no information is transmitted. Notice, however, that the belief that α = 0
is somewhat stupid. If you ﬁnd yourself, as player 2, inadvertently in your
ﬁrst information set, what should you believe? Your beliefs here say that
140 LA. Busch, Microeconomics May2004
you think it must have been the high type who made the mistake. This is
“stupid”, since the high type’s move L is strictly dominated by R, while the
low type’s move L is not dominated by R. It would be more reasonable to
assume, therefore, that if anything it was the low type who was trying to
“tell you something” by deviating (you are not on the equilibrium path if
you observe L, remember!)
There are reﬁnements that impose restrictions like this last argument
on the beliefs out of the equilibrium path, but we will not go into them here.
Look up the ChoKreps criterion in any good game theory book if you want
to know the details. The basic idea is simple: of the equilibrium path you
should only put weight on types for whom the continuation equilibria oﬀ the
equilibrium path are actually better than if they had followed the proposed
equilibrium. The details are, of course, messy.
Finally, notice a last problem with sequential equilibrium. Minor per
turbations of the extensive form change the equilibrium set. In particular,
the two games in Figure 6.10 have diﬀerent sequential equilibria, even though
the games would appear to be quite closely related.
e
1
>
>
>
>
>
>
>
>
>
>
>
>
(2, 2)
L M R
\
\
\
\
\
\ s s
2
u d u d
(3, 3) (1.5, 0) (0, 0) (1, 1)
e
1
>
>
>
>
>
>
>
>
>
>
>
>
(2, 2)
L Not L
s 1
M R
>
>
>
>
>
>
>
>
>
>
>
>
s s
2
>
>
>
>
>
>
u d u d
(3, 3) (1.5, 0) (0, 0) (1, 1)
Figure 6.10: A Minor Perturbation?
6.3 Review Problems
Question 1: Provide the deﬁnition of a 3player game in extensive form.
Then draw a well labelled example of such a game in which you indicate all
the elements of the deﬁnition.
Question 2: Deﬁne “perfect recall” and provide two examples of games
Game Theory 141
which violate perfect recall for diﬀerent reasons.
Question 3: Assume that you are faced with some ﬁnite game. Will this
game have a Nash equilibrium? Will it have a Subgame Perfect Equilibrium?
Why can you come to these conclusions?
Question 4: Consider the following 3 player game in strategic form:
Left Player 3 Right
1`2 L R C
U (1, 1, 1) (2, 1, 2) (1, 3, 2)
C (1, 2, 1) (1, 1, 1) (2, 3, 3)
D (2, 1, 2) (1, 1, 3) (3, 1, 1)
1`2 L R C
U (2, 2, 2) (4, 2, 4) (2, 6, 4)
C (5, 0, 1) (1, 1, 1) (0, 1, 1)
D (3, 2, 3) (2, 2, 4) (4, 2, 2)
Would elimination of weakly dominated strategies lead to a good predic
tion for this game? What are the pure strategy Nash equilibria of this game?
Describe in words how you might ﬁnd the mixed strategy Nash equilibria.
Be clear and concise and do not actually attempt to solve for the mixed
strategies.
Question 5: Find the mixed strategy Nash equilibrium of this game:
1`2 L R C
U (1, 4) (2, 1) (4, 2)
C (3, 2) (1, 1) (2, 3)
Question 6: Consider the following situation and construct an extensive
form game to capture it.
A railway line passes through a town. Occasionally, accidents will happen
on this railway line and cause damage and impose costs on the town. The
frequency of these accidents depends on the eﬀort and care taken by the
railway — but these are unobservable by the town. The town may, if an
accident has occurred, sue the railway for damages, but will only be successful
in obtaining damages if it is found that the railway did not use a high level
of care. For simplicity, assume that there are only two levels of eﬀort/care
(high and low) and that the courts can determine with certainty which level
was in fact used. Also assume that going to court costs the railway and the
town money, that eﬀort is costly for the railway (high eﬀort reduces proﬁts),
that accidents cost the railway and the town money and that this cost is
independent of the eﬀort level (i.e., there is a “standard accident”). Finally,
assume that if the railway is “guilty” it has to pay the town’s damages and
court costs.
142 LA. Busch, Microeconomics May2004
Question 7: Consider the following variation of the standard Battle of the
Sexes game: with probability α Juliet gets informed which action Romeo has
taken before she needs to choose (with probability (1 − α) the game is as
usual.)
a) What is the subgame perfect equilibrium of this game?
b) In order for Romeo to be able to insist on his preferred outcome,
what would α have to be?
Question 8: (Cournot Duopoly) Assume that inverse market demand is
given by P(Q) = (Q − 10)
2
, where Q refers to market output by all ﬁrms.
Assume further that there are n ﬁrms in the market and that they all have
zero marginal cost of production. Finally, assume that all ﬁrms are Cournot
competitors. This means that they take the other ﬁrms’ outputs as given and
consider their own inverse demand to be p
i
(q
i
) = (
¸
j=i
q
j
+q
i
−10)
2
. Derive
the Nash equilibrium output and price. (That is, derive the multilateral best
response strategies for the output choices: Given every other ﬁrm’s output,
a given ﬁrm’s output is proﬁt maximizing for that ﬁrm. This holds for all
ﬁrms.) Show that market output converges to the competitive output level as
n gets large. (HINT: Firms are symmetric. It is then enough for now to focus
on symmetric equilibria. One can solve for the socalled reaction function of
one ﬁrm (it’s best reply function) which gives the proﬁt maximizing level
of output for a given level of joint output by all others. Symmetry then
suggests that each ﬁrm faces the same joint output by its (n−1) competitors
and produces the same output in equilibrium. So we can substitute out and
solve.)
Question 9
∗
: Assume that a seller of an object knows its quality, which we
will take to be the probability with which the object breaks during use. For
simplicity assume that there are only two quality levels, high and low, with
breakdown probabilities of 0.1 and 0.4, respectively. The buyer does not know
the type of seller, and can only determine if a good breaks, but not its quality.
The buyer knows that 1/2 of the sellers are of high quality, and 1/2 of low
quality. Assume that the seller receives a utility of 10 from a working product
and 0 from a nonworking product, and that his utility is linear in money (so
that the price of the good is deducted from the utility received from the good.)
If the seller does not buy the object he is assumed to get a utility level of 0.
The cost of the object to the sellers is assumed to be 2 for the low quality seller
and 3 for the high quality seller. We want to investigate if signalling equilibria
exist. We also want to train our understanding of sequential equilibrium, so
use that as the equilibrium concept in what follows.
a) Assume that sellers can only diﬀer in the price they charge. Show
that no separating equilibrium exist.
Game Theory 143
b) Now assume that the sellers can oﬀer a warranty which will replace
the good once if it is found to be defective. Does a separating equilibrium
exist? Does a pooling equilibrium exist?
Question 10
∗
: Assume a uniform distribution of buyers over the range of
possible valuations for a good, [0, 2].
a) Derive the market demand curve.
b) There are 2 ﬁrms with cost functions C
1
(q
1
) = q
1
/10 and C
2
(q
2
) = q
2
2
.
Find the Cournot Equilibrium and calculate equilibrium proﬁts.
c) Assume that ﬁrm 1 is a Stackelberg leader and compute the Stack
elberg equilibrium. (This means that ﬁrm 1 moves ﬁrst and ﬁrm 2 gets to
observe ﬁrm 1’s output choice. The Stackelberg equilibrium is the SPE for
this game.)
d) What is the joint proﬁt maximizing price and output level for each
ﬁrm? Why could this not be attained in a Nash equilibrium?
144 LA. Busch, Microeconomics May2004
Chapter 7
Review Question Answers
7.1 Chapter 2
Question 1:
a) There are multiple C() which satisfy the Weak Axiom. Note, how
ever, that you have to check back and forth to make sure that the WA is
indeed satisﬁed. (I.e., C(¦x, y, z¦) = ¦x¦, C(¦x, y¦) = ¦x, y¦ does not sat
isfy the axiom since while the check for x seems to be ok, you also have
to check for y, and there it fails.) One choice structure that does work
is C(¦x, y, z¦) = ¦x¦, C(¦x, z, w¦) = ¦x¦, C(¦y, w, z¦) = ¦w¦, C(¦y, w¦) =
¦w¦, C(¦x, z¦) = ¦x¦, C(¦x, w¦) = ¦x¦, C(¦x¦) = ¦x¦.
b) Yes (I thought of that ﬁrst, actually, in deriving the above) it is
x _ w _ y _ z.
c) Yes, it is transitive.
d) I was aiming for an application of out Theorem: our set of bud
get sets B does not contain all 2 and 3 element subsets of X. Missing are
¦x, y, w¦, ¦x, y¦, ¦y, z¦, ¦w, z¦.
e) The best way to go about this one is to determine where we can pos
sibly get this to work. Examination of the sets B shows that the two choices
y, x only appear in one of the sets and thus must be our key if we want to sat
isfy the WA without having rational preferences. Some ﬁddling reveals that
the following works: C(¦x, y, z¦) = ¦x¦, C(¦x, z, w¦) = ¦w¦, C(¦y, w, z¦) =
¦y¦, C(¦y, w¦) = ¦y¦, C(¦x, z¦) = ¦x¦, C(¦x, w¦) = ¦w¦, C(¦x¦) = ¦x¦.
The problem is intransitivity, since the above implies that y _ w _ x _ z
but we also have x _ y!
Question 2: Here you have to make sure to maximize income for any given
145
146 LA. Busch, Microeconomics May2004
work Part a Partb
hrs 1 then 2 2 then 1 Max 1 then 2 2 then 1 Max
1 112 108 112 112 108 112
2 124 116 124 124 116 124
3 136 130 136 136 130
2
3
136
4 148 144 148 148 145
1
3
148
5 160 158 160 160 160 160
6 172 172 172 172 174
2
3
174
2
3
7 188 186 188 188 189
1
3
189
1
3
8 204 200 204 204 204 204
9 212 212 212 212 216 216
10 220 224 224 220 228 228
11 234 236 236 234
2
3
240 240
12 248 248 248 249
1
3
252 252
13 262 260 262 264 264 264
14 276 272 276 278
2
3
276 278
2
3
15 290 288 290 293
1
3
292 293
1
3
16 304 304 304 308 308 308
Table 7.1: Table 1: Computing maximal income
amount of work. In parts (a) and (b) you have to choose to work either job
1 then job 2 (after 8 hours in job 1) or job 2 then job 1. Simply plotting the
two and then taking the outer hull (i.e., the highest frontier) for each leisure
level gives you the frontier. In (a) they only cross twice (at 9 and 12 hours
of work) while in part (b) they cross 4 times. You can best see this eﬀect by
considering a table in which you tabulate total hours worked against total
income, computed by doing job 1 ﬁrst, and by doing job 2 ﬁrst. This is shown
in Table 1. In neither part a) nor in part b) is the budget set convex.
c) This is a possibly quite involved problem. The intuitive answer is
that it will not matter since marginal and average pay is (weakly) increasing
in both jobs. Here is a more general treatment of these questions:
We really are faced with an maximization problem, to max income given
the constraints, for any given total amount worked. Let h
1
and h
2
denote
hours worked in jobs 1 and 2, respectively. Then the objective function
is I(h
1
, h
2
) = h
1
w
1
(h
1
) + h
2
w
2
(h
2
), where w
i
(h
i
) are the wage schedules.
The wage schedules have the general form w
1
(h
1
) =
w
1
if h
1
≤ C
1
w
1
if h
1
≥ C
1
and
w
2
(h
2
) =
w
2
if h
2
≤ C
2
w
2
if h
2
≥ C
2
, where w
i
< w
i
. I ignore here that no hours
above 8 are possible for either job, choosing to put that information into the
constraints later.
Answers 147
Consider now the isoincome curves in (h
1
, h
2
) space which result. We
will have four regions to consider, namely A = ¦(h
1
, h
2
)[h
1
≤ C
1
, h
2
≤ C
2
¦,
B = ¦(h
1
, h
2
)[h
1
≤ C
1
, h
2
≥ C
2
¦, C = ¦(h
1
, h
2
)[h
1
≥ C
1
, h
2
≤ C
2
¦,
D = ¦(h
1
, h
2
)[h
1
≥ C
1
, h
2
≥ C
2
¦. The slope of the isoincome curves for
the regions is easily seen to be the negative of the ratio of wages, so we have
S(A) = −w
1
/w
2
, S(B) = −w
1
/w
2
, S(C) = −w
1
/w
2
, S(D) = −w
1
/w
2
. It is
obvious that S(C) < S(D) and S(A) < S(B), as well as that S(C) < S(A)
and S(D) < S(B). This implies, of course, that S(C) < S(A) < S(B) as
well as that S(C) < S(D) < S(B), with the comparison of S(A) to S(D)
indeterminate. (But luckily not needed in any case.) The important fact
which follows from all of this is that the isoincome curves are all concave to
the origin and piecewise linear.
Now superimpose the choice sets onto this. Note that without any re
strictions H = h
1
+ h
2
, that is, for any given number of hours H the hours
in each job are “perfect substitutes”. These isohour curves are all straight
lines with a slope of −1. (For our parameters all of S(A), S(C), S(D) are
less than −1, while S(B) > −1, but his is not important.) For parts (a)
and (b) the feasible set consists of the boundaries of the 8 8 square of
feasible hours, where either h
i
= 0, h
j
< 8, or where h
i
= 8, 0 ≤ h
j
≤ 8.
The choice set is thus given by the intersection of the isohour lines with the
feasible set (the box boundary). In part (c) this restriction is removed and
the whole interior of the box is feasible. Due to the concavity to the origin
of the isoincome lines this is of no relevance, however. Note how I have used
our usual techniques of isoobjective curves and constraint sets to approach
this problem. Works pretty well, doesn’t it!
d) Now we “clearly” take up jobs in decreasing order of pay, starting
with the highest paid and progressing to the lower paid ones in order. The
resulting budget set will be convex.
Question 3: The consumer will
max
x
¸
x
0.3
1
x
0.6
2
+λ(m−x
1
p
1
−x
2
p
2
)
¸
which leads to the ﬁrst order conditions
0.3x
−0.7
1
x
0.6
2
= λp
1
, 0.6x
0.3
1
x
−0.4
2
= λp
2
, x
1
p
1
+x
2
p
2
= m.
The utility function is quasiconcave (actually, strictly concave in this case)
and the budget set convex, so the second order conditions will be satisﬁed.
Combining the ﬁrst two ﬁrst order conditions we get
0.3x
2
0.6x
1
=
p
1
p
2
=⇒ x
2
=
2p
1
p
2
x
1
.
148 LA. Busch, Microeconomics May2004
Substitute into the budget constraint and simplify:
x
1
p
1
+
2p
1
p
2
x
1
p
2
= m =⇒ x
1
=
m
3p
1
.
Now use this to solve for x
2
: x
2
=
2m
3p
2
. So (x
1
(p, m), x
2
(p, m)) =
m
3p
1
,
2m
3p
2
.
To ﬁnd the particular quantity demanded, simply plug in the numbers
and simplify:
x
1
=
3 412 + 1 72
3 3
=
412 + 24
3
=
436
3
;
x
2
=
2(3 412 + 1 72)
3 1
=
2(412 + 24)
1
= 872.
Question 4: The key is to realize that this utility function is piecewise
linear with line segments at slopes −5, −1, −1/5, from left to right. The seg
ments join at rays from the origin with slopes 3 and 1/3. Properly speaking,
neither the Hicksian nor the Marshallian demands are functions. The func
tion has either a perfect substitute or Leontief character. In the former the
substitution eﬀects approach inﬁnity, in the latter they are zero. Demands
are easiest derived from the price oﬀer curve, which is a nice zigzag line.
It starts at the intercept of the budget with the vertical axis (point A). It
follows the indiﬀerence curve segment with −5 slope to the ray with slope
3. Call this point B. From there it follows the ray with slope 3 until that
ray intersects a budget drawn from A with a slope of 1 (point C). It then
continues on this budget and the coinciding indiﬀerence curve segment to
the ray with slope 1/3 (point D). Up along that ray to an intersection with a
budget from A with slope 1/5 (point E), along that budget to the intercept
with the horizontal axis (point F), and then along the horizontal axis oﬀ to
inﬁnity.
x
2
x
1
\
\
\
\
\
\
\
\































 /
/
/
/
/
/
/
/
/ /
A
B
C
D
E
F
Now we can solve for the demands along the diﬀerent pieces of the
oﬀer curve and get the Marshallian demand. Note that demand is either a
whole range, or a “proper demand”. The ranges can be computed from the
endpoints (i.e., A to B, C to D, E to F.) Along the rays demand is solved as for
Answers 149
a Leontief consumer: we know the ratio of consumption, we know the budget.
So for example on the ﬁrst ray segment (B to C) we know that x
2
= 3x
1
.
Also, p
1
x
1
+ p
2
x
2
= w. Hence x
1
(p
1
+ 3p
2
) = w, and x
1
= w/(p
1
+ 3p
2
).
(For the Hicksian demand we simply need to ﬁx one indiﬀerence curve and
compute the points along it. We then get either a segment (like A to B
above), or we stay at a kink for a range of prices.) The demands for good 1
therefore are
x
1
(p, w) =
0, if p
1
/p
2
> 5;
[0, 5w/(8p
1
)] , if 5 = p
1
/p
2
;
w/(3p
2
+p
1
), if 5 > p
1
/p
2
> 1;
[w/(4p
1
), 3w/(4p
1
)] , if p
1
/p
2
= 1;
3w/(3p
1
+p
2
), if 1 > p
1
/p
2
> 1/5;
[3w/(8p
1
), w/p
1
] , if p
1
/p
2
= 1/5;
w/p
1
, if 1/5 > p
1
/p
2
.
h
1
(p, u) =
0, if p
1
/p
2
> 5;
[0, u/8] , if p
1
/p
2
= 5;
u/8, if 5 > p
1
/p
2
> 1;
[u/8, 3u/16] , if p
1
/p
2
= 1;
3u/16, if 1 > p
1
/p
2
> 1/5;
[3u/16, u] , if p
1
/p
2
= 1/5;
u, if 1/5 > p
1
/p
2
.
The demands for good 2 are similar and left as exercise. The income ex
pansion paths and Engel curves can be whole regions at price ratios 1,5,1/5,
otherwise the income expansion paths are the axes or rays, and the Engel
curves are straight increasing lines.
Question 5: The elasticity of substitution measures by how much the con
sumption ratio changes as the price ratio changes (both measured in per
centages.) In other words, as the price ratio changes the slope of the budget
changes and we know this will cause a change in the ratio of the quantity
demanded of the goods. But by how much? The higher the value of the
elasticity, the larger the response in demands.
Question 6: First we need to realize that the utility index which each
function assigns to a given consumption point does not have to be the same.
Instead, as long as the MRS is identical at every point, two utility functions
represent the same preferences. So instead of taking the limit of the utility
function directly, we will take the limits of the MRS and compare those to
the MRSs of the other functions.
MRS =
u
1
u
2
=
(1/ρ)(x
ρ
1
+x
ρ
2
)
(1−ρ)/ρ
(ρx
ρ−1
1
)
(1/ρ)(x
ρ
1
+x
ρ
2
)
(1−ρ)/ρ
(ρx
ρ−1
2
)
=
x
ρ−1
1
x
ρ−1
2
=
x
1−ρ
2
x
1−ρ
1
.
150 LA. Busch, Microeconomics May2004
The MRSs for the other functions are
CD:
x
2
x
1
; Perfect Sub: 1; Leon: 0, or ∞.
So, consider the Leontief function min¦x
1
, x
2
¦. Its MRS is 0 or ∞. But as
ρ →−∞ we see that (x
2
/x
1
)
1−ρ
→(x
2
/x
1
)
∞
. But if x
2
> x
1
the fraction is
greater than 1 and an inﬁnite power goes to inﬁnity. If x
2
< x
1
the fraction
is less than one and the power goes to zero. The CobbDouglas function x
1
x
2
has MRS x
2
/x
1
. But as ρ → 0 the MRS of our function is just that. The
perfect substitute function x
1
+ x
2
has a constant MRS of 1. But as ρ → 1
the MRS of our function is (x
2
/x
1
)
0
= 1. Therefore the CES function “looks
like” those three functions for those choices of ρ. The parameter ρ essentially
controls the curvature of the IC’s.
Question 7: Set up the consumer’s optimization problem:
max
x
1
,x
2
,x
3
¦x
1
+ lnx
2
+ 2lnx
3
+λ(m−p
1
x
1
−p
2
x
2
−p
3
x
3
)¦ .
The FOCs are
1 −λp
1
= 0;
1
x
2
−λp
2
= 0;
2
x
3
−λp
3
= 0
and the budget. The ﬁrst of these allows us to solve for λ = 1/p
1
. Therefore
the second and third give us x
2
= p
1
/p
2
and x
3
= 2p
1
/p
3
. Combining this
with the budget we get x
1
= m/p
1
− 3. Of course, this is only sensible if
m > 3p
1
. If it is not we must be at a corner solution. In that case x
1
= 0
and all money is spent on x
2
and x
3
. The second and third FOC above tell
us that x
3
/x
2
= 2p
2
/p
3
. Hence (remember x
1
= 0 now) m = p
2
x
2
+ 2p
2
x
2
and x
2
= m/(3p
2
) while x
3
= 2m/(3p
3
). So we get
x(p, m) =
m
p
1
−3,
p
1
p
2
, 2
p
1
p
3
if m > 3p
1
0,
m
3p
2
,
2m
3p
3
if m ≤ 3p
1
Question 8: Here we have a pure exchange economy with 2 goods and 2
consumers. We can best represent this in an Edgeworth box.
Answers 151
O
B
O
A 20
20
5
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
11.25
/
/
/
/
/
/
/
/
/
/
/
/
/
/
/
/
/
/
/
/
ω
8
11
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
x
∗
`
`
`
`
`
`
`
`
s
ˆ ω
Suppose x
1
is on the horizontal axis and x
2
is on the vertical, and let
consumer B have the lower left hand corner as origin, consumer A the upper
right hand corner. (I made this choice because I like to have the “harder”
consumer oriented the usual way.) The dimensions of the box are 20 by 20
units. The ﬁrst thing to do is to ﬁnd the Pareto Set (the contract curve),
since we know that any equilibrium has to be Pareto eﬃcient. The MRS
for person A is 4/3, the MRS for person B is 3x
2
/(4x
1
). Therefore the
Pareto Set is deﬁned by x
2
/x
1
= 16/9 (in person B’s coordinates.) This
is a straight ray from B’s origin with a slope greater than 1, and therefore
above the main diagonal. The Pareto set is this ray and the portion of the
upper boundary of the box from the ray’s intersection point to the origin of
A. There now are 2 possibilities for the equilibrium. Either it is on the ray,
and therefore must have a price ratio of 4/3. Or it is on the upper boundary
of the box, in which case the price ratio must be below 4/3, but we know
that B’s consumption level for good 2 is 20. In the ﬁrst case we have 2
equations deﬁning equilibrium. The ray, x
2
= 16x
1
/9, and the budget line
(x
2
− 11) = 4(8 − x
1
)/3. From this we get 16x
1
/9 − 11 = 32/3 − 12x
1
/9
and from that 28x
1
/9 = 65/3 and thus x
1
= 195/28 < 20. It follows that
x
2
= (16 195)/(9 28) = 780/63 = 260/21 < 20. Since both of B’s
consumption points are strictly within the interior of the box, we are done.
All that remains is to compute A’s allocation. The equilibrium is therefore
(p∗, (x
A
), (x
B
)) =
4
3
,
20 −
195
28
, 20 −
780
63
,
195
28
,
780
63
.
Question 9: Again we have a square Edgeworth box, 2020. Again I choose
to put consumer B on the bottom left origin. B’s preferences are quasilinear
with respect to x
1
, A’s are piecewise linear with slopes 4/3 and 3/4 which
152 LA. Busch, Microeconomics May2004
meet at the kink line x
2
= x
1
(in A’s coordinates!) which coincides with
the main diagonal. The MRS for B’s preferences is 3x
2
/4. We have Pareto
optimality when 3x
2
/4 = 4/3 →x
2
= 16/9 and when 3x
2
/4 = 3/4 →x
2
= 1.
So, the Pareto Set is the vertical axis from B’s origin to x
B
2
= 1, the horizontal
line x
B
2
= 1 to the main diagonal (the point (1, 1) in other words), up the
main diagonal to the point (x
B
1
, x
B
2
) = (16/9, 16/9), from there along the
horizontal line x
B
2
= 16/9 to the right hand edge of the Box, and then up
that border to A’s origin.
O
B
O
A 20
20
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
1
16/9
ω
8
11
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
x
∗
By inspection, the most likely candidate for an equilibrium is a price
ratio of 4/3 with an allocation on the second horizontal line segment. Let
us attempt to solve for it. First, the budget equation (in B’s coordinates) is
3(x
2
−11) = 4(8 −x
1
). Second, we are presuming that x
2
= 16/9. So we get
16/3 −33 = 32 −4x
1
, or x
1
= 14
11
12
. Since this is less than 20 we have found
an interior point and are done. The equilibrium is
(p∗, (x
A
), (x
B
)) =
4
3
,
5
1
12
,
164
9
,
14
11
12
,
16
9
.
Question 10: To prove this we need to show the implication in both di
rections: (⇐) : Suppose x ~ y. Then ∃B, x, y ∈ B with the property that
x ∈ C(B), y / ∈ C(B). Consider all other B
∈ B with the property that
x, y ∈ B
. By the Weak Axiom ∃B
with y ∈ C(B
) since otherwise the set
B would violate the weak axiom (applied to the choice y with the initial set
B
.) Therefore x ~
∗
y.
(⇒) : Let x ~
∗
y. The ﬁrst part of the deﬁnition requires ∃B, x, y, ∈ B, x ∈
C(B). By the weak axiom there are two possibilities: either all B
∈ B with
x, y ∈ B
have ¦x, y¦ ∈ C(B
) or none have y ∈ C(B
). The second part of
Answers 153
the deﬁnition requires us to be in the second case, but then y / ∈ C(B), and
so x ~ y.
If the WA fails a counter example suﬃces: Let X = ¦x, y, z¦,
B = ¦¦x, y¦, ¦x, y, z¦¦, C(¦x, y¦) = ¦x¦, C(¦x, y, z¦) = ¦x, y¦. This violates
the WA. C(¦x, y¦) = ¦x¦ demonstrates that x ~ y by deﬁnition. On the
other hand it is not true that x ~
∗
y (let B = ¦x, y¦ and B
= ¦x, y, z¦ in
the deﬁnition of ~
∗
.
Question 11:
a) This is another 20 by 20 box, with the endowment in the centre.
Suppose B’s origin on the bottom left, A’s the top right. As in question 8,
A’s indiﬀerence curves have a constant MRS of α and are perfect substitute
type. B’s ICs have a MRS of βx
2
/x
1
and are CobbDouglas. The contract
curve in the interior must have the MRSs equated, so it occurs where (in
B’s coordinates) x
2
/x
1
= α/β. This is a straight ray from B’s origin and
depending on the values of α and β it lies above or below the main diagonal.
Since these cases are (sort of) symmetric we pick one, and assume that α/β >
1. The contract curve is this ray and then the part of the upper edge of the
box to A’s origin.
As in question 8 there are two cases for the competitive equilibrium. Either
it occurs on the part of the Contract curve interior to the box, or it occurs on
the boundary of the box. In the ﬁrst case the slope of the budget and hence
the equilibrium price must be α, since both MRSs have that slope along the
ray and in equilibrium the price must equal the MRS. Note that the budget
now coincides with A’s indiﬀerence curve through the endowment point. The
equilibrium allocation is determined by the intersection of the contract curve
and this budget/IC. So we have two equations in two unknowns:
α =
x
2
−10
10 −x
1
and x
2
=
α
β
x
1
.
Hence α(10 −x
1
) = αx
1
/β or αβ10 = x
1
(α+αβ) and thus x
1
= β10/(1 +β)
and x
2
= α10/(1 + β). These are the consumption levels for B. A gets the
rest. The equilibrium thus would be (p
∗
, (x
A
1
, x
B
2
), (x
B
1
, x
B
2
)) =
α,
10
2 +β
1 +β
, 10
2(1 +β) −α
1 +β
,
β10
1 +β
,
α10
1 +β
which only makes sense if the allocation indeed is interior, that is, as long as
10α/(1 +β) < 20, or (α −β) < (2 +β).
If that is not true we ﬁnd ourselves in the other case. In that case we know
that we are looking for an equilibrium on the upper boundary of the box and
thus know that x
B
2
= 20 while x
A
2
= 0. It remains to determine p and the
allocations for good 1. At the equilibrium point the budget must be ﬂatter
154 LA. Busch, Microeconomics May2004
than A’s IC (so that A chooses to only consume good 1). The allocation
must also be the optimal choice for B and hence the budget must be tangent
to B’s IC, since for B this is an interior consumption bundle (interior to B’s
consumption set, that is.) So we again have to solve two equations in 2
unknowns:
βc
2
c
1
= p and p =
c
2
−10
10 −c
1
while c
2
= 20.
It follows that β20(10 −c
1
) = 10c
1
and therefore c
1
= 20β/(1 + 2β). A gets
the rest. The equilibrium is therefore
(p
∗
, (x
A
1
, x
A
2
), (x
B
1
, x
B
2
)) =
1 + 2β,
20
1 +β
1 + 2β
, 0
,
β20
1 + 2β
, 20
.
b) All endowments above and to the right of the line x
2
= 40 − 2x
1
in
B’s coordinates will lead to a boundary equilibrium. All those on this line
and below will lead to an interior equilibrium with p = 2.
Question 12:
a) The social planner’s problem is
max
l
ln(4
√
16 −l) +
1
2
ln(l)
which has ﬁrst order condition
−
1
4
√
16 −l
2
√
16 −l
+
1
2l
= 0.
Hence 16 −l = l and so l
∗
= 8, x
∗
= 8, c
∗
= 8
√
2.
b) Since the consumer’s problem requires proﬁts, we solve for the ﬁrm
ﬁrst. max
x
¦p4
√
x − wx¦ has FOC 2p/
√
x = w and leads to ﬁrm labour
demand of x(p, w) = 4p
2
/w
2
, consumption good supply of c(p, w) = 8p/w,
and proﬁts of π(p, w) = 4p
2
/w.
The consumer will
max
c,l
lnc +
1
2
lnl +λ(16w +π(p, w) −pc −wl)
which has ﬁrst order conditions 1/c − λp = 0; 1/(2l) − λw = 0; 16w +
π(p, w) = pc + wl. The ﬁrst 2 imply that pc = 2lw. Substituting into
the third and using the proﬁts computed above yields demand of c(p, w) =
32w/(3p) + 8p/(3w) and leisure demand of l(p, w) = 16/3 + 4p
2
/(3w
2
).
We can now solve for the equilibrium price ratio. Take any one market and
set demand equal to supply. For the goods market this implies 32w/(3p) +
Answers 155
8p/(3w) = 8p/w, and hence p
2
/w
2
= 2. Substituting into the demands and
supplies this gives l
∗
= 8, and hence all values are the same as in the social
planner’s problem in part a). You may want to verify that you could have
solved for the price ratio from the labour market.
The complete statement of the general equilibrium is: The equilibrium price
ratio is p/w =
√
2, the consumer’s allocation is (c, l) = (8
√
2, 8), and the
ﬁrm produces 8
√
2 units consumption good from 8 units input. Note that we
cannot state proﬁts without ﬁxing one of the prices. So let w = 1 (so that
we use labour as numeraire), then p =
√
2 and proﬁts are 8.
7.2 Chapter 3
Question 1:
a) Zero arbitrage means that whichever way I move between periods, I
get the same ﬁnal answer. In particular I could lend in period 1 to collect
in period three, or I could lend in period 1 to period 2, and then lend the
proceeds to period 3. Hence the condition is
(1 +r
12
)(1 +r
23
) = (1 +r
13
).
Note that if we where to treat r
13
not as a simple interest rate but as a
compounding one, we’d get (1 +r
12
)(1 +r
23
) = (1 +r
13
)
2
instead.
b) You have to adopt one period as your viewpoint and then put all
other values in terms of that period (by discounting or applying interest).
With period 3 as the viewpoint I use period 3 future values for everything:
B = ¦(c
1
, c
2
, c
3
)[(1+r
13
)c
1
+(1+r
23
)c
2
+c
3
= (1+r
13
)m
1
+(1+r
23
)m
2
+m
3
¦
Note that any other viewpoint is equally valid. The restriction in (a) means
that it does not matter which interest rate I use to compute the forward value
of c
1
, say. Indeed, without that restriction I would get an inﬁnite budget if
it is possible to borrow inﬁnite amounts. With some borrowing constraints
in place I would have to compute the highest possible arbitrage proﬁts for
the various periods and compute the resulting budget.
c) This is a standard downward sloping budget line in (c
2
, c
3
) space
with a slope of −(1 + r
23
). It does not necessarily have to go through the
endowment point (m
2
, m
3
), however. It will be below that point if c
1
> m
1
and above that point if c
1
< m
1
.
156 LA. Busch, Microeconomics May2004
c
3
c
2
m
m
3
m
2
`
`
`
`
`
`
`
`
`
`
¦m
1
> c
1
¦
`
`
`
`
`
`
`
¦m
1
< c
1
¦
Question 2:
a) The easy way to get this is to ﬁrst ignore the technology. The market
budget is a straight line with a slope of −1 through the point (100, 100), which
is truncated at the point (160, 40), where the budget becomes vertical. Note
that the gross rate of return is 1 since the interest rate is 0. Now consider the
technology and the implications of zero arbitrage: Joe can move consumption
from period 1 to period 2 in two ways, via the ﬁnancial market, or via
“planting”. Both must yield the same gross rate of return at the optimum
(why? we know that at the optimum of a maximization problem the last
dollar allocated to each option must yield the same marginal beneﬁt.) The
gross rate of return at the margin is nothing but the marginal product of the
technology, however. So, compute the MP (5/
√
x
1
) and ﬁnd the investment
level at which the MP is 1.
5
√
x
1
= 1 −→ 5 =
√
x
1
−→ x
1
= 25.
At optimal use at an interior optimum Joe invests 25 units (and collects 50
in the next period.) This means that from any point on the ﬁnancial market
budget Joe can move left 25 and up 50. So that gives a straight line with
slope −1 which starts at (0, 225) and goes to (135, 90). After this point there
is a corner solution in technology choice: Joe cannot use the market any
more. The technology therefore may give a higher return than the market.
So the budget follows the (ﬂipped over to the left) technology frontier down
to the point (160, 40), and down to (160, 0) from there.
b) First simplify the preferences (this step is not necessary!). Applying
a natural logarithm gives the function
ˆ
U(c
1
, c
2
) = c
4
1
c
6
2
which represents the
same preferences. Applying the 10th root gives
˜
U(c
1
, c
2
) = c
.4
1
c
.6
2
which also
represents the same preferences and is recognized as a CobbDouglas. Now
you can either compute the MRS (2c
2
/3c
1
) and set that equal to 1 (since
most of the budget has a slope of −1 and we know that MRS = Slope at
the optimum.) That gives you two equations in two unknowns, and we can
solve:
c
2
= 225 −c
1
, c
2
= 3c
1
/2 → 450 = 5c
1
→ c
1
= 90, c
2
= 135.
Answers 157
We then double check that the assumption that we are on the −1 sloped
portion of the budget was correct, which it is (by inspection.) Or you could
use the demand function for CD, so you know
(c
1
, c
2
) =
.4M
p
1
,
.6M
p
2
=
.4 225
1 + 0
,
.6 225
1
= (90, 135).
Now this is his ﬁnal consumption bundle. In order to get there he invested
25 units, so on the “market budget” line he must have started at (115, 85),
and that required him to borrow 15 units.
In summary, he borrows 15, giving him 115, of which he invests 25, so he
has 90 left to consume. In the next period he gets 100 from his endowment,
50 from the investment, for a total of 150, of which he has to use 15 to pay
back the loan, so he can consume 135!
Question 3: I will not draw the diagram but describe it. You should refer
to a rough diagram while reading these solutions to make sense of them.
a) The indiﬀerence curves have two segments with a slope of −1.3 and
−1.2 respectively. The switch (kink) occurs where
23
12
10
c
1
+c
2
= 22
13
10
c
1
+c
2
→ c
2
= (2213−2312)c
1
/10 = c
1
.
b) Note that the budget has a slope of 1.25 which is less than 1.3 and
more than 1.2, so she consumes at the kink. Thus she is on the kink line and
the budget:
c
1
= c
2
and −1.25 =
c
2
−8
c
1
−9
→ c
1
= c
2
= 77/9.
c) Again she consumes at the kink, so
c
1
= c
2
and −1.25 =
c
2
−12
c
1
−5
→ c
1
= c
2
= 73/9.
d) Here we need to work back. Note that at the slopes implied by the
interest rates she continues to consume at her kink line. The reason is that
both 1.25 and 1.28 are bigger than 1.2, the slope of her lower segment, but
less than 1.3, the slope of the steep segment. Hence optimal consumption is
at the kink and she borrows if she has less period 1 endowment than period
2 endowment. She lends money if she has larger period 1 endowment than
period 2 endowment. So for all endowments above the main diagonal she is
158 LA. Busch, Microeconomics May2004
a borrower, for all endowments below a lender.
e) Now she never trades. To lend money the budget slope is 1.18 which
is less than either of her IC segment slopes. She would not want to lend ever
at this rate no matter what her endowment. On the other hand, suppose
she were to borrow. The budget slope is 1.32 which is steeper than even her
steepest IC segment. She would not borrow. Thus she remains at the kink
in her budget (the endowment point) no matter where it is.
Question 4: Again I will not draw the diagram but describe it. You should
refer to a rough diagram while reading these solutions to make sense of
them.
a) This is a 20 by 10 box. Suppose A’s origin on the bottom left,
B’s the top right. A’s indiﬀerence curves have a MRS of c
2
/(αc
1
) and are
nice CD type curves. B’s ICs have a MRS of 1/β and are straight lines.
The contract curve in the interior must have the MRSs equated (from Econ
301: for diﬀerentiable utility functions an interior Pareto optimum has a
tangency), so it occurs where c
2
/c
1
= α/β. This is a straight ray from A’s
origin and depending on the values of α and β it lies above or below the
main diagonal. Since these cases are (sort of) symmetric we pick one, and
assume that α/β > 1/2. The contract curve is this ray and then the part of
the upper edge of the box to B’s origin.
b) There are two cases, either the Contract curve ray is shallow enough
that the equilibrium occurs on it, or it is so steep that the equilibrium occurs
on the top boundary of the box. In the ﬁrst case the slope of the budget
and hence the equilibrium price must be 1/β, since both MRSs have that
slope along the ray and in equilibrium the price must equal the MRS. So the
equilibrium interest rate is (1 − β)/β. Note that the budget now coincides
with player B’s indiﬀerence curve through the endowment point. Hence the
ray of the contract curve must intersect that, and it does so only if it intersects
the top boundary to the right of the intersection of B’s IC with the boundary.
The latter occurs at (4(3−β), 10). The former occurs at (10β/α, 10). So the
interior solution obtains if 10β/α > 4(3 −β), or if 10β > 12α−4αβ. In that
case the equilibrium allocations are derived by solving the intersection of the
budget and the ray:
c
2
= αc
2
/β and −
1
β
=
c
2
−6
c
1
−12
→ c
A
1
=
6(2 +β)
1 +α
, c
A
2
=
α6(2 +β)
β(1 +α)
.
B gets the remainder.
In the other case, when the ray fails to intersect B’s IC, we know that
we are looking for an equilibrium on the upper boundary of the box (so
c
A
2
= 10 and c
B
2
= 0.) At this point we must have a budget ﬂatter than B’s
IC (so that B chooses to only consume good 1). It must also be tangent to
Answers 159
A’s IC, since for player A this is an interior consumption bundle (interior to
his consumption set, that is.) So we require 1 + r = 10/(αc
1
) to have the
tangency, and we require 1 + r = (10 − 6)/(12 − c
1
) in order to be on the
budget line. These are two equations in two unknowns again, so we solve:
c
A
1
= 60/(5+2α) and r = (5−4α)/(6α). B gets the rest of good 1, of course.
7.3 Chapter 4
Question 1: We wish to show that for any concave u(x)
1
3
u(24) +
1
3
u(20) +
1
3
u(16) ≥
1
2
u(24) +
1
2
u(16).
We can do the following: ﬁrst bring the u(24) and u(16) to the RHS:
2
6
u(20) ≥
1
6
u(24) +
1
6
u(16).
Then multiply both sides by 3:
u(20) ≥
1
2
u(24) +
1
2
u(16).
The LHS of this represents a certain outcome of 20, the RHS a lottery with
2 equally likely outcomes. Now note that
1
2
24 +
1
2
16 = 20.
That is, the expected value of the lottery on the RHS of the last inequality
above is equal to the expected value of the degenerate lottery on the LHS.
Therefore this penultimate inequality must be true, since it coincides with
the deﬁnition of a risk averse consumer. (utility of expectation greater than
expectation of utility.)
Question 2: The certainty equivalent is deﬁned by
U(CE) =
¸
p
i
u(x
i
) =
u(x)dF(x).
Using the particular function we are given:
√
CE = α
√
3600 + (1 −α)
√
6400
CE = (α60 + (1 −α)80)
2
= (80 −20α)
2
.
160 LA. Busch, Microeconomics May2004
Note that the expected value of the gamble is E(w) = α3600+(1−α)6400 =
6400 −α2800 and thus the maximal fee this consumer would pay for access
to fair insurance would be the diﬀerence E(w) −CE = 400α(1 −α).
Question 3: The coeﬃcient of absolute risk aversion is deﬁned as r
A
=
−u
(w)/u
(w). Computing this for both functions we get
u(w) = lnw −→r
A
=
1
w
; u(w) = 2
√
w −→r
A
=
1
2w
.
Therefore the two consumers exhibit equal risk aversion if the second con
sumer has half the wealth of the ﬁrst. Their relative risk aversion coeﬃcients
(deﬁned as −u
(w)w/u
(w) are 1 and 1/2, respectively. That means that
while, if the logarithm consumer has twice the wealth as the root consumer,
he will have the same attitude towards a ﬁxed dollar amount gamble, he
will be more risk averse with respect to a gamble over a given proportion of
wealth. (Note that the two statements don’t contradict one another: a $1
gamble represents half the percentage of wealth for a consumer with twice
the wealth!)
Question 4: Here we need an Edgeworth Box diagram, which is a square,
15 units a side. Suppose we have consumer A on the bottom left origin (B
then goes top right). Suppose also that we put state R on the horizontal.
Note that the certainty line is the main diagonal of the box! This observation
is crucial, since it means that there is no aggregate risk!
General equilibrium requires that demand is equal to supply for each
good, but we can’t ﬁnd those here (not knowing the consumers’ tastes), so it
is not useful information. But we also know that in general equilibrium the
price ratio must equal each consumer’s MRS (since GE is Pareto optimal and
that requires MRSs to be equalized, at least for interior allocations.) Note
that the two MRSs here are
MRS
A
=
πu
A
(c
A
R
)
(1 −π)u
A
(c
A
S
)
MRS
B
=
πu
B
(c
B
R
)
(1 −π)u
B
(c
B
S
)
On the certainty line (the main diagonal) c
A
S
= c
A
R
and c
B
S
= c
B
R
, so MRS
A
=
MRS
B
= π/(1 − π). In other words, the certainty line for each consumer
coincides and together they are the set of Pareto optimal points.
Hence the equilibrium price ratio must be p
∗
= π/(1 −π).
The allocation is now easily computed: we know the price ratio and the
endowment, hence the budget line for the consumers. We also know that
Answers 161
consumption is equal in both states. So
π
1 −π
=
c
A
S
−5
10 −c
A
R
=
c
A
−5
10 −c
A
→c
A
S
= c
A
R
= 5(1 +π)
and since c
B
i
= 15 −c
A
i
we get c
B
S
= c
B
R
= 5(2 −π).
Question 5: a) max
x
¦0.5u(10000(1 + 0.8x)) + 0.5u(10000(1.4 −0.8x))¦
b) The FOC for this is
0.5 0.8u
(100000(1 + 0.8x)) −0.5 0.8u
(10000(1.4 −0.8x)) = 0
implies : u
(10000(1 + 0.8x)) = u
(10000(1.4 −0.8x))
implies : 10000(1 + 0.8x) = 10000(1.4 −0.8x)
since she is risk averse. It follows that 1 + .8x = 1.4 − .8x, and therefore
that 1.6x = 0.4, so that x = 0.25. One quarter, or 25% are invested in gene
technology.
Question 6:
i) Denote the probability with which a ticket wins by π and the prize by
P. A fair price for this lottery ticket would have to be a fraction p per dollar
of prize such that π(P − pP) − (1 − π)pP = 0, or p = π. Let us start with
this as a benchmark case (we know that normally such a lottery would not
be accepted.) Utility maximization requires that for a gambling consumer
v(w
0
) ≤ πv(w
0
+(1 −p)P) +(1 −π)v(w
0
−pP) +µ
i
. Thus all consumers for
whom µ
i
≥ v(w
0
) −πv(w
0
+(1 −p)P) −(1 −π)v(w
0
−pP) purchase a ticket.
At a fair gamble this is
µ
i
≥ v(w
0
) −πv(w
0
+ (1 −π)P) −(1 −π)v(w
0
−πP)
> v(w
0
) −v(π(w
0
+ (1 −π)P) + (1 −π)(w
0
−πP))
= v(w
0
) −v(w
0
)
(the second strict inequality follows from the deﬁnition of risk aversion).
Clearly a strictly positive µ is required. Can the government make money
on this? Well, assume that the price p above is fair (p = π) and let there
be an additional charge of q. Now all consumers gamble for whom µ
i
≥
v(w
0
) − πv(w
0
+ (1 − π)P − q) − (1 − π)v(w
0
− πP − q). While such a µ
i
is larger than before, it exists (for small q in any case) as long as things
are suﬃciently smooth and the µ
i
go that high. Note that those who gamble
have a high utility for it (a high taste parameter µ
i
) in this setting. Note that
this implies that even though they lose money on average they have a higher
162 LA. Busch, Microeconomics May2004
welfare. (The antigambling arguments in public policy debates therefore
come in two ﬂavours: (i) your gambling is against my (religious) beliefs, and
thus it ought to be banned, (ii) there are externalities: your lost money is
really not yours but should have bought a lunch for your child/spouse/dog.
Since your child/spouse/dog can’t make you stop, we will on their behalf.)
ii) Now µ is ﬁxed. Of course, the decision to gamble will still depend
on the same inequality, namely
µ > v(w
0
) −πv(w
0
+ (1 −π)P −q) −(1 −π)v(w
0
−πP −q).
We thus can translate this question into the question of how the right hand
side depends on w
0
and how this dependency relates to the diﬀerent be
haviours of risk aversion with wealth. So, is the right hand side increasing
or decreasing with wealth, and is this a monotonic relationship? The right
hand side is related, of course, to the utility loss from going to the expected
utility from the expected value (ignoring q for a minute.) Intuitively, we
would expect the diﬀerence to be declining in wealth for constant absolute
risk aversion: Constant absolute risk aversion implies a constant diﬀerence
between the expected value and the certainty equivalent.
1
Let this diﬀer
ence be the base of a right triangle. Orthogonal to that we have the side
which is the required distance between the two utilities. The third side must
have a declining slope as wealth increases since it is related to the marginal
utility of wealth at the certainty equivalent, which is declining in wealth by
assumption. There you go, I’d expect the utility diﬀerence must fall with
wealth.
More formally, consider the original inequality again and approximate
the RHS by its second order Taylor series expansion (that way we get ﬁrst
and second derivatives, which we want in order to form r
A
:
v(w
0
) −πv(w
⊕
0
) −(1 −π)v(w
0
−πP −q) ≈
v(w
0
) −π(v(w
0
) +⊕v
(w
0
) +
⊕
2
v
(w
0
)/2) −(1 −π)(v(w
0
) −v
(w
0
) +
2
v
(w
0
)/2)
= −π ⊕v
(w
0
) −π ⊕
2
v
(w
0
)/2(1 −π)(v
(w
0
) −
2
v
(w
0
)/2)
= v
(w
0
) [(1 −π) (1 −r
A
/2) −π ⊕(1 −⊕r
A
/2)] .
This looks more like it! Now note that we use and ⊕ as positive quanti
ties (which are not equal: is larger!) Furthermore we know that (a) this
quantity must be positive and (b) that π is probably a very small number.
Now, if r
A
is constant then the term in brackets is constant, but of course
1
Is there a general proof for that? Note that constant r
A
has for example the functional
form u(w) = −e
−aw
, for which the above is certainly true.
Answers 163
v
(w) falls with w and thus the right hand side of our initial inequality (way
above) falls. Any given µ is therefore more likely to be larger than it. Thus
rich consumers participate, poor consumers don’t if we have constant abso
lute risk aversion. If we have decreasing absolute risk aversion this eﬀect is
strengthened. Now, since relative risk aversion is just r
A
w, it follows that
constant relative risk aversion requires a decreasing absolute risk aversion,
and that decreasing relative risk aversion requires an even more decreasing
absolute risk aversion. Thus in all cases the rich gamble and the poor don’t.
(Note here that they are initially rich. Since they loose money on average
they will become poor and stop gambling.)
iii) If v(w) = ln w then v
(w) = 1/w and v
(w) = −1/w
2
. Therefore
r
A
= 1/w, with ∂r
A
/∂w < 0, and r
R
= 1. If v(w) =
√
w then v
(w) = 1/2
√
w
and v
(w) = −w
3/2
/4. Therefore r
A
= 1/(2w) and r
R
= 1/2. We now know
two pieces of information: the consumers’ risk aversion to a given size gam
ble is declining with wealth. This would, ceteris paribus make them more
likely to purchase the gamble for a constant µ (see above). But µ now is also
declining with wealth. The ﬁnal outcome therefore depends on what declines
faster, and we can’t make a deﬁnite statement. (As an aside note the follow
ing. Suppose we are talking stock market participation here. Then it might
be reasonable to assume that the utility of participating in it is increasing in
wealth, on average, and so we get higher participation by wealthier people.
Now, if the stock market on average is a bad bet we get mean reversion in
wealth, while if the stock market is on average more proﬁtable than savings
accounts etc we get the rich getting richer. If you now run a voting model
where the mean voter wins, you get the desire to redistribute (i.e., tax the
investing and proﬁting rich and give the cash to those who have a too high
marginal utility of wealth to invest themselves.) Note also that progressive
taxes reduce the returns of a given investment proportional to wealth, coun
teracting the above eﬀect of more participation by wealthy individuals. . . .
See how much fun you can have with these simple models and a willing
ness to extrapolate wildly?)
Question 7: This question forms part of a typical incomplete information
contracting environment. Here we focus only on the consumer’s behaviour.
a) Assume that the worker has a contractual obligation to provide an
eﬀort level of E. Once he has signed the contract, however, he knows that his
actual eﬀort is not observable and thus would try to shirk. Expected utility
is maximized for
e
∗
= argmax¦α
w(E) −p + (1 −α)
w(E) −e
2
¦.
The ﬁrst order condition for this problem is −2e = 0 if e = E. I.e., given
the worker shirks he will go all the way (after all, the punishment does not
164 LA. Busch, Microeconomics May2004
depend on the severity of the crime in any way.) Thus we need to ensure
that the worker will not shirk at all, which is the case if
w(E) − E
2
≥
α
w(E) −p + (1 − α)
w(E), or E
2
≤ α(
w(E) −
w(E) −p). If the
wage function satisﬁes this inequality for all E, it will elicit the correct eﬀort
levels in all cases.
b) Now we have a potentially variable punishment. Given some job with
contractual obligation E, the worker now will maximize expected utility and
set
e
∗
= argmax¦α
w(E) −p(E −e) + (1 −α)
w(E) −e
2
¦.
The FOC for this problem is αp
()(2
w(E) −p(E −e))
−1
−2e = 0. (There
are also second order conditions which need to hold!) This implies that the
worker will play oﬀ the cost of shirking against the gains from doing so. We
need to make sure that this equation is only satisﬁed for e
∗
= E, in which
case he “voluntarily” chooses the contracted level. This clearly requires a
positive p
(). In particular, αp
(0)(2
w(E))
−1
− 2E = 0. Note: We could
also vary the detection/supervision probability and make α depend on E.
Then we get e
∗
= argmax¦α(E)
w(E) −p + (1 − α(E))
w(E) − e
2
¦. As
in (a), if the worker deviates he will go all the way here. So the problem is
similar to (a), only the wage schedule is now diﬀerent since α(E) can also
vary now. What this shows us is that we tend to want a punishment and a
detection probability which both depend on the deviation from the correct
level. (This is going to be a question about the technology available: some
technologies may be able to detect ﬂagrant shirking more readily than slight
shirking.)
c) What this seems to indicate is that we would like to make punish
ments ﬁt the crime. (So for example, if the punishment for a holdup with a
weapon is as severe as if somebody actually gets shot during it, then I might
as well shoot people when I’m at it and I think that helps (and if it does not
increase the eﬀort the police put into ﬁnding me.)) Furthermore, if detection
is a function of the actual eﬀort level (the more you fudge the books the more
likely will you be detected) then we need lower punishments, ceteris paribus,
since the increasing risk will provide some disincentive to cheat anyways.
Question 8:
a) Let C
B
denote the coverage purchased for bad losses, and C
M
the
coverage for minor losses. Zero proﬁts imply that the premiums p
B
and p
M
for bad and minor losses, respectively, are p
B
= π/5 and p
M
= 4π/5. Hence
the consumer’s expected utility maximization problem becomes
max
C
B
,C
M
(1 −π)u(W −p
M
C
M
−p
B
C
B
)+
π(
1
5
u(W −p
M
C
M
−p
B
C
B
+C
B
−B)+
Answers 165
4
5
u(W −p
M
C
M
−p
B
C
B
+C
M
−M))
The ﬁrst order conditions for this problem are
−p
M
(1 −π)u
(n) −p
M
π
5
u
(b) + (1 −p
M
)
4π
5
u
(m) = 0
−p
B
(1 −π)u
(n) + (1 −p
B
)
π
5
u
(b) −p
B
4π
5
u
(m) = 0
Using the fair premiums this simpliﬁes to
−(1 −π)u
(n) −
π
5
u
(b) +
1 −
4π
5
u
(m) = 0
−(1 −π)u
(n) +
1 −
π
5
u
(b) −
4π
5
u
(m) = 0
Hence
1 −
4π
5
u
(m) −
π
5
u
(b) =
1 −
π
5
u
(b) −
4π
5
u
(m)
and thus u
(b) = u
(m), which ﬁnally implies that u
(n) = u
(b) = u
(m) and
therefore that
0 = C
B
−B = C
M
−M.
As expected, the consumer buys full insurance for each accident type sepa
rately.
b) Now only one coverage can be purchased, denote it by C, and will
be paid in case of either accident. Zero proﬁts imply that the premium
p is p = π. Hence the consumer’s expected utility maximization problem
becomes
max
C
(1 −π)u(W −pC)+
π(
1
5
u(W −pC +C −B)+
4
5
u(W −pC +C −M))
The ﬁrst order condition for this problem is
−p(1 −π)u
(n) + (1 −p)
π
5
u
(b) + (1 −p)
4π
5
u
(m) = 0
Using the fair premium this simpliﬁes to
u
(n) =
1
5
u
(b) +
4
5
u
(m)
166 LA. Busch, Microeconomics May2004
and hence either
u
(b) > u
(n) > u
(m) or u
(b) < u
(n) < u
(m).
Thus either W − B + (1 − π)C < W − πC < W − M + (1 − π)C or W −
B + (1 − π)C > W − πC > W − M + (1 − π)C, but this implies either
−B + 1C < 0 < −M + 1C or −B + 1C > 0 > −M + 1C. Since B > M
by deﬁnition we obtain that B > C > M, the consumer over insures against
minor losses, and is under insured against big losses.
Question 9: From Figure 3.7 in the text, we can take the consumers’ budget
line to be the line from the risk free asset point (the origin in this case) to a
tangency with the eﬃcient portfolio frontier. Now this tangency occurs where
the margin is equal to the average, so that
√
σ −16/σ = (2
√
σ −16)
−1
. That
means that the market portfolio has 2(σ − 16) = σ or σ = 32. Therefore
µ = 4. The slope of the portfolio line thus is 4/32. For an optimal solution
the consumer’s MRS must equal the slope of the portfolio line. For the two
consumers given the MRS is σ/32 and σ/96. Thus the optima are σ =
4, µ = 1/2 and σ = 12, µ = 3/2. As expected, the consumer with the higher
marginal utility for the mean will have a higher mean at the same prices (and
given that both have the same disutility from variance.)
Question 10: The asset pricing formula implies that the expected return of
the insurance equals the expected riskfree return less a covariance term. If
insurance has a lower expected return than the riskfree asset, this covariance
term must be positive. In the denominator we have the expected marginal
utility, guaranteed to be positive. Thus the numerator must be positive. This
means that Cov(u
(w), R
i
) > 0. But since u
(w) < 0 this implies that the
covariance between w and R
i
is negative, that is, if wealth is low the return
to the policy is high, if wealth is high, the return to the policy is low. That of
course is precisely the feature of disability insurance which replaces income
from work if and only if the consumer is unable to work.
Question 11:
1) False. The second order condition would indicate a minimum as
demonstrated here: max
C
¦πu(w−L−pC+C)+(1−π)u(w−pC)¦ has FOC
π(1−p)u
(w−L+(1−p)C)−p(1−π)u
(w−pC) = 0. The second order condi
tion for a maximum is π(1−p)
2
u
(w−L+(1−p)C)+p
2
(1−π)u
(w−pC) ≤ 0.
Note that 1 ≥ π, p ≥ 0, so that the SOC requires u
() to be negative for at
least one of the terms. A risklover has, by deﬁnition, u
() > 0.
2) Uncertain. We can draw 2 diagrams to demonstrate. In both we have
two intersecting budget lines, one steeper, one ﬂatter. The ﬂatter one corre
sponds to the initial situation. They intersect at the consumer’s endowment.
Since the consumer is a borrower, the initial consumption point is below and
Answers 167
to the right of the endowment on the initial budget. The indiﬀerence curve
through this point is tangent to this budget. It may, however, cut the new
budget (so that the IC tangent to the new budget represents a higher level
of utility) or lie everywhere above it (in which case utility falls.)
3) True. Apply the following positive monotonic transformations to the
ﬁrst function: −2462, 12, collect terms in one logarithm, take exponential,
take the 9000th root. What you get is the second function.
4) True. A risk averse consumer is deﬁned as having u(
xg(x)dx) >
u(x)g(x)dx. Let the consumer have initial wealth w and suppose he could
participate in a lottery which leads to a change in his initial wealth by x, dis
tributed as f(x). Suppose the payment for this lottery is p. If this payment
is equal to the expected value of the lottery then the consumer will not have
a change in expected wealth, but will face risk. Thus by deﬁnition he would
not buy this lottery. If the payment is less, then the expected value of wealth
from participating in the lottery exceeds the initial wealth. Depending on by
how much, the consumer may purchase. A risk loving consumer, of course,
would already buy at when the expected net gain is zero. (This argument
could be made more precise, and you should try to put it into equations!)
5) False. The market rate of return is 15%. Gargleblaster stock has a
rate of return of (117 −90)/90 = 30%. This violates zero arbitrage.
6) True. All consumers face the same budget line in meanvariance
space. At an interior optimum (and assuming their MRS is deﬁned) they all
consume on this line where the tangency to their indiﬀerence curve occurs.
This may be anywhere along the line, depending on tastes, but the slope is
dictated by the market price for risk.
Question 12:
a) Since workers work as bus driver and at a desk job we require
2
√
40000 = α2
√
44100 −11700 + (1 −α)2
√
44100
Therefore
α =
√
44100 −
√
40000
√
44100 −
√
32400
=
210 −200
210 −180
=
1
3
.
b) Since workers work on oil rigs and at a desk job we require
2
√
40000 = 0.5 2
√
122500 −Loss + 0.5 2
√
122500.
Thus 400 =
√
122500 −Loss + 350 and hence 50 =
√
122500 −Loss or
Loss = 120000.
c) At fair premiums the workers will fully insure. That is, they suﬀer
their expected loss for certain. For a bus driver the expected loss is 11700/3 =
3900. Thus the bus driver wage must satisfy 2
√
40000 = 2
√
w −3900 and
168 LA. Busch, Microeconomics May2004
hence it is $43900. For the oil rig worker the expected loss is $60000, and
their wages will fall to $100000 under workers compensation. Note the the
condition that workers take all jobs together with a ﬁxed desk job wage ﬁxes
the utility level in equilibrium for workers. However, the wage premium for
risky jobs will not have to be paid: the wages of the risky occupations fall
which beneﬁts the ﬁrms in those industries by lowering their wage costs.
(This is why industries are in favour of workers’ compensation.)
d) The average probability of an accident now is 0.4 0.5 +0.6 1/3 =
0.4. If we were to use this as a fair premium (but see below!) this premium is
too high for bus drivers, who will under insure, and too low for oil rig workers,
who will over insure. Indeed, the bus drivers will choose to buy insurance C
b
such that 6
√
44100 −0.4C
b
= 8
√
32400 +.6C
b
(Take the ﬁrst order condition
for max
C
b
¦(1/3)
√
32400 + 0.6C
b
+(2/3)
√
44100 −0.4C
b
¦, bring the
√
from
the denominator into the numerators and loose the 1/30 on both sides.) Thus
we require 9(44100−0.4C
b
) = 16(32400+0.6C
b
), or C
b
= 10(944100−16
32400)/(616+49) = −9204. What does this mean? It means that the bus
drivers would like to bet on themselves having an accident buying negative
amounts of insurance! (The ultimate in under insurance!) Note that the
governments expected proﬁt from bus drivers is −0.49204/3+1.29204/3 =
2454.40 > 0.
The oil rig workers would need to solve
max
Co
¦
√
2500 + 0.6C
o
+
√
122500 −0.4C
o
¦, which leads to
3
√
122500 −0.4C
o
= 2
√
2500 + 0.6C and thus C
o
= 182083.33. Note that
the govt looses money on them, since (0.5 0.4 − 0.5 0.6) 182083.33 =
−18208.30.
Overall then the govt makes losses of 5810.68N, where N is the number
of workers in risky occupations. At the old wages both groups are better
oﬀ (and thus there would be an inﬂux of desk workers and a reallocation
towards oil rigs.) In order to break even the insurance rates would have to
be changed, in particular raised. It also seems that the govt would ban the
purchase of negative insurance amounts. In which case the bus drivers would
ﬁnd it optimal to buy no insurance, and then premiums would have to be 0.5
for the govt to break even. This would be deemed unjust by all involved, and
so in practice the govt forces all workers to buy a ﬁxed amount of insurance!
In principle we could compute equilibrium wages if we treat the insur
ance purchase as a function of the wage. So, for example we know from the
above that C
b
(w) = 10(9w−16(w−11700))/(616+49). We then solve
for s
√
40000 = (2/3)
w + (1 −0.4)C
b
(w) + (4/3)
w −11700 −0.4C
b
(w).
The details are left to the reader.
Answers 169
The important point here is that it is important to charge the correct
premiums. If that is not done things will work out funny. That in turn leads
to real life plans which do not allow a choice — workers have to insure, the
amount is dictated (often capped, that is, the insured amount is a function
of the wage up to a maximum.) You can see that such plans can be quite
complicated and that it can be quite complicated to ﬁgure out who would
want to do what, what the distributional implications are, etc.
Question 13: Let us translate the question into notation: We are to show
that
u
(c
1
)
u
(c
2
)
= k if
c
2
c
1
= λ if the function u() satisﬁes
u
(w)w
u
(w)
= a, ∀w.
u
(c
1
)
u
(c
2
)
= k and
c
2
c
1
= λ =⇒ u
(c
1
) = k(λ)u
(λc
1
).
If the left and right hand side of that last expression are identical functions,
then their derivatives must equal: u
(c
1
) = k(λ)λu
(λc
1
), but we know that
k(λ) = u
(c
1
)/u
(λc
1
), so that
u
(c
1
) =
u
(c
1
)
u
(λc
1
)
λu
(λc
1
) =⇒
u
(c
1
)
u
(c
1
)
= λ
u
(λc
1
)
u
(λc
1
)
Thus the MRS is constant for any consumption ratio λ if
u
(c
1
)c
1
u
(c
1
)
=
u
(λc
1
)λc
1
u
(λc
1
)
∀λ,
which is constant relative risk aversion.
7.4 Chapter 6
Question 1: A 3player game in extensive form comprises a game tree, Γ,
a payoﬀ vector of length three for each terminal node, a partition of the set
of nonterminal nodes into player sets S
0
, S
1
, S
2
, S
3
, a partition of the player
sets S
1
, S
2
, S
3
into information sets. Further, a probability distribution for
each node in S
0
over the set of immediate followers and for each S
J
i
an index
set I
J
i
and a 11 mapping from I
j
i
to the set of immediate followers of the
nodes in S
j
i
. Any carefully labelled game tree diagram will do. It does not
even have to have nature (i.e., S
0
could be empty.)
Question 2: Perfect recall is when each player never forgets any of his own
previous moves (so that for any two nodes within an information set one
170 LA. Busch, Microeconomics May2004
may not be a predecessor of the other and any two nodes may not have a
common predecessor in another information set of that player such that the
arc leading to the nodes diﬀers) and never forgets information once known (so
that any two nodes in a player’s information set may not have predecessors
in distinct previous information sets of this player.) Counter examples as in
the text, or any game which violates these requirements.
Question 3: Yes, any ﬁnite game has a Nash equilibrium, possibly in mixed
strategies. This follows from the Theorem we have in the text. The game
therefore will also have a SPE (they are a subset of the Nash equilibria, but
keep in mind that the condition of subgame perfection may have no ‘bite’,
in which case we revert to Nash.)
Question 4: Player 1 has no weakly dominated strategies since
u
1
(D, L, Left) > u
1
(U, L, Left) but u
1
(D, R, Left) < u
1
(U, R, Left), while
u
1
(C, L, Right) > u
1
(U, L, Right). Player 3 does also not have a weakly
dominated strategy. Depending on the opponents’ moves he gets a higher
payoﬀ sometimes in the left and sometimes in the right matrix. Player 2 does
have weakly dominated strategies: Both L and R are weakly dominated by
C.
This does not leave us with a good prediction yet, aside from the fact
that 2 can be argued to play C. However, if we now consider repeated
elimination we can narrow down the answer to what is also the unique Nash
equilibrium in pure strategies in this case, (D, C, Right).
To ﬁnd mixed strategy Nash we assign probabilities to the strategies
for players, so let µ
1
= Pr(U), µ
2
= Pr(C), γ
1
= Pr(L), γ
2
= Pr(R), and
α = Pr(Left). We can then compute the payoﬀs for players for each of their
pure strategies. So for example u
1
(U, γ, α) = α(γ
1
+ 2γ
2
+ (1 − γ
1
− γ
2
) +
(1 −α)(2γ
1
+ 4γ
2
+ 2(1 −γ
1
−γ
2
)). We then can ask, when is player 1, say,
actually willing to mix? Only if the payoﬀ from the pure strategies in the
support of the mixed strategy are equal, so that the player does not care.
Question 5: This one is made easier by the fact that strategy R is (strictly)
dominated, so that it will never be used in any mixed strategy equilibrium
(or indeed any equilibrium.) Hence this is really just a 2 2 matrix we need
to consider. Let α = Pr(U) and β = Pr(L), so that Pr(C) = 1 − α and
Pr(C) = 1−β. Then for player 1 to mix we require β+4(1−β) = 3β+2(1−β),
hence 2 = 4β, and hence β = 0.5. So if player 2 mixes with this probability
then player 1 is indiﬀerent between his two strategies. Now look at player
2: For 2 to be indiﬀerent between the two strategies L and C we require
4α + 2(1 − α) = 2α + 3(1 − α). Hence 3α = 1 and thus α = 1/3. Thus the
Answers 171
mixed strategy Nash equilibrium is ((1/3, 2/3), (1/2, 1/2)). Note that the
game has no pure strategy Nash equilibria.
Question 6: This is a two player game (the court is not a strategic player
and does not receive any payoﬀs.) The most natural extensive form for such
a situation is probably as in the game tree on the next page.
e
railway
high
low
s Nature sNature
Accident p
l
No Acc
(1 −p
l
)
Accident p
h
No Acc
(1 −p
h
)
(20, 10) (25, 10)
s s town
Sue
Not Sue
Sue
Not Sue
(10, 0) (12, 2) (5, 10) (17, 2)
Here it is important to note that p
h
> p
l
, reﬂecting the fact that if low
care is taken the accident probability is higher. I have arbitrarily assigned
payoﬀs which satisfy the description. High level of care costs the railway 5,
accidents impose a cost of 8 on both parties, legal costs are 2 for each party.
Let us try and ﬁnd the Nash equilibrium of this game. As an exercise
let us ﬁrst ﬁnd the strategic form:
R`T Sue NotSue
high (20 −10p
l
, 10 −10p
l
) (20 −8p
l
, 10 −8p
l
)
low (25 −20p
h
, 10) (25 −8p
h
, 10 −8p
h
)
Note that low strictly dominates high for the railway if 0.5 > (2p
h
−
p
l
) while high strictly dominates low if p
h
− p
l
> 5/8. In those cases the
172 LA. Busch, Microeconomics May2004
Nash equilibria are (low, Sue) and (high, NotSue), respectively. Otherwise
there will be a mixed strategy equilibrium. Let α be the probability with
which the railway uses the high eﬀort level. The town is indiﬀerent iﬀ its
expected payoﬀs from the two strategies are the same, that is, if 10−10αp
l
=
10 − 8p
h
+ 8α(p
h
− p
l
). This is the case if α = 4p
h
/(4p
h
+ p
l
). For lower
α it prefers to Sue, for higher α it prefers to NotSue. Letting β denote
the probability with which the town sues, the railway expects to receive
20 −8p
l
−2βp
l
from high and 25 −8p
h
−12βp
h
from low. It is indiﬀerent if
β = (5 −8(p
h
−p
l
))/(12p
h
−2p
l
). So the mixed strategy Nash equilibrium is
(α, β) =
4p
h
4p
h
+p
l
,
5 −8(p
h
−p
l
)
12p
h
−2p
l
if p
h
−
5
8
> p
l
> 2p
h
−
1
2
.
Question 7: There are two ways to draw this game. We can have nature
move ﬁrst and then Romeo (who does not observe nature’s move.) Or we
can have Romeo move ﬁrst, and then nature determines if the move is seen.
The game tree for the ﬁrst case is as drawn below.
e
Nature
>
>
>
>
>
>
>
>
>
>
see p
not see 1p
s
Romeo
s
S
M
S
M
s J
1
s
J
2
s J
3
s
S
`
`
`
`
`
`
M
S
`
`
`
`
`
`
M
S
`
`
`
`
`
`
M
S
`
`
`
`
`
`
M
(30,50) (1,1) (5,5) (50,30) (30,50) (1,1) (5,5) (50,30)
A strategy vector in this game is (s
R
, (s
1
J
, s
2
J
, s
3
J
)). Subgames start at
information sets J
1
and J
2
, the only other subgame is the whole tree. In the
subgame perfect equilibrium Juliet therefore is restricted to (S, M, ). Let α
denote Romeo’s probability of moving S, and β Juliet’s (in J
3
.) Romeo’s
(expected) payoﬀ from S is 30p + (1 − p)(30β + 1 − β) and his payoﬀ from
M is 50p + (1 − p)(5β + 50(1 − β)). The β for which he is indiﬀerent is
(49 − 29p)/(74(1 − p)). Note that this is increasing in p and that β = 1 if
p = 5/9! Juliet has payoﬀs of 50α+5(1−α) and α+30(1−α) from moving S
Answers 173
and M, respectively, in J
3
. Hence she is indiﬀerent if α = 25/74. Of course,
pure strategy equilibria may also exist, and we get the SPE equilibria to be
25
74
,
S, M,
49 −29p
74(1 −p)
, (S, (S, M, S)) , (M, (S, M, M)) if p <
5
9
.
Note that (S, (S, M, S)) requires that 30 > 50p + 5(1 − p), or p < 5/9 also.
(M, (S, M, M)) requires that 50 > 30p+(1−p), or p < 49/29, which is always
true. What if p > 5/9? In that case the equilibrium in which the outcome
is coordination on S (preferred by Juliet) does not exist, and neither does
the mixed strategy equilibrium. Hence the unique equilibrium if p ≥ 5/9 is
(M, (S, M, M)). Romeo can eﬀectively insist on his preferred outcome.
Question 8: Each ﬁrm will
max
q
i
(
¸
j=i
q
j
+q
i
−10)
2
q
i
−0q
i
¸
.
The FOC for this problem is
2(
¸
j=i
q
j
+q
i
−10)q
i
+ (
¸
j=i
q
j
+q
i
−10)
2
= 0
Hence if
¸
j=i
q
j
+q
i
−10 = 0 we require
2q
i
+ (
¸
j=i
q
j
+q
i
−10) = 0
and get the reaction function q
i
= (10 −
¸
j=i
q
j
)/3. With identical ﬁrms we
then know that in equilibrium q
j
= q
i
, so that
¸
j=i
q
j
= (n − 1)q
i
. Hence
3q
i
= 10 − (n − 1)q
i
and we get that q
i
∗ = 10/(n + 2) ∀i. Total market
output then is 10n/(n + 2). Note that total market output approaches 10
from below as n gets large. Market price for a given n is 400/(n+2)
2
, which
approaches zero as n gets large. (Note that the marginal cost is zero and
hence the perfectly competitive price is zero!)
Question 9: In the ﬁrst instance the sellers can only vary price. To clarify
ideas, let us focus on two prices only (as would be needed for a separating
equilibrium.) The game then is as depicted below. We are to show that no
separating equilibrium exists. If it did, it would have to be the two prices as
indicated, where one ﬁrm charges one price (presumably the high quality ﬁrm
charging the higher price) the other another. But given that, the consumer
knows (in equilibrium) which ﬁrm produced the product. It is easy to see that
the low quality ﬁrm would deviate to the higher price (being then mistaken
174 LA. Busch, Microeconomics May2004
(9 −p,
p −3)
(0, −3)
(0, −2)
(6 −p,
p −2)
(9 − ˆ p,
ˆ p −3)
(0, −3)
(0, −2)
(6 − ˆ p,
ˆ p −2)
buy
buy
No
No
buy
buy
No
No
p ˆ p
p ˆ p
seller
seller
high
low
(1/2)
(1/2)
s s s
s s s `
`
`
`
`
`
`
`
`
`
`
`
`
`
e
Root: Nature
α β
buyer buyer
for the high quality ﬁrm so that the consumer buys) since costs are unaﬀected
by such a move, but a higher price is received.
At this point the remainder is nontrivial and left for summer study! The
key is that the consumer now has an information set for each pricewarranty
pair, and that there are two nodes in it, one for each type of ﬁrm.
Question 10: What was not stated in the question was the fact that each
consumer buys either one or no units. Each buyer purchases a unit of the
good if and only if the price is below the valuation of the buyer. Hence total
market demand is given by the number of buyers with a valuation above p,
or 1 −F(p). F(v) is the cumulative distribution for the uniform distribution
on [0, 2]. Since the pdf for the uniform distribution on [0, 2] is 0.5, we have
F(v) =
v
0
0.5dt = 0.5v. Hence market demand is 1−0.5p and inverse market
demand is 2(1 −Q).
A Cournot equilibrium is nothing but a Nash equilibrium in the game
in which ﬁrms simultaneously choose output levels. Hence ﬁrm 1 solves
max
q
1
¦2(1 −q
1
−q
2
)q
1
−q
1
/10¦
which leads to FOC 2(1−q
1
−q
2
) −2q
1
−1/10 = 0 and the reaction function
q
1
(q
2
) = 19/40 −q
2
/2.
Firm 2 solves
max
q
2
¸
2(1 −q
1
−q
2
)q
2
−q
2
2
¸
which leads to FOC 2(1 −q
1
−q
2
) −2q
2
−2q
2
= 0 and the reaction function
q
2
(q
1
) = 1/3 −q
1
/3.
The Nash equilibrium then is (q
1
, q
2
) = (37/100, 21/100). Market price
is 42/50. Proﬁts for the two ﬁrms are 7437/10000 for ﬁrm 1 and (8221−
21
2
)/10000 for ﬁrm 2, so that joint proﬁt is (74 37 +82 21 −21
2
)/10000.
Answers 175
In the Stackelberg leader case we consider the SPE of the game in which
ﬁrm 1 chooses output ﬁrst and ﬁrm 2, after observing ﬁrm 1’s output choice,
picks its output level. Firm 1, the Stackelberg leader, therefore takes ﬁrm
2’s reaction function as given. Thus ﬁrm 1 solves
max
q
1
2
1 −q
1
−
1
3
−
q
1
3
q
1
−q
1
/10
The FOC for this is 4(1 − 2q
1
)/3 − 1/10 = 0 and hence q
1
= 37/80. Thus
q
2
= 43/240. Market price is 86/240. Proﬁts for the two ﬁrms are (62
37)/(24080) and 4343/240
2
. Joint proﬁt thus is (18637+4343)/240
2
.
Joint proﬁt maximization would require that the ﬁrms solve
max
q
1
,q
2
¸
2(1 −q
1
−q
2
)(q
1
+q
2
) −q
1
/10 −q
2
2
¸
.
This has FOCs
2(1 −q
1
−q
2
) −2(q
1
+q
2
) −1/10 = 0
2(1 −q
1
−q
2
) −2(q
1
+q
2
) −2q
2
= 0
so that we know that 2q
2
= 1/10 or q
2
= 1/20. Hence 2(1 − q
1
− 1/20) −
2(q
1
+1/20) −1/10 = 0 and 2 −4q
1
−3/10 = 0 and q
1
= 7/40. Market price
then is 2(31/40). Joint proﬁts are 2(31/40)(9/40) − 8/400. This cannot be
attained as a Nash equilibrium because neither output level is on the ﬁrm’s
reaction function, and only output levels on the reaction function are, by
design, a best response.
Contents
1 Preliminaries 1.1 1.2 1.3 About Economists . . . . . . . . . . . . . . . . . . . . . . . . What is (Micro—)Economics? . . . . . . . . . . . . . . . Economics and Sex . . . . . . . . . . . . . . . . . . . . . . . . Notation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2 Review 2.1 Modelling Choice Behaviour 2.1.1 2.1.2 2.1.3 2.2
1
1 2 2 5 8 9 . . . . . . . . . . . . . . . . . . 9
Choice Rules . . . . . . . . . . . . . . . . . . . . . . . 10 Preferences . . . . . . . . . . . . . . . . . . . . . . . . 11 What gives? . . . . . . . . . . . . . . . . . . . . . . . . 13
Consumer Theory . . . . . . . . . . . . . . . . . . . . . . . . . 15 Special Utility Functions . . . . . . . . . . . . . . . . . . . . . 18 2.2.1 Utility Maximization . . . . . . . . . . . . . . . . . . . 20
Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22 2.2.2 2.3
1
Expenditure Minimization and the Slutsky equation . . 25
General Equilibrium . . . . . . . . . . . . . . . . . . . . . . . 27
This material is based on MasColell, Whinston, Green, chapter 1
i
3 3. . . . . . . . . . . . . . . . . . . .3. 43 3. . . 52 3. . . .3. . . . . . . . . .1. . . . Full Markets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Investment. 47 3. No Investment. . . . . . . . 65 . . . . . . . . . .2 3. . . 40 Storage. Busch. . . . . . . . . . . . . 51 A Short Digression into Financial Economics . . . . . . 47 Riskfree Assets . . . . .1. . . . .1 2. . . . . . . . . . . . . . .3. . . . . . . .4 Review Problems . . . . . . . . Microeconomics 2. . . . . . . . . . . . . . . . . . . . . . . . 27 A simple production economy . . . . .2 3. . No Investment. . . . . . . .2 2. . . . . . . 41 Storage. . 35 39 3 Intertemporal Economics 3. . . . . . . . . .1 Deriving the budget set . . . 33 Review Problems . . .1 Comparing degrees of risk aversion . . . . .4 Bonds . . . . . . . . . . . . . . . .1 The consumer’s problem . . . . . . . . . No Markets . . . . . . . . . . . 42 Storage. 40 No Storage. . No Investment. . . . . . . . . . . . . Investment. . . . .3 Utility maximization . . . . . . . . . . 48 More on Rates of Return . . . . . . . . .1 3. . . 39 3. . . . . . . . .3. . . . . . .ii LA. . 49 Resource Depletion . . . . . . . . . . . . . . . . . . . . . . .3. . . . . . . . .1. . . . . . . . . . 61 4. . . . . . . . . . . . . . . No Markets . . . No Markets . . . . .1 Risk Aversion . 40 Storage. 54 57 4 Uncertainty 4. . . . 44 Real Interest Rates . . . . . . . . . . Full Markets . .2 3.4 May 2004 Pure Exchange . . . .3.
1 The Abstract PA Relationship . . . . . . . . . . . .3 5. .1 6. . . . . . . . 74 Risk Aversion Again . . . . . . . . .5. . . . . . . . . . . .5. . . . . . . 77 Risk spreading . . . . . . . . . . . . .2 The Extensive Form . . . . .4 Comparing gambles with respect to risk . . . . . . . . . . . . . . . . . . . . . . 121 . . . . . . . . . 79 MeanVariance Utility . . . . 69 The StatePreference Approach . . . .4 4. . 84 4. . 81 CAPM . . . . . . . 103 The Principal Agent Problem . . . . 98 Moral Hazard . . .1. . . . 95 Adverse Selection . . . . . . . . . . . . . . 105 5. . . . . . . 76 4. . . . . . . . 67 A ﬁrst look at Insurance . . . 84 91 5 Information 5. .1 5. . . . . . . . . . . . . . . . . .4. .5. . . . . . . . . . . . .2 Insurance in a State Model . . . .4. . . .1.4. . . . . . . . . . . . .4 Search . 78 Back to Asset Pricing . . . . . . . . . . . . . . . . . .5 Asset Pricing . . . . . . . . . .5. . . . 77 4. 117 6. . . . . . . . . . . . . . .Contents iii 4. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72 4. . . .5. . . . . . . . . . .1 Descriptions of Strategic Decision Problems . . . . . .1 4. . . . .2 5. 105 113 6 Game Theory 6. . . . . . . . . . . . . . . . .2 4. . . . . . . . . . . . . . . . . . . . . .3 4. . . . . . . . . . . . . . . . . . . . . . . .1 4. . . 117 Strategies and the Strategic Form . .2 4. . . . . . . . . . . . . . . . . . . . . . . . . . . . . .6 Review Problems . . . . . . . . .5 Diversiﬁcation .3 4.
2. . . . . . . . 134 Signalling Games . . . . . . . . . . . . .2. . . . . . . . . . . . . . . . . . . .iv LA. . . . . . . . . . . . . . . . . .4 Chapter 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126 6. .2 May 2004 Solution Concepts for Strategic Decision Problems . . . .2 7. 127 Equilibrium Reﬁnements for the Strategic Form . . . . . . . . . . . . . . . . . .1 6. Microeconomics 6. .3 7. . . . 140 145 7 Review Question Answers 7. . . .2. 159 Chapter 6 . . . .1 7. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155 Chapter 4 . . . . . . . . . Busch. . . 131 Equilibrium Concepts and Reﬁnements for the Extensive Form . . . . .2 6. . . 169 . . . . . . . . 145 Chapter 3 . . . . . . .3 Review Problems . . . . . . 138 6. . . . . . . . . . . . . . . . . . . .3 Equilibrium Concepts for the Strategic Form . . .
. . . . . . 76 Eﬃcient Portfolios and the Market Portfolio . . . . 63 The certainty equivalent to a gamble . . . . . . . . . . . . . . . . . . . . . . . . . . 29 Pareto Optimal Allocations . . . . . . . . . 23 An Edgeworth Box . . . . . . . . . 62 Risk Neutral and Risk Loving .3 5. . . . . . . . . .1 4.4 4. . . 103 v . . . . . . .4 Consumer Optimum: The tangency condition . . .5 4. . . . . . . 83 Insurance for Two types . .7 5. . . . . 75 Risk aversion in the State Model . . . . . . . . . . . . . . . . .3 4. . . . . . . . . . . . . . . . . . . .2 2. . . . .2 5. . . . . . . . . 99 Impossibility of Pooling Equilibria . . . 22 Corner Solutions: Lack of tangency . . . . . . .1 5. . . . . . 64 Comparing two gambles with equal expected value . . .List of Figures 2. . .4 3. . . . . . . . . . . . . 44 Risk Aversion .1 2. . . 30 Storage without and with markets. . . . . . . . . 67 An Insurance Problem in StateConsumption space . . . . . . 102 Separating Contracts deﬁnitely impossible . . . . . . . .3 2. . . .2 4. . . 101 Separating Contracts could be possible . . . .6 4. . . . . . . . . . . . . . . . . . . . no (physical) investment . .1 4. . . . . .
4 6. . . . .7 6. . . . . . . . . . . . . . . . . . . . . . . . . 120 From Incomplete to Imperfect Information . 140 . .6 6. . . .8 6. . . . . . . . . . . .2 6. . 137 A Signalling Game . . . . .9 May 2004 Examples of valid and invalid trees . . . . . . . . . . Microeconomics 6. . Busch. . . . . . 118 A Game of Perfect Recall and a Counterexample . . . . 121 A Matrix game — game in strategic form . . . . . . . . . . . . . . . 126 Valid and Invalid Subgames . . . . . . . .5 6. . . . . . . . .10 A Minor Perturbation? . . . . . . . . . . . . . . . . . . . . . .vi LA. . 135 The “Horse” . . . . . .1 6. 125 The 4 standard games . . . . . . . . .3 6. . . . . . . . 139 6. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 124 A simple Bargaining Game . . .
as well as allowing for a review. I also would like to acknowledge my teachers Don Ferguson and Glen MacDonald. It is assumed that the student has mastered the prerequisites and little or no time is spent on them. I wish to acknowledge two books. Microeconomic Theory. which have served as references: the most excellent book by MasColell. The material covered is by now fairly standard and can be found in one form or another in most micro texts. 1 . who have done much to bring microeconomics alive for me. I assume that the student has access to a standard undergraduate micro theory text book. interesting. These notes will not give references. aside from a review of standard consumer economics and general equilibrium in chapter 1. as well as the much more concise Jehle and Reny. and Green. Whinston. Advanced Microeconomic Theory. however. Their sources have been indicated. of the material from the prerequisites. amusing.Chapter 1 Preliminaries These notes have been written for Econ 401 as taught by me at the University of Waterloo. if necessary. Any of the books commonly used will do and will give introductions to the topics covered here. and thought provoking. This preliminary chapter contains extensive quotes which I have found informative. As such the list of topics reﬂects the course material for that particular course.
25: No real Englishman. and so I thought to collect here some material which I to use to start a course.2 LA. who really preferred ‘sawdust without butter’. he is much more likely to be sorry for his life. of the way in which micro economics is practiced. and people say that the dryness of the Sahara is caused by a deposit of similar bones. Since then. Malinvaud sees economics as follows: . 1993. You would think a man who could digest all that arid matter. This is a matter of sorrow for them. the answer would undoubtedly be No. this question may have received no further coverage.4. If you mention your ﬁeld of study in the bar. It is meant to provide a background for the ﬁeld as well as a defense. (Walter Bagehot (1855)) Are economists human? By overwhelming majority vote. in his secret soul. was ever sorry for the death of a political economist. Microeconomics May2004 1. might defy by intensity of internal constitution all stomachic or lesser diseases.2 What is (Micro—)Economics? In Introductory Economics the question of what economics is has received some attention. (Geoﬀrey Crowther (1952)) 1.1 About Economists By now you may have heard many jokes about economists and noticed that modern economics has a bad reputation in some circles. directed to promoting the wealth and welfare of mankind. mostly. Busch. However they do die. You might as well cry at the death of a cormorant. however. or consciously. for there is no body of men whose professional labours are more conscientiously. Sept. That they tend to be regarded as bluenosed killjoys must be the result of a great misunderstanding. p. Indeed how he can die is very odd. of sorts. you are quite possibly forced to defend yourself against various stereotypical charges (focusing on assumptions.) The most eloquent quotes about economists that I know of are the following two quotes reproduced from the Economist. who liked the tough subsistence of rigid formulae.
in its abstract formulations. in the institutions and activities whose object it is to facilitate these operations. Malinvaud. and in particular what micro theory entails. Mathematics is a language that facilitates the honest presentation of a theory by making the assumptions explicit and by making each step of the logical deduction clear.. it is interested in the essential operations of production. economics is a complex subject and involves many things that cannot be expressed readily in terms of mathematics. Thus it provides a basis for further developments and extensions. . the rest of economic theory is in most cases macroeconomic. By contrast. p..Preliminaries 3 Here we propose the alternative. exchanged and consumed. Lectures on Microeconomic Theory. This seems a necessary condition a priori for logical investigation of the phenomena in question.. distribution and consumption of goods.12. [E.. and one key critique heard often is the “attempt at overblown rigor” and the “unrealistic assumptions” which micro theory employs.] This gives us a nice description of what economics is. it provides the possibility for more accurate empirical testing. it respects the individuality of each good and each agent. more explicit deﬁnition: economics is the science which studies how scarce resources are employed for the satisfaction of the needs of men living in society: on the one hand. Moreover. revised. and on the other hand. In following the agenda laid out by Malinvaud a certain amount of theoretical abstraction and rigor have been found necessary. First Takayama: The essential feature of modern economic theory is that it is analytical and mathematical. It is called microeconomics because. [. [. NH.] But yet. reasoning directly on the basis of aggregates of goods and agents. simply because such testing requires explicit and mathematical representations of the propositions of the theories to be tested. Not only are some assumptions hidden and obscured in the theories of the verbal and “curvebending” economic schools. 1985. Takayama and Hildenbrand both address these criticisms in the opening pages of their respective books.] The main object of the theory in which we are interested is the analysis of the simultaneous determination of prices and the quantities produced. but their approaches provide no scope for accurate empirical testing.
” (footnote 2. Mathematical Economics. p.2..” In other words. [. It is important to get an intuitive understanding of the theorems (by means of diagrams and so on. economics is a combination of poetry and hardboiled analysis accompanied by institutional facts.M. Rather.M. J.. 1985.] Mathematical economics is a ﬁeld that is concerned with complete and hardboiled analysis. p. I think. J. It is a method rather than a doctrine. This does not imply. They say that all abstraction is falsiﬁcation. which helps its possessor to draw conclusions. but this understanding is useless without a thorough knowledge of the assumptions and proofs. for actual economies are far too complex to allow the ready application of these theorems. [Akira Takayama.. that hardboiled analysis is useless. chieﬂy. it is the best way to express oneself honestly without being buried in the millions of institutional facts.] Hildenbrand oﬀers the following: I cannot refrain from repeating here the quotation from Bertrand Russell cited by F. xv.4 LA. 2nd ed. but as they do not wish to give this reason they invent all sorts of other that sound grand. Keynes once remarked that “the theory of economics does not furnish a body of settled conclusions immediately applicable to policy. most of which are not precise. and that as soon as you have left out any aspect of something actual you have exposed yourself to the risk of fallacy in arguing from its remaining aspects alone. Busch. Keynes remarked that economics involves the “amalgam of logic and intuition and wide knowledge of facts. because of its intellectual diﬃculty. an apparatus of the mind. Microeconomics Commenting on Max Planck’s decision not to study economics. The essence here is the method of analysis and not the resulting collection of theorems. contrary to what many poets and institutionalists feel. Those who argue that way are in fact concerned with matters quite other than those that concern science. Hahn in his inaugural lecture in Cambridge: “Many people have a passionate hatred of abstraction. if necessary). a technique of thinking.” An immediate corollary of this is that the theorems are useless without explicit recognition of the assumptions and complete understanding of the logic involved. with reference) May2004 . Cambridge.
” in Uncertainty in Economic Thought. [Werner Hildenbrand.3 Economics and Sex I close this chapter with the following thought provoking excerpt from Mark Perlman and Charles R. “Varieties of uncertainty. Rather. and Eve represents sexual attraction or desire. its origins were sexual. assumptions on the mathematical representations of the primitive concepts are made explicit and are fully speciﬁed. McCann. perhaps that is because men wrote up the history. that since Eve was the proximate cause of the Fall. para . p 910. Econometric Society Monograph 4. in order to avoid deleterious political correctness (and thereby cut oﬀ provocation and discussion). Mathematical analysis then establishes the consequences of these assumptions in the form of theorems. whose opinion of womankind was problematic) have considered that sexual attraction was in some way even more responsible for the Fall than anything else. Mankind and particularly Womankind1 did not live up to His expectations. 2 What that says about His omniscience and/or omnipotence is. Edward Elgar 1996.Preliminaries 5 Let me brieﬂy recall the main characteristics of an axiomatic theory of a certain economic phenomenon as formulated by Debreu: First. We are told that apparently whatever were God’s expectations. Twenty Papers of Gerard Debreu. some (particularly St Paul. at the very least. He became disappointed with Man. we open consideration of this diﬃcult question by asking what was the point of that Book of Genesis story about the inadequacy of Man. We should add. Cambridge.] 1. page 4. even if economics is not a sexy subject. quoted with omissions. ed. the primitive concepts of the economic analysis are selected. Put crudely. namely a reference to our religious heritage. 1983. Christian Schmidt. and then.. Jr. The problem as perceived As this is an opening paper. let us begin with what was once an established cultural necessity. Adam and Eve were informed 1 Much has been made of the failure of women.2 In any case. Second. the details of which we shall not bore you with. What we have in mind is the Biblical story of the Fall of Man. each one of these primitive concepts is represented by a mathematical object.
kindly note that God forced Adam to testify against himself. particularly as seen by Aquinas and by most economists ever since. and (5) an excessive Faustian curiosity. It is ‘What was God’s punishment for Adam and Eve’s vicarious sin.’ [11] God answered. ‘The woman you gave me for a companion. and production is done mostly by the ‘sweat of the brow’ and the strength of the back. [9] But the Lord God called to the man and said to him. was that man is condemned to live with the paradigm of scarcity of goods and services and with a schedule of appetites and incentives which are. and all of us have been made to suﬀer ever since.’)] ([16] and [17] omitted) [Again. pleading the Fifth won’t do at all. 3 Cf. writing in euphemisms. Microeconomics May2004 that they had ‘fallen’ from Grace. if anything. Genesis. Economics per se.] . What was the punishment? The sin seems to have been something combining (1) an inability to follow precise directions. ‘ Where are you?’ [10] He replied. ‘ Who told you that you were naked? Have you eaten from the tree which I forbade you?’ [12] The man said. (4) an inability to leave well enough alone. and the price of that dependency was to be agreeable to Eve (‘It was really all her fault — I only did what You [God] had laid out for me. From our analytical standpoint there are two crucial questions: 1.’ [Note: The story. We are driven to produce so that we can consume. Who says that the Bill of Rights is an inherent aspect of divine justice? Far from it. and I was afraid because I was naked. as recalled. suggests that Adam was dependent upon Eve (for what?).16. as academic intellectuals. The study of economics — of the production.17.3 (3) a greed involving things (something forbidden) and time (instant gratiﬁcation). for which all mankind suﬀers?’ Purportedly a distinction has been made between what happened to Man and Woman. distribution and even the consumption of goods and services — it follows. Busch. doxical. is the Punishment for Sin. and 2. Naturally. In the more modern terms of William Stanly Jevons. particularly when one could assert that ‘one was only doing what everyone else (sic) was doing. for those civil libertarians amongst us. she gave me fruit from the tree and I ate. 3:912. When Carlyle called Economics the ‘Dismal Science’. (2) a willingness to be tempted. is the result of the Original Sin. ours is a world of considerable pain and a few costly pleasures. at best. But what interests us directly is the second question. ‘I heard the sound as you were walking the garden . we fancy the ﬁfth reason as best. but. he was. and I hid myself. confusing. What was the sin.6 LA. in the Last Judgment. the one clear answer.
Maimonides held that prior to the Fall. In other words. but the lack of knowledge of what one’s preference schedule will do to one’s happiness. If one pursues Maimonides’ line of inquiry. it is another line of analysis. perhaps novel. as such. Maimonides suggested that God’s real punishment was to push man admittedly beyond the limits of his reasoning power. because scarcity. it seems that uncertainty (which is based not on ignorance of what can be known with study of data collection. one writing shortly before Aquinas. generally) knew everything concerning them. can usually be overcome.4 Requisite to the wise use of power is understanding and full speciﬁcation. it was identiﬁed early on by another Aristotelian. after the Fall they only had opinions. Adam and Eve (and presumably mankind. surely tied up with Free Will. We use our reasoning power. and the thinking man ought to be able to handle the situation. Scarcity. may not have been the greatest punishment. 4 [omitted] .Preliminaries 7 But. which we put to you. Scarcity simply means that one has to allocate between one’s preferences. what was lost was any such claim previously held by man to complete knowledge and the full comprehension of his surroundings. but also on ignorance tied to the unknowable) is the real punishment. indeed the greatest punishment. what truly underlies the misery of scarcity is neither hunger nor thirst. For if one had complete knowledge (including foreknowledge) one could compensate accordingly. is more basic. as the paradigm. Insofar as we are aware. What was the greater punishment. Moses Maimonides. to allocate priorities and thereby overcome the greater disasters of scarcity.
or budget set B. such as X = {X1 . . for example. I will use the usual partial derivative notation ∂f /∂x1 = f1 (·). f2 (·). . as in the consumption set X. . .. . f (·) denotes f (x1 . ( ++ are the positive Reals) + ∀ “for all”. as in partial derivatives gradient. N . Capital letters most often denote sets. component wise: ∀n ∈ N : xn < yn ≥ greater or equal (component wise) strictly greater (component wise) strictly preferred to weakly preferred to ∼ indiﬀerent to ∂ partial. . Finally. f (x) = [f1 (·). component wise: xn ≤ yn for all n = 1. . ≤ less or equal to.. for all elements in the set “there exists” or “there is at least one element” ∃ ¬ “not”. strictly less than. . the real numbers. . i. Microeconomics Notation: Blackboard versions of these symbols may diﬀer slightly. . w) = ∂w ∂w ∂w ⇐⇒ “if and only if”. as in x · y = xT y = n xi yi · ∈ “in”. . Generally all small variables (x. May2004 I will not distinguish between vectors and scalars by notation. Note: ∀x ∈ X A is equivalent to {x ∈ X  A}. in the more familiar fashion. .) The context should clarify the usage. the following statement is not true dot product. y. and will often omit the arguments to a function but indicate that there is a function by the previous notation. Dw x(p. Busch. .. superscript denotes dimensionality the nonnegative reals (include 0). X2 . . or negation. the vector of partial derivatives of a function of a vector. x2 .. the vector (matrix) of partial derivatives of a vector of functions: T ∂xN (·) ∂x1 (·) ∂x2 (·) .. Sets of sets are denoted by capital script letters. Xk }. . or “element of” “such that”. fn (·)]T Dx derivative operator. the Reals will be written as IR on the board. .8 LA.e. often written “iﬀ” . . 2 where Xi = {x ∈ N  n xn = i}. if no confusion arises. xn ). p) are column vectors (even if written in the notes as row vectors to save space.
but the two possible approaches are both outlined in the next section. Note that much of economics is concerned with how choice changes as B changes. i ∈ {1. how one can model decision making. . or more precisely. however. A simple description of this set will be advantageous.1 Modelling Choice Behaviour 1 Any model of choice behaviour at a minimum contains two elements: (1) a description of the available choices/decisions. After that.e. Green. the set of amounts of commodities which can be purchased at current prices and income levels. the familiar budget + set for a price taking consumer. where we ask how choice adjusts as income. . . n}. In principle this set can be anything. The ﬁrst part of this is relatively easy. This is the standard question of comparative statics. 2. m or one of the prices. We specify some set B which is the set of (mutually exclusive) alternatives. and thus we most often will have B = {x ∈ n p·x ≤ m}. some additional 1 This material is based on MasColell. chapter 1 9 . Whinston. For a consumer this will usually be the budget set. An extensive analysis of this topic would lead us way oﬀ course. . however.Chapter 2 Review Consumer theory is concerned with modelling the choices of consumers and predicting the behaviour of consumers in response to changes in their choice environment (comparative statics. changes. (2) a description of how/why decisions are made.) This raises the question of how one can model choice formally. pi . i. It does require.. we will return to the standard way in which consumer decision making in particular is addressed.
While that would allow our theory to match any data. This is necessary since in the absence of such a restriction we would be able to specify quite arbitrary choices. In either case we will need to specify what we mean by “rational”. but only what is in principle possible (for example. by virtue of the consumer selecting a when both are available. that is. or consumption set. that is. I will give an abstract description without explicit application to the consumer problem. it would contain 253 Lear Jets even if those are not aﬀordable to the consumer in question. Microeconomics May2004 structure as to what kind of B are possible. More precisely. For our application to the consumer this basically says that we specify the demand functions of the consumer as the primitive notion in our theory. In consumer theory this would be a consumption bundle. for our neoclassical consumer this is B = {x ∈ n p · x ≤ m}. This set does not take into account what is an economically feasible choice. Alternatively. a vector x indicating the quantities of all the goods. it also means that no refutable propositions can be derived. (2). Busch. We normally let X be the set of technically feasible choices. p ∈ S n−1 . b.10 LA. use this approach. since that would be physically impossible. 2. if the consumer’s behaviour reveals to us that he seems to like some alternative a better than another. in the end. this conclusion should not be contradicted in other observations. say. Under this approach a ranking of choices in X is speciﬁed. m > 0 . B. Rationality restrictions can basically be viewed as consistency requirements: It should not be possible to get the consumer to engage in behaviour that contradicts previous behaviour. The way we introduce this restriction into the model will depend on the speciﬁc approach taken.1 Choice Rules Since we will not.) For the second element. which contains all sets B we could consider. but it will not allow the consumer to create time from income.1. So we get a set of sets. we can specify choice rules directly. and then choice for a given choice set B is determined by the rule that the highest ranked bundle in the set is taken. we can specify some more primitive notion of “preferences” and derive the choice for each choice set from those. and thus the theory would be useless. We also + need a description of the things which are assembled into these budget sets. the description of choice itself. We are . For example. m ∈ . there are two fundamental approaches we can take: (1) we can specify the choice for each and every choice set.
d}. c}) = {b}. It assigns a set of chosen elements. Why? In the ﬁrst case. or worse than. While the above criterion is silent on the relation of this choice to C({b. (Remember. b. suppose that X = {a. We call such a ranking a preference 2 Just let x be a and y be b in the above deﬁnition! . then it can never be the case that y is chosen and x is not chosen. which is by far the most common. Also suppose that C({a. for each budget set B we have a “demand function” C(B) speciﬁed directly. b} and {b. y ∈ B and y ∈ C(B ) ⇒ x ∈ C(B ). b. More concisely: If ∃B ∈ B x. The second approach to modelling decision making. c. d}. {b. This is done by means of a choice structure.) In the second case. A choice structure contains a set of nonempty subsets of X. b. c.Review 11 to specify as the primitive of our theory the actual choice for each possible set of choices. denoted by B. In other words. c. b. and that B = { {a. c}. denoted C(B) ⊂ B. b}) = {a}.2 You can see how we are naturally lead to think of one choice being better than. c}. should c be even better than a. after all both where available but only a was picked. any statement about the empty set is true. then b cannot be picked. the fact that C({a.2 Preferences In this approach we specify a more primitive notion. d} }. another even when discussing this abstract notion of direct speciﬁcation of choice rules. For example. So if the choice is over {a.1. For the consumer this would be the collection of all possible budget sets which he might encounter. b}) = {a} has revealed a preference of the consumer for a over b. c. A rational consumer should preserve this ranking in other situations. The notion of rationality we impose on this is as follows: Deﬁnition 1 A choice structure satisﬁes the weak axiom of revealed preference if it satisﬁes the following restriction: If ever the alternatives x and y are available and x is chosen. 2. {a. does this explicitly. that of the consumer being able to rank any two bundles. y ∈ B ⇒ x ∈ C(B) then ∀B ∈ B x. only a or c. a is not an element of both {a. it does rule out the choice C({a. we then specify a choice rule (or correspondence). so the deﬁnition is satisﬁed vacuously. if both x and y are available. For each actual budget set B ∈ B. b}. d}) = {b}.
How is rationality imposed on preferences? Deﬁnition 2 A preference relation is rational if it is a complete. Transitivity could also be a problem. Busch. 64 cans of Beluga caviar. reﬂexive: ∀x ∈ X. y. so that the last bundle in the sequence is preferred to the ﬁrst. among other important implications. x y and y z =⇒ x z. From this notion of weak preference we can easily deﬁne the notions of strict preference (x y ⇔ (x y and ¬(y x))). y ∈ X. On the other hand. 1 Aprilia Mille RS. This. these bundles are very far from the budget set of these consumers. rules out “dutch books” — a sequence of trades in which the consumer pays at each step in order to change his consumption bundle. One problem is that of “just perceptible diﬀerences”. only to return to the initial bundle at the end. either x y or y x. or both. including the ﬁrst. the consumer can be led through a sequence of bundles in which he prefers any earlier bundle to a later bundle. For example.12 LA. It is crucial to economics since it rules out cycles of preference. For most consumers these bundles may be diﬃcult to rank. and 2 bottles of Dom Perignon. it can be shown that it is possible to have real life consumers violate this assumption. reﬂexive. while the other contains 1 Ducati 996R. the statement x y means that “x is at least as good as y”. since they may be unsure about the relative qualities of the items. At this point we might want to consider how restrictive this requirement of rational preferences is. with the interpretation that for any x. complete: ∀x. and 2 bottles of Veuve Cliquot. x x. since what is required is the ability to rank things which are very far from actual experience. the consumer is assumed to be able to rank two consumption bundles which diﬀer only in the fact that in the ﬁrst he is to consume 2 Ducati 996R. For bundles close to being aﬀordable. and indiﬀerence (x ∼ y ⇔ (x y and y x)). This could be a problem. it may be much more reasonable to assume that any two can be ranked against each other. even though it was worse than all preceding bundles. and transitive binary relation. 57 cans of Beluga caviar. Microeconomics May2004 relation which is usually denoted by the symbol . z ∈ X. 1 Airbus 320. Completeness simply says that any two feasible choices can be ranked. y ∈ X. If the changes in the consumption bundle are small enough. The . So. 1 Boeing 737. but which closes up on the ﬁrst. Yet. deﬁne a binary relation on X. transitive ∀x.
and we will assume rational preferences henceforth. which is “better”? How do they compare? Are the two approaches equivalent? Why do economists not just write down demand functions. (x y) ⇔ (u(x) ≥ u(y)). this notion of preferences on sets of choices is also cumbersome to work with for most (but note the beautiful manuscript by Debreu.Review 13 other common way in which transitivity is violated is through the eﬀects of the “framing problem”. While beautiful in it simplicity.3 What gives? If there are two ways to model choice behaviour. In any case. it would be nice if we could use the abundance of calculus tools which have been invented to deal with optimization problems.) 2. Deﬁnition 3 A function u : X → is a utility function representing the preference relation if ∀x. while they are against vaccination programs if the costs are stressed (so many deaths which would not have occurred otherwise. If the beneﬁts are stressed (so many fewer crippled and dead) respondents are in favour.1. Also note that addiction and habit formation commonly lead to what appears to be intransitivity. that the consumer will maximize his utility function over the set of available alternatives. Theory of value). if that is a valid method? To answer these questions we need to investigate the relationship between the two approaches. What is really important for us are the following .) It is worthwhile to note that what looks like intransitivity can quite often be explained as the outcome of transitive behaviour on more primitive characteristics of the problem. but really involves a change in the underlying preferences. see the next section. Thus the notion of a utility function is useful. Choice now is modeled by specifying that the consumer will choose the highest ranked available bundle. A famous example of this are surveys about vaccinations. We will return to utility functions in more detail in the next section. The framing problem refers to the fact that a consumer’s stated preference may depend on the way the question is asked. In particular. choosing the one which leads to the highest value of the utility function (for details. y ∈ X. or (more commonly) preferences which depend on past consumption bundles as well. or alternatively. a thorough discussion of this topic is way beyond these notes.
14 LA. This satisﬁes the weak axiom (vacuously) but implies that a is better than b. C(·)) is a choice structure such that (i) the weak axiom is satisﬁed. The basic problem is that we rarely if ever have collections of budget sets which satisfy this restriction. Basically what goes wrong here is that the set B is not rich enough to restrict C(·) much. C({a. Indeed. the following example shows that the weak axiom is not enough: X = {a. if it were. In other words. Clearly. 3 . b}) = {a}.) If I have a choice structure that satisﬁes the weak axiom.) The following statements can be made: Theorem 1 Suppose is rational. the typical set of consumer budget sets consists of Words such as “clearly”. there would not be a problem as the following theorem asserts: Theorem 2 If (B. rational preferences always lead to a choice structure which is consistent with the weak axiom (and hence satisﬁes the notion of rationality for choice structures. c}. C(B) = {x ∈ Bx y ∀y ∈ B}). not all possible subsets of X. a} }. or at least easily derivable. C(·)) = (B. does there exist a rational preference relation which is consistent with these choices? (The answer will be Maybe. “thus”. Microeconomics May2004 two questions: If I have a rational preference relation. “therefore”. for which we are free to pick rankings. since there could be quite a few combinations of elements for which no choice is speciﬁed by the choice structure. While this is encouraging. B = { {a. b is better than c. since the weak axiom has the ﬂavour of “no contradictions” which transitivity also imposes for . c}) = {b}. b}. {b. and c is better than a. {c. Therefore in general more than one might do the trick. Busch. However. which violates transitivity. etc. a}) = {c}. it is of little use to economists. and (ii) B includes all 2 and 3 element subsets of X. c}.3 without the weak axiom there is no hope. then there exists a rational preference ordering such that ∀B ∈ B. C({c. C({b. and hence. are a signal to the reader that the argument should be thought about and is expected to be known. b.) Can this be reversed? Note ﬁrst that we only need to deal with B ∈ B. {x ∈ Bx y ∀y ∈ B}) satisﬁes the weak axiom. Then the choice structure (B. However. will it generate a choice structure which satisﬁes the weak axiom? (The answer will be Yes.
. colour. and quality are speciﬁed. The next section will review this approach. 5 Given the equilibrium concept of market clearing a consumer’s desired consumption will coincide. . Furthermore. but then we have to specify how price reacts to the consumer’s demands. One is an assumption about what inﬂuence. One way around this problem are the slightly stronger axioms required by Revealed Preference Theory (see Varian’s Advanced Microeconomic Theory for details.) Hence. it is speciﬁed when and where this car is consumable. So we will have. . . The usual assumption is that of price taking. with his actual consumption. That means that not only is it a ‘car’. but in this course we will focus especially on these latter two. . Hence I should really write x = (x1 . no running time backwards. . and the circumstances under which it can be consumed. In previous courses these latter dimensions have been suppressed and the focus was on current consumption without uncertainty. x2 . The diﬀerence between the two concepts. etc) as well as all institutional constraints. (Note that there is a slight diﬀerence between those two statements!) This assumption can be relaxed. in equilibrium. xn ) where each coordinate xi indicates the amount desired/consumed of commodity i. while crucial. is therefore seldom made. . consumption contingent on the state of the world. 4 . the consumer may have on the prices which are charged in the market. It does not include the economic constraints the consumer faces. which is to say that the consumer does not have any inﬂuence on price. or at least how the consumer thinks While I write all vectors as row vectors in order to save space and notation. for example. A key concept to remember is that a “commodity” is a completely speciﬁed good. they are really column vectors.) The consumption set incorporates all physical constraints (no negative consumption.2 Consumer Theory Consumers are faced with a consumption set X ⊂ n which is the set + of all (nonnegative) vectors4 x = (x1 . acts under the assumption of not having any inﬂuence on price. . or consumption if there is an accident versus consumption if there is no accident (i. Often we will have only two commodities to allow simple graphing. The economic constraints come chieﬂy in two forms. today’s consumption versus tomorrow’s. the standard approach to consumer theory employs preferences as the fundamental concept. 2. x2 . xn )T where the T operator indicates a transpose. but its precise type. or more precisely.e. . of course.5 All commodities are assumed inﬁnitely divisible for simplicity. if any.Review 15 nested hyper planes.
What we really mean by that is that consumers have the ability to rank all consumption bundles x ∈ X according to what is termed the consumer’s preferences. Busch. Instead the consumer usually has an endowment — an initial bundle of goods. for example.16 LA. y ∈ X : either x y or y x or both). Note however that for comparative static purposes only true market transactions are important. Microeconomics May2004 that they will react. We denote preferences by the symbol . We will deal with a simple version of that later under the heading of bilateral monopoly. More crucially.) The expression x y is read as “consumption bundle x is at least as good as bundle y”. These preferences are assumed to be: (i) complete (∀x. situations in which the prices vary with the consumer’s actions and the consumer is cognizant of this fact will be modelled using game theory. their consumption choices are limited to economically feasible vectors. To add to the potential confusion. in other words B = {x ∈ X  p · x ≤ p · ω}. y z) ⇒ x z). Then buy the set of ﬁnal goods under the usual budget constraint. Recall that p · x denotes a ‘dot n product’. That simply means that it can be used to compare two vectors (not one. or “x is weakly preferred to y”. which is a binary relation. where there are only two consumers who have to bargain over the price. Here ω ∈ X is the endowment. (iii) transitive (∀x. z ∈ X : (x y.) What this allows us to do is to represent the preferences by a function . The budget constraint then requires that the value of the ﬁnal consumption bundle does not exceed the value of this endowment. You can treat this as a two stage process during which prices stay constant: First. so that p · x = i=1 pi xi . and m is a scalar denoting the consumer’s monetary wealth. (iv) continuous ({x  x y} and {x  y x} are closed sets. (ii) reﬂexive (∀x ∈ X : x x). In all but the most trivial settings. as evidenced by the change in the Slutsky equation when moving to the endowment model (p. consumers cannot spend more money than what they have. not three or more. Here p is a vector of n prices. y. we normally do not actually have money in the economy at all.) The behavioural assumption in consumer theory is that consumers maximize utility. and will choose that bundle which is the highest ranked (most preferred) among all the available options. The left hand side of the inequality therefore gives total expenditure (cost) of the consumption bundle. In other words.46. The set of economically feasible consumption vectors for a given consumer is termed the consumer’s budget set: B = {x ∈ X  p · x ≤ m}. sell all initially held goods at market prices: this generates income of p · ω = m.
α+β In terms of the diagram of the utility function this means that the function has to be increasing. x = y) ⇒ x y) then the function u(·) can also be chosen to be continuous. are said to be strongly monotone if Note in particular that Leontief preferences (‘L’shaped indiﬀerence curves) are monotone but not strictly monotone. Furthermore. i. will represent the same preferences. are meaningless. the curvature (rate of change in the rate of increase) of the function starts to matter. but that the rate of increase. say h(u(·)). a = α . With some further assumptions it will furthermore be continuously diﬀerentiable. that the value of the function evaluated at x is larger than that at y if and only if x is strictly preferred to y. a x 1 x2 (1−a) − 104 . If preferences are also strongly monotonic ((x ≥ y. and we therefore will then be restricted to using positive aﬃne transforms only (things of the form a + bu(·).Review 17 u: N + → . are arbitrary. 1 2 αlnx1 + βlnx2 . Deﬁnition 5 The preferences ∀x ∈ X x ≥ y ⇐⇒ x y. β > 1.. .e. α. any other function which is a monotonic increasing transformation of u(·). h (·) > 0. called a utility function. So. the following functions all represent the same preferences on 2 : xα xβ . or more precisely. Finally a few terms which will arise frequently enough to warrant deﬁnition here: Deﬁnition 4 The preferences are said to be monotone if ∀x ∈ X x >> y ⇐⇒ x y. Put diﬀerently. Once we discuss uncertainty. any two functions that have indiﬀerence curves of the same shape represent the same preferences. and changes in the rate of increase. the labels of indiﬀerence curves. b > 0). for example. This is in marked contrast to what we will have to do later. and that is what we normally assume. as long as they are increasing in the direction of higher quantities. which has the property that u(x) > u(y) ⇐⇒ x y. Note that the utility function u(·) representing some preferences is not unique! Indeed. the “spacing” of indiﬀerence curves.
Deﬁnition 8 Preferences are said to be homothetic if ∀x. and homotheticity says that if we scale the distance from the origin by the same factor on these two rays. u is quasiconcave if u(λx + (1 − λ)y) ≥ min{u(x).18 LA. α ∈ ++ . we may scale them by the same factor. is the deﬁnition of a quasiconcave function. In other words. This is particularly clear in 2 . However. Microeconomics Deﬁnition 6 The preferences are said to be convex if ∀x ∈ X the set {y ∈ Xy x} is convex. u(y)}. homothetic preferences and quasilinear preferences. we will intersect the same indiﬀerence curve on both. A bundle x can be + viewed as a vector with its foot at the origin and its head at the coordinates (x1 . x2 ). the set of bundles weakly preferred to a given bundle is convex. 1]. which. ∀λ ∈ [0. Two kinds of preferences for which this is true exist. These points deﬁne two rays from the origin. The utility function which goes along with such preferences is also called homothetic: . and the resulting bundles will also be indiﬀerent. (Which exists due to our assumption of continuity of . Note also that the lower boundary of an upper contour set is what we refer to as an indiﬀerence curve. Now consider some indiﬀerence curve and 2 points on it. Busch. y ∈ X. Applied to the utility function u(·) representing this means that the upper contour sets of the utility function are convex sets. if you recall. α > 0 deﬁnes a ray through the origin through that point.) Special Utility Functions By and large utility functions and preferences can take many forms. Put diﬀerently. Alternatively. αx. for any two bundles x and y which are indiﬀerent to one another. of particular importance in modelling applications and in many theories are those which allow the derivation of the complete indiﬀerence map from just one indiﬀerence curve. Deﬁnition 7 A function u : N → is quasiconcave if its upper contour sets are convex. x ∼ y ⇐⇒ αx ∼ αy. which is why we had to make that particular assumption. indiﬀerence curves are related to one another by “radial blowup”. May2004 In other words.
ˆ N −1 + → is called quasilinear if . How does this assumption aﬀect indiﬀerence curves? Recall that the key about indiﬀerence curves is their slope. Therefore. But the ratio x2 /x1 is constant along any ray from the origin. α > 0. and l2 (x) = ˆ (k). the MRS of a homothetic utility function is constant along any ray from the origin! You can verify this property for the CobbDouglas utility function. Now let u(x) be homothetic. 0. In terms of the diagram in 2 . So what you ask? Recall that Deﬁnition 10 A function h : h(λx) = λk h(x). for example. N → is homogeneous of degree k if ∀λ > 0. x2 /x1 ) = x1 ˆ l(k). for simplicity) l(x1 . we require that indiﬀerence between 2 consumption bundles is maintained if an equal amount of the numeraire commodity is added (or subtracted) from both: Indiﬀerence curves are related to one another via translation along the numeraire axis! Deﬁnition 12 A utility function u : × it can be written as u(x) = x1 + u(x−1 ). 0. The other class of preferences for which knowledge of one indiﬀerence curve is enough to know all of them is called quasilinear. . or MRS (Marginal Rate of Substitution. that is u(x) = h(l(x)) where h (·) > 0 and l(·) is homogeneous of degree 1. Therefore (in two dimensions.Review 19 Deﬁnition 9 A utility function u : N → is said to be homothetic if u(·) + is a positive monotonic transformation of a function which is homogenous of degree 1. not a function of the absolute amounts. if ∀x. and l l therefore the marginal rate of substitution of a homothetic utility function is only a function of the ratio of the consumption amounts. The slope. y ∈ × + −1 .) is u1 (x)/u2 (x). But then l1 (x) = ˆ l(k)+ˆ (k)k. x2 ) = x1 l(1. N Deﬁnition 11 The preference deﬁned on × + −1 is called quasilinear N with respect to the numeraire good 1. Here e1 = (1. . .) is the base vector of the numeraire dimension. not the label attached to the level of utility. Then we get u1 (x) h (l(x))l1 (x) l1 (x) = = u2 (x) h (l(x))l2 (x) l2 (x) by the chain rule. where k = x2 /x1 . x ∼ y ⇐⇒ x + αe1 ∼ y + αe1 .
Busch. . but if we know that preferences are convex (the utility function is quasiconcave) and that the budget is (weakly) concave. The easiest way to solve this is to realize that (i) normally all commodities are goods. Note that the MRS of a quasilinear function is independent of the numeraire commodity: u1 (x) 1 = . n ∂xi (F OCλ ) Lλ = m − p · x = 0 Of course. since they depend only on observable variables (prices and income.1 Utility Maximization The problem addressed in consumer theory is to predict consumer behaviour for a price taking consumer with rational preferences. which we later derive from the expenditure minimization problem.t p · x ≤ m}. that is. demand is derived mathematically from a constrained optimization problem: maxx∈X {u(x) s. or ordinary. If these conditions In fact. These demands correspond to the observable demands by consumers.20 LA. Marshallian. wherefore we can replace the inequality with an equality constraint.) This is in contrast to the Hicksian demand functions. there are also second order conditions. . This is weaker than monotonicity since no direction is assumed.) 2. we won’t have to worry about them. which give us the desired consumption bundle of the consumer for all prices and income. consumer preferences are monotonic. 6 . u2 (x) u (x2 ) ˆ so that the indiﬀerence curves all have the same slope along a line parallel to the numeraire axis (good 1 in this case. Local Nonsatiation assumes that for any bundle there exists a strictly preferred bundle arbitrarily close by.6 (ii) that normally we have assumed a diﬀerentiable utility function so that we can now use a Lagrangian: maxx∈X L = u(x) + λ(m − p · x) ∂ (F OCi ) Li = u(x) − λpi = ui (·) − pi λ = 0 ∀i = 1 . Local Nonsatiation is enough to guarantee that the budget is exhausted. The consumer’s behaviour is often summarized by the Marshallian demand functions.2. Microeconomics May2004 √ For example the functions u(x) = x1 + x2 and u(x) = x1 + lnx2 are quasilinear with respect to commodity 1.
so that the level of the multiplier gives us the “marginal utility of budget”. and the multiplier λ. The last equation (F OCλ ) states that the optimum is on the budget constraint. y) = u.) That means a variety of things. 7 . Requiring that this ratio take on some speciﬁc value thus implicitly deﬁnes a relationship between the xi . any two of the other ﬁrst order conditions can be combined in one of the following ways: ui (·) = λpi uj (·) = λpj ⇒ pi ui (·) = uj (·) pj ⇒ ui (·) uj (·) = pi pj These have the economic interpretation that the (negative of the) slope of the indiﬀerence curve (the MRS) is equal to the ratio of prices. Also note that λ∗ = ui (·)/pi . This is due to the fact that this locus would be traced out if we were to keep prices constant but increased income. This locus is termed the Income Expansion path. The same works for the budget. Since they are identities. this implies that a tangency of an indiﬀerence curve to the budget must occur at the optimum.Review 21 are not obviously satisﬁed in a problem you will have to check the second order conditions! The above ﬁrst order conditions are n + 1 equations which have to hold as identities at the optimum values of the n + 1 choice variables (the n consumption levels of goods. But since the slope of the budget is the negative of the ratio of prices. Now note that the If you ever get confused about which term “goes on top” remember the following derivation: An indiﬀerence curve is deﬁned by u(x.7 The second form tells us that the marginal utility gained from the last dollar’s worth of consumption must be equalized across all goods at the optimum. In other words ui (·)/uj (·) is a function of x. another useful thing to keep in mind. Taking a total diﬀerential we obtain ux (·)dx + uy (·)dy = 0. and hence dy/dx = −ux (·)/uy (·). This can also be seen by considering the fact that ∂L/∂m = λ. As a last resort you can verify that it is the term for the horizontal axis which is “on top”. solving for the demands then boils down to solving for the intersection of these lines. most importantly that the implicit function theorem can be applied if need be (and it is needed if we want to do comparative statics!) Let us interpret the above ﬁrst order conditions.) Since the budget line provides another locus of consumption bundles. This tangency condition thus translates into a locus of consumption bundles where the slope of the indiﬀerence curve has the given value. The marginal rate of substitution term depends on the amounts of the goods consumed. thus shifting the budget out (in a parallel fashion. namely those which are aﬀordable at a given income level.
22 LA. on the vertical axis it is ﬂatter.1: Consumer Optimum: The tangency condition income expansion path for both homothetic and quasilinear preferences is linear. which follows from the condition on per dollar marginal utilities. as for quasilinear preferences) then we really need to consider “complementary slackness conditions”. if the indiﬀerence curve or the budget have a kink. we have the usual picture of a set of indiﬀerence curves superimposed on a budget set.1. If we are at a corner solution (and this might happen either because our budget ‘stops’ before it reaches the axis or because our indiﬀerence curves intersect an axis. and their multipliers. The optimum occurs on the highest indiﬀerence curve feasible for the budget. but we will often loosely refer to such a case as a “tangency” anyways. We will not bother with that. in geometric terms. In terms of a diagram. the usual characterization of “tangency to budget” is only valid if we are not at a corner solution and can compute the relevant slopes. Finally. since.. i.e. That means that on the horizontal axis the indiﬀerence curve is steeper than the budget. The key thing to remember. we are looking for the intersection of a line and a (hyper) plane. Busch. Basically we will have to consider all the nonnegativity constraints which we have currently omitted. given in Figure 2. Microeconomics good 2 20 May2004 15 10 5 0 0 5 10 15 20 good 1 Figure 2. where there is a tangency between the budget and an indiﬀerence curve. however. Hence solving for demands is particularly simple. then no tangency exists in strict mathematical terms (since we do not have diﬀerentiability). In this course such cases will be fairly obvious and can be solved by inspection. . is that if we are at a corner then the indiﬀerence curve must lie outside the budget. Note a couple of things: For one.
m) derived from a continuous utility function representing rational preferences which are monotonic satisﬁes: homogeneous of degree zero: x(p. or more compactly p · Dp x(p. αm). Both are derived by simple diﬀerentiation of Walras’ Law.Review 23 good 2 20 15 10 5 0 0 5 10 15 20 good 1 good 2 20 15 10 5 0 0 5 10 15 20 good 1 Figure 2. The other result of solving the optimization problem is the indirect utility function. the above problem gives rise ﬁrst of all to the (ordinary) demand functions. . Engel Aggregation refers to the fact that a change in consumer income must all be spent.2: Corner Solutions: Lack of tangency Properties As mentioned. Walras’ Law: p · x(p. m. α > 0. Cournot Aggregation refers to the fact that a change in a price may not change total expenditure. ∀p. namely the price vector and the income: x(p. m) = 1. m) = m. m) = argmaxx∈X { u(x) + λ(m − p · x) }. These tell us the relationship of demand behaviour to the parameters of the problem. but the function is nevertheless useful for some applie cations (duality!) What properties can we establish for these? Two important properties of the (ordinary) demand functions are Theorem 3 The (ordinary) demand x(p. m) = x(αp. Walras’ Law has two important implications known as Engel and Cournot Aggregation. m > 0. Deﬁnition 13 Engel Aggregation: n ∂xi (·) i=1 pi ∂m = 1. m) = maxx∈X { u(x) + λ(m − p · x) }. This is useless per s`. ∀p >> 0. which tells us the highest level of utility actually achieved: v(p.
) Sometimes it is more useful to write these in terms of elasticities (since that is what much of econometrics estimates)8 . m) = −1 · ∂v(p. the income derivatives can take any sign. Deﬁne si (p. a technique often employed in trade theory. . . m)/∂pi ≤ 0. or can be derived explicitly by using the ﬁrst order conditions. m)/m. m) = 1 and i=i si (p. m) + x(p. it is not necessarily true that (ordinary) demand curves slope downwards (∂x(p. m) i=i im (p. m) = 0 . m) = 0. Recall that often the logarithms of variables are regressed. and we call a good normal if ∂x(p.) Similarly. n×1 This is “just” an application of the envelope theorem. The reason this is useful is that it is mathematically much easier to diﬀerentiate. We will not bother here with the properties of the indirect utility function aside from providing Roy’s Identity: x(p. m) ik (p. . the expenditure share of commodity i (note how for a CobbDouglas utility function this is constant!) Let ia (·) denote the elasticity of demand for good i with respect to the variable a. (Here 0T is a row vector of zeros. ∀k = 1. Busch. . Microeconomics May2004 Deﬁnition 14 Cournot Aggregation: n ∂xi (·) T T i=1 pi ∂pk + xk (p. note that we have no statements about demand other than the ones above. Hence a problem can be presented by stating an indirect utility function and then diﬀerentiating to get demands. m)/∂m p v(p. m) = 0.24 LA. m) = pi xi (p. It can be shown that any function having the the requisite properties can be the indirect utility function for some consumer. n. m)/∂m ≤ 0. m)i + sk (p. A coeﬃcient on the right y hand side thus represents dlnx = dx x dlny dy 9 In graduate courses the properties of the indirect utility function get considerable attention for this reason. m)/∂m ≥ 0. 8 . or p · Dp x(p. m) = − ∂v(·)/∂pl ∂v(·)/∂m . Then the above aggregation laws can be expressed as n n si (p. In particular.9 Finally. compared to solving L + 1 nonlinear equations. while we call it inferior if ∂x(p.
this means that the matrix of second partials. this matrix is symmetric. Also. Basically. indicating that higher expenditure (income) would be needed to reach that particular utility level. Dp p e(p. The optimal choices for this problem. We now ﬁx the indiﬀerence curve we want to achieve and minimize the budget subject to that: e(p. increasing in u and nondecreasing in p. Clearly. But it is equal to the matrix of ﬁrst partials of the Hicksian demands. So. So what. since the expenditure function is concave in prices. For concavity in p consider the following: Fix an indiﬀerence curve and 2 tangent budget lines to it. These are unobservable (since they depend on the unobservable utility level) but extremely useful.Review 25 2. then e(p. which therefore is also symmetric and negative semideﬁnite! . A linear combination of prices at the same income is another budget line which goes through the intersection point of the two original budgets and has intermediate axis intercepts. u). you ask? Consider the following: Theorem 4 If u(x) is continuous and represents monotonic. but they are quite intuitive. continuous in u and p. by Young’s Theorem. u The function which tells us the least expenditure for a given price vector and utility level is called the expenditure function. u). are the Hicksian demand functions. rational preferences and if p >> 0. concave in p. u) = minx∈X { p · x + λ(¯ − u(x)) }. The reason is that the Hicksian demand for good i. hi (p. u) is homogeneous of degree 1 in prices. is negative semideﬁnite.2. We will not prove these properties. u h(p.2 Expenditure Minimization and the Slutsky equation Another way to consider the consumer’s problem is to think of the least expenditure needed to achieve a given level of utility. u) = argminx∈X { p · x + λ(¯ − u(x)) }. It follows that the ﬁrst derivative of the Hicksian demand function with respect to a price is the second partial derivative of the expenditure function. it cannot touch or intersect the indiﬀerence curve. is (by the envelope theorem) the ﬁrst partial derivative of the expenditure function with respect to p i . this reverses the process from above.
e(p. Busch. The equality then implies that the matrix of corresponding left hand sides. is also symmetric and negative semideﬁnite. u)) = hi (p. We proceed as follows: The fact that we are talking about two ways to look at the same optimal point means that xi (p. u). u) = + . As mentioned. u)) ∂hi (p. So why should we be excited about having strong properties for them. u)) ∂hi (p. e(p. Microeconomics May2004 Why is this useful information? It means that Hicksian demand curves do not slope upwards. Therefore ordinary demands can slope upward if there is a large enough negative income eﬀect — which means that we need an inferior good. u)) xj (p. e(p. u) xj (p. where we recognize the fact that the income and expenditure level must coincide in the two problems. the crossprice eﬀects of any two goods are equal. the Hicksian demand curves are unobservable. termed the Slutsky matrix. since negative semideﬁniteness requires negative main diagonal entries. e(p. ∂pj ∂m ∂pj ∂pj and using the envelope theorem on the deﬁnition of the expenditure function as well as utilizing the original equality we simplify this to ∂xi (p. Reordering any such typical element we get the Slutsky equation: ∂xi (p. Hicksian demands are homogeneous of degree zero in prices. u)) ∂e(p. This is a statement which we could not make about ordinary demand curves. Now taking a derivate of both sides we get ∂xi (p. if they can never be tested? One key reason is the Slutsky equation. which links the unobservable Hicksian demands to the observable ordinary demands. u)) ∂xi (p. u) ∂hi (p.26 LA. e(p. e(p. since it requires that the eﬀect of a change in the price of gasoline on bananas is the same as the eﬀect of a change in the price of bananas on gasoline. Also. . Finally. This certainly is unexpected. u) + = . u)). ∂pj ∂m ∂pj The right hand side of this is a typical element of the symmetric and negative semideﬁnite matrix of price partials of the Hicksian demands. u)) ∂xi (p. e(p. = − ∂pj ∂pj ∂m This tells us that for any good the total response to a change in price is composed of a substitution eﬀect and an income eﬀect. u) ∂xi (p. e(p.
no consumer ﬁnds it impossible to trade the amounts desired at the announced market prices. l = 1. the pure exchange economy.Review 27 It bears repeating that homogeneous demands satisfying Walras’ Law with a symmetric. In any case.3 General Equilibrium What remains is to model the determination of equilibrium prices. 2. it must be true that. Instead a “Walrasian auctioneer” is imagined. not stop us from deﬁning what an equilibrium price is! At its most cynical. the “tˆtonnement” process. This does. of course.1 Pure Exchange Consider an economy with two goods. i = 1.) Of course. a key requirement of any equilibrium price vector will be that all economic agents can maximize their objective functions and carry out the resulting plans. The total endowment of good i in the economy is the . 2 and preferences j on which can be represented by some utility function 2 uj (x) : + → . 2. Nothing more can be said unless one is willing to assume particular functional forms. 2. negative semideﬁnite Slutsky matrix is all that our theory can say about the consumer. in a production model) takes prices as given. Since in the standard model every consumer (and every ﬁrm. equilibrium can be viewed simply as a consistency condition which makes our model logically consistent. If not. ω2 ) of the two goods. one producer economy. Since aggregate demands and supplies are the sum of individual demands and supplies it follows that aggregate demand and supply must balance in equilibrium — all markets must clear! In what follows we will only review 2 special cases of general equilibrium. That is. who somehow announces prices. those with excess supply a price decrease. there really is no model of equilibrium price determination. In particular. and two consumers. This is not really done in economics. and checks if all markets clear at those prices. l l Each consumer has an initial endowment ω l = (ω1 . in equilibrium. The story (and it is nothing more) is that the auctioneer announces prices. however.3. a all agents other than the auctioneer must be unaware of this process somehow — otherwise they might start to misstate demands and supplies in order to aﬀect prices. 2. no trades occur but the prices are adjusted (in the way ﬁrst suggested in Econ 101: markets with excess demands experience a price increase. there is nobody to set prices. and the one consumer.
both a consumer’s budget line and her indiﬀerence curves can “leave the box”. Suppose we measure good 1 along the horizontal axis and good 2 along the vertical axis. deﬁned by p · x = p · ω 1 passes through both the endowment point and the allocation. which here is all of 2 .28 LA. Microeconomics May2004 1 2 sum of the consumers’ endowments. Hence any price vector deﬁnes a line through the endowment point which divides the feasible allocations into the budget sets for the two consumers.) Also note that a budget line for consumer 1. and ω1 = ω2 + ω2 units of good 2. But if we consider consumer 2’s budget line at these prices we ﬁnd that it passes through the same two points and has. the same slope. Feasibility is a problem for the economy. Furthermore.) Note that both consumer’s indiﬀerence curves are not limited to lie within the box. Then the feasible allocations lie anywhere in the rectangle formed by the origin and the point (ω1 . These budget sets only have the budget line in common. measured from the usual origin for consumer 1 and + measured down and to the left from ω for consumer 2! (More bluntly. only that consumer 2’s consumption set is measured from the top right origin (at ω in consumer 1’s coordinates. x1 ) in 1 2 some feasible allocation. Indeed. then for any consumption point x we can read oﬀ consumer 2’s consumption by measuring from the right top ‘origin’ (the point ω. ω2 ). 2. As a ﬁnal piece. Note that feasibility implies that if consumer 1 consumes (x1 . x2 ) is an assignment of a nonnegative consumption vector xl ∈ 2 to each consumer. not the consumer. the consumer is assumed to be unaware of the actual amount of the goods in the economy. if we measure consumer 1’s consumption from the usual origin (bottom left). ω2 − x1 ). We can now deﬁne the following: Deﬁnition 15 An allocation x = (x1 . measured from consumer 2’s origin on the top right. then consumer 2 must consume (ω1 − x1 . Busch. and has a slope of p1 /p2 . 2 1 This fact allows us to represent the set of feasible allocations by a box in 2 . Deﬁnition 16 An allocation is feasible if 2 l=1 xl = i 2 l=1 l ωi for i = 1. Hence each consumer’s budget and indiﬀerence maps are deﬁned in the usual fashion over the respective consumer’s consumption set. we may represent each consumer’s preferences by an indiﬀerence map in the usual fashion. so there are ω1 = ω1 + ω1 units of good 1 2 1.) .
x2 }.3 represents an example of such a pure exchange economy. or Pareto optimality. Here we have assumed that u1 (x) = min{x1 . ω 1 = (10. 10). ω 1 = (10. while u2 (x) = x1 + x2 . Deﬁnition 17 A feasible allocation x is Pareto Eﬃcient if there does not exist a feasible allocation y such that y l x. Moreover. 5). Weak Pareto eﬃciency requires that no feasible allocation exist which makes all consumers better oﬀ. 15 rr rr d rr rr d d 2 x1 ' d d d d d c d d d O2 x2 2 d d d rr d d d rr d d d rr d d d p r rr d d s 5 ω dd d d rr x1 T d drr d 2 d d rrd d d drr d d d r E d d 20 O1 10 x1 1 d d Figure 2. ∀j. Note that these deﬁnitions can be tricky to apply since an allocation is deﬁned as eﬃcient if something is not true. and ∃j y l x.3: An Edgeworth Box Aside of the notion of feasibility the other key concept in general equilibrium analysis is that of Pareto eﬃciency. A slightly weaker criterion also exists to deal with preferences that are not strictly monotonic: Deﬁnition 18 A feasible allocation x is weakly Pareto Eﬃcient if there does not exist a feasible allocation y such that y l x. In words. Pareto eﬃciency (PO) requires that no feasible alternative exist which leaves all consumers as well oﬀ and at least one better oﬀ.Review 29 Figure 2. there are inﬁnitely . ∀j.
Neither is it possible to make both better oﬀ. in fact. A brute force approach therefore is not the smart thing to do. Does there exist a feasible alternative which would make one consumer better oﬀ and not hurt the other? The consumers’ indiﬀerence curves through the allocation ω will tell us allocations for each consumer which are at least as good. 4)). Hence none of those can be PO either.3. The axes of consumer 1 are no problem. those above better oﬀ. (9. The allocations above the kink line are just mirror images. Busch. 11). Allocations below a consumer’s indiﬀerence curve make her worse oﬀ. say (ω 1 . Hence these allocations are weakly PO and PO. For such allocations the weakly preferred sets of the two consumers do not intersect. That leaves the kink line itself.4: Pareto Optimal Allocations d x1 1 10 d d 20 d Note that any allocation in the triangular region which contains allocations above both indiﬀerence curves will do as the alternative that makes ω not PO (and indeed not weakly PO either. they are not PO either. In Figure 2. But how about points such as q. which lie on the top boundary of the . Microeconomics May2004 many other feasible allocations for any given one we wish to check. with allocations such as z = ((11. hence it is not possible to make one better oﬀ while not reducing the utility of the other. they give rise to a diagram just like that for ω. ω 2 ) in Figure 2.) Next.4 these indiﬀerence curves have been drawn in. Instead we can proceed as follows: Suppose we wish to check for PO of an allocation.30 LA. 2 x1 ' s d q d 15 d d d d d O2 d c d d d d ds z d d x2 2 d d d d d 5 x1 T 2 E ds ωd d d d d d d d d d O1 d Figure 2. we need to consider the boundaries separately. Finally. note that ω was in no way special: all allocations below the “kink line” of consumer 1 would give rise to just the same diagram.
Review 31 box to the right of the intersection with the kink line? For such points both consumers cannot be made better oﬀ. an allocation is Pareto optimal if the consumers’ marginal rates of substitution equal. We are now ready to deﬁne equilibrium and to see how it relates to the above diagram. 1 2 1 1 1 2 i You can see from the above example that the diﬀerence between PO and weak PO can only arise for preferences which have vertical or horizontal sections that could coincide with a boundary. x1 ≤ 15. since there exists no feasible allocation which makes consumer 1 better oﬀ. Deﬁnition 19 A Walrasian (competitive) equilibrium for a pure exchange economy is a price vector p∗ and allocation x∗ such that (1) (optimality) for each consumer l. that is. This deﬁnition is the most basic we can give. If all preferences are strictly monotonic. Finally note that for convex preferences and diﬀerentiable utility functions the condition that the weakly preferred sets be disjoint translates into the familiar requirement that the indiﬀerence curves of the consumers must be tangent. 2}. However. l = 1. Demand means aggregate demand. Thus. the allocation q is weakly PO. j (xi ) = j ωi . We therefore can deﬁne equilibrium in the following way: . The set of Pareto optimal allocations in Figure 2. the sum of all consumers’ demands. Theorem 5 All Pareto Optimal allocations are weakly Pareto optimal. For example. that is all allocations 1 {x ∈ 4 x1 = x1 . then all weakly Pareto optimal allocations are also Pareto optimal. But supply in this case is the sum of endowments. x2 = 15 − x2 . you may remember from introductory economics that a general equilibrium is achieved when “demand equals supply”. so q is not PO in the strict sense. But a consumer’s demand has been deﬁned as a utility maximizing bundle at a given price — hence the notion of demand includes the optimality condition above. xl ≥ 0 ∀i. and at the same time includes all others.4 therefore is the kink line of consumer 1. x2 = 20 − x1 . (xl )∗ l xl ∀xl p∗ xl = p∗ ω l . l ∗ l (2) (feasibility) for each good i. by moving left we can make consumer 2 better oﬀ without reducing the utility of consumer 1. that is.
Busch. This is simpliﬁed by Walras’ Law. Deﬁnition 21 The oﬀer curve is the locus of all optimal consumption bundles (in X) as price changes: {x ∈ X∃p ∀y with p · y = p · ω. Let z(p) denote the aggregate excess demand vector: z(p) = j (xl (p) − ω l ). p · x = p · ω l } }.32 LA. requiring that each consumer l’s demands exhaust the budget. and depending on the setting one may be more convenient than another. x (p) = j xi (p ) = argmaxx {uj (x) s. Microeconomics May2004 Deﬁnition 20 A Walrasian (competitive) equilibrium for a pure exchange economy is a price vector p∗ and allocation x∗ such that x∗ = x(p∗ ). lying on both oﬀer curves. feasibility is also guaranteed. Deﬁnition 22 Walras’ Law: p · z(p) = 0 ∀p ∈ n +. if n − 1 markets clear This argument assumes as given that oﬀer curves cannot intersect outside the feasible set. and for each consumer l. p · xl = p · ω l }.t. For an Edgeworth box we can also deﬁne equilibrium in yet a third way — as the intersection of consumers’ oﬀer curves. summing across consumers the excess demand must equal zero for each good. and p · x = p · ω} or alternatively {x ∈ X∃p x = argmax{uj (x) s. Then we have. x j y. Why is this so? It follows from Walras’ Law for each consumer. only drawn in commodity space.11 that is. and hence the value must be zero. not commodityprice space. summing across all consumers the aggregate value of excess demand must also be zero. l l l ∗ j ωi . Since oﬀer curves cannot intersect outside the Edgeworth box. Why? You should be able to argue/prove this! 11 Yet another deﬁnition of a Walrasian equilibrium is z(p) = 0! 10 . Hence. for each good i. is optimal for both consumers. In an equilibrium aggregate demand is equal to aggregate supply. Note how the notion of an oﬀer curve incorporates optimality. by deﬁnition. should both consumers have diﬀerentiable utility functions we can normally most easily ﬁnd demands. so that the total value of excess demand for a consumer is zero. so that the intersection of consumers’ oﬀer curves. and then set demand equal to supply. For example. An oﬀer curve is the analogue of a demand curve. Hence.t. for all prices.10 We thus have diﬀerent ways of ﬁnding an equilibrium allocation in a pure exchange economy.
Review 33 the value of the excess demand of the nth market must be zero, and this can only be the case if either the price of the good is zero or the actual aggregate excess demand itself. Thus it suﬃces to work with n − 1 markets. The other market will “take care of itself.” In practice we are also free to choose one of the prices, since we have already shown that (Walrasian) demand is homogeneous of degree zero in prices for each consumer (remember that ml = p·ω l here), and thus aggregate demand must also be homogeneous of degree zero. So, suppose the economy consists of two CobbDouglas consumers with 2 2 1 1 parameters α and β and endowments ((ω1 , ω2 ), (ω1 , ω2 )). Demands are αp · ω 1 (1 − α)p · ω 1 , p1 p2 , and βp · ω 2 (1 − β)p · ω 2 , p1 p2 .
Hence the equilibrium price ratio p1 /p2 is
1 2 αω2 + βω2 p = . 1 2 (1 − α)ω1 + (1 − β)ω1 ∗
The allocation then is obtained by substituting the price vector back into the consumer’s demands and simplifying. On the other hand, if we have nondiﬀerentiable preferences it is most often more convenient to use the oﬀer curve approach. For example, a consumer with Leontief preferences will consume on the kink line no matter what the price ratio. If such a consumer where paired with a consumer with perfect substitute preferences, such as in Figure 2.3, we know the price ratio by inspection to be equal to the perfect substitute consumer’s MRS (in Figure 2.3 that is 1.) Why? Since a prefect substitute consumer will normally consume only one of the goods, and only consumes both if the MRS equals the price ratio. The allocation then can be derived as the intersection of consumer 1’s kink line with the indiﬀerence curve through the endowment point for consumer 2: in Figure 2.3 the equilibrium allocation thus would be (x1 , x2 ) = ((7.5, 7.5), (12.5, 7.5)).
2.3.2
A simple production economy
The second simple case of interest is a production economy with one consumer, one ﬁrm and 2 goods, leisure and a consumption good. This case is also often used to demonstrate the second welfare theorem in its most simple
34 LA. Busch, Microeconomics
May2004
setting. Recall that the second welfare theorem states that a Pareto eﬃcient allocation can be “decentralized”, that is, can be achieved as a competitive equilibrium provided that endowments are appropriately redistributed. Let us therefore approach this problem as is often done in macro economics. Consider the social planner’s problem, which in this case is equivalent to the problem of the consumer who owns the production technology directly. The consumer has preferences over the two goods, leisure and consumption, denoted by l and c respectively, represented by the utility function u(c, l). It has the usual properties (that is, it has positive ﬁrst partials and ¯ is quasiconcave.) The consumer has an endowment of leisure, L. Time not ¯ − l, and the consumption good c can consumed as leisure is work, x = L be produced according to a production function c = f (x). Assume that it exhibits nonincreasing returns to scale (so f (·) > 0; f (·) ≤ 0.) The consumer’s problem then is ¯ maxc,l {u(c, l) s.t. c = f (L − l) } or, substituting out for c ¯ maxl u(f (L − l), l). The FOC for this problem is −uc (·)f (·) + ul (·) = 0, or more usefully, f (·) = ul (·)/uc (·). This tells us that the consumer will equate the marginal product of spending more time at work with the Marginal Rate of Substitution between leisure time and consumption — which measures the cost of leisure time to him. Now consider a competitive private ownership economy in which the consumer sells work time to a ﬁrm and buys the output of the ﬁrm. The consumers owns all shares of the ﬁrm, so that ﬁrm proﬁts are part of consumer income. Suppose the market price for work, x, is denoted by w (for wage), and the price of the output c is denoted by p. The ﬁrm’s problem is maxx pf (x)−wx with ﬁrst order condition pf (x∗ ) = ¯ w. The consumer’s problem is maxc,l {u(c, l) s.t. π(p, w) + pc = w(L − l) }. Here π(p, w) denotes ﬁrm proﬁts. The FOCs for this lead to the condition ul (·)/uc (·) = w/p. Hence the same allocation is characterized as in the planner’s problem, as promised by the second welfare theorem. (Review Problem 12 provides a numerical example of the above.) Note that in this economy there is a unique Pareto optimal point if f (·) < 0. Since a competitive equilibrium must be PO, it suﬃces to ﬁnd the PO allocation and you have also found the equilibrium! This “trick” is often exploited in macro growth models, for example.
Review 35
2.4
Review Problems
It is useful exercise to derive both kinds of demand for the various functions you might commonly encounter. The usual functions are (1−α) , α ∈ (0, 1); CobbDouglas: u(x1 , x2 ) = xα x2 1 QuasiLinear: u(x1 , x2 ) = f (x1 ) + x2 , f > 0, f < 0; Perfect Substitute: u(x1 , x2 ) = ax1 + x2 , a > 0; Leontief: u(x1 , x2 ) = min{ax1 , x2 }, a > 0; “Kinked”: u(x1 , x2 ) = min{ax1 + x2 , x1 + bx2 }, a, b > 0. The ﬁrst thing to do is to determine how the indiﬀerence curves for each of these look like. I strongly recommend that you do this. Familiarity with the standard utility functions is assumed in class and in exams. Budgets are normally easy and are often straight lines. However, they can have kinks. For example, consider the budget for the following problem: A consumer has an endowment bundle of (10, 10) and if he wants to increase his x2 consumption he can trade 2 x1 for 1 x2 , while to increase x1 consumption he can trade 2 x2 for 1 x1 . Such budgets will arise especially in inter temporal problems, where consumers’ borrowing and lending rates may not be the same. You can now combine any kind of budget with any of the above utility functions and solve for the income expansion paths (diagrammatically), and the demands, both ordinary and Hicksian. You may also want to refresh your knowledge on (price)oﬀer curves. This is the locus of demands in (x1 , x2 ) space which is traced out as one price, say p1 , is varied while p2 and m stay constant. Question 1: Let X = {x, y, z, w} and let (B, C(·)) be a choice structure with B = {{x, y, z}, {x, z, w}, {w, z, y}, {x}, {y, w}, {z, x}, {x, w}}. a) Provide a C(·) which satisﬁes the weak axiom. b) Does there exist a preference relation on X which rationalizes your C(·) on B? c) Is that preference relation rational? d) Given this B and some choice rule which satisﬁes the weak axiom, are we guaranteed that the choices could be derived from a rational preference relation? e) Since you probably ‘cheated’ in part a) by thinking of rational preferences ﬁrst and then getting the choice rule from that: write down a choice rule C(·) for the above B which does satisfy the weak axiom but can NOT be rationalized by a preference relation.
Job 1 pays $12 per hour for the ﬁrst 6 hours and $16 per hour for the next 2 hours. what is the budget set and its boundary in the following cases (you may ¯ want to provide a well labelled diagram.) Why does it (does it not) matter? d) What if instead there are four jobs. (iii) paying $8 per hour for up to 2 hours and (iv) paying $14 per hour for up to 6 hours? You may work any combination of jobs. 72) and the price vector is p = (3. c) Suppose we drop the restriction that you have to work one job full time before you can work the other. for example. consisting of endowed income and work ¯ income. job (ii) paying $16 per hour for up to 2 hours. each up to its maximum number of hours.36 LA. so a consumption bundle can be denoted by (l. 5x1 + x2 }. If the consumer’s endowment is ω = (412.) Also draw the price oﬀer curve for p1 variable. (So you could work one job for 5 hours and the other for 3. leisure and wealth/aggregate consumption. with job (i) paying $12 per hour for up to 6 hours. 2x1 + 2x2 . what is the quantity demanded? Question 4: A consumer’s preferences are represented by the utility function U (x1 . [Note: Solving the problems does not require diﬀerential calculus techniques. B. Solve both the utility maximization and the expenditure minimization problem for this consumer. in view of this. m and p2 ﬁxed. x2 ) = min{x1 + 5x2 . which was not possible previously.) Is the budget set convex? a) There are two jobs available. Question 3: A consumer’s preferences over 2 goods can be represented by the utility function u(x) = x0. c)  0 ≤ l ≤ 16. (Note the the Walrasian and Hicksian demands are not functions in this case. and/or a deﬁnition of B instead of a set deﬁnition for B. Solve for the consumer’s demand 1 2 function. is the set of maximal feasible consumption for any given leisure level. There are only two goods. c) vectors. Busch. c). B = {(l.] . An additional restriction is that you have to work at one job full time (8 hours) before you can start the other job. In particular note that the boundary of the budget set. c feasible}. Now. Job 2 pays $8 per hour for the ﬁrst two hours and $14 per hour for the next 6 hours. Microeconomics May2004 Question 2: Consider a consumer endowed with 16 hours of leisure time and $100 of initial wealth. each of which can be worked at for no more than 8 hours. where feasibility of consumption c will be determined by total income. 1)..3 x0.e. i.6 . b) The same situation as in [a] above. Extend the deﬁnition of the budget set to be the set of all feasible (l. but job 2 now pays $14 2/3 per hour for the next 6 hours (after the ﬁrst 2).
Question 8: Consider a 2 good. Consumer A has preferences represented by uA (x) = αx1 + x2 . y ∈ C(B ).) Question 11∗ : Consider a 2 good. is called the Constant Elasticity of Substitution utility function. a) Suppose consumer A’s endowment is ωA = (10. ∂ [p1 /p2 ] x1 (p. Consider the following two strict preference relations: deﬁned by x y ⇐⇒ ∃B ∈ B x. x ∈ C(B) and ∃B ∈ B x. 11).Review 37 Question 5: The elasticity of substitution is deﬁned as 1. Derive Marshallian (ordinary) demand. x ∈ C(B). The shape of its indiﬀerence curves depends on the parameter ρ. C(·)) satisﬁes the weak axiom. β = 1 and ﬁx the total size of the economy at 20 . like a CobbDouglas function x1 x2 as ρ → 0. y ∈ B. 10) while consumer B’s endowment is ωB = (10. Consumer B has preferences represented by uB (x) = βlnx1 + lnx2 . Consumer B has preferences represented by uB (x) = 3lnx1 + 4lnx2 . Is this still true if the weak axiom is not satisﬁed by C(·)? (Counterexample suﬃces. Demonstrate that the function looks like a Leontief function min{x1 . x2 } as ρ → −∞. What is the competitive equilibrium of this economy? Question 9: Consider a 2 good. x2 ) = (xρ + xρ )(1/ρ) . y ∈ C(B). Suppose consumer A’s endowment is ωA = (12. Consumer A has preferences represented by uA (x) = 4x1 + 3x2 . Consumer A has preferences represented by uA (x) = min{4x1 + 3x2 . 3x1 + 4x2 }. y ∈ B. 11). 2 consumer economy. Suppose consumer A’s endowment is ωA = (12. and ∗ deﬁned by x ∗ y ⇐⇒ ∃B ∈ B / x. w)/x2 (p. What is the competitive equilibrium of this economy? Question 10∗ : Suppose (B. Consumer B has preferences represented by uB (x) = 3x1 + 4lnx2 . w)] p1 /p2 . 9) while consumer B’s endowment is ωB = (8. 2 consumer economy. like a perfect substitute function x1 + x2 as ρ → 1.2 =− ∂ [x1 (p. What is the competitive equilibrium of this economy? b) Suppose α = 2. 9) while consumer B’s endowment is ωB = (8. w)/x2 (p. y ∈ B . w) What does this elasticity measure? Why might it be of interest? Question 6: The utility function u(x1 . Prove that these two strict preference relations give the same relation on X. 10). where ρ < 1 and 1 2 ρ = 0. 2 consumer economy. Question 7: A consumer’s preferences over 3 goods can be represented by the utility function u(x) = x1 + lnx2 + 2lnx3 .
Suppose technology is given √ by c = f (x) = 4 x. . and boundary. Solve the social planner’s problem. For which endowments will interior equilibria obtain. there are two kinds of equilibria. Microeconomics May2004 units of each good as above. interior. b) Solve for the competitive equilibrium of the private ownership economy. l) = 1 ¯ a) lnc + 2 lnl and let the consumer have a leisure endowment of L = 16.38 LA. As you have seen above. Let consumer preferences be represented by u(c. one ﬁrm and one consumer. for which ones boundary equilibria? Question 12∗ : Consider a privateownership production economy with one consumption good. Busch.
3. The consumer is assumed to have preferences over consumption in both periods. we will employ an endowment model. The consumer is assumed to have an endowment of the consumption good in each period. indeed. consumption today — denoted c1 — and consumption tomorrow — c2 . which we will denote by m1 and m2 . This will reveal information about borrowing and savings behaviour and the relationship between interest rates and the timepreferences of consumers. respectively. say apples and oranges. 39 . represented by the utility function u(c1 . part of our job here is to ﬁnd out how to deal with and evaluate income streams.1 The consumer’s problem In what follows. we will consider the simple case of just two commodities. in the standard model of the previous chapter.Chapter 3 Intertemporal Economics In this chapter we will relabel the basic model in order to focus the analysis on the question of intertemporal choice. c2 ). Since it is somewhat meaningless to speak of income in a two period model. These take the place of the two commodities. we will see that interest rates are just another way to express a price ratio between time dated commodities. In particular.
In contrast. 0 ≤ c1 ≤ m1 }. No Markets This is a slightly more interesting case where consumption can be postponed. Since there are no markets. No Markets If there is no storage and no investment. it may be possible to store the good. The budget set then is just that one point: B = {c ∈ 2 ci = mi . so that the outside layers of the meat will not be edible in the next period. We consider these in turn. such storage may be subject to losses (depreciation). .1 Deriving the budget set Our ﬁrst job will be to determine how the consumer’s budget may be expressed in this setting. c2 )  c2 ≤ m2 + (1 − δ)(m1 − c1 ). No markets means that there is also no way to trade with somebody else in order to either move consumption forward or backward in time. 2}. Let δ ∈ (0.) A typical budget set then is B = {(c1 . no borrowing against future endowments is possible. so that quantities of the period 1 endowment not consumed in period 1 are available for consumption in period 2.40 LA. the consumer may be able to access markets. + Storage. which allow lending and borrowing. then anything not consumed today will be lost forever and consumption cannot be postponed to tomorrow. 1) denote the rate of depreciation — for example. or pests may eat some of the stuﬀ while it is in storage (a serious problem in many countries. Ideally. of course. Busch.1. No Investment. i = 1. the consumer has some investment technology (such as planting seeds) which allows the consumer to “invest” (denoting that the good is used up. The consumer ﬁnds himself therefore in the economically trivial case where he is forced to consume precisely the endowment bundle. will depend on the technology for storing the good and on what markets exist. but not in utility generating consumption) period 1 units in order to create period 2 units. Of course. our consumption good may spoil. Finally. Note ﬁrst of all that it is never possible to consume something before you have it: the consumer will not be able to consume in period 1 any of the endowment in period 2. No Storage. Storage is usually not perfect. Microeconomics May2004 3. No Investment. That. You may want to think of this as 100% depreciation of any stored quantity.
is constant. It implies the consumption in period 2. he will choose whichever has the higher return. as in lending. m2 + (1 − δ)S) In the second case only the level of consumption in period 1. if the endowment happens to be (m1 . This function must either have constant returns to scale or decreasing returns to scale for things to work easily (and for the second order conditions of the utility maximization problem to hold. a consumer with CD preferences u(c1 . The third line simply relabels that same maximization and has savings in period 1 — whatever is not consumed — as the choice variable. but usually it suﬃces to check after the fact. not “market investment”. m2 + (1 − δ)(m1 − c1 )) maxS≤m1 u(m1 − S. and denoted S.c2 ∈B u(c) maxc1 ≤m1 u(c1 . Suppose then that the consumer has the same storage possibility as previously. and he will store otherwise.) One thing to be careful about is the requirement that 0 ≤ c1 ≤ m1 . We can therefore express the consumer’s utility maximization problem in three equivalent ways: maxc1 .1 gives the diagrammatic representation of this optimization problem (with two diﬀerent (!) preferences indicated by representative indiﬀerence curves. Investment. The . of course! The left diagram in Figure 3. but also has access to an investment technology. usually called savings. In that case the marginal return of investment. which is 1 − δ. c2 ) = c1 c2 faced with a price ratio of unity would like to consume where c1 = c2 (You ought to verify this!) However. c1 . It either is larger or smaller than the return of storage. We could employ KuhnTucker conditions to deal with this constraint. No Markets How is the above case changed if investment is possible? Here I am thinking of physical investment. so if f (x) > (1 − δ) the consumer will invest.) Suppose the technology exhibits CRS. m2 ) = (5. such as planting. is a choice variable. We will model this technology just as if it were a ﬁrm. Storage. Since the consumer will want to maximize the budget set. All of these will give the same answer. For example. and specify a function that gives the returns for any investment level: y = f (x). 25) then this is clearly impossible. f (x). We therefore conclude that there is a corner solution and consumption occurs at the endowment point m.Intertemporal 41 The quantity (m1 − c1 ) in this expression is the amount of period 1 endowment not consumed in period 1.
Thus. how many dollars would somebody be willing to pay? F/(1 + r). let us now derive the budget set of a consumer who may borrow and lend.42 LA. Microeconomics budget then is B= May2004 {(c1 . while F is the future value of P . and to trade consumption on markets. say good 2 for good 1. By convention the value P is called the present value of F . Given these conventions. 1 dollar today “buys” (1 + r) dollars tomorrow. No Investment. Of course. The budget then is {(c1 . We therefore have an equivalence between P dollars today and F dollars tomorrow. since F/(1 + r) + rF/(1 + r) = F . Full Markets We now allow the consumer to store consumption between periods 1 and 2. this is an additional rP dollars. not stored. Instead of the usual prices. but has no (physical) investment technology. one can always convert between the two without much trouble if some care is taken. the price of 1 of today’s dollars is (1 + r) of tomorrow’s. for a payment of F dollars tomorrow. But when should the consumer stop investing? The maximal investment x is deﬁned by f (x) = (1 − δ). or P = F/(1 + r). so that the total return will be below that of storage if all ﬁrst period endowment is invested. c2 )  c2 ≤ m2 + f (x) + (1 − δ)(m1 − x − c1 ). Similarly. since the marginal return of storage now exceeds that of further investment. 0 ≤ c1 ≤ m1 } otherwise What if the investment technology exhibits decreasing returns to scale? Suppose the most interesting case. but the opposite is true for high x. Suppose further that not only the marginal return falls. which are an expression of the exchange ratio of. we normally express things in terms of interest rates when we deal with time. 0 ≤ c1 ≤ m1 } if (1 − δ) < f () {(c1 . c2 )  c2 ≤ m2 + f (m1 − c1 ). c2 )  c2 ≤ m2 + f (m1 −c1 ). Busch. m1 − x ≤ c1 ≤ m1 } B = {(c1 . c2 )  c2 ≤ m2 + (1−δ)(m1 −c1 ). provided that P (1 + r) = F . Any additional unit of foregone period 1 consumption should be stored. Put diﬀerently. . At an interest rate r. but that the case is one where f (m1 ) < (1 − δ)m1 . where f (x) > (1 − δ) for low x. with some depreciation. of course. 0 ≤ c1 ≤ m1 − x} where x is deﬁned by (1 − δ) = f (x) Storage. What will be the budget set? Clearly (?) any initial amounts not consumed in period 1 should be invested. The key idea is that a loan of P dollars today will pay back the principal P plus some interest income.
If the consumer stores the good his constraint on period 2 consumption is as derived previously. m2 ) with a slope of (1 + r). is the one key “trick” in doing intertemporal economics: pick one time period as your “base”. Indeed. Manipulation of the two constraints shows that they are really identical. the second and fourth in present values. As long as δ and r are both strictly positive the second of these is a strictly larger set than the ﬁrst. c1 . Replacing inequalities with equalities in the above you note that all are just diﬀerent ways of writing the equation of a straight line through the endowment point (m1 . c1 . c2 )  (m1 − c1 ) + 1+r The ﬁrst and third of these are expressed in future values. then in period 1 he can receive at most (m2 − c2 )/(1 + r) units. = {(c1 . Here this means we can choose a period and denominate everything in future. Since consumption is a good (more is better) the consumer will not use storage. (m2 − c2 ) = {(c1 . and then stick to it! Finally note that we are interested in the budget line.or presentvalue equivalents for this period. if he is willing to pay. generally. c1 . If he uses the market instead. forgo some future consumption for current consumption by purchasing the appropriate borrowing contract. where there are two “budget lines” for increasing future consumption. 1+r = {(c1 . The lower of the two is the one corresponding to storage. c2 )  c2 ≤ m2 + (1 + r)(m1 − c1 ). however. by the way.1. c1 . c2 ≥ 0}. c2 ≤ m2 + δ(m1 − c1 ). in period 2. consumption can be postponed in two ways: storage and lending. Thus one constraint in the budget is m1 ≤ c1 ≤ m1 +(m2 −c2 )/(1+r). c2 )  (m2 − c2 ) + (1 + r)(m1 − c1 ) ≥ 0. On the other hand. This. and the eﬀective constraint on future consumption will be c2 ≤ m2 + (m1 − c1 )(1 + r). his constraint becomes c2 ≤ m2 + (m1 − c1 )(1 + r). Based on what we have said above. the higher the one corresponding to a positive interest rate on loans. we are free to pick a numeraire commodity. c2 ≥ 0}. c2 ≥ 0}. It does not matter which you choose as long as you adopt one perspective and stick to it! (If you recall. c2 ≥ 0}. the consumer’s budget can be expressed as any of B = {(c1 .Intertemporal 43 Storage is not an option in order to move consumption from period 2 to period 1. (m2 − c2 ) ≥ 0. This is indicated in the right diagram in Figure 3. c2 )  c1 ≤ m1 + . . convert everything into this time period. an amount of (m2 − c2 ) > 0. The consumer may.
the ﬁrst thing to realize is that the (physical) investment technology will be used to the point where its marginal (gross rate of) return f (·) is equal to that of putting the marginal dollar into the ﬁnancial market. that is. the optimal investment is determined by f (x) = (1 + r). That is. which would usually be the price ratio p1 /p2 . and all previous equations which characterize equilibrium apply. so we will ignore it for a moment. u2 (c) 1 What is the interpretation of this equation? Well.) On the left hand side we have the ratio of marginal utilities. . Then it is necessary that (1 + r) u1 (c) = .44 LA. the MRS. Investment. Full Markets What if everything is possible? As above. This will naturally depend on the consumer’s time preferences. m2 + f (x)) with slope (1 + r). Suppose the case of markets in which an interest rate of r is charged. How do (physical) investment and market investment interact in determining the budget? As in the case of storage and investment. 3. on the right hand side we have the slope of the budget line. Microeconomics good 2 20 15 10 5 0 0 5 10 15 20 good 1 good 2 May2004 + + 20 + + 15 10 5 0 0 5 10 + + 15 20 good 1 Figure 3.1. The budget thus is bounded by a straight line through the point (m1 − x. storage will normally not ﬁgure into the problem. Busch. It tells us the consumer’s willingness to trade oﬀ current consumption for future consumption. no (physical) investment Storage.1: Storage without and with markets. the RHS gives the cost of future consumption in terms of current consumption (recall that in terms of units of goods we have (1/p2 )/(1/p1 ). That is.2 Utility maximization After the budget has been derived the consumer’s problem is solved in the usual fashion. The second key point is that with perfect capital markets it is possible to borrow money to invest.
depends on the desired consumption mix compared to the consumption mix in the consumer’s endowment. ρ = r. Note that the discount factor β is related to the consumer’s discount rate ρ: β = 1/(1 + ρ). the equation characterizing the consumer’s optimum then becomes c2 = (1 + r) βc1 ⇒ c2 = β(1 + r) c1 ⇒ c2 1+r = . This is actually true in general with additively separable functions. of course. How will a change in relative prices aﬀect the consumer’s wellbeing and consumption choices? The usual facts from revealed preference theory apply: If the interest rate increases (i. if the private discount rate is identical to the market interest rate. First of all. Finally. One of the key features of such a separable speciﬁcation is that the marginal utilities of today’s consumption and tomorrow’s are independent. He will either be a lender (c1 < m1 ) or a borrower (c1 > m1 . This feature makes life much easier if the goal is to solve some particular model or make some predictions. the consumer would prefer to engage in perfect consumption smoothing. For this speciﬁc function. while a patient consumer for whom ρ < r will postpone consumption.) This. c1 1+ρ Some interesting observations follow from this.. if I consider ∂u(c1 . • A lender will become better oﬀ. c2 ) = lnc1 +βlnc2 . so that ρ > r. That is. since u (c1 ) = u (c2 ) iﬀ c1 = c2 . This speciﬁcation says that consumption is ranked equally within each period (the same subutility function applies within each period) but that future utility is not as valuable as today’s and hence is discounted. I ﬁnd that it is only a function of ci and not of cj .e. • A borrower who remains a borrower will become worse oﬀ. we can do the usual comparative statics.Intertemporal 45 Let us take a concrete example. The ideal consumption path has the consumer consume the same quantity each period. current consumption becomes more expensive relative to future consumption) then • A lender will remain a lender. then the consumer will favour current consumption. In macroeconomics we often use a socalled timeseparable utility function like u(c1 . c2 )/∂ci . In order to achieve the preferred consumption path the consumer will have to engage in the market. • A borrower will borrow less (assuming normal goods) and may switch to being a lender. β < 1. . If the consumer is more impatient than the market.
Microeconomics May2004 • A borrower who switches to become a lender may be worse or better oﬀ. M ). It follows from the chain rule that dc1 (·) ∂c1 (·) ∂c1 (·) ∂M = + . The Slutsky equation tells us that ∂c1 (·) ∂h1 (·) ∂c1 (·) = − c1 (·) ∂p1 ∂p1 ∂M and hence we obtain ∂h1 (·) ∂c1 (·) dc1 (·) . A lender will have a negative bracketed term. We can also see some of these implications by considering the Slutsky equation for this case. where the weighting of the income eﬀect is by market purchases only. We know that the substitution eﬀect is negative. Thus the whole expression is certainly negative and a borrower will consume less if the price of current consumption goes up. Busch. only the amount traded on markets is subject to the income eﬀect. dp1 ∂p1 ∂M ∂p1 but ∂M/∂p1 = m1 . These can be easily veriﬁed by drawing the appropriate diagram and observing the restrictions implied by the weak axiom of revealed preference. Now to the analysis of this equation. In this case the demand function for period 1 consumption is c1 (p1 . The sign of the whole expression therefore depends on the term in brackets. In fact.46 LA. p2 . which cancels the negative sign in the expression. all good 1 consumption is purchased. the (positive) income eﬀect could be larger than the (negative) substitution eﬀect and current consumption could go up! . A borrower has a positive bracketed term. where M = p1 m1 + p2 m2 . In the standard model without endowment. in other words on the lender/borrower position of the consumer. = − (c1 (·) − m1 ) dp1 ∂p1 ∂M This equation is easily remembered since it is really just the Slutsky equation as usual. and we know that the total eﬀect is less negative than the substitution eﬀect. With an endowment. and hence subject to the income eﬀect. We also know that for a normal good the income eﬀect is positive.
an eﬀect called inﬂation if money prices of goods go up (and deﬂation otherwise.Intertemporal 47 3. Doing so will require a diﬀerentiation between nominal and real interest rates..) This may lead to the phenomenon that there is a price ratio of goods to money. So. The budget constraint for the two period problem then becomes: B :: p2 c2 = p2 m2 + (1 + r)(m1 − c1 ). but in terms of money (which is not a good. i. π = (p2 −p1 )/p1 . the real interest rate is r = (r − π)/(1 + π). In other words.e. ˆ p2 1+π Thus. which means that you will get interest of r units of money per unit. Note however that this ˆ exaggerates the real interest rate.2 Real Interest Rates So far we have used the usual economic notion of prices as exchange rates of goods. however. The real interest rate can now be derived by rewriting the above budget to have the usual look: c2 = m 2 + 1+r 1+r (m1 − c1 ) = m2 + (m1 − c1 ) = m2 + (1 + r)(m1 − c1 ). We can now ask two questions: what is the rate of inﬂation and what is the real interest rate. prices are not denoted in terms of some numeraire commodity.) We can modify our model for this case by ﬁxing the price of the numeraire at only one point in time. We then can account for inﬂation/deﬂation. A ﬁnancial asset is really just . The rate of inﬂation is the rate π such that (1 + π)p1 = p2 . and that this may change over time. It is easiest if we ﬁrst look at ﬁnancial assets only. let p1 = 1 be the (money) price of the consumption good in period 1 and let p2 be the (money) price of the consumption good in period 2. and as a rule of thumb ˆ (for small π) it is common to simply use r ∼ (r − π).3 Riskfree Assets Another application of this methodology is for a ﬁrst look at assets. In reality. the total monetary value of second period consumption can at most be the monetary value of second period endowment plus the monetary value of foregone ﬁrst period consumption. because you get paid back the principal and interest on an investment in units of money which has a diﬀerent value (in terms of real goods) compared to the one you started out with. Let the nominal interest rate be r. 3.
and by C the value of each coupon.. A strip bond is a bond which had all F its coupons removed (“stripped”). 3. which is a bond which pays its coupon rate forever. A normal bond also has a a sequence of coupons. PV = F +c (1 + r)T (1 + r)i i=1 . assume a yearly coupon for now (we will see more ﬁnancial math later which allows conversion of compounding rates into simple rates).1 Bonds A bond is a ﬁnancial instrument issued by governments or large corporations. as far as their issuer is concerned.48 LA. A bond has a so called face value. We can now ask what the price of such a bond should be today. Then we can simplify the above equation to yield T 1 1 . which is what it promises to pay the owner of the bond at the maturity date T of the the bond. Bonds are. While there are few such things. a government bond comes fairly close. car. Future payments are. which used to be literally coupons that were attached to the bond and could be cut oﬀ and exchanged for money at indicated dates. In that case the coupon rate is simply C 100%. Another special kind of bond is a consol.) If we assume that the promise is known to be kept. we can now compute the coupon rate of the bond. + C+ F. actually. that the amount it will pay back is known. its value is the current value of all these future payments. of course. The coupons themselves then generate a simple annuity — a ﬁxed yearly payment for a speciﬁed number of years — which could be sold separately. from the phrase “I owe you”. and that the value of what it pays back when it does is known.. (1 + r) (1 + r)2 (1 + r)T (1 + r)T Denote the coupon rate by c. but never pays back any face value. such as a dishwasher. Note that assets do not have to be ﬁnancial instruments: they can also be real. Microeconomics May2004 a promise to pay (sometimes called an IOU. Since the bond bestows on its owner the right to receive speciﬁed payments on speciﬁed future dates.) By calling those things assets we focus on the fact that they have a future value. For simplicity. then we have a riskfree asset. converted into their present values by using the appropriate interest rate: PV = 1 1 1 1 C+ C + . If we denote by F the face value of the bond. a loan. or plant (of either kind. Busch.3.
Then what I should do is to borrow money at the interest rate r. therefore. as mentioned previously. Recall that once a bond is issued T . a case where there is complete certainty over all assets’ returns.2 More on Rates of Return You may happen to own an asset which.) The question you now have is. I also have to pay back my loan. say I dollars. then it must be true that both assets have the same rate of return. but an asset that only has a current and a future price. Call this rate r. since that would move their individual budget line out the most.e. 3. p1 p1 p1 − p 0 = − 1. if a consumer is willing to hold more than one asset. or (1 + r) = . i.3. as the interest rate rises the present value of a bond falls. Then wait and sell the goods in the next period. Bond prices and interest rates are inversely related. Then we know that r = p0 p0 p0 This condition must hold for all assets which are held. Assume.. purchase I/p0 units. Is the present price of a bond higher or lower than the maturity value? In the latter case the bond is trading at a discount. Recall that arbitrage refers to the activity of selling high while buying low in order to make a proﬁt.. in the former at a premium. We can now p0 ask what conditions such a rate of return might have to satisfy in equilibrium. Your rate of return then is the money you gain (or loose) as a percentage of the cost of the p1 − p 0 asset. F . This asset happens to also have a current market price (we will see later where that might come from and what conditions it will have to satisfy. and thus its price will be higher. and C are ﬁxed. at .Intertemporal 49 From this equation follows one of the more important facts concerning the price of bonds and the interest rate. what is this asset’s rate of return? Let us start with the simple most case of no income/services. is the right to a certain stream of income or beneﬁts (services). For example. assume that there were an asset for which p1 /p0 > 1 + r. respectively. Since r appears in the denominator. This will depend on the relationship between the interest rate and the coupon rate. It is therefore also known as a zero arbitrage condition. i. future prices. Intuitively. In equilibrium. if the coupon rate exceeds the interest rate the bond is more valuable. That will yield Ip1 /p0 dollars.e. That means that the present price of the bond needs to adjust as the interest rate r varies. p0 and p1 . and use those funds to buy the good in question. in other words the per dollar rate of return is . otherwise there would be arbitrage opportunities. All consumers would want to buy that asset which has the highest rate of return.
and this serves to reduce the proﬁtability of the exercise. and in general the future price of the asset is not known. Of course. For example. a house worth $100. Note that the optimal loan size would be inﬁnite. it is easily demonstrated to violate the above rule: The price of the good tomorrow will be determined by demand and supply tomorrow. more tellingly. and have therefore the choice of selling it tomorrow or selling it today. of course. If you sell early. gasoline prices during the Gulf war. Currently I own that good. which has to be equal between the two options. I would want to obtain the appropriate rate of return on the asset. should be sold at the higher cost. if not in the amount. really — and will have a higher rate of return in order to compensate for the potential costs and problems in unloading them. and thus I have a proﬁt of p1 /p0 − (1 + r) per dollar of loan. This would normally be considered unethical by most (just try and force them to give you money. procured at higher cost. However.) Of course.000 and $100. (Note that bonds are an exception to some degree.) . since the realization of the house’s value is a random variable.” The same is true for thinly traded stocks. The 1+r correct market price for an asset is its discounted future value! This discussion has an application to the debate about pricing during supply or demand shocks.000 cash are not equivalent. assets are not usually all the same. This can. or. Microeconomics May2004 (1 + r)I dollars. or the alleged pricegouging in the icestorm areas of Quebec and Ontario: What should the price of an item be which is already in stock? Many people argue that it is unfair to charge a higher price for instock items. there are other kinds of risk as well. Only the replacement items. since everybody expects a ﬂood of the stuﬀ tomorrow). since the cash is immediately usable. For example. at least in time.50 LA. In a zeroarbitrage equilibrium we p1 therefore must have (1 + r) = p1 /p0 . If you choose to hold the bond all the way to the maturity date you do know the precise stream of payments. Busch. Should I be forced not to do so I am forced to give money away against my will and better judgment. Such assets may carry a liquidity premium — an illiquidity punishment. be treated in terms of risk. while the house may take a while to sell — it is less “liquid. the resulting large demand would certainly drive up current prices (while also lowering future prices. and we will see this later when we introduce uncertainty. While this may be “ethical” according to some. Thus I am only willing to part with it now if I am oﬀered a higher price which foreshadows tomorrows higher price. you face the uncertain sale price which depends on the interest rate at that point in time. and apparently all are agreed that that price might well be higher due to a large shift out in the demand and/or reduction in supply. p0 = .
We can analyse the question of simple resource depletion: at what rate should we use up a nonrenewable resource. say a painting. For simplicity. as do reductions in demand. Equilibrium requires. let us assume some current price p0 as a starting value and let us focus on supply. When will the owner of the resource be willing to sell? If the market rate of return on other assets is r then the resource. Complicating things in the real world is the fact that assets often diﬀer in their tax treatment. and thus we may have to use an after tax rate for one asset and set it equal to an untaxed rate for another.3. It assumes demand and supply to be independent of price. however. this is not true. as are dividend payments of stocks or interest payments of bonds. For one period this is still simple to deal with: The asset will generate beneﬁt (say rent saved. 3.3 Resource Depletion The simple discounting rules above can also be applied to gain some ﬁrst insights into resource economics. in general equilibrium. which is just another asset. which. that the rates of return as perceived by the consumer are equalized.Intertemporal 51 Assets may also yield consumption returns while you hold them: a car or house are examples. and to add insult to injury they ignore inﬂation) are tax free. This approach has a major ﬂaw. For another asset. ﬁrst assume a ﬁxed annual demand D. will also have to generate that rate of return. It follows that there are S/D years left. We can also analyse when a tree (or forest) should to be cut down. Note that additional discoveries of supplies lower the price since they increase the time to depletion. and in general we’d have to expect pt = (1 + r)t p0 . then we again require that this be equal to the rate of return on other assets. if the house is a principal residence any capital gains (the tax man’s term for p1 − p0 . so that p0 = C/(1 + r)S/D . Arbitrage implies that pt+1 = (1 + r)pt . increases in the discount rate lower the price. after which we assume that an alternative has to be used which costs C. Lowering the price of the alternative also lowers the current price. If the consumer holds multiple p0 assets in equilibrium. Assume a nonrenewable resource currently available at quantity S. For example. or train tickets saved) of b and we thus p1 − p 0 + b compute the rate of return as . Finally. Thus the price in the last year should be pS/D = C. of course. Therefore p1 = (1 + r)p0 . So instead. means two things: demand will . Note that the price of the resource is therefore increasing with time.
We will ultimately run out of the resource. computed yearly. which grow only slowly. If it is cut it generates revenue right away. what if you don’t own the trees? What if you are the James Bond of forestry. in Canada most licenses are for mature forests. For starters. How much . The Europeans. Busch. the state has serious incentive problems to overcome within this principal agent framework. the tree should be cut once its growth rate divided by its current size has slowed to the market interest rate. notice that lack of ownership will also impact the replanting decision. instead of. etc. and substitutes will become more competitive. if we treat the logger as an agent of the state. titanium.52 LA. As we will see later in the course. forget that they have long ago cut nearly all of their mature forests and are now in a harvesting model with mostly high growth forests. What about renewable resources? Consider ﬁrst the ‘European’ model of privately owned land for timber production as an example. tin. say. critical of clearcutting. This fact has been established over and over with various natural resources such as oil. This fact has a few implications for forestry: Faster growing trees are a better investment. and that all interest income is also reinvested at this 5% rate. depending on the growth rate.) Furthermore. As a ﬁnal note.4 A Short Digression into Financial Economics I thought it might be useful to provide you with a short refresher or introduction to multiperiod present value and compound interest computations. (This discussion is ceteris paribus — ignoring general equilibrium eﬀects. but it is nearly always wrong to simply use a linear projection of current use patterns. which nearly by deﬁnition have slow or no growth — thus the thing to do is to clear cut and get out of there. If it continues to grow it will not generate this revenue but instead generate more revenue tomorrow (since it is growing and there will be more timber tomorrow. 3.3. that is.) It follows that the two rates of return should be equalized. Microeconomics May2004 fall as customers switch to alternatives. Furthermore we might expect more substitutes to be developed. When should it harvest the trees? Each year there is the decision to harvest the tree or not. Of course. assume you put $1 in the bank at 5% interest. copper. Here we have a company who owns an asset — a forest — which it intends to manage in order to maximize the present value of current and future proﬁts. and thus we see mostly fast growing species replanted. with a (timelimited) license to kill? In that case you will simply cut the trees down either immediately or before the end of your license. oaks.
thus t 100ln2 69. .. Let us assume it is just simple interest. + δ n p. The important fact about this is that a simple interest rate and a compounded interest rate are not the same. It is used to ﬁnd out how long it will take to double your money at any given interest rate. for small x we know that ln(1 + x) ∼ x. sometimes also referred to as the rule of 69. 300 months) a certain payment p. i. What you will quickly ﬁnd out is that banks don’t normally do that. monthly.052 . . where r is the interest rate we use per period. . 1.3147 r% ∼ ln2 ⇒ t ∼ = 100 r% r% but of course 72 has more divisors and is much easier to work with. 1. and normally monthly.6825% simple interest rate. Here δ = 1/(1 + r).e.Intertemporal 53 money will you have in each of the following years? The answer is 1. It is therefore very important to be sure to know what interest rate applies and how compounding is applied (semiannual. 1.1268. However. . . Thus PV = δ p + δp + δ 2 p + . etc. + δ n−1 p = δp 1 + δ + δ 2 + .03.?) Here is a handy little device used in many circles: the rule of 72. For example. . you are really paying not a 12% interest rate but a 12.12 )12 = 1. The idea is that it will approximately take 72/r periods to double your money at an interest rate of r percent. its present value? We need to compute the value of the following sum: δp + δ 2 p + δ 3 p + . if you get a loan at 12%.12 for every dollar you borrowed at the end of one year.05. . The proof is simple: we want to solve for the t for which 1+ r% 100 t =2 ⇒ tln 1 + r% 100 = ln2. This is also known as a simple annuity. Monthly compounding would mean that you will owe (1 + . i. The power of compounding also comes into play with mortgages or other installment loans. . it matters how often this is compounded. + δ n−1 . They at least compound semiannually. What is the value of such a promise. since with compounding there is interest on interest.e. . In other words.053 . A mortgage is a promise to pay every period for a speciﬁed length (typically 25 years. On a million dollar loan this would be 12 a diﬀerence of $6825. You then owe $1.05t ..
etc. and 1 unit in the third. r23 and r13 implied by zero arbitrage.06)2 − 1 = 12.95.975879418%. Before you engage in mortgages it would be a good idea to program your spreadsheet with these formulas and convince yourself how biweekly payments reduce the total interest you pay. how important a couple of percentage points oﬀ the interest rate are to your monthly budget. nominal) interest rate from period i to j.36%. the payment periods. and the fact that nearly all mortgages are computed for a 25 year term (but seldom run longer than 5 years. Explain why it is useful to have this condition hold (i.54 LA.) Let rij denote the (simple. A ﬁnal note: In Canada a mortgage can be at most compounded semiannually. Joe . The eﬀective yearly interest rate in turn is (1. these days). Given the above. Thus the eﬀective interest rate per month is derived by solving (1 + r/2)2 = (1 + rm )12 . c) Draw a diagrammatic representation of the budget constraint in periods 2 and 3. the monthly payment at a 10% yearly interest rate for an additional $1000 on the mortgage is $8. the payment amount. point out what would cause a potential problem in how you’ve written the budget if the condition in (a) fails. and the interest rate.4 Review Problems Question 1: There are three time periods and one consumption good. The money price for the consumption good is known to be p = 1 in all periods (no inﬂation.) The above equation relates four variables: the principal amount. and by law the bank is supposed to tell you about that too. Question 2: There are two goods. Busch. consumption today and tomorrow. The consumer’s endowments are 4 units in the ﬁrst period. Microeconomics 1 − δn 1−δ 1 − (1 + r)−n PV = p r = δp (Recall in the above derivation that for δ < 1 we have ∞ i=1 May2004 δ i = 1/(1 − δ).e.. If you are quoted a 12% interest rate per year the monthly rate is therefore (1.06)1/6 − 1 = 0. If you ﬁx any three this allows you to derive the fourth after only a little bit of manipulation. b) Write down the consumer’s budget constraint assuming the restriction in (a) holds. a) State the restrictions on r12 . 3. 20 units in the second. being careful to note how period 1 consumption inﬂuences this diagram.
Also give a short explanatory paragraph how the set is derived. 4). what is the equilibrium interest rate and allocation? . c2 ) = c1 + βc2 . and an endowment of (8. 6). and an endowment of (12. (Here exp() denotes the exponential function. Joe also possesses an investment technology which is characterized by a production function √ x2 = 10 x1 . A borrowing constraint exists which prevents him from borrowing against more than 60% of his period 2 endowment. how much does he invest. c2 ) = min 23 22 12 13 c1 + c 2 . c2 ) = exp(c4 c6 ). Bob has preferences over consumption in two periods represented by the utility function uB (c1 .) Question 4: Alice has preferences over consumption in two periods represented by the utility function uA (c1 . an investment of x1 units in period 1 will lead to x2 units in period 2. c2 ) = lnc1 + αlnc2 . What would her endowment have to be for her to be a lender. That is. of course.) What 1 2 is Joe’s ﬁnal consumption bundle. There exists a credit market which allows him to borrow or lend against his initial endowment at market interest rates of 0%. Question 3: Anna has preferences over her consumption levels in two periods which can be represented by the utility function u(c1 . a borrower? e) Assume she can lend money at 18% and borrow at 32%. or you can give it mathematically. 10 10 a) Draw a carefully labelled representation of her indiﬀerence curve map.) b) Assuming.Intertemporal 55 has an initial endowment of (100. (Indicate the indiﬀerence maps and the Contract Curve. 100). b) What is her utility maximizing consumption bundle if her initial endowment is (9. that both α and β lie strictly between zero and one. Would Anna ever trade at all? (Explain. a) Draw an appropriately labelled representation of this exchange economy in order to “prime” your intuition. 8) and the interest rate is 25%. d) Assume she can lend money at 22% and borrow at 28%. b) Suppose that Joe’s preferences can be represented by the function U (c1 . a) What is Joe’s budget constraint? A very clearly drawn and well labelled diagram suﬃces. c1 + c 2 . and what are his transactions in the credit market. 12) and the interest rate is 25%. c) What is her utility maximizing consumption bundle if her initial endowment is (5.
56 LA. Microeconomics May2004 . Busch.
in principle. but also on how likely each outcome is. First. and which occurs in the end. and could. for example. however. however. or indeed are assumed based on some personal (subjective) judgment. One of the key facts about situations involving risk/uncertainty is that the consumer’s wellbeing does not only depend on the various possible outcomes. We will develop that interpretation further later on in this chapter.Chapter 4 Uncertainty So far. or can be estimated in some other way. We call such a list 57 . it is often the case that an action does not lead to a deﬁnite outcome. they will all be assumed to lie in some set of alternatives X. Often.” These situations are ones of uncertainty — it is uncertain what happens. Economists then speak of risk. They are somehow embedded in the utility function and prices. it would be nice to have probabilities explicitly in the model formulation. The standard model of chapter 2 does not allow an explicit role for such probabilities. be state contingent. the probabilities of the diﬀerent possibilities are known from past experience. Note that our “normal” model is already handling such cases if we take it at its most general level: commodities in the model were supposed to be fully speciﬁed. In real life. Which of these occurs is outside the control of the decision maker. A particularly simple model that does this holds the outcomes ﬁxed. and focuses on the diﬀerent probabilities with which they occur. however. it has been assumed that consumers would know precisely what they were buying and getting. we will develop a more simple model which is designed to bring the role of probabilities to the fore. In order to compare situations which diﬀer only in the probabilities. but instead to one of many possible outcomes. It is determined by what is referred to as “nature.
use the cumulative distribution function (cdf) F (·) or the probability density function (assuming it exists) f (x) to denote lotteries over (onedimensional) continuous outcome spaces. n Assumption 1 Only the probability distribution over “ﬁnal” outcomes matters to the consumer =⇒ preferences are over simple lotteries only. only what it is. then we have the total n probability of outcome n given by pn = i αi pi . Recall that a cumulative distribution function F (·) is always deﬁned and that F (x) represents the probability of the underlying random variable taking on a value less or equal to x. This is clearly an abstraction which looses quite a bit of richness in consumer behaviour..1 Of course. or want to allow that it does not. 1 . . For example. i=1 If we have a suitably deﬁned continuous space of outcomes. any such compound lottery will lead to a probability distribution over ﬁnal outcomes which is equivalent to that of some simple lottery. Just as before. i. we can view a probability distribution as a lottery. with pi ≥ 0. We can then denote the mean. pN ) of probabilities for the N diﬀerent outcomes in X. depending on if we know that the pdf exists. On the positive side stands the mathematical simplicity of this setting. In particular. and such lotteries are called compound lotteries. . We will therefore.58 LA. The consumer is also assumed not to care how a given probability distribution over ﬁnal outcomes arises. if it is convenient. However. in the usual way.e. Probability distributions over some set are a fairly simple framework to work with. gambling pleasure (or dislike) in itself is not allowed under this speciﬁcation. Microeconomics of the probabilities for each outcome a lottery. . in two diﬀerent ways: µ = tf (t)dt = tdF (t). If the density exists (and it does not have to!) then we denote it as f (x) and have x the identity F (x) = f (t)dt. and if that simple lottery i in turn assigns probability pi to outcome n. for example + for the outcome “wealth”. if some compound lottery leads with probability αi to some simple lottery i. We now have assumed that a consumer cares only about the probability distribution over a ﬁnite set of outcomes. we will assume that the consumer is able to rank any two such distributions. N pi = 1. Note that this is a restrictive assumption. say. lotteries could have as outcomes other lotteries. May2004 Deﬁnition 1 A simple lottery is a list L = (p1 . it does not allow the consumer to attach any direct value to the fact that he is involved in a lottery. Busch. p2 . .
there is only one more assumption: Assumption 4 The utility of a lottery is the expected value of the utilities of its outcomes: n U (L) = i=1 pi u(xi ) U (L) = u(x)dF (x) This form of a utility function is called a von NeumannMorgenstern. a > 0.Uncertainty 59 Assumption 2 The preferences over simple lotteries are rational and continuous. L L . Note that this name applies to U (·). rationality requires transitivity of the preferences. the evaluation of that win often depends on the fact if this was the top price available or the consolation price. The latter is sometimes called a Bernoulli utility function. since outcomes are mutually exclusive — one and only one of the outcomes will happen in the end — but is restrictive since consumers often express regret: having won $5000. the same preferences over lotteries are expressed by V (·) and U (·) if and only if V (·) = aU (·) + b. This seems sensible on the one hand. utility function. The vNM utility function is unique only up to a positive aﬃne transformation. . without rationality the model would have no predictive power. In particular. Once we have done this. and there is some evidence that real life consumers violate this assumption. The ˆ reason for this should be clear. Suppose we compare two lotteries. These violations of transitivity are “more common” in this setting compared to the model without risk/uncertainty. but we are not allowed to change its curvature. Yet. L and L. The economic importance of this assumption is that we have only one utility index over outcomes in which preferences over lotteries (the distribution over available outcomes) does not enter. for example. Note that rationality is a strong assumption in this case. We are allowed to scale the utility index and to change its slope. ˆ ˆ that is.L L ⇒ L L. We further assume that the preferences satisfy the following assumption: Assumption 3 Preferences are Independent of what other outcomes may ˆ ˆ have been available: L L ⇒ αL + (1 − α)L αL + (1 − α)L. or vNM. that is. This latter assumption guarantees us the existence of a utility function representing these preferences. For diﬀerent lotteries this may be hard to believe. not u(·).
Note that there is little regret possible here.11u(5) > . 0) in hundred thousands of dollars.1.1. we would change the diﬀerence between outcome utilities. as we shall see later. . we were to transform the function u(·). so: n n U (L) = n i=1 n ˆ pi u(xi ) > U (L) = i=1 ˆ pi u(xi ) i=1 pi u(xi ) − pi u(xi ) > 0 ˆ i=1 (pk − pk )u(xk ) + (pj − pj )u(xj )(pm − pˆ )u(xm ) + (pn − pn )u(xn ) > 0 ˆ ˆ ˆ m (pk − pk )(u(xk ) − u(xj )) + (pm − pˆ )(u(xm ) − u(xn )) > 0 ˆ m This comparison clearly depends on both. and this could change the above comparison. if they are asked to choose between LC = (0.01u(0) .1u(25) + . the curvature of the Bernoulli utility index u(·) is crucial in determining the consumer’s behaviour with respect to risk.89) or LD = (. Allais Paradox: The Allais paradox shows that consumers may not satisfy the axioms we had assumed. If we multiply u(·) by a constant. it will factor out of the last line above. LA = (0.1u(25) + . Suppose U (L) > U (L).9) the same consumers often indicate a preference for lottery D. violate our assumptions. 5. Microeconomics May2004 which diﬀer only in that probability is shifted between outcomes k and j and ˆ outcomes m and n.60 LA.1u(25) + . On the other hand. some famous paradoxes relating to uncertainty and our assumptions.89u(5) + . It considers the following case: Consider a space of outcomes for a lottery given by C = (25.89u(0) > . This is easily checked by assuming the existence of some u(·): The preference for A over B then indicates that u(5) > .89. the diﬀerences in probabilities as well as the diﬀerences in the utility indices of the outcomes. even by a monotonic transformation. Often consumers will indicate a preference for LA .01u(0) . Before we proceed to that. 0. In fact.01). you simply get a lot larger winning in exchange for a slightly lower probability of winning under D. probably because they foresee that they would regret to have been greedy if they end up with nothing under lottery B. however. 0) or LB = (. . . These choices.11u(5) + . If.9u(0) . 1. and will be used to measure the consumer’s risk aversion. . however. .11. Busch. Subjects are then asked which of two lotteries they would prefer.
X] given by the probability density f (x). Other problems with expected utility also exist. of which precisely 100 are known to be red. In particular. this means that p(R)u(1000) > p(B)u(1000) ⇒ p(R) > p(B) ⇒ (1 − p(R)) < (1 − p(B)) ⇒ p(¬R) < p(¬B) ⇒ p(¬R)u(1000) < p(¬B)u(1000). The other 200 are blue or green in an unknown proportion (note that this is uncertainty: there is no information as to the proportion available. assume that X is simply the wealth/total consumption of the consumer in each outcome. the basic attitudes of a consumer concerning risk can be obtained by comparing two diﬀerent lotteries: one that gives an outcome for certain (a degenerate lottery). LD : $1000 if a drawn ball is NOT blue. letting u(0) be zero for simplicity. However.) The consumer is again oﬀered the choice between two pairs of gambles: Choice 1 = Choice 2 : = LA : $1000 if a drawn ball is red.1 Risk Aversion We will now restrict the outcome space X to be onedimensional. 4. Consider an urn with 300 balls in it. With this simpliﬁcation.Uncertainty 61 and the last line indicates that lottery C is preferred to D! Ellsberg Paradox: This paradox shows that consumers may not be consistent in their assessment of uncertainty. LB : $1000 if a drawn ball is blue. One is the intimate relation of risk aversion and time preference which is imposed by these preferences. so that they too are easily ‘refuted’ by a properly chosen experiment. and that from the lottery L (which has . There consequently is a fairly active literature which attempts to ﬁnd a superior model for choice under uncertainty. and many still only address one or the other speciﬁc problem. Thus the consumer should prefer D to C if choice were consistent. and another that has the same expected value. Often consumers faced with these two choices will choose A over B and will choose C over D. So. but is nondegenerate. These attempts mostly come at the expense of much higher mathematical requirements. let L be a lottery ¯ on [0. We can now compare the consumer’s utility from obtaining C for certain. It generates an expected ¯ X value of wealth of 0 xf (x)dx = C. LC : $1000 if a drawn ball is NOT red.
Any other transformation would aﬀect the curvature of the Bernoulli utility function u(·).1. 2 . Busch. so of that the functions w 2 and w both represent identical preferences. however. however.5 u(w1) 1 0. It is easy to verify that these two functions lead to a quite diﬀerent relationship between the expected utility and the utility of the expected wealth than the initial one. consider preferences over wealth represented by u(w) = w. and thus would change the riskaversion of the consumer. Thus they cannot represent the same preferences in a setting of uncertainty/risk.) Compare ¯ X ¯ X May2004 U (L) = 0 u(x)f (x)dx to u 0 xf (x)dx .1: Risk Aversion The astute reader may notice that this is Jensen’s inequality. In the standard framework√ chapter 1 positive monotonic transformations are ok.62 LA. an assumption which is quite familiar from introductory courses. Microeconomics expected wealth C.) This is also the reason why only aﬃne transformations were allowed for expected utility functions. which is one way to deﬁne a concave function.2 Note that a concave u(·) has a diminishing marginal utility of wealth.5 0 0 ♦ w1 ♦ 5 E(w) 10 ♦ w2 15 20 wealth Figure 4. in this case u(·) (see Fig. Deﬁnition 2 A riskaverse consumer is one for whom the expected utility of any lottery is lower than the utility of the expected value of that lottery: ¯ X ¯ X u(x)f (x)dx < u 0 0 xf (x)dx .5 ♦ ♦ ♦ ♦ u(w) u(E(w)) 2 ♦ E(u()) ♦ 1. Risk aversion therefore implies (and is implied by) the fact To belabour the point. 4. consumers with diﬀerent risk aversion do not have the same preferences. Clearly. Utility u(w2) 3 2.
. received for certain. and both reveal interesting facts about the consumer’s economic behaviour.2).Uncertainty 63 U 14 12 10 E(u())= 8 u(E(w)) 6 ♦ 4 u(w1) 2 ♦ 0 ♦ 0 2 4 w1 u(w2) U 120 100 80 u(w2) 60 E(u()) 40 ♦ u(E(w)) 20 ♦ u(w1) 0 w 0 u(w) ♦ ♦ ♦ ♦ ♦ ♦ ♦ ♦ 1 w1 2 ♦ ♦ 6 8 10 12 14 E(w) w2 3 4 E(w) w2 5w Figure 4. Again a diagram for the twooutcome case might help (Fig. consumers do not have to be riskaverse. u) for a lottery with probability distribution f (·) under the (Bernoulli) utility function u(·) is deﬁned by the equation u (C(f. Risk neutral and risk loving are deﬁned in the obvious way: The ﬁrst requires that u(x)f (x)dx = u while the second requires u(x)f (x)dx > u xf (x)dx . according to preferences) to a given gamble/lottery?” In other words: Deﬁnition 3 The certainty equivalent C(f.2: Risk Neutral and Risk Loving that additional units of wealth provide additional utility. u)) = u(x)f (x)dx. The ﬁrst is by using the concept of a certainty equivalent. 4. 4. is equivalent (in the consumer’s eyes. There is a nice diagrammatic representation of these available if we consider only two possible outcomes (Fig. Of course.3). but at a decreasing rate. xf (x)dx . It is the answer to the question “how much wealth. There are two other ways in which we might deﬁne risk aversion.
u).5 1 0. indicating that the consumer requires more than fair odds in order to accept a gamble. how much more than fair odds do I have to oﬀer so that the consumer will take the bet? Note that a fair gamble would have an expected value of zero (i.5 + π(·)) u(w + ) + (0.64 LA. . and thus would be rejected by the (risk averse) consumer for sure. Indeed. w) is deﬁned by u(w) = (0. . Deﬁnition 4 The probability premium π(u. This idea leads to the concept of a probability premium.5 0 0 May2004 ♦ u(w) ♦ ♦ ♦ ♦ ♦ ♦ ♦ 10 ♦ w2 15 20 wealth w1 C()5 E(w) Figure 4.. and insurance will be oﬀered. the maximum amount which the consumer would pay is the diﬀerence wf (w)dw − C(f. however. (More on that later.5 − π(·)) u(w − ).5 2 E(u()) u(w1) 1.3: The certainty equivalent to a gamble A risk averse consumer is one for whom the certainty equivalent of any gamble is less than the expected value of that gamble. A well diversiﬁed insurance company will be risk neutral. This observation basically underlies the whole insurance industry: riskaverse consumers are willing to pay in order to avoid risk. 50/50 odds). Microeconomics Utility u(w2) 3 2.e.) Another way to look at risk aversion is to ask the following question: If I were to oﬀer a gamble to the consumer which would lead either to a win of or a loss of . One useful economic interpretation of this fact is that the consumer is willing to pay (give up expected wealth) in order to avoid having to face the gamble. A riskaverse consumer has a positive probability premium. Busch. and therefore is willing to provide insurance (assume the risk) as long as it guarantees the consumer not more than the expected value of the gamble: Thus there is room to trade.
Since risk aversion apparently had to do with the concavity of the (Bernoulli) utility function it would appear logical to attempt to measure its concavity. in contrast. The reason is that the second derivative will depend on the units in which wealth and utility are measured. Its units are u/w 2 . a consumer with preferences that have a positive probability premium will be one for whom the certainty equivalent is less than the expected value of wealth and for whom the expected utility is less than the utility of the expectation. i. then its units must be u/w. will not be such a good idea.1 Comparing degrees of risk aversion One question we can now try to address is to see which consumer is more risk averse. However.. a gain or loss of some small amount with equal probability. and asks what it would take for a gamble to be accepted. since the certainty equivalent basically considers a consumer with a property right to a gamble. u (w) The ArrowPratt measure of relative risk aversion is rR = − u (w)w . measures risk aversion for a gamble over a ﬁxed percentage of wealth. If the ﬁrst derivative measures change in utility for change in wealth.Uncertainty 65 It can be shown that all three concepts are equivalent. This is reassuring. while the second derivative is like a rate of acceleration. that is.3 Arrow and Pratt have proposed two measures which largely avoid this problem: Deﬁnition 5 The ArrowPratt measure of (absolute) risk aversion is u (w) rA = − . 3 . The latter. u (w) Note that the ﬁrst of these in eﬀect measures risk aversion with respect to a ﬁxed amount of gamble (say. simply using the second derivative of u(·). and asks what it would take for him to trade to a certain wealth level. 4. $1).e.1. These points can be demonstrated as follows: Consider a consumer with initial wealth w who is faced with a small fair bet. You can easily verify this by thinking of the units attached to the second derivative. which after all measures curvature. This is indeed what Arrow and Pratt have done. while the probability premium considers a consumer with a property right to a ﬁxed wealth.
Use a Taylor series expansion in order to approximate both sides: 0.) On the other hand. or requires a higher probability premium.5u(w + ) + 0. a consumer is said to be more risk averse than another if (either) ArrowPratt coeﬃcient of risk aversion is larger. rA is a function of w. You may wish to verify that a consumer exhibiting decreasing absolute risk aversion will have a decreasing diﬀerence between initial wealth and the certainty equivalent (a declining maximum price paid for insurance) on the one hand. and a decreasing probability premium on the other. This is equivalent to saying that he has a lower certainty equivalent for any given gamble. Note that a constant absolute risk aversion implies increasing relative risk aversion.5(u(w) − u (w) + 0. it measures the responsiveness of the marginal utility to wealth changes. Busch. We can also compare the risk aversion of a given consumer for diﬀerent wealth levels. u dw u dw/w %∆w Thus the relative coeﬃcient of riskaversion is nothing but the elasticity of marginal utility with respect to wealth. Comparing across consumers. It is commonly assumed that consumers have (absolute) risk aversion which is decreasing with wealth. That is.5 2 u (w)) ∼ u(w) − Iu (w) . u (w) Thus the required payment is proportional to the absolute coeﬃcient of risk aversion (and the dollar amount of the gamble.66 LA.5 2 u (w)) + 0. That is. we can compute these measures for the same u(·) but diﬀerent initial wealth.5 2 u (w)) ∼ −Iu (w) 2 ⇒ I∼ 2 × −u (w) . u w du w du /u %∆u = = ∼ .5(u(w) + u (w) + 0. Finally. however. Microeconomics May2004 How much would the consumer be willing to pay in order to avoid this bet? Denoting this payment by I we need to consider (note that w − I is the certainty equivalent) 0.5u(w − ) = u(w − I). Collecting terms and simplifying gives us 0. After all. note also that the only functional form for u(·) which has constant absolute risk aversion is u(w) = −e(−aw) . . Sometimes the stronger assumption of decreasing relative risk aversion is made.
Two such measures are commonly employed in economics. For example.2 Comparing gambles with respect to risk Another type of comparison of interest is not across consumers or wealth levels.5 1 0. such as comparisons of the various moments of the two lotteries’ distributions.4. as inspection of the diagram veriﬁes. a deﬁnition based directly on consumer preferences is preferable.4)) ♦ E(u(1. Both have an identical expected value of E(w).5 2 E(u(3. One could construct an example with two lotteries that have the same mean and variance. Note that in Fig. and thus that a risk averse consumer prefers to have less variability for a given mean. however. however. With multiple possible outcomes the question is not so simple anymore. The second is a gamble over w3 and w4 . as above. say? . What are the “obvious” preferences of a risk averse consumer about skurtosis. The ﬁrst is a gamble over w1 and w2 . that the consumer may in general be expected to be willing to trade oﬀ a higher expected return for higher variance. when do we want to say that one is riskier than the other? We could try to approach this question with purely statistical measures. ﬁrst and second order stochastic dominance.4: Comparing two gambles with equal expected value 4. Nevertheless a risk averse consumer clearly will prefer the second to the ﬁrst. This clearly indicates that the second lottery has a lower variance. say. but across diﬀerent gambles. Because of this. but which diﬀer in higher moments.5 0 0 ♦ ♦ ♦ ♦ ♦ ♦ u(w) ♦ w1 ♦ ♦ ♦ 10 w4 ♦ w2 15 20 wealth 5 w3 E(w) Figure 4. 4.4 E(w) − w1 > E(w) − w3 and w2 − E(w) > w4 − E(w). consider the two gambles depicted in Fig.2)) ♦ 1.Uncertainty 67 Utility 3 2. Faced with two gambles. 4. This has the major problem. Let us ﬁrst focus on lotteries with the same expected value.
This deﬁnition has economic appeal in its simplicity. ∀x.4 A concept related to second order stochastic dominance is that of a mean preserving spread. In order to apply this deﬁnition easily we need to ﬁnd other tests.. F (·) is said to dominate G(·) according to second order stochastic dominance if for every nondecreasing concave u(x): u(x)dF (x) ≥ u(x)dG(x) In words. Deﬁnition 7 Let F (x) and G(x) be two cumulative distribution functions for a onedimensional random variable (wealth). x tdF (t) = [tF (t)]x − x F (t)dt = x x x 4 x− F (t)dt.e. . Lemma 1 Let F (x) and G(x) be two cumulative distribution functions for a onedimensional random variable (wealth). Busch. and 0 G(t)dt ≥ 0 F (t)dt. but is one of those deﬁnitions that are problematic to work with due to the condition that for all possible concave functions something is true. Let F (·) have the same mean as G(·). Note that the condition of identical means also implies a restriction on the total x x areas below the cumulative distributions. Indeed it can be shown that the two are equivalent. where z is distributed according to some H(·) with zdH(z) = 0. if they have the same mean and there is more area under the cdf G(·) than under the cdf F (·) at any point of the distribution. a distribution second order stochastically dominates another if they have the same mean and if the ﬁrst is preferred by all riskaverse consumers. F (·) dominates G(·) according to second order stochastic dominance if x x tg(t)dt = tf (t)dt. Microeconomics May2004 This has lead to a more general deﬁnition for comparing distributions which have the same mean: Deﬁnition 6 Let F (x) and G(x) be two cumulative distribution functions for a onedimensional random variable (wealth).68 LA. I. After all. G(·) is a mean preserving spread compared to F (·) if x is distributed according to F (·) and G(·) is the distribution of the random variable x + z.
F (·) is said to dominate G(·) according to ﬁrst order stochastic dominance if for every nondecreasing u(x): u(x)dF (x) ≥ u(x)dG(x) This is equivalent to the requirement that F (x) ≤ G(x). risk averse or not. The following concept is frequently employed in economics to deal with such situations. 000). To be concrete. ∀x. Now assume that the individual has access to an insurance plan. The other moments of the distribution get involved too. a ﬁrst order stochastically dominating distribution F can be obtained form a distribution G by shifting up outcomes randomly. it does not allow us to address the economically interesting question of what the trade oﬀ between mean and risk may be. Assume the individual has vNM expected utility.75u(100. While it is nice to be able to rank distributions in this manner. ﬁrst order stochastic dominance implies a higher mean. Insurance works as follows: The individual decides on an amount of coverage. In other words. just because the mean is higher for one distribution than another does not mean that the ﬁrst dominates the second according to ﬁrst order stochastic dominance! 4. but is stronger than just a requirement on the mean. This . Two. would prefer F to G. Furthermore.000 who faces a 25% probability to loose his $20.Uncertainty 69 The above gives us an easy way to construct a second order stochastically dominated distribution: Simply add a zero mean random variable to the given one. Note that this requires that any consumer. the condition of equal means is restrictive.3 A ﬁrst look at Insurance Let us use the above model to investigate a simple model of insurance. assume an individual with current wealth of $100.25u(80. It is often useful to realize two facts: One. C. 000) + . Deﬁnition 8 Let F (x) and G(x) be two cumulative distribution functions for a onedimensional random variable (wealth).000 car through theft. The individual’s expected utility then is U (·) = 0.
) How would our individual choose the amount of coverage? Simple: maximize expected utility. in other words. 000 − πC + C) Since the utility function is strictly concave it can have the same slope only at the same point.75πC + 0.25u (80.) If we use this fair premium in the above ﬁrst order condition we obtain u (100. what does the ﬁrst order condition tell us? Manipulation yields the condition that u (100. if u(·) is strictly concave.75u (100. It requires (−π)2 0.25u(80.25. if the consumer is risk averse.70 LA. 000. Before we further investigate this equation let us verify the second order condition. The expected loss is $5.25u (80. Do you understand the usage of the word ‘Since’ ? I am not “cancelling” the u terms. because those indicate a function. The contract speciﬁes that the amount C will be paid out if the car has been stolen.25(πC − C) = 0 ⇒ π = 0. Even without knowing the precise function we can say something about the insurance behaviour. 000 − πC + C) = 0. Therefore the function must be evaluated at the same level of the independent variable. however. and we conclude that5 (100. An actuarially fair premium simply charges the odds (there is a 1 in 4 chance of a loss. Instead the equation tells us that numerator and denominator must be the same. To do so. Thus maxC {0.75u(100. 000 − πC) (1 − π) = u (80. 000 − πC + C) ⇒ C = 20. Ok. let us compute the actuarially fair premium. 000 − πC) + 0. read that sentence again. 000 − πC) + (1 − π)2 0.000 insured value would lead to zero expected proﬁts for the insurance ﬁrm: 0.000. But for what values of the independent variable wealth does the function u(·) have the same derivative? For none. So. 000 − πC) = (80. after all. Busch. 000 − πC) + (1 − π)0. 000 − πC + C) 3π which gives us a familiar looking equation in that the LHS is a ratio of marginal utilities. 000 − πC + C)}. Microeconomics May2004 coverage carries a premium of π per dollar. 000 − πC) = 1. 000 − πC + C) < 0. (Assume that this is all veriﬁable. The ﬁrst order condition for this problem is (−π)0. Clearly this is only satisﬁed if u(·) is concave. It follows that total consumption under each circumstance is set so as to set the ratio of marginal utility of wealth equal to some fraction which depends on price and the probabilities. so that an insurance premium which collects that amount for the $20. 5 .75u (100. u (80.
000 + 0. however. the right term is the expected utility of an uninsured consumer. we are trying to ﬁnd . We have to be careful.000 independent of if the car is stolen.7287C) 0. You should verify that the lump sum fee does not stop the consumer from fully insuring at a fair premium. this will make the consumer much better oﬀ than if he is actually bearing the gamble with this same expected wealth level. and if the car is actually stolen it will be replaced.) For example.426 which is made in the above case of a fair premium plus fee. since a $5. Consider the alternative (based on these numbers and the logarithmic function) and assume that the total payment of $5. Indeed. For logarithmic utility like this we would want to compute (remember. Of course this is an implication of the previous result — the consumer now faces a premium which is not actuarially fair. if the additional price is understood as a per dollar charge for insured value. were expressed as a premium.895318835 (100. 810. how we set up this problem. If you draw the appropriate diagram you can verify that the consumer does not have to pay any of the amount he would be willing to pay (the diﬀerence between the expected value and the certainty equivalent. we could also compute the premium for which the consumer will cease to purchase any insurance. let us approach the question as follows: What fee would the consumer be willing to pay in order to have access to actuarially fair insurance? Let F denote the fee.75u(100. 000) + 0. 000 − 0. 000).8139 ⇒ C = 9.2713. 000 − F ) and 0. Then we have the consumer choose between u(95.) If we had a particular utility function we could now also compute the maximal amount the consumer would be willing to pay. Then we get that π = 5426/20000 = 0. As we have seen before.50. the consumer will not insure fully. Note that the consumer will not bear any risk in this case: wealth will be $95.000 premium is due in either case.2713C) 0. The ﬁrst order condition for the choice of C then requires that (recall that ∂ln(x)/∂x = 1/x) (80. if u(·) = ln(·) then simple manipulation yields F ∼ 426. The left term is the expected utility of a fully insured consumer who pays the fee. As you can see.25u(80.7287 = = 0. since simply increasing π will reduce the amount of coverage purchased! So instead. It is important to note why we have set up the problem this way.Uncertainty 71 This is one of the key results in the analysis of insurance: at actuarially fair premiums a risk averse consumer will fully insure. (Note that I have skipped a step by assuming full insurance.
after all. note that if one of these commodities were missing . Finally. Notice that you would have to buy both of these commodities if you wanted to consume in both states. there is room to trade between insurance providers and risk averse consumers. In our insurance example above. That is. even if the underlying good which gets delivered in each state is the same. a contingent commodity would be delivered only if a particular state (on which the commodity’s delivery is contingent) occurs. and will do so now. we can actually treat consumption as involving contingent commodities.2941. In the case of our concrete example with just two states. then there could be two commodities. there were only two states of the world which mattered: Either the car was stolen or it was not. the car is stolen but recovered as a write oﬀ.) 4. Of course. it is diﬀerent in character to the ‘maximize utility subject to a budget constraint’ approach we have so much intuition about. etc. Microeconomics when C = 0 is optimal) 1−π 80. Indeed. for example. one which promises consumption in the good state. the car is stolen and recovered with minor damage. Notice also that nothing requires that the consumer purchase them in equal amounts. May2004 As indicated before. we had a ratio of marginal utilities on the one side — but was that the slope of an indiﬀerence curve? As mentioned previously. 000 = 100. for example. 000 3π ⇒ π = 0. if there are two states. and one which promises consumption in the bad state. diﬀerent commodities now.) In accordance with this view of the world we now will have to develop the idea of contingent commodities. In the ﬁrst order condition for the insurance problem.72 LA. in more general settings we could think of many more states (such as the car is stolen and not recovered. there exists a set of mutually exclusive states which are adequate to describe all randomness in the world. They are. there is room for trade between two risk averse consumers if they face diﬀerent risk or if they diﬀer in their attitudes towards risk (degree of risk aversion. as you can verify in one of the questions at the end of the chapter. good and bad. Let us start by assuming that the outcomes of any random event can be categorized as something we will refer to as the states of the world. Busch. So.4 The StatePreference Approach While the above approach lets us focus quite well on the role of probabilities in consumer choice.
consumption (or wealth). 6 . that is up to people’s choices. We will further simplify things by having just one good. after all.6 If the individual assessed a probability of π to the good state occurring. This is easily derived by taking the total derivative along an indiﬀerence curve and rearranging: πu (cg )dcg + (1 − π)u (cb )dcb = 0.) Of course. the locus of points where cg = cb . It is easy to imagine that an individual would evaluate material wealth diﬀerently in these two cases. We may now assume that the consumer has our usual vNM expected utility. u(·). after the fact (ex post in the lingo) only one of these states does occur. but at a decreasing rate. Given that there are two states. as before. called ex ante) there are two diﬀerent commodities available. Since the marginal utility of consumption then is equal in both states (we have state independent utility here. that means that there are two distinct (contingent) commodities. As long as the utility index applied to consumption in each state. but it should be available should someone want to trade. This expression gives us the expected utility of the consumer. cg and cb . cb ) = πu(cg ) + (1 − π)u(cb ). and thus only the set of commodities contingent on that state are actually consumed.Uncertainty 73 you could not assure consumption in both states (which is why economists make such a fuss about “complete markets” — which essentially means that everything which is relevant can be traded. this is a concave function. however. good and bad. of course. dcg (1 − π)u (cb ) Note the fact that the MRS now depends not only on the marginal utility of wealth but also on the (subjective) probabilities the consumer assesses for each state! Even more importantly. πu (cg ) dcb =− . To be concrete let there be just two states. Once we have this setting we can proceed pretty much as before in our analysis. Before the fact (before the uncertainty is resolved. The consumers’ objective is to maximize expected utility. It might be useful at this point to assess the properties of this function. We can also ask what the marginal rate of substitution between the commodities will be. It does not have to be traded. which means that the same u(·) applies in each state). it follows that the slope of an indiﬀerence curve on the certainty Note that this is somewhat more onerous than before now: imagine the states are indexed by good health and poor health. we can consider what is known as the certainty line. is concave. then we would obtain an expected utility of consumption of U (cg . It will be increasing in each commodity. We will now index goods by a subscript b or g to indicate the state in which they are delivered. that is.
which is the rate at which consumption in one state can be transferred into consumption in the other state.4. it is π/(1 − π). and we denote these by pg . which is just the dollar times the probability that it needs to be delivered. then we might expect that a dollar’s worth of consumption in a state will cost its expected value.. which is a payment which only accrues in the case there is no loss. Since we have two commodities. Microeconomics May2004 line only depends on the probability the consumer assesses for each state. In order to make this problem somewhat neater. Taking total derivatives of the budget we obtain that the budget slope is dcb /dcg = pg /pb . P cg + cb = P (100. wb ) may consume anything on the budget line pg cg + pb cg = pg wg + pb wb . they will be determined by general equilibrium conditions. 000) + 80. In this case. pb respectively. The two ﬁrst order conditions for the consumption levels in this problem are πu (cg ) − λP = 0 and (1 − π)u (cb ) − λ = 0. a kind of zero proﬁt condition for state pricing. One dollar of consumption added in the state in which an accident occurs will therefore cost premium/(1−premium) dollars in the no accident state.cg {πu(cg ) + (1 − π)u(cb )} s. we usually have a payment of premium × Amount in order to obtain a net beneﬁt of Amount − premium × Amount. Busch. The consumer will then solve maxcb . 000. of course. Combining this with our condition on “fair” pricing in the previous paragraph. The other ingredient is the budget line. Where do these prices come from? As before.e.) Thus we might expect that pg = π and pb = (1 − π). Thus. Since the normal insurance contract speciﬁes that a premium be paid in either case. we will reformulate the insurance premium into what is known as a net premium.t. we obtain that the budget allows transformation of consumption in one state to the other according to the odds. . and there is general agreement on the likelihood of the states. But if contingent markets are well developed and competitive. let pb = 1 and let pg = P . 4. while a consumer who has an endowment of consumption given by (wg . The consumer who has a total initial wealth of W may therefore consume any combination which lies on the budget line pg cg + pb cb = W . of course.74 LA. the net premium. (I.1 Insurance in a State Model So let us reconsider our consumer who was in need of insurance in this framework. each might be expected to have a price. The budget line also has a slope.
Optimum occurs where there is a tangency. that is. We have also derived P = π/(1 − π) for a fair net premium before. The picture looks perfectly “normal”. The consumer has an endowment which is oﬀ the certainty (45degree) line. that a risk averse consumer faced with a fair premium will choose to fully insure. just as the previous one. and so this says nothing but the familiar “there must be a tangency”. (1 − π)u (cb ) Now. (1 − π)u (cb ) 1−π u (cg ) =1 u (cb ) ⇒ cg = 1.5: An Insurance Problem in StateConsumption space Combining them in the usual way we obtain πu (cg ) = P. choose to equalize consumption levels across the states. The RHS is the slope of the budget. . which must occur on the certainty line since then the slopes are equalized. which is standard for insurance problems. A diagrammatic representation of this can be found in diagram 3. that is. Thus we get that πu (cg ) π = . The fair premium deﬁnes a budget line along which the consumer can reallocate consumption from the good (no loss) state to the bad (loss) state. just as we are used from introductory economics.Uncertainty 75 loss 20 cert. cb which requires that Thus this model shows us. as we have just seen the LHS of this is the slope of an Indiﬀerence curve. 15 10 5 @ endowment 0 0 5 10 15 20 no loss Figure 4.5.
How does this relate to certainty equivalents? Well.) It therefore stands to reason that we might be interested in the rate at which the MRS is falling. let us consider a point on the certainty line and the two indiﬀerence curves for our consumers through that common point (see Fig. Clearly.76 LA. a budget line at π fair odds will have the slope − 1−π . consumer B will need more compensation in order to accept the bad state reduction. loss 20 cert. We can now ask how much consumption we have to add for each consumer in order to keep the consumer indiﬀerent between the certain point and a consumption bundle with some given amount less in the bad state.6: Risk aversion in the State Model Assume further that consumer B’s indiﬀerence curve lies everywhere else above consumer A’s. curvature of indiﬀerence curves. much easier to think along the lines of certainty equivalents: Consider two consumers with diﬀerent risk aversion. Intuitively it might be apparent that a more risk averse consumer will have indiﬀerence curves which are more curved. For simplicity. B A 15 10 5 0 0 5 10 15 20 no loss Figure 4. it is interesting to see how riskaversion manifests itself in this setting. that is. that is. It is.4. Consider three such budget lines which are all parallel and go through the certain consumption point and the two . Looked at it the other way around. exhibit less substitutability (recall that a straight line indiﬀerence curve means that the goods are perfect substitutes.6). this means that consumer B is willing to give up more consumption in the good state in order to increase bad state consumption. 4. however. since the slopes of their ICs are the same. Note that both assess the same probabilities on the certainty line. Microeconomics May2004 4. Busch.2 Risk Aversion Again Given the amount of time spent previously on riskaversion. while a kinked Leontief indiﬀerence curve means perfect complements.
4. each with equal probability. if there is only time to contend with but returns or future prices are known. which is the pricing of assets.) 4. followed by consumer A’s. Two investors now could each invest $4.5. (The expected value of a given gamble on such a budget line is given by the point on the certainty line and that budget. First. The wealth reduction embodied in the lower budget is the equivalent of the certainty equivalent idea before. but (and this is important) which is completely independent of the ﬁrst. even a risk averse consumer will accept some risk in exchange for a higher return.000 or $8. as we will see shortly.000. where dividends are announced each year and their price certainly seems to ﬂuctuate. Clearly (from the picture) consumer B’s budget is furthest out. In the “real world” most assets do not have a future price which is known. Each investor then has again a total .) Let us consider these issues via a simple example. let us deﬁne two terms which often occur in the context of investments. As we have seen before.5 Asset Pricing Any discussion of models of uncertainty would be incomplete without some coverage of the main area in which all of this is used. after all.Uncertainty 77 gambles which are equivalent for the consumer to the certain point. however. Of course. and furthest in is the budget through the certain point. The expected value of this project is therefore $10. Contrary to popular misconceptions it is not necessary that their price movements be negatively correlated (although that certainly helps. How much each pays back in no way depends on the other. then asset pricing reduces to a condition which says that the current price of an asset must relate to the future price through discounting. Assume that there exists a project A which requires an investment of $9.000 and which will either pay back $12. Thus there is a larger reduction in wealth possible for B without reducing his welfare.1 Diversiﬁcation Diversiﬁcation refers to the idea that risk can be reduced by spreading one’s investments across multiple assets. Now assume that a second project exists which is just like this one.500 in each project. or may otherwise have returns which are uncertain — stocks are a good example. compared to A.000. But we know that parallel budgets diﬀer only in the income/wealth they embody. Our discussion so far has focused on the avoidance of risk.
In that case an investor who invests half in each will obtain either $6.000 and $4.000 or $8.5u(12)+0. there is less risk. Thus we can guarantee the consumers their expected wealth for certain.000 if the ﬁrst pays $8.000 or $4. Should the investor have access to investments which have negatively correlated returns (if one is up the other is down) risk may be able to be eliminated completely. This is called reinsurance. so that all individuals have an expected wealth of $34.25u(12) + 0.5u(10) since 2(0. $12.000.2 Risk spreading Risk spreading refers to the activity which lies at the root of insurance.900. there may be a string of bad luck which might threaten the solvency of the plan: but help is on the way! We could buy insurance for the insurance company. Busch. or $8.000 loss for all of them. each with equal probability. The total expected return thus is the same. If the losses are independent of one another then there will be an average of 10 losses per period.000. .25u(8)+0.000. and that it will pay $12.000. but there is no risk at all now. However. 4.000 loss. We can get rid of some of this by charging the $100 in all years and retaining any money which was not collected in order to cover higher expenses in years in which more than 10 losses occur.000 when the ﬁrst pays $12.5u(10)).000: $10.25u(12)+0.5.000 are received one quarter of the time. Thus an investor can receive either $12.000. a situation which a riskaverse consumer would clearly prefer. and half the time it is $10. each will pay an investor either $6. How much do the projects pay back? Well. Assume that there are 1000 individuals with wealth of $35.000 and $6.000 in either case. The expected loss of each individual is $100. The expected return has not increased. BUT. Note that there is a new risk introduced now: in any given year more (or less) than 10 losses may occur. All that is needed is to assume that the second project above will pay $8. Since an insurance company has a well diversiﬁed portfolio of (independent) risks. since we know that for a riskaverse consumer 0.000.000.000.000 and a 1% probability of suﬀering a $10. A mutual insurance would now collect $100 from each. $10. the aggregate risk it faces itself is low and it will thus be able to get fairly cheap insurance. Microeconomics May2004 investment of $9. in eﬀect insuring against the unlikely event that signiﬁcantly more than the average number of losses occurs. for a total $100. and everybody would be reimbursed in full for their loss.000 or $4.25u(8)) < 2(0.78 LA.5u(8) < 0.
Why? The answer lies in the fact that all insured individuals would have either a loss or no loss at the same time. and certain other natural disasters (and manmade ones. a ploy which often works.5. let us assume the simple most case. Wealth then is a random variable and will be either wg = (w − x) + x(1 + rg ) or wb = (w − x) + x(1 + rb ). that is. Notice also that in that case there will be some investment! Of . but only at such high rates that nobody would buy it anyways.000.000.) 4. you can’t get earthquake insurance in Vancouver. in which case you might as well not insure! (A note aside: often the statement that no insurance is available is not literally correct: there may well be insurance available. that of a consumer with a given wealth w who has access to a risky asset which has a return of rg or rb < 0 < rg . What will be the choice of x? max0≤x≤w {πu(w + rg x) + (1 − π)u(w + rb x)} . such as wars) are excluded from coverage. To do so.000. it may be useful to verify that a riskaverse consumer will indeed hold nonnegative amounts of risky assets if they oﬀer positive returns. Similarly. You may recall the lament on the radio about the fact that homeowners in the Red River basin were not able to purchase ﬂood insurance.3 Back to Asset Pricing Before we look at a more general model of asset pricing. The ﬁrst and second order conditions are rg πu (w + rg x) + rb (1 − π)u 2 2 rg πu (w + rg x) + rb (1 − π)u (w + rb x) = 0 (w + rb x) < 0 The second order condition is satisﬁed trivially if the consumer is risk averse. To show under what circumstances it is not optimal to have a zero investment consider the FOC at x = 0: rg πu (w) + rb (1 − π)u (w) ? 0. often hoping that the government will bail them out after the fact. Suppose the good outcome occurs with probability π. if expected returns are positive. But the latter requires each participant to pay $10.Uncertainty 79 These kind of considerations are also able to show why there may not be any insurance oﬀered for certain losses. Let x denote the amount invested in the risky asset. That would mean that our mutual insurance above would either require no money (no losses) or $10. Even at low rates many people do not carry insurance. The LHS is only positive if πrg + (1 − π)rb > 0.
From this it follows that in equilibrium the expected return of asset i must satisfy COV(u (w). Busch. each of which is a random variable with some distribution. Y . We can transform this expression as follows: n n n w = w (1 − ˜ xi )R0 + i=1 i=1 ˜ xi Ri = w R0 + i=1 ˜ xi ( Ri − R 0 ) . Investing is a gamble. Ri ) ˜ ˜ ˜ . Now we will do some manipulation of this to make it look more presentable and informative. is to maximize expected utility from this wealth by choice of the investment fractions. Let the return for the riskfree asset be denoted by R0 and the ˜ returns for the risky assets be denoted by Ri . which the consumer dislikes. Diﬀerentiation yields the ﬁrst order conditions Eu (w)(Ri − R0 ) = 0. The consumer’s goal. In the second period (we will ignore time discounting for simplicity and clarity) wealth will be a random variable the distribution of which depends on how ˜ much is invested in each asset.80 LA. w = w0 n xi Ri . It follows that Eu (w)Ri = ˜ ˜ ˜ COV(u . . E Ri = R 0 − Eu (w) ˜ . Microeconomics May2004 course this is driven by the fact that not investing guarantees a zero rate of return. Using this fact and distributing the subtraction in ˜ ˜ the FOC across the equal sign. Assume that there is a riskfree asset (one which yields a certain return) and many risky ones. Y ) = EXY −EXEY. but also increases returns. we can let xi denote the fraction of wealth allocated to asset i = 0. You may recall that the covariance of two random variables. Initial wealth of the consumer is w. X. with the ˜ i=0 n budget constraint that i=0 xi = 1. is deﬁned as COV(X. Finally. In particular. of course. ˜ ˜ ∀i. n max{x}i {Eu (w)} = max{x}i ˜ Eu w R0 + i=1 ˜ xi ( Ri − R 0 ) . Even a riskaverse consumer will take some risk for that higher return! Now let us consider a more general model with many assets. n. Ri ). we obtain for each risky asset i the following equation: ˜ ˜ ˜ ˜ ˜ Eu (w)R0 = Eu (w)ERi + COV(u (w). . Ri )+Eu (w)ERi . That is. . .
) The risk free asset has a return of rf . if an asset will return much if the consumer is already rich — then it is negatively correlated with the marginal utility of wealth. Risk aversion is now expressed through the fact that we assume that ∂u(·) = u1 (·) > 0 while ∂µw ∂u(·) = u2 (·) < 0. . i.Uncertainty 81 This equation has a nice interpretation. in a sense. .e. πi (wi − µw )2 . although it is standard practice to actually use the standard deviation as I just have done. the mean is a good. 4. provide insurance. the risky . Thus the expected return of such an asset must exceed the risk free return if it is to be held. .) Recall that for a set of outcomes (w1 . since that is decreasing in wealth. . π2 .5. The ﬁrst term is clearly the riskfree rate of return. We can then specify a utility function directly on those two characteristics of the distribution. If distributions diﬀer in higher moments. . wn ) with probabilities (π1 . since.4 MeanVariance Utility The above pricing model required us to know the covariance and expectation of marginal utility. it is the fact that marginal utility diﬀers across outcomes which in some sense causes riskaversion. and the variance is 2 σw = i=1 We now deﬁne utility directly on these: u(µw . Note that if a return is positively correlated with wealth — that is. this formulation would not be able to pick that up. . (The normal distribution. Of course. assets which pay oﬀ when wealth otherwise would be low can have a lower return than the riskfree rate since they. πn ) n The mean is µw = i=1 n π i wi . however. A nice simpliﬁcation of the model is possible if we specify at the outset that our consumer likes the mean but dislikes the variance of random returns. . as we have seen before. the variance is a bad. σw ). for example is completely described by these two moments. w2 . ∂σw We will now focus on two portfolios only (the validity of this approach will be shown in a while. The second part therefore must be the riskpremium which the asset must garner in order to be held by the consumer in a utility maximizing portfolio.. .
xσm )} FOC u1 (·)[rm − rf ] + u2 (·)σm = 0 2 SOC u11 (·)[rm − rf ]2 + u22 (·)σm + 2σm [rm − rf ]u12 (·) ≤ 0 Assuming that the second order condition holds. The basic idea is simple. . In reality there are many. The investor/consumer will maximize utility by choice of x: maxx {u(xrm + (1 − x)rf . which is often easier to compute. We may also note that we can rewrite the FOC as rm − r f −u2 (·) = . the lowest standard deviation for a given mean. Of course. The expected return for a fraction x invested will be rx = πs (xms + (1 − x)rf ) = (1 − x)rf + x πs ms . the slope of an indiﬀerence curve. Microeconomics May2004 asset (the “market portfolio”) has a return of ms with probability πs . σm ). Let rm = πs ms and σm = πs (ms − rm )2 . Busch. or. (rf .82 LA. As promised. we derive here the justiﬁcation for considering only the socalled market portfolio. we note that we will require [rm − rf ] > 0 since u2 (·) is negative by assumption. Thus. this is just a requirement of Pareto Optimality. which implies that the tradeoﬀ of mean for standard deviation is rise over run: (rm − rf )/σm . everybody who does hold any of the market will have the same MRS — a result analogous to the fact that in our usual general equilibria everybody will have the same MRS. 0) and (rm . Since we are operating in a two dimensional space of mean versus standard deviation. that is. The RHS can be seen to be the slope of the budget line since the budget is a mix of two points. In a general equilibrium everybody has access to the same market and the same risk free asset. The variance of this portfolio is 2 σx = πs (xms + (1 − x)rf − rx )2 = 2 πs (xms − xrm )2 = x2 σm . Assume that a fraction x of wealth is to be invested in the risky asset (the market). Assume a set of risky assets. one can certainly ask what combination of assets (also known as a portfolio) will yield the highest mean for a given standard deviation. u1 (·) σm The LHS of this expression is the MRS between the mean and the standard deviation. In this discussion we had but one risky asset.
7: Eﬃcient Portfolios and the Market Portfolio Let x denote the vector of shares in each of the assets so that I x = 1. the consumer can combine any one of these assets with the risk free asset in order to arrive at the ﬁnal portfolio. and of the riskfree asset on the other. First of all it is clear from the indiﬀerence curves drawn that the consumer gains from the availability of risky assets on the one hand. Deﬁne the mean and standard deviation of the portfolio x as µ(x) and σ(x). Imagine sliding the risk free return . where µx . Second. by derivation. Then it is possible to solve maxx {µ(x) + λ(s − σ(x))} or minx {σ(x) + λ(m − µ(x)} . It turns out (for reasons which I do not want to get into here: take a ﬁnance course or do the math) that this will lead to a frontier which is concave to the standard variation axis. Since this is just a linear combination.Uncertainty 83 mean 20 budget eﬃcient risk 15 market 10 5 risk free 0 0 5 10 15 20 standard dev. But which one? Well. any proportion of wealth which is held in risky assets will be held according to one of these portfolios. The point of tangency deﬁnes the market portfolio we used above! Note a couple more things from this diagram. A simple diagram suﬃces to convince us that the highest budget will be that which is just tangent to the eﬃcient frontier. the resulting “budget line” will be a straight line and have a positive slope of (µx − µf )/σx . Now. For each value of the constraint there will be a portfolio (an allocation of wealth across the diﬀerent risky assets) which achieves the optimum. σx are drawn from the eﬃcient frontier. Figure 4. the market portfolio with which the risk free asset will be mixed will depend on the return from the risk free asset.
Consideration of this problem in more detail (assume a portfolio with a small amount of this asset held and the rest in the market. Question 2: Compute the certainty equivalent for an expected utility maxi√ mizing consumer with (Bernoulli) utility function u(w) = w facing a gamble over $3600 with probability α and $6400 with probability 1 − α. The budget line would have to become ﬂatter and this means a tangency to the eﬃciency locus further to the right.M covariance between asset x and the market. so that any unsystematic risk will not attract any excess returns. σM Here x denotes asset x. since if it did not it would not be held in the market portfolio. rearrange) will yield the famous CAPM equation involving the asset’s ‘beta’ a number which is published in the ﬁnancial press: σX.5 CAPM The above shows that all investors face the same price of risk (in terms of the variance increase for a given increase in the mean. 20. however. Finally. that is.) 4. the precise mix between market and risk free will depend on preferences.M µx = µf + (µm − µf ) 2 .6 Review Problems Question 1: Demonstrate that all riskaverse consumers would prefer an investment yielding wealth levels 24. Any risk unrelated to the market risk can be diversiﬁed away in this setting. . however. however.5. there might be people who would want to have more than their entire wealth in the market portfolio. 16 with equal probability to one with wealth levels 24 or 16 with equal probability. Indeed. compute the inﬂuence of the asset on the portfolio return and variance. An asset must. This requires a “leveraged” portfolio in which the consumer is allowed to hold negative amounts of certain assets (shortselling. the tangency to the budget would occur to the right of the market portfolio. The ratio 2 is referred to as σM the asset’s beta.M is the σX. m the market. earn higher returns to the extent that it contributes to the market risk. and σX. 4.84 LA. Busch. f the risk free asset.) It does not tell us anything about the riskreturn tradeoﬀ for any given asset. Microeconomics May2004 up in the diagram.
One is that genetechnology is approved and ﬂourishes. b) What proportion of the $10.. and µi is an individual parameter for each consumer. In that case the return of asset B will be 40% but asset G has a return of (40%). they are assumed to be expected utility maximizers. In order to simplify these preferences further. and genetechnology is severely restricted. both of which occur with equal probability.)) Let us further assume that consumers only diﬀer with respect to their initial wealth level and their consumption utility from gambling. What are the equilibrium allocation and price? (Provide a well labelled diagram and supporting arguments for any assertions you make.Uncertainty 85 Question 3: Determine at what wealth levels a consumer with (Bernoulli) utility function u(w) = lnw has the same absolute risk aversion as a consumer √ with (Bernoulli) utility function u(w) = 2 w. while they dislike the risk on wealth which gambling implies. g ∈ {0.e.) Question 5: Anna has $10. while asset B is stock in a bibleprinting business. assume that wealth and gambling are separable. How do their relative risk aversions compare at that level. 10).000 to invest and wants to invest it all in one or both of the following assets: Asset G is genetechnology stock. R. but they enjoy gambling for its consumption attributes (i. There are only two states of nature to worry about. The other is that religious fundamentalism takes hold.000 will she invest in the genetechnology stock? (I. 1} by some u(w. a) Assume for now that the gambling is engaged in via a lottery. Assume further that the endowment of consumer A is (10. they get some utility out of partaking in the excitement (say. S. but that all consumers have the same preferences over wealth. w. in . Finally.) Question 6∗ : Assume that consumers can be described by the following preferences: they are risk averse over wealth levels. and gambling. — with the probability of state R occurring known to be π. 5) — denoting 10 units of the consumption good in the case of state R and 5 units in the case of S — and that the endowment of consumer B is (5. Assume that there exist two consumers who are both riskaverse. they get utility out of daydreaming about what they could do if they won. in which case the return of asset B is 0% while the return of asset G is 80%. g) = v(w) + µi g. vNM expected utility maximizers.e. solve the above maximization problem. a) State Anna’s choice problem mathematically. where v(w) is strictly concave. and shine. We can represent these preferences over wealth. What does that mean? Question 4: Consider an environment with two states — call them rain. Anna is a riskaverse expectedutility maximizer with preferences over wealth represented by u(w).
His √ preferences can be represented by u(w. If the worker is not tested. and the ticket is either a “Try Again” (no prize) or “Winner” (Prize won.) i) First verify that in such an environment the government can make positive proﬁts. Also assume that the consumption enjoyment of gambling is decreasing in wealth. (Since leisure time does not enter in his preferences he will work full time (supply work inelastically) and we can normalize the wage rate to be per period income. The worker generates income by accepting a contract which speciﬁes an eﬀort level e and the associated wage rate w(e).e..) Who would gamble then? Question 7∗ : Assume that a worker only cares about the income he generates from working and the eﬀort level he expends at work. then it is assumed that he did indeed work at the speciﬁed eﬀort level and will receive the contracted wage. a) What relationship has to hold between w(e). It can conduct random tests of the worker’s eﬀort level with some probability α. consumers with high initial wealth have low µi . Verify that the utility functions √ v(w) = ln w and v(w) = w satisfy this assumption.e. verify that some consumers will buy a ticket even if the ticket price exceeds the expected value of the ticket. each consumer can only buy one ticket. Microeconomics May2004 which consumers pay a ﬁxed price p for one lottery ticket. and thus the worker has an incentive to expend as little eﬀort as possible.) iii) Assume now that preferences are characterized by decreasing absolute and constant relative risk aversion.86 LA. The ﬁrm. Busch. since it aﬀects the marginal product it gets from employing the worker.. how does the participation of consumers then depend on their initial wealth level if their preferences exhibit {constant decreasing} {absolute relative} risk aversion? (The above notation means: consider all (sensible) permutations. (They know what pain it is to be rich and don’t daydream as much about it.e. not dependent on either the contracted nor the observed eﬀort level)? b) If we can make p depend on the actual deviation in eﬀort which we observe. These tests reveal the true eﬀort level employed by the worker.) Also assume for simplicity that they can either gamble once or not at all (i. what relationship has to be satisﬁed then? c) Is there any economic insight hidden in this? Think about how these problems would change if the probability of detection somehow depended on . (I. e) = w − e2 . α and p in order for a worker to provide the correct eﬀort level if p is a constant (i.. cares about the eﬀort expended. Also assume that the worker is risk averse over income generated and dislikes eﬀort. that is. If he is tested and found to have shirked (not supplied the correct eﬀort level) then a penalty p is assessed and deducted from the wage of the worker.) The eﬀort level of the worker is not observed directly by the ﬁrm. ii) If consumers have identical µi . e. however.
you can imagine that this is actually measuring the increase in wealth above current levels. and that her utility thus is lowered. if the type of accident is veriﬁable then a contract can be written contingent on the type of loss. since she faces the possibility of an accident. b) Now consider the case if the amount of loss is private information. and σ its variance. in which case she looses B. 64 64 Derive their optimal portfolio choice and contrast their decisions. Take the boundary of the eﬃcient √ risky asset portfolios to be given by µ = σ − 16. or minor. In this case only the fact that there was an accident can be veriﬁed (and hence contracted on). The problem thus is equivalent to a problem where there are two types of accident. Question 10: Fact 1: The asset pricing formula derived in class states that ˜ ERi = R0 − Cov(U (w). Assume further that there exists a riskfree asset which has mean zero and standard deviation zero (if this bothers you. each of which occurs with a diﬀerent probability and can be insured for separately at fair premiums. Accidents occur with probability π. More explicitly. the more likely I might be caught. Assume zero proﬁts for the insurer.) Let there be two consumers who have meanσ2 σ2 variance utility given by u(µ. it is either really bad.. Ri ) ˜ ˜ . (Note that the informational distortion therefore leads to a welfare loss.e. If an accident occurs.) Question 8: A consumer is risk averse. and show that the consumer now overinsures for the minor loss but underinsures for the bad loss. and let µ denote the expected level of wealth. Bad accidents represent only 1/5 of all accidents. in which case her loss is M < B. a) Derive her optimal insurance purchases if the magnitude of the loss is publicly observable and veriﬁable and insurance companies make zero proﬁts.Uncertainty 87 the eﬀort level (i. the more I deviate from the correct eﬀort level. σ) = 3µ− respectively. Use Facts 1 and 2 above to explain why disability insurance can have . but the insurer cannot verify if the loss was minor or major. and hence pays only one ﬁxed amount for any kind of accident. σ) = µ− and u(µ. She is faced with an uncertain consumption in the future. as before.) Question 9: Assume a meanvariance utility model. Without the accident her consumption would be W > B. all other accidents are minor. EU (w) ˜ Fact 2: A disability insurance contract can be viewed as an asset which pays some amount of money in the case the insured is unable to generate income from work.
which indicates a tangency between an indiﬀerence curve and a budget line. In this job there is a risk of accidents. (that is.00 and have a 50% accident probability. 4) A riskaverse consumer will never pay more than the expected value of a gamble for the right to participate in the gamble. This is essentially an insurance scheme where each job pays a fair premium for its accident risk.100. and is currently trading for $90. a) What is the probability of accidents in the bus driver occupation? b) What is the loss suﬀered by an oil rig worker if an accident occurs there? c) Suppose now that the government institutes a workers’ compensation scheme.500.) is in equilibrium.00 a year. This job has no risk of accidents associated with it. Finally a worker could work on an oil rig. There are three kinds of jobs in the economy. 5) The market rate of return is 15%. 6) Under RiskVariance utility functions. all consumers who actually hold both the riskfree asset and the risky asset will have the same Marginal Rate of Substitution between the mean and the variance. One is a government desk job paying $40. that is. Suppose that workers can buy this insurance . A risklover would. Question 11: TRUE/FALSE/UNCERTAIN: Provide justiﬁcation for your answers via proof. but may not have the same investment allocation.700.000.88 LA. This market (and the current stock price of Gargleblaster Inc. why the price of the contract may exceed the expected value of the payment) if viewed as an asset in this way. 2) For a consumer who is initially a borrower. Question 12∗ : Suppose workers have identical preferences√ over wealth only.00 and if there is an accident the monetary loss is $11. Microeconomics May2004 a negative return. counterexample or argument. The second is a busdriver. which can be represented by the utility function u(w) = 2 w. The stock of Gargleblaster Inc. 3) The utility functions (3000 ∗ lnx1 + 6000 ∗ lnx2 ) 1/3 2/3 + 2462 and Exp(x1 x2 ) 12 represent the same consumer preferences. These are all market wages. the utility level will deﬁnitely fall if interest rates increase substantially. The wage is $44. Workers are also known to be expected utility maximizers. 1) The optimal amount of insurance a riskloving consumer purchases is characterized by the First Order Condition of his utility maximization problem. on the other hand. all these jobs are actually performed in equilibrium.00. These jobs pay $122. Busch. is known to increase to $117 next period.
What will the new equilibrium wages for busdrivers and oil rig workers be? Who gains from the workers’ compensation? d) Now suppose instead that the government decides to charge only one premium for everybody. So could they have the same slope along any ray from the origin? In that case they would be homothetic. (This question arises since indiﬀerence curves do have the same slope along the certainty line. who is worse oﬀ in this case (at the old wages)? Can we say what the new equilibrium wages would have to be? Question 13∗ : Prove that stateindependent expected utility is homothetic if the consumer exhibits constant relative risk aversion. Suppose that they can buy as much or little insurance as they wish.) .Uncertainty 89 in arbitrary amounts at these premiums. How much insurance do the two groups buy? Who is better oﬀ. Suppose that of the workers in risky jobs 40% are oil rig workers and 60% are bus drivers.
Microeconomics May2004 .90 LA. Busch.
This quite clearly will lead to problems. this information is symmetric. suppose buyers and sellers are risk This kind of example is due to Akerlof (1970) Quarterly Journal of Economics. the famous “Lemon’s Problem”:1 Suppose a seller knows the quality of her used car. since Pareto eﬃciency necessitates that all available information is properly incorporated. 1 91 . as well as the precise features of each good (all characteristics relevant to the consumer. Hence in chapter 3 the explicit assumption that the insurance provider has the same knowledge of the probabilities as the consumer. Will the market mechanism work in this case? Suppose that it is known by everybody that half of all cars are good. Buyers value good used cars at $6000 and bad used cars at $2000. which is either high or low. Not only that. a paper which has changed economics. In chapter 1 this meant that the consumer knows the price of all goods. What if these assumptions fail? What if there is no complete and symmetric information? Fundamentally. The seller attaches values of $5000 or $1000 to the two types of car. so that all parties to a transaction have the same information. one of the key problems is asymmetric information — when one party to a transaction knows something relevant to the transaction which the other party does not know. respectively. and half are bad. Consider. Note that it is Pareto eﬃcient in either case for the car to be sold. for example.) In chapter 3 this in particular implied that the consumer has information about the probabilities of states or outcomes. as well as the model of uncertainty developed in chapter 3. To keep it simple. Buyers do not know the quality of a used car and have no way to determine it before purchase.Chapter 5 Information In the standard model of consumer choice discussed in chapter 1. it was assumed that the decision maker knows all relevant information.
it is not necessary that there be asymmetric information. etc. be it price or some attribute of the good. Microeconomics May2004 neutral. In a Pareto eﬃcient general equilibrium each consumer must frequent (subject to capacity constraints) the most preferred restaurant. and hence will not ﬁnd the best match! What seems to be important then. Anytime 2 This is a question similar to one very important in labour economics: are workers matched to the correct jobs. romantic. the consumer will not try a new restaurant. and (less frequently) faith goods. Now suppose that this were the market price for used cars. Japanese. Two. If the expected beneﬁt of ﬁnding a better place than the one currently known does not exceed the expected cost of having a bad experience. It follows that this price cannot be an equilibrium price. Egyptian. Consumers have tastes over the characteristics of the meals (how spicy. experience goods. each oﬀering slightly diﬀerent combinations of food and ambiance. In this literature you will ﬁnd interesting papers on stable marriages — is everybody married to their most preferred partner. French. what kind of meat. It is fairly easy to verify that the only equilibrium would have bad cars trade at a price somewhere between $1000 and $2000 while good cars are not traded at all. that is. Suppose that there exist many restaurants. Of course. At this price only bad cars are actually oﬀered for sale. authentic.) as well as the kind of restaurant (formal. which can be fully determined before purchase of the good. can all relevant information be acquired before the transactions is completed?. it is common to distinguish between three classes of goods. are the characteristics of workers properly matched to the required characteristics of the job? These kind of problems are analysed in the large matching literature. since their valuation of their own car exceeds the price.92 LA. This is not Pareto eﬃcient. Buyers would then be willing to pay at most $4000 for a gamble with even odds in which they either receive a good or a bad used car. eastern European. Every visit to a new place carries with it the risk of a really unpleasant experience. are two aspects of information: One. etc. Busch. or could a small coalition swap partners and increase their welfare? . based on the kind of informational problems they present: search goods. A search good is one for which the consumer is lacking some information. is the information symmetric or asymmetric? Aside from a classiﬁcation of problems into asymmetric or symmetric information. since good car buyers rather keep theirs. Italian. or if that is full the next preferred one.) as well as the general quality of the cooking within each category. if any meat at all. Can we expect this to be the equilibrium?2 To see why the general answer may be “No” consider a risk averse consumer in an environment where a dining experience is necessary in order to ﬁnd out all relevant characteristics of a restaurant.
and many other services. Such goods are called experience goods. The Inquisition. and is therefore possibly eﬃciency enhancing. which for such goods focuses on supplying the missing information. Similar examples can be constructed for hair cuts. 5 Interesting issues arise when the provision of health care services is combined with the 3 . and since there are large externalities in a society with a social safety net — and thus there is extensive regulation for health care services in most societies. Similarly for issues of quality. Such goods are termed faith goods and present tremendous problems to markets. Advertising will be designed to make the consumer try the product — free samples could be used. in contrast. The fact that nicotine is addictive to some is only helpful in this regard. probably had little to do with a concern for the welfare of heretics. What if the consumer never ﬁnds out if the product performs its function? This is the natural situation for many medical and religious services. Inasmuch as this refers to how long a durable good lasts. we can then model the optimal search behaviour of consumers by considering the (marginal) beneﬁts and (marginal) costs of search. Applying such thinking to labour markets we can study the eﬃciency eﬀects of unemployment insurance.4 Health. The consumer will not discover the full implications until it is (presumably) too late. for example. Nobody can explain to you how something tastes. or we can apply it to advertising. is judged important — since consumers value it. For some goods such a determination of all relevant characteristics may not be possible.Information 93 the consumer can discover all relevant aspects before purchasing we speak of a search good. Supposing that search is costly. Note also that I am speaking especially with respect to religions in which the “afterlife” plays a large role. for example. allowing consumers to experience the good without cost.3 Why would the consumer not necessarily ﬁnd the best product? If there is a cost to unsatisfactory consumption experiences this will naturally arise. but maybe not necessarily ﬁnding the best match for them. which seems reasonable. for one. as any local drug pusher knows: they also hand out free sample packs in schools.5 Since education also has certain attributes of a It used to be legal for cigarette manufacturers to distribute “sample packs” for free. but the purchase and consumption of the good (or service!) will fully inform the consumer. and so the market failure for religious services is not addressed. As in the restaurant example above. you will have to consume the good to ﬁnd out. In our current society the spiritual health of consumers is not judged to be important. 4 A convincing argument can be made that societies which prescribe one state religion do so not in an attempt to maximize consumer welfare but tax revenue and control. This situation naturally leads to consumers trying a good. this can only be determined by consuming the good and seeing when (and if) it breaks. The consumer needs to purchase the good. We will encounter them again in the chapter on game theory.
8 Note the weight loss and aphrodisiac markets. or cosmetics. Furthermore. However. it may be argued that in religion both parties are equally uninformed. in contrast. just pointing to the fact that while a medical doctor may actually know if a prescribed treatment works (while the patient does not). or to repeat the education — we also see strong regulation of education in most societies.) These measurements measure if messages have been sent. These days everybody speaks of the “information economy”. the fact that I “consume” the information somehow (let’s say by acting on it) does not stop you from consuming it or me from using it again later.) Aside from the problem of deﬁning and measuring information. Microeconomics May2004 faith good — in the sense that it is very costly to unlearn. Much care is therefore taken to deﬁne the informational context of any given decision problem (we will encounter this again in the game theory part. Clearly information is supposed to have some kind of value and therefore should have a price. it is far from easy to incorporate information into our models. where we will use the term information set to denote all situations in which the same information has been gathered. and how many. 6 This is regulation for economically justiﬁable reasons. There are therefore provision of spiritual services. measurements which have been derived in communications theory — but they often measure the amount of meaningful signals versus some noise (as in the transmission of messages.94 LA. there are additional problems with information. There are. Busch. loosely speaking.7 Presumably the fact that information is asymmetric makes it easier to exploit the uninformed party on the one hand. Little difference between these and the “snake oil cures” of the past seems to exist. while that is clear conceptually. . Because education is so costly to undo or repeat it is also frequently meddled with for “societal engineering” reasons. it is also a special good since it is not rivalrous (a concept you may have encountered in ECON 301): the fact that I possess some information in no way impedes your ability to have the same information.8 The government attempts to combat this informational asymmetry by certifying the supply. 7 I am not trying to belittle beliefs here. it is hard to measure. Regulation can take the simple form that only “veriﬁable” claims may be made. and to regulate on the other. Information is not a good like most others. neither the religious oﬃcial or the follower of a faith know if the actions prescribed by the faith will “work”. for example.6 Note that religion and health care probably diﬀer in another dimension: in health care there is information asymmetry between provider and consumer. For one. Aside from all the above. Economists. are concerned with what kind of message actually contains relevant information and what kind may be vacuous. of course.
a ﬁrm may only really proﬁt from it if it is the sole user. since he has to visit all these stores. 5. The basic idea is a simple one (as all good ideas are. The reason for this is that the best way to model this kind of problem is as the consumer purchasing messages (normally: prices) which are drawn from some distribution.) A consumer lacks information — say about the price at which a good may be bought. 9 The long and short of this is that standard demandsupply models often don’t work. The incentives for its creation will depend on what property rights are enforceable later on. but can go to diﬀerent stores and ﬁnd out what their price is. For example: let us say that the consumer knows that prices are distributed according to some cumulative z distribution function F (z) = 0 f (p)dp. If the consumer where to obtain n price samples (draws) from this distribution This is the problem in the Patent protection ﬁght: Once somebody has invented (created the information for) a new drug the knowledge should be freely available — but this ignores the general equilibrium question of where the information came from. A complete study of this topic is outside the scope of these notes. An optimizing consumer may be expected to continue to ﬁnd new (hopefully) lower prices as long as the marginal beneﬁt of doing so exceeds the marginal cost of doing so. where f (p) is the (known) density. For while two ﬁrms both may use the information.” which requires him to have some idea about what the market price is. Therefore we “just” need to deﬁne beneﬁts and costs and can then apply our standard answer that the optimum is achieved if the marginal cost equals the marginal beneﬁt! The problem with this is the fact that we will have to get into sampling distributions and other such details (most of which will not concern us here) to do this right. In what follows we will only outline some speciﬁc issues in further detail. Clearly it is in the consumer’s interest not to be “ﬂeeced. 9 . however. which makes it even more important to have a good working model.1 Search One of the key breakthroughs in the economics of information was a simple model of information acquisition. and that markets will in general misallocate if information is involved. The more stores the consumer visits the better his idea about what the correct price might be — the better his information — but of course the higher his cost. In general the consumer will not know what the lowest price for the product is.Information 95 large public good aspects to information which require special consideration.
The key insight into this kind of problem is that it is often optimal to employ a stopping rule. What about cost? Even with constant (marginal) cost of sampling we would have a well deﬁned problem and it is easy to see that individuals with higher search costs will have lower sample sizes and thus pay higher expected prices. This is the equivalent of visiting n stores for a quote irrespective of what the received quotes are.) From this it follows that the expected value of the lowest price after having taken a sample of n prices is ∞ pn = p[1 − F (p)]n−1 f (p)dp. to continue sampling until a price has been obtained which is below some preset limit. at which point search is abandoned and the transaction occurs (a sort of “good enough” attitude.96 LA. In the above approach n messages where bought. 10 . in general!) Computing the variance you would observe that dispersion of the lowest price is decreasing in n — that means that the lower the search costs the ‘better’ the market can be expected to work. Indeed. with n predetermined. Microeconomics May2004 then the probability that any given sample (say p0 ) is the lowest will be [1 − F (p0 )]n−1 f (p0 ). The dynamic problem is the much more interesting one and has been quite well studied. Busch. competition policy is concerned with this fact in some places and attempts to generate rules which require that prices be posted (lowering the cost of search). Any discussion of search would be lacking if we did not point out that search is normally sequential. We will attempt to distill the most important point and demonstrate it by way of a fairly simple example. but at a decreasing rate: the diﬀerence between sampling n times and n − 1 times is ∞ pn low − pn−1 low =− 0 pF (p)[1 − F (p)]n−2 f (p)dp < 0. low 0 Note that this expected lowest sample decreases as n increases. (This formula should make sense to you from Econometrics or Statistics: we are dealing with n independent random variables. So additional sampling leads to a beneﬁt (lower expected price) but with a diminishing margin.) The price at which one decides to stop is the reservation It does depend on the distributional assumptions we make on the samples — independence makes what follows true.10 That is. and hence consumers do not buy at the same price (which is ineﬃcient. Also note that the lowest price paid is still a random variable.
V . First determine the value of search. 0 Next.) All of these things ought to be shown for a formal model. Also. Suppose the cost of another sample (search eﬀort) is c. the expected gain above the wage currently on the table.e. then the consumer is just indiﬀerent between buying another sample or not. and the fact that a lower wage might arise which would indicate that another search is needed. the former is a bit easier and we will consider it ﬁrst. Thus. the expected gain from an additional sample is therefore the savings pR − p. Note that I assume stationarity here and the fact that one pR will do (i. The latter is the typical setup in the matching literature in labour economics. or if the trade will have fallen through if one walks away. and thus the reservation price is found: pR pR satisﬁes (pr − p)f (p)dp = c. the objective is to ﬁnd the highest price. and letting p stand for wages ∞ pR V = −c + pf (p)dp + V pR 0 f (p)dp =⇒ V = −c + ∞ pR pf (p)dp 1 − F (pR ) . which consists of the actual cost of search and the fact . consider the labour market. “expected over” all prices p < pR . the fact that the reservation wage is independent of how long the search has been going on. The LHS of this expression is the expected increase in the wage. This is the slightly more complex case where the consumer cannot return to a past oﬀer. The outcome of the sample is a new price p. pf (p)dp − pR = c − pR F (pR ). We now will ask what choice of pR will maximize the value above (which is the expected utility from searching. If these expected savings are equal to the cost of the additional sample. which will only be of beneﬁt if it is lower than the currently known minimum price. which is composed of the cost.) Taking a derivative we get −pR f (pR )(1 − F (pR )) + f (pR )(−c + (1 − F (pR ))2 Simplifying and rearranging we obtain ∞ pR ∞ pR pf (p)dp) = 0. assuming a linear utility function for simplicity (no risk aversion). the RHS is the cost of search.. where unemployed workers are searching for jobs. Evaluated at the optimal reservation price pR .Information 97 price — the highest price one is willing to pay! In order to derive this price we will have to specify if one can ‘go back’ to previous prices.
and lower price dispersion in the market. for example if the government decides to subsidize nonworking (searching) workers? This would lower c. This is where the distinction between search goods and experience goods comes in. dc 1 − F (pR ) Unemployment insurance will increase the reservation wage (and thus unemployment — note the pun: it ensures unemployment!) The reason is that our workers can be more choosy and search for the “right” job. This means that two parties to a transaction do not have the same information. What we are interested in most are the problems which arise because information is asymmetric.2 Adverse Selection Most problems which will concern us in this course are actually of a diﬀerent nature than the search for information above. In closing let us note that certain forms of advertising will lower search costs (since consumers now can determine cheaply who has what for sale at which price) and thus are eﬃciency enhancing (less search. Now. 5. Of course. More advanced models might take into account eligibility rules. and the LHS would have to fall to compensate. less resources spent on search.D. then this leads to better matches (fewer Ph. Note that this is not necessarily bad. Busch. and will have to be looked at in a diﬀerent framework. If the quality of the match (suitability of worker and ﬁrm with each other) is reﬂected in a higher wage. Microeconomics May2004 that the current wage is foregone if a lower value is obtained.) Similar models can also be used to analyse other matching markets. In those models unemployment insurance can be shown to cause some workers to take bad jobs (because that way they can qualify for more insurance later. which will occur only if a higher pR is chosen. or the marriage market. as is the case if the seller knows more about the quality (or lack thereof) of .) Other forms of advertising (image advertising) do not have this function. cab drivers). The market for medical interns comes to mind. a higher pR lowers the RHS further. What now will be the eﬀect of a decrease in the search cost c. however. this model is quite simplistic. They become more discriminating in their job search. In mathematical terms it is easy to compute that −1 dpR = .98 LA. This may well be desirable for the general equilibrium eﬃciency properties of the model.
Suppose that these individuals are indistinguishable to the insurance company. Assume two states. while the other is a bad risk type with probability of the good state of only πL < πH .Information 99 20 cert. that of adverse selection (which you may think of as “hidden property”) and moral hazard (“hidden action. good. In these types of environments we run into two well known problems. but that one individual is a good risk type who has a probability of the good state occurring of πH .) Sometimes this problem can be ﬁxed via the provision of a warranty. say. or if the buyer knows more about the value of the good to himself than the seller. assume that the good risk is female. Adverse selection lies at the heart of Akerlof’s Lemons Problem. since a warranty makes the statement that the car is good more costly to sellers of bad cars than of cars which are.1: Insurance for Two types his good than the buyer. the individuals’ indiﬀerence curves will have a slope of −πi /(1 − πi ) on the certainty line. These kind of markets lead naturally to the question if the informed party can send some sort of signal to reveal information. in fact. As we have seen before. wg > wb . good and bad.”) We will deal with the former in this section. wb ). so that grammatical pronouns can distinguish types in what follows. But how could they do so? Simply stating the fact that. good loss 15 10 bad 5 endowment * 0 0 5 10 15 20 no loss Figure 5. . since such a statement is free and would also be made by sellers of bad cars (it is not incentive compatible. the car is of good quality will not do. and assume two individuals who both have wealth endowment (wg . the bad risk male. To be stereotypical and simplify the presentation below. Let us examine these issues in our insurance model.
Now. since the insurance company now would make zero (expected) proﬁts on her insurance contract. The bad risk cannot be distinguished from the good risk a priori and therefore can buy at the good risk premium. The answer is NO. In Figure 5. In order for it to be an equilibrium .) However.2. Well then. with asymmetric information this is not a possible outcome. Consider the proﬁt function (and recall that pi = πi ): (1 − πH )(wg − wb ) − (1 − πL )(wg − wb ) < 0. An insurer would be making zero expected proﬁts from selling a common policy for coverage of I at a premium of p to all types if and only if fH (πH pI − (1 − πH )(1 − p)I) + (1 − fH )(πL pI − (1 − πL )(1 − p)I) = 0. Thus both would attempt to buy full insurance for a good type.) Does there exist a pooling equilibrium in this market? Consider a larger market with many individuals of each of the two types. Note that at this premium the good types subsidize the bad types. if he were to maximize at this premium we know that he would over insure — and this action would then distinguish him from the good risk. what is the equilibrium in such a market? There seem to be two options: either both types buy the same insurance (this is called pooling behaviour) or they buy diﬀerent contracts (this is called separating behaviour. in other words to mimic her. The best he can do without giving himself away is to buy the same insurance coverage that our good risk would buy. Such a point might be point M in the ﬁgure. Foreseeing this fact. this premium is indicated by a zero proﬁt locus (identical to the consumers’ budget) at an intermediate slope (labelled ‘market’. We now have to ask if this is a possible equilibrium outcome. However.) Any proposed equilibrium with pooling would have to lie on this line and be better than no insurance for both types. the insurance company would refuse to sell insurance at these prices. the latter too low a premium. This requires a premium p = (1 − πL ) − fH (πH − πL ). Let fH denote the proportion of the good types (πH ) in the market. Busch. this cannot be an equilibrium. but would loose money on his insurance contract. Microeconomics May2004 If there were full information both types could buy insurance at fair premiums and would choose to be on the certainty line on their respective budget lines (with the high risk type on a lower budget line and with a lower consumption in each state. since the former pay too high a premium.100 LA.
Consider. Finally it must be acceptable to the good types. oﬀers less insurance but at a better price. This suggests a point such as A in the Figure 5. Notice that the same arguments hold whatever the initial point. Well then. One is that the insurer and insured of good type would like to move to a diﬀerent point.Information 101 20 cert. all bad types would remain at the old contract M . It must also be on the fair odds line for good types for there to be zero proﬁts. Since it is not all the way over on the good type zero proﬁt line. Of course. Only the good type would be willing to take it (she would end up on a higher indiﬀerence curve). It follows that a pooling equilibrium cannot exist. below the bad type’s indiﬀerence curve.2: Impossibility of Pooling Equilibria nobody must have any actions available which they could carry out and prefer to what is proposed. since each type takes a diﬀerent contract the insurer will know who is who from their behaviour. good loss market 15 10 bad M * * D 5 endowment * 0 0 5 10 15 20 no loss Figure 5. one of which is taken by all bad types and the other by all good types. such as B.3. This suggests that we look at contracts which insure the bad risks fully at fair prices. if proposed by an insurance company. For the bad risk types to accept this type of contract the contract oﬀered to the good risk type must be on a lower indiﬀerence curve. any point in the area to the right of the zero proﬁt line. after the type . and since this is above the zero proﬁt line for bad types whoever sold them this insurance would make losses. Insurance oﬀerers would have to make zero expected proﬁts. Of course. This contract. and above the good type’s indiﬀerence curve: a point such as D. the insurance company would make strictly positive proﬁts. does a separating equilibrium exist? We now would need two contracts. then. There now are two potential problems with this.
This is point C in Figure 5. But what if there are only a few bad types? In that case an insurer could deviate from our proposed contract and oﬀer a pooling contract which makes strictly positive proﬁts and is accepted by all in favour over the separating contract. the insurance premiums would be very high (justiﬁably so. as in the Figure 5.) This shows that a small proportion of bad risks who can’t be identiﬁed can destroy the market completely! Notice the implications of these ﬁndings on life or health insurance markets when there are people with terminal diseases such as AIDS or Hepatitis C. because the bad types would foresee it and pretend to be good in order then to be mistaken for good. but the insurance company sure does not. means a very high probability of seriously expensive treatment. Busch. This is clearly eﬃcient. good loss market 15 May2004 10 bad * B 5 * A endowment * 0 0 5 10 15 20 no loss Figure 5. by the way.4. However. But if that would indeed happen then it can’t. in equilibrium.) What ought society do . Our model shows that doing so would make sure that everybody has access to insurance at fair rates and that there is no crosssubsidization. it will reduce the welfare of consumers with these diseases. Microeconomics 20 cert. for example. The other problem is that the stability of such separating equilibria depends on the mix of types in the market. while this deviation destroys our proposed equilibrium it is itself not an equilibrium (we already know that no pooling contract can be an equilibrium. If.102 LA. or various forms of cancer. The patient may well know that he has this problem. the market has a lot of bad types this kind of equilibrium works. given that AIDS. it could require testing in order to determine the risk it will be exposed to.3: Separating Contracts could be possible is revealed. Indeed. Of course.3. since the risk is high. Of course.
but it is able to show the implications of various policies. in the worst scenario. It would also most likely tend to lead to “alternative” screening devices — the insurance company might start estimating consumers’ “lifestyle choices” and then discriminate based on that information.) 5. or will at least lead to ineﬃciency for the majority of consumers. sprinklers? Do the tires have enough tread depth. In our insurance example it is often possible for the insured to inﬂuence the probability of a loss: is the car locked? Are there antitheft devices installed? Are there theft and ﬁre alarms. and are they the correct tire for the season (in the summer a dedicated summer tire provides superior breaking and road holding to a M&S tire. is that of hidden actions being taken.Information 103 20 cert. while in the winter a proper winter tire is superior.4: Separating Contracts deﬁnitely impossible about this? Economics is not able to answer that question.) Some of these actions are observable and will therefore be included in the conditions of the insurance . banning testing and forcing insurance companies to bear the risk of insuring such customers might make the market collapse. (This may cause “moral” objections by certain segments of the population.3 Moral Hazard Another problem which can arise in insurance markets. good loss market 15 10 bad C * 5 * A endowment * 0 0 5 10 15 20 no loss Figure 5. Is that an improvement? An alternative would be to let insurance operate at fair rates but to directly subsidize the income of the aﬀected minority. and indeed in any situation with asymmetric information. For example.
Note that this would be a fair premium if the consumer continues to choose the level A∗ of abatement activity. taken and is denoted π(A). π (A) > 0.104 LA. and cost is c(A) with c (A) > 0. Busch. I am suggesting that most stolen cars end up in former eastern block countries — it is a fact. then he can be expected to pick the optimal level of the (unobservable) action — optimal for himself. As a benchmark. For example. To keep it simple we will assume that the only available contract has a premium of π(A∗ ) and is for the total loss L. in Germany your insurance will refuse to pay if you do not have “approved” tires. not the insurance company. However. that is. The probability of the loss occurring depends on the amount of preventive action. A. insurance for cars without antitheft devices is now basically unaﬀordable since theft rates are so high since the ‘iron curtain’ opened. How aggressively do you drive? How many risks do you decide to take on a given day? This is often not observable but nevertheless in your control. The ﬁrst order condition for this problem is π (A)u(·) − c (A)π(A)u (·) − π (A)u(·) − c (A)(1 − π(A))u (·) = 0. The ﬁrst order condition for this problem is π (A)u(w − L − c(A)) − c (A)π(A)u (w − L − c(A)) − π (A)u(w − c(A)) − c (A)(1 − π(A))u (w − c(A)) = 0. however. let us consider an individual who faces the risk of loss L. Yes. Thus the optimal A∗ satisﬁes c (A∗ ) = π (A∗ )(u(w − L − c(A)) − u(w − c(A∗ ))) . Microeconomics May2004 contract. If the avoidance of accidents is costly to the insured in any way.11 If you insure your car as parked in a closed parking garage and then leave it out over night your insurance may similarly refuse to pay! Other actions are not so easily veriﬁed. It would appear logical to assume that π (A) < 0. The activity costs money. the maximization problem the consumer now faces becomes (the consumer will assume the premium ﬁxed) maxA {π(A)u(w − pL − c(A)) + (1 − π(A))u(w − pL − c(A))}. Indeed. π(A∗ )u (w − L − c(A∗ )) − (1 − π(A∗ ))u (w − c(A∗ )) Now consider the consumer’s choice when insurance is available. or if the car was not locked and this can be established. 11 . In the absence of insurance the individual would choose A so as to maxA {π(A)u(w − L − c(A)) + (1 − π(A))u(w − c(A))}. c (A) > 0.
generically. who will . however. Clearly (and expectedly) the consumer will now not engage in the activity at all. The principalagent literature explores this problem in detail. The only way to elicit eﬀort is to expose the consumer to the right incentives: there must be a reward for the hidden action. First note that this is a frequent problem in economics. This in itself is not the problem. which is that of a riskneutral principal who tries to maximize (expected) proﬁts. A deductible will accomplish this to some extent and will lead to at least some eﬀort being taken. to say the least.1 The Abstract PA Relationship We will frame this analysis in its usual setting. We are. and one agent.) The amount of eﬀort will in general not be the optimal. This kind of problem is so frequent that it deserved its own name and there are books which nearly exclusively deal with it. 5. of course.Information 105 Thus the optimal A∗ satisﬁes ˆ c (A) = 0. 5. Most times one (uninformed) party wishes to inﬂuence the behavior of another (informed) party. This is due to the fact that the consumer continues to face risk (which is costly.4.4 The Principal Agent Problem To conclude the chapter on information here is an outline of the leading paradigm for analysing asymmetric information problems. It follows that the probability of a loss is higher and that the insurance company would loose money. there is a moralhazard problem. Second. and that the agent has incentives not to do as asked. the problem can be considered a principalagent problem. What makes it interesting is that the principal cannot tell if the agent did what he was asked to do. In short. concerned with situations in which one party — the principal — has to rely on some other party — the agent — to do something which aﬀects the payoﬀ of the ﬁrst party. that is. If we were to recompute the level of A at a fair premium for π(0) we would ﬁnd that the same is true: no eﬀort is taken. note that at the root of the problem lies asymmetric information. there is a lack of information. We will obviously have only a much shorter introduction to the key issues in what follows.
Assuming no corner solutions. U (w. Note that Π(e. Here u is a level of utility suﬃcient to make the manager accept the contract. which would be provided under full information. Let ρ be a random variable which also inﬂuences proﬁts and which is not directly observable by the owner either. what we shall call his eﬀort level e ∈ [e. only the realization of gross proﬁts is observed. EΠ(e. e]. solve maxe. Busch. This eﬀort level will be assumed to inﬂuence the level of proﬁts before manager compensation. u11 (·. will be the expected proﬁts for eﬀort level e. that is.) Let us ﬁrst. Note at this stage that this in itself does not yet cause a moralhazard problem.t.) The owner can not observe the manager’s eﬀort choice. Call the principal the owner and the agent the manager. determine the eﬃcient level of eﬀort. as a benchmark. so that the manager cannot condition his eﬀort choice on the realization of ρ. Let the relationship between eﬀort. Microeconomics May2004 have control over (unobservable) actions which inﬂuence proﬁts. The agent/manager will be assumed to have choice of a single item. called grossproﬁts. or share contracts (where the manager gets a ﬁxed share of proﬁts. ρ) will be a random variable itself. proﬁts and the random variable be denoted by Π(e. since if the relationship between eﬀort and proﬁt is known we can simply invert the proﬁt function to deduce the eﬀort level. It could be known to the manager. e). which you may think of as revenue if there are no other inputs. e) = u} . All expectations in what follows will be taken with respect to the distribution of ρ. More general models could have multiple dimensions at this stage. ρ). In that case we would have to solve the Pareto problem. We assume that the manager is an expected utility maximizer. it is easy to see that we would want to set the eﬀort level such that the marginal beneﬁt of eﬀort (in terms of higher expected proﬁts) is related to the marginal cost (in terms of higher disutility . ·) < 0. Also assume that the manager is riskaverse (that is u1 (·. but we will have to assume that it will become known only after the fact. ·) > 0.w {EΠ(e. for example. Instead. ·) < 0) and that eﬀort is a bad (u2 (·. ρ) − w s. the most general wage contract he can oﬀer the manager will be a function w(Π). Therefore we need to introduce some randomness into the model.106 LA. Note also that in formulating this problem like this I have already used the knowledge that a riskneutral owner should fully insure a riskaverse manager by oﬀering a constant wage. Since the owner can only observe proﬁts. This formulation includes a ﬁxed wage as well as all wage plus bonus schemes. The manager’s preferences will be over monetary reward (wages/income) and eﬀort: U (w. ρ).
Uw (·. in equilibrium. Thus. e)} = EU (w(Π(e∗ . in equilibrium. The other is the question if to accept the contract at all. ρ)). In particular. ˆ Here ‘argmax’ indicates the argument which maximizes. ˆ . actually supplies that level. we need that w EΠe (·. ·) = Ue (·. or individual rationality constraint. ρ)). which requires that maxe {EU (w(Π(e. That is. Here U0 is the level of utility the manager can obtain if he does not accept the contract but instead engages in the next best alternative activity (in other words. ρ)).) ˆ So.) Both of these have special names and roles in the principalagent literature. ·) What if eﬀort is not observable? In that case we will solve the model “backwards”: given any particular wage contract faced by the manager.Information 107 of eﬀort. e∗ ) ≥ U0 . ·) + Ue (·. which determines the optimal e∗ . ·)] = 0. ρ)). it is the manager’s opportunity cost. That is. Then we will solve for the optimal contract. e)} ≥ U0 . This leads to FOC E [Uw (·. This is called the incentive compatibility constraint. the contract w ∗ (Π) must satisfy IR : maxe {EU (w(Π(e. which will have to be compensated via the wages). correspond to what the owner wanted the manager to do. we want e = e∗ . any contract is constrained by the fact that the manager must willingly participate in it. ·). we will determine the manager’s choice. ρ)). One is to determine how much eﬀort to provide given he accepts the contract: maxe {EU (w(Π(e. “foreseeing” those choices. if the owner wants to elicit eﬀort level e. The latter one is called the participation constraint. In other words. The other constraint is that the manager’s optimal choice should. e)}. e)}. Assume in what follows that the owner wants to support some eﬀort level e (which is derived from this process. then it should be true that the manˆ ager. our manager is faced with two decisions. Mathematically it says: IC : e = argmaxe {EU (w(Π(e. ·)w (·)Πe (·.
We will not be insuring the manager. He will furthermore have the correct incentive to expend eﬀort if he cares as much about proﬁts as the owner. Microeconomics May2004 Now we can write down the principal’s problem. This problem is easily stated maxe. ·))]} . Thus it would stand to reason that if we were to sell him the proﬁts he would be the owner and thus take the optimal action! But this is equivalent to paying him a wage which is equal to the proﬁts less some ﬁxed return for the owners. So. only look at three special cases in which some answers are possible. the participation and incentive compatibility constraints. and assume for the moment that the IR constraint can be satisﬁed (by choice of the correct p. ρ) − v (e) = 0. let us think for a moment longer about the situation. We can model this as a manager who has lexicographic preferences over his minimum wealth and eﬀort.w(Π) {E [Π(e.108 LA. For simplicity also assume that the lowest possible proﬁt level is independent of eﬀort (that is. e)) + λI (Uw (·. e) = w − v(e). that is. In that case we can have U (w. only the probabilities of proﬁts depend on the eﬀort. not the level). We will. but hard to solve in general cases.) With such a contract the manager will choose an eﬀort level such that Πe (e. Before we charge ahead and solve this brute force. ·)w (·)Πe (·. ρ)) +λP (U0 − U (w(Π(e. Now note that we would not have to insure the manager in this case since he evaluates expected wealth the same as some ﬁxed payment. . therefore. Busch. Note that this is the eﬃcient eﬀort level under full information. ρ)). Also note that there is no reason to over pay the manager compared to the outside option. ·) + Ue (·. In that case a manager will always face the same lowest wage for any wage function and thus will not be able to be enticed to provide more than the minimum level of eﬀort. ρ) − w(Π(e. where v(e) is the disutility of eﬀort. and for equal minimum wealth the manager will prefer the lower eﬀort. Independent of the eﬀort levels the manager will prefer a higher minimum wealth. First. let us simplify by assuming a risk neutral manager. let us propose a wage contract of w = Π − p. The principal ‘simply’ wishes to maximize his own payoﬀ subject to both. The owner’s problem now simply is to choose the largest p such that the manager still accepts the contract! The other special case one can consider is that of an inﬁnitely riskaverse manager. he needs to be given only U0 .
Π]. we might expect anything between eﬃcient outcomes and completely ineﬃcient outcomes. e). v (·) ≥ 0. e)u (w(Π)) = 0. This has ﬁrst order condition Π Π u(w(Π))fe (Π. The owner will want to ﬁnd a wage structure and eﬀort level to Π maxw(·). e)dΠ − v (e) = 0. We will also assume that the cumulative distribution of proﬁts exists and depends on eﬀort. In this formulation the manager will solve Π maxe Π u(w(Π))f (Π. Let us return for a moment to the general setting above. v (·) > 0. We will ignore the SOC for a while. and satisﬁes ﬁrstorder stochastic dominance. e)dΠ − v(e) . e)u (w(Π)) + λC fe (Π. u (·) < 0. e) − v (e)f (Π. e)dΠ − v(e) ≥ U0 . e) +λP (u(w(Π)) − v(e) − U0 )f (Π. e) . hard to solve. has a density f (Π. largely depending on the precise situation. e))] dΠ .e Π [(Π − w(Π))f (Π. u (w(Π)) f (Π. This will have to be diﬀerentiated. However. which is quite messy and. as pointed out before. e) + λC (u(w(Π))fe (Π. v (e) = ∞. The participation constraint will be Π Π u(w(Π))f (Π. consider the diﬀerentiation with respect to the wage function: −f (Π. v (e) = 0. The last two are typical “Inada conditions” designed to ensure an interior solution. e) 1 = λP + λC .Information 109 As you can see. and that this distribution is over Π ∈ [Π. We will restrict it somewhat by assuming that the manager’s preferences are separable: U (w. e) + λP f (Π. e) = u(w) − v(e) with the obvious assumptions of u (·) > 0. e) > 0. Rewriting we get something which is informative: fe (Π. F (Π.
Let us further specialize the problem and suppose that there are only two eﬀort levels. Indeed. Indeed. since under the old system any punishment only occurred after 4 years. where the insurer wants to elicit the appropriate amount of care from the insured. and thus we will make wages increase with the signal. the higher the manager’s wage. In that case it is easy to see that the above equation will become fH (Π) − fL (Π) fL (Π) 1 = λP + λC = λP + λC 1 − u (w(Π)) fH (Π) fH (Π) . if the likelihood ratio is decreasing then then the wage function must be increasing with realized proﬁts. Busch. assume that there are only two proﬁt levels. and that the owner wants to get the high eﬀort level. Most frequently in economics this is the case in the production process. or are they due to negligent work?) Any time a ﬁrm subcontracts another ﬁrm there is another principalagent relationship set up. The right hand side consists of the constraint multipliers (which will both be positive since the constraints are binding) multiplied by some factors. Of particular interest is the last term. What is going on here is that higher proﬁts are a correct signal of higher eﬀort. who have superior information about market circumstances. presumably designed to make the wage function steeper. It follows that the higher the (Π) relative probability that the eﬀort was high for a given realized proﬁt level.) In essence a principalagent relationship with the attendant problems exists any time there is a delegation of responsibilities with less than perfect monitoring. In that case it can be shown that 0< wH − w L < 1.110 LA. the ratio of the derivative of the density to the density. It is therefore increasing in the wage. where the electorate would like to insure that the elected oﬃcials actually carry out the electorates’ will (witness the ‘recall’ eﬀorts in BC. fL The term fH(Π) is called a likelihood ratio. with the high level of proﬁts more likely under high eﬀort. ΠH − Π L When does this principalagent relationship come into play? We already mentioned the insurance problem. It also occurs in politics. Owners of ﬁrms delegate to managers. Finally. (Do quality problems arise inherently from the production process employed. Managers in turn delegate to subordinates — who often have superior information about the details of the production process. Microeconomics May2004 The left hand side of this is the inverse of the slope of the wealth utility function. teach the appropriate material. The university wants to ensure that I treat you appropriately. and teach “well”. . etc. even in our class there are multiple principalagent relationships. technologies.
you inherently do not know what it is you “ought” to be taught.) There are various incentive systems in place to attempt to address these problems.Information 111 You. having similar goals. are taken to cause decreasing returns to scale. in some sense. and heated debate about how successful they are! As an aside: the principalagent problems arising from delegation are often held responsible for limits on ﬁrm size (i. are the principal of the university.. You may also have no way to determine if you are taught well — especially since there is a potential conﬂict between your future self (desiring rigorous instruction) and your current self (desiring a ‘pleasant’ course. However.e.) .
112 LA. Busch, Microeconomics
May2004
Chapter 6 Game Theory
Most of the models you have encountered so far had one distinguishing feature: the economic agent, be it ﬁrm or consumer, faced a simple decision problem. Aside from the discussion of oligopoly, where the notion that one ﬁrm may react to another’s actions was mentioned, none of our decision makers had to take into account other’s decisions. For the price taking ﬁrm or consumer prices are ﬁxed no matter what the consumer decides to do. Even for the monopolist, where prices are not ﬁxed, the (inverse) demand curve is ﬁxed. These models therefore can be treated as maximization problems in the presence of an (exogenous) constraint. Only when duopoly was introduced did we need to discuss what, if any, eﬀect one ﬁrm’s actions might have on the other ﬁrm’s actions. Usually this problem is avoided in the analysis of the problem, however. For example, in the standard Cournot model we suppose a ﬁxed market demand schedule and then determine one ﬁrm’s optimal (proﬁt maximizing) output under the assumption that the other ﬁrm produces some ﬁxed output. By doing this we get the optimal output for each possible output level by the other ﬁrm (called reaction function) for each ﬁrm.1 It then is argued that each ﬁrm must correctly forecast the opponent’s output level, that is, that in equilibrium each ﬁrm is on its reaction function. This determines the equilibrium output level for both ﬁrms (and hence the market price).2
So, suppose two ﬁxed marginal cost technology ﬁrms, and that inverse market demand is p = A − BQ. Each ﬁrm i then solves maxqi {(A − B(qi + q−i ))qi − ci qi } which has FOC A−Bq−i −2Bqi −ci = 0 and thus the optimal output level is qi = (A−ci )(2B)−1 −0.5q−i . 2 To continue the above example: q1 = (A−c1 )(2B)−1 −0.5((A−c2 )(2B)−1 −0.5q1 ) −→ q1 = (A − 2c1 + c2 )(3B)−1 . Thus q2 = (A − 2c1 + c2 )(3B)−1 and p = (A + c1 + c2 )(3)−1 .
1
113
114 LA. Busch, Microeconomics
May2004
In this kind of analysis we studiously avoid allowing a ﬁrm to consider an opponent’s actions as somehow dependent on its own. Yet, we could easily incorporate this kind of thinking into our model. We could not only call it a reaction function, but actually consider it a reaction of some sort. Doing so requires us to think about (or model) the reactions to reactions, etc.. Game theory is the term given to such models. The object of game theory is the analysis of strategic decision problems — situations where • The outcome depends on the decisions of multiple (n ≥ 2) decision makers, such that the outcome is not determined unilaterally; • Everybody is aware of the above fact; • Everybody assumes that everybody else conforms to fact 2; • Everybody takes all these facts into account when formulating a course of action. These points especially interesting if there exists a conﬂict of interest or a coordination problem. In the ﬁrst case, any payoﬀ gains to one player imply payoﬀ losses to another. In the second case, both players’ payoﬀs rise and fall together, but they cannot agree beforehand on which action to take. Game theory provides a formal language for addressing such situations. There are two major branches in game theory — Cooperative game theory and Noncooperative game theory. They diﬀer in their approach, assumptions, and solution concepts. Cooperative game theory is the most removed from the actual physical situation/game at hand. The basis of analysis is the set of feasible payoﬀs, and the payoﬀs players can obtain by not participating in the ﬁrst place. Based upon this, and without any knowledge about the underlying rules, certain properties which it is thought the solution ought to satisfy are postulated — so called axioms. Based upon these axioms the set of points which satisfy them are found. This set may be empty — in which case the axioms are not compatible (for example Arrow’s Impossibility Theorem) — have one member, or have many members. The search is on for the fewest axioms which lead to a unique solution, and which have a “natural” interpretation, although that is often a more mathematical than economic metric. One of the skills needed for this line of work is a pretty solid foundation in functional analysis — a branch of mathematics concerned with properties of functions. We will not talk much about this type of game theory in this course, although it will pop up once later, when we talk about bargaining (in the Nash Bargaining Solution). As a ﬁnal note on this subject: this type of game theory is called “cooperative” not necessarily because
Some recurring examples of particular games are the following (you will see that many of these games have names attached to them which are used by all researchers to describe situations of that kind. but since it is assumed that players’ commitments (threats. and that two rational players should not stop trading until all gains from trade are exhausted. Indeed. must improve each player at least weakly. In this chapter we will deal exclusively with noncooperative game theory. so that the number of players grows to inﬁnity. While simultaneousness is certainly one way to achieve the goal. the information possessed by them. While we are ultimately concerned with economic problems. Part of the reason for the name game theory is the similarity of many strategic decision problems to games. If the announcements match. The result follows from this. 4 This means that no information is or can be transmitted. Concepts such as the Core of an exchange economy3 or the Nash bargaining solution are all cooperative game theory notions. if they The ‘Core’ refers to the set of all Pareto Optimal allocations for which each player/trader achieves at least the same level of utility as in the original allocation. players are added. in the mechanism design and the implementation literature researchers basically design games in order to achieve certain outcomes. then player 2 pays player 1 one dollar. or the question if there is room for cooperation or not.Game Theory 115 players will cooperate. promises. There are certain essential features which many economic situations have in common with certain games. A celebrated result in economics (due to Edgeworth) is that the core converges to the competitive equilibrium allocation as the economy becomes large (i.) The most detailed branch employs the extensive form. agreements) are binding and can be enforced. and a study of the simpliﬁed game is thus useful preparation for the study of the economic situation. Noncooperative game theory can be applied to games in two broad categories diﬀering with respect to the detail employed in modelling the situation at hand (in fact. In this branch the focus is more on the actual rules of the game — it is thus a useful tool in the analysis of how the rules aﬀect the outcome.) This demonstrates nicely that in a large economy no one trader has suﬃcient market power to inﬂuence the market price. much of what we do in the next pages deals with toy examples. among others. The argument is that any trade. there are many ways in which one can categorize games. For a two player exchange economy it is the part of the Contract Curve which lies inside the trading lens. by the number of players.. since we do not know the trading procedures. It is not more speciﬁc. the players could be in diﬀerent rooms without telephones or shared walls. 3 . while a more abstracted approach employs the strategic form. since it is voluntary.) Matching Pennies: 2 players simultaneously4 announce Head or Tails. It does not necessarily mean actually simultaneous actions.e.
) . It is a game with sequential moves and perfect information: ﬁrst the child chooses to either behave or not. where we will use it later. such as the Prisoners’ Dilemma.116 LA. This allows them to condition their play on the opponents’ play in the past (a key mechanism by which cooperation and suitable behaviour is enforced in most societies. it captures the situation in which two players have to choose to cooperate or not (called ‘defect’) simultaneously. since each player prefers coordination on a diﬀerent activity. Busch. The game is one of pure conﬂict. Prisoners’ Dilemma: Probably one of the most famous (and abused) games. the parents then decide to either punish the child. One form is that both have to announce one of two statements. so it might as well get a name. but punishment reduces its utility. They have to choose an activity without being able to communicate with each other. The parents prefer if the child behaves. however. Battle of the Sexes: Romeo and Juliet rather share an activity than not. then player 1 pays player 2 one dollar. I will use it to get some ideas across. Microeconomics May2004 do not match. The child prefers not to behave.” The distinguishing feature of this game is that each player is better oﬀ by not cooperating — independently of what the other player does — but as a group they would be better oﬀ if both cooperated. after each round the players get to observe either payoﬀs or actual actions in the preceding stage. but dislike punishing it. It is also a game of incomplete information. or not. since that increases their payoﬀs. the focus of much early game theoretic analysis. There are many other examples which we will encounter during the remainder of the course. While the stage game may be a simultaneous move game. However. This is an example of a zerosum game. This is a frequent problem in economics (for example in duopolies. There is some conﬂict. where the same so called stage game is repeated over and over.) “Punishment Game:” This is not a standard name or game. This game is a coordination game — the players want to coordinate their activities. Based on the observation of the behaviour. The distinguishing feature is that what is good for one is necessarily bad for the other player. either “give the other player $3000” or “give me $1000. however. There are repeated games. Romeo likes music better than sports while Juliet likes sports better than music.
1 A. a game tree has to satisfy certain conditions in order to be sensible: It has to be ﬁnite (otherwise. B. A vertex is called terminal if it has no followers. Indeed we will consider inﬁnite games. E. connected by lines.) In Figure 6. however. This information is summarized in a game tree. Now. . where two branches join up again. Vertex C follows vertex B if the sequence of arcs connecting A to C passes through B. and F do not.Game Theory 117 6. such as the Cournot duopoly game where each player has inﬁnitely many choices. how would you write it down?).1. called arcs. C follows B immediately if C follows B and there is one arc connecting C to B. The formalism to this extension is left to more advanced courses and texts. and it has to be connected (so that we can get from one part of the tree to another). D. We will call the start of the game the distinguished node/vertex or more commonly the root. We also need a sense of where the game starts and where it ends up. Much of it is only jargon. the timing of these actions. their possible actions. and C satisfy the deﬁnition. In what follows we will use the correct and formal way to deﬁne games and you will therefore not have to reformalize some foggy notions later on — the down side is that this may be a bit unclear on ﬁrst reading. so as to form a ﬁgure which is connected (there exists a set of arcs connecting any one vertex to any other) and contains no simple closed curves (there does not exist a set of arcs connecting a vertex to itself. called vertices.1 Descriptions of Strategic Decision Problems The Extensive Form 6. so don’t be put oﬀ! The ﬁrst task in writing down a game is to give a complete description of the players. 5 Finite can be dropped.1 We will start rather formally. and of course the payoﬀs they receive in the end. We can then deﬁne the following: Deﬁnition 2 Let Γ be a tree with root A. the information available to the players when they take the actions.5 All this is formally said in the following way: Deﬁnition 1 A game tree Γ (also called a topological tree) is a ﬁnite collection of nodes. and it has to be like a tree in that we do not have loops in it.
Busch. . . 5.. Microeconomics A c B c C c c D c s s g £ ¢f g £ ¢ f s s gs s £ ¢ f g £ ¢ s s fs g £ £ gs s s d ds s ¢ f ¢ f ¢e ¡e f ¢ ¢ e ¡ e es ¢ fc ¢ e s¡ ¢ s s e¡ ¡e F ¡ e ¡ es s s May2004 ¢ f ¢ s s fs ¢f ¢ f ¢ f ¢ s s fs s d d ds ds s s ¢f ¡e ¢ f ¡ e ¡ es ¢ f s ¢ s s fs s ¡e ¡ e es ¡ s ¢f ¢ f E c s s d ds s s Figure 6. . n} a partition of Si into subsets Sij (information sets). if any. i. . what.1: Examples of valid and invalid trees We are now ready to deﬁne a game formally by deﬁning a bunch of objects and a game tree to which they apply. B and C have the same number of immediate followers.) 4. and what the probability distribution over them is. . 6. such that ∀B. for each i ∈ {1. A partition {S0 . formally. which player plays after/together with what other player(s). A game tree Γ with root A. 2. what the possible moves are at each stage. For each vertex in S0 a probability distribution over the set of immediate followers. . 3. exogenous moves exist. These objects and the tree will capture all the information we need. the order of play. . So. . A function. what each player knows when making a move. S1 . Sn } of the set of nonterminal nodes of Γ (the player sets. and of course the payoﬀs at the end.118 LA. called payoﬀ function. associating a vector of length n with each terminal vertex of Γ. we get the following: Deﬁnition 3 A nplayer game in extensive form comprises: 1. . 2. What is this information? We need the number of players. C ∈ Sij .e. for each Sij an index set Iij and a 11 map from Iij to the set of immediate followers of each vertex in Sij .
and if x is a predecessor ˆ of x and the same player moves at x and x (i.. Nature will be a very useful concept. if x. that “the same action” was chosen at the two diﬀerent predecessor nodes. and which has the ˜ ˆ same index on the arc leading to x as that from x to x. Notice in particular the following: (1) While the game is for n players. The reason is that “nature” gets a player set too. this means that players learn something during the game. we have (n + 1) player sets. Deﬁnition 4 A nperson game in extensive form is said to be a game of perfect recall if all players never forget information once known. the player should remember the index he chose previously and thus the current two nodes cannot be indistinguishable. there are still various ways one could draw such a game.) (2) Nature.e. (3) Our information sets capture the idea that a player can not distinguish the nodes within them. does not have a payoﬀ in the game. or if not. We will always assume perfect recall. in other words. but is quite intuitive in practice: if two nodes are in a player’s information set. but none of the players is choosing this. x ∈ Sij then neither x nor x is a predecessor of the other one.. all our players will recall all their own previous moves. most often the outcome of some randomization (as when there either is an accident or not. and it does not have any information sets.e. and if they never forget any of their own previous moves: i. and if they have learned something about their opponents (such as that the opponent took move “Up” the second time he moved) then they will not forget it. in some sense. In other words. then one cannot follow the other — else the player in eﬀect forgets that he himself moved previously in order to get to the current situation.e. then there exists ˆ ˆ j some x in the same Si as x which is a predecessor of x . x.. It allows us to model a nonstrategic choice by the environment. there is a restriction on predecessors: it must be true that either both nodes have the same predecessor in a previous information set of this player. since every node has the same number of possible moves and they are the same (i. The ﬁrst is a restriction on information sets which will capture the idea that players do not forget any information they learn during the game. their labels are the same. Furthermore. Otherwise. Games of perfect recall still allow for the possibility of players not know .) Even with all the restrictions already implied. Two very important issues deal with assumptions on information — properties of the information sets. since it is nonstrategic. ˆ This deﬁnition bears close reading. x ∈ Si ).Game Theory 119 This is a rather exhaustive list. In practice.
1 1. a player would not know his opponent’s move at that time.6 In that case. For example the player may not know the payoﬀ function. 2 0. or even some of the possible moves. 6 By this we mean that they use Bayes’ formula to update their beliefs. uncertainty about the number of players. Busch. 1 r 2 r A d B L b R 0. 1 d r d r d d r dr r dr 1.120 LA. 1 d r d r d d r dr r dr r2 A d B 1. Microeconomics 1 b ¨¨ rr R L¨ rr ¨ ¨ p p p p p p p 2 p p p p p p p rp r rp p p pr ¨p A d B A d B d d 1r 1r dr dr 1. the player could not write down the extensive form at all! In the case of imperfect information the player can write down the extensive form.2: A Game of Perfect Recall and a Counterexample ing something about the past of the game — for example in games where players move simultaneously. Bayes’ formula . 2 1. or their order. 0 May2004 0. In that case. A famous theorem due to Harsanyi shows that it is possible to transform a situation of incomplete information to one of imperfect information if players are Bayesian. The opposite is a game of perfect information: Deﬁnition 5 A nperson game in extensive form is said to be a game of perfect information if all information sets are singletons. Incomplete information refers to the case when some feature of the extensive form is not known to one (or more) of the players. 0 d d dpr p p p p p p p p p 1 p p p p p p p p p r dr p 1. but it contains nontrivial information sets. It is important to note a crucial linguistic diﬀerence here: Games of Incomplete Information are not the same as games of Imperfect Information. 2 1. 0 1. Such games are called games of imperfect information. 2 1 0. 0 Figure 6.
P (B) P (B) A prior is the exante belief of a player. 3 −2.” which refers to the action taken at any particular information set. 6. 5 r dr M r d f d ac 0.3: From Incomplete to Imperfect Information available moves. or not. then we could not model most interesting situations — which nearly all have some information that players don’t know. contingent plans of behaviour in the game. one that enjoys a ﬁght.3. Nature is assumed to choose which one is actually playing. take the case of a market entrant who does not know if the monopolist enjoys ﬁghting an entrant.Game Theory 121 E b out ¨¨ rrr in ¨ r rM r¨ ¨ rr f ight d accom 0. 4 Figure 6. The entrant thus does not know the payoﬀs of the game. since it makes our tools very useful to lots of situations. From that. ? r r ¨ ¨ −2. If we could not transform incomplete information into imperfect information.. not just a “move. which could be given to a referee or a computer. You should think of a strategy as a complete game plan. For example. etc. can be transformed into uncertainty about payoﬀs only. The expost probability is called a posterior. This turns out to be one of the more powerful results. and they would then play the game for you according to these instructions. while you just says that the probability of an event A occurring given that an event B has occurred will be the probability of the event A occurring times the probability of B happening when A does. suppose it is known that there are two types of monopolist. ? 4.2 Strategies and the Strategic Form In order to analyze the situation modelled by the extensive form. These are complete. . outcomes. 10 r r d rM d f d ac 4. 10 dr d r −2. 2 dr d 4.1. we employ the concept of strategies. and the equilibria of these games will coincide. all divided by the probability of B occurring: P (AB) = 7 P (BA)P (A) P (A ∩ B) = . However. one can construct a complete information game with imperfect information. and one that does not. and the probability of that choice is set to coincide with the entrant’s priors.7 After this transformation we have a well speciﬁed game and can give the extensive form — as in Figure 6. 10 dr d Nature b wolf ¨¨ rr lamb ¨ r p p p pr r¨ ¨ p p p p p p E p p p p p rp r out d in in d out 0.
a population may not. . .122 LA. as we will see!) 8 . we can allow the player to randomize. n} is a probability distribution over the pure strategies σi ∈ Σi . . The players have to submit a complete plan before they start the game. Notice that these are not the same concepts. Consider pure strategies only. or at the level of pure strategies: Deﬁnition 7 A behavioural strategy for player i ∈ {1. 2. 2.) Mixed strategies are also decidedly easier to work with. σn ) = (σi . Each player has pure strategy σi . σ2 . at the level of each information set. In general. σ does not determine the outcome fully. Deﬁnition 8 A mixed strategy µi for player i ∈ {1. . . We This is a somewhat controversial issue. 2. in general. sometimes you stand. Do people ﬂip coins when making decisions? Nevertheless it is pretty much generally accepted.8 This randomization can occur on two levels. for now. σ−i ). sometimes you even raise people. . Alternatively. This is a very important point. and the randomization probabilities just correspond to the proportion of people in the population who play that strategy. . . σi : Sij → Iij . In other instances the randomizing distribution is explained by saying that while each person plays some deﬁnite strategy. We will not worry about it. even those which prior moves of the same player rule out! Deﬁnition 6 A pure strategy for player i ∈ {1. . We can now consider what players’ payoﬀs from a game are. . however. and it has to cover all eventualities — which translates into saying that the plan has to specify moves for all information sets of the player. however. . giving rise to a strategy vector σ = (σ1 . under perfect recall one can ﬁnd a behavioural strategy corresponding to each mixed strategy. Busch. However. n} is a function σ i that associates every information set Sij with one element of the index set Iij . . . In some circumstances randomization can be seen as a formal equivalent of bluﬃng: Take Poker for example. which we will introduce shortly. . and so we will only deal with mixed strategies (which are more properly associated with the strategic form. n} is a function βi that associates every information set Sij with a probability distribution over the elements of the index set Iij . Sometimes you fold with a pair. Microeconomics May2004 watch what happens and have no possibility to change your mind during the actual play of the game. since there may be moves by nature. This could be modelled as a coin ﬂip. . . and assume randomization as necessary (and sometimes it is.
. . since socalled matrixgames where actually analyzed ﬁrst. . . which is even more abstract. . in general. not payoﬀs. . and a third player would be choosing among matrices. is πi (σ). players will. where N is the player set {1. To get a game we need a set of utility functions for the players. and the game and strategies are thus evaluated using expected payoﬀs.×Sn . however. the payoﬀs players receive from a strategy combination (vector) are therefore expected payoﬀs. . where information is not only suppressed. πn (σ)). of course. (It is also known as the “normal form.” but that language is coming out of use. Thus.” There is a slightly diﬀerent viewpoint. In general.Game Theory 123 therefore use von NeumannMorgenstern expected utility to evaluate things. is that of a game form.) We will treat the strategic form in this fashion — as an abbreviated representation of the sometimes cumbersome extensive form. . In the end. one can also see the following deﬁnition: Deﬁnition 11 A game G in strategic (normal) form is a 3tuple (N. only outcomes are speciﬁed. .) Player 1 is taken to choose the row. S is the strategy set S = S1 ×S2 . and U is the payoﬀ function U : S → Rn . and this interpretation is stressed by the term “strategic form. The strategic form can be represented by a matrix (hence the name matrix games. . n}.4 provides an example of a three player game in strategic form. Here. where Si is player i’s strategy set. however. and a set of rules of the game. U ). arrive at precisely one terminal vertex and receive whatever the payoﬀ vector is at that terminal vertex. the presence of nature or the use of mixed strategies implies a probability distribution over terminal vertices. This is the prevalent view nowadays. player 2 the column. This is a much more abstract viewpoint. S. Deﬁne the following: Deﬁnition 9 The expected payoﬀ of player i given σ = (σi . which are implicit in the above. but not even mentioned. The vector of expected payoﬀs for all players is π(σ) = (π1 (σ). A related concept. σ−i ).) Figure 6. . Deﬁnition 10 The function π(σ) associated with the nperson game Γ in extensive form is called the strategic form associated with Γ. Before the game is played. (For more than three players this representation clearly looses some of its appeal.
1. 2) Figure 6. if not. 1. For each of the elements there are three possible moves. where the ﬁrst player announces if he wants 0. You may note a couple of things at this point. 2. 1.5 gives an example. While the extensive form is simple.4: A Matrix game — game in strategic form Deﬁnition 12 A game form is a 3tuple (N. The second point concerns the fact that the strategic form is not always a more convenient representation. 1) (1. so that there are 33 diﬀerent vectors that can be constructed. 4) (3. The extensive forms would thus be widely diﬀerent. 3. O) where N and S are as deﬁned previously and O is the set of physical outcomes. 1. 50 or 100 dollars. The games may not even be closely related for this to occur. 2) (2. when we talk about solution concepts. 3) (1. 1) (2. 0) (1. 3. they could all give rise to the same matrix. 1. any extensive form game with. 1. 1) (2. or we could have three information sets with two moves each. 2) (1. If the announcements add to $100 or less the players each get what they asked for. 9 . 2) (1. 3) (1. realizing that the indices in the index sets are arbitrary. 1. say. eight strategies for a player will lead to a matrix with eight entries for that player. We could also have two information sets. 2. 0) (2. 3) (3. For one. 2) (2. 1.124 LA. then the second player does the same. Busch. 1. 5) May2004 1\2 U C D C (1. S. 3. the strategic form of this game is a 3 × 27 matrix!9 Why? Since a strategy for the second player is an announcement after each announcement by the ﬁrst player it is a 3tuple. Microeconomics Matrix A ←− Player 3 −→ Matrix B L R C 1\2 L R (1. diﬀerent extensive form games can give rise to the same strategic form. What is “natural” in one game may not be “natural” in another. 2) (1. and the games would be very diﬀerent indeed. 1. Figure 6. one with four moves. 3. The main problem is that one might want all games that give rise to the same strategic form to have the same solution — which often they don’t. 1. But we could have one information set with eight moves. Nevertheless. This is a simple bargaining game. they each pay one dollar. Does this matter? We will have more to say about this later. In principle. and that we can relabel everything without loss of generality (does it matter if we call a move “UP” or “OPTION 1”?). 1. 1) U C D (1. 1) (2. one with two.
Game Theory 125 1\2 0 50 100 (0, 0, 0) (0, 0) (50, 0) (0, 0, 50) . . . (100, 100, 100) (0, 0) (50, 0) ... ... ... (0, 100) (−1, −1) (−1, −1)
(100, 0) (−1, −1)
$$ $$$ $c d
1
0 0
0 $ 2s$ $$ $ d 0 50 d 100 d d 0 50 0 100
50 2s
50 0
0 50 d100 50 50
d d
−1 −1
100 2 s d 0 50 d100 d d
100 0
−1 −1
−1 −1
Figure 6.5: A simple Bargaining Game Before we go on, Figure 6.6 below and on the next page gives the four games listed in the beginning in both their extensive and strategic forms. Note that I am following the usual convention that in a matrix the ﬁrst payoﬀ belongs to the row player, while in an extensive form payoﬀs are listed by index. Matching Pennies 1\2 H T H T (−1, 1) (1, −1) (1, −1) (−1, 1) Battle of the Sexes 1\2 M S M S (50, 30) (1, 1) (5, 5) (30, 50) Prisoners’ Dilemma 1\2 C D C D (3, 3) (0, 4) (4, 0) (1, 1)
H dd T pr p p pd p p p p p2 p p p d r H d T H d T 1, −1
r
1 b
50, 30
1 b d S M d pr p p pd p p p p p2 p p p d r M d S M d S dr dr r d r d 5, 5 1, 1
30, 50
−1, −1, 1 1
dr d
r
1, −1
dr d
3, 3
r
C dd D pr p p pd p p p p p2 p p p d r C d D C d D
dr d
1 b
0, 4 4, 0
r
dr d
1, 1
126 LA. Busch, Microeconomics The “Education Game” B¨¨
¨r
May2004
C b
−3, 3
¨ Pr ¨ ¨ P d I d r dr
1, 5
rrN B rr rrP P d I d r dr
(P, P ) (P, I) (I, P ) (I, I)
P \C
B (3, −3) (5, 1) (3, −3) (5, 1)
NB (−1, 1) (1, 5) (−1, 1) (1, 5)
1, −1
5, 1
Figure 6.6: The 4 standard games
6.2
Solution Concepts for Strategic Decision Problems
We have developed two descriptions of strategic decision problems (“games”.) How do we now make a prediction as to the “likely” outcome of this?10 We will employ a solution concept to “solve” the game. In the same way in which we impose certain conditions in perfect competition (such as, “markets clear”) which in essence say that the equilibrium is a situation where everybody is able to carry out their planned actions (in that case, buy and sell as much as they desire at the equilibrium price), we will impose conditions on the strategies of players (their planned actions in a game). Any combination of strategies which satisfy these conditions will be called an equilibrium. Since there are many diﬀerent conditions one could impose, the equilibrium is usually qualiﬁed by a name, such as “these strategies constitute a Nash equilibrium (Bayes Nash equilibrium, perfect equilibrium, subgame perfect equilibrium, the ChoKreps criterion,...) The equilibrium outcome is determined by the equilibrium strategies (and moves by nature.) In general, you will have to get used to the notion that there are many equilibrium outcomes for a game. Indeed, in general there are many equilibria for one game. This is part of the reason for the many equilibrium concepts, which try to “reﬁne away” (lingo for “discard”) outcomes which do not appear to be sensible. There are about 280 diﬀerent solution concepts — so we will only deal with a select few which have gained wide acceptance and are easy to work with (some of the others are diﬃcult to apply to any given game.)
It is sometimes not quite clear what we are trying to do: tell players how they should play, or determine how they will play. There are some very interesting philosophical issues at stake here, for a discussion of which we have neither the time nor the inclination! However, let it be noted here that the view taken in this manual is that we are interested in prediction only, and do not care one iota if players actually determine their actions in the way we have modeled.
10
Game Theory 127
6.2.1
Equilibrium Concepts for the Strategic Form
We will start with equilibrium concepts for the strategic form. The ﬁrst is a very persuasive idea, which is quite old: Why don’t we eliminate a strategy of a player which is strictly worse than all his other strategies no matter what his opponents do? This is known as Elimination of (Strictly) Dominated Strategies. We will not formally deﬁne this since there are various variants which use this idea (iterated elimination or not, of weakly or strictly dominated strategies), but the general principle should be clear from the above. What we will do, is to deﬁne what we mean by a dominated strategy. Deﬁnition 13 Strategy a strictly dominates strategy b if the payoﬀ to the player is larger under a, independent of the opponents’ strategies: a strictly dominates b if πi (a, s−i ) > πi (b, s−i ) ∀s−i ∈ S−i . A similar deﬁnition can be made for weakly dominates if the strict inequality is replaced by a weak inequality. Other authors use the notion of a dominated strategy instead: Deﬁnition 14 Strategy a is is weakly (strictly) dominated if there exists a mixed strategy α such that πi (α, s−i ) ≥ (>)πi (a, s−i ), ∀s−i ∈ Σi and πi (α, s−i ) > πi (a, s−i ) for some s−i ∈ Σi . If we have a 2 × 2 game, then elimination of dominated strategies may narrow down our outcomes to one point. Consider the “Prisoners’ Dilemma” game, for instance. ‘Defect’ strictly dominates ‘Cooperate’ for both players, so we would expect both to defect. On the other hand, in “Battle of the Sexes” there is no dominated (dominating) strategy, and we would still not know what to predict. If a player has more than two strategies, we also do not narrow down the ﬁeld much, even if there are dominated strategies. In that case, we can use Successive Elimination of Dominated Strategies, where we start with one player, then go to the other player, back to the ﬁrst, and so on, until we can’t eliminate anything. For example, in the following game 1\2 L R (l, l) (2, 0) (1, 0) (r, r) (2, −1) (3, 1) (l, r) (2, 0) (3, 1) (r, l) (2, −1) (1, 0)
apart from the fact that it may not allow any predictions. one for each player. but it is also quite weak. There are some criticisms about this equilibrium concept. for which the argument that a player should never choose those appears weak. taking the other players’ strategies as given: σ∗ is Nash iﬀ ∀i. Why then would you eliminate one of these strategies just because somewhere else in the game (where you will not be) one is worse than the other? Next.12 Put diﬀerently. since (r. π2 (q)). of course. If we also eliminate weakly dominated strategies. This. we only check against deviations by one player at a time. so the strategic form is (π1 (q). 12 Formally we now have 2 players. von Neumann and Morgenstern had thought this problem too hard when they proposed it in their book Games and Economic Behaviour. Busch. We do not consider mutual deviations! So in the Prisoners’ Dilemma game we see that one player alone cannot gain from a deviation from the Nash equilibrium strategies Nash received the Nobel price for economics in 1994 for this contribution. r) too. These are particularly strong if one eliminates weakly dominated strategies. The Nash equilibrium for this game is the strategy vector (q∗ 1 . Their strategies are qi ∈ [0. b2 (q∗1 )). 11 . P −1 (0)]. after successive elimination of weakly dominated strategies. we will discuss the probably most widely used equilibrium concept ever. Nash equilibrium. Restricting attention to pure strategies. we can throw out (l. He extended the idea of mutual best responses proposed by von Neumann and Morgenstern to n players. l) is strictly dominated by (l.D. q∗2 ) = (b1 (q∗2 ). So we would predict. q2 ). thesis. imposing additional constraints to those imposed by Nash equilibrium. Denote by bi (q−i ) the best response function we derived in footnote 1 of this chapter. Microeconomics May2004 player 1 does not have a dominated strategy. and then player 1 has a dominated strategy in L. which we can now recognize as a Nash equilibrium. He did this in his Ph. This is the mutual best response property we ﬁrst saw in the Cournot equilibrium. that the outcome of this game is (R. (l. Note that the crucial feature of this equilibrium concept: each player takes the others’ actions as given and plays a best response to them. σ−i ) ≥ πi (σi . σ−i ). l) and (r. ∗ ∗ ∗ πi (σi . however. such that each player’s strategy maximizes that player’s payoﬀ. r)). Player 2 does. their payoﬀ functions are πi (q1 . Deﬁnition 15 A Nash equilibrium in pure strategies is a set of strategies.11 This is the most universally accepted concept. All other concepts we will see are reﬁnements of Nash. is just the computation performed in footnote 2. r).128 LA. ∀σi ∈ Σi . For example you might know that the opponent will play that strategy for which you are indiﬀerent between two strategies.
Game Theory 129 (Defect,Defect). We do not allow or consider agreements by both players to defect to (Cooperate,Cooperate), which would be better! A Nash equilibrium in pure strategies may not exist, however. Consider, for example, the “Matching Pennies” game: If player 2 plays ‘H’ player 1 wants to play ‘H’, but given that, player 2 would like ‘T’, but given that 1 would like ‘T’, ... We may need mixed strategies to be able to have a Nash equilibrium. The deﬁnition for a mixed strategy Nash equilibrium is analogous to the one above and will not be repeated. All that changes is the deﬁnition of the strategy space. Since an equilibrium concept which may fail to give an answer is not that useful (hence the general disregard for elimination of dominated strategies) we will consider the question of existence next. Theorem 1 A Nash equilibrium in pure strategies exists for perfect information games. Theorem 2 For ﬁnite games a Nash equilibrium exists (possibly in mixed strategies.) Theorem 3 For (N, S, U ) with S ∈ Rn compact and convex and Ui : S → R continuous and strictly quasi concave in si , a Nash equilibrium exists. Remarks: 1. Nash equilibrium is a form of rational expectations equilibrium (actually, a rational expectations equilibrium is a Nash equilibrium, formally.) As in a rational expectations equilibrium, the players can be seen to “expect” their opponent(s) to play certain strategies, and in equilibrium the opponents actually do, so that the expectation was justiﬁed. 2. There is an apparent contradiction between the ﬁrst existence theorem and the fact that Nash equilibrium is deﬁned on the strategic form. However, you may want to think about the way in which assuming perfect information restricts the strategic form so that matrices like the one for matching pennies can not occur.
1 2 k 3. If a player is to mix over some set of pure strategies {σi , σi , . . . , σi } in Nash equilibrium, then all the pure strategies in the set must lead
130 LA. Busch, Microeconomics
May2004
to the same expected payoﬀ (else the player could increase his payoﬀ from the mixed strategy by changing the distribution.) This in turn implies that the fact that a player is to mix in equilibrium will impose a restriction on the other players’ strategies! For example, consider the matching pennies game: 1\2 H T H (−1, 1) (1, −1) T (−1, 1) (1, −1)
For player 1 to mix we will need that π1 (H, µ2 ) = π1 (T, µ2 ). If β denotes the probability of player 2 playing H, then we need that β − (1 − β) = −β + (1 − β), or 2β − 1 = 1 − 2β, in other words, β = 1/2. For player 1 to mix, player 2 must mix at a ratio of 1/2 : 1/2. Otherwise, player 1 will play a pure strategy. But now player 2 must mix. For him to mix (the game is symmetric) we need that player 1 mixes also at a ratio of 1/2 : 1/2. We have, by the way, just found the unique Nash equilibrium of this game. There is no pure strategy Nash, and if there is to be a mixed strategy Nash, then it must be this. (Notice that we know there is a mixed strategy Nash, since this is a ﬁnite game!) The next equilibrium concept we mention is Bayesian Nash Equilibrium (BNE). This will be for completeness sake only, since we will in practice be able to use Nash Equilibrium. BNE concerns games of incomplete information, which, as we have seen already, can be modelled as games of imperfect information. The way this is done is by introducing “types” of one (or more) player(s). The type of a player summarizes all information which is not public (common) knowledge. It is assumed that each type actually knows which type he is. It is common knowledge what distribution the types are drawn from. In other words, the player in question knows who he is and what his payoﬀs are, but opponents only know the distribution over the various types which are possible, and do not observe the actual type of their opponents (that is, do not know the actual payoﬀ vectors, but only their own payoﬀs.) Nature is assumed to choose types. In such a game, players’ expected payoﬀs will be contingent on the actual types who play the game, i.e., we need to consider π(σi , σ−i ti , t−i ), where t is the vector of type realizations (potentially one for each player.) This implies that each player type will have a strategy, so that player i of type ti will have strategy σi (ti ). We then get the following:
Game Theory 131 Deﬁnition 16 A Bayesian Nash Equilibrium is a set of type contingent ∗ ∗ strategies σ ∗ (t) = (σi (t1 ), . . . , σn (tn )) such that each player maximizes his expected utility contingent on his type, taking other players’ strategies as given, and using the priors in computing the expectation:
∗ ∗ ∗ πi (σi (ti ), σ−i ti ) ≥ πi (σi (ti ), σ−i ti ), ∗ ∀σi (ti ) = σi (ti ), ∀i, ∀ti ∈ Ti .
What is the diﬀerence to Nash Equilibrium? The strategies in a Nash equilibrium are not conditional on type: each player formulates a plan of action before he knows his own type. In the Bayesian equilibrium, in contrast, each player knows his type when choosing a strategy. Luckily the following is true: Theorem 4 Let G be an incomplete information game and let G∗ be the complete information game of imperfect information that is Bayes equivalent: Then σ ∗ is a Bayes Nash equilibrium of the normal form of G if and only if it is a Nash equilibrium of the normal form of G∗ . The reason for this result is straight forward: If I am to optimize the expected value of something given the probability distribution over my types and I can condition on my types, then I must be choosing the same as if I wait for my type to be realized and maximize then. After all, the expected value is just a weighted sum (hence linear) of the conditional on type payoﬀs, which I maximize in the second case.
6.2.2
Equilibrium Reﬁnements for the Strategic Form
So how does Nash equilibrium do in giving predictions? The good news is that, as we have seen, the existence of a Nash equilibrium is assured for a wide variety of games.13 The bad news is that we may get too many equilibria, and that some of the strategies or outcomes make little sense from a “common sense” perspective. We will deal with the ﬁrst issue ﬁrst. Consider the following game, which is a variant of the Battle of the Sexes game:
One important game for which there is no Nash equilibrium is Bertrand competition between 2 ﬁrms with diﬀerent marginal costs. The payoﬀ function for ﬁrm 1, say, is π1 (p1 , p2 ) = (p1 − c1 )Q(p1 ) if p1 < p2 α(p1 − c1 )Q(p1 ) if p1 = p2 0 otherwise
13
which is not continuous in p2 and hence Theorem 3 does not apply.
and can then be indiﬀerent between one’s own strategies. M ) and (S. since a similar perfection criterion on the extensive form is more useful for what we want to do later. So what will happen? (Notice another interesting point about mixed strategies here: The expected payoﬀ vector in the mixed strategy equilibrium is (3/2. 3/2). 2) (0. and only allowing pure strategies that survive after the limit of these completely mixed strategies is taken. There is another commonly used concept. We ﬁrst deﬁne an “approximate” equilibrium for completely mixed games. Microeconomics 1\2 M S M (6. Deﬁnition 17 A dominant strategy equilibrium is a Nash equilibrium in which each player’s strategy choice (weakly) dominates any other strategy of that player. We will not use this much. which basically refers to additional conditions which will be imposed on top of standard Nash. 6) May2004 This game has three Nash equilibria. 0) S (0. it may be quite compelling. and the actual payoﬀ vector can be any of the three vectors in the game. Basically. Busch. and so on. Most of these reﬁnements are actually applied to the extensive form (since one can then impose restrictions on how information must be consistent. then take the limit: . so that the set of dominant strategy equilibria is empty.132 LA. but any of the four possible outcomes can occur in the end. it is included here for completeness. that of normal form perfect equilibrium. This eliminates many of the equilibria which are only brought about by indiﬀerence.” The problem with Nash is that one takes strategies of the opponents as given. You may notice a small problem with this: It may not exist! For example. If such an equilibrium does exist. S) — and one is a mixed strategy equilibrium where µ1 (S) = 1/4 and µ2 (S) = 3/4. Normal form perfect eliminates this by forcing one to consider completely mixed strategies. However. however. normal form perfect will reﬁne away some equilibria which are “knife edge cases. in the game above there are no dominating strategies. there is one common reﬁnement on the strategic form which is sometimes useful. Two are in pure strategies — (M.) However.) The problem of too many equilibria gave rise to reﬁnements. 0) (2.
> 0. 2’s payoﬀ from B is −50α. and we have (b. This can easily be seen from the following considerations. While the payoﬀs are the same in the perfect equilibrium and all the Nash equilibria. and let β denote the probability with which player 2 plays T . . and (b. Deﬁnition 19 A ntuple µ( ) = (µ1 . but not vice versa. which is less than zero as long as α > 0. µ−i ). in the perfect equilibrium we have to set (1 − β) < . . 0) (100. and if µi (sj ) ≤ if πi (sj . in the sense of having lower payoﬀs than other strategies. both α and (1 − β) thus approach zero. (b. . So. µ−i ) ≤ πi (sk .Game Theory 133 Deﬁnition 18 A completely mixed strategy for player i is one that attaches positive probability to every pure strategy of player i: µi (si ) > 0 ∀si ∈ Si . Now consider player 1. sk = sj . We can then take the limit as “seldom” becomes “never:” Deﬁnition 20 A Perfect Equilibrium is the limit point of an perfect equilibrium as → 0. consider the following game: 1\2 t b T (100. that is β > 1 − in any perfect equilibrium. . 0) B (−50. T ). 2’s payoﬀ from T is zero independent of α. 0) The pure strategy Nash equilibria of this game are (t. Let α denote the probability with which player 1 plays t. B). T ). The unique normal form perfect equilibrium is (b. . His payoﬀ from t will be 150β − 50. T ). . the perfect equilibrium is in some sense more stable. . T ) as the unique perfect equilibrium. µn ) is an perfect equilibrium of the normal form game G if µi is completely mixed for all i ∈ {1. and we require that α < . . To see how this works. Notice in particular that a very small probability of making mistakes in announcing or carrying out strategies will not aﬀect the nPE. As → 0.14 14 Note that an nPE is Nash. must be used very seldom. −50) (100. but it would lead to a potentially very bad payoﬀ in the other two Nash equilibria. Notice that this restriction implies that any strategies which are a poor choice. His payoﬀ from t is therefore less than from b for all β. . n}. while his payoﬀ from b is 100.
D followed by d. player 2. The deﬁnition we gave applies to both.7) 1 2 There are also some mixed strategy equilibria. for example. possible. Busch. a concept we will now make more formal.3 Equilibrium Concepts and Reﬁnements for the Extensive Form Next. even though he moves ﬁrst. 1) This game has 3 pure strategy Nash equilibria: (D. and his move is observed by player 2. 1].15 What is wrong with this? Consider the equilibrium (D. we are also assured existence of a Nash equilibrium as before. if asked to follow his strategy. (P2 (u) = 1. P2 (u) = α)) for 1 2 any α ∈ [0. (d.16 However. In this sense the action is not credible at that point in the game. namely (U. By threatening to play ‘down’ following an ‘Up’. Microeconomics May2004 6. The more general idea is that of an action that is not a best reply at an information set. as for example in the education game when the parents threaten to punish. and (U. 4) (8. 4) (1. would rather not. indeed. u)). The move d in this information set is only part of a best reply because under the proposed strategy for 1. u) (d.2. player 2 makes his preferred outcome. (P2 (u) ≤ 1/4. ends up with one of his worst payoﬀs.134 LA. d)). d) (4. u) (6. P2 (u) = 0)). This is done most easily by requiring all moves to be best replies for their part of the game. 6 r dr d 0. we will discus equilibrium concepts and reﬁnements for the extensive form of a game. d) (6. (u. Player 1. 4 2. (u. and play u instead of d if he ﬁnds himself after a move of U . 1) 4. 0) (4. 2) 2r ¨ ¨ rr2 u d d u d d (u. (d. and thus it does not matter (to player 2’s payoﬀ) which action is speciﬁed. and obtains his highest possible payoﬀ. 16 It is sometimes not clear if the term ‘incredible threat’ should be used only if there is some actual threat. Player 1 moves ﬁrst. 0) (1. 8 (d. Would player 1 really believe that player 2 will play d if player 1 were to choose U . Since our extensive form game. (See also Figure 6. as we have deﬁned it so far. This is a type of behaviour which we may want to rule out. d)). here given in both its extensive and strategic forms: 1 b 2\1 U D ¨r ¨ rrD U¨ ¨ rr (u. is a ﬁnite game. First of all. which is taken as given in a Nash equilibrium. 2) (8. d)). it should be clear that a Nash equilibrium of the strategic form corresponds onetoone with a Nash equilibrium of the extensive form. (U. This is called an incredible threat. Consider the following game. and (D. this information set is never reached. given that player 2’s payoﬀ from going to u instead is higher? Probably not. 1 r dr d 1. 15 .
In the example above. in other words the implied path through the game tree (along some set of arcs. There are three proper subgames. not only along the equilibrium path. with the last information sets before the terminal nodes.E. 2. which tries to exclude incredible threats by assuring that all strategies are best replies in all proper subgames.17 Deﬁnition 22 A strategy combination is a subgame perfect equilibrium (SPE) if its restriction to every proper subgame is a subgame perfect equilibrium. only (U. Start at the end of the game. The equilibrium path is. Only u is a best reply in the ﬁrst.F. We can now deﬁne a subgame perfect equilibrium. then ΓV is called a subgame.) 17 . one starting at his second information set. only d in the second. the sequence of actions implied by the equilibrium strategies.7: Valid and Invalid Subgames Deﬁnition 21 Let V be a nonterminal node in Γ.Game Theory 135 ¨¨ A c ¨r s E ¢f ¢ f ¢ f e B ¨¨ ¨ s C s e es F ¢f ¢ f ¢ f rr D rs ¡e ¢f ¡ e ¢ f es ¡ ¢ fs G¡ e e H e ¡ e es e ¡ s I¡ e ¡ eK ¡ e ¡ e ¡ e ¡ e rr Subgames start at: A.C. (u.D. Remarks: 1. and let ΓV be the game tree comprising V as root and all its followers. and thus only U in the last.G.H No Subgames at: B. d)) is a SPE. and one which is the whole game tree. If all information sets in Γ are either completely contained in ΓV or disjoint from ΓV .K Figure 6. basically. Subgame Perfect Equilibria exist and are a strict subset of Nash Equilibria.I. one starting at player 2’s ﬁrst information set. Subgame Perfect equilibrium goes hand in hand with the famous “backward induction” procedure for ﬁnding equilibria.
in other words. the SPE are all the Nash equilibria. T ) and (b. c)) is a SPE. (t. Then back up one level in the tree. Busch.. Subgame Perfection is a useful concept in repeated games. 3.1) In this game there is no subgame starting with player 2’s information set after 1 chose B or C. where a simultaneous move game is repeated over and over. . This procedure is repeated until the root node is reached. i. Subgame Perfection and normal Form Perfect lead to diﬀerent equilibria. In that setting a proper subgame starts in every period. The resulting strategies are Subgame Perfect. even though c is strictly dominated by a in the nontrivial information set.0) c (1.0) 2s (0. Notwithstanding the above. and we get that (A. −50 100. 0) (100. Incredible Threats are only eliminated if all information sets are singletons. 4.136 LA. Microeconomics May2004 and determine the optimal action there. −50) (100. Since the optimal action in the last moves is now known. As a counterexample consider the following game: a (0. 0 As we had seen before. 0 1\2 t b T (100. 5. B). but since there are no subgames. 0) 100. and therefore the equilibrium concept reverts to Nash. T ). 0 −50. (b. and consider the information sets leading up to these last decisions.e. the nPE is (b. T ). and the second last level can be determined in a similar fashion. Consider the game we used before when we analyzed nPE: 1 b ¨r rrb ¨ t¨ r ¨ r ¨ p p p p p p p 2 p p p p p p p rp r p p p pr ¨p T d B T d B dr dr d r r d 100. they can be replaced by the resulting payoﬀs. in games of perfect information. (c. 0) B (−50.0) 2 2 1¨¨¨ B c 222 s c rr (1. and thus at least incredible threats with regard to future retaliations are eliminated.1) ¨ c A ¨ ¨ a 2 (1/2.1) r rr2 a C rs (1.
Deﬁnition 23 A system of beliefs φ is a vector of beliefs for each player.8: The “Horse” As we can see. over the nodes in each i of player i’s information sets Sij : K φj i : xj k → [0. In other words. 4.Game Theory 137 1c D s t A d 3 t tr t t 2 s a (1. l). 0. r) and (D. 0. where φi is a vector of probability distributions. The second one. a. i k Deﬁnition 24 An assessment is a system of beliefs and a set of strategies. 1]. 1. we have to introduce what the player believes about his situation at that information set. (A. l). 0) (0. φj . Both are subgame perfect since there are no proper subgames at all. 0) (4. φi .” This game has two pure strategy Nash equilibria. . 1) Figure 6. (D. For games of imperfect information we cannot use the idea of best replies in all subgames. since a player may not know at which node in an information set he is.8. k=1 j φj (xk ) = 1. 2. ∀xj ∈ Sij . Consider the example in Figure 6. 2) (0. a is not a best reply for player 2 if he actually gets to move. 1) s t l l t tr t t (3. By introducing a belief system — which speciﬁes a probability distribution over all the nodes in each of the player’s information sets — we can then require all actions to be best responses given the belief system. SPE does nothing to prevent incredible threats if there are no proper subgames. We would like to ensure that the actions taken at an information set are best responses nevertheless. a. which would improve his payoﬀ from 1 to 4. a. since player 2 could. (σ ∗ . In order to deal with this aspect. if he is actually asked to move. the following equilibrium concept has been developed. play d. is “stupid” however. In order to do so. φ∗ ). which is commonly called “The Horse.
There are other sellers of genuinely low quality. ∀Sij . warranties. The consistent belief for three thus is given by (from Bayes’ Rule) α(n) = (1/n)2 n 1 = 2 = . and 2 plays d. where σn is a sequence of completely mixed behavioural strategies and φn are beliefs consistent with σn being played (i. φn ). . some work will be required in using this concept! Reconsider the horse in Figure 6. This destroys our proposed setup. which converges to zero. also converging to zero.e. sequential rationality for 1 requires him to play A (4 > 3). The strategy combination (A. The general setup is that an informed party tries to convey information to an uninformed party. since player 2 will play d any time 3 plays l with more than a 1/4 probability. σ−i Sij ) ≥ E φ πi (σi . φ∗ ) is a sequential equilibrium if it is consistent and sequentially rational. For example. Microeconomics Deﬁnition 25 An assessment (σ. φ∗ ) is consistent if (σ ∗ . φ∗ ) = limn→∞ (σn . (1/n)2 + (1 − (1/n)2 )(1/n) n +n−1 n + 1 − 1/n which converges to zero as n goes to inﬁnity. Signalling Games This is a type of game used in the analysis of quality choice. the fact that I spend money advertising should convey the information that my product is of high quality to consumers who are not informed about the quality of my product.138 LA. φ) is sequentially rational if ∗ ∗ ∗ E φ πi (σi . But as weight shifts to l for 3. We thus need that 1 plays down almost surely. As usual. where α denotes player 3’s probability assessment of being at the left node. a. advertising. ∀σi ∈ Σi . σ−i Sij ) . or education and hiring. l). r) is a sequential equilibrium with beliefs α = 0. Busch. You can see this by considering the following sequence of strategies: 1 plays D with (1/n)2 . ∀i. 2 plays d with (1/n). as required. ∗ ∗ May2004 Deﬁnition 26 An assessment (σ ∗ . the tougher part is to show that there is no sequence which can be constructed that will lead to (D. obtained by Bayesian updating. As you can see.) Deﬁnition 27 An assessment (σ ∗ . a. Here is a short outline of what is necessary: In order for 3 to play l we need beliefs which put at least a probability of 1/3 on being at the left node..8.
R). l)). Let 2’s belief of facing a low type player 1 be denoted by α if 2 observes L. r)). only high types move Right. Consider the game in Figure 6. Both types take the same move in equilibrium. 0) (−4. The equilibrium concept used for this type of game will be sequential equilibrium. (l. Notice. If you ﬁnd yourself. and they will try to mimic my actions. S2 ) indicate 1’s action if he is the low and high type. The payoﬀs are as indicated. (α = 1. Now introduce player 2’s beliefs. inadvertently in your ﬁrst information set. 3) Figure 6.5)) (It goes without saying that you should try to verify this claim!). This type of game is called a game of asymmetric information. and by β if 2 observes R. in contrast. and ((L. and no information is transmitted. r)). and the move thus reveals the type of player. (r. as player 2. β = 0)) and ((R. that the belief that α = 0 is somewhat stupid. −2) (−2. r). respectively. 2) (−2. The second equilibrium. −2) r t (1/2) s l L 1 s R s r l (−1. The usual question is if the informed party can convey its private information or not.9: A Signalling Game however.9. s2 ) indicates 2’s response if he observes L and R. (r. Only low types move Left. L). knowing his type. Nature determines if player 1 is a high type or a low type. since we have to model the beliefs of the uninformed party.Game Theory 139 (3. since one of the parties is completely informed while the other is not. Player 2 then. The action of player 1 completely conveys the private information of player 1. We then can get two sequential equilibria: ((L. without knowing 1’s true type. R). Player 1 moves left or right. In order to be credible my action should therefore be hard (costly) to mimic. (l. The ﬁrst of these is a separating equilibrium. 1) ¯ t (1/2) Root: eNature r (−3. −1) l r s α L 1 s R β s l (1. In the above game. however. the Nash equilibria are the following: Let player 1’s strategy vector (S1 . R). R). (r. r). respectively. while player 2’s strategy vector (s1 . what should you believe? Your beliefs here say that . β = 0. moves left or right also. is a pooling equilibrium. −1) (2. ((R. (α = 0. Then we get that the pure strategy Nash equilibria are ((R.
Busch. Question 2: Deﬁne “perfect recall” and provide two examples of games . This is “stupid”.140 LA. 1e ¨¨ g ¨¨ g L ¨¨ M g R ¨ g ¨¨ g 2 ¨¨ ¨ s gs d d (2. messy. of course. 2) d u d u dd d d (3. 0) (0. Finally. even though the games would appear to be quite closely related.10: A Minor Perturbation? 6.5. 0) (1. 0) (0. while the low type’s move L is not dominated by R. notice a last problem with sequential equilibrium. The basic idea is simple: of the equilibrium path you should only put weight on types for whom the continuation equilibria oﬀ the equilibrium path are actually better than if they had followed the proposed equilibrium. 2) ¨¨ s ¨ d u dd d ¨¨ ¨ ¨ ¨¨ ¨ M ¨ Not L 1 ¨s R 2 ¨ ¨ u ¨¨ s ¨¨ d (3. Look up the ChoKreps criterion in any good game theory book if you want to know the details. therefore. In particular. 3) (1. 0) (1. that if anything it was the low type who was trying to “tell you something” by deviating (you are not on the equilibrium path if you observe L. It would be more reasonable to assume. 1) Figure 6.10 have diﬀerent sequential equilibria.5. Then draw a well labelled example of such a game in which you indicate all the elements of the deﬁnition. Microeconomics May2004 you think it must have been the high type who made the mistake. since the high type’s move L is strictly dominated by R. 1) ¨¨ 1e ¨¨ ¨ ¨ ¨¨ L ¨¨ ¨¨ (2. The details are. the two games in Figure 6. Minor perturbations of the extensive form change the equilibrium set. 3) (1. but we will not go into them here. remember!) There are reﬁnements that impose restrictions like this last argument on the beliefs out of the equilibrium path.3 Review Problems Question 1: Provide the deﬁnition of a 3player game in extensive form.
2) (1. The frequency of these accidents depends on the eﬀort and care taken by the railway — but these are unobservable by the town. 3) (3.Game Theory 141 which violate perfect recall for diﬀerent reasons. 1) (1. 1) (3. 1) (3. A railway line passes through a town. 1) (1. assume that there are only two levels of eﬀort/care (high and low) and that the courts can determine with certainty which level was in fact used. 1. Also assume that going to court costs the railway and the town money. 1) (4. 2) (2.. if an accident has occurred. 3) (1. 2) (4. 2. Finally. 1) Question 6: Consider the following situation and construct an extensive form game to capture it. 2. 1. 1. 1. 4) (0. The town may. 2. there is a “standard accident”). 6. Be clear and concise and do not actually attempt to solve for the mixed strategies. 3. Will this game have a Nash equilibrium? Will it have a Subgame Perfect Equilibrium? Why can you come to these conclusions? Question 4: Consider the following 3 player game in strategic form: Lef t 1\2 U C D L R C (1.e. 2) (1. 1) (1. 2) (1. but will only be successful in obtaining damages if it is found that the railway did not use a high level of care. 1. 1) (2. that eﬀort is costly for the railway (high eﬀort reduces proﬁts). 4) Would elimination of weakly dominated strategies lead to a good prediction for this game? What are the pure strategy Nash equilibria of this game? Describe in words how you might ﬁnd the mixed strategy Nash equilibria. 3) Player 3 Right 1\2 L U C D R C (2. 1. accidents will happen on this railway line and cause damage and impose costs on the town. 1. 1) (2. 3) (2. 2) (2. 4) (2. For simplicity. 1. 2. Question 5: Find the mixed strategy Nash equilibrium of this game: 1\2 U C L R C (4. 2. assume that if the railway is “guilty” it has to pay the town’s damages and court costs. that accidents cost the railway and the town money and that this cost is independent of the eﬀort level (i. . Occasionally. 0. 4) (5. 2. 3. 2) (2. Question 3: Assume that you are faced with some ﬁnite game. sue the railway for damages.
what would α have to be? Question 8: (Cournot Duopoly) Assume that inverse market demand is given by P (Q) = (Q − 10)2 . We also want to train our understanding of sequential equilibrium. This means that they take the other ﬁrms’ outputs as given and consider their own inverse demand to be pi (qi ) = ( j=i qj + qi − 10)2 . One can solve for the socalled reaction function of one ﬁrm (it’s best reply function) which gives the proﬁt maximizing level of output for a given level of joint output by all others.) Show that market output converges to the competitive output level as n gets large. (HINT: Firms are symmetric. derive the multilateral best response strategies for the output choices: Given every other ﬁrm’s output. Derive the Nash equilibrium output and price.4. Assume that the seller receives a utility of 10 from a working product and 0 from a nonworking product. Busch. a given ﬁrm’s output is proﬁt maximizing for that ﬁrm. The buyer does not know the type of seller. respectively. The buyer knows that 1/2 of the sellers are of high quality. Show that no separating equilibrium exist.) If the seller does not buy the object he is assumed to get a utility level of 0. where Q refers to market output by all ﬁrms. It is then enough for now to focus on symmetric equilibria. (That is. We want to investigate if signalling equilibria exist. so use that as the equilibrium concept in what follows. high and low. and can only determine if a good breaks.1 and 0. .) a) What is the subgame perfect equilibrium of this game? b) In order for Romeo to be able to insist on his preferred outcome. Microeconomics May2004 Question 7: Consider the following variation of the standard Battle of the Sexes game: with probability α Juliet gets informed which action Romeo has taken before she needs to choose (with probability (1 − α) the game is as usual. which we will take to be the probability with which the object breaks during use. Symmetry then suggests that each ﬁrm faces the same joint output by its (n − 1) competitors and produces the same output in equilibrium. For simplicity assume that there are only two quality levels. Finally. Assume further that there are n ﬁrms in the market and that they all have zero marginal cost of production.) Question 9∗ : Assume that a seller of an object knows its quality. with breakdown probabilities of 0. So we can substitute out and solve. and that his utility is linear in money (so that the price of the good is deducted from the utility received from the good. This holds for all ﬁrms. a) Assume that sellers can only diﬀer in the price they charge.142 LA. The cost of the object to the sellers is assumed to be 2 for the low quality seller and 3 for the high quality seller. and 1/2 of low quality. assume that all ﬁrms are Cournot competitors. but not its quality.
Game Theory 143 b) Now assume that the sellers can oﬀer a warranty which will replace the good once if it is found to be defective. The Stackelberg equilibrium is the SPE for this game. Find the Cournot Equilibrium and calculate equilibrium proﬁts. Does a separating equilibrium exist? Does a pooling equilibrium exist? Question 10∗ : Assume a uniform distribution of buyers over the range of possible valuations for a good. (This means that ﬁrm 1 moves ﬁrst and ﬁrm 2 gets to observe ﬁrm 1’s output choice. a) Derive the market demand curve. 2]. [0.) d) What is the joint proﬁt maximizing price and output level for each ﬁrm? Why could this not be attained in a Nash equilibrium? . c) Assume that ﬁrm 1 is a Stackelberg leader and compute the Stackelberg equilibrium. 2 b) There are 2 ﬁrms with cost functions C1 (q1 ) = q1 /10 and C2 (q2 ) = q2 .
144 LA. Busch. Microeconomics May2004 .
z}) = {x}. z}) = {x}. w}) = {w}. since the above implies that y w x z but we also have x y! Question 2: Here you have to make sure to maximize income for any given 145 . z. that you have to check back and forth to make sure that the WA is indeed satisﬁed. y. z}) = {x}. {y. C({y. you also have to check for y. y. C({x. The problem is intransitivity. d) I was aiming for an application of out Theorem: our set of budget sets B does not contain all 2 and 3 element subsets of X. y. Missing are {x. C({x}) = {x}. y}) = {x. z}) = {x}. z}) = {w}.e. C({y. Examination of the sets B shows that the two choices y. in deriving the above) it is x w y z. actually. C({x. w}) = {x}. however.. z}) = {y}. it is transitive. y} does not satisfy the axiom since while the check for x seems to be ok. {w. C({x.Chapter 7 Review Question Answers 7. c) Yes. and there it fails. z}. e) The best way to go about this one is to determine where we can possibly get this to work. w. C({x. w}) = {w}.) One choice structure that does work is C({x. C({x. z}) = {x}. w. Some ﬁddling reveals that the following works: C({x. C({x. y}. C({x. z. (I. w}. C({y. b) Yes (I thought of that ﬁrst. C({y. x only appear in one of the sets and thus must be our key if we want to satisfy the WA without having rational preferences. w}) = {w}.1 Chapter 2 Question 1: a) There are multiple C(·) which satisfy the Weak Axiom. C({x}) = {x}. w}) = {x}. C({x. z}. {x. w}) = {y}. Note. y.
Simply plotting the two and then taking the outer hull (i. where w i < wi . w1 if h1 ≤ C1 and The wage schedules have the general form w1 (h1 ) = w1 if h1 ≥ C1 w2 if h2 ≤ C2 . I ignore here that no hours w2 (h2 ) = w2 if h2 ≥ C2 above 8 are possible for either job. the highest frontier) for each leisure level gives you the frontier. Microeconomics work hrs 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 Part a 1 then 2 112 124 136 148 160 172 188 204 212 220 234 248 262 276 290 304 Partb 1 then 2 2 then 1 112 108 124 116 2 136 130 3 1 148 145 3 160 160 2 172 174 3 1 188 189 3 204 204 212 216 220 228 2 234 3 240 1 249 3 252 264 264 2 278 3 276 1 293 3 292 308 308 May2004 2 then 1 108 116 130 144 158 172 186 200 212 224 236 248 260 272 288 304 Max 112 124 136 148 160 172 188 204 212 224 236 248 262 276 290 304 Max 112 124 136 148 160 174 2 3 189 1 3 204 216 228 240 252 264 278 2 3 293 1 3 308 Table 7.1: Table 1: Computing maximal income amount of work. c) This is a possibly quite involved problem. for any given total amount worked.. The intuitive answer is that it will not matter since marginal and average pay is (weakly) increasing in both jobs.e. Busch. In parts (a) and (b) you have to choose to work either job 1 then job 2 (after 8 hours in job 1) or job 2 then job 1. In neither part a) nor in part b) is the budget set convex.146 LA. You can best see this eﬀect by considering a table in which you tabulate total hours worked against total income. Here is a more general treatment of these questions: We really are faced with an maximization problem. respectively. choosing to put that information into the constraints later. where wi (hi ) are the wage schedules. and by doing job 2 ﬁrst. h2 ) = h1 w1 (h1 ) + h2 w2 (h2 ). computed by doing job 1 ﬁrst. In (a) they only cross twice (at 9 and 12 hours of work) while in part (b) they cross 4 times. to max income given the constraints. Let h1 and h2 denote hours worked in jobs 1 and 2. Then the objective function is I(h1 . This is shown in Table 1. .
for any given number of hours H the hours in each job are “perfect substitutes”. (For our parameters all of S(A). h2 ≥ C2 }. h2 )h1 ≥ C1 .) For parts (a) and (b) the feasible set consists of the boundaries of the 8 × 8 square of feasible hours.Answers 147 Consider now the isoincome curves in (h1 . h2 ≤ C2 }. The resulting budget set will be convex. 0 ≤ hj ≤ 8. h2 )h1 ≥ C1 . however. of course.3 x0. starting with the highest paid and progressing to the lower paid ones in order. while S(B) > −1. Note that without any restrictions H = h1 + h2 . The choice set is thus given by the intersection of the isohour lines with the feasible set (the box boundary). 1 2 0. The utility function is quasiconcave (actually. Now superimpose the choice sets onto this.3x−0. These isohour curves are all straight lines with a slope of −1. We will have four regions to consider. Works pretty well. S(C) = −w 1 /w2 . doesn’t it! d) Now we “clearly” take up jobs in decreasing order of pay.) The important fact which follows from all of this is that the isoincome curves are all concave to the origin and piecewise linear. with the comparison of S(A) to S(D) indeterminate.6x0. h2 ≤ C2 }. C = {(h1 . S(D) = −w 1 /w2 . that S(C) < S(A) < S(B) as well as that S(C) < S(D) < S(B). This implies. namely A = {(h1 . 1 2 x1 p1 + x2 p2 = m. or where hi = 8.3x2 = 0. Due to the concavity to the origin of the isoincome lines this is of no relevance. S(B) = −w 1 /w2 . h2 )h1 ≤ C1 .6 = λp1 . that is. h2 ) space which result.3 x−0. B = {(h1 . (But luckily not needed in any case. Question 3: The consumer will maxx x0. h2 )h1 ≤ C1 . It is obvious that S(C) < S(D) and S(A) < S(B). Note how I have used our usual techniques of isoobjective curves and constraint sets to approach this problem. hj < 8. S(C).7 x0. but his is not important. so we have S(A) = −w 1 /w2 . where either hi = 0. h2 ≥ C2 }.6x1 p2 =⇒ x2 = 2p1 x1 . D = {(h1 . so the second order conditions will be satisﬁed. as well as that S(C) < S(A) and S(D) < S(B).6 + λ (m − x1 p1 − x2 p2 ) 1 2 which leads to the ﬁrst order conditions 0.4 = λp2 . p2 . S(D) are less than −1. In part (c) this restriction is removed and the whole interior of the box is feasible. The slope of the isoincome curves for the regions is easily seen to be the negative of the ratio of wages. Combining the ﬁrst two ﬁrst order conditions we get p1 0. strictly concave in this case) and the budget set convex.
neither the Hicksian nor the Marshallian demands are functions. In the former the substitution eﬀects approach inﬁnity. Up along that ray to an intersection with a budget from A with slope 1/5 (point E). 3p1 May2004 2m m 2m Now use this to solve for x2 : x2 = 3p2 . x2 A ¢ ¢ d g g d¢ C E g ¢d g¢ d Bg ¢ d D ¢g d d ¢ g F ¢ g d x1 Now we can solve for the demands along the diﬀerent pieces of the oﬀer curve and get the Marshallian demand. It then continues on this budget and the coinciding indiﬀerence curve segment to the ray with slope 1/3 (point D). Busch. x2 (p.. From there it follows the ray with slope 3 until that ray intersects a budget drawn from A with a slope of 1 (point C). C to D.148 LA. 3×1 1 Question 4: The key is to realize that this utility function is piecewise linear with line segments at slopes −5. along that budget to the intercept with the horizontal axis (point F). and then along the horizontal axis oﬀ to inﬁnity. The function has either a perfect substitute or Leontief character. Demands are easiest derived from the price oﬀer curve. in the latter they are zero. It follows the indiﬀerence curve segment with −5 slope to the ray with slope 3. Note that demand is either a whole range. Properly speaking. −1/5. To ﬁnd the particular quantity demanded. The ranges can be computed from the endpoints (i. 3p2 . 3×3 3 3 2(3 × 412 + 1 × 72) 2(412 + 24) = = 872. So (x1 (p. which is a nice zigzag line. A to B. The segments join at rays from the origin with slopes 3 and 1/3.) Along the rays demand is solved as for . m)) = 3p1 . −1. from left to right. m). or a “proper demand”. simply plug in the numbers and simplify: x1 = x2 = 412 + 24 436 3 × 412 + 1 × 72 = = . Microeconomics Substitute into the budget constraint and simplify: x1 p 1 + 2p1 x1 p 2 = m p2 =⇒ x1 = m . It starts at the intercept of the budget with the vertical axis (point A).e. E to F. Call this point B.
if p1 /p2 = 1/5. So for example on the ﬁrst ray segment (B to C) we know that x2 = 3x1 . 5w/(8p1 )] . But by how much? The higher the value of the elasticity. Question 5: The elasticity of substitution measures by how much the consumption ratio changes as the price ratio changes (both measured in percentages. if 1 > p1 /p2 > 1/5. if 5 > p1 /p2 > 1. Instead. M RS = x1−ρ (1/ρ)(xρ + xρ )(1−ρ)/ρ (ρxρ−1 ) xρ−1 u1 1 2 1 1 = = ρ−1 = 2 . if 1/5 > p1 /p2 . 3u/16] . if 5 = p1 /p2 . 0. u/8] .) In other words.1/5. u) = [u/8. 3w/(3p1 + p2 ). 0. if p1 /p2 > 5. if p1 /p2 = 1. The income expansion paths and Engel curves can be whole regions at price ratios 1. u/8. (For the Hicksian demand we simply need to ﬁx one indiﬀerence curve and compute the points along it. p1 x1 + p2 x2 = w. u] . two utility functions represent the same preferences. 3w/(4p1 )] . and the Engel curves are straight increasing lines. w/p1 . as the price ratio changes the slope of the budget changes and we know this will cause a change in the ratio of the quantity demanded of the goods.Answers 149 a Leontief consumer: we know the ratio of consumption.5. if 5 > p1 /p2 > 1. if p1 /p2 = 5. the larger the response in demands. w/(3p2 + p1 ). otherwise the income expansion paths are the axes or rays. h1 (p. w/p1 ] . if 1/5 > p1 /p2 . 3u/16. we will take the limits of the MRS and compare those to the MRSs of the other functions. w) = [w/(4p1 ). and x1 = w/(p1 + 3p2 ). Hence x1 (p1 + 3p2 ) = w. we know the budget.) The demands for good 1 therefore are if p1 /p2 > 5. if 1 > p1 /p2 > 1/5. [3u/16. [0. or we stay at a kink for a range of prices. Also. We then get either a segment (like A to B above). The demands for good 2 are similar and left as exercise. 1−ρ u2 (1/ρ)(xρ + xρ )(1−ρ)/ρ (ρxρ−1 ) x2 x1 1 2 2 . if p1 /p2 = 1/5. u. if p1 /p2 = 1. [0. So instead of taking the limit of the utility function directly. [3w/(8p1 ). as long as the MRS is identical at every point. x1 (p. Question 6: First we need to realize that the utility index which each function assigns to a given consumption point does not have to be the same.
Busch. The second and third FOC above tell us that x3 /x2 = 2p2 /p3 . But as ρ → 1 the MRS of our function is (x2 /x1 )0 = 1. May2004 So. Question 7: Set up the consumer’s optimization problem: maxx1 . or ∞. The FOCs are 1 − λp1 = 0. x1 Perfect Sub: 1. If it is not we must be at a corner solution. x2 }.x3 {x1 + lnx2 + 2lnx3 + λ(m − p1 x1 − p2 x2 − p3 x3 )} . But as ρ → 0 the MRS of our function is just that. We can best represent this in an Edgeworth box. But if x2 > x1 the fraction is greater than 1 and an inﬁnite power goes to inﬁnity. Microeconomics The MRSs for the other functions are CD: x2 .150 LA. p1 . The perfect substitute function x1 + x2 has a constant MRS of 1. But as ρ → −∞ we see that (x2 /x1 )1−ρ → (x2 /x1 )∞ . Therefore the second and third give us x2 = p1 /p2 and x3 = 2p1 /p3 . Hence (remember x1 = 0 now) m = p2 x2 + 2p2 x2 and x2 = m/(3p2 ) while x3 = 2m/(3p3 ). The ﬁrst of these allows us to solve for λ = 1/p1 .x2 . m . m) = 0. consider the Leontief function min{x1 . Leon: 0. So we get m − 3. this is only sensible if m > 3p1 . 2 p1 if m > 3p1 p1 p2 p3 x(p. Of course. Its MRS is 0 or ∞. The parameter ρ essentially controls the curvature of the IC’s. If x2 < x1 the fraction is less than one and the power goes to zero. x2 2 − λp3 = 0 x3 and the budget. 2m if m ≤ 3p1 3p2 3p3 Question 8: Here we have a pure exchange economy with 2 goods and 2 consumers. Therefore the CES function “looks like” those three functions for those choices of ρ. The CobbDouglas function x1 x2 has MRS x2 /x1 . . In that case x1 = 0 and all money is spent on x2 and x3 . 1 − λp2 = 0. Combining this with the budget we get x1 = m/p1 − 3.
we are done. 195 780 . and therefore must have a price ratio of 4/3.Answers 151 20 d d ˆ ds ω d d d x∗ E ω 5 11. the MRS for person B is 3x2 /(4x1 ). B’s preferences are quasilinear with respect to x1 . 28 63 . (xB )) = 4 195 780 . and therefore above the main diagonal. and let consumer B have the lower left hand corner as origin. The ﬁrst thing to do is to ﬁnd the Pareto Set (the contract curve). 20 − . From this we get 16x1 /9 − 11 = 32/3 − 12x1 /9 and from that 28x1 /9 = 65/3 and thus x1 = 195/28 < 20. Since both of B’s consumption points are strictly within the interior of the box. Or it is on the upper boundary of the box. There now are 2 possibilities for the equilibrium. (xA ). Question 9: Again we have a square Edgeworth box. All that remains is to compute A’s allocation. 20 − 3 28 63 .) The dimensions of the box are 20 by 20 units. Again I choose to put consumer B on the bottom left origin. A’s are piecewise linear with slopes 4/3 and 3/4 which . x2 = 16x1 /9. Therefore the Pareto Set is deﬁned by x2 /x1 = 16/9 (in person B’s coordinates. The ray. 20×20. but we know that B’s consumption level for good 2 is 20. and the budget line (x2 − 11) = 4(8 − x1 )/3. It follows that x2 = (16 × 195)/(9 × 28) = 780/63 = 260/21 < 20. The Pareto set is this ray and the portion of the upper boundary of the box from the ray’s intersection point to the origin of A.25 OA 11 OB 20 8 Suppose x1 is on the horizontal axis and x2 is on the vertical. in which case the price ratio must be below 4/3. consumer A the upper right hand corner. (I made this choice because I like to have the “harder” consumer oriented the usual way. Either it is on the ray. The equilibrium is therefore (p∗. since we know that any equilibrium has to be Pareto eﬃcient.) This is a straight ray from B’s origin with a slope greater than 1. The MRS for person A is 4/3. In the ﬁrst case we have 2 equations deﬁning equilibrium.
up the 2 main diagonal to the point (xB .152 LA. Since this is less than 20 we have found an interior point and are done. So we get 11 16/3 − 33 = 32 − 4x1 . y ∈ B with the property that x ∈ C(B). Busch. we are presuming that x2 = 16/9. x. The ﬁrst part of the deﬁnition requires ∃B. By the weak axiom there are two possibilities: either all B ∈ B with x. 14 11 16 . from there along the 2 1 B horizontal line x2 = 16/9 to the right hand edge of the Box. 16/9). y. 3 12 9 . So. The MRS for B’s preferences is 3x2 /4. the most likely candidate for an equilibrium is a price ratio of 4/3 with an allocation on the second horizontal line segment. (⇒) : Let x ∗ y. Then ∃B. Second. The equilibrium is (p∗. xB ) = (16/9. Consider all other B ∈ B with the property that / x. the Pareto Set is the vertical axis from B’s origin to xB = 1. x. y ∈ B . the budget equation (in B’s coordinates) is 3(x2 − 11) = 4(8 − x1 ). First. (xB )) = 4 1 164 . 1) in other words). ∈ B. 5 . Question 10: To prove this we need to show the implication in both directions: (⇐) : Suppose x y. Let us attempt to solve for it. and then up that border to A’s origin. the horizontal 2 line xB = 1 to the main diagonal (the point (1. OA 20 11 ω 16/9 x∗ 1 OB 8 20 By inspection. Microeconomics May2004 meet at the kink line x2 = x1 (in A’s coordinates!) which coincides with the main diagonal. 12 9 . (xA ). y} ∈ C(B ) or none have y ∈ C(B ). The second part of . y ∈ B have {x. We have Pareto optimality when 3x2 /4 = 4/3 → x2 = 16/9 and when 3x2 /4 = 3/4 → x2 = 1. By the Weak Axiom ∃B with y ∈ C(B ) since otherwise the set B would violate the weak axiom (applied to the choice y with the initial set B . y ∈ C(B). x ∈ C(B).) Therefore x ∗ y. or x1 = 14 12 .
10 2(1 + β) − α 2+β . xB )) = 1 2 1 2 α. Since these cases are (sort of) symmetric we pick one. β Hence α(10 − x1 ) = αx1 /β or αβ10 = x1 (α + αβ) and thus x1 = β10/(1 + β) and x2 = α10/(1 + β). B = {{x. It remains to determine p and the 2 2 allocations for good 1. At the equilibrium point the budget must be ﬂatter . On the ∗ other hand it is not true that x y (let B = {x. Either it occurs on the part of the Contract curve interior to the box. A’s the top right. y}) = {x} demonstrates that x y by deﬁnition. These are the consumption levels for B. The equilibrium thus would be (p∗ . y}. or (α − β) < (2 + β). z} in ∗ . Suppose B’s origin on the bottom left. Note that the budget now coincides with A’s indiﬀerence curve through the endowment point. y}) = {x}. or it occurs on the boundary of the box. A gets the rest. {x. If that is not true we ﬁnd ourselves in the other case. y. z}}. and / so x y. with the endowment in the centre. As in question 8 there are two cases for the competitive equilibrium. In the ﬁrst case the slope of the budget and hence the equilibrium price must be α. y. but then y ∈ C(B). 10 1+β 1+β . and assume that α/β > 1. The equilibrium allocation is determined by the intersection of the contract curve and this budget/IC. In that case we know that we are looking for an equilibrium on the upper boundary of the box and thus know that xB = 20 while xA = 0. C({x. The contract curve is this ray and then the part of the upper edge of the box to A’s origin. B’s ICs have a MRS of βx2 /x1 and are CobbDouglas. C({x. that is. As in question 8. The contract curve in the interior must have the MRSs equated. y. A’s indiﬀerence curves have a constant MRS of α and are perfect substitute type. so it occurs where (in B’s coordinates) x2 /x1 = α/β. the deﬁnition of Question 11: a) This is another 20 by 20 box. y}. y} and B = {x. This is a straight ray from B’s origin and depending on the values of α and β it lies above or below the main diagonal. (xB . β10 α10 . 1+β 1+β which only makes sense if the allocation indeed is interior. z}) = {x. z}.Answers 153 the deﬁnition requires us to be in the second case. This violates the WA. since both MRSs have that slope along the ray and in equilibrium the price must equal the MRS. y. C({x. So we have two equations in two unknowns: α= x2 − 10 10 − x1 and x2 = α x1 . If the WA fails a counter example suﬃces: Let X = {x. xB ). (xA . as long as 10α/(1 + β) < 20.
xA ). 1 + 2β β20 . A gets the rest. consumption good supply of c(p. 16w + π(p. w) = 16/3 + 4p2 /(3w2 ). maxx {p4 x − wx} has FOC 2p/ x = w and leads to ﬁrm labour demand of x(p. (xB .154 LA. b) All endowments above and to the right of the line x2 = 40 − 2x1 in B’s coordinates will lead to a boundary equilibrium. We can now solve for the equilibrium price ratio. w) = 4p2 /w2 .0 . x∗ = 8. b) Since the consumer’s problem requires proﬁts. It follows that β20(10 − c1 ) = 10c1 and therefore c1 = 20β/(1 + 2β). w) = 32w/(3p) + 8p/(3w) and leisure demand of l(p. 20 1+β . For the goods market this implies 32w/(3p) + . The ﬁrst 2 imply that pc = 2lw. and proﬁts of π(p.) So we again have to solve two equations in 2 unknowns: c2 − 10 βc2 = p and p = c1 10 − c1 while c2 = 20. The consumer will 1 maxc. we solve for the ﬁrm √ √ ﬁrst. w) − pc − wl) 2 which has ﬁrst order conditions 1/c − λp = 0. Question 12: a) The social planner’s problem is √ 1 maxl ln(4 16 − l) + ln(l) 2 which has ﬁrst order condition 1 2 1 √ − √ + = 0. Microeconomics May2004 than A’s IC (so that A chooses to only consume good 1). (xA . w) = 4p2 /w. w) = 8p/w. Busch. 20 1 + 2β .l lnc + lnl + λ (16w + π(p. that is. since for B this is an interior consumption bundle (interior to B’s consumption set. All those on this line and below will lead to an interior equilibrium with p = 2. w) = pc + wl. The equilibrium is therefore (p∗ . Substituting into the third and using the proﬁts computed above yields demand of c(p. 4 16 − l 16 − l 2l √ Hence 16 − l = l and so l∗ = 8. c∗ = 8 2. 1/(2l) − λw = 0. The allocation must also be the optimal choice for B and hence the budget must be tangent to B’s IC. xB )) = 1 2 1 2 1 + 2β. Take any one market and set demand equal to supply.
8). l) = (8 2. With period 3 as the viewpoint I use period 3 future values for everything: B = {(c1 . Note that if we where to treat r13 not as a simple interest rate but as a compounding one. or I could lend in period 1 to period 2. however. It will be below that point if c1 > m1 and above that point if c1 < m1 . c3 )(1 + r13 )c1 + (1 + r23 )c2 + c3 = (1 + r13 )m1 + (1 + r23 )m2 + m3 } Note that any other viewpoint is equally valid. b) You have to adopt one period as your viewpoint and then put all other values in terms of that period (by discounting or applying interest). Note that we cannot state proﬁts without ﬁxing one √ the prices. The complete statement of the general equilibrium is: The equilibrium price √ √ ratio is p/w = √ 2. and hence all values are the same as in the social planner’s problem in part a). and hence p2 /w2 = 2. the consumer’s allocation is (c. I get the same ﬁnal answer. and then lend the proceeds to period 3.2 Chapter 3 Question 1: a) Zero arbitrage means that whichever way I move between periods. Hence the condition is (1 + r12 )(1 + r23 ) = (1 + r13 ). In particular I could lend in period 1 to collect in period three. and the ﬁrm produces 8 2 units consumption good from 8 units input. m3 ). Indeed. c2 . we’d get (1 + r12 )(1 + r23 ) = (1 + r13 )2 instead. The restriction in (a) means that it does not matter which interest rate I use to compute the forward value of c1 . Substituting into the demands and supplies this gives l∗ = 8. . You may want to verify that you could have solved for the price ratio from the labour market. So let w = 1 (so that of we use labour as numeraire). c) This is a standard downward sloping budget line in (c2 . 7. say. With some borrowing constraints in place I would have to compute the highest possible arbitrage proﬁts for the various periods and compute the resulting budget.Answers 155 8p/(3w) = 8p/w. c3 ) space with a slope of −(1 + r23 ). then p = 2 and proﬁts are 8. without that restriction I would get an inﬁnite budget if it is possible to borrow inﬁnite amounts. It does not necessarily have to go through the endowment point (m2 .
So.) That gives you two equations in two unknowns. So the budget follows the (ﬂipped over to the left) technology frontier down to the point (160. however. 90). 40). The technology therefore may give a higher return than the market. c2 = 135.4 ˜ same preferences.) This means that from any point on the ﬁnancial market budget Joe can move left 25 and up 50. 225) and goes to (135. At optimal use at an interior optimum Joe invests 25 units (and collects 50 in the next period. 5 √ =1 x1 −→ 5= √ x1 −→ x1 = 25. which is truncated at the point (160. Applying the 10th root gives U (c1 . Note that the gross rate of return is 1 since the interest rate is 0. b) First simplify the preferences (this step is not necessary!). .6 which also 2 represents the same preferences and is recognized as a CobbDouglas. Microeconomics {m 1 > c1 } {m1 < c1 } m m3 m2 c2 c3 May2004 Question 2: a) The easy way to get this is to ﬁrst ignore the technology. So that gives a straight line with slope −1 which starts at (0. 0) from there.) The gross rate of return at the margin is nothing but the marginal product of the √ technology. c2 ) = c4 c6 which represents the 1 2 . via the ﬁnancial market. or via “planting”. compute the MP (5/ x1 ) and ﬁnd the investment level at which the MP is 1. 100).156 LA. and down to (160. Applying ˆ a natural logarithm gives the function U (c1 . The market budget is a straight line with a slope of −1 through the point (100. c2 ) = c1 c. Busch. After this point there is a corner solution in technology choice: Joe cannot use the market any more. where the budget becomes vertical. Both must yield the same gross rate of return at the optimum (why? we know that at the optimum of a maximization problem the last dollar allocated to each option must yield the same marginal beneﬁt. 40). Now you can either compute the MRS (2c2 /3c1 ) and set that equal to 1 (since most of the budget has a slope of −1 and we know that M RS = Slope at the optimum. c2 = 3c1 /2 → 450 = 5c1 → c1 = 90. Now consider the technology and the implications of zero arbitrage: Joe can move consumption from period 1 to period 2 in two ways. and we can solve: c2 = 225 − c1 .
so you know (c1 . c2 ) = . p1 p2 = .Answers 157 We then double check that the assumption that we are on the −1 sloped portion of the budget was correct. but less than 1. 50 from the investment.2 respectively. and that required him to borrow 15 units.25 which is less than 1. Now this is his ﬁnal consumption bundle. he borrows 15. which it is (by inspection. 1+0 1 = (90. 85). b) Note that the budget has a slope of 1.25 = c2 − 12 c1 − 5 → c1 = c2 = 73/9.3 and −1.6M .25 = c2 − 8 c1 − 9 → c1 = c2 = 77/9.4 × 225 .28 are bigger than 1. She lends money if she has larger period 1 endowment than period 2 endowment.6 × 225 . so he can consume 135! Question 3: I will not draw the diagram but describe it. Note that at the slopes implied by the interest rates she continues to consume at her kink line. so c1 = c2 and − 1.) Or you could use the demand function for CD. In order to get there he invested 25 units. for a total of 150.25 and 1. c) Again she consumes at the kink. Hence optimal consumption is at the kink and she borrows if she has less period 1 endowment than period 2 endowment. so he has 90 left to consume. so she consumes at the kink. You should refer to a rough diagram while reading these solutions to make sense of them. of which he has to use 15 to pay back the loan. So for all endowments above the main diagonal she is . the slope of the steep segment.3 and more than 1.4M . In summary.3. The reason is that both 1. giving him 115.2. In the next period he gets 100 from his endowment. The switch (kink) occurs where 23 12 c1 + c 2 10 = 22 13 c1 + c 2 10 → c2 = (22×13−23×12)c1 /10 = c1 . a) The indiﬀerence curves have two segments with a slope of −1. d) Here we need to work back. the slope of her lower segment.2. 135). of which he invests 25. Thus she is on the kink line and the budget: c1 = c2 and − 1. so on the “market budget” line he must have started at (115.
You should refer to a rough diagram while reading these solutions to make sense of them. a) This is a 20 by 10 box. Hence the ray of the contract curve must intersect that. A’s indiﬀerence curves have a MRS of c2 /(αc1 ) and are nice CD type curves. The contract curve in the interior must have the MRSs equated (from Econ 301: for diﬀerentiable utility functions an interior Pareto optimum has a tangency). when the ray fails to intersect B’s IC.32 which is steeper than even her steepest IC segment. Since these cases are (sort of) symmetric we pick one. c2 = .) At this point we must have a budget ﬂatter than B’s 2 2 IC (so that B chooses to only consume good 1). She would not want to lend ever at this rate no matter what her endowment. Busch. In that case the equilibrium allocations are derived by solving the intersection of the budget and the ray: c2 = αc2 /β and − c2 − 6 1 = β c1 − 12 → A c1 = 6(2 + β) A α6(2 + β) . The contract curve is this ray and then the part of the upper edge of the box to B’s origin. This is a straight ray from A’s origin and depending on the values of α and β it lies above or below the main diagonal. so it occurs where c2 /c1 = α/β. either the Contract curve ray is shallow enough that the equilibrium occurs on it. or it is so steep that the equilibrium occurs on the top boundary of the box. for all endowments below a lender.158 LA. b) There are two cases. or if 10β > 12α − 4αβ.18 which is less than either of her IC segment slopes. and assume that α/β > 1/2. Microeconomics May2004 a borrower. we know that we are looking for an equilibrium on the upper boundary of the box (so cA = 10 and cB = 0. So the equilibrium interest rate is (1 − β)/β. Thus she remains at the kink in her budget (the endowment point) no matter where it is. suppose she were to borrow. since both MRSs have that slope along the ray and in equilibrium the price must equal the MRS. 10). The latter occurs at (4(3 − β). 10). Note that the budget now coincides with player B’s indiﬀerence curve through the endowment point. Question 4: Again I will not draw the diagram but describe it. It must also be tangent to . In the other case. 1+α β(1 + α) B gets the remainder. and it does so only if it intersects the top boundary to the right of the intersection of B’s IC with the boundary. e) Now she never trades. B’s ICs have a MRS of 1/β and are straight lines. On the other hand. She would not borrow. Suppose A’s origin on the bottom left. In the ﬁrst case the slope of the budget and hence the equilibrium price must be 1/β. So the interior solution obtains if 10β/α > 4(3 − β). B’s the top right. The former occurs at (10β/α. The budget slope is 1. To lend money the budget slope is 1.
3 Chapter 4 Question 1: We wish to show that for any concave u(x) 1 1 1 1 1 u(24) + u(20) + u(16) ≥ u(24) + u(16). 6 6 6 Then multiply both sides by 3: 1 1 u(20) ≥ u(24) + u(16). of course. the expected value of the lottery on the RHS of the last inequality above is equal to the expected value of the degenerate lottery on the LHS. 7. (utility of expectation greater than expectation of utility. 2 2 That is. 2 2 The LHS of this represents a certain outcome of 20. since for player A this is an interior consumption bundle (interior to his consumption set. Using the particular function we are given: √ √ √ CE = α 3600 + (1 − α) 6400 CE = (α60 + (1 − α)80)2 = (80 − 20α)2 .) So we require 1 + r = 10/(αc1 ) to have the tangency. . the RHS a lottery with 2 equally likely outcomes. so we solve: A c1 = 60/(5 + 2α) and r = (5 − 4α)/(6α).) Question 2: The certainty equivalent is deﬁned by U (CE) = pi u(xi ) = u(x)dF (x). Therefore this penultimate inequality must be true. since it coincides with the deﬁnition of a risk averse consumer. and we require 1 + r = (10 − 6)/(12 − c1 ) in order to be on the budget line. These are two equations in two unknowns again. that is. 3 3 3 2 2 We can do the following: ﬁrst bring the u(24) and u(16) to the RHS: 1 1 2 u(20) ≥ u(24) + u(16). B gets the rest of good 1. Now note that 1 1 24 + 16 = 20.Answers 159 A’s IC.
respectively. Question 3: The coeﬃcient of absolute risk aversion is deﬁned as rA = −u (w)/u (w). but we can’t ﬁnd those here (not knowing the consumers’ tastes). Computing this for both functions we get u(w) = lnw −→ rA = 1 . That means that while. at least for interior allocations. hence the budget line for the consumers. But we also know that in general equilibrium the price ratio must equal each consumer’s MRS (since GE is Pareto optimal and that requires MRSs to be equalized. so it is not useful information. he will be more risk averse with respect to a gamble over a given proportion of wealth.) Note that the two MRSs here are M RSA = πuA (cA ) R (1 − π)uA (cA ) S M RSB = B πuB (cR ) (1 − π)uB (cB ) S On the certainty line (the main diagonal) cA = cA and cB = cB . Microeconomics May2004 Note that the expected value of the gamble is E(w) = α3600 + (1 − α)6400 = 6400 − α2800 and thus the maximal fee this consumer would pay for access to fair insurance would be the diﬀerence E(w) − CE = 400α(1 − α). 15 units a side. the certainty line for each consumer coincides and together they are the set of Pareto optimal points. Their relative risk aversion coeﬃcients (deﬁned as −u (w)w/u (w) are 1 and 1/2. he will have the same attitude towards a ﬁxed dollar amount gamble. Note that the certainty line is the main diagonal of the box! This observation is crucial. Busch. which is a square. 2w Therefore the two consumers exhibit equal risk aversion if the second consumer has half the wealth of the ﬁrst. Hence the equilibrium price ratio must be p∗ = π/(1 − π). so M RSA = R S R S M RSB = π/(1 − π). (Note that the two statements don’t contradict one another: a $1 gamble represents half the percentage of wealth for a consumer with twice the wealth!) Question 4: Here we need an Edgeworth Box diagram. In other words. Suppose also that we put state R on the horizontal. w √ 1 u(w) = 2 w −→ rA = .160 LA. if the logarithm consumer has twice the wealth as the root consumer. The allocation is now easily computed: we know the price ratio and the endowment. since it means that there is no aggregate risk! General equilibrium requires that demand is equal to supply for each good. We also know that . Suppose we have consumer A on the bottom left origin (B then goes top right).
5 × 0. A fair price for this lottery ticket would have to be a fraction p per dollar of prize such that π(P − pP ) − (1 − π)pP = 0.8x.5u(10000(1 + 0.5u(10000(1.8u (100000(1 + 0. so that x = 0. It follows that 1 + .6x = 0. At a fair gamble this is µi ≥ v(w0 ) − πv(w0 + (1 − π)P ) − (1 − π)v(w0 − πP ) > v(w0 ) − v(π(w0 + (1 − π)P ) + (1 − π)(w0 − πP )) = v(w0 ) − v(w0 ) (the second strict inequality follows from the deﬁnition of risk aversion). Note that those who gamble have a high utility for it (a high taste parameter µi ) in this setting.8x)) − 0. Let us start with this as a benchmark case (we know that normally such a lottery would not be accepted. it exists (for small q in any case) as long as things are suﬃciently smooth and the µi go that high. One quarter. Can the government make money on this? Well.4.8u (10000(1. Clearly a strictly positive µ is required.8x = 1.8x)) + 0. So π cA − 5 cA − 5 → cA = cA = 5(1 + π) = S = S R 1−π 10 − cA 10 − cA R and since cB = 15 − cA we get cB = cB = 5(2 − π).4 − 0.4 − 0. i i S R Question 5: a) maxx {0. or 25% are invested in gene technology. While such a µi is larger than before. assume that the price p above is fair (p = π) and let there be an additional charge of q.8x)) = u (10000(1. or p = π. Now all consumers gamble for whom µi ≥ v(w0 ) − πv(w0 + (1 − π)P − q) − (1 − π)v(w0 − πP − q). Question 6: i) Denote the probability with which a ticket wins by π and the prize by P .4 − 0.) Utility maximization requires that for a gambling consumer v(w0 ) ≤ πv(w0 + (1 − p)P ) + (1 − π)v(w0 − pP ) + µi . and therefore that 1.4 − 0.4 − . Note that this implies that even though they lose money on average they have a higher .5 × 0. Thus all consumers for whom µi ≥ v(w0 ) − πv(w0 + (1 − p)P ) − (1 − π)v(w0 − pP ) purchase a ticket.8x)) = 0 implies : implies : u (10000(1 + 0.25.8x)) 10000(1 + 0.8x) = 10000(1.Answers 161 consumption is equal in both states.8x))} b) The FOC for this is 0.8x) since she is risk averse.
for which the above is certainly true. So. namely µ > v(w0 ) − πv(w0 + (1 − π)P − q) − (1 − π)v(w0 − πP − q). consider the original inequality again and approximate the RHS by its second order Taylor series expansion (that way we get ﬁrst and second derivatives. which we want in order to form rA : ⊕ v(w0 ) − πv(w0 ) − (1 − π)v(w0 − πP − q) ≈ v(w0 ) − π(v(w0 ) + ⊕v (w0 ) + ⊕2 v (w0 )/2) − (1 − π)(v(w0 ) − v (w0 ) + 2 v (w0 )/2) = −π ⊕ v (w0 ) − π ⊕2 v (w0 )/2(1 − π)( v (w0 ) − 2 v (w0 )/2) = v (w0 ) [(1 − π) (1 − rA /2) − π ⊕ (1 − ⊕rA /2)] . and thus it ought to be banned. Now. (ii) there are externalities: your lost money is really not yours but should have bought a lunch for your child/spouse/dog. There you go. but of course Is there a general proof for that? Note that constant rA has for example the functional form u(w) = −e−aw .) ii) Now µ is ﬁxed. This looks more like it! Now note that we use and ⊕ as positive quantities (which are not equal: is larger!) Furthermore we know that (a) this quantity must be positive and (b) that π is probably a very small number. 1 . The third side must have a declining slope as wealth increases since it is related to the marginal utility of wealth at the certainty equivalent. which is declining in wealth by assumption. Busch. the decision to gamble will still depend on the same inequality. (The antigambling arguments in public policy debates therefore come in two ﬂavours: (i) your gambling is against my (religious) beliefs. and is this a monotonic relationship? The right hand side is related.) Intuitively. Of course. More formally. I’d expect the utility diﬀerence must fall with wealth. we will on their behalf.162 LA.1 Let this diﬀerence be the base of a right triangle. Orthogonal to that we have the side which is the required distance between the two utilities. is the right hand side increasing or decreasing with wealth. of course. We thus can translate this question into the question of how the right hand side depends on w0 and how this dependency relates to the diﬀerent behaviours of risk aversion with wealth. if rA is constant then the term in brackets is constant. Since your child/spouse/dog can’t make you stop. to the utility loss from going to the expected utility from the expected value (ignoring q for a minute. Microeconomics May2004 welfare. we would expect the diﬀerence to be declining in wealth for constant absolute risk aversion: Constant absolute risk aversion implies a constant diﬀerence between the expected value and the certainty equivalent.
and so we get higher participation by wealthier people. if the stock market on average is a bad bet we get mean reversion in wealth. and we can’t make a deﬁnite statement. Thus rich consumers participate. (As an aside note the following. the punishment does not . See how much fun you can have with these simple models and a willingness to extrapolate wildly?) Question 7: This question forms part of a typical incomplete information contracting environment.. given the worker shirks he will go all the way (after all. it follows that constant relative risk aversion requires a decreasing absolute risk aversion. . and rR = 1. you get the desire to redistribute (i. Now. The ﬁnal outcome therefore depends on what declines faster.e. If we have decreasing absolute risk aversion this eﬀect is strengthened. Expected utility is maximized for e∗ = argmax{α w(E) − p + (1 − α) w(E) − e2 }. since relative risk aversion is just rA w. Thus in all cases the rich gamble and the poor don’t. while if the stock market is on average more proﬁtable than savings accounts etc we get the rich getting richer. Now. This would. Then it might be reasonable to assume that the utility of participating in it is increasing in wealth. counteracting the above eﬀect of more participation by wealthy individuals. . and that decreasing relative risk aversion requires an even more decreasing absolute risk aversion. Any given µ is therefore more likely to be larger than it.Answers 163 v (w) falls with w and thus the right hand side of our initial inequality (way above) falls. Since they loose money on average they will become poor and stop gambling. If v(w) = w then v (w) = 1/2 w and v (w) = −w 3/2 /4. poor consumers don’t if we have constant absolute risk aversion. on average.) iii) If v(w) = ln w then v (w) = 1/w and v (w) = −1/w 2 . however. with ∂rA /∂w < 0.. Therefore rA = 1/(2w) and rR = 1/2. We now know two pieces of information: the consumers’ risk aversion to a given size gamble is declining with wealth. . Therefore √ √ rA = 1/w. If you now run a voting model where the mean voter wins. The ﬁrst order condition for this problem is −2e = 0 if e = E. But µ now is also declining with wealth. Once he has signed the contract. Here we focus only on the consumer’s behaviour. I. tax the investing and proﬁting rich and give the cash to those who have a too high marginal utility of wealth to invest themselves. a) Assume that the worker has a contractual obligation to provide an eﬀort level of E. Suppose we are talking stock market participation here. (Note here that they are initially rich. he knows that his actual eﬀort is not observable and thus would try to shirk. ceteris paribus make them more likely to purchase the gamble for a constant µ (see above).) Note also that progressive taxes reduce the returns of a given investment proportional to wealth.e.
it will elicit the correct eﬀort levels in all cases. only the wage schedule is now diﬀerent since α(E) can also vary now. b) Now we have a potentially variable punishment. Zero proﬁts imply that the premiums pB and pM for bad and minor losses. (This is going to be a question about the technology available: some technologies may be able to detect ﬂagrant shirking more readily than slight shirking. and CM the coverage for minor losses. What this shows us is that we tend to want a punishment and a detection probability which both depend on the deviation from the correct level. if detection is a function of the actual eﬀort level (the more you fudge the books the more likely will you be detected) then we need lower punishments. We need to make sure that this equation is only satisﬁed for e∗ = E. ceteris paribus. This clearly requires a positive p (). Hence the consumer’s expected utility maximization problem becomes maxCB . (There are also second order conditions which need to hold!) This implies that the worker will play oﬀ the cost of shirking against the gains from doing so. Busch. the worker now will maximize expected utility and set e∗ = argmax{α w(E) − p(E − e) + (1 − α) w(E) − e2 }. since the increasing risk will provide some disincentive to cheat anyways.164 LA.CM (1 − π)u(W − pM CM − pB CB )+ 1 π( u(W − pM CM − pB CB + CB − B)+ 5 . then I might as well shoot people when I’m at it and I think that helps (and if it does not increase the eﬀort the police put into ﬁnding me. αp (0)(2 w(E))−1 − 2E = 0. which is the case if w(E) − E 2 ≥ α w(E) − p + (1 − α) w(E). If the wage function satisﬁes this inequality for all E. in which case he “voluntarily” chooses the contracted level.) c) What this seems to indicate is that we would like to make punishments ﬁt the crime.)) Furthermore. In particular.) Thus we need to ensure that the worker will not shirk at all. if the worker deviates he will go all the way here. or E 2 ≤ α( w(E) − w(E) − p). Note: We could also vary the detection/supervision probability and make α depend on E. if the punishment for a holdup with a weapon is as severe as if somebody actually gets shot during it. Then we get e∗ = argmax{α(E) w(E) − p + (1 − α(E)) w(E) − e2 }. So the problem is similar to (a). Question 8: a) Let CB denote the coverage purchased for bad losses. Microeconomics May2004 The FOC for this problem is αp (·)(2 w(E) − p(E − e))−1 − 2e = 0. As in (a). are pB = π/5 and pM = 4π/5. Given some job with contractual obligation E. respectively. (So for example. depend on the severity of the crime in any way.
Zero proﬁts imply that the premium p is p = π. b) Now only one coverage can be purchased. which ﬁnally implies that u (n) = u (b) = u (m) and therefore that 0 = CB − B = CM − M. denote it by C. As expected.Answers 165 4 u(W − pM CM − pB CB + CM − M )) 5 The ﬁrst order conditions for this problem are 4π π −pM (1 − π)u (n) − pM u (b) + (1 − pM ) u (m) = 0 5 5 π 4π −pB (1 − π)u (n) + (1 − pB ) u (b) − pB u (m) = 0 5 5 Using the fair premiums this simpliﬁes to −(1 − π)u (n) − π 4π u (b) + 1 − u (m) = 0 5 5 4π π u (b) − u (m) = 0 −(1 − π)u (n) + 1 − 5 5 4π 5 u (m) − π π 4π u (b) = 1 − u (m) u (b) − 5 5 5 Hence 1− and thus u (b) = u (m). Hence the consumer’s expected utility maximization problem becomes maxC (1 − π)u(W − pC)+ 1 π( u(W − pC + C − B)+ 5 4 u(W − pC + C − M )) 5 The ﬁrst order condition for this problem is π 4π −p(1 − π)u (n) + (1 − p) u (b) + (1 − p) u (m) = 0 5 5 Using the fair premium this simpliﬁes to 1 4 u (n) = u (b) + u (m) 5 5 . the consumer buys full insurance for each accident type separately. and will be paid in case of either accident.
166 LA. one ﬂatter. Now this tangency occurs where √ √ the margin is equal to the average. µ = 3/2. Note that 1 ≥ π. This means that Cov(u (w). May2004 Thus either W − B + (1 − π)C < W − πC < W − M + (1 − π)C or W − B + (1 − π)C > W − πC > W − M + (1 − π)C. A risklover has. if wealth is low the return to the policy is high. we can take the consumers’ budget line to be the line from the risk free asset point (the origin in this case) to a tangency with the eﬃcient portfolio frontier. if wealth is high. The ﬂatter one corresponds to the initial situation. Question 11: 1) False. Question 9: From Figure 3. but this implies either −B + 1C < 0 < −M + 1C or −B + 1C > 0 > −M + 1C. In the denominator we have the expected marginal utility. guaranteed to be positive. Ri ) > 0. so that the SOC requires u (·) to be negative for at least one of the terms. They intersect at the consumer’s endowment. That of course is precisely the feature of disability insurance which replaces income from work if and only if the consumer is unable to work. But since u (w) < 0 this implies that the covariance between w and Ri is negative.) Question 10: The asset pricing formula implies that the expected return of the insurance equals the expected riskfree return less a covariance term. Since B > M by deﬁnition we obtain that B > C > M . Since the consumer is a borrower. Thus the optima are σ = 4. That means that the market portfolio has 2(σ − 16) = σ or σ = 32. by deﬁnition. The second order condition for a maximum is π(1−p)2 u (w−L+(1−p)C)+p2 (1−π)u (w−pC) ≤ 0. µ = 1/2 and σ = 12. so that σ − 16/σ = (2 σ − 16)−1 . We can draw 2 diagrams to demonstrate. In both we have two intersecting budget lines. As expected. and is under insured against big losses. that is. this covariance term must be positive. the consumer over insures against minor losses. Busch. 2) Uncertain. For an optimal solution the consumer’s MRS must equal the slope of the portfolio line. the consumer with the higher marginal utility for the mean will have a higher mean at the same prices (and given that both have the same disutility from variance. Therefore µ = 4. the return to the policy is low. The slope of the portfolio line thus is 4/32. The second order condition would indicate a minimum as demonstrated here: maxC {πu(w − L − pC + C) + (1 − π)u(w − pC)} has FOC π(1−p)u (w−L+(1−p)C)−p(1−π)u (w−pC) = 0. one steeper. For the two consumers given the MRS is σ/32 and σ/96. the initial consumption point is below and . p ≥ 0. Thus the numerator must be positive. u (·) > 0. If insurance has a lower expected return than the riskfree asset. Microeconomics and hence either u (b) > u (n) > u (m) or u (b) < u (n) < u (m).7 in the text.
All consumers face the same budget line in meanvariance space. Suppose the payment for this lottery is p. For a bus driver the expected loss is 11700/3 = √ √ 3900. Depending on by how much. If this payment is equal to the expected value of the lottery then the consumer will not have a change in expected wealth. α= √ = 210 − 180 3 44100 − 32400 b) Since workers work on oil rigs and at a desk job we require √ √ √ 2 40000 = 0. take the 9000th root. It may. Apply the following positive monotonic transformations to the ﬁrst function: −2462. ×12. however. Question 12: a) Since workers work as bus driver and at a desk job we require √ √ √ 2 40000 = α2 44100 − 11700 + (1 − α)2 44100 Therefore √ √ 1 44100 − 40000 210 − 200 √ = . would already buy at when the expected net gain is zero. If the payment is less. Let the consumer have initial wealth w and suppose he could participate in a lottery which leads to a change in his initial wealth by x. then the expected value of wealth from participating in the lottery exceeds the initial wealth. distributed as f (x). depending on tastes. √ √ Thus 400 = 122500 − Loss + 350 and hence 50 = 122500 − Loss or Loss = 120000.) 3) True. The market rate of return is 15%. they suﬀer their expected loss for certain. the consumer may purchase. What you get is the second function. Gargleblaster stock has a rate of return of (117 − 90)/90 = 30%. That is. c) At fair premiums the workers will fully insure. (This argument could be made more precise. but will face risk. This may be anywhere along the line. take exponential. A risk loving consumer. of course.Answers 167 to the right of the endowment on the initial budget. Thus the bus driver wage must satisfy 2 40000 = 2 w − 3900 and . cut the new budget (so that the IC tangent to the new budget represents a higher level of utility) or lie everywhere above it (in which case utility falls. The indiﬀerence curve through this point is tangent to this budget. This violates zero arbitrage. Thus by deﬁnition he would not buy this lottery. 4) True. collect terms in one logarithm.5 × 2 122500. At an interior optimum (and assuming their MRS is deﬁned) they all consume on this line where the tangency to their indiﬀerence curve occurs. 6) True. A risk averse consumer is deﬁned as having u( xg(x)dx) > u(x)g(x)dx. and you should try to put it into equations!) 5) False.5 × 2 122500 − Loss + 0. but the slope is dictated by the market price for risk.
For the oil rig worker the expected loss is $60000.4. or Cb = 10(9×44100−16× 32400)/(6×16+4×9) = −9204.5 × 0. It also seems that the govt would ban the purchase of negative insurance amounts. and so in practice the govt forces all workers to buy a ﬁxed amount of insurance! In principle we could compute equilibrium wages if we treat the insurance purchase as a function of the wage. bring the the denominator into the numerators and loose the 1/30 on both sides.6) × 182083.5 for the govt to break even. where N is the number of workers in risky occupations. and their wages will fall to $100000 under workers compensation. who will under insure.4Cb }.6 × 1/3 = 0. the wage premium for risky jobs will not have to be paid: the wages of the risky occupations fall which beneﬁts the ﬁrms in those industries by lowering their wage costs. Busch. since (0.6C and thus Co = 182083.5 × 0.6Cb + (2/3) 44100 − 0.168 LA.4Co }. (This is why industries are in favour of workers’ compensation.6Co √ 122500 − 0.4Cb (w). Note that the govt looses money on them.4Cb ) = 16(32400+0. So. Indeed. . for example we know from the above that Cb (w) = 10(9×w−16×(w−11700))/(6×16+4×9).2×9204/3 = 2454. In which case the bus drivers would ﬁnd it optimal to buy no insurance.4)Cb (w) + (4/3) w − 11700 − 0. and then premiums would have to be 0. At the old wages both groups are better oﬀ (and thus there would be an inﬂux of desk workers and a reallocation towards oil rigs.4Cb = 8 32400 + . who will over insure. and too low for oil rig workers. the bus drivers will choose to buy insurance Cb √ √ such that 6 44100 − 0.33 = −18208. Overall then the govt makes losses of 5810. which leads to √ 3 122500 − 0.) Thus we require 9(44100−0. However.6Cb ).6Cb (Take the ﬁrst order condition √ √ √ from for maxCb {(1/3) 32400 + 0.5 + 0. The oil rig workers would need to solve √ √ + maxCo { 2500 + 0.40 > 0.4Co = 2 2500 + 0. We then solve √ for s 40000 = (2/3) w + (1 − 0. in particular raised. The details are left to the reader.) d) The average probability of an accident now is 0.68N . This would be deemed unjust by all involved.4 − 0.) In order to break even the insurance rates would have to be changed. What does this mean? It means that the bus drivers would like to bet on themselves having an accident buying negative amounts of insurance! (The ultimate in under insurance!) Note that the governments expected proﬁt from bus drivers is −0.4×9204/3+1.30. If we were to use this as a fair premium (but see below!) this premium is too high for bus drivers.33.4 × 0. Microeconomics May2004 hence it is $43900. Note the the condition that workers take all jobs together with a ﬁxed desk job wage ﬁxes the utility level in equilibrium for workers.
the insured amount is a function of the wage up to a maximum.) Question 2: Perfect recall is when each player never forgets any of his own previous moves (so that for any two nodes within an information set one . Further. a partition of the player sets S1 . what the distributional implications are. Question 13: Let us translate the question into notation: We are to show u (c1 ) c2 u (w)w that = k if = λ if the function u(·) satisﬁes = a. If that is not done things will work out funny. If the left and right hand side of that last expression are identical functions. a payoﬀ vector of length three for each terminal node. a probability distribution for each node in S0 over the set of immediate followers and for each SiJ an index set IiJ and a 11 mapping from Iij to the set of immediate followers of the nodes in Sij . that is.e. 7. u (c2 ) c1 u (w) c2 u (c1 ) = k and =λ u (c2 ) c1 =⇒ u (c1 ) = k(λ)u (λc1 ).) You can see that such plans can be quite complicated and that it can be quite complicated to ﬁgure out who would want to do what. the amount is dictated (often capped. S0 could be empty. then their derivatives must equal: u (c1 ) = k(λ)λu (λc1 ).. S3 into information sets. S2 . but we know that k(λ) = u (c1 )/u (λc1 ). S3 . ∀λ.Answers 169 The important point here is that it is important to charge the correct premiums. It does not even have to have nature (i. Any carefully labelled game tree diagram will do. etc. That in turn leads to real life plans which do not allow a choice — workers have to insure. ∀w. Γ. so that u (c1 ) = u (c1 ) λu (λc1 ) u (λc1 ) =⇒ u (c1 ) u (λc1 ) =λ u (c1 ) u (λc1 ) Thus the MRS is constant for any consumption ratio λ if u (c1 )c1 u (λc1 )λc1 = u (c1 ) u (λc1 ) which is constant relative risk aversion. a partition of the set of nonterminal nodes into player sets S0 . S2 .4 Chapter 6 Question 1: A 3player game in extensive form comprises a game tree. S1 .
Right). Hence 3α = 1 and thus α = 1/3. To ﬁnd mixed strategy Nash we assign probabilities to the strategies for players.) Counter examples as in the text. actually willing to mix? Only if the payoﬀ from the pure strategies in the support of the mixed strategy are equal. Let α = P r(U ) and β = P r(L). L. any ﬁnite game has a Nash equilibrium. γ2 = P r(R). Player 3 does also not have a weakly dominated strategy. L. hence 2 = 4β. so that the player does not care. Lef t) but u1 (D. However. Right) > u1 (U. while u1 (C. Thus the . R. This follows from the Theorem we have in the text. α) = α(γ1 + 2γ2 + (1 − γ1 − γ2 ) + (1 − α)(2γ1 + 4γ2 + 2(1 − γ1 − γ2 )). Lef t) > u1 (U. so let µ1 = P r(U ). γ1 = P r(L). so that it will never be used in any mixed strategy equilibrium (or indeed any equilibrium. The game therefore will also have a SPE (they are a subset of the Nash equilibria. Microeconomics May2004 may not be a predecessor of the other and any two nodes may not have a common predecessor in another information set of that player such that the arc leading to the nodes diﬀers) and never forgets information once known (so that any two nodes in a player’s information set may not have predecessors in distinct previous information sets of this player. Now look at player 2: For 2 to be indiﬀerent between the two strategies L and C we require 4α + 2(1 − α) = 2α + 3(1 − α). and α = P r(Lef t). γ. but keep in mind that the condition of subgame perfection may have no ‘bite’. Right). We then can ask.170 LA. aside from the fact that 2 can be argued to play C. possibly in mixed strategies. This does not leave us with a good prediction yet. Depending on the opponents’ moves he gets a higher payoﬀ sometimes in the left and sometimes in the right matrix.) Hence this is really just a 2 × 2 matrix we need to consider. Lef t). So for example u1 (U. and hence β = 0. µ2 = P r(C). Player 2 does have weakly dominated strategies: Both L and R are weakly dominated by C. Then for player 1 to mix we require β+4(1−β) = 3β+2(1−β). C. R. So if player 2 mixes with this probability then player 1 is indiﬀerent between his two strategies.) Question 4: Player 1 has no weakly dominated strategies since u1 (D. when is player 1. if we now consider repeated elimination we can narrow down the answer to what is also the unique Nash equilibrium in pure strategies in this case. Lef t) < u1 (U. We can then compute the payoﬀs for players for each of their pure strategies. Question 5: This one is made easier by the fact that strategy R is (strictly) dominated. so that P r(C) = 1 − α and P r(C) = 1−β. Question 3: Yes. L. or any game which violates these requirements. in which case we revert to Nash. Busch. say. L.5. (D.
) The most natural extensive form for such a situation is probably as in the game tree on the next page. reﬂecting the fact that if low care is taken the accident probability is higher. 10) (17. 2) Here it is important to note that ph > pl . 10 − 8ph ) (20 − 10pl . Question 6: This is a two player game (the court is not a strategic player and does not receive any payoﬀs. accidents impose a cost of 8 on both parties. 10) N otSue (25 − 8ph . Note that the game has no pure strategy Nash equilibria. railway e d d d d low high d d d d dsNature Nature s d d d No Acc d No Acc (1 − pl ) (1 − phd d ) d d d d Accident l p Accident h p (20. 10 − 8pl ) Note that low strictly dominates high for the railway if 0. Let us try and ﬁnd the Nash equilibrium of this game. 1/2)). In those cases the . (1/2.Answers 171 mixed strategy Nash equilibrium is ((1/3. 2/3). As an exercise let us ﬁrst ﬁnd the strategic form: R\T high low Sue (25 − 20ph . legal costs are 2 for each party. 10) (25. 10 − 10pl ) (20 − 8pl . I have arbitrarily assigned payoﬀs which satisfy the description. High level of care costs the railway 5. 2) (5. 10) s town s d d d d Sue Sue d Not Sue d Not Sue d d d d d d (10.5 > (2ph − pl ) while high strictly dominates low if ph − pl > 5/8. 0) (12.
and then nature determines if the move is seen. if 10−10αpl = 10 − 8ph + 8α(ph − pl ). the railway expects to receive 20 − 8pl − 2βpl from high and 25 − 8ph − 12βph from low. The game tree for the ﬁrst case is as drawn below. the only other subgame is the whole tree. M.) Or we can have Romeo move ﬁrst. It is indiﬀerent if β = (5 − 8(ph − pl ))/(12ph − 2pl ). s ¨ Romeo ¨ d d M S d d J1 s dsJ 2 ¡e ¡e ¡ e ¡ e S¡ S¡ eM eM ¡ e ¡ e ¡ e ¡ e ¡ e ¡ e ¨¨ see ¨¨ p Nature e ¨rr ¨ rr ¨¨ 1p rr see not rr rrs d d M S d d J3s ds ¡e ¡e ¡ e ¡ e S¡ S¡ eM eM ¡ e ¡ e ¡ e ¡ e ¡ e ¡ e (30.5) (50. Busch. Subgames start at J J J information sets J 1 and J 2 . (s1 . s2 . 8 2 Question 7: There are two ways to draw this game. Let α be the probability with which the railway uses the high eﬀort level.50) (1.30) (30. So the mixed strategy Nash equilibrium is (α. The town is indiﬀerent iﬀ its expected payoﬀs from the two strategies are the same.1) (5. and β Juliet’s (in J 3 . for higher α it prefers to N otSue. β) = 5 − 8(ph − pl ) 4ph .50) (1.30) A strategy vector in this game is (sR .5) (50.1) (5. We can have nature move ﬁrst and then Romeo (who does not observe nature’s move. In the subgame perfect equilibrium Juliet therefore is restricted to (S. ·). Microeconomics May2004 Nash equilibria are (low. that is. Otherwise there will be a mixed strategy equilibrium. N otSue). respectively.) Romeo’s (expected) payoﬀ from S is 30p + (1 − p)(30β + 1 − β) and his payoﬀ from M is 50p + (1 − p)(5β + 50(1 − β)). Sue) and (high. Let α denote Romeo’s probability of moving S. The β for which he is indiﬀerent is (49 − 29p)/(74(1 − p)). Letting β denote the probability with which the town sues. 4ph + pl 12ph − 2pl if ph − 5 1 > pl > 2ph − .172 LA. Note that this is increasing in p and that β = 1 if p = 5/9! Juliet has payoﬀs of 50α+5(1−α) and α+30(1−α) from moving S . s3 )). For lower α it prefers to Sue. This is the case if α = 4ph /(4ph + pl ).
(S. S)) requires that 30 > 50p + 5(1 − p). where one ﬁrm charges one price (presumably the high quality ﬁrm charging the higher price) the other another. the consumer knows (in equilibrium) which ﬁrm produced the product.Answers 173 and M . 9 Note that (S. Of course. S)) . (S.) The game then is as depicted below. pure strategy equilibria may also exist. (S. M )). M. (M. M. M. it would have to be the two prices as indicated. It is easy to see that the low quality ﬁrm would deviate to the higher price (being then mistaken . Market price for a given n is 400/(n + 2)2 . (S. Hence 3qi = 10 − (n − 1)qi and we get that qi ∗ = 10/(n + 2) ∀i. We are to show that no separating equilibrium exists. If it did. M )) requires that 50 > 30p+(1−p). With identical ﬁrms we then know that in equilibrium qj = qi . so that j=i qj = (n − 1)qi . let us focus on two prices only (as would be needed for a separating equilibrium. 74 74(1 − p) 5 . or p < 5/9 also. M )) if p < . or p < 49/29. Hence the unique equilibrium if p ≥ 5/9 is (M. Romeo can eﬀectively insist on his preferred outcome. M. which approaches zero as n gets large. M. in J 3 . But given that. Question 8: Each ﬁrm will maxqi ( j=i qj + qi − 10)2 qi − 0qi . which is always true. and we get the SPE equilibria to be 25 49 − 29p . The FOC for this problem is 2( j=i qj + qi − 10)qi + ( j=i qj + qi − 10)2 = 0 Hence if j=i qj + qi − 10 = 0 we require 2qi + ( j=i qj + qi − 10) = 0 and get the reaction function qi = (10 − j=i qj )/3. and neither does the mixed strategy equilibrium. (S. Note that total market output approaches 10 from below as n gets large. (S. (M. respectively. (Note that the marginal cost is zero and hence the perfectly competitive price is zero!) Question 9: In the ﬁrst instance the sellers can only vary price. To clarify ideas. What if p > 5/9? In that case the equilibrium in which the outcome is coordination on S (preferred by Juliet) does not exist. S. M. Hence she is indiﬀerent if α = 25/74. Total market output then is 10n/(n + 2).
Firm 2 solves which leads to FOC 2(1 − q1 − q2 ) − 2q2 − 2q2 = 0 and the reaction function q2 (q1 ) = 1/3 − q1 /3. we have v F (v) = 0 0.174 LA. Each buyer purchases a unit of the good if and only if the price is below the valuation of the buyer. F (v) is the cumulative distribution for the uniform distribution on [0. ˆ p − 2) ˆ for the high quality ﬁrm so that the consumer buys) since costs are unaﬀected by such a move. p − 3) (0. or 1 − F (p). Since the pdf for the uniform distribution on [0. Market price is 42/50. Hence total market demand is given by the number of buyers with a valuation above p. q2 ) = (37/100. Microeconomics (9 − p. Hence ﬁrm 1 solves maxq1 {2(1 − q1 − q2 )q1 − q1 /10} which leads to FOC 2(1 − q1 − q2 ) − 2q1 − 1/10 = 0 and the reaction function q1 (q2 ) = 19/40 − q2 /2.5v. ˆ p − 3) ˆ (0. Hence market demand is 1−0. p − 2) No (0.5p and inverse market demand is 2(1 − Q). The Nash equilibrium then is (q1 . one for each type of ﬁrm. 2 maxq2 2(1 − q1 − q2 )q2 − q2 . Question 10: What was not stated in the question was the fact that each consumer buys either one or no units. 2] is 0.5dt = 0. 2]. but a higher price is received. so that joint proﬁt is (74 × 37 + 82 × 21 − 212 )/10000. At this point the remainder is nontrivial and left for summer study! The key is that the consumer now has an information set for each pricewarranty pair. Proﬁts for the two ﬁrms are 74 × 37/10000 for ﬁrm 1 and (82 × 21 − 212 )/10000 for ﬁrm 2. A Cournot equilibrium is nothing but a Nash equilibrium in the game in which ﬁrms simultaneously choose output levels. −3) May2004 buy No s α p seller s p ˆ β s buy high (1/2) Root: eNature low s No buyer No (9 − p. 21/100).5. and that there are two nodes in it. −3) buyer (0. Busch. −2) (1/2) s buy p seller p ˆ s buy (6 − p. −2) (6 − p.
Joint proﬁts are 2(31/40)(9/40) − 8/400. by design.q2 2(1 − q1 − q2 )(q1 + q2 ) − q1 /10 − q2 . therefore takes ﬁrm 2’s reaction function as given. Thus q2 = 43/240. picks its output level. Joint proﬁt thus is (186×37+43×43)/2402 . Proﬁts for the two ﬁrms are (62 × 37)/(240×80) and 43×43/2402 . and only output levels on the reaction function are. Thus ﬁrm 1 solves maxq1 2 1 − q1 − 1 q1 − 3 3 q1 − q1 /10 The FOC for this is 4(1 − 2q1 )/3 − 1/10 = 0 and hence q1 = 37/80. Hence 2(1 − q1 − 1/20) − 2(q1 + 1/20) − 1/10 = 0 and 2 − 4q1 − 3/10 = 0 and q1 = 7/40. a best response.Answers 175 In the Stackelberg leader case we consider the SPE of the game in which ﬁrm 1 chooses output ﬁrst and ﬁrm 2. Market price is 86/240. Firm 1. after observing ﬁrm 1’s output choice. This cannot be attained as a Nash equilibrium because neither output level is on the ﬁrm’s reaction function. Joint proﬁt maximization would require that the ﬁrms solve 2 maxq1 . the Stackelberg leader. . Market price then is 2(31/40). This has FOCs 2(1 − q1 − q2 ) − 2(q1 + q2 ) − 1/10 = 0 2(1 − q1 − q2 ) − 2(q1 + q2 ) − 2q2 = 0 so that we know that 2q2 = 1/10 or q2 = 1/20.