Econ 509, Introduction to Mathematical Economics I
Professor Ariell Reshef
University of Virginia
Lecture notes based on Chiang and Wainwright, Fundamental Methods of Mathematical Economics.
1 Mathematical economics
Why describe the world with mathematical models, rather than use verbal theory and logic? After all, this
was the state of economics until not too long ago (say, 1950s).
1. Math is a concise, parsimonious language, so we can describe a lot using few words.
2. Math contains many tools and theorems that help us make general statements.
3. Math forces us to explicitly state all assumptions, and helps prevent us from failing to acknowledge implicit assumptions.
4. Multi-dimensionality is easily described.
Math has become a common language for most economists. It facilitates communication between economists. Warning: despite its usefulness, if math is the only language for economists, then we are restricting not only communication among us, but more importantly we are restricting our understanding of the world.
Mathematical models make strong assumptions and use theorems to deliver insightful conclusions. But,
remember the AA’ CC’ Theorem:
• Let C be the set of conclusions that follow from the set of assumptions A. Let A' be a small perturbation of A. There exists such an A' that delivers a set of conclusions C' that is disjoint from C. Thus, the insightfulness of C depends critically on the plausibility of A.
The plausibility of A depends on empirical validity, which needs to be established, usually using econometrics. On the other hand, sometimes theory informs us on how to look at existing data, how to collect new data, and which tools to use in their analysis. Thus, there is a constant discourse between theory and empirics. Neither can do without the other (see the inductivism vs. deductivism debate).
Theory is an abstraction of the world. You focus on the relationships that you consider, a priori, most important to understanding some phenomenon. This may yield an economic model.
2 Economic models
Some useful notation: ∀ for all, ∃ exists, ∃! exists and is unique. If we cross any of these, or prefix by ¬, then it means "not".
2.1 Ingredients of mathematical models
1. Equations:

   Definitions:              π = R − C
                             Y = C + I + G + X − M
                             K_{t+1} = (1 − δ)K_t + I_t

   Behavioral/Optimization:  q^d = α − βp
                             MC = MR
                             MC = P

   Equilibrium:              q^d = q^s

2. Parameters: e.g. δ, α, β from above.
3. Variables: exogenous, endogenous.
Parameters and functional forms govern the equations, which determine the relationships between variables. Thus, any complete mathematical model can be written as

   F(θ, Y, X) = 0 ,

where F is a set of functions (say, demand and supply), θ is a set of parameters (say, elasticities), Y are endogenous variables (price and quantity) and X are exogenous, predetermined variables (income, weather). Some models will not have explicit X variables.
2.2 From chapter 3: equilibrium analysis
One general definition of a model's equilibrium is "a constellation of selected, interrelated variables so adjusted to one another that no inherent tendency to change prevails in the model which they constitute".
• Selected: there may be other variables. This implies a choice of what is endogenous and what is exogenous, but also of the overall set of variables that are explicitly considered in the model. Changing the set of variables that is discussed, and the partition into exogenous and endogenous, will likely change the equilibrium.

• Interrelated: all variables must be simultaneously in a state of rest, i.e. constant. And the value of each variable must be consistent with the values of all other variables.

• Inherent: only the relationships within the model set the equilibrium. This implies that the exogenous variables and parameters are all fixed.
Since all variables are at rest, an equilibrium is often called a static equilibrium. Comparing equilibria is therefore called comparative statics.
An equilibrium can be defined as the Y* that solves

   F(θ, Y, X) = 0 ,

for given θ and X. This is one example of the usefulness of mathematics for economists: see how much is described by so little notation.
We are interested in finding an equilibrium for F(θ, Y, X) = 0. Sometimes there will be no solution. Sometimes it will be unique, and sometimes there will be multiple equilibria. Each of these situations is interesting in some context. In most cases, especially when policy is involved, we want a model to have a unique equilibrium, because it implies a function from (θ, X) to Y (the implicit function theorem). But this does not necessarily mean that reality features a unique equilibrium; that is only a feature of the model. Warning: models with a unique equilibrium are useful for many theoretical purposes, but it takes a leap of faith to go from model to reality, as if the unique equilibrium pertains to reality.
Students should familiarize themselves with the rest of chapter 3 on their own.
2.3 Numbers
• Natural, ℕ: 0, 1, 2, ... or sometimes 1, 2, 3, ...

• Integers, ℤ: ..., −2, −1, 0, 1, 2, ...

• Rational, ℚ: n/d, where both n and d are integers and d is not zero. n is the numerator and d is the denominator.

• Irrational numbers: cannot be written as rational numbers, e.g., π, e, √2.

• Real, ℝ: rational and irrational. The real line: (−∞, ∞). This is a special set: there are just as many real numbers between 0 and 1 (or between any other two distinct real numbers) as on the entire real line.

• Complex: an extension of the real numbers, adding an imaginary dimension: x + iy, where i = √−1.
2.4 Sets
We already described some sets above (ℕ, ℤ, ℚ, ℝ). A set S contains elements e:

   S = {e_1, e_2, e_3, e_4} ,

where the e_i may be numbers or objects (say: car, bus, bike, etc.). We can think of sets in terms of the number of elements that they contain:
• Finite: S = {e_1, e_2, e_3, e_4}.

• Countable: there is a mapping between the set and ℕ. Trivially, a finite set is countable.

• Infinite and countable: ℚ. Despite containing infinitely many elements, the rationals are countable.

• Uncountable: ℝ, [0, 1].
Membership and relationships between sets:

• e ∈ S means that the element e is a member of set S.

• Subset: S_1 ⊂ S_2: ∀e ∈ S_1, e ∈ S_2. Sometimes denoted as S_1 ⊆ S_2. Sometimes a strict subset is defined as ∀e ∈ S_1, e ∈ S_2 and ∃e ∈ S_2 such that e ∉ S_1.

• Equal: S_1 = S_2: ∀e ∈ S_1, e ∈ S_2 and ∀e ∈ S_2, e ∈ S_1.

• The null set, ∅, is a subset of any set, including itself, because it contains no elements at all, so the defining condition holds vacuously.

• Cardinality: there are 2^n subsets of any set of magnitude n = |S|.

• Disjoint sets: S_1 and S_2 are disjoint if they do not share common elements, i.e. if there does not exist an element e such that e ∈ S_1 and e ∈ S_2.
Operations on sets:

• Union (or): A ∪ B = {e | e ∈ A or e ∈ B}.

• Intersection (and): A ∩ B = {e | e ∈ A and e ∈ B}.

• Complement: define Ω as the universe set. Then Ā or A^c = {e | e ∈ Ω and e ∉ A}.

• Minus: for B ⊂ A, A∖B = {e | e ∈ A and e ∉ B}. E.g., Ā = Ω∖A.

Rules:

• Commutative:
   A ∪ B = B ∪ A
   A ∩ B = B ∩ A

• Associative:
   (A ∪ B) ∪ C = A ∪ (B ∪ C)
   (A ∩ B) ∩ C = A ∩ (B ∩ C)

• Distributive:
   A ∪ (B ∩ C) = (A ∪ B) ∩ (A ∪ C)
   A ∩ (B ∪ C) = (A ∩ B) ∪ (A ∩ C)
Do Venn diagrams.
2.5 Relations and functions
Ordered pairs: whereas {x, y} = {y, x}, because sets are not ordered, (x, y) ≠ (y, x) unless x = y (think of the two-dimensional plane ℝ²).

Let X and Y be two sets. The Cartesian product of X and Y is the set

   X × Y = {(x, y) | x ∈ X, y ∈ Y} .

For example, the n-dimensional Euclidean space is

   ℝⁿ = ℝ × ℝ × ... × ℝ = {(x_1, x_2, ..., x_n) | x_i ∈ ℝ} .

Cartesian products are relations between sets:

   ∀x ∈ X, ∃y ∈ Y such that (x, y) ∈ X × Y ,

so that the set Y is related to the set X. Any subset of a Cartesian product also has this trait. Note that each x ∈ X may have more than one y ∈ Y related to it.

If

   ∀x ∈ X, ∃!y ∈ Y such that (x, y) ∈ S ⊂ X × Y ,

then y is a function of x. We write this in shorthand notation as

   y = f(x)

or

   f : X → Y .

The latter is also called a mapping, or transformation. Note that although for y to be a function of x we must have ∀x ∈ X, ∃!y ∈ Y, it is not necessarily true that ∀y ∈ Y, ∃!x ∈ X. In fact, there need not exist any such x at all. For example, y = a + x², a > 0.

In y = f(x), y is the value or dependent variable; x is the argument or independent variable. The set of all permissible values of x is called the domain. For y = f(x), y is the image of x. The set of all possible images is called the range.
2.6 Functional forms
Students should familiarize themselves with polynomials, exponents, logarithms, "rectangular hyperbolic"
functions (unit elasticity).
2.7 Functions of more than one variable
z = f(x, y) means that

   ∀(x, y) ∈ domain ⊆ X × Y, ∃!z ∈ Z such that (x, y, z) ∈ S ⊂ X × Y × Z .

This is a function from a plane in ℝ² to ℝ or a subset of it. y = f(x_1, x_2, ..., x_n) is a function from the ℝⁿ hyperplane or hypersurface to ℝ or a subset of it.
3 Equilibrium analysis
Students cover independently. Conceptual points reported above in 2.2.
4 Matrix algebra
4.1 Definitions
• Matrix:

   A (m×n) = [ a_11 a_12 ... a_1n
               a_21 a_22 ... a_2n
               ...
               a_m1 a_m2 ... a_mn ]  = [a_ij] ,  i = 1, 2, ..., m , j = 1, 2, ..., n .

  Notation: usually matrices are denoted in upper case. m and n are called the dimensions.
• Vector:

   x (n×1) = [ x_1
               x_2
               ...
               x_n ] .

  Notation: usually lowercase. Sometimes called a column vector. A row vector is

   x' = [ x_1 x_2 ... x_n ] .
4.2 Matrix operations
• Equality: A = B iff a_ij = b_ij ∀ij. Clearly, the dimensions of A and B must be equal.

• Addition/subtraction: A ± B = C iff a_ij ± b_ij = c_ij ∀ij.

• Scalar multiplication: B = cA iff b_ij = c·a_ij ∀ij.
• Matrix multiplication: let A (m×n) and B (k×l) be matrices.

  – If n = k, then the product A (m×n) B (n×l) exists and is equal to a matrix C (m×l) of dimensions m × l.
  – If m = l, then the product B (k×m) A (m×n) exists and is equal to a matrix C (k×n) of dimensions k × n.
  – If the product exists, then

     A (m×n) B (n×l) = C (m×l) = [ c_ij = Σ_{k=1}^n a_ik b_kj ] ,  i = 1, 2, ..., m , j = 1, 2, ..., l .
• Transpose: let A (m×n) = [a_ij]. Then A' (n×m) = [a_ji]. Also denoted Aᵀ. Properties:

  – (A')' = A
  – (A + B)' = A' + B'
  – (AB)' = B'A'
• Operation rules:

  – Addition is commutative: A + B = B + A.
  – Addition is associative: (A + B) + C = A + (B + C).
  – Multiplication is NOT commutative: in general AB ≠ BA.
  – Multiplication is associative: (AB)C = A(BC).
  – Multiplication is distributive over addition: premultiplying, A(B + C) = AB + AC; postmultiplying, (A + B)C = AC + BC.
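As a quick numerical illustration of these rules (the matrices below are made up, not from the notes), a short numpy sketch:

```python
import numpy as np

# Hypothetical 2x2 matrices chosen for illustration.
A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[0.0, 1.0], [1.0, 0.0]])
C = np.array([[2.0, 0.0], [1.0, 1.0]])

# Multiplication is not commutative in general: AB != BA.
print(np.allclose(A @ B, B @ A))               # False for these matrices

# But it is associative and distributive over addition:
print(np.allclose((A @ B) @ C, A @ (B @ C)))   # True
print(np.allclose(A @ (B + C), A @ B + A @ C)) # True

# Transpose of a product reverses the order: (AB)' = B'A'.
print(np.allclose((A @ B).T, B.T @ A.T))       # True
```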
• Identity matrix:

   I_n = [ 1 0 ... 0
           0 1 ... 0
           ...
           0 0 ... 1 ] .

  AI = IA = A (of course, dimensions must conform).

• Zero matrix: all of its elements are zero. 0 + A = A, 0A = A0 = 0 (of course, dimensions must conform).

• Idempotent matrix: AA = A, hence A^k = A, k = 1, 2, ...
Example: the linear regression model is y (n×1) = X (n×k) β (k×1) + ε (n×1), and the model estimated by OLS is y = Xb + e, where b = (X'X)⁻¹X'y. Therefore we have ŷ = Xb = X(X'X)⁻¹X'y and e = y − ŷ = y − Xb = y − X(X'X)⁻¹X'y = [I − X(X'X)⁻¹X']y. We can define the projection matrix as P = X(X'X)⁻¹X' and the residual generating matrix as R = [I − P]. Both P and R are idempotent. What does it mean that P is idempotent? And R?
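A minimal numerical sketch of this example, with made-up data (the names X, y, P, R follow the text):

```python
import numpy as np

# Hypothetical data: n = 6 observations, k = 2 regressors (constant + one covariate).
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(6), rng.normal(size=6)])
y = rng.normal(size=6)

P = X @ np.linalg.inv(X.T @ X) @ X.T   # projection matrix
R = np.eye(6) - P                      # residual-generating matrix

# Idempotency: projecting twice changes nothing.
print(np.allclose(P @ P, P))   # True
print(np.allclose(R @ R, R))   # True

# P y are the fitted values, R y the residuals; they are orthogonal.
print(np.isclose((P @ y) @ (R @ y), 0.0))  # True
```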
• Singular matrices: AB = 0 does NOT imply that A = 0 or B = 0. E.g.,

   A = [ 2 4 ; 1 2 ] ,  B = [ −2 4 ; 1 −2 ] .

Likewise, CD = CE does NOT imply D = E. E.g.,

   C = [ 2 3 ; 6 9 ] ,  D = [ 1 1 ; 1 2 ] ,  E = [ −2 1 ; 3 2 ] .

This is because A, B and C are singular: there is one (or more) row or column that is a linear combination of the other rows or columns, respectively.
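The two counterexamples can be checked numerically; this sketch uses the matrices from the text:

```python
import numpy as np

A = np.array([[2, 4], [1, 2]])
B = np.array([[-2, 4], [1, -2]])
print(A @ B)                          # the 2x2 zero matrix, although A != 0 and B != 0

C = np.array([[2, 3], [6, 9]])
D = np.array([[1, 1], [1, 2]])
E = np.array([[-2, 1], [3, 2]])
print(np.array_equal(C @ D, C @ E))   # True, although D != E

# The culprit is singularity: both determinants are (numerically) zero.
print(np.linalg.det(A), np.linalg.det(C))
```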
4.3 Vector products
• Scalar multiplication: let x (n×1) be a vector. Then the scalar product cx is

   cx (n×1) = [ cx_1 ; cx_2 ; ... ; cx_n ] .

• Inner product: let x (n×1) and y (n×1) be vectors. Then the inner product is a scalar

   x'y = Σ_{i=1}^n x_i y_i .
This is useful for computing correlations.
• Outer product: let x (n×1) and y (m×1) be vectors. Then the outer product is a matrix

   xy' (n×m) = [ x_1y_1 x_1y_2 ... x_1y_m
                 x_2y_1 x_2y_2 ... x_2y_m
                 ...
                 x_ny_1 x_ny_2 ... x_ny_m ] .

This is useful for computing the variance/covariance matrix.
• Geometric interpretations: do in 2 dimensions. All extends to n dimensions.

  – Scalar multiplication.
  – Vector addition.
  – Vector subtraction.
  – Inner product and orthogonality (x'y = 0 means x ⊥ y).
4.4 Linear dependence
Definition 1: a set of k vectors x_1, x_2, ..., x_k is linearly independent iff no one of them can be expressed as a linear combination of all or some of the others. Otherwise, they are linearly dependent.

Definition 2: a set of k vectors x_1, x_2, ..., x_k is linearly independent iff there does not exist a set of scalars c_1, c_2, ..., c_k, not all zero, such that Σ_{i=1}^k c_i x_i = 0. Otherwise, they are linearly dependent.

Consider ℝ²:

• Vectors that are multiples of one another are linearly dependent. If two vectors cannot be expressed as multiples of one another, then they are linearly independent.

• If two vectors are linearly independent, then any third vector can be expressed as a linear combination of the two.

• It follows that any set of k > 2 vectors in ℝ² must be linearly dependent.
4.5 Vector spaces and metric spaces
The complete set of vectors of n dimensions is a space, or vector space. If all elements of these vectors are real numbers (∈ ℝ), then this space is ℝⁿ.

• Any set of n linearly independent vectors is a base for ℝⁿ.

• A base spans the space to which it pertains. This means that any vector in ℝⁿ can be expressed as a linear combination of the base (it is spanned by the base).

• Bases are not unique.

• Bases are minimal: they contain the smallest number of vectors that span the space.

Example: unit vectors. Consider the vector space ℝ³. Then

   e_1 = [ 1 ; 0 ; 0 ] ,  e_2 = [ 0 ; 1 ; 0 ] ,  e_3 = [ 0 ; 0 ; 1 ]

is a base.
• Distance metric: let x, y ∈ S, some set. Define the distance between x and y by a function d = d(x, y), which has the following properties:

  – d(x, y) = 0 ⟺ x = y.
  – d(x, y) ≥ 0; d(x, y) > 0 ⟺ x ≠ y.
  – d(x, y) ≤ d(x, z) + d(z, y) ∀x, y, z (triangle inequality).

A metric space is given by a vector space + a distance metric. The Euclidean space is given by ℝⁿ + the following distance function:

   d(x, y) = √[ Σ_{i=1}^n (x_i − y_i)² ] = √[ (x − y)'(x − y) ] .

But you can imagine other metrics that give rise to other, different metric spaces.
4.6 Inverse matrix
Definition: if for some square (n × n) matrix A there exists a matrix B such that AB = I_n, then B is the inverse of A, and is denoted A⁻¹, i.e. AA⁻¹ = I.

Properties:

• Not all square matrices have an inverse. If A⁻¹ does not exist, then A is singular. Otherwise, A is nonsingular.

• A is the inverse of A⁻¹ and vice versa.

• The inverse is square.

• The inverse, if it exists, is unique. Proof: suppose not, i.e. AB = I and B ≠ A⁻¹. Then A⁻¹AB = A⁻¹I, so IB = B = A⁻¹, a contradiction.
• Operation rules:

  – (A⁻¹)⁻¹ = A.

  – (AB)⁻¹ = B⁻¹A⁻¹. Proof: let (AB)⁻¹ = C. Then (AB)⁻¹(AB) = I = C(AB) = CAB, so CABB⁻¹ = CA = IB⁻¹ = B⁻¹, and hence CAA⁻¹ = C = B⁻¹A⁻¹.

  – (A')⁻¹ = (A⁻¹)'. Proof: let (A')⁻¹ = B. Then (A')⁻¹A' = I = BA', so (BA')' = AB' = I' = I, hence A⁻¹AB' = A⁻¹I, B' = A⁻¹, and B = (A⁻¹)'.
Conditions for nonsingularity:

• Necessary condition: the matrix is square.

• Given a square matrix, a sufficient condition is that the rows (or columns) are linearly independent. It does not matter whether we use the row or the column criterion, because the matrix is square.

   A is square + linear independence (necessary and sufficient conditions) ⟺ A is nonsingular ⟺ ∃A⁻¹ .

How do we find the inverse matrix? Soon... Why do we care? See the next section.
4.7 Solving systems of linear equations
We seek a solution x to the system Ax = c:

   A (n×n) x (n×1) = c (n×1)  ⟹  x = A⁻¹c ,

where A is a nonsingular matrix and c is a vector. Each row of A gives coefficients to the elements of x:

   row 1:  Σ_{i=1}^n a_1i x_i = c_1
   row 2:  Σ_{i=1}^n a_2i x_i = c_2

Many linear models can be solved this way. We will learn clever ways to compute the solution to this system. We care about singularity of A because (given c) it tells us something about the solution x.
4.8 Markov chains
We introduce this through an example. Let x denote a vector of employment and unemployment rates:

   x' = [ e u ] ,  where e + u = 1 .

Define the matrix P as a transition matrix that gives the conditional probabilities of transition from the state today to a state next period,

   P = [ p_ee p_eu ; p_ue p_uu ] ,

where p_ij = Pr(state j tomorrow | state i today). Clearly, p_ee + p_eu = 1 and p_ue + p_uu = 1. Now add a time dimension to x: x'_t = [ e_t u_t ].

We ask: what are the employment and unemployment rates going to be in t + 1, given x_t? Answer:

   x'_{t+1} = x'_t P = [ e_t u_t ] [ p_ee p_eu ; p_ue p_uu ] = [ e_t p_ee + u_t p_ue ,  e_t p_eu + u_t p_uu ] .

What will they be in t + 2? Answer: x'_{t+2} = x'_t P². More generally, x'_{t0+k} = x'_{t0} P^k.

A transition matrix, sometimes called a stochastic matrix, is defined as a square matrix whose elements are non-negative and whose rows each sum to 1. It gives the conditional transition probabilities starting from each state, where each row is a starting state and each column is the state in the next period.

Steady state: a situation in which the distribution over the states does not change over time. How do we find such a state, if it exists?

• Method 1: start with some initial condition x_0 and iterate forward, x'_k = x'_0 P^k, taking k → ∞.

• Method 2: define x as the steady state value and solve x' = x'P, i.e. P'x = x.
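Both methods can be sketched numerically; the transition probabilities below are made up for illustration:

```python
import numpy as np

# Hypothetical transition matrix over states (e, u); rows sum to 1.
P = np.array([[0.95, 0.05],
              [0.50, 0.50]])

# Method 1: iterate x'_{k} = x'_0 P^k from an arbitrary initial distribution.
x = np.array([0.5, 0.5])
for _ in range(1000):
    x = x @ P

# Method 2: solve P'x = x, i.e. the eigenvector of P' for eigenvalue 1.
vals, vecs = np.linalg.eig(P.T)
v = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
v = v / v.sum()                 # normalize so the probabilities sum to 1

print(np.allclose(x, v))        # True: the two methods agree
print(x)                        # steady state, approximately [0.909, 0.091]
```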
5 Matrix algebra continued and linear models
5.1 Rank
De…nition: The number of linearly independent rows (or, equivalently, columns) of a matrix ¹ is the rank
of ¹: r = ra:/ (¹).
« If ¹
n·n
then ra:/ (¹) _ min¦:. :¦.
« If a square matrix ¹
n·n
has rank :, then we say that ¹ is full rank.
« Multiplying a matrix ¹ by a another matrix 1 that is full rank does not change the rank of ¹.
« If ra:/ (¹) = r
.
and ra:/ (1) = r
1
, then ra:/ (¹1) = min¦r
.
. r
1
¦.
Finding the rank: the echelon matrix method. First define the elementary operations:

1. Multiplying a row by a non-zero scalar: c·R_i, c ≠ 0.
2. Adding c times one row to another: R_i + cR_j.
3. Interchanging rows: R_i ↔ R_j.

All these operations alter the matrix, but do not change its rank (in fact, each can be expressed as multiplication by a full-rank matrix).

Define: echelon matrix.

1. Zero rows appear at the bottom.
2. For non-zero rows, the first non-zero element on the left is 1.
3. The first element of each row on the left (which is 1) appears to the left of the corresponding element of the row directly below it.

The number of non-zero rows in the echelon matrix is the rank.
We use the elementary operations to transform the matrix into an echelon matrix, which has as many zeros as possible. A good way to start the process is to concentrate zeros at the bottom. Example:

   A = [ 0 −11 −4 ; 2 6 2 ; 4 1 0 ]

   R_1 ↔ R_3:    [ 4 1 0 ; 2 6 2 ; 0 −11 −4 ]
   (1/4)R_1:     [ 1 1/4 0 ; 2 6 2 ; 0 −11 −4 ]
   R_2 − 2R_1:   [ 1 1/4 0 ; 0 11/2 2 ; 0 −11 −4 ]
   R_3 + 2R_2:   [ 1 1/4 0 ; 0 11/2 2 ; 0 0 0 ]
   (2/11)R_2:    [ 1 1/4 0 ; 0 1 4/11 ; 0 0 0 ]

There is a row of zeros: rank(A) = 2. So A is singular.
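The computation above can be checked numerically, using the same matrix A:

```python
import numpy as np

# The matrix from the echelon example above.
A = np.array([[0, -11, -4],
              [2,   6,  2],
              [4,   1,  0]])

print(np.linalg.matrix_rank(A))            # 2, matching the echelon computation
print(abs(np.linalg.det(A)) < 1e-9)        # True: A is singular
```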
5.2 Determinants and nonsingularity
Denote the determinant of a square matrix A (n×n) as |A|. This is not an absolute value. If the determinant is zero, then the matrix is singular.

1. |A (1×1)| = a_11.

2. |A (2×2)| = a_11 a_22 − a_12 a_21.
3. Determinants for higher-order matrices. Let A (n×n) be a square matrix. The ij minor |M_ij| is the determinant of the matrix obtained by erasing row i and column j from A. Example:

   A = [ a b c ; d e f ; g h i ] ,   |M_11| = | e f ; h i | .

The Laplace expansion along row i gives the determinant of A:

   |A| = Σ_{j=1}^n a_ij (−1)^{i+j} |M_ij| = Σ_{j=1}^n a_ij C_ij ,

where C_ij = (−1)^{i+j} |M_ij| is called the cofactor. Example: expansion along row 1:

   | a b c ; d e f ; g h i | = aC_11 + bC_12 + cC_13 = a|M_11| − b|M_12| + c|M_13|
      = a | e f ; h i | − b | d f ; g i | + c | d e ; g h |
      = a(ei − fh) − b(di − fg) + c(dh − eg) .

In doing this, it is useful to choose the expansion along the row that has the most zeros.
Properties of determinants:

1. |A'| = |A|.
2. Interchanging rows or columns flips the sign of the determinant.
3. Multiplying a row or column by a scalar c multiplies the determinant by c.
4. R_i + cR_j does not change the determinant.
5. If a row or a column is a multiple of another row or column, respectively, then the determinant is zero: linear dependence.
6. Replacing the minors in the Laplace expansion by alien minors gives zero, i.e.

   Σ_{j=1}^n a_ij (−1)^{i+j} |M_kj| = 0 ,  i ≠ k .

Determinants and singularity: |A| ≠ 0
   ⟺ A is nonsingular
   ⟺ columns and rows are linearly independent
   ⟺ ∃A⁻¹
   ⟺ for Ax = c, ∃!x = A⁻¹c
   ⟺ the column (or row) vectors of A span the vector space.
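The Laplace expansion can be sketched in code; the recursive function and the test matrix below are illustrative, not from the notes:

```python
import numpy as np

def laplace_det(A):
    """Determinant by cofactor (Laplace) expansion along row 0."""
    n = A.shape[0]
    if n == 1:
        return A[0, 0]
    total = 0.0
    for j in range(n):
        # Minor M_0j: delete row 0 and column j.
        minor = np.delete(np.delete(A, 0, axis=0), j, axis=1)
        total += A[0, j] * (-1) ** j * laplace_det(minor)   # (-1)^(0+j) |M_0j|
    return total

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 10.0]])
print(laplace_det(A))                                  # -3.0
print(np.isclose(laplace_det(A), np.linalg.det(A)))    # True
```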
5.3 Finding the inverse matrix
Let A be a nonsingular matrix,

   A (n×n) = [ a_11 a_12 ... a_1n
               a_21 a_22 ... a_2n
               ...
               a_n1 a_n2 ... a_nn ] .

The cofactor matrix of A is C_A:

   C_A = [ C_11 C_12 ... C_1n
           C_21 C_22 ... C_2n
           ...
           C_n1 C_n2 ... C_nn ] ,

where C_ij = (−1)^{i+j} |M_ij|. The adjoint matrix of A is adjA = C'_A:

   adjA = C'_A = [ C_11 C_21 ... C_n1
                   C_12 C_22 ... C_n2
                   ...
                   C_1n C_2n ... C_nn ] .

Consider AC'_A:

   AC'_A = [ Σ_j a_1j C_1j   Σ_j a_1j C_2j   ...   Σ_j a_1j C_nj
             Σ_j a_2j C_1j   Σ_j a_2j C_2j   ...   Σ_j a_2j C_nj
             ...
             Σ_j a_nj C_1j   Σ_j a_nj C_2j   ...   Σ_j a_nj C_nj ]

         = [ Σ_j a_1j C_1j   0               ...   0
             0               Σ_j a_2j C_2j   ...   0
             ...
             0               0               ...   Σ_j a_nj C_nj ]

         = [ |A|  0  ...  0
             0   |A| ...  0
             ...
             0    0  ... |A| ]  = |A| I ,

where the off-diagonal elements are zero due to alien cofactors. It follows that

   AC'_A = |A| I
   AC'_A (1/|A|) = I
   A⁻¹ = C'_A (1/|A|) = adjA / |A| .
Example:

   A = [ 1 2 ; 3 4 ] ,  C_A = [ 4 −3 ; −2 1 ] ,  C'_A = [ 4 −2 ; −3 1 ] ,  |A| = −2 ,

   A⁻¹ = adjA / |A| = [ −2 1 ; 3/2 −1/2 ] .
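This construction can be sketched directly (the function name is made up; the example matrix is the one from the text):

```python
import numpy as np

def inverse_via_adjoint(A):
    """A^{-1} = adjA / |A|, where adjA is the transposed cofactor matrix."""
    n = A.shape[0]
    C = np.empty_like(A, dtype=float)
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)   # cofactor C_ij
    return C.T / np.linalg.det(A)      # adjA = C', divided by |A|

A = np.array([[1.0, 2.0], [3.0, 4.0]])
print(inverse_via_adjoint(A))          # approximately [[-2, 1], [1.5, -0.5]]
print(np.allclose(inverse_via_adjoint(A), np.linalg.inv(A)))   # True
```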
5.4 Cramer’s rule
For the system Ax = c and nonsingular A, we have

   x = A⁻¹c = (adjA / |A|) c .

Denote by A_j the matrix A with column j replaced by c. Then it turns out that

   x_j = |A_j| / |A| .
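A minimal sketch of Cramer's rule, applied to a hypothetical 2 × 2 system:

```python
import numpy as np

def cramer(A, c):
    """Solve Ax = c by Cramer's rule: x_j = |A_j| / |A|."""
    detA = np.linalg.det(A)
    x = np.empty(len(c))
    for j in range(len(c)):
        Aj = A.copy()
        Aj[:, j] = c               # replace column j by c
        x[j] = np.linalg.det(Aj) / detA
    return x

# A made-up nonsingular system.
A = np.array([[1.0, 2.0], [3.0, 4.0]])
c = np.array([5.0, 6.0])
print(cramer(A, c))                                        # approximately [-4, 4.5]
print(np.allclose(cramer(A, c), np.linalg.solve(A, c)))    # True
```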
5.5 Homogenous equations: Ax = 0
Let the system of equations be homogenous: Ax = 0. If A is nonsingular, then only x = 0 is a solution. If A is singular, then there are infinitely many solutions, including x = 0.
5.6 Summary of linear equations: Ax = c
For nonsingular A:

1. c ≠ 0 ⟹ ∃!x ≠ 0.
2. c = 0 ⟹ ∃!x = 0.

For singular A:

1. c ≠ 0 ⟹ ∃x, infinitely many solutions ≠ 0.
   • If there is inconsistency (linear dependency in A, while the elements of c do not follow the same linear combination), there is no solution.
2. c = 0 ⟹ ∃x, infinitely many solutions, including 0.

One can think of the system Ax = c as defining a relation between c and x. If A is nonsingular, then there is a function (mapping/transformation) between c and x. In fact, when A is nonsingular, this transformation is invertible.
5.7 Leontief input/output model
We are interested in computing the level of output required from each industry in an economy in order to satisfy final demand. This is not a trivial question, because the output of some industries serves as input to other industries, while also being consumed in final demand. These relationships constitute input/output linkages.

Assume:

1. Each industry produces one homogenous good.
2. Inputs are used in fixed proportions.
3. Constant returns to scale.

This gives rise to the Leontief (fixed proportions) production function. The second assumption can be relaxed, depending on the interpretation of the model. If you only want to use the framework for accounting purposes, then it is not critical.
Define a_io as the unit requirement of inputs from industry i used in the production of output o. I.e., in order to produce one unit of output o you need a_io units of i. For n industries, A (n×n) = [a_io] is a technology matrix. Each column tells you how much of each input is required to produce one unit of output o. If some industry i does not require its own output for production, then a_ii = 0.

If all goods were used only as inputs into other industries, then there would be no primary inputs, i.e. labor, entrepreneurial talent, land, natural resources. To accommodate primary inputs, we add an open sector. If the a_io are denominated in monetary values, i.e., in order to produce $1 of output o you need $a_io of input i, then we must have Σ_{i=1}^n a_io ≤ 1. And if there is an open sector, then we must have Σ_{i=1}^n a_io < 1. This simply means that the cost of producing $1 is less than $1. By CRS and a competitive economy, we have the zero profit condition, which means that all revenue is paid out to inputs. So primary inputs receive (1 − Σ_{i=1}^n a_io) from each industry o.
Equilibrium implies

   supply = demand
          = demand for intermediate inputs + final demand .

For some good o we have

   x_o = Σ_{k=1}^n a_ok x_k + d_o = [ a_o1 x_1 + a_o2 x_2 + ... + a_on x_n ] (intermediate demand) + d_o (final demand) .

This implies

   −a_o1 x_1 − a_o2 x_2 − ... + (1 − a_oo) x_o − a_o,o+1 x_{o+1} − ... − a_on x_n = d_o .
In matrix notation:

   [ (1−a_11)   −a_12     −a_13   ...  −a_1n   ] [ x_1 ]   [ d_1 ]
   [  −a_21    (1−a_22)   −a_23   ...  −a_2n   ] [ x_2 ]   [ d_2 ]
   [  −a_31     −a_32    (1−a_33) ...  −a_3n   ] [ x_3 ] = [ d_3 ]
   [   ...       ...       ...    ...   ...    ] [ ... ]   [ ... ]
   [  −a_n1     −a_n2     −a_n3   ... (1−a_nn) ] [ x_n ]   [ d_n ]

Or

   (I − A) x = d .

(I − A) is the Leontief matrix. You need more x than just final demand, because some x is used as intermediate inputs ("I − A < I"):

   x = (I − A)⁻¹ d .

You need (I − A) to be nonsingular. But even then the solution for x might not be positive. We need to find conditions for this.
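A numerical sketch, using a hypothetical 3-industry technology matrix (all values made up; columns sum to less than 1, so there is an open sector):

```python
import numpy as np

# Hypothetical technology matrix: entry (i, o) is input i per unit of output o.
A = np.array([[0.2, 0.3, 0.2],
              [0.4, 0.1, 0.2],
              [0.1, 0.3, 0.2]])
d = np.array([10.0, 5.0, 6.0])          # final demand

x = np.linalg.solve(np.eye(3) - A, d)   # x = (I - A)^{-1} d
print(x)

# Gross output exceeds final demand: part of x is used up as intermediate inputs.
print(np.all(x >= d))              # True
# Market clearing: x = Ax + d.
print(np.allclose(x, A @ x + d))   # True
```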
5.7.1 Existence of a non-negative solution
Consider

   A = [ a b c ; d e f ; g h i ] .

Define:

• Principal minors: the minors that arise from deleting the i-th row and i-th column. E.g.

   |M_11| = | e f ; h i | ,  |M_22| = | a c ; g i | ,  |M_33| = | a b ; d e | .

• k-th order principal minor: a principal minor that arises from a matrix of dimensions k × k. If the dimensions of the original matrix are n × n, then a k-th order principal minor is obtained after deleting the same n − k rows and columns. E.g., the 1st order principal minors of A are

   |a| , |e| , |i| .

The 2nd order principal minors are |M_11|, |M_22| and |M_33| given above.

• Leading principal minors: these are the 1st, 2nd, 3rd (etc.) order principal minors, where we keep the upper-left corner of the original matrix in each one. E.g.

   |M_1| = |a| ,  |M_2| = | a b ; d e | ,  |M_3| = | a b c ; d e f ; g h i | .
Hawkins-Simon Condition (SHC, Theorem): consider the system of equations Bx = d. If (1) all off-diagonal elements of B (n×n) are non-positive, i.e. b_ij ≤ 0 ∀i ≠ j; and (2) all elements of d (n×1) are non-negative, i.e. d_i ≥ 0 ∀i; then ∃x ≥ 0 such that Bx = d iff (3) all leading principal minors are strictly positive, i.e. |M_i| > 0 ∀i. In our case, B = I − A, the Leontief matrix.
Economic meaning of SHC. To illustrate, use a 2 × 2 example:

   I − A = [ 1−a_11  −a_12 ; −a_21  1−a_22 ] .

From (3) we have |M_1| = |1 − a_11| = 1 − a_11 > 0, i.e. a_11 < 1. This means that less than the total output of x_1 is used to produce x_1, i.e. viability. Next, we have

   |M_2| = |I − A| = (1 − a_11)(1 − a_22) − a_12 a_21 = 1 − a_11 − a_22 + a_11 a_22 − a_12 a_21 > 0 .

It follows that

   (1 − a_11) a_22  [≥ 0]  + a_11 + a_12 a_21 < 1

   a_11 (direct use) + a_12 a_21 (indirect use) < 1 .

This means that the total amount of x_1 demanded (for production of x_1 and for production of x_2) is less than the amount produced (= 1), i.e. the resource constraint is respected.
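The condition can be checked mechanically; this sketch uses a made-up 2 × 2 technology matrix (the helper function name is illustrative):

```python
import numpy as np

def leading_principal_minors(B):
    """Determinants of the upper-left 1x1, 2x2, ..., nxn submatrices."""
    n = B.shape[0]
    return [np.linalg.det(B[:k, :k]) for k in range(1, n + 1)]

# Hypothetical technology matrix and its Leontief matrix.
A = np.array([[0.2, 0.3],
              [0.4, 0.1]])
B = np.eye(2) - A

minors = leading_principal_minors(B)
print(minors)                          # approximately [0.8, 0.6]
print(all(m > 0 for m in minors))      # True: a non-negative solution exists

# And indeed x >= 0 for a non-negative final demand d:
d = np.array([1.0, 2.0])
print(np.linalg.solve(B, d))           # strictly positive output vector
```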
5.7.2 Closed model version
The closed model version treats the primary sector like any other industry. Suppose that there is only one primary input: labor. Then one interpretation is that the value of consumption of each good is in fixed proportions (these preferences can be represented by a Cobb-Douglas utility function).

In this model final demand, as defined above, must equal zero. Since income accrues to primary inputs (think of labor) and this income is captured in x, it follows that the d vector must be equal to zero. We know that final demand equals income. If final demand were positive, then we would have to have an open sector to pay for that demand (from its income). I.e. we have a homogenous system:

   (I − A) x = 0

   [ (1−a_00)   −a_01    −a_02   ] [ x_0 ]   [ 0 ]
   [  −a_10    (1−a_11)  −a_12   ] [ x_1 ] = [ 0 ]
   [  −a_20     −a_21   (1−a_22) ] [ x_2 ]   [ 0 ] ,

where 0 denotes the primary sector (there could be more than one).

Each column in the original A matrix must sum to 1, i.e. a_0o + a_1o + ... + a_no = 1 ∀o, because all of the input is exhausted in production. But then the entries of each column of I − A sum to zero, so each row of I − A can be expressed as minus the sum of all other rows. It follows that I − A is singular, and therefore x is not unique! You can scale the economy up or down with no effect. In fact, this is a general property of CRS economies with no outside sector or endowment. One way to pin down the economy is to set some x_i to some level, as an endowment.
6 Derivatives and limits
Teaching assistant covers.
7 Differentiation and use in comparative statics
7.1 Differentiation rules
1. If y = f(x) = c, a constant, then dy/dx = 0.

2. d(ax^n)/dx = anx^{n−1}.

3. d(ln x)/dx = 1/x.

4. d[f(x) ± g(x)]/dx = f'(x) ± g'(x).

5. d[f(x)g(x)]/dx = f'(x)g(x) + f(x)g'(x).

6. d[f(x)/g(x)]/dx = [f'(x)g(x) − f(x)g'(x)] / [g(x)]² = f'(x)/g(x) − [f(x)/g(x)][g'(x)/g(x)].

7. df[g(x)]/dx = (df/dg)(dg/dx) (chain rule).

8. Inverse functions: let y = f(x) be strictly monotone. Then an inverse function, x = f⁻¹(y), exists, and

   dx/dy = df⁻¹(y)/dy = 1/(dy/dx) = 1/[df(x)/dx] ,

where x and y map one into the other, i.e. y = f(x) and x = f⁻¹(y).

• Strictly monotone means that x_1 > x_2 ⟹ f(x_1) > f(x_2) (strictly increasing) or f(x_1) < f(x_2) (strictly decreasing). It implies that there is an inverse function x = f⁻¹(y), because ∀y ∈ Range ∃!x ∈ Domain (recall: ∀x ∈ Domain ∃!y ∈ Range defines f(x)).
7.2 Partial derivatives
Let y = f(x_1, x_2, ..., x_n). Define the partial derivative of f with respect to x_i:

   ∂y/∂x_i = lim_{Δx_i → 0} [ f(x_i + Δx_i, x_{−i}) − f(x_i, x_{−i}) ] / Δx_i .

Operationally, you derive ∂y/∂x_i just as you would derive dy/dx_i, while treating all other x_{−i} as constants.
Example. Consider the following production function:

   y = z[αk^φ + (1−α)l^φ]^{1/φ} ,  φ ≤ 1 .

Define the elasticity of substitution as the percent change in relative factor intensity (k/l) in response to a 1 percent change in the relative factor returns (r/w). What is the elasticity of substitution? If factors are paid their marginal products, then

   ∂y/∂k = (1/φ) z [·]^{1/φ − 1} φ α k^{φ−1} = r
   ∂y/∂l = (1/φ) z [·]^{1/φ − 1} φ (1−α) l^{φ−1} = w .

Thus

   r/w = [α/(1−α)] (k/l)^{φ−1} ,

and then

   k/l = [α/(1−α)]^{1/(1−φ)} (r/w)^{−1/(1−φ)} .

The elasticity of substitution is −1/(1−φ). It is constant, with magnitude σ = 1/(1−φ). This production function exhibits constant elasticity of substitution, and is denoted a CES production function.
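A numerical sanity check of σ = 1/(1 − φ), with made-up parameter values; the helper name factor_ratio is illustrative:

```python
import numpy as np

# Hypothetical CES parameters: z = 1, alpha = 0.4, phi = 0.5, so sigma = 2.
alpha, phi = 0.4, 0.5

def factor_ratio(r_over_w):
    # k/l implied by equating marginal products to factor prices:
    # k/l = [alpha/(1-alpha)]^(1/(1-phi)) * (r/w)^(-1/(1-phi))
    return (alpha / (1 - alpha)) ** (1 / (1 - phi)) * r_over_w ** (-1 / (1 - phi))

# Elasticity of substitution: -d ln(k/l) / d ln(r/w), computed by a finite difference.
h = 1e-6
sigma = -(np.log(factor_ratio(np.exp(h))) - np.log(factor_ratio(1.0))) / h
print(sigma)    # approximately 2.0 = 1/(1 - 0.5)
```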
7.3 Gradients
Let y = f(x_1, x_2, ..., x_n). The gradient is defined as

   ∇f = (f_1, f_2, ..., f_n) ,

where

   f_i = ∂f/∂x_i .

We can use this in first order approximations:

   df|_{x⁰} = ∇f(x⁰) dx

   f(x) − f(x⁰) ≈ (f_1, f_2, ..., f_n)|_{x⁰} [ x_1 − x⁰_1 ; ... ; x_n − x⁰_n ] .
Application to the open input/output model:

   (I − A) x = d
   x = (I − A)⁻¹ d = V d

   [ x_1 ; ... ; x_n ] = [ v_11 ... v_1n ; ... ; v_n1 ... v_nn ] [ d_1 ; ... ; d_n ] .

Think of x as a function of d:

   ∇x_1 = (v_11, v_12, ..., v_1n) ,   v_ij = ∂x_i/∂d_j .
7.4 Jacobian and functional dependence
Let there be two functions:

   y_1 = f(x_1, x_2)
   y_2 = g(x_1, x_2) .

The Jacobian determinant is

   |J| = | ∂y/∂x' | = | ∂(y_1, y_2)/∂(x_1, x_2) | = | ∂y_1/∂x_1  ∂y_1/∂x_2 ; ∂y_2/∂x_1  ∂y_2/∂x_2 | .
Theorem (functional dependence): |J| = 0 ∀x iff the functions are dependent.

Example: y_1 = x_1 x_2 and y_2 = ln x_1 + ln x_2.

   |J| = | x_2  x_1 ; 1/x_1  1/x_2 | = x_2/x_2 − x_1/x_1 = 0 .

Example: y_1 = x_1 + 2x_2² and y_2 = ln(x_1 + 2x_2²).

   |J| = | 1  4x_2 ; 1/(x_1 + 2x_2²)  4x_2/(x_1 + 2x_2²) | = 4x_2/(x_1 + 2x_2²) − 4x_2/(x_1 + 2x_2²) = 0 .
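The theorem can be illustrated numerically: a finite-difference Jacobian of the second example is (approximately) singular at arbitrarily chosen test points:

```python
import numpy as np

# The second example: y1 = x1 + 2*x2^2, y2 = ln(x1 + 2*x2^2).
def F(x):
    return np.array([x[0] + 2 * x[1] ** 2, np.log(x[0] + 2 * x[1] ** 2)])

def jacobian_det(F, x, h=1e-6):
    """Jacobian determinant of F at x, by central differences."""
    n = len(x)
    J = np.empty((n, n))
    for j in range(n):
        e = np.zeros(n)
        e[j] = h
        J[:, j] = (F(x + e) - F(x - e)) / (2 * h)
    return np.linalg.det(J)

# |J| vanishes at each (arbitrarily chosen) point: the functions are dependent.
for x in [np.array([1.0, 1.0]), np.array([2.0, 0.5]), np.array([0.3, 2.0])]:
    print(abs(jacobian_det(F, x)) < 1e-6)   # True at each point
```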
Another example: x = Vd,

   [ x_1 ; ... ; x_n ] = [ v_11 ... v_1n ; ... ; v_n1 ... v_nn ] [ d_1 ; ... ; d_n ] = [ Σ_i v_1i d_i ; ... ; Σ_i v_ni d_i ] .

So |J| = |V|. It follows that linear dependence is equivalent to functional dependence for a system of linear equations. If |V| = 0, then there are ∞ solutions for x, and the relationship between d and x cannot be inverted.
8 Total differential, total derivative and the implicit function theorem
8.1 Total derivative
Often we are interested in the total rate of change in some variable in response to a change in some other variable or some parameter. If there are indirect effects, as well as direct ones, you want to take these into account. Sometimes the indirect effects are due to general equilibrium constraints and can be very important.
Example: consider the utility function u(x, y) and the budget constraint p_x x + p_y y = I. Then the total effect of a small change in x on utility is

  du/dx = ∂u/∂x + (∂u/∂y)(dy/dx) .
More generally, for F(x_1, ..., x_n):

  dF/dx_i = Σ_{j=1}^n (∂F/∂x_j)(dx_j/dx_i) ,

where we know that dx_i/dx_i = 1.
Example: z = F(x, y, u, v), where x = x(u, v) and y = y(u, v). Then

  dz/du = (∂F/∂x)[∂x/∂u + (∂x/∂v)(dv/du)] + (∂F/∂y)[∂y/∂u + (∂y/∂v)(dv/du)] + ∂F/∂u + (∂F/∂v)(dv/du) .

If we want to impose that v is constant, then this is denoted ∂z/∂u|_v, and all terms that involve dv/du are zero:

  ∂z/∂u|_v = (∂F/∂x)(∂x/∂u) + (∂F/∂y)(∂y/∂u) + ∂F/∂u .
8.2 Total differential
Now we are interested in the change (not the rate of change) in some variable or function if all its arguments are perturbed a bit. For example, if the saving function for the economy is S = S(Y, r), then

  dS = (∂S/∂Y) dY + (∂S/∂r) dr .

More generally, for y = F(x_1, ..., x_n):

  dy = Σ_{j=1}^n (∂F/∂x_j) dx_j .
One can view the total differential as a linearization of the function around a specific point.

The same rules that apply to derivatives apply to differentials; simply add dx after each partial derivative:

1. dc = 0 for a constant c.
2. d(cuⁿ) = cnu^{n−1} du = [∂(cuⁿ)/∂u] du.
3. d(u ± v) = du ± dv = [∂(u ± v)/∂u] du + [∂(u ± v)/∂v] dv.
« d(u ± v ± w) = du ± dv ± dw = [∂(u ± v ± w)/∂u] du + [∂(u ± v ± w)/∂v] dv + [∂(u ± v ± w)/∂w] dw.
4. d(uv) = v du + u dv = [∂(uv)/∂u] du + [∂(uv)/∂v] dv.
« d(uvw) = vw du + uw dv + uv dw = [∂(uvw)/∂u] du + [∂(uvw)/∂v] dv + [∂(uvw)/∂w] dw.
5. d(u/v) = (v du − u dv)/v² = [∂(u/v)/∂u] du + [∂(u/v)/∂v] dv.
Example: suppose that you want to know how much utility, u(x, y), changes if x and y are perturbed. Then

  du = (∂u/∂x) dx + (∂u/∂y) dy .

Now, if you impose that utility is not changing, i.e. you are moving along an isoquant, then this implies that du = 0, so

  du = (∂u/∂x) dx + (∂u/∂y) dy = 0 ,

and hence

  dy/dx = − (∂u/∂x)/(∂u/∂y) .

This should not be understood as a derivative, but rather as a ratio of perturbations. We will soon see conditions under which this is actually a derivative.
Log linearization. Suppose that you want to log-linearize z = F(x, y) around some point, say (x*, y*, z*). This means finding the percent change in z in response to a percent change in x and y. We have

  dz = (∂z/∂x) dx + (∂z/∂y) dy .

Divide through by z* to get

  dz/z* = (x*/z*)(∂z/∂x)(dx/x*) + (y*/z*)(∂z/∂y)(dy/y*)

  ẑ = (x*/z*)(∂z/∂x) x̂ + (y*/z*)(∂z/∂y) ŷ ,

where

  ẑ = dz/z* ≈ d ln z

is approximately the percent change.
Another example:

  Y = C + I + G
  dY = dC + dI + dG
  dY/Y = (C/Y)(dC/C) + (I/Y)(dI/I) + (G/Y)(dG/G)
  Ŷ = (C/Y) Ĉ + (I/Y) Î + (G/Y) Ĝ .
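Here the share-weighted decomposition is exact, not just approximate, because Y = C + I + G is linear; a quick numeric check (the numbers are mine):

```python
# Hypothetical national-accounts numbers and small perturbations
C, I, G = 70.0, 20.0, 10.0
Y = C + I + G
dC, dI, dG = 1.4, 0.6, -0.2

Y_hat_exact = (dC + dI + dG) / Y
Y_hat_shares = (C/Y)*(dC/C) + (I/Y)*(dI/I) + (G/Y)*(dG/G)
print(Y_hat_exact, Y_hat_shares)   # identical: each share weight cancels
```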
8.3 The implicit function theorem
This is a useful tool to study the behavior of an equilibrium in response to a change in an exogenous variable. Consider

  F(x, y) = 0 .

We are interested in characterizing the implicit function between x and y, if it exists. We already saw one implicit function when we computed the utility isoquant. In that case, we had

  u(x, y) = ū

for some constant level of ū. This can be rewritten in the form above as

  u(x, y) − ū = 0 .

From this we derived a dy/dx slope. But this can be more general and constitute a function.
Another example: what is the slope of a tangent line at any point on a circle?

  x² + y² = r²
  x² + y² − r² = 0
  F(x, y) = 0 .

Taking the total differential,

  F_x dx + F_y dy = 2x dx + 2y dy = 0
  dy/dx = − x/y ,  y ≠ 0 .

For example, the slope at (r/√2, r/√2) is −1.
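A sketch of the circle example with sympy (names are mine):

```python
import sympy as sp

x, y, r = sp.symbols('x y r', positive=True)
F = x**2 + y**2 - r**2

# Along F(x, y) = 0 the implicit function theorem gives dy/dx = -F_x/F_y
slope = -sp.diff(F, x) / sp.diff(F, y)        # -> -x/y
pt = {x: r/sp.sqrt(2), y: r/sp.sqrt(2)}
print(sp.simplify(slope.subs(pt)))            # -1
```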
The implicit function theorem: Let the function F(x, y) ∈ C¹ on some open set, with F(x, y) = 0. Then there exists an (implicit) function y = f(x) ∈ C¹ that satisfies F(x, f(x)) = 0, such that

  dy/dx = − F_x / F_y

on this open set. More generally, if F(y, x_1, x_2, ..., x_n) ∈ C¹ on some open set and F(y, x_1, x_2, ..., x_n) = 0, then there exists an (implicit) function y = f(x_1, x_2, ..., x_n) ∈ C¹ that satisfies F(x, f(x)) = 0, such that

  dy = Σ_{i=1}^n f_i dx_i .
If we allow only one specific x_i to be perturbed, then f_i = ∂y/∂x_i = − F_{x_i}/F_y. From F(y, x_1, x_2, ..., x_n) = 0 and y = f(x_1, x_2, ..., x_n) we have

  (∂F/∂y) dy + (∂F/∂x_1) dx_1 + ... + (∂F/∂x_n) dx_n = 0
  dy = f_1 dx_1 + ... + f_n dx_n ,

so that

  (∂F/∂y)(f_1 dx_1 + ... + f_n dx_n) + F_{x_1} dx_1 + ... + F_{x_n} dx_n = (F_{x_1} + F_y f_1) dx_1 + ... + (F_{x_n} + F_y f_n) dx_n = 0 .

Since we only allow x_i to be perturbed, dx_{−i} = 0, so (F_{x_i} + F_y f_i) = 0 and hence f_i = − F_{x_i}/F_y .
8.3.1 A more general version of the implicit function theorem
Let F(x, y) = 0 be a set of n functions, where x is m×1 (exogenous) and y is n×1 (endogenous). If

1. F ∈ C¹ and
2. |J| = |∂F/∂y′| ≠ 0 at some point (x_0, y_0) (no functional dependence),

then ∃ y = f(x), a set of n functions in a neighborhood of (x_0, y_0), such that f ∈ C¹ and F(x, f(x)) = 0 in that neighborhood of (x_0, y_0).
We further develop this. From F(x, y) = 0 we have

  [∂F/∂y′]_{n×n} dy_{n×1} + [∂F/∂x′]_{n×m} dx_{m×1} = 0  ⟹  [∂F/∂y′] dy = − [∂F/∂x′] dx .

From y = f(x) we have

  dy_{n×1} = [∂y/∂x′]_{n×m} dx_{m×1} .

Combining, we get

  [∂F/∂y′]_{n×n} [∂y/∂x′]_{n×m} dx_{m×1} = − [∂F/∂x′]_{n×m} dx_{m×1} .
Now suppose that only x_1 is perturbed, so that dx′ = (dx_1, 0, ..., 0). Then we get only the first column in the set of equations above:

  [(∂F¹/∂y_1)(∂y_1/∂x_1) + (∂F¹/∂y_2)(∂y_2/∂x_1) + ... + (∂F¹/∂y_n)(∂y_n/∂x_1)] dx_1 = − (∂F¹/∂x_1) dx_1
  ⋮
  [(∂Fⁿ/∂y_1)(∂y_1/∂x_1) + (∂Fⁿ/∂y_2)(∂y_2/∂x_1) + ... + (∂Fⁿ/∂y_n)(∂y_n/∂x_1)] dx_1 = − (∂Fⁿ/∂x_1) dx_1 .
By eliminating the dx_1 terms we get

  (∂F¹/∂y_1)(∂y_1/∂x_1) + (∂F¹/∂y_2)(∂y_2/∂x_1) + ... + (∂F¹/∂y_n)(∂y_n/∂x_1) = − ∂F¹/∂x_1
  ⋮
  (∂Fⁿ/∂y_1)(∂y_1/∂x_1) + (∂Fⁿ/∂y_2)(∂y_2/∂x_1) + ... + (∂Fⁿ/∂y_n)(∂y_n/∂x_1) = − ∂Fⁿ/∂x_1 ,

and thus

  [∂F/∂y′]_{n×n} [∂y/∂x_1]_{n×1} = − [∂F/∂x_1]_{n×1} .
Since we required |J| = |∂F/∂y′| ≠ 0, the [∂F/∂y′]_{n×n} matrix is nonsingular, and thus ∃! [∂y/∂x_1]_{n×1}, a solution to the system. This can be obtained by Cramer's rule:

  ∂y_j/∂x_1 = |J_j| / |J| ,
where |J_j| is obtained by replacing the jth column in |J| by ∂F/∂x_1.
Why is this useful? We are often interested in how a model behaves around some point, usually an equilibrium or perhaps a steady state. But models are typically nonlinear, and their behavior is hard to characterize without implicit functions.
In full matrix notation, the derivation above reads:

  [ ∂F¹/∂y_1 ⋯ ∂F¹/∂y_n ; ∂F²/∂y_1 ⋯ ∂F²/∂y_n ; ⋮ ; ∂Fⁿ/∂y_1 ⋯ ∂Fⁿ/∂y_n ] (dy_1, dy_2, ..., dy_n)′ + [ ∂F¹/∂x_1 ⋯ ∂F¹/∂x_m ; ∂F²/∂x_1 ⋯ ∂F²/∂x_m ; ⋮ ; ∂Fⁿ/∂x_1 ⋯ ∂Fⁿ/∂x_m ] (dx_1, dx_2, ..., dx_m)′ = 0 ,

so that

  [∂F/∂y′] (dy_1, ..., dy_n)′ = − [∂F/∂x′] (dx_1, ..., dx_m)′ .

From y = f(x),

  (dy_1, ..., dy_n)′ = [ ∂y_1/∂x_1 ⋯ ∂y_1/∂x_m ; ∂y_2/∂x_1 ⋯ ∂y_2/∂x_m ; ⋮ ; ∂y_n/∂x_1 ⋯ ∂y_n/∂x_m ] (dx_1, ..., dx_m)′ ,

and combining the two,

  [∂F/∂y′] [∂y/∂x′] (dx_1, ..., dx_m)′ = − [∂F/∂x′] (dx_1, ..., dx_m)′ .
8.3.2 Example: demand-supply system
demand: q^d = d(p, Y) , with d_p < 0 and d_Y > 0
supply: q^s = s(p) , with s_p > 0
equilibrium: q^d = q^s .

Let d, s ∈ C¹. By eliminating q we get

  s(p) − d(p, Y) = 0 ,

which is an implicit function

  F(p, Y) = 0 ,

where p is endogenous and Y is exogenous.
We are interested in how the endogenous price responds to income. By the implicit function theorem ∃ p = p(Y) such that

  dp/dY = − F_Y/F_p = − (−d_Y)/(s_p − d_p) = d_Y/(s_p − d_p) > 0 ,

because d_p < 0. An increase in income unambiguously increases the price.
To find how quantity changes we apply the total derivative approach to the demand function:

  dq/dY = (∂d/∂p)(dp/dY) + ∂d/∂Y ,

where the first term (the "substitution" effect) is negative and the second (the income effect) is positive, so the sign here is ambiguous. But we can show that it is positive by using the supply side:

  dq/dY = (∂s/∂p)(dp/dY) > 0 .
« Draw the demand-supply system.

This example is simple, but the technique is very powerful, especially in nonlinear general equilibrium models.
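To see the comparative statics at work, here is a sketch with hypothetical parametric forms of my choosing (linear demand and supply, so the theorem's conclusion can be read off exactly):

```python
import sympy as sp

p, Y = sp.symbols('p Y', positive=True)

# Hypothetical forms satisfying d_p < 0, d_Y > 0, s_p > 0
d = 10 - 2*p + sp.Rational(1, 2)*Y   # demand
s = 3*p                              # supply
F = s - d                            # F(p, Y) = 0 at equilibrium

dp_dY = -sp.diff(F, Y) / sp.diff(F, p)   # implicit function theorem
print(dp_dY)                             # 1/10 > 0: income raises the price

dq_dY = sp.diff(s, p) * dp_dY            # quantity response via supply side
print(dq_dY)                             # 3/10 > 0
```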
8.3.3 Example: demand-supply system, continued
Now consider the system, written as a system of two implicit functions:

  F(p, q; Y) = 0
  F¹(p, q, Y) = d(p, Y) − q = 0
  F²(p, q, Y) = s(p) − q = 0 .

Apply the general theorem. Check for functional dependence in the endogenous variables:

  |J| = |∂F/∂(p, q)| = | d_p  −1 ; s_p  −1 | = − d_p + s_p > 0 .
So there is no functional dependence. Thus ∃ p = p(Y) and ∃ q = q(Y). We now wish to compute the derivatives with respect to the exogenous argument Y. Since dF = 0 we have

  (∂F¹/∂p) dp + (∂F¹/∂q) dq + (∂F¹/∂Y) dY = 0
  (∂F²/∂p) dp + (∂F²/∂q) dq + (∂F²/∂Y) dY = 0 .
Thus

  [ ∂F¹/∂p  ∂F¹/∂q ; ∂F²/∂p  ∂F²/∂q ] (dp, dq)′ = − ((∂F¹/∂Y) dY , (∂F²/∂Y) dY)′ .

Use the following:

  dp = (∂p/∂Y) dY
  dq = (∂q/∂Y) dY
to get

  [ ∂F¹/∂p  ∂F¹/∂q ; ∂F²/∂p  ∂F²/∂q ] ((∂p/∂Y) dY , (∂q/∂Y) dY)′ = − ((∂F¹/∂Y) dY , (∂F²/∂Y) dY)′

  [ ∂F¹/∂p  ∂F¹/∂q ; ∂F²/∂p  ∂F²/∂q ] (∂p/∂Y , ∂q/∂Y)′ = − (∂F¹/∂Y , ∂F²/∂Y)′ .
Using the expressions for F¹ and F² we get

  [ d_p  −1 ; s_p  −1 ] (∂p/∂Y , ∂q/∂Y)′ = (−d_Y , 0)′ .

We seek a solution for ∂p/∂Y and ∂q/∂Y. This is a system of equations, which we solve using Cramer's rule:

  ∂p/∂Y = |J₁|/|J| = | −d_Y  −1 ; 0  −1 | / |J| = d_Y/|J| > 0

and

  ∂q/∂Y = |J₂|/|J| = | d_p  −d_Y ; s_p  0 | / |J| = d_Y s_p/|J| > 0 .
8.3.4 Example: demand-supply system, continued again
Now we use the total derivative approach. We have

  s(p) − d(p, Y) = 0 .

Take the total derivative with respect to Y:

  (∂s/∂p)(dp/dY) − (∂d/∂p)(dp/dY) − ∂d/∂Y = 0 .

Thus

  (dp/dY)[∂s/∂p − ∂d/∂p] = ∂d/∂Y ,

and so

  dp/dY = (∂d/∂Y) / (∂s/∂p − ∂d/∂p) > 0 .
« We could even do the total differential approach...
9 Optimization with one variable and Taylor expansion
A function may have many local minima and maxima. A function may have only one global minimum and one global maximum, if they exist.
9.1 Local maximum, minimum
First order necessary condition (FONC): Let f ∈ C¹ on some open convex set (to be defined properly later) around x_0. If f′(x_0) = 0, then x_0 is a critical point, i.e. it could be either a maximum or a minimum.

1. x_0 is a local maximum if f′(x) changes from positive to negative as x increases around x_0.
2. x_0 is a local minimum if f′(x) changes from negative to positive as x increases around x_0.
3. Otherwise, x_0 is an inflection point (neither max nor min).
Second order sufficient conditions (SOC): Let f ∈ C² on some open convex set around x_0. If f′(x_0) = 0 (FONC satisfied), then:

1. x_0 is a local maximum if f″(x_0) < 0.
2. x_0 is a local minimum if f″(x_0) > 0.
3. Otherwise (f″(x_0) = 0) we cannot be sure.

« Extrema at the boundaries: if the domain of f(x) is bounded, then the boundaries may be extrema without satisfying any of the conditions above.
Draw graphs for all cases.
Example:

  y = x³ − 12x² + 36x + 8 .

FONC:

  f′(x) = 3x² − 24x + 36 = 0
  x² − 8x + 12 = 0
  x² − 2x − 6x + 12 = 0
  x(x − 2) − 6(x − 2) = 0
  (x − 6)(x − 2) = 0 ,

so x_1 = 6 and x_2 = 2 are critical points and both satisfy the FONC.

  f″(x) = 6x − 24
  f″(2) = −12 ⟹ maximum
  f″(6) = +12 ⟹ minimum
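A quick symbolic check of the FONC and SOC for this example:

```python
import sympy as sp

x = sp.Symbol('x')
f = x**3 - 12*x**2 + 36*x + 8

crit = sp.solve(sp.diff(f, x), x)   # critical points from the FONC
f2 = sp.diff(f, x, 2)
for c in sorted(crit):
    print(c, f2.subs(x, c))         # sign of f'' classifies each point
```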
9.2 The Nth derivative test
If f′(x_0) = 0 and the first non-zero derivative at x_0 is of order n, f⁽ⁿ⁾(x_0) ≠ 0, then:

1. If n is even and f⁽ⁿ⁾(x_0) < 0, then x_0 is a local maximum.
2. If n is even and f⁽ⁿ⁾(x_0) > 0, then x_0 is a local minimum.
3. Otherwise x_0 is an inflection point.
Example:

  f(x) = (7 − x)⁴
  f′(x) = −4(7 − x)³ ,

so x = 7 is a critical point (it satisfies the FONC).

  f″(x) = 12(7 − x)² ,  f″(7) = 0
  f‴(x) = −24(7 − x) ,  f‴(7) = 0
  f⁽⁴⁾(x) = 24 > 0 ,

so x = 7 is a minimum: f⁽⁴⁾ is the first non-zero derivative, 4 is even, and f⁽⁴⁾ > 0.
« Understanding the Nth derivative test is based on the Maclaurin expansion and the Taylor expansion.
9.3 Maclaurin expansion
Terms of art:

« Expansion: expressing a function as a polynomial.
« Around x_0: in a small neighborhood of x_0.
Consider the following polynomial:

  f(x) = a_0 + a_1 x + a_2 x² + a_3 x³ + ... + a_n xⁿ
  f⁽¹⁾(x) = a_1 + 2a_2 x + 3a_3 x² + ... + n a_n x^{n−1}
  f⁽²⁾(x) = 2a_2 + 2·3 a_3 x + ... + (n − 1) n a_n x^{n−2}
  ⋮
  f⁽ⁿ⁾(x) = 1·2·...·(n − 1)·n a_n .

Evaluate at x = 0:

  f(0) = a_0
  f⁽¹⁾(0) = a_1
  f⁽²⁾(0) = 2a_2
  ⋮
  f⁽ⁿ⁾(0) = 1·2·...·(n − 1)·n a_n = n! a_n .
Maclaurin expansion (expansion around 0):

  f(x)|_{x=0} = f(0)/0! + [f⁽¹⁾(0)/1!] x + [f⁽²⁾(0)/2!] x² + [f⁽³⁾(0)/3!] x³ + ... + [f⁽ⁿ⁾(0)/n!] xⁿ .
9.4 Taylor expansion
Example: a quadratic polynomial,

  f(x) = a_0 + a_1 x + a_2 x² .

Define x = x_0 + δ, where we fix x_0 as an anchor and allow δ to vary. This essentially relocates the origin to (x_0, f(x_0)):

  g(δ) = a_0 + a_1 (x_0 + δ) + a_2 (x_0 + δ)² = f(x) .

Note that g(δ) = f(x).
Taking derivatives,

  g′(δ) = a_1 + 2a_2 (x_0 + δ) = a_1 + 2a_2 x_0 + 2a_2 δ
  g″(δ) = 2a_2 .

Use Maclaurin's expansion for g(δ) around δ = 0:

  g(δ)|_{δ=0} = g(0)/0! + [g⁽¹⁾(0)/1!] δ + [g⁽²⁾(0)/2!] δ² .

Using δ = x − x_0 and the fact that x = x_0 when δ = 0, we get a Maclaurin expansion for f(x) around x = x_0:
  f(x)|_{x=x_0} = f(x_0)/0! + [f⁽¹⁾(x_0)/1!] (x − x_0) + [f⁽²⁾(x_0)/2!] (x − x_0)² .
More generally, we have the Taylor expansion for an arbitrary Cⁿ function:

  f(x)|_{x=x_0} = f(x_0)/0! + [f⁽¹⁾(x_0)/1!] (x − x_0) + [f⁽²⁾(x_0)/2!] (x − x_0)² + ... + [f⁽ⁿ⁾(x_0)/n!] (x − x_0)ⁿ + R_n = P_n + R_n ,

where R_n is a remainder. Theorem:
« As we choose a higher n, R_n will be smaller and in the limit vanish.
« As x moves farther away from x_0, R_n may grow.
The Lagrange form of R_n: for some point p ∈ [x_0, x] (if x > x_0) or p ∈ [x, x_0] (if x < x_0), we have

  R_n = [1/(n + 1)!] f⁽ⁿ⁺¹⁾(p) (x − x_0)^{n+1} .
Example: for n = 0 we have

  f(x)|_{x=x_0} = f(x_0)/0! + R_n = f(x_0) + R_n = f(x_0) + f′(p)(x − x_0) .

Rearranging this we get

  f(x) − f(x_0) = f′(p)(x − x_0)

for some point p ∈ [x_0, x] (if x > x_0) or p ∈ [x, x_0] (if x < x_0). This is the Mean Value Theorem.
9.5 Taylor expansion and the Nth derivative test
Define: x_0 is a maximum (minimum) of f(x) if the change in the function, Δf = f(x) − f(x_0), is negative (positive) in a neighborhood of x_0, both to the right and to the left of x_0.

The Taylor expansion helps determine this:

  Δf = f⁽¹⁾(x_0)(x − x_0) + [f⁽²⁾(x_0)/2](x − x_0)² + ... + [f⁽ⁿ⁾(x_0)/n!](x − x_0)ⁿ + [1/(n + 1)!] f⁽ⁿ⁺¹⁾(p)(x − x_0)^{n+1} ,

where the last term is the remainder.
1. Consider the case that f′(x_0) ≠ 0, i.e. the first non-zero derivative at x_0 is of order 1. Choose n = 0, so that the remainder is of the same order as the first non-zero derivative, and evaluate

  Δf = f′(p)(x − x_0) .

Using the fact that p is very close to x_0, so close that f′(p) ≠ 0, we have that Δf changes signs around x_0.
2. Consider the case of f′(x_0) = 0 and f″(x_0) ≠ 0. Choose n = 1, so that the remainder is of the same order as the first non-zero derivative (2), and evaluate

  Δf = f′(x_0)(x − x_0) + [f″(p)/2](x − x_0)² = (1/2) f″(p)(x − x_0)² .

Since (x − x_0)² > 0 always and f″(p) ≠ 0, Δf is either positive (minimum) or negative (maximum) around x_0.
3. Consider the case of f′(x_0) = 0, f″(x_0) = 0 and f‴(x_0) ≠ 0. Choose n = 2, so that the remainder is of the same order as the first non-zero derivative (3), and evaluate

  Δf = f′(x_0)(x − x_0) + [f″(x_0)/2](x − x_0)² + [f‴(p)/6](x − x_0)³ = (1/6) f‴(p)(x − x_0)³ .

Since (x − x_0)³ changes signs around x_0 and f‴(p) ≠ 0, Δf changes signs, and therefore x_0 is not an extremum.
4. In the general case f′(x_0) = 0, f″(x_0) = 0, ..., f⁽ⁿ⁻¹⁾(x_0) = 0 and f⁽ⁿ⁾(x_0) ≠ 0. Choose n − 1, so that the remainder is of the same order as the first non-zero derivative (n), and evaluate

  Δf = f⁽¹⁾(x_0)(x − x_0) + [f⁽²⁾(x_0)/2](x − x_0)² + ... + [f⁽ⁿ⁻¹⁾(x_0)/(n − 1)!](x − x_0)^{n−1} + (1/n!) f⁽ⁿ⁾(p)(x − x_0)ⁿ = (1/n!) f⁽ⁿ⁾(p)(x − x_0)ⁿ .
In all cases f⁽ⁿ⁾(p) ≠ 0.

If n is odd, then (x − x_0)ⁿ changes signs around x_0, so Δf changes signs, and x_0 is therefore not an extremum.

If n is even, then (x − x_0)ⁿ > 0 always, and Δf is either positive (minimum) or negative (maximum).
« Warning: in all of the above we need f ∈ Cⁿ at x_0. For example,

  f(x) = e^{−1/(2x²)} for x ≠ 0 ,  f(0) = 0

has every derivative equal to zero at x = 0, so no order of the test applies there, even though x = 0 is a minimum.
10 Exponents and logs
These are used a lot in economics due to their useful properties, some of which have economic interpretations,
in particular in dynamic problems that involve time.
10.1 Exponent function
  y = f(t) = b^t ,  b > 1

(the case of 0 < b < 1 can be dealt with similarly).

« f(t) ∈ C^∞.
« f(t) > 0 ∀t ∈ ℝ.
« f′(t) > 0 and f″(t) > 0, so f is strictly increasing, and therefore ∃ t = f⁻¹(y) = log_b y, where y ∈ ℝ₊₊.
« Any y > 0 can be expressed as an exponent of many bases. Make sure you know how to convert bases:

  log_b y = log_a y / log_a b .
10.2 The constant e
The function y = Ae^{rt} captures growth rates:

  (d/dt) e^t = e^t
  (d/dt) Ae^{rt} = rAe^{rt} .

It turns out that

  lim_{n→∞} (1 + 1/n)ⁿ = lim_{m→0} (1 + m)^{1/m} = e = 2.71828...

Think of 1/n = m as a time interval.
Use a Taylor expansion of e^x around zero:

  e^x = e⁰ + (1/1!)(e^x)′|_{x=0}(x − 0) + (1/2!)(e^x)″|_{x=0}(x − 0)² + (1/3!)(e^x)‴|_{x=0}(x − 0)³ + ...
      = 1 + x + (1/2!)x² + (1/3!)x³ + ...

Evaluate this at x = 1:

  e¹ = e = 1 + 1 + 1/2! + 1/3! + ... = 2.71828...
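A numeric illustration: the partial sums of this series converge to e quickly.

```python
import math

# Sum the first 15 terms of the Maclaurin series of e^x at x = 1
s = 0.0
for k in range(15):
    s += 1.0 / math.factorial(k)
print(s)        # ~2.718281828...
print(math.e)   # reference value
```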
10.3 Examples
10.3.1 Interest compounding
Suppose that you are offered an interest rate r on your savings after a year. Then the return after one year is 1 + r. If you invested A, then at the end of the year you have

  A(1 + r) .

Now suppose that an interest rate r/n is offered for each 1/n of a year. In that case you get to compound r/n n times throughout the year, so an investment of A will be worth at the end of the year

  A(1 + r/n)ⁿ = A[(1 + r/n)^{n/r}]^r .
Now suppose that you get an instantaneous rate of interest r/n, where n → ∞ (1/n → 0), compounded n → ∞ times throughout the year. In that case an investment of A will be worth at the end of the year

  lim_{n→∞} A(1 + r/n)ⁿ = lim_{n→∞} A[(1 + r/n)^{n/r}]^r = A[lim_{u=r/n→0}(1 + u)^{1/u}]^r = Ae^r .

Thus, r is the instantaneous rate of return.

Suppose that we are interested in an arbitrary period of time, t, where, say, t = 1 is a year (but this is arbitrary). Then the same kind of math leads us to find that the value of an investment A after time t is

  A(1 + r/n)^{nt} = A[(1 + r/n)^{n/r}]^{rt}

if n is finite, and

  Ae^{rt}

if we get continuous compounding (n → ∞).
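A numeric sketch of compounding more and more frequently (the values of A and r are my choices):

```python
import math

A, r = 100.0, 0.05
for n in (1, 12, 365, 1_000_000):
    print(n, A * (1 + r/n)**n)   # approaches A*e^r as n grows

cont = A * math.exp(r)           # continuous-compounding limit
print(cont)
```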
10.3.2 Growth rates
The interest rate example tells you how much an investment is worth when it grows at a constant, instantaneous rate:

  growth rate = (dV/dt)/V = rAe^{rt}/(Ae^{rt}) = r per instant (dt).

Any discrete growth rate can be described by a continuous growth rate:

  A(1 + i)^t = Ae^{rt} ,

where

  (1 + i) = e^r .
10.3.3 Discounting
The value today of X received t periods in the future is

  NPV = X/(1 + i)^t ,

where 1/(1 + i)^t is the discount factor. This can also be represented by continuous discounting:

  NPV = X/(1 + i)^t = Xe^{−rt} ,

where the same discount factor is 1/(1 + i)^t = (1 + i)^{−t} = e^{−rt}.
10.4 Logarithms
The log is the inverse function of the exponent:

  y = b^t  ⟺  t = log_b y ,

for b > 1, t ∈ ℝ, y ∈ ℝ₊₊. This is very useful, e.g. for regression analysis. For example,

  16 = 2⁴  ⟺  4 = log₂ 16 .

Also,

  y = b^{log_b y} .

Convention:

  log_e x = ln x .

Rules:

« ln(uv) = ln u + ln v
« ln(u/v) = ln u − ln v
« ln(a u^b) = ln a + b ln u
« log_b x = log_a x / log_a b, where a, b, x > 0
  – Corollary: log_b e = ln e / ln b = 1/ln b
Some useful properties of logs:

1. Log differences approximate growth rates:

  ln X₂ − ln X₁ = ln(X₂/X₁) = ln(X₂/X₁ − 1 + 1) = ln(1 + (X₂ − X₁)/X₁) = ln(1 + x) ,

where x is the growth rate of X. Take a first order Taylor approximation of ln(1 + x) around ln(1):

  ln(1 + x) ≈ ln(1) + [d ln(1 + x)/dx]|_{x=0} · x = x .

So we have

  ln X₂ − ln X₁ ≈ x .

2. Logs "bend down": their image relative to the argument lies below the 45 degree line. Exponents do the opposite.

3. The derivative of the log is always positive, but ever diminishing.
4. Nevertheless, lim_{x→∞} log_b x = ∞ and lim_{x→0} log_b x = −∞. Therefore the range is ℝ.

5. Suppose that y = Ae^{rt}. Then ln y = ln A + rt, and therefore

  t = (ln y − ln A)/r .

This answers the question: how long will it take to grow from A to y, if growth is at an instantaneous rate of r?

6. Converting y = Ab^{ct} into y = Ae^{rt}: b^c = e^r, therefore c ln b = r, therefore y = Ab^{ct} = Ae^{(c ln b) t}.
10.5 Derivatives of exponents and logs
  (d/dt) ln t = 1/t
  (d/dt) log_b t = (d/dt)(ln t / ln b) = 1/(t ln b)
  (d/dt) e^t = e^t .

For the last one, let y = e^t, so that t = ln y:

  (d/dt) e^t = dy/dt = 1/(dt/dy) = 1/(1/y) = y = e^t .

By the chain rule:

  (d/dt) e^u = e^u (du/dt)
  (d/dt) ln u = (du/dt)/u .

Higher derivatives:

  (dⁿ/dtⁿ) e^t = e^t
  (d/dt) ln t = 1/t ,  (d²/dt²) ln t = −1/t² ,  (d³/dt³) ln t = 2/t³ , ...
  (d/dt) b^t = b^t ln b ,  (d²/dt²) b^t = b^t (ln b)² ,  (d³/dt³) b^t = b^t (ln b)³ , ...
10.6 Application: optimal timing
The value of k bottles of wine is given by

  V(t) = k e^{√t} .

Discounting: D(t) = e^{−rt}. The present value of V(t) today is

  PV = D(t) V(t) = e^{−rt} k e^{√t} = k e^{√t − rt} .

Choosing t to maximize PV = k e^{√t − rt} is equivalent to choosing t to maximize ln PV = ln k + √t − rt. FONC:

  0.5 t^{−0.5} − r = 0
  0.5 t^{−0.5} = r .

The marginal benefit of waiting one more instant equals the marginal cost of waiting one more instant, giving t* = 1/(4r²). SOC:

  −0.25 t^{−1.5} < 0 ,

so t* is a maximum.
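A symbolic check of the wine problem:

```python
import sympy as sp

t, r, k = sp.symbols('t r k', positive=True)
lnPV = sp.log(k) + sp.sqrt(t) - r*t

foc = sp.diff(lnPV, t)                    # 1/(2*sqrt(t)) - r
t_star = sp.solve(sp.Eq(foc, 0), t)[0]
print(t_star)                             # 1/(4*r**2)

soc = sp.diff(lnPV, t, 2).subs(t, t_star)
print(soc)                                # -2*r**3 < 0 -> a maximum
```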
10.7 Growth rates again
Denote

  (d/dt) x = ẋ .

The growth rate at some point in time is then

  (dx/dt)/x = ẋ/x .

So in the case x = Ae^{rt}, we have

  ẋ/x = r .

And since x(0) = Ae^{r·0} = A, we can write without loss of generality x(t) = x₀ e^{rt}.
Growth rates of combinations:

1. For y(t) = u(t) v(t) we have

  ẏ/y = u̇/u + v̇/v ,  i.e.  g_y = g_u + g_v .

Proof:

  ln y(t) = ln u(t) + ln v(t)
  (d/dt) ln y(t) = (d/dt) ln u(t) + (d/dt) ln v(t)
  (1/y(t))(dy/dt) = (1/u(t))(du/dt) + (1/v(t))(dv/dt) .

2. For y(t) = u(t)/v(t) we have

  ẏ/y = u̇/u − v̇/v ,  i.e.  g_y = g_u − g_v .

Proof: similar to the above.

3. For y(t) = u(t) ± v(t) we have

  g_y = [u/(u ± v)] g_u ± [v/(u ± v)] g_v .
10.8 Elasticities
The elasticity of y with respect to x is defined as

  σ_{y,x} = (dy/y)/(dx/x) = (dy/dx)(x/y) .

Since

  d ln x = (∂ ln x/∂x) dx = dx/x ,

we get

  σ_{y,x} = d ln y / d ln x .
11 Optimization with more than one choice variable
11.1 The differential version of optimization with one variable
This helps develop concepts for what follows. Let z = f(x) ∈ C¹, x ∈ ℝ. Then

  dz = f′(x) dx .

« FONC: an extremum may occur when f′(x) = 0, or equivalently, when dz = 0, since dx ≠ 0.
« SOC:

  d²z = d[dz] = d[f′(x) dx] = f″(x) dx² .

A maximum occurs when f″(x) < 0, or equivalently when d²z < 0. A minimum occurs when f″(x) > 0, or equivalently when d²z > 0.
11.2 Extrema of a function of two variables
Let z = f(x, y) ∈ C¹, x, y ∈ ℝ. Then

  dz = f_x dx + f_y dy .

FONC: dz = 0 for arbitrary values of dx and dy, not both equal to zero. A necessary condition that gives this is

  f_x = 0  and  f_y = 0 .

As before, this is not a sufficient condition for an extremum, not only because of inflection points, but also due to saddle points.
« Note: in matrix notation,

  dz = [∂f/∂(x, y)] (dx, dy)′ = ∇f dx = (f_x, f_y)(dx, dy)′ = f_x dx + f_y dy .

If x ∈ ℝⁿ, then

  dz = [∂f/∂x′] dx = ∇f dx = (f_1, ..., f_n)(dx_1, ..., dx_n)′ = Σ_{i=1}^n f_i dx_i .
Define

  f_xx = ∂²f/∂x² ,  f_yy = ∂²f/∂y² ,  f_xy = ∂²f/∂x∂y ,  f_yx = ∂²f/∂y∂x .

Young's Theorem: If both f_xy and f_yx are continuous, then f_xy = f_yx.
Now we apply this:

  d²z = d[dz] = d[f_x dx + f_y dy] = f_xx dx² + 2 f_xy dx dy + f_yy dy² .

In matrix notation,

  d²z = (dx, dy) [ f_xx  f_xy ; f_xy  f_yy ] (dx, dy)′ .

And more generally, if x ∈ ℝⁿ, then

  d²z = dx′ [∂²f/∂x∂x′] dx ,

where [∂²f/∂x∂x′] is the Hessian.
SONC (second order necessary conditions): for arbitrary values of dx and dy,

« d²z ≤ 0 gives a maximum.
« d²z ≥ 0 gives a minimum.

SOSC (second order sufficient conditions): for arbitrary values of dx and dy,

« d²z < 0 gives a maximum. In the two variable case,

  d²z < 0 iff f_xx < 0 , f_yy < 0 and f_xx f_yy > f²_xy .

« d²z > 0 gives a minimum. In the two variable case,

  d²z > 0 iff f_xx > 0 , f_yy > 0 and f_xx f_yy > f²_xy .
Comments:

« SONC is necessary but not sufficient, while SOSC is sufficient but not necessary.
« If f_xx f_yy = f²_xy, a point can nonetheless be an extremum.
« If f_xx f_yy < f²_xy, then the point is a saddle point.
« If f²_xy > 0, then f_xx f_yy > f²_xy > 0 implies sign(f_xx) = sign(f_yy).
11.3 Quadratic form and sign definiteness
This is a tool to help analyze SOCs. Relabel terms for convenience:

  z = f(x_1, x_2)
  d²z = q ,  dx_1 = d_1 ,  dx_2 = d_2
  f_11 = a ,  f_22 = b ,  f_12 = h .

Then

  d²z = f_11 dx_1² + 2 f_12 dx_1 dx_2 + f_22 dx_2²
  q = a d_1² + 2h d_1 d_2 + b d_2² = (d_1, d_2) [ a  h ; h  b ] (d_1, d_2)′ .

This is the quadratic form.
« Note: d_1 and d_2 are variables, not constants, as in the FONC. We require the SOCs to hold ∀d_1, d_2, and in particular ∀(d_1, d_2) ≠ 0.
Denote the Hessian by

  H = [∂²f/∂x∂x′] .

The quadratic form is

  q = d′Hd .

Define: q is positive definite / positive semidefinite / negative semidefinite / negative definite if q is invariably > 0 / ≥ 0 / ≤ 0 / < 0, regardless of the values of d. Otherwise, q is indefinite.
Consider the determinant of H, |H|, which we call here the discriminant of H:

  q is positive definite iff |a| > 0 and |H| > 0 ;
  q is negative definite iff |a| < 0 and |H| > 0 .

|a| is (the determinant of) the first ordered minor of H. In the simple two variable case, |H| is (the determinant of) the second ordered minor of H. In that case

  |H| = ab − h² .

If |H| > 0, then a and b must have the same sign, since ab > h² ≥ 0.
11.4 Quadratic form for n variables and sign definiteness

  q = d′Hd = Σ_{i=1}^n Σ_{j=1}^n h_ij d_i d_j .

« q is positive definite iff (the determinants of) all the leading principal minors are positive:

  |H₁| = |h_11| > 0 ,  |H₂| = | h_11  h_12 ; h_21  h_22 | > 0 ,  ... ,  |H_n| = |H| > 0 .

« q is negative definite iff (the determinants of) the odd principal minors are negative and the even ones are positive:

  |H₁| < 0 ,  |H₂| > 0 ,  |H₃| < 0 , ...
11.5 Characteristic roots test for sign definiteness
Consider some n×n matrix H. We look for a characteristic root r (a scalar) and a characteristic vector x (n×1) such that

  Hx = rx .

Developing this expression:

  Hx = rIx  ⟹  (H − rI)x = 0 .

Define (H − rI) as the characteristic matrix:

  (H − rI) = [ h_11 − r  h_12  ⋯  h_1n ; h_21  h_22 − r  ⋯  h_2n ; ⋮ ; h_n1  h_n2  ⋯  h_nn − r ] .

If (H − rI)x = 0 has a non-trivial solution (x ≠ 0), then (H − rI) must be singular, so that |H − rI| = 0. This is an equation that we can solve for r. The equation |H − rI| = 0 is the characteristic equation, an nth degree polynomial in r with n non-trivial solutions (some of the solutions can be equal). Some properties:

« If H is symmetric, then r ∈ ℝ. This is useful, because most applications in economics deal with symmetric matrices, like Hessians and variance-covariance matrices.
« For each characteristic root r that solves |H − rI| = 0 there are many characteristic vectors x such that Hx = rx. Therefore we normalize: x′x = 1. Denote the normalized characteristic vectors by v. Denote the characteristic vector (eigenvector) of the characteristic root (eigenvalue) r_i by v_i.
« The set of eigenvectors is orthonormal, i.e. orthogonal and normalized: v_i′v_j = 0 ∀i ≠ j and v_i′v_i = 1.
11.5.1 Application to quadratic form
Let V = (v₁, v₂, ..., v_n) be the set of eigenvectors of the matrix H. Define the vector y that solves d = Vy. We use this in the quadratic form:

  q = d′Hd = y′V′HVy = y′Ry ,

where V′HV = R. It turns out that

  R = [ r₁  0  ⋯  0 ; 0  r₂  ⋯  0 ; ⋮ ; 0  0  ⋯  r_n ] ,

the diagonal matrix of characteristic roots. Here is why:

  V′HV = V′ (Hv₁, Hv₂, ..., Hv_n) = (v₁, v₂, ..., v_n)′ (r₁v₁, r₂v₂, ..., r_n v_n) = [ r_j v_i′v_j ]_{i,j} = R ,

where the last equality follows from v_i′v_j = 0 ∀i ≠ j and v_i′v_i = 1. It follows that sign(q) depends only on the characteristic roots:

  q = y′Ry = Σ_{i=1}^n r_i y_i² .
11.5.2 Characteristic roots test for sign definiteness

q is positive definite / positive semidefinite / negative semidefinite / negative definite iff all r_i > 0 / ≥ 0 / ≤ 0 / < 0, regardless of the values of d. Otherwise, q is indefinite.

« When n is large, finding the roots can be hard, because it involves finding the roots of an nth degree polynomial. But the computer can do it for us.
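As noted, the computer can classify definiteness through the eigenvalues; a sketch with numpy (the test matrices are mine):

```python
import numpy as np

def definiteness(H, tol=1e-12):
    """Classify a symmetric matrix by the signs of its eigenvalues."""
    ev = np.linalg.eigvalsh(H)           # real eigenvalues of a symmetric H
    if np.all(ev > tol):
        return "positive definite"
    if np.all(ev < -tol):
        return "negative definite"
    if np.all(ev >= -tol):
        return "positive semidefinite"
    if np.all(ev <= tol):
        return "negative semidefinite"
    return "indefinite"

print(definiteness(np.array([[2.0, 1.0], [1.0, 2.0]])))   # positive definite
print(definiteness(np.array([[1.0, 2.0], [2.0, 1.0]])))   # indefinite
```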
11.6 Global extrema, convexity and concavity
We seek conditions for a global maximum or minimum. If a function has a "hill shape" over its entire domain, then we do not need to worry about boundary conditions and the local extremum will be a global extremum. Although a global maximum can be found at the boundary of the domain, it will not be detected by the FONC.

« Strictly concave: the global maximum is unique.
« Concave, but not strictly: this allows for flat regions, so the global maximum may not be unique.

Let z = f(x) ∈ C², x ∈ ℝⁿ. If d²z is positive definite / positive semidefinite / negative semidefinite / negative definite ∀x in the domain, then f is strictly convex / convex / concave / strictly concave.

When the objective function is general, we must assume convexity or concavity. If a specific functional form is used, we can check whether it is convex or concave.
11.6.1 Convexity and concavity defined

Definition 1: A function f is concave iff ∀x, y in the graph of f, the line between x and y lies on or below the graph.

« If ∀x ≠ y the line lies strictly below the graph, then f is strictly concave.
« For convexity, replace "below" with "above".

Definition 2: A function f is concave iff ∀x, y in the domain of f, which is assumed to be a convex set, and ∀θ ∈ (0, 1), we have

  θf(x) + (1 − θ)f(y) ≤ f[θx + (1 − θ)y] .

« For strict concavity, replace "≤" with "<" and add ∀x ≠ y.
« For convexity, replace "≤" with "≥" and "<" with ">".

Draw figures.

The term θx + (1 − θ)y, θ ∈ (0, 1), is called a convex combination.

Properties:

1. If f is linear, then f is both concave and convex, but not strictly.

2. If f is (strictly) concave, then −f is (strictly) convex.

« Proof: f is concave. Therefore ∀x, y in the domain of f and ∀θ ∈ (0, 1) we have

  θf(x) + (1 − θ)f(y) ≤ f[θx + (1 − θ)y]   | × (−1)
  θ[−f(x)] + (1 − θ)[−f(y)] ≥ −f[θx + (1 − θ)y] .

3. If f and g are concave functions, then f + g is also concave. If one of the functions is strictly concave, then f + g is strictly concave.

« Proof: f and g are concave, therefore

  θf(x) + (1 − θ)f(y) ≤ f[θx + (1 − θ)y]
  θg(x) + (1 − θ)g(y) ≤ g[θx + (1 − θ)y] .

Adding,

  θ[f(x) + g(x)] + (1 − θ)[f(y) + g(y)] ≤ f[θx + (1 − θ)y] + g[θx + (1 − θ)y]
  θ[(f + g)(x)] + (1 − θ)[(f + g)(y)] ≤ (f + g)[θx + (1 − θ)y] .

The proof for strict concavity is identical.
11.6.2 Example
Is z = x² + y² concave or convex? Consider first the LHS of the definition:

  (i): θf(x₁, y₁) + (1 − θ)f(x₂, y₂) = θ(x₁² + y₁²) + (1 − θ)(x₂² + y₂²) .

Now consider the RHS of the definition:

  (ii): f[θx₁ + (1 − θ)x₂, θy₁ + (1 − θ)y₂] = [θx₁ + (1 − θ)x₂]² + [θy₁ + (1 − θ)y₂]²
      = θ²(x₁² + y₁²) + (1 − θ)²(x₂² + y₂²) + 2θ(1 − θ)(x₁x₂ + y₁y₂) .

Now subtract, (i) − (ii):

  θ(1 − θ)(x₁² + y₁² + x₂² + y₂²) − 2θ(1 − θ)(x₁x₂ + y₁y₂) = θ(1 − θ)[(x₁ − x₂)² + (y₁ − y₂)²] ≥ 0 .

So this is a convex function. Moreover, it is strictly convex, since ∀(x₁, y₁) ≠ (x₂, y₂) we have (i) − (ii) > 0.

Clearly, −(x² + y²) is strictly concave.
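The inequality can also be spot-checked numerically at random points (a sketch, not a proof):

```python
import numpy as np

rng = np.random.default_rng(0)
f = lambda x, y: x**2 + y**2

# Check the convexity inequality at random points and weights
ok = True
for _ in range(1000):
    x1, y1, x2, y2 = rng.normal(size=4)
    th = rng.uniform()
    lhs = th*f(x1, y1) + (1 - th)*f(x2, y2)            # chord value
    rhs = f(th*x1 + (1 - th)*x2, th*y1 + (1 - th)*y2)  # function value
    ok = ok and (lhs >= rhs - 1e-12)
print(ok)   # True: the chord lies on or above the graph
```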
11.6.3 Example
Is f(x, y) = (x + y)² concave or convex? Use the same procedure as above.

  (i): θf(x₁, y₁) + (1 − θ)f(x₂, y₂) = θ(x₁ + y₁)² + (1 − θ)(x₂ + y₂)² .

Now consider

  (ii): f[θx₁ + (1 − θ)x₂, θy₁ + (1 − θ)y₂] = [θx₁ + (1 − θ)x₂ + θy₁ + (1 − θ)y₂]²
      = [θ(x₁ + y₁) + (1 − θ)(x₂ + y₂)]²
      = θ²(x₁ + y₁)² + 2θ(1 − θ)(x₁ + y₁)(x₂ + y₂) + (1 − θ)²(x₂ + y₂)² .

Now subtract, (i) − (ii):

  θ(1 − θ)[(x₁ + y₁)² + (x₂ + y₂)²] − 2θ(1 − θ)(x₁ + y₁)(x₂ + y₂) = θ(1 − θ)[(x₁ + y₁) − (x₂ + y₂)]² ≥ 0 .

So f is convex, but not strictly. Why not strictly? Because when x + y = 0, i.e. when y = −x, we get f(x, y) = 0. The shape of this function is a hammock, with the bottom along the line y = −x.
45
11.7 Differentiable functions, convexity and concavity

Let $f(x) \in C^1$ and $x \in \mathbb{R}$. Then $f$ is concave iff $\forall x_1, x_2 \in$ domain of $f$
$$f'(x_1) \ge \frac{f(x_1) - f(x_2)}{x_1 - x_2} ,$$
i.e. the slope is smaller than the derivative at $x_1$. Think of $x_1$ as the point of reference and $x_2$ as a target point. For convex replace "$\ge$" with "$\le$".

Another way to write this is
$$f(x_1) \ge f(x_2) + f'(x_1)(x_1 - x_2) .$$
If $x \in \mathbb{R}^n$, then $f$ is concave iff $\forall x_1, x_2 \in$ domain of $f$
$$f(x_1) \ge f(x_2) + \nabla f(x_1)(x_1 - x_2) .$$
Another way to write this is
$$\Delta f \ge \nabla f(x_1) \Delta x .$$

Let $z = f(x) \in C^2$ and $x \in \mathbb{R}^n$. Then $f$ is concave iff $\forall x \in$ domain of $f$ we have that $d^2 z$ is negative semidefinite. If $d^2 z$ is negative definite, then $f$ is strictly concave ("if", not "only if"). Replace "negative" with "positive" for convexity.
11.8 Convex sets in $\mathbb{R}^n$

This is related to, but distinct from, convex and concave functions.

Define: convex set in $\mathbb{R}^n$. Let the set $S \subset \mathbb{R}^n$. If $\forall x, y \in S$ and $\forall \theta \in [0,1]$ we have
$$\theta x + (1-\theta) y \in S ,$$
then $S$ is a convex set. (This definition actually holds in other spaces too.) Essentially, a set is convex if it has no "holes" (no doughnuts) and the boundary is not "dented" (no bananas).

11.8.1 Relation to convex functions 1

The concavity condition $\forall x, y \in$ domain of $f$ and $\forall \theta \in (0,1)$
$$\theta f(x) + (1-\theta) f(y) \le f[\theta x + (1-\theta) y]$$
assumes that the domain is convex: $\forall x, y \in$ domain of $f$ and $\forall \theta \in (0,1)$
$$\theta x + (1-\theta) y \in \text{domain of } f ,$$
because $f[\theta x + (1-\theta) y]$ must be defined.

11.8.2 Relation to convex functions 2

If $f$ is a convex function, then the set
$$S = \{x : f(x) \le k\} , \quad k \in \mathbb{R}$$
is a convex set (but not only if, i.e. this is a necessary condition, not sufficient).

If $f$ is a concave function, then the set
$$S = \{x : f(x) \ge k\} , \quad k \in \mathbb{R}$$
is a convex set (but not only if, i.e. this is a necessary condition, not sufficient).

• This is why there is an intimate relationship between convex preferences and concave utility functions.
11.9 Example: input decisions of a firm

$$\pi = R - C = pq - w\ell - rk .$$
Let $p$, $w$, $r$ be given, i.e. the firm is a price taker in a competitive economy. To simplify, let output, $q$, be the numeraire, so that $p = 1$ and everything is then denominated in units of output:
$$\pi = q - w\ell - rk .$$
Production function with decreasing returns to scale:
$$q = k^\alpha \ell^\alpha , \quad \alpha < 1/2 .$$
Choose $\{k, \ell\}$ to maximize $\pi$. FONC:
$$\frac{\partial \pi}{\partial k} = \alpha k^{\alpha-1} \ell^\alpha - r = 0$$
$$\frac{\partial \pi}{\partial \ell} = \alpha k^\alpha \ell^{\alpha-1} - w = 0 .$$
SOC:
$$|H| = \left|\frac{\partial^2 \pi}{\partial (k\ \ell)\, \partial (k\ \ell)'}\right| = \left|\begin{array}{cc} \alpha(\alpha-1) k^{\alpha-2} \ell^\alpha & \alpha^2 k^{\alpha-1} \ell^{\alpha-1} \\ \alpha^2 k^{\alpha-1} \ell^{\alpha-1} & \alpha(\alpha-1) k^\alpha \ell^{\alpha-2} \end{array}\right| .$$
$|H_1| = \alpha(\alpha-1) k^{\alpha-2} \ell^\alpha < 0$ $\forall k, \ell > 0$, and $|H_2| = |H| = \alpha^2 (1-2\alpha) k^{2(\alpha-1)} \ell^{2(\alpha-1)} > 0$ $\forall k, \ell > 0$. Therefore $\pi$ is a concave function and the extremum will be a maximum.

From the FONC:
$$\alpha k^{\alpha-1} \ell^\alpha = \frac{\alpha q}{k} = r$$
$$\alpha k^\alpha \ell^{\alpha-1} = \frac{\alpha q}{\ell} = w ,$$
so that $rk = w\ell = \alpha q$. Thus
$$k = \frac{\alpha q}{r} , \qquad \ell = \frac{\alpha q}{w} .$$
Using this in the production function:
$$q = k^\alpha \ell^\alpha = \left(\frac{\alpha q}{r}\right)^\alpha \left(\frac{\alpha q}{w}\right)^\alpha = \alpha^{2\alpha} q^{2\alpha} \left(\frac{1}{rw}\right)^\alpha ,$$
so that
$$q = \alpha^{\frac{2\alpha}{1-2\alpha}} \left(\frac{1}{rw}\right)^{\frac{\alpha}{1-2\alpha}} ,$$
and therefore
$$k = \alpha^{\frac{1}{1-2\alpha}} \left(\frac{1}{r}\right)^{\frac{1-\alpha}{1-2\alpha}} \left(\frac{1}{w}\right)^{\frac{\alpha}{1-2\alpha}} , \qquad \ell = \alpha^{\frac{1}{1-2\alpha}} \left(\frac{1}{r}\right)^{\frac{\alpha}{1-2\alpha}} \left(\frac{1}{w}\right)^{\frac{1-\alpha}{1-2\alpha}} .$$
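The closed forms can be sanity-checked numerically. A minimal sketch (not part of the notes; the parameter values are arbitrary): plug $k^*$ and $\ell^*$ back into the FONCs.

```python
# Check the firm's closed-form input demands against the FONCs of
# pi = k**a * l**a - w*l - r*k.  Parameter values chosen arbitrarily.
a, r, w = 0.3, 1.5, 2.0

e = 1.0 / (1.0 - 2.0 * a)                  # common exponent 1/(1-2a)
k = a**e * (1/r)**((1 - a) * e) * (1/w)**(a * e)
l = a**e * (1/r)**(a * e) * (1/w)**((1 - a) * e)

foc_k = a * k**(a - 1) * l**a - r          # d(pi)/dk, should be ~0
foc_l = a * k**a * l**(a - 1) - w          # d(pi)/dl, should be ~0
print(abs(foc_k) < 1e-9, abs(foc_l) < 1e-9)   # True True
```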
12 Optimization under equality constraints

12.1 Example: the consumer problem

Objective: Choose $\{x, y\}$ to maximize $u(x,y)$
Constraint(s): s.t. $(x,y) \in B = \{(x,y) : x, y \ge 0, \ x p_x + y p_y \le I\}$

(draw the budget set, $B$). Under some conditions, which we will explore soon, we will get the result that the consumer chooses a point on the budget line, s.t. $x p_x + y p_y = I$, and that $x, y \ge 0$ is trivially satisfied (nonsatiation and strict convexity of $u$). So we state a simpler problem:

Objective: Choose $\{x, y\}$ to maximize $u(x,y)$
Constraint(s): s.t. $x p_x + y p_y = I$.

The optimum will be denoted $(x^*, y^*)$. The value of the problem is thus $u(x^*, y^*)$. Constraints can only hurt the unconstrained value (although they may not). This will happen when the unconstrained optimum point is not in the constraint set. E.g.,

Choose $\{x, y\}$ to maximize $x - x^2 + y - y^2$

has a maximum at $(x^*, y^*) = (1/2, 1/2)$, but this point is not on the line $x + y = 2$, so applying this constraint will move us away from the unconstrained optimum and hurt the objective.
12.2 Lagrange method: one constraint, two variables

Let $f, g \in C^1$. Suppose that $(x^*, y^*)$ is the solution to

Choose $\{x, y\}$ to maximize $z = f(x,y)$, s.t. $g(x,y) = c$

and that $(x^*, y^*)$ is not a critical point of $g(x,y) = c$, i.e. we do not have both $g_x = 0$ and $g_y = 0$ at $(x^*, y^*)$. Then there exists a number $\lambda^*$ such that $(x^*, y^*, \lambda^*)$ is a critical point of
$$\mathcal{L} = f(x,y) + \lambda [c - g(x,y)] ,$$
i.e.
$$\frac{\partial \mathcal{L}}{\partial \lambda} = c - g(x,y) = 0$$
$$\frac{\partial \mathcal{L}}{\partial x} = f_x - \lambda g_x = 0$$
$$\frac{\partial \mathcal{L}}{\partial y} = f_y - \lambda g_y = 0 .$$
From this it follows that at $(x^*, y^*, \lambda^*)$
$$g(x^*, y^*) = c$$
$$\lambda^* = f_x / g_x$$
$$\lambda^* = f_y / g_y .$$

• The last equations make it clear why we must check the constraint qualification, that we do not have both $g_x = 0$ and $g_y = 0$ at $(x^*, y^*)$, i.e. check that $(x^*, y^*)$ is not a critical point of $g(x,y) = c$. For linear constraints this will be automatically satisfied.

• Always write $+\lambda [c - g(x,y)]$.

Recall that for an unconstrained maximum we must have
$$dz = f_x dx + f_y dy = 0 ,$$
and thus
$$\frac{dy}{dx} = -\frac{f_x}{f_y} .$$
But even in the constrained problem this still holds – as we will see below – except that now $dx$ and $dy$ are not arbitrary: they must satisfy the constraint, i.e.
$$g_x dx + g_y dy = 0 .$$
Thus
$$\frac{dy}{dx} = -\frac{g_x}{g_y} .$$
From both of these we obtain
$$\frac{g_x}{g_y} = \frac{f_x}{f_y} ,$$
i.e. the objective and the constraint are tangent. This follows from
$$\frac{f_y}{g_y} = \lambda = \frac{f_x}{g_x} .$$

A graphic interpretation. Think of the gradient as a vector that points in a particular direction. This direction is where to move in order to increase the function the most, and it is perpendicular to the isoquant of the function. Notice that we have
$$\nabla f(x^*, y^*) = \lambda^* \nabla g(x^*, y^*)$$
$$(f_x, f_y) = \lambda^* (g_x, g_y) .$$
This means that the gradients of the constraint and of the isoquant of the objective at the optimal value are parallel. They may point in the same direction if $\lambda > 0$ or in opposite directions if $\lambda < 0$.
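The tangency logic can be illustrated numerically. A minimal sketch (not part of the notes; the toy problem and numbers are assumptions): maximize $f(x,y) = xy$ on the line $x + y = 10$ and check $\lambda = f_x/g_x = f_y/g_y$ at the optimum.

```python
# Tangency condition on a toy problem: max f(x,y) = x*y s.t. g(x,y) = x + y = 10.
# Substitute y = 10 - x and scan the constraint line for the maximizer.
xs = [i / 1000 for i in range(10001)]           # grid on [0, 10]
best_x = max(xs, key=lambda x: x * (10 - x))    # maximizer along the line
best_y = 10 - best_x

fx, fy = best_y, best_x        # gradient of f = (y, x) at the optimum
gx, gy = 1, 1                  # gradient of g = (1, 1)
lam = fx / gx                  # lambda* = f_x / g_x
print(best_x, best_y, lam)     # 5.0 5.0 5.0, and f_y/g_y gives the same value
```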
12.3 $\lambda$ is the shadow cost of the constraint

$\lambda$ tells you how much $f$ would increase if we relax the constraint by one unit, i.e. increase or decrease $c$ by 1 (for equality constraints, it will be either-or). For example, if the objective is utility and the constraint is your budget in money, \$, then $\lambda$ is in terms of utils/\$. It tells you how many more utils you would get if you had one more dollar.

Write the system of equations that define the optimum as identities
$$F^1(\lambda, x, y) = c - g(x,y) = 0$$
$$F^2(\lambda, x, y) = f_x - \lambda g_x = 0$$
$$F^3(\lambda, x, y) = f_y - \lambda g_y = 0 .$$
This is a system of functions of the form $F(\lambda, x, y, c) = 0$. If all these functions are continuous and $|J| \ne 0$ at $(\lambda^*, x^*, y^*)$, where
$$|J| = \left|\frac{\partial F}{\partial (\lambda\ x\ y)}\right| = \left|\begin{array}{ccc} 0 & -g_x & -g_y \\ -g_x & f_{xx} - \lambda g_{xx} & f_{xy} - \lambda g_{xy} \\ -g_y & f_{xy} - \lambda g_{xy} & f_{yy} - \lambda g_{yy} \end{array}\right| ,$$
then by the implicit function theorem we have $\lambda^* = \lambda(c)$, $x^* = x(c)$ and $y^* = y(c)$, with derivatives defined that can be found as we did above. The point is that such functions exist and that they are differentiable. It follows that there is a sense in which $d\lambda^*/dc$ is meaningful.

Now consider the value of the Lagrangian
$$\mathcal{L}^* = \mathcal{L}(\lambda^*, x^*, y^*) = f(x^*, y^*) + \lambda^* [c - g(x^*, y^*)] ,$$
where we remember that $(x^*, y^*, \lambda^*)$ is a critical point. Take the derivative w.r.t. $c$:
$$\frac{d\mathcal{L}^*}{dc} = f_x \frac{dx^*}{dc} + f_y \frac{dy^*}{dc} + \frac{d\lambda^*}{dc}[c - g(x^*, y^*)] + \lambda^* \left[1 - g_x \frac{dx^*}{dc} - g_y \frac{dy^*}{dc}\right]$$
$$= \frac{dx^*}{dc}[f_x - \lambda^* g_x] + \frac{dy^*}{dc}[f_y - \lambda^* g_y] + \frac{d\lambda^*}{dc}[c - g(x^*, y^*)] + \lambda^* = \lambda^* .$$
Therefore
$$\frac{d\mathcal{L}^*}{dc} = \lambda^* = \frac{\partial \mathcal{L}^*}{\partial c} .$$
This is a manifestation of the envelope theorem (see below). But we also know that at the optimum we have
$$c - g(x^*, y^*) = 0 .$$
So at the optimum we have
$$\mathcal{L}(x^*, y^*, \lambda^*) = f(x^*, y^*) ,$$
and therefore
$$\frac{d\mathcal{L}^*}{dc} = \frac{df^*}{dc} = \lambda^* .$$
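The shadow-price interpretation can be checked by finite differences. A minimal sketch (not part of the notes; the toy problem is an assumption): for maximize $f = xy$ s.t. $x + y = c$, the closed-form value is $f^*(c) = c^2/4$ and $\lambda^* = c/2$.

```python
# Finite-difference check of df*/dc = lambda* on a toy problem:
# max f = x*y s.t. x + y = c, with x* = y* = c/2 and lambda* = c/2.
def value(c):
    return (c / 2) * (c / 2)    # f evaluated at the constrained optimum

c = 10.0
lam_star = c / 2                # from the FONCs: lambda* = f_x/g_x = y* = c/2
h = 1e-6
dvalue_dc = (value(c + h) - value(c - h)) / (2 * h)
print(dvalue_dc, lam_star)      # both approximately 5.0
```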
12.4 The envelope theorem

Let $x^*$ be a critical point of $f(x, \theta)$. Then
$$\frac{df(x^*, \theta)}{d\theta} = \frac{\partial f(x^*, \theta)}{\partial \theta} .$$
Proof: since at $x^*$ we have $f_x(x^*, \theta) = 0$, we have
$$\frac{df(x^*, \theta)}{d\theta} = \frac{\partial f(x^*, \theta)}{\partial x} \frac{dx^*}{d\theta} + \frac{\partial f(x^*, \theta)}{\partial \theta} = \frac{\partial f(x^*, \theta)}{\partial \theta} .$$

• Drawing of an "envelope" of functions and optima for $f(x^*, \theta_1)$, $f(x^*, \theta_2)$, ...
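A numerical sketch of the theorem (not part of the notes; the function is an assumed toy example): for $f(x,\theta) = -(x-\theta)^2 + \theta^2$ the maximizer is $x^*(\theta) = \theta$, and the total and partial derivatives of the value coincide.

```python
# Envelope theorem for f(x, theta) = -(x - theta)**2 + theta**2, x*(theta) = theta.
def f(x, theta):
    return -(x - theta) ** 2 + theta ** 2

def x_star(theta):
    return theta                      # critical point: f_x = -2(x - theta) = 0

theta, h = 3.0, 1e-6
# Total derivative of the value function f(x*(theta), theta):
total = (f(x_star(theta + h), theta + h) - f(x_star(theta - h), theta - h)) / (2 * h)
# Partial derivative w.r.t. theta, holding x fixed at x*(theta):
partial = (f(x_star(theta), theta + h) - f(x_star(theta), theta - h)) / (2 * h)
print(total, partial)                 # both approximately 2*theta = 6.0
```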
12.5 Lagrange method: one constraint, many variables

Let $f(x), g(x) \in C^1$ and $x \in \mathbb{R}^n$. Suppose that $x^*$ is the solution to

Choose $x$ to maximize $f(x)$, s.t. $g(x) = c$,

and that $x^*$ is not a critical point of $g(x) = c$. Then there exists a number $\lambda^*$ such that $(x^*, \lambda^*)$ is a critical point of
$$\mathcal{L} = f(x) + \lambda [c - g(x)] ,$$
i.e.
$$\frac{\partial \mathcal{L}}{\partial \lambda} = c - g(x) = 0$$
$$\frac{\partial \mathcal{L}}{\partial x_i} = f_i - \lambda g_i = 0 , \quad i = 1, 2, ...n .$$

• The constraint qualification is similar to above:
$$\nabla g^* = (g_1(x^*), g_2(x^*), ...g_n(x^*)) \ne 0 .$$
12.6 Lagrange method: many constraints, many variables

Let $f(x), g^j(x) \in C^1$, $j = 1, 2, ...m$, and $x \in \mathbb{R}^n$. Suppose that $x^*$ is the solution to

Choose $x$ to maximize $f(x)$, s.t. $g^1(x) = c^1, \ g^2(x) = c^2, \ ...g^m(x) = c^m$,

and that $x^*$ satisfies the constraint qualifications. Then there exist $m$ numbers $\lambda_1^*, \lambda_2^*, ...\lambda_m^*$ such that $(x^*, \lambda^*)$ is a critical point of
$$\mathcal{L} = f(x) + \sum_{j=1}^{m} \lambda_j \left[c^j - g^j(x)\right] ,$$
i.e.
$$\frac{\partial \mathcal{L}}{\partial \lambda_j} = c^j - g^j(x) = 0 , \quad j = 1, 2, ...m$$
$$\frac{\partial \mathcal{L}}{\partial x_i} = f_i - \sum_{j=1}^{m} \lambda_j g_i^j = 0 , \quad i = 1, 2, ...n .$$

• The constraint qualification now requires that
$$\text{rank}\left[\frac{\partial g}{\partial x'}\right]_{m \times n} = m ,$$
which is as large as it can possibly be. This means that we must have $m \le n$, because otherwise the maximal rank would be $n < m$. This constraint qualification, as all the others, means that there exists an $(n-m)$-dimensional tangent hyperplane (an $\mathbb{R}^{n-m}$ vector space). Loosely speaking, it ensures that we can construct tangencies freely enough.
12.7 Constraint qualifications in action

This example shows that when the constraint qualification is not met, the Lagrange method does not work.

Choose $\{x, y\}$ to maximize $x$, s.t. $x^3 + y^2 = 0$.

The constraint set is given by
$$y^2 = -x^3 \;\Rightarrow\; y = \pm (-x)^{3/2} \ \text{for } x \le 0 ,$$
i.e.
$$C = \left\{(x,y) : x \le 0, \ y = (-x)^{3/2}, \ y = -(-x)^{3/2}\right\} .$$
Notice that $(0,0)$ is the maximum point. Let us check the constraint qualification:
$$\nabla g = \left(3x^2, \ 2y\right)$$
$$\nabla g(0,0) = (0,0) .$$
This violates the constraint qualification, since $(0,0)$ is a critical point of $g(x,y)$.

Now check the Lagrangian
$$\mathcal{L} = x + \lambda \left[-x^3 - y^2\right]$$
$$\mathcal{L}_\lambda = -x^3 - y^2 = 0$$
$$\mathcal{L}_x = 1 - \lambda 3x^2 = 0 \;\Rightarrow\; \lambda = 1/3x^2$$
$$\mathcal{L}_y = -\lambda 2y = 0 \;\Rightarrow\; \text{either } \lambda = 0 \text{ or } y = 0 .$$

• Suppose $x = 0$. Then $\lambda = \infty$ – not admissible.
• Suppose $x \ne 0$. Then $\lambda > 0$ and thus $y = 0$. But then from the constraint set $x = 0$ – a contradiction.
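The failure can be seen numerically. A minimal sketch (not part of the notes; the sampled points are an arbitrary choice): the maximizer of $f = x$ on the constraint set is the origin, exactly where $\nabla g$ vanishes, so the tangency logic behind the FONCs has nothing to work with.

```python
# On the constraint set x**3 + y**2 = 0, the max of f(x, y) = x is at (0, 0),
# yet grad g vanishes there (the constraint qualification fails).
def grad_g(x, y):
    return (3 * x**2, 2 * y)    # gradient of g(x, y) = x**3 + y**2

# Points on the constraint set: x <= 0, y = +/-(-x)**1.5
points = [(-t, s * (t ** 1.5)) for t in [0.0, 0.1, 0.5, 1.0] for s in (1, -1)]
best = max(points, key=lambda p: p[0])   # maximize f = x over the sampled set
print(best, grad_g(*best))               # the origin, with gradient (0.0, 0.0)
```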
12.8 Constraint qualifications and Fritz John Theorem

Let $f(x), g(x) \in C^1$, $x \in \mathbb{R}^n$. Suppose that $x^*$ is the solution to

Choose $x$ to maximize $f(x)$, s.t. $g(x) = c$.

Then there exist two numbers $\lambda_0^*$ and $\lambda_1^*$ such that $(\lambda_1^*, x^*)$ is a critical point of
$$\mathcal{L} = \lambda_0 f(x) + \lambda_1 [c - g(x)] ,$$
i.e.
$$\frac{\partial \mathcal{L}}{\partial \lambda_1} = c - g(x) = 0$$
$$\frac{\partial \mathcal{L}}{\partial x_i} = \lambda_0 f_i - \lambda_1 g_i = 0 , \quad i = 1, 2, ...n$$
and
$$\lambda_0^* \in \{0, 1\}$$
$$\{\lambda_0^*, \lambda_1^*\} \ne (0, 0) .$$
This generalizes to multi-constraint problems.
12.9 Second order conditions

We want to know whether $d^2 z$ is negative or positive definite on the constraint set. Using the Lagrange method we find a critical point $(x^*, \lambda^*)$ of the problem
$$\mathcal{L} = f(x) + \lambda [c - g(x)] .$$
But this is not a maximum of the $\mathcal{L}$ problem. In fact, $(x^*, \lambda^*)$ is a saddle point: perturbations of $x$ around $x^*$ will hurt the objective, while perturbations of $\lambda$ around $\lambda^*$ will increase the objective. If $(x^*, \lambda^*)$ is a critical point of the $\mathcal{L}$ problem, then: holding $\lambda^*$ constant, $x^*$ maximizes the problem; and holding $x^*$ constant, $\lambda^*$ minimizes the problem. This makes sense: lowering the shadow cost of the constraint as much as possible at the point that maximizes the value.

This complicates characterizing the second order conditions, to distinguish maxima from minima. We want to know whether $d^2 z$ is negative or positive definite on the constraint set. Consider the two-variable case
$$dz = f_x dx + f_y dy .$$
From $g(x,y) = c$ we have
$$g_x dx + g_y dy = 0 \;\Rightarrow\; dy = -\frac{g_x}{g_y} dx ,$$
i.e. $dy$ is not arbitrary. We can treat $dy$ as a function of $x$ and $y$ when we differentiate $dz$ the second time:
$$d^2 z = d(dz) = \frac{\partial (dz)}{\partial x} dx + \frac{\partial (dz)}{\partial y} dy = \frac{\partial}{\partial x}[f_x dx + f_y dy] dx + \frac{\partial}{\partial y}[f_x dx + f_y dy] dy$$
$$= \left[f_{xx} dx + f_{yx} dy + f_y \frac{\partial (dy)}{\partial x}\right] dx + \left[f_{xy} dx + f_{yy} dy + f_y \frac{\partial (dy)}{\partial y}\right] dy$$
$$= f_{xx} dx^2 + 2 f_{xy} dx dy + f_{yy} dy^2 + f_y d^2 y ,$$
where we use
$$d^2 y = d(dy) = \frac{\partial (dy)}{\partial x} dx + \frac{\partial (dy)}{\partial y} dy .$$
This is not a quadratic form, but we use $g(x,y) = c$ again to transform it into one, by eliminating $d^2 y$. Differentiate
$$dg = g_x dx + g_y dy = 0 ,$$
using $dy$ as a function of $x$ and $y$ again:
$$d^2 g = d(dg) = \frac{\partial (dg)}{\partial x} dx + \frac{\partial (dg)}{\partial y} dy = \frac{\partial}{\partial x}[g_x dx + g_y dy] dx + \frac{\partial}{\partial y}[g_x dx + g_y dy] dy$$
$$= \left[g_{xx} dx + g_{yx} dy + g_y \frac{\partial (dy)}{\partial x}\right] dx + \left[g_{xy} dx + g_{yy} dy + g_y \frac{\partial (dy)}{\partial y}\right] dy$$
$$= g_{xx} dx^2 + 2 g_{xy} dx dy + g_{yy} dy^2 + g_y d^2 y = 0 .$$
Thus
$$d^2 y = -\frac{1}{g_y}\left[g_{xx} dx^2 + 2 g_{xy} dx dy + g_{yy} dy^2\right] .$$
Use this in the expression for $d^2 z$ to get
$$d^2 z = \left[f_{xx} - f_y \frac{g_{xx}}{g_y}\right] dx^2 + 2\left[f_{xy} - f_y \frac{g_{xy}}{g_y}\right] dx dy + \left[f_{yy} - f_y \frac{g_{yy}}{g_y}\right] dy^2 .$$
From the FONCs we have
$$\lambda = \frac{f_y}{g_y}$$
and by differentiating the FONCs we get
$$\mathcal{L}_{xx} = f_{xx} - \lambda g_{xx}$$
$$\mathcal{L}_{yy} = f_{yy} - \lambda g_{yy}$$
$$\mathcal{L}_{xy} = f_{xy} - \lambda g_{xy} .$$
We use all this to get
$$d^2 z = \mathcal{L}_{xx} dx^2 + 2 \mathcal{L}_{xy} dx dy + \mathcal{L}_{yy} dy^2 .$$
This is a quadratic form, but not a standard one, because $dx$ and $dy$ are not arbitrary. As before, we want to know the sign of $d^2 z$, but unlike the unconstrained case, $dx$ and $dy$ must satisfy $dg = g_x dx + g_y dy = 0$. Thus, we have second order necessary conditions (SONC):

• If maximum, then $d^2 z$ is negative semidefinite s.t. $dg = 0$.
• If minimum, then $d^2 z$ is positive semidefinite s.t. $dg = 0$.

The second order sufficient conditions (SOSC) are:

• If $d^2 z$ is negative definite s.t. $dg = 0$, then maximum.
• If $d^2 z$ is positive definite s.t. $dg = 0$, then minimum.

These are less stringent conditions relative to unconstrained optimization, where we required conditions on $d^2 z$ for all values of $dx$ and $dy$. Here we consider only a subset of those values, so the requirement is less stringent, although slightly harder to characterize.
12.10 Bordered Hessian and constrained optimization

Using the notation we used before for a Hessian,
$$H = \left[\begin{array}{cc} a & h \\ h & b \end{array}\right]$$
we can write
$$d^2 z = \mathcal{L}_{xx} dx^2 + 2 \mathcal{L}_{xy} dx dy + \mathcal{L}_{yy} dy^2$$
as
$$d^2 z = a\, dx^2 + 2h\, dx dy + b\, dy^2 .$$
We also rewrite
$$g_x dx + g_y dy = 0$$
as
$$\alpha dx + \beta dy = 0 .$$
The second order conditions involve the sign of
$$d^2 z = a\, dx^2 + 2h\, dx dy + b\, dy^2 \quad \text{s.t.} \quad 0 = \alpha dx + \beta dy .$$
Eliminate $dy$ using
$$dy = -\frac{\alpha}{\beta} dx$$
to get
$$d^2 z = \left[a \beta^2 - 2h \alpha \beta + b \alpha^2\right] \frac{dx^2}{\beta^2} .$$
The sign of $d^2 z$ depends on the square brackets. For a maximum we need it to be negative. It turns out that
$$-\left[a \beta^2 - 2h \alpha \beta + b \alpha^2\right] = \left|\begin{array}{ccc} 0 & \alpha & \beta \\ \alpha & a & h \\ \beta & h & b \end{array}\right| = \left|\overline{H}\right| .$$
Notice that $\overline{H}$ contains the Hessian, and is bordered by the gradient of the constraint. Thus, the term "bordered Hessian".

The $n$-dimensional case with one constraint

Let $f(x), g(x) \in C^2$, $x \in \mathbb{R}^n$. Suppose that $x^*$ is a critical point of the Lagrangian problem. Let
$$H_{n \times n} = \left[\frac{\partial^2 \mathcal{L}}{\partial x \partial x'}\right]$$
be the Hessian of $\mathcal{L}$ evaluated at $x^*$. Let $\nabla g$ be the set of linear constraints on $d_{n \times 1}$ ($= dx_{n \times 1}$), evaluated at $x^*$:
$$\nabla g\, d = 0 .$$
We want to know what is the sign of
$$d^2 z = q = d' H d$$
such that
$$\nabla g\, d = 0 .$$
The sign definiteness of the quadratic form $q$ depends on the following bordered Hessian
$$\overline{H}_{(n+1) \times (n+1)} = \left[\begin{array}{cc} 0 & \nabla g_{1 \times n} \\ \nabla g'_{n \times 1} & H_{n \times n} \end{array}\right] .$$
Recall that sign definiteness of a matrix depends on the signs of the determinants of the leading principal minors. Therefore $d^2 z$ is

positive definite (min) s.t. $dg = 0$ iff $\left|\overline{H}_3\right|, \left|\overline{H}_4\right|, ...\left|\overline{H}_{n+1}\right| < 0$;
negative definite (max) s.t. $dg = 0$ iff $\left|\overline{H}_3\right| > 0, \left|\overline{H}_4\right| < 0, \left|\overline{H}_5\right| > 0, ...$

• Note that in the text they start from $\left|\overline{H}_2\right|$, which they define as the third leading principal minor; this is an abuse of notation. We have one consistent way to define leading principal minors of a matrix and we should stick to that.
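A minimal sketch of the test (not part of the notes; the toy problem is an assumption): for maximize $f = xy$ s.t. $x + y = 10$, the Hessian of $\mathcal{L}$ and the border are constant, and the single relevant minor has the sign required for a maximum.

```python
# Bordered-Hessian test for the toy problem max f = x*y s.t. x + y = 10.
# L = x*y + lam*(10 - x - y), so H = [[0, 1], [1, 0]] and grad g = (1, 1).
import numpy as np

H = np.array([[0.0, 1.0],
              [1.0, 0.0]])        # d2L/dxdx'
grad_g = np.array([1.0, 1.0])

Hbar = np.block([[np.zeros((1, 1)), grad_g[None, :]],
                 [grad_g[:, None], H]])

det3 = np.linalg.det(Hbar)        # leading principal minor |Hbar_3|
print(det3 > 0)                   # True: d2z negative definite s.t. dg = 0 -> max
```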
The general case

Let $f(x), g^j(x) \in C^2$, $j = 1, 2, ...m$, and $x \in \mathbb{R}^n$. Suppose that $x^*$ is a critical point of the Lagrangian problem. Let
$$H_{n \times n} = \left[\frac{\partial^2 \mathcal{L}}{\partial x \partial x'}\right]$$
be the Hessian of $\mathcal{L}$ evaluated at $x^*$. Let
$$A_{m \times n} = \left[\frac{\partial g}{\partial x'}\right]$$
be the set of linear constraints on $d_{n \times 1}$ ($= dx_{n \times 1}$), evaluated at $x^*$:
$$A d = 0 .$$
We want to know what is the sign of
$$d^2 z = q = d' H d$$
such that
$$A d = 0 .$$
The sign definiteness of the quadratic form $q$ depends on the bordered Hessian
$$\overline{H}_{(m+n) \times (m+n)} = \left[\begin{array}{cc} 0_{m \times m} & A_{m \times n} \\ A'_{n \times m} & H_{n \times n} \end{array}\right] .$$
The sign definiteness of $\overline{H}$ depends on the signs of the determinants of the leading principal minors.

• For a maximum ($d^2 z$ negative definite) we require that $\left|\overline{H}_{2m}\right|, \left|\overline{H}_{2m+1}\right|, ...\left|\overline{H}_{m+n}\right|$ alternate signs, where $\text{sign}\left|\overline{H}_{2m}\right| = (-1)^m$ (Dixit).

• An alternative formulation for a maximum ($d^2 z$ negative definite) requires that $\left|\overline{H}_{m+n}\right|, \left|\overline{H}_{m+n-1}\right|, ...$ alternate signs, where $\text{sign}\left|\overline{H}_{m+n}\right| = (-1)^n$ (Simon and Blume).

• Anyway, the formulation in the text is wrong.

• For a minimum...? We know that searching for a minimum of $f$ is like searching for a maximum of $-f$. So one could set up the problem that way and just treat it like a maximization problem.
12.11 Quasiconcavity and quasiconvexity

This is a less restrictive condition on the objective function.

• Definition: a function $f$ is quasiconcave iff $\forall x_1, x_2 \in$ domain of $f$, which is assumed to be a convex set, and $\forall \theta \in (0,1)$ we have
$$f(x_2) \ge f(x_1) \;\Rightarrow\; f[\theta x_1 + (1-\theta) x_2] \ge f(x_1)$$
or, more simply put,
$$f[\theta x_1 + (1-\theta) x_2] \ge \min \{f(x_1), f(x_2)\} .$$
In words: the image of the convex combination is larger than the lower of the two images.

• Definition: a function $f$ is quasiconvex iff $\forall x_1, x_2 \in$ domain of $f$, which is assumed to be a convex set, and $\forall \theta \in (0,1)$ we have
$$f(x_2) \ge f(x_1) \;\Rightarrow\; f[\theta x_1 + (1-\theta) x_2] \le f(x_2)$$
or, more simply put,
$$f[\theta x_1 + (1-\theta) x_2] \le \max \{f(x_1), f(x_2)\} .$$
In words: the image of the convex combination is smaller than the higher of the two images.

• For strict quasiconcavity and quasiconvexity replace the second inequality with a strict inequality, but not the first. Strict quasiconcavity or quasiconvexity rules out flat segments.

• Note that $\theta \notin \{0, 1\}$.

Due to the flat segment, the function on the left is not strictly quasiconcave. Note that neither of these functions is convex or concave. Thus, this is a weaker restriction. The following function, while neither convex nor concave, is both quasiconcave and quasiconvex.

• Compare the definition of quasiconcavity to concavity and then compare graphically.

Properties:

1. If $f$ is linear, then it is both quasiconcave and quasiconvex.
2. If $f$ is (strictly) quasiconcave, then $-f$ is (strictly) quasiconvex.
3. If $f$ is concave (convex), then it is quasiconcave (quasiconvex) – but not only if.

• Note that unlike concave functions, the sum of two quasiconcave functions is NOT necessarily quasiconcave.
Alternative definitions: Let $x \in \mathbb{R}^n$.

• $f$ is quasiconcave iff $\forall k \in \mathbb{R}$ the set
$$S = \{x : f(x) \ge k\}$$
is a convex set (for concavity it is only "if", not "only if").

• $f$ is quasiconvex iff $\forall k \in \mathbb{R}$ the set
$$S = \{x : f(x) \le k\}$$
is a convex set (for convexity it is only "if", not "only if").

These may be easier to verify. Recall that for concavity and convexity the conditions above were necessary, but not sufficient.

Consider a continuously differentiable function $f(x) \in C^1$ and $x \in \mathbb{R}^n$. Then $f$ is

• quasiconcave iff $\forall x_1, x_2 \in$ domain of $f$, which is assumed to be a convex set, we have
$$f(x_2) \ge f(x_1) \;\Rightarrow\; \nabla f(x_1)(x_2 - x_1) \ge 0 .$$
In words: the function does not change the sign of the slope (more than once).

• quasiconvex iff $\forall x_1, x_2 \in$ domain of $f$, which is assumed to be a convex set, we have
$$f(x_2) \ge f(x_1) \;\Rightarrow\; \nabla f(x_2)(x_2 - x_1) \ge 0 .$$
In words: the function does not change the sign of the slope (more than once).

• For strictness, change the last inequality to a strict one, which rules out flat regions.

Consider a twice continuously differentiable function $f(x) \in C^2$ and $x \in \mathbb{R}^n$. As usual, the Hessian is denoted $H$ and the gradient as $\nabla f$. Define
$$B_{(n+1) \times (n+1)} = \left[\begin{array}{cc} 0_{1 \times 1} & \nabla f_{1 \times n} \\ \nabla f'_{n \times 1} & H_{n \times n} \end{array}\right] .$$
Conditions for quasiconcavity and quasiconvexity in the positive orthant, $x \in \mathbb{R}^n_+$, involve the leading principal minors of $B$.

Necessary condition: $f$ is quasiconcave on $\mathbb{R}^n_+$ only if $\forall x \in \mathbb{R}^n_+$, the leading principal minors of $B$ follow this pattern
$$|B_2| \le 0, \ |B_3| \ge 0, \ |B_4| \le 0, \ ...$$
Sufficient condition: $f$ is quasiconcave on $\mathbb{R}^n_+$ if $\forall x \in \mathbb{R}^n_+$, the leading principal minors of $B$ follow this pattern
$$|B_2| < 0, \ |B_3| > 0, \ |B_4| < 0, \ ...$$
Finally, there are also explicitly quasiconcave functions.
• Definition: a function $f$ is explicitly quasiconcave if $\forall x_1, x_2 \in$ domain of $f$, which is assumed to be a convex set, and $\forall \theta \in (0,1)$ we have
$$f(x_2) > f(x_1) \;\Rightarrow\; f[\theta x_1 + (1-\theta) x_2] > f(x_1) .$$
This rules out flat segments, except at the top of the hill.

Ranking of quasiconcavity, from strongest to weakest:

1. strictly quasiconcave:
$$f(x_2) \ge f(x_1) \;\Rightarrow\; f[\theta x_1 + (1-\theta) x_2] > f(x_1) .$$
2. explicitly quasiconcave:
$$f(x_2) > f(x_1) \;\Rightarrow\; f[\theta x_1 + (1-\theta) x_2] > f(x_1) .$$
3. quasiconcave:
$$f(x_2) \ge f(x_1) \;\Rightarrow\; f[\theta x_1 + (1-\theta) x_2] \ge f(x_1) .$$
12.12 Why is quasiconcavity important? Global maximum

Quasiconcavity is important because it allows arbitrary cardinality in the utility function, while maintaining ordinality. Concavity imposes decreasing marginal utility, which is not necessary for characterization of convex preferences and convex indifference sets. Only when dealing with risk do we need to impose concavity. We do not need concavity for global extrema.

Suppose that $x^*$ is the solution to

Choose $x$ to maximize $f(x)$, s.t. $g(x) = c$.

If

1. the set $\{x : g(x) = c\}$ is a convex set; and
2. $f$ is explicitly quasiconcave,

then $f(x^*)$ is a global (constrained) maximum. If in addition $f$ is strictly quasiconcave, then this global maximum is unique.
12.13 Application: cost minimization

We like apples ($a$) and bananas ($b$), but want to reduce the cost of any $(a, b)$ bundle that delivers a given level of utility $U(a, b)$ (or fruit salad, if we want to use a production metaphor).

Choose $\{a, b\}$ to minimize cost $C = a p_a + b p_b$, s.t. $U(a, b) = u$

Set up the appropriate Lagrangian
$$\mathcal{L} = a p_a + b p_b + \lambda [u - U(a, b)] .$$
Here $\lambda$ is in units of \$/util: it tells you how much an additional util will cost.

FONC:
$$\frac{\partial \mathcal{L}}{\partial \lambda} = u - U(a, b) = 0$$
$$\frac{\partial \mathcal{L}}{\partial a} = p_a - \lambda U_a = 0 \;\Rightarrow\; p_a / U_a = \lambda > 0$$
$$\frac{\partial \mathcal{L}}{\partial b} = p_b - \lambda U_b = 0 \;\Rightarrow\; p_b / U_b = \lambda > 0 .$$
Thus
$$MRS = \frac{U_a}{U_b} = \frac{p_a}{p_b} ,$$
so we have tangency. Let the value of the problem be
$$C^* = a^* p_a + b^* p_b .$$
Take a total differential at the optimum to get
$$dC = p_a da + p_b db = 0 \;\Rightarrow\; \frac{db}{da} = -\frac{p_a}{p_b} < 0 .$$
We could also obtain this result from the implicit function theorem, since $C(a,b), U(a,b) \in C^1$ and $|J| \ne 0$. Yet another way to get this is to see that since $U(a,b) = u$, a constant,
$$dU(a,b) = U_a da + U_b db = 0 \;\Rightarrow\; \frac{db}{da} = -\frac{U_a}{U_b} < 0 .$$
At this stage all we know is that the isoquant for utility slopes downward, and that it is tangent to the isocost line at the optimum.

SOC: recall
$$\overline{H} = \left[\begin{array}{ccc} 0 & U_a & U_b \\ U_a & -\lambda U_{aa} & -\lambda U_{ab} \\ U_b & -\lambda U_{ab} & -\lambda U_{bb} \end{array}\right] .$$
We need positive definiteness of $d^2 C^*$ for a minimum, so we need $\left|\overline{H}_2\right| < 0$ and $\left|\overline{H}_3\right| = \left|\overline{H}\right| < 0$.
$$\left|\overline{H}_2\right| = \left|\begin{array}{cc} 0 & U_a \\ U_a & -\lambda U_{aa} \end{array}\right| = -U_a^2 < 0 ,$$
so this is good. But
$$\left|\overline{H}_3\right| = 0 - U_a [U_a (-\lambda U_{bb}) - (-\lambda U_{ab}) U_b] + U_b [U_a (-\lambda U_{ab}) - (-\lambda U_{aa}) U_b]$$
$$= \lambda U_a^2 U_{bb} - \lambda U_a U_{ab} U_b - \lambda U_b U_a U_{ab} + \lambda U_b^2 U_{aa} = \lambda \left(U_a^2 U_{bb} - 2 U_a U_{ab} U_b + U_b^2 U_{aa}\right) .$$
Without further conditions on $U$, we do not know whether the expression in the parentheses is negative or not ($\lambda > 0$).

The curvature of the utility isoquant is given by
$$\frac{d}{da}\left(\frac{db}{da}\right) = \frac{d^2 b}{da^2} = \frac{d}{da}\left(-\frac{U_a(a,b)}{U_b(a,b)}\right) = -\frac{\left[U_{aa} + U_{ab}\frac{db}{da}\right] U_b - U_a \left[U_{bb}\frac{db}{da} + U_{ab}\right]}{U_b^2}$$
$$= -\frac{\left[U_{aa} - U_{ab}\frac{U_a}{U_b}\right] U_b - U_a \left[-U_{bb}\frac{U_a}{U_b} + U_{ab}\right]}{U_b^2} = -\frac{U_{aa} U_b - U_a U_{ab} + U_a^2 U_{bb}/U_b - U_a U_{ab}}{U_b^2}$$
$$= -\frac{U_{aa} U_b^2 - 2 U_a U_{ab} U_b + U_a^2 U_{bb}}{U_b^3} = -\frac{1}{U_b^3}\left(U_{aa} U_b^2 - 2 U_a U_{ab} U_b + U_a^2 U_{bb}\right) .$$
This involves the same expression in the parentheses. If the indifference curve is convex, then $\frac{d^2 b}{da^2} \ge 0$ and thus the expression in the parentheses must be negative. This coincides with the positive semidefiniteness of $d^2 C^*$. Thus, convex isoquants and existence of a global minimum in this case come together. This would ensure a global minimum, although not a unique one. If $\frac{d^2 b}{da^2} > 0$, then the isoquant is strictly convex and the global minimum is unique, as $d^2 C^*$ is positive definite.

• If $U$ is strictly quasiconcave, then indeed the isoquant is strictly convex and the global minimum is unique.
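A worked numerical sketch (not part of the notes; the Cobb-Douglas utility, target utility and prices are all assumptions): with $U(a,b) = (ab)^{1/2}$ the closed-form cost minimizers are $a^* = u\sqrt{p_b/p_a}$ and $b^* = u\sqrt{p_a/p_b}$, and the tangency $MRS = p_a/p_b$ holds at the optimum.

```python
# Cost minimization with assumed utility U(a, b) = (a*b)**0.5, target u,
# and prices p_a, p_b.  Closed form: a* = u*(p_b/p_a)**0.5, b* = u*(p_a/p_b)**0.5.
p_a, p_b, u = 2.0, 8.0, 10.0

a = u * (p_b / p_a) ** 0.5
b = u * (p_a / p_b) ** 0.5

U_a = 0.5 * (b / a) ** 0.5     # marginal utilities of U = (a*b)**0.5
U_b = 0.5 * (a / b) ** 0.5
mrs = U_a / U_b                # equals b/a for this utility
print((a * b) ** 0.5, mrs, p_a / p_b)   # utility hits u; MRS = price ratio
```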
12.14 Further topics

• Expansion paths.
• Homotheticity.
• Elasticity of substitution.
• Constant elasticity of substitution and relation to Cobb-Douglas.
13 Optimization with inequality constraints

13.1 One inequality constraint

Let $f(x), g(x) \in C^1$, $x \in \mathbb{R}^n$. The problem is

Choose $x$ to maximize $f(x)$, s.t. $g(x) \le c$.

Write the constraint in a "standard way":
$$g(x) - c \le 0 .$$
Suppose that $x^*$ is the solution to

Choose $x$ to maximize $f(x)$, s.t. $g(x) - c \le 0$

and that $x^*$ is not a critical point of $g(x) = c$, if this constraint binds. Write down the Lagrangian function
$$\mathcal{L} = f(x) + \lambda [c - g(x)] .$$
Then there exists a number $\lambda^*$ such that

$$(1): \ \frac{\partial \mathcal{L}}{\partial x_i} = f_i - \lambda g_i = 0 , \quad i = 1, 2, ...n$$
$$(2): \ \lambda [c - g(x)] = 0$$
$$(3): \ \lambda \ge 0$$
$$(4): \ g(x) \le c .$$

• The standard way: write $g(x) - c \le 0$ and then flip it in the Lagrangian function: $\lambda [c - g(x)]$.
• Conditions 2 and 3 are called complementary slackness conditions. If the constraint is not binding, then changing $c$ a bit will not affect the value of the problem; in that case $\lambda = 0$.
• The constraint qualification is that if the constraint binds, i.e. $g(x^*) = c$, then $\nabla g(x^*) \ne 0$.

Conditions 1–4 in the text are written differently, although they are an equivalent representation:

$$(i): \ \frac{\partial \mathcal{L}}{\partial x_i} = f_i - \lambda g_i = 0 , \quad i = 1, 2, ...n$$
$$(ii): \ \frac{\partial \mathcal{L}}{\partial \lambda} = c - g(x) \ge 0$$
$$(iii): \ \lambda \ge 0$$
$$(iv): \ \lambda [c - g(x)] = 0 .$$

Notice that from (ii) we get $g(x) \le c$. If $g(x) < c$, then $\mathcal{L}_\lambda > 0$.
13.2 One inequality constraint and one nonnegativity constraint

There is really nothing special about this problem, but it is worthwhile setting it up, for practice. Let $f(x), g(x) \in C^1$, $x \in \mathbb{R}^n$. The problem is

Choose $x$ to maximize $f(x)$, s.t. $g(x) \le c$ and $x \ge 0$.

I rewrite this as

Choose $x$ to maximize $f(x)$, s.t. $g(x) - c \le 0$ and $-x \le 0$.

Suppose that $x^*$ is the solution to this problem and that $x^*$ is not a critical point of the constraint set (to be defined below). Write down the Lagrangian function
$$\mathcal{L} = f(x) + \lambda [c - g(x)] + \varphi [x] .$$
Then there exist two numbers $\lambda^*$ and $\varphi^*$ such that

$$(1): \ \frac{\partial \mathcal{L}}{\partial x_i} = f_i - \lambda g_i + \varphi = 0 , \quad i = 1, 2, ...n$$
$$(2): \ \lambda [c - g(x)] = 0$$
$$(3): \ \lambda \ge 0$$
$$(4): \ g(x) \le c$$
$$(5): \ \varphi [x] = 0$$
$$(6): \ \varphi \ge 0$$
$$(7): \ -x \le 0 \Rightarrow x \ge 0 .$$

The constraint qualification is that $x^*$ is not a critical point of the constraints that bind. If only $g(x) = c$ binds, then we require $\nabla g(x^*) \ne 0$. See the general case below.

The text gives again a different – and I argue less intuitive – formulation. The Lagrangian is set up without explicitly mentioning the nonnegativity constraints
$$Z = f(x) + \lambda [c - g(x)] .$$
In the text the FONCs are written as

$$(i): \ \frac{\partial Z}{\partial x_i} = f_i - \lambda g_i \le 0$$
$$(ii): \ x_i \ge 0$$
$$(iii): \ x_i \frac{\partial Z}{\partial x_i} = 0 , \quad i = 1, 2, ...n$$
$$(iv): \ \frac{\partial Z}{\partial \lambda} = [c - g(x)] \ge 0$$
$$(v): \ \lambda \ge 0$$
$$(vi): \ \lambda \frac{\partial Z}{\partial \lambda} = 0 .$$

The unequal treatment of different constraints is confusing. My method treats all constraints consistently. A nonnegativity constraint is just like any other.
13.3 The general case

Let $f(x), g^j(x) \in C^1$, $x \in \mathbb{R}^n$, $j = 1, 2, ...m$. The problem is

Choose $x$ to maximize $f(x)$, s.t. $g^j(x) \le c^j$, $j = 1, 2, ...m$.

Write the problem in the standard way

Choose $x$ to maximize $f(x)$, s.t. $g^j(x) - c^j \le 0$, $j = 1, 2, ...m$.

Write down the Lagrangian function
$$\mathcal{L} = f(x) + \sum_{j=1}^{m} \lambda_j \left[c^j - g^j(x)\right] .$$
Suppose that $x^*$ is the solution to the problem above and that $x^*$ does not violate the constraint qualifications (see below). Then there exist $m$ numbers $\lambda_j^*$, $j = 1, 2, ...m$ such that

$$(1): \ \frac{\partial \mathcal{L}}{\partial x_i} = f_i - \sum_{j=1}^{m} \lambda_j g_i^j(x) = 0 , \quad i = 1, 2, ...n$$
$$(2): \ \lambda_j \left[c^j - g^j(x)\right] = 0$$
$$(3): \ \lambda_j \ge 0$$
$$(4): \ g^j(x) \le c^j , \quad j = 1, 2, ...m .$$

• The constraint qualifications are as follows. Consider all the binding constraints. Count them by $j_b = 1, 2, ...m_b$. Then we must have that the rank of
$$\left[\frac{\partial g^b(x^*)}{\partial x'}\right]_{m_b \times n} = \left[\begin{array}{c} \frac{\partial g^1(x^*)}{\partial x'} \\ \frac{\partial g^2(x^*)}{\partial x'} \\ \vdots \\ \frac{\partial g^{m_b}(x^*)}{\partial x'} \end{array}\right]$$
is $m_b$, as large as possible.
13.4 Minimization

It is worthwhile to consider minimization separately, although minimization of $f$ is just like maximization of $-f$. We compare to maximization.

Let $f(x), g(x) \in C^1$, $x \in \mathbb{R}^n$. The problem is

Choose $x$ to maximize $f(x)$, s.t. $g(x) \le c$.

Rewrite as

Choose $x$ to maximize $f(x)$, s.t. $g(x) - c \le 0$.

Write down the Lagrangian function
$$\mathcal{L} = f(x) + \lambda [c - g(x)] .$$
FONCs:
$$(1): \ \frac{\partial \mathcal{L}}{\partial x_i} = f_i - \lambda g_i = 0 , \quad i = 1, 2, ...n$$
$$(2): \ \lambda [c - g(x)] = 0$$
$$(3): \ \lambda \ge 0$$
$$(4): \ g(x) \le c .$$

Compare this to

Choose $x$ to minimize $h(x)$, s.t. $g(x) \ge c$.

Rewrite as

Choose $x$ to minimize $h(x)$, s.t. $g(x) - c \ge 0$.

Write down the Lagrangian function
$$\mathcal{L} = h(x) + \lambda [c - g(x)] .$$
FONCs:
$$(1): \ \frac{\partial \mathcal{L}}{\partial x_i} = h_i - \lambda g_i = 0 , \quad i = 1, 2, ...n$$
$$(2): \ \lambda [c - g(x)] = 0$$
$$(3): \ \lambda \ge 0$$
$$(4): \ g(x) \ge c .$$

Everything is the same. Just pay attention to setting up the inequality correctly. This will be equivalent to

Choose $x$ to maximize $-h(x)$, s.t. $g(x) \ge c$.

Rewrite as

Choose $x$ to maximize $-h(x)$, s.t. $c - g(x) \le 0$,

and set up the proper Lagrangian function for maximization
$$\mathcal{L} = -h(x) + \lambda [g(x) - c] .$$
This will give the same FONCs as above.
13.5 Example

Choose $\{x, y\}$ to maximize $\min\{ax, by\}$, s.t. $x p_x + y p_y \le I$,

where $a, b, p_x, p_y > 0$. Convert this to the following problem

Choose $\{x, y\}$ to maximize $ax$, s.t. $ax \le by$, $x p_x + y p_y - I \le 0$.

This is equivalent, because given a level of $y$, we will never choose $ax > by$, nor can the objective exceed $by$ by construction.

Choose $\{x, y\}$ to maximize $ax$, s.t. $ax - by \le 0$, $x p_x + y p_y - I \le 0$.

Set up the Lagrangian
$$\mathcal{L} = ax + \lambda [I - x p_x - y p_y] + \varphi [by - ax] .$$
FONC:
$$\mathcal{L}_x = a - \lambda p_x - a\varphi = 0$$
$$\mathcal{L}_y = -\lambda p_y + b\varphi = 0$$
$$\lambda [I - x p_x - y p_y] = 0$$
$$\lambda \ge 0$$
$$x p_x + y p_y \le I$$
$$\varphi [by - ax] = 0$$
$$\varphi \ge 0$$
$$ax \le by .$$

The solution process is a trial and error process. The best way is to start by checking which constraints do not bind.

1. Suppose $\varphi = 0$. Then $-\lambda p_y = 0 \Rightarrow \lambda = 0 \Rightarrow a - a\varphi = 0 \Rightarrow \varphi = 1 > 0$ – a contradiction. Therefore $\varphi > 0$ must hold. Then $ax = by \Rightarrow y = ax/b$.

2. Suppose $\lambda = 0$. Then $b\varphi = 0 \Rightarrow \varphi = 0$ – a contradiction (even without $\varphi > 0$, we would reach another contradiction: $a = 0$). Therefore $\lambda > 0$. Then $x p_x + y p_y = I \Rightarrow x p_x + a x p_y / b = I \Rightarrow x (p_x + a p_y / b) = I \Rightarrow x^* = I / (p_x + a p_y / b)$, $y^* = aI / (b p_x + a p_y)$.

Solving for the multipliers (which is an integral part of the solution) involves solving
$$\lambda p_x + a\varphi = a$$
$$\lambda p_y - b\varphi = 0 .$$
This can be written in matrix notation
$$\left[\begin{array}{cc} p_x & a \\ p_y & -b \end{array}\right] \left[\begin{array}{c} \lambda \\ \varphi \end{array}\right] = \left[\begin{array}{c} a \\ 0 \end{array}\right] .$$
The solution requires a nonsingular matrix:
$$\left|\begin{array}{cc} p_x & a \\ p_y & -b \end{array}\right| = -b p_x - a p_y < 0 .$$
Solving by Cramer's Rule:
$$\lambda^* = \frac{\left|\begin{array}{cc} a & a \\ 0 & -b \end{array}\right|}{-b p_x - a p_y} = \frac{ab}{b p_x + a p_y} > 0$$
$$\varphi^* = \frac{\left|\begin{array}{cc} p_x & a \\ p_y & 0 \end{array}\right|}{-b p_x - a p_y} = \frac{a p_y}{b p_x + a p_y} > 0 .$$
Finally, we check the constraint qualifications. Since both constraints bind ($\lambda^* > 0$, $\varphi^* > 0$), we must have a rank of two for the matrix
$$\frac{\partial}{\partial [x\ y]} \left[\begin{array}{c} x p_x + y p_y \\ ax - by \end{array}\right] = \left[\begin{array}{cc} p_x & p_y \\ a & -b \end{array}\right] .$$
In this case we can verify that the rank is two by the determinant, since this is a square $2 \times 2$ matrix:
$$\left|\begin{array}{cc} p_x & p_y \\ a & -b \end{array}\right| = -b p_x - a p_y < 0 .$$
13.6 Example

Choose $\{x, y\}$ to maximize $x^2 + x + 4y^2$, s.t. $2x + 2y \le 1$, $x, y \ge 0$.

Rewrite as

Choose $\{x, y\}$ to maximize $x^2 + x + 4y^2$, s.t. $2x + 2y - 1 \le 0$, $-x \le 0$, $-y \le 0$.

Consider the Jacobian of the constraints
$$\frac{\partial}{\partial [x\ y]} \left[\begin{array}{c} 2x + 2y \\ -x \\ -y \end{array}\right] = \left[\begin{array}{cc} 2 & 2 \\ -1 & 0 \\ 0 & -1 \end{array}\right] .$$
This has rank 2 $\forall (x, y) \in \mathbb{R}^2$, so the constraint qualifications are never violated. The constraint set is a triangle, all the constraints are linear, so the constraint qualification will not fail.

Set up the Lagrangian function
$$\mathcal{L} = x^2 + x + 4y^2 + \lambda [1 - 2x - 2y] + \varphi [x] + \psi [y] .$$
FONCs:
$$\mathcal{L}_x = 2x + 1 - 2\lambda + \varphi = 0$$
$$\mathcal{L}_y = 8y - 2\lambda + \psi = 0$$
$$\lambda [1 - 2x - 2y] = 0 , \quad \lambda \ge 0 , \quad 2x + 2y \le 1$$
$$\varphi x = 0 , \quad \varphi \ge 0 , \quad x \ge 0$$
$$\psi y = 0 , \quad \psi \ge 0 , \quad y \ge 0 .$$

1. From $\mathcal{L}_x = 0$ we have
$$2x + 1 + \varphi = 2\lambda > 0$$
with strict inequality, because $x \ge 0$ and $\varphi \ge 0$. Thus $\lambda > 0$ and the constraint
$$2x + 2y = 1$$
binds, so that
$$y = 1/2 - x \quad \text{or} \quad x = 1/2 - y .$$

2. Suppose $\varphi > 0$. Then $x = 0 \Rightarrow y = 1/2 \Rightarrow \psi = 0 \Rightarrow \lambda = 2 \Rightarrow \varphi = 3$. A candidate solution is $(x^*, y^*) = (0, 1/2)$.

3. Suppose $\varphi = 0$. Then
$$2x + 1 = 2\lambda .$$
From $\mathcal{L}_y = 0$ we have
$$8y + \psi = 2\lambda .$$
Combining the two we get
$$2x + 1 = 8y + \psi$$
$$2(1/2 - y) + 1 = 8y + \psi$$
$$2 - 2y = 8y + \psi$$
$$10y + \psi = 2 .$$
The last result tells us that we cannot have both $\psi = 0$ and $y = 0$, because we would get $0 = 2$ – a contradiction (also because then we would get $\lambda = 0$ from $\mathcal{L}_y = 0$). So either $\psi = 0$ or $y = 0$, but not both.

(a) Suppose $y = 0$. Then $x = 1/2 \Rightarrow \lambda = 1 \Rightarrow \psi = 2$. A candidate solution is $(x^*, y^*) = (1/2, 0)$.
(b) Suppose $y > 0$. Then $\psi = 0 \Rightarrow y = 0.2 \Rightarrow x = 0.3 \Rightarrow \lambda = 0.8$. A candidate solution is $(x^*, y^*) = (0.3, 0.2)$.

Eventually, we need to evaluate the objective function at each candidate to see which is the global maximizer.
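The comparison of candidates can be done numerically. A minimal sketch (plain Python; the objective is hard-coded from this example):

```python
# Evaluate the objective f(x,y) = x^2 + x + 4y^2 at each Kuhn-Tucker candidate.
f = lambda x, y: x**2 + x + 4 * y**2

candidates = [(0.0, 0.5), (0.5, 0.0), (0.3, 0.2)]
values = {c: f(*c) for c in candidates}
best = max(values, key=values.get)

for c in candidates:
    print(c, values[c])
print("global maximizer:", best)
```

The values are 1, 0.75 and 0.55 respectively, so the global maximizer is the corner $(0, 1/2)$; the interior candidate $(0.3, 0.2)$ is in fact the minimum of the objective along the binding budget line.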
13.7 The Kuhn-Tucker sufficiency theorem

Let $f(x), g_j(x) \in C^1$, $j = 1, 2, \ldots, m$. The problem is: choose $x \in \mathbb{R}^n$ to maximize $f(x)$, s.t. $x \geq 0$ and $g_j(x) \leq c_j$, $j = 1, 2, \ldots, m$.

Theorem: if

1. $f$ is concave on $\mathbb{R}^n$,

2. $g_j$ are convex on $\mathbb{R}^n$,

3. $x^*$ satisfies the FONCs,

then $x^*$ is a global maximum – not necessarily unique.

« We know: convex $g_j(x) \leq c_j$ gives a convex set. One can show that the intersection of convex sets is also a convex set, so that the constraint set is also convex. So the theorem actually says that maximizing a concave function on a convex set gives a global maximum, if it exists. It could be on the border or not – the FONCs will detect it.

« But these are strong conditions on our objective and constraint functions. The next theorem relaxes things quite a bit.
13.8 The Arrow-Enthoven sufficiency theorem

Let $f(x), g_j(x) \in C^1$, $j = 1, 2, \ldots, m$. The problem is: choose $x \in \mathbb{R}^n$ to maximize $f(x)$, s.t. $x \geq 0$ and $g_j(x) \leq c_j$, $j = 1, 2, \ldots, m$.

Theorem: if

1. $f$ is quasiconcave on $\mathbb{R}^n_+$,

2. $g_j$ are quasiconvex on $\mathbb{R}^n_+$,

3. $x^*$ satisfies the FONCs of the Kuhn-Tucker problem,

4. any one of the following conditions on $f$ holds:

(a) $\exists i$ such that $f_i(x^*) < 0$,

(b) $\exists i$ such that $f_i(x^*) > 0$ and $x^*_i > 0$ (i.e. it does not violate the constraints),

(c) $\nabla f(x^*) \neq 0$ and $f \in C^2$ around $x^*$,

(d) $f(x)$ is concave,

then $x^*$ is a global maximum.

Arrow-Enthoven constraint qualification test for a maximization problem

If

1. $g_j(x) \in C^1$ are quasiconvex,

2. $\exists x^0 \in \mathbb{R}^n_+$ such that all constraints are slack,

3. any one of the following holds:

(a) $g_j(x)$ are convex,

(b) $\partial g(x)/\partial x' \neq 0$ $\forall x$ in the constraint set,

then the constraint qualification is not violated.
13.9 Envelope theorem for constrained optimization

Recall the envelope theorem for unconstrained optimization: if $x^*$ is a critical point of $f(x, \theta)$, then

$$\frac{df(x^*, \theta)}{d\theta} = \frac{\partial f(x^*, \theta)}{\partial \theta} .$$

This was due to $\partial f(x^*, \theta)/\partial x = 0$.

Now we face a more complicated problem: choose $x \in \mathbb{R}^n$ to maximize $f(x, \theta)$, s.t. $g(x, \theta) = c$.

For a problem with inequality constraints we simply use only those constraints that bind. We will consider small perturbations of $\theta$, so small that they will not affect which constraint binds. Set up the Lagrangian function

$$\mathcal{L} = f(x, \theta) + \lambda[c - g(x, \theta)] .$$

FONCs:

$$\mathcal{L}_\lambda = c - g(x, \theta) = 0$$
$$\mathcal{L}_{x_i} = f_i(x, \theta) - \lambda g_i(x, \theta) = 0 , \quad i = 1, 2, \ldots, n .$$

We apply the implicit function theorem to this set of equations to get $\exists x^* = x^*(\theta)$ and $\lambda^* = \lambda^*(\theta)$, for which there are well-defined derivatives around $(\lambda^*, x^*)$. We know that at the optimum the value of the problem is the value of the Lagrangian function:

$$f(x^*, \theta) = \mathcal{L}^* = f(x^*, \theta) + \lambda^*[c - g(x^*, \theta)] = f(x^*(\theta), \theta) + \lambda^*(\theta)[c - g(x^*(\theta), \theta)] .$$
Define the value of the problem as

$$v(\theta) = f(x^*, \theta) = f(x^*(\theta), \theta) .$$

Take the derivative with respect to $\theta$ to get

$$\begin{aligned}
\frac{dv(\theta)}{d\theta} = \frac{d\mathcal{L}^*}{d\theta} &= f^*_x \frac{dx^*}{d\theta} + f^*_\theta + \frac{d\lambda^*}{d\theta}\left[c - g(x^*(\theta), \theta)\right] - \lambda^*\left[g^*_x \frac{dx^*}{d\theta} + g^*_\theta\right] \\
&= \left[f^*_x - \lambda^* g^*_x\right]\frac{dx^*}{d\theta} + \frac{d\lambda^*}{d\theta}\left[c - g(x^*(\theta), \theta)\right] + f^*_\theta - \lambda^* g^*_\theta \\
&= f^*_\theta - \lambda^* g^*_\theta .
\end{aligned}$$

Of course, we could have just applied this directly using the envelope theorem:

$$\frac{dv(\theta)}{d\theta} = \frac{d\mathcal{L}^*}{d\theta} = \frac{\partial \mathcal{L}^*}{\partial \theta} = f^*_\theta - \lambda^* g^*_\theta .$$
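The envelope result can be checked numerically. A minimal sketch (the problem is chosen for illustration, not taken from the notes): maximize $x_1 x_2$ s.t. $x_1 + x_2 = \theta$, so $\theta$ plays the role of the constraint constant $c$. The FONCs give $x_1^* = x_2^* = \theta/2$ and $\lambda^* = \theta/2$, and the envelope theorem predicts $dv/d\theta = \lambda^*$:

```python
# Check dv/dtheta = lambda* for:  maximize x1*x2  s.t.  x1 + x2 = theta.
# Closed forms from the FONCs: x1* = x2* = theta/2, lambda* = theta/2.
def v(theta):
    x = theta / 2          # optimal x1 = x2
    return x * x           # value of the problem, v(theta) = theta^2/4

theta, h = 3.0, 1e-6
dv = (v(theta + h) - v(theta - h)) / (2 * h)   # central finite difference
lam = theta / 2                                # multiplier at the optimum
print(dv, lam)
```

The finite-difference derivative of the value function matches the multiplier, which is the shadow value of relaxing the constraint.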
13.10 Duality

We will demonstrate the duality of utility maximization and cost minimization. But the principles here are more general than consumer theory.

The primal problem is: choose $x \in \mathbb{R}^n$ to maximize $u(x)$, s.t. $p'x = I$

(this should be stated with $\leq$, but we focus on preferences with nonsatiation and strict convexity, so the solution lies on the budget line and $x > 0$ is satisfied). The Lagrangian function is

$$\mathcal{L} = u(x) + \lambda[I - p'x] .$$

FONCs:

$$\mathcal{L}_{x_i} = u_i - \lambda p_i = 0 \quad \Rightarrow \quad \lambda = u_i/p_i , \quad i = 1, \ldots, n$$
$$\mathcal{L}_\lambda = I - p'x = 0 .$$

We apply the implicit function theorem to this set of equations to get Marshallian demand

$$x^m_i = x^m_i(p, I)$$
$$\lambda^m = \lambda^m(p, I) ,$$

for which there are well-defined derivatives around $(\lambda^*, x^*)$. We define the indirect utility function

$$v(p, I) = u[x^m(p, I)] .$$

The dual problem is: choose $x \in \mathbb{R}^n$ to minimize $p'x$ s.t. $u(x) = \bar{u}$ ,
where $\bar{u}$ is a level of promised utility. The Lagrangian function is

$$Z = p'x + \varphi[\bar{u} - u(x)] .$$

FONCs:

$$Z_{x_i} = p_i - \varphi u_i = 0 \quad \Rightarrow \quad \varphi = p_i/u_i , \quad i = 1, \ldots, n$$
$$Z_\varphi = \bar{u} - u(x) = 0 .$$

We apply the implicit function theorem to this set of equations to get Hicksian demand

$$x^h_i = x^h_i(p, \bar{u})$$
$$\varphi^h = \varphi^h(p, \bar{u}) ,$$

for which there are well-defined derivatives around $(\varphi^*, x^*)$. We define the expenditure function

$$e(p, \bar{u}) = p'x^h(p, \bar{u}) .$$

Duality: all FONCs imply the same thing:

$$\frac{u_i}{u_j} = \frac{p_i}{p_j} .$$

Thus, at the optimum

$$x^m_i(p, I) = x^h_i(p, \bar{u})$$
$$e(p, \bar{u}) = I$$
$$v(p, I) = \bar{u} .$$

Moreover,

$$\varphi = \frac{1}{\lambda} ,$$

and this makes sense given the interpretation of $\varphi$ and $\lambda$.
« Duality relies on unique global extrema. We need to have all the preconditions for that.
« Make drawing.
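The duality relations can be illustrated with a Cobb-Douglas example, $u(x_1, x_2) = x_1^a x_2^{1-a}$ (an assumed functional form; the closed-form demands below are the standard Cobb-Douglas results, not derived in the notes):

```python
# Cobb-Douglas duality check: u(x1, x2) = x1^a * x2^(1-a).
a, p1, p2, I = 0.3, 2.0, 5.0, 100.0

# Marshallian demand and indirect utility
x1m, x2m = a * I / p1, (1 - a) * I / p2
v = x1m**a * x2m**(1 - a)

# Hicksian demand at promised utility ubar = v(p, I)
k = (1 - a) * p1 / (a * p2)      # from the MRS condition x2 = k*x1
x1h = v * k**(-(1 - a))
x2h = v * k**a
e = p1 * x1h + p2 * x2h          # expenditure function at (p, v)

print(x1m, x1h, e)
```

As the text states, $x^m_i(p, I) = x^h_i(p, v(p, I))$ and $e(p, v(p, I)) = I$: here both demands agree and the minimized expenditure equals the original income of 100.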
13.10.1 Roy's identity

$$v(p, I) = u(x^m) + \lambda(I - p'x^m) .$$

Taking the derivative with respect to a price,

$$\begin{aligned}
\frac{\partial v}{\partial p_i} &= \sum_{j=1}^n u_j \frac{\partial x^m_j}{\partial p_i} + \frac{\partial \lambda}{\partial p_i}\left(I - p'x^m\right) - \lambda\left[\sum_{j=1}^n p_j \frac{\partial x^m_j}{\partial p_i} + x^m_i\right] \\
&= \sum_{j=1}^n \left(u_j - \lambda p_j\right)\frac{\partial x^m_j}{\partial p_i} + \frac{\partial \lambda}{\partial p_i}\left(I - p'x^m\right) - \lambda x^m_i \\
&= -\lambda x^m_i .
\end{aligned}$$
An increase in $p_i$ lowers the value of the problem by $\lambda x^m_i$: it is as if income had decreased by $x^m_i$ – the quantity demanded of good $i$ – valued at $\lambda$ utils per dollar of pseudo lost income. In other words, income is now worth $x^m_i$ less, and this taxes the objective by $\lambda x^m_i$. Taking the derivative with respect to income,

$$\begin{aligned}
\frac{\partial v}{\partial I} &= \sum_{j=1}^n u_j \frac{\partial x^m_j}{\partial I} + \frac{\partial \lambda}{\partial I}\left(I - p'x^m\right) + \lambda\left[1 - \sum_{j=1}^n p_j \frac{\partial x^m_j}{\partial I}\right] \\
&= \sum_{j=1}^n \left(u_j - \lambda p_j\right)\frac{\partial x^m_j}{\partial I} + \frac{\partial \lambda}{\partial I}\left(I - p'x^m\right) + \lambda \\
&= \lambda .
\end{aligned}$$

An increase in income will increase our utility by $\lambda$, which is the standard result.

« In fact, we could get these results by applying the envelope theorem directly:

$$\frac{\partial v}{\partial p_i} = -\lambda x^m_i \qquad \frac{\partial v}{\partial I} = \lambda .$$

Roy's identity is thus

$$-\frac{\partial v/\partial p_i}{\partial v/\partial I} = x^m_i .$$

Why is this interesting? Because this is the amount of income needed to compensate consumers for (that will leave them indifferent to) an increase in the price of some good $i$. To see this, first consider

$$v(p, I) = \bar{u} ,$$

where $\bar{u}$ is a level of promised utility (as in the dual problem). By the implicit function theorem $\exists I = I(p_i)$ in a neighbourhood of $x^m$, which has a well-defined derivative $dI/dp_i$. This function is defined at the optimal bundle $x^m$. Now consider the total differential of $v$, evaluated at the optimal bundle $x^m$:

$$v_{p_i}\, dp_i + v_I\, dI = 0 .$$

This differential does not involve other partial derivatives because it is evaluated at the optimal bundle $x^m$ (i.e. the envelope theorem once again). And we set this differential to zero, because we are considering keeping the consumer exactly indifferent, i.e. her promised utility and optimal bundle remain unchanged. Then we have

$$\frac{dI}{dp_i} = -\frac{v_{p_i}}{v_I} = -\frac{\partial v/\partial p_i}{\partial v/\partial I} = x^m_i .$$

This result tells you that if you are to keep the consumer indifferent to a small change in the price of good $i$, i.e. not changing the optimally chosen bundle, then you must compensate the consumer by $x^m_i$ units of income. We will see this again in the dual problem, using Shephard's lemma, where keeping utility fixed is explicit. We will see that $\partial e/\partial p_i = x^h_i = x^m_i$ is exactly the change in expenditure that results from keeping utility fixed, while increasing the price of good $i$.

To see this graphically, consider a level curve of utility. The slope of the curve at $(p, I)$ (more generally, the gradient) is $x^m$.
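Roy's identity can be verified with finite differences, again using a Cobb-Douglas indirect utility function as an illustrative assumption:

```python
# Roy's identity check for u = x1^a * x2^(1-a):  -v_p1 / v_I = x1^m.
a, p1, p2, I = 0.3, 2.0, 5.0, 100.0

def v(p1, p2, I):
    # indirect utility: utility evaluated at Marshallian demand
    return (a * I / p1)**a * ((1 - a) * I / p2)**(1 - a)

h = 1e-5
v_p1 = (v(p1 + h, p2, I) - v(p1 - h, p2, I)) / (2 * h)
v_I  = (v(p1, p2, I + h) - v(p1, p2, I - h)) / (2 * h)
x1m = a * I / p1                 # Marshallian demand for good 1

print(-v_p1 / v_I, x1m)
```

The ratio of numerical derivatives recovers the Marshallian demand, as Roy's identity requires.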
13.10.2 Shephard's lemma

$$e(p, \bar{u}) = p'x^h + \varphi\left[\bar{u} - u\left(x^h\right)\right] .$$

Taking the derivative with respect to a price,

$$\begin{aligned}
\frac{\partial e}{\partial p_i} &= x^h_i + \sum_{j=1}^n p_j \frac{\partial x^h_j}{\partial p_i} + \frac{\partial \varphi}{\partial p_i}\left[\bar{u} - u\left(x^h\right)\right] - \varphi \sum_{j=1}^n u_j \frac{\partial x^h_j}{\partial p_i} \\
&= \sum_{j=1}^n \left(p_j - \varphi u_j\right)\frac{\partial x^h_j}{\partial p_i} + \frac{\partial \varphi}{\partial p_i}\left[\bar{u} - u\left(x^h\right)\right] + x^h_i \\
&= x^h_i .
\end{aligned}$$

An increase in $p_i$ increases expenditure by $x^h_i$, while keeping utility fixed at $\bar{u}$ (remember that this is a minimization problem, so increasing the value of the problem is "bad"). Note that this is exactly the result we obtained from Roy's identity. Taking the derivative with respect to promised utility,

$$\begin{aligned}
\frac{\partial e}{\partial \bar{u}} &= \sum_{j=1}^n p_j \frac{\partial x^h_j}{\partial \bar{u}} + \frac{\partial \varphi}{\partial \bar{u}}\left[\bar{u} - u\left(x^h\right)\right] + \varphi\left[1 - \sum_{j=1}^n u_j \frac{\partial x^h_j}{\partial \bar{u}}\right] \\
&= \sum_{j=1}^n \left(p_j - \varphi u_j\right)\frac{\partial x^h_j}{\partial \bar{u}} + \frac{\partial \varphi}{\partial \bar{u}}\left[\bar{u} - u\left(x^h\right)\right] + \varphi \\
&= \varphi .
\end{aligned}$$

An increase in promised utility will increase expenditure by $\varphi$, which is the standard result.

« In fact, we could get these results by applying the envelope theorem directly:

$$\frac{\partial e}{\partial p_i} = x^h_i \qquad \frac{\partial e}{\partial \bar{u}} = \varphi .$$
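Shephard's lemma can be checked the same way as Roy's identity, here with the Cobb-Douglas expenditure function built from the (assumed) closed-form Hicksian demands:

```python
# Shephard's lemma check for u = x1^a * x2^(1-a):  de/dp1 = x1^h.
a, p1, p2, ubar = 0.3, 2.0, 5.0, 14.0

def e(p1, p2, u):
    k = (1 - a) * p1 / (a * p2)   # from the MRS condition x2 = k*x1
    x1h = u * k**(-(1 - a))       # Hicksian demands at utility u
    x2h = u * k**a
    return p1 * x1h + p2 * x2h    # minimized expenditure

h = 1e-5
de_dp1 = (e(p1 + h, p2, ubar) - e(p1 - h, p2, ubar)) / (2 * h)
k = (1 - a) * p1 / (a * p2)
x1h = ubar * k**(-(1 - a))

print(de_dp1, x1h)
```

The numerical price derivative of expenditure equals the Hicksian demand, with no substitution terms surviving – the envelope logic at work.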
14 Integration

14.1 Preliminaries

Consider a general function

$$x = x(t)$$

and its derivative with respect to time

$$\frac{dx}{dt} = \dot{x} .$$

This is how much $x$ changes during a very short period $dt$. Suppose that you know $\dot{x}$ at any point in time. We can write down how much $x$ changed from some initial point in time, say $t = 0$, until period $t$ as follows:

$$\int_0^t \dot{x}\, dt .$$

This is the sum of all changes in $x$ from period 0 to $t$. The term of art is integration, i.e. we are integrating all the increments. But you cannot say what $x(t)$ is, unless you have the value of $x$ at the initial point. This is the same as saying that you know what the growth rate of GDP is, but you do not know the level. But given $x_0 = x(0)$ we can tell what $x(t)$ is:

$$x(t) = x_0 + \int_0^t \dot{x}\, dt .$$

E.g.

$$\dot{x} = t^2 \qquad \int_0^t \dot{x}\, dt = \int_0^t u^2\, du = \frac{1}{3}t^3 + c .$$

The constant $c$ is arbitrary and captures the fact that we do not know the level.

Suppose that the instant growth rate of $y$ is a constant $r$, i.e.

$$\frac{\dot{y}}{y} = r .$$

This can be written as

$$\dot{y} - ry = 0 ,$$

which is an ordinary differential equation. We know that $y = e^{rt}$ gives $\dot{y}/y = r$. But so does $y = ke^{rt}$. So once again, without having additional information about the value of $y$ at some initial point, we cannot say what $y(t)$ is.
14.2 Indefinite integrals

Denote

$$f(x) = \frac{dF(x)}{dx} .$$

Therefore,

$$dF(x) = f(x)\, dx .$$

Summing over all small increments we get

$$\int dF(x) = \int f(x)\, dx = F(x) + c ,$$

where the constant of integration, $c$, denotes that the integral is correct up to an indeterminate constant. This is so because knowing the sum of increments does not tell you the level. Another way to see this is

$$\frac{d}{dx}F(x) = \frac{d}{dx}\left[F(x) + c\right] .$$

Integration is the opposite operation of differentiation. Instead of looking at small perturbations, or increments, we look for the sum of all increments.

Commonly used integrals

1. $\int x^n\, dx = \frac{x^{n+1}}{n+1} + c$

2. $\int f'(x)e^{f(x)}\, dx = e^{f(x)} + c$, $\quad \int e^x\, dx = e^x + c$

3. $\int \frac{f'(x)}{f(x)}\, dx = \ln[f(x)] + c$, $\quad \int \frac{1}{x}\, dx = \int \frac{dx}{x} = \ln x + c$

Operation rules

1. Sum: $\int [f(x) + g(x)]\, dx = \int f(x)\, dx + \int g(x)\, dx$

2. Scalar multiplication: $k\int f(x)\, dx = \int kf(x)\, dx$

3. Substitution/change of variables: Let $u = u(x)$. Then

$$\int f(u)\frac{du}{dx}\, dx = \int f(u)u'\, dx = \int f(u)\, du = F(u) + c .$$

E.g.

$$\int 2x\left(x^2 + 1\right) dx = 2\int \left(x^3 + x\right) dx = 2\int x^3\, dx + 2\int x\, dx = \frac{1}{2}x^4 + x^2 + c .$$

Alternatively, define $u = x^2 + 1$, hence $u' = 2x$, and so

$$\int 2x\left(x^2 + 1\right) dx = \int \frac{du}{dx}u\, dx = \int u\, du = \frac{1}{2}u^2 + c' = \frac{1}{2}\left(x^2 + 1\right)^2 + c' = \frac{1}{2}\left(x^4 + 2x^2 + 1\right) + c' = \frac{1}{2}x^4 + x^2 + \frac{1}{2} + c' = \frac{1}{2}x^4 + x^2 + c .$$
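A quick numerical sanity check of the antiderivative just computed – differentiating $F(x) = \frac{1}{2}x^4 + x^2$ by finite differences should recover $f(x) = 2x(x^2+1)$:

```python
# Check that F(x) = x^4/2 + x^2 is an antiderivative of f(x) = 2x(x^2 + 1).
F = lambda x: x**4 / 2 + x**2
f = lambda x: 2 * x * (x**2 + 1)

h = 1e-6
for x in [-2.0, -0.5, 0.0, 1.0, 3.0]:
    dF = (F(x + h) - F(x - h)) / (2 * h)   # numerical derivative of F
    assert abs(dF - f(x)) < 1e-4
print("F' = f at all test points")
```

The constant of integration drops out of the derivative, which is exactly why it cannot be pinned down by the integral alone.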
4. Integration by parts: Since

$$d(uv) = u\, dv + v\, du$$

we have

$$\int d(uv) = uv = \int u\, dv + \int v\, du .$$

Thus the integration by parts formula is

$$\int u\, dv = uv - \int v\, du .$$

To reduce confusion denote

$$V = V(x) , \quad v(x) = dV(x)/dx$$
$$U = U(x) , \quad u(x) = dU(x)/dx .$$

Then we write the formula as

$$\int U(x)\, dV(x) = U(x)V(x) - \int V(x)\, dU(x)$$
$$\int U(x)v(x)\, dx = U(x)V(x) - \int V(x)u(x)\, dx .$$

E.g., let $f(x) = \theta e^{-\theta x}$. Then

$$\int x\theta e^{-\theta x}\, dx = -xe^{-\theta x} - \int -e^{-\theta x}\, dx .$$

In the notation above, we have

$$\int \underbrace{x}_{U}\ \underbrace{\theta e^{-\theta x}\, dx}_{dV} = \underbrace{x}_{U}\ \underbrace{\left(-e^{-\theta x}\right)}_{V} - \int \underbrace{1}_{u}\ \underbrace{\left(-e^{-\theta x}\right)}_{V}\, dx .$$
14.3 Definite integrals

The area under the $f$ curve, i.e. between the $f$ curve and the horizontal axis, from $a$ to $b$ is

$$\int_a^b f(x)\, dx = F(b) - F(a) .$$

This is also called the fundamental theorem of calculus.

The Riemann integral: create $n$ rectangles that lie under the curve, taking the minimum of the heights: $r_i$, $i = 1, 2, \ldots, n$. Then create $n$ rectangles with the maximum of the heights: $R_i$, $i = 1, 2, \ldots, n$. As the number of these rectangles increases, the sums of the rectangles may converge. If they do, then we say that $f$ is Riemann-integrable, i.e. if

$$\lim_{n \to \infty} \sum_{i=1}^n r_i = \lim_{n \to \infty} \sum_{i=1}^n R_i$$

then

$$\int_a^b f(x)\, dx$$

is well defined.
Properties:

1. Minus/switching the integration limits:

$$\int_a^b f(x)\, dx = -\int_b^a f(x)\, dx = F(b) - F(a) = -\left[F(a) - F(b)\right]$$

2. Zero:

$$\int_a^a f(x)\, dx = F(a) - F(a) = 0$$

3. Partition: for all $a < b < c$

$$\int_a^c f(x)\, dx = \int_a^b f(x)\, dx + \int_b^c f(x)\, dx .$$

4. Scalar multiplication:

$$\int_a^b kf(x)\, dx = k\int_a^b f(x)\, dx , \quad \forall k \in \mathbb{R}$$

5. Sum:

$$\int_a^b \left[f(x) + g(x)\right] dx = \int_a^b f(x)\, dx + \int_a^b g(x)\, dx$$

6. By parts:

$$\int_a^b Uv\, dx = UV\Big|_a^b - \int_a^b uV\, dx = U(b)V(b) - U(a)V(a) - \int_a^b uV\, dx$$

Suppose that we wish to integrate a function from some initial point $x_0$ until some indefinite point $x$. Then

$$\int_{x_0}^x f(t)\, dt = F(x) - F(x_0) ,$$

and so

$$F(x) = F(x_0) + \int_{x_0}^x f(t)\, dt .$$
Leibniz's Rule

Let $f \in C^1$ (i.e. $F \in C^2$). Then

$$\frac{\partial}{\partial \theta}\int_{a(\theta)}^{b(\theta)} f(x, \theta)\, dx = f(b(\theta), \theta)\frac{\partial b(\theta)}{\partial \theta} - f(a(\theta), \theta)\frac{\partial a(\theta)}{\partial \theta} + \int_{a(\theta)}^{b(\theta)} \frac{\partial}{\partial \theta}f(x, \theta)\, dx .$$

Proof: let $f(x, \theta) = dF(x, \theta)/dx$. Then

$$\begin{aligned}
\frac{\partial}{\partial \theta}\int_{a(\theta)}^{b(\theta)} f(x, \theta)\, dx &= \frac{\partial}{\partial \theta}\Big[F(x, \theta)\Big]_{a(\theta)}^{b(\theta)} = \frac{\partial}{\partial \theta}\left[F(b(\theta), \theta) - F(a(\theta), \theta)\right] \\
&= F_x(b(\theta), \theta)\frac{\partial b(\theta)}{\partial \theta} + F_\theta(b(\theta), \theta) - F_x(a(\theta), \theta)\frac{\partial a(\theta)}{\partial \theta} - F_\theta(a(\theta), \theta) \\
&= f(b(\theta), \theta)\frac{\partial b(\theta)}{\partial \theta} - f(a(\theta), \theta)\frac{\partial a(\theta)}{\partial \theta} + \left[F_\theta(b(\theta), \theta) - F_\theta(a(\theta), \theta)\right] \\
&= f(b(\theta), \theta)\frac{\partial b(\theta)}{\partial \theta} - f(a(\theta), \theta)\frac{\partial a(\theta)}{\partial \theta} + \int_{a(\theta)}^{b(\theta)} \frac{d}{dx}F_\theta(x, \theta)\, dx \\
&= f(b(\theta), \theta)\frac{\partial b(\theta)}{\partial \theta} - f(a(\theta), \theta)\frac{\partial a(\theta)}{\partial \theta} + \int_{a(\theta)}^{b(\theta)} \frac{\partial}{\partial \theta}f(x, \theta)\, dx .
\end{aligned}$$

The last line follows by Young's Theorem. Clearly, if the integration limits do not depend on $\theta$, then

$$\frac{\partial}{\partial \theta}\int_a^b f(x, \theta)\, dx = \int_a^b \frac{\partial}{\partial \theta}f(x, \theta)\, dx ,$$

and if $f$ does not depend on $\theta$, then

$$\frac{\partial}{\partial \theta}\int_{a(\theta)}^{b(\theta)} f(x)\, dx = f(b(\theta))\frac{\partial b(\theta)}{\partial \theta} - f(a(\theta))\frac{\partial a(\theta)}{\partial \theta} .$$
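Leibniz's rule can be checked numerically on a concrete example (the functions $f(x,\theta) = \theta x^2$, $a(\theta) = \theta$, $b(\theta) = \theta^2$ are illustrative choices, not from the notes):

```python
# Leibniz's rule check for  d/dtheta  of  I(theta) = integral of theta*x^2
# over x in [theta, theta^2]; closed form: I(theta) = theta*(theta^6 - theta^3)/3.
def I(th):
    return th * (th**6 - th**3) / 3

th, h = 1.5, 1e-6
lhs = (I(th + h) - I(th - h)) / (2 * h)      # direct differentiation of I

f = lambda x, t: t * x**2
rhs = (f(th**2, th) * 2 * th                 # f(b, theta) * b'(theta)
       - f(th, th) * 1                       # - f(a, theta) * a'(theta)
       + (th**6 - th**3) / 3)                # + integral of df/dtheta = x^2

print(lhs, rhs)
```

The two boundary terms plus the "integrate the partial" term reproduce the derivative of the integral.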
14.4 Improper integrals

14.4.1 Infinite integration limits

$$\int_a^\infty f(x)\, dx = \lim_{b \to \infty}\int_a^b f(x)\, dx = F(\infty) - F(a) .$$

E.g., $X \sim \exp(\theta)$: $F(x) = 1 - e^{-\theta x}$, $f(x) = \theta e^{-\theta x}$ for $x \geq 0$.

$$\int_0^\infty \theta e^{-\theta x}\, dx = \lim_{b \to \infty}\int_0^b \theta e^{-\theta x}\, dx = \lim_{b \to \infty}\left[-e^{-\theta b} + e^{-\theta \cdot 0}\right] = 1 .$$

Also

$$\begin{aligned}
E(X) = \int_0^\infty x f(x)\, dx = \int_0^\infty x\theta e^{-\theta x}\, dx &= \left[-xe^{-\theta x}\right]_0^\infty - \int_0^\infty -e^{-\theta x}\, dx \\
&= \text{``}{-\infty}e^{-\theta\infty}\text{''} + 0 \cdot e^{-\theta \cdot 0} + \left[-\frac{1}{\theta}e^{-\theta x}\right]_0^\infty \\
&= 0 - \frac{1}{\theta}e^{-\theta\infty} + \frac{1}{\theta}e^{-\theta \cdot 0} = \frac{1}{\theta} .
\end{aligned}$$
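Both results above can be checked with a simple midpoint Riemann sum, truncating the infinite upper limit (a numerical sketch; the truncation point and step size are arbitrary choices):

```python
# Riemann-sum check that the exp(theta) density integrates to 1 and E[X] = 1/theta.
import math

theta = 2.0
dx, X = 1e-3, 30.0                       # step size and truncated upper limit
xs = [(i + 0.5) * dx for i in range(int(X / dx))]   # midpoint rule
mass = sum(theta * math.exp(-theta * x) * dx for x in xs)
mean = sum(x * theta * math.exp(-theta * x) * dx for x in xs)

print(mass, mean)
```

With $\theta = 2$ the total mass is (numerically) 1 and the mean is $1/\theta = 0.5$, matching the integration-by-parts result.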
14.4.2 Infinite integrand

Sometimes the integral is divergent – possibly even when the integration limits are finite:

$$\int_1^\infty \frac{1}{x}\, dx = \lim_{b \to \infty}\int_1^b \frac{1}{x}\, dx = \Big[\ln x\Big]_1^\infty = \ln\infty - \ln 1 = \infty - 0 = \infty$$

$$\int_0^1 \frac{1}{x}\, dx = \lim_{b \to 0}\int_b^1 \frac{1}{x}\, dx = \Big[\ln x\Big]_0^1 = \ln 1 - \ln 0 = 0 + \infty = \infty .$$

Suppose that for some $p \in (a, b)$

$$\lim_{x \to p} f(x) = \infty .$$

Then the integral from $a$ to $b$ is convergent iff both partitions are also convergent:

$$\int_a^b f(x)\, dx = \int_a^p f(x)\, dx + \int_p^b f(x)\, dx .$$

E.g.

$$\lim_{x \to 0}\frac{1}{x^3} = \infty .$$

Therefore, the integral

$$\int_{-1}^1 \frac{1}{x^3}\, dx = \int_{-1}^0 \frac{1}{x^3}\, dx + \int_0^1 \frac{1}{x^3}\, dx = \left[-\frac{1}{2x^2}\right]_{-1}^0 + \left[-\frac{1}{2x^2}\right]_0^1$$

does not exist, because neither integral converges.
14.5 Example: investment and capital formation

In discrete time we have the capital accumulation equation

$$K_{t+1} = (1 - \delta)K_t + I_t ,$$

where $I_t$ is gross investment at time $t$. Rewrite as

$$K_{t+1} - K_t = I_t - \delta K_t .$$

We want to rewrite this in continuous time. In this context, investment, $I_t$, is instantaneous and capital depreciates at an instantaneous rate of $\delta$. Consider a period of length $\Delta t$. The accumulation equation is

$$K_{t+\Delta t} - K_t = \Delta t\, I_t - \Delta t\, \delta K_t .$$

Divide by $\Delta t$ to get

$$\frac{K_{t+\Delta t} - K_t}{\Delta t} = I_t - \delta K_t .$$

Now take $\Delta t \to 0$ to get

$$\dot{K}_t = I_t - \delta K_t ,$$

where it is understood that $I_t$ is instantaneous investment at time $t$, and $K_t$ is the amount of capital available at that time. $\delta K_t$ is the amount of capital that vanishes due to depreciation. Write

$$\dot{K}_t = I^n_t ,$$

where $I^n_t$ is net investment. Given a functional form for $I^n_t$ we can tell how much capital is around at time $t$, given an initial amount at time 0, $K_0$.

Let $I^n_t = t^a$. Then

$$K_t - K_0 = \int_0^t \dot{K}\, dt = \int_0^t I^n_u\, du = \int_0^t u^a\, du = \left[\frac{u^{a+1}}{a+1}\right]_0^t = \frac{t^{a+1}}{a+1} .$$
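A quick Euler-step simulation confirms the closed form (a numerical sketch; the parameter values are arbitrary):

```python
# Euler-step check of K_t - K_0 = t^(a+1)/(a+1) when net investment is I^n_t = t^a.
a, T, dt = 2.0, 3.0, 1e-4
K, t = 0.0, 0.0              # start from K_0 = 0
while t < T:
    K += t**a * dt           # Kdot = I^n_t = t^a
    t += dt

closed_form = T**(a + 1) / (a + 1)   # = 3^3 / 3 = 9
print(K, closed_form)
```

Accumulating the instantaneous net-investment increments step by step recovers $t^{a+1}/(a+1)$ up to discretization error.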
14.6 Domar's growth model

Domar was interested in answering: what must investment be in order to satisfy the equilibrium condition at all times?

Structure of the model:

1. Fixed saving rate: $I_t = sY_t$, $s \in (0, 1)$. Therefore $\dot{I} = s\dot{Y}$, and so

$$\dot{Y} = \frac{1}{s}\dot{I} ,$$

i.e. there is a multiplier effect of investment on output.

2. Potential output is given by a CRS production function

$$\kappa_t = \rho K_t ,$$

therefore

$$\dot{\kappa} = \rho\dot{K} = \rho I .$$

3. Long run equilibrium is given when potential output is equal to actual output

$$\kappa = Y ,$$

therefore

$$\dot{\kappa} = \dot{Y} .$$

We have three equations:

$$\text{(i) output demand:} \quad \dot{Y} = \frac{1}{s}\dot{I}$$
$$\text{(ii) potential output:} \quad \dot{\kappa} = \rho I$$
$$\text{(iii) equilibrium:} \quad \dot{\kappa} = \dot{Y} .$$

Use (iii) in (ii) to get

$$\rho I = \dot{Y}$$

and then use (i) to get

$$\rho I = \frac{1}{s}\dot{I} ,$$

which gives

$$\frac{\dot{I}}{I} = \rho s .$$

Now integrate in order to find the level of investment at any given time:

$$\int \frac{\dot{I}}{I}\, dt = \int \rho s\, dt$$
$$\ln I = \rho s t + c$$
$$I_t = e^{\rho s t + c} = e^{\rho s t}e^c = I_0 e^{\rho s t} .$$
The larger is productivity, $\rho$, and the higher the saving rate, $s$, the more investment is required. This is the amount of investment needed to keep output in check with potential output.

Now suppose that output is not equal to its potential, i.e. $\kappa \neq Y$. This could happen if investment is not growing at the correct rate of $\rho s$. Suppose that investment is growing at rate $a$, i.e.

$$I_t = I_0 e^{at} .$$

Define the utilization rate

$$u = \lim_{t \to \infty}\frac{Y_t}{\kappa_t} .$$

Compute what the capital stock is at any moment:

$$K_t - K_0 = \int_0^t \dot{K}\, dt = \int_0^t I_t\, dt = \int_0^t I_0 e^{at}\, dt = \frac{1}{a}I_0 e^{at}$$

(the constant of integration is absorbed in $K_0$). Now compute

$$u = \lim_{t \to \infty}\frac{Y_t}{\kappa_t} = \lim_{t \to \infty}\frac{\frac{1}{s}I_t}{\rho K_t} = \frac{1}{\rho s}\lim_{t \to \infty}\frac{I_t}{K_t} = \frac{1}{\rho s}\lim_{t \to \infty}\frac{I_0 e^{at}}{\frac{1}{a}I_0 e^{at} + K_0} = \frac{a}{\rho s}\lim_{t \to \infty}\frac{I_0 e^{at}}{I_0 e^{at} + aK_0} = \frac{a}{\rho s} .$$

The last equality follows from using L'Hopital's rule. If $a > \rho s$ then $u > 1$: there is a shortage of capacity, excess demand. If $a < \rho s$ then $u < 1$: there is an excess of capacity, excess supply. Thus, in order to keep output demand equal to output potential we must have $a = \rho s$ and thus $u = 1$.
In fact, this holds at any point in time:

$$\dot{I} = \frac{d}{dt}I_0 e^{at} = aI_0 e^{at} .$$

Therefore

$$\dot{Y} = \frac{1}{s}\dot{I} = \frac{a}{s}I_0 e^{at}$$
$$\dot{\kappa} = \rho I = \rho I_0 e^{at} .$$

So

$$\frac{\dot{Y}}{\dot{\kappa}} = \frac{\frac{a}{s}I_0 e^{at}}{\rho I_0 e^{at}} = \frac{a}{s\rho} = u .$$

If the utilization rate is too high, $u > 1$, then demand growth outstrips supply, $\dot{Y} > \dot{\kappa}$. If the utilization rate is too low, $u < 1$, then demand growth lags behind supply, $\dot{Y} < \dot{\kappa}$.

Thus, the razor's edge condition: only $a = s\rho$ keeps us on a sustainable equilibrium path:

« If $u > 1$, i.e. $a > s\rho$, there is excess demand, investment is too high. Entrepreneurs will try to invest even more to increase supply, but this implies an even larger gap between the two.

« If $u < 1$, i.e. $a < s\rho$, there is excess supply, investment is too low. Entrepreneurs will try to cut investment to lower demand, but this implies an even larger gap between the two.

This model is clearly unrealistic and highly stylized.
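The limit $u = a/(\rho s)$ can be seen directly by evaluating the paths at increasing horizons (a numerical sketch; parameters are arbitrary, with $a > \rho s$ so that $u > 1$, i.e. excess demand):

```python
# Domar: with I_t = I0*e^{at}, utilization u = Y_t/kappa_t -> a/(rho*s).
import math

rho, s, a, I0, K0 = 0.25, 0.2, 0.08, 1.0, 10.0
for t in [10.0, 100.0, 1000.0]:
    K = K0 + (I0 / a) * (math.exp(a * t) - 1)   # K_t from integrating Kdot = I_t
    Y = I0 * math.exp(a * t) / s                # Y = I/s (multiplier)
    kappa = rho * K                             # potential output
    print(t, Y / kappa)
```

The ratio converges to $a/(\rho s) = 0.08/0.05 = 1.6 > 1$, illustrating the capacity-shortage case.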
15 First order differential equations

We deal with equations that involve $\dot{y}$. The general form is

$$\dot{y} + u(t)y(t) = w(t) .$$

The goal is to characterize $y(t)$ in terms of $u(t)$ and $w(t)$.

« First order means $\frac{dy}{dt}$, not $\frac{d^2y}{dt^2}$.

« No products: $\dot{y}y$ is not permitted.

In principle, we can have $d^n y/dt^n$, where $n$ is the order of the differential equation. In the next chapter we will deal with up to $d^2 y/dt^2$.

15.1 Constant coefficients

15.1.1 Homogenous case

$$\dot{y} + ay = 0$$

This gives rise to

$$\frac{\dot{y}}{y} = -a ,$$

which has solution

$$y(t) = y_0 e^{-at} .$$

We need an additional condition to pin down $y_0$.
15.1.2 Non homogenous case

$$\dot{y} + ay = b ,$$

where $b \neq 0$. The solution method involves splitting the solution into two:

$$y(t) = y_c(t) + y_p(t) ,$$

where $y_p(t)$ is a particular solution and $y_c(t)$ is a complementary function.

« $y_c(t)$ solves the homogenous equation

$$\dot{y} + ay = 0 ,$$

so that

$$y_c(t) = Ae^{-at} .$$

« $y_p(t)$ solves the original equation for a stationary solution, i.e. $\dot{y} = 0$, which implies that $y$ is constant and thus $y = b/a$, where $a \neq 0$. The solution is thus

$$y = y_c + y_p = Ae^{-at} + \frac{b}{a} .$$

Given an initial condition $y(0) = y_0$, we have

$$y_0 = Ae^{-a \cdot 0} + \frac{b}{a} = A + \frac{b}{a} \quad \Rightarrow \quad A = y_0 - \frac{b}{a} .$$

The general solution is

$$y(t) = \left(y_0 - \frac{b}{a}\right)e^{-at} + \frac{b}{a} = y_0 e^{-at} + \frac{b}{a}\left(1 - e^{-at}\right) .$$

One way to think of the solution is as a linear combination of two points: the initial condition $y_0$ and the particular, stationary solution $b/a$. (If $a > 0$, then for $t \geq 0$ we have $0 \leq e^{-at} \leq 1$, which yields a convex combination.) Verify this solution:

$$\dot{y} = -a\left(y_0 - \frac{b}{a}\right)e^{-at} = -a\Bigg[\underbrace{\left(y_0 - \frac{b}{a}\right)e^{-at} + \frac{b}{a}}_{y} - \frac{b}{a}\Bigg] = -ay + b$$

$$\Rightarrow \quad \dot{y} + ay = b .$$

Yet a different way to look at the solution is

$$y(t) = \left(y_0 - \frac{b}{a}\right)e^{-at} + \frac{b}{a} = ke^{-at} + \frac{b}{a}$$

for some arbitrary point $k$. In this case

$$\dot{y} = -ake^{-at} ,$$

and we have

$$\dot{y} + ay = -ake^{-at} + a\left(ke^{-at} + \frac{b}{a}\right) = b .$$

« When $a = 0$, we get

$$\dot{y} = b$$

so

$$y = y_0 + bt .$$

This follows directly from

$$y = \int \dot{y}\, dt = \int b\, dt = bt + c ,$$

where $c = y_0$. We can also solve this using the same technique as above. $y_c$ solves $\dot{y} = 0$, so that this is a constant, $y_c = A$. $y_p$ should solve $0 = b$, but this does not work unless $b = 0$. So try a different particular solution, $y_p = kt$, which requires $k = b$, because then $\dot{y}_p = k = b$. So the general solution is

$$y = y_c + y_p = A + bt .$$

Together with a value for $y_0$, we get $A = y_0$.
E.g.

$$\dot{y} + 2y = 6 .$$

$y_c$ solves $\dot{y} + 2y = 0$, so

$$y_c = Ae^{-2t} .$$

$y_p$ solves $2y = 6$ ($\dot{y} = 0$), so

$$y_p = 3 .$$

Thus

$$y = y_c + y_p = Ae^{-2t} + 3 .$$

Together with $y_0 = 10$ we get $10 = Ae^{-2 \cdot 0} + 3$, so that $A = 7$. This completes the solution:

$$y = 7e^{-2t} + 3 .$$

Verifying this solution:

$$\dot{y} = -14e^{-2t}$$

and

$$\dot{y} + 2y = -14e^{-2t} + 2\left(7e^{-2t} + 3\right) = 6 .$$
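The verification above can also be done numerically, approximating $\dot{y}$ by a finite difference:

```python
# Verify y(t) = 7e^{-2t} + 3 solves ydot + 2y = 6 with y(0) = 10.
import math

y = lambda t: 7 * math.exp(-2 * t) + 3
h = 1e-6
assert abs(y(0.0) - 10.0) < 1e-9           # initial condition
for t in [0.0, 0.5, 1.0, 2.0]:
    ydot = (y(t + h) - y(t - h)) / (2 * h)  # numerical ydot
    assert abs(ydot + 2 * y(t) - 6.0) < 1e-4
print("y(t) = 7e^{-2t} + 3 checks out")
```

The residual $\dot{y} + 2y - 6$ is zero (up to finite-difference error) at every test point, and $y(t) \to 3 = b/a$ as $t \to \infty$.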
15.2 Variable coefficients

The general form is

$$\dot{y} + u(t)y(t) = w(t) .$$

15.2.1 Homogenous case

$w(t) = 0$:

$$\dot{y} + u(t)y(t) = 0 \quad \Rightarrow \quad \frac{\dot{y}}{y} = -u(t) .$$

Integrate both sides to get

$$\int \frac{\dot{y}}{y}\, dt = \int -u(t)\, dt$$
$$\ln y + c = -\int u(t)\, dt$$
$$y = e^{-c - \int u(t)dt} = Ae^{-\int u(t)dt} ,$$

where $A = e^{-c}$. Thus, the general solution is

$$y = Ae^{-\int u(t)dt} .$$

Together with a value for $y_0$ and a functional form for $u(t)$ we can solve explicitly.

E.g.

$$\dot{y} + 3t^2 y = 0 .$$

Thus

$$\frac{\dot{y}}{y} = -3t^2$$
$$\int \frac{\dot{y}}{y}\, dt = \int -3t^2\, dt$$
$$\ln y + c = -\int 3t^2\, dt$$
$$y = e^{-c - \int 3t^2 dt} = Ae^{-t^3} .$$
15.2.2 Non homogenous case

$w(t) \neq 0$:

$$\dot{y} + u(t)y(t) = w(t) .$$

The solution is

$$y = e^{-\int u(t)dt}\left[A + \int w(t)e^{\int u(t)dt}\, dt\right] .$$

Obtaining this solution requires some elaborate footwork, which we will do. But first, see that this works: e.g.,

$$\dot{y} + t^2 y = t^2 \quad \Rightarrow \quad u(t) = t^2 ,\ w(t) = t^2 .$$

$$\int u(t)\, dt = \int t^2\, dt = \frac{1}{3}t^3$$
$$\int w(t)e^{\int u(t)dt}\, dt = \int t^2 e^{t^3/3}\, dt = e^{t^3/3} ,$$

since

$$\int f'(u)e^{f(u)}\, du = e^{f(u)} .$$

Thus

$$y = e^{-t^3/3}\left[A + e^{t^3/3}\right] = Ae^{-t^3/3} + 1 .$$

Verifying this solution:

$$\dot{y} = -t^2 Ae^{-t^3/3}$$

so

$$\dot{y} + u(t)y(t) = -t^2 Ae^{-t^3/3} + t^2\left(Ae^{-t^3/3} + 1\right) = -t^2 Ae^{-t^3/3} + t^2 Ae^{-t^3/3} + t^2 = t^2 = w(t) .$$
15.3 Solving exact differential equations

Suppose that the primitive differential equation can be written as

$$F(y, t) = c ,$$

so that

$$dF = F_y\, dy + F_t\, dt = 0 .$$

We use the latter total differential to obtain $F(y, t)$, from which we obtain $y(t)$. We set $F(y, t) = c$ to get initial conditions.

Definition: the differential equation

$$M\, dy + N\, dt = 0$$

is an exact differential equation iff $\exists F(y, t)$ such that $M = F_y$ and $N = F_t$. By Young's Theorem we have

$$\frac{\partial M}{\partial t} = \frac{\partial^2 F}{\partial t \partial y} = \frac{\partial N}{\partial y} .$$

The latter is what we will be checking in practice.
E.g., let $F(y, t) = y^2 t = c$. Then

$$dF = F_y\, dy + F_t\, dt = 2yt\, dy + y^2\, dt = 0 .$$

Set

$$M = 2yt , \quad N = y^2 .$$

Check:

$$\frac{\partial^2 F}{\partial t \partial y} = \frac{\partial M}{\partial t} = 2y$$
$$\frac{\partial^2 F}{\partial y \partial t} = \frac{\partial N}{\partial y} = 2y .$$

So this is an exact differential equation.

Before solving, one must always check that the equation is indeed exact.

« Step 1: Since

$$dF = F_y\, dy + F_t\, dt$$

we can integrate both sides. But instead, we write

$$F(y, t) = \int F_y\, dy + \psi(t) = \int M\, dy + \psi(t) ,$$

where $\psi(t)$ is a residual function.

« Step 2: Take the derivative $N = F_t$ from step 1 to identify $\psi(t)$.

« Step 3: Solve for $y(t)$, taking into account $F(y, t) = c$.
Example:

$$\underbrace{2yt}_{M}\, dy + \underbrace{y^2}_{N}\, dt = 0 .$$

Step 1:

$$F(y, t) = \int M\, dy + \psi(t) = \int 2yt\, dy + \psi(t) = y^2 t + \psi(t) .$$

Step 2:

$$\frac{\partial F(y, t)}{\partial t} = \frac{\partial}{\partial t}\left[y^2 t + \psi(t)\right] = y^2 + \psi'(t) .$$

Since $N = y^2$ we must have $\psi'(t) = 0$, i.e. $\psi(t)$ is a constant function, $\psi(t) = k$, for some $k$. Thus

$$F(y, t) = y^2 t + k = c ,$$

so we can ignore the constant $k$ and write

$$F(y, t) = y^2 t = c .$$

Step 3: We can now solve for $y(t)$:

$$y(t) = (c/t)^{1/2} .$$
Example:

$$(t + 2y)\, dy + \left(y + 3t^2\right) dt = 0 ,$$

so that

$$M = t + 2y , \quad N = y + 3t^2 .$$

Check that this equation is exact:

$$\frac{\partial M}{\partial t} = 1 = \frac{\partial N}{\partial y} ,$$

so this is indeed an exact differential equation.

Step 1:

$$F(y, t) = \int M\, dy + \psi(t) = \int (t + 2y)\, dy + \psi(t) = ty + y^2 + \psi(t) .$$

Step 2:

$$\frac{\partial F(y, t)}{\partial t} = \frac{\partial}{\partial t}\left[ty + y^2 + \psi(t)\right] = y + \psi'(t) = N = y + 3t^2 ,$$

so that

$$\psi'(t) = 3t^2$$

and

$$\psi(t) = \int \psi'(t)\, dt = \int 3t^2\, dt = t^3 .$$

Thus

$$F(y, t) = ty + y^2 + \psi(t) = ty + y^2 + t^3 .$$

Step 3: we cannot solve this analytically for $y(t)$, but using the implicit function theorem, we can characterize it.
Example: Let $T \sim F(t)$ be the time until some event occurs, $T \geq 0$. Define the hazard rate as

$$h(t) = \frac{f(t)}{1 - F(t)} ,$$

which is the "probability" that the event occurs at time $t$, given that it has not occurred by time $t$. We can write

$$h(t) = -\frac{R'(t)}{R(t)} ,$$

where $R(t) = 1 - F(t)$. We know how to solve such differential equations:

$$R'(t) + h(t)R(t) = 0$$
$$R(t) = Ae^{-\int_0^t h(s)ds} .$$

Since $R(0) = 1$ (the event has not occurred by time 0), we have $A = 1$:

$$R(t) = e^{-\int_0^t h(s)ds} .$$

It follows that

$$f(t) = -R'(t) = -e^{-\int_0^t h(s)ds}\frac{\partial}{\partial t}\left[-\int_0^t h(s)\, ds\right] = -e^{-\int_0^t h(s)ds}\left[-h(t)\right] = h(t)e^{-\int_0^t h(s)ds} .$$

Suppose that the hazard rate is constant:

$$h(t) = \alpha .$$

In that case

$$f(t) = \alpha e^{-\int_0^t \alpha\, ds} = \alpha e^{-\alpha t} ,$$

which is the p.d.f. of the exponential distribution.

Now suppose that the hazard rate is not constant, but

$$h(t) = \alpha\beta t^{\beta - 1} .$$

In that case

$$f(t) = \alpha\beta t^{\beta - 1}e^{-\int_0^t \alpha\beta s^{\beta - 1}ds} = \alpha\beta t^{\beta - 1}e^{-\alpha t^\beta} ,$$

which is the p.d.f. of the Weibull distribution. This is useful if you want to model an increasing hazard ($\beta > 1$) or a decreasing hazard ($\beta < 1$). When $\beta = 1$ we get the exponential distribution.
15.4 Integrating factor and the general solution

Sometimes we can turn a non-exact differential equation into an exact one. For example,

$$2t\, dy + y\, dt = 0$$

is not exact:

$$M = 2t , \quad N = y$$

and

$$M_t = 2 \neq N_y = 1 .$$

But if we multiply the equation by $y$, we get an exact equation:

$$2yt\, dy + y^2\, dt = 0 ,$$

which we saw above is exact.
15.4.1 Integrating factor

We have the general formulation

$$\dot{y} + uy = w ,$$

where all variables are functions of $t$, and we wish to solve for $y(t)$. Write the equation above as

$$\frac{dy}{dt} + uy = w$$
$$dy + uy\, dt = w\, dt$$
$$dy + (uy - w)\, dt = 0 .$$

The integrating factor is

$$e^{\int^t u(s)ds} .$$

If we multiply the equation by this factor we always get an exact equation:

$$e^{\int^t u(s)ds}\, dy + e^{\int^t u(s)ds}(uy - w)\, dt = 0 .$$

To verify this, write

$$M = e^{\int^t u(s)ds} , \quad N = e^{\int^t u(s)ds}(uy - w)$$

and

$$\frac{\partial M}{\partial t} = \frac{\partial}{\partial t}e^{\int^t u(s)ds} = e^{\int^t u(s)ds}u(t)$$
$$\frac{\partial N}{\partial y} = \frac{\partial}{\partial y}\left[e^{\int^t u(s)ds}(uy - w)\right] = e^{\int^t u(s)ds}u(t) .$$

So $\partial M/\partial t = \partial N/\partial y$.

This form can be recovered from the method of undetermined coefficients. We seek some $A$ such that

$$\underbrace{A}_{M}\, dy + \underbrace{A(uy - w)}_{N}\, dt = 0$$

and

$$\frac{\partial M}{\partial t} = \frac{\partial A}{\partial t} = \dot{A}$$
$$\frac{\partial N}{\partial y} = \frac{\partial}{\partial y}\left[A(uy - w)\right] = Au$$

are equal. This means

$$\dot{A} = Au \quad \Rightarrow \quad \dot{A}/A = u \quad \Rightarrow \quad A = e^{\int^t u(s)ds} .$$
15.4.2 The general solution

We have some equation that is written as

$$\dot{y} + uy = w .$$

Rewrite as

$$dy + (uy - w)\, dt = 0 .$$

Multiply by the integrating factor to get an exact equation:

$$\underbrace{e^{\int^t u(s)ds}}_{M}\, dy + \underbrace{e^{\int^t u(s)ds}(uy - w)}_{N}\, dt = 0 .$$

« Step 1:

$$F(y, t) = \int M\, dy + \psi(t) = \int e^{\int^t u(s)ds}\, dy + \psi(t) = ye^{\int^t u(s)ds} + \psi(t) .$$

« Step 2:

$$\frac{\partial F}{\partial t} = \frac{\partial}{\partial t}\left[ye^{\int^t u(s)ds} + \psi(t)\right] = ye^{\int^t u(s)ds}u(t) + \psi'(t) = N .$$

Using $N$ from above we get

$$N = ye^{\int^t u(s)ds}u(t) + \psi'(t) = e^{\int^t u(s)ds}(uy - w) ,$$

so that

$$\psi'(t) = -e^{\int^t u(s)ds}w$$

and so

$$\psi(t) = \int -e^{\int^t u(s)ds}w\, dt .$$

Now we can write

$$F(y, t) = ye^{\int^t u(s)ds} - \int e^{\int^t u(s)ds}w\, dt = c .$$

« Step 3, solve for $y$:

$$y = e^{-\int^t u(s)ds}\left[c + \int e^{\int^t u(s)ds}w\, dt\right] .$$

15.5 First order nonlinear differential equations of the 1st degree

In general,

$$\dot{y} = h(y, t)$$

will yield an equation like this:

$$f(y, t)\, dy + g(y, t)\, dt = 0 .$$

In principle, $y$ and $t$ can appear in any degree.

« First order means $\dot{y}$, not $y^{(n)}$.

« First degree means $\dot{y}$, not $(\dot{y})^n$.
15.5.1 Exact differential equations

See above.

15.5.2 Separable variables

$$f(y)\, dy + g(t)\, dt = 0 .$$

Then just integrate

$$\int f(y)\, dy = -\int g(t)\, dt$$

and solve for $y(t)$.

Example:

$$3y^2\, dy - t\, dt = 0$$
$$\int 3y^2\, dy = \int t\, dt$$
$$y^3 = \frac{1}{2}t^2 + c$$
$$y(t) = \left(\frac{1}{2}t^2 + c\right)^{1/3} .$$

Example:

$$2t\, dy - y\, dt = 0$$
$$\frac{dy}{y} = \frac{dt}{2t}$$
$$\int \frac{dy}{y} = \int \frac{dt}{2t}$$
$$\ln y = \frac{1}{2}\ln t + c$$
$$y = e^{\frac{1}{2}\ln t + c} = e^{\ln t^{1/2}}e^c = e^c t^{1/2} .$$
15.5.3 Reducible equations

Suppose that

$$\dot{y} = h(y, t)$$

can be written as

$$\dot{y} + Ry = Ty^m ,$$

where

$$R = R(t) , \quad T = T(t)$$

are functions only of $t$ and

$$m \neq 0, 1 .$$

This is a Bernoulli equation, which can be reduced to a linear equation and solved as such. Here's how:

$$\dot{y} + Ry = Ty^m$$
$$\frac{1}{y^m}\dot{y} + Ry^{1-m} = T .$$

Use a change of variables

$$z = y^{1-m}$$

so that

$$\dot{z} = (1 - m)y^{-m}\dot{y} \quad \Rightarrow \quad \frac{\dot{y}}{y^m} = \frac{\dot{z}}{1 - m} .$$

Plug this into the equation to get

$$\frac{\dot{z}}{1 - m} + Rz = T$$
$$dz + \Big[\underbrace{(1 - m)R}_{u}\, z - \underbrace{(1 - m)T}_{w}\Big]\, dt = 0$$
$$dz + [uz - w]\, dt = 0 .$$

This is something we know how to solve:

$$z(t) = e^{-\int^t u(s)ds}\left[A + \int e^{\int^t u(s)ds}w\, dt\right] ,$$

from which we get the original

$$y(t) = z(t)^{\frac{1}{1-m}} .$$
Example:

$$\dot{y} + ty = 3ty^2 .$$

In this case

$$R = t , \quad T = 3t , \quad m = 2 .$$

Divide by $y^2$ and rearrange to get

$$y^{-2}\dot{y} + ty^{-1} - 3t = 0 .$$

Change variables

$$z = y^{-1} , \quad \dot{z} = -y^{-2}\dot{y} ,$$

so that we get

$$-\dot{z} + tz - 3t = 0$$
$$dz + (-tz + 3t)\, dt = 0 ,$$

so that we set

$$u = -t , \quad w = -3t .$$

Using the formula we get

$$z(t) = e^{-\int^t u(s)ds}\left[A + \int e^{\int^t u(s)ds}w\, dt\right] = e^{\int^t s\, ds}\left[A - 3\int e^{-\int^t s\, ds}t\, dt\right] = e^{t^2/2}\left[A - 3\int e^{-t^2/2}t\, dt\right] = e^{t^2/2}\left[A + 3e^{-t^2/2}\right] = Ae^{t^2/2} + 3 ,$$

so that

$$y(t) = \frac{1}{z} = \left(Ae^{t^2/2} + 3\right)^{-1} .$$
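One can verify by substitution that this candidate solves the original Bernoulli equation; a numerical check (the value of $A$ is arbitrary):

```python
# Verify y(t) = (A*e^{t^2/2} + 3)^{-1} solves ydot + t*y = 3t*y^2.
import math

A = 2.0                                   # any constant A works
y = lambda t: 1.0 / (A * math.exp(t**2 / 2) + 3)
h = 1e-6
for t in [0.0, 0.5, 1.0, 1.5]:
    ydot = (y(t + h) - y(t - h)) / (2 * h)
    assert abs(ydot + t * y(t) - 3 * t * y(t)**2) < 1e-6
print("Bernoulli solution verified")
```

The residual $\dot{y} + ty - 3ty^2$ vanishes at every test point, for any choice of the constant $A$.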
Example:

$$\dot{y} + y/t = y^3 .$$

In this case

$$R = 1/t , \quad T = 1 , \quad m = 3 .$$

Divide by $y^3$ and rearrange to get

$$y^{-3}\dot{y} + t^{-1}y^{-2} - 1 = 0 .$$

Change variables

$$z = y^{-2} , \quad \dot{z} = -2y^{-3}\dot{y} ,$$

so that we get

$$-\frac{\dot{z}}{2} + \frac{z}{t} - 1 = 0$$
$$\dot{z} - 2\frac{z}{t} + 2 = 0$$
$$dz + \left(-2\frac{z}{t} + 2\right) dt = 0 ,$$

so that we set

$$u = -2/t , \quad w = -2 .$$

Using the formula we get

$$z(t) = e^{-\int^t u(s)ds}\left[A + \int e^{\int^t u(s)ds}w\, dt\right] = e^{2\int^t s^{-1}ds}\left[A - 2\int e^{-2\int^t s^{-1}ds}\, dt\right] = e^{2\ln t}\left[A - 2\int e^{-2\ln t}\, dt\right] = t^2\left[A - 2\int t^{-2}\, dt\right] = t^2\left[A + 2t^{-1}\right] = At^2 + 2t ,$$

so that

$$y(t) = \frac{1}{z^{1/2}} = \left(At^2 + 2t\right)^{-1/2} .$$
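Again we can verify by substitution that $z = At^2 + 2t$ (note the $2t$ term coming from $-2\int t^{-2}dt = 2t^{-1}$) yields a solution of the original equation:

```python
# Verify y(t) = (A*t^2 + 2t)^{-1/2} solves ydot + y/t = y^3 (where A*t^2 + 2t > 0).
A = 1.0
y = lambda t: (A * t**2 + 2 * t) ** -0.5
h = 1e-6
for t in [0.5, 1.0, 2.0, 4.0]:
    ydot = (y(t + h) - y(t - h)) / (2 * h)
    assert abs(ydot + y(t) / t - y(t)**3) < 1e-6
print("solution verified")
```

The residual $\dot{y} + y/t - y^3$ is zero at every test point, confirming the derivation.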
15.6 The qualitative graphic approach

Given

$$\dot{y} = f(y)$$

we can plot $\dot{y}$ as a function of $y$. This is called a phase diagram. This is an autonomous differential equation, since $t$ does not appear explicitly as an argument. We have three cases:

1. $\dot{y} > 0$: $y$ is growing, so we shift to the right.

2. $\dot{y} < 0$: $y$ is decreasing, so we shift to the left.

3. $\dot{y} = 0$: $y$ is stationary, an equilibrium.

« System A is dynamically stable: the $\dot{y}$ curve is downward sloping; any movement away from the stationary point $y^*$ will bring us back there.

« System B is dynamically unstable: the $\dot{y}$ curve is upward sloping; any movement away from the stationary point $y^*$ takes us farther away.

For example, consider

$$\dot{y} + ay = b$$

with solution

$$y(t) = \left(y_0 - \frac{b}{a}\right)e^{-at} + \frac{b}{a} = e^{-at}y_0 + \left(1 - e^{-at}\right)\frac{b}{a} .$$

This is a linear combination between the initial point $y_0$ and $b/a$.

« System A happens when $a > 0$: $\lim_{t \to \infty} e^{-at} = 0$, so that $\lim_{t \to \infty} y(t) = b/a = y^*$.

« System B happens when $a < 0$: $\lim_{t \to \infty} e^{-at} = \infty$, so that $\lim_{t \to \infty} y(t) = \pm\infty$.
15.7 The Solow growth model (no long run growth version)

1. CRS production function

$$Y = F(K, L)$$
$$y = f(k) ,$$

where $y = Y/L$, $k = K/L$. Given $F_K > 0$ and $F_{KK} < 0$ we have $f' > 0$ and $f'' < 0$.

2. Constant saving rate: $I = sY$, so that $\dot{K} = sY - \delta K$.

3. Constant labor force growth: $\dot{L}/L = n$.

$$\dot{K} = sF(K, L) - \delta K = sLf(k) - \delta K$$
$$\frac{\dot{K}}{L} = sf(k) - \delta k .$$

Since

$$\dot{k} = \frac{d}{dt}\left(\frac{K}{L}\right) = \frac{\dot{K}L - K\dot{L}}{L^2} = \frac{\dot{K}}{L} - \frac{K}{L}\frac{\dot{L}}{L} = \frac{\dot{K}}{L} - kn ,$$

we get

$$\dot{k} = sf(k) - (n + \delta)k .$$

This is an autonomous differential equation in $k$.

Since $f' > 0$ and $f'' < 0$, we know that $\exists k$ such that $sf(k) < (n + \delta)k$. And given the Inada conditions – $f'(0) = \infty$ and $f(0) = 0$ – $\exists k$ such that $sf(k) > (n + \delta)k$. Therefore $\dot{k} > 0$ for low levels of $k$, and $\dot{k} < 0$ for high levels of $k$. Given the continuity of $f$ we know that $\exists k^*$ such that $\dot{k} = 0$, i.e. the system is stable.
16 Higher order di¤erential equations
We will discuss here only second order, since it is very rare to …nd higher order di¤erential equations in
economics. The methods introduced here can be extended to higher order di¤erential equations.
16.1 Second order, constant coe¢cients
n
tt
+a
1
n
t
+a
2
n = / .
where
n = n (t)
n
t
= dn´dt
n
tt
= d
2
n´dt
2
.
102
and a, /, and c are constants. The solution will take the form
n = n
µ
+n
c
.
where the particular solution, n
µ
, characterizes a stable point and the complementary function, n
c
, charac
terizes dynamics/transitions.
The particular solution. We start with the simplest solution possible; if this fails, we move up in the
degree of complexity.
« If a
2
= 0, then n
µ
= /´a
2
is solution, which implies a stable point.
« If a
2
= 0 and a
1
= 0, then n
µ
=
b
o1
t .
« If a
2
= 0 and a
1
= 0, then n
µ
=
b
2
t
2
.
In the latter solutions, the "stable point" is moving. Recall that this solution is too restrictive, because it
constrains the coe¢cients in the di¤erential equation.
The complementary function solves the homogenous equation
$$y'' + a_1 y' + a_2 y = 0 \ .$$
We "guess"
$$y = A e^{rt} \ ,$$
which implies
$$y' = r A e^{rt}\ , \quad y'' = r^2 A e^{rt}$$
and thus
$$y'' + a_1 y' + a_2 y = A\left(r^2 + a_1 r + a_2\right) e^{rt} = 0 \ .$$
Unless $A = 0$, we must have
$$r^2 + a_1 r + a_2 = 0 \ .$$
The roots are
$$r_{1,2} = \frac{-a_1 \pm \sqrt{a_1^2 - 4 a_2}}{2} \ .$$
For each root $r_i$ there is a potentially different $A_i$ coefficient. So there are two possible solutions,
$$y_1 = A_1 e^{r_1 t}\ , \quad y_2 = A_2 e^{r_2 t} \ .$$
But we cannot just choose one solution, because this would restrict the coefficients in the original differential equation. Thus, we have
$$y_c = A_1 e^{r_1 t} + A_2 e^{r_2 t} \ .$$
Given two conditions on $y$ – i.e. two values of either one of $y$, $y'$ or $y''$ at some point in time – we can pin down $A_1$ and $A_2$.
There are three options for the composition of the roots:
• Two distinct real roots: $r_1, r_2 \in \mathbb{R}$ and $r_1 \neq r_2$. This will give us values for $A_1$ and $A_2$, given two conditions on $y$:
$$y_c = A_1 e^{r_1 t} + A_2 e^{r_2 t} \ .$$
• Repeated real root: $r_1 = r_2 = r \in \mathbb{R}$, $r = -a_1/2$. It might seem that we can just add up the solutions as before, but this would restrict the coefficients in the original differential equation. This is so because in
$$y_c = \left(A_1 + A_2\right) e^{rt}$$
we cannot separately identify $A_1$ from $A_2$. We guess again:
$$y_1 = A_1 e^{rt}\ , \quad y_2 = A_2 t e^{rt} \ .$$
This turns out to work, because both solve the homogenous equation. You can check this. Thus for a repeated real root the complementary function is
$$y_c = A_1 e^{rt} + A_2 t e^{rt} \ .$$
• Complex roots, $r_{1,2} = h \pm bi$, $i = \sqrt{-1}$, $a_1^2 < 4 a_2$, where $h = -a_1/2$ and $b = \sqrt{4a_2 - a_1^2}/2$. This gives rise to oscillating dynamics:
$$y_c = e^{ht}\left[A_1 \cos(bt) + A_2 \sin(bt)\right] \ .$$
We do not discuss this case in detail here.
Stability: does $y_c \to 0$?
• $r_1, r_2 \in \mathbb{R}$: need both $r_1, r_2 < 0$.
• $r_1 = r_2 = r \in \mathbb{R}$: need $r < 0$.
• Complex roots: need $h < 0$.
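The root taxonomy and the stability conditions above can be sketched in a few lines. This is a minimal illustration (the example coefficients are mine, not from the notes); `cmath` handles the complex-root case uniformly.

```python
# Classify y'' + a1*y' + a2*y = b by the roots of r**2 + a1*r + a2 = 0:
# distinct real, repeated real, or complex, and stable iff all real parts < 0.
import cmath

def classify_roots(a1, a2):
    disc = a1**2 - 4 * a2
    r1 = (-a1 + cmath.sqrt(disc)) / 2
    r2 = (-a1 - cmath.sqrt(disc)) / 2
    kind = ("distinct real" if disc > 0 else
            "repeated real" if disc == 0 else "complex")
    stable = max(r1.real, r2.real) < 0   # y_c -> 0 iff every root has Re < 0
    return kind, stable
```

For instance, $y'' + 3y' + 2y = b$ has roots $-1, -2$ (stable), while $y'' + 2y' + 5y = b$ has roots $-1 \pm 2i$ (oscillating but stable).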
16.2 Differential equations with moving constant
$$y'' + a_1 y' + a_2 y = b(t) \ ,$$
where $a_1$ and $a_2$ are constants. We require that $b(t)$ takes a form that combines a finite number of "elementary functions", e.g. $kt^m$, $e^{kt}$, etc. We find $y_c$ in the same way as above, because we consider the homogenous equation, in which $b(t) = 0$. We find $y_p$ by using some educated guess and verify our guess using the method of undetermined coefficients. There is no general solution procedure for any type of $b(t)$.
Example: polynomial $b(t)$:
$$y'' + 5y' + 3y = 6t^2 - t - 1 \ .$$
Guess:
$$y_p = z_2 t^2 + z_1 t + z_0 \ .$$
This implies
$$y_p' = 2 z_2 t + z_1\ , \quad y_p'' = 2 z_2 \ .$$
Plug this into the equation to get
$$y'' + 5y' + 3y = 2 z_2 + 5\left(2 z_2 t + z_1\right) + 3\left(z_2 t^2 + z_1 t + z_0\right) = 3 z_2 t^2 + \left(10 z_2 + 3 z_1\right) t + \left(2 z_2 + 5 z_1 + 3 z_0\right) \ .$$
Matching coefficients, we need to solve
$$3 z_2 = 6\ , \quad 10 z_2 + 3 z_1 = -1\ , \quad 2 z_2 + 5 z_1 + 3 z_0 = -1 \ .$$
This gives $z_2 = 2$, $z_1 = -7$, $z_0 = 10$. Thus
$$y_p = 2t^2 - 7t + 10 \ .$$
But this may not always work. For instance, if
$$y'' + a_1 y' + a_2 y = t^{-1} \ ,$$
then no guess of the type $y_p = z t^{-1}$ or $y_p = z \ln t$ will work.
Example: missing $y(t)$ and polynomial $b(t)$:
$$y'' + 5y' = 6t^2 - t - 1 \ .$$
The former type of guess,
$$y_p = z_2 t^2 + z_1 t + z_0 \ ,$$
will not work, because $z_0$ never shows up in the equation (the $y$ term is missing), so it cannot be recovered. Instead, try
$$y_p = t\left(z_2 t^2 + z_1 t + z_0\right) \ .$$
If this fails, try
$$y_p = t^2\left(z_2 t^2 + z_1 t + z_0\right) \ ,$$
and so on.
Example: exponential $b(t)$:
$$y'' + a_1 y' + a_2 y = B e^{rt} \ .$$
Guess:
$$y_p = A t e^{rt}$$
with the same $r$, and look for solutions for $A$. The guess $y_p = A e^{rt}$ will not work when $r$ is a root of the characteristic equation, as in the following example:
$$y'' + 3y' - 4y = 2 e^{-4t} \ .$$
Guess:
$$y_p = A t e^{-4t}$$
$$y_p' = A e^{-4t} - 4 A t e^{-4t} = A e^{-4t}\left(1 - 4t\right)$$
$$y_p'' = -4 A e^{-4t}\left(1 - 4t\right) - 4 A e^{-4t} = A e^{-4t}\left(-8 + 16t\right) \ .$$
Plug in the guess:
$$y'' + 3y' - 4y = A e^{-4t}\left(-8 + 16t\right) + 3 A e^{-4t}\left(1 - 4t\right) - 4 A t e^{-4t} = A e^{-4t}\left(-8 + 16t + 3 - 12t - 4t\right) = -5 A e^{-4t} \ .$$
We need to solve
$$-5 A e^{-4t} = 2 e^{-4t} \ ,$$
so $A = -0.4$ and
$$y_p = -0.4\, t e^{-4t} \ .$$
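The algebra above can be double-checked numerically: plug $y_p = -0.4\,te^{-4t}$ back into the equation and verify the residual is (numerically) zero. A small sketch using central finite differences, so no symbolic tools are needed; the step size `h` is an illustrative choice.

```python
# Verify y_p = A*t*e^{rt} with A = -0.4, r = -4 for y'' + 3y' - 4y = 2e^{-4t}.
# y' and y'' are approximated by central finite differences.
import math

def y_p(t, A=-0.4, r=-4.0):
    return A * t * math.exp(r * t)

def residual(t, h=1e-5):
    """y'' + 3y' - 4y - 2e^{-4t}, which should be ~0 on the solution."""
    y0, y_plus, y_minus = y_p(t), y_p(t + h), y_p(t - h)
    d1 = (y_plus - y_minus) / (2 * h)           # central difference for y'
    d2 = (y_plus - 2 * y0 + y_minus) / h**2     # central difference for y''
    return d2 + 3 * d1 - 4 * y0 - 2 * math.exp(-4 * t)
```

The residual stays at the level of finite-difference round-off at every tested point, confirming $A = -0.4$.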
106
17 First order difference equations
$$y_{t+1} + a y_t = c \ .$$
As with differential equations, we wish to trace out a path for some variable $y$ over time, i.e. we seek $y(t)$. But now time is discrete, which gives rise to some peculiarities.
Define
$$\Delta y_t = y_{t+1} - y_t$$
(not the standard notation), which is like
$$\frac{\Delta y_t}{\Delta t} = \frac{y_{t+\Delta t} - y_t}{\Delta t} \ ,$$
where $\Delta t = 1$.
17.1 Backward iteration
1. $y_{t+1} = y_t + c$:
$$y_1 = y_0 + c$$
$$y_2 = y_1 + c = y_0 + c + c = y_0 + 2c$$
$$y_3 = y_2 + c = y_0 + 2c + c = y_0 + 3c$$
$$\vdots$$
$$y_t = y_0 + ct \ .$$
2. $a y_{t+1} - b y_t = 0$, $a \neq 0$. Then $y_{t+1} = k y_t$, where $k = b/a$:
$$y_1 = k y_0$$
$$y_2 = k y_1 = k^2 y_0$$
$$\vdots$$
$$y_t = k^t y_0 \ .$$
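The two closed forms can be confirmed by direct forward iteration. A minimal sketch; the numeric values are illustrative.

```python
# Check the backward-iteration formulas y_t = y_0 + c*t and y_t = k**t * y_0
# by iterating the difference equations forward from y_0.

def iterate(step, y0, t):
    y = y0
    for _ in range(t):
        y = step(y)
    return y

c, k = 3.0, 0.5
additive = iterate(lambda y: y + c, 2.0, 10)    # y_{t+1} = y_t + c
geometric = iterate(lambda y: k * y, 2.0, 10)   # y_{t+1} = k * y_t
```

Both iterations reproduce the closed forms exactly (the chosen values are exactly representable in floating point).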
17.2 General solution
$$y_{t+1} + a y_t = c \ ,$$
where $a \neq 0$. The solution method involves splitting the solution into two:
$$y(t) = y_c(t) + y_p(t) \ ,$$
where $y_p(t)$ is a particular solution and $y_c(t)$ is a complementary function.
• $y_c(t)$ solves the homogenous equation
$$y_{t+1} + a y_t = 0 \ .$$
Guess
$$y_t = A b^t \ ,$$
so that $y_{t+1} + a y_t = 0$ implies
$$A b^{t+1} + a A b^t = 0$$
$$b + a = 0$$
$$b = -a \ .$$
Thus
$$y_c(t) = A(-a)^t \ ,$$
where $a \neq 0$.
• $a \neq -1$: $y_p(t)$ solves the original equation for a stationary solution, $y_t = k$, a constant. This implies
$$k + a k = c$$
$$k = \frac{c}{1 + a} \ ,$$
so that
$$y_p = \frac{c}{1 + a}\ , \quad a \neq -1 \ .$$
• $a = -1$: Guess $y_p(t) = kt$. This implies
$$k(t + 1) - kt = c$$
$$k = c \ ,$$
so that
$$y_p = ct\ , \quad a = -1 \ .$$
The general solution is
$$y_t = y_c(t) + y_p(t) = \begin{cases} A(-a)^t + \frac{c}{1+a} & \text{if } a \neq -1 \\ A + ct & \text{if } a = -1 \end{cases} \ .$$
Given an initial condition $y(0) = y_0$:
• for $a \neq -1$,
$$y_0 = A + \frac{c}{1+a} \quad\Rightarrow\quad A = y_0 - \frac{c}{1+a}\ ;$$
• for $a = -1$,
$$y_0 = A \ .$$
The general solution is thus
$$y_t = \begin{cases} \left(y_0 - \frac{c}{1+a}\right)(-a)^t + \frac{c}{1+a} & \text{if } a \neq -1 \\ y_0 + ct & \text{if } a = -1 \end{cases} \ .$$
For $a \neq -1$ we have
$$y_t = y_0 (-a)^t + \left[1 - (-a)^t\right] \frac{c}{1+a} \ ,$$
which is a linear combination of the initial point and the stationary point $\frac{c}{1+a}$. If $a \in (-1, 1)$, then this process is stable; otherwise it is not. For $a = -1$ and $c \neq 0$ the process is never stable.
Example:
$$y_{t+1} - 5 y_t = 1 \ .$$
First, notice that $a \neq -1$ and $a \neq 0$. $y_c$ solves
$$y_{t+1} - 5 y_t = 0 \ .$$
Let $y_c(t) = A b^t$, so that
$$A b^{t+1} - 5 A b^t = 0$$
$$A b^t \left(b - 5\right) = 0$$
$$b = 5 \ ,$$
so that
$$y_c(t) = A \cdot 5^t \ .$$
$y_p = k$ solves
$$k - 5k = 1$$
$$k = -1/4 \ ,$$
so that $y_p = -1/4$ and
$$y_t = y_c(t) + y_p(t) = A \cdot 5^t - 1/4 \ .$$
Given $y_0 = 7/4$ we have $A = 2$, which completes the solution.
17.3 Dynamic stability
Given
$$y_t = \left(y_0 - \frac{c}{1+a}\right) b^t + \frac{c}{1+a} \ ,$$
the dynamics are governed by $b$ ($= -a$):
1. $b < 0$ gives rise to oscillating dynamics.
• $-1 < b < 0$: oscillations diminish over time. In the limit we converge on the stationary point $\frac{c}{1+a}$.
• $b = -1$: constant oscillations.
• $b < -1$: growing oscillations over time. The process is divergent.
2. $b = 0$ and $b = 1$: no oscillations, but these cases are degenerate.
• $b = 0$ means $a = 0$, so $y_t = c$.
• $b = 1$ means $a = -1$, so $y_t = y_0 + ct$.
3. $0 < b < 1$ gives monotone convergence to the stationary point $\frac{c}{1+a}$.
4. $b > 1$ gives divergent dynamics.
Only $|b| < 1$ gives convergent dynamics.
17.4 Application: cobweb model
This is an early model of agricultural markets. Farmers determine this year's supply based on the price that prevailed last year. Consumers determine demand based on current prices. Thus, three equations complete the description of this model:
$$\text{supply}: \quad q^s_{t+1} = s(p_t) = -\gamma + \delta p_t$$
$$\text{demand}: \quad q^d_{t+1} = d(p_{t+1}) = \alpha - \beta p_{t+1}$$
$$\text{equilibrium}: \quad q^s_{t+1} = q^d_{t+1} \ ,$$
where $\alpha, \beta, \gamma, \delta > 0$. Imposing equilibrium:
$$-\gamma + \delta p_t = \alpha - \beta p_{t+1}$$
$$p_{t+1} + \underbrace{\frac{\delta}{\beta}}_{a}\, p_t = \underbrace{\frac{\alpha + \gamma}{\beta}}_{c} \ .$$
The solution to this difference equation is
$$p_t = \left(p_0 - \frac{\alpha + \gamma}{\beta + \delta}\right)\left(-\frac{\delta}{\beta}\right)^t + \frac{\alpha + \gamma}{\beta + \delta} \ .$$
The process is convergent (stable) iff $|\delta| < |\beta|$. Since both are positive, we need $\delta < \beta$.
Interpretation: what are $\beta$ and $\delta$? These are the slopes of the demand and supply curves, respectively. It follows that if the slope of the supply curve is lower than that of the demand curve, then the process is convergent. I.e., as long as the farmers do not "overreact" to current prices next year, the market will converge on a happy stable equilibrium price and quantity. Conversely, as long as consumers are not "insensitive" to prices, then...
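The convergence condition $\delta < \beta$ can be illustrated by iterating the price equation directly. A minimal sketch with illustrative parameter values (not from the notes):

```python
# Cobweb price dynamics p_{t+1} = (alpha + gamma)/beta - (delta/beta)*p_t.
# Stability requires delta < beta; parameter values below are illustrative.

def cobweb_price(p0, alpha, beta, gamma, delta, t_max):
    p = p0
    for _ in range(t_max):
        p = (alpha + gamma) / beta - (delta / beta) * p
    return p

def p_star(alpha, beta, gamma, delta):
    """Stationary price (alpha + gamma)/(beta + delta)."""
    return (alpha + gamma) / (beta + delta)
```

With $\delta < \beta$ the oscillating path settles at $p^*$; with $\delta > \beta$ it diverges from it.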
17.5 Nonlinear difference equations
We will use only a qualitative/graphic approach and restrict attention to autonomous equations, in which $t$ is not explicit. Let
$$y_{t+1} = \varphi(y_t) \ .$$
Draw a phase diagram with $y_{t+1}$ on the vertical axis and $y_t$ on the horizontal axis, together with the 45 degree ray starting from the origin. For simplicity, $y > 0$. A stationary point satisfies $y = \varphi(y)$. But sometimes the stationary point is not stable. If $|\varphi'(y)| < 1$ at the stationary point, then the process is stable. More generally, as long as $|\varphi'(y_t)| < 1$ the process is stable, i.e. it will converge to some stationary point. When $|\varphi'(y_t)| \geq 1$ the process will diverge.
• Example: Galor and Zeira (1993), REStud.
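The stability criterion $|\varphi'(y)| < 1$ can be seen in a short fixed-point iteration. The map $\varphi(y) = \sqrt{y}$ is my illustrative choice (it is not from the notes): its stationary point $y = 1$ has $|\varphi'(1)| = 1/2 < 1$, so iterates converge to it from above and from below.

```python
# Fixed-point iteration y_{t+1} = phi(y_t) for the illustrative map
# phi(y) = sqrt(y); the stationary point y = 1 is stable since |phi'(1)| < 1.
import math

def iterate_map(phi, y0, t_max):
    y = y0
    for _ in range(t_max):
        y = phi(y)
    return y

from_above = iterate_map(math.sqrt, 9.0, 60)
from_below = iterate_map(math.sqrt, 0.1, 60)
```

Both starting points end up at the stationary point, exactly the cobweb-style convergence the phase diagram depicts.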
18 Phase diagrams with two variables (19.5)
We now analyze a system of two autonomous differential equations:
$$\dot x = F(x, y)$$
$$\dot y = G(x, y) \ .$$
First we find the $\dot x = 0$ and $\dot y = 0$ loci by setting
$$F(x, y) = 0$$
$$G(x, y) = 0 \ .$$
Apply the implicit function theorem separately to the above, which gives rise to two (separate) functions:
$$\dot x = 0:\ y = f_{\dot x = 0}(x)$$
$$\dot y = 0:\ y = g_{\dot y = 0}(x) \ ,$$
where
$$f' = -\frac{F_x}{F_y}\ , \quad g' = -\frac{G_x}{G_y} \ .$$
Now suppose that we have enough information about $F$ and $G$ to characterize $f$ and $g$. And suppose that $f$ and $g$ intersect, which is the interesting case. This gives rise to a stationary point $(x^*, y^*)$, at which both $x$ and $y$ are constant:
$$f_{\dot x = 0}(x^*) = g_{\dot y = 0}(x^*) = y^* \ .$$
There are two interesting cases, although you can characterize the other ones once you do this.
Case 1: dynamic stability.
$$F_x < 0\ , \quad F_y > 0\ , \quad G_x > 0\ , \quad G_y < 0 \ .$$
Both $f$ and $g$ are upward sloping. First, notice that
• at all points above $f_{\dot x = 0}$ we have $\dot x > 0$, and at all points below $f_{\dot x = 0}$ we have $\dot x < 0$ (since $F_y > 0$);
• at all points above $g_{\dot y = 0}$ we have $\dot y < 0$, and at all points below $g_{\dot y = 0}$ we have $\dot y > 0$ (since $G_y < 0$).
Given an intersection, this gives rise to four regions in the $(x, y)$ space:
1. Below $f_{\dot x = 0}$ and above $g_{\dot y = 0}$: $\dot x < 0$ and $\dot y < 0$.
2. Above $f_{\dot x = 0}$ and above $g_{\dot y = 0}$: $\dot x > 0$ and $\dot y < 0$.
3. Above $f_{\dot x = 0}$ and below $g_{\dot y = 0}$: $\dot x > 0$ and $\dot y > 0$.
4. Below $f_{\dot x = 0}$ and below $g_{\dot y = 0}$: $\dot x < 0$ and $\dot y > 0$.
This gives rise to a stable system. From any point in the $(x, y)$ space we converge to $(x^*, y^*)$.
Given the values that $\dot x$ and $\dot y$ take (given the direction in which the arrows point in the figure), we can draw trajectories. In this case, all trajectories will eventually arrive at the stationary point at the intersection of $\dot x = 0$ and $\dot y = 0$.
• Notice that at the point at which we cross the $\dot x = 0$ locus the trajectory is vertical. Similarly, at the point at which we cross the $\dot y = 0$ locus the trajectory is horizontal.
Case 2: saddle point.
$$F_x > 0\ , \quad F_y < 0\ , \quad G_x < 0\ , \quad G_y > 0 \ .$$
Both $f$ and $g$ are still upward sloping, but now the pattern is different, because $g_{\dot y = 0}$ crosses $f_{\dot x = 0}$ at a steeper slope. Notice that
• at all points above $f_{\dot x = 0}$ we have $\dot x < 0$, and at all points below $f_{\dot x = 0}$ we have $\dot x > 0$;
• at all points above $g_{\dot y = 0}$ we have $\dot y > 0$, and at all points below $g_{\dot y = 0}$ we have $\dot y < 0$.
Given an intersection, this gives rise to four regions in the $(x, y)$ space:
1. Below $f_{\dot x = 0}$ and above $g_{\dot y = 0}$: $\dot x > 0$ and $\dot y > 0$.
2. Above $f_{\dot x = 0}$ and above $g_{\dot y = 0}$: $\dot x < 0$ and $\dot y > 0$.
3. Above $f_{\dot x = 0}$ and below $g_{\dot y = 0}$: $\dot x < 0$ and $\dot y < 0$.
4. Below $f_{\dot x = 0}$ and below $g_{\dot y = 0}$: $\dot x > 0$ and $\dot y < 0$.
This gives rise to an unstable system. However, there is a stationary point at the intersection, $(x^*, y^*)$. But in order to converge to $(x^*, y^*)$ there are only two trajectories that bring us there: one from the region above $f_{\dot x = 0}$ and below $g_{\dot y = 0}$, the other from the region below $f_{\dot x = 0}$ and above $g_{\dot y = 0}$. These trajectories are called stable branches. If we are not on those trajectories, then we are on unstable branches. Note that being in either region does not ensure that we are on a stable branch, as the figure illustrates.
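The two cases can be summarized by the Jacobian $J = \bigl(\begin{smallmatrix} F_x & F_y \\ G_x & G_y \end{smallmatrix}\bigr)$ at the stationary point: $\det J < 0$ gives real eigenvalues of opposite sign (a saddle), while $\det J > 0$ with $\operatorname{tr} J < 0$ gives eigenvalues with negative real parts (stability). A minimal sketch with illustrative magnitudes (the sign pattern alone does not settle Case 2; the determinant's sign encodes the slope condition, i.e. which locus crosses more steeply):

```python
# Trace/determinant classification of the stationary point of
# x' = F(x,y), y' = G(x,y) from the Jacobian entries (illustrative magnitudes).

def classify_point(Fx, Fy, Gx, Gy):
    trace = Fx + Gy
    det = Fx * Gy - Fy * Gx
    if det < 0:
        return "saddle"                       # real eigenvalues of opposite sign
    return "stable" if trace < 0 else "unstable"

case1 = classify_point(-2.0, 1.0, 1.0, -2.0)  # Case 1 sign pattern
case2 = classify_point(1.0, -2.0, -1.0, 1.0)  # Case 2 pattern, steeper g locus
```

With the Case 2 numbers, the slopes are $f' = -F_x/F_y = 0.5$ and $g' = -G_x/G_y = 1$, so $g$ is indeed the steeper locus, and the classification is a saddle.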
19 Optimal control
Like in static optimization problems, we want to maximize (or minimize) an objective function. The difference is that the objective is the sum of a path of values at every instant in time; therefore, we must choose an entire path as a maximizer.¹ The problem is generally stated as follows:

Choose $u(t)$ to maximize
$$\int_0^T F(y, u, t)\, dt$$
s.t.
$$\text{Law of motion}: \quad \dot y = g(y, u, t)$$
$$\text{Initial condition}: \quad y(0) = y_0$$
$$\text{Transversality condition}: \quad y(T)\, e^{-rT} \geq 0 \ ,$$

¹ The theory behind this relies on the "calculus of variations", which was first developed to compute trajectories of missiles (to the moon and elsewhere) in the U.S.S.R.
where $r$ is some average discount rate that is relevant to the problem. To this we sometimes need to add
$$\text{Terminal condition}: \quad y(T) = y_T$$
$$\text{Constraints on control}: \quad u(t) \in U \ .$$
The function $y(t)$ is called the state variable. The function $u(t)$ is called the control variable. It is useful to think of the state as a stock (like capital) and the control as a flow (like investment or consumption). Usually we will have $F, g \in C^1$, but in principle we could do without differentiability with respect to $u$. I.e., we only need that the functions $F$ and $g$ are continuously differentiable with respect to $y$ and $t$.
The transversality condition immediately implies that $y(T) \geq 0$, but also something more. It tells you that if $y(T) > 0$, then its shadow value at the end of the problem must be zero. This will become clearer below, when we discuss the Lagrangian approach.
• If there is no law of motion for $y$, then we can solve the problem separately at every instant as a static problem. The value would just be the sum of those static values.
• There is no uncertainty here. To deal with uncertainty, wait for your next course in math.
• To ease notation we will omit time subscripts when there is no confusion.
Example: the saving/investment problem for individuals.
1. Output: $Y = F(K, L)$.
2. Investment/consumption: $I = Y - C = F(K, L) - C$.
3. Capital accumulation: $\dot K = I - \delta K$.
We want to maximize the present value of instantaneous utility from now (at $t = 0$) till we die (at some distant time $T$). The problem is stated as:

Choose $C(t)$ to maximize
$$\int_0^T e^{-\rho t}\, U[C(t)]\, dt$$
s.t.
$$\dot K = I - \delta K$$
$$K(0) = K_0$$
$$K(T) = K_T \ .$$
19.1 Pontryagin's maximum principle and the Hamiltonian function
Define the Hamiltonian function:
$$H(y, u, t, \lambda) = F(y, u, t) + \lambda g(y, u, t) \ .$$
The function $\lambda(t)$ is called the costate function and also has a law of motion. Finding $\lambda$ is part of the solution. The FONCs of this problem ensure that we maximize $H$ at every point in time, and as a whole. If $u^*$ is a maximizing plan then
$$\text{(i)}: \quad H(y, u^*, t, \lambda) \geq H(y, u, t, \lambda)\ \ \forall u \in U \quad\text{or}\quad \frac{\partial H}{\partial u} = 0 \ \text{ if } F, g \in C^1$$
$$\text{State equation (ii)}: \quad \frac{\partial H}{\partial \lambda} = \dot y \quad\Rightarrow\quad \dot y = g(y, u, t)$$
$$\text{Costate equation (iii)}: \quad \frac{\partial H}{\partial y} = -\dot\lambda \quad\Rightarrow\quad \dot\lambda + F_y + \lambda g_y = 0$$
$$\text{Transversality condition (iv)}: \quad \lambda(T) = 0 \ .$$
Conditions (ii)+(iii) are a system of first order differential equations that can be solved explicitly if we have functional forms and two conditions: $y(0) = y_0$ and $\lambda(T) = 0$. Note that $\lambda$ has the same interpretation as the Lagrange multiplier: it is the shadow cost of the constraint at any instant.
We adopt the convention that $y(0) = y_0$ is always given. There are a few ways to introduce terminal conditions, which gives the following taxonomy:
1. When $T$ is fixed,
(a) $\lambda(T) = 0$, $y(T)$ free.
(b) $y(T) = y_T$, $\lambda(T)$ free.
(c) $y(T) \geq y_{\min}$ (or $y(T) \leq y_{\max}$), $\lambda(T)$ free. Add the following complementary slackness conditions:
$$y(T) \geq y_{\min}\ , \quad \lambda(T) \geq 0\ , \quad \lambda(T)\left(y(T) - y_{\min}\right) = 0 \ .$$
2. $T$ is free and $y(T) = y_T$. Add $H(T) = 0$.
3. $T \leq T_{\max}$ (or $T \geq T_{\min}$) and $y(T) = y_T$. Add the following complementary slackness conditions:
$$H(T) \geq 0\ , \quad T \leq T_{\max}\ , \quad H(T)\left(T - T_{\max}\right) = 0 \ .$$
19.2 The Lagrangian approach
The problem is:

Choose $u(t)$ to maximize
$$\int_0^T F(y, u, t)\, dt$$
s.t.
$$\dot y = g(y, u, t)$$
$$y(T)\, e^{-rT} \geq 0$$
$$y(0) = y_0 \ .$$

You can think of $\dot y = g(y, u, t)$ as an inequality $\dot y \leq g(y, u, t)$. We can write this up as a Lagrangian. For this we need Lagrange multipliers for the law of motion constraint at every point in time, as well as an additional multiplier for the transversality condition:
$$\mathcal{L} = \int_0^T F(y, u, t)\, dt + \int_0^T \lambda(t)\left[g(y, u, t) - \dot y\right] dt + \theta\, y(T)\, e^{-rT}$$
$$= \int_0^T \left[F(y, u, t) + \lambda(t)\, g(y, u, t)\right] dt - \int_0^T \lambda(t)\, \dot y(t)\, dt + \theta\, y(T)\, e^{-rT} \ .$$
Using integration by parts we have
$$-\int \lambda \dot y\, dt = -\lambda y + \int \dot\lambda\, y\, dt \ ,$$
so that
$$\mathcal{L} = \int_0^T \left[F(y, u, t) + \lambda(t)\, g(y, u, t)\right] dt - \left[\lambda(t)\, y(t)\right]_0^T + \int_0^T \dot\lambda(t)\, y(t)\, dt + \theta\, y(T)\, e^{-rT}$$
$$= \int_0^T \left[F(y, u, t) + \lambda(t)\, g(y, u, t)\right] dt - \lambda(T)\, y(T) + \lambda(0)\, y(0) + \int_0^T \dot\lambda(t)\, y(t)\, dt + \theta\, y(T)\, e^{-rT} \ .$$
Before writing down the FONCs for the Lagrangian, recall that
$$H(y, u, t, \lambda) = F(y, u, t) + \lambda(t)\, g(y, u, t) \ .$$
The FONCs for the Lagrangian are
$$\text{(i)}: \quad \mathcal{L}_u = F_u + \lambda g_u = 0$$
$$\text{(ii)}: \quad \mathcal{L}_\lambda = g - \dot y = 0 \ \text{ (in the original Lagrangian)}$$
$$\text{(iii)}: \quad \mathcal{L}_y = F_y + \lambda g_y + \dot\lambda = 0 \ .$$
These imply (are consistent with)
$$\text{(i)}: \quad H_u = F_u + \lambda g_u = 0$$
$$\text{(ii)}: \quad H_\lambda = g = \dot y$$
$$\text{(iii)}: \quad H_y = F_y + \lambda g_y = -\dot\lambda \ .$$
The requirement that $y(0) = y_0$ can also be captured in the usual way, as can $y(T) = y_T$, if it is required. The transversality condition is captured by the complementary slackness conditions
$$y(T)\, e^{-rT} \geq 0\ , \quad \theta \geq 0\ , \quad \theta\, y(T)\, e^{-rT} = 0 \ .$$
We see here that if $y(T)\, e^{-rT} > 0$, then its multiplier, $\theta$, must be zero.
19.3 Autonomous problems
In these problems $t$ is not an explicit argument:

Choose $u$ to maximize $\int_0^T F(y, u)\, dt$ s.t. $\dot y = g(y, u)$,

plus boundary conditions. The Hamiltonian is thus
$$H(y, u, \lambda) = F(y, u) + \lambda g(y, u) \ .$$
These problems are easier to solve and are amenable to analysis by phase diagrams.
Example: the cake eating problem
Objective: You want to eat your cake in an optimal way, maximizing your satisfaction from eating it, starting now ($t = 0$) and finishing before bedtime, at $T$.
• The cake starts at size $S_0$.
• When you eat cake, the size diminishes by the amount that you ate: $\dot S = -C$.
• You like cake, but less so when you eat more: $U'(C) > 0$, $U''(C) < 0$.
The problem is:

Choose $C$ to maximize $\int_0^T U(C)\, dt$ s.t.
$$\dot S = -C$$
$$S(0) = S_0$$
$$S(T) \geq 0 \ .$$

This is an autonomous problem. The Hamiltonian is
$$H(C, S, \lambda) = U(C) + \lambda\left[-C\right] \ .$$
FONCs:
$$\text{(i)}: \quad \frac{\partial H}{\partial C} = U'(C) - \lambda = 0$$
$$\text{(ii)}: \quad \frac{\partial H}{\partial \lambda} = -C = \dot S$$
$$\text{(iii)}: \quad \frac{\partial H}{\partial S} = 0 = -\dot\lambda$$
$$\text{(iv)}: \quad S(T) \geq 0\ , \quad \lambda(T) \geq 0\ , \quad S(T)\, \lambda(T) = 0 \ .$$
From (iii) it follows that $\lambda$ is constant. From (i) we have $U'(C) = \lambda$, and since $\lambda$ is constant, $C$ is constant too. Then given a constant $C$ we get from (ii) that
$$S = A - Ct \ ,$$
and given $S(0) = S_0$ we have
$$S = S_0 - Ct \ .$$
But we still do not know what $C$ is, except that it is constant. So we solve for the complementary slackness conditions, i.e., will we leave leftovers?
Suppose $\lambda > 0$. Then $S(T) = 0$. Therefore
$$0 = S_0 - CT \ ,$$
which gives
$$C = \frac{S_0}{T} \ .$$
Suppose $\lambda = 0$. Then it is possible to have $S(T) > 0$. But then we get $U' = 0$ – a contradiction.
The solution is thus
$$C(t) = S_0 / T$$
$$S(t) = S_0 - (S_0 / T)\, t$$
$$\lambda(t) = U'(S_0 / T) \ .$$
If we allowed a flat part in the utility function after some satiation point, then we could have a solution with leftovers $S(T) > 0$. In that case we would have more than one optimal path: all would be global maxima, because with one flat part $U$ is still quasiconcave.
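The constant plan $C = S_0/T$ can be checked against other feasible plans: with strictly concave utility, spreading consumption evenly dominates any uneven plan that also exhausts the cake. A discretized sketch with $U = \ln$ and illustrative values (the notes keep $U$ general):

```python
# Compare the constant plan C = S0/T with an uneven feasible plan that also
# exhausts the cake; with concave U (here ln) the even plan yields more utility.
import math

S0, T, n = 12.0, 4.0, 400                   # n sub-intervals of length dt = T/n
dt = T / n

def total_utility(plan):
    """Discretized integral of U(C) dt with U = ln."""
    return sum(math.log(c) * dt for c in plan)

even = [S0 / T] * n                                             # the FONC plan
uneven = [1.5 * S0 / T] * (n // 2) + [0.5 * S0 / T] * (n // 2)  # same total S0
```

Both plans consume exactly $S_0$ in total, yet the even plan's utility is strictly higher, which is the concavity argument behind the constant-$C$ solution.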
19.3.1 Anecdote: the value of the Hamiltonian is constant in autonomous problems
We demonstrate that on the optimal path the value of the Hamiltonian function is constant.
$$H(y, u, t, \lambda) = F(y, u, t) + \lambda g(y, u, t) \ .$$
The derivative with respect to time is
$$\frac{dH}{dt} = H_y \dot y + H_u \dot u + H_\lambda \dot\lambda + H_t \ .$$
The FONCs were
$$H_u = 0\ , \quad H_y = -\dot\lambda\ , \quad H_\lambda = \dot y \ .$$
Plugging these into $dH/dt$ gives
$$\frac{dH}{dt} = -\dot\lambda \dot y + 0 + \dot y \dot\lambda + H_t = \frac{\partial H}{\partial t} \ .$$
This is in fact a consequence of the envelope theorem, although not in a straightforward way. If time is not explicit in the problem, then $\partial H / \partial t = 0$, which implies the statement above.
19.4 Current value Hamiltonian
Many problems in economics involve discounting (as we saw above), so the problem is not autonomous. However, usually the only place where time is explicit is in the discount factor:
$$\int_0^T F(y, u, t)\, dt = \int_0^T e^{-rt}\, G(y, u)\, dt \ .$$
You can try to solve those problems as is, but an easier way (especially if the costate is of no particular interest) is to use the current value Hamiltonian:
$$\widetilde{H} = e^{rt} H = G(y, u) + \varphi\, g(y, u) \ ,$$
where
$$\varphi = \lambda e^{rt} \ .$$
A maximizing plan $u^*$ satisfies the following FONCs:
$$\text{(i)}: \quad \widetilde{H}(y, u^*, \varphi) \geq \widetilde{H}(y, u, \varphi)\ \ \forall u \in U \quad\text{or}\quad \frac{\partial \widetilde{H}}{\partial u} = 0 \ \text{ if } \widetilde{H}, g \in C^1$$
$$\text{State equation (ii)}: \quad \frac{\partial \widetilde{H}}{\partial \varphi} = \dot y \quad\Rightarrow\quad \dot y = g(y, u)$$
$$\text{Costate equation (iii)}: \quad \frac{\partial \widetilde{H}}{\partial y} = -\dot\varphi + r\varphi \quad\Rightarrow\quad \dot\varphi - r\varphi + G_y + \varphi g_y = 0$$
$$\text{Transversality condition (iv)}: \quad \varphi(T) = 0 \ \text{ or } \ \widetilde{H}(T) = 0 \ \text{ or other.}$$
Example: the cake eating problem with discounting
We now need to choose a functional form for the instantaneous utility function. The problem is:

Choose $C$ to maximize $\int_0^T e^{-rt} \ln(C)\, dt$ s.t.
$$\dot S = -C$$
$$S(0) = S_0$$
$$S(T) \geq 0 \ .$$

We write the current value Hamiltonian
$$\widetilde{H} = \ln C + \varphi\left[-C\right] \ .$$
FONCs:
$$\text{(i)}: \quad \frac{\partial \widetilde{H}}{\partial C} = \frac{1}{C} - \varphi = 0$$
$$\text{(ii)}: \quad \frac{\partial \widetilde{H}}{\partial \varphi} = -C = \dot S$$
$$\text{(iii)}: \quad \frac{\partial \widetilde{H}}{\partial S} = 0 = -\dot\varphi + r\varphi$$
$$\text{(iv)}: \quad S(T) \geq 0\ , \quad \varphi(T) \geq 0\ , \quad S(T)\, \varphi(T) = 0 \ .$$
From (iii) we have
$$\frac{\dot\varphi}{\varphi} = r \ ,$$
hence
$$\varphi = B e^{rt}$$
for some $B$. From (i) we have
$$C = \frac{1}{\varphi} = \frac{1}{B}\, e^{-rt} \ .$$
From (ii) we have
$$\dot S = -C$$
$$\int_0^t \dot S\, d\tau = \int_0^t -C\, d\tau$$
$$S(t) = A + \int_0^t -C\, d\tau \ ,$$
which, together with $S(0) = S_0$, implies
$$S(t) = S_0 - \int_0^t C\, d\tau \ ,$$
which makes sense. Now, using $C = B^{-1} e^{-rt}$ we get
$$S(t) = S_0 - \int_0^t B^{-1} e^{-r\tau}\, d\tau = S_0 - B^{-1}\left[-\frac{1}{r} e^{-r\tau}\right]_0^t = S_0 - B^{-1}\left[-\frac{1}{r} e^{-rt} + \frac{1}{r}\right] = S_0 - \frac{1}{rB}\left(1 - e^{-rt}\right) \ .$$
Suppose $\varphi(T) = 0$. Then $B = 0$ and $C(T) = \infty$ – not possible. So $\varphi(T) > 0$, which implies $S(T) = 0$. Therefore
$$0 = S_0 - \frac{1}{rB}\left(1 - e^{-rT}\right)$$
$$B = \frac{1 - e^{-rT}}{r S_0} \ .$$
Therefore
$$C = \frac{r S_0}{1 - e^{-rT}}\, e^{-rt} \ ,$$
which is decreasing, and
$$\varphi = \frac{1 - e^{-rT}}{r S_0}\, e^{rt} \ ,$$
which is increasing. And finally
$$S(t) = S_0\left[1 - \frac{1 - e^{-rt}}{1 - e^{-rT}}\right] \ .$$
This completes the characterization of the problem.
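The closed forms can be sanity-checked numerically: cumulative consumption over $[0, T]$ should equal $S_0$, and the stock should hit exactly zero at $T$. A minimal sketch with illustrative values:

```python
# Check the discounted cake-eating solution C(t) = r*S0/(1 - e^{-rT}) * e^{-rt}:
# total consumption over [0, T] equals S0 and the stock S(t) hits zero at T.
import math

S0, T, r = 10.0, 5.0, 0.1

def C(t):
    return r * S0 / (1 - math.exp(-r * T)) * math.exp(-r * t)

def S(t):
    return S0 * (1 - (1 - math.exp(-r * t)) / (1 - math.exp(-r * T)))

n = 100_000                       # midpoint-rule integral of C over [0, T]
eaten = sum(C((i + 0.5) * T / n) * (T / n) for i in range(n))
```

The integral of the consumption path reproduces $S_0$, the path is decreasing, and the stock is exhausted exactly at bedtime.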
19.5 Infinite time horizon
When the problem's horizon is infinite, i.e. it never ends, we need to modify the transversality condition. The conditions are
$$\lim_{T\to\infty} \lambda(T)\, y(T) = 0$$
for the present value Hamiltonian, and
$$\lim_{T\to\infty} \varphi(T)\, e^{-rT}\, y(T) = 0$$
for the current value Hamiltonian.
19.6 The neoclassical growth model
1. Preferences: $u(C)$, $u' > 0$, $u'' < 0$. Inada conditions: $u(0) = 0$, $u'(0) = \infty$, $u'(C) \to 0$ as $C \to \infty$.
2. Aggregate production function: $Y = F(K, L)$, CRS, $F_K > 0$, $F_{KK} < 0$. Given this we can write the per-worker version $y = f(k)$, where $f' > 0$, $f'' < 0$ and $y = Y/L$, $k = K/L$. Inada conditions: $f(0) = 0$, $f'(0) = \infty$, $f'(k) \to 0$ as $k \to \infty$.
3. Capital accumulation: $\dot K = I - \delta K = Y - C - \delta K$. As we saw in the Solow model, we can write this in per-worker terms $\dot k = f(k) - c - (n + \delta) k$, where $n$ is the constant growth rate of the labor force.
4. There cannot be negative consumption, but also, once output is converted into capital, we cannot eat it. This can be summarized in $0 \leq C \leq F(K, L)$. This is an example of a restriction on the control variable.
5. A social planner chooses a consumption plan to maximize everyone's welfare, with equal weights. The objective function is
$$W = \int_0^\infty L_0 e^{nt} e^{-\rho t}\, U(c)\, dt = \int_0^\infty e^{-rt}\, U(c)\, dt \ ,$$
where we normalize $L_0 = 1$ and we set $r = \rho - n > 0$, which ensures integrability. Notice that everyone gets the average level of consumption $c = C/L$.
The problem is:

Choose $c$ to maximize $W$ s.t.
$$\dot k = f(k) - c - (n + \delta) k$$
$$0 \leq c \leq f(k)$$
$$k(0) = k_0 \ .$$

Write down the current value Hamiltonian
$$H = u(c) + \varphi\left[f(k) - c - (n + \delta) k\right] \ .$$
FONCs:
$$H_c = u'(c) - \varphi = 0$$
$$H_\varphi = f(k) - c - (n + \delta) k = \dot k$$
$$H_k = \varphi\left[f'(k) - (n + \delta)\right] = r\varphi - \dot\varphi$$
$$\lim_{T\to\infty} \varphi(T)\, e^{-rT}\, k(T) = 0 \ .$$
Ignore for now $0 \leq c \leq f(k)$. The transversality condition here is a sufficient condition for a maximum, although in general this specific condition is not necessary. If this were a present value Hamiltonian the same transversality condition would be $\lim_{T\to\infty} \lambda(T)\, k(T) = 0$, which just means that the value of an additional unit of capital in the limit is zero.
From $H_c$ we have $u'(c) = \varphi$. From $H_k$ we have
$$\frac{\dot\varphi}{\varphi} = -\left[f'(k) - (n + \delta + r)\right] \ .$$
We want to characterize the solution qualitatively using a phase diagram. To do this, we need two equations: one for the state, $k$, and one for the control, $c$. Notice that
$$\dot\varphi = u''(c)\, \dot c \ ,$$
so
$$\frac{u''(c)\, \dot c}{u'(c)} = -\left[f'(k) - (n + \delta + r)\right] \ .$$
Rearrange to get
$$\frac{\dot c}{c} = -\frac{u'(c)}{c\, u''(c)}\left[f'(k) - (n + \delta + r)\right] \ .$$
Notice that $-\frac{c\, u''(c)}{u'(c)}$ is the coefficient of relative risk aversion. Let
$$u(c) = \frac{c^{1-\sigma}}{1 - \sigma} \ .$$
This is a class of constant relative risk aversion, or CRRA, utility functions, with coefficient of RRA $= \sigma$.
Eventually, our two equations are
$$\dot k = f(k) - c - (n + \delta) k$$
$$\frac{\dot c}{c} = \frac{1}{\sigma}\left[f'(k) - (n + \delta + r)\right] \ .$$
From this we derive
$$\dot k = 0: \quad c = f(k) - (n + \delta) k$$
$$\dot c = 0: \quad f'(k) = n + \delta + r \ .$$
The $\dot c = 0$ locus is a vertical line in the $(k, c)$ space. Given the Inada conditions and diminishing returns to capital, the $\dot k = 0$ locus is hump shaped. Since $r > 0$, the peak of the hump is to the right of the vertical $\dot c = 0$ locus.
The phase diagram features a saddle point, with two stable branches. If $k$ is to the right of the $\dot c = 0$ locus, then $\dot c < 0$, and vice versa for $k$ to the left of the $\dot c = 0$ locus. For $c$ above the $\dot k = 0$ locus we have $\dot k < 0$, and vice versa for $c$ below the $\dot k = 0$ locus. See textbook for figure.
Define the stationary point as $(k^*, c^*)$. Suppose that we start with $k_0 < k^*$. Then the optimal path for consumption must be on the stable branch, i.e. $c_0$ is on the stable branch, and will eventually go to $c^*$.
The reason is that any other choice is not optimal. Higher consumption will eventually lead to depletion of the capital stock, which eventually leads to no output and therefore no consumption (U.S.A.). Too little consumption will lead first to an increase in the capital stock and an increase in output, but eventually this is not sustainable, as the plan requires more and more consumption forgone to keep up with effective depreciation ($n + \delta$), and eventually leads to zero consumption as well (U.S.S.R.).
One can do more than just analyze the phase diagram. First, given functional forms we can compute the exact paths for all dynamic variables. Second, we could linearize the system of differential equations (a first order Taylor expansion) around the saddle point to compute the dynamics around that point.
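The stationary point can be computed in closed form once a functional form is chosen. A minimal sketch assuming a Cobb-Douglas technology $f(k) = k^\alpha$ and illustrative parameters (the notes keep $f$ general): the $\dot c = 0$ locus pins down $k^*$ from $f'(k^*) = n + \delta + r$, and the $\dot k = 0$ locus then gives $c^*$.

```python
# Steady state of the neoclassical growth model under an assumed Cobb-Douglas
# technology f(k) = k**alpha; parameter values are illustrative.
# c-dot = 0  =>  f'(k*) = n + delta + r;  k-dot = 0  =>  c* = f(k*) - (n+delta)*k*.

alpha, n, delta, r = 0.33, 0.01, 0.05, 0.03

# f'(k) = alpha*k**(alpha-1) = n + delta + r
k_star = (alpha / (n + delta + r)) ** (1 / (1 - alpha))
c_star = k_star**alpha - (n + delta) * k_star

# Peak of the k-dot = 0 hump solves f'(k) = n + delta (the "golden rule" level);
# with r > 0 it lies to the right of k*, as the phase-diagram discussion claims.
k_gold = (alpha / (n + delta)) ** (1 / (1 - alpha))
```

The check `k_star < k_gold` is exactly the statement in the text that the peak of the hump lies to the right of the vertical $\dot c = 0$ locus.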