Optimization Techniques in Economics
(mainly optimization)
Medvegyev Péter
2013
Theorem
If x0 is a local minimum of f, then
1. 0 = f′(x0) = (∂f/∂x1(x0), …, ∂f/∂xn(x0));
2. 0 ≤ h^T f″(x0) h = ∑i ∑j ∂²f/∂xi∂xj (x0) hi hj for every h.
Theorem
If f is twice differentiable at x0, f′(x0) = 0, and h^T f″(x0) h > 0 for
every h ≠ 0, then x0 is a strict local minimum.
Lemma
If x^T Hx is a positive definite quadratic form on Rⁿ then there is an α > 0
such that
x^T Hx ≥ α‖x‖².
As f is twice differentiable at x0,
f(x0 + h) − f(x0) = ⟨f′(x0), h⟩ + (1/2) h^T f″(x0) h + o(‖h‖²).
Let H = f″(x0) and let α > 0 be the constant above. For any ε < α/2
one has, for ‖h‖ small enough, that
|o(‖h‖²)| ≤ ε‖h‖².
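The two theorems above give a practical recipe: find a stationary point, then test the Hessian for positive definiteness, e.g. by Sylvester's criterion. A minimal sketch (the test matrices and function names are illustrative, not from the text):

```python
# Sketch: second-order sufficient condition via Sylvester's criterion.
# For f(x, y) = x^2 + x*y + 2*y^2 the gradient vanishes at the origin and
# the Hessian is H = [[2, 1], [1, 4]]; if H is positive definite, the
# origin is a strict local minimum.

def det(m):
    """Determinant by Laplace expansion (fine for small matrices)."""
    n = len(m)
    if n == 1:
        return m[0][0]
    total = 0.0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in m[1:]]
        total += (-1) ** j * m[0][j] * det(minor)
    return total

def is_positive_definite(h):
    """Sylvester's criterion: all leading principal minors are positive."""
    n = len(h)
    return all(det([row[:k] for row in h[:k]]) > 0 for k in range(1, n + 1))

H = [[2.0, 1.0], [1.0, 4.0]]
print(is_positive_definite(H))  # True -> strict local minimum
```

For an indefinite Hessian such as [[1, 3], [3, 1]] the same test returns False, so the second-order sufficient condition fails there.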
Theorem
Assume that ϕk, k = 0, 1, 2, …, p are differentiable and assume that ϕ0
has a local minimum at x0 on the set
X ≐ {x | ϕk(x) = 0, k = 1, …, p}.
Then there is a nonzero vector
l = (λ0, λ1, …, λp) ∈ R^{p+1}
such that
∑_{k=0}^{p} λk ϕk′(x0) = 0.
Theorem
Assume that we are in the regular case and ϕk, k = 0, 1, …, p are twice
differentiable at x0, where x0 is a local minimum of the constrained
optimization problem. Then
⟨ϕk′(x0), h⟩ = 0, k = 1, 2, …, p ⟹ h^T L″xx(x0, λ) h ≥ 0.
Recall that L″xx(x0, λ) is the second derivative of the Lagrange function
with respect to x at x0.
Theorem
Assume L′x(x0, λ) = 0, that is, let x0 be a stationary point of the
Lagrangian. Assume that we are in the regular case and
ϕk, k = 0, 1, …, p are twice differentiable at x0. If
h^T L″xx(x0, λ) h > 0 whenever ⟨ϕk′(x0), h⟩ = 0, k = 1, 2, …, p, and h ≠ 0,
then x0 is a strict local minimum: for feasible x close to x0,
ϕ0(x) = L(x, λ) ≥ L(x0, λ) = ϕ0(x0).
Using the condition on the existence of the second derivative, the
stationarity condition L′x(x0, λ) = 0, and that, as x0 is a feasible solution,
L(x0, λ) = ϕ0(x0), one must show that
0 ≤ (1/2) (x − x0)^T L″xx(x0, λ) (x − x0) + o(‖x − x0‖²),
where feasible x satisfy, with F(x) ≐ (ϕ1(x), …, ϕp(x)),
0 = F(x) − F(x0) = F′(x0)(x − x0) + o(‖x − x0‖).
Obviously
‖F′(x0)(x − x0) + o(‖x − x0‖)‖² =
= ‖F′(x0)(x − x0)‖² + ‖o(‖x − x0‖)‖² + 2⟨F′(x0)(x − x0), o(‖x − x0‖)⟩ =
= ‖F′(x0)(x − x0)‖² + o(‖x − x0‖²).
By assumption
F′(x0) h = 0 ⟹ h^T L″xx(x0, λ) h > 0,
so it suffices that
Q(x − x0) + o(‖x − x0‖²) ≥ 0,
where Q denotes the quadratic form above.
Problem
Let A be symmetric. When does x^T Ax ≥ 0 hold for every x with Bx = 0? And when
does x^T Ax > 0 hold for every Bx = 0, x ≠ 0?
We can assume that B is fat and of full rank. Solve the homogeneous
equation as x2 = Dx1, where x1 is the vector of free variables. If the
number of equations is p then the dimension of x1 is n − p. Hence
ker(B) = {x = Py | y ∈ R^{n−p}}, where P = ( I ; D ) stacks I over D,
as size(P) = n × (n − p). On ker(B) the quadratic form becomes
x^T Ax = y^T Cy with C ≐ P^T AP.
Medvegyev (CEU) Mathematics for Economists 2013 13 / 252
Constrained positive definite matrices
One can use the Jacobi–Sylvester criteria on C. To solve the sufficiency
problem one should check the determinants of the n − p leading principal
minors of C.
But to do this one should solve the equation Bx = 0 and construct the matrix
C. We want to avoid this, and instead we will calculate the LAST n − p leading
principal determinants of the bordered matrix
H ≐ ( 0  B ; B^T  A ).
Example
Solve x1² + x2² → min, x1 + x2 = 1.
det(H) = −4. One should check 2 − 1 = 1 determinant, and for a minimum the
sign must be (−1)^p = −1, as there is just one constraint, p = 1. Hence
we have a local minimum.
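The bordered matrix of the example can be checked directly. A minimal sketch (`det3` is an illustrative helper, not from the text):

```python
# Sketch: the bordered Hessian test for min x1^2 + x2^2 s.t. x1 + x2 = 1.
# Border B = (1, 1) (constraint gradient), A = diag(2, 2) (Hessian of the
# Lagrangian). With n = 2 variables and p = 1 constraint we check the last
# n - p = 1 leading principal determinant, i.e. det(H) itself.

def det3(m):
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

H = [[0.0, 1.0, 1.0],
     [1.0, 2.0, 0.0],
     [1.0, 0.0, 2.0]]
d = det3(H)
p = 1
print(d)                  # -4.0
print(d * (-1) ** p > 0)  # True: the sign is (-1)^p, so a local minimum
```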
Example
Solve x1² + x2² → min, x1 + x2 = 1, 2x1 + 3x2 = 4.
By Cramer's rule applied to the constraints,
x1 = |1 1; 4 3| / |1 1; 2 3| = −1,   x2 = |1 1; 2 4| / |1 1; 2 3| = 2.
Example
Solve x1 + x2 → min/max, x1² + x2² = 1.
The Lagrangian is L = x1 + x2 + λ(x1² + x2² − 1).
If x1 = x2 = 1/√2 then λ = −1/√2 and
A = ( −√2 0 ; 0 −√2 ),  B = (√2, √2),
H = ( 0 √2 √2 ; √2 −√2 0 ; √2 0 −√2 ).
det(H) = 4√2 > 0. As there are two variables, this is the sign condition
(−1)ⁿ = (−1)² for a local maximum, so (1/√2, 1/√2) is a local maximum.
If x1 = x2 = −1/√2 then λ = 1/√2 and
A = ( √2 0 ; 0 √2 ),  B = (−√2, −√2),
H = ( 0 −√2 −√2 ; −√2 √2 0 ; −√2 0 √2 ).
det(H) = −4√2 < 0. As p = 1, this is the sign condition (−1)^p for a
minimum, so (−1/√2, −1/√2) is a local minimum.
The Lagrangian is
L(x1, x2, λ) = x1x2 + λ(x1² + x2² − 1).
The necessary conditions are
x2 + 2λx1 = 0,  x1 + 2λx2 = 0.
If x1 = 0 then x2 = 0, which is not a solution. So x1 ≠ 0, x2 ≠ 0.
x2/x1 = −2λ = x1/x2,  hence x1² = x2².
Hence the four solutions are
(1/√2, 1/√2), (−1/√2, −1/√2), (1/√2, −1/√2), (−1/√2, 1/√2).
Constrained positive definite matrices
A = ( 2λ 1 ; 1 2λ ),  B = (2x1, 2x2).
Example
Solve the problem x1² + x2² + x3² → min, x1 + x2 + x3 = 1.
The Lagrangian is L = x1² + x2² + x3² + λ(x1 + x2 + x3 − 1).
We must calculate 3 − 1 = 2 determinants. Deleting the last row and
the last column:
det( 0 1 1 ; 1 2 0 ; 1 0 2 ) = −4.
As both determinants are negative and p = 1, the sign for the minimum is
(−1)^p = −1; this is a local minimum.
Example
Solve the problem x1x2x3 → max, x1² + x2² + x3² = 1.
A maximal solution is x1 = x2 = x3 = 1/√3; in this case
λ = −x1x2/(2x3) = −1/(2√3) and
H = ( 0 2/√3 2/√3 2/√3 ; 2/√3 −1/√3 1/√3 1/√3 ;
      2/√3 1/√3 −1/√3 1/√3 ; 2/√3 1/√3 1/√3 −1/√3 ).
The determinant is −16/3. As there are three variables, this is the right sign
condition for the maximum, as −1 = (−1)³.
Now check x1 = x2 = x3 = −1/√3, which is the minimal solution:
λ = −x1x2/(2x3) = 1/(2√3) and
H = ( 0 −2/√3 −2/√3 −2/√3 ; −2/√3 1/√3 −1/√3 −1/√3 ;
      −2/√3 −1/√3 1/√3 −1/√3 ; −2/√3 −1/√3 −1/√3 1/√3 ).
In this case the determinant is still −16/3, which is also compatible with the
minimum rule as p = 1. (Observe that the sign is compatible with the maximum
criteria as well.) It is the right sign for the minimum condition, as p = 1, and in
the minimum case there is no alternation.
The first-order conditions are
x2x3 + 2λx1 = 0,
x1x3 + 2λx2 = 0,
x1x2 + 2λx3 = 0.
At the stationary point (1, 0, 0) one gets λ = 0 and
H = ( 0 2 0 0 ; 2 0 0 0 ; 0 0 0 1 ; 0 0 1 0 ).
The determinant is 4, which is not compatible with the max or the min rule.
Example
Solve x1² − x2² − x3² → max, x1² + x2² + x3² = 1.
The Lagrangian is L = x1² − x2² − x3² + λ(x1² + x2² + x3² − 1). The Hessian is
A = ( 2+2λ 0 0 ; 0 −2+2λ 0 ; 0 0 −2+2λ ),  B = (2x1, 2x2, 2x3).
If x1 = 1, x2 = 0, x3 = 0, then λ = −1 and
H = ( 0 2 0 0 ; 2 0 0 0 ; 0 0 −4 0 ; 0 0 0 −4 ).
The determinant is −64. As there are three variables, for a maximum the
sign must be (−1)³.
At the point (0, 1, 0) the determinant is 0. The problem comes from the fact
that (0, 1, 0) is not an isolated local minimum, as x1 = 0, x2² + x3² = 1
are all minimal solutions.
Example
Solve x3² − x1² − x2² → max, x1² + x2² + x3² = 1.
At (0, 0, 1) the determinant of the bordered matrix is −64, which is fine. But
det( 0 0 0 ; 0 −4 0 ; 0 0 −4 ) = 0.
(The problem is that the border is at the wrong place.)
Example
Consumer saving problem:
1. The planning period is finite: t = 1, 2, …, T.
2. At every period t the consumer has wealth wt ≥ 0 and consumption
ct ≥ 0. The utility function of the consumer is u(c) = √c.
3. At every t the set of feasible decisions is Φ(w_{t−1}) = [0, w_{t−1}].
4. There is a production function
f(w, c) = (w − c)(1 + r) ≐ k(w − c) with r ≥ 0.
5. The consumer is maximizing the aggregate utility ∑_{t=0}^{T} u(ct).
Problem
We want to maximize the aggregate reward
∑_{t=1}^{T} rt(st, at) → max,
s1 ∈ S,
st = f_{t−1}(s_{t−1}, a_{t−1}), t = 2, …, T,
at ∈ Φ(st) ⊆ A, t = 1, 2, …, T,
where at is the action one can choose and st is the state of the system.
Obviously the objects in the problem are given before the optimization and
are parameters of the problem.
Definition
1. S is called the state space.
2. A is called the action space.
3. rt(s, a) is the reward function at time period t.
4. ft(s, a) is the transition function at time period t.
5. Φt : S → 2^A is the correspondence of feasible actions at time period t.
Definition
1. The history ht at time t is the sequence ht = (s1, a1, …, a_{t−1}, st).
(Observe that there is no at but there is an st in the definition of the
history.)
2. A strategy σ is a sequence of mappings σt which give the next action
at ∈ Φ(st). The σt can depend on the whole actual history ht.
3. Σ denotes the set of strategies.
4. The value function is
V(s) ≐ max_{σ∈Σ, s1=s} W(σ) ≐ max_{σ∈Σ, s1=s} ∑_{t=1}^{T} rt(σt).
Definition
A strategy is a Markovian strategy if it depends only on the
present state, that is, it is independent of the history of the path by which
we reached the present state s.
The conditions of the above theorem are quite reasonable. They (just)
guarantee the existence of the optimum:
1. rt(s, a) is continuous on S × A for every t.
2. ft(s, a) is continuous on S × A for every t.
3. Φt is a continuous, non-empty and compact valued correspondence.
Definition
A set-valued mapping, that is a correspondence, Φ is called
1. upper semicontinuous if for every xn → x, yn → y and yn ∈ Φ(xn)
one gets that y ∈ Φ(x). This says that the graph of Φ
is a closed set;
2. lower semicontinuous if for every xn → x and y ∈ Φ(x) there is a
sequence yn ∈ Φ(xn) such that yn → y;
3. continuous if it is both upper and lower semicontinuous.
g(s) ≐ max {U(s, x) | x ∈ Φ(s)}
Ψ(s) ≐ {x | x ∈ Φ(s), U(s, x) = g(s)}
1. First let us find the solution of the problem at t = T. In this case the
state variable is the wealth w and the correspondence is
Φ(w) = [0, w].
As u(c) = √c, the value function at time T is
VT(w) = max {u(c) | c ∈ Φ(w)} = √w.
At time T − 1, if c = w then the value is √w, and
√w < √(1 + k) √w,
so
V_{T−1}(w) = √(1 + k) √w.
The consumer maximizes
∑_{t=1}^{T} β^{t−1} u(ct).
Differentiating the Lagrangian,
∂L/∂ct = β^{t−1} u′(ct) − λ + µt = 0,
∂L/∂w_{T+1} = −λ + ϕ = 0.
If ϕ = 0 then λ = 0, but in this case β^{t−1} u′(ct) = 0, which is impossible
as u′ > 0. Hence ϕ > 0, which implies that w_{T+1} = 0. By the Inada
condition ct > 0, hence µt = 0. So
β^{t−1} u′(ct) = λ = β^{t−2} u′(c_{t−1}).
Cake eating problem
Definition
The relation
u′(ct) = βu′(c_{t+1})
is called the Euler equation.
Example
Solve the cake eating problem for the u(x) ≐ ln x function.
Observe that in this formulation one can use the logarithmic function, as it
is perfectly legitimate in the Kuhn–Tucker theory: in this case one has a
convex open set U over which the whole story of optimization is
considered. The Euler equation is
1/ct = β · 1/c_{t+1},  t = 1, 2, …, T − 1.
Hence ct = β^{t−1} c1, and from the budget constraint
c1 = w1 / (1 + β + … + β^{T−1}),
c2 = βw1 / (1 + β + … + β^{T−1}),
…
cT = β^{T−1} w1 / (1 + β + … + β^{T−1}).
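The closed form can be checked mechanically: the consumptions must exhaust the cake and satisfy the Euler equation. A minimal sketch (parameter values chosen for illustration):

```python
# Sketch: cake-eating closed form c_t = beta^(t-1) * w1 / (1 + beta + ... + beta^(T-1)).
# We check that the budget holds and that 1/c_t = beta * 1/c_{t+1} (the
# Euler equation for log utility).
beta, w1, T = 0.9, 1.0, 5
denom = sum(beta ** t for t in range(T))
c = [beta ** (t - 1) * w1 / denom for t in range(1, T + 1)]

print(abs(sum(c) - w1) < 1e-12)                          # budget holds
print(all(abs(1 / c[t] - beta * (1 / c[t + 1])) < 1e-9   # Euler equation holds
          for t in range(T - 1)))
```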
The Bellman equations are
Vt(w) = max_{c∈[0,w]} ( β^{t−1} u(c) + V_{t+1}(w − c) ),
VT(w) = β^{T−1} u(w).
The first-order condition at t = T − 1 is
β^{T−2} u′(c) − V′T(w − c) = 0,
β^{T−2} u′(c) − β^{T−1} u′(w − c) = 0,
β^{T−2} u′(c_{T−1}) = β^{T−1} u′(w − c_{T−1}) = β^{T−1} u′(cT),
u′(c_{T−1}) = βu′(cT).
One step further back,
β^{T−3} u′(ct) − V′_{T−1}(wt − ct) = β^{T−3} u′(ct) − V′_{T−1}(w_{T−1}) = 0,
β^{T−3} u′(c_{T−2}) = V′_{T−1}(w_{T−1}),
where
V_{T−1}(w) = max_{c∈[0,w]} ( β^{T−2} u(c) + VT(w − c) ) =
= max_{c∈[0,w]} ( β^{T−2} u(c) + β^{T−1} u(w − c) ) =
= β^{T−2} u(c(w)) + β^{T−1} u(w − c(w)).
If f(p) ≐ g(p, x(p)), where x(p) is the maximizer, then
df(p)/dp = ∂g/∂p (p, x(p)) + ∂g/∂x (p, x(p)) · dx(p)/dp = ∂g/∂p (p, x(p)).
Observe that the partial derivative is by the parameter of the goal function,
so with
g(p, x) = β^{T−2} u(x) + β^{T−1} u(p − x),
∂g/∂p (p, x(p)) = β^{T−1} u′(p − x(p)),
that is,
dV_{T−1}(w)/dw = β^{T−1} u′(w − c(w)),
which gives the Euler equation.
In general,
β^{T−t−1} u′(c_{T−t}) = V′_{T−t+1}(w_{T−t+1}),
V_{T−t+1}(w) = max_{c∈[0,w]} ( β^{T−t} u(c) + V_{T−t+2}(w − c) ) =
= max_{s∈[0,w]} ( β^{T−t} u(w − s) + V_{T−t+2}(s) ).
By the Envelope Theorem
V′_{T−t+1}(w) = β^{T−t} u′(w − s(w)) = β^{T−t} u′(c_{T−t+1}),
hence
β^{T−t−1} u′(c_{T−t}) = β^{T−t} u′(c_{T−t+1}).
Example
Solve the cake eating problem for the u(x) ≐ ln x function.
For T = 2,
c1 = w1/(1 + β),  c2 = βw1/(1 + β).
For T = 3, c1 + c2 + c3 = w1 and
c1 = w1/(1 + β + β²),  c2 = βw1/(1 + β + β²),  c3 = β²w1/(1 + β + β²),
V1 = ln c1 + β ln c2 + β² ln c3 = A3 + B3 ln w1.
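The claimed form V1 = A3 + B3 ln w1 with B3 = 1 + β + β² can be verified numerically: V1(w) − B3 ln w must be the same constant for every cake size. A minimal sketch (β chosen for illustration):

```python
import math
# Sketch: for T = 3 and u = ln, V1(w1) = ln c1 + beta*ln c2 + beta^2*ln c3
# should equal A3 + B3*ln w1 with B3 = 1 + beta + beta^2.
beta = 0.8
B3 = 1 + beta + beta ** 2

def V1(w):
    c1 = w / B3
    c2 = beta * w / B3
    c3 = beta ** 2 * w / B3
    return math.log(c1) + beta * math.log(c2) + beta ** 2 * math.log(c3)

a = V1(1.0) - B3 * math.log(1.0)
b = V1(7.0) - B3 * math.log(7.0)
print(abs(a - b) < 1e-12)  # True: V1 is A3 + B3*ln(w1)
```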
Example
Solve the problem
∑_{t=0}^{T} β^t U(ct) → max,
ct + k_{t+1} = f(kt),
where k0 is given.
The terms of the objective containing k_{t+1} are
β^t U(f(kt) − k_{t+1}) + β^{t+1} U(f(k_{t+1}) − k_{t+2}).
Definition
The second order difference equation obtained by differentiating with respect
to k_{t+1},
−β^t U′(f(kt) − k_{t+1}) + β^{t+1} U′(f(k_{t+1}) − k_{t+2}) f′(k_{t+1}) = 0,
is the Euler equation of the problem.
We can get the same equation with dynamic programming. By the Bellman
equation one gets an expression which is again hopeless, as it expresses V′_{t+1}
with V′_{t+2}; but, unlike in the first case, k_{t+1} is not in the formula. Hence,
using the Envelope Theorem, one arrives at the same Euler equation for the
production saving model.
With multipliers µt, νt ≥ 0 for the constraints ct ≥ 0, kt ≥ 0, the Lagrangian
contains the terms ∑t µt ct and ∑t νt kt.
∂L/∂ct = β^t U′(ct) − λt + µt = 0.
By the Inada condition λt > 0, hence
ct + k_{t+1} − f(kt) = 0.
∂L/∂kt = −λ_{t−1} + λt f′(kt) + νt = 0.
If we exclude corner solutions then νt = µt = 0, and then
λ_{t−1} = λt f′(kt),
λt = β^t U′(ct),
β^{t−1} U′(c_{t−1}) = β^t U′(ct) f′(kt),
U′(c_{t−1}) = βU′(ct) f′(kt),
U′(f(k_{t−1}) − kt) = βU′(f(kt) − k_{t+1}) f′(kt),
or, shifting the index, U′(ct) = βU′(c_{t+1}) f′(k_{t+1}).
Example
Solve the production saving model with U(c) = ln c, f(k) = k^α.
The Euler equation gives
k_{t+1}/ct = βα k_{t+1}^α / c_{t+1} = βα f(k_{t+1}) / c_{t+1}.
As
ct = f(kt) − k_{t+1},
hence, with the savings rate st ≐ k_{t+1}/f(kt),
k_{t+1}/ct = k_{t+1}/(f(kt) − k_{t+1}) = (k_{t+1}/f(kt)) / (1 − k_{t+1}/f(kt)) = st/(1 − st).
Also
s_{t+1} = k_{t+2}/f(k_{t+1}) = (f(k_{t+1}) − c_{t+1})/f(k_{t+1}) = 1 − c_{t+1}/f(k_{t+1}).
Going backwards from sT = 0,
s_{T−1} = αβ/(1 + αβ),
s_{T−2} = αβ(1 + αβ) / (1 + αβ + αβ(1 + αβ)) = (αβ + (αβ)²) / (1 + αβ + (αβ)²),
and in general
s_{T−t} = ∑_{s=1}^{t} (αβ)^s / ( 1 + ∑_{s=1}^{t} (αβ)^s ).
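Combining the two relations above gives the backward recursion st/(1 − st) = αβ/(1 − s_{t+1}), whose iterates reproduce the closed form and converge to the well-known infinite-horizon savings rate αβ. A minimal sketch (parameter values chosen for illustration):

```python
# Sketch: backward recursion for the savings rate in the log-utility,
# f(k) = k^alpha model. From the Euler equation,
# s_t/(1 - s_t) = alpha*beta/(1 - s_{t+1}), with s_T = 0; the closed form
# s_{T-t} = S_t/(1 + S_t), S_t = sum_{s=1}^t (ab)^s, solves it.
alpha, beta = 0.3, 0.95
ab = alpha * beta

s = 0.0                      # s_T = 0: save nothing in the last period
rates = []
for _ in range(30):          # walk backwards: T-1, T-2, ...
    q = ab / (1.0 - s)
    s = q / (1.0 + q)        # equivalent to s_t = ab / (1 - s_{t+1} + ab)
    rates.append(s)

print(abs(rates[0] - ab / (1 + ab)) < 1e-12)  # s_{T-1} = ab/(1+ab)
print(abs(rates[-1] - ab) < 1e-6)             # far from T, s -> alpha*beta
```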
Example
Optimal selling of a stock with independent offers.
3. If ξ1 < 5/8 then you should not take ξ1. Then your expected payout in
this period is
E(max(5/8, ξ)) = ∫₀^{5/8} (5/8) dx + ∫_{5/8}^{1} x dx =
= 25/64 + [x²/2]_{5/8}^{1} = 25/64 + 1/2 − 25/128 =
= (50 + 64 − 25)/128 = 89/128 = 0.69531.
Now assume that r = 100% and let ξ be uniformly distributed on [0, 1], and
let T = 3.
1. t = T = 3: you have no choice, so your expected payout is
E(ξ3) = 1/2.
2. t = T − 1 = 2. If
ξ2 < E(ξ3)/(1 + r) = 1/4,
you should continue, as in this case your invested payout would be
ξ2(1 + r) = 2ξ2 < E(ξ3) = 1/2.
In general αT = 0 and
αt = E(max(α_{t+1}, ξt)) = ∫₀^{α_{t+1}} α_{t+1} dF(x) + ∫_{α_{t+1}}^{∞} x dF(x) =
= α_{t+1} F(α_{t+1}) + ∫_{α_{t+1}}^{∞} x dF(x).
Definition
Let Ht ≥ 0 be a set of random payouts. The optimal stopping problem is
to find
sup_τ E(H(τ)).
Let
Xt = sup_{τ≥t} E(H(τ) | Ft).
That is, Xt is the best that one can get starting at time t. Xt is the same
as the value function in DP.
Definition
Xt is called the Snell envelope.
Theorem
The optimal strategy is
τ = min {t | Ht = Xt} = min {t | Ht ≥ Xt} =
= min {t | Ht ≥ E(X_{t+1} | Ft)}.
The interpretation is quite simple. One must stop (in our case sell) when
one sees that the present payout is better than the expected future payout,
given our knowledge at that moment.
As the offers are independent,
E(X_{t+1} | Ft) = E(X_{t+1}).
Let
αT ≐ 0,  αt ≐ E(X_{t+1}) / (1 + r)^{T−t}.
Then
Xt / (1 + r)^{T−t} = max(ξt, αt),
so
Vt(ξt) ≐ Xt / (1 + r)^{T−t} = max(ξt, αt),
αt ≐ E(X_{t+1}) / (1 + r)^{T−t} = (1/(1 + r)) · E(X_{t+1}) / (1 + r)^{T−(t+1)} =
= E(V_{t+1}(ξ)) / (1 + r).
Hence
Vt(ξt) = max(ξt, αt) = max( ξt, E(V_{t+1}(ξ))/(1 + r) ).
Now
VT(ξ) = ξ ≤ max( ξ, E(VT(ξ))/(1 + r) ) = V_{T−1}(ξ),
and with induction
V_{t+1}(ξ) = max( ξ, E(V_{t+2}(ξ))/(1 + r) ) ≤ max( ξ, E(V_{t+1}(ξ))/(1 + r) ) = Vt(ξ),
hence
α_{t+1} = E(V_{t+2}(ξ))/(1 + r) ≤ E(V_{t+1}(ξ))/(1 + r) = αt.
Definition
(αn) is the optimal exercise boundary.
αT = 0,
αt = E(V_{t+1}(ξ))/(1 + r) = E(max(ξ, α_{t+1}))/(1 + r) =
= (1/(1 + r)) ( ∫₀^{α_{t+1}} α_{t+1} dF + ∫_{α_{t+1}}^{∞} x dF(x) ) =
= (1/(1 + r)) ( α_{t+1} F(α_{t+1}) + ∫_{α_{t+1}}^{∞} x dF(x) ).
For the uniform distribution on [0, 1],
αT = 0,
αn = (1/(1 + r)) ( α²_{n+1} + [x²/2]_{α_{n+1}}^{1} ) =
= (1/(1 + r)) ( α²_{n+1} + 1/2 − α²_{n+1}/2 ) =
= (α²_{n+1} + 1) / (2(1 + r)).
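The recursion is easy to iterate; with r = 1 the first step back gives 1/4, matching the threshold in the r = 100% example above. A minimal sketch (`boundary` is an illustrative name):

```python
# Sketch: the exercise boundary for selling with uniform offers on [0, 1]:
# alpha_T = 0 and alpha_n = (alpha_{n+1}^2 + 1) / (2*(1+r)).
def boundary(T, r):
    alphas = [0.0]                       # alpha_T = 0
    for _ in range(T - 1):
        a = alphas[-1]
        alphas.append((a * a + 1.0) / (2.0 * (1.0 + r)))
    return list(reversed(alphas))        # alpha_1, ..., alpha_T

alphas = boundary(3, 1.0)
print(alphas[-1])   # 0.0  (alpha_T)
print(alphas[-2])   # 0.25 (alpha_{T-1} = 1/4)
```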
Example
Buying a stock with independent offers. Now
XT = HT ≐ ξT,
Xn = min( ξn, E(X_{n+1} | Fn) ) = min( ξn, E(X_{n+1}) ),
τ = min {n ≥ 0 | ξn ≤ αn} ∧ T.
At time T one must buy the stock. At time T − 1 one must buy it if
ξ_{T−1} is smaller than the expected value, etc.
Optimal stopping
αT = E(ξT),
αn = α_{n+1} (1 − F(α_{n+1})) + ∫₀^{α_{n+1}} x dF(x).
For the uniform distribution,
αT = 1/2,
αn = α_{n+1}(1 − α_{n+1}) + α²_{n+1}/2 = α_{n+1} − α²_{n+1}/2.
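The buying boundary can be iterated the same way:

```python
# Sketch: the buying boundary with uniform offers on [0, 1]:
# alpha_T = 1/2 and alpha_n = alpha_{n+1} - alpha_{n+1}^2 / 2.
a = 0.5
for _ in range(2):          # step back two periods
    a = a - a * a / 2.0
print(a)                    # alpha_{T-2} = 0.3046875
```

The boundary decreases as more periods remain: the more offers are still to come, the pickier the buyer can afford to be.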
Example
Selling a stock with recall of past prices. Here
Hn ≐ (1 + r)^{T−n} max_{k≤n} ξk,
XT = HT = (1 + r)^{T−T} max_{n≤T} ξn = max_{n≤T} ξn,
Xn = max( Hn, E(X_{n+1} | Fn) ) =
= max( (1 + r)^{T−n} max_{k≤n} ξk, E(X_{n+1} | Fn) ).
Let
Vn ≐ Xn / (1 + r)^{T−n};
then
VT = max_{n≤T} ξn,
Vn = max( max_{k≤n} ξk, E(V_{n+1} | Fn) / (1 + r) ).
Theorem
If the variables ξ and η are independent, then one can use the relation
E(f(ξ, η) | ξ = x) = E(f(x, η)).
As the (ξk) are independent,
V_{T−1} = max( max_{k≤T−1} ξk, E(max_{n≤T} ξn | F_{T−1}) / (1 + r) ) =
= max( max_{k≤T−1} ξk, E(max(max_{n≤T−1} ξn, ξT) | F_{T−1}) / (1 + r) ) =
= max( max_{k≤T−1} ξk, h(max_{k≤T−1} ξk) ),
where
h(x) ≐ E(max(x, ξT)) / (1 + r).
Let
S ≐ {x | x ≥ h(x)} = {x | x ≥ E(max(x, ξT)) / (1 + r)}.
Obviously
E(max(x, ξT)) = xF(x) + ∫ₓ^{∞} w dF(w),
so
S = {x | (1 + r)x ≥ xF(x) + ∫ₓ^{∞} w dF(w)} =
= {x | rx ≥ x(F(x) − 1) + ∫ₓ^{∞} w dF(w)} =
= {x | rx ≥ ∫ₓ^{∞} (w − x) dF(w)}.
The stopping rule is
{max_{n≤T−1} ξn ∈ S} = {max_{n≤T−1} ξn ≥ a},
where a is the boundary of S. The reason is the one step ahead strategy.
We work by induction on T. Let
Sn^{(T)} ≐ {Hn^{(T)} ≥ Xn^{(T)}}.
As Hn^{(T)} = max_{k≤n} ξk is not changing with T, but if we increase T then
Xn^{(T)} ≤ Xn^{(T+1)}, therefore obviously
Sn^{(T+1)} = {Hn^{(T)} ≥ Xn^{(T+1)}} ⊆ {Hn^{(T)} ≥ Xn^{(T)}} = Sn^{(T)}.
One must prove that Sn^{(T)} ⊆ Sn^{(T+1)}.
As
max_{k≤n+1} ξk ≥ max_{k≤n} ξk ≥ a,
using the induction hypothesis that if one period is left then
{x | x ≥ a} is the stopping region,
max( max_{k≤n+1} ξk, E(V_{n+2} | F_{n+1}) / (1 + r) ) = max_{k≤n+1} ξk.
Optimal stopping
Substituting back and recalling that on the stopping set the maximum is
attained at the first term,
Vn = max( max_{k≤n} ξk, E(max_{k≤n+1} ξk | Fn) / (1 + r) ) =
= max( max_{k≤n} ξk, E(max(max_{k≤n} ξk, ξ_{n+1}) | Fn) / (1 + r) ).
So Sn^{(T)} = Sn^{(T+1)}, hence the set {Hn ≥ Xn} is independent of the number
of periods left.
Homework
Definition
The stochastic dynamic programming problem on a finite time horizon is
E( g(xT) + ∑_{k=0}^{T−1} uk(xk, uk, ξk) ) → max,
x_{k+1} = fk(xk, uk, ξk),  uk ∈ Uk(xk),  k = 0, 1, …, T − 1,
x0 = a.
The backward recursion is
JT = g(xT),
Jk(xk) = max_{uk∈Uk} E( uk(xk, uk, ξk) + J_{k+1}(x_{k+1}) | Fk ) =
= max_{uk∈Uk} E( uk(xk, uk, ξk) + J_{k+1}(fk(xk, uk, ξk)) | Fk ).
For stopping problems
JT = HT,
J_{n−1} = max { H_{n−1} if uk = stop ;  E( Jn(f(xk, uk, ξk)) | F_{n−1} ) otherwise }.
The static inventory problem: minimize
J(S) = h E(S − D)⁺ + p E(D − S)⁺.
Here
E(S − D)⁺ = ∫_R (S − x)⁺ dF(x) = ∫_R (S − x)⁺ f(x) dx = ∫₀^S (S − x) f(x) dx.
Differentiating,
dJ/dS = h(S − S) f(S) + h ∫₀^S f(x) dx − p(S − S) f(S) − p ∫_S^{∞} f(x) dx =
= hF(S) − p(1 − F(S)).
Solving dJ/dS = 0 for S,
(h + p) F(S) = p,  F(S) = p/(h + p).
With an ordering cost c,
J(S) = cS + h E(S − D)⁺ + p E(D − S)⁺,
c + hF(S) − p(1 − F(S)) = 0.
In the discrete case let
∆J(S) = J(S + 1) − J(S) =
= h ∑_{k=0}^{S} P(D ≤ k) + p ∑_{k=S+1}^{∞} P(D > k) −
− h ∑_{k=0}^{S−1} P(D ≤ k) − p ∑_{k=S}^{∞} P(D > k).
Hence
∆J(S) = hP(D ≤ S) − pP(D > S) =
= h(1 − P(D > S)) − pP(D > S) =
= h − (h + p) P(D > S).
S* = min {S | ∆J(S) ≥ 0} = min {S | J(S + 1) ≥ J(S)},
S* = min {S | h − (h + p) P(D > S) ≥ 0} =
= min {S | P(D > S) ≤ h/(h + p)} =
= min {S | 1 − P(D ≤ S) ≤ h/(h + p)} =
= min {S | P(D ≤ S) ≥ p/(h + p)} =
= min {S | P(D ≤ S) ≥ α}.
Observe that one must cumulate until we first hit the level α.
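The rule "cumulate until you first hit α" translates directly into code. A minimal sketch (the demand distribution is a made-up illustration, not from the text):

```python
# Sketch: discrete newsvendor rule S* = min{ S : P(D <= S) >= p/(h+p) }.
pmf = {0: 0.1, 1: 0.2, 2: 0.3, 3: 0.25, 4: 0.15}   # hypothetical demand pmf
h, p = 1.0, 3.0
alpha = p / (h + p)                      # critical fractile, here 0.75

cum, S_star = 0.0, None
for s in sorted(pmf):                    # cumulate until we first hit alpha
    cum += pmf[s]
    if cum >= alpha:
        S_star = s
        break
print(S_star)   # 3  (P(D<=2) = 0.6 < 0.75 <= P(D<=3) = 0.85)
```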
In the dynamic version
y_{t+1} = (yt + v − Dt)⁺,  y0 = x,
l(x, v) ≐ cv + hx + p E(D − (x + v))⁺ =
= cv + hx + p ∫_{x+v}^{∞} (z − (x + v)) dF(z),
J(x, V) = ∑_{n=0}^{∞} βⁿ l(yn, vn).
The optimal policy is the base-stock policy
v(x) = S − x if x ≤ S,  v(x) = 0 if x > S,
where
1 − F(S) = (c(1 − β) + βh) / (p − βc + βh).
That is,
F(S) = (p − c) / (p + β(h − c)).
Compare with the static solution
F(S) = (p − c) / (p + h).
Liquidity modelling
Example
Let the distribution of D be exponential with λ = 2. Let the penalty p = 3,
the holding cost h = 5, and let c = 1. In the static model
F(S) = 1 − exp(−2S) = (3 − 1)/(3 + 5) = 2/8 = 1/4,
exp(−2S) = 3/4,  S = −(1/2) log(3/4) = 0.14384.
In the dynamic model (with β = 1/2)
F(S) = 1 − exp(−2S) = (3 − 1)/(3 + (1/2)(5 − 1)) = 2/5,
exp(−2S) = 3/5,  S = −(1/2) log(3/5) = 0.25541.
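Both order-up-to levels can be reproduced directly from the critical-fractile formulas (β = 1/2 is the discount factor implied by the slide's numbers):

```python
import math
# Sketch: the exponential example (lambda = 2, p = 3, h = 5, c = 1).
# Static: F(S) = (p-c)/(p+h); dynamic (beta = 1/2): F(S) = (p-c)/(p + beta*(h-c)).
# With F(S) = 1 - exp(-lam*S), S = -log(1 - F)/lam.
lam, p, h, c, beta = 2.0, 3.0, 5.0, 1.0, 0.5

F_static = (p - c) / (p + h)                 # 1/4
F_dynamic = (p - c) / (p + beta * (h - c))   # 2/5
S_static = -math.log(1 - F_static) / lam
S_dynamic = -math.log(1 - F_dynamic) / lam
print(round(S_static, 5))    # 0.14384
print(round(S_dynamic, 5))   # 0.25541
```

The dynamic level is higher: with future periods discounted, holding inventory is cheaper relative to the penalty, so it pays to stock more.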
Problem
The infinite horizon problem is
∑_{t=1}^{∞} β^{t−1} r(st, at) → max,
s_{t+1} = f(st, at),  t = 1, 2, …,
at ∈ Φ(st),  t = 1, 2, …
Theorem
Under the above conditions there is a stationary optimal strategy π. The
value function solves the Bellman equation.
Definition
A Markovian strategy is stationary if it is independent of the time
parameter t.
For the cake eating problem, w0 is given and
V(w) = max_{c∈[0,w]} ( u(c) + βV(w − c) ) =
= max_{s∈[0,w]} ( u(w − s) + βV(s) ).
The first-order conditions are
u′(c) = βV′(w − c)
or
u′(w − s) = βV′(s).
How can we calculate V′(w)?
Again we will use the Envelope Theorem. Using the second formulation,
V′(w) = u′(w − s(w)) = u′(c(w)),
V′(w − c(w)) = u′(c(w − c(w))),
u′(c(w)) = βu′(c(w − c(w))).
Example
Solve the problem with u(c) = c^{1−σ}/(1 − σ).
We guess that V(x) = α x^{1−σ}/(1 − σ) and that the optimal policy is
c = π(x) = λx, with 0 < λ < 1. Then
α w^{1−σ}/(1 − σ) = max_{c∈[0,w]} ( c^{1−σ}/(1 − σ) + βα (w − c)^{1−σ}/(1 − σ) ).
Differentiating,
c^{−σ} − βα (w − c)^{−σ} = 0,  w − c = (βα)^{1/σ} c,
w = c (1 + (βα)^{1/σ}),
c = w / (1 + (βα)^{1/σ}) = λw.
Substituting back,
α w^{1−σ}/(1 − σ) = (λw)^{1−σ}/(1 − σ) + βα ((1 − λ)w)^{1−σ}/(1 − σ),
α = λ^{1−σ} + βα (1 − λ)^{1−σ}.
If
α = (1 − β^{1/σ})^{−σ},  λ = 1 − β^{1/σ},
then
(1 − β^{1/σ})^{−σ} = (1 − β^{1/σ})^{1−σ} + β (1 − β^{1/σ})^{−σ} β^{(1−σ)/σ} =
= (1 − β^{1/σ})^{1−σ} + (1 − β^{1/σ})^{−σ} β^{1/σ} =
= (1 − β^{1/σ})^{−σ} ( (1 − β^{1/σ}) + β^{1/σ} ).
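The guess-and-verify solution is easy to test numerically: the fixed-point identity must hold, and c = λw must (locally) beat nearby consumption choices in the Bellman right-hand side. A minimal sketch (β and σ chosen for illustration):

```python
# Sketch: numeric check of the CRRA guess-and-verify solution,
# alpha = (1 - beta^(1/sigma))^(-sigma), lambda = 1 - beta^(1/sigma).
beta, sigma = 0.9, 2.0
lam = 1 - beta ** (1 / sigma)
alpha = lam ** (-sigma)

lhs = alpha
rhs = lam ** (1 - sigma) + beta * alpha * (1 - lam) ** (1 - sigma)
print(abs(lhs - rhs) < 1e-7)   # fixed-point equation holds

def rhs_value(c, w=1.0):
    """Bellman right-hand side at wealth w under the guessed V."""
    return (c ** (1 - sigma) / (1 - sigma)
            + beta * alpha * (w - c) ** (1 - sigma) / (1 - sigma))

c_star = lam                                          # candidate c = lam * w, w = 1
print(rhs_value(c_star) >= rhs_value(c_star - 0.01))  # True
print(rhs_value(c_star) >= rhs_value(c_star + 0.01))  # True
```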
Obviously
lim_{n→∞} βⁿ V(cn) = 0.
Example
Solve the problem if u(c) = ln c.
Guess V(w) = A + B ln w; then
A + B ln w = ln( w − (βB/(1 + Bβ)) w ) + β( A + B ln( (βB/(1 + Bβ)) w ) ),
A + B ln w = ln( w/(1 + Bβ) ) + β( A + B ln( (βB/(1 + Bβ)) w ) ),
A + B ln w = ln w − ln(1 + Bβ) + βA + βB ln( βB/(1 + Bβ) ) + βB ln w,
A + B ln w = −ln(1 + Bβ) + βA + βB ln( βB/(1 + Bβ) ) + (1 + βB) ln w.
For
B = 1 + βB,  A = −ln(1 + Bβ) + βA + βB ln( βB/(1 + Bβ) ),
that is B = 1/(1 − β), the guess is verified. Finally
βⁿ ln(cn) ≤ βⁿ ln(w) → 0.
Hence we can just prove that
β^t ∂F/∂x_{t+1}(t, xt, x_{t+1}) + β^{t+1} ∂F/∂xt(t + 1, x_{t+1}, x_{t+2}) = 0,
with the terminal condition
F′_{x_{T+1}}(T, xT, x_{T+1}) = 0.
In this case one can solve this equation with backward induction.
Infinite cake eating problem
The problem
∫₀^T exp(−rt) U(C(t)) dt → max,
K′(t) = F(K(t)) − C(t) − bK(t),
K(0) = K0,  K(T) ≥ 0,  C(t) ≥ 0,
is an optimal control problem. The first two lines are always there; the third
line, the boundary conditions, has many variations. The boundary
conditions are very important!
The basic problem of the calculus of variations is
∫_a^b F(t, x, x′) dt → max/min,
x(a) = xa,  x(b) = xb.
For a perturbation y with y(a) = y(b) = 0,
0 = ψ′(0) ≐ (d/dλ) ∫_a^b F(t, x(t) + λy(t), x′(t) + λy′(t)) dt =
= ∫_a^b (d/dλ) F(t, x(t) + λy(t), x′(t) + λy′(t)) dt =
= ∫_a^b ( F′x y + F′x′ y′ ) dt ≐
≐ ∫_a^b ( q(t) y(t) + p(t) y′(t) ) dt.
Integrating by parts,
q(t) − p′(t) = 0,
that is,
F′x(t, x(t), x′(t)) − (d/dt) F′x′(t, x(t), x′(t)) = 0,
where F′x and F′x′ are the partial derivatives of F.
Definition
This is the so-called Euler–Lagrange differential equation.
With free endpoints, consider
ϕa(x(a)) + ϕb(x(b)) + ∫_a^b F(t, x, x′) dt → max/min.
The argument is very similar. Now let y be an arbitrary (differentiable)
function and let
0 = ψ′(0) = ϕ′a(x(a)) y(a) + ϕ′b(x(b)) y(b) + ∫_a^b ( F′x y + F′x′ y′ ) dt.
Besides the Euler–Lagrange equation one gets
ϕ′a(x(a)) − F′x′(a) = 0,  ϕ′b(x(b)) + F′x′(b) = 0.
Definition
The last two conditions are called the transversality conditions.
Example
Solve the problem
∫₀² (4 − 3x² − 16x′ − 4x′²) e^{−t} dt → max,
x(0) = 8/3,  x(2) = 1/3.
In this case
F(t, x, x′) = (4 − 3x² − 16x′ − 4x′²) e^{−t},
F′x = −6x e^{−t},  F′x′ = (−16 − 8x′) e^{−t}.
The Euler–Lagrange equation is
−6x e^{−t} = (d/dt) (−16 − 8x′) e^{−t} =
= e^{−t} (−1)(−16 − 8x′) + e^{−t} (−8x″).
Simplifying,
x″ − x′ − (3/4) x = 2,
x(0) = 8/3,  x(2) = 1/3.
The general solution is
C1 exp(−t/2) + C2 exp(3t/2) − 8/3.
attain an extremum.
The kernel function is F(t, x, x′) = x′² − 2tx. The partial derivatives are
F′x = −2t,  F′x′ = 2x′.
The Euler–Lagrange equation is
−2t = (d/dt) 2x′ = 2x″,  x″ = −t.
The solution is
x′(t) = −t²/2 + c1,  x(t) = −t³/6 + c1 t + c2.
Calculus of variations
Example
Find the extremal of the functional
∫₁³ (3t − x) x dt,  x(1) = 1,  x(3) = 9/2.
Here
F(t, x, x′) = (3t − x) x,
F′x = 3t − 2x,  F′x′ = 0.
The Euler–Lagrange equation is 3t − 2x = 0, so x = (3/2)t. But at t = 1,
x(1) = 3/2; the first boundary condition is not valid. Hence there is no
extremal solution.
Example
Find the extremal solutions of
∫₀¹ ( exp(x′) + t x′ ) dt,  x(0) = 0,  x(1) = α.
Example
Find
∫₀¹ √(1 + x′²) dt → min,
x(0) = 0,  x(1) = 1.
As F does not depend on x, F′x′ is constant:
x′ / √(1 + x′²) = c,
c² (1 + x′²) = x′²,
(1 − c²) x′² = c²,
x′ = C ⟹ x = Ct + B.
Example
Find the extremal of
∫₀¹ y (y′)² dx,  y(0) = 1,  y(1) = 4.
Here F′y = (y′)², and as
(d/dx) F′y′ = (d/dx) 2yy′ = 2(y′)² + 2yy″,
the Euler–Lagrange equation is
2yy″ + (y′)² = 0.
Using the substitution y′(x) = u(y(x)),
2y(x) (d/dy) u(y(x)) · u(y(x)) + u²(y(x)) = 0,
2 du/u = −dy/y,  u² = c/y,
hence y^{1/2} dy = ±√c dx and y = (c3 x + c2)^{2/3}, with
1 = c2^{2/3},  4 = (c3 + c2)^{2/3}.
Example
Find the extremal of
∫₁^e ( x y′² + y y′ ) dx,  y(1) = 0,  y(e) = 1.
The Euler–Lagrange equation reduces to
0 = u + xu′,  u = y′.
Separating,
(1/x) dx = −(1/u) du,
ln |x| = −ln |u| + c,
c/x = u = y′,
y = c1 ln x + c2.
From the boundary conditions, y = ln x.
Example
∫₀^T 2π x(t) √(1 + x′²) dt → min,
x(0) = a,  x(T) = b.
The kernel is
F(t, x, x′) = x √(1 + x′²).
The Euler–Lagrange equation is
√(1 + x′²) = (d/dt) ( x x′ / √(1 + x′²) ).
Carrying out the differentiation and multiplying through by √(1 + x′²),
after simplification one gets
1 + x′² = x x″.
With the substitution x′ = u(x),
1 + u² = x u (du/dx),
u du / (1 + u²) = dx / x,
(1/2) ln(1 + u²) = ln |x| + c,
that is,
x / √(1 + x′²) = C,
x′² = x²/C² − 1.
Writing this back into the Euler–Lagrange equation,
x x″ − (x²/C² − 1) − 1 = 0,
x x″ = x²/C²,
x″ = x/C².
The general solution is x = A e^{t/C} + B e^{−t/C}; with 4AB = C² it can be
written as
x = α ch( (t − β)/α ).
With a free right endpoint,
∫₀^T 2π x(t) √(1 + x′²) dt → min,
x(0) = a.
The solution of the Euler–Lagrange equation is still of the form
x(t) = α ch( (t − β)/α ),
but we should also have the condition x(0) = a and the transversality
condition
0 = F′x′(T, x(T), x′(T)) = x(T) x′(T) / √(1 + x′(T)²).
Example: free right endpoint with kernel F(t, x, x′) = t x′ + x′², x(0) = 1.
The Euler–Lagrange equation is
0 = F′x = (d/dt) F′x′ = (d/dt) (t + 2x′) = 1 + 2x″,
so
t + 2x′ = C,  x(0) = 1,
x′ = C/2 − t/2,
x(t) − 1 = Ct/2 − t²/4,
x(t) = −t²/4 + Ct/2 + 1.
Calculus of variations
The transversality condition F′x′(b) = 0, with F′x′ = t + 2x′ = C, gives
C = 0, so x(t) = 1 − t²/4.
Theorem
If the kernel function F(t, x, x′) is convex in (x, x′), then every solution of
the Euler–Lagrange equation is a global minimum.
For the proof observe that
ψ(λ) = Ψ(x + λ(y − x)) = Ψ((1 − λ)x + λy) =
= ∫_a^b F(t, (1 − λ)x + λy, (1 − λ)x′ + λy′) dt ≤
≤ (1 − λ) ∫_a^b F(t, x, x′) dt + λ ∫_a^b F(t, y, y′) dt =
= (1 − λ) ψ(0) + λ ψ(1),
so ψ is convex, ψ′(0) = 0, and
Ψ(x) = ψ(0) ≤ ψ(1) = Ψ(y)
for every y, which proves the theorem.
Example
Study the problem with kernel F(t, x, x′) = √(1 + x′²). This kernel is
convex in x′ and does not depend on x, so every extremal is a global minimum.
Example
Study the problem with kernel
F(t, x, x′) = x √(1 + x′²).
The second derivative in (x, x′) gives the quadratic form
2λµ x′/√(1 + x′²) + µ² x/(1 + x′²)^{3/2},
which, as the coefficient of λ² is zero, is not positive semidefinite. Hence the
kernel is not convex in (x, x′) and the theorem above cannot be applied.
Definition
The problem
∫_{t0}^{t1} f(t, x(t), u(t)) dt → max,
x′(t) = g(t, x(t), u(t)),  x(t0) = x0,
u(t) ∈ U,
is called the optimal control problem. The variable x(t1) can be fixed or
free. The set U is called the control region.
Example
The calculus of variations problem
∫_{t0}^{t1} F(t, x(t), x′(t)) dt → max
is the special case x′ = u, g(t, x, u) = u.
Definition
The expression
H(t, x, u, p) ≐ f(t, x, u) + p g(t, x, u)
is called the Hamiltonian of the problem.
Theorem (Mangasarian)
If U is a convex set and H(t, x, u, p(t)) is concave in (x, u), then the
solution (x(t), u(t)) of the Maximum Principle above is an optimal
solution of the problem.
Theorem (Arrow)
Let
Ĥ(t, x, p) ≐ max {H(t, x, u, p) | u ∈ U},
assuming that the maximum is attained. If Ĥ(t, x, p(t)) is concave in x,
then if (x(t), u(t)) is a solution of the Maximum Principle above, then
(x(t), u(t)) solves the optimization problem.
The adjoint equation is
(d/dt) p(t) = −H′x(t, x(t), u(t), p(t)) =
= −f′x(t, x(t), u(t)) − p(t) g′x(t, x(t), u(t)).
One can interpret p(t) as the shadow price of the state variable x(t). At
every moment of time the infinitesimal contribution to the total wealth is
dW(t)/dt = f(t, x(t), u(t)) + (x(t) p(t))′ =
= f(t, x(t), u(t)) + x′(t) p(t) + x(t) p′(t) =
= f(t, x(t), u(t)) + p(t) g(t, x(t), u(t)) + x(t) p′(t) =
= H(t, x(t), u(t)) + x(t) p′(t),
the direct inflow and the indirect change of the wealth. (Direct cash inflow
and change of the value of the stocks.)
Maximizing in x,
∂/∂x ( H(t, x, u, p) + x p′ ) = H′x + p′ = 0 ⟹ p′ = −H′x.
Observe that there are no constraints on the state variable. Now if x(t),
and therefore its price p(t), is already given, then x(t) p′(t) is also given,
and one should directly maximize H with respect to u. (There are
constraints on u, so we cannot simply calculate the stationary points.)
For the calculus of variations special case,
H = F(t, x, u) + pu,
and H′u = F′u + p = 0 gives p = −F′x′. That is,
(d/dt) F′x′ = −p′ = H′x = F′x,
the Euler–Lagrange equation.
Example
Solve the problem
∫₀^T ( 1 − tx(t) − u(t)² ) dt → max,
x′(t) = u(t),  x(0) = x0.
The Hamiltonian is
H(t, x, u, p) = 1 − tx − u² + pu,
H′u = −2u + p = 0 ⟹ u = p/2.
Example
Solve the problem
∫₀^T ( x²(t) + u²(t) ) dt → min,  x′(t) = u(t),
x(0) = x0.
The Hamiltonian is
H(t, x, u, p) = −x² − u² + pu,
H′x = −2x,  H′u = −2u + p.
That is,
dp/dt = 2x(t),  p(T) = 0,  u(t) = p(t)/2 ⟹ dx/dt = p(t)/2.
Hence
p(t) = c1 exp(t) + c2 exp(−t),  p(T) = 0,
c1 = 2x0 e^{−T}/(e^T + e^{−T}),  c2 = −2x0 e^{T}/(e^T + e^{−T}),
x(t) = (1/2) dp/dt = x0 (e^{T−t} + e^{t−T})/(e^T + e^{−T}).
With calculus of variations: the transversality condition is
0 = F′x′(T, x(T), x′(T)) = 2x′(T),
and
x(t) = c1 e^t + c2 e^{−t},
x0 = c1 + c2,
0 = c1 e^T − c2 e^{−T} ⟹ c1 = c2 e^{−2T},
x0 = c2 (1 + e^{−2T}),  c2 = x0/(1 + e^{−2T}) = x0 e^T/(e^T + e^{−T}).
Hence
x(t) = x0 e^{t−T}/(e^T + e^{−T}) + x0 e^{T−t}/(e^T + e^{−T}).
For the minimal surface problem the state and adjoint equations give
(d/dt) p² = (d/dt) x²,  x² = p² + C.
As p = x u/√(1 + u²),
x² = p² + C = (xu)²/(1 + u²) + C,
x² + x² u² = (xu)² + C(1 + u²),
x² = C(1 + u²) = C(1 + x′²).
Differentiating,
2x x′ = 2C x′ x″,
x″ − x/C = 0.
From here the solution is the same as above.
Example
Let us consider the problem and try to apply Arrow's condition:
∫_a^b 2π x √(1 + x′²) dt → min.
The Hamiltonian is H(t, x, u, p) = −x√(1 + u²) + pu. This is a concave
function in u. So for the maximization
0 = H′u = p − x u/√(1 + u²).
At p(t) this reads
p(t) = x u/√(1 + u²),
which is not solvable in u for every x, so Ĥ is not defined for every x and
Arrow's condition is not applicable.
Optimal control, minimum problems
One can handle a minimum problem by changing the sign of the goal
function. The Hamiltonian is
H = −f + pg,  dp/dt = −H′x = −(−f′x + p g′x).
Multiplying by −1 and introducing p̃ ≐ −p,
dp̃/dt = −(f′x + p̃ g′x) = −H̃′x,  where H̃ ≐ f + p̃ g.
That is, p̃ is a solution of the adjoint equation of the maximum problem
∫_{t0}^{t1} f dt → max,  x′ = g.
Example
Solve the problem
∫₀¹ x(t) dt → max,
x′(t) = x(t) + u(t),  x(0) = 0,  u(t) ∈ [−1, 1].
The Hamiltonian is
H(t, x, u, p) = x + p(x + u),
p′ = −H′x = −(1 + p),  p(1) = 0,
p(t) = e^{1−t} − 1.
As p(t) ≥ 0 on [0, 1], maximizing H in u gives u ≡ 1, so
x′(t) = x(t) + u(t) = x(t) + 1,
x(t) = Ce^t − 1,
and from x(0) = 0,
x(t) = exp(t) − 1.
Example
Study the problem
∫₀² ( u² − x ) dt → max,  x′ = u,  x(0) = 0,  0 ≤ u ≤ 1.
Here
p(t) = t − 2,
Ĥ(t, x, p(t)) = max_{u∈[0,1]} ( u² − x + (t − 2)u ) =
= −x if t ∈ [0, 1],
= −x + t − 1 if t ∈ (1, 2].
Example
Solve the problem
∫₀¹ ( 2x(t) − x²(t) ) dt → max,
x′(t) = u(t),  x(0) = 0,  x(1) = 0,  u(t) ∈ [−1, 1].
The Hamiltonian is
H(t, x, u, p) = 2x − x² + pu,
(d/dt) p = −H′x = 2x − 2.
As there is a terminal condition on x, there is no transversality condition on
p. The switching structure is u = 1 then u = −1: x(t) = t before the
switching time t*, and x(t) = C − t after. Continuity gives t* = C − t*, so
C = 2t*, and as x(1) = 0, C − 1 = 2t* − 1 = 0, t* = 1/2. Hence
x(t) = t if t < 1/2,
x(t) = 1 − t if t ≥ 1/2.
As
dp/dt = 2(x − 1) = 2(t − 1) if t < 1/2,  = 2(1 − t − 1) = −2t if t ≥ 1/2,
p(t) = ∫₀^t 2(s − 1) ds + C = t² − 2t + C for t < 1/2,
0 = p(1/2) = 1/4 − 1 + C ⟹ C = 3/4,
p(t) = −∫_{1/2}^t 2s ds = −t² + 1/4 for t ≥ 1/2.
With a salvage value, the problem is
∫₀^T f(t, x(t), u(t)) dt + φ(x(T)) → max,
x′(t) = g(t, x(t), u(t)),  x(0) = x0.
Hence
H = f(t, x(t), u(t)) + φ′(x(t)) g(t, x(t), u(t)) +
+ p(t) g(t, x(t), u(t)) =
= f(t, x(t), u(t)) + ( p(t) + φ′(x(t)) ) g(t, x(t), u(t)) =
= f(t, x(t), u(t)) + p̃(t) g(t, x(t), u(t)),
where p̃(t) ≐ p(t) + φ′(x(t)), so
(d/dt) p̃(t) = (d/dt) p(t) + φ″(x(t)) x′(t).
Hence
(d/dt) p̃(t) = −f′x − p̃ g′x,  p(T) = 0.
If H̃ = f + p̃ g, then
(d/dt) p̃(t) = −H̃′x,  p̃(T) = φ′(x(T)).
Example
Solve
∫₀¹ ( x′² − x ) dt + x²(1) → min,  x(0) = 0,
as an optimal control problem:
∫₀¹ ( u² − x ) dt + x²(1) → min,
x′ = u,  x(0) = 0.
H = u² − x + pu,  p′ = −H′x = 1,  p = t + C,
p(1) = 1 + C = 2x(1),  C = 2x(1) − 1,  p(t) = t + 2x(1) − 1,
0 = H′u = 2u + p = 2u + t + 2x(1) − 1,
u = −x(1) + 1/2 − t/2.
x(t) = x(0) + ∫₀^t ( −x(1) + 1/2 − s/2 ) ds =
= ( −x(1) + 1/2 ) t − t²/4.
At t = 1,
2x(1) = 1/2 − 1/4 = 1/4,  x(1) = 1/8,
x(t) = (3/8) t − t²/4.
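The candidate x(t) = 3t/8 − t²/4 can be checked against the conditions it must satisfy: the initial condition, the Euler–Lagrange equation 2x″ = −1, and the transversality condition x′(1) + x(1) = 0 coming from the salvage term x²(1):

```python
# Sketch: checking x(t) = 3t/8 - t^2/4 for
# min int_0^1 (x'^2 - x) dt + x(1)^2, x(0) = 0.
def x(t):
    return 3 * t / 8 - t * t / 4

def xdot(t):
    return 3 / 8 - t / 2

print(x(0.0) == 0.0)                     # initial condition
# x'' = -1/2 everywhere, so 2*x'' + 1 = 0 (Euler-Lagrange) holds identically.
print(abs(xdot(1.0) + x(1.0)) < 1e-12)   # transversality: x'(1) = -x(1) = -1/8
```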
Example
Solve the problem
∫₀¹ u² dt + x²(1) → min,
x′ = x + u,  x(0) = 1.
H = u² + p(x + u),
p′ = −H′x = −p,
p = C exp(−t),
p(1) = 2x(1) = C/e,
C = 2e x(1),  p = 2x(1) exp(−t + 1),
u = −p/2 = −x(1) e^{1−t}.
Solving x′ = x − x(1)e^{1−t},
x(t) = A e^t + (x(1)e/2) e^{−t},
x(1) = Ae + x(1)/2,  1 = x(0) = A + x(1)e/2,
A = x(1)/(2e),  1 = x(1)/(2e) + x(1)e/2 = x(1)(1 + e²)/(2e),
x(1) = 2e/(1 + e²),
x(t) = e^{2−t}/(1 + e²) + e^t/(1 + e²).
With calculus of variations, F = (x′ − x)², the extremals satisfy x″ = x:
x(t) = C1 exp(−t) + C2 exp(t).
The transversality condition is
0 = F′x′(1) + 2x(1) = 2(x′(1) − x(1)) + 2x(1),  x′(1) = 0,
0 = −C1/e + C2 e,  C1 = C2 e²,
1 = x(0) = C1 + C2,
so C1 = e²/(1 + e²) and C2 = 1/(1 + e²).
Definition
An equilibrium point x* of a dynamic system X is called attractive if for
any initial value x0 the solution X(t, x0) converges to x*.
Definition
An equilibrium point x* of a dynamic system is (Lyapunov) stable if for
any ε > 0 there is a δ > 0 such that for any ‖x0 − x*‖ < δ,
‖X(t, x0) − x*‖ < ε. The system is (Lyapunov) asymptotically stable if it
is stable and attractive. If this is true only for some neighborhood of x*,
then x* is called locally asymptotically stable.
Example
The solution y = 0 of the system
a y″ + b y′ + c y = 0
is asymptotically stable if the roots of the characteristic polynomial have
negative real parts; with purely imaginary roots the solutions
x(t) = c1 cos θt + c2 sin θt
are stable but not attractive.
Example
The solution y = 0 of the difference equation
y_{t+2} + a y_{t+1} + b y_t = 0
is asymptotically stable if the roots of the characteristic polynomial are less
than one in absolute value.
Example
There is an example of a non-linear system which is attractive but not
locally stable:
x1′ = ( x1²(x2 − x1) + x2⁵ ) / ( (x1² + x2²)(1 + (x1² + x2²)²) ),
x2′ = ( x2²(x2 − 2x1) ) / ( (x1² + x2²)(1 + (x1² + x2²)²) ).
(Phase portrait figure omitted.)
A saddle:
x′ = x + y,  y′ = 4x + y.
(Phase portrait figure omitted.)
A stable node:
x′ = −3x + y,  y′ = x − 2y.
(Phase portrait figure omitted.)
An unstable node:
x′ = ( 1 1 ; 1 2 ) x.
(Phase portrait figure omitted; it shows x′ = x − y, y′ = x + 3y.)
Phase diagram
A focus:
x′ = ( −1/2 1 ; −1 −1/2 ) x.
(Phase portrait figure omitted.)
1. Identify the four regions given by x1′ = 0 and x2′ = 0. These curves are
called nullclines or demarcation lines.
2. Identify the directions of increase in all four regions.
3. We have a node when both eigenvalues are real and of the same sign.
The node is stable when the eigenvalues are negative and unstable
when they are positive.
4. When the eigenvalues are real and of opposite signs we have a saddle
point. The saddle is always unstable.
5. We have a focus (sometimes called a spiral point) when the eigenvalues are
complex conjugate. The focus is stable when the eigenvalues have
negative real part and unstable when they have positive real part.
6. The center equilibrium occurs when the system has eigenvalues on the
imaginary axis, namely one pair of pure imaginary eigenvalues.
Centers in linear systems have concentric periodic orbits.
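The eigenvalue classification above can be put into a small routine and tested on the three phase portraits shown earlier (`classify` is an illustrative name; degenerate boundary cases such as repeated or zero eigenvalues are not handled in this sketch):

```python
# Sketch: classifying the equilibrium of x' = Mx, M = [[a, b], [c, d]],
# from the trace and determinant (eigenvalues of a 2x2 matrix).
def classify(a, b, c, d):
    tr, det = a + d, a * d - b * c
    disc = tr * tr - 4 * det
    if disc >= 0:                          # real eigenvalues
        l1 = (tr + disc ** 0.5) / 2
        l2 = (tr - disc ** 0.5) / 2
        if l1 * l2 < 0:
            return "saddle"
        return "stable node" if tr < 0 else "unstable node"
    if tr == 0:                            # pure imaginary pair
        return "center"
    return "stable focus" if tr < 0 else "unstable focus"

print(classify(1, 1, 4, 1))          # saddle       (eigenvalues 3 and -1)
print(classify(-3, 1, 1, -2))        # stable node  (both eigenvalues negative)
print(classify(-0.5, 1, -1, -0.5))   # stable focus (-1/2 +- i)
```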