
Contents

Chapter 1. Ordinary Differential Equations

1.1. Introduction
1.2. First order linear ODE system with constant coefficients
1.3. Second order scalar ODEs
1.4. General linear systems
1.5. Uniqueness of solutions
1.6. Existence of solutions
1.7. Qualitative behavior of ODEs

CHAPTER 1

Ordinary Differential Equations

1.1. Introduction
The simplest examples of ODEs are found in growth models such
as population growth or income growth. Denote by x(t) the quantity
of interest at time t. If the rate of growth is constant, then we have
\[
\frac{1}{x}\frac{dx}{dt} = c,
\]
where c is the growth rate. The left hand side of the equation above is
the mathematical expression of growth rate. This gives us:
\[
\frac{dx}{dt} = cx.
\]
The solution of this ODE is given by:
\[
x(t) = x_0 e^{ct},
\]
where x0 = x(0). Constant growth rate leads to exponential growth
behavior.
In many cases, this unbounded exponential growth behavior is not
realistic. A better model would entail a decreasing growth rate. The
simplest such model is
\[
\frac{1}{x}\frac{dx}{dt} = c - kx.
\]
This is the logistic growth model. As we will see later, this model
predicts a bounded growth behavior.
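As a quick illustration of the bounded growth predicted by the logistic model, here is a minimal numerical sketch (assuming NumPy and SciPy are available; the values of c, k and x0 are arbitrary illustrative choices):

    import numpy as np
    from scipy.integrate import solve_ivp

    # Illustrative parameter values (not from the text).
    c, k, x0 = 1.0, 0.5, 0.1

    # Logistic model: dx/dt = x*(c - k*x); the solution levels off at c/k.
    sol = solve_ivp(lambda t, x: x * (c - k * x), (0.0, 20.0), [x0],
                    t_eval=np.linspace(0.0, 20.0, 5))

    for t, x in zip(sol.t, sol.y[0]):
        print(f"t = {t:5.1f}   x(t) = {x:.4f}")   # tends to c/k = 2.0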
A rich source of ODE models comes from mechanics. Consider the
motion of a particle under some external force. Denote by x(t) the
position of the particle at time t. From Newton’s second law, we have
\[
m\ddot{x} = F(x), \tag{1.1}
\]
where \( m \) is the mass of the particle and \( F(x) \) is the external force acting on the particle when it is at position \( x \). To close the system, we need a
constitutive relation that serves to specify F. For example, if the force
is given by a linear spring, then we have \( F(x) = -kx \), assuming that
the rest position of the particle is at 0. Here k is the spring constant. If
the force comes from gravitational force, then F is given by Newton’s
law of gravity.
In any case, given F , we can then solve the ODE (1.1) to predict
the behavior of the particle. In other words, we have turned a physics
problem into a mathematics problem.
The general form of an ODE in \( \mathbb{R}^n \) is
\[
\frac{dx}{dt} = f(x, t), \qquad f : \mathbb{R}^n \times \mathbb{R} \to \mathbb{R}^n,
\]
where \( x(t) = (x_1(t), x_2(t), \dots, x_n(t)) \in \mathbb{R}^n \). If the function \( f \) is independent of time \( t \), the system is called autonomous; otherwise, it is called non-autonomous.
We will consider the initial value problem, i.e., we assume we are
given the value of x at the initial time, say at t = 0, x(0) = x0 , and we
look for its behavior at other times:
\[
\begin{cases}
\dfrac{dx}{dt} = f(x), \\[4pt]
x(0) = x_0.
\end{cases}
\tag{1.2}
\]

The main questions we are interested in are:

• Does (1.2) have a solution? Is the solution unique? If the
answers to these questions are negative, then we did not pose
our problem in the right way and we have to rethink how to
formulate the problem.
• How can we solve (1.2)? There are very few cases for which
one can solve the ODE analytically, by which we mean that
the solutions can be expressed in terms of integrals. We will
discuss some of these cases. In the general case, one has to
be content with solving the ODEs approximately, either using
numerical methods or using asymptotic methods.

1.2. First order linear ODE system with constant coefficients


Consider the linear ODE system with constant coefficients
\[
\begin{cases}
\dfrac{dx}{dt} = Ax, \\[4pt]
x(0) = x_0,
\end{cases}
\tag{1.3}
\]
where A ∈ Rn×n is a constant matrix. This is the most general class
of systems that can be solved analytically.
When \( n = 1 \), the solution is \( x(t) = x_0 e^{at} = e^{at} x_0 \).
We claim that in the general case, the solution is given by:
\[
x(t) = e^{At} x_0,
\]
where the exponential matrix function is defined to be
\[
e^{At} \equiv \sum_{n=0}^{+\infty} \frac{1}{n!} (At)^n.
\]
It is easy to check that this is indeed a solution and the uniqueness
theorem we will discuss later tells us that this has to be the only solution.
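The claim can also be checked numerically by comparing the matrix-exponential formula with a direct integration of the ODE; the sketch below assumes SciPy is available, and the matrix A and initial value x0 are arbitrary illustrative choices:

    import numpy as np
    from scipy.linalg import expm
    from scipy.integrate import solve_ivp

    # Arbitrary example data, for illustration only.
    A = np.array([[0.0, 1.0], [-2.0, -0.5]])
    x0 = np.array([1.0, 0.0])
    t = 3.0

    x_expm = expm(A * t) @ x0                      # x(t) = e^{At} x0
    sol = solve_ivp(lambda s, x: A @ x, (0.0, t), x0, rtol=1e-10, atol=1e-12)
    x_ode = sol.y[:, -1]

    print(x_expm, x_ode)   # the two results should agree to high accuracy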
Take n = 2 and let us examine two special cases first.
Case 1:
\[
A = \begin{pmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{pmatrix}
\;\Longrightarrow\;
e^{At} = \begin{pmatrix} e^{\lambda_1 t} & 0 \\ 0 & e^{\lambda_2 t} \end{pmatrix},
\]
since
\[
\sum_{n=0}^{+\infty} \frac{\lambda^n t^n}{n!} = e^{\lambda t}.
\]
Case 2:
\[
A = \begin{pmatrix} \lambda & 1 \\ 0 & \lambda \end{pmatrix}
\;\Longrightarrow\;
e^{At} = \;?
\]
Observe that
\[
A^2 = \begin{pmatrix} \lambda & 1 \\ 0 & \lambda \end{pmatrix}
\begin{pmatrix} \lambda & 1 \\ 0 & \lambda \end{pmatrix}
= \begin{pmatrix} \lambda^2 & 2\lambda \\ 0 & \lambda^2 \end{pmatrix}.
\]
In fact, for any integer \( n \),
\[
A^n = \begin{pmatrix} \lambda^n & n\lambda^{n-1} \\ 0 & \lambda^n \end{pmatrix}. \tag{1.4}
\]

One can check this by induction:
\[
A^{n+1} = A^n A
= \begin{pmatrix} \lambda^n & n\lambda^{n-1} \\ 0 & \lambda^n \end{pmatrix}
\begin{pmatrix} \lambda & 1 \\ 0 & \lambda \end{pmatrix}
= \begin{pmatrix} \lambda^{n+1} & \lambda^n + n\lambda^n \\ 0 & \lambda^{n+1} \end{pmatrix}.
\]

Using (1.4), we have
\[
e^{At} = \sum_{n=0}^{+\infty} \frac{1}{n!} t^n A^n
= \sum_{n=0}^{+\infty} \frac{1}{n!} t^n
\begin{pmatrix} \lambda^n & n\lambda^{n-1} \\ 0 & \lambda^n \end{pmatrix}
= \begin{pmatrix} e^{\lambda t} & t e^{\lambda t} \\ 0 & e^{\lambda t} \end{pmatrix},
\]
since
\[
\sum_{n=0}^{+\infty} \frac{1}{n!} t^n (n\lambda^{n-1})
= t \sum_{n=1}^{+\infty} \frac{1}{(n-1)!} t^{n-1} \lambda^{n-1}
= t \sum_{n=0}^{+\infty} \frac{1}{n!} t^{n} \lambda^{n}
= t e^{\lambda t}.
\]
A general \( 2 \times 2 \) matrix \( A \) admits the decomposition \( A = P^{-1}\Lambda P \), where \( P \) is a non-singular matrix and \( \Lambda \) is either of the form
\( \begin{pmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{pmatrix} \)
or of the form
\( \begin{pmatrix} \lambda & 1 \\ 0 & \lambda \end{pmatrix} \). Thus,
\[
A^n = (P^{-1}\Lambda P)(P^{-1}\Lambda P) \cdots (P^{-1}\Lambda P) = P^{-1}\Lambda^n P,
\]
and
\[
e^{At} = \sum_{n=0}^{+\infty} \frac{1}{n!} t^n A^n
= \sum_{n=0}^{+\infty} \frac{1}{n!} t^n P^{-1}\Lambda^n P
= P^{-1}\left( \sum_{n=0}^{+\infty} \frac{1}{n!} t^n \Lambda^n \right) P
= P^{-1} e^{\Lambda t} P.
\]

This discussion is also valid for general values of \( n \). In that case we need the following theorem on the Jordan form:

Theorem 1. Let \( A \) be an \( n \times n \) matrix. There exists a non-singular matrix \( P \) (which could be complex) and a block diagonal matrix (the Jordan form)
\[
\Lambda = \begin{pmatrix} \Lambda_1 & & & \\ & \Lambda_2 & & \\ & & \ddots & \\ & & & \Lambda_d \end{pmatrix}_{n \times n}
\]
such that
\[
A = P^{-1}\Lambda P,
\]
where the Jordan blocks have the form
\[
\Lambda_i = \begin{pmatrix} \lambda_i & 1 & & \\ & \lambda_i & 1 & \\ & & \ddots & 1 \\ & & & \lambda_i \end{pmatrix}_{n_i \times n_i}.
\]

Naturally, the sum of the \( n_i \)'s is \( n \). If \( A \) can be diagonalized, i.e., if \( A \) has \( n \) independent eigenvectors, then all the \( n_i \)'s are equal to 1.
Note: in general \( e^{A+B} \ne e^A e^B \) unless \( A \) and \( B \) commute, i.e., \( AB = BA \).

Corollary 2. If the constant matrix \( A \in \mathbb{R}^{n \times n} \) can be diagonalized, i.e., \( A \) has \( n \) linearly independent eigenvectors \( \{v_j\} \) \( (j = 1, \dots, n) \) corresponding to the eigenvalues \( \{\lambda_j\} \), then the general solution to the ODE \( \frac{dx}{dt} = Ax \) is
\[
x(t) = \sum_{j=1}^{n} c_j e^{\lambda_j t} v_j,
\]
where the \( c_j \) are arbitrary constants.

Proof. The Jordan form of \( A \) is the diagonal matrix
\[
\Lambda = \operatorname{diag}\{\lambda_1, \lambda_2, \dots, \lambda_n\}
= \begin{pmatrix} \lambda_1 & & & \\ & \lambda_2 & & \\ & & \ddots & \\ & & & \lambda_n \end{pmatrix}.
\]
We note that
\[
A[v_1, \dots, v_n] = [\lambda_1 v_1, \dots, \lambda_n v_n] = [v_1, \dots, v_n]\Lambda.
\]
Let \( P = [v_1, \dots, v_n] \); then \( P \) is non-singular and
\[
A = P \Lambda P^{-1}.
\]
Thus,
\[
e^{At} = P e^{\Lambda t} P^{-1},
\]
where
\[
e^{\Lambda t} = \operatorname{diag}\{e^{\lambda_1 t}, e^{\lambda_2 t}, \dots, e^{\lambda_n t}\}.
\]

The general solution is \( x(t) = e^{At}\tilde{c} \), where \( \tilde{c} \) is an arbitrary constant column vector. Plugging in \( e^{At} = P e^{\Lambda t} P^{-1} \) and letting \( c = P^{-1}\tilde{c} \), we have
\[
x(t) = P e^{\Lambda t} c
= [v_1, v_2, \dots, v_n]
\begin{pmatrix} c_1 e^{\lambda_1 t} \\ c_2 e^{\lambda_2 t} \\ \vdots \\ c_n e^{\lambda_n t} \end{pmatrix}
= \sum_{j=1}^{n} c_j e^{\lambda_j t} v_j.
\]

We use this corollary to look at two specific 2 × 2 examples.
Example.
•
\[
A = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}.
\]
The eigenvalues and eigenvectors are \( \lambda_1 = 1 \), \( \lambda_2 = -1 \) and \( v_1 = (1, 1)^T \), \( v_2 = (1, -1)^T \), respectively. Therefore the general solution is
\[
x(t) = c_1 e^{t} v_1 + c_2 e^{-t} v_2 = (c_1 e^t + c_2 e^{-t},\, c_1 e^t - c_2 e^{-t})^T.
\]
•
\[
A = \begin{pmatrix} 0 & 1 \\ -2 & 0 \end{pmatrix}.
\]
The eigenvalues and eigenvectors are \( \lambda_1 = \sqrt{2}\,i \), \( \lambda_2 = \bar{\lambda}_1 = -\sqrt{2}\,i \) and \( v_1 = (1, \sqrt{2}\,i)^T \), \( v_2 = \bar{v}_1 = (1, -\sqrt{2}\,i)^T \), respectively. Therefore the general solution is
\[
x(t) = c_1 e^{\lambda_1 t} v_1 + c_2 e^{\lambda_2 t} v_2 = c_1 e^{\lambda_1 t} v_1 + c_2 \overline{e^{\lambda_1 t} v_1}.
\]
If \( c_1 = c_2 \), we get one solution \( \operatorname{Re}(e^{\lambda_1 t} v_1) \); if \( c_1 = -c_2 \), we get another solution \( \operatorname{Im}(e^{\lambda_1 t} v_1) \). Therefore, the general real-valued solution is
\[
\begin{aligned}
x(t) &= \tilde{c}_1 \operatorname{Re}(e^{\lambda_1 t} v_1) + \tilde{c}_2 \operatorname{Im}(e^{\lambda_1 t} v_1) \\
&= \tilde{c}_1 \begin{pmatrix} \cos \sqrt{2}\,t \\ -\sqrt{2}\sin \sqrt{2}\,t \end{pmatrix}
+ \tilde{c}_2 \begin{pmatrix} \sin \sqrt{2}\,t \\ \sqrt{2}\cos \sqrt{2}\,t \end{pmatrix} \\
&= \sqrt{\tilde{c}_1^2 + \tilde{c}_2^2}
\begin{pmatrix} \cos(-\sqrt{2}\,t + \varphi) \\ \sqrt{2}\sin(-\sqrt{2}\,t + \varphi) \end{pmatrix},
\end{aligned}
\]
where \( \cos\varphi = \tilde{c}_1 / \sqrt{\tilde{c}_1^2 + \tilde{c}_2^2} \), \( \sin\varphi = \tilde{c}_2 / \sqrt{\tilde{c}_1^2 + \tilde{c}_2^2} \).
Observe that the two components of \( x(t) \), \( x_1(t) \) and \( x_2(t) \), satisfy
\[
(x_1(t))^2 + \frac{(x_2(t))^2}{2} = \tilde{c}_1^2 + \tilde{c}_2^2,
\]
i.e., \( x(t) \) lies on an ellipse.
In summary, we see that the solutions to the linear ODE system
ẋ = Ax form a linear space of dimension n.
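The eigenvector formula of Corollary 2 can be verified numerically for the second example above; the following sketch (NumPy assumed) builds x(t) from the complex eigenpairs and checks that the trajectory stays on the ellipse x1² + x2²/2 = const:

    import numpy as np

    A = np.array([[0.0, 1.0], [-2.0, 0.0]])
    lam, V = np.linalg.eig(A)          # eigenvalues ±i*sqrt(2), complex eigenvectors

    x0 = np.array([1.0, 0.0])          # arbitrary real initial condition
    c = np.linalg.solve(V, x0)         # coefficients c_j in x0 = sum_j c_j v_j

    for t in (0.0, 0.5, 1.0, 2.0):
        x = (V @ (c * np.exp(lam * t))).real     # x(t) = sum_j c_j e^{lam_j t} v_j
        print(t, x, x[0]**2 + x[1]**2 / 2.0)     # the last number stays constant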

Appendix: Review on Jordan form


λ is an eigenvalue of A if there exists x ∈ Rn and x ̸= 0 such that

Ax = λx,

or
(A − λI)x = 0.

It is equivalent to say that \( A - \lambda I \) is singular, or \( \det(A - \lambda I) = 0 \), or that \( \lambda \) is a root of the polynomial \( \det(A - \lambda I) \).
For example, if \( A = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix} \), then
\[
\det(A - \lambda I) = \det \begin{pmatrix} 1-\lambda & 1 \\ 0 & 1-\lambda \end{pmatrix} = (1 - \lambda)^2 = 0.
\]

The only eigenvalue is \( \lambda = 1 \). To solve \( (A - \lambda I)x = 0 \),
\[
(A - \lambda I)x = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}
\begin{pmatrix} x_1 \\ x_2 \end{pmatrix}
= \begin{pmatrix} x_2 \\ 0 \end{pmatrix} = 0,
\]
thus \( x_2 = 0 \) and the eigenvector is \( x = (1, 0)^T \).


Given two matrices A and B, if there exists a nonsingular matrix
P such that B = P −1 AP , then we say A is similar to B, or, A ∼ B.
If A ∼ B, then they must have the same eigenvalues.
The possible Jordan forms for \( n = 2 \) are
\( \begin{pmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{pmatrix} \)
(\( \lambda_1, \lambda_2 \) can be either different or the same), or
\( \begin{pmatrix} \lambda & 1 \\ 0 & \lambda \end{pmatrix} \).
The possible Jordan forms for \( n = 3 \) are
• \( \begin{pmatrix} \lambda & 1 & 0 \\ 0 & \lambda & 1 \\ 0 & 0 & \lambda \end{pmatrix} \) (one Jordan block; one eigenvalue with algebraic multiplicity 3 and a one-dimensional eigenspace corresponding to this eigenvalue),
• \( \begin{pmatrix} \lambda_1 & 1 & 0 \\ 0 & \lambda_1 & 0 \\ 0 & 0 & \lambda_2 \end{pmatrix} \) (two Jordan blocks; \( \lambda_1 \) has algebraic multiplicity 2 and geometric multiplicity 1, which corresponds to the first block; \( \lambda_2 \) has algebraic multiplicity 1 and geometric multiplicity 1, corresponding to the second block),
• \( \begin{pmatrix} \lambda_1 & 0 & 0 \\ 0 & \lambda_2 & 0 \\ 0 & 0 & \lambda_3 \end{pmatrix} \) (three Jordan blocks; \( \lambda_{1,2,3} \) could be the same; each eigenvalue has equal algebraic and geometric multiplicities).

How can we find the nonsingular matrix \( P \)? Let \( n = 2 \) and \( Q = P^{-1} \), and write \( Q = [q_1, q_2] \) in column vector form; then
\[
A = P^{-1}\Lambda P = Q\Lambda Q^{-1},
\]
or
\[
A[q_1, q_2] = [q_1, q_2]\Lambda. \tag{1.5}
\]
If \( \Lambda = \begin{pmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{pmatrix} \), then the above equation gives
\[
Aq_1 = \lambda_1 q_1, \qquad Aq_2 = \lambda_2 q_2,
\]
i.e., \( q_1, q_2 \) are two eigenvectors of \( A \). The very form of \( \Lambda \) guarantees that \( q_1, q_2 \) can be chosen linearly independent. Thus, \( Q = [q_1, q_2] \) is nonsingular.
In the case when \( \Lambda = \begin{pmatrix} \lambda & 1 \\ 0 & \lambda \end{pmatrix} \), we obtain from (1.5) that
\[
Aq_1 = \lambda q_1,
\]
and
\[
Aq_2 = q_1 + \lambda q_2. \tag{1.6}
\]
Thus, \( q_1 \) is an eigenvector of \( A \). To obtain \( q_2 \), note that \( (A - \lambda I)q_2 = q_1 \) by (1.6), so \( (A - \lambda I)^2 q_2 = (A - \lambda I)q_1 = 0 \). Since \( A - \lambda I \) is singular, \( (A - \lambda I)^2 q_2 = 0 \) has at least one nonzero solution \( q_2 \), which can also be chosen to be linearly independent of \( q_1 \) (in fact, \( (A - \lambda I)^2 = 0 \) in this case).

1.3. Second order scalar ODEs


We begin with the second order ODE:

\[
\ddot{x} + a\dot{x} + bx = 0, \tag{1.7}
\]

where the constants \( a, b \in \mathbb{R} \). This second order scalar equation can be transformed into a first order \( 2 \times 2 \) system,
\[
\begin{cases}
\dot{x} = v, \\
\dot{v} = -av - bx,
\end{cases}
\]
i.e.,
\[
\frac{d}{dt}\begin{pmatrix} x \\ v \end{pmatrix}
= \begin{pmatrix} 0 & 1 \\ -b & -a \end{pmatrix}
\begin{pmatrix} x \\ v \end{pmatrix}.
\]
We can then solve this problem using the methods discussed in the
last section. However, we can also proceed directly by trying to find
solutions in the form \( x(t) = e^{\lambda t} \). Then \( \dot{x} = \lambda e^{\lambda t} \), \( \ddot{x} = \lambda^2 e^{\lambda t} \), and \( \ddot{x} + a\dot{x} + bx = (\lambda^2 + a\lambda + b)e^{\lambda t} \). So \( \lambda \) must satisfy
\[
\lambda^2 + a\lambda + b = 0.
\]
This is called the characteristic equation for the ODE. The roots of this quadratic equation are \( \lambda_{1,2} = \frac{1}{2}\left(-a \pm \sqrt{a^2 - 4b}\right) \).
General solutions can now be constructed using the superposition
principle, which is a feature of all linear problems.

Superposition principle: If \( x_1(t), x_2(t) \) are both solutions of a linear equation, then \( x(t) = c_1 x_1(t) + c_2 x_2(t) \) is also a solution, where \( c_{1,2} \) are any constants.
Going back to the second order ODE, since it is equivalent to a
first order linear system with 2 components, we know that its solution
space must be two dimensional, i.e. there must be two linearly inde-
pendent solutions and the general solution must have two independent
constants.
• If \( \lambda_1 \ne \lambda_2 \), then \( x(t) = c_1 e^{\lambda_1 t} + c_2 e^{\lambda_2 t} \) is the general solution of the ODE.
• If \( \lambda_1 = \lambda_2 = \lambda \), then \( \lambda = -a/2 \). It is easy to see that there is an additional solution \( x_2(t) = t e^{\lambda t} \) besides \( x_1(t) = e^{\lambda t} \). Indeed, we have
\[
\begin{aligned}
\ddot{x}_2 + a\dot{x}_2 + b x_2
&= (2\lambda e^{\lambda t} + \lambda^2 t e^{\lambda t}) + a(e^{\lambda t} + \lambda t e^{\lambda t}) + b t e^{\lambda t} \\
&= e^{\lambda t}(2\lambda + a) + e^{\lambda t} t (\lambda^2 + a\lambda + b) \\
&= e^{\lambda t} \cdot 0 + e^{\lambda t} t \cdot 0 \\
&= 0.
\end{aligned}
\]
Therefore, when \( \lambda_1 = \lambda_2 = \lambda \), the general solution is \( x(t) = c_1 e^{\lambda t} + c_2 t e^{\lambda t} \).
Example: Consider a linear spring described by
\[
m\ddot{x} = -kx,
\]
or
\[
\ddot{x} = -\frac{k}{m} x.
\]
The characteristic equation is
\[
\lambda^2 = -\frac{k}{m} < 0.
\]
The roots of this equation are
\[
\lambda = \pm i\omega, \qquad \omega = \sqrt{k/m}.
\]
From this we obtain two solutions \( x_1(t) = e^{i\omega t} \), \( x_2(t) = e^{-i\omega t} \). According to the superposition principle, the general solution is given by
\[
x(t) = c_1 e^{i\omega t} + c_2 e^{-i\omega t}.
\]

Since \( e^{\pm i\omega t} = \cos \omega t \pm i \sin \omega t \), we can rewrite the general solution as
\[
\begin{aligned}
x(t) &= c_1 e^{i\omega t} + c_2 e^{-i\omega t} \\
&= \tilde{c}_1 \cos \omega t + \tilde{c}_2 \sin \omega t \\
&= \sqrt{\tilde{c}_1^2 + \tilde{c}_2^2}
\left( \frac{\tilde{c}_1}{\sqrt{\tilde{c}_1^2 + \tilde{c}_2^2}} \cos \omega t
+ \frac{\tilde{c}_2}{\sqrt{\tilde{c}_1^2 + \tilde{c}_2^2}} \sin \omega t \right) \\
&= \sqrt{\tilde{c}_1^2 + \tilde{c}_2^2} \cos(\omega t + \varphi),
\end{aligned}
\]
where \( \cos\varphi = \tilde{c}_1 / \sqrt{\tilde{c}_1^2 + \tilde{c}_2^2} \), \( \sin\varphi = -\tilde{c}_2 / \sqrt{\tilde{c}_1^2 + \tilde{c}_2^2} \). This describes a harmonic oscillation: \( \sqrt{\tilde{c}_1^2 + \tilde{c}_2^2} \) is the amplitude, \( T = 2\pi/\omega \) is the period, and \( \varphi \) is the initial phase.
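A minimal numerical check of the harmonic oscillator (SciPy assumed; the values of m, k and the initial data are arbitrary illustrative choices) compares the integrated solution with x0 cos ωt + (v0/ω) sin ωt:

    import numpy as np
    from scipy.integrate import solve_ivp

    m, k = 1.0, 4.0                 # illustrative values; omega = 2
    x0, v0 = 1.0, 0.0
    omega = np.sqrt(k / m)

    # First order form: (x, v)' = (v, -(k/m) x).
    sol = solve_ivp(lambda t, y: [y[1], -(k / m) * y[0]], (0.0, 10.0),
                    [x0, v0], t_eval=[0.0, 1.0, np.pi, 10.0], rtol=1e-10)

    for t, x in zip(sol.t, sol.y[0]):
        exact = x0 * np.cos(omega * t) + (v0 / omega) * np.sin(omega * t)
        print(f"t = {t:6.3f}  numeric = {x:+.6f}  exact = {exact:+.6f}")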

1.4. General linear systems


Consider the ODE system in \( \mathbb{R}^n \):
\[
\frac{dx}{dt} = A(t)x + g(t). \tag{1.8}
\]
First, let us consider the scalar case, \( n = 1 \). Let \( A(t) \) be the primitive function of \( a(t) \), i.e., \( A(t) = \int_0^t a(s)\,ds \). Then
\[
\begin{aligned}
\frac{d}{dt}\left( e^{-A(t)} x \right)
&= e^{-A(t)} \frac{dx}{dt} - \frac{dA(t)}{dt} e^{-A(t)} x \\
&= e^{-A(t)} \frac{dx}{dt} - a(t) e^{-A(t)} x \\
&= e^{-A(t)} \left( \frac{dx}{dt} - a(t)x \right) \\
&= e^{-A(t)} g(t).
\end{aligned}
\]
Integrating the above from 0 to \( t \), we have
\[
e^{-A(t)} x(t) - x(0) = \int_0^t e^{-A(s)} g(s)\,ds,
\]
or
\[
x(t) = e^{A(t)} x(0) + \int_0^t e^{A(t) - A(s)} g(s)\,ds.
\]

This calculation can be generalized to the case when \( n > 1 \). For example, let us look at the case when \( n = 2 \), \( x = (x_1, x_2)^T \),
\[
A(t) = \begin{pmatrix} a_{11}(t) & a_{12}(t) \\ a_{21}(t) & a_{22}(t) \end{pmatrix}.
\]
We define two special solutions \( \psi_1(t) = (\psi_{11}(t), \psi_{12}(t))^T \) and \( \psi_2(t) = (\psi_{21}(t), \psi_{22}(t))^T \) of the homogeneous ODE
\[
\frac{dx}{dt} = A(t)x \tag{1.9}
\]
with initial conditions \( \psi_1(0) = (1, 0)^T \) and \( \psi_2(0) = (0, 1)^T \). We will see later that these solutions are uniquely defined, as long as \( A \) is continuous and bounded. We call \( \psi_1 \) and \( \psi_2 \) fundamental solutions and define the fundamental matrix
\[
\Psi(t) = \left[ \psi_1(t), \psi_2(t) \right]
= \begin{pmatrix} \psi_{11}(t) & \psi_{21}(t) \\ \psi_{12}(t) & \psi_{22}(t) \end{pmatrix}. \tag{1.10}
\]

Consider a general initial condition \( x(0) = x^0 = (x_1^0, x_2^0)^T \). We claim that
\[
x(t) = x_1^0 \psi_1(t) + x_2^0 \psi_2(t) \tag{1.11}
\]
is a solution of (1.9) with the initial condition \( x(0) = x^0 \). Indeed,
\[
\begin{aligned}
\frac{dx}{dt} &= \frac{d}{dt}\left( x_1^0 \psi_1(t) + x_2^0 \psi_2(t) \right) \\
&= x_1^0 \frac{d}{dt}\psi_1(t) + x_2^0 \frac{d}{dt}\psi_2(t) \\
&= x_1^0 A(t)\psi_1(t) + x_2^0 A(t)\psi_2(t) \\
&= A(t)x(t).
\end{aligned}
\]

The solution \( x(t) \) in (1.11) can be written in the form
\[
x(t) = (\psi_1(t), \psi_2(t)) \begin{pmatrix} x_1^0 \\ x_2^0 \end{pmatrix} = \Psi(t)x^0.
\]
Thus, \( \Psi(t) \) plays the role of a solution operator; it maps the initial condition \( x^0 \) to the solution at time \( t \), \( x(t) \). For example, if \( A(t) \) is a constant matrix \( A \), then \( \Psi(t) = e^{At} \).

We can also claim that there is a unique matrix \( \Psi(t) \) (which is the fundamental matrix) such that
\[
\begin{cases}
\dfrac{d\Psi(t)}{dt} = A(t)\Psi(t), \\[4pt]
\Psi(0) = I, \text{ the identity matrix.}
\end{cases}
\]

Proposition 3 (Abel's formula). The fundamental matrix (1.10) satisfies
\[
\frac{d}{dt}|\Psi(t)| = \operatorname{trace}(A(t))\,|\Psi(t)|, \tag{1.12}
\]
where \( |\Psi(t)| = \det \Psi(t) \).

Proof.
\[
\frac{d}{dt}|\Psi(t)|
= \frac{d}{dt}\begin{vmatrix} \psi_{11}(t) & \psi_{21}(t) \\ \psi_{12}(t) & \psi_{22}(t) \end{vmatrix}
= \begin{vmatrix} \dot{\psi}_{11}(t) & \dot{\psi}_{21}(t) \\ \psi_{12}(t) & \psi_{22}(t) \end{vmatrix}
+ \begin{vmatrix} \psi_{11}(t) & \psi_{21}(t) \\ \dot{\psi}_{12}(t) & \dot{\psi}_{22}(t) \end{vmatrix}.
\]
Note that \( \psi_1 \) and \( \psi_2 \) are solutions of the homogeneous ODE \( \dot{x} = A(t)x \). Thus,
\[
\frac{d}{dt}|\Psi(t)|
= \begin{vmatrix} a_{11}\psi_{11} + a_{12}\psi_{12} & a_{11}\psi_{21} + a_{12}\psi_{22} \\ \psi_{12} & \psi_{22} \end{vmatrix}
+ \begin{vmatrix} \psi_{11} & \psi_{21} \\ a_{21}\psi_{11} + a_{22}\psi_{12} & a_{21}\psi_{21} + a_{22}\psi_{22} \end{vmatrix}.
\]
Subtracting \( a_{12} \) times the second row from the first row in the first determinant, and \( a_{21} \) times the first row from the second row in the second determinant, we get
\[
\frac{d}{dt}|\Psi(t)|
= \begin{vmatrix} a_{11}\psi_{11} & a_{11}\psi_{21} \\ \psi_{12} & \psi_{22} \end{vmatrix}
+ \begin{vmatrix} \psi_{11} & \psi_{21} \\ a_{22}\psi_{12} & a_{22}\psi_{22} \end{vmatrix}
= a_{11}|\Psi(t)| + a_{22}|\Psi(t)|
= \operatorname{trace}(A(t))\,|\Psi(t)|.
\]



Equation (1.12) together with \( |\Psi(0)| = |I| = 1 \) implies that
\[
|\Psi(t)| = \exp\left( \int_0^t \operatorname{trace}(A(s))\,ds \right) > 0,
\]
as long as \( \operatorname{trace}(A(t)) \) is integrable.
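Abel's formula can be verified numerically for a concrete time-dependent A(t). The sketch below (SciPy assumed; A(t) is an arbitrary illustrative choice) integrates the fundamental matrix and compares det Ψ(T) with exp(∫₀ᵀ trace A(s) ds):

    import numpy as np
    from scipy.integrate import solve_ivp, quad

    def A(t):
        # Arbitrary time-dependent coefficient matrix, for illustration only.
        return np.array([[np.sin(t), 1.0], [0.5, -t]])

    def rhs(t, psi):
        # Matrix ODE Psi' = A(t) Psi, with Psi stored as a flattened 2x2 array.
        return (A(t) @ psi.reshape(2, 2)).ravel()

    T = 2.0
    sol = solve_ivp(rhs, (0.0, T), np.eye(2).ravel(), rtol=1e-10, atol=1e-12)
    det_psi = np.linalg.det(sol.y[:, -1].reshape(2, 2))

    integral, _ = quad(lambda s: np.trace(A(s)), 0.0, T)
    print(det_psi, np.exp(integral))   # the two numbers should agree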


Now we look at the inhomogeneous problem,
\[
\begin{cases}
\dfrac{dx}{dt} = A(t)x + g(t), \\[4pt]
x(0) = x^0.
\end{cases}
\tag{1.13}
\]

Physically, the term g(t) is the external driving force.

Proposition 4. Assume \( x_1 \) and \( x_2 \) satisfy
\[
\frac{dx_1}{dt} = A(t)x_1, \qquad x_1(0) = x^0,
\]
and
\[
\frac{dx_2}{dt} = A(t)x_2 + g(t), \qquad x_2(0) = 0.
\]
Then \( x = x_1 + x_2 \) satisfies \( \frac{dx}{dt} = A(t)x + g(t) \) and \( x(0) = x^0 \).

Duhamel's principle (or variation of constants formula):
We try to find the solution \( x_2 \) in the form \( x_2(t) = \Psi(t)v(t) \), where \( v(t) \) is a column vector depending on \( t \). (This explains the term "variation of constants": a solution to the problem without \( g \), the homogeneous problem, would require \( v \) to be a constant vector. Now we are using the dependence of \( v \) on \( t \) to find a special solution to the problem with \( g \), the inhomogeneous problem.)
Since the fundamental matrix satisfies \( \frac{d\Psi(t)}{dt} = A(t)\Psi(t) \), we therefore have
\[
\frac{d}{dt}(\Psi(t)v(t))
= \left( \frac{d}{dt}\Psi(t) \right) v(t) + \Psi(t)\frac{d}{dt}v(t)
= A(t)\Psi(t)v(t) + \Psi(t)\frac{d}{dt}v(t).
\]
On the other hand,
\[
\frac{dx_2}{dt} = A(t)x_2 + g(t) = A(t)\Psi(t)v(t) + g(t).
\]

Since \( \frac{d}{dt}(\Psi(t)v(t)) = \frac{dx_2}{dt} \), we must have
\[
\Psi(t)\frac{d}{dt}v(t) = g(t),
\]
or
\[
\frac{d}{dt}v(t) = \Psi^{-1}(t)g(t).
\]
Hence
\[
v(t) = \int_0^t \Psi^{-1}(s)g(s)\,ds,
\]
since \( x_2(0) = 0 \) gives \( v(0) = 0 \). Therefore, we obtain
\[
x_2(t) = \Psi(t)v(t) = \Psi(t)\int_0^t \Psi^{-1}(s)g(s)\,ds.
\]
It is easy to check that this is indeed a solution to the problem
\[
\frac{dx_2}{dt} = A(t)x_2 + g(t), \qquad x_2(0) = 0.
\]
In summary, the solution to the ODE (1.13) is given by
\[
x(t) = \Psi(t)x^0 + \Psi(t)\int_0^t \Psi^{-1}(s)g(s)\,ds.
\]

This is Duhamel’s principle or the variation of constants formula.


We consider two special cases of this formula:
• \( A(t) = A \), a constant matrix. Then \( \Psi(t) = e^{At} \) and
\[
x(t) = e^{At} x^0 + \int_0^t e^{A(t-s)} g(s)\,ds.
\]
• \( n = 1 \). Then \( \Psi(t) = e^{F(t)} \), where \( F(t) = \int_0^t a(s)\,ds \), and
\[
x(t) = e^{F(t)} x_0 + \int_0^t e^{F(t)-F(s)} g(s)\,ds.
\]
Exercise. Consider
\[
y'' + p(t)y' + q(t)y = f(t). \tag{1.14}
\]
Let \( \varphi_1(t), \varphi_2(t) \) be two linearly independent solutions of the homogeneous equation.
(1) Write (1.14) as a first order system and show that one can find a fundamental matrix of the form
\[
\Psi(t) = \begin{pmatrix} \varphi_1(t) & \varphi_2(t) \\ \varphi_1'(t) & \varphi_2'(t) \end{pmatrix}.
\]

(2) Let \( w(t) = \det(\Psi(t)) \). Let \( \psi_1(t) \) be the first component of the solution of the first order system with initial condition \( \psi_1(0) = \psi_1'(0) = 0 \). Show that
\[
\psi_1(t) = \int_0^t \frac{\varphi_2(t)\varphi_1(s) - \varphi_1(t)\varphi_2(s)}{w(s)} f(s)\,ds.
\]

1.5. Uniqueness of solutions


We first take a look at the following example:
\[
\begin{cases}
\dfrac{dx}{dt} = \sqrt{x}, \\[4pt]
x(0) = 0.
\end{cases}
\tag{1.15}
\]
Obviously \( x(t) \equiv 0 \) is a solution. But there is another, non-zero solution. Dividing (1.15) by \( \sqrt{x} \),
\[
\frac{1}{\sqrt{x}}\,dx = dt, \qquad 2\,d(\sqrt{x}) = dt, \qquad x = t^2/4.
\]
Actually, for any number \( a > 0 \), the function
\[
x_a(t) =
\begin{cases}
0, & t < a, \\
(t - a)^2/4, & t \ge a,
\end{cases}
\]
is a solution to (1.15). This means that the ODE (1.15) has infinitely many solutions!
We will see later that the source of the problem is that the derivative of \( f(x) = \sqrt{x} \) is unbounded at \( x = 0 \); in other words, \( \sqrt{x} \) is not Lipschitz continuous on \( [0, T] \) for any \( T > 0 \).
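One can confirm symbolically that both the zero solution and x(t) = t²/4 satisfy (1.15); a minimal sketch, assuming SymPy is available:

    import sympy as sp

    t = sp.symbols('t', nonnegative=True)

    # Both candidates satisfy x(0) = 0; check the residual dx/dt - sqrt(x).
    for x in (sp.Integer(0), t**2 / 4):
        residual = sp.simplify(sp.diff(x, t) - sp.sqrt(x))
        print(x, residual)   # residual is 0 in both cases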
Definition. A function \( f \) defined on \( \mathbb{R}^n \) is called (globally) Lipschitz continuous if there exists a number \( L > 0 \) such that
\[
|f(x) - f(y)| \le L|x - y|, \qquad \forall x, y \in \mathbb{R}^n.
\]
\( L \) is called the Lipschitz constant.

Proposition 5. Assume that \( f \) is differentiable and \( |\nabla f(x)| \le L \) for all \( x \). Then \( f \) is Lipschitz continuous with \( L \) as the Lipschitz constant.

This is obvious since, for n = 1, by the mean value theorem

|f (x) − f (y)| = |f ′ (ξ)(x − y)| ≤ L|x − y|.

The argument for larger values of n is basically the same.

Theorem 6 (Uniqueness). Consider the problem \( \frac{dx}{dt} = f(x) \), \( x(0) = x^0 \). If \( f \) is Lipschitz continuous, then the solution to this problem is unique, i.e., if \( x_1, x_2 \) are both solutions, then \( x_1 \equiv x_2 \).

Proof. We integrate \( \frac{dx}{dt} = f(x) \) from 0 to \( t \),
\[
\int_0^t \frac{dx}{ds}\,ds = \int_0^t f(x(s))\,ds,
\]
to get
\[
x(t) = x^0 + \int_0^t f(x(s))\,ds.
\]
Since \( x_1, x_2 \) are both solutions, we have
\[
x_1(t) = x^0 + \int_0^t f(x_1(s))\,ds,
\]
and
\[
x_2(t) = x^0 + \int_0^t f(x_2(s))\,ds.
\]
Subtracting the above two equations, we have
\[
x_1(t) - x_2(t) = \int_0^t \bigl( f(x_1(s)) - f(x_2(s)) \bigr)\,ds.
\]
Taking the norm on both sides, and using the property that \( \left| \int_0^t g(s)\,ds \right| \le \int_0^t |g(s)|\,ds \) for any integrable function \( g \), we obtain
\[
|x_1(t) - x_2(t)| \le \int_0^t |f(x_1(s)) - f(x_2(s))|\,ds.
\]
Using the Lipschitz continuity of \( f \), we get \( |f(x_1(s)) - f(x_2(s))| \le L|x_1(s) - x_2(s)| \), and hence
\[
|x_1(t) - x_2(t)| \le L \int_0^t |x_1(s) - x_2(s)|\,ds.
\]
Using Gronwall's inequality (see below, with \( C = 0 \)), we conclude that \( |x_1(t) - x_2(t)| = 0 \) for all \( t \). Hence \( x_1(t) \equiv x_2(t) \).

Proposition 7 (Gronwall's inequality). Assume that for some constant \( C \ge 0 \) and non-negative integrable functions \( f \) and \( g \) we have
\[
f(t) \le C + \int_0^t f(s)g(s)\,ds; \tag{1.16}
\]
then
\[
f(t) \le C e^{\int_0^t g(s)\,ds}. \tag{1.17}
\]
Proof. Let \( F(t) = C + \int_0^t f(s)g(s)\,ds \); then \( F(t) \) is differentiable and
\[
F'(t) = f(t)g(t).
\]
Since \( g(t) \ge 0 \) and \( f(t) \le F(t) \), we have
\[
f(t)g(t) \le F(t)g(t),
\]
or
\[
F'(t) \le F(t)g(t).
\]
Without loss of generality, we assume that \( F(t) \) is strictly positive (\( F(t) = 0 \) only if \( C = 0 \) and \( fg \equiv 0 \), in which case (1.17) is trivial); then we have
\[
F'(t)/F(t) \le g(t),
\]
or
\[
\frac{d}{dt}\log F(t) \le g(t).
\]
Integrating both sides, we obtain
\[
F(t)/F(0) \le e^{\int_0^t g(s)\,ds}.
\]
Since \( F(0) = C \), we have
\[
F(t) \le C e^{\int_0^t g(s)\,ds},
\]
and (1.17) follows immediately since \( f(t) \le F(t) \).

Remark. If \( f \) depends on \( t \), then the Lipschitz condition above can be replaced by the Lipschitz condition of \( f(t, y) \) with respect to \( y \), uniformly in \( t \).
Example: \( f(t, y) = A(t)y + g(t) \), so \( \frac{\partial f}{\partial y} = A(t) \). If \( |A(t)| \le L \) for some fixed number \( L \), then uniqueness holds.

Exercise.
(1) Find the general solution to the equation
\[
m\ddot{x} + kx = \cos \omega t.
\]
Discuss the cases (a) \( \omega \ne \sqrt{k/m} \) and (b) \( \omega = \sqrt{k/m} \) (the resonant case).
(2) Let \( y_1 \) and \( y_2 \) be solutions of \( y' = f(y) \), where \( f \) is Lipschitz continuous, and fix a time \( T > 0 \). Prove that there exists a constant \( C \) such that
\[
|y_1(t) - y_2(t)| \le C|y_1(0) - y_2(0)|
\]
for all \( t \in [0, T] \).

1.6. Existence of solutions


Theorem 8 (Local existence). Assume that \( f \) satisfies the following conditions on the region \( D = \{(t, y) : t \in [t_0 - a, t_0 + a],\ y \in [y^0 - b, y^0 + b]\} \):
(1) \( f \) is continuous and differentiable in \( y \);
(2) there exists \( M > 0 \) such that \( |f(t, y)| \le M \) for all \( (t, y) \in D \);
(3) there exists \( L > 0 \) such that \( \left| \frac{\partial f}{\partial y}(t, y) \right| \le L \) for all \( (t, y) \in D \).
Let \( h = \min\{a, b/M\} \). Then on the interval \( [t_0 - h, t_0 + h] \) there exists a continuously differentiable function \( y \) such that \( y(t_0) = y^0 \) and \( y \) satisfies \( \frac{dy}{dt} = f(t, y) \).

Proof. This theorem is proved using the well-known Picard iteration scheme. We construct a sequence of functions \( \{y_n(t)\} \) on the time interval \( [t_0 - h, t_0 + h] \) by defining
\[
y_0(t) \equiv y^0
\]
and
\[
y_{n+1}(t) = y^0 + \int_{t_0}^t f(s, y_n(s))\,ds, \qquad n \ge 0.
\]
First, we claim that for all \( n \), the function \( y_n(t) \) satisfies \( y_n(t) \in [y^0 - b, y^0 + b] \). Indeed, this follows by induction:
\[
|y_n(t) - y^0| \le \int_{t_0}^t |f(s, y_{n-1}(s))|\,ds \le \int_{t_0}^t M\,ds \le Mh \le b,
\]
since \( |f(t, y_{n-1}(t))| \le M \) whenever \( (t, y_{n-1}(t)) \in D \).

Let \( r_n(t) = y_{n+1}(t) - y_n(t) \). The proposition below says that for any \( t \in [t_0 - h, t_0 + h] \),
\[
|r_n(t)| \le M L^n (t - t_0)^{n+1} / (n + 1)!. \tag{1.18}
\]
This inequality implies that the series \( \sum_{i=0}^{+\infty} r_i(t) \) converges absolutely and uniformly in \( t \), since the power series \( \sum_{n=0}^{+\infty} M L^n t^{n+1}/(n+1)! \) converges for all \( t \in \mathbb{R} \). Hence the limit
\[
\lim_n y_n(t) = y_\infty(t)
\]
exists, and the convergence is uniform for \( t \in [t_0 - h, t_0 + h] \). Taking the limit in
\[
y_{n+1}(t) = y^0 + \int_{t_0}^t f(s, y_n(s))\,ds,
\]
we obtain
\[
y_\infty(t) = y^0 + \int_{t_0}^t f(s, y_\infty(s))\,ds.
\]
Here we have used
\[
\lim_n \int_{t_0}^t f(s, y_n(s))\,ds
= \int_{t_0}^t \lim_n f(s, y_n(s))\,ds
= \int_{t_0}^t f(s, \lim_n y_n(s))\,ds
= \int_{t_0}^t f(s, y_\infty(s))\,ds.
\]
One can then easily see that \( y_\infty(t) \) satisfies the requirements specified in the theorem.
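The Picard iteration can also be carried out numerically on a grid. The sketch below (NumPy assumed) applies it to the illustrative problem dy/dt = y, y(0) = 1, for which the iterates are the partial sums of the Taylor series of e^t:

    import numpy as np

    f = lambda t, y: y                     # illustrative right-hand side
    t0, y0, h = 0.0, 1.0, 1.0
    t = np.linspace(t0, t0 + h, 201)       # grid on [t0, t0 + h]

    y = np.full_like(t, y0)                # y_0(t) = y0
    for n in range(8):
        # y_{n+1}(t) = y0 + int_{t0}^t f(s, y_n(s)) ds, via the trapezoidal rule.
        integrand = f(t, y)
        y = y0 + np.concatenate(([0.0],
            np.cumsum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(t))))

    print(y[-1], np.exp(1.0))   # the iterates converge to e^t, so y(1) is close to e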

Proposition 9. If \( t \in [t_0 - h, t_0 + h] \), then
\[
|r_n(t)| \le M L^n (t - t_0)^{n+1} / (n + 1)!. \tag{1.19}
\]

Proof. We prove this by induction. When \( n = 0 \),
\[
|r_0(t)| = |y_1(t) - y^0| = \left| \int_{t_0}^t f(s, y^0)\,ds \right| \le \int_{t_0}^t |f(s, y^0)|\,ds \le M(t - t_0).
\]

Suppose that \( |r_n(t)| \le M L^n (t - t_0)^{n+1}/(n+1)! \). Then
\[
\begin{aligned}
|r_{n+1}(t)| &= \left| \int_{t_0}^t f(s, y_{n+1}(s))\,ds - \int_{t_0}^t f(s, y_n(s))\,ds \right| \\
&\le \int_{t_0}^t |f(s, y_{n+1}(s)) - f(s, y_n(s))|\,ds \\
&\le \int_{t_0}^t \left| \frac{\partial f}{\partial y}(s, \xi_n(s)) \right| |y_{n+1}(s) - y_n(s)|\,ds \\
&\le L \int_{t_0}^t |y_{n+1}(s) - y_n(s)|\,ds \\
&= L \int_{t_0}^t |r_n(s)|\,ds \\
&\le L \int_{t_0}^t \frac{M L^n}{(n+1)!} (s - t_0)^{n+1}\,ds \\
&= \frac{M L^{n+1}}{(n+2)!} (t - t_0)^{n+2}.
\end{aligned}
\]
So (1.18) holds.

Remark. Existence can be proved under the weaker assumption that \( f \) is continuous on \( D \). This is the statement of Peano's theorem (G. Peano, Sull'integrabilità delle equazioni differenziali del primo ordine, Atti Accad. Sci. Torino, 21 (1886), 677–685).

1.7. Qualitative behavior of ODEs


To understand the behavior of the solutions to ODEs, we can take two different views. One is to solve the ODEs either analytically or numerically. The other is to study the solutions qualitatively. The results on existence and uniqueness of solutions are examples of the qualitative properties that are of interest. One can ask other questions: Do solutions converge as time goes to infinity? If not, how do they behave at large times? Are there special solutions of particular interest? These questions are the main concerns of the subject of dynamical systems, which is a very active area of mathematics. Here we will only be able to touch upon the simplest aspects of this subject. For simplicity, we

will consider autonomous systems of the form
\[
\frac{dx}{dt} = f(x).
\]
Some remarks are in order. First of all, when discussing qualitative properties of solutions, we are not particularly concerned with a specific solution for a specific initial condition; we are interested in the behavior of solutions for all initial conditions. In other words, we are interested in the flow that maps the initial condition to the solution at time \( t \):
\[
x^0 \mapsto \Phi(t, x^0),
\]
where \( \Phi(t, x^0) \) is the solution of the ODE at time \( t \) with initial condition \( x^0 \). The family of maps \( \Phi \) parametrized by \( t \) defines a flow.
The fixed points of this flow, i.e., solutions that satisfy
\[
\Phi(t, x^*) = x^* \quad \text{for all } t,
\]
are of special interest. These are stationary solutions of the ODE and satisfy
\[
f(x^*) = 0.
\]
The questions we are interested in include: How do other solutions behave with regard to these special solutions? Do they converge to or diverge from these solutions? This is the question of stability of the stationary solutions, or fixed points.
The next class of special solutions that are of particular interest are time-periodic solutions. Again, we can ask the same kind of questions about these solutions.

1.7.1. Linear systems. Consider
\[
\frac{dx}{dt} = Ax,
\]
where A ∈ R2×2 . In this case, the origin is the only stationary solution
or fixed point of the problem. We are interested in the stability of this
solution. We will define stability formally later on. For now, we will
speak informally about stability.
We will discuss some specific examples. The parameters are as-
sumed to be real.

Examples:
(1). \( A = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix} \).
The general solution is
\[
x(t) = c_1 \cos t + c_2 \sin t, \qquad y(t) = -c_1 \sin t + c_2 \cos t,
\]
i.e., up to an amplitude factor,
\[
x(t) = \cos(-t + \varphi), \qquad y(t) = \sin(-t + \varphi).
\]
Fig. 1 shows the components \( x(t) \) and \( y(t) \). If we look at the phase plane, i.e., the \( (x, y) \)-plane, and draw the trajectory \( (x(t), y(t)) \) for \( t \in \mathbb{R} \), we obtain Fig. 2. Since \( (x(t))^2 + (y(t))^2 \equiv (x(0))^2 + (y(0))^2 \), for different initial values \( (x(0), y(0)) \) the corresponding trajectories are concentric circles of different radii.
(2). \( A = \begin{pmatrix} -1 & 0 \\ 0 & -2 \end{pmatrix} \).
The solution is \( (x(t), y(t)) = (x_0 e^{-t}, y_0 e^{-2t}) \), and it satisfies
\[
(x/x_0)^2 = y/y_0.
\]
The trajectories are parabolas in the phase plane (Fig. 3). All trajectories approach the origin \( (0, 0) \) as time goes to infinity.
(3). \( A = \begin{pmatrix} -1 & 0 \\ 0 & 2 \end{pmatrix} \).
The solution is given by \( (x(t), y(t)) = (x_0 e^{-t}, y_0 e^{2t}) \), and it satisfies
\[
(x/x_0)^{-2} = y/y_0.
\]
The trajectories are hyperbolas in the phase plane (Fig. 4). Only the trajectories starting on the \( x \)-axis approach the origin \( (0, 0) \) as time increases. The other trajectories diverge and approach the \( y \)-axis.
The origin is referred to as a "center", "node" and "saddle", respectively, in the three examples above.
(4). \( A = \begin{pmatrix} \lambda & 0 \\ 0 & \mu \end{pmatrix} \). The corresponding solution is
\[
x(t) = c_1 e^{\lambda t} \begin{pmatrix} 1 \\ 0 \end{pmatrix} + c_2 e^{\mu t} \begin{pmatrix} 0 \\ 1 \end{pmatrix}.
\]
Figure 1. Solution components as functions of t.

Figure 2. Trajectory in the phase plane.

We only consider nontrivial cases when λµ ̸= 0.

• λµ < 0. One direction is stable and the other direction is un-


stable. The origin is called a saddle point. The phase portrait
is shown in Fig. 5 (also in Fig. 4).
• λ, µ < 0. The origin is asymptotically stable. This is called a
sink or a star-type node (Fig. 6) if λ = µ.

Figure 3. Trajectories in the phase plane: parabolas.

• \( \lambda, \mu > 0 \). The origin is unstable in this case and is called a source.
(5). \( A = \begin{pmatrix} \lambda & 0 \\ 1 & \lambda \end{pmatrix} \).
In components the system reads
\[
x' = \lambda x, \qquad y' = x + \lambda y,
\]
and
\[
\frac{dy}{dx} = \frac{1}{\lambda} + \frac{y}{x}.
\]
The trajectories are therefore
\[
x = 0 \quad \text{and} \quad y = cx + \frac{x}{\lambda}\log|x|,
\]
and
\[
\lim_{x \to 0} y = 0, \qquad
\lim_{x \to 0} \frac{dy}{dx} =
\begin{cases}
+\infty & (\lambda < 0), \\
-\infty & (\lambda > 0).
\end{cases}
\]
In this case, the origin is called a single-directional node.

Figure 4. Trajectories in the phase plane: hyperbolas.

Figure 5. Phase portrait for a saddle point.


Figure 6. Phase portrait for a star-type node.

Figure 7. Phase portrait for a single-directional node.


(6). \( A = \begin{pmatrix} \alpha & \beta \\ -\beta & \alpha \end{pmatrix} \). The general solution is
\[
x(t) = c_1 e^{\alpha t} \begin{pmatrix} \cos(\beta t) \\ -\sin(\beta t) \end{pmatrix}
+ c_2 e^{\alpha t} \begin{pmatrix} \sin(\beta t) \\ \cos(\beta t) \end{pmatrix},
\]

where \( c_1, c_2 \) are constants.
In polar coordinates, we get
\[
\frac{dr}{dt} = \alpha r, \qquad \frac{d\theta}{dt} = -\beta.
\]
• If \( \alpha = 0 \), the origin is called a center.
• If \( \alpha > 0 \), the origin is called an unstable focus.
• If \( \alpha < 0 \), the origin is called a stable focus.
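The classification above can be automated from the eigenvalues. The following is a minimal sketch (NumPy assumed); it ignores degenerate cases such as zero or repeated eigenvalues and is meant only to illustrate the taxonomy:

    import numpy as np

    def classify(A, tol=1e-12):
        """Rough classification of the origin for x' = Ax, A a real 2x2 matrix."""
        lam = np.linalg.eigvals(A)
        re, im = lam.real, lam.imag
        if np.all(np.abs(im) > tol):                       # complex pair
            if np.all(np.abs(re) < tol):
                return "center"
            return "stable focus" if re[0] < 0 else "unstable focus"
        if re[0] * re[1] < 0:
            return "saddle"
        return "stable node" if np.all(re < 0) else "unstable node"

    for A in (np.array([[0.0, 1.0], [-1.0, 0.0]]),     # center
              np.array([[-1.0, 0.0], [0.0, -2.0]]),    # stable node
              np.array([[-1.0, 0.0], [0.0, 2.0]])):    # saddle
        print(classify(A))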

1.7.2. Stability analysis.


Definition. \( x^* \in \mathbb{R}^n \) is called a stationary point (also referred to as an equilibrium or fixed point) of the ODE \( \frac{dx}{dt} = f(x) \) if \( f(x^*) = 0 \).
Definition. Let \( x^* \) be a stationary point. It is said to be stable if for any \( \varepsilon > 0 \) there exists \( \delta > 0 \) such that
\[
|x(t_0) - x^*| \le \delta \implies |x(t) - x^*| \le \varepsilon, \quad \forall t \ge t_0. \tag{1.20}
\]
It is asymptotically stable if there exists a \( \delta > 0 \) such that
\[
|x(t_0) - x^*| \le \delta \implies \lim_{t \to +\infty} x(t) = x^*.
\]

By definition, the center in Example 1 is stable, but not asymptotically stable; the node in Example 2 is asymptotically stable; and the saddle in Example 3 is not stable.

1.7.3. Phase plane analysis. We now restrict ourselves to the case when \( n = 2 \). Generally, in order to obtain the qualitative behavior of a non-linear system, we follow three steps:
• find the stationary points,
• linearize the system around each stationary point (local analysis),
• perform a global analysis.
This is by no means a general strategy. The last step, global analysis, is usually not easy and is very much problem dependent. However, linearization is usually a very powerful tool, which we now discuss.

Figure 8. µ > 0.

Figure 9. µ < 0.

Assume that the system
\[
\begin{cases}
\dfrac{dx}{dt} = f_1(x, y), \\[4pt]
\dfrac{dy}{dt} = f_2(x, y),
\end{cases}
\tag{1.21}
\]
has a stationary point \( (x^*, y^*) \). We want to characterize the behavior of solutions \( (x(t), y(t)) \) near \( (x^*, y^*) \). To this end, we write \( (x(t), y(t)) \) in the form
\[
\begin{cases}
x(t) = x^* + \delta x(t), \\
y(t) = y^* + \delta y(t),
\end{cases}
\tag{1.22}
\]

where \( \delta x(t) \) and \( \delta y(t) \) are functions of \( t \) and are thought of as small. Then \( \delta x \) should satisfy
\[
\frac{d}{dt}\delta x(t) = \frac{d}{dt} x(t) = f_1(x^* + \delta x(t), y^* + \delta y(t)). \tag{1.23}
\]
Linearizing the right hand side, i.e., dropping terms that are not linear in \( \delta x \) and \( \delta y \), we get
\[
\begin{aligned}
f_1(x^* + \delta x(t), y^* + \delta y(t))
&= f_1(x^*, y^*) + \frac{\partial f_1}{\partial x}(x^*, y^*)\,\delta x(t) + \frac{\partial f_1}{\partial y}(x^*, y^*)\,\delta y(t) + o(\delta x(t), \delta y(t)) \\
&\approx \frac{\partial f_1}{\partial x}(x^*, y^*)\,\delta x(t) + \frac{\partial f_1}{\partial y}(x^*, y^*)\,\delta y(t)
\end{aligned}
\]
(note \( f_1(x^*, y^*) = 0 \)). Thus, we arrive at a linearized equation for \( \delta x \):
\[
\frac{d}{dt}\delta x(t) = \frac{\partial f_1}{\partial x}(x^*, y^*)\,\delta x(t) + \frac{\partial f_1}{\partial y}(x^*, y^*)\,\delta y(t).
\]
Similarly for \( \delta y(t) \). Together, we obtain
\[
\frac{d}{dt}\begin{pmatrix} \delta x(t) \\ \delta y(t) \end{pmatrix}
= \begin{pmatrix} \dfrac{\partial f_1}{\partial x} & \dfrac{\partial f_1}{\partial y} \\[6pt] \dfrac{\partial f_2}{\partial x} & \dfrac{\partial f_2}{\partial y} \end{pmatrix}
\begin{pmatrix} \delta x(t) \\ \delta y(t) \end{pmatrix}
=: A \begin{pmatrix} \delta x(t) \\ \delta y(t) \end{pmatrix}, \tag{1.24}
\]
where the Jacobian matrix \( A \) is evaluated at the stationary point \( (x^*, y^*) \). In particular, \( A \) is a constant matrix.
The expectation is that the behavior of the solutions \( (x(t), y(t)) \) is close to that of the linearized system in a neighborhood of \( (x^*, y^*) \). This is not always true. If all the eigenvalues \( \lambda \) of the matrix \( A \) satisfy \( \operatorname{Re}\lambda \ne 0 \), then the above statement is true. But if one of the eigenvalues is purely imaginary, then stability cannot be determined by linearization, as the following example shows:
\[
\ddot{x} + \varepsilon x^2 \dot{x} + x = 0. \tag{1.25}
\]
Rewrite (1.25) as a system:
\[
\frac{d}{dt}\begin{pmatrix} x \\ y \end{pmatrix}
= \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}\begin{pmatrix} x \\ y \end{pmatrix}
- \varepsilon \begin{pmatrix} 0 \\ x^2 y \end{pmatrix}. \tag{1.26}
\]
The eigenvalues of the corresponding matrix \( A \) are \( \lambda = \pm i \). However, unless \( \varepsilon = 0 \), the fixed point \( (0, 0) \) is not a center, as in the linear system, but an attracting spiral sink (asymptotically stable) if \( \varepsilon > 0 \), and a repelling source if \( \varepsilon < 0 \).

Example (the Lotka-Volterra model). This is a well-known model in ecology. Assume that there are two and only two species, the predator and the prey, whose populations are denoted by \( x \) and \( y \), respectively. Let \( f \) and \( g \) be their rates of growth: \( \dot{x}/x = f(x, y) \) and \( \dot{y}/y = g(x, y) \). Typically, one would expect \( f \) to be an increasing function of \( y \) and \( g \) to be a decreasing function of \( x \). The simplest examples of such functions are \( f = f(y) = ry - d \) and \( g = g(x) = b - px \), where \( r, d, b, p \) are positive constants. This simple model is called the Lotka-Volterra model:
\[
\begin{cases}
\dfrac{dx}{dt} = x(ry - d), \\[4pt]
\dfrac{dy}{dt} = y(b - px).
\end{cases}
\tag{1.27}
\]
It is easy to see that this model has two fixed points: \( (0, 0) \) and \( (b/p, d/r) \). The Jacobian matrix at \( (0, 0) \) is
\[
\begin{pmatrix} -d & 0 \\ 0 & b \end{pmatrix}.
\]
Thus, the eigenvalues are \( -d \) and \( b \), and the eigenvectors are \( (1, 0) \) and \( (0, 1) \), respectively. The origin is a saddle point with stable direction \( (1, 0) \) and unstable direction \( (0, 1) \). The Jacobian matrix at \( (b/p, d/r) \) is
\[
\begin{pmatrix} 0 & br/p \\ -pd/r & 0 \end{pmatrix}.
\]
The eigenvalues are purely imaginary. The stationary point \( (b/p, d/r) \) is a center for the linearized system.
Without further information this is all we can say. Luckily, for this model there is a conserved quantity. To see this, we rewrite the model in the form
\[
y g(x)\,dx - x f(y)\,dy = 0. \tag{1.28}
\]
Let \( \mu(x, y) = \frac{1}{xy} \). Multiplying (1.28) by \( \mu(x, y) \), we get
\[
\frac{g(x)}{x}\,dx - \frac{f(y)}{y}\,dy = 0.
\]
Hence, we have
\[
\int \frac{g(x)}{x}\,dx = \int \frac{f(y)}{y}\,dy.
\]

Figure 10. Phase plane of (1.27) and iso-curves of V(x, y) (r = d = b = p = 1).

Specifically,
\[
ry + px = d \ln y + b \ln x + C,
\]
where \( C \) is an arbitrary constant. Let
\[
V(x, y) = ry + px - d \ln y - b \ln x;
\]
then the solutions of (1.27) satisfy
\[
V(x(t), y(t)) = \text{constant}.
\]
A straightforward calculation reveals that the stationary point \( (b/p, d/r) \) is the local minimizer of \( V \). The contour lines of \( V(x, y) \) are closed curves in the phase plane, as shown in Fig. 10.
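Conservation of V along trajectories of (1.27) can be checked numerically; the sketch below assumes SciPy is available and uses r = d = b = p = 1 as in Fig. 10, with an arbitrary initial condition:

    import numpy as np
    from scipy.integrate import solve_ivp

    r = d = b = p = 1.0

    def lv(t, u):
        x, y = u
        return [x * (r * y - d), y * (b - p * x)]

    V = lambda x, y: r * y + p * x - d * np.log(y) - b * np.log(x)

    sol = solve_ivp(lv, (0.0, 20.0), [0.5, 0.5], rtol=1e-10, atol=1e-12,
                    t_eval=np.linspace(0.0, 20.0, 5))
    print([V(x, y) for x, y in zip(sol.y[0], sol.y[1])])   # nearly constant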

1.7.4. Hamiltonian system and gradient system. The motion of a particle (on the line) driven by the potential \( V \) is described by the ODE
\[
m \frac{d^2 x}{dt^2} = -\gamma \frac{dx}{dt} - \frac{\partial V(x)}{\partial x},
\]
where \( m \) is the mass of the particle and \( \gamma \) is the friction coefficient.

Let us first consider the case when friction (dissipation) is negligible, i.e., \( \gamma = 0 \). In this case, we can rewrite the second order equation as a first order system,
\[
\begin{cases}
\dot{x} = y, \\
m\dot{y} = -\dfrac{\partial V(x)}{\partial x}.
\end{cases}
\tag{1.29}
\]
Let \( p = my \) be the momentum of the particle. Define
\[
H(x, p) = \frac{1}{2m} p^2 + V(x).
\]
This is the total energy of the system, also called the Hamiltonian. The first order system above can now be rewritten as
\[
\begin{cases}
\dot{x} = \dfrac{\partial H}{\partial p}, \\[4pt]
\dot{p} = -\dfrac{\partial H}{\partial x}.
\end{cases}
\tag{1.30}
\]

Systems in this form are called Hamiltonian systems.


The most important feature of such a system is that the Hamiltonian is a conserved quantity:
\[
\frac{d}{dt} H(x(t), p(t)) = 0,
\]
or
\[
H(x(t), p(t)) \equiv H(x(0), p(0)).
\]
Indeed,
\[
\frac{d}{dt} H(x(t), p(t)) = \frac{\partial V(x)}{\partial x}\dot{x} + \frac{p}{m}\dot{p} = 0.
\]
Therefore, the solutions of equation (1.29) lie on the contour lines of \( H \) in the phase plane, i.e., the \( (x, p) \) plane. The analysis of Hamiltonian systems with one degree of freedom boils down to the exercise of drawing contour curves of the Hamiltonian.
Let us look at a specific example. Let \( m = 1 \) and \( V(x) = \frac{1}{4}(1 - x^2)^2 \). First, we will proceed in the usual way, i.e., finding stationary points and linearizing around them. There are three stationary points, \( (\pm 1, 0) \) and \( (0, 0) \). We first consider the stationary point \( (x^*, y^*) = (0, 0) \). The linearized equation is
\[
\frac{d}{dt}\begin{pmatrix} \delta x(t) \\ \delta y(t) \end{pmatrix}
= \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}
\begin{pmatrix} \delta x(t) \\ \delta y(t) \end{pmatrix}. \tag{1.31}
\]

The eigenvalues are \( \lambda_1 = 1 \) and \( \lambda_2 = -1 \), so the origin is a saddle point. It is not stable. The eigenvectors are \( v_1 = (1, 1) \) and \( v_2 = (1, -1) \), respectively. Thus, the solution of the linearized equation is (see the first example in Section 1.2)
\[
\begin{pmatrix} \delta x(t) \\ \delta y(t) \end{pmatrix}
= c_1 e^t v_1 + c_2 e^{-t} v_2
= c_1 e^t \begin{pmatrix} 1 \\ 1 \end{pmatrix} + c_2 e^{-t} \begin{pmatrix} 1 \\ -1 \end{pmatrix}.
\]
In the phase plane of the linearized system, along \( v_1 \) the solution grows exponentially like \( e^t \), while along the second direction \( v_2 \) the solution shrinks exponentially like \( e^{-t} \). In the phase plane of the original system, near the stationary point, there are two trajectories tangent to \( \pm v_1 \) and two trajectories tangent to \( \pm v_2 \), respectively.
Now we perform the linearization around the stationary points \( (\pm 1, 0) \). The Jacobian matrices are the same for the two stationary points, and the eigenvalues are \( \pm\sqrt{2}\,i \). See the second example in Section 1.2 for a discussion of the linearized system. The stationary points \( (\pm 1, 0) \) are centers of the linearized system. However, we cannot directly draw the conclusion that \( (\pm 1, 0) \) are also centers of the non-linear system (1.29).
To perform the global analysis, we have to use the Hamiltonian structure. For this system, the Hamiltonian is given by \( H(x, p) = \frac{1}{2}p^2 + \frac{1}{4}(1 - x^2)^2 \). The level set \( H(x, p) = c \) gives rise to the curve
\[
p(x) = \pm\sqrt{2c - \tfrac{1}{2}(1 - x^2)^2}.
\]
At the stationary points \( (\pm 1, 0) \), \( H(x, p) = 0 \). If \( 0 < c < \frac{1}{4} \), the contour curves are two closed curves around the stationary points \( (\pm 1, 0) \). If \( c = \frac{1}{4} \), the contour curve crosses the origin. If \( c > \frac{1}{4} \), the contour curve is a single closed curve and crosses the \( p \)-axis. See Fig. 11 for the details of the phase plane.
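A numerical check that H is conserved for this double-well example (SciPy assumed; the initial condition is an arbitrary illustrative choice):

    import numpy as np
    from scipy.integrate import solve_ivp

    H = lambda x, p: 0.5 * p**2 + 0.25 * (1.0 - x**2)**2

    def rhs(t, u):
        x, p = u
        return [p, x * (1.0 - x**2)]       # x' = p,  p' = -V'(x) = x - x^3

    sol = solve_ivp(rhs, (0.0, 20.0), [0.0, 0.8], rtol=1e-10, atol=1e-12,
                    t_eval=np.linspace(0.0, 20.0, 5))
    print([H(x, p) for x, p in zip(sol.y[0], sol.y[1])])   # stays at H(0, 0.8)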
If the inertial effect is negligible, i.e., \( m = 0 \), we have
\[
\frac{dx}{dt} = -\frac{\partial V(x)}{\partial x}
\]
(we set \( \gamma = 1 \) for simplicity). Systems of this form are called gradient systems.

Figure 11. The potential V(x) and the phase plane of the Hamiltonian system (1.29), showing the contours H = C1 > 1/4, H = 1/4, and H = C2 < 1/4.

In this case, we have
\[
\frac{dV(x(t))}{dt} = \frac{\partial V(x(t))}{\partial x}\frac{dx}{dt} = -|\nabla V|^2 \le 0.
\]
The potential energy (which is also the total energy in this case) of the system decreases in time.

1.7.5. Bifurcation theory. Consider a dynamical system depending on a parameter. We are interested in the global behavior of the solutions as the parameter changes. A bifurcation occurs when a small change of the parameter value causes a 'qualitative' change in the behavior of the solutions (the meaning of this will be made more precise below).
Pitchfork bifurcation. Consider the family of differential equations depending on the parameter \( a \):
\[
\frac{dx}{dt} = (a - 1)x - x^3.
\]
Figure 12. Bifurcation diagram.

The fixed points are \( x^* = 0 \) for all \( a \in \mathbb{R} \), and \( x^* = \pm\sqrt{a - 1} \) if \( a \ge 1 \); see Fig. 12.

If \( a < 1 \), there is only one fixed point, \( x^* = 0 \). At this fixed point, \( f_x = a - 1 < 0 \), so \( x^* = 0 \) is asymptotically stable.
If \( a > 1 \), then \( f_x > 0 \) at \( x^* = 0 \), so this fixed point is unstable. On the other hand, at the other fixed points \( x^* = \pm\sqrt{a - 1} \) we have \( f_x = -2(a - 1) < 0 \), so both of these fixed points are asymptotically stable.
From this discussion, we see that a qualitative change occurs in the system at \( a = 1 \). This is a bifurcation point.
Hopf bifurcation. Consider the system
\[
\begin{aligned}
\frac{dx}{dt} &= ax - y - x(x^2 + y^2), \\
\frac{dy}{dt} &= x + ay - y(x^2 + y^2).
\end{aligned}
\]
The origin is a stationary point for this system, and the linearization at the origin gives rise to
\[
\frac{d}{dt}\begin{pmatrix} x \\ y \end{pmatrix}
= \begin{pmatrix} a & -1 \\ 1 & a \end{pmatrix}
\begin{pmatrix} x \\ y \end{pmatrix}.
\]
Figure 13. Hopf bifurcation and bifurcation diagram (λ = a).

The eigenvalues are \( a \pm i \), so we expect a bifurcation at \( a = 0 \).
To see what happens as \( a \) passes through 0, we change to polar coordinates. The system then becomes
\[
\frac{dr}{dt} = ar - r^3, \qquad \frac{d\theta}{dt} = 1.
\]
Note that the origin is the only stationary point for this system. For \( a < 0 \), the origin is asymptotically stable. When \( a > 0 \), we have \( \frac{dr}{dt} = 0 \) if \( r = \sqrt{a} \). So the circle of radius \( \sqrt{a} \) is invariant in the sense that if a trajectory starts on that circle, it stays on it as time goes on. In fact, this invariant curve represents a periodic solution:
\[
x = \sqrt{a}\cos t, \qquad y = \sqrt{a}\sin t. \tag{1.32}
\]
In addition, we also have \( \frac{dr}{dt} > 0 \) if \( 0 < r < \sqrt{a} \), and \( \frac{dr}{dt} < 0 \) if \( r > \sqrt{a} \). Thus, all nonzero solutions spiral toward this circular solution as \( t \to +\infty \).
In contrast to the previous example, in which new stationary solutions occur at the bifurcation point, in this example a new periodic solution is born and the existing fixed point loses its stability to this new periodic solution. This type of bifurcation is called a Hopf bifurcation.
Periodic solutions are the simplest kind of solutions besides the stationary solutions. Naturally, we can also ask about their stability and the behavior of solutions nearby. A limit cycle in the phase space is a closed trajectory with the property that there is at least one other trajectory that spirals into it either as time approaches \( +\infty \) or \( -\infty \). In the case where all the neighboring trajectories approach the limit cycle as \( t \to +\infty \), it is called a stable or attractive limit cycle. In the example above, we have a stable limit cycle.
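The spiraling onto the limit cycle can be observed numerically. The sketch below (SciPy assumed; the value of a and the initial point are arbitrary illustrative choices) prints the radius of the solution, which approaches √a:

    import numpy as np
    from scipy.integrate import solve_ivp

    a = 0.5

    def rhs(t, u):
        x, y = u
        r2 = x * x + y * y
        return [a * x - y - x * r2, x + a * y - y * r2]

    sol = solve_ivp(rhs, (0.0, 40.0), [2.0, 0.0], rtol=1e-9,
                    t_eval=[0.0, 5.0, 10.0, 20.0, 40.0])
    for t, x, y in zip(sol.t, sol.y[0], sol.y[1]):
        print(t, np.hypot(x, y))   # radius tends to sqrt(a), about 0.707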
Theorem 10 (Poincaré-Bendixson theorem). Consider the ODE \( \dot{x} = f(x) \) in \( \mathbb{R}^2 \). Assume that there exists a bounded solution \( x(\cdot) \), i.e., a constant \( c \) and a solution \( x(\cdot) \) such that \( |x(t)| \le c \) for all \( t > 0 \). Then either the ODE system has a stationary solution, or it has a limit cycle \( \Gamma \) such that \( x(t) \to \Gamma \) as \( t \to +\infty \).
1.7.6. Chaotic behavior. We end this discussion with some brief comments about chaos.
The logistic map. First we consider an example of a discrete dynamical system, the logistic map, which can be regarded as the next simplest population growth model beyond the linear model:
\[
x_{n+1} = \lambda x_n (1 - x_n),
\]
where \( \lambda > 0 \) is a parameter and \( 0 \le x_n \le 1 \). The dynamics of this model is described by a simple quadratic map. Yet the behavior of solutions can be quite complex. By varying \( \lambda \), the following behavior is observed:
• \( 0 < \lambda \le 1 \): \( x_n \to 0 \) as \( n \to +\infty \), i.e., the population will eventually die out, independent of the initial population.
• \( 1 < \lambda \le 2 \): the population stabilizes at the value \( (\lambda - 1)/\lambda \), independent of the initial population.
• \( 2 < \lambda \le 3 \): the population also eventually stabilizes at the same value \( (\lambda - 1)/\lambda \), but first oscillates around that value for some time.
• \( 3 < \lambda \le 1 + \sqrt{6} \): the population may oscillate between two values forever; these two values depend on \( \lambda \).
• \( 1 + \sqrt{6} < \lambda \le 4 \): further period doublings occur, and for most values of \( \lambda \) in the upper part of this range the behavior becomes chaotic.
• \( \lambda > 4 \): the values eventually leave the interval \( [0, 1] \) and diverge for almost all initial values.

Figure 14. Bifurcation diagram for the discrete logistic population model.

This is summarized in Fig. 14.
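A few lines of Python reproduce the qualitative picture: iterate the map for several values of λ (arbitrary illustrative choices) and record the long-time values:

    for lam in (0.8, 1.5, 2.8, 3.2, 3.9):
        x = 0.3                       # arbitrary initial value in (0, 1)
        for _ in range(1000):         # discard the transient
            x = lam * x * (1.0 - x)
        tail = []
        for _ in range(8):            # record the long-time behaviour
            x = lam * x * (1.0 - x)
            tail.append(round(x, 4))
        print(lam, tail)

For λ = 0.8 the tail is essentially 0, for λ = 1.5 and 2.8 it settles at (λ − 1)/λ, for λ = 3.2 it alternates between two values, and for λ = 3.9 it looks irregular.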
Lorenz attractor. The following system was introduced by E. Lorenz in 1963 as a drastically simplified model for the dynamics of the atmosphere:
\[
\begin{cases}
x' = \sigma(y - x), \\
y' = rx - y - xz, \\
z' = xy - bz,
\end{cases}
\]

where \( \sigma \) is called the Prandtl number and \( r \) is called the Rayleigh number. All of \( \sigma, r, b \) are positive; the most commonly used values are \( \sigma = 10 \), \( b = 8/3 \), with \( r \) varying. The behavior of the system at \( r = 28 \) is shown in Fig. 15. One can clearly see the "butterfly" structure of the "Lorenz attractor".
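A direct integration (SciPy assumed) with the standard parameter values illustrates the sensitive dependence on initial conditions; plotting sol1.y[0] against sol1.y[2] would reproduce the butterfly shape:

    import numpy as np
    from scipy.integrate import solve_ivp

    sigma, b, r = 10.0, 8.0 / 3.0, 28.0

    def lorenz(t, u):
        x, y, z = u
        return [sigma * (y - x), r * x - y - x * z, x * y - b * z]

    # Two nearby initial conditions separate quickly (sensitive dependence).
    sol1 = solve_ivp(lorenz, (0.0, 20.0), [1.0, 1.0, 1.0], rtol=1e-9, atol=1e-9)
    sol2 = solve_ivp(lorenz, (0.0, 20.0), [1.0, 1.0, 1.0 + 1e-8], rtol=1e-9, atol=1e-9)
    print(np.abs(sol1.y[:, -1] - sol2.y[:, -1]))   # noticeable separation by t = 20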

Figure 15. Lorenz attractor for σ = 10, b = 8/3, r = 28.
