Math8430 Lecture Notes
Fundamental Theory
of
Ordinary Differential Equations
Lecture Notes
Julien Arino
Department of Mathematics
University of Manitoba
Fall 2006
Contents

2 Linear systems
2.1 Existence and uniqueness of solutions
2.2 Linear systems
2.2.1 The vector space of solutions
2.2.2 Fundamental matrix solution
2.2.3 Resolvent matrix
2.2.4 Wronskian
2.2.5 Autonomous linear systems
2.3 Affine systems
2.3.1 The space of solutions
2.3.2 Construction of solutions
2.3.3 Affine systems with constant coefficients
2.4 Systems with periodic coefficients
2.4.1 Linear systems: Floquet theory
Fund. Theory ODE Lecture Notes – J. Arino
4 Linearization
4.1 Some linear stability theory
4.2 The stable manifold theorem
4.3 The Hartman-Grobman theorem
4.4 Example of application
4.4.1 A chemostat model
4.4.2 A second example

5 Exponential dichotomy
5.1 Exponential dichotomy
5.2 Existence of exponential dichotomy
5.3 First approximate theory
5.4 Stability of exponential dichotomy
5.5 Generality of exponential dichotomy

References
Introduction
This course deals with the elementary theory of ordinary differential equations. The word
elementary should not be understood as simple. The underlying assumption here is that, to
understand the more advanced topics in the analysis of nonlinear systems, it is important
to have a good understanding of how solutions to differential equations are constructed.
If you are taking this course, you most likely know how to analyze systems of nonlinear
ordinary differential equations. You know, for example, that in order for solutions to a
system to exist and be unique, the system must have a C^1 vector field. What you do not
necessarily know is why that is. This is the object of Chapter 1, where we consider the
general theory of existence and uniqueness of solutions. We also consider the continuation
of solutions as well as continuous dependence on initial data and on parameters.
In Chapter 2, we explore linear systems. We first consider homogeneous linear systems,
then linear systems in full generality. Homogeneous linear systems are linked to the theory
for nonlinear systems by means of linearization, which we study in Chapter 4, in which
we show that the behavior of nonlinear systems can be approximated, in the vicinity of
a hyperbolic equilibrium point, by a homogeneous linear system. As for autonomous systems, nonautonomous nonlinear systems are linked to a linearized form, this time through exponential dichotomy, which is explained in Chapter 5.
Chapter 1
General theory of ODEs
We begin with the general theory of ordinary differential equations (ODEs). First, we define
ODEs, initial value problems (IVPs) and solutions to ODEs and IVPs in Section 1.1. In
Section 1.2, we discuss existence and uniqueness of solutions to IVPs.
F(t, x, x', x'', …, x^(n)) = 0, (1.1)

where x^(n) denotes the nth order derivative of x. An equation such as (1.1) is said to be in general (or implicit) form.
An equation is said to be in normal (or explicit) form when it is written as

x^(n) = f(t, x, x', x'', …, x^(n−1)).

Note that it is not always possible to write a differential equation in normal form, as it can be impossible to solve F(t, x, …, x^(n)) = 0 in terms of x^(n).
Definition 1.1.2 (First-order ODE). In the following, we consider for simplicity the more restrictive case of a first-order ordinary differential equation in normal form

x' = f(t, x). (1.2)
Note that the theory developed here usually also holds for nth order equations; see Section 1.5.
The function f is assumed continuous and real valued on a set U ⊂ R × Rn .
Definition 1.1.3 (Initial value problem). An initial value problem (IVP) for equation (1.2)
is given by
x' = f(t, x), x(t0) = x0, (1.3)
where f is continuous and real valued on a set U ⊂ R × Rn , with (t0 , x0 ) ∈ U.
Remark – The assumption that f be continuous can be relaxed; piecewise continuity suffices. However, this leads in general to much more complicated problems and is beyond the scope of this course. Hence, unless otherwise stated, we assume that f is at least continuous. The function f could also be complex valued, but this too is beyond the scope of this course. ◦
Remark – An IVP for an nth order differential equation takes the form

x^(n) = f(t, x, x', …, x^(n−1)),
x(t0) = x0, x'(t0) = x0', …, x^(n−1)(t0) = x0^(n−1),
R = {(t, x) : |t − t0| ≤ a, ‖x − x0‖ ≤ b},

where ‖·‖ is any appropriate norm on R^n. This domain is illustrated in Figures 1.1 and 1.2; it is sometimes called a security system, i.e., the union of a security interval (for the independent variable) and a security domain (for the dependent variables) [19]. Suppose
[Figures 1.1 and 1.2: the security domain R = {|t − t0| ≤ a, ‖x − x0‖ ≤ b} in the (t, x) plane, centered at the initial point (t0, x0).]
that f is continuous on R, and let M = max_R ‖f(t, x)‖, which exists since f is continuous on the compact set R.
In the following, existence of solutions will generally be obtained in relation to the domain R by considering a subset of the time interval |t − t0| ≤ a defined by |t − t0| ≤ α, with

α = a if M = 0, and α = min(a, b/M) if M > 0.

This choice of α = min(a, b/M) is natural. We endow f with specific properties (continuity, Lipschitz, etc.) on the domain R. Thus, in order to be able to use the definition of φ(t) as the solution of x' = f(t, x), we must be working in R. So we require that |t − t0| ≤ a and ‖x − x0‖ ≤ b. In order to satisfy the first of these conditions, choosing α ≤ a and working on |t − t0| ≤ α implies of course that |t − t0| ≤ a. The requirement that α ≤ b/M comes from the following argument. If we assume that φ(t) is a solution of (1.3) defined on [t0, t0 + α], then we have, for t ∈ [t0, t0 + α],
‖φ(t) − x0‖ = ‖∫_{t0}^t f(s, φ(s)) ds‖
≤ ∫_{t0}^t ‖f(s, φ(s))‖ ds
≤ M ∫_{t0}^t ds
= M(t − t0),

where the first inequality is a consequence of the definition of the integral by Riemann sums (Lemma A.2.1 in Appendix A.2). Similarly, we have ‖φ(t) − x0‖ ≤ −M(t − t0) for all t ∈ [t0 − α, t0]. Thus, for |t − t0| ≤ α, ‖φ(t) − x0‖ ≤ M|t − t0|. Suppose now that α ≤ b/M. It follows that ‖φ(t) − x0‖ ≤ M|t − t0| ≤ M(b/M) = b. Taking α = min(a, b/M) then ensures that both |t − t0| ≤ a and ‖φ(t) − x0‖ ≤ b hold simultaneously.
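The choice α = min(a, b/M) can be made concrete numerically. The sketch below estimates M on a grid over the compact set R and then computes α; the vector field f(t, x) = t + x², the rectangle bounds and the grid resolution are illustrative assumptions, not taken from the notes.

```python
# Illustrative vector field (an assumption, not from the notes): f(t, x) = t + x^2,
# on the rectangle R = {|t - t0| <= a, |x - x0| <= b}.
def f(t, x):
    return t + x * x

t0, x0 = 0.0, 0.0
a, b = 1.0, 1.0

# Estimate M = max_R |f(t, x)| on a grid over the compact set R.
n = 200
M = max(
    abs(f(t0 - a + 2 * a * i / n, x0 - b + 2 * b * j / n))
    for i in range(n + 1)
    for j in range(n + 1)
)

# alpha = a if M = 0, else min(a, b / M): the time window on which a
# solution is guaranteed to remain inside the security domain |x - x0| <= b.
alpha = a if M == 0 else min(a, b / M)
print(M, alpha)
```

Since R is compact and f is continuous, the true maximum is attained; refining the grid only sharpens the estimate of M.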
The following two theorems deal with the localization of the solutions to an IVP. They
make more precise the previous discussion. Note that for the moment, the existence of
a solution is only assumed. First, we establish that the security system described above
performs properly, in the sense that a solution on a smaller time interval stays within the
security domain.
Theorem 1.1.6. If φ(t) is a solution of the IVP (1.3) in an interval |t − t0| < α̃ ≤ α, then ‖φ(t) − x0‖ < b in |t − t0| < α̃, i.e., (t, φ(t)) ∈ R((t0, x0), α̃, b) for |t − t0| < α̃.
Proof. Assume that φ is a solution with (t, φ(t)) ∉ R((t0, x0), α̃, b). Since φ is continuous, it follows that there exists 0 < β < α̃ such that

‖φ(t) − x0‖ < b for |t − t0| < β, and ‖φ(t0 + β) − x0‖ = b or ‖φ(t0 − β) − x0‖ = b, (1.5)

i.e., the solution escapes the security domain at t = t0 ± β. Since α̃ ≤ α ≤ a, we have β < a. Thus (t, φ(t)) ∈ R for |t − t0| ≤ β.
Thus ‖f(t, φ(t))‖ ≤ M for |t − t0| ≤ β. Since φ is a solution, we have φ'(t) = f(t, φ(t)) and φ(t0) = x0. Thus

φ(t) = x0 + ∫_{t0}^t f(s, φ(s)) ds for |t − t0| ≤ β.

Hence

‖φ(t) − x0‖ = ‖∫_{t0}^t f(s, φ(s)) ds‖ ≤ M|t − t0| for |t − t0| ≤ β.

As a consequence,

‖φ(t) − x0‖ ≤ Mβ < Mα̃ ≤ Mα ≤ M(b/M) = b for |t − t0| ≤ β.

In particular, ‖φ(t0 ± β) − x0‖ < b, which contradicts (1.5).
The following theorem is proved using the same sort of technique as in the proof of
Theorem 1.1.6. It links the variation of the solution to the nature of the vector field.
Theorem 1.1.7. If φ(t) is a solution of the IVP (1.3) in an interval |t − t0| < α̃ ≤ α, then ‖φ(t1) − φ(t2)‖ ≤ M|t1 − t2| whenever t1, t2 are in the interval |t − t0| < α̃.

Theorem 1.1.8. A function φ defined on an interval I containing t0 is a solution of the IVP (1.3) on I if, and only if,
i) ∀t ∈ I, (t, φ(t)) ∈ U,
ii) φ is continuous on I,
iii) ∀t ∈ I, φ(t) = x0 + ∫_{t0}^t f(s, φ(s)) ds.
Proof. (⇒) Suppose that φ'(t) = f(t, φ(t)) for all t ∈ I and that φ(t0) = x0. Then for all t ∈ I, (t, φ(t)) ∈ U, giving (i). Also, φ is differentiable and thus continuous on I, giving (ii). Finally, integrating φ'(s) = f(s, φ(s)) from t0 to t gives

φ(t) = x0 + ∫_{t0}^t f(s, φ(s)) ds,

hence (iii).
(⇐) Assume (i), (ii) and (iii). Then, by (iii), φ is differentiable on I and φ'(t) = f(t, φ(t)) for all t ∈ I. Also from (iii), φ(t0) = x0 + ∫_{t0}^{t0} f(s, φ(s)) ds = x0.

Note that Theorem 1.1.8 only requires that φ be continuous, whereas a solution must of course be C^1, since its derivative must be continuous. However, this is implied by point (iii). In fact, more generally, the following result holds about the regularity of solutions.
Theorem 1.1.10. Let x' = f(x) be a scalar autonomous differential equation. Then the solutions of this equation are monotone.
Proof. The direction field of an autonomous scalar differential equation consists of vectors that are parallel for all t (since f(t, x) = f(x) for all t). Suppose that a solution φ of x' = f(x) is not monotone. Then, given an initial point (t0, x0), one of the following two situations occurs, as illustrated in Figure 1.3.

[Figure 1.3: Situations that would lead to a scalar autonomous differential equation having nonmonotone solutions.]
Suppose we are in case i), and assume f(x0) > 0. Thus the solution curve φ is increasing at (t0, x0), i.e., φ'(t0) > 0. As φ is continuous, i) implies that there exists t2 ∈ (t0, t1) such that φ(t2) is a maximum, with φ increasing for t ∈ [t0, t2) and φ decreasing for t ∈ (t2, t1]. It follows that φ'(t1) < 0; but since the equation is autonomous, the slope at any point where the solution takes the value x0 must equal f(x0) > 0, which contradicts φ'(t0) > 0.
Now assume that we are in case ii). Then there exists t2 ∈ (t0, t1) with φ(t2) = x0 but such that φ'(t2) < 0, whereas φ'(t2) = f(φ(t2)) = f(x0) > 0. This is a contradiction.
Remark – If we have uniqueness of solutions, it follows from this theorem that if φ1 and φ2 are
two solutions of the scalar autonomous differential equation x0 = f (x), then φ1 (t0 ) < φ2 (t0 ) implies
that φ1 (t) < φ2 (t) for all t. ◦
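The monotonicity of scalar autonomous solutions can be observed numerically. The sketch below integrates such an equation with forward Euler and checks that the computed trajectory is monotone; the logistic right-hand side f(x) = x(1 − x) and the discretization parameters are illustrative assumptions, not from the notes.

```python
# Numerical illustration (not from the notes): for the autonomous scalar
# equation x' = f(x) with f(x) = x(1 - x), trajectories are monotone.
def f(x):
    return x * (1.0 - x)

def euler(x0, h, steps):
    """Forward Euler approximation of x' = f(x), x(0) = x0."""
    x, xs = x0, [x0]
    for _ in range(steps):
        x = x + h * f(x)
        xs.append(x)
    return xs

traj = euler(0.1, 1e-3, 10000)   # integrate up to t = 10, starting below x = 1
increasing = all(u <= v for u, v in zip(traj, traj[1:]))
print(increasing, traj[-1])
```

Starting from 0.1, the numerical trajectory increases toward the equilibrium x = 1 without ever crossing it, consistent with Theorem 1.1.10 and the remark above.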
Step 1. Start with an initial estimate of the solution, say, the constant function φ0(t) = x0 for |t − t0| ≤ h. Evidently, this function satisfies the initial condition.
Step 2. Use φ0 in (1.4) to define the second element in the sequence:

φ1(t) = x0 + ∫_{t0}^t f(s, φ0(s)) ds.
At this stage, there are two major ways to tackle the problem, both based on the same idea: if we can prove that the sequence {φn} converges, and that the limit satisfies the differential equation, then we have the solution to the IVP (1.3). The first method (Section 1.2.2) uses a fixed point approach. The second method (Section 1.2.3) studies the limit explicitly.
‖f(t, x1) − f(t, x2)‖ ≤ L‖x1 − x2‖

for all x1, x2 ∈ D = {x : ‖x − x0‖ ≤ b} and all t such that |t − t0| ≤ a. Then there exists 0 < δ ≤ α = min(a, b/M) such that (1.3) has a unique solution in |t − t0| ≤ δ.
To set up the proof, we proceed as follows. Define the operator F by

F : x ↦ x0 + ∫_{t0}^t f(s, x(s)) ds.
Note that the function (Fφ)(t) is a continuous function of t. Picard's successive approximations then take the form φ1 = Fφ0, φ2 = Fφ1 = F²φ0, where F² represents F ∘ F. Iterating, the general term is given for k = 0, 1, … by

φk = F^k φ0.

Therefore, finding the limit lim_{k→∞} φk is equivalent to finding the function φ that solves the fixed point problem

x = Fx,

with x a continuously differentiable function. Thus, a solution of (1.3) is a fixed point of F, and we aim to use the contraction mapping principle to verify the existence (and uniqueness) of such a fixed point. We follow the proof of [14, pp. 56–58].
Proof. We show the result on the interval t − t0 ≤ δ; the proof for the interval t0 − t ≤ δ is similar. Let X = C([t0, t0 + δ]) be the space of continuous functions defined on the interval [t0, t0 + δ], endowed with the sup norm: for x ∈ X,

‖x‖_c = max_{t∈[t0,t0+δ]} ‖x(t)‖.

Recall that this norm is the norm of uniform convergence. Let then

S = {x ∈ X : ‖x − x0‖_c ≤ b}.

Of course, S ⊂ X. Furthermore, S is closed, and X with the sup norm is a complete metric space. Note that we have transformed the problem into one involving a space of continuous functions; we are now in an infinite dimensional setting. The proof proceeds in 3 steps.
Step 1. We begin by showing that F : S → S. From (1.4),

(Fφ)(t) − x0 = ∫_{t0}^t f(s, φ(s)) ds
= ∫_{t0}^t [f(s, φ(s)) − f(s, x0) + f(s, x0)] ds.

As f is (piecewise) continuous, it is bounded on [t0, t0 + δ] and M = max_{t∈[t0,t0+δ]} ‖f(t, x0)‖ exists. Thus

‖Fφ − x0‖ ≤ ∫_{t0}^t (‖f(s, φ(s)) − f(s, x0)‖ + M) ds
≤ ∫_{t0}^t (L‖φ(s) − x0‖ + M) ds,
so that, since ‖φ(s) − x0‖ ≤ b for φ ∈ S,

‖Fφ − x0‖_c = max_{[t0,t0+δ]} ‖Fφ − x0‖ ≤ (Lb + M)δ.

Choose then δ such that δ ≤ b/(Lb + M). Then we have

‖Fφ − x0‖_c ≤ b,
and thus Fφ ∈ S, i.e., F : S → S.
Step 2. We now show that F is a contraction on S. For φ1, φ2 ∈ S, the Lipschitz condition gives

‖Fφ1 − Fφ2‖_c ≤ Lδ‖φ1 − φ2‖_c ≤ ρ‖φ1 − φ2‖_c for δ ≤ ρ/L.

Thus, choosing ρ < 1 and δ ≤ ρ/L, F is a contraction. Since, by Step 1, F : S → S, the contraction mapping principle (Theorem A.11) implies that F has a unique fixed point in S, and (1.3) has a unique solution in S.
Step 3. It remains to be shown that any solution in X is in fact in S (since it is on X that we want to show the result). Considering a solution starting at x0 at time t0, the solution leaves S if there exists a t > t0 such that ‖φ(t) − x0‖ = b, i.e., the solution crosses the border of D. Let τ > t0 be the first such t. For all t0 ≤ t ≤ τ,

‖φ(t) − x0‖ ≤ ∫_{t0}^t (‖f(s, φ(s)) − f(s, x0)‖ + ‖f(s, x0)‖) ds
≤ ∫_{t0}^t (L‖φ(s) − x0‖ + M) ds
≤ ∫_{t0}^t (Lb + M) ds.
As a consequence,

b = ‖φ(τ) − x0‖ ≤ (Lb + M)(τ − t0).

Writing τ = t0 + µ for some µ > 0, it follows that µ ≥ b/(Lb + M): the solution cannot leave D before time t0 + b/(Lb + M), so for δ ≤ b/(Lb + M) the solution φ is confined to D.
Note that the condition x1, x2 ∈ D = {x : ‖x − x0‖ ≤ b} in the statement of the theorem refers to a local Lipschitz condition. If the function f is Lipschitz, then the following theorem holds.
Theorem 1.2.3 (Global existence). Suppose that f is piecewise continuous in t and is
Lipschitz on U = I × D. Then (1.3) admits a unique solution on I.
and

α = min(a, b/M).

Then the sequence defined by

φ0(t) = x0, |t − t0| ≤ α,
φi(t) = x0 + ∫_{t0}^t f(s, φi−1(s)) ds, i ≥ 1, |t − t0| ≤ α,
from the definitions of M and α, and thus ‖φ1 − φ0‖ ≤ b. So ∫_{t0}^t f(s, φ1(s)) ds is defined for |t − t0| ≤ α, and, for |t − t0| ≤ α,

‖φ2(t) − φ0‖ = ‖∫_{t0}^t f(s, φ1(s)) ds‖ ≤ ∫_{t0}^t ‖f(s, φ1(s))‖ ds ≤ αM ≤ b.

All subsequent terms in the sequence can be similarly defined, and, by induction, for |t − t0| ≤ α,

‖φk(t) − φ0‖ ≤ αM ≤ b, k = 1, …, n.
Now, for |t − t0| ≤ α,

‖φk+1(t) − φk(t)‖ = ‖x0 + ∫_{t0}^t f(s, φk(s)) ds − x0 − ∫_{t0}^t f(s, φk−1(s)) ds‖
= ‖∫_{t0}^t [f(s, φk(s)) − f(s, φk−1(s))] ds‖
≤ L ∫_{t0}^t ‖φk(s) − φk−1(s)‖ ds,

using the Lipschitz condition. We show by induction that

‖φk+1 − φk‖ ≤ b (L|t − t0|)^k / k! for |t − t0| ≤ α. (1.6)
Indeed, (1.6) holds for k = 1, as previously established. Assume that (1.6) holds for k = n. Then

‖φn+2 − φn+1‖ = ‖∫_{t0}^t [f(s, φn+1(s)) − f(s, φn(s))] ds‖
≤ L ∫_{t0}^t ‖φn+1(s) − φn(s)‖ ds
≤ Lb ∫_{t0}^t (L|s − t0|)^n / n! ds for |t − t0| ≤ α
= b (L|t − t0|)^{n+1} / (n + 1)!,

and thus (1.6) holds for k = n + 1, hence for all k ≥ 1.
Thus, for N > n we have

‖φN(t) − φn(t)‖ ≤ Σ_{k=n}^{N−1} ‖φk+1(t) − φk(t)‖ ≤ b Σ_{k=n}^{N−1} (L|t − t0|)^k / k! ≤ b Σ_{k=n}^{N−1} (Lα)^k / k!.
The rightmost term in this expression is the tail of the convergent series Σ (Lα)^k/k! and tends to zero as n → ∞. Therefore, {φk(t)} converges uniformly to a function φ(t) on the interval |t − t0| ≤ α. As the convergence is uniform, the limit function φ is continuous. Moreover, φ(t0) = x0. Indeed, φN(t) = φ0(t) + Σ_{k=1}^N (φk(t) − φk−1(t)), so φ(t) = φ0(t) + Σ_{k=1}^∞ (φk(t) − φk−1(t)), and every term of this series vanishes at t0.
The fact that φ is a solution of (1.3) follows from the following result: if a sequence of continuous functions {φk(t)} converges uniformly on the interval |t − t0| ≤ α, then

lim_{n→∞} ∫_{t0}^t φn(s) ds = ∫_{t0}^t lim_{n→∞} φn(s) ds.

Hence, letting i → ∞ in the definition of the sequence gives

φ(t) = x0 + ∫_{t0}^t f(s, φ(s)) ds.

As the integrand f(t, φ(t)) is continuous, φ is differentiable (with respect to t), and φ'(t) = f(t, φ(t)), so φ is a solution to the IVP (1.3).
Uniqueness. Let φ and ψ be two solutions of (1.3), i.e., for |t − t0| ≤ α,

φ(t) = x0 + ∫_{t0}^t f(s, φ(s)) ds,
ψ(t) = x0 + ∫_{t0}^t f(s, ψ(s)) ds.

Then, for |t − t0| ≤ α,

‖φ(t) − ψ(t)‖ = ‖∫_{t0}^t [f(s, φ(s)) − f(s, ψ(s))] ds‖ ≤ L ∫_{t0}^t ‖φ(s) − ψ(s)‖ ds. (1.7)
We now apply Gronwall's inequality (Lemma A.7) to this inequality, with K = 0 and g(t) = ‖φ(t) − ψ(t)‖. First, applying the lemma for t0 ≤ t ≤ t0 + α, we get 0 ≤ ‖φ(t) − ψ(t)‖ ≤ 0, that is,

‖φ(t) − ψ(t)‖ = 0,
Example – Let us consider the IVP x' = −x, x(0) = x0 = c, c ∈ R. For the initial estimate, we choose φ0(t) = c. Then

φ1(t) = x0 + ∫_0^t f(s, φ0(s)) ds = c + ∫_0^t (−φ0(s)) ds = c − c ∫_0^t ds = c − ct.

To find φ2, we use φ1 in (1.4):

φ2(t) = x0 + ∫_0^t f(s, φ1(s)) ds = c − ∫_0^t (c − cs) ds = c − ct + c t²/2.

Continuing this method, we find a general term of the form

φn(t) = c Σ_{i=0}^n (−1)^i t^i / i!.

These are the partial sums of the power series expansion of c e^{−t}, so φn → φ = c e^{−t} (and the approximation is valid on all of R), which is the solution of the initial value problem.
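The successive approximations of this example can be carried out exactly on polynomial coefficients, since each Picard step merely integrates a polynomial. A minimal sketch (the helper `picard_step` is hypothetical, introduced only for this illustration):

```python
import math

# Successive approximations for x' = -x, x(0) = c, done on polynomial
# coefficients: if phi_{n-1}(t) = sum a_i t^i, then
# phi_n(t) = c + integral over [0, t] of (-phi_{n-1}(s)) ds has coefficients
# b_0 = c and b_{i+1} = -a_i / (i + 1).
c = 2.0

def picard_step(coeffs):
    """One Picard iteration, acting on the coefficient list of phi_{n-1}."""
    out = [c]
    for i, a in enumerate(coeffs):
        out.append(-a / (i + 1))
    return out

coeffs = [c]                      # phi_0(t) = c
for _ in range(10):
    coeffs = picard_step(coeffs)

# phi_10 should match the first Taylor coefficients of c * e^{-t}.
expected = [c * (-1) ** i / math.factorial(i) for i in range(11)]
print(coeffs)
```

The factorial decay of the coefficients mirrors the bound (1.6): the difference between consecutive iterates is a single monomial of size b(L|t|)^k/k!.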
Note that the method of successive approximations is a very general method that can be
used in a much more general context; see [8, p. 264-269].
R = {(t, x) : |t − t0| ≤ a, ‖x − x0‖ ≤ b},

with a, b > 0, and let M = max_R ‖f(t, x)‖. Then there exists a continuous function φ(t), differentiable for |t − t0| ≤ α, such that
i) φ(t0) = x0,
Before we can prove this result, we need a certain number of preliminary notations and
results. The definition of equicontinuity and a statement of the Ascoli lemma are given in
Section A.5. To construct a solution without the Lipschitz condition, we approximate the
differential equation by another one that does satisfy the Lipschitz condition. The unique
solution of such an approximate problem is an ε-approximate solution. It is formally defined
as follows [8, p. 285].
R = {(t, x) : |t − t0| ≤ a, ‖x − x0‖ ≤ b}.

Then, for every positive number ε, there exists a function Fε(t, x) such that
ii) Fε has continuous partial derivatives of all orders with respect to x1, …, xn for |t − t0| ≤ a and all x,
iii) ‖Fε(t, x)‖ ≤ max_R ‖f(t, x)‖ = M for |t − t0| ≤ a and all x,
See a proof in [12, pp. 10–12]; note that this proof does not use the property that f defines a differential equation. Hence Lemma 1.2.7 can be used in a more general context than that of differential equations. We now prove Theorem 1.2.5.
Proof of Theorem 1.2.5. The proof takes four steps.
1. We construct, for every positive number ε, a function Fε (t, x) that satisfies the re-
quirements given in Lemma 1.2.7. Using an existence-uniqueness result in the Lipschitz case
(such as Theorem 1.2.2), we construct a function φε (t) such that
(P1) φε (t0 ) = x0 ,
4. Observe that

φε(t) = x0 + ∫_{t0}^t Fε(s, φε(s)) ds
= x0 + ∫_{t0}^t f(s, φε(s)) ds + ∫_{t0}^t [Fε(s, φε(s)) − f(s, φε(s))] ds,
The function φε can indeed be defined in this way on [t0 − δ, t0 + α]. To see this, remark first that this formula is meaningful and defines φε(t) for t0 ≤ t ≤ t0 + α1, α1 = min(α, ε), so that φε(t) is C^1 on [t0 − δ, t0 + α1] and, on this interval, satisfies relation (1.9).
It then follows that (1.8) can be used to extend φε(t) as a C^1 function over [t0 − δ, t0 + α2], where α2 = min(α, 2ε), satisfying relation (1.9). Continuing in this fashion, (1.8) serves to define φε(t) over [t0, t0 + α] so that φε(t) is a C^1 function on [t0 − δ, t0 + α], satisfying relation (1.9).
Since ‖φε'(t)‖ ≤ M, M can be used as a Lipschitz constant for φε, giving uniform continuity of φε. It follows that the family of functions φε(t), 0 < ε ≤ δ, is equicontinuous.
Thus, using Ascoli's Lemma (Lemma A.5), there exists a sequence ε(1) > ε(2) > …, with ε(n) → 0 as n → ∞, such that φε(n) converges uniformly to a continuous function φ on [t0 − δ, t0 + α]. The continuity of f implies that f(t, φε(n)(t − ε(n))) tends uniformly to f(t, φ(t)) as n → ∞; thus term-by-term integration of (1.8) with ε = ε(n) gives

φ(t) = x0 + ∫_{t0}^t f(s, φ(s)) ds.
R = {(t, x) : |t − t0| ≤ a, ‖x − x0‖ ≤ b}.

Given any ε > 0, there exists an ε-approximate solution φ of (1.3) on |t − t0| ≤ α such that φ(t0) = x0.
Proof. Let ε > 0 be given. We construct an ε-approximate solution on the interval [t0, t0 + α]; the construction works in a similar way for [t0 − α, t0]. The ε-approximate solution that we construct is a polygonal path starting at (t0, x0).
Since f is continuous on the compact set R, it is uniformly continuous on R, and therefore for the given value of ε, there exists δε > 0 such that

‖f(t, φ) − f(t̃, φ̃)‖ ≤ ε whenever (t, φ) ∈ R, (t̃, φ̃) ∈ R, |t − t̃| ≤ δε and ‖φ − φ̃‖ ≤ δε. (1.10)
Now divide the interval [t0, t0 + α] into n parts t0 < t1 < ⋯ < tn = t0 + α, in such a way that

max_k |tk − tk−1| ≤ min(δε, δε/M). (1.11)
From (t0, x0), construct a line segment with slope f(t0, x0) up to the line t = t1, intersecting it at (t1, x1). From the definition of α and M, it is clear that this segment lies inside the triangular region T bounded by the line segments of slopes ±M issued from (t0, x0) and the line t = t0 + α. In particular, (t1, x1) ∈ T.
At the point (t1, x1), construct a line segment with slope f(t1, x1) until the line t = t2, obtaining the point (t2, x2). Continuing similarly, a polygonal path φ is constructed that meets the line t = t0 + α in a finite number of steps, and lies entirely in T.
The function φ, which can be expressed as

φ(t0) = x0,
φ(t) = φ(tk−1) + f(tk−1, φ(tk−1))(t − tk−1), t ∈ [tk−1, tk], k = 1, …, n, (1.12)

is the ε-approximate solution that we seek. Clearly, φ is piecewise C^1 on [t0, t0 + α] and

‖φ(t) − φ(t̃)‖ ≤ M|t − t̃| for t, t̃ ∈ [t0, t0 + α]. (1.13)

If t ∈ [tk−1, tk], then (1.13) together with (1.11) imply that ‖φ(t) − φ(tk−1)‖ ≤ δε. But then, from (1.12) and (1.10),

‖φ'(t) − f(t, φ(t))‖ = ‖f(tk−1, φ(tk−1)) − f(t, φ(t))‖ ≤ ε.

Therefore, φ is an ε-approximate solution.
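The polygonal path (1.12) is precisely the forward Euler scheme on the partition {tk}. A minimal sketch (the vector field f(t, x) = −x, the initial data and the uniform partition are illustrative assumptions, not from the proof), comparing the polygonal approximation with the known exact solution e^{−t}:

```python
import math

# The polygonal path (1.12) on a uniform partition of [t0, t0 + alpha]:
# each segment has slope f(t_{k-1}, x_{k-1}) up to the line t = t_k.
def f(t, x):
    return -x

def polygonal(t0, x0, alpha, n):
    """Build the polygonal epsilon-approximate solution on [t0, t0 + alpha]."""
    h = alpha / n                      # uniform partition: |t_k - t_{k-1}| = h
    ts, xs = [t0], [x0]
    for k in range(1, n + 1):
        xs.append(xs[-1] + h * f(ts[-1], xs[-1]))
        ts.append(t0 + k * h)
    return ts, xs

ts, xs = polygonal(0.0, 1.0, 1.0, 1000)
# For x' = -x, x(0) = 1, the exact solution is exp(-t).
err = max(abs(x - math.exp(-t)) for t, x in zip(ts, xs))
print(err)
```

Refining the partition shrinks the defect ‖φ' − f(t, φ)‖, which is the sense in which the polygonal path approximates a true solution.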
Proof. Let {εn} be a monotone decreasing sequence of positive real numbers with εn → 0 as n → ∞. By Theorem 1.2.10, for each εn there exists an εn-approximate solution φn of (1.3) on |t − t0| ≤ α such that φn(t0) = x0. Choose one such solution φn for each εn. From (1.13), it follows that

‖φn(t) − φn(t̃)‖ ≤ M|t − t̃|. (1.14)
Clearly, φ(t0) = x0 when evaluated using (1.16), and also φ'(t) = f(t, φ(t)) since f is continuous. Thus φ as defined by (1.16) is a solution to (1.3) on |t − t0| ≤ α.
Since this has to hold true for all x1, x2 ∈ I, it must hold in particular for x2 = 0. Thus

3|x1|^{2/3} ≤ L|x1|.

Given ε > 0, it is possible to find Nε > 0 such that 1/n < ε for all n ≥ Nε. Let x1 = 1/n. Then for n ≥ Nε, if f were Lipschitz there would hold

3(1/n)^{2/3} ≤ L/n,

i.e., 3n^{1/3} ≤ L, which fails for n large enough. Hence no Lipschitz constant exists near 0.
2) In this case, x(t) < 0 if t < a, x(t) = 0 if t ∈ [a, b], and x(t) > 0 if t > b.
3) Here, x(t) = 0 if t < b, and x(t) > 0 if t > b.
4) In this case, x(t) < 0 if t < a, and x(t) = 0 if t > a.
5) In this last case, x(t) = 0 for all t ∈ R.
Now, depending on the sign of x, we can integrate the equation. First, if x > 0, then |x| = x and so

x' = 3x^{2/3}
⇔ (1/3) x^{−2/3} x' = 1
⇔ x^{1/3} = t + k1
⇔ x(t) = (t + k1)³
for k1 ∈ R. If x < 0, then |x| = −x and so

x' = 3(−x)^{2/3}
⇔ (1/3)(−x)^{−2/3}(−x') = −1
⇔ (−x)^{1/3} = −t + k2
⇔ x(t) = −(−t + k2)³
for k2 ∈ R. We can now use these computations with the different cases that were discussed earlier,
depending on the value of t0 and x0 . We begin with the case of t0 > 0 and x0 > 0.
1) The case E = ∅ is impossible, for all initial conditions (t0, x0). Indeed, as x0 > 0, we have x(t) = (t + k1)³. Using the initial condition, we find that x(t0) = x0 = (t0 + k1)³, i.e., k1 = x0^{1/3} − t0, and x(t) = (t + x0^{1/3} − t0)³. This function vanishes at t = t0 − x0^{1/3}, so E cannot be empty.
2) If E = [a, b], the solution takes the form

x(t) = 0 if t ∈ [a, b],
x(t) = (t + k1)³ if t > b.

Since x0 > 0, we have to be in the t > b region, so t0 > b, and (t0 + k1)³ = x0, which implies that k1 = x0^{1/3} − t0. Thus

x(t) = −(−t + k2)³ if t < a,
x(t) = 0 if t ∈ [a, b],
x(t) = (t + x0^{1/3} − t0)³ if t > b.
Since x is continuous,

lim_{t→b, t>b} (t + x0^{1/3} − t0)³ = 0 and lim_{t→a, t<a} −(−t + k2)³ = 0.

This implies that b = t0 − x0^{1/3} and k2 = a. So finally,

x(t) = −(−t + a)³ if t < a,
x(t) = 0 if t ∈ [a, t0 − x0^{1/3}],
x(t) = (t + x0^{1/3} − t0)³ if t > t0 − x0^{1/3}.

Thus, choosing a ≤ t0 − x0^{1/3}, we have solutions of the form shown in Figure 1.4. Indeed, any value of a satisfying this property yields a solution.
3) The case E = [a, +∞) is impossible. Indeed, there does not exist a solution through (t0, x0) such that x(t) = 0 for all t ∈ [a, +∞): since we are in the case of monotone increasing functions, if x0 > 0 then x(t) ≥ x0 > 0 for all t ≥ t0.
with f continuous on a domain U of the (t, x) space, and the initial point (t0 , x0 ) ∈ U.
Lemma 1.3.1. Let the function f (t, x) be continuous in an open set U in (t, x)-space, and
assume that a function φ(t) satisfies the condition φ0 (t) = f (t, φ(t)) and (t, φ(t)) ∈ U, in an
open interval I = {t1 < t < t2 }. Under this assumption, if limj→∞ (τj , φ(τj )) = (t1 , η) ∈ U
for some sequence {τj : j = 1, 2, . . .} of points in the interval I, then limτ →t1 (τ, φ(τ )) =
(t1 , η). Similarly, if limj→∞ (τj , φ(τj )) = (t2 , η) ∈ U for some sequence {τj : j = 1, 2, . . .} of
points in the interval I, then limτ →t2 (τ, φ(τ )) = (t2 , η).
Proof. Let W be an open neighborhood of (t1, η) with closure W̄ ⊂ U, and let M > 0 be such that ‖f(t, x)‖ ≤ M on W̄. We show that (t, φ(t)) ∈ W in an interval t1 < t < τ(W) for some τ(W) determined by W. For every positive integer j and every positive number ε, consider a rectangular region

Rj(ε) = {(t, x) : |t − τj| ≤ ε, ‖x − φ(τj)‖ ≤ Mε}

centered at (τj, φ(τj)). Since (τj, φ(τj)) → (t1, η) as j → ∞, there exist an ε > 0 and a j such that Rj(ε) ⊂ W, with α = min(ε, Mε/M) = ε, and τj − ε ≤ t1.
From Theorem 1.1.6 applied to the solution φ of the IVP x' = f(t, x), x(τj) = φ(τj), we obtain that (τ, φ(τ)) ∈ Rj(ε) ⊂ W on the interval t1 < τ ≤ τj. Since W is an arbitrary open neighborhood of (t1, η), we conclude that lim_{τ→t1} (τ, φ(τ)) = (t1, η).
From the previous result, we can derive a result concerning the maximal interval over
which a solution can be extended. To emphasize the fact that the solution φ of an ODE
exists in some interval I, we denote (φ, I). We need the notion of extension of a solution.
It is defined in the classical manner (see Figure 1.5).
Definition 1.3.2 (Extension). Let (φ, I) and (φ̃, Ĩ) be two solutions of the same ODE. We
say that (φ̃, Ĩ) is an extension of (φ, I) if, and only if,
I ⊂ Ĩ, φ̃|I = φ
Theorem 1.3.3. Let f(t, x) be continuous in an open set U in (t, x)-space, and let φ(t) be a function satisfying φ'(t) = f(t, φ(t)) and (t, φ(t)) ∈ U in an open interval I = {t1 < t < t2}. If the following two conditions are satisfied:
ii) limj→∞ (τj , φ(τj )) = (t1 , η) (or, respectively, (t2 , η)) exists for some sequence {τj : j =
1, 2, . . .} of points in the interval I,
then the limit point (t1 , η) (or, respectively, (t2 , η)) must be on the boundary of U.
[Figure 1.5: The extension φ̃, defined on the interval Ĩ, of the solution φ defined on the interval I.]
Proof. Suppose that the hypotheses of the theorem are satisfied, and that (t1, η) ∈ U (respectively, (t2, η) ∈ U). Then, from Lemma 1.3.1, it follows that

lim_{τ→t1} (τ, φ(τ)) = (t1, η)
(or, respectively, limτ →t2 (τ, φ(τ )) = (t2 , η)). Thus we can apply Theorem 1.2.5 (Peano’s
Theorem) to the IVP
x0 = f (t, x)
x(t1 ) = η,
(or, respectively, x0 = f (t, x), x(t2 ) = η). This implies that the solution φ can be extended
to the left of t1 (respectively, to the right of t2 ), since Theorem 1.2.5 implies existence in a
neighborhood of t1 . This is a contradiction.
A particularly important consequence of the previous theorem is the following corollary.
Corollary 1.3.4. Assume that f (t, x) is continuous for t1 < t < t2 and all x ∈ Rn . Also,
assume that there exists a function φ(t) satisfying the following conditions:
a) φ and φ0 are continuous in a subinterval I of the interval t1 < t < t2 ,
b) φ0 (t) = f (t, φ(t)) in I.
Then, either
i) φ(t) can be extended to the entire interval t1 < t < t2 as a solution of the differential
equation x0 = f (t, x), or
ii) limt→τ kφ(t)k = ∞ for some τ in the interval t1 < t < t2 .
Definition 1.3.5 (Right maximal interval of existence). The interval I is a right maximal
interval of existence for x if there does not exist an extension of x(t) over an interval I1
so that x remains a solution of (1.20), with I ⊂ I1 (and I and I1 having different right
endpoints). A left maximal interval of existence is defined in a similar way.
Definition 1.3.6 (Maximal interval of existence). An interval which is both a left and a
right maximal interval of existence is called a maximal interval of existence.
Theorem 1.3.7. Let f (t, x) be continuous on an open set U and φ(t) be a solution of (1.20)
on some interval. Then φ(t) can be extended (as a solution) over a maximal interval of
existence (ω− , ω+ ). Also, if (ω− , ω+ ) is a maximal interval of existence, then φ(t) tends to
the boundary ∂U of U as t → ω− and t → ω+ .
Remark – The extension need not be unique, and ω± depends on the extension. Also, to say, for example, that φ → ∂U as t → ω+ means that either ω+ = ∞, or ω+ < ∞ and, for any compact subset U′ of U, (t, φ(t)) ∉ U′ when t is near ω+. ◦
i) J = [t0 , t0 + a],
Corollary 1.3.9. Let f (t, x) be continuous on the closure Ū of an open (t, x)-set U, and let
(1.3) possess a solution φ on a maximal right interval J. Then either
i) J = [t0 , ∞),
Definition 1.3.10 (Maximal solution). Let I1 ⊂ R and I2 ⊂ R be two intervals such that
I1 ⊂ I2 . A solution (φ, I1 ) is maximal in I2 if φ has no extension (φ̃, Ĩ) solution of the
ODE such that I1 ( Ĩ ⊂ I2 .
Every global solution on a given interval I is maximal on that same interval. The converse
is false.
Example – Consider the equation x' = −2tx² on R. If x ≠ 0, x'x^{−2} = −2t, which implies that x(t) = 1/(t² − c), with c ∈ R. Depending on c, there are several cases.
• if c = 0, then the maximal non global solutions on R are defined on (−∞, 0) and (0, ∞).
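The claimed form of the solutions can be checked numerically. The sketch below (the centered finite difference, the step h and the choice c = −1 are illustrative assumptions; for c < 0 the solution 1/(t² − c) is global) verifies that x(t) = 1/(t² − c) satisfies x' = −2tx²:

```python
# Check, with a centered difference, that x(t) = 1/(t^2 - c) satisfies
# x' = -2 t x^2 away from any singularity t^2 = c.
c = -1.0                      # c < 0: denominator never vanishes, global solution
x = lambda t: 1.0 / (t * t - c)

h = 1e-6
worst = 0.0
for k in range(-20, 21):
    t = k / 10.0
    dxdt = (x(t + h) - x(t - h)) / (2 * h)   # centered difference, O(h^2) error
    worst = max(worst, abs(dxdt - (-2.0 * t * x(t) ** 2)))
print(worst)
```

The residual stays at round-off level across the sampled interval, consistent with 1/(t² − c) being an exact solution.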
The following theorem extends the uniqueness property to an interval of existence of the
solution.
Theorem 1.3.13. Let φ₁, φ₂ : I → Rⁿ be two solutions of the equation x′ = f(t, x), with f locally Lipschitz in x on U. If φ₁ and φ₂ coincide at a point t₀ ∈ I, then φ₁ = φ₂ on I.
Proof. Under the assumptions of the theorem, φ1 (t0 ) = φ2 (t0 ). Suppose that there exists a
t1 , t1 6= t0 , such that φ1 (t1 ) 6= φ2 (t1 ). For simplicity, let us assume that t1 > t0 .
By the local uniqueness of the solution, it follows from φ1 (t0 ) = φ2 (t0 ) that there exists
a neighborhood N of t₀ such that φ₁(t) = φ₂(t) for all t ∈ N. Let
E = {t ∈ (t₀, t₁] : φ₁(t) ≠ φ₂(t)}.
Since t₁ ∈ E, E ≠ ∅. Let α = inf E; we have α ∈ (t₀, t₁], and for all t ∈ [t₀, α), φ₁(t) = φ₂(t). By continuity of φ₁ and φ₂, we thus have φ₁(α) = φ₂(α). By local uniqueness, this implies that there exists a neighborhood W of α on which φ₁ = φ₂. This is a contradiction, since W must contain points of E, i.e., points at which φ₁(t) ≠ φ₂(t), arbitrarily close to α. Hence there exists no such t₁, and φ₁ = φ₂ on I.
Corollary 1.3.14 (Global uniqueness). Let f(t, x) be locally Lipschitz in x on U. Then through any point (t₀, x₀) ∈ U there passes a unique maximal solution φ : I → Rⁿ. If there exists a global solution on I, then it is unique.
As ψ is the solution of (1.3) through the point (t̃₀, x̃₀), we have, for all t ∈ I,
ψ(t) = x̃₀ + ∫_{t̃₀}^{t} f(s, ψ(s)) ds.  (1.22)
Since
∫_{t₀}^{t} f(s, φ(s)) ds = ∫_{t₀}^{t̃₀} f(s, φ(s)) ds + ∫_{t̃₀}^{t} f(s, φ(s)) ds,
we therefore have
‖φ(t) − ψ(t)‖ ≤ ‖x₀ − x̃₀‖ + ‖∫_{t₀}^{t̃₀} f(s, φ(s)) ds‖ + ‖∫_{t̃₀}^{t} [f(s, φ(s)) − f(s, ψ(s))] ds‖.
Using the boundedness assumptions on f and ∂f/∂x to evaluate the right-hand side of the latter inequality, we obtain
‖φ(t) − ψ(t)‖ ≤ ‖x₀ − x̃₀‖ + M|t̃₀ − t₀| + K ∫_{t̃₀}^{t} ‖φ(s) − ψ(s)‖ ds.
Now, given ε > 0, we need only choose δ < ε/[M + (1 + M)K(τ₂ − τ₁)] to obtain the desired inequality, completing the proof.
What we have shown is that the solution passing through the point (t0 , x0 ) is a continuous
function of the triple (t, t0 , x0 ). We now consider the case where the parameters also vary,
comparing solutions to two different but “close” equations.
Theorem 1.4.2. Let f, g be defined in a domain U and satisfy the hypotheses of Theorem 1.4.1. Let φ and ψ be solutions of x′ = f(t, x) and x′ = g(t, x), respectively, such that φ(t₀) = x₀, ψ(t₀) = x̂₀, existing on a common interval α < t < β. Suppose that ‖f(t, x) − g(t, x)‖ ≤ ε for (t, x) ∈ U. Then the solutions φ and ψ satisfy
x′ = f(t, x, µ), x(t₀) = x₀  (1.24)
have a unique solution φ₀ on the interval [a, b], where t₀ ∈ [a, b]. Then there exists a δ > 0 such that, for any fixed µ such that |µ − µ₀| < δ, every solution φ_µ of (1.24) exists over [a, b] and, as µ → µ₀, φ_µ → φ₀ uniformly over [a, b].
Proof. We begin by considering t₀ ∈ (a, b). First, choose α > 0 small enough that the region R = {|t − t₀| ≤ α, ‖x − x₀‖ ≤ Mα} is contained in U; note that R is a slight modification of the usual security domain. All solutions of (1.24) with µ ∈ I_µ exist over [t₀ − α, t₀ + α] and remain in R. Let φ_µ denote a solution. Then the set of functions {φ_µ}, µ ∈ I_µ, is a uniformly bounded and equicontinuous set on |t − t₀| ≤ α. This follows from the integral equation
φ_µ(t) = x₀ + ∫_{t₀}^{t} f(s, φ_µ(s), µ) ds,  |t − t₀| ≤ α.  (1.25)
x^(n) = f(t, x, x′, …, x^(n−1))  (1.27)
This equation can be reduced to a system of n first order ordinary differential equations, by proceeding as follows. Let y₀ = x, y₁ = x′, y₂ = x″, …, y_{n−1} = x^(n−1). Then (1.27) is equivalent to
y′ = F(t, y)  (1.28)
with y = (y₀, y₁, …, y_{n−1})ᵀ and
F(t, y) = (y₁, y₂, …, y_{n−1}, f(t, y₀, …, y_{n−1}))ᵀ.
x^(n) = f(t, x, x′, …, x^(n−1)),  x(t₀) = x₀, x′(t₀) = x₁, …, x^(n−1)(t₀) = x_{n−1}  (1.29)
x″ = −2x′ + 4x − 3, x(0) = 2, x′(0) = 1.
Letting y = x′, the equation becomes y′ = −2y + 4x − 3. The initial condition becomes x(0) = 2, y(0) = 1. So finally, the following IVP is equivalent to the original one:
x′ = y
y′ = 4x − 2y − 3
x(0) = 2, y(0) = 1
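The equivalence of the two formulations can be checked numerically. A minimal sketch, assuming SciPy is available; the closed form below is obtained from the characteristic equation r² + 2r − 4 = 0 of the original second order equation:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Integrate the reduced first order system x' = y, y' = 4x - 2y - 3 with
# x(0) = 2, y(0) = 1, and compare x(t) with the closed form solution of
# x'' = -2x' + 4x - 3 (characteristic roots r = -1 +/- sqrt(5)).
def rhs(t, z):
    x, y = z
    return [y, 4.0 * x - 2.0 * y - 3.0]

sol = solve_ivp(rhs, (0.0, 1.0), [2.0, 1.0], rtol=1e-10, atol=1e-12,
                dense_output=True)

r1, r2 = -1.0 + np.sqrt(5.0), -1.0 - np.sqrt(5.0)
# particular solution is the constant 3/4; fit c1, c2 to the initial data
c1, c2 = np.linalg.solve([[1.0, 1.0], [r1, r2]], [2.0 - 0.75, 1.0])
t = 1.0
x_exact = c1 * np.exp(r1 * t) + c2 * np.exp(r2 * t) + 0.75
assert abs(sol.sol(t)[0] - x_exact) < 1e-6
```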
is an nth order nonhomogeneous linear differential equation. Together with the initial conditions
x(t₀) = x₀, x′(t₀) = x′₀, …, x^(n−1)(t₀) = x₀^(n−1),
where x₀, x′₀, …, x₀^(n−1) ∈ R, it forms an IVP. We can transform it to a system of linear first order equations by setting
y₀ = x
y₁ = x′
⋮
y_{n−1} = x^(n−1)
The nth order linear equation is then equivalent to the following system of n first order linear equations:
y′₀ = y₁
y′₁ = y₂
⋮
y′_{n−2} = y_{n−1}
y′_{n−1} = a_{n−1}(t)y_{n−1}(t) + a_{n−2}(t)y_{n−2}(t) + ⋯ + a₁(t)y₁(t) + a₀(t)y₀(t) + b(t)
x′ = f(y, x)
y′ = 1.
However, this transformation does not always make the system any easier to study.
Linear systems
The name linear for system (2.1) is an abuse of language. System (2.1) should be called
an affine system, with associated linear system
Another way to distinguish systems (2.1) and (2.2) is to refer to the former as a nonhomogeneous linear system and to the latter as a homogeneous linear system. In order to lighten the language, since other qualifiers will be added to both (2.1) and (2.2), we use in this chapter the names affine system for (2.1) and linear system for (2.2). The exception to this naming convention is that we refer to (2.1) as a linear system when we consider generic properties of (2.1), with (2.2) as a particular case, as in this chapter's title or in the next section, for example.
Proof. Let k(t) = |||A(t)||| = sup_{‖x‖≤1} ‖A(t)x‖. Then for all t ∈ I and all x₁, x₂ ∈ K,
Then, for t₀, t ∈ J,
‖φ(t)‖ ≤ K + L‖∫_{t₀}^{t} φ(s) ds‖ ≤ K + L∫_{t₀}^{t} ‖φ(s)‖ ds.
Thus, using Gronwall's Lemma (Lemma A.7), the following estimate holds in J,
This implies that case ii) in Corollary 1.3.4 is ruled out, leaving only the possibility for φ to
be extendable over I, since the vector field in (2.1) is Lipschitz.
φ′₁ = A(t)φ₁
φ′₂ = A(t)φ₂,
Definition 2.2.2 (Fundamental set of solutions). A set of n solutions of the linear differ-
ential equation (2.2), all defined on the same open interval I, is called a fundamental set of
solutions on I if the solutions are linearly independent functions on I.
Proposition 2.2.3. If A(t) is defined and continuous on the interval I, then the system
(2.2) has a fundamental set of solutions defined on I.
Proof. Let t₀ ∈ I, and let e₁, …, eₙ denote the canonical basis of Kⁿ. Then, from Theorem 2.1.1, for each i = 1, …, n there exists a unique solution φᵢ such that φᵢ(t₀) = eᵢ. Furthermore, from Theorem 2.1.1, each function φᵢ is defined on the whole interval I.
Assume that {φᵢ}, i = 1, …, n, is linearly dependent. Then there exist αᵢ ∈ R, i = 1, …, n, not all zero, such that Σ_{i=1}^{n} αᵢφᵢ(t) = 0 for all t. In particular, this is true for t = t₀, and thus Σ_{i=1}^{n} αᵢφᵢ(t₀) = Σ_{i=1}^{n} αᵢeᵢ = 0, which implies that the canonical basis of Kⁿ is linearly dependent. This is a contradiction, so the φᵢ are linearly independent.
Proposition 2.2.4. If F is a fundamental set of solutions of the linear system (2.2) on the
open interval I, then every solution defined on I can be expressed as a linear combination
of the elements of F.
Φ_{t₀} : S → Kⁿ
x ↦ Φ_{t₀}(x) = x(t₀)
Proof. Φ_{t₀} is bijective. Indeed, let v ∈ Kⁿ; from Theorem 2.1.1, there exists a unique solution x passing through (t₀, v), i.e., with Φ_{t₀}(x) = x(t₀) = v, so Φ_{t₀} is surjective. That Φ_{t₀} is injective follows from uniqueness of solutions to an ODE. Furthermore, Φ_{t₀}(λ₁x₁ + λ₂x₂) = λ₁Φ_{t₀}(x₁) + λ₂Φ_{t₀}(x₂), so Φ_{t₀} is linear. Therefore dim S = dim Kⁿ = n.
Proof. Writing the differential equation Φ′(t) = A(t)Φ(t) in terms of the elements ϕ_{ij} and a_{ij} of, respectively, Φ and A,
ϕ′_{ij}(t) = Σ_{k=1}^{n} a_{ik}(t)ϕ_{kj}(t),  (2.4)
for i, j = 1, …, n. Writing det Φ(t) as the determinant of the array (ϕ_{ij}(t)), we see that (det Φ)′ is the sum of n determinants, the ith of which is obtained from det Φ by replacing its ith row (ϕ_{i1}, …, ϕ_{in}) by the row of derivatives (ϕ′_{i1}, …, ϕ′_{in}).
Indeed, write det Φ(t) = Γ(r₁, r₂, …, rₙ), where rᵢ is the ith row of Φ(t). Γ is then a linear function of each of its arguments when all other rows are held fixed, which implies that
(d/dt) det Φ(t) = Γ(r′₁, r₂, …, rₙ) + Γ(r₁, r′₂, …, rₙ) + ⋯ + Γ(r₁, r₂, …, r′ₙ).
(To show this, use the definition of the derivative as a limit.) Using (2.4) on the first of the n determinants in (det Φ)′ replaces its first row by
(Σₖ a₁ₖϕₖ₁, Σₖ a₁ₖϕₖ₂, …, Σₖ a₁ₖϕₖₙ),
the other rows being those of Φ. Adding −a₁₂ times the second row, −a₁₃ times the third row, …, −a₁ₙ times the nth row to the first row does not change the determinant; the first row then reduces to (a₁₁ϕ₁₁, a₁₁ϕ₁₂, …, a₁₁ϕ₁ₙ), so this determinant equals a₁₁ det Φ. Repeating this for each of the terms in (det Φ)′, we obtain (det Φ)′ = (a₁₁ + a₂₂ + ⋯ + aₙₙ) det Φ, giving finally (det Φ)′ = (tr A)(det Φ). Note that this equation takes the form u′ − α(t)u = 0, which implies that
u exp(−∫_τ^t α(s) ds) = constant,
Remark – Consider (2.3). Suppose that τ ∈ I is such that det Φ(τ) ≠ 0. Then, since eᵃ ≠ 0 for any a, it follows that det Φ(t) ≠ 0 for all t ∈ I. In short, linear independence of solutions for one t ∈ I is equivalent to linear independence of solutions for all t ∈ I. As a consequence, the column vectors of a fundamental matrix are linearly independent at every t ∈ I. ◦
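Abel's formula (det Φ)′ = (tr A)(det Φ) lends itself to a direct numerical check. A minimal sketch, assuming SciPy is available; the matrix A(t) below is an arbitrary illustrative choice:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Integrate the matrix equation Phi' = A(t) Phi from Phi(0) = I and compare
# det Phi(t) with exp(int_0^t tr A(s) ds), as Abel's formula predicts.
def A(t):
    return np.array([[np.sin(t), 1.0], [0.5, np.cos(t)]])

def rhs(t, phi_flat):
    phi = phi_flat.reshape(2, 2)
    return (A(t) @ phi).ravel()

t1 = 2.0
sol = solve_ivp(rhs, (0.0, t1), np.eye(2).ravel(), rtol=1e-10, atol=1e-12)
det_num = np.linalg.det(sol.y[:, -1].reshape(2, 2))

# int_0^t tr A = int_0^t (sin s + cos s) ds = 1 - cos t + sin t
det_abel = np.exp(1.0 - np.cos(t1) + np.sin(t1))
assert abs(det_num - det_abel) < 1e-6
```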
solution for any choice of φ(t₀). Thus det Φ(t₀) ≠ 0, and by the remark above, det Φ(t) ≠ 0 for all t ∈ I.
Conversely, let Φ be a solution matrix of (2.2), and suppose that det Φ(t) ≠ 0 for all t ∈ I. Then the column vectors are linearly independent at every t ∈ I.
From the remark above, the condition "det Φ(t) ≠ 0 for all t ∈ I" in Theorem 2.2.8 is equivalent to the condition "there exists t ∈ I such that det Φ(t) ≠ 0". A frequent candidate for this role is t₀.
To conclude on fundamental solution matrices, remark that there are infinitely many
of them, for a given linear system. However, since each fundamental solution matrix can
provide a basis for the vector space of solutions, it is clear that the fundamental matrices
associated to a given problem must be linked. Indeed, we have the following result.
(Φ⁻¹Ψ)′ = 0.
Therefore, integrating (Φ⁻¹Ψ)′ gives Φ⁻¹Ψ = C, where C ∈ Mₙ(K) is a constant matrix. Thus, Ψ = ΦC. Furthermore, as Φ and Ψ are fundamental matrix solutions, det Φ ≠ 0 and det Ψ ≠ 0, and therefore det C ≠ 0.
Remark – Note that if Φ is a fundamental matrix solution to (2.2) and C ∈ Mn (K) is a constant
nonsingular matrix, then it is not necessarily true that CΦ is a fundamental matrix solution to
(2.2). See Exercise 2.3. ◦
Therefore,
∂R(t, s)/∂s = −Φ(t)Φ⁻¹(s)[∂Φ(s)/∂s]Φ⁻¹(s) = −R(t, s)[∂Φ(s)/∂s]Φ⁻¹(s).
Now, since Φ(s) is a fundamental matrix solution, it follows that ∂Φ(s)/∂s = A(s)Φ(s), and thus
∂R(t, s)/∂s = −R(t, s)A(s)Φ(s)Φ⁻¹(s) = −R(t, s)A(s),
giving 4). Finally,
∂R(t, s)/∂t = [∂Φ(t)/∂t]Φ⁻¹(s) = A(t)Φ(t)Φ⁻¹(s) = A(t)R(t, s),
since Φ is a fundamental matrix solution, giving 5).
The role of the resolvent matrix is the following. Recall that, from Lemma 2.2.5, Φt0
defined by
Φ_{t₀} : S → Kⁿ
x ↦ x(t₀),
is a K-linear isomorphism from the space S to the space Kⁿ. Then R(t, t₀) is a map from Kⁿ to Kⁿ,
R(t, t₀) : Kⁿ → Kⁿ
v ↦ R(t, t₀)v = w,
such that
R(t, t₀) = Φₜ ∘ Φ_{t₀}⁻¹,
i.e.,
2.2.4 Wronskian
Definition 2.2.14. The Wronskian of a system {x1 , . . . , xn } of solutions to (2.2) is given
by
W (t) = det(x1 (t), . . . , xn (t)).
Let vi = xi (t0 ). Then we have
As v = 0 is not the only solution, this implies that A − λI cannot be invertible, and so
where x′ᵢ is the ith component of x′, i = 1, …, n. In the general case, we need the notion of matrix exponentials. Defining the exponential of the matrix A as
e^A = Σ_{k=0}^{∞} Aᵏ/k!,
which implies that, in this case, the matrix R(t, t₀) takes the form
R(t, t₀) = diag(e^{λ₁(t−t₀)}, e^{λ₂(t−t₀)}, …, e^{λₙ(t−t₀)}).
In the general case, we need the notion of generalized eigenvectors.
Definition 2.2.17 (Generalized eigenvectors). Let λ be an eigenvalue of the n × n matrix
A, with multiplicity m ≤ n. Then, for k = 1, . . . , m, any nonzero solution v of
(A − λI)ᵏ v = 0
is called a generalized eigenvector of A.
Theorem 2.2.18. Let A be a real n × n matrix with real eigenvalues λ1 , . . . , λn repeated
according to their multiplicity. Then there exists a basis of generalized eigenvectors for Rn .
And if {v1 , . . . , vn } is any basis of generalized eigenvectors for Rn , the matrix P = [v1 · · · vn ]
is invertible,
A = D + N,
where
P −1 DP = diag(λj ),
the matrix N = A − D is nilpotent of order k ≤ n, and D and N commute.
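For a single Jordan block, the D + N splitting of Theorem 2.2.18 gives a closed form for the matrix exponential, since D and N commute and N is nilpotent. A minimal sketch, assuming NumPy and SciPy are available:

```python
import numpy as np
from scipy.linalg import expm

# For the 2x2 Jordan block A = [[l, 1], [0, l]], take D = l*I (diagonal part)
# and N = A - D (nilpotent, N^2 = 0); since D N = N D, the exponential series
# truncates: e^{At} = e^{lt} (I + N t).
l, t = -0.5, 1.3
A = np.array([[l, 1.0], [0.0, l]])
D = l * np.eye(2)
N = A - D
assert np.allclose(N @ N, 0.0)             # N is nilpotent of order 2
assert np.allclose(D @ N, N @ D)           # D and N commute

eAt = np.exp(l * t) * (np.eye(2) + N * t)  # closed form from the splitting
assert np.allclose(eAt, expm(A * t))       # agrees with scipy's expm
```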
Therefore
(d/dt)(x₁ − x₂) = A(t)(x₁ − x₂),
Theorem 2.3.2. The global solutions of (2.1) that are defined on I form an n dimensional
affine subspace of the vector space of maps from I to Kn .
Theorem 2.3.3. Let V be the vector space over R of solutions to the linear system x0 =
A(t)x. If ψ is a particular solution of the affine system (2.1), then the set of all solutions of
(2.1) is precisely
{φ + ψ, φ ∈ V }.
Practical rules:
1. To obtain all solutions of (2.1), all solutions of (2.2) must be added to a particular
solution of (2.1).
Theorem 2.3.4. Let R(t, t₀) be the resolvent of the homogeneous equation x′ = A(t)x associated to (2.1). Then the solution x of (2.1) through (t₀, x₀) is given by
x(t) = R(t, t₀)x₀ + ∫_{t₀}^{t} R(t, s)B(s) ds.  (2.7)
Proof. Let R(t, t₀) be the resolvent of x′ = A(t)x. Any solution of the latter equation is given by
x(t) = R(t, t₀)v, v ∈ Rⁿ.
Let us now seek a particular solution to (2.1) of the form x(t) = R(t, t₀)v(t), i.e., using a variation of constants approach. Taking the derivative of this expression of x, we have
x′(t) = [d/dt R(t, t₀)]v(t) + R(t, t₀)v′(t) = A(t)R(t, t₀)v(t) + R(t, t₀)v′(t).
Thus x is a solution to (2.1) if
A(t)R(t, t₀)v(t) + R(t, t₀)v′(t) = A(t)R(t, t₀)v(t) + B(t)
⇔ R(t, t₀)v′(t) = B(t)
⇔ v′(t) = R(t₀, t)B(t),
since R(t, s)⁻¹ = R(s, t). Therefore, v(t) = ∫_{t₀}^{t} R(t₀, s)B(s) ds. A particular solution is given by
x(t) = R(t, t₀) ∫_{t₀}^{t} R(t₀, s)B(s) ds = ∫_{t₀}^{t} R(t, t₀)R(t₀, s)B(s) ds = ∫_{t₀}^{t} R(t, s)B(s) ds.
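The variation of constants formula (2.7) can be checked numerically when A is constant, in which case R(t, s) = e^{(t−s)A}. A minimal sketch, assuming SciPy is available; A, B(t) and x₀ below are arbitrary illustrative choices:

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp, trapezoid

# Compare a direct integration of x' = A x + B(t) with the formula
#   x(t) = e^{(t-t0)A} x0 + int_{t0}^t e^{(t-s)A} B(s) ds.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
x0 = np.array([1.0, 0.0])

def B(t):
    return np.array([np.cos(t), 0.0])

t0, t1 = 0.0, 1.5
sol = solve_ivp(lambda t, x: A @ x + B(t), (t0, t1), x0,
                rtol=1e-10, atol=1e-12)

# evaluate the formula, approximating the integral by the trapezoidal rule
ss = np.linspace(t0, t1, 801)
integrand = np.array([expm((t1 - s) * A) @ B(s) for s in ss])
x_voc = expm((t1 - t0) * A) @ x0 + trapezoid(integrand, ss, axis=0)
assert np.allclose(sol.y[:, -1], x_voc, atol=1e-4)
```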
Proof. Use Lemma 2.2.15 and the variation of constants formula (2.7).
x′ = A(t)x,  A(t + ω) = A(t) for all t,  (2.10)
Definition 2.4.1 (Monodromy operator). Associated to system (2.10) is the resolvent R(t, s).
For all s ∈ R, the operator
C(s) := R(s + ω, s)
is called the monodromy operator.
Theorem 2.4.2. If X(t) is a fundamental matrix for (2.10), then there exists a nonsingular
constant matrix V such that, for all t,
X(t + ω) = X(t)V.
Proof. Since X is a fundamental matrix solution, there holds that X 0 (t) = A(t)X(t) for all t.
Therefore X 0 (t+ω) = A(t+ω)X(t+ω), and by periodicity of A(t), X 0 (t+ω) = A(t)X(t+ω),
which implies that X(t + ω) is a fundamental matrix of (2.10). As a consequence, by
Theorem 2.2.9, there exists a matrix V such that X(t + ω) = X(t)V .
Since at t = 0, X(ω) = X(0)V , it follows that V = X −1 (0)X(ω).
Theorem 2.4.3 (Floquet’s theorem, complex case). Any fundamental matrix solution Φ of
(2.10) takes the form
Φ(t) = P (t)etB (2.11)
where P (t) and B are n × n complex matrices such that
Proof. Let Φ be a fundamental matrix solution. From Theorem 2.4.2, the monodromy matrix V = Φ⁻¹(0)Φ(ω) is such that Φ(t + ω) = Φ(t)V. By Theorem A.11.1, there exists B ∈ Mₙ(C)
such that e^{Bω} = V. Let P(t) = Φ(t)e^{−Bt}, so Φ(t) = P(t)e^{Bt}. It is clear that P is continuous and nonsingular. Also,
P(t + ω) = Φ(t + ω)e^{−B(t+ω)} = Φ(t)V e^{−Bω}e^{−Bt} = Φ(t)e^{Bω}e^{−Bω}e^{−Bt} = Φ(t)e^{−Bt} = P(t),
proving that P is ω-periodic.
Theorem 2.4.4 (Floquet’s theorem, real case). Any fundamental matrix solution Φ of (2.10)
takes the form
Φ(t) = P (t)etB (2.12)
where P (t) and B are n × n real matrices such that
i) P (t) is invertible, continuous, and periodic of period 2ω in t,
ii) B is a constant matrix such that Φ(ω)2 = e2ωB .
Proof. The proof works similarly to the complex case, except that here Theorem A.11.1 implies that there exists B ∈ Mₙ(R) such that e^{2ωB} = V². Let P(t) = Φ(t)e^{−tB}, so Φ(t) = P(t)e^{tB}. It is clear that P is continuous and nonsingular. Also,
P(t + 2ω) = Φ(t + 2ω)e^{−(t+2ω)B} = Φ(t + ω)V e^{−(2ω+t)B} = Φ(t)V²e^{−(2ω+t)B} = Φ(t)e^{2ωB}e^{−2ωB}e^{−tB} = Φ(t)e^{−tB} = P(t),
proving that P is 2ω-periodic.
See [12, p. 87-90], [4, p. 162-179].
Theorem 2.4.5 (Floquet’s theorem, [4]). If Φ(t) is a fundamental matrix solution of the
ω-periodic system (2.10), then, for all t ∈ R,
Φ(t + ω) = Φ(t)Φ−1 (0)Φ(ω).
In addition, for each possibly complex matrix B such that
e^{ωB} = Φ⁻¹(0)Φ(ω),
there is a possibly complex ω-periodic matrix function t ↦ P(t) such that Φ(t) = P(t)e^{tB} for all t ∈ R. Also, there is a real matrix R and a real 2ω-periodic matrix function t ↦ Q(t) such that Φ(t) = Q(t)e^{tR} for all t ∈ R.
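Theorem 2.4.2 and Corollary 2.4.12 can be illustrated numerically by computing a monodromy matrix. A minimal sketch, assuming SciPy is available; the ω-periodic matrix A(t) below is an arbitrary illustrative choice:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Periodic system x' = A(t) x with period omega = 2*pi; the fundamental
# matrix X with X(0) = I satisfies X(t + omega) = X(t) V, V = X(omega).
omega = 2.0 * np.pi

def A(t):
    return np.array([[-1.0 + np.cos(t), 0.0], [1.0, -2.0]])

def rhs(t, x_flat):
    return (A(t) @ x_flat.reshape(2, 2)).ravel()

def X(t_end):
    sol = solve_ivp(rhs, (0.0, t_end), np.eye(2).ravel(),
                    rtol=1e-11, atol=1e-13)
    return sol.y[:, -1].reshape(2, 2)

V = X(omega)                               # monodromy matrix (X(0) = I)
Xt, Xtw = X(1.0), X(1.0 + omega)
assert np.allclose(Xtw, Xt @ V, atol=1e-6)  # X(t + omega) = X(t) V
# Floquet multipliers = eigenvalues of V; here all |rho| < 1, so all
# solutions tend to 0 (Corollary 2.4.12)
assert np.all(np.abs(np.linalg.eigvals(V)) < 1.0)
```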
Definition 2.4.6 (Floquet normal form). The representation Φ(t) = P (t)etR is called a
Floquet normal form.
In the case where Φ(t) = P(t)e^{tB}, we have dP(t)/dt = A(t)P(t) − P(t)B. Therefore, letting z = P⁻¹(t)x, we obtain
z′ = [dP⁻¹(t)/dt]x + P⁻¹(t)x′ = −P⁻¹(t)[dP(t)/dt]P⁻¹(t)x + P⁻¹(t)A(t)x = −P⁻¹(t)[A(t)P(t) − P(t)B]P⁻¹(t)x + P⁻¹(t)A(t)x = Bz,
so the periodic change of variables x = P(t)z transforms the periodic system (2.10) into the constant coefficient system z′ = Bz.
Proposition 2.4.11. Suppose that X, Y are fundamental matrices for (2.10) and that X(t+
ω) = X(t)V , Y (t + ω) = Y (t)U . Then the monodromy matrices U and V are similar.
Proof. Suppose that X(t + ω) = X(t)V and Y(t + ω) = Y(t)U. By Theorem 2.2.9, since X and Y are fundamental matrices for (2.10), there exists an invertible matrix C such that X(t) = Y(t)C for all t. Thus, in particular, X(t + ω) = Y(t + ω)C = Y(t)UC, while also X(t + ω) = X(t)V = Y(t)CV. Hence UC = CV, i.e., V = C⁻¹UC, so U and V are similar.
From this Proposition, it follows that monodromy matrices share the same spectrum.
Corollary 2.4.12. All solutions of (2.10) tend to 0 as t → ∞ if and only if |ρⱼ| < 1 for all j (or, equivalently, ℜ(λⱼ) < 0 for all j).
x′ = A(t)x  (2.14)
associated to (2.13) has no nonzero solution of period ω, then (2.13) has, for each function f, a unique ω-periodic solution.
The Fredholm alternative concerns the case where there exists a nonzero periodic solution
of (2.14). We give some needed results before going into details. Consider (2.14). Associated
to this system is the so-called adjoint system, which is defined by the following differential
equation,
y′ = −Aᵀ(t)y  (2.15)
Proposition 2.4.14. The adjoint equation has the following properties.
i) Let R(t, t₀) be the resolvent matrix of (2.14). Then the resolvent matrix of (2.15) is Rᵀ(t₀, t).
ii) There are as many independent periodic solutions of (2.14) as there are of (2.15).
iii) If x is a solution of (2.14) and y is a solution of (2.15), then the scalar product ⟨x(t), y(t)⟩ is constant.
Proof. i) We know that ∂R(t, s)/∂s = −R(t, s)A(s). Therefore, ∂Rᵀ(t, s)/∂s = −Aᵀ(s)Rᵀ(t, s). As R(s, s) = I, the first point is proved.
ii) The solution of (2.15) with initial value y₀ is Rᵀ(0, t)y₀. The initial value of a periodic solution of (2.15) is a y₀ such that
Rᵀ(0, ω)y₀ = y₀.
This can also be written as
[Rᵀ(0, ω) − I]y₀ = 0,
or, taking the transpose,
y₀ᵀ[R(0, ω) − I] = 0.
Now, since R(0, ω) = R⁻¹(ω, 0), the latter equation has as many independent solutions y₀ as the equation [R(0, ω) − I]x₀ = 0 has independent solutions x₀; the number of these depends on the rank of R(ω, 0) − I.
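Property iii) of Proposition 2.4.14 is easy to verify numerically. A minimal sketch, assuming SciPy is available; A(t) and the initial values below are arbitrary illustrative choices:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Integrate x' = A(t) x and y' = -A(t)^T y together and check that the
# scalar product <x(t), y(t)> stays constant along the solutions.
def A(t):
    return np.array([[0.0, np.sin(t)], [1.0, -0.5 * t]])

def rhs(t, z):
    x, y = z[:2], z[2:]
    return np.concatenate([A(t) @ x, -A(t).T @ y])

z0 = np.array([1.0, -2.0, 0.5, 1.0])      # x(0) = (1, -2), y(0) = (0.5, 1)
sol = solve_ivp(rhs, (0.0, 3.0), z0, rtol=1e-10, atol=1e-12,
                t_eval=np.linspace(0.0, 3.0, 31))
prods = np.sum(sol.y[:2] * sol.y[2:], axis=0)   # <x(t), y(t)> along the orbit
assert np.allclose(prods, prods[0], atol=1e-6)
```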
Let Img(A) be the image of A, Ker(A∗ ) be the kernel of A∗ . Then we have H = Img(A) ⊕
Ker(A∗ ).
Theorem 2.4.15 (Fredholm alternative). For the equation Af = g to have a solution, it is
necessary and sufficient that g be orthogonal to every element of Ker(A∗ ).
We now use this very general setting to prove the following theorem, in the context of
ODEs.
Theorem 2.4.16 (Fredholm alternative for ODEs). Consider (2.13) with A and f continuous and ω-periodic. Suppose that the homogeneous equation (2.14) has p independent solutions of period ω. Then the adjoint equation (2.15) also has p independent solutions of period ω, which we denote y₁, …, y_p. Then
i) if
∫₀^ω ⟨yₖ(t), b(t)⟩ dt = 0, k = 1, …, p,  (2.16)
then (2.13) has ω-periodic solutions;
ii) if this condition is not fulfilled, (2.13) has no nontrivial solution of period ω.
Proof. First, remark that x₀ is the initial condition of an ω-periodic solution of (2.13) if, and only if,
[R(0, ω) − I]x₀ = ∫₀^ω R(0, s)b(s) ds.  (2.17)
Indeed, the solution through (0, x₀) satisfies, at time ω,
x(ω) = R(ω, 0)x₀ + ∫₀^ω R(ω, s)b(s) ds,
and periodicity requires x(ω) = x₀; multiplying by R(0, ω) gives (2.17).
On the other hand, yₖ(0) is the initial condition of an ω-periodic solution yₖ if, and only if,
[Rᵀ(0, ω) − I]yₖ(0) = 0.
Let C = R(0, ω) − I. We have Rⁿ = Img(C) ⊕ Ker(Cᵀ). We now use the Fredholm alternative in this context. There exists an x₀ such that
Cx₀ = ∫₀^ω R(0, s)b(s) ds
if, and only if, the right-hand side is orthogonal to every element of Ker(Cᵀ), which is precisely condition (2.16). The set of such x₀ is of the form v₀ + Ker(Cᵀ), where v₀ is one of these vectors; hence there exist p of them which are independent and are initial conditions of the p independent ω-periodic solutions of (2.13).
on some subinterval of I.
Proof. We proceed using a variation of constants approach. It is known that the general
solution to the homogeneous equation x0 = A(t)x associated to (2.19) is given by
φ(t) = R(t, t0 )x0 .
We seek a solution to (2.19) by assuming that φ(t) = R(t, t₀)v(t). We have
φ′(t) = [d/dt R(t, t₀)]v(t) + R(t, t₀)v′(t) = A(t)R(t, t₀)v(t) + R(t, t₀)v′(t),
from Proposition 2.2.11. For φ to be a solution, it must satisfy the differential equation (2.19), and thus
φ′(t) = A(t)φ(t) + g(t, φ(t)) ⇔ A(t)R(t, t₀)v(t) + R(t, t₀)v′(t) = A(t)R(t, t₀)v(t) + g(t, φ(t))
⇔ R(t, t₀)v′(t) = g(t, φ(t))
⇔ v′(t) = R(t, t₀)⁻¹g(t, φ(t)) = R(t₀, t)g(t, φ(t))
⇔ v(t) = ∫_{t₀}^{t} R(t₀, s)g(s, φ(s)) ds + C,
3. Stability of linear systems
Theorem 3.2.1. Suppose that all eigenvalues of A have negative real parts, and that b is
continuous and such that limt→∞ b(t) = 0. Then 0 is a g.a.s. equilibrium of (3.2).
By the hypothesis on A, there exist K > 0, σ > 0 such that, for t₀ ≤ t < ∞, ‖R(t, t₀)‖ ≤ Ke^{−σ(t−t₀)}, where R(t, t₀) = e^{(t−t₀)A} is the fundamental matrix of the homogeneous part of (3.2).
Since limt→∞ b(t) = 0, given any η > 0, there exists a number T ≥ t0 such that |b(t)| < η
for t ≥ T . We now use the variation of constants formula (3.3) with the point (T, φT (x0 ))
for initial condition. We have, for T ≤ t < ∞,
φₜ(x₀) = e^{(t−T)A}φ_T(x₀) + ∫_T^t e^{(t−s)A}b(s)φₛ(x₀) ds.
Thus, using ‖e^{(t−T)A}‖ ≤ Ke^{−σ(t−T)} and |b(t)| < η for t ≥ T, we obtain, for T ≤ t < ∞,
‖φₜ(x₀)‖ ≤ Ke^{−σ(t−T)}‖φ_T(x₀)‖ + Kη ∫_T^t e^{−σ(t−s)}‖φₛ(x₀)‖ ds.
Multiplying both sides of this inequality by e^{σt} and using Gronwall's inequality (Appendix A.7) with the function ‖φₜ(x₀)‖e^{σt}, we obtain, for T ≤ t < ∞,
From this we conclude that if 0 < η < σ/K, the solution φt (x0 ) will approach zero exponen-
tially. This does not yet prove that the zero solution of (3.2) is stable. To do this, we compute
a bound on ‖φ_T(x₀)‖. Returning to (3.3) and restricting t to the interval t₀ ≤ t ≤ T, we have
‖φₜ(x₀)‖ ≤ Ke^{−σ(t−t₀)}‖x₀‖ + K₁K ∫_{t₀}^{t} e^{−σ(t−s)}‖φ(s, t₀, x₀)‖ ds,
where K₁ = max_{t₀≤t≤T} |b(t)|. Multiplying by e^{σt} and applying the Gronwall inequality, we obtain
Therefore,
‖φ_T(x₀)‖ ≤ K‖x₀‖e^{K₁K(T−t₀)},  t₀ ≤ T.  (3.6)
Thus we can make ‖φ_T(x₀)‖ small by choosing ‖x₀‖ sufficiently small. This, together with (3.4), gives the stability. Indeed, substituting (3.6) into (3.4) gives, for T ≤ t < ∞,
Let then K₂ = max{Ke^{K₁K(T−t₀)}, K²e^{K₁K(T−t₀)}}. From (3.5) and (3.7) we have
‖φₜ(x₀)‖ ≤ K₂‖x₀‖ if t₀ ≤ t ≤ T,
‖φₜ(x₀)‖ ≤ K₂‖x₀‖e^{−(σ−Kη)(t−T)} if T ≤ t < ∞.  (3.8)
For a given matrix A, we can compute K and σ; we next pick any 0 < η < σ/K and then T ≥ t₀ so that |b(t)| < η for t ≥ T. We then compute K₁ and K₂. Now, given any ε > 0, choose δ < ε/K₂. Then from (3.8), if ‖x₀‖ < δ, then ‖φₜ(x₀)‖ < ε for all t ≥ t₀, so the zero solution is stable. From (3.8), it is also clear that the zero solution is globally asymptotically stable.
Corollary 3.2.2. Let all eigenvalues of A have negative real part, so that ‖e^{At}‖ ≤ Ke^{−σt} for some constants K > 0, σ > 0 and all t ≥ 0. Let b(t) be continuous for 0 ≤ t < ∞ and suppose that there exists T > 0 such that |b(t)| < σ/K for t ≥ T. Then the zero solution of (3.2) is globally asymptotically stable.
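The behavior described in Theorem 3.2.1 can be illustrated numerically for a system of the form x′ = Ax + b(t)x. A minimal sketch, assuming SciPy is available; A and b(t) below are arbitrary illustrative choices, with b large initially but tending to 0:

```python
import numpy as np
from scipy.integrate import solve_ivp

# A is Hurwitz (both eigenvalues -1) and b(t) -> 0 as t -> infinity; despite
# the large initial perturbation b(0) = 5, the solution eventually decays.
A = np.array([[-1.0, 2.0], [0.0, -1.0]])

def b(t):
    return 5.0 / (1.0 + t)

sol = solve_ivp(lambda t, x: A @ x + b(t) * x, (0.0, 60.0),
                [1.0, 1.0], rtol=1e-9, atol=1e-12)
# transient growth is allowed, but the solution tends to 0
assert np.linalg.norm(sol.y[:, -1]) < 1e-3
```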
Chapter 4
Linearization
x′ = f(x)  (4.1)
The object of this chapter is to show two results which link the behavior of (4.1) near a
hyperbolic equilibrium point x∗ to the behavior of the linearized system
be a basis of Rⁿ, with n = 2m − k.
Definition 4.1.1 (Stable, unstable and center subspaces). The stable, unstable and center subspaces of the linear system (2.6) are given, respectively, by
E^s = Span{uⱼ, vⱼ : aⱼ < 0},
E^u = Span{uⱼ, vⱼ : aⱼ > 0},
and
E^c = Span{uⱼ, vⱼ : aⱼ = 0}.
63
Fund. Theory ODE Lecture Notes – J. Arino
64 4. Linearization
Definition 4.1.2. The mapping eAt : Rn → Rn is called the flow of the linear system (2.6).
The term flow is used since eAt describes the motion of points x0 ∈ Rn along trajectories
of (2.6).
Definition 4.1.3. If all eigenvalues of A have nonzero real part, that is, if E^c = {0}, then the flow e^{At} of system (2.6) is called a hyperbolic flow, and the system (2.6) is a hyperbolic linear system.
Definition 4.1.4. A subspace E ⊂ Rn is invariant with respect to the flow eAt , or invariant
under the flow of (2.6), if eAt E ⊂ E for all t ∈ R.
Theorem 4.1.5. Let E be the generalized eigenspace of A associated to the eigenvalue λ.
Then AE ⊂ E.
Theorem 4.1.6. Let A ∈ Mn (R). Then
Rn = E s ⊕ E u ⊕ E c .
Furthermore, if the matrix A is the matrix of the linear autonomous system (2.6), then E s ,
E u and E c are invariant under the flow of (2.6).
Definition 4.1.7. If all the eigenvalues of A have negative (resp. positive) real parts, then
the origin is a sink (resp. source) for the linear system (2.6).
Theorem 4.1.8. The stable, center and unstable subspaces E^s, E^c and E^u, respectively, are invariant with respect to e^{At}; i.e., if x₀ ∈ E^s, y₀ ∈ E^c and z₀ ∈ E^u, then e^{At}x₀ ∈ E^s, e^{At}y₀ ∈ E^c and e^{At}z₀ ∈ E^u.
Definition 4.1.9 (Homeomorphism). Let X be a metric space and let A and B be subsets
of X. A homeomorphism h : A → B of A onto B is a continuous one-to-one map of A
onto B such that h−1 : B → A is continuous. The sets A and B are called homeomorphic
or topologically equivalent if there is a homeomorphism of A onto B.
Definition 4.1.10 (Differentiable manifold). An n-dimensional differentiable manifold M
(or a manifold of class C k ) is a connected metric space with an open covering {Uα } ( i.e.,
M = ∪α Uα ) such that
i) for all α, Uα is homeomorphic to the open unit ball in Rn , B = {x ∈ Rn : |x| < 1},
i.e., for all α there exists a homeomorphism of Uα onto B, hα : Uα → B,
ii) if Uα ∩ Uβ ≠ ∅, the map
h = hα ∘ hβ⁻¹ : hβ(Uα ∩ Uβ) → hα(Uα ∩ Uβ)
is differentiable (or of class Cᵏ) and, for all x ∈ hβ(Uα ∩ Uβ), the determinant of the Jacobian satisfies det Dh(x) ≠ 0.
eigenvalues with zero real part. Also, recall that the solutions of (4.1) form a one-parameter
group that defines the flow of the nonlinear differential equation (4.1). To be more precise,
consider the IVP consisting of (4.1) and an initial condition x(t0 ) = x0 . Let I(x0 ) be the
maximal interval of existence of the solution to the IVP. Let then φ : R × Rn → Rn be
defined as follows: For x0 ∈ Rn and t ∈ I(x0 ), φ(t, x0 ) = φt (x0 ) is the solution of the IVP
defined on its maximal interval of existence I(x0 ).
Example – Consider the (linear) ordinary differential equation x′ = ax, with a, x ∈ R. The solution is φ(t, x₀) = e^{at}x₀, and satisfies the group property
φ(t + s, x₀) = e^{a(t+s)}x₀ = e^{at}(e^{as}x₀) = φ(t, e^{as}x₀) = φ(t, φ(s, x₀)).
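This group property can be checked directly. A minimal sketch, assuming NumPy is available:

```python
import numpy as np

# Flow of x' = a x: phi(t, x0) = e^{a t} x0; check the group property
# phi(t + s, x0) = phi(t, phi(s, x0)) at a sample pair (t, s).
a, x0 = -0.7, 2.0

def phi(t, x):
    return np.exp(a * t) * x

t, s = 1.3, 0.4
assert np.isclose(phi(t + s, x0), phi(t, phi(s, x0)))
```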
For simplicity and without loss of generality since both results are local results, we assume
hereforth that x∗ = 0, i.e., that a change of coordinates has been performed translating x∗
to the origin. We also assume that t0 = 0.
There are several approaches to the proof of this result. Hale [10] gives a proof which uses
functional analysis. The proof we give here comes from [18, p. 108-111], who derives it from
[6, p. 330-335]. It consists in showing that there exists a real nonsingular constant matrix
C such that if y = C −1 x then there are n − k real continuous functions yj = ψj (y1 , . . . , yk )
defined for small |yi |, i ≤ k, such that
yj = ψj (y1 , . . . , yk ) (j = k + 1, . . . , n)
define a k-dimensional differentiable manifold S̃ in y-space. The stable manifold S in x-space is obtained by transforming back via x = Cy, which defines S in terms of the k curvilinear coordinates y₁, …, y_k.
x′ = Df(0)x + F(x)
with F(x) = f(x) − Df(0)x. Since f ∈ C¹(E), F ∈ C¹(E), and F(0) = 0 since f(0) = 0. Also, DF(x) = Df(x) − Df(0), and so DF(0) = 0. To continue, we use the following lemma (which we will not prove).
Lemma 4.2.2. Let E be an open subset of R^n containing the origin. If F ∈ C¹(E), then for all x, y ∈ N_δ(0) ⊂ E, there exists a ξ ∈ N_δ(0) such that

|F(x) − F(y)| ≤ ‖DF(ξ)‖ |x − y|.
From Lemma 4.2.2, since DF is continuous and DF(0) = 0, it follows that for all ε > 0 there exists a δ > 0 such that |x| ≤ δ and |y| ≤ δ imply that

|F(x) − F(y)| ≤ ε|x − y|.
Let C be an invertible n × n matrix such that

B = C^{-1} Df(0) C = ( P  0 )
                     ( 0  Q ),

where the eigenvalues λ_1, …, λ_k of the k × k matrix P have negative real part and the eigenvalues λ_{k+1}, …, λ_n of the (n − k) × (n − k) matrix Q have positive real part. Let α > 0 be chosen small enough that ℜ(λ_j) < −α < 0 for j = 1, …, k. Letting y = C^{-1}x, we have

y′ = C^{-1} x′
   = C^{-1} Df(0) x + C^{-1} F(x)
   = C^{-1} Df(0) C y + C^{-1} F(Cy)
   = By + G(y),

where G(y) = C^{-1} F(Cy),
and there exist constants K > 0 and σ > 0 such that

‖U(t)‖ ≤ Ke^{−(α+σ)t} for all t ≥ 0,   and   ‖V(t)‖ ≤ Ke^{σt} for all t ≤ 0,
Consider the integral equation

u(t, a) = U(t)a + ∫_0^t U(t − s) G(u(s, a)) ds − ∫_t^∞ V(t − s) G(u(s, a)) ds,   (4.4)

where a, u ∈ R^n and a is a constant vector. We can solve this equation using the method of
successive approximations. Indeed, let
u^(0)(t, a) = 0

and

u^(j+1)(t, a) = U(t)a + ∫_0^t U(t − s) G(u^(j)(s, a)) ds − ∫_t^∞ V(t − s) G(u^(j)(s, a)) ds.   (4.5)
We show by induction that, for all t ≥ 0,

|u^(j)(t, a) − u^(j−1)(t, a)| ≤ K|a|e^{−αt} / 2^{j−1}.   (4.6)
Clearly, (4.6) holds for j = 1 since

|u^(1)(t, a)| ≤ ‖U(t)a‖ + ∫_0^t ‖U(t − s) G(u^(0)(s, a))‖ ds + ∫_t^∞ ‖V(t − s) G(u^(0)(s, a))‖ ds = ‖U(t)a‖ ≤ K|a|e^{−αt},

because u^(0) = 0 and G(0) = 0.
Now suppose that (4.6) holds for j = k. Since G verifies a Lipschitz-type condition as given by Lemma 4.2.2, with Lipschitz constant ε on N_δ(0), we get

|u^(k+1)(t, a) − u^(k)(t, a)| ≤ ε ∫_0^t ‖U(t − s)‖ |u^(k)(s, a) − u^(k−1)(s, a)| ds
                               + ε ∫_t^∞ ‖V(t − s)‖ |u^(k)(s, a) − u^(k−1)(s, a)| ds.
Using the bounds on ‖U‖ and ‖V‖ as well as the induction hypothesis (4.6), it follows that

|u^(k+1)(t, a) − u^(k)(t, a)| ≤ ε ∫_0^t Ke^{−(α+σ)(t−s)} (K|a|e^{−αs} / 2^{k−1}) ds + ε ∫_t^∞ Ke^{σ(t−s)} (K|a|e^{−αs} / 2^{k−1}) ds
                              ≤ εK²|a|e^{−αt} / (σ 2^{k−1}) + εK²|a|e^{−αt} / (σ 2^{k−1}),
which, if we choose ε < σ/(4K), i.e., εK/σ < 1/4, implies that

|u^(k+1)(t, a) − u^(k)(t, a)| < (1/4 + 1/4) K|a|e^{−αt} / 2^{k−1} = K|a|e^{−αt} / 2^k.   (4.7)
Note that for G to satisfy the Lipschitz-type condition, we must choose K|a| < δ/2, i.e., |a| < δ/(2K). Then, by induction, (4.6) holds for all t ≥ 0 and all j ≥ 1.
As a consequence, for t ≥ 0 and n > m > N,

|u^(n)(t, a) − u^(m)(t, a)| ≤ Σ_{j=N}^∞ |u^(j+1)(t, a) − u^(j)(t, a)| ≤ K|a| Σ_{j=N}^∞ 1/2^j = K|a| / 2^{N−1}.
As this last quantity approaches 0 as N → ∞, it follows that {u(j) (t, a)} is a Cauchy sequence
(of continuous functions).
It follows that

lim_{j→∞} u^(j)(t, a) = u(t, a)
uniformly for all t ≥ 0 and |a| < δ/(2K). From the uniform convergence, we deduce that
u(t, a) is continuous. Now taking the limit as j → ∞ in both sides of (4.5), it follows that
u(t, a) satisfies the integral equation (4.4) and as a consequence, the differential equation
(4.3).
Since G ∈ C 1 (Ẽ), it follows from induction on (4.5) that u(j) (t, a) is a differentiable
function of a for |a| < δ/(2K) and t ≥ 0. Since u(j) (t, a) → u(t, a) uniformly, it then follows
that u(t, a) is differentiable for t ≥ 0 and |a| < δ/(2K). The estimate (4.7) implies that
u_j(0, a) = a_j for j = 1, …, k
and

u_j(0, a) = − ( ∫_0^∞ V(−s) G(u(s, a_1, …, a_k, 0, …, 0)) ds )_j   for j = k + 1, …, n.
Setting ψ_j(a_1, …, a_k) = u_j(0, a_1, …, a_k, 0, …, 0) for j = k + 1, …, n, the equations

y_j = ψ_j(y_1, …, y_k),   j = k + 1, …, n,

define a differentiable manifold S̃ in y space for √(y_1² + ⋯ + y_k²) < δ/(2K). Furthermore, if y(t) is a solution of the differential equation (4.3) with y(0) ∈ S̃, i.e., with
y(0) = u(0, a), then
y(t) = u(t, a)
It follows from the estimate (4.8) that if y(t) is a solution of (4.3) with y(0) ∈ S̃, then y(t) → 0 as t → ∞. It can also be shown that if y(t) is a solution of (4.3) with y(0) ∉ S̃, then y(t) ↛ 0 as t → ∞; see [6, p. 332].
This implies, as φt satisfies the group property φ_{s+t}(x0) = φ_s(φ_t(x0)), that if y(0) ∈ S̃,
then y(t) ∈ S̃ for all t ≥ 0. And it can be shown as in [6, Th 4.2, p. 333] that
∂ψ_j/∂y_i (0) = 0.
Consider now the system with the direction of time reversed,

y′ = −By − G(y).
The stable manifold for this system is the unstable manifold Ũ of (4.3). In order to determine
the (n − k)-dimensional manifold Ũ using the above process, the vector y has to be replaced
by the vector (yk+1 , . . . , yn , y1 , . . . , yk ).
i.e., H maps trajectories of (4.1) near the origin onto trajectories of x0 = Df (0)x near the
origin and preserves the parametrization by time.
Proof. Suppose that f ∈ C¹(E), f(0) = 0 (i.e., 0 is an equilibrium), and let A = Df(0) be the Jacobian matrix of f at 0.
1. As we have assumed that the matrix A has no eigenvalues with zero real part (i.e., 0 is a hyperbolic equilibrium point), we can write A in the form

A = ( P  0 )
    ( 0  Q ),

where P has only eigenvalues with negative real part and Q has only eigenvalues with positive real part.
2. Let φt be the flow of the nonlinear system (4.1). Let us write the solution as

x(t, x0) = φt(x0) = ( y(t, y0, z0) )
                    ( z(t, y0, z0) ),

where

x0 = ( y0 )
     ( z0 ) ∈ R^n

has been decomposed with y0 ∈ E^s, the stable subspace of A, and z0 ∈ E^u, the unstable subspace of A.
3. Let

Ỹ(y0, z0) = y(1, y0, z0) − e^P y0

and

Z̃(y0, z0) = z(1, y0, z0) − e^Q z0.
Then Ỹ (0) = Ỹ (0, 0) = y(1, 0, 0) = 0. The same is true of Z̃(0) = 0. Also, DỸ (0) =
DZ̃(0) = 0. Since f ∈ C 1 (E), Ỹ (y0 , z0 ) and Z̃(y0 , z0 ) are continuously differentiable. Thus
‖DỸ(y0, z0)‖ ≤ a

and

‖DZ̃(y0, z0)‖ ≤ a
on the compact set |y0|² + |z0|² ≤ s0². By choosing s0 sufficiently small, we can make a as small as we like. We let Y(y0, z0) and Z(y0, z0) be smooth functions, defined by

Y(y0, z0) = { Ỹ(y0, z0)   for |y0|² + |z0|² ≤ (s0/2)²
            { 0            for |y0|² + |z0|² ≥ s0²

and

Z(y0, z0) = { Z̃(y0, z0)   for |y0|² + |z0|² ≤ (s0/2)²
            { 0            for |y0|² + |z0|² ≥ s0².
By the mean value theorem,

|Y(y0, z0)| ≤ a √(|y0|² + |z0|²) ≤ a (|y0| + |z0|)

and

|Z(y0, z0)| ≤ a √(|y0|² + |z0|²) ≤ a (|y0| + |z0|)

for all (y0, z0) ∈ R^n. Let B = e^P and C = e^Q. Assuming that P and Q have been normalized in a proper way, we have

b = ‖B‖ < 1   and   c = ‖C^{-1}‖ < 1.
4. For

x = ( y )
    ( z ) ∈ R^n,

define the transformations

L(y, z) = ( By )
          ( Cz )

and

T(y, z) = ( By + Y(y, z) )
          ( Cz + Z(y, z) ),

i.e., L(x) = e^A x and, locally, T(x) = φ1(x). Then the following lemma holds, which we prove later.
prove later.
Lemma 4.3.3. There exists a homeomorphism H of an open set U containing the origin
onto an open set V containing the origin such that
H ◦T =L◦H
Define

H = ∫_0^1 L^{−s} H0 T^s ds.
It follows from the above lemma that there exists a neighborhood of the origin in which

L^t H = ∫_0^1 L^{t−s} H0 T^{s−t} ds  T^t
      = ∫_{−t}^{1−t} L^{−s} H0 T^s ds  T^t
      = [ ∫_{−t}^0 L^{−s} H0 T^s ds + ∫_0^{1−t} L^{−s} H0 T^s ds ] T^t
      = ∫_0^1 L^{−s} H0 T^s ds  T^t
      = H T^t.
Thus H ∘ T^t = L^t ∘ H, or equivalently
Ψ_0(y, z) = z,
Ψ_{k+1}(y, z) = C^{-1} Ψ_k(By + Y(y, z), Cz + Z(y, z)).   (4.11)
It can be shown by induction that for k = 0, 1, . . . the functions Ψk are continuous and such
that Ψk (y, z) = z for |y| + |z| ≥ 2s0 .
Let us now prove that {Ψ_k} is a Cauchy sequence. For this, we show by induction that, for all j ≥ 1,

|Ψ_j(y, z) − Ψ_{j−1}(y, z)| ≤ M r^j (|y| + |z|)^δ,   (4.12)

where r = c[2 max(a, b, c)]^δ with δ ∈ (0, 1) chosen sufficiently small that r < 1 (which is possible since c < 1), and M = ac(2s0)^{1−δ}/r. Inequality (4.12) is satisfied for j = 1 since

|Ψ_1(y, z) − Ψ_0(y, z)| = |C^{-1} Z(y, z)| ≤ ca(|y| + |z|) ≤ ca(2s0)^{1−δ}(|y| + |z|)^δ = Mr(|y| + |z|)^δ,

since Z(y, z) = 0 for |y| + |z| ≥ 2s0. Now assuming that (4.12) holds for j = k gives
|Ψ_{k+1}(y, z) − Ψ_k(y, z)| = |C^{-1}[Ψ_k(By + Y(y, z), Cz + Z(y, z)) − Ψ_{k−1}(By + Y(y, z), Cz + Z(y, z))]|
                            ≤ ‖C^{-1}‖ |Ψ_k(By + Y(y, z), Cz + Z(y, z)) − Ψ_{k−1}(By + Y(y, z), Cz + Z(y, z))|.
Using the same type of argument as in the proof of the stable manifold theorem, Ψk is
thus a Cauchy sequence of continuous functions that converges uniformly as k → ∞ to a
continuous function Ψ(y, z). Also, Ψ(y, z) = z for |y| + |z| ≥ 2s0 . Taking limits in (4.11)
and left-multiplying by C shows that Ψ(y, z) is a solution of (4.10b).
Now for (4.10a). This equation can be written in terms of the inverse of T, which exists provided that a is small enough (i.e., s0 is sufficiently small):

T^{-1}(y, z) = ( B^{-1}y + Y_1(y, z) )
               ( C^{-1}z + Z_1(y, z) ),

where Y_1 and Z_1 denote the corresponding components of T^{-1}.
Successive approximations with Φ0 (y, z) = y can then be used as above (since b = kBk < 1)
to solve (4.13).
ξ′ = f(ξ)

ξ∗_I = (S∗, x∗) = (λ, S⁰ − λ),

where λ is such that μ(λ) = D. Note that this implies that if λ ≥ S⁰, then ξ∗_T is the only equilibrium of the system.
From the nullclines equations, it is clear that (x, y) = (0, 0) is the only equilibrium point.
At (0, 0), the Jacobian matrix of (4.16) is given by

J = ( −1  0 )
    (  0  1 ).
[Figure: phase portrait of the system near the origin, with x and y ranging over [−0.05, 0.05].]
To be more precise about the nature of the stable manifold S, we proceed as follows.
First of all, as A is in diagonal form, we have

A = B = ( −1  0 )
        (  0  1 )

and C = I. Also,

F(ξ) = G(ξ) = ( −y² )
              (  x² ).

Here, the matrices P and Q are in fact scalars,
P = −1 and Q = 1. Thus
U(t) = diag( e^{−t}, 0 ),   V(t) = diag( 0, e^t ).
Finally, a = (a1 , 0)T . So now we can use successive approximations to find an approximate
solution to the integral equation (4.4), which here takes the form
u(t, a) = ( e^{−t}a_1, 0 )^T + ∫_0^t ( −e^{−(t−s)} u_2²(s), 0 )^T ds − ∫_t^∞ ( 0, e^{(t−s)} u_1²(s) )^T ds.
To construct the sequence of successive approximations, we start with u^(0)(t, a) = (0, 0)^T, then compute the successive terms using equation (4.5), which takes the form

u^(j+1)(t, a) = ( e^{−t}a_1, 0 )^T + ∫_0^t diag( e^{−(t−s)}, 0 ) G(u^(j)(s)) ds − ∫_t^∞ diag( 0, e^{(t−s)} ) G(u^(j)(s)) ds
              = ( e^{−t}a_1, 0 )^T + ∫_0^t ( −e^{−(t−s)} (u_2^(j)(s))², 0 )^T ds − ∫_t^∞ ( 0, e^{(t−s)} (u_1^(j)(s))² )^T ds.
Therefore,

u^(1)(t, a) = U(t)a = ( e^{−t}a_1, 0 )^T,

since u^(0)(t, a) = (0, 0)^T.
Then,

u^(2)(t, a) = ( e^{−t}a_1, 0 )^T − ∫_t^∞ ( 0, e^{(t−s)} (e^{−s}a_1)² )^T ds
            = ( e^{−t}a_1, 0 )^T − ( 0, (1/3)a_1² e^{−2t} )^T
            = ( e^{−t}a_1, −(1/3)a_1² e^{−2t} )^T,
and continuing this process, we find

u^(3)(t, a) = ( e^{−t}a_1 + (1/27)(e^{−4t} − e^{−t})a_1⁴, −(1/3)a_1² e^{−2t} )^T,
as a_1 → 0. Thus S is approximated by

y = −x²/3 + O(x⁵)

as x → 0.
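As a sanity check (not part of the original computation), the approximation y = −x²/3 can be tested against the invariance condition of the system x′ = −x − y², y′ = y + x²: on the manifold y = ψ(x) one must have ψ′(x)·x′ = y′. The sketch below shows the defect is of order x⁵, consistent with the estimate above.

```python
# System from the example: x' = -x - y^2, y' = y + x^2.
# psi is the second-order stable-manifold approximation found above.
def psi(x):
    return -x**2 / 3.0

def dpsi(x):
    return -2.0 * x / 3.0

def residual(x):
    # Invariance defect: psi'(x) * x'  minus  y', evaluated on y = psi(x).
    xdot = -x - psi(x)**2
    ydot = psi(x) + x**2
    return dpsi(x) * xdot - ydot

# Algebraically the defect works out to 2 x^5 / 27, so it shrinks like x^5.
for x in (1e-1, 1e-2, 1e-3):
    assert abs(residual(x)) < 0.08 * x**5
```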
Chapter 5
Exponential dichotomy
Our aim here is to show the equivalent of the Hartman-Grobman theorem for linear systems
with variable coefficients. Compared to other results we have seen so far, this is a much
more recent field. The first results were shown in the 60s by Lin. We give here only the
most elementary results. For more details, see, e.g., [13].
We consider the linear system of differential equations
dx/dt = A(t)x,   (5.1)
where the n × n matrix A(t) is continuous on the real axis.
and satisfy the condition that there exist positive constants α, β such that
where
Definition 5.1.2. Let Φ(t, s), Φ(t, t) = I, be the principal matrix solution of (5.1). We
say that (5.1) has an exponential dichotomy on the interval I if there are projections P (t) :
Rn → Rn , t ∈ I, continuous in t, such that if Q(t) = I − P (t), then
with γ any rectifiable simple closed curve in the open left half-plane which contains in its interior
all eigenvalues of A with negative real part. ◦
Theorem 5.2.1. If the matrix A(t) in (5.1) is continuous and bounded on R, and there
exists a quadratic form V (t, x) = xT G(t)x, where the matrix G(t) is symmetric, regular,
bounded and C 1 , such that the derivative of V (t, x) with respect to (5.1) is positive definite,
then (5.1) admits an exponential dichotomy.
The converse is true, without the requirement that A(t) be bounded.
A result of [7].
i) A(t) has k eigenvalues with real part ≤ −α < 0 and n − k eigenvalues with real part
≥ β > 0 for all t ∈ I,
For any positive constant ε < min(α, β), there exists a positive constant δ = δ(M, α + β, ε)
such that, if
‖A(t_2) − A(t_1)‖ ≤ δ   for |t_2 − t_1| ≤ h,
where h > 0 is a fixed number not greater than the length of I, then the equation

x′ = A(t)x
The following result, due to Muldowney [16], gives a criterion for the existence of a
(µ1 , µ2 )-dichotomy.
min{ l_2 ℜ(a_jj) − Σ_{i=1}^{m} |a_ij| − l_2 Σ_{i=m+1, i≠j}^{n} |a_ij| : j = m + 1, …, n } ≥ l_2 ρ

and

μ_2 = min{ ℜ(a_jj) − l_1 Σ_{i=1}^{m} |a_ij| − Σ_{i=m+1, i≠j}^{n} |a_ij| : j = m + 1, …, n }.
The same sort of theorem can be proved with sums of the columns replaced by sums of
the rows.
Example – Consider

A(t) = ( −1     0    1/2 )
       ( t/2    t    t²  )
       ( t/2   −t²   t   ),   t > 0.
dx/dt = A(t)x + f(t, x),   (5.4)

where f : R × R^n → R^n is continuous, f(t, x) = O(‖x‖²) as ‖x‖ → 0, and ‖f(t, x_1) − f(t, x_2)‖ ≤ L‖x_1 − x_2‖ with L small enough.
Let x(t) be a non-trivial solution of (5.1); define

λ̄_u(x(t)) = limsup_{t−s→∞} (1/(t−s)) log( ‖x(t)‖ / ‖x(s)‖ )

and

λ_u(x(t)) = liminf_{t−s→∞} (1/(t−s)) log( ‖x(t)‖ / ‖x(s)‖ ).

The numbers λ̄_u(x(t)) and λ_u(x(t)) are called the uniform upper characteristic exponent and uniform lower characteristic exponent of x(t), respectively.
Remark – If λ̄(x) ≤ −α < 0, then lim_{s→−∞} ‖x(s)‖ = ∞. If λ(x) ≥ α > 0, then lim_{t→∞} ‖x(t)‖ = ∞. ◦
Theorem 5.3.1. If (5.1) admits an exponential dichotomy, then the linear system (5.1) and the nonlinear system (5.4) are topologically equivalent, i.e.,
ii) there exist positive constants α_0 and β_0 such that if a solution x(t) of (5.4) is such that lim_{t→∞} x(t) = 0, or lim_{t→−∞} x(t) = 0, then

‖x(t)‖ ≤ β_0 ‖x(s)‖ e^{−α_0(t−s)},   t ≥ s,

or

‖x(t)‖ ≤ β_0 ‖x(s)‖ e^{α_0(t−s)},   s ≥ t,

respectively. In this case, λ̄_u(x(t)) ≤ −α_0 < 0, or λ_u(x(t)) ≥ α_0 > 0;
iii) for a k-dimensional solution x(t) of (5.1) with λ̄_u(x(t)) ≤ −α < 0, or an (n − k)-dimensional solution x(t) of (5.1) with λ_u(x(t)) ≥ α > 0, there is a unique k-dimensional or (n − k)-dimensional solution y(t) of (5.4) such that λ̄_u(y(t)) ≤ −α < 0, or λ_u(y(t)) ≥ α > 0, respectively.
Theorem 5.3.2. Suppose that A(t) is a continuous matrix function such that the linear
equation x0 = A(t)x has an exponential dichotomy. Suppose that f (t, x) is a continuous
function of R × Rn into Rn such that
i) H(t, x) − x is bounded in R × Rn ,
ii) if x(t) is any solution of the differential equation x′ = A(t)x + f(t, x), then H(t, x(t)) is a solution of x′ = A(t)x.
‖H(t, x) − x‖ ≤ 4Kμα^{-1}

for all t, x. For each fixed t, H_t(x) = H(t, x) is a homeomorphism of R^n. L(t, x) = H_t^{-1}(x) is continuous in R × R^n, and if y(t) is any solution of x′ = A(t)x, then L(t, y(t)) is a solution of x′ = A(t)x + f(t, x).
Theorem 5.4.2. Let B : R+ → M_n(R) be a bounded, continuous matrix function. Suppose that (5.1) has an exponential dichotomy on R+. If δ = sup ‖B(t)‖ < α/(4K²), then the perturbed equation

x′ = (A(t) + B(t))x

also has an exponential dichotomy on R+ with constants K̃ and α̃ determined by K, α and δ. Moreover, if P̃(t) is the corresponding projection, then ‖P(t) − P̃(t)‖ = O(δ) uniformly in t ∈ R+. Also, |α − α̃| = O(δ).
[2] F. Brauer and J.A. Nohel. The Qualitative Theory of Ordinary Differential Equations.
Dover, 1989.
[3] H. Cartan. Cours de calcul différentiel. Hermann, Paris, 1997. Reprint of the 1977
edition.
[5] E.A. Coddington and N. Levinson. Theory of Ordinary Differential Equations. McGraw-
Hill, 1955.
[6] E.A. Coddington and N. Levinson. Theory of Ordinary Differential Equations. Krieger,
1984.
[7] W.A. Coppel. Dichotomies in Stability Theory, volume 629 of Lecture Notes in Mathe-
matics. Springer-Verlag, 1978.
[9] N.B. Haaser and J.A. Sullivan. Real Analysis. Dover, 1991. Reprint of the 1971 edition.
[11] P. Hartman. Ordinary Differential Equations. John Wiley & Sons, 1964.
[12] P.-F. Hsieh and Y. Sibuya. Basic Theory of Ordinary Differential Equations. Springer,
1999.
[13] Z. Lin and Y-X. Lin. Linear Systems. Exponential Dichotomy and Structure of Sets of
Hyperbolic Points. World Scientific, 2000.
[15] R.K. Miller and A.N. Michel. Ordinary Differential Equations. Academic Press, 1982.
[16] J.S. Muldowney. Dichotomies and asymptotic behaviour for linear differential systems. Transactions of the AMS, 283(2):465–484, 1984.
[17] K.J. Palmer. A generalization of Hartman’s linearization theorem. J. Math. Anal. Appl.,
41:753–758, 1973.
Here, some results that are important for the course are given with a somewhat random
ordering.
∀L ∈ L(E),   |||L||| = sup_{x ∈ E∖{0}} ‖L(x)‖ / ‖x‖ = sup_{‖x‖≤1} ‖L(x)‖.
The inequality

‖A(t)(x_1 − x_2)‖ ≤ |||A(t)||| ‖x_1 − x_2‖
results from the nature of the norm ||| · |||; see Appendix A.1. The computation is best understood by indicating the spaces in which the various norms are defined. We have

‖Ax‖_a = ‖x‖_b ‖ A( x/‖x‖_b ) ‖_a ≤ ‖x‖_b |||A||| = ‖A‖ ‖x‖_b,

since ‖ x/‖x‖_b ‖_b = 1.
For a given component function fi , i = 1, . . . , n, using the definition of the Riemann integral,
∫_a^b f_i(x) dx = lim_{k→∞} Σ_{j=1}^k f_i(x_j∗) Δx_j,

where x_j∗ is the sample point in the interval [x_{j−1}, x_j] with width Δx_j. Therefore,

‖ ∫_a^b f(x) dx ‖ = ‖ lim_{k→∞} Σ_{j=1}^k f(x_j∗) Δx_j ‖ = lim_{k→∞} ‖ Σ_{j=1}^k f(x_j∗) Δx_j ‖,

since the norm is a continuous function. The result then follows from the triangle inequality.
Definition A.2 (Uniform convergence). Let X be any set, and let Y be a topological space.
A sequence f1 , f2 , . . . of mappings from X to Y is said to be uniformly convergent to a
mapping f : X → Y , if given ε > 0, there exists N such that for all n ≥ N and all x ∈ X,
In other words,
Theorem A.5.1. Let {fn } be an equicontinuous sequence of functions. If fn (x) → f (x) for
every x ∈ X, then the function f is continuous.
Theorem A.6. Let C(X) be the space of continuous functions on the complete metric space
X with values in Rn . If a sequence {fn } in C(X) is bounded and equicontinuous, then it has
a uniformly convergent subsequence.
iii) f (t, x) has continuous partial derivative ∂f /∂x on a bounded closed domain D ⇒ f is
locally Lipschitz on D.
Proof. i) Suppose that f is Lipschitz, i.e., there exists L > 0 such that ‖f(t, x_1) − f(t, x_2)‖ ≤ L‖x_1 − x_2‖. Recall that f is uniformly continuous if for every ε > 0 there exists δ > 0 such that for all x_1, x_2, ‖x_1 − x_2‖ < δ implies that ‖f(t, x_1) − f(t, x_2)‖ < ε. So, given ε > 0, choose δ < ε/L. Then ‖x_1 − x_2‖ < δ implies that
If (t, x_1), (t, x_2) ∈ U, by the mean-value theorem there exists η ∈ [x_1, x_2] such that f(t, x_2) − f(t, x_1) = (x_2 − x_1) (∂f/∂x)(t, η). As η ∈ U, it follows that ‖(∂f/∂x)(t, η)‖ ≤ L, and thus ‖f(t, x_2) − f(t, x_1)‖ ≤ L‖x_2 − x_1‖.
i) g(t) is continuous on t_0 ≤ t ≤ t_1,

ii) 0 ≤ g(t) ≤ K + L ∫_{t_0}^t g(s) ds on t_0 ≤ t ≤ t_1.

Then

0 ≤ g(t) ≤ Ke^{L(t−t_0)},

for t_0 ≤ t ≤ t_1.
Proof. Let v(t) = ∫_{t_0}^t g(s) ds. Then v′(t) = g(t), which implies that the assumption on g can be written

0 ≤ v′(t) ≤ K + Lv(t).

The right inequality is a linear differential inequality, with integrating factor exp(−∫_{t_0}^t L ds) = e^{−L(t−t_0)}. Also, v(t_0) = 0. Hence,

(d/dt)( e^{−L(t−t_0)} v(t) ) ≤ Ke^{−L(t−t_0)},

and therefore,

e^{−L(t−t_0)} v(t) ≤ (K/L)( 1 − e^{−L(t−t_0)} ).

Thus Lv(t) ≤ K( e^{L(t−t_0)} − 1 ), and g(t) ≤ K + Lv(t) ≤ Ke^{L(t−t_0)}.
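A quick numerical illustration of the lemma. The constants K, L and the sample function g are arbitrary test choices (none of them come from the text); g is picked so that it satisfies the hypothesis 0 ≤ g(t) ≤ K + L∫_0^t g(s)ds, and the conclusion g(t) ≤ Ke^{Lt} is then checked.

```python
import math

# Arbitrary sample constants and a sample function for the test only.
K, L = 2.0, 0.8

def g(t):
    # Satisfies g(t) <= K + L * int_0^t g(s) ds (verified below).
    return K * math.exp(L * t / 2.0)

def integral_g(t, n=20000):
    # Trapezoidal approximation of \int_0^t g(s) ds.
    h = t / n
    return h * (g(0) / 2 + sum(g(i * h) for i in range(1, n)) + g(t) / 2)

for t in (0.5, 1.0, 3.0):
    assert 0.0 <= g(t) <= K + L * integral_g(t) + 1e-6  # hypothesis of the lemma
    assert g(t) <= K * math.exp(L * t)                  # Gronwall conclusion
```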
Lemma A.8. Suppose that a < b and let g, K and L be nonnegative continuous functions
defined on the interval [a, b]. Moreover, suppose that either K is a constant function, or K
is differentiable on [a, b] with positive derivative K′. If, for all t ∈ [a, b],

g(t) ≤ K(t) + ∫_a^t L(s) g(s) ds,

then

g(t) ≤ K(t) exp( ∫_a^t L(s) ds ),
Lemma A.9. If ϕ and ψ are two nonnegative regulated functions on the interval [0, c], then
for every nonnegative regulated function w on [0, c] satisfying the inequality

w(t) ≤ ϕ(t) + ∫_0^t ψ(s) w(s) ds,
Before proving the result, let us recall that a function f from an interval I ⊂ R to a
Banach space F is regulated if it admits in each point in I a left limit and a right limit. In
particular, every continuous mapping from I ⊂ R to a Banach space is regulated, as well as
monotone maps from I to R; see, e.g., [8, Section 7.6].
Proof. Let y(t) = ∫_0^t ψ(s)w(s) ds; y is continuous, and since w(t) ≤ ϕ(t) + ∫_0^t ψ(s)w(s) ds, it follows that, except maybe at a denumerable number of points of [0, c], we have

y′(t) = ψ(t)w(t) ≤ ψ(t)ϕ(t) + ψ(t)y(t).

Setting z(t) = y(t) exp( −∫_0^t ψ(ξ) dξ ), this gives z′(t) ≤ ϕ(t)ψ(t) exp( −∫_0^t ψ(ξ) dξ ). Using a mean-value type theorem (see, e.g., [8, Th. 8.5.3]) and using the fact that z(0) = 0, we get, for t ∈ [0, c],

z(t) ≤ ∫_0^t ϕ(s)ψ(s) exp( −∫_0^s ψ(ξ) dξ ) ds,

whence by definition

y(t) ≤ ∫_0^t ϕ(s)ψ(s) exp( ∫_s^t ψ(ξ) dξ ) ds,
and inequality (A.1) now follows from the relation w(t) ≤ ϕ(t) + y(t).
Similarly, iterating,

d(x_{n+1}, x_n) ≤ K^n d(x_1, x_0).

Therefore, for n > m,

d(x_n, x_m) ≤ (K^m / (1 − K)) d(x_1, x_0),

and {x_n} is a Cauchy sequence.
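The geometric decay of the distances d(x_{n+1}, x_n) is what drives convergence of fixed point iteration. A minimal sketch, using the classical example f(x) = cos x (not from the text): its iterates quickly enter a region where |f′(x)| = |sin x| ≤ sin 1 < 1, so f acts as a contraction there.

```python
import math

# Fixed point iteration x_{n+1} = cos(x_n): after a couple of steps the
# iterates stay in [cos 1, 1], where |sin(x)| <= sin(1) < 1, so the map
# is a contraction on that region and the iteration converges geometrically.
x = 1.0
for _ in range(100):
    x = math.cos(x)

# x is (numerically) the unique fixed point of cos, approx 0.7390851.
assert abs(x - math.cos(x)) < 1e-12
```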
The λj , j = 1, . . . , q + s, are the characteristic roots of A, which need not all be distinct.
If λj is a simple root, then it occurs in J0 , and therefore, if all the roots are distinct, A is
similar to the diagonal matrix
J = ( λ_1  0   ⋯  0  )
    ( 0    λ_2 ⋯  0  )
    ( ⋮            ⋮  )
    ( 0    0   ⋯  λ_n ).
An algorithm to compute the Jordan canonical form of an n × n matrix A [15].
i) Compute the eigenvalues of A. Let λ1 , . . . , λm be the distinct eigenvalues of A with
multiplicities n1 , . . . , nm , respectively.
ii) Compute n_1 linearly independent generalized eigenvectors of A associated with λ_1 as follows. Compute

(A − λ_1 E_n)^i

for i = 1, 2, …, until the rank of (A − λ_1 E_n)^k is equal to the rank of (A − λ_1 E_n)^{k+1}. Find a generalized eigenvector of rank k, say u. Define u_i = (A − λ_1 E_n)^{k−i} u, for i = 1, …, k. If k = n_1, proceed to step iii). If k < n_1, find another linearly independent generalized eigenvector of rank k. If this is not possible, try rank k − 1, and so forth, until n_1 linearly independent generalized eigenvectors are determined. Note that if ρ(A − λ_1 E_n) = r, then there are in total n − r chains of generalized eigenvectors associated with λ_1.
iii) Repeat step 2 for λ2 , . . . , λm .
iv) Let u_1, …, u_k, … be the new basis. Observe that

A u_1 = λ_1 u_1,
A u_2 = u_1 + λ_1 u_2,
⋮
A u_k = u_{k−1} + λ_1 u_k.
Thus in the new basis, A has the block diagonal representation

J = diag( J_1, J_2, J_3, … ),   J_l = ( α_l  1          )
                                      (      α_l  ⋱     )
                                      (           ⋱  1  )
                                      (              α_l ),
where each chain of generalized eigenvectors generates a Jordan block whose order
equals the length of the chain.
iii) e^A = I + Σ_{n=1}^∞ (1/n!) A^n.

iv) e^0 = I.
2) If A is such that there exists q ∈ N such that A^q = A, then this can sometimes be exploited to simplify the computation of e^A.
4) Other cases require the use of the Jordan normal form (explained below).
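For small matrices the defining series can also be used directly. A minimal pure-Python sketch (the nilpotent example is chosen so that the truncated series is exact, since A² = 0):

```python
# Truncated defining series e^A = I + sum_{n>=1} A^n / n!, in pure Python.
# For the nilpotent matrix A = [[0,1],[0,0]] the series terminates, and
# e^A = I + A = [[1,1],[0,1]] exactly.
def mat_mul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def expm_series(A, terms=25):
    n = len(A)
    result = [[float(i == j) for j in range(n)] for i in range(n)]  # I
    power = [row[:] for row in result]                              # A^0
    fact = 1.0
    for k in range(1, terms):
        power = mat_mul(power, A)
        fact *= k
        result = [[result[i][j] + power[i][j] / fact for j in range(n)]
                  for i in range(n)]
    return result

E = expm_series([[0.0, 1.0], [0.0, 0.0]])
assert E == [[1.0, 1.0], [0.0, 1.0]]
```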
Use of the Jordan form to compute the exponential of a matrix. Suppose that
J = P^{-1}AP is the Jordan form of the matrix A. For a block diagonal matrix

B = diag( B_1, …, B_s ),

we have, for k = 0, 1, …,

B^k = diag( B_1^k, …, B_s^k ).
Therefore, for t ∈ R,

e^{Jt} = diag( e^{J_0 t}, e^{J_1 t}, …, e^{J_s t} ),

with

e^{J_0 t} = diag( e^{λ_1 t}, …, e^{λ_k t} ).
Now, since J_i = λ_{k+i} I_i + N_i, with N_i a nilpotent matrix, and since I_i and N_i commute, there holds

e^{J_i t} = e^{λ_{k+i} t} e^{N_i t}.
For k ≥ n_i, N_i^k = 0, so

e^{t N_i} = ( 1  t  ⋯  t^{n_i−1}/(n_i−1)! )
            ( 0  1  ⋯  t^{n_i−2}/(n_i−2)! )
            ( ⋮             ⋮             )
            ( 0  ⋯  0  1                  ).
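The finite nilpotent series makes e^{tJ} fully explicit for a Jordan block. The sketch below hard-codes the 3 × 3 case and checks the semigroup property e^{(t+s)J} = e^{tJ} e^{sJ}; the values of λ, t, s are arbitrary choices for illustration.

```python
import math

# e^{tJ} for a 3x3 Jordan block J = lam*I + N: since N^3 = 0,
# e^{tJ} = e^{lam t} (I + tN + (tN)^2 / 2), written out entrywise below.
def exp_jordan3(lam, t):
    e = math.exp(lam * t)
    return [[e,   e * t, e * t * t / 2.0],
            [0.0, e,     e * t],
            [0.0, 0.0,   e]]

def mat_mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

# Semigroup check e^{(t+s)J} = e^{tJ} e^{sJ} for arbitrary sample values.
lam, t, s = -0.5, 0.7, 1.1
A = exp_jordan3(lam, t + s)
B = mat_mul(exp_jordan3(lam, t), exp_jordan3(lam, s))
assert all(abs(A[i][j] - B[i][j]) < 1e-12 for i in range(3) for j in range(3))
```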
1/(1 + u) = 1 − u + u² − u³ + ⋯
For any z ∈ C,

exp(z) = 1 + z + z²/2! + z³/3! + ⋯ = Σ_{n=0}^∞ z^n/n!.
1 + u = exp(ln(1 + u)) = Σ_{n=0}^∞ (1/n!) [ Σ_{k=1}^∞ (−1)^{k+1} u^k/k ]^n.
Suppose that J = λI + Z is an m × m Jordan block, and let

B = (ln λ) I + Σ_{k=1}^∞ ((−1)^{k+1}/k) (1/λ^k) Z^k,

where

Z = ( 0  1       0 )
    (    ⋱   ⋱    )
    (        0  1 )
    (           0 )

is the m × m nilpotent shift matrix. Since Z is nilpotent (Z^N = 0 for all N ≥ m), the above sum is finite.
Observe that

exp(B) = exp( (ln λ) I ) exp( Σ_{k=1}^∞ ((−1)^{k+1}/k) (Z/λ)^k )
       = λ exp( Σ_{k=1}^∞ ((−1)^{k+1}/k) (Z/λ)^k )
       = λ ( I + Z/λ )
       = λI + Z
       = J.

We say that ln J = B.
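For a 2 × 2 Jordan block the series for ln J terminates after one term, B = (ln λ)I + Z/λ, which can be verified numerically by exponentiating B back with a truncated series (the sample value λ = 3 is an arbitrary choice):

```python
import math

# For J = [[lam, 1], [0, lam]] = lam*I + Z with Z^2 = 0, the series above
# reduces to B = (ln lam) I + Z/lam. Check exp(B) = J by truncated series.
def mat_mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def expm(A, terms=30):
    result = [[1.0, 0.0], [0.0, 1.0]]
    power = [[1.0, 0.0], [0.0, 1.0]]
    fact = 1.0
    for k in range(1, terms):
        power = mat_mul(power, A)
        fact *= k
        result = [[result[i][j] + power[i][j] / fact for j in range(2)]
                  for i in range(2)]
    return result

lam = 3.0  # arbitrary sample value, lam > 0
B = [[math.log(lam), 1.0 / lam], [0.0, math.log(lam)]]  # candidate for ln J
E = expm(B)
J = [[lam, 1.0], [0.0, lam]]
assert all(abs(E[i][j] - J[i][j]) < 1e-9 for i in range(2) for j in range(2))
```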
Theorem A.12.1 (Routh-Hurwitz). If n = 2, suppose that det A > 0 and trA < 0. Then
A has only eigenvalues with negative real part.
Problem sheets
Contents
Homework sheet 1 – 2003 . . . . . . . . . . . . . . . . . . . . . . . . . . 103
Homework sheet 2 – 2003 . . . . . . . . . . . . . . . . . . . . . . . . . . 113
Homework sheet 3 – 2003 . . . . . . . . . . . . . . . . . . . . . . . . . . 117
Homework sheet 4 – 2003 . . . . . . . . . . . . . . . . . . . . . . . . . . 125
Final examination – 2003 . . . . . . . . . . . . . . . . . . . . . . . . . . 137
Homework sheet 1 – 2006 . . . . . . . . . . . . . . . . . . . . . . . . . . 145
Part I. Suppose that there exists a solution x of (B.1), defined on R+ and satisfying the
inequality
|x(t)| ≤ a, t ∈ R+ (B.4)
Part II.
i) Show that any solution of (B.1) on R+ that satisfies |x(0)| < a, satisfies (B.4).
ii) Deduce from the preceding questions the two following properties.
a) Any solution x of (B.1) on R+ satisfying the condition |x(0)| < a, admits the
limit 0 when t → ∞.
b) The function x ≡ 0 is the unique solution of (B.1) on R+ such that x(0) = 0.
Part III. Application. Show that, for α > 1, all solutions of the equation

x′ = −αx + ln(1 + x²)

tend to 0 as t → ∞.
Exercise 1.2 – Let f : [0, +∞) → R, f ∈ C 1 , and a ∈ R. We consider the initial value
problems
x′(t) + ax(t) = f(t),   t ≥ 0,
x(0) = 0,   (B.5)

and

x′(t) + ax(t) = f′(t),   t ≥ 0,
x(0) = 0.   (B.6)
As these equations are linear, the initial value problems (B.5) and (B.6) admit unique solutions. We denote by φ the solution to (B.5) and by ψ the solution to (B.6). Find a necessary and sufficient condition on f such that φ′ = ψ.
[Hint: Use a variation of constants formula]. ◦
i) Let x be a solution of (B.7) defined on a bounded interval [0, α), with α > 0. Suppose
that t 7→ f (x(t)) is bounded on [0, α).
‖x(t) − x_α‖ ≤ M_α |t − α|
ii) Show that there exists an extension of x that is a solution of (B.7) on the interval
[0, α].
◦
Let us now prove the converse, i.e., that if a function x satisfies (B.2), then it is a solution
to the IVP (B.1). Since x and f are continuous on R+, t ↦ e^{αt} f(x(t)) is continuous on R+. This implies that

t ↦ ∫_0^t e^{αs} f(x(s)) ds

is differentiable on R+, and thus

x′(t) = −αx(t) + f(x(t)),

which implies that x is a solution to (B.1).
Part I. We now assume that (B.3) is also satisfied, and that there exists a solution x on R+
satisfying (B.4).
1) If x is a solution of (B.1), then

x(t) = e^{−αt} x(0) + e^{−αt} ∫_0^t e^{αs} f(x(s)) ds.
Thus,

e^{αt} |x(t)| ≤ |x(0)| exp( ∫_0^t k ds ) ≤ |x(0)| e^{kt}.
2) From (B.3), 0 < k < α, hence α − k > 0, which implies, together with the result of
the previous question, that limt→∞ x(t) = 0.
Part II.
1) Let us suppose that x is a solution of (B.1) that is such that |x(0)| < a. Let A = {t : |x(t)| ≤ a}. Let us show that A = [0, +∞).
First of all, notice that |x(0)| < a and x continuous on R+ implies that there exists
t0 ∈ R+ − {0} such that, for all t ∈ [0, t0 ], |x(t)| ≤ a. Indeed, suppose this were not the case.
Then, for all ε > 0, there exists t_ε ∈ [0, ε] such that |x(t_ε)| > a. This means that for all n ∈ N − {0}, there exists u_n ∈ [0, 1/n] such that |x(u_n)| > a. As u_n → 0 when n → ∞ and x is continuous, |x(0)| ≥ a, since strict inequalities become non-strict in the limit. This is a contradiction with |x(0)| < a. Thus [0, t_0] ⊂ A.
Let us now show that for all t_1 ∈ A, [0, t_1] ⊂ A, i.e., A is an interval. First, if t_1 ≤ t_0, then [0, t_1] ⊂ [0, t_0] ⊂ A. Secondly, in the case t_1 > t_0, suppose that [0, t_1] ⊄ A. This means that there exists η ∈ [0, t_1] such that η ∉ A. More precisely, there exists η ∈ (t_0, t_1) such that η ∉ A, since [0, t_0] ⊂ A and t_1 ∈ A. Let β be the smallest such η, that is, β = inf{t ∈ (t_0, t_1); t ∉ A}. Note that β can also be defined as β = sup{t ∈ (t_0, t_1); t ∈ A}.
Thus
β = inf{t ∈ (t0 , t1 ); |x(t)| > a} = sup{t ∈ (t0 , t1 ); |x(t)| < a}
Since x is continuous, this implies that |x(β)| ≥ a and |x(β)| ≤ a, hence x(β) = ±a. But,
with its sup definition, this implies that β = t1 , whereas with its inf definition, this implies
that β < t1 .
|x(t)| ≤ a ⇒ |x(t)| ≤ |x(0)| e^{−(α−k)t} ⇒ |x(t)| ≤ |x(0)| < a,

and in particular |x(c)| < a. Since x is continuous on R+, there exists t > c such that |x(t)| ≤ a, and thus there exists t > c such that t ∈ A, which is a contradiction. Therefore, A = [0, ∞), and we can conclude that for all t ∈ R+, |x(t)| ≤ a.
Another proof, contributed by Guihong Fan, proceeds by contradiction, using the fact that (B.1) is an autonomous scalar equation. Notice that equation (B.1) can be written in
the form x0 = g(x), with g(u) = −αu + f (u). Thus, since this mapping is continuous,
we can apply Theorem 1.1.8 on the monotonicity of the solutions to an autonomous scalar
differential equation. Assume that x(t) is a solution of (B.1) on R+ that satisfies |x(0)| < a,
but that (B.4) is violated.
Then, since the solution x(t) is monotone, there exists t0 ∈ R+ such that one of the
following holds.
i) x(t_0) = a and x′(t_0) > 0,

since α > k. This is a contradiction with x′(t_0) > 0. Case ii) is treated similarly, and thus it follows that (B.4) holds for all t ∈ R+.
2 – a) If |x(0)| < a, then we have just proved that for all t ∈ R+ , |x(t)| ≤ a. From Part
I, 1), this implies that |x(t)| ≤ |x(0)|e−(α−k)t . Therefore, since α > k, limt→∞ x(t) = 0.
2 – b) To show that x ≡ 0 is the only solution of (B.1) such that x(0) = 0, we first show
that x ≡ 0 is a solution of (B.1). Condition (B.3) applied to 0 states that |0| < a implies
|f (0)| ≤ k|0| = 0.
Uniqueness: let φ be a solution of (B.1) such that φ(0) = 0. This implies that |φ(0)| < a, and as a consequence, it follows from Part I, 1) that for all t ∈ R+, |φ(t)| ≤ |φ(0)|e^{−(α−k)t}, hence for all t ∈ R+, φ(t) = 0.
Part III. All solutions of the nonlinear equation x′ = −αx + ln(1 + x²) tend to zero as t → ∞ when α > 1. Indeed, let f(u) = ln(1 + u²). We have

|f′(u)| = 2|u| / (1 + u²) ≤ 1,

since (|u| − 1)² = u² + 1 − 2|u| ≥ 0 (and hence 2|u|/(1 + u²) ≤ 1). Then |f(u) − f(0)| ≤ |u − 0| implies that |f(u)| ≤ |u| for all u ∈ R. We thus have k = 1 < α, the hypotheses of the exercise are satisfied, and all solutions of the equation tend to zero. ◦
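A crude forward-Euler experiment is consistent with this conclusion: with α > 1, trajectories computed from several starting points all decay to 0. The step size, time horizon, choice of α and initial conditions below are arbitrary test values.

```python
import math

# Forward-Euler integration of x' = -alpha*x + ln(1 + x^2) with alpha = 1.5.
# Step size, horizon and initial conditions are arbitrary test choices.
def simulate(x0, alpha=1.5, h=1e-3, t_end=30.0):
    x = x0
    for _ in range(int(t_end / h)):
        x += h * (-alpha * x + math.log(1.0 + x * x))
    return x

for x0 in (-5.0, 0.3, 10.0):
    assert abs(simulate(x0)) < 1e-4  # every trajectory has decayed to ~0
```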
and

ψ(t) = e^{−at} c_2 + ∫_0^t e^{−a(t−s)} f′(s) ds,   t ≥ 0.
0
Let us show this for the solution of (B.5), the solution of (B.6) is obtained using exactly the
same method. The solution to a linear differential equation consists of the general solution
to the homogeneous equation together with a particular solution to the nonhomogeneous
equation. Here, the homogeneous equation is

x′ = −ax,

and basic integration yields the general solution x(t) = c_1 e^{−at}. To obtain a particular solution to the nonhomogeneous equation (B.5), we use a variation of constants formula: assume that the constant in the solution x(t) = ce^{−at} is a function of time, hence

x(t) = c(t) e^{−at}.

Taking the derivative of this expression, we obtain

x′ = c′ e^{−at} − ac e^{−at}.

Substituting both this expression and x = c(t)e^{−at} into (B.5), we get

c′ e^{−at} − ac e^{−at} + ac e^{−at} = f(t),   t ≥ 0,

and hence

c′ = e^{at} f(t).

Integrating both sides of this expression gives

c(t) = ∫_0^t e^{as} f(s) ds.
Summing the general solution to the homogeneous equation with this last expression gives
the desired result.
Using the initial conditions yields
φ(0) = c1 = 0
ψ(0) = c2 = 0
Recall that if g(t) = ∫_{t_0}^t h(s, t) ds, then for all t_0,

g′(t) = h(t, t) + ∫_{t_0}^t (∂h/∂t)(s, t) ds.
Let us denote M = sup_{t∈[0,α)} ‖f(x(t))‖, and let n, p > N, where N is sufficiently large that α − 1/(n+p) ∈ [0, α) and α − 1/n ∈ [0, α). Using Theorem B.1.3, we obtain

‖x(α − 1/(n+p)) − x(α − 1/n)‖ ≤ M |1/n − 1/(n+p)|,

i.e.,

‖z_{α,n+p} − z_{α,n}‖ ≤ M |1/n − 1/(n+p)|.

So the sequence (z_{α,n})_{n∈N∗} is a Cauchy sequence.
1 – b) For all t ∈ [0, α),

‖x(t) − x(α − 1/n)‖ ≤ | ∫_{α−1/n}^t ‖f(x(s))‖ ds |,

i.e.,

‖x(t) − z_{α,n}‖ ≤ M_α |t − α + 1/n|.   (B.8)

Since the sequence (z_{α,n})_{n∈N∗} is a Cauchy sequence, there exists x_α = lim_{n→∞} z_{α,n}. Thus taking n → ∞ in (B.8) gives

‖x(t) − x_α‖ ≤ M_α |t − α|.
1 – c) According to 1 – b), we have

lim_{t→α, t<α} ‖x(t) − x_α‖ = 0.

Hence

lim_{t→α, t<α} x(t) = x_α.
2) Let
$$z(t) = \begin{cases} x(t) & \text{if } t \in [0, \alpha), \\ x_\alpha & \text{if } t = \alpha. \end{cases}$$
Let us show that z is a solution of (B.7) on $[0, \alpha]$: it suffices that z be continuous on $[0, \alpha]$ and satisfy the integral equation
$$z(t) = z(t_0) + \int_{t_0}^t f(z(s))\, ds$$
for all $t \in [0, \alpha]$, with an arbitrary $t_0 \in [0, \alpha]$. We know by construction that z is continuous (since $\lim_{t\to\alpha, t<\alpha} x(t) = x_\alpha$).
We have
$$z(\alpha) = x_\alpha = \lim_{t\to\alpha, t<\alpha} x(t) = \lim_{t\to\alpha, t<\alpha} \left( x(t_0) + \int_{t_0}^t f(x(s))\, ds \right) = \lim_{t\to\alpha, t<\alpha} \left( z(t_0) + \int_{t_0}^t f(z(s))\, ds \right).$$
So
$$z(\alpha) = z(t_0) + \int_{t_0}^{\alpha} f(z(s))\, ds.$$
Exercise 2.3 – Let X(t) be a fundamental matrix for the system $x' = A(t)x$, where A(t) is an $n \times n$ matrix with continuous entries on $\mathbb{R}$. What conditions on A(t) and C guarantee that CX(t) is a fundamental matrix, where C is a constant matrix? ◦
i) Show that P(ω), the set of ω-periodic solutions of (B.9), is a vector space.
◦
$$x_1 = c_1, \qquad x_2 = c_2 e^{(a+d)t}.$$
• If $a + d > 0$, then the $x_1$ axis is invariant and any solution that does not start on the $x_1$ axis diverges to $\pm\infty$ parallel to the $x_2$ axis.
◦
Solution – Exercise 2 – 1) We have $e^A = \sum_{k=0}^\infty A^k/k!$. Taking the norm,
$$\| e^A \| = \left\| \sum_{k=0}^\infty \frac{A^k}{k!} \right\|,$$
whence, by the triangle inequality and the submultiplicativity of the norm ($\|A^k\| \le \|A\|^k$),
$$\| e^A \| \le \sum_{k=0}^\infty \frac{\|A\|^k}{k!} = e^{\|A\|}.$$
2) Let J be the Jordan form of A, i.e., there exists P nonsingular such that $P^{-1}AP = J$, where J has the form $\mathrm{diag}(B_j)_{j=1,\dots,p}$, with $B_j$ the Jordan block corresponding to the eigenvalue $\lambda_j$ of multiplicity $m_j$. Then, since A and J are similar,
$$\det e^A = \det e^J = \det(e^{B_1}) \cdots \det(e^{B_p}) = e^{\lambda_1 m_1} \cdots e^{\lambda_p m_p} = \exp\Big( \sum_{k=1}^p \lambda_k m_k \Big) = e^{\operatorname{tr} A}.$$
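Both identities of parts 1) and 2) lend themselves to a quick numerical check. The sketch below uses a truncated power series for $e^A$ (to stay self-contained) and a random $4\times 4$ matrix, an arbitrary test case not taken from the exercise.

```python
import numpy as np

def expm_series(M, terms=60):
    # Matrix exponential via its defining power series (fine for moderate norms).
    out, term = np.eye(M.shape[0]), np.eye(M.shape[0])
    for n in range(1, terms):
        term = term @ M / n
        out = out + term
    return out

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))

# Part 1): ||e^A|| <= e^{||A||} for a submultiplicative norm (spectral norm here).
assert np.linalg.norm(expm_series(A), 2) <= np.exp(np.linalg.norm(A, 2)) + 1e-9

# Part 2): det(e^A) = e^{tr A}.
assert np.isclose(np.linalg.det(expm_series(A)), np.exp(np.trace(A)))
```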
3) We can write
$$\lim_{t\to\infty} e^{-\alpha t} e^{At} = \lim_{t\to\infty} e^{(A - \alpha I)t}.$$
Let $\mathrm{Sp}(A)$ be the spectrum of A, i.e., the set of eigenvalues of A. Then, if $\lambda \in \mathrm{Sp}(A)$, $\lambda - \alpha \in \mathrm{Sp}(A - \alpha I)$. We have $\lim_{t\to\infty} e^{(A-\alpha I)t} = 0$ if, and only if, $\Re(\mu) < 0$ for all $\mu \in \mathrm{Sp}(A - \alpha I)$, i.e., $\Re(\lambda - \alpha) < 0$ for all $\lambda \in \mathrm{Sp}(A)$, i.e., $\Re(\lambda) < \alpha$ for all $\lambda \in \mathrm{Sp}(A)$. Hence, choosing $\alpha$ greater than the maximal real part of the eigenvalues of A ensures that $\lim_{t\to\infty} e^{(A-\alpha I)t} = 0$. ◦
For CX(t) to be a fundamental matrix for $x' = A(t)x$, we need $(CX(t))' = A(t)(CX(t))$, i.e., $CA(t)X(t) = A(t)CX(t)$; since X(t) is nonsingular, this requires that C and A(t) commute. Also, a fundamental matrix must be nonsingular. As X(t) is a fundamental matrix, it is nonsingular; thus C must be nonsingular for CX(t) to be nonsingular. So, to conclude, if X(t) is a fundamental matrix for the system $x' = A(t)x$, then CX(t) is a fundamental matrix for $x' = A(t)x$ if C is nonsingular and commutes with A(t). ◦
which implies that $x_1 - x_2$ is a solution to (B.9), and therefore $\dim P(\omega) \ne 0$, a contradiction. Thus the solution to (B.10) is unique.
a) ⇒ c) Let x be the unique ω-periodic solution of (B.10), and assume that $\dim P(\omega) \ne 0$, i.e., there exists y, a nontrivial ω-periodic solution to (B.9). Then
$$x' = A(t)x,$$
where A(t) has continuous entries on $\mathbb{R}$ and is such that $A(t + \omega) = A(t)$ for some $\omega \in \mathbb{R}$. Then 1 is an eigenvalue of the monodromy matrix $X(\omega)$, where X is the fundamental matrix solution with $X(0) = I$.
Proof. For some constant vector $c \ne 0$, we have $x(t) = X(t)c$; evaluating at $t + \omega$, $x(t + \omega) = X(t + \omega)c$. As x is periodic of period ω, $x(t) = x(t + \omega)$, so that, using the previously obtained forms,
ii) Deduce the formula for the principal solution matrix R(t, t0 ).
with
$$A(t) = \begin{pmatrix} 1/t & t \\ 0 & 1 \end{pmatrix}.$$
◦
and $B(t) = (-t^2, 2t)^T$. The system (B.11) can then be written as
$$\xi' = A\xi + B(t).$$
Thus $\dim \ker(A - 2I) = 1 \ne 2$, and A is not diagonalisable. Let us compute the Jordan canonical form of A. There exists P nonsingular such that
$$P^{-1}AP = \begin{pmatrix} 2 & 1 \\ 0 & 2 \end{pmatrix}.$$
We have $P^{-1}AP = 2I + N$, where
$$N = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}.$$
Therefore, $e^{(P^{-1}AP)t} = e^{2It + Nt} = e^{2t} e^{Nt}$. Now, N is nilpotent ($N^2 = 0$), so $e^{Nt} = \sum_{n=0}^\infty \frac{t^n}{n!} N^n = I + Nt$. As a consequence,
$$e^{(P^{-1}AP)t} = e^{2t}(I + Nt) = e^{2t}\left[ \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} + \begin{pmatrix} 0 & t \\ 0 & 0 \end{pmatrix} \right] = \begin{pmatrix} e^{2t} & t e^{2t} \\ 0 & e^{2t} \end{pmatrix}.$$
Thus
$$e^{At} = P \begin{pmatrix} e^{2t} & t e^{2t} \\ 0 & e^{2t} \end{pmatrix} P^{-1} = \begin{pmatrix} (1-t)e^{2t} & -t e^{2t} \\ t e^{2t} & (1+t)e^{2t} \end{pmatrix} = e^{2t} \begin{pmatrix} 1-t & -t \\ t & 1+t \end{pmatrix}.$$
We still have to compute $\int_0^t e^{-As} B(s)\, ds$. We have
$$e^{-As} B(s) = e^{-2s} \begin{pmatrix} 1+s & s \\ -s & 1-s \end{pmatrix} \begin{pmatrix} -s^2 \\ 2s \end{pmatrix} = e^{-2s} \begin{pmatrix} -s^3 - s^2 + 2s^2 \\ s^3 + 2s - 2s^2 \end{pmatrix} = e^{-2s} \begin{pmatrix} s^2 - s^3 \\ s^3 - 2s^2 + 2s \end{pmatrix}.$$
Let $I_1(t) = \int_0^t e^{-2s} s\, ds$, $I_2(t) = \int_0^t e^{-2s} s^2\, ds$ and $I_3(t) = \int_0^t e^{-2s} s^3\, ds$. Then
$$\int_0^t e^{-As} B(s)\, ds = \begin{pmatrix} I_2(t) - I_3(t) \\ I_3(t) - 2I_2(t) + 2I_1(t) \end{pmatrix}.$$
As a conclusion,
$$\xi(t) = e^{2t} \begin{pmatrix} 1-t & -t \\ t & 1+t \end{pmatrix} \xi_0 + e^{2t} \begin{pmatrix} 1-t & -t \\ t & 1+t \end{pmatrix} \begin{pmatrix} e^{-2t}\left( \frac{1}{2}t^3 + \frac{1}{4}t^2 + \frac{1}{4}t + \frac{1}{8} \right) - \frac{1}{8} \\ e^{-2t}\left( -\frac{1}{2}t^3 + \frac{1}{4}t^2 - \frac{3}{4}t - \frac{3}{8} \right) + \frac{3}{8} \end{pmatrix}.$$
Expanding, with $\xi_0 = (x_0, y_0)^T$,
$$\xi(t) = \begin{pmatrix} (1-t)e^{2t} x_0 - t e^{2t} y_0 + \frac{3}{4}t^2 + \frac{1}{2}t + \frac{1}{8} - \left( \frac{1}{8} + \frac{1}{4}t \right) e^{2t} \\ t e^{2t} x_0 + (1+t)e^{2t} y_0 - \frac{1}{4}t^2 - t - \frac{3}{8} + \left( \frac{3}{8} + \frac{1}{4}t \right) e^{2t} \end{pmatrix}.$$
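The computation above can be checked numerically. The matrix A itself is not restated in this excerpt; it can be recovered from the computed exponential via $A = \frac{d}{dt}e^{At}\big|_{t=0}$, which gives $A = \begin{pmatrix} 1 & -1 \\ 1 & 3 \end{pmatrix}$ (an inferred value, to be compared against the problem sheet). The sketch checks both $e^{At}$ and the affine solution $\xi(t)$.

```python
import numpy as np

# A is not restated in this excerpt; it is recovered from the computed
# exponential via A = d/dt e^{At}|_{t=0}, giving A = [[1, -1], [1, 3]].
A = np.array([[1.0, -1.0], [1.0, 3.0]])

def eAt(t):
    # Closed form from the Jordan decomposition: e^{At} = e^{2t}(I + t(A - 2I)).
    return np.exp(2 * t) * np.array([[1 - t, -t], [t, 1 + t]])

def expm_series(M, terms=60):
    out, term = np.eye(2), np.eye(2)
    for n in range(1, terms):
        term = term @ M / n
        out = out + term
    return out

t = 0.7
assert np.allclose(eAt(t), expm_series(A * t))

# Check the full affine solution: xi' = A xi + B(t), with B(t) = (-t^2, 2t)^T.
x0, y0 = 0.3, -0.5

def xi(t):
    e = np.exp(2 * t)
    p1 = 0.75 * t**2 + 0.5 * t + 0.125 - (0.125 + 0.25 * t) * e
    p2 = -0.25 * t**2 - t - 0.375 + (0.375 + 0.25 * t) * e
    return eAt(t) @ np.array([x0, y0]) + np.array([p1, p2])

t, h = 0.9, 1e-6
lhs = (xi(t + h) - xi(t - h)) / (2 * h)            # central difference
assert np.allclose(lhs, A @ xi(t) + np.array([-t**2, 2 * t]), atol=1e-4)
```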
It follows that
$$\frac{1}{h}\big( M(t+h) - M(t) \big) = \frac{1}{h} \left[ \exp\left( \int_{t_0}^{t+h} A(s)\, ds \right) - \exp\left( \int_{t_0}^{t} A(s)\, ds \right) \right] = M(t)\, \frac{1}{h} \left[ \exp\left( \int_{t}^{t+h} A(s)\, ds \right) - I \right] = \frac{1}{h} \left[ \exp\left( \int_{t}^{t+h} A(s)\, ds \right) - I \right] M(t).$$
◦
it is obvious that Theorem B.3.5 can be used here, since I commutes with all matrices. Thus,
$$R(t, t_0) = \exp\left( \int_{t_0}^t a(s)\, ds\; I \right) \exp\left( \int_{t_0}^t b(s)\, ds\; V \right).$$
Let $\alpha(t) = \int_{t_0}^t a(s)\, ds$ and $\beta(t) = \int_{t_0}^t b(s)\, ds$. Then $R(t, t_0) = e^{\alpha(t)I} e^{\beta(t)V} = e^{\alpha(t)} I e^{\beta(t)V}$. Now notice that $V^2 = -I$, $V^3 = -V$, etc., so that we can write
$$V^n = \begin{cases} (-1)^p I & \text{if } n = 2p, \\ (-1)^p V & \text{if } n = 2p + 1. \end{cases}$$
Thus
Solution – Exercise 3.4 – 1) Notice that the $y'$ equation in (B.16) does not involve x. Therefore, we can solve it directly, giving $y(t) = Ce^t$, with $C \in \mathbb{R}$. Substituting this into the equation for $x'$, we have
$$x' = \frac{1}{t} x + tCe^t.$$
To solve this nonhomogeneous first-order scalar equation, we start by solving the homogeneous part, $x' = x/t$. This equation is separable, giving the solution $x(t) = Kt$, for $K \in \mathbb{R}$. Now we use a variation of constants approach to find a particular solution to the nonhomogeneous problem. We use the ansatz $x(t) = K(t)t$, which, when differentiated and substituted into the nonhomogeneous equation, gives $K'(t) = Ce^t$, and hence $K(t) = Ce^t$; the corresponding particular solution is $Cte^t$, giving the general solution $x(t) = Kt + Cte^t$.
Let $t_0 \ne 0$ (to avoid problems with $1/t$), and suppose $x(t_0) = x_0$, $y(t_0) = y_0$. Then $x_0 = Kt_0 + Ct_0 e^{t_0}$ and $y_0 = Ce^{t_0}$. It follows that $K = x_0/t_0 - y_0$ and $C = e^{-t_0} y_0$, and the solution to the equation going through the point $(x_0, y_0)$ at time $t_0$ is given by
$$\xi(t) = \begin{pmatrix} x(t) \\ y(t) \end{pmatrix} = \begin{pmatrix} \left( \frac{x_0}{t_0} - y_0 \right)t + e^{-t_0} y_0\, t e^t \\ e^{-t_0} y_0 e^t \end{pmatrix} = \begin{pmatrix} \frac{t}{t_0} x_0 + y_0\, t \left( e^{t-t_0} - 1 \right) \\ y_0 e^{t-t_0} \end{pmatrix}.$$
$$\xi' = A(t)\xi, \qquad \xi(t_0) = \xi_0$$
is given by $\xi(t) = R(t, t_0)\xi_0$. Thus, the resolvent matrix (or principal solution matrix) for (B.16) is given by
$$R(t, t_0) = \begin{pmatrix} \dfrac{t}{t_0} & -t + t e^{t-t_0} \\ 0 & e^{t-t_0} \end{pmatrix}.$$
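A short numerical check (not in the original solution) that this $R(t, t_0)$ is indeed the resolvent: it should satisfy $\frac{d}{dt}R(t, t_0) = A(t)R(t, t_0)$ and $R(t_0, t_0) = I$ for the matrix $A(t)$ of (B.16); the value $t_0 = 1$ is an arbitrary choice.

```python
import numpy as np

# Resolvent of x' = A(t)x with A(t) = [[1/t, t], [0, 1]] (system (B.16)).
t0 = 1.0

def A(t):
    return np.array([[1.0 / t, t], [0.0, 1.0]])

def R(t):
    e = np.exp(t - t0)
    return np.array([[t / t0, -t + t * e], [0.0, e]])

assert np.allclose(R(t0), np.eye(2))               # R(t0, t0) = I
t, h = 1.7, 1e-6
dR = (R(t + h) - R(t - h)) / (2 * h)               # central difference
assert np.allclose(dR, A(t) @ R(t), atol=1e-4)     # dR/dt = A(t) R
```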
3) First of all, notice that A(t) and A(s) do not commute. Let us compute $B(t) = \int_{t_0}^t A(s)\, ds$:
$$B(t) = \begin{pmatrix} \int_{t_0}^t \frac{ds}{s} & \int_{t_0}^t s\, ds \\ 0 & \int_{t_0}^t ds \end{pmatrix} = \begin{pmatrix} \ln\frac{t}{t_0} & \frac{1}{2}\left( t^2 - t_0^2 \right) \\ 0 & t - t_0 \end{pmatrix}.$$
Write $B(t) = \begin{pmatrix} \alpha & \beta \\ 0 & \gamma \end{pmatrix}$, with $\alpha = \ln(t/t_0)$, $\beta = \frac{1}{2}(t^2 - t_0^2)$ and $\gamma = t - t_0$. The eigenvalues of B(t) are α and γ. Note that $e^{B(t_0)} = e^0 = I = R(t_0, t_0)$; we are concerned with finding a $t \ne t_0$ such that B(t) is diagonalizable. If a t exists such that $\alpha \ne \gamma$, then B(t) is diagonalizable, i.e., there exists P nonsingular such that
$$P^{-1} B(t) P = D = \begin{pmatrix} \alpha & 0 \\ 0 & \gamma \end{pmatrix}.$$
Then
$$e^{B(t)} = P \begin{pmatrix} e^\alpha & 0 \\ 0 & e^\gamma \end{pmatrix} P^{-1}.$$
We find
$$P = \begin{pmatrix} 1 & \beta \\ 0 & \gamma - \alpha \end{pmatrix}, \qquad P^{-1} = \frac{1}{\gamma - \alpha} \begin{pmatrix} \gamma - \alpha & -\beta \\ 0 & 1 \end{pmatrix}.$$
The element ∆ in this matrix is the only one different from the elements in $R(t, t_0)$. We have
$$\Delta = \frac{t^2 - t_0^2}{2\left( t - t_0 - \ln\frac{t}{t_0} \right)} \left( e^{t-t_0} - \frac{t}{t_0} \right) \ne t\left( e^{t-t_0} - 1 \right).$$
◦
for ω > 0, is called a delay differential equation (also called a differential-difference equation, or an equation with deviating argument), and ω is called the delay. The basic initial value problem for (B.17) takes the form
i) Use the method of steps to construct the solution to (B.18) on the interval [t0 , t0 + ω],
that is, find how to construct the solution to the non delayed problem
ii) Discuss existence and uniqueness of the solution on the interval [t0 , t0 + ω], depending
on the nature of φ0 and f .
iii) Suppose that φ0 ∈ C 0 ([t0 − ω, t0 ]). Discuss the regularity of the solution to (B.18) on
the interval [t0 + kω, t0 + (k + 1)ω], k ∈ N.
with a, C ∈ R, ω ∈ R∗+ . Using the ideas of the previous exercise, find the solution to (B.20)
on the interval [t0 + kω, t0 + (k + 1)ω], k ∈ N. ◦
Exercise 3.3 – Compute $A_i^n$ and $e^{tA_i}$ for the following matrices.
$$A_1 = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \qquad A_2 = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}, \qquad A_3 = \begin{pmatrix} 0 & 1 & -\sin\theta \\ -1 & 0 & \cos\theta \\ -\sin\theta & \cos\theta & 0 \end{pmatrix}.$$
◦
with $\lim_{k\to\infty} C_k(t) = \frac{t^2}{2}\, |||A^2|||$.
b) Show that for all $t \in \mathbb{R}$ and all $k \in \mathbb{N}^*$,
$$\Big|\Big|\Big| I + \frac{t}{k} A \Big|\Big|\Big| \le e^{\frac{|t|}{k} |||A|||}.$$
c) Deduce that
$$e^{tA} = \lim_{k\to\infty} \left( I + \frac{t}{k} A \right)^k.$$
ii) Suppose now that A is symmetric and that its eigenvalues are > −α, with α > 0.
c) Show that
Hence we find the solution to the differential equation to be, on the interval $[\omega, 2\omega]$,
$$x_2(t) = C\left( 1 + at + \frac{1}{2} a^2 (t - \omega)^2 \right).$$
We develop the intuition that the solution at step n (i.e., on the interval $[(n-1)\omega, n\omega]$) must take the form
$$x_n(t) = C \sum_{k=0}^{n} a^k \frac{(t - (k-1)\omega)^k}{k!}. \qquad \text{(B.22)}$$
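Formula (B.22) can be exercised numerically with the method of steps. The sketch below assumes the constant history $x(t) = C$ on $[-\omega, 0]$ and sample values of a, ω and C (illustrative choices), and verifies both continuity at the joints and the delay equation $x'(t) = a\,x(t - \omega)$ itself.

```python
import numpy as np
from math import factorial

# Method of steps for x'(t) = a x(t - w), constant history x = C on [-w, 0].
# a, w, C below are sample values, not data from the problem sheet.
a, w, C = 0.8, 1.0, 2.0

def x(t, n):
    # Candidate formula (B.22) on [(n-1)w, n w]:
    # C * sum_{k=0}^n a^k (t - (k-1)w)^k / k!
    return C * sum(a**k * (t - (k - 1) * w)**k / factorial(k) for k in range(n + 1))

# Continuity at the joints: x_n(n w) = x_{n+1}(n w).
for n in range(1, 5):
    assert abs(x(n * w, n) - x(n * w, n + 1)) < 1e-12

# The delay equation itself, checked by a central difference on [2w, 3w].
t, h = 2.4, 1e-6
assert abs((x(t + h, 3) - x(t - h, 3)) / (2 * h) - a * x(t - w, 2)) < 1e-6
```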
It follows that
$$e^{A_2 t} = \left( \sum_{n=0}^\infty \frac{t^n}{n!} \right) I + \left( \sum_{n=0}^\infty \frac{n t^n}{n!} \right) N = e^t I + t \left( \sum_{n=1}^\infty \frac{t^{n-1}}{(n-1)!} \right) N = e^t I + t e^t N = e^t \begin{pmatrix} 1 & t \\ 0 & 1 \end{pmatrix}.$$
Finally, for matrix $A_3$, we compute $A_3^2$; its first row is $\left( -\cos^2\theta,\ -\tfrac{1}{2}\sin 2\theta,\ \cos\theta \right)$.
$$e^{\frac{t}{k} A} = I + \frac{t}{k} A + \frac{t^2}{2k^2} A^2 + \frac{t^2}{k^2}\, \varepsilon\!\left( \frac{t}{k} \right).$$
Thus
$$e^{\frac{t}{k} A} - I - \frac{t}{k} A = \frac{t^2}{2k^2} A^2 + \frac{t^2}{k^2}\, \varepsilon\!\left( \frac{t}{k} \right).$$
Therefore, taking the norm $|||\cdot|||$ of this expression,
$$\Big|\Big|\Big| e^{\frac{t}{k} A} - \left( I + \frac{t}{k} A \right) \Big|\Big|\Big| \le \frac{1}{k^2} \left[ \frac{t^2}{2} |||A^2||| + t^2\, \Big|\Big|\Big| \varepsilon\!\left( \frac{t}{k} \right) \Big|\Big|\Big| \right] = \frac{1}{k^2} C_k(t).$$
We have
$$\lim_{k\to\infty} C_k(t) = \frac{t^2}{2} |||A^2|||.$$
Let $S_k = \sum_{j=3}^\infty \frac{t^j}{k^{j-2} j!} |||A|||^j$. This series is uniformly convergent, which implies that we can interchange the limit and the sum:
$$\lim_{k\to\infty} S_k = \sum_{j=3}^\infty \lim_{k\to\infty} \frac{t^j}{k^{j-2} j!} |||A|||^j = 0.$$
1–b) We have already seen (Exercise 2, Assignment 2) that $|||e^A||| \le e^{|||A|||}$. Therefore,
$$\Big|\Big|\Big| e^{\frac{t}{k} A} \Big|\Big|\Big| \le e^{\frac{|t|}{k} |||A|||}.$$
But
$$e^{\frac{|t|}{k} |||A|||} = \sum_{j=0}^\infty \frac{1}{j!} \frac{|t|^j}{k^j} |||A|||^j = 1 + \frac{|t|}{k} |||A||| + \frac{|t|^2}{2k^2} |||A|||^2 + \cdots$$
which, since $\frac{|t|^2}{2k^2} |||A|||^2 + \cdots \ge 0$, implies that
$$e^{\frac{|t|}{k} |||A|||} \ge 1 + \frac{|t|}{k} |||A||| = |||I||| + \Big|\Big|\Big| \frac{t}{k} A \Big|\Big|\Big| \ge \Big|\Big|\Big| I + \frac{t}{k} A \Big|\Big|\Big|.$$
$$\Big|\Big|\Big| \left( I + \frac{t}{k} A \right)^j \Big|\Big|\Big| \le \Big|\Big|\Big| I + \frac{t}{k} A \Big|\Big|\Big|^j \le \left( e^{\frac{|t|}{k} |||A|||} \right)^j = e^{\frac{j|t|}{k} |||A|||}.$$
Now,
$$e^{-(\alpha I + A)t} = P \begin{pmatrix} e^{-(\alpha + \lambda_1)t} & & 0 \\ & \ddots & \\ 0 & & e^{-(\alpha + \lambda_n)t} \end{pmatrix} P^{-1}.$$
Since for all $i = 1, \dots, n$, $\lambda_i > -\alpha$, it follows that $\lim_{t\to\infty} e^{-(\alpha + \lambda_i)t} = 0$, which in turn implies that $\lim_{t\to\infty} e^{-(\alpha I + A)t} = 0$. Using this in (B.25) gives (B.24) for $k = 0$.
Now assume (B.24) holds for $k = j$, i.e.,
$$(\alpha I + A)^{-j} = \int_0^\infty e^{-(\alpha I + A)t}\, \frac{t^{j-1}}{(j-1)!}\, dt.$$
Then, using $e^{-(\alpha I + A)t} = -(\alpha I + A)^{-1} \frac{d}{dt} e^{-(\alpha I + A)t}$ and integrating by parts,
$$\int_0^\infty e^{-(\alpha I + A)t}\, \frac{t^j}{j!}\, dt = -(\alpha I + A)^{-1} \int_0^\infty \left( \frac{d}{dt} e^{-(\alpha I + A)t} \right) \frac{t^j}{j!}\, dt = -(\alpha I + A)^{-1} \left( \left[ e^{-(\alpha I + A)t}\, \frac{t^j}{j!} \right]_0^\infty - \int_0^\infty e^{-(\alpha I + A)t}\, \frac{t^{j-1}}{(j-1)!}\, dt \right).$$
As we did in the $k = 0$ case, we now use the bound on the eigenvalues to get rid of the boundary term:
$$\frac{t^j}{j!}\, e^{-(\alpha I + A)t} = P \begin{pmatrix} \frac{t^j}{j!} e^{-(\alpha + \lambda_1)t} & & 0 \\ & \ddots & \\ 0 & & \frac{t^j}{j!} e^{-(\alpha + \lambda_n)t} \end{pmatrix} P^{-1} \longrightarrow 0 \quad \text{as } t \to \infty.$$
Therefore,
$$\int_0^\infty e^{-(\alpha I + A)t}\, \frac{t^j}{j!}\, dt = (\alpha I + A)^{-1} \int_0^\infty e^{-(\alpha I + A)t}\, \frac{t^{j-1}}{(j-1)!}\, dt = (\alpha I + A)^{-1} (\alpha I + A)^{-j} = (\alpha I + A)^{-(j+1)}.$$
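The identity just established reduces, in the scalar case, to the Gamma integral $a^{-k} = \int_0^\infty e^{-at}\, t^{k-1}/(k-1)!\, dt$ for $a > 0$, which is easy to check numerically; $a = 2.5$ below is an arbitrary sample value.

```python
import numpy as np
from math import factorial

# Scalar instance of (alpha I + A)^{-k} = int_0^inf e^{-(alpha I+A)t} t^{k-1}/(k-1)! dt:
# for a > 0 and k >= 1, a^{-k} = int_0^inf e^{-a t} t^{k-1}/(k-1)! dt.
a = 2.5
t = np.linspace(0.0, 40.0, 400001)   # the integrand is negligible beyond t = 40

for k in (1, 2, 3):
    integrand = np.exp(-a * t) * t**(k - 1) / factorial(k - 1)
    integral = float(np.sum((integrand[1:] + integrand[:-1]) * np.diff(t)) / 2)
    assert abs(integral - a**(-k)) < 1e-6
```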
$$e^{-At} = \left( \lim_{k\to\infty} \left( I + \frac{t}{k} A \right)^k \right)^{-1} = \lim_{k\to\infty} \left( I + \frac{t}{k} A \right)^{-k}. \qquad \text{(B.26)}$$
$$\forall t > 0, \quad \big|\big|\big| e^{-At} \big|\big|\big| \le M.$$
We now treat the reverse implication, $(\forall t > 0,\ |||e^{-At}||| \le M) \Rightarrow (\forall u > 0,\ |||(I + uA)^{-k}||| \le M)$. We have
$$(I + uA)^{-k} = \left[ u \left( \frac{1}{u} I + A \right) \right]^{-k} = u^{-k} \left[ \frac{1}{u} I + A \right]^{-k}.$$
Suppose that $-\alpha > -1/u$, i.e., $0 < u < 1/\alpha$. Then, from 2–a), it follows that
$$(I + uA)^{-k} = u^{-k} \int_0^\infty e^{-\left( \frac{1}{u} I + A \right)t}\, \frac{t^{k-1}}{(k-1)!}\, dt.$$
2–c) Let us begin with the forward implication (⇒). To apply 2–b) with $k = 0$, it suffices that the eigenvalues of A be greater than $-\alpha$. Take $\lambda_0 = \alpha$. Then $\lambda > \alpha = \lambda_0$, and so
$$(\lambda I + A)^{-1} = \int_0^\infty e^{-(\lambda I + A)t}\, dt = \int_0^\infty e^{-\lambda t} e^{-At}\, dt \ge 0$$
for λ sufficiently large. Taking $\lambda = k/t$, the previous expression can be written
$$\forall t > 0,\ \forall k \ge k_0, \quad \left( \frac{k}{t} I + A \right)^{-k} \ge 0,$$
where $k_0$ is sufficiently large. This implies that
$$\forall t > 0,\ \forall k \ge k_0, \quad \left( \frac{k}{t} \right)^{-k} \left( I + \frac{t}{k} A \right)^{-k} \ge 0.$$
As $(k/t)^{-k} > 0$,
$$\forall t > 0,\ \forall k \ge k_0, \quad \left( I + \frac{t}{k} A \right)^{-k} \ge 0.$$
¹See, e.g., M. Abramowitz and I.A. Stegun, Handbook of Mathematical Functions, Dover, 1965.
so
$$\forall t > 0, \quad \lim_{k\to\infty} \left( I + \frac{t}{k} A \right)^{-k} \ge 0,$$
which finally implies that
$$\forall t > 0, \quad e^{-At} \ge 0.$$
3) The results of the previous part hold. However, in the case of a nonsymmetric matrix, we need to ask for the real parts of the eigenvalues to be greater than $-\alpha$. ◦
This examination paper includes 4 pages and 3 questions. You are responsible
for ensuring that your copy of the paper is complete.
Detailed Instructions
You have 72 hours, from the time you pick up and sign for this examination sheet, to
complete this examination. You are to work on this examination by yourself. Any hint of collaborative work will be considered evidence of academic dishonesty. You are not to have any outside contacts concerning this subject, except myself.
You can use any document that you find useful. If using theorems from outside sources,
give the bibliographic reference, and show clearly how your problem fits in with the conditions
of application of the theorem. When citing theorems from the lecture notes, refer to them
by the number they have on the last version of the notes, as posted on the website on the
first day of the examination period.
Pay attention to the form of your answers: as this is a take-home examination, you are
expected to hand back a very legible document, in terms of the presentation of your answers.
Show your calculations, but try to be concise.
This examination consists of 1 independent question and 2 problems. In questions or
problems that have multiple parts, you are always allowed to consider as proved the results
of a previous part, even if you have not actually done that part.
Foreword to the correction. This examination was long, but established the results in
a very guided way. Exercise 1 was almost trivial. Both of the Problems dealt with Sturm
theory. This comes as an illustration of the richness of behaviors that can be observed in
differential equations: simple equations such as (B.29) can have very complex behaviors.
Concerning the difficulty of the problems, it was not excessive. Problem 2 is a shortened
and simplified version of a review problem for the CAPES, a French competition to hire
high school teachers. The original problem comprised 23 questions, and was written by
candidates in 5 hours. Problem 3 introduced the Wronskian, which we did not have time to
cover during class. It also established further results of Sturm type.
ii) Let u be an eigenvector of R(ω, 0), associated to the eigenvalue λ. Show that the
solution of (B.27) taking the value u for t0 = 0 is such that
iii) Conversely, show that if x is a nontrivial solution of (B.27) such that (B.28) holds,
then λ is an eigenvalue of R(ω, 0).
Problem 2 – The aim of this problem is to study some properties of the solutions of the differential equation
$$x'' + q(t)x = 0, \qquad \text{(B.29)}$$
where q is a continuous function from $\mathbb{R}$ to $\mathbb{R}$.
i) Show that for $t_0, x_0, y_0 \in \mathbb{R}$, there exists a unique solution of (B.29) such that
$$x(t_0) = x_0, \qquad x'(t_0) = y_0.$$
Before proceeding with the study of the solutions of (B.29), we establish a few useful results
on convex functions.
ii) Let f be a function defined on R, convex and nonnegative. Suppose that f has two
zeros t1 , t2 and that t1 < t2 . Show that f is zero on the interval [t1 , t2 ].
Let c ∈ R and f be a convex function that is bounded from above on the interval [c, +∞).
It can then be shown that f is decreasing on [c, +∞). Using this fact, show the following.
Part I.
iv) Let $a, b \in \mathbb{R}$, $a < b$. We assume that (B.29) has a solution x, zero at a and at b and positive on (a, b). Show that
$$\int_a^b |q(t)|\, dt > \frac{4}{b - a}.$$
v) We suppose that $\int_0^\infty |q(t)|\, dt$ converges. Let x be a bounded solution of (B.29). Determine the behaviour of $x'$ as $t \to \infty$.
vi) We suppose that q ∈ C 1 and is positive and increasing on R+ . Show that all solutions
of (B.29) are bounded on R+ .
Part II. We suppose in this part that q is nonpositive and is not the zero function.
vii) Let x be a solution of (B.29). Show that $x^2$ is a convex function.
viii) Show that if x is a solution of (B.29) that has two distinct zeros, then x ≡ 0.
ix) Show that if x is a bounded solution of (B.29), then x ≡ 0.
Part III.
x) Let x, y be two solutions of (B.29). Show that the function $xy' - x'y$ is constant.
xi) Let x1 and x2 be the solutions of (B.29) that satisfy
Show that (x1 , x2 ) is a basis of the vector space S on R of the solutions of (B.29).
What is the value of $x_1 x_2' - x_1' x_2$? Can the functions $x_1$ and $x_2$ have a common zero?
Justify your answer.
xii) Discuss the results of question 11) in the context of linear systems, i.e., transform
(B.29) into a system of first-order differential equations and express question 11) and
its answer in this context.
xiii) Show that if q is an even function, then the function x1 is even and the function x2 is
odd.
Problem 3 – The aim of this problem is to show some elementary properties of the Wronskian of a system of solutions, and to use them to study a second-order differential equation.
Consider the nth order ordinary differential equation
i) Find the matrix A(t) such that this system can be written as the first-order linear
system y 0 = A(t)y.
Part I: Wronskian. Let $f_1, \dots, f_n$ be n functions from $\mathbb{R}$ into $\mathbb{R}$ that are $n - 1$ times differentiable. We define $W(f_1, \dots, f_n)$, the Wronskian of $f_1, \dots, f_n$, by
$$W(f_1, \dots, f_n)(t) = \det \begin{pmatrix} f_1(t) & \cdots & f_n(t) \\ f_1'(t) & \cdots & f_n'(t) \\ \vdots & & \vdots \\ f_1^{(n-1)}(t) & \cdots & f_n^{(n-1)}(t) \end{pmatrix}.$$
iii) Using the linear system $x' = A(t)x$, show that for every set of n solutions,
$$W(t) = W(s) \exp\left( \int_s^t a_{n-1}(u)\, du \right).$$
Part II : a theorem of Sturm Let us now consider the second-order differential equation
in an interval (t1 , t2 ). Every solution of (B.32) that is not identically zero has at most one
zero in the interval [t1 , t2 ].
vi) Let φ be a solution of (B.32) on (t1 , t2 ), and v be a zero of φ in this interval. Discuss the
properties of φ. [Hint: Use of Problem 2, Part II is possible, but not strictly necessary.]
vii) Let u < v be another zero of φ in the interval (t1 , t2 ). Discuss properties of φ. Conclude.
$$R(\omega, 0)u = \lambda u.$$
Let x be the solution of (B.27) such that $x(t_0) = x(0) = u$. As x is a solution of (B.27), we have that, for all t,
$$x(t) = R(t, t_0)u = R(t, 0)u.$$
Since A is ω-periodic, $R(t + \omega, 0) = R(t, 0)R(\omega, 0)$, and therefore
$$x(t + \omega) = R(t, 0)R(\omega, 0)u = \lambda R(t, 0)u = \lambda x(t).$$
We assume that, for all $t \in \mathbb{R}$, $x(t + \omega) = \lambda x(t)$. This is true in particular for $t = 0$, and hence $x(\omega) = \lambda x(0)$. As $x \not\equiv 0$, there exists $v \ne 0$ such that $x(0) = v$ (if $x(0) = 0$, then $x \equiv 0$ by uniqueness of solutions). Therefore,
$$x(\omega) = \lambda v,$$
and as a consequence,
$$R(\omega, 0)v = x(\omega) = \lambda v,$$
and λ is an eigenvalue of $R(\omega, 0)$. ◦
Solution – Problem 2 – This problem concerns what are called Sturm theory type
results, that is, results dealing with the behavior of second order differential equations.
Solution – Problem 3 – This problem was also about Sturm results. But it also
introduced the notion of Wronskian, which is a very general tool intimately linked to the
notion of resolvent matrix.
1) We let $y_1 = x$, $y_2 = x'$, …, $y_n = x^{(n-1)}$. As a consequence, $y_1' = y_2$, $y_2' = y_3$, …, $y_{n-1}' = y_n$, and $y_n' = a_0(t)y_1 + a_1(t)y_2 + \cdots + a_{n-1}(t)y_n$. Written in matrix form, this is equivalent to $y' = A(t)y$, with $y = (y_1, \dots, y_n)^T$ and
$$A(t) = \begin{pmatrix} 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 1 & \cdots & 0 \\ \vdots & & \ddots & \ddots & \vdots \\ 0 & \cdots & & 0 & 1 \\ a_0(t) & a_1(t) & \cdots & & a_{n-1}(t) \end{pmatrix}. \qquad \text{(B.33)}$$
2) We know that the system is equivalent to y 0 = A(t)y, with A(t) given by (B.33). To
every basis (φ1 , . . . , φn ) of the vector space of solutions of
y 0 = A(t)y (B.34)
there corresponds a basis (ϕ1 , . . . , ϕn ) of (B.30), where ϕi is the first coordinate of the vector
φi for every i. The converse is also true.
We know that a system (φ1 , . . . , φn ) of solutions of (B.34) is a basis if det(φ1 , . . . , φn ) 6= 0,
and it suffices for this that det(φ1 , . . . , φn ) be nonzero at one point.
Since we have det(φ1 , . . . , φn ) = W (ϕ1 , . . . , ϕn ), the result follows.
3) This is a direct application of Liouville's theorem, which states that if $R(t, s)$ is the resolvent of A(t), then
$$\det R(t, s) = \exp\left( \int_s^t \operatorname{tr} A(u)\, du \right).$$
Now, note that det Φ(t) = W (t) and det Φ(s) = W (s). This implies the result.
Note that for a system of solutions of (B.30), W (ϕ) 6= 0 iff ϕ are linearly independent
(i.e., we have the converse implication).
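Liouville's formula is easy to illustrate numerically: integrate $\Phi' = A(t)\Phi$, $\Phi(s) = I$, and compare $\det \Phi(t)$ with $\exp(\int_s^t \operatorname{tr} A(u)\, du)$. The $2\times 2$ matrix below is an arbitrary sample, not one taken from the problem.

```python
import numpy as np

# Illustration of Liouville's formula det R(t, s) = exp(int_s^t tr A(u) du)
# for a sample non-autonomous 2x2 system, integrating Phi' = A(t) Phi by RK4.
def A(t):
    return np.array([[np.sin(t), 1.0], [t, -0.5]])

s, t_end, n = 0.0, 1.0, 2000
h = (t_end - s) / n
Phi, t = np.eye(2), s
for _ in range(n):
    k1 = A(t) @ Phi
    k2 = A(t + h / 2) @ (Phi + h / 2 * k1)
    k3 = A(t + h / 2) @ (Phi + h / 2 * k2)
    k4 = A(t + h) @ (Phi + h * k3)
    Phi = Phi + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    t += h

# tr A(u) = sin(u) - 1/2, so int_0^1 tr A du = (1 - cos 1) - 1/2.
assert np.isclose(np.linalg.det(Phi), np.exp((1 - np.cos(1.0)) - 0.5), atol=1e-6)
```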
4)
5)
6)
7) ◦
In this problem, we will study the solutions of some differential equations, and in particular,
their periodic solutions.
Let T > 0 be a real number, P be the vector space of real valued, continuous and
T -periodic functions defined on R, and let a ∈ P . Define
$$A = \int_0^T a(t)\, dt, \qquad g(t) = \exp\left( \int_0^t a(u)\, du \right),$$
First part
2.a. Describe the set of maximal solutions to (E2) and the intervals of definition of these
solutions.
2.b. Describe the set of maximal solutions to (E2) that are T -periodic, first assuming
A 6= 0, then A = 0.
Second part
In this part, we let H be a real valued C 1 function defined on R2 , and consider the
differential equation
$$x'(t) = a(t)x(t) + H(x(t), t). \qquad \text{(E3)}$$
3. Check that a function x is solution to (E3) if and only if it satisfies the condition
$$x(t) = g(t) \left( x(0) + \int_0^t g(s)^{-1} H(x(s), s)\, ds \right).$$
4. Suppose that H is T-periodic with respect to its second argument, and that $A \ne 0$. Show that, for all functions $x \in P$, the formula
$$(U_H x)(t) = \frac{e^A}{1 - e^A}\, g(t) \int_t^{t+T} g(s)^{-1} H(x(s), s)\, ds,$$
Third part
$$\lambda = \sup_{u \in [-1, 1]} |f'(u)|,$$
$$|x(t)| \le |x(0)|\, e^{(k + \varepsilon\lambda)t}.$$
9. In this question, we suppose that the set of t such that |x(t)| > 1 is non-empty, and
we denote its lower bound by θ. Show that, for all t ∈ [0, θ],
|x(t)| ≤ |x(0)|e(k+ελ)t .
10. Conclude.
N.B. This result expresses the stability and the asymptotic stability of the trivial solution
of (E6).
$$x'(t) = a(t)x(t) \;\Leftrightarrow\; \frac{x'(t)}{x(t)} = a(t) \;\Leftrightarrow\; \ln|x(t)| = \int_0^t a(s)\, ds + C \;\Leftrightarrow\; |x(t)| = \exp\left( \int_0^t a(s)\, ds + C \right) \;\Leftrightarrow\; x(t) = K \exp\left( \int_0^t a(s)\, ds \right),$$
where it was assumed that integration starts at $t_0 = 0$, and where the sign of $|x(t)|$ is absorbed into $K \in \mathbb{R}$. Since $x(0) = K$, the general solution to (E1) is thus
$$x(t) = x(0) \exp\left( \int_0^t a(s)\, ds \right). \qquad \text{(B.35)}$$
2.a. We know that the general solution to the homogeneous equation (E1) associated to
(E2) is given by (B.35). To find the general solution to (E2), we need a particular solution
to (E2), or to use integrating factors or a variation of constants approach. We do the
latter, since we already have the solution (B.35) to (E1). Returning to the solution with
undetermined value for K, we consider the ansatz
$$\varphi(t) = K(t) \exp\left( \int_0^t a(s)\, ds \right) = K(t) g(t),$$
The function φ is solution to (E2) if and only if it satisfies (E2); therefore, φ is solution if
and only if
Note that the remark that $g(t) \ne 0$ is made only for completeness: as it is defined, $g(t) > 0$ for all $t \ge 0$. We conclude that the general solution to (E2) is given by
$$x(t) = \left( \int_0^t \frac{b(s)}{g(s)}\, ds + C \right) \exp\left( \int_0^t a(s)\, ds \right).$$
Since it will be useful to have information in terms of x(0) (as in question 1.), we note that $C = x(0)$. Thus, the solution to (E2) taking the value x(0) at $t = 0$ is given by
$$x(t) = \left( \int_0^t \frac{b(s)}{g(s)}\, ds + x(0) \right) \exp\left( \int_0^t a(s)\, ds \right). \qquad \text{(B.36)}$$
With integrating factors, we would have done as follows: write the equation (E2) as
Maximal solutions are solutions that are the restriction of no other solution.
2.b. Solutions to (E2) are T-periodic if $x(T) = x(0)$; therefore, a T-periodic solution satisfies
$$x(T) = x(0) \;\Leftrightarrow\; \left( \int_0^T \frac{b(s)}{g(s)}\, ds + x(0) \right) \exp\left( \int_0^T a(s)\, ds \right) = x(0) \;\Leftrightarrow\; \int_0^T \frac{b(s)}{g(s)}\, ds + x(0) = x(0)\, e^{-A} \;\Leftrightarrow\; \int_0^T \frac{b(s)}{g(s)}\, ds = \left( e^{-A} - 1 \right) x(0).$$
Second part
$$x'(t) = g'(t) \left( x(0) + \int_0^t g(s)^{-1} H(x(s), s)\, ds \right) + g(t)\, \frac{H(x(t), t)}{g(t)} = a(t) g(t) \left( x(0) + \int_0^t g(s)^{-1} H(x(s), s)\, ds \right) + H(x(t), t) = a(t)x(t) + H(x(t), t),$$
and thus $x(t) = g(t) \left( x(0) + \int_0^t g(s)^{-1} H(x(s), s)\, ds \right)$ is solution to (E3).
Remark that
$$g(t + T) = \exp\left( \int_0^{t+T} a(s)\, ds \right) = \exp\left( \int_0^t a(s)\, ds + \int_t^{t+T} a(s)\, ds \right) = g(t) \exp\left( \int_t^{t+T} a(s)\, ds \right) = e^A g(t),$$
since a(t) is T-periodic. Therefore,
$$(U_H x)(t + T) = \frac{e^A}{1 - e^A}\, e^A g(t) \int_{t+T}^{t+2T} g(s)^{-1} H(x(s), s)\, ds = \frac{e^A}{1 - e^A}\, e^A g(t) \int_t^{t+T} g(s + T)^{-1} H(x(s + T), s + T)\, ds.$$
Now
$$g(s + T) = e^A g(s),$$
so $g(s + T)^{-1} = e^{-A} g(s)^{-1}$, while $H(x(s + T), s + T) = H(x(s), s)$, since H is T-periodic in its second argument and $x \in P$. So, finally,
$$(U_H x)(t + T) = \frac{e^A}{1 - e^A}\, g(t) \int_t^{t+T} g(s)^{-1} H(x(s), s)\, ds = (U_H x)(t).$$
Therefore, $U_H x \in P$ for $x \in P$.
Suppose that $x(t) = (U_H x)(t)$. Then, differentiating,
$$x'(t) = \frac{e^A}{1 - e^A} \left[ g'(t) \int_t^{t+T} \frac{H(x(s), s)}{g(s)}\, ds + g(t) \left( \frac{H(x(t+T), t+T)}{g(t+T)} - \frac{H(x(t), t)}{g(t)} \right) \right].$$
Using $g' = ag$, $g(t+T) = e^A g(t)$ and $H(x(t+T), t+T) = H(x(t), t)$, this becomes
$$x'(t) = a(t)\, \frac{e^A}{1 - e^A}\, g(t) \int_t^{t+T} \frac{H(x(s), s)}{g(s)}\, ds + \frac{e^A}{1 - e^A} \left( e^{-A} - 1 \right) H(x(t), t) = a(t)x(t) + H(x(t), t).$$
5.a. We seek $\varepsilon_0 > 0$ such that for all $\varepsilon \le \varepsilon_0$, $\|x\| \le r \Rightarrow \|U_\varepsilon x\| \le r$. Therefore, we compute $\|U_\varepsilon x\|$, taking $H(x, s) = \varepsilon F(x, s)$.
Note that we keep the absolute value $|1 - e^A|$, since $1 - e^A$ is negative when $A > 0$. Let $\|g^{-1}\| = \sup_{t\in\mathbb{R}} |g^{-1}(t)|$. We then have
$$\| U_\varepsilon x \| \le \varepsilon\, \frac{e^A}{|1 - e^A|}\, \|g\| \|g^{-1}\| \sup_{t\in\mathbb{R}} \int_t^{t+T} |F(x(s), s)|\, ds \le \varepsilon\, \frac{e^A}{|1 - e^A|}\, \|g\| \|g^{-1}\|\, \alpha_r \int_t^{t+T} ds,$$
so
$$\| U_\varepsilon x \| \le \varepsilon\, \frac{e^A}{|1 - e^A|}\, \|g\| \|g^{-1}\|\, \alpha_r\, T.$$
Letting
$$\varepsilon_0 = \left( \frac{e^A}{|1 - e^A|}\, \|g\| \|g^{-1}\|\, \alpha_r\, T \right)^{-1} r,$$
5.b. For the restriction of $U_\varepsilon$ to be a contraction, we must have the inequality obtained above, as well as, for $x, y \in B_r$, $d(U_\varepsilon x, U_\varepsilon y) < d(x, y)$. In terms of the induced norm, this means that $\|U_\varepsilon x - U_\varepsilon y\| < \|x - y\|$. Therefore, letting $x, y \in P$ be such that $\|x\| \le r$ and $\|y\| \le r$, we compute
For $s \in [0, T]$ and $x(s) \in [-r, r]$, we have, picking a $y(s) \in [-r, r]$,
• for ε ≤ ε1 , Uε is a contraction of Br ,
• P is complete (it is closed in C(R, R) ∩ B(R) endowed with the supremum norm) and
Br is closed in P .
Therefore, we conclude that for a given r > 0, for all ε ≤ ε1 , there exists a unique solution
xε of (E4) in Br .
$$\| x_\varepsilon - x_{\varepsilon'} \| = \| U_\varepsilon x_\varepsilon - U_{\varepsilon'} x_{\varepsilon'} \| \le \underbrace{\| U_\varepsilon x_\varepsilon - U_\varepsilon x_{\varepsilon'} \|}_{\le K \| x_\varepsilon - x_{\varepsilon'} \|} + \| U_\varepsilon x_{\varepsilon'} - U_{\varepsilon'} x_{\varepsilon'} \|.$$
But we have
$$\| U_\varepsilon x_{\varepsilon'} - U_{\varepsilon'} x_{\varepsilon'} \|(t) \le |\varepsilon - \varepsilon'|\, \frac{e^A}{|1 - e^A|} \int_t^{t+T} \left| g(t) g(s)^{-1} F(x(s), s) \right| ds \le |\varepsilon - \varepsilon'|\, \underbrace{\frac{e^A}{|1 - e^A|}\, e^{TA}\, T\, \alpha_r}_{= K'}.$$
Thus, we have
$$\| x_\varepsilon - x_{\varepsilon'} \| \le \frac{|\varepsilon - \varepsilon'|\, K'}{1 - K},$$
and therefore $\varepsilon \in \mathbb{R} \mapsto x_\varepsilon \in P$ is continuous; it follows that $\lim_{\varepsilon\to 0} x_\varepsilon = x_0$. But the only periodic solution of (E1) when $A \ne 0$ is the zero solution. Therefore, $x_\varepsilon \to 0$ when $\varepsilon \to 0$.
8.a. Using the formula obtained in 5.a. with $\alpha_r = r^2$, $\beta_r = 2r$, $g(t) = e^{-t}$, $A = -T$, then
8.b. The zero function is clearly a 1-periodic solution of (E5). By uniqueness of solutions,
xε = 0 is the only solution of (E5).
8.c. The vector field $-x + \varepsilon x^2$ is $C^1$, and therefore existence and uniqueness of a maximal solution $\varphi_\alpha$ is a direct consequence of the Cauchy-Lipschitz theorem.
We solve the equation $x' = -x + \varepsilon x^2$ without the constraint of periodicity. There are two constant solutions, $x(t) = 0$ and $x(t) = 1/\varepsilon$. By uniqueness, any other solution never takes the values 0 and $1/\varepsilon$. We have:
$$\frac{x'}{x(1 - \varepsilon x)} = \frac{d}{dt} \ln \frac{x}{1 - \varepsilon x} = -1 \;\Rightarrow\; \frac{x}{1 - \varepsilon x} = \lambda e^{-t} \;\Rightarrow\; x(t) = \frac{\lambda e^{-t}}{1 + \varepsilon\lambda e^{-t}}, \quad \lambda \in \mathbb{R}^*.$$
The condition $x(0) = \alpha$ gives $\lambda = \dfrac{\alpha}{1 - \varepsilon\alpha}$ if $\alpha \ne 0$ and $\alpha \ne 1/\varepsilon$, and as a consequence, letting $\beta = \dfrac{1}{\alpha} - \varepsilon$, we obtain
$$\varphi_\alpha(t) = \frac{1}{\beta e^t + \varepsilon}.$$
• If $\beta \ge 0$ then $\varphi_\alpha$ is defined on $\mathbb{R}$.
• If $\beta < 0$, we let $t_0 = \ln\left( -\dfrac{\varepsilon}{\beta} \right)$.
  – If $\alpha > 0$ then $-\dfrac{\varepsilon}{\beta} > 1$, that is, $t_0 > 0$, and $\varphi_\alpha$ is defined on $(-\infty, t_0)$.
  – If $\alpha < 0$ then $-\dfrac{\varepsilon}{\beta} < 1$, that is, $t_0 < 0$, and $\varphi_\alpha$ is defined on $(t_0, +\infty)$.
β
[Figure: representative solution curves of $x' = -x + \varepsilon x^2$ for the three cases $\alpha > 1/\varepsilon$, $0 < \alpha < 1/\varepsilon$ and $\alpha < 0$.]
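The closed form $\varphi_\alpha(t) = 1/(\beta e^t + \varepsilon)$ can be checked directly against the equation $x' = -x + \varepsilon x^2$ and the initial condition; $\varepsilon = 0.5$ and the three values of α below are sample choices covering the cases just discussed.

```python
import numpy as np

# Check phi_alpha(t) = 1/(beta e^t + eps), beta = 1/alpha - eps, against
# x' = -x + eps x^2, x(0) = alpha.  eps and the alphas are sample values.
eps = 0.5
for alpha in (2.5, 1.0, -0.7):          # beta < 0 (two ways) and beta > 0
    beta = 1.0 / alpha - eps
    phi = lambda t: 1.0 / (beta * np.exp(t) + eps)
    assert abs(phi(0.0) - alpha) < 1e-12           # initial condition
    t, h = 0.1, 1e-6                    # t = 0.1 lies in every domain here
    d = (phi(t + h) - phi(t - h)) / (2 * h)
    assert abs(d - (-phi(t) + eps * phi(t) ** 2)) < 1e-6
```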
Third part
9. $\theta > 0$ since $|x(0)| < 1$, and we have $x(s) \in [-1, 1]$ for all $s \in [0, \theta]$, by definition of θ. Since $f(0) = 0$ and $|f'|$ is bounded by λ on $[-1, 1]$, the mean value inequality gives $|f(u) - f(0)| \le \lambda |u|$ for all $u \in [-1, 1]$, that is, since $f(0) = 0$, $|f(x(s))| \le \lambda |x(s)|$.
Let $t \in [0, \theta]$. From 4.,
$$\left| e^{-kt} x(t) \right| = \left| x(0) + \varepsilon \int_0^t e^{-ks} f(x(s))\, ds \right| \le |x(0)| + \varepsilon\lambda \int_0^t e^{-ks} |x(s)|\, ds,$$
from which $|e^{-kt} x(t)| \le |x(0)| e^{\varepsilon\lambda t}$ (using Gronwall's lemma with $\varphi(t) = e^{-kt}|x(t)|$, $\eta = |x(0)|$, $\zeta = \varepsilon\lambda$), giving the inequality.
10. Since $k + \varepsilon\lambda < 0$, it follows that $|x(0)| e^{(k+\varepsilon\lambda)t} \le |x(0)| < 1$. Letting $E = \{ t > 0 : |x(t)| > 1 \}$, which is assumed non-empty, then $\theta = \inf E > 0$ (by continuity, since $|x(0)| < 1$ there exists $\eta > 0$ such that $|x(t)| < 1$ on $[0, \eta]$, and thus $\theta \ge \eta > 0$). Since $\lim_{t\to\theta^-} |x(t)| \le 1$ and $\lim_{t\to\theta^+} |x(t)| \ge 1$, it follows that $|x(\theta)| = 1$.
On $[0, \theta]$, $|x(t)| \le |x(0)| e^{(k+\varepsilon\lambda)t}$ and, taking the limit, $|x(\theta)| < 1$, which is impossible.
First conclusion: $E = \emptyset$ and, if J is the interval of definition of x, then $\forall t \in J \cap [0, +\infty)$, $|x(t)| \le 1$.
If J admits an upper bound $b \in \mathbb{R}$, then $x'$ is bounded in a neighborhood of b. Thus x admits a limit at b. The same is therefore true for $x'$. We then know that x can be extended beyond b, contradicting the maximality of J.
Final conclusion: $J \cap [0, +\infty) = [0, +\infty)$, x is defined on $[0, +\infty)$, and the proof of question 9. holds for all $t \ge 0$, i.e.,
N.B. This result expresses the stability and the asymptotic stability of the trivial solution
of (E6).
This subject was the Première composition de mathématiques for the contest determining
admission to École Polytechnique in France, for MP (Math-Physics) track students, in 2004.
Students, in their second year of university, have 4 hours to write this première composition.
The original subject comprised another question, originally question 3, which was omitted from this homework sheet. To be complete, this question is included here:
2’.a. If k 6= 0 then A = 2πk 6= 0, and from 2., there exists a unique 2π-periodic solution.
Since the mapping $x \mapsto \hat{x}(n)$ is linear, and from the relation $\widehat{x'}(n) = in\, \hat{x}(n)$, we have
$$\widehat{x'}(n) = k \hat{x}(n) + \hat{b}(n) \;\Rightarrow\; \hat{x}(n) = \frac{\hat{b}(n)}{in - k}.$$
• If $\hat{b}(0) = 0$ then all solutions are 2π-periodic. In this case, solutions satisfy $\hat{x}(n) = \hat{b}(n)/(in)$ for $n \in \mathbb{Z}$ nonzero, and $\hat{x}(0)$ varies with the solution under consideration.