Fundamental Theory
of
Ordinary Differential Equations
Lecture Notes
Julien Arino
Department of Mathematics
University of Manitoba
Fall 2006
Contents

1 General theory of ODEs
  1.1 ODEs, IVPs, solutions
    1.1.1 Ordinary differential equation, initial value problem
    1.1.2 Solutions to an ODE
    1.1.3 Geometric interpretation
  1.2 Existence and uniqueness theorems
    1.2.1 Successive approximations
    1.2.2 Local existence and uniqueness: proof by fixed point
    1.2.3 Local existence and uniqueness: proof by successive approximations
    1.2.4 Local existence (non-Lipschitz case)
    1.2.5 Some examples of existence and uniqueness
  1.3 Continuation of solutions
    1.3.1 Maximal interval of existence
    1.3.2 Maximal and global solutions
  1.4 Continuous dependence on initial data, on parameters
  1.5 Generality of first order systems
  1.6 Generality of autonomous systems
  1.7 Suggested reading, Further problems

2 Linear systems
  2.1 Existence and uniqueness of solutions
  2.2 Linear systems
    2.2.1 The vector space of solutions
    2.2.2 Fundamental matrix solution
    2.2.3 Resolvent matrix
    2.2.4 Wronskian
    2.2.5 Autonomous linear systems
  2.3 Affine systems
    2.3.1 The space of solutions
    2.3.2 Construction of solutions
    2.3.3 Affine systems with constant coefficients
  2.4 Systems with periodic coefficients
    2.4.1 Linear systems: Floquet theory

4 Linearization
  4.1 Some linear stability theory
  4.2 The stable manifold theorem
  4.3 The Hartman-Grobman theorem
  4.4 Example of application
    4.4.1 A chemostat model
    4.4.2 A second example

5 Exponential dichotomy
  5.1 Exponential dichotomy
  5.2 Existence of exponential dichotomy
  5.3 First approximate theory
  5.4 Stability of exponential dichotomy
  5.5 Generality of exponential dichotomy

References

B Problem sheets
  Homework sheet 1, 2003
  Homework sheet 2, 2003
  Homework sheet 3, 2003
  Homework sheet 4, 2003
  Final examination, 2003
  Homework sheet 1, 2006
Introduction
This course deals with the elementary theory of ordinary differential equations. The word "elementary" should not be understood as "simple". The underlying assumption here is that, to understand the more advanced topics in the analysis of nonlinear systems, it is important to have a good understanding of how solutions to differential equations are constructed.

If you are taking this course, you most likely know how to analyze systems of nonlinear ordinary differential equations. You know, for example, that in order for solutions to a system to exist and be unique, the system must have a $C^1$ vector field. What you do not necessarily know is why that is. This is the object of Chapter 1, where we consider the general theory of existence and uniqueness of solutions. We also consider the continuation of solutions as well as continuous dependence on initial data and on parameters.

In Chapter 2, we explore linear systems. We first consider homogeneous linear systems, then linear systems in full generality. Homogeneous linear systems are linked to the theory of nonlinear systems by means of linearization, which we study in Chapter 4, where we show that the behavior of a nonlinear system can be approximated, in the vicinity of a hyperbolic equilibrium point, by a homogeneous linear system. As for autonomous systems, nonautonomous nonlinear systems are linked to a linearized form, this time through exponential dichotomy, which is explained in Chapter 5.
Chapter 1
General theory of ODEs
We begin with the general theory of ordinary differential equations (ODEs). First, we define
ODEs, initial value problems (IVPs) and solutions to ODEs and IVPs in Section 1.1. In
Section 1.2, we discuss existence and uniqueness of solutions to IVPs.
1.1 ODEs, IVPs, solutions
1.1.1 Ordinary differential equation, initial value problem
Definition 1.1.1 (ODE). An nth order ordinary differential equation (ODE) is a functional relationship taking the form
$$F\left(t, x(t), \frac{d}{dt}x(t), \frac{d^2}{dt^2}x(t), \ldots, \frac{d^n}{dt^n}x(t)\right) = 0,$$
that involves an independent variable $t \in I \subset \mathbb{R}$, an unknown function $x(t) \in D \subset \mathbb{R}^n$ of the independent variable, its derivative and derivatives of order up to n. For simplicity, the time dependence of x is often omitted, and we in general write equations as
$$F(t, x, x', x'', \ldots, x^{(n)}) = 0, \tag{1.1}$$
where $x^{(n)}$ denotes the nth order derivative of x. An equation such as (1.1) is said to be in general (or implicit) form.

An equation is said to be in normal (or explicit) form when it is written as
$$x^{(n)} = f(t, x, x', x'', \ldots, x^{(n-1)}).$$
Note that it is not always possible to write a differential equation in normal form, as it can be impossible to solve $F(t, x, \ldots, x^{(n)}) = 0$ in terms of $x^{(n)}$.
Definition 1.1.2 (First-order ODE). In the following, we consider for simplicity the more restrictive case of a first-order ordinary differential equation in normal form
$$x' = f(t, x). \tag{1.2}$$
Note that the theory developed here usually holds for nth order equations; see Section 1.5. The function f is assumed continuous and real valued on a set $\mathcal{U} \subset \mathbb{R} \times \mathbb{R}^n$.
Definition 1.1.3 (Initial value problem). An initial value problem (IVP) for equation (1.2) is given by
$$x' = f(t, x), \quad x(t_0) = x_0, \tag{1.3}$$
where f is continuous and real valued on a set $\mathcal{U} \subset \mathbb{R} \times \mathbb{R}^n$, with $(t_0, x_0) \in \mathcal{U}$.
Remark. The assumption that f be continuous can be relaxed; piecewise continuity suffices. However, this leads in general to much more complicated problems and is beyond the scope of this course. Hence, unless otherwise stated, we assume that f is at least continuous. The function f could also be complex valued, but this too is beyond the scope of this course.
Remark. An IVP for an nth order differential equation takes the form
$$x^{(n)} = f(t, x, x', \ldots, x^{(n-1)}), \quad x(t_0) = x_0,\ x'(t_0) = x'_0,\ \ldots,\ x^{(n-1)}(t_0) = x_0^{(n-1)}.$$
We have already seen that the order of an ODE is the order of the highest derivative involved in the equation. An equation is further classified according to its linearity. A linear equation is one in which the vector field f takes the form
$$f(t, x) = a(t)x(t) + b(t).$$
If b(t) = 0 for all t, the equation is linear homogeneous; otherwise it is linear nonhomogeneous. If the vector field f depends only on x, i.e., f(t, x) = f(x) for all t, then the equation is autonomous; otherwise, it is nonautonomous. Thus, a linear equation is autonomous if $a(t) \equiv a$ and $b(t) \equiv b$ for all t. Nonlinear equations are those that are not linear. They, too, can be autonomous or nonautonomous.

Other types of classifications exist for ODEs, which we shall not deal with here, the above being the only ones we will need.
1.1.2 Solutions to an ODE
Definition 1.1.4 (Solution). A function $\phi(t)$ (or $\phi$, for short) is a solution to the ODE (1.2) if it satisfies this equation, that is, if
$$\phi'(t) = f(t, \phi(t)),$$
for all $t \in I \subset \mathbb{R}$, an open interval such that $(t, \phi(t)) \in \mathcal{U}$ for all $t \in I$.

The notations $\phi$ and $x$ are used interchangeably for the solution. However, in this chapter, to emphasize the difference between the equation and its solution, we will try as much as possible to use the notation x for the unknown and $\phi$ for the solution.
A solution of the IVP (1.3) also satisfies the integral equation
$$\phi(t) = x_0 + \int_{t_0}^{t} f(s, \phi(s))\,ds. \tag{1.4}$$
Figure 1.1: the security domain $R((t_0, x_0), a, b) = \{(t, x) : |t - t_0| \le a,\ \|x - x_0\| \le b\}$, a rectangle centered at the initial point $(t_0, x_0)$, extending from $t_0 - a$ to $t_0 + a$ in t and from $x_0 - b$ to $x_0 + b$ in x.
Suppose that f is continuous on R, and let $M = \max_R \|f(t, x)\|$, which exists since f is continuous on the compact set R.

In the following, existence of solutions will generally be obtained in relation to the domain R by considering a subset of the time interval $|t - t_0| \le a$ defined by $|t - t_0| \le \alpha$, with
$$\alpha = \begin{cases} a & \text{if } M = 0, \\ \min(a, b/M) & \text{if } M > 0. \end{cases}$$
This choice of $\alpha = \min(a, b/M)$ is natural. We endow f with specific properties (continuity, Lipschitz, etc.) on the domain R. Thus, in order to be able to use the definition of $\phi(t)$ as the solution of $x' = f(t, x)$, we must be working in R. So we require that $|t - t_0| \le a$ and $\|x - x_0\| \le b$. In order to satisfy the first of these conditions, choosing $\alpha \le a$ and working on $|t - t_0| \le \alpha$ implies of course that $|t - t_0| \le a$. The requirement that $\alpha \le b/M$ comes from the following argument. If we assume that $\phi(t)$ is a solution of (1.3) defined on $[t_0, t_0 + \alpha]$, then we have, for $t \in [t_0, t_0 + \alpha]$,
$$\|\phi(t) - x_0\| = \left\|\int_{t_0}^{t} f(s, \phi(s))\,ds\right\| \le \int_{t_0}^{t} \|f(s, \phi(s))\|\,ds \le M\int_{t_0}^{t} ds = M(t - t_0),$$
where the first inequality is a consequence of the definition of the integrals by Riemann sums (Lemma A.2.1 in Appendix A.2). Similarly, we have $\|\phi(t) - x_0\| \le M(t_0 - t)$ for all $t \in [t_0 - \alpha, t_0]$. Thus, for $|t - t_0| \le \alpha$, $\|\phi(t) - x_0\| \le M|t - t_0|$. Suppose now that $\alpha \le b/M$. It follows that $\|\phi - x_0\| \le M|t - t_0| \le M\alpha \le Mb/M = b$. Taking $\alpha = \min(a, b/M)$ then ensures that both $|t - t_0| \le a$ and $\|\phi - x_0\| \le b$ hold simultaneously.
The following two theorems deal with the localization of the solutions to an IVP. They make the previous discussion more precise. Note that for the moment, the existence of a solution is only assumed. First, we establish that the security domain described above performs properly, in the sense that a solution on a smaller time interval stays within the security domain.
Theorem 1.1.6. If $\phi(t)$ is a solution of the IVP (1.3) on an interval $|t - t_0| < \alpha$, then $\|\phi(t) - x_0\| < b$ on $|t - t_0| < \alpha$, i.e., $(t, \phi(t)) \in R((t_0, x_0), \alpha, b)$ for $|t - t_0| < \alpha$.

Proof. Assume that $\phi$ is a solution with $(t, \phi(t)) \not\in R((t_0, x_0), \alpha, b)$. Since $\phi$ is continuous, it follows that there exists $0 < \tau < \alpha$ such that
$$\|\phi(t) - x_0\| < b \text{ for } |t - t_0| < \tau, \quad\text{and}\quad \|\phi(t_0 + \tau) - x_0\| = b \text{ or } \|\phi(t_0 - \tau) - x_0\| = b, \tag{1.5}$$
i.e., the solution escapes the security domain at $t = t_0 \pm \tau$. Since $\tau < \alpha \le a$,
$$(t, \phi(t)) \in R \quad\text{for } |t - t_0| \le \tau.$$
Thus $\|f(t, \phi(t))\| \le M$ for $|t - t_0| \le \tau$. Since $\phi$ is a solution, we have that $\phi'(t) = f(t, \phi(t))$ and $\phi(t_0) = x_0$. Thus
$$\phi(t) = x_0 + \int_{t_0}^{t} f(s, \phi(s))\,ds \quad\text{for } |t - t_0| \le \tau.$$
Hence
$$\|\phi(t) - x_0\| = \left\|\int_{t_0}^{t} f(s, \phi(s))\,ds\right\| \le M|t - t_0| \quad\text{for } |t - t_0| \le \tau.$$
As a consequence,
$$\|\phi(t) - x_0\| \le M\tau < M\alpha \le M\frac{b}{M} = b \quad\text{for } |t - t_0| \le \tau,$$
which contradicts (1.5).
Theorem 1.1.8. A function $\phi$ defined on an open interval $I \ni t_0$ is a solution of the IVP (1.3) on I if and only if: (i) $(t, \phi(t)) \in \mathcal{U}$ for all $t \in I$; (ii) $\phi$ is continuous on I; and (iii) $\phi(t) = x_0 + \int_{t_0}^{t} f(s, \phi(s))\,ds$ for all $t \in I$.

Proof. ($\Rightarrow$) Suppose that $\phi' = f(t, \phi)$ for all $t \in I$ and that $\phi(t_0) = x_0$. Then for all $t \in I$, $(t, \phi(t)) \in \mathcal{U}$ (i). Also, $\phi$ is differentiable and thus continuous on I (ii). Finally,
$$\phi'(s) = f(s, \phi(s)),$$
so integrating both sides from $t_0$ to t,
$$\phi(t) - \phi(t_0) = \int_{t_0}^{t} f(s, \phi(s))\,ds,$$
and thus
$$\phi(t) = x_0 + \int_{t_0}^{t} f(s, \phi(s))\,ds,$$
hence (iii).

($\Leftarrow$) Assume (i), (ii) and (iii). Then $\phi$ is differentiable on I and $\phi'(t) = f(t, \phi(t))$ for all $t \in I$. From (iii), $\phi(t_0) = x_0 + \int_{t_0}^{t_0} f(s, \phi(s))\,ds = x_0$.
Note that Theorem 1.1.8 states that $\phi$ should be continuous, whereas the solution should of course be $C^1$, since its derivative needs to be continuous. However, this is implied by point (iii). In fact, more generally, the following result holds about the regularity of solutions.

Theorem 1.1.9 (Regularity). Let $f : \mathcal{U} \to \mathbb{R}^n$, with $\mathcal{U}$ an open set of $\mathbb{R} \times \mathbb{R}^n$. Suppose that $f \in C^k$. Then all solutions of (1.2) are of class $C^{k+1}$.

Proof. The proof is immediate: a solution $\phi$ is such that $\phi' = f(t, \phi) \in C^k$, and hence $\phi \in C^{k+1}$.
1.1.3 Geometric interpretation
The function f is the vector field of the equation. At every point in (t, x)-space, a solution $\phi$ is tangent to the value of the vector field at that point. A particular consequence of this fact is the following theorem.
Theorem 1.1.10. Let $x' = f(x)$ be a scalar autonomous differential equation. Then the solutions of this equation are monotone.

Proof. The direction field of an autonomous scalar differential equation consists of vectors that are parallel for all t (since f(t, x) = f(x) for all t). Suppose that a solution $\phi$ of $x' = f(x)$ is non-monotone. Then, given an initial point $(t_0, x_0)$, one of the following two cases occurs, as illustrated in Figure 1.3:
i) $f(x_0) \ne 0$ and there exists $t_1$ such that $\phi(t_1) = x_0$;
ii) $f(x_0) = 0$ and there exists $t_1$ such that $\phi(t_1) \ne x_0$.
Figure 1.3: Situations that would lead to a scalar autonomous differential equation having
nonmonotone solutions.
Suppose we are in case i), and assume we are in the case $f(x_0) > 0$. Thus, the solution curve is increasing at $(t_0, x_0)$, i.e., $\phi'(t_0) > 0$. As $\phi$ is continuous, i) implies that there exists $t_2 \in (t_0, t_1)$ such that $\phi(t_2)$ is a maximum, with $\phi$ increasing for $t \in [t_0, t_2)$ and decreasing for $t \in (t_2, t_1]$. It follows that $\phi'(t_1) \le 0$. But since $\phi(t_1) = x_0$ and the equation is autonomous, $\phi'(t_1) = f(\phi(t_1)) = f(x_0) = \phi'(t_0) > 0$, a contradiction.

Now assume that we are in case ii). Then there exists $t_2 \in (t_0, t_1)$ with $\phi(t_2) = x_0$ but such that $\phi'(t_2) \ne 0$. This is a contradiction, since $\phi'(t_2) = f(\phi(t_2)) = f(x_0) = 0$.
Remark. If we have uniqueness of solutions, it follows from this theorem that if $\phi_1$ and $\phi_2$ are two solutions of the scalar autonomous differential equation $x' = f(x)$, then $\phi_1(t_0) < \phi_2(t_0)$ implies that $\phi_1(t) < \phi_2(t)$ for all t.
1.2 Existence and uniqueness theorems
Several approaches can be used to show existence and/or uniqueness of solutions. In Sections 1.2.2 and 1.2.3, we take a direct path: using either a fixed point method (Section 1.2.2)
or an iterative approach (Section 1.2.3), we obtain existence and uniqueness of solutions
under the assumption that the vector field is Lipschitz. In Section 1.2.4, the Lipschitz
assumption is dropped and therefore a different approach must be used, namely that of
approximate solutions, with which only existence can be established.
1.2.1 Successive approximations
Picard's successive approximation method consists in using the integral form (1.4) of the solution to the IVP (1.3) to construct a sequence of approximations of the solution that converges to the solution. The steps followed in constructing this approximating sequence are the following.
Step 1. Start with an initial estimate of the solution, say, the constant function $\phi_0(t) \equiv x_0$, for $|t - t_0| \le h$. Evidently, this function satisfies the initial condition.

Step 2. Use $\phi_0$ in (1.4) to define the second element in the sequence:
$$\phi_1(t) = x_0 + \int_{t_0}^{t} f(s, \phi_0(s))\,ds.$$
...

Step n. Use $\phi_{n-1}$ in (1.4) to define the nth element in the sequence:
$$\phi_n(t) = x_0 + \int_{t_0}^{t} f(s, \phi_{n-1}(s))\,ds.$$
At this stage, there are two major ways to tackle the problem, which use the same idea:
if we can prove that the sequence {n } converges, and that the limit happens to satisfy
the differential equation, then we have the solution to the IVP (1.3). The first method
(Section 1.2.2) uses a fixed point approach. The second method (Section 1.2.3) studies
explicitly the limit.
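Before turning to the proofs, the scheme is worth seeing in action. The following minimal sketch (an added illustration, assuming numpy is available; the grid size and test equation are arbitrary choices) approximates the Picard iterates on a uniform grid, replacing the integral in (1.4) by a cumulative trapezoidal quadrature. For $x' = -x$, $x(0) = 1$, the iterates should approach $e^{-t}$.

```python
import numpy as np

def picard_iterates(f, t0, x0, h, n_iter, n_grid=200):
    """Approximate Picard iterates of x' = f(t, x), x(t0) = x0 on [t0, t0+h]."""
    t = np.linspace(t0, t0 + h, n_grid)
    phi = np.full_like(t, x0, dtype=float)  # phi_0(t) = x0
    for _ in range(n_iter):
        g = f(t, phi)                       # integrand f(s, phi_k(s))
        # cumulative trapezoidal rule for int_{t0}^{t} f(s, phi_k(s)) ds
        integral = np.concatenate(([0.0], np.cumsum((g[1:] + g[:-1]) / 2 * np.diff(t))))
        phi = x0 + integral                 # phi_{k+1} = F(phi_k)
    return t, phi

t, phi = picard_iterates(lambda t, x: -x, t0=0.0, x0=1.0, h=1.0, n_iter=10)
print(np.max(np.abs(phi - np.exp(-t))))     # small, limited by the quadrature error
```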
1.2.2 Local existence and uniqueness: proof by fixed point
Here are two slightly different formulations of the same theorem, which establishes that if the
vector field is continuous and Lipschitz, then the solutions exist and are unique. We prove
the result in the second case. For the definition of a Lipschitz function, see Section A.6 in
the Appendix.
Theorem 1.2.1 (Picard local existence and uniqueness). Assume $f : \mathcal{U} \subset \mathbb{R} \times \mathbb{R}^n \to D \subset \mathbb{R}^n$ is continuous, and that f(t, x) satisfies a Lipschitz condition in $\mathcal{U}$ with respect to x. Then, given any point $(t_0, x_0) \in \mathcal{U}$, there exists a unique solution of (1.3) on some interval containing $t_0$ in its interior.

Theorem 1.2.2 (Picard local existence and uniqueness). Consider the IVP (1.3), and assume f is (piecewise) continuous in t and satisfies the Lipschitz condition
$$\|f(t, x_1) - f(t, x_2)\| \le L\|x_1 - x_2\|$$
for all $x_1, x_2 \in D = \{x : \|x - x_0\| \le b\}$ and all t such that $|t - t_0| \le a$. Then there exists $0 < \delta = \min(a, b/M)$ such that (1.3) has a unique solution in $|t - t_0| \le \delta$.
To set up the proof, we proceed as follows. Define the operator F by
$$F : x \mapsto x_0 + \int_{t_0}^{t} f(s, x(s))\,ds.$$
Note that the function $(Fx)(t)$ is a continuous function of t. Then Picard's successive approximations take the form $\phi_1 = F\phi_0$, $\phi_2 = F\phi_1 = F^2\phi_0$, where $F^2$ represents $F \circ F$. Iterating, the general term is given for $k = 0, \ldots$ by
$$\phi_k = F^k\phi_0.$$
Therefore, finding the limit $\lim_{k\to\infty}\phi_k$ is equivalent to finding the function $\phi$, solution of the fixed point problem
$$x = Fx,$$
with x a continuously differentiable function. Thus, a solution of (1.3) is a fixed point of F, and we aim to use the contraction mapping principle to verify the existence (and uniqueness) of such a fixed point. We follow the proof of [14, pp. 56-58].
Proof. We show the result on the interval $t_0 \le t \le t_0 + \delta$. The proof for the interval $t_0 - \delta \le t \le t_0$ is similar. Let X be the space of continuous functions defined on the interval $[t_0, t_0 + \delta]$, $X = C([t_0, t_0 + \delta])$, that we endow with the sup norm, i.e., for $x \in X$,
$$\|x\|_c = \max_{t \in [t_0, t_0+\delta]} \|x(t)\|.$$
Recall that this norm is the norm of uniform convergence. Let then
$$S = \{x \in X : \|x - x_0\|_c \le b\}.$$
Of course, $S \subset X$. Furthermore, S is closed, and X with the sup norm is a complete metric space. Note that we have transformed the problem into a problem involving the space of continuous functions; hence we are now in an infinite dimensional case. The proof proceeds in 3 steps.

Step 1. We begin by showing that $F : S \to S$. From (1.4),
$$(F\phi)(t) - x_0 = \int_{t_0}^{t} f(s, \phi(s))\,ds = \int_{t_0}^{t} \big[f(s, \phi(s)) - f(s, x_0) + f(s, x_0)\big]\,ds.$$
As f is (piecewise) continuous, it is bounded on $[t_0, t_0 + \delta]$ and there exists $M = \max_{t\in[t_0,t_0+\delta]}\|f(t, x_0)\|$. Thus
$$\|F\phi - x_0\| \le \int_{t_0}^{t} \big(L\|\phi(s) - x_0\| + M\big)\,ds \le (t - t_0)(Lb + M).$$
As $t \in [t_0, t_0 + \delta]$, $(t - t_0) \le \delta$, and thus
$$\|F\phi - x_0\|_c = \max_{[t_0, t_0+\delta]} \|F\phi - x_0\| \le \delta(Lb + M).$$
Choose then $\delta$ such that $\delta \le b/(Lb + M)$, i.e., t sufficiently close to $t_0$. Then we have
$$\|F\phi - x_0\|_c \le b.$$
This implies that for $\phi \in S$, $F\phi \in S$, i.e., $F : S \to S$.

Step 2. We now show that F is a contraction. Let $\phi_1, \phi_2 \in S$; then
$$\|(F\phi_1)(t) - (F\phi_2)(t)\| = \left\|\int_{t_0}^{t} \big[f(s, \phi_1(s)) - f(s, \phi_2(s))\big]\,ds\right\| \le L\int_{t_0}^{t} \|\phi_1(s) - \phi_2(s)\|\,ds,$$
and thus
$$\|F\phi_1 - F\phi_2\|_c \le L\delta\|\phi_1 - \phi_2\|_c \le \rho\|\phi_1 - \phi_2\|_c \quad\text{for } \delta \le \frac{\rho}{L}.$$
Thus, choosing $\rho < 1$ and $\delta \le \rho/L$, F is a contraction. Since, by Step 1, $F : S \to S$, the contraction mapping principle (Theorem A.11) implies that F has a unique fixed point in S, and (1.3) has a unique solution in S.

Step 3. It remains to be shown that any solution in X is in fact in S (since it is on X that we want to show the result). Considering a solution starting at $x_0$ at time $t_0$, the solution leaves S if there exists a $t > t_0$ such that $\|\phi(t) - x_0\| = b$, i.e., the solution crosses the border of D. Let $\tau > t_0$ be the first such t. For all $t_0 \le t \le \tau$,
$$\|\phi(t) - x_0\| \le \int_{t_0}^{t} \big(\|f(s, \phi(s)) - f(s, x_0)\| + \|f(s, x_0)\|\big)\,ds \le \int_{t_0}^{t} \big(L\|\phi(s) - x_0\| + M\big)\,ds \le \int_{t_0}^{t} (Lb + M)\,ds.$$
As a consequence,
$$b = \|\phi(\tau) - x_0\| \le (Lb + M)(\tau - t_0).$$
As $\tau = t_0 + \delta'$ for some $\delta' > 0$, it follows that
$$\delta' \ge \frac{b}{Lb + M},$$
i.e., the solution cannot leave S before time $t_0 + b/(Lb + M) \ge \delta$, so every solution in X over $[t_0, t_0 + \delta]$ is in fact in S.
1.2.3 Local existence and uniqueness: proof by successive approximations
Using the method of successive approximations, we can prove the following theorem.
Theorem 1.2.4. Suppose that f is continuous on a domain R of the (t, x)-plane defined, for a, b > 0, by $R = \{(t, x) : |t - t_0| \le a, \|x - x_0\| \le b\}$, and that f is locally Lipschitz in x on R. Let then, as previously defined,
$$M = \sup_{(t,x)\in R} \|f(t, x)\| < \infty \quad\text{and}\quad \alpha = \min\left(a, \frac{b}{M}\right).$$
Then the successive approximations of Section 1.2.1 are well defined on $|t - t_0| \le \alpha$ and converge there, uniformly, to the unique solution of (1.3).

Proof. First, the approximations are well defined: for $i \ge 1$ and $|t - t_0| \le \alpha$,
$$\|\phi_1(t) - \phi_0\| = \left\|\int_{t_0}^{t} f(s, \phi_0(s))\,ds\right\| \le M|t - t_0| \le M\alpha \le b$$
from the definitions of M and $\alpha$, and thus $\|\phi_1 - \phi_0\| \le b$. So $\int_{t_0}^{t} f(s, \phi_1(s))\,ds$ is defined for $|t - t_0| \le \alpha$, and, for $|t - t_0| \le \alpha$,
$$\|\phi_2(t) - \phi_0\| = \left\|\int_{t_0}^{t} f(s, \phi_1(s))\,ds\right\| \le \int_{t_0}^{t} \|f(s, \phi_1(s))\|\,ds \le M\alpha \le b.$$
All subsequent terms in the sequence can be similarly defined, and, by induction, for $|t - t_0| \le \alpha$,
$$\|\phi_k(t) - \phi_0\| \le M\alpha \le b, \quad k = 1, \ldots, n.$$
Now, for $|t - t_0| \le \alpha$,
$$\|\phi_{k+1}(t) - \phi_k(t)\| = \left\|x_0 + \int_{t_0}^{t} f(s, \phi_k(s))\,ds - x_0 - \int_{t_0}^{t} f(s, \phi_{k-1}(s))\,ds\right\| = \left\|\int_{t_0}^{t} \big[f(s, \phi_k(s)) - f(s, \phi_{k-1}(s))\big]\,ds\right\| \le L\int_{t_0}^{t} \|\phi_k(s) - \phi_{k-1}(s)\|\,ds.$$
We show by induction that
$$\|\phi_{k+1}(t) - \phi_k(t)\| \le b\frac{(L|t - t_0|)^k}{k!} \quad\text{for } |t - t_0| \le \alpha. \tag{1.6}$$
Indeed, (1.6) holds for k = 1, as previously established. Assume that (1.6) holds for k = n. Then
$$\|\phi_{n+2} - \phi_{n+1}\| = \left\|\int_{t_0}^{t} \big[f(s, \phi_{n+1}(s)) - f(s, \phi_n(s))\big]\,ds\right\| \le \frac{L^{n+1}b}{n!}\int_{t_0}^{t} |s - t_0|^n\,ds = \frac{L^{n+1}b}{n!}\left[\frac{|s - t_0|^{n+1}}{n+1}\right]_{s=t_0}^{s=t} = b\frac{(L|t - t_0|)^{n+1}}{(n+1)!},$$
and thus (1.6) holds for all $k = 1, \ldots$.
Thus, for N > n we have
$$\|\phi_N(t) - \phi_n(t)\| \le \sum_{k=n}^{N-1} \|\phi_{k+1}(t) - \phi_k(t)\| \le b\sum_{k=n}^{N-1} \frac{(L|t - t_0|)^k}{k!} \le b\sum_{k=n}^{N-1} \frac{(L\alpha)^k}{k!}.$$
The rightmost term in this expression tends to zero as $n \to \infty$. Therefore, $\{\phi_k(t)\}$ converges uniformly to a function $\phi(t)$ on the interval $|t - t_0| \le \alpha$. As the convergence is uniform, the limit function $\phi$ is continuous. Moreover, $\phi(t_0) = x_0$. Indeed, $\phi_N(t) = \phi_0(t) + \sum_{k=1}^{N} (\phi_k(t) - \phi_{k-1}(t))$, so $\phi(t) = \phi_0(t) + \sum_{k=1}^{\infty} (\phi_k(t) - \phi_{k-1}(t))$.
The fact that $\phi$ is a solution of (1.3) follows from the following result: if a sequence of functions $\{\phi_k(t)\}$ converges uniformly and the $\phi_k(t)$ are continuous on the interval $|t - t_0| \le \alpha$, then
$$\lim_{n\to\infty} \int_{t_0}^{t} \phi_n(s)\,ds = \int_{t_0}^{t} \lim_{n\to\infty} \phi_n(s)\,ds.$$
Hence,
$$\phi(t) = \lim_{n\to\infty} \phi_n(t) = x_0 + \lim_{n\to\infty} \int_{t_0}^{t} f(s, \phi_{n-1}(s))\,ds = x_0 + \int_{t_0}^{t} \lim_{n\to\infty} f(s, \phi_{n-1}(s))\,ds = x_0 + \int_{t_0}^{t} f(s, \phi(s))\,ds,$$
i.e.,
$$\phi(t) = x_0 + \int_{t_0}^{t} f(s, \phi(s))\,ds.$$
As the integrand $f(t, \phi)$ is a continuous function, $\phi$ is differentiable (with respect to t), and $\phi'(t) = f(t, \phi(t))$, so $\phi$ is a solution to the IVP (1.3).
Uniqueness. Let $\phi$ and $\psi$ be two solutions of (1.3), i.e., for $|t - t_0| \le \alpha$,
$$\phi(t) = x_0 + \int_{t_0}^{t} f(s, \phi(s))\,ds \quad\text{and}\quad \psi(t) = x_0 + \int_{t_0}^{t} f(s, \psi(s))\,ds.$$
Then, for $|t - t_0| \le \alpha$,
$$\|\phi(t) - \psi(t)\| = \left\|\int_{t_0}^{t} \big[f(s, \phi(s)) - f(s, \psi(s))\big]\,ds\right\| \le L\int_{t_0}^{t} \|\phi(s) - \psi(s)\|\,ds. \tag{1.7}$$
We now apply Gronwall's Lemma (Lemma A.7) to this inequality, using K = 0 and $g(t) = \|\phi(t) - \psi(t)\|$. First, applying the lemma for $t_0 \le t \le t_0 + \alpha$, we get $0 \le \|\phi(t) - \psi(t)\| \le 0$, that is,
$$\|\phi(t) - \psi(t)\| = 0,$$
and similarly on $[t_0 - \alpha, t_0]$. Hence $\phi = \psi$, and the solution is unique.

Example. Consider the IVP $x' = -x$, $x(0) = c$. The successive approximations start from $\phi_0(t) \equiv c$, and
$$\phi_1(t) = c + \int_0^t (-\phi_0(s))\,ds = c - ct.$$
To find $\phi_2$, we use $\phi_1$ in (1.4):
$$\phi_2(t) = c + \int_0^t (-(c - cs))\,ds = c - ct + c\frac{t^2}{2}.$$
Iterating, we find
$$\phi_n(t) = c\sum_{i=0}^{n} \frac{(-1)^i t^i}{i!}.$$
This is the power series expansion of $ce^{-t}$, so $\phi_n \to ce^{-t}$ (and the approximation is valid on $\mathbb{R}$), which is the solution of the initial value problem.
Note that the method of successive approximations is a very general method that can be
used in a much more general context; see [8, p. 264-269].
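The iterates in the example above can also be generated symbolically. The following minimal sketch (an added illustration, assuming sympy is available) reproduces the partial sums of $ce^{-t}$ by applying the Picard operator four times.

```python
import sympy as sp

t, s, c = sp.symbols('t s c')

# Picard iteration for x' = -x, x(0) = c: phi_{k+1}(t) = c + int_0^t -phi_k(s) ds
phi = c  # phi_0
for _ in range(4):
    phi = c + sp.integrate(-phi.subs(t, s), (s, 0, t))

print(sp.expand(phi))
# c - c*t + c*t**2/2 - c*t**3/6 + c*t**4/24: the 4th partial sum of c*exp(-t)
```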
1.2.4 Local existence (non-Lipschitz case)
The following theorem is often called Peano's existence theorem. Because the vector field is not assumed to be Lipschitz, something is lost, namely, uniqueness.
Theorem 1.2.5 (Peano). Suppose that f is continuous on some region
$$R = \{(t, x) : |t - t_0| \le a,\ \|x - x_0\| \le b\},$$
with a, b > 0, and let $M = \max_R \|f(t, x)\|$. Then there exists a continuous function $\phi(t)$, differentiable on $|t - t_0| \le \alpha$, such that
i) $\phi(t_0) = x_0$,
ii) $\phi'(t) = f(t, \phi(t))$ for $|t - t_0| \le \alpha$.
For $\varepsilon > 0$, set $\phi_\varepsilon(t) = x_0$ on $[t_0 - \varepsilon, t_0]$, and observe that
$$\phi_\varepsilon(t) = x_0 + \int_{t_0}^{t} f(s, \phi_\varepsilon(s - \varepsilon))\,ds, \quad t_0 \le t \le t_0 + \alpha. \tag{1.8}$$
The function $\phi_\varepsilon$ can indeed be thus defined on $[t_0, t_0 + \alpha]$. To see this, remark first that this formula is meaningful and defines $\phi_\varepsilon(t)$ for $t_0 \le t \le t_0 + \alpha_1$, $\alpha_1 = \min(\alpha, \varepsilon)$, so that $\phi_\varepsilon(t)$ is $C^1$ on $[t_0, t_0 + \alpha_1]$ and, on this interval,
$$\|\phi_\varepsilon(t) - x_0\| \le b. \tag{1.9}$$
It then follows that (1.8) can be used to extend $\phi_\varepsilon(t)$ as a $C^1$ function over $[t_0, t_0 + \alpha_2]$, where $\alpha_2 = \min(\alpha, 2\varepsilon)$, satisfying relation (1.9). Continuing in this fashion, (1.8) serves to define $\phi_\varepsilon(t)$ over $[t_0, t_0 + \alpha]$ so that $\phi_\varepsilon(t)$ is a $C^1$ function on $[t_0, t_0 + \alpha]$, satisfying relation (1.9).

Since $\|\phi_\varepsilon'(t)\| \le M$, M can be used as a Lipschitz constant for $\phi_\varepsilon$, giving uniform continuity of $\phi_\varepsilon$. It follows that the family of functions $\phi_\varepsilon(t)$, $0 < \varepsilon$, is equicontinuous. Thus, using Ascoli's Lemma (Lemma A.5), there exists a sequence $\varepsilon(1) > \varepsilon(2) > \ldots$, such that $\varepsilon(n) \to 0$ as $n \to \infty$ and
$$\phi(t) = \lim_{n\to\infty} \phi_{\varepsilon(n)}(t) \quad\text{exists uniformly}$$
on $[t_0, t_0 + \alpha]$. The continuity of f implies that $f(t, \phi_{\varepsilon(n)}(t - \varepsilon(n)))$ tends uniformly to $f(t, \phi(t))$ as $n \to \infty$; thus term-by-term integration of (1.8) with $\varepsilon = \varepsilon(n)$ gives
$$\phi(t) = x_0 + \int_{t_0}^{t} f(s, \phi(s))\,ds.$$
An alternative proof of Theorem 1.2.5 uses approximate solutions. An $\varepsilon$-approximate solution of (1.3) on an interval is a continuous, piecewise $C^1$ function $\phi$ with $\phi(t_0) = x_0$, $(t, \phi(t)) \in R$, and $\|\phi'(t) - f(t, \phi(t))\| \le \varepsilon$ at every point where $\phi'$ exists.

Theorem 1.2.10. Suppose f is continuous on R. Then, for every $\varepsilon > 0$, there exists an $\varepsilon$-approximate solution of (1.3) on $|t - t_0| \le \alpha$.

Proof. Let $\varepsilon > 0$ be given. We construct an $\varepsilon$-approximate solution on the interval $[t_0, t_0 + \alpha]$; the construction works in a similar way for $[t_0 - \alpha, t_0]$. The $\varepsilon$-approximate solution that we construct is a polygonal path starting at $(t_0, x_0)$.

Since f is continuous on R, it is uniformly continuous on R, and therefore for the given value of $\varepsilon$, there exists $\delta_\varepsilon > 0$ such that
$$\|f(t, x) - f(\tilde t, \tilde x)\| \le \varepsilon \tag{1.10}$$
if $(t, x) \in R$, $(\tilde t, \tilde x) \in R$, $|t - \tilde t| \le \delta_\varepsilon$ and $\|x - \tilde x\| \le \delta_\varepsilon$.

Now divide the interval $[t_0, t_0 + \alpha]$ into n parts $t_0 < t_1 < \cdots < t_n = t_0 + \alpha$, in such a way that
$$\max |t_k - t_{k-1}| \le \min\left(\delta_\varepsilon, \frac{\delta_\varepsilon}{M}\right). \tag{1.11}$$
From $(t_0, x_0)$, construct a line segment with slope $f(t_0, x_0)$ intercepting the line $t = t_1$ at $(t_1, x_1)$. From the definition of $\alpha$ and M, it is clear that this line segment lies inside the triangular region T bounded by the line segments with slopes $\pm M$ from $(t_0, x_0)$ to their intercept with the line $t = t_0 + \alpha$, and the line $t = t_0 + \alpha$. In particular, $(t_1, x_1) \in T$.

At the point $(t_1, x_1)$, construct a line segment with slope $f(t_1, x_1)$ until the line $t = t_2$, obtaining the point $(t_2, x_2)$. Continuing similarly, a polygonal path $\phi$ is constructed that meets the line $t = t_0 + \alpha$ in a finite number of steps, and lies entirely in T. The function $\phi$ can be expressed as
$$\phi(t_0) = x_0, \tag{1.12}$$
$$\phi(t) = \phi(t_{k-1}) + f(t_{k-1}, \phi(t_{k-1}))(t - t_{k-1}), \quad t \in [t_{k-1}, t_k],\ k = 1, \ldots, n. \tag{1.13}$$
If $t \in [t_{k-1}, t_k]$, then (1.13) together with (1.11) imply that $\|\phi(t) - \phi(t_{k-1})\| \le \delta_\varepsilon$. But then from (1.12)-(1.13) and (1.10),
$$\|\phi'(t) - f(t, \phi(t))\| = \|f(t_{k-1}, \phi(t_{k-1})) - f(t, \phi(t))\| \le \varepsilon.$$
Therefore, $\phi$ is an $\varepsilon$-approximation.

We can now turn to the proof of Theorem 1.2.5 via approximate solutions.
Proof. Let $\{\varepsilon_n\}$ be a monotone decreasing sequence of positive real numbers with $\varepsilon_n \to 0$ as $n \to \infty$. By Theorem 1.2.10, for each $\varepsilon_n$, there exists an $\varepsilon_n$-approximate solution $\phi_n$ of (1.3) on $|t - t_0| \le \alpha$ such that $\phi_n(t_0) = x_0$. Choose one such solution $\phi_n$ for each $\varepsilon_n$. From (1.13), it follows that
$$\|\phi_n(t) - \phi_n(\tilde t)\| \le M|t - \tilde t|. \tag{1.14}$$
By (1.14), the family $\{\phi_n\}$ is uniformly bounded and equicontinuous, so by Ascoli's Lemma there is a subsequence $\{\phi_{n_k}\}$ converging uniformly on $|t - t_0| \le \alpha$ to a continuous limit $\phi$. Each $\phi_n$ may be written
$$\phi_n(t) = x_0 + \int_{t_0}^{t} \big[f(s, \phi_n(s)) + \Delta_n(s)\big]\,ds, \tag{1.15}$$
where $\Delta_n(t) = \phi_n'(t) - f(t, \phi_n(t))$ at those points where $\phi_n'$ exists, and $\Delta_n(t) = 0$ otherwise. Because $\phi_n$ is an $\varepsilon_n$-approximate solution, $\|\Delta_n(t)\| \le \varepsilon_n$. Since f is uniformly continuous on R, and $\phi_{n_k} \to \phi$ uniformly on $[t_0, t_0 + \alpha]$ as $k \to \infty$, it follows that $f(t, \phi_{n_k}(t)) \to f(t, \phi(t))$ uniformly on $[t_0, t_0 + \alpha]$ as $k \to \infty$.

Replacing n by $n_k$ in (1.15) and letting $k \to \infty$ gives
$$\phi(t) = x_0 + \int_{t_0}^{t} f(s, \phi(s))\,ds. \tag{1.16}$$
Clearly, $\phi(t_0) = x_0$ when evaluated using (1.16), and also $\phi'(t) = f(t, \phi(t))$ since f is continuous. Thus $\phi$ as defined by (1.16) is a solution to (1.3) on $|t - t_0| \le \alpha$.
1.2.5 Some examples of existence and uniqueness
Example. Consider the IVP
$$x' = 3|x|^{2/3}, \quad x(t_0) = x_0. \tag{1.17}$$
Here, Theorem 1.2.5 applies, since $f(t, x) = 3|x|^{2/3}$ is continuous. However, Theorem 1.2.2 does not apply, since f(t, x) is not locally Lipschitz at x = 0 (that is, f is not Lipschitz on any interval containing 0). This means that we have existence of solutions to this IVP, but not uniqueness of the solution.
The fact that f is not Lipschitz on any interval containing 0 is established using the following argument. Suppose that f is Lipschitz on an interval $I = (-\epsilon, \epsilon)$, with $\epsilon > 0$. Then there exists L > 0 such that for all $x_1, x_2 \in I$,
$$\|f(t, x_1) - f(t, x_2)\| \le L|x_1 - x_2|,$$
that is,
$$3\left||x_1|^{2/3} - |x_2|^{2/3}\right| \le L|x_1 - x_2|.$$
Since this has to hold true for all $x_1, x_2 \in I$, it must hold true in particular for $x_2 = 0$. Thus
$$3|x_1|^{2/3} \le L|x_1|.$$
Given an $\epsilon > 0$, it is possible to find N > 0 such that $1/n < \epsilon$ for all $n \ge N$. Let $x_1 = 1/n$. Then for $n \ge N$, if f is Lipschitz there must hold
$$3\left(\frac{1}{n}\right)^{2/3} \le \frac{L}{n},$$
i.e., $L \ge 3n^{1/3}$. But $\lim_{n\to\infty} 3n^{1/3} = \infty$, and so f is not Lipschitz on I.
For a solution x of (1.17), consider the set $E = \{t : x(t) = 0\}$ of times at which x vanishes. The possible cases are:
1) $E = \emptyset$;
2) $E = [a, b]$ (closed, since x is continuous and thus reaches its bounds);
3) $E = (-\infty, b]$;
4) $E = [a, +\infty)$;
5) $E = \mathbb{R}$.
Note that case 2) includes the case of a single intersection point, when a = b, giving $E = \{a\}$. Let us now consider the nature of x in these different situations. Recall that, from Theorem 1.1.10, since (1.17) is defined by a scalar autonomous equation, its solutions are monotone. For simplicity, we consider here the case of monotone increasing solutions; the case of monotone decreasing solutions can be treated in a similar fashion.
1) Here $x(t) \ne 0$ for all t.
2) Here x(t) is < 0 if t < a, = 0 if $t \in [a, b]$, and > 0 if t > b.
3) Here x(t) is = 0 if $t \le b$, and > 0 if t > b.
4) In this case, x(t) is < 0 if t < a, and = 0 if $t \ge a$.
5) Here $x \equiv 0$.
Now, depending on the sign of x, we can integrate the equation. First, if x > 0, then |x| = x and so
$$x' = 3x^{2/3} \iff \frac{1}{3}x^{-2/3}x' = 1 \iff x^{1/3} = t + k_1 \iff x(t) = (t + k_1)^3.$$
Similarly, if x < 0, we find $x(t) = (t + k_2)^3$ (with $t + k_2 < 0$).
1) The case $E = \emptyset$ is impossible, for all initial conditions $(t_0, x_0)$. Indeed, as $x_0 > 0$, we have $x(t) = (t + k_1)^3$. Using the initial condition, we find that $x(t_0) = x_0 = (t_0 + k_1)^3$, i.e., $k_1 = x_0^{1/3} - t_0$, and $x(t) = (t + x_0^{1/3} - t_0)^3$. But this function vanishes at $t = t_0 - x_0^{1/3}$, contradicting $E = \emptyset$.
2) If $E = [a, b]$, then
$$x(t) = \begin{cases} (t + k_2)^3 & \text{if } t < a, \\ 0 & \text{if } t \in [a, b], \\ (t + k_1)^3 & \text{if } t > b. \end{cases}$$
Since $x_0 > 0$, we have to be in the $t > b$ region, so $t_0 > b$, and $(t_0 + k_1)^3 = x_0$, which implies that $k_1 = x_0^{1/3} - t_0$. Since x is continuous,
$$\lim_{t\to b,\ t>b} (t + x_0^{1/3} - t_0)^3 = 0 \quad\text{and}\quad \lim_{t\to a,\ t<a} (t + k_2)^3 = 0,$$
so $b = t_0 - x_0^{1/3}$ and $k_2 = -a$. So finally,
$$x(t) = \begin{cases} (t - a)^3 & \text{if } t < a, \\ 0 & \text{if } t \in [a, t_0 - x_0^{1/3}], \\ (t + x_0^{1/3} - t_0)^3 & \text{if } t > t_0 - x_0^{1/3}. \end{cases}$$
Thus, choosing $a \le t_0 - x_0^{1/3}$, we have solutions of the form shown in Figure 1.4. Indeed, any $a_i$ satisfying this property yields a solution.
3) If $E = (-\infty, b]$, then
$$x(t) = \begin{cases} 0 & \text{if } t \in (-\infty, b], \\ (t + k_1)^3 & \text{if } t > b. \end{cases}$$
Since $x(t_0) = x_0$, $k_1 = x_0^{1/3} - t_0$, and continuity at b forces $b = t_0 - x_0^{1/3}$.
4) The case $E = [a, +\infty)$ is impossible. Indeed, there does not exist a solution through $(t_0, x_0)$ such that x(t) = 0 for all $t \in [a, +\infty)$: since we are in the case of monotone increasing functions, if $x_0 > 0$ then $x(t) \ge x_0$ for all $t \ge t_0$.
5) $E = \mathbb{R}$ is also impossible, for the same reason.

Example. Consider the IVP
$$x' = 2tx^2, \quad x(t_0) = x_0. \tag{1.18}$$
Here, we have existence and uniqueness of the solutions to (1.18). Indeed, $f(t, x) = 2tx^2$ is continuous and locally Lipschitz in x on $\mathbb{R} \times \mathbb{R}$.
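The non-uniqueness in example (1.17) is easy to exhibit numerically. The following minimal sketch (an added illustration, assuming scipy is available) integrates $x' = 3|x|^{2/3}$ from $x(0) = 0$ and compares the output with two exact solutions through the same initial point, $x \equiv 0$ and $x(t) = t^3$: the solver silently returns one member of an infinite family.

```python
import numpy as np
from scipy.integrate import solve_ivp

def f(t, x):
    return 3.0 * np.abs(x) ** (2.0 / 3.0)   # continuous, but not Lipschitz at x = 0

t_eval = np.linspace(0.0, 2.0, 21)
sol = solve_ivp(f, (0.0, 2.0), [0.0], t_eval=t_eval, rtol=1e-9, atol=1e-12)

print(sol.y[0])      # numerically: the zero solution x(t) = 0
print(t_eval ** 3)   # x(t) = t**3 is an equally valid exact solution of the same IVP
```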
1.3 Continuation of solutions
The results we have seen so far deal with the local existence (and uniqueness) of solutions to an IVP, in the sense that solutions are shown to exist in a neighborhood of the initial data. The continuation of solutions consists in studying criteria which allow one to define solutions on possibly larger intervals.

Consider the IVP
$$x' = f(t, x), \quad x(t_0) = x_0, \tag{1.19}$$
with f continuous on a domain $\mathcal{U}$ of the (t, x)-space, and the initial point $(t_0, x_0) \in \mathcal{U}$.
Lemma 1.3.1. Let the function f(t, x) be continuous in an open set $\mathcal{U}$ in (t, x)-space, and assume that a function $\phi(t)$ satisfies the condition $\phi'(t) = f(t, \phi(t))$ and $(t, \phi(t)) \in \mathcal{U}$ in an open interval $I = \{t_1 < t < t_2\}$. Under this assumption, if $\lim_{j\to\infty}(\tau_j, \phi(\tau_j)) = (t_1, \eta) \in \mathcal{U}$ for some sequence $\{\tau_j : j = 1, 2, \ldots\}$ of points in the interval I, then $\lim_{\tau\to t_1}(\tau, \phi(\tau)) = (t_1, \eta)$. Similarly, if $\lim_{j\to\infty}(\tau_j, \phi(\tau_j)) = (t_2, \eta) \in \mathcal{U}$ for some sequence $\{\tau_j\}$ of points in the interval I, then $\lim_{\tau\to t_2}(\tau, \phi(\tau)) = (t_2, \eta)$.

Proof. Let $\mathcal{W}$ be an open neighborhood of $(t_1, \eta)$ whose closure $\bar{\mathcal{W}}$ is contained in $\mathcal{U}$, and let $\|f(t, x)\| \le M$ in $\bar{\mathcal{W}}$ for some positive number M. For every positive integer j and every positive number $\varepsilon$, consider the rectangular region
$$R_j(\varepsilon) = \{(t, x) : |t - \tau_j| \le \varepsilon,\ \|x - \phi(\tau_j)\| \le M\varepsilon\}.$$
For j sufficiently large and $\varepsilon$ sufficiently small, $R_j(\varepsilon) \subset \mathcal{W}$, and, arguing as for Theorem 1.1.6, the solution satisfies $(t, \phi(t)) \in R_j(\varepsilon)$ for $|t - \tau_j| \le \varepsilon$, $t \in I$. Since $\tau_j \to t_1$ and $\phi(\tau_j) \to \eta$, it follows that $(t, \phi(t)) \in \mathcal{W}$ for all $t \in I$ close enough to $t_1$, which proves the claim.
Theorem 1.3.3. Let f(t, x) be continuous in an open set $\mathcal{U}$ in (t, x)-space, and let $\phi(t)$ be a function satisfying the condition $\phi'(t) = f(t, \phi(t))$ and $(t, \phi(t)) \in \mathcal{U}$ in an open interval $I = \{t_1 < t < t_2\}$. If the following two conditions are satisfied:
i) $\phi(t)$ cannot be extended to the left of $t_1$ (or, respectively, to the right of $t_2$),
ii) $\lim_{j\to\infty}(\tau_j, \phi(\tau_j)) = (t_1, \eta)$ (or, respectively, $(t_2, \eta)$) exists for some sequence $\{\tau_j : j = 1, 2, \ldots\}$ of points in the interval I,
then the limit point $(t_1, \eta)$ (or, respectively, $(t_2, \eta)$) must be on the boundary of $\mathcal{U}$.

Figure 1.5: The extension on the interval $\tilde I \supset I$ of the solution $\tilde\phi$ (defined on the interval I).
Proof. Suppose that the hypotheses of the theorem are satisfied, and that $(t_1, \eta) \in \mathcal{U}$ (respectively, $(t_2, \eta) \in \mathcal{U}$). Then, from Lemma 1.3.1, it follows that
$$\lim_{\tau\to t_1}(\tau, \phi(\tau)) = (t_1, \eta)$$
(or, respectively, $\lim_{\tau\to t_2}(\tau, \phi(\tau)) = (t_2, \eta)$). Thus we can apply Theorem 1.2.5 (Peano's Theorem) to the IVP
$$x' = f(t, x), \quad x(t_1) = \eta$$
(or, respectively, $x' = f(t, x)$, $x(t_2) = \eta$). This implies that the solution can be extended to the left of $t_1$ (respectively, to the right of $t_2$), since Theorem 1.2.5 implies existence in a neighborhood of $t_1$. This is a contradiction.
A particularly important consequence of the previous theorem is the following corollary.

Corollary 1.3.4. Assume that f(t, x) is continuous for $t_1 < t < t_2$ and all $x \in \mathbb{R}^n$. Also, assume that there exists a function $\phi(t)$ satisfying the following conditions:
a) $\phi$ and $\phi'$ are continuous in a subinterval $\tilde I$ of the interval $t_1 < t < t_2$,
b) $\phi'(t) = f(t, \phi(t))$ in $\tilde I$.
Then, either
i) $\phi(t)$ can be extended to the entire interval $t_1 < t < t_2$ as a solution of the differential equation $x' = f(t, x)$, or
ii) $\lim_{t\to\tau}\|\phi(t)\| = \infty$ for some $\tau$ in the interval $t_1 < t < t_2$.
1.3.1 Maximal interval of existence
Another way of formulating these results is with the notion of maximal interval of existence. Consider the differential equation
$$x' = f(t, x). \tag{1.20}$$
1.3.2 Maximal and global solutions
Linked to the notion of maximal interval of existence of solutions is the notion of maximal and global solutions.

Definition 1.3.10 (Maximal solution). Let $I_1 \subset \mathbb{R}$ and $I_2 \subset \mathbb{R}$ be two intervals such that $I_1 \subset I_2$. A solution $(\phi, I_1)$ is maximal in $I_2$ if $\phi$ has no extension $(\tilde\phi, \tilde I)$, solution of the equation, with $I_1 \subsetneq \tilde I \subset I_2$.

Definition (Global solution). A solution $(\phi, I_1)$ is global on $I_2$ if it can be extended to a solution defined on the whole of $I_2$.

Every global solution on a given interval I is maximal on that same interval. The converse is false.

Example. Consider the equation $x' = 2tx^2$ on $\mathbb{R}$. If $x \ne 0$, $x'/x^2 = 2t$, which implies that $x(t) = 1/(c - t^2)$, with $c \in \mathbb{R}$. Depending on c, there are several cases.
If c < 0, then $x(t) = 1/(c - t^2)$ is a global solution on $\mathbb{R}$.
If c > 0, the solutions are defined on $(-\infty, -\sqrt{c})$, $(-\sqrt{c}, \sqrt{c})$ and $(\sqrt{c}, \infty)$. These solutions are maximal solutions on $\mathbb{R}$, but are not global solutions.
If c = 0, then the maximal non-global solutions on $\mathbb{R}$ are defined on $(-\infty, 0)$ and $(0, \infty)$.
Another solution is $x \equiv 0$, which is a global solution on $\mathbb{R}$.
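The finite escape time in this example can be observed numerically. The following minimal sketch (an added illustration, assuming scipy is available) integrates $x' = 2tx^2$ with $x(0) = 1$, i.e. $c = 1$, whose maximal solution $x(t) = 1/(1 - t^2)$ exists only on $(-1, 1)$; the integrator stalls at the right end of the maximal interval.

```python
import numpy as np
from scipy.integrate import solve_ivp

# x' = 2 t x^2, x(0) = 1 has exact solution x(t) = 1/(1 - t^2): blow-up at t = 1.
sol = solve_ivp(lambda t, x: 2.0 * t * x**2, (0.0, 2.0), [1.0],
                dense_output=True, rtol=1e-10, atol=1e-10)

print(sol.status, sol.t[-1])   # integration fails just short of t = 1
for t in (0.5, 0.9, 0.99):
    print(t, sol.sol(t)[0], 1.0 / (1.0 - t**2))   # numeric vs exact
```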
The following theorem extends the uniqueness property to an interval of existence of the solution.

Theorem 1.3.13. Let $\phi_1, \phi_2 : I \to \mathbb{R}^n$ be two solutions of the equation $x' = f(t, x)$, with f locally Lipschitz in x on $\mathcal{U}$. If $\phi_1$ and $\phi_2$ coincide at a point $t_0 \in I$, then $\phi_1 = \phi_2$ on I.

Proof. Under the assumptions of the theorem, $\phi_1(t_0) = \phi_2(t_0)$. Suppose that there exists a $t_1 \ne t_0$ such that $\phi_1(t_1) \ne \phi_2(t_1)$. For simplicity, let us assume that $t_1 > t_0$.

By the local uniqueness of the solution, it follows from $\phi_1(t_0) = \phi_2(t_0)$ that there exists a neighborhood N of $t_0$ such that $\phi_1(t) = \phi_2(t)$ for all $t \in N$. Let
$$E = \{t \in [t_0, t_1] : \phi_1(t) \ne \phi_2(t)\}.$$
Since $t_1 \in E$, $E \ne \emptyset$. Let $\tau = \inf(E)$; we have $\tau \in (t_0, t_1]$, and for all $t \in [t_0, \tau)$, $\phi_1(t) = \phi_2(t)$. By continuity of $\phi_1$ and $\phi_2$, we thus have $\phi_1(\tau) = \phi_2(\tau)$. This implies that there exists a neighborhood W of $\tau$ on which $\phi_1 = \phi_2$. This is a contradiction, since $\phi_1(t) \ne \phi_2(t)$ for some t > $\tau$ arbitrarily close to $\tau$. Hence there exists no such $t_1$, and $\phi_1 = \phi_2$ on I.

Corollary 1.3.14 (Global uniqueness). Let f(t, x) be locally Lipschitz in x on $\mathcal{U}$. Then by any point $(t_0, x_0) \in \mathcal{U}$, there passes a unique maximal solution $\phi : I \to \mathbb{R}^n$. If there exists a global solution on I, then it is unique.
1.4 Continuous dependence on initial data, on parameters
Let $\phi$ be a solution of (1.3). To emphasize the fact that this solution depends on the initial condition $(t_0, x_0)$, we denote it $\phi_{t_0,x_0}$. Let $\lambda$ be a parameter of (1.3). When we study the dependence of $\phi_{t_0,x_0}$ on $\lambda$, we denote the solution $\phi_{t_0,x_0,\lambda}$.

We suppose that $\|f(t, x)\| \le M$ and $|\partial f(t, x)/\partial x_i| \le K$ for $i = 1, \ldots, n$ for $(t, x) \in \mathcal{U}$, with $\mathcal{U} \subset \mathbb{R} \times \mathbb{R}^n$. Note that these conditions are automatically satisfied on a closed bounded region of the form $R = \{(t, x) : |t - t_0| \le a, \|x - x_0\| \le b\}$, where a, b > 0.

Our objective here is to characterize the nature of the dependence of the solution on the initial time $t_0$ and the initial data $x_0$.
Theorem 1.4.1. Suppose that f and $\partial f/\partial x$ are continuous and bounded in a given region $\mathcal{U}$. Let $\phi$ be a solution of (1.3) passing through $(t_0, x_0)$ and $\psi$ be a solution of (1.3) passing through $(\bar t_0, \bar x_0)$. Suppose that $\phi$ and $\psi$ exist on some interval I. Then, for each $\varepsilon > 0$, there exists $\delta > 0$ such that if $|\bar t_0 - t_0| < \delta$ and $\|\bar x_0 - x_0\| < \delta$, then $\|\phi(t) - \psi(\bar t)\| < \varepsilon$ for $t, \bar t \in I$ with $|t - \bar t| < \delta$.
Proof. The proof is from [2, pp. 135-136]. Since $\phi$ is the solution of (1.3) through the point $(t_0, x_0)$, we have, for all $t \in I$,
$$\phi(t) = x_0 + \int_{t_0}^{t} f(s, \phi(s))\,ds. \tag{1.21}$$
As $\psi$ is the solution of (1.3) through the point $(\bar t_0, \bar x_0)$, we have, for all $t \in I$,
$$\psi(t) = \bar x_0 + \int_{\bar t_0}^{t} f(s, \psi(s))\,ds. \tag{1.22}$$
Since
$$\int_{\bar t_0}^{t} f(s, \psi(s))\,ds = \int_{\bar t_0}^{t_0} f(s, \psi(s))\,ds + \int_{t_0}^{t} f(s, \psi(s))\,ds,$$
we have
$$\psi(t) - \phi(t) = \bar x_0 - x_0 + \int_{\bar t_0}^{t_0} f(s, \psi(s))\,ds + \int_{t_0}^{t} \big[f(s, \psi(s)) - f(s, \phi(s))\big]\,ds,$$
and therefore
$$\|\psi(t) - \phi(t)\| \le \|\bar x_0 - x_0\| + \left\|\int_{\bar t_0}^{t_0} f(s, \psi(s))\,ds\right\| + \left\|\int_{t_0}^{t} \big[f(s, \psi(s)) - f(s, \phi(s))\big]\,ds\right\|.$$
Using the boundedness assumptions on f and $\partial f/\partial x$ to evaluate the right-hand side of the latter inequality, we obtain
$$\|\psi(t) - \phi(t)\| \le \|\bar x_0 - x_0\| + M|t_0 - \bar t_0| + K\int_{t_0}^{t} \|\psi(s) - \phi(s)\|\,ds. \tag{1.23}$$
Suppose now that $\|\bar x_0 - x_0\| < \delta$ and $|\bar t_0 - t_0| < \delta$, and let $I \subset (\tau_1, \tau_2)$. Gronwall's Lemma applied to (1.23) gives
$$\|\psi(t) - \phi(t)\| \le \delta(1 + M)e^{K(\tau_2 - \tau_1)}.$$
Moreover, if $|t - \bar t| < \delta$, we have
$$\|\phi(t) - \psi(\bar t)\| \le \|\phi(t) - \psi(t)\| + \|\psi(t) - \psi(\bar t)\| \le \delta(1 + M)e^{K(\tau_2 - \tau_1)} + M\delta.$$
Now, given $\varepsilon > 0$, we need only choose $\delta < \varepsilon/[M + (1 + M)e^{K(\tau_2 - \tau_1)}]$ to obtain the desired inequality, completing the proof.
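Continuous dependence is also easy to observe numerically. The following minimal sketch (an added illustration, assuming scipy is available; the test field $f(t, x) = \sin x$ is an arbitrary choice with $|\partial f/\partial x| \le K = 1$) integrates from two nearby initial conditions and checks that their separation stays below the Gronwall bound $\delta e^{Kt}$.

```python
import numpy as np
from scipy.integrate import solve_ivp

f = lambda t, x: np.sin(x)      # |df/dx| = |cos x| <= K = 1 on all of R
K, delta = 1.0, 1e-6
t_eval = np.linspace(0.0, 5.0, 51)

phi = solve_ivp(f, (0, 5), [1.0], t_eval=t_eval, rtol=1e-10, atol=1e-12).y[0]
psi = solve_ivp(f, (0, 5), [1.0 + delta], t_eval=t_eval, rtol=1e-10, atol=1e-12).y[0]

# Gronwall-type bound: |phi(t) - psi(t)| <= delta * exp(K t)
print(np.all(np.abs(phi - psi) <= delta * np.exp(K * t_eval) * (1 + 1e-6)))  # True
print(np.max(np.abs(phi - psi)), delta * np.exp(K * 5.0))
```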
What we have shown is that the solution passing through the point (t0 , x0 ) is a continuous
function of the triple (t, t0 , x0 ). We now consider the case where the parameters also vary,
comparing solutions to two different but close equations.
Theorem 1.4.2. Let f, g be defined in a domain $\mathcal{U}$ and satisfy the hypotheses of Theorem 1.4.1. Let $\phi$ and $\psi$ be solutions of $x' = f(t, x)$ and $x' = g(t, x)$, respectively, such that $\phi(t_0) = x_0$, $\psi(t_0) = \bar x_0$, existing on a common interval $\alpha < t < \beta$. Suppose that $\|f(t, x) - g(t, x)\| \le \varepsilon$ for $(t, x) \in \mathcal{U}$. Then the solutions $\phi$ and $\psi$ satisfy
$$\|\phi(t) - \psi(t)\| \le \|x_0 - \bar x_0\|e^{K|t - t_0|} + \frac{\varepsilon}{K}\left(e^{K|t - t_0|} - 1\right)$$
for all t, $\alpha < t < \beta$.
The following theorem [6, p. 58] is less restrictive in its hypotheses than the previous one, requiring only uniqueness of the solution of the IVP.

Theorem 1.4.3. Let $\mathcal{U}$ be a domain of (t, x)-space, $I_\lambda$ the domain $|\lambda - \lambda_0| < c$, with c > 0, and $\mathcal{U}_\lambda$ the set of all $(t, x, \lambda)$ satisfying $(t, x) \in \mathcal{U}$, $\lambda \in I_\lambda$. Suppose f is a continuous function on $\mathcal{U}_\lambda$, bounded by a constant M there. For $\lambda = \lambda_0$, let
$$x' = f(t, x, \lambda), \quad x(t_0) = x_0 \tag{1.24}$$
have a unique solution $\phi_0$ on the interval [a, b], where $t_0 \in [a, b]$. Then there exists a $\delta > 0$ such that, for any fixed $\lambda$ such that $|\lambda - \lambda_0| < \delta$, every solution $\phi_\lambda$ of (1.24) exists over [a, b] and, as $\lambda \to \lambda_0$,
$$\phi_\lambda \to \phi_0$$
uniformly over [a, b].
Proof. We begin by considering $t_0 \in (a, b)$. First, choose $\rho > 0$ small enough that the region $R_\rho = \{(t, x) : |t - t_0| \le \rho, \|x - x_0\| \le M\rho\}$ is in $\mathcal{U}$; note that $R_\rho$ is a slight modification of the usual security domain. All solutions $\phi_\lambda$ of (1.24) with $\lambda \in I_\lambda$ exist over $[t_0 - \rho, t_0 + \rho]$ and remain in $R_\rho$. Let $\phi_\lambda$ denote such a solution. Then the set of functions $\{\phi_\lambda\}$, $\lambda \in I_\lambda$, is a uniformly bounded and equicontinuous set on $|t - t_0| \le \rho$. This follows from the integral equation
$$\phi_\lambda(t) = x_0 + \int_{t_0}^{t} f(s, \phi_\lambda(s), \lambda)\,ds. \tag{1.25}$$
1.5 Generality of first order systems
Consider the nth order ordinary differential equation in normal form
$$x^{(n)} = f(t, x, x', \ldots, x^{(n-1)}). \tag{1.27}$$
This equation can be reduced to a system of n first order ordinary differential equations, by proceeding as follows. Let $y_0 = x$, $y_1 = x'$, $y_2 = x''$, ..., $y_{n-1} = x^{(n-1)}$. Then (1.27) is equivalent to
$$y' = F(t, y) \tag{1.28}$$
with $y = (y_0, y_1, \ldots, y_{n-1})^T$ and
$$F(t, y) = \begin{pmatrix} y_1 \\ y_2 \\ \vdots \\ y_{n-1} \\ f(t, y_0, \ldots, y_{n-1}) \end{pmatrix}.$$
As a consequence, all results in this chapter are true for equations of order higher than 1.
Example. Consider the second order IVP
$$x'' = -2x' + 4x - 3, \quad x(0) = 2,\ x'(0) = 1.$$
To transform it into a system of first-order differential equations, we let $y = x'$. Substituting (where possible) y for $x'$ in the equation gives
$$y' = -2y + 4x - 3.$$
The initial condition becomes x(0) = 2, y(0) = 1. So finally, the following IVP is equivalent to the original one:
$$x' = y, \quad y' = 4x - 2y - 3, \quad x(0) = 2,\ y(0) = 1.$$
Note that the linearity of the initial problem is preserved.
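This reduction is exactly how higher-order IVPs are fed to standard numerical solvers, which accept only first order systems. A minimal sketch of the example above (an added illustration, assuming scipy is available):

```python
from scipy.integrate import solve_ivp

# x'' = -2x' + 4x - 3 rewritten as the first order system x' = y, y' = 4x - 2y - 3
def rhs(t, z):
    x, y = z
    return [y, 4*x - 2*y - 3]

sol = solve_ivp(rhs, (0.0, 1.0), [2.0, 1.0], rtol=1e-9, atol=1e-12)  # x(0)=2, x'(0)=1
print(sol.y[0, -1], sol.y[1, -1])   # x(1) and x'(1)
```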
Consider now the nth order linear equation in normal form
$$x^{(n)} = a_{n-1}(t)x^{(n-1)} + \cdots + a_1(t)x' + a_0(t)x + b(t),$$
under the initial conditions
$$x(t_0) = x_0,\ x'(t_0) = x'_0,\ \ldots,\ x^{(n-1)}(t_0) = x_0^{(n-1)},$$
where $x_0, x'_0, \ldots, x_0^{(n-1)}$ are constants. It can be transformed into a system of first order equations by setting $y_0 = x$, $y_1 = x'$, ..., $y_{n-1} = x^{(n-1)}$. The nth order linear equation is then equivalent to the following system of n first order linear equations
$$y_0' = y_1, \quad y_1' = y_2, \quad \ldots, \quad y_{n-2}' = y_{n-1},$$
$$y_{n-1}' = a_{n-1}(t)y_{n-1}(t) + a_{n-2}(t)y_{n-2}(t) + \cdots + a_1(t)y_1(t) + a_0(t)y_0(t) + b(t),$$
under the initial conditions
$$y_0(t_0) = x_0,\ y_1(t_0) = x'_0,\ \ldots,\ y_{n-1}(t_0) = x_0^{(n-1)}.$$
1.6 Generality of autonomous systems
A nonautonomous system
$$x'(t) = f(t, x(t))$$
can be transformed into an autonomous system of equations by setting an auxiliary variable, say y, equal to t, giving
$$x' = f(y, x), \quad y' = 1.$$
However, this transformation does not always make the system any easier to study.
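In code, the same trick simply appends the clock as an extra state variable. A minimal sketch (an added illustration assuming scipy; solve_ivp handles nonautonomous fields directly, so this only mirrors the construction):

```python
import numpy as np
from scipy.integrate import solve_ivp

def autonomous_rhs(_, z):
    x, y = z                        # y plays the role of t
    return [np.cos(y) * x, 1.0]     # x' = f(y, x) with f(t, x) = cos(t) x; y' = 1

sol = solve_ivp(autonomous_rhs, (0.0, 3.0), [1.0, 0.0])  # y(0) = 0, so y(t) = t
print(sol.y[0, -1], np.exp(np.sin(3.0)))   # both approximate x(3) = e^{sin 3}
```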
1.7 Suggested reading, Further problems
Most of these results are treated one way or another in Coddington and Levinson [6] (first edition published in 1955), and the current text, like many others, does little but paraphrase them.

We have not seen here any results specific to complex valued differential equations. As complex numbers are two-dimensional real vectors, the results carry through to the complex case by noting that if, in (1.2), we consider an n-dimensional complex vector, then this is equivalent to a 2n-dimensional real problem. Furthermore, if f(t, x) is analytic in t and x, then analytic solutions can be constructed. See Section I-4 in [12], ..., for example.
Chapter 2
Linear systems
Let I be an interval of $\mathbb{R}$, E a normed vector space over a field $\mathbb{K}$ ($E = \mathbb{K}^n$, with $\mathbb{K} = \mathbb{R}$ or $\mathbb{C}$), and $\mathcal{L}(E)$ the space of continuous linear maps from E to E. Let $\|\cdot\|$ be a norm on E, and $|||\cdot|||$ the induced supremum norm on $\mathcal{L}(E)$ (see Appendix A.1). Consider a map $A : I \to \mathcal{L}(E)$ and a map $B : I \to E$. A linear system of first order equations is defined by
$$x'(t) = A(t)x(t) + B(t), \tag{2.1}$$
where the unknown x is a map on I, taking values in E, defined and differentiable on a subinterval of I. We restrict ourselves to the finite dimensional case ($E = \mathbb{K}^n$). Hence we consider $A \in \mathcal{M}_n(\mathbb{K})$, the n × n matrices over the field $\mathbb{K}$, and $B \in \mathbb{K}^n$. We suppose that A and B have continuous entries. In most of what follows, we assume $\mathbb{K} = \mathbb{R}$.

The name "linear" for system (2.1) is an abuse of language. System (2.1) should be called an affine system, with associated linear system
$$x'(t) = A(t)x(t). \tag{2.2}$$
Another way to distinguish systems (2.1) and (2.2) is to refer to the former as a nonhomogeneous linear system and to the latter as a homogeneous linear system. In order to lighten the language, since other qualificatives will be added to both (2.1) and (2.2), we use in this chapter the names affine system for (2.1) and linear system for (2.2). The exception to this naming convention is that we refer to (2.1) as a linear system when we consider the generic properties of (2.1), with (2.2) as a particular case, as in this chapter's title or in the next section, for example.
2.1 Existence and uniqueness of solutions
Theorem 2.1.1. Let A and B be defined and continuous on $I \ni t_0$. Then, for all $x_0 \in E$, there exists a unique solution $\phi_t(x_0)$ of (2.1) through $(t_0, x_0)$, defined on the entire interval I.
Proof. Let $k(t) = |||A(t)||| = \sup_{\|x\|\le 1}\|A(t)x\|$. Then for all $t \in I$ and all $x_1, x_2 \in \mathbb{K}^n$,
$$\|f(t, x_1) - f(t, x_2)\| = \|A(t)(x_1 - x_2)\| \le |||A(t)|||\,\|x_1 - x_2\| = k(t)\|x_1 - x_2\|,$$
where the inequality
$$\|A(t)(x_1 - x_2)\| \le |||A(t)|||\,\|x_1 - x_2\|$$
results from the nature of the induced norm $|||\cdot|||$ (see Appendix A.1). Furthermore, k is continuous on I. Therefore the conditions of Theorem 1.2.2 hold, leading to existence and uniqueness on the interval I.
With linear systems, it is possible to extend solutions easily, as is shown by the next theorem.

Theorem 2.1.2. Suppose that the entries of A(t) and the entries of B(t) are continuous on an open interval I. Then every solution of (2.1) which is defined on a subinterval J of the interval I can be extended uniquely to the entire interval I as a solution of (2.1).

Proof. Suppose that $I = (t_1, t_2)$, and that a solution $\phi$ of (2.1) is defined on $J = (\tau_1, \tau_2)$, with $J \subsetneq I$. Then, for $t_0 \in J$,
$$\|\phi(t)\| \le \|\phi(t_0)\| + \left\|\int_{t_0}^{t} \big[A(s)\phi(s) + B(s)\big]\,ds\right\| \le K + L\int_{t_0}^{t} \|\phi(s)\|\,ds,$$
where K bounds $\|\phi(t_0)\| + \int_{\tau_1}^{\tau_2}\|B(s)\|\,ds$ and L bounds $|||A(t)|||$ on $[\tau_1, \tau_2]$. Thus, using Gronwall's Lemma (Lemma A.7), the following estimate holds in J:
$$\|\phi(t)\| \le Ke^{L|t - t_0|} \le Ke^{L(\tau_2 - \tau_1)} < \infty.$$
This implies that case ii) in Corollary 1.3.4 is ruled out, leaving only the possibility for $\phi$ to be extendable over I, since the vector field in (2.1) is Lipschitz.
2.2 Linear systems
We begin our study of linear systems of ordinary differential equations by considering homogeneous systems of the form (2.2) (linear systems), with $x \in \mathbb{R}^n$ and $A \in \mathcal{M}_n(\mathbb{R})$, the set of square matrices over the field $\mathbb{R}$, A having continuous entries on an interval I.
2.2.1 The vector space of solutions
Theorem 2.2.1 (Superposition principle). Let $S^0$ be the set of solutions of (2.2) that are defined on some interval $I \subset \mathbb{R}$. Let $\phi_1, \phi_2 \in S^0$, and $\lambda_1, \lambda_2 \in \mathbb{R}$. Then $\lambda_1\phi_1 + \lambda_2\phi_2 \in S^0$.

Proof. Let $\phi_1, \phi_2 \in S^0$ be two solutions of (2.2), $\lambda_1, \lambda_2 \in \mathbb{R}$. Then for all $t \in I$,
$$\phi_1' = A(t)\phi_1, \quad \phi_2' = A(t)\phi_2,$$
from which it comes that
$$\frac{d}{dt}(\lambda_1\phi_1 + \lambda_2\phi_2) = A(t)[\lambda_1\phi_1 + \lambda_2\phi_2],$$
implying that $\lambda_1\phi_1 + \lambda_2\phi_2 \in S^0$.

Thus the linear combination of any two solutions of (2.2) is in $S^0$. This is a hint that $S^0$ must be a vector space of dimension n over $\mathbb{K}$. To show this, we need to find a basis of $S^0$. We proceed in the classical manner, with the notable difference from classical linear algebra that the basis here is composed of time-dependent functions.
Definition 2.2.2 (Fundamental set of solutions). A set of n solutions of the linear differential equation (2.2), all defined on the same open interval I, is called a fundamental set of
solutions on I if the solutions are linearly independent functions on I.
Proposition 2.2.3. If A(t) is defined and continuous on the interval I, then the system
(2.2) has a fundamental set of solutions defined on I.
Proof. Let $t_0 \in I$, and let $e_1, \ldots, e_n$ denote the canonical basis of $\mathbb{K}^n$. Then, from Theorem 2.1.1, there exists for each i a unique solution $\phi_i$ such that $\phi_i(t_0) = e_i$, for $i = 1, \ldots, n$. Furthermore, from Theorem 2.1.1, each function $\phi_i$ is defined on the interval I. Assume that $\{\phi_i\}$, $i = 1, \ldots, n$, is linearly dependent. Then there exist $\lambda_i \in \mathbb{K}$, $i = 1, \ldots, n$, not all zero, such that $\sum_{i=1}^{n}\lambda_i\phi_i(t) = 0$ for all t. In particular, this is true for $t = t_0$, and thus $\sum_{i=1}^{n}\lambda_i\phi_i(t_0) = \sum_{i=1}^{n}\lambda_ie_i = 0$, which implies that the canonical basis of $\mathbb{K}^n$ is linearly dependent. Hence a contradiction, and the $\phi_i$ are linearly independent.
Proposition 2.2.4. If $\mathcal{F}$ is a fundamental set of solutions of the linear system (2.2) on the open interval I, then every solution defined on I can be expressed as a linear combination of the elements of $\mathcal{F}$.

Let $t_0 \in I$; we consider the map
$$\theta_{t_0} : S^0 \to \mathbb{K}^n, \quad x \mapsto \theta_{t_0}(x) = x(t_0).$$
Lemma 2.2.5. $\theta_{t_0}$ is a linear isomorphism.

Proof. $\theta_{t_0}$ is bijective. Indeed, let $v \in \mathbb{K}^n$; from Theorem 2.1.1, there exists a unique solution passing through $(t_0, v)$, i.e.,
$$\forall v \in \mathbb{K}^n,\ \exists! x \in S^0,\ x(t_0) = v \iff \theta_{t_0}(x) = v,$$
so $\theta_{t_0}$ is surjective, and injectivity follows from uniqueness of solutions to an ODE. Furthermore, $\theta_{t_0}(\lambda_1x_1 + \lambda_2x_2) = \lambda_1\theta_{t_0}(x_1) + \lambda_2\theta_{t_0}(x_2)$. Therefore $\dim S^0 = \dim \mathbb{K}^n = n$.
2.2.2 Fundamental matrix solution

Definition (Matrix solution). An n × n matrix function $\Phi(t)$ whose columns are solutions of (2.2) is a matrix solution of (2.2); it satisfies $\Phi'(t) = A(t)\Phi(t)$. If the columns form a fundamental set of solutions, $\Phi$ is a fundamental matrix solution; if moreover $\Phi(t_0) = I$, it is the principal fundamental matrix solution at $t_0$.

Theorem (Abel's formula). Let $\Phi$ be a matrix solution of (2.2) on I and $t_0 \in I$. Then
$$\det\Phi(t) = \det\Phi(t_0)\exp\left(\int_{t_0}^{t}\mathrm{tr}\,A(s)\,ds\right). \tag{2.3}$$

Proof. Writing the differential equation $\Phi'(t) = A(t)\Phi(t)$ in terms of the elements $\phi_{ij}$ and $a_{ij}$ of, respectively, $\Phi$ and A,
$$\phi_{ij}'(t) = \sum_{k=1}^{n} a_{ik}(t)\phi_{kj}(t), \tag{2.4}$$
for $i, j = 1, \ldots, n$. Writing
$$\det\Phi = \begin{vmatrix} \phi_{11}(t) & \phi_{12}(t) & \ldots & \phi_{1n}(t) \\ \phi_{21}(t) & \phi_{22}(t) & \ldots & \phi_{2n}(t) \\ \vdots & & & \vdots \\ \phi_{n1}(t) & \phi_{n2}(t) & \ldots & \phi_{nn}(t) \end{vmatrix},$$
we see that
$$(\det\Phi)' = \begin{vmatrix} \phi_{11}' & \phi_{12}' & \ldots & \phi_{1n}' \\ \phi_{21} & \phi_{22} & \ldots & \phi_{2n} \\ \vdots & & & \vdots \\ \phi_{n1} & \phi_{n2} & \ldots & \phi_{nn} \end{vmatrix} + \begin{vmatrix} \phi_{11} & \phi_{12} & \ldots & \phi_{1n} \\ \phi_{21}' & \phi_{22}' & \ldots & \phi_{2n}' \\ \vdots & & & \vdots \\ \phi_{n1} & \phi_{n2} & \ldots & \phi_{nn} \end{vmatrix} + \cdots + \begin{vmatrix} \phi_{11} & \phi_{12} & \ldots & \phi_{1n} \\ \phi_{21} & \phi_{22} & \ldots & \phi_{2n} \\ \vdots & & & \vdots \\ \phi_{n1}' & \phi_{n2}' & \ldots & \phi_{nn}' \end{vmatrix}.$$
Indeed, write $\det\Phi(t) = \Delta(r_1, r_2, \ldots, r_n)$, where $r_i$ is the ith row in $\Phi(t)$. $\Delta$ is a linear function of each of its arguments, if all other rows are held constant, which implies that
$$\frac{d}{dt}\det\Phi(t) = \Delta\left(\frac{d}{dt}r_1, r_2, \ldots, r_n\right) + \Delta\left(r_1, \frac{d}{dt}r_2, \ldots, r_n\right) + \cdots + \Delta\left(r_1, r_2, \ldots, \frac{d}{dt}r_n\right).$$
(To show this, use the definition of the derivative as a limit.) Using (2.4) on the first of the n determinants in $(\det\Phi)'$ gives
$$\begin{vmatrix} \sum_k a_{1k}\phi_{k1} & \sum_k a_{1k}\phi_{k2} & \ldots & \sum_k a_{1k}\phi_{kn} \\ \phi_{21} & \phi_{22} & \ldots & \phi_{2n} \\ \vdots & & & \vdots \\ \phi_{n1} & \phi_{n2} & \ldots & \phi_{nn} \end{vmatrix}.$$
Subtracting $a_{12}$ times the second row, $a_{13}$ times the third row, etc., $a_{1n}$ times the nth row, from the first row, does not change the determinant, and thus this determinant equals
$$\begin{vmatrix} a_{11}\phi_{11} & a_{11}\phi_{12} & \ldots & a_{11}\phi_{1n} \\ \phi_{21} & \phi_{22} & \ldots & \phi_{2n} \\ \vdots & & & \vdots \\ \phi_{n1} & \phi_{n2} & \ldots & \phi_{nn} \end{vmatrix} = a_{11}\det\Phi.$$
Repeating this for each of the terms in $(\det\Phi)'$, we obtain $(\det\Phi)' = (a_{11} + a_{22} + \cdots + a_{nn})\det\Phi$, giving finally $(\det\Phi)' = (\mathrm{tr}A)(\det\Phi)$. Note that this equation takes the form $u' - (\mathrm{tr}A)u = 0$, which implies that
$$u\exp\left(-\int_{t_0}^{t}\mathrm{tr}A(s)\,ds\right) = \text{constant},$$
whence (2.3).

Remark. By (2.3), $\det\Phi(t)$ is either identically zero on I or nowhere zero on I.
Theorem 2.2.8. A matrix solution $\Phi$ of (2.2) on I is a fundamental matrix solution if and only if $\det\Phi(t) \ne 0$ for all $t \in I$.

Proof. Let $\Phi = (\phi_1, \ldots, \phi_n)$ be a fundamental matrix solution. Every solution can be written
$$\sum_{j=1}^{n} c_j\phi_j = \Phi c,$$
and $\Phi(t_0)c = x_0$ has a solution for any choice of $x_0$. Thus $\det\Phi(t_0) \ne 0$, and by the remark above, $\det\Phi(t) \ne 0$ for all $t \in I$.

Conversely, let $\Phi$ be a solution matrix of (2.2), and suppose that $\det\Phi(t) \ne 0$ for $t \in I$. Then the column vectors are linearly independent at every $t \in I$, so $\Phi$ is a fundamental matrix solution.

From the remark above, the condition "$\det\Phi(t) \ne 0$ for all $t \in I$" in Theorem 2.2.8 is equivalent to the condition "there exists $t \in I$ such that $\det\Phi(t) \ne 0$". A frequent candidate for this role is $t_0$.

To conclude on fundamental matrix solutions, remark that there are infinitely many of them for a given linear system. However, since each fundamental matrix solution can provide a basis for the vector space of solutions, it is clear that the fundamental matrices associated to a given problem must be linked. Indeed, we have the following result.
Theorem 2.2.9. Let $\Phi$ be a fundamental matrix solution to (2.2). Let $C \in \mathcal{M}_n(\mathbb{K})$ be a constant nonsingular matrix. Then $\Phi C$ is a fundamental matrix solution to (2.2). Conversely, if $\Psi$ is another fundamental matrix solution to (2.2), then there exists a constant nonsingular $C \in \mathcal{M}_n(\mathbb{K})$ such that $\Psi(t) = \Phi(t)C$ for all $t \in I$.

Proof. Since $\Phi$ is a fundamental matrix solution to (2.2), we have
$$(\Phi C)' = \Phi'C = (A(t)\Phi)C = A(t)(\Phi C),$$
and thus $\Phi C$ is a matrix solution to (2.2). Since $\Phi$ is a fundamental matrix solution to (2.2), Theorem 2.2.8 implies that $\det\Phi \ne 0$. Also, since C is nonsingular, $\det C \ne 0$. Thus, $\det\Phi C = \det\Phi\det C \ne 0$, and by Theorem 2.2.8, $\Phi C$ is a fundamental matrix solution to (2.2).

Conversely, assume that $\Phi$ and $\Psi$ are two fundamental matrix solutions. Since $\Phi\Phi^{-1} = I$, taking the derivative of this expression gives $\Phi'\Phi^{-1} + \Phi(\Phi^{-1})' = 0$, and therefore $(\Phi^{-1})' = -\Phi^{-1}\Phi'\Phi^{-1}$. We now consider the product $\Phi^{-1}\Psi$. There holds
$$(\Phi^{-1}\Psi)' = (\Phi^{-1})'\Psi + \Phi^{-1}\Psi' = -\Phi^{-1}\Phi'\Phi^{-1}\Psi + \Phi^{-1}A(t)\Psi = -\Phi^{-1}A(t)\Phi\Phi^{-1}\Psi + \Phi^{-1}A(t)\Psi = -\Phi^{-1}A(t)\Psi + \Phi^{-1}A(t)\Psi = 0.$$
Hence $\Phi^{-1}\Psi$ is a constant matrix, say $\Phi^{-1}\Psi = C$, i.e., $\Psi = \Phi C$, and C is nonsingular as a product of nonsingular matrices.
2.2.3 Resolvent matrix
If $t \mapsto \Phi(t)$ is a matrix solution of (2.2) on the interval I, then $\Phi'(t) = A(t)\Phi(t)$ on I. Thus, by Proposition 2.2.3, there exists a fundamental matrix solution.

Definition 2.2.10 (Resolvent matrix). Let $t_0 \in I$ and $\Phi(t)$ be a fundamental matrix solution of (2.2) on I. Since the columns of $\Phi$ are linearly independent, it follows that $\Phi(t_0)$ is invertible. The resolvent (or state transition matrix) of (2.2) is then defined as
$$R(t, t_0) = \Phi(t)\Phi(t_0)^{-1}.$$
It is evident that $R(t, t_0)$ is the principal fundamental matrix solution at $t_0$ (since $R(t_0, t_0) = \Phi(t_0)\Phi(t_0)^{-1} = I$). Thus system (2.2) has a principal fundamental matrix solution at each point in I.
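For a constant matrix A the resolvent is simply the matrix exponential, $R(t, t_0) = e^{(t-t_0)A}$, and the definition above gives the same R whatever fundamental matrix is used. A minimal sketch (an added illustration, assuming scipy is available; the matrices are arbitrary choices):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
t0, t = 0.5, 2.0

C = np.array([[1.0, 2.0], [3.0, 4.0]])    # any constant nonsingular C
Phi = lambda s: expm(s * A) @ C           # a (non-principal) fundamental matrix

R = Phi(t) @ np.linalg.inv(Phi(t0))       # R(t, t0) = Phi(t) Phi(t0)^{-1}
print(np.allclose(R, expm((t - t0) * A))) # True: the factor C drops out
```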
Proposition 2.2.11. The resolvent matrix satisfies the Chapman-Kolmogorov identities
1) $R(t, t) = I$,
2) $R(t, s)R(s, u) = R(t, u)$,
3) $R(t, s)^{-1} = R(s, t)$,
as well as the identities
4) $\frac{\partial}{\partial s}R(t, s) = -R(t, s)A(s)$,
5) $\frac{\partial}{\partial t}R(t, s) = A(t)R(t, s)$.

Proof. First, for the Chapman-Kolmogorov identities: 1) is $R(t, t) = \Phi(t)\Phi^{-1}(t) = I$. Also, 2) gives
$$R(t, s)R(s, u) = \Phi(t)\Phi^{-1}(s)\Phi(s)\Phi^{-1}(u) = \Phi(t)\Phi^{-1}(u) = R(t, u).$$
The other equalities are equally easy to establish. Indeed,
$$R(t, s)^{-1} = \left(\Phi(t)\Phi^{-1}(s)\right)^{-1} = \Phi(s)\Phi^{-1}(t) = R(s, t),$$
whence 3). Also,
$$\frac{\partial}{\partial s}R(t, s) = \frac{\partial}{\partial s}\left(\Phi(t)\Phi^{-1}(s)\right) = \Phi(t)\frac{\partial}{\partial s}\Phi^{-1}(s).$$
As $\Phi$ is a fundamental matrix solution, $\Phi$ is differentiable and nonsingular, and differentiating $\Phi\Phi^{-1} = I$ gives
$$\frac{\partial\Phi(s)}{\partial s}\Phi^{-1}(s) + \Phi(s)\frac{\partial\Phi^{-1}(s)}{\partial s} = 0 \iff \frac{\partial\Phi^{-1}(s)}{\partial s} = -\Phi^{-1}(s)\frac{\partial\Phi(s)}{\partial s}\Phi^{-1}(s).$$
Therefore,
$$\frac{\partial}{\partial s}R(t, s) = -\Phi(t)\Phi^{-1}(s)\frac{\partial\Phi(s)}{\partial s}\Phi^{-1}(s) = -R(t, s)\frac{\partial\Phi(s)}{\partial s}\Phi^{-1}(s).$$
Now, since $\Phi(s)$ is a fundamental matrix solution, it follows that $\partial\Phi(s)/\partial s = A(s)\Phi(s)$, and thus
$$\frac{\partial}{\partial s}R(t, s) = -R(t, s)A(s)\Phi(s)\Phi^{-1}(s) = -R(t, s)A(s),$$
whence 4). Identity 5) follows directly from $R(t, s) = \Phi(t)\Phi^{-1}(s)$ and $\Phi'(t) = A(t)\Phi(t)$.
2.2.4 Wronskian
Let $\phi_1, \ldots, \phi_n$ be n solutions of (2.2) with $\phi_i(t_0) = v_i$; their Wronskian is $W(t) = \det(\phi_1(t), \ldots, \phi_n(t))$. By Abel's formula (2.3),
$$W(t) = \exp\left(\int_{t_0}^{t}\mathrm{tr}A(s)\,ds\right)W(t_0) \tag{2.5a}$$
$$= \exp\left(\int_{t_0}^{t}\mathrm{tr}A(s)\,ds\right)\det(v_1, \ldots, v_n). \tag{2.5b}$$
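Formulas (2.3)/(2.5) make a convenient numerical sanity check on a computed fundamental matrix. The following minimal sketch (an added illustration, assuming scipy is available; the matrix A(t) is an arbitrary choice) integrates $\Phi' = A(t)\Phi$, $\Phi(0) = I$, and compares $\det\Phi(T)$ with $\exp\left(\int_0^T \mathrm{tr}A\right)$.

```python
import numpy as np
from scipy.integrate import solve_ivp, quad

A = lambda t: np.array([[0.0, 1.0], [-np.sin(t), -0.1]])   # some continuous A(t)

def rhs(t, phi_flat):
    return (A(t) @ phi_flat.reshape(2, 2)).reshape(-1)     # Phi' = A(t) Phi

T = 4.0
sol = solve_ivp(rhs, (0.0, T), np.eye(2).reshape(-1), rtol=1e-10, atol=1e-12)
det_phi = np.linalg.det(sol.y[:, -1].reshape(2, 2))

trace_integral, _ = quad(lambda t: np.trace(A(t)), 0.0, T)
print(det_phi, np.exp(trace_integral))   # the two agree: Abel's formula
```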
2.2.5 Autonomous linear systems
At this point, we know that solutions to (2.2) take the form $\phi(t) = R(t, t_0)x_0$, but this was obtained formally. We have no indication whatsoever as to the precise form of $R(t, t_0)$. Typically, finding $R(t, t_0)$ can be difficult, if not impossible. There are however cases where the resolvent can be explicitly computed. One such case is that of autonomous linear systems, which take the form
$$x'(t) = Ax(t), \tag{2.6}$$
that is, where $A(t) \equiv A$. Our objective here is to establish the following result.
For a constant matrix A, define the matrix exponential
$$e^{A} = \sum_{n=0}^{\infty}\frac{A^n}{n!},$$
and consider
$$\phi(t) = e^{(t-t_0)A}x_0 = \sum_{n=0}^{\infty}\frac{1}{n!}(t - t_0)^nA^nx_0.$$
Differentiating the series term by term,
$$\phi'(t) = \sum_{n=1}^{\infty}\frac{n}{n!}(t - t_0)^{n-1}A^nx_0 = \sum_{n=0}^{\infty}\frac{n+1}{(n+1)!}(t - t_0)^nA^{n+1}x_0 = \sum_{n=0}^{\infty}\frac{1}{n!}(t - t_0)^nA^{n+1}x_0 = A\sum_{n=0}^{\infty}\frac{1}{n!}(t - t_0)^nA^nx_0 = A\phi(t),$$
so $\phi$ is a solution of (2.6). Since (2.6) is linear, solutions are unique and global, and hence $R(t, t_0) = e^{(t-t_0)A}$.
The problem is now to evaluate the matrix $e^{tA}$. We have seen that in the case where A is diagonalizable, solutions take the form
$$\phi(t) = \left(e^{\lambda_1(t-t_0)}x_{01}, \ldots, e^{\lambda_n(t-t_0)}x_{0n}\right),$$
which implies that, in this case, the matrix $R(t, t_0)$ takes the form
$$R(t, t_0) = \begin{pmatrix} e^{\lambda_1(t-t_0)} & 0 & \cdots & 0 \\ 0 & e^{\lambda_2(t-t_0)} & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & e^{\lambda_n(t-t_0)} \end{pmatrix}.$$
In the general case, we need the notion of generalized eigenvectors.
Definition 2.2.17 (Generalized eigenvectors). Let $\lambda$ be an eigenvalue of the n × n matrix A, with multiplicity $m \le n$. Then, for $k = 1, \ldots, m$, any nonzero solution v of
$$(A - \lambda I)^kv = 0$$
is called a generalized eigenvector of A.

Theorem 2.2.18. Let A be a real n × n matrix with real eigenvalues $\lambda_1, \ldots, \lambda_n$ repeated according to their multiplicity. Then there exists a basis of generalized eigenvectors for $\mathbb{R}^n$. And if $\{v_1, \ldots, v_n\}$ is any basis of generalized eigenvectors for $\mathbb{R}^n$, the matrix $P = [v_1 \cdots v_n]$ is invertible,
$$A = D + N,$$
where
$$P^{-1}DP = \mathrm{diag}(\lambda_j),$$
the matrix $N = A - D$ is nilpotent of order $k \le n$, and D and N commute.
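The point of the decomposition A = D + N with commuting D and N is that $e^{tA} = e^{tD}e^{tN}$, and the series for $e^{tN}$ terminates. A minimal sketch for a 2 × 2 Jordan block (an added illustration, assuming scipy is available for the comparison):

```python
import numpy as np
from scipy.linalg import expm

lam, t = -0.5, 1.3
A = np.array([[lam, 1.0], [0.0, lam]])
D = lam * np.eye(2)                   # diagonalizable part
N = A - D                             # nilpotent part: N @ N = 0, and D, N commute

# e^{tA} = e^{tD} e^{tN} = e^{lam t} (I + t N), since N^2 = 0
etA = np.exp(lam * t) * (np.eye(2) + t * N)
print(np.allclose(etA, expm(t * A)))  # True
```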
2.3 Affine systems
We consider the general (affine) problem (2.1), which we restate here for convenience. Let $x \in \mathbb{R}^n$, $A : I \to \mathcal{L}(E)$ and $B : I \to E$, where $I \subset \mathbb{R}$ and E is a normed vector space; we consider the system
$$x'(t) = A(t)x(t) + B(t). \tag{2.1}$$
2.3.1 The space of solutions
The first problem that we are faced with when considering system (2.1) is that the set of
solutions does not constitute a vector space; in particular, the superposition principle does
not hold. However, we have the following result.
Proposition 2.3.1. Let $x_1, x_2$ be two solutions of (2.1). Then $x_1 - x_2$ is a solution of the associated homogeneous equation (2.2).

Proof. Since $x_1$ and $x_2$ are solutions of (2.1),
$$x_1' = A(t)x_1 + B(t), \quad x_2' = A(t)x_2 + B(t).$$
Therefore
$$\frac{d}{dt}(x_1 - x_2) = A(t)(x_1 - x_2).$$
Theorem 2.3.2. The global solutions of (2.1) that are defined on I form an n-dimensional affine subspace of the vector space of maps from I to $\mathbb{K}^n$.

Theorem 2.3.3. Let V be the vector space over $\mathbb{R}$ of solutions to the linear system $x' = A(t)x$. If $\bar\phi$ is a particular solution of the affine system (2.1), then the set of all solutions of (2.1) is precisely
$$\{\bar\phi + \phi,\ \phi \in V\}.$$
Practical rules:
1. To obtain all solutions of (2.1), all solutions of (2.2) must be added to a particular solution of (2.1).
2. To obtain all solutions of (2.2), it is sufficient to know a basis of $S^0$. Such a basis is called a fundamental system of solutions of (2.2).
2.3.2 Construction of solutions
Theorem 2.3.4. Let $R(t, t_0)$ be the resolvent of the homogeneous equation $x' = A(t)x$ associated to (2.1). Then the solution x of (2.1) through $(t_0, x_0)$ is given by
$$x(t) = R(t, t_0)x_0 + \int_{t_0}^{t} R(t, s)B(s)\,ds. \tag{2.7}$$

Proof. Let $R(t, t_0)$ be the resolvent of $x' = A(t)x$. Any solution of the latter equation is given by
$$x(t) = R(t, t_0)v, \quad v \in \mathbb{R}^n.$$
Let us now seek a particular solution to (2.1) of the form $x(t) = R(t, t_0)v(t)$, i.e., using a variation of constants approach. Taking the derivative of this expression of x, we have
$$x'(t) = \frac{d}{dt}[R(t, t_0)]v(t) + R(t, t_0)v'(t) = A(t)R(t, t_0)v(t) + R(t, t_0)v'(t).$$
For x to satisfy (2.1), we thus need $R(t, t_0)v'(t) = B(t)$, i.e., $v'(t) = R(t_0, t)B(t)$, whence $v(t) = \int_{t_0}^{t} R(t_0, s)B(s)\,ds$ and
$$x(t) = R(t, t_0)\int_{t_0}^{t} R(t_0, s)B(s)\,ds = \int_{t_0}^{t} R(t, t_0)R(t_0, s)B(s)\,ds = \int_{t_0}^{t} R(t, s)B(s)\,ds,$$
using the Chapman-Kolmogorov identity 2). Adding the general solution $R(t, t_0)x_0$ of the homogeneous equation gives (2.7).
2.3.3 Affine systems with constant coefficients
We consider the affine equation (2.1), but with the matrix $A(t) \equiv A$.

Theorem 2.3.5. The general solution to the IVP
$$x'(t) = Ax(t) + B(t), \quad x(t_0) = x_0 \tag{2.8}$$
is given by
$$x(t) = e^{(t-t_0)A}x_0 + \int_{t_0}^{t} e^{(t-s)A}B(s)\,ds. \tag{2.9}$$
Proof. Use Lemma 2.2.15 and the variation of constants formula (2.7).
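Formula (2.9) can be evaluated directly with a matrix exponential and a quadrature, and checked against a direct numerical integration. A minimal sketch (an added illustration, assuming scipy is available; A, B and the horizon are arbitrary choices):

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp, quad_vec

A = np.array([[0.0, 1.0], [-2.0, -1.0]])
B = lambda t: np.array([np.cos(t), 0.0])
x0, t0, T = np.array([1.0, 0.0]), 0.0, 3.0

# x(T) = e^{(T-t0)A} x0 + int_{t0}^{T} e^{(T-s)A} B(s) ds  -- formula (2.9)
integral, _ = quad_vec(lambda s: expm((T - s) * A) @ B(s), t0, T)
x_T = expm((T - t0) * A) @ x0 + integral

sol = solve_ivp(lambda t, x: A @ x + B(t), (t0, T), x0, rtol=1e-10, atol=1e-12)
print(x_T, sol.y[:, -1])   # the two agree to solver tolerance
```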
2.4 Systems with periodic coefficients
2.4.1 Linear systems: Floquet theory
Consider the linear system
$$x'(t) = A(t)x(t), \tag{2.10}$$
where the matrix A(t) is continuous on $\mathbb{R}$ and periodic with period $\omega > 0$, i.e., $A(t + \omega) = A(t)$ for all t.
49
such that eB = V . Let P (t) = (t)eBt , so (t) = P (t)eBt . It is clear that P is continuous
and nonsingular. Also,
P (t + ) = (t + )eB(t+)
= (t)V eB(+t)
= (t)eB eB eBt
= (t)eBt
= P (t),
proving the P is -periodic.
Theorem 2.4.4 (Floquet's theorem, real case). Any fundamental matrix solution Φ of (2.10) takes the form
Φ(t) = P(t)e^{tB}, (2.12)
where P(t) and B are n × n real matrices such that
i) P(t) is invertible, continuous, and periodic of period 2ω in t,
ii) B is a constant matrix such that Φ(ω)² = e^{2ωB}.
Proof. The proof works similarly as in the complex case, except that here, Theorem A.11.1 implies that there exists B ∈ Mₙ(R) such that e^{2ωB} = V². Let P(t) = Φ(t)e^{−tB}, so Φ(t) = P(t)e^{tB}. It is clear that P is continuous and nonsingular. Also,
P(t + 2ω) = Φ(t + 2ω)e^{−(t+2ω)B}
= Φ(t + ω)V e^{−(2ω+t)B}
= Φ(t)V² e^{−(2ω+t)B}
= Φ(t)e^{2ωB} e^{−2ωB} e^{−tB}
= Φ(t)e^{−tB}
= P(t),
proving that P is 2ω-periodic.
See [12, pp. 87–90], [4, pp. 162–179].
Theorem 2.4.5 (Floquet's theorem, [4]). If Φ(t) is a fundamental matrix solution of the ω-periodic system (2.10), then, for all t ∈ R,
Φ(t + ω) = Φ(t)Φ⁻¹(0)Φ(ω).
In addition, for each possibly complex matrix B such that
e^{ωB} = Φ⁻¹(0)Φ(ω),
there is a possibly complex ω-periodic matrix function t ↦ P(t) such that Φ(t) = P(t)e^{tB} for all t ∈ R. Also, there is a real matrix R and a real 2ω-periodic matrix function t ↦ Q(t) such that Φ(t) = Q(t)e^{tR} for all t ∈ R.
Definition 2.4.6 (Floquet normal form). The representation Φ(t) = P(t)e^{tR} is called a Floquet normal form.
In the case where Φ(t) = P(t)e^{tB}, we have dP(t)/dt = A(t)P(t) − P(t)B. Therefore, letting x = P(t)z, we obtain
x′ = (dP(t)/dt) z + P(t)z′ = A(t)P(t)z − P(t)Bz + P(t)z′.
Since x′ = A(t)x = A(t)P(t)z, it follows that P(t)z′ = P(t)Bz, i.e., z′ = Bz: the change of variables z = P⁻¹(t)x reduces the periodic system (2.10) to a constant coefficient linear system.
Definition 2.4.7 (Characteristic multipliers). The eigenvalues ρ₁, . . . , ρₙ of a monodromy matrix of (2.10) are called the characteristic multipliers of equation (2.10).
Definition 2.4.8 (Characteristic exponents). Numbers λ such that e^{λω} is a characteristic multiplier of (2.10) are called the Floquet exponents of (2.10).
Theorem 2.4.9 (Spectral mapping theorem). Let K = R or C. If C ∈ GLₙ(K) is written C = e^B, then the eigenvalues of C coincide with the exponentials of the eigenvalues of B, with the same multiplicity.
Definition 2.4.10 (Characteristic exponents). The eigenvalues λ₁, . . . , λₙ of a matrix B such that e^{2ωB} = Φ(ω)² are called the characteristic exponents of equation (2.10). The numbers ρ₁ = exp(2ωλ₁), . . . , ρₙ = exp(2ωλₙ), eigenvalues of the matrix Φ(ω)², are called the (Floquet) multipliers of (2.10).
Proposition 2.4.11. Suppose that X, Y are fundamental matrices for (2.10) and that X(t + ω) = X(t)V, Y(t + ω) = Y(t)U. Then the monodromy matrices U and V are similar.
Proof. Suppose that X(t + ω) = X(t)V and Y(t + ω) = Y(t)U. By Theorem 2.2.9, since X and Y are fundamental matrices for (2.10), there exists an invertible matrix C such that X(t) = Y(t)C for all t. Thus, in particular,
X(t + ω) = Y(t + ω)C = Y(t)UC = X(t)C⁻¹UC,
since Y(t) = X(t)C⁻¹. It follows that V = C⁻¹UC, so U and V are similar.
From this Proposition, it follows that all monodromy matrices of (2.10) share the same spectrum.
Corollary 2.4.12. All solutions of (2.10) tend to 0 as t → ∞ if and only if |ρⱼ| < 1 for all j (or ℜ(λⱼ) < 0 for all j).
Let p be an eigenvector of Φ(ω)² associated with a multiplier ρ. Then the solution φ(t) = Φ(t)p of (2.10) satisfies the condition φ(t + 2ω) = ρφ(t). This is the origin of the term multiplier.
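As an illustrative sketch (not from the notes), one can approximate a monodromy matrix by integrating the fundamental matrix of (2.10) over one period and then read off the multipliers as its eigenvalues, in the convention of Definition 2.4.7. The Mathieu-type coefficient matrix below is a hypothetical choice:

import numpy as np
from scipy.integrate import solve_ivp

omega = 2 * np.pi  # assumed period of A(t)

def A(t):
    # Hypothetical 2x2 periodic coefficient matrix (Mathieu-type).
    return np.array([[0.0, 1.0],
                     [-(1.0 + 0.3 * np.cos(t)), 0.0]])

def rhs(t, phi_flat):
    # Matrix ODE Phi' = A(t) Phi, flattened for solve_ivp.
    Phi = phi_flat.reshape(2, 2)
    return (A(t) @ Phi).ravel()

# Integrate Phi over one period with Phi(0) = I, so Phi(omega) = V.
sol = solve_ivp(rhs, (0.0, omega), np.eye(2).ravel(),
                rtol=1e-10, atol=1e-12)
V = sol.y[:, -1].reshape(2, 2)

multipliers = np.linalg.eigvals(V)
print("Floquet multipliers:", multipliers)
print("all |rho_j| < 1 (asymptotic stability)?",
      np.all(np.abs(multipliers) < 1))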
2.4.2 Affine systems: the Fredholm alternative
We discuss here an extension of a theorem that was proved implicitly in Exercise 4, Assignment 2. Let us start by stating the result in question. We consider here the system
x′ = A(t)x + b(t), (2.13)
with A and b continuous and ω-periodic.
Theorem 2.4.13. If the homogeneous equation
x′ = A(t)x (2.14)
associated to (2.13) has no nonzero solution of period ω, then (2.13) has, for each forcing function f, a unique ω-periodic solution.
The Fredholm alternative concerns the case where there exists a nonzero ω-periodic solution of (2.14). We give some needed results before going into details. Consider (2.14). Associated to this system is the so-called adjoint system, which is defined by the following differential equation:
y′ = −Aᵀ(t) y. (2.15)
Proposition 2.4.14. The adjoint equation has the following properties.
i) Let R(t, t₀) be the resolvent matrix of (2.14). Then the resolvent matrix of (2.15) is Rᵀ(t₀, t).
ii) There are as many independent periodic solutions of (2.14) as there are of (2.15).
iii) If x is a solution of (2.14) and y is a solution of (2.15), then the scalar product ⟨x(t), y(t)⟩ is constant.
Proof of iii). Differentiating,
d/dt ⟨x(t), y(t)⟩ = ⟨A(t)x(t), y(t)⟩ + ⟨x(t), −Aᵀ(t)y(t)⟩ = 0.
Before we carry on to the actual Fredholm alternative in the context of ordinary differential equations, let us consider the problem in a more general setting. Let H be a Hilbert space. If A ∈ L(H, H), the adjoint operator A* of A is the element of L(H, H) such that, for all u, v ∈ H,
⟨Au, v⟩ = ⟨u, A*v⟩.
Let Img(A) be the image of A and Ker(A*) be the kernel of A*. Then we have H = Img(A) ⊕ Ker(A*).
Theorem 2.4.15 (Fredholm alternative). For the equation Af = g to have a solution, it is necessary and sufficient that g be orthogonal to every element of Ker(A*).
We now use this very general setting to prove the following theorem, in the context of ODEs.
Theorem 2.4.16 (Fredholm alternative for ODEs). Consider (2.13) with A and f continuous and ω-periodic. Suppose that the homogeneous equation (2.14) has p independent solutions of period ω. Then the adjoint equation (2.15) also has p independent solutions of period ω, which we denote y₁, . . . , y_p. Then:
i) if
∫₀^ω ⟨f(s), y_k(s)⟩ ds = 0, k = 1, . . . , p, (2.16)
then (2.13) has ω-periodic solutions, whose initial conditions form a p-dimensional affine subspace of Rⁿ;
ii) otherwise, (2.13) has no ω-periodic solution.
Proof. A solution x of (2.13) with x(0) = x₀ is ω-periodic if and only if x(ω) = x₀. At time ω,
x(ω) = R(ω, 0)x₀ + ∫₀^ω R(ω, s)b(s) ds,
so the periodicity condition is an affine equation for x₀. On the other hand, y_k(0) is the initial condition of an ω-periodic solution y_k if, and only if,
(Rᵀ(0, ω) − I) y_k(0) = 0.
Let C = R(0, ω) − I. We have that Rⁿ = Img(C) ⊕ Ker(Cᵀ). We now use the Fredholm alternative in this context. There exists x₀ such that
Cx₀ = ∫₀^ω R(0, s)b(s) ds.
Indeed, from the Fredholm alternative, setting f = x₀ and g = ∫₀^ω R(0, s)b(s) ds, we have that Cf = g has a solution if, and only if, g is orthogonal to every element of Ker(Cᵀ), i.e., since Rⁿ = Img(C) ⊕ Ker(Cᵀ), if, and only if, g ∈ Img(C).
Now, y₁(0), . . . , y_p(0) is a basis of Ker(Cᵀ). It follows that there exists an ω-periodic solution of (2.13) if, and only if, for all k = 1, . . . , p,
⟨∫₀^ω R(0, s)b(s) ds, y_k(0)⟩ = 0,
i.e.,
∫₀^ω ⟨R(0, s)b(s), y_k(0)⟩ ds = 0,
i.e.,
∫₀^ω ⟨b(s), Rᵀ(0, s)y_k(0)⟩ ds = 0,
i.e.,
∫₀^ω ⟨b(s), y_k(s)⟩ ds = 0.
The set of initial conditions of ω-periodic solutions is of the form v₀ + Ker(Cᵀ), where v₀ is one of these vectors; hence there exist p of them which are independent and are initial conditions of the p independent ω-periodic solutions of (2.13).
Example. Consider the equation
x″ = f(t), (2.18)
with f continuous and ω-periodic.
Let y = x′. Then (2.18) is equivalent to the first order system
\begin{pmatrix} x \\ y \end{pmatrix}′ = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} + \begin{pmatrix} 0 \\ f(t) \end{pmatrix}.
Hence,
Aᵀ = \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix},
and the adjoint equation ψ′ = −Aᵀψ has the periodic solution (0, a)ᵀ. By the Fredholm alternative, (2.18) therefore has an ω-periodic solution if, and only if,
∫₀^ω ⟨(0, f(s))ᵀ, (0, a)ᵀ⟩ ds = a ∫₀^ω f(s) ds = 0,
i.e., if and only if f has zero mean over a period.
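A quick numerical check of this solvability condition (an illustrative sketch, not from the notes): for x″ = f(t) with ω-periodic f, the mismatch of y = x′ over one period equals ∫₀^ω f, so a periodic solution can exist only when that integral vanishes.

import numpy as np
from scipy.integrate import solve_ivp

omega = 2 * np.pi

def period_mismatch(f):
    # System (x, y)' = (y, f(t)); y(omega) - y(0) = int_0^omega f(s) ds,
    # which is exactly the Fredholm condition for (2.18).
    sol = solve_ivp(lambda t, u: [u[1], f(t)], (0, omega), [0.0, 0.0],
                    rtol=1e-10)
    return sol.y[1, -1] - sol.y[1, 0]

print(period_mismatch(np.sin))                 # ~0: condition holds
print(period_mismatch(lambda t: 1 + np.sin(t)))  # ~omega: no periodic solution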
2.5
2.5.1 A variation of constants formula
Theorem (Variation of constants formula). Consider the IVP
x′ = A(t)x + g(x, t), (2.19a)
x(t₀) = x₀, (2.19b)
where g : R × Rⁿ → Rⁿ is a smooth function, and let R(t, t₀) be the resolvent associated to the homogeneous system x′ = A(t)x, with R defined on some interval I ∋ t₀. Then the solution φ of (2.19) is given by
φ(t) = R(t, t₀)x₀ + ∫_{t₀}^{t} R(t, s)g(φ(s), s) ds, (2.20)
on some subinterval of I.
Proof. We proceed using a variation of constants approach. It is known that the general solution to the homogeneous equation x′ = A(t)x associated to (2.19) is given by φ(t) = R(t, t₀)x₀. We seek a solution to (2.19) by assuming that φ(t) = R(t, t₀)v(t). We have
φ′(t) = d/dt [R(t, t₀)] v(t) + R(t, t₀)v′(t) = A(t)R(t, t₀)v(t) + R(t, t₀)v′(t),
from Proposition 2.2.11. For φ to be a solution, it must satisfy the differential equation (2.19), and thus
φ′(t) = A(t)φ(t) + g(t, φ(t))
⟺ A(t)R(t, t₀)v(t) + R(t, t₀)v′(t) = A(t)R(t, t₀)v(t) + g(t, φ(t))
⟺ R(t, t₀)v′(t) = g(t, φ(t))
⟺ v′(t) = R(t, t₀)⁻¹ g(t, φ(t))
⟺ v′(t) = R(t₀, t) g(t, φ(t)).
Integrating,
v(t) = ∫_{t₀}^{t} R(t₀, s) g(s, φ(s)) ds + C,
and substituting back into φ(t) = R(t, t₀)v(t), with the constant C = x₀ fixed by the initial condition, yields (2.20).
Chapter 3
Stability of linear systems
3.1 Fixed points and stability
We consider the autonomous nonlinear system
x′ = f(x). (3.1)
To emphasize the fact that we are dealing with flows, we write x(t, x₀) for the solution to (3.1) at time t satisfying the initial condition x(0) = x₀.
Definition 3.1.1 (Fixed point). A fixed point of (3.1) is a point x* such that f(x*) = 0.
This is evident, as a point such that f(x*) = 0 satisfies (x*)′ = f(x*) = 0, so that the solution is constant when x₀ = x*. Note also that this implies that x(t) ≡ x* is a solution defined on R.
Definition 3.1.2 (Stable equilibrium point). The fixed point x* is (positively) stable if the following two conditions hold:
i) there exists r > 0 such that if ‖x₀ − x*‖ < r, then the solution x(t, x₀) is defined for all t ≥ 0 (this is automatically satisfied for flows);
ii) for any ε > 0, there exists δ > 0 such that ‖x₀ − x*‖ < δ implies ‖x(t, x₀) − x*‖ < ε for all t ≥ 0.
Definition 3.1.3 (Asymptotically stable equilibrium point). If the equilibrium x* is (positively) stable and, additionally, there exists r₀ > 0 such that ‖x₀ − x*‖ < r₀ implies lim_{t→∞} x(t, x₀) = x*, then x* is (positively) asymptotically stable.
3.2 Systems with almost constant coefficients
We consider the system
x′ = Ax + b(t)x, (3.2)
where A is a constant n × n matrix and b is a continuous matrix-valued function.
Theorem 3.2.1. Suppose that all eigenvalues of A have negative real parts, and that b is continuous and such that lim_{t→∞} b(t) = 0. Then 0 is a globally asymptotically stable (g.a.s.) equilibrium of (3.2).
The proof comes from [2, pp. 156–157].
Proof. For any given (t₀, x₀), t₀ > 0, we have, from [2, Th. 2.1, p. 37] about the existence and uniqueness of the solutions to the linear equation x′ = A(t)x + g(t), that the (unique) solution φ_t(x₀) satisfying the initial condition φ_{t₀}(x₀) = x₀ exists for all t ≥ t₀.
By the variation of constants formula, using b(t)x as the inhomogeneous term, we can express the solution by means of the equivalent integral equation, for t₀ ≤ t < ∞,
φ_t(x₀) = e^{(t−t₀)A} x₀ + ∫_{t₀}^{t} e^{(t−s)A} b(s) φ_s(x₀) ds. (3.3)
By the hypothesis on A, e^{(t−t₀)A} is such that, for t₀ ≤ t < ∞, ‖e^{(t−t₀)A}‖ ≤ Ke^{−σ(t−t₀)} for some K > 0, σ > 0, where R(t, t₀) = e^{(t−t₀)A} is the fundamental matrix of the homogeneous part of (3.2).
Since lim_{t→∞} b(t) = 0, given any ε > 0, there exists a number T ≥ t₀ such that ‖b(t)‖ < ε for t ≥ T. We now use the variation of constants formula (3.3) with the point (T, φ_T(x₀)) as initial condition. We have, for T ≤ t < ∞,
φ_t(x₀) = e^{(t−T)A} φ_T(x₀) + ∫_{T}^{t} e^{(t−s)A} b(s) φ_s(x₀) ds.
Thus, using ‖e^{(t−T)A}‖ ≤ Ke^{−σ(t−T)} and ‖b(t)‖ < ε for t ≥ T, we obtain, for T ≤ t < ∞,
‖φ_t(x₀)‖ ≤ Ke^{−σ(t−T)} ‖φ_T(x₀)‖ + Kε ∫_{T}^{t} e^{−σ(t−s)} ‖φ_s(x₀)‖ ds.
Multiplying both sides of this inequality by e^{σt} and using Gronwall's inequality (Appendix A.7) with the function ‖φ_t(x₀)‖e^{σt}, we obtain, for T ≤ t < ∞,
‖φ_t(x₀)‖ ≤ K ‖φ_T(x₀)‖ e^{(Kε−σ)(t−T)}. (3.4)
From this we conclude that if 0 < ε < σ/K, the solution φ_t(x₀) will approach zero exponentially. This does not yet prove that the zero solution of (3.2) is stable. To do this, we compute a bound on ‖φ_T(x₀)‖. Returning to (3.3) and restricting t to the interval t₀ ≤ t ≤ T, we have, with K₁ a bound for ‖b(t)‖ on [t₀, T],
‖φ_t(x₀)‖ ≤ Ke^{−σ(t−t₀)} ‖x₀‖ + K₁K ∫_{t₀}^{t} e^{−σ(t−s)} ‖φ_s(x₀)‖ ds, t₀ ≤ t ≤ T. (3.5)
Multiplying by e^{σt} and applying Gronwall's inequality again,
‖φ_T(x₀)‖ ≤ K ‖x₀‖ e^{K₁K(T−t₀)}, t₀ ≤ T. (3.6)
Thus we can make ‖φ_T(x₀)‖ small by choosing ‖x₀‖ sufficiently small. This together with (3.4) gives the stability. Indeed, substituting (3.6) into (3.4) gives, for T ≤ t < ∞,
‖φ_t(x₀)‖ ≤ K² ‖x₀‖ e^{K₁K(T−t₀)} e^{(Kε−σ)(t−T)}. (3.7)
Let then K₂ = max(Ke^{K₁K(T−t₀)}, K²e^{K₁K(T−t₀)}). From (3.5) and (3.7) we have
‖φ_t(x₀)‖ ≤ K₂‖x₀‖ if t₀ ≤ t ≤ T,
‖φ_t(x₀)‖ ≤ K₂‖x₀‖ e^{(Kε−σ)(t−T)} if T ≤ t < ∞. (3.8)
For a given matrix A, we can compute K and σ; we next pick any 0 < ε < σ/K and then T ≥ t₀ so that ‖b(t)‖ < ε for t ≥ T. We then compute K₁ and K₂. Now, given any η > 0, choose δ < η/K₂. Then from (3.8), if ‖x₀‖ < δ, ‖φ_t(x₀)‖ < η for all t ≥ t₀, so that the zero solution is stable. From (3.8), it is clear that the zero solution is globally asymptotically stable.
Corollary 3.2.2. Let all eigenvalues of A have negative real part, so that ‖e^{At}‖ ≤ Ke^{−σt} for some constants K > 0, σ > 0 and all t ≥ 0. Let b(t) be continuous for 0 ≤ t < ∞ and suppose that there exists T > 0 such that ‖b(t)‖ < σ/K for t ≥ T. Then the zero solution of (3.2) is globally asymptotically stable.
Theorem 3.2.3. Let all eigenvalues of A have negative real part, and let b(t) be continuous for 0 ≤ t < ∞ and such that ∫₀^∞ ‖b(s)‖ ds < ∞. Then the zero solution of (3.2) is globally asymptotically stable.
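For a concrete feel for Theorem 3.2.1, the following sketch (a hypothetical example, using SciPy) integrates a system of the form (3.2) with a decaying perturbation b(t) and watches the solution norm shrink:

import numpy as np
from scipy.integrate import solve_ivp

# x' = (A + b(t))x with the eigenvalues of A in the open left half-plane
# and b(t) -> 0 as t -> infinity.
A = np.array([[-1.0, 2.0],
              [0.0, -2.0]])
b = lambda t: np.array([[0.0, 5.0 / (1.0 + t)],
                        [5.0 / (1.0 + t), 0.0]])

sol = solve_ivp(lambda t, x: (A + b(t)) @ x, (0.0, 50.0), [1.0, -1.0],
                dense_output=True)
for t in (0.0, 10.0, 50.0):
    print(t, np.linalg.norm(sol.sol(t)))  # norms decay to 0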
We give some notions of linear stability theory, in the case of the autonomous linear
system (2.6), repeated here for convenience:
x0 (t) = Ax(t).
(2.6)
Definition 3.2.12 (Homeomorphism). Let X be a metric space and let A and B be subsets of X. A homeomorphism h : A → B of A onto B is a continuous one-to-one map of A onto B such that h⁻¹ : B → A is continuous. The sets A and B are called homeomorphic or topologically equivalent if there is a homeomorphism of A onto B.
Definition 3.2.13 (Differentiable manifold). An n-dimensional differentiable manifold M (or a manifold of class Cᵏ) is a connected metric space with an open covering {U_α} (i.e., M = ∪_α U_α) such that
i) for all α, U_α is homeomorphic to the open unit ball in Rⁿ, B = {x ∈ Rⁿ : |x| < 1}, i.e., for all α there exists a homeomorphism of U_α onto B, h_α : U_α → B;
ii) if U_α ∩ U_β ≠ ∅ and h_α : U_α → B, h_β : U_β → B are homeomorphisms, then h_α(U_α ∩ U_β) and h_β(U_α ∩ U_β) are subsets of Rⁿ and the map
h = h_α ∘ h_β⁻¹ : h_β(U_α ∩ U_β) → h_α(U_α ∩ U_β)
is differentiable (or of class Cᵏ), and for all x ∈ h_β(U_α ∩ U_β), the determinant of the Jacobian satisfies det Dh(x) ≠ 0.
Remark. The manifold is analytic if the maps h = h_α ∘ h_β⁻¹ are analytic.
For simplicity, and without loss of generality since both results are local, we assume henceforth that x* = 0, i.e., that a change of coordinates has been performed translating x* to the origin. We also assume that t₀ = 0.
Chapter 4
Linearization
We consider here the autonomous nonlinear system in Rⁿ
x′ = f(x). (4.1)
The object of this chapter is to show two results which link the behavior of (4.1) near a hyperbolic equilibrium point x* to the behavior of the linearized system
x′ = Df(x*)(x − x*). (4.2)
4.1 Some linear stability theory
We now give some notions of linear stability theory, in the case of the autonomous linear system (2.6), repeated here for convenience:
x′(t) = Ax(t). (2.6)
Definition 4.1.2. The mapping e^{At} : Rⁿ → Rⁿ is called the flow of the linear system (2.6).
The term flow is used since e^{At} describes the motion of points x₀ ∈ Rⁿ along trajectories of (2.6).
Definition 4.1.3. If all eigenvalues of A have nonzero real part, that is, if E^c = {0}, then the flow e^{At} of system (2.6) is called a hyperbolic flow, and the system (2.6) is a hyperbolic linear system.
Definition 4.1.4. A subspace E ⊂ Rⁿ is invariant with respect to the flow e^{At}, or invariant under the flow of (2.6), if e^{At}E ⊂ E for all t ∈ R.
Theorem 4.1.5. Let E_λ be the generalized eigenspace of A associated to the eigenvalue λ. Then AE_λ ⊂ E_λ.
Theorem 4.1.6. Let A ∈ Mₙ(R). Then
Rⁿ = E^s ⊕ E^u ⊕ E^c.
Furthermore, if the matrix A is the matrix of the linear autonomous system (2.6), then E^s, E^u and E^c are invariant under the flow of (2.6).
Definition 4.1.7. If all the eigenvalues of A have negative (resp. positive) real parts, then the origin is a sink (resp. source) for the linear system (2.6).
Theorem 4.1.8. The stable, center and unstable subspaces E^s, E^c and E^u, respectively, are invariant with respect to e^{At}: if x₀ ∈ E^s, y₀ ∈ E^c and z₀ ∈ E^u, then e^{At}x₀ ∈ E^s, e^{At}y₀ ∈ E^c and e^{At}z₀ ∈ E^u.
Definition 4.1.9 (Homeomorphism). Let X be a metric space and let A and B be subsets of X. A homeomorphism h : A → B of A onto B is a continuous one-to-one map of A onto B such that h⁻¹ : B → A is continuous. The sets A and B are called homeomorphic or topologically equivalent if there is a homeomorphism of A onto B.
Definition 4.1.10 (Differentiable manifold). An n-dimensional differentiable manifold M (or a manifold of class Cᵏ) is a connected metric space with an open covering {U_α} (i.e., M = ∪_α U_α) such that
i) for all α, U_α is homeomorphic to the open unit ball in Rⁿ, B = {x ∈ Rⁿ : |x| < 1}, i.e., for all α there exists a homeomorphism of U_α onto B, h_α : U_α → B;
ii) if U_α ∩ U_β ≠ ∅ and h_α : U_α → B, h_β : U_β → B are homeomorphisms, then h_α(U_α ∩ U_β) and h_β(U_α ∩ U_β) are subsets of Rⁿ and the map
h = h_α ∘ h_β⁻¹ : h_β(U_α ∩ U_β) → h_α(U_α ∩ U_β)
is differentiable (or of class Cᵏ), and for all x ∈ h_β(U_α ∩ U_β), the determinant of the Jacobian satisfies det Dh(x) ≠ 0.
Example. Consider the (linear) ordinary differential equation x′ = ax, with a, x ∈ R. The solution is φ(t, x₀) = e^{at}x₀, and satisfies the group property
φ(t + s, x₀) = e^{a(t+s)} x₀ = e^{at}(e^{as} x₀) = φ(t, e^{as} x₀) = φ(t, φ(s, x₀)).
For simplicity, and without loss of generality since both results are local, we assume henceforth that x* = 0, i.e., that a change of coordinates has been performed translating x* to the origin. We also assume that t₀ = 0.
4.2 The stable manifold theorem
Theorem 4.2.1 (Stable manifold theorem). Let E be an open subset of Rⁿ containing the origin, let f ∈ C¹(E), and let φ_t be the flow of the nonlinear system (4.1). Suppose that f(0) = 0 and that Df(0) has k eigenvalues with negative real part and n − k eigenvalues with positive real part. Then there exists a k-dimensional differentiable manifold S tangent to the stable subspace E^s of the linear system (4.2) at 0, such that for all t ≥ 0, φ_t(S) ⊂ S, and for all x₀ ∈ S,
lim_{t→∞} φ_t(x₀) = 0.
There are several approaches to the proof of this result. Hale [10] gives a proof which uses functional analysis. The proof we give here comes from [18, pp. 108–111], who derives it from [6, pp. 330–335]. It consists in showing that there exists a real nonsingular constant matrix C such that, if y = C⁻¹x, then there are n − k real continuous functions yⱼ = ψⱼ(y₁, . . . , y_k), defined for small |yᵢ|, i ≤ k, such that
yⱼ = ψⱼ(y₁, . . . , y_k) (j = k + 1, . . . , n)
define a k-dimensional differentiable manifold S̃ in y space. The stable manifold in x space is then obtained by applying the transformation x = Cy, which defines S in terms of k curvilinear coordinates y₁, . . . , y_k.
Write (4.1) as x′ = Df(0)x + F(x), where F(x) = f(x) − Df(0)x. There exists a real nonsingular matrix C such that
C⁻¹ Df(0) C = \begin{pmatrix} P & 0 \\ 0 & Q \end{pmatrix},
where the eigenvalues λ₁, . . . , λ_k of the k × k matrix P have negative real part and the eigenvalues λ_{k+1}, . . . , λₙ of the (n − k) × (n − k) matrix Q have positive real part. Let α > 0 be chosen small enough that, for j = 1, . . . , k, ℜ(λⱼ) < −α < 0. Letting y = C⁻¹x, we have
y′ = C⁻¹ x′
= C⁻¹ Df(0) x + C⁻¹ F(x)
= C⁻¹ Df(0) C y + C⁻¹ F(Cy)
= By + G(y),
where G(y) = C⁻¹ F(Cy). Since F ∈ C¹(E), we have G ∈ C¹(Ẽ), where Ẽ = C⁻¹(E), and Lemma 4.2.2 applies to G.
Now consider the system
y′ = By + G(y) (4.3)
and let
U(t) = \begin{pmatrix} e^{Pt} & 0 \\ 0 & 0 \end{pmatrix} and V(t) = \begin{pmatrix} 0 & 0 \\ 0 & e^{Qt} \end{pmatrix}.
Next consider the integral equation
u(t, a) = U(t)a + ∫₀^t U(t − s)G(u(s, a)) ds − ∫_t^∞ V(t − s)G(u(s, a)) ds, (4.4)
where a, u ∈ Rⁿ and a is a constant vector. We can solve this equation using the method of successive approximations. Indeed, let
u⁽⁰⁾(t, a) = 0
and
u⁽ʲ⁺¹⁾(t, a) = U(t)a + ∫₀^t U(t − s)G(u⁽ʲ⁾(s, a)) ds − ∫_t^∞ V(t − s)G(u⁽ʲ⁾(s, a)) ds. (4.5)
One then shows by induction that
|u⁽ʲ⁾(t, a) − u⁽ʲ⁻¹⁾(t, a)| ≤ K|a|e^{−αt} / 2^{j−1}. (4.6)
For j = 1,
|u⁽¹⁾(t, a) − u⁽⁰⁾(t, a)| = |U(t)a| ≤ K|a|e^{−αt},
so (4.6) holds for j = 1. Assume it holds for j = k. Then
|u⁽ᵏ⁺¹⁾(t, a) − u⁽ᵏ⁾(t, a)| ≤ ∫₀^t ‖U(t − s)‖ |G(u⁽ᵏ⁾(s, a)) − G(u⁽ᵏ⁻¹⁾(s, a))| ds + ∫_t^∞ ‖V(t − s)‖ |G(u⁽ᵏ⁾(s, a)) − G(u⁽ᵏ⁻¹⁾(s, a))| ds.
Since G verifies a Lipschitz-type condition, with constant ε, as given by Lemma 4.2.2, using the bounds ‖U(t)‖ ≤ Ke^{−(α+σ)t} and ‖V(t)‖ ≤ Ke^{σt} as well as the induction hypothesis (4.6), it follows that
|u⁽ᵏ⁺¹⁾(t, a) − u⁽ᵏ⁾(t, a)| ≤ ε ∫₀^t Ke^{−(α+σ)(t−s)} (K|a|e^{−αs}/2^{k−1}) ds + ε ∫_t^∞ Ke^{σ(t−s)} (K|a|e^{−αs}/2^{k−1}) ds
≤ (εK²|a|e^{−αt})/(σ 2^{k−1}) + (εK²|a|e^{−αt})/(σ 2^{k−1})
≤ K|a|e^{−αt}/2^k, (4.7)
provided the Lipschitz constant ε is small enough that εK/σ ≤ 1/4. Note that for G to satisfy the Lipschitz-type condition along the iterates, we must choose K|a| < δ/2, i.e., |a| < δ/(2K), where δ is given by Lemma 4.2.2. Then, by induction, (4.6) holds for all t ≥ 0 and j = 1, 2, . . ..
As a consequence, for t ≥ 0 and n > m > N,
|u⁽ⁿ⁾(t, a) − u⁽ᵐ⁾(t, a)| ≤ Σ_{j=N}^{∞} |u⁽ʲ⁺¹⁾(t, a) − u⁽ʲ⁾(t, a)| ≤ K|a| Σ_{j=N}^{∞} 1/2^j = K|a|/2^{N−1}.
As this last quantity approaches 0 as N → ∞, it follows that {u⁽ʲ⁾(t, a)} is a Cauchy sequence (of continuous functions). It follows that
lim_{j→∞} u⁽ʲ⁾(t, a) = u(t, a)
uniformly for all t ≥ 0 and |a| < δ/(2K). From the uniform convergence, we deduce that u(t, a) is continuous. Now taking the limit as j → ∞ in both sides of (4.5), it follows that u(t, a) satisfies the integral equation (4.4) and, as a consequence, the differential equation (4.3).
Since G ∈ C¹(Ẽ), it follows by induction on (4.5) that u⁽ʲ⁾(t, a) is a differentiable function of a for |a| < δ/(2K) and t ≥ 0. Since u⁽ʲ⁾(t, a) → u(t, a) uniformly, it then follows that u(t, a) is differentiable for t ≥ 0 and |a| < δ/(2K). The estimate (4.7) implies that
|u(t, a)| ≤ 2K|a|e^{−αt}. (4.8)
Furthermore, the last n − k components of a do not enter the computation, and may be taken equal to zero; with a = (a₁, . . . , a_k, 0, . . . , 0), evaluating (4.4) at t = 0 gives
uⱼ(0, a) = −∫₀^∞ [V(−s) G(u(s, a₁, . . . , a_k, 0, . . . , 0))]ⱼ ds for j = k + 1, . . . , n. (4.9)
The functions ψⱼ(a₁, . . . , a_k) = uⱼ(0, a₁, . . . , a_k, 0, . . . , 0) then define the manifold S̃.
It follows from the estimate (4.8) that if y(t) is a solution of (4.3) with y(0) ∈ S̃, then y(t) → 0 as t → ∞. It can also be shown that if y(t) is a solution of (4.3) with y(0) ∉ S̃, then y(t) ↛ 0 as t → ∞; see [6, p. 332].
This implies, since φ_t satisfies the group property φ_{s+t}(x₀) = φ_s(φ_t(x₀)), that if y(0) ∈ S̃, then y(t) ∈ S̃ for all t ≥ 0. And it can be shown, as in [6, Th. 4.2, p. 333], that
∂ψⱼ/∂yᵢ (0) = 0
for i = 1, . . . , k and j = k + 1, . . . , n, i.e., that the differentiable manifold S̃ is tangent at 0 to the stable subspace E^s = {y ∈ Rⁿ : y_{k+1} = · · · = yₙ = 0} of the linear system y′ = By.
The existence of the unstable manifold U of (4.3) is established the same way, but considering a reversal of time, t → −t, i.e., considering the system
y′ = −By − G(y).
The stable manifold for this system is the unstable manifold U of (4.3). In order to determine the (n − k)-dimensional manifold U using the above process, the vector y has to be replaced by the vector (y_{k+1}, . . . , yₙ, y₁, . . . , y_k).
4.3 The Hartman-Grobman theorem
Write x₀ = (y₀, z₀)ᵀ ∈ Rⁿ, with y₀ ∈ Rᵏ and z₀ ∈ Rⁿ⁻ᵏ, and decompose the time-one map of the flow accordingly, with Y(0, 0) = 0, Z(0, 0) = 0, DY(0) = 0 and DZ(0) = 0. Since f ∈ C¹(E), Y(y₀, z₀) and Z(y₀, z₀) are continuously differentiable. Thus
‖DY(y₀, z₀)‖ ≤ a and ‖DZ(y₀, z₀)‖ ≤ a
on the compact set |y₀|² + |z₀|² ≤ s₀². By choosing s₀ sufficiently small, we can make a as small as we like. We redefine Y(y₀, z₀) and Z(y₀, z₀) (keeping the same names) as smooth functions that agree with the original ones for |y₀| + |z₀| ≤ s₀ and vanish for |y₀| + |z₀| ≥ 2s₀. By the mean value theorem,
|Y(y₀, z₀)| ≤ a √(|y₀|² + |z₀|²) ≤ a(|y₀| + |z₀|)
and
|Z(y₀, z₀)| ≤ a(|y₀| + |z₀|)
for all (y₀, z₀) ∈ Rⁿ. Let B = e^P and C = e^Q. Assuming that P and Q have been normalized in a proper way, we have
b = ‖B‖ < 1 and c = ‖C⁻¹‖ < 1.
4. For x = (y, z)ᵀ ∈ Rⁿ, let
L(y, z) = (By, Cz)ᵀ
and
T(y, z) = (By + Y(y, z), Cz + Z(y, z))ᵀ,
i.e., L(x) = e^A x and, locally, T(x) = φ₁(x). Then the following lemma holds, which we prove later.
Lemma 4.3.3. There exists a homeomorphism H of an open set U containing the origin onto an open set V containing the origin such that
H ∘ T = L ∘ H.
5. We let H₀ be the homeomorphism defined above, and let Lᵗ and Tᵗ be the one-parameter families of transformations defined by
Lᵗ(x₀) = e^{At} x₀ and Tᵗ(x₀) = φ_t(x₀).
Define
H = ∫₀¹ L⁻ˢ H₀ Tˢ ds.
It follows from the above lemma that there exists a neighborhood of the origin for which
Lᵗ H = ∫₀¹ Lᵗ⁻ˢ H₀ Tˢ⁻ᵗ ds Tᵗ
= ∫_{−t}^{1−t} L⁻ˢ H₀ Tˢ ds Tᵗ
= ( ∫_{−t}^{0} L⁻ˢ H₀ Tˢ ds + ∫₀^{1−t} L⁻ˢ H₀ Tˢ ds ) Tᵗ
= ∫₀¹ L⁻ˢ H₀ Tˢ ds Tᵗ
= H Tᵗ,
since, by the above lemma, H₀ = L⁻¹ H₀ T, which implies that
∫_{−t}^{0} L⁻ˢ H₀ Tˢ ds = ∫_{−t}^{0} L⁻⁽ˢ⁺¹⁾ H₀ Tˢ⁺¹ ds = ∫_{1−t}^{1} L⁻ˢ H₀ Tˢ ds.
Thus H ∘ Tᵗ = Lᵗ ∘ H, or equivalently,
H ∘ φ_t(x₀) = e^{At} H(x₀),
and it can be shown that H is a homeomorphism on Rⁿ. The outline of the proof is complete.
We now prove Lemma 4.3.3.
Proof. We use the method of successive approximations. For x ∈ Rⁿ, let
H(x) = (Φ(y, z), Ψ(y, z))ᵀ.
Then H ∘ T = L ∘ H is equivalent to the pair of equations
BΦ(y, z) = Φ(By + Y(y, z), Cz + Z(y, z)), (4.10a)
CΨ(y, z) = Ψ(By + Y(y, z), Cz + Z(y, z)). (4.10b)
Define the successive approximations for the second equation by
Ψ₀(y, z) = z,
Ψ_{k+1}(y, z) = C⁻¹ Ψ_k(By + Y(y, z), Cz + Z(y, z)). (4.11)
It can be shown by induction that, for k = 0, 1, . . ., the functions Ψ_k are continuous and such that Ψ_k(y, z) = z for |y| + |z| ≥ 2s₀.
Let us now prove that {Ψ_k} is a Cauchy sequence. For this, we show by induction that, for all j ≥ 1,
|Ψⱼ(y, z) − Ψ_{j−1}(y, z)| ≤ M rʲ (|y| + |z|)^δ, (4.12)
where r = c[2 max(a, b, c)]^δ with δ ∈ (0, 1) chosen sufficiently small that r < 1 (which is possible since c < 1), and M = ac(2s₀)^{1−δ}/r. Inequality (4.12) is satisfied for j = 1 since
|Ψ₁(y, z) − Ψ₀(y, z)| = |C⁻¹ Ψ₀(By + Y(y, z), Cz + Z(y, z)) − z|
= |C⁻¹ (Cz + Z(y, z)) − z|
= |C⁻¹ Z(y, z)|
≤ ‖C⁻¹‖ |Z(y, z)|
≤ ca(|y| + |z|)
≤ M r (|y| + |z|)^δ,
since Z(y, z) = 0 for |y| + |z| ≥ 2s₀. Now assuming that (4.12) holds for j = k gives
|Ψ_{k+1}(y, z) − Ψ_k(y, z)| = |C⁻¹ Ψ_k(By + Y(y, z), Cz + Z(y, z)) − C⁻¹ Ψ_{k−1}(By + Y(y, z), Cz + Z(y, z))|
= |C⁻¹ (Ψ_k − Ψ_{k−1})|
≤ ‖C⁻¹‖ |Ψ_k − Ψ_{k−1}|,
which, using the induction hypothesis (4.12) and c = ‖C⁻¹‖, gives
≤ cM rᵏ (|By + Y(y, z)| + |Cz + Z(y, z)|)^δ
≤ cM rᵏ (b|y| + 2a(|y| + |z|) + c|z|)^δ
≤ cM rᵏ (2 max(a, b, c))^δ (|y| + |z|)^δ
= M r^{k+1} (|y| + |z|)^δ.
Using the same type of argument as in the proof of the stable manifold theorem, {Ψ_k} is thus a Cauchy sequence of continuous functions that converges uniformly as k → ∞ to a continuous function Ψ(y, z). Also, Ψ(y, z) = z for |y| + |z| ≥ 2s₀. Taking limits in (4.11) and left-multiplying by C shows that Ψ(y, z) is a solution of (4.10b).
Now for (4.10a). This equation can be written
B⁻¹ Φ(y, z) = Φ(B⁻¹ y + Y₁(y, z), C⁻¹ z + Z₁(y, z)), (4.13)
where Y₁ and Z₁ occur in the inverse of T, which exists provided that a is small enough (i.e., s₀ is sufficiently small):
T⁻¹(y, z) = (B⁻¹ y + Y₁(y, z), C⁻¹ z + Z₁(y, z))ᵀ.
Successive approximations with Φ₀(y, z) = y can then be used as above (since b = ‖B‖ < 1) to solve (4.13).
Then
H(y, z) = (Φ(y, z), Ψ(y, z))ᵀ,
and it follows as in [11, pp. 248–249] that H is a homeomorphism of Rⁿ onto Rⁿ.
4.4 Example of application
4.4.1 A chemostat model
To illustrate the use of the theorems in this chapter, we take an example of nonlinear system: a system of two nonlinear differential equations modeling a biological device called a chemostat. Without going into details, the system is the following:
dS/dt = D(S⁰ − S) − μ(S)x, (4.14a)
dx/dt = (μ(S) − D)x. (4.14b)
The parameters S⁰ and D, respectively the input concentration and the dilution rate, are real and positive. The function μ is the growth function. It is generally assumed to satisfy μ(0) = 0, μ′ > 0 and μ″ < 0.
To be complete, one should verify that the positive quadrant is positively invariant under the flow of (4.14), i.e., that for S(0) ≥ 0 and x(0) ≥ 0, solutions remain nonnegative for all positive times, and similar properties. But since we are here only interested in applications of the stable manifold theorem, we proceed to a very crude analysis, and will not deal with this point.
Note that in vector form, the system is written
ξ′ = f(ξ),
with ξ = (S, x)ᵀ and
f(ξ) = (D(S⁰ − S) − μ(S)x, (μ(S) − D)x)ᵀ.
Equilibria of the system are found by solving f(ξ) = 0. We find two: the first one situated on one of the boundaries of the positive quadrant,
ξ_T = (S_T, x_T) = (S⁰, 0),
the second one in the interior,
ξ_I = (S*, x*) = (λ, S⁰ − λ),
where λ is such that μ(λ) = D. Note that this implies that if λ ≥ S⁰, then ξ_T is the only equilibrium of the system.
The Jacobian matrix of f at ξ_T is
Df(ξ_T) = \begin{pmatrix} −D & −μ(S⁰) \\ 0 & μ(S⁰) − D \end{pmatrix}.
We have two eigenvalues, −D and μ(S⁰) − D. Let us suppose that μ(S⁰) − D < 0. Note that this implies that ξ_T is the only equilibrium, since, as we have seen before, ξ_I is not feasible if λ > S⁰.
As the system has dimensionality 2, and the matrix Df(ξ_T) has two negative eigenvalues, the stable manifold theorem (Theorem 4.2.1) states that there exists a 2-dimensional differentiable manifold M such that
φ_t(M) ⊂ M
and, for all ξ₀ ∈ M, lim_{t→∞} φ_t(ξ₀) = ξ_T. At ξ_T, M is tangent to the stable subspace E^s of the linearized system ζ′ = Df(ξ_T)(ξ − ξ_T).
Since there are no eigenvalues with positive real part, there does not exist an unstable manifold in this case. Let us now characterize the nature of the stable subspace E^s. It is obtained by studying the linear system
obtained by studying the linear system
0 = Df (T )( T )
D
(S 0 )
S S0
=
x
0 (S 0 ) D
0
0
D(S S ) (S )x
=
((S 0 ) D)x
(4.15)
Of course, the Jacobian matrix associated to this system is the same as that of the nonlinear
system (at T ). Associated to the eigenvalue D is the eigenvector v1 = (1, 0)T , to (S 0 )D
is v2 = (1, 1)T .
The stable subspace is thus given by Span (v1 , v2 ), i.e., the whole of R2 .
In fact, the stable manifold of T is the whole positive quadrant, since all solutions limit
to this equilibrium. But let us pretend that we do not have this information, and let us try
to find an approximation of the stable manifold.
4.4.2 A second example
This example is adapted from [18, p. 111]. Consider the nonlinear system
x′ = −x − y²,
y′ = x² + y. (4.16)
From the nullcline equations, it is clear that (x, y) = (0, 0) is the only equilibrium point. At (0, 0), the Jacobian matrix of (4.16) is given by
J = \begin{pmatrix} −1 & 0 \\ 0 & 1 \end{pmatrix}.
The linearized system at 0 is
x′ = −x,
y′ = y. (4.17)
So the eigenvalues are −1 and 1, with associated eigenvectors (1, 0)ᵀ and (0, 1)ᵀ, respectively. Therefore, the stable manifold theorem (Theorem 4.2.1) implies that there exists a 1-dimensional stable (differentiable) manifold S such that φ_t(S) ⊂ S and lim_{t→∞} φ_t(x₀) = 0 for all x₀ ∈ S, and a 1-dimensional unstable (differentiable) manifold U such that φ_t(U) ⊂ U and lim_{t→−∞} φ_t(x₀) = 0 for all x₀ ∈ U. Furthermore, at 0, S is tangent to the stable subspace E^s of (4.17), and U is tangent to the unstable subspace E^u of (4.17).
The stable subspace E^s is given by Span(v₁), with v₁ = (1, 0)ᵀ, i.e., the x-axis. The unstable subspace E^u is Span(v₂), with v₂ = (0, 1)ᵀ, i.e., the y-axis. The behavior of this system is illustrated in Figure 4.1.
[Figure 4.1: phase portrait of (4.16) near the origin.]
To be more precise about the nature of the stable manifold S, we proceed as follows. First of all, as A is in diagonal form, we have
A = B = \begin{pmatrix} −1 & 0 \\ 0 & 1 \end{pmatrix}
and C = I. Also, F(ξ) = G(ξ) = (−y², x²)ᵀ. Here, the matrices P and Q are in fact scalars, P = −1 and Q = 1. Thus
U(t) = \begin{pmatrix} e^{−t} & 0 \\ 0 & 0 \end{pmatrix}, V(t) = \begin{pmatrix} 0 & 0 \\ 0 & e^{t} \end{pmatrix}.
Finally, a = (a₁, 0)ᵀ. So now we can use successive approximations to find an approximate solution to the integral equation (4.4), which here takes the form
u(t, a) = \begin{pmatrix} e^{−t} a₁ \\ 0 \end{pmatrix} + ∫₀^t \begin{pmatrix} −e^{−(t−s)} u₂²(s) \\ 0 \end{pmatrix} ds − ∫_t^∞ \begin{pmatrix} 0 \\ e^{t−s} u₁²(s) \end{pmatrix} ds.
To construct the sequence of successive approximations, we start with u⁽⁰⁾(t, a) = (0, 0)ᵀ, then compute the successive terms using equation (4.5), which takes the form
u⁽ʲ⁺¹⁾(t, a) = \begin{pmatrix} e^{−t} a₁ \\ 0 \end{pmatrix} + ∫₀^t \begin{pmatrix} −e^{−(t−s)} (u₂⁽ʲ⁾(s))² \\ 0 \end{pmatrix} ds − ∫_t^∞ \begin{pmatrix} 0 \\ e^{t−s} (u₁⁽ʲ⁾(s))² \end{pmatrix} ds.
Therefore,
u⁽¹⁾(t, a) = U(t)a = \begin{pmatrix} e^{−t} a₁ \\ 0 \end{pmatrix},
since u⁽⁰⁾(t, a) = (0, 0)ᵀ. Then,
u⁽²⁾(t, a) = \begin{pmatrix} e^{−t} a₁ \\ 0 \end{pmatrix} − ∫_t^∞ \begin{pmatrix} 0 \\ e^{t−s} (e^{−s} a₁)² \end{pmatrix} ds = \begin{pmatrix} e^{−t} a₁ \\ −\tfrac{1}{3} a₁² e^{−2t} \end{pmatrix},
and
u⁽³⁾(t, a) = \begin{pmatrix} e^{−t} a₁ + \tfrac{1}{27}(e^{−4t} − e^{−t}) a₁⁴ \\ −\tfrac{1}{3} a₁² e^{−2t} \end{pmatrix}.
The second component stabilizes: u₂⁽ʲ⁾(t, a) = −(1/3) a₁² e^{−2t} + O(a₁⁴) for j ≥ 2. Recalling that ψ₂(a₁) = u₂(0, a₁, 0), we obtain ψ₂(a₁) = −a₁²/3 + O(a₁⁴) as a₁ → 0. Thus S is approximated by
y = −x²/3 + O(x⁵)
as x → 0.
Chapter 5
Exponential dichotomy
Our aim here is to show the equivalent of the Hartman-Grobman theorem for linear systems
with variable coefficients. Compared to other results we have seen so far, this is a much
more recent field. The first results were shown in the 60s by Lin. We give here only the
most elementary results. For more details, see, e.g., [13].
We consider the linear system of differential equations
dx
= A(t)x
dt
(5.1)
5.1
Exponential dichotomy
Definition 5.1.1 (Exponential dichotomy). Let X(t) be a fundamental matrix solution of (5.1). Suppose that X(t) and X⁻¹(s) can be decomposed into the following forms:
X(t) = X₁(t) + X₂(t),
X⁻¹(s) = Z₁(s) + Z₂(s),
X(t)X⁻¹(s) = X₁(t)Z₁(s) + X₂(t)Z₂(s),
and satisfy the conditions that there exist positive constants K, α, β such that
‖X₁(t)Z₁(s)‖ ≤ Ke^{−α(t−s)}, t ≥ s,
‖X₂(t)Z₂(s)‖ ≤ Ke^{−β(s−t)}, s ≥ t,
where
X₁(t) = (X₁₁(t), 0), X₂(t) = (0, X₁₂(t)),
Z₁(s) = \begin{pmatrix} Z₁₁(s) \\ 0 \end{pmatrix}, Z₂(s) = \begin{pmatrix} 0 \\ Z₂₁(s) \end{pmatrix}.
Then (5.1) is said to admit an exponential dichotomy. (5.2)
5.2 Existence of exponential dichotomy
To check that the previous definitions hold can be a very tedious task. Some authors have
thus worked on deriving simpler conditions that imply exponential dichotomy.
Theorem 5.2.1. If the matrix A(t) in (5.1) is continuous and bounded on R, and there exists a quadratic form V(t, x) = xᵀG(t)x, where the matrix G(t) is symmetric, regular, bounded and C¹, such that the derivative of V(t, x) along (5.1) is positive definite, then (5.1) admits an exponential dichotomy.
The converse is true, without the requirement that A(t) be bounded.
The following is a result of [7].
Theorem 5.2.2. Let A(t) be a continuous n × n matrix function defined on an interval I such that
i) A(t) has k eigenvalues with real part ≤ −α < 0 and n − k eigenvalues with real part ≥ β > 0 for all t ∈ I,
ii) ‖A(t)‖ ≤ M for all t ∈ I.
For any positive constant λ < min(α, β), there exists a positive constant δ = δ(M, α + β, λ) such that, if
‖A(t₂) − A(t₁)‖ ≤ δ for |t₂ − t₁| ≤ h,
where h > 0 is a fixed number not greater than the length of I, then the equation
x′ = A(t)x
has a fundamental matrix X(t) satisfying the inequalities
‖X(t) P X⁻¹(s)‖ ≤ Ke^{−(α−λ)(t−s)} for t ≥ s,
‖X(t)(I − P) X⁻¹(s)‖ ≤ Le^{−(β−λ)(s−t)} for s ≥ t,
where K, L are positive constants depending only on M, α + β, and λ, and
P = \begin{pmatrix} I_k & 0 \\ 0 & 0 \end{pmatrix}.
The following result, due to Muldowney [16], gives a criterion for the existence of a (λ₁, λ₂)-dichotomy.
Proposition 5.2.3. Suppose there are constants lᵢ, 0 ≤ lᵢ < 1, i = 1, 2, and an integer m, 0 ≤ m ≤ n, and set
λ₁ = max { ℜ(a_jj) + Σ_{i=1, i≠j}^{m} |a_ij| + l₁ Σ_{i=m+1}^{n} |a_ij| : j = 1, . . . , m },
λ₂ = min { ℜ(a_jj) − l₁ Σ_{i=1}^{m} |a_ij| − Σ_{i=m+1, i≠j}^{n} |a_ij| : j = m + 1, . . . , n }.
If λ₁ < λ₂, then (5.1) admits a (λ₁, λ₂)-dichotomy.
The same sort of criterion can be proved with sums over the columns replaced by sums over the rows.
Example. Consider
A(t) = \begin{pmatrix} −1 & 0 & 1/2 \\ t/2 & −t & t² \\ t/2 & t² & −t \end{pmatrix}, t > 0.
5.3 First approximate theory
We consider the nonlinear system
x′ = A(t)x + f(t, x). (5.4)
For a solution x(t), let
λ̄_u(x(t)) = lim sup_{t−s→∞} (1/(t − s)) log( ‖x(t)‖ / ‖x(s)‖ )
and
λ_u(x(t)) = lim inf_{t−s→∞} (1/(t − s)) log( ‖x(t)‖ / ‖x(s)‖ ).
The numbers λ̄_u(x(t)) and λ_u(x(t)) are called the uniform upper characteristic exponent and uniform lower characteristic exponent of x(t), respectively.
Remark. If λ̄_u(x) < 0, then lim_{t→∞} ‖x(t)‖ = 0. If λ_u(x) > 0, then lim_{t→∞} ‖x(t)‖ = ∞.
Theorem 5.3.1. If (5.1) admits an exponential dichotomy, then the linear system (5.1) and the nonlinear system (5.4) are topologically equivalent; more precisely:
i) if the solution x(t) of (5.4) remains in a neighborhood of the origin for t ≥ 0, or t ≤ 0, then lim_{t→∞} x(t) = 0, or lim_{t→−∞} x(t) = 0, respectively;
ii) there exist positive constants α₀ and β₀ such that, if a solution x(t) of (5.4) is such that lim_{t→∞} x(t) = 0, or lim_{t→−∞} x(t) = 0, then
‖x(t)‖ ≤ N₀ ‖x(s)‖ e^{−α₀(t−s)}, t ≥ s, or ‖x(t)‖ ≤ N₀ ‖x(s)‖ e^{β₀(t−s)}, s ≥ t,
respectively, i.e., λ̄_u(x(t)) ≤ −α₀ < 0, or λ_u(x(t)) ≥ β₀ > 0;
iii) for a k-dimensional family of solutions x(t) of (5.1) with λ̄_u(x(t)) < 0, or an (n − k)-dimensional family of solutions x(t) of (5.1) with λ_u(x(t)) > 0, there is a unique k-dimensional, or (n − k)-dimensional, family y(t) of solutions of (5.4) such that λ̄_u(y(t)) < 0, or λ_u(y(t)) > 0, respectively.
A different statement of the same sort of result is given by Palmer [17].
Theorem 5.3.2. Suppose that A(t) is a continuous matrix function such that the linear equation x′ = A(t)x has an exponential dichotomy. Suppose that f(t, x) is a continuous function of R × Rⁿ into Rⁿ such that
‖f(t, x)‖ ≤ μ and ‖f(t, x₁) − f(t, x₂)‖ ≤ r‖x₁ − x₂‖,
with μ and r sufficiently small. Then x′ = A(t)x + f(t, x) and x′ = A(t)x are topologically equivalent.
5.4 Stability of exponential dichotomy
Theorem 5.4.1. Suppose that the linear system (5.1) admits an exponential dichotomy. Then there exists a constant δ > 0 such that, whenever ‖B(t)‖ ≤ δ on R₊, the perturbed linear system
dx/dt = (A(t) + B(t))x (5.5)
also admits an exponential dichotomy, with exponents α̃, β̃. Moreover, if P̃(t) is the corresponding projection, then ‖P̃(t) − P(t)‖ = O(δ) uniformly in t ∈ R₊. Also, |α − α̃| = O(δ) and |β − β̃| = O(δ).
5.5 Generality of exponential dichotomy
The exposition has been done here in the case of a system of ODEs. But it is important to
realize that exponential dichotomies exist in a much more general setting.
Bibliography
[1] A. Acosta and P. García. Synchronization of non-identical chaotic systems: an exponential dichotomies approach. J. Phys. A: Math. Gen., 34:9143–9151, 2001.
[2] F. Brauer and J.A. Nohel. The Qualitative Theory of Ordinary Differential Equations.
Dover, 1989.
[3] H. Cartan. Cours de calcul différentiel. Hermann, Paris, 1997. Reprint of the 1977 edition.
[4] C. Chicone. Ordinary Differential Equations with Applications. Springer, 1999.
[5] E.A. Coddington and N. Levinson. Theory of Ordinary Differential Equations. McGraw-Hill, 1955.
[6] E.A. Coddington and N. Levinson. Theory of Ordinary Differential Equations. Krieger,
1984.
[7] W.A. Coppel. Dichotomies in Stability Theory, volume 629 of Lecture Notes in Mathematics. Springer-Verlag, 1978.
[8] J. Dieudonne. Foundations of Modern Analysis. Academic Press, 1969.
[9] N.B. Haaser and J.A. Sullivan. Real Analysis. Dover, 1991. Reprint of the 1971 edition.
[10] J.K. Hale. Ordinary Differential Equations. Krieger, 1980.
[11] P. Hartman. Ordinary Differential Equations. John Wiley & Sons, 1964.
[12] P.-F. Hsieh and Y. Sibuya. Basic Theory of Ordinary Differential Equations. Springer,
1999.
[13] Z. Lin and Y-X. Lin. Linear Systems. Exponential Dichotomy and Structure of Sets of
Hyperbolic Points. World Scientific, 2000.
[14] H.J. Marquez. Nonlinear Control Systems. Wiley, 2003.
[15] R.K. Miller and A.N. Michel. Ordinary Differential Equations. Academic Press, 1982.
[16] J.S. Muldowney. Dichotomies and asymptotic behaviour for linear differential systems. Transactions of the AMS, 283(2):465–484, 1984.
[17] K.J. Palmer. A generalization of Hartman's linearization theorem. J. Math. Anal. Appl., 41:753–758, 1973.
[18] L. Perko. Differential Equations and Dynamical Systems. Springer, 2001.
[19] L. Schwartz. Cours d'analyse, volume I. Hermann, Paris, 1967.
[20] K. Yosida. Lectures on Differential and Integral Equations. Dover, 1991.
Appendix A
A few useful definitions and results
Here, some results that are important for the course are given with a somewhat random
ordering.
A.1 Norms
A.1.1 Vector norms
A.1.2 Matrix norms
A.1.3 Operator norms
The operator norm of L ∈ L(E) is defined by
|||L||| = sup_{x ∈ E∖{0}} ‖L(x)‖/‖x‖ = sup_{‖x‖ ≤ 1} ‖L(x)‖.
The inequality
‖A(t)(x₁ − x₂)‖ ≤ |||A(t)||| ‖x₁ − x₂‖
results from the nature of the norm ||| · |||; see Appendix A.1. It is best understood by indicating the spaces in which the various norms are defined. We have, for x ≠ 0,
‖Ax‖_a = ‖x‖_b ‖A (x/‖x‖_b)‖_a ≤ ‖x‖_b |||A||| = |||A||| ‖x‖_b,
since ‖x/‖x‖_b‖_b = 1.
A.2 Integrals of vector-valued functions
For a given component function fᵢ, i = 1, . . . , n, using the definition of the Riemann integral,
∫_a^b fᵢ(x) dx = lim_{k→∞} Σ_{j=1}^{k} fᵢ(xⱼ*) Δxⱼ,
where xⱼ* is the sample point in the interval [x_{j−1}, xⱼ] with width Δxⱼ. Therefore,
‖∫_a^b f(x) dx‖ = ‖lim_{k→∞} Σ_{j=1}^{k} f(xⱼ*) Δxⱼ‖ = lim_{k→∞} ‖Σ_{j=1}^{k} f(xⱼ*) Δxⱼ‖,
since the norm is a continuous function. The result then follows from the triangle inequality.
A.3
Types of convergences
Definition A.1 (Pointwise convergence). Let X be any set, and let Y be a topological space. A sequence f₁, f₂, . . . of mappings from X to Y is said to be pointwise convergent (or simply convergent) to a mapping f : X → Y if the sequence fₙ(x) converges to f(x) for each x in X. This is usually denoted by fₙ → f. In other words,
(fₙ → f) ⟺ (∀x ∈ X, ∀ε > 0, ∃N ≥ 0, ∀n ≥ N, d(fₙ(x), f(x)) < ε).
Definition A.2 (Uniform convergence). Let X be any set, and let Y be a topological space. A sequence f₁, f₂, . . . of mappings from X to Y is said to be uniformly convergent to a mapping f : X → Y if, given ε > 0, there exists N such that for all n ≥ N and all x ∈ X, d(fₙ(x), f(x)) < ε. This is denoted fₙ →ᵤ f.
A.4 Asymptotic notations
Let n be an integer variable which tends to ∞ and let x be a continuous variable tending to some limit. Also, let φ(n) or φ(x) be a positive function and f(n) or f(x) any function. Then
i) f = O(φ) means that |f| < Aφ for some constant A and all values of n and x;
ii) f = o(φ) means that f/φ → 0;
iii) f ∼ φ means that f/φ → 1.
Note that f = o(φ) implies f = O(φ).
A.5 Types of continuities
Definition A.3 (Uniform continuity). Let (X, d_X) and (Y, d_Y) be two metric spaces, E ⊂ X and F ⊂ Y. A function f : E → F is uniformly continuous on the set E ⊂ X if, for every ε > 0, there exists δ > 0 such that
d_Y(f(x), f(y)) < ε whenever x, y ∈ E and d_X(x, y) < δ.
In other words,
(f : E ⊂ (X, d_X) → F ⊂ (Y, d_Y) uniformly continuous on E)
⟺ (∀ε > 0, ∃δ > 0, ∀x, y ∈ E, d_X(x, y) < δ ⟹ d_Y(f(x), f(y)) < ε).
Definition A.4 (Equicontinuous set). A set of functions F = {f} defined on a real interval I is said to be equicontinuous on I if, given any ε > 0, there exists a δ > 0, independent of f ∈ F and of t, t̃ ∈ I, such that
‖f(t) − f(t̃)‖ < ε whenever |t − t̃| < δ.
A.6 Lipschitz function
A function f is Lipschitz in x on V if
sup_{(t,x)≠(t,y) ∈ V} ‖f(t, x) − f(t, y)‖ / ‖x − y‖ < ∞,
the supremum being the Lipschitz constant of f on V.
A.7 Gronwall's lemma
Lemma (Gronwall's lemma). Suppose that g is continuous on [t₀, t₁] and satisfies, for constants K, L ≥ 0,
0 ≤ g(t) ≤ K + L ∫_{t₀}^{t} g(s) ds.
Then
0 ≤ g(t) ≤ Ke^{L(t−t₀)}, for t₀ ≤ t ≤ t₁.
Proof. Let v(t) = ∫_{t₀}^{t} g(s) ds. Then v′(t) = g(t), which implies that the assumption on g can be written
0 ≤ v′(t) ≤ K + Lv(t).
The right inequality is a linear differential inequality, with integrating factor exp(−∫_{t₀}^{t} L ds). Also, v(t₀) = 0. Hence,
d/dt (e^{−L(t−t₀)} v(t)) ≤ Ke^{−L(t−t₀)},
and therefore,
e^{−L(t−t₀)} v(t) ≤ (K/L)(1 − e^{−L(t−t₀)}).
Thus Lv(t) ≤ K(e^{L(t−t₀)} − 1), and g(t) ≤ K + Lv(t) ≤ Ke^{L(t−t₀)}.
A more general version is the following. If
g(t) ≤ K(t) + ∫_a^t L(s) g(s) ds,
with K nondecreasing, then
g(t) ≤ K(t) exp( ∫_a^t L(s) ds ).
The form used in the text is the following. Suppose that, on [0, c],
w(t) ≤ φ(t) + ∫₀^t ψ(s) w(s) ds;
then
w(t) ≤ φ(t) + ∫₀^t ψ(s) φ(s) exp( ∫_s^t ψ(τ) dτ ) ds. (A.1)
Before proving the result, let us recall that a function f from an interval I ⊂ R to a Banach space F is regulated if it admits at each point of I a left limit and a right limit. In particular, every continuous mapping from I ⊂ R to a Banach space is regulated, as well as every monotone map from I to R; see, e.g., [8, Section 7.6].
Proof. Let y(t) = ∫₀^t ψ(s)w(s) ds; y is continuous, and since w(t) ≤ φ(t) + ∫₀^t ψ(s)w(s) ds, it follows that, except maybe at a denumerable number of points of [0, c], we have
y′(t) − ψ(t)y(t) ≤ ψ(t)φ(t) (A.2)
from [8, Section 8.7]. Let z(t) = y(t) exp(−∫₀^t ψ(s) ds). Then (A.2) is equivalent to
z′(t) ≤ ψ(t)φ(t) exp(−∫₀^t ψ(s) ds).
Using a mean-value type theorem (see, e.g., [8, Th. 8.5.3]) and the fact that z(0) = 0, we get, for t ∈ [0, c],
z(t) ≤ ∫₀^t ψ(s)φ(s) exp(−∫₀^s ψ(τ) dτ) ds,
whence, by definition of z,
y(t) ≤ ∫₀^t ψ(s)φ(s) exp(∫_s^t ψ(τ) dτ) ds,
and inequality (A.1) now follows from the relation w(t) ≤ φ(t) + y(t).
A.8 Fixed point theorems
Definition A.10 (Contraction mapping). Let (X, d) be a metric space, and let S ⊂ X. A mapping f : S → S is a contraction on S if there exists K < 1 such that, for all x, y ∈ S,
d(f(x), f(y)) ≤ K d(x, y).
Every contraction is uniformly continuous on X (from Proposition A.6.4, since a contraction is Lipschitz).
Theorem A.11 (Contraction mapping principle). Consider the complete metric space (X, d). Every contraction mapping f : X → X has one and only one x ∈ X such that f(x) = x.
Proof. Existence: we use successive approximations. Let x₀ ∈ X. Define x₁ = f(x₀), x₂ = f(x₁), . . . , xₙ = f(x_{n−1}), . . . This defines an infinite sequence of elements of X. As f is a contraction,
d(x₂, x₁) = d(f(x₁), f(x₀)) ≤ K d(x₁, x₀).
Similarly,
d(x₃, x₂) = d(f(x₂), f(x₁)) ≤ K d(x₂, x₁) ≤ K² d(x₁, x₀).
Iterating,
d(x_{n+1}, xₙ) ≤ Kⁿ d(x₁, x₀).
Therefore,
d(x_{n+p}, xₙ) ≤ d(x_{n+p}, x_{n+p−1}) + d(x_{n+p−1}, x_{n+p−2}) + · · · + d(x_{n+1}, xₙ)
≤ (K^{p−1} + K^{p−2} + · · · + K + 1) Kⁿ d(x₁, x₀)
≤ (Kⁿ/(1 − K)) d(x₁, x₀).
Therefore d(x_{n+p}, xₙ) tends to 0 as n → ∞, so {xₙ} is a Cauchy sequence. Since X is a complete space, it follows that {xₙ} admits a limit ℓ. As lim_{n→∞} xₙ = ℓ, it follows from continuity of f that x_{n+1} = f(xₙ) tends to f(ℓ). But x_{n+1} also converges to ℓ, so f(ℓ) = ℓ; that is, ℓ is a fixed point of f.
Uniqueness: suppose ℓ₁ and ℓ₂ are two fixed points with d(ℓ₁, ℓ₂) ≠ 0. Then there must hold
d(ℓ₁, ℓ₂) ≤ K d(ℓ₁, ℓ₂) < d(ℓ₁, ℓ₂),
a contradiction. Therefore d(ℓ₁, ℓ₂) = 0, and ℓ₁ = ℓ₂.
In the case that f : S X S, the theorem takes the form of Theorem A.12. Closedness
of S is an implicit requirement, since the set S in the complete metric space X is closed if,
and only if, S is complete.
Theorem A.12. Let S be a closed subset of the complete metric space (X, d). Every contraction mapping f : S S has one and only one x S such that f (x) = x.
Theorem A.8.1. Consider a mapping f : X → X, where X is a complete metric space. Suppose that f is not necessarily a contraction, but that one of the iterates fᵏ of f is a contraction. Then f has a unique fixed point.
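As a small illustration of the principle (a sketch, not part of the notes), fixed point iteration converges geometrically for any contraction; here it is applied to f(x) = cos x, which is a contraction on a suitable interval around its fixed point:

import math

def fixed_point(f, x0, tol=1e-12, max_iter=1000):
    """Iterate x_{n+1} = f(x_n); converges when f is a contraction."""
    x = x0
    for _ in range(max_iter):
        x_next = f(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    raise RuntimeError("no convergence")

# cos is a contraction near its fixed point (|sin x| < 1 there).
ell = fixed_point(math.cos, 1.0)
print(ell, math.cos(ell) - ell)  # fixed point, residual ~ 0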
A.9 The Jordan canonical form
Every matrix A is similar to a block diagonal matrix
J = \begin{pmatrix} J₀ & & & \\ & J₁ & & \\ & & \ddots & \\ & & & J_s \end{pmatrix},
where J₀ is a diagonal matrix with diagonal λ₁, . . . , λ_q, and, for i = 1, . . . , s,
Jᵢ = \begin{pmatrix} λ_{q+i} & 1 & & \\ & λ_{q+i} & \ddots & \\ & & \ddots & 1 \\ & & & λ_{q+i} \end{pmatrix}.
The λⱼ, j = 1, . . . , q + s, are the characteristic roots of A, which need not all be distinct. If λⱼ is a simple root, then it occurs in J₀; therefore, if all the roots are distinct, A is similar to the diagonal matrix
J = diag(λ₁, λ₂, . . . , λₙ).
An algorithm to compute the Jordan canonical form of an n × n matrix A [15]:
i) Compute the eigenvalues of A. Let λ₁, . . . , λ_m be the distinct eigenvalues of A, with multiplicities n₁, . . . , n_m, respectively.
ii) Compute n₁ linearly independent generalized eigenvectors of A associated with λ₁ as follows. Compute (A − λ₁Eₙ)ⁱ for i = 1, 2, . . . until the rank of (A − λ₁Eₙ)ᵏ is equal to the rank of (A − λ₁Eₙ)^{k+1}. Find a generalized eigenvector of rank k, say u. Define uᵢ = (A − λ₁Eₙ)^{k−i} u, for i = 1, . . . , k. If k = n₁, proceed to step iii). If k < n₁, find another linearly independent generalized eigenvector with rank k. If this is not possible, try k − 1, and so forth, until n₁ linearly independent generalized eigenvectors are determined. Note that if rank(A − λ₁Eₙ) = r, then there are in total (n − r) chains of generalized eigenvectors associated with λ₁.
iii) Repeat step ii) for λ₂, . . . , λ_m.
iv) Let u₁, . . . , u_k, . . . be the new basis. Observe that
Au₁ = λ₁u₁,
Au₂ = u₁ + λ₁u₂,
⋮
Au_k = u_{k−1} + λ₁u_k.
Thus, in the new basis, A has the block diagonal representation
J = diag(J(λ₁), J(λ₂), J(λ₃), . . .),
where each chain of generalized eigenvectors generates a Jordan block J(λᵢ) whose order equals the length of the chain.
v) The similarity transformation which yields J = Q⁻¹AQ is given by Q = [u₁, . . . , u_k, . . .].
A.10 Matrix exponentials
For A ∈ Mₙ(K), define
e^A = Σ_{n=0}^{∞} Aⁿ/n!. (A.3)
We have e^A ∈ Mₙ(K). Also, |||Aⁿ/n!||| ≤ |||A|||ⁿ/n!, so that the series Σ (1/n!)Aⁿ is absolutely convergent in Mₙ(K). Thus e^A is well defined.
Property A.10.1. Let A, B ∈ Mₙ(K). Then
i) |||e^A||| ≤ e^{|||A|||};
ii) if A and B commute (i.e., AB = BA), then e^{A+B} = e^A e^B;
iii) e^A = I + Σ_{n=1}^{∞} Aⁿ/n!;
iv) e⁰ = I;
v) e^A is invertible, with inverse e^{−A};
vi) e^{At} is differentiable, and d/dt e^{At} = Ae^{At}.
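These properties are easy to probe numerically; the sketch below (NumPy/SciPy, an illustrative choice) checks ii) and v) on a pair of commuting matrices:

import numpy as np
from scipy.linalg import expm

# Two commuting matrices: polynomials in the same matrix commute.
M = np.array([[0.0, 1.0], [-2.0, -3.0]])
A, B = M, M @ M + 2 * np.eye(2)
assert np.allclose(A @ B, B @ A)

# ii) e^{A+B} = e^A e^B when AB = BA.
print(np.allclose(expm(A + B), expm(A) @ expm(B)))  # True
# v) the inverse of e^A is e^{-A}.
print(np.allclose(expm(A) @ expm(-A), np.eye(2)))   # True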
Use of the Jordan form to compute the exponential of a matrix. Suppose that J = P⁻¹AP is the Jordan form of the matrix A. For a block diagonal matrix
B = diag(B₁, . . . , B_s),
we have, for k = 0, 1, . . .,
Bᵏ = diag(B₁ᵏ, . . . , B_sᵏ).
Therefore, for t ∈ R,
e^{Jt} = diag(e^{J₀t}, e^{J₁t}, . . . , e^{J_st}),
with
e^{J₀t} = diag(e^{λ₁t}, . . . , e^{λ_qt}).
Now, since Jᵢ = λ_{q+i} Iᵢ + Nᵢ, with Nᵢ a nilpotent matrix, and since Iᵢ and Nᵢ commute, there holds
e^{Jᵢt} = e^{λ_{q+i}t} e^{Nᵢt}.
For k ≥ nᵢ, Nᵢᵏ = 0, so
e^{tJᵢ} = e^{λ_{q+i}t} \begin{pmatrix} 1 & t & t²/2 & \cdots & t^{nᵢ−1}/(nᵢ−1)! \\ & 1 & t & \cdots & t^{nᵢ−2}/(nᵢ−2)! \\ & & \ddots & & \vdots \\ & & & & 1 \end{pmatrix}.
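A quick check of this formula on a single Jordan block (an illustrative sketch):

import numpy as np
from scipy.linalg import expm
from math import factorial

lam, n, t = 2.0, 4, 0.7
J = lam * np.eye(n) + np.diag(np.ones(n - 1), k=1)  # one Jordan block

# Closed form: entries (i, j) of e^{tJ} are e^{lam t} t^{j-i}/(j-i)! for j >= i.
closed = np.exp(lam * t) * np.array(
    [[t**(j - i) / factorial(j - i) if j >= i else 0.0
      for j in range(n)] for i in range(n)])

print(np.allclose(expm(t * J), closed))  # True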
A.11 Matrix logarithms
Recall that, for |u| < 1,
ln(1 + u) = Σ_{k=1}^{∞} (−1)^{k+1} uᵏ/k,
and that, for any z ∈ C,
exp(z) = 1 + z + z²/2! + z³/3! + · · · = Σ_{n=0}^{∞} zⁿ/n!.
Composing the two series gives exp(ln(1 + u)) = 1 + u.
Suppose that J = λI + Z is an m × m Jordan block, with λ ≠ 0 and
Z = \begin{pmatrix} 0 & 1 & & \\ & \ddots & \ddots & \\ & & 0 & 1 \\ & & & 0 \end{pmatrix},
and let
B = (ln λ) I + Σ_{k=1}^{∞} ((−1)^{k+1}/k) (Z/λ)ᵏ.
Since Z is nilpotent (Z^N = 0 for all N ≥ m), the above sum is finite. Observe that
exp(B) = exp((ln λ)I) exp( Σ_{k=1}^{∞} ((−1)^{k+1}/k) (Z/λ)ᵏ )
= λ exp( ln(I + Z/λ) )
= λ (I + Z/λ)
= λI + Z
= J.
We say that ln J = B.
A.12 Spectral theorems
Theorem (Gershgorin). Let A = (a_ij) ∈ Mₙ(C) and, for each j, let Rⱼ = Σ_{i=1, i≠j}^{n} |a_ij|. Then every eigenvalue λ of A lies in one of the discs
{λ : |λ − a_jj| ≤ Rⱼ}.
Appendix B
Problem sheets
Contents
Homework sheet 1 — 2003
Homework sheet 2 — 2003
Homework sheet 3 — 2003
Homework sheet 4 — 2003
Final examination — 2003
Homework sheet 1 — 2006
Exercise. Consider the scalar initial value problem
x′(t) = −x(t) + f(x(t)), t ∈ R₊, (B.1)
where f is continuous. One first shows that x is a solution of (B.1) on R₊ if and only if x is continuous on R₊ and satisfies the integral equation
x(t) = e^{−t} x(0) + e^{−t} ∫₀^t eˢ f(x(s)) ds. (B.2)
It is further assumed that there exists k < 1 such that
|f(u)| ≤ k|u|. (B.3)
Part I. Suppose that there exists a solution x of (B.1), defined on R₊ and satisfying the inequality
|x(t)| ≤ a, t ∈ R₊. (B.4)
i) Prove the inequality
|x(t)| ≤ |x(0)| e^{(k−1)t}, t ∈ R₊.
Exercise. Consider the initial value problems
φ′ = aφ + f(t), φ(0) = 0, (B.5)
ψ′ = aψ + f′(t), ψ(0) = 0. (B.6)
As these equations are linear, the initial value problems (B.5) and (B.6) admit unique solutions. We denote by φ the solution to (B.5) and by ψ the solution to (B.6). Find a necessary and sufficient condition on f such that φ′ = ψ.
[Hint: use a variation of constants formula.]
Exercise. Consider the autonomous system
x′ = f(x). (B.7)
i) Let x be a solution of (B.7) defined on a bounded interval [0, β), with β > 0. Suppose that t ↦ f(x(t)) is bounded on [0, β).
a) Consider the sequence
z_{β,n} = x(β − 1/n), n ∈ N*.
Solution. Suppose x solves (B.1). Multiplying by eᵗ and integrating,
∫₀^t eˢ x′(s) ds = −∫₀^t eˢ x(s) ds + ∫₀^t eˢ f(x(s)) ds
[eˢ x(s)]₀^t − ∫₀^t eˢ x(s) ds = −∫₀^t eˢ x(s) ds + ∫₀^t eˢ f(x(s)) ds
eᵗ x(t) − x(0) = ∫₀^t eˢ f(x(s)) ds, for all t ∈ R₊
x(t) = e^{−t} x(0) + e^{−t} ∫₀^t eˢ f(x(s)) ds, for all t ∈ R₊.
Let us now prove the converse, i.e., that if a function x satisfies (B.2), then it is a solution to the IVP (B.1). Since x and f are continuous on R₊, t ↦ eᵗ f(x(t)) is continuous on R₊. This implies that
t ↦ ∫₀^t eˢ f(x(s)) ds
is C¹, and differentiating (B.2) gives
x′(t) = −x(t) + f(x(t)),
which implies that x is a solution to (B.1).
Part I. We now assume that (B.3) is also satisfied, and that there exists a solution x on R₊ satisfying (B.4).
1) If x is a solution of (B.1), then
x(t) = e^{−t} x(0) + e^{−t} ∫₀^t eˢ f(x(s)) ds,
so
|x(t)| ≤ e^{−t} |x(0)| + e^{−t} ∫₀^t eˢ |f(x(s))| ds
eᵗ |x(t)| ≤ |x(0)| + ∫₀^t k eˢ |x(s)| ds.
Applying Gronwall's inequality with g(s) = eˢ|x(s)|, K = |x(0)| and L = k,
eᵗ |x(t)| ≤ |x(0)| e^{kt},
and so finally, for all t ∈ R₊,
|x(t)| ≤ |x(0)| e^{(k−1)t}.
Uniqueness: let φ be a solution of (B.1) such that φ(0) = 0. This implies that |φ(0)| < a, and as a consequence it follows from Part I, 1) that, for all t ∈ R₊, |φ(t)| ≤ |φ(0)| e^{(k−1)t}; hence, for all t ∈ R₊, φ(t) = 0.
Part III. All solutions of the nonlinear equation x′ = −x + (1/α) ln(1 + x²) tend to zero as t → ∞ when α > 1. Indeed, let f(u) = (1/α) ln(1 + u²). We have
|f′(u)| = (1/α) · 2|u|/(1 + u²) ≤ 1/α < 1.
Solution. The solutions take the form
φ(t) = e^{at} ( c₁ + ∫₀^t e^{−as} f(s) ds )
and
ψ(t) = e^{at} ( c₂ + ∫₀^t e^{−as} f′(s) ds ).
Let us show this for the solution φ of (B.5); the solution ψ of (B.6) is obtained using exactly the same method. The solution to a linear differential equation consists of the general solution to the homogeneous equation together with a particular solution to the nonhomogeneous equation. Here, the homogeneous equation is
x′ = ax,
and basic integration yields the general solution x(t) = c₁e^{at}. To obtain a particular solution to the nonhomogeneous equation (B.5), we use a variation of constants formula: assume that the constant in the solution x(t) = ce^{at} is a function of time, hence
x(t) = c(t)e^{at}.
Taking the derivative of this expression, we obtain
x′ = c′e^{at} + ace^{at}.
Substituting both this expression and x = ce^{at} into (B.5), we get
c′e^{at} + ace^{at} − ace^{at} = f(t),
and hence
c′ = e^{−at} f(t).
Integrating both sides of this expression gives
c(t) = ∫₀^t e^{−as} f(s) ds.
Summing the general solution to the homogeneous equation with this last expression gives the desired result. Using the initial conditions yields
φ(0) = c₁ = 0, ψ(0) = c₂ = 0.
Hence the system under consideration is
φ(t) = ∫₀^t e^{a(t−s)} f(s) ds, t ≥ 0,
ψ(t) = ∫₀^t e^{a(t−s)} f′(s) ds, t ≥ 0.
Recall the Leibniz integral rule: if g(t) = ∫_{t₀}^{t} h(s, t) ds, then
g′(t) = h(t, t) + ∫_{t₀}^{t} ∂h/∂t (s, t) ds.
It follows that
φ′(t) = f(t) + a ∫₀^t e^{a(t−s)} f(s) ds.
The condition φ′ = ψ thus reads
f(t) + a ∫₀^t e^{a(t−s)} f(s) ds = ∫₀^t e^{a(t−s)} f′(s) ds.
Integrating the right hand side by parts,
∫₀^t e^{a(t−s)} f′(s) ds = [e^{a(t−s)} f(s)]_{s=0}^{s=t} + a ∫₀^t e^{a(t−s)} f(s) ds,
so the condition becomes
f(t) = [e^{a(t−s)} f(s)]_{s=0}^{s=t} = f(t) − e^{at} f(0),
i.e., f(0) = 0.
Solution. 1 a) Since |x′(t)| = |f(x(t))| ≤ M on [0, β), the mean value theorem gives
‖z_{β,n+p} − z_{β,n}‖ = ‖x(β − 1/(n+p)) − x(β − 1/n)‖ ≤ M |1/n − 1/(n+p)| ≤ M/n. (B.8)
1 b) Hence {z_{β,n}} is a Cauchy sequence, so there exists x* = lim_{n→∞} z_{β,n}. Thus
‖x(t) − z_{β,n}‖ ≤ M |t − β + 1/n| and ‖x(t) − x*‖ ≤ M |t − β|.
1 c) According to 1 b), we have
lim_{t→β, t<β} ‖x(t) − x*‖ = 0,
hence
lim_{t→β, t<β} x(t) = x*.
2) Let
z(t) = x(t) if t ∈ [0, β), z(β) = x*.
Let us show that z is a solution of (B.7) on [0, β]: z is a solution if z is continuous on [0, β] and satisfies the integral equation
z(t) = z(t₀) + ∫_{t₀}^{t} f(z(s)) ds
for all t ∈ [0, β], with an arbitrary t₀ ∈ [0, β]. We know by construction that z is continuous (since lim_{t→β⁻} x(t) = x*). Let t ∈ [0, β); then
z(t) = x(t) = x(t₀) + ∫_{t₀}^{t} f(x(s)) ds.
Taking the limit as t → β, t < β,
z(β) = lim_{t→β⁻} ( x(t₀) + ∫_{t₀}^{t} f(x(s)) ds ) = lim_{t→β⁻} ( z(t₀) + ∫_{t₀}^{t} f(z(s)) ds ).
So
z(β) = z(t₀) + ∫_{t₀}^{β} f(z(s)) ds.
Exercise 2.3. Let X(t) be a fundamental matrix for the system x′ = A(t)x, where A(t) is an n × n matrix with continuous entries on R. What conditions on A(t) and C guarantee that CX(t) is a fundamental matrix, where C is a constant matrix?
In the following, (B.9) denotes the homogeneous ω-periodic linear system x′ = A(t)x and (B.10) the corresponding forced system x′ = A(t)x + f(t), with A and f continuous and ω-periodic.
Solution (partial). 2) For a 2 × 2 matrix
A = \begin{pmatrix} a & b \\ c & d \end{pmatrix},
we have det e^A = e^{tr A}.
3) We can write
lim_{t→∞} e^{−λt} e^{At} = lim_{t→∞} e^{(A−λI)t}.
Let Sp(A) be the spectrum of A, i.e., the set of eigenvalues of A. Then, if μ ∈ Sp(A), μ − λ ∈ Sp(A − λI). We have lim_{t→∞} e^{(A−λI)t} = 0 if, and only if, ℜ(ν) < 0 for all ν ∈ Sp(A − λI), i.e., ℜ(μ) − λ < 0 for all μ ∈ Sp(A), i.e., ℜ(μ) < λ for all μ ∈ Sp(A). Hence, choosing λ greater than the maximal real part of the eigenvalues of A ensures that lim_{t→∞} e^{−λt} e^{At} = 0.
which implies that x₁ − x₂ is a solution to (B.9), and therefore dim P(ω) ≠ 0, a contradiction. Thus the solution to (B.10) is unique.
a) ⟹ c) Let x be the unique ω-periodic solution of (B.10), and assume that dim P(ω) ≠ 0, i.e., there exists y, a nontrivial ω-periodic solution to (B.9). Then
(x + y)′ = A(t)x + f(t) + A(t)y = A(t)(x + y) + f(t),
and so x + y is a solution to (B.10), which is a contradiction since x is the unique solution to (B.10).
Exercise. Consider the IVP
x′ = A(t)x, (B.11)
x(t₀) = x₀. (B.12)
We have seen that the solution of this initial value problem is given by
x(t) = R(t, t₀)x₀,
where R(t, t₀) is the resolvent matrix of the system. Suppose that we are in the case where the following condition holds:
A(t)A(s) = A(s)A(t) for all t, s ∈ I, (B.13)
with I ⊂ R.
i) Show that M(t) = exp(∫_{t₀}^{t} A(s) ds) satisfies M′(t) = A(t)M(t) and M(t₀) = Iₙ, where Iₙ is the n × n identity matrix. [Hint: use the formal definition of a derivative, i.e., M′(t) = lim_{h→0} (M(t + h) − M(t))/h.]
ii) Deduce that, when (B.13) holds,
R(t, t₀) = exp( ∫_{t₀}^{t} A(s) ds ).
Exercise. 1) Let
A(t) = \begin{pmatrix} a(t) & −b(t) \\ b(t) & a(t) \end{pmatrix}, (B.15)
and compute the resolvent R(t, t₀).
Exercise 3.4. Consider the system
x′ = (1/t)x + ty, y′ = y, (B.16)
for which
A(t) = \begin{pmatrix} 1/t & t \\ 0 & 1 \end{pmatrix},
and show that
R(t, t₀) ≠ exp( ∫_{t₀}^{t} A(s) ds ).
Solution. Consider the affine system (B.14), ξ′ = Aξ + B(t), with
A = \begin{pmatrix} 1 & 1 \\ −1 & 3 \end{pmatrix}
and B(t) = (t², 2t)ᵀ. This is a nonhomogeneous linear system, so we use the variation of constants formula,
ξ(t) = e^{(t−t₀)A} ξ₀ + ∫_{t₀}^{t} e^{(t−s)A} B(s) ds,
with e^{At} = P e^{(P⁻¹AP)t} P⁻¹ and e^{(P⁻¹AP)t} = e^{P⁻¹(At)P}.
We have P⁻¹AP = 2I + N, where
N = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}.
Therefore,
e^{(P⁻¹AP)t} = e^{2It + Nt} = e^{2t} e^{Nt},
and, since N² = 0, e^{Nt} = Σ_{n=0}^{∞} (tⁿ/n!) Nⁿ = I + Nt. As a consequence,
e^{(P⁻¹AP)t} = e^{2t}(I + Nt) = e^{2t} \begin{pmatrix} 1 & t \\ 0 & 1 \end{pmatrix} = \begin{pmatrix} e^{2t} & te^{2t} \\ 0 & e^{2t} \end{pmatrix}.
Thus
e^{At} = P \begin{pmatrix} e^{2t} & te^{2t} \\ 0 & e^{2t} \end{pmatrix} P⁻¹ = e^{2t} \begin{pmatrix} 1−t & t \\ −t & 1+t \end{pmatrix}.
We still have to compute ∫₀^t e^{−As} B(s) ds. We have
e^{−As} B(s) = e^{−2s} \begin{pmatrix} 1+s & −s \\ s & 1−s \end{pmatrix} \begin{pmatrix} s² \\ 2s \end{pmatrix} = e^{−2s} \begin{pmatrix} s³ − s² \\ s³ − 2s² + 2s \end{pmatrix}.
Let I₁(t) = ∫₀^t e^{−2s} s ds, I₂(t) = ∫₀^t e^{−2s} s² ds and I₃(t) = ∫₀^t e^{−2s} s³ ds. Then
∫₀^t e^{−As} B(s) ds = \begin{pmatrix} I₃(t) − I₂(t) \\ I₃(t) − 2I₂(t) + 2I₁(t) \end{pmatrix}.
Evaluating these integrals (each a standard integration by parts) and substituting everything back into the variation of constants formula yields the explicit solution ξ(t) in closed form.
Solution. i) Using (B.13),
M(t + h) = exp( ∫_{t₀}^{t+h} A(s) ds ) = exp( ∫_{t₀}^{t} A(s) ds ) exp( ∫_{t}^{t+h} A(s) ds ),
since the two integrals commute. It follows that
(1/h)(M(t + h) − M(t)) = (1/h) [ exp( ∫_{t₀}^{t} A(s) ds ) exp( ∫_{t}^{t+h} A(s) ds ) − exp( ∫_{t₀}^{t} A(s) ds ) ]
= M(t) (1/h) [ exp( ∫_{t}^{t+h} A(s) ds ) − I ]
= (1/h) [ exp( ∫_{t}^{t+h} A(s) ds ) − I ] M(t).
Letting h → 0, (1/h) ∫_{t}^{t+h} A(s) ds → A(t), so M′(t) = A(t)M(t) = M(t)A(t). Moreover, M(t₀) = Iₙ.
Writing A(t) = a(t)I + b(t)V, with V = \begin{pmatrix} 0 & −1 \\ 1 & 0 \end{pmatrix}, it is obvious that Theorem B.3.5 can be used here, since I commutes with all matrices. Thus,
R(t, t₀) = exp( ∫_{t₀}^{t} a(s) ds I ) exp( ∫_{t₀}^{t} b(s) ds V ).
Let α(t) = ∫_{t₀}^{t} a(s) ds and β(t) = ∫_{t₀}^{t} b(s) ds. Then R(t, t₀) = e^{α(t)I} e^{β(t)V} = e^{α(t)} I e^{β(t)V}. Now notice that V² = −I, V³ = −V, etc., so that we can write
Vⁿ = (−1)ᵖ I if n = 2p, (−1)ᵖ V if n = 2p + 1.
This implies that
e^{β(t)V} = Σ_{n=0}^{∞} (1/n!) β(t)ⁿ Vⁿ
= ( Σ_{p=0}^{∞} ((−1)ᵖ/(2p)!) β(t)^{2p} ) I + ( Σ_{p=0}^{∞} ((−1)ᵖ/(2p+1)!) β(t)^{2p+1} ) V
= cos(β(t)) I + sin(β(t)) V.
Thus
R(t, t₀) = e^{α(t)} ( cos(β(t)) I + sin(β(t)) V ) = \begin{pmatrix} e^{α(t)} cos(β(t)) & −e^{α(t)} sin(β(t)) \\ e^{α(t)} sin(β(t)) & e^{α(t)} cos(β(t)) \end{pmatrix}.
Solution Exercise 3.4. 1) Notice that the y′ equation in (B.16) does not involve x. Therefore, we can solve it directly, giving y(t) = Ceᵗ, with C ∈ R. Substituting this into the equation for x′, we have
x′ = (1/t) x(t) + tCeᵗ.
To solve this nonhomogeneous first-order scalar equation, we start by solving the homogeneous part, x′ = x/t. This equation is separable, giving the solution x(t) = Kt, for K ∈ R. Now we use a variation of constants approach to find a particular solution to the nonhomogeneous problem. We use the ansatz x(t) = K(t)t, which, when differentiated and substituted into the nonhomogeneous equation, gives K′(t) = Ceᵗ, and hence x(t) = Cteᵗ is a particular solution, giving the general solution x(t) = Kt + Cteᵗ.
Let t₀ ≠ 0 (to avoid problems with 1/t), and suppose x(t₀) = x₀, y(t₀) = y₀. Then x₀ = Kt₀ + Ct₀e^{t₀} and y₀ = Ce^{t₀}. It follows that K = x₀/t₀ − y₀ and C = e^{−t₀}y₀, and the solution to the equation going through the point (x₀, y₀) at time t₀ is given by
\begin{pmatrix} x(t) \\ y(t) \end{pmatrix} = \begin{pmatrix} (x₀/t₀ − y₀)t + y₀ t e^{t−t₀} \\ y₀ e^{t−t₀} \end{pmatrix} = \begin{pmatrix} (x₀/t₀)t + y₀ t(e^{t−t₀} − 1) \\ y₀ e^{t−t₀} \end{pmatrix}.
2) We have
B(t) = ∫_{t₀}^{t} A(s) ds = \begin{pmatrix} ∫_{t₀}^{t} (1/s) ds & ∫_{t₀}^{t} s ds \\ 0 & ∫_{t₀}^{t} ds \end{pmatrix} = \begin{pmatrix} ln(t/t₀) & (t² − t₀²)/2 \\ 0 & t − t₀ \end{pmatrix},
which, for convenience, we denote
B(t) = \begin{pmatrix} α & γ \\ 0 & β \end{pmatrix}.
Eigenvalues of B(t) are α and β. As R(t₀, t₀) = e^{B(t₀)} = I, we are concerned with finding a t ≠ t₀ such that e^{B(t)} can be compared with R(t, t₀). If t is such that α ≠ β, then B(t) is diagonalizable, i.e., there exists P nonsingular such that
P⁻¹ B(t) P = D = \begin{pmatrix} α & 0 \\ 0 & β \end{pmatrix}.
Then
e^{B(t)} = P \begin{pmatrix} e^{α} & 0 \\ 0 & e^{β} \end{pmatrix} P⁻¹.
We find
P = \begin{pmatrix} 1 & γ/(β−α) \\ 0 & 1 \end{pmatrix}, P⁻¹ = \begin{pmatrix} 1 & −γ/(β−α) \\ 0 & 1 \end{pmatrix},
so
e^{B(t)} = \begin{pmatrix} e^{α} & (γ/(β−α))(e^{β} − e^{α}) \\ 0 & e^{β} \end{pmatrix} = \begin{pmatrix} t/t₀ & (γ/(β−α))(e^{t−t₀} − t/t₀) \\ 0 & e^{t−t₀} \end{pmatrix}.
The (1, 2) element in this matrix is the only one possibly different from the corresponding element of R(t, t₀). We have
( (t² − t₀²) / (2(t − t₀ − ln(t/t₀))) ) (e^{t−t₀} − t/t₀) ≠ t(e^{t−t₀} − 1)
in general, whence R(t, t₀) ≠ e^{B(t)}.
Exercise. An equation of the form
x′(t) = f(t, x(t), x(t − τ)), (B.17)
for τ > 0, is called a delay differential equation (also a differential difference equation, or an equation with deviating argument), and τ is called the delay. The basic initial value problem for (B.17) takes the form
x′ = f(t, x(t), x(t − τ)),
x(t) = φ₀(t), t₀ − τ ≤ t ≤ t₀. (B.18)
i) Use the method of steps to construct the solution to (B.18) on the interval [t₀, t₀ + τ], that is, find how to construct the solution of the non-delayed problem
x′ = f(t, x(t), φ₀(t − τ)),
x(t₀) = φ₀(t₀), t₀ ≤ t ≤ t₀ + τ. (B.19)
ii) Discuss existence and uniqueness of the solution on the interval [t₀, t₀ + τ], depending on the nature of φ₀ and f.
iii) Suppose that φ₀ ∈ C⁰([t₀ − τ, t₀]). Discuss the regularity of the solution to (B.18) on the interval [t₀ + kτ, t₀ + (k + 1)τ], k ∈ N.
Exercise. Consider the delay initial value problem
x′(t) = a x(t − τ), x(t) = C for t₀ − τ ≤ t ≤ t₀, (B.20)
with a, C ∈ R, τ ∈ R₊*. Using the ideas of the previous exercise, find the solution to (B.20) on the interval [t₀ + kτ, t₀ + (k + 1)τ], k ∈ N.
Exercise 3.3. Compute Aᵢⁿ and e^{tAᵢ} for the following matrices:
A₁ = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, A₂ = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}, A₃ = \begin{pmatrix} 0 & 1 & −sin θ \\ −1 & 0 & cos θ \\ sin θ & −cos θ & 0 \end{pmatrix}.
Consider the matrix
\[
A = \begin{pmatrix} 1 & 1 & 0 \\ 1 & 0 & 1 \\ 0 & 1 & 1 \end{pmatrix}.
\]
a) Show that for all $t \in \mathbb{R}$ and all $k \in \mathbb{N}^*$, there exists $C_k(t) \ge 0$ such that
\[
||| e^{\frac{t}{k}A} - \left(I + \tfrac{t}{k}A\right) ||| \le \frac{1}{k^2}\, C_k(t),
\]
with $\lim_{k\to\infty} C_k(t) = \frac{t^2}{2}\,|||A^2|||$.
b) Show that, for all $t \in \mathbb{R}$ and $k \in \mathbb{N}^*$, $||| I + \tfrac{t}{k}A ||| \le e^{\frac{|t|}{k}|||A|||}$.
c) Deduce that
\[
e^{tA} = \lim_{k\to\infty}\left(I + \frac{t}{k}A\right)^{k}.
\]
ii) Suppose now that $A$ is symmetric and that its eigenvalues $\lambda_i$ satisfy $1 + \alpha\lambda_i > 0$, where $\alpha > 0$ is given.
a) Show by induction that, for $k \ge 0$,
\[
(I + \alpha A)^{-(k+1)} = \int_0^{\infty} e^{-(I + \alpha A)t}\,\frac{t^k}{k!}\,dt.
\]
b) Show that if $||| e^{-At} ||| \le M$ for all $t > 0$, then $||| (I + \alpha A)^{-k} ||| \le M$ for all $\alpha > 0$ and $k \in \mathbb{N}^*$, and conversely.
c) Show that
\[
\left(\forall t > 0,\; e^{-tA} \ge 0\right) \iff \left(\forall \alpha > 0,\; (I + \alpha A)^{-1} \ge 0\right). \tag{B.21}
\]
Solution Exercise 1 i) On the interval $[t_0, t_0+\tau]$, (B.18) reduces to the non-delayed problem $x' = g(t, x(t))$ with $g(t, x(t)) = f(t, x(t), \phi_0(t - \tau))$, which is well defined on the interval $[t_0, t_0 + \tau]$ since, for $t \in [t_0, t_0 + \tau]$, $t - \tau \in [t_0 - \tau, t_0]$, on which the function $\phi_0$ is defined.
We can then use the integral form to construct the solution on the interval $[t_0, t_0 + \tau]$,
\[
x(t) = x(t_0) + \int_{t_0}^{t} g(s, x(s))\,ds
= \phi_0(t_0) + \int_{t_0}^{t} f(s, x(s), \phi_0(s - \tau))\,ds.
\]
Solution Exercise 2 We proceed as previously explained. We assume for simplicity that $t_0 = 0$. To find the solution on the interval $[0, \tau]$, we consider the non-delayed IVP
\[
x_1' = a\, x_0(t - \tau), \qquad x_1(0) = C,
\]
where $x_0(t) = C$ for $t \in [-\tau, 0]$. The solution to this IVP is straightforward, $x_1(t) = C + aCt = C(1 + at)$, defined on the interval $[0, \tau]$. To integrate on the second interval, we consider the IVP
\[
x_2' = a\left[C\left(1 + a(t - \tau)\right)\right], \qquad x_2(\tau) = x_1(\tau) = C + aC\tau.
\]
Hence we find the solution to the differential equation to be, on the interval $[\tau, 2\tau]$,
\[
x_2(t) = C\left(1 + at + \frac{a^2}{2}\left(t - \tau\right)^2\right).
\]
Proceeding in the same way on $[2\tau, 3\tau]$,
\[
x_3(t) = C\left(1 + at + \frac{a^2}{2}\left(t - \tau\right)^2 + \frac{a^3}{6}\left(t - 2\tau\right)^3\right).
\]
We develop the intuition that the solution at step $n$ (i.e., on the interval $[(n-1)\tau, n\tau]$) must take the form
\[
x_n(t) = C \sum_{k=0}^{n} a^k\,\frac{\left(t - (k-1)\tau\right)^k}{k!}. \tag{B.22}
\]
This can be proved by induction (we will not do it here).
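The closed form (B.22) can be checked numerically against a direct method-of-steps integration. A minimal sketch, assuming $t_0 = 0$ and the concrete values $a = 0.7$, $C = 1$, $\tau = 1$ (all our own choices):

```python
# Method-of-steps check of (B.22) for x'(t) = a x(t - tau), x == C on [-tau, 0].
# Parameters and helper names are our own; t0 = 0 is assumed.
import math

a, C, tau = 0.7, 1.0, 1.0

def x_closed(t):
    """(B.22): x_n(t) = C sum_{k=0}^n a^k (t-(k-1)tau)^k / k! on [(n-1)tau, n tau]."""
    n = max(0, math.ceil(t / tau))
    return C * sum(a**k * (t - (k - 1) * tau)**k / math.factorial(k)
                   for k in range(n + 1))

# Direct method of steps with explicit Euler on a fine grid.
h, T = 1e-4, 3.0
hist = [C] * (int(tau / h) + 1)       # values of x on [-tau, 0]
x = C
for i in range(int(T / h)):
    x += h * a * hist[i]              # x'(t) = a x(t - tau)
    hist.append(x)

print(abs(x - x_closed(T)))           # small (Euler error only): (B.22) checks out
```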
For matrix $A_1$, remark that $A_1^2 = I$, so that $A_1^{2n} = I$ and $A_1^{2n+1} = A_1$. Splitting the exponential series into even and odd powers,
\[
e^{A_1 t} = \sum_{n=0}^{\infty} \frac{t^{2n}}{(2n)!} A_1^{2n} + \sum_{n=0}^{\infty} \frac{t^{2n+1}}{(2n+1)!} A_1^{2n+1}
= \left(\sum_{n=0}^{\infty} \frac{t^{2n}}{(2n)!}\right) I + \left(\sum_{n=0}^{\infty} \frac{t^{2n+1}}{(2n+1)!}\right) A_1
= \cosh t\; I + \sinh t\; A_1
= \begin{pmatrix} \cosh t & \sinh t \\ \sinh t & \cosh t \end{pmatrix}.
\]
For matrix $A_2$, remark that it can be written as $A_2 = I + N$, where
\[
N = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}.
\]
Since $N^2 = 0$, $(I + N)^n = I + nN$, and it follows that
\[
e^{A_2 t} = \sum_{n=0}^{\infty} \frac{t^n}{n!}(I + N)^n
= \left(\sum_{n=0}^{\infty} \frac{t^n}{n!}\right) I + t\left(\sum_{n=1}^{\infty} \frac{t^{n-1}}{(n-1)!}\right) N
= e^t I + t e^t N
= e^t \begin{pmatrix} 1 & t \\ 0 & 1 \end{pmatrix}.
\]
Finally, for matrix $A_3$, we use its Jordan canonical form
\[
J = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 1 & 1 \\ 0 & 0 & 1 \end{pmatrix}.
\]
Now, to compute $e^{Jt}$, remark that $J$ has the block form
\[
J = \begin{pmatrix} 0 & 0 \\ 0 & \tilde{A} \end{pmatrix}, \qquad \tilde{A} = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix},
\]
where the block $\tilde{A}$ is the matrix $A_2$, whose exponential has been computed above. Hence,
\[
e^{Jt} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & e^t & te^t \\ 0 & 0 & e^t \end{pmatrix}.
\]
We have $J = P^{-1}AP$, where
\[
P = \begin{pmatrix} 1 & 1 & 2 \\ 1 & 0 & 1 \\ 1 & 1 & 1 \end{pmatrix}, \qquad
P^{-1} = \begin{pmatrix} -1 & 1 & 1 \\ 0 & -1 & 1 \\ 1 & 0 & -1 \end{pmatrix}
\]
are the matrices of change of basis that transform $A$ to its Jordan canonical form. Then $A = PJP^{-1}$, and $e^{At} = P e^{Jt} P^{-1}$, i.e.,
\[
e^{At} = \begin{pmatrix}
(2+t)e^t - 1 & 1 - e^t & 1 - (1+t)e^t \\
e^t - 1 & 1 & 1 - e^t \\
(1+t)e^t - 1 & 1 - e^t & 1 - te^t
\end{pmatrix}.
\]
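One can verify this Jordan-form computation against a library matrix exponential. A short sketch (our own check), in which $A$ is recovered as $PJP^{-1}$:

```python
# Check of e^{At} = P e^{Jt} P^{-1} using the P and J above (our own verification).
import numpy as np
from scipy.linalg import expm

P = np.array([[1.0, 1, 2], [1, 0, 1], [1, 1, 1]])
J = np.array([[0.0, 0, 0], [0, 1, 1], [0, 0, 1]])
Pinv = np.linalg.inv(P)
A = P @ J @ Pinv

t = 0.5
eJt = np.array([[1.0, 0, 0],
                [0, np.exp(t), t * np.exp(t)],
                [0, 0, np.exp(t)]])
print(np.allclose(P @ eJt @ Pinv, expm(A * t)))   # True
```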
Thus
\[
e^{\frac{t}{k}A} - I - \frac{t}{k}A = \frac{t^2}{2k^2}A^2 + \frac{t^2}{k^2}\,\varepsilon\!\left(\frac{t}{k}\right).
\]
Therefore, taking the norm $|||\cdot|||$ of this expression,
\[
||| e^{\frac{t}{k}A} - \left(I + \tfrac{t}{k}A\right) ||| \le \frac{t^2}{2k^2}\,|||A^2||| + \frac{t^2}{k^2}\,||| \varepsilon\!\left(\tfrac{t}{k}\right) ||| = \frac{1}{k^2}\,C_k(t).
\]
We have
\[
\frac{t^2}{k^2}\,||| \varepsilon\!\left(\tfrac{t}{k}\right) ||| \le \sum_{j=3}^{\infty} \frac{|t|^j}{k^j\, j!}\,|||A|||^j.
\]
Let $S_k = \sum_{j=3}^{\infty} \frac{t^j}{k^{j-2}\, j!}\,|||A|||^j$. This series is uniformly convergent, which implies that we can change the order in the following limit,
\[
\lim_{k\to\infty} S_k = \sum_{j=3}^{\infty} \lim_{k\to\infty} \frac{t^j}{k^{j-2}\, j!}\,|||A|||^j = 0,
\]
so that $\lim_{k\to\infty} C_k(t) = \frac{t^2}{2}\,|||A^2|||$.
1b) We have already seen (Exercise 2, Assignment 2) that $|||e^A||| \le e^{|||A|||}$. Therefore,
\[
||| e^{\frac{t}{k}A} ||| \le e^{\frac{|t|}{k}|||A|||}.
\]
But
\[
e^{\frac{|t|}{k}|||A|||} = \sum_{j=0}^{\infty} \frac{1}{j!}\,\frac{|t|^j}{k^j}\,|||A|||^j
= 1 + \frac{|t|}{k}\,|||A||| + \frac{|t|^2}{2k^2}\,|||A|||^2 + \cdots
\]
which, since all terms are nonnegative, gives
\[
e^{\frac{|t|}{k}|||A|||} \ge 1 + \frac{|t|}{k}\,|||A||| = |||I||| + ||| \tfrac{t}{k}A ||| \ge ||| I + \tfrac{t}{k}A |||.
\]
1c) Since $e^{At} - \left(I + \frac{t}{k}A\right)^k = \left(e^{\frac{t}{k}A}\right)^k - \left(I + \frac{t}{k}A\right)^k$, we have
\[
||| e^{At} - \left(I + \tfrac{t}{k}A\right)^k ||| \le ||| e^{\frac{t}{k}A} - \left(I + \tfrac{t}{k}A\right) ||| \;\sum_{j=0}^{k-1} ||| e^{\frac{t(k-1-j)}{k}A} |||\; ||| \left(I + \tfrac{t}{k}A\right)^j |||, \tag{B.23}
\]
since for two matrices $E, F \in M_n(\mathbb{R})$ (or $M_n(\mathbb{C})$) that commute, $E^k - F^k = (E - F)\sum_{j=0}^{k-1} E^{k-1-j}F^j$.
Now, we have $||| e^{\frac{t(k-1-j)}{k}A} ||| \le e^{\frac{|t|(k-1-j)}{k}|||A|||}$. Also,
\[
||| \left(I + \tfrac{t}{k}A\right)^j ||| \le ||| I + \tfrac{t}{k}A |||^j \le e^{\frac{j|t|}{k}|||A|||},
\]
where the last inequality results from question 1b). Therefore,
\[
||| e^{At} - \left(I + \tfrac{t}{k}A\right)^k ||| \le \frac{1}{k^2}\,C_k(t)\sum_{j=0}^{k-1} e^{\frac{j|t|}{k}|||A|||}\, e^{\frac{|t|(k-1-j)}{k}|||A|||}
= \frac{1}{k^2}\,C_k(t)\sum_{j=0}^{k-1} e^{\frac{(k-1)|t|}{k}|||A|||}
= \frac{1}{k^2}\,C_k(t)\,k\, e^{\frac{k-1}{k}|t|\,|||A|||}
= \frac{1}{k}\,C_k(t)\, e^{\frac{k-1}{k}|t|\,|||A|||}.
\]
We thus have
\[
||| e^{At} - \left(I + \tfrac{t}{k}A\right)^k ||| \le \frac{1}{k}\,C_k(t)\, e^{\frac{k-1}{k}|t|\,|||A|||} = \frac{1}{k}\,D_k(t).
\]
As $k \to \infty$, $C_k(t) \to \frac{t^2}{2}\,|||A^2|||$ and $e^{\frac{k-1}{k}|t|\,|||A|||} \to e^{|t|\,|||A|||}$. Therefore, $\lim_{k\to\infty} D_k(t) = \frac{t^2}{2}\,|||A^2|||\, e^{|t|\,|||A|||}$, which in turn implies that $\lim_{k\to\infty}\left(I + \frac{t}{k}A\right)^k = e^{At}$.
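The $\frac{1}{k}D_k(t)$ bound suggests an error decaying like $1/k$; this is easy to observe numerically. A sketch (our own illustration) using the matrix $A$ of the exercise:

```python
# Illustration of e^{tA} = lim_k (I + (t/k)A)^k and of the O(1/k) error rate.
import numpy as np
from scipy.linalg import expm
from numpy.linalg import matrix_power, norm

A = np.array([[1.0, 1, 0], [1, 0, 1], [0, 1, 1]])   # the matrix of the exercise
t = 1.0
target = expm(t * A)
for k in [10, 100, 1000, 10000]:
    err = norm(matrix_power(np.eye(3) + (t / k) * A, k) - target)
    print(k, err)    # errors decrease roughly like 1/k
```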
2a) We now suppose that $A$ is a symmetric matrix. Recall that any symmetric matrix is diagonalizable, with real eigenvalues. Furthermore, there exists a matrix $P$ such that $P^{-1} = P^{T}$ and $P^{T}AP = \mathrm{diag}(\lambda_i)$, with $\lambda_i \in \mathrm{Sp}(A)$. We assume that the eigenvalues $\lambda_i$, $i = 1, \ldots, n$, are such that $1 + \alpha\lambda_i > 0$, for the given $\alpha > 0$. We want to show by induction that the following holds:
\[
\forall k \ge 0, \quad (I + \alpha A)^{-(k+1)} = \int_0^{\infty} e^{-(I + \alpha A)t}\,\frac{t^k}{k!}\,dt. \tag{B.24}
\]
For $k = 0$, using $e^{-(I+\alpha A)t} = -(I+\alpha A)^{-1}\frac{d}{dt}e^{-(I+\alpha A)t}$,
\[
\int_0^{\infty} e^{-(I+\alpha A)t}\,dt
= -(I+\alpha A)^{-1}\int_0^{\infty} \frac{d}{dt} e^{-(I+\alpha A)t}\,dt
= -(I+\alpha A)^{-1}\left[e^{-(I+\alpha A)t}\right]_0^{\infty}
= -(I+\alpha A)^{-1}\left(\lim_{t\to\infty} e^{-(I+\alpha A)t} - I\right). \tag{B.25}
\]
Now,
\[
e^{-(I+\alpha A)t} = P \begin{pmatrix} e^{-(1+\alpha\lambda_1)t} & & 0 \\ & \ddots & \\ 0 & & e^{-(1+\alpha\lambda_n)t} \end{pmatrix} P^{-1}.
\]
Since for all $i = 1, \ldots, n$, $1 + \alpha\lambda_i > 0$, it follows that $\lim_{t\to\infty} e^{-(1+\alpha\lambda_i)t} = 0$, which in turn implies that $\lim_{t\to\infty} e^{-(I+\alpha A)t} = 0$. Using this in (B.25) gives (B.24) for $k = 0$.
Now assume (B.24) holds for $k = j - 1$, i.e.,
\[
(I + \alpha A)^{-j} = \int_0^{\infty} e^{-(I+\alpha A)t}\,\frac{t^{j-1}}{(j-1)!}\,dt.
\]
Then, integrating by parts,
\[
\int_0^{\infty} e^{-(I+\alpha A)t}\,\frac{t^j}{j!}\,dt
= \int_0^{\infty} -(I+\alpha A)^{-1}\left(\frac{d}{dt}e^{-(I+\alpha A)t}\right)\frac{t^j}{j!}\,dt
= -(I+\alpha A)^{-1}\left(\left[e^{-(I+\alpha A)t}\,\frac{t^j}{j!}\right]_0^{\infty} - \int_0^{\infty} e^{-(I+\alpha A)t}\,\frac{t^{j-1}}{(j-1)!}\,dt\right).
\]
As we did in the $k = 0$ case, we now use the bound on the eigenvalues to get rid of the bracketed term:
\[
\frac{t^j}{j!}\, e^{-(I+\alpha A)t} = P \begin{pmatrix} \frac{t^j}{j!}e^{-(1+\alpha\lambda_1)t} & & 0 \\ & \ddots & \\ 0 & & \frac{t^j}{j!}e^{-(1+\alpha\lambda_n)t} \end{pmatrix} P^{-1} \longrightarrow 0 \quad \text{as } t \to \infty.
\]
Therefore,
\[
\int_0^{\infty} e^{-(I+\alpha A)t}\,\frac{t^j}{j!}\,dt
= (I + \alpha A)^{-1}\int_0^{\infty} e^{-(I+\alpha A)t}\,\frac{t^{j-1}}{(j-1)!}\,dt
= (I + \alpha A)^{-1}(I + \alpha A)^{-j}
= (I + \alpha A)^{-(j+1)},
\]
from which we deduce that (B.24) holds for all $k \ge 0$.
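Identity (B.24) can also be checked by numerical quadrature. A sketch under the standing assumptions (symmetric $A$ with $1 + \alpha\lambda_i > 0$; the matrix and parameters are our own choices):

```python
# Quadrature check of (B.24): (I + aA)^{-(k+1)} = int_0^inf e^{-(I+aA)t} t^k/k! dt.
# Matrix, alpha and k are our own example; the integral is truncated in the tail.
import numpy as np
from scipy.linalg import expm
from scipy.integrate import trapezoid
from math import factorial

A = np.array([[0.0, 1.0], [1.0, 0.0]])   # symmetric, eigenvalues -1 and 1
alpha, k = 0.5, 2                         # 1 + alpha*lambda_i in {0.5, 1.5} > 0
M = np.eye(2) + alpha * A

ts = np.linspace(0.0, 60.0, 60001)
vals = np.array([expm(-M * t) * t**k / factorial(k) for t in ts])
integral = trapezoid(vals, ts, axis=0)
lhs = np.linalg.matrix_power(np.linalg.inv(M), k + 1)
print(np.allclose(integral, lhs, atol=1e-6))   # True
```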
2b) Let us begin with the implication $\left(\forall u > 0,\ \forall k \in \mathbb{N}^*,\ ||| (I + uA)^{-k} ||| \le M\right) \Rightarrow \left(\forall t > 0,\ ||| e^{-At} ||| \le M\right)$. We know from 1c) that $e^{At} = \lim_{k\to\infty}\left(I + \frac{t}{k}A\right)^k$. Thus
\[
e^{-At} = \left(\lim_{k\to\infty}\left(I + \frac{t}{k}A\right)^{k}\right)^{-1} = \lim_{k\to\infty}\left(I + \frac{t}{k}A\right)^{-k}, \tag{B.26}
\]
and taking $u = t/k$ in the hypothesis shows that every term of this sequence has norm at most $M$; hence $||| e^{-At} ||| \le M$.
For the converse, note that the argument of (B.24) gives
\[
(I + uA)^{-k} = u^{-k}\left(\frac{1}{u}I + A\right)^{-k} = u^{-k}\int_0^{\infty} e^{-\left(\frac{1}{u}I + A\right)t}\,\frac{t^{k-1}}{(k-1)!}\,dt.
\]
Therefore, if $||| e^{-At} ||| \le M$ for all $t > 0$, then
\[
||| (I + uA)^{-k} ||| \le \frac{u^{-k}}{(k-1)!}\int_0^{\infty} ||| e^{-At} |||\, e^{-\frac{t}{u}}\, t^{k-1}\,dt
\le \frac{M\, u^{-k}}{(k-1)!}\int_0^{\infty} e^{-\frac{t}{u}}\, t^{k-1}\,dt.
\]
The latter integral is $u^k\,\Gamma(k)$, where $\Gamma$ is the Gamma function. It is well known¹ that for $k \in \mathbb{N}^*$, $\Gamma(k) = (k-1)!$. Thus,
\[
||| (I + uA)^{-k} ||| \le M.
\]
2c) Let us begin with the forward implication ($\Rightarrow$): suppose that $e^{-tA} \ge 0$ for all $t > 0$. To apply (B.24) with $k = 0$, it suffices that the eigenvalues of $I + \alpha A$ be positive, which holds under the standing assumption. Then
\[
(I + \alpha A)^{-1} = \int_0^{\infty} e^{-(I + \alpha A)t}\,dt = \int_0^{\infty} e^{-t}\, e^{-\alpha At}\,dt \ge 0,
\]
since $e^{-\alpha At} = e^{-A(\alpha t)} \ge 0$ for all $t > 0$.
For the converse, suppose that $(I + \alpha A)^{-1} \ge 0$ for all $\alpha > 0$. Then
\[
\forall k \in \mathbb{N}^*, \quad (I + \alpha A)^{-k} \ge 0
\]
for $k$ sufficiently large, say $k \ge k_0$. Taking $\alpha = t/k$, the previous expression can be written
\[
\forall t > 0,\ \forall k \ge k_0, \quad \left(I + \frac{t}{k}A\right)^{-k} \ge 0,
\]
so
\[
\forall t > 0, \quad \lim_{k\to\infty}\left(I + \frac{t}{k}A\right)^{-k} = e^{-tA} \ge 0.
\]
¹ See, e.g., M. Abramowitz and I.E. Stegun, Handbook of Mathematical Functions. Dover, 1965.
3) The results of the previous part hold. However, in the case of a nonsymmetric matrix, we need to ask for the real parts of the eigenvalues to satisfy $1 + \alpha\,\mathrm{Re}(\lambda_i) > 0$.
MATH 4G03/6G03
McMaster University Final Examination
Julien Arino
December 2003
Duration of Examination: 72 Hours
This examination paper includes 4 pages and 3 questions. You are responsible
for ensuring that your copy of the paper is complete.
Detailed Instructions
You have 72 hours, from the time you pick up and sign for this examination sheet, to
complete this examination. You are to work on this examination by yourself. Any hint of
collaborative work will be considered as evidence of academic dishonesty. You are not to
have any outside contacts concerning this subject, except myself.
You can use any document that you find useful. If using theorems from outside sources,
give the bibliographic reference, and show clearly how your problem fits in with the conditions
of application of the theorem. When citing theorems from the lecture notes, refer to them
by the number they have on the last version of the notes, as posted on the website on the
first day of the examination period.
Pay attention to the form of your answers: as this is a take-home examination, you are
expected to hand back a very legible document, in terms of the presentation of your answers.
Show your calculations, but try to be concise.
This examination consists of 1 independent question and 2 problems. In questions or
problems that have multiple parts, you are always allowed to consider as proved the results
of a previous part, even if you have not actually done that part.
Foreword to the correction. This examination was long, but established the results in
a very guided way. Exercise 1 was almost trivial. Both of the Problems dealt with Sturm
theory. This comes as an illustration of the richness of behaviors that can be observed in
differential equations: simple equations such as (B.29) can have very complex behaviors.
Concerning the difficulty of the problems, it was not excessive. Problem 2 is a shortened and simplified version of a review problem for the CAPES, a French competitive examination for hiring high-school teachers. The original problem comprised 23 questions, to be written by candidates in 5 hours. Problem 3 introduced the Wronskian, which we did not have time to cover during class. It also established further results of Sturm type.
\[
x(t + \omega) = \mu\, x(t), \qquad t \in \mathbb{R}. \tag{B.28}
\]
iii) Conversely, show that if $x$ is a nontrivial solution of (B.27) such that (B.28) holds, then $\mu$ is an eigenvalue of $R(\omega, 0)$.
Problem 2 The aim of this problem is to study some properties of the solutions of the differential equation
\[
x'' + q(t)x = 0, \tag{B.29}
\]
with initial conditions $x(t_0) = x_0$, $x'(t_0) = y_0$.
iv) Let $a, b \in \mathbb{R}$, $a < b$. We assume that (B.29) has a solution $x$, zero at $a$ and at $b$ and positive on $(a, b)$. Show that
\[
\int_a^b |q(t)|\,dt > \frac{4}{b - a}.
\]
v) We suppose that $\int_0^{\infty} |q(t)|\,dt$ converges. Let $x$ be a bounded solution of (B.29). Determine the behaviour of $x'$ as $t \to \infty$.
vi) We suppose that $q \in C^1$ and is positive and increasing on $\mathbb{R}_+$. Show that all solutions of (B.29) are bounded on $\mathbb{R}_+$.
Part II. We suppose in this part that q is nonpositive and is not the zero function.
vii) Let $x$ be a solution of (B.29). Show that $x^2$ is a convex function.
viii) Show that if $x$ is a solution of (B.29) that has two distinct zeros, then $x \equiv 0$.
ix) Show that if $x$ is a bounded solution of (B.29), then $x \equiv 0$.
Part III.
x) Let $x$, $y$ be two solutions of (B.29). Show that the function $xy' - x'y$ is constant.
xi) Let $x_1$ and $x_2$ be the solutions of (B.29) that satisfy
\[
x_1(0) = 1, \quad x_1'(0) = 0, \qquad x_2(0) = 0, \quad x_2'(0) = 1.
\]
Show that $(x_1, x_2)$ is a basis of the vector space $S$ over $\mathbb{R}$ of the solutions of (B.29).
What is the value of $x_1x_2' - x_1'x_2$? Can the functions $x_1$ and $x_2$ have a common zero?
Justify your answer.
xii) Discuss the results of question xi) in the context of linear systems, i.e., transform (B.29) into a system of first-order differential equations and express question xi) and its answer in this context.
xiii) Show that if q is an even function, then the function x1 is even and the function x2 is
odd.
Problem 3 The aim of this problem is to show some elementary properties of the Wronskian of a system of solutions, and to use them to study a second-order differential equation.
Consider the $n$th order ordinary differential equation
\[
x^{(n)} = a_0(t)x + a_1(t)x' + \cdots + a_{n-1}(t)x^{(n-1)}, \tag{B.30}
\]
where $x^{(k)}$ denotes the $k$th derivative of $x$, $\frac{d^k x}{dt^k}$.
i) Find the matrix $A(t)$ such that this system can be written as the first-order linear system $y' = A(t)y$.
Part I : Wronskian Let $f_1, \ldots, f_n$ be $n$ functions from $\mathbb{R}$ into $\mathbb{R}$ that are $n-1$ times differentiable. We define $W(f_1, \ldots, f_n)$, the Wronskian of $f_1, \ldots, f_n$, by
\[
W(f_1, \ldots, f_n)(t) = \det \begin{pmatrix}
f_1(t) & \cdots & f_n(t) \\
f_1'(t) & \cdots & f_n'(t) \\
\vdots & & \vdots \\
f_1^{(n-1)}(t) & \cdots & f_n^{(n-1)}(t)
\end{pmatrix}.
\]
Part II : a theorem of Sturm Let us now consider the second-order differential equation
\[
a_2(t)x'' + a_1(t)x' + a_0(t)x = 0. \tag{B.31}
\]
A change of unknown function involving the factor $\exp\left(\int \frac{a_1(s)}{a_2(s)}\,ds\right)$ reduces (B.31) to an equation of the form (B.29); we therefore consider
\[
x'' + q(t)x = 0, \qquad q(t) \le 0, \tag{B.32}
\]
in an interval $(t_1, t_2)$. Every solution of (B.32) that is not identically zero has at most one zero in the interval $[t_1, t_2]$.
vi) Let $\phi$ be a solution of (B.32) on $(t_1, t_2)$, and $v$ be a zero of $\phi$ in this interval. Discuss the properties of $\phi$. [Hint: Use of Problem 2, Part II is possible, but not strictly necessary.]
vii) Let $u < v$ be another zero of $\phi$ in the interval $(t_1, t_2)$. Discuss properties of $\phi$. Conclude.
1) We have
\[
R(t + \omega, t_0 + \omega) = \exp\left(\int_{t_0+\omega}^{t+\omega} A(s)\,ds\right)
= \exp\left(\int_{t_0}^{t} A(s + \omega)\,ds\right)
= \exp\left(\int_{t_0}^{t} A(s)\,ds\right)
= R(t, t_0).
\]
2) Let $u$ be an eigenvector associated to the eigenvalue $\mu$, i.e.,
\[
R(\omega, 0)u = \mu u.
\]
Let $x$ be the solution of (B.27) such that $x(t_0) = x(0) = u$. As $x$ is a solution of (B.27), we have that, for all $t$,
\[
x(t) = R(t, t_0)u = R(t, 0)u.
\]
Therefore,
\[
x(t + \omega) = R(t + \omega, 0)u = R(t + \omega, \omega)R(\omega, 0)u = \mu\, R(t + \omega, \omega)u = \mu\, R(t, 0)u = \mu\, x(t),
\]
and hence (B.28).
3) Let $x$ be a nonzero solution of (B.27) such that, for all $t \in \mathbb{R}$,
\[
x(t + \omega) = \mu\, x(t).
\]
This is true in particular for $t = 0$, and hence $x(\omega) = \mu\, x(0)$. As $x \not\equiv 0$, there exists $v \in \mathbb{R}^n \setminus \{0\}$ such that $x(0) = v$. Therefore, $x(\omega) = R(\omega, 0)v$, and as a consequence,
\[
R(\omega, 0)v = \mu v,
\]
and $\mu$ is an eigenvalue of $R(\omega, 0)$.
Solution Problem 2 This problem concerns what are called Sturm theory type
results, that is, results dealing with the behavior of second order differential equations.
Solution Problem 3 This problem was also about Sturm results. But it also
introduced the notion of Wronskian, which is a very general tool intimately linked to the
notion of resolvent matrix.
1) We let $y_1 = x$, $y_2 = x'$, \ldots, $y_n = x^{(n-1)}$. As a consequence, $y_1' = y_2$, $y_2' = y_3$, \ldots, $y_{n-1}' = y_n$, and $y_n' = a_0(t)y_1 + a_1(t)y_2 + \cdots + a_{n-1}(t)y_n$. Written in matrix form, this is equivalent to $y' = A(t)y$, with $y = (y_1, \ldots, y_n)^T$ and
\[
A(t) = \begin{pmatrix}
0 & 1 & 0 & \cdots & 0 \\
0 & 0 & 1 & \ddots & \vdots \\
\vdots & & \ddots & \ddots & 0 \\
0 & \cdots & & 0 & 1 \\
a_0(t) & a_1(t) & \cdots & & a_{n-1}(t)
\end{pmatrix}. \tag{B.33}
\]
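A small helper makes the structure of (B.33) concrete. The following sketch builds $A(t)$; the coefficient functions in the usage line are our own example:

```python
# Build the companion matrix (B.33) for x^(n) = a_0 x + ... + a_{n-1} x^(n-1).
import numpy as np

def companion(coeffs, t):
    """coeffs = [a_0, ..., a_{n-1}], each a callable of t; returns A(t) of (B.33)."""
    n = len(coeffs)
    A = np.zeros((n, n))
    A[:-1, 1:] = np.eye(n - 1)            # superdiagonal of ones
    A[-1, :] = [a(t) for a in coeffs]     # last row: a_0(t), ..., a_{n-1}(t)
    return A

print(companion([lambda t: 1.0, lambda t: t, lambda t: np.sin(t)], 2.0))
```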
2) We know that the system is equivalent to $y' = A(t)y$, with $A(t)$ given by (B.33). To every basis $(\phi_1, \ldots, \phi_n)$ of the vector space of solutions of
\[
y' = A(t)y \tag{B.34}
\]
there corresponds a basis $(f_1, \ldots, f_n)$ of (B.30), where $f_i$ is the first coordinate of the vector $\phi_i$ for every $i$. The converse is also true.
We know that a system $(\phi_1, \ldots, \phi_n)$ of solutions of (B.34) is a basis if $\det(\phi_1, \ldots, \phi_n) \ne 0$, and it suffices for this that $\det(\phi_1, \ldots, \phi_n)$ be nonzero at one point.
Since we have $\det(\phi_1, \ldots, \phi_n) = W(f_1, \ldots, f_n)$, the result follows.
3) This is a direct application of Liouville's theorem, which states that if $R(t, s)$ is the resolvent of $A(t)$, then
\[
\det R(t, s) = \exp\left(\int_s^t \operatorname{tr} A(u)\,du\right).
\]
For a matrix solution $\Phi$ of (B.34), $\Phi(t) = R(t, s)\Phi(s)$, so that
\[
\det \Phi(t) = \det \Phi(s)\, \exp\left(\int_s^t \operatorname{tr} A(u)\,du\right).
\]
Now, note that $\det \Phi(t) = W(t)$ and $\det \Phi(s) = W(s)$. This implies the result.
Note that for a system $(f_1, \ldots, f_n)$ of solutions of (B.30), $W(f_1, \ldots, f_n) \ne 0$ iff the $f_i$ are linearly independent (i.e., we have the converse implication).
4)
5)
6)
7)
First part
1. For what value(s) of $A$ does the differential equation
\[
x'(t) = a(t)x(t) \tag{E1}
\]
admit nonzero $T$-periodic solutions? Consider now the equation
\[
x'(t) = a(t)x(t) + b(t). \tag{E2}
\]
2.a. Describe the set of maximal solutions to (E2) and the intervals of definition of these solutions.
2.b. Describe the set of maximal solutions to (E2) that are $T$-periodic, first assuming $A \ne 0$, then $A = 0$.
Second part
In this part, we let $H$ be a real-valued $C^1$ function defined on $\mathbb{R}^2$, and consider the differential equation
\[
x'(t) = a(t)x(t) + H(x(t), t). \tag{E3}
\]
3. Check that a function $x$ is a solution to (E3) if and only if it satisfies the condition
\[
x(t) = g(t)\left(x(0) + \int_0^t g(s)^{-1} H(x(s), s)\,ds\right).
\]
4. Suppose that $H$ is $T$-periodic with respect to its second argument, and that $A \ne 0$. Show that, for all functions $x \in P$, the formula
\[
(U_H x)(t) = \frac{e^A}{1 - e^A}\, g(t) \int_t^{t+T} g(s)^{-1} H(x(s), s)\,ds
\]
defines a function $U_H x \in P$, and that $x$ is a solution to (E3) if and only if $U_H x = x$.
In the rest of the problem, we let $F$ be a real-valued $C^1$ function defined on $\mathbb{R}^2$, $T$-periodic with respect to its second argument; for all $\varepsilon > 0$, define $H_\varepsilon = \varepsilon F$ and $U_\varepsilon = U_{H_\varepsilon}$, so that the differential equation (E3) is written
\[
x'(t) = a(t)x(t) + \varepsilon F(x(t), t). \tag{E4}
\]
Assume that $A \ne 0$. For all $r > 0$, we denote $B_r$ the closed ball with centre $0$ and radius $r$ in the normed space $P$. We want to show the following assertion: for all $r > 0$, there exists $\varepsilon_1 > 0$ such that, for all $\varepsilon \le \varepsilon_1$, the differential equation (E4) has a unique solution $x \in B_r$, which we will denote $x_\varepsilon$.
We denote $M_r$ (resp. $M_r'$) the upper bound of the set of $|F(v, s)|$ (resp. $\left|\frac{\partial F}{\partial v}(v, s)\right|$), where $v \in [-r, r]$ and $s \in [0, T]$.
5.a. Find a real $\varepsilon_0 > 0$ such that, for all $\varepsilon \le \varepsilon_0$, $U_\varepsilon(B_r) \subseteq B_r$.
5.b. Find a real $\varepsilon_1 \le \varepsilon_0$ such that, for all $\varepsilon \le \varepsilon_1$, the restriction of $U_\varepsilon$ to $B_r$ be a contraction of $B_r$.
5.c. Conclude.
6. Study the behavior of the function $x_\varepsilon$ when $\varepsilon \to 0$, the number $r$ being fixed.
7. We now suppose that the function $a$ is a constant $k \ne 0$ and that the function $F$ takes the form $F(v, s) = f(v)$. Determine the solution $x_\varepsilon$ of (E4).
8. We now consider $T = 1$, $k = -1$ and $f(v) = v^2$, and thus (E4) takes the form
\[
x'(t) = -x(t) + \varepsilon x(t)^2. \tag{E5}
\]
Third part
Here, we consider the differential equation
\[
x'(t) = -kx(t) + f(x(t)). \tag{E6}
\]
Solutions. 1. Separating variables in
\[
x'(t) = a(t)x(t)
\]
and integrating gives $x(t) = K\exp\left(\int_0^t a(s)\,ds\right)$, where it was assumed that integration starts at $t_0 = 0$, and where the sign of $|x(t)|$ is absorbed into $K \in \mathbb{R}$. Since $x(0) = K$, the general solution to (E1) is thus
\[
x(t) = x(0)\exp\left(\int_0^t a(s)\,ds\right). \tag{B.35}
\]
Such a solution is $T$-periodic (and nonzero) if and only if
\[
\int_0^T a(s)\,ds = 0,
\]
i.e., $A = 0$.
2.a. We know that the general solution to the homogeneous equation (E1) associated to (E2) is given by (B.35). To find the general solution to (E2), we need a particular solution to (E2), obtained using integrating factors or a variation of constants approach. We do the latter, since we already have the solution (B.35) to (E1). Returning to the solution with undetermined value for $K$, we consider the ansatz
\[
\phi(t) = K(t)\exp\left(\int_0^t a(s)\,ds\right) = K(t)g(t),
\]
which, substituted into (E2), gives $K'(t) = b(t)/g(t)$, and hence the general solution to (E2),
\[
x(t) = \left(\int_0^t \frac{b(s)}{g(s)}\,ds + C\right)\exp\left(\int_0^t a(s)\,ds\right).
\]
Since it will be useful to have information in terms of $x(0)$ (as in question 1.), we note that $C = x(0)$. Thus, the solution to (E2) through $x(0)$ is given by
\[
x(t) = \left(\int_0^t \frac{b(s)}{g(s)}\,ds + x(0)\right)\exp\left(\int_0^t a(s)\,ds\right).
\]
With integrating factors, we would have done as follows: write the equation (E2) as
\[
x'(t) - a(t)x(t) = b(t).
\]
The integrating factor is then
\[
\mu(t) = \exp\left(-\int a(t)\,dt\right),
\]
and the general solution to (E2) is given by
\[
x(t) = \frac{1}{\mu(t)}\left(\int \mu(s)b(s)\,ds + C\right). \tag{B.36}
\]
Maximal solutions are solutions that are the restriction of no other solution.
2.b. Solutions to (E2) are $T$-periodic if $x(T) = x(0)$; therefore, a $T$-periodic solution satisfies
\[
x(T) = \left(\int_0^T \frac{b(s)}{g(s)}\,ds + x(0)\right)\exp\left(\int_0^T a(s)\,ds\right) = x(0),
\]
i.e.,
\[
\int_0^T \frac{b(s)}{g(s)}\,ds + x(0) = x(0)\,e^{-A},
\]
i.e.,
\[
\int_0^T \frac{b(s)}{g(s)}\,ds = \left(e^{-A} - 1\right)x(0).
\]
Second part
3. Before proceeding, note that
\[
g'(t) = \frac{d}{dt}\exp\left(\int_0^t a(s)\,ds\right) = a(t)\exp\left(\int_0^t a(s)\,ds\right) = a(t)g(t).
\]
We differentiate $x(t) = g(t)\left(x(0) + \int_0^t g(s)^{-1}H(x(s), s)\,ds\right)$. This gives
\[
x'(t) = g'(t)\left(x(0) + \int_0^t g(s)^{-1}H(x(s), s)\,ds\right) + g(t)\,\frac{H(x(t), t)}{g(t)}
= a(t)g(t)\left(x(0) + \int_0^t g(s)^{-1}H(x(s), s)\,ds\right) + H(x(t), t)
= a(t)x(t) + H(x(t), t).
\]
4. Remark that
\[
g(t+T) = \exp\left(\int_0^{t+T} a(s)\,ds\right)
= \exp\left(\int_0^{t} a(s)\,ds + \int_t^{t+T} a(s)\,ds\right)
= g(t)\exp\left(\int_t^{t+T} a(s)\,ds\right)
= e^A g(t),
\]
since $a(t)$ is $T$-periodic. Therefore,
\[
(U_H x)(t+T) = \frac{e^A}{1 - e^A}\, e^A g(t) \int_{t+T}^{t+2T} g(s)^{-1}H(x(s), s)\,ds
= \frac{e^A}{1 - e^A}\, e^A g(t) \int_t^{t+T} g(s+T)^{-1}H(x(s+T), s+T)\,ds.
\]
Now
\[
g(s+T) = \exp\left(\int_0^{s+T} a(u)\,du\right)
= \exp\left(\int_0^{s} a(u)\,du + \int_s^{s+T} a(u)\,du\right)
= g(s)\exp\left(\int_s^{s+T} a(u)\,du\right)
= e^A g(s).
\]
So, finally, using the $T$-periodicity of $x$ and of $H$ in its second argument,
\[
(U_H x)(t+T) = \frac{e^A}{1 - e^A}\, e^A g(t)\, e^{-A}\int_t^{t+T} g(s)^{-1}H(x(s), s)\,ds = (U_H x)(t),
\]
so that $U_H x \in P$.
Moreover, if $x = U_H x$, then differentiating gives
\[
x'(t) = \frac{e^A}{1 - e^A}\left(g'(t)\int_t^{t+T} \frac{H(x(s), s)}{g(s)}\,ds + g(t)\left(\frac{H(x(t+T), t+T)}{g(t+T)} - \frac{H(x(t), t)}{g(t)}\right)\right)
\]
\[
= a(t)\,\frac{e^A}{1 - e^A}\, g(t)\int_t^{t+T} \frac{H(x(s), s)}{g(s)}\,ds + \frac{e^A}{1 - e^A}\left(\frac{H(x(t), t)}{e^A} - H(x(t), t)\right)
\]
\[
= a(t)x(t) + \frac{e^A}{1 - e^A}\,\frac{1 - e^A}{e^A}\, H(x(t), t)
= a(t)x(t) + H(x(t), t).
\]
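Since $U_H$ is (for small enough data) a contraction on a ball of $P$, the periodic solution can be computed by simply iterating $U_H$ on a grid. The discretized sketch below is our own illustration; the grid, the quadrature, and the choices $a(t) = -1 + 0.5\sin t$, $F(v,s) = v^2 + \cos s$, $\varepsilon = 0.1$ are assumptions, not from the original:

```python
# Fixed-point iteration with the operator U_H above, for x' = a(t) x + eps*F(x,t),
# a(t) = -1 + 0.5 sin t (T = 2*pi, so A != 0), F(v,s) = v^2 + cos s, eps = 0.1.
import numpy as np
from scipy.integrate import trapezoid

T, eps, m = 2 * np.pi, 0.1, 2000
s = np.linspace(0, 2 * T, 2 * m + 1)       # grid on [0, 2T]; we need t+T <= 2T
a = lambda t: -1 + 0.5 * np.sin(t)

# g(t) = exp(int_0^t a), A = int_0^T a (cumulative trapezoid rule)
ia = np.concatenate([[0], np.cumsum((a(s[1:]) + a(s[:-1])) / 2 * np.diff(s))])
g, A = np.exp(ia), ia[m]
c = np.exp(A) / (1 - np.exp(A))

def U(x):                                   # (U_H x)(t) with H = eps*F
    H = eps * (x**2 + np.cos(s))
    out = np.empty(m + 1)
    for i in range(m + 1):                  # integral over [t_i, t_i + T]
        seg = H[i:i + m + 1] / g[i:i + m + 1]
        out[i] = c * g[i] * trapezoid(seg, s[i:i + m + 1])
    return np.concatenate([out, out[1:m + 1]])   # extend T-periodically

x = np.zeros(2 * m + 1)
for _ in range(50):                          # contraction iteration
    x = U(x)

# Check the fixed point solves x' = a x + eps*F(x,t) away from the grid ends.
dx = np.gradient(x, s)
res = dx - (a(s) * x + eps * (x**2 + np.cos(s)))
print(np.max(np.abs(res[m // 2: 3 * m // 2])))   # small (grid-limited) residual
```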
5.a. For $x \in B_r$,
\[
\|U_\varepsilon x\| \le \frac{e^A}{|1 - e^A|}\,\sup_{t\in\mathbb{R}}|g(t)|\;\varepsilon\,\sup_{t\in\mathbb{R}}\int_t^{t+T}\frac{|F(x(s), s)|}{|g(s)|}\,ds.
\]
Note that we keep the absolute value $|1 - e^A|$, since $A$ could be negative, leading to a negative value for $1 - e^A$. Let $\|g\| = \sup_{t\in\mathbb{R}}|g(t)|$ and $\|g^{-1}\| = \sup_{t\in\mathbb{R}}|g^{-1}(t)|$. We then have
\[
\|U_\varepsilon x\| \le \frac{e^A}{|1 - e^A|}\,\|g\|\,\|g^{-1}\|\;\varepsilon\,\sup_{t\in\mathbb{R}}\int_t^{t+T}|F(x(s), s)|\,ds
\le \frac{e^A}{|1 - e^A|}\,\|g\|\,\|g^{-1}\|\;\varepsilon\int_t^{t+T} M_r\,ds,
\]
since $x(s) \in [-r, r]$, and so
\[
\|U_\varepsilon x\| \le \frac{e^A}{|1 - e^A|}\,\|g\|\,\|g^{-1}\|\;\varepsilon\, M_r\, T.
\]
Letting
\[
\varepsilon_0 = \left(\frac{e^A}{|1 - e^A|}\,\|g\|\,\|g^{-1}\|\, M_r\, T\right)^{-1} r,
\]
we obtain that $\varepsilon \le \varepsilon_0$ implies $\|U_\varepsilon x\| \le r$, i.e., $U_\varepsilon(B_r) \subseteq B_r$.
5.b. For the restriction of U to be a contraction, we must have the inequality obtained
above, as well as, for x, y Br , d(U x, U y) < d(x, y). In terms of the induced norm, this
means that kU x U yk < kx yk. Therefore, letting x, y P be such that kxk r and
B. Problem sheets
kyk r, we compute
\[
\|U_\varepsilon x - U_\varepsilon y\| = \sup_{t\in\mathbb{R}}|(U_\varepsilon x)(t) - (U_\varepsilon y)(t)|
= \sup_{t\in\mathbb{R}}\left|\frac{e^A}{1 - e^A}\,\varepsilon\, g(t)\int_t^{t+T}\frac{F(x(s), s) - F(y(s), s)}{g(s)}\,ds\right|
\]
\[
\le \frac{e^A}{|1 - e^A|}\,\varepsilon\,\sup_{t\in\mathbb{R}}|g(t)|\int_t^{t+T}\frac{|F(x(s), s) - F(y(s), s)|}{|g(s)|}\,ds.
\]
For $s \in [0, T]$ and $x(s) \in [-r, r]$, we have, picking a $y(s) \in [-r, r]$,
\[
|F(x(s), s)| = |F(x(s), s) - F(y(s), s) + F(y(s), s)|
\le |F(x(s), s) - F(y(s), s)| + |F(y(s), s)|
\le M_r'\,|x(s) - y(s)| + M_r,
\]
from the mean value theorem, and thus
\[
|F(x(s), s)| \le 2rM_r' + M_r.
\]
6. But we have
\[
\|U_\varepsilon x - U_{\varepsilon'} x\| \le |\varepsilon - \varepsilon'|\,\frac{e^A}{|1 - e^A|}\,\sup_{t\in\mathbb{R}}\int_t^{t+T}|g(t)g(s)^{-1}F(x(s), s)|\,ds
\le |\varepsilon - \varepsilon'|\,\underbrace{\frac{e^A}{|1 - e^A|}\,\|g\|\,\|g^{-1}\|\, T\, M_r}_{= K'}.
\]
Thus, we have $\|x_\varepsilon - x_{\varepsilon'}\| \le \frac{|\varepsilon - \varepsilon'|\,K'}{1 - K}$, and therefore $\varepsilon \mapsto x_\varepsilon \in P$ is continuous; it follows that $\lim_{\varepsilon\to 0} x_\varepsilon = x_0$. But the only periodic solution of (E1) when $A \ne 0$ is the zero solution. Therefore, $x_\varepsilon \to 0$ when $\varepsilon \to 0$.
7. With $a \equiv k$ and $F(v, s) = f(v)$, we have $g(t) = e^{kt}$ and $A = kT$. For a constant function $x_\varepsilon(t) = c_0$,
\[
(U_\varepsilon c_0)(t) = \frac{e^{kT}}{1 - e^{kT}}\,\varepsilon f(c_0)\, e^{kt}\int_t^{t+T} e^{-ks}\,ds
= \frac{e^{kT}}{1 - e^{kT}}\,\frac{e^{-kT}\left(e^{kT} - 1\right)}{k}\,\varepsilon f(c_0)
= -\frac{\varepsilon}{k}\, f(c_0).
\]
The constant function $x_\varepsilon(t) = c_0$ is a solution (where $c_0$ is the unique solution of the equation $\varepsilon f(x) + kx = 0$ for $\varepsilon$ sufficiently small).
Note: letting $g(x) = -\frac{\varepsilon}{k} f(x)$, it follows that $g'(x) = -\frac{\varepsilon}{k} f'(x)$. Thus, for $r > 0$ given, there exists $\varepsilon_0 > 0$ such that $\varepsilon \le \varepsilon_0$ implies $g([-r, r]) \subseteq [-r, r]$, and there exists $\varepsilon_1 \le \varepsilon_0$ such that $\sup_{x\in[-r,r]} |g'(x)| < 1$, so that the fixed point theorem can be applied easily.
8.a. Using the formulas obtained in 5.a. and 5.b. with $M_r = r^2$, $M_r' = 2r$, $g(t) = e^{-t}$, $A = -T$, then
\[
\varepsilon_0 = \frac{r\left(1 - e^{-T}\right)e^{-T}e^{T}}{r^2\, T} = \frac{1 - e^{-T}}{rT}, \qquad
\varepsilon_1 = \frac{1}{2}\,\frac{1 - e^{-T}}{2rT} = \frac{\varepsilon_0}{4}.
\]
8.b. The zero function is clearly a $1$-periodic solution of (E5). By uniqueness of solutions, $x_\varepsilon = 0$ is the only such solution of (E5).
8.c. The vector field $-x + \varepsilon x^2$ is $C^1$, and therefore existence and uniqueness of a maximal solution is a direct consequence of the Cauchy-Lipschitz theorem.
We solve the equation $x' = -x + \varepsilon x^2$ without constraint of periodicity. There are two constant solutions, $x(t) = 0$ and $x(t) = 1/\varepsilon$. By uniqueness, any other solution never takes the values $0$ and $1/\varepsilon$. We have:
\[
\frac{x'}{x(1 - \varepsilon x)} = -1 \iff \frac{d}{dt}\ln\left|\frac{x}{1 - \varepsilon x}\right| = -1 \iff \frac{x}{1 - \varepsilon x} = Ce^{-t},
\]
so that the solutions are
\[
\varphi_\alpha(t) = \frac{1}{\alpha e^{t} + \varepsilon}, \qquad \alpha \in \mathbb{R}.
\]
If $\alpha \ge 0$ then $\varphi_\alpha$ is defined on $\mathbb{R}$. If $\alpha < 0$, we let $t_0 = \ln(-\varepsilon/\alpha)$; $\varphi_\alpha$ is then defined on $(-\infty, t_0)$ and on $(t_0, +\infty)$. If $\alpha > 0$ then $\varphi_\alpha(t) \to 0$ as $t \to +\infty$ and $\varphi_\alpha(t) \to 1/\varepsilon$ as $t \to -\infty$.
[Figure: solution curves of (E5) in the $(t, x)$ plane, for the three regimes $\alpha > 1/\varepsilon$, $0 < \alpha < 1/\varepsilon$, and $\alpha < 0$.]
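The closed form $\varphi_\alpha$ is easily confirmed against a direct numerical integration of (E5). A sketch with the arbitrary choices $\varepsilon = 0.5$, $\alpha = 1$:

```python
# Check that phi_alpha(t) = 1/(alpha e^t + eps) solves x' = -x + eps x^2
# (eps and alpha are our own example values).
import numpy as np
from scipy.integrate import solve_ivp

eps, alpha = 0.5, 1.0
phi = lambda t: 1.0 / (alpha * np.exp(t) + eps)

sol = solve_ivp(lambda t, x: -x + eps * x**2, (0, 4), [phi(0)],
                dense_output=True, rtol=1e-10, atol=1e-12)
ts = np.linspace(0, 4, 9)
print(np.max(np.abs(sol.sol(ts)[0] - phi(ts))))   # ~1e-10: closed form confirmed
```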
Third part
9. $\tau > 0$ since $|x(0)| < 1$, and we have $x(s) \in [-1, 1]$ for all $s \in [0, \tau]$, by definition of $\tau$. Since $f(0) = 0$ and $|f'|$ is bounded by $\varepsilon$ on $[-1, 1]$, we have, from the mean value inequality, $|f(u) - f(0)| \le \varepsilon|u|$ for all $u \in [-1, 1]$, that is, $|f(x(s))| \le \varepsilon|x(s)|$ (recall that $f(0) = 0$).
Multiplying (E6) by $e^{kt}$ and integrating,
\[
\left|e^{kt}x(t)\right| = \left|x(0) + \int_{s=0}^{t} e^{ks} f(x(s))\,ds\right| \le |x(0)| + \varepsilon\int_{s=0}^{t} \left|e^{ks}x(s)\right|\,ds,
\]
from which $|e^{kt}x(t)| \le |x(0)|e^{\varepsilon t}$ (using Gronwall's lemma with $\phi(t) = e^{kt}|x(t)|$), giving the inequality
\[
|x(t)| \le |x(0)|\, e^{-(k - \varepsilon)t}.
\]
10. Since $\varepsilon - k < 0$, it follows that $|x(0)|e^{-(k-\varepsilon)t} \le |x(0)| < 1$. Letting $E = \{t > 0 : |x(t)| > 1\}$, which is assumed nonempty, then $\tau = \inf E > 0$ (by continuity, since $|x(0)| < 1$ there exists $\eta > 0$ such that $|x(t)| < 1$ on $[0, \eta]$, and thus $\tau > 0$). Since $\lim_{t\to\tau^-}|x(t)| \ge 1$ while, on $[0, \tau]$, $|x(t)| \le |x(0)|e^{-(k-\varepsilon)t}$, taking the limit gives $|x(\tau)| < 1$, which is impossible.
First conclusion: $E = \emptyset$ and, if $J$ is the interval of definition of $x$, then for $t \in J \cap [0, +\infty)$, $|x(t)| \le 1$.
If $J$ admits an upper bound $b \in \mathbb{R}$, then $x'$ is bounded in a neighborhood of $b$. Thus $x$ admits a limit at $b$. The same is therefore true for $x'$. We then know that $x$ can be extended beyond $b$, contradicting the maximality of $J$.
Final conclusion: $J \cap [0, +\infty) = [0, +\infty)$, $x$ is defined on $[0, +\infty)$, and the proof of question 9. holds true for all $t \ge 0$, i.e.,
\[
\forall t \in [0, +\infty), \quad |x(t)| \le |x(0)|\, e^{-(k - \varepsilon)t}.
\]
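The decay estimate can be illustrated numerically for a concrete $f$ satisfying the hypotheses. A sketch with $f(x) = \varepsilon \sin x$ (our own choice; then $f(0) = 0$ and $|f'| \le \varepsilon$):

```python
# Numerical illustration of |x(t)| <= |x(0)| e^{-(k-eps)t} for x' = -k x + f(x),
# with f(x) = eps*sin(x), our own example satisfying f(0) = 0 and |f'| <= eps.
import numpy as np
from scipy.integrate import solve_ivp

k, eps, x0 = 1.0, 0.3, 0.9
sol = solve_ivp(lambda t, x: -k * x + eps * np.sin(x), (0, 10), [x0],
                dense_output=True, rtol=1e-10)
ts = np.linspace(0, 10, 101)
bound = abs(x0) * np.exp(-(k - eps) * ts)
print(np.all(np.abs(sol.sol(ts)[0]) <= bound + 1e-9))   # True
```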
N.B. This result expresses the stability and the asymptotic stability of the trivial solution
of (E6).
This subject was the Première composition de mathématiques of the contest determining admission to the École Polytechnique in France, for MP (Math-Physics) track students, in 2004. Students, in their second year of university, have 4 hours to write this première composition. The original subject comprised another question, originally question 3, which was suppressed in this homework sheet. To be complete, this question is included here:
2.a. If $k \ne 0$ then $A = 2\pi k \ne 0$ and, from 2., there exists a unique $2\pi$-periodic solution. Since the mapping $x \mapsto \hat{x}(n)$ is linear, and from the relation $\widehat{x'}(n) = in\,\hat{x}(n)$, taking Fourier coefficients in $x' = kx + b$ gives
\[
in\,\hat{x}(n) = k\,\hat{x}(n) + \hat{b}(n) \iff \hat{x}(n) = \frac{\hat{b}(n)}{in - k}.
\]
Since $x$ is $C^1$, we know that the Fourier series of $x$ is normally convergent. Since $\hat{b}(n) \to 0$, we can also say that $\hat{x}(n) = o\!\left(\frac{1}{n}\right)$.
2.b. Applying here again the result of 2.b.: if $\hat{b}(0) = 0$ then all solutions are $2\pi$-periodic. In this case, solutions satisfy $\hat{x}(n) = \hat{b}(n)/(in)$ for $n \in \mathbb{Z}$ nonzero, and $\hat{x}(0)$ varies with the solution under consideration. If $\hat{b}(0) \ne 0$ then no solution is periodic.