
Differential Equations

V.K. Kalantarov
Koc University, Department of Mathematics, Sariyer, Istanbul, Turkey
Email address: vkalantarov@kum.edu.tr
Contents

Chapter 1. Introduction

Chapter 2. First Order Ordinary Differential equations
1. First Order Linear Equations
2. Separable Equations
3. Homogeneous equations
4. Exact Differential Equations and Integrating Factors
5. Integrating Factors
6. Existence and uniqueness
7. Some famous First Order equations
8. Problems

Chapter 3. Second Order Ordinary Differential equations
1. Second Order Linear Equations
2. Linear independence and Wronskian
3. Nonhomogeneous Equations. Method of Variation of Parameters
4. Method of Undetermined coefficients
5. Series solutions
6. Some famous second order ODE's

Chapter 4. Higher Order Ordinary Differential equations
1. Homogeneous equations with constant coefficients

Chapter 5. Laplace Transform and Applications
1. Step Functions
2. Convolution Integral

Chapter 6. Systems of Ordinary Differential Equations
1. Systems of Linear ODE's

Chapter 7. Stability and Instability
1. Autonomous Systems

Chapter 8. Boundary Value Problems
1. Boundary Value Problems for second order linear ODE's
2. Sturm–Liouville boundary value problems
3. Boundary Value Problems for second order nonlinear ODE's
4. Problems

Chapter 9. Fourier Series and PDE's
1. Periodic Functions
2. Functional Series
3. Euler's formulas and Fourier series
4. Riemann–Lebesgue Lemma
5. Bessel inequality and mean value approximation

Chapter 10. Partial Differential Equations
1. Heat Equation. Method of Separation of Variables
2. Wave Equation
3. Problems
4. Some famous PDE's

Bibliography
Index
CHAPTER 1

Introduction

Everything is in motion. Everything flows. Everything is vibrating.

— William Hazlitt

Differential equations were invented by Newton (1642–1727). Newton considered this invention of his so important that he encoded it as an anagram¹ whose meaning in modern terms can be freely translated as follows: The laws of nature are expressed by differential equations.

— V.I. Arnold, "Ordinary Differential Equations", Springer, 1991.

Definition 0.1. A differential equation is an equation where the unknown is a function, and the equation involves some derivative or derivatives of the unknown function.

The equation
$y'''(t) + y'(t) + 4y(t) + y^5(t) = \sin t, \quad t \in \mathbb{R}$  (0.1)
is an ordinary differential equation.

Definition 0.2. If the unknown function in a differential equation is a function of one variable, then this kind of equation is called an ordinary differential equation. If the unknown function in a differential equation is a multivariable function, then this kind of equation is called a partial differential equation.

¹An anagram is a word or phrase that you can make from another word or phrase by putting the letters in a different order; for example, "thing" is an anagram of "night".

The equation
$u_t(x,t) - 2u_{xx}(x,t) = 0, \quad x \in (0,\pi), \ t > 0$  (0.2)
is a partial differential equation.

Definition 0.3. The maximal order of a derivative of the unknown function in the differential equation is called the order of the differential equation.

So the equation (0.1) is a third order ordinary differential equation and (0.2) is a second order partial differential equation.
In what follows we will use the abbreviation ODE for ordinary differential equation and PDE for partial differential equation.
CHAPTER 2

First Order Ordinary Differential equations

1. First Order Linear Equations

The simplest differential equation is the equation
$y'(t) = 0, \quad t \in \mathbb{R}.$  (1.1)
It is clear that each constant function $y(t) = C$ is a solution of this equation. On the other hand, if $y(t)$ is a solution of (1.1), then it is a constant function. In fact, if $y'(t) = 0$ for all $t \in \mathbb{R}$, and $t_1, t_2 \in \mathbb{R}$ are two arbitrary points, then thanks to the mean value theorem there exists a point $c$ between $t_1$ and $t_2$ such that
$y(t_1) - y(t_2) = y'(c)(t_1 - t_2) = 0,$
i.e.
$y(t) = C = \mathrm{const}, \quad \forall t \in \mathbb{R}.$
If $f(t)$ is continuous on some interval $I = (a,b)$, then each solution of the equation
$y'(t) = f(t), \quad t \in I$  (1.2)
has the form
$y(t) = \int_{t_0}^{t} f(\tau)\,d\tau + C, \quad \forall t \in I,$
where $t_0$ is some point of $I$.

A first order homogeneous linear ordinary differential equation (ODE) has the form
$y'(t) + a(t)y(t) = 0.$
We rewrite this equation in the form
$\frac{y'(t)}{y(t)} = -a(t)$
or
$(\ln|y(t)|)' = -a(t).$
From the last equation we get
$y(t) = Ce^{-\int a(t)\,dt}.$

The general first order nonhomogeneous linear ODE has the form
$y'(t) + a(t)y(t) = b(t).$  (1.3)
Multiplying the equation (1.3) by a nonzero function $\mu(t)$, which is called an integrating factor, we obtain an equivalent equation
$\mu(t)y'(t) + \mu(t)a(t)y(t) = \mu(t)b(t).$  (1.4)
Let us choose $\mu(t)$ such that
$\mu'(t) = \mu(t)a(t),$
i.e.
$\mu(t) = e^{\int a(t)\,dt}$  (1.5)
(choosing the constant of integration to be zero). Then the equation (1.4) takes the form
$(\mu(t)y(t))' = \mu(t)b(t).$
Integrating the last equation and using (1.5) we obtain
$y(t) = Ce^{-\int a(t)\,dt} + e^{-\int a(t)\,dt}\int b(t)e^{\int a(t)\,dt}\,dt.$
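The formula above is easy to sanity-check with a computer algebra system. The following minimal SymPy sketch (the sample coefficients $a(t) = 2$ and $b(t) = e^t$ are our choice, purely for illustration) verifies that the formula produces a solution:

```python
# A minimal check of the integrating-factor formula with SymPy,
# using the sample coefficients a(t) = 2 and b(t) = exp(t).
import sympy as sp

t = sp.symbols('t')
a, b = sp.Integer(2), sp.exp(t)

mu = sp.exp(sp.integrate(a, t))                # integrating factor e^{∫a dt}
y_part = sp.integrate(b * mu, t) / mu          # particular solution
C = sp.symbols('C')
y = C / mu + y_part                            # general solution from the formula

# The residual y' + a*y - b should simplify to zero.
print(sp.simplify(sp.diff(y, t) + a * y - b))  # -> 0
```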

Example 1.1. Show that the solution of the Cauchy problem
$y'(t) + a(t)y(t) = b(t), \qquad y(0) = y_0$  (1.6)
has the form
$y(t) = y_0 e^{-\int_0^t a(\tau)\,d\tau} + e^{-\int_0^t a(\tau)\,d\tau}\int_0^t b(\tau)e^{\int_0^\tau a(s)\,ds}\,d\tau.$  (1.7)

Solution. Multiplying the equation (1.6) by $e^{\int_0^t a(\tau)\,d\tau}$ we obtain
$\frac{d}{dt}\left(e^{\int_0^t a(\tau)\,d\tau}\,y(t)\right) = b(t)e^{\int_0^t a(\tau)\,d\tau}.$
Integrating the last equality over the interval $(0,t)$ we obtain (1.7).
Example 1.2. Solve the initial value problem
$ty' + 4y = 6t^2, \qquad y(1) = 1$
for $t > 0$.

Solution. Rewrite the equation in the form
$y' + \frac{4}{t}y = 6t,$
and multiply both sides by $e^{\int \frac{4}{t}\,dt} = t^4$:
$(y(t)t^4)' = 6t^5.$
Integrating the last equality we get
$y(t)t^4 = t^6 + C.$
By using the initial condition we obtain $C = 0$. Hence the solution is
$y(t) = t^2.$
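As a cross-check, SymPy's `dsolve` reproduces this answer (a minimal sketch, assuming SymPy is available):

```python
# A quick SymPy check of Example 1.2: t*y' + 4*y = 6*t**2 with y(1) = 1.
import sympy as sp

t = sp.symbols('t', positive=True)
y = sp.Function('y')

ode = sp.Eq(t * y(t).diff(t) + 4 * y(t), 6 * t**2)
sol = sp.dsolve(ode, y(t), ics={y(1): 1})
print(sol)   # expected: Eq(y(t), t**2)
```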

Example 1.3. Solve the initial value problem
$y' + \frac{3}{t}y = t, \quad t > 0, \qquad y(1) = 1.$

Solution. We multiply the equation by the integrating factor
$e^{\int_1^t \frac{3}{s}\,ds} = e^{3\ln t} = t^3$
and obtain
$t^3y' + 3t^2y = t^4.$
Since
$t^3y' + 3t^2y = (t^3y)'$
we have
$\left(t^3y - \frac{1}{5}t^5\right)' = 0.$
Integrating over the interval $(1,t)$ we find
$t^3y(t) - \frac{1}{5}t^5 = \frac{4}{5}.$
Hence
$y(t) = \frac{4}{5t^3} + \frac{1}{5}t^2.$
Example 1.4. Let $h(t)$ be a continuous function defined on $[0,\infty)$ with
$\lim_{t\to\infty} h(t) = 0.$
Show that the solution of the problem
$y'(t) + 5y(t) = h(t), \qquad y(0) = 2$
tends to zero as $t \to \infty$.

2. Separable Equations.

Definition 2.1. An equation of the form
$f(y)y' = g(x)$  (2.1)
is called a separable equation.

If $y = \varphi(x)$ is a solution of the equation (2.1) then
$f(\varphi(x))\varphi'(x) = g(x).$
Hence
$\int f(\varphi(x))\varphi'(x)\,dx = \int g(x)\,dx.$  (2.2)
Since $dy = \varphi'(x)\,dx$ we can rewrite (2.2) as follows:
$\int f(y)\,dy = \int g(x)\,dx$
or
$F(y) = G(x) + C,$
where $F(y)$ is an antiderivative of $f(y)$, $G(x)$ is an antiderivative of $g(x)$, and $C$ is an arbitrary constant.
Problem 2.2. Suppose that $f$ and $g$ are continuous on some open intervals containing the points $y_0$ and $x_0$ respectively, and $y(x)$ is a solution of the Cauchy problem
$f(y)y' = g(x), \qquad y(x_0) = y_0.$
Then
$\int_{y_0}^{y(x)} f(s)\,ds = \int_{x_0}^{x} g(s)\,ds.$

Example 2.3. Solve the Cauchy problem
$\frac{dy}{dx} = (x-1)y^2, \qquad y(0) = 2.$

Separating variables,
$\frac{dy}{y^2} = (x-1)\,dx,$
$-\frac{1}{y} = \frac{x^2}{2} - x + C.$
By using the initial condition $y(0) = 2$ we get $C = -\frac{1}{2}$. Hence
$y(x) = \frac{2}{2x + 1 - x^2}.$
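A quick symbolic cross-check of this example (the comparison against the closed form is our addition; it assumes SymPy is available):

```python
# A SymPy check of Example 2.3: dy/dx = (x - 1)*y**2 with y(0) = 2.
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

ode = sp.Eq(y(x).diff(x), (x - 1) * y(x)**2)
sol = sp.dsolve(ode, y(x), ics={y(0): 2})
print(sp.simplify(sol.rhs - 2 / (2*x + 1 - x**2)))   # expected: 0
```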
Example 2.4. Solve the Cauchy problem
$(1 + e^t)yy' = e^t, \qquad y(0) = 1.$

Separating variables we get
$\left(\frac{1}{2}y^2\right)' = \frac{e^t}{1 + e^t}.$
Integrating the last equation over the interval $(0,t)$ and taking into account the initial condition we obtain
$\frac{1}{2}y^2(t) - \frac{1}{2} = \int_0^t \frac{e^\tau}{1 + e^\tau}\,d\tau = \ln(1 + e^t) - \ln 2.$
Thus
$y^2 = 1 + 2\ln\frac{1 + e^t}{2}.$
Since $y(0) = 1 > 0$ we have
$y(t) = \sqrt{1 + 2\ln\frac{1 + e^t}{2}}.$
Example 2.5. Solve the Cauchy problem
$xy' + y = y^2, \qquad y(1) = \frac{1}{2}.$

We rewrite the equation in the form
$\frac{dy}{y^2 - y} = \frac{dx}{x},$
i.e.
$\frac{dy}{1-y} + \frac{dy}{y} = -\frac{dx}{x}.$
Integrating we get
$\ln\frac{y}{1-y} = -\ln x + \ln C.$
Hence
$\frac{xy}{1-y} = C.$
By using the initial condition we finally obtain $C = 1$, i.e. $xy = 1 - y$, so
$y(x) = \frac{1}{1+x}.$
Example 2.6. Find the solution of the equation
$e^y e^{-x^2}\,dy - 2x\,dx = 0$
that satisfies the condition
$y(0) = \ln 2.$

Solution. We can rewrite this equation in the following form:
$e^y\,dy = 2xe^{x^2}\,dx.$
The last equation is a separable equation. Integrating we get
$e^y = e^{x^2} + C.$
Hence
$y = \ln(e^{x^2} + C).$
We use the initial condition and obtain
$y(x) = \ln(e^{x^2} + 1).$

3. Homogeneous equations.

An equation of the form
$\frac{dy}{dx} = f\!\left(\frac{y}{x}\right)$  (3.1)
is called a homogeneous equation. To solve the homogeneous equation (3.1) we make the change of variables $v = \frac{y}{x}$ and reduce the equation to a separable equation of the form
$xv' + v = f(v).$
Example 3.1. Solve the equation
$2xyy' = 4x^2 + 3y^2.$
Dividing both sides of this equation by $2xy$ we obtain:
$\frac{dy}{dx} = 2\frac{x}{y} + \frac{3y}{2x}.$
Hence this equation is a homogeneous equation. Therefore we make the change $y = xv$:
$v + xv' = \frac{2}{v} + \frac{3}{2}v,$
$x\frac{dv}{dx} = \frac{4 + v^2}{2v},$
$\int \frac{2v}{v^2 + 4}\,dv = \int \frac{dx}{x},$
$\int \frac{d(v^2+4)}{v^2+4} = \int \frac{dx}{x},$
$\ln(v^2 + 4) = \ln|x| + \ln C,$
$v^2 + 4 = C|x|, \qquad \frac{y^2}{x^2} + 4 = C|x|,$
$y^2 = Cx^3 - 4x^2, \qquad y = \pm\sqrt{Cx^3 - 4x^2}, \quad x > 0.$
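One can verify the implicit solution symbolically; the sketch below (our illustration, assuming SymPy) substitutes one branch $y = \sqrt{Cx^3 - 4x^2}$ back into the equation:

```python
# Verifying the implicit solution of Example 3.1 with SymPy:
# y**2 = C*x**3 - 4*x**2 should satisfy 2*x*y*y' = 4*x**2 + 3*y**2.
import sympy as sp

x, C = sp.symbols('x C', positive=True)
y = sp.sqrt(C * x**3 - 4 * x**2)        # one branch of the solution

residual = 2 * x * y * sp.diff(y, x) - (4 * x**2 + 3 * y**2)
print(sp.simplify(residual))            # expected: 0
```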
Example 3.2. Solve the Cauchy problem
$x\frac{dy}{dx} = y + \sqrt{x^2 - y^2}, \qquad y(1) = 0.$

We rewrite the equation in the form:
$\frac{dy}{dx} = \frac{y}{x} + \sqrt{1 - \left(\frac{y}{x}\right)^2}.$
So this equation is also a homogeneous equation. The change of variables $v = y/x$ gives:
$v + xv' = v + \sqrt{1 - v^2},$
$\frac{dv}{\sqrt{1 - v^2}} = \frac{dx}{x},$
$\arcsin v = \ln x + C,$
$v = \sin(\ln x + C),$
$y(x) = x\sin(\ln x + C).$
Due to the initial condition $C = n\pi$, $n = 0, \pm 1, \pm 2, \ldots$. Hence
$y(x) = x\sin(\ln x + n\pi).$

4. Exact Differential Equations and Integrating Factors

Let us consider the following equation
$(4x^3y + 3x^2y^2) + (x^4 + 2x^3y)y' = 0.$  (4.1)
This equation is neither linear nor separable. But it is not difficult to see that
$4x^3y + 3x^2y^2 = \left(x^4y + x^3y^2\right)_x,$
$x^4 + 2x^3y = \left(x^4y + x^3y^2\right)_y.$
Thus the equation is an exact equation. Therefore we can write this equation in the form
$\Psi_x(x,y) + \Psi_y(x,y)\frac{dy}{dx} = 0,$
or
$\frac{d}{dx}\Psi(x,y) = 0.$  (4.2)
Hence $\Psi(x,y) = C$. That is,
$x^4y + x^3y^2 = C.$
The last equality defines the solution of the equation (4.1) implicitly.

Definition 4.1. An equation of the form
$M(x,y) + N(x,y)y' = 0$  (4.3)
is called an exact equation if there exists a function $U(x,y)$ such that
$U_x(x,y) = M(x,y), \qquad U_y(x,y) = N(x,y).$
In other words, an equation of the form (4.3) is an exact equation if the vector field
$\mathbf{F}(x,y) = M(x,y)\mathbf{i} + N(x,y)\mathbf{j}$
is a conservative vector field.
Theorem 4.2. Let the functions $M, N, N_x, M_y$ be continuous in a region $D$. Then the equation
$M(x,y) + N(x,y)y' = 0$  (4.4)
is an exact equation if and only if
$M_y(x,y) = N_x(x,y)$  (4.5)
at each point of the region $D$.

Proof. Assume that (4.4) is an exact equation. Then there exists a function $U$ such that
$U_x = M, \qquad U_y = N.$
These equalities imply
$U_{xy} = M_y, \qquad U_{yx} = N_x.$
Thus
$M_y = N_x.$
Conversely, assume that
$M_y(x,y) = N_x(x,y)$
and let us find $U(x,y)$ such that $U_x = M$, $U_y = N$. It follows from
$U_x(x,y) = M(x,y)$
that
$U(x,y) = \int M(x,y)\,dx + g(y).$
From this equality we get
$U_y(x,y) = \int M_y(x,y)\,dx + g'(y),$
and the equation is an exact equation if
$N(x,y) = \int M_y(x,y)\,dx + g'(y)$
or
$N(x,y) - \int M_y(x,y)\,dx = g'(y).$
Hence the function $N(x,y) - \int M_y(x,y)\,dx$ must be a function of $y$ alone. Due to the condition (4.5),
$\frac{\partial}{\partial x}\left[N(x,y) - \int M_y(x,y)\,dx\right] = N_x(x,y) - M_y(x,y) = 0.$
Therefore the function
$U(x,y) = \int M(x,y)\,dx + \int\left[N(x,y) - \int M_y(x,y)\,dx\right]dy$
is the required function. $\square$
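The proof is constructive, and the construction is easy to mirror symbolically. The sketch below (our illustration, applied to equation (4.1) and assuming SymPy) checks the exactness criterion and rebuilds the potential $U$:

```python
# A small SymPy sketch of Theorem 4.2 for equation (4.1):
# check M_y == N_x and build the potential U(x, y).
import sympy as sp

x, y = sp.symbols('x y')
M = 4*x**3*y + 3*x**2*y**2
N = x**4 + 2*x**3*y

print(sp.simplify(sp.diff(M, y) - sp.diff(N, x)))   # 0, so the equation is exact

U = sp.integrate(M, x)                              # ∫M dx, with g(y) still missing
g = sp.integrate(sp.expand(N - sp.diff(U, y)), y)   # g'(y) = N - ∂y ∫M dx
print(U + g)                                        # expected: x**4*y + x**3*y**2
```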


Example 4.3. Find the value of $b$ for which the equation
$(ye^{2xy} + 4x^3)\,dx + bxe^{2xy}\,dy = 0$
is exact, and then solve the equation.

An equation of the form
$M(x,y)\,dx + N(x,y)\,dy = 0$
is exact if and only if
$M_y(x,y) = N_x(x,y).$
Thus our equation is exact if and only if
$M_y = e^{2xy} + 2xye^{2xy} = b[e^{2xy} + 2xye^{2xy}] = N_x(x,y).$
Hence it is exact if and only if $b = 1$. Now we consider the equation
$(ye^{2xy} + 4x^3)\,dx + xe^{2xy}\,dy = 0.$
Since this equation is an exact equation there exists a function $F(x,y)$ such that
$F_x(x,y) = ye^{2xy} + 4x^3,$  (4.6)
$F_y(x,y) = xe^{2xy}.$  (4.7)
Integrating (4.7) with respect to $y$ we get
$F(x,y) = \int xe^{2xy}\,dy = \frac{1}{2}e^{2xy} + h(x).$
From the last equality we obtain:
$F_x(x,y) = ye^{2xy} + h'(x).$  (4.8)
(4.6) and (4.8) imply:
$h'(x) = 4x^3 \ \Rightarrow\ h(x) = x^4 + C.$
Thus the solution is
$\frac{1}{2}e^{2xy} + x^4 = C.$
Example 4.4. Solve the equation
$(e^x\sin y - 2y\sin x)\,dx + (e^x\cos y + 2\cos x)\,dy = 0.$
Since
$M_y = e^x\cos y - 2\sin x = N_x,$
this equation is an exact equation. Therefore
$U(x,y) = \int (e^x\sin y - 2y\sin x)\,dx = e^x\sin y + 2y\cos x + g(y),$
$U_y(x,y) = e^x\cos y + 2\cos x + g'(y) = e^x\cos y + 2\cos x.$
Hence $g'(y) = 0$ and $g = \mathrm{const}$. Therefore the solution has the form
$e^x\sin y + 2y\cos x = C.$
Example 4.5. Solve the equation
$\left(\frac{x}{\sqrt{x^2+y^2}} + \frac{1}{x} + \frac{1}{y}\right)dx + \left(\frac{y}{\sqrt{x^2+y^2}} + \frac{1}{y} - \frac{x}{y^2}\right)dy = 0.$

$U_x(x,y) = \frac{x}{\sqrt{x^2+y^2}} + \frac{1}{x} + \frac{1}{y},$
$U(x,y) = \sqrt{x^2+y^2} + \ln x + \frac{x}{y} + g(y),$
$U_y(x,y) = \frac{y}{\sqrt{x^2+y^2}} - \frac{x}{y^2} + g'(y) = \frac{y}{\sqrt{x^2+y^2}} + \frac{1}{y} - \frac{x}{y^2},$
$g'(y) = \frac{1}{y}, \qquad g(y) = \ln y.$
Hence the solution has the form
$\sqrt{x^2+y^2} + \ln x + \frac{x}{y} + \ln y = C.$
5. Integrating Factors

Sometimes we can convert a differential equation of the form
$M(x,y)\,dx + N(x,y)\,dy = 0$  (5.1)
which is not an exact equation into an exact equation by multiplying it by an appropriate function, which we call an integrating factor.
A function $m(x,y)$ is called an integrating factor for the equation (5.1) if the equation
$m(x,y)M(x,y)\,dx + m(x,y)N(x,y)\,dy = 0$
is an exact equation.

We consider just the case when $m$ is a function of a single variable, $m(x)$ or $m(y)$.
Assume that the equation (5.1) has an integrating factor $m(x)$ depending only on $x$. Then
$(m(x)M(x,y))_y = (m(x)N(x,y))_x,$
or
$m(x)M_y(x,y) = m(x)N_x(x,y) + m'(x)N(x,y),$
$\frac{m'(x)}{m(x)} = \frac{M_y(x,y) - N_x(x,y)}{N}.$
Hence (5.1) has an integrating factor depending only on $x$ if the expression $\frac{M_y - N_x}{N}$ does not depend on $y$.

Similarly, we can show that (5.1) has an integrating factor $m(y)$ depending only on $y$ if the expression $\frac{N_x - M_y}{M}$ does not depend on $x$.
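This test is mechanical; a short SymPy sketch (our illustration, applied to the equation of Example 5.1 below) computes the quotient:

```python
# A SymPy sketch of the integrating-factor test for Example 5.1:
# (3x^2 y + 2xy + y^3) dx + (x^2 + y^2) dy = 0.
import sympy as sp

x, y = sp.symbols('x y')
M = 3*x**2*y + 2*x*y + y**3
N = x**2 + y**2

test = sp.simplify((sp.diff(M, y) - sp.diff(N, x)) / N)
print(test)   # 3: free of y, so m(x) satisfies m'/m = 3, i.e. m(x) = exp(3x)
```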

Example 5.1. Solve the equation
$(3x^2y + 2xy + y^3)\,dx + (x^2 + y^2)\,dy = 0.$
Here
$\frac{M_y - N_x}{N} = 3.$
Therefore this equation has an integrating factor depending on $x$. Thus we have
$m'(x) = 3m(x), \qquad m(x) = e^{3x}.$
Then
$U_y(x,y) = (x^2 + y^2)e^{3x},$
$U(x,y) = yx^2e^{3x} + \frac{y^3}{3}e^{3x} + h(x),$
$U_x = 2xye^{3x} + 3yx^2e^{3x} + y^3e^{3x} + h'(x) = 3x^2ye^{3x} + 2xye^{3x} + y^3e^{3x},$
$h'(x) = 0, \qquad h = C.$
The solution is an implicit function:
$yx^2e^{3x} + \frac{y^3}{3}e^{3x} = C.$
Example 5.2. Solve the equation
$(1 + 3x^2\sin y)\,dx - x\cot y\,dy = 0.$
Here
$\frac{N_x - M_y}{M} = \frac{-\cot y - 3x^2\cos y}{1 + 3x^2\sin y} = \frac{-\cot y\,[1 + 3x^2\sin y]}{1 + 3x^2\sin y} = -\cot y.$
Hence the equation has an integrating factor depending only on $y$:
$m'(y) + (\cot y)\,m(y) = 0,$
$m(y) = e^{-\int \cot y\,dy} = e^{-\ln(\sin y)} = \frac{1}{\sin y}.$
So the equation
$\left(\frac{1}{\sin y} + 3x^2\right)dx - x\frac{\cos y}{\sin^2 y}\,dy = 0$
is an exact equation. Thus we have
$U_x(x,y) = \frac{1}{\sin y} + 3x^2,$
$U(x,y) = \frac{x}{\sin y} + x^3 + g(y),$
$U_y(x,y) = -\frac{\cos y}{\sin^2 y}x + g'(y).$
Hence $g$ is a constant, and the solution has the form:
$\frac{x}{\sin y} + x^3 = C.$
Example 5.3. Show that the following equation has an integrating factor depending on $xy$, and solve it:
$xy^2\,dx + (x^2y - x)\,dy = 0.$
Exactness of $m(xy)xy^2\,dx + m(xy)(x^2y - x)\,dy = 0$ requires
$(m(xy)xy^2)_y = ((x^2y - x)m(xy))_x,$
i.e.
$m'(xy)x^2y^2 + 2xy\,m(xy) = m'(xy)(x^2y^2 - xy) + (2xy - 1)m(xy),$
which reduces to
$m'(xy)xy + m(xy) = 0.$
Setting $s = xy$:
$m'(s) + \frac{1}{s}m(s) = 0, \qquad m(s) = \frac{1}{s}.$
Multiplying the equation by $m(xy) = \frac{1}{xy}$ we get
$y\,dx + \left(x - \frac{1}{y}\right)dy = 0,$
$U_x(x,y) = y, \qquad U(x,y) = xy + g(y),$
$U_y(x,y) = x + g'(y),$
$g'(y) = -\frac{1}{y}, \qquad g(y) = -\ln y.$
Hence $xy - \ln y = C$ is a general solution.

6. Existence and uniqueness.

Theorem 6.1. If $p(t)$ and $h(t)$ are continuous on the interval $a < t < b$ containing the point $t = t_0$, then there exists a unique function $y = \varphi(t)$ that satisfies
$y' + p(t)y = h(t), \ t \in (a,b), \qquad y(t_0) = y_0.$  (6.1)

Proof. Suppose that $z(t)$ is also a solution of the problem (6.1), i.e.
$z' + p(t)z = h(t), \ t \in (a,b), \qquad z(t_0) = y_0.$  (6.2)
Then the function $w(t) = y(t) - z(t)$ is a solution of the problem
$w' + p(t)w = 0, \ t \in (a,b), \qquad w(t_0) = 0.$  (6.3)
Multiplying the equation in (6.3) by $e^{\int_{t_0}^t p(s)\,ds}$ we obtain
$\frac{d}{dt}\left(w(t)e^{\int_{t_0}^t p(s)\,ds}\right) = 0.$
Therefore
$w(t)e^{\int_{t_0}^t p(s)\,ds} = w(t_0) = 0, \quad \forall t \in (a,b).$
Hence $w(t) = 0$, i.e. $y(t) = z(t)$, for all $t \in (a,b)$. $\square$

Theorem 6.2 (Existence and uniqueness). Suppose that the functions $f(t,y)$ and $f_y(t,y)$ are continuous on the rectangle
$R_0 := \{(t,y) : |t| \le a, \ |y - y_0| \le b\}.$
Then the initial value problem
$y' = f(t,y), \qquad y(0) = y_0$  (6.4)
has a unique solution defined on the interval
$|t| \le h = \min\left(a, \frac{b}{M}\right),$
where
$M = \max\{|f(t,y)| : (t,y) \in R_0\}.$

Proof. Since $f_y$ is continuous on the compact set $R_0$, it is bounded there: $|f_y(t,y)| \le L$ for all $(t,y) \in R_0$. Hence the function $f(\cdot,\cdot)$ satisfies the Lipschitz condition in $R_0$:
$|f(t,y_1) - f(t,y_2)| \le L|y_1 - y_2|, \quad \forall (t,y_1), (t,y_2) \in R_0.$  (6.5)
The problem (6.4) is equivalent to the integral equation:
$y(t) = y_0 + \int_0^t f(s,y(s))\,ds.$  (6.6)
Let us show that there exists a continuous function $y(t)$ that satisfies (6.6) (it follows from (6.6) that the function $y(t)$ is differentiable and satisfies (6.4)).
We are going to show the existence of such a function employing the method of successive approximations (Picard's iteration method), and start by choosing an initial function
$y_0(t) = y_0, \quad \forall t \in [-h,h].$
The subsequent iterations we choose as follows:
$y_1(t) = y_0 + \int_0^t f(s,y_0)\,ds,$
$y_2(t) = y_0 + \int_0^t f(s,y_1(s))\,ds,$
$\ldots$
$y_n(t) = y_0 + \int_0^t f(s,y_{n-1}(s))\,ds.$  (6.7)
Let us show that the graph of $y_n(t)$ lies in $R_0$ when $|t| \le h$ (remember that $f$ is defined only on $R_0$, so $f(t,y_n(t))$ has a meaning only when $|y_n(t) - y_0| \le b$ for all $t \in [-h,h]$).
For $n = 0$ the statement is clear:
$(t,y_0) \in R_0, \quad \forall t \in [-h,h].$
For $n = 1$ and $t \in [0,h]$ we have
$|y_1(t) - y_0| = \left|\int_0^t f(s,y_0)\,ds\right| \le \int_0^t |f(s,y_0)|\,ds \le M|t| \le Mh \le b.$
Suppose that the graph of $y_n(t)$ is in $R_0$, i.e.
$|y_n(t) - y_0| \le b, \quad \forall t \in [-h,h].$
Then
$|y_{n+1}(t) - y_0| \le \int_0^t |f(s,y_n(s))|\,ds \le M|t| \le Mh \le b.$
For $t \in [-h,0]$ we have the same estimate.
It remains to show that there exists a continuous function $y(t)$ defined on $[-h,h]$ such that
$\lim_{n\to\infty} y_n(t) = y(t), \quad \forall t \in [-h,h].$
It is easy to see that the sequence $\{y_n(t)\}$ converges if and only if the series
$y_0 + \sum_{n=1}^{\infty}(y_n(t) - y_{n-1}(t))$  (6.8)
converges, and $\lim_{n\to\infty} y_n(t)$ is equal to the sum of the series (6.8).
To show that (6.8) converges to some continuous function (of course, defined on $[-h,h]$) we need the following.
Proposition A. Suppose that the functions $f_n(t)$, $n = 1,2,\ldots$ are continuous on some interval $[a,b]$, and
$|f_n(t)| \le C_n, \quad n = 1,2,\ldots, \ \forall t \in [a,b],$
where the number series $\sum_{n=1}^{\infty} C_n$ is convergent. Then the functional series $\sum_{n=1}^{\infty} f_n(t)$ is absolutely convergent on $[a,b]$, and the function
$f(t) = \sum_{n=1}^{\infty} f_n(t)$
is a continuous function. Moreover,
$\int_a^b f(t)\,dt = \sum_{n=1}^{\infty}\int_a^b f_n(t)\,dt.$
It is not difficult to see that (for $t \in [0,h]$)
$|y_1(t) - y_0| = \left|\int_0^t f(s,y_0)\,ds\right| \le Mt,$
$|y_2(t) - y_1(t)| = \left|\int_0^t [f(s,y_1(s)) - f(s,y_0)]\,ds\right| \le \int_0^t |f(s,y_1(s)) - f(s,y_0)|\,ds \le \int_0^t L|y_1(s) - y_0|\,ds \le ML\frac{t^2}{2},$
$|y_3(t) - y_2(t)| \le \int_0^t L|y_2(s) - y_1(s)|\,ds \le L\int_0^t ML\frac{s^2}{2}\,ds = ML^2\frac{t^3}{3!},$
$\ldots$
$|y_n(t) - y_{n-1}(t)| \le L\int_0^t |y_{n-1}(s) - y_{n-2}(s)|\,ds \le ML^{n-1}\int_0^t \frac{s^{n-1}}{(n-1)!}\,ds = ML^{n-1}\frac{t^n}{n!} \le \frac{M}{L}\frac{(Lh)^n}{n!}.$
We know that the series $\sum_{n=1}^{\infty}\frac{(Lh)^n}{n!}$ converges. Therefore, due to Proposition A, the series (6.8) converges to some continuous function $y(t)$, which is the limit of the sequence $\{y_n(t)\}$ on $[-h,h]$. Hence, passing to the limit in (6.7) as $n \to \infty$, we obtain
$y(t) = y_0 + \int_0^t f(s,y(s))\,ds.$
Suppose that $y(t)$ is a solution of the problem (6.4) and $z(t)$ is a solution of the problem
$z'(t) = f(t,z(t)), \qquad z(0) = z_0.$  (6.9)
Assume that $y(t)$ and $z(t)$ exist on the interval $[0,T]$. Then, subtracting from (6.6) the equation
$z(t) = z_0 + \int_0^t f(s,z(s))\,ds,$
which is equivalent to (6.9), we get:
$|y(t) - z(t)| \le |y_0 - z_0| + \int_0^t |f(s,y(s)) - f(s,z(s))|\,ds \le |y_0 - z_0| + L\int_0^t |y(s) - z(s)|\,ds.$  (6.10)
By using Gronwall's lemma we obtain from (6.10) the following estimate:
$|y(t) - z(t)| \le |y_0 - z_0|e^{LT}, \quad \forall t \in [0,T].$  (6.11)
It follows from the inequality (6.11) that if $y_0$ is sufficiently close to $z_0$, then $y(t)$ is close to $z(t)$ on $[0,T]$: if $|z_0 - y_0|$ is small enough, then
$\max_{t\in[0,T]}|y(t) - z(t)|$
is small. That is, the solution of the initial value problem (6.4) depends continuously on the initial data.
In particular, the estimate (6.11) implies that the problem (6.4) has a unique solution. $\square$
Lemma (Gronwall). If $u(t), v(t)$ are non-negative continuous functions on $[0,T]$, $C$ is a non-negative number, and
$u(t) \le C + \int_0^t u(s)v(s)\,ds, \quad \forall t \in [0,T],$  (6.12)
then
$u(t) \le Ce^{\int_0^t v(s)\,ds}, \quad \forall t \in [0,T].$  (6.13)

Proof. Let us denote
$\Psi(t) := C + \int_0^t u(s)v(s)\,ds.$
Then multiplying both sides of (6.12) by $v(t)$ we get
$u(t)v(t) \le \left(C + \int_0^t u(s)v(s)\,ds\right)v(t)$
or
$\Psi'(t) \le v(t)\Psi(t), \qquad \Psi(0) = C.$
Multiplying this inequality by $e^{-\int_0^t v(s)\,ds}$ we obtain
$\left(e^{-\int_0^t v(s)\,ds}\,\Psi(t)\right)' \le 0.$
It follows from the last inequality that the function $e^{-\int_0^t v(s)\,ds}\,\Psi(t)$ is non-increasing on $[0,T]$. Therefore
$e^{-\int_0^t v(s)\,ds}\,\Psi(t) \le \Psi(0) = C, \quad \forall t \in [0,T].$
Hence
$\Psi(t) \le Ce^{\int_0^t v(s)\,ds}.$
Since
$\Psi(t) \ge u(t), \quad \forall t \in [0,T],$
we see that (6.13) holds true. $\square$
You can find the proof of Gronwall's lemma and some of its generalizations in the famous book of E.F. Beckenbach and R. Bellman.¹

¹E.F. Beckenbach and R. Bellman, Inequalities, Springer-Verlag, 1961.
Theorem 6.3. Suppose that the function $f(t,y)$ is continuous on the region $Q_T = [0,T]\times\mathbb{R}$ and satisfies the Lipschitz condition with respect to the second variable, i.e. there exists a positive number $L$ such that
$|f(t,y) - f(t,z)| \le L|y - z|, \quad \forall t \in [0,T], \ \forall y,z \in \mathbb{R}.$  (6.14)
Then the problem
$y' = f(t,y), \qquad y(0) = y_0$  (6.15)
has a unique solution on the interval $[0,T]$.

Proof. The problem (6.15) is equivalent to the nonlinear integral equation
$y(t) = y_0 + \int_0^t f(s,y(s))\,ds.$  (6.16)
We write this integral equation in the form of an operator equation
$y = T(y),$  (6.17)
where
$T(y)(t) := y_0 + \int_0^t f(s,y(s))\,ds.$
So our problem is reduced to the problem of existence and uniqueness of a solution to the operator equation (6.17), i.e. to the problem of existence and uniqueness of a fixed point of the operator $T$.
Let us denote by $C_T$ the Banach space of all continuous functions defined on $[0,T]$, equipped with the norm
$\|y\| = \max_{t\in[0,T]}\left(e^{-2Lt}|y(t)|\right).$
It is clear that the operator $T$ maps the Banach space $C_T$ into itself. Due to the Lipschitz condition (6.14) we have
$|T(y)(t) - T(z)(t)| = \left|\int_0^t (f(s,y(s)) - f(s,z(s)))\,ds\right| \le \int_0^t |f(s,y(s)) - f(s,z(s))|\,ds \le L\int_0^t |y(s) - z(s)|\,ds$
$= L\int_0^t e^{-2Ls}|y(s) - z(s)|\,e^{2Ls}\,ds \le L\left(\max_{s\in[0,T]} e^{-2Ls}|y(s) - z(s)|\right)\int_0^t e^{2Ls}\,ds \le L\frac{e^{2Lt}}{2L}\|y - z\|.$
Therefore we have
$e^{-2Lt}|T(y)(t) - T(z)(t)| \le \frac{1}{2}\|y - z\|.$
Hence
$\|T(y) - T(z)\| = \max_{t\in[0,T]}\left(e^{-2Lt}|T(y)(t) - T(z)(t)|\right) \le \frac{1}{2}\|y - z\|,$
i.e. the operator $T$ is a contraction in the Banach space $C_T$. Thus, according to the Banach fixed point theorem, it has a unique fixed point in $C_T$, and therefore the problem (6.15) has a unique solution. $\square$
Proposition 6.4. If $f(y)$ is an increasing and continuous function, then
$y' + f(y) = 0, \qquad y(t_0) = y_0$  (6.18)
has a unique solution.

Suppose that the problem (6.18) has two different solutions $y(t)$ and $\tilde y(t)$. Then the function $w(t) = y(t) - \tilde y(t)$ is a solution of the problem
$w'(t) + f(y(t)) - f(\tilde y(t)) = 0,$  (6.19)
$w(t_0) = 0.$  (6.20)
Multiplying (6.19) by $w(t)$ we obtain
$\frac{d}{dt}\left(\frac{1}{2}w^2(t)\right) = -(f(y(t)) - f(\tilde y(t)))(y(t) - \tilde y(t)).$  (6.21)
Since the function $f(y)$ is an increasing function, for each $y$ and $\tilde y$
$(f(y) - f(\tilde y))(y - \tilde y) \ge 0.$
Therefore (6.21) implies:
$\frac{d}{dt}\left(\frac{1}{2}w^2(t)\right) \le 0.$
Integrating this inequality and using (6.20) we get
$w^2(t) \le w^2(t_0) = 0, \quad \forall t \ge t_0.$
Therefore $w(t) = 0$, $\forall t \ge t_0$.

The following problem has infinitely many solutions:
$y' = \sqrt{y}, \qquad y(0) = 0.$
For each positive $L$ the following function is a solution of this problem:
$y_L(t) = \begin{cases} 0, & \text{for } t \le L,\\ \frac{1}{4}(t - L)^2, & \text{for } t > L.\end{cases}$
The problem
$y' = y^2, \qquad y(0) = 1$
has a solution
$y(t) = \frac{1}{1-t}$
which tends to $+\infty$ as $t \to 1^-$.

6.1. Picard Iterations. The sequence of functions $\{y_n(t)\}$ defined by the recurrence relation
$y_n(t) = y_0 + \int_0^t f(s,y_{n-1}(s))\,ds$  (6.22)
is called the sequence of Picard iterations.

Proposition 6.5. If $f(t,y)$ satisfies the conditions of Theorem 6.3, then the sequence of Picard iterations defined by (6.22) converges uniformly on the interval $[0,T]$ to the solution of the problem (6.15).

Proof. It is clear that
$|y_1(t) - y_0(t)| \le \int_0^t |f(s,y_0)|\,ds \le Mt.$
By using the Lipschitz condition and the previous estimate we get
$|y_2(t) - y_1(t)| \le \int_0^t |f(s,y_1(s)) - f(s,y_0)|\,ds \le \int_0^t L|y_1(s) - y_0|\,ds \le ML\int_0^t s\,ds = ML\frac{t^2}{2}.$
Similarly we get the estimate
$|y_3(t) - y_2(t)| \le \int_0^t |f(s,y_2(s)) - f(s,y_1(s))|\,ds \le \int_0^t L|y_2(s) - y_1(s)|\,ds \le \frac{ML^2}{2!}\int_0^t s^2\,ds = ML^2\frac{t^3}{3!},$
and by induction we get
$|y_n(t) - y_{n-1}(t)| \le \int_0^t |f(s,y_{n-1}(s)) - f(s,y_{n-2}(s))|\,ds \le \int_0^t L|y_{n-1}(s) - y_{n-2}(s)|\,ds \le \frac{ML^{n-1}}{(n-1)!}\int_0^t s^{n-1}\,ds = ML^{n-1}\frac{t^n}{n!}$  (6.23)
for each $n = 1,2,\ldots$. Due to this estimate and the Weierstrass theorem, the functional series
$y_0 + (y_1(t) - y_0) + (y_2(t) - y_1(t)) + \cdots + (y_n(t) - y_{n-1}(t)) + \cdots$  (6.24)
is uniformly convergent on the interval $[0,T]$ to a continuous function $y(t)$. Passing to the limit in the equation (6.22) as $n \to \infty$ we deduce that the function $y(t)$ is a solution of the integral equation (6.16), and therefore a solution of the Cauchy problem (6.15). $\square$
Problem 6.6. Use the method of successive approximations to find the solution of the problem
$y'(t) = y(t) + t, \qquad y(0) = 1.$

Solution.
$y_n(t) = 1 + \int_0^t (y_{n-1}(s) + s)\,ds, \quad y_0(t) = 1, \quad n = 1,2,\ldots$  (6.25)
From this equality we obtain
$y_1(t) = 1 + \int_0^t (1 + s)\,ds = 1 + t + \frac{t^2}{2},$
$y_2(t) = 1 + \int_0^t\left(1 + s + \frac{s^2}{2} + s\right)ds = 1 + t + t^2 + \frac{t^3}{3!},$
$y_3(t) = 1 + \int_0^t\left(1 + 2s + s^2 + \frac{s^3}{3!}\right)ds = 1 + t + t^2 + \frac{t^3}{3} + \frac{t^4}{4!},$
$\ldots$
$y_n(t) = 1 + \int_0^t\left(1 + 2s + s^2 + \frac{2s^3}{3!} + \cdots + \frac{2s^{n-1}}{(n-1)!} + \frac{s^n}{n!}\right)ds = 1 + t + t^2 + \frac{2t^3}{3!} + \cdots + \frac{2t^n}{n!} + \frac{t^{n+1}}{(n+1)!}.$
Letting $n \to \infty$ we obtain the solution
$y(t) = 1 + t + t^2 + \frac{2t^3}{3!} + \frac{2t^4}{4!} + \cdots = 2e^t - t - 1.$

7. Some famous First Order equations


• The equation
$y' + p(t)y = q(t)y^n$  (7.1)
is called the Bernoulli equation. It was first considered by Jacob Bernoulli. The change $v = y^{1-n}$ reduces the Bernoulli equation to a linear first order equation. In fact, multiplying the equation (7.1) by $y^{-n}$ we obtain
$y^{-n}y' + y^{1-n}p(t) = q(t).$  (7.2)
Since
$v' = (1-n)y^{-n}y',$
the equation (7.2) takes the form
$\frac{1}{1-n}v' + p(t)v = q(t).$
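The reduction is easy to reproduce symbolically. In the sketch below the sample equation $y' + y = y^3$ (so $p = q = 1$, $n = 3$) is our choice for illustration, assuming SymPy:

```python
# A SymPy sketch of the Bernoulli reduction for the sample equation
# y' + y = y**3 (p = q = 1, n = 3); v = y**(1-3) = y**(-2) satisfies a linear ODE.
import sympy as sp

t = sp.symbols('t')
y, v = sp.Function('y'), sp.Function('v')

# linear equation for v: v'/(1-n) + v = 1 with n = 3
v_sol = sp.dsolve(sp.Eq(v(t).diff(t) / (1 - 3) + v(t), 1), v(t))
print(v_sol)                      # expected: Eq(v(t), C1*exp(2*t) + 1)

# direct solve of the Bernoulli equation for comparison
print(sp.dsolve(sp.Eq(y(t).diff(t) + y(t), y(t)**3), y(t)))
```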
• The following equation is called a Riccati equation:
$y'(t) = a(t)y^2(t) + b(t)y(t) + c(t).$  (7.3)
Properties of the Riccati equation:

i) If $y_1(t)$ is a particular solution of the Riccati equation, then setting
$y(t) = y_1(t) + z(t)$
we obtain:
$y_1' + z' - a(y_1^2 + 2y_1z + z^2) - b(y_1 + z) - c = z' - (2ay_1 + b)z - az^2 = 0$
$\Rightarrow\ z' - (2ay_1 + b)z - az^2 = 0.$
ii) If $y, y_1, y_2, y_3$ are solutions of the Riccati equation, then
$\frac{y - y_2}{y - y_1} : \frac{y_3 - y_2}{y_3 - y_1} = \mathrm{const}.$
• The following equation
$y'(t) = ry(t)(1 - y(t))$
is called the logistic equation.
• An equation of the form
$y = ty'(t) + f(y'(t))$
is called the Clairaut equation.

• The Friedmann equation is the following nonlinear equation:
$\left(\frac{R'(t)}{R(t)}\right)^2 = \frac{8}{3}\pi G\rho - \frac{\kappa c^2}{R^2(t)},$  (7.4)
where $\rho$ is the density of the universe (including the mass equivalent of any energy, according to $E = mc^2$), $G$ is the gravitational constant, $c$ is the speed of light, and $\kappa$ is the curvature constant.
• The logistic equation
$y' = ay - by^2,$
where $a > 0$, $b > 0$ are given numbers, models a number of processes in economics and population genetics.
• The Solow–Swan equation
$k'(t) = sk(t)^\alpha - (\delta + g + n)k(t)$  (7.5)
models long-run economic growth.
8. Problems

(1) Show that the Cauchy problem
$y'(t) + \arctan y(t) = 0, \ t > 0, \qquad y(0) = y_0$
has a unique solution.
(2) Solve the equation
$y' = (x - y)^2 + 1.$
(3) Solve the equation
$2(xy + y)y' + x(y^4 + 1) = 0.$
(4) Find all solutions of the equation
$y' + \sin(t + y) = \sin(t - y).$
(5) Solve the equation
$ty' = y + t\cos\left(\frac{y}{t}\right).$
CHAPTER 3

Second Order Ordinary Differential equations

Many problems of mathematical physics are modelled by initial value problems or boundary value problems for second order ODE's. The most famous second order ODE is Newton's second law:
$mx''(t) = F(t),$
governing the motion of a body of mass $m$ under the influence of a force $F(t)$.

1. Second Order Linear Equations

Suppose that $y_1(t)$ and $y_2(t)$ are two solutions of the second order linear homogeneous equation
$y''(t) + p(t)y'(t) + q(t)y(t) = 0,$  (1.1)
whose coefficients are continuous on some interval $(a,b)$. It is easy to show that for any constants $C_1, C_2$ the function
$y(t) = C_1y_1(t) + C_2y_2(t)$  (1.2)
is also a solution of (1.1). We are going to show that if $y_1$ and $y_2$ satisfy a certain condition, then each solution of (1.1) has the form (1.2). Let $\varphi(t)$ be any solution of the equation (1.1). We record the values of $\varphi$ and $\varphi'$ at a point $t_0 \in (a,b)$:
$\varphi(t_0) = \varphi_0, \qquad \varphi'(t_0) = \varphi_1$  (1.3)
and consider the system of equations (with respect to $C_1, C_2$):
$C_1y_1(t_0) + C_2y_2(t_0) = \varphi_0,$
$C_1y_1'(t_0) + C_2y_2'(t_0) = \varphi_1.$
This system has a unique solution $C_1, C_2$ if
$\begin{vmatrix} y_1(t_0) & y_2(t_0)\\ y_1'(t_0) & y_2'(t_0)\end{vmatrix} \ne 0.$
The function
$W[y_1,y_2](t) = \begin{vmatrix} y_1(t) & y_2(t)\\ y_1'(t) & y_2'(t)\end{vmatrix}$
is called the Wronskian of $y_1, y_2$. So we have proved the following theorem:


Theorem 1.1. If $p$ and $q$ are continuous on $(a,b)$, and the Wronskian of solutions $y_1, y_2$ is not zero at some point $t_0 \in (a,b)$, then the family of solutions
$y(t) = C_1y_1(t) + C_2y_2(t), \quad t \in (a,b),$
includes every solution of (1.1).

The solution set $y_1, y_2$ is said to form a fundamental set of solutions if $W[y_1,y_2](t_0) \ne 0$ at some $t_0 \in (a,b)$.

If $y_1(t)$ is a solution of (1.1) which satisfies
$y_1(t_0) = 1, \qquad y_1'(t_0) = 0,$
and $y_2(t)$ is a solution satisfying
$y_2(t_0) = 0, \qquad y_2'(t_0) = 1,$
then $y_1, y_2$ is a fundamental set of solutions.

2. Linear independence and Wronskian.

Two functions $f_1(t)$ and $f_2(t)$ are said to be linearly dependent on an interval $(a,b)$ if there exist two constants $K_1, K_2$, not both zero, such that
$K_1f_1(t) + K_2f_2(t) = 0, \quad \forall t \in (a,b).$

Theorem 2.1. If $f$ and $g$ are linearly dependent on $(a,b)$, then
$W[f,g](t) = 0, \quad \forall t \in (a,b).$
If $f(t)$ and $g(t)$ are differentiable on $(a,b)$ and $W[f,g](t_0) \ne 0$ for some $t_0 \in (a,b)$, then $f$ and $g$ are linearly independent on $(a,b)$.

Proof. Suppose $f$ and $g$ are linearly dependent:
$K_1f(t) + K_2g(t) = 0, \quad \forall t \in (a,b).$
If a function is zero on the interval $(a,b)$, its derivative is zero on this interval too:
$K_1f'(t) + K_2g'(t) = 0, \quad \forall t \in (a,b).$
We know that at least one of the numbers $K_1$ and $K_2$ is not zero; thus the determinant of the system
$\begin{cases} K_1f(t) + K_2g(t) = 0,\\ K_1f'(t) + K_2g'(t) = 0\end{cases}$
must be zero. That is, $W[f,g](t) = 0$, $\forall t \in (a,b)$.
If $W[f,g](t_0)$ were not equal to zero, then according to Cramer's rule the numbers $K_1$ and $K_2$ would both be equal to $0$; hence $f$ and $g$ would be linearly independent. $\square$
Theorem 2.2 (Abel's theorem). If $y_1(t)$ and $y_2(t)$ are solutions of the differential equation
$y'' + p(t)y'(t) + q(t)y(t) = 0,$
where $p, q$ are continuous on $(a,b)$, then
$W[y_1,y_2](t) = C\exp\left(-\int p(t)\,dt\right), \quad \forall t \in (a,b),$
where $C$ is a constant.

Proof. Let us multiply the equation
$y_1'' + p(t)y_1' + q(t)y_1 = 0$
by $y_2(t)$, and
$y_2'' + p(t)y_2' + q(t)y_2 = 0$
by $y_1(t)$:
$y_1''y_2 + p(t)y_1'y_2 + q(t)y_1y_2 = 0,$  (2.1)
$y_2''y_1 + p(t)y_1y_2' + q(t)y_1y_2 = 0.$  (2.2)
Subtracting (2.2) from (2.1) we obtain:
$y_1y_2'' - y_1''y_2 + p(t)[y_1y_2' - y_1'y_2] = 0.$
We can rewrite the last equality in the following form:
$[y_1y_2' - y_1'y_2]' + p(t)[y_1y_2' - y_1'y_2] = 0$
or
$W'[y_1,y_2](t) + p(t)W[y_1,y_2](t) = 0.$
Hence
$W[y_1,y_2](t) = Ce^{-\int p(t)\,dt}.$ $\square$

Theorem 2.3. Let $y_1$ and $y_2$ be solutions of
$y'' + p(t)y' + q(t)y = 0,$  (2.3)
where $p, q$ are continuous on $(a,b)$.
Then $y_1$ and $y_2$ are linearly dependent on $(a,b)$ if and only if
$W[y_1,y_2](t) = 0, \quad \forall t \in (a,b).$
Proof. Let us show that if $W[y_1,y_2](t) = 0$ on $(a,b)$, then $y_1$ and $y_2$ are linearly dependent on $(a,b)$. Let $t_0 \in (a,b)$ be any point of $(a,b)$. Then $W[y_1,y_2](t_0) = 0$, i.e.
$\begin{vmatrix} y_1(t_0) & y_2(t_0)\\ y_1'(t_0) & y_2'(t_0)\end{vmatrix} = 0.$
Therefore the system of equations
$\begin{cases} C_1y_1(t_0) + C_2y_2(t_0) = 0,\\ C_1y_1'(t_0) + C_2y_2'(t_0) = 0\end{cases}$  (2.4)
has a nontrivial solution $C_1^*, C_2^*$.
Now consider the function
$\psi(t) = C_1^*y_1(t) + C_2^*y_2(t).$
This function is a solution of the equation (2.3) and, due to (2.4), satisfies the conditions
$\psi(t_0) = \psi'(t_0) = 0.$
Thus, due to the uniqueness theorem (the zero function satisfies the same equation and the same initial conditions), we have
$\psi(t) = C_1^*y_1(t) + C_2^*y_2(t) = 0, \quad \forall t,$
that is, the functions $y_1$ and $y_2$ are linearly dependent on $(a,b)$. $\square$
Proposition 2.4. Let $u$ and $v$ be two solutions of the equation
$y''(t) + p(t)y'(t) + q(t)y(t) = 0$  (2.5)
on the interval $(a,b)$, where $p, q$ are continuous functions on $(a,b)$. Show that if $t_0 \in (a,b)$ is a common extremum point of $u$ and $v$, then they are linearly dependent on $(a,b)$.

Problem 2.5. Determine if the following functions are linearly independent:
$y_1(t) = e^{2t}, \quad y_2(t) = \sin(3t), \quad y_3(t) = \cos t.$  (2.6)

Problem 2.6. For which values of the coefficients $a$ and $b$ does every solution of the equation
$y''(t) + ay'(t) + by(t) = 0$
have infinitely many zeros?

Problem 2.7. Show that if the functions
$u_1(t), u_2(t), \ldots, u_n(t)$
are defined on $(a,b)$ and linearly independent on $(a_1,b_1) \subset (a,b)$, then they are linearly independent on $(a,b)$.
Problem 2.8. Show that if the functions $u, v$ are continuously differentiable and linearly independent on $(a,b)$ and $W[u,v](t) = 0$, $\forall t \in (a,b)$, then there exists an interval $(\alpha,\beta) \subset (a,b)$ such that the functions $u$ and $v$ are linearly dependent on $(\alpha,\beta)$.

3. Nonhomogeneous Equations. Method of Variation of Parameters.

In this section we will find a particular solution of a nonhomogeneous equation
$y''(t) + p(t)y'(t) + q(t)y(t) = h(t),$  (3.1)
where $p(t)$, $q(t)$ and $h(t)$ are given continuous functions.
It is easy to see that if $Y_1$ and $Y_2$ are solutions of the nonhomogeneous equation (3.1), then the function
$y(t) = Y_1(t) - Y_2(t)$
is a solution of the corresponding homogeneous equation
$y''(t) + p(t)y'(t) + q(t)y(t) = 0.$  (3.2)

Proposition 3.1. If $y_p(t)$ is a particular solution of (3.1), and $y_1(t)$, $y_2(t)$ is a fundamental set of solutions of the corresponding homogeneous equation (3.2), then the general solution of (3.1) has the form
$y(t) = c_1y_1(t) + c_2y_2(t) + y_p(t).$  (3.3)

Proof. Assume that $y(t)$ is an arbitrary solution of (3.1). Then the function $y(t) - y_p(t)$ satisfies (3.2). But each solution of (3.2) has the form
$c_1y_1(t) + c_2y_2(t).$
That is,
$y(t) - y_p(t) = c_1y_1(t) + c_2y_2(t),$
so
$y(t) = y_p(t) + c_1y_1(t) + c_2y_2(t).$
Thus to find the general solution of (3.1) we must find a particular solution of (3.1). $\square$

So if we know two linearly independent solutions of the homogeneous equation (3.2), then to find the general solution of the nonhomogeneous equation (3.1) it suffices to find one particular solution $y_p(t)$ of (3.1).
Now we will find a particular solution of (3.1) by using the so-called method of variation of parameters. The idea is that we look for a particular solution of the equation (3.1) in the form
$y_p(t) = c_1(t)y_1(t) + c_2(t)y_2(t),$  (3.4)
where $y_1(t), y_2(t)$ is a fundamental set of solutions of the homogeneous equation
$y''(t) + p(t)y'(t) + q(t)y(t) = 0,$  (3.5)
and $c_1(t)$, $c_2(t)$ are differentiable functions which will be determined.

It is clear that
$y_p'(t) = c_1(t)y_1'(t) + c_2(t)y_2'(t) + c_1'(t)y_1(t) + c_2'(t)y_2(t).$
Assume that
$c_1'(t)y_1(t) + c_2'(t)y_2(t) = 0.$  (3.6)
Thanks to the condition (3.6),
$y_p'(t) = c_1(t)y_1'(t) + c_2(t)y_2'(t).$  (3.7)
Thus
$y_p''(t) = c_1(t)y_1''(t) + c_2(t)y_2''(t) + c_1'(t)y_1'(t) + c_2'(t)y_2'(t).$  (3.8)
Substituting $y_p(t)$, $y_p'(t)$ and $y_p''(t)$ as given in (3.4), (3.7) and (3.8) into the equation (3.1) we obtain
$c_1(t)y_1''(t) + c_2(t)y_2''(t) + c_1'(t)y_1'(t) + c_2'(t)y_2'(t) + p(t)\left[c_1(t)y_1'(t) + c_2(t)y_2'(t)\right] + q(t)\left[c_1(t)y_1(t) + c_2(t)y_2(t)\right] = h(t).$
The last equality we can rewrite in the form
$c_1(t)\left[y_1''(t) + p(t)y_1'(t) + q(t)y_1(t)\right] + c_2(t)\left[y_2''(t) + p(t)y_2'(t) + q(t)y_2(t)\right] + c_1'(t)y_1'(t) + c_2'(t)y_2'(t) = h(t).$
Since $y_1(t)$ and $y_2(t)$ are solutions of the homogeneous equation (3.2), we have
$c_1'(t)y_1'(t) + c_2'(t)y_2'(t) = h(t).$  (3.9)
Solving the system of equations (3.6) and (3.9) we find
$c_1'(t) = -\frac{y_2(t)h(t)}{W[y_1,y_2](t)}, \qquad c_2'(t) = \frac{y_1(t)h(t)}{W[y_1,y_2](t)},$
and
$c_1(t) = -\int\frac{y_2(t)h(t)}{W[y_1,y_2](t)}\,dt, \qquad c_2(t) = \int\frac{y_1(t)h(t)}{W[y_1,y_2](t)}\,dt.$
Hence the general solution of the nonhomogeneous equation (3.1) has the form
$y(t) = c_1y_1(t) + c_2y_2(t) - y_1(t)\int\frac{y_2(t)h(t)}{W[y_1,y_2](t)}\,dt + y_2(t)\int\frac{y_1(t)h(t)}{W[y_1,y_2](t)}\,dt.$  (3.10)
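Formula (3.10) can be exercised on a concrete equation. In the sketch below the equation $y'' + y = \tan t$ and the fundamental set $y_1 = \cos t$, $y_2 = \sin t$ are our sample choices (this is the classic case where the method of undetermined coefficients does not apply); it assumes SymPy:

```python
# A SymPy sketch of formula (3.10) for the sample equation y'' + y = tan(t),
# using the fundamental set y1 = cos(t), y2 = sin(t) (so W[y1, y2] = 1).
import sympy as sp

t = sp.symbols('t')
y1, y2, h = sp.cos(t), sp.sin(t), sp.tan(t)

W = sp.simplify(y1 * sp.diff(y2, t) - sp.diff(y1, t) * y2)   # Wronskian, here 1
yp = -y1 * sp.integrate(y2 * h / W, t) + y2 * sp.integrate(y1 * h / W, t)

residual = sp.diff(yp, t, 2) + yp - h
print(sp.simplify(residual))    # expected: 0
```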
4. Method of Undetermined coefficients.

Example 4.1. Find the general solution of
$y'' + 4y' + 4y = te^{-2t}.$  (4.1)
First we find the general solution of the homogeneous equation
$y'' + 4y' + 4y = 0.$  (4.2)
The corresponding characteristic equation
$r^2 + 4r + 4 = 0$
has a double root $r_0 = -2$. Thus the equation (4.2) has two linearly independent solutions
$y_1(t) = e^{-2t}, \qquad y_2(t) = te^{-2t}.$
The general solution of the homogeneous equation is:
$y_h(t) = (C_1 + C_2t)e^{-2t}.$
We'll find a particular solution of (4.1) by the method of undetermined coefficients. Since the right-hand side is a solution of the homogeneous equation corresponding to a double root of the characteristic equation, we look for a particular solution in the following form:
$y_p(t) = (At + B)t^2e^{-2t} = (At^3 + Bt^2)e^{-2t}.$
Then
$y_p' = (3At^2 + 2Bt)e^{-2t} - 2(At^3 + Bt^2)e^{-2t},$
$y_p'' = (6At + 2B)e^{-2t} - 4(3At^2 + 2Bt)e^{-2t} + 4(At^3 + Bt^2)e^{-2t}.$
Inserting the values of $y_p, y_p', y_p''$ into the equation (4.1) we obtain
$6Ate^{-2t} + 2Be^{-2t} = te^{-2t} \ \Rightarrow\ A = \frac{1}{6}, \ B = 0.$
So $y_p(t) = \frac{1}{6}t^3e^{-2t}$ is a particular solution of (4.1), and the general solution of (4.1) has the form
$y(t) = (C_1 + C_2t)e^{-2t} + \frac{1}{6}t^3e^{-2t}.$
Example 4.2. Find a particular solution of
$y'' - 3y' - 4y = e^{3t}.$
Since
$(e^{\lambda t})' = \lambda e^{\lambda t}, \qquad (e^{\lambda t})'' = \lambda^2e^{\lambda t},$
that is, the exponential function reproduces itself under differentiation, we try to find a solution of the form
$Y(t) = Ae^{3t}.$
Then
$Y'(t) = 3Ae^{3t}, \qquad Y''(t) = 9Ae^{3t} \ \Rightarrow$
$9Ae^{3t} - 9Ae^{3t} - 4Ae^{3t} = e^{3t} \ \Rightarrow\ -4Ae^{3t} = e^{3t} \ \Rightarrow\ A = -\frac{1}{4}.$
So the function $Y(t) = -\frac{1}{4}e^{3t}$ is a particular solution of this equation.
Example 4.3.
$y'' + 2y = t^2 + 1.$
Here we assume that
$y(t) = At^2 + Bt + C.$
Then
$2A + 2At^2 + 2Bt + 2C = t^2 + 1$
$\Rightarrow\ 2A = 1, \quad 2B = 0 \Rightarrow B = 0, \quad 2A + 2C = 1 \Rightarrow C = 0.$
Thus
$y(t) = \frac{1}{2}t^2$
is a solution of the equation.

Example 4.4. Find a particular solution of
$y'' + 3y' + 2y = \cos t.$
First we try
$y(t) = A\cos t:$
$-A\cos t - 3A\sin t + 2A\cos t = \cos t,$
$A\cos t - 3A\sin t = \cos t \ \Rightarrow\ A = 1 \ \text{and} \ A = 0.$
Impossible.
To compensate for the $\sin t$ we try
$y(t) = A\cos t + B\sin t:$
$-A\cos t - B\sin t - 3A\sin t + 3B\cos t + 2A\cos t + 2B\sin t = \cos t,$
$(A + 3B)\cos t + (B - 3A)\sin t = \cos t,$
$A + 3B = 1, \qquad B = 3A,$
$10A = 1, \qquad A = \frac{1}{10}, \qquad B = \frac{3}{10}.$
So the particular solution is
$y(t) = \frac{1}{10}\cos t + \frac{3}{10}\sin t.$
Example 4.5. Find a particular solution of
$y'' + 3y' + 2y = 3te^t. \quad (*)$
In this case we look for a particular solution which has the form:
$Y(t) = Ate^t + Be^t.$

Example 4.6. Solve the Cauchy problem
$y'' - 4y' + 5y = \frac{e^{2t}}{\cos t}, \qquad y(0) = 2, \ y'(0) = 4.$

5. Series solutions

One of Newton's fundamental analytic achievements was the expansion of all functions in power series (the meaning of a second, long anagram of Newton's to the effect that to solve any equation one should substitute the series into the equation and equate coefficients of like powers).
V.I. Arnol'd, "Ordinary Differential Equations", Springer, 1991.

5.1. Power series. A power series is a series of the form
$\sum_{n=0}^{\infty} a_nx^n.$  (5.1)
We say that a power series is convergent at the point $x = x_0$ if the number series $\sum_{n=0}^{\infty} a_nx_0^n$ is convergent.

Theorem 5.1. If the power series of the form (5.1) is convergent at some point $x_0 \ne 0$, then this series converges absolutely for $|x| < r_0 = |x_0|$.

Proof. Since the number series $\sum_{n=0}^{\infty} a_nx_0^n$ is convergent, the sequence $\{a_nx_0^n\}$ tends to zero as $n \to \infty$. Thus there exists a number $M_0 > 0$ such that
$|a_nx_0^n| \le M_0, \quad n = 0,1,2,\ldots$
Therefore we have
$|a_nx^n| = |a_nx_0^n|\left|\frac{x}{x_0}\right|^n \le M_0\left|\frac{x}{x_0}\right|^n.$
For $|x| < |x_0|$ we have
$\sum_{n=0}^{\infty}\left|\frac{x}{x_0}\right|^n = \frac{1}{1 - \left|\frac{x}{x_0}\right|}.$
Therefore, thanks to the comparison theorem, the series $\sum_{n=0}^{\infty}|a_nx^n|$ is convergent. $\square$

Theorem 5.2. If a power series (5.1) converges for some $x_0 \ne 0$, then there exists a number $R > 0$ (possibly $R = \infty$) such that (5.1) converges absolutely on the interval $(-R,R)$ and diverges for $|x| > R$. If (5.1) converges just for $x = 0$, then $R = 0$.
Theorem 5.3. If
$\lim_{n\to\infty}\left|\frac{a_{n+1}}{a_n}\right| = L$
or
$\lim_{n\to\infty}\sqrt[n]{|a_n|} = L,$
then the radius of convergence of the power series
$\sum_{n=0}^{\infty} a_nx^n$
is the number $r = \frac{1}{L}$, with $r = \infty$ if $L = 0$ and $r = 0$ if $L = \infty$.

Theorem 5.4. If
$\sum_{n=0}^{\infty} a_nx^n = 0, \quad \forall x,$
then $a_n = 0$, $n = 0,1,2,\ldots$
If
$f(x) = \sum_{n=0}^{\infty} a_nx^n, \qquad g(x) = \sum_{n=0}^{\infty} b_nx^n,$
then
$f(x) + g(x) = \sum_{n=0}^{\infty}(a_n + b_n)x^n, \qquad f(x)g(x) = \sum_{n=0}^{\infty} c_nx^n,$  (5.2)
where $c_n = \sum_{k=0}^{n} a_kb_{n-k}$. The interval of convergence of both series in (5.2) is the common interval of convergence of the power series for $f(x)$ and $g(x)$.

Theorem 5.5. If
$f(x) = \sum_{n=0}^{\infty} a_nx^n$  (5.3)
is convergent on $(-r,r)$, then the series $\sum_{n=1}^{\infty} na_nx^{n-1}$ obtained by termwise differentiation of the series has the radius of convergence $r$ and
$f'(x) = \sum_{n=1}^{\infty} na_nx^{n-1}.$
The series obtained by termwise integration gives the power series for $\int_0^x f(s)\,ds$:
$\int_0^x f(s)\,ds = \sum_{n=0}^{\infty}\frac{a_n}{n+1}x^{n+1}, \quad |x| < r.$
Example 5.6. Show that
$\sum_{n=1}^{\infty} na_{n-1}x^{n-1} + \sum_{n=2}^{\infty} b_nx^{n+1} = 0$
implies that
$a_0 = a_1 = a_2 = 0 \quad\text{and}\quad a_n = -b_{n-1}/(n+1), \quad n \ge 3.$

Definition 5.7. A function $f(x)$ is said to be analytic at a point $x = x_0$ if $f(x)$ is the sum of a power series
$\sum_{n=0}^{\infty} a_n(x - x_0)^n$
that has a positive radius of convergence.

A point $x_0$ is called an ordinary point for
$y'' + p(x)y' + q(x)y = 0$  (5.4)
if $p(x)$ and $q(x)$ are analytic at $x_0$. Otherwise $x_0$ is called a singular point for (5.4).

Definition 5.8. A singular point $x_0$ is called a regular singular point if both $(x - x_0)p(x)$ and $(x - x_0)^2q(x)$ are analytic at $x_0$. Otherwise it is called an irregular singular point.

Example 5.9. Find the power series solution of the equation
$y'' + xy' + (x^2 + 2)y = 0.$

Solution.
$y(x) = \sum_{n=0}^{\infty} a_nx^n, \qquad y'(x) = \sum_{n=1}^{\infty} na_nx^{n-1},$
$y''(x) = \sum_{n=2}^{\infty} n(n-1)a_nx^{n-2} = \sum_{n=0}^{\infty}(n+2)(n+1)a_{n+2}x^n.$
Inserting the expressions for $y, y', y''$ into the equation we get
$\sum_{n=0}^{\infty}(n+2)(n+1)a_{n+2}x^n + \sum_{n=1}^{\infty} na_nx^n + 2\sum_{n=0}^{\infty} a_nx^n + \sum_{n=0}^{\infty} a_nx^{n+2} = 0.$
Since $\sum_{n=0}^{\infty} a_nx^{n+2} = \sum_{n=2}^{\infty} a_{n-2}x^n$ we have
$\sum_{n=2}^{\infty}\left[(n+2)(n+1)a_{n+2} + (n+2)a_n + a_{n-2}\right]x^n + (6a_3 + 3a_1)x + 2a_2 + 2a_0 = 0.$
From the last equality we deduce that
$2a_2 + 2a_0 = 0 \ \Rightarrow\ a_2 = -a_0,$
$3a_1 + 6a_3 = 0 \ \Rightarrow\ a_3 = -\frac{1}{2}a_1,$
$(n+2)(n+1)a_{n+2} + (n+2)a_n + a_{n-2} = 0,$
$a_{n+2} = -\frac{1}{n+1}a_n - \frac{1}{(n+1)(n+2)}a_{n-2},$
$a_4 = -\frac{1}{3}a_2 - \frac{1}{3\cdot 4}a_0 = \frac{1}{3}a_0 - \frac{1}{12}a_0 = \frac{1}{4}a_0,$
$a_5 = -\frac{1}{4}a_3 - \frac{1}{4\cdot 5}a_1 = \frac{1}{8}a_1 - \frac{1}{20}a_1 = \frac{3}{40}a_1, \ \ldots$
$y(x) = a_0\left[1 - x^2 + \frac{1}{4}x^4 + \cdots\right] + a_1\left[x - \frac{1}{2}x^3 + \frac{3}{40}x^5 + \cdots\right].$
Example 5.10. Find the power series solution of the Airy equation
$y'' - xy = 0.$

$y = \sum_{n=0}^{\infty} a_nx^n,$
$y'' = \sum_{n=2}^{\infty} n(n-1)a_nx^{n-2} = \sum_{n=0}^{\infty}(n+2)(n+1)a_{n+2}x^n.$
Inserting into the equation,
$\sum_{n=0}^{\infty}(n+2)(n+1)a_{n+2}x^n - \sum_{n=0}^{\infty} a_nx^{n+1} = 0,$
$\sum_{n=0}^{\infty}(n+2)(n+1)a_{n+2}x^n - \sum_{n=1}^{\infty} a_{n-1}x^n = 0$
$\Rightarrow\ 2a_2 = 0 \ \Rightarrow\ a_2 = 0,$
$a_{n+2} = \frac{1}{(n+2)(n+1)}a_{n-1}, \quad n \ge 1.$
Hence
$a_3 = \frac{1}{2\cdot 3}a_0,$
$a_4 = \frac{1}{3\cdot 4}a_1,$
$a_5 = 0 \quad (n = 3),$
$a_6 = \frac{1}{5\cdot 6}a_3 = \frac{1}{2\cdot 3\cdot 5\cdot 6}a_0 \quad (n = 4),$
$a_7 = \frac{1}{6\cdot 7}a_4 = \frac{1}{3\cdot 4\cdot 6\cdot 7}a_1 \quad (n = 5),$
$a_8 = 0 \quad (n = 6),$
$a_9 = \frac{1}{8\cdot 9}a_6 = \frac{1}{2\cdot 3\cdot 5\cdot 6\cdot 8\cdot 9}a_0 \quad (n = 7),$
$a_{10} = \frac{1}{9\cdot 10}a_7 = \frac{1}{3\cdot 4\cdot 6\cdot 7\cdot 9\cdot 10}a_1 \quad (n = 8),$
$a_{11} = 0,$
and in general
$a_{3n-1} = 0,$
$a_{3n} = \frac{a_0}{2\cdot 3\cdot 5\cdot 6\cdots(3n-1)(3n)},$
$a_{3n+1} = \frac{a_1}{3\cdot 4\cdot 6\cdot 7\cdots(3n)(3n+1)}, \quad n \ge 1.$
Thus
$y(x) = a_0\left[1 + \frac{x^3}{2\cdot 3} + \frac{x^6}{2\cdot 3\cdot 5\cdot 6} + \cdots + \frac{x^{3n}}{2\cdot 3\cdot 5\cdot 6\cdots(3n-1)(3n)} + \cdots\right] + a_1\left[x + \frac{x^4}{3\cdot 4} + \frac{x^7}{3\cdot 4\cdot 6\cdot 7} + \cdots + \frac{x^{3n+1}}{3\cdot 4\cdot 6\cdot 7\cdots(3n)(3n+1)} + \cdots\right].$
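The recurrence lends itself to a quick numerical check; the sketch below (our illustration) sums the truncated series for the solution with $a_0 = 1$, $a_1 = 0$ and verifies the equation at a sample point:

```python
# A numeric check of the Airy-equation recurrence a_{n+2} = a_{n-1}/((n+1)(n+2))
# from Example 5.10: sum the truncated series and verify y'' - x*y ≈ 0.
import numpy as np

N = 60                                   # number of series coefficients to keep
a = np.zeros(N)
a[0], a[1], a[2] = 1.0, 0.0, 0.0         # a0 = 1, a1 = 0: one fundamental solution
for n in range(1, N - 2):
    a[n + 2] = a[n - 1] / ((n + 1) * (n + 2))

x = 0.5
y   = sum(a[n] * x**n for n in range(N))
ypp = sum(n * (n - 1) * a[n] * x**(n - 2) for n in range(2, N))
print(f"y''(x) - x*y(x) = {ypp - x * y:.2e}")   # expected: ~0 (truncation error only)
```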

6. Some famous second order ODE’s


• Chebyshev Equation. The equation
$(1 - x^2)y'' - xy' + n^2y = 0,$  (6.1)
where $|x| < 1$ and $n$ is a real number, is called the Chebyshev equation.
• Emden–Fowler Equation.
$\ddot y = At^ny^m$  (6.2)
• Liouville's Equation.
$y'' + g(y)(y')^2 + f(x)y' = 0$  (6.3)
• Thomas–Fermi Equation.
$y'' = \frac{y^{3/2}}{\sqrt{x}}$  (6.4)
This equation arises in the study of the distribution of electrons in an atom.
• van der Pol's Equation.
$y'' + c(y^2 - 1)y' + y = 0,$  (6.5)
where $c$ is a positive constant. This equation was studied by the Dutch physicist Balthasar van der Pol as a mathematical model of the circuit of a vacuum tube. It has also been used to model the human heartbeat by Johannes van der Mark.
CHAPTER 4

Higher Order Ordinary Differential equations

In this section we will discuss the problem of finding the general solution and the solution of the initial value problem for an $n$-th order linear ODE of the form
$\frac{d^ny}{dt^n} + p_1(t)\frac{d^{n-1}y}{dt^{n-1}} + \cdots + p_{n-1}(t)\frac{dy}{dt} + p_n(t)y = g(t).$  (0.1)
The problem of finding a solution of (0.1) which satisfies the initial conditions
$y(t_0) = y_0, \quad y'(t_0) = y_0^{(1)}, \quad \ldots, \quad y^{(n-1)}(t_0) = y_0^{(n-1)}$  (0.2)
is called a Cauchy problem or initial value problem.

Theorem 0.1. Let $p_1(t), \ldots, p_n(t)$ and $g(t)$ be continuous on $(a,b)$. Then the problem (0.1), (0.2) has a unique solution on $(a,b)$.

The equation
$\frac{d^ny}{dt^n} + p_1\frac{d^{n-1}y}{dt^{n-1}} + \cdots + p_{n-1}\frac{dy}{dt} + p_ny = 0$  (0.3)
is called the homogeneous equation corresponding to (0.1).
If $y_1, \ldots, y_n$ are solutions of (0.3), then
$y(t) = \sum_{i=1}^{n} c_iy_i(t)$
is a solution of (0.3).

Question: Can we express every solution of (0.3) as a linear combination of given solutions $y_1, \ldots, y_n$?

Let $y(t)$ be an arbitrary solution of (0.3). If
$y(t) = \sum_{i=1}^{n} c_iy_i(t),$
then
$\begin{cases} c_1y_1(t_0) + \cdots + c_ny_n(t_0) = y(t_0),\\ c_1y_1'(t_0) + \cdots + c_ny_n'(t_0) = y'(t_0),\\ \ \cdots\\ c_1y_1^{(n-1)}(t_0) + \cdots + c_ny_n^{(n-1)}(t_0) = y^{(n-1)}(t_0).\end{cases}$
The last system has a unique solution if and only if
$W[y_1,\ldots,y_n](t_0) \ne 0.$

Theorem 0.2. If the functions $p_1(t), \ldots, p_n(t)$ are continuous on the open interval $(a,b)$, the functions $y_1, \ldots, y_n$ are solutions of the homogeneous equation (0.3) and
$W[y_1,\ldots,y_n](t_0) \ne 0,$
where $t_0$ is some point in $(a,b)$, then every solution of (0.3) can be expressed as a linear combination of $y_1, \ldots, y_n$, i.e. for each solution $y(t)$ of this equation there exist constants $C_1, \ldots, C_n$ such that
$y(t) = C_1y_1(t) + \cdots + C_ny_n(t).$

Theorem 0.3 (Abel's theorem). If $y_1, \ldots, y_n$ are solutions of
$y^{(n)} + p_1(t)y^{(n-1)} + \cdots + p_n(t)y = 0,$
then
$W[y_1,\ldots,y_n](t) = C\exp\left(-\int p_1(t)\,dt\right).$

Consider the third order equation
$y''' + p_1y'' + p_2y' + p_3y = 0.$  (0.4)
Let us show that if $y_1, y_2, y_3$ are solutions of (0.4), then
$W[y_1,y_2,y_3](t) = C\exp\left(-\int p_1(t)\,dt\right).$
Indeed, differentiating the Wronskian only the determinant with the differentiated last row survives, and using the equation (0.4) to replace the third derivatives we get
$\frac{d}{dt}W[y_1,y_2,y_3] = \begin{vmatrix} y_1 & y_2 & y_3\\ y_1' & y_2' & y_3'\\ y_1''' & y_2''' & y_3'''\end{vmatrix} = \begin{vmatrix} y_1 & y_2 & y_3\\ y_1' & y_2' & y_3'\\ -p_1y_1'' - p_2y_1' - p_3y_1 & -p_1y_2'' - p_2y_2' - p_3y_2 & -p_1y_3'' - p_2y_3' - p_3y_3\end{vmatrix} = -p_1W[y_1,y_2,y_3].$
1. Homogeneous equations with constant coefficients

Consider the $n$-th order linear homogeneous equation with constant coefficients:
$y^{(n)} + a_1y^{(n-1)} + \cdots + a_{n-1}y' + a_ny = 0.$  (1.1)
A function $y(t) = e^{rt}$ is a solution of (1.1) iff
$L(r) \equiv r^n + a_1r^{n-1} + \cdots + a_{n-1}r + a_n = 0.$  (1.2)
The equation (1.2) is called the characteristic equation of (1.1).
a) If the characteristic equation (1.2) has $n$ distinct roots $r_1, r_2, \ldots, r_n$, then the functions
$e^{r_1t}, e^{r_2t}, \ldots, e^{r_nt}$
are linearly independent solutions of (1.1) and the general solution of (1.1) has the form
$y(t) = C_1e^{r_1t} + C_2e^{r_2t} + \cdots + C_ne^{r_nt}.$
b) If a real root $r = r_k$ has multiplicity $m$, then there are $m$ linearly independent solutions
$e^{r_kt}, te^{r_kt}, \ldots, t^{m-1}e^{r_kt}$
corresponding to $r_k$.
c) If $r_k = \lambda + i\mu$ is a complex root repeated $m$ times, then $\bar r_k = \lambda - i\mu$ is also a root of the characteristic equation repeated $m$ times. In this case there are $2m$ linearly independent solutions
$e^{\lambda t}\cos\mu t, \ e^{\lambda t}\sin\mu t, \ te^{\lambda t}\cos\mu t, \ te^{\lambda t}\sin\mu t, \ \ldots, \ t^{m-1}e^{\lambda t}\cos\mu t, \ t^{m-1}e^{\lambda t}\sin\mu t$
corresponding to these roots.

Example 1.1. Find the general solution of
$y^{(6)} + 3y^{(5)} + 3y^{(4)} + y''' = 0.$
The corresponding characteristic equation is
$r^3(r + 1)^3 = 0.$
Therefore
$y = c_1 + c_2t + c_3t^2 + c_4e^{-t} + c_5te^{-t} + c_6t^2e^{-t}.$
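The roots of a characteristic polynomial can also be found numerically; the sketch below (our illustration for Example 1.1, assuming NumPy) does so:

```python
# A small numeric sketch for Example 1.1: the characteristic polynomial of
# y^(6) + 3y^(5) + 3y^(4) + y''' = 0 is r^6 + 3r^5 + 3r^4 + r^3 = r^3 (r + 1)^3.
import numpy as np

coeffs = [1, 3, 3, 1, 0, 0, 0]          # r^6 + 3r^5 + 3r^4 + r^3
roots = np.roots(coeffs)
print(np.round(roots, 6))               # three roots at 0, three at -1 (up to rounding)
```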
Example 1.2. Find the general solution of the following equations:
i. $y^{(5)}(t) + 2y(t) = 0,$  (1.3)
ii. $y^{(4)}(t) - 4y'''(t) + 5y''(t) - 4y'(t) = 0.$  (1.4)
CHAPTER 5

Laplace Transform and Applications

Definition 0.1. A function $f : I \to \mathbb{R}$ defined on a finite interval is called piecewise continuous if it is continuous on $I$ except at finitely many points, where it has both one-sided limits.
A function $f : [0,\infty) \to \mathbb{R}$ is called piecewise continuous if it is piecewise continuous on each finite interval $[0,T]$.

Definition 0.2. The Laplace transform of a piecewise continuous function $f : [0,\infty) \to \mathbb{R}$ is defined by the equality
$\mathcal{L}\{f(t)\} := F(s) = \int_0^\infty e^{-st}f(t)\,dt,$
whenever the integral is convergent.

Theorem 0.3. Suppose that a function $f$ is continuous on $[0,\infty)$, its derivative $f'$ is piecewise continuous on $[0,\infty)$, and $f$ is of exponential order at infinity, i.e. there exist positive numbers $a > 0$, $K > 0$, $T_0 > 0$ such that
$|f(t)| \le Ke^{at}, \quad \forall t \ge T_0.$  (0.1)
Then
$\mathcal{L}\{f'(t)\} = s\mathcal{L}\{f(t)\} - f(0), \quad \forall s > a.$  (0.2)

Proof. Assume that $t_1, \ldots, t_n$ are the points of discontinuity of $f'$ on $[0,A]$. Then
$\int_0^A e^{-st}f'(t)\,dt = \int_0^{t_1} e^{-st}f'(t)\,dt + \int_{t_1}^{t_2} e^{-st}f'(t)\,dt + \cdots + \int_{t_n}^A e^{-st}f'(t)\,dt$
$= e^{-st}f(t)\big|_0^{t_1} + e^{-st}f(t)\big|_{t_1}^{t_2} + \cdots + e^{-st}f(t)\big|_{t_n}^A + s\int_0^{t_1} e^{-st}f(t)\,dt + s\int_{t_1}^{t_2} e^{-st}f(t)\,dt + \cdots + s\int_{t_n}^A e^{-st}f(t)\,dt$  (0.3)
$= -f(0) + e^{-sA}f(A) + s\int_0^A e^{-st}f(t)\,dt.$
Since
$\left|e^{-sA}f(A)\right| \le Ke^{-(s-a)A} \to 0 \ \text{as} \ A \to \infty,$
passing to the limit as $A \to \infty$ in (0.3) we arrive at (0.2). $\square$


Corollary 0.4. If $f, f', \ldots, f^{(n-1)}$ are continuous on $[0,\infty)$ and of exponential order $a$ at infinity, and $f^{(n)}$ is piecewise continuous on $[0,\infty)$, then
$\mathcal{L}\{f^{(n)}(t)\} = s^n\mathcal{L}\{f(t)\} - s^{n-1}f(0) - s^{n-2}f'(0) - \cdots - sf^{(n-2)}(0) - f^{(n-1)}(0), \quad \forall s > a.$  (0.4)
The following special case of (0.4) will be used frequently in solving initial value problems for second order equations:
$\mathcal{L}\{f''(t)\} = s\mathcal{L}\{f'(t)\} - f'(0) = s^2\mathcal{L}\{f(t)\} - sf(0) - f'(0).$  (0.5)

Theorem 0.5. Let $F(s) = \mathcal{L}\{f(t)\}(s)$ and assume that $f(t)$ is piecewise continuous on $[0,\infty)$ and of exponential order $\alpha$. Then for $s > \alpha$ we have:
$\mathcal{L}\{t^nf(t)\}(s) = (-1)^nF^{(n)}(s),$  (0.6)
$\mathcal{L}\{f + g\} = \mathcal{L}\{f\} + \mathcal{L}\{g\},$  (0.7)
$\mathcal{L}\{e^{at}f(t)\} = \mathcal{L}\{f\}(s - a).$  (0.8)


Example 0.6. Show that
$\mathcal{L}\{tf'(t)\} = -F(s) - sF'(s).$
According to (0.6),
$\mathcal{L}\{tf'(t)\} = -\frac{d}{ds}\mathcal{L}\{f'(t)\} = -\frac{d}{ds}(sF(s) - f(0)) = -F(s) - sF'(s).$

Example 0.7. Show that
$\mathcal{L}\{t^2f''(t)\} = s^2F''(s) + 4sF'(s) + 2F(s).$
Example 0.8. Calculate $\mathcal{L}\{t^n\}$, $n \ge 1$.

$\int_0^\infty t^ne^{-st}\,dt = -\frac{1}{s}e^{-st}t^n\Big|_0^\infty + \frac{n}{s}\int_0^\infty e^{-st}t^{n-1}\,dt = \frac{n(n-1)}{s^2}\int_0^\infty e^{-st}t^{n-2}\,dt = \frac{n(n-1)(n-2)}{s^3}\int_0^\infty e^{-st}t^{n-3}\,dt = \cdots = \frac{n!}{s^n}\int_0^\infty e^{-st}\,dt = \frac{n!}{s^{n+1}}.$

Example 0.9. Calculate $\mathcal{L}\{\sin(at)\}$.

$I := \int_0^\infty \sin(at)e^{-st}\,dt = -\frac{1}{s}e^{-st}\sin(at)\Big|_0^\infty + \frac{a}{s}\int_0^\infty \cos(at)e^{-st}\,dt$
$= \frac{a}{s}\left[-\frac{1}{s}e^{-st}\cos(at)\Big|_0^\infty - \frac{a}{s}\int_0^\infty \sin(at)e^{-st}\,dt\right] = \frac{a}{s^2} - \frac{a^2}{s^2}\int_0^\infty \sin(at)e^{-st}\,dt.$
So we have
$I = \frac{a}{s^2} - \frac{a^2}{s^2}I.$
Therefore
$I := \int_0^\infty \sin(at)e^{-st}\,dt = \frac{a}{s^2 + a^2}.$  (0.9)

We denote by $\mathcal{L}^{-1}\{F(s)\}$ the inverse Laplace transform of a function $F(s)$. If $\mathcal{L}\{f(t)\} = F(s)$, then
$\mathcal{L}^{-1}\{F(s)\} = f(t).$

Example 0.10. Find
$z(t) := \mathcal{L}^{-1}\left\{\frac{2}{s-5} + \frac{3s}{s^2+25} + \frac{1}{s^2+4s+5}\right\}.$

$z(t) = 2\mathcal{L}^{-1}\left\{\frac{1}{s-5}\right\} + 3\mathcal{L}^{-1}\left\{\frac{s}{s^2+25}\right\} + \mathcal{L}^{-1}\left\{\frac{1}{s^2+4s+5}\right\}$
$= 2e^{5t} + 3\cos(5t) + \mathcal{L}^{-1}\left\{\frac{1}{(s+2)^2+1}\right\} = 2e^{5t} + 3\cos(5t) + e^{-2t}\sin t.$
Similarly,
$\mathcal{L}^{-1}\left\{\frac{7}{(s-4)^5}\right\} = \frac{7}{4!}\mathcal{L}^{-1}\left\{\frac{4!}{(s-4)^5}\right\} = \frac{7}{4!}e^{4t}t^4.$
Example 0.11. Solve the Cauchy problem:
$y'' - y' - 6y = 0, \qquad y(0) = 1, \ y'(0) = -1.$

$s^2Y(s) - sy(0) - y'(0) - (sY(s) - y(0)) - 6Y(s) = 0,$
$s^2Y(s) - s + 1 - sY(s) + 1 - 6Y(s) = 0,$
$Y(s) = \frac{s-2}{s^2-s-6} = \frac{s-2}{(s-3)(s+2)} = \frac{1}{5}\frac{1}{s-3} + \frac{4}{5}\frac{1}{s+2}.$
Hence
$y(t) = \mathcal{L}^{-1}\left\{\frac{1}{5}\frac{1}{s-3} + \frac{4}{5}\frac{1}{s+2}\right\} = \frac{1}{5}e^{3t} + \frac{4}{5}e^{-2t}.$
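A cross-check with SymPy (a minimal sketch, assuming SymPy is available):

```python
# A SymPy cross-check of Example 0.11: y'' - y' - 6y = 0, y(0) = 1, y'(0) = -1.
import sympy as sp

t = sp.symbols('t')
y = sp.Function('y')

ode = sp.Eq(y(t).diff(t, 2) - y(t).diff(t) - 6 * y(t), 0)
sol = sp.dsolve(ode, y(t), ics={y(0): 1, y(t).diff(t).subs(t, 0): -1})
print(sol)   # expected: Eq(y(t), exp(3*t)/5 + 4*exp(-2*t)/5)
```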
Example 0.12. Solve the Cauchy problem:
$y'' + 6y' + 34y = 0, \qquad y(0) = 3, \ y'(0) = 1.$

$s^2Y(s) - 3s - 1 + 6[sY(s) - 3] + 34Y(s) = 0.$
Thus
$Y(s) = \frac{3s+19}{s^2+6s+34} = 3\frac{s+3}{(s+3)^2+25} + 2\frac{5}{(s+3)^2+25},$
$y(t) = e^{-3t}[3\cos(5t) + 2\sin(5t)].$
Example 0.13. Solve the Cauchy problem
$y'' + 2y' + 5y = 0, \qquad y(0) = 1, \ y'(0) = 0.$

$(s^2 + 2s + 5)Y(s) = s + 2,$
$Y(s) = \frac{s+2}{s^2+2s+5} = \frac{s+1}{(s+1)^2+4} + \frac{1}{2}\frac{2}{(s+1)^2+4}.$
Thus
$y(t) = \mathcal{L}^{-1}\left\{\frac{s+1}{(s+1)^2+4} + \frac{1}{2}\frac{2}{(s+1)^2+4}\right\} = e^{-t}\cos(2t) + \frac{1}{2}e^{-t}\sin(2t).$
Example 0.14. Solve the Cauchy problem:
$y'' - 4y = 4t - 8e^{-2t}, \qquad y(0) = 0, \ y'(0) = 5.$

$s^2Y(s) - sy(0) - y'(0) - 4Y(s) = \frac{4}{s^2} - 8\frac{1}{s+2},$
$Y(s) = \frac{4}{s^2(s^2-4)} - \frac{8}{(s+2)^2(s-2)} + \frac{5}{s^2-4}.$
We decompose each term into partial fractions:
$\frac{4}{s^2(s^2-4)} = \frac{1}{s^2-4} - \frac{1}{s^2} = \frac{1}{4}\frac{1}{s-2} - \frac{1}{4}\frac{1}{s+2} - \frac{1}{s^2},$
$\frac{5}{s^2-4} = \frac{5}{4}\frac{1}{s-2} - \frac{5}{4}\frac{1}{s+2}.$
For the middle term we write
$\frac{1}{(s+2)^2(s-2)} = \frac{A}{s-2} + \frac{B}{s+2} + \frac{C}{(s+2)^2},$
so $1 = A(s+2)^2 + B(s+2)(s-2) + C(s-2)$, and comparing coefficients:
$A + B = 0, \qquad 4A + C = 0 \Rightarrow C = -4A, \qquad 4A - 4B - 2C = 1 \Rightarrow 12A - 4B = 1,$
$A = -B = \frac{1}{16}, \qquad C = -4A = -\frac{1}{4}.$
Hence
$-\frac{8}{(s+2)^2(s-2)} = -\frac{1}{2}\frac{1}{s-2} + \frac{1}{2}\frac{1}{s+2} + \frac{2}{(s+2)^2}.$
Collecting the terms,
$Y(s) = \frac{1}{s-2} - \frac{1}{s+2} - \frac{1}{s^2} + \frac{2}{(s+2)^2}.$
Therefore
$y(t) = e^{2t} - e^{-2t} + 2te^{-2t} - t.$


1. Step Functions.

Definition 1.1. The unit step function or Heaviside function is the function defined by
$u_c(t) = \begin{cases} 0, & \text{if } 0 \le t < c,\\ 1, & \text{if } t \ge c,\end{cases} \qquad c \ge 0.$

Example 1.2. Sketch the graph of the function
$y(t) = u_2(t) - u_3(t).$

Let us find the Laplace transform of the Heaviside function.
$\mathcal{L}\{u_c(t)\} = \int_0^\infty e^{-st}u_c(t)\,dt = \int_c^\infty e^{-st}\,dt = -\frac{1}{s}e^{-st}\Big|_c^\infty = \frac{e^{-sc}}{s}, \quad s > 0.$
Hence
$\mathcal{L}\{u_c(t)\} = \frac{e^{-sc}}{s}.$
By using the Heaviside function we can express the function
$g(t) = \begin{cases} 0, & 0 \le t < c,\\ f(t-c), & t \ge c\end{cases}$
in the form
$g(t) = u_c(t)f(t-c).$

Theorem 1.3. If $F(s) = \mathcal{L}\{f(t)\}(s)$ exists for $s > a \ge 0$, and $c > 0$, then
$\mathcal{L}\{u_c(t)f(t-c)\} = e^{-cs}\mathcal{L}\{f(t)\} = e^{-cs}F(s), \quad s > a.$
Indeed, substituting $\tau = t - c$,
$\int_0^\infty e^{-st}u_c(t)f(t-c)\,dt = \int_c^\infty e^{-st}f(t-c)\,dt = \int_0^\infty e^{-s(\tau+c)}f(\tau)\,d\tau = e^{-cs}F(s).$

Corollary.
$\mathcal{L}\{u_c(t)f(t)\}(s) = e^{-cs}\mathcal{L}\{f(t+c)\}(s).$


Example 1.4. Determine
$\mathcal{L}\{t^2u(t-1)\}.$
Let $g(t) = f(t+c)$. Then
$\int_0^\infty e^{-st}u_c(t)f(t)\,dt = \int_0^\infty e^{-st}u_c(t)g(t-c)\,dt = e^{-sc}\mathcal{L}\{f(t+c)\}.$
Hence
$\mathcal{L}\{u(t-1)t^2\} = e^{-s}\mathcal{L}\{(t+1)^2\} = e^{-s}\mathcal{L}\{t^2 + 2t + 1\} = e^{-s}\left(\frac{2}{s^3} + \frac{2}{s^2} + \frac{1}{s}\right).$
Example 1.5. Find
$\mathcal{L}^{-1}\left\{\frac{e^{-5s}}{s^2}\right\}.$

Example 1.6. Solve the problem
$y'' + 4y = 6u_2(t), \qquad y(0) = 3, \ y'(0) = 0.$

$s^2Y(s) - 3s + 4Y(s) = \frac{6e^{-2s}}{s},$
$Y(s) = \frac{3s}{s^2+4} + e^{-2s}\frac{6}{s(s^2+4)}.$
Since
$\frac{6}{s(s^2+4)} = \frac{3}{2}\left(\frac{1}{s} - \frac{s}{s^2+4}\right),$
we obtain
$y(t) = 3\cos 2t + \frac{3}{2}u_2(t)\left[1 - \cos(2(t-2))\right].$
Example 1.7. Solve the problem
(
y 00 + 4y = h(t), t > 0,
y(0) = 0, y 0 (0) = 0,
where 
0, 0 ≤ t ≤ 3,

h(t) = 31 (t − 3), 3 ≤ t < 6,

1, t ≥ 6.

It is clear that
1 1
h(t) = (t − 3) − (t − 6).
3 3
Thus we have
 1
(s2 + 4)Y (s) = e−3s − e−6s ,
s2
or
Y (s) = e−3s − e−6s F (s),


where
1 1 1 1 1 1
F (s) = 2 2
= 2
− .
s 4+s 4s 4 4 + s2
Thus
1 1
y(t) = u3 (t)f (t − 3) − u6 (t)f (t − 6),
3 3
54 Chapter 4

where
1 1
f (t) = L−1 F (s) = t − sin(2t).
4 8

2. Convolution Integral
The convolution f ∗ g of piecewise continuous functions f, g defined on [0, ∞)
is a function
Z t
(f ∗ g)(t) = f (τ )g(t − τ )dτ.
0

Theorem 2.1. If F (s) is a Laplace transform of f (t), G(s) is a Laplace


transform of g(t) for s > a, then
H(s) = F (s)G(s),
where
H(s) = L{h(t)}, h(t) = (f ∗ g)(t)
Proof. We have
Z ∞ Z ∞
−sη
F (s) = e f (η)dη, G(s) = e−sτ g(τ )dτ,
0 0

Z ∞ Z ∞
−sη
F (s)G(s) = e f (u)du e−sτ g(τ )dτ =
0 0
Z ∞ Z ∞ 
−sτ −su
= e g(τ ) e f (u)du dτ =
0 0
Z ∞ Z ∞ 
−s(τ +u)
e f (u)du g(τ )dτ.
0 0

Making the change τ + u and using Fubini’s theorem we get:


Z ∞ Z ∞ 
−st
F (s)G(s) = e f (t − τ )dt g(τ )dτ =
0 τ
Z ∞Z ∞ Z ∞ Z t 
e−st f (t − τ )g(τ )dtdτ = f (t − τ )g(τ )dτ e−st dt = H(s).
0 τ 0 0


Example 2.2. Find
 
−1 1
L .
(1 + s2 )2
Laplace Transform and Applications 55

Solution
  Z t
−1 1
L = sin(t − τ ) sin τ dτ
(1 + s2 )2 0
Z t
[sin t cos τ − cos t sin τ ] sin τ dτ
=
0
Z t Z t
1 1
= sin t sin(2τ )dτ − cos t [1 − cos(2τ )] dτ
2 0 2 0
1 1 1 1
= − sin(t) cos(2t) + sin t − t cos t + cos t sin(2t)
4 4 2 4
1 1
= sin t − t cos t.
2 2
Example 2.3. Find a function which satisfies the integral equation
Z t
f (τ )f (t − τ )dτ = te−2t .
0

1
F (s)F (s) =
(s + 2)2
1
F (s) = ± .
s+2
Hence
f (t) = ±e−2t .
56 Chapter 4

2.1. Laplace transform of some functions. If F (s) and G(s) are Laplace
transforms of a function f (t) and g(t) respectively , then

(1) L{1} = 1s , (s > 0)


1
(2) L{eat } = s−a
(s > a)
n!
(3) L{tn } = sn+1
(s > 0, n a positive integer)
a
(4) L{sin(at)) = s2 +a2
(s > 0)
s
(5) L{cos(at)} = s2 +a2
(s > 0)
b
(6) L{eat · sin(bt)} = (s−a)2 +b2
(s > a)

(7) L{eat · (f (t))} = F (s − a)) (s > a)


n!
(8) L{tn · eat } = (s−a)n+1
(s > a)
d n
(9) L{tn · f (t)} = (−1)n ds n F (s)

(10) L{f 0 (t)} = sF (s) − f (0)

(11) L{f (n) (t)} = sn F (s) − sn−1 f (0) − · · · sf (n−2) (0) − f (n−1) (0)
Rt F (s)
(12) L{ 0
f (τ )dτ } = s
.
e−cs
(13) L{uc (t)} = s
(s > 0)

(14) L{uc (t) · f (t − c)} = e−cs F (s)

(15) L{(u ∗ g)(t)} = F (s)G(s)


CHAPTER 6

Systems of Ordinary Differential Equations

Many processes in physics , chemistry and biology are often characterized by


multiple functions of one variable simultaneously. The relationship between these
functions is described by differential equations that contain these functions and
their derivatives. In this case, we have to study systems of differential equations.
In this chapter we consider mainly systems of linear equations of first order and
discuss some important systems of nonlinear equations.

1. Systems of Linear ODE’s


A linear system of ODE’s is system of equations, where the largest derivative
of unknown functions in the system is a first derivative and all unknown functions
and their derivatives only occur to the first power and are not multiplied by other
unknown functions, i.e. a system of n linear first order equation is a system of
ODE2s of the form
 0

x1 (t) = p11 (t)x1 (t) + p12 (t)x2 (t) + · · · + p1n (t)xn (t) + h1 (t),
x0 (t) = p (t)x (t) + p (t)x (t) + · · · + p (t)x (t) + h (t),

2 21 1 22 2 2n n 2
(1.1)

....................................

 0
xn (t) = pn1 (t)x1 (t) + pn2 (t)x2 (t) + · · · pnn (t)xn (t) + hn (t),

where x1 (t), ..., xn (t) are unknown functions, pij , hi , j = 1, ...n are given func-
tions.
It is convinient to write the system (1.10) in the form

x0 (t) = P (t)x(t) + g(t), (1.2)

where
 
x1 (t)
 x2 (t) 
x(t) =  .
 
..
 . 
xn (t)
57
58 Chapter 5

is the unknown vector-function,


 
g1 (t)
 g2 (t) 
g(t) =  .
 
..
 . 
gn (t)
is a given source function and
 
p11 (t) p12 (t) . . . p1n (t)
 p21 (t) p22 (t) . . . p2n (t) 
P (t) = 
 ...

... ... ... 
pn1 (t) pn2 (t) . . . pnn (t)
iis a given matrix-function. A system of equations
x0 (t) = P (t)x(t) (1.3)
is called the homogeneous sytem corresponding to the system (1.2).
Theorem 1.1. If the functions Pij (t), gi (t), i, j = 1, · · · n, are continuous on
some interval I and t0 ∈ I, and x0 ∈ Rn is a given vector, then the problem
(
x0 (t) = P (t)x(t) + g(t),
(1.4)
x(t) = x0
has a unique solution on I
Theorem 1.2. A general solution of the system (1.2) has the form
x(t) = xh (t) + v(t),
where xh (t) is a general solution of the homogeneous sytem (1.3) and v(t) is
some particular solution of the nonhomogeneous sytem (1.2).
1.1. Homogeneous Systems of Linear ODE’s. A system of differential
equations
 0

 x1 (t) = a11 x1 (t) + a12 x2 (t) + · · · + a1n xn (t),
x0 (t) = a x (t) + a x (t) + · · · + a x (t),

2 21 1 22 2 2n n
(1.5)

 ....................................

 0
xn (t) = an1 x1 (t) + an2 x2 (t) + · · · + ann xn (t),
where x1 (t), ..., xn (t) are unknown functions, aij , i, j = 1, ...n are given constants,
is called a homogeneous linear system of ODE’s. Let us denote by A the matrix
 
a11 a12 . . . a1n
 a21 a22 . . . a2n 
A=  ... ... ... ... 

an1 an2 . . . ann


Systems of Linear Ordinary Differential Equations 59

and let us denote by x(t) the vector function


 
x1 (t)
 x2 (t) 
x(t) =  .. .
 
 . 
xn (t)
Then we can rewrite the system (1.5) in the following form
x0 (t) = Ax(t). (1.6)
It is natural to look for solution
 
x1 (t)
 x2 (t) 
x(t) = 
 
.. 
 . 
xn (t)
of the system (1.5) (or (1.6)) in the following form
 rt   
e w1 w1
 ert w2   w2 
x(t) =   = ert  rt
..  =: e w.
   
..
 .   . 
ert wn wn
Substituting into the system we see that the vector function ert w is a non-zero
solution of (1.6) if and only if w is an eigenvector of A corresponding to eigenvalue
r, that is w is a nonzero vector which satisfies
Aw = rw


(a11 − r)w1 + a12 w2 + . . . + a1n wn = 0,

a w + (a − r)w + . . . + a w = 0,
21 1 22 2 1n n
(1.7)

........................

an1 w1 + an2 w2 + . . . + (ann − r)wn = 0.

This system called the auxiliary system of (1.6) has a non-zero solution if and
only if its determinant is zero:

(a11 − r) a12 ... a1n


a 11 (a 22 − r) ... a1n
= 0.

. . . . . . ... ...

an1 an1 . . . (ann − r)
The last equation or the equation
|A − rI| = 0 (1.8)
is called the characteristic equation of the system.
60 Chapter 5

1.2. Characteristic equation has distinct real roots. If the character-


istic equation (1.8) has n distinct eigenvalues r1 , r2 , ..., rn then there are n linearly
independent eigenvectors
w(1) , w(2) , . . . , w(n)
corresponding to these eigenvalues. Thus the system (1.6) has n linearly inde-
pendent solutions
er1 t w(1) , er2 t w(2) , . . . , ern t w(n) .
In this case each solution of the system has the form
x(t) = C1 er1 t w(1) + C2 er2 t w(2) + . . . + ern t w(n) .
Each set of n linearly independent solutions of the system is called a fundamental
set of solutions.
Example 1.3. Find the fundamental set of solutions of the system
(
dx
dt = 5x + 2y,
dy
dt = −4x − y

Solving the characteristic equation



5−r 2
= 0,
−4 −1 − r
or

r2 − 4r + 3 = 0
we find
r1 = 1, r2 = 3.
From the auxiliary system
(
(5 − r)w1 + 2w2 = 0
−4w1 − (1 + r)w2 = 0
we find eigenvectors
   
(1) 1 (2) 1
w = , w = .
−2 −1
Hence    
(1) 1 t (2) 3t 1
x (t) = e , x (t) = e
−2 −1
is the fundamental set of solutions.
Example 1.4. Solve the system of equations:

0
x1 = x1 − x2 − x3 ,

x02 = x2 + 3x3 ,
 0

x3 = 3x2 + x3 .
Systems of Linear Ordinary Differential Equations 61

We write this system in the form


 
1 −1 −1
x0 =  0 1 3  x.
0 3 1
Let us find eigenvalues and eigenvectors of the matrix of the system. The corre-
sponding auxiliary system of equations is:
    
1 − r −1 −1 w1 0
 0 1−r 3   w2  =  0 
0 3 1−r w3 0
or 
(1 − r)w1 − w2 − w3 = 0,

0w1 + (1 − r)w2 + 3w3 = 0,

0w1 + 3w2 + (1 − r)w3 = 0.

Solving the characteristic equation

1−r 3
(1 − r)
=0
3 1−r
we find
(1 − r)3 − (1 − r)9 = (1 − r)(1 − 2r + r2 − 9) = 0,
(1 − r)(r − 4)(r + 2) = 0.
Hence the matrix of the system has eigenvalues:
r1 = −2, r2 = 1, r3 = 4.
Let us find corresponding eigenvectors. The auxiliary system for r = r1 = −2
takes the form 
3w1 − w2 − w3 = 0,

0w1 + 3w2 + 3w3 = 0,

0w1 + 3w2 + 3w3 = 0,

This system has the following nontrivial solution:

w3 = −w2 , w1 = 0.
Therefore 

0
w(1) =  1 
−1
is the eigenvector corresponding to the eigenvalue r1 = −2. For r2 = 1 we
have 
−w2 − w3 = 0

3w3 = 0 .

3w2 = 0

62 Chapter 5

Hence the corresponding eigenvector is


 
1
w(2) =  0 .
0
For r3 = 4 we have: 
−3w1 − w2 − w3 = 0,

0w1 − 3w2 + 3w3 = 0,

0w1 + 3w2 − 3w3 = 0.

Two last equations imply
w2 = w3 ,
and first equation becomes:
−3u1 = 2u2 .
Thus  
2
w(3) =  −3 
−3
is the corresponding eigenvector. So the general solution of the system has the
form:
     
0 0 2
x(t) = C1  1  e−2t + C2  1  et + C3  −3  e4t .
1 −1 −3
Example 1.5. Find the general solution of the system

0
x1 = x2 + x3 ,

x02 = 3x1 + x3 ,
 0

x3 = 3x1 + x2 .
The matrix of this system
 
0 1 1
A= 3 0 1 
3 1 0
has eigenvalues r1 = −1, r2 = −2, r3 = 3 and corresponding eigenvectors
     
0 −1 2
v(1) =  1  , v(2) =  1  , v(3) =  3  .
−1 11 3
Therefore the general solution of the system has the form
     
0 −1 2
x(t) = C1 e−t  1  + C2 e−2t  1  + C3  3  .
−1 1 3
Systems of Linear Ordinary Differential Equations 63

1.3. Complex roots of characteristic equation. Suppose that a matrix


A has a complex eigenvalue. Since all entries of the matrix are real numbers, the
coefficients of a polynomial

det(A − rI) = 0
are also real numbers.
Therefore if r = λ + im is a complex eigenvalue of A, then r̄ = λ − im is also an
eigenvalue of A:

(A − r)w = 0,

(A − r̄)w̄ = 0.
The corresponding solutions are

z(1) (t) = wert ,

z(2) (t) = wer̄t ,


But these solutions are complex-valued.
Let

w = a + ib,
a and b are real.
Then

z(1) (t) = (a + ib)e(λ+im)t = (a + ib)eλt (cos(mt) + i sin(mt)) =


= eλt (a cos(mt) − b sin(mt)) + ieλt (a sin(mt) + b cos(mt)).
and
z(2) (t) = eλt (a cos(mt) − b sin(mt)) − ieλt (a sin(mt) + b cos(mt))
Since the equation is a linear homogeneous equation the real valued solutions
corresponding to the complex root r are
1 1
x(1) (t) = (z(1) (t) + z(2) (t)), x(2) (t) = (z(1) (t) − z(2) (t)),
2 2i
that is
x(1) (t) = eλt (a cos(mt) − b sin(mt)), x(2) (t) = eλt (a sin(mt) + b cos(mt)).
Example 1.6. Find the general solution of the system
 
0 −1 1
x = x(t).
−1 1
64 Chapter 5

Let us find eigenvalues of the matrix


 
1 1
A= :
−1 1

−1 − r 1
|A − rI| = = 0,
−1 1−r

(1 − r)2 + 2 = 0

r2 − 2λ + 3 = 0, r1,2 = 1 ± i
Inserting r1 = 1 + i into the auxiliary system
(
(−1 − r)w1 + 2w2 = 0,
−1w1 − (3 + r)w2 = 0.
we get
(
(−1 + 2 − i)w1 + 2w2 = 0,
−w1 − (3 − 2 + i)w2 = 0.

(1 − i)w1 + 2w2 = 0.
So we can take
w1 = 2, w2 = −(1 − i) = (−1 + i)

−w1 − (1 + i)w2 = 0

     
2 2 0
w= = +i
−1 + i −1 1
           
−2t 2 0 −2t 2 0
x(t) = C1 e cos t − sin t +C2 e sin t− cos t .
−1 1 −1 1
Example 1.7. Find the general solution of the system
x0 (t) = Ax(t),
where  
1 2 −1
A= 0 1 1 .
0 −1 1
Systems of Linear Ordinary Differential Equations 65

The characteristic equation


(1 − r)((1 − r)2 + 1) = (1 − r)(r2 − 2r + 2) = 0
has roots:
r1 = 1, r2 = 1 + i, r2 = 1 − i.
From the auxiliary system

(1 − r)w1 + 2w2 − w3 = 0

0 · w1 + (1 − r)w2 + w3 = 0

0 · w1 − w2 + (1 − r)w3

we find that

1
w(1) =  0 
0
is an eigenvector corresponding to the real eigenvalue r1 = 1, and that
     
2−i 2 −1
w =  i  =  0  + i 1 
−1 −1 0
is the eigenvector corresponding to the complex root r = 1 + i. Therefore the
general solution has the form

      
1 2 −1
x(t) = c1  0  et + C2  0  cos t −  1  sin t et +
0 −1 0
    
−1 2
+ C3  1  cos t +  0  sin t et .
0 −1
Example 1.8. Find a general solution of the system

x(t) = Ax(t),
where  
1 −2 2
A =  −2 1 2  .
2 2 1
First we write the auxiliary system:

(1 − r)w1 − 2w2 + 2w3 = 0,

−2w1 + (1 − r)w2 + 2w3 = 0,

2w1 + 2w2 + (1 − r)w3 = 0,

66 Chapter 5

and solve the characteristic equation



1 − r −2 2

−2 1 − r 2 = 0,

2 2 1−r

1−r −2
2 2 + 2 −2 1 − r

(1 − r)
+ 2 = 0,
2 1−r 2 1−r 2 2

(1 − r)(r2 − 2r − 3) + 2(−2 + 2r − 4) + 2(−4 − 2 + 2r) = 0,

r2 − 2r − 3 − r3 + 2r2 + 3r − 4 + 4r − 8 − 12 + 4r = 0,

−r3 + 3r2 + 9r − 27 = 0,

r2 (3 − r) + 9(r − 3) = 0, or (r2 − 9)(3 − r) = 0.


Hence the characteristic equation has the roots:
r1 = 3, r2 = 3 r3 = −3.
Inserting r = r1 = 3 into the auxiliary system we get

−2w1 − 2w2 + 2w2 = 0

−2w1 − 2w2 + 2w3 = 0 ,

2w1 + 2w2 − 2w3 = 0

or     
4 −2 2 w1 0
 −2 4 2   w2  =  0  .
2 2 4 w3 0
It is not difficult to see that the vectors
 
1
w1 (1) =  0 
1
and
 
1
w2 (2) =  −1 
0
are eigenvectors corresponding to r1 = r2 = 3. The vector
 
1
w(3) =  1 
−1
is the eigenvector corresponding to r3 = −3. Therefore
Systems of Linear Ordinary Differential Equations 67

     
1 1 1
x(t) = C1  0  e3t + C2 e3t  −1  + C3  1  e−3t
1 0 −1
is the general solution.
1.4. Repeated roots of characteristic equation. If r is a root of mul-
tiplicity k and there are m(m < k) linearly independent eigenvectorsw1 , ...wm
corresponding to the eigenvalue λ, then the solutions corresponding to this eigen-
value of the system we look in the form: :

x = (w0 + w1 t + ... + wk−m tk−m )eλt . (1.9)


To find the vectors w0 , w1 , ..., wk−m we plug the expression in (1.9) and equate
the corresponding coefficients on the left hand side and right hand side.
Example 1.9. Solve the system of equations
(
dx
dt = 3x + y,
dy
dt = −x + y.

From the characteristic equation



3−r 1
=0
−1 1 − r
we find
r2 − 4r + 4 = 0
r1 = r2 = 2.
Then from the auxiliary system
(
(3 − r)w1 + w2 = 0
−w1 + (1 − r)w2 = 0
we obtain
w1 + w2 = 0.
Thus  
1
w=
−1
is the eigenvector of the matrix of the system corresponding to the eigenvalue
r = 2. Hence
   
(1) x11 (t) 2t 1
x (t) = =e
x21 (t) −1
is a solution of the system. The second solution we look in the form
   
(2) x21 (t) a1 + b1 t
x (t) = = et ,
x22 (t) a2 + b2 t
68 Chapter 5

that is
x21 (t) = (a1 + b1 t)e2t ,

x22 (t) = (a2 + b2 t)e2t .


Inserting into the system we get
2a1 e2t + 2b1 e2t + 2b1 te2t = 3a1 e2t + 3b1 te2t + (a2 + b2 t)e2t ,

2a2 e2t + b2 e2t + 2b2 te2t = −a1 e2t − b1 te2t + a2 e2t + b2 + e2t ,

2a1 + b1 = 3a1 + a2 , a1 + a2 = b1 ,

2b1 = 3b1 + b2 , b1 + b2 = 0,

2a2 + b2 = −a1 + a2 , a1 + a2 = −b2 ,

2b2 = −b1 + b2 , b1 = −b2 .


Finally we obtain
a1 = 1, a2 = 0, b1 = 1, b2 = −1.
Hence
(1 + t)e2t
 
(2)
x (t) = .
−te2t
Example 1.10. Find the general solution of the system
(
dx1
dt = 5x1 − x2 , (1.10)
dx2
dt = x1 + 3x2 .

First we find the eigenvalues of the matrix


 
5 −1
A= .
1 3
The characteristic equation
5 − r −1

1 3−r
or
r2 − 8r + 1 = 0
has a repeated root r = 4. So r = 4 is the repeated eigenvalue of the matrix A.
From the auxiliary system we find the corresponding eigenvector
 
(1) 1
w = .
1
So
Systems of Linear Ordinary Differential Equations 69

   
(1) x11 (t) 4t 1
x (t) = =e
x21 (t) 1
is a solution of the system. We look for the second solution that is independent
of  
(2) x12 (t)
x (t) =
x22 (t)
in the form

x(2) (t) = (a + bt)e4t .


From the system (1.10) we find:

4a2 + b2 = a1 + 3a2
a2 + b2 = a1
4b2 = b1 + 3b2
b1 = b2
(2)
x1 = (a1 + b1 t)e4t
(2)
x2 (t) = (a2 + b2 t)e4t
4a1 + b1 + 4b1 t = 5a1 + 5b1 t − a2 − b2 t
4a2 + b2 + 4b2 t = a1 + b1 t + 3a2 + 3b2 t
4a1 + b1 = 5a1 − a2 ⇒ a1 − a2 = b1
4b1 = 5b1 − b2 ⇒ b2 = b1
4a2 + b2 = a1 + 3a2
a1 − a2 = b2
4b2 = b1 + 3b2
b1 = 1, b2 = 1
a1 = 2, a2 = 1,
x12 (t) = (2 + t)e4t ,
x22 (t) = (1 + t)e4t .
Hence

(2 + t)e4t
   
(2) x12 (t)
x (t) = =
x22 (t) (1 + t)e4t
and the general solution has the form
 4t 
(2 + t)e4t
 
(2) e
x (t) = C1 + C2 .
e4t (1 + t)e4t
70 Chapter 5

1.5. Exponential of a Matrix. Let A be n × n matrix of the form


 
a11 a12 . . . a1n
 a21 a22 . . . a2n 
A=  ... ...
.
... ... 
an1 an2 . . . ann
Exponential of a matrix is defined by the equality
1 1
eA = I + A + A2 + ... + An + ...,
2! n!
where I is the identity matrix.
The matrix exponent has the following properties:
(1) If AB = BA, then
eA+B = eA · eB .
(2) If D is a diagonal matrix and A = QDQ−1 , then

eA = QeD Q−1 .
(3) The matrix
t2 2 tn
etA = I + tA + A + ... + An + ...
2! n!
satisfies the equation
d tA
e = AetA , etA t=0 = I.

(1.11)
dt
(4) It follows from the last property that the vector function
x(t) = etA x0
is a solution of the problem
x0 (t) = Ax(t), x(0) = x0 .
Suppose that the matrix A has n linearly independent eigenvectors
     
w11 w12 w1n
 w21   w22   w2n 
w(1) =  ..  , w(2) =  ..  , . . . , w(n) =  ..  (1.12)
     
 .   .   . 
wn1 wn2 wn1
corresponding to eigenvalues r1 , r2 , . . . , rn . Since the vectors w(1) , w(1) , . . . , w(n)
are linearly independent the matrix
 
w11 w12 . . . w1n
 w21 w22 . . . w2n 
Q=  ... ...
 (1.13)
... ... 
wn1 wn2 . . . wnn
Systems of Linear Ordinary Differential Equations 71

is non-singular and Q−1 exists.

It is easy to see that


 
r1 w11 . . . rn w1n
AQ =  .. .. ..
 = QD, (1.14)
 
. . .
r1 wn1 . . . rn wnn
where  
r1 . . . ... 0
 0 r2 ... 0 
D=
 
.. .. .. .. 
 . . . . 
0 ... . . . rn
It follows from (1.14) that
A = QDQ−1 , Q−1 AQ = D.
That is A is diagonalizable.

Example 1.11. Calculate etA , where


 
3 −2
A= (1.15)
4 −3
and solve the problem
 
0 1
x (t) = Ax(t), x(0) = . (1.16)
2
Let us find eigenvalues and eigenvectors of A. The characteristic equation

3−r −2
=0
4 −3 − r
has roots r1 = −1, r2 = 1. The vectors
   
(1) 1 (2) 1
w = , w =
2 1
are eigenvectors of A corresponding to the eigenvectors r1 and r2 respectively.
So the matrix A is diagonalizable :
A = QDQ−1 ,
where      
1 1 −1 0 −1 −1 1
Q= , D= , Q = .
2 1 0 1 2 −1
Thus
e−t 0 −e−t + 2e e−t − et
     
tA 1 1 −1 1
e = = .
2 1 0 et 2 −1 −2e−t + 2et 2e−t − et
72 Chapter 5

The solution of the problem is


−e−t + 2e e−t − et
     −t 
1 e
x(t) = = .
−2e−t + 2et 2e−t − et 2 2e−t

1.6. Nonhomogeneous Systems.


x0 = P (t)x + g(t). (1.17)
The general solution of (1.17) has the form

x = c1 x(1) + ... + cn x(n) + v(t) (1.18)


where c1 x(1) + ... + cn x(n) is a general solution of the corresponding homogeneous
system, v(t) is a particular solution of (1.17).
First let us consider the system:

x0 = Ax + g(t) (1.19)
where A is a constant matrix which has n linearly independent eigenvectors
w(1) , w(2) , . . . , w(n)
corresponding to eigenvalues r1 , r2 , . . . , rn .
In this case A is diagonalizable and we can transform the system into a system
which is easily solvable.

Let Q be a matrix of eigenvectors defined by (1.13).

We define a new dependent variable y by :

x = Qy. (1.20)
Substituting into (1.19) we obtain:

Qy0 = AQy + g(t).


Multiplying by Q−1 we obtain :
y0 = Q−1 AQ + Q−1 g(t) = Dy + h(t),
where  
h1 (t)
 h2 (t) 
h(t) =   = Q−1 g(t).
 
..
 . 
hn (t)
Systems of Linear Ordinary Differential Equations 73

So we have n first order ordinary differential equations


 0

 y1 (t) = r1 y1 (t) + h1 (t),
y 0 (t) = r y (t) + h (t),

2 2 2 2

 . . . . . . . . . . . . . . . . . . . ..,

 0
yn (t) = rn yn (t) + hn (t).
Solving these equations we get:
Z t
yj (t) = erj t
e−rj s hj (s)ds + yj (t0 )erj (t−t0 ) , j = 1, ..., n.
t0

1.7. Method of variation of parameters. Method of variation of param-


eters allows us to find particular solution of the nonhomogeneous system of the
form
x0 = P (t)x + g(t), (1.21)
when the fundamental system of solution of the corresponding homogeneous sys-
tem is given.
First we consider the corresponding homogeneous system:
x0 (t) = P (t)x(t) (1.22)
The general solution of (1.22) has the form

x(t) = Ψ(t)C, (1.23)


where C is a constant vector. We look for a particular solution of (1.21) in the
following form

x = Ψ(t)u(t), (1.24)
where Ψ(t) is the fundamental matrix of the system and u(t) is a vector-function
to be found.

Ψ0 (t)u(t) + Ψ(t)u0 (t) = P (t)Ψ(t)u(t) + g(t),

P (t)Ψ(t)u(t) + Ψ(t)u0 (t) = P (t)Ψ(t)u(t) + g(t),

u0 (t) = Ψ−1 (t)g(t). (1.25)


Integrating the last equality we get:
Z
u(t) = Ψ(t)−1 g(t) + C

Thus Z
xp (t) = Ψ(t) Ψ(t)−1 g(t)dt
74 Chapter 5

is a particular solution of the nonhomogeneous sytem(1.21). The general solution


of the system has the form:
Z
x(t) = Ψ(t)C + Ψ(t) Ψ(t)−1 g(t)dt.
In order to find the solution of the system (1.21) satisfying the initial condi-
tion
x(t0 ) = x0 (1.26)
we integrate (1.25) over the interval (t0 , t) :
Z t
u(t) = u(t0 ) + Ψ−1 (s)g(s)ds.
t0
Thus
Z t
x(t) = Ψ(t)u(t0 ) + Ψ(t) Ψ−1 (s)g(s)ds.
t0
It follows from the last equality that

u(t0 ) = Ψ(t0 )−1 x(t0 ).


Hence the solution of the Cauchy problem (1.21),(1.26) has the form:
Z t
−1
x(t) = Ψ(t)Ψ (t0 )x(t0 ) + Ψ(t) Ψ−1 (s)g(s)ds.
t0

Example 1.12. Find the solution of the system


(
dx1
dt = 7x1 + 3x2 , (1.27)
dx2
dt = 6x1 + 4x2 .
which satisfies the initial conditions:
x1 (0) = 1, x2 (0) = 3
The auxiliary system is:
(
(7 − r)w1 + 3w2 = 0,
6w1 + (4 − r)w2 = 0
The characteristic equation
(7 − r)(4 − r) − 18 = 0,
or
r2 − 11 + 10 = 0
has roots √
11 ± 121 − 40
r1,2 = ,
2
Systems of Linear Ordinary Differential Equations 75

r1 = 1, r2 = 10.
From the auxiliary system we find that the vector
 
(1) 1
w =
−2
is an eigenvector corresponding to r1 = 1 and
 
(2) 1
w =
1
is the eigenvector corresponding to r2 = 10. Hence the fundamental matrix is the
following matrix

et e10t
 
Ψ(t) = .
−2et e10t
It is easy to see that
   
1 1 1/3 −1/3
Ψ(0) = , Ψ−1 (0) = .
−2 1 2/3 1/3
Thus
et e10t − 23 et + 53 e10t
     
1/3 −1/3 1
x(t) = = .
−2et e10t 2/3 1/3 3 4 t
3e + 3e
5 10t

Example 1.13. Find the general solution of the problem


   
0 −2 −4 1
x (t) = x(t) + .
−1 1 1+t
First we find a fundamental matrix of the corresponding homogeneous system.
The characteristic equation of the matrix of the system

−2 − r −4
=0
−1 1−r
has the roots
r1 = −3, r2 = 2.
By using the auxiliary system
(
(2 − r)w1 − 3w2 = 0,
w1 − (2 + r)w2 = 0.
we find the corresponding eigenvectors.

The eigenvector corresponding to the eigenvalue r1 = −3 is


 
(1) 4
w = ,
1
76 Chapter 5

and the eigenvector corresponding to the eigenvalue r2 = 2 is


 
(2) 1
w = .
−1
Then the fundamental matrix and its inverse matrix are:
 −3t
e2t
 3t
e3t
 
4e −1 1 e
Ψ(t) = , and Ψ (t) = .
e−3t −e2t 5 e−2t −4e−2t
 3t
e3t

−1 1 e
Ψ (t) =
5 e−2t −4e−2t
 3t
e3t 2e3t + te3t
   
−1 1 e 1 1
Ψ (t)g(t) = =
5 e−2t −4e−2t 1+t 5 −3e−2t − 4te−2t
and
1 3t 1 3t
Z  
−1
Ψ (t)g(t)dt = 9 e + 15 te .
1 −2t 2 −2t
2e + 5 te
Thus
4e−3t e2t 1 3t 1 3t
Z   
v(t) = Ψ(t) −1
Ψ (t)g(t)dt = 9 e + 15 te
e−3t −e2t 1 −2t
2e + 25 te−2t
17 2
 
= 18 + 3 t
− 18 − 51 t
7

is a particular solution of the system


4C1 e−3t + C2 e2t + 17 2
 
x(t) = 18 + 3 t
C1 e−3t − C2 e2t − 18
7
− 15 t
is the general solution of the problem.
Example 1.14. Find the solution to the initial value problem
   2t   
0 2 −3 e −1
x (t) = x(t) + , x(0) = .
1 −2 1 0
The auxiliary system is
(
(2 − r)w1 − 3w2 = 0,
w1 − (2 + r)w2 = 0.
The characteristic equation

2−r −3
=0
1 −2 − r
has roots
r1 = −1, r2 = 1.
Systems of Linear Ordinary Differential Equations 77

The eigenvector corresponding to the eigenvalue r1 = 1 is


 
3
w(1) = ,
1
and the eigenvector corresponding to the eigenvalue r2 = −1 is
 
1
w(2) = .
1
Thus the fundamental matrix is:
 t −t 
1 e−t −e−t
 
3e e −1
Ψ(t) = , and Ψ (t) = .
et e−t 2 −et 3et
Thus the solution of the problem has the form

3et e−t
   
1 1 −1 −1
x(t) = +
2 et e−t −1 3 0
−t
Z t
3et e e−s −e−s e2s
  
1
ds,
2 et e−t 0 −es 3es 1

  t −t  Z t  s
−3et + e−t e − e−s
 
3e e
2x(t) = + ds
−et + e−t et e−t 0 −e3s + 3es
−3et + e−t et + e−t − 2
   t −t   
3e e
= +
−et + 3e−t et e−t − 13 e3t + 3et − 83
−3et + e−t 8 2t t − 8 e−t
   
e + 6 − 6e
= + 32 2t 3
−et + e−t t
3 e + 4 − 2e − 3 e
8 −t

e − 9et − 53 e−t + 6
 8 2t 
= 2 2t 3 .
t 5 −t
3 e − 3e − 3 e +4
Hence
4 2t
− 92 et − 56 e−t + 3
 
x(t) = 3e .
1 2t
3e − 32 et − 56 e−t + 2
Example 1.15. Let A be a simmetric, positive definite matrix and h(t) is a
vector function continuous on [0, ∞). Show that if kh(t)k → 0 as t → ∞. then
all solutions of the system
x0 (t) + Ax(t) = h(t) (1.28)
are tending to zero as t → ∞.
Solution. It follows from (1.28) that
(x0 (t) + Ax(t) − h(t), x(t)) = 0
or
(x0 (t), x(t)) + (Ax(t), x(t)) = (h(t), x(t)). (1.29)
78 Chapter 5

Since A is a positive definite matrix, there exists a0 > 0 such that


(Ax(t), x(t)) ≥ a0 kx(t)k,
and thanks to the Cauchy - Schwarz inequality and the arithmetic-geometric
inequality we have
a0 1
(h(t), x(t)) ≤ k(h(t)kk(x(t)k ≤ k(x(t)k2 + k(h(t)k2 .
2 2a0
Utilising last two inequalities in (1.29) we arrive at the following inequality
d 1
kx(t)k2 + a0 kx(t)k2 ≤ k(h(t)k2 . (1.30)
dt a0
We used here the equality
d
2(x0 (t), x(t)) = k(x(t)k2 .
dt
Multiplication of (1.30)by ea0 t gives:
d  a0 t 1
e kx(t)k2 ≤ ea0 t kx(t)k2 .

dt a0
Finally we inegrate the last inequality over the interval (0, t) and obtain that
Z t
1
kx(t)k2 ≤ e−a0 t kx(0)k2 + e−a0 t ea0 s kh(s)k2 ds. (1.31)
a0 0
It is clear that if the function
Z t
φ(t) := ea0 s kh(s)k2 ds
0
is bounded on [0, ∞), then e−a0 t φ(t)
→ 0 as t → ∞, hence kx(t)k → 0 as t → ∞.
If φ(t) is not bounded on [0, ∞), then it must tend to infinity as t → ∞. In
this case e−a0 t φ(t) → 0 also tends to zero as t → ∞. Because thanks to the
L’Hospital’s rule we have
φ0 (t) 1
lim φ(t)e−a0 t = lim = lim kh(s)k2 = 0.
t→∞ t→∞ a0 ea0 t a0 t→∞
Systems of Linear Ordinary Differential Equations 79

ı
CHAPTER 7

Stability and Instability

1. Autonomous Systems
A system of differential equations
 0

y1 (t) = f1 (y1 (t), y2 (t), ..., yn (t)),
y 0 (t) = f (y (t), y (t), ..., y (t)),

2 2 1 2 n
(1.1)

. . . . . . . . . . . . . . . . . . . . . . . . . . .......

 0
yn (t) = fn (y1 (t), y2 (t), ..., yn (t)),
where y1 (t), ..., yn (t) are unknown functions, fk (y1 , y2 , ..., yn ), k = 1, ..., n are
given functions defined on Rn is called a autonomous system of ODE’s. A system
of differential equations


 f1 (y1 , y2 , ..., yn ) = 0,

f (y , y , ..., y ) = 0,
2 1 2 n
(1.2)
. . . . . . . . . . . . . . . . . . . . .


fn (y1 , y2 , ..., yn ) = 0,

where y1 , ..., yn are unknown numbers is called the stationary system correspond-
ing to (1.1) Solutions of the system (1.2) are cllled the stationary states or equi-
libria of (1.1).
For the sake of convenience we write the system as a differential equation in Rn
y0 (t) = f (y(t)) (1.3)
and as vector equation in Rn the corresponding stationary system
f (y) = 0 (1.4)

Definition 1.1. A solution y0 (t) of the equation (1.3) is called a stable


solution if for each ε > 0 there exists δ > 0 such that for each solution y(t) of
the equation (1.3)
ky(t) − y(0)k ≤ ε, ∀t ∈ R+ ,
whenever
ky(0) − y0 (0)k ≤ δ.
81
82 Chapter 6

Here and in what follows we use the following notations for vectors
u = (u1 , u2 , · · · , un ), v = (v1 , v2 , · · · , vn ) ∈ Rn :
u · v = u1 v1 + u2 v2 + · · · + un vn
and
kuk = u21 + u22 + · · · u2n .

Definition 1.2. A solution y0 (t) of the equation (1.3) is called asymptoti-


cally stable solution if this solution is a stable solution and there exists δ > 0
such that for each solution y(t) of the equation (1.3)
lim ky(t) − y0 (t)k = 0,
t→∞
whenever
ky(0) − y0 (0)k ≤ δ.

Definition 1.3. A solution y0 (t) of the equation (1.3) is called globally


asymptotically stable solution if this solution is a stable solution and for each
solution y(t) of the equation (1.3)
lim ky(t) − y0 (t)k = 0.
t→∞

Definition 1.4. A system (1.1) or the equation (1.3) is called a dissipative


system if there exists a number R0 such that for each M > 0 and all inital
data y0 that satisfy the condition ky0 k ≤ M the corresponding solutions of
the Cauchy problem for equation (1.3) with initial condition
y(0) = y0 (1.5)
satisfy the inequality
ky(t)k ≤ R0 , ∀t ≥ t0 (M ),
where t0 (M ) depends only on M .

Example 1.5. Let a, b be positive numbers. Show that the zero solution of
the system (
x0 (t) + ay(t) = 0,
y 0 (t) − bx(t) = 0,
Stability and Instability 83

is stable. Is the zero solution asymptotically stable?


Example 1.6. Let a, b be positive numbers. Show that the zero solution of
the system
(
x0 (t) + 3x(t) − 2y(t) + a( |x(t)|2 + |y(t)|2 x(t) = 0,


y 0 (t) + 4y(t) + 2x(t) + b( |x(t)|2 + |y(t)|2 y(t) = 0,




is globally asymptotically stable.


Example 1.7. Given the system of equations
(
x0 (t) = ax3 (t) + by(t),
(1.6)
y 0 (t) = cx(t) + dy 3 (t).
Find the conditions on the parameters a, b, c, d for which
(1) the zero solution is stable,
(2) the system is dissipative,
(3) all solutions of the system are bounded on R+ ,
(4) the zero solution is globally asymptotically stable.
Solution.
(1) a ≤ 0, d ≤ 0, bc < 0. In this case the zero solution of the system is stable-
Witout loss of generality we can assume that b > 0 and c < 0.
Then multiplying first equation by |c|x(t), the second equation by by(t) and
adding obtained relations we get
1d 
|c|x2 (t) + by 2 (t) ≤ 0,

2 dt
From the last inequality we obtain
|c|x2 (t) + by 2 (t) ≤ |c|x2 (0) + by 2 (0) , ∀t ≥ .
   

This inequality implies that all solutions of this system are bounded on R+ and
the zero solution of the system is stable.

(2) Assume that a < 0, d < 0. Then we can rewrite the equality (??) in the
following form
1d  2
x (t) + y 2 (t) + m x2 (t) + y 2 (t)
  
2 dt
= ax4 (t) + dy 4 (t) + m x2 (t) + y 2 (t) + (b + c)x(t)y(t)
 

≤ ax4 (t) + +dy 4 (t) + m1 x2 (t) + y 2 (t) , (1.7)


 

b+c
where m > 0 is an arbitrary positive number and m1 := m + 2 .
Emplyoing the Young inequality
1
αβ ≤ εα2 + β 2

84 Chapter 6

which holds for each positive α, β and ε we obtain


1
m1 y 2 (t) ≤ |d|y 4 + m2
4|d| 1
1
m1 x2 (t) ≤ |a|x4 (t) + m2
4|a| 1
By using the last two inequalities in (1.7) we get
1d  2
x (t) + y 2 (t) + m x2 (t) + y 2 (t) ≤ m0 ,
  
(1.8)
2 dt
where  
1 2 1 1
m0 := m1 + .
4 |a| |d|
Multiplication of (1.8) by e2mt gives
d  2mt  2
x (t) + y 2 (t) ≤ 2m0 e2mt .

e
dt
Finally we integrate the last inequality and obtain
 m0
x (t) + y 2 (t) ≤ e−2mt x2 (0) + y 2 (0) +
 2  
.
m
The last inequltity implies that if a < 0, d < 0, then the system (1.1) is a dissi-
pative system.

Example 1.8. Supppse that a(t) is a continuous function defined on [0, ∞).
Show that solutions of the equation
y 0 (t) = a(t)y(t), t ≥ 0 (1.9)
are stable if and only if
Z t
lim sup a(s)ds < ∞. (1.10)
t→∞ 0

Solution Let y(t) be a given solution of the equation (1.9) that satisfies the
initial condition y(0) = y0 . Then
Rt
a(s)ds
y(t) = y0 e 0 .
Let z(t) be an arbitrary solution of (1.9). It is clear that
Rt
a(s)ds
|y(t) − z(t)| = |y0 − z0 |e 0 , (1.11)
where z0 = z(0). A solution y(t) of (1.9) is stabel if for each ε > 0, there exists
δ = δ(ε) > 0 such that
|y(t) − z(t)| ≤ ε (1.12)
whenever |y0 − z0 | ≤ δ. It is clear that the right hand side of (1.11) is finite only
when the condition (1.10) is satisfied.
Stability and Instability 85

Suppose now that the condition (1.10) is satisfied. Then there exists A0 > 0 such
that Rt
e 0 a(s)ds ≤ A0 < ∞,
Therefore it follows from (1.11) that for each ε > 0
|y(t) − z(t)| ≤ ε
ε
whenever |y0 − z0 | ≤ A0 , i.e. the solution y(t) is stable.
CHAPTER 8

Boundary Value Problems

In the previous chaper we studied mainly the initial vlaue problems (or
Cauchy problems) for ordinary differential equations. We were given the ini-
tial value (or initial values) of the unknown function and a differential equation
which governed its behavior for subsequent times. In this chaper we consider
a different type of problems for second order ODE’s which we call a boundary
value problem. In this case our aim is to find a function defined on some interval,
where we are given its value or the value of its derivative on the boundary points
of the interval and a differential equation to govern its behavior in the interior of
the interval.

1. Boundary Value Problems for second order linear ODE’s


There are many important problems in mathematical physics leading to the
boundary value problems for second order linear and nonlinear ODE’s. In this
section we study the following boundary value problem
L[y] = −h(x), x ∈ (a, b), (1.1)
Ba [y] = α1 y(a) + α2 y 0 (a) = 0 (1.2)

Bb [y] = β1 y(b) + β2 y 0 (b) = 0 (1.3)


where
L[y] := (p(x)y 0 )0 + q(x)y.
Here p is a given continuously differentiable , and q, h are given continuous func-
tions on the interval [a, b], α1 , α2 , β1 , β2 are given numbers.
We assume that the homogeneous equation
L[y] = 0 (1.4)
under the boundary conditions (1.2),(1.3) has just zero solution.
Let y1 (x) be a solution of (1.4) under the boundary condition Ba [y1 ] = 0, and y2
a solution of (1.4) under the boundary condition Bb [y2 ] = 0.
y1 and y2 are linearly independent. In fact if y1 , y2 were linearly dependent,
then we would have y1 = cy2 , with c = cont., i.e. y1 - nonzero solution of
(1.4),(1.2),(1.3).
A particular solution of (1.1) has the form
87
88 Chapter 6

yp (x) = c1 (x)y1 (x) + c2 (x)y2 (x)


where
h(x)y2 (x)
c01 (x) = ,
p(x)w(x)
h(x)y1 (x)
c02 (x) = − .
p(x)w(x)
Hence Z b
h(x)y2 (x)
c1 (x) = − ds,
x p(x)w(x)
and the particular solution takes the form
Z x
h(x)y1 (x)
c2 (x) = − ds,
a p(x)w(x)
Z b Z x
h(x)y2 (x) h(x)y1 (x)
y(x) = y1 (x) − ds + y2 (x) − ds (1.5)
x p(x)w(x) a p(x)w(x)
Let us consider the function
(
− y1 (s)y
c
2 (x)
, a≤s≤x
G(x, s) = y2 (s)y1 (x) .
− c , x ≤ s ≤ b.
Then we can rewrite (1.5) in the following form
Z b
y(x) = G(x, s)h(s)ds.
a
It is clear that
Z b Z x
0 0 h(x)y2 (s) 0 h(s)y1 (s)
y (x) = y1 (x) − ds + h(s)y1 (x)y2 (x) + y2 (x) − ds
x c a c
Z b Z x
1 0 0
= y1 (x) −h(s)y2 (s)ds + y2 (x) −h(s)y1 (s)ds,
c x a
Z b
0 1 0
y (a) = y1 (a)[ −h(x)y2 (x)dx]
c a
1 b
Z Z b
α1 y(a) + α2 y 0 (a) = α1 −h(x)y2 (x)dxy1 (a) + α2 y10 (a) −h(x)y2 (x)dx = 0
c a a
Similarly β1 y(b) + β2 y 0 (b) = 0.
Hence the function Z b
y(x) = G(x, s)h(s)ds
a
is the unique solution of the problem (1.1) -(1.3). The function G(x, y) is called
the Greene function .
So to find the solution of (1.1)-(1.3) we must do the following steps:
Boundary value Problems 89

(1) Find two linearly independent solutions of (1.4) that satisfy the bound-
ary conditions Ba [y1 ] = 0, Bb [y2 ] = 0,
(2) Compute p(x)W (x) = c,
(3) Construct the Green function
(
− y1 (s)y
c
2 (x)
, a ≤ s ≤ x,
G(x, s) = y2 (s)y1 (x)
− c , x ≤ s ≤ b.
Then the desired solution is the function
Z b
y(x) = G(x, s)y(s)ds.
a

Problem 1.1. Solve the boundary value problem


(
−y 00 = x,
y(0) = y(π) = 0.
General solution of homogeneous equation: yh = c1 x + c2
y1 (0) = c1 .0 + c2 = 0 ⇒ c2 = 0 ⇒ y1 (x) = c1 x
y2 (0) = c1 .π + c2 = 0 ⇒ c2 = c1 π, c1 = −1, c2 = π ⇒ y2 (x) = π − x
x π−x

W [x, π − x] = = −π
1 −1
(
(π−s)x
π x ≤ s ≤ b,
G(x, s) = s(π−x)
π 0 ≤ s ≤ x.

π−x x 2 x π
Z Z
y(x) = s ds + (π − s)sds
π 0 π x

π − x x3 x πs2 πs3

= + −
π 3 π 2 3 x
x3 x4 x π 3 π 3 πx2 x3
 
= − + − − +
3 3π π 2 3 2 3
x 3 x 4 2
π x x 3 x 4 −x 2
= − + − + = (x − π 2 ).
3 3π 6 2 3π 6
2. Sturm-Liouville boundary value problems
In this section we we consider the following Sturm-Liovile problem , i.e. a
problem of finding the numbers λ for which the equation
–y 00 + q(x)y = λy, x ∈ (a, b), (2.1)
where q is given continuous function on the interval [a, b], under the boundary
conditions of the form
α1 y(a) + α2 y 0 (a) = 0 (2.2)
90 Chapter 6

β1 y(b) + β2 y 0 (b) = 0 (2.3)


has nonzero solution. Such solutions of the problem (2.27)-(2.3) are called eigen-
functions , and the corresponding values of the parameter λ are called eigenvalues
of this Sturm-Liouville problem (2.27)-(2.3).
The problem was first investigated between 1837 and 1841 by
J. Liouville and J. C. F. Sturm. The solution of some types of equations of math-
ematical physics by the Fourier method leads to the Sturm-Liouville problem. In
Chapter we will see how the Sturm-Lioville problem of the form
(
−y 00 = λy, x ∈ (0, L),
y(0) = y(L) = 0
appears when solving the initial boundary value problem for the heat equation
and the wave equation.
Theorem 2.1. The differential operator
L[y] = −y 00 + q(x)y
with the domain of definition
D(L) := y ∈ C 2 [a, b] : Ba [y](a) = 0, Bb [y](a) = 0


is a symmetric operator, i.e.


Z b Z b
f (x)L[g](x)dx = g(x)L[f ](x)dx, ∀f, g ∈ D(L). (2.4)
a a

Proof. First we show that if f, g ∈ D(L), then


W [f, g](a) = W [f, g](b) = 0,
where W [f, g] is the Wronskian of f and g.
Really, if Ba [y] = 0, then either y(a) = 0 or y 0 (b) = 0 or y(a) = hy 0 (a).
If f (a) = g(a) = 0, then W [f, g](a) = f (a)g 0 (a) − g(a)f 0 (a) = 0g 0 (a) − 0f 0 (a) = 0.
If f 0 (a) = g 0 (a) = 0, then W [f, g](a) = f (a)g 0 (a)−g(a)f 0 (a) = f (a)0)−g(a)0 = 0.
If f (a) = hf 0 (a), g(a) = g 0 (a), then
W [f, g](a) = f (a)g 0 (a) − g(a)f 0 (a) = hf 0 (a)g 0 (a) − hg 0 (a)f 0 (a) = 0.
Similarly we can show that W [f, g](b) = 0.
Integrating by parts we get
Z b Z b
f (x) −g 00 (x) + q(x)g(x) dx
 
f (x)L[g](x)dx =
a a
Z b Z b
=− f (x)g 00 (x)dx + q(x)f (x)g(x)dx
a a
b Z b Z b
0 0 0
= −f g + f (x)g (x)dx + q(x)f (x)g(x)dx (2.5)
a a a
Boundary value Problems 91

and
Z b Z b
g(x) −f 00 (x) + q(x)f (x) dx
 
g(x)L[f ](x)dx =
a a
Z b Z b
00
=− g(x)f (x)dx + q(x)f (x)g(x)dx
a a
b Z b Z b
0
0 0
= −f g + f (x)g (x)dx + q(x)f (x)g(x)dx (2.6)

a a a
It follows from (2.5) and (2.6) that
Z b Z b
f (x)L[g](x)dx − g(x)L[f ](x)dx = W [g, f ](b) + W [f, g](a) = 0.
a a

3. Boundary Value Problems for second order nonlinear ODE’s
In this section we consider the problem
−y 00 (x) + a(x)y(x) = f (x, y(x)) + h(x), x ∈ (a, b), (3.1)
Ba [y] = α1 y(a) + α2 y 0 (a) = 0, (3.2)

Bb [y] = β1 y(b) + β2 y 0 (b) = 0, (3.3)


where α1 , α2 , β1 , β2 are so that the equation
−y 00 (x) + a(x)y(x) = 0
under the boundary conditions (3.2),(3.3) has just trivial solution. Here a(x), h(x)
are given continuous functions and the given nonlinear term f (x, ·) is continuous
on Q := [0, 1] × R1 and satisfies the uniform Lipschitz condition, i.e. there exists
a number k0 > 0 such that
|f (x, y1 ) − f (x, y2 )| ≤ k0 |y1 − y2 |, ∀x ∈ [0, 1], y1 , y2 ∈ R1 . (3.4)
The problem is equaivalent to the following integral equation
Z 1
y(x) = G(x, ξ)f (ξ, y(ξ)dξ + h1 (x), (3.5)
0
where G(x, ξ) is the Green function of the operator Ly = −y 00 + a(x)y under the
boundary conditions (3.2),(??). and
Z 1
h1 (x) = G(x, ξ)h(ξ)dξ.
0
The problem of finding of a solution of the integral equation (3.5) is equaivalent
to the problem of finding of a fixed point of the operator
Z 1
A[y](x) = G(x, ξ)f (ξ, y(ξ))dξ + h1 (x)
0
92 Chapter 6

It is clear that the operator A[·] maps C[0, 1) into itself. On the other hand
thanks to the Lipschitz condition (3.4) we have
Z 1
A[y1 ](x) − A[y2 ](x) ≤ k0 |G(x, ξ)||y1 (ξ) − y2 (ξ)|dξ.

0
Since |G(x, ξ)| ≤ g0 for each x, ξ ∈ [0, 1] we have

max A[y1 ](x) − A[y2 ](x) ≤ k0 g0 max |y1 (x) − y2 (x)|.

x∈[0,1] x∈[0,1]

This inequality implies that the operator A is a contraction in the Banach space
C[0, 1] whenever k0 g0 < 1. Hence the following theorem holds true
Theorem 3.1. If the nonlinear term satisfies the condition (3.4) and
k0 g0 < 1, then the problem (3.1)-(3.3) has a unique solution.
Problem 3.2. Show that y(x) ≡ 0 is a unique solution of the problem
(
−y 00 (x) + [y(x)]3 = 0, x ∈ (0, 1),
y(0) = y(1) = 0.
Problem 3.3. Show that the problem
(
y 00 (x) − y(x) = sin(πx), x ∈ (0, 1),
y(0) = 0, y 0 (0) = −2
has a unique solution.
The general solution of the equation has the form
1
y(x) = C1 E x + C2 e−x − sin(πx).
1 + π2
Boundary conditions are satisfied if
C1 + C2 = 0, C1 e + C2 e−1 = −2.
Thus the problem has a solution
1 1
y(x) = (sinh(1 − x) − 2 sinh x) − sin(πx)
sinh 1 1 + π2
Assume that v(x) is another solution of the problem. Then the function,
z(x) = y(x) − v(x)
would have been a solution of the problem
(
z 00 (x) − z(x) = 0, x ∈ (0, 1),
z(0) = 0, z 0 (0) = 0.
Mutiplying this equation by −z(x) and integrating over the interval (0, 1) we
obtain
Boundary value Problems 93

Z 1 x=1 Z 1
0 2 0
[z (x)] dx − z z + [z(x)]2 dx = 0.

0 x=0 0
Thanks to the boundary conditions
Z 1 Z 1
[z 0 (x)]2 dx + [z(x)]2 dx = 0.
0 0
Hence z(x) = y(x) − v(x) = 0, ∀x ∈ [0, 1].
4. Problems
(1) Find eigenvalues and eigenfunctions of the problem
(
−y 00 = λy, x ∈ (0, 2π),
y(0) = y(2π), y 0 (0) = y 0 (2π).
CHAPTER 9

Fourier Series and PDE’s

1. Periodic Functions
A function f (x) defined on R is called a periodic function if there exists a
number T > 0 such that
f (x + T ) = f (x), ∀x ∈ R. (1.1)
The smallest number T for which the relation (1.1) holds is called the period of
f or fundamental period of f .

Lemma 1.1. If f is T - periodic continuous function, then


Z a+T Z T
f (x)dx = f (x)dx. (1.2)
a 0
Proof. Consider the function
Z x+T
F (x) = f (s)ds
x
It is clear that
F 0 (x) = f (x + T ) − f (x) = 0, ∀x ∈ R (f is T- periodic)
Thus F (x) is a constant function. Hence
Z a+T Z T
F (a) = f (x)dx = F (0) = f (x)dx.
a 0


2. Functional Series
Let
f1 (x), f2 (x), ..., fn (x), ... (2.1)
be a sequence of functions defined on some interval I ⊂ R. We say that the
sequence (3.1) is convergent (or pointwise convergent) to a function f (x) on I if
for each fixed point x ∈ I the number sequence {fn (x)} converges to the number
f (x) as n → ∞. If at least for one point x0 the sequence f (x0 ) is divergent, then
we say that the sequence of functions {fn (x)} is divergent on I. .
A sequence of functions (3.1) is said to be uniformly convergent to a function
95
96 Chapter 8

f (x) on I if for each ε > 0 there exists a number Nε depending on ε only, such
that
|fn (x) − f (x)| ≤ ε
for all n ≥ Nε .

For a given sequence of functions (3.1) the series



X
fn (x) (2.2)
n=1

is the following limit


lim SN (x),
N →∞
where
SN (x) = f1 (x) + f2 (x) + ... + fN (x)
is called the N -th partial sum of the series.
If the sequence of partial sums {SN (x)} converges to some function s(x) on I,
i.e.
X∞
fn (x) = s(x), (2.3)
n=1
then we say that the series (3.8) is convergent on I to s(x). Otherwise the series
(3.8) is called divergent.
If the sequence {SN (x)} is uniformly convergent to s(x) then we say that the
series (3.8) is uniformly convergent.

Theorem 2.1. If the functions


f1 (x), f2 (x), ..., fn (x), ...

P
are continuous on an interval [a, b] and the series fn (x) is uniformly con-
n=1
vergent on [a, b], then the sum of the series s(x) is a continuous function on
[a, b]. Moreover the series obtained term by term integration of this series is also
convergent and
X∞ Z b Z b
fn (x)dx = s(x)dx.
n=1 a a

Theorem 2.2. (Weierstrass Theorem) If the functions


f1 (x), f2 (x), ..., fn (x), ...
are continuous on an interval [a, b],
|fn (x)| ≤ an , ∀x ∈ [a, b], n = 1, 2, ...
Fourier Series and PDE’s 97


P ∞
P
and the series an is convergent then the series fn (x) is uniformly conver-
n=1 n=1
gent to some function that is continuous on [a, b].

3. Euler’s formulas and Fourier series


A series of the form

a0 X
+ (an cos(nx) + bn sin(nx)) (3.1)
2
n=1

is called a trigonometric series.

Question: Which functions have trigonometric series expansion . If f (x)


has a trigonometric series expansion

a0 X
f (x) = + (an cos(nx) + bn sin(nx)) (3.2)
2
n=1

how to compute a0 , a1 , ..., an , ..., b1 , b2 , ...?

Theorem 3.1. Suppose that f is 2π-periodic function and



a0 X
f (x) = + (an cos(nx) + bn sin(nx)), (3.3)
2
n=1

where the series converges uniformly on the real axis. Then


1 π
Z
an = f (x) cos(nx)dx, n = 0, 1, 2... (3.4)
π −π
1 π
Z
bn = f (x) sin(nx)dx, n = 1, 2... (3.5)
π −π
Proof. Really since
Z π Z π
cos(nx)dx = 0, sin(nx)dx = 0
−π −π

and due to uniform convergence of the series we can integrate (3.1) over (−π, π)
and get: Z π
f (x)dx = a0 π,
−π
Let us multiply (3.1) by cos(mx) and integrate over (−π, π). Taking into
account Z π Z π
2
cos (mx)dx = π, sin2 (mx)dx = π
−π −π
98 Chapter 8

we obtain Z π
1
an = f (x) cos(nx)dx, n = 0, 1, 2... (3.6)
π −π

1 π
Z
bn = f (x) sin(nx)dx, n = 1, 2... (3.7)
π −π
Here we have used the fact that for each m the series

X
cos(mx) (an cos(nx) + bn sin(nx))
n=1
and

X
sin(mx) (an cos(nx) + bn sin(nx))
n=1
are uniformly convergent. 

Definition 3.2. The series (3.1) where an and bn are defined by (3.6) and
(3.7) is called the Fourier series of the function f , the numbers an , bn are
called the Fourier coefficients of f .

Piecewise continuous function. A function f (x) is called piecewise continu-


ous on [a, b], if limx→b− f (x), limx→a+ f (x) exist f is continuous on (a, b) except
at finitely many of points in (a, b), where f has one-sided limits.
Theorem 3.3. If 2π-periodic function f (x) and its derivative f 0 (x) are piece-
wise continuous functions, then

f (x+) + f (x−) a0 X
= + (an cos(nx) + bn sin(nx)) (3.8)
2 2
n=1

for each x ∈ R, where an and bn are defined by (3.4) and (3.5).


Example 3.4. Find the Fourier series of the function φ(x) given on [−π, π]
by (
π + x, x ∈ [−π, 0]
φ(x) =
π − x, x ∈ [0, π].
Solution. The function φ(x) is piecewise smooth. i.e. it is continuous, but its
derivative is piecewise continuopus.

1 π 2 π x2
Z Z 
2
a0 = φ(x)dx = (π − x)dx = πx − = 2π − π = π
π −π π 0 π 2 0
Fourier Series and PDE’s 99

π π
2 π
Z Z Z
2
an = (π − x) cos(nx)dx == 2 cos(nx)dx − x cos(nx)dx
π 0 0 π 0
0
2 π
Z  Z π
1 2 π 2
=− x sin(nx) dx = − [x sin(nx)]0 + sin(nx)dx
π 0 n nπ nπ 0

(−1)n
  
2 1 2 1 1 2 1
= − cos(nx) = − cos(nπ) = −
nπ n 0 nπ n n nπ n2 n2
2
Thus an = 0 if n is and even number, and an = n2
, if n is and odd number.

π 4 X 1
φ(x) = + cos(2k − 1)x
2 π (2k − 1)2
k=1

Example 3.5. Using Fourier series for φ(x) show that


π2 1 1 1
= 1 + 2 + 2 + 2 + ...
8 3 5 7
3.1. Functions of any period. If a function f (x) is 2l peiodic and

a0 X   nπ   nπ 
f (x) = + an cos x + bn sin x (3.9)
2 l l
n=1
then
l
1 l
Z Z
1  nπ   nπ 
an = f (x) cos x dx, bn = f (x) sin x dx. (3.10)
l 0 l l 0 l
3.2. Even and Odd Functions. If a function f (x) is an even function ,
then

2 l
Z  nπ 
bn = f (x) sin x dx = 0
l 0 l
and its Fourier series has the form

a0 X  nπ 
f (x) = + an cos x , (3.11)
2 l
n=1
where
2 l
Z  nπ 
an = f (x) cos x dx, n = 0, 1, 2, ... (3.12)
l 0 l
If a function f (x) is an odd function , then
2 l
Z  nπ 
an = f (x) cos x dx = 0
l 0 l
and its Fourier series has the form
X∞  nπ 
f (x) = bn sin x , (3.13)
l
n=1
100 Chapter 8

where
2 l
Z  nπ 
bn = f (x) sin x dx, n = 1, 2, ... (3.14)
l 0 l
Let f (x) be defined on [0, l]. We define the even periodic extension fe of f
as follows
fe (x) = f (−x), if x ∈ [−l, 0], and fe (x) = fe (x + 2l), ∀x ∈ R.
An odd periodic extension f0 of f is defined as follows
f0 (x) = −f (−x), if x ∈ [−l, 0], and f0 (x) = f0 (x + 2l), ∀x ∈ R.
Example 3.6. Find Fourier series expansion for f (x) = 1 − x2 , x ∈ [−1, 1]
and use it to show that

π2 X 1
= .
6 n2
n=1

Solution. Z 1
4
a0 = 2 (1 − x2 )dx = ,
0 3
Z 1 Z 1
an = 2 (1 − x2 ) cos(nπx)dx = −2 cos(nπx)dx+
0 0
Z  0 Z 1
1 2
2 x2 sin(nπx dx = − 2x sin(nπx)dx =
nπ nπ 0
Z 1  0
4 1 4 4
x − cos(nπx dx = − 2
x cos(nπx)|10 = − 2 2 (−1)n .
nπ 0 nπ (nπ) n π
Thus we have

2 4 X (−1)n+1
f (x) = + 2 cos(nπx).
3 π n2
n=1

4. Riemann -Lebesgue Lemma


Proposition 4.1. If f (x) is a continuous function , and
Z π Z π
In := f (x) cos(nx)dx, Jn := f (x) sin(nx)dx.
−π −π
Then
lim In = lim Jn = 0 (4.1)
n→∞ n→∞
Proof of 4.1. Since cos α = − cos(α + π) we have
Z π Z π h π i
In := f (x) cos(nx)dx = − f (x) cos (x + )n dx.
−π −π n
π
Making change of variables x + n = y and using the Lemma 1.1 we obtain
Z π Z π+ π Z π
2 π π
f (x) cos(nx)dx = − f (y − ) cos(ny)dy = − f (y − ) cos(ny)dy
−π −π+ π n −π n
2
Fourier Series and PDE’s 101

Hence we have
Z π Z π
π
In + In = 2 f (x) cos(nx)dx = [f (x) − f (x − )] cos(nx)dx
−π −π n
Z π Z π
π
= 2 f (x) cos(nx)dx ≤ |f (x) − f (x − )|dx.
−π −π n
The function f is continuous on [−π, π] thus it is uniformly continuous on [−π, π]. Therefore
the integral in the right hand side of (R) tends to zero as n → ∞. So In → 0 as n → ∞.
Similarly we can show that Jn → 0 as n → ∞.

Problem. Let f (x) be 2π -periodic and f 0 (x) is continuous function. Show that
   
1 1
an = o , bn = o .
n n

5. Bessel inequality and mean value approximation.


Theorem 5.1. (Bessel inequality) If f (x) is a piecewise continuous func-
tion on (−π, π), then the following inequality caleed the Bessel inequality holds
true

a20 X 2 1 π 2
Z
2
+ (an + bn ) ≤ f (x)dx, (5.1)
2 π −π
n=1

where a0 , an and bn , n = 1, 2, ... are Fourier coefficients of f .


Proof. It is clear that
Z π Z π Z π
2 2
0 ≤ EN = f (x)dx − 2 SN (x)f (x)dx + SN (x)dx. (5.2)
−π −π −π
By using Euler formulas we get
Z π
SN (x)f (x)dx
−π
Z π N Z π
a0 X
= f (x)dx + f (x) [ak cos(nx) + bk sin(nx)] dx
2 −π n=1 −π
 
1 2 2 2 2 2
= π a0 + a1 + ... + aN + b1 + ... + bN , (5.3)
2
Z π  2 
2 a0 2 2 2 2
[SN (x)] dx = π + a1 + ... + aN + b1 + ... + bN . (5.4)
−π 2
Substituting (5.3) and (5.4) into (5.2) we obtain:
N
Z π " #
a2 X
0 ≤ EN = f 2 (x)dx − π 0 + (a2n + b2n ) .
−π 2
n=1
102 Chapter 8

or
N π
a20 X 2
Z
1
+ (an + b2n ) ≤ f 2 (x)dx. (5.5)
2 π −π
n=1
We cam pass to the limit as N → ∞ and get (5.1). 
By using the Bessel inequality we can prove the following theorem
Theorem 5.2. Suppose that f is continuous 2π-periodic function and f 0 is
piecewise continuous function. Then the Fourier series of f converges absolutely
and uniformly to the function f .
Proof. Let us calculate the Fourier coefficients of f 0 :
1 π 0
Z
1
α0 = f (x)dx = (f (π) − f (−π)) = 0,
π −π π
Z π Z π
1 0 1 x=π
αn = f (x) cos(nx)dx = cos(nx)f (x) +n f (x) sin(nx)dx = nbn ,

π −π π x=−π −π

1 π 0
Z
βn = f (x) sin(nx)dx
π −π
Z π
1 x=π
= sin(nx)f (x) −n f (x) cos(nx)dx = nan .

π x=−π −π
So we have
αn = nbn , n = 0, 1, 2, ..., βn = nan , n = 1, 2, ..., (5.6)
Employing the inequality
1
|ab| ≤ a2 + b2
4
we obtain from (5.6) the following inequlity
1 1 1
|an | + |bn | = |βn | + |αn | ≤ 2 + αn2 + βn2 .
n n 2n
Due to the Bessel inequality the series
X∞
(αn2 + βn2 )
n=1
P∞
is convergent. Hence the series n=1 (|an | + |bn |) is also convergent. Therefore
the series

a0 X  nπ nπ 
+ an cos x + bn sin x
2 l l
n=1
is uniformly convergent to a continuous function f .

where an and bn are Fourier coefficients of the function f
Fourier Series and PDE’s 103

Example 5.3. Assume that the Fourier series of f (x) on [−π, π] converges
to f (x) and can be integrated term by term. Multiply

a0 X  nπ nπ 
+ an cos x + bn sin x
2 l l
n=1
by f (x) and integrate the obtained relation from −π to π to derive the identity

π ∞
a20 X 2
Z
1
f 2 (x)dx = + (an + b2n ). (5.7)
π −π 2
n=1
This identity is called the Parseval identity.
CHAPTER 10

Partial Differential Equations

This chapter is devoted to the study of the Cauchy problem and initial boundary
value problems for the heat equation and wave equations.

1. Heat Equation. Method of Separation of Variables


We consider first the following initial boundary value problem for the heat equa-
tion
ut = a2 uxx , x ∈ (0, l), t > 0, (1.1)
u(x, 0) = f (x), x ∈ [0, l], (1.2)
u(0, t) = u(l, t) = 0, t ≥ 0, (1.3)
We assume that the solution of the problem has the form
u(x, t) = X(x)T (t),
where X(x) and T (t) are nonzero functions. Substituting into (1.1) we get
X(x)T 0 (t) = a2 X 00 (x)T (t).
Dividing both sides of the last equality by a2 X(x)T (t) we obtain
T 0 (t) X 00 (x)
= (1.4)
a2 T (t) X(x)
Since the left hand side of (1.4) depends only on t and the right hand side depend
only on x each side of this equality can only be equal to some constant. Thus
T 0 (t) X 00 (x)
= = −λ, λ = constant
a2 T (t) X(x)
or
T 0 (t) = λa2 T (t) (1.5)
00
X (x) + λX(x) = 0. (1.6)
It follows from (3) that
X(0) = X(l) = 0. (1.7)
So we have to find the values of λ for which the equation (1.6) has nonzero
solution which satisfy (1.7). The values of l for which (1.6) has nonzero solution
satisfying (1.7) are called eigenvalues of the problem (1.6),(1.7) When l = 0 (1.7),
105
106 Chapter 9

the corresponding solutions - eigenfunctions of (1.6),(1.7).


When l = 0 the equation has a general solution
X(x) = Ax + B
This function satisfies (1.7) just for A = B = 0. Thus λ = 0 is not an eigenvalue.
If λ < 0 then general solution of (1.6) has the form
√ √
X(x) = C1 e |λ|x + C1 e− |λ|x .
It is easy to see that this function satisfies (1.7) just when C1 = C2 = 0. So
(1.6),(1.7) has not negative eigenvalues.
If l > 0 then the general solution of (1.6) has the form
√ √
X(x) = C1 cos( λx) + C2 sin( λx).
Substituting into (1.7) we obtain

X(0) = C1 = 0, X(l) = C2 sin( λl) = 0
The second equality holds for ll = nπ, n = ±1, ±2, ... Thus the numbers
n2 π 2
λn = , n = 1, 2, ...
l2
are eigenvalues of the problem (1.6),(1.7), and the functions
 nπ 
Xn (x) = sin x , n = 1, 2, ...
l
are the corresponding eigenfunctions. It is easy to see that the general solution
of (1.5) for l = ln has the form

Tn (t) = Dn e−a nt
, n = 1, 2, ...
2
Hence for each n = 1, 2, ... the function e−a λn t sin nπ

l x satisfies (1.1),(1.3).
Since the equation (1.6) is a linear equation for each N
N  nπ 

X
uN (x, t) = Dn e−a nt
sin x , (1.8)
l
n=1

where Dn , n = 1, ..., N are arbitrary constants also satisfies (1.1),(1.3). Next we


try to satisfy the initial condition (1.2):
N
X  nπ 
uN (x, 0) = Dn sin x = f (x)
l
n=1

We see that the solution of the problem (1.1)-(1.3) has this form just when the
initial function is linear combination of functions
p p k2 π2
sin λ1 x, ...., sin λn x, λk = 2 .
l
Partial Differential Equations 107

Let us consider the series


∞  nπ 

X
u(x, t) = Dn e−a nt
sin x . (1.9)
l
n=1

P
Let us note that if |Dn | < ∞ , then this series is uniformly convergent on
n=1
[−l, l]×[0, T ], ∀T > 0. The function u(x, t) defined by (1.9) satisfies the boundary
conditions (1.3) since each term of the series satisfies these conditions. It follows
from (1.9) that u(x, t) satisfies the initial condition (1.2)

X  nπ 
f (x) = u(x, 0) = Dn sin x (1.10)
l
n=1

iff
Z l
2  nπ 
Dn = fn = f (x) sin x dx.
l 0 l
Theorem 1.1. If f (x) is continuous on [0, L], f 0 (x) is piecewise continuous
on [0, L], f (0) = f (l) = 0 then the function
∞  nπ 

X
u(x, t) = fn e−a nt
sin x (1.11)
l
n=1

satisfies (1.1)-(1.3).
To prove this theorem we need the following proposition
Proposition 1.2. Assume that the functions vn (x, t), n = 1, 2, ... are contin-
uous QT = [a, b] × [t0 , T ] and
|vn (x, t)| ≤ an , ∀(x, t) ∈ QT , n = 1, 2, ...

P
where the sequence of positive numbers {an } is so that the series an is con-
n=1

P
vergent. Then the series vn (x, t) is absolutely and uniformly convergent on
n=1
QT . Moreover the function

X
v(x, t) = vn (x, t)
n=1

is continuous on QT .
If
2
∂vn ∂ vn
∂t (x, t) ≤ bn , ∂x2 (x, t) ≤ dn , ∀(x, t) ∈ QT , n = 1, 2, ...

108 Chapter 9

and

X ∞
X
bn < ∞, dn < ∞,
n=1 n=1
then the series
∞ ∞
X ∂vn X ∂ 2 vn
(x, t) and (x, t)
∂t ∂x2
n=1 n=1
uniformly converge to vt (x, t) and vxx (x, t) in QT . Moreover these functions are
continuous in QT .
Proof of Theorem 1.1. Since f is piecewise smooth the series

X
|fn |
n=1

is convergent. Thus the Proposition 1.2 implies that the function u(x, t) is con-
tinuous on [0, l] × [0, ∞). Let us show that the series
∞ ∞
X ∂un X ∂ 2 un
(a) (x, t) and (b) (x, t) (1.12)
∂t ∂x2
n=1 n=1

are convergent uniformly for t ≥ t0 , x ∈ [0, l], where t0 is an arbitrary positive


number. Continuity of f on [0, l] implies boundedness of the sequence {fn }. So
there exists M > 0 so that
|fn | ≤ M, n = 1, 2, ...
Thus for each t ≥ t0 and x ∈ [0, l] we have
2π2
 nπ 
∂un n a2 n2 π 2 a2 n2 π 2

2 t
x ≤ M1 n2 e− l2 t0 ,

∂t (x, t) = −fn a l2 e l2 sin (1.13)

l
2
where M1 = M a2 πl2 .
2 2 2 
∂ un a2 n2 π 2 a2 n2 π 2
= −fn n π e− l2 t sin nπ x ≤ n2 M1 e− l2 t0 ,


∂x2 (x, t) (1.14)
l2 l a2
It is easy to see that the series

X M1 − a2 n22 π2 t0
n2 e l
a2
n=1

is convergent. Therefore due to the Proposition 1.2 the function u(x, t) defined
by (2.10) is a solution of the problem (1.1)-(1.3).
Finally we show that the solution we obtained is unique. Really, if v(x, t) is
another solution of the problem then
w(x, t) = u(x, t) − v(x, t)
Partial Differential Equations 109

is a solution of the problem



2
wt (x, t) = a wxx (x, t), x ∈ (0, l), t > 0,

w(x, 0) = 0, x ∈ [0, l], (1.15)

w(0, t) = w(l, t) = 0, t ≥ 0,

∞ ∞
a2n and b2n be convergent. Show that
P P
Problem 1.3. Let the series
n=1 n=1

∞ ∞
!1/2 ∞
!1/2
X X X
|an bn | ≤ a2n b2n
n=1 n=1 n=1

1.1. Nonhomogeneous Equation. Consider the problem
$$u_t = a^2 u_{xx} + h(x,t), \quad x\in(0,l), \ t>0, \qquad (1.16)$$
$$u(x,0) = f(x), \quad x\in[0,l], \qquad (1.17)$$
$$u(0,t) = u(l,t) = 0, \quad t\ge 0. \qquad (1.18)$$
We look for a solution of the problem (1.16)-(1.18) in the form
$$u(x,t) = \sum_{n=1}^{\infty} u_n(t) \sin\left(\frac{n\pi}{l}x\right). \qquad (1.19)$$
We expand $h(x,t)$:
$$h(x,t) = \sum_{n=1}^{\infty} h_n(t) \sin\left(\frac{n\pi}{l}x\right).$$
By using (1.19) we obtain from (1.16)
$$\sum_{n=1}^{\infty} \left[ u_n'(t) + \lambda_n a^2 u_n(t) - h_n(t) \right] \sin\left(\sqrt{\lambda_n}\, x\right) = 0.$$
This equality holds iff
$$u_n'(t) + \lambda_n a^2 u_n(t) = h_n(t), \quad n=1,2,\dots \qquad (1.20)$$
Taking into account the initial condition (1.17) we obtain
$$u(x,0) = \sum_{n=1}^{\infty} u_n(0) \sin\left(\frac{n\pi}{l}x\right) = f(x) = \sum_{n=1}^{\infty} f_n \sin\left(\frac{n\pi}{l}x\right).$$
It follows then that
$$u_n(0) = f_n, \quad n=1,2,\dots \qquad (1.21)$$
The initial value problem (1.20),(1.21) has the solution
$$u_n(t) = e^{-\lambda_n a^2 t} f_n + \int_0^t e^{-\lambda_n a^2 (t-s)} h_n(s)\, ds.$$

Inserting the expression for $u_n(t)$ into (1.19) we get
$$u(x,t) = \sum_{n=1}^{\infty} \left[ f_n e^{-\lambda_n a^2 t} + \int_0^t e^{-\lambda_n a^2 (t-s)} h_n(s)\, ds \right] \sin\left(\frac{n\pi}{l}x\right). \qquad (1.22)$$
Let us recall that $\lambda_n = \frac{n^2\pi^2}{l^2}$.
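For a single mode, the formula above is easy to evaluate numerically. The sketch below (the sample coefficients $f_n$ and $h_n(t)$ and all parameters are assumptions for illustration) computes the Duhamel integral by quadrature and checks the ODE (1.20) with a difference quotient:

```python
import numpy as np

l, a, n = np.pi, 1.0, 1              # assumed parameters and mode number
lam = (n * np.pi / l) ** 2           # lambda_n = n^2 pi^2 / l^2
f_n = 1.0                            # assumed Fourier coefficient of f
h_n = lambda t: np.cos(t)            # assumed Fourier coefficient of h(., t)

def u_n(t, m=20001):
    """Solution of (1.20),(1.21): e^{-lam a^2 t} f_n + Duhamel integral."""
    s = np.linspace(0.0, t, m)
    duhamel = np.trapz(np.exp(-lam * a**2 * (t - s)) * h_n(s), s)
    return np.exp(-lam * a**2 * t) * f_n + duhamel

# check the ODE u_n' + lam a^2 u_n = h_n with a central difference quotient
t, dt = 1.0, 1e-4
lhs = (u_n(t + dt) - u_n(t - dt)) / (2 * dt) + lam * a**2 * u_n(t)
print(lhs, h_n(t))                   # the two values should nearly agree
```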

Theorem 1.4. The solution of the problem (1.16)-(1.18) is unique.

Proof. Suppose that a function $v(x,t)$ is also a solution of this problem, i.e.
$$v_t = a^2 v_{xx} + h(x,t), \quad x\in(0,l), \ t>0, \qquad (1.23)$$
$$v(x,0) = f(x), \quad x\in[0,l], \qquad (1.24)$$
$$v(0,t) = v(l,t) = 0, \quad t\ge 0. \qquad (1.25)$$
It is clear that then the function $w(x,t) = u(x,t) - v(x,t)$ is a solution of the following problem:
$$w_t = a^2 w_{xx}, \quad x\in(0,l), \ t>0, \qquad (1.26)$$
$$w(x,0) = 0, \quad x\in[0,l], \qquad (1.27)$$
$$w(0,t) = w(l,t) = 0, \quad t\ge 0. \qquad (1.28)$$
Let us multiply the equation (1.26) by $w(x,t)$:
$$w(x,t) w_t(x,t) = a^2 w(x,t) w_{xx}(x,t),$$
and rewrite this equality in the following form:
$$\frac12 \frac{\partial}{\partial t} w^2(x,t) = a^2 \big( w(x,t) w_x(x,t) \big)_x - a^2 [w_x(x,t)]^2.$$
Integrating this equality over the interval $(0,l)$ with respect to $x$ we get
$$\frac12 \frac{d}{dt} \int_0^l w^2(x,t)\,dx - a^2 \left. w(x,t) w_x(x,t) \right|_{x=0}^{x=l} + a^2 \int_0^l [w_x(x,t)]^2\,dx = 0.$$
Due to the boundary conditions (1.28) we have $\left. w(x,t) w_x(x,t) \right|_{x=0}^{x=l} = 0$. Thus
$$\frac{d}{dt} \int_0^l w^2(x,t)\,dx + 2a^2 \int_0^l [w_x(x,t)]^2\,dx = 0.$$
This equality implies that
$$\frac{d}{dt} \int_0^l w^2(x,t)\,dx \le 0,$$
that is, the function $z(t) := \int_0^l w^2(x,t)\,dx$ is non-increasing on $[0,\infty)$. Then
$$0 \le \int_0^l w^2(x,t)\,dx \le \int_0^l w^2(x,0)\,dx = 0, \quad \forall t\ge 0.$$

Hence
$$w(x,t) \equiv 0, \quad \text{i.e.} \quad u(x,t) \equiv v(x,t).$$
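For the series solution (1.11) itself the monotone quantity $z(t)$ can be computed in closed form: by orthogonality of the sines, $\int_0^l u^2(x,t)\,dx = \frac{l}{2}\sum_{n=1}^{\infty} f_n^2 e^{-2a^2\lambda_n t}$, which is visibly decreasing. A short numerical check (sample coefficients assumed, not from the text):

```python
import numpy as np

l, a, N = 1.0, 1.0, 5
fn = 1.0 / np.arange(1, N + 1) ** 2          # assumed coefficients of f
lam = (np.arange(1, N + 1) * np.pi / l) ** 2  # lambda_n

def z(t):
    # by orthogonality, int_0^l u^2 dx = (l/2) * sum fn^2 e^{-2 a^2 lam_n t}
    return l / 2 * np.sum(fn**2 * np.exp(-2 * a**2 * lam * t))

print([round(z(t), 6) for t in (0.0, 0.1, 0.5, 2.0)])  # strictly decreasing
```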

1.2. Nonhomogeneous boundary conditions. Let us consider the problem
$$u_t = a^2 u_{xx}, \quad x\in(0,l), \ t>0, \qquad (1.29)$$
$$u(x,0) = f(x), \quad x\in[0,l], \qquad (1.30)$$
$$u(0,t) = A, \quad u(l,t) = B, \quad t\ge 0, \qquad (1.31)$$
where $A, B$ are given constants. The solution $u(x,t)$ of the problem is a sum of two functions $v(x,t)$ and $W(x)$, where $W(x)$ is a solution of the problem
$$\begin{cases} W''(x) = 0, & x\in(0,l),\\ W(0)=A, \ W(l)=B, \end{cases} \qquad (1.32)$$
and $v$ is a solution of the problem
$$\begin{cases} v_t = a^2 v_{xx}, & x\in(0,l), \ t>0,\\ v(x,0) = f(x) - W(x), & x\in[0,l],\\ v(0,t) = v(l,t) = 0, & t\ge 0. \end{cases} \qquad (1.33)$$
It is clear that
$$W(x) = A + \frac1l (B-A)x$$
is a solution of the problem (1.32). Hence the solution of the problem (1.29)-(1.31) is the function
$$u(x,t) = \sum_{n=1}^{\infty} q_n e^{-\frac{a^2 n^2\pi^2}{l^2}t} \sin\left(\frac{n\pi}{l}x\right) + A + \frac1l (B-A)x,$$
where
$$q_n = \frac2l \int_0^l \left[ f(x) - A - \frac1l (B-A)x \right] \sin\left(\frac{n\pi}{l}x\right) dx.$$
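A short numerical sketch of this splitting (all parameter values and the sample $f$ are my assumptions) computes the steady state $W$, the shifted coefficients $q_n$, and the resulting solution:

```python
import numpy as np

l, a, A, B, N = 1.0, 1.0, 2.0, 5.0, 50    # assumed parameters
f = lambda x: A + (B - A) * x / l + np.sin(np.pi * x / l)  # sample f compatible with (1.31)

x = np.linspace(0.0, l, 2001)
W = A + (B - A) * x / l                    # steady state, solves (1.32)
q = np.array([2.0 / l * np.trapz((f(x) - W) * np.sin(n * np.pi * x / l), x)
              for n in range(1, N + 1)])

def u(t):
    n = np.arange(1, N + 1)[:, None]
    v = (q[:, None] * np.exp(-(a * n * np.pi / l) ** 2 * t)
         * np.sin(n * np.pi * x / l)).sum(axis=0)   # transient part
    return v + W

print(u(0.0)[::500])   # approximately f at t = 0
print(u(5.0)[::500])   # close to the steady state W for large t
```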
Next we consider the following problem:
$$u_t = a^2 u_{xx}, \quad x\in(0,l), \ t>0, \qquad (1.34)$$
$$u(x,0) = f(x), \quad x\in[0,l], \qquad (1.35)$$
$$u(0,t) = \varphi(t), \quad u(l,t) = \psi(t), \quad t\ge 0. \qquad (1.36)$$
We look for the solution of the problem (1.34)-(1.36) in the form
$$u(x,t) = \sum_{n=1}^{\infty} u_n(t) \sin\left(\sqrt{\lambda_n}\, x\right). \qquad (1.37)$$
For $u_t$ we have
$$u_t(x,t) = \sum_{n=1}^{\infty} u_n'(t) \sin\left(\sqrt{\lambda_n}\, x\right), \qquad (1.38)$$
and for $u_{xx}$,
$$u_{xx}(x,t) = \sum_{n=1}^{\infty} g_n(t) \sin\left(\sqrt{\lambda_n}\, x\right), \qquad (1.39)$$
where
$$g_n(t) = \frac2l \int_0^l u_{xx}(x,t) \sin\left(\sqrt{\lambda_n}\, x\right) dx.$$
Integrating by parts we obtain
$$g_n(t) = \frac2l \left[ u_x \sin\left(\sqrt{\lambda_n}\, x\right) \right]_0^l - \frac2l \sqrt{\lambda_n} \int_0^l u_x(x,t) \cos\left(\sqrt{\lambda_n}\, x\right) dx$$
$$= -\frac2l \sqrt{\lambda_n} \left[ u(x,t) \cos\left(\sqrt{\lambda_n}\, x\right) \right]_0^l - \frac2l \lambda_n \int_0^l u(x,t) \sin\left(\sqrt{\lambda_n}\, x\right) dx$$
$$= \frac{2\sqrt{\lambda_n}}{l} \left[ u(0,t) - u(l,t)\cos(n\pi) \right] - \lambda_n u_n(t).$$
Employing the boundary conditions (1.36) we obtain
$$g_n(t) = \frac{2\sqrt{\lambda_n}}{l} \left[ \varphi(t) - \psi(t)\cos(n\pi) \right] - \lambda_n u_n(t).$$
Thus (1.39) implies
$$u_{xx}(x,t) = \sum_{n=1}^{\infty} \left[ \frac{2\sqrt{\lambda_n}}{l}\varphi(t) - \frac{2\sqrt{\lambda_n}}{l}(-1)^n \psi(t) - \lambda_n u_n(t) \right] \sin\left(\sqrt{\lambda_n}\, x\right).$$
By using the last relation and (1.38) in (1.34) we obtain
$$\sum_{n=1}^{\infty} \left[ u_n'(t) - a^2\left( \frac{2\sqrt{\lambda_n}}{l}\varphi(t) - \frac{2\sqrt{\lambda_n}}{l}(-1)^n \psi(t) - \lambda_n u_n(t) \right) \right] \sin\left(\sqrt{\lambda_n}\, x\right) = 0.$$
Therefore the Fourier coefficients $u_n(t)$ satisfy
$$u_n'(t) = a^2 \left[ \frac{2\sqrt{\lambda_n}}{l}\varphi(t) - \frac{2\sqrt{\lambda_n}}{l}(-1)^n \psi(t) - \lambda_n u_n(t) \right], \quad n=1,2,\dots \qquad (1.40)$$
The function $u$ will satisfy the initial condition (1.35) iff
$$u_n(0) = f_n, \quad n=1,2,\dots \qquad (1.41)$$
We solve the initial value problem (1.40),(1.41) and get
$$u_n(t) = f_n e^{-a^2\lambda_n t} - \frac{2a^2\sqrt{\lambda_n}}{l} \int_0^t e^{-a^2\lambda_n (t-s)} \left[ (-1)^n \psi(s) - \varphi(s) \right] ds, \quad n=1,2,\dots \qquad (1.42)$$
So the solution of the problem (1.34)-(1.36) has the form (1.37), where $u_n(t)$, $n=1,2,\dots$ are defined by (1.42).
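The coefficients (1.42) are straightforward to evaluate by quadrature. In the sketch below the boundary data $\varphi, \psi$ and all parameters are illustrative assumptions; note also that the sine series converges non-uniformly near an endpoint where $u$ is nonzero:

```python
import numpy as np

l, a, N = 1.0, 1.0, 40                       # assumed parameters
phi = lambda t: np.sin(t)                    # assumed u(0, t)
psi = lambda t: 0.0 * t                      # assumed u(l, t)
lam = lambda n: (n * np.pi / l) ** 2         # lambda_n

def u_n(n, t, m=2000, f_n=0.0):
    """Formula (1.42), with the integral done by the trapezoidal rule."""
    s = np.linspace(0.0, t, m)
    integrand = np.exp(-a**2 * lam(n) * (t - s)) * ((-1)**n * psi(s) - phi(s))
    return (f_n * np.exp(-a**2 * lam(n) * t)
            - 2 * a**2 * np.sqrt(lam(n)) / l * np.trapz(integrand, s))

def u(x, t):
    return sum(u_n(n, t) * np.sin(np.sqrt(lam(n)) * x) for n in range(1, N + 1))

x = np.linspace(0.0, l, 9)
print(u(x, 1.0))   # the series converges slowly near x = 0, where u = phi(t) != 0
```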

2. Wave Equation
In this section we study the wave equation. The first problem is the initial
boundary value problem:
$$u_{tt} = c^2 u_{xx}, \quad x\in(0,l), \ t>0, \qquad (2.1)$$
$$u(x,0) = f(x), \quad u_t(x,0) = g(x), \quad x\in[0,l], \qquad (2.2)$$
$$u(0,t) = u(l,t) = 0, \quad t\ge 0. \qquad (2.3)$$
We assume that the solution of the problem has the form
$$u(x,t) = X(x)T(t),$$

where $X(x)$ and $T(t)$ are nonzero functions. Substituting into (2.1) we get
$$X(x)T''(t) = c^2 X''(x)T(t).$$
Dividing both sides of the last equality by $c^2 X(x)T(t)$ we obtain
$$\frac{T''(t)}{c^2 T(t)} = \frac{X''(x)}{X(x)}. \qquad (2.4)$$
Since the left-hand side of (2.4) depends only on $t$ and the right-hand side depends only on $x$, each side of this equality must be equal to the same constant. Thus
$$\frac{T''(t)}{c^2 T(t)} = \frac{X''(x)}{X(x)} = -\lambda, \quad \lambda = \text{constant},$$
or
$$T''(t) + \lambda c^2 T(t) = 0, \qquad (2.5)$$
$$X''(x) + \lambda X(x) = 0. \qquad (2.6)$$
It follows from (2.3) that
$$X(0) = X(l) = 0. \qquad (2.7)$$
So we have to solve the eigenvalue problem (2.6),(2.7). We have seen that the numbers
$$\lambda_n = \frac{n^2\pi^2}{l^2}, \quad n=1,2,\dots$$
are eigenvalues of the problem (2.6),(2.7), and the functions
$$X_n(x) = \sin\left(\frac{n\pi}{l}x\right), \quad n=1,2,\dots$$
are the corresponding eigenfunctions. It is easy to see that the general solution of (2.5) for $\lambda = \lambda_n$ has the form
$$T_n(t) = A_n \cos\left(c\sqrt{\lambda_n}\, t\right) + B_n \sin\left(c\sqrt{\lambda_n}\, t\right), \quad n=1,2,\dots$$
It is easy to see that for each $N$ the function
$$u_N(x,t) = \sum_{n=1}^{N} \left[ A_n \cos\left(c\sqrt{\lambda_n}\, t\right) + B_n \sin\left(c\sqrt{\lambda_n}\, t\right) \right] \sin\left(\frac{n\pi}{l}x\right), \qquad (2.8)$$
where $A_n, B_n$, $n=1,\dots,N$ are arbitrary constants, satisfies (2.1),(2.3). But this function may satisfy the initial conditions (2.2) only when $f$ and $g$ are linear combinations of finitely many eigenfunctions. To satisfy the initial conditions (2.2) we consider the series
$$u(x,t) = \sum_{n=1}^{\infty} \left[ A_n \cos\left(c\sqrt{\lambda_n}\, t\right) + B_n \sin\left(c\sqrt{\lambda_n}\, t\right) \right] \sin\left(\frac{n\pi}{l}x\right). \qquad (2.9)$$
Let us note that if $\sum_{n=1}^{\infty} |A_n| < \infty$ and $\sum_{n=1}^{\infty} |B_n| < \infty$, then this series is uniformly convergent on $[-l,l]\times[0,T]$ for every $T>0$. The function $u(x,t)$ defined by (2.9) satisfies the boundary conditions (2.3), since each term of the series satisfies these conditions. It follows from (2.9) that $u(x,t)$ satisfies the initial conditions (2.2),
$$f(x) = u(x,0) = \sum_{n=1}^{\infty} A_n \sin\left(\frac{n\pi}{l}x\right),$$
$$g(x) = u_t(x,0) = \sum_{n=1}^{\infty} B_n \frac{cn\pi}{l} \sin\left(\frac{n\pi}{l}x\right),$$
iff
$$A_n = f_n = \frac2l \int_0^l f(x)\sin\left(\frac{n\pi}{l}x\right) dx,$$
and
$$B_n = \frac{l}{cn\pi} g_n = \frac{2}{cn\pi} \int_0^l g(x)\sin\left(\frac{n\pi}{l}x\right) dx.$$
Similar to the corresponding theorem for the heat equation we can prove the following.

Theorem 2.1. If $f(x), f'(x), f''(x), g(x), g'(x)$ are continuous and $f'''(x), g''(x)$ are piecewise continuous on $[0,l]$, and
$$f(0)=f(l)=0, \quad f''(0)=f''(l)=0, \quad g(0)=g(l)=0,$$
then the function
$$u(x,t) = \sum_{n=1}^{\infty} \left[ f_n \cos\left(c\sqrt{\lambda_n}\, t\right) + \frac{g_n}{c\sqrt{\lambda_n}} \sin\left(c\sqrt{\lambda_n}\, t\right) \right] \sin\left(\frac{n\pi}{l}x\right) \qquad (2.10)$$
is a solution of (2.1)-(2.3).
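To see formula (2.10) in action, here is a small numerical sketch for a plucked string ($g\equiv 0$; the shape $f$ and all parameters are sample choices, not from the text). It also checks that the string returns to its initial shape after one period $2l/c$:

```python
import numpy as np

l, c, N = 1.0, 1.0, 100                      # assumed length, wave speed, number of modes
f = lambda x: np.minimum(x, l - x)           # triangular "plucked string" initial shape
# g = 0, so the sine-in-time coefficients vanish and only cosine terms remain

x = np.linspace(0.0, l, 2001)
fn = np.array([2.0 / l * np.trapz(f(x) * np.sin(n * np.pi * x / l), x)
               for n in range(1, N + 1)])

def u(t):
    """Truncated series (2.10) with g = 0."""
    n = np.arange(1, N + 1)[:, None]
    w = c * n * np.pi / l                    # c * sqrt(lambda_n)
    return (fn[:, None] * np.cos(w * t) * np.sin(n * np.pi * x / l)).sum(axis=0)

# each mode has period 2l/(cn), so the full solution is 2l/c-periodic in time
print(np.max(np.abs(u(2 * l / c) - u(0.0))))  # should be very small
```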

Let us show that the solution of this problem is unique. Indeed, if $v(x,t)$ is also a solution of the problem (2.1)-(2.3), then the function $w(x,t) = u(x,t) - v(x,t)$ is a solution of the problem
$$w_{tt} = c^2 w_{xx}, \quad x\in(0,l), \ t>0, \qquad (2.11)$$
$$w(x,0) = 0, \quad w_t(x,0) = 0, \quad x\in[0,l], \qquad (2.12)$$
$$w(0,t) = w(l,t) = 0, \quad t\ge 0. \qquad (2.13)$$



Multiplying (2.11) by $w_t(x,t)$ and integrating over the interval $(0,l)$, after integration by parts we get
$$0 = \int_0^l \left( w_{tt} - c^2 w_{xx} \right) w_t(x,t)\, dx = \int_0^l w_{tt}(x,t) w_t(x,t)\, dx + c^2 \int_0^l w_x(x,t) w_{xt}(x,t)\, dx - c^2 \left. w_x(x,t) w_t(x,t) \right|_{x=0}^{x=l}. \qquad (2.14)$$
Thanks to the boundary conditions (2.13),
$$w_t(0,t) = w_t(l,t) = 0, \quad \forall t>0.$$
Therefore $\left. w_x(x,t) w_t(x,t) \right|_{x=0}^{x=l} = 0$, and we obtain from (2.14)
$$0 = \int_0^l w_{tt}(x,t) w_t(x,t)\, dx + c^2 \int_0^l w_x(x,t) w_{xt}(x,t)\, dx = \frac12 \frac{d}{dt} \int_0^l \left[ w_t^2(x,t) + c^2 w_x^2(x,t) \right] dx, \quad \forall t\ge 0.$$
Since $w(x,0) = w_t(x,0) = 0$, the last integral vanishes at $t=0$, hence for all $t\ge 0$. This implies that
$$w_t(x,t) = w_x(x,t) = 0, \quad \forall x\in[0,l], \ t\ge 0.$$
Hence
$$w(x,t) = \text{const}.$$
Since $w(x,t)$ is zero for $x=0$ and $x=l$, it is identically zero for all $x\in[0,l], \ t\ge 0$.
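The quantity appearing in this argument is the energy $E(t)=\frac12\int_0^l\big(w_t^2+c^2w_x^2\big)dx$. A quick numerical check (with sample coefficients of my choosing) confirms that the energy of a truncated series solution of (2.1),(2.3) is constant in time:

```python
import numpy as np

l, c, N = 1.0, 1.0, 3
A = np.array([1.0, 0.5, 0.25])               # assumed coefficients, B_n = 0
x = np.linspace(0.0, l, 4001)
n = np.arange(1, N + 1)[:, None]
w = c * n * np.pi / l                        # c * sqrt(lambda_n)

def energy(t):
    u_t = (-A[:, None] * w * np.sin(w * t) * np.sin(n * np.pi * x / l)).sum(axis=0)
    u_x = (A[:, None] * (n * np.pi / l) * np.cos(w * t) * np.cos(n * np.pi * x / l)).sum(axis=0)
    return 0.5 * np.trapz(u_t**2 + c**2 * u_x**2, x)

print([round(energy(t), 6) for t in (0.0, 0.3, 1.0, 2.7)])  # all values equal
```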
2.1. Nonhomogeneous Equation. Consider the problem
$$u_{tt} = c^2 u_{xx} + h(x,t), \quad x\in(0,l), \ t>0, \qquad (2.15)$$
$$u(x,0) = f(x), \quad u_t(x,0) = g(x), \quad x\in[0,l], \qquad (2.16)$$
$$u(0,t) = u(l,t) = 0, \quad t\ge 0. \qquad (2.17)$$
We look for a solution of the problem (2.15)-(2.17) in the form
$$u(x,t) = \sum_{n=1}^{\infty} u_n(t) \sin\left(\frac{n\pi}{l}x\right). \qquad (2.18)$$
We expand $h(x,t)$:
$$h(x,t) = \sum_{n=1}^{\infty} h_n(t) \sin\left(\frac{n\pi}{l}x\right).$$
By using (2.18) we obtain from (2.15)
$$\sum_{n=1}^{\infty} \left[ u_n''(t) + \lambda_n c^2 u_n(t) - h_n(t) \right] \sin\left(\sqrt{\lambda_n}\, x\right) = 0.$$

This equality holds iff
$$u_n''(t) + \lambda_n c^2 u_n(t) = h_n(t), \quad n=1,2,\dots \qquad (2.19)$$
Taking into account the initial conditions (2.16) we obtain
$$u(x,0) = \sum_{n=1}^{\infty} u_n(0) \sin\left(\frac{n\pi}{l}x\right) = f(x) = \sum_{n=1}^{\infty} f_n \sin\left(\frac{n\pi}{l}x\right),$$
$$u_t(x,0) = \sum_{n=1}^{\infty} u_n'(0) \sin\left(\frac{n\pi}{l}x\right) = g(x) = \sum_{n=1}^{\infty} g_n \sin\left(\frac{n\pi}{l}x\right).$$
It follows then that
$$u_n(0) = f_n, \quad u_n'(0) = g_n, \quad n=1,2,\dots \qquad (2.20)$$
The initial value problem (2.19),(2.20) has the solution
$$u_n(t) = f_n \cos\left(c\sqrt{\lambda_n}\, t\right) + \frac{g_n}{c\sqrt{\lambda_n}} \sin\left(c\sqrt{\lambda_n}\, t\right) + \frac{1}{c\sqrt{\lambda_n}} \int_0^t \sin\left( c\sqrt{\lambda_n}\,(t-s) \right) h_n(s)\, ds.$$
Inserting the expression for $u_n(t)$ into (2.18) we get
$$u(x,t) = \sum_{n=1}^{\infty} \left[ f_n \cos\left(c\sqrt{\lambda_n}\, t\right) + \frac{g_n}{c\sqrt{\lambda_n}} \sin\left(c\sqrt{\lambda_n}\, t\right) \right] \sin\left(\frac{n\pi}{l}x\right) + \sum_{n=1}^{\infty} \left[ \frac{1}{c\sqrt{\lambda_n}} \int_0^t \sin\left( c\sqrt{\lambda_n}\,(t-s) \right) h_n(s)\, ds \right] \sin\left(\frac{n\pi}{l}x\right). \qquad (2.21)$$
Remember that $\lambda_n = \frac{n^2\pi^2}{l^2}$.
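As a sanity check on the formula for $u_n(t)$ (with illustrative values for $f_n$, $g_n$, $h_n$, all assumptions of mine), one can verify numerically that it satisfies (2.19),(2.20):

```python
import numpy as np

c, l, n = 1.0, np.pi, 2              # assumed wave speed, length, mode number
w = c * n * np.pi / l                # omega_n = c * sqrt(lambda_n)
f_n, g_n = 1.0, 0.5                  # assumed initial Fourier coefficients
h_n = lambda t: np.exp(-t)           # assumed forcing coefficient

def u_n(t, m=20001):
    """Solution of (2.19),(2.20) via the variation-of-parameters formula above."""
    s = np.linspace(0.0, t, m)
    duhamel = np.trapz(np.sin(w * (t - s)) * h_n(s), s) / w
    return f_n * np.cos(w * t) + g_n / w * np.sin(w * t) + duhamel

# check u_n'' + w^2 u_n = h_n with a second difference quotient
t, dt = 1.0, 1e-3
lhs = (u_n(t + dt) - 2 * u_n(t) + u_n(t - dt)) / dt**2 + w**2 * u_n(t)
print(lhs, h_n(t))                   # the two values should nearly agree
```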

2.2. The Cauchy problem for the wave equation. D'Alembert's formula. Now we consider the initial value problem for the wave equation, i.e. we would like to find the solution of the equation
$$u_{tt} = c^2 u_{xx}, \quad t>0, \ x\in(-\infty,\infty), \qquad (2.22)$$
under the initial conditions
$$u(x,0) = f(x), \quad u_t(x,0) = g(x), \quad x\in(-\infty,\infty), \qquad (2.23)$$
where $f$ and $g$ are given functions and $c>0$ is a given number. To solve the problem we make the following change of variables:
$$\xi = x - ct, \quad \eta = x + ct.$$
By using the chain rule we find
$$u_t = u_\xi \xi_t + u_\eta \eta_t = -cu_\xi + cu_\eta = c(u_\eta - u_\xi),$$
$$u_{tt} = c\left( u_{\eta\xi}\xi_t + u_{\eta\eta}\eta_t - u_{\xi\xi}\xi_t - u_{\xi\eta}\eta_t \right) = c^2 \left( u_{\xi\xi} - 2u_{\xi\eta} + u_{\eta\eta} \right), \qquad (2.24)$$
$$u_x = u_\xi \xi_x + u_\eta \eta_x = u_\xi + u_\eta,$$
$$u_{xx} = u_{\xi\xi}\xi_x + u_{\xi\eta}\eta_x + u_{\eta\xi}\xi_x + u_{\eta\eta}\eta_x = u_{\xi\xi} + 2u_{\xi\eta} + u_{\eta\eta}. \qquad (2.25)$$

By using (2.24) and (2.25) in (2.22) we find
$$u_{\xi\eta} = 0. \qquad (2.26)$$
It is clear that for any twice differentiable functions $u_1, u_2$ the function
$$u(\xi,\eta) = u_1(\xi) + u_2(\eta)$$
is a solution of (2.26). Hence the function
$$u(x,t) = u_1(x-ct) + u_2(x+ct) \qquad (2.27)$$
is a solution of (2.22). By using the initial conditions (2.23) we get
$$u(x,0) = u_1(x) + u_2(x) = f(x), \qquad (2.28)$$
$$u_t(x,0) = -cu_1'(x) + cu_2'(x) = g(x).$$
Integrating the last equality we obtain
$$u_2(x) - u_1(x) = \frac1c \int_{x_0}^{x} g(s)\, ds + C. \qquad (2.29)$$

Solving the system of equations (2.28),(2.29) we obtain
$$u_1(x) = \frac12 f(x) - \frac{1}{2c} \int_{x_0}^{x} g(s)\, ds - \frac{C}{2},$$
$$u_2(x) = \frac12 f(x) + \frac{1}{2c} \int_{x_0}^{x} g(s)\, ds + \frac{C}{2}.$$
Inserting the values of $u_1, u_2$ from the last two equalities into (2.27) we get
$$u(x,t) = \frac12 f(x-ct) - \frac{1}{2c} \int_{x_0}^{x-ct} g(s)\, ds + \frac12 f(x+ct) + \frac{1}{2c} \int_{x_0}^{x+ct} g(s)\, ds,$$
or
$$u(x,t) = \frac{f(x-ct) + f(x+ct)}{2} + \frac{1}{2c} \int_{x-ct}^{x+ct} g(s)\, ds. \qquad (2.30)$$
The last equality is called the D'Alembert formula.
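Formula (2.30) is straightforward to implement directly; in the sketch below the wave speed and the initial data are sample choices of mine, not from the text:

```python
import numpy as np

c = 2.0                                       # assumed wave speed
f = lambda x: np.exp(-x**2)                   # assumed initial displacement
g = lambda x: np.zeros_like(x)                # assumed initial velocity

def u(x, t, m=2001):
    """D'Alembert formula (2.30); the integral of g is done by the trapezoidal rule."""
    x = np.atleast_1d(np.asarray(x, dtype=float))
    integrals = np.array([np.trapz(g(s), s)
                          for s in (np.linspace(xi - c*t, xi + c*t, m) for xi in x)])
    return 0.5 * (f(x - c*t) + f(x + c*t)) + integrals / (2*c)

x = np.linspace(-5.0, 5.0, 11)
print(u(x, 1.0))   # two half-height copies of the bump travelling left and right
```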

3. Problems
(1) Consider the initial boundary value problem for the telegraph equation
$$u_{tt} - c^2 u_{xx} + bu_t = 0, \quad b>0, \ x\in(0,l), \ t>0, \qquad (3.1)$$
$$u(x,0) = f(x), \quad u_t(x,0) = g(x), \quad x\in[0,l], \qquad (3.2)$$
$$u(0,t) = u(l,t) = 0, \quad t\ge 0, \qquad (3.3)$$
and show that if $f, g$ are sufficiently smooth functions, then all solutions of the problem (3.1)-(3.3) tend to zero as $t\to\infty$.

(2) Use the Fourier method to find the solution of the initial boundary value problem for the Schrödinger equation
$$iu_t = u_{xx}, \quad x\in(0,l), \ t>0,$$
$$u(x,0) = f(x), \quad x\in[0,l],$$
$$u(0,t) = u(l,t) = 0, \quad t\ge 0,$$
and show that the following conservation laws hold true:
$$\int_0^l |u(x,t)|^2\, dx = \int_0^l |f(x)|^2\, dx,$$
$$\int_0^l |u_x(x,t)|^2\, dx = \int_0^l |f'(x)|^2\, dx.$$

4. Some famous PDE’s


• Korteweg-de Vries equation
$$u_t + uu_x + u_{xxx} = 0 \qquad (4.1)$$
• Nonlinear Schrödinger equation
$$i\psi_t + \Delta\psi + |\psi|^2\psi = 0 \qquad (4.2)$$


Index

Abel's Theorem, 29, 44
asymptotically stable solution, 82
autonomous system, 81
Bernoulli equation, 23
Bessel inequality, 101
Cauchy problem, 43
characteristic equation of the system, 59
Chebyshev equation, 40
convolution, 54
convolution theorem, 54
D'Alembert formula, 117
differential equation, 1
dissipative system, 82
divergent sequence of functions, 95
eigenfunction, 90
exact equation, 9
existence and uniqueness theorem, 15
exponential of a matrix, 70
first order linear ODE's, 3
Fourier coefficients, 98
fundamental period, 95
fundamental set of solutions, 28, 31, 60
globally asymptotically stable solution, 82
Green's function, 88
Heat equation, 105
Heaviside function, 52
homogeneous equation, 8
homogeneous linear system, 58
initial value problem, 43
integrating factor, 4, 12
linearly dependent functions, 28
Lipschitz condition, 20
method of separation of variables, 105
method of variation of parameters, 31, 73
nonhomogeneous boundary conditions, 111
nonhomogeneous heat equation, 109
nonhomogeneous wave equation, 115
order of the differential equation, 2
ordinary differential equation, 1
ordinary point, 38
Parseval identity, 103
partial differential equation, 1
period, 95
pointwise convergent series, 95
power series, 36
regular singular point, 38
repeated roots, 67
Riccati equation, 24
Riemann-Lebesgue Lemma, 100
separable equation, 6
singular point, 38
stable solution, 81
stationary state, 81
stationary system, 81
Sturm-Liouville problem, 89
trigonometric series, 97
uniformly convergent sequence of functions, 95
wave equation, 112
Weierstrass Theorem, 96
Wronskian, 27
