Math204 Notes Fall2020
V.K. Kalantarov
Koc University, Department of Mathematics, Sariyer, Istanbul, Turkey
Email address: vkalantarov@kum.edu.tr
Contents
Chapter 1. Introduction
Bibliography
Index
CHAPTER 1
Introduction
Everything is in motion.
Everything flows. Everything is
vibrating.
William Hazlitt
The equation
1 A word or phrase that you can make from another word or phrase by putting the letters in a different order. For example, "thing" is an anagram of "night".
The equation
u_t(x, t) − 2u_{xx}(x, t) = 0, x ∈ (0, π), t > 0 (0.2)
is a partial differential equation.
So the equation (3.5) is a third order ordinary differential equation and (0.2) is a second order partial differential equation.
In what follows we will use the abbreviation ODE for ordinary differential equation and PDE for partial differential equation.
CHAPTER 2

First Order Ordinary Differential Equations
y(t)t^4 = t^6 + C, C = 0,
and obtain
t^3 y'(t) + 3t^2 y(t) = t^4.
Since
t^3 y' + 3t^2 y = (t^3 y)'
we have
(t^3 y − (1/5)t^5)' = 0.
Integrating over the interval (1, t) we find
t^3 y(t) − (1/5)t^5 = 4/5.
Hence
y(t) = 4/(5t^3) + (1/5)t^2.
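As a quick numerical sanity check (not part of the original notes), one can verify that the solution just found satisfies the equation and the implied condition y(1) = 1:

```python
# Check that y(t) = 4/(5 t^3) + t^2/5 solves t^3 y' + 3 t^2 y = t^4,
# using a central finite difference for y'.
def y(t):
    return 4.0 / (5.0 * t**3) + t**2 / 5.0

def residual(t, h=1e-6):
    dy = (y(t + h) - y(t - h)) / (2.0 * h)   # numerical derivative
    return t**3 * dy + 3.0 * t**2 * y(t) - t**4

assert abs(y(1.0) - 1.0) < 1e-12             # value at t = 1
assert all(abs(residual(t)) < 1e-5 for t in [1.0, 1.5, 2.0, 3.0])
```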
Example 1.4. Let h(t) be a continuous function defined on [0, ∞) and
lim_{t→∞} h(t) = 0.
tends to zero as t → ∞.
2. Separable Equations.
Definition 2.1. An equation of the form
f(y)y' = g(x) (2.1)
is called a separable equation.
If y = φ(x) is a solution of the equation (2.1) then
f(φ(x))φ'(x) = g(x).
Hence
∫ f(φ(x))φ'(x) dx = ∫ g(x) dx. (2.2)
dy/y^2 = (x − 1)dx,
−1/y = x^2/2 − x + C.
By using the initial condition y(0) = 2 we get C = −1/2. Hence
y(x) = 2/(2x + 1 − x^2).
Example 2.4. Solve the Cauchy problem
(1 + et )yy 0 = et , y(0) = 1.
Hence
y^2 = 2 ln(1 + e^t) + C.
We use the initial condition y(0) = 1, obtain C = 1 − 2 ln 2, and hence
y^2(t) = 1 + 2 ln((1 + e^t)/2).
3. Homogeneous equations.
An equation of the form
dy/dx = f(y/x) (3.1)
is called a homogeneous equation. To solve the homogeneous equation (3.1) we make the change of variables y/x = v and reduce the equation to a separable equation of the form
xv' + v = f(v).
Example 3.1. Solve the equation
2xyy' = 4x^2 + 3y^2.
Dividing both sides of this equation by 2xy we obtain:
dy/dx = 2(x/y) + (3/2)(y/x).
Hence this equation is a homogeneous equation. Therefore we make the change y = xv:
v + xv' = 2(1/v) + (3/2)v,
x dv/dx = (4 + v^2)/(2v),
∫ 2v/(v^2 + 4) dv = ∫ dx/x,
∫ d(v^2 + 4)/(v^2 + 4) = ∫ dx/x,
ln(v^2 + 4) = ln|x| + ln C,
v^2 + 4 = y^2/x^2 + 4 = C|x|,
y^2 = Cx^3 − 4x^2, y = ±sqrt(Cx^3 − 4x^2), x > 0.
Example 3.2. Solve the Cauchy problem
x dy/dx = y + sqrt(x^2 − y^2); y(1) = 0.
3x^2 y + 2xy^2 + (x^3 + 2x^2 y)y' = 0.
Thus the equation is an exact equation. Therefore we can write this equation in the form
Ψ_x(x, y) + Ψ_y(x, y) dy/dx = 0,
or
(d/dx) Ψ(x, y) = 0. (4.2)
Hence Ψ(x, y) = C. That is
x3 y + x2 y 2 = C.
The last equality defines the solution of the equation implicitly.
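A quick numerical check (not in the notes; the coefficients M and N below are read off from the potential Ψ(x, y) = x^3 y + x^2 y^2, since the displayed equation is partly garbled in the source):

```python
# With Psi(x, y) = x^3 y + x^2 y^2 the coefficients of M + N y' = 0 are
# M = Psi_x = 3x^2 y + 2x y^2 and N = Psi_y = x^3 + 2x^2 y; exactness
# requires M_y = N_x, which we verify by central finite differences.
def M(x, y): return 3 * x**2 * y + 2 * x * y**2
def N(x, y): return x**3 + 2 * x**2 * y

def d_dy(f, x, y, h=1e-6): return (f(x, y + h) - f(x, y - h)) / (2 * h)
def d_dx(f, x, y, h=1e-6): return (f(x + h, y) - f(x - h, y)) / (2 * h)

for (x, y) in [(0.5, 1.0), (1.2, -0.7), (2.0, 3.0)]:
    assert abs(d_dy(M, x, y) - d_dx(N, x, y)) < 1e-5
```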
In other words an equation of the form (4.3) is an exact equation if the vector field
F(x, y) = M(x, y)i + N(x, y)j
is a conservative vector field.
Theorem 4.2. Let the functions M, N, Nx , My be continuous functions in a
region D. Then the equation
M (x, y) + N (x, y)y 0 = 0 (4.4)
is an exact equation if and only if
My (x, y) = Nx (x, y) (4.5)
at each point of the region D.
Proof. Assume that (4.4) is an exact equation. Then there exists a function
U such that
Ux = M, Uy = N.
These equalities imply
Uxy = My , Uyx = Nx .
Thus
My = Nx .
Assume that
My (x, y) = Nx (x, y)
and let us find U (x, y) such that Ux = M, Uy = N .
It follows from
Ux (x, y) = M (x, y)
that
U(x, y) = ∫ M(x, y)dx + g(y).
From this equality we get
U_y(x, y) = ∫ M_y(x, y)dx + g'(y).
Since
M_y = e^x cos y − 2y cos x = N_x
this equation is an exact equation. Therefore
U(x, y) = ∫ (e^x cos y − 2y sin x) dy = e^x sin y − y^2 sin x + h(x),
Assume that the equation (5.1) has an integrating factor m(x) depending
only on x. Then
Similarly we can show that (5.1) has an integrating factor m(y) depending only on y if the expression (N_x − M_y)/M does not depend on x.
For the given equation
(M_y − N_x)/N = 3.
Therefore this equation has an integrating factor depending on x. Thus we
have
m'(x) = 3m(x),
m(x) = e^{3x}.
U(x, y) = yx^2 e^{3x} + (y^3/3)e^{3x} + h(x),
h'(x) = 0, h = C,
U(x, y) = yx^2 e^{3x} + (y^3/3)e^{3x}.
m(y) = 1/sin y.
So the equation
(1/sin y + 3x^2) dx − (x cos y/sin^2 y) dy = 0
is an exact equation. Thus we have
U_x(x, y) = 1/sin y + 3x^2,
U(x, y) = x/sin y + x^3 + g(y),
U_y(x, y) = −(cos y/sin^2 y) x + g'(y).
Hence g is a constant, and the solution has the form:
x/sin y + x^3 = C.
Example 5.3. Show that the following equation has an integrating factor
depending on xy and solve it
xy^2 dx + (x^2 y − x)dy = 0.
Looking for an integrating factor of the form m(xy) we get
m'(xy)xy + m(xy) = 0,
m'(s) + (1/s) m(s) = 0,
m(s) = 1/s.
Multiplying the equation by m = 1/(xy) gives
y dx + (x − 1/y) dy = 0,
U(x, y) = xy + g(y),
U_y(x, y) = x + g'(y),
g'(y) = −1/y, g(y) = −ln y.
Hence xy − ln y = C is a general solution.
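A numerical sanity check of the integrating factor (an added illustration, not from the notes): the original coefficients fail the exactness test, while after multiplying by m = 1/(xy) the test passes.

```python
# Original equation x y^2 dx + (x^2 y - x) dy = 0 is not exact
# (M_y - N_x = 1), but after multiplying by m = 1/(xy) it is.
def M0(x, y): return x * y**2             # original M
def N0(x, y): return x**2 * y - x         # original N
def M1(x, y): return M0(x, y) / (x * y)   # after the factor 1/(xy)
def N1(x, y): return N0(x, y) / (x * y)

def d_dy(f, x, y, h=1e-6): return (f(x, y + h) - f(x, y - h)) / (2 * h)
def d_dx(f, x, y, h=1e-6): return (f(x + h, y) - f(x - h, y)) / (2 * h)

x, y = 1.3, 0.8
assert abs(d_dy(M0, x, y) - d_dx(N0, x, y)) > 0.5    # not exact
assert abs(d_dy(M1, x, y) - d_dx(N1, x, y)) < 1e-6   # exact after factor
```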
Theorem 6.2. (Existence and uniqueness) Suppose that the functions f (t, y)
and fy (t, y) are continuous on the rectangle
R0 := {(t, y) : |t| ≤ a, |y − y0 | ≤ b}
Then the initial value problem
y 0 = f (t, y), y(0) = y0 (6.4)
has a unique solution defined on the interval
|t| ≤ h = min{a, b/M},
where
M = max{|f(t, y)| : (t, y) ∈ R_0}.
Proof. Note that since
|fy (t, y)| ≤ L, ∀(t, y) ∈ R0 ,
the function f (·, ·) satisfies the Lipschitz condition in R0 :
|f (t, y1 ) − f (t, y2 )| ≤ L|y1 − y2 |, ∀(t, y1 ), (t, y2 ) ∈ R0 . (6.5)
The problem (6.4) is equivalent to the integral equation:
y(t) = y_0 + ∫_0^t f(s, y(s))ds. (6.6)
Let us show that there exists a continuous function y(t) that satisfies (6.6) (it follows from (6.6) that the function y(t) is differentiable, and satisfies (6.4)).
We are going to show existence of such a function employing the method of
successive approximations (Picard’s iteration method), and start by choosing an
initial function
y0 (t) = y0 , ∀t ∈ [−h, h]
The subsequent iterations we choose as follows:
y_1(t) = y_0 + ∫_0^t f(s, y_0)ds,
y_2(t) = y_0 + ∫_0^t f(s, y_1(s))ds,
....
y_n(t) = y_0 + ∫_0^t f(s, y_{n−1}(s))ds. (6.7)
Let us show that the graph of y_n(t) lies in R_0 when |t| ≤ h (remember that f is defined only on R_0 and f(t, y_n(t)) has a meaning only when |y_n(t) − y_0| ≤ b, ∀t ∈ [−h, h]).
For n = 0 the statement is clear
(t, y0 ) ∈ R0 , ∀t ∈ [−h, h]
It is easy to see that the sequence {yn (t)} converges if and only if the series
y_0 + Σ_{n=1}^∞ (y_n(t) − y_{n−1}(t)) (6.8)
converges, and lim_{n→∞} y_n(t) is equal to the sum of the series (6.8).
To show that (6.8) converges to some continuous function (of course, defined on
[−h, h]) we need the following.
Proposition A Suppose that the functions fn (t), n = 1, 2, ... are continuous on
some interval [a, b], and
|f_n(t)| ≤ C_n, n = 1, 2, ..., ∀t ∈ [a, b],
where the number series Σ_{n=1}^∞ C_n is convergent. Then the functional series Σ_{n=1}^∞ f_n(t) is absolutely convergent on [a, b], and the function
f(t) = Σ_{n=1}^∞ f_n(t)
is a continuous function. Moreover
∫_a^b f(t)dt = Σ_{n=1}^∞ ∫_a^b f_n(t)dt.
It is not difficult to see that
|y_1(t) − y_0| = |∫_0^t f(s, y_0)ds| ≤ M t,
|y_2(t) − y_1(t)| = |∫_0^t [f(s, y_1(s)) − f(s, y_0)]ds|
≤ ∫_0^t |f(s, y_1(s)) − f(s, y_0)| ds
≤ ∫_0^t L|y_1(s) − y_0|ds ≤ M L t^2/2,
|y_3(t) − y_2(t)| ≤ ∫_0^t L|y_2(s) − y_1(s)|ds
≤ M L^2 ∫_0^t (s^2/2) ds = M L^2 t^3/3!,
...............................................
|y_n(t) − y_{n−1}(t)| ≤ L ∫_0^t |y_{n−1}(s) − y_{n−2}(s)|ds
≤ M L^{n−1} ∫_0^t (s^{n−1}/(n − 1)!) ds = M L^{n−1} t^n/n! ≤ (M/L)(Lh)^n/n!.
We know that the series Σ_{n=1}^∞ (Lh)^n/n! converges.
Therefore, due to the Proposition A, the series (6.8) converges to some continuous function y(t), which is the limit of the sequence {y_n(t)} on [−h, h].
Hence passing to the limit in (6.7) as n → ∞ we obtain
y(t) = y_0 + ∫_0^t f(s, y(s))ds.
Suppose that y(t) is a solution of the problem (6.4) and z(t) is a solution of the problem
z'(t) = f(t, z(t)), z(0) = z_0. (6.9)
Assume that y(t) and z(t) exist on the interval [0, T]. Then subtracting from (6.6) the equation
z(t) = z_0 + ∫_0^t f(s, z(s))ds,
which is equivalent to (6.9), we get:
|y(t) − z(t)| ≤ |y_0 − z_0| + ∫_0^t |f(s, y(s)) − f(s, z(s))|ds
≤ |y_0 − z_0| + L ∫_0^t |y(s) − z(s)|ds. (6.10)
By using the Gronwall's lemma we obtain from (6.10) the following estimate:
|y(t) − z(t)| ≤ |y_0 − z_0|e^{LT}, ∀t ∈ [0, T]. (6.11)
It follows from the inequality (6.11) that if y0 is sufficiently close to z0 , then y(t)
is sufficiently close to z(t) on [0, T ], i.e. if |z0 − y0 | is small enough, then
max |y(t) − z(t)|
t∈[0,T ]
is small enough, that is, the solution of the initial value problem (6.4) continuously depends on the initial data.
In particular the estimate (6.11) implies that the problem (6.4) has a unique solution.
Lemma (Gronwall)* If u(t), v(t) are non-negative continuous functions on
[0, T ], C is a non-negative number, and
u(t) ≤ C + ∫_0^t u(s)v(s)ds, ∀t ∈ [0, T], (6.12)
then
u(t) ≤ C e^{∫_0^t v(s)ds}, ∀t ∈ [0, T]. (6.13)
Proof. Let us denote
Ψ(t) := C + ∫_0^t u(s)v(s)ds.
Then multiplying both sides of (6.12) by v(t) we get
u(t)v(t) ≤ (C + ∫_0^t u(s)v(s)ds) v(t),
or
Ψ'(t) ≤ v(t)Ψ(t), Ψ(0) = C.
Multiplying this inequality by e^{−∫_0^t v(s)ds} we obtain
(e^{−∫_0^t v(s)ds} Ψ(t))' ≤ 0.
It follows from the last inequality that the function
e^{−∫_0^t v(s)ds} Ψ(t)
is non-increasing on [0, T]. Therefore
e^{−∫_0^t v(s)ds} Ψ(t) ≤ Ψ(0) = C, ∀t ∈ [0, T].
Hence
Ψ(t) ≤ C e^{∫_0^t v(s)ds}.
Since
Ψ(t) ≥ u(t), ∀t ∈ [0, T ]
we see that (6.13) holds true.
You can find the proof of Gronwall’s lemma and some of its generalizations
in a famous book of E. F. Beckenbach and R. Bellman.
Theorem 6.3. Suppose that the function f (t, y) is continuous on the region
QT = [0, T ] × R and satisfies the Lipschitz condition with respect to the second
variable, i.e. there exists a positive number L such that
|f (t, y) − f (t, z)| ≤ L|y − z|, ∀t ∈ [0, T ], ∀y, z ∈ R. (6.14)
Then the problem
y 0 = f (t, y), y(0) = y0 (6.15)
has a unique solution on the interval [0, T ].
Proof. The problem (6.15) is equivalent to the nonlinear integral equation
y(t) = y_0 + ∫_0^t f(s, y(s))ds. (6.16)
We write this integral equation in the form of an operator equation
y = T (y), (6.17)
where
T(y)(t) := y_0 + ∫_0^t f(s, y(s))ds.
So our problem is reduced to the problem of existence and uniqueness of solution
to the operator equation (6.17) or to the problem of existence and uniqueness of
a fixed point of the operator T .
Let us denote by CT the Banach space of all continuous functions defined on
[0, T ] that is equipped with the norm
‖y‖ = max_{t∈[0,T]} e^{−2Lt}|y(t)|.
It is clear that the operator T maps the Banach space CT into itself. Due to the
Lipschitz condition (6.14) we have
|T(y)(t) − T(z)(t)| = |∫_0^t (f(s, y(s)) − f(s, z(s))) ds|
≤ ∫_0^t |f(s, y(s)) − f(s, z(s))| ds ≤ L ∫_0^t |y(s) − z(s)|ds
= L ∫_0^t e^{−2Ls}|y(s) − z(s)| e^{2Ls} ds
≤ L max_{s∈[0,T]} (e^{−2Ls}|y(s) − z(s)|) ∫_0^t e^{2Ls} ds ≤ L (e^{2Lt}/(2L)) ‖y − z‖.
Therefore we have
e^{−2Lt}|T(y)(t) − T(z)(t)| ≤ (1/2)‖y − z‖.
Hence
‖T(y) − T(z)‖ = max_{t∈[0,T]} e^{−2Lt}|T(y)(t) − T(z)(t)| ≤ (1/2)‖y − z‖,
i.e. the operator T is a contraction in the Banach space C_T. Thus according to the Banach fixed point theorem it has a unique fixed point in C_T, and therefore the problem (6.15) has a unique solution.
Proposition 6.4. If f (y) is an increasing and continuous function, then
y 0 + f (y) = 0, y(t0 ) = y0 (6.18)
has a unique solution.
Suppose that the problem (6.18) has two different solutions y(t) and ỹ(t).
Then the function w(t) = y(t) − ỹ(t) is a solution of the problem
For example, the Cauchy problem
y' = y^2, y(0) = 1
has a solution
y(t) = 1/(1 − t),
which tends to +∞ as t → 1^−.
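This blow-up can be seen numerically (an added sketch; Euler's method and the tolerances are my choices, not the notes'):

```python
# Euler integration of y' = y^2, y(0) = 1; the exact solution 1/(1 - t)
# blows up at t = 1, and the numerical solution grows accordingly.
def euler(f, y0, t_end, h):
    t, y = 0.0, y0
    while t < t_end - 1e-12:
        y += h * f(y)
        t += h
    return y

y_num = euler(lambda y: y**2, 1.0, 0.9, 1e-4)
# exact value y(0.9) = 1/(1 - 0.9) = 10
assert abs(y_num - 10.0) < 0.1
```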
6.1. Picard Iterations. The sequence of functions {y_n(t)} defined by the recurrence relation
y_n(t) = y_0 + ∫_0^t f(s, y_{n−1}(s))ds (6.22)
is called the sequence of Picard iterations.
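The recurrence (6.22) can be sketched in code (an added illustration; the grid, quadrature rule, and test problem y' = y, y(0) = 1 are my choices):

```python
# Picard iterations for y' = f(t, y) on a uniform grid, with the integral
# in (6.22) evaluated by the cumulative trapezoidal rule; for y' = y,
# y(0) = 1 the iterates approach e^t.
import math

def picard(f, y0, t_end, n_steps, n_iter):
    h = t_end / n_steps
    ts = [k * h for k in range(n_steps + 1)]
    y = [y0] * (n_steps + 1)                  # y_0(t) = y0
    for _ in range(n_iter):
        g = [f(t, yt) for t, yt in zip(ts, y)]
        new = [y0]
        for k in range(n_steps):              # cumulative trapezoid
            new.append(new[-1] + 0.5 * h * (g[k] + g[k + 1]))
        y = new
    return y

y = picard(lambda t, y: y, 1.0, 1.0, 1000, 30)
assert abs(y[-1] - math.e) < 1e-4             # y(1) ≈ e
```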
Proposition 6.5. If f (t, y) satisfies the conditions of the Theorem 6.3 then
the sequence of Picard iterations defined by (6.22) converges uniformly on the
interval [0, T] to the solution of the problem (6.15).
Proof. It is clear that
|y_1(t) − y_0(t)| ≤ ∫_0^t |f(s, y_0)|ds ≤ M t.
By using the Lipschitz condition and the previous estimate we get
|y_2(t) − y_1(t)| ≤ ∫_0^t |f(s, y_1(s)) − f(s, y_0)|ds
≤ ∫_0^t L|y_1(s) − y_0|ds ≤ M L ∫_0^t s ds = M L t^2/2.
Similarly we get the estimate
|y_3(t) − y_2(t)| ≤ ∫_0^t |f(s, y_2(s)) − f(s, y_1(s))|ds
≤ ∫_0^t L|y_2(s) − y_1(s)|ds ≤ (M L^2/2!) ∫_0^t s^2 ds = M L^2 t^3/3!,
and by induction we get
|y_n(t) − y_{n−1}(t)| ≤ ∫_0^t |f(s, y_{n−1}(s)) − f(s, y_{n−2}(s))|ds
≤ ∫_0^t L|y_{n−1}(s) − y_{n−2}(s)|ds ≤ (M L^{n−1}/(n − 1)!) ∫_0^t s^{n−1} ds = M L^{n−1} t^n/n! (6.23)
for each n = 1, 2, .... Due to this estimate and the Weierstrass theorem the
functional series
y0 + (y1 (t) − y0 ) + (y2 (t) − y1 (t)) + · · · + (yn (t) − yn−1 (t)) + · · · (6.24)
⇒ z' − (2ay_1 + b)z − az^2 = 0.
ii) If y, y_1, y_2, y_3 are solutions of the Riccati equation, then
((y − y_2)/(y − y_1)) : ((y_3 − y_2)/(y_3 − y_1)) = const.
• The following equation
y'(t) = r y(t)(1 − y(t))
is called the logistic equation.
An equation of the form
y = ty'(t) + f(y'(t))
is called the Clairaut equation.
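For the logistic equation mentioned above, the explicit solution can be checked against a numerical run (an added sketch; Euler's method and the parameter values are my choices):

```python
# The logistic equation y' = r y (1 - y) has the explicit solution
# y(t) = y0 e^{rt} / (1 + y0 (e^{rt} - 1)); compare it with an Euler run.
import math

def logistic_exact(y0, r, t):
    e = math.exp(r * t)
    return y0 * e / (1.0 + y0 * (e - 1.0))

def euler(y0, r, t_end, h=1e-4):
    y = y0
    for _ in range(round(t_end / h)):
        y += h * r * y * (1.0 - y)
    return y

assert abs(euler(0.1, 2.0, 3.0) - logistic_exact(0.1, 2.0, 3.0)) < 1e-3
assert abs(logistic_exact(0.1, 2.0, 50.0) - 1.0) < 1e-9  # y -> 1 as t grows
```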
8. Problems
(1) Show that the Cauchy problem
y'(t) + arctan y(t) = 0, t > 0,
y(0) = y_0
has a unique solution.
(2) Solve the equation
y' = (x − y)^2 + 1.
(3) Solve the equation
2(xy + y)y' + x(y^4 + 1) = 0.
(4) Find all solutions of the equation
y' + sin(t + y) = sin(t − y).
(5) Solve the equation
ty' = y + t cos(y/t).
CHAPTER 3

Second Order Ordinary Differential Equations
The function
W[y_1, y_2](t) = y_1(t)y_2'(t) − y_1'(t)y_2(t)
Theorem 1.1. If p and q are continuous on (a, b), and the Wronskian of
solutions y1 , y2 is not zero at some point t0 ∈ (a, b), then the family of solutions
y(t) = C1 y1 (t) + C2 y2 (t), t ∈ (a, b)
includes every solution of (1.1).
The solution set y1 , y2 is said to form a fundamental set of solutions if
W [y1 , y2 ](t0 ) 6= 0 at some t0 ∈ (a, b).
Theorem 2.2 (Abel's theorem). If y_1(t) and y_2(t) are solutions of the differential equation
y'' + p(t)y'(t) + q(t)y(t) = 0,
where p, q are continuous on (a, b), then
W[y_1, y_2](t) = C exp(−∫ p(t)dt), ∀t ∈ (a, b).
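Abel's formula is easy to test on a concrete pair of solutions (an added check, not from the notes; the example y'' + y = 0 with p = 0 is my choice, so the Wronskian must be constant):

```python
# Abel's formula for y'' + p(t)y' + q(t)y = 0: with y1 = cos t, y2 = sin t
# (solutions of y'' + y = 0, where p = 0) the Wronskian is constant.
import math

def W(t):
    y1, dy1 = math.cos(t), -math.sin(t)
    y2, dy2 = math.sin(t), math.cos(t)
    return y1 * dy2 - dy1 * y2

assert all(abs(W(t) - 1.0) < 1e-12 for t in [0.0, 0.7, 2.0, 5.0])
```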
y_p(t) = c_1(t)y_1(t) + c_2(t)y_2(t).
That is
y_p'(t) = c_1(t)y_1'(t) + c_2(t)y_2'(t) + c_1'(t)y_1(t) + c_2'(t)y_2(t).
Assume that
c_1'(t)y_1(t) + c_2'(t)y_2(t) = 0. (3.6)
Thanks to the condition (3.6),
y_p'(t) = c_1(t)y_1'(t) + c_2(t)y_2'(t). (3.7)
Thus
y_p''(t) = c_1(t)y_1''(t) + c_2(t)y_2''(t) + c_1'(t)y_1'(t) + c_2'(t)y_2'(t). (3.8)
Substituting yp (t), yp0 (t) and yp00 (t) as it is given in (3.4), (3.7) and (3.8) into the
equation (3.1) we obtain
c_1(t)y_1''(t) + c_2(t)y_2''(t) + c_1'(t)y_1'(t) + c_2'(t)y_2'(t) + p(t)[c_1(t)y_1'(t) + c_2(t)y_2'(t)] + q(t)[c_1(t)y_1(t) + c_2(t)y_2(t)] = g(t),
that is,
c_1(t)[y_1''(t) + p(t)y_1'(t) + q(t)y_1(t)] + c_2(t)[y_2''(t) + p(t)y_2'(t) + q(t)y_2(t)] + c_1'(t)y_1'(t) + c_2'(t)y_2'(t) = g(t).
Since y1 (t) and y2 (t) are solutions of the homogeneous equation (3.2) we have
has a double root r_0 = −2. Thus the equation (4.3) has two linearly independent
solutions
y1 (t) = e−2t , y2 (t) = te−2t .
The general solution of the homogeneous equation is:
Example 4.3.
y 00 + 2y = t2 + 1
Here we assume that
y(t) = At^2 + Bt + C.
Then
2A + 2At^2 + 2Bt + 2C = t^2 + 1,
⇒ 2A = 1, 2B = 0 ⇒ B = 0,
A = 1/2, 2A + 2C = 1 ⇒ C = 0.
Thus
y(t) = (1/2)t^2
is a solution of the equation.
5. Series solutions
"One of Newton's fundamental analytic achievements was the expansion of all functions in power series (the meaning of a second, long anagram of Newton's to the effect that to solve any equation one should substitute the series into the equation and equate coefficients of like powers)."
V.I. Arnol'd, "Ordinary Differential Equations", Springer, 1991.
Theorem 5.1. If the power series of the form (5.1) is convergent at some
point x0 6= 0 then this series converges absolutely for |x| < r0 = |x0 | .
Proof. Since the number series Σ_{n=0}^∞ a_n x_0^n is convergent, the sequence {a_n x_0^n} tends to zero as n → ∞. Thus there exists a number M_0 > 0 such that
|a_n x_0^n| ≤ M_0, n = 0, 1, 2, ...
Therefore we have
|a_n x^n| = |a_n x_0^n| |x/x_0|^n ≤ M_0 |x/x_0|^n.
For |x| < |x_0| we have
Σ_{n=0}^∞ |x/x_0|^n = 1/(1 − |x/x_0|).
Therefore thanks to the comparison theorem the series Σ_{n=0}^∞ |a_n x^n| is convergent.
Theorem 5.2. If a power series (5.1) converges for some x_0 ≠ 0, then there exists a number R > 0 such that (5.1) converges absolutely on the interval (−R, R) and diverges for |x| > R. If (5.1) converges just for x = 0, then R = 0.
Theorem 5.3. If
lim_{n→∞} |a_{n+1}/a_n| = L
or
lim_{n→∞} |a_n|^{1/n} = L,
then the radius of convergence of the power series Σ_{n=0}^∞ a_n x^n is the number r = 1/L, with r = ∞ if L = 0 and r = 0 if L = ∞.
Theorem 5.4. If
Σ_{n=0}^∞ a_n x^n = 0, ∀x,
then a_n = 0, n = 0, 1, 2, ...
If
f(x) = Σ_{n=0}^∞ a_n x^n, g(x) = Σ_{n=0}^∞ b_n x^n,
then
f(x) + g(x) = Σ_{n=0}^∞ (a_n + b_n)x^n, f(x)g(x) = Σ_{n=0}^∞ c_n x^n, (5.2)
where c_n = Σ_{k=0}^n a_k b_{n−k}. The interval of convergence of both series in (5.2) is the common interval of convergence of the power series for f(x) and g(x).
Theorem 5.5. If
f(x) = Σ_{n=0}^∞ a_n x^n (5.3)
is convergent on (−r, r), then the series Σ_{n=1}^∞ n a_n x^{n−1} obtained by termwise differentiation of the series has the radius of convergence r and
f'(x) = Σ_{n=1}^∞ n a_n x^{n−1}.
The series obtained by termwise integration gives the power series for ∫_0^x f(s)ds:
∫_0^x f(s)ds = Σ_{n=0}^∞ (a_n/(n + 1)) x^{n+1}, |x| < r.
Since Σ_{n=0}^∞ a_n x^{n+2} = Σ_{n=2}^∞ a_{n−2} x^n we have
Σ_{n=2}^∞ [(n + 2)(n + 1)a_{n+2} + (n + 2)a_n + a_{n−2}]x^n + (6a_3 + 3a_1)x + 2a_2 + 2a_0 = 0.
From the last equality we deduce that
2a_2 + 2a_0 = 0 ⇒ a_2 = −a_0,
3a_1 + 6a_3 = 0 ⇒ a_3 = −(1/2)a_1,
(n + 2)(n + 1)a_{n+2} + (n + 2)a_n + a_{n−2} = 0,
a_{n+2} = −(1/(n + 1))a_n − (1/((n + 1)(n + 2)))a_{n−2},
a_4 = −(1/3)a_2 − (1/(3·4))a_0 = (1/3)a_0 − (1/12)a_0 = (1/4)a_0,
a_5 = −(1/4)a_3 − (1/(4·5))a_1 = (1/8)a_1 − (1/20)a_1 = (3/40)a_1, ...
y(x) = a_0[1 − x^2 + (1/4)x^4 + ...] + a_1[x − (1/2)x^3 + (3/40)x^5 + ...]
Example 5.10. Find the power series solution of the Airy equation
y 00 − xy = 0
y = Σ_{n=0}^∞ a_n x^n,
y'' = Σ_{n=2}^∞ n(n − 1)a_n x^{n−2} = Σ_{n=0}^∞ (n + 2)(n + 1)a_{n+2} x^n,
y' = Σ_{n=1}^∞ n a_n x^{n−1} = Σ_{n=0}^∞ (n + 1)a_{n+1} x^n,
Σ_{n=0}^∞ (n + 2)(n + 1)a_{n+2} x^n − Σ_{n=0}^∞ a_n x^{n+1} = 0,
Σ_{n=0}^∞ (n + 2)(n + 1)a_{n+2} x^n − Σ_{n=1}^∞ a_{n−1} x^n = 0,
⇒ 2a_2 = 0 ⇒ a_2 = 0,
a_{n+2} = (1/((n + 2)(n + 1))) a_{n−1},
a_3 = (1/(2·3)) a_0, n = 1,
a_4 = (1/(3·4)) a_1, n = 2,
a_5 = 0, n = 3,
a_6 = (1/(5·6)) a_3 = (1/(2·3·5·6)) a_0, n = 4,
a_7 = (1/(6·7)) a_4 = (1/(3·4·6·7)) a_1, n = 5,
a_8 = 0, n = 6,
a_9 = (1/(8·9)) a_6 = (1/(2·3·5·6·8·9)) a_0, n = 7,
a_{10} = (1/(9·10)) a_7 = (1/(3·4·6·7·9·10)) a_1, n = 8,
a_{11} = 0.
In general
a_{3n−1} = 0,
a_{3n} = a_0/(2·3·5·6···(3n − 1)(3n)),
a_{3n+1} = a_1/(3·4·6·7···(3n)(3n + 1)), n ≥ 1.
Thus
y(x) = a_0[1 + x^3/(2·3) + x^6/(2·3·5·6) + ··· + x^{3n}/(2·3·5·6···(3n − 1)3n) + ···]
+ a_1[x + x^4/(3·4) + x^7/(3·4·6·7) + ··· + x^{3n+1}/(3·4·6·7···(3n)(3n + 1)) + ···].
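The recurrence for the Airy coefficients can be tabulated exactly with rational arithmetic (an added check of the products computed above):

```python
# Coefficients of the series solution of y'' = x y from the recurrence
# a_{n+2} = a_{n-1} / ((n+2)(n+1)), with a_2 = 0 and a_0, a_1 free.
from fractions import Fraction

def airy_coeffs(a0, a1, n_max):
    a = [Fraction(a0), Fraction(a1), Fraction(0)]
    for n in range(1, n_max - 1):
        a.append(a[n - 1] / ((n + 2) * (n + 1)))
    return a

a = airy_coeffs(1, 0, 10)          # branch multiplying a0
assert a[3] == Fraction(1, 6)      # 1/(2*3)
assert a[6] == Fraction(1, 180)    # 1/(2*3*5*6)
assert a[9] == Fraction(1, 12960)  # 1/(2*3*5*6*8*9)
assert a[4] == 0 and a[5] == 0     # this branch skips those indices
```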
In this section we will discuss the problem of finding a general solution and
solution of the initial value problem for n’th order linear ODE of the form
d^n y/dt^n + p_1(t) d^{n−1}y/dt^{n−1} + ... + p_{n−1}(t) dy/dt + p_n(t)y = g(t) (0.1)
A problem of finding a solution to (0.1) which satisfies the initial conditions:
Theorem 0.1. Let p_1(t), ..., p_n(t) and g(t) be continuous on (a, b); then the problem (1), (2) has a unique solution on (a, b).
The equation
d^n y/dt^n + p_1 d^{n−1}y/dt^{n−1} + ... + p_{n−1} dy/dt + p_n y = 0 (0.3)
is called the homogeneous equation corresponding to (0.1).
If y1 , ..., yn are solutions of (0.3), then
y(t) = Σ_{i=1}^n c_i y_i(t)
is a solution of (0.3). If
y(t) = Σ_{i=1}^n c_i y_i(t),
43
then
c_1 y_1(t_0) + ... + c_n y_n(t_0) = y(t_0),
c_1 y_1'(t_0) + ... + c_n y_n'(t_0) = y'(t_0),
..............................
c_1 y_1^{(n−1)}(t_0) + ... + c_n y_n^{(n−1)}(t_0) = y^{(n−1)}(t_0).
The last system has a unique solution if and only if W[y_1, ..., y_n](t_0) ≠ 0. For n = 3 we have
(d/dt) W[y_1, y_2, y_3] = det[ y_1 y_2 y_3 ; y_1' y_2' y_3' ; y_1''' y_2''' y_3''' ]
= det[ y_1 y_2 y_3 ; y_1' y_2' y_3' ; (−p_1 y_1'' − p_2 y_1' − p_3 y_1) (−p_1 y_2'' − p_2 y_2' − p_3 y_2) (−p_1 y_3'' − p_2 y_3' − p_3 y_3) ]
= −p_1 W[y_1, y_2, y_3].
e^{λt} cos(mt), e^{λt} sin(mt), t e^{λt} cos(mt), t e^{λt} sin(mt), ..., t^{m−1} e^{λt} cos(mt), t^{m−1} e^{λt} sin(mt)
corresponding to these roots.
Example 1.1. Find the general solution of
y (6) + 3y (5) + 3y (4) + y 000 = 0.
The corresponding characteristic equation is
r3 (r + 1)3 = 0.
Therefore
y = c1 + c2 t + c3 t2 + c4 e−t + c5 te−t + c6 t2 e−t .
Theorem 0.3. Suppose that a function f is continuous on [0, ∞), its derivative f' is piecewise continuous on [0, ∞) and f is of exponential order at infinity, i.e. there exist positive numbers a > 0, K > 0, T_0 > 0 such that
|f(t)| ≤ K e^{at}, ∀t ≥ T_0.
Then
L{f'(t)}(s) = s L{f(t)}(s) − f(0), s > a.
Proof. Assume that t1 , ..., tn are points of discontinuity of f 0 on [0, A]. Then
∫_0^A e^{−st} f'(t)dt = ∫_0^{t_1} e^{−st} f'(t)dt + ∫_{t_1}^{t_2} e^{−st} f'(t)dt + ···
+ ∫_{t_n}^A e^{−st} f'(t)dt = e^{−st}f(t)|_0^{t_1} + e^{−st}f(t)|_{t_1}^{t_2} + ... + e^{−st}f(t)|_{t_n}^A
+ s ∫_0^{t_1} e^{−st}f(t)dt + s ∫_{t_1}^{t_2} e^{−st}f(t)dt + ... + s ∫_{t_n}^A e^{−st}f(t)dt (0.3)
= −f(0) + e^{−sA}f(A) + s ∫_0^A e^{−st}f(t)dt.
0
Since
|e^{−sA}f(A)| ≤ K e^{−(s−a)A} → 0, as A → ∞,
∫_0^∞ t^n e^{−st}dt = [−(1/s)e^{−st}t^n]_0^∞ + (n/s) ∫_0^∞ e^{−st}t^{n−1}dt
= [−(n/s^2)e^{−st}t^{n−1}]_0^∞ + (n(n − 1)/s^2) ∫_0^∞ e^{−st}t^{n−2}dt
= (n(n − 1)(n − 2)/s^3) ∫_0^∞ e^{−st}t^{n−3}dt
= ··· = (n!/s^n) ∫_0^∞ e^{−st}dt = n!/s^{n+1}.
I := ∫_0^∞ sin(at)e^{−st}dt = −(1/s) ∫_0^∞ (e^{−st})' sin(at)dt
= [−(1/s)e^{−st}sin(at)]_0^∞ + (a/s) ∫_0^∞ cos(at)e^{−st}dt
= −(a/s^2) ∫_0^∞ (e^{−st})' cos(at)dt = [−(a/s^2)e^{−st}cos(at)]_0^∞ − (a^2/s^2) ∫_0^∞ sin(at)e^{−st}dt
= a/s^2 − (a^2/s^2) ∫_0^∞ sin(at)e^{−st}dt.
So we have
I = a/s^2 − (a^2/s^2) I.
Therefore
I := ∫_0^∞ sin(at)e^{−st}dt = a/(s^2 + a^2). (0.9)
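Formula (0.9) is easy to confirm by direct numerical quadrature (an added check; the truncation point T and Simpson's rule are my choices):

```python
# Numerical check of L{sin(at)}(s) = a/(s^2 + a^2) by Simpson quadrature
# on [0, T], with T large enough for e^{-sT} to be negligible.
import math

def laplace_sin(a, s, T=40.0, n=100000):       # n must be even
    h = T / n
    def f(t): return math.sin(a * t) * math.exp(-s * t)
    total = f(0.0) + f(T)
    for k in range(1, n):
        total += (4 if k % 2 else 2) * f(k * h)
    return total * h / 3.0

a, s = 3.0, 2.0
assert abs(laplace_sin(a, s) - a / (s**2 + a**2)) < 1e-8   # 3/13
```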
z(t) = 2L^{−1}{1/(s − 5)} + 3L^{−1}{s/(s^2 + 25)} + L^{−1}{1/(s^2 + 4s + 5)}
= 2e^{5t} + 3cos(5t) + L^{−1}{1/((s + 2)^2 + 1)} = 2e^{5t} + 3cos(5t) + e^{−2t}sin t.
L^{−1}{7/(s − 4)^5} = (7/4!)L^{−1}{4!/(s − 4)^5} = (7/4!)e^{4t}t^4.
Example 0.11. Solve the Cauchy problem:
y 00 − y 0 − 6y = 0, y(0) = 1, y 0 (0) = −1.
Taking the Laplace transform we obtain Y(s) = (s − 2)/(s^2 − s − 6), and partial fractions give
y(t) = (1/5)e^{3t} + (4/5)e^{−2t}.
Similarly, for a transform of the form Y(s) = (s + 2)/((s + 1)^2 + 4) one finds
y(t) = L^{−1}{(s + 1)/((s + 1)^2 + 4) + (1/2)·2/((s + 1)^2 + 4)} = e^{−t}cos(2t) + (1/2)e^{−t}sin(2t).
Example 0.14. Solve the Cauchy problem:
y 00 − 4y = 4t − 8e−2t , y(0) = 0, y 0 (0) = 5.
s^2 Y(s) − s y(0) − y'(0) − 4Y(s) = 4/s^2 − 8/(s + 2),
Y(s) = 4/(s^2(s^2 − 4)) − 8/((s + 2)^2(s − 2)) + 5/(s^2 − 4),
4/(s^2(s^2 − 4)) = 1/(s^2 − 4) − 1/s^2 = (1/4)(1/(s − 2)) − (1/4)(1/(s + 2)) − 1/s^2,
5/(s^2 − 4) = (5/4)(1/(s − 2)) − (5/4)(1/(s + 2)),
1/((s + 2)^2(s − 2)) = A/(s − 2) + B/(s + 2) + C/(s + 2)^2
= (As^2 + 4As + 4A + Bs^2 − 4B + Cs − 2C)/((s + 2)^2(s − 2)),
A + B = 0, 4A + C = 0 ⇒ C = −4A, 4A − 4B − 2C = 1 ⇒ 12A − 4B = 1,
A = −B = 1/16, C = −4A = −1/4,
−8/((s + 2)^2(s − 2)) = −(1/2)(1/(s − 2)) + (1/2)(1/(s + 2)) + 2/(s + 2)^2.
Thus
Y(s) = (3/2)(1/(s − 2)) − (3/2)(1/(s + 2)) − 1/s^2 − (1/2)(1/(s − 2)) + (1/2)(1/(s + 2)) + 2/(s + 2)^2
= 1/(s − 2) − 1/(s + 2) − 1/s^2 + 2/(s + 2)^2.
Therefore
y(t) = e^{2t} − e^{−2t} − t + 2te^{−2t}.
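The inverse transform of this example gives y(t) = e^{2t} − e^{−2t} − t + 2te^{−2t} (reconstructed here from the partial fractions; treat it as a claim to verify). A finite-difference check of the ODE and the initial data:

```python
# Verify that y(t) = e^{2t} - e^{-2t} - t + 2 t e^{-2t} solves
# y'' - 4y = 4t - 8 e^{-2t} with y(0) = 0, y'(0) = 5.
import math

def y(t):
    return math.exp(2*t) - math.exp(-2*t) - t + 2*t*math.exp(-2*t)

def ypp(t, h=1e-5):                       # second central difference
    return (y(t + h) - 2*y(t) + y(t - h)) / h**2

assert abs(y(0.0)) < 1e-12
h = 1e-6
assert abs((y(h) - y(-h)) / (2*h) - 5.0) < 1e-3      # y'(0) = 5
for t in [0.3, 1.0, 2.0]:
    assert abs(ypp(t) - 4*y(t) - (4*t - 8*math.exp(-2*t))) < 1e-3
```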
1. Step Functions.
∫_0^∞ e^{−st}u_c(t)f(t − c)dt = ∫_c^∞ e^{−st}f(t − c)dt = ∫_0^∞ e^{−s(τ+c)}f(τ)dτ = e^{−cs}F(s).
Corollary. If g(t) = f(t + c), then
∫_0^∞ e^{−st}u_c(t)f(t)dt = ∫_0^∞ e^{−st}u_c(t)g(t − c)dt = e^{−cs}L{f(t + c)}.
It is clear that
h(t) = (1/3)u_3(t)(t − 3) − (1/3)u_6(t)(t − 6).
Thus we have
(s^2 + 4)Y(s) = (1/3)(e^{−3s} − e^{−6s})(1/s^2),
or
Y(s) = (1/3)(e^{−3s} − e^{−6s})F(s),
where
F(s) = 1/(s^2(4 + s^2)) = (1/4)(1/s^2) − (1/4)(1/(4 + s^2)).
Thus
y(t) = (1/3)u_3(t)f(t − 3) − (1/3)u_6(t)f(t − 6),
where
f(t) = L^{−1}{F(s)} = (1/4)t − (1/8)sin(2t).
2. Convolution Integral
The convolution f ∗ g of piecewise continuous functions f, g defined on [0, ∞)
is a function
(f ∗ g)(t) = ∫_0^t f(τ)g(t − τ)dτ.
F(s)G(s) = ∫_0^∞ e^{−su}f(u)du ∫_0^∞ e^{−sτ}g(τ)dτ
= ∫_0^∞ e^{−sτ}g(τ) (∫_0^∞ e^{−su}f(u)du) dτ
= ∫_0^∞ (∫_0^∞ e^{−s(τ+u)}f(u)du) g(τ)dτ.
Example 2.2. Find
L^{−1}{1/(1 + s^2)^2}.
Solution.
L^{−1}{1/(1 + s^2)^2} = ∫_0^t sin(t − τ) sin τ dτ
= ∫_0^t [sin t cos τ − cos t sin τ] sin τ dτ
= (1/2) sin t ∫_0^t sin(2τ)dτ − (1/2) cos t ∫_0^t [1 − cos(2τ)]dτ
= −(1/4) sin t cos(2t) + (1/4) sin t − (1/2) t cos t + (1/4) cos t sin(2t)
= (1/2) sin t − (1/2) t cos t.
Example 2.3. Find a function which satisfies the integral equation
∫_0^t f(τ)f(t − τ)dτ = te^{−2t}.
F(s)F(s) = 1/(s + 2)^2,
F(s) = ±1/(s + 2).
Hence
f (t) = ±e−2t .
2.1. Laplace transform of some functions. If F (s) and G(s) are Laplace
transforms of a function f (t) and g(t) respectively , then
(11) L{f^{(n)}(t)} = s^n F(s) − s^{n−1}f(0) − ··· − s f^{(n−2)}(0) − f^{(n−1)}(0),
(12) L{∫_0^t f(τ)dτ} = F(s)/s,
(13) L{u_c(t)} = e^{−cs}/s (s > 0),
where x_1(t), ..., x_n(t) are unknown functions and p_{ij}, h_i, i, j = 1, ..., n, are given functions.
It is convenient to write the system (1.10) in the form
where
x(t) = (x_1(t), x_2(t), ..., x_n(t))^T.
This system, called the auxiliary system of (1.6), has a non-zero solution if and only if its determinant is zero:
det[ (a_{11} − r) a_{12} ... a_{1n} ; a_{21} (a_{22} − r) ... a_{2n} ; ... ; a_{n1} a_{n2} ... (a_{nn} − r) ] = 0.
The last equation or the equation
|A − rI| = 0 (1.8)
is called the characteristic equation of the system.
r2 − 4r + 3 = 0
we find
r1 = 1, r2 = 3.
From the auxiliary system
(5 − r)w_1 + 2w_2 = 0,
−4w_1 − (1 + r)w_2 = 0,
we find eigenvectors
w^{(1)} = (1, −2)^T, w^{(2)} = (1, −1)^T.
Hence
x^{(1)}(t) = e^t (1, −2)^T, x^{(2)}(t) = e^{3t} (1, −1)^T
is the fundamental set of solutions.
Example 1.4. Solve the system of equations:
x_1' = x_1 − x_2 − x_3,
x_2' = x_2 + 3x_3,
x_3' = 3x_2 + x_3.
w3 = −w2 , w1 = 0.
Therefore
w^{(1)} = (0, 1, −1)^T
is the eigenvector corresponding to the eigenvalue r1 = −2. For r2 = 1 we
have
−w_2 − w_3 = 0,
3w_3 = 0,
3w_2 = 0.
det(A − rI) = 0
are also real numbers. Therefore if r = λ + im is a complex eigenvalue of A, then r̄ = λ − im is also an eigenvalue of A:
(A − rI)w = 0,
(A − r̄I)w̄ = 0.
The corresponding eigenvector can be written as
w = a + ib,
where a and b are real.
Then
(r + 2)^2 + 1 = 0, r^2 + 4r + 5 = 0, r_{1,2} = −2 ± i.
Inserting r_1 = −2 + i into the auxiliary system
(−1 − r)w_1 + 2w_2 = 0,
−w_1 − (3 + r)w_2 = 0,
we get
(−1 + 2 − i)w_1 + 2w_2 = 0,
−w_1 − (3 − 2 + i)w_2 = 0,
that is,
(1 − i)w_1 + 2w_2 = 0,
−w_1 − (1 + i)w_2 = 0.
So we can take
w_1 = 2, w_2 = −(1 − i) = −1 + i,
w = (2, −1 + i)^T = (2, −1)^T + i(0, 1)^T,
x(t) = C_1 e^{−2t}[(2, −1)^T cos t − (0, 1)^T sin t] + C_2 e^{−2t}[(2, −1)^T sin t + (0, 1)^T cos t].
Example 1.7. Find the general solution of the system
x'(t) = Ax(t),
where
A = [ 1 2 −1 ; 0 1 1 ; 0 −1 1 ].
we find that
w^{(1)} = (1, 0, 0)^T
is an eigenvector corresponding to the real eigenvalue r_1 = 1, and that
w = (2 − i, i, −1)^T = (2, 0, −1)^T + i(−1, 1, 0)^T
is the eigenvector corresponding to the complex root r = 1 + i. Therefore the general solution has the form
x(t) = C_1 (1, 0, 0)^T e^t + C_2 [(2, 0, −1)^T cos t − (−1, 1, 0)^T sin t] e^t + C_3 [(−1, 1, 0)^T cos t + (2, 0, −1)^T sin t] e^t.
Example 1.8. Find a general solution of the system
x'(t) = Ax(t),
where
A = [ 1 −2 2 ; −2 1 2 ; 2 2 1 ].
First we write the auxiliary system:
(1 − r)w_1 − 2w_2 + 2w_3 = 0,
−2w_1 + (1 − r)w_2 + 2w_3 = 0,
2w_1 + 2w_2 + (1 − r)w_3 = 0.
Its determinant gives the characteristic equation
−r^3 + 3r^2 + 9r − 27 = 0,
with roots r = 3 (double) and r = −3. Hence
x(t) = C_1 (1, 0, 1)^T e^{3t} + C_2 (1, −1, 0)^T e^{3t} + C_3 (1, 1, −1)^T e^{−3t}
is the general solution.
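The eigenpairs used above are easy to verify directly (an added check in pure Python; the matrix is the one from Example 1.8):

```python
# Check A w = r w for the eigenpairs of A = [[1,-2,2],[-2,1,2],[2,2,1]].
A = [[1, -2, 2], [-2, 1, 2], [2, 2, 1]]

def matvec(A, w):
    return [sum(A[i][j] * w[j] for j in range(3)) for i in range(3)]

pairs = [(3, [1, 0, 1]), (3, [1, -1, 0]), (-3, [1, 1, -1])]
for r, w in pairs:
    assert matvec(A, w) == [r * wi for wi in w]
```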
1.4. Repeated roots of the characteristic equation. If r is a root of multiplicity k and there are m (m < k) linearly independent eigenvectors w^{(1)}, ..., w^{(m)} corresponding to the eigenvalue r, then we look for the solutions corresponding to this eigenvalue in the form:
that is
x_{21}(t) = (a_1 + b_1 t)e^{2t},
2a_2 e^{2t} + b_2 e^{2t} + 2b_2 te^{2t} = −a_1 e^{2t} − b_1 te^{2t} + a_2 e^{2t} + b_2 te^{2t},
2a_1 + b_1 = 3a_1 + a_2, a_1 + a_2 = b_1,
2b_1 = 3b_1 + b_2, b_1 + b_2 = 0,
x^{(1)}(t) = (x_{11}(t), x_{21}(t))^T = e^{4t}(1, 1)^T
is a solution of the system. We look for the second solution that is independent
of
x^{(2)}(t) = (x_{12}(t), x_{22}(t))^T
in the form
x_{12}(t) = (a_1 + b_1 t)e^{4t},
x_{22}(t) = (a_2 + b_2 t)e^{4t}.
Substituting into the system we get
4a_1 + b_1 + 4b_1 t = 5a_1 + 5b_1 t − a_2 − b_2 t,
4a_2 + b_2 + 4b_2 t = a_1 + b_1 t + 3a_2 + 3b_2 t,
4a_1 + b_1 = 5a_1 − a_2 ⇒ b_1 = a_1 − a_2,
4b_1 = 5b_1 − b_2 ⇒ b_2 = b_1,
4a_2 + b_2 = a_1 + 3a_2 ⇒ a_1 − a_2 = b_2,
4b_2 = b_1 + 3b_2 ⇒ b_1 = b_2.
Choosing b_1 = 1, b_2 = 1, a_1 = 2, a_2 = 1 we obtain
x_{12}(t) = (2 + t)e^{4t},
x_{22}(t) = (1 + t)e^{4t}.
Hence
x^{(2)}(t) = ((2 + t)e^{4t}, (1 + t)e^{4t})^T
and the general solution has the form
x(t) = C_1 (e^{4t}, e^{4t})^T + C_2 ((2 + t)e^{4t}, (1 + t)e^{4t})^T.
e^A = Q e^D Q^{−1}.
(3) The matrix
e^{tA} = I + tA + (t^2/2!)A^2 + ... + (t^n/n!)A^n + ...
satisfies the equation
(d/dt)e^{tA} = Ae^{tA}, e^{tA}|_{t=0} = I. (1.11)
(4) It follows from the last property that the vector function
x(t) = etA x0
is a solution of the problem
x0 (t) = Ax(t), x(0) = x0 .
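The series defining e^{tA} can be truncated and evaluated directly (an added sketch; the 2x2 rotation example A = [[0,1],[-1,0]], for which e^{tA} = [[cos t, sin t],[-sin t, cos t]], is my choice):

```python
# Truncated series e^{tA} = sum_k t^k A^k / k! for a 2x2 matrix.
import math

def mat_mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def expm(A, t, terms=30):
    S = [[1.0, 0.0], [0.0, 1.0]]     # running sum, starts at I
    P = [[1.0, 0.0], [0.0, 1.0]]     # running power A^k
    for k in range(1, terms):
        P = mat_mul(P, A)
        c = t**k / math.factorial(k)
        S = [[S[i][j] + c * P[i][j] for j in range(2)] for i in range(2)]
    return S

E = expm([[0.0, 1.0], [-1.0, 0.0]], 1.0)
assert abs(E[0][0] - math.cos(1.0)) < 1e-12
assert abs(E[0][1] - math.sin(1.0)) < 1e-12
```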
Suppose that the matrix A has n linearly independent eigenvectors
w^{(1)} = (w_{11}, w_{21}, ..., w_{n1})^T, w^{(2)} = (w_{12}, w_{22}, ..., w_{n2})^T, ..., w^{(n)} = (w_{1n}, w_{2n}, ..., w_{nn})^T (1.12)
corresponding to eigenvalues r_1, r_2, ..., r_n. Since the vectors w^{(1)}, w^{(2)}, ..., w^{(n)} are linearly independent, the matrix
Q = [ w_{11} w_{12} ... w_{1n} ; w_{21} w_{22} ... w_{2n} ; ... ; w_{n1} w_{n2} ... w_{nn} ] (1.13)
x0 = Ax + g(t) (1.19)
where A is a constant matrix which has n linearly independent eigenvectors
w(1) , w(2) , . . . , w(n)
corresponding to eigenvalues r1 , r2 , . . . , rn .
In this case A is diagonalizable and we can transform the system into a system
which is easily solvable.
x = Qy. (1.20)
Substituting into (1.19) we obtain:
x = Ψ(t)u(t), (1.24)
where Ψ(t) is the fundamental matrix of the system and u(t) is a vector-function
to be found.
Thus
x_p(t) = Ψ(t) ∫ Ψ(t)^{−1} g(t)dt.
r1 = 1, r2 = 10.
From the auxiliary system we find that the vector
w^{(1)} = (1, −2)^T
is an eigenvector corresponding to r_1 = 1 and
w^{(2)} = (1, 1)^T
is the eigenvector corresponding to r_2 = 10. Hence the fundamental matrix is the following matrix
Ψ(t) = [ e^t e^{10t} ; −2e^t e^{10t} ].
It is easy to see that
Ψ(0) = [ 1 1 ; −2 1 ], Ψ^{−1}(0) = [ 1/3 −1/3 ; 2/3 1/3 ].
Thus
x(t) = [ e^t e^{10t} ; −2e^t e^{10t} ][ 1/3 −1/3 ; 2/3 1/3 ](1, 3)^T = ( −(2/3)e^t + (5/3)e^{10t}, (4/3)e^t + (5/3)e^{10t} )^T.
x(t) = (1/2)[ 3e^t e^{−t} ; e^t e^{−t} ][ 1 −1 ; −1 3 ](−1, 0)^T
+ (1/2)[ 3e^t e^{−t} ; e^t e^{−t} ] ∫_0^t [ e^{−s} −e^{−s} ; −e^s 3e^s ](e^{2s}, 1)^T ds,
2x(t) = (−3e^t + e^{−t}, −e^t + e^{−t})^T + [ 3e^t e^{−t} ; e^t e^{−t} ] ∫_0^t (e^s − e^{−s}, −e^{3s} + 3e^s)^T ds
= (−3e^t + e^{−t}, −e^t + e^{−t})^T + [ 3e^t e^{−t} ; e^t e^{−t} ] (e^t + e^{−t} − 2, −(1/3)e^{3t} + 3e^t − 8/3)^T
= ( (8/3)e^{2t} − 9e^t − (5/3)e^{−t} + 6, (2/3)e^{2t} − 3e^t − (5/3)e^{−t} + 4 )^T.
Hence
x(t) = ( (4/3)e^{2t} − (9/2)e^t − (5/6)e^{−t} + 3, (1/3)e^{2t} − (3/2)e^t − (5/6)e^{−t} + 2 )^T.
Example 1.15. Let A be a symmetric, positive definite matrix and let h(t) be a vector function continuous on [0, ∞). Show that if ‖h(t)‖ → 0 as t → ∞, then all solutions of the system
x'(t) + Ax(t) = h(t) (1.28)
tend to zero as t → ∞.
Solution. It follows from (1.28) that
(x'(t) + Ax(t) − h(t), x(t)) = 0
or
(x'(t), x(t)) + (Ax(t), x(t)) = (h(t), x(t)). (1.29)
CHAPTER 7

Stability and Instability
1. Autonomous Systems
A system of differential equations
y_1'(t) = f_1(y_1(t), y_2(t), ..., y_n(t)),
y_2'(t) = f_2(y_1(t), y_2(t), ..., y_n(t)),
...................................
y_n'(t) = f_n(y_1(t), y_2(t), ..., y_n(t)), (1.1)
where y_1(t), ..., y_n(t) are unknown functions and f_k(y_1, y_2, ..., y_n), k = 1, ..., n, are given functions defined on R^n, is called an autonomous system of ODE's. A system of equations
f_1(y_1, y_2, ..., y_n) = 0,
f_2(y_1, y_2, ..., y_n) = 0,
.....................
f_n(y_1, y_2, ..., y_n) = 0, (1.2)
where y_1, ..., y_n are unknown numbers, is called the stationary system corresponding to (1.1). Solutions of the system (1.2) are called the stationary states or equilibria of (1.1).
For the sake of convenience we write the system as a differential equation in R^n,
y'(t) = f(y(t)), (1.3)
and the corresponding stationary system as a vector equation in R^n,
f(y) = 0. (1.4)
Here and in what follows we use the following notations for vectors u = (u_1, u_2, ..., u_n), v = (v_1, v_2, ..., v_n) ∈ R^n:
u · v = u_1 v_1 + u_2 v_2 + ··· + u_n v_n
and
‖u‖ = (u_1^2 + u_2^2 + ··· + u_n^2)^{1/2}.
Example 1.5. Let a, b be positive numbers. Show that the zero solution of the system
x'(t) + ay(t) = 0,
y'(t) − bx(t) = 0,
This inequality implies that all solutions of this system are bounded on R+ and
the zero solution of the system is stable.
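For the system of Example 1.5, the quantity E = b x^2 + a y^2 satisfies dE/dt = 2bx(−ay) + 2ay(bx) = 0, which is the boundedness behind the stability claim. A numerical illustration (an added sketch; the RK4 integrator and parameter values a = 2, b = 3 are my choices):

```python
# For x' = -a y, y' = b x the quantity E = b x^2 + a y^2 is conserved,
# so trajectories stay bounded; integrate with RK4 and check the drift.
def rk4_step(f, u, h):
    k1 = f(u)
    k2 = f([u[i] + 0.5*h*k1[i] for i in range(2)])
    k3 = f([u[i] + 0.5*h*k2[i] for i in range(2)])
    k4 = f([u[i] + h*k3[i] for i in range(2)])
    return [u[i] + h*(k1[i] + 2*k2[i] + 2*k3[i] + k4[i])/6.0 for i in range(2)]

a, b = 2.0, 3.0
f = lambda u: [-a*u[1], b*u[0]]
u = [1.0, 0.0]
E0 = b*u[0]**2 + a*u[1]**2
for _ in range(10000):                 # integrate to t = 10 with h = 0.001
    u = rk4_step(f, u, 0.001)
assert abs(b*u[0]**2 + a*u[1]**2 - E0) < 1e-6
```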
(2) Assume that a < 0, d < 0. Then we can rewrite the equality (??) in the
following form
1d 2
x (t) + y 2 (t) + m x2 (t) + y 2 (t)
2 dt
= ax4 (t) + dy 4 (t) + m x2 (t) + y 2 (t) + (b + c)x(t)y(t)
b+c
where m > 0 is an arbitrary positive number and m1 := m + 2 .
Employing the Young inequality
αβ ≤ εα² + (1/(4ε))β²
Example 1.8. Suppose that a(t) is a continuous function defined on [0, ∞). Show that solutions of the equation
y′(t) = a(t)y(t), t ≥ 0    (1.9)
are stable if and only if
lim sup_{t→∞} ∫_0^t a(s)ds < ∞.    (1.10)
Solution. Let y(t) be a given solution of the equation (1.9) that satisfies the initial condition y(0) = y0. Then
y(t) = y0 e^{∫_0^t a(s)ds}.
Let z(t) be an arbitrary solution of (1.9). It is clear that
|y(t) − z(t)| = |y0 − z0| e^{∫_0^t a(s)ds},    (1.11)
where z0 = z(0). A solution y(t) of (1.9) is stable if for each ε > 0 there exists δ = δ(ε) > 0 such that
|y(t) − z(t)| ≤ ε    (1.12)
whenever |y0 − z0| ≤ δ. It is clear that the right-hand side of (1.11) remains bounded only when the condition (1.10) is satisfied.
Suppose now that the condition (1.10) is satisfied. Then there exists A0 > 0 such that
e^{∫_0^t a(s)ds} ≤ A0 < ∞, ∀t ≥ 0.
Therefore it follows from (1.11) that for each ε > 0
|y(t) − z(t)| ≤ ε
whenever |y0 − z0| ≤ ε/A0, i.e. the solution y(t) is stable.
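An illustration of this criterion (the choice a(t) = cos t is made here for concreteness, not taken from the notes): ∫_0^t cos s ds = sin t is bounded, so e^{∫_0^t a(s)ds} ≤ e and any two solutions stay within e·|y0 − z0| of each other, i.e. A0 = e works.

```python
import math

# For y'(t) = a(t)*y(t) with a(t) = cos(t) (an illustrative choice),
# y(t) = y0 * exp(sin(t)), so |y(t) - z(t)| = |y0 - z0| * exp(sin(t))
# and condition (1.10) holds with A0 = e: every solution is stable.

def solution(y0, t):
    return y0 * math.exp(math.sin(t))

y0, z0 = 1.0, 1.001          # two nearby initial values
gap = max(abs(solution(y0, t) - solution(z0, t))
          for t in [0.01 * k for k in range(10_000)])  # t in [0, 100)

print(gap <= math.e * abs(y0 - z0))  # A0 = e bounds the gap in (1.11)
```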
CHAPTER 8
Boundary Value Problems
In the previous chapter we studied mainly initial value problems (or Cauchy problems) for ordinary differential equations. We were given the initial value (or initial values) of the unknown function and a differential equation which governed its behavior for subsequent times. In this chapter we consider a different type of problem for second order ODEs, which we call a boundary value problem. In this case our aim is to find a function defined on some interval, where we are given its value or the value of its derivative at the boundary points of the interval and a differential equation to govern its behavior in the interior of the interval.
(1) Find two linearly independent solutions of (1.4) that satisfy the boundary conditions Ba[y1] = 0, Bb[y2] = 0,
(2) Compute p(x)W(x) = c,
(3) Construct the Green function
G(x, s) =
{ −y1(s)y2(x)/c, a ≤ s ≤ x,
{ −y2(s)y1(x)/c, x ≤ s ≤ b.
Then the desired solution is the function
y(x) = ∫_a^b G(x, s)f(s)ds,
where f is the right-hand side of the equation.
y(x) = (π − x)/π ∫_0^x s² ds + (x/π) ∫_x^π (π − s)s ds
= (π − x)/π · x³/3 + (x/π) [πs²/2 − s³/3]_x^π
= x³/3 − x⁴/(3π) + (x/π)(π³/2 − π³/3 − πx²/2 + x³/3)
= x³/3 − x⁴/(3π) + π²x/6 − x³/2 + x⁴/(3π) = −(x/6)(x² − π²).
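The Green-function integral above yields y(x) = x(π² − x²)/6; a short numerical check (a sketch, not part of the original notes) confirms that it satisfies −y″(x) = x with y(0) = y(π) = 0.

```python
import math

# Check that y(x) = x*(pi^2 - x^2)/6, produced by the Green's function
# computation above, satisfies -y'' = x on (0, pi) with y(0) = y(pi) = 0.

def y(x):
    return x * (math.pi ** 2 - x ** 2) / 6.0

# boundary conditions
assert abs(y(0.0)) < 1e-12 and abs(y(math.pi)) < 1e-12

# -y''(x) = x via a centered second difference at interior points
h = 1e-4
for x in [0.5, 1.0, 2.0, 3.0]:
    ypp = (y(x + h) - 2 * y(x) + y(x - h)) / h ** 2
    assert abs(-ypp - x) < 1e-5

print("ok")
```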
2. Sturm–Liouville boundary value problems
In this section we consider the following Sturm–Liouville problem, i.e. a problem of finding the numbers λ for which the equation
−y″ + q(x)y = λy, x ∈ (a, b),    (2.1)
where q is a given continuous function on the interval [a, b], has a nonzero solution under boundary conditions of the form
α1 y(a) + α2 y′(a) = 0    (2.2)
and
∫_a^b g(x)L[f](x)dx = ∫_a^b g(x)(−f″(x) + q(x)f(x))dx
= −∫_a^b g(x)f″(x)dx + ∫_a^b q(x)f(x)g(x)dx
= −f′g|_a^b + ∫_a^b f′(x)g′(x)dx + ∫_a^b q(x)f(x)g(x)dx.    (2.6)
It follows from (2.5) and (2.6) that
∫_a^b f(x)L[g](x)dx − ∫_a^b g(x)L[f](x)dx = W[g, f](b) + W[f, g](a) = 0.
3. Boundary Value Problems for second order nonlinear ODE's
In this section we consider the problem
−y″(x) + a(x)y(x) = f(x, y(x)) + h(x), x ∈ (a, b),    (3.1)
Ba[y] = α1 y(a) + α2 y′(a) = 0,    (3.2)
It is clear that the operator A[·] maps C[0, 1] into itself. On the other hand, thanks to the Lipschitz condition (3.4) we have
|A[y1](x) − A[y2](x)| ≤ k0 ∫_0^1 |G(x, ξ)||y1(ξ) − y2(ξ)|dξ.
Since |G(x, ξ)| ≤ g0 for each x, ξ ∈ [0, 1] we have
max_{x∈[0,1]} |A[y1](x) − A[y2](x)| ≤ k0 g0 max_{x∈[0,1]} |y1(x) − y2(x)|.
This inequality implies that the operator A is a contraction in the Banach space C[0, 1] whenever k0 g0 < 1. Hence the following theorem holds true.
Theorem 3.1. If the nonlinear term satisfies the condition (3.4) and
k0 g0 < 1, then the problem (3.1)-(3.3) has a unique solution.
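To make the contraction argument concrete, here is a sketch in Python under illustrative choices not taken from the notes: the problem −y″ = sin(y) + 1 on (0, 1) with y(0) = y(1) = 0, whose Dirichlet Green's function is G(x, ξ) = ξ(1 − x) for ξ ≤ x and x(1 − ξ) for ξ ≥ x, so g0 = 1/4 while k0 = 1 (the Lipschitz constant of sin), giving k0 g0 = 1/4 < 1.

```python
import math

# Picard (fixed-point) iteration for the illustrative problem
#     -y''(x) = sin(y(x)) + 1,  x in (0, 1),  y(0) = y(1) = 0,
# rewritten as y = A[y],  A[y](x) = ∫_0^1 G(x, ξ)(sin(y(ξ)) + 1) dξ,
# with G(x, ξ) = ξ(1 - x) for ξ <= x,  x(1 - ξ) for ξ >= x.
# Here g0 = max G = 1/4 and k0 = 1, so A is a contraction.

N = 200                                  # quadrature grid
xs = [i / N for i in range(N + 1)]

def G(x, xi):
    return xi * (1 - x) if xi <= x else x * (1 - xi)

def A(y):
    """Apply the integral operator using the trapezoidal rule."""
    out = []
    for x in xs:
        vals = [G(x, xi) * (math.sin(yv) + 1) for xi, yv in zip(xs, y)]
        out.append(sum((vals[i] + vals[i + 1]) / (2 * N) for i in range(N)))
    return out

y = [0.0] * (N + 1)
diffs = []
for _ in range(8):
    y_new = A(y)
    diffs.append(max(abs(u - v) for u, v in zip(y_new, y)))
    y = y_new

# successive differences shrink geometrically, as the contraction predicts
print(all(d2 <= 0.3 * d1 for d1, d2 in zip(diffs, diffs[1:])))
```

The observed contraction factor is in fact about 1/8 here, since max_x ∫_0^1 G(x, ξ)dξ = 1/8; the theorem's bound k0 g0 = 1/4 is simply a cruder estimate.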
Problem 3.2. Show that y(x) ≡ 0 is a unique solution of the problem
(
−y″(x) + [y(x)]³ = 0, x ∈ (0, 1),
y(0) = y(1) = 0.
Problem 3.3. Show that the problem
(
y″(x) − y(x) = sin(πx), x ∈ (0, 1),
y(0) = 0, y(1) = −2
has a unique solution.
The general solution of the equation has the form
y(x) = C1 eˣ + C2 e⁻ˣ − (1/(1 + π²)) sin(πx).
Boundary conditions are satisfied if
C1 + C2 = 0, C1 e + C2 e⁻¹ = −2.
Thus the problem has the solution
y(x) = −(2/sinh 1) sinh x − (1/(1 + π²)) sin(πx).
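Solving the linear system C1 + C2 = 0, C1 e + C2 e⁻¹ = −2 numerically and checking the resulting y against the equation confirms the computation (a sketch, not part of the original notes):

```python
import math

# Verify the solution of Problem 3.3: with
#   y(x) = C1*e^x + C2*e^{-x} - sin(pi*x)/(1 + pi^2),
# the constants solve C1 + C2 = 0 and C1*e + C2/e = -2.

C1 = -2 / (math.e - 1 / math.e)      # from the 2x2 linear system
C2 = -C1

def y(x):
    return (C1 * math.exp(x) + C2 * math.exp(-x)
            - math.sin(math.pi * x) / (1 + math.pi ** 2))

# boundary values imposed by the linear system
assert abs(y(0.0)) < 1e-12
assert abs(y(1.0) + 2.0) < 1e-12

# y'' - y = sin(pi*x) via a centered second difference
h = 1e-4
for x in [0.2, 0.5, 0.8]:
    ypp = (y(x + h) - 2 * y(x) + y(x - h)) / h ** 2
    assert abs(ypp - y(x) - math.sin(math.pi * x)) < 1e-6

print("ok")
```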
Assume that v(x) is another solution of the problem. Then the function
z(x) = y(x) − v(x)
would be a solution of the problem
(
z″(x) − z(x) = 0, x ∈ (0, 1),
z(0) = 0, z(1) = 0.
Multiplying this equation by −z(x) and integrating over the interval (0, 1) we obtain
∫_0^1 [z′(x)]² dx − z z′|_{x=0}^{x=1} + ∫_0^1 [z(x)]² dx = 0.
Thanks to the boundary conditions
∫_0^1 [z′(x)]² dx + ∫_0^1 [z(x)]² dx = 0.
Hence z(x) = y(x) − v(x) = 0, ∀x ∈ [0, 1].
4. Problems
(1) Find eigenvalues and eigenfunctions of the problem
(
−y″ = λy, x ∈ (0, 2π),
y(0) = y(2π), y′(0) = y′(2π).
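For this periodic problem the eigenvalues turn out to be λn = n², n = 0, 1, 2, ..., with eigenfunctions 1, cos(nx), sin(nx); a direct finite-difference check (a sketch, not part of the notes) confirms that these functions satisfy −y″ = n²y together with the periodicity conditions.

```python
import math

# Check that y(x) = cos(n*x) and y(x) = sin(n*x) satisfy
#   -y'' = n^2 * y  on (0, 2*pi),  y(0) = y(2*pi),  y'(0) = y'(2*pi),
# i.e. that lambda_n = n^2 are eigenvalues of the periodic problem.

h = 1e-5
two_pi = 2 * math.pi

def d1(f, x):  # centered first difference
    return (f(x + h) - f(x - h)) / (2 * h)

def d2(f, x):  # centered second difference
    return (f(x + h) - 2 * f(x) + f(x - h)) / h ** 2

for n in range(4):
    for f in (lambda x, n=n: math.cos(n * x),
              lambda x, n=n: math.sin(n * x)):
        # periodic boundary conditions
        assert abs(f(0.0) - f(two_pi)) < 1e-9
        assert abs(d1(f, 0.0) - d1(f, two_pi)) < 1e-7
        # the eigenvalue relation -y'' = n^2 * y at a sample point
        x = 1.234
        assert abs(-d2(f, x) - n ** 2 * f(x)) < 1e-4

print("ok")
```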
CHAPTER 9
Fourier Series and PDE's
1. Periodic Functions
A function f (x) defined on R is called a periodic function if there exists a
number T > 0 such that
f (x + T ) = f (x), ∀x ∈ R. (1.1)
The smallest number T for which the relation (1.1) holds is called the period of
f or fundamental period of f .
2. Functional Series
Let
f1 (x), f2 (x), ..., fn (x), ... (2.1)
be a sequence of functions defined on some interval I ⊂ R. We say that the sequence (2.1) is convergent (or pointwise convergent) to a function f(x) on I if for each fixed point x ∈ I the number sequence {fn(x)} converges to the number f(x) as n → ∞. If at least for one point x0 the sequence {fn(x0)} is divergent, then we say that the sequence of functions {fn(x)} is divergent on I.
A sequence of functions (2.1) is said to be uniformly convergent to a function
f(x) on I if for each ε > 0 there exists a number Nε, depending on ε only, such that
|fn(x) − f(x)| ≤ ε
for all n ≥ Nε and all x ∈ I.
and the series Σ_{n=1}^∞ an is convergent, then the series Σ_{n=1}^∞ fn(x) is uniformly convergent to some function that is continuous on [a, b].
and due to uniform convergence of the series we can integrate (3.1) over (−π, π) and get:
∫_{−π}^{π} f(x)dx = a0 π.
Let us multiply (3.1) by cos(mx) and integrate over (−π, π). Taking into account
∫_{−π}^{π} cos²(mx)dx = π, ∫_{−π}^{π} sin²(mx)dx = π
we obtain
an = (1/π) ∫_{−π}^{π} f(x) cos(nx)dx, n = 0, 1, 2, ...    (3.6)
bn = (1/π) ∫_{−π}^{π} f(x) sin(nx)dx, n = 1, 2, ...    (3.7)
Here we have used the fact that for each m the series
Σ_{n=1}^∞ cos(mx)(an cos(nx) + bn sin(nx))
and
Σ_{n=1}^∞ sin(mx)(an cos(nx) + bn sin(nx))
are uniformly convergent.
Definition 3.2. The series (3.1) where an and bn are defined by (3.6) and
(3.7) is called the Fourier series of the function f , the numbers an , bn are
called the Fourier coefficients of f .
an = (2/π) ∫_0^π (π − x) cos(nx)dx = 2 ∫_0^π cos(nx)dx − (2/π) ∫_0^π x cos(nx)dx
= −(2/π) ∫_0^π x cos(nx)dx = −(2/(nπ)) [x sin(nx)]_0^π + (2/(nπ)) ∫_0^π sin(nx)dx
= (2/(nπ)) [−(1/n) cos(nx)]_0^π = (2/(nπ)) ((1/n) − (1/n) cos(nπ)) = (2/(n²π)) (1 − (−1)ⁿ).
Thus an = 0 if n is an even number, and an = 4/(n²π) if n is an odd number.
φ(x) = π/2 + (4/π) Σ_{k=1}^∞ cos((2k − 1)x)/(2k − 1)²
If a function f(x) is an even function, then
bn = (1/l) ∫_{−l}^{l} f(x) sin(nπx/l)dx = 0
and its Fourier series has the form
f(x) = a0/2 + Σ_{n=1}^∞ an cos(nπx/l),    (3.11)
where
an = (2/l) ∫_0^l f(x) cos(nπx/l)dx, n = 0, 1, 2, ...    (3.12)
If a function f(x) is an odd function, then
an = (1/l) ∫_{−l}^{l} f(x) cos(nπx/l)dx = 0
and its Fourier series has the form
f(x) = Σ_{n=1}^∞ bn sin(nπx/l),    (3.13)
where
bn = (2/l) ∫_0^l f(x) sin(nπx/l)dx, n = 1, 2, ...    (3.14)
Let f(x) be defined on [0, l]. We define the even periodic extension fe of f as follows:
fe(x) = f(x) if x ∈ [0, l], fe(x) = f(−x) if x ∈ [−l, 0], and fe(x) = fe(x + 2l), ∀x ∈ R.
An odd periodic extension f0 of f is defined as follows:
f0(x) = f(x) if x ∈ (0, l], f0(x) = −f(−x) if x ∈ [−l, 0), and f0(x) = f0(x + 2l), ∀x ∈ R.
Example 3.6. Find Fourier series expansion for f (x) = 1 − x2 , x ∈ [−1, 1]
and use it to show that
π²/6 = Σ_{n=1}^∞ 1/n².
Solution.
a0 = 2 ∫_0^1 (1 − x²)dx = 4/3,
an = 2 ∫_0^1 (1 − x²) cos(nπx)dx = 2 ∫_0^1 cos(nπx)dx − 2 ∫_0^1 x² cos(nπx)dx
= −(2/(nπ)) [x² sin(nπx)]_0^1 + (2/(nπ)) ∫_0^1 2x sin(nπx)dx = (4/(nπ)) ∫_0^1 x sin(nπx)dx
= −(4/(n²π²)) [x cos(nπx)]_0^1 + (4/(n²π²)) ∫_0^1 cos(nπx)dx = −(4/(n²π²)) (−1)ⁿ.
Thus we have
f(x) = 2/3 + (4/π²) Σ_{n=1}^∞ ((−1)^{n+1}/n²) cos(nπx).
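Setting x = 1 in this expansion, where cos(nπ) = (−1)ⁿ and f(1) = 0, gives 0 = 2/3 − (4/π²) Σ 1/n², i.e. Σ 1/n² = π²/6. The sketch below checks both the partial Fourier sum and the resulting identity numerically.

```python
import math

# At x = 1 the series f(x) = 2/3 + (4/pi^2) * sum (-1)^{n+1} cos(n*pi*x)/n^2
# gives 0 = 2/3 - (4/pi^2) * sum 1/n^2, i.e. sum 1/n^2 = pi^2/6.

def partial_sum(x, terms):
    s = 2.0 / 3.0
    for n in range(1, terms + 1):
        s += (4 / math.pi ** 2) * (-1) ** (n + 1) * math.cos(n * math.pi * x) / n ** 2
    return s

# the partial sums reproduce f(x) = 1 - x^2 at a sample point
assert abs(partial_sum(0.5, 2000) - (1 - 0.5 ** 2)) < 1e-5

# Basel identity: sum 1/n^2 -> pi^2/6 (tail after N terms is ~ 1/N)
N = 100_000
basel = sum(1.0 / n ** 2 for n in range(1, N + 1))
assert abs(basel - math.pi ** 2 / 6) < 1e-4

print("ok")
```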
Hence we have
2In = ∫_{−π}^{π} [f(x) − f(x − π/n)] cos(nx)dx,
so that
2|In| ≤ ∫_{−π}^{π} |f(x) − f(x − π/n)|dx.    (R)
The function f is continuous on [−π, π], thus it is uniformly continuous on [−π, π]. Therefore the integral on the right-hand side of (R) tends to zero as n → ∞. So In → 0 as n → ∞.
Similarly we can show that Jn → 0 as n → ∞.
Problem. Let f(x) be 2π-periodic and f′(x) a continuous function. Show that
an = o(1/n), bn = o(1/n).
or
a0²/2 + Σ_{n=1}^N (an² + bn²) ≤ (1/π) ∫_{−π}^{π} f²(x)dx.    (5.5)
We can pass to the limit as N → ∞ and get (5.1).
By using the Bessel inequality we can prove the following theorem.
Theorem 5.2. Suppose that f is a continuous 2π-periodic function and f′ is a piecewise continuous function. Then the Fourier series of f converges absolutely and uniformly to the function f.
Proof. Let us calculate the Fourier coefficients of f′:
α0 = (1/π) ∫_{−π}^{π} f′(x)dx = (1/π)(f(π) − f(−π)) = 0,
αn = (1/π) ∫_{−π}^{π} f′(x) cos(nx)dx = (1/π) [cos(nx)f(x)]_{x=−π}^{x=π} + (n/π) ∫_{−π}^{π} f(x) sin(nx)dx = nbn,
βn = (1/π) ∫_{−π}^{π} f′(x) sin(nx)dx = (1/π) [sin(nx)f(x)]_{x=−π}^{x=π} − (n/π) ∫_{−π}^{π} f(x) cos(nx)dx = −nan.
So we have
αn = nbn, βn = −nan, n = 1, 2, ...    (5.6)
Employing the inequality
|ab| ≤ a² + (1/4)b²
we obtain from (5.6) the following inequality
|an| + |bn| = (1/n)|βn| + (1/n)|αn| ≤ 2/n² + (1/4)(αn² + βn²).
Due to the Bessel inequality the series
Σ_{n=1}^∞ (αn² + βn²)
is convergent. Hence the series Σ_{n=1}^∞ (|an| + |bn|) is also convergent. Therefore the series
a0/2 + Σ_{n=1}^∞ (an cos(nx) + bn sin(nx)),
where an and bn are the Fourier coefficients of the function f, is uniformly convergent to a continuous function, which coincides with f.
Example 5.3. Assume that the Fourier series of f(x) on [−π, π] converges to f(x) and can be integrated term by term. Multiply
a0/2 + Σ_{n=1}^∞ (an cos(nx) + bn sin(nx))
by f(x) and integrate the obtained relation from −π to π to derive the identity
(1/π) ∫_{−π}^{π} f²(x)dx = a0²/2 + Σ_{n=1}^∞ (an² + bn²).    (5.7)
This identity is called the Parseval identity.
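As an illustration of (5.7) (the example f(x) = x² is chosen here, not taken from the notes): on [−π, π] it has a0 = 2π²/3, an = 4(−1)ⁿ/n², bn = 0, so the Parseval identity reads 2π⁴/5 = 2π⁴/9 + 16 Σ 1/n⁴, which is equivalent to Σ 1/n⁴ = π⁴/90.

```python
import math

# Parseval check for f(x) = x^2 on [-pi, pi] (illustrative example):
#   a0 = 2*pi^2/3,  an = 4*(-1)^n/n^2,  bn = 0,
# so (5.7) reads  2*pi^4/5 = a0^2/2 + sum 16/n^4.

N = 10_000
lhs = (1 / math.pi) * (2 * math.pi ** 5 / 5)          # (1/pi) * integral of x^4
a0 = 2 * math.pi ** 2 / 3
rhs = a0 ** 2 / 2 + sum(16.0 / n ** 4 for n in range(1, N + 1))

assert abs(lhs - rhs) < 1e-9      # tail of the sum is ~ 16/(3*N^3)

# equivalently, sum 1/n^4 = pi^4/90
print(abs(sum(1.0 / n ** 4 for n in range(1, N + 1)) - math.pi ** 4 / 90) < 1e-9)
```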
CHAPTER 10
Partial Differential Equations
This chapter is devoted to the study of the Cauchy problem and initial boundary value problems for the heat and wave equations.
We see that the solution of the problem (1.1)-(1.3) has this form just when the initial function is a linear combination of the functions
sin(√λ1 x), ..., sin(√λn x), λk = k²π²/l².
iff
Dn = fn = (2/l) ∫_0^l f(x) sin(nπx/l)dx.
Theorem 1.1. If f(x) is continuous on [0, l], f′(x) is piecewise continuous on [0, l], and f(0) = f(l) = 0, then the function
u(x, t) = Σ_{n=1}^∞ fn e^{−a²λn t} sin(nπx/l)    (1.11)
satisfies (1.1)-(1.3).
To prove this theorem we need the following proposition
Proposition 1.2. Assume that the functions vn(x, t), n = 1, 2, ... are continuous on QT = [a, b] × [t0, T] and
|vn(x, t)| ≤ an, ∀(x, t) ∈ QT, n = 1, 2, ...,
where the sequence of positive numbers {an} is such that the series Σ_{n=1}^∞ an is convergent. Then the series Σ_{n=1}^∞ vn(x, t) is absolutely and uniformly convergent on QT. Moreover the function
v(x, t) = Σ_{n=1}^∞ vn(x, t)
is continuous on QT.
If
|∂vn/∂t (x, t)| ≤ bn, |∂²vn/∂x² (x, t)| ≤ dn, ∀(x, t) ∈ QT, n = 1, 2, ...
and
Σ_{n=1}^∞ bn < ∞, Σ_{n=1}^∞ dn < ∞,
then the series
Σ_{n=1}^∞ ∂vn/∂t (x, t) and Σ_{n=1}^∞ ∂²vn/∂x² (x, t)
uniformly converge to vt (x, t) and vxx (x, t) in QT . Moreover these functions are
continuous in QT .
Proof of Theorem 1.1. Since f is piecewise smooth the series
Σ_{n=1}^∞ |fn|
is convergent. Thus Proposition 1.2 implies that the function u(x, t) is continuous on [0, l] × [0, ∞). Let us show that the series
(a) Σ_{n=1}^∞ ∂un/∂t (x, t) and (b) Σ_{n=1}^∞ ∂²un/∂x² (x, t)    (1.12)
are convergent. Therefore due to Proposition 1.2 the function u(x, t) defined by (1.11) is a solution of the problem (1.1)-(1.3).
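As a numerical sanity check of this construction (with illustrative values a = 1, l = π and a two-term initial function, none of which are fixed by the notes), each partial sum of (1.11) satisfies ut = a²uxx together with the boundary conditions:

```python
import math

# For a = 1, l = pi (illustrative values), the two-term instance of (1.11)
#   u(x, t) = 2*exp(-t)*sin(x) + 0.5*exp(-4*t)*sin(2*x)
# should satisfy u_t = u_xx and u(0, t) = u(pi, t) = 0.

def u(x, t):
    return 2 * math.exp(-t) * math.sin(x) + 0.5 * math.exp(-4 * t) * math.sin(2 * x)

h = 1e-4
for (x, t) in [(0.7, 0.1), (1.5, 0.5), (2.5, 1.0)]:
    ut = (u(x, t + h) - u(x, t - h)) / (2 * h)
    uxx = (u(x + h, t) - 2 * u(x, t) + u(x - h, t)) / h ** 2
    assert abs(ut - uxx) < 1e-5              # heat equation
    assert abs(u(0.0, t)) < 1e-12 and abs(u(math.pi, t)) < 1e-12

print("ok")
```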
Finally we show that the solution we obtained is unique. Indeed, if v(x, t) is another solution of the problem then
w(x, t) = u(x, t) − v(x, t)
Problem 1.3. Let the series Σ_{n=1}^∞ an² and Σ_{n=1}^∞ bn² be convergent. Show that
Σ_{n=1}^∞ |an bn| ≤ (Σ_{n=1}^∞ an²)^{1/2} (Σ_{n=1}^∞ bn²)^{1/2}.
We expand h(x, t):
h(x, t) = Σ_{n=1}^∞ hn(t) sin(nπx/l).
By using (1.19) we obtain from (1.16)
Σ_{n=1}^∞ [u′n(t) + λn a² un(t) − hn(t)] sin(√λn x) = 0.
This equality holds iff
u′n(t) + λn a² un(t) = hn(t), n = 1, 2, ...    (1.20)
Taking into account the initial condition (1.17) we obtain
u(x, 0) = Σ_{n=1}^∞ un(0) sin(nπx/l) = f(x) = Σ_{n=1}^∞ fn sin(nπx/l).
It follows then
un(0) = fn, n = 1, 2, ...    (1.21)
The initial value problem (1.20),(1.21) has the solution
un(t) = e^{−λn a² t} fn + ∫_0^t e^{−λn a²(t−s)} hn(s)ds.
Let us recall that λn = n²π²/l².
Hence
w(x, t) ≡ 0, i.e. u(x, t) ≡ v(x, t).
1.2. Nonhomogeneous boundary conditions. Let us consider the prob-
lem
ut = a2 uxx , x ∈ (0, l), t > 0, (1.29)
u(x, 0) = f (x), x ∈ [0, l], (1.30)
u(0, t) = A, u(l, t) = B, t ≥ 0, (1.31)
where A, B are given constants. The solution of the problem u(x, t) is a sum of
two functions v(x, t) and W (x), where W (x) is a solution of the problem
(
W″(x) = 0, x ∈ (0, l),
W(0) = A, W(l) = B    (1.32)
and v is a solution of the problem
vt = a²vxx, x ∈ (0, l), t > 0,
v(x, 0) = f(x) − W(x), x ∈ [0, l],    (1.33)
v(0, t) = v(l, t) = 0, t ≥ 0.
It is clear that
W(x) = A + (B − A)x/l
is a solution of the problem (1.32). Hence the solution of the problem (1.29)-(1.31) is the function
u(x, t) = Σ_{n=1}^∞ qn e^{−a²n²π²t/l²} sin(nπx/l) + A + (B − A)x/l,
where
qn = (2/l) ∫_0^l [f(x) − A − (B − A)x/l] sin(nπx/l)dx.
Next we consider the following problem
For ut we have
ut(x, t) = Σ_{n=1}^∞ u′n(t) sin(√λn x).    (1.38)
uxx(x, t) = Σ_{n=1}^∞ gn(t) sin(√λn x),    (1.39)
where
gn(t) = (2/l) ∫_0^l uxx(x, t) sin(√λn x)dx.
Integrating by parts we obtain
gn(t) = (2/l) [ux sin(√λn x)]_0^l − (2/l)√λn ∫_0^l ux(x, t) cos(√λn x)dx
= −(2/l)√λn [u(x, t) cos(√λn x)]_0^l − (2/l)λn ∫_0^l u(x, t) sin(√λn x)dx
= (2/l)√λn [u(0, t) − u(l, t) cos(nπ)] − λn un(t).
Employing the boundary conditions (1.36) we obtain
gn(t) = (2/l)√λn [φ(t) − ψ(t) cos(nπ)] − λn un(t).
Thus (1.39) implies
uxx(x, t) = Σ_{n=1}^∞ [(2√λn/l)φ(t) − (2√λn/l)(−1)ⁿψ(t) − λn un(t)] sin(√λn x).
By using the last relation and (1.38) in (1.34) we obtain
Σ_{n=1}^∞ [u′n(t) − (2a²√λn/l)φ(t) + (2a²√λn/l)(−1)ⁿψ(t) + a²λn un(t)] sin(√λn x) = 0.
Therefore the Fourier coefficients un(t) satisfy
u′n(t) = a² [(2√λn/l)φ(t) − (2√λn/l)(−1)ⁿψ(t) − λn un(t)], n = 1, 2, ...    (1.40)
The function u will satisfy the initial condition (1.35) iff
un(0) = fn, n = 1, 2, ...    (1.41)
We solve the initial value problem (1.40),(1.41) and get
un(t) = fn e^{−a²λn t} − (2a²√λn/l) ∫_0^t e^{−a²λn(t−s)} [(−1)ⁿψ(s) − φ(s)]ds, n = 1, 2, ...    (1.42)
So the solution of the problem (1.34)-(1.36) has the form (1.37), where un(t), n = 1, 2, ... are defined by (1.42).
2. Wave Equation
In this section we study the wave equation. The first problem is the initial
boundary value problem:
utt = c2 uxx , x ∈ (0, l), t > 0, (2.1)
u(x, 0) = f (x), ut (x, 0) = g(x), x ∈ [0, l], (2.2)
u(0, t) = u(l, t) = 0, t ≥ 0, (2.3)
We assume that the solution of the problem has the form
u(x, t) = X(x)T (t),
where X(x) and T(t) are nonzero functions. Substituting into (2.1) we get
X(x)T″(t) = c²X″(x)T(t).
Dividing both sides of the last equality by c²X(x)T(t) we obtain
T″(t)/(c²T(t)) = X″(x)/X(x).    (2.4)
Since the left-hand side of (2.4) depends only on t and the right-hand side depends only on x, each side of this equality can only be equal to some constant. Thus
T″(t)/(c²T(t)) = X″(x)/X(x) = −λ, λ = constant,
or
T″(t) + λc²T(t) = 0,    (2.5)
X″(x) + λX(x) = 0.    (2.6)
It follows from (2.3) that
X(0) = X(l) = 0. (2.7)
So we have to solve the eigenvalue problem (2.6),(2.7). We have seen that the numbers
λn = n²π²/l², n = 1, 2, ...
are eigenvalues of the problem (2.6),(2.7), and the functions
Xn(x) = sin(nπx/l), n = 1, 2, ...
are the corresponding eigenfunctions. It is easy to see that the general solution of (2.5) for λ = λn has the form
Tn(t) = An cos(c√λn t) + Bn sin(c√λn t), n = 1, 2, ...
It is easy to see that for each N the function
uN(x, t) = Σ_{n=1}^N [An cos(c√λn t) + Bn sin(c√λn t)] sin(nπx/l)    (2.8)
Let us note that if Σ_{n=1}^∞ |An| < ∞ and Σ_{n=1}^∞ |Bn| < ∞, then this series is uniformly convergent on [−l, l] × [0, T], ∀T > 0. The function u(x, t) defined by (2.9) satisfies the boundary conditions (2.3) since each term of the series satisfies these conditions. It follows from (2.9) that u(x, t) satisfies the initial conditions (2.2)
f(x) = u(x, 0) = Σ_{n=1}^∞ An sin(nπx/l),
g(x) = ut(x, 0) = Σ_{n=1}^∞ Bn (cnπ/l) sin(nπx/l)
iff
An = fn = (2/l) ∫_0^l f(x) sin(nπx/l)dx
and
Bn = (l/(cnπ)) gn = (2/(cnπ)) ∫_0^l g(x) sin(nπx/l)dx.
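Each term An cos(c√λn t) sin(nπx/l) and Bn sin(c√λn t) sin(nπx/l) satisfies the wave equation; a finite-difference check with illustrative values c = 1, l = π and two nonzero coefficients (chosen here, not fixed by the notes) confirms this:

```python
import math

# For c = 1, l = pi (illustrative values), the two-term instance of (2.8)
#   u(x, t) = cos(t)*sin(x) + 0.25*sin(2*t)*sin(2*x)
# should satisfy u_tt = u_xx and u(0, t) = u(pi, t) = 0.

def u(x, t):
    return math.cos(t) * math.sin(x) + 0.25 * math.sin(2 * t) * math.sin(2 * x)

h = 1e-4
for (x, t) in [(0.5, 0.3), (1.2, 1.0), (2.8, 2.0)]:
    utt = (u(x, t + h) - 2 * u(x, t) + u(x, t - h)) / h ** 2
    uxx = (u(x + h, t) - 2 * u(x, t) + u(x - h, t)) / h ** 2
    assert abs(utt - uxx) < 1e-5             # wave equation
    assert abs(u(0.0, t)) < 1e-12 and abs(u(math.pi, t)) < 1e-12

print("ok")
```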
Similar to the corresponding theorem for the heat equation we can prove the following
Theorem 2.1. If f(x), g(x), f′(x), f″(x), g′(x) are continuous and f‴(x), g″(x) are piecewise continuous on [0, l],
then the function u(x, t) defined by (2.9) is a solution of (2.1)-(2.3).
Let us show that the solution of this problem is unique. Indeed, if v(x, t) is also a solution of the problem (2.1)-(2.3) then the function w(x, t) = u(x, t) − v(x, t) is a solution of the problem
Multiplying (2.11) by wt(x, t), then integrating over the interval (0, l), after integration by parts we get
0 = ∫_0^l (wtt − c²wxx) wt(x, t)dx = ∫_0^l wtt(x, t)wt(x, t)dx + c² ∫_0^l wx(x, t)wxt(x, t)dx − c² [wx(x, t)wt(x, t)]_{x=0}^{x=l}.    (2.14)
3. Problems
(1) Consider the initial boundary value problem for the telegraph equation
(2) Use the Fourier method to find the solution of the initial boundary value problem for the Schrödinger equation
iut = uxx, x ∈ (0, l), t > 0,
u(x, 0) = f(x), x ∈ [0, l],
u(0, t) = u(l, t) = 0, t ≥ 0.