
Chapter One

Derivation of Ordinary Differential Equations From


Physical Problems
1.1 Introduction

An ordinary differential equation (ODE) is an equation that involves
an unknown function of a single independent variable and its derivatives. Often, the equa-
tion involves coefficients that are functions of the independent variable.
The highest order of derivative that appears in the equation is called the
order of the differential equation. Many real-life situations can be mod-
eled as ordinary differential equations. Many physical problems in en-
gineering, physics, chemistry, geology, economics and various biomedical
systems can be modeled as ordinary differential equations and therefore
admit standard mathematical treatments that are already available for
such equations. Thus, the theory of ordinary differential equations has
found formidable applications in diverse fields of human endeavor.

Why do we want to solve ordinary differential equations? Human beings
have discovered that the laws of science can be formulated as relations
that involve not only magnitudes of quantities but also rates of change
(with respect to time) of these magnitudes. Thus the laws can be for-
mulated as differential equations. Note that if a general solution of an
nth-order equation can be found, it usually involves n arbitrary constants.
As an illustration, the general solution of the 3rd order ordinary differ-
ential equation

y''' = 16e^(-2x)
can be found by three successive integrations. Integrating for the first
time, we have :
y'' = -8e^(-2x) + C1

integrating again

y' = 4e^(-2x) + C1 x + C2

finally,

y = -2e^(-2x) + (1/2)C1 x^2 + C2 x + C3
where C1, C2, C3 are arbitrary constants. In what follows, we shall con-
sider some examples of specific physical cases that can be modeled as
ordinary differential equations.

1.1.1. Example

Radioactivity: Exponential Decay


Experiments show that a radioactive substance decomposes at a rate pro-
portional to the amount present. Starting with a given amount of the substance,
say y grams at a certain time, say t = 0, what can be said about the amount
available at a later time?

Solution
First Step: Description of the physical process by a differential equation:

Let the amount of substance still present at time t be denoted
by y(t). The rate of change dy/dt is the first derivative of y with respect to
time t. According to the law governing the process of radiation, dy/dt
is proportional to y(t).
That is
dy/dt ∝ y(t)
so that
dy/dt = K y(t),     (1.1)
where K is a definite physical constant whose numerical value is known
for various radioactive substances. The amount present is positive and de-
creases with time. This implies dy/dt is negative, so that the constant K is
negative.

Second Step: Solution of the Derived Equation:

Solving the equation (1.1) by methods of calculus, we have

y(t) = Ce^(Kt),     (1.2)

where C is a constant.

Third Step: Calculation of the Value of the arbitrary Constant by us-


ing initial values.

Let y = 2 grams at t = 0. We have to specify the value of the con-
stant C so that y = 2 when t = 0; that is, we insert the initial condition

y(0) = 2     (1.3)

in equation (1.2).
We have y(0) = Ce^0 = C, so C = 2. Thus equation (1.2) becomes

y(t) = 2e^(Kt).     (1.4)

This particular solution of (1.1) characterizes the amount of substance
still present at any time t. The physical constant K is negative and y(t)
decreases with time.

Fourth Step: Checking


Differentiating equation (1.4), we have

dy(t)/dt = 2Ke^(Kt) = K y(t),
y(0) = 2e^0 = 2.

The function (1.4) satisfies Equation (1.1) as well as the initial con-
dition (1.3).
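As a quick numerical sanity check (not part of the original text), the particular solution (1.4) can be tested against Equation (1.1) with a central-difference approximation of the derivative; the decay constant K = -0.5 below is an arbitrary illustrative value:

```python
import math

# Check of (1.4): y(t) = 2*exp(K*t) should satisfy dy/dt = K*y(t) and y(0) = 2.
# K = -0.5 is an illustrative value, not taken from the text.
K = -0.5
y = lambda t: 2.0 * math.exp(K * t)

def dydt(t, h=1e-6):
    # central-difference approximation of y'(t)
    return (y(t + h) - y(t - h)) / (2 * h)

assert abs(y(0.0) - 2.0) < 1e-12           # initial condition y(0) = 2
for t in [0.0, 1.0, 3.0]:
    assert abs(dydt(t) - K * y(t)) < 1e-6  # dy/dt = K*y(t)
```

The same check works for any negative K, which is the physically meaningful case here.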

1.1.2. Example

Population With Restricted Growth.


Suppose we have a bacterial population of size N, whose rate of growth
dN/dt decreases as N increases. We wish to construct a model for a popu-
lation that will eventually enter a stationary phase.

Let t denote the elapsed time, N = N (t), the population at time, t.


Let us suppose there exists an upper limit K (a positive constant) for
the population size. Thus

N(t) ≤ K, ∀ t,

dN/dt = k(K − N)     (1.5)

where k is a positive rate constant. Thus as N increases to K, the rate of
growth decreases to zero. To solve (1.5), we write

(1/(K − N)) dN/dt = k,   K − N > 0.
Integrating both sides, we have

∫ (1/(K − N(t))) (dN/dt) dt = kt + C

where C is the arbitrary constant of integration. Evaluating the left
hand side, we have:

∫ dN/(K − N) = −ln(K − N) + C1.

Hence, it follows that

−ln(K − N) + C1 = kt + C
ln(K − N) = −kt + (C1 − C)
K − N = e^(C1 − C) e^(−kt)

Solving for N , we have:


N = K − αe^(−kt)     (1.6)

where α = e^(C1 − C) is a positive constant that is completely determined by
the value of N at t = 0. Let N0 = N(0); then α = K − N0 from (1.6).

Note
The function (1.6) is the only function that satisfies the differential equa-
tion (1.5) and the initial condition N = N0 at t = 0. Notice that since
αe^(−kt) > 0, ∀ t, the value

N(t) = K − αe^(−kt) < K

and as t → ∞, N = K − αe^(−kt) → K, since e^(−kt) → 0.
Equation (1.5) above is a special case of the differential equation of the
form
dN/dt = a + bN
where a and b are constants.
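A short numerical check (not part of the text) that the restricted-growth solution (1.6), written with α = K − N0, behaves as claimed; the capacity K, rate constant k and initial size N0 below are illustrative values:

```python
import math

# Check of (1.6): N(t) = K - (K - N0)*exp(-k*t) should satisfy dN/dt = k*(K - N),
# start at N0, and approach K.  K, k, N0 are illustrative choices.
K, k, N0 = 100.0, 0.3, 10.0
N = lambda t: K - (K - N0) * math.exp(-k * t)

def dNdt(t, h=1e-6):
    # central-difference approximation of N'(t)
    return (N(t + h) - N(t - h)) / (2 * h)

assert abs(N(0.0) - N0) < 1e-9
for t in [0.0, 2.0, 10.0]:
    assert abs(dNdt(t) - k * (K - N(t))) < 1e-4   # dN/dt = k*(K - N)
assert K - N(60.0) < 1e-3                          # N(t) -> K as t grows
```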

1.1.3. Definition
An equation of order n is said to be a linear equation if it is of the special
form:

a0(x) y^(n)(x) + a1(x) y^(n−1)(x) + ··· + a_{n−1}(x) y'(x) + a_n(x) y(x) = f(x),     (1.7)

where a0, a1, ···, a_n are given functions that are defined on an interval

I = {x : a < x < b} ⊆ IR.

Thus, the general nth order equation (1.7) is linear if the left hand side
(LHS) of the equation is a first degree polynomial in y, y', y'', ···, y^(n).

An equation that is not of the form (1.7) is said to be nonlinear. For
example, the equations

xy'' + y' + (cos x)y = e^x

xy' + y = x^2

xy''' + e^x y' + (sin x)y = 0

are all linear equations, but

y' + 2y^2 = 1

y'' + (cos x)yy' = sin x

y''' + (y')^3 + y = 0

are nonlinear differential equations.

1.2 Derivation of Ordinary Differential Equations

Just as a function can be obtained as a solution of a given differen-
tial equation, the reverse process can be carried out: an ordinary
differential equation can be obtained from a given function. Such a
differential equation may be obtained by eliminating the constants that appear in
the function. We shall illustrate this procedure through some examples
as follows:

1.2.1 Example . Consider the function

y = C1 e−2x + C2 e3x (1.8)

involving two constants C1 and C2. We shall attempt to eliminate the
constants by differentiating the given function y twice and combining the
derivatives as follows:

(i) dy/dx = −2C1 e^(−2x) + 3C2 e^(3x)

and

(ii) d^2y/dx^2 = 4C1 e^(−2x) + 9C2 e^(3x).

From (i) and (1.8),

dy/dx + 2y = 5C2 e^(3x).

That is

y' + 2y = 5C2 e^(3x)

and
y'' + 2y' = 15C2 e^(3x) = 3(y' + 2y)

and hence,

3(y' + 2y) − (y'' + 2y') = 0.

This implies that

y'' − y' − 6y = 0.
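A numerical spot-check (not in the text) that the derived equation y'' − y' − 6y = 0 holds for the original two-parameter family; C1 and C2 below are arbitrary illustrative constants:

```python
import math

# y = C1*e^(-2x) + C2*e^(3x) together with its first and second derivatives,
# computed from (i) and (ii) above, should satisfy y'' - y' - 6y = 0 identically.
C1, C2 = 1.7, -0.4
y  = lambda x: C1 * math.exp(-2 * x) + C2 * math.exp(3 * x)
y1 = lambda x: -2 * C1 * math.exp(-2 * x) + 3 * C2 * math.exp(3 * x)   # (i)
y2 = lambda x:  4 * C1 * math.exp(-2 * x) + 9 * C2 * math.exp(3 * x)   # (ii)

for x in [-1.0, 0.0, 0.5, 2.0]:
    assert abs(y2(x) - y1(x) - 6 * y(x)) < 1e-9
```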
1.2.2. Example. Eliminate the constants from the equation:

(x − a)2 + y 2 = a2 .

Direct differentiation yields:

2(x − a) + 2yy' = 0,

from which
a = yy' + x.
Substituting for the constant a in the given equation, we have:

(yy')^2 + y^2 = (x + yy')^2.

Hence,
y^2 = x^2 + 2xyy'.
Finally we have:
(x^2 − y^2)dx + 2xy dy = 0.
1.2.3. Example : Eliminate B and α from the relation:

(i) x = B cos(ωt + α)

where ω is a parameter.
Differentiating with respect to t, we have

(ii) dx/dt = −ωB sin(ωt + α)

and

(iii) d^2x/dt^2 = −ω^2 B cos(ωt + α).

Combining Equations (i) and (iii) above, we have

d^2x/dt^2 + ω^2 x = 0.
1.2.4. Remark: Associated with an nth order ODE of the form

y^(n) = G[x, y, y', y'', ···, y^(n−1)]     (1.8)

n auxiliary conditions of the type

y(x0) = K0, y'(x0) = K1, ···, y^(n−1)(x0) = K_{n−1}     (1.9)

where Ki are given numbers may be attached. There are n conditions


for the nth -order equation. These conditions specify the values of the
unknown function and its first (n − 1) derivatives at a single point x0 .

For a first order equation:

y' = F(x, y),

there is only one condition

y(x0) = K0.

For a second order equation,

y'' = G[x, y, y']

there would be two conditions

y(x0) = K0, y'(x0) = K1.

A set of auxiliary conditions of the form (1.9) is called a set of initial con-
ditions for Equation (1.8). Equation (1.8) together with (1.9) constitute
an initial value problem.

1.3 First Order Equations


First order equations can in general be written as

dy/dx = F(x, y),     (1.20)

where F(x, y) is a given function. Solutions of a first order equation can
usually be found when F(x, y) has a particularly simple form. These can be
categorized as follows:

Case A: Separable Equations


A first order ODE that can be written in the form

dy/dx = f(x) g(y)     (1.21)

where f(x) and g(y) are given functions is called a separable equation.
The following equations are examples of separable equations:

dy/dx = 2xy^2

y^(−1) dy/dx = (x + 1)^(−1)

(3y^2 + e^y) dy/dx = cos x.
For a separable equation, we have from Equation (1.21),

(1/g(y)) dy/dx = f(x).

Integrating,

∫ (1/g(y)) (dy/dx) dx = ∫ f(x) dx.

This implies that

∫ dy/g(y) = ∫ f(x) dx.
This expresses the solution y(x) implicitly in terms of x.
Consider the following examples:

1.3.1. Example . Solve the differential equation:


dy/dx = (1 + y)/(2 + x).

The right hand side of the given equation can be written in separable form. Com-
paring with Equation (1.21), f(x) = 1/(2 + x) and g(y) = 1 + y. Separating the
variables we have:

(1/(1 + y)) dy/dx = 1/(2 + x).
Integrating, we have

∫ dy/(1 + y) = ∫ dx/(2 + x).
Hence,
ln(1 + y) = ln(2 + x) + C = ln(2 + x) + ln A.
Thus,
ln(1 + y) = ln A(2 + x)
Finally, we have
y(x) = A(2 + x) − 1
where A is an arbitrary constant.
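A direct check (not in the text) that the family y(x) = A(2 + x) − 1 solves the given equation; A = 2.5 is an arbitrary illustrative constant:

```python
# Check of Example 1.3.1: y(x) = A*(2 + x) - 1 should satisfy dy/dx = (1 + y)/(2 + x).
A = 2.5
y = lambda x: A * (2 + x) - 1

for x in [0.0, 1.0, 5.0]:
    lhs = A                       # dy/dx of y(x) is just A
    rhs = (1 + y(x)) / (2 + x)    # = A*(2 + x)/(2 + x) = A
    assert abs(lhs - rhs) < 1e-12
```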

1.3.2. Example : Solve the differential equation:


dy/dx = 2xy^2
The equation is separable with f(x) = 2x and g(y) = y^2. Separating the
variables and integrating we have:

y^(−2) dy/dx = 2x

and

∫ y^(−2) dy = ∫ 2x dx.

Hence,

−1/y = x^2 + C

or

y(x) = −1/(x^2 + C).

1.3.3. Example : Solve the differential equation:


(x + 1) dy/dx = 2y

Separating the variables, we have:

y^(−1) dy/dx = 2/(x + 1).

Integrating, we have:

∫ dy/y = ∫ 2/(x + 1) dx.
Hence,

ln |y| = ln(x + 1)^2 + C1

or

|y| = e^(C1)(x + 1)^2

or

y(x) = ±e^(C1)(x + 1)^2,

where C1 is an arbitrary constant. But the factor ±e^(C1) that appears in the
last equation can have any value except zero, so that the set of solutions
described by the last equation is also described by the simpler formula:

y(x) = C(x + 1)^2,  C ≠ 0.

Case B: Homogeneous Equation


For this case, we first define a homogeneous function of two variables.

Definition: A function f(x, y) defined for all (x, y) ∈ IR^2 is said to be
homogeneous of degree n in x and y if and only if

f(tx, ty) = t^n f(x, y), ∀ t.     (1.22)

For example, if f(x, y) is given by

f(x, y) = 2y^3 e^(y/x) − x^4/(x + 3y),

then

f(tx, ty) = 2t^3 y^3 e^(ty/(tx)) − t^4 x^4/(tx + 3ty)
          = t^3 [2y^3 e^(y/x) − x^4/(x + 3y)]
          = t^3 f(x, y).

Hence, f (x, y) is homogeneous of degree 3 in x and y. On the other hand,


the function g(x, y) defined by

g(x, y) = x^3 y^2 − 3x^5

is homogeneous of degree 5 in x and y. To see this, we have:

g(tx, ty) = t^3 x^3 · t^2 y^2 − 3t^5 x^5
          = t^5 (x^3 y^2 − 3x^5)
          = t^5 g(x, y).

Assume now that the right hand side F(x, y) of Equation (1.20) can be
written as

F(x, y) = f(x, y)/g(x, y)

where the functions f(x, y) and g(x, y) are homogeneous of the same degree
in x and y. Then the first order equation (1.20) becomes

dy/dx = f(x, y)/g(x, y) = [x^n f̃(y/x)]/[x^n g̃(y/x)] = Φ(y/x).     (1.23)

Generally, in determining if a given first order equation can be written


in the form (1.23), the following criterion is often used.

1.3.4. Theorem: A first order ordinary differential equation:


M(x, y) + N(x, y) dy/dx = 0     (1.24)
can be put in the form (1.23) if the functions M (x, y) and N (x, y) are ho-
mogeneous of the same degree in x and y.

Remark: If M and N are homogeneous of the same degree n, then we can


write:

M(x, y) = t^(−n) M(tx, ty)

and

N(x, y) = t^(−n) N(tx, ty).

Setting

t = x^(−1),

we have

M(x, y) = x^n M(1, y/x),  N(x, y) = x^n N(1, y/x).

Equation (1.24) can be written as

dy/dx = −M(x, y)/N(x, y) = −M(1, y/x)/N(1, y/x).
The last equation is of the form (1.23). The first order equation can be
solved by making a change of variable
v = y/x

or equivalently, by writing

y = vx.
The change of variable will reduce the given equation to a separable form
in the variables v and x.
As an illustration, consider the following examples:

1.3.5. Example : Solve the first order equation:


2x^2 dy/dx = x^2 + y^2.

This equation can be written as

dy/dx = (x^2 + y^2)/(2x^2).

The numerator and denominator of the right hand side are clearly homoge-
neous of the same degree 2 in x and y, and

(x^2 + y^2)/(2x^2) = (1/2)[1 + (y/x)^2].

Setting

y = vx,

we have

x dv/dx + v = (1/2)(1 + v^2)

or

2x dv/dx = v^2 − 2v + 1 = (v − 1)^2.
dx
Hence, separating the variables and integrating, we have:

∫ dv/(v − 1)^2 = (1/2) ∫ dx/x.

Then,

−1/(v − 1) = (1/2) ln |x| + c1

or

v = 1 − 2/(ln |x| + 2c1).

Replacing v by y/x and setting c = 2c1, we finally have:

y(x) = x − 2x/(ln |x| + c).
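A numerical check (not in the text) that the solution of Example 1.3.5 satisfies the original equation 2x^2 dy/dx = x^2 + y^2; c = 1.0 is an illustrative constant, and x is kept away from the point where ln|x| + c vanishes:

```python
import math

# y(x) = x - 2x/(ln|x| + c) should satisfy 2*x^2*y' = x^2 + y^2.
c = 1.0
y = lambda x: x - 2 * x / (math.log(abs(x)) + c)

def dydx(x, h=1e-6):
    # central-difference approximation of y'(x)
    return (y(x + h) - y(x - h)) / (2 * h)

for x in [2.0, 3.0, 10.0]:
    assert abs(2 * x**2 * dydx(x) - (x**2 + y(x)**2)) < 1e-3
```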

1.3.6. Example : Solve the differential equation:

dy/dx = (2xy + 3y^2)/(x^2 + 2xy)

Solution: Notice that the numerator and the denominator of the right
hand side of the given equation are homogeneous in x and y. Let y = vx.
Then

dy/dx = v + x dv/dx

and

(2xy + 3y^2)/(x^2 + 2xy) = (2vx^2 + 3v^2 x^2)/(x^2 + 2vx^2) = (2v + 3v^2)/(1 + 2v).
Substituting these in the given equation, we have:

v + x dv/dx = (2v + 3v^2)/(1 + 2v).

That is

x dv/dx = (2v + 3v^2)/(1 + 2v) − v
        = (2v + 3v^2 − v − 2v^2)/(1 + 2v)
        = (v^2 + v)/(1 + 2v).
Separating the variables and integrating, we have:
∫ (1 + 2v)/(v^2 + v) dv = ∫ (1/x) dx.
That is
ln(v^2 + v) = ln x + c = ln x + ln A.

Thus,

v^2 + v = Ax.

Putting back v = y/x, we finally have:

(y/x)^2 + (y/x) = Ax

or

y^2 + xy = Ax^3.
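The implicit solution can be checked (a check of our own, not from the text) by implicit differentiation: along y^2 + xy = Ax^3 the total derivative of F = y^2 + xy − Ax^3 must vanish when dy/dx takes the value given by the ODE. The sample point below is an arbitrary illustrative choice:

```python
# Along y^2 + x*y = A*x^3, F_x + F_y*y' must vanish when
# y' = (2xy + 3y^2)/(x^2 + 2xy), the right hand side of the ODE.
x0, y0 = 2.0, 3.0
A = (y0**2 + x0 * y0) / x0**3                         # constant fixed by the point
slope = (2*x0*y0 + 3*y0**2) / (x0**2 + 2*x0*y0)       # dy/dx from the ODE

# F = y^2 + x*y - A*x^3, so F_x = y - 3*A*x^2 and F_y = 2*y + x
total = (y0 - 3*A*x0**2) + (2*y0 + x0) * slope
assert abs(total) < 1e-12
```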

1.3.7. Exercises: Solve the following equations:


(1) (x − y) dy/dx = x + y.

(2) 2x^2 dy/dx = x^2 + y^2

(3) (x^2 + xy) dy/dx = xy − y^2

(4) Show that an equation of the form dy/dx = F(ay + bx + c), a ≠ 0, becomes
separable under the change of variable v = ay + bx + k, where k is any
constant.

(5) e^(y/x) dy/dx = 2(e^(y/x) − 1) + (y/x) e^(y/x),  x ≠ 0.

(6) (x^4 + y^4) dy/dx = x^3 y.
dx

Case C: Exact Equations
We shall consider here, another class of differential equations called Ex-
act Equations. We present the following definitions.

1.3.8. Definition: If a function Φ(x, y) exists such that M dx + N dy is
exactly the total differential of Φ, then we say that the equation:

M dx + N dy = 0     (1.25)

is an exact equation.
Thus if (1.25) is exact, then by definition Φ exists such that

dΦ = M dx + N dy.

But from calculus,

dΦ = (∂Φ/∂x) dx + (∂Φ/∂y) dy,

so that

M = ∂Φ/∂x,  N = ∂Φ/∂y.
By differentiating M and N partially with respect to y and x respectively,
these two equations lead to the following:

∂M/∂y = ∂^2Φ/∂y∂x,  ∂N/∂x = ∂^2Φ/∂x∂y.

Again from calculus, we have

∂^2Φ/∂y∂x = ∂^2Φ/∂x∂y

provided these partial derivatives are continuous. Therefore, if (1.25) is
an exact equation, then

∂M/∂y = ∂N/∂x.     (1.26)
Thus for equation (1.25) to be exact, it is necessary that equation (1.26)
be satisfied. Conversely we shall show that if (1.26) is satisfied, then
(1.25) is exact.
Let (1.26) be satisfied and let F(x, y) be a function for which

∂F/∂x = M.

The function F is the result of integrating M with respect to the variable
x while holding y constant. Now

∂^2F/∂y∂x = ∂M/∂y

and hence if (1.26) is satisfied, then also

∂^2F/∂y∂x = ∂N/∂x.     (1.27)

In the integration with respect to x holding y constant, the arbitrary
constant may be any function of y.
Integrating (1.27) with respect to x, we have

∂F/∂y = N + B'(y).     (1.28)
We can now exhibit

Φ(x, y) = F(x, y) − B(y).

Thus

dΦ = (∂Φ/∂x) dx + (∂Φ/∂y) dy
   = (∂F/∂x) dx + (∂F/∂y − B'(y)) dy
   = M(x, y) dx + (N + B'(y) − B'(y)) dy
   = M(x, y) dx + N(x, y) dy.

This implies that Equation (1.25) is exact. In the light of the above, the
following theorem has been proved.
1.3.9. Theorem: If M, N, ∂M/∂y and ∂N/∂x are continuous functions of x and
y, then a necessary and sufficient condition that the differential equation

M(x, y) dx + N(x, y) dy = 0

be exact is that

∂M/∂y = ∂N/∂x.

In what follows, we present some examples on the techniques for solving


exact equations.

1.3.10. Example : Solve the equation

3x(xy − 2)dx + (x3 + 2y)dy = 0 (1.29)

Here,
M (x, y) = 3x(xy − 2)

and
N (x, y) = (x3 + 2y).
Also

∂M/∂y = 3x^2 = ∂N/∂x,

and these functions are all continuous, being polynomials in x and y. Thus (1.26) is
satisfied. Hence, equation (1.29) is exact. Its solution is

Φ = C,

where C is a constant. Hence, we have


∂Φ/∂x = M = 3x^2 y − 6x     (1.30)

and

∂Φ/∂y = N = x^3 + 2y.     (1.31)
Integrating Equation (1.30) both sides with respect to x holding y con-
stant yields
Φ(x, y) = x^3 y − 3x^2 + T(y).     (1.32)
To determine the function T (y), we know that Φ(x, y) must also satisfy
Equation (1.31). Hence, differentiating Equation (1.32) with respect to
the variable y and using Equation (1.31) yields

x^3 + T'(y) = x^3 + 2y.

This implies that

T'(y) = 2y.

Again, integrating with respect to y, we have

T(y) = y^2.

From Equation (1.32), no arbitrary constant of integration is needed in T(y),
since the solution is Φ = C and C is already an arbitrary constant. Hence,

Φ(x, y) = x^3 y − 3x^2 + y^2.

A set of solutions of (1.29) is therefore defined by

x^3 y − 3x^2 + y^2 = C.
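A finite-difference check (our own, not from the text) that the potential function found in Example 1.3.10 really has gradient (M, N):

```python
# For Phi = x^3*y - 3*x^2 + y^2 we should have dPhi/dx = M = 3x(xy - 2)
# and dPhi/dy = N = x^3 + 2y.  Verified numerically at sample points.
M   = lambda x, y: 3 * x * (x * y - 2)
N   = lambda x, y: x**3 + 2 * y
Phi = lambda x, y: x**3 * y - 3 * x**2 + y**2

h = 1e-6
for (x, y) in [(1.0, 2.0), (-2.0, 0.5)]:
    dPhi_dx = (Phi(x + h, y) - Phi(x - h, y)) / (2 * h)
    dPhi_dy = (Phi(x, y + h) - Phi(x, y - h)) / (2 * h)
    assert abs(dPhi_dx - M(x, y)) < 1e-6
    assert abs(dPhi_dy - N(x, y)) < 1e-6
```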

1.3.11. Example : Solve the differential equation:

(2x3 − xy 2 − 2y + 3)dx − (x2 y + 2x)dy = 0. (1.33)

Here
M = 2x3 − xy 2 − 2y + 3
and
N = −(x2 y + 2x).
To check whether the given equation is exact or not, we differentiate M
and N partially with respect to the variables y and x; we have:

∂M/∂y = −2xy − 2 = ∂N/∂x.

The functions are all continuous and (1.26) is satisfied. Hence equation
(1.33) is exact. A set of solutions of (1.33) is defined by

Φ(x, y) = C,

where

∂Φ/∂x = 2x^3 − xy^2 − 2y + 3     (1.34)

and

∂Φ/∂y = −x^2 y − 2x.     (1.35)
Because Equation (1.35) is simpler than Equation (1.34), we will take Φ
from (1.35).
Integrating Equation (1.35) with respect to the variable y yields
Φ(x, y) = −(1/2) x^2 y^2 − 2xy + Q(x).
We will determine the function Q(x) from equation (1.34). Thus differen-
tiating Φ partially with respect to x and equating to the right hand side
of (1.34), we have

−xy^2 − 2y + Q'(x) = 2x^3 − xy^2 − 2y + 3.

This implies that

Q'(x) = 2x^3 + 3.

Integrating again to obtain the function Q(x), we have

Q(x) = (1/2) x^4 + 3x.
Thus the solution of (1.33) is defined implicitly by

Φ(x, y) = −(1/2) x^2 y^2 − 2xy + (1/2) x^4 + 3x = C.     (1.36)

1.3.12. Exercises
Solve the following equations

(1) (x + 2y)dx + (2x + y)dy = 0

(2) (cos 2y − 3x2 y 2 )dx + (cos 2y − 2x sin 2y − 2x3 y)dy = 0

(3) (1 + y 2 )dx + (xy 2 + y)dy = 0

(4) (r + sin θ − cos θ)dr + r(sin θ + cos θ)dθ = 0

Next, we shall consider some equations that can be made exact by mul-
tiplying through by an integrating factor.

Integrating Factor
If the equation

M + N dy/dx = 0

is not exact, it may be possible to make it exact by multiplying it by
some function. That is, we may find a function ρ(x, y) such that

ρ(x, y)M (x, y) + ρ(x, y)N (x, y)y 0 = 0 (1.37)

is an exact equation. Such a function ρ (whenever it exists) is called an


integrating factor (I.F) for the original equation.

1.3.13. Example
Consider the equation

(1 + x2 y 2 + y)dx + xdy = 0. (1.38)

This equation is not exact, since

∂M/∂y = 2x^2 y + 1

and

∂N/∂x = 1.
But by multiplying through by the function:

ρ(x, y) = 1/(1 + x^2 y^2),

Equation (1.38) becomes

(1 + y/(1 + x^2 y^2)) dx + (x/(1 + x^2 y^2)) dy = 0.     (1.39)

Equation (1.39) is exact since

(∂/∂y)(1 + y/(1 + x^2 y^2)) = (∂/∂x)(x/(1 + x^2 y^2)).
Since

d(tan^(−1) xy) = (y/(1 + x^2 y^2)) dx + (x/(1 + x^2 y^2)) dy,
we see from Equation (1.39) that

d(x + tan−1 xy) = 0.

Hence the solution satisfies the relation

x + tan−1 xy = C

and so the solution curves are given by

y = (1/x) tan(C − x).

Note: There is no general rule for finding the integrating factor (I.F).
The general procedure is by trial and error.
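The solution curves of Example 1.3.13 can be tested numerically (a check of our own, not from the text): y = tan(C − x)/x should satisfy the original equation (1 + x^2 y^2 + y) dx + x dy = 0, i.e. dy/dx = −(1 + x^2 y^2 + y)/x. C = 2.0 is an illustrative constant:

```python
import math

# Check that y(x) = tan(C - x)/x satisfies dy/dx = -(1 + x^2*y^2 + y)/x.
C = 2.0
y = lambda x: math.tan(C - x) / x

def dydx(x, h=1e-7):
    # central-difference approximation of y'(x)
    return (y(x + h) - y(x - h)) / (2 * h)

for x in [0.5, 1.0, 1.5]:
    assert abs(dydx(x) + (1 + x**2 * y(x)**2 + y(x)) / x) < 1e-4
```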

1.4. First Order Linear Equations

A first order linear equation is of the form

a0(x) dy/dx + a1(x) y = f(x),

where f, a0, a1 are functions defined on an interval Ω ⊆ IR with values in
IR satisfying suitable regularity conditions. We assume that a0(x) ≠ 0 for
all x ∈ Ω. Then, we can divide through by a0(x) to get

dy/dx + p(x) y = q(x)     (1.40)

where

p(x) = a1(x)/a0(x),  q(x) = f(x)/a0(x).
Note the form of Equation (1.40) compared with (1.20). A formula for
the solution of (1.40) is given by the following theorem.

1.4.1. Theorem
Let ρ be any function such that ρ'(x) = p(x); that is,

ρ(x) = ∫ p(x) dx.

Let the function Φ(x) be specified by the relation:

Φ(x) = ±e^(ρ(x))     (1.41)

where either the plus or minus sign may be chosen. Then the solutions
of (1.40) are given by the formula:

Φ(x) y = ∫ Φ(x) q(x) dx + C     (1.42)

where C is an arbitrary constant.

Proof
Consider the positive sign in Equation (1.41). If Equation (1.40) is mul-
tiplied through by

Φ(x) = e^(ρ(x)),

then

[dy/dx + p(x) y] e^(ρ(x)) = e^(ρ(x)) q(x).

Since ρ'(x) = p(x), then

(d/dx)[y e^(ρ(x))] = [dy/dx + p(x) y] e^(ρ(x)),

hence

(d/dx)[y e^(ρ(x))] = e^(ρ(x)) q(x).

Integrating both sides, we have

y e^(ρ(x)) = ∫ e^(ρ(x)) q(x) dx + C,     (1.43)

which is the same as (1.42).

Derivation of the Integrating Factor (1.41)

We shall attempt to find an integrating factor for the linear equation
(1.40). Let us rewrite (1.40) in the form:

[p(x)y − q(x)]dx + dy = 0 (1.44)

That is
M (x, y)dx + N (x, y)dy = 0
where
M (x, y) = p(x)y − q(x)
and
N (x, y) = 1.
Here

∂M/∂y = p(x)

and

∂N/∂x = 0.
By the foregoing, Equation (1.44) is not exact except when p(x) = 0 for
all x, and in that case Equation (1.44) reduces to the simple separable form

dy/dx = q(x).
Multiplying Equation (1.44) by an integrating factor Φ(x), we have

[Φ(x) p(x) y − Φ(x) q(x)] dx + Φ(x) dy = 0.     (1.45)

By definition, Φ(x) is an integrating factor if and only if Equation (1.45)


is exact, that is
(∂/∂y)[Φ(x) p(x) y − Φ(x) q(x)] = (d/dx) Φ(x).

This implies

Φ(x) p(x) = dΦ(x)/dx.     (1.46)
In Equation (1.46), the function p(x) is known (from Equation (1.40)),
but Φ(x) is an unknown function that we are trying to determine. Thus
Φ is the dependent variable with x as independent variable. Hence we
may write:

dΦ/dx = p(x) Φ.

This differential equation is separable; thus separating the variables and
integrating we have

∫ dΦ/Φ = ∫ p(x) dx

or

ln |Φ| = ∫ p(x) dx = ρ(x)

or

Φ(x) = ±e^(ρ(x)),

which is precisely (1.41).

We present the following examples:

1.4.2. Example
(a) Solve the differential equation

dy/dx + ((2x + 1)/x) y = e^(−2x).     (1.47)

Solution: Here,

p(x) = (2x + 1)/x.

The integrating factor is

Φ(x) = exp(∫ p(x) dx) = exp(∫ (2x + 1)/x dx)
     = exp(2x + ln x)
     = exp(2x) · exp(ln x) = x e^(2x).

Multiplying the differential equation (1.47) by Φ(x), we have:

x e^(2x) dy/dx + e^(2x)(2x + 1) y = x

or

(d/dx)(x e^(2x) y) = x.

Integrating,

x e^(2x) y = (1/2) x^2 + C

or

y = (1/2) x e^(−2x) + (C/x) e^(−2x),
where C is an arbitrary constant.
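A numerical check (our own, not from the text) that the family obtained in (a) satisfies Equation (1.47); C = 3.0 is an arbitrary illustrative constant:

```python
import math

# y(x) = (x/2)*exp(-2x) + (C/x)*exp(-2x) should satisfy
# dy/dx + ((2x + 1)/x)*y = exp(-2x).
C = 3.0
y = lambda x: (x / 2) * math.exp(-2 * x) + (C / x) * math.exp(-2 * x)

def dydx(x, h=1e-6):
    # central-difference approximation of y'(x)
    return (y(x + h) - y(x - h)) / (2 * h)

for x in [0.5, 1.0, 4.0]:
    residual = dydx(x) + ((2 * x + 1) / x) * y(x) - math.exp(-2 * x)
    assert abs(residual) < 1e-6
```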

(b) Solve the following initial value problem :

(x^2 + 1) dy/dx + 4xy = x,  y(2) = 1.     (1.48)

Solution: We first divide the given differential equation in (1.48) by (x^2 + 1)
to get

dy/dx + (4x/(x^2 + 1)) y = x/(x^2 + 1).     (1.49)

Equation (1.49) is in the linear form (1.40). Here,

p(x) = 4x/(x^2 + 1).

By Equation (1.41), the integrating factor is

Φ(x) = exp(∫ 4x/(x^2 + 1) dx) = exp[ln(x^2 + 1)^2] = (x^2 + 1)^2.

Multiplying Equation (1.49) by Φ(x), we have

(x^2 + 1)^2 dy/dx + 4x(x^2 + 1) y = x(x^2 + 1)

or

(d/dx)[(x^2 + 1)^2 y] = x^3 + x.

Integrating, we have

(x^2 + 1)^2 y = (1/4) x^4 + (1/2) x^2 + C.
Applying the initial condition x = 2, y = 1, then

25 = 6 + C,

giving

C = 19.

The final solution of (1.49) is given by:

y(x) = x^4/[4(x^2 + 1)^2] + x^2/[2(x^2 + 1)^2] + 19/(x^2 + 1)^2.
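Both the initial condition and the differential equation of this initial value problem can be confirmed numerically (a check of our own, not from the text):

```python
# y(x) = (x^4/4 + x^2/2 + 19)/(x^2 + 1)^2 should satisfy
# (x^2 + 1)*dy/dx + 4x*y = x  with  y(2) = 1.
y = lambda x: (x**4 / 4 + x**2 / 2 + 19) / (x**2 + 1)**2

def dydx(x, h=1e-6):
    # central-difference approximation of y'(x)
    return (y(x + h) - y(x - h)) / (2 * h)

assert abs(y(2.0) - 1.0) < 1e-12                            # initial condition
for x in [0.0, 2.0, 5.0]:
    assert abs((x**2 + 1) * dydx(x) + 4 * x * y(x) - x) < 1e-5
```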

1.4.3. Example
Solve the differential equation:

y 2 dx + (3xy − 1)dy = 0 (1.50)

Solution: We first rewrite Equation (1.50) to get

dy/dx = y^2/(1 − 3xy).

The equation is clearly not linear in y. Furthermore, equation (1.50) is
neither exact, separable nor homogeneous. If we regard x as the depen-
dent variable and y as the independent variable, and write (1.50) in the
form:

dx/dy + (3/y) x = 1/y^2,     (1.51)

then (1.51) is of the form:

dx/dy + p(y) x = q(y)

which is linear in x. The integrating factor is given by

Φ(y) = exp(∫ (3/y) dy) = exp(ln y^3) = y^3.

Multiplying (1.51) by y^3, we have:

y^3 dx/dy + 3y^2 x = y

or

(d/dy)[y^3 x] = y.

Integrating with respect to y, we have

y^3 x = (1/2) y^2 + C

or

x = 1/(2y) + C/y^3,
where C is an arbitrary constant .

We remark that the method of solution described in the last example


may not be suitable for some differential equations that model certain
physical problems.

1.5. Bernoulli Equations


This is a special type of differential equation which can be reduced to a
linear equation by an appropriate transformation.

1.5.1. Definition:
An equation of the form:

dy/dx + p(x) y = q(x) y^n     (1.52)

where n ∈ IN is called a Bernoulli differential equation.
For n = 0 or 1, the Bernoulli Equation (1.52) reduces to a linear equation
and is therefore readily solvable by our previous methods. In general, we
assume that n is neither zero nor 1. The following theorem provides a

technique for solving Equation (1.52).

1.5.2. Theorem
Suppose that n is neither zero nor 1. Then the transformation v = y^(1−n)
reduces the Bernoulli equation (1.52) to a linear equation in v. To see this,
multiply (1.52) by y^(−n) to obtain

y^(−n) dy/dx + p(x) y^(1−n) = q(x).     (1.53)

Let

v = y^(1−n);

then

dv/dx = (1 − n) y^(−n) dy/dx.

Substituting these in (1.53), we have:

(1/(1 − n)) dv/dx + p(x) v = q(x)

or

dv/dx + (1 − n) p(x) v = (1 − n) q(x).

Letting

p1(x) = (1 − n) p(x)

and

q1(x) = (1 − n) q(x),

the last equation may be written in the form:

dv/dx + p1(x) v = q1(x)
which is of the linear form (1.40) in the variables v and x.

1.5.3. Example: Solve the differential equation:


dy/dx + y = x y^3.     (1.54)

Equation (1.54) is Bernoulli with n = 3, p(x) = 1 and q(x) = x.

Multiplying through by y^(−3), we have

y^(−3) dy/dx + y^(−2) = x.     (1.55)
If we let

v = y^(1−n) = y^(−2),

then

dv/dx = −2 y^(−3) dy/dx.

Equation (1.55) transforms into:

−(1/2) dv/dx + v = x

or

dv/dx − 2v = −2x.     (1.56)
Evaluating the integrating factor for (1.56), where now p(x) = −2, we have

Φ(x) = e^(∫ p(x) dx) = e^(−2x).

Multiplying (1.56) by the integrating factor e^(−2x), we have

e^(−2x) dv/dx − 2e^(−2x) v = −2x e^(−2x)

or

(d/dx)(e^(−2x) v) = −2x e^(−2x).

Integrating,

e^(−2x) v = (1/2) e^(−2x)(2x + 1) + C

or

v = x + 1/2 + C e^(2x),
where C is an arbitrary constant. But v = 1/y^2, thus

1/y^2 = x + 1/2 + C e^(2x).
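A numerical check (our own, not from the text) of the Bernoulli solution: solving the implicit relation for y on the branch y > 0 and substituting into (1.54). C = 1.0 is an illustrative constant, chosen positive so the square root is defined for all x:

```python
import math

# From 1/y^2 = x + 1/2 + C*exp(2x), take y(x) = 1/sqrt(x + 1/2 + C*exp(2x))
# and check dy/dx + y = x*y^3.
C = 1.0
y = lambda x: 1.0 / math.sqrt(x + 0.5 + C * math.exp(2 * x))

def dydx(x, h=1e-6):
    # central-difference approximation of y'(x)
    return (y(x + h) - y(x - h)) / (2 * h)

for x in [0.0, 0.5, 2.0]:
    assert abs(dydx(x) + y(x) - x * y(x)**3) < 1e-8
```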

1.5.4. Remark: Consider a more general equation:


(df(y)/dy) (dy/dx) + p(x) f(y) = q(x),     (1.57)

where f is a known function of y.
In (1.57), let v = f(y); then

dv/dx = (dv/dy) · (dy/dx) = (df(y)/dy) · (dy/dx).

Equation (1.57) becomes

dv/dx + p(x) v = q(x),
which is of the linear form (1.40) in v. Bernoulli differential equation
(1.52) is a special case of (1.57).
The reason for the foregoing assertion is as follows: If we write (1.52) in
the form

y^(−n) dy/dx + p(x) y^(1−n) = q(x)

and then multiply through by 1 − n, then we have

(1 − n) y^(−n) dy/dx + p1(x) y^(1−n) = q1(x).     (1.58)

In (1.58) above,

p1(x) = (1 − n) p(x),  q1(x) = (1 − n) q(x).

Equation (1.58) is of the form (1.57) with

f(y) = y^(1−n).

1.5.5. Exercises
(1) The equation:

dy/dx = A(x) y^2 + B(x) y + C(x)     (1.59)

is called the Riccati equation.

(a) Show that if A(x) ≡ 0 ∀ x, then Equation (1.59) is a linear equa-
tion of the form (1.40), whereas if C(x) ≡ 0 ∀ x, then Equation (1.59) is
a Bernoulli equation.

(b) Show that if f is any solution of (1.59), then the transformation
y = f + 1/v reduces (1.59) to a linear equation in v.

(2) Solve the initial value problems:

(a)

x dy/dx + y = (xy)^(3/2),  y(1) = 4

(b)

dy/dx + y = f(x),

where

f(x) = 2,  0 ≤ x < 1
f(x) = 0,  x ≥ 1
y(0) = 0.

(3) (i) Solve the equation:

a dy/dx + by = k e^(−λx)

where a, b, k are positive constants and λ is non-negative.

(ii) Show that if λ = 0, every solution approaches k/b as x → ∞, but if
λ > 0, every solution approaches zero as x → ∞.

(4) Solve the differential equation:

L di/dt + Ri = E sin(ωt),

for the current i(t) across a circuit, where L, R, E, ω are constants and
i(0) = 0.

Chapter Two
Applications of First Order Ordinary Differential Equations

2.1. Introduction

This chapter is devoted to some important applications of first order


ordinary differential equations to physical and economic problems. We
shall discuss in what follows, methods for construction of orthogonal and
oblique trajectories of given classes of functions. The chapter also dis-
cusses applications to an economic model and the use of ordinary differ-
ential equations for calculating the rate of cooling of a chemical reaction.
These applications demonstrate the importance of studying ordi-
nary differential equations as tools for solving diverse problems occurring
in various fields of human endeavor. We start the chapter with the dis-
cussion of orthogonal trajectories.

2.2. Orthogonal Trajectories

2.2.1. Definition
Let
F (x, y, c) = 0 (2.1)
be a given one-parameter family of curves in the X − Y plane .
A curve which intercepts the curves of family (2.1) at right angles is
called an orthogonal trajectory of the given family.

2.2.2. Example
Consider the family of circles

x^2 + y^2 = c^2,     (2.2)

with center at the origin and radius c. Each straight line through the
origin
y = kx (2.3)
is an orthogonal trajectory of the family of circles (2.2). The families of
curves (2.2) and (2.3) are orthogonal trajectories of each other.

Procedure for Finding Orthogonal Trajectories

Step 1
From the given equation (2.1), that is

F (x, y, c) = 0

describing the curves, find the first order differential equation that de-
scribes the family of curves by eliminating the constant c. That is, find
the differential equation

dy/dx = f(x, y)     (2.4)

of the curves (2.1).

Step 2
In the differential equation (2.4) so found in step (1), replace f(x, y) by
the negative reciprocal

−1/f(x, y)

to obtain the differential equation satisfied by the orthogonal trajectories,
given by

dy/dx = −1/f(x, y).     (2.5)

Step 3
Obtain a one parameter family
G(x, y, c) = 0,
or
y = φ(x, c)
of the solutions of (2.5), thus obtaining the desired family of trajectories.

Notice that from calculus, the product of the slope of the curve (2.1)
and its orthogonal trajectory at the point (x, y) must be −1. This in-
formed the right hand side of Equation (2.5) in Step 2, above.

2.2.3.Example
Find the orthogonal trajectories of the family of parabola given by:
y = cx2 (2.6)

Step 1
Differentiating (2.6), we have:

dy/dx = 2cx.     (2.7)

Eliminating the parameter c between Equations (2.6) and (2.7), we have
the differential equation of the family in the form:

dy/dx = 2y/x.     (2.8)
Step 2
Replace the right hand side of (2.8), that is 2y/x, by its negative reciprocal
to obtain

dy/dx = −x/(2y).     (2.9)
Separating the variables and integrating, we have

∫ 2y dy = −∫ x dx + C.

That is
x2 + 2y 2 = k 2 ,
where k is an arbitrary constant. This is a family of ellipses with center
at the origin.
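The orthogonality in Example 2.2.3 is easy to confirm numerically (a check of our own, not from the text): at any common point with x ≠ 0 and y ≠ 0, the slope 2y/x of a parabola from (2.8) and the slope −x/(2y) of an ellipse from (2.9) multiply to −1. The point (x0, y0) is an arbitrary illustrative choice:

```python
# At a common point of the two families, the product of the slopes must be -1.
x0, y0 = 1.5, 2.0
slope_parabola = 2 * y0 / x0       # from (2.8); independent of c
slope_ellipse = -x0 / (2 * y0)     # from (2.9); independent of k
assert abs(slope_parabola * slope_ellipse + 1.0) < 1e-12
```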

2.3. Oblique trajectories:


First we present the following definition.
2.3.1. Definition Let

F(x, y, c) = 0     (2.10)

be a one-parameter family of curves.
A curve that intersects the curves of the family (2.10) at a constant angle
α ≠ π/2 is called an oblique trajectory of the given family.
Suppose that the differential equation of a family of curves is

dy/dx = f(x, y).     (2.11)

Then the curve of the family (2.11) through the point (x, y) has the slope
f(x, y) at (x, y), and hence its tangent line has angle of inclination

tan^(−1)[f(x, y)]

there.
The tangent line of an oblique trajectory which intersects the curve at
an angle α will have angle of inclination α + tan^(−1)[f(x, y)] at the point
(x, y). Hence the slope of this oblique trajectory is given by

tan(α + tan^(−1)[f(x, y)]) = (f(x, y) + tan α)/(1 − f(x, y) tan α).     (2.12)

The differential equation of such a family of oblique trajectories is given
by

dy/dx = (f(x, y) + tan α)/(1 − f(x, y) tan α).     (2.13)

Therefore to obtain a family of oblique trajectories we may follow the
same procedure above for the case of orthogonal trajectories except that
we replace step 2 by the following steps.

Step 2
In the differential equation dy/dx = f(x, y) of a given family, replace f(x, y)
by the expression:

(f(x, y) + tan α)/(1 − f(x, y) tan α).     (2.14)

2.3.2. Example
Find the family of oblique trajectories that intersects the family of
straight lines
    y = Cx
at angle 45° = π/4.

Solution
Step 1
Here α = 45°, so tan α = 1. From y = Cx we find dy/dx = C; eliminating C, we
have

    dy/dx = y/x                                   (2.15)

for the given family of straight lines.

Step 2
With f(x, y) = y/x in (2.15), use Equation (2.13) to obtain:

    dy/dx = (f(x, y) + tan α)/(1 - f(x, y) tan α)
          = (y/x + 1)/(1 - y/x) = (x + y)/(x - y).          (2.16)

Step 3
Solve the differential equation (2.16). Observe that this is a homogeneous
equation, so we let y = vx to obtain

    v + x dv/dx = (1 + v)/(1 - v),

or

    ((v - 1)/(v^2 + 1)) dv = -dx/x.
Integrating,

    (1/2) ln(v^2 + 1) - tan^{-1} v = -ln x - ln C.

This implies

    ln[C^2 x^2 (v^2 + 1)] - 2 tan^{-1} v = 0.

Replacing v by y/x, we have the family of oblique trajectories in the form:

    ln[C^2 (x^2 + y^2)] - 2 tan^{-1}(y/x) = 0.
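As a consistency check (a sketch assuming the sympy library is available), implicit differentiation of the family just found should reproduce the slope (2.16):

```python
import sympy as sp

x, y = sp.symbols("x y")

# Implicit form of the oblique trajectories; the additive constant
# drops out on differentiation and is omitted here
F = sp.log(x**2 + y**2) - 2*sp.atan(y/x)

dydx = sp.idiff(F, y, x)
print(sp.simplify(dydx - (x + y)/(x - y)))  # 0, i.e. (2.16) holds
```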

2.4. Exercises
Find the orthogonal trajectories of the following families of curves and draw
a few representative curves of each family.

(1) x^2 - y^2 = C

(2) y^2 = Cx^3

(3) x^3 = 3(y - C)

(4) Find a family of oblique trajectories which intersect the family of
parabolas:
    y^2 = Cx
at angle 60°.

2.5. Economic Model


Let the price function P = P (t), the supply function S(t) and demand
function D(t) of a commodity at any time t be continuous functions of
the time t in some interval in the positive segment of the real line. Assume
that the rate of change of the price with respect to time satisfies

    dP/dt = k(D(t) - S(t)),                       (2.17)

where k is a constant. The foregoing implies that the rate of change of
price with respect to time t is directly proportional to the difference
(D(t) - S(t)) between the demand and supply functions at any time. It then
follows that dP/dt > 0 if and only if D(t) > S(t), and the price increases. If
S(t) > D(t) at any time, then dP/dt < 0 and the price decreases.
We must determine the supply and demand functions. In this model, we
shall assume that the supply S(t) is seasonal and periodic. To be specific
we take

    S(t) = C(1 - cos αt),                         (2.18)
where C and α are positive constants. Then, arising from the periodic
properties of the cosine function, S(t) is periodic and nonnegative. That
is
S(t + T ) = S(t)
for some real number T called the period of S(t).
Next we assume that the demand function depends not only on the price
but is also a decreasing function of the price. The simplest such
function is a linear function of the form:

D(t) = a − bP (t) (2.19)

where a and b are positive constants.


Thus 0 < P < a/b, since D must not be negative.
Inserting the expressions (2.18) for S(t) and (2.19) for D(t) into (2.17)
yields:

    dP/dt = k[a - bP - c(1 - cos αt)],   P(0) = P_0.          (2.20)
Notice that the differential equation in (2.20) is linear of the form (1.40).
Solving (2.20), we have:

    P(t) = [P_0 - (a-c)/b - k^2 bc/(k^2 b^2 + α^2)] e^{-kbt}
           + (a-c)/b + kc/(k^2 b^2 + α^2) (kb cos αt + α sin αt).   (2.21)

The limiting value of the price P(t) after a long time (that is, as t → ∞)
is given by

    P(t) ≈ (a-c)/b - kc/(k^2 b^2 + α^2)^{1/2} sin(αt + θ),

where

    θ = tan^{-1}(kb/α).

Thus P(t) fluctuates about the value (a-c)/b.

The supply function S(t) is minimum when t is an integral multiple of 2π/α.
However, the price P(t) is not a maximum at these times. It is maximum
when

    t = [2nπ - (π/2 + θ)]/α,

where n is any integer different from zero.

2.6. The Rate of Cooling of a Chemical Reaction
If a body cools in a surrounding medium, it might be expected that the
rate of change of the temperature of the body would depend on the dif-
ference between the temperature of the body and that of the surrounding
medium. Newton's law of cooling asserts that the rate of change is di-
rectly proportional to the difference of the temperatures. Thus if U(t) is
the temperature of the body at a time t and U_0 is the constant tempera-
ture of the surrounding medium, then

    dU/dt = -k(U(t) - U_0),                       (2.22)

where k is a positive constant. The negative sign on the right-hand side
of Equation (2.22) occurs because dU/dt will be negative when U > U_0, that
is, when the body is expected to cool down from its high temperature
to that of the surrounding medium. Consider the following example for
illustration.

2.6.1. Example: Suppose that an object is heated to 300°F and allowed
to cool in a room whose temperature is 80°F. If after 10 minutes the
temperature is 250°F, what will be its temperature after 20 minutes?

Solution: From Equation (2.22),

    dU/dt = -k(U - 80),                           (2.23)

and the initial conditions are U(0) = 300 and U(10) = 250.
Equation (2.23) is linear and separable. To determine the constant k, we
have

    ∫_{300}^{250} dU/(U - 80) = -k ∫_0^{10} dt,

from which

    k = (1/10) ln(220/170) = 0.0258.

To determine the temperature U(t) of the body when t = 20, we go back
to (2.23) and write:

    ∫_{300}^{U} dU/(U - 80) = -k ∫_0^{20} dt.

Evaluating the integrals, we obtain

    ln((U - 80)/220) = -20k,

or

    U = 220 e^{-20k} + 80 = 220 e^{-20(0.0258)} + 80 ≈ 211°F.
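The arithmetic above can be reproduced numerically; a minimal sketch using only the standard library:

```python
import math

U_room = 80.0            # ambient temperature (°F)
U0 = 300.0               # initial temperature (°F)

# Fit k from U(10) = 250:  k = ln((U0 - U_room)/(U(10) - U_room)) / 10
k = math.log((U0 - U_room) / (250.0 - U_room)) / 10.0

# Temperature after 20 minutes: U(t) = U_room + (U0 - U_room) e^{-kt}
U20 = U_room + (U0 - U_room) * math.exp(-k * 20.0)
print(round(k, 4), round(U20, 1))  # → 0.0258 211.4
```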

2.6.2. Exercises
1. The growth of a population is said to follow a logistic law if the
population satisfies the differential equation:
    dX/dt = KX(t)(M - X(t)),
where X(t) is the population size at time t and M is the maximum size
possible and K is a constant. Assume that a College enrolment follows
the logistic law. At the beginning, there are 10,000 students. If the
maximum enrolment that the College can take is 25,000 students and
in five years enrolment has reached 20,000 students, what will be the
enrolment in five more years?

2. Suppose that in a certain chemical reaction, a substance P is trans-
formed into a substance X. Let ρ be the initial concentration of P, let
x(t) be the concentration of X at time t, and let ρ - x(t) be the concen-
tration of P at time t. Suppose further that the reaction is autocatalytic
in the sense that dx/dt is jointly proportional to x and ρ - x, that is,

    dx/dt = αx(ρ - x),   x(0) = x_0.

Find x(t).

3. Find the orthogonal and oblique trajectories at angle π/4 of the family
of curves
    x^2 + y^2 = Kx,
where K is a constant.

4. A body of mass m is dropped from rest in a medium offering resistance
proportional to the magnitude of the velocity. Find the time that elapses
before the velocity of the object reaches 90 percent of its limiting value.

5. Experience suggests that the rate at which people contribute to a char-
ity drive is proportional to the difference between the current total and
the announced target goal. A drive is announced with a target set at 90,000
Naira and an initial contribution of 10,000 Naira. After one month,
30,000 Naira has been contributed. What does the model predict at the
end of two months?

6. In a certain isolated population P(t), the rate of population growth
dP/dt is a function of the population given by

    dP/dt = F(P).

Assume that the function takes the simplest form

    F(P) = λP,

where λ is a positive constant. Find the population P(t) at any time t if
the initial population is P(0) = p_0. What is the limit

    lim_{t→∞} P(t)?

Chapter Three
Second Order Ordinary Differential Equations
3.1 Introduction

We shall discuss some methods of solving some special cases of second or-
der ordinary differential equations in this chapter. We restrict ourselves
to the class of such equations that may be handled by special methods to
be discussed here. We reserve more general situations to future chapters.
First we present the following definition.

3.1.1. Definition. An equation of the form:

    F(x, y, y', y'') = 0                          (3.1)

is called a second order ordinary differential equation, involving second
order derivatives of the unknown function y(x) as the highest order.

For the rest of this chapter, we shall consider two classes of Equation
(3.1) that can be solved by successively solving two first-order equations:

Class I: The case when the dependent variable y is missing.
Such equations will have the form:

    G(x, y', y'') = 0.                            (3.2)

Suppose that y is a solution of Equation (3.2). If we set v = dy/dx,
then v must be a solution of the first order equation:

    G(x, v, v') = 0.                              (3.3)

Hence, if Equation (3.3) is solvable, then the solution of the original


equation (3.2) can be found from the relation:

    dy/dx = v(x)
by integration.
Consider the following examples

3.1.2. Example: Solve the differential equation:

    x d^2y/dx^2 = 2[(dy/dx)^2 - dy/dx].           (3.4)

Equation (3.4) is of the form (3.2), where the dependent variable y is
missing. Setting v = dy/dx, we have the first order ODE

    x dv/dx = 2(v^2 - v)                          (3.5)
for v. Equation (3.5) is separable and we have

    dv/(v^2 - v) = 2 dx/x,                        (3.6)

or

    (1/(v - 1) - 1/v) dv = 2 dx/x.

Integrating, we have:

    ln|(v - 1)/v| = 2 ln|x| + ln C.

Therefore

    (v - 1)/v = C_1 x^2.
v
Thus

    v = dy/dx = 1/(1 - C_1 x^2).

If C_1 > 0, say C_1 = a^2 for some a ∈ IR, then

    dy/dx = 1/(1 - a^2 x^2),

so that

    y(x) = (1/2a) ln|(1 + ax)/(1 - ax)| + C_2.

On the other hand, if C_1 < 0, then we can write C_1 = -b^2 for some real
number b. Hence, we have

    dy/dx = 1/(1 + b^2 x^2).

Integrating, we have

    y(x) = (1/b) tan^{-1}(bx) + C_2.

Finally, since the constant functions v = 0 and v = 1 are solutions of Equa-
tion (3.5), the functions y = C and y = x + C are also solutions of the given
differential equation (3.4).
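Both branches of the solution can be checked by substitution into (3.4); the following sketch assumes the sympy library:

```python
import sympy as sp

x, a, b, C2 = sp.symbols("x a b C_2")

def residual(y):
    # Left side minus right side of (3.4): x y'' - 2[(y')^2 - y']
    return sp.simplify(x*y.diff(x, 2) - 2*(y.diff(x)**2 - y.diff(x)))

y1 = sp.log((1 + a*x)/(1 - a*x))/(2*a) + C2   # C_1 = a^2 branch
y2 = sp.atan(b*x)/b + C2                      # C_1 = -b^2 branch

print(residual(y1), residual(y2))  # 0 0
```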

Class II: The case when the independent variable is missing. Such equations
have the form:

    H(y, y', y'') = 0.                            (3.7)

Suppose that the equation has a solution y and let

    v = dy/dx.

On an interval where y is strictly increasing or decreasing, the function
v can be regarded as a function of y and we can write:

    d^2y/dx^2 = dv/dx = (dv/dy)(dy/dx) = v dv/dy.

Then equation (3.7) becomes:

    H(y, v, v dv/dy) = 0.                         (3.8)

Equation (3.8) is a first order differential equation for v. If this differential
equation is solvable, then a solution of (3.7) can be found by solving the
first order equation:

    dy/dx = v(y).                                 (3.9)
As illustration of this procedure, we present the following example:

3.1.3. Example: Solve the differential equation:

    y d^2y/dx^2 = (dy/dx)^2 + 2 dy/dx.            (3.10)

Solution: Notice that the independent variable x is missing.
Setting

    v = dy/dx,   d^2y/dx^2 = v dv/dy,

Equation (3.10) becomes:

    y v dv/dy = v^2 + 2v.                         (3.11)

By inspection, we see that v = 0 and v = -2 are solutions of (3.11).
Hence

    y(x) = C,   y(x) = -2x + C

are solutions of Equation (3.10). To find the remaining solutions, we
divide through by v in (3.11) to get:

    y dv/dy = v + 2.

Separating the variables and integrating, we have:

    ∫ dv/(v + 2) = ∫ dy/y + ln c_1.

That is,

    ln(v + 2) = ln y + ln c_1,

or

    ln(v + 2) = ln(c_1 y).

Hence,

    v = c_1 y - 2.

Next, for the case c_1 ≠ 0, we solve the equation:

    dy/dx = c_1 y - 2.

Separating the variables, we get:

    ∫ dy/(c_1 y - 2) = ∫ dx,

that is,

    (1/c_1) ∫ c_1 dy/(c_1 y - 2) = ∫ dx,

so that by integrating, we get:

    (1/c_1) ln(c_1 y - 2) = x + c,

or

    c_1 y - 2 = e^{c_1 x + c} = c_2 e^{c_1 x},

where c_2 = e^c. Therefore,

    y = (1/c_1)(c_2 e^{c_1 x} + 2).

When c_1 = 0, we have v = -2, giving a solution

    y(x) = -2x + c.
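The family just obtained can be verified against (3.10) by substitution; a sketch assuming sympy:

```python
import sympy as sp

x, c1, c2 = sp.symbols("x c_1 c_2")

# General solution of 3.1.3 for c_1 != 0
y = (c2*sp.exp(c1*x) + 2)/c1

# Residual of (3.10): y y'' - (y')^2 - 2 y'
res = sp.simplify(y*y.diff(x, 2) - y.diff(x)**2 - 2*y.diff(x))
print(res)  # 0
```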

3.2. Exercises
Find the general solution of the following differential equations.

dx d2 x dx
(1) 2t · 2 = ( )2 + 1.
dt dt dt
2
dx dx
(2) 2
= + 2t.
dt dt
d2 y dy
(3) 2x 2 = ( )2 − 1.
dx dx
2
dy dy
(4) y 2 + ( )2 = 0.
dx dx
2
dy dy
(5) y 2 + ( )2 + a2 = 0.
dx dx
2
dV 1 dV
(6) + = 0.
dr62 r dr
d2 y
(7) = 4y.
dx2

Chapter Four
Second Order Homogeneous Linear Equations

4.1 Introduction

The general second order linear ordinary differential equation is of the
form:

    a_0(x) d^2y/dx^2 + a_1(x) dy/dx + a_2(x) y = f(x),   x ∈ I ⊆ IR,   (4.1)

where a_0, a_1, a_2 and f are given functions of x in the interval I ⊆ IR.
When a_0, a_1 and a_2 are constants and f(x) = 0 for all x ∈ I, then (4.1)
is called a homogeneous second order linear ordinary differential equation
with constant coefficients.
Thus we have:

    a_0 d^2y/dx^2 + a_1 dy/dx + a_2 y = 0.        (4.2)
If y = y_1(x) and y = y_2(x) are solutions of (4.2), so also is the sum y =
y_1(x) + y_2(x). This follows from the fact that if y_1 is a solution, then

    a_0 d^2y_1/dx^2 + a_1 dy_1/dx + a_2 y_1 = 0.  (4.3)

Also, if y_2 is a solution, then

    a_0 d^2y_2/dx^2 + a_1 dy_2/dx + a_2 y_2 = 0.  (4.4)

Adding Equations (4.3) and (4.4), we have

    a_0 (d^2y_1/dx^2 + d^2y_2/dx^2) + a_1 (dy_1/dx + dy_2/dx) + a_2 (y_1 + y_2) = 0.   (4.5)

Notice that by the linearity of the differential operator, the last equation
can be written as:

    a_0 d^2/dx^2 (y_1 + y_2) + a_1 d/dx (y_1 + y_2) + a_2 (y_1 + y_2) = 0.   (4.6)

Equation (4.6) is the original equation with y replaced by y_1 + y_2.
Therefore, y_1 + y_2 is also a solution.

4.2. Procedures for Solution: To solve equation (4.2), we need a function
such that its derivatives are constant multiples of itself. Such a function
is the exponential function. Notice that if a_0 = 0, we obtain the first order
equation of the same family. That is, for a_1 ≠ 0, a_2 ≠ 0, we have

    a_1 dy/dx + a_2 y = 0.

Dividing the last equation by a_1, we have

    dy/dx + (a_2/a_1) y = 0.

Setting k = a_2/a_1, we have

    dy/dx + ky = 0.

Solving by separating variables, we have

    dy/dx = -ky,

so that

    ∫ dy/y = -k ∫ dx + c.

Carrying out the integration, we obtain:

    ln y = -kx + c.

Hence,

    y = e^{-kx + c} = e^c e^{-kx} = A e^{-kx}.

Let -k = m; then y = A e^{mx}. Now assume that y = A e^{mx} is a solution of
(4.2) for a certain m. Then

    dy/dx = A m e^{mx},   d^2y/dx^2 = A m^2 e^{mx}.

Substituting these values in Equation (4.2), we have:

    a_0 A m^2 e^{mx} + a_1 A m e^{mx} + a_2 A e^{mx} = 0.   (4.7)

That is,

    a_0 m^2 + a_1 m + a_2 = 0.                    (4.8)

Equation (4.8) is called the auxiliary equation or characteristic equation
of the given differential equation (4.2).

Three cases arise according to the roots of (4.8).

These are as follows:
(1) When the roots of (4.8) are real and distinct.

(2) when the roots of (4.8) are real and repeating.

(3) When the roots of (4.8) are complex numbers.

We shall examine each case as follows through relevant examples:


Case 1: Real and Distinct Roots: Consider the differential equation:

    d^2y/dx^2 - 3 dy/dx + 2y = 0.                 (4.9)

In Equation (4.9), the coefficients of the differential equation are as follows:
a_0 = 1, a_1 = -3, and a_2 = 2. The auxiliary equation (4.8) for this case is
given by

    m^2 - 3m + 2 = 0.

Solving for m, we have

    (m - 1)(m - 2) = 0.

Hence,

    m_1 = 1,   m_2 = 2.

Since the roots are real and distinct, the solutions are y_1 = A_1 e^{m_1 x} and
y_2 = A_2 e^{m_2 x}. The general solution is

    y = y_1 + y_2 = A_1 e^{m_1 x} + A_2 e^{m_2 x}.

That is,

    y = A_1 e^x + A_2 e^{2x}.
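The same answer can be obtained mechanically; the sketch below (assuming the sympy library) solves (4.9) and confirms that the returned expression satisfies the equation:

```python
import sympy as sp

x = sp.symbols("x")
y = sp.Function("y")

sol = sp.dsolve(y(x).diff(x, 2) - 3*y(x).diff(x) + 2*y(x), y(x))
print(sol)  # a combination of exp(x) and exp(2*x)

# The returned expression satisfies the ODE
rhs = sol.rhs
print(sp.simplify(rhs.diff(x, 2) - 3*rhs.diff(x) + 2*rhs))  # 0
```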
(2) Again, consider the differential equation:

    d^2y/dx^2 = -4 dy/dx + 5y.                    (4.10)

The auxiliary equation (4.8) for this equation is given by:

    m^2 + 4m - 5 = 0.

Solving for m, we have

    (m - 1)(m + 5) = 0.

This implies that m_1 = 1, m_2 = -5.
Therefore the general solution of equation (4.10) is given by:

    y(x) = A_1 e^x + A_2 e^{-5x}.

Case 2: Repeated Real Roots: Consider the differential equation:

    d^2y/dx^2 - 6 dy/dx + 9y = 0.                 (4.10)

Then, the auxiliary equation (4.8) for the given equation (4.10) is

    m^2 - 6m + 9 = 0.

Solving the quadratic equation, we have

    (m - 3)^2 = 0.

Therefore

    m_1 = 3,   m_2 = 3.

If, by the previous method, we write

    y = A_1 e^{3x} + A_2 e^{3x} = (A_1 + A_2) e^{3x} = A e^{3x},

where A is an arbitrary constant, we obtain only one arbitrary constant.
But we know that the general solution of a second order differential
equation must contain two arbitrary constants, so there must be another
term containing a second constant. Thus we must find two linearly
independent solutions, and

    y = e^{m_1 x}(A + Bx)

is the general solution of (4.10), as justified by the following proposition.

4.2.1 Proposition: Let y be a nontrivial solution of the differential equa-
tion (4.2). If the auxiliary equation (4.8) has a real root m occurring
twice, then the general solution of (4.2) is

    y(x) = (A_1 + A_2 x) e^{mx},                  (4.9)

where A_1 and A_2 are arbitrary constants.

Proof: For convenience, Equation (4.8) is transformed to

    m^2 + am + b = 0,                             (4.10)

where a = a_1/a_0, b = a_2/a_0. Thus the solutions of (4.10) are

    m_{1,2} = (-a ± √(a^2 - 4b))/2 = -a/2,
since m is a repeated root (so a^2 - 4b = 0). Let

    y(x) = e^{-ax/2}                              (4.11)

be a solution of equation (4.2). Then we show that

    y_2(x) = x e^{-ax/2}                          (4.12)

is another, independent solution. Differentiating (4.12), we have:

    y_2'(x) = (1 - (a/2)x) e^{-ax/2}

and

    y_2''(x) = ((a^2/4)x - a) e^{-ax/2},

so that

    y_2''(x) + a y_2'(x) + b y_2(x)
      = e^{-ax/2} [((a^2/4)x - a) + a(1 - (a/2)x) + bx]
      = e^{-ax/2} [-(a^2/4)x + bx].

But a^2 = 4b, or b = a^2/4. Therefore

    e^{-ax/2} [-(a^2/4)x + bx] = 0.

Thus y_2(x) = x e^{-ax/2} satisfies equation (4.2), and the general solution is

    y(x) = A_1 e^{-ax/2} + A_2 x e^{-ax/2} = e^{-ax/2}[A_1 + A_2 x].

4.2.2. Definition (Linear Independence): A solution of a differential equa-
tion of the second order (linear or nonlinear) is called a general solution
if it contains two independent arbitrary constants. That is, it cannot be
reduced to an expression containing fewer than two arbitrary constants.

4.2.3. Definition: Let y_1(x) and y_2(x) be functions defined on an open
interval I ⊆ IR. The functions y_1(x) and y_2(x) are said to be linearly
dependent on I if there exist constants c_1 and c_2 (not both zero) such
that

    c_1 y_1(x) + c_2 y_2(x) = 0.                  (4.13)

If no such relation exists, the functions y1 and y2 are said to be linearly
independent.

Note: If y_1 and y_2 are linearly dependent, then by Equation (4.13) there
exist constants c_1 and c_2 (not both zero) such that

    c_1 y_1(x) + c_2 y_2(x) = 0.

Hence, we can write

    y_1(x) = -(c_2/c_1) y_2(x) = k y_2(x),   k = -c_2/c_1,   c_1 ≠ 0.

Therefore y_1(x) is a constant multiple of y_2(x), or vice versa, for all values
of x on I ⊆ IR.

4.2.4. Example: Consider the functions y_1(x) = 8x, y_2(x) = 3x. These
functions are linearly dependent on any interval since

    y_1(x)/y_2(x) = 8x/3x = 8/3 = constant.

But y_1(x) = x^3 and y_2(x) = x^2 are linearly independent since

    y_1(x)/y_2(x) = x^3/x^2 = x ≠ constant

on any interval in IR. Similarly, for the functions y_3(x) = x + 2, y_4(x) = x,
we have

    y_3(x)/y_4(x) = (x + 2)/x = 1 + 2/x ≠ constant,   ∀ x ∈ I\{0}.

4.2.5. Definition: Two linearly independent solutions of (4.2) on an in-


terval I are called a basis or a fundamental system of solutions on I. A
general solution is a linear combination of the basis functions.

Case 3: Complex Conjugate Roots: Suppose the auxiliary equation

    a_0 m^2 + a_1 m + a_2 = 0,

transformed to

    m^2 + am + b = 0,

has complex roots

    m_1 = α + iβ,   m_2 = α - iβ,

where α and β are real numbers and β ≠ 0. The solutions are of the form:

    y_1(x) = e^{(α+iβ)x},   y_2(x) = e^{(α-iβ)x}.  (4.14)

The general solution will be of the form

    y(x) = A_1 e^{(α+iβ)x} + A_2 e^{(α-iβ)x} = A_1 e^{αx} e^{iβx} + A_2 e^{αx} e^{-iβx}
         = e^{αx} [A_1 e^{iβx} + A_2 e^{-iβx}].   (4.15)

Applying Euler's formula:

    e^{iθ} = cos θ + i sin θ,
    e^{-iθ} = cos θ - i sin θ.

Thus from Equation (4.14), we get

    y_1(x) = e^{(α+iβ)x} = e^{αx} [cos βx + i sin βx]

and

    y_2(x) = e^{(α-iβ)x} = e^{αx} [cos βx - i sin βx].

From Equation (4.15), we have

    y(x) = e^{αx} [A_1 (cos βx + i sin βx) + A_2 (cos βx - i sin βx)]
         = e^{αx} [A cos βx + B sin βx],

where

    A = A_1 + A_2,   B = i(A_1 - A_2).

The corresponding general solution is

    y(x) = e^{αx} [A cos βx + B sin βx],

where A and B are arbitrary constants.

4.2.6. Example: Find the general solution of the differential equation:

    y'' - 2y' + 10y = 0.

We first form the associated auxiliary equation, given by

    m^2 - 2m + 10 = 0.

Solving, we have:

    m = (2 ± √(4 - 40))/2 = 1 ± 3i.

Thus,

    m_1 = 1 + 3i,   m_2 = 1 - 3i.

In this case,

    α = 1,   β = 3.

By Equation (4.15), the general solution is given by

y(x) = ex [A cos 3x + B sin 3x].

4.2.7. Example: Solve y'' + cy = 0, c ≠ 0.

We consider the cases where c is positive or negative. If c > 0, then we
can write c = k^2 for some k > 0. Then

    y'' + k^2 y = 0.

The auxiliary equation is given by

    m^2 + k^2 = 0.

Solving the auxiliary equation, we have

    m = ±ik,

or

    m_1 = ik,   m_2 = -ik.

In this case,

    α = 0,   β = k.

The general solution is therefore

    y(x) = A cos kx + B sin kx.                   (4.16)

If c < 0, we can write c = -k^2 for some constant k. Therefore

    m_{1,2} = ±k,

that is,

    m_1 = k,   m_2 = -k.

Therefore

    y(x) = A e^{kx} + B e^{-kx}.                  (4.17)
In order to rewrite expression (4.17) in terms of the hyperbolic cosine
and sine functions, recall that

    cosh nx = (1/2)(e^{nx} + e^{-nx})

and

    sinh nx = (1/2)(e^{nx} - e^{-nx}).

Hence,

    2 cosh nx = e^{nx} + e^{-nx}

and

    2 sinh nx = e^{nx} - e^{-nx}.

Adding the last two expressions, we get:

    e^{nx} = cosh nx + sinh nx,

and by subtraction, we have

    e^{-nx} = cosh nx - sinh nx.

Substituting into the general solution (4.17), we have

    y(x) = (A + B) cosh kx + (A - B) sinh kx = A_1 cosh kx + A_2 sinh kx.

In summary, we have shown that, for the equation:

    d^2y/dx^2 + k^2 y = 0,

the general solution is

    y(x) = A_1 cos kx + A_2 sin kx.

In the case of the equation

    d^2y/dx^2 - k^2 y = 0,

the general solution is given by

    y(x) = A_1 cosh kx + A_2 sinh kx.

We remark here that the second order ordinary differential equations

    y'' ± k^2 y = 0

appear frequently in applications, especially in the solutions of the wave
and heat partial differential equations in one dimension.

4.2.8. Example: Solve the initial value problem:

    d^2y/dx^2 - 6 dy/dx + 25y = 0,
    y(0) = -3,   y'(0) = -1.

Solution: The auxiliary equation is given by:

    m^2 - 6m + 25 = 0,
with complex conjugate solutions

    m_{1,2} = 3 ± 4i.

Here,

    α = 3,   β = 4.

The general solution may be written

    y(x) = e^{3x} [A_1 sin 4x + A_2 cos 4x].      (4.18)

In order to employ the initial conditions for the computation of the values
of the constants A_1 and A_2, we proceed as follows. Differentiating y(x)
in Equation (4.18), we have:

    dy/dx = e^{3x} [(3A_1 - 4A_2) sin 4x + (4A_1 + 3A_2) cos 4x].   (4.19)

Applying the initial value y(0) = -3 by putting x = 0 and y = -3 in
(4.18), we have:

    -3 = e^0 [A_1 sin 0 + A_2 cos 0],

so that

    A_2 = -3.

Applying the condition y'(0) = -1 in (4.19), we have:

    -1 = e^0 [(3A_1 - 4A_2) sin 0 + (4A_1 + 3A_2) cos 0].

That is,

    4A_1 + 3A_2 = -1.

Substituting for A_2, we have A_1 = 2, A_2 = -3. Replacing these values in the
general solution (4.18), we finally get

    y(x) = e^{3x} [2 sin 4x - 3 cos 4x],

or

    y(x) = √13 e^{3x} sin(4x + θ),

where θ is defined by sin θ = -3/√13 and cos θ = 2/√13.

4.3 Applications to Electric Circuits.


In what follows, we present some standard definitions of terms as they
apply to electric circuits. The electric charge Q on a capacitor at time t
seconds is measured in coulombs. The current I (in amperes) is the rate of
flow of charge; that is, I = dQ/dt. Furthermore, E is the electromotive force
(emf) or voltage, measured in volts, across the circuit. R is the resistance,
measured in ohms; a resistor is a component of the circuit that opposes the
current. L represents the inductance, measured in henries; an inductor
opposes a change in current. C measures the capacitance in farads; the
capacitor stores energy.

Laboratory experiments have shown that the following hold:

(1) The voltage drop E_R across the resistor is proportional to the in-
stantaneous current I flowing. That is,

    E_R = IR,                                     (4.20)

where R, the constant of proportionality, is the resistance of the resistor.

(2) The voltage drop E_L across an inductor is proportional to the in-
stantaneous time rate of change of the current. That is,

    E_L = L dI/dt,                                (4.21)

where L is a constant.

(3) The voltage drop across the capacitor is proportional to the instan-
taneous electric charge Q on the capacitor; that is,

    E_c = Q/c.                                    (4.22)
The fundamental principle governing such electric circuits is Kirchhoff's
law. This states that the algebraic sum of all the voltage drops around a
closed circuit is zero. Thus

    E_R + E_L + E_c - E = 0.                      (4.23)

That is,

    IR + L dI/dt + Q/c = E,

or

    L dI/dt + RI + Q/c = E.                       (4.24)

Since dQ/dt = I, we may differentiate equation (4.24) (for constant E) to
get a second order homogeneous differential equation of the form:

    L d^2I/dt^2 + R dI/dt + (1/c) I = 0.          (4.25)

To solve the last equation, we write out the auxiliary equation:

    m^2 + (R/L) m + 1/(cL) = 0.

Thus,

    m_{1,2} = (-R ± √(R^2 - 4L/c))/(2L).          (4.26)

Consider equation (4.24):

    L dI/dt + RI + Q/c = E.

Setting I = dQ/dt, we have

    L d^2Q/dt^2 + R dQ/dt + Q/c = E,              (4.27)

a second order linear equation in Q with constant coefficients L, R, 1/c.
The last equation may be solved by employing techniques presented
already.

4.4. Exercises: Show that the general solution of Equation (4.27), for
constant E, is given by:

    Q = cE + e^{-Rt/2L} (A sin wt + B cos wt),

where A and B are constants and w = √(1/(Lc) - R^2/(4L^2)). When t = 0,
dQ/dt = I_0.

Chapter Five
Second Order Non-Homogeneous Differential Equations
5.1 Introduction

We have so far considered the homogeneous differential equation of the
form:

    a_0(x) y'' + a_1(x) y' + a_2(x) y = 0,        (5.1)

whose general solution (in the constant-coefficient case) was of the form:

    y(x) = A_1 e^{m_1 x} + A_2 e^{m_2 x},         (5.2)

where m_1, m_2 are the roots of the auxiliary equation. Now, consider the
following non-homogeneous differential equation:

    a_0(x) y'' + a_1(x) y' + a_2(x) y = f(x).     (5.3)

Substituting for y(x) in (5.3) using (5.2) will make the left-hand side of
(5.3) equal to zero. Therefore there must be a further term in (5.2) that
will make the left-hand side equal to f(x). The general solution of (5.3) can
be written in the form:

    y(x) = [A_1 e^{m_1 x} + A_2 e^{m_2 x}] + y_p(x),   (5.4)

where y_p(x) is called the particular integral and the term in the square
bracket is called the complementary function (the general solution of the
homogeneous differential equation (5.1)).

5.2. Method of Undetermined Coefficients: The method of undeter-
mined coefficients is a technique for solving Equation (5.3) when f(x) takes
a particular form. Let P_n(x) be a polynomial of degree n; then we shall
consider the cases when the function f(x) is of the general form:

(1) P_n(x)

(2) P_n(x) e^{ax}

(3) P_n(x) sin βx or P_n(x) cos βx

(4) P_n(x) sinh x or P_n(x) cosh x.

The technique is to guess that there is a solution to (5.3) in the same ba-
sic form as f(x), and then substitute this guessed solution into (5.3) to
determine the unknown coefficients. This is called the method of unde-
termined coefficients. Thus to solve the non-homogeneous equation:

    a_0 y'' + a_1 y' + a_2 y = f(x)

by the method of undetermined coefficients, the following procedures are
followed:

(a) The complementary function is obtained by solving the equation with
f(x) = 0. This should give one of the following:

(1) y(x) = A_1 e^{m_1 x} + A_2 e^{m_2 x} (real and distinct roots of the auxiliary
equation).

(2) y(x) = e^{ax}(A cos bx + B sin bx) (complex conjugate roots).

(3) y(x) = e^{ax}(A + Bx) (real and repeated roots).

(4) y(x) = A cos αx + B sin αx.

(5) y(x) = A_1 cosh x + A_2 sinh x.

(b) The particular integral is found by assuming the general form of the
function f(x) as indicated above, substituting this into the given equation,
and solving for the unknown coefficients by forming algebraic equations in
the unknowns. The algebraic equations are formed by equating coeffi-
cients of the powers of the independent variable that appear on both
sides of the equation.

5.2.1. Example: Solve the differential equation

    y''(x) - 5y' + 6y = x^2.                      (5.5)

(a) To find the complementary function, solve the auxiliary equation:

    m^2 - 5m + 6 = 0,

to get:

    (m - 2)(m - 3) = 0,

so that m_1 = 2, m_2 = 3. Therefore

    y = A_1 e^{2x} + A_2 e^{3x}.                  (5.6)

(b) To find the particular integral, we assume the general form of the right-
hand side, which is a second degree polynomial function:

    y_p(x) = Cx^2 + Dx + E,                       (5.7)

where C, D, E are to be determined.

Then, differentiating (5.7), we have dy_p/dx = 2Cx + D, d^2y_p/dx^2 = 2C.
Substituting in the given equation (5.5), we have

    2C - 5(2Cx + D) + 6(Cx^2 + Dx + E) = x^2,

that is,

    6Cx^2 + (6D - 10C)x + (2C - 5D + 6E) = x^2.   (5.8)

Next, equate the coefficients in (5.8) to get:

    6C = 1  ⇒  C = 1/6,
    6D - 10C = 0  ⇒  D = 5/18,
    2C - 5D + 6E = 0  ⇒  E = 19/108.

The particular integral for Equation (5.5) is therefore given by:

    y_p(x) = (1/6)x^2 + (5/18)x + 19/108.

The general solution of the non-homogeneous problem (5.5) is

    y(x) = A_1 e^{2x} + A_2 e^{3x} + y_p(x)
         = A_1 e^{2x} + A_2 e^{3x} + (1/6)x^2 + (5/18)x + 19/108,

where A_1 and A_2 are arbitrary constants.
where A1 and A2 are arbitrary constants.

5.2.2. Example: Solve the differential equation:

    y'' + y = x e^{2x}.                           (5.9)

Here f(x) is of the form P_n(x) e^{ax}, where P_n(x) is a polynomial of degree
one. We try a particular solution of the form:

    y_p(x) = e^{2x}(a + bx).

Thus

    y_p'(x) = e^{2x}(2a + b + 2bx)

and

    y_p''(x) = e^{2x}(4a + 4b + 4bx).

Substituting in (5.9), we have:

    e^{2x}(4a + 4b + 4bx) + e^{2x}(a + bx) = x e^{2x}.

Dividing through by e^{2x} and equating coefficients, we have

    5a + 4b = 0,   5b = 1;

thus b = 1/5 and a = -4/25, and a particular solution is given by

    y_p(x) = (e^{2x}/25)(5x - 4).

The complementary solution (i.e. the solution of the homogeneous part
y'' + y = 0) is

    y_c(x) = A_1 sin x + A_2 cos x.

The general solution of (5.9) is:

    y(x) = A_1 sin x + A_2 cos x + (e^{2x}/25)(5x - 4).

5.2.3. Example: Consider the equation:

    y'' - y = 2e^x.                               (5.10)

We first solve the homogeneous part:

    y'' - y = 0

to get

    y_c(x) = A_1 e^x + A_2 e^{-x}.

Notice that f(x) is a solution of the homogeneous equation. A function
of the form y_p(x) = A e^x will not be a particular integral, since

    y_p''(x) = A e^x,

and substituting into equation (5.10) yields 0 = 2e^x, which is impossible
since 2 ≠ 0 and e^x ≠ 0 for all x ∈ IR. Let us instead guess that there
is a solution of the form:

    y_p(x) = A x e^x.

Then

    y_p'(x) = A e^x (x + 1)

and

    y_p''(x) = A e^x (x + 2).

Substituting into (5.10), we have:

    A e^x (x + 2) - A x e^x = 2e^x
    ⇒ 2A e^x = 2e^x ⇒ A = 1.

Therefore,

    y_p(x) = x e^x.

The general solution of (5.10) is

    y(x) = A_1 e^x + A_2 e^{-x} + x e^x.

Remarks: If, in the given second order differential equation

    y''(x) + ay'(x) + by(x) = f(x),

the function f(x) can be written as the sum

    f(x) = f_1(x) + f_2(x),

where f_1 and f_2 are both of one of the forms above, then we may use the
principle of superposition to solve the problem. That is, we may sepa-
rately solve the equations

    y''(x) + ay'(x) + by(x) = f_1(x),

with a particular solution y_{p1}(x), and

    y''(x) + ay'(x) + by(x) = f_2(x),

with a particular solution y_{p2}(x). Then a particular solution of the given
equation

    y''(x) + ay'(x) + by(x) = f(x) = f_1(x) + f_2(x)

is

    y_p(x) = y_{p1}(x) + y_{p2}(x).

Summary: We summarize the procedure as follows. Consider the non-
homogeneous equation

    y'' + ay' + by = f(x)                         (5.11)

and the homogeneous part

    y'' + ay' + by = 0.                           (5.12)

Case 1: If no term in f(x) is a solution of equation (5.12), then a partic-
ular solution of (5.11) will have the form y_p(x) according to the following
table:

    f(x)                     y_p(x)
    P_n(x)                   a_0 + a_1 x + a_2 x^2 + ... + a_n x^n
    P_n(x) e^{ax}            (a_0 + a_1 x + ... + a_n x^n) e^{ax}
    P_n(x) e^{ax} sin bx     (a_0 + ... + a_n x^n) e^{ax} sin bx + (c_0 + ... + c_n x^n) e^{ax} cos bx
    P_n(x) e^{ax} cos bx     (a_0 + ... + a_n x^n) e^{ax} sin bx + (c_0 + ... + c_n x^n) e^{ax} cos bx

Case 2: If any term of f(x) is a solution of (5.12), then we multiply the
appropriate function y_p(x) of Case 1 by x^k, where k is the smallest integer
such that no term in x^k y_p(x) is a solution of (5.12).
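Case 2 can be illustrated with y'' + y = cos x: the forcing term is itself a solution of the homogeneous equation, so the particular solution acquires a factor of x. A sketch assuming sympy:

```python
import sympy as sp

x = sp.symbols("x")
y = sp.Function("y")

# cos x solves y'' + y = 0, so this is Case 2 of the summary
sol = sp.dsolve(y(x).diff(x, 2) + y(x) - sp.cos(x), y(x))
print(sol.rhs)  # the particular part carries a factor of x: x*sin(x)/2
```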

5.3. Method of Variation of Constants

The method of undetermined coefficients should be used only when the
function f(x) is in one of the "correct" forms indicated above. The more
general method is the variation of constants. We again consider the equation:

    y'' + ay' + by = f(x),                        (5.13)

and assume that we have found two linearly independent solutions y_1 and
y_2 of the homogeneous equation

    y'' + ay' + by = 0,

so that a general solution is given by

    y(x) = C_1 y_1(x) + C_2 y_2(x).               (5.14)

Any particular solution y_p(x) of (5.13) must have the property that
y_p(x)/y_1(x) and y_p(x)/y_2(x) are not constants. This suggests that we
replace the constants C_1 and C_2 in (5.14) by two functions C_1(x) and
C_2(x) and then look for a particular solution of (5.13) in the form:

    y_p(x) = C_1(x) y_1(x) + C_2(x) y_2(x).       (5.15)

yp (x) = C1 (x)y1 (x) + C2 (x)y2 (x). (5.15)

Differentiating Equation (5.15) and dropping the subscript p in y_p(x), we
have

    y'(x) = C_1(x) y_1'(x) + C_2(x) y_2'(x) + C_1'(x) y_1(x) + C_2'(x) y_2(x).

To simplify this expression, it is convenient to set

    C_1'(x) y_1(x) + C_2'(x) y_2(x) = 0;          (5.16)

then

    y'(x) = C_1(x) y_1'(x) + C_2(x) y_2'(x).
Differentiating again

y 00 (x) = C1 (x)y100 (x) + C2 (x)y200 (x) + C10 (x)y10 (x) + C20 (x)y20 (x).

Substituting in equation (5.13) yields

y''(x) + ay'(x) + by(x) = C1 (x)y1''(x) + C2 (x)y2''(x) + C1'(x)y1'(x) + C2'(x)y2'(x)
+ a [C1 (x)y1'(x) + C2 (x)y2'(x)] + b [C1 (x)y1 (x) + C2 (x)y2 (x)]
= C1 (x)[y1''(x) + ay1'(x) + by1 (x)] + C2 (x)[y2''(x) + ay2'(x) + by2 (x)]
+ C1'(x)y1'(x) + C2'(x)y2'(x) = f (x).

Since y1 and y2 are solutions to the homogenous equation, then the equa-
tion above reduces to

C10 (x)y10 (x) + C20 (x)y20 (x) = f (x). (5.17)

Thus, we have the two conditions on C1 (x) and C2 (x) described by the
simultaneous equations
y1 C10 + y2 C20 = 0
y10 C10 + y20 C20 = f (x). (5.18)
Multiply the first equation by y2' and the second equation by y2 and
subtract to obtain an expression for C1'(x). The second derived function
C2'(x) can be determined in a similar way. Solving Equation (5.18) we
have
C1'(x) = − f (x)y2 (x) / [ y1 (x)y2'(x) − y2 (x)y1'(x) ]

C2'(x) = f (x)y1 (x) / [ y1 (x)y2'(x) − y2 (x)y1'(x) ]      (5.19)
The denominators in (5.19) must not be zero. Finally, we integrate
(5.19) to obtain C1 (x) and C2 (x) and substitute these in (5.15) to obtain
yp (x).
We remark that it can be shown that the denominators W (y1 , y2 )(x) =
y1 (x)y20 (x) − y2 (x)y10 (x) in (5.19) are non zero for linearly independent solu-
tions y1 and y2 of the homogenous problem. Details shall be discussed in
the next chapter.

5.3.1. Example. Solve the differential equation

y 00 + y = tan x. (5.20)

The solutions to the homogenous equation are y1 = cos x and y2 = sin x.


Also, the Wronskian
W (y1 , y2 ) = 1 ≠ 0.

Thus from equation (5.19)

C1'(x) = − tan x sin x = − sin2 x / cos x = (cos2 x − 1) / cos x = cos x − sec x
C2'(x) = tan x cos x = sin x.

Integrating, we find that C1 (x) = sin x − ln | sec x + tan x| and C2 (x) = − cos x.

Thus from (5.15), the particular solution is


yp (x) = C1 (x)y1 (x) + C2 (x)y2 (x)
= cos x sin x − cos x ln | sec x + tan x| − sin x cos x
= − cos x ln | sec x + tan x|.
The general solution of Equation (5.20) is therefore given by
y(x) = yc (x) + yp (x)
= C1 cos x + C2 sin x − cos x ln | sec x + tan x|.
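The particular solution just obtained can be verified directly; a quick sketch using the third-party sympy library (an assumption on the reader's tools). The residual y'' + y − tan x is mathematically zero for yp = −cos x · ln|sec x + tan x|.

```python
# Check that yp = -cos x * ln(sec x + tan x) satisfies y'' + y = tan x.
import sympy as sp

x = sp.symbols("x")
yp = -sp.cos(x) * sp.log(sp.sec(x) + sp.tan(x))
residual = sp.simplify(yp.diff(x, 2) + yp - sp.tan(x))
print(residual)
```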

5.4. Higher Order Linear Equations


Most of the results of the preceding sections can be immediately gener-
alized for application to equation of order higher than two. Consider the
nth order linear differential equation:
y (n) + a1 (x)y (n−1) + a2 (x)y (n−2) + · · · + an−1 (x)y' + an (x)y = f (x),      (5.21)
and the associated homogenous equation
y (n) + a1 (x)y (n−1) + a2 (x)y (n−2) + · · · + an−1 (x)y 0 + an (x)y = 0 (5.22)
where the functions ai (x), i = 1, 2, · · · , n are assumed to be continuous.

5.4.1 Definition: We say that the functions y1 , y2 , · · · , yn are linearly in-


dependent over an interval I = [x0 , x1 ] ⊆ IR if the condition
C1 y1 (x) + C2 y2 (x) + · · · + Cn yn (x) = 0 ∀ x ∈ [x0 , x1 ],
implies that
C1 = C2 = · · · = Cn = 0.
We define the function (called the Wronskian, to be discussed more in
the next chapter) by


                            |  y1        y2        · · ·  yn        |
                            |  y1'       y2'       · · ·  yn'       |
W (y1 , y2 , ..., yn )(x) = |  · · ·     · · ·     · · ·  · · ·     |      (5.23)
                            |  y1^(n−1)  y2^(n−1)  · · ·  yn^(n−1)  |

It can be proved that W (y1 , y2 , · · · , yn )(x) = 0 if and only if the functions
y1 , y2 , · · · yn are linearly dependent solutions of Equation (5.22). We can
also prove that
W (y1 , y2 , · · · , yn ) = Ce^{−∫ a1 (x)dx} .      (5.24)
for some constant C. Formula (5.24) is known as the Abel formula.
Furthermore, we can show that if y1 , y2 , · · · , yn are linearly independent
solutions of (5.22) then the solution y(x) of (5.22) can be written as

y(x) = C1 y1 (x) + C2 y2 (x) + · · · + Cn yn (x), (5.24)

where Ci , i = 1, 2, · · · , n are constants. Equation (5.24) may be called the


general solution of (5.22).
Finally, if yp1 and yp2 are two solutions of the non homogenous
equation (5.21), then the difference y(x) = yp1 (x) − yp2 (x) is a
solution of the homogenous equation (5.22).
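The Abel formula (5.24) can be checked on a concrete equation. A sketch, assuming the sympy library, for the illustrative equation y'' − 5y' + 6y = 0 (not taken from the text), where a1 = −5 and two independent solutions are e2x and e3x:

```python
# Verify W = C*exp(-∫ a1 dx) for y'' - 5y' + 6y = 0, a1 = -5,
# with y1 = e^{2x}, y2 = e^{3x}.
import sympy as sp

x = sp.symbols("x")
y1, y2 = sp.exp(2 * x), sp.exp(3 * x)
W = sp.simplify(y1 * y2.diff(x) - y2 * y1.diff(x))  # Wronskian
abel = sp.exp(-sp.integrate(-5, x))                 # e^{5x}, taking C = 1
print(W, abel)
```

Both expressions reduce to e^{5x}, as the Abel formula predicts.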

As in the case of the second order equation, there is no general method


for obtaining a closed form solution of (5.21) and (5.22), unless the func-
tions ai (x), i = 1, 2, · · · , n are all constants.
For the remaining part of this section, we will concentrate on finding the
solution of the nth order linear equation with constant coefficients given
by:
y (n) + a1 (x)y (n−1) + · · · + an y = 0. (5.25)
As before, we seek solutions of the form

y = emx . (5.26)

Substituting into (5.25) and noting that the kth derivative of emx is mk emx ,
we have
emx [mn + a1 mn−1 + a2 mn−2 + · · · + an−1 m + an ] = 0.
Since the exponential function is never zero on IR, we divide both sides
of the last equation by emx to obtain

P (m) = mn + a1 mn−1 + a2 mn−2 + · · · + an−1 m + an = 0.      (5.27)

Equation (5.27) is the auxiliary equation for the homogenous equation


(5.25). It is evident that if m is a root of (5.27), then emx is a solu-
tion to (5.25). Equation (5.27) is a polynomial equation of degree n. The
polynomial P (m) can be factorized as
P (m) = (m − m1 )n1 (m − m2 )n2 · · · (m − mk )nk ,

where the ni , i = 1, 2, · · · , k is the multiplicity of the root mi and n =


n1 + n2 + ... + nk .

If ni = 1 for a given i, then the root mi is called a simple root of (5.27).
Since we are assuming that the coefficients ai , i = 1, 2 · · · n are real num-
bers, any complex root of (5.27) will appear in conjugate pairs.

5.5. Nature Of Solutions


The nature of the solutions of Equation (5.25) corresponding to different
types of roots of Equation (5.27) are as follows:

(1). Let m1 , m2 , · · · mj be j simple roots of (5.27). Then em1 x , em2 x · · · emj x


are linearly independent solutions of Equation (5.25).

(2). If mj is a root of multiplicity nj > 1, then emj x , xemj x , x2 emj x , · · · , xnj −1 emj x
are nj linearly independent solutions for (5.25).

(3). If mj = α + iβ and mk = α − iβ is a pair of simple complex conjugate


roots of (5.25) then eαx cos βx and eαx sin βx are two linearly independent
solutions of (5.25).

(4) If mj = α + iβ and mk = α − iβ are two complex conjugates roots


of multiplicity nj = nk > 1, then
(i). eαx cos βx, eαx sin βx,
(ii). xeαx cos βx, xeαx sin βx,
(iii).x2 eαx cos βx, x2 eαx sin βx,
(iv). xnj −1 eαx cos βx, xnj −1 eαx sin βx,
are 2nj linearly independent solutions of (5.25).

5.5.1 Example: Solve the differential equation:

y''' + 4y'' + y' − 6y = 0.

The auxiliary equation is given by:

m3 + 4m2 + m − 6 = 0.

Solving, we have:
m1 = 1, m2 = −2, m3 = −3.
Therefore, the general solution is

y(x) = C1 ex + C2 e−2x + C3 e−3x .

Remarks: Finding solutions of polynomials of degree greater than two is


generally a tedious task. The following examples are those with easily
calculable roots although this is not the general case. In case of higher

order polynomials, the roots can only be approximated.
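For instance, a numerical root finder recovers the roots of the auxiliary equation of Example 5.5.1; a sketch assuming the numpy library:

```python
# Approximate the roots of m^3 + 4m^2 + m - 6 = 0 numerically.
import numpy as np

roots = np.roots([1, 4, 1, -6])  # coefficients in decreasing degree
print(sorted(roots.real))        # approximately [-3, -2, 1]
```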

5.5.2. Example: Consider the differential equation:

y (iv) + 16y''' + 96y'' + 256y' + 256y = 0.

Auxiliary equation is:

m4 + 16m3 + 96m2 + 256m + 256 = 0.

Factoring and solving, we have:

(m + 4)4 = 0.

Hence m = −4 is a root of multiplicity four, and the general solution is:
y(x) = C1 e−4x + C2 xe−4x + C3 x2 e−4x + C4 x3 e−4x .

5.6. Higher Order Non - Homogenous Equations


Consider the non homogenous differential equation with constant coeffi-
cients:
y (n) + a1 y (n−1) + a2 y (n−2) + · · · + an−1 y 0 + an y = f (x). (5.28)
As in the previous section, Equation (5.28) can be solved by the method
of undetermined coefficients if f (x) is of the ”right form”. It can also
be solved by the method of variation of parameters. The method of un-
determined coefficients works exactly as in the second order case. The
method of variation of parameters of the nth order equation is a tedious
but direct generalization of the method for the second order. We describe
it as follows:

Let y1 , y2 , · · · yn be n linearly independent solutions of the homogenous


equation:
y (n) + a1 y (n−1) + · · · + an y = 0. (5.29)
We seek a solution to (5.28) of the form:

yp (x) = C1 (x)y1 (x) + · · · + Cn (x)yn (x).      (5.30)

Differentiating Equation (5.30) we have:

yp'(x) = (C1 y1' + · · · + Cn yn') + (C1'y1 + · · · + Cn'yn ).

It is convenient (as in the second order case) to set

C10 y1 + · · · + Cn0 yn = 0.

Then,
yp0 (x) = C1 y10 + · · · + Cn yn0 .
A second differentiation yields:

yp00 (x) = (C1 y100 + · · · + Cn yn00 ) + (C10 y10 + · · · + Cn0 yn0 ).

We now set:
C1'y1' + C2'y2' + · · · + Cn'yn' = 0,
so that
yp00 (x) = C1 y100 + · · · + Cn yn00 .
Continuing in the same manner, we set
C1'y1^(k) + C2'y2^(k) + · · · + Cn'yn^(k) = 0,   k = 0, 1, 2 · · · (n − 2)

and obtain
yp^(k) (x) = C1 y1^(k) + C2 y2^(k) + · · · + Cn yn^(k) ,   k = 0, 1, 2 · · · (n − 1).

Finally since
yp^(n−1) = C1 y1^(n−1) + C2 y2^(n−1) + · · · + Cn yn^(n−1)

a last differentiation yields:

yp^(n) (x) = [ C1 y1^(n) + C2 y2^(n) + · · · + Cn yn^(n) ]
            + C1'y1^(n−1) + C2'y2^(n−1) + · · · + Cn'yn^(n−1) .
We have now obtained all the derivatives up to the nth derivative of yp (x)
and we may substitute these in equation (5.28).
Since y1 , y2 , · · · yn are solutions of the homogenous problem (5.29) (i.e.
f (x) = 0) and yp solves the non homogenous equation, we find that
C1'y1^(n−1) + C2'y2^(n−1) + · · · + Cn'yn^(n−1) = f (x).

Thus, we have found n equations in the n unknown derived functions
C1', C2', · · · Cn'. That is

C1'y1 + C2'y2 + · · · + Cn'yn = 0
C1'y1' + C2'y2' + · · · + Cn'yn' = 0
··· ··· ··· = ···
C1'y1^(n−1) + C2'y2^(n−1) + · · · + Cn'yn^(n−1) = f (x)      (5.31)

The determinant of the system (5.31) is W (y1 , y2 · · · yn ) which is non zero
since the functions y1 , y2 · · · yn are linearly independent. The system (5.31)
has a unique solution given by Cramer's rule:

Ck'(x) = Wk / W ,   k = 1, 2, · · · n,      (5.32)

where Wk is the determinant obtained by replacing the kth column of W
by the transpose of the vector (0, 0, · · · , 0, f (x)).
Finally, the functions C1 (x), C2 (x), · · · Cn (x) may be obtained by integra-
tion (if possible).

5.6.1. Example: Use the method of variation of parameter to solve the


differential equation :
y 000 − 3y 0 − 2y = 3e−x .
Solution: The three linearly independent solutions of the homogenous
part are:
y1 = e−x , y2 = xe−x , y3 = e2x .
According to Abel formula

        | e−x      xe−x           e2x  |
W (x) = | −e−x     (1 − x)e−x    2e2x  | = Ce^{−∫ 0 dx} = C,
        | e−x      (x − 2)e−x    4e2x  |

which is a constant.
Therefore, we compute the constant C at the point x = 0 to get

                |  1    0    1 |
W (x) = W (0) = | −1    1    2 | = 9.
                |  1   −2    4 |

By the Crammer’s rule, we have:

     |  0       xe−x           e2x  |
W1 = |  0       (1 − x)e−x    2e2x  | = 9x − 3.
     |  3e−x    (x − 2)e−x    4e2x  |

By similar calculation,

W2 = −9, W3 = 3e−3x ,

Then,
C1'(x) = W1 /W = x − 1/3,
C2'(x) = W2 /W = −1,

C3'(x) = W3 /W = e−3x /3,
and by integration, we have:
C1 = x2 /2 − x/3,   C2 = −x,   C3 = −e−3x /9.
Finally, we obtain,

yp (x) = C1 (x)y1 (x) + C2 (x)y2 (x) + C3 (x)y3 (x)


        = (x2 /2 − x/3)e−x − x(xe−x ) − (e−3x /9)e2x
        = −(1/2)x2 e−x − (x/3 + 1/9)e−x .
The last term at the right hand side above is part of the general solution
of the homogenous equation. Thus, we take
yp (x) = −(1/2)x2 e−x .
Hence the general solution of the non homogenous problem is
y(x) = A1 e−x + A2 xe−x + A3 e2x − (x2 /2)e−x ,
where A1 , A2 , A3 are arbitrary constants.

5.7. Linear First Order Difference Equations.


We shall devote this section to the discussion of elementary difference
equations. We shall compare their methods of solution to that of differ-
ential equations that we have so far developed.
The general first order linear difference equation can be written in
the form:
yn+1 = an yn + fn , (5.32)
where an and fn are known for all n. We first consider the much more
simpler equation:
yn+1 = ayn (5.33)
where a is a given constant. Proceeding inductively, we have:

y1 = ay0 , y2 = ay1 = a · ay0 ,

and in general
yn+1 = an+1 y0 ,      (5.34)

which is the general solution of (5.33). Comparing Equations (5.33) and
(5.34) with the differential equation

y' = ay,

which has the general solution

y(x) = Ceax ,

it is easy to see that n + 1, a and y0 correspond to x, ea and C respectively.


Suppose that our equation is

yn+1 = an yn , (5.35)

using the same procedure as above, we see that

y1 = a0 y0 , y2 = a1 y1 = a1 a0 y0

and in general,

yn+1 = an an−1 · · · a1 a0 y0
= (Πnk=0 ak ) y0 . (5.36)

The comparable differential equation:

y 0 (x) = a(x)y

has the general solution

y(x) = Ce^{∫ a(x)dx} ,

so that Π_{k=0}^{n} ak corresponds to the exponential e^{∫ a(x)dx} .
Consider the equation:
yn+1 = yn + fn . (5.37)
Similar iterative procedure yields:

y 1 = y 0 + f0
y2 = y1 + f1 = (y0 + f0 ) + f1 = y0 + (f0 + f1 ),

so that
yn+1 = y0 + Σ_{k=0}^{n} fk .      (5.38)

Since y' = y + f (x) has the general solution


y(x) = Cex + ex ∫ f (x)e−x dx,

the correspondence is clear. Note that Π_{k=0}^{n} 1 = 1 corresponds to e0 = 1.
Finally, consider the general first order linear difference equation

yn+1 = an yn + fn .

Then,
y1 = a0 y0 + f0 ,
y2 = a1 y1 + f1 = a1 · a0 y0 + a1 f0 + f1 .
Generally,

yn+1 = (an an−1 · · · a0 )y0 + (an · · · a1 )f0
       + (an · · · a2 )f1 + · · · + an fn−1 + fn
     = (Π_{k=0}^{n} ak ) y0 + Σ_{k=0}^{n} (Π_{j=k+1}^{n} aj ) fk .      (5.39)

Equation (5.39) is the discrete analog of the equation:


y(x) = e^{∫ a dx} [ ∫ f (x)e^{−∫ a dx} dx + C ].
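The closed form (5.39) can be checked against direct iteration; a minimal sketch in Python, where the choices an = 2 and fn = n are illustrative (not from the text):

```python
# Compare direct iteration of y_{n+1} = a_n y_n + f_n with the closed
# form (5.39): y_{n+1} = (prod a_k) y0 + sum_k (prod_{j=k+1..n} a_j) f_k.
from math import prod

def iterate(y0, a, f, steps):
    y = y0
    for n in range(steps):
        y = a(n) * y + f(n)
    return y

def closed_form(y0, a, f, n):
    total = prod(a(k) for k in range(n + 1)) * y0
    for k in range(n + 1):
        total += prod(a(j) for j in range(k + 1, n + 1)) * f(k)
    return total

a = lambda n: 2   # illustrative constant coefficient
f = lambda n: n   # illustrative forcing term
print(iterate(5, a, f, 10), closed_form(5, a, f, 9))  # both give y_10
```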

5.7.1. Example: An Amoeba population has an initial size of 1000. It is


observed that on the average one out of every ten amoebas reproduces by
cell division every hour. Approximately how many amoebas will there
be after 20 hours? If a leak from another container is introducing 30
additional amoebas into the population every hour, how many will there
be after 20 hours?

Solution: Let yn be the number of amoebas present after n hours. Then


the population growth over the next hour is given by
yn+1 − yn = (1/10)yn ,      (5.40)
which implies that
yn+1 = (1.1)yn .
The general solution given by (5.34) is:

yn+1 = (1.1)n+1 y0 , y0 = 1000.

Thus,
y20 = (1.1)20 (1000) ≈ 6727.
For the second part, the equation now becomes:
yn+1 − yn = (1/10)yn + 30,
or
yn+1 = (1.1)yn + 30,
which by Equation (5.39) has the solution

yn+1 = (1.1)n+1 (1000) + Σ_{k=0}^{n} (1.1)n−k · 30.

But Σ_{k=0}^{n} (1.1)n−k is the sum of the first n + 1 terms of a geometric progression,
so that the solution is given by

yn+1 = 1000(1.1)n+1 + 30 [ ((1.1)n+1 − 1) / (1.1 − 1) ].

Thus,
y20 ≈ 8445.
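Both answers can be reproduced by iterating the recurrences directly; a sketch in Python, truncating to whole amoebas as the text does:

```python
# Iterate y_{n+1} = 1.1*y_n (+ leak) for the amoeba populations above.
def grow(y0, hours, leak=0):
    y = y0
    for _ in range(hours):
        y = 1.1 * y + leak
    return y

print(int(grow(1000, 20)))      # 6727, as in the first part
print(int(grow(1000, 20, 30)))  # 8445, as in the second part
```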

5.8. Applications: Newton’s Method: We shall discuss application of dif-


ference equation for solving algebraic equation in one variable. Consider
the algebraic equation
F (x) = 0. (5.41)
We wish to find approximate roots of equation (5.41). Assuming that
the function F (x) is sufficiently differentiable, by using Taylors theorem
centered at a value xn , we can express F (x) in the form:
0 = F (x) = F (xn ) + F'(xn )(x − xn ) + (1/2)F''(xn )(x − xn )2 + · · ·      (5.42)
Omitting all but the first two terms of the right hand side of (5.42) and
solving for x yields:
x = xn − F (xn )/F'(xn ).
If we call this new value xn+1 , we obtain a first order difference equation

xn+1 = xn − F (xn )/F'(xn ).      (5.43)

Equation (5.43) is known as Newton's formula. The value xn+1 is


an approximation to some root of equation (5.41). The procedure of find-
ing a root consists in making an initial guess x0 and repeatedly applying
(5.43) to generate a sequence {xn } which we hope converges to a solution.
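A sketch of the iteration (5.43) in Python, applied to the illustrative choice F(x) = x² − 2 (whose positive root is √2) with initial guess x0 = 1:

```python
# Newton's formula (5.43): x_{n+1} = x_n - F(x_n)/F'(x_n).
def newton(F, dF, x0, tol=1e-12, max_iter=50):
    x = x0
    for _ in range(max_iter):
        step = F(x) / dF(x)
        x -= step
        if abs(step) < tol:  # stop once the update is negligible
            break
    return x

root = newton(lambda x: x * x - 2, lambda x: 2 * x, x0=1.0)
print(root)  # approximately sqrt(2)
```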

5.9. Second Order Difference Equation: The second order difference
equation is generally of the form:

F (n, yn , yn+1 , yn+2 ) = 0. (5.44)

Equation (5.44) is linear if it involves no non linear function or products


in the unknown terms yn+2 , yn+1 and yn . The difference equation given
by:
yn+2 + (3n2 sin n)5 yn+1 + yn = cos n3
is linear while
(yn+2 )2 + (yn+1 )2 + yn = 0
is not linear.
The most general linear case can be written in the form:

yn+2 + an yn+1 + bn yn = fn .

If an , bn and fn are defined for every integer n ≥ 0 and if y0 and y1 are


given, then
y2 = f0 − a0 y1 − b0 y0 ,
y3 = f1 − a1 y2 − b1 y1 ,
and the iteration thus defined can be continued indefinitely, determining
yn for every n. Therefore, for linear equations, we have the following theorem:

5.9.1. Theorem: Let C0 and C1 be given constants and suppose that


an , bn and fn are defined for every integer n ≥ 0. Then there exists a
unique solution yn of the difference equation:

yn+2 + an yn+1 + bn yn = fn ,      (5.45)

satisfying y0 = C0 and y1 = C1 .

If fn = 0 for all n, then (5.45) is said to be homogenous; otherwise it is non homogenous.
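The existence and uniqueness asserted by Theorem 5.9.1 is constructive: the recurrence itself generates the solution term by term. A sketch in Python, with illustrative constant coefficients (not from the text):

```python
# Given y0 = C0, y1 = C1, the recurrence y_{n+2} = f_n - a_n y_{n+1} - b_n y_n
# determines every later term uniquely.
def solve_difference(C0, C1, a, b, f, N):
    y = [C0, C1]
    for n in range(N - 1):
        y.append(f(n) - a(n) * y[n + 1] - b(n) * y[n])
    return y

# Illustration: y_{n+2} - 3y_{n+1} + 2y_n = 0 with y0 = 0, y1 = 1,
# whose solution is y_n = 2^n - 1.
ys = solve_difference(0, 1, a=lambda n: -3, b=lambda n: 2, f=lambda n: 0, N=10)
print(ys)
```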

5.10. MAT 241 Examination Past Questions As Exercises

Here, we gather together for the benefit of student readers, some past
examination questions on ordinary differential equations administered at
the University of Ibadan degree examinations in the second year of the
four or five year degree program in science, education and engineering.

MAT 241, B.Sc, B.Ed Degree Examinations, 1990/91 Session

1(a) State a necessary and sufficient condition for the differential equa-
tion:
M (x, y)dx + N (x, y)dy = 0, (1.1)
to be exact. Examine whether or not the O.D.E.

y sin xdx + cos xdy = 0 (1.2)

is exact and hence or otherwise obtain its solution.

(b) Obtain an integrating factor for the O.D.E.


dy
+ A(x)y = B(x). (1.3)
dx
Hence or otherwise, solve
dy
cot x + y = cos x. (1.4)
dx

2 (a) Obtain the differential equation associated with the primitive func-
tion:
y = Aex + B sin x, (2.1)
where A and B are arbitrary constants.

(b) The tangent to a curve is such that the product of its X-intercept
and its Y -intercept vanishes. Express this as an O.D.E. Of what order
and degree is the equation you obtain?

(c) Given that


dv/dt = p/(R + p2 v),      (2.2)
where R and p are known fixed constants. Show that
t = Rv/p + (1/2)pv2      (2.3)
if v = 0 when t = 0.

3 (a) Show that the Bernoulli equation

y 0 (x) + a(x)y = b(x)y n , (3.1)

reduces to linear O.D.E with the substitution

v = y 1−n . (3.2)

Hence or otherwise solve the equation

xy 0 (x) + y = xy 3 (3.3)

(b) Solve the linear systems of equations:


d2 x/dt2 − x + dy/dt = −(1/2)t
dx/dt + 2x − 2 dy/dt = t      (3.4)
given that x(0) = 0, y(0) = 1.

4 (a) Given that


y = xr (4.1)
is a solution of the Euler equation

Ax2 y 00 + Bxy 0 + Cy = 0. (4.2)

Obtain the quadratic (indicial) equation satisfied by r.

Write down the form of the solution of (4.2) when the roots of the indicial
equation are :
(α) Real and distinct,i.e r1 6= r2

(β) Real and coincident, i.e. r = r1 = r2 .

(γ) Imaginary, i.e, r = a ± ib.

(b) Solve the differential equation:

x2 y 00 − xy 0 + y = 0. (4.3)

(c) Given that U (x) is a solution of the differential equation:

ay'' + by' + cy = 0      (4.4)

and that
y = U (x)V (x) (4.5)
is a second linearly independent solution of Equation (4.4), show that
V (x) satisfies the differential equation:

V''(x) + [ 2U'(x)/U (x) + b/a ] V'(x) = 0.      (4.6)

5 (a) Obtain the difference equation satisfied by the primitive equation

Un = A + B4n (5.1)

(b) Solve the difference equation

Un+2 − 4Un+1 + 4Un = 3n (5.2)

(c) Obtain the solutions to the simultaneous difference equations

Un+1 = 2Un + Vn + 2n
Un+1 = Un + 2Vn+1 + 1
Given that
U0 = 0, V0 = 1. (5.3)

MAT 241, B.Sc, B.Ed. Degree Examinations, 1992/93 Session

1 (a) The quadratic interpolation formula is expressible in the form

f (x) = A + B(x − x0 ) + C(x − x0 )(x − x1 ) (1.1)

By making appropriate substitutions into Equation (1.1) at x = x0 , x1


and x2 respectively, evaluate A, B and C.

(b) Hence or otherwise from 1(a), obtain the quadratic approximation


for the data in the following table
x 1.0 1.1 1.2 1.3 1.4
f (x) 8.01 9.69 11.56 13.61 15.84

(c) Estimate the value of f at x = 1.15 from the table above.

2 (a) Using the data in the table above, estimate the integral
Z 1.4
I= f (x)dx
1.0

using the Trapezoidal rule.

(b) If in a sequence
Ur = r(r + 1)

obtain an expression in r for

∆2 Ur + Ur

where ∆ is the forward difference operator.

(c) Solve the difference equation

Ur+1 − aUr = r

given that U1 = 0 and a is a real number.

3 (a) Obtain the ordinary differential equation associated with the equa-
tion
y = A sin x + B cos x + Cex (3.1)
where A, B, C are constants.

(b) Check whether or not the following ordinary differential equations


are exact.

x(x3 + y 2 )dx + y(x2 + y 2 + 1)dy = 0 (3.2)


(6x4 − 2y)dx − xdy = 0 (3.3)
Hence or otherwise, solve one of the equations.

(c) Solve the finite difference equation

Ur+3 − 6Ur+2 + 11Ur+1 − 6Ur = 0 (3.4)

given that
U0 = 0, U1 = 1; U2 = 5.

4 (a) Given the Bernoulli equation


dy/dx + Ay = By n      (4.1)
where A and B are functions of x.
Using the substitution
V = y 1−n (4.2)
transform Equation (4.1) into a linear O.D.E.

(b) Hence from 4(a) or otherwise, solve the equation

y 0 + y tan x = y 3 sec x (4.3)

(c) Given that y1 (x) and y2 (x) are two linearly independent solutions of
the homogenous part of the O.D.E.:
d2 y/dx2 + a dy/dx + by = f (x)      (4.4)
and y = U y1 + V y2 is a particular solution of Equation (4.4). Show that
dU/dx = −y2 (x)f (x)/W (x)
dV/dx = y1 (x)f (x)/W (x)
where W (x) is the Wronskian of the solutions y1 and y2 of the homogenous
equation.

MAT 241 B.Sc, B.Ed Degree Examinations 1997/98 Session

1. Find the general solutions of the following differential equations:

(a) (1 + x2 )5/2 dy/dx + x(1 + x2 )3 /2 = x2

(b) [ (x + 2)2 + (y − 1)2 ] dy/dx + 2(x + 2)(y − 1) + (x + 2)2 = 0

(c) y' + ((2x + 1)/x) y = e−2x .

(d) Hence, solve the Ricatti's equation
dy/dx = A(x)y2 + B(x)y + C(x).
(i) Show that if A(x) ≡ 0 for all x, then the equation is linear, whereas
it remains non-linear when C(x) ≡ 0 for all x.
(ii) Show that if f (x) is any solution of the Ricatti’s equation, then the
transformation
y = f (x) + 1/V (x)
reduces the equation to a linear equation in V .

2 (a) Verify that y = (1 + x)2 is a particular solution of the O.D.E.

(x + 1)2 y 00 − 2(x + 1)y 0 + 2y = 0.

Hence or otherwise, find another linearly independent solution.

(b) Let y1 and y2 be two linearly independent solutions of the second


order linear differential equation:

y 00 + a0 y 0 + a1 y = 0

on an interval I ⊆ IR. Show that the Wronskian W (y1 , y2 )(x) satisfies the
Equation
W' + a0 W = 0
and hence, solve the equation for W (x).

(c) Solve the differential equations:

(i) y 00 + y = tan x

(ii) x d2 y/dx2 = 2 [ (y')2 − y' ].
3 (a) Solve the following difference equations

(i) yn+2 + 8yn+1 − 9yn = 3n

(ii) yn+2 − 2yn+1 + 4yn = 0.

(b) Compute the following inverse operator expressions.

(i) [1/(D2 + 2D − 3)] (1 + x + x2 )

(ii) [1/((D − 2)3 (D − 1))] e2x .
4 (a) Suppose that a hawk P at the point (0, a) spots a pigeon at the
origin, flying along y-axis at a speed v m/s. The hawk immediately flies
towards the pigeon at w m/s. Find the differential equation of the path
of the hawk and solve the equation.

(b) The rate of change of the price P (t) of a commodity is directly pro-
portional to the difference between the demand D(t) and supply S(t) of
the commodity at time t. If the supply and demand functions satisfy the
models:
S(t) = c [1 − sin αt]
and
D(t) = a − bP (t).

(i) Obtain the differential equation satisfied by P (t) (if any).
(ii) Solve the equation so obtained in (i).
(iii) Find the time when the price is maximum.

5 (a) An Amoeba population has an initial size of 1500. It is observed


that on the average, one out of every ten amoeba reproduces by cell di-
vision every hour.
(i) Approximately, how many amoebas will be there after 25 hours ?
(ii) If a leak from another container is introducing 40 additional amoebas
into the population every hour, how many will be there after 25 hours?

(b) An infectious disease is introduced to a large population. The pro-


portion of people who have been exposed to the disease increase with
time. Suppose that P (t) is the proportion of people who have been ex-
posed to the disease within t years of its introduction. If P (t) satisfies the
equation:
P'(t) = (1/3) [1 − P (t)] ,   P (0) = 0,
after how many years will the proportion have increased to 90% ?.

(c) A chemical substance S is produced at the rate of r moles per minute


in a chemical reaction. At the same time it is consumed at a rate of a
moles per minute per mole of S. Let S(t) be the number of moles of the
chemical present at time t.

(i) Obtain the differential equation satisfied by S(t).


(ii) Determine S(t) in terms of S(0) = S0 .
(iii) Find the equilibrium amount of the chemical.

MAT 241 B.Sc, B.Ed Degree Examination 1999-2000 Session

1 (a) Find the general solution of the differential equation:


x d2 y/dx2 = dy/dx + 2 √( x2 + (dy/dx)2 ).

(b) Let y1 and y2 be two linearly independent solutions of the second


order linear differential equation

y 00 (x) + cy 0 (x) + ay(x) = 0, x ∈ I ⊆ IR.

Show that the Wronskian W (y1 , y2 )(x) satisfies the equation

W 0 + cW = 0

and hence, solve the equation for W (x).

(c) Solve the differential equation:


cos y (dy/dx) + sin x sin y = x ln x.

2 (i) Solve the differential equations:


(x + 1) dy/dx − (x + 2)y = ex (x + 1)3 loge x
and find the particular solution for which y(1) = 0.

(ii) 2(x2 + 7x − 3)y 0 + (6x + 21)y = 3(x + 8)2

(iii) y 00 − y 0 − 6y = ex cos x, y(0) = 1, y 0 (0) = 0.

3. Find the general solution of the following differential equations:


(i) y 00 − 2y 0 + 3y = 5e−x by the method of variation of parameters.

(ii) y 00 + y = 1 + x2 + tan x by any suitable method.

(iii) Solve the differential equation:

(2x + sin 2x cos2 y)dx + (2y cos2 x sin 2y)dy = 0.

4. (a) Find the orthogonal trajectory of the following family of curves


(i) x2 − y 2 = C, (ii) x3 = 3(y − c).

Find the oblique trajectory at an angle π/4
to the following curves
(i) x2 − y 2 = c (ii) x3 = 3(y − c).

(b) The displacement of a particle at time t is x measured from a fixed


point. Its velocity at time t is given by
dx/dt = a(c2 − x2 )
where a and c are positive constants. If x(0) = 0, find the displacement at
any time t.

(c) The temperature of a liquid in a room of constant temperature 20◦ F
is 70◦ F. After 5 minutes, it is 60◦ F. What will be its temperature after a
further 30 minutes ? After how long will its temperature be 40◦ F ?

(d) The alternating current i(t) across a circuit at time t satisfies


L di/dt + Ri = E
where L, R, E are constants. Find the current at any time t if i(0) = 0.

MAT 241, B.Ed, B.Sc Degree Examinations, 2007- 2008 Session

1. (a) By applying a suitable transformation, express the Bernoulli equa-


tion:
dy/dx + P (x)y = Q(x)yn ,   n ≠ 0, 1
as a linear ODE.
(b) Solve the Bernoulli equation:
dy/dx + xy = xy3
dx
(c) Show that if H(y) is a known function of y, then the ODE
(dH/dy)(dy/dx) + P (x)H(y) = R(x)
may be expressed as a linear ODE by applying a suitable transformation.
(d) Hence, solve the differential equation:
eλy dy/dx + ((1/λ) sin x) eλy = x ln x,   λ ≠ 0

2. (a) Find the general solution of the third order ODE


y''' − 2y' + 3y = 5e−x

(b) Find the general solution of the second order ODE


y'' + y = 1 + x2 + tan x

by any suitable method.


(c) Let W (y1 , y2 )(x) be the Wronskian of the two linearly independent so-
lutions of the homogenous problem y'' + a0 y' + a1 y = 0 on an interval I ⊆ IR.

Show that the Wronskian solves the homogenous problem W' + a0 W = 0

3. Solve the following differential equations by any known method.


(a) (1 + x) dy/dx − (x + 2)y = ex (x + 1)3 ln x
and find the particular solution for which y(1) = 0.

(b) 2(x2 + 7x − 8) dy/dx + (6x + 21)y = 3(x + 8)2 y5/3 .
(c) y'' − y' − 6y = ex cos x;   y(0) = 1, y'(0) = 0.

4. (a) A radioactive material decomposes at a rate which is directly


proportional to the amount N (t) present at any time t > 0. Given that
N (t) = N0 at the time t = 0. Derive and solve the differential equation
satisfied by N (t).
(b) (i) Find the orthogonal and (ii) oblique trajectories at angle π/6 to the
family of curves defined by Φ(x, y) = x2 − y 2 = C.
(c) A rod heated to the temperature of 80◦ F was left to cool in a room of
constant temperature of 15◦ F . After 5 minutes, the rod cooled down to
70◦ F . What will be its temperature after 30 minutes? After how long
will its temperature be 10◦ F ?
(d) In a closed electric circuit consisting of a resistor (R), inductor (L)
and an EMF (E) source, it is known that the quantity of electric charge
Q(t) at time t satisfies the circuit differential equation:

L d2 Q(t)/dt2 + R dQ(t)/dt + Q/C = E
where L, R, C, E, C 6= 0 are constants. Solve the equation for Q(t).

Chapter Six
Qualitative Aspects of Ordinary Differential Equations
Linear Dependence, Reduction of Order and Variation of Parameters

6.1. Introduction:
We shall be concentrating on the qualitative study of solutions of second
order ordinary differential equations. For this purpose, we shall develop
some techniques which would be applicable for solving this class of equa-
tions.
Beginning from this chapter, we shall be discussing standard materials
concerning the solutions of linear differential equations typically required
of advanced undergraduate courses in the mathematical sciences, physics,
geology and all engineering fields. We present the following definitions.

6.1.1. Definition: An ordinary differential equation of order m and degree


n is an equation of the following form:
am (x)(y (m) )n + am−1 (x)(y (m−1) )n−1 + · · · + a2 (x)y'' + a1 (x)y' + a0 (x)y + f (x) = 0,
(6.1)
the order m being the highest derivative and n its degree, where we have
put
dk y/dxk = y (k) .
Let F (x, η1 , η2 , · · · ηn ) be a polynomial of degree m in the variables η1 , η2 , · · · ηn
with variable coefficients depending on x, that is, we can write F in the
form:
F (x, η1 , η2 , · · · ηn ) = Σ_{v1 +v2 +···+vn =m} av1 ···vn (x) η1^{v1} η2^{v2} · · · ηn^{vn} .

Then, an ordinary differential equation of degree m and order n − 1 is an


equation of the form:
F (x, y, y', y'', · · · y (n−1) ) = Σ av1 ···vn (x) y^{v1} (y')^{v2} · · · (y (n−1) )^{vn} = 0.      (6.2)

For illustration, consider the following examples.

6.1.2. Example: Consider a differential equation given by:


x2 (y'')2 + 3xy' + y = 0.

This is an equation with order 2 and degree 2.


Also,
x2 y''' + 3x(y')2 + y = 0

is an equation of order 3 and degree 2. However, the equation
(y')100 + bxy = 0

has degree 100 and order 1.


The equation:
(sin xy)(y')2 + xy2 y'' + (cos xy)(y')8 = 0,
is of order 2 and degree 8.
The order is the highest derivative while the degree is the highest power
to which the various derivatives are raised.
The most general second order ordinary differential equation is of the
form:
F (x, y, y', y'') = 0.      (6.3)
We shall assume in what follows that Equation (6.3) can be expressed
unambiguously in explicit form as:
y'' = f (x, y, y'),      (6.4)

where f is a function of the variables.

Since Equations (6.3) and (6.4) are second order differential equations,
the solution of either of the equation, if it exists, will involve two arbi-
trary constants which may be determined by imposing two conditions on
the solution. For example, one may specify the values of the dependent
variable y and the derivative y' at some fixed point x0 in the interval of
solution. These are two conditions on the solution and they are called
initial conditions. Under certain circumstances these initial conditions
could be enough to ensure uniqueness of the solution thus obtained. In
that case, we have achieved existence and uniqueness of the solution of
Equations (6.3) and (6.4). Since equation (6.4) is in general nonlinear, it
may not always be possible to obtain a closed expression for its solution.
If the degree is one, then it is linear but otherwise it is nonlinear. For
example, the equation: y'' + y' + xy2 = 0 is linear because the degree of
every derivative is one. The differential equation:
(y'')2 + by' + xy = 0,

has a degree greater than one. Degrees, and not powers, determine linear-
ity. In what follows, we shall be dealing almost exclusively with second
order linear differential equations.

6.1.3. Definition: The most general second order linear equation has
the form:
P (x)y'' + Q(x)y' + R(x)y = S(x),   x ∈ I ⊆ IR,      (6.5)

where P, Q, R, S are given or known real valued functions on an open
subset I of IR.
In our subsequent discussion, we shall be seeking a solution of equation
(6.5) in an open neighbourhood I ⊆ IR of some fixed point x0 ∈ IR. In this
circumstance, if P (x) 6= 0, ∀x ∈ I, then Equation (6.5) may be recast as
follows:
y'' + q(x)y' + r(x)y = s(x),      (6.6)
by dividing through by P (x). Thus, the new coefficients q, r, s are defined
by
q(x) = Q(x)/P (x),   r(x) = R(x)/P (x),   s(x) = S(x)/P (x).
In what follows, we shall assume the following theorem, which gives suffi-
cient conditions for the existence and uniqueness of a solution of Equation
(6.6).

6.1.4. Theorem: Suppose that the real valued functions q, r, s occur-
ring in Equation (6.6) are continuous on the open subset I ⊆ IR, and let
x0 belong to I. Then there exists a unique solution of Equation (6.6) on
the interval I which satisfies y(x0) = y0 and y'(x0) = y0', where y0 and y0' are
pre-assigned numbers.

Remark: (i) We shall not prove the last theorem but we shall employ
its assertion in the subsequent discussions.
(ii) In the development of the theorem, we shall study some techniques
for solving Equation (6.6).

Note: Equation (6.6) is called a general non-homogeneous linear ordi-
nary differential equation if the right hand side function s(x) ≠ 0. For
s(x) = 0, we call the equation

y'' + q(x)y' + r(x)y = 0    (6.7)

a homogeneous linear equation associated with or derived from (6.6).

The techniques for solving (6.6) depend on knowing at least one solu-
tion of the homogeneous or complementary equation (6.7). When such a
solution is known, the technique alluded to allows complete determination
of the general solution of Equation (6.6). For this reason, we shall con-
sider some results concerning the solutions of Equation (6.7).
6.1.5. Definition: By setting d^n/dx^n = D^n, where n is a positive integer,
we may associate with Equation (6.7) the following operator:

L(D) = D^2 + qD + r,

whose domain is C^2(I), the set of all continuous functions on I which are
twice differentiable, and whose range is C(I), the set of all continuous
functions on I. It is trivial to check that L(D) is a linear operator with
the domain and range as specified.
Equation (6.7) now takes the form:

L(D)y = 0. (6.8)

As a result of the linearity of L(D), if y1 and y2 are solutions of equation


(6.8), then so is the linear combination a1 y1 + a2 y2 where a1 , a2 are real
numbers.

6.1.6. Definition: Two solutions y1 and y2 of Equation (6.8) are said
to be linearly independent provided that

a1y1 + a2y2 = 0 on I if and only if a1 = 0 and a2 = 0.

Remarks: Let G denote the linear space of all solutions of Equation (6.8)
and let y1 and y2 be two linearly independent solutions in G. It is clear
from linear algebra that every solution in G may be expressed as a linear
combination of the solutions y1 and y2 . In other words, every linearly
independent set {y1 , y2 } of solutions from G is a basis for G. This means
that in order to generate all the solutions of Equation (6.8), that is, in
order to generate the linear space G, we need only know two linearly
independent solutions of Equation (6.8). It is therefore natural at this
point to pose the following question: when are two solutions of Equation
(6.8) linearly independent? This question is answered by the following
theorem:

6.1.7. Theorem: Suppose that the real valued functions q and r occurring
in (6.8) are continuous on an open subset I ⊆ IR. Let y1 and y2 be two
solutions of Equation (6.8) such that the following holds:

y1(x)y2'(x) − y1'(x)y2(x) ≠ 0,  ∀x ∈ I.    (6.9)

Then y1 and y2 are linearly independent.

Proof: Let y be an arbitrary solution of Equation (6.8). Then y ∈ G.
Let x0, y0, y0' be three real numbers such that x0 ∈ I, y(x0) = y0 and
y'(x0) = y0'. Suppose that a1 and a2 are real numbers such that

a1y1(x0) + a2y2(x0) = y0    (6.10)
a1y1'(x0) + a2y2'(x0) = y0'.    (6.11)

By the existence and uniqueness theorem, we must then have

y(x) = a1y1(x) + a2y2(x),  ∀x ∈ I.

In order that there are real numbers a1 and a2 such that Equations (6.10)
and (6.11) hold, it must be possible to solve (6.10) and (6.11) for a1 and
a2. We can rewrite (6.10) and (6.11) in matrix form as follows:

( y1(x0)   y2(x0)  ) ( a1 )   ( y0  )
( y1'(x0)  y2'(x0) ) ( a2 ) = ( y0' ).

There will be real numbers a1 and a2 such that the last vector equation
holds if

| y1(x0)   y2(x0)  |
| y1'(x0)  y2'(x0) |  =  y1(x0)y2'(x0) − y1'(x0)y2(x0) ≠ 0.

But x0 is arbitrary in I. Hence Equation (6.9) must hold.
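As a concrete illustration of the theorem, one can compute the Wronskian of the familiar solutions sin x and cos x of y'' + y = 0. The following sketch uses the sympy computer algebra library (an incidental choice; any symbolic package would serve):

```python
import sympy as sp

x = sp.symbols('x')
y1, y2 = sp.sin(x), sp.cos(x)  # both solve y'' + y = 0

# Wronskian W(y1, y2) = y1*y2' - y1'*y2
W = sp.simplify(y1 * sp.diff(y2, x) - sp.diff(y1, x) * y2)
print(W)  # -1: nonzero everywhere, so sin x and cos x are linearly independent
```

Since the Wronskian is the nonzero constant −1, the theorem guarantees the linear independence of the two solutions on all of IR.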

6.1.8. Definition: Let y1 and y2 in G be two solutions of Equation (6.8).
Then the function:

W(y1, y2) = y1y2' − y1'y2

is called the Wronskian of the solutions y1 and y2. (Józef Wroński (1778
- 1853), originally called Hoëné, was a Polish mathematician.)

The last theorem asserts that y1 and y2 in G are linearly independent
if and only if

W(y1, y2)(x) ≠ 0,  ∀x ∈ I.

We may now ask: when do we have W(y1, y2)(x) ≠ 0, ∀x ∈ I, where
y1 and y2 are in G? The following theorem supplies an answer to the last
query.

6.1.9. Theorem: Let q and r belong to the space C(I), I ⊆ IR, and
let y1 and y2 be two solutions of Equation (6.8) on the open interval I.
Then either (i) W(y1, y2) vanishes identically on I or (ii) W(y1, y2) is never
vanishing on I.

Proof: Since y1 and y2 are solutions of Equation (6.8), we must have
that:

L(D)y1 = 0 = L(D)y2.

That is,

y1'' + qy1' + ry1 = 0    (6.12)
y2'' + qy2' + ry2 = 0.    (6.13)

Multiply Equation (6.12) by −y2 and Equation (6.13) by y1 and add to
obtain

(y1y2'' − y2y1'') + q(y1y2' − y1'y2) = 0.    (6.14)

Setting W(y1, y2)(x) = W12(x) and observing that

W12' = y1y2'' − y2y1'',    (6.15)

Equation (6.14) takes the form:

W12' + qW12 = 0.    (6.16)

Solving the first order equation (6.16) for W12, we find that it has the
solution:

W12(x) = A exp( −∫^x q(t)dt ),    (6.17)

where A is an arbitrary constant. Since the exponential function is never
vanishing, it follows that W12 vanishes at some point if and only if A = 0,
and when A = 0, W12 vanishes identically. This concludes the proof.

Remark: (a) Equation (6.17) gives an expression for the Wronskian of any
two linearly independent solutions of Equation (6.8) up to a multiplica-
tive constant. Furthermore, it follows from (6.17) that the Wronskians
of two sets {y1i, y2i}, i = 1, 2, of linearly independent solutions of Equation
(6.8) can only differ by a multiplicative constant.
(b) Equation (6.17) is called the Abel identity because it was first derived
in 1827 by N.H. Abel (1802 - 1829), a Norwegian mathematician.
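The Abel identity is easy to verify on a concrete equation. The sketch below, again assuming the sympy library, checks it for y'' − y = 0, whose coefficient q is identically zero, so (6.17) predicts a constant Wronskian:

```python
import sympy as sp

x = sp.symbols('x')

# Two independent solutions of y'' - y = 0, for which q(x) = 0
y1, y2 = sp.exp(x), sp.exp(-x)

W = sp.simplify(y1 * sp.diff(y2, x) - sp.diff(y1, x) * y2)
print(W)  # -2

# Abel's identity (6.17) with q = 0 reads W(x) = A*exp(0) = A, a constant,
# in agreement with the computed Wronskian -2 (here A = -2).
```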

Finally, we ask: does there exist a linearly independent pair y1, y2 of
solutions of (6.8)? To answer this question, we have the following result.

6.1.10. Theorem: Suppose that q and r belong to C(I), I ⊆ IR. Then


there exists a linearly independent set {y1 , y2 } of solutions of Equation
(6.8) on I.

Proof: It suffices to exhibit such a linearly independent set.
Let x0 ∈ I. Then by the existence and uniqueness theorem, unique solu-
tions y1, y2 of Equation (6.8) exist such that

(i) y1(x0) = A ≠ 0, y1'(x0) = 0,

and

(ii) y2(x0) = 0, y2'(x0) = B ≠ 0.

It is readily seen that

W(y1, y2)(x0) = y1(x0)y2'(x0) − y1'(x0)y2(x0) = AB ≠ 0.

Hence by the second to the last theorem, the solutions y1 , y2 are linearly
independent. This concludes the proof.

6.2. Reduction of Order


The method of reduction of order described in what follows was proba-
bly employed first by the French mathematician Jean d'Alembert (1717
- 1783).

If one solution of a second order homogeneous linear ODE is known, the
method of reduction of order is a procedure for finding a second non-
trivial solution which is linearly independent of the given solution. This
is achieved by reducing the second order homogeneous linear ordinary dif-
ferential equation to a first order linear homogeneous equation.

(1) Procedure: Let y1 be a nontrivial solution of

y'' + qy' + ry = 0.

Set

y2(x) = V(x)y1(x),

where V is some twice differentiable function. Then y2 is a solution of
Equation (6.8) if it satisfies that equation. To obtain the condition, i.e.
a constraint on V under which y2 is a solution of Equation (6.8), we pro-
ceed as follows: We have

y2'(x) = V'(x)y1(x) + V(x)y1'(x)    (6.18)
y2''(x) = V''(x)y1(x) + 2V'(x)y1'(x) + V(x)y1''(x).    (6.19)

Substituting y2 and the expressions in (6.18) and (6.19) above in Equation
(6.8), we obtain:

V(y1'' + qy1' + ry1) + V'(2y1' + qy1) + V''y1 = 0.

Since y1 is by hypothesis a solution of Equation (6.8), the last equation
reduces to

V'' + (q + 2y1'/y1)V' = 0.    (6.20)

Equation (6.20) is a first order homogeneous linear equation for V'. The
solution of Equation (6.20) for V' is given by

V'(x) = A exp( −∫^x (q(t) + 2y1'(t)/y1(t)) dt )    (6.21)
      = A exp( −∫^x q(t)dt ) / y1^2(x)
      = AΦ(x),

where A is an arbitrary constant and Φ(x) = exp(−∫^x q(t)dt)/y1^2(x).
Integrating Equation (6.21), we get

V(x) = A ∫^x Φ(t)dt + B,

where B is a constant. Hence, we have

y2(x) = V(x)y1(x) = Ay1(x) ∫^x Φ(t)dt + By1(x).    (6.22)

To demonstrate the linear independence of y1 and y2, we only need to
show that the Wronskian W(y1, y2) is never vanishing.

From Equation (6.22), we have

y2'(x) = Ay1'(x) ∫^x Φ(t)dt + Ay1(x)Φ(x) + By1'(x),

where we have used the relation:

d/dx ∫_a^x g(t)dt = g(x).

Hence,

W(y1, y2)(x) = y1(x)y2'(x) − y1'(x)y2(x)
  = Ay1(x)y1'(x) ∫^x Φ(t)dt + Ay1^2(x)Φ(x) + By1(x)y1'(x)
    − Ay1'(x)y1(x) ∫^x Φ(t)dt − By1'(x)y1(x)
  = Ay1^2(x)Φ(x).

Since y1^2(x)Φ(x) = exp(−∫^x q(t)dt) is never vanishing, it follows that for
any choice A ≠ 0 the Wronskian W(y1, y2) is never vanishing. This con-
cludes the determination of y2.

6.2.1. Example: (a) Show that y1(x) = x^(−2) is a solution of the equation:

x^2 y'' + 2xy' − 2y = 0,  ∀x ∈ IR, x ≠ 0.

(b) Find a second linearly independent solution of the ODE.

Solution: (a) If y1(x) = x^(−2), then y1'(x) = −2x^(−3) and y1''(x) = 6x^(−4).
Substituting these expressions in the left hand side of the given equation,
we have:

x^2 y1'' + 2xy1' − 2y1 = x^2(6x^(−4)) + 2x(−2x^(−3)) − 2x^(−2) = 6x^(−2) − 4x^(−2) − 2x^(−2) = 0.

Hence y1(x) = x^(−2) is indeed a solution of the given equation.

(b) To obtain a second solution y2, set

y2(x) = V(x)y1(x) = V(x)x^(−2).

Then,

y2'(x) = −2V(x)x^(−3) + V'(x)x^(−2)
y2''(x) = 6V(x)x^(−4) − 2V'(x)x^(−3) + V''(x)x^(−2) − 2V'(x)x^(−3)
        = 6V(x)x^(−4) − 4V'(x)x^(−3) + V''(x)x^(−2).

If y2 is indeed a solution of the given equation, by substituting these
expressions in the given equation we must have

6V(x)x^(−2) − 4V'(x)x^(−1) + V''(x) − 4V(x)x^(−2) + 2V'(x)x^(−1) − 2V(x)x^(−2) = 0.

Therefore,

V''(x) − 2V'(x)x^(−1) = 0.

Now set V' = Z; then V'' = Z', and the last equation becomes

Z' − 2Zx^(−1) = 0.

That is,

Z'/Z = 2/x.

Integrating, we have

ln Z − ln A = ln x^2.

Therefore,

Z = Ax^2,

where A is a constant. But Z = V', so finally we have

V' = Ax^2
V = (A/3)x^3 + B,

where A and B are constants. Hence the second solution is given by:

y2(x) = V(x)y1(x) = (Cx^3 + B)x^(−2) = Cx + Bx^(−2),

where C = A/3. Often, we omit the constant B; in that case we find
y2(x) = Cx, and we may take C = 1 so that y2(x) = x.

Now to settle the question of linear independence, with y2(x) = x we have

y1'(x) = −2x^(−3),  y2'(x) = 1.

Hence,

W(y1, y2)(x) = y1(x)y2'(x) − y1'(x)y2(x)
            = x^(−2) · 1 − (−2x^(−3))x
            = x^(−2) + 2x^(−2)
            = 3x^(−2).

Thus, the Wronskian of y1 and y2 never vanishes since x ∈ IR\{0}, and we
conclude that y1(x) = x^(−2) and y2(x) = x are linearly independent solutions
of the given ODE.
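The computations of this example can be replayed symbolically. The following sketch (assuming sympy) confirms that both functions solve the equation and that their Wronskian is 3x^(−2):

```python
import sympy as sp

x = sp.symbols('x', positive=True)
y1, y2 = x**-2, x

# Left hand side of x^2 y'' + 2x y' - 2y = 0
ode = lambda y: sp.simplify(x**2 * sp.diff(y, x, 2) + 2*x*sp.diff(y, x) - 2*y)
print(ode(y1), ode(y2))  # 0 0: both are solutions

# The Wronskian never vanishes for x != 0
W = sp.simplify(y1 * sp.diff(y2, x) - sp.diff(y1, x) * y2)
print(W)  # 3/x**2
```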

6.3. Exercises
In the following problems,
(i) Show in each case that the given function is a solution of the given
equation.

(ii) Find in each case a second linearly independent solution.

(iii) Indicate in each case the interval of validity of the general solu-
tion.

(1) y'' − 4y' − 12y = 0,  y1(x) = e^(6x)

(2) y'' + 2y' + y = 0,  y1(x) = e^(−x)

(3) x^2 y'' + 2xy' = 0,  y1(x) = 1

(4) y'' + 4y' + 4y = 0,  y1(x) = e^(−2x)

(5) y'' − 7y' + 12y = 0,  y1(x) = e^(3x)

(6) y'' + 2y' + 5y = 0,  y1(x) = e^(−x) sin 2x

(7) (1 − x^2)y'' − 2xy' + 6y = 0,  y1(x) = 3x^2 − 1

(8) x^2 y'' + xy' + (x^2 − 1/4)y = 0,  y1(x) = x^(−1/2) sin x

6.4. The Non-Homogeneous Equation

This section is devoted to the study of non-homogeneous second order
linear ordinary differential equations of the form:

L(D)y = y'' + qy' + ry = s.    (6.23)

Before discussing the solution of Equation (6.23), we shall study some
qualitative aspects of the solution.

6.4.1. Theorem: Let q, r, s be continuous functions. Then the differ-
ence of any two solutions of Equation (6.23) is a solution of its associated
homogeneous equation given by:

L(D)y = y'' + qy' + ry = 0.    (6.24)

Proof: Let y1 and y2 be two solutions of Equation (6.23). Then

L(D)y1 = s(x)

and
L(D)y2 = s(x).
If we let
U = y1 − y2 ,
then

L(D)U = L(D)(y1 − y2 )
= L(D)y1 − L(D)y2
= s(x) − s(x)
= 0.

Hence, U is a solution of (6.24) as claimed.

The foregoing discussion leads us to the following theorem:

6.4.2. Theorem: Let q, r and s be continuous functions. Then any solu-


tion y of Equation (6.23) may be expressed in the form:

y(x) = yp (x) + a1 y1 (x) + a2 y2 (x), (6.25)

where yp is an arbitrary solution of Equation (6.23) and y1 and y2 are


linearly independent solutions of Equation (6.24).

Remark: The linear combination given by Equation (6.25) is often called


general solution of Equation (6.23). This general solution is seen to con-
sist of two parts: (a) yp which is any solution of Equation (6.23). This
solution is called a particular solution of Equation (6.23), (b) the linear
combination a1 y1 + a2 y2 which is often called the complementary solution of
Equation (6.24).

We remark that the particular solution is by no means unique. This is


because if y is a particular solution of Equation (6.23) and Z is a solution

of Equation (6.24), then y + Z is also a particular solution of Equation
(6.23).

Note that if

y'' + qy' + ry = s1 + s2 + · · · + sn = Σ_{i=1}^{n} si    (6.26)

and ypi is a particular solution of the equation

y'' + qy' + ry = si,  i = 1, 2, · · · , n,

then the sum

yp = Σ_{i=1}^{n} ypi

is a particular solution of Equation (6.26). This can be established as
follows: Since

L(D)ypi = si,  i = 1, 2, · · · , n,

then

L(D)yp = L(D) Σ_{i=1}^{n} ypi = Σ_{i=1}^{n} L(D)ypi = Σ_{i=1}^{n} si.

Thus

y(x) = Σ_{i=1}^{n} ypi + yc

is the general solution of Equation (6.26), where yc is the complementary
solution that solves L(D)y = 0.

For example, if

s(x) = sin((n + 1/2)x) / (2 sin(x/2)) = 1/2 + cos x + cos 2x + · · · + cos nx,

then it is easier to use this expansion as a sum and find a particular
solution one term at a time.
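The superposition principle for particular solutions can be checked on a small example. The sketch below (assuming sympy) uses L(D) = D^2 + 1 with the right hand side split as s1 = 1 and s2 = x; the particular solutions yp1 = 1 and yp2 = x are hypothetical choices made for illustration:

```python
import sympy as sp

x = sp.symbols('x')
L = lambda y: sp.diff(y, x, 2) + y  # the operator L(D) = D^2 + 1

# Particular solutions for the pieces s1 = 1 and s2 = x separately
yp1, yp2 = sp.Integer(1), x
print(L(yp1), L(yp2))  # 1 x

# Their sum is a particular solution for s1 + s2 = 1 + x
yp = yp1 + yp2
print(sp.simplify(L(yp) - (1 + x)))  # 0
```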

6.5. The Method of Variation of Parameters


In the foregoing, we remarked that a general solution of Equation (6.23)
is a sum of the form:
y(x) = yp (x) + yc (x),
where yc is the complementary solution of Equation (6.24) and yp is a
particular solution of Equation (6.23). Here, we shall discuss a general
procedure called the method of variation of parameters, for obtaining a par-
ticular solution of Equation (6.23). Note that our analysis and study in
this section complements that of Chapter five.

Let y1 and y2 be two linearly independent solutions of Equation (6.24).
Then a general solution of Equation (6.24) is

yc = a1y1 + a2y2,

where a1, a2 are arbitrary constants.

The method of variation of parameters involves the replacement of con-


stants a1 and a2 by functions U1 and U2 , that is, we look for a particular
solution of Equation (6.23) of the form :
yp (x) = U1 (x)y1 (x) + U2 (x)y2 (x). (6.27)
The functions U1 and U2 occurring in (6.27) must be determined. To
do this, two conditions must be imposed on U1 , U2 . One of these two
conditions is already imposed, that is, Expression (6.27) be a solution of
Equation (6.23). Another condition on U1 and U2 is now chosen arbitrar-
ily and this is usually done in such a way as to simplify the subsequent
analysis.

From Equation (6.27), we have

yp' = (U1y1' + U2y2') + (U1'y1 + U2'y2).    (6.28)

As one of the two conditions on U1, U2 we may now require that

U1'y1 + U2'y2 = 0.    (6.29)

Then Equation (6.28) becomes

yp' = U1y1' + U2y2'.    (6.30)

Differentiating Equation (6.30), we get

yp'' = U1'y1' + U2'y2' + U1y1'' + U2y2''.    (6.31)

Substituting Equations (6.27), (6.30), (6.31) in Equation (6.23), we obtain

U1(y1'' + qy1' + ry1) + U2(y2'' + qy2' + ry2) + U1'y1' + U2'y2' = s.

But since y1, y2 are solutions of Equation (6.24) by hypothesis, the last
equation becomes

U1'y1' + U2'y2' = s.    (6.32)

Equations (6.29) and (6.32) are two conditions on U1', U2'. We reproduce
them below for convenience:

y1U1' + y2U2' = 0
y1'U1' + y2'U2' = s.    (6.33)

That is,

( y1   y2  ) ( U1' )   ( 0 )
( y1'  y2' ) ( U2' ) = ( s ).

Solving (6.33) for U1' and U2', we have

U1' = −y2 s / W(y1, y2)

and

U2' = y1 s / W(y1, y2),    (6.34)

where

W(y1, y2) = y1y2' − y1'y2 ≠ 0

is the Wronskian of y1, y2. The Wronskian is different from zero since y1
and y2 are linearly independent on the interval I of solution.
Integrating (6.34), we get

U1(x) = −∫^x [ y2(t)s(t) / W(y1, y2)(t) ] dt

and

U2(x) = ∫^x [ y1(t)s(t) / W(y1, y2)(t) ] dt.

Hence Equation (6.27) becomes

yp(x) = ∫^x { [y1(t)y2(x) − y1(x)y2(t)] s(t) / W(y1, y2)(t) } dt.    (6.35)
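Formula (6.35) lends itself to direct implementation. The sketch below (assuming sympy) wraps it in a small function and tests it on the illustrative equation y'' − y = e^(2x), whose homogeneous solutions are e^x and e^(−x):

```python
import sympy as sp

x, t = sp.symbols('x t')

def particular(y1, y2, s, x0=0):
    """Particular solution from Equation (6.35):
    yp(x) = integral of [y1(t)y2(x) - y1(x)y2(t)] s(t) / W(y1, y2)(t)."""
    W = y1 * sp.diff(y2, x) - sp.diff(y1, x) * y2
    integrand = (y1.subs(x, t) * y2 - y1 * y2.subs(x, t)) * s.subs(x, t) / W.subs(x, t)
    return sp.integrate(integrand, (t, x0, x))

# Illustrative test problem: y'' - y = exp(2x)
yp = particular(sp.exp(x), sp.exp(-x), sp.exp(2 * x))
residual = sp.simplify(sp.diff(yp, x, 2) - yp - sp.exp(2 * x))
print(residual)  # 0
```

The residual vanishes identically, so the function returns a genuine particular solution; the choice of lower limit x0 only changes yp by a solution of the homogeneous equation.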

6.5.1. Example: Show that y1(x) = sin x is a solution of the differential
equation:

y'' + y = 0

and obtain a second linearly independent solution of the same equation.
Hence or otherwise obtain the general solution of the non-homogeneous
equation:

y'' + y = cot x.

Solution: To show that y1(x) = sin x is a solution of the given homogeneous
equation, we compute y1'(x) = cos x and y1''(x) = − sin x; thus y1''(x) + y1(x) = 0
as required.

The second linearly independent solution is y2(x) = cos x. This may be
obtained by the method of reduction of order.
Hence a general solution of the homogeneous equation is

y = a1 sin x + a2 cos x.
To obtain a particular solution of the non-homogeneous equation, we as-
sume that yp(x) is of the form:

yp(x) = U1(x) sin x + U2(x) cos x.

Then

yp'(x) = U1(x) cos x − U2(x) sin x + (U1'(x) sin x + U2'(x) cos x).

If we set the sum in parentheses equal to zero, we obtain

U1'(x) sin x + U2'(x) cos x = 0.

This gives one condition on U1 and U2.

With this condition, yp'(x) reduces to

yp'(x) = U1(x) cos x − U2(x) sin x,

and differentiating again, we have

yp''(x) = −(U1(x) + U2'(x)) sin x + (U1'(x) − U2(x)) cos x.

Substituting these expressions for yp(x) and yp''(x) in the given non-
homogeneous ODE, we have

−(U1(x) + U2'(x)) sin x + (U1'(x) − U2(x)) cos x + U1(x) sin x + U2(x) cos x = cot x

or

U1'(x) cos x − U2'(x) sin x = cot x.

Combining these conditions, we have:

U1'(x) sin x + U2'(x) cos x = 0
U1'(x) cos x − U2'(x) sin x = cot x,

or in matrix form:

( sin x    cos x ) ( U1' )   (   0   )
( cos x  − sin x ) ( U2' ) = ( cot x ).
0 0
Solving for U1' and U2' by Cramer's rule, and noting that the determinant
of the coefficient matrix is −sin^2 x − cos^2 x = −1, we get

U1' = [ 0 · (− sin x) − cos x · cot x ] / (−1) = cos x cot x

and

U2' = [ sin x · cot x − cos x · 0 ] / (−1) = − sin x cot x = − cos x.

Integrating, we have

U1(x) = ∫^x cos φ cot φ dφ = ∫^x (cos^2 φ / sin φ) dφ
      = ∫^x ( (1 − sin^2 φ) / sin φ ) dφ
      = ∫^x ( 1/sin φ − sin φ ) dφ.

To evaluate the last integral, we shall employ the identity:

∫^x (1/sin φ) dφ = ∫^x 1/(2 sin(φ/2) cos(φ/2)) dφ.

By dividing the numerator and denominator of the integrand on the right
hand side by cos^2(φ/2), the integral can further be expressed as

∫^x ( (1/2) sec^2(φ/2) / tan(φ/2) ) dφ = ln[tan(x/2)].

Therefore,

U1(x) = ∫^x ( 1/sin φ − sin φ ) dφ = ln(tan(x/2)) + cos x,

U2(x) = −∫^x cos φ dφ = − sin x.

Hence, the particular solution yp(x) is given by

yp(x) = U1(x) sin x + U2(x) cos x
      = sin x [ ln(tan(x/2)) + cos x ] − sin x cos x
      = sin x ln(tan(x/2)).

Thus, the general solution of the given non-homogeneous equation is

y(x) = a1 sin x + a2 cos x + sin x ln(tan(x/2)),

where a1, a2 are arbitrary constants.
6.5.2. Example: Prove that a particular solution of the equation xy'' − y' =
f(x) on the interval [1, ∞) is given by

yp(x) = (1/2) ∫_1^x ( (x^2 − t^2)/t^2 ) f(t) dt.

Solution: Consider first the homogeneous part of the given ODE, that is,

xy'' − y' = 0.

Then

xy'' = y'

and

y''/y' = 1/x.

Integrating, we have

ln(y'/a1) = ln x.

Therefore,

y'/a1 = x

or

y' = a1x.

Finally we have:

y = (a1/2)x^2 + a2 = a1(x^2/2) + a2.

This gives two solutions: y1(x) = x^2/2 and y2(x) = 1. Hence, a general
solution of the homogeneous problem is:

yc = a1(x^2/2) + a2 · 1.
2
To obtain a particular solution yp of the non-homogeneous equation, we
assume that yp(x) is of the form:

yp(x) = U1(x)(x^2/2) + U2(x).

Then

yp'(x) = U1'(x)(x^2/2) + U2'(x) + U1(x)x.

As a first condition on U1 and U2, we set

U1'(x)(x^2/2) + U2'(x) = 0.    (α)

Then yp'(x) becomes

yp'(x) = U1(x)x,

and therefore we have

yp''(x) = U1'(x)x + U1(x).

If yp is a solution of the given non-homogeneous problem, then

xyp'' − yp' = f.

That is,

U1'(x)x^2 + U1(x)x − U1(x)x = f.

Hence, we obtain

U1'(x)x^2 = f(x)

or

U1'(x) = f(x)/x^2,

and from Equation (α), we get

U2'(x) = −(1/2)U1'(x)x^2 = −(1/2)f(x).

Integrating U1' and U2', we get:

U1(x) = ∫_1^x (f(t)/t^2) dt,   U2(x) = −(1/2) ∫_1^x f(t) dt.
Hence,

yp(x) = U1(x)(x^2/2) + U2(x)
      = (x^2/2) ∫_1^x (f(t)/t^2) dt − (1/2) ∫_1^x f(t) dt
      = ∫_1^x [ x^2 f(t)/(2t^2) − f(t)/2 ] dt
      = ∫_1^x [ x^2/(2t^2) − 1/2 ] f(t) dt
      = (1/2) ∫_1^x ( (x^2 − t^2)/t^2 ) f(t) dt.

This is the required integral representation for yp(x).
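The integral representation can be tested against a concrete right hand side. In the sketch below (assuming sympy), f(t) = t^3 is a hypothetical choice made purely for illustration:

```python
import sympy as sp

x, t = sp.symbols('x t', positive=True)
f = lambda u: u**3  # hypothetical right hand side chosen for the test

# yp from the integral representation, with lower limit 1 as in the example
yp = sp.integrate((x**2 - t**2) / t**2 * f(t), (t, 1, x)) / 2

# Check the ODE x*y'' - y' = f(x)
residual = sp.simplify(x * sp.diff(yp, x, 2) - sp.diff(yp, x) - f(x))
print(residual)  # 0
```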

6.6. Exercises
(1) What do you understand by an ordinary differential equation and its
solution?

(2) Let q and r be real valued continuous functions on some open in-
terval I ⊆ IR. Consider the equation

y''(x) + qy'(x) + ry(x) = 0,  x ∈ I.    (β)

Suppose that y1 and y2 are two solutions of (β) such that

y1(x)y2'(x) − y1'(x)y2(x) ≠ 0,  ∀x ∈ I.

Prove that y1 and y2 are linearly independent.

(3) If W(y1, y2) denotes the Wronskian of two solutions y1 and y2 of (β),
prove that W(y1, y2) either vanishes identically on I or is never vanishing
on I.

(4) Let λ be a nonzero constant. Suppose that f is a real valued contin-
uous function on an open interval I. Prove that a particular solution yp
of the differential equation:

y'' − λ^2 y = f, on I,

is given by

yp(x) = (1/λ) ∫^x [ sinh λ(x − t) ] f(t) dt,  x ∈ I.

Note: Recall that

sinh ξ = (1/2)(e^ξ − e^(−ξ)).

Chapter Seven

Series Solutions of Second Order Linear Ordinary
Differential Equations

7.1. Introduction:

In this chapter and the next, we shall study series solutions of second
order linear ordinary differential equations. We shall first be concerned
with solutions near ordinary points of the differential equations, while
chapter eight is devoted to series solutions near regular singular points.
These concepts of points shall be defined in what follows.
Recall that the most general linear ordinary differential equation of sec-
ond order is of the form:

P(x) d^2y/dx^2 + Q(x) dy/dx + R(x)y = S(x).    (7.1)

In this chapter, we shall demonstrate how to obtain series solutions of
Equation (7.1) whenever they exist. In the sequel, it will fortunately
suffice to consider the homogeneous ODE:

P(x) d^2y/dx^2 + Q(x) dy/dx + R(x)y = 0,  x ∈ I ⊆ IR,    (7.2)

instead of (7.1). We will consider solutions of Equation (7.2) in the neigh-
bourhood of a point x0. If P(x0) ≠ 0, then x0 is called an ordinary point
of Equation (7.2). For example, if P(x) = e^(−x), then x0 = 0 is an ordinary
point of Equation (7.2) since P(x0) = 1. If P(x0) = 0, then we shall say
that x0 is a singular point of Equation (7.2). In either case, the method of
solution involves expressing y as an infinite series in powers of x − x0,
where x0 is some specified point. It is therefore convenient to give a
cursory review of aspects of the theory of infinite series.

7.1.1. Definition: An infinite power series

Σ_{n=0}^{∞} an(x − x0)^n    (7.3)

is said to converge at a point x if

lim_{m→∞} Σ_{n=0}^{m} an(x − x0)^n

exists. It is clear that the series converges at the point x = x0.

A series may converge for all x or it may converge only for some values of
x. For example,

Σ_{n=0}^{∞} (x − x0)^n / n! = e^(x−x0),  ∀x,    (7.4)

that is, the exponential series converges for all x.

The power series Σ_{n=0}^{∞} an(x − x0)^n is said to converge absolutely at a point
x if the series Σ_{n=0}^{∞} |an(x − x0)^n| converges. Absolute convergence implies
convergence but the converse may fail.

Remark (a): A useful test for absolute convergence is the ratio test.
If for a fixed value of x,

lim_{n→∞} |a_{n+1}(x − x0)^{n+1}| / |an(x − x0)^n| = r,    (7.5)

then the power series Σ_{n=0}^{∞} an(x − x0)^n converges absolutely at x if r < 1 and
diverges if r > 1. If r = 1, then the series may or may not converge.

For example, the power series Σ_{n=0}^{∞} (x − x0)^n / n! converges absolutely and
therefore converges. To establish the claim, we apply the ratio test:

|(x − x0)^{n+1}/(n + 1)!| / |(x − x0)^n/n!| = ( |x − x0|^{n+1}/(n + 1)! ) · ( n!/|x − x0|^n )
                                            = |x − x0|/(n + 1) → 0 as n → ∞,

so r = 0 < 1 for every fixed x.
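The behaviour of the ratios and of the partial sums can be seen numerically. The following sketch evaluates the exponential series at the illustrative point x = 5, x0 = 0:

```python
import math

# Illustrative point: x = 5, centre x0 = 0
x, x0 = 5.0, 0.0

# Ratios |x - x0| / (n + 1) from Equation (7.5); they tend to 0, so r = 0 < 1
ratios = [abs(x - x0) / (n + 1) for n in range(10)]
print(ratios[0], ratios[-1])  # 5.0 then 0.5, steadily decreasing

# Partial sums of the series (7.4) approach e^(x - x0)
partial = sum((x - x0)**n / math.factorial(n) for n in range(50))
print(partial, math.exp(x - x0))
```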
(b) If the series

Σ_{n=0}^{∞} an(x − x0)^n

converges absolutely at x = x1, then it converges absolutely at all points
x such that |x − x0| < |x1 − x0|, and it diverges at all points x such that
|x − x0| > |x1 − x0|.

(c) There is a number ρ, called the radius of convergence, such that
the series

Σ_{n=0}^{∞} an(x − x0)^n

converges absolutely for all x ∈ I = {x ∈ IR : |x − x0| < ρ} and diverges for
all x such that |x − x0| > ρ. For a series that converges nowhere except at
x = x0 we define ρ to be zero. For a series that converges for all x we define
ρ to be infinite, that is, ρ = ∞.
(d) If the power series Σ_{n=0}^{∞} an(x − x0)^n and Σ_{n=0}^{∞} bn(x − x0)^n converge abso-
lutely to f(x) and g(x) respectively for all x such that |x − x0| < ρ, ρ > 0,
then the following statements are true for all x such that |x − x0| < ρ.

(i) The series can be added or subtracted termwise and

f(x) ± g(x) = Σ_{n=0}^{∞} (an ± bn)(x − x0)^n.

(ii) The series may be multiplied pointwise, and we have:

f(x)g(x) = [ Σ_{n=0}^{∞} an(x − x0)^n ][ Σ_{n=0}^{∞} bn(x − x0)^n ].

(iii) If g(x) ≠ 0 in I = {x : |x − x0| < ρ}, then the series can be divided and

f(x)/g(x) = Σ_{n=0}^{∞} dn(x − x0)^n,  x ∈ I.

The coefficients dn depend on an and bn in a complicated way, and the
radius of convergence of the series arising from the division may be less
than ρ.

(e) Suppose that the series Σ_{n=0}^{∞} an(x − x0)^n converges absolutely to a func-
tion f in I = {x : |x − x0| < ρ}. Then the function f is continuous, to-
gether with its derivatives of all orders, in I, i.e. f ∈ C^∞(I). Furthermore,
the derivatives f', f'', · · · may be computed by differentiating the series
termwise, and each of the series defining the derivatives converges abso-
lutely in I. Thus

f(x) = Σ_{n=0}^{∞} an(x − x0)^n,  on I = {x : |x − x0| < ρ}.

If the value of an is given by

an = f^(n)(x0)/n!,

the series is called the Taylor series of f about x = x0 (Brook Taylor, 1685
- 1731).

(f) If

Σ_{n=0}^{∞} an(x − x0)^n = Σ_{n=0}^{∞} bn(x − x0)^n,  ∀x ∈ I,

then

an = bn, for n = 0, 1, 2, · · ·

In particular, if

Σ_{n=0}^{∞} an(x − x0)^n = 0,  ∀x ∈ I,

then

a0 = a1 = · · · = an = · · · = 0.

Definition: A function f which has a Taylor series expansion about x = x0,
i.e.

f(x) = Σ_{n=0}^{∞} ( f^(n)(x0)/n! )(x − x0)^n,

with radius of convergence ρ > 0 is said to be analytic at the point x = x0.
Thus, taking (d) above into account, it follows that if f and g are analytic
at x0, then f ± g, f · g and f/g (provided g(x0) ≠ 0) are analytic at x = x0.
Polynomials are analytic at every point, and rational functions are ana-
lytic at every point except the zeros of their denominators.

7.2. Series Solutions Near an Ordinary Point

In what follows, we shall illustrate the method of solving the homogeneous
equation:

P(x)y'' + Q(x)y' + R(x)y = 0    (7.6)

in the case where the functions P, Q, R are polynomials, by considering
several specific examples. We recall that in solving Equation (7.6) in the
neighborhood of an ordinary point x0 in IR, the function P in Equation
(7.6) satisfies P(x0) ≠ 0. Since P, Q, R are polynomials, P is continuous
and there exists an interval I about x0 on which P is never vanishing.
Thus in the said interval about x0, the rational functions Q/P and R/P are
continuous. Hence there exists a unique solution of Equation (7.6) satis-
fying y(x0) = y0 and y'(x0) = y0' for an arbitrary choice of (y0, y0') ∈ IR^2.
To solve the equation, we look for solutions of the form:

y = Σ_{n=0}^{∞} an(x − x0)^n.    (7.7)

In this connection, two questions immediately come to mind.

(i) How do we determine explicitly the expansion coefficients {an}_{n=0}^{∞}?

(ii) What is the radius of convergence of the series given in (7.7)?

We know already that if there exists a ρ > 0 such that the series (7.7)
converges for all x satisfying |x − x0| < ρ, then y is analytic in I = {x :
|x − x0| < ρ}. Below, we discuss question (ii) much more deeply. In the
meantime, we show by examples how to answer question (i).

7.2.1. Example: Solve the differential equation

y'' − 2xy' + λy = 0,    (7.8)

where λ is a constant.

Remark: The differential equation (7.8) is called the Hermite equation
(Charles Hermite, 1822 - 1901).
To solve the equation, notice that P(x) ≡ 1, Q(x) = −2x and R(x) = λ.
Hence, every point of IR is an ordinary point of the ODE. Let us find
a solution of Equation (7.8) in the neighborhood of x0 = 0. Assume the
solution in the form:

y = Σ_{n=0}^{∞} an(x − x0)^n.

Since x0 = 0, we have

y = Σ_{n=0}^{∞} an x^n.    (7.9)

Differentiating term by term, we have

y'(x) = Σ_{n=0}^{∞} n an x^{n−1} = Σ_{n=1}^{∞} n an x^{n−1};    (7.10)

putting n − 1 = m gives n = m + 1, and n = 1 corresponds to m = 0 while
n = ∞ corresponds to m = ∞. Thus (renaming m back to n) Equation
(7.10) becomes:

y'(x) = Σ_{n=0}^{∞} (n + 1)a_{n+1} x^n.

Differentiating again,

y''(x) = Σ_{n=0}^{∞} (n + 1)n a_{n+1} x^{n−1}
       = Σ_{n=1}^{∞} (n + 1)n a_{n+1} x^{n−1}
       = Σ_{n=0}^{∞} (n + 2)(n + 1)a_{n+2} x^n.    (7.11)

Substituting (7.9), (7.10) and (7.11) in Equation (7.8), we obtain:

Σ_{n=0}^{∞} (n + 2)(n + 1)a_{n+2} x^n − Σ_{n=0}^{∞} 2(n + 1)a_{n+1} x^{n+1} + Σ_{n=0}^{∞} λ an x^n = 0,

that is,

Σ_{n=0}^{∞} (n + 2)(n + 1)a_{n+2} x^n − Σ_{n=1}^{∞} 2n an x^n + Σ_{n=0}^{∞} λ an x^n = 0,

that is,

2a2 + λa0 + Σ_{n=1}^{∞} [ (n + 2)(n + 1)a_{n+2} − 2n an + λ an ] x^n = 0.

Hence, we have

(i) a2 = −(λ/2)a0 and (ii) a_{n+2} = ( (2n − λ)/((n + 2)(n + 1)) ) an, n ≥ 1.

In fact (i) and (ii) can be combined into the following:

a_{n+2} = ( (2n − λ)/((n + 2)(n + 1)) ) an,  n ≥ 0.    (7.12)

Equation (7.12) is called a recurrence relation. Substituting various val-
ues of n in Equation (7.12), we get

a2 = −(λ/2)a0
a3 = ( (2 − λ)/(2·3) )a1
a4 = ( (4 − λ)/(3·4) )a2 = −( (4 − λ)λ/(2·3·4) )a0
a5 = ( (6 − λ)/(4·5) )a3 = ( (6 − λ)(2 − λ)/(2·3·4·5) )a1.

Thus, we see that a0 and a1 are arbitrary. By grouping the odd and
even terms separately, the formal series solution of the Hermite differen-
tial equation is therefore given by:

y = a0 [ 1 − (λ/2!)x^2 − ((4 − λ)λ/4!)x^4 − ((8 − λ)(4 − λ)λ/6!)x^6 − · · · ]
  + a1 [ x + ((2 − λ)/3!)x^3 + ((6 − λ)(2 − λ)/5!)x^5 + ((10 − λ)(6 − λ)(2 − λ)/7!)x^7 + · · · ]
  = a0 y1(x) + a1 y2(x).

Remark: (i) The series defining y1(x) and y2(x) respectively converge for
all x.
(ii) Notice that if λ is a nonnegative even integer, then one or the other
of the series y1(x) and y2(x) terminates, giving a polynomial solution. The
polynomial solution corresponding to λ = 2k is, up to a normalizing con-
stant, the Hermite polynomial Hk(x) of degree k.
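The recurrence (7.12) is easy to drive programmatically. The sketch below (assuming sympy; the helper hermite_series_even is a hypothetical name) generates the even series for λ = 4 and observes that it terminates, reproducing H2(x) = 4x^2 − 2 up to the factor −2:

```python
import sympy as sp

x = sp.symbols('x')

def hermite_series_even(lmbda, terms=6):
    """Build the even part of the series solution from the recurrence
    a_{n+2} = (2n - lambda) a_n / ((n+2)(n+1)), starting from a0 = 1."""
    a = {0: sp.Integer(1)}
    for n in range(0, 2 * terms, 2):
        a[n + 2] = sp.Rational(2 * n - lmbda, (n + 2) * (n + 1)) * a[n]
    return sp.expand(sum(c * x**n for n, c in a.items()))

# For lambda = 4 the even series terminates at degree 2: y1 = 1 - 2x^2
y1 = hermite_series_even(4)
print(y1)

# y1 solves y'' - 2x y' + 4y = 0
print(sp.simplify(sp.diff(y1, x, 2) - 2*x*sp.diff(y1, x) + 4*y1))  # 0
```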

7.2.2. Example: The Legendre Equation
Solve the Legendre equation given by:

(1 − x^2)y'' − 2xy' + α(α + 1)y = 0.    (7.13)

Solution: Here the singular points of the differential equation are ±1, since
P(±1) = 0. Every other point is an ordinary point.
Taking x0 = 0 as the ordinary point about which we want a solution, we
may assume a solution of the form (7.9), that is,

y = Σ_{n=0}^{∞} an x^n.

Substituting (7.9), (7.10) and (7.11) in (7.13), and reindexing as before,
we get

Σ_{n=0}^{∞} (n + 2)(n + 1)a_{n+2} x^n − Σ_{n=0}^{∞} n(n − 1)an x^n − Σ_{n=0}^{∞} 2n an x^n + Σ_{n=0}^{∞} α(α + 1)an x^n = 0,

where the second sum comes from the term −x^2 y'' and the third from
−2xy'. Collecting terms and noting that n(n − 1) + 2n = n(n + 1), this may
be written as

Σ_{n=0}^{∞} [ (n + 2)(n + 1)a_{n+2} + α(α + 1)an ] x^n = Σ_{n=0}^{∞} n(n + 1)an x^n.    (7.14)

Equating coefficients of like powers of x on both sides of (7.14), we get:

(n + 2)(n + 1)a_{n+2} + α(α + 1)an = n(n + 1)an,

giving

a_{n+2} = ( (n(n + 1) − α(α + 1)) / ((n + 1)(n + 2)) ) an.

Since
$$n(n+1) - \alpha(\alpha+1) = -(\alpha-n)(\alpha+n+1),$$
then from the previous expression
$$a_{n+2} = -\frac{(\alpha-n)(\alpha+n+1)}{(n+1)(n+2)}\,a_n. \tag{7.15}$$
Using (7.15) in (7.9) we get:
$$y(x) = a_0\Bigl[1 - \frac{\alpha(\alpha+1)}{2!}x^2 + \frac{\alpha(\alpha+1)(\alpha-2)(\alpha+3)}{4!}x^4 - \cdots\Bigr] + a_1\Bigl[x - \frac{(\alpha-1)(\alpha+2)}{3!}x^3 + \frac{(\alpha-1)(\alpha+2)(\alpha-3)(\alpha+4)}{5!}x^5 - \cdots\Bigr] = a_0y_1(x) + a_1y_2(x).$$
The two series occurring on the right hand side above converge for |x| < 1.
We remark that polynomial solutions of the Legendre equation may be obtained if α is chosen to be an integer. The polynomials obtained in this way are called Legendre polynomials and are denoted by Pk(x), x ∈ (−1, 1), k = 0, 1, 2, ···
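The same termination can be observed numerically for the recurrence (7.15); the sketch below (names ours, leading coefficient normalized to 1) builds either the even or the odd series:

```python
from fractions import Fraction

def legendre_series_coeffs(alpha, parity, n_max):
    """Even (parity=0) or odd (parity=1) series generated by (7.15):
    a_{n+2} = -(alpha - n)(alpha + n + 1) a_n / ((n+1)(n+2))."""
    coeffs = {parity: Fraction(1)}
    n = parity
    while n + 2 <= n_max:
        coeffs[n + 2] = -Fraction((alpha - n) * (alpha + n + 1),
                                  (n + 1) * (n + 2)) * coeffs[n]
        n += 2
    return coeffs

# alpha = 2: the even series stops at 1 - 3x^2, a multiple of P_2(x).
print(legendre_series_coeffs(2, 0, 8))
```

For α = 2 the even series stops at 1 − 3x², which is −2P2(x); the odd series for integer odd α terminates in the same way.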

7.3. Orthogonality of Legendre Polynomials


Let C(I) be the space of continuous functions on the finite interval I = (a, b). Then the mapping ⟨·, ·⟩ of C(I) × C(I) to IR given by
$$\langle f, g\rangle = \int_a^bf(x)g(x)\omega(x)\,dx$$
is called an inner product or scalar product on C(I) with weight function ω. Two functions f1 and f2 in C(I) are said to be orthogonal if ⟨f1, f2⟩ = 0.

7.3.1. Example: Let I = [−π, π]. Introduce the inner product ⟨·, ·⟩ defined by
$$\langle f_1, f_2\rangle = \int_{-\pi}^{\pi}f_1(x)f_2(x)\,dx.$$

Then we claim that the functions f1 (x) = sin nx and f2 (x) = cos mx, where
n and m are arbitrary integers, are orthogonal relative to the given inner
product. To see this, notice that
$$\langle f_1, f_2\rangle = \int_{-\pi}^{\pi}f_1(x)f_2(x)\,dx = \int_{-\pi}^{\pi}\sin nx\cos mx\,dx.$$
Since
$$\sin nx\cos mx = \frac{1}{2}\bigl[\sin(n+m)x + \sin(n-m)x\bigr],$$
then, for n ≠ ±m,
$$\langle f_1, f_2\rangle = \frac{1}{2}\int_{-\pi}^{\pi}\bigl[\sin(n+m)x + \sin(n-m)x\bigr]dx = \frac{1}{2}\Bigl[-\frac{\cos(n+m)x}{n+m} - \frac{\cos(n-m)x}{n-m}\Bigr]_{-\pi}^{\pi} = 0,$$
since cosine is an even function, so the evaluations at π and −π cancel. When n = ±m, the integrand reduces to ±(1/2)sin 2nx, whose integral over [−π, π] is also zero. So f1(x) and f2(x) are orthogonal.

Remark: The Legendre polynomials are orthogonal in the inner product ⟨·, ·⟩ on C(I), I = (−1, 1), given by
$$\langle f, g\rangle = \int_{-1}^{1}f(x)g(x)\,dx, \qquad f, g \in C(I).$$
That is, given two distinct Legendre polynomials Pk(x), Pj(x) on I, we have
$$\int_{-1}^{1}P_k(x)P_j(x)\,dx = 0, \qquad j \neq k.$$

To verify this assertion, notice that Pk satisfies Equation (7.13) for each k. The Legendre equation (7.13), given by
$$(1-x^2)y'' - 2xy' + \alpha(\alpha+1)y = 0,$$
may be written as
$$\frac{d}{dx}\bigl[(1-x^2)y'\bigr] + \alpha(\alpha+1)y = 0.$$
Hence,
$$\frac{d}{dx}\bigl[(1-x^2)P_k'(x)\bigr] + k(k+1)P_k(x) = 0, \tag{7.16}$$
and similarly for Pl(x),
$$\frac{d}{dx}\bigl[(1-x^2)P_l'(x)\bigr] + l(l+1)P_l(x) = 0. \tag{7.17}$$
Now multiply Equation (7.17) by Pk(x) and (7.16) by Pl(x) and subtract to obtain
$$P_k(x)\frac{d}{dx}\bigl[(1-x^2)P_l'(x)\bigr] - P_l(x)\frac{d}{dx}\bigl[(1-x^2)P_k'(x)\bigr] + \bigl[l(l+1) - k(k+1)\bigr]P_k(x)P_l(x) = 0. \tag{7.18}$$

The first two terms of Equation (7.18) may be written as
$$\frac{d}{dx}\Bigl[(1-x^2)\bigl(P_k(x)P_l'(x) - P_l(x)P_k'(x)\bigr)\Bigr]. \tag{7.19}$$
Hence,
$$\frac{d}{dx}\Bigl[(1-x^2)\bigl(P_k(x)P_l'(x) - P_l(x)P_k'(x)\bigr)\Bigr] + \bigl[l(l+1) - k(k+1)\bigr]P_k(x)P_l(x) = 0. \tag{7.20}$$
Integrate Equation (7.20) from −1 to +1 to obtain
$$\Bigl[(1-x^2)\bigl(P_k(x)P_l'(x) - P_l(x)P_k'(x)\bigr)\Bigr]_{-1}^{+1} + \bigl[l(l+1) - k(k+1)\bigr]\int_{-1}^{1}P_k(x)P_l(x)\,dx = 0.$$
−1

The integrated term vanishes because 1 − x² = 0 at x = ±1, while Pk(x) and Pl(x), being polynomials, are finite at x = ±1. Hence, we have:
$$\bigl[l(l+1) - k(k+1)\bigr]\int_{-1}^{1}P_k(x)P_l(x)\,dx = 0.$$
But the bracketed factor is nonzero unless k = l. Therefore the integral must be zero for k ≠ l. That is,
$$\int_{-1}^{1}P_k(x)P_l(x)\,dx = 0, \qquad k \neq l.$$
The Legendre polynomials are pairwise orthogonal.
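The orthogonality just proved can be confirmed by quadrature. The sketch below hard-codes the first four Legendre polynomials and uses Simpson's rule (both choices are ours, not the text's):

```python
# First few Legendre polynomials written out explicitly.
P = {
    0: lambda x: 1.0,
    1: lambda x: x,
    2: lambda x: 0.5 * (3 * x**2 - 1),
    3: lambda x: 0.5 * (5 * x**3 - 3 * x),
}

def integrate(f, a=-1.0, b=1.0, n=2000):
    """Composite Simpson rule (n even)."""
    h = (b - a) / n
    s = f(a) + f(b)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * f(a + k * h)
    return s * h / 3

for j in range(4):
    for k in range(j + 1, 4):
        # each off-diagonal inner product is approximately 0
        print(j, k, round(integrate(lambda x: P[j](x) * P[k](x)), 12))
```

The diagonal entries are not zero: ∫P_k² dx = 2/(2k+1), for example 2/5 when k = 2.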

Remark: In the preceding analysis, we have assumed that the differential


equation:
$$P(x)y'' + Q(x)y' + R(x)y = 0, \tag{7.21}$$
where P, Q, R are polynomials, has a solution y = Φ(x) expressible as a series:
$$y = \Phi(x) = \sum_{n=0}^{\infty}a_n(x-x_0)^n \tag{7.22}$$
convergent for all x such that |x − x0| < ρ, ρ > 0, and we obtained the expansion coefficients {an}, n = 0, 1, 2, ···, by substituting (7.22) in (7.21).
To justify this process, we must show that we can determine Φ^{(n)}(x0), n = 0, 1, 2, ···, from (7.21), for we have from (7.22) that
$$m!\,a_m = \Phi^{(m)}(x_0). \tag{7.23}$$

It can be shown that Φ is an analytic function in its interval of conver-


gence.

To determine Φ^{(n)}(x0) and an, n = 0, 1, 2, ···, we use (7.21) as follows.

Since Φ is a solution of (7.21), we have:
$$P(x)\Phi''(x) + Q(x)\Phi'(x) + R(x)\Phi(x) = 0. \tag{7.24}$$
On an interval I about x0 in which P is nonvanishing, we get
$$\Phi''(x) = -p(x)\Phi'(x) - q(x)\Phi(x), \tag{7.25}$$
where
$$p(x) = \frac{Q(x)}{P(x)} \qquad\text{and}\qquad q(x) = \frac{R(x)}{P(x)}.$$
Hence
$$\Phi''(x_0) = -p(x_0)\Phi'(x_0) - q(x_0)\Phi(x_0),$$
giving
$$2!\,a_2 = -p(x_0)a_1 - q(x_0)a_0.$$
The coefficient a2 is determined in terms of a0 and a1 .
To determine a3, differentiate (7.25) and evaluate at x = x0 to obtain
$$\Phi'''(x_0) = 3!\,a_3 = -2!\,p(x_0)a_2 - \bigl[p'(x_0) + q(x_0)\bigr]a_1 - q'(x_0)a_0.$$
By repeated differentiation of Φ and evaluation at x0, one is able to obtain the expansion coefficients {an} in the summation:
$$\Phi(x) = \sum_{n=0}^{\infty}a_n(x-x_0)^n.$$

But notice that for Φ to be infinitely differentiable, it is necessary for the functions p and q to be infinitely differentiable. Also, for (7.22) to converge, one requires more than infinite differentiability of p and q. Specifically, one requires that p and q be analytic functions in a neighborhood of x0. Because of the preceding discussion, we may generalize our earlier definitions of ordinary and singular points as follows.

7.3.2. Definition: Given


$$P(x)y'' + Q(x)y' + R(x)y = 0,$$
where P, Q, R need not be polynomials, we shall say that x0 is an ordinary point for the differential equation if the functions p = Q/P and q = R/P are analytic at x0; otherwise x0 is called a singular point.

For example, consider the initial value problem:


$$y'' + 2(\ln x)y' + \sin\Bigl(\frac{\pi x}{2}\Bigr)y = 0, \qquad y(1) = -1 = -y'(1).$$
Since the functions Q(x) = 2 ln x and R(x) = sin(πx/2) are analytic at x0 = 1 ∈ IR, a solution exists, and it is unique since initial conditions are prescribed.

Now we shall determine the series solution of the problem in the form:
$$y = \Phi(x) = \sum_{n=0}^{\infty}a_n(x-1)^n.$$

To obtain the coefficients {an} we demand that Φ be a solution of the given equation. Then we get:
$$\Phi''(x) + (2\ln x)\Phi'(x) + \sin\Bigl(\frac{\pi x}{2}\Bigr)\Phi(x) = 0.$$
Therefore,
$$\Phi''(1) = -(2\ln 1)\Phi'(1) - \Bigl(\sin\frac{\pi}{2}\Bigr)\Phi(1).$$
Since
$$a_n = \frac{\Phi^{(n)}(1)}{n!}$$
and Φ(1) = −1 = −Φ'(1) as given above, we have at x = 1
$$\Phi''(1) = -(2\ln 1)\Phi'(1) - \Bigl(\sin\frac{\pi}{2}\Bigr)\Phi(1) = 0 - (1)(-1) = 1,$$
and therefore 2a2 = 1. Hence a2 = 1/2. Next, we have
$$\Phi'''(x) = -\frac{2}{x}\Phi'(x) - (2\ln x)\Phi''(x) - \Bigl(\frac{\pi}{2}\cos\frac{\pi x}{2}\Bigr)\Phi(x) - \Bigl(\sin\frac{\pi x}{2}\Bigr)\Phi'(x).$$
At x = 1, since ln 1 = 0 and cos(π/2) = 0, we get:
$$\Phi'''(1) = -2\Phi'(1) - \Bigl(\sin\frac{\pi}{2}\Bigr)\Phi'(1) = -3\Phi'(1) = -3.$$
Hence 3!a3 = −3, and therefore
$$a_3 = \frac{-3}{3!} = -\frac{1}{2}.$$
Substituting a0, a1, a2, a3 in the expansion, we get
$$\Phi(x) = a_0 + a_1(x-1) + a_2(x-1)^2 + a_3(x-1)^3 + \cdots = -1 + (x-1) + \frac{1}{2}(x-1)^2 - \frac{1}{2}(x-1)^3 + \cdots$$
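The four coefficients can be sanity-checked by integrating the initial value problem numerically and comparing with the cubic Taylor polynomial near x = 1. The RK4 routine below is a generic sketch; the step count is an arbitrary choice:

```python
import math

def f(x, u):
    """System form of y'' = -2 ln(x) y' - sin(pi x / 2) y."""
    y, yp = u
    return (yp, -2.0 * math.log(x) * yp - math.sin(math.pi * x / 2) * y)

def rk4(x0, u0, x1, n=1000):
    """Classical fourth-order Runge-Kutta from x0 to x1."""
    h = (x1 - x0) / n
    x, u = x0, list(u0)
    for _ in range(n):
        k1 = f(x, u)
        k2 = f(x + h / 2, [u[i] + h / 2 * k1[i] for i in range(2)])
        k3 = f(x + h / 2, [u[i] + h / 2 * k2[i] for i in range(2)])
        k4 = f(x + h, [u[i] + h * k3[i] for i in range(2)])
        u = [u[i] + h / 6 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i])
             for i in range(2)]
        x += h
    return u

def series(x):
    """The cubic Taylor polynomial found above."""
    return -1 + (x - 1) + 0.5 * (x - 1) ** 2 - 0.5 * (x - 1) ** 3

print(rk4(1.0, (-1.0, 1.0), 1.1)[0], series(1.1))  # close agreement near x = 1
```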

7.4. Interval of Convergence of a Series Solution

In addition to the use of various tests of convergence of series solution of


Equation (7.21) above, the following theorem concerning the interval of
convergence of a series solution of Equation (7.21) is available.

7.4.1. Theorem: Let x0 be an ordinary point of Equation (7.21). Then


the general solution of the equation is of the form:
$$y = \sum_{n=0}^{\infty}a_n(x-x_0)^n = a_0y_1(x) + a_1y_2(x)$$
where a0 and a1 are arbitrary constants, and y1 and y2 are linearly independent series solutions which are analytic at x0. Furthermore, the radius of convergence for each of the series solutions y1 and y2 is at least as large as the minimum of the radii of convergence of the series for p and q.

Note: If $p(x) = \sum_{n=0}^{\infty}p_nx^n$ has radius of convergence ρ1 and $q(x) = \sum_{n=0}^{\infty}q_nx^n$ has radius of convergence ρ2, then y(x) has radius of convergence satisfying:
$$\rho \geq \min(\rho_1, \rho_2).$$
For example, if
$$y'' + e^xy' + (\sin x)y = 0,$$
then
$$p(x) = e^x = \sum_{n=0}^{\infty}\frac{x^n}{n!}, \qquad q(x) = \sin x = \sum_{n=0}^{\infty}\frac{(-1)^nx^{2n+1}}{(2n+1)!}.$$
Thus for p(x), the ratio test gives
$$r = \lim_{n\to\infty}\left|\frac{x^{n+1}}{(n+1)!}\cdot\frac{n!}{x^n}\right| = \lim_{n\to\infty}\frac{|x|}{n+1} = 0, \qquad \forall x,$$
and for q(x),
$$r = \lim_{n\to\infty}\left|\frac{x^{2(n+1)+1}}{(2(n+1)+1)!}\cdot\frac{(2n+1)!}{x^{2n+1}}\right| = 0.$$
So ρ1 = ∞ and ρ2 = ∞. Hence, by the last theorem, ρ = ∞, and the series solutions converge for all x.

Remark: (i) We shall not offer a proof of the last theorem because it is
beyond the scope of our discussion.
(ii) In order to obtain the minimum of the radii of convergence for the series of p and q, we may compute the relevant series and then apply one of the tests for infinite series. When P, Q and R are polynomials, there is an easier test. In complex analysis, it is known that a rational function p = Q/P will have a convergent power series expansion about a point x = x0 if P(x0) ≠ 0. Furthermore, assuming that any factor common to Q and P has been canceled, the radius of convergence of the power series for Q/P about the point x0 is precisely the distance from x0 to the nearest zero of P. In computing this distance, it must be noted that P(x) = 0 may have complex roots, and such roots must be taken into account.

7.4.2. Example: (i) What is the radius of convergence of the Taylor series for the function
$$\frac{Q}{P} = \frac{1}{1+x^2}$$
about the point x = 0?
Solution: Here x0 = 0 and the zeroes of 1 + x² are x = ±i. The distance from x0 = 0 to ±i in the complex plane is
$$|0 - (\pm i)| = |\pm i| = 1.$$
Hence the radius of convergence of the power series expansion of (1+x²)^{−1} about x0 = 0 is ρ = 1.
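The radius ρ = 1 is also visible in the partial sums of the geometric series 1/(1 + x²) = Σ(−1)ⁿx²ⁿ, which converge inside |x| < 1 and grow without bound outside:

```python
def partial_sum(x, N):
    """Partial sums of 1/(1 + x^2) = sum_{n>=0} (-1)^n x^(2n)."""
    return sum((-1) ** n * x ** (2 * n) for n in range(N))

print(partial_sum(0.5, 50), 1 / (1 + 0.5 ** 2))    # converges for |x| < 1
print(partial_sum(1.5, 10), partial_sum(1.5, 20))  # blows up for |x| > 1
```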

(ii) Determine a lower bound for the radius of convergence of the series solutions about x0 = 0 for the Legendre equation
$$(1-x^2)y'' - 2xy' + \alpha(\alpha+1)y = 0, \qquad \alpha \in IR.$$
Solution: Here P(x) = 1 − x², Q(x) = −2x and R(x) = α(α+1) are all polynomials. P(x) = 0 if and only if x = ±1, and hence the distance from x0 = 0 to x = ±1 is 1. Consequently a series solution of the Legendre equation will converge at least for |x| < 1, and possibly for larger values of x.

Indeed, we have seen that a nonnegative integer value of α leads to a polynomial solution of the Legendre equation, and a polynomial converges for all x.

7.5. Exercises
(1) Let P, Q, R be real valued functions on some open interval I ⊂ IR.
Consider the differential equation
$$P(x)y'' + Q(x)y' + R(x)y = 0 \quad\text{on } I.$$

What do you understand by


(i) an ordinary point
(ii) a singular point
(iii) a regular singular point of Equation (1)?

Using these terms, determine the nature of the point x0 ∈ I in each of the
following cases: (a) P (x) = (1 − e(x−x0 ) )(x − x0 ), Q(x) = 4, R(x) = (x − x0 )2 .
(b) P (x) = 1 + x − x0 , Q(x) = ex , R(x) = 4.
(c) P (x) = (x − x0 )2 , Q(x) = 4(x − x0 ), R(x) = 3.

(2) Obtain series solutions of the following ODEs:

(i) $x^2y'' + y = 0$, about x0 = 1.
(ii) $y'' + x^2y' - y = 0$, about x0 = 10.
(iii) $(1-x^2)y'' - 2xy' + \alpha(\alpha+1)y = 0$, about x0 = 2.

(3) Show that the initial value problem:
$$y'' + (2\ln x)y' + \sin\Bigl(\frac{\pi x}{2}\Bigr)y = 0, \qquad y(1) = -1 = -y'(1),$$
has a unique solution which admits an infinite series expansion in a neighborhood of x0 = 1.
Hint: To solve the ODE
$$y'' + q(x)y' + r(x)y = 0 \tag{1}$$
in a neighborhood of a point x0 ≠ 0 (where x0 is assumed to be an ordinary point), we must assume a series representation of the form:
$$y = \sum_{n=0}^{\infty}a_n(x-x_0)^n. \tag{2}$$

Let x − x0 = t; then x = t + x0 and d/dx = (d/dt)·(dt/dx) = d/dt, since dt/dx = 1. Thus Equation (1) becomes
$$\frac{d^2}{dt^2}y(t) + q(t+x_0)\frac{d}{dt}y(t) + r(t+x_0)y(t) = 0,$$
and the series representation (2) becomes
$$y(t) = \sum_{n=0}^{\infty}a_nt^n.$$

(4) Does an arbitrary solution of the differential equation:
$$y'' + e^xy' + (\cos x)y = 0, \qquad x \in [-1, 1], \tag{1}$$
have a series representation about the origin of the form:
$$y = \sum_{n=0}^{\infty}a_nx^n\,? \tag{2}$$
Justify your answer. If you assert that no solution of (1) has a series development of the form (2), then write down the correct series representation, if any exists.
(b) Show that the series expansion about the origin of the two linearly independent solutions of Equation (1) is of the form:
$$y = a_0y_1(x) + a_1y_2(x) = a_0\Bigl(1 - \frac{x^2}{2!} + \frac{x^3}{3!} + \cdots\Bigr) + a_1\Bigl(x - \frac{x^2}{2!} - \frac{x^3}{3!} - \cdots\Bigr).$$
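Part (b) can be verified by computing the Taylor coefficients directly from the ODE: writing it as y'' + p(x)y' + q(x)y = 0 with p = eˣ and q = cos x, the Cauchy product gives the recurrence $a_{n+2} = -\bigl[\sum_{k\le n}p_{n-k}(k+1)a_{k+1} + \sum_{k\le n}q_{n-k}a_k\bigr]/((n+1)(n+2))$. A Python sketch (names ours) reproduces the two displayed series:

```python
from fractions import Fraction
from math import factorial

def series_solution(p, q, a0, a1, n_max):
    """Taylor coefficients of y'' + p(x) y' + q(x) y = 0 about 0, given the
    Taylor coefficients p[n], q[n] of p and q (exact rational arithmetic)."""
    a = [Fraction(a0), Fraction(a1)]
    for n in range(n_max - 1):
        s = sum(p[n - k] * (k + 1) * a[k + 1] for k in range(n + 1))
        s += sum(q[n - k] * a[k] for k in range(n + 1))
        a.append(-s / ((n + 1) * (n + 2)))
    return a

N = 8
pc = [Fraction(1, factorial(n)) for n in range(N)]                # e^x
qc = [Fraction((-1) ** (n // 2), factorial(n)) if n % 2 == 0
      else Fraction(0) for n in range(N)]                         # cos x
y1 = series_solution(pc, qc, 1, 0, N)   # 1, 0, -1/2, 1/6, ...
y2 = series_solution(pc, qc, 0, 1, N)   # 0, 1, -1/2, -1/6, ...
print(y1[:4], y2[:4])
```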

Chapter Eight

Series Solutions Near Regular Singular Points

8.1. Introduction

Let P, Q, R be polynomials without common factors and consider the


equation:
$$P(x)y'' + Q(x)y' + R(x)y = 0. \tag{8.1}$$
We recall that a singular point of Equation (8.1) is a point x0 such that P(x0) = 0. We wish to consider solutions of (8.1) in the neighborhood of a singular point x0 in this section. In this connection, the method of the previous chapter fails, and we must look for a more general type of series expansion than the Taylor series. In the ensuing discussion, we shall limit ourselves to cases in which the singularities of the functions Q/P and R/P at x = x0 are not too serious, i.e. "weak singularities". If P, Q, R are polynomials, it may be shown that x = x0 is a weak singularity if
$$\lim_{x\to x_0}(x-x_0)\frac{Q(x)}{P(x)} \qquad\text{and}\qquad \lim_{x\to x_0}(x-x_0)^2\frac{R(x)}{P(x)}$$
are finite. This means that the singularity of Q/P can be no worse than (x − x0)^{−1} and the singularity of R/P can be no worse than (x − x0)^{−2}.
We shall call such a point a regular singular point of Equation (8.1).
If P, Q, R are not necessarily polynomials, we say more generally that
x = x0 is a regular singular point of Equation (8.1) if

(i) x = x0 is a singular point, i.e. P (x0 ) = 0.

(ii) both
$$(x-x_0)\frac{Q(x)}{P(x)} \qquad\text{and}\qquad (x-x_0)^2\frac{R(x)}{P(x)}$$
are analytic at x = x0.

8.1.1. Example: Determine the singular points of the equation:
$$x^2(1-x^2)y'' + \frac{2}{x}y' + 4y = 0, \tag{8.2}$$
and classify them as regular or irregular.

Solution: Here we have P(x) = x²(1 − x²), Q(x) = 2/x and R(x) = 4. The singular points occur when P(x) = 0; thus x = 0, ±1. Next, we have:
$$\frac{Q(x)}{P(x)} = \frac{2}{x^3(1-x^2)} \qquad\text{and}\qquad \frac{R(x)}{P(x)} = \frac{4}{x^2(1-x^2)}.$$
Consider the three singular points: (i) when x0 = 0, then
$$(x-x_0)\frac{Q(x)}{P(x)} = x\cdot\frac{2}{x^3(1-x^2)} = \frac{2}{x^2(1-x^2)},$$

which is a rational function. We know that rational functions are analytic


except at the zeroes of their denominators. In this case, the zeroes of the
denominator are 0, ±1. Hence the rational function above is not analytic
in the neighborhood of x0 = 0. We conclude that x0 = 0 is an irregular
singular point of the given equation.

(ii) Consider the point x0 = 1. Then
$$(x-1)\frac{Q(x)}{P(x)} = \frac{2(x-1)}{x^3(1-x^2)} = \frac{-2(1-x)}{x^3(1+x)(1-x)} = \frac{-2}{x^3(1+x)}$$
and
$$(x-1)^2\frac{R(x)}{P(x)} = \frac{4(x-1)^2}{x^2(1-x^2)} = \frac{4(1-x)^2}{x^2(1+x)(1-x)} = \frac{4(1-x)}{x^2(1+x)}.$$
Since −2/(x³(1+x)) and 4(1−x)/(x²(1+x)) are analytic at x = 1, we conclude that x0 = 1 is a regular singular point of Equation (8.2).

(iii) Consider x0 = −1. Then
$$(x+1)\frac{Q(x)}{P(x)} = \frac{2(x+1)}{x^3(1-x^2)} = \frac{2(1+x)}{x^3(1+x)(1-x)} = \frac{2}{x^3(1-x)},$$
$$(x+1)^2\frac{R(x)}{P(x)} = \frac{4(1+x)^2}{x^2(1-x^2)} = \frac{4(1+x)(1+x)}{x^2(1+x)(1-x)} = \frac{4(1+x)}{x^2(1-x)}.$$

119
2 4(1 + x)
Since and 2 are analytic at x = −1, we conclude that
x3 (1
− x) x (1 − x)
x0 = −1 is a regular singular point of equation (8.2).
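The limit test used in this example can be automated. The sketch below assumes the sympy library is available and mirrors conditions (i) and (ii): for rational functions, analyticity of (x − x0)Q/P and (x − x0)²R/P at x0 amounts to the corresponding limits being finite:

```python
import sympy as sp

x = sp.symbols('x')
P = x**2 * (1 - x**2)
Q = 2 / x
R = sp.Integer(4)

def classify(x0):
    """Regular singular iff (x-x0)Q/P and (x-x0)^2 R/P have finite limits
    at x0 (equivalent to analyticity there for rational functions)."""
    f = sp.simplify((x - x0) * Q / P)
    g = sp.simplify((x - x0)**2 * R / P)
    ok = sp.limit(f, x, x0).is_finite and sp.limit(g, x, x0).is_finite
    return "regular" if ok else "irregular"

for x0 in (0, 1, -1):
    print(x0, classify(x0))   # 0 irregular, 1 regular, -1 regular
```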

8.2. Euler Equation


An important example of a differential equation with a singularity at x = 0 is given by
$$L[y] = x^2y'' + \alpha xy' + \beta y = 0. \tag{8.3}$$
It is clear from our previous analysis that the point x0 = 0 is a regular singular point of Equation (8.3). The nth-order analogue of Equation (8.3) is given by:
$$L[y] = x^ny^{(n)} + \alpha_{n-1}x^{n-1}y^{(n-1)} + \cdots + \alpha_0y = 0. \tag{8.4}$$

It is readily checked that the singular point x = 0 is a regular singular


point of the Euler equation.
We show how to solve (8.3). In any interval not containing x = 0, equation
(8.3) has a solution of the form:

y = a0 y1 (x) + a1 y2 (x).

This follows from the fact that any point in such an interval would then be
an ordinary point of (8.3). We first consider the interval I+ = {x ∈ IR : x >
0}. Later we shall extend our results to the interval I− = {x ∈ IR : x < 0}.
For any r ∈ IR, notice that (x^r)' = rx^{r−1} and (x^r)'' = r(r−1)x^{r−2}. Hence, assuming that a solution of Equation (8.3) is of the form
$$y = x^r, \tag{8.5}$$
we have
$$x^rF(r) = 0,$$
where
$$F(r) = r(r-1) + \alpha r + \beta.$$
Since x ≠ 0 in I+, if
$$F(r) = 0,$$
then Equation (8.5) is a solution of Equation (8.3). The roots of the equation
$$F(r) = r(r-1) + \alpha r + \beta = 0 \tag{8.6}$$
are
$$r_1 = \frac{1}{2}\Bigl[-(\alpha-1) + \sqrt{(\alpha-1)^2 - 4\beta}\Bigr] \tag{8.7}$$
and
$$r_2 = \frac{1}{2}\Bigl[-(\alpha-1) - \sqrt{(\alpha-1)^2 - 4\beta}\Bigr], \tag{8.8}$$
and hence (8.6) may be written as follows:
$$F(r) = (r-r_1)(r-r_2) = 0. \tag{8.9}$$

Just as in the case of the second order linear ordinary differential equa-
tion with constant coefficients, we must examine separately the following
cases:

(i) (α − 1)2 − 4β > 0

(ii) (α − 1)2 − 4β = 0

(iii) (α − 1)2 − 4β < 0

Case (i): (α − 1)² − 4β > 0. Here Equations (8.7) and (8.8) give two real distinct roots. Since the Wronskian of the solutions y1(x) = x^{r1} and y2(x) = x^{r2} is nonvanishing for r1 ≠ r2 and x > 0, it follows that the general solution of Equation (8.3) is
$$y(x) = a_0x^{r_1} + a_1x^{r_2}, \qquad x > 0.$$
Note that if r is irrational, then x^r is defined by
$$x^r = e^{r\ln x}.$$

Case (ii): (α − 1)² − 4β = 0. Here Equations (8.7) and (8.8) give
$$r_1 = -\frac{1}{2}(\alpha-1) = r_2,$$
and we apparently have only one solution,
$$y_1(x) = x^{r_1},$$
of Equation (8.3). To obtain a second solution, we proceed as follows. Since r1 = r2, it follows that
$$F(r) = (r-r_1)^2.$$
Thus not only do we have
$$F(r_1) = 0$$
but also
$$F'(r_1) = 0.$$
Consider now $\frac{\partial}{\partial r}L[x^r]$. Then
$$\frac{\partial}{\partial r}L[x^r] = \frac{\partial}{\partial r}\bigl(x^rF(r)\bigr) = x^rF'(r) + F(r)x^r\ln x.$$
Hence, since F(r) = (r − r1)² and F'(r) = 2(r − r1), we have
$$\frac{\partial}{\partial r}L[x^r] = L\Bigl[\frac{\partial}{\partial r}x^r\Bigr] = L[x^r\ln x] = 2(r-r_1)x^r + (r-r_1)^2x^r\ln x. \tag{8.10}$$
The right hand side of (8.10) vanishes for r = r1. Hence
$$y_2(x) = x^{r_1}\ln x, \qquad x > 0,$$
is the required second solution of Equation (8.3). Since the functions x^{r1} and x^{r1} ln x are linearly independent for x > 0, the general solution of (8.3) is
$$y(x) = (a_0 + a_1\ln x)\,x^{r_1}, \qquad x > 0.$$
Case (iii): (α − 1)² − 4β < 0. Here the roots r1 and r2 are complex, and moreover they are complex conjugates. Thus if r1 = λ + iµ, then r2 = λ − iµ. For complex r we define x^r by
$$x^r = e^{r\ln x}, \qquad x > 0.$$
Thus
$$x^{r_{1,2}} = x^{\lambda\pm i\mu} = e^{(\lambda\pm i\mu)\ln x} = e^{\lambda\ln x}e^{\pm i\mu\ln x} = e^{\lambda\ln x}\bigl(\cos(\mu\ln x) \pm i\sin(\mu\ln x)\bigr).$$
Then the general solution of Equation (8.3) is given by
$$y(x) = C_0x^{\lambda+i\mu} + C_1x^{\lambda-i\mu}.$$
We observe that the real and imaginary parts of x^{λ+iµ}, namely x^λcos(µ ln x) and x^λsin(µ ln x), are independent solutions of Equation (8.3). Hence, for complex roots of the equation F(r) = 0, the general solution of Equation (8.3) is:
$$y(x) = a_0x^\lambda\cos(\mu\ln x) + a_1x^\lambda\sin(\mu\ln x) = x^\lambda\bigl(a_0\cos(\mu\ln x) + a_1\sin(\mu\ln x)\bigr).$$

Finally, we consider the solution of (8.3) in the interval IR− = {x : x < 0}. The solutions of the Euler equation given above for IR+ = {x : x > 0} can be shown to be valid for x < 0, but they will in general be complex valued. To obtain real valued solutions of the Euler equation (8.3) in the interval IR−, we may make the transformation x = −ξ. Then x < 0 ⇒ ξ > 0; furthermore, since dξ/dx = −1,
$$\frac{d}{dx} = \frac{d\xi}{dx}\cdot\frac{d}{d\xi} = -\frac{d}{d\xi}, \qquad\text{and hence}\qquad \frac{d^2}{dx^2} = \frac{d^2}{d\xi^2}.$$
Thus Equation (8.3) becomes:
$$\xi^2\frac{d^2y}{d\xi^2} + \alpha(-\xi)\Bigl(-\frac{dy}{d\xi}\Bigr) + \beta y = 0 \tag{8.11}$$
or
$$\xi^2\frac{d^2y}{d\xi^2} + \alpha\xi\frac{dy}{d\xi} + \beta y = 0, \qquad \xi > 0. \tag{8.12}$$
Recall that when x > 0,
$$y(x) = a_0x^{r_1} + a_1x^{r_2},$$
and when x < 0,
$$y(x) = a_0(-x)^{r_1} + a_1(-x)^{r_2} = a_0\xi^{r_1} + a_1\xi^{r_2}.$$
Combining the last two equations, we have
$$y(x) = a_0|x|^{r_1} + a_1|x|^{r_2}.$$
But we have already solved Equation (8.12) above, since Equation (8.12) is essentially Equation (8.3); hence the solutions of (8.12) are given as in cases (i), (ii) and (iii) above, but with x replaced by ξ. Since ξ = −x, and
$$|x| = \begin{cases} x, & x > 0, \\ -x, & x < 0, \end{cases}$$
it follows that we need only replace x by |x| in the solutions of (8.3) given under cases (i), (ii) and (iii), in any interval not containing the origin.
The following theorem has been established.

8.2.1. Theorem: To solve the Euler equation
$$x^2y'' + \alpha xy' + \beta y = 0$$
in any interval not containing the origin, substitute y = x^r and compute the roots of the equation
$$F(r) = r^2 + (\alpha-1)r + \beta = 0.$$
(i) If the roots are real and distinct, then
$$y = a_0|x|^{r_1} + a_1|x|^{r_2}.$$
(ii) If the two roots are real and equal, then
$$y(x) = (a_0 + a_1\ln|x|)\,|x|^{r_1}.$$
(iii) If the roots are complex with r1 = λ + iµ and r2 = λ − iµ, then
$$y(x) = |x|^\lambda\bigl(a_0\cos(\mu\ln|x|) + a_1\sin(\mu\ln|x|)\bigr).$$
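Each case of the theorem can be spot-checked by substituting y = x^r back into the equation: for any root r of F(r) = r² + (α − 1)r + β, the residual x²y'' + αxy' + βy equals x^rF(r), which is identically zero. A short Python check (complex arithmetic covers case (iii) too):

```python
import cmath

def euler_roots(alpha, beta):
    """Roots of F(r) = r^2 + (alpha - 1) r + beta."""
    d = cmath.sqrt((alpha - 1) ** 2 - 4 * beta)
    return (-(alpha - 1) + d) / 2, (-(alpha - 1) - d) / 2

def residual(alpha, beta, r, x):
    """Plug y = x^r into x^2 y'' + alpha x y' + beta y."""
    y, yp, ypp = x ** r, r * x ** (r - 1), r * (r - 1) * x ** (r - 2)
    return x ** 2 * ypp + alpha * x * yp + beta * y

# (alpha, beta) pairs from the worked examples that follow.
for alpha, beta in [(3.0, -1.0), (1.0, 1.0), (3.0, 1.0)]:
    r1, r2 = euler_roots(alpha, beta)
    print(alpha, beta, abs(residual(alpha, beta, r1, 2.0)),
          abs(residual(alpha, beta, r2, 2.0)))  # all approximately 0
```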

Remark: For the Euler equation of the form:
$$(x-x_0)^2y'' + \alpha(x-x_0)y' + \beta y = 0, \tag{8.13}$$
we look for solutions of the form
$$y(x) = (x-x_0)^r.$$
Using the above techniques, or alternatively using the transformation x − x0 = t, so that
$$\frac{d}{dx} = \frac{dt}{dx}\cdot\frac{d}{dt} = \frac{d}{dt},$$
Equation (8.13) becomes
$$t^2\frac{d^2y}{dt^2} + \alpha t\frac{dy}{dt} + \beta y = 0.$$
Therefore, the change of variable reduces Equation (8.13) to the form of Equation (8.3) considered above, and it can be solved by putting y = t^r as before.

8.2.2 Examples: (1). Solve the differential equation:
$$x^2y'' + 3xy' - y = 0$$
on the intervals (i) (−1, 1)\{0}, (ii) (−10, −2).

(2). Solve the differential equation
$$x^2y'' + xy' + y = 0$$
on the intervals (i) (−5, 1/2)\{0}, (ii) (1/2, 10).

(3) Solve the differential equation
$$x^2y'' + xy' - y = 0$$
on the intervals (i) (−10, 10)\{0}, (ii) (−10, −5).

Solutions: (1). Let y = x^r be a solution of the given equation. Then we have:
$$y' = rx^{r-1},\quad xy' = rx^r,\quad y'' = r(r-1)x^{r-2},\quad x^2y'' = r(r-1)x^r,$$
and
$$r(r-1)x^r + 3rx^r - x^r = 0.$$
That is,
$$\bigl[r(r-1) + 3r - 1\bigr]x^r = 0.$$
Hence on the interval (−1, 1)\{0} it follows (since x ≠ 0) that
$$r(r-1) + 3r - 1 = 0$$
or
$$r^2 + 2r - 1 = 0.$$
Thus,
$$r = \frac{-2\pm\sqrt{4+4}}{2} = \frac{-2\pm\sqrt{8}}{2} = -1\pm\sqrt{2}.$$
Here, r1 = −1 + √2 and r2 = −1 − √2. We notice that the roots r1 and r2 are real and distinct. Hence the general solution of the given equation on the interval (−1, 1)\{0} = (−1, 0) ∪ (0, 1) is
$$y = a_0|x|^{-1+\sqrt{2}} + a_1|x|^{-1-\sqrt{2}}.$$

(ii) The solution for this case is
$$y(x) = a_0(-x)^{-1+\sqrt{2}} + a_1(-x)^{-1-\sqrt{2}};$$
this follows since the interval is on the negative segment of the real line.

(2). Suppose y = x^r is a solution of the given equation
$$x^2y'' + xy' + y = 0.$$
Then
$$r(r-1)x^r + rx^r + x^r = 0,$$
and this implies that
$$r(r-1) + r + 1 = 0,$$
that is,
$$r^2 + 1 = 0.$$
Hence, r = ±i. We notice that the roots r1 = +i and r2 = −i are complex conjugates. Recall that for the general forms r1 = λ + iµ and r2 = λ − iµ,
$$y(x) = |x|^\lambda\bigl[a_0\cos(\mu\ln|x|) + a_1\sin(\mu\ln|x|)\bigr].$$
Hence the solution of the given ODE on the interval (−5, 1/2)\{0} = (−5, 0) ∪ (0, 1/2) is
$$y(x) = a_0\cos(\ln|x|) + a_1\sin(\ln|x|).$$
On the interval (1/2, 10) ⊆ IR, where λ = 0 and µ = 1, the solution is
$$y(x) = a_0\cos(\ln x) + a_1\sin(\ln x).$$

(3). Suppose that y = x^r is a solution of the given equation; then
$$r(r-1)x^r + rx^r - x^r = 0,$$
and hence
$$r(r-1) + r - 1 = 0,$$
that is,
$$r^2 - 1 = 0.$$
Thus r = ±1. Hence the solution of the given equation in the interval (−10, 10)\{0} is
$$y(x) = a_0|x| + a_1|x|^{-1}.$$
(ii) On the interval (−10, −5), the solution is
$$y(x) = a_0(-x) + a_1(-x)^{-1} = C_0x + \frac{C_1}{x}.$$

(4). To solve the equation
$$x^2y'' + 3xy' + y = 0$$
on the interval (i) (−1, 1)\{0} = (−1, 0) ∪ (0, 1), we proceed as follows. Suppose that y = x^r is a solution of the given equation. Then
$$r(r-1)x^r + 3rx^r + x^r = 0.$$
Hence,
$$r(r-1) + 3r + 1 = 0,$$
that is,
$$r^2 + 2r + 1 = 0.$$
Thus
$$r = \frac{-2\pm\sqrt{4-4}}{2} = -\frac{2}{2} = -1.$$
That is, r1 = −1 and r2 = −1. We notice that the roots r1 and r2 are real and equal, both being −1. Hence the solution of the given equation on the interval (−1, 1)\{0} is
$$y(x) = (a_0 + a_1\ln|x|)\,|x|^{-1}.$$
(ii) On the interval (−10, −2), the solution is
$$y(x) = (a_0 + a_1\ln(-x))\,(-x)^{-1}.$$

8.3. Solutions Near a Regular Singular Point

8.3.1. The Frobenius Method

We consider the ordinary differential equation:
$$P(x)y'' + Q(x)y' + R(x)y = 0. \tag{8.14}$$
Suppose that we seek a solution of (8.14) in the neighborhood of a regular singular point x = x0. For simplicity, we take x0 = 0 in what follows. Then, since x0 = 0 is a regular singular point, we have that
$$xp(x) = \frac{xQ(x)}{P(x)} \qquad\text{and}\qquad x^2q(x) = \frac{x^2R(x)}{P(x)}$$
are analytic functions in the neighborhood of x = 0. Thus the functions admit convergent power series expansions of the form:
$$xp(x) = \sum_{n=0}^{\infty}p_nx^n, \qquad x^2q(x) = \sum_{n=0}^{\infty}q_nx^n. \tag{8.15}$$
Notice that Equation (8.14) may be written as:
$$x^2y'' + x(xp(x))y' + (x^2q(x))y = 0.$$
In the method of Frobenius, we look for a solution of (8.14) of the form:
$$y = \sum_{n=0}^{\infty}a_nx^{n+r} \tag{8.16}$$
where r is some number. For convenience in what follows, we suppose x > 0. Then employing (8.15) and (8.16) in (8.14), we have
$$\sum_{n=0}^{\infty}(n+r)(n+r-1)a_nx^{n+r} + \Bigl[\sum_{n=0}^{\infty}(n+r)a_nx^{n+r}\Bigr]\Bigl[\sum_{n=0}^{\infty}p_nx^n\Bigr] + \Bigl[\sum_{n=0}^{\infty}a_nx^{n+r}\Bigr]\Bigl[\sum_{n=0}^{\infty}q_nx^n\Bigr] = 0,$$
that is,
$$\sum_{n=0}^{\infty}(n+r)(n+r-1)a_nx^{n+r} + x^r\Bigl[\sum_{n=0}^{\infty}(n+r)a_nx^n\Bigr]\Bigl[\sum_{n=0}^{\infty}p_nx^n\Bigr] + x^r\Bigl[\sum_{n=0}^{\infty}a_nx^n\Bigr]\Bigl[\sum_{n=0}^{\infty}q_nx^n\Bigr] = 0.$$
Expanding the products of the infinite series involved, this becomes
$$\sum_{n=0}^{\infty}(n+r)(n+r-1)a_nx^{n+r} + x^r\sum_{n=0}^{\infty}A_nx^n + x^r\sum_{n=0}^{\infty}B_nx^n = 0,$$
where we have put
$$A_n = \sum_{k=0}^{n}(r+k)a_kp_{n-k} \qquad\text{and}\qquad B_n = \sum_{k=0}^{n}a_kq_{n-k}.$$
Hence,
$$\sum_{n=0}^{\infty}(n+r)(n+r-1)a_nx^{n+r} + \sum_{n=0}^{\infty}A_nx^{n+r} + \sum_{n=0}^{\infty}B_nx^{n+r} = 0,$$
or
$$\sum_{n=0}^{\infty}\bigl[(n+r)(n+r-1)a_n + A_n + B_n\bigr]x^{n+r} = 0.$$
Thus, substituting for An and Bn, we have:
$$\sum_{n=0}^{\infty}\Bigl[(n+r)(n+r-1)a_n + \sum_{k=0}^{n}\bigl[(r+k)p_{n-k} + q_{n-k}\bigr]a_k\Bigr]x^{n+r} = 0.$$
Finally, separating out the k = n term of the inner sum,
$$\sum_{n=0}^{\infty}\Bigl[\{(n+r)(n+r-1) + (n+r)p_0 + q_0\}a_n + \sum_{k=0}^{n-1}\bigl((r+k)p_{n-k} + q_{n-k}\bigr)a_k\Bigr]x^{n+r} = 0.$$
Let
$$F(r) = r(r-1) + p_0r + q_0,$$
so that
$$F(r+n) = (r+n)(r+n-1) + p_0(r+n) + q_0.$$
In terms of the function r → F(r), the last equation may be written as follows:
$$a_0F(r)x^r + \sum_{n=1}^{\infty}\Bigl\{F(r+n)a_n + \sum_{k=0}^{n-1}a_k\bigl[(r+k)p_{n-k} + q_{n-k}\bigr]\Bigr\}x^{n+r} = 0. \tag{8.17}$$

If (8.16) is indeed a solution of (8.14), then the coefficient of each power of x must be zero in (8.17). Thus we must have:
$$F(r) = r(r-1) + p_0r + q_0 = 0, \qquad a_0 \neq 0. \tag{8.18}$$
Equation (8.18) is called the indicial equation, and it gives the values of r for which (8.16) is a solution of (8.14). Let r1 and r2 be the roots of (8.18); in what follows, if these roots are real, we shall suppose that r1 ≥ r2. Next, setting the coefficient of x^{n+r} in (8.17) equal to zero, we get
$$F(r+n)a_n + \sum_{k=0}^{n-1}a_k\bigl[(r+k)p_{n-k} + q_{n-k}\bigr] = 0, \qquad n = 1, 2, 3, \cdots. \tag{8.19}$$

Equation (8.19) shows that in general an depends on all the earlier coefficients a0, a1, a2, ···, an−1. It shows too that we can successively compute a1, a2, ···, an in terms of the coefficients in the series expansions for xp(x) and x²q(x), provided that
$$F(r+1),\ F(r+2),\ \cdots,\ F(r+n)$$
do not vanish. But F(r) = 0 only if r = r1 or r = r2. Thus, since r1 + n is not equal to r1 or r2 for n ≥ 1, it follows that F(r1 + n) ≠ 0 for n ≥ 1. Hence we can always determine one solution
$$y_1(x) = x^{r_1}\sum_{n=0}^{\infty}a_n(r_1)x^n. \tag{8.20}$$
Next, if r2 ≠ r1 and r1 − r2 is not a positive integer, then r2 + n is not equal to r1 for any n ≥ 1. Hence F(r2 + n) ≠ 0, and we obtain a second solution:
$$y_2(x) = x^{r_2}\sum_{n=0}^{\infty}a_n(r_2)x^n. \tag{8.21}$$

The two series
$$\sum_{n=0}^{\infty}a_n(r_i)x^n, \qquad i = 1, 2,$$
when they converge, are analytic functions in their interval of convergence. Hence, the singularities of the solutions (8.20) and (8.21) must arise from the factors x^{r_1} and x^{r_2} which multiply the series in (8.20) and (8.21) respectively.

For x < 0, we obtain real valued solutions of (8.14) by making the substitution x = −ξ, ξ > 0, as in the Euler equation. And it turns out again that we need only replace x^{r_1} and x^{r_2} in (8.20) and (8.21) by |x|^{r_1} and |x|^{r_2}.

Case of Complex Roots

Here, if r1 and r2 are complex, they are necessarily complex conjugates and r1 − r2 is not an integer. Thus we will be able to compute two series solutions of the form (8.16). However, they will be complex valued functions of x, but real valued solutions may be obtained by taking suitable linear combinations of y1 and y2.

Case of Equal Roots
If r1 = r2, we obtain apparently only one solution of the form (8.16). The procedure for determining the second solution is explained below.

Case where r1 − r2 = N: If r1 − r2 = N, where N is a positive integer, then
$$F(r_2 + N) = F(r_1) = 0,$$
and we will not be able to determine aN from (8.19) unless the sum in (8.19) also vanishes for n = N. In the latter case, aN will be arbitrary and a second series solution can be obtained. If the sum does not vanish for n = N, there is only one series solution of the form (8.16).

Remark: (i) Notice that to determine r1 and r2 we need only determine p0 and q0 and then solve
$$r(r-1) + p_0r + q_0 = 0,$$
where the coefficients p0 and q0 are given by
$$p_0 = \lim_{x\to 0}xp(x) \qquad\text{and}\qquad q_0 = \lim_{x\to 0}x^2q(x).$$
(ii) In practice, and especially when P, Q, R are polynomials, it is usually more convenient to substitute the series (8.16) in (8.14), determine the values of r1 and r2, and then explicitly compute the coefficients an, n = 0, 1, 2, ···. We summarize the discussion in the theorem which follows:

8.3.2. Theorem: Consider the differential equation:
$$x^2y'' + x(xp(x))y' + (x^2q(x))y = 0,$$
where x = 0 is a regular singular point. Then the functions x → xp(x) and x → x²q(x) are analytic at x = 0 with convergent power series expansions:
$$xp(x) = \sum_{n=0}^{\infty}p_nx^n \qquad\text{and}\qquad x^2q(x) = \sum_{n=0}^{\infty}q_nx^n$$
for |x| ≤ ρ, where ρ is some positive number.
Let r1 and r2 be the roots of the indicial equation:
$$F(r) = r(r-1) + p_0r + q_0 = 0,$$
with r1 ≥ r2 if r1 and r2 are real and distinct. Then in either of the intervals −ρ < x < 0 and 0 < x < ρ, there exists a solution of the form:
$$y_1(x) = |x|^{r_1}\Bigl[1 + \sum_{n=1}^{\infty}a_n(r_1)x^n\Bigr] \tag{8.22}$$
where {an(r1)} are given by the recurrence relation (8.19) with a0 = 1 and r = r1. If r1 − r2 is not zero or a positive integer, then in either of the intervals −ρ < x < 0 and 0 < x < ρ, there exists a second linearly independent solution of the form:
$$y_2(x) = |x|^{r_2}\Bigl[1 + \sum_{n=1}^{\infty}a_n(r_2)x^n\Bigr]. \tag{8.23}$$
The coefficients {an(r2)} are also determined by means of the recurrence relation (8.19) with a0 = 1 and r = r2. The power series in (8.22) and (8.23) converge for |x| < ρ.

8.3.3. Example: Discuss the nature of the solutions of the equation
$$2x(1+x)y'' + (3+x)y' - xy = 0$$
in a neighborhood of the origin.

Solution: Here P(x) = 2x(1+x), Q(x) = 3 + x and R(x) = −x. The point x = 0 is a regular singular point since
$$\lim_{x\to 0}\frac{xQ(x)}{P(x)} = \lim_{x\to 0}\frac{x(3+x)}{2x(1+x)} = \frac{3}{2}$$
and
$$\lim_{x\to 0}\frac{x^2R(x)}{P(x)} = \lim_{x\to 0}\frac{x^2(-x)}{2x(1+x)} = 0.$$
Hence, we have p0 = 3/2 and q0 = 0. Thus the indicial equation is
$$r(r-1) + \frac{3}{2}r + 0 = 0,$$
with roots r1 = 0 and r2 = −1/2. Since these roots are distinct and do not differ by a positive integer, by the last theorem there will be two linearly independent solutions of the form:
$$y_1(x) = 1 + \sum_{n=1}^{\infty}a_n(0)x^n$$
and
$$y_2(x) = |x|^{-1/2}\Bigl[1 + \sum_{n=1}^{\infty}a_n\bigl(-\tfrac{1}{2}\bigr)x^n\Bigr]$$
for 0 < |x| < ρ. Note that ρ is the distance from x0 = 0 to the nearest zero of P(x); in the present case ρ ≥ 1.
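Extracting the indicial roots from p0 and q0 is mechanical; a small Python sketch for the real-root case (function name ours):

```python
import math

def indicial_roots(p0, q0):
    """Roots of F(r) = r(r-1) + p0 r + q0 = r^2 + (p0 - 1) r + q0."""
    b, c = p0 - 1, q0
    d = math.sqrt(b * b - 4 * c)
    return (-b + d) / 2, (-b - d) / 2

r1, r2 = indicial_roots(1.5, 0.0)   # Example 8.3.3: p0 = 3/2, q0 = 0
print(r1, r2)  # 0.0 -0.5
```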

8.3.4. Series Solutions Near a Regular Singular Point

Cases r1 = r2 and r1 − r2 = N
We begin with case (i): r1 = r2. This implies real roots of the indicial equation. Hence if we assume as usual a solution of Equation (8.14) of the form (8.16), then after substituting in (8.14) we obtain:
$$a_0F(r)x^r + \sum_{n=1}^{\infty}\Bigl[F(n+r)a_n + \sum_{k=0}^{n-1}a_k\{(r+k)p_{n-k} + q_{n-k}\}\Bigr]x^{n+r} = 0 \tag{8.24}$$
where
$$F(r) = r(r-1) + p_0r + q_0. \tag{8.25}$$
If the roots of (8.25) are both equal to r1, we can obtain one solution by setting r = r1 and demanding that the coefficient of each power x^{r+n} vanishes for all n = 0, 1, 2, ···. Then, assuming that F(r+n) ≠ 0 for n ≥ 1, we have
$$a_n(r) = -\frac{\sum_{k=0}^{n-1}a_k\bigl[(r+k)p_{n-k} + q_{n-k}\bigr]}{F(r+n)}, \qquad n \geq 1. \tag{8.26}$$
Using (8.26) in (8.24), we have
$$L[y](r, x) = a_0x^rF(r) \tag{8.27}$$
where
$$L[y] = \Bigl[x^2\frac{d^2}{dx^2} + x(xp(x))\frac{d}{dx} + x^2q(x)\Bigr](y).$$
Since r1 is a repeated root of (8.25), we have
$$F(r) = (r-r_1)^2. \tag{8.28}$$
Letting r = r1 in (8.27), we find that
$$L[y](r_1, x) = 0.$$
As we already know,
$$y_1(x) = x^{r_1}\Bigl[a_0 + \sum_{n=1}^{\infty}a_n(r_1)x^n\Bigr], \qquad x > 0, \tag{8.29}$$
is a solution of (8.14).
is a solution of (8.14). Furthermore from (8.27) we have

∂y ∂
L[ ](r, x) = a0 (xr )(r − r1 )2
∂r ∂r
h i
= a0 (r − r1 )2 xr ln x + 2(r − r1 )xr .

Hence, " #
∂y
L (r1 , x) = 0.
∂r
Thus, a second solution of (8.14) is

∂y
y2 (x) = ( )(r1 , x)
∂r( )
∂ r1
X
n
= x [a0 + an (r1 )x ]
∂r n=1
∞ ∞
0
= (xr1 ln x)[a0 + an (r1 )xn ] + xr1 an (r1 )xn
X X

n=1 n=1

0
= y1 (x) ln x + xr1 an (r1 )xn , x > 0
X

n=1

where
0 d
an (r1 ) = an (r)|r=r1 .
dr

(ii) Case r_1 - r_2 = N \in IN. First, we recall that the recurrence relation for
a_n is given by:
\[
F(r_2 + n)a_n(r_2) = -\sum_{k=0}^{n-1} a_k(r_2)\big[(r_2+k)p_{n-k} + q_{n-k}\big], \quad n \ge 1.
\]
When n = N we have F(r_2 + N) = F(r_1) = 0, so the left hand side becomes
0 \cdot a_N(r_2). If the right hand side also vanishes for n = N, then a_N(r_2) is
arbitrary and a second series solution of the assumed form exists; if it does
not vanish, no such solution exists and the second solution must contain a
logarithmic term. We have the following theorem:

8.3.5. Theorem: Consider the ordinary differential equation
\[
L[y] = x^2 y'' + x\big(xp(x)\big)y' + x^2 q(x)y = 0 \tag{8.30}
\]
where x = 0 is a regular singular point. Then the functions x \to xp(x) and
x \to x^2 q(x) are analytic at x = 0, that is, the functions admit convergent
series expansions of the form:
\[
xp(x) = \sum_{n=0}^{\infty} p_n x^n, \quad x^2 q(x) = \sum_{n=0}^{\infty} q_n x^n,
\]
which converge for |x| < \rho, \rho > 0.

Let r1 and r2 with r1 ≥ r2 if they are real, be the roots of the indicial
equation:
F (r) = r(r − 1) + p0 r + q0 = 0.
Then in either of the intervals: (−∞, 0) and (0, ∞) equation (8.30) has two
linearly independent solutions y1 and y2 of the following forms:

(i) If r_1 - r_2 is a positive integer, then
\[
y_1(x) = |x|^{r_1}\Big[1 + \sum_{n=1}^{\infty} a_n(r_1)x^n\Big]
\]
and
\[
y_2(x) = \lambda y_1(x)\ln|x| + |x|^{r_2}\Big[1 + \sum_{n=1}^{\infty} c_n(r_2)x^n\Big].
\]
(ii) If r_1 = r_2, then
\[
y_1(x) = |x|^{r_1}\Big[1 + \sum_{n=1}^{\infty} a_n(r_1)x^n\Big]
\]
and
\[
y_2(x) = y_1(x)\ln|x| + |x|^{r_1}\sum_{n=1}^{\infty} b_n(r_1)x^n.
\]
The coefficients a_n(r_1), a_n(r_2), b_n(r_1), c_n(r_2) and the constant \lambda may be
determined by substituting the series solution (8.16) in (8.30).
The constant \lambda may be zero. All the series displayed above converge for
|x| < \rho, \rho > 0, and each defines a function which is analytic in a
neighborhood of x = 0.

In our discussion of the Bessel equation in the next section, we shall need
the gamma function and some of its elementary properties, which we present
in what follows.

8.3.6. The Gamma Function Γ(z)


The gamma function is defined by:
\[
\Gamma(z) = \int_0^{\infty} e^{-t}t^{z-1}\,dt.
\]
Then, integrating by parts, we have
\[
\Gamma(z+1) = \int_0^{\infty} e^{-t}t^{z}\,dt
= -e^{-t}t^{z}\Big|_0^{\infty} + z\int_0^{\infty} e^{-t}t^{z-1}\,dt
= 0 + z\int_0^{\infty} e^{-t}t^{z-1}\,dt
= z\Gamma(z).
\]
Therefore,
Γ(z + 1) = zΓ(z).
Similarly,

Γ(z + 2) = Γ(1 + (1 + z))


= (1 + z)Γ(1 + z)
= z(1 + z)Γ(z).

Thus, evaluating particular values of the gamma function, we have:
\[
\Gamma(1) = \int_0^{\infty} e^{-t}\,dt = -e^{-t}\Big|_0^{\infty} = -(0-1) = 1.
\]
Note that the defining integral diverges for z \le 0, so \Gamma(0) and \Gamma(z) for
negative z are not defined by this formula. For n \in IN,
\[
\Gamma(n) = (n-1)!, \quad \Gamma(n+1) = n!.
\]
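As a quick numerical illustration (not part of the original development; this sketch assumes Python's standard-library `math.gamma`), the functional equation \Gamma(z+1) = z\Gamma(z) and the factorial values can be checked directly:

```python
import math

# Functional equation Gamma(z+1) = z*Gamma(z) at a few sample points.
for z in [0.5, 1.7, 3.2, 6.0]:
    lhs = math.gamma(z + 1)
    rhs = z * math.gamma(z)
    assert abs(lhs - rhs) < 1e-9 * abs(lhs)

# Gamma(n) = (n-1)! and Gamma(n+1) = n! for positive integers n.
for n in range(1, 10):
    assert abs(math.gamma(n) - math.factorial(n - 1)) < 1e-6
    assert abs(math.gamma(n + 1) - math.factorial(n)) < 1e-6

print("gamma identities verified")
```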

8.4. Bessel Equations


We consider the solution of a special case of Equation (8.30) given by:
\[
x^2 y'' + xy' + (x^2 - v^2)y = 0 \tag{8.31}
\]
in a neighborhood of the origin.

Equation (8.31) is called the Bessel equation of order v (named after
Friedrich Wilhelm Bessel, 1784-1846); its solutions are the Bessel functions.
This equation is encountered in many calculations of mathematical physics
and applied mathematics. We shall discuss most fully the case where v = 0.

Assume that a solution of the Bessel equation is of the form:
\[
y = \sum_{n=0}^{\infty} a_n x^{n+r}.
\]

Then
\[
y' = \sum_{n=0}^{\infty} a_n(n+r)x^{n+r-1}
\]
and
\[
xy' = \sum_{n=0}^{\infty} a_n(n+r)x^{n+r}.
\]
Notice that
\[
x^2 y'' + xy' + (x^2 - v^2)y = x(xy')' + (x^2 - v^2)y = 0.
\]
Then
\[
(xy')' = \sum_{n=0}^{\infty} a_n(n+r)^2 x^{n+r-1}
\]
and
\[
x(xy')' = \sum_{n=0}^{\infty} a_n(n+r)^2 x^{n+r}.
\]

Notice also that the Bessel equation may be written as:
\[
x(xy')' + (x^2 - v^2)y = 0,
\]
hence we have:
\[
\sum_{n=0}^{\infty} a_n(n+r)^2 x^{n+r} + \sum_{n=0}^{\infty} a_n x^{n+r+2} - v^2\sum_{n=0}^{\infty} a_n x^{n+r}
\]
\[
= \sum_{n=0}^{\infty} a_n(n+r)^2 x^{n+r} + \sum_{n=2}^{\infty} a_{n-2}x^{n+r} - v^2\sum_{n=0}^{\infty} a_n x^{n+r}
\]
\[
= \sum_{n=0}^{\infty} a_n\big[(n+r)^2 - v^2\big]x^{n+r} + \sum_{n=2}^{\infty} a_{n-2}x^{n+r}
\]
\[
= a_0(r^2 - v^2)x^r + a_1\big[(r+1)^2 - v^2\big]x^{r+1}
+ \sum_{n=2}^{\infty}\Big(a_{n-2} + a_n\big[(r+n)^2 - v^2\big]\Big)x^{n+r} = 0.
\]
Thus, the indicial equation is
\[
a_0(r^2 - v^2) = 0,
\]
so that
\[
r = \pm v, \quad a_0 \ne 0.
\]
Also we have
\[
a_1\big((r+1)^2 - v^2\big) = 0.
\]
Thus a_1 = 0, since (r+1)^2 - v^2 = 1 \pm 2v \ne 0 for v \ne \pm\frac{1}{2} (and when it
does vanish we may still take a_1 = 0). The general recurrence relation is
given by
\[
a_{n-2} + a_n\big[(r+n)^2 - v^2\big] = 0,
\]
or
\[
a_n = -\frac{a_{n-2}}{(n+r)^2 - v^2}, \quad n = 2, 3, 4, \dots \tag{8.32}
\]
(n + r)2 − v 2
We have r_1 = \nu, r_2 = -\nu. Consider the case where \nu = 0. Here the two
indicial roots coincide. We have from (8.32)
\[
a_{2n}(r) = -\frac{a_{2n-2}(r)}{(2n+r)^2} = \frac{a_{2n-4}(r)}{(2n+r)^2(2n-2+r)^2}, \tag{8.33}
\]
since
\[
a_{2n-2}(r) = -\frac{a_{2n-4}(r)}{(2n-2+r)^2}.
\]
Iterating, we obtain
\[
a_{2n}(r) = \frac{(-1)^n a_0}{(2n+r)^2(2n-2+r)^2(2n-4+r)^2\cdots(2+r)^2}, \quad n = 1, 2, \dots
\]

For r_1 = 0 = r_2, setting r = 0 we have
\[
a_{2n}(0) = \frac{(-1)^n a_0}{(2n)^2(2n-2)^2(2n-4)^2\cdots 2^2} = \frac{(-1)^n a_0}{2^{2n}(n!)^2}.
\]
The corresponding solution of the Bessel equation for r = 0 is
\[
y_1(x) = \sum_{n=0}^{\infty} a_{2n}(0)x^{2n} = a_0\Big[1 + \sum_{n=1}^{\infty}\frac{(-1)^n x^{2n}}{2^{2n}(n!)^2}\Big], \quad x > 0.
\]
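The series just obtained (with a_0 = 1 it is the Bessel function J_0) converges rapidly and is easy to check numerically. The following sketch in Python evaluates a partial sum and tests it at x = 0 and at the first positive zero of J_0 (approximately 2.4048256, a value assumed here from standard tables):

```python
import math

def J0(x, terms=40):
    # Partial sum of y1(x) with a0 = 1: sum of (-1)^n x^(2n) / (2^(2n) (n!)^2).
    return sum((-1) ** n * x ** (2 * n) / (4 ** n * math.factorial(n) ** 2)
               for n in range(terms))

assert J0(0.0) == 1.0
# The first positive zero of J0 is approximately 2.4048255577.
assert abs(J0(2.4048255577)) < 1e-9
print("J0 series check passed")
```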

To obtain another linearly independent solution, differentiate a_{2n}(r)
logarithmically. Then
\[
\big(\ln a_{2n}(r)\big)' = \frac{a_{2n}'(r)}{a_{2n}(r)}.
\]
Therefore,
\[
\frac{a_{2n}'(r)}{a_{2n}(r)} = -2\Big[\frac{1}{2n+r} + \frac{1}{2n-2+r} + \cdots + \frac{1}{2+r}\Big]
\]

and setting r = 0, we have:
\[
\frac{a_{2n}'(0)}{a_{2n}(0)} = -2\Big[\frac{1}{2n} + \frac{1}{2n-2} + \cdots + \frac{1}{2}\Big]
= -\Big[\frac{1}{n} + \frac{1}{n-1} + \cdots + 1\Big].
\]
Define H_n by
\[
H_n = \frac{1}{n} + \frac{1}{n-1} + \cdots + 1.
\]
Then
\[
a_{2n}'(0) = -H_n a_{2n}(0) = -\frac{(-1)^n H_n a_0}{2^{2n}(n!)^2}.
\]

Then by the last theorem, with a_0 = 1 and b_{2n}(0) = a_{2n}'(0), we obtain the
second solution of the Bessel equation of order zero:
\[
y_2(x) = J_0(x)\ln x + \sum_{n=1}^{\infty}\frac{(-1)^{n+1}H_n}{2^{2n}(n!)^2}x^{2n}, \quad x > 0,
\]
where we have written J_0(x) for y_1(x).


Consider again the case where r_1 = \nu, r_2 = -\nu, \nu \ne 0. Then for r = \nu the
recurrence relation (8.32) may be written thus:
\[
a_n = -\frac{a_{n-2}}{(n+\nu)^2 - \nu^2} = -\frac{a_{n-2}}{n^2 + 2n\nu} = -\frac{a_{n-2}}{n(n+2\nu)}, \quad n = 2, 3, \dots
\]
Since a_1 = 0, we have
\[
a_{2n+1} = 0, \quad n = 0, 1, 2, \dots
\]
Hence the expression for a_n may be written:
\[
a_{2n} = -\frac{a_{2n-2}}{2n(2n+2\nu)} = -\frac{a_{2n-2}}{2^2 n(n+\nu)}.
\]

Since
\[
\Gamma(1+\nu) = \nu\Gamma(\nu) \quad\text{and}\quad \Gamma(2+\nu) = (1+\nu)\Gamma(1+\nu),
\]
then
\[
1+\nu = \frac{\Gamma(2+\nu)}{\Gamma(1+\nu)},
\]
and therefore,
\[
a_2 = -\frac{a_0}{2^2(1+\nu)} = -\frac{a_0\,\Gamma(1+\nu)}{2^2\,\Gamma(2+\nu)}.
\]
Again, since \Gamma(3+\nu) = \Gamma(1+(2+\nu)) and \Gamma(1+z) = z\Gamma(z), we have
\[
\Gamma(3+\nu) = (2+\nu)\Gamma(2+\nu), \quad\text{so}\quad \frac{1}{2+\nu} = \frac{\Gamma(2+\nu)}{\Gamma(3+\nu)},
\]
and hence
\[
a_4 = -\frac{a_2}{2^3(2+\nu)} = \frac{a_0\,\Gamma(1+\nu)}{2!\,2^4\,\Gamma(3+\nu)},
\]
\[
a_6 = -\frac{a_4}{3!\,2\,(3+\nu)} = -\frac{a_0\,\Gamma(1+\nu)}{3!\,2^6\,\Gamma(4+\nu)}.
\]
Then the series solution for r = \nu is
\[
y_1(x) = a_0 x^{\nu}\Gamma(1+\nu)\Big[\frac{1}{\Gamma(1+\nu)} - \frac{1}{\Gamma(2+\nu)}\Big(\frac{x}{2}\Big)^2
+ \frac{1}{2!\,\Gamma(3+\nu)}\Big(\frac{x}{2}\Big)^4 - \frac{1}{3!\,\Gamma(4+\nu)}\Big(\frac{x}{2}\Big)^6 + \cdots\Big].
\]
If we take
\[
a_0 = \frac{1}{2^{\nu}\Gamma(1+\nu)},
\]
then y_1 is called the Bessel function of the first kind of order \nu and written
J_\nu(x). Thus
\[
J_\nu(x) = \sum_{n=0}^{\infty}\frac{(-1)^n}{\Gamma(n+1)\Gamma(n+\nu+1)}\Big(\frac{x}{2}\Big)^{2n+\nu}.
\]
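The series for J_\nu can be checked against the known closed form J_{1/2}(x) = \sqrt{2/(\pi x)}\sin x; the following Python sketch (using `math.gamma`, with the identity assumed from standard references) compares a partial sum with this formula:

```python
import math

def bessel_J(nu, x, terms=40):
    # Partial sum of the series for J_nu(x).
    return sum((-1) ** n / (math.gamma(n + 1) * math.gamma(n + nu + 1))
               * (x / 2) ** (2 * n + nu) for n in range(terms))

# Known closed form: J_{1/2}(x) = sqrt(2/(pi*x)) * sin(x).
for x in [0.5, 1.0, 2.0, 5.0]:
    exact = math.sqrt(2.0 / (math.pi * x)) * math.sin(x)
    assert abs(bessel_J(0.5, x) - exact) < 1e-10
print("J_nu series check passed")
```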

Chapter Nine
Laplace Transformations
9.1. Introduction

The Laplace transformation is a powerful tool for solving linear ordinary
differential equations, especially those with constant coefficients and
piecewise continuous non-homogeneous terms. In this connection, the
Laplace transformation has the following characteristics:

(i) the Laplace transformation reduces the problem of solving a differen-


tial equation to an algebraic problem,

(ii) the Laplace transformation automatically takes care of initial con-


ditions, i.e. in initial value problems, the determination of a general
solution is rendered unnecessary.

(iii) the Laplace transformation, when applied to a non-homogeneous
equation, gives the solution directly, that is, without first solving the
corresponding homogeneous equation.

In this chapter, we shall consider some elementary aspects of the the-


ory of Laplace transformation.

9.1.1. Definition: A real valued function t \to f(t) on an interval I \subseteq IR is
said to be piecewise continuous on I if I can be subdivided into finitely
many intervals on each of which f is continuous and has finite limits as
t approaches either endpoint of the interval of subdivision from the
interior. The existence of the Laplace transform is established in the
theorem that follows:

9.1.2. Theorem: Let t → f (t) be a real valued function which is piecewise


continuous on every finite interval contained in (0, ∞) and which satisfies

|f (t)| ≤ M eαt , ∀t ∈ (0, ∞) (9.1)

for some constants α and M > 0. Then the integral


Z ∞
L(f )(s) ≡ F (s) = e−st f (t)dt (9.2)
0

exists for all s > α.

Proof: Since f is piecewise continuous, the function t \to e^{-st}f(t), s \ge 0, is
integrable over any finite interval and
\[
|L(f)(s)| = \Big|\int_0^{\infty} e^{-st}f(t)\,dt\Big|
\le \int_0^{\infty} e^{-st}|f(t)|\,dt
\le M\int_0^{\infty} e^{-st}e^{\alpha t}\,dt
= M\int_0^{\infty} e^{-(s-\alpha)t}\,dt = \frac{M}{s-\alpha}, \quad s > \alpha.
\]

Remarks: (a). It is easy in many practical situations to find out whether


or not a given function satisfies the inequality

|f (t)| ≤ M eαt .

We shall now examine some examples of functions which satisfy such in-
equalities.
(i) Consider the hyperbolic cosine function t \to \cosh t. Since
\cosh t = \frac{1}{2}(e^t + e^{-t}) and e^{-t} \le e^t for all t \ge 0, we have
\[
|\cosh t| = \frac{1}{2}(e^t + e^{-t}) \le e^t,
\]
satisfying (9.1) with M = \alpha = 1.

(ii) Consider the function f(t) = t^n, n \in IN. The function satisfies (9.1).
Notice that by the Taylor series expansion,
\[
e^t = \sum_{n=0}^{\infty}\frac{t^n}{n!} = 1 + t + \frac{t^2}{2!} + \frac{t^3}{3!} + \cdots + \frac{t^k}{k!} + \cdots,
\]
and for t \ge 0 every term of this series is non-negative, so that
\[
\frac{t^n}{n!} \le 1 + t + \frac{t^2}{2!} + \cdots + \frac{t^n}{n!} \le e^t.
\]
Hence
\[
t^n \le n!\,e^t,
\]
from which it follows that |f(t)| \le n!\,e^t, satisfying (9.1) with M = n! and \alpha = 1.

(iii) Any function which is bounded in absolute value for all t \ge 0.

(b). We remark that the conditions of the Theorem (9.1.2) are suffi-
cient rather than necessary. Thus, there may be functions f which do
not satisfy the conditions of the Theorem for which the integral (9.2)
exists nevertheless.

9.1.3. Definition: Let L(IR) denote the linear space of all functions f
for which (9.2) exists. Then, the function
Z ∞
s → L(f )(s) ≡ F (s) = e−st f (t)dt, f ∈ L(IR)
0

is called the Laplace transform of the function f and the map f → L(f )
is called the Laplace transformation.

9.1.4. Remark: Notice that the operator f → L(f ) is linear. That is,
L(f1 + f2 ) = L(f1 ) + L(f2 )
and
L(cf ) = cL(f ), for all f, f1 , f2 ∈ L(IR), and c ∈ IR.

9.1.5. Examples: Find the Laplace transforms of the functions:

(i) f (t) = ta , a ≥ 0, (ii) f (t) = sin ωt.

(iii) f (t) = cos ωt, (iv) f (t) = eat .

Solutions: Notice that each function (i)-(iv) is piecewise continuous on


any interval I ⊆ IR+ and satisfies the inequality (9.1) given by |f (t)| ≤ M eαt
for some M, α > 0. Then for (i),
Z ∞
L(f )(s) = e−st f (t)dt
0
Z ∞
= e−st ta dt, a ≥ 0.
0

Setting st = \lambda, then
\[
L(f)(s) = \int_0^{\infty} e^{-\lambda}\Big(\frac{\lambda}{s}\Big)^a\,\frac{d\lambda}{s}
= \frac{1}{s^{a+1}}\int_0^{\infty} e^{-\lambda}\lambda^a\,d\lambda
= \frac{\Gamma(a+1)}{s^{a+1}}, \quad a \ge 0,
\]
where
\[
\int_0^{\infty} e^{-\lambda}\lambda^a\,d\lambda = \Gamma(a+1).
\]
To solve questions (ii) and (iii), let
\[
f(t) = e^{i\omega t} = \cos\omega t + i\sin\omega t.
\]
Then
\[
L(f)(s) = \int_0^{\infty} e^{-st}e^{i\omega t}\,dt
= \int_0^{\infty} e^{-(s-i\omega)t}\,dt
= \frac{1}{s-i\omega}
= \frac{s+i\omega}{(s-i\omega)(s+i\omega)}
= \frac{s}{s^2+\omega^2} + i\frac{\omega}{s^2+\omega^2}.
\]
On the other hand,
\[
L(f)(s) = \int_0^{\infty} e^{-st}\cos\omega t\,dt + i\int_0^{\infty} e^{-st}\sin\omega t\,dt.
\]
Hence, equating real and imaginary parts,
\[
\int_0^{\infty} e^{-st}\cos\omega t\,dt = \frac{s}{s^2+\omega^2}
\quad\text{and}\quad
\int_0^{\infty} e^{-st}\sin\omega t\,dt = \frac{\omega}{s^2+\omega^2}.
\]
(iv) Here f(t) = e^{at} and for s > a we have
\[
L(f)(s) = \int_0^{\infty} e^{-st}e^{at}\,dt = \int_0^{\infty} e^{-(s-a)t}\,dt = \frac{1}{s-a}, \quad s > a.
\]
Note that for f(t) = t^a, a \ge 0,
\[
L(f)(s) = \frac{\Gamma(a+1)}{s^{a+1}}, \quad a \ge 0.
\]
If a = 0, then f(t) = 1 for all t, and we have
\[
L(f)(s) = \frac{\Gamma(1)}{s} = \frac{1}{s},
\]
while for a = n \in IN, since \Gamma(n+1) = n!,
\[
L(t^n)(s) = \frac{\Gamma(n+1)}{s^{n+1}} = \frac{n!}{s^{n+1}}.
\]
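The transforms computed in (i)-(iv) can be confirmed by crude numerical integration of the defining integral (9.2). The following is an illustrative Python sketch; the trapezoid rule and the truncation point T = 60 are ad hoc choices, not part of the theory:

```python
import math

def laplace_num(f, s, T=60.0, n=100000):
    # Trapezoid-rule approximation of the integral of exp(-s*t) f(t) over [0, T].
    h = T / n
    total = 0.5 * (f(0.0) + math.exp(-s * T) * f(T))
    for k in range(1, n):
        t = k * h
        total += math.exp(-s * t) * f(t)
    return total * h

s, w, a = 2.0, 3.0, 1.0
assert abs(laplace_num(lambda t: t ** 3, s) - math.factorial(3) / s ** 4) < 1e-5
assert abs(laplace_num(lambda t: math.sin(w * t), s) - w / (s ** 2 + w ** 2)) < 1e-5
assert abs(laplace_num(lambda t: math.cos(w * t), s) - s / (s ** 2 + w ** 2)) < 1e-5
assert abs(laplace_num(lambda t: math.exp(a * t), s) - 1.0 / (s - a)) < 1e-5
print("transform checks passed")
```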
9.1.5. Definition: Let U_a be the function defined on (0, \infty) as follows:
\[
U_a(t) = \begin{cases} 0, & t < a \\ 1, & t \ge a \end{cases}
\]
where a > 0. Then the function U_a is called a step function.

The Laplace Transform of U_a:

By definition, we have
\[
L(U_a)(s) = \int_0^{\infty} e^{-st}U_a(t)\,dt.
\]
The transform exists since |U_a(t)| \le 1. Hence,
\[
L(U_a)(s) = \int_0^{a} e^{-st}U_a(t)\,dt + \int_a^{\infty} e^{-st}U_a(t)\,dt
= 0 + \int_a^{\infty} e^{-st}\,dt
= -\frac{1}{s}e^{-st}\Big|_a^{\infty} = \frac{e^{-sa}}{s}.
\]
9.1.6. Definition: Let L(f ) = F be the Laplace transform of f ∈ L(IR).
Then L−1 (F ) = f is called the inverse Laplace transform of F . The oper-
ator L−1 is also linear.

9.1.7. Example: Find the function f whose Laplace transform is
\[
F(s) = \frac{s-1}{s^2+1}.
\]
Solution: It is required to find a function f such that
\[
L(f)(s) = \frac{s-1}{s^2+1} = \frac{s}{s^2+1} - \frac{1}{s^2+1}.
\]
Recall that
\[
L(\cos\omega t) = \frac{s}{s^2+\omega^2}
\quad\text{and}\quad
L(\sin\omega t) = \frac{\omega}{s^2+\omega^2}.
\]
Setting \omega = 1, we have
\[
L(\cos t) = \frac{s}{s^2+1}, \quad L(\sin t) = \frac{1}{s^2+1}.
\]
Hence the function which has F(s) = \frac{s-1}{s^2+1} as Laplace transform is
\[
f(t) = \cos t - \sin t.
\]

9.2. Some Properties of the Laplace Transformation

This section is devoted to some important properties of the Laplace trans-


formation. In particular, we shall study Laplace transforms of derived
functions that satisfy inequality (9.1). We present the following impor-
tant result

9.2.1. Theorem (The Translation Theorem)


If L(f ) = F (s) is the Laplace transform of a function f ∈ L(IR), then the
Laplace transform of the function g given by

g(t) = eat f (t)

is given by
L(g)(s) = F (s − a) := F−a (s),
where
hb (x) = h(x + b), b ∈ IR.

9.2.2. Remark: The assertion of the theorem indicates that multiplication
of f by the function t \to e^{at} corresponds to a translation of the
argument of L(f) = F by -a.

Proof: By definition,
Z ∞
L(f )(s) = F (s) = e−st f (t)dt
0

Hence,
\[
L(g)(s) = \int_0^{\infty} e^{-st}g(t)\,dt
= \int_0^{\infty} e^{-st}e^{at}f(t)\,dt
= \int_0^{\infty} e^{-(s-a)t}f(t)\,dt
= F(s-a).
\]

9.2.3. Example: Find the Laplace transform of the function f given by

f (t) = eat cos ωt.

Solution: Let f_1 be defined as
\[
f_1(t) = \cos\omega t.
\]
Then
\[
L(f_1)(s) = F_1(s) = \frac{s}{s^2+\omega^2}.
\]
Notice that
\[
f(t) = e^{at}f_1(t).
\]
By the Translation Theorem, the Laplace transform L(f) of f is given by
\[
L(f)(s) = F_1(s-a).
\]

Hence,
\[
L(f)(s) = \frac{s-a}{(s-a)^2+\omega^2}.
\]
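The Translation Theorem result can likewise be checked by numerically integrating the defining integral; a small sketch (the quadrature parameters are ad hoc choices):

```python
import math

def laplace_num(f, s, T=60.0, n=100000):
    # Trapezoid-rule approximation of the Laplace integral on [0, T].
    h = T / n
    total = 0.5 * (f(0.0) + math.exp(-s * T) * f(T))
    for k in range(1, n):
        t = k * h
        total += math.exp(-s * t) * f(t)
    return total * h

s, a, w = 3.0, 1.0, 2.0
numeric = laplace_num(lambda t: math.exp(a * t) * math.cos(w * t), s)
exact = (s - a) / ((s - a) ** 2 + w ** 2)   # F1(s - a), the translated transform
assert abs(numeric - exact) < 1e-5
print("translation theorem verified numerically")
```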
9.2.4. Differentiation Theorem: Let f \in L(IR) be continuous on [0, \infty).
Suppose that f' is piecewise continuous on every finite interval and satisfies
the boundedness inequality (9.1). Then the Laplace transform of f'
exists and is given by:
\[
L(f')(s) = sL(f)(s) - f(0).
\]
Proof: We consider first the case where f' is continuous on [0, \infty). Then
by the definition of L(f') and integration by parts, we have:
\[
L(f')(s) = \int_0^{\infty} e^{-st}f'(t)\,dt
= \big[e^{-st}f(t)\big]_0^{\infty} + s\int_0^{\infty} e^{-st}f(t)\,dt
= -f(0) + s\int_0^{\infty} e^{-st}f(t)\,dt
= sL(f)(s) - f(0).
\]
If f' is only piecewise continuous, the proof is similar but the range of
integration is broken up into parts on each of which f' is continuous. This
concludes the proof.

9.2.5. Example: Find the Laplace transform of the function f given by
\[
f(t) = \frac{1}{\omega}\sin\omega t.
\]
You may assume that
\[
L(\cos\omega t) = \frac{s}{s^2+\omega^2}.
\]
Solution: Notice that
\[
\frac{d}{dt}(\cos\omega t) = -\omega\sin\omega t,
\]
that is,
\[
\frac{d}{dt}\Big(-\frac{1}{\omega^2}\cos\omega t\Big) = \frac{1}{\omega}\sin\omega t.
\]
Set
\[
g(t) = -\frac{1}{\omega^2}\cos\omega t.
\]
Then
\[
g'(t) = \frac{1}{\omega}\sin\omega t = f(t).
\]
Hence
\[
L(f)(s) = L(g')(s) = sL(g)(s) - g(0).
\]
But
\[
L(g)(s) = -\frac{1}{\omega^2}\cdot\frac{s}{s^2+\omega^2},
\]
and
\[
g(0) = -\frac{1}{\omega^2}.
\]
Hence
\[
L(f)(s) = L(g')(s)
= -\frac{1}{\omega^2}\cdot\frac{s^2}{s^2+\omega^2} - \Big(-\frac{1}{\omega^2}\Big)
= \frac{1}{\omega^2}\Big[1 - \frac{s^2}{s^2+\omega^2}\Big]
= \frac{1}{\omega^2}\Big[\frac{s^2+\omega^2-s^2}{s^2+\omega^2}\Big]
= \frac{1}{s^2+\omega^2}.
\]

The obvious generalization of the last theorem is the following:

9.2.6. Theorem (Derivatives of Higher Order):

Let f, f', \dots, f^{(n-1)} belong to the space L(IR) and satisfy the boundedness
inequality (9.1) for some M and \alpha, and suppose that f^{(n)} is piecewise
continuous on every finite interval in [0, \infty). Then the Laplace transform of
f^{(n)} exists for s > \alpha, and is given by
\[
L(f^{(n)})(s) = s^n L(f)(s) - s^{n-1}f(0) - s^{n-2}f'(0) - \cdots - f^{(n-1)}(0).
\]
Remark: When n = 2, we have
\[
L(f'')(s) = s^2 L(f)(s) - sf(0) - f'(0).
\]
Proof:
\[
L(f'')(s) = L((f')')(s)
= sL(f')(s) - f'(0)
= s\big(sL(f)(s) - f(0)\big) - f'(0)
= s^2 L(f)(s) - sf(0) - f'(0).
\]

9.3. Laplace Transforms of Integrals

9.3.1. Theorem: Let f \in L(IR) be piecewise continuous and satisfy
inequality (9.1) for some M > 0 and \alpha > 0. Let g be the function on [0, \infty)
defined by:
\[
g(t) = \int_0^t f(\tau)\,d\tau.
\]
Then
\[
L(g)(s) = \frac{1}{s}L(f)(s), \quad s > \alpha.
\]
Proof: Let f be piecewise continuous and satisfy (9.1) for some M and \alpha.
We may assume that \alpha > 0, for if (9.1) holds for some \alpha \le 0, it also holds
for any \alpha > 0.
Then the map t \to g(t) = \int_0^t f(\tau)\,d\tau is continuous, and by (9.1) we obtain:

\[
|g(t)| \le \int_0^t |f(\tau)|\,d\tau \le \int_0^t Me^{\alpha\tau}\,d\tau
= \frac{M}{\alpha}\big(e^{\alpha t} - 1\big)
\le \frac{M}{\alpha}e^{\alpha t} = M'e^{\alpha t},
\]
where M' = M/\alpha > 0, so that L(g)(s) exists. Furthermore, g'(t) = f(t) except
at the points of discontinuity of f. Hence g' is piecewise continuous on
each finite interval, and by our previous results we have
\[
L(f)(s) = L(g')(s) = sL(g)(s) - g(0).
\]
Since g(0) = 0, we have
\[
L(f)(s) = sL(g)(s).
\]
This concludes the proof.

9.3.2. Example: Find g if
\[
L(g)(s) = \frac{1}{s^3+4s}, \quad s > 0.
\]
Solution:
\[
L(g)(s) = \frac{1}{s}\cdot\frac{1}{s^2+4}.
\]
Now \frac{1}{s^2+4} is the Laplace transform of the function f given by
\[
f(t) = \frac{1}{2}\sin 2t.
\]
Hence \frac{1}{s}\cdot\frac{1}{s^2+4} is the Laplace transform of the function g given by
\[
g(t) = \int_0^t f(\tau)\,d\tau = \frac{1}{2}\int_0^t \sin 2\tau\,d\tau
= \frac{1}{2}\Big[-\frac{1}{2}\cos 2\tau\Big]_0^t
= \frac{1}{4}\big[1 - \cos 2t\big].
\]
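We can confirm the resulting transform numerically; a brief Python sketch (quadrature parameters are arbitrary choices):

```python
import math

def laplace_num(f, s, T=60.0, n=100000):
    # Trapezoid-rule approximation of the Laplace integral on [0, T].
    h = T / n
    total = 0.5 * (f(0.0) + math.exp(-s * T) * f(T))
    for k in range(1, n):
        t = k * h
        total += math.exp(-s * t) * f(t)
    return total * h

g = lambda t: 0.25 * (1.0 - math.cos(2.0 * t))
for s in [1.0, 2.0, 5.0]:
    assert abs(laplace_num(g, s) - 1.0 / (s ** 3 + 4.0 * s)) < 1e-5
print("L(g)(s) = 1/(s^3 + 4s) verified numerically")
```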

9.4. Differentiation and Integration of Laplace Transforms

9.4.1. Differentiation of Transforms

It may be readily shown that under the conditions of Theorem (9.1.2), we
have:
\[
L(f)'(s) = F'(s) = -L(g)(s),
\]
where g is the function given by:
\[
g(t) = tf(t).
\]
Proof: This can be established as follows:
\[
F'(s) = \frac{d}{ds}F(s) = \int_0^{\infty}\frac{d}{ds}\,e^{-st}f(t)\,dt
= \int_0^{\infty} e^{-st}\big(-tf(t)\big)\,dt
= -\int_0^{\infty} e^{-st}\big(tf(t)\big)\,dt
= -L(g)(s).
\]

Remark: It follows that differentiation of the Laplace transform of a
given function corresponds in L(IR) to the operation of multiplication by
-t. This property of the Laplace transformation enables us to obtain new
transforms from given ones.

Integration of Laplace Transforms

We present the following results concerning integration of transforms.

9.4.2. Theorem: If f satisfies the conditions of Theorem (9.1.2) and
\[
\lim_{t\to 0^+}\frac{f(t)}{t}
\]
exists, i.e. the right hand limit exists, then
\[
\int_s^{\infty} F(s')\,ds' = L(g)(s),
\]
where g is the function given by
\[
g(t) = \frac{f(t)}{t}.
\]

Proof:
\[
\int_s^{\infty} F(s')\,ds' = \int_s^{\infty}\int_0^{\infty} e^{-s't}f(t)\,dt\,ds'.
\]
Under the above assumptions, we can show that the order of integration
in the last integral may be interchanged. Thus
\[
\int_s^{\infty} F(s')\,ds' = \int_0^{\infty} f(t)\Big[\int_s^{\infty} e^{-s't}\,ds'\Big]dt
= \int_0^{\infty} e^{-st}\frac{f(t)}{t}\,dt
= L(g)(s),
\]
where
\[
g(t) = \frac{f(t)}{t}.
\]

9.4.3. Example: Find the function g whose Laplace transform is given by
\[
L(g)(s) = \frac{1}{s^3}.
\]
Solution: Notice that
\[
\frac{1}{s^3} = \int_s^{\infty}\frac{3\,du}{u^4} = \int_s^{\infty} F(u)\,du,
\]
where
\[
F(s) = \frac{3}{s^4}.
\]
But the function s \to F(s) is the Laplace transform of the function f given by
\[
f(t) = \frac{1}{2}t^3.
\]
By the result above,
\[
L\Big(\frac{1}{2}t^2\Big)(s) = \frac{1}{s^3},
\]
that is, the function g(t) = \frac{1}{2}t^2 has the transform L(g)(s) = \frac{1}{s^3}.
In the foregoing, we have used the fact that
\[
L(t^a)(s) = \frac{\Gamma(a+1)}{s^{a+1}},
\]
which implies that
\[
L(t^2)(s) = \frac{\Gamma(3)}{s^3} = \frac{2!}{s^3} = \frac{2}{s^3}.
\]
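Theorem 9.4.2 itself admits a classical check: with f(t) = \sin t, F(s) = 1/(s^2+1), the theorem predicts L(\sin t / t)(s) = \int_s^{\infty} ds'/(s'^2+1) = \pi/2 - \arctan s. A numerical sketch (quadrature parameters arbitrary):

```python
import math

def laplace_num(f, s, T=60.0, n=100000):
    # Trapezoid-rule approximation of the Laplace integral on [0, T].
    h = T / n
    total = 0.5 * (f(0.0) + math.exp(-s * T) * f(T))
    for k in range(1, n):
        t = k * h
        total += math.exp(-s * t) * f(t)
    return total * h

g = lambda t: math.sin(t) / t if t != 0.0 else 1.0   # the limit as t -> 0+ is 1
for s in [0.5, 1.0, 2.0]:
    assert abs(laplace_num(g, s) - (math.pi / 2 - math.atan(s))) < 1e-5
print("L(sin t / t)(s) = pi/2 - arctan(s) verified numerically")
```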

9.4.4. Definition: Let f and g be two absolutely integrable functions on
[0, \infty). Then the function defined by
\[
h(t) = \int_0^t f(t-\tau)g(\tau)\,d\tau
\]
0

is called the convolution of f and g. We shall denote the convolution of


f and g by f ∗ g, that is h = f ∗ g. Notice the following properties of the
convolution:

(i) f ∗ g = g ∗ f - Commutative property.

(ii) (f1 ∗ f2 ) ∗ f3 = f1 ∗ (f2 ∗ f3 ) - Associative property.

(iii) f1 ∗ (f2 + f3 ) = (f1 ∗ f2 ) + (f1 ∗ f3 ) - Distributive property over addi-
tion.

(iv) f ∗ 0 = 0 ∗ f = 0 - Action of the Zero element

9.4.5. Example: Suppose that f(t) = t and g(t) = \sin t. Then the
convolution of the functions is
\[
h(t) = \int_0^t f(t-\tau)g(\tau)\,d\tau
= \int_0^t (t-\tau)\sin\tau\,d\tau
= t\int_0^t \sin\tau\,d\tau - \int_0^t \tau\sin\tau\,d\tau
\]
\[
= -t\cos\tau\Big|_0^t - \Big[-\tau\cos\tau\Big|_0^t + \int_0^t \cos\tau\,d\tau\Big]
= -t\cos t + t - \big\{-t\cos t + \sin\tau\big|_0^t\big\}
\]
\[
= -t\cos t + t + t\cos t - \sin t
= t - \sin t.
\]
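The closed form t * \sin t = t - \sin t can be compared against a direct numerical evaluation of the convolution integral; an illustrative sketch:

```python
import math

def conv(f, g, t, n=20000):
    # Trapezoid approximation of (f*g)(t) = integral over [0, t] of f(t - tau) g(tau).
    h = t / n
    total = 0.5 * (f(t) * g(0.0) + f(0.0) * g(t))
    for k in range(1, n):
        tau = k * h
        total += f(t - tau) * g(tau)
    return total * h

for t in [0.5, 1.0, 2.0, 4.0]:
    assert abs(conv(lambda u: u, math.sin, t) - (t - math.sin(t))) < 1e-6
print("t * sin t = t - sin t verified")
```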

Next, we shall present a theorem on the Laplace transforms of convolu-


tions.

9.4.6. Theorem: Let f and g satisfy the assumptions of Theorem (9.1.2)


with the same α ≥ 0 and define h by h = f ∗ g. Then

H(s) = F (s)G(s),

where
H(s) = L(h)(s),
F (s) = L(f )(s),
G(s) = L(g)(s).

Proof: We have
\[
F(s)G(s) = L(f)(s)\,L(g)(s)
= \Big(\int_0^{\infty} e^{-st}f(t)\,dt\Big)\Big(\int_0^{\infty} e^{-s\tau}g(\tau)\,d\tau\Big)
= \int_0^{\infty} g(\tau)\int_0^{\infty} e^{-s(t+\tau)}f(t)\,dt\,d\tau.
\]

Set t + \tau = \sigma for each fixed \tau. Then the last expression becomes:
\[
F(s)G(s) = \int_0^{\infty} g(\tau)\int_{\tau}^{\infty} e^{-s\sigma}f(\sigma-\tau)\,d\sigma\,d\tau.
\]
The integral on the right is to be performed over the region bounded by
the line \tau = \sigma and the positive segment of the \sigma axis (the line \tau = 0) in
the \tau-\sigma plane. Under the conditions of the theorem, we may interchange
the order of integration. Hence
\[
F(s)G(s) = \int_0^{\infty} e^{-s\sigma}\int_0^{\sigma} f(\sigma-\tau)g(\tau)\,d\tau\,d\sigma
= \int_0^{\infty} e^{-s\sigma}h(\sigma)\,d\sigma
= H(s),
\]
where
\[
h(t) = \int_0^t f(t-\tau)g(\tau)\,d\tau.
\]

9.4.7. Example: Find the function f such that
\[
f(x) = x + \sin x + \int_0^x f(x-t)\sin t\,dt.
\]
Solution: We may write
\[
f(x) = x + \sin x + (f * g)(x),
\]
where g(t) = \sin t. Taking the Laplace transform of both sides of the
equation, we get:
\[
F(s) = \frac{1}{s^2} + \frac{1}{s^2+1} + \frac{F(s)}{s^2+1}.
\]
Therefore,
\[
F(s)\Big(1 - \frac{1}{s^2+1}\Big) = \frac{1}{s^2} + \frac{1}{s^2+1},
\]
that is,
\[
\frac{s^2}{s^2+1}F(s) = \frac{s^2+1+s^2}{s^2(s^2+1)},
\]
or equivalently,
\[
F(s) = \frac{2s^2+1}{s^4} = \frac{2}{s^2} + \frac{1}{s^4}.
\]
Now, the function which has \frac{2}{s^2} as Laplace transform is f_1(x) = 2x, and the
function which has \frac{1}{s^4} as Laplace transform is f_2(x) = \frac{x^3}{3!}. Hence
\[
f(x) = 2x + \frac{x^3}{3!}
\]
is the required function. Notice that we have employed the relation
\[
L(x^a)(s) = \frac{\Gamma(a+1)}{s^{a+1}}.
\]
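That f(x) = 2x + x^3/3! actually satisfies the original integral equation can be verified numerically by evaluating the convolution on the right hand side; a sketch:

```python
import math

f = lambda x: 2.0 * x + x ** 3 / 6.0

def rhs(x, n=20000):
    # x + sin x + trapezoid approximation of the convolution integral.
    h = x / n
    total = 0.5 * (f(x) * math.sin(0.0) + f(0.0) * math.sin(x))
    for k in range(1, n):
        t = k * h
        total += f(x - t) * math.sin(t)
    return x + math.sin(x) + total * h

for x in [0.5, 1.0, 2.0, 3.0]:
    assert abs(f(x) - rhs(x)) < 1e-6
print("f(x) = 2x + x^3/6 solves the integral equation")
```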

Chapter Ten
Application of Laplace Transforms to Ordinary Differential
Equations

10.1. Introduction:

In this chapter, we shall demonstrate applications of the method of


Laplace transforms for solving ordinary differential equations by con-
sidering some specific examples. First we have:

Example 1: The X and Y coordinates of a particle moving in the plane


IR^2 satisfy the following equations:
\[
x(t) + y'(t) = 1 + t \tag{10.1}
\]
\[
y(t) - x'(t) = t - 1 \tag{10.2}
\]

At time t = 0, the particle is at the point (1, 0) \in IR^2. Using specifically
the method of Laplace transformation, find expressions for x and y
as functions of t.

Solution: Let X and Y be the Laplace transforms of x and y respectively,
that is,
\[
X(s) = \int_0^{\infty} e^{-st}x(t)\,dt.
\]
Taking the Laplace transform of (10.1), we obtain
\[
X(s) + sY(s) - y(0) = \frac{1}{s} + \frac{1}{s^2}.
\]
Since y(0) = 0, we have
\[
X(s) + sY(s) = \frac{s+1}{s^2}. \tag{10.3}
\]
Applying the Laplace transform to Equation (10.2), we have:
\[
Y(s) - sX(s) + x(0) = \frac{1}{s^2} - \frac{1}{s},
\]
that is,
\[
-sX(s) + Y(s) = \frac{1-s}{s^2} - 1 = \frac{1-s-s^2}{s^2}, \tag{10.4}
\]
since x(0) = 1. Multiplying Equation (10.3) by s and adding to (10.4), we
have:
\[
(s^2+1)Y(s) = \frac{s+1}{s} + \frac{1-s-s^2}{s^2} = \frac{1}{s^2}.
\]
Therefore,
\[
Y(s) = \frac{1}{s^2}\cdot\frac{1}{s^2+1}. \tag{10.5}
\]
From Equation (10.3), we then have:
\[
X(s) = \frac{s+1}{s^2} - \frac{s}{s^2}\cdot\frac{1}{s^2+1}
= \frac{(s+1)(s^2+1) - s}{s^2(s^2+1)}
= \frac{s^3+s^2+1}{s^2(s^2+1)}
\]
\[
= \frac{s^3}{s^2(s^2+1)} + \frac{s^2+1}{s^2(s^2+1)}
= \frac{s}{s^2+1} + \frac{1}{s^2} \tag{10.6}
\]
\[
= L(\cos t) + L(t),
\]
so that
\[
x(t) = \cos t + t.
\]
Also, resolving Equation (10.5) into partial fractions:
\[
Y(s) = \frac{1}{s^2} - \frac{1}{s^2+1},
\]
so
\[
y(t) = t - \sin t.
\]
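As a check (finite-difference derivatives; the step size is an arbitrary choice), the pair x(t) = t + \cos t, y(t) = t - \sin t can be substituted back into (10.1) and (10.2):

```python
import math

x = lambda t: t + math.cos(t)
y = lambda t: t - math.sin(t)

def deriv(fn, t, h=1e-5):
    # Central finite-difference derivative.
    return (fn(t + h) - fn(t - h)) / (2.0 * h)

assert abs(x(0.0) - 1.0) < 1e-12 and abs(y(0.0)) < 1e-12   # starts at (1, 0)
for t in [0.0, 0.7, 1.5, 3.0]:
    assert abs(x(t) + deriv(y, t) - (1.0 + t)) < 1e-8      # equation (10.1)
    assert abs(y(t) - deriv(x, t) - (t - 1.0)) < 1e-8      # equation (10.2)
print("Example 1 solution verified")
```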

Example 2: Let u be the function defined by:

u(t) = sin t, t ∈ [0, ∞).

Using specifically the technique of Laplace transformation, determine the


solution of the following integro-differential equation:
\[
y'' + y = u * y, \quad y(0) - 1 = 0 = y'(0). \tag{10.7}
\]
Solution: Taking the Laplace transform of Equation (10.7), we get:
\[
s^2 Y(s) - sy(0) - y'(0) + Y(s) = U(s)Y(s). \tag{10.8}
\]
But y(0) = 1, y'(0) = 0 and U(s) = \frac{1}{s^2+1}. Hence Equation (10.8) becomes
\[
s^2 Y(s) - s + Y(s) = \frac{Y(s)}{s^2+1},
\]

and by rearranging we have
\[
\Big(s^2 + 1 - \frac{1}{s^2+1}\Big)Y(s) = s,
\]
that is,
\[
\frac{(s^2+1)^2 - 1}{s^2+1}\,Y(s) = s,
\]
or
\[
\frac{s^4+2s^2}{s^2+1}\,Y(s) = s.
\]
Thus,
\[
Y(s) = \frac{s(s^2+1)}{s^2(s^2+2)} = \frac{s^2+1}{s(s^2+2)}
= \frac{s^2}{s(s^2+2)} + \frac{1}{s(s^2+2)}
= \frac{s}{s^2+(\sqrt{2})^2} + \frac{1}{s\big(s^2+(\sqrt{2})^2\big)}.
\]
Hence, recognizing \frac{1}{s(s^2+2)} as the transform of the convolution of the
constant function 1 with \frac{1}{\sqrt{2}}\sin\sqrt{2}t, we have
\[
y(t) = \cos\sqrt{2}t + \frac{1}{\sqrt{2}}\int_0^t \sin\sqrt{2}\tau\,d\tau
= \cos\sqrt{2}t + \frac{1}{\sqrt{2}}\Big[-\frac{1}{\sqrt{2}}\cos\sqrt{2}\tau\Big]_0^t
\]
\[
= \cos\sqrt{2}t - \frac{1}{2}\big(\cos\sqrt{2}t - 1\big)
= \frac{1}{2}\big(\cos\sqrt{2}t + 1\big).
\]
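The solution y(t) = \frac{1}{2}(1 + \cos\sqrt{2}t) can be substituted back into (10.7): both the initial conditions and the identity y'' + y = (u * y)(t) are verified numerically below (finite differences and trapezoid quadrature with ad hoc parameters):

```python
import math

r2 = math.sqrt(2.0)
y = lambda t: 0.5 * (1.0 + math.cos(r2 * t))

def ypp(t, h=1e-4):
    # Central second difference approximating y''(t).
    return (y(t + h) - 2.0 * y(t) + y(t - h)) / h ** 2

def conv_uy(t, n=20000):
    # Trapezoid approximation of (u * y)(t) with u(t) = sin t.
    h = t / n
    total = 0.5 * (math.sin(t) * y(0.0) + math.sin(0.0) * y(t))
    for k in range(1, n):
        tau = k * h
        total += math.sin(t - tau) * y(tau)
    return total * h

assert y(0.0) == 1.0                                   # y(0) = 1
assert abs((y(1e-6) - y(-1e-6)) / 2e-6) < 1e-5         # y'(0) = 0
for t in [0.5, 1.0, 2.0]:
    assert abs(ypp(t) + y(t) - conv_uy(t)) < 1e-5
print("integro-differential equation verified")
```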

Example 3: By using specifically the method of Laplace transforms, solve
the system of differential equations given by
\[
y_1' + y_1 = y_2' + y_2 \tag{10.9}
\]
\[
y_1'' + y_2'' = e^t \tag{10.10}
\]
\[
y_1(0) = 0, \quad y_1'(0) = 1, \quad y_2(0) = 1, \quad y_2'(0) = 0. \tag{10.11}
\]

Solution: Taking the Laplace transform of (10.9), with Y_1(s) = L(y_1) and
Y_2(s) = L(y_2), we have
\[
sY_1(s) - y_1(0) + Y_1(s) = sY_2(s) - y_2(0) + Y_2(s).
\]
Using the initial conditions (10.11), we have
\[
sY_1(s) + Y_1(s) = sY_2(s) - 1 + Y_2(s),
\]
that is,
\[
(s+1)Y_1(s) - (1+s)Y_2(s) = -1. \tag{10.12}
\]
Taking the Laplace transform of (10.10), we have
\[
s^2 Y_1(s) - sy_1(0) - y_1'(0) + s^2 Y_2(s) - sy_2(0) - y_2'(0) = \frac{1}{s-1},
\]
since
\[
L(e^{\alpha t}) = \frac{1}{s-\alpha}.
\]
Using (10.11) in the last equation, we get:
\[
s^2 Y_1(s) - 1 + s^2 Y_2(s) - s = \frac{1}{s-1},
\]
that is,
\[
s^2 Y_1(s) + s^2 Y_2(s) = \frac{1}{s-1} + 1 + s = \frac{1 + s^2 - 1}{s-1} = \frac{s^2}{s-1},
\]
or
\[
Y_1(s) + Y_2(s) = \frac{1}{s-1}. \tag{10.13}
\]
From (10.12) we have
\[
Y_1(s) - Y_2(s) = -\frac{1}{1+s}.
\]
Therefore, by combining the last two equations and solving for Y_1(s) and
Y_2(s), we get:
\[
Y_1(s) = \frac{1}{2}\Big(\frac{1}{s-1} - \frac{1}{1+s}\Big)
= \frac{1}{2}\cdot\frac{(1+s)-(s-1)}{s^2-1} = \frac{1}{s^2-1}.
\]

Inverting (performing the inverse transformation), we get
\[
y_1(t) = \frac{1}{2}(e^t - e^{-t}) = \sinh t.
\]
Finally,
\[
Y_2(s) = \frac{1}{2}\Big(\frac{1}{s-1} + \frac{1}{s+1}\Big).
\]
Performing the inverse transformation for Y_2(s), we have
\[
y_2(t) = \frac{1}{2}(e^t + e^{-t}) = \cosh t.
\]
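Finally, y_1 = \sinh t and y_2 = \cosh t can be checked against the system (10.9)-(10.11) directly (finite-difference derivatives; step sizes arbitrary):

```python
import math

y1, y2 = math.sinh, math.cosh

def d1(fn, t, h=1e-5):
    return (fn(t + h) - fn(t - h)) / (2.0 * h)              # first derivative

def d2(fn, t, h=1e-4):
    return (fn(t + h) - 2.0 * fn(t) + fn(t - h)) / h ** 2   # second derivative

assert y1(0.0) == 0.0 and y2(0.0) == 1.0                    # initial values (10.11)
assert abs(d1(y1, 0.0) - 1.0) < 1e-8 and abs(d1(y2, 0.0)) < 1e-8
for t in [0.3, 1.0, 2.0]:
    assert abs((d1(y1, t) + y1(t)) - (d1(y2, t) + y2(t))) < 1e-8   # (10.9)
    assert abs(d2(y1, t) + d2(y2, t) - math.exp(t)) < 1e-4         # (10.10)
print("Example 3 solution verified")
```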

10.2. MAT 341 Past Examination Questions As Exercises

MAT 341 B.Sc, B.Ed., Degree Examinations, University of Ibadan,


1982-1983 Session

1. (a) Let L(IR+ ) denote the set of all functions f : IR+ = [0, ∞) → IR which
have Laplace transforms.
(i) State sufficient conditions for a function to be a member of L(IR+ ).

(ii) Let f \in L(IR^+). Derive an expression for the Laplace transform of
f''', stating any other assumptions which you make.

(b) Solve the system of ordinary differential equations, using the method
of Laplace transformation:

\[
y_1' + y_2' = \cos t,
\qquad
y_1' - y_2' = \sin t, \quad y_1(0) = -\frac{1}{2}, \quad y_2(0) = \frac{1}{2}.
\]

2. (a) Consider the ordinary differential equation:
\[
y'' + qy' + ry = s \tag{1}
\]

in which q, r, s are continuous functions on some interval I ⊆ IR. Suppose


that
yc = a1 y1 + a2 y2
where a1 and a2 are real numbers, is the complementary solution of Equa-
tion (1). Using the method of variation of parameters, determine an ex-
pression for the particular solution of (1).

(b) Verify that y_1(x) = x^{-1} is a solution of the ordinary differential
equation
\[
y'' - \frac{2}{x^2}y = 0, \quad 0 < x < \infty. \tag{2}
\]
Hence, determine
(i) a second solution of Equation (2)

(ii) a particular solution of
\[
y'' - \frac{2}{x^2}y = x, \quad 0 < x < \infty.
\]

3. Write down the Bessel equation of order zero and determine its lin-
early independent solutions in a neighbourhood of the origin.

4 (a) What do you understand by


(i) an ordinary point, (ii) a singular point (iii) a regular singular point
of a linear second order ordinary differential equation?

(b) Find the first terms in each of the two linearly independent series
solutions of the ordinary differential equation

\[
4e^x y'' + xy = 0
\]

in a neighbourhood of the origin.

B.Ed Degree Examination: MAT 341, University of Ibadan


Mathematical Method I, 2006 - 2007 Session

Question 1.

(a) Given the second order ODE
\[
y''(x) + P(x)y'(x) + Q(x)y(x) = R(x),
\quad y(x_0) = a_0, \; y'(x_0) = a_1, \; x_0 \in I \subseteq IR. \tag{1}
\]
State, without proof, a theorem on the existence and uniqueness of solu-
tion of problem (1).

(b) Investigate the existence and uniqueness of the solution of the equation
\[
y'' + \frac{x}{1-x}y' + e^x y = x^2,
\quad y(0) = 1, \; y'(0) = \frac{1}{2}, \; x \in [0, \tfrac{1}{2}].
\]

(c) (i) Let y_1(x), y_2(x) be two solutions of (1) with R(x) \equiv 0. When do we
say that the two solutions are linearly independent (LI)?
(ii) Define the Wronskian W(y_1, y_2) of y_1, y_2.

(d) State the conditions for the two solutions to be linearly independent
in terms of the Wronskian.

(e) (i) Show that y_1(x) = \sin kx and y_2(x) = \cos kx are solutions of the ODE
\[
y''(x) + k^2 y = 0.
\]
(ii) Show that they are LI on IR.

Question 2.

(a) Let y_1(x) be a given solution of the second order ODE
\[
y''(x) + P(x)y'(x) + Q(x)y(x) = 0, \quad x \in I.
\]
By writing
\[
y_2(x) = V(x)y_1(x),
\]
obtain the second LI solution y_2(x).

(b) Verify that y_1(x) = x is a solution of
\[
x^2 y'' - 2xy' + 2y = 0,
\]
and obtain a second LI solution.

(c) State the process for finding a particular solution yp (x) of equation
(1) by the method of variation of parameters.

(d) Verify that y_1(x) = x^{-1} and y_2(x) = x^2 are two linearly independent
solutions of the differential equation:
\[
y'' - \frac{2}{x^2}y = 0, \quad 0 < x < \infty,
\]
and determine a particular solution of the non-homogeneous equation
\[
y'' - \frac{2}{x^2}y = x, \quad 0 < x < \infty.
\]
Hence, write down its general solution.

Question 3

(a) When do we say that a function f(x) is analytic at a point x_0 \in IR?

(b) Given the differential equation
\[
p(x)y'' + q(x)y' + r(x)y = 0, \quad x \in I,
\]
when do we say that a point x_0 \in I is an ordinary point, a singular point,
a regular singular point for the ODE?

(c) (i) Determine the ordinary point(s) of the ODE
\[
xy'' + x^3 y' + xe^x y = 0.
\]
(ii) Determine the ordinary point(s) and singular point(s) of the
differential equation
\[
(x^2 - 16)y'' - e^x y' + \alpha y = 0, \quad \alpha \in IR.
\]

(d) Verify that x = 1 is an ordinary point of the Hermite equation:
\[
y'' - 2xy' + \lambda y = 0, \quad x \in IR, \; \lambda \text{ constant}.
\]
Hence obtain a power series solution about the point x = 1 for the
Hermite equation.
