1.1.1. Example
Solution
First Step: Describe the physical process by a differential equation, together with the initial condition
    y(0) = 2.    (1.3)
Substituting t = 0 in equation (1.2), we have y(0) = Ce^0, or C = 2. Thus equation (1.2) becomes
    y(t) = 2e^{kt}.    (1.4)
Indeed,
    dy(t)/dt = 2ke^{kt} = ky(t)  and  y(0) = 2e^0 = 2.
The function (1.4) satisfies the Equation (1.1) as well as the initial condition (1.3).
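The check above can also be done numerically. The sketch below (with the illustrative choice k = 0.5; any k works) approximates dy/dt by a central difference and confirms that (1.4) satisfies both the equation and the initial condition:

```python
import math

# Numerical check of Example 1.1.1: y(t) = 2e^{kt} should satisfy
# dy/dt = ky(t) and y(0) = 2.  k = 0.5 is an arbitrary illustrative value.
k = 0.5

def y(t):
    return 2.0 * math.exp(k * t)

def dydt(t, h=1e-6):
    # central-difference approximation of the derivative
    return (y(t + h) - y(t - h)) / (2.0 * h)

# The residual dy/dt - k*y(t) should vanish at every sampled t.
residuals = [abs(dydt(t) - k * y(t)) for t in (0.0, 0.7, 1.3, 2.0)]
```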
1.1.2. Example
Suppose that the population N(t) cannot exceed some limiting value K:
    N(t) ≤ K, ∀ t.
A simple model with this property is
    dN/dt = k(K − N),    (1.5)
where k and K are positive constants. Thus as N increases to K, the rate of growth decreases to zero. To solve (1.5), we write
    [1/(K − N)] dN/dt = k,  K − N > 0.
Integrating both sides we have
    ∫ [1/(K − N(t))] (dN/dt) dt = kt + C,
where C is the arbitrary constant of integration. Evaluating the left hand side, we have:
    ∫ dN/(K − N) = −ln(K − N) + C1.
Hence, it follows that
    −ln(K − N) + C1 = kt + C
    ln(K − N) = −kt + (C1 − C)
    K − N = e^{C1−C} e^{−kt}.
Note
The function (1.6) is the only function that satisfies the differential equation (1.5) and the initial condition N = N0 at t = 0. Notice that since αe^{−kt} > 0, ∀ t, the value of N(t) remains below K, and as
    t → ∞,  N = K − αe^{−kt} → K,
since e^{−kt} → 0.
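The limiting behaviour can be confirmed numerically. The sketch below (with arbitrary illustrative values of K, α and k, and writing the rate constant as k to distinguish it from the limiting value K) checks that N(t) = K − αe^{−kt} satisfies (1.5) and approaches K:

```python
import math

# Check that N(t) = K - alpha*e^{-kt} satisfies dN/dt = k(K - N)
# and that N(t) -> K as t grows.  K, alpha, k are illustrative values.
K, alpha, k = 100.0, 30.0, 0.4

def N(t):
    return K - alpha * math.exp(-k * t)

def dNdt(t, h=1e-6):
    # central-difference approximation of dN/dt
    return (N(t + h) - N(t - h)) / (2.0 * h)

residuals = [abs(dNdt(t) - k * (K - N(t))) for t in (0.0, 1.0, 5.0)]
limit_gap = abs(N(50.0) - K)   # alpha*e^{-20} is negligible
```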
Equation (1.5) above is a special case of the differential equation of the
form
    dN/dt = a + bN,
where a and b are constants.
1.1.3. Definition
An equation of order n is said to be a linear equation if it is of the special form:
    a0(x) y^(n)(x) + a1(x) y^(n−1)(x) + ⋯ + a_{n−1}(x) y'(x) + a_n(x) y(x) = f(x),    (1.7)
where a0, a1, ⋯, a_n are given functions defined on an interval
    I = {x : a < x < b} ⊆ IR.
Thus, the general nth order equation (1.7) is linear if the left hand side (LHS) of the equation is a first degree polynomial in y, y', y'', ⋯, y^(n).
An equation that is not of the form (1.7) is said to be nonlinear. For example, the equations
    x y'' + y' + (cos x) y = e^x
    x y' + y = x^2
    x y''' + e^x y' + (sin x) y = 0
are all linear equations. But
    y' + 2y^2 = 1
is nonlinear.
and
    y'' + 2y' = 15C2 e^{3x} = 3(y' + 2y),
and hence,
    3(y' + 2y) − (y'' + 2y') = 0.
This implies that
    y'' − y' − 6y = 0.
1.2.2. Example. Eliminate the constant from the equation:
    (x − a)^2 + y^2 = a^2.
Differentiating, we have
    2(x − a) + 2y y' = 0,
from which
    a = y y' + x.
Substituting for the constant a in the given equation, we have:
    (y y')^2 + y^2 = (x + y y')^2.
Hence,
    y^2 = x^2 + 2xy y'.
Finally we have:
    (x^2 − y^2)dx + 2xy dy = 0.
1.2.3. Example: Eliminate B and α from the relation:
    (i)  x = B cos(ωt + α),
where ω is a parameter.
Differentiating with respect to t, we have
    (ii)  dx/dt = −ωB sin(ωt + α)
and
    (iii)  d^2x/dt^2 = −ω^2 B cos(ωt + α).
Combining Equations (i) and (iii) above, we have
    d^2x/dt^2 + ω^2 x = 0.
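Since both B and α are eliminated, every function x(t) = B cos(ωt + α) must satisfy the resulting equation. The sketch below checks this for several arbitrary B, α values, approximating the second derivative by a central difference:

```python
import math

# Check of Example 1.2.3: x(t) = B cos(wt + a) should satisfy
# d^2x/dt^2 + w^2 x = 0 for ANY constants B and a.
# The particular B, a, t values sampled are arbitrary; w is the parameter.
w = 2.0

def residual(B, a, t, h=1e-4):
    x = lambda s: B * math.cos(w * s + a)
    # central second difference approximates x''(t)
    xpp = (x(t + h) - 2.0 * x(t) + x(t - h)) / (h * h)
    return abs(xpp + w * w * x(t))

worst = max(residual(B, a, t)
            for B in (1.0, -3.5)
            for a in (0.0, 1.2)
            for t in (0.3, 2.1))
```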
1.2.4. Remark: Associated with an nth order ODE of the form
    y^(n) = G[x, y, y', y'', ⋯, y^(n−1)]    (1.8)
are n auxiliary conditions of the type
    y(x0) = K0,  y'(x0) = K1,  ⋯,  y^(n−1)(x0) = K_{n−1}.    (1.9)
For example, for the second order equation y'' = K[x, y, y'], the conditions are y(x0) = K0, y'(x0) = K1.
A set of auxiliary conditions of the form (1.9) is called a set of initial conditions for Equation (1.8). Equation (1.8) together with (1.9) constitute an initial value problem.
Definition: A function f(x, y) defined for all (x, y) ∈ IR^2 is said to be homogeneous of degree n in x and y if and only if f(tx, ty) = t^n f(x, y) for all t > 0. For example, if
    f(x, y) = 2y^3 e^{y/x} − x^4/(x + 3y),
then
    f(tx, ty) = 2t^3 y^3 e^{ty/(tx)} − t^4 x^4/(tx + 3ty)
              = t^3 [2y^3 e^{y/x} − x^4/(x + 3y)]
              = t^3 f(x, y).
Similarly, g(x, y) = x^3 y^2 − 3x^5 is homogeneous of degree 5.
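Homogeneity of a given degree is easy to test numerically: scale both arguments by t and compare against t^n times the original value. The sketch below checks both functions above at a few arbitrary sample points:

```python
import math

# Check the homogeneity claims: f(tx, ty) = t^3 f(x, y) for
# f(x, y) = 2y^3 e^{y/x} - x^4/(x + 3y), and g(tx, ty) = t^5 g(x, y)
# for g(x, y) = x^3 y^2 - 3x^5.  Sample points and t values are arbitrary.
def f(x, y):
    return 2.0 * y**3 * math.exp(y / x) - x**4 / (x + 3.0 * y)

def g(x, y):
    return x**3 * y**2 - 3.0 * x**5

def degree_error(func, n, x, y, t):
    return abs(func(t * x, t * y) - t**n * func(x, y))

err_f = max(degree_error(f, 3, 1.5, 0.7, t) for t in (0.5, 2.0, 3.0))
err_g = max(degree_error(g, 5, 1.5, 0.7, t) for t in (0.5, 2.0, 3.0))
```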
Assume now that the right hand side F(x, y) of Equation (1.20) can be written as
    F(x, y) = f(x, y)/g(x, y),
where the functions f(x, y) and g(x, y) are homogeneous of the same degree n in x and y. Then the first order equation (1.20) becomes
    dy/dx = f(x, y)/g(x, y) = [x^n f̃(y/x)]/[x^n g̃(y/x)] = Φ(y/x).    (1.23)
Setting
    y = vx,
we have
    x dv/dx + v = (1 + v^2)/2
or
    2x dv/dx = v^2 − 2v + 1 = (v − 1)^2.
Hence, separating the variables and integrating, we have:
    ∫ dv/(v − 1)^2 = (1/2) ∫ dx/x.
Then,
    −1/(v − 1) = (1/2) ln|x| + c1
or
    v = 1 − 2/(ln|x| + 2c1).
Replacing v by y/x and setting c = 2c1, we finally have:
    y(x) = x − 2x/(ln|x| + c).
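Assuming (as the substitution step x dv/dx + v = (1 + v^2)/2 implies) that the original equation of this example is dy/dx = [1 + (y/x)^2]/2, the final answer can be verified numerically; c = 1 is an arbitrary constant of integration:

```python
import math

# Check that y(x) = x - 2x/(ln|x| + c) satisfies dy/dx = (1 + (y/x)^2)/2,
# i.e. x dv/dx + v = (1 + v^2)/2 with v = y/x.  c is arbitrary.
c = 1.0

def y(x):
    return x - 2.0 * x / (math.log(abs(x)) + c)

def dydx(x, h=1e-6):
    return (y(x + h) - y(x - h)) / (2.0 * h)

def rhs(x):
    v = y(x) / x
    return (1.0 + v * v) / 2.0

residuals = [abs(dydx(x) - rhs(x)) for x in (2.0, 3.0, 5.0)]
```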
    dy/dx = (2xy + 3y^2)/(x^2 + 2xy).
Solution: Notice that the numerator and the denominator of the right hand side of the given equation are homogeneous in x and y. Let y = vx. Then
    dy/dx = v + x dv/dx
and
    (2xy + 3y^2)/(x^2 + 2xy) = (2vx^2 + 3v^2 x^2)/(x^2 + 2vx^2) = (2v + 3v^2)/(1 + 2v).
Substituting these in the given equation, we have:
    v + x dv/dx = (2v + 3v^2)/(1 + 2v).
That is,
    x dv/dx = (2v + 3v^2)/(1 + 2v) − v
            = (2v + 3v^2 − v − 2v^2)/(1 + 2v)
            = (v^2 + v)/(1 + 2v).
Separating the variables and integrating, we have:
    ∫ (1 + 2v)/(v^2 + v) dv = ∫ dx/x.
That is,
    ln(v^2 + v) = ln x + c = ln x + ln A.
Thus,
    v^2 + v = Ax.
Putting back v = y/x, we finally have:
    (y/x)^2 + y/x = Ax
or
    y^2 + xy = Ax^3.
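The implicit answer y^2 + xy = Ax^3 can be checked numerically: along the curve, the slope from implicit differentiation must agree with the right hand side of the given equation. A = 2 is an arbitrary choice:

```python
import math

# Check of the final answer y^2 + xy = Ax^3: the slope obtained by implicit
# differentiation must equal (2xy + 3y^2)/(x^2 + 2xy).  A is arbitrary.
A = 2.0

def y_on_curve(x):
    # positive root of y^2 + x*y - A*x^3 = 0
    return (-x + math.sqrt(x * x + 4.0 * A * x**3)) / 2.0

def slope_implicit(x):
    # differentiate y^2 + xy = Ax^3:  (2y + x) y' = 3Ax^2 - y
    yv = y_on_curve(x)
    return (3.0 * A * x * x - yv) / (2.0 * yv + x)

def slope_ode(x):
    yv = y_on_curve(x)
    return (2.0 * x * yv + 3.0 * yv * yv) / (x * x + 2.0 * x * yv)

gaps = [abs(slope_implicit(x) - slope_ode(x)) for x in (0.5, 1.0, 2.0)]
```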
Case C: Exact Equations
We shall consider here another class of differential equations, called exact equations. We present the following definitions.
and hence if (1.26) is satisfied, then also
    ∂^2F/∂y∂x = ∂N/∂x.    (1.27)
In the integration with respect to x holding y constant, the arbitrary constant may be any function of y. Integrating (1.27) with respect to x, we have
    ∂F/∂y = N + B'(y).    (1.28)
We can now exhibit
    Φ(x, y) = F(x, y) − B(y).
Thus
    dΦ = (∂Φ/∂x)dx + (∂Φ/∂y)dy
       = (∂F/∂x)dx + (∂F/∂y − B'(y))dy
       = M(x, y)dx + (N + B'(y) − B'(y))dy
       = M(x, y)dx + N(x, y)dy.
This implies that Equation (1.25) is exact. In the light of the above, the following theorem has been proved.
1.3.9. Theorem: If M, N, ∂M/∂y and ∂N/∂x are continuous functions of x and y, then a necessary and sufficient condition that the differential equation
    M(x, y)dx + N(x, y)dy = 0
be exact is that
    ∂M/∂y = ∂N/∂x.
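The exactness criterion is mechanical to verify numerically. The sketch below checks it by finite differences for the pair M(x, y) = 3x(xy − 2), N(x, y) = x^3 + 2y used in the example that follows:

```python
# Finite-difference check of the exactness criterion dM/dy = dN/dx
# (Theorem 1.3.9) for M(x, y) = 3x(xy - 2), N(x, y) = x^3 + 2y.
def M(x, y):
    return 3.0 * x * (x * y - 2.0)

def N(x, y):
    return x**3 + 2.0 * y

def dM_dy(x, y, h=1e-6):
    return (M(x, y + h) - M(x, y - h)) / (2.0 * h)

def dN_dx(x, y, h=1e-6):
    return (N(x + h, y) - N(x - h, y)) / (2.0 * h)

# Both partial derivatives should agree (each equals 3x^2).
gaps = [abs(dM_dy(x, y) - dN_dx(x, y)) for x, y in ((1.0, 2.0), (-2.0, 0.5))]
```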
Here,
    M(x, y) = 3x(xy − 2) = 3x^2 y − 6x
and
    N(x, y) = x^3 + 2y.
Also
    ∂M/∂y = 3x^2 = ∂N/∂x,
and M, N and these partial derivatives are all continuous, being polynomials in x and y. Furthermore, (1.26) is satisfied. Hence, equation (1.29) is exact. Its solution is
    Φ = C,
where Φ(x, y) = ∫ M dx + T(y) = x^3 y − 3x^2 + T(y). Differentiating with respect to y and equating to N:
    x^3 + T'(y) = x^3 + 2y,
so that
    T(y) = y^2.
Therefore
    Φ(x, y) = x^3 y − 3x^2 + y^2,
and the solution is
    x^3 y − 3x^2 + y^2 = C.
Here
    M = 2x^3 − xy^2 − 2y + 3
and
    N = −(x^2 y + 2x).
To check whether the given equation is exact, we differentiate M and N partially with respect to y and x respectively:
    ∂M/∂y = −2xy − 2 = ∂N/∂x.
The functions are all continuous and (1.26) is satisfied. Hence equation (1.33) is exact. A set of solutions of (1.33) is defined by
    Φ(x, y) = C,
where
    ∂Φ/∂x = 2x^3 − xy^2 − 2y + 3    (1.34)
and
    ∂Φ/∂y = −x^2 y − 2x.    (1.35)
Because Equation (1.35) is simpler than Equation (1.34), we will take Φ
from (1.35).
Integrating Equation (1.35) with respect to the variable y yields
    Φ(x, y) = −(1/2)x^2 y^2 − 2xy + Q(x).
We determine the function Q(x) from equation (1.34). Differentiating Φ partially with respect to x and equating to the right hand side of (1.34), we have
    −xy^2 − 2y + Q'(x) = 2x^3 − xy^2 − 2y + 3,
so that Q'(x) = 2x^3 + 3 and Q(x) = (1/2)x^4 + 3x. The solution is therefore
    −(1/2)x^2 y^2 − 2xy + (1/2)x^4 + 3x = C.
1.3.12. Exercises
Solve the following equations
Next, we shall consider some equations that can be made exact by multiplying through by an integrating factor.
Integrating Factor
If the equation
    M + N dy/dx = 0
is not exact, it may be possible to make it exact by multiplying it by some function. That is, we may find a function ρ(x, y) such that
    ρ(x, y)M dx + ρ(x, y)N dy = 0
is exact.
1.3.13. Example
Consider Equation (1.38). Multiplying by the integrating factor, Equation (1.38) becomes
    [1 + y/(1 + x^2 y^2)]dx + [x/(1 + x^2 y^2)]dy = 0.    (1.39)
Equation (1.39) is exact since
    ∂/∂y [1 + y/(1 + x^2 y^2)] = ∂/∂x [x/(1 + x^2 y^2)].
Since
    d(tan^{-1} xy) = y/(1 + x^2 y^2) dx + x/(1 + x^2 y^2) dy,
we see from Equation (1.39) that the solution is
    x + tan^{-1} xy = C.
Note: There is no general rule for finding the integrating factor (I.F.); the general procedure is trial and error.
1.4.1. Theorem
Let ρ be any function such that ρ'(x) = p(x). Then
    ρ(x) = ∫ p(x)dx,
and let
    Φ(x) = ±e^{ρ(x)},    (1.41)
where either the plus or minus sign may be chosen. Then the solutions of (1.40) are given by the formula:
    Φ(x)y = ∫ Φ(x)q(x)dx + C.    (1.42)
Proof
Consider the positive sign in Equation (1.41). If Equation (1.40) is multiplied through by
    Φ(x) = e^{ρ(x)},
then
    [dy/dx + p(x)y] e^{ρ(x)} = e^{ρ(x)} q(x).
Since ρ'(x) = p(x), then
    d/dx [y e^{ρ(x)}] = [dy/dx + p(x)y] e^{ρ(x)},
hence
    d/dx [y e^{ρ(x)}] = e^{ρ(x)} q(x).
Integrating both sides, we have
    y e^{ρ(x)} = ∫ e^{ρ(x)} q(x) dx + C.    (1.43)
That is
M (x, y)dx + N (x, y)dy = 0
where
M (x, y) = p(x)y − q(x)
and
N (x, y) = 1.
Here
    ∂M/∂y = p(x)
and
    ∂N/∂x = 0.
By the foregoing, Equation (1.44) is not exact except when p(x) = 0 for all x, and in that case Equation (1.44) reduces to the simple separable form
    dy/dx = q(x).
Multiplying Equation (1.44) by an integrating factor Φ(x) and requiring the resulting equation to be exact, we find
    Φ(x) = ±e^{ρ(x)},
which is precisely (1.41).
1.4.2. Example
(a) Solve the differential equation
    dy/dx + [(2x + 1)/x] y = e^{−2x}.    (1.47)
Solution: Here,
    p(x) = (2x + 1)/x.
The integrating factor is
    Φ(x) = exp(∫ p(x)dx) = exp(∫ (2x + 1)/x dx)
         = exp(2x + ln x)
         = exp(2x) · exp(ln x) = x e^{2x}.
Multiplying Equation (1.47) by Φ(x), we have
    x e^{2x} dy/dx + e^{2x}(2x + 1)y = x
or
    d/dx (x e^{2x} y) = x.
Integrating,
    x e^{2x} y = (1/2)x^2 + C
or
    y = (1/2)x e^{−2x} + (C/x) e^{−2x},
where C is an arbitrary constant.
(b) Solve the initial value problem
    (x^2 + 1) dy/dx + 4xy = x,  y(2) = 1.    (1.48)
Solution: We first divide the given differential equation in (1.48) by (x^2 + 1) to get
    dy/dx + [4x/(x^2 + 1)] y = x/(x^2 + 1).    (1.49)
Equation (1.49) is in the linear form (1.40). Here,
    p(x) = 4x/(x^2 + 1).
By Equation (1.41), the integrating factor is
    Φ(x) = exp(∫ 4x/(x^2 + 1) dx) = exp[ln(x^2 + 1)^2] = (x^2 + 1)^2.
Multiplying Equation (1.49) by Φ(x), we have
    (x^2 + 1)^2 dy/dx + 4x(x^2 + 1)y = x(x^2 + 1)
or
    d/dx [(x^2 + 1)^2 y] = x^3 + x.
Integrating, we have
    (x^2 + 1)^2 y = (1/4)x^4 + (1/2)x^2 + C.
Applying the initial condition x = 2, y = 1, we obtain
    25 = 6 + C,
giving C = 19. The final solution of (1.49) is given by:
    y(x) = x^4/[4(x^2 + 1)^2] + x^2/[2(x^2 + 1)^2] + 19/(x^2 + 1)^2.
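The initial-value solution above is easy to check numerically, both against the original equation and the condition y(2) = 1:

```python
# Check of Example 1.4.2(b): the solution
#   y(x) = (x^4/4 + x^2/2 + 19) / (x^2 + 1)^2
# should satisfy (x^2 + 1) dy/dx + 4xy = x and y(2) = 1.
def y(x):
    return (x**4 / 4.0 + x**2 / 2.0 + 19.0) / (x**2 + 1.0) ** 2

def dydx(x, h=1e-6):
    # central-difference approximation of dy/dx
    return (y(x + h) - y(x - h)) / (2.0 * h)

residuals = [abs((x * x + 1.0) * dydx(x) + 4.0 * x * y(x) - x)
             for x in (0.0, 1.0, 2.0, 3.0)]
```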
1.4.3. Example
Solve the differential equation (1.51). With
    p(y) = 3/y,
(1.51) is of the form:
    dx/dy + p(y)x = q(y),
which is linear in x. The integrating factor is given by
    Φ(y) = exp(∫ (3/y) dy) = exp(ln y^3) = y^3.
1.5.1. Definition:
An equation of the form:
    dy/dx + p(x)y = q(x)y^n,    (1.52)
where n ∈ IN, is called a Bernoulli differential equation.
For n = 0 or 1, the Bernoulli Equation (1.52) reduces to a linear equation and is therefore readily solvable by our previous methods. In general, we assume that n is neither zero nor 1. The following theorem provides a technique for solving Equation (1.52).
1.5.2. Theorem
Suppose that n is neither zero nor 1. Then the transformation v = y^{1−n} reduces the Bernoulli equation (1.52) to a linear equation in v.
Proof: Multiplying (1.52) by y^{−n}, we have
    y^{−n} dy/dx + p(x)y^{1−n} = q(x).    (1.53)
Let
    v = y^{1−n},
then
    dv/dx = (1 − n)y^{−n} dy/dx.
Substituting these in (1.53), we have:
    [1/(1 − n)] dv/dx + p(x)v = q(x)
or
    dv/dx + (1 − n)p(x)v = (1 − n)q(x).
Letting
    p1(x) = (1 − n)p(x)  and  q1(x) = (1 − n)q(x),
the last equation may be written in the form:
    dv/dx + p1(x)v = q1(x),
which is of the linear form (1.40) in the variables v and x.
then
    dv/dx = −2y^{−3} dy/dx.
Equation (1.55) transforms into:
    −(1/2) dv/dx + v = x
or
    dv/dx − 2v = −2x.    (1.56)
Evaluating the integrating factor for (1.56), where p(x) = −2, we have
    Φ(x) = e^{∫ p(x)dx} = e^{−2x}.
1.5.5. Exercises
(1) The equation
    dy/dx = A(x)y^2 + B(x)y + C(x)    (1.59)
is called the Riccati equation.
(2) Solve:
(a)
    x dy/dx + y = (xy)^{3/2},  y(1) = 4;
(b)
    dy/dx + y = f(x),
where
    f(x) = 2 for 0 ≤ x < 1,  f(x) = 0 for x ≥ 1,  y(0) = 0.
(3) (i) Solve the equation:
    a dy/dx + by = ke^{−λx},
where a, b, k are positive constants and λ is non-negative.
(ii) Show that if λ = 0, every solution approaches k/b as x → ∞, but if λ > 0, every solution approaches zero as x → ∞.
(4) Solve
    L di/dt + Ri = E sin(ωt)
for the current i(t) in a circuit, where L, R, E, ω are constants and i(0) = 0.
Chapter Two
Applications of First Order Ordinary Differential Equations
2.1. Introduction
2.2.1. Definition
Let
F (x, y, c) = 0 (2.1)
be a given one-parameter family of curves in the XY-plane. A curve which intersects the curves of the family (2.1) at right angles is called an orthogonal trajectory of the given family.
2.2.2. Example
Consider the family of circles
    x^2 + y^2 = c^2,    (2.2)
with center at the origin and radius c. Each straight line through the origin
    y = kx    (2.3)
is an orthogonal trajectory of the family of circles (2.2). The families of curves (2.2) and (2.3) are orthogonal trajectories of each other.
Step 1
From the given equation (2.1), that is
    F(x, y, c) = 0,
describing the curves, find the first order differential equation of the family by eliminating the constant c. That is, find the differential equation:
    dy/dx = f(x, y)    (2.4)
of the curves (2.1).
Step 2
In the differential equation (2.4) so found in Step 1, replace f(x, y) by the negative reciprocal
    −1/f(x, y)
to obtain the differential equation satisfied by the orthogonal trajectories:
    dy/dx = −1/f(x, y).    (2.5)
Step 3
Obtain a one-parameter family
    G(x, y, c) = 0
or
    y = φ(x, c)
of solutions of (2.5), thus obtaining the desired family of trajectories.
Notice that, from calculus, the product of the slopes of the curve (2.1) and its orthogonal trajectory at the point (x, y) must be −1. This informs the right hand side of Equation (2.5) in Step 2 above.
2.2.3. Example
Find the orthogonal trajectories of the family of parabolas given by:
    y = cx^2.    (2.6)
Step 1
Differentiating (2.6), we have:
    dy/dx = 2cx.    (2.7)
Eliminating the parameter c between Equations (2.6) and (2.7), we have the differential equation of the family in the form:
    dy/dx = 2y/x.    (2.8)
Step 2
Replace the right hand side of (2.8), that is 2y/x, by its negative reciprocal to obtain
    dy/dx = −x/(2y).    (2.9)
Separating the variables and integrating, we have
    ∫ 2y dy = −∫ x dx + C.
That is,
    x^2 + 2y^2 = k^2,
where k is an arbitrary constant. This is a family of ellipses with center at the origin.
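The orthogonality can be confirmed numerically: at any common point with y ≠ 0, the slope 2y/x of a parabola and the slope −x/(2y) of an ellipse multiply to −1, so the curves cross at right angles. The sample points below are arbitrary:

```python
# Check of Example 2.2.3: the product of the two slope fields is -1.
def parabola_slope(x, y):
    return 2.0 * y / x        # from dy/dx = 2y/x, equation (2.8)

def ellipse_slope(x, y):
    return -x / (2.0 * y)     # from dy/dx = -x/(2y), equation (2.9)

products = [parabola_slope(x, y) * ellipse_slope(x, y)
            for x, y in ((1.0, 2.0), (3.0, -0.5), (-2.0, 1.5))]
```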
The tangent line of an oblique trajectory which intersects the curve at an angle α will have angle of inclination α + tan^{-1}[f(x, y)] at the point (x, y). Hence the slope of this oblique trajectory is given by
    tan(α + tan^{-1}[f(x, y)]) = [f(x, y) + tan α]/[1 − f(x, y) tan α].    (2.12)
The differential equation of such a family of oblique trajectories is given by
    dy/dx = [f(x, y) + tan α]/[1 − f(x, y) tan α].    (2.13)
Therefore, to obtain a family of oblique trajectories we may follow the same procedure as above for orthogonal trajectories, except that we replace Step 2 by the following:
Step 2
In the differential equation dy/dx = f(x, y) of the given family, replace f(x, y) by the expression:
    [f(x, y) + tan α]/[1 − f(x, y) tan α].    (2.14)
2.3.2. Example
Find the family of oblique trajectories that intersect the family of straight lines
    y = Cx
at angle 45° = π/4.
Solution
Step 1
Here α = 45°, so tan α = 1. From y = Cx we find dy/dx = C, and eliminating C we have
    dy/dx = y/x    (2.15)
for the given straight lines.
Step 2
With f(x, y) = y/x in (2.15), use Equation (2.13) to obtain:
    dy/dx = [f(x, y) + tan α]/[1 − f(x, y) tan α]
          = (y/x + 1)/(1 − y/x) = (x + y)/(x − y).    (2.16)
Step 3
Solve the differential equation (2.16). Observe that this is a homogeneous equation, so we let y = vx to obtain
    v + x dv/dx = (1 + v)/(1 − v)
or
    [(v − 1)/(v^2 + 1)] dv = −dx/x.
Integrating,
    (1/2) ln(v^2 + 1) − tan^{-1} v = −ln x − ln C.
This implies
    ln[C^2 x^2 (v^2 + 1)] − 2 tan^{-1} v = 0.
Replacing v by y/x, we have the family of oblique trajectories in the form:
    ln[C^2 (x^2 + y^2)] − 2 tan^{-1}(y/x) = 0.
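The implicit family can be checked against (2.16): for g(x, y) = ln(x^2 + y^2) − 2 tan^{-1}(y/x) (the constant factor C^2 only shifts g and does not affect slopes), the implicit slope −g_x/g_y should equal (x + y)/(x − y). Finite differences suffice:

```python
import math

# Check of Example 2.3.2: the trajectories g(x, y) = const, with
# g = ln(x^2 + y^2) - 2*atan(y/x), should satisfy dy/dx = (x + y)/(x - y).
def g(x, y):
    return math.log(x * x + y * y) - 2.0 * math.atan(y / x)

def implicit_slope(x, y, h=1e-6):
    gx = (g(x + h, y) - g(x - h, y)) / (2.0 * h)
    gy = (g(x, y + h) - g(x, y - h)) / (2.0 * h)
    return -gx / gy

gaps = [abs(implicit_slope(x, y) - (x + y) / (x - y))
        for x, y in ((2.0, 0.5), (1.5, -0.7), (3.0, 1.0))]
```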
2.4. Exercises
Find the orthogonal trajectories of the following families of curves and draw a few representative curves of each family.
(1) x^2 − y^2 = C
(2) y^2 = Cx^3
(3) x^3 = 3(y − C)
shall assume that the supply S(t) is seasonal and periodic. To be specific we take
    S(t) = C(1 − cos αt),    (2.18)
where C and α are positive constants. Then, by the periodicity of the cosine function, S(t) is periodic and non-negative. That is,
    S(t + T) = S(t)
for some real number T called the period of S(t).
Next we assume that the demand depends not only on the price but is also a decreasing function of the price; the simplest such function is linear in the price. The resulting price is
    P(t) = [P0 − (a − c)/b − k^2 bc/(k^2 b^2 + α^2)] e^{−kbt} + (a − c)/b + [kc/(k^2 b^2 + α^2)](kb cos αt + α sin αt).    (2.21)
The limiting value of the price P(t) after a long time (that is, as t → ∞) is given by
    P(t) ≈ (a − c)/b − [kc/(k^2 b^2 + α^2)^{1/2}] sin(αt + θ),
where
    θ = tan^{-1}(kb/α).
Thus P(t) fluctuates about the value (a − c)/b.
The supply function S(t) is a minimum when t is an integral multiple of 2π/α. However, the price P(t) is not a maximum at these times. It is a maximum when
    t = [2nπ − (π/2 + θ)]/α,
where n is any integer different from zero.
2.6. The Rate of Cooling of a Chemical Reaction
If a body cools in a surrounding medium, it might be expected that the rate of change of the temperature of the body would depend on the difference between the temperature of the body and that of the surrounding medium. Newton's law of cooling asserts that the rate of change is directly proportional to the difference of the temperatures. Thus if U(t) is the temperature of the body at time t and U0 is the constant temperature of the surrounding medium, then
    dU/dt = −k(U(t) − U0),    (2.22)
where k is a positive constant. The negative sign on the right hand side of Equation (2.22) occurs because dU/dt will be negative when U > U0, that is, when the body is expected to cool down from the high temperature to that of the surrounding medium. Consider the following example for illustration.
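Equation (2.22) has the exact solution U(t) = U0 + (U(0) − U0)e^{−kt}, and a simple simulation stays close to it. The sketch below (with arbitrary illustrative values of k, U0 and the initial temperature) integrates (2.22) by Euler steps and compares against the exact formula:

```python
import math

# Newton's law of cooling, equation (2.22): dU/dt = -k(U - U0).
# A forward-Euler simulation should track the exact solution
#   U(t) = U0 + (U(0) - U0) e^{-kt}.
k, U0, U_init = 0.3, 20.0, 90.0

def exact(t):
    return U0 + (U_init - U0) * math.exp(-k * t)

def euler(t_end, steps):
    dt = t_end / steps
    U = U_init
    for _ in range(steps):
        U += dt * (-k * (U - U0))   # one Euler step of (2.22)
    return U

err = abs(euler(5.0, 100_000) - exact(5.0))
```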
2.6.2. Exercises
1. The growth of a population is said to follow a logistic law if the population satisfies the differential equation:
    dX/dt = KX(t)(M − X(t)),
where X(t) is the population size at time t, M is the maximum size possible, and K is a constant. Assume that a College enrolment follows the logistic law. At the beginning, there are 10,000 students. If the maximum enrolment that the College can take is 25,000 students and in five years enrolment has reached 20,000 students, what will be the enrolment in five more years?
3. Find the orthogonal and oblique trajectories at angle π/4 of the family of curves
    x^2 + y^2 = Kx,
where K is a constant.
    F(P) = λP,  lim_{t→∞} P(t)?
Chapter Three
Second Order Ordinary Differential Equations
3.1 Introduction
We shall discuss methods of solving some special cases of second order ordinary differential equations in this chapter. We restrict ourselves to the class of such equations that may be handled by the special methods discussed here, reserving more general situations for later chapters. First we present the following definition.
    F(x, y, y', y'') = 0.    (3.1)
For the rest of this chapter, we shall consider two classes of Equation (3.1) that can be solved by successively solving two first-order equations.
Class I. The case when the dependent variable y is missing. Such equations have the form:
    G(x, y', y'') = 0.    (3.2)
Suppose that y is a solution of Equation (3.2). If we set v = dy/dx, then v must be a solution of the first order equation:
    G(x, v, v') = 0.    (3.3)
Once v(x) is known, y is recovered from
    dy/dx = v(x)
by integration.
Consider the following examples.
    x d^2y/dx^2 = 2[(dy/dx)^2 − (dy/dx)].    (3.4)
Equation (3.4) is of the form (3.2), where the dependent variable y is missing. Setting v = dy/dx, we have the first order ODE
    x dv/dx = 2(v^2 − v)    (3.5)
for v. Equation (3.5) is separable, and we have
    dv/(v^2 − v) = 2 dx/x    (3.6)
or
    [1/(v − 1) − 1/v] dv = 2 dx/x.
Integrating, we have:
    ln|(v − 1)/v| = 2 ln|x| + ln C.
Therefore
    (v − 1)/v = C1 x^2.
Thus
    v = dy/dx = 1/(1 − C1 x^2).
If C1 > 0, say C1 = a^2 for some a ∈ IR, then
    dy/dx = 1/(1 − a^2 x^2),
so that
    y(x) = (1/2a) ln|(1 + ax)/(1 − ax)| + C2.
On the other hand, if C1 < 0, then we can write C1 = −b^2 for some real number b. Hence, we have
    dy/dx = 1/(1 + b^2 x^2).
Integrating, we have
    y(x) = (1/b) tan^{-1} bx + C2.
Finally, since the constant functions v = 0 and v = 1 are solutions of Equation (3.5), the functions y = C and y = x + C are also solutions of the given differential equation (3.4).
Class II. The case when the independent variable is missing. Such equations have the form:
    H(y, y', y'') = 0.    (3.7)
Suppose that the equation has a solution y and let
    v = dy/dx.
On an interval where y is strictly increasing or decreasing, the function v can be regarded as a function of y and we can write:
    d^2y/dx^2 = dv/dx = (dv/dy)(dy/dx) = v dv/dy.
Equation (3.7) then becomes the first order equation
    H(y, v, v dv/dy) = 0.    (3.8)
Example: Solve
    y d^2y/dx^2 = (dy/dx)^2 + 2 dy/dx.    (3.10)
Solution: Notice that the independent variable x is missing. Setting
    v = dy/dx,  d^2y/dx^2 = v dv/dy,
Equation (3.10) becomes:
    yv dv/dy = v^2 + 2v.    (3.11)
Dividing through by v (for v ≠ 0), we have
    y dv/dy = v + 2.
Separating the variables and integrating, we have:
    ∫ dv/(v + 2) = ∫ dy/y + ln c1.
That is,
    ln(v + 2) = ln y + ln c1,
or
    ln(v + 2) = ln(c1 y).
Hence,
    v = c1 y − 2.
Next, for the case c1 ≠ 0, we solve the equation:
    dy/dx = c1 y − 2.
Then separating the variables, we get:
    ∫ dy/(c1 y − 2) = ∫ dx,
that is,
    (1/c1) ∫ c1 dy/(c1 y − 2) = ∫ dx,
so that by integrating, we get:
    (1/c1) ln(c1 y − 2) = x + c,
or
    c1 y − 2 = e^{c1 x + c1 c} = c2 e^{c1 x},
where
    c2 = e^{c1 c}.
Therefore,
    y = (1/c1)[c2 e^{c1 x} + 2].
When c1 = 0, we have v = −2, giving a solution
    y(x) = −2x + c.
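Both branches of the answer can be verified numerically against equation (3.10). The sketch below uses arbitrary constants c1 and c2 and finite differences for the derivatives:

```python
import math

# Check of Example (3.10): y(x) = (c2 e^{c1 x} + 2)/c1 should satisfy
#   y y'' = (y')^2 + 2 y'.
# c1, c2 are arbitrary illustrative constants.
c1, c2 = 0.7, 1.3

def y(x):
    return (c2 * math.exp(c1 * x) + 2.0) / c1

def residual(x, h=1e-4):
    yp = (y(x + h) - y(x - h)) / (2.0 * h)
    ypp = (y(x + h) - 2.0 * y(x) + y(x - h)) / (h * h)
    return abs(y(x) * ypp - yp * yp - 2.0 * yp)

worst = max(residual(x) for x in (0.0, 0.8, 1.5))

# The c1 = 0 branch y = -2x + c has y'' = 0 and y' = -2, so the residual
# y*y'' - (y')^2 - 2y' = 0 - 4 + 4 vanishes exactly:
lin_residual = abs((-2.0) * 0.0 - (-2.0) ** 2 - 2.0 * (-2.0))
```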
3.2. Exercises
Find the general solution of the following differential equations.
(1) 2t (dx/dt)(d^2x/dt^2) = (dx/dt)^2 + 1.
(2) d^2x/dt^2 = dx/dt + 2t.
(3) 2x d^2y/dx^2 = (dy/dx)^2 − 1.
(4) y d^2y/dx^2 + (dy/dx)^2 = 0.
(5) y d^2y/dx^2 + (dy/dx)^2 + a^2 = 0.
(6) d^2V/dr^2 + (1/r)(dV/dr) = 0.
(7) d^2y/dx^2 = 4y.
Chapter Four
Second Order Homogeneous Linear Equations
4.1 Introduction
    a0 d^2y1/dx^2 + a1 dy1/dx + a2 y1 = 0.    (4.3)
Also, if y2 is a solution, then
    a0 d^2y2/dx^2 + a1 dy2/dx + a2 y2 = 0.    (4.4)
Adding Equations (4.3) and (4.4) we have
    a0 (d^2y1/dx^2 + d^2y2/dx^2) + a1 (dy1/dx + dy2/dx) + a2 (y1 + y2) = 0.    (4.5)
Notice that by the linearity of the differential operator, the last equation can be written as:
    a0 d^2/dx^2 (y1 + y2) + a1 d/dx (y1 + y2) + a2 (y1 + y2) = 0.    (4.6)
Equation (4.6) is the original equation with y replaced by y1 + y2 .
Therefore, y1 + y2 is also a solution.
is the exponential function. Notice that if a0 = 0, we obtain the first order equation of the same family. That is, for a1 ≠ 0, a2 ≠ 0, we have
    a1 dy/dx + a2 y = 0.
Dividing the last equation by a1, we have
    dy/dx + (a2/a1) y = 0.
Setting
    k = a2/a1,
we have
    dy/dx + ky = 0.
Solving by separating the variables, we have
    dy/dx = −ky,
so that
    ∫ dy/y = −k ∫ dx + c.
Carrying out the integration, we obtain:
    ln y = −kx + c.
Hence,
    y = e^{−kx+c} = e^c e^{−kx} = Ae^{−kx}.
Let −k = m; then y = Ae^{mx}. Now assume that y = Ae^{mx} is a solution of (4.2) for a certain m. Then
    dy/dx = Ame^{mx},  d^2y/dx^2 = Am^2 e^{mx}.
Substituting these values in Equation (4.2), we have:
    Ae^{mx}(a0 m^2 + a1 m + a2) = 0.
That is,
    a0 m^2 + a1 m + a2 = 0.    (4.8)
Equation (4.8) is called the auxiliary equation or characteristic equation of the given differential equation (4.2).
These are as follows:
(1) When the roots of (4.8) are real and distinct. Consider
    d^2y/dx^2 − 3 dy/dx + 2y = 0.    (4.9)
In Equation (4.9), the coefficients of the differential equation are a0 = 1, a1 = −3, and a2 = 2. The auxiliary equation (4.8) for this case is given by
    m^2 − 3m + 2 = 0.
Solving for m, we have
    (m − 1)(m − 2) = 0.
Hence,
    m1 = 1,  m2 = 2.
Since the roots are real and distinct, the solutions are y1 = A1 e^{m1 x} and y2 = A2 e^{m2 x}. The general solution is
    y = y1 + y2 = A1 e^{m1 x} + A2 e^{m2 x}.
That is,
    y = A1 e^x + A2 e^{2x}.
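The general solution can be checked for arbitrary constants A1, A2 by substituting back into (4.9) with finite-difference derivatives:

```python
import math

# Check of equation (4.9): y(x) = A1 e^x + A2 e^{2x} should satisfy
# y'' - 3y' + 2y = 0 for any constants A1, A2 (arbitrary values below).
A1, A2 = 2.0, -1.5

def y(x):
    return A1 * math.exp(x) + A2 * math.exp(2.0 * x)

def residual(x, h=1e-4):
    yp = (y(x + h) - y(x - h)) / (2.0 * h)
    ypp = (y(x + h) - 2.0 * y(x) + y(x - h)) / (h * h)
    return abs(ypp - 3.0 * yp + 2.0 * y(x))

worst = max(residual(x) for x in (-1.0, 0.0, 1.0))
```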
(2) Again, consider the differential equation:
    d^2y/dx^2 = −4 dy/dx + 5y.    (4.10)
The auxiliary equation (4.8) for this equation is given by:
    m^2 + 4m − 5 = 0,
with roots m1 = 1, m2 = −5, so that the general solution is
    y(x) = A1 e^x + A2 e^{−5x}.
Case 2: Repeated Real Roots. Consider the differential equation:
    d^2y/dx^2 − 6 dy/dx + 9y = 0.    (4.10)
Then the auxiliary equation (4.8) for the given equation (4.10) is
    m^2 − 6m + 9 = 0,
that is,
    (m − 3)^2 = 0.
Therefore
    m1 = 3,  m2 = 3.
In this case we write
    y = e^{m1 x}(A + Bx).
To see why, divide the auxiliary equation by a0 to obtain
    m^2 + am + b = 0,    (4.10)
where a = a1/a0, b = a2/a0. Thus the solutions of (4.10) are
    m1,2 = [−a ± √(a^2 − 4b)]/2 = −a/2,
since m has repeated roots, so that a^2 − 4b = 0. Let
    y(x) = e^{−(a/2)x}.    (4.11)
But a^2 = 4b, or b = a^2/4. Substituting y2(x) = xe^{−(a/2)x} into (4.2) (in the form y'' + ay' + by = 0), all terms cancel:
    e^{−(a/2)x}[−(a^2/4)x + bx] = 0.
Thus y2(x) = xe^{−(a/2)x} satisfies equation (4.2), and the general solution is
    y(x) = A1 e^{−(a/2)x} + A2 x e^{−(a/2)x} = e^{−(a/2)x}[A1 + A2 x].
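For the concrete repeated-root example above, the claimed general solution e^{3x}(A + Bx) can be substituted back into the equation numerically; A, B below are arbitrary:

```python
import math

# Check of Case 2: y(x) = e^{3x}(A + Bx) should satisfy
# y'' - 6y' + 9y = 0 for any constants A, B.
A, B = 1.2, -0.8

def y(x):
    return math.exp(3.0 * x) * (A + B * x)

def residual(x, h=1e-4):
    yp = (y(x + h) - y(x - h)) / (2.0 * h)
    ypp = (y(x + h) - 2.0 * y(x) + y(x - h)) / (h * h)
    return abs(ypp - 6.0 * yp + 9.0 * y(x))

worst = max(residual(x) for x in (-0.5, 0.0, 0.5))
```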
Two functions y1 and y2 are said to be linearly dependent on an interval if there exist constants c1 and c2, not both zero, such that
    c1 y1(x) + c2 y2(x) = 0
for all x in the interval. If no such relation exists, the functions y1 and y2 are said to be linearly independent.
4.2.4. Example: Consider the functions y1(x) = 8x, y2(x) = 3x. These functions are linearly dependent on any interval since
    y1(x)/y2(x) = 8x/3x = 8/3 = constant.
But y1(x) = x^3 and y2(x) = x^2 are linearly independent since
    y1(x)/y2(x) = x^3/x^2 = x ≠ constant
on any interval in IR. Similarly, for the functions y3(x) = x + 2, y4(x) = x, we have
    y3(x)/y4(x) = (x + 2)/x = 1 + 2/x ≠ constant, ∀ x ∈ I\{0}.
Case 3: Complex Roots. Suppose that the auxiliary equation
    a0 m^2 + a1 m + a2 = 0,
transformed to
    m^2 + am + b = 0,
has complex roots
    m1 = α + iβ,  m2 = α − iβ,
where α and β are real numbers and β ≠ 0. The solutions are of the form:
    y1(x) = e^{(α+iβ)x} = e^{αx}[cos βx + i sin βx]
and
    y2(x) = e^{(α−iβ)x} = e^{αx}[cos βx − i sin βx].
The general solution will be of the form
    y(x) = e^{αx}[A1 sin βx + A2 cos βx].    (4.15)
Example: Consider
    y'' − 2y' + 10y = 0.
The auxiliary equation is
    m^2 − 2m + 10 = 0.
Solving, we have:
    m = [2 ± √(4 − 40)]/2 = 1 ± 3i.
Thus,
    m1 = 1 + 3i,  m2 = 1 − 3i.
In this case,
    α = 1,  β = 3.
By Equation (4.15), the general solution is given by
    y(x) = e^x[A1 sin 3x + A2 cos 3x].
Next, consider the equation
    y'' + k^2 y = 0,
with auxiliary equation
    m^2 + k^2 = 0,
so that
    m = ±ik,  that is, m1 = ik, m2 = −ik.
In this case, α = 0 and β = k. The general solution is therefore
    y(x) = A1 sin kx + A2 cos kx.
For the equation y'' − k^2 y = 0, the auxiliary equation m^2 − k^2 = 0 gives
    m1,2 = ±k,
that is,
    m1 = k,  m2 = −k.
Therefore
    y(x) = Ae^{kx} + Be^{−kx}.    (4.17)
In order to rewrite expression (4.17) in terms of the hyperbolic cosine and sine functions, recall that
    cosh nx = (1/2)(e^{nx} + e^{−nx})
and
    sinh nx = (1/2)(e^{nx} − e^{−nx}).
Hence,
    2 cosh nx = e^{nx} + e^{−nx}
and
    2 sinh nx = e^{nx} − e^{−nx}.
Adding the last two expressions, we get e^{nx} = cosh nx + sinh nx, so that the general solution of
    y'' ± k^2 y = 0
may be expressed in trigonometric or hyperbolic form respectively.
Example: Solve the initial value problem
    y'' − 6y' + 25y = 0,  y(0) = −3,  y'(0) = −1.
The auxiliary equation is
    m^2 − 6m + 25 = 0,
with complex conjugate roots
    m1,2 = 3 ± 4i.
Here,
    α = 3,  β = 4.
The general solution may be written
    y(x) = e^{3x}[A1 sin 4x + A2 cos 4x].    (4.18)
In order to employ the initial conditions for the computation of the values of the constants A1 and A2, we proceed as follows. By differentiating y(x) from Equation (4.18), we have:
    dy/dx = e^{3x}[(3A1 − 4A2) sin 4x + (4A1 + 3A2) cos 4x].    (4.19)
Applying the initial value y(0) = −3 by putting x = 0 and y = −3 in (4.18), we have:
    −3 = e^0[A1 sin 0 + A2 cos 0],
so that
    A2 = −3.
Applying the condition y'(0) = −1 in (4.19), we have:
    4A1 + 3A2 = −1.
Substituting for A2, we find A1 = 2, A2 = −3. Replacing these values in the general solution (4.18), we finally get
    y(x) = e^{3x}[2 sin 4x − 3 cos 4x]
or
    y(x) = √13 e^{3x} sin(4x + θ),
where θ is defined by sin θ = −3/√13 and cos θ = 2/√13.
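The computed solution can be verified numerically against both initial conditions and the differential equation:

```python
import math

# Check of the worked example: y(x) = e^{3x}(2 sin 4x - 3 cos 4x) should
# satisfy y'' - 6y' + 25y = 0 with y(0) = -3 and y'(0) = -1.
def y(x):
    return math.exp(3.0 * x) * (2.0 * math.sin(4.0 * x) - 3.0 * math.cos(4.0 * x))

def yp(x, h=1e-5):
    return (y(x + h) - y(x - h)) / (2.0 * h)

def ypp(x, h=1e-4):
    return (y(x + h) - 2.0 * y(x) + y(x - h)) / (h * h)

ic_gap = max(abs(y(0.0) + 3.0), abs(yp(0.0) + 1.0))
residuals = [abs(ypp(x) - 6.0 * yp(x, h=1e-4) + 25.0 * y(x)) for x in (0.0, 0.5)]
```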
is a component of the circuit that opposes the current. L represents the inductance, measured in henrys; an inductor opposes a change in current. C measures the capacitance in farads; the capacitor stores energy.
(1) The voltage drop ER across the resistor is proportional to the instantaneous current I flowing. That is,
    ER = IR.    (4.20)
(2) The voltage drop EL across the inductor is proportional to the instantaneous rate of change of the current, that is,
    EL = L dI/dt.    (4.21)
(3) The voltage drop across the capacitor is proportional to the instantaneous electric charge Q on the capacitor, that is,
    Ec = Q/c.    (4.22)
The fundamental principle governing such electric circuits is Kirchhoff's voltage law: the algebraic sum of all the voltage drops around a closed circuit is zero. Thus
    ER + EL + Ec − E = 0.    (4.23)
That is,
    IR + L dI/dt + Q/c = E
or
    L dI/dt + RI + Q/c = E.    (4.24)
Since dQ/dt = I, we differentiate Equation (4.24) (with E constant) to get a second order homogeneous differential equation of the form:
    L d^2I/dt^2 + R dI/dt + (1/c)I = 0.    (4.25)
To solve the last equation, we write out the auxiliary equation:
    m^2 + (R/L)m + 1/(cL) = 0.
Thus,
    m1,2 = [−R ± √(R^2 − 4L/c)]/(2L).    (4.26)
Consider equation (4.24):
    L dI/dt + RI + Q/c = E.
Setting I = dQ/dt, we have
    L d^2Q/dt^2 + R dQ/dt + Q/c = E,    (4.27)
a second order linear equation in Q with constant coefficients L, R, 1/c. The last equation may be solved by employing the techniques presented already.
4.4. Exercises: Show that the general solution of Equation (4.27) is given by:
    Q = cE + e^{−Rt/(2L)}(A sin wt + B cos wt),
where A and B are constants and w = [1/(Lc) − R^2/(4L^2)]^{1/2}, and where at t0 = 0, dQ/dt = I0.
Chapter Five
Second Order Non-Homogeneous Differential Equations
5.1 Introduction
where m1, m2 are the roots of the auxiliary equation. Now, consider the following non-homogeneous differential equation:
Substituting for y(x) in (5.3) using (5.2) will make the left hand side of (5.3) equal to zero. Therefore there must be a further term in (5.2) that will make the LHS equal to f(x). The general solution to (5.3) can be written in the form:
where yp(x) is called the particular integral and the term in the square bracket is called the complementary function (the general solution of the homogeneous differential equation (5.1)).
(1) Pn (x)
(2) Pn (x)eax
determine the unknown coefficients. This is called the method of undetermined coefficients. Thus, to solve the non-homogeneous equation:
    a0 y'' + a1 y' + a2 y = f(x),
(a) first find the complementary function:
(1) y(x) = A1 e^{m1 x} + A2 e^{m2 x}  (real and distinct roots of the auxiliary equation);
(2) y(x) = e^{ax}(A cos bx + B sin bx)  (complex conjugate roots).
(b) The particular integral is found by assuming the general form of the function f(x) as indicated above, substituting this into the given equation, and solving for the unknown coefficients by forming algebraic equations in the unknowns. The algebraic equations are formed by equating coefficients of the powers of the independent variable that appear on both sides of the equation.
Example: Solve
    y'' − 5y' + 6y = x^2.    (5.5)
(a) Solve the auxiliary equation:
    m^2 − 5m + 6 = 0,
to get:
    (m − 2)(m − 3) = 0,
so that m1 = 2, m2 = 3. Therefore the complementary function is
    yc(x) = A1 e^{2x} + A2 e^{3x}.
(b) To find the particular integral we assume the general form of the right hand side, which is a second degree polynomial:
    yp(x) = Cx^2 + Dx + E.
Substituting into (5.5), we have
    2C − 5(2Cx + D) + 6(Cx^2 + Dx + E) = x^2,
that is,
    6Cx^2 + (6D − 10C)x + (2C − 5D + 6E) = x^2.    (5.8)
Next, equate the coefficients in (5.8) to get:
    6C = 1  ⇒  C = 1/6,
    6D − 10C = 0  ⇒  D = 5/18,
    2C − 5D + 6E = 0  ⇒  E = 19/108.
The particular integral for Equation (5.5) is therefore given by:
    yp(x) = (1/6)x^2 + (5/18)x + 19/108.
The general solution of the non-homogeneous problem (5.5) is
    y(x) = A1 e^{2x} + A2 e^{3x} + (1/6)x^2 + (5/18)x + 19/108.
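The undetermined coefficients found for (5.5) can be checked by substituting yp back into the equation:

```python
# Check of the particular integral of (5.5):
#   yp(x) = x^2/6 + 5x/18 + 19/108 should satisfy y'' - 5y' + 6y = x^2.
# For a quadratic the derivatives are exact: yp' = x/3 + 5/18, yp'' = 1/3.
def yp(x):
    return x * x / 6.0 + 5.0 * x / 18.0 + 19.0 / 108.0

def yp1(x):
    return x / 3.0 + 5.0 / 18.0

residuals = [abs((1.0 / 3.0) - 5.0 * yp1(x) + 6.0 * yp(x) - x * x)
             for x in (-1.0, 0.0, 2.0)]
```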
Example: Solve
    y'' + y = xe^{2x}.    (5.9)
Assume yp(x) = e^{2x}(a + bx). Thus
    yp'(x) = e^{2x}(2a + b + 2bx)
and
    yp''(x) = e^{2x}(4a + 4b + 4bx).
Substituting in (5.9), we have:
    5a + 4b = 0,  5b = 1,
thus b = 1/5 and a = −4/25, and a particular solution is given by
    yp(x) = (e^{2x}/25)(5x − 4).
The complementary solution (i.e. the solution of the homogeneous part y'' + y = 0) is
    yc(x) = A1 sin x + A2 cos x.
The general solution of (5.9) is:
    y(x) = A1 sin x + A2 cos x + (e^{2x}/25)(5x − 4).
Example: Solve
    y'' − y = 2e^x.    (5.10)
First solve the homogeneous equation
    y'' − y = 0
to get
    yc(x) = A1 e^x + A2 e^{−x}.
Notice that f(x) = 2e^x is a solution of the homogeneous equation, so a function of the form yp(x) = Ae^x will not be a particular integral. Instead, try yp(x) = Axe^x, so that
    yp'(x) = Ae^x(x + 1)
and
    yp''(x) = Ae^x(x + 2).
Substituting into (5.10) we have:
    Ae^x(x + 2) − Axe^x = 2e^x
    ⇒ 2Ae^x = 2e^x ⇒ A = 1.
Therefore,
    yp(x) = xe^x.
The general solution of (5.10) is
    y(x) = A1 e^x + A2 e^{−x} + xe^x.
    f(x)                    yp(x)
    Pn(x)                   a0 + a1 x + a2 x^2 + ⋯ + an x^n
    Pn(x)e^{ax}             (a0 + a1 x + ⋯ + an x^n)e^{ax}
    Pn(x)e^{ax} sin bx      (a0 + a1 x + ⋯ + an x^n)e^{ax} sin bx + (c0 + c1 x + ⋯ + cn x^n)e^{ax} cos bx
    Pn(x)e^{ax} cos bx      (a0 + a1 x + ⋯ + an x^n)e^{ax} sin bx + (c0 + c1 x + ⋯ + cn x^n)e^{ax} cos bx
Consider the equation
    y'' + ay' + by = f(x),    (5.13)
and assume that we have found two linearly independent solutions y1 and y2 of the homogeneous equation
    y'' + ay' + by = 0.
Thus any particular solution yp(x) of (5.13) must have the property that yp(x)/y1(x) and yp(x)/y2(x) are not constants. This suggests that we replace the constants C1 and C2 in (5.14) by two functions C1(x) and C2(x) and then look for a particular solution of (5.13) in the form:
    yp(x) = C1(x)y1(x) + C2(x)y2(x).    (5.15)
Imposing the condition C1'(x)y1(x) + C2'(x)y2(x) = 0, we then have
    y'(x) = C1(x)y1'(x) + C2(x)y2'(x).
Differentiating again,
    y''(x) = C1(x)y1''(x) + C2(x)y2''(x) + C1'(x)y1'(x) + C2'(x)y2'(x).
Substituting into (5.13),
    y''(x) + ay'(x) + by(x) = C1(x)y1''(x) + C2(x)y2''(x) + C1'(x)y1'(x) + C2'(x)y2'(x)
        + a[C1(x)y1'(x) + C2(x)y2'(x)] + b[C1(x)y1(x) + C2(x)y2(x)]
        = C1(x)[y1''(x) + ay1'(x) + by1(x)] + C2(x)[y2''(x) + ay2'(x) + by2(x)]
        + C1'(x)y1'(x) + C2'(x)y2'(x) = f(x).
Since y1 and y2 are solutions of the homogeneous equation, the equation above reduces to
    C1'(x)y1'(x) + C2'(x)y2'(x) = f(x).
Thus, we have the two conditions on C1(x) and C2(x) described by the simultaneous equations
    y1 C1' + y2 C2' = 0,
    y1' C1' + y2' C2' = f(x).    (5.18)
Multiply the first equation by y2' and the second equation by y2, and subtract to obtain an expression for C1'(x); the function C2'(x) can be determined in a similar way. Solving Equation (5.18), we have
    C1'(x) = −f(x)y2(x)/[y1(x)y2'(x) − y2(x)y1'(x)],
    C2'(x) = f(x)y1(x)/[y1(x)y2'(x) − y2(x)y1'(x)].    (5.19)
The denominators in (5.19) must not be zero. Finally, we integrate
(5.19) to obtain C1 (x) and C2 (x) and substitute these in (5.15) to obtain
yp (x).
We remark that it can be shown that the denominator W(y_1, y_2)(x) =
y_1(x)y_2'(x) - y_2(x)y_1'(x) in (5.19) is nonzero for linearly independent solu-
tions y_1 and y_2 of the homogeneous problem. Details shall be discussed in
the next chapter.
y'' + y = tan x.        (5.20)
The homogeneous equation has the linearly independent solutions y_1(x) = cos x and
y_2(x) = sin x, with Wronskian W(y_1, y_2)(x) = cos^2 x + sin^2 x = 1.
Thus from equation (5.19),
C_1'(x) = -tan x sin x = -sin^2 x / cos x = (cos^2 x - 1)/cos x = cos x - sec x
C_2'(x) = tan x cos x = sin x.
Integrating, we find that C1 (x) = sin x − ln | sec x + tan x| and C2 (x) = − cos x.
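Combining these, y_p(x) = C_1(x) cos x + C_2(x) sin x simplifies to −cos x ln|sec x + tan x|; a numerical sketch (assuming y_1 = cos x and y_2 = sin x as the homogeneous solutions) confirms that it satisfies y'' + y = tan x:

```python
import math

def yp(x):
    # particular solution from variation of parameters:
    # yp = -cos(x) * ln|sec x + tan x|
    return -math.cos(x) * math.log(abs(1/math.cos(x) + math.tan(x)))

def d2(f, x, h=1e-4):
    # central-difference approximation of f''(x)
    return (f(x+h) - 2*f(x) + f(x-h)) / h**2

for x in [0.3, 0.7, 1.0]:          # points inside (0, pi/2)
    assert abs(d2(yp, x) + yp(x) - math.tan(x)) < 1e-4
```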
It can be proved that W (y1 , y2 , · · · , yn )(x) = 0 if and only if the functions
y1 , y2 , · · · yn are linearly dependent solutions of Equation (5.22). We can
also prove that
W(y_1, y_2, ..., y_n) = C e^{-∫ a_1(x)dx}        (5.24)
for some constant C. Formula (5.24) is known as the Abel formula.
Furthermore, we can show that if y_1, y_2, ..., y_n are linearly independent
solutions of (5.22), then every solution y(x) of (5.22) can be written as a
linear combination of y_1, y_2, ..., y_n. To solve the constant-coefficient
equation (5.25), we seek solutions of the form
y = e^{mx}.        (5.26)
Substituting into (5.25) and noting that the kth derivative of e^{mx} is m^k e^{mx},
we have
e^{mx}[m^n + a_1 m^{n-1} + a_2 m^{n-2} + ... + a_{n-1}m + a_n] = 0.
Since the exponential function is never zero on IR, we divide both sides
of the last equation by e^{mx} to obtain the characteristic (auxiliary) equation
m^n + a_1 m^{n-1} + a_2 m^{n-2} + ... + a_{n-1}m + a_n = 0.        (5.27)
If ni = 1 for a given i, then the root mi is called a simple root of (5.27).
Since we are assuming that the coefficients a_i, i = 1, 2, ..., n are real num-
bers, any complex roots of (5.27) will appear in conjugate pairs.
(2) If m_j is a root of multiplicity n_j > 1, then e^{m_j x}, xe^{m_j x}, x^2 e^{m_j x}, ..., x^{n_j - 1}e^{m_j x}
are n_j linearly independent solutions of (5.25).
y''' + 4y'' + y' - 6y = 0.
The characteristic equation is
m^3 + 4m^2 + m - 6 = 0.
Solving, we have:
m1 = 1, m2 = −2, m3 = −3.
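The roots quoted above can be checked directly in the characteristic polynomial:

```python
def p(m):
    # characteristic polynomial m^3 + 4m^2 + m - 6
    return m**3 + 4*m**2 + m - 6

# each claimed root annihilates the polynomial
for root in (1, -2, -3):
    assert p(root) == 0
```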
Therefore, the general solution is
y(x) = C_1 e^x + C_2 e^{-2x} + C_3 e^{-3x}.
We remark that for general higher
order polynomials, the roots can only be approximated.
(m + 4)^4 = 0.
Hence m = -4, and this is a root of multiplicity four, giving the general solu-
tion:
y(x) = C_1 e^{-4x} + C_2 xe^{-4x} + C_3 x^2 e^{-4x} + C_4 x^3 e^{-4x}.
For the nth order non-homogeneous equation we seek a particular solution
y_p(x) = C_1(x)y_1 + ... + C_n(x)y_n and impose the condition
C_1'y_1 + ... + C_n'y_n = 0.
Then,
y_p'(x) = C_1 y_1' + ... + C_n y_n'.
A second differentiation yields:
y_p''(x) = C_1 y_1'' + ... + C_n y_n'' + C_1'y_1' + ... + C_n'y_n'.
We now set:
C_1'y_1' + C_2'y_2' + ... + C_n'y_n' = 0,
so that
y_p''(x) = C_1 y_1'' + ... + C_n y_n''.
Continuing in the same manner, we set
C_1'y_1^(k) + C_2'y_2^(k) + ... + C_n'y_n^(k) = 0,   k = 0, 1, 2, ..., (n-2),
and obtain
y_p^(k)(x) = C_1 y_1^(k) + C_2 y_2^(k) + ... + C_n y_n^(k),   k = 0, 1, 2, ..., (n-1).
Finally, differentiating once more,
y_p^(n) = C_1 y_1^(n) + C_2 y_2^(n) + ... + C_n y_n^(n)
    + C_1'y_1^(n-1) + C_2'y_2^(n-1) + ... + C_n'y_n^(n-1).
We have now obtained all the derivatives up to the nth derivative of yp (x)
and we may substitute these in equation (5.28).
Since y_1, y_2, ..., y_n are solutions of the homogeneous problem (5.28) (i.e.
f(x) = 0) and y_p solves the non-homogeneous equation, we find that
C_1'y_1^(n-1) + C_2'y_2^(n-1) + ... + C_n'y_n^(n-1) = f(x).
The determinant of the system (5.31) is W(y_1, y_2, ..., y_n), which is nonzero
since the functions y_1, y_2, ..., y_n are linearly independent. The system (5.31)
has a unique solution given by Cramer's rule:
C_k' = W_k / W,   k = 1, 2, ..., n,        (5.32)
where W_k is the determinant obtained by replacing the kth column of W
by the transpose of the vector (0, 0, 0, ..., f(x)).
Finally, the functions C1 (x), C2 (x), · · · Cn (x) may be obtained by integra-
tion (if possible).
W_1 = | 0          xe^{-x}        e^{2x}  |
      | 0          (1-x)e^{-x}    2e^{2x} |
      | 3e^{-x}    (x-2)e^{-x}    4e^{2x} |
    = 9x - 3.
By similar calculation,
W_2 = -9,   W_3 = 3e^{-3x}.
Then,
C_1'(x) = W_1/W = x - 1/3,
C_2'(x) = W_2/W = -1,
C_3'(x) = W_3/W = e^{-3x}/3,
and by integration, we have:
C_1 = (1/2)x^2 - (1/3)x,   C_2 = -x,   C_3 = -(1/9)e^{-3x}.
Finally, we obtain
y_p(x) = C_1 y_1 + C_2 y_2 + C_3 y_3 = -e^{-x}(x^2/2 + x/3 + 1/9).
and in general,
y_{n+1} = a^{n+1} y_0,        (5.34)
which is the general solution of (5.33). Comparing Equation (5.33) and
(5.34) with differential equation
y 0 = ay,
y(x) = Ceax .
yn+1 = an yn , (5.35)
y1 = a0 y0 , y2 = a1 y1 = a1 a0 y0
and in general,
y_{n+1} = a_n a_{n-1} ... a_1 a_0 y_0
        = (Π_{k=0}^{n} a_k) y_0.        (5.36)
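The product formula (5.36) is easy to confirm by iterating the recurrence; the coefficients below are hypothetical values chosen purely for illustration:

```python
import math

a = [2.0, 0.5, 3.0, 1.5, 0.25]   # hypothetical coefficients a_0..a_4
y0 = 7.0

# iterate the recurrence y_{n+1} = a_n * y_n
y = y0
for an in a:
    y = an * y

# closed form (5.36): y_{n+1} = (prod of a_k) * y_0
assert abs(y - math.prod(a) * y0) < 1e-12
```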
which corresponds to the differential equation
y'(x) = a(x)y.
Next, consider the difference equation y_{n+1} = y_n + f_n. Then
y_1 = y_0 + f_0,
y_2 = y_1 + f_1 = (y_0 + f_0) + f_1 = y_0 + (f_0 + f_1),
so that
y_{n+1} = y_0 + Σ_{k=0}^{n} f_k.        (5.38)
the correspondence is clear. Note that Π_{k=0}^{n} 1 = 1 corresponds to e^0 = 1.
Finally, consider the general first order linear difference equation
yn+1 = an yn + fn .
Then,
y1 = a0 y0 + f0 ,
y2 = a1 y1 + f1 = a1 · a0 y0 + a1 f0 + f1 .
Generally,
y_{n+1} = (Π_{k=0}^{n} a_k) y_0 + Σ_{k=0}^{n} (Π_{j=k+1}^{n} a_j) f_k.        (5.39)
Thus,
y_20 = (1.1)^{20}(1000) ≅ 6727.
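The compound-growth value can be reproduced numerically:

```python
# value after 20 periods at 10% growth per period: y_20 = (1.1)^20 * 1000
y20 = 1000 * 1.1**20
assert abs(y20 - 6727) < 1   # y_20 ≈ 6727, as stated
```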
For the second part, the equation now becomes:
y_{n+1} - y_n = (1/10)y_n + 30,
or
yn+1 = (1.1)yn + 30,
which by Equation (5.39) has the solution
y_{n+1} = (1.1)^{n+1}(1000) + Σ_{k=0}^{n} (1.1)^{n-k} · 30.
But Σ_{k=0}^{n} (1.1)^{n-k} is the sum of the first n+1 terms of a geometric progression,
so that the solution is given by
y_{n+1} = 1000(1.1)^{n+1} + 30 ( ((1.1)^{n+1} - 1) / (1.1 - 1) ).
Thus,
y_20 ≅ 8445.
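Iterating the recurrence directly and comparing with the closed form is a useful check:

```python
# iterate y_{n+1} = 1.1*y_n + 30 twenty times from y_0 = 1000
y = 1000.0
for n in range(20):
    y = 1.1 * y + 30

# closed form: y_20 = 1000*(1.1)^20 + 30*((1.1)^20 - 1)/0.1
closed = 1000 * 1.1**20 + 30 * (1.1**20 - 1) / 0.1
assert abs(y - closed) < 1e-6
assert abs(y - 8445) < 1        # y_20 ≈ 8445, as stated
```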
x_{n+1} = x_n - F(x_n)/F'(x_n).        (5.43)
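Formula (5.43) is Newton's method. A minimal sketch, applied here to the hypothetical example F(x) = x^2 − 2 (not taken from the text):

```python
def newton(F, dF, x0, tol=1e-12, max_iter=50):
    # Newton iteration x_{n+1} = x_n - F(x_n)/F'(x_n)
    x = x0
    for _ in range(max_iter):
        step = F(x) / dF(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# approximate sqrt(2) as the positive root of F(x) = x^2 - 2
root = newton(lambda x: x*x - 2, lambda x: 2*x, 1.0)
assert abs(root - 2**0.5) < 1e-10
```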
5.9. Second Order Difference Equation: The second order difference
equation is generally of the form:
yn+2 + an yn+1 + bn yn = fn .
Here, we gather together for the benefit of student readers, some past
examination questions on ordinary differential equations administered at
the University of Ibadan degree examinations in the second year of the
four or five year degree program in science, education and engineering.
1(a) State a necessary and sufficient condition for the differential equa-
tion:
M (x, y)dx + N (x, y)dy = 0, (1.1)
to be exact. Examine whether or not the O.D.E.
2 (a) Obtain the differential equation associated with the primitive func-
tion:
y = Aex + B sin x, (2.1)
where A and B are arbitrary constants.
(b) The tangent to a curve is such that the product of its X-intercept
and its Y-intercept vanishes. Express this as an O.D.E. Of what order
and degree is the equation you obtain?
v = y^{1-n}.        (3.2)
Hence or otherwise solve the equation
xy'(x) + y = xy^3.        (3.3)
Write down the form of the solution of (4.2) when the roots of the indicial
equation are :
(α) Real and distinct, i.e. r_1 ≠ r_2.
x^2 y'' - xy' + y = 0.        (4.3)
ay'' + by' + cy = 0        (4.4)
and that
y = U (x)V (x) (4.5)
is a second linearly independent solution of Equation (4.4), show that
V (x) satisfies the differential equation:
V''(x) + (2 U'(x)/U(x) + b/a) V'(x) = 0.        (4.6)
5 (a) Obtain the difference equation satisfied by the primitive equation
U_n = A + B·4^n        (5.1)
Un+1 = 2Un + Vn + 2n
Un+1 = Un + 2Vn+1 + 1
Given that
U0 = 0, V0 = 1. (5.3)
2 (a) Using the data in the table above, estimate the integral
I = ∫_{1.0}^{1.4} f(x)dx.
(b) If in a sequence
Ur = r(r + 1)
obtain an expression in r for
Δ^2 U_r + U_r
Ur+1 − aUr = r
3 (a) Obtain the ordinary differential equation associated with the equa-
tion
y = A sin x + B cos x + Cex (3.1)
where A, B, C are constants.
given that
U0 = 0, U1 = 1; U2 = 5.
(c) Given that y_1(x) and y_2(x) are two linearly independent solutions of
the homogeneous part of the O.D.E.:
d^2y/dx^2 + a dy/dx + by = f(x)        (4.4)
and y = U y1 + V y2 is a particular solution of Equation (4.4). Show that
dU/dx = -y_2(x)f(x)/W(x)
dV/dx = y_1(x)f(x)/W(x),
where W(x) is the Wronskian of the solutions y_1 and y_2 of the homogeneous
equation.
5 (a) (1 + x^2)^2 dy/dx + x(1 + x^2)^3 / 2 = x^2
(b) {(x + 2)^2 + (y - 1)^2 + 2(x + 2)(y - 1)} + (x + 2)^2 dy/dx = 0
(c) y' + ((2x + 1)/x) y = e^{-2x}.
(d) Hence, solve the Riccati equation
dy/dx = A(x)y^2 + B(x)y + C(x).
(i) Show that if A(x) ≡ 0 for all x, then the equation is linear, whereas
it is non-linear when C(x) ≡ 0 for all x.
(ii) Show that if f(x) is any solution of the Riccati equation, then the
transformation
y = f(x) + 1/V(x)
reduces the equation to a linear equation in V .
Hence or otherwise, find another linearly independent solution.
y'' + a_0 y' + a_1 y = 0
on an interval I ⊆ IR. Show that the Wronskian W(y_1, y_2)(x) satisfies the
equation
W' + a_0 W = 0,
and hence, solve the equation for W(x).
(i) y'' + y = tan x
(ii) x d^2y/dx^2 = 2[(y')^2 - y'].
3 (a) Solve the following difference equation
(b) The rate of change of the price P (t) of a commodity is directly pro-
portional to the difference between the demand D(t) and supply S(t) of
the commodity at time t. If the supply and demand functions satisfy the
models:
S(t) = c [1 − sin αt]
and
D(t) = a − bP (t).
(i) Obtain the differential equation satisfied by P (t) (if any).
(ii) Solve the equation so obtained in (i).
(iii) Find the time when the price is maximum.
Show that the Wronskian W (y1 , y2 )(x) satisfies the equation
W 0 + cW = 0
Find the oblique trajectory at an angle π/4 to each of the following curves:
(i) x2 − y 2 = c (ii) x3 = 3(y − c).
(c) The temperature of a liquid in a room of constant temperature 20◦ F
is 70◦ F. After 5 minutes, it is 60◦ F. What will be its temperature after a
further 30 minutes ? After how long will its temperature be 40◦ F ?
Show that the Wronskian solves the homogeneous problem W' + a_0 W = 0.
(b) 2(x^2 + 7x - 8) dy/dx + (6x + 21)y = 3(x + 8)^2 y^{5/3}.
(c) y'' - y' - 6y = e^x cos x;   y(0) = 1, y'(0) = 0.
L d^2Q/dt^2 + R dQ/dt + Q/C = E,
where L, R, C, E are constants, C ≠ 0. Solve the equation for Q(t).
Chapter six
Qualitative Aspects of Ordinary Differential Equations
Linear Dependence, Reduction of Order and Variation of Parameters
6.1. Introduction:
We shall be concentrating on the qualitative study of solutions of second
order ordinary differential equations. For this purpose, we shall develop
some techniques which would be applicable for solving this class of equa-
tions.
Beginning from this chapter, we shall be discussing standard materials
concerning the solutions of linear differential equations typically required
of advanced undergraduate courses in the mathematical sciences, physics,
geology and all engineering fields. We present the following definitions.
is an equation of order 3 and degree 2. However, the equation
(y')^{100} + bxy = 0
is an equation of order 1 and degree 100.
Since Equations (6.3) and (6.4) are second order differential equations,
the solution of either of the equations, if it exists, will involve two arbi-
trary constants which may be determined by imposing two conditions on
the solution. For example, one may specify the values of the dependent
variable y and the derivative y' at some fixed point x_0 in the interval of
solution. These are two conditions on the solution and they are called
initial conditions. Under certain circumstances these initial conditions
could be enough to ensure uniqueness of the solution thus obtained. In
that case, we have achieved existence and uniqueness of the solution of
Equations (6.3) and (6.4). Since equation (6.4) is in general nonlinear, it
may not always be possible to obtain a closed expression for its solution.
If the degree of every term in y and its derivatives is one, the equation is
linear; otherwise it is nonlinear. For example, the equation y'' + y' + xy = 0
is linear because every such term appears to the first degree. The differential
equation
(y'')^2 + by' + xy = 0
has a degree greater than one and is therefore nonlinear. In what follows,
we shall be dealing almost exclusively with second order linear differential
equations.
6.1.3. Definition: The most general second order linear equation has
the form:
P(x)y'' + Q(x)y' + R(x)y = S(x),   x ∈ I ⊆ IR,        (6.5)
where P, Q, R, S are given or known real valued functions on an open
subset I of IR.
In our subsequent discussion, we shall be seeking a solution of equation
(6.5) in an open neighbourhood I ⊆ IR of some fixed point x0 ∈ IR. In this
circumstance, if P (x) 6= 0, ∀x ∈ I, then Equation (6.5) may be recast as
follows:
y'' + q(x)y' + r(x)y = s(x),        (6.6)
by dividing through by P (x). Thus, the new coefficients q, r, s are defined
by
q(x) = Q(x)/P(x),   r(x) = R(x)/P(x),   s(x) = S(x)/P(x).
In what follows, we shall assume the following theorem, which gives suffi-
cient conditions for the existence and uniqueness of a solution of Equation
(6.6).
Remark: (i) We shall not prove the last theorem but we shall employ
its assertion in the subsequent discussions.
(ii) In the development of the theorem, we shall study some techniques
for solving Equation (6.6).
The techniques for solving (6.6) depend on knowing at least one solu-
tion of the homogeneous or complementary equation (6.7). When such a
solution is known, the techniques alluded to allow complete determination
of the general solution of Equation (6.6). For this reason, we shall con-
sider some results concerning the solutions of Equation (6.7).
6.1.5. Definition: By setting d^n/dx^n = D^n, where n is a positive integer,
we may associate with Equation (6.7) the following operator:
L(D) = D^2 + qD + r,
whose domain is C 2 (I) the set of all continuous functions on I which are
twice differentiable, and whose range is C(I), the set of all continuous
functions on I. It is trivial to check that L(D) is a linear operator with
the domain and range as specified.
Equation (6.7) now takes the form:
L(D)y = 0. (6.8)
Remarks: Let G denote the linear space of all solutions of Equation (6.8)
and let y1 and y2 be two linearly independent solutions in G. It is clear
from linear algebra that every solution in G may be expressed as a linear
combination of the solutions y1 and y2 . In other words, every linearly
independent set {y1 , y2 } of solutions from G is a basis for G. This means
that in order to generate all the solutions of Equation (6.8), that is, in
order to generate the linear space G, we need only know two linearly
independent solutions of Equation (6.8). It is therefore natural at this
point to pose the following questions: When are two solutions of Equation
(6.8) linearly independent?. This question is answered by the following
theorem:
6.1.7. Theorem: Suppose that the real valued functions q and r occurring
in (6.8) are continuous on an open subset I ⊆ IR. Let y1 and y2 be two
solutions of Equation (6.8) such that the following holds:
y_1(x)y_2'(x) - y_1'(x)y_2(x) ≠ 0,   ∀x ∈ I.        (6.9)
In order that there are real numbers a1 and a2 such that Equations (6.10)
and (6.11) hold, it must be possible to solve (6.10) and (6.11) for a1 and
a_2. We can rewrite (6.10) and (6.11) in matrix form as follows:
( y_1(x)   y_2(x)  ) (a_1)   (y_0 )
( y_1'(x)  y_2'(x) ) (a_2) = (y_0').
There will be real numbers a_1 and a_2 such that the last vector equation
holds if
| y_1(x_0)   y_2(x_0)  |
| y_1'(x_0)  y_2'(x_0) | = y_1(x_0)y_2'(x_0) - y_1'(x_0)y_2(x_0) ≠ 0.
6.1.9. Theorem: Let q and r belong to the space C(I), I ⊆ IR, and
let y_1 and y_2 be two solutions of Equation (6.8) on the open interval I.
Then either (i) W(y_1, y_2) vanishes identically on I, or (ii) W(y_1, y_2) never
vanishes on I.
Set W(y_1, y_2)(x) = W_12(x) and observe that
W_12' = y_1 y_2'' - y_2 y_1'' = | y_1    y_2  |
                                | y_1''  y_2'' |.        (6.15)
Solving the first order equation (6.16) for W_12, we find that it has the
solution:
W_12(x) = A exp(-∫^x q(t)dt),        (6.17)
Remark: (a) Equation (6.17) gives an expression for the Wronskian of any
two linearly independent solutions of Equation (6.8) up to a multiplica-
tive constant. Furthermore, it follows from (6.17) that the Wronskians
of two sets {y1i , y2i }, i = 1, 2 of linearly independent solutions of equation
(6.8) can only differ by a multiplicative constant.
(b) Equation (6.17) is called the Abel identity because it was first derived
in 1827 by N. H. Abel (1802 - 1829), a Norwegian mathematician.
and
(ii) y_2(x_0) = 0,  y_2'(x_0) = B ≠ 0.
It is readily seen that
W(y_1, y_2)(x_0) = y_1(x_0)y_2'(x_0) - y_1'(x_0)y_2(x_0) = AB ≠ 0.
Hence by the second to the last theorem, the solutions y1 , y2 are linearly
independent. This concludes the proof.
Set
y2 (x) = V (x)y1 (x),
where V is some twice differentiable function. Then y_2 is a solution of
Equation (6.8) if it satisfies that equation. To obtain the condition, i.e.
a constraint on V under which y_2 is a solution of Equation (6.8), we proceed
as follows: We have
y_2'(x) = V'(x)y_1(x) + V(x)y_1'(x)        (6.18)
y_2'' = V''(x)y_1(x) + 2V'(x)y_1'(x) + V(x)y_1''(x).        (6.19)
Substituting y_2 and the expressions in (6.18) and (6.19) above in Equation
(6.8), we obtain:
V(y_1'' + qy_1' + ry_1) + V'(2y_1' + qy_1) + V''y_1 = 0.
where A is an arbitrary constant. Integrating Equation (6.21), we get
V(x) = A ∫^x Φ(t)dt + B,
and we have
W(y_1, y_2)(x) = y_1(x)y_2'(x) - y_1'(x)y_2(x)
  = y_1(x)[V'(x)y_1(x) + V(x)y_1'(x)] - y_1'(x)V(x)y_1(x)
  = Ay_1(x)y_1'(x) ∫^x Φ(t)dt + Ay_1^2(x)Φ(x)
    + By_1(x)y_1'(x) - Ay_1'(x)y_1(x) ∫^x Φ(t)dt - By_1'(x)y_1(x)
  = Ay_1^2(x)Φ(x).
6.2.1. Example: (a) Show that y1 (x) = x−2 is a solution of the equation:
x^2 y'' + 2xy' - 2y = 0,   ∀x ∈ IR, x ≠ 0.
Hence y1 (x) = x−2 is indeed a solution of the given equation.
To find a second solution, set y_2(x) = V(x)x^{-2}. Then,
y_2'(x) = -2V(x)x^{-3} + V'(x)x^{-2}
y_2''(x) = 6V(x)x^{-4} - 2V'(x)x^{-3} + V''(x)x^{-2} - 2V'(x)x^{-3}
         = 6V(x)x^{-4} - 4V'(x)x^{-3} + V''(x)x^{-2}.
Therefore,
V''(x) - 2V'(x)x^{-1} = 0.
Now set V' = Z; then V'' = Z', and the last equation becomes
Z' - 2Zx^{-1} = 0.
That is,
Z'/Z = 2/x.
Integrating, we have
ln Z - ln A = ln x^2.
Therefore,
Z = Ax^2,
where A is a constant. But Z = V', so finally we have
V' = Ax^2
V = (A/3)x^3 + B,
where A and B are constants. Hence the second solution is given by:
y_2(x) = V(x)x^{-2} = (A/3)x + Bx^{-2}.
Taking A = 3 and B = 0, we may take y_2(x) = x.
where A and B are constants. Hence the second solution is given by:
Hence,
W(y_1, y_2)(x) = y_1(x)y_2'(x) - y_1'(x)y_2(x)
  = x^{-2} · 1 - (-2x^{-3})x
  = x^{-2} + 2x^{-2}
  = 3x^{-2}.
Thus, the Wronskian of y_1 and y_2 never vanishes since x ∈ IR\{0} and we
conclude that y1 (x) = x−2 and y2 (x) = x are linearly independent solutions
of the given ODE.
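Both solutions and the Wronskian above can be checked with exact derivatives:

```python
def residual(y, dy, d2y, x):
    # left-hand side of x^2 y'' + 2x y' - 2y = 0
    return x**2 * d2y(x) + 2*x*dy(x) - 2*y(x)

# y1 = x^-2 and y2 = x with their exact derivatives
y1, dy1, d2y1 = lambda x: x**-2, lambda x: -2*x**-3, lambda x: 6*x**-4
y2, dy2, d2y2 = lambda x: x, lambda x: 1.0, lambda x: 0.0

for x in [0.5, 1.0, 3.0]:
    assert abs(residual(y1, dy1, d2y1, x)) < 1e-12
    assert abs(residual(y2, dy2, d2y2, x)) < 1e-12
    W = y1(x)*dy2(x) - dy1(x)*y2(x)   # Wronskian y1*y2' - y1'*y2
    assert abs(W - 3*x**-2) < 1e-12   # W = 3x^-2, never zero for x != 0
```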
6.3. Exercises
In the following problems,
(i) Show in each case that the given function is a solution of the given
equation.
(ii) Find a second linearly independent solution and hence the general
solution.
(iii) Indicate in each case, the interval of validity of the general solu-
tion.
(1) y'' - 4y' - 12y = 0,   y_1(x) = e^{6x}
(2) y'' + 2y' + y = 0,   y_1(x) = e^{-x}
(3) x^2 y'' + 2xy' = 0,   y_1(x) = 1
(4) y'' + 4y' + 4y = 0,   y_1(x) = e^{-2x}
(5) y'' - 7y' + 12y = 0,   y_1(x) = e^{3x}
(6) y'' + 2y' + 5y = 0,   y_1(x) = e^{-x} sin 2x.
(7) (1 - x^2)y'' - 2xy' + 6y = 0,   y_1(x) = 3x^2 - 1.
(8) x^2 y'' + xy' + (x^2 - 1/4)y = 0,   y_1(x) = x^{-1/2} sin x.
6.4.1. Theorem: Let q, r, s be continuous functions. Then the differ-
ence of any two solutions of Equation (6.23) is a solution of its associated
homogeneous equation given by:
L(D)y = y'' + qy' + ry = 0.        (6.24)
Proof: Let y_1 and y_2 be two solutions of Equation (6.23). Then
L(D)y_1 = s(x)
and
L(D)y2 = s(x).
If we let
U = y1 − y2 ,
then
L(D)U = L(D)(y1 − y2 )
= L(D)y1 − L(D)y2
= s(x) − s(x)
= 0.
of equation (6.24), then y + Z is also a particular solution of Equation
(6.23).
Note that if
y'' + qy' + ry = s_1 + s_2 + ... + s_n = Σ_{i=1}^{n} s_i,        (6.26)
then a particular solution y_pi may be found for each of the equations
y'' + qy' + ry = s_i,   i = 1, 2, ..., n.
Thus
y(x) = Σ_{i=1}^{n} y_pi + y_c.
For example, if
s(x) = sin((n + 1/2)x) / (2 sin(x/2)) = 1/2 + cos x + cos 2x + ... + cos nx,
then it is easier to use the expansion and treat one term at a time.
Let y1 and y2 be two linearly independent solutions of Equation (6.24).
Then a general solution of Equation (6.24) is
yc = a1 y1 + a2 y2 ,
where a1 , a2 are arbitrary constants.
That is,
( y_1   y_2  ) (U_1')   (0)
( y_1'  y_2' ) (U_2') = (s).        (6.33)
Solving (6.33) for U_1' and U_2', we have
U_1' = -y_2 s / W(y_1, y_2)
and
U_2' = y_1 s / W(y_1, y_2),        (6.34)
where
W(y_1, y_2) = y_1 y_2' - y_1' y_2 ≠ 0,
is the Wronskian of y1 , y2 . The Wronskian is different from zero since y1
and y2 are linearly independent in the interval I of solution.
Integrating (6.34), we get
U_1(x) = -∫^x (y_2(t)s(t) / W(y_1, y_2)(t)) dt
and
U_2(x) = ∫^x (y_1(t)s(t) / W(y_1, y_2)(t)) dt.
Hence Equation (6.27) becomes
y_p(x) = ∫^x ([y_1(t)y_2(x) - y_1(x)y_2(t)]s(t) / W(y_1, y_2)(t)) dt.        (6.35)
To obtain a particular solution of the non-homogeneous equation, we as-
sume that y_p(x) is of the form:
y_p(x) = U_1(x) sin x + U_2(x) cos x.
Then
y_p'(x) = U_1(x) cos x - U_2(x) sin x + (U_1'(x) sin x + U_2'(x) cos x),
and, after setting U_1'(x) sin x + U_2'(x) cos x = 0,
y_p''(x) = -(U_1(x) + U_2'(x)) sin x + (U_1'(x) - U_2(x)) cos x.
Substituting these expressions for y_p(x), y_p''(x) in the given non-homogeneous
ODE we have
-(U_1(x) + U_2'(x)) sin x + (U_1'(x) - U_2(x)) cos x + U_1(x) sin x + U_2(x) cos x = cot x
or
U_1' cos x - U_2' sin x = cot x.
Combining these conditions, we have:
U_1'(x) sin x + U_2'(x) cos x = 0
U_1'(x) cos x - U_2'(x) sin x = cot x,
or in matrix form:
( sin x    cos x ) (U_1')   (  0   )
( cos x   -sin x ) (U_2') = (cot x).
Solving for U_1' and U_2' by Cramer's rule (the coefficient determinant is
sin x·(-sin x) - cos x·cos x = -1), we get
U_1' = [0·(-sin x) - cos x·cot x]/(-1) = cos x cot x
and
U_2' = [sin x·cot x - 0·cos x]/(-1) = -sin x cot x = -cos x.
Integrating, we have
U_1(x) = ∫^x cos φ cot φ dφ = ∫^x (cos^2 φ / sin φ) dφ
       = ∫^x ((1 - sin^2 φ)/sin φ) dφ
       = ∫^x (1/sin φ - sin φ) dφ.
where a1 , a2 are arbitrary constants.
6.5.2. Example: Prove that a particular solution of the equation xy'' - y' =
f(x) on the interval [1, ∞) is given by
y_p(x) = (1/2) ∫_1^x ((x^2 - t^2)/t^2) f(t)dt.
Solution: Consider first the homogeneous part of the given ODE, that is,
xy'' - y' = 0.
Then
xy'' = y'
and
y''/y' = 1/x.
Hence,
ln(y'/a_1) = ln x.
Therefore,
y'/a_1 = x
or
y' = a_1 x.
Finally we have:
y = (1/2)a_1 x^2 + a_2 = a_1((1/2)x^2) + a_2.
This gives two solutions: y_1(x) = (1/2)x^2 and y_2(x) = 1. Hence, a general
solution of the homogeneous problem is:
y_c = a_1((1/2)x^2) + a_2 · 1.
To obtain a particular solution y_p of the non-homogeneous equation, we
assume that y_p(x) is of the form:
y_p(x) = (1/2)U_1(x)x^2 + U_2(x).
Then
y_p'(x) = (1/2)U_1'(x)x^2 + U_2'(x) + U_1(x)x.
As a first condition on U_1 and U_2, we set
(1/2)U_1'(x)x^2 + U_2'(x) = 0.        (α)
Then y_p'(x) becomes
y_p'(x) = U_1(x)x
and therefore we have
y_p''(x) = U_1'(x)x + U_1(x).
Substituting into xy'' - y' = f(x), we get
U_1'(x)x^2 + U_1(x)x - U_1(x)x = f(x).
Hence, we obtain
U_1'(x)x^2 = f(x)
or
U_1'(x) = f(x)/x^2,
and from Equation (α), we get
U_2'(x) = -(1/2)U_1'(x)x^2 = -(1/2)f(x).
Integrating U_1' and U_2' we get:
U_1(x) = ∫_1^x (f(t)/t^2) dt,   U_2(x) = -(1/2)∫_1^x f(t)dt.
Hence,
y_p(x) = U_1(x)((1/2)x^2) + U_2(x)
  = ∫_1^x (x^2 f(t)/(2t^2)) dt - ∫_1^x (f(t)/2) dt
  = ∫_1^x [x^2 f(t)/(2t^2) - (1/2)f(t)] dt
  = ∫_1^x [x^2/(2t^2) - 1/2] f(t)dt
  = (1/2)∫_1^x ((x^2 - t^2)/t^2) f(t)dt.
This is the required integral representation for yp (x).
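As a sketch, take the hypothetical choice f(t) = t; the integral then evaluates to y_p(x) = (1/2)[x^2 ln x − (x^2 − 1)/2], and one can verify numerically that x y_p'' − y_p' = x:

```python
import math

def yp(x):
    # yp for f(t) = t: (1/2)[x^2 ln x - (x^2 - 1)/2], from the integral formula
    return 0.5 * (x**2 * math.log(x) - (x**2 - 1) / 2)

def d1(f, x, h=1e-5):
    # central-difference first derivative
    return (f(x+h) - f(x-h)) / (2*h)

def d2(f, x, h=1e-4):
    # central-difference second derivative
    return (f(x+h) - 2*f(x) + f(x-h)) / h**2

for x in [1.5, 2.0, 4.0]:
    # residual of x*yp'' - yp' = x should be (numerically) zero
    assert abs(x * d2(yp, x) - d1(yp, x) - x) < 1e-4
```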
6.6. Exercises
(1) What do you understand by an ordinary differential equation and its
solution ?
(2) Let q and r be real valued continuous functions on some open in-
terval I ⊆ IR. Consider the equation
y''(x) + qy'(x) + ry(x) = 0,   x ∈ I.        (β)
(4) Let λ be a nonzero constant. Suppose that f is a real valued contin-
uous function on an open interval I. Prove that a particular solution y_p
of the differential equation:
y'' - λ^2 y = f   on I
is given by
y_p(x) = (1/λ)∫^x [sinh λ(x - t)] f(t)dt,   x ∈ I.
Note: Recall that
sinh ξ = (1/2)(e^ξ - e^{-ξ}).
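A numerical sketch of the formula in problem (4), with the hypothetical choices λ = 1, f ≡ 1 and lower limit 0: the integral then equals cosh x − 1, which indeed satisfies y'' − y = 1:

```python
import math

lam = 1.0

def yp(x, n=2000):
    # trapezoidal evaluation of (1/lam) * ∫_0^x sinh(lam*(x-t)) f(t) dt with f ≡ 1
    # (closed form for this choice is cosh(x) - 1)
    h = x / n
    total = 0.5 * (math.sinh(lam*x) + math.sinh(0.0))
    for k in range(1, n):
        total += math.sinh(lam*(x - k*h))
    return total * h / lam

for x in [0.5, 1.0, 2.0]:
    assert abs(yp(x) - (math.cosh(x) - 1)) < 1e-5
```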
Chapter Seven
In this chapter and the next, we shall study series solutions of the sec-
ond order linear ordinary differential equations. We shall first be con-
cerned with the solutions near ordinary points of the differential equa-
tions while chapter eight is devoted to the series solutions near regular
singular points. These concepts of points shall be defined in what follows.
Recall that the most general linear ordinary differential equation of sec-
ond order is of the form:
P(x) d^2y/dx^2 + Q(x) dy/dx + R(x)y = S(x).        (7.1)
In this chapter, we shall demonstrate how to obtain series solutions of
Equation (7.1) whenever they exist. In the sequel, it will fortunately
suffice to consider the homogeneous ODE:
P(x) d^2y/dx^2 + Q(x) dy/dx + R(x)y = 0,   x ∈ I ⊆ IR        (7.2)
instead of (7.1). We will consider solutions of Equation (7.2) in the neigh-
bourhood of a point x_0. If P(x_0) ≠ 0, then x_0 is called an ordinary
point of Equation (7.2). For example, if P(x) = e^{-x}, then
x_0 = 0 is an ordinary point of Equation (7.2) since P(x_0) = 1. If P(x_0) = 0,
then we shall say that x0 is a singular point of Equation (7.2). In either
case, the method of solution involves expressing y as an infinite series in
powers of x − x0 , where x0 is some specified point. It is therefore conve-
nient to give a cursory review of aspects of the theory of infinite series.
A series may converge for all x or it may converge only for some values of
x. For example,
Σ_{n=0}^{∞} (x - x_0)^n / n! = e^{x-x_0},   ∀x,        (7.4)
that is, the series for the exponential function converges for all x.
The power series Σ_{n=0}^{∞} a_n(x - x_0)^n is said to converge absolutely at a point
x if the series Σ_{n=0}^{∞} |a_n(x - x_0)^n| converges. Absolute convergence implies
convergence but the converse may fail.
Remark (a): A useful test for absolute convergence is the ratio test.
If for a fixed value of x,
lim_{n→∞} |a_{n+1}(x - x_0)^{n+1} / (a_n(x - x_0)^n)| = r,        (7.5)
then the power series Σ_{n=0}^{∞} a_n(x - x_0)^n converges absolutely at x if r < 1 and
diverges if r > 1. If r = 1, then the series may or may not converge.
For example, the power series Σ_{n=0}^{∞} (x - x_0)^n / n! converges absolutely and there-
fore converges. To establish the claim, we employ the ratio test. Thus,
(|x - x_0|^{n+1}/(n + 1)!) / (|x - x_0|^n/n!) = (|x - x_0|^{n+1}/|x - x_0|^n) · (n!/(n + 1)!)
  = |x - x_0|/(n + 1) → 0 as n → ∞.
(b) If the series Σ_{n=0}^{∞} a_n(x - x_0)^n
(c) There is a number ρ, called the radius of convergence, such that
the series Σ_{n=0}^{∞} a_n(x - x_0)^n
If the value of a_n is given by
a_n = f^{(n)}(x_0)/n!,
the series is called the Taylor series of f about x = x_0 (Brook Taylor, 1685
- 1731).
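For example, for f(x) = e^x about x_0 = 0 the Taylor coefficients are a_n = 1/n!, and a truncated sum already reproduces the function closely:

```python
import math

# Taylor coefficients of f(x) = e^x about x0 = 0: a_n = f^(n)(0)/n! = 1/n!
coeffs = [1 / math.factorial(n) for n in range(15)]

x = 0.7
partial_sum = sum(a * x**n for n, a in enumerate(coeffs))
# fifteen terms of the series match e^0.7 to high accuracy
assert abs(partial_sum - math.exp(x)) < 1e-10
```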
(f) If
Σ_{n=0}^{∞} a_n(x - x_0)^n = Σ_{n=0}^{∞} b_n(x - x_0)^n,   ∀x ∈ I,
then
a_n = b_n, for n = 0, 1, 2, ...
In particular, if
Σ_{n=0}^{∞} a_n(x - x_0)^n = 0,   ∀x ∈ I,
then
a_0 = a_1 = ... = a_n = ... = 0.
Definition: A function f which has a Taylor series expansion about x = x_0,
i.e.
f(x) = Σ_{n=0}^{∞} (f^{(n)}(x_0)/n!)(x - x_0)^n,
with radius of convergence ρ > 0 is said to be analytic at the point x = x_0.
Thus, taking (d) above into account, it follows that if f and g are analytic
at x_0, then f ± g, f·g and f/g (provided g(x_0) ≠ 0) are analytic at x = x_0.
Polynomials are analytic at every point, and rational functions are analytic
except at the zeros of their denominators.
In this connection, two questions immediately come to mind.
We know already that if there exists a ρ > 0 such that the series (7.7)
converges for all x satisfying |x − x0 | < ρ, then y is analytic in I = {x :
|x − x0 | < ρ}. Below, we discuss question (ii) much more deeply. In the
meantime, we show by examples how to answer question (i).
Substituting (7.9), (7.10) and (7.11) in Equation (7.8), we obtain:
Σ_{n=0}^{∞} (n + 2)(n + 1)a_{n+2}x^n - Σ_{n=0}^{∞} 2(n + 1)a_{n+1}x^{n+1} + Σ_{n=0}^{∞} λa_n x^n = 0.
Hence, we have
(i) a_2 = -(λ/2)a_0 and (ii) a_{n+2} = ((2n - λ)/((n + 2)(n + 1)))a_n, n ≥ 1. In fact, (i) and (ii)
can be combined into the following:
a_{n+2} = ((2n - λ)/((n + 2)(n + 1)))a_n,   n ≥ 0.        (7.12)
Equation (7.12) is called a recurrence relation. Substituting various val-
ues of n in Equation (7.12), we get
a_2 = -(λ/2)a_0
a_3 = ((2 - λ)/(2·3))a_1
a_4 = ((4 - λ)/(3·4))a_2 = -((4 - λ)λ/(2·3·4))a_0
a_5 = ((6 - λ)/(4·5))a_3 = ((6 - λ)(2 - λ)/(2·3·4·5))a_1.
Thus, we see that a0 and a1 are arbitrary. By grouping together odd
and even terms separately, the formal series solution of the Hermite dif-
ferential equation is therefore given by:
" #
λ (4 − λ)λ 4 (8 − λ)(4 − λ)λ 6
y = a0 1 − x 2 − x − x + ···
2! 4! 6!
" #
2 − λ 3 (6 − λ)(2 − λ) 5 (10 − λ)(6 − λ)(2 − λ) 7
+a1 x + x + x + x + ···
3! 5! 7!
= a0 y1 (x) + a1 y2 (x).
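The recurrence (7.12) can be used directly to build truncated series solutions and check them against the Hermite equation y'' − 2xy' + λy = 0 by finite differences (a sketch, with an arbitrarily chosen λ):

```python
def hermite_series(x, lam, a0=1.0, a1=1.0, N=40):
    # build coefficients from a_{n+2} = (2n - lam) a_n / ((n+2)(n+1))
    a = [0.0] * (N + 2)
    a[0], a[1] = a0, a1
    for n in range(N):
        a[n + 2] = (2*n - lam) * a[n] / ((n + 2) * (n + 1))
    return sum(a[n] * x**n for n in range(N + 2))

def d1(f, x, h=1e-5):
    return (f(x+h) - f(x-h)) / (2*h)

def d2(f, x, h=1e-4):
    return (f(x+h) - 2*f(x) + f(x-h)) / h**2

lam = 3.0
f = lambda x: hermite_series(x, lam)
for x in [0.0, 0.5, 1.0]:
    # residual of y'' - 2x y' + lam*y = 0 should be near zero
    residual = d2(f, x) - 2*x*d1(f, x) + lam*f(x)
    assert abs(residual) < 1e-3
```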
Remark: (i) The series defining y1 (x) and y2 (x) respectively converge for
all x.
(ii) Notice that if λ is a nonnegative even integer, then one or the other
of the series y_1(x) and y_2(x) terminates, giving a polynomial solution. The
polynomial solution corresponding to λ = 2k is known as the Hermite poly-
nomial H_k(x) of degree k.
7.2.2. Example: The Legendre Equation
Solve the Legendre equation given by:
00 0
(1 − x2 )y − 2xy + α(α + 1)y = 0. (7.13)
Solution: Here the singular points of the differential equation are ±1 since
P (±1) = 0. Every other point is an ordinary point.
Taking x0 = 0 as the ordinary point about which we want a solution, then
we may assume a solution of the form (7.9), that is,
y = Σ_{n=0}^{∞} a_n x^n.
n=0
That is,
∞
[(n + 2)(n + 1)an+2 + α(α + 1)an ] xn +
X
n=0
∞ ∞
n+2
2(n + 1)an+1 xn+1 = 0.
X X
− (n + 2)(n + 1)an+2 x −
n=0 n=0
n=0
∞ ∞
(n − 1)nan xn + 2nan xn
X X
=
n=2 n=1
∞
[(n − 1)n + 2n] an xn
X
=
n=0
∞
n(n + 1)an xn .
X
= (7.14)
n=0
Since
n(n + 1) − α(α + 1) = −(α − n)(n + α + 1),
then from the previous expression
a_{n+2} = -((α - n)(α + n + 1)/((n + 1)(n + 2)))a_n.        (7.15)
Using (7.15) in (7.9) we get:
" #
α(α + 1) 2 α(α + 1)(α − 2)(α + 3) 4
y(x) = a0 1 − x + x + ···
2! 4!
" #
(α − 1)(α + 2) 3 (α − 1)(α + 2)(α − 3)(α + 4) 5
+a1 x − x + x + ···
3! 5!
= a0 y1 (x) + a2 y2 (x).
The two series occurring on the right hand side above converge for |x| < 1.
We remark that polynomial solutions of the Legendre equation may be
obtained if α is chosen to be a nonnegative integer. The polynomials obtained
in this way are called Legendre polynomials and are denoted by P_k(x), x ∈
(-1, 1), k = 0, 1, 2, ...
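For instance, with α = 2 the recurrence (7.15) terminates the even series after the x^2 term, reproducing a multiple of P_2(x) = (3x^2 − 1)/2:

```python
def legendre_even_coeffs(alpha, a0=1.0, N=10):
    # even coefficients from a_{n+2} = -(alpha-n)(alpha+n+1) a_n / ((n+1)(n+2))
    a = {0: a0}
    n = 0
    while n + 2 <= N:
        a[n + 2] = -(alpha - n) * (alpha + n + 1) * a[n] / ((n + 1) * (n + 2))
        n += 2
    return a

a = legendre_even_coeffs(alpha=2)
# the series terminates: y1(x) = 1 - 3x^2, proportional to P_2(x)
assert a[2] == -3.0
assert all(a[n] == 0.0 for n in a if n >= 4)
```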
7.3.1. Example: Let I = [−π, π]. Introduce the inner product < ·, · >
defined by
<f_1, f_2> = ∫_{-π}^{π} f_1(x)f_2(x)dx.
Then we claim that the functions f_1(x) = sin nx and f_2(x) = cos mx, where
n and m are arbitrary integers, are orthogonal relative to the given inner
product. To see this, notice that
<f_1, f_2> = ∫_{-π}^{π} f_1(x)f_2(x)dx = ∫_{-π}^{π} sin nx cos mx dx.
Since
(1/2)[sin(n + m)x + sin(n - m)x] = sin nx cos mx,
then (taking n ≠ ±m; the remaining cases are similar)
<f_1, f_2> = (1/2)∫_{-π}^{π} [sin(n + m)x + sin(n - m)x] dx
  = (1/2)[-(1/(n + m))cos(n + m)x - (1/(n - m))cos(n - m)x]_{-π}^{π}
  = (1/2)[-(1/(n + m))cos(n + m)π - (1/(n - m))cos(n - m)π
    + (1/(n + m))cos(n + m)(-π) + (1/(n - m))cos(n - m)(-π)] = 0.
So f1 (x) and f2 (x) are orthogonal.
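The orthogonality claim is easy to confirm by numerical quadrature, e.g. for n = 3, m = 2:

```python
import math

def inner(f, g, n=4000):
    # trapezoidal approximation of ∫_{-pi}^{pi} f(x) g(x) dx
    h = 2*math.pi / n
    total = 0.5*(f(-math.pi)*g(-math.pi) + f(math.pi)*g(math.pi))
    for k in range(1, n):
        x = -math.pi + k*h
        total += f(x)*g(x)
    return total * h

val = inner(lambda x: math.sin(3*x), lambda x: math.cos(2*x))
assert abs(val) < 1e-8   # sin 3x and cos 2x are orthogonal on [-pi, pi]
```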
To verify this assertion, notice that P_k satisfies Equation (7.13) for each k.
The Legendre equation (7.13), given by
(1 - x^2)y'' - 2xy' + α(α + 1)y = 0,
may be written as
d/dx[(1 - x^2)y'] + α(α + 1)y = 0.
Hence,
d/dx[(1 - x^2)P_k'(x)] + k(k + 1)P_k(x) = 0,        (7.16)
and similarly for P_l(x) we obtain
d/dx[(1 - x^2)P_l'(x)] + l(l + 1)P_l(x) = 0.        (7.17)
Now multiply Equation (7.17) by P_k(x) and (7.16) by P_l(x) and subtract to
obtain
P_k(x) d/dx[(1 - x^2)P_l'(x)] - P_l(x) d/dx[(1 - x^2)P_k'(x)]
  + [l(l + 1) - k(k + 1)]P_k(x)P_l(x) = 0.        (7.18)
The first two terms of Equation (7.18) may be written as
d/dx[(1 - x^2)(P_k(x)P_l'(x) - P_l(x)P_k'(x))].        (7.19)
Hence,
d/dx[(1 - x^2)(P_k(x)P_l'(x) - P_l(x)P_k'(x))]
  + [l(l + 1) - k(k + 1)]P_k(x)P_l(x) = 0.        (7.20)
Integrating equation (7.20) from -1 to +1, we obtain
[(1 - x^2)(P_k(x)P_l'(x) - P_l(x)P_k'(x))]_{-1}^{+1}
  + [l(l + 1) - k(k + 1)] ∫_{-1}^{1} P_k(x)P_l(x)dx = 0.
The boundary term vanishes because 1 - x^2 = 0 at x = ±1.
But the square bracket [l(l + 1) - k(k + 1)] is not zero unless k = l. Therefore
the integral must be zero for k ≠ l. That is,
∫_{-1}^{1} P_k(x)P_l(x)dx = 0,   k ≠ l.
convergent for all x such that |x - x_0| < ρ, ρ > 0, and we obtained the
expansion coefficients {a_n}_{n=0}^{∞} by substituting (7.22) in (7.21).
To justify this process, we must show that we can determine Φ^{(n)}(x_0), n =
0, 1, 2, ..., from (7.21), for we have from (7.22) that
To determine Φ^{(n)}(x_0) and a_n, n = 0, 1, 2, ..., we use (7.21) as follows:
Φ''(x) = -p(x)Φ'(x) - q(x)Φ(x),
where
p(x) = Q(x)/P(x)   and   q(x) = R(x)/P(x).
Hence,
Φ''(x_0) = -p(x_0)Φ'(x_0) - q(x_0)Φ(x_0),
giving
2!a_2 = -p(x_0)a_1 - q(x_0)a_0.
The coefficient a_2 is thus determined in terms of a_0 and a_1.
To determine a3 , differentiate (7.25) and evaluate at x = x0 to obtain
Φ'''(x_0) = 3!a_3 = -2!p(x_0)a_2 - [p'(x_0) + q(x_0)]a_1 - q'(x_0)a_0.
Now we shall determine the series solution of the problem in the form:
y = Φ(x) = Σ_{n=0}^{∞} a_n(x - 1)^n,
where a0 and a1 are arbitrary constants and y1 and y2 are linearly indepen-
dent series solutions which are analytic at x0 . Furthermore, the radius of
convergence for each of the series solution y1 and y2 is at least as large as
the minimum of the radii of convergence of the series p and q.
Note: If p(x) = Σ_{n=0}^{∞} p_n x^n has radius of convergence ρ_1 and q(x) = Σ_{n=0}^{∞} q_n x^n
has radius of convergence ρ_2, then y(x) has radius of convergence satisfy-
ing:
ρ ≥ min(ρ_1, ρ_2).
For example, if
y'' + e^x y' + (sin x)y = 0,
then
p(x) = e^x = Σ_{n=0}^{∞} x^n/n!
q(x) = sin x = Σ_{n=0}^{∞} (-1)^n x^{2n+1}/(2n + 1)!.
So that
ρ1 = ∞, ρ2 = ∞.
Hence, by the last theorem,
ρ = ∞, and the series solutions converge for all x.
Remark: (i) We shall not offer a proof of the last theorem because it is
beyond the scope of our discussion.
(ii) In order to obtain the minimum of the radii of convergence for the
series of p and q, we may compute the relevant series and then apply one
of the tests for infinite series. When P , Q and R are polynomials, there
is an easier test. In complex analysis, it is known that a rational function p = Q/P will have a convergent power series expansion about a point x = x0 if P(x0) ≠ 0. Furthermore, assuming that any factor common to Q and P has been canceled, the radius of convergence of the power series for Q/P about the point x0 is precisely the distance from x0 to the nearest zero of P. In computing this distance, it must be noted that P(x) = 0 may have complex roots, and such roots must be taken into account.
(i) For instance, for (1 + x²)^{−1} about x0 = 0, the zeros of P(x) = 1 + x² are x = ±i, and the distance from x0 = 0 to these zeros is

|0 + i0 − (0 ± i)| = |∓i| = 1.

Hence the radius of convergence of the power series expansion of (1 + x²)^{−1} about x0 = 0 is ρ = 1.
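This radius is easy to observe numerically. The sketch below (ours, not from the notes) sums the expansion 1/(1 + x²) = Σ (−1)^n x^{2n}: partial sums converge inside |x| < 1 and blow up outside.

```python
# Partial sums of 1/(1+x^2) = sum_{n>=0} (-1)^n x^(2n), valid exactly for |x| < 1.

def partial_sum(x, terms):
    return sum((-1)**n * x**(2*n) for n in range(terms))

inside = abs(partial_sum(0.5, 60) - 1/(1 + 0.5**2))   # tiny: series converges at x = 0.5
outside = abs(partial_sum(1.5, 50))                   # huge: series diverges at x = 1.5
```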
(ii) Determine a lower bound for the radius of convergence of the series solutions about x0 = 0 for the Legendre equation

(1 − x²)y'' − 2xy' + α(α + 1)y = 0,  α ∈ IR.

Solution: Here P(x) = 1 − x², Q(x) = −2x and R(x) = α(α + 1) are all polynomials.
P(x) = 0 if and only if x = ±1, and hence the distance from x0 = 0 to x = ±1 is 1.
Consequently a series solution of the Legendre equation will converge for at least |x| < 1, and possibly for larger values of x.
7.5. Exercises
(1) Let P, Q, R be real valued functions on some open interval I ⊂ IR.
Consider the differential equation
P(x)y'' + Q(x)y' + R(x)y = 0, on I.
Using these terms, determine the nature of the point x0 ∈ I in each of the
following cases: (a) P (x) = (1 − e(x−x0 ) )(x − x0 ), Q(x) = 4, R(x) = (x − x0 )2 .
(b) P (x) = 1 + x − x0 , Q(x) = ex , R(x) = 4.
(c) P (x) = (x − x0 )2 , Q(x) = 4(x − x0 ), R(x) = 3.
Let x − x0 = t; then x = t + x0 and d/dx = (dt/dx)·(d/dt) = d/dt, since dt/dx = 1. Thus Equation (1) becomes

d²y(t)/dt² + q(t + x0) dy(t)/dt + r(t + x0) y(t) = 0,

and the series representation (2) becomes

y(t) = Σ_{n=0}^{∞} a_n t^n.
Justify your answer. If you assert that no solution of (1) has a series
development of the form (2), then write down the correct series repre-
sentation if any exist.
(b) Show that the series expansion about the origin of the two linearly
independent solutions of Equation (1) is of the form:
y = a0 y1(x) + a1 y2(x)
  = a0 (1 − x²/2! + x³/3! + ···) + a1 (x − x²/2! − x³/3! − ···).
Chapter Eight
8.1. Introduction
(ii) both (x − x0) Q(x)/P(x) and (x − x0)² R(x)/P(x) are analytic at x = x0.
8.1.1. Example: Determine the singular points of the equation:

x²(1 − x²)y'' + (2/x)y' + 4y = 0,   (8.2)
and classify them as regular or irregular.
Solution: Here we have P(x) = x²(1 − x²), Q(x) = 2/x and R(x) = 4. The singular points occur when

P(x) = 0.

Thus,

x = 0, ±1.

Next, we have:

Q(x)/P(x) = 2/(x³(1 − x²))

and

R(x)/P(x) = 4/(x²(1 − x²)).

Consider the three singular points: (i) when x = 0, then

x · Q(x)/P(x) = x · 2/(x³(1 − x²)) = 2/(x²(1 − x²)),

which is not analytic at x = 0. Hence x0 = 0 is an irregular singular point of equation (8.2).
(ii) When x = 1, then

(x − 1) Q(x)/P(x) = (x − 1) · 2/(x³(1 − x)(1 + x)) = −2/(x³(1 + x))

and

(x − 1)² R(x)/P(x) = 4(x − 1)²/(x²(1 − x²)) = 4(1 − x)²/(x²(1 + x)(1 − x)) = 4(1 − x)/(x²(1 + x)).

Since −2/(x³(1 + x)) and 4(1 − x)/(x²(1 + x)) are analytic at x = 1, we conclude that x0 = 1 is a regular singular point of equation (8.2).
(iii) When x = −1, then

(x + 1) Q(x)/P(x) = 2/(x³(1 − x))  and  (x + 1)² R(x)/P(x) = 4(1 + x)/(x²(1 − x)).

Since 2/(x³(1 − x)) and 4(1 + x)/(x²(1 − x)) are analytic at x = −1, we conclude that x0 = −1 is a regular singular point of equation (8.2).
y = a0 y1 (x) + a1 y2 (x).
This follows from the fact that any point in such an interval would then be
an ordinary point of (8.3). We first consider the interval I+ = {x ∈ IR : x >
0}. Later we shall extend our results to the interval I− = {x ∈ IR : x < 0}.
For any r ∈ IR, notice that (x^r)' = r x^{r−1} and (x^r)'' = r(r − 1) x^{r−2}. Hence, assuming that a solution of Equation (8.3) is of the form

y = x^r,   (8.5)

then we have

x^r F(r) = 0,

where

F(r) = r(r − 1) + αr + β.

Since x ≠ 0 in I+, we have that

F(r) = 0,   (8.6)

with roots

r1 = (1/2)[ −(α − 1) + √((α − 1)² − 4β) ]   (8.7)

and

r2 = (1/2)[ −(α − 1) − √((α − 1)² − 4β) ],   (8.8)

and hence (8.6) may be written as follows:

F(r) = (r − r1)(r − r2) = 0.

Just as in the case of the second order linear ordinary differential equation with constant coefficients, we must examine separately the following cases:
(i) (α − 1)² − 4β > 0,
(ii) (α − 1)² − 4β = 0,
(iii) (α − 1)² − 4β < 0.
Case (i): (α − 1)² − 4β > 0. Here Equations (8.7) and (8.8) give two real distinct roots. Since the Wronskian of the solutions y1(x) = x^{r1} and y2(x) = x^{r2} is non-vanishing for r1 ≠ r2 and x > 0, it follows that the general solution of Equation (8.3) is

y(x) = a0 x^{r1} + a1 x^{r2},  x > 0,

where for x > 0 we interpret x^r = e^{r ln x}.
Case (ii): (α − 1)² − 4β = 0. Here r1 = r2, so we obtain at first only the one solution

y1(x) = x^{r1},

and

F(r) = (r − r1)².
Consider now

∂/∂r L[x^r].

Then

∂/∂r L[x^r] = ∂/∂r (x^r F(r)) = x^r F'(r) + F(r) x^r ln x.

Hence, since F(r) = (r − r1)² and F'(r) = 2(r − r1), we have

∂/∂r L[x^r] = L[∂/∂r x^r] = L[x^r ln x] = 2(r − r1) x^r + (r − r1)² x^r ln x.   (8.10)

The right hand side of (8.10) vanishes for r = r1. Hence

y2(x) = x^{r1} ln x,  x > 0,

is the required second solution of Equation (8.3). Since the functions x^{r1} and x^{r1} ln x are linearly independent for x > 0, the general solution of (8.3) is

y(x) = (a0 + a1 ln x) x^{r1},  x > 0.
Case (iii): (α − 1)² − 4β < 0. Here the roots r1 and r2 are complex and moreover they are complex conjugates. Thus if r1 = λ + iµ then r2 = λ − iµ.
For complex r we define x^r by

x^r = e^{r ln x},  x > 0.

Thus

x^{r1,2} = x^{λ±iµ} = e^{(λ±iµ) ln x} = e^{λ ln x} e^{±iµ ln x} = e^{λ ln x} (cos(µ ln x) ± i sin(µ ln x)).

Then the general solution of Equation (8.3) is given by

y(x) = C0 x^{λ+iµ} + C1 x^{λ−iµ}.

We observe that the real and imaginary parts of x^{λ+iµ}, namely x^λ cos(µ ln x) and x^λ sin(µ ln x), are independent solutions of Equation (8.3). Hence, for complex roots of the equation F(r) = 0, the general solution of Equation (8.3) is:

y(x) = x^λ [ a0 cos(µ ln x) + a1 sin(µ ln x) ],  x > 0.
Finally, we consider the solution of (8.3) in the interval IR− = {x : x < 0}.
The solutions of the Euler equation given above for IR+ = {x : x > 0} can
be shown to be valid for x < 0, but they will in general be complex valued.
To obtain real valued solutions of Euler equation (8.3) in the interval IR−
we may make the following transformation
x = −ξ.
Then Equation (8.3) becomes

ξ² d²y/dξ² + α(−ξ)(−dy/dξ) + βy = 0,   (8.11)

or

ξ² d²y/dξ² + αξ dy/dξ + βy = 0,  ξ > 0.   (8.12)
But we have already solved Equation (8.12) above since Equation (8.12)
is essentially Equation (8.3), hence the solutions of (8.12) will be given
as in cases (i), (ii) and (iii) above but with x replaced by ξ. Since ξ = −x and

|x| = x for x > 0,  |x| = −x for x < 0,

it follows that we need only replace x by |x| in the solution of (8.3) given under cases (i), (ii) and (iii) in any interval not containing the origin.
The following theorem has been established.
F(r) = r² + (α − 1)r + β = 0,

y = a0 |x|^{r1} + a1 |x|^{r2}.
Setting

y(x) = (x − x0)^r

and

x − x0 = t,

we have

d/d(x − x0) = [dt/d(x − x0)] · d/dt = d/dt;

then, Equation (8.13) becomes

t² d²y/dt² + αt dy/dt + βy = 0.
Therefore, the change of variable reduces Equation (8.13) to the form of
Equation (8.3) considered above and can be solved by putting y = tr as
before.
8.2.2. Examples: (1). Solve the differential equation:

x²y'' + 3xy' − y = 0.

Suppose that y = x^r is a solution; then

r(r − 1)x^r + 3r x^r − x^r = 0.

That is,

[r(r − 1) + 3r − 1] x^r = 0.

Hence on the interval (−1, 1)\{0} it follows that (since x ≠ 0)

r(r − 1) + 3r − 1 = 0

or

r² + 2r − 1 = 0.

Thus,

r = (−2 ± √(4 + 4))/2 = (−2 ± √8)/2 = −1 ± √2.

Here,

r1 = −1 + √2,  r2 = −1 − √2.

We notice that the roots r1 and r2 are real and distinct. Hence the general solution of the given equation on the interval (−1, 1)\{0} = (−1, 0) ∪ (0, 1) is

y = a0 |x|^{−1+√2} + a1 |x|^{−1−√2}.
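A quick check (ours, not in the notes): the roots satisfy the indicial polynomial exactly, and x^r solves the equation identically, as the exact power-rule derivatives (x^r)' = r x^{r−1} and (x^r)'' = r(r − 1) x^{r−2} show.

```python
# Verify the indicial roots r = -1 +/- sqrt(2), and that y = x^r solves
# x^2 y'' + 3x y' - y = 0 for x > 0, using the exact power-rule derivatives.
import math

def residual(r, x):
    y = x**r
    y1 = r*x**(r - 1)              # y'
    y2 = r*(r - 1)*x**(r - 2)      # y''
    return x*x*y2 + 3*x*y1 - y

r1 = -1 + math.sqrt(2)
r2 = -1 - math.sqrt(2)
```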
(ii) the solution for this case is

y(x) = a0 (−x)^{−1+√2} + a1 (−x)^{−1−√2};

this follows since the interval is on the negative segment of the real line.
(2). Consider the equation x²y'' + xy' + y = 0. Supposing that y = x^r is a solution, then

r(r − 1)x^r + r x^r + x^r = 0,

and this implies that

r(r − 1) + r + 1 = 0,

that is,

r² + 1 = 0.

Hence,

r = ±i.
We notice that the roots r1 = +i and r2 = −i are complex conjugates. Recall that for the general forms r1 = λ + iµ and r2 = λ − iµ, the general solution is

y(x) = |x|^λ [ a0 cos(µ ln |x|) + a1 sin(µ ln |x|) ];

here λ = 0, µ = 1. Hence the solution of the given ODE on the interval (−5, 1/2)\{0} = (−5, 0) ∪ (0, 1/2) is

y(x) = a0 cos(ln |x|) + a1 sin(ln |x|).

On the interval (1/2, 10) ⊆ IR, we have the solution given by:

y(x) = a0 cos(ln x) + a1 sin(ln x).
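That cos(ln x) really solves the equation can be confirmed numerically; the Python sketch below (ours) uses central finite differences for the derivatives.

```python
# Finite-difference check that y(x) = cos(ln x) solves x^2 y'' + x y' + y = 0
# on x > 0 (indicial roots r = +/- i, i.e. lambda = 0, mu = 1).
import math

def y(x):
    return math.cos(math.log(x))

def residual(x, h=1e-5):
    d1 = (y(x + h) - y(x - h))/(2*h)              # central first derivative
    d2 = (y(x + h) - 2*y(x) + y(x - h))/h**2      # central second derivative
    return x*x*d2 + x*d1 + y(x)
```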
(3). For the equation x²y'' + xy' − y = 0, putting y = x^r gives

r(r − 1) + r − 1 = 0,

that is,

r² − 1 = 0.

Thus

r = ±1.

Hence the solution of the given equation in the interval (−10, 10)\{0} is

y(x) = a0 (−x) + a1 (−x)^{−1} = C0 x + C1/x.
(4). To solve the equation x²y'' + 3xy' + y = 0 on the interval (i) (−1, 1)\{0} = (−1, 0) ∪ (0, 1) we proceed as follows: Suppose that y = x^r is a solution of the given equation. Then

r(r − 1)x^r + 3r x^r + x^r = 0.

Hence,

r(r − 1) + 3r + 1 = 0,

that is,

r² + 2r + 1 = 0.

Thus

r = (−2 ± √(4 − 4))/2 = −2/2 = −1.

That is,

r1 = −1 and r2 = −1.

We notice that the roots r1 and r2 are real and equal, both being −1. Hence the solution of the given equation on the interval (−1, 1)\{0} is

y(x) = (a0 + a1 ln |x|) |x|^{−1}.
Suppose that we seek a solution of (8.14) in the neighborhood of a regular
singular point x = x0 . For simplicity, we take x0 = 0 in what follows. Then
since x0 = 0 is a regular singular point, we have that
xp(x) = x Q(x)/P(x)

and

x² q(x) = x² R(x)/P(x)

are analytic functions in the neighborhood of x = 0. Thus, the functions admit convergent power series expansions of the form:

xp(x) = Σ_{n=0}^{∞} p_n x^n,

x² q(x) = Σ_{n=0}^{∞} q_n x^n;   (8.15)

the last equality holds by expanding the products of infinite series involved, and we have put

A_n = Σ_{k=0}^{n} (r + k) a_k p_{n−k}

and

B_n = Σ_{k=0}^{n} a_k q_{n−k}.
Hence,

Σ_{n=0}^{∞} (n + r)(n + r − 1) a_n x^{n+r} + Σ_{n=0}^{∞} A_n x^{n+r} + Σ_{n=0}^{∞} B_n x^{n+r} = 0,

or

Σ_{n=0}^{∞} [ (n + r)(n + r − 1) a_n + A_n + B_n ] x^{n+r} = 0,

or

Σ_{n=0}^{∞} [ (n + r)(n + r − 1) a_n + Σ_{k=0}^{n} ((r + k) p_{n−k} + q_{n−k}) a_k ] x^{n+r} = 0.

Finally,

Σ_{n=0}^{∞} [ { (n + r)(n + r − 1) + (n + r) p0 + q0 } a_n + Σ_{k=0}^{n−1} ((r + k) p_{n−k} + q_{n−k}) a_k ] x^{n+r} = 0.

Let

F(r) = r(r − 1) + p0 r + q0;

then

F(r + n) = (r + n)(r + n − 1) + p0 (r + n) + q0.

In terms of the function r → F(r), the last equation may be written as follows:

a0 F(r) x^r + Σ_{n=1}^{∞} { F(r + n) a_n + Σ_{k=0}^{n−1} a_k [ (r + k) p_{n−k} + q_{n−k} ] } x^{n+r} = 0.   (8.17)

Setting the coefficient of x^r in (8.17) to zero (a0 ≠ 0) gives the equation

F(r) = r(r − 1) + p0 r + q0 = 0.   (8.18)
Equation (8.18) is called the indicial equation and it gives the values of r for which (8.16) is a solution of (8.14).
Let r1 and r2 be the roots of (8.18); in what follows, if these roots are real, we shall suppose that r1 ≥ r2. Next, setting the coefficient of x^{n+r} in (8.17) equal to zero, we get

F(r + n) a_n + Σ_{k=0}^{n−1} a_k [ (r + k) p_{n−k} + q_{n−k} ] = 0,  n = 1, 2, 3, ···.   (8.19)
Equation (8.19) shows that in general a_n depends on all the earlier coefficients a0, a1, a2, ···, a_{n−1}. It shows too that we can successively compute a1, a2, ···, a_n in terms of the coefficients in the series expansions for xp(x) and x²q(x) provided that

F(r + 1), F(r + 2), ···, F(r + n)

are all different from zero.
For x < 0, we obtain real valued solutions of (8.14) by making the substi-
tution x = −ξ, ξ > 0 as in the Euler Equation. And it turns out that again
that we need only replace xr1 and xr2 in (8.20) and (8.21) by |x|r1 and |x|r2 .
Case of Equal Roots
If r1 = r2 , we obtain apparently only one solution of the form (8.16). The
procedure for determining the second solution is explained below.
(i) Note that p0 = lim_{x→0} (x p(x)) and

q0 = lim_{x→0} (x² q(x)).
(ii) In practice and especially when P, Q, R are polynomials, it is usu-
ally more convenient to substitute the series (8.16) in (8.14), deter-
mine the values of r1 and r2 and then explicitly compute the coefficients
an , n = 0, 1, 2 · · ·. We summarize the discussion by a theorem which fol-
lows:
and

x² q(x) = Σ_{n=0}^{∞} q_n x^n

for |x| ≤ ρ, where ρ is some positive number.
Let r1 and r2 be the roots of the indicial equation:
F (r) = r(r − 1) + p0 r + q0 = 0
with r1 ≥ r2 if r1 and r2 are real and distinct. Then in either of the
intervals −ρ < x < 0 and 0 < x < ρ, there exists a solution of the form:
y1(x) = |x|^{r1} [ 1 + Σ_{n=1}^{∞} a_n(r1) x^n ],   (8.22)

where {a_n(r1)} are given by the recurrence relation (8.19) with a0 = 1 and r = r1. If r1 − r2 is not zero or a positive integer, then in either of the intervals −ρ < x < 0 and 0 < x < ρ, there exists a second linearly independent solution of the form:

y2(x) = |x|^{r2} [ 1 + Σ_{n=1}^{∞} a_n(r2) x^n ].   (8.23)

The coefficients {a_n(r2)} are also determined by means of the recurrence relation (8.19) with a0 = 1 and r = r2. The power series in (8.22) and (8.23) converge for |x| < ρ.
Example: Consider the equation 2x(1 + x)y'' + (3 + x)y' − xy = 0.
Solution: Here P(x) = 2x(1 + x), Q(x) = 3 + x and R(x) = −x. The point x = 0 is a regular singular point since

lim_{x→0} x Q(x)/P(x) = lim_{x→0} x(3 + x)/(2x(1 + x)) = 3/2

and

lim_{x→0} x² R(x)/P(x) = lim_{x→0} x²(−x)/(2x(1 + x)) = 0.

Hence, we have

p0 = 3/2 and q0 = 0.

Thus the indicial equation is

r(r − 1) + (3/2)r + 0 = 0,

with roots r1 = 0 and r2 = −1/2.
Since these roots are distinct and do not differ by a positive integer, by the last theorem there will be two linearly independent solutions of the form:

y1(x) = 1 + Σ_{n=1}^{∞} a_n(0) x^n

and

y2(x) = |x|^{−1/2} [ 1 + Σ_{n=1}^{∞} a_n(−1/2) x^n ]

for 0 < |x| < ρ. Note that ρ is the distance from x0 to the nearest zero of P(x). In the present case ρ ≥ 1.
where
F (r) = r(r − 1) + p0 r + q0 . (8.25)
If the roots of (8.25) are both equal to r1, we can obtain one solution by setting r = r1 and demanding that the coefficient of each power of x^{r+n} vanishes for all n = 0, 1, 2, ···. Then, assuming that F(r + n) ≠ 0, n ≥ 1, we have

a_n(r) = − [ Σ_{k=0}^{n−1} a_k ((r + k) p_{n−k} + q_{n−k}) ] / F(r + n),  n ≥ 1.   (8.26)

Using (8.26) in (8.24), we have

L[y](r, x) = a0 F(r) x^r,   (8.27)

where

L[y] = x² d²y/dx² + x (xp(x)) dy/dx + (x² q(x)) y.
Since r1 is a repeated root of (8.25), we have

F(r) = (r − r1)²,   (8.28)

so that

L[y](r1, x) = 0.
As we already know,
y1(x) = x^{r1} [ a0 + Σ_{n=1}^{∞} a_n(r1) x^n ],  x > 0,   (8.29)

is a solution of (8.14). Furthermore, from (8.27) we have

L[∂y/∂r](r, x) = a0 ∂/∂r [ x^r (r − r1)² ] = a0 [ (r − r1)² x^r ln x + 2(r − r1) x^r ].

Hence,

L[∂y/∂r](r1, x) = 0.

Thus, a second solution of (8.14) is

y2(x) = (∂y/∂r)(r1, x)
      = ∂/∂r { x^r [ a0 + Σ_{n=1}^{∞} a_n(r) x^n ] } |_{r=r1}
      = (x^{r1} ln x) [ a0 + Σ_{n=1}^{∞} a_n(r1) x^n ] + x^{r1} Σ_{n=1}^{∞} a_n'(r1) x^n
      = y1(x) ln x + x^{r1} Σ_{n=1}^{∞} a_n'(r1) x^n,  x > 0,

where

a_n'(r1) = (d/dr) a_n(r) |_{r=r1}.
which converges for |x| < ρ, ρ > 0.
Let r1 and r2 with r1 ≥ r2 if they are real, be the roots of the indicial
equation:
F (r) = r(r − 1) + p0 r + q0 = 0.
Then in either of the intervals: (−∞, 0) and (0, ∞) equation (8.30) has two
linearly independent solutions y1 and y2 of the following forms:
(i) If r1 ≠ r2, then

y1(x) = |x|^{r1} [ 1 + Σ_{n=1}^{∞} a_n(r1) x^n ]

and

y2(x) = λ y1(x) ln |x| + |x|^{r2} [ 1 + Σ_{n=1}^{∞} c_n(r2) x^n ].

(ii) If r1 = r2, then

y1(x) = |x|^{r1} [ 1 + Σ_{n=1}^{∞} a_n(r1) x^n ],

y2(x) = y1(x) ln |x| + |x|^{r1} Σ_{n=1}^{∞} b_n(r1) x^n.
The coefficients an (r1 ), an (r2 ), bn (r1 ), cn (r2 ) and the constant λ may be
determined by substituting the series solution (8.16) in (8.30).
The constant λ may be zero. All the series displayed above converge for
|x| < ρ, ρ > 0 and each defines a function which is analytic in a neighbor-
hood of x = 0.
Therefore,
Γ(z + 1) = zΓ(z).
Similarly,
Then

y' = Σ_{n=0}^{∞} a_n (n + r) x^{n+r−1}

and

x y' = Σ_{n=0}^{∞} a_n (n + r) x^{n+r}.

Notice that

x² y'' + x y' + (x² − v²) y = x(x y')' + (x² − v²) y = 0.

Then

(x y')' = Σ_{n=0}^{∞} a_n (n + r)² x^{n+r−1}

and

x(x y')' = Σ_{n=0}^{∞} a_n (n + r)² x^{n+r}.

Hence we have:

Σ_{n=0}^{∞} a_n (n + r)² x^{n+r} + Σ_{n=0}^{∞} a_n x^{n+r+2} − v² Σ_{n=0}^{∞} a_n x^{n+r} = 0,

that is,

a0 (r² − v²) x^r + a1 ((r + 1)² − v²) x^{r+1} + Σ_{n=2}^{∞} { a_{n−2} + a_n [(r + n)² − v²] } x^{n+r} = 0.

Setting the coefficient of x^r to zero gives

a0 (r² − v²) = 0,

or

r = ±v,  a0 ≠ 0.

Also we have

a1 ((r + 1)² − v²) = 0.

Thus a1 = 0, since r + 1 = 1 ± v ≠ ±v. The general recurrence relation is given by

a_{n−2} + a_n [ (r + n)² − v² ] = 0,

or

a_n = − a_{n−2} / ((n + r)² − v²),  n = 2, 3, 4, ···.   (8.32)
We have r1 = ν, r2 = −ν. Consider the case where ν = 0. Here the two
indicial roots coincide. We have from (8.32)
From (8.32) with ν = 0, keeping r general for the moment,

a_{2n}(r) = − a_{2n−2}(r)/(2n + r)²,

and iterating,

a_{2n}(r) = (−1)^n a0 / [ (2n + r)²(2n − 2 + r)²(2n − 4 + r)² ··· (2 + r)² ],  n = 1, 2, ···.

Hence

a_{2n}(0) = (−1)^n a0 / [ (2n)²(2n − 2)²(2n − 4)² ··· 2² ] = (−1)^n a0 / (2^{2n}(n!)²).
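The recurrence and the closed form can be compared directly; the short Python sketch below (ours, not from the notes) iterates a_{2n} = −a_{2n−2}/(2n)² with a0 = 1.

```python
# Iterate the recurrence a_{2n} = -a_{2n-2}/(2n)^2 (equation (8.32) with r = nu = 0,
# a0 = 1, odd coefficients zero) and compare with (-1)^n / (2^(2n) (n!)^2).
import math

def a_even(n_max):
    coeffs = [1.0]
    a = 1.0
    for n in range(1, n_max + 1):
        a = -a/(2*n)**2            # step from a_{2n-2} to a_{2n}
        coeffs.append(a)
    return coeffs

coeffs = a_even(6)
```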
Therefore,

a_{2n}'(r)/a_{2n}(r) = −2 [ 1/(2n + r) + 1/(2n − 2 + r) + ··· + 1/(2 + r) ].

Then, by the last theorem, with a0 = 1 and b_{2n}(0) = a_{2n}'(0), we obtain the second solution of the Bessel equation of order zero to be the following:

y2(x) = J0(x) ln x + Σ_{n=1}^{∞} [ (−1)^{n+1} H_n / (2^{2n}(n!)²) ] x^{2n},  x > 0,

where H_n = 1 + 1/2 + ··· + 1/n.
Since a1 = 0, we have
a2n+1 = 0, n = 0, 1, 2 · · · .
Since
Γ(1 + ν) = νΓ(ν)
and
Γ(2 + ν) = (1 + ν)Γ(1 + ν)
then

1 + ν = Γ(2 + ν)/Γ(1 + ν),

and therefore,

a2 = − a0/(2²(1 + ν)) = − a0 Γ(1 + ν)/(2² Γ(2 + ν)).
Again, since

Γ(3 + ν) = (2 + ν)Γ(2 + ν)

and

1/(2 + ν) = Γ(2 + ν)/Γ(3 + ν),

then

a4 = − a2/(2³(2 + ν)) = a0 Γ(1 + ν)/(2! 2⁴ Γ(3 + ν))

and

a6 = − a4/(3! · 2(3 + ν)) = − a0 Γ(1 + ν)/(3! 2⁶ Γ(4 + ν)).
Then the series solution for r = ν is

y1(x) = a0 x^ν Γ(1 + ν) [ 1/Γ(1 + ν) − (1/Γ(2 + ν))(x/2)² + (1/(2!Γ(3 + ν)))(x/2)⁴ − (1/(3!Γ(4 + ν)))(x/2)⁶ + ··· ].

If we take

a0 = 1/(2^ν Γ(1 + ν)),

then y1 is called the Bessel function of the first kind of order ν and written Jν(x). Thus

Jν(x) = Σ_{n=0}^{∞} [ (−1)^n / (Γ(n + 1)Γ(n + ν + 1)) ] (x/2)^{2n+ν}.
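For ν = 0 this series is easy to evaluate numerically; the Python sketch below (ours, not part of the notes) sums its partial sums and reproduces the well-known first positive zero of J0 near x ≈ 2.4048.

```python
# Partial sums of J0(x) = sum (-1)^n (x/2)^(2n) / (n!)^2 (the nu = 0 case of Jnu).
import math

def J0(x, terms=30):
    return sum((-1)**n * (x/2)**(2*n) / math.factorial(n)**2 for n in range(terms))
```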
Chapter Nine
Laplace Transformations
9.1. Introduction
integrable over any finite interval and, using the bound

|f(t)| ≤ M e^{αt},   (9.1)

we have

|L(f)(s)| = | ∫_0^∞ e^{−st} f(t) dt |
          ≤ ∫_0^∞ e^{−st} |f(t)| dt
          ≤ ∫_0^∞ M e^{−st} e^{αt} dt
          = M ∫_0^∞ e^{−(s−α)t} dt = M/(s − α),  s > α.
We shall now examine some examples of functions which satisfy such inequalities.
(i) Consider the hyperbolic cosine function t → cosh t. Since cosh t = (1/2)(e^t + e^{−t}) and, for all t ≥ 0, e^{−t} ≤ e^t, we have

|cosh t| = (1/2)(e^t + e^{−t}) ≤ e^t,

satisfying (9.1) with M = α = 1.
(ii) Consider f(t) = t^n, n ∈ IN. Since e^t = Σ_k t^k/k! ≥ t^n/n! for t ≥ 0, it follows that |f(t)| < n! e^t, satisfying (9.1) with M = n! and α = 1.
(b). We remark that the conditions of the Theorem (9.1.2) are suffi-
cient rather than necessary. Thus, there may be functions f which do
not satisfy the conditions of the Theorem for which the integral (9.2)
exists nevertheless.
9.1.3. Definition: Let L(IR) denote the linear space of all functions f for which (9.2) exists. Then, the function

s → L(f)(s) ≡ F(s) = ∫_0^∞ e^{−st} f(t) dt,  f ∈ L(IR),

is called the Laplace transform of the function f, and the map f → L(f) is called the Laplace transformation.
9.1.4. Remark: Notice that the operator f → L(f ) is linear. That is,
L(f1 + f2 ) = L(f1 ) + L(f2 )
and
L(cf ) = cL(f ), for all f, f1 , f2 ∈ L(IR), and c ∈ IR.
Setting st = λ, then

L(f)(s) = ∫_0^∞ e^{−λ} (λ/s)^a (1/s) dλ
        = (1/s^{a+1}) ∫_0^∞ e^{−λ} λ^a dλ
        = Γ(a + 1)/s^{a+1},  a ≥ 0,

where

∫_0^∞ e^{−λ} λ^a dλ = Γ(a + 1).
To solve questions (ii) and (iii), let

f(t) = e^{iωt}.

Then

e^{iωt} = cos ωt + i sin ωt,

and

L(f)(s) = ∫_0^∞ e^{−st} f(t) dt
        = ∫_0^∞ e^{−st} e^{iωt} dt
        = ∫_0^∞ e^{−(s−iω)t} dt
        = 1/(s − iω)
        = (s + iω)/((s − iω)(s + iω))
        = s/(s² + ω²) + iω/(s² + ω²)
        = ∫_0^∞ e^{−st} cos ωt dt + i ∫_0^∞ e^{−st} sin ωt dt.

Hence,

∫_0^∞ e^{−st} cos ωt dt = s/(s² + ω²)

and

∫_0^∞ e^{−st} sin ωt dt = ω/(s² + ω²).
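These two transforms can be verified numerically. The Python sketch below (ours, not from the notes) truncates the defining integral at T, where the tail is O(e^{−sT}), and applies Simpson's rule.

```python
# Numerical check of L(cos wt)(s) = s/(s^2+w^2) and L(sin wt)(s) = w/(s^2+w^2).
import math

def laplace(f, s, T=40.0, steps=4000):      # steps must be even for Simpson's rule
    h = T/steps
    total = f(0.0) + math.exp(-s*T)*f(T)
    for i in range(1, steps):
        t = i*h
        total += (4 if i % 2 else 2)*math.exp(-s*t)*f(t)
    return total*h/3

s, w = 1.5, 2.0
Lcos = laplace(lambda t: math.cos(w*t), s)
Lsin = laplace(lambda t: math.sin(w*t), s)
```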
(iv). Here f(t) = e^{at}, and for s > a we have

L(f)(s) = ∫_0^∞ e^{−st} f(t) dt
        = ∫_0^∞ e^{−st} e^{at} dt
        = ∫_0^∞ e^{−(s−a)t} dt = 1/(s − a),  s > a.
Note that for

f(t) = t^a,  a ≥ 0,

L(f)(s) = Γ(a + 1)/s^{a+1},  a ≥ 0.

If a = 0, then f(t) = 1 for all t, and we have

L(f)(s) = Γ(1)/s = 1/s.

For a = n ∈ IN,

L(t^n)(s) = Γ(n + 1)/s^{n+1} = n!/s^{n+1},

since

Γ(n + 1) = n!.
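The formula L(t^n)(s) = n!/s^{n+1} is also easy to confirm numerically; below is a small Python sketch (ours) that truncates the defining integral at T = 40 and applies Simpson's rule for n = 3, s = 2.

```python
# Check L(t^n)(s) = Gamma(n+1)/s^(n+1) = n!/s^(n+1) for n = 3, s = 2.
import math

def laplace_tn(n, s, T=40.0, steps=8000):   # Simpson's rule; steps must be even
    h = T/steps
    total = (1.0 if n == 0 else 0.0) + math.exp(-s*T)*T**n
    for i in range(1, steps):
        t = i*h
        total += (4 if i % 2 else 2)*math.exp(-s*t)*t**n
    return total*h/3

value = laplace_tn(3, 2.0)
expected = math.factorial(3)/2.0**4
```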
9.1.5. Definition: Let U_a be the function defined on (0, ∞) as follows:

U_a(t) = 0 for t < a,  U_a(t) = 1 for t ≥ a,

where a > 0. Then the function U_a is called a step function.
and

L(sin ωt) = ω/(s² + ω²).

Setting ω = 1 we have

L(cos t) = s/(s² + 1),
L(sin t) = 1/(s² + 1).

Hence the function which has F(s) = (s − 1)/(s² + 1) as Laplace transform is

f(t) = cos t − sin t.
The Translation Theorem: let f ∈ L(IR) with F(s) = L(f)(s), and let g(t) = e^{at} f(t). Then the Laplace transform of g is given by

L(g)(s) = F(s − a) := F_{−a}(s),

where, in general, h_b(x) = h(x + b), b ∈ IR.
Proof: By definition,

L(f)(s) = F(s) = ∫_0^∞ e^{−st} f(t) dt.

Hence,

L(g)(s) = ∫_0^∞ e^{−st} g(t) dt
        = ∫_0^∞ e^{−st} e^{at} f(t) dt
        = ∫_0^∞ e^{−(s−a)t} f(t) dt
        = F(s − a).
As an example, let f(t) = e^{at} cos ωt and f1(t) = cos ωt. Then

L(f1)(s) = F1(s) = s/(s² + ω²).

Notice that

f(t) = e^{at} f1(t).

By the translation Theorem, the Laplace transform L(f) of f is given by F1(s − a). Hence,

L(f)(s) = (s − a)/((s − a)² + ω²).
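The translation theorem can be checked numerically as well; the Python sketch below (ours, not from the notes) compares a Simpson's-rule approximation of L(e^{at} cos ωt)(s) with F1(s − a).

```python
# Translation theorem check: with f1(t) = cos wt and F1(s) = s/(s^2+w^2),
# the transform of e^{at} f1(t) should equal F1(s-a) for s > a.
import math

def laplace(f, s, T=60.0, steps=6000):      # Simpson's rule, truncated at T
    h = T/steps
    total = f(0.0) + math.exp(-s*T)*f(T)
    for i in range(1, steps):
        t = i*h
        total += (4 if i % 2 else 2)*math.exp(-s*t)*f(t)
    return total*h/3

a, w, s = 0.5, 3.0, 2.0
lhs = laplace(lambda t: math.exp(a*t)*math.cos(w*t), s)
rhs = (s - a)/((s - a)**2 + w**2)
```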
9.2.4. Differentiation Theorem: Let f ∈ L(IR) be continuous on [0, ∞). Suppose that f' is piecewise continuous on every finite interval and satisfies the boundedness inequality (9.1). Then the Laplace transform of f' exists and is given by:

L(f')(s) = sL(f)(s) − f(0).

Proof: We consider first the case where f' is continuous on [0, ∞). Then, by definition of L(f') and by integration by parts, we have:

L(f')(s) = ∫_0^∞ e^{−st} f'(t) dt
         = [ e^{−st} f(t) ]_0^∞ + s ∫_0^∞ e^{−st} f(t) dt
         = −f(0) + s ∫_0^∞ e^{−st} f(t) dt
         = sL(f)(s) − f(0).

If f' is only piecewise continuous, the proof is similar, but the range of integration is broken up into parts in each of which f' is continuous. This concludes the proof.
9.2.5. Example: Find the Laplace transform of the function f given by

f(t) = (1/ω) sin ωt.

You may assume that

L(cos ωt) = s/(s² + ω²).

Solution: Notice that

d/dt (cos ωt) = −ω sin ωt,

that is,

d/dt ( −(1/ω²) cos ωt ) = (1/ω) sin ωt.

Set

g(t) = −(1/ω²) cos ωt.

Then

g'(t) = (1/ω) sin ωt = f(t).

Hence

L(f)(s) = L(g')(s) = sL(g)(s) − g(0).

But

L(g)(s) = −(1/ω²) · s/(s² + ω²),

and

g(0) = −1/ω².

Hence

L(f)(s) = L(g')(s)
        = −(1/ω²) · s²/(s² + ω²) − (−1/ω²)
        = (1/ω²) [ 1 − s²/(s² + ω²) ]
        = (1/ω²) [ (s² + ω² − s²)/(s² + ω²) ] = 1/(s² + ω²).
Theorem (higher derivatives): Let f ∈ L(IR) be such that f, f', ···, f^{(n−1)} are continuous on [0, ∞) and satisfy inequality (9.1) for some M and α, and suppose that f^{(n)} is piecewise continuous on every finite interval in [0, ∞). Then the Laplace transform of f^{(n)} exists for s > α, and is given by

L(f^{(n)})(s) = s^n L(f)(s) − s^{n−1} f(0) − s^{n−2} f'(0) − ··· − f^{(n−1)}(0).

Remark: When n = 2, we have

L(f'')(s) = s² L(f)(s) − s f(0) − f'(0).

Proof (for n = 2):

L(f'')(s) = L((f')')(s)
          = sL(f')(s) − f'(0)
          = s(sL(f)(s) − f(0)) − f'(0)
          = s² L(f)(s) − s f(0) − f'(0).
Now let g(t) = ∫_0^t f(τ) dτ. Then

|g(t)| ≤ ∫_0^t |f(τ)| dτ.

Since, by (9.1),

|f(t)| ≤ M e^{αt},

then

|g(t)| ≤ ∫_0^t |f(τ)| dτ ≤ ∫_0^t M e^{ατ} dτ = (M/α)(e^{αt} − 1) ≤ (M/α) e^{αt} = M' e^{αt},

where M' = M/α > 0, so that L(g)(s) exists. Furthermore, g'(t) = f(t) except at the points of discontinuity of f. Hence g' is piecewise continuous on each finite interval, and by our previous results we have

L(f)(s) = L(g')(s) = sL(g)(s) − g(0) = sL(g)(s),

since g(0) = 0.
Next, consider g(t) = t f(t). If F(s) = L(f)(s), then L(g)(s) = −F'(s).
Proof: The above can be established as follows: since

F'(s) = (d/ds) F(s) = (d/ds) ∫_0^∞ e^{−st} f(t) dt
      = ∫_0^∞ e^{−st} (−t f(t)) dt
      = − ∫_0^∞ e^{−st} (t f(t)) dt
      = −L(g)(s).
Similarly, if the limit

lim_{t→0+} f(t)/t

exists (i.e. the right hand limit exists), then

∫_s^∞ F(s') ds' = L(g)(s),

where g(t) = f(t)/t.
Proof:

∫_s^∞ F(s') ds' = ∫_s^∞ ∫_0^∞ e^{−s't} f(t) dt ds'.

But under the above assumptions, we can show that the order of integration in the last integral may be interchanged. Thus

∫_s^∞ F(s') ds' = ∫_0^∞ ∫_s^∞ e^{−s't} f(t) ds' dt
               = ∫_0^∞ f(t) ∫_s^∞ e^{−s't} ds' dt
               = ∫_0^∞ e^{−st} (f(t)/t) dt
               = L(g)(s),

where

g(t) = f(t)/t.
Example: Consider ∫_s^∞ F(s') ds', where

F(s) = 3/s⁴.

But the function s → F(s) is the Laplace transform of the function f given by

f(t) = (1/2) t³.

By the result above,

(1/2) L(t²)(s) = ∫_s^∞ (3/s'⁴) ds' = 1/s³.

That is, the function g(t) = (1/2) t² = f(t)/t has the transform given by L(g)(s) = 1/s³. In the foregoing, we have used the fact that

L(t^a)(s) = Γ(a + 1)/s^{a+1},

which implies that

L(t²)(s) = Γ(3)/s³ = 2!/s³ = 2/s³.
(iii) f1 ∗ (f2 + f3 ) = (f1 ∗ f2 ) + (f1 ∗ f3 ) - Distributive property over addi-
tion.
9.4.5. Example: Suppose that f(t) = t and g(t) = sin t. Then the convolution of the functions is

h(t) = ∫_0^t f(t − τ) g(τ) dτ
     = ∫_0^t (t − τ) sin τ dτ
     = t ∫_0^t sin τ dτ − ∫_0^t τ sin τ dτ
     = t(1 − cos t) − ( −t cos t + sin t )
     = t − t cos t + t cos t − sin t
     = t − sin t.
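The result t − sin t can be confirmed by evaluating the convolution integral numerically; the Python sketch below (ours, not from the notes) applies Simpson's rule on [0, t].

```python
# Numerical check of the convolution (f*g)(t) = integral of (t-tau) sin(tau)
# over [0, t], which should equal t - sin t.
import math

def conv(t, steps=1000):                    # Simpson's rule; both endpoint terms vanish
    h = t/steps
    total = 0.0
    for i in range(1, steps):
        tau = i*h
        total += (4 if i % 2 else 2)*(t - tau)*math.sin(tau)
    return total*h/3
```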
Convolution Theorem: with h = f ∗ g as above,

H(s) = F(s)G(s),
where
H(s) = L(h)(s),
F (s) = L(f )(s),
G(s) = L(g)(s).
Proof: We have

F(s)G(s) = ∫_0^∞ e^{−sτ} g(τ) dτ · ∫_0^∞ e^{−st} f(t) dt = ∫_0^∞ g(τ) ∫_0^∞ e^{−s(t+τ)} f(t) dt dτ.

Set t + τ = σ for each fixed τ. Then the last expression becomes:

F(s)G(s) = ∫_0^∞ g(τ) ∫_τ^∞ e^{−sσ} f(σ − τ) dσ dτ
         = ∫_0^∞ e^{−sσ} ∫_0^σ f(σ − τ) g(τ) dτ dσ
         = ∫_0^∞ e^{−sσ} h(σ) dσ
         = H(s),

where

h(t) = ∫_0^t f(t − τ) g(τ) dτ.
where

g(t) = sin t.

Taking the Laplace transform of both sides of the equation, we get:

F(s) = 1/s² + 1/(s² + 1) + F(s)/(s² + 1).

Therefore,

F(s) [ 1 − 1/(s² + 1) ] = 1/s² + 1/(s² + 1),

that is,

F(s) · s²/(s² + 1) = (s² + 1 + s²)/(s²(s² + 1)),

or equivalently,

F(s) = (2s² + 1)/s⁴ = 2/s² + 1/s⁴.
Now, the function which has 2/s² as Laplace transform is f1(x) = 2x, and the function which has 1/s⁴ as Laplace transform is f2(x) = x³/3!. Hence

f(x) = 2x + x³/3!

is the required function. Notice that we have employed the relation

L(x^a)(s) ≡ Γ(a + 1)/s^{a+1}.
Chapter Ten
Application of Laplace Transforms to Ordinary Differential
Equations
10.1. Introduction:
At the point t = 0, the particle is at the point (1, 0) ∈ IR2 . Using specifi-
cally the method of Laplace transformation, find expressions for x and y
as functions of t.
and by rearranging we have

[ s² + 1 − 1/(s² + 1) ] Y(s) = s,

that is,

[ ((s² + 1)² − 1)/(s² + 1) ] Y(s) = s,

or

[ (s⁴ + 2s²)/(s² + 1) ] Y(s) = s.

Thus,

Y(s) = s(s² + 1)/(s²(s² + 2)) = (s² + 1)/(s(s² + 2))
     = s²/(s(s² + 2)) + 1/(s(s² + 2))
     = s/(s² + 2) + 1/(s(s² + 2))
     = s/(s² + (√2)²) + 1/(s(s² + (√2)²)).

The term 1/(s(s² + 2)) is the transform of the convolution of the constant function 1 (with transform 1/s) and (1/√2) sin √2 t. Hence,

y(t) = cos √2 t + (1/√2) ∫_0^t sin √2 τ dτ
     = cos √2 t + (1/√2) [ −(1/√2) cos √2 τ ]_0^t
     = cos √2 t − (1/2)(cos √2 t − 1)
     = (1/2)(cos √2 t + 1).
Solution: Taking the Laplace transform of (10.9) and writing Y1(s) = L(y1), Y2(s) = L(y2), we obtain

(s + 1)Y1(s) − (1 + s)Y2(s) = −1.   (10.12)
Taking the Laplace transform of (10.10) we have

s² Y1(s) − s y1(0) − y1'(0) + s² Y2(s) − s y2(0) − y2'(0) = 1/(s − 1),

since

L(e^{αt}) = 1/(s − α).

Using (10.11) in the last equation, we get:

s² Y1(s) − 1 + s² Y2(s) − s = 1/(s − 1).

That is,

s² Y1(s) + s² Y2(s) = 1/(s − 1) + 1 + s = (1 + s² − 1)/(s − 1) = s²/(s − 1),

or

Y1(s) + Y2(s) = 1/(s − 1),   (10.13)

and from (10.12) we have

Y1(s) − Y2(s) = −1/(1 + s).

Therefore, by combining the last two equations and solving for Y1(s) and Y2(s), we get:

Y1(s) = (1/2) [ 1/(s − 1) − 1/(1 + s) ] = (1/2) · [ (1 + s) − (s − 1) ]/(s² − 1) = 1/(s² − 1).

Inverting (performing the inverse transformation) we get

y1(t) = (1/2)(e^t − e^{−t}) = sinh t.

Finally,

Y2(s) = (1/2) [ 1/(s − 1) + 1/(s + 1) ] = s/(s² − 1).

Performing the inverse transformation for Y2(s) we have

y2(t) = (1/2)(e^t + e^{−t}) = cosh t.
1. (a) Let L(IR+ ) denote the set of all functions f : IR+ = [0, ∞) → IR which
have Laplace transforms.
(i) State sufficient conditions for a function to be a member of L(IR+ ).
(b) Solve the system of ordinary differential equations, using the method
of Laplace transformation:
y'' + qy' + ry = s   (1)
(b) Verify that y1(x) = x^{−1} is a solution of the ordinary differential equation

y'' − (2/x²) y = 0,  0 < x < ∞.   (2)
Hence, determine
(i) a second solution of Equation (2)
3. Write down the Bessel equation of order zero and determine its lin-
early independent solutions in a neighbourhood of the origin.
(b) Find the first terms in each of the two linearly independent series
solutions of the ordinary differential equation
4e^x y'' + xy = 0
Question 1.
y(0) = 1,  y'(0) = 1/2,  x ∈ [0, 1/2].
2 2
(c) (i) Let y1 (x), y2 (x) be two solutions of (1) with R(x) ≡ 0. When do we
say that the two solutions are linearly independent (LI) ?
(ii) Define the Wronskian W(y1, y2) of y1, y2.
(d) State the conditions for the two solutions to be linearly independent in terms of the Wronskian.
(e) (i) Show that y1 (x) = sin kx and y2 (x) = cos kx are solutions of the
ODE

y''(x) + k² y = 0.
(ii) Show that they are LI on IR.
Question 2.
By writing
y2 (x) = V (x)y1 (x),
obtain the second LI solution y2 (x).
(c) State the process for finding a particular solution yp (x) of equation
(1) by the method of variation of parameters.
(d) Verify that y1(x) = x^{−1} and y2(x) = x² are two linearly independent solutions of the differential equation:

y'' − (2/x²) y = 0,  0 < x < ∞;

hence determine a particular solution of the non-homogeneous equation

y'' − (2/x²) y = x,  0 < x < ∞.
Hence, write down its general solution.
Question 3
(ii) Determine the ordinary point(s) and singular point(s) of the differential equation

(x² − 16)y'' − e^x y' + αy = 0,  α ∈ IR.
Hence obtain a power series solution about the point x = 1 for the Her-
mite equation.