
First-Order ODE: Existence and Uniqueness Results

Basic Definitions, Existence and Uniqueness Results for First-Order IVP

Department of Mathematics
IIT Guwahati


Texts/References:

1. S. L. Ross, Differential Equations, John Wiley & Sons, 2004.
2. W. E. Boyce and R. C. DiPrima, Elementary Differential Equations and Boundary Value Problems, John Wiley & Sons, 2001.
3. E. A. Coddington, An Introduction to Ordinary Differential Equations, Prentice Hall India, 1995.
4. E. L. Ince, Ordinary Differential Equations, Dover Publications, 1958.
5. D. G. Zill and M. R. Cullen, Differential Equations with Boundary Value Problems.


Definition: An equation involving derivatives of one or more dependent variables with respect to one or more independent variables is said to be a differential equation (DE).

Definition: A DE involving ordinary derivatives of one or more dependent variables w.r.t. a single independent variable is called an ordinary differential equation (ODE).

A general form of the nth-order ODE:
F(x, y(x), y'(x), y''(x), ..., y^(n)(x)) = 0,   (1)
where y'(x) = dy/dx, y''(x) = d^2y/dx^2, ..., y^(n)(x) = d^n y/dx^n.


• The order of a DE is the order of the highest derivative that occurs in the equation.
• The degree of a DE is the power to which the highest-order derivative is raised in the differential equation.
• Eq. (1) is linear if F is linear in y, y', y'', ..., y^(n), with coefficients depending on the independent variable x. Eq. (1) is called nonlinear if it is not linear.
Examples:
• y''(x) + 3y'(x) + x y(x) = 0 (second-order, first-degree, linear)
• y''(x) + 3y(x) y'(x) + x y(x) = 0 (second-order, first-degree, nonlinear)
• (y''(x))^2 + 3y'(x) + x y^2(x) = 0 (second-order, second-degree, nonlinear)


Definition: A DE involving partial derivatives of one or more dependent variables w.r.t. more than one independent variable is called a partial differential equation (PDE).
A PDE for a function u(x1, x2, ..., xn) (n ≥ 2) is a relation of the form
F(x1, x2, ..., xn, u, u_{x1}, u_{x2}, ..., u_{x1x1}, u_{x1x2}, ...) = 0,   (2)
where F is a given function of the independent variables x1, x2, ..., xn, of the unknown function u, and of a finite number of its partial derivatives.
Examples:
• x ∂u/∂x + y ∂u/∂y = 0 (first-order equation)
• ∂^2u/∂x^2 + ∂^2u/∂y^2 = 0 (second-order equation)
We shall consider only ODEs.


Definition: A function ϕ(x) ∈ C^n((a, b)) that satisfies
F(x, ϕ(x), ϕ'(x), ϕ''(x), ..., ϕ^(n)(x)) = 0, x ∈ (a, b),
is called an explicit solution to the equation on (a, b).

Example: ϕ(x) = x^2 − x^{−1} is an explicit solution to
y''(x) − (2/x^2) y = 0.
Note that ϕ(x) is an explicit solution on (−∞, 0) and also on (0, ∞).


Definition: (Initial Value Problem)
Find a solution y(x) ∈ C^n((a, b)) that satisfies
F(x, y, y'(x), ..., y^(n)(x)) = 0, x ∈ (a, b)
and the n initial conditions (IC)
y(x0) = y0, y'(x0) = y1, ..., y^(n−1)(x0) = y_{n−1},
where x0 ∈ (a, b) and y0, y1, ..., y_{n−1} are given constants.
First-order IVP: F(x, y, y'(x)) = 0, y(x0) = y0.
Second-order IVP: F(x, y, y'(x), y''(x)) = 0, y(x0) = y0, y'(x0) = y1.
Example: The function ϕ(x) = sin x − cos x is a solution on R to the IVP: y''(x) + y(x) = 0, y(0) = −1, y'(0) = 1.


Consider the following IVPs:
• |y'| + 2|y| = 0, y(0) = 1 (no solution).
• y'(x) = x, y(0) = 1 (a unique solution y = x^2/2 + 1).
• x y'(x) = y − 1, y(0) = 1 (many solutions y = 1 + cx).


Observation: Thus, an IVP
F(x, y, y') = 0, y(x0) = y0
may have no solution, precisely one solution, or more than one solution.
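These outcomes are easy to probe with a computer algebra system. A minimal sketch (assuming sympy is available; the code is illustrative and not part of the original slides) for the well-posed middle IVP:

import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

# The middle IVP above: y'(x) = x, y(0) = 1; dsolve returns its unique solution.
sol = sp.dsolve(sp.Eq(y(x).diff(x), x), y(x), ics={y(0): 1})
print(sol)  # Eq(y(x), x**2/2 + 1)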


Well-posed IVP
An IVP is said to be well-posed if
• it has a solution,
• the solution is unique, and
• the solution depends continuously on the initial data y0 and f.


Theorem (Peano's Theorem):
Let R : |x − x0| ≤ a, |y − y0| ≤ b be a rectangle. If f ∈ C(R), then the IVP
y'(x) = f(x, y), y(x0) = y0
has at least one solution y(x). This solution is defined for all x in the interval |x − x0| ≤ h, where
h = min{a, b/K}, K = max_{(x,y)∈R} |f(x, y)|.


Example: Let R : |x − 0| ≤ 3, |y − 0| ≤ 3 be a rectangle and let f(x, y) = xy. Then f ∈ C(R), so the IVP
y'(x) = f(x, y), y(0) = 0
has at least one solution y(x). This solution is defined for all x in the interval |x − 0| ≤ h, where
h = min{3, 3/K}, K = max_{(x,y)∈R} |xy| = 9,
so h = min{3, 1/3} = 1/3.


Theorem (Picard's Theorem):
Let f ∈ C(R) satisfy a Lipschitz condition with respect to y in R, i.e., there exists a constant L such that
|f(x, y2) − f(x, y1)| ≤ L |y2 − y1| for all (x, y1), (x, y2) ∈ R.
Then the IVP
y'(x) = f(x, y), y(x0) = y0
has a unique solution y(x). This solution is defined for all x in the interval |x − x0| ≤ h, where
h = min{a, b/K}, K = max_{(x,y)∈R} |f(x, y)|.


Example: Consider the IVP:
y'(x) = |y|, y(1) = 1.
f(x, y) = |y| is continuous and satisfies a Lipschitz condition w.r.t. y (with L = 1) in every domain R of the xy-plane. The point (1, 1) certainly lies in some such domain R. Hence the IVP has a unique solution ϕ defined on some interval |x − 1| ≤ h about x0 = 1.
Corollary to Picard's Theorem:
Let f, ∂f/∂y ∈ C(R). Then the IVP
y'(x) = f(x, y), y(x0) = y0
has a unique solution y(x). This solution is defined for all x in the interval |x − x0| ≤ h, where
h = min{a, b/K}, K = max_{(x,y)∈R} |f(x, y)|.


Example: Let R : |x| ≤ 5, |y| ≤ 3 be the rectangle, and consider the IVP
y' = 1 + y^2, y(0) = 0
over R. Here a = 5, b = 3. Then
max_{(x,y)∈R} |f(x, y)| = max_{(x,y)∈R} |1 + y^2| = 10 (= K),
max_{(x,y)∈R} |∂f/∂y| = max_{(x,y)∈R} 2|y| = 6 (= L),
h = min{a, b/K} = min{5, 3/10} = 0.3 < 5.
Note that the solution of the IVP is y = tan x, which actually exists on |x| < π/2; the theorem guarantees it only on |x| ≤ 0.3, not on the entire interval |x| ≤ 5.
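Numerically, the guaranteed interval is conservative, but the blow-up is real. A small sketch (assuming scipy and numpy are available; not part of the notes) integrating y' = 1 + y^2 from y(0) = 0:

import numpy as np
from scipy.integrate import solve_ivp

# y' = 1 + y**2, y(0) = 0 has exact solution tan(x), blowing up at x = pi/2.
sol = solve_ivp(lambda x, y: 1 + y**2, (0.0, 1.5), [0.0],
                dense_output=True, rtol=1e-10, atol=1e-12)
for xv in (0.3, 1.0, 1.5):
    print(xv, sol.sol(xv)[0], np.tan(xv))  # numeric vs exact values agree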

Example (Non-uniqueness): Consider the IVP:
y' = 3 y^{2/3} for x ∈ R, y(0) = 0.
For each real number c ≥ 0, let
yc(x) = 0 if 0 ≤ x < c,  yc(x) = (x − c)^3 if c ≤ x < ∞.
It is easy to verify that yc(x) is a solution to the IVP. Therefore, this IVP has infinitely many solutions. (Here f(x, y) = 3 y^{2/3} is continuous but fails the Lipschitz condition near y = 0, so Picard's theorem does not apply.)


The Method of Successive Approximations


Consider the IVP
y'(x) = f(x, y), y(x0) = y0. (1)
Key Idea: Replace the IVP (1) by the equivalent integral equation
y(x) = y0 + ∫_{x0}^{x} f(t, y(t)) dt. (2)
Note that (1) and (2) are equivalent.
A first (rough) approximation to a solution is given by y0(x) = y0. A second approximation y1(x) is obtained as follows:
y1(x) = y0 + ∫_{x0}^{x} f(t, y0(t)) dt.


The next step is to use y1(x) to generate another approximation y2(x) in the same way:
y2(x) = y0 + ∫_{x0}^{x} f(t, y1(t)) dt.
At the nth step, we have
yn(x) = y0 + ∫_{x0}^{x} f(t, y_{n−1}(t)) dt.
This procedure is called Picard's method of successive approximations.


Example: Consider the IVP: y' = y, y(0) = 1. The successive approximations are
yn(x) = 1 + x + x^2/2! + x^3/3! + ... + x^n/n!.
Note that yn(x) → e^x as n → ∞.
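A minimal sketch of the iteration with sympy (the helper name picard_step is our own, introduced only for illustration):

import sympy as sp

x, t = sp.symbols('x t')

def picard_step(f, y_prev, x0, y0):
    # y_{k+1}(x) = y0 + integral from x0 to x of f(t, y_k(t)) dt
    return y0 + sp.integrate(f(t, y_prev.subs(x, t)), (t, x0, x))

f = lambda t_, y_: y_          # right-hand side of y' = y
y_k = sp.Integer(1)            # y_0(x) = y0 = 1
for k in range(4):
    y_k = picard_step(f, y_k, 0, 1)
    print(sp.expand(y_k))      # 1 + x, 1 + x + x**2/2, ... -> exp(x)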


Theorem: Let R : |x − x0| ≤ a, |y − y0| ≤ b be a rectangle, where a, b > 0. Let f ∈ C(R) and let |f(x, y)| ≤ M for all (x, y) ∈ R. Further suppose that f satisfies a Lipschitz condition w.r.t. y with constant K in R. Then the successive approximations
y0(x) ≡ y0,
y_{k+1}(x) = y0 + ∫_{x0}^{x} f(t, yk(t)) dt, k = 0, 1, 2, 3, ...
converge uniformly on the interval I : |x − x0| ≤ h, where h = min{a, b/M}, to a solution y(x) of the IVP
y' = f(x, y), y(x0) = y0.

Theorem (Continuous dependence on initial data):
Let f, ∂f/∂y ∈ C(R) and (x0, y0), (x0, y0m) ∈ R. Let ϕ(x) be the solution of
y' = f(x, y), y(x0) = y0,
and let ϕm(x) be the solution of
y' = f(x, y), y(x0) = y0m,
in R for |x − x0| ≤ h. Then, for |x − x0| ≤ h, we have
|ϕ(x) − ϕm(x)| ≤ |y0 − y0m| e^{Lh},
where |∂f/∂y (x, y)| ≤ L for all (x, y) ∈ R.
Further, as y0m → y0, ϕm → ϕ uniformly on [x0 − h, x0 + h].


Theorem (Continuous dependence on f):
Let f, fm, ∂f/∂y, ∂fm/∂y ∈ C(R) and (x0, y0) ∈ R. Let ϕ(x) be the solution of
y' = f(x, y), y(x0) = y0,
and ϕm(x) be the solution of
y' = fm(x, y), y(x0) = y0.
Assume that both ϕ(x), ϕm(x) exist on [x0 − h, x0 + h]. Then, for |x − x0| ≤ h, we have
|ϕ(x) − ϕm(x)| ≤ h e^{L̂h} max_{(x,y)∈R} |f(x, y) − fm(x, y)|,
where L̂ = min{L, Lm}, |∂f/∂y (x, y)| ≤ L, and |∂fm/∂y (x, y)| ≤ Lm for all (x, y) ∈ R.
Further, as fm → f, ϕm → ϕ uniformly on [x0 − h, x0 + h].

Separable Equations
Definition: A first-order equation y'(x) = f(x, y) is separable if it can be written in the form
dy/dx = g(x) p(y).
Method for solving separable equations: To solve the equation
dy/dx = g(x) p(y),
we write it as h(y) dy = g(x) dx, where h(y) := 1/p(y). Integrating both sides,
∫ h(y) dy = ∫ g(x) dx ⇒ H(y) = G(x) + C,
which gives an implicit solution to the differential equation.



Formal justification of the method: Write the equation in the form
h(y) dy/dx = g(x), h(y) := 1/p(y).
Let H(y) and G(x) be such that
H'(y) = h(y), G'(x) = g(x).
Then
H'(y) dy/dx = G'(x).
Since d/dx H(y(x)) = H'(y(x)) dy/dx (by the chain rule), we obtain
d/dx H(y(x)) = d/dx G(x) ⇒ H(y(x)) = G(x) + C.


Remark: In finding a one-parameter family of solutions in the separation process, we assume that p(y) ≠ 0. We must then find the solutions y = y0 of the equation p(y) = 0 and determine whether any of these are solutions of the original equation that were lost in the formal separation process.
Example: Consider (x − 4) y^4 dx − x^3 (y^2 − 3) dy = 0. Separating the variables by dividing by x^3 y^4, we obtain
(x − 4) dx / x^3 − (y^2 − 3) dy / y^4 = 0.
The general solution is −1/x + 2/x^2 + 1/y − 1/y^3 = C, y ≠ 0.
Note: y = 0 is a solution of the original equation which was lost in the separation process.
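A hedged sympy check of this example (the symbol H and the slope comparison below are our own scaffolding, not from the notes):

import sympy as sp

x, y = sp.symbols('x y')
H = -1/x + 2/x**2 + 1/y - 1/y**3     # implicit solution H(x, y) = C

# Along a solution curve dH = 0, so dy/dx = -H_x / H_y.
slope = -sp.diff(H, x) / sp.diff(H, y)
expected = (x - 4)*y**4 / (x**3*(y**2 - 3))   # dy/dx from the original equation
print(sp.simplify(slope - expected))          # 0, confirming the solution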


First-Order Linear Equations
A linear first-order equation can be expressed in the form
a1(x) dy/dx + a0(x) y = b(x), (3)
where a1(x), a0(x) and b(x) depend only on the independent variable x, not on y.
Examples:
(1 + 2x) dy/dx + 6y = e^x (linear)
sin x dy/dx + (cos x) y = x^2 (linear)
dy/dx + x y^3 = x^2 (not linear)
Theorem (Existence and Uniqueness):
Suppose a1(x), a0(x), b(x) ∈ C((a, b)), a1(x) ≠ 0 and x0 ∈ (a, b). Then for any y0 ∈ R, there exists a unique solution y(x) ∈ C^1((a, b)) to the IVP
a1(x) dy/dx + a0(x) y = b(x), y(x0) = y0.
First-Order ODE: Separable Equations, Exact Equations and Integrating Factor


REMARK: In the last theorem of the previous lecture, you can change the open interval (a, b) to any interval I (open, closed, or semi-closed; it does not matter). The theorem is given in its correct form below.



First-Order Linear Equations (restated)
Theorem (Existence and Uniqueness):
Let I be an interval. Suppose a1(x), a0(x), b(x) ∈ C(I), a1(x) ≠ 0 on I, and x0 ∈ I. Then for any y0 ∈ R, there exists a unique solution y(x) ∈ C^1(I) to the IVP
a1(x) dy/dx + a0(x) y = b(x), y(x0) = y0. (1)
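As an illustration of the theorem (a sketch, assuming sympy is available), the unique solution of one of the linear examples above can be produced directly:

import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

# (1 + 2x) y' + 6y = exp(x), y(0) = 1; a1(x) = 1 + 2x is nonzero on (-1/2, oo).
ode = sp.Eq((1 + 2*x)*y(x).diff(x) + 6*y(x), sp.exp(x))
print(sp.dsolve(ode, y(x), ics={y(0): 1}))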
Exact Differential Equation
Definition: Let F be a function of two real variables such that
F has continuous first partial derivatives in a domain D. The
total differential dF of the function F is defined by the formula
dF (x, y) = Fx (x, y)dx + Fy (x, y)dy
for all (x, y) ∈ D.
Definition: The expression M (x, y)dx + N (x, y)dy is called an
exact differential in a domain D if there exists a function F
such that
Fx (x, y) = M (x, y) and Fy (x, y) = N (x, y)
for all (x, y) ∈ D.
Definition: If M (x, y)dx + N (x, y)dy is an exact differential,
then the differential equation
M (x, y)dx + N (x, y)dy = 0
is called an exact differential equation.
Definition: If an equation
F(x, y) = c
can be solved for y = ϕ(x) or for x = ψ(y) in a neighbourhood of each point (x, y) satisfying F(x, y) = c, and if the corresponding function ϕ or ψ satisfies
M(x, y) dx + N(x, y) dy = 0,
then F(x, y) = c is said to be an implicit solution of
M(x, y) dx + N(x, y) dy = 0.



Theorem: Let R be a rectangle in R^2. Let M(x, y), N(x, y) ∈ C^1(R). Then
M(x, y) + N(x, y) y' = 0 is exact ⟺ My(x, y) = Nx(x, y)
for (x, y) ∈ R.
Example: Consider 4x + 3y + 3(x + y^2) y' = 0.
Note that M, N ∈ C^1(R) and My = 3 = Nx. Thus, there exists f(x, y) such that fx = 4x + 3y and fy = 3x + 3y^2.
fx = 4x + 3y ⇒ f(x, y) = 2x^2 + 3xy + ϕ(y). Now,
3x + 3y^2 = fy(x, y) = 3x + ϕ'(y) ⇒ ϕ'(y) = 3y^2 ⇒ ϕ(y) = y^3.
Thus, f(x, y) = 2x^2 + 3xy + y^3 and the general solution is given by
2x^2 + 3xy + y^3 = C.
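A small sketch (assuming sympy; not from the notes) verifying exactness and recovering F(x, y) for this example:

import sympy as sp

x, y = sp.symbols('x y')
M = 4*x + 3*y
N = 3*(x + y**2)

print(sp.simplify(sp.diff(M, y) - sp.diff(N, x)))  # 0 => the equation is exact

# Build F with F_x = M, then fix the y-dependent part from F_y = N.
F = sp.integrate(M, x)                               # 2x^2 + 3xy + phi(y)
phi = sp.integrate(sp.simplify(N - sp.diff(F, y)), y)
print(F + phi)                                       # 2*x**2 + 3*x*y + y**3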
Definition: If the equation
M(x, y) dx + N(x, y) dy = 0 (2)
is not exact, but the equation
µ(x, y){M(x, y) dx + N(x, y) dy} = 0 (3)
is exact, then µ(x, y) is called an integrating factor of (2).


Example: The equation (y^2 + y) dx − x dy = 0 is not exact. But when we multiply by 1/y^2, the resulting equation
(1 + 1/y) dx − (x/y^2) dy = 0, y ≠ 0,
is exact.
Remark: While (2) and (3) have essentially the same solutions,
it is possible to lose solutions when multiplying by µ(x, y).



Theorem: If (My − Nx)/N is continuous and depends only on x, then
µ(x) = exp( ∫ (My − Nx)/N dx )
is an integrating factor for M dx + N dy = 0.
Proof. If µ(x, y) is an integrating factor, we must have
∂/∂y {µM} = ∂/∂x {µN} ⇒ M ∂µ/∂y − N ∂µ/∂x = (∂N/∂x − ∂M/∂y) µ.
If µ = µ(x), then dµ/dx = ((My − Nx)/N) µ, where (My − Nx)/N is just a function of x.


Example: Solve (2x^2 + y) dx + (x^2 y − x) dy = 0.
The equation is not exact, as My = 1 ≠ 2xy − 1 = Nx. Note that
(My − Nx)/N = 2(1 − xy)/(−x(1 − xy)) = −2/x,
which is a function of x only, so an I.F. is µ(x) = x^{−2} and the solution is given by 2x − y x^{−1} + y^2/2 = C.
Remark. Note that the solution x = 0 was lost in multiplying by µ(x) = x^{−2}.
Theorem: If (Nx − My)/M is continuous and depends only on y, then
µ(y) = exp( ∫ (Nx − My)/M dy )
is an integrating factor for M dx + N dy = 0.
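A sketch checking the integrating-factor computation in the example above with sympy:

import sympy as sp

x, y = sp.symbols('x y')
M = 2*x**2 + y
N = x**2*y - x

ratio = sp.simplify((sp.diff(M, y) - sp.diff(N, x)) / N)
print(ratio)                        # -2/x, a function of x alone
mu = sp.exp(sp.integrate(ratio, x))
print(sp.simplify(mu))              # x**(-2)

# After multiplying by mu, the equation is exact:
Mm, Nm = sp.expand(mu*M), sp.expand(mu*N)
print(sp.simplify(sp.diff(Mm, y) - sp.diff(Nm, x)))  # 0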


Homogeneous Functions

If M(x, y) dx + N(x, y) dy = 0 is not a separable, exact, or linear equation, then it may still be possible to transform it into one that we know how to solve.
Definition: A function f(x, y) is said to be homogeneous of degree n if
f(tx, ty) = t^n f(x, y)
for all suitably restricted x, y and t, where t ∈ R and n is a constant.
Example:
1. f(x, y) = x^2 + y^2 log(y/x), x > 0, y > 0 (homogeneous of degree 2)
2. f(x, y) = e^{y/x} + tan(y/x), x > 0, y > 0 (homogeneous of degree 0)


• If M(x, y) and N(x, y) are homogeneous functions of the same degree, then the substitution y = vx transforms the equation into a separable equation.
Writing M dx + N dy = 0 in the form dy/dx = −M/N = f(x, y), f(x, y) is a homogeneous function of degree 0. Now, the substitution y = vx transforms the equation into
v + x dv/dx = f(1, v) ⇒ dv/(f(1, v) − v) = dx/x,
which is in variable-separable form.
Example: Consider (x + y) dx − (x − y) dy = 0. Put y = vx and separate the variables to get
(1 − v) dv / (1 + v^2) = dx/x.
Integrating and replacing v = y/x, we obtain
tan^{−1}(y/x) = log √(x^2 + y^2) + C.
Substitutions and Transformations
• A first-order equation of the form
y' + p(x) y = q(x) y^α,
where p(x), q(x) ∈ C((a, b)) and α ∈ R, is called a Bernoulli equation.
The substitution v = y^{1−α} transforms the Bernoulli equation into the linear equation
dv/dx + p1(x) v = q1(x),
where p1(x) = (1 − α) p(x), q1(x) = (1 − α) q(x).
Example: Consider y' + y = x y^3. The general solution is given by 1/y^2 = x + 1/2 + c e^{2x}.
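A hedged check of the Bernoulli example with sympy: differentiate the stated implicit solution and confirm it satisfies the ODE.

import sympy as sp

x, c = sp.symbols('x c')
y = sp.Function('y')

rel = 1/y(x)**2 - (x + sp.Rational(1, 2) + c*sp.exp(2*x))
# Differentiate the relation implicitly and solve for y'(x):
yprime = sp.solve(sp.diff(rel, x), sp.diff(y(x), x))[0]
residual = yprime + y(x) - x*y(x)**3
# Eliminate c using 1/y^2 = x + 1/2 + c*e^{2x}:
residual = residual.subs(c, (1/y(x)**2 - x - sp.Rational(1, 2))/sp.exp(2*x))
print(sp.simplify(residual))   # 0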



• An equation of the form
y' = p(x) y^2 + q(x) y + r(x)
is called a Riccati equation.
If one solution of it, say u(x), is known, then the substitution y = u + 1/v reduces it to a linear equation in v.
Remark: Note that if p(x) = 0, then it is a linear equation; if r(x) = 0, then it is a Bernoulli equation.



A DE of the form M (x, y)dx + N (x, y)dy = 0 is called a homogeneous
DE if M (x, y) and N (x, y) are both homogeneous functions of the same
degree.



• A DE of the form
(a1 x + b1 y + c1) dx + (a2 x + b2 y + c2) dy = 0, where the ai's, bi's and ci's are constants, can be transformed into a homogeneous equation by substituting
x = u + h and y = v + k,
where h, k are solutions (provided a solution exists) of
a1 h + b1 k + c1 = 0 and a2 h + b2 k + c2 = 0.
If a2/a1 = b2/b1 (the lines are parallel), then the substitution z = a1 x + b1 y reduces the above DE to a separable equation in x and z.



Orthogonal Trajectories
Suppose
dy/dx = f(x, y)
represents the DE of a family of curves. Then the slope of any orthogonal trajectory is given by
dy/dx = −1/f(x, y), or equivalently −dx/dy = f(x, y),
which is the DE of the orthogonal trajectories.
Example: Consider the family of circles x^2 + y^2 = r^2. Differentiating w.r.t. x gives x + y dy/dx = 0. The differential equation of the orthogonal trajectories is x + y(−dx/dy) = 0. Separating variables and integrating, we obtain y = cx as the equation of the orthogonal trajectories.
*** End ***
Higher Order Linear ODE: Existence and Uniqueness Results, Fundamental Solutions, Wronskian



Differential Operators

Let I be an interval and n a positive integer. We will now see what is meant by a differential operator from C^n(I) to C(I).
Consider the map D : C^1(I) → C(I) given by D(f) = f'. More generally, for any k ∈ {1, ..., n}, consider the map D^k : C^k(I) → C(I) given by D^k(f) = f^(k), where f^(k) denotes the k-th derivative of f. Observe that D^k = D ◦ D ◦ ··· ◦ D (k times). By convention, D^0 = Id (the identity map).
The operators (or maps) D^k are called differentiation operators.
Definition: A differential operator from C^n(I) to C(I) is a map L : C^n(I) → C(I) which can be expressed as a function of the differentiation operator D.
For example: take L = D^n, or L = e^D, or L = an D^n + a_{n−1} D^{n−1} + ... + a1 D + a0 D^0, where a0, a1, ..., an ∈ C(I).



Linear ODEs

Definition: The differential operator L : C^n(I) → C(I) is said to be linear if for any y(x), y1(x), y2(x) ∈ C^n(I) and c ∈ R,
• L(y1 + y2) = L(y1) + L(y2), and L(cy) = c L(y).
Linear ODE: An ODE given by F(x, y, y', ..., y^(n)) = 0 on an interval I is said to be linear if it can be written as L(y)(x) = g(x), where L : C^n(I) → C(I) is a linear differential operator.



Example: Consider y'' + 3x y' + x y = x; this is a linear ODE. Note that L(y)(x) := y'' + 3x y' + x y is linear.
Non-linear ODE: A non-linear ODE involves higher powers of y and/or of derivatives of y.
Example: y'' + x (y')^2 + x y^3 = x is a non-linear ODE. Note that L(y)(x) := y'' + x (y')^2 + x y^3 is not linear.
• FACT: A general n-th order linear ODE can be represented as
an(x) y^(n) + a_{n−1}(x) y^(n−1) + ... + a1(x) y' + a0(x) y = g(x),
where the ai and g are given functions of x, and an(x) ≠ 0.
• CHECK THAT: L : C^n(I) → C(I) given by L(y)(x) := an(x) y^(n)(x) + a_{n−1}(x) y^(n−1)(x) + ... + a1(x) y'(x) + a0(x) y(x) is a linear differential operator.
• When g(x) = 0, L(y)(x) = 0 is called a homogeneous differential equation.
Existence and Uniqueness Results

Theorem: (Existence and uniqueness theorem for linear IVPs of order n)
Suppose that aj(x), g(x) ∈ C(I) and an(x) ≠ 0 for all x ∈ I. Let x0 ∈ I. Then the initial value problem (IVP)
(Ly)(x) = g(x), y^(j)(x0) = αj, j = 0, ..., n − 1,
where αj ∈ R and L(y)(x) := an(x) y^(n)(x) + a_{n−1}(x) y^(n−1)(x) + ... + a1(x) y'(x) + a0(x) y(x),
has a unique solution y(x) for all x ∈ I.
In particular, if g = 0 and αj = 0, j = 0, ..., n − 1, then y(x) = 0 for all x ∈ I.



Example:
• The IVP (1 + x^2) y'' + x y' − y = tan x, y(1) = 1, y'(1) = 2 has a unique solution, which exists on (−π/2, π/2).
• The IVP y'' + 3x^2 y' + e^x y = sin x, y(0) = 1, y'(0) = 0 has a unique solution, which exists on (−∞, ∞).
• The IVP y'' − y = 0, y(1) = 0, y'(1) = 0 has only the trivial solution y(x) = 0 for all x ∈ R.
Theorem (Superposition principle for the homogeneous equation):
Let yi ∈ C^n(I), i = 1, ..., n, be any solutions of L(y)(x) = 0 on I. Then y(x) = c1 y1(x) + c2 y2(x) + ... + cn yn(x), where the ci, i = 1, ..., n, are arbitrary constants, is also a solution on I.
Example: y1(x) = e^{2x} and y2(x) = x e^{2x} are two solutions of y'' − 4y' + 4y = 0. Note that y(x) = c1 y1(x) + c2 y2(x) is also a solution of y'' − 4y' + 4y = 0.
Theorem (Superposition principle for the non-homogeneous equation):
Let y_{pi} ∈ C^n(I) be solutions of L(y)(x) = gi(x) for each i = 1, ..., n on I. Then
yp(x) = c1 y_{p1}(x) + c2 y_{p2}(x) + ... + cn y_{pn}(x),
where the ci, i = 1, ..., n, are arbitrary constants, is a solution of L(y)(x) = Σ_{i=1}^{n} ci gi(x) on I.

Example: Note that y_{p1}(x) = e^x is a solution of y'' − 2y' + 2y = e^x and y_{p2}(x) = x^2 is a solution of y'' − 2y' + 2y = 2 − 4x + 2x^2. Then 10e^x + 7x^2 is a solution of y'' − 2y' + 2y = 10e^x + 7(2 − 4x + 2x^2).



Solution of linear ODE:
Consider the linear differential operator L, where
L(y) := an y^(n) + a_{n−1} y^(n−1) + ... + a1 y' + a0 y,
and the ai : I → R are given functions.
Problem: Given g ∈ C(I), find y ∈ C^n(I) such that L(y) = g.
Since L : C^n(I) → C(I) is a linear transformation, the solution set of
L(y) = g
is given by
Ker(L) + yP,
where yP is a particular solution (PS) satisfying L(yP) = g and Ker(L) = {y ∈ C^n(I) | L(y) = 0}.



Note that Ker(L) is a vector space.

If {y1, ..., yn} ⊂ C^n(I) is a basis of Ker(L), then the general solution (GS) of L(y) = g is given by
y = c1 y1 + ... + cn yn + yP.
Moral: (The GS of L(y) = g) = (The GS of L(y) = 0) + (a PS yP satisfying L(yP) = g).



Theorem: We have dim(Ker(L)) = n.
Proof: Choose x0 ∈ I. Define T : Ker(L) → K^n by
T y := (y(x0), y'(x0), ..., y^(n−1)(x0)).
Here, K is either the field of real numbers or the field of complex numbers.
Then T is linear. By the uniqueness theorem, T(y) = 0 implies y = 0, so T is one-to-one. The existence of solutions shows that T is onto. Thus, T is bijective. Hence dim(Ker(L)) = n.



Solution of Constant Coefficients ODE


Recall that all solutions of L(y) = g are given by
Ker(L) + yP,
where yP is a particular solution satisfying L(yP) = g.
Hence what we need to do is to find
• a basis {y1, ..., yn} of Ker(L), and
• a particular solution yP.
Then the general solution of L(y) = g is given by
y := c1 y1 + ... + cn yn + yP.

Definition: If {f1, ..., fn} ⊂ C^n(I), then
W(f1, ..., fn) :=
| f1         ···  fn         |
| f1'        ···  fn'        |
|  ⋮               ⋮          |
| f1^(n−1)   ···  fn^(n−1)   |
is called the Wronskian of f1, ..., fn on I.

Theorem: Let y1, y2, ..., yn ∈ C^n(I) be solutions of L(y) = 0, where
L(y) := an y^(n) + a_{n−1} y^(n−1) + ... + a1 y' + a0 y,
with ai ∈ C(I), i = 0, ..., n, and an(x) ≠ 0 on I. If W(y1, ..., yn)(x0) ≠ 0 for some x0 ∈ I, then every solution y(x) of L(y) = 0 on I can be expressed in the form
y(x) = C1 y1(x) + ... + Cn yn(x),
where C1, ..., Cn are constants.

Example: The functions y1 = e^{2x} and y2 = e^{−2x} are both solutions of y'' − 4y = 0 on (−∞, ∞). The Wronskian is
W(y1, y2) = | e^{2x}  e^{−2x} ; 2e^{2x}  −2e^{−2x} | = −4 ≠ 0.
The general solution is y = c1 e^{2x} + c2 e^{−2x}.
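A short sketch computing this Wronskian with sympy:

import sympy as sp

x = sp.symbols('x')
y1, y2 = sp.exp(2*x), sp.exp(-2*x)

# Wronskian as a 2x2 determinant; nonzero => fundamental solution set.
W = sp.Matrix([[y1, y2], [sp.diff(y1, x), sp.diff(y2, x)]]).det()
print(sp.simplify(W))   # -4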
Theorem (Abel's formula): Let y1, ..., yn be any n solutions to
L y = y^(n) + p1(x) y^(n−1) + ... + pn(x) y = 0
on I, where p1, ..., pn ∈ C(I). Then, for x0 ∈ I, we have
W(y1, ..., yn)(x) = W(y1, ..., yn)(x0) exp( −∫_{x0}^{x} p1(t) dt )
for all x ∈ I.
Proof. Prove for n = 2 (see Theorem 8 in Chapter 3 of Coddington's book).

Corollary: The Wronskian of solutions, W(y1, ..., yn)(x), is either identically zero or never zero on I.
Definition: A set of n linearly independent solutions of L y = 0 that spans Ker(L) is called a fundamental solution set.
Fact: Let y1, y2, ..., yn ∈ C^n(I) be solutions of L(y) = 0, where L(y)(x) = an(x) y^(n)(x) + ... + a1(x) y'(x) + a0(x) y(x), ai ∈ C(I) and an(x) ≠ 0 for all x ∈ I. Then the following statements are equivalent:
• {y1, y2, ..., yn} is a fundamental solution set on I.
• {y1, y2, ..., yn} are linearly independent on I.
• W(y1, y2, ..., yn)(x) ≠ 0 on I.
Proof. See Theorems 6 and 7 in Chapter 3 of Coddington's book.


Theorem: Let yp(x) ∈ C^n(I) be a particular solution to L(y)(x) = g(x) on I, and let {y1, y2, ..., yn} ⊂ C^n(I) be a fundamental solution set of L(y) = 0 on I. Then every solution of L(y) = g on I can be expressed in the form
y(x) = C1 y1(x) + ... + Cn yn(x) + yp(x).
Example: yp = x^2 is a particular solution to y'' − y = 2 − x^2, and y1(x) = e^x, y2(x) = e^{−x} are solutions to y'' − y = 0. A general solution is
y(x) = C1 e^x + C2 e^{−x} + x^2.


Homogeneous linear equations with constant coefficients
Aim: To find a basis for Ker(L), that is, a fundamental solution set for the homogeneous equation L(y) = 0, where
L(y) := an y^(n) + a_{n−1} y^(n−1) + ... + a1 y' + a0 y
and an ≠ 0, a_{n−1}, ..., a0 are real constants.
For y = e^{rx}, we find
L(e^{rx}) = an r^n e^{rx} + a_{n−1} r^{n−1} e^{rx} + ... + a0 e^{rx} = e^{rx}(an r^n + a_{n−1} r^{n−1} + ... + a0) = e^{rx} P(r),
where P(r) = an r^n + a_{n−1} r^{n−1} + ... + a0.
Thus L(e^{rx}) = 0 provided r is a root of the auxiliary equation
P(r) = an r^n + a_{n−1} r^{n−1} + ... + a0 = 0.
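In practice the roots of P(r) can also be found numerically. A minimal sketch with numpy (assumed available), using the auxiliary polynomial of the example that follows below:

import numpy as np

# Auxiliary polynomial of y'' - 3y' + 2y = 0: P(r) = r**2 - 3*r + 2.
coeffs = [1, -3, 2]
print(np.roots(coeffs))   # [2. 1.] => general solution C1*e^{2x} + C2*e^{x}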


Case I (Distinct real roots): Let r1, ..., rn be real and distinct roots. The n solutions are given by
y1(x) = e^{r1 x}, y2(x) = e^{r2 x}, ..., yn(x) = e^{rn x}.
We need to show
c1 e^{r1 x} + ... + cn e^{rn x} = 0 ⇒ c1 = c2 = ... = cn = 0.
P(r) can be factored as
P(r) = an (r − r1)(r − r2) ··· (r − rn),
so the operator L can be written as
L = P(D) = an (D − r1) ··· (D − rn).
Now, construct the polynomial Pk(r) by deleting the factor (r − rk) from P(r). Then
Lk := Pk(D) = an (D − r1) ··· (D − r_{k−1})(D − r_{k+1}) ··· (D − rn).

By linearity,
Lk( Σ_{i=1}^{n} ci e^{ri x} ) = Lk(0) ⇒ c1 Lk(e^{r1 x}) + ... + cn Lk(e^{rn x}) = 0.
Since Lk = Pk(D), we find that Lk(e^{rx}) = e^{rx} Pk(r) for all r. Thus
Σ_{i=1}^{n} ci e^{ri x} Pk(ri) = 0 ⇒ ck e^{rk x} Pk(rk) = 0,
as Pk(ri) = 0 for i ≠ k. Since rk is not a root of Pk(r), Pk(rk) ≠ 0. This yields ck = 0. As k is arbitrary, we have c1 = c2 = ... = cn = 0.
Theorem: If P(r) = 0 has n distinct roots r1, r2, ..., rn, then the general solution of L(y) = 0 is
y(x) = C1 e^{r1 x} + C2 e^{r2 x} + ... + Cn e^{rn x},
where C1, C2, ..., Cn are arbitrary constants.

Example: Consider y'' − 3y' + 2y = 0. The auxiliary equation P(r) = r^2 − 3r + 2 = 0 has two roots r1 = 1, r2 = 2. The general solution is y(x) = C1 e^x + C2 e^{2x}.

Case II (Repeated roots): Suppose r1 is a root of multiplicity m. Then
P(r) = (r − r1)^m P̃(r),
where P̃(r) = an (r − r_{m+1}) ··· (r − rn) and P̃(r1) ≠ 0. Now
L(e^{rx}) = e^{rx} (r − r1)^m P̃(r).
Setting r = r1, we see that e^{r1 x} is a solution. To find other solutions, note that
∂^k/∂r^k L(e^{rx}) = ∂^k/∂r^k [ e^{rx} (r − r1)^m P̃(r) ].
Now,
∂^k/∂r^k L(e^{rx}) |_{r=r1} = 0 if k ≤ m − 1,
and since L has constant coefficients, the x- and r-derivatives commute, so
L[ ∂^k/∂r^k (e^{rx}) |_{r=r1} ] = 0.

Thus,
∂^k/∂r^k (e^{rx}) |_{r=r1} = x^k e^{r1 x}
will be a solution to L(y) = 0 for k = 0, 1, ..., m − 1. So, m distinct solutions are
e^{r1 x}, x e^{r1 x}, ..., x^{m−1} e^{r1 x}.
Theorem: If P(r) = 0 has the real root r1 occurring m times and the remaining roots r_{m+1}, r_{m+2}, ..., rn are distinct, then the general solution of L(y) = 0 is
y(x) = (C1 + C2 x + C3 x^2 + ... + Cm x^{m−1}) e^{r1 x} + C_{m+1} e^{r_{m+1} x} + ... + Cn e^{rn x},
where C1, C2, ..., Cn are arbitrary constants.


Example: Consider y^(4) − 8y'' + 16y = 0. In this case, r1 = r2 = 2 and r3 = r4 = −2. The general solution is
y = (C1 + C2 x) e^{2x} + (C3 + C4 x) e^{−2x}.
Case III (Complex roots): If α + iβ is a non-repeated complex root of P(r) = 0, then so is its complex conjugate α − iβ, and both
e^{(α+iβ)x} and e^{(α−iβ)x}
are solutions to L(y) = 0. The corresponding part of the general solution is of the form
e^{αx} (C1 cos(βx) + C2 sin(βx)).


Theorem: If P(r) = 0 has non-repeated complex roots α + iβ and α − iβ, the corresponding part of the general solution is
e^{αx} (C1 cos(βx) + C2 sin(βx)).
If α + iβ and α − iβ are each repeated roots of multiplicity m, then the corresponding part of the general solution is
e^{αx} [ (C1 + C2 x + C3 x^2 + ... + Cm x^{m−1}) cos(βx) + (C_{m+1} + C_{m+2} x + ... + C_{2m} x^{m−1}) sin(βx) ],
where C1, C2, ..., C_{2m} are arbitrary constants.
Example: Consider y^(4) − 2y''' + 2y'' − 2y' + y = 0. Here r1 = r2 = 1, r3 = i and r4 = −i. The general solution is
y = (C1 + C2 x) e^x + C3 cos x + C4 sin x.


Particular solutions of constant-coefficient ODEs
Method of undetermined coefficients: a simple procedure for finding a particular solution (yp) to a non-homogeneous equation L(y) = g when L is a linear differential operator with constant coefficients and g(x) is of special type: that is, when g(x) is either
• a polynomial in x,
• an exponential function e^{αx},
• a trigonometric function sin(βx), cos(βx),
or a finite sum or product of these functions.


Case I. To find yp for the equation L(y) = pn(x), where pn(x) is a polynomial of degree n, try a solution of the form
yp(x) = An x^n + ... + A1 x + A0
and match the coefficients of L(yp) with those of pn(x):
L(yp) = pn(x).
Remark: This procedure yields n + 1 linear equations in the n + 1 unknowns A0, ..., An.


Example: Find yp for L(y)(x) := y'' + 3y' + 2y = 3x + 1.
Try the form yp(x) = Ax + B and attempt to match L(yp) with 3x + 1. Since
L(yp) = 2Ax + (3A + 2B),
equating
2Ax + (3A + 2B) = 3x + 1 ⇒ A = 3/2 and B = −7/4.
Thus, yp(x) = (3/2)x − 7/4.
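A sketch of the coefficient matching with sympy (the polynomial-identity step is made explicit):

import sympy as sp

x, A, B = sp.symbols('x A B')
yp = A*x + B
L_yp = sp.diff(yp, x, 2) + 3*sp.diff(yp, x) + 2*yp

# Match coefficients of L(yp) - (3x + 1) = 0 as a polynomial identity in x.
eqs = sp.Poly(sp.expand(L_yp - (3*x + 1)), x).all_coeffs()
print(sp.solve(eqs, [A, B]))   # {A: 3/2, B: -7/4}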


Case II: The method of undetermined coefficients also works for equations of the form
L(y) = a e^{αx},
where a and α are given constants. Try yp of the form
yp(x) = A e^{αx}
and solve L(yp)(x) = a e^{αx} for the unknown coefficient A.
Example: Find yp for L(y)(x) := y'' + 3y' + 2y = e^{3x}.
Seek yp(x) = A e^{3x}. Then
L(yp) = 9A e^{3x} + 3(3A e^{3x}) + 2(A e^{3x}) = 20A e^{3x}.
Now, L(yp) = e^{3x} ⇒ 20A e^{3x} = e^{3x} ⇒ A = 1/20.
Thus, yp(x) = (1/20) e^{3x}.

Case III: For an equation of the form
L(y) = a cos βx + b sin βx,
try yp of the form
yp(x) = A cos βx + B sin βx
and solve L(yp) = a cos βx + b sin βx for the unknowns A and B.
Example: Find yp for L(y) := y'' − y' − y = sin x.
Seek yp(x) of the form yp(x) = A cos x + B sin x. Then
L(yp) = sin x ⇒ A = 1/5, B = −2/5.
Thus, yp(x) = (1/5) cos x − (2/5) sin x.


Example: Find yp for L(y) := y'' − y' − 12y = e^{4x}.
Note that yh(x) = c1 e^{4x} + c2 e^{−3x}. Trying yp(x) = A e^{4x} as before fails: e^{4x} is a solution to the corresponding homogeneous equation L(y) = 0, so L(A e^{4x}) = 0 can never equal e^{4x}. We therefore replace this choice by yp(x) = A x e^{4x}. Since L(x e^{4x}) ≠ 0, there exists a particular solution of the form
yp(x) = A x e^{4x}.
Remark: If L(yp) = 0, replace yp(x) by x yp(x). If L(x yp) = 0, replace x yp by x^2 yp, and so on. That is, employ x^s yp, where s is the smallest nonnegative integer such that L(x^s yp) ≠ 0.


Form of yp:
• g(x) = pn(x) = an x^n + ... + a1 x + a0:
  yp(x) = x^s Pn(x) = x^s {An x^n + ... + A1 x + A0}
• g(x) = a e^{αx}: yp(x) = x^s A e^{αx}
• g(x) = a cos βx + b sin βx: yp(x) = x^s {A cos βx + B sin βx}
• g(x) = pn(x) e^{αx}: yp(x) = x^s Pn(x) e^{αx}
• g(x) = pn(x) cos βx + qm(x) sin βx, where qm(x) = bm x^m + ... + b1 x + b0 and pn(x) is as above:
  yp(x) = x^s {PN(x) cos βx + QN(x) sin βx},
  where QN(x) = BN x^N + ... + B1 x + B0, PN(x) = AN x^N + ... + A1 x + A0, and N = max(n, m).


• g(x) = a e^{αx} cos βx + b e^{αx} sin βx:
  yp(x) = x^s {A e^{αx} cos βx + B e^{αx} sin βx}
• g(x) = pn(x) e^{αx} cos βx + qm(x) e^{αx} sin βx:
  yp(x) = x^s e^{αx} {PN(x) cos βx + QN(x) sin βx}, where N = max(n, m).
Note:
1. The nonnegative integer s is chosen to be the smallest integer so that no term in yp is a solution to L(y) = 0.
2. Pn(x) or PN(x) must include all its terms even if pn(x) has some terms that are zero. Similarly for QN(x).

*** End ***


The Annihilator and Operator Methods for Finding a Particular Solution yp


The Annihilator Method for Finding yp
• This method provides a procedure for finding a particular solution yp such that L(yp) = g, where L is a linear differential operator with constant coefficients and g(x) is a given function. The basic idea is to transform the given nonhomogeneous equation into a homogeneous one.
Definition: A linear differential operator Q is said to annihilate a function f(x) in (a, b) if
Q(f)(x) = 0 for all x ∈ (a, b).
Example:
1. f(x) = e^x, Q = D − 1 (Q annihilates e^x).
2. f(x) = x e^x, Q = (D − 1)^2.
3. f(x) = e^{2x} sin(4x), Q = D^2 − 4D + 20.

Consider
L(y) = g(x), L(y) := an y^(n) + a_{n−1} y^(n−1) + ... + a0 y,
where the ai's are constants. Suppose Q(g)(x) = 0. Then Q(L(y))(x) = Q(g)(x) = 0, so
QL(y)(x) = 0 ⇒ y ∈ Ker(QL).
Determine Ker(QL) and then compare it with the general solution of L(y) = 0 (i.e., Ker(L)) to determine the form of the particular solution to L(y) = g.


g(x)                                    | Annihilator of g
x^{n−1}                                 | D^n
e^{αx}                                  | (D − α)
x^{n−1} e^{αx}                          | (D − α)^n
cos(βx) or sin(βx)                      | D^2 + β^2
x^{n−1} cos(βx) or x^{n−1} sin(βx)      | (D^2 + β^2)^n


g(x)                                                   | Annihilator of g
e^{αx} cos(βx) or e^{αx} sin(βx)                       | D^2 − 2αD + (α^2 + β^2)
x^{n−1} e^{αx} cos(βx) or x^{n−1} e^{αx} sin(βx)       | [D^2 − 2αD + (α^2 + β^2)]^n
Note: If g(x) has the form e^{x^2}, log x, 1/x, tan x or sin^{−1} x, the annihilator method will not work.


Example: Find a particular solution of
L(y) := y'' + y = e^{2x} + 1.
Note that (D − 2)(e^{2x}) = 0 and D(1) = 0. Hence,
D(D − 2)(e^{2x} + 1) = 0, Q = D(D − 2).
Now,
QL(y) = Q(e^{2x} + 1) = 0 ⇒ D(D − 2)(D^2 + 1)(y) = 0.
Since Ker(QL) = span{cos x, sin x, e^{2x}, 1}, the general solution to QL(y) = 0 is
y(x) = c1 cos x + c2 sin x + c3 e^{2x} + c4. (∗)


Every solution of L(y) = g is also a solution to QL(y) = 0, and the general solution of L(y) = g is
y(x) = c1 cos x + c2 sin x + yp(x),
where Ker(L) = span{cos x, sin x} and L(yp) = e^{2x} + 1. Thus, comparing with (∗), we obtain yp = c3 e^{2x} + c4.
L(yp) = e^{2x} + 1 ⇒ 5 c3 e^{2x} + c4 = e^{2x} + 1 ⇒ c3 = 1/5, c4 = 1.
So, the particular solution is yp(x) = (1/5) e^{2x} + 1.
Note: The general solution of y'' + y = e^{2x} + 1 is
y(x) = c1 cos x + c2 sin x + (1/5) e^{2x} + 1.
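A quick cross-check of this example with sympy's general-purpose solver (a sketch, assuming sympy is available):

import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')
ode = sp.Eq(y(x).diff(x, 2) + y(x), sp.exp(2*x) + 1)
print(sp.dsolve(ode, y(x)))   # y(x) = C1*sin(x) + C2*cos(x) + exp(2*x)/5 + 1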


Operator Methods for Finding yp
Write L y = g as P(D) y = g(x), where
L = P(D) = an D^n + a_{n−1} D^{n−1} + ... + a0.
With each P(D), associate the polynomial
P(r) = an r^n + a_{n−1} r^{n−1} + ... + a0,
called the auxiliary polynomial of P(D).
If P(r) can be factored as a product of n linear factors, say
P(r) = an (r − r1)(r − r2) ··· (r − rn),
then the corresponding factorization of P(D) has the form
P(D) = an (D − r1)(D − r2) ··· (D − rn),
where r1, r2, ..., rn are the roots of P(r) = 0.

Note that
• D yp(x) = g(x) ⇒ yp(x) = ∫ g(x) dx. It is natural to define
(1/D) g(x) := ∫ g(x) dx.
• (D − r) yp = g(x), where r is a constant. Formally, we write
yp = (1/(D − r)) g(x).
The solution of (D − r) yp = g(x) is
yp(x) = e^{rx} ∫ e^{−rx} g(x) dx.
(Because e^{∫ P(x) dx} is an integrating factor for the ODE dy/dx + P(x) y = q(x).) Thus, we define
(1/(D − r)) g(x) := e^{rx} ∫ e^{−rx} g(x) dx.
Operators like 1/D and 1/(D − r) are called inverse operators.

Let 1/P(D) denote the inverse of the operator P(D). Then the particular solution to P(D) y = g(x) is given by
yp(x) = (1/P(D)) g(x).
Method 1 (Successive integrations):
If P(D) = (D − r1)(D − r2) ··· (D − rn), then
yp(x) = (1/P(D)) g(x) = (1/(D − r1)) (1/(D − r2)) ··· (1/(D − rn)) g(x).


Example: Find a particular solution of y'' − 3y' + 2y = x e^x.
Here P(D) y = (D − 1)(D − 2) y = x e^x. The particular solution yp is
yp(x) = (1/(D − 1)) (1/(D − 2)) x e^x
      = (1/(D − 1)) [ e^{2x} ∫ e^{−2x} x e^x dx ] = (1/(D − 1)) [−(1 + x) e^x]
      = −e^x ∫ e^{−x} (1 + x) e^x dx = −(1/2)(1 + x)^2 e^x.
Note: The successive integrations are likely to become complicated and time-consuming.


Method 2 (Partial fractions):
If the factors of P(D) are distinct, we can decompose the operator 1/P(D) into partial fractions as
yp = (1/P(D)) g(x) = [ A1/(D − r1) + A2/(D − r2) + ... + An/(D − rn) ] g(x),
for suitable constants Ai.
Example: Find a particular solution of y'' − 3y' + 2y = x e^x.
yp(x) = (1/((D − 1)(D − 2))) x e^x = [ 1/(D − 2) − 1/(D − 1) ] x e^x
      = (1/(D − 2)) x e^x − (1/(D − 1)) x e^x
      = e^{2x} ∫ e^{−2x} x e^x dx − e^x ∫ e^{−x} x e^x dx
      = −(1 + x + x^2/2) e^x.

Method 3 (Series expansions):
If g(x) = x^n, expand the inverse operator 1/P(D) in a power series in D, so that
yp(x) = (1/P(D)) g(x) = (a0 + a1 D + a2 D^2 + ... + an D^n) g(x),
where a0 + a1 D + a2 D^2 + ... + an D^n is the expansion of 1/P(D) to n + 1 terms, as D^k x^n = 0 if k > n.
Example: Find yp of y''' − 2y'' + y = x^4 + 2x + 5. (Here P(D) = D^3 − 2D^2 + 1.)
1/(1 − 2D^2 + D^3) = 1 + 2D^2 − D^3 + 4D^4 − 4D^5 + ···,
yp(x) = (1/(1 − 2D^2 + D^3)) (x^4 + 2x + 5)
      = (1 + 2D^2 − D^3 + 4D^4 − 4D^5 + ···)(x^4 + 2x + 5)
      = (x^4 + 2x + 5) + 2(12x^2) − (24x) + 4(24)
      = x^4 + 24x^2 − 22x + 101.
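A sketch verifying this result with sympy (the check simply substitutes yp back into the equation):

import sympy as sp

x = sp.symbols('x')
yp = x**4 + 24*x**2 - 22*x + 101
lhs = sp.diff(yp, x, 3) - 2*sp.diff(yp, x, 2) + yp
print(sp.expand(lhs))   # x**4 + 2*x + 5, as required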

Method 4: If g(x) = e^{αx}, α a constant, then
(D − r) e^{αx} = (α − r) e^{αx}.
Operating on both sides of the above identity by (α − r)^{−1}(D − r)^{−1}, we obtain
(1/(D − r)) e^{αx} = (1/(α − r)) e^{αx},
provided α ≠ r. Similarly, if P(D) = (D − r1) ··· (D − rn), then
(1/P(D)) e^{αx} = (1/((D − r1) ··· (D − rn))) e^{αx} = (1/((α − r1) ··· (α − rn))) e^{αx},
provided r1, ..., rn are all distinct from α.
• If P(D) is a polynomial in D such that P(α) ≠ 0, then
(1/P(D)) e^{αx} = e^{αx}/P(α).

Example: Find a particular solution of
y''' − y'' + y' + y = 3e^{−2x}.
yp = (1/P(D)) 3e^{−2x}
   = 3e^{−2x}/P(−2)
   = 3e^{−2x}/((−2)^3 − (−2)^2 + (−2) + 1)
   = −(3/13) e^{−2x}.

*** End ***


Variation of Parameters, Use of a Known Solution to Find Another, and the Cauchy-Euler Equation


Variation of Parameters
Variation of parameters is a more general method for finding a particular solution yp. The method applies even when the coefficients of the differential equation are functions of x. Consider L(y) = g(x), where
L(y) := y^(n) + p_{n−1}(x) y^(n−1) + ... + p1(x) y' + p0(x) y,
with p_{n−1}(x), ..., p0(x) ∈ C(I). We know the general solution to L(y) = g is given by
y(x) = yh(x) + yp(x),
where yh is the general solution to L(y) = 0 and yp(x) is a particular solution to L(y) = g.


Suppose we know a fundamental solution set {y1, ..., yn} for L(y) = 0. Then
yh(x) = C1 y1(x) + ... + Cn yn(x).
In this method, we seek a particular solution yp of the form
yp(x) = v1(x) y1(x) + ... + vn(x) yn(x)
and try to determine the functions v1, ..., vn. Differentiating yp,
yp' = Σ_{i=1}^{n} vi yi' + Σ_{i=1}^{n} vi' yi.
To avoid second and higher-order derivatives of the vi's, we impose the condition
Σ_{i=1}^{n} vi' yi = 0. (1)
Therefore,
yp' = Σ_{i=1}^{n} vi yi', if Σ_{i=1}^{n} vi' yi = 0.
Again differentiating, we obtain
yp'' = Σ_{i=1}^{n} vi yi'' + Σ_{i=1}^{n} vi' yi' = Σ_{i=1}^{n} vi yi'', if Σ_{i=1}^{n} vi' yi' = 0,
⋮
yp^(n−1) = Σ_{i=1}^{n} vi yi^(n−1) + Σ_{i=1}^{n} vi' yi^(n−2) = Σ_{i=1}^{n} vi yi^(n−1), if Σ_{i=1}^{n} vi' yi^(n−2) = 0,
yp^(n) = Σ_{i=1}^{n} vi yi^(n) + Σ_{i=1}^{n} vi' yi^(n−1).
Recall L(yp) = yp^(n) + p_{n−1}(x) yp^(n−1) + ... + p1(x) yp' + p0(x) yp, so
L(yp) = Σ_{i=1}^{n} vi yi^(n) + Σ_{i=1}^{n} vi' yi^(n−1) + p_{n−1} (Σ_{i=1}^{n} vi yi^(n−1)) + ... + p0 (v1 y1 + ... + vn yn).
Grouping the terms multiplying each vi,
L(yp) = v1 [y1^(n) + p_{n−1} y1^(n−1) + ... + p0 y1] + v2 [y2^(n) + p_{n−1} y2^(n−1) + ... + p0 y2] + ... + vn [yn^(n) + p_{n−1} yn^(n−1) + ... + p0 yn] + Σ_{i=1}^{n} vi' yi^(n−1).
Each bracketed term is L(yi) = 0, since each yi solves the homogeneous equation. Therefore, if we seek v1', ..., vn' that satisfy the system
y1 v1' + ... + yn vn' = 0,
y1' v1' + ... + yn' vn' = 0,
⋮
y1^(n−1) v1' + ... + yn^(n−1) vn' = g,
then
L(yp) = v1 · 0 + v2 · 0 + ... + vn · 0 + g = g,
so yp is a particular solution of L(y) = g.


Therefore we can solve the following matrix equation to obtain v1', ..., vn':

[ y1(x)          y2(x)          ···  yn(x)          ] [ v1'(x) ]   [ 0    ]
[ y1'(x)         y2'(x)         ···  yn'(x)         ] [ v2'(x) ]   [ 0    ]
[   ⋮               ⋮                   ⋮            ] [   ⋮    ] = [  ⋮   ]
[ y1^(n−2)(x)    y2^(n−2)(x)    ···  yn^(n−2)(x)    ] [   ⋮    ]   [  ⋮   ]
[ y1^(n−1)(x)    y2^(n−1)(x)    ···  yn^(n−1)(x)    ] [ vn'(x) ]   [ g(x) ]

This system is solvable because the determinant of the coefficient matrix is W(y1, ..., yn)(x) ≠ 0 on I, which holds since {y1, ..., yn} is a fundamental solution set.

By Cramer's rule,
vk'(x) = (determinant obtained from W(y1, ..., yn)(x) by replacing the kth column with [0, ..., 0, g(x)]^T) / W(y1, ..., yn)(x),
i.e.,
vk'(x) = g(x) Wk(x) / W(y1, ..., yn)(x), k = 1, ..., n,
where Wk(x) is obtained from W(y1, ..., yn)(x) by replacing the kth column by [0, ..., 0, 1]^T. We can express Wk(x) as
Wk(x) = (−1)^(n−k) W(y1, ..., y_{k−1}, y_{k+1}, ..., yn)(x)
for k = 1, ..., n.


Integrating vk'(x) yields
vk(x) = ∫ g(x) Wk(x) / W(y1, ..., yn)(x) dx, k = 1, ..., n.
Finally, substituting the vk's back into yp(x) = v1(x) y1(x) + ... + vn(x) yn(x), we obtain
yp(x) = Σ_{k=1}^{n} yk(x) ∫ g(x) Wk(x) / W(y1, ..., yn)(x) dx.


For n = 2, v1' and v2' are given by
v1'(x) = | 0  y2(x) ; g(x)  y2'(x) | / W(y1, y2)(x) = −g(x) y2(x) / W(y1, y2)(x),
v2'(x) = g(x) y1(x) / W(y1, y2)(x),
where W(y1, y2)(x) ≠ 0. Integrating these equations, we obtain
v1(x) = ∫ −g(x) y2(x) / W(y1, y2)(x) dx, v2(x) = ∫ g(x) y1(x) / W(y1, y2)(x) dx.
Thus, the particular solution is given by
yp(x) = v1(x) y1(x) + v2(x) y2(x).


Example: Consider y'' + y = cosec x. Here
yh(x) = c1 sin x + c2 cos x.
The two linearly independent solutions are y1(x) = sin x and y2(x) = cos x, and W(y1, y2) = −1 ≠ 0.
v1(x) = ∫ −g(x) y2(x) / W(y1, y2)(x) dx = ∫ (−cos x cosec x)/(−1) dx = log(sin x).
v2(x) = ∫ g(x) y1(x) / W(y1, y2)(x) dx = ∫ (sin x cosec x)/(−1) dx = −x.
yp = sin x log(sin x) − x cos x.
The general solution is
y(x) = c1 sin x + c2 cos x + sin x log(sin x) − x cos x.
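A sketch of the n = 2 variation-of-parameters formulas applied to this example with sympy:

import sympy as sp

x = sp.symbols('x')
y1, y2 = sp.sin(x), sp.cos(x)
g = 1/sp.sin(x)                      # cosec x

W = sp.Matrix([[y1, y2], [sp.diff(y1, x), sp.diff(y2, x)]]).det()  # -1
v1 = sp.integrate(-g*y2/W, x)        # log(sin(x))
v2 = sp.integrate(g*y1/W, x)         # -x
print(sp.simplify(v1*y1 + v2*y2))    # sin(x)*log(sin(x)) - x*cos(x)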


Use of a known solution to find another
Assume that y1(x) ≠ 0 is a known solution of L(y) = 0, where
L(y) = y'' + p(x) y' + q(x) y.
We know L(c y1) = 0, where c is any arbitrary constant. Replace c by an unknown function v(x) so that L(y2) = 0, where y2 = v(x) y1(x).
Suppose L(y2) = L(v y1) = 0. Then we have
v (y1'' + p y1' + q y1) + v'' y1 + v' (2 y1' + p y1) = 0.
Since L(y1) = 0, we have
v'' y1 + v' (2 y1' + p y1) = 0 ⇒ v''/v' = −2 y1'/y1 − p.


v''/v' = −2 y1'/y1 − p ⇒ z'/z = −2 y1'/y1 − p, where z = v'.
Integrating,
z(x) = (1/y1^2) e^{−∫ p dx} ⇒ v(x) = ∫ (1/y1^2) e^{−∫ p dx} dx.
Thus, the second solution is y2(x) = v(x) y1(x).
Example: Given that y1 = e^x is a solution to y'' − 2y' + y = 0, determine a second linearly independent solution y2.
Here p(x) = −2, so v(x) = x, and the second linearly independent solution is y2(x) = v y1 = x e^x.
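A sketch of the reduction-of-order formula for this example with sympy (p is the constant coefficient −2 here):

import sympy as sp

x = sp.symbols('x')
y1 = sp.exp(x)                  # known solution of y'' - 2y' + y = 0
p = sp.Integer(-2)              # coefficient p(x)

v = sp.integrate(sp.exp(-sp.integrate(p, x)) / y1**2, x)
print(v, sp.simplify(v*y1))     # x, x*exp(x)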


Cauchy-Euler Equation
An equation of the form
an x^n y^(n) + a_{n−1} x^{n−1} y^(n−1) + ... + a1 x y' + a0 y = g(x),
where the ai's are constants, is called a Cauchy-Euler equation.
The substitution x = e^t transforms the above equation into an equation with constant coefficients. For simplicity, take n = 2. Assume that x > 0 and let x = e^t. By the chain rule,
dy/dt = (dy/dx)(dx/dt) = e^t dy/dx = x dy/dx,
hence
x dy/dx = dy/dt.


Differentiating x dy/dx = dy/dt with respect to t, we find that
d^2y/dt^2 = d/dt (x dy/dx) = (dx/dt)(dy/dx) + x d/dt (dy/dx)
          = dy/dt + x (d^2y/dx^2)(dx/dt) = dy/dt + x (d^2y/dx^2) e^t
          = dy/dt + x^2 d^2y/dx^2.
Thus
x^2 d^2y/dx^2 = d^2y/dt^2 − dy/dt.


Substituting into the equation a2 x^2 y'' + a1 x y' + a0 y = g(x), we obtain the constant-coefficient ODE
a2 (d^2y/dt^2 − dy/dt) + a1 dy/dt + a0 y = g(e^t),
which may be written as
a2 d^2y/dt^2 + (a1 − a2) dy/dt + a0 y = g(e^t).
Note: Observe that in the proof it is assumed that x > 0. If x < 0, the substitution x = −e^t will reduce the Cauchy-Euler equation to a constant-coefficient ODE. The method can be applied to higher-order Cauchy-Euler equations.


Example: Consider x^2 y'' − 2x y' + 2y = x^3, x > 0. Setting x = e^t, we obtain
d^2y/dt^2 − dy/dt − 2 dy/dt + 2y = e^{3t},
or
d^2y/dt^2 − 3 dy/dt + 2y = e^{3t}.
The GS to the homogeneous equation is
yh = c1 e^t + c2 e^{2t} = c1 x + c2 x^2.
To find a particular solution, let yp = A e^{3t}. Then A = 1/2, hence yp = (1/2) e^{3t} = (1/2) x^3. The GS is
y(x) = yh(x) + yp(x) = c1 x + c2 x^2 + (1/2) x^3, x > 0.
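A quick cross-check of the Cauchy-Euler example with sympy (a sketch; dsolve can handle this equation directly):

import sympy as sp

x = sp.symbols('x', positive=True)
y = sp.Function('y')
ode = sp.Eq(x**2*y(x).diff(x, 2) - 2*x*y(x).diff(x) + 2*y(x), x**3)
print(sp.dsolve(ode, y(x)))     # y(x) = C1*x + C2*x**2 + x**3/2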
*** End ***

Systems of First Order Differential Equations


A first-order system of n (not necessarily linear) equations in n unknown functions x1(t), x2(t), ..., xn(t) in normal form is given by
x1'(t) = f1(t, x1, x2, ..., xn),
x2'(t) = f2(t, x1, x2, ..., xn),
⋮
xn'(t) = fn(t, x1, x2, ..., xn).
Higher-order differential equations can often be rewritten as first-order systems. We can convert the nth-order ODE
y^(n) = f(t, y, y', ..., y^(n−1)) (1)
into a first-order system as follows.

Setting
x1(t) := y(t), x2(t) := y'(t), ..., xn(t) := y^(n−1)(t),
we obtain n first-order equations:
x1'(t) = y'(t) = x2(t),
x2'(t) = y''(t) = x3(t),
⋮ (2)
x_{n−1}'(t) = y^(n−1)(t) = xn(t),
xn'(t) = y^(n)(t) = f(t, x1, x2, ..., xn).
If (1) has the n initial conditions
y(t0) = α1, y'(t0) = α2, ..., y^(n−1)(t0) = αn,
then the system (2) has initial conditions
x1(t0) = α1, x2(t0) = α2, ..., xn(t0) = αn.

Example: y''(t) + 3y'(t) + 2y(t) = 0; y(0) = 1, y'(0) = 3. Setting
x1(t) := y(t) and x2(t) := y'(t),
we obtain
x1'(t) = x2(t),
x2'(t) = −3x2(t) − 2x1(t).
The ICs transform to x1(0) = 1, x2(0) = 3.
We shall consider only linear systems of first-order ODEs.
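A hedged numerical sketch (assuming scipy and numpy are available) integrating the converted system and comparing against the exact solution y(t) = 5e^{−t} − 4e^{−2t} of the original IVP:

import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, x):
    # x1' = x2, x2' = -3*x2 - 2*x1
    return [x[1], -3*x[1] - 2*x[0]]

sol = solve_ivp(rhs, (0.0, 5.0), [1.0, 3.0], dense_output=True)
t = 1.0
print(sol.sol(t)[0])                      # numerical y(1)
print(5*np.exp(-t) - 4*np.exp(-2*t))      # exact y(1)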


Consider the linear system in normal form:
x1'(t) = a11(t) x1(t) + ... + a1n(t) xn(t) + f1(t),
x2'(t) = a21(t) x1(t) + ... + a2n(t) xn(t) + f2(t),
⋮
xn'(t) = an1(t) x1(t) + ... + ann(t) xn(t) + fn(t).
In matrix and vector notation, we write this as
x'(t) = A(t) x(t) + f(t), (3)
where x(t) = [x1(t), ..., xn(t)]^T, f(t) = [f1(t), ..., fn(t)]^T, and A(t) = [aij(t)] is an n × n matrix.
When f = 0, the linear system (3) is said to be homogeneous.


Definition: The IVP for the system
x'(t) = A(t) x(t) + f(t) (4)
is to find a vector function x(t) ∈ C^1 that satisfies the system (4) on an interval I and the initial condition x(t0) = x0 = (x_{1,0}, ..., x_{n,0})^T, where t0 ∈ I and x0 ∈ R^n.
Theorem: (Existence and Uniqueness)
Let A(t) and f(t) be continuous on I and t0 ∈ I. Then, for any choice of x0 = (x_{1,0}, ..., x_{n,0})^T ∈ R^n, there exists a unique solution x(t) to the IVP
x'(t) = A(t) x(t) + f(t), x(t0) = x0
on the whole interval I.


Example: Consider the IVP:
x'(t) = [ t^3  tan t ; t  sin t ] x(t) + [ √(1 − t) ; 0 ], x(0) = [ −1 ; 1 ].
This IVP has a unique solution on the interval (−π/2, 1).


Definition: The Wronskian of n vector functions x1(t) = (x_{1,1}, ..., x_{n,1})^T, ..., xn(t) = (x_{1,n}, ..., x_{n,n})^T is defined as
W(x1, ..., xn)(t) := det[x1 x2 ... xn] =
| x_{1,1}(t)  x_{1,2}(t)  ···  x_{1,n}(t) |
| x_{2,1}(t)  x_{2,2}(t)  ···  x_{2,n}(t) |
|     ⋮           ⋮                ⋮       |
| x_{n,1}(t)  x_{n,2}(t)  ···  x_{n,n}(t) |

SU/KSK MA-102 (2018)


Systems of First Order Differential Equations

Theorem: Let A(t) be an n × n matrix of continuous functions.
If x_1, x_2, . . . , x_n are linearly independent solutions to
x′(t) = A(t) x on I, then W(t) := det[x_1 x_2 . . . x_n] ≠ 0 on I.

Proof. Suppose W(t_0) = 0 at some point t_0 ∈ I. Then
x_1(t_0), x_2(t_0), . . . , x_n(t_0) are linearly dependent, so there
exist scalars c_1, . . . , c_n, not all zero, such that

    c_1 x_1(t_0) + c_2 x_2(t_0) + · · · + c_n x_n(t_0) = 0.

Note that c_1 x_1(t) + · · · + c_n x_n(t) and z(t) = 0 are both
solutions to x′(t) = A(t) x(t), x(t_0) = 0 on I, and
Σ_{i=1}^n c_i x_i(t_0) = z(t_0) = 0. By the existence and
uniqueness theorem,

    c_1 x_1(t) + c_2 x_2(t) + · · · + c_n x_n(t) = 0,  ∀ t ∈ I,

which contradicts the fact that x_1, . . . , x_n are linearly
independent. Hence, W(t_0) ≠ 0. Since t_0 ∈ I is arbitrary, the
result follows.
Theorem: (Abel's formula)
If x_1, . . . , x_n are n solutions to x′(t) = A(t) x(t) on an interval
I and t_0 is any point of I, then for all t ∈ I,

    W(t) = W(t_0) exp( ∫_{t_0}^{t} Σ_{i=1}^{n} a_{ii}(s) ds ),

where the a_{ii} are the main diagonal entries of A.

Proof: Prove for n = 3. (See Theorem 11.12 of Ross's book.)

Fact:
• The Wronskian of solutions to x′(t) = A(t) x(t) is either
  identically zero or never zero on I.
• A set of n solutions to x′(t) = A(t) x(t) on I is linearly
  independent on I if and only if W(x_1, . . . , x_n)(t) ≠ 0 on I.
Representation of Solutions

Theorem: (Homogeneous case)
Let x_1, . . . , x_n be n linearly independent solutions to

    x′(t) = A(t) x(t),  t ∈ I,

where A(t) is continuous on I. Then every solution to
x′(t) = A(t) x(t) can be expressed in the form

    x(t) = c_1 x_1(t) + · · · + c_n x_n(t),

where the c_i are constants.

Definition: A set {x_1, . . . , x_n} of n linearly independent
solutions to

    x′(t) = A(t) x(t),  t ∈ I                                    (∗)

is called a fundamental solution set for (∗) on I.
The matrix Φ(t) defined by

    Φ(t) := [x_1(t) x_2(t) . . . x_n(t)]

          = [ x_{1,1}(t)  x_{1,2}(t)  · · ·  x_{1,n}(t) ]
            [ x_{2,1}(t)  x_{2,2}(t)  · · ·  x_{2,n}(t) ]
            [    ...          ...              ...      ]
            [ x_{n,1}(t)  x_{n,2}(t)  · · ·  x_{n,n}(t) ]

is called a fundamental matrix for (∗).

Note: 1. We can use Φ(t) to express the general solution

    x(t) = c_1 x_1(t) + · · · + c_n x_n(t) = Φ(t) c,  where c = (c_1, . . . , c_n)^T.

2. Since det Φ(t) = W(x_1, . . . , x_n) ≠ 0 on I, the matrix Φ(t) is
invertible for every t ∈ I.
Theorem: There exist fundamental sets of solutions of the
homogeneous linear system of differential equations

    (d/dt) x = A(t) x.
Example: The set {x_1, x_2, x_3}, where

    x_1 = [ e^{2t} ],   x_2 = [ −e^{−t} ],   x_3 = [    0    ],
          [ e^{2t} ]          [    0    ]          [  e^{−t} ]
          [ e^{2t} ]          [  e^{−t} ]          [ −e^{−t} ]

is a fundamental solution set for the system x′(t) = A x(t)
on R, where

    A = [ 0  1  1 ]
        [ 1  0  1 ]
        [ 1  1  0 ].

Note that A x_i(t) = x_i′(t), i = 1, 2, 3. Further,

    W(t) = | e^{2t}   −e^{−t}     0     |
           | e^{2t}     0       e^{−t}  |  = −3 ≠ 0.
           | e^{2t}    e^{−t}  −e^{−t}  |

The fundamental matrix is

    Φ(t) = [ e^{2t}  −e^{−t}    0     ]
           [ e^{2t}    0      e^{−t}  ]
           [ e^{2t}   e^{−t}  −e^{−t} ].

Thus, the GS is

    x(t) = Φ(t) c = c_1 [ e^{2t} ] + c_2 [ −e^{−t} ] + c_3 [    0    ].
                        [ e^{2t} ]       [    0    ]       [  e^{−t} ]
                        [ e^{2t} ]       [  e^{−t} ]       [ −e^{−t} ]
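As a numerical sanity check (not from the notes), one can verify in
Python that each column of Φ(t) satisfies x′ = A x and that
det Φ(t) = −3 for every t, exactly as Abel's formula predicts, since
trace A = 0:

    import numpy as np

    A = np.array([[0.0, 1.0, 1.0],
                  [1.0, 0.0, 1.0],
                  [1.0, 1.0, 0.0]])

    def Phi(t):   # the fundamental matrix written out above
        return np.array([[np.exp(2*t), -np.exp(-t),  0.0],
                         [np.exp(2*t),  0.0,         np.exp(-t)],
                         [np.exp(2*t),  np.exp(-t), -np.exp(-t)]])

    def dPhi(t):  # its entrywise derivative
        return np.array([[2*np.exp(2*t),  np.exp(-t),  0.0],
                         [2*np.exp(2*t),  0.0,        -np.exp(-t)],
                         [2*np.exp(2*t), -np.exp(-t),  np.exp(-t)]])

    for t in (0.0, 0.7, 1.3):
        assert np.allclose(dPhi(t), A @ Phi(t))  # columns solve x' = A x
        print(np.linalg.det(Phi(t)))             # -3.0 at every t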

Theorem: (Non-homogeneous case)
Let x_p be a particular solution to

    x′(t) = A(t) x(t) + f(t),  t ∈ I,                            (∗∗)

and let {x_1, . . . , x_n} be a fundamental solution set on I for
the corresponding homogeneous system x′(t) = A(t) x(t).
Then every solution to (∗∗) can be expressed in the form

    x(t) = c_1 x_1(t) + · · · + c_n x_n(t) + x_p(t)
         = Φ(t) c + x_p(t).

*** End ***
Homogeneous Linear Systems With Constant Coefficients

Department of Mathematics
IIT Guwahati
Homogeneous linear systems with constant coefficients

Consider the homogeneous system

    x′(t) = A x(t),                                              (1)

where A is a real n × n matrix.
Goal: To find a fundamental solution set for (1).

We seek solutions of the form x(t) = e^{λt} v, where λ is a
constant and v is a constant vector. Substituting gives

    λ e^{λt} v = e^{λt} A v  =⇒  (A − λI) v = 0.

Thus,

    x(t) = e^{λt} v is a solution of x′(t) = A x(t)
    ⇐⇒ λ and v satisfy (A − λI) v = 0.

(We are interested in the case v ≠ 0.)
(A − λI) v = 0 has a nontrivial solution ⇐⇒ det(A − λI) = 0.

Recall:
λ is an eigenvalue of A ⇐⇒ P(λ) = 0,
where P(λ) = det(A − λI) is called the characteristic
polynomial of A.
Finding the eigenvalues of A is equivalent to finding the
zeros of P(λ). The equation P(λ) = 0 is called the characteristic
equation of A.

Observation: e^{λt} v is a solution to x′ = A x if λ is an
eigenvalue and v is a corresponding eigenvector of A.

Q. Can we obtain n linearly independent solutions to x′ = A x
by finding all the eigenvalues and eigenvectors of A?
Some essential results from linear algebra

Theorem: Let A be an n × n matrix. The following statements
are equivalent:
• A is singular.
• det A = 0.
• Ax = 0 has a nontrivial solution (x ≠ 0).
• The columns of A form a linearly dependent set.

Definition: (Eigenvalues and Eigenvectors)
The numbers λ for which

    (A − λI) v = 0

has at least one nontrivial solution v are called eigenvalues of
A. The corresponding nontrivial solutions are called the
eigenvectors of A associated with λ.
Example: Find the eigenvalues and eigenvectors of the matrix

    A = [ 2  −3 ]
        [ 1  −2 ].

    det(A − λI) = | 2−λ    −3  | = λ^2 − 1 = 0.
                  |  1    −2−λ |

So λ_1 = 1, λ_2 = −1. To find the eigenvectors corresponding to
λ_1 = 1, we solve

    (A − λ_1 I) v = 0  =⇒  [ 1  −3 ] [ v_1 ] = [ 0 ].
                           [ 1  −3 ] [ v_2 ]   [ 0 ]

The eigenvectors associated with λ_1 = 1 are

    v_1 = r [ 3 ],  r ∈ R \ {0}.
            [ 1 ]

Similarly, for λ_2 = −1, we solve

    (A − λ_2 I) v = 0  =⇒  [ 3  −3 ] [ v_1 ] = [ 0 ].
                           [ 1  −1 ] [ v_2 ]   [ 0 ]

The eigenvectors associated with λ_2 = −1 are

    v_2 = r [ 1 ],  r ∈ R \ {0}.
            [ 1 ]

Theorem: If λ_1, . . . , λ_n are distinct eigenvalues of A and v_i is
an eigenvector associated with λ_i, then v_1, . . . , v_n are linearly
independent.
Finding the general solution to x′ = A x

Theorem: Suppose A = (a_{ij})_{n×n} has n linearly independent
eigenvectors v_1, v_2, . . . , v_n. Let λ_i be the eigenvalue
corresponding to v_i. Then

    {e^{λ_1 t} v_1, e^{λ_2 t} v_2, . . . , e^{λ_n t} v_n}

is a fundamental solution set on R for x′ = A x, and the
general solution (GS) of x′ = A x is

    x(t) = c_1 e^{λ_1 t} v_1 + c_2 e^{λ_2 t} v_2 + · · · + c_n e^{λ_n t} v_n,

where c_1, . . . , c_n are arbitrary constants.

Proof.

    W(t) = det[e^{λ_1 t} v_1, . . . , e^{λ_n t} v_n]
         = e^{(λ_1 + · · · + λ_n) t} det[v_1, . . . , v_n] ≠ 0.

Thus, {e^{λ_1 t} v_1, . . . , e^{λ_n t} v_n} is a fundamental solution
set, and hence the GS is given by

    x(t) = c_1 e^{λ_1 t} v_1 + c_2 e^{λ_2 t} v_2 + · · · + c_n e^{λ_n t} v_n.
Example: Find the GS of

    x′(t) = A x(t),  where  A = [ 2  −3 ].
                                [ 1  −2 ]

The eigenvalues are λ_1 = 1 and λ_2 = −1. The corresponding
eigenvectors (with r = 1) are

    v_1 = [ 3 ]  and  v_2 = [ 1 ].
          [ 1 ]             [ 1 ]

The GS is

    x(t) = c_1 e^{t} [ 3 ] + c_2 e^{−t} [ 1 ].
                     [ 1 ]              [ 1 ]
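The same eigenpairs can be computed numerically. A small illustrative
sketch (not part of the notes); NumPy returns unit-norm eigenvectors,
so they are scalar multiples of the vectors above:

    import numpy as np

    A = np.array([[2.0, -3.0],
                  [1.0, -2.0]])
    lam, V = np.linalg.eig(A)
    print(lam)                   # 1 and -1 (order may vary)
    for j in range(2):
        v = V[:, j] / V[1, j]    # rescale so the second entry equals 1
        print(lam[j], v)         # multiples of (3, 1)^T and (1, 1)^T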

Uncoupling Normal Systems

We know the GS to the scalar equation x′(t) = a x(t) is
x(t) = c e^{at}, where c = x(0).
The easiest normal systems to solve are systems of the form

    x′(t) = D x(t),

where D is an n × n diagonal matrix. Such a system actually
consists of n uncoupled equations

    x_i′(t) = d_{ii} x_i(t),  i = 1, . . . , n,

whose solutions are

    x_i(t) = c_i e^{d_{ii} t},

where the c_i are constants (c_i = x_i(0)).
Example: Consider the uncoupled system

    x_1′(t) = −x_1(t),
    x_2′(t) = 2 x_2(t).

Write this system in the matrix form x′(t) = A x(t), where

    A = [ −1  0 ].
        [  0  2 ]

The method of separation of variables yields the GS

    x_1(t) = c_1 e^{−t},
    x_2(t) = c_2 e^{2t}.

In matrix form,

    x(t) = [ e^{−t}    0    ] c,  where c = x(0).
           [   0    e^{2t}  ]
Diagonalization Technique

The diagonalization technique is used to reduce the linear
system

    x′(t) = A x(t)

to an uncoupled linear system.

Theorem: If the eigenvalues λ_1, λ_2, . . . , λ_n of A are real and
distinct, then any set of corresponding eigenvectors
{v_1, v_2, . . . , v_n} forms a basis for R^n. The matrix

    P = [v_1, v_2, . . . , v_n]

is invertible and P^{−1} A P = diag[λ_1, . . . , λ_n].
Reducing the system x′ = A x to an uncoupled system:

Define y = P^{−1} x. Then x = P y. Now,

    y′ = P^{−1} x′
       = P^{−1} A P y
       = diag[λ_1, . . . , λ_n] y.

The uncoupled linear system has the solution

    y(t) = diag[e^{λ_1 t}, . . . , e^{λ_n t}] y(0).

Since y(0) = P^{−1} x(0) and x(t) = P y(t), it follows that

    x(t) = P diag[e^{λ_1 t}, . . . , e^{λ_n t}] P^{−1} x(0).
Example: Consider x_1′ = −x_1 − 3x_2;  x_2′ = 2x_2. Here

    A = [ −1  −3 ].
        [  0   2 ]

The eigenvalues of A are λ_1 = −1 and λ_2 = 2. A pair of
corresponding eigenvectors is

    v_1 = [ 1 ],   v_2 = [ −1 ].
          [ 0 ]          [  1 ]

The matrices are

    P = [ 1  −1 ]  and  P^{−1} = [ 1  1 ].
        [ 0   1 ]                [ 0  1 ]

Note that

    P^{−1} A P = [ −1  0 ].
                 [  0  2 ]

We obtain the uncoupled linear system

    y_1′ = −y_1,   y_2′ = 2 y_2,

whose GS is given by

    y_1(t) = c_1 e^{−t},   y_2(t) = c_2 e^{2t}.

The GS to the original system is

    x(t) = P [ e^{−t}    0    ] P^{−1} c,  c = x(0),
             [   0    e^{2t}  ]

i.e.,

    x_1(t) = c_1 e^{−t} + c_2 (e^{−t} − e^{2t}),   x_2(t) = c_2 e^{2t}.
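A minimal Python sketch (not from the notes, and assuming real distinct
eigenvalues as in this example) of the solution formula
x(t) = P diag(e^{λ_i t}) P^{−1} x(0), checked against the GS above with
c_1 = c_2 = 1:

    import numpy as np

    A = np.array([[-1.0, -3.0],
                  [ 0.0,  2.0]])
    lam, P = np.linalg.eig(A)            # columns of P are eigenvectors
    Pinv = np.linalg.inv(P)

    def x(t, x0):
        return P @ np.diag(np.exp(lam * t)) @ Pinv @ x0

    t, x0 = 0.5, np.array([1.0, 1.0])    # so c1 = c2 = 1
    print(x(t, x0))
    print([2*np.exp(-t) - np.exp(2*t), np.exp(2*t)])   # matches the GS above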

*** End ***


Homogeneous Linear Systems With Constant Coefficients Contd...

Department of Mathematics
IIT Guwahati
We shall extend techniques for scalar differential equations to
systems.
For example, a GS to x′(t) = a x(t), where a is a constant, is
x(t) = c e^{at}. Analogously, we shall show that a GS to the
system

    x′(t) = A x(t),

where A is a constant matrix, is

    x(t) = e^{At} c.

Task: To define the matrix exponential e^{At}.
Let A be an n × n matrix. The norm of A is defined by

    ‖A‖ = max_{|x| ≤ 1} |Ax|,  where |x| = √(x_1^2 + · · · + x_n^2).

Note: In fact, in this case we can explicitly compute the norm as

    ‖A‖ = max_{i=1,...,n} √(λ_i),

where λ_1, λ_2, · · · , λ_n are the eigenvalues of A^T A.
Definition: A series of functions Σ_{n=1}^∞ f_n(t) is said to converge
uniformly on a set E if the sequence {S_n(t)} of partial sums, defined by

    S_n(t) = Σ_{k=1}^{n} f_k(t),

converges uniformly on E; that is, for every ε > 0, there exists a
natural number n_ε (independent of t) such that for all n ≥ n_ε and
for all t ∈ E,

    |S_n(t) − S(t)| < ε,

where S(t) = Σ_{k=1}^∞ f_k(t).
Recall that

    1 + x + x^2/2! + x^3/3! + · · · = e^x,

i.e.,

    lim_{n→∞} (1 + x + x^2/2! + x^3/3! + · · · + x^n/n!) = e^x.

What is the analogue for matrices:

    I + A + A^2/2! + A^3/3! + · · · = ?

    lim_{n→∞} (I + A + A^2/2! + A^3/3! + · · · + A^n/n!) = ?

Theorem: The series of matrices Σ_{k=0}^∞ A^k/k! converges absolutely
to a matrix.

Proof: Let ‖A‖ = a. Then

    ‖A^k/k!‖ ≤ ‖A‖^k/k! = a^k/k! = M_k,

and

    Σ_{k=0}^∞ M_k = Σ_{k=0}^∞ a^k/k! = e^a,  which converges.

Therefore, by the Weierstrass M-test, the series Σ_{k=0}^∞ A^k/k!
converges absolutely to a matrix.

The limit is denoted by e^A; that is, we write

    e^A = Σ_{k=0}^∞ A^k/k!.
What about e^{At}, t ∈ R?

Theorem: The series of matrices Σ_{k=0}^∞ A^k t^k / k! converges
absolutely to a matrix for each t ∈ R.
Proof: Exercise.

The limit is denoted by e^{At}; that is, we write

    e^{At} = Σ_{k=0}^∞ A^k t^k / k!,  t ∈ R.
Theorem: For a given n × n matrix A and t_0 > 0, the series

    Σ_{k=0}^∞ A^k t^k / k!

is absolutely and uniformly convergent for all |t| ≤ t_0.

Proof. Let ‖A‖ = a. Then for |t| ≤ t_0,

    ‖A^k t^k / k!‖ ≤ ‖A‖^k |t|^k / k! ≤ a^k t_0^k / k! = M_k.

But Σ_{k=0}^∞ a^k t_0^k / k! = e^{a t_0}. By the Weierstrass M-test,
the series Σ_{k=0}^∞ A^k t^k / k! is absolutely and uniformly
convergent for all |t| ≤ t_0.
If A is a diagonal matrix, then the computation of e^{At} is
simple.

Example: Let A = [ −1 0; 0 2 ]. Then

    A^2 = [ 1  0 ],  A^3 = [ −1  0 ],  · · · ,  A^n = [ (−1)^n   0  ].
          [ 0  4 ]         [  0  8 ]                  [   0     2^n ]

    e^{At} = Σ_{k=0}^∞ A^k t^k / k!

           = [ Σ_{k=0}^∞ (−1)^k t^k/k!            0             ]
             [           0             Σ_{k=0}^∞ 2^k t^k/k!     ]

           = [ e^{−t}    0    ].
             [   0    e^{2t}  ]
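SciPy implements the matrix exponential directly. A quick illustrative
check (not from the notes) that it agrees with the diagonal formula:

    import numpy as np
    from scipy.linalg import expm

    A = np.array([[-1.0, 0.0],
                  [ 0.0, 2.0]])
    t = 1.5
    print(expm(A * t))                            # scipy's matrix exponential
    print(np.diag([np.exp(-t), np.exp(2 * t)]))   # same matrix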

If A is a block diagonal matrix,

    A = diag[A_1, A_2, . . . , A_k],

then we can compute e^{At} blockwise:

    e^{At} = diag[e^{A_1 t}, e^{A_2 t}, . . . , e^{A_k t}].

The proof follows by the principle of mathematical induction.
Theorem: Let A and B be n × n constant matrices and
r, s, t ∈ R (or ∈ C). Then
• e^{A·0} = e^0 = I.
• e^{A(t+s)} = e^{At} e^{As}.
• (e^{At})^{−1} = e^{−At}.
• e^{(A+B)t} = e^{At} e^{Bt}, provided that AB = BA.
• e^{rIt} = e^{rt} I.

Theorem: If P and A are n × n matrices and P A P^{−1} = B, then

    e^{Bt} = P e^{At} P^{−1}.

Proof. Using the definition of e^{At},

    e^{Bt} = lim_{n→∞} Σ_{k=0}^{n} (P A P^{−1})^k t^k / k!
           = P ( lim_{n→∞} Σ_{k=0}^{n} (At)^k / k! ) P^{−1} = P e^{At} P^{−1}.
Corollary: If P^{−1} A P = diag[λ_j], then e^{At} = P diag[e^{λ_j t}] P^{−1}.

Corollary: If A = [ a −b; b a ], then

    e^A = e^a [ cos b  − sin b ].
              [ sin b    cos b ]

Proof. If λ = a + ib, it follows by induction that

    [ a  −b ]^k = [ Re(λ^k)  −Im(λ^k) ].
    [ b   a ]     [ Im(λ^k)   Re(λ^k) ]

Thus,

    e^A = Σ_{k=0}^∞ [ Re(λ^k/k!)  −Im(λ^k/k!) ]
                    [ Im(λ^k/k!)   Re(λ^k/k!) ]

        = [ Re(e^λ)  −Im(e^λ) ] = e^a [ cos b  − sin b ].
          [ Im(e^λ)   Re(e^λ) ]       [ sin b    cos b ]

If a = 0, then e^A is simply a rotation through b radians.
Lemma: Let A be a square matrix. Then

    (d/dt) e^{At} = A e^{At}.

Proof. We have

    (d/dt) e^{At} = lim_{h→0} (e^{A(t+h)} − e^{At}) / h
                  = e^{At} lim_{h→0} (e^{Ah} − I) / h
                  = e^{At} lim_{h→0} lim_{k→∞} ( A + A^2 h/2! + · · · + A^k h^{k−1}/k! )
                  = e^{At} A lim_{h→0} lim_{k→∞} f_k(h),

where f_k(h) = I + A h/2! + · · · + A^{k−1} h^{k−1}/k!. Why can the two
limits be interchanged? Because f_k → f uniformly for |h| ≤ 1 (see the
next theorem), so the inner limit may be taken first, and it equals I
as h → 0. Hence (d/dt) e^{At} = A e^{At}.
Theorem: Suppose f_n → f uniformly on a set E in a metric space. Let x
be a limit point of E, and suppose that

    lim_{t→x} f_n(t) = a_n,  n = 1, 2, · · · .

Then the sequence {a_n} converges and lim_{t→x} f(t) = lim_{n→∞} a_n.
In other words,

    lim_{t→x} lim_{n→∞} f_n(t) = lim_{n→∞} lim_{t→x} f_n(t).
Note that

    (d/dt)(e^{At}) = A e^{At},

i.e., each column of e^{At} is a solution to the system of
differential equations x′(t) = A x(t).
Since e^{At} is invertible, it follows that the columns of e^{At} are
linearly independent and form a fundamental solution set for
x′(t) = A x(t).

Theorem: If A is an n × n constant matrix, then the columns
of e^{At} form a fundamental solution set for

    x′(t) = A x(t).

Therefore, e^{At} is a fundamental matrix for the system, and a
GS is

    x(t) = e^{At} c = Φ(t) c.
When A has complex eigenvalues

Theorem: Let A be a real matrix of size 2n × 2n. If A has 2n
distinct complex eigenvalues λ_j = a_j + i b_j and
λ̄_j = a_j − i b_j, with corresponding complex eigenvectors
w_j = u_j + i v_j and w̄_j = u_j − i v_j, j = 1, . . . , n, then
{u_1, v_1, . . . , u_n, v_n} is a basis for R^{2n}, the matrix

    P = [v_1 u_1 v_2 u_2 · · · v_n u_n]

is invertible, and

    P^{−1} A P = diag[ a_j  −b_j ],
                     [ b_j   a_j ]

a real 2n × 2n matrix with 2 × 2 blocks along the diagonal.

Remark: If instead of P we use

    Q = [u_1 v_1 u_2 v_2 · · · u_n v_n],

then

    Q^{−1} A Q = diag[  a_j  b_j ].
                     [ −b_j  a_j ]

Using the above result, a fundamental matrix Φ(t) is computed as

    Φ(t) = e^{At} = P diag( e^{a_j t} [ cos(b_j t)  − sin(b_j t) ] ) P^{−1}.
                                      [ sin(b_j t)    cos(b_j t) ]

The general solution of x′ = A x is given by

    x(t) = P diag( e^{a_j t} [ cos(b_j t)  − sin(b_j t) ] ) P^{−1} x(0).
                             [ sin(b_j t)    cos(b_j t) ]
Example: Find a fundamental matrix and write the GS of
x′ = A x with

    A = [ 1  −1  0   0 ]
        [ 1   1  0   0 ]
        [ 0   0  3  −2 ]
        [ 0   0  1   1 ].

A has complex eigenvalues λ_1 = 1 + i, λ_2 = 2 + i (as well as
λ̄_1 = 1 − i, λ̄_2 = 2 − i). A corresponding pair of complex
eigenvectors is

    w_1 = u_1 + i v_1 = [i 1 0 0]^T  and  w_2 = u_2 + i v_2 = [0 0 1+i 1]^T.

The matrices are

    P = [v_1 u_1 v_2 u_2] = [ 1 0 0 0 ],   P^{−1} = [ 1 0 0  0 ],
                            [ 0 1 0 0 ]             [ 0 1 0  0 ]
                            [ 0 0 1 1 ]             [ 0 0 1 −1 ]
                            [ 0 0 0 1 ]             [ 0 0 0  1 ]

and

    P^{−1} A P = [ 1  −1  0   0 ].
                 [ 1   1  0   0 ]
                 [ 0   0  2  −1 ]
                 [ 0   0  1   2 ]

The fundamental matrix e^{At} is given by

    e^{At} = P [ e^t cos t  −e^t sin t       0              0        ] P^{−1}
               [ e^t sin t   e^t cos t      0              0        ]
               [    0           0      e^{2t} cos t   −e^{2t} sin t ]
               [    0           0      e^{2t} sin t    e^{2t} cos t ]

           = [ e^t cos t  −e^t sin t          0                       0              ]
             [ e^t sin t   e^t cos t         0                       0              ]
             [    0           0      e^{2t}(cos t + sin t)    −2 e^{2t} sin t       ]
             [    0           0      e^{2t} sin t             e^{2t}(cos t − sin t) ].

The general solution is

    x(t) = Φ(t) x(0).
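Since entries like these are easy to mistype, here is a short numerical
check (not from the notes) that the closed-form matrix above agrees
with SciPy's matrix exponential:

    import numpy as np
    from scipy.linalg import expm

    A = np.array([[1.0, -1.0, 0.0,  0.0],
                  [1.0,  1.0, 0.0,  0.0],
                  [0.0,  0.0, 3.0, -2.0],
                  [0.0,  0.0, 1.0,  1.0]])
    t = 0.8
    c, s = np.cos(t), np.sin(t)
    Phi = np.array([
        [np.exp(t)*c, -np.exp(t)*s, 0.0,                  0.0],
        [np.exp(t)*s,  np.exp(t)*c, 0.0,                  0.0],
        [0.0, 0.0, np.exp(2*t)*(c + s), -2*np.exp(2*t)*s],
        [0.0, 0.0, np.exp(2*t)*s,        np.exp(2*t)*(c - s)]])
    print(np.allclose(expm(A * t), Phi))   # True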



When A has both real and complex eigenvalues

Theorem: If A has distinct real eigenvalues λ_j with
corresponding eigenvectors v_j, j = 1, . . . , k, and distinct
complex eigenvalues λ_j = a_j + i b_j and λ̄_j = a_j − i b_j with
corresponding eigenvectors w_j = u_j + i v_j and w̄_j = u_j − i v_j,
j = k + 1, . . . , n, then the matrix

    P = [v_1 · · · v_k v_{k+1} u_{k+1} · · · v_n u_n]

is invertible and

    P^{−1} A P = diag[λ_1, . . . , λ_k, B_{k+1}, . . . , B_n],

where

    B_j = [ a_j  −b_j ]  for j = k + 1, . . . , n.
          [ b_j   a_j ]
Example: Find a fundamental matrix and write the GS of
x′ = A x, with

    A = [ −3  0   0 ]
        [  0  3  −2 ].
        [  0  1   1 ]

The eigenvalues are λ_1 = −3 and λ_2 = 2 + i (with λ̄_2 = 2 − i).
The corresponding eigenvectors are

    v_1 = [1 0 0]^T  and  w_2 = u_2 + i v_2 = [0 1+i 1]^T,

so

    P = [ 1 0 0 ],   P^{−1} = [ 1 0  0 ],   P^{−1} A P = [ −3 0  0 ].
        [ 0 1 1 ]             [ 0 1 −1 ]                 [  0 2 −1 ]
        [ 0 0 1 ]             [ 0 0  1 ]                 [  0 1  2 ]

The fundamental matrix e^{At} is given by

    e^{At} = P [ e^{−3t}       0              0        ] P^{−1}
               [   0     e^{2t} cos t   −e^{2t} sin t  ]
               [   0     e^{2t} sin t    e^{2t} cos t  ]

           = [ e^{−3t}        0                       0              ]
             [   0      e^{2t}(cos t + sin t)   −2 e^{2t} sin t      ]
             [   0      e^{2t} sin t            e^{2t}(cos t − sin t) ].

The GS is

    x(t) = e^{At} x(0).

*** End ***
Homogeneous Linear Systems with Repeated Eigenvalues and
Nonhomogeneous Linear Systems

Department of Mathematics
IIT Guwahati
Repeated real eigenvalues

Q. How to compute the fundamental matrix of

    x′(t) = A x(t)

when A has repeated eigenvalues?

Definition: Let λ be an eigenvalue of A of multiplicity m ≤ n.
Then, for k = 1, . . . , m, any nonzero solution v of

    (A − λI)^k v = 0

is called a generalized eigenvector (GEV) of A.

Definition: An n × n matrix N is said to be nilpotent of order k
if N^{k−1} ≠ 0 and N^k = 0.
Theorem: Let λ_1, . . . , λ_n be the real eigenvalues of an n × n
matrix A, repeated according to their multiplicity. Then there
exists a basis of generalized eigenvectors for R^n. If v_1, . . . , v_n
is any such basis, then

    the matrix P = [v_1, . . . , v_n] is invertible,
    A = S + N, where P^{−1} S P = diag[λ_j],
    the matrix N = A − S is nilpotent of order k ≤ n, and
    S N = N S.

Using the above theorem, we have the following result.

Theorem:

    e^{At} = P diag[e^{λ_j t}] P^{−1} ( I + N t + · · · + N^{k−1} t^{k−1}/(k−1)! ).
Example: Find a fundamental matrix and write a GS of
x′ = A x, where

    A = [  1  0  0 ].
        [ −1  2  0 ]
        [  1  1  2 ]

The eigenvalues of A are λ_1 = 1, λ_2 = λ_3 = 2. The
corresponding eigenvectors are

    v_1 = [  1 ]  and  v_2 = [ 0 ].
          [  1 ]             [ 0 ]
          [ −2 ]             [ 1 ]

One GEV corresponding to λ = 2 and independent of v_2 is
obtained by solving

    (A − 2I)^2 v = [  1  0  0 ] v = 0.
                   [  1  0  0 ]
                   [ −2  0  0 ]

Choose v_3 = (0, 1, 0)^T. The matrix P is then given by

    P = [  1  0  0 ]  and  P^{−1} = [  1  0  0 ].
        [  1  0  1 ]                [  2  0  1 ]
        [ −2  1  0 ]                [ −1  1  0 ]

Then determine S as

    S = P [ 1 0 0 ] P^{−1} = [  1  0  0 ],
          [ 0 2 0 ]          [ −1  2  0 ]
          [ 0 0 2 ]          [  2  0  2 ]

    N = A − S = [  0  0  0 ],  and  N^2 = 0.
                [  0  0  0 ]
                [ −1  1  0 ]

The fundamental matrix e^{At} is then given by

    e^{At} = P [ e^t    0      0    ] P^{−1} (I + N t)
               [ 0    e^{2t}   0    ]
               [ 0      0    e^{2t} ]

           = [ e^t                       0        0      ].
             [ e^t − e^{2t}              e^{2t}   0      ]
             [ −2e^t + (2 − t) e^{2t}   t e^{2t}  e^{2t} ]

The GS is

    x(t) = e^{At} x(0).
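As a check (not from the notes) of the generalized-eigenvector
computation above, one can confirm numerically that
P diag(e^{λ t}) P^{−1} (I + N t) equals SciPy's matrix exponential:

    import numpy as np
    from scipy.linalg import expm

    A = np.array([[ 1.0, 0.0, 0.0],
                  [-1.0, 2.0, 0.0],
                  [ 1.0, 1.0, 2.0]])
    S = np.array([[ 1.0, 0.0, 0.0],
                  [-1.0, 2.0, 0.0],
                  [ 2.0, 0.0, 2.0]])
    N = A - S                              # nilpotent: N @ N == 0
    P = np.array([[ 1.0, 0.0, 0.0],
                  [ 1.0, 0.0, 1.0],
                  [-2.0, 1.0, 0.0]])
    t = 0.6
    eSt = P @ np.diag(np.exp(np.array([1.0, 2.0, 2.0]) * t)) @ np.linalg.inv(P)
    print(np.allclose(eSt @ (np.eye(3) + N * t), expm(A * t)))   # True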

Repeated complex eigenvalues

Theorem: Let A be a real 2n × 2n matrix with complex
eigenvalues

    λ_j = a_j + i b_j  and  λ̄_j = a_j − i b_j,  j = 1, . . . , n.

Then there exist generalized complex eigenvectors

    w_j = u_j + i v_j  and  w̄_j = u_j − i v_j,  j = 1, . . . , n,

such that {u_1, v_1, . . . , u_n, v_n} is a basis for R^{2n}. The matrix

    P = [v_1 u_1 · · · v_n u_n]  is invertible,

    A = S + N,  where  P^{−1} S P = diag[ a_j  −b_j ].
                                        [ b_j   a_j ]

The matrix N = A − S is nilpotent of order k ≤ 2n, and
S N = N S. Moreover,

    e^{At} = P diag( e^{a_j t} [ cos(b_j t)  − sin(b_j t) ] ) P^{−1}
                               [ sin(b_j t)    cos(b_j t) ]
             × ( I + N t + · · · + N^{k−1} t^{k−1}/(k−1)! ).
Example: Find a fundamental matrix and write the GS of
x′(t) = A x(t), where

    A = [ 0  −1  0   0 ].
        [ 1   0  0   0 ]
        [ 0   0  0  −1 ]
        [ 2   0  1   0 ]

The matrix A has eigenvalues λ = i and λ̄ = −i, each of
multiplicity 2. To find the eigenvectors, we solve the equations

    (A − λI) w = 0,   (A − λI)^2 w = 0.

Now (A − λI) w = 0 =⇒ z_1 = z_2 = 0 and z_3 = i z_4. Thus,
we have one eigenvector w_1 = (0, 0, i, 1)^T. The equation

    (A − λI)^2 w = [ −2    2i    0    0  ] [ z_1 ] = 0
                   [ −2i  −2     0    0  ] [ z_2 ]
                   [ −2    0    −2   2i  ] [ z_3 ]
                   [ −4i  −2   −2i   −2  ] [ z_4 ]

    =⇒ z_1 = i z_2  and  z_3 = i z_4 − z_1.

We now choose the GEV w_2 = (i, 1, 0, 1)^T. Then
u_1 = (0, 0, 0, 1)^T, v_1 = (0, 0, 1, 0)^T, u_2 = (0, 1, 0, 1)^T, and
v_2 = (1, 0, 0, 0)^T. The matrices P and P^{−1} are given by

    P = [ 0 0 1 0 ],   P^{−1} = [ 0  0 1 0 ],
        [ 0 0 0 1 ]             [ 0 −1 0 1 ]
        [ 1 0 0 0 ]             [ 1  0 0 0 ]
        [ 0 1 0 1 ]             [ 0  1 0 0 ]

    S = P [ 0 −1 0  0 ] P^{−1} = [ 0 −1 0  0 ],
          [ 1  0 0  0 ]          [ 1  0 0  0 ]
          [ 0  0 0 −1 ]          [ 0  1 0 −1 ]
          [ 0  0 1  0 ]          [ 1  0 1  0 ]

    N = A − S = [ 0  0 0 0 ],  and  N^2 = 0.
                [ 0  0 0 0 ]
                [ 0 −1 0 0 ]
                [ 1  0 0 0 ]

The fundamental matrix e^{At} is given by

    e^{At} = P [ cos t  − sin t    0       0    ] P^{−1} (I + N t)
               [ sin t    cos t    0       0    ]
               [   0        0    cos t  − sin t ]
               [   0        0    sin t    cos t ]

           = [    cos t              − sin t        0        0    ].
             [    sin t                cos t       0        0    ]
             [  −t sin t        sin t − t cos t   cos t  − sin t ]
             [ sin t + t cos t      −t sin t      sin t    cos t ]

The GS is

    x(t) = e^{At} x(0).

Remark. The case when A has both real and complex
repeated eigenvalues can be treated by combining the above
two theorems.
Nonhomogeneous linear systems

Recall that the GS to the nonhomogeneous system

    x′(t) = A x(t) + f(t)                                        (∗)

is given by

    x(t) = Φ(t) c + x_p(t),

where Φ(t) is a fundamental matrix for the corresponding
homogeneous system and x_p(t) is a particular solution to (∗).

Note: We know Φ(t) = e^{At} is a fundamental matrix satisfying
x′(t) = A x(t) with Φ(0) = I.
We shall now attempt to find a particular solution x_p(t) by
variation of parameters.
Theorem: If Φ(t) is a fundamental matrix of x′(t) = A(t) x(t)
on I, then the function

    x_p(t) = Φ(t) ∫_{t_0}^{t} Φ^{−1}(s) f(s) ds

is a solution to x′(t) = A(t) x(t) + f(t) on I.

Proof. Let Φ(t) be a fundamental matrix of the system
x′(t) = A(t) x(t) on I. We seek a particular solution x_p of the
form

    x_p(t) = Φ(t) v(t),

where v(t) is a vector function to be determined. Now

    x_p′(t) = Φ′(t) v(t) + Φ(t) v′(t)
            = A(t) Φ(t) v(t) + Φ(t) v′(t),

i.e., x_p(t) is a solution of the nonhomogeneous system if
Φ(t) v′(t) = f(t). But

    Φ(t) v′(t) = f(t)  =⇒  v(t) = ∫_{t_0}^{t} Φ^{−1}(s) f(s) ds,  t_0, t ∈ I.

Therefore,

    x_p(t) = Φ(t) ∫_{t_0}^{t} Φ^{−1}(s) f(s) ds.

Notice that

    x_p′(t) = Φ′(t) ∫_{t_0}^{t} Φ^{−1}(s) f(s) ds + Φ(t) Φ^{−1}(t) f(t)
            = A(t) Φ(t) ∫_{t_0}^{t} Φ^{−1}(s) f(s) ds + f(t)
            = A(t) x_p(t) + f(t),  ∀ t ∈ I.

Note that x_p(t_0) = 0.
Theorem: (The fundamental theorem for linear systems)
Let A be an n × n matrix. Then for a given x_0 ∈ R^n, the IVP

    x′(t) = A x(t),  x(0) = x_0

has a unique solution, given by x(t) = e^{At} x_0.

Proof. If x(t) = e^{At} x_0, then
x′(t) = (d/dt) e^{At} x_0 = A x(t), t ∈ R. Also, x(0) = I x_0 = x_0.

Uniqueness: Let x̃(t) be any solution of the IVP. Set
y(t) = e^{−At} x̃(t). Then

    y′(t) = −A e^{−At} x̃(t) + e^{−At} x̃′(t)
          = −A e^{−At} x̃(t) + e^{−At} A x̃(t) = 0,  for all t ∈ R,

so y(t) is a constant. Further, y(0) = x_0 =⇒ y(t) = x_0, ∀ t.
Therefore x̃(t) = e^{At} y(t) = e^{At} x_0 = x(t), ∀ t.
Theorem: (Uniqueness for the nonhomogeneous system) Let A be
an n × n matrix. If x_1(t) and x_2(t) are two solutions of a
given IVP

    x′(t) = A x(t) + f(t),  x(0) = x_0,  t ∈ I,

then x_1(t) = x_2(t), ∀ t ∈ I.

Proof: Define ψ(t) = x_1(t) − x_2(t), ∀ t ∈ I.
Observe that ψ is a solution of the IVP x′(t) = A x(t), x(0) = 0.
Therefore, by uniqueness,

    ψ(t) = 0, ∀ t ∈ I  =⇒  x_1(t) = x_2(t), ∀ t ∈ I.
Theorem: If Φ(t) is any fundamental matrix of
x′(t) = A(t) x(t), then the solution of the IVP

    x′(t) = A(t) x(t) + f(t),  x(t_0) = x_0

is given by

    x(t) = Φ(t) Φ^{−1}(t_0) x_0 + Φ(t) ∫_{t_0}^{t} Φ^{−1}(s) f(s) ds.

Proof: Differentiating, we obtain

    x′(t) = Φ′(t) Φ^{−1}(t_0) x_0 + Φ(t) Φ^{−1}(t) f(t)
            + Φ′(t) ∫_{t_0}^{t} Φ^{−1}(s) f(s) ds.

Since Φ′(t) = A(t) Φ(t), it follows that

    x′(t) = A(t) ( Φ(t) Φ^{−1}(t_0) x_0 + Φ(t) ∫_{t_0}^{t} Φ^{−1}(s) f(s) ds ) + f(t)
          = A(t) x(t) + f(t).
Alternate proof:
Φ(t) → fundamental matrix of x′(t) = A(t) x(t).

    x_p(t) = Φ(t) ∫_{t_0}^{t} Φ^{−1}(s) f(s) ds
        → a particular solution of x′(t) = A(t) x(t) + f(t).

GS of x′(t) = A(t) x(t) + f(t):  x(t) = Φ(t) c + x_p(t).

    x(t_0) = Φ(t_0) c + x_p(t_0)  =⇒  c = Φ(t_0)^{−1} x(t_0).

Hence the solution of x′(t) = A(t) x(t) + f(t), x(t_0) = x_0, is

    x(t) = Φ(t) Φ(t_0)^{−1} x(t_0) + x_p(t)
         = Φ(t) Φ(t_0)^{−1} x(t_0) + Φ(t) ∫_{t_0}^{t} Φ^{−1}(s) f(s) ds.
Remark. With Φ(t) = e^{At}, the solution of the IVP

    x′(t) = A x(t) + f(t),  x(0) = x_0

takes the form

    x(t) = e^{At} x_0 + e^{At} ∫_{0}^{t} e^{−As} f(s) ds.

Example: Solve x′(t) = A x(t) + f(t), x(0) = x_0, where

    A = [ 0  −1 ]  and  f(t) = [  0   ].
        [ 1   0 ]              [ f(t) ]

In this case,

    e^{At} = [ cos t  − sin t ] = Φ(t),
             [ sin t    cos t ]

    e^{−At} = [  cos t   sin t ] = Φ(−t).
              [ − sin t  cos t ]

The solution of the IVP is

    x(t) = e^{At} x_0 + e^{At} ∫_{0}^{t} e^{−As} f(s) ds
         = Φ(t) x_0 + Φ(t) ∫_{0}^{t} [ f(s) sin s ] ds.
                                     [ f(s) cos s ]
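A minimal numerical sketch of this variation-of-parameters formula (not
from the notes; the forcing f(t) = (0, 1)^T, the grid size, and the
trapezoid rule are illustrative choices), checked against a direct
numerical integration:

    import numpy as np
    from scipy.linalg import expm
    from scipy.integrate import solve_ivp

    A = np.array([[0.0, -1.0],
                  [1.0,  0.0]])
    x0 = np.array([1.0, 0.0])
    f = lambda s: np.array([0.0, 1.0])     # illustrative forcing

    def x_vp(t, n=2000):                   # trapezoid rule on the integral
        s = np.linspace(0.0, t, n)
        vals = np.array([expm(A * (t - si)) @ f(si) for si in s])
        return expm(A * t) @ x0 + np.trapz(vals, s, axis=0)

    sol = solve_ivp(lambda t, x: A @ x + f(t), (0.0, 2.0), x0,
                    rtol=1e-10, atol=1e-12)
    print(x_vp(2.0))
    print(sol.y[:, -1])                    # agrees to quadrature accuracy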

*** End ***
Stability of Linear Systems in R^2

Department of Mathematics
IIT Guwahati
A system of first-order differential equations is called
autonomous if the system can be written in the form

    dx_1/dt = g_1(x_1, x_2, . . . , x_n),
    dx_2/dt = g_2(x_1, x_2, . . . , x_n),
        ...
    dx_n/dt = g_n(x_1, x_2, . . . , x_n).

Observe that the variable t DOES NOT appear explicitly on
the right-hand side of each differential equation in the above
system. If n = 2, the system is called a plane autonomous
system.
Consider a plane autonomous system given by

    x_1′(t) = g_1(x_1, x_2),
    x_2′(t) = g_2(x_1, x_2).                                     (1)

A solution of the above system consists of a pair of functions
{x_1(t), x_2(t)} that satisfy (1) for all t ∈ I. The set of points

    { (x_1(t), x_2(t)) : t ∈ I }

in the x_1x_2-plane is called a trajectory, and the x_1x_2-plane is
referred to as the phase plane.
Definition: A point (a, b), where

    g_1(a, b) = 0  and  g_2(a, b) = 0,

is called a critical point of the system (1).
If (a, b) is a critical point, then the constant functions

    x_1(t) = a,  x_2(t) = b

form a solution to this system, called an equilibrium
solution. The point (a, b) is also called an equilibrium point.

Example: Consider the system

    x_1′(t) = −x_2 (x_2 − 2),
    x_2′(t) = (x_1 − 2)(x_2 − 2).

The critical points are (2, 0) and (c, 2), c ∈ R. The corresponding
equilibrium solutions are

    x_1(t) = 2, x_2(t) = 0;    x_1(t) = c, x_2(t) = 2.
Definition: Let x_0 ∈ R^n. For t ≥ 0 we define φ_t(x_0) = x(t), where
x(t) is the solution of the IVP x′ = f(x), x(0) = x_0.

Definition: An equilibrium point x_0 ∈ R^n of x′(t) = f(x) is

stable if for every ε > 0 there exists a δ = δ(ε) > 0 such that

    |φ_t(x̃) − x_0| < ε  ∀ t ≥ 0  whenever  |x̃ − x_0| < δ;

asymptotically stable if it is stable and there exists δ > 0 such
that

    lim_{t→∞} φ_t(x) = x_0,  ∀ x such that |x − x_0| < δ.

If x_0 is not stable, then it is unstable.

Remark: Asymptotic stability =⇒ stability.
Stability need not imply asymptotic stability; why?
For a linear system, it is sufficient to study the stability of
the zero solution to the homogeneous system. In the two
theorems below, A denotes an n × n matrix with real entries.

Theorem: An equilibrium point x̃ of the linear system

    x′(t) = A x(t)

is stable (asymptotically stable) iff x_0 = 0 is a stable (an
asymptotically stable) equilibrium point of x′(t) = A x(t).

Theorem: If all eigenvalues of A have negative real parts, then
the equilibrium point x_0 = 0 of

    x′(t) = A x(t)

is asymptotically stable.
However, if at least one eigenvalue of A has a positive real part,
then the zero equilibrium point is unstable.

We shall study the behaviour of trajectories near the equilibrium point.
Consider the linear system

    x′(t) = A x(t),  A a 2 × 2 matrix.                           (2)

We first describe the phase portraits (the set of all solution
curves in the phase space R^2) of the linear system

    y′(t) = B y(t),  where P^{−1} A P = B.                        (3)

The phase portrait for the linear system (2) is then obtained
from the phase portrait for (3) under the linear transformation
x = P y.

Recall:
• If B = [ λ 0; 0 μ ], the solution of the IVP y′(t) = B y(t)
  with y(0) = y_0 is

      y(t) = [ e^{λt}    0    ] y_0.
             [   0    e^{μt}  ]

• If B = [ λ 1; 0 λ ], the solution of the IVP y′(t) = B y(t)
  with y(0) = y_0 is

      y(t) = e^{λt} [ 1  t ] y_0.
                    [ 0  1 ]

• If B = [ a −b; b a ], the solution of the IVP y′(t) = B y(t)
  with y(0) = y_0 is

      y(t) = e^{at} [ cos bt  − sin bt ] y_0.
                    [ sin bt    cos bt ]

We now discuss the various phase portraits that result from
these solutions.

Case I. B = [ λ 0; 0 μ ] with λ < 0 < μ.
The system is said to have a saddle at the origin in this case.
If μ < 0 < λ, the arrows are reversed. Whenever A has two
real eigenvalues of opposite sign, the phase portrait of the
linear system is linearly equivalent to the phase portrait shown
in Fig. 1.
Case II. B = [ λ 0; 0 μ ] with λ ≤ μ < 0, or B = [ λ 1; 0 λ ] with λ < 0.
In each of these cases, the origin is referred to as a stable node. It is
called a proper node in the first case with λ = μ, and an improper node
in the other two cases. If λ ≥ μ > 0, or if λ > 0 in the second form of
B, the arrows are reversed and the origin is referred to as an unstable
node. The stability of the node is determined by the sign of the
eigenvalues: stable if λ ≤ μ < 0 and unstable if λ ≥ μ > 0.
Case III. B = [ a −b; b a ] with a < 0.
The origin is referred to as a stable focus in this case.
If a > 0, the trajectories spiral away from the origin with increasing t,
and the origin is called an unstable focus. Whenever A has a pair of
complex conjugate eigenvalues with nonzero real part, a ± ib, the phase
portrait for the system (3) is linearly equivalent to one of the phase
portraits described above.
Case IV. B = [ 0 −b; b 0 ].
The system (3) is said to have a center at the origin in this case.
Whenever A has a pair of purely imaginary complex conjugate
eigenvalues ±ib, the phase portrait of the linear system (2) is linearly
equivalent to one of the phase portraits described above. Note that the
trajectories (solution curves) of (3) lie on circles ‖y(t)‖ = constant.
In general, the trajectories of the system (2) will lie on ellipses.
Example: Consider the system x′(t) = A x(t), A = [ 0 −4; 1 0 ].
Note that A has eigenvalues λ = ±2i. The matrices P and P^{−1}
are

    P = [ 2  0 ]  and  P^{−1} = [ 1/2  0 ].
        [ 0  1 ]                [  0   1 ]

Thus

    B = P^{−1} A P = [ 0  −2 ].
                     [ 2   0 ]

For y′ = B y,

    y(t) = e^{Bt} c = [ cos 2t  − sin 2t ] [ c_1 ],
                      [ sin 2t    cos 2t ] [ c_2 ]

    y_1(t) = c_1 cos 2t − c_2 sin 2t,
    y_2(t) = c_1 sin 2t + c_2 cos 2t,

and y_1^2(t) + y_2^2(t) = c_1^2 + c_2^2, a circle.

The solution x(t) is then given by x = P y; with c = x(0),

    x_1(t) = c_1 cos 2t − 2 c_2 sin 2t,
    x_2(t) = (1/2) c_1 sin 2t + c_2 cos 2t.

Observe that

    x_1^2(t) + 4 x_2^2(t) = c_1^2 + 4 c_2^2,  ∀ t ∈ R.

The trajectories of this system lie on ellipses.
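A quick numerical confirmation (not from the notes) that trajectories
of this center stay on the ellipse x_1^2 + 4 x_2^2 = constant:

    import numpy as np
    from scipy.integrate import solve_ivp

    A = np.array([[0.0, -4.0],
                  [1.0,  0.0]])
    sol = solve_ivp(lambda t, x: A @ x, (0.0, 10.0), [1.0, 1.0],
                    dense_output=True, rtol=1e-10, atol=1e-12)
    t = np.linspace(0.0, 10.0, 5)
    x1, x2 = sol.sol(t)
    print(x1**2 + 4 * x2**2)   # constant (= 5 here), up to solver error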



Summary:

Real eigenvalues:
    Saddle point    λμ < 0      Unstable
    Node            λμ > 0      Stable if λ < 0, μ < 0;
                                unstable if λ > 0, μ > 0

Complex eigenvalues:
    Focus           Re λ ≠ 0    Stable if Re λ < 0;
                                unstable if Re λ > 0
    Center          Re λ = 0    Stable

Note: We consider only the case where A is invertible (λ ≠ 0, μ ≠ 0).

*** End ***
Series Solution of Linear Ordinary Differential Equations

Department of Mathematics
IIT Guwahati
Aim: To study methods for determining series expansions for
solutions to linear ODEs with variable coefficients.

In particular, we shall obtain
• the form of the series expansion,
• a recurrence relation for determining the coefficients, and
• the interval of convergence of the expansion.
Review of power series
A series of the form

    Σ_{n=0}^∞ a_n (x − x_0)^n = a_0 + a_1 (x − x_0) + a_2 (x − x_0)^2 + · · ·   (1)

is called a power series about the point x_0. Here, x is a
variable and the a_n are constants.
The series (1) converges at x = c if Σ_{n=0}^∞ a_n (c − x_0)^n
converges; that is, the limit of the partial sums

    lim_{N→∞} Σ_{n=0}^{N} a_n (c − x_0)^n < ∞.

If this limit does not exist, the power series is said to diverge
at x = c.
Note that Σ_{n=0}^∞ a_n (x − x_0)^n converges at x = x_0, since

    Σ_{n=0}^∞ a_n (x_0 − x_0)^n = a_0.

Q. What about convergence for other values of x?

Theorem: (Radius of convergence)
For each power series of the form (1), there is a number R
(0 ≤ R ≤ ∞), called the radius of convergence of the power
series, such that the series converges absolutely for
|x − x_0| < R and diverges for |x − x_0| > R.
If the series (1) converges for all values of x, then R = ∞.
When the series (1) converges only at x_0, then R = 0.
Theorem: (Ratio test) If

    lim_{n→∞} |a_{n+1}/a_n| = L,

where 0 ≤ L ≤ ∞, then the radius of convergence R of the
power series Σ_{n=0}^∞ a_n (x − x_0)^n is

    R = 1/L  if 0 < L < ∞,
    R = ∞    if L = 0,
    R = 0    if L = ∞.

Remark. If the ratio |a_{n+1}/a_n| does not have a limit, then
methods other than the ratio test (e.g. the root test) must be
used to determine R.
Example: Find R for the series Σ_{n=0}^∞ ((−2)^n/(n+1)) (x − 3)^n.
Note that a_n = (−2)^n/(n+1). We have

    lim_{n→∞} |a_{n+1}/a_n| = lim_{n→∞} | (−2)^{n+1} (n+1) / ((−2)^n (n+2)) |
                            = lim_{n→∞} 2(n+1)/(n+2) = 2 = L.

Thus, R = 1/2. The series converges absolutely for
|x − 3| < 1/2 and diverges for |x − 3| > 1/2.
Next, what happens when |x − 3| = 1/2?
At x = 5/2, the series becomes the harmonic series Σ_{n=0}^∞ 1/(n+1),
and hence diverges. When x = 7/2, the series becomes an
alternating harmonic series, which converges.
Thus, the power series converges for each x ∈ (5/2, 7/2].
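The limiting ratio can also be observed numerically. A tiny illustrative
sketch (not part of the notes):

    import numpy as np

    def a(n):
        return (-2.0) ** n / (n + 1)

    for n in (10, 100, 1000):
        print(n, abs(a(n + 1) / a(n)))   # approaches L = 2, so R = 1/2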



Given two power series

    f(x) = Σ_{n=0}^∞ a_n (x − x_0)^n,   g(x) = Σ_{n=0}^∞ b_n (x − x_0)^n,

with nonzero radii of convergence, the sum

    f(x) + g(x) = Σ_{n=0}^∞ (a_n + b_n)(x − x_0)^n

converges on the common interval of convergence.

The formula for the product is

    f(x) g(x) = Σ_{n=0}^∞ c_n (x − x_0)^n,  where c_n := Σ_{k=0}^{n} a_k b_{n−k}.   (2)

The power series in (2) is called the Cauchy product and
converges for all x in the common interval of convergence of
the power series of f and g.
Differentiation and integration of power series

Theorem: If f(x) = Σ_{n=0}^∞ a_n (x − x_0)^n has a positive radius of
convergence R, then f is differentiable in the interval
|x − x_0| < R, and termwise differentiation gives the power
series for the derivative:

    f′(x) = Σ_{n=1}^∞ n a_n (x − x_0)^{n−1}  for |x − x_0| < R.

Furthermore, termwise integration gives the power series for
the integral of f:

    ∫ f(x) dx = Σ_{n=0}^∞ (a_n/(n+1)) (x − x_0)^{n+1} + C  for |x − x_0| < R.

Example: A power series for

    1/(1 − x) = 1 + x + x^2 + x^3 + · · · .

Since (d/dx){1/(1 − x)} = 1/(1 − x)^2, we obtain a power series

    1/(1 − x)^2 = 1 + 2x + 3x^2 + 4x^3 + · · · + n x^{n−1} + · · · .

A power series for

    1/(1 + x^2) = 1 − x^2 + x^4 − x^6 + · · · + (−1)^n x^{2n} + · · · .

Since tan^{−1} x = ∫_0^x 1/(1 + t^2) dt, integrating the series for
1/(1 + x^2) termwise gives

    tan^{−1} x = x − x^3/3 + x^5/5 − x^7/7 + · · · + (−1)^n x^{2n+1}/(2n+1) + · · · .
Shifting the summation index
The index of summation in a power series is a dummy index,
and hence

    Σ_{n=0}^∞ a_n (x − x_0)^n = Σ_{k=0}^∞ a_k (x − x_0)^k = Σ_{i=0}^∞ a_i (x − x_0)^i.

Shifting the index of summation is particularly important when
one has to combine two different power series.

Example:

    Σ_{n=2}^∞ n(n − 1) a_n x^{n−2} = Σ_{k=0}^∞ (k + 2)(k + 1) a_{k+2} x^k.

    x^3 Σ_{n=0}^∞ n^2 (n − 2) a_n x^n = Σ_{n=3}^∞ (n − 3)^2 (n − 5) a_{n−3} x^n.
Adding two power series

Problem: Write Σ_{n=1}^∞ 2n c_n x^{n−1} + Σ_{n=0}^∞ 6 c_n x^{n+1} as one series.

In order to add the series, we require that both summation
indices start with the same number and that the powers of x
in each series be "in phase"; that is, if one series starts with a
multiple of x^1, say, then we want the other series also to start
with the same power of x. By writing

    Σ_{n=1}^∞ 2n c_n x^{n−1} + Σ_{n=0}^∞ 6 c_n x^{n+1}
        = 2 · 1 · c_1 x^0 + Σ_{n=2}^∞ 2n c_n x^{n−1} + Σ_{n=0}^∞ 6 c_n x^{n+1},

both the series Σ_{n=2}^∞ 2n c_n x^{n−1} and Σ_{n=0}^∞ 6 c_n x^{n+1} on
the right-hand side start with x^1.

To get the same summation index, we are guided by the exponents of x.
We let k = n − 1 in the first series and k = n + 1 in the second series.
Thus the right-hand side of the above equation becomes

    2 c_1 + Σ_{k=1}^∞ 2(k + 1) c_{k+1} x^k + Σ_{k=1}^∞ 6 c_{k−1} x^k
        = 2 c_1 + Σ_{k=1}^∞ [2(k + 1) c_{k+1} + 6 c_{k−1}] x^k,

which is the required form (as a single series) of the sum of the two
given series.
Definition: (Analytic function)
A function f is said to be analytic at x_0 if it has a power series
representation Σ_{n=0}^∞ a_n (x − x_0)^n in a neighborhood of x_0,
with a positive radius of convergence.

Example: Some analytic functions and their representations:

    e^x = Σ_{n=0}^∞ x^n / n!.

    sin x = Σ_{n=0}^∞ ((−1)^n / (2n+1)!) x^{2n+1}.

    ln x = Σ_{n=1}^∞ ((−1)^{n−1} / n) (x − 1)^n,  0 < x ≤ 2.
Power series solutions to linear ODEs

Consider a linear ODE of the form

    a_2(x) y′′(x) + a_1(x) y′(x) + a_0(x) y(x) = 0,  a_2(x) ≠ 0.   (∗)

Writing it in the standard form

    y′′(x) + p(x) y′(x) + q(x) y(x) = 0,

where p(x) := a_1(x)/a_2(x) and q(x) := a_0(x)/a_2(x).

Definition: A point x_0 is called an ordinary point of (∗) if both
p(x) = a_1(x)/a_2(x) and q(x) = a_0(x)/a_2(x) are analytic at
x_0. If x_0 is not an ordinary point, it is called a singular point
of (∗).

Fact from analysis: If p(x) and q(x) are polynomials in x having no
common factors, then p(x)/q(x) is analytic at all points where
q(x) ≠ 0, and at the points where q(x) = 0, the function p(x)/q(x) is
not analytic.
Example: The differential equation x y′′ + (sin x) y = 0 has an
ordinary point at x = 0 because

    (sin x)/x = 1 − x^2/3! + x^4/5! − · · ·

is analytic at x = 0.
The singular points of the equation (x^2 − 1) y′′ + 2x y′ + 6y = 0
are 1 and −1. All other points are ordinary points.
The equation (x^2 + 1) y′′ + x y′ − y = 0 has two singular points,
i and −i. All other finite values of x (real or complex)
are ordinary points.
Power series method about an ordinary point
Consider the equation

    2y′′ + x y′ + y = 0.                                         (∗∗)

Let us find a power series solution about x = 0. Seek a
solution of the form

    y(x) = Σ_{n=0}^∞ a_n x^n,

and then attempt to determine the coefficients a_n.

Differentiating termwise, we obtain

    y′(x) = Σ_{n=1}^∞ n a_n x^{n−1},   y′′(x) = Σ_{n=2}^∞ n(n − 1) a_n x^{n−2}.

Substituting these power series in (∗∗), we find that

    Σ_{n=2}^∞ 2n(n − 1) a_n x^{n−2} + Σ_{n=1}^∞ n a_n x^n + Σ_{n=0}^∞ a_n x^n = 0.

By shifting the indices, we rewrite the above equation as

    Σ_{k=0}^∞ 2(k + 2)(k + 1) a_{k+2} x^k + Σ_{k=1}^∞ k a_k x^k + Σ_{k=0}^∞ a_k x^k = 0.

Combining the like powers of x in the three summations, we obtain

    (4 a_2 + a_0) + Σ_{k=1}^∞ [2(k + 2)(k + 1) a_{k+2} + k a_k + a_k] x^k = 0.

Setting the coefficients of this power series equal to zero yields

    4 a_2 + a_0 = 0,
    2(k + 2)(k + 1) a_{k+2} + (k + 1) a_k = 0,  k ≥ 1.

This leads to the recurrence relation

    a_{k+2} = −a_k / (2(k + 2)),  k ≥ 1.

Thus,

    a_2 = −a_0/2^2,                          a_3 = −a_1/(2 · 3),
    a_4 = −a_2/(2 · 4) = a_0/(2^2 · 2 · 4),  a_5 = −a_3/(2 · 5) = a_1/(2^2 · 3 · 5),
    · · ·

With a_0 and a_1 as arbitrary constants, we find that

    a_{2n} = (−1)^n a_0 / (2^{2n} n!),  n ≥ 1,

and

    a_{2n+1} = (−1)^n a_1 / (2^n [1 · 3 · 5 · · · (2n + 1)]),  n ≥ 1.

From this, we have two linearly independent solutions:

    y_1(x) = Σ_{n=0}^∞ ((−1)^n / (2^{2n} n!)) x^{2n},

    y_2(x) = Σ_{n=0}^∞ ((−1)^n / (2^n [1 · 3 · 5 · · · (2n + 1)])) x^{2n+1}.

Hence the general solution is

    y(x) = a_0 y_1(x) + a_1 y_2(x).

Remark. Suppose we are given the values of y(0) and y′(0);
then a_0 = y(0) and a_1 = y′(0). These two coefficients lead to
a unique power series solution for the IVP.
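The recurrence is also easy to run on a machine. A small sketch (not
from the notes; the initial values a_0 = a_1 = 1, the truncation order
K, and the test point are illustrative choices) that generates the
coefficients and checks the truncated series against the ODE:

    import numpy as np

    K = 30                          # number of coefficients kept
    a = np.zeros(K)
    a[0], a[1] = 1.0, 1.0           # a0 = y(0), a1 = y'(0): illustrative
    a[2] = -a[0] / 4.0              # from 4 a2 + a0 = 0
    for k in range(1, K - 2):
        a[k + 2] = -a[k] / (2.0 * (k + 2))

    x = 0.5
    y   = sum(a[n] * x**n for n in range(K))
    yp  = sum(n * a[n] * x**(n - 1) for n in range(1, K))
    ypp = sum(n * (n - 1) * a[n] * x**(n - 2) for n in range(2, K))
    print(2 * ypp + x * yp + y)     # ~ 0 up to truncation error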

*** End ***


The Method of Frobenius

Department of Mathematics
IIT Guwahati
If either p(x) or q(x) in

    y′′ + p(x) y′ + q(x) y = 0

is not analytic near x_0, power series solutions valid near x_0
may or may not exist.

Example: Try to find a power series solution of

    x^2 y′′ − y′ − y = 0                                         (1)

about the point x_0 = 0.
Assume that a solution y(x) = Σ_{n=0}^∞ a_n x^n exists.
Substituting this series in (1), we obtain the recursion formula

    a_{n+1} = (n^2 − n − 1) a_n / (n + 1).

The ratio test shows that this power series converges only for
x = 0. Thus, there is no power series solution valid in any
open interval about x_0 = 0. This is because (1) has a singular
point at x = 0.

The method of Frobenius is a useful method for treating such
equations.
Cauchy-Euler equations revisited
Recall that a second-order homogeneous Cauchy-Euler
equation has the form

    a x^2 y′′(x) + b x y′(x) + c y(x) = 0,  x > 0,               (2)

where a (≠ 0), b, c are real constants. Writing (2) in the
standard form

    y′′ + p(x) y′ + q(x) y = 0,  where p(x) = b/(a x),  q(x) = c/(a x^2),

we see that x = 0 is a singular point of (2). We seek solutions
of the form

    y(x) = x^r

and then try to determine the values of r.
Set

    L(y)(x) := a x^2 y′′(x) + b x y′(x) + c y(x)  and  w(x) := x^r.

Now

    L(w)(x) = a x^2 r(r − 1) x^{r−2} + b x r x^{r−1} + c x^r
            = {a r^2 + (b − a) r + c} x^r.

Thus,

    w = x^r is a solution ⇐⇒ r satisfies a r^2 + (b − a) r + c = 0.   (3)

The equation (3) is known as the auxiliary or indicial equation
for (2).
Case I: When (3) has two distinct real roots r_1, r_2. Then

    L(w)(x) = a (r − r_1)(r − r_2) x^r,

and the two linearly independent solutions are

    y_1(x) = x^{r_1},  y_2(x) = x^{r_2}  for x > 0.

Case II: When r_1 = α + iβ, r_2 = α − iβ. Then

    x^{α+iβ} = e^{(α+iβ) ln x} = e^{α ln x} cos(β ln x) + i e^{α ln x} sin(β ln x)
             = x^α cos(β ln x) + i x^α sin(β ln x).

Thus (Exercise), two linearly independent real-valued
solutions are

    y_1(x) = x^α cos(β ln x),  y_2(x) = x^α sin(β ln x).
Case III: When r_1 = r_2 = r_0 is a repeated root. Then

    L(w)(x) = a (r - r_0)^2 x^r.

Setting r = r_0 yields the solution

    y_1(x) = w(x) = x^{r_0},   x > 0.

To find a second linearly independent solution, apply the reduction of order method. This gives

    y_2(x) = x^{r_0} ln x,   x > 0.

Example: Find a general solution to

    4 x^2 y''(x) + y(x) = 0,   x > 0.

Note that

    L(w)(x) = (4 r^2 - 4 r + 1) x^r.

The indicial equation has the repeated root r_0 = 1/2. Thus, the general solution is

    y(x) = c_1 \sqrt{x} + c_2 \sqrt{x} ln x,   x > 0.
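The example is easy to verify symbolically; a minimal sketch, assuming SymPy is available, that solves the indicial equation and checks the logarithmic solution:

```python
import sympy as sp

# Indicial equation 4r^2 - 4r + 1 = 0 has the repeated root r0 = 1/2,
# and y = sqrt(x)*ln(x) indeed solves 4x^2 y'' + y = 0.
x, r = sp.symbols('x r', positive=True)
print(sp.solve(4*r**2 - 4*r + 1, r))                # [1/2]
y2 = sp.sqrt(x) * sp.log(x)
print(sp.simplify(4*x**2*sp.diff(y2, x, 2) + y2))   # 0
```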


The Method of Frobenius

To motivate the procedure, recall the Cauchy-Euler equation in the standard form

    y''(x) + p̃(x) y'(x) + q̃(x) y(x) = 0,    (4)

where

    p̃(x) = \frac{p̃_0}{x},   q̃(x) = \frac{q̃_0}{x^2},   with p̃_0 = b/a, q̃_0 = c/a.

The indicial equation is of the form

    r(r - 1) + p̃_0 r + q̃_0 = 0.    (5)

If r = r_1 is a root of (5), then w(x) = x^{r_1} is a solution to (4).


Observe that

    x p̃(x) = p̃_0, a constant, and hence analytic at x = 0,
    x^2 q̃(x) = q̃_0, a constant, and hence analytic at x = 0.

Now consider a general second order variable coefficient ODE in the standard form

    y''(x) + p(x) y'(x) + q(x) y(x) = 0,    (6)

and say x = 0 is a singular point of this ODE. Suppose x p(x) and x^2 q(x) are analytic near x = 0, i.e.

    x p(x) = \sum_{n=0}^{∞} p_n x^n,   x^2 q(x) = \sum_{n=0}^{∞} q_n x^n.

Then, near x = 0,

    lim_{x→0} x p(x) = p_0, a constant,   lim_{x→0} x^2 q(x) = q_0, a constant,

so the equation resembles a Cauchy-Euler equation.

Therefore a guess for a solution of ODE (6) is

    y(x) = x^r \sum_{n=0}^{∞} a_n x^n.

Definition: A singular point x_0 of

    y''(x) + p(x) y'(x) + q(x) y(x) = 0

is said to be a regular singular point if both (x - x_0) p(x) and (x - x_0)^2 q(x) are analytic at x_0. Otherwise x_0 is called an irregular singular point.

Example: Classify the singular points of the equation

    (x^2 - 1)^2 y''(x) + (x + 1) y'(x) - y(x) = 0.

The singular points are 1 and -1. Note that x = 1 is an irregular singular point and x = -1 is a regular singular point.
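For rational coefficients like these, analyticity of (x - x_0) p(x) and (x - x_0)^2 q(x) at x_0 reduces to the corresponding limits being finite, which a computer algebra system can check; a minimal sketch, assuming SymPy:

```python
import sympy as sp

# Classify the singular points of (x^2 - 1)^2 y'' + (x + 1) y' - y = 0.
x = sp.symbols('x')
p = (x + 1) / (x**2 - 1)**2
q = -1 / (x**2 - 1)**2
for x0 in (1, -1):
    print(x0,
          sp.limit(sp.simplify((x - x0) * p), x, x0),     # finite => p0 exists
          sp.limit(sp.simplify((x - x0)**2 * q), x, x0))  # finite => q0 exists
# At x0 = 1 the first limit is infinite (irregular singular point);
# at x0 = -1 both limits are finite, 1/4 and -1/4 (regular singular point).
```
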
Series solutions about a regular singular point

Assume that x = 0 is a regular singular point for

    y''(x) + p(x) y'(x) + q(x) y(x) = 0,

so that

    p(x) = \sum_{n=0}^{∞} p_n x^{n-1},   q(x) = \sum_{n=0}^{∞} q_n x^{n-2}.

In the method of Frobenius, we seek solutions of the form

    w(x) = x^r \sum_{n=0}^{∞} a_n x^n = \sum_{n=0}^{∞} a_n x^{n+r},   x > 0.

Assume that a_0 ≠ 0. We now determine r and a_n, n ≥ 1.


Differentiating w(x) with respect to x, we have

    w'(x) = \sum_{n=0}^{∞} (n + r) a_n x^{n+r-1},
    w''(x) = \sum_{n=0}^{∞} (n + r)(n + r - 1) a_n x^{n+r-2}.

Substituting w, w', w'', p(x) and q(x) into (6), we obtain

    \sum_{n=0}^{∞} (n + r)(n + r - 1) a_n x^{n+r-2}
      + \left( \sum_{n=0}^{∞} p_n x^{n-1} \right) \left( \sum_{n=0}^{∞} (n + r) a_n x^{n+r-1} \right)
      + \left( \sum_{n=0}^{∞} q_n x^{n-2} \right) \left( \sum_{n=0}^{∞} a_n x^{n+r} \right) = 0.


Grouping like powers of x, starting with the lowest power x^{r-2}, we find that

    [r(r - 1) + p_0 r + q_0] a_0 x^{r-2}
      + [(r + 1) r a_1 + (r + 1) p_0 a_1 + p_1 r a_0 + q_0 a_1 + q_1 a_0] x^{r-1} + \cdots = 0.

Considering the first term, x^{r-2}, we obtain

    {r(r - 1) + p_0 r + q_0} a_0 = 0.

Since a_0 ≠ 0, we obtain the indicial equation.

Definition: If x_0 is a regular singular point of y'' + p(x) y' + q(x) y = 0, then the indicial equation for this point is

    r(r - 1) + p_0 r + q_0 = 0,

where

    p_0 := lim_{x→x_0} (x - x_0) p(x),   q_0 := lim_{x→x_0} (x - x_0)^2 q(x).


Example: Find the indicial equation at the singularity x = -1 of

    (x^2 - 1)^2 y''(x) + (x + 1) y'(x) - y(x) = 0.

Here x = -1 is a regular singular point. We find that

    p_0 = lim_{x→-1} (x + 1) p(x) = lim_{x→-1} (x - 1)^{-2} = 1/4,
    q_0 = lim_{x→-1} (x + 1)^2 q(x) = lim_{x→-1} [-(x - 1)^{-2}] = -1/4.

Thus, the indicial equation is given by

    r(r - 1) + \frac{1}{4} r - \frac{1}{4} = 0.
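For completeness, the roots of this indicial equation can be found symbolically; a short sketch, assuming SymPy:

```python
import sympy as sp

# Roots of the indicial equation r(r - 1) + r/4 - 1/4 = 0.
r = sp.symbols('r')
print(sp.solve(r*(r - 1) + sp.Rational(1, 4)*r - sp.Rational(1, 4), r))
# [-1/4, 1]: the roots are r1 = 1 and r2 = -1/4, and their difference
# 5/4 is not an integer.
```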


The method of Frobenius

To derive a series solution about the singular point x_0 of

    a_2(x) y''(x) + a_1(x) y'(x) + a_0(x) y(x) = 0,   x > x_0,    (7)

set p(x) = a_1(x)/a_2(x), q(x) = a_0(x)/a_2(x). If both (x - x_0) p(x) and (x - x_0)^2 q(x) are analytic at x_0, then x_0 is a regular singular point and the following steps apply.

Step 1: Seek a solution of the form

    w(x) = \sum_{n=0}^{∞} a_n (x - x_0)^{n+r}.

Using term-wise differentiation, substitute w(x) into (7) to obtain an equation of the form

    A_0 (x - x_0)^{r-2} + A_1 (x - x_0)^{r-1} + \cdots = 0.


Step 2: Set A_0 = A_1 = A_2 = \cdots = 0. (A_0 = 0 will give the indicial equation f(r) = r(r - 1) + p_0 r + q_0 = 0.)

Step 3: Use the system of equations

    A_0 = 0, A_1 = 0, \ldots, A_k = 0

to find a recurrence relation involving a_k and a_0, a_1, \ldots, a_{k-1}.

Step 4: Take r = r_1, the larger root of the indicial equation (we assume here that the indicial equation has two real roots; the case of two complex conjugate roots is more complicated and we do not go into it), and use the relation obtained in Step 3 to determine a_1, a_2, \ldots recursively in terms of a_0 and r_1.

Step 5: A series expansion of a solution to (7) is

    w_1(x) = (x - x_0)^{r_1} \sum_{n=0}^{∞} a_n (x - x_0)^n,   x > x_0,

where a_0 is arbitrary and the a_n's are defined in terms of a_0 and r_1.

Remark: The roots r_1 and r_2 of the indicial equation can be either real or complex; we discuss only the case when the roots are real. Let r_1 ≥ r_2. For the larger root r_1, we have already obtained one series solution (as described above).

Qn: How do we find a second linearly independent solution of the given ODE?

Observation: We obtain the constants a_n from the following equations:

    a_0 f(r) = 0,
    a_1 f(r + 1) + a_0 (r p_1 + q_1) = 0,
    a_2 f(r + 2) + a_0 (r p_2 + q_2) + a_1 [(r + 1) p_1 + q_1] = 0,
    \cdots
    a_n f(r + n) + a_0 (r p_n + q_n) + \cdots + a_{n-1} [(r + n - 1) p_1 + q_1] = 0.

Note: f(r_1) = 0 and f(r_2) = 0. If r_1 - r_2 = N, a positive integer, then f(r_2 + N) = f(r_1) = 0, so we cannot solve for the coefficient a_N of the solution corresponding to r_2.
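These equations translate directly into a recursive computation. Below is a minimal sketch (the helper frobenius_coeffs is hypothetical, not a library routine) that solves a_n f(r+n) = -\sum_{k<n} a_k [(r+k) p_{n-k} + q_{n-k}], specialized to x^2 y'' + x y' + (x^2 - α^2) y = 0, for which x p(x) = 1 and x^2 q(x) = x^2 - α^2 (the Bessel equation treated later):

```python
from fractions import Fraction

def frobenius_coeffs(p, q, r, n_terms):
    # Indicial polynomial f(t) = t(t - 1) + p0*t + q0.
    f = lambda t: t * (t - 1) + p[0] * t + q[0]
    a = [Fraction(1)]                      # a_0 = 1 (arbitrary, nonzero)
    for n in range(1, n_terms):
        s = sum(a[k] * ((r + k) * p[n - k] + q[n - k]) for k in range(n))
        a.append(-s / f(r + n))            # breaks down when f(r + n) = 0
    return a

alpha = Fraction(1, 3)                     # 2*alpha is not an integer here
N = 7
p = [Fraction(0)] * N; p[0] = Fraction(1)                    # x p(x) = 1
q = [Fraction(0)] * N; q[0] = -alpha**2; q[2] = Fraction(1)  # x^2 q(x) = x^2 - alpha^2
print(frobenius_coeffs(p, q, alpha, N))    # odd coefficients vanish; a_2 = -3/16, ...
```

Re-running the same sketch with r = -α (the smaller root) fails at n = N whenever r_1 - r_2 = N is a positive integer, since f(r + N) vanishes there; this is exactly the obstruction noted above.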


Theorem: Let x_0 be a regular singular point for

    y''(x) + p(x) y'(x) + q(x) y(x) = 0.

Let R be the radius of convergence of both (x - x_0) p(x) and (x - x_0)^2 q(x), and let r_1 and r_2 be the roots of the associated indicial equation, where r_1 ≥ r_2.

Case a: If r_1 - r_2 is not an integer, then there exist two linearly independent solutions of the form

    y_1(x) = \sum_{n=0}^{∞} a_n (x - x_0)^{n+r_1},   a_0 ≠ 0,   x_0 < x < x_0 + R,
    y_2(x) = \sum_{n=0}^{∞} b_n (x - x_0)^{n+r_2},   b_0 ≠ 0,   x_0 < x < x_0 + R.


Case b: If r_1 = r_2, then there exist two linearly independent solutions of the form

    y_1(x) = \sum_{n=0}^{∞} a_n (x - x_0)^{n+r_1},   a_0 ≠ 0,   x_0 < x < x_0 + R,
    y_2(x) = y_1(x) ln(x - x_0) + \sum_{n=1}^{∞} b_n (x - x_0)^{n+r_1},   x_0 < x < x_0 + R.

Case c: If r_1 - r_2 is a positive integer, then there exist two linearly independent solutions of the form

    y_1(x) = \sum_{n=0}^{∞} a_n (x - x_0)^{n+r_1},   a_0 ≠ 0,   x_0 < x < x_0 + R,
    y_2(x) = C y_1(x) ln(x - x_0) + \sum_{n=0}^{∞} b_n (x - x_0)^{n+r_2},   b_0 ≠ 0,   x_0 < x < x_0 + R,

where C is a constant that could be zero.

*** End ***


Power Series Solutions to the Legendre Equation

Department of Mathematics
IIT Guwahati

The Legendre equation

The equation

    (1 - x^2) y'' - 2x y' + α(α + 1) y = 0,    (1)

where α is any real constant, is called Legendre's equation. When α ∈ Z^+, the equation has polynomial solutions called Legendre polynomials.


Power series solution for the Legendre equation

The Legendre equation can be put in the form

    y'' + p(x) y' + q(x) y = 0,

where

    p(x) = \frac{-2x}{1 - x^2}   and   q(x) = \frac{α(α + 1)}{1 - x^2},   if x^2 ≠ 1.

Since \frac{1}{1 - x^2} = \sum_{n=0}^{∞} x^{2n} for |x| < 1, both p(x) and q(x) have power series expansions in the open interval (-1, 1). Thus, we seek a power series solution of the form

    y(x) = \sum_{n=0}^{∞} a_n x^n,   x ∈ (-1, 1).


Differentiating term by term, we obtain

    y'(x) = \sum_{n=1}^{∞} n a_n x^{n-1}   and   y'' = \sum_{n=2}^{∞} n(n - 1) a_n x^{n-2}.

Thus,

    2x y' = \sum_{n=1}^{∞} 2n a_n x^n = \sum_{n=0}^{∞} 2n a_n x^n,

and

    (1 - x^2) y'' = \sum_{n=2}^{∞} n(n - 1) a_n x^{n-2} - \sum_{n=2}^{∞} n(n - 1) a_n x^n
                  = \sum_{n=0}^{∞} (n + 2)(n + 1) a_{n+2} x^n - \sum_{n=0}^{∞} n(n - 1) a_n x^n
                  = \sum_{n=0}^{∞} [(n + 2)(n + 1) a_{n+2} - n(n - 1) a_n] x^n.


Substituting in (1), we obtain

    (n + 2)(n + 1) a_{n+2} - n(n - 1) a_n - 2n a_n + α(α + 1) a_n = 0,   n ≥ 0,

which leads to the recurrence relation

    a_{n+2} = -\frac{(α - n)(α + n + 1)}{(n + 1)(n + 2)} a_n.

Thus, we obtain

    a_2 = -\frac{α(α + 1)}{1 \cdot 2} a_0,
    a_4 = -\frac{(α - 2)(α + 3)}{3 \cdot 4} a_2 = (-1)^2 \frac{α(α - 2)(α + 1)(α + 3)}{4!} a_0,
    ⋮
    a_{2n} = (-1)^n \frac{α(α - 2) \cdots (α - 2n + 2) \cdot (α + 1)(α + 3) \cdots (α + 2n - 1)}{(2n)!} a_0.


Similarly, we can compute a_3, a_5, a_7, \ldots in terms of a_1 and obtain

    a_3 = -\frac{(α - 1)(α + 2)}{2 \cdot 3} a_1,
    a_5 = -\frac{(α - 3)(α + 4)}{4 \cdot 5} a_3 = (-1)^2 \frac{(α - 1)(α - 3)(α + 2)(α + 4)}{5!} a_1,
    ⋮
    a_{2n+1} = (-1)^n \frac{(α - 1)(α - 3) \cdots (α - 2n + 1)(α + 2)(α + 4) \cdots (α + 2n)}{(2n + 1)!} a_1.

Therefore, the series for y(x) can be written as y(x) = a_0 y_1(x) + a_1 y_2(x), where

    y_1(x) = 1 + \sum_{n=1}^{∞} (-1)^n \frac{α(α - 2) \cdots (α - 2n + 2) \cdot (α + 1)(α + 3) \cdots (α + 2n - 1)}{(2n)!} x^{2n},
    y_2(x) = x + \sum_{n=1}^{∞} (-1)^n \frac{(α - 1)(α - 3) \cdots (α - 2n + 1) \cdot (α + 2)(α + 4) \cdots (α + 2n)}{(2n + 1)!} x^{2n+1}.
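The recurrence makes these coefficients easy to generate by machine. Here is a minimal sketch (the helper legendre_coeffs is hypothetical) using exact rational arithmetic; for a non-negative integer α one of the two series terminates, matching the polynomials listed in the Observations below.

```python
from fractions import Fraction

# Iterate a_{n+2} = -(alpha - n)(alpha + n + 1)/((n + 1)(n + 2)) * a_n.
# Even-index coefficients build y1 (seed a0); odd-index ones build y2 (seed a1).
def legendre_coeffs(alpha, a0, a1, N):
    a = [Fraction(a0), Fraction(a1)] + [Fraction(0)] * (N - 1)
    for n in range(N - 1):
        a[n + 2] = -Fraction((alpha - n) * (alpha + n + 1),
                             (n + 1) * (n + 2)) * a[n]
    return a

print(legendre_coeffs(2, 1, 0, 6))  # [1, 0, -3, 0, 0, 0, 0]: y1 = 1 - 3x^2
print(legendre_coeffs(3, 0, 1, 6))  # [0, 1, 0, -5/3, 0, 0, 0]: y2 = x - (5/3)x^3
```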


Note: The ratio test shows that y_1(x) and y_2(x) converge for |x| < 1. These solutions satisfy the initial conditions

    y_1(0) = 1, y_1'(0) = 0,   y_2(0) = 0, y_2'(0) = 1.

Since y_1(x) and y_2(x) are linearly independent (How?), the general solution of the Legendre equation over (-1, 1) is

    y(x) = a_0 y_1(x) + a_1 y_2(x)

with arbitrary constants a_0 and a_1.


Observations

Case I. When α = 2m, we note that

    α(α - 2) \cdots (α - 2n + 2) = 2m(2m - 2) \cdots (2m - 2n + 2) = \frac{2^n m!}{(m - n)!}

and

    (α + 1)(α + 3) \cdots (α + 2n - 1) = (2m + 1)(2m + 3) \cdots (2m + 2n - 1)
                                       = \frac{(2m + 2n)!}{2^n (2m)!} \cdot \frac{m!}{(m + n)!}.

Then, in this case, y_1(x) becomes

    y_1(x) = 1 + \frac{(m!)^2}{(2m)!} \sum_{k=1}^{m} (-1)^k \frac{(2m + 2k)!}{(m - k)!(m + k)!(2k)!} x^{2k},

which is a polynomial of degree 2m. In particular, for α = 0, 2, 4 (m = 0, 1, 2), the corresponding polynomials are

    y_1(x) = 1,   1 - 3x^2,   1 - 10x^2 + \frac{35}{3} x^4.
Note that the series y_2(x) is not a polynomial when α is even, because the coefficient of x^{2n+1} is never zero.

Case II. When α = 2m + 1, y_2(x) becomes a polynomial and y_1(x) is not a polynomial. In this case,

    y_2(x) = x + \frac{(m!)^2}{(2m + 1)!} \sum_{k=1}^{m} (-1)^k \frac{(2m + 2k + 1)!}{(m - k)!(m + k)!(2k + 1)!} x^{2k+1}.

For example, when α = 1, 3, 5 (m = 0, 1, 2), the corresponding polynomials are

    y_2(x) = x,   x - \frac{5}{3} x^3,   x - \frac{14}{3} x^3 + \frac{21}{5} x^5.


The Legendre polynomial

To obtain a single formula which contains both the polynomials in y_1(x) and y_2(x), let

    P_n(x) = \frac{1}{2^n} \sum_{r=0}^{[n/2]} \frac{(-1)^r (2n - 2r)!}{r!(n - r)!(n - 2r)!} x^{n-2r},

where [n/2] denotes the greatest integer ≤ n/2.
• When n is even, it is a constant multiple of the polynomial y_1(x).
• When n is odd, it is a constant multiple of the polynomial y_2(x).
The first five Legendre polynomials are

    P_0(x) = 1,   P_1(x) = x,   P_2(x) = \frac{1}{2}(3x^2 - 1),
    P_3(x) = \frac{1}{2}(5x^3 - 3x),   P_4(x) = \frac{1}{8}(35x^4 - 30x^2 + 3).

[Figure: Legendre polynomials over the interval [-1, 1].]

Rodrigues's formula for the Legendre polynomials

Note that

    \frac{(2n - 2r)!}{(n - 2r)!} x^{n-2r} = \frac{d^n}{dx^n} x^{2n-2r}   and   \frac{1}{r!(n - r)!} = \frac{1}{n!} \binom{n}{r}.

Thus, P_n(x) can be expressed as

    P_n(x) = \frac{1}{2^n n!} \frac{d^n}{dx^n} \sum_{r=0}^{[n/2]} (-1)^r \binom{n}{r} x^{2n-2r}.

When [n/2] < r ≤ n, the term x^{2n-2r} has degree less than n, so its nth derivative is zero. This gives

    P_n(x) = \frac{1}{2^n n!} \frac{d^n}{dx^n} \sum_{r=0}^{n} (-1)^r \binom{n}{r} x^{2n-2r} = \frac{1}{2^n n!} \frac{d^n}{dx^n} (x^2 - 1)^n,

which is known as Rodrigues' formula.
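Rodrigues' formula is convenient for computation; a short sketch, assuming SymPy, that reproduces the five polynomials listed above:

```python
import sympy as sp

# P_n(x) = (1 / (2^n n!)) d^n/dx^n (x^2 - 1)^n via Rodrigues' formula.
x = sp.symbols('x')

def P(n):
    return sp.expand(sp.diff((x**2 - 1)**n, x, n) / (2**n * sp.factorial(n)))

for n in range(5):
    print(n, P(n))   # matches P_0, ..., P_4 listed above
```
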
Properties of the Legendre polynomials P_n(x)

• For each n ≥ 0, P_n(x) is the only polynomial which satisfies the Legendre equation

    (1 - x^2) y'' - 2x y' + n(n + 1) y = 0

  and P_n(1) = 1.
• For each n ≥ 0, P_n(-x) = (-1)^n P_n(x).
• Orthogonality:

    \int_{-1}^{1} P_n(x) P_m(x) dx = 0 if m ≠ n,   and   = \frac{2}{2n + 1} if m = n.


• If f(x) is a polynomial of degree n, we have

    f(x) = \sum_{k=0}^{n} c_k P_k(x),   where   c_k = \frac{2k + 1}{2} \int_{-1}^{1} f(x) P_k(x) dx.

• It follows from the orthogonality relation that

    \int_{-1}^{1} g(x) P_n(x) dx = 0

  for every polynomial g(x) with deg(g(x)) < n. (A symbolic check of these facts is sketched below.)
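Both the orthogonality relation and the expansion formula are easy to verify symbolically; a minimal sketch, assuming SymPy:

```python
import sympy as sp

x = sp.symbols('x')

# Orthogonality: ∫ P_n P_m dx over [-1, 1] is 0 for m != n and 2/(2n+1) for m = n.
for n in range(4):
    for m in range(4):
        val = sp.integrate(sp.legendre(n, x) * sp.legendre(m, x), (x, -1, 1))
        assert val == (sp.Rational(2, 2*n + 1) if n == m else 0)

# Expansion of f(x) = x^3 in the Legendre basis: c_k = (2k+1)/2 ∫ f P_k dx.
f = x**3
c = [sp.Rational(2*k + 1, 2) * sp.integrate(f * sp.legendre(k, x), (x, -1, 1))
     for k in range(4)]
print(c)                                                  # [0, 3/5, 0, 2/5]
print(sp.expand(sum(ck * sp.legendre(k, x) for k, ck in enumerate(c))))  # x**3
```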

*** End ***



Power Series Solutions to the Bessel Equation

Department of Mathematics
IIT Guwahati

The Bessel equation

The equation

    x^2 y'' + x y' + (x^2 - α^2) y = 0,    (1)

where α is a non-negative constant, i.e. α ≥ 0, is called the Bessel equation of order α.

The point x_0 = 0 is a regular singular point. We shall use the method of Frobenius to solve this equation. Thus, we seek solutions of the form

    y(x) = \sum_{n=0}^{∞} a_n x^{n+r},   x > 0,    (2)

with a_0 ≠ 0.


Differentiation of (2) term by term yields

    y' = \sum_{n=0}^{∞} (n + r) a_n x^{n+r-1}.

Similarly, we obtain

    y'' = \sum_{n=0}^{∞} (n + r)(n + r - 1) a_n x^{n+r-2}.

Substituting these into (1), we obtain

    \sum_{n=0}^{∞} (n + r)(n + r - 1) a_n x^{n+r} + \sum_{n=0}^{∞} (n + r) a_n x^{n+r}
      + \sum_{n=0}^{∞} a_n x^{n+r+2} - α^2 \sum_{n=0}^{∞} a_n x^{n+r} = 0.


This implies

    x^r \sum_{n=0}^{∞} [(n + r)^2 - α^2] a_n x^n + x^r \sum_{n=0}^{∞} a_n x^{n+2} = 0.

Now, cancel x^r and try to determine the a_n's so that the coefficient of each power of x will vanish. For the constant term, we require (r^2 - α^2) a_0 = 0. Since a_0 ≠ 0, it follows that

    r^2 - α^2 = 0,

which is the indicial equation. The only possible values of r are α and -α.


Case I. For r = α, the equations for determining the coefficients are:

    [(1 + α)^2 - α^2] a_1 = 0   and   [(n + α)^2 - α^2] a_n + a_{n-2} = 0,   n ≥ 2.

Since α ≥ 0, we have a_1 = 0. The second equation yields

    a_n = -\frac{a_{n-2}}{(n + α)^2 - α^2} = -\frac{a_{n-2}}{n(n + 2α)}.    (3)

Since a_1 = 0, we immediately obtain a_3 = a_5 = a_7 = \cdots = 0.


For the coefficients with even subscripts, we have

    a_2 = \frac{-a_0}{2(2 + 2α)} = \frac{-a_0}{2^2 (1 + α)},
    a_4 = \frac{-a_2}{4(4 + 2α)} = \frac{(-1)^2 a_0}{2^4 2! (1 + α)(2 + α)},
    a_6 = \frac{-a_4}{6(6 + 2α)} = \frac{(-1)^3 a_0}{2^6 3! (1 + α)(2 + α)(3 + α)},

and, in general,

    a_{2n} = \frac{(-1)^n a_0}{2^{2n} n! (1 + α)(2 + α) \cdots (n + α)}.

Therefore, the choice r = α yields the solution

    y_α(x) = a_0 x^α \left( 1 + \sum_{n=1}^{∞} \frac{(-1)^n x^{2n}}{2^{2n} n! (1 + α)(2 + α) \cdots (n + α)} \right).

Note: The ratio test shows that this power series converges for all x ∈ R.

x < 0: Put x = -t, where t > 0, and set z(t) = y(x). Then

    x^2 y'' + x y' + (x^2 - α^2) y = 0,   x < 0    (4)
    ⟹ t^2 z'' + t z' + (t^2 - α^2) z = 0,   t > 0
    ⟹ z(t) = t^r \sum_{n=0}^{∞} a_n t^n,   r^2 - α^2 = 0.

For r = α,

    z_α(t) = a_0 t^α \left( 1 + \sum_{n=1}^{∞} \frac{(-1)^n t^{2n}}{2^{2n} n! (1 + α)(2 + α) \cdots (n + α)} \right),   t > 0.

Therefore

    y_α(x) = a_0 (-x)^α \left( 1 + \sum_{n=1}^{∞} \frac{(-1)^n x^{2n}}{2^{2n} n! (1 + α)(2 + α) \cdots (n + α)} \right),   x < 0,

is a solution of (4).

Therefore, the function y_α(x) given by

    y_α(x) = a_0 |x|^α \left( 1 + \sum_{n=1}^{∞} \frac{(-1)^n x^{2n}}{2^{2n} n! (1 + α)(2 + α) \cdots (n + α)} \right)

is a solution of the Bessel equation valid for all real x ≠ 0.

Qn: What about a second linearly independent solution? When can we find a second linearly independent solution y_{-α}?


Case II. For r = -α, determine the coefficients from

    [(1 - α)^2 - α^2] a_1 = 0   and   [(n - α)^2 - α^2] a_n + a_{n-2} = 0.

These equations become

    (1 - 2α) a_1 = 0   and   n(n - 2α) a_n + a_{n-2} = 0.

If α - (-α) = 2α is not an integer, these equations give us

    a_1 = 0   and   a_n = -\frac{a_{n-2}}{n(n - 2α)},   n ≥ 2.

Again a_3 = a_5 = a_7 = \cdots = 0. Note that this formula is the same as (3), with α replaced by -α. Thus, the solution is given by

    y_{-α}(x) = a_0 |x|^{-α} \left( 1 + \sum_{n=1}^{∞} \frac{(-1)^n x^{2n}}{2^{2n} n! (1 - α)(2 - α) \cdots (n - α)} \right),

which is valid for all real x ≠ 0.

Therefore, when 2α is not an integer, the Bessel equation

    x^2 y'' + x y' + (x^2 - α^2) y = 0

has two linearly independent solutions

    y_α(x) = a_0 |x|^α \left( 1 + \sum_{n=1}^{∞} \frac{(-1)^n x^{2n}}{2^{2n} n! (1 + α)(2 + α) \cdots (n + α)} \right),
    y_{-α}(x) = a_0 |x|^{-α} \left( 1 + \sum_{n=1}^{∞} \frac{(-1)^n x^{2n}}{2^{2n} n! (1 - α)(2 - α) \cdots (n - α)} \right),

both valid for all real x ≠ 0.

Note: "2α is not an integer" means α ≠ 1/2, α ≠ 3/2, etc., and α is not an integer.

Note: In fact, when α ∉ Z^+, the function y_{-α}(x) defined above is a solution of the Bessel equation. Why?

We now concentrate only on the case x > 0.

Euler's gamma function and its properties

For s ∈ R with s > 0, we define Γ(s) by

    Γ(s) = \int_0^{∞} t^{s-1} e^{-t} dt.

The integral converges if s > 0 and diverges if s ≤ 0. Integration by parts yields the functional equation

    Γ(s + 1) = s Γ(s).

In general,

    Γ(s + n) = (s + n - 1) \cdots (s + 1) s Γ(s),   for every n ∈ Z^+.

Since Γ(1) = 1, we find that Γ(n + 1) = n!. Thus, the gamma function is an extension of the factorial function from the integers to the positive real numbers. Therefore, we write

    Γ(s + 1) = s Γ(s),   s ∈ R^+.
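These two properties are easy to spot-check numerically; a minimal sketch using Python's standard library:

```python
import math

# Check Γ(s + 1) = s Γ(s) and Γ(n + 1) = n! at a few sample points.
for s in (0.5, 1.7, 3.2):
    assert abs(math.gamma(s + 1) - s * math.gamma(s)) < 1e-12
for n in range(1, 8):
    assert abs(math.gamma(n + 1) - math.factorial(n)) < 1e-6
print("gamma function properties verified")
```
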
When s < 0 and s is not a negative integer, we define Γ(s) recursively:

    Γ(s) = \frac{Γ(s + 1)}{s}   if -1 < s < 0,
    Γ(s) = \frac{Γ(s + 1)}{s}   if -2 < s < -1 (using the values already defined on (-1, 0)),

and so on. In fact, Γ(s) is defined for all s ∈ R \ ({0} ∪ Z^-).


Using this gamma function, we shall simplify the form of the solutions of the Bessel equation. With s = 1 + α, we note that

    (1 + α)(2 + α) \cdots (n + α) = \frac{Γ(n + 1 + α)}{Γ(1 + α)}.

Choosing a_0 = \frac{2^{-α}}{Γ(1 + α)}, the expression for y_α can be written as

    J_α(x) = \left( \frac{x}{2} \right)^α \sum_{n=0}^{∞} \frac{(-1)^n}{n! Γ(n + 1 + α)} \left( \frac{x}{2} \right)^{2n},   for all x > 0.

The function J_α defined above for x > 0 and α ≥ 0 is called the Bessel function of the first kind of order α.
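The series converges rapidly, so a truncated partial sum already matches library implementations; a minimal sketch, assuming SciPy is available (scipy.special.jv evaluates J_α):

```python
import math
from scipy.special import jv

# Partial sums of J_alpha(x) = (x/2)^alpha * sum_n (-1)^n / (n! Γ(n+1+alpha)) (x/2)^{2n}.
def J_series(alpha, x, terms=30):
    s = sum((-1)**n / (math.factorial(n) * math.gamma(n + 1 + alpha))
            * (x / 2)**(2 * n) for n in range(terms))
    return (x / 2)**alpha * s

for alpha in (0.0, 0.5, 1.0):
    for x in (0.5, 2.0, 5.0):
        assert abs(J_series(alpha, x) - jv(alpha, x)) < 1e-10
print("series agrees with scipy.special.jv")
```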


• What about J_{-α}?
• When α - (-α) = 2α is not an integer, J_{-α} is defined as below, and J_α and J_{-α} are linearly independent:

    J_{-α}(x) = \left( \frac{x}{2} \right)^{-α} \sum_{n=0}^{∞} \frac{(-1)^n}{n! Γ(n + 1 - α)} \left( \frac{x}{2} \right)^{2n},   x > 0.

  That is, J_{-α} is nothing but y_{-α} with a_0 = \frac{2^α}{Γ(1 - α)}.
• In fact, J_{-α} defined as above for α ≥ 0, α ∉ Z^+, is a solution of the Bessel equation for x > 0. Why?
• Conclusion: If α ∉ Z^+ ∪ {0}, then J_α(x) and J_{-α}(x) are linearly independent on x > 0, and the general solution of the Bessel equation for x > 0 is

    y(x) = c_1 J_α(x) + c_2 J_{-α}(x).

When α is a non-negative integer, say α = p, the Bessel function J_p(x) is given by

    J_p(x) = \sum_{n=0}^{∞} \frac{(-1)^n}{n!(n + p)!} \left( \frac{x}{2} \right)^{2n+p},   (p = 0, 1, 2, \ldots).

[Figure: The Bessel functions J_0 and J_1.]


Useful recurrence relations for J_α

• \frac{d}{dx} (x^α J_α(x)) = x^α J_{α-1}(x).

  Indeed,

    \frac{d}{dx} (x^α J_α(x)) = \frac{d}{dx} \left\{ x^α \sum_{n=0}^{∞} \frac{(-1)^n}{n! Γ(1 + α + n)} \left( \frac{x}{2} \right)^{2n+α} \right\}
      = \frac{d}{dx} \left\{ \sum_{n=0}^{∞} \frac{(-1)^n x^{2n+2α}}{n! Γ(1 + α + n) 2^{2n+α}} \right\}
      = \sum_{n=0}^{∞} \frac{(-1)^n (2n + 2α) x^{2n+2α-1}}{n! Γ(1 + α + n) 2^{2n+α}}.

  Since Γ(1 + α + n) = (α + n) Γ(α + n), we have

    \frac{d}{dx} (x^α J_α(x)) = \sum_{n=0}^{∞} \frac{(-1)^n 2 x^{2n+2α-1}}{n! Γ(α + n) 2^{2n+α}}
      = x^α \sum_{n=0}^{∞} \frac{(-1)^n}{n! Γ(1 + (α - 1) + n)} \left( \frac{x}{2} \right)^{2n+α-1}
      = x^α J_{α-1}(x).

The other relations involving J_α are:

• \frac{d}{dx} (x^{-α} J_α(x)) = -x^{-α} J_{α+1}(x).
• \frac{α}{x} J_α(x) + J_α'(x) = J_{α-1}(x).
• \frac{α}{x} J_α(x) - J_α'(x) = J_{α+1}(x).
• J_{α-1}(x) + J_{α+1}(x) = \frac{2α}{x} J_α(x).
• J_{α-1}(x) - J_{α+1}(x) = 2 J_α'(x).

Note: Work out these relations. (A numerical spot-check is sketched below.)
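As a sanity check, the last two relations can be tested numerically; a minimal sketch, assuming SciPy:

```python
from scipy.special import jv

# Spot-check J_{a-1} + J_{a+1} = (2a/x) J_a and J_{a-1} - J_{a+1} = 2 J_a'
# (the derivative is approximated by a central difference).
a, h = 1.5, 1e-6
for x in (0.5, 1.0, 3.0):
    assert abs(jv(a - 1, x) + jv(a + 1, x) - 2*a/x * jv(a, x)) < 1e-12
    dJ = (jv(a, x + h) - jv(a, x - h)) / (2 * h)
    assert abs(jv(a - 1, x) - jv(a + 1, x) - 2 * dJ) < 1e-6
print("recurrence relations verified numerically")
```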

*** End ***
