Department of Mathematics
IIT Guwahati
1. y′(x) = x, y(0) = 1 (a unique solution y = x^2/2 + 1).
2. F(x, y, y′) = 0, y(x0) = y0.
Well-posed IVP
Theorem (Peano's Theorem):
Let R : |x − x0| ≤ a, |y − y0| ≤ b be a rectangle. If f ∈ C(R), then the IVP y′ = f(x, y), y(x0) = y0 has at least one solution y(x). This solution is defined for all x in the interval |x − x0| ≤ h, where
h = min{a, b/K}, K = max_{(x,y)∈R} |f(x, y)|.
Theorem (Picard's Theorem):
Let f ∈ C(R) and satisfy the Lipschitz condition with respect to y in R, i.e., there exists a number L such that
|f(x, y1) − f(x, y2)| ≤ L|y1 − y2| for all (x, y1), (x, y2) ∈ R.
Then the IVP has a unique solution on |x − x0| ≤ h.
A sufficient condition: |∂f/∂y (x, y)| ≤ L for all (x, y) ∈ R.
Example (non-uniqueness without the Lipschitz condition):
y′ = 3y^{2/3} for x ∈ R, y(0) = 0,
has the two solutions y = 0 and y = x^3; here f(x, y) = 3y^{2/3} is not Lipschitz in y near y = 0.
Theorem (Continuous dependence on initial conditions):
Let f, ∂f/∂y ∈ C(R), and let ϕ(x), ϕm(x) be the solutions of the IVP with initial values y0, y0m, respectively. Then, as y0m → y0, ϕm → ϕ uniformly on [x0 − h, x0 + h].
Theorem (Continuous dependence on f):
Let f, fm, ∂f/∂y, ∂fm/∂y ∈ C(R), and (x0, y0) ∈ R. Let ϕ(x) be the solution of
y′ = f(x, y), y(x0) = y0,
and ϕm(x) be the solution of
y′ = fm(x, y), y(x0) = y0.
Here h is formed with L̂ = min{L, Lm}, where |∂f/∂y (x, y)| ≤ L and |∂fm/∂y (x, y)| ≤ Lm for all (x, y) ∈ R. Then, as fm → f, ϕm → ϕ uniformly on [x0 − h, x0 + h].
SU/KSK MA-102 (2018)
First-Order ODE: Existence and Uniqueness Results
Separable Equations
Definition: A first-order equation y′(x) = f(x, y) is separable if it can be written in the form
dy/dx = g(x)p(y).
Method for solving separable equations: To solve the equation
dy/dx = g(x)p(y),
we write it as h(y)dy = g(x)dx, where h(y) := 1/p(y). Integrating both sides,
∫ h(y)dy = ∫ g(x)dx =⇒ H(y) = G(x) + C.
Then
H′(y) dy/dx = G′(x).
Since (d/dx)H(y(x)) = H′(y(x)) dy/dx (by the chain rule), we obtain
(d/dx)H(y(x)) = (d/dx)G(x) ⇒ H(y(x)) = G(x) + C.
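The method can be checked symbolically. A minimal SymPy sketch, using the illustrative choice g(x) = x, p(y) = y^2 (not an example from the notes):

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

# dy/dx = g(x) p(y) with g(x) = x, p(y) = y**2  (illustrative choice)
ode = sp.Eq(y(x).diff(x), x * y(x)**2)
sol = sp.dsolve(ode, y(x))

# checkodesol substitutes the solution back into the ODE
assert sp.checkodesol(ode, sol)[0]
```

Separating by hand, dy/y^2 = x dx gives −1/y = x^2/2 + C, which matches the closed form SymPy returns up to the name of the constant.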
Recall that M(x, y)dx + N(x, y)dy = 0 is exact if My = Nx for (x, y) ∈ R.
Example: Consider 4x + 3y + 3(x + y^2)y′ = 0.
Note that M, N ∈ C^1(R) and My = 3 = Nx. Thus, there exists f(x, y) such that fx = 4x + 3y and fy = 3x + 3y^2.
fx = 4x + 3y ⇒ f(x, y) = 2x^2 + 3xy + ϕ(y). Now,
3x + 3y^2 = fy(x, y) = 3x + ϕ′(y)
⇒ ϕ′(y) = 3y^2 ⇒ ϕ(y) = y^3.
Thus, f(x, y) = 2x^2 + 3xy + y^3, and the general solution is given by
2x^2 + 3xy + y^3 = C.
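The exactness test and the potential f found above can be verified with SymPy:

```python
import sympy as sp

x, y = sp.symbols('x y')
M = 4*x + 3*y          # coefficient of dx
N = 3*x + 3*y**2       # coefficient of dy: 3(x + y**2)

# Exactness test: M_y == N_x
assert sp.diff(M, y) == sp.diff(N, x) == 3

# Potential function found in the example
f = 2*x**2 + 3*x*y + y**3
assert sp.simplify(sp.diff(f, x) - M) == 0
assert sp.simplify(sp.diff(f, y) - N) == 0
```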
Definition: If the equation
M(x, y)dx + N(x, y)dy = 0 (2)
is not exact, but the equation
µ(x, y){M(x, y)dx + N(x, y)dy} = 0 (3)
is exact, then µ(x, y) is called an integrating factor of (2).
Example: (1 + 1/y)dx − (x/y^2)dy = 0, y ≠ 0, is exact.
Remark: While (2) and (3) have essentially the same solutions,
it is possible to lose solutions when multiplying by µ(x, y).
y ′ + p(x)y = q(x)y α ,
where p(x), q(x) ∈ C((a, b)) and α ∈ R, is called a
Bernoulli equation.
The solution set of L(y) = g is
Ker(L) + yP,
where yP is any particular solution, i.e., L(yP) = g. Thus every solution has the form
y := c1 y1 + · · · + cn yn + yP
for all x ∈ I.
Proof. Prove for n = 2 (see Theorem 8 in Chapter 3 of Coddington's book).
Solution of Constant Coefficients ODE
By linearity,
L(Σ_{i=1}^n ci e^{ri x}) = 0 ⇒ c1 L(e^{r1 x}) + · · · + cn L(e^{rn x}) = 0.
Thus,
(∂^k/∂r^k)(e^{rx})|_{r=r1} = x^k e^{r1 x}
will be a solution to L(y) = 0 for k = 0, 1, . . . , m − 1.
So, m distinct solutions are
e^{r1 x}, x e^{r1 x}, · · · , x^{m−1} e^{r1 x}.
yp (x) = An xn + · · · + A1 x + A0
and match the coefficients of L(yp ) with those of pn (x):
L(yp ) = pn (x).
Equating coefficients,
2Ax + (3A + 2B) = 3x + 1 =⇒ A = 3/2 and B = −7/4.
Thus, yp(x) = (3/2)x − 7/4.
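The coefficient matching above is consistent with the operator L(y) = y″ + 3y′ + 2y (an assumption, since the notes do not restate L here); under that assumption the particular solution checks out:

```python
import sympy as sp

x = sp.symbols('x')
yp = sp.Rational(3, 2)*x - sp.Rational(7, 4)

# Assumed operator: L(y) = y'' + 3y' + 2y, consistent with L(Ax+B) = 2Ax + (3A+2B)
L = lambda y: sp.diff(y, x, 2) + 3*sp.diff(y, x) + 2*y

assert sp.expand(L(yp)) == 3*x + 1
```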
L(y) = a e^{αx},
where a and α are given constants. Try yp of the form
yp(x) = A e^{αx}
and solve L(yp)(x) = a e^{αx} for the unknown coefficient A.
Form of yp:
• g(x) = pn(x) = an x^n + · · · + a1 x + a0:
yp(x) = x^s Pn(x) = x^s {An x^n + . . . + A1 x + A0}.
Note:
1. The nonnegative integer s is chosen to be the smallest integer so that no term in yp is a solution to L(y) = 0.
2. Pn(x) or PN(x) must include all its terms even if pn(x) has some terms that are zero. Similarly for QN(x).
Example:
1. f (x) = ex , Q = D − 1 (Q annihilates ex ).
2. f (x) = xex , Q = (D − 1)2 .
3. f (x) = e2x sin(4x), Q = (D2 − 4D + 20).
The Annihilator and Operator Methods
Consider
L(y) = g(x), L(y) := an y (n) + an−1 y (n−1) + · · · + a0 y,
where ai ’s are constants.
Suppose Q(g)(x) = 0, then Q(L(y))(x) = Q(g)(x) = 0.
QL(y)(x) = 0 =⇒ y ∈ Ker(QL).
g(x)                                                Annihilator of g
x^{n−1}                                             D^n
e^{αx}                                              (D − α)
cos(βx) or sin(βx)                                  D^2 + β^2
x^{n−1} e^{αx} cos(βx) or x^{n−1} e^{αx} sin(βx)    [D^2 − 2αD + (α^2 + β^2)]^n
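Two entries of this table can be verified by direct differentiation (illustrative checks, not from the notes):

```python
import sympy as sp

x = sp.symbols('x')
D = lambda f: sp.diff(f, x)

# (D - 1)^2 annihilates x*e^x
g = x*sp.exp(x)
assert sp.simplify(D(D(g) - g) - (D(g) - g)) == 0

# D^2 - 4D + 20 annihilates e^(2x) sin(4x)  (roots 2 ± 4i)
h = sp.exp(2*x)*sp.sin(4*x)
assert sp.simplify(D(D(h)) - 4*D(h) + 20*h) == 0
```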
Note that
• D yp = g(x) ⇒ yp(x) = ∫ g(x)dx. It is natural to define
(1/D) g(x) := ∫ g(x)dx.
• (D − r)yp = g(x), where r is a constant. Formally, we write
yp = (1/(D − r)) g(x).
The solution of (D − r)yp = g(x) is
yp(x) = e^{rx} ∫ e^{−rx} g(x)dx.
(Because e^{∫P(x)dx} is an integrating factor for the ODE dy/dx + P(x)y = q(x).)
Thus, we define
(1/(D − r)) g(x) := e^{rx} ∫ e^{−rx} g(x)dx.
Operators like 1/D, 1/(D − r) are called inverse operators.
Let 1/P(D) be the inverse of the operator P(D). Then the particular solution to P(D)y = g(x) is given by
yp(x) = (1/P(D)) g(x).
Example:
yp(x) = (1/((D − 1)(D − 2))) x e^x
= (1/(D − 1)) [e^{2x} ∫ e^{−2x} x e^x dx] = (1/(D − 1)) [−(1 + x)e^x]
= −e^x ∫ e^{−x}(1 + x)e^x dx = −(1/2)(1 + x)^2 e^x.
Note: The successive integrations are likely to become
complicated and time-consuming.
Example: For P(D) = D^3 − D^2 + D + 1,
yp = (1/P(D)) 3e^{−2x}
= 3e^{−2x}/P(−2)
= 3e^{−2x}/((−2)^3 − (−2)^2 + (−2) + 1)
= −(3/13) e^{−2x}.
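Assuming the operator in this example is P(D) = D^3 − D^2 + D + 1 (inferred from the evaluation P(−2) = −13), the result can be verified symbolically:

```python
import sympy as sp

x = sp.symbols('x')
yp = -sp.Rational(3, 13)*sp.exp(-2*x)

# Assumed operator: P(D) = D^3 - D^2 + D + 1
lhs = sp.diff(yp, x, 3) - sp.diff(yp, x, 2) + sp.diff(yp, x) + yp

assert sp.simplify(lhs - 3*sp.exp(-2*x)) == 0
```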
Variation of Parameters
The variation of parameter is a more general method for
finding a particular solution (yp ). The method applies even
when the coefficients of the differential equation are functions
of x.
Consider L(y) = g(x), where
L(y) := y^{(n)} + p_{n−1}(x)y^{(n−1)} + · · · + p1(x)y′ + p0(x)y,
with p_{n−1}(x), . . . , p0(x) ∈ C(I). We know the general solution to L(y) = g is given by
y(x) = yh(x) + yp(x).
Seek a particular solution of the form yp(x) = Σ_{i=1}^n vi(x) yi(x). To avoid second and higher-order derivatives of the vi's, we impose the condition
Σ_{i=1}^n vi′ yi = 0. (1)
Variation of Parameters, Use of a Known Solution to Find Another and Cauchy-Euler Equation
Therefore,
yp′ = Σ_{i=1}^n vi yi′, if Σ_{i=1}^n vi′ yi = 0.
Again, differentiating yp′, we obtain
yp″ = Σ_{i=1}^n vi yi″ + Σ_{i=1}^n vi′ yi′ = Σ_{i=1}^n vi yi″, if Σ_{i=1}^n vi′ yi′ = 0.
⋮
yp^{(n−1)} = Σ_{i=1}^n vi yi^{(n−1)} + Σ_{i=1}^n vi′ yi^{(n−2)} = Σ_{i=1}^n vi yi^{(n−1)}, if Σ_{i=1}^n vi′ yi^{(n−2)} = 0.
yp^{(n)} = Σ_{i=1}^n vi yi^{(n)} + Σ_{i=1}^n vi′ yi^{(n−1)}.
Recall L(yp) = yp^{(n)} + p_{n−1}(x) yp^{(n−1)} + · · · + p1(x) yp′ + p0(x) yp. Substituting the expressions above,
L(yp) = Σ_{i=1}^n vi yi^{(n)} + Σ_{i=1}^n vi′ yi^{(n−1)} + p_{n−1}(Σ_{i=1}^n vi yi^{(n−1)}) + · · · + p0(v1(x)y1(x) + · · · + vn(x)yn(x)).
Regrouping,
L(yp) = v1 [y1^{(n)} + p_{n−1} y1^{(n−1)} + p_{n−2} y1^{(n−2)} + · · · + p0 y1]
+ v2 [y2^{(n)} + p_{n−1} y2^{(n−1)} + p_{n−2} y2^{(n−2)} + · · · + p0 y2] + · · ·
+ vn [yn^{(n)} + p_{n−1} yn^{(n−1)} + p_{n−2} yn^{(n−2)} + · · · + p0 yn] + Σ_{i=1}^n vi′ yi^{(n−1)}
= v1 L(y1) + · · · + vn L(yn) + Σ_{i=1}^n vi′ yi^{(n−1)}.
Therefore, if we seek v1′, . . . , vn′ that satisfy the system
y1 v1′ + · · · + yn vn′ = 0,
y1′ v1′ + · · · + yn′ vn′ = 0,
⋮
y1^{(n−1)} v1′ + · · · + yn^{(n−1)} vn′ = g,
then
L(yp) = v1 × 0 + v2 × 0 + · · · + vn × 0 + g = g
=⇒ yp is a particular solution of L(y) = g.
In matrix form,

| y1          y2          · · ·  yn          | | v1′ |   | 0    |
| y1′         y2′         · · ·  yn′         | | v2′ |   | 0    |
| ⋮           ⋮                  ⋮           | |  ⋮  | = | ⋮    |
| y1^{(n−2)}  y2^{(n−2)}  · · ·  yn^{(n−2)}  | |     |   | 0    |
| y1^{(n−1)}  y2^{(n−1)}  · · ·  yn^{(n−1)}  | | vn′ |   | g(x) |
This system has a unique solution because its coefficient determinant is the Wronskian,

| y1          · · ·  yn          |
| ⋮                  ⋮           |
| y1^{(n−2)}  · · ·  yn^{(n−2)}  | = W(y1, . . . , yn)(x) ≠ 0
| y1^{(n−1)}  · · ·  yn^{(n−1)}  |

on I, which is true as {y1, . . . , yn} is a fundamental solution set.
By Cramer's rule,

          | y1          · · ·  0     · · ·  yn          |
          | ⋮                  ⋮            ⋮           |
vk′(x) =  | y1^{(n−2)}  · · ·  0     · · ·  yn^{(n−2)}  | / W(y1, y2, · · · , yn)(x),
          | y1^{(n−1)}  · · ·  g(x)  · · ·  yn^{(n−1)}  |

i.e., vk′(x) = g(x)Wk(x) / W(y1, . . . , yn)(x), k = 1, . . . , n,
where Wk(x) is obtained from W(y1, . . . , yn)(x) by replacing the kth column by [0, . . . , 0, 1]^T.
We can express Wk(x) as
Wk(x) = (−1)^{n−k} W(y1, . . . , y_{k−1}, y_{k+1}, . . . , yn)(x)
for k = 1, . . . , n.
v″/v′ = −2 y1′/y1 − p =⇒ z′/z = −2 y1′/y1 − p, z = v′.
Integrating,
z(x) = (1/y1^2) e^{−∫p dx} =⇒ v(x) = ∫ (1/y1^2) e^{−∫p dx} dx.
Thus, the second solution is y2 (x) = v(x)y1 (x).
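A quick sanity check of the reduction-of-order formula, using the illustrative equation y″ − 2y′ + y = 0 with known solution y1 = e^x (so p = −2; not an example from the notes):

```python
import sympy as sp

x = sp.symbols('x')
p = -2                      # coefficient of y' in y'' + p y' + q y = 0
y1 = sp.exp(x)              # known solution

# v = ∫ e^{-∫p dx} / y1^2 dx, second solution y2 = v*y1
v = sp.integrate(sp.exp(-sp.integrate(p, x)) / y1**2, x)
y2 = v*y1

assert sp.simplify(y2 - x*sp.exp(x)) == 0   # second solution is x e^x
assert sp.simplify(sp.diff(y2, x, 2) - 2*sp.diff(y2, x) + y2) == 0
```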
Cauchy-Euler Equation
An equation of the form
an x^n y^{(n)} + a_{n−1} x^{n−1} y^{(n−1)} + · · · + a1 x y′ + a0 y = g(x),
where the ai's are constants, is called a Cauchy-Euler equation.
The substitution x = e^t transforms the above equation into an equation with constant coefficients. For simplicity, take n = 2.
Assume that x > 0 and let x = e^t. By the chain rule,
dy/dt = (dy/dx)(dx/dt) = e^t (dy/dx) = x (dy/dx),
hence
x (dy/dx) = dy/dt.
Differentiating x dy/dx = dy/dt with respect to t, we find that
d^2y/dt^2 = d/dt (x dy/dx) = (dx/dt)(dy/dx) + x d/dt(dy/dx)
= dy/dt + x (d^2y/dx^2)(dx/dt)
= dy/dt + x^2 d^2y/dx^2.
Thus
x^2 d^2y/dx^2 = d^2y/dt^2 − dy/dt.
Hence, for n = 2, the equation becomes
a2 d^2y/dt^2 + (a1 − a2) dy/dt + a0 y = g(e^t).
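A sketch of the substitution's effect, using the illustrative Cauchy-Euler equation x^2 y″ + 2x y′ − 6y = 0 (its t-equation is y_tt + y_t − 6y = 0 with roots 2 and −3, so y = C1 x^2 + C2 x^{−3}):

```python
import sympy as sp

x = sp.symbols('x', positive=True)
y = sp.Function('y')

# Illustrative Cauchy-Euler equation: a2 = 1, a1 = 2, a0 = -6
ode = sp.Eq(x**2*y(x).diff(x, 2) + 2*x*y(x).diff(x) - 6*y(x), 0)
sol = sp.dsolve(ode, y(x))

# Substituting the solution back into the ODE must give zero
assert sp.checkodesol(ode, sol)[0]
```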
Setting
x1(t) := y(t), x2(t) := y′(t), . . . , xn(t) := y^{(n−1)}(t),
we obtain n first-order equations:
x1′(t) = y′(t) = x2(t),
x2′(t) = y″(t) = x3(t),
⋮ (2)
x_{n−1}′(t) = y^{(n−1)}(t) = xn(t),
xn′(t) = y^{(n)}(t) = f(t, x1, x2, . . . , xn).
If (1) has n initial conditions
y(t0) = α1, y′(t0) = α2, . . . , y^{(n−1)}(t0) = αn,
then the system (2) has initial conditions
x1(t0) = α1, x2(t0) = α2, . . . , xn(t0) = αn.
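This reduction is exactly what numerical solvers consume. A minimal SciPy sketch, using the illustrative choice y″ = −y, i.e. f(t, x1, x2) = −x1 (so with y(0) = 0, y′(0) = 1 the solution is y = sin t):

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, x):
    x1, x2 = x          # x1 = y, x2 = y'
    return [x2, -x1]    # x1' = x2, x2' = f(t, x1, x2) = -x1

sol = solve_ivp(rhs, [0, np.pi], [0.0, 1.0], rtol=1e-9, atol=1e-12)

# y(pi) = sin(pi) = 0
assert abs(sol.y[0, -1]) < 1e-6
```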
Systems of First Order Differential Equations
Example:
x′(t) = [ t^3  tan t ; t  sin t ] x(t) + [ √(1 − t) ; 0 ], x(0) = [ −1 ; 1 ].
Theorem (Abel's formula):
If x1, . . . , xn are n solutions to x′(t) = A(t)x(t) on an interval I and t0 is any point of I, then for all t ∈ I,
W(t) = W(t0) exp(∫_{t0}^t Σ_{i=1}^n aii(s) ds).
Fact:
• The Wronskian of solutions to x′(t) = A(t)x(t) is either identically zero or never zero on I.
• A set of n solutions to x′(t) = A(t)x(t) on I is linearly independent on I if and only if W(x1, . . . , xn)(t) ≠ 0 on I.
Representation of Solutions
Theorem (Homogeneous case):
Let x1, . . . , xn be n linearly independent solutions to
x′(t) = A(t)x(t), t ∈ I,
where A(t) is continuous on I. Then, every solution to x′(t) = A(t)x(t) can be expressed in the form
x(t) = c1 x1(t) + · · · + cn xn(t),
where the ci's are constants.
Definition: A set {x1, . . . , xn} of n linearly independent solutions to
x′(t) = A(t)x(t), t ∈ I (∗)
is called a fundamental solution set for (∗) on I.
Example: {x1, x2, x3}, with
x1(t) = e^{2t}(1, 1, 1)^T, x2(t) = e^{−t}(−1, 0, 1)^T, x3(t) = e^{−t}(0, 1, −1)^T,
is a fundamental solution set for the system x′(t) = Ax(t) on R, where
A = [ 0 1 1 ; 1 0 1 ; 1 1 0 ].
Note that Axi(t) = xi′(t), i = 1, 2, 3. Further,
W(t) = det [ e^{2t} −e^{−t} 0 ; e^{2t} 0 e^{−t} ; e^{2t} e^{−t} −e^{−t} ] = −3 ≠ 0.
The fundamental matrix is
Φ(t) = [ e^{2t} −e^{−t} 0 ; e^{2t} 0 e^{−t} ; e^{2t} e^{−t} −e^{−t} ].
Thus, the general solution is
x(t) = Φ(t)c = c1 e^{2t}(1, 1, 1)^T + c2 e^{−t}(−1, 0, 1)^T + c3 e^{−t}(0, 1, −1)^T.
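The eigenpairs and the constant Wronskian in this example can be confirmed numerically:

```python
import numpy as np

A = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]], dtype=float)
v1 = np.array([1.0, 1.0, 1.0])
v2 = np.array([-1.0, 0.0, 1.0])
v3 = np.array([0.0, 1.0, -1.0])

assert np.allclose(A @ v1, 2*v1)    # eigenvalue 2
assert np.allclose(A @ v2, -v2)     # eigenvalue -1
assert np.allclose(A @ v3, -v3)     # eigenvalue -1

t = 0.7                             # any t: here the Wronskian is constant
Phi = np.column_stack([np.exp(2*t)*v1, np.exp(-t)*v2, np.exp(-t)*v3])
assert np.isclose(np.linalg.det(Phi), -3.0)
```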
Theorem (Nonhomogeneous case):
Let xp be a particular solution to
x′(t) = A(t)x(t) + f(t), t ∈ I.
Then every solution is of the form x(t) = xh(t) + xp(t), where xh solves the corresponding homogeneous system.
Recall:
λ is an eigenvalue of A ⇐⇒ P(λ) = 0,
where P(λ) = det(A − λI) is called the characteristic
polynomial of A.
Finding the eigenvalues of A is equivalent to finding the zeros of P(λ). P(λ) = 0 is called the characteristic equation of A.
If A = diag[d11, . . . , dnn], the system uncouples into xi′(t) = dii xi(t), whose solution is
xi(t) = ci e^{dii t},
where the ci's are constants (ci = xi(0)).
Diagonalization Technique
If A has n linearly independent eigenvectors v1, . . . , vn, then
P = [v1, v2, . . . , vn]
is invertible and P^{−1}AP = diag[λ1, . . . , λn].
Example: Note that P^{−1}AP = [ −1 0 ; 0 2 ]. We obtain the uncoupled linear system.
x(t) = eAt c.
converges uniformly on E; that is, for every ε > 0, there exists a natural number n0 (independent of t) such that for all n ≥ n0 and for all t ∈ E,
|Σ_{k=1}^n fk(t) − S(t)| < ε,
where S(t) = Σ_{k=1}^∞ fk(t).
Recall
1 + x + x^2/2! + x^3/3! + · · · = e^x,
or lim_{n→∞} (1 + x + x^2/2! + x^3/3! + · · · + x^n/n!) = e^x.
What about
I + A + A^2/2! + A^3/3! + · · · = ?
or lim_{n→∞} (I + A + A^2/2! + A^3/3! + · · · + A^n/n!) = ?
Theorem: The series of matrices Σ_{k=0}^∞ A^k/k! converges absolutely to a matrix.
Proof: Let ‖A‖ = a. Then
‖A^k/k!‖ ≤ ‖A‖^k/k! = a^k/k! = Mk,
and
Σ_{k=0}^∞ Mk = Σ_{k=0}^∞ a^k/k! = e^a, which converges.
Therefore, by the Weierstrass M-test, the series Σ_{k=0}^∞ A^k/k! converges absolutely to a matrix.
e^{At} = Σ_{k=0}^∞ A^k t^k / k!.
Example: For A = diag[−1, 2],
e^{At} = [ Σ_{k=0}^∞ (−1)^k t^k/k!  0 ; 0  Σ_{k=0}^∞ 2^k t^k/k! ] = [ e^{−t} 0 ; 0 e^{2t} ].
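The diagonal example can be confirmed against SciPy's matrix exponential:

```python
import numpy as np
from scipy.linalg import expm

A = np.diag([-1.0, 2.0])
t = 0.5

# e^{At} for a diagonal matrix is the diagonal of entrywise exponentials
assert np.allclose(expm(A*t), np.diag([np.exp(-t), np.exp(2*t)]))
```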
If A = diag[A1, A2, . . . , Ak] is block diagonal, then
e^{At} = diag[e^{A1 t}, e^{A2 t}, . . . , e^{Ak t}].
The proof follows by the principle of mathematical induction.
d/dt(e^{At}) = A e^{At} lim_{h→0} lim_{k→∞} fk(h), where fk(h) = I + Ah/2! + · · · + A^{k−1}h^{k−1}/k!,
and fk → f uniformly for |h| ≤ 1.
If fn → f uniformly on E and
lim_{t→x} fn(t) = an, n = 1, 2, · · · ,
then the sequence {an} converges and lim_{t→x} f(t) = lim_{n→∞} an. Or in other words,
lim_{t→x} lim_{n→∞} fn(t) = lim_{n→∞} lim_{t→x} fn(t).
Note that
d/dt(e^{At}) = A e^{At},
i.e., each column of e^{At} is a solution to the system of differential equations x′(t) = Ax(t). Since e^{At} is invertible, it follows that the columns of e^{At} are linearly independent and form a fundamental solution set for x′(t) = Ax(t).
Theorem: If A is an n × n constant matrix, then the columns of e^{At} form a fundamental solution set for
x′(t) = Ax(t).
is invertible and
P^{−1}AP = diag[ aj −bj ; bj aj ],
a real 2n × 2n matrix with 2 × 2 blocks along the diagonal. If
Q = [u1 v1 u2 v2 · · · un vn],
then
Q^{−1}AQ = diag[ aj bj ; −bj aj ].
Using the above result, a fundamental matrix Φ(t) is computed as
Φ(t) = e^{At} = P diag[ e^{aj t} [ cos(bj t) −sin(bj t) ; sin(bj t) cos(bj t) ] ] P^{−1}.
P^{−1} = [ 1 0 0 0 ; 0 1 0 0 ; 0 0 1 −1 ; 0 0 0 1 ] and
P^{−1}AP = [ 1 −1 0 0 ; 1 1 0 0 ; 0 0 2 −1 ; 0 0 1 2 ].
General solution is
x(t) = Φ(t)x(0)
P^{−1}AP = diag[λ1, . . . , λk, B_{k+1}, . . . , Bn],
where Bj = [ aj −bj ; bj aj ] for j = k + 1, . . . , n.
Example:
P^{−1}AP = [ −3 0 0 ; 0 2 −1 ; 0 1 2 ].
The fundamental matrix e^{At} is given by
e^{At} = P [ e^{−3t} 0 0 ; 0 e^{2t} cos t −e^{2t} sin t ; 0 e^{2t} sin t e^{2t} cos t ] P^{−1}
= [ e^{−3t} 0 0 ; 0 e^{2t}(cos t + sin t) −2e^{2t} sin t ; 0 e^{2t} sin t e^{2t}(cos t − sin t) ].
The general solution is
x(t) = e^{At} x(0).
Homogeneous Linear Systems with Repeated Eigenvalues and Nonhomogeneous Linear Systems
How do we solve
x′(t) = Ax(t)
when A has repeated eigenvalues?
Definition: Let λ be an eigenvalue of A of multiplicity m ≤ n. Then, for k = 1, . . . , m, any nonzero solution v of
(A − λI)^k v = 0
is called a generalized eigenvector (GEV) of A.
Definition: An n × n matrix N is said to be nilpotent of order k if N^{k−1} ≠ 0 and N^k = 0.
Theorem: P = [v1, . . . , vn] is invertible,
A = S + N, where P^{−1}SP = diag[λj],
the matrix N = A − S is nilpotent of order k ≤ n, and
SN = NS.
Using the above theorem, we have the following result.
Theorem:
e^{At} = P diag[e^{λj t}] P^{−1} (I + Nt + · · · + N^{k−1} t^{k−1}/(k − 1)!).
Then, determine S as
S = P [ 1 0 0 ; 0 2 0 ; 0 0 2 ] P^{−1} = [ 1 0 0 ; −1 2 0 ; 2 0 2 ],
N = A − S = [ 0 0 0 ; 0 0 0 ; −1 1 0 ], and N^2 = 0.
The general solution is
x(t) = e^{At} x(0).
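The claimed properties of this S, N pair (nilpotency, commutativity, and hence e^{At} = e^{St}(I + Nt)) can be confirmed numerically:

```python
import numpy as np
from scipy.linalg import expm

S = np.array([[1, 0, 0],
              [-1, 2, 0],
              [2, 0, 2]], dtype=float)
N = np.array([[0, 0, 0],
              [0, 0, 0],
              [-1, 1, 0]], dtype=float)
A = S + N

assert np.allclose(N @ N, 0)        # N is nilpotent of order 2
assert np.allclose(S @ N, N @ S)    # S and N commute

t = 0.3
# Since SN = NS and N^2 = 0: e^{At} = e^{St} e^{Nt} = e^{St}(I + Nt)
assert np.allclose(expm(A*t), expm(S*t) @ (np.eye(3) + N*t))
```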
P = [v1 u1 · · · vn un] is invertible,
A = S + N, where P^{−1}SP = diag[ aj −bj ; bj aj ].
The matrix N = A − S is nilpotent of order k ≤ 2n, and SN = NS.
e^{At} = P diag[ e^{aj t} [ cos(bj t) −sin(bj t) ; sin(bj t) cos(bj t) ] ] P^{−1} (I + Nt + · · · + N^{k−1} t^{k−1}/(k − 1)!).
S = P [ 0 −1 0 0 ; 1 0 0 0 ; 0 0 0 −1 ; 0 0 1 0 ] P^{−1} = [ 0 −1 0 0 ; 1 0 0 0 ; 0 1 0 −1 ; 1 0 1 0 ],
N = A − S = [ 0 0 0 0 ; 0 0 0 0 ; 0 −1 0 0 ; 1 0 0 0 ], and N^2 = 0.
The general solution is
x(t) = e^{At} x(0).
Remark. The case when A has both real and complex
repeated eigenvalues can be treated by combining the above
two theorems.
The general solution to x′(t) = A(t)x(t) + f(t), t ∈ I, (∗) is given by
x(t) = Φ(t)c + xp(t),
where Φ(t) is a fundamental matrix for the corresponding homogeneous system and xp(t) is a particular solution to (∗).
Seek a particular solution of the form xp(t) = Φ(t)v(t). Then
Φ(t)v′(t) = f(t) =⇒ v(t) = ∫_{t0}^t Φ^{−1}(s)f(s)ds, t0, t ∈ I.
Therefore,
xp(t) = Φ(t) ∫_{t0}^t Φ^{−1}(s)f(s)ds.
Notice that
xp′(t) = Φ′(t) ∫_{t0}^t Φ^{−1}(s)f(s)ds + Φ(t)Φ^{−1}(t)f(t)
= A(t)Φ(t) ∫_{t0}^t Φ^{−1}(s)f(s)ds + f(t)
= A(t)xp(t) + f(t), ∀t ∈ I.
Note that xp(t0) = 0.
Alternate proof:
Φ(t) −→ fundamental matrix of x′(t) = A(t)x(t).
xp(t) = Φ(t) ∫_{t0}^t Φ^{−1}(s)f(s)ds −→ particular solution of x′(t) = A(t)x(t) + f(t).
e^{−At} = [ cos t  sin t ; −sin t  cos t ] = Φ(−t).
The solution of the IVP is
x(t) = e^{At} x0 + e^{At} ∫_{t0}^t e^{−As} f(s)ds
= Φ(t)x0 + Φ(t) ∫_{t0}^t [ f(s) sin(s) ; f(s) cos(s) ] ds.
• If B = [ λ 1 ; 0 λ ], then the solution of the IVP y′(t) = By(t) with y(0) = y0 is
y(t) = e^{λt} [ 1 t ; 0 1 ] y0.
• If B = [ a −b ; b a ], then the solution of the IVP y′(t) = By(t) with y(0) = y0 is
y(t) = e^{at} [ cos bt  −sin bt ; sin bt  cos bt ] y0.
We now discuss the various phase portraits that result from these solutions.
The system (3) is said to have a center at the origin in this case. Whenever A has a pair of purely imaginary complex conjugate eigenvalues ±ib, the phase portrait of the linear system (2) is linearly equivalent to one of the phase portraits shown above. Note that the trajectories (solution curves) lie on circles ‖x(t)‖ = constant. In general, the trajectories of the system (2) lie on ellipses.
Stability of Linear Systems in R2
Example: Consider the system x′(t) = Ax(t), A = [ 0 −4 ; 1 0 ].
Note that A has eigenvalues λ = ±2i. The matrices P and P^{−1} are
P = [ 2 0 ; 0 1 ] and P^{−1} = [ 1/2 0 ; 0 1 ].
Thus
B = P^{−1}AP = [ 0 −2 ; 2 0 ].
We consider y′ = By:
y(t) = e^{Bt} c = [ cos(2t)  −sin(2t) ; sin(2t)  cos(2t) ] [ c1 ; c2 ].
Real eigenvalues λ, µ: the origin is a saddle point when λµ < 0 (unstable).
If this limit does not exist, the power series is said to diverge
at x = c.
This power series in (2) is called the Cauchy product and will
converge for all x in the common interval of convergence for
the power series of f and g.
Differentiation and integration of power series
Theorem: If f(x) = Σ_{n=0}^∞ an (x − x0)^n has a positive radius of convergence R, then f is differentiable in the interval |x − x0| < R, and termwise differentiation gives the power series for the derivative:
f′(x) = Σ_{n=1}^∞ n an (x − x0)^{n−1} for |x − x0| < R.
Example:
1/(1 − x)^2 = 1 + 2x + 3x^2 + 4x^3 + · · · + n x^{n−1} + · · · .
A power series for 1/(1 + x^2):
1/(1 + x^2) = 1 − x^2 + x^4 − x^6 + · · · + (−1)^n x^{2n} + · · · .
Since tan^{−1} x = ∫_0^x 1/(1 + t^2) dt, integrate the series for 1/(1 + x^2) termwise to obtain
tan^{−1} x = x − x^3/3 + x^5/5 − x^7/7 + · · · + (−1)^n x^{2n+1}/(2n + 1) + · · · .
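Truncating the arctan series gives an easy numerical check:

```python
import math

def atan_series(x, terms=60):
    # tan^{-1} x = x - x^3/3 + x^5/5 - ...  (converges for |x| <= 1)
    return sum((-1)**n * x**(2*n + 1) / (2*n + 1) for n in range(terms))

assert abs(atan_series(0.5) - math.atan(0.5)) < 1e-12
```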
Shifting the summation index
The index of a summation in a power series is a dummy index
and hence
Σ_{n=0}^∞ an (x − x0)^n = Σ_{k=0}^∞ ak (x − x0)^k = Σ_{i=0}^∞ ai (x − x0)^i.
Example:
x^3 Σ_{n=0}^∞ n^2 (n − 2) an x^n = Σ_{n=3}^∞ (n − 3)^2 (n − 5) a_{n−3} x^n.
Σ_{n=1}^∞ 2n cn x^{n−1} + Σ_{n=0}^∞ 6cn x^{n+1}
= 2·1·c1 x^0 + Σ_{n=2}^∞ 2n cn x^{n−1} + Σ_{n=0}^∞ 6cn x^{n+1}
= 2c1 + Σ_{k=1}^∞ 2(k + 1)c_{k+1} x^k + Σ_{k=1}^∞ 6c_{k−1} x^k
= 2c1 + Σ_{k=1}^∞ [2(k + 1)c_{k+1} + 6c_{k−1}] x^k,
which is the required form (as a single series) of the sum of the two given series.
2y″ + xy′ + y = 0. (∗∗)
Let's find a power series solution about x = 0. Seek a power series solution of the form
y(x) = Σ_{n=0}^∞ an x^n.
The ratio test shows that this power series converges only for
x = 0. Thus, there is no power series solution valid in any
open interval about x0 = 0. This is because (1) has a singular
point at x = 0.
Thus,
w = x^r is a solution ⇐⇒ r satisfies
a r^2 + (b − a) r + c = 0. (3)
The equation (3) is known as the auxiliary or indicial equation for (2).
where
p̃(x) = p̃0/x, q̃(x) = q̃0/x^2 with p̃0 = b/a, q̃0 = c/a.
The indicial equation is of the form
r(r − 1) + p̃0 r + q̃0 = 0. (5)
If r = r1 is a root of (5), then w(x) = x^{r1} is a solution to (4).
w″(x) = Σ_{n=0}^∞ (n + r)(n + r − 1) an x^{n+r−2}.
Substituting into (4) gives
Σ_{n=0}^∞ (n + r)(n + r − 1) an x^{n+r−2}
+ (Σ_{n=0}^∞ pn x^{n−1}) (Σ_{n=0}^∞ (n + r) an x^{n+r−1})
+ (Σ_{n=0}^∞ qn x^{n−2}) (Σ_{n=0}^∞ an x^{n+r}) = 0.
The equation
(1 − x^2)y″ − 2xy′ + α(α + 1)y = 0, (1)
can be written in the form
y″ + p(x)y′ + q(x)y = 0,
where
p(x) = −2x/(1 − x^2) and q(x) = α(α + 1)/(1 − x^2), if x^2 ≠ 1.
Since 1/(1 − x^2) = Σ_{n=0}^∞ x^{2n} for |x| < 1, both p(x) and q(x) have power series expansions in the open interval (−1, 1).
Thus, seek a power series solution of the form
y(x) = Σ_{n=0}^∞ an x^n, x ∈ (−1, 1).
Thus,
2xy′ = Σ_{n=1}^∞ 2n an x^n = Σ_{n=0}^∞ 2n an x^n,
and
(1 − x^2)y″ = Σ_{n=2}^∞ n(n − 1) an x^{n−2} − Σ_{n=2}^∞ n(n − 1) an x^n
= Σ_{n=0}^∞ (n + 2)(n + 1) a_{n+2} x^n − Σ_{n=0}^∞ n(n − 1) an x^n
= Σ_{n=0}^∞ [(n + 2)(n + 1) a_{n+2} − n(n − 1) an] x^n.
a3 = −(α − 1)(α + 2)/(2·3) a1,
a5 = −(α − 3)(α + 4)/(4·5) a3 = (−1)^2 (α − 1)(α − 3)(α + 2)(α + 4)/5! a1,
⋮
a_{2n+1} = (−1)^n (α − 1)(α − 3) · · · (α − 2n + 1)(α + 2)(α + 4) · · · (α + 2n)/(2n + 1)! a1.
Note: The ratio test shows that y1(x) and y2(x) converge for |x| < 1. These solutions y1(x) and y2(x) satisfy the initial conditions y1(0) = 1, y1′(0) = 0 and y2(0) = 0, y2′(0) = 1, respectively.
Observations
Case I. When α = 2m, we note that
α(α − 2) · · · (α − 2n + 2) = 2m(2m − 2) · · · (2m − 2n + 2) = 2^n m!/(m − n)!
and
(α + 1)(α + 3) · · · (α + 2n − 1) = (2m + 1)(2m + 3) · · · (2m + 2n − 1) = (2m + 2n)! m! / (2^n (2m)! (m + n)!).
Then, in this case, y1(x) becomes
y1(x) = 1 + (m!)^2/(2m)! Σ_{k=1}^m (−1)^k (2m + 2k)! / ((m − k)!(m + k)!(2k)!) x^{2k}.
When [n/2] < r ≤ n, the term x^{2n−2r} has degree less than n, so its nth derivative is zero. This gives
Pn(x) = 1/(2^n n!) d^n/dx^n Σ_{r=0}^n (−1)^r (n choose r) x^{2n−2r} = 1/(2^n n!) d^n/dx^n (x^2 − 1)^n,
and Pn(1) = 1.
• ∫_{−1}^1 Pn(x)Pm(x)dx = 0 if m ≠ n, and = 2/(2n + 1) if m = n.
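The consequences of the Rodrigues formula, Pn(1) = 1 and the orthogonality relation, can be checked with NumPy's Legendre tools (illustrative orders):

```python
import numpy as np
from numpy.polynomial import legendre as L

def inner(n, m, deg=40):
    # ∫_{-1}^{1} P_n P_m dx, computed exactly by Gauss-Legendre quadrature
    xs, ws = L.leggauss(deg)
    return np.sum(ws * L.Legendre.basis(n)(xs) * L.Legendre.basis(m)(xs))

assert abs(inner(3, 5)) < 1e-12                 # orthogonality for m != n
assert abs(inner(4, 4) - 2/(2*4 + 1)) < 1e-12   # norm 2/(2n+1) for m = n
assert np.isclose(L.Legendre.basis(6)(1.0), 1.0)  # P_n(1) = 1
```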
with a0 6= 0.
Similarly, we obtain
y″ = Σ_{n=0}^∞ (n + r)(n + r − 1) an x^{n+r−2}.
This implies
x^r Σ_{n=0}^∞ [(n + r)^2 − α^2] an x^n + x^r Σ_{n=0}^∞ an x^{n+2} = 0.
Since α ≥ 0, we have a1 = 0.
=⇒ z(t) = t^r Σ_{n=0}^∞ an t^n, r^2 − α^2 = 0.
For r = α,
zα(t) = a0 t^α (1 + Σ_{n=1}^∞ (−1)^n t^{2n} / (2^{2n} n! (1 + α)(2 + α) · · · (n + α))), t > 0.
Therefore,
yα(x) = a0 (−x)^α (1 + Σ_{n=1}^∞ (−1)^n x^{2n} / (2^{2n} n! (1 + α)(2 + α) · · · (n + α))), x < 0,
is a solution of (4).
Power Series Solutions to the Bessel Equation
J_{−α}(x) = (x/2)^{−α} Σ_{n=0}^∞ (−1)^n / (n! Γ(n + 1 − α)) (x/2)^{2n}, x > 0,
i.e., J_{−α} is nothing but y_{−α} with a0 = 2^α/Γ(1 − α).
• In fact, J_{−α} as defined above for α ≥ 0, α ∉ Z^+, is a solution of the Bessel equation for x > 0. Why?...
• Conclusion:
If α ∉ Z^+ ∪ {0}, then Jα(x) and J_{−α}(x) are linearly independent on x > 0. The general solution of the Bessel equation for x > 0 is
y(x) = c1 Jα(x) + c2 J_{−α}(x).
• J_{α−1}(x) + J_{α+1}(x) = (2α/x) Jα(x).
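The recurrence can be spot-checked with SciPy's Bessel function jv (sample order and point chosen arbitrarily):

```python
import numpy as np
from scipy.special import jv

alpha, x = 1.5, 2.3   # illustrative non-integer order and point
lhs = jv(alpha - 1, x) + jv(alpha + 1, x)
rhs = (2*alpha/x) * jv(alpha, x)

assert np.isclose(lhs, rhs)
```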