
First Order Linear Systems with Constant Coefficients

Department of Mathematics
IIT Guwahati
SHB/SU

SHB/SU MA-102 (2020)


Solutions of homogeneous first order linear systems

Consider the $n \times n$ system $X'(t) = AX(t)$.

Recall that it has a fundamental matrix $e^{At} = \sum_{k=0}^{\infty} \frac{A^k t^k}{k!}$ and a general solution $X(t) = e^{At}C$, where $C \in \mathbb{R}^n$ is arbitrary.

If $A$ has $n$ real eigenvalues $\lambda_i$, and the corresponding eigenvectors $v_i$, $i = 1, \ldots, n$, form a linearly independent set, then
$$e^{At} = V \,\mathrm{diag}[e^{\lambda_j t}]\, V^{-1}, \quad \text{where } V = [v_1 \cdots v_n].$$
In such a case, another fundamental matrix is given by $V \,\mathrm{diag}[e^{\lambda_j t}] = e^{At}V$.


We discuss the computation of $e^{At}$ when $A$ has complex eigenvalues and/or when $A$ does not have $n$ eigenvectors that form a linearly independent set.
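Before turning to those harder cases, the diagonalizable formula above is easy to sanity-check numerically. A minimal NumPy sketch; the matrix A, the time t, and the series truncation are our own illustrative choices, not from the notes:

```python
import numpy as np

# Illustrative matrix with distinct real eigenvalues (our choice, not from the notes).
A = np.array([[1.0, 2.0],
              [0.0, 3.0]])
t = 0.7

def expm_series(M, terms=40):
    """Truncated power series sum_k M^k / k! for e^M."""
    out, term = np.eye(M.shape[0]), np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

# e^{At} via diagonalization: A = V diag(lam) V^{-1}  =>  e^{At} = V diag(e^{lam t}) V^{-1}.
lam, V = np.linalg.eig(A)
expAt_diag = (V @ np.diag(np.exp(lam * t)) @ np.linalg.inv(V)).real

err = np.max(np.abs(expAt_diag - expm_series(A * t)))
print(err)
```

The two routes to $e^{At}$ agree to machine precision, which is exactly the content of the identity $e^{At} = V\,\mathrm{diag}[e^{\lambda_j t}]\,V^{-1}$.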
Some reminders from Linear Algebra

A matrix $M$ is said to be diagonalizable if it is similar to a diagonal matrix, i.e., there are an invertible matrix $S$ and a diagonal matrix $D$ such that $S^{-1}MS = D$. Clearly the diagonal entries of $D$ are the eigenvalues of $M$ and the columns of $S$ are the corresponding eigenvectors.

Let $M$ be an $n \times n$ matrix. The following are equivalent.
• $M$ is diagonalizable.
• $M$ has $n$ linearly independent eigenvectors.
• There is a basis of $\mathbb{C}^n$ consisting of eigenvectors of $M$.

If $M$ has $n$ distinct eigenvalues, then the corresponding eigenvectors form a linearly independent set. Consequently, a matrix with $n$ distinct eigenvalues is always diagonalizable.

Hence, a non-diagonalizable matrix necessarily has repeated eigenvalues. The converse is not true, however: for example, the $n \times n$ identity matrix for $n > 1$ has a repeated eigenvalue but is diagonalizable.
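The equivalence above suggests a numerical diagnostic: compare the geometric multiplicity of an eigenvalue (the dimension of its eigenspace) with its algebraic multiplicity. A NumPy sketch; the two test matrices are our own illustrations:

```python
import numpy as np

def geometric_multiplicity(M, lam, tol=1e-8):
    """Number of independent eigenvectors of M for eigenvalue lam,
    i.e. dim ker(M - lam*I), counted via near-zero singular values."""
    n = M.shape[0]
    s = np.linalg.svd(M - lam * np.eye(n), compute_uv=False)
    return int(np.sum(s < tol))

# A Jordan block: eigenvalue 2 is repeated but has only ONE eigenvector,
# so this matrix is not diagonalizable.
J = np.array([[2.0, 1.0],
              [0.0, 2.0]])
gm_J = geometric_multiplicity(J, 2.0)

# The 2 x 2 identity also has a repeated eigenvalue (1), yet IS
# diagonalizable: its eigenspace is all of R^2.
gm_I = geometric_multiplicity(np.eye(2), 1.0)
print(gm_J, gm_I)
```

A matrix is diagonalizable exactly when every eigenvalue's geometric multiplicity equals its algebraic multiplicity, which is why the Jordan block fails and the identity does not.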
A is diagonalizable and has only complex eigenvalues

Theorem: Let $A$ be a real diagonalizable matrix of size $2n \times 2n$. Suppose $A$ has $2n$ complex eigenvalues $\lambda_j$ and $\bar{\lambda}_j$ with corresponding complex eigenvectors $w_j$ and $\bar{w}_j$, $j = 1, \ldots, n$. Then the $2n \times 2n$ real matrix
$$P = [\mathrm{Re}\,w_1 \ \mathrm{Im}\,w_1 \ \mathrm{Re}\,w_2 \ \mathrm{Im}\,w_2 \ \cdots \ \mathrm{Re}\,w_n \ \mathrm{Im}\,w_n]$$
is invertible and
$$P^{-1}AP = \mathrm{diag}\begin{pmatrix} \mathrm{Re}\,\lambda_j & \mathrm{Im}\,\lambda_j \\ -\mathrm{Im}\,\lambda_j & \mathrm{Re}\,\lambda_j \end{pmatrix},$$
a real $2n \times 2n$ matrix with $2 \times 2$ blocks along the diagonal.



Using the above result, the solution of the IVP
$$X'(t) = AX(t), \quad X(0) = X_0,$$
is given by
$$X(t) = P \,\mathrm{diag}\left(e^{\mathrm{Re}\lambda_j t}\begin{pmatrix} \cos(\mathrm{Im}\lambda_j t) & \sin(\mathrm{Im}\lambda_j t) \\ -\sin(\mathrm{Im}\lambda_j t) & \cos(\mathrm{Im}\lambda_j t) \end{pmatrix}\right) P^{-1} X_0.$$

Remark: If instead of $P$ we use
$$Q = [\mathrm{Im}\,w_1 \ \mathrm{Re}\,w_1 \ \mathrm{Im}\,w_2 \ \mathrm{Re}\,w_2 \ \cdots \ \mathrm{Im}\,w_n \ \mathrm{Re}\,w_n],$$
then
$$Q^{-1}AQ = \mathrm{diag}\begin{pmatrix} \mathrm{Re}\,\lambda_j & -\mathrm{Im}\,\lambda_j \\ \mathrm{Im}\,\lambda_j & \mathrm{Re}\,\lambda_j \end{pmatrix}.$$



Example: Solve the IVP $X'(t) = AX(t)$, $X(0) = X_0$ with
$$A = \begin{pmatrix} 1 & -1 & 0 & 0 \\ 1 & 1 & 0 & 0 \\ 0 & 0 & 3 & -2 \\ 0 & 0 & 1 & 1 \end{pmatrix}.$$

$A$ has complex eigenvalues $\lambda_1 = 1 + i$, $\lambda_2 = 2 + i$ (as well as $\bar{\lambda}_1 = 1 - i$, $\bar{\lambda}_2 = 2 - i$). A corresponding pair of complex eigenvectors is
$$w_1 = [i \ 1 \ 0 \ 0]^T \quad \text{and} \quad w_2 = [0 \ 0 \ 1+i \ 1]^T.$$

The matrix
$$P = [\mathrm{Re}\,w_1 \ \mathrm{Im}\,w_1 \ \mathrm{Re}\,w_2 \ \mathrm{Im}\,w_2] = \begin{pmatrix} 0 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 1 \\ 0 & 0 & 1 & 0 \end{pmatrix},$$


   
$$P^{-1} = \begin{pmatrix} 0 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & -1 \end{pmatrix} \quad \text{and} \quad P^{-1}AP = \begin{pmatrix} 1 & 1 & 0 & 0 \\ -1 & 1 & 0 & 0 \\ 0 & 0 & 2 & 1 \\ 0 & 0 & -1 & 2 \end{pmatrix}.$$

The solution of the IVP is
$$X(t) = P \begin{pmatrix} e^{t}\cos t & e^{t}\sin t & 0 & 0 \\ -e^{t}\sin t & e^{t}\cos t & 0 & 0 \\ 0 & 0 & e^{2t}\cos t & e^{2t}\sin t \\ 0 & 0 & -e^{2t}\sin t & e^{2t}\cos t \end{pmatrix} P^{-1} X_0$$
$$= \begin{pmatrix} e^{t}\cos t & -e^{t}\sin t & 0 & 0 \\ e^{t}\sin t & e^{t}\cos t & 0 & 0 \\ 0 & 0 & e^{2t}(\cos t + \sin t) & -2e^{2t}\sin t \\ 0 & 0 & e^{2t}\sin t & e^{2t}(\cos t - \sin t) \end{pmatrix} X_0.$$
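The closed form $e^{At} = P \,\mathrm{diag}(e^{\mathrm{Re}\lambda_j t} R_j(t))\, P^{-1}$ can be verified against a truncated power series for $e^{At}$. A NumPy sketch using this example's $A$ and $P$ (the value of t is our own choice):

```python
import numpy as np

A = np.array([[1.0, -1.0, 0.0,  0.0],
              [1.0,  1.0, 0.0,  0.0],
              [0.0,  0.0, 3.0, -2.0],
              [0.0,  0.0, 1.0,  1.0]])
P = np.array([[0.0, 1.0, 0.0, 0.0],
              [1.0, 0.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0, 0.0]])

def expm_series(M, terms=40):
    """Truncated power series sum_k M^k / k! for e^M."""
    out, term = np.eye(M.shape[0]), np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

def rot_block(a, b, t):
    """e^{at} [[cos bt, sin bt], [-sin bt, cos bt]] for eigenvalue a + ib."""
    c, s = np.cos(b * t), np.sin(b * t)
    return np.exp(a * t) * np.array([[c, s], [-s, c]])

t = 0.5
R = np.zeros((4, 4))
R[:2, :2] = rot_block(1.0, 1.0, t)   # lambda_1 = 1 + i
R[2:, 2:] = rot_block(2.0, 1.0, t)   # lambda_2 = 2 + i

expAt = P @ R @ np.linalg.inv(P)
err = np.max(np.abs(expAt - expm_series(A * t)))
print(err)
```

In particular, the (3,3) entry of the product comes out as $e^{2t}(\cos t + \sin t)$, confirming the lower-right block of the solution matrix above.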



A is diagonalizable and has some complex eigenvalues

Theorem: Let $A$ be a diagonalizable matrix with real eigenvalues $\lambda_j$ and corresponding eigenvectors $v_j$, $j = 1, \ldots, k$, and complex eigenvalues $\lambda_j = a_j + ib_j$ and $\bar{\lambda}_j = a_j - ib_j$ with corresponding eigenvectors $w_j = u_j + iv_j$ and $\bar{w}_j = u_j - iv_j$, $j = k+1, \ldots, n$. Then the matrix
$$P = [v_1 \cdots v_k \ \mathrm{Re}\,w_{k+1} \ \mathrm{Im}\,w_{k+1} \cdots \mathrm{Re}\,w_n \ \mathrm{Im}\,w_n]$$
is invertible and
$$P^{-1}AP = \mathrm{diag}[\lambda_1, \ldots, \lambda_k, B_{k+1}, \ldots, B_n], \quad \text{where } B_j = \begin{pmatrix} a_j & b_j \\ -b_j & a_j \end{pmatrix} \text{ for } j = k+1, \ldots, n.$$



Example: Solve the IVP $X'(t) = AX(t)$, $X(0) = X_0$ with
$$A = \begin{pmatrix} -3 & 0 & 0 \\ 0 & 3 & -2 \\ 0 & 1 & 1 \end{pmatrix}.$$

The eigenvalues are $\lambda_1 = -3$ and $\lambda_2 = 2 + i$ (with $\bar{\lambda}_2 = 2 - i$). The corresponding eigenvectors are
$$v_1 = [1 \ 0 \ 0]^T \quad \text{and} \quad w_2 = [0 \ 1+i \ 1]^T,$$
so that
$$P = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 1 \\ 0 & 1 & 0 \end{pmatrix}, \quad P^{-1} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & -1 \end{pmatrix}.$$



 
Then
$$P^{-1}AP = \begin{pmatrix} -3 & 0 & 0 \\ 0 & 2 & 1 \\ 0 & -1 & 2 \end{pmatrix}.$$
The solution of the IVP is
$$X(t) = P \begin{pmatrix} e^{-3t} & 0 & 0 \\ 0 & e^{2t}\cos t & e^{2t}\sin t \\ 0 & -e^{2t}\sin t & e^{2t}\cos t \end{pmatrix} P^{-1} X_0 = \begin{pmatrix} e^{-3t} & 0 & 0 \\ 0 & e^{2t}(\cos t + \sin t) & -2e^{2t}\sin t \\ 0 & e^{2t}\sin t & e^{2t}(\cos t - \sin t) \end{pmatrix} X_0.$$
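Both the block diagonalization and the resulting matrix exponential can be checked numerically with this example's $A$ and $P$; a NumPy sketch (the value of t is our own choice):

```python
import numpy as np

A = np.array([[-3.0, 0.0,  0.0],
              [ 0.0, 3.0, -2.0],
              [ 0.0, 1.0,  1.0]])
P = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 1.0],
              [0.0, 1.0, 0.0]])
Pinv = np.linalg.inv(P)

# P^{-1} A P should be diag(-3, B) with B = [[2, 1], [-1, 2]] for lambda = 2 + i.
B = Pinv @ A @ P
B_expected = np.array([[-3.0, 0.0, 0.0],
                       [ 0.0, 2.0, 1.0],
                       [ 0.0,-1.0, 2.0]])
err_block = np.max(np.abs(B - B_expected))

def expm_series(M, terms=40):
    """Truncated power series sum_k M^k / k! for e^M."""
    out, term = np.eye(M.shape[0]), np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

# e^{At} = P diag(e^{-3t}, e^{2t} R(t)) P^{-1}, with R(t) the rotation block.
t = 0.3
c, s = np.cos(t), np.sin(t)
D = np.array([[np.exp(-3 * t), 0.0, 0.0],
              [0.0,  np.exp(2 * t) * c, np.exp(2 * t) * s],
              [0.0, -np.exp(2 * t) * s, np.exp(2 * t) * c]])
err_exp = np.max(np.abs(P @ D @ Pinv - expm_series(A * t)))
print(err_block, err_exp)
```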



A is not diagonalizable

Q. How to solve the IVP
$$X'(t) = AX(t), \quad X(0) = X_0,$$
when $A$ is not diagonalizable?

Definition: Let $\lambda$ be an eigenvalue of $A$ of multiplicity $m \leq n$. Then any nonzero solution $v$ of
$$(A - \lambda I)^k v = 0,$$
where $k$ is an integer with $1 \leq k \leq m$, is called a generalized eigenvector (GEV) of $A$.

Definition: An $n \times n$ matrix $N$ is said to be nilpotent of order $k$ if $N^{k-1} \neq 0$ and $N^k = 0$. For example,
$$N = \begin{pmatrix} 0 & 0 & 0 \\ 1 & 0 & 0 \\ -1 & 1 & 0 \end{pmatrix} \text{ satisfies } N^2 \neq 0 \text{ but } N^3 = 0.$$
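The order of nilpotency is easy to compute by repeated multiplication; a small NumPy sketch using the example matrix above (the helper function is our own illustration):

```python
import numpy as np

def nilpotency_order(N, max_k=None):
    """Smallest k with N^k = 0, or None if no such k up to max_k exists.
    For an n x n nilpotent matrix, k never exceeds n."""
    n = N.shape[0]
    max_k = max_k or n
    M = np.eye(n)
    for k in range(1, max_k + 1):
        M = M @ N
        if np.allclose(M, 0):
            return k
    return None

N = np.array([[ 0.0, 0.0, 0.0],
              [ 1.0, 0.0, 0.0],
              [-1.0, 1.0, 0.0]])
k = nilpotency_order(N)
print(k)
```

For a strictly lower triangular matrix like this one, each multiplication pushes the nonzero entries one diagonal further down, so $N^3 = 0$ while $N^2 \neq 0$.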
A is not diagonalizable and has only real eigenvalues

Theorem: Let $\lambda_1, \ldots, \lambda_n$ be the real eigenvalues of an $n \times n$ nondiagonalizable real matrix $A$, repeated according to their multiplicity. Then there exists a basis of $\mathbb{R}^n$ consisting of generalized eigenvectors of $A$. If $\{v_1, \ldots, v_n\}$ is such a basis, then the matrix $P = [v_1 \cdots v_n]$ is invertible,
$$A = S + N, \quad \text{where } P^{-1}SP = \mathrm{diag}(\lambda_j),$$
the matrix $N$ is nilpotent of order $k \leq n$, and $SN = NS$.

Using the above theorem, we have the following result.

Theorem: For $A$ as in the above theorem,
$$e^{At} = P \,\mathrm{diag}(e^{\lambda_j t})\, P^{-1}\left[I + Nt + \cdots + \frac{N^{k-1}t^{k-1}}{(k-1)!}\right].$$



Example: Solve $X'(t) = AX(t)$, $X(0) = X_0$, where
$$A = \begin{pmatrix} 1 & 0 & 0 \\ -1 & 2 & 0 \\ 1 & 1 & 2 \end{pmatrix}.$$

The eigenvalues of $A$ are $\lambda_1 = 1$, $\lambda_2 = \lambda_3 = 2$. The corresponding eigenvectors are
$$v_1 = \begin{pmatrix} 1 \\ 1 \\ -2 \end{pmatrix} \quad \text{and} \quad v_2 = \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix}.$$

One GEV corresponding to $\lambda = 2$ and independent of $v_2$ is obtained by solving
$$(A - 2I)^2 v = 0 \;\Longrightarrow\; \begin{pmatrix} 1 & 0 & 0 \\ 1 & 0 & 0 \\ -2 & 0 & 0 \end{pmatrix} v = 0.$$



Choose $v_3 = (0, 1, 0)^T$. The matrix $P$ is then given by
$$P = \begin{pmatrix} 1 & 0 & 0 \\ 1 & 0 & 1 \\ -2 & 1 & 0 \end{pmatrix} \quad \text{and} \quad P^{-1} = \begin{pmatrix} 1 & 0 & 0 \\ 2 & 0 & 1 \\ -1 & 1 & 0 \end{pmatrix}.$$

Then determine $S$ as
$$S = P \begin{pmatrix} 1 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 2 \end{pmatrix} P^{-1} = \begin{pmatrix} 1 & 0 & 0 \\ -1 & 2 & 0 \\ 2 & 0 & 2 \end{pmatrix},$$
$$N = A - S = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ -1 & 1 & 0 \end{pmatrix}, \quad \text{and} \quad N^2 = 0.$$



The solution is then given by
$$X(t) = P \begin{pmatrix} e^{t} & 0 & 0 \\ 0 & e^{2t} & 0 \\ 0 & 0 & e^{2t} \end{pmatrix} P^{-1}[I + Nt]X_0 = \begin{pmatrix} e^{t} & 0 & 0 \\ e^{t} - e^{2t} & e^{2t} & 0 \\ -2e^{t} + (2-t)e^{2t} & te^{2t} & e^{2t} \end{pmatrix} X_0.$$

Note: If $\lambda$ is an eigenvalue of $A$ with multiplicity $n$, then $S = \mathrm{diag}[\lambda]$ with respect to the usual basis for $\mathbb{R}^n$, and $N = A - S$. The solution to the IVP is
$$X(t) = e^{\lambda t}\left[I + Nt + \cdots + \frac{N^{k-1}t^{k-1}}{(k-1)!}\right]X_0.$$



Example: Solve $X'(t) = AX(t)$, $X(0) = X_0$, where
$$A = \begin{pmatrix} 3 & 1 \\ -1 & 1 \end{pmatrix}.$$

The eigenvalues are $\lambda_1 = \lambda_2 = 2$. Thus $S = \begin{pmatrix} 2 & 0 \\ 0 & 2 \end{pmatrix}$ and $N = A - S = \begin{pmatrix} 1 & 1 \\ -1 & -1 \end{pmatrix}$, with $N^2 = 0$, so
$$X(t) = e^{At}X_0 = e^{2t}[I + Nt]X_0 = e^{2t}\begin{pmatrix} 1+t & t \\ -t & 1-t \end{pmatrix} X_0.$$



A is not diagonalizable and has only complex eigenvalues

Theorem: Let $A$ be a real $2n \times 2n$ matrix with complex eigenvalues $\lambda_j$, $\bar{\lambda}_j$, $j = 1, \ldots, n$. Then there exist corresponding complex generalized eigenvectors $w_j$, $\bar{w}_j$, $j = 1, \ldots, n$, such that
$$\{\mathrm{Re}\,w_1, \mathrm{Im}\,w_1, \ldots, \mathrm{Re}\,w_n, \mathrm{Im}\,w_n\}$$
is a basis for $\mathbb{R}^{2n}$. The $2n \times 2n$ real matrix
$$P = [\mathrm{Re}\,w_1 \ \mathrm{Im}\,w_1 \cdots \mathrm{Re}\,w_n \ \mathrm{Im}\,w_n]$$
is then invertible and
$$A = S + N, \quad \text{where } P^{-1}SP = \mathrm{diag}\underbrace{\begin{pmatrix} \mathrm{Re}\,\lambda_j & \mathrm{Im}\,\lambda_j \\ -\mathrm{Im}\,\lambda_j & \mathrm{Re}\,\lambda_j \end{pmatrix}}_{\text{repeated } n \text{ times}}.$$
The matrix $N$ is nilpotent of order $k \leq 2n$, and $SN = NS$.


Theorem: For $A$ as in the previous theorem,
$$e^{At} = P \,\mathrm{diag}\underbrace{\left(e^{a_j t}\begin{pmatrix} \cos(b_j t) & \sin(b_j t) \\ -\sin(b_j t) & \cos(b_j t) \end{pmatrix}\right)}_{\text{repeated } n \text{ times}} P^{-1}\left[I + \cdots + \frac{N^{k-1}t^{k-1}}{(k-1)!}\right],$$
where $a_j = \mathrm{Re}\,\lambda_j$, $b_j = \mathrm{Im}\,\lambda_j$, $j = 1, \ldots, n$.


Example: Solve the IVP $X'(t) = AX(t)$, $X(0) = X_0$, where
$$A = \begin{pmatrix} 0 & -1 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & -1 \\ 2 & 0 & 1 & 0 \end{pmatrix}.$$

The matrix $A$ has eigenvalues $\lambda = i$ and $\bar{\lambda} = -i$, each of multiplicity 2. To find the generalized eigenvectors, we need to solve the equations
$$(A - \lambda I)w = 0, \quad (A - \lambda I)^2 w = 0,$$
where $w = [w_1 \ w_2 \ w_3 \ w_4]^T$.
Now $(A - \lambda I)w = 0$ holds for $w_1 = (0, 0, i, 1)^T$; thus we have one eigenvector corresponding to $\lambda = i$. Next, $(A - \lambda I)^2 w = 0$, where
$$(A - \lambda I)^2 = \begin{pmatrix} -2 & 2i & 0 & 0 \\ -2i & -2 & 0 & 0 \\ -2 & 0 & -2 & 2i \\ -4i & -2 & -2i & -2 \end{pmatrix},$$
is solved by $w_2 = (i, 1, 0, 1)^T$; thus we have one generalized eigenvector for $\lambda = i$. The eigenvector and generalized eigenvector corresponding to $-i$ are $\bar{w}_1$ and $\bar{w}_2$ respectively.

Thus $\mathrm{Re}\,w_1 = (0, 0, 0, 1)^T$, $\mathrm{Im}\,w_1 = (0, 0, 1, 0)^T$, $\mathrm{Re}\,w_2 = (0, 1, 0, 1)^T$, and $\mathrm{Im}\,w_2 = (1, 0, 0, 0)^T$.

The matrices $P$ and $P^{-1}$ are given by
$$P = \begin{pmatrix} 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ 1 & 0 & 1 & 0 \end{pmatrix}, \quad P^{-1} = \begin{pmatrix} 0 & -1 & 0 & 1 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0 \end{pmatrix}.$$
   
Then
$$S = P \begin{pmatrix} 0 & 1 & 0 & 0 \\ -1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & -1 & 0 \end{pmatrix} P^{-1} = \begin{pmatrix} 0 & -1 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & -1 \\ 1 & 0 & 1 & 0 \end{pmatrix},$$
$$N = A - S = \begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & -1 & 0 & 0 \\ 1 & 0 & 0 & 0 \end{pmatrix}, \quad \text{and} \quad N^2 = 0.$$



The solution to the IVP is given by
$$X(t) = P \begin{pmatrix} \cos t & \sin t & 0 & 0 \\ -\sin t & \cos t & 0 & 0 \\ 0 & 0 & \cos t & \sin t \\ 0 & 0 & -\sin t & \cos t \end{pmatrix} P^{-1}[I + Nt]X_0$$
$$= \begin{pmatrix} \cos t & -\sin t & 0 & 0 \\ \sin t & \cos t & 0 & 0 \\ -t\sin t & \sin t - t\cos t & \cos t & -\sin t \\ \sin t + t\cos t & -t\sin t & \sin t & \cos t \end{pmatrix} X_0.$$
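As in the real case, the decomposition $A = S + N$ and the product formula $e^{At} = Pe^{Bt}P^{-1}(I + Nt)$ can be verified numerically. A NumPy sketch with this example's $A$ and $P$ (the value of t is our own choice):

```python
import numpy as np

A = np.array([[0.0, -1.0, 0.0,  0.0],
              [1.0,  0.0, 0.0,  0.0],
              [0.0,  0.0, 0.0, -1.0],
              [2.0,  0.0, 1.0,  0.0]])
P = np.array([[0.0, 0.0, 0.0, 1.0],
              [0.0, 0.0, 1.0, 0.0],
              [0.0, 1.0, 0.0, 0.0],
              [1.0, 0.0, 1.0, 0.0]])
Pinv = np.linalg.inv(P)

# Block for lambda = i: [[Re, Im], [-Im, Re]] = [[0, 1], [-1, 0]].
J = np.array([[0.0, 1.0], [-1.0, 0.0]])
Z = np.zeros((2, 2))
S = P @ np.block([[J, Z], [Z, J]]) @ Pinv
N = A - S

def expm_series(M, terms=40):
    """Truncated power series sum_k M^k / k! for e^M."""
    out, term = np.eye(M.shape[0]), np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

t = 0.6
c, s = np.cos(t), np.sin(t)
R2 = np.array([[c, s], [-s, c]])
expAt = P @ np.block([[R2, Z], [Z, R2]]) @ Pinv @ (np.eye(4) + N * t)

err_nilp = np.max(np.abs(N @ N))
err_exp = np.max(np.abs(expAt - expm_series(A * t)))
print(err_nilp, err_exp)
```

The (4,1) entry of the product comes out as $\sin t + t\cos t$, matching the solution matrix above.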



A is non-diagonalizable with some complex eigenvalues

Theorem: Let $A$ be an $n \times n$ real non-diagonalizable matrix. Suppose $n = m + 2p$ and $A$ has real eigenvalues $\lambda_i$, $i = 1, \ldots, m$, and complex eigenvalues $\lambda_i$, $\bar{\lambda}_i$, $i = m+1, \ldots, m+p$. Then there exist corresponding generalized eigenvectors $w_i \in \mathbb{R}^n$, $i = 1, \ldots, m$, and $w_i, \bar{w}_i \in \mathbb{C}^n$, $i = m+1, \ldots, m+p$, such that
$$\{w_1, \ldots, w_m, \mathrm{Re}\,w_{m+1}, \mathrm{Im}\,w_{m+1}, \ldots, \mathrm{Re}\,w_{m+p}, \mathrm{Im}\,w_{m+p}\}$$
is a basis of $\mathbb{R}^n$. Then
$$P = [w_1 \cdots w_m \ \mathrm{Re}\,w_{m+1} \ \mathrm{Im}\,w_{m+1} \cdots \mathrm{Re}\,w_{m+p} \ \mathrm{Im}\,w_{m+p}]$$
is invertible and
$$A = S + N, \quad \text{where } N^k = 0, \ 0 < k \leq n, \ SN = NS,$$
and $P^{-1}SP = \mathrm{diag}(\lambda_1, \ldots, \lambda_m, B_{m+1}, \ldots, B_{m+p})$, with
$$B_j = \begin{pmatrix} \mathrm{Re}\,\lambda_j & \mathrm{Im}\,\lambda_j \\ -\mathrm{Im}\,\lambda_j & \mathrm{Re}\,\lambda_j \end{pmatrix} \quad \text{for } j = m+1, \ldots, m+p.$$
Exercise: For $A$ as given in the previous theorem, show that
$$e^{At} = e^{St}\left[I + Nt + \cdots + \frac{N^{k-1}t^{k-1}}{(k-1)!}\right],$$
where
$$e^{St} = P \begin{pmatrix} e^{\lambda_1 t} & & & & \\ & \ddots & & & \\ & & e^{\lambda_m t} & & \\ & & & e^{B_{m+1}t} & \\ & & & & \ddots \\ & & & & & e^{B_{m+p}t} \end{pmatrix} P^{-1},$$
with
$$e^{B_j t} = e^{\mathrm{Re}\lambda_j t}\begin{pmatrix} \cos(\mathrm{Im}\lambda_j t) & \sin(\mathrm{Im}\lambda_j t) \\ -\sin(\mathrm{Im}\lambda_j t) & \cos(\mathrm{Im}\lambda_j t) \end{pmatrix}, \quad j = m+1, \ldots, m+p.$$



Nonhomogeneous linear systems

Recall that the general solution of the nonhomogeneous system
$$X'(t) = A(t)X(t) + F(t), \tag{$*$}$$
is given by
$$X(t) = \Phi(t)C + X_p(t),$$
where $\Phi(t)$ is a fundamental matrix for the corresponding homogeneous system and $X_p(t)$ is a particular solution of ($*$).

We know that when $A(t) = A$ is independent of $t$, $\Phi(t) = e^{At}$ is a fundamental matrix; it satisfies $\Phi'(t) = A\Phi(t)$ with $\Phi(0) = I$. Further, any fundamental matrix $\Phi(t)$ of $X'(t) = AX(t)$ is given by $\Phi(t) = e^{At}S$ for some nonsingular matrix $S$; in such a case, clearly $\Phi(0) = S$.

If a fundamental matrix of the corresponding homogeneous system is known, then the method of variation of parameters gives a particular solution $X_p(t)$.
Theorem: If $\Phi(t)$ is a fundamental matrix of $X'(t) = A(t)X(t)$ on $I$, then the function
$$X_p(t) = \Phi(t)\int_{t_0}^{t} \Phi^{-1}(s)F(s)\,ds$$
is the unique solution to $X'(t) = A(t)X(t) + F(t)$ on $I$ satisfying the initial condition $X_p(t_0) = 0$.

Proof. Let $\Phi(t)$ be a fundamental matrix of the system $X'(t) = A(t)X(t)$ on $I$. We seek a particular solution $X_p$ of the form
$$X_p(t) = \Phi(t)v(t),$$
where $v(t)$ is a vector function to be determined. Then
$$X_p'(t) = \Phi'(t)v(t) + \Phi(t)v'(t) = A(t)\Phi(t)v(t) + F(t).$$



Since $\Phi'(t) = A(t)\Phi(t)$, we obtain
$$\Phi(t)v'(t) = F(t) \implies v(t) = \int_{t_0}^{t} \Phi^{-1}(s)F(s)\,ds, \quad t_0, t \in I.$$
Therefore,
$$X_p(t) = \Phi(t)\int_{t_0}^{t} \Phi^{-1}(s)F(s)\,ds.$$

Theorem: If $\Phi(t)$ is any fundamental matrix of $X'(t) = A(t)X(t)$, then the solution of the IVP
$$X'(t) = A(t)X(t) + F(t), \quad X(0) = X_0,$$
is unique on any interval $I$ containing 0 on which $A(t)$ and $F(t)$ are continuous. This unique solution is given by
$$X(t) = \Phi(t)\Phi^{-1}(0)X_0 + \Phi(t)\int_{0}^{t} \Phi^{-1}(s)F(s)\,ds.$$



Proof: A general solution of $X'(t) = A(t)X(t) + F(t)$ is given by
$$X(t) = \Phi(t)C + \Phi(t)\int_{0}^{t} \Phi^{-1}(s)F(s)\,ds.$$
Now $X(0) = X_0 \Rightarrow X_0 = \Phi(0)C \Rightarrow C = \Phi^{-1}(0)X_0$. Thus the solution of the IVP is
$$X(t) = \Phi(t)\Phi^{-1}(0)X_0 + \Phi(t)\int_{0}^{t} \Phi^{-1}(s)F(s)\,ds.$$
The uniqueness of $X(t)$ follows from the theory of such systems of ODEs.



Remark. Choosing $\Phi(t) = e^{At}$, the solution of the IVP
$$X'(t) = AX(t) + F(t), \quad X(0) = X_0,$$
takes the form
$$X(t) = e^{At}X_0 + e^{At}\int_{0}^{t} e^{-As}F(s)\,ds.$$

Example: Solve $X'(t) = AX(t) + F(t)$, where
$$A = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix} \quad \text{and} \quad F(t) = \begin{pmatrix} 0 \\ f(t) \end{pmatrix}.$$
In this case
$$e^{At} = \begin{pmatrix} \cos t & -\sin t \\ \sin t & \cos t \end{pmatrix} = \Phi(t).$$



 
$$e^{-At} = \begin{pmatrix} \cos t & \sin t \\ -\sin t & \cos t \end{pmatrix} = \Phi(-t).$$
The solution of the IVP is
$$X(t) = e^{At}X_0 + e^{At}\int_{0}^{t} e^{-As}F(s)\,ds = \Phi(t)X_0 + \Phi(t)\int_{0}^{t}\begin{pmatrix} f(s)\sin s \\ f(s)\cos s \end{pmatrix} ds.$$
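For a concrete forcing term the integral can be evaluated and the resulting formula checked against the ODE. A NumPy sketch; the choices f(t) = 1 and the initial condition X0 are our own illustrations, not from the notes:

```python
import numpy as np

A = np.array([[0.0, -1.0],
              [1.0,  0.0]])
X0 = np.array([1.0, 2.0])          # arbitrary initial condition (our choice)
f = lambda s: 1.0                  # illustrative forcing term (our choice)

def Phi(t):
    """e^{At}: a rotation matrix for this A."""
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s], [s, c]])

def F(t):
    return np.array([0.0, f(t)])

def X(t):
    # X(t) = Phi(t) X0 + Phi(t) * integral_0^t [f(s) sin s, f(s) cos s]^T ds;
    # for f = 1 the integral evaluates to [1 - cos t, sin t]^T.
    integral = np.array([1.0 - np.cos(t), np.sin(t)])
    return Phi(t) @ X0 + Phi(t) @ integral

# Check that X satisfies X' = A X + F via a centered finite difference,
# and that it matches the initial condition at t = 0.
t, h = 0.8, 1e-6
dX = (X(t + h) - X(t - h)) / (2 * h)
resid = np.max(np.abs(dX - (A @ X(t) + F(t))))
err0 = np.max(np.abs(X(0.0) - X0))
print(resid, err0)
```

The tiny residual confirms that the variation-of-parameters formula really does produce a solution of the nonhomogeneous system.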

*** End ***

