
Chapter 4

Solutions to LTI Systems in State Space Form

4.1 Computing functions of square matrices


In this chapter we define a matrix-valued function of a square matrix, and provide some results for computing these functions. Examples of commonly used functions of a matrix A include the square root √A, the logarithm ln(A), and the matrix exponential exp(A). We are particularly interested in computing the matrix exponential exp(A), as this function appears in the solution to the state and output response of LTI systems (2.1). We will provide three methods for computing the matrix exponential:
1. the diagonalization method
2. the Laplace transform method
3. the Cayley-Hamilton method

Polynomials of a square matrix


A scalar-valued polynomial p(λ) of a scalar variable λ is given by

$$p(\lambda) = \alpha_0 + \alpha_1\lambda + \cdots + \alpha_{r-1}\lambda^{r-1} + \alpha_r\lambda^r.$$

The polynomial p is said to have degree r (or sometimes order r). Given p(λ) we can define a polynomial function of an n × n matrix A as

$$p(A) = \alpha_0 I + \alpha_1 A + \cdots + \alpha_{r-1}A^{r-1} + \alpha_r A^r$$

where powers of a matrix are defined as

$$A^k = \underbrace{A \cdot A \cdots A}_{k\ \text{factors}}, \quad k \ge 1, \qquad A^0 = I.$$


Example 24 If p(λ) = λ³ + 2λ² − 6, then p(A) = A³ + 2A² − 6I, and if

$$A = \begin{bmatrix} 0 & 1 \\ 0 & -2 \end{bmatrix} \qquad (4.1)$$

then

$$A^2 = \begin{bmatrix} 0 & -2 \\ 0 & 4 \end{bmatrix}, \qquad A^3 = \begin{bmatrix} 0 & 4 \\ 0 & -8 \end{bmatrix}$$

and so

$$p(A) = \begin{bmatrix} 0 & 4 \\ 0 & -8 \end{bmatrix} + 2\begin{bmatrix} 0 & -2 \\ 0 & 4 \end{bmatrix} - 6\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} -6 & 0 \\ 0 & -6 \end{bmatrix}.$$

The Cayley-Hamilton theorem [Bay99, Page 209] states that every A ∈ ℝ^{n×n} satisfies its characteristic equation φ(A) = 0, where φ(λ) = det(λI − A).

Example 25 Consider the characteristic polynomial of (4.1),

$$\phi(\lambda) = \lambda^2 + 2\lambda,$$

and we can verify that

$$\phi(A) = A^2 + 2A = 0.$$

We remark that a matrix may also satisfy polynomial equations other than its characteristic equation [Bay99, page 219].
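In MATLAB, the theorem itself is easy to check numerically (a quick sketch using only the core functions poly and polyvalm):

% Verify the Cayley-Hamilton theorem for the matrix (4.1).
A = [0 1; 0 -2];
c = poly(A);           % coefficients of phi(lambda) = det(lambda*I - A) = lambda^2 + 2*lambda
phiA = polyvalm(c, A); % evaluate phi as a *matrix* polynomial
disp(phiA)             % zero matrix, up to round-off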

Functions of a square matrix


Assuming a function f (not necessarily a polynomial) has a Taylor series about λ = 0,

$$f(\lambda) = \sum_{i=0}^{\infty} \frac{d^i f}{d\lambda^i}(0)\,\frac{\lambda^i}{i!},$$

we define the matrix-valued function of a matrix A as

$$f(A) = \sum_{i=0}^{\infty} \frac{1}{i!}\frac{d^i f}{d\lambda^i}(0)\,A^i. \qquad (4.2)$$

Example 26 Given

$$A = \begin{bmatrix} 0 & 0 \\ 1 & 0 \end{bmatrix}$$

compute f(A) if f(λ) = sin λ.
Since the Taylor series for sin is

$$f(\lambda) = \sin\lambda = \lambda - \lambda^3/3! + \lambda^5/5! - \cdots$$

we have

$$f(A) = \sin A = A - A^3/3! + A^5/5! - \cdots$$

Since A^k = 0 for k ≥ 2, we have

$$f(A) = \sin A = A = \begin{bmatrix} 0 & 0 \\ 1 & 0 \end{bmatrix}.$$


We remark that, in general, we cannot compute f(A) element-wise; i.e., the ij-th element of f(A) is not equal to f(aᵢⱼ), where aᵢⱼ denotes the ij-th element of A. This fact was shown in Example 26. However, for some special types of matrices (e.g., diagonal matrices) we can compute functions of matrices element-wise.
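This difference is easy to see in MATLAB (a short check using only built-in functions): expm computes the matrix exponential, while exp acts element-wise.

% Element-wise exp(A) is not the matrix exponential expm(A).
A = [0 0; 1 0];
expm(A)   % matrix exponential: [1 0; 1 1], since A^2 = 0
exp(A)    % element-wise exponential: [1 1; 2.7183 1] -- not the same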

Computing functions of a matrix using diagonalization


The power of a diagonal matrix can be computed element-wise by taking the power of its diagonal elements; i.e., given the diagonal matrix

$$\hat{A} = \mathrm{diag}(a_{11}, a_{22}, \ldots, a_{nn})$$

we have

$$\hat{A}^k = \mathrm{diag}(a_{11}^k, a_{22}^k, \ldots, a_{nn}^k).$$
Assuming a function f has a convergent Taylor series about λ = 0,

$$f(\lambda) = \sum_{i=0}^{\infty} \frac{d^i f}{d\lambda^i}(0)\,\frac{\lambda^i}{i!},$$

the function of a diagonal matrix f(Â) is readily obtained by evaluating the function element-wise:

$$f(\hat{A}) = \sum_{i=0}^{\infty} \frac{1}{i!}\frac{d^i f}{d\lambda^i}(0)\,\hat{A}^i = \sum_{i=0}^{\infty} \frac{1}{i!}\frac{d^i f}{d\lambda^i}(0)\,\mathrm{diag}(a_{11}^i, \ldots, a_{nn}^i) = \mathrm{diag}(f(a_{11}), \ldots, f(a_{nn})).$$

For example, if we consider the function f(λ) = e^{λt}, then

$$f(\hat{A}) = \exp(\hat{A}t) = \mathrm{diag}(e^{a_{11}t}, \ldots, e^{a_{nn}t}).$$

Next we remark that when Â is similar to A, i.e., there exists a nonsingular matrix M such that

$$\hat{A} = M^{-1}AM,$$

then for any function f we have

$$f(A) = \sum_{i=0}^{\infty} \frac{1}{i!}\frac{d^i f}{d\lambda^i}(0)\,A^i = \sum_{i=0}^{\infty} \frac{1}{i!}\frac{d^i f}{d\lambda^i}(0)\,M\hat{A}^i M^{-1} = M f(\hat{A})\, M^{-1}. \qquad (4.3)$$

This can be seen since

$$A^k = (M\hat{A}M^{-1})(M\hat{A}M^{-1})\cdots(M\hat{A}M^{-1}) = M\hat{A}^k M^{-1},$$

where we have used M⁻¹M = I.


Hence when  is diagonal f (Â) is easy to compute, and (4.3) with M taken as the modal
matrix of A (the columns of M being the eigenvectors of A) can be used for computing
f (A) for diagonalizable A. This method for computing f (A) will be referred to as the
diagonalization method.
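The diagonalization method translates directly into MATLAB. The following sketch (assuming the Symbolic Math Toolbox for the symbolic variable t, and a diagonalizable A) computes exp(At) via (4.3); note that the eigenvalue ordering returned by eig may differ from a hand computation, though the final product does not.

% Diagonalization method: f(A) = M f(Ahat) M^(-1), with f(lambda) = exp(lambda*t).
syms t
A = [0 1; 0 -2];
[M, Ahat] = eig(sym(A));                 % modal matrix M, Ahat = diag(lambda_i)
expAt = M * diag(exp(diag(Ahat)*t)) / M; % apply f element-wise on the diagonal
simplify(expAt)                          % [1, 1/2 - exp(-2*t)/2; 0, exp(-2*t)]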

Example 27 Given

$$A = \begin{bmatrix} 0 & 1 \\ 0 & -2 \end{bmatrix}$$

compute exp(At) using the diagonalization method.
Since the matrix is triangular, we can read the eigenvalues off the diagonal entries of A. Hence, the eigenvalues of A are λ₁ = 0, λ₂ = −2. Since A has distinct eigenvalues, we can compute two linearly independent eigenvectors to diagonalize A. This set of eigenvectors is

$$\left\{ \begin{bmatrix} 1 \\ 0 \end{bmatrix}, \begin{bmatrix} 1 \\ -2 \end{bmatrix} \right\}.$$

We define the modal matrix whose columns are eigenvectors of A:

$$M = \begin{bmatrix} 1 & 1 \\ 0 & -2 \end{bmatrix}.$$

Hence

$$\hat{A} = M^{-1}AM = \mathrm{diag}(0, -2)$$

and

$$f(\hat{A}) = \exp(\hat{A}t) = \mathrm{diag}(1, e^{-2t}),$$

and therefore using (4.3) we have

$$f(A) = \exp(At) = M f(\hat{A})\, M^{-1} = M \exp(\hat{A}t)\, M^{-1} = \begin{bmatrix} 1 & \tfrac{1}{2}(1 - e^{-2t}) \\ 0 & e^{-2t} \end{bmatrix}.$$

Example 28 Given

$$A = \begin{bmatrix} 0 & -2 \\ 2 & -2 \end{bmatrix}$$

compute exp(At) using the diagonalization method.
The characteristic polynomial of A is

$$\phi(\lambda) = \det(\lambda I - A) = \det\begin{bmatrix} \lambda & 2 \\ -2 & \lambda + 2 \end{bmatrix} = \lambda^2 + 2\lambda + 4.$$

Hence the eigenvalues of A are complex, λ₁ = −1 + j√3, λ₂ = −1 − j√3, and distinct. We can therefore use the diagonalization method to compute exp(At). We solve (λ₁I − A)x = 0 to obtain the eigenvector for λ₁. Since λ₁ is complex, its associated eigenvector will be complex:

$$\lambda_1 I - A = \begin{bmatrix} -1 + j\sqrt{3} & 2 \\ -2 & 1 + j\sqrt{3} \end{bmatrix}. \qquad (4.4)$$

Performing R₂(−1 + j√3) + 2R₁ → R₂ and R₁(−1 − j√3)/4 → R₁ we have the REF of (4.4)

$$\begin{bmatrix} 1 & \frac{-1 - j\sqrt{3}}{2} \\ 0 & 0 \end{bmatrix}.$$

(Why do we expect one row of zeros in REF?) Hence an eigenvector for λ₁ is

$$x = \begin{bmatrix} \frac{1 + j\sqrt{3}}{2} \\ 1 \end{bmatrix}t, \qquad t \in \mathbb{C}.$$

The eigenvector for λ₂ is the conjugate of that of λ₁:

$$\begin{bmatrix} \frac{1 - j\sqrt{3}}{2} \\ 1 \end{bmatrix}t, \qquad t \in \mathbb{C}.$$

Hence the modal matrix is

$$M = \begin{bmatrix} \frac{1 + j\sqrt{3}}{2} & \frac{1 - j\sqrt{3}}{2} \\ 1 & 1 \end{bmatrix} \;\Rightarrow\; M^{-1} = \begin{bmatrix} -j/\sqrt{3} & \frac{1}{2} + \frac{j\sqrt{3}}{6} \\ j/\sqrt{3} & \frac{1}{2} - \frac{j\sqrt{3}}{6} \end{bmatrix}.$$

We have

$$f(\hat{A}) = e^{-t}\begin{bmatrix} e^{j\sqrt{3}t} & 0 \\ 0 & e^{-j\sqrt{3}t} \end{bmatrix}$$

and so

$$f(A) = \exp(At) = M f(\hat{A})\, M^{-1} = M \exp(\hat{A}t)\, M^{-1} = e^{-t}\begin{bmatrix} \frac{1}{\sqrt{3}}\sin(\sqrt{3}t) + \cos(\sqrt{3}t) & -\frac{2}{\sqrt{3}}\sin(\sqrt{3}t) \\ \frac{2}{\sqrt{3}}\sin(\sqrt{3}t) & \cos(\sqrt{3}t) - \frac{1}{\sqrt{3}}\sin(\sqrt{3}t) \end{bmatrix}.$$


Since not all matrices can be diagonalized, [Bay99, Pages 215–219] provides a more general
but similar method of computing f (A) using a Jordan form. We will not be considering this
procedure due to time constraints.

Computing exp(A) using the Laplace transform


From (4.2) the matrix exponential is defined in terms of an infinite series:

$$\exp(At) = \sum_{k=0}^{\infty} \frac{t^k}{k!}\, A^k. \qquad (4.5)$$

This series converges for all A and t (i.e., exp is an analytic function on ℝ) and we have immediately exp(0) = I. If we differentiate (4.5) term by term,

$$\frac{d}{dt}\exp(At) = \frac{d}{dt}\sum_{k=0}^{\infty} \frac{(At)^k}{k!} = \sum_{k=1}^{\infty} \frac{kA^k t^{k-1}}{k!} = A\sum_{i=0}^{\infty} \frac{A^i t^i}{i!} = \left(\sum_{i=0}^{\infty} \frac{A^i t^i}{i!}\right)A,$$

we obtain the useful identity

$$\frac{d}{dt}\exp(At) = A\exp(At) = \exp(At)A. \qquad (4.6)$$
Using the differentiation property of the Laplace transform

$$\mathcal{L}\left\{\frac{df}{dt}(t)\right\} = sF(s) - f(0^-) \qquad (4.7)$$

and taking Laplace transforms of both sides of (4.6), we obtain

$$s\mathcal{L}\{\exp(At)\} - \exp(0) = A\mathcal{L}\{\exp(At)\} \quad\Rightarrow\quad (sI - A)\,\mathcal{L}\{\exp(At)\} = I$$

or

$$\exp(At) = \mathcal{L}^{-1}\{(sI - A)^{-1}\}. \qquad (4.8)$$

The relation (4.8) gives a convenient way to compute exp(At) using Laplace transforms.
Example 29 Given

$$A = \begin{bmatrix} 0 & -1 \\ 1 & -2 \end{bmatrix}$$

we compute

$$(sI - A)^{-1} = \begin{bmatrix} s & 1 \\ -1 & s+2 \end{bmatrix}^{-1} = \frac{1}{(s+1)^2}\begin{bmatrix} s+2 & -1 \\ 1 & s \end{bmatrix}.$$

Using the inverse Laplace transform L⁻¹{1/(s+1)²} = te⁻ᵗ and the derivative property of the Laplace transform (4.7), we have

$$\mathcal{L}^{-1}\left\{\frac{s}{(s+1)^2}\right\} = \frac{d}{dt}(te^{-t}) = e^{-t} - te^{-t}.$$

Hence, we obtain

$$\exp(At) = \begin{bmatrix} 2te^{-t} - te^{-t} + e^{-t} & -te^{-t} \\ te^{-t} & e^{-t} - te^{-t} \end{bmatrix} = \begin{bmatrix} (t+1)e^{-t} & -te^{-t} \\ te^{-t} & (1-t)e^{-t} \end{bmatrix}.$$
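The Symbolic Math Toolbox automates this computation; the sketch below assumes ilaplace applies element-wise to a symbolic matrix, which reproduces the result of Example 29.

% Laplace transform method (4.8): exp(A*t) = ilaplace((s*I - A)^(-1)).
syms s t
A = [0 -1; 1 -2];
expAt = simplify(ilaplace(inv(s*eye(2) - A), s, t))
% expAt = [exp(-t)*(t + 1), -t*exp(-t); t*exp(-t), -exp(-t)*(t - 1)]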


Properties of the matrix exponential


In addition to property (4.6), the matrix exponential satisfies two useful identities:
1. exp(A(t₁ + t₂)) = exp(At₁) exp(At₂), ∀t₁, t₂ ∈ ℝ, A ∈ ℝ^{n×n}
2. (exp(At))⁻¹ = exp(−At), ∀t ∈ ℝ, A ∈ ℝ^{n×n}
We remark that Property 2 follows directly from Property 1 if we set t₂ = −t₁. Note, however, that exp((A+B)t) = exp(At) exp(Bt) if and only if AB = BA. Hence, although the matrix exponential shares a number of properties with the exponential of a scalar variable, one should beware of this last difference.
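The failure of the product rule for non-commuting matrices is easy to demonstrate numerically (a sketch; these particular matrices are simply an arbitrary non-commuting pair):

% exp((A+B)t) = exp(At)*exp(Bt) holds iff A*B == B*A; here it fails.
A = [0 1; 0 0];  B = [0 0; 1 0];   % A*B ~= B*A
E1 = expm(A + B);                  % exp((A+B)t) at t = 1
E2 = expm(A) * expm(B);            % exp(At)*exp(Bt) at t = 1
norm(E1 - E2)                      % clearly nonzero: the identity fails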

Computing functions of a matrix using Cayley-Hamilton


The following useful result, based on the Cayley-Hamilton theorem (see [Bay99, Pages 209–214]), allows us to compute any function of an n × n matrix using an (n − 1)th degree polynomial function of the matrix.
Suppose we are given an n × n matrix A with characteristic polynomial

$$\phi(\lambda) = \det(\lambda I - A) = \prod_{i=1}^{q}(\lambda - \lambda_i)^{m_i} \qquad (4.9)$$

and

$$n = \sum_{i=1}^{q} m_i. \qquad (4.10)$$

Expressions (4.9)–(4.10) are another way of saying that matrix A has q distinct eigenvalues
and that each distinct eigenvalue λi has multiplicity mi .
Given some function f for which we want to compute f(A), we define the (n − 1)th degree polynomial

$$g(\lambda) = \alpha_0 + \alpha_1\lambda + \cdots + \alpha_{n-1}\lambda^{n-1}$$

where the αᵢ are such that the following n independent linear equations in the αᵢ are satisfied:

$$\frac{d^l f}{d\lambda^l}(\lambda_i) = \frac{d^l g}{d\lambda^l}(\lambda_i), \qquad 1 \le i \le q, \quad 0 \le l \le m_i - 1. \qquad (4.11)$$
When (4.11) are satisfied we have

$$f(A) = g(A) \qquad (4.12)$$

and we say g and f are equal on the spectrum of A. The relation (4.12) gives us a straightforward means to compute any function of a matrix f(A) by evaluating an (n − 1)th degree polynomial g whose coefficients αᵢ satisfy (4.11). This method for computing f(A) will be referred to as the Cayley-Hamilton method for computing a function of a matrix.

Example 30 Given

$$A = \begin{bmatrix} 0 & 1 \\ -1 & -2 \end{bmatrix}$$

and f(λ) = λ¹⁰⁰, find f(A) using the Cayley-Hamilton method.
Step 1: First we compute the eigenvalues of A. The characteristic polynomial of A is

$$\phi(\lambda) = \det(\lambda I - A) = \det\begin{bmatrix} \lambda & -1 \\ 1 & \lambda + 2 \end{bmatrix} = \lambda(\lambda + 2) + 1 = (\lambda + 1)^2.$$

Hence A has one distinct eigenvalue λ₁ = −1 with multiplicity m₁ = 2.

Step 2: Using (4.11), solve for the n = 2 coefficients of g(λ) = α₀ + α₁λ. Since the multiplicity of λ₁ is m₁ = 2, we need to compute one derivative in (4.11). The relations (4.11) become

$$f(-1) = g(-1) \;\Rightarrow\; (-1)^{100} = 1 = \alpha_0 - \alpha_1$$
$$\frac{df}{d\lambda}(-1) = \frac{dg}{d\lambda}(-1) \;\Rightarrow\; 100(-1)^{99} = -100 = \alpha_1.$$

Solving this linear system for α₀, α₁ we obtain

$$\alpha_0 = -99, \qquad \alpha_1 = -100.$$

Hence

$$g(\lambda) = -99 - 100\lambda.$$

Step 3: Using (4.12) we can use g to readily evaluate f (without having to multiply A one hundred times!):

$$f(A) = A^{100} = g(A) = -99I - 100A = \begin{bmatrix} -99 & 0 \\ 0 & -99 \end{bmatrix} - \begin{bmatrix} 0 & 100 \\ -100 & -200 \end{bmatrix} = \begin{bmatrix} -99 & -100 \\ 100 & 101 \end{bmatrix}.$$
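A one-line MATLAB check confirms the result (direct matrix power against the Cayley-Hamilton answer):

% Check Example 30: A^100 equals g(A) = -99*I - 100*A.
A = [0 1; -1 -2];
norm(A^100 - (-99*eye(2) - 100*A))   % 0, up to round-off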


Example 31 Given

$$A = \begin{bmatrix} 0 & 0 & -2 \\ 0 & 1 & 0 \\ 1 & 0 & 3 \end{bmatrix}$$

and f(λ) = e^{λt}, compute f(A) = exp(At) using the Cayley-Hamilton method.
Step 1: We begin by computing the eigenvalues of A. The characteristic polynomial of A is

$$\phi(\lambda) = \det(\lambda I - A) = \det\begin{bmatrix} \lambda & 0 & 2 \\ 0 & \lambda - 1 & 0 \\ -1 & 0 & \lambda - 3 \end{bmatrix} = \lambda(\lambda - 1)(\lambda - 3) + 2(\lambda - 1) = (\lambda - 1)^2(\lambda - 2).$$

Hence A has an eigenvalue λ₁ = 1 with multiplicity m₁ = 2 and an eigenvalue λ₂ = 2 with multiplicity m₂ = 1.
Step 2: Using (4.11), we solve for the n = 3 coefficients of g(λ) = α₀ + α₁λ + α₂λ². Since the multiplicity of λ₁ is m₁ = 2, we need to compute one derivative in (4.11). Since

$$\frac{dg}{d\lambda}(\lambda) = \alpha_1 + 2\alpha_2\lambda, \qquad \frac{df}{d\lambda}(\lambda) = te^{\lambda t},$$

the relations (4.11) become

$$f(1) = g(1) \;\Rightarrow\; e^t = \alpha_0 + \alpha_1 + \alpha_2$$
$$\frac{df}{d\lambda}(1) = \frac{dg}{d\lambda}(1) \;\Rightarrow\; te^t = \alpha_1 + 2\alpha_2$$
$$f(2) = g(2) \;\Rightarrow\; e^{2t} = \alpha_0 + 2\alpha_1 + 4\alpha_2.$$

Solving this linear system for α₀, α₁, α₂ we obtain

$$\alpha_0 = e^{2t} - 2te^t \qquad (4.13a)$$
$$\alpha_1 = 3te^t + 2e^t - 2e^{2t} \qquad (4.13b)$$
$$\alpha_2 = e^{2t} - e^t - te^t. \qquad (4.13c)$$

Hence

$$g(\lambda) = e^{2t} - 2te^t + (3te^t + 2e^t - 2e^{2t})\lambda + (e^{2t} - e^t - te^t)\lambda^2.$$

Step 3: Using (4.12) we can use g to readily evaluate f(A):

$$f(A) = \exp(At) = g(A) = (e^{2t} - 2te^t)I + (3te^t + 2e^t - 2e^{2t})A + (e^{2t} - e^t - te^t)A^2 = \begin{bmatrix} 2e^t - e^{2t} & 0 & 2e^t - 2e^{2t} \\ 0 & e^t & 0 \\ e^{2t} - e^t & 0 & 2e^{2t} - e^t \end{bmatrix} \qquad (4.14)$$

where we needed

$$A^2 = \begin{bmatrix} -2 & 0 & -6 \\ 0 & 1 & 0 \\ 3 & 0 & 7 \end{bmatrix}.$$


In MATLAB you can check your hand computations of the matrix exponential using the
Symbolic Math Toolbox function sym/expm

>> A=[0 0 -2;0 1 0 ;1 0 3];


>> syms t
>> expm(A*t)

ans =

[ 2*exp(t)-exp(2*t), 0, -2*exp(2*t)+2*exp(t)]
[ 0, exp(t), 0 ]
[ exp(2*t)-exp(t), 0, -exp(t)+2*exp(2*t) ]

As expected, MATLAB’s answer is identical to (4.14).

Example 32 Given

$$A = \begin{bmatrix} 0 & 2 & -2 \\ 0 & 1 & 0 \\ 1 & -1 & 3 \end{bmatrix}$$

and f(λ) = e^{λt}, compute exp(At) using the Cayley-Hamilton method.

Step 1: First we compute the eigenvalues of A. The characteristic polynomial of A is

$$\phi(\lambda) = \det(\lambda I - A) = \det\begin{bmatrix} \lambda & -2 & 2 \\ 0 & \lambda - 1 & 0 \\ -1 & 1 & \lambda - 3 \end{bmatrix} = (\lambda - 1)^2(\lambda - 2).$$

Hence A has an eigenvalue λ₁ = 1 with multiplicity m₁ = 2 and an eigenvalue λ₂ = 2 with multiplicity m₂ = 1.

Step 2: We remark that the spectrum (the set of eigenvalues) of A is the same as in Example 31. Hence we can reuse the values of αᵢ from that example (see expressions (4.13)):

$$\alpha_0 = e^{2t} - 2te^t, \qquad \alpha_1 = 3te^t + 2e^t - 2e^{2t}, \qquad \alpha_2 = e^{2t} - e^t - te^t.$$

Step 3: However, since the matrix A is different, the value of exp(At) will not be the same as that computed in Example 31:

$$f(A) = \exp(At) = g(A) = (e^{2t} - 2te^t)I + (3te^t + 2e^t - 2e^{2t})A + (e^{2t} - e^t - te^t)A^2 = \begin{bmatrix} 2e^t - e^{2t} & 2te^t & 2e^t - 2e^{2t} \\ 0 & e^t & 0 \\ e^{2t} - e^t & -te^t & 2e^{2t} - e^t \end{bmatrix}.$$

4.2 Solutions to LTI systems


In this section we consider the solutions for MIMO LTI systems

$$\dot{x} = Ax + Bu \qquad (4.15a)$$
$$y = Cx + Du, \qquad y \in \mathbb{R}^p,\; u \in \mathbb{R}^m,\; x \in \mathbb{R}^n \qquad (4.15b)$$

with initial state x(0) ∈ ℝⁿ.


First, we recall the differentiation property (4.6) of the matrix exponential:

$$\frac{d}{dt}\exp(At) = A\exp(At) = \exp(At)A. \qquad (4.16)$$
We left multiply both sides of (4.15a) by exp(−At):

$$e^{-At}\dot{x}(t) = e^{-At}Ax(t) + e^{-At}Bu(t). \qquad (4.17)$$

Note that we have included the dependence of x on t in our notation. We can rewrite (4.17) as

$$e^{-At}\dot{x}(t) - e^{-At}Ax(t) = e^{-At}Bu(t).$$

Using (4.16) we can write this last expression as

$$\frac{d}{dt}\left(e^{-At}x(t)\right) = e^{-At}\dot{x}(t) - e^{-At}Ax(t) = e^{-At}Bu(t). \qquad (4.18)$$

Integrating (4.18) from 0 to t gives

$$\int_0^t \frac{d}{d\tau}\left(e^{-A\tau}x(\tau)\right)d\tau = \int_0^t e^{-A\tau}Bu(\tau)\,d\tau.$$

Hence,

$$e^{-A\tau}x(\tau)\Big|_0^t = \int_0^t e^{-A\tau}Bu(\tau)\,d\tau$$
$$e^{-At}x(t) - Ix(0) = \int_0^t e^{-A\tau}Bu(\tau)\,d\tau \qquad (\text{since } \exp(0) = I).$$

Left multiplying by e^{At} and using (e^{−At})⁻¹ = e^{At} gives

$$x(t) = e^{At}x(0) + \int_0^t e^{A(t-\tau)}Bu(\tau)\,d\tau \qquad (4.19)$$

where we have also used the property of the exponential

$$e^{At}e^{-A\tau} = e^{A(t-\tau)}, \qquad \forall t, \tau \in \mathbb{R},\; A \in \mathbb{R}^{n\times n}.$$

The output equation (4.15b) together with (4.19) gives the expression for the output response:

$$y(t) = Cx(t) + Du(t) = \underbrace{Ce^{At}x(0)}_{\text{zero-input response}} + \underbrace{C\int_0^t e^{A(t-\tau)}Bu(\tau)\,d\tau + Du(t)}_{\text{zero-state response}} \qquad (4.20)$$

We remark

• The output response (4.20) is the superposition of two terms labeled above as zero-input
response and zero-state response. The zero-state response is a convolution integral.

• The matrix Φ(t, τ ) = exp(A(t − τ )) which appears in the zero-state response is called
the state-transition matrix of the system (4.15).

• To compute the output or state response using (4.20) or (4.19) we need to compute exp(At) and perform integration; a numerical sketch follows this list.

• Frequency domain models or input/output models (as studied in EE 357) assume the initial state x(0) = 0 and investigate the zero-state output response:

$$y(t) = C\int_0^t e^{A(t-\tau)}Bu(\tau)\,d\tau + Du(t). \qquad (4.21)$$

Letting u(t) = δ(t) in (4.21) we obtain the expression for the impulse response matrix in terms of the state space parameters A, B, C, D:

$$g(t) = Ce^{At}B + D\delta(t).$$

Recall that the transfer matrix G(s) and impulse response matrix g(t) are related by

$$G(s) = C(sI - A)^{-1}B + D = \mathcal{L}\{g(t)\}.$$



• Identical expressions to (4.19), (4.20) for x and y can be obtained when B, C, D are
functions of time. However, similar expressions cannot be obtained for time-varying A
matrices.
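As an illustration of this computation (a numerical sketch; the system matrices, initial state, and time instant are arbitrary choices), the state response (4.19) can be evaluated at a single time by combining expm with numerical quadrature:

% Evaluate x(t) from (4.19) at time t for a unit step input u(tau) = 1.
A = [0 -1; 1 -2];  B = [0; 1];
x0 = [1; 0];  t = 2;
f   = @(tau) expm(A*(t - tau)) * B;               % convolution integrand
xzs = integral(f, 0, t, 'ArrayValued', true);     % zero-state part
xt  = expm(A*t)*x0 + xzs;                         % total state response (4.19)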

Example 33 Compute the unit step response for the SIMO system

$$\dot{x} = \begin{bmatrix} 0 & -1 \\ 1 & -2 \end{bmatrix}x + \begin{bmatrix} 0 \\ 1 \end{bmatrix}u, \qquad y = x,$$

with initial condition x(0) = 0.


Using the Laplace transform method we computed exp(At) previously (Example 29):

$$\exp(At) = \begin{bmatrix} (t+1)e^{-t} & -te^{-t} \\ te^{-t} & (1-t)e^{-t} \end{bmatrix}.$$

From (4.20), with the zero-input response equal to zero since x(0) = 0, we have

$$y(t) = I\int_0^t e^{A(t-\tau)}B\,d\tau = \int_0^t \begin{bmatrix} -(t-\tau)e^{-(t-\tau)} \\ (1-(t-\tau))e^{-(t-\tau)} \end{bmatrix}d\tau = \begin{bmatrix} e^{-t} + te^{-t} - 1 \\ te^{-t} \end{bmatrix}. \qquad (4.22)$$

Note that in order to reduce the amount of computation in (4.22) it is best to perform the product C exp(A(t − τ))B before performing integration. When computing step responses, it is actually possible to avoid the convolution integral in the zero-state response when A⁻¹ exists:

$$\int_0^t e^{A(t-\tau)}Bu(\tau)\,d\tau = \int_0^t e^{A\lambda}\,d\lambda\; B \qquad (4.23)$$

where we used the change of variables λ = t − τ and took u(t) = 1, t ≥ 0. Since

$$\frac{d}{dt}\exp(At) = A\exp(At)$$

we can integrate this relation to get

$$\int_0^t \frac{d}{d\tau}\exp(A\tau)\,d\tau = \exp(At) - I = A\int_0^t \exp(A\tau)\,d\tau.$$

Hence

$$\int_0^t e^{A\lambda}\,d\lambda = A^{-1}(\exp(At) - I), \qquad \text{if } A \text{ is nonsingular}. \qquad (4.24)$$

Since A⁻¹ exists, we can obtain the same answer as (4.22) using (4.24):

$$y(t) = \int_0^t e^{A(t-\tau)}B\,d\tau = \begin{bmatrix} -2 & 1 \\ -1 & 0 \end{bmatrix}\begin{bmatrix} -te^{-t} \\ (1-t)e^{-t} - 1 \end{bmatrix} = \begin{bmatrix} e^{-t} + te^{-t} - 1 \\ te^{-t} \end{bmatrix}.$$
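The same closed form can be reproduced symbolically (a sketch assuming the Symbolic Math Toolbox, whose expm accepts symbolic matrices):

% Step response of Example 33 via (4.24): y = A^(-1)*(exp(A*t) - I)*B.
syms t
A = [0 -1; 1 -2];  B = [0; 1];
y = simplify(A \ (expm(A*t) - eye(2)) * B)
% y = [exp(-t) + t*exp(-t) - 1; t*exp(-t)], matching (4.22)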



Example 34 Find the unit step response of

$$\dot{x} = \begin{bmatrix} 0 & 1 \\ -2 & -3 \end{bmatrix}x + \begin{bmatrix} 0 \\ 1 \end{bmatrix}u, \qquad y = x,$$

for initial state x(0) = (1, 0)ᵀ.


First we find exp(At) using (say) the Laplace transform:

$$(sI - A)^{-1} = \begin{bmatrix} s & -1 \\ 2 & s+3 \end{bmatrix}^{-1} = \frac{1}{(s+1)(s+2)}\begin{bmatrix} s+3 & 1 \\ -2 & s \end{bmatrix} \;\Rightarrow\; A^{-1} = \frac{1}{2}\begin{bmatrix} -3 & -1 \\ 2 & 0 \end{bmatrix}. \qquad (4.25)$$

We compute the inverse Laplace transforms

$$\mathcal{L}^{-1}\left\{\frac{s+3}{(s+1)(s+2)}\right\} = \mathcal{L}^{-1}\left\{\frac{2}{s+1}\right\} + \mathcal{L}^{-1}\left\{\frac{-1}{s+2}\right\} = 2e^{-t} - e^{-2t}$$
$$\mathcal{L}^{-1}\left\{\frac{1}{(s+1)(s+2)}\right\} = \mathcal{L}^{-1}\left\{\frac{1}{s+1}\right\} + \mathcal{L}^{-1}\left\{\frac{-1}{s+2}\right\} = e^{-t} - e^{-2t}$$
$$\mathcal{L}^{-1}\left\{\frac{-2}{(s+1)(s+2)}\right\} = -2e^{-t} + 2e^{-2t}$$
$$\mathcal{L}^{-1}\left\{\frac{s}{(s+1)(s+2)}\right\} = -e^{-t} + 2e^{-2t}.$$

Hence,

$$\exp(At) = \begin{bmatrix} 2e^{-t} - e^{-2t} & e^{-t} - e^{-2t} \\ 2e^{-2t} - 2e^{-t} & 2e^{-2t} - e^{-t} \end{bmatrix}. \qquad (4.26)$$
Substituting into (4.24) gives

$$\int_0^t e^{A(t-\tau)}Bu(\tau)\,d\tau = A^{-1}(\exp(At) - I)B = \begin{bmatrix} \tfrac{1}{2} - e^{-t} + \tfrac{1}{2}e^{-2t} \\ e^{-t} - e^{-2t} \end{bmatrix}. \qquad (4.27)$$

Hence, using (4.20) with (4.26) and (4.27), the output step response is

$$y(t) = x(t) = \underbrace{\begin{bmatrix} 2e^{-t} - e^{-2t} \\ 2e^{-2t} - 2e^{-t} \end{bmatrix}}_{\text{zero-input response}} + \underbrace{\begin{bmatrix} \tfrac{1}{2} - e^{-t} + \tfrac{1}{2}e^{-2t} \\ e^{-t} - e^{-2t} \end{bmatrix}}_{\text{zero-state response}} = \begin{bmatrix} e^{-t} + \tfrac{1}{2} - \tfrac{1}{2}e^{-2t} \\ e^{-2t} - e^{-t} \end{bmatrix}.$$

4.3 System modes


We assume for simplicity that the A-matrix in (4.15a) has n linearly independent eigenvectors qᵢ, 1 ≤ i ≤ n, satisfying Aqᵢ = λᵢqᵢ, which we use as a basis for the state space. We represent the state x in this basis as

$$x(t) = \sum_{i=1}^{n} \xi_i(t)\, q_i = M\xi(t) \qquad (4.28)$$

where ξ = (ξ₁, ..., ξₙ)ᵀ and M is a modal matrix. Using (4.28) we can put the system (4.15) into ξ-coordinates:

$$\dot{\xi} = \mathrm{diag}(\lambda_1, \ldots, \lambda_n)\xi + \hat{B}u \qquad (4.29a)$$
$$y = \hat{C}\xi + Du \qquad (4.29b)$$

where B̂ = M⁻¹B, Ĉ = CM. The equations (4.29) consist of n decoupled linear ODEs, and the ith term of the sum (4.28),

$$\xi_i(t)\, q_i, \qquad (4.30)$$

is called the system mode associated with eigenvalue λᵢ. The representation (the right-hand side of (4.28)) of x in the basis of eigenvectors is called the modal decomposition of the state solution.
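In MATLAB the change to modal coordinates is a direct computation (a sketch for a diagonalizable A; the B and C below are arbitrary illustrative choices):

% Transform (4.15) into the decoupled modal form (4.29).
A = [0 1; -2 -3];  B = [0; 1];  C = eye(2);
[M, Lambda] = eig(A);   % columns of M are eigenvectors; Lambda = diag(lambda_i)
Bhat = M \ B;           % input matrix in xi-coordinates
Chat = C * M;           % output matrix in xi-coordinates
% xi_dot = Lambda*xi + Bhat*u consists of n decoupled scalar ODEs.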

Relation between zero-input response and eigenvectors


When the system (4.15) has no input or the input is set to zero, we have a nice interpretation of the eigenvectors of the A-matrix. Suppose qᵢ is an eigenvector of A, i.e., Aqᵢ = λᵢqᵢ. If ẋ = Ax and x(0) = qᵢα, α ∈ ℝ, then only the mode associated with λᵢ is excited. (Evidently, here we are assuming qᵢ, λᵢ are both real.) Hence, the state solution is given by

$$x(t) = e^{\lambda_i t} q_i\alpha. \qquad (4.31)$$

This is because the zero-input solution is

$$x(t) = \exp(At)x(0) = \left(I + At + \frac{(At)^2}{2!} + \cdots\right) q_i\alpha = q_i\alpha + \lambda_i t\, q_i\alpha + \frac{(\lambda_i t)^2}{2!}\, q_i\alpha + \cdots = e^{\lambda_i t} q_i\alpha \qquad (4.32)$$

where we have used

$$A^2 q_i = A(Aq_i) = \lambda_i A q_i = \lambda_i^2 q_i, \qquad A^3 q_i = A(A^2 q_i) = \lambda_i^2 A q_i = \lambda_i^3 q_i, \qquad \ldots, \qquad A^k q_i = \lambda_i^k q_i, \quad k \ge 0.$$

Hence the zero-input response of (4.15) to an initial condition x(0) = qᵢ is simple: it is the mode of the unforced system ẋ = Ax associated with λᵢ. The solution x(t) = e^{λᵢt}qᵢ is a trajectory which lies in the direction of the vector qᵢ. If λᵢ is positive, the mode grows with time; if λᵢ is negative, the mode decays with time. Hence, for general initial conditions the zero-input response can excite all modes, but for x(0) = qᵢ (or any scalar multiple of qᵢ) only the mode associated with λᵢ is excited. Of course, since initial conditions must be real, this discussion is limited to real eigenvalues and eigenvectors.
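A quick numerical check of (4.31) (a sketch; the eigenpair is computed rather than hard-coded):

% Starting on an eigenvector, the zero-input response stays on that line.
A = [0 1; -2 -3];
[M, D] = eig(A);
qi = M(:,1);  li = D(1,1);         % one real eigenpair of A
t = 0.7;
norm(expm(A*t)*qi - exp(li*t)*qi)  % ~0: only this mode is excited, cf. (4.31)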

However, there is a similar interpretation for complex eigenvalues and eigenvectors. Suppose qᵢ and qᵢ* correspond to a complex conjugate pair of eigenvalues λᵢ and λᵢ* of A. We assume the initial state satisfies

$$x(0) = \alpha q_i + \alpha^* q_i^* = 2\Re(\alpha q_i) \qquad (4.33)$$

for any α ∈ ℂ, where ℜ(·) denotes the real part. For the initial condition (4.33) the zero-input solution of (4.15) is

$$x(t) = \exp(At)\,\alpha q_i + \exp(At)\,\alpha^* q_i^* \qquad \text{(by linearity in the initial condition)}$$
$$= \exp(\lambda_i t)\,\alpha q_i + \exp(\lambda_i^* t)\,\alpha^* q_i^* \qquad \text{(since } Aq_i = \lambda_i q_i\text{)}$$
$$= 2\Re\left(\alpha \exp(\lambda_i t)\, q_i\right)$$
$$= 2\exp(\sigma t)\begin{bmatrix} q_{re} & q_{im} \end{bmatrix}\begin{bmatrix} \cos\omega t & \sin\omega t \\ -\sin\omega t & \cos\omega t \end{bmatrix}\begin{bmatrix} a \\ -b \end{bmatrix} \qquad (4.34)$$

where

$$q_i = q_{re} + jq_{im}, \qquad \lambda_i = \sigma + j\omega, \qquad \alpha = a + jb.$$

The trajectory (4.34) is a linear combination of the vectors q_re and q_im, and hence for (4.33) the state trajectory remains in the plane span{q_re, q_im} for all t ≥ 0. The value of σ determines the rate of decay/growth of the solution in this plane, and ω determines the angular velocity of rotation in the plane.

Example 35 We consider the unforced system

$$\dot{x} = \begin{bmatrix} 0 & 1 \\ -2 & -3 \end{bmatrix}x. \qquad (4.35)$$

The eigenvalues of the system matrix are λ₁ = −1 and λ₂ = −2, with eigenvectors q₁ = [1, −1]ᵀ and q₂ = [1, −2]ᵀ. Figure 4.1 shows the phase portrait of the system, which is a plot of the components of the system's state trajectory. Since the eigenvalues of the system matrix of (4.35) are real, negative, and nonzero, the equilibrium point x = 0 of System (4.35) is called a stable node. When the eigenvalues are real, nonzero, and have opposite signs, the equilibrium is called a saddle, and when the eigenvalues are real and positive, the equilibrium x = 0 is called an unstable node.
For initial conditions proportional to q₁, i.e., x(0) = cq₁, c ∈ ℝ, we only excite the mode corresponding to λ₁. From (4.32) the solution is

$$x(t) = e^{-t} c q_1, \qquad t \ge 0.$$

Similarly, for initial states proportional to q₂, i.e., x(0) = cq₂, c ∈ ℝ, we only excite the mode corresponding to λ₂:

$$x(t) = e^{-2t} c q_2, \qquad t \ge 0.$$

Figure 4.1 shows graphically that for any initial state in the direction of an eigenvector, the trajectory remains in this direction for all time.
The system (4.35) can be transformed to Jordan canonical form (JCF) as

$$\dot{z} = \mathrm{diag}(\lambda_1, \lambda_2)z \qquad (4.36)$$

[Figure: phase plot for ẋ = [0 1; −2 −3]x (stable node, original coordinates), axes x₁, x₂, with the eigenvector directions q₁ = [1, −1]ᵀ and q₂ = [1, −2]ᵀ marked.]

Figure 4.1: Phase portrait of System (4.35)



[Figure: phase plot for ż = diag(−1, −2)z (stable node, modal coordinates), axes z₁, z₂, with the eigenvector directions q₁ = [1, 0]ᵀ and q₂ = [0, 1]ᵀ marked.]

Figure 4.2: Phase portrait of System (4.36)



with Mz = x, where the modal matrix is

$$M = [q_1\; q_2] = \begin{bmatrix} 1 & 1 \\ -1 & -2 \end{bmatrix}.$$

Note that from (4.36) we have z₁(t) = z₁(0)e^{λ₁t} and z₂(t) = z₂(0)e^{λ₂t}, hence

$$z_2 = c z_1^{\lambda_2/\lambda_1},$$

and for λ₂/λ₁ = 2 we have the family of parabolas z₂ = cz₁², as shown in Figure 4.2. Note that in JCF the eigenvectors are the unit vectors.
The phase plots were generated using Richard Murray's phaseplot.m and boxgrid.m, available at www.ece.ualberta.ca/~alanl/ee460/x/examples/portrait.zip

Example 36 We consider the system

$$\dot{x} = \begin{bmatrix} 0 & 1 & 0 \\ -50 & 0 & 0 \\ 0 & 0 & 1 \end{bmatrix}x + \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}u.$$

This system has two imaginary eigenvalues λ₁ = j√50, λ₂ = λ₁* = −j√50 and a real eigenvalue λ₃ = 1. The eigenvector associated with λ₁ is

$$q_1 = \begin{bmatrix} 1 \\ j\sqrt{50} \\ 0 \end{bmatrix} = q_{re} + jq_{im}.$$

The eigenvector associated with λ₂ is

$$q_2 = q_1^* = \begin{bmatrix} 1 \\ -j\sqrt{50} \\ 0 \end{bmatrix} = q_{re} - jq_{im}.$$

We choose an arbitrary complex number α = Ve^{jφ}, V > 0, V ∈ ℝ, and the initial condition (from (4.33)) as

$$x(0) = 2\Re(\alpha q_1) = 2V\begin{bmatrix} \cos\phi \\ -\sqrt{50}\sin\phi \\ 0 \end{bmatrix}. \qquad (4.37)$$

Then from (4.34) we have

$$x(t) = 2\exp(\sigma t)\begin{bmatrix} q_{re} & q_{im} \end{bmatrix}\begin{bmatrix} \cos\omega t & \sin\omega t \\ -\sin\omega t & \cos\omega t \end{bmatrix}\begin{bmatrix} a \\ -b \end{bmatrix} = 2V\begin{bmatrix} 1 & 0 \\ 0 & \sqrt{50} \\ 0 & 0 \end{bmatrix}\begin{bmatrix} \cos(\sqrt{50}\,t) & \sin(\sqrt{50}\,t) \\ -\sin(\sqrt{50}\,t) & \cos(\sqrt{50}\,t) \end{bmatrix}\begin{bmatrix} \cos\phi \\ -\sin\phi \end{bmatrix} = 2V\begin{bmatrix} \cos(\sqrt{50}\,t + \phi) \\ -\sqrt{50}\sin(\sqrt{50}\,t + \phi) \\ 0 \end{bmatrix}.$$

As expected, solutions for initial conditions x(0) = 2ℜ(αq₁) are non-decaying oscillations at a frequency of √50 rad/s. These oscillations remain in the plane span{q_re, q_im} for all t ≥ 0; i.e., the initial condition (4.37) only excites the imaginary modes of the system.
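This can be verified numerically (a sketch; V, φ, and the time instant are arbitrary choices):

% The initial condition (4.37) excites only the oscillatory modes (Example 36).
A = [0 1 0; -50 0 0; 0 0 1];
V = 1;  phi = pi/6;  t = 0.3;
x0 = 2*V*[cos(phi); -sqrt(50)*sin(phi); 0];   % x(0) = 2*Re(alpha*q1), as in (4.37)
xt = expm(A*t)*x0;                            % zero-input response at time t
xexact = 2*V*[cos(sqrt(50)*t + phi); -sqrt(50)*sin(sqrt(50)*t + phi); 0];
norm(xt - xexact)   % ~0, and xt(3) stays zero: motion remains in span{q_re, q_im}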
Bibliography

[Bay99] John S. Bay. Fundamentals of linear state space systems. McGraw-Hill, 1999.

[Che99] Chi-Tsong Chen. Linear System Theory and Design. Oxford, New York, NY, 3rd
edition, 1999.

[Max68] J.C. Maxwell. On governors. Proceedings of the Royal Society of London, 16:270–
283, 1868.
