
State Space Analysis

of Control Systems

EE3L003 CONTROL SYSTEMS


N. C. Sahoo
Introduction
• A linear dynamic system can be described by an ordinary differential equation.
• Using matrix-vector notation, an n-th order differential equation can be expressed as a first-order vector-matrix differential equation, called a state equation.
State: The state of a dynamic system is the smallest
set of variables (called state variables) such that the
knowledge of these variables at t = t0, together with the
knowledge of the input for t ≥ t0, completely determines
the behavior of the system for any time t ≥ t0.
State Variable: The state variables of a dynamic
system are the variables making up the smallest set of
variables that determines the state of the dynamic
system.
N.B:
• State variables need not be physically measurable
or observable quantities. This is in fact an advantage
of the state space methods.
• Practically, it is convenient to choose easily measurable quantities as the state variables, since this may help in direct implementation of some control laws.
State Vector: If n state variables are needed to
completely describe the behavior of a system, then
these state variables can be considered as the
components of a vector x, called a state vector.

State Space: The n-dimensional space whose coordinate axes are the x_1-axis, x_2-axis, ..., x_n-axis, where x_1, x_2, ..., x_n are the state variables.
Any state can be represented by a point in the state
space.
State Space Equations:
⇒ Three types of variables are needed for modeling of
dynamic systems, i.e., input, output and state variables.
⇒ The state space model for a system is not unique; however, the number of state variables is the same for all state space models of a given system.
⇒ Dynamic systems must include elements that memorize the input values for t ≥ t1. Since integrators serve as memory units in continuous-time systems, the outputs of such integrators are conveniently taken as state variables in state space models; they define the internal state of the dynamic system.
• So, the number of state variables to completely
define the dynamics of the system is equal to the
number of integrators involved in the system.
State Equation of LTI System: \dot{x}(t) = A x(t) + B u(t)
Output Equation: y(t) = C x(t) + D u(t)
x = state vector,
u = input vector,
y = output vector, and
A, B, C, D are constant matrices of appropriate dimensions.
Relationship Between Transfer Function and State Space Equation:
G(s) = Y(s)/U(s)
State Model: \dot{x} = A x + B u, y = C x + D u
⇒ s X(s) - x(0) = A X(s) + B U(s)
Y( s)  CX( s)  DU( s)
With zero initial condition {for transfer function}:
1
(sI  A)X(s)  BU(s) ⇒ X(s )  (sI  A) BU(s)

Now, Y(s)  C(sI  A)1 B  D U(s)


Y ( s)
⇒ G s   C( sI  A) 1 B  D
U (s)
1
N.B: RHS of this Eq. involves (sI  A)
Q(s)
So, G(s) can be written as: G  s  
sI  A
Q(s) is the appropriate polynomial.
⇒ sI  A = Characteristic polynomial of G(s)
Hence, the eigenvalues of A are identical to the poles of
G(s).
Example: Consider the mass-spring-damper system shown below. The displacement y(t) is measured from the equilibrium position obtained in the absence of the external force.
System Equation: m \ddot{y} + b \dot{y} + k y = u
• This is a 2nd order system, which means that there
are two integrators.
Let x_1(t) = y(t) and x_2(t) = \dot{y}(t).
Thus, \dot{x}_1 = x_2
\dot{x}_2 = (1/m)(-k y - b \dot{y}) + (1/m) u = -(k/m) x_1 - (b/m) x_2 + (1/m) u
State Eq: [\dot{x}_1; \dot{x}_2] = [0 1; -k/m -b/m] [x_1; x_2] + [0; 1/m] u
Output Eq: y = [1 0] [x_1; x_2]
Comparison with the standard form of the state space model:
A = [0 1; -k/m -b/m], B = [0; 1/m], C = [1 0], D = 0
Block Diagram Model:
G(s)  C(sI  A)1 B  D
1
  0 1  0
 s 0  
 1 0     k 
b   1 0
0 s      
   m m  m
1
 2
ms  bs  k
State Space Model of n-th order system
(Input doesn’t have derivative terms)
System Dynamics:
y^{(n)} + a_1 y^{(n-1)} + ... + a_{n-1} \dot{y} + a_n y = u

( n 1)
State Variables: x1  y, x2  y, ..., xn  y
 
So, x1  x2 x2  x3


xn1  xn

xn  an x1  an1 x2  a1 xn  u

State Eq: \dot{x} = A x + B u
A = [0 1 0 ... 0; 0 0 1 ... 0; ...; 0 0 0 ... 1; -a_n -a_{n-1} -a_{n-2} ... -a_1], B = [0; 0; ...; 0; 1]
Output Eq: y = [1 0 ... 0] [x_1; x_2; ...; x_n] ⇒ y = C x + D u
C = [1 0 ... 0], D = 0
Also, Y(s)/U(s) = 1/(s^n + a_1 s^{n-1} + ... + a_{n-1} s + a_n)
State Space Model of n-th order system
(Input function has derivative terms)
System Dynamics:
y^{(n)} + a_1 y^{(n-1)} + ... + a_{n-1} \dot{y} + a_n y = b_0 u^{(n)} + b_1 u^{(n-1)} + ... + b_{n-1} \dot{u} + b_n u
• State variables must be chosen so that they eliminate the derivatives of u in the state equation.
One choice:
x_1 = y - β_0 u
x_2 = \dot{y} - β_0 \dot{u} - β_1 u = \dot{x}_1 - β_1 u
x_3 = \ddot{y} - β_0 \ddot{u} - β_1 \dot{u} - β_2 u = \dot{x}_2 - β_2 u
...
x_n = y^{(n-1)} - β_0 u^{(n-1)} - β_1 u^{(n-2)} - ... - β_{n-2} \dot{u} - β_{n-1} u = \dot{x}_{n-1} - β_{n-1} u
β_0, β_1, ..., β_n are determined from:
β_0 = b_0, β_1 = b_1 - a_1 β_0, β_2 = b_2 - a_1 β_1 - a_2 β_0, ...,
β_n = b_n - a_1 β_{n-1} - ... - a_{n-1} β_1 - a_n β_0
Thus, \dot{x}_1 = x_2 + β_1 u, \dot{x}_2 = x_3 + β_2 u, ...,
\dot{x}_{n-1} = x_n + β_{n-1} u, \dot{x}_n = -a_n x_1 - a_{n-1} x_2 - ... - a_1 x_n + β_n u
State Eq: \dot{x} = A x + B u
A = [0 1 0 ... 0; 0 0 1 ... 0; ...; 0 0 0 ... 1; -a_n -a_{n-1} -a_{n-2} ... -a_1], B = [β_1; β_2; ...; β_{n-1}; β_n], x = [x_1; x_2; ...; x_n]
Output Eq: y = [1 0 ... 0] [x_1; x_2; ...; x_n] + β_0 u
⇒ Y(s)/U(s) = (b_0 s^n + b_1 s^{n-1} + ... + b_{n-1} s + b_n)/(s^n + a_1 s^{n-1} + ... + a_{n-1} s + a_n)
State Space Model of Transfer Function
System
General System:
y^{(n)} + a_1 y^{(n-1)} + ... + a_{n-1} \dot{y} + a_n y = b_0 u^{(n)} + b_1 u^{(n-1)} + ... + b_{n-1} \dot{u} + b_n u
⇒ Y(s)/U(s) = (b_0 s^n + b_1 s^{n-1} + ... + b_{n-1} s + b_n)/(s^n + a_1 s^{n-1} + ... + a_{n-1} s + a_n)

(1) Controllable Canonical Form:
[\dot{x}_1; \dot{x}_2; ...; \dot{x}_{n-1}; \dot{x}_n] = [0 1 0 ... 0; 0 0 1 ... 0; ...; 0 0 0 ... 1; -a_n -a_{n-1} -a_{n-2} ... -a_1] [x_1; x_2; ...; x_{n-1}; x_n] + [0; 0; ...; 0; 1] u
y = [b_n - a_n b_0   b_{n-1} - a_{n-1} b_0   ...   b_1 - a_1 b_0] [x_1; x_2; ...; x_n] + b_0 u
(2) Observable Canonical Form:
[\dot{x}_1; \dot{x}_2; ...; \dot{x}_n] = [0 0 ... 0 -a_n; 1 0 ... 0 -a_{n-1}; ...; 0 0 ... 1 -a_1] [x_1; x_2; ...; x_n] + [b_n - a_n b_0; b_{n-1} - a_{n-1} b_0; ...; b_1 - a_1 b_0] u
y = [0 0 ... 0 1] [x_1; x_2; ...; x_n] + b_0 u
(3) Diagonal Canonical Form:
• The denominator polynomial of the transfer function involves only distinct poles.
Y(s)/U(s) = (b_0 s^n + b_1 s^{n-1} + ... + b_{n-1} s + b_n)/((s + p_1)(s + p_2)...(s + p_n)) = b_0 + c_1/(s + p_1) + c_2/(s + p_2) + ... + c_n/(s + p_n)
[\dot{x}_1; \dot{x}_2; ...; \dot{x}_n] = [-p_1 0 ... 0; 0 -p_2 ... 0; ...; 0 0 ... -p_n] [x_1; x_2; ...; x_n] + [1; 1; ...; 1] u
y = [c_1  c_2  ...  c_n] [x_1; x_2; ...; x_n] + b_0 u
(4) Jordan Canonical Form:
• The denominator polynomial of the transfer function involves multiple-order poles.
Example:
Y(s)/U(s) = (b_0 s^n + b_1 s^{n-1} + ... + b_{n-1} s + b_n)/((s + p_1)^3 (s + p_4)...(s + p_n))
= b_0 + c_1/(s + p_1)^3 + c_2/(s + p_1)^2 + c_3/(s + p_1) + c_4/(s + p_4) + ... + c_n/(s + p_n)
[\dot{x}_1; \dot{x}_2; \dot{x}_3; \dot{x}_4; ...; \dot{x}_n] = [-p_1 1 0 0 ... 0; 0 -p_1 1 0 ... 0; 0 0 -p_1 0 ... 0; 0 0 0 -p_4 ... 0; ...; 0 0 0 0 ... -p_n] [x_1; x_2; x_3; x_4; ...; x_n] + [0; 0; 1; 1; ...; 1] u
y = [c_1  c_2  ...  c_n] [x_1; x_2; ...; x_n] + b_0 u
Example:
Y(s)/U(s) = (s + 3)/(s^2 + 3s + 2)
(1) Controllable Canonical Form:
[\dot{x}_1; \dot{x}_2] = [0 1; -2 -3] [x_1; x_2] + [0; 1] u
y = [3 1] [x_1; x_2]
(2) Observable Canonical Form:
[\dot{x}_1; \dot{x}_2] = [0 -2; 1 -3] [x_1; x_2] + [3; 1] u
y = [0 1] [x_1; x_2]
(3) Diagonal Canonical Form: (s + 3)/((s + 1)(s + 2)) = 2/(s + 1) - 1/(s + 2)
[\dot{x}_1; \dot{x}_2] = [-1 0; 0 -2] [x_1; x_2] + [1; 1] u
y = [2 -1] [x_1; x_2]
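All three realizations describe the same transfer function, which can be verified by evaluating G(s) = C(sI - A)^{-1}B at a few test frequencies. A minimal sketch in plain Python, using an exact 2x2 matrix inverse (no libraries):

```python
def tf_eval(A, B, C, s, D=0.0):
    """Evaluate G(s) = C (sI - A)^{-1} B + D for a 2x2 A."""
    m = [[s - A[0][0], -A[0][1]], [-A[1][0], s - A[1][1]]]   # sI - A
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    inv = [[m[1][1] / det, -m[0][1] / det], [-m[1][0] / det, m[0][0] / det]]
    x = [inv[0][0] * B[0] + inv[0][1] * B[1],
         inv[1][0] * B[0] + inv[1][1] * B[1]]                # (sI - A)^{-1} B
    return C[0] * x[0] + C[1] * x[1] + D

forms = [
    ([[0.0, 1.0], [-2.0, -3.0]], [0.0, 1.0], [3.0, 1.0]),   # controllable
    ([[0.0, -2.0], [1.0, -3.0]], [3.0, 1.0], [0.0, 1.0]),   # observable
    ([[-1.0, 0.0], [0.0, -2.0]], [1.0, 1.0], [2.0, -1.0]),  # diagonal
]
for s in (0.5, 1.0, 4.0):
    g = (s + 3.0) / (s * s + 3.0 * s + 2.0)
    for A, B, C in forms:
        assert abs(tf_eval(A, B, C, s) - g) < 1e-12
print("all three realizations match (s+3)/(s^2+3s+2)")
```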
Eigenvalues of Matrix A:
Eigenvalues of A: roots of the characteristic equation |λI - A| = 0.
• They are also known as the characteristic roots.

Diagonalization of Matrix A:
\dot{x} = A x + B u, y = C x
Suppose matrix A, having distinct eigenvalues, is in companion form:
A = [0 1 0 ... 0; 0 0 1 ... 0; ...; 0 0 0 ... 1; -a_n -a_{n-1} -a_{n-2} ... -a_1]
Transformation: x = P z
P = [1 1 ... 1; λ_1 λ_2 ... λ_n; λ_1^2 λ_2^2 ... λ_n^2; ...; λ_1^{n-1} λ_2^{n-1} ... λ_n^{n-1}] {where λ_1, λ_2, ..., λ_n are the distinct eigenvalues of A}
⇒ P^{-1} A P = [λ_1 0 ... 0; 0 λ_2 ... 0; ...; 0 0 ... λ_n]
P \dot{z} = A P z + B u
\dot{z} = P^{-1} A P z + P^{-1} B u
y = C P z
Example:
A = [0 1 0; 0 0 1; -a_3 -a_2 -a_1] has eigenvalues λ_1, λ_1, and λ_3.
x = S z, S = [1 0 1; λ_1 1 λ_3; λ_1^2 2λ_1 λ_3^2]
S^{-1} A S = [λ_1 1 0; 0 λ_1 0; 0 0 λ_3]
Invariance of Eigenvalues:
• The eigenvalues are invariant under a linear transformation, i.e., the characteristic polynomials |λI - A| and |λI - P^{-1}AP| are identical.
Proof:
|λI - P^{-1}AP| = |λP^{-1}P - P^{-1}AP|
= |P^{-1}(λI - A)P|
= |P^{-1}| |λI - A| |P|
= |P^{-1}| |P| |λI - A| = |P^{-1}P| |λI - A|
= |λI - A|
Solution of Homogeneous State Eqs:
Scalar differential Eq: \dot{x} = a x
Let us assume a solution,
x(t) = b_0 + b_1 t + b_2 t^2 + ... + b_k t^k + ...
On substitution,
b_1 + 2 b_2 t + 3 b_3 t^2 + ... + k b_k t^{k-1} + ... = a(b_0 + b_1 t + b_2 t^2 + ... + b_k t^k + ...)
Thus,
b_1 = a b_0, b_2 = (1/2) a b_1 = (1/2) a^2 b_0, b_3 = (1/3) a b_2 = (1/(3·2)) a^3 b_0, ...
b_k = (1/k!) a^k b_0 and b_0 = x(0)
So, x(t) = (1 + a t + (1/2!) a^2 t^2 + (1/3!) a^3 t^3 + ... + (1/k!) a^k t^k + ...) x(0)
= e^{at} x(0)

• The same principle can be adopted for the vector-matrix differential equation \dot{x} = A x:
x(t) = b_0 + b_1 t + b_2 t^2 + ... + b_k t^k + ...
⇒ x(t) = (I + A t + (1/2!) A^2 t^2 + ... + (1/k!) A^k t^k + ...) x(0) = e^{At} x(0)
Matrix Exponential:
e^{At} = Σ_{k=0}^{∞} A^k t^k / k!
(d/dt) e^{At} = A (I + A t + (1/2!) A^2 t^2 + ...) = A e^{At}   {can be easily verified}
---------------------------------------------------------------------
e^{At} e^{As} = e^{A(t+s)}
Proof:
e^{At} e^{As} = (Σ_{k=0}^{∞} A^k t^k / k!)(Σ_{k=0}^{∞} A^k s^k / k!)
= Σ_{k=0}^{∞} A^k (Σ_{i=0}^{k} t^i s^{k-i} / (i! (k-i)!))
= Σ_{k=0}^{∞} A^k (t + s)^k / k! = e^{A(t+s)}
• If s = -t, e^{At} e^{-At} = e^{A(t-t)} = I.
So, the inverse of e^{At} is e^{-At}.
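The series definition and the inverse property e^{At} e^{-At} = I can be checked numerically. A sketch in plain Python for a 2x2 example (the matrix, time, and truncation depth are assumptions):

```python
def expm_series(A, t, terms=30):
    """Approximate e^{At} for a 2x2 A by truncating sum_k (At)^k / k!."""
    result = [[1.0, 0.0], [0.0, 1.0]]   # running sum, starts at I
    term = [[1.0, 0.0], [0.0, 1.0]]     # current term (At)^k / k!
    for k in range(1, terms):
        term = [[sum(term[i][m] * A[m][j] for m in range(2)) * t / k
                 for j in range(2)] for i in range(2)]
        result = [[result[i][j] + term[i][j] for j in range(2)] for i in range(2)]
    return result

A = [[0.0, 1.0], [-2.0, -3.0]]  # assumed example matrix
t = 0.7
E_pos, E_neg = expm_series(A, t), expm_series(A, -t)

# e^{At} e^{-At} should equal the identity matrix
prod = [[sum(E_pos[i][m] * E_neg[m][j] for m in range(2)) for j in range(2)]
        for i in range(2)]
print(prod)
```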
---------------------------------------------------------------------
e^{(A+B)t} = e^{At} e^{Bt} if AB = BA, and e^{(A+B)t} ≠ e^{At} e^{Bt} if AB ≠ BA.
Proof:
e^{(A+B)t} = I + (A + B) t + ((A + B)^2 / 2!) t^2 + ...
e^{At} e^{Bt} = (I + A t + (A^2 t^2 / 2!) + ...)(I + B t + (B^2 t^2 / 2!) + ...)
= I + (A + B) t + (A^2 t^2 / 2!) + A B t^2 + (B^2 t^2 / 2!) + ...
Thus,
e^{(A+B)t} - e^{At} e^{Bt} = ((BA - AB)/2!) t^2 + ((BA^2 + ABA + B^2 A + BAB - 2A^2 B - 2AB^2)/3!) t^3 + ...
This difference vanishes if A and B commute.


---------------------------------------------------------------------
Laplace Transform Approach for Solution of Homogeneous Differential Eqs:
\dot{x}(t) = A x(t) ⇒ s X(s) - x(0) = A X(s)
⇒ X(s) = (sI - A)^{-1} x(0) ⇒ x(t) = L^{-1}[(sI - A)^{-1}] x(0)
(sI - A)^{-1} = I/s + A/s^2 + A^2/s^3 + ...
⇒ L^{-1}[(sI - A)^{-1}] = I + A t + (1/2!) A^2 t^2 + ... = e^{At}
So, x(t) = e^{At} x(0).
State Transition Matrix:
\dot{x} = A x ⇒ let the solution be x(t) = Φ(t) x(0).
Φ(t) is the unique solution of \dot{Φ}(t) = A Φ(t); Φ(0) = I.
Verification:
x(0) = Φ(0) x(0) = x(0)
and \dot{x}(t) = \dot{Φ}(t) x(0) = A Φ(t) x(0) = A x(t)
Φ(t) = e^{At} = L^{-1}[(sI - A)^{-1}]
⇒ Φ^{-1}(t) = e^{-At} = Φ(-t)
Φ(t) ≡ state transition matrix
Specifically, if A is diagonal with eigenvalues λ_1, λ_2, ..., λ_n:
Φ(t) = e^{At} = [e^{λ_1 t} 0 ... 0; 0 e^{λ_2 t} ... 0; ...; 0 0 ... e^{λ_n t}]

Properties of State Transition Matrix:
(1) Φ(0) = e^{A·0} = I
(2) Φ^{-1}(t) = Φ(-t)
(3) Φ(t_1 + t_2) = e^{A(t_1 + t_2)} = e^{At_1} e^{At_2} = Φ(t_1) Φ(t_2) = Φ(t_2) Φ(t_1)
(4) [Φ(t)]^n = Φ(nt)
(5) Φ(t_2 - t_1) Φ(t_1 - t_0) = Φ(t_2 - t_0) = Φ(t_1 - t_0) Φ(t_2 - t_1)

Example:
System: [\dot{x}_1; \dot{x}_2] = [0 1; -2 -3] [x_1; x_2]
Obtain Φ(t) and Φ^{-1}(t).
-----------------------
A = [0 1; -2 -3], Φ(t) = e^{At} = L^{-1}[(sI - A)^{-1}]
sI - A = [s -1; 2 s+3]
(sI - A)^{-1} = [(s+3)/((s+1)(s+2))  1/((s+1)(s+2)); -2/((s+1)(s+2))  s/((s+1)(s+2))]
So, Φ(t) = [2e^{-t} - e^{-2t}  e^{-t} - e^{-2t}; -2e^{-t} + 2e^{-2t}  -e^{-t} + 2e^{-2t}]
Φ^{-1}(t) = Φ(-t) = [2e^{t} - e^{2t}  e^{t} - e^{2t}; -2e^{t} + 2e^{2t}  -e^{t} + 2e^{2t}]
Solution of Nonhomogeneous State Equations:
Scalar Case: \dot{x} = a x + b u ⇒ \dot{x} - a x = b u
⇒ e^{-at} [\dot{x}(t) - a x(t)] = (d/dt)[e^{-at} x(t)] = e^{-at} b u(t)
⇒ e^{-at} x(t) - x(0) = ∫_0^t e^{-aτ} b u(τ) dτ
⇒ x(t) = e^{at} x(0) + e^{at} ∫_0^t e^{-aτ} b u(τ) dτ
Nonhomogeneous State Eq:
\dot{x} = A x + B u ⇒ \dot{x}(t) - A x(t) = B u(t)
⇒ (d/dt)[e^{-At} x(t)] = e^{-At} B u(t)
⇒ x(t) = e^{At} x(0) + ∫_0^t e^{A(t-τ)} B u(τ) dτ
= Φ(t) x(0) + ∫_0^t Φ(t-τ) B u(τ) dτ

Laplace Transform Approach:
\dot{x} = A x + B u ⇒ s X(s) - x(0) = A X(s) + B U(s)
X(s) = (sI - A)^{-1} x(0) + (sI - A)^{-1} B U(s) = L[e^{At}] x(0) + L[e^{At}] B U(s)
⇒ x(t) = e^{At} x(0) + ∫_0^t e^{A(t-τ)} B u(τ) dτ
N.B: The initial time may be any other time (t_0) instead of 0:
⇒ x(t) = e^{A(t-t_0)} x(t_0) + ∫_{t_0}^t e^{A(t-τ)} B u(τ) dτ
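The solution formula can be cross-checked by simulation. The sketch below integrates \dot{x} = Ax + Bu for a unit step with forward Euler (a simple numerical stand-in for evaluating the convolution integral) and compares against the closed-form step response of G(s) = 1/((s+1)(s+2)); the matrices are the assumed example used earlier:

```python
import math

# Assumed example: A = [0 1; -2 -3], B = [0; 1], C = [1 0],
# i.e. G(s) = 1/((s+1)(s+2)).
A = [[0.0, 1.0], [-2.0, -3.0]]
B = [0.0, 1.0]
C = [1.0, 0.0]

dt, T = 1e-4, 1.0
x = [0.0, 0.0]                      # zero initial state
t = 0.0
while t < T - 1e-12:
    u = 1.0                         # unit step input
    dx = [A[0][0]*x[0] + A[0][1]*x[1] + B[0]*u,
          A[1][0]*x[0] + A[1][1]*x[1] + B[1]*u]
    x = [x[0] + dt*dx[0], x[1] + dt*dx[1]]
    t += dt

y_sim = C[0]*x[0] + C[1]*x[1]
# Closed-form step response: y(t) = 1/2 - e^{-t} + (1/2) e^{-2t}
y_exact = 0.5 - math.exp(-T) + 0.5 * math.exp(-2.0 * T)
print(y_sim, y_exact)
```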
Cayley-Hamilton Theorem:
• Every square matrix satisfies its own characteristic equation.
If A is an n × n square matrix and p_A(λ) = |λI - A| is its characteristic polynomial, then p_A(A) = 0.
Proof:
(λI - A)(λI - A)^{-1} = I
Using the adjugate (cofactor) formula for the inverse of any matrix Y:
Y^{-1} = (1/|Y|) [y_11 y_12 ... y_1n; ...; y_n1 y_n2 ... y_nn], where y_jk is the cofactor of the (k, j) element of Y.
So, (λI - A)^{-1} = (1/p_A(λ)) [p_11(λ) ... p_1n(λ); ...; p_n1(λ) ... p_nn(λ)], where all p_jk(λ) are polynomials in λ of degree at most n - 1.
Thus, we can write
p_A(λ) (λI - A)^{-1} = λ^{n-1} B_{n-1} + λ^{n-2} B_{n-2} + ... + λ B_1 + B_0, where B_0, ..., B_{n-1} are constant n × n matrices.
⇒ p_A(λ) I = (λI - A)(λ^{n-1} B_{n-1} + λ^{n-2} B_{n-2} + ... + λ B_1 + B_0)
= λ^n B_{n-1} + λ^{n-1} B_{n-2} + ... + λ^2 B_1 + λ B_0 - λ^{n-1} A B_{n-1} - λ^{n-2} A B_{n-2} - ... - λ A B_1 - A B_0
Let p_A(λ) = λ^n + c_{n-1} λ^{n-1} + ... + c_1 λ + c_0.
This identity holds only if the matrix multiplying each power of λ is the identity matrix times the corresponding coefficient.
Then, by comparison,
B_{n-1} = I, B_{n-2} - A B_{n-1} = c_{n-1} I, ..., B_0 - A B_1 = c_1 I and -A B_0 = c_0 I
p_A(A) = A^n + c_{n-1} A^{n-1} + c_{n-2} A^{n-2} + ... + c_1 A + c_0 I
= A^n B_{n-1} + A^{n-1}(B_{n-2} - A B_{n-1}) + ... + A(B_0 - A B_1) - A B_0
= 0

Computation of e^{At}:
Method 1: e^{At} = L^{-1}[(sI - A)^{-1}]
Method 2:
⇒ If A has distinct eigenvalues, then
e^{At} = P e^{Dt} P^{-1} = P [e^{λ_1 t} 0 ... 0; 0 e^{λ_2 t} ... 0; ...; 0 0 ... e^{λ_n t}] P^{-1}
where P is the diagonalizing matrix for A and D = P^{-1} A P.
⇒ If A has multiple eigenvalues, A can be transformed into Jordan canonical form.
Then e^{At} = S e^{Jt} S^{-1}; J = S^{-1} A S.
Linear Independence of Vectors:
• Vectors x_1, x_2, x_3, ..., x_n are said to be linearly independent if
c_1 x_1 + c_2 x_2 + ... + c_n x_n = 0,
where c_1, c_2, ..., c_n are constants, implies that c_1 = c_2 = ... = c_n = 0.
• Conversely, if the vectors are linearly dependent, at least one of them can be expressed as a linear combination of the others:
x_i = Σ_{j=1, j≠i}^{n} c_j x_j
Controllability and Observability
• A system is said to be controllable at time t0, if it is
possible by means of an unconstrained control
vector to transfer the system from any initial state
x(t0) to any other state in a finite time interval.
• A system is said to be observable at time t0 if, with the system in state x(t0), it is possible to determine this state from the observation of the output over a finite time interval.

• The conditions of controllability and observability


may govern the existence of a solution to a control
design problem.
(Complete State) Controllability:
\dot{x} = A x + B u
• If every state is controllable, the system is said to be completely state controllable.
• For complete state controllability of this system, it is necessary and sufficient that the rank of the matrix [B  AB  ...  A^{n-1}B] be n.
• Thus, the vectors B, AB, ..., A^{n-1}B must be linearly independent.
[B  AB  ...  A^{n-1}B] = Controllability Matrix
Observability:
Consider the following unforced system:
\dot{x} = A x, y = C x {x ≡ n-vector, y ≡ m-vector}
• The system is said to be completely observable if every state x(t_0) can be determined from observation of y(t) over a finite time interval, t_0 ≤ t ≤ t_1.

⇒ The system is therefore completely observable if


every transition of the state eventually affects every
element of the output vector.
⇒ The observability concept is useful in estimating unmeasurable state variables from measurable variables in the minimum possible time interval.

⇒ Why is an unforced system taken?
Let us consider the full system model. Assume t_0 = 0.
\dot{x} = A x + B u, y = C x + D u
Then, x(t) = e^{At} x(0) + ∫_0^t e^{A(t-τ)} B u(τ) dτ
y(t) = C e^{At} x(0) + C ∫_0^t e^{A(t-τ)} B u(τ) dτ + D u
• A, B, C, and D are known, and u(t) is also known. Hence the last two terms of the y(t) equation are known quantities, and these two terms can be subtracted from the observed values of y(t).
• Hence, for investigating the necessary and sufficient conditions for complete observability, it is enough to consider the unforced system.
⇒ It can be proved that x(0) can be uniquely determined if and only if the rank of
[C; CA; ...; CA^{n-1}] is n.
[C*  A*C*  ...  (A*)^{n-1}C*] (an n × nm matrix) = Observability Matrix
• For complete state observability, the observability matrix must be of rank n.
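Similarly, for n = 2 the observability condition reduces to det[C; CA] ≠ 0. A minimal sketch (assumed example A = [0 1; -2 -3], C = [1 0]):

```python
# Observability check for n = 2: rank [C; CA] = 2  <=>  det [C; CA] != 0
A = [[0.0, 1.0], [-2.0, -3.0]]
C = [1.0, 0.0]

CA = [C[0]*A[0][0] + C[1]*A[1][0],
      C[0]*A[0][1] + C[1]*A[1][1]]
O = [[C[0], C[1]], [CA[0], CA[1]]]          # observability matrix [C; CA]
det = O[0][0]*O[1][1] - O[0][1]*O[1][0]
print("observable" if abs(det) > 1e-12 else "not observable")
```

Here [C; CA] is the 2x2 identity, so the pair (A, C) is completely observable.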
Principle of Duality:
⇒ Analogy between controllability and observability:
\dot{x} = A x + B u, y = C x : System S1
\dot{z} = A* z + C* v, w = B* z : Dual System S2
• The principle of duality states that the system S1 is completely state controllable (observable) if and only if the system S2 is completely observable (controllable).
Verification:
System S1:
1. Complete state controllability: [B  AB  ...  A^{n-1}B] is of rank n.
2. Complete state observability: [C*  A*C*  ...  (A*)^{n-1}C*] is of rank n.
System S2:
1. Complete state controllability: [C*  A*C*  ...  (A*)^{n-1}C*] is of rank n.
2. Complete state observability: [B  AB  ...  A^{n-1}B] is of rank n.
The comparison verifies the duality principle.
