# Contents

Modeling Techniques
  First-principles models
  Black-box models
  Empirical models

Engineering conservation principles
  Total mass balance
  Species balance
  Energy balance
  Physical correlations

Example

I - Modeling Chemical Processing Systems


Modeling Techniques

First-principles modeling methods
  Also called physically-based or theoretical modeling
  Based on engineering conservation principles

" \$ \$ ! # \$ \$ % x !" n

dx = f ( x(t), u(t) ) (state equation) dt y(t) = g ( x(t), u(t) ) (output equation) x(t0 ) = x0
state vector input vector output vector initial state

(initial state)

f !" n g !"m
t !"

state-derivative map output map time initial time

u !" p

y !" m
x0 !" n

t0 !"


Black-box modeling
  Also called empirical modeling
  Response-curve models

Semi-empirical models
  Parts of the model are derived from first principles and other parts are derived from experimental observations or from physical intuition.


Engineering Conservation Principles

First-principles (or physically based) modeling approach

Abstraction of an open system:
  A boundary encloses a volume V of density ρ, where the reaction A + B → C takes place.
  Inlet streams 1, 2, ..., N_in cross the boundary, and outlet streams 1, 2, ..., N_out leave it.
  Heat Q and shaft work W_s also cross the boundary.

A conservation principle is applied to the contents of the boundary.


Conserved quantities
  Total mass
  Species
  Total energy
  Momentum
  Electronic charge
  Entropy
  etc.

Note: The rate of accumulation is a derivative with respect to time:

$$\frac{d}{dt}\left(\text{conserved quantity}\right) = \text{rate of accumulation}$$

This is the reason why chemical engineering process models are often in the form of a differential equation.
I - Modeling Chemical Processing Systems © Oscar D. Crisalle 2005


Conservation Equations

Total Mass Balance

$$\frac{d(\rho V)}{dt} = \sum_{i=1}^{N_{in}} \rho_i F_i - \sum_{j=1}^{N_{out}} \rho_j F_j$$

Species Balance

$$\frac{d(V C_A)}{dt} = \sum_{i=1}^{N_{in}} C_{A_i} F_i - \sum_{j=1}^{N_{out}} C_{A_j} F_j + r_A V$$

Energy Balance

$$\frac{d}{dt}\left[\rho V (\hat U + \hat P + \hat K)\right] = \sum_{i=1}^{N_{in}} \rho_i F_i (\hat H_i + \hat P_i + \hat K_i) - \sum_{j=1}^{N_{out}} \rho_j F_j (\hat H_j + \hat P_j + \hat K_j) + (-\Delta \hat H_{rxn})\, r_{rxn} V + W_s - P\frac{dV}{dt} + Q$$

Common Nomenclature

Physical variables
  Volume inside a boundary       V         [l]
  Density                        ρ         [kg/l]
  Volumetric flow rate           F         [l/s]
  Mass flow rate                 w = ρF    [kg/s]
  Linear velocity of flow        v         [m/s]

Thermodynamic variables
  Temperature                    T                 [C]
  Pressure                       P                 [atm]
  Specific volume                V̂ = 1/ρ          [l/kg]
  Specific enthalpy              Ĥ                 [J/kg]
  Specific internal energy       Û = Ĥ − P V̂     [J/kg]
  Specific kinetic energy        K̂                 [J/kg]
  Specific potential energy      P̂                 [J/kg]
  Specific heat (heat capacity)  Cp                [J/kg-C]
  Heat flow rate (power)         Q                 [W] = [J/s]
    Q > 0: heat enters the system
    Q < 0: heat leaves the system
  Work rate (power)              W                 [W] = [J/s]
    W > 0: work done on the system
    W < 0: work done by the system

Thermal mass (capacitance) of mass ρV:   ρV Cp   [J/C]

Chemical-kinetics variables
  Specific heat of reaction      ΔĤ_rxn   [J/mol]
    ΔĤ_rxn > 0: heat consumed by the system (endothermic reaction)
    ΔĤ_rxn < 0: heat produced by the system (exothermic reaction)
  Overall rate of reaction       r_rxn    [mol/s-l]
  Rate of reaction of species A  r_A      [mol/s-l]
  Concentration of species A     C_A      [mol/l]

Physical Correlations

Density

a. Real substance
   ρ = ρ(T) for pure liquids; ρ = ρ(T, P) for pure gases
   ρ is also a function of composition in mixtures
b. Liquids
   ρ ≈ constant is often a good approximation
c. Ideal gases

$$\rho = \frac{MP}{RT}$$

   where M is the molecular weight
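The ideal-gas correlation above can be evaluated directly. A minimal sketch, with illustrative values for air; the gas constant is taken in l-atm/(mol-K) so that the density comes out in g/l:

```python
R = 0.08206  # gas constant [l-atm/(mol-K)]

def ideal_gas_density(M, P, T):
    """Ideal-gas density rho = M*P/(R*T) [g/l], with
    M [g/mol], P [atm], T [K]."""
    return M * P / (R * T)

# Air at ambient conditions (illustrative values)
rho_air = ideal_gas_density(M=28.97, P=1.0, T=298.15)  # about 1.18 g/l
```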


Specific heat (heat capacity) at constant pressure

a. Real substance
   Typical correlations are of the form:
     Cp = Cp(T) = A1 + A2 T + A3 T⁻²          [J/kg-C]
     Cp = Cp(T) = B1 + B2 T + B3 T² + B4 T³   [J/kg-C]
   Cp is also a function of composition in mixtures
b. Liquids
   Cp ≈ constant is valid for small T variations
c. Ideal gases
   Cp ≈ constant
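A polynomial Cp correlation of the second form above is straightforward to evaluate. A minimal sketch; the coefficient values here are hypothetical, not taken from the notes:

```python
def cp_polynomial(T, B):
    """Evaluate Cp(T) = B1 + B2*T + B3*T**2 + B4*T**3 [J/kg-C]
    for a coefficient tuple B = (B1, B2, B3, B4)."""
    B1, B2, B3, B4 = B
    return B1 + B2 * T + B3 * T**2 + B4 * T**3

B = (1000.0, 0.5, 1e-4, -1e-8)  # hypothetical coefficients
cp_25 = cp_polynomial(25.0, B)  # Cp at 25 C
```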


Specific Enthalpy

a. Real substance (let T° and P° be reference states)

$$\hat H(T,P) = \hat H(T^o, P^o) + \int_{T^o,\,P^o}^{T,\,P^o} C_p(T')\, dT' + \int_{T,\,P^o}^{T,\,P} \hat V (1 - T\alpha_T)\, dP'$$

where the thermal expansion coefficient is

$$\alpha_T = \frac{1}{\hat V}\left(\frac{\partial \hat V}{\partial T}\right)_P$$

  Ĥ(T, P) is also a function of composition in mixtures
  Latent heats must be added as phase transitions are encountered
  For most liquids, for many real gases, and for all ideal gases, the pressure dependence is negligible, hence:


$$\hat H(T) = \hat H(T^o) + \int_{T^o}^{T} C_p(T')\, dT'$$

b. Liquids:      $\hat H(T) = \alpha + C_{p,liq}\, T$,  with $\alpha$ = constant
c. Ideal gases:  $\hat H(T) = \alpha + \Delta\hat H_{vap}(T_{bp}) + C_{p,gas}\, T$,  with $\alpha$ = constant

Specific Internal Energy

a. Real substance:  $\hat U = \hat H - P\hat V$
b. Liquids:         $\hat U(T) = \alpha + C_{p,liq}\, T$,  since for liquids $P\hat V \ll \hat H$
c. Ideal gases:     $\hat U = \alpha + \Delta\hat H_{vap} + \left(C_{p,gas} - \frac{R}{M}\right) T$,  since $P\hat V = \frac{RT}{M}$
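The enthalpy integral above can be evaluated numerically for any Cp(T) correlation. A minimal sketch using trapezoidal quadrature; the default water-like Cp of 4184 J/kg-C and the reference state are illustrative assumptions:

```python
def enthalpy(T, T0=25.0, H0=0.0, cp=lambda T: 4184.0):
    """Specific enthalpy H(T) = H(T0) + integral of Cp from T0 to T [J/kg],
    computed by trapezoidal quadrature of the given Cp(T) [J/kg-C]."""
    n = 1000
    dT = (T - T0) / n
    s = 0.5 * (cp(T0) + cp(T))
    for k in range(1, n):
        s += cp(T0 + k * dT)
    return H0 + s * dT

H80 = enthalpy(80.0)  # for constant Cp this is Cp*(T - T0) = 4184*55 J/kg
```

For a constant Cp the quadrature reproduces the linear liquid correlation exactly; substituting a polynomial Cp(T) handles the real-substance case.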


Specific Kinetic Energy

$$\hat K = \tfrac{1}{2} v^2 \qquad [\,\mathrm{m^2/s^2}\,] = [\,\mathrm{J/kg}\,]$$

  v: linear velocity of the stream (at an inlet or outlet) or of the center of mass of the volume V

Specific Potential Energy

$$\hat P = g\,(z - z^o) \qquad [\,\mathrm{m^2/s^2}\,] = [\,\mathrm{J/kg}\,]$$

  g: acceleration of gravity
  z°: height of a reference plane


Work

  Rate of shaft work:  $W_s$                         [J/s] = [W]
  Rate of flow work:   $W_f = -P\dfrac{dV}{dt}$      [J/s] = [W]


Heat flow rates

a. Convection

$$Q = A\,h\,(T_a - T_w) \qquad [\,\mathrm{J/s}\,] = [\,\mathrm{W}\,]$$

  h: heat transfer coefficient          [W/m²-C]
  T_w: temperature of the boundary wall [C]
  T_a: ambient (bulk) temperature       [C]
  A: effective heat-transfer area       [m²]


b. Conduction

$$Q = A\,k_T\,\frac{T_{w1} - T_{w2}}{L} \qquad [\,\mathrm{J/s}\,] = [\,\mathrm{W}\,]$$

  k_T: thermal conductivity   [W/m-C]
  L: slab thickness           [m]

c. Electric heating

$$Q = R I^2 = \frac{V^2}{R} \qquad [\,\mathrm{J/s}\,] = [\,\mathrm{W}\,]$$

  R: resistance   [Ω]
  I: current      [Amp]
  V: voltage      [V]

d. Radiation

$$Q \propto T^4$$
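The three heat-flow-rate expressions above translate directly into small helper functions. A minimal sketch with illustrative parameter values:

```python
def q_convection(h, A, Ta, Tw):
    """Convective heat flow Q = A*h*(Ta - Tw) [W]."""
    return A * h * (Ta - Tw)

def q_conduction(kT, A, Tw1, Tw2, L):
    """Conductive heat flow through a slab, Q = A*kT*(Tw1 - Tw2)/L [W]."""
    return A * kT * (Tw1 - Tw2) / L

def q_electric(R, I):
    """Resistive (electric) heating, Q = R*I**2 [W]."""
    return R * I**2

Q1 = q_convection(h=10.0, A=2.0, Ta=80.0, Tw=25.0)           # 1100 W
Q2 = q_conduction(kT=0.6, A=1.0, Tw1=60.0, Tw2=20.0, L=0.1)  # 240 W
Q3 = q_electric(R=50.0, I=2.0)                               # 200 W
```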


Chemical Kinetics

a. Chemical reaction

$$a\,A + b\,B \rightarrow c\,C$$

b. Reaction rate of species A, defined in a closed system:

$$r_A = \left(\frac{dC_A}{dt}\right)_{\text{closed system}} \qquad [\,\mathrm{mol/l\text{-}s}\,]$$

  r_A < 0: consumption of A by reaction
  r_A > 0: generation of A by reaction


c. Phenomenological kinetic models for r_A

$$r_A = k_r C_C^m - k_f C_A^s C_B^n \qquad [\,\mathrm{mol/l\text{-}s}\,]$$

$$k_f = k_{f,o}\, e^{-E_{a,f}/RT} \qquad k_r = k_{r,o}\, e^{-E_{a,r}/RT}$$

  k_f: forward kinetic-rate parameter
  k_r: reverse kinetic-rate parameter

d. Overall rate of reaction

$$r_{rxn} = -\frac{r_A}{a} = -\frac{r_B}{b} = \frac{r_C}{c} \ge 0 \qquad [\,\mathrm{mol/l\text{-}s}\,]$$

e. Heat of reaction

  ΔH_rxn = ΔH_rxn(T)   [J/mol]
  ΔH_rxn < 0: exothermic reaction
  ΔH_rxn > 0: endothermic reaction
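The Arrhenius temperature dependence and a simple power-law rate can be sketched together. The pre-exponential factor and activation energy below are illustrative, not data from the notes:

```python
import math

R_GAS = 8.314  # gas constant [J/(mol-K)]

def arrhenius(k0, Ea, T):
    """Kinetic-rate parameter k(T) = k0 * exp(-Ea/(R*T))."""
    return k0 * math.exp(-Ea / (R_GAS * T))

def rate_second_order(kf, CA):
    """rA = -kf*CA**2 for an irreversible second-order reaction 2A -> B."""
    return -kf * CA**2

kf = arrhenius(k0=1.0e7, Ea=5.0e4, T=350.0)
rA = rate_second_order(kf, CA=2.0)
rrxn = -rA / 2.0  # overall rate, rrxn = -rA/a >= 0
```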


f. Examples

1. The irreversible chemical reaction 2A → B is known to be of second order. Then

$$r_A = -k_f C_A^2 \qquad r_{rxn} = \frac{k_f C_A^2}{2} \qquad [\,\mathrm{mol/l\text{-}s}\,]$$

2. The reversible chemical reaction 3A + B ⇌ 2C is known to be elementary. Then

$$r_A = k_r C_C^2 - k_f C_A^3 C_B \qquad r_{rxn} = \frac{k_f C_A^3 C_B - k_r C_C^2}{3} \qquad [\,\mathrm{mol/l\text{-}s}\,]$$


A Modeling Example

A stirred tank receives a hot stream and a cold stream and drains by gravity.

  T   temperature
  w   mass flow-rate
  h   liquid level
  A   cross-sectional area
  V = hA   liquid volume

Total Mass Balance

Assumptions: constant density; gravity drain $w(t) = \dfrac{\sqrt{h(t)}}{R}$

Balance:

$$\frac{d[\rho A h(t)]}{dt} = \rho_h F_h(t) + \rho_c F_c(t) - \rho F(t) = w_h(t) + w_c(t) - w(t)$$

$$\frac{dh(t)}{dt} = \frac{1}{\rho A} w_h(t) + \frac{1}{\rho A} w_c(t) - \frac{1}{\rho A R} \sqrt{h(t)} \qquad (1)$$

Energy Balance

Assumptions: no heat losses, $C_{p,h} = C_{p,c} = C_p$, $\alpha = 0$

Balance:

$$\frac{d}{dt}\left[\rho A h(t) C_p T(t)\right] = w_h(t) C_p T_h(t) + w_c(t) C_p T_c(t) - w(t) C_p T(t)$$

$$C_p T(t)\,\frac{d}{dt}\left(\rho A h(t)\right) + \rho A C_p\, h(t)\, \frac{dT(t)}{dt} = w_h(t) C_p T_h(t) + w_c(t) C_p T_c(t) - w(t) C_p T(t)$$

Substitute equation (1) to eliminate $\dfrac{dh(t)}{dt}$ from the above expression:

$$\frac{dT(t)}{dt} = -\frac{1}{\rho A}\frac{w_h(t)\,T(t)}{h(t)} - \frac{1}{\rho A}\frac{w_c(t)\,T(t)}{h(t)} + \frac{1}{\rho A}\frac{w_h(t)\,T_h(t)}{h(t)} + \frac{1}{\rho A}\frac{w_c(t)\,T_c(t)}{h(t)} \qquad (2)$$

Classification of variables

  Variable type                         Physical variable   Std nomenclature
  states                                h, T                x1, x2
  outputs (temperature/level control)   h, T                y1, y2
  inputs                                w_h, w_c            u1, u2
  disturbances                          T_h, T_c            d1 := u3, d2 := u4
  parameters                            ρ, A, Cp, R         p1, p2, p3, p4

The hot and cold temperatures cannot be manipulated at will, though they may change; hence, they are classified as disturbances.

The hot and cold flow-rates are adjusted as necessary to keep the output temperature at a target value; hence, they are classified as inputs.

Both states are measured; hence the outputs are equal to the states in this case.

Nonlinear state-space model

State and input vectors:

$$x(t) = \begin{bmatrix} x_1(t) \\ x_2(t) \end{bmatrix} = \begin{bmatrix} h(t) \\ T(t) \end{bmatrix} \in \mathbb{R}^2 \qquad u(t) = \begin{bmatrix} u_1(t) \\ u_2(t) \\ u_3(t) \\ u_4(t) \end{bmatrix} = \begin{bmatrix} w_h(t) \\ w_c(t) \\ T_h(t) \\ T_c(t) \end{bmatrix} \in \mathbb{R}^4$$

Nonlinear state-space equation:

$$\dot x = f(x(t), u(t)) = \begin{bmatrix} f_1\big(x_1(t), x_2(t), u_1(t), u_2(t), u_3(t), u_4(t)\big) \\ f_2\big(x_1(t), x_2(t), u_1(t), u_2(t), u_3(t), u_4(t)\big) \end{bmatrix}$$

where

$$f_1(x,u) = \frac{1}{\rho A} w_h(t) + \frac{1}{\rho A} w_c(t) - \frac{1}{\rho A R}\sqrt{h(t)}$$

$$f_2(x,u) = -\frac{1}{\rho A}\frac{w_h(t)\,T(t)}{h(t)} - \frac{1}{\rho A}\frac{w_c(t)\,T(t)}{h(t)} + \frac{1}{\rho A}\frac{w_h(t)\,T_h(t)}{h(t)} + \frac{1}{\rho A}\frac{w_c(t)\,T_c(t)}{h(t)}$$

Output equation:

$$y = g(x(t), u(t)) = \begin{bmatrix} g_1\big(x_1(t), x_2(t), u_1(t), u_2(t), u_3(t), u_4(t)\big) \\ g_2\big(x_1(t), x_2(t), u_1(t), u_2(t), u_3(t), u_4(t)\big) \end{bmatrix}$$

where

$$g_1(x,u) = h(t) \qquad g_2(x,u) = T(t)$$

Summary – Nonlinear Model

State equations:

$$\frac{d}{dt}h(t) = \frac{1}{\rho A} w_h(t) + \frac{1}{\rho A} w_c(t) - \frac{1}{\rho A R}\sqrt{h(t)}$$

$$\frac{d}{dt}T(t) = -\frac{1}{\rho A}\frac{w_h(t)\,T(t)}{h(t)} - \frac{1}{\rho A}\frac{w_c(t)\,T(t)}{h(t)} + \frac{1}{\rho A}\frac{w_h(t)\,T_h(t)}{h(t)} + \frac{1}{\rho A}\frac{w_c(t)\,T_c(t)}{h(t)}$$

Output equations:

$$y_1(t) = h(t) \qquad y_2(t) = T(t)$$
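The tank state equations can be integrated numerically to see the level and temperature settle. A minimal forward-Euler sketch, assuming the gravity-drain law $w = \sqrt{h}/R$ and using illustrative parameter values and inputs (not from the notes):

```python
import math

# Illustrative parameter values
rho, A, R = 1.0, 0.5, 2.0

def f(x, u):
    """State derivatives for the tank model: x = (h, T), u = (wh, wc, Th, Tc)."""
    h, T = x
    wh, wc, Th, Tc = u
    dh = (wh + wc) / (rho * A) - math.sqrt(h) / (rho * A * R)
    dT = (wh * (Th - T) + wc * (Tc - T)) / (rho * A * h)
    return dh, dT

def simulate(x0, u, t_end, dt=0.01):
    """Forward-Euler integration of the state equations with constant inputs."""
    x = list(x0)
    t = 0.0
    while t < t_end:
        dx = f(x, u)
        x = [xi + dt * dxi for xi, dxi in zip(x, dx)]
        t += dt
    return x

# With wh + wc = 1.0, steady state is sqrt(h*) = R*(wh+wc) => h* = 4,
# and T* is the flow-weighted mix (0.8*80 + 0.2*20)/1.0 = 68
h, T = simulate(x0=(1.0, 40.0), u=(0.8, 0.2, 80.0, 20.0), t_end=50.0)
```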


Linearization about a steady-state operating point

A standard procedure can be used to linearize the resulting model. In deviation variables $\tilde h = h - \bar h$, $\tilde T = T - \bar T$, etc., evaluated at the steady state $(\bar h, \bar T, \bar w_h, \bar w_c, \bar T_h, \bar T_c)$:

State equations:

$$\begin{bmatrix} \dot{\tilde h}(t) \\ \dot{\tilde T}(t) \end{bmatrix} = \begin{bmatrix} -\dfrac{1}{2\rho A R \sqrt{\bar h}} & 0 \\ -\dfrac{\bar w_h \bar T_h + \bar w_c \bar T_c - (\bar w_h + \bar w_c)\bar T}{\rho A \bar h^2} & -\dfrac{\bar w_h + \bar w_c}{\rho A \bar h} \end{bmatrix} \begin{bmatrix} \tilde h(t) \\ \tilde T(t) \end{bmatrix} + \begin{bmatrix} \dfrac{1}{\rho A} & \dfrac{1}{\rho A} & 0 & 0 \\ \dfrac{\bar T_h - \bar T}{\rho A \bar h} & \dfrac{\bar T_c - \bar T}{\rho A \bar h} & \dfrac{\bar w_h}{\rho A \bar h} & \dfrac{\bar w_c}{\rho A \bar h} \end{bmatrix} \begin{bmatrix} \tilde w_h(t) \\ \tilde w_c(t) \\ \tilde T_h(t) \\ \tilde T_c(t) \end{bmatrix}$$

Output equations:

$$\begin{bmatrix} \tilde y_1(t) \\ \tilde y_2(t) \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} \tilde h(t) \\ \tilde T(t) \end{bmatrix} + \begin{bmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix} \begin{bmatrix} \tilde w_h(t) \\ \tilde w_c(t) \\ \tilde T_h(t) \\ \tilde T_c(t) \end{bmatrix}$$
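The partial derivatives that fill the linearized matrices can be checked numerically with finite differences. A minimal sketch of a forward-difference Jacobian, demonstrated on a small hypothetical function with a known analytic Jacobian:

```python
def jacobian(f, x, eps=1e-6):
    """Forward-difference Jacobian of f: R^n -> R^m at the point x.
    Returns a list of rows (one row per output, one column per state)."""
    fx = f(x)
    cols = []
    for i in range(len(x)):
        xp = list(x)
        xp[i] += eps
        cols.append([(fp - f0) / eps for fp, f0 in zip(f(xp), fx)])
    return [list(row) for row in zip(*cols)]  # transpose columns into rows

# Example: f(x) = (x0**2, x0*x1) at x = (2, 3);
# analytic Jacobian is [[2*x0, 0], [x1, x0]] = [[4, 0], [3, 2]]
J = jacobian(lambda x: (x[0]**2, x[0] * x[1]), [2.0, 3.0])
```

The same routine, applied to f(x, u) of the tank model with u held at its steady value, reproduces the A matrix above entry by entry.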


Contents

  Real and complex numbers
  Linear combination of vectors
  Matrices
    Partitions and transpositions
    Determinant
  Matrix rank
  Matrix inversion
    Left and right inverses
    The adjoint and inverse matrices
  Eigenvalues and eigenvectors
    Classification of eigenvalues

II - Review of Linear Algebra


Real and Complex Numbers

Scalars
  Set of real numbers:     $\mathbb{R}$
  Set of complex numbers:  $\mathbb{C} = \{\, \sigma + i\omega : \sigma, \omega \in \mathbb{R} \,\}$
  Set of integers:         $\mathbb{Z} = \{\, \ldots, -2, -1, 0, 1, 2, \ldots \,\}$
  Set of natural numbers:  $\mathbb{N} = \{\, 0, 1, 2, \ldots \,\}$

Vectors
  Set of ordered n-tuples (Cartesian product space):

$$\mathbb{R}^n = \big\{ (x_1, x_2, x_3, \ldots, x_n) : x_i \in \mathbb{R},\ i = 1, 2, \ldots, n \big\} = \mathbb{R} \times \mathbb{R} \times \cdots \times \mathbb{R}$$

$$x = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix} \in \mathbb{R}^n \qquad \text{$x$ is a vector in } \mathbb{R}^n$$


Linear Combination of Vectors

Let $\alpha_1, \alpha_2, \alpha_3, \ldots, \alpha_n$ be a set of (real or complex) scalars.

Definition – Linear combination of vectors:

$$\alpha_1 x_1 + \alpha_2 x_2 + \alpha_3 x_3 + \cdots + \alpha_n x_n$$

Definition – Linear independence of vectors. A set of vectors $\{x_1, x_2, x_3, \ldots, x_n\}$ is said to be linearly independent if the only solution to the equation

$$\alpha_1 x_1 + \alpha_2 x_2 + \alpha_3 x_3 + \cdots + \alpha_n x_n = 0$$

is the trivial solution $\alpha_1 = \alpha_2 = \alpha_3 = \cdots = \alpha_n = 0$. Otherwise, the vectors are said to be linearly dependent.

Two collinear vectors are linearly dependent: with $\alpha_1 = 2$, $\alpha_2 = -1$, we get $2x_1 - 1x_2 = 0$.

Two noncollinear vectors are linearly independent: only $\alpha_1 = \alpha_2 = 0$ can make $\alpha_1 x_1 + \alpha_2 x_2 = 0$.

Three noncollinear vectors lying on the same plane are linearly dependent: for example, $\alpha_1 = 1$, $\alpha_2 = 1$, $\alpha_3 = 1$ gives $1x_1 + 1x_2 + 1x_3 = 0$.
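Linear independence can be tested computationally: a set of vectors is independent exactly when the matrix built from them has rank equal to the number of vectors. A minimal sketch using Gaussian elimination (row-rank equals column-rank, so the vectors may be stacked as rows):

```python
def rank(rows, tol=1e-10):
    """Rank of a matrix (given as a list of rows) by Gaussian elimination."""
    m = [list(r) for r in rows]
    n_rows, n_cols = len(m), len(m[0])
    rk = 0
    for col in range(n_cols):
        if rk == n_rows:
            break
        # pick the remaining row with the largest entry in this column
        pivot = max(range(rk, n_rows), key=lambda r: abs(m[r][col]))
        if abs(m[pivot][col]) <= tol:
            continue
        m[rk], m[pivot] = m[pivot], m[rk]
        for r in range(rk + 1, n_rows):
            factor = m[r][col] / m[rk][col]
            for c in range(col, n_cols):
                m[r][c] -= factor * m[rk][c]
        rk += 1
    return rk

collinear = rank([[1.0, 2.0], [2.0, 4.0]])              # 1 < 2: dependent
noncollinear = rank([[1.0, 0.0], [0.0, 1.0]])           # 2: independent
coplanar = rank([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])   # 2 < 3: dependent
```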


Matrices

An m × n matrix is an array comprised of m rows and n columns:

$$A = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1,n-1} & a_{1,n} \\ a_{21} & a_{22} & \cdots & a_{2,n-1} & a_{2,n} \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ a_{m-1,1} & a_{m-1,2} & \cdots & a_{m-1,n-1} & a_{m-1,n} \\ a_{m,1} & a_{m,2} & \cdots & a_{m,n-1} & a_{m,n} \end{bmatrix} = (a_{ij})$$

Real matrix:     $A \in \mathbb{R}^{m \times n}$ is such that $a_{ij} \in \mathbb{R}\ \forall (i,j)$
Complex matrix:  $A \in \mathbb{C}^{m \times n}$ is such that $a_{ij} \in \mathbb{C}\ \forall (i,j)$

The succinct notation $A = (a_{ij})$ or $A = [(a_{ij})]$ is often used.

A matrix as a collection of column vectors:

$$A = \begin{bmatrix} a_1 & a_2 & \cdots & a_n \end{bmatrix} \in \mathbb{C}^{m \times n}$$

where

$$a_1 = \begin{bmatrix} a_{11} \\ a_{21} \\ \vdots \\ a_{m1} \end{bmatrix} \in \mathbb{C}^m \quad a_2 = \begin{bmatrix} a_{12} \\ a_{22} \\ \vdots \\ a_{m2} \end{bmatrix} \in \mathbb{C}^m \quad \cdots \quad a_i = \begin{bmatrix} a_{1i} \\ a_{2i} \\ \vdots \\ a_{mi} \end{bmatrix} \in \mathbb{C}^m \quad \cdots \quad a_n = \begin{bmatrix} a_{1n} \\ a_{2n} \\ \vdots \\ a_{mn} \end{bmatrix} \in \mathbb{C}^m$$

A matrix as a collection of row vectors:

$$A = \begin{bmatrix} b_1^T \\ b_2^T \\ \vdots \\ b_m^T \end{bmatrix} \in \mathbb{C}^{m \times n}$$

where

$$b_1^T = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \end{bmatrix} \in \mathbb{C}^{1 \times n} \quad b_2^T = \begin{bmatrix} a_{21} & a_{22} & \cdots & a_{2n} \end{bmatrix} \in \mathbb{C}^{1 \times n} \quad \cdots \quad b_m^T = \begin{bmatrix} a_{m1} & a_{m2} & \cdots & a_{mn} \end{bmatrix} \in \mathbb{C}^{1 \times n}$$

or, written as column vectors,

$$b_i = \begin{bmatrix} a_{i1} \\ a_{i2} \\ \vdots \\ a_{in} \end{bmatrix} \in \mathbb{C}^n \qquad b_m = \begin{bmatrix} a_{m1} \\ a_{m2} \\ \vdots \\ a_{mn} \end{bmatrix} \in \mathbb{C}^n$$

Partitioning a matrix into submatrices:

$$A = \begin{bmatrix} a_{11} & a_{12} & a_{13} & a_{14} & \cdots & a_{1,n-1} & a_{1,n} \\ a_{21} & a_{22} & a_{23} & a_{24} & \cdots & a_{2,n-1} & a_{2,n} \\ a_{31} & a_{32} & a_{33} & a_{34} & \cdots & a_{3,n-1} & a_{3,n} \\ \vdots & \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ a_{m1} & a_{m2} & a_{m3} & a_{m4} & \cdots & a_{m,n-1} & a_{m,n} \end{bmatrix} = \begin{bmatrix} B & C \\ D & E \end{bmatrix}$$

Transposition of a matrix

$$\text{If } A = \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \\ a_{31} & a_{32} \end{bmatrix} \in \mathbb{C}^{3 \times 2} \text{ then } A^T = \begin{bmatrix} a_{11} & a_{21} & a_{31} \\ a_{12} & a_{22} & a_{32} \end{bmatrix} \in \mathbb{C}^{2 \times 3}$$

If $A \in \mathbb{C}^{n \times m}$, then $A^T \in \mathbb{C}^{m \times n}$.

Properties:

$$(A + B)^T = A^T + B^T \qquad (AB)^T = B^T A^T \qquad (ABC)^T = C^T B^T A^T$$

$$\begin{bmatrix} B & C \end{bmatrix}^T = \begin{bmatrix} B^T \\ C^T \end{bmatrix} \qquad \begin{bmatrix} B \\ D \end{bmatrix}^T = \begin{bmatrix} B^T & D^T \end{bmatrix} \qquad \begin{bmatrix} B & C \\ D & E \end{bmatrix}^T = \begin{bmatrix} B^T & D^T \\ C^T & E^T \end{bmatrix}$$

Determinant of a Square Matrix

Special definitions:

1×1 matrix: $A = (a_{11}) \in \mathbb{R}^{1 \times 1}$, $\det(A) = a_{11}$

2×2 matrix: $A = \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix} \in \mathbb{R}^{2 \times 2}$, $\det(A) = a_{11}a_{22} - a_{21}a_{12}$

3×3 matrix: $A = \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix} \in \mathbb{R}^{3 \times 3}$,

$$\det(A) = a_{11}a_{22}a_{33} + a_{12}a_{23}a_{31} + a_{21}a_{32}a_{13} - a_{31}a_{22}a_{13} - a_{21}a_{12}a_{33} - a_{32}a_{23}a_{11}$$

General Definition

Preliminary Definition 1. Minor of element $a_{ij}$: $M_{ij}$, defined as the determinant of the matrix obtained by eliminating row i and column j from A.

Preliminary Definition 2. Cofactor of element $a_{ij}$: $A_{ij} = (-1)^{i+j} M_{ij}$.

Preliminary Definition 3. Cofactor matrix of $A \in \mathbb{C}^{n \times n}$: $\mathrm{cof}(A)$, the matrix whose (i, j)-th element is the cofactor $A_{ij}$ of element $a_{ij}$:

$$\mathrm{cof}(A) = \begin{bmatrix} A_{11} & A_{12} & \cdots & A_{1n} \\ A_{21} & A_{22} & \cdots & A_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ A_{n1} & A_{n2} & \cdots & A_{nn} \end{bmatrix} \in \mathbb{C}^{n \times n}$$

Definition. The determinant of a matrix $A \in \mathbb{C}^{n \times n}$ is defined as the following cofactor expansion about any row i or any column j of the matrix A:

$$\det(A) := \sum_{k=1}^{n} a_{ik} A_{ik} = \sum_{k=1}^{n} (-1)^{i+k}\, a_{ik} M_{ik}$$

or

$$\det(A) := \sum_{k=1}^{n} a_{kj} A_{kj} = \sum_{k=1}^{n} (-1)^{k+j}\, a_{kj} M_{kj}$$

Remark. The cofactor expansion of the determinant of an n×n matrix consists of n terms, and each one of these terms is the product of n elements. Furthermore, each term contains one and only one element from each row, and one and only one element from each column.

Example

Cofactor expansion of matrix A about the first row:

$$\det(A) := \sum_{k=1}^{n} a_{1k} A_{1k} = \sum_{k=1}^{n} (-1)^{1+k}\, a_{1k} M_{1k}$$

Derive the formula for the determinant of the 3×3 matrix

$$A = \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix}$$

using the cofactor expansion about the first row.

Minors:

$$M_{11} = \det\begin{bmatrix} a_{22} & a_{23} \\ a_{32} & a_{33} \end{bmatrix} = a_{22}a_{33} - a_{32}a_{23}$$

$$M_{12} = \det\begin{bmatrix} a_{21} & a_{23} \\ a_{31} & a_{33} \end{bmatrix} = a_{21}a_{33} - a_{31}a_{23}$$

$$M_{13} = \det\begin{bmatrix} a_{21} & a_{22} \\ a_{31} & a_{32} \end{bmatrix} = a_{21}a_{32} - a_{31}a_{22}$$

Cofactor expansion about the first row:

$$\det(A) = (-1)^{1+1} a_{11} M_{11} + (-1)^{1+2} a_{12} M_{12} + (-1)^{1+3} a_{13} M_{13} = a_{11}M_{11} - a_{12}M_{12} + a_{13}M_{13}$$

$$= a_{11}a_{22}a_{33} - a_{11}a_{32}a_{23} - a_{12}a_{21}a_{33} + a_{12}a_{31}a_{23} + a_{13}a_{21}a_{32} - a_{13}a_{31}a_{22}$$
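The cofactor expansion about the first row generalizes to any order, which makes a natural recursive implementation. A minimal sketch (fine for small matrices; the n! cost makes it impractical for large ones):

```python
def det(A):
    """Determinant by cofactor expansion about the first row."""
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0.0
    for k in range(n):
        # minor M_1k: delete row 1 and column k
        minor = [row[:k] + row[k + 1:] for row in A[1:]]
        total += (-1) ** k * A[0][k] * det(minor)
    return total

A = [[1.0, 2.0, 3.0],
     [4.0, 5.0, 6.0],
     [7.0, 8.0, 10.0]]
d = det(A)  # the 3x3 formula gives 50 + 84 + 96 - 105 - 80 - 48 = -3
```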

Properties of Determinants

Let $A = \begin{bmatrix} a_1 & a_2 & \cdots & a_p & \cdots & a_q & \cdots & a_n \end{bmatrix}$, where $a_i \in \mathbb{R}^n$.

(a) Multiplying a column (row) by a scalar multiplies det(A) by that scalar:

$$\det\begin{bmatrix} a_1 & a_2 & \cdots & \alpha a_i & \cdots & a_n \end{bmatrix} = \alpha \det(A), \qquad \alpha \in \mathbb{R}$$

(b) $\det(\alpha A) = \alpha^n \det(A)$

(c) The determinant of a matrix with a column (row) of zeros is identically zero:

$$\det\begin{bmatrix} a_1 & a_2 & \cdots & 0 & \cdots & a_n \end{bmatrix} = 0$$

(d) Swapping any two columns (rows) changes the sign of the determinant:

$$\det\begin{bmatrix} a_1 & a_2 & \cdots & a_p & \cdots & a_q & \cdots & a_n \end{bmatrix} = -\det\begin{bmatrix} a_1 & a_2 & \cdots & a_q & \cdots & a_p & \cdots & a_n \end{bmatrix}$$

(e) Adding a multiple of a column (row) to another column (row) does not affect det(A):

$$\det\begin{bmatrix} a_1 & \cdots & a_p & \cdots & a_q & \cdots & a_n \end{bmatrix} = \det\begin{bmatrix} a_1 & \cdots & a_p & \cdots & (a_q + \alpha a_p) & \cdots & a_n \end{bmatrix}$$

(f) $\det(A) = \det(A^T)$

(g) $\det(AB) = \det(BA) = \det(A)\det(B)$  (A, B square)

(h) $\det(I) = 1$

(i) $\det(A^{-1}) = \dfrac{1}{\det(A)}$

(j) $\det(I + AB) = \det(I + BA)$

Matrix Rank

Definition. Matrix $A \in \mathbb{R}^{n \times m}$ has column-rank α if at most α columns of A are linearly independent.
Definition. Matrix $A \in \mathbb{R}^{n \times m}$ has row-rank β if at most β rows of A are linearly independent.
Definition. Matrix $A \in \mathbb{R}^{n \times m}$ has full column-rank if α = m (rk(A) = m).
Definition. Matrix $A \in \mathbb{R}^{n \times m}$ has full row-rank if β = n (rk(A) = n).

THEOREM. The row-rank and column-rank of any m × n matrix are equal.

COROLLARY. rk(A) = number of independent columns of A = number of independent rows of A.

Remark. Let $A \in \mathbb{R}^{n \times m}$; then rk(A) ≤ min(m, n).

THEOREM. Let $A \in \mathbb{R}^{n \times m}$ be an arbitrary matrix, and let the square matrices $B \in \mathbb{R}^{m \times m}$ and $C \in \mathbb{R}^{n \times n}$ be full-rank matrices. Then

$$\mathrm{rk}(A) = \mathrm{rk}(AB) = \mathrm{rk}(CA)$$

Interpretation: multiplication by a full-rank square matrix does not affect rank.

Matrix Inversion

Inversion is defined only for square matrices $A \in \mathbb{R}^{n \times n}$ (or $A \in \mathbb{C}^{n \times n}$).

Definition. Matrix $A^{-1} \in \mathbb{R}^{n \times n}$ is the inverse of matrix A if and only if it satisfies the equalities

$$A^{-1} A = A A^{-1} = I$$

Definition. Matrix $A \in \mathbb{R}^{n \times n}$ is said to be nonsingular if there exists a matrix inverse $A^{-1} \in \mathbb{R}^{n \times n}$. Otherwise, A is said to be singular.

Remark: the matrix inverse is also square.

THEOREM. Existence of a matrix inverse for a square matrix $A \in \mathbb{R}^{n \times n}$. The following statements are equivalent:
(i) Matrix A is nonsingular if and only if A is full rank.
(ii) Matrix A is nonsingular if and only if rk(A) = n.
(iii) Matrix A is nonsingular if and only if det(A) ≠ 0.

Properties of the matrix-inversion operation:

$$(AB)^{-1} = B^{-1} A^{-1} \qquad (ABC)^{-1} = C^{-1} B^{-1} A^{-1}$$

Left and Right Inverses

Nonsquare (rectangular) matrices have no inverse. However, they may have a left or a right inverse.

THEOREM. Let $A \in \mathbb{R}^{m \times n}$ have full column rank; then the square matrix $A^T A \in \mathbb{R}^{n \times n}$ is full rank. Equivalently:

$$\mathrm{rk}(A) = n \iff \mathrm{rk}(A^T A) = n$$

THEOREM. Let $A \in \mathbb{R}^{m \times n}$ have full row rank; then the square matrix $A A^T \in \mathbb{R}^{m \times m}$ is full rank. Equivalently:

$$\mathrm{rk}(A) = m \iff \mathrm{rk}(A A^T) = m$$

Definition. $A^{-L} \in \mathbb{R}^{n \times m}$ is the left inverse of $A \in \mathbb{R}^{m \times n}$ if and only if $A^{-L} A = I$.

Definition. $A^{-R} \in \mathbb{R}^{n \times m}$ is the right inverse of $A \in \mathbb{R}^{m \times n}$ if and only if $A A^{-R} = I$.

THEOREM. There exists a left inverse of $A \in \mathbb{R}^{m \times n}$ if and only if A has full column rank. Furthermore, the left inverse is unique and given by

$$A^{-L} = (A^T A)^{-1} A^T \in \mathbb{R}^{n \times m}$$

Remark: $A^{-L} A = (A^T A)^{-1} A^T A = I \in \mathbb{R}^{n \times n}$

THEOREM. There exists a right inverse of $A \in \mathbb{R}^{m \times n}$ if and only if A has full row rank. Furthermore, the right inverse is not unique, and one right inverse is given by

$$A^{-R} = A^T (A A^T)^{-1} \in \mathbb{R}^{n \times m}$$

Remark: $A A^{-R} = A A^T (A A^T)^{-1} = I \in \mathbb{R}^{m \times m}$
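The left-inverse formula can be verified on a small full-column-rank matrix. A minimal sketch for a 3×2 matrix, where $A^T A$ is 2×2 and can be inverted with the adjoint formula:

```python
def matmul(X, Y):
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)]
            for row in X]

def transpose(X):
    return [list(r) for r in zip(*X)]

def inv2(M):
    """Inverse of a 2x2 matrix via the adjoint formula."""
    (a, b), (c, d) = M
    d_M = a * d - b * c
    return [[d / d_M, -b / d_M], [-c / d_M, a / d_M]]

A = [[1.0, 0.0],
     [0.0, 1.0],
     [1.0, 1.0]]  # full column rank, 3x2

AL = matmul(inv2(matmul(transpose(A), A)), transpose(A))  # (A^T A)^-1 A^T
I2 = matmul(AL, A)  # should reproduce the 2x2 identity
```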


Definition. The adjoint of a square matrix is defined as the transpose of the matrix of cofactors:

$$\mathrm{adj}(A) = \left[\mathrm{cof}(A)\right]^T$$

Remark:

$$\mathrm{adj}(A) = \begin{bmatrix} A_{11} & A_{12} & \cdots & A_{1n} \\ A_{21} & A_{22} & \cdots & A_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ A_{n1} & A_{n2} & \cdots & A_{nn} \end{bmatrix}^T = \begin{bmatrix} (-1)^{1+1}M_{11} & (-1)^{1+2}M_{12} & \cdots & (-1)^{1+n}M_{1n} \\ (-1)^{2+1}M_{21} & (-1)^{2+2}M_{22} & \cdots & (-1)^{2+n}M_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ (-1)^{n+1}M_{n1} & (-1)^{n+2}M_{n2} & \cdots & (-1)^{n+n}M_{nn} \end{bmatrix}^T$$

THEOREM.

$$A^{-1} = \frac{1}{\det(A)}\, \mathrm{adj}(A)$$

Remark:

$$A^{-1} = \frac{1}{\det(A)} \begin{bmatrix} (-1)^{1+1}M_{11} & (-1)^{1+2}M_{12} & \cdots & (-1)^{1+n}M_{1n} \\ (-1)^{2+1}M_{21} & (-1)^{2+2}M_{22} & \cdots & (-1)^{2+n}M_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ (-1)^{n+1}M_{n1} & (-1)^{n+2}M_{n2} & \cdots & (-1)^{n+n}M_{nn} \end{bmatrix}^T$$

Inverse of a 2×2 Matrix

Let $A = \begin{bmatrix} a & b \\ c & d \end{bmatrix} \in \mathbb{R}^{2 \times 2}$.

Procedure to find the inverse (assuming that det(A) ≠ 0). Find the cofactors of each of the elements of the matrix:

$$A_{11} = (-1)^{1+1}\det(d) = d \qquad A_{12} = (-1)^{1+2}\det(c) = -c$$

$$A_{21} = (-1)^{2+1}\det(b) = -b \qquad A_{22} = (-1)^{2+2}\det(a) = a$$

Construct the adjoint matrix and find the determinant:

$$\mathrm{adj}(A) = \begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix}^T = \begin{bmatrix} d & -c \\ -b & a \end{bmatrix}^T = \begin{bmatrix} d & -b \\ -c & a \end{bmatrix} \qquad \det(A) = ad - cb$$

Construct the inverse matrix using the adjoint theorem:

$$A^{-1} = \frac{1}{\det(A)}\,\mathrm{adj}(A) = \frac{1}{ad - cb}\begin{bmatrix} d & -b \\ -c & a \end{bmatrix}$$

Eigenvalues and Eigenvectors

Let $A \in \mathbb{R}^{n \times n}$ and consider the equation

$$A x = \lambda x \qquad (1)$$

Definition. Every nonzero vector $v \in \mathbb{C}^n$ that satisfies (1) is an eigenvector of A, and every scalar $\lambda \in \mathbb{C}$ that satisfies (1) is an eigenvalue of A.

Notation:
  Characteristic matrix:      $\lambda I - A$
  Characteristic equation:    $\det(\lambda I - A) = 0$
  Characteristic polynomial:  $\det(\lambda I - A) = \lambda^n + a_{n-1}\lambda^{n-1} + \cdots + a_1 \lambda + a_0 = (\lambda - \lambda_1)(\lambda - \lambda_2)\cdots(\lambda - \lambda_{n-1})(\lambda - \lambda_n)$

THEOREM. The eigenvalues of A are the roots of the characteristic equation.

Observations:
  There is a unique set of n eigenvalues $\{\lambda_1, \lambda_2, \ldots, \lambda_{n-1}, \lambda_n\}$
  Each eigenvalue satisfies $\lambda_i^n + a_{n-1}\lambda_i^{n-1} + \cdots + a_1\lambda_i + a_0 = 0$
  An eigenvalue can be zero, but no eigenvector can be zero
  Given that A is a real matrix, all complex eigenvalues appear as complex-conjugate pairs in the set $\{\lambda_1, \lambda_2, \ldots, \lambda_{n-1}, \lambda_n\}$
  The set of eigenvectors $\{v_1, v_2, \ldots, v_{n-1}, v_n\}$ is not unique

Definition. The spectrum of matrix A is the set of eigenvalues, and is denoted as

$$\Lambda(A) = \big\{\, \lambda_1, \lambda_2, \ldots, \lambda_{n-1}, \lambda_n : \det(\lambda I - A) = 0 \,\big\}$$
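For a 2×2 matrix the characteristic equation is a quadratic, $\lambda^2 - \mathrm{tr}(A)\lambda + \det(A) = 0$, so the eigenvalues follow from the quadratic formula. A minimal sketch that also illustrates how complex eigenvalues of a real matrix appear as conjugate pairs:

```python
import cmath

def eig2(M):
    """Eigenvalues of a 2x2 matrix as roots of
    det(lambda*I - A) = lambda^2 - tr(A)*lambda + det(A) = 0."""
    (a, b), (c, d) = M
    tr, det = a + d, a * d - b * c
    disc = cmath.sqrt(tr * tr - 4.0 * det)
    return (tr + disc) / 2.0, (tr - disc) / 2.0

l1, l2 = eig2([[0.0, 1.0], [-2.0, -3.0]])  # lambda^2 + 3*lambda + 2 -> -1, -2
l3, l4 = eig2([[0.0, 1.0], [-1.0, 0.0]])   # lambda^2 + 1 -> conjugate pair +/-j
```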

Classification of Eigenvalues


Contents

  Canonical matrix forms
  Eigenvalue multiplicity
  Similarity transformations
  Structure of canonical forms
    Diagonal, Jordan, square, and square-Jordan
  Jordan canonical form
  Square canonical form
  Square-Jordan canonical form
  The matrix exponential function exp(At)
    Properties
    Closed-form expressions
    Diagonal, Jordan, square, and square-Jordan

III - The Matrix Exponential Function


Canonical Matrix Forms

Spectrum and eigenvector sets of matrix $A \in \mathbb{R}^{n \times n}$:

  Λ(A) = { λ1, λ2, …, λn−1, λn }
  V(A) = { v1, v2, …, vn−1, vn }

Canonical forms for matrix A (each obtained with a suitable modal matrix H):

  Diagonal canonical form D:       $D = H^{-1} A H$
  Jordan canonical form J:         $J = H^{-1} A H$
  Square canonical form K:         $K = H^{-1} A H$
  Square-Jordan canonical form Q:  $Q = H^{-1} A H$

Classification of Eigenvalues


Eigenvalue Multiplicity

Algebraic multiplicity m_i. Example:

  Λ(A) = { λ1, λ2, λ3, λ4, λ5, λ6 } = { 2, −4, −4, 5, 5, 5 }

  λ1(A) = 2:   algebraic multiplicity m1 = 1 (distinct)
  λ2(A) = −4:  algebraic multiplicity m2 = 2 (repeated)
  λ4(A) = 5:   algebraic multiplicity m4 = 3 (repeated)

Eigenvector sets for repeated eigenvalues

  Λ(A) = { λ1, λ2, λ3, λ4, λ5, λ6 } = { λ1, λ2, λ2, λ4, λ4, λ4 }
  V(A) = { v1, v2, v3, v4, v5, v6 } = { v1, v21, v22, v41, v42, v43 }

where λ1, λ2, and λ4 are distinct.

Eigenvector set associated with each distinct eigenvalue:

  m1 = m(λ1) = 1:  V(λ1) = { v1 }
  m2 = m(λ2) = 2:  V(λ2) = { v2, v3 } = { v21, v22 }
  m4 = m(λ4) = 3:  V(λ4) = { v4, v5, v6 } = { v41, v42, v43 }

Geometric multiplicity g_i of an eigenvalue – Example

  m4 = m(λ4) = 3:  V(λ4) = { v4, v5, v6 } = { v41, v42, v43 }

  g4 = 1 if V(λ4) has only 1 independent vector
  g4 = 2 if V(λ4) has only 2 independent vectors
  g4 = 3 if V(λ4) has 3 independent vectors

Example of algebraic multiplicity equal to 2:

  Λ(A) = { λ1, λ2 } = { −3, −3 }    V(A) = { v1, v2 }

  λ1(A) = −3, m1 = 2 (repeated)

  If { v1, v2 } is a linearly dependent set ⇒ g1 = 1
  If { v1, v2 } is a linearly independent set ⇒ g1 = 2

Eigenvectors and principal vectors

Case of a repeated eigenvalue λ1(A) with algebraic multiplicity m1 = m(λ1) = 3

 

  Geometric multiplicity g1 = 1

  Eigenvector equations:

$$A v_1 = \lambda_1 v_1, \qquad A v_2 = \lambda_1 v_2, \qquad A v_3 = \lambda_1 v_3$$

  The eigenvector set V(λ1) = { v1, v2, v3 } is NOT linearly independent.

  Principal-vector equations. Define w1 = v1; then

$$A w_1 = \lambda_1 w_1, \qquad A w_2 = \lambda_1 w_2 + w_1, \qquad A w_3 = \lambda_1 w_3 + w_2$$

  Generalized eigenvector set W(λ1) = { v1, w2, w3 }

THEOREM. W(λ1) = { v1, w2, w3 } is linearly independent.

Similarity Transformations

Definition. Two square matrices $A \in \mathbb{R}^{n \times n}$ and $R \in \mathbb{R}^{n \times n}$ are said to be similar if there exists a nonsingular matrix $H \in \mathbb{R}^{n \times n}$ such that

$$R = H^{-1} A H$$

Similarity transformation: the operation $H^{-1} A H$.

Properties of similarity transformations:

$$\Lambda(H^{-1}AH) = \Lambda(A) \qquad \mathrm{rk}(H^{-1}AH) = \mathrm{rk}(A) \qquad \det(H^{-1}AH) = \det(A)$$

Diagonal Canonical Form

Let $A \in \mathbb{R}^{4 \times 4}$ have an eigenvalue set Λ(A) = { λ1, λ2, λ3, λ4 } where all the eigenvalues are (i) real, and (ii) distinct. Let V(A) = { v1, v2, v3, v4 } be an eigenvector set.

THEOREM. The eigenvector set V(A) is linearly independent.

Definition. Modal matrix for A:

$$H := \begin{bmatrix} v_1 & v_2 & v_3 & v_4 \end{bmatrix} \in \mathbb{R}^{4 \times 4}$$

COROLLARY. The inverse modal matrix $H^{-1} \in \mathbb{R}^{4 \times 4}$ exists.

THEOREM.

$$H^{-1} A H = D = \begin{bmatrix} \lambda_1 & 0 & 0 & 0 \\ 0 & \lambda_2 & 0 & 0 \\ 0 & 0 & \lambda_3 & 0 \\ 0 & 0 & 0 & \lambda_4 \end{bmatrix} = \mathrm{diag}(\lambda_1, \lambda_2, \lambda_3, \lambda_4) \in \mathbb{R}^{4 \times 4}$$

Jordan Canonical Form

Let $A \in \mathbb{R}^{4 \times 4}$ have an eigenvalue set Λ(A) = { λ1, λ2, λ3, λ4 } = { λ1, λ1, λ1, λ1 } where all the eigenvalues are (i) real, and (ii) repeated.

Let V(A) = { v1, v2, v3, v4 } be an eigenvector set (linearly dependent).

Let W(A) = { w1, w2, w3, w4 } be a principal-vector set, where

$$A w_1 = \lambda_1 w_1, \quad A w_2 = \lambda_1 w_2 + w_1, \quad A w_3 = \lambda_1 w_3 + w_2, \quad A w_4 = \lambda_1 w_4 + w_3$$

THEOREM. The set W(A) is linearly independent.

Definition. Modal matrix for A:

$$H := \begin{bmatrix} w_1 & w_2 & w_3 & w_4 \end{bmatrix} \in \mathbb{R}^{4 \times 4}$$

COROLLARY. The inverse modal matrix $H^{-1} \in \mathbb{R}^{4 \times 4}$ exists.

THEOREM.

$$H^{-1} A H = J = \begin{bmatrix} \lambda_1 & 1 & 0 & 0 \\ 0 & \lambda_1 & 1 & 0 \\ 0 & 0 & \lambda_1 & 1 \\ 0 & 0 & 0 & \lambda_1 \end{bmatrix} \in \mathbb{R}^{4 \times 4}$$

Square Canonical Form

Let $A \in \mathbb{R}^{4 \times 4}$ have an eigenvalue set

  Λ(A) = { λ1, λ2, λ3, λ4 } = { σ1 + jω1, σ1 − jω1, σ2 + jω2, σ2 − jω2 }

where all the eigenvalues are (i) complex, and (ii) distinct.

Let V(A) = { v1, v̄1, v2, v̄2 } be an eigenvector set (linearly independent but complex), where

  v1 = Re(v1) + j Im(v1),  v̄1 = Re(v1) − j Im(v1)
  v2 = Re(v2) + j Im(v2),  v̄2 = Re(v2) − j Im(v2)

Define V′(A) = { Re(v1), Im(v1), Re(v2), Im(v2) } (no complex elements).

THEOREM. The set V′(A) is linearly independent.

Definition. Modal matrix for A:

$$H := \begin{bmatrix} \mathrm{Re}(v_1) & \mathrm{Im}(v_1) & \mathrm{Re}(v_2) & \mathrm{Im}(v_2) \end{bmatrix} \in \mathbb{R}^{4 \times 4}$$

COROLLARY. The inverse modal matrix $H^{-1} \in \mathbb{R}^{4 \times 4}$ exists.

THEOREM.

$$H^{-1} A H = K = \begin{bmatrix} \sigma_1 & \omega_1 & 0 & 0 \\ -\omega_1 & \sigma_1 & 0 & 0 \\ 0 & 0 & \sigma_2 & \omega_2 \\ 0 & 0 & -\omega_2 & \sigma_2 \end{bmatrix} \in \mathbb{R}^{4 \times 4}$$

Square-Jordan Canonical Form

Let $A \in \mathbb{R}^{4 \times 4}$ have an eigenvalue set

  Λ(A) = { λ1, λ2, λ3, λ4 } = { σ + jω, σ − jω, σ + jω, σ − jω }

where the eigenvalues are (i) complex, and (ii) repeated.

Let V(A) = { v, v̄, v, v̄ } be an eigenvector set (linearly dependent), where

  v = Re(v) + j Im(v),  v̄ = Re(v) − j Im(v)

Let W(A) = { w1, w̄1, w2, w̄2 } be a principal-vector set (linearly independent but complex), where

$$w_1 = v, \qquad A w_2 = \lambda w_2 + w_1$$

Define W′(A) = { Re(w1), Im(w1), Re(w2), Im(w2) } (no complex elements).

THEOREM. The set W′(A) is linearly independent.

Definition. Modal matrix for A:

$$H := \begin{bmatrix} \mathrm{Re}(w_1) & \mathrm{Im}(w_1) & \mathrm{Re}(w_2) & \mathrm{Im}(w_2) \end{bmatrix} \in \mathbb{R}^{4 \times 4}$$

COROLLARY. The inverse modal matrix $H^{-1} \in \mathbb{R}^{4 \times 4}$ exists.

THEOREM.

$$H^{-1} A H = Q = \begin{bmatrix} \sigma & \omega & 1 & 0 \\ -\omega & \sigma & 0 & 1 \\ 0 & 0 & \sigma & \omega \\ 0 & 0 & -\omega & \sigma \end{bmatrix} \in \mathbb{R}^{4 \times 4}$$

Analysis of an Example

The 8×8 real matrix A has a modal matrix H such that

$$H^{-1} A H = \begin{bmatrix} \lambda_1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & \lambda_1 & 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & \lambda_1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & \sigma & \omega & 0 & 0 & 0 \\ 0 & 0 & 0 & -\omega & \sigma & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & -6 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & -7 & 1 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & -7 \end{bmatrix} = \begin{bmatrix} J_{3\times3} & 0 & 0 & 0 \\ 0 & K_{2\times2} & 0 & 0 \\ 0 & 0 & D_{1\times1} & 0 \\ 0 & 0 & 0 & J_{2\times2} \end{bmatrix}$$

Analyze the nature of the eigenvalues of A:

  λ = λ1:  Real ___ or complex ___   m1 = ___   g1 = ___
           Comment on the eigenvectors and principal vectors: _______________

  λ = λ4:  Real ___ or complex ___   m4 = ___   g4 = ___
           Comment on the eigenvectors and principal vectors: _______________

  λ = λ6:  Real ___ or complex ___   m6 = ___   g6 = ___
           Comment on the eigenvectors and principal vectors: _______________

  λ = λ7:  Real ___ or complex ___   m7 = ___   g7 = ___
           Comment on the eigenvectors and principal vectors: _______________

The Scalar Exponential Function

Let α ∈ ℝ be a finite number and let t ∈ ℝ.

Definition.

$$e^{\alpha t} := 1 + t\alpha + \frac{t^2}{2!}\alpha^2 + \frac{t^3}{3!}\alpha^3 + \cdots + \frac{t^k}{k!}\alpha^k + \cdots$$

Convergence property. The power series is uniformly convergent for any finite value of t because it satisfies

$$\lim_{k \to \infty} \frac{t^k}{k!}\alpha^k = 0$$

Reason: the factorial k! is much greater than $t^k \alpha^k$ at large values of k.

Laplace transform:

$$\mathcal{L}\{e^{\alpha t}\} := \int_0^{+\infty} e^{\alpha t} e^{-st}\, dt = \frac{1}{s - \alpha}$$

The Matrix Exponential Function

Let $A \in \mathbb{R}^{n \times n}$ and t ∈ ℝ.

Definition.

$$e^{At} := I + tA + \frac{t^2}{2!}A^2 + \frac{t^3}{3!}A^3 + \cdots + \frac{t^k}{k!}A^k + \cdots$$

Convergence property. The power series is uniformly convergent for any finite value of t because it satisfies

$$\lim_{k \to \infty} \frac{t^k}{k!} A^k = 0$$

Reason: the factorial k! is much greater than $t^k b_{ij}$ at large values of k, where $b_{ij}$ is the (i, j)-th element of the matrix $A^k$.

Properties of the matrix exponential that are analogous to those of the scalar exponential function:

  Differentiation:   $\dfrac{d}{dt}\left[e^{At}\right] = A e^{At} = e^{At} A$

  Chain rule:        $\dfrac{d}{dt}\left[e^{At} x(t)\right] = e^{At} A\, x(t) + e^{At}\dfrac{dx}{dt}$

  Transposition:     $(e^{At})^T = e^{A^T t}$

  $e^{(A+B)t} = e^{(B+A)t}$

  $e^{At}\, e^{As} = e^{A(t+s)}$

  Matrix inverse (nonsingularity):  $(e^{At})^{-1} = e^{-At}$

  Laplace transform:  $\mathcal{L}\{e^{At}\} := \displaystyle\int_0^{+\infty} e^{At} e^{-st}\, dt = (sI - A)^{-1}$

Properties of the matrix exponential that are not analogous to those of the scalar exponential function

In general
$$ e^{At} e^{Bt} \ne e^{Bt} e^{At} \qquad (\text{note that } e^{\alpha t} e^{\beta t} = e^{\beta t} e^{\alpha t}) $$
$$ e^{At} e^{Bt} \ne e^{(A+B)t} \qquad (\text{note that } e^{\alpha t} e^{\beta t} = e^{(\alpha + \beta)t}) $$

Reason: in general, matrix multiplication is not commutative (i.e., in general $YZ \ne ZY$).

Effect of Similarity Transformations
$$ e^{H^{-1} A H\, t} = H^{-1} e^{At} H $$
This is a key relationship used in the formulation of closed-form expressions for the matrix-exponential function.
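The non-commutativity caveat can be checked numerically. A minimal sketch (the two nilpotent matrices are illustrative choices): for non-commuting A and B the products differ, while for commuting matrices (here A and 2A) the scalar-like identity holds.

```python
import numpy as np

def expm_series(M, terms=60):
    """e^M via its truncated power series (illustrative sketch)."""
    out = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

# AB != BA here, so e^A e^B and e^{A+B} differ
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0, 0.0], [1.0, 0.0]])
lhs = expm_series(A) @ expm_series(B)
rhs = expm_series(A + B)
```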

Closed-Form Expressions

Matrix A in diagonal canonical form:
$$ A = \begin{pmatrix} \lambda_1 & 0 & \cdots & 0 \\ 0 & \lambda_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda_n \end{pmatrix} = D $$

Matrix exponential function for matrix A:
$$ e^{Dt} = \begin{pmatrix} e^{\lambda_1 t} & 0 & \cdots & 0 \\ 0 & e^{\lambda_2 t} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & e^{\lambda_n t} \end{pmatrix} $$

Matrix A in Jordan canonical form:
$$ A = \begin{pmatrix} \lambda_1 & 1 & 0 & 0 & 0 \\ 0 & \lambda_1 & 1 & 0 & 0 \\ 0 & 0 & \lambda_1 & 0 & 0 \\ 0 & 0 & 0 & \lambda_3 & 1 \\ 0 & 0 & 0 & 0 & \lambda_3 \end{pmatrix} = \begin{pmatrix} J_1 & 0 \\ 0 & J_3 \end{pmatrix} = J $$

Matrix exponential function for matrix A:
$$ e^{Jt} = \begin{pmatrix} e^{\lambda_1 t} & t e^{\lambda_1 t} & \frac{t^2}{2!} e^{\lambda_1 t} & 0 & 0 \\ 0 & e^{\lambda_1 t} & t e^{\lambda_1 t} & 0 & 0 \\ 0 & 0 & e^{\lambda_1 t} & 0 & 0 \\ 0 & 0 & 0 & e^{\lambda_3 t} & t e^{\lambda_3 t} \\ 0 & 0 & 0 & 0 & e^{\lambda_3 t} \end{pmatrix} = \begin{pmatrix} e^{J_1 t} & 0 \\ 0 & e^{J_3 t} \end{pmatrix} $$
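The closed form for a Jordan block, $e^{Jt} = e^{\lambda t}$ times powers $t^k/k!$ on the superdiagonals, can be verified against the power series. A minimal sketch (eigenvalue and time are illustrative choices):

```python
import numpy as np

def expm_series(M, terms=80):
    """e^M via its truncated power series (illustrative sketch)."""
    out = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

lam, t = -0.5, 2.0
# 3x3 Jordan block with eigenvalue lam
J = np.array([[lam, 1.0, 0.0],
              [0.0, lam, 1.0],
              [0.0, 0.0, lam]])
# Closed form: e^{lam t} times 1, t, t^2/2! on the superdiagonals
closed = np.exp(lam * t) * np.array([[1.0, t, t**2 / 2.0],
                                     [0.0, 1.0, t],
                                     [0.0, 0.0, 1.0]])
numeric = expm_series(J * t)
```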

Matrix A in square canonical form:
$$ A = \begin{pmatrix} \alpha_1 & \beta_1 & 0 & 0 \\ -\beta_1 & \alpha_1 & 0 & 0 \\ 0 & 0 & \alpha_2 & \beta_2 \\ 0 & 0 & -\beta_2 & \alpha_2 \end{pmatrix} = K $$

Matrix exponential function for matrix A:
$$ e^{Kt} = \begin{pmatrix} e^{\alpha_1 t}\cos\beta_1 t & e^{\alpha_1 t}\sin\beta_1 t & 0 & 0 \\ -e^{\alpha_1 t}\sin\beta_1 t & e^{\alpha_1 t}\cos\beta_1 t & 0 & 0 \\ 0 & 0 & e^{\alpha_2 t}\cos\beta_2 t & e^{\alpha_2 t}\sin\beta_2 t \\ 0 & 0 & -e^{\alpha_2 t}\sin\beta_2 t & e^{\alpha_2 t}\cos\beta_2 t \end{pmatrix} $$

Matrix A in square-Jordan canonical form:

#! % %" A= %0 % \$0

" ! 0 0

1 0 ! "

0& ( 1( =Q "( ( !'

Matrix exponential function for matrix A:

\$ e! t cos " t e! t sin " t t e! t cos " t & & #e! t sin " t e! t cos " t #t e! t sin " t eQ t = & 0 e! t cos " t & 0 & 0 #e! t sin " t % 0

t e! t sin " t ' ) ! t cos " t ) te ! t sin " t ) e ) ) e! t cos " t (

III - The Matrix Exponential Function

27

Closed Form for an Arbitrary Matrix

Let $A \in \mathbb{R}^{n \times n}$ and let $H \in \mathbb{R}^{n \times n}$ be a modal matrix for A such that
$$ H^{-1} A H = \begin{pmatrix} D & 0 & 0 & 0 \\ 0 & J & 0 & 0 \\ 0 & 0 & K & 0 \\ 0 & 0 & 0 & Q \end{pmatrix} $$

Then
$$ e^{At} = H \begin{pmatrix} e^{Dt} & 0 & 0 & 0 \\ 0 & e^{Jt} & 0 & 0 \\ 0 & 0 & e^{Kt} & 0 \\ 0 & 0 & 0 & e^{Qt} \end{pmatrix} H^{-1} $$
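For a diagonalizable matrix the modal construction reduces to eigenvectors and eigenvalues. A minimal numpy sketch (the matrix is an illustrative choice with distinct real eigenvalues), cross-checked against the defining power series:

```python
import numpy as np

A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # eigenvalues -1 and -2
t = 1.5
evals, H = np.linalg.eig(A)        # columns of H are eigenvectors (a modal matrix)
eDt = np.diag(np.exp(evals * t))   # e^{Dt}: exponentials of the eigenvalues
eAt = (H @ eDt @ np.linalg.inv(H)).real   # e^{At} = H e^{Dt} H^{-1}

# Independent check via the defining power series
series = np.eye(2)
term = np.eye(2)
for k in range(1, 60):
    term = term @ (A * t) / k
    series = series + term
```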

Example

$$ A = H\, \mathrm{diag}\big( \lambda_1,\; \lambda_2,\; \lambda_3,\; J_4,\; K_5 \big)\, H^{-1} = H \begin{pmatrix} D & 0 & 0 \\ 0 & J & 0 \\ 0 & 0 & K \end{pmatrix} H^{-1} $$
with the $4\times 4$ Jordan block and the $2\times 2$ square block
$$ J_4 = \begin{pmatrix} \lambda_4 & 1 & 0 & 0 \\ 0 & \lambda_4 & 1 & 0 \\ 0 & 0 & \lambda_4 & 1 \\ 0 & 0 & 0 & \lambda_4 \end{pmatrix}, \qquad K_5 = \begin{pmatrix} \alpha_5 & \beta_5 \\ -\beta_5 & \alpha_5 \end{pmatrix} $$

where $\lambda_1$, $\lambda_2$, $\lambda_3$, $\lambda_4$, $\alpha_5$, and $\beta_5$ are real numbers.

$$ e^{At} = H\, \mathrm{diag}\big( e^{\lambda_1 t},\; e^{\lambda_2 t},\; e^{\lambda_3 t},\; e^{J_4 t},\; e^{K_5 t} \big)\, H^{-1} $$
with
$$ e^{J_4 t} = e^{\lambda_4 t} \begin{pmatrix} 1 & t & \frac{t^2}{2!} & \frac{t^3}{3!} \\ 0 & 1 & t & \frac{t^2}{2!} \\ 0 & 0 & 1 & t \\ 0 & 0 & 0 & 1 \end{pmatrix}, \qquad e^{K_5 t} = e^{\alpha_5 t} \begin{pmatrix} \cos\beta_5 t & \sin\beta_5 t \\ -\sin\beta_5 t & \cos\beta_5 t \end{pmatrix} $$

Contents

- Dynamic systems
- The linear state-space system
  - Time-varying, time-invariant, and homogeneous systems
- Linearization of nonlinear dynamic systems
- Graphical representation of LTI systems
- The single-input/single-output (SISO) system
- Interconnection of LTI systems
  - Systems in a parallel connection
  - Systems in a series connection
  - Systems in a negative-feedback connection

IV - The Linear State Space and Linearization


Dynamic Systems

$$ \Sigma \;\begin{cases} \dfrac{dx}{dt} = f\big( x(t), u(t) \big) & \text{(state equation)} \\[1mm] y(t) = g\big( x(t), u(t) \big) & \text{(output equation)} \\[1mm] x(t_0) = x_0 & \text{(initial state)} \end{cases} $$

- $x \in \mathbb{R}^n$: state vector; $f \in \mathbb{R}^n$: state-derivative map
- $u \in \mathbb{R}^p$: input vector; $g \in \mathbb{R}^m$: output map
- $y \in \mathbb{R}^m$: output vector; $t \in \mathbb{R}$: time
- $x_0 \in \mathbb{R}^n$: initial state; $t_0 \in \mathbb{R}$: initial time

Order of the dynamic system: n (the number of differential equations)

The state and output equations may be nonlinear.

The Linear State-Space System

Linear time-varying (LTV) state-space system:
$$ \Sigma \;\begin{cases} \dfrac{dx}{dt} = A(t)x(t) + B(t)u(t) & \text{(state equation)} \\[1mm] y(t) = C(t)x(t) + D(t)u(t) & \text{(output equation)} \\[1mm] x(t_0) = x_0 & \text{(initial state)} \end{cases} $$

System matrices are a function of time:
- $A(t) \in \mathbb{R}^{n\times n}$: state matrix
- $B(t) \in \mathbb{R}^{n\times p}$: input matrix
- $C(t) \in \mathbb{R}^{m\times n}$: output matrix
- $D(t) \in \mathbb{R}^{m\times p}$: feedthrough matrix

Linear time-invariant (LTI) state-space system:
$$ \Sigma \;\begin{cases} \dfrac{dx}{dt} = Ax(t) + Bu(t) & \text{(state equation)} \\[1mm] y(t) = Cx(t) + Du(t) & \text{(output equation)} \\[1mm] x(t_0) = x_0 & \text{(initial state)} \end{cases} $$

Constant system matrices:
- $A \in \mathbb{R}^{n\times n}$: state matrix
- $B \in \mathbb{R}^{n\times p}$: input matrix
- $C \in \mathbb{R}^{m\times n}$: output matrix
- $D \in \mathbb{R}^{m\times p}$: feedthrough matrix

Linear homogeneous (LH) state-space system:
$$ \Sigma_H \;\begin{cases} \dfrac{dx}{dt} = A(t)x(t) & \text{(state equation)} \\[1mm] y(t) = C(t)x(t) & \text{(output equation)} \\[1mm] x(t_0) = x_0 & \text{(initial state)} \end{cases} $$

This is a special case of the LTV and LTI systems, obtained by setting u(t) = 0 for all values of t.

Linearization of Dynamic Systems

Objective. Find a linear dynamic system that approximates a given nonlinear system
$$ \Sigma \;\begin{cases} \dfrac{dx}{dt} = f\big( x(t), u(t) \big) & \text{(state equation)} \\[1mm] y(t) = g\big( x(t), u(t) \big) & \text{(output equation)} \\[1mm] x(t_0) = x_0 & \text{(initial state)} \end{cases} $$

Linearization about an operating point yields
$$ \tilde\Sigma \;\begin{cases} \dfrac{d\tilde x}{dt} = A\tilde x(t) + B\tilde u(t) & \text{(state equation)} \\[1mm] \tilde y(t) = C\tilde x(t) + D\tilde u(t) & \text{(output equation)} \\[1mm] \tilde x(t_0) = 0 & \text{(ZERO initial state)} \end{cases} $$

Step 1. Identification of a steady-state operating point $(\bar x, \bar u)$:
$$ \frac{dx}{dt} = 0 \;\Rightarrow\; f(\bar x, \bar u) = 0, \qquad \bar x = \begin{pmatrix} \bar x_1 \\ \bar x_2 \\ \vdots \\ \bar x_n \end{pmatrix}, \quad \bar u = \begin{pmatrix} \bar u_1 \\ \bar u_2 \\ \vdots \\ \bar u_p \end{pmatrix} $$
This gives n equations in n + p unknowns,
$$ \begin{pmatrix} 0 \\ 0 \\ \vdots \\ 0 \end{pmatrix} = \begin{pmatrix} f_1(\bar x_1, \ldots, \bar x_n, \bar u_1, \ldots, \bar u_p) \\ f_2(\bar x_1, \ldots, \bar x_n, \bar u_1, \ldots, \bar u_p) \\ \vdots \\ f_n(\bar x_1, \ldots, \bar x_n, \bar u_1, \ldots, \bar u_p) \end{pmatrix} $$
hence in general an infinite number of solutions $(\bar x, \bar u)$.

Step 2. Introduction of deviation variables:
$$ \tilde x(t) := x(t) - \bar x, \qquad \tilde u(t) := u(t) - \bar u, \qquad \tilde y(t) := y(t) - g(\bar x, \bar u) $$

Remark:
$$ \frac{d\tilde x}{dt} = \frac{d(x - \bar x)}{dt} = \frac{dx}{dt} $$

Step 3. Expansion of the nonlinear state equation in terms of a truncated Taylor series. Solution:
$$ \frac{d\tilde x}{dt} = \underbrace{\begin{pmatrix} \frac{\partial f_1}{\partial x_1} & \frac{\partial f_1}{\partial x_2} & \cdots & \frac{\partial f_1}{\partial x_n} \\ \vdots & \vdots & \ddots & \vdots \\ \frac{\partial f_n}{\partial x_1} & \frac{\partial f_n}{\partial x_2} & \cdots & \frac{\partial f_n}{\partial x_n} \end{pmatrix}}_{A} \begin{pmatrix} \tilde x_1 \\ \vdots \\ \tilde x_n \end{pmatrix} + \underbrace{\begin{pmatrix} \frac{\partial f_1}{\partial u_1} & \frac{\partial f_1}{\partial u_2} & \cdots & \frac{\partial f_1}{\partial u_p} \\ \vdots & \vdots & \ddots & \vdots \\ \frac{\partial f_n}{\partial u_1} & \frac{\partial f_n}{\partial u_2} & \cdots & \frac{\partial f_n}{\partial u_p} \end{pmatrix}}_{B} \begin{pmatrix} \tilde u_1 \\ \vdots \\ \tilde u_p \end{pmatrix} $$
or $\dfrac{d\tilde x}{dt}(t) = A\tilde x(t) + B\tilde u(t)$, where every partial derivative is evaluated at the operating point:
$$ \frac{\partial f_i}{\partial x_j} := \left. \frac{\partial f_i}{\partial x_j} \right|_{(\bar x, \bar u)}, \qquad \frac{\partial f_i}{\partial u_k} := \left. \frac{\partial f_i}{\partial u_k} \right|_{(\bar x, \bar u)} $$

Step 4. Expansion of the nonlinear output equation in terms of a truncated Taylor series. Solution:
$$ \tilde y = \underbrace{\begin{pmatrix} \frac{\partial g_1}{\partial x_1} & \frac{\partial g_1}{\partial x_2} & \cdots & \frac{\partial g_1}{\partial x_n} \\ \vdots & \vdots & \ddots & \vdots \\ \frac{\partial g_m}{\partial x_1} & \frac{\partial g_m}{\partial x_2} & \cdots & \frac{\partial g_m}{\partial x_n} \end{pmatrix}}_{C} \begin{pmatrix} \tilde x_1 \\ \vdots \\ \tilde x_n \end{pmatrix} + \underbrace{\begin{pmatrix} \frac{\partial g_1}{\partial u_1} & \frac{\partial g_1}{\partial u_2} & \cdots & \frac{\partial g_1}{\partial u_p} \\ \vdots & \vdots & \ddots & \vdots \\ \frac{\partial g_m}{\partial u_1} & \frac{\partial g_m}{\partial u_2} & \cdots & \frac{\partial g_m}{\partial u_p} \end{pmatrix}}_{D} \begin{pmatrix} \tilde u_1 \\ \vdots \\ \tilde u_p \end{pmatrix} $$
or $\tilde y(t) = C\tilde x(t) + D\tilde u(t)$, where every partial derivative is evaluated at the operating point:
$$ \frac{\partial g_i}{\partial x_j} := \left. \frac{\partial g_i}{\partial x_j} \right|_{(\bar x, \bar u)}, \qquad \frac{\partial g_i}{\partial u_k} := \left. \frac{\partial g_i}{\partial u_k} \right|_{(\bar x, \bar u)} $$

Example of linearization

Nonlinear system:
$$ \begin{cases} \dot x_1 = -2\, x_1(t)\, x_2(t) + 32\, u(t) \\ \dot x_2 = x_1(t) - x_2^2(t) - 4\, u(t) \end{cases} $$
about the steady-state point $\bar x^T = \begin{bmatrix} 8 & 2 \end{bmatrix}$ and $\bar u = 1$.

Solution:
$$ A = \begin{pmatrix} -4 & -16 \\ 1 & -4 \end{pmatrix}, \quad B = \begin{pmatrix} 32 \\ -2 \end{pmatrix}, \quad C = \begin{pmatrix} -2 & 0 \\ 1 & 0 \end{pmatrix}, \quad D = \begin{pmatrix} -18 \\ 0 \end{pmatrix} $$
with deviation variables
$$ \tilde x = \begin{pmatrix} \tilde x_1 \\ \tilde x_2 \end{pmatrix} = \begin{pmatrix} x_1 - 8 \\ x_2 - 2 \end{pmatrix}, \qquad \tilde u = u - 1, \qquad \tilde y = \begin{pmatrix} \tilde y_1 \\ \tilde y_2 \end{pmatrix} = \begin{pmatrix} y_1 - 17 \\ y_2 - 8 \end{pmatrix} $$

Graphical Representation of LTI Systems

Integrator 1: indefinite-integral operator, a block that maps $\dot x(t)$ into $x(t)$:
$$ \int \dot x(t)\, dt = x(t) $$

Integrator 2: definite-integral operator, a block that adds the initial condition at a summing junction:
$$ \int_{t_0}^{t} \dot x(\tau)\, d\tau = x(t) - x_0 $$

(Block-diagram figure: an integrator block with input $\dot x(t)$, output $x(t)$, and the initial condition $x_0$ added at a summing junction.)

Mathematical representation:
$$ \dot x(t) = Ax(t) + Bu(t), \qquad y(t) = Cx(t) + Du(t), \qquad x(t_0) = x_0 $$

Graphical representation: (block-diagram figure: u enters through B, a summing junction with the feedback path through A drives an integrator producing x(t); the output y(t) is formed from Cx(t) plus the feedthrough Du(t).)

The SISO LTI System

Single-input/single-output: $u(t) \in \mathbb{R}$ (p = 1), $y(t) \in \mathbb{R}$ (m = 1)
$$ \dot x(t) = Ax(t) + b\, u(t), \qquad y(t) = c^T x(t) + d\, u(t) $$
where
$$ b = \begin{pmatrix} b_1 \\ b_2 \\ \vdots \\ b_n \end{pmatrix} \in \mathbb{R}^n, \qquad c := \begin{pmatrix} c_1 \\ c_2 \\ \vdots \\ c_n \end{pmatrix} \in \mathbb{R}^n, \qquad d \in \mathbb{R} $$

LTI Systems in Parallel

First System:
$$ \dot w = A_1 w + B_1 u, \qquad y_1 = C_1 w + D_1 u $$
Second System:
$$ \dot z = A_2 z + B_2 u, \qquad y_2 = C_2 z + D_2 u $$
Output combination: $y = y_1 + y_2$

Overall output equation:
$$ y = y_1 + y_2 = C_1 w + C_2 z + D_1 u + D_2 u = \underbrace{\begin{bmatrix} C_1 & C_2 \end{bmatrix}}_{C} \begin{bmatrix} w \\ z \end{bmatrix} + \underbrace{(D_1 + D_2)}_{D}\, u $$
Overall state equation:
$$ \begin{bmatrix} \dot w \\ \dot z \end{bmatrix} = \underbrace{\begin{bmatrix} A_1 & 0 \\ 0 & A_2 \end{bmatrix}}_{A} \begin{bmatrix} w \\ z \end{bmatrix} + \underbrace{\begin{bmatrix} B_1 \\ B_2 \end{bmatrix}}_{B}\, u $$

(Block-diagram figure: the standard state-space diagram with matrices A, B, C, and feedthrough $D_1 + D_2$.)

LTI Systems in Series

(Block-diagram figure: the output $y_1$ of the first system drives the input of the second system.)

First System:
$$ \dot w = A_1 w + B_1 u, \qquad y_1 = C_1 w + D_1 u $$
Second System:
$$ \dot z = A_2 z + B_2 y_1, \qquad y = C_2 z + D_2 y_1 $$

Overall output equation:
$$ y = C_2 z + D_2 C_1 w + D_2 D_1 u = \begin{bmatrix} D_2 C_1 & C_2 \end{bmatrix} \begin{bmatrix} w \\ z \end{bmatrix} + (D_2 D_1)\, u = Cx + Du $$
Overall state equation:
$$ \begin{bmatrix} \dot w \\ \dot z \end{bmatrix} = \begin{bmatrix} A_1 & 0 \\ B_2 C_1 & A_2 \end{bmatrix} \begin{bmatrix} w \\ z \end{bmatrix} + \begin{bmatrix} B_1 \\ B_2 D_1 \end{bmatrix} u, \qquad \dot x = Ax + Bu $$
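The series-interconnection formulas can be validated numerically: the transfer function of the stacked system must equal the product $G_2(s) G_1(s)$ at any test point. A minimal sketch with two hypothetical first-order systems:

```python
import numpy as np

def tf(A, B, C, D, s):
    """Evaluate G(s) = C (sI - A)^{-1} B + D at a complex point s."""
    return C @ np.linalg.inv(s * np.eye(A.shape[0]) - A) @ B + D

A1, B1, C1, D1 = (np.array([[-1.0]]), np.array([[1.0]]),
                  np.array([[2.0]]), np.array([[0.5]]))
A2, B2, C2, D2 = (np.array([[-3.0]]), np.array([[1.0]]),
                  np.array([[1.0]]), np.array([[0.0]]))

# Series interconnection: the second system is driven by y1
A = np.block([[A1, np.zeros((1, 1))], [B2 @ C1, A2]])
B = np.vstack([B1, B2 @ D1])
C = np.hstack([D2 @ C1, C2])
D = D2 @ D1

s = 2.0 + 1.0j
G_series = tf(A, B, C, D, s)
G_product = tf(A2, B2, C2, D2, s) @ tf(A1, B1, C1, D1, s)
```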

LTI Systems in Negative Feedback Configuration

Process:
$$ \dot x_p = A_p x_p + B_p u, \qquad y = C_p x_p + D_p u $$
Controller:
$$ \dot x_c = A_c x_c + B_c e, \qquad u = C_c x_c + D_c e $$
Feedback error: $e = r - y$

Well-posed system: the inverse matrix $(I + D_p D_c)^{-1}$ exists.

Overall output equation:
$$ y = C_p x_p + D_p \big[ C_c x_c + D_c (r - y) \big] = C_p x_p + D_p C_c x_c + D_p D_c r - D_p D_c y $$
$$ (I + D_p D_c)\, y = C_p x_p + D_p C_c x_c + D_p D_c r $$
Given that the system is well-posed, it is possible to premultiply by $(I + D_p D_c)^{-1}$:
$$ y = \begin{bmatrix} (I + D_p D_c)^{-1} C_p & (I + D_p D_c)^{-1} D_p C_c \end{bmatrix} \begin{bmatrix} x_p \\ x_c \end{bmatrix} + (I + D_p D_c)^{-1} D_p D_c\, r = Cx + Dr $$

Overall state equation:
$$ \begin{bmatrix} \dot x_p \\ \dot x_c \end{bmatrix} = \begin{bmatrix} A_p - B_p D_c (I + D_p D_c)^{-1} C_p & B_p C_c - B_p D_c (I + D_p D_c)^{-1} D_p C_c \\ -B_c (I + D_p D_c)^{-1} C_p & A_c - B_c (I + D_p D_c)^{-1} D_p C_c \end{bmatrix} \begin{bmatrix} x_p \\ x_c \end{bmatrix} $$
$$ \qquad + \begin{bmatrix} B_p D_c - B_p D_c (I + D_p D_c)^{-1} D_p D_c \\ B_c - B_c (I + D_p D_c)^{-1} D_p D_c \end{bmatrix} r = Ax + Br $$

(Block-diagram figure: the standard state-space diagram with matrices A, B, C, D and input r.)

Contents

- The State Transition Map for LTI systems
- Derivation of the Variation-of-Parameters Formula
- The State Transition Matrix
- Similar LTI systems
- Response Modes of LTI systems

V – Response of LTI Systems

The State Transition Map

Linear time-invariant state-space system:
$$ \dot x(t) = Ax(t) + Bu(t), \qquad y(t) = Cx(t) + Du(t), \qquad x(t_0) = x_0 $$
with $A \in \mathbb{R}^{n\times n}$ (state matrix), $B \in \mathbb{R}^{n\times p}$ (input matrix), $C \in \mathbb{R}^{m\times n}$ (output matrix), $D \in \mathbb{R}^{m\times p}$ (feedthrough matrix), and $x \in \mathbb{R}^n$ (state), $u \in \mathbb{R}^p$ (input), $y \in \mathbb{R}^m$ (output).

Solution to the differential equation for any input function u(t):
$$ x(t) = e^{A(t - t_0)} x_0 + \int_{t_0}^{t} e^{A(t - \tau)} B\, u(\tau)\, d\tau $$

- This relationship is known as the variation-of-parameters formula in the classical differential-equations literature
- Known as the State Transition Map in the modern literature
- u(t) causes a transition from state $x(t_0)$ to state $x(t)$

Solution to the overall LTI system:
$$ y(t) = C e^{A(t - t_0)} x_0 + \int_{t_0}^{t} C e^{A(t - \tau)} B\, u(\tau)\, d\tau + D\, u(t) $$

Known as the Output Map in the modern literature.
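The variation-of-parameters formula can be checked against a direct numerical integration of the state equation. A minimal sketch for a constant input u(t) = 1 (matrices, grids, and step counts are illustrative choices); the convolution integral is evaluated with the trapezoid rule and compared with a Runge-Kutta simulation:

```python
import numpy as np

def expm_series(M, terms=60):
    """e^M via its truncated power series (illustrative sketch)."""
    out = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
x0 = np.array([[1.0], [0.0]])
t_final = 1.0

# VOP: x(t) = e^{At} x0 + integral of e^{A(t - tau)} B u(tau) dtau, u = 1
taus = np.linspace(0.0, t_final, 2001)
cols = np.hstack([expm_series(A * (t_final - tau)) @ B for tau in taus])
dt = taus[1] - taus[0]
integral = (cols[:, [0]] + cols[:, [-1]]) / 2 + cols[:, 1:-1].sum(axis=1, keepdims=True)
x_vop = expm_series(A * t_final) @ x0 + integral * dt

# RK4 simulation of dx/dt = Ax + B (same constant input)
def rhs(x):
    return A @ x + B

x = x0.copy()
h = t_final / 2000
for _ in range(2000):
    k1 = rhs(x); k2 = rhs(x + h / 2 * k1)
    k3 = rhs(x + h / 2 * k2); k4 = rhs(x + h * k3)
    x = x + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
```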

Derivation of the VOP Formula

Based on the strategy of using an integrating factor. Given the state equation
$$ \dot x(t) = Ax(t) + Bu(t), \qquad x(t_0) = x_0 $$
pre-multiply both sides by $e^{-At}$ (the integrating factor):
$$ e^{-At} \frac{dx}{dt} - e^{-At} A x(t) = e^{-At} B u(t) $$
Use the property
$$ \frac{d}{dt}\big( e^{-At} x(t) \big) = e^{-At} \frac{dx}{dt} - e^{-At} A x(t) $$
and integrate both sides:
$$ \int_{t_0,\, x_0}^{t,\, x(t)} d\big( e^{-A\tau} x(\tau) \big) = \int_{t_0}^{t} e^{-A\tau} B u(\tau)\, d\tau $$
$$ e^{-At} x(t) - e^{-At_0} x_0 = \int_{t_0}^{t} e^{-A\tau} B u(\tau)\, d\tau $$
Pre-multiplying by $e^{At}$ yields
$$ x(t) = e^{A(t - t_0)} x_0 + \int_{t_0}^{t} e^{A(t - \tau)} B\, u(\tau)\, d\tau $$

An expression for the matrix exponential function $e^{At}$ is necessary to proceed.

The State Transition Matrix

The state transition matrix for an LTI system is defined as follows:
$$ \Phi(t, t_0) := e^{A(t - t_0)} $$

Properties:
$$ \Phi(t, t) = I, \qquad [\Phi(t, t_0)]^{-1} = \Phi(t_0, t) $$

State transition map and response map:
$$ x(t) = \Phi(t, t_0)\, x_0 + \int_{t_0}^{t} \Phi(t, \tau)\, B\, u(\tau)\, d\tau $$
$$ y(t) = C\, \Phi(t, t_0)\, x_0 + \int_{t_0}^{t} C\, \Phi(t, \tau)\, B\, u(\tau)\, d\tau + D\, u(t) $$

These expressions are also valid for LTV systems, but with a different formula for $\Phi(t, t_0)$.

Leibnitz's Rule

Leibnitz's rule for differentiation of integral functions:
$$ \frac{d}{dt} \int_{f(t)}^{g(t)} h(t, \tau)\, d\tau = \int_{f(t)}^{g(t)} \frac{\partial h(t, \tau)}{\partial t}\, d\tau + h\big( t, g(t) \big)\frac{dg(t)}{dt} - h\big( t, f(t) \big)\frac{df(t)}{dt} $$

Application to the VOP formula: differentiation of the VOP expression should yield a derivative of the state that satisfies the state equation.

Derivative of the VOP formula:
$$ \frac{d}{dt} x(t) = \frac{d}{dt}\Big[ e^{A(t - t_0)} x_0 \Big] + \frac{d}{dt}\Big[ \int_{t_0}^{t} e^{A(t - \tau)} B u(\tau)\, d\tau \Big] $$

Applying Leibnitz's rule to the second term:
$$ \dot x(t) = A e^{A(t - t_0)} x_0 + \int_{t_0}^{t} A e^{A(t - \tau)} B u(\tau)\, d\tau + e^{A(t - t)} B u(t) $$
$$ = A \Big( e^{A(t - t_0)} x_0 + \int_{t_0}^{t} e^{A(t - \tau)} B u(\tau)\, d\tau \Big) + B u(t) = A x(t) + B u(t) $$
hence the VOP formula is a solution to the state equation.

Similarity of LTI Systems

Consider the LTI system
$$ \dot x(t) = Ax(t) + Bu(t), \qquad y(t) = Cx(t) + Du(t) $$
and the "change of basis" transformation
$$ x(t) = H z(t) \;\Leftrightarrow\; z(t) = H^{-1} x(t) $$
where $H \in \mathbb{R}^{n \times n}$ is nonsingular.

Similar LTI system:
$$ \dot z(t) = H^{-1} A H\, z(t) + H^{-1} B\, u(t), \qquad y(t) = C H\, z(t) + D\, u(t) $$

Select H as a modal matrix, so that $H^{-1} A H = M$ is in canonical form. For example
$$ H^{-1} A H = \begin{pmatrix} D & 0 & 0 & 0 \\ 0 & J & 0 & 0 \\ 0 & 0 & K & 0 \\ 0 & 0 & 0 & Q \end{pmatrix} = \mathrm{diag}(D, J, K, Q) := M $$
so that
$$ e^{At} = e^{H M H^{-1} t} = H e^{Mt} H^{-1} = H\, e^{\mathrm{diag}(D, J, K, Q)\, t}\, H^{-1} = H \begin{pmatrix} e^{Dt} & 0 & 0 & 0 \\ 0 & e^{Jt} & 0 & 0 \\ 0 & 0 & e^{Kt} & 0 \\ 0 & 0 & 0 & e^{Qt} \end{pmatrix} H^{-1} $$
where $e^{At} = H e^{Mt} H^{-1}$ is "easy" to calculate.

VOP formula in terms of the modal matrix:
$$ x(t) = H e^{H^{-1}AH (t - t_0)} H^{-1} x_0 + \int_{t_0}^{t} H e^{H^{-1}AH (t - \tau)} H^{-1} B u(\tau)\, d\tau $$
or
$$ x(t) = H e^{M(t - t_0)} H^{-1} x_0 + \int_{t_0}^{t} H e^{M(t - \tau)} H^{-1} B u(\tau)\, d\tau $$

Output map:
$$ y(t) = C H e^{M(t - t_0)} H^{-1} x_0 + \int_{t_0}^{t} C H e^{M(t - \tau)} H^{-1} B u(\tau)\, d\tau + D u(t) $$

Summary. Procedure for finding closed-form expressions for the state transition map and the response map for a given input u(t):
- Find a modal matrix H for the given state matrix A
- Transform the state matrix A via the similarity transformation $M = H^{-1} A H$; the resulting similar matrix M is in canonical form
- Find closed-form expressions for $e^{Mt}$ in terms of the eigenvalues, eigenvectors, and principal vectors of A
- Substitute the closed-form expressions for $e^{Mt}$ and the known function u(t) into the expressions below and integrate:
$$ x(t) = H e^{M(t - t_0)} H^{-1} x_0 + \int_{t_0}^{t} H e^{M(t - \tau)} H^{-1} B u(\tau)\, d\tau $$
$$ y(t) = C H e^{M(t - t_0)} H^{-1} x_0 + \int_{t_0}^{t} C H e^{M(t - \tau)} H^{-1} B u(\tau)\, d\tau + D u(t) $$

Example of a matrix exponential in canonical form. With $M = \mathrm{diag}\big( \lambda_1, \lambda_2, \lambda_3, J_4, K_5 \big)$, where $J_4$ is the $4\times4$ Jordan block for $\lambda_4$ and $K_5$ the $2\times2$ square block for $\alpha \pm j\beta$:
$$ e^{Mt} = \mathrm{diag}\big( e^{\lambda_1 t},\; e^{\lambda_2 t},\; e^{\lambda_3 t},\; e^{J_4 t},\; e^{K_5 t} \big) $$
with
$$ e^{J_4 t} = e^{\lambda_4 t} \begin{pmatrix} 1 & t & \frac{t^2}{2!} & \frac{t^3}{3!} \\ 0 & 1 & t & \frac{t^2}{2!} \\ 0 & 0 & 1 & t \\ 0 & 0 & 0 & 1 \end{pmatrix}, \qquad e^{K_5 t} = e^{\alpha t} \begin{pmatrix} \cos\beta t & \sin\beta t \\ -\sin\beta t & \cos\beta t \end{pmatrix} $$

Response Modes
$$ x(t) = H e^{M(t - t_0)} H^{-1} x_0 + \int_{t_0}^{t} H e^{M(t - \tau)} H^{-1} B u(\tau)\, d\tau $$
$$ y(t) = C H e^{M(t - t_0)} H^{-1} x_0 + \int_{t_0}^{t} C H e^{M(t - \tau)} H^{-1} B u(\tau)\, d\tau + D u(t) $$

Definition. The response modes of an LTI system with state matrix A are the elements of the matrix exponential
$$ e^{Mt} = e^{\mathrm{diag}(D, J, K, Q)\, t}, \qquad \text{where } M = H^{-1} A H $$

Modes introduced by D: $\; e^{\lambda_1 t},\; e^{\lambda_2 t},\; e^{\lambda_3 t}$

Modes introduced by J: $\; e^{\lambda_4 t},\; t e^{\lambda_4 t},\; \frac{1}{2!} t^2 e^{\lambda_4 t},\; \frac{1}{3!} t^3 e^{\lambda_4 t}$

Modes introduced by K: $\; e^{\alpha_5 t}\cos\beta_5 t,\; e^{\alpha_5 t}\sin\beta_5 t$

Modes introduced by Q: $\; e^{\alpha_6 t}\cos\beta_6 t,\; e^{\alpha_6 t}\sin\beta_6 t,\; t e^{\alpha_6 t}\cos\beta_6 t,\; t e^{\alpha_6 t}\sin\beta_6 t,\; \frac{1}{2!} t^2 e^{\alpha_6 t}\cos\beta_6 t,\; \frac{1}{2!} t^2 e^{\alpha_6 t}\sin\beta_6 t$

Generalization.

An eigenvalue
$$ \lambda(A) = \mathrm{Re}\{\lambda(A)\} + j\, \mathrm{Im}\{\lambda(A)\} $$
produces response modes of the forms
$$ \frac{1}{k!} t^k e^{\mathrm{Re}\{\lambda(A)\}\, t} \cos\big( \mathrm{Im}\{\lambda(A)\}\, t \big) \qquad \text{and} \qquad \frac{1}{k!} t^k e^{\mathrm{Re}\{\lambda(A)\}\, t} \sin\big( \mathrm{Im}\{\lambda(A)\}\, t \big) $$

Remark:
- If $\mathrm{Re}\{\lambda(A)\} < 0$ then the response modes tend to zero
- This is the key concept utilized to study the stability of LTI systems

Contents

- The Laplace Transform
- The transfer-function representation of LTI systems
  - Input-output relationship
  - Structure of the transfer function
- The resolvent matrix
- Poles of transfer functions
- Minimality of transfer functions
- Connection of LTI systems
  - Systems in parallel
  - Systems in series
  - Systems in feedback

VI - Matrix Transfer Functions

The Laplace Transform

Preliminaries:
- $t \in \mathbb{R}$: time variable
- $s = \sigma + j\omega \in \mathbb{C}$: complex variable, with $\sigma, \omega \in \mathbb{R}$

The complex exponential function:
$$ e^{-st} = e^{-\sigma t}\cos(\omega t) - j\, e^{-\sigma t}\sin(\omega t) $$

Definition. One-sided Laplace transform of a function of time:
$$ \bar f(s) := \int_0^{\infty} e^{-st} f(t)\, dt $$
- f(t): function of time
- $\bar f(s)$: Laplace transform of f(t)
- Laplace-transform pair: $\big( f(t), \bar f(s) \big)$

Notation: $\bar f(s) = \mathcal{L}[f(t)]$; simplified notation: $f(s) = \mathcal{L}[f(t)]$

Fundamental property of the Laplace transform:
$$ \mathcal{L}[\dot f(t)] = s\, \mathcal{L}[f(t)] - f(t)\big|_{t=0} = s f(s) - f(t)\big|_{t=0} $$

Application to the solution of linear differential equations:
- Differential equation: $\dot x(t) = Ax(t) + Bu(t)$
- Laplace transform: $s\, x(s) - x(t)|_{t=0} = A x(s) + B u(s)$
- Algebraic equation: $(sI - A)\, x(s) = B u(s) + x(t)|_{t=0}$
- Solution: $x(s) = (sI - A)^{-1} B u(s) + (sI - A)^{-1} x(t)|_{t=0}$

The differential equation is transformed into an algebraic equation.

Transfer Function Representations

Time-domain representation of an LTI dynamic system:
$$ \dot x(t) = A x(t) + B u(t), \quad x(t)|_{t=0} = x_0, \qquad y(t) = C x(t) + D u(t) $$
- Zero initial conditions: $x(t)|_{t=0} = 0$
- Laplace transform of the dynamic system equations:
$$ (sI - A)\, x(s) = B u(s) + 0, \qquad y(s) = C x(s) + D u(s) $$
- Then
$$ y(s) = \big[ C (sI - A)^{-1} B + D \big]\, u(s) $$

Definition. Matrix transfer function between the input and the output:
$$ G(s) := C (sI - A)^{-1} B + D $$

Laplace-domain representation of the LTI system: $y(s) = G(s)\, u(s)$

Input-Output Relationship
$$ y(s) = G(s)\, u(s) $$

Effect of the Laplace transformation: the state-space system is reduced to the input-output representation $u(s) \to G(s) \to y(s)$.

Relevant Dimensions

- Size of time-domain signals: $x \in \mathbb{R}^n$, $u \in \mathbb{R}^p$, $y \in \mathbb{R}^m$
- Size of system matrices: $A \in \mathbb{R}^{n\times n}$, $B \in \mathbb{R}^{n\times p}$, $C \in \mathbb{R}^{m\times n}$, $D \in \mathbb{R}^{m\times p}$

The matrix transfer function G(s) has m × p elements:
$$ G(s) = C(sI - A)^{-1} B + D = \begin{pmatrix} G_{11}(s) & G_{12}(s) & \cdots & G_{1p}(s) \\ G_{21}(s) & G_{22}(s) & \cdots & G_{2p}(s) \\ \vdots & \vdots & \ddots & \vdots \\ G_{m1}(s) & G_{m2}(s) & \cdots & G_{mp}(s) \end{pmatrix} $$

Structure of G(s)

- Definition: $G(s) := C (sI - A)^{-1} B + D$
- Matrix inverse relationship:
$$ (sI - A)^{-1} = \frac{1}{\det(sI - A)}\, \mathrm{adj}(sI - A) $$
- Detailed structure:
$$ G(s) = C (sI - A)^{-1} B + D = \frac{C\, \mathrm{adj}(sI - A)\, B + \det(sI - A)\, D}{\det(sI - A)} = \frac{N(s)}{\det(sI - A)} $$
- $\det(sI - A) = s^n + \alpha_{n-1} s^{n-1} + \cdots + \alpha_1 s + \alpha_0$: polynomial (size 1 × 1)
- N(s): matrix (size m × p)
$$ N(s) = \begin{pmatrix} n_{11}(s) & n_{12}(s) & \cdots & n_{1p}(s) \\ n_{21}(s) & n_{22}(s) & \cdots & n_{2p}(s) \\ \vdots & \vdots & \ddots & \vdots \\ n_{m1}(s) & n_{m2}(s) & \cdots & n_{mp}(s) \end{pmatrix} $$

Final expression:
$$ G(s) = \frac{1}{\det(sI - A)} \begin{pmatrix} n_{11}(s) & \cdots & n_{1p}(s) \\ \vdots & \ddots & \vdots \\ n_{m1}(s) & \cdots & n_{mp}(s) \end{pmatrix} = \begin{pmatrix} \frac{n_{11}(s)}{\det(sI - A)} & \cdots & \frac{n_{1p}(s)}{\det(sI - A)} \\ \vdots & \ddots & \vdots \\ \frac{n_{m1}(s)}{\det(sI - A)} & \cdots & \frac{n_{mp}(s)}{\det(sI - A)} \end{pmatrix} = \begin{pmatrix} G_{11}(s) & \cdots & G_{1p}(s) \\ \vdots & \ddots & \vdots \\ G_{m1}(s) & \cdots & G_{mp}(s) \end{pmatrix} $$

Example

Dynamic system:
$$ \dot x_1(t) = x_2(t) + 3\, u_1(t) + u_2(t), \qquad \dot x_2(t) = -2 x_1(t) - 3 x_2(t) + u_1(t) $$

Input-output representation:
$$ y_1(s) = \left( \frac{3s + 10}{s^2 + 3s + 2} \right) u_1(s) + \left( \frac{s^2 + 4s + 5}{s^2 + 3s + 2} \right) u_2(s) $$
$$ y_2(s) = \left( \frac{s - 6}{s^2 + 3s + 2} \right) u_1(s) + \left( \frac{-2}{s^2 + 3s + 2} \right) u_2(s) $$

Matrix transfer function:
$$ G(s) = \begin{pmatrix} \dfrac{3s + 10}{s^2 + 3s + 2} & \dfrac{s^2 + 4s + 5}{s^2 + 3s + 2} \\[2mm] \dfrac{s - 6}{s^2 + 3s + 2} & \dfrac{-2}{s^2 + 3s + 2} \end{pmatrix} $$
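The example's G(s) can be reproduced from $G(s) = C(sI - A)^{-1}B + D$ at a test point. In this sketch the output matrices C = I and D = [[0, 1], [0, 0]] are assumptions chosen so that the state-space model matches the stated transfer function (the slide does not list the output equations):

```python
import numpy as np

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[3.0, 1.0], [1.0, 0.0]])
C = np.eye(2)                              # assumed output matrix
D = np.array([[0.0, 1.0], [0.0, 0.0]])     # assumed feedthrough matrix

s = 1.0 + 2.0j
G = C @ np.linalg.inv(s * np.eye(2) - A) @ B + D
den = s**2 + 3 * s + 2
G_expected = np.array([[(3 * s + 10) / den, (s**2 + 4 * s + 5) / den],
                       [(s - 6) / den, -2 / den]])
```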

The Resolvent

- The resolvent of a matrix $A \in \mathbb{R}^{n \times n}$ is defined as $(sI - A)^{-1}$
- Standard expression:
$$ (sI - A)^{-1} = \frac{\mathrm{adj}(sI - A)}{\det(sI - A)} = \frac{\mathrm{adj}(sI - A)}{s^n + \alpha_{n-1} s^{n-1} + \cdots + \alpha_1 s + \alpha_0} $$
- The resolvent formula (useful for calculations and analytical studies), with $\alpha_n := 1$:
$$ (sI - A)^{-1} = \frac{\displaystyle\sum_{j=0}^{n-1} s^j \sum_{i=j+1}^{n} \alpha_i A^{i-j-1}}{\det(sI - A)} $$
- Inverse Laplace transform of the resolvent: $\mathcal{L}^{-1}\big[ (sI - A)^{-1} \big] = e^{At}$ and $\mathcal{L}\big[ e^{At} \big] = (sI - A)^{-1}$
- Laplace transform pair: $\big( e^{At},\; (sI - A)^{-1} \big)$

Scalar Transfer Functions

Single-input/single-output system:
$$ \dot x(t) = A x(t) + b\, u(t), \qquad y(t) = c^T x(t) + d\, u(t) $$
with $A \in \mathbb{R}^{n \times n}$, $b \in \mathbb{R}^n$, $c \in \mathbb{R}^n$, $d \in \mathbb{R}$, $x \in \mathbb{R}^n$, $u \in \mathbb{R}$, $y \in \mathbb{R}$.

Scalar transfer function:
$$ y(s) = G(s)\, u(s), \qquad G(s) = c^T (sI - A)^{-1} b + d = \frac{n(s)}{\det(sI - A)} $$

Poles of a Transfer Function

Scalar transfer function G(s). Definition. A complex number s = p is a pole of G(s) if
$$ |G(p)| \to \infty $$

Matrix transfer function
$$ G(s) = \begin{pmatrix} G_{11}(s) & \cdots & G_{1p}(s) \\ \vdots & \ddots & \vdots \\ G_{m1}(s) & \cdots & G_{mp}(s) \end{pmatrix} $$
Definition. A complex number s = p is a pole of G(s) if $|G_{ij}(p)| \to \infty$ for at least one element (i, j) of G(s).

Examples of Poles

Scalar transfer function:
$$ G(s) = \frac{3}{(s+1)(s+2)} \qquad \text{Poles: } p_1 = -1,\; p_2 = -2 $$

Scalar transfer function:
$$ G(s) = \frac{3s + 6}{s^2 + 3s + 2} = \frac{3(s+2)}{(s+1)(s+2)} \qquad \text{Poles: } p_1 = -1 $$

Matrix transfer function:
$$ G(s) = \begin{pmatrix} \dfrac{3}{s} & \dfrac{3}{s+5} \\[2mm] \dfrac{3}{s-3} & \dfrac{3s+6}{s^2+3s+2} \end{pmatrix} \qquad \text{Poles: } p_1 = 0,\; p_2 = -5,\; p_3 = 3,\; p_4 = -1 $$

Minimality of Transfer Functions

Scalar transfer function $G(s) = \dfrac{n(s)}{d(s)}$

Definition. The scalar transfer function G(s) is said to be minimal if the numerator and denominator polynomials have no common factors.

Examples:
$$ G(s) = \frac{3}{(s+1)(s+2)} \qquad \text{MINIMAL} $$
$$ G(s) = \frac{3s + 6}{s^2 + 3s + 2} = \frac{3(s+2)}{(s+1)(s+2)} \qquad \text{NONMINIMAL} $$

Matrix transfer function
$$ G(s) = \begin{pmatrix} G_{11}(s) & \cdots & G_{1p}(s) \\ \vdots & \ddots & \vdots \\ G_{m1}(s) & \cdots & G_{mp}(s) \end{pmatrix} $$

Definition. The matrix transfer function G(s) is said to be minimal if each of the mp SISO transfer functions $G_{ij}(s)$ is minimal, i = 1, 2, …, m, j = 1, 2, …, p.

Example
$$ G(s) = \begin{pmatrix} \dfrac{3}{s} & \dfrac{3}{s+5} \\[2mm] \dfrac{3}{s-3} & \dfrac{3s+6}{s^2+3s+2} \end{pmatrix} = \begin{pmatrix} \dfrac{3}{s} & \dfrac{3}{s+5} \\[2mm] \dfrac{3}{s-3} & \dfrac{3(s+2)}{(s+1)(s+2)} \end{pmatrix} \qquad \text{NONMINIMAL} $$
$$ G(s) = \begin{pmatrix} \dfrac{3}{s} & \dfrac{3}{s+5} \\[2mm] \dfrac{3}{s-3} & \dfrac{3}{s+1} \end{pmatrix} \qquad \text{MINIMAL} $$

LTI Systems in Parallel

First System: $y_1(s) = G_1(s)\, u(s)$
Second System: $y_2(s) = G_2(s)\, u(s)$
Output combination: $y(s) = y_1(s) + y_2(s)$, hence
$$ y(s) = \big( G_1(s) + G_2(s) \big)\, u(s) $$

LTI Systems in Series

First System: $y_1(s) = G_1(s)\, u(s)$
Second System: $y(s) = G_2(s)\, y_1(s)$
Overall:
$$ y(s) = G_2(s)\, G_1(s)\, u(s) $$
Note the order: $G_2$ acts on the output of $G_1$, and matrix transfer functions do not commute in general.

LTI Systems in Negative Feedback Configuration

Signals: r(s) set point, d(s) disturbance, n(s) measurement noise.

Process: $y(s) = G_p(s)\, u(s)$
Controller: $u(s) = G_c(s)\, e(s)$
Feedback error: $e(s) = r(s) - y(s)$

Well-posed system: the inverse matrix $\big( I + G_p(s) G_c(s) \big)^{-1}$ exists as $s \to \infty$.

Remark: $G_p(\infty) = D_p$ and $G_c(\infty) = D_c$, so well-posedness amounts to the existence of
$$ \big( I + G_p(\infty) G_c(\infty) \big)^{-1} = \big( I + D_p D_c \big)^{-1} $$

Transfer functions between the set point and all the loop signals:
$$ y(s) = \big[ I + G_p(s) G_c(s) \big]^{-1} G_p(s) G_c(s)\, r(s) = G_{yr}(s)\, r(s) $$
$$ e(s) = \big[ I + G_p(s) G_c(s) \big]^{-1} r(s) = G_{er}(s)\, r(s) $$
$$ u(s) = G_c(s) \big[ I + G_p(s) G_c(s) \big]^{-1} r(s) = G_{ur}(s)\, r(s) $$

- Closed-loop matrix transfer functions: $G_{yr}(s)$, $G_{er}(s)$, $G_{ur}(s)$
- Classical definitions:
  - Sensitivity function: $S(s) = \big[ I + G_p(s) G_c(s) \big]^{-1}$
  - Complementary sensitivity: $T(s) = \big[ I + G_p(s) G_c(s) \big]^{-1} G_p(s) G_c(s)$
- Closed-loop transfer functions: $y(s) = T(s)\, r(s)$, $e(s) = S(s)\, r(s)$, $u(s) = G_c(s) S(s)\, r(s)$
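The definitions imply $S(s) + T(s) = I$ at every frequency, which makes a convenient numerical check. A minimal sketch evaluated pointwise at $s = j\omega$; the process and controller below are hypothetical examples:

```python
import numpy as np

def Gp(s):
    return np.array([[1.0 / (s + 1.0)]])   # hypothetical first-order process

def Gc(s):
    return np.array([[2.0 + 1.0 / s]])     # hypothetical PI-type controller

s = 0.5j
L = Gp(s) @ Gc(s)                   # loop transfer function Gp*Gc
S = np.linalg.inv(np.eye(1) + L)    # sensitivity: e = S r
T = S @ L                           # complementary sensitivity: y = T r
```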

Contents

- Controllability and reachability
- Observability
- Stability
  - Stability in the sense of Lyapunov
  - External stability
  - Assessing stability

VII - Analysis of Dynamic Systems

Controllability and Reachability

Intuitive notions of controllability and reachability. Dynamic system:
$$ \dot x(t) = Ax(t) + Bu(t), \qquad y(t) = Cx(t) + Du(t) $$

General controllability concept: controllability asks whether an input can drive an arbitrary initial state to $x(t_1) = 0$; reachability asks whether an input can drive the zero initial state $x(t_0) = 0$ to an arbitrary final state.

Controllability. A dynamic state-space system is said to be controllable in the finite interval $[t_0, t_1]$ if for any initial state $x(t_0)$ there exists an input u(t) such that $x(t_1) = 0$.
- Controllable: $\big( t_0, x(t_0) \big) \xrightarrow{\;u(t),\; t \in [t_0, t_1]\;} \big( t_1, 0 \big)$
- Notation: the pair (A, B) is controllable

Reachability. A dynamic state-space system is said to be reachable in the finite interval $[t_0, t_1]$ if for any final state $x(t_1)$ there exists an input u(t) that transfers the initial state $x(t_0) = 0$ to the final state $x(t_1)$.
- Reachable: $\big( t_0, 0 \big) \xrightarrow{\;u(t),\; t \in [t_0, t_1]\;} \big( t_1, x(t_1) \big)$
- Notation: the pair (A, B) is reachable

THEOREM. Kalman's controllability test. The LTI dynamic system with state matrix A and input matrix B is controllable if and only if $\mathrm{rank}(Q_c) = n$, where
$$ Q_c := \begin{bmatrix} B & AB & A^2 B & \cdots & A^{n-1} B \end{bmatrix} \in \mathbb{R}^{n \times np} $$

THEOREM. The LTI dynamic system with state matrix A and input matrix B is reachable if and only if it is controllable:

Controllable LTI system ⟺ Reachable LTI system
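Kalman's rank test is a one-liner once the controllability matrix is assembled. A minimal numpy sketch, applied to the example system that follows (A symmetric, B = [1, 1]^T):

```python
import numpy as np

def controllability_matrix(A, B):
    """Build Qc = [B, AB, ..., A^{n-1}B]."""
    n = A.shape[0]
    blocks, M = [], B
    for _ in range(n):
        blocks.append(M)
        M = A @ M
    return np.hstack(blocks)

A = np.array([[-2.0, 1.0], [1.0, -2.0]])
B = np.array([[1.0], [1.0]])
Qc = controllability_matrix(A, B)
rank = np.linalg.matrix_rank(Qc)
```

Here AB = -B, so the columns of Qc are linearly dependent and the rank test fails.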

Example
$$ \Sigma \;\begin{cases} \dot x(t) = \begin{pmatrix} -2 & 1 \\ 1 & -2 \end{pmatrix} x(t) + \begin{pmatrix} 1 \\ 1 \end{pmatrix} u(t), & x(0) = x_0 = \begin{pmatrix} x_{o1} \\ x_{o2} \end{pmatrix} \\[1mm] y(t) = \begin{pmatrix} 0 & 1 \end{pmatrix} x(t) \end{cases} $$

Here n = 2 and $A^{n-1} B = AB = \begin{pmatrix} -1 \\ -1 \end{pmatrix}$, so
$$ Q_c = \begin{bmatrix} B & AB \end{bmatrix} = \begin{pmatrix} 1 & -1 \\ 1 & -1 \end{pmatrix}, \qquad \mathrm{rank}(Q_c) = 1 < n $$

- The LTI system (A, B) is not controllable.
- The LTI system (A, B) is not reachable.

Observability

Intuitive notion of observability: find $x(t_0)$ from measured signals.

Observability. A dynamic state-space system is said to be observable in the finite interval $[t_0, t_1]$ if any initial state $x(t_0)$ can be reconstructed from knowledge of the output function y(t) and the input function u(t) over the entire finite interval.
- Observable: $\big( y(t),\, u(t),\; t \in [t_0, t_1] \big) \xrightarrow{\;\text{reconstruction}\;} x(t_0)$
- Notation: the pair (A, C) is observable

Observability for LTI systems: it is conventional to set u(t) = 0, $t \in [t_0, t_1]$.
- Observable: $\big( y(t),\, u(t) = 0,\; t \in [t_0, t_1] \big) \xrightarrow{\;\text{reconstruction}\;} x(t_0)$
- Notation: the pair (A, C) is observable
7

THEOREM. Kalman's observability test. The LTI dynamic system with state matrix A and output matrix C is observable if and only if $\mathrm{rank}(Q_o) = n$, where
$$ Q_o := \begin{pmatrix} C \\ CA \\ CA^2 \\ \vdots \\ CA^{n-1} \end{pmatrix} \in \mathbb{R}^{nm \times n} $$

Remark: $\mathrm{rank}(Q_o) = \mathrm{rank}(Q_o^T)$, where
$$ Q_o^T = \begin{bmatrix} C^T & A^T C^T & (A^2)^T C^T & \cdots & (A^{n-1})^T C^T \end{bmatrix} \in \mathbb{R}^{n \times nm} $$
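The observability test mirrors the controllability test with the blocks stacked vertically. A minimal sketch, applied to the example system that follows:

```python
import numpy as np

def observability_matrix(A, C):
    """Stack Qo = [C; CA; ...; CA^{n-1}]."""
    n = A.shape[0]
    blocks, M = [], C
    for _ in range(n):
        blocks.append(M)
        M = M @ A
    return np.vstack(blocks)

A = np.array([[-2.0, 1.0], [1.0, -2.0]])
C = np.array([[0.0, 1.0]])
Qo = observability_matrix(A, C)
rank = np.linalg.matrix_rank(Qo)
```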

Example
$$ \Sigma \;\begin{cases} \dot x(t) = \begin{pmatrix} -2 & 1 \\ 1 & -2 \end{pmatrix} x(t) + \begin{pmatrix} 1 \\ 1 \end{pmatrix} u(t), & x(0) = x_0 = \begin{pmatrix} x_{o1} \\ x_{o2} \end{pmatrix} \\[1mm] y(t) = \begin{pmatrix} 0 & 1 \end{pmatrix} x(t) \end{cases} $$

Here n = 2 and
$$ C A^{n-1} = CA = \begin{pmatrix} 0 & 1 \end{pmatrix} \begin{pmatrix} -2 & 1 \\ 1 & -2 \end{pmatrix} = \begin{pmatrix} 1 & -2 \end{pmatrix} $$
$$ Q_o = \begin{pmatrix} C \\ CA \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ 1 & -2 \end{pmatrix}, \qquad \mathrm{rank}(Q_o) = 2 = n $$

The LTI system (A, C) is observable.

Stability of LTI Systems

Two perspectives for analyzing the stability of the LTI system
$$ \dot x(t) = Ax(t) + Bu(t), \qquad y(t) = Cx(t) + Du(t), \qquad x(0) = x_0 $$

(1) State stability of the ZERO-INPUT system
$$ \dot x(t) = Ax(t), \qquad x(0) = x_0 $$
with respect to deviations from the equilibrium point x = 0.

(2) Bounded-input/bounded-output (BIBO) stability of the ZERO-INITIAL-STATE system
$$ \dot x(t) = Ax(t) + Bu(t), \qquad y(t) = Cx(t) + Du(t), \qquad x(0) = 0 $$

State Stability

State stability is concerned with the homogeneous (unforced) system
$$ \dot x(t) = Ax(t), \qquad x(0) = x_0 $$
Issue: is $\lim_{t \to \infty} x(t)$ bounded for all initial conditions $x_0 \ne 0$?

State trajectories (variation-of-parameters formula):
$$ x(t) = e^{At} x_0 $$

Response modes (elements of $e^{At}$):
- $\lambda = \sigma \in \mathbb{R}$: $\; e^{\sigma t},\; t e^{\sigma t},\; \frac{1}{2!} t^2 e^{\sigma t},\; \frac{1}{3!} t^3 e^{\sigma t}, \ldots$
- $\lambda = \sigma + j\omega \in \mathbb{C}$: $\; e^{\sigma t}\cos\omega t,\; e^{\sigma t}\sin\omega t,\; t e^{\sigma t}\cos\omega t,\; t e^{\sigma t}\sin\omega t, \ldots$
- $\lambda = j\omega \in \mathbb{C}$: $\; \cos\omega t,\; \sin\omega t,\; t\cos\omega t,\; t\sin\omega t, \ldots$
- $\lambda = 0$: $\; 1,\; t,\; t^2, \ldots$

Observation #1: eigenvalues with positive real parts ($\lambda_i(A) \in$ ORHP) produce response modes that are unbounded (they tend to infinity):
$$ \mathrm{Re}\big( \lambda_i(A) \big) > 0 \;\Rightarrow\; \Big\{ e^{\lambda_i t},\; t e^{\lambda_i t},\; \tfrac{1}{2!} t^2 e^{\lambda_i t},\; \tfrac{1}{3!} t^3 e^{\lambda_i t}, \ldots \Big\} \to \big\{ \infty, \infty, \infty, \infty, \ldots \big\} $$

Observation #2: eigenvalues with negative real parts ($\lambda_i(A) \in$ OLHP) produce response modes that are bounded (they tend to zero):
$$ \mathrm{Re}\big( \lambda_i(A) \big) < 0 \;\Rightarrow\; \Big\{ e^{\lambda_i t},\; t e^{\lambda_i t},\; \tfrac{1}{2!} t^2 e^{\lambda_i t},\; \tfrac{1}{3!} t^3 e^{\lambda_i t}, \ldots \Big\} \to \big\{ 0, 0, 0, 0, \ldots \big\} $$

Observation #3: Eigenvalues with zero real parts ( \lambda_i(A) on the imaginary axis ) produce response modes that are:

Bounded (values range between -1 and +1) if the geometric multiplicity is equal to the algebraic multiplicity:

    Re(\lambda_i(A)) = 0 and g(\lambda_i) = a(\lambda_i)  \Rightarrow  \{ \cos\omega t,\; \sin\omega t \} \subset \{ [-1, +1],\; [-1, +1] \}
    \lambda = 0 and g(\lambda_i) = a(\lambda_i)  \Rightarrow  \{ 1 \} \subset \{ 1 \}

Unbounded (values tend to infinity) if the geometric multiplicity is less than the algebraic multiplicity:

    Re(\lambda_i(A)) = 0 and g(\lambda_i) < a(\lambda_i)  \Rightarrow  \{ t\cos\omega t,\; t\sin\omega t,\; t^2\cos\omega t,\; t^2\sin\omega t,\; \ldots \} \to \{ \infty, \infty, \infty, \infty, \ldots \}
    \lambda = 0 and g(\lambda_i) < a(\lambda_i)  \Rightarrow  \{ t,\; t^2,\; \ldots \} \to \{ \infty, \infty, \ldots \}
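The role of the multiplicities can be seen numerically. A sketch contrasting two hypothetical matrices with the same double eigenvalue \lambda = 0 but different geometric multiplicities:

```python
import numpy as np
from scipy.linalg import expm

# Both matrices have eigenvalue 0 with algebraic multiplicity a = 2
A_full = np.zeros((2, 2))                # g = 2 = a: modes {1}, bounded
A_defective = np.array([[0.0, 1.0],
                        [0.0, 0.0]])     # g = 1 < a: modes {1, t}, unbounded

t = 100.0
print(expm(A_full * t))        # the identity for every t: trajectories stay bounded
print(expm(A_defective * t))   # [[1, t], [0, 1]]: the off-diagonal entry grows like t
```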

THEOREM. State Stability

Asymptotic stability: all the eigenvalues of A have negative real parts.

    Asymptotic stability  \Leftrightarrow  Re(\lambda_i(A)) < 0,    i = 1, 2, \ldots, n

Instability: one or more eigenvalues of A have positive real parts.

    Instability  \Leftarrow  Re(\lambda_i(A)) > 0    for some i = 1, 2, \ldots, n

Conditional stability (marginal stability, or stability in the sense of Lyapunov): all eigenvalues have either negative or zero real parts, and every eigenvalue with a zero real part has geometric multiplicity equal to its algebraic multiplicity.

    Conditional stability  \Leftrightarrow  Re(\lambda_i(A)) \le 0    for all i = 1, 2, \ldots, n,
                                            and g(\lambda_j) = a(\lambda_j) for every j such that Re(\lambda_j(A)) = 0
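The theorem translates directly into an eigenvalue test. A minimal sketch (the function name and tolerance handling are illustrative, not from the source); the multiplicity comparison uses the rank of (A - \lambda I):

```python
import numpy as np

def classify_state_stability(A, tol=1e-9):
    """Classify the state stability of xdot = A x from the eigenvalues of A."""
    A = np.asarray(A, dtype=complex)
    n = A.shape[0]
    eigs = np.linalg.eigvals(A)
    if np.all(eigs.real < -tol):
        return "asymptotically stable"
    if np.any(eigs.real > tol):
        return "unstable"
    # Remaining case: eigenvalues on the imaginary axis, none in the ORHP.
    for lam in eigs[np.abs(eigs.real) <= tol]:
        alg = np.sum(np.abs(eigs - lam) <= 1e-6)            # algebraic multiplicity
        geo = n - np.linalg.matrix_rank(A - lam * np.eye(n)) # geometric multiplicity
        if geo < alg:
            return "unstable"
    return "conditionally stable (stable i.s.L.)"

print(classify_state_stability([[-2, 0], [0, -1]]))   # asymptotically stable
print(classify_state_stability([[0, 1], [0, 0]]))     # unstable (defective eigenvalue at 0)
print(classify_state_stability([[0, 1], [-1, 0]]))    # conditionally stable (eigenvalues ±j)
```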

[Figure: the complex \lambda-plane, with the open left-half plane labeled "Asymptotic Stability Region" and the open right-half plane labeled "Instability Region".]

BIBO Stability

BIBO stability is concerned with the forced LTI system with zero initial state (known as the "zero-state LTI system"):

    \dot{x}(t) = A x(t) + B u(t),    y(t) = C x(t) + D u(t),    x(0) = 0

Issue: Given any bounded input function u(t),

    |u_1(t)| < \infty,\; |u_2(t)| < \infty,\; \ldots,\; |u_p(t)| < \infty,

will the output of the zero-state LTI system also be bounded?

    |y_1(t)| < \infty,\; |y_2(t)| < \infty,\; \ldots,\; |y_m(t)| < \infty ?

Input-output representation

    y(s) = G(s) u(s),    where G(s) = C (sI - A)^{-1} B + D

Eigenvalue s = \lambda of the state matrix A:    det(sI - A) = 0
Total number of eigenvalues of A:    n

Poles of the matrix transfer function

    G(s) = C \frac{\mathrm{adj}(sI - A)}{\det(sI - A)} B + D

Pole s = p of G(s):    G(p) \to \infty

All eigenvalues of A are "candidate" poles because they make the denominator of G(s) equal to zero. Only the eigenvalues that do not cancel out with factors of the numerator are poles of G(s).

Total number of poles of G(s):    n_p \le n
Minimal transfer function:    n_p = n

Example

    A = \begin{bmatrix} -2 & 0 \\ 0 & -1 \end{bmatrix},    B = \begin{bmatrix} 0 \\ 1 \end{bmatrix},    C = \begin{bmatrix} 1 & 1 \end{bmatrix},    D = 0

Eigenvalues:    \Lambda(A) = \{ \lambda_1, \lambda_2 \} = \{ -1, -2 \}

    G(s) = C (sI - A)^{-1} B + D = \begin{bmatrix} 1 & 1 \end{bmatrix} \begin{bmatrix} s+2 & 0 \\ 0 & s+1 \end{bmatrix}^{-1} \begin{bmatrix} 0 \\ 1 \end{bmatrix} + 0 = \frac{1}{s+1}

Poles:    P(G(s)) = \{ p_1 \} = \{ -1 \}

Eigenvalue \lambda_1 = -1 is a pole, but \lambda_2 = -2 is not a pole.
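The cancellation in this example can be reproduced numerically. SciPy's `ss2tf` returns the unreduced numerator and denominator, so the common factor (s + 2) appears in both:

```python
import numpy as np
from scipy import signal

A = np.array([[-2.0, 0.0], [0.0, -1.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 1.0]])
D = np.array([[0.0]])

print(np.linalg.eigvals(A))    # candidate poles: the eigenvalues -2 and -1

num, den = signal.ss2tf(A, B, C, D)
print(np.roots(den))           # roots of det(sI - A): [-2., -1.]
print(np.roots(num[0]))        # numerator root at -2 cancels that candidate

# After cancelling the common factor (s + 2), G(s) = 1/(s + 1): one pole at -1.
```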

THEOREM. BIBO Stability

BIBO stability: all the poles of G(s) have negative real parts.

    BIBO stability  \Leftrightarrow  Re(p_i(G(s))) < 0,    i = 1, 2, \ldots, n_p

BIBO instability: one or more poles of G(s) have positive real parts.

    BIBO instability  \Leftarrow  Re(p_i(G(s))) > 0    for some i = 1, 2, \ldots, n_p

[Figure: the complex p-plane, with the open left-half plane labeled "BIBO Stability Region" and the open right-half plane labeled "Instability Region".]

Example

    A = \begin{bmatrix} -2 & 0 \\ 0 & -1 \end{bmatrix},    B = \begin{bmatrix} 0 \\ 1 \end{bmatrix},    C = \begin{bmatrix} 1 & 1 \end{bmatrix},    D = 0

Eigenvalues:    \Lambda(A) = \{ \lambda_1, \lambda_2 \} = \{ -1, -2 \}

    G(s) = C (sI - A)^{-1} B + D = \begin{bmatrix} 1 & 1 \end{bmatrix} \begin{bmatrix} s+2 & 0 \\ 0 & s+1 \end{bmatrix}^{-1} \begin{bmatrix} 0 \\ 1 \end{bmatrix} + 0 = \frac{1}{s+1}

Poles:    P(G(s)) = \{ p_1 \} = \{ -1 \}

Eigenvalue \lambda_1 = -1 is a pole, but \lambda_2 = -2 is not a pole. The LTI system is BIBO stable because its only pole has a negative real part.

Assessing Stability

Stability can be determined numerically by:

- Calculation of eigenvalues (stability i.s.L.)
- Calculation of poles (external stability)
- Challenge: the computation of eigenvalues/poles can be numerically ill-conditioned for systems of large order.

Methods that do not require the calculation of eigenvalues and poles:

- Routh-array criterion
- Lyapunov's first and second methods for stability analysis
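The Routh-array criterion can be sketched in a few lines of pure Python. This minimal implementation (the function name is illustrative) omits the classical special cases: a zero first-column entry or an all-zero row is simply treated as "not strictly stable":

```python
def routh_stable(coeffs, eps=1e-12):
    """Routh-array test: True iff all roots of the polynomial (coefficients
    listed highest power first) lie in the open left-half plane.
    Sketch only: zero-pivot / zero-row special cases are not resolved."""
    c = [float(x) for x in coeffs]
    if c[0] < 0:                      # normalize the leading coefficient to be positive
        c = [-x for x in c]
    width = (len(c) + 1) // 2
    rows = [c[0::2] + [0.0] * (width - len(c[0::2])),
            c[1::2] + [0.0] * (width - len(c[1::2]))]
    for _ in range(len(c) - 2):       # build the remaining rows of the array
        prev, last = rows[-2], rows[-1]
        if abs(last[0]) < eps:
            return False              # zero pivot: not strictly Hurwitz
        rows.append([(last[0] * prev[j + 1] - prev[0] * last[j + 1]) / last[0]
                     for j in range(width - 1)] + [0.0])
    return all(row[0] > 0 for row in rows)   # no sign change in the first column

print(routh_stable([1, 3, 2]))    # s^2 + 3s + 2 = (s+1)(s+2): True
print(routh_stable([1, -1, 1]))   # roots in the right-half plane: False
```

No eigenvalue computation is needed, which is the point of the criterion for large-order systems.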

Regulation via State Feedback

LTI system:               \dot{x}(t) = A x(t) + B u(t)
Linear state feedback:    u(t) = -K x(t)
Feedback gain:            K \in \mathbb{R}^{m \times n}
Control objective:        x(t) \to 0    (regulation)

Closed-loop system:

    \dot{x}(t) = A x(t) + B(-K x(t)) = (A - BK) x(t)
    \dot{x}(t) = \bar{A} x(t)

Closed-loop state matrix:    \bar{A} = A - BK

VIII - Feedback Control

Criteria for specification of the feedback gain K

- Criterion No. 1: \bar{A} = A - BK is a stable matrix
- Criterion No. 2: meet desired time-domain specs, such as speed of decay to zero

Common design methods

Trial-and-error design
- Propose a candidate K
- Determine if \bar{A} is stable
- Assess time-domain performance via simulation

Eigenvalue-placement method
- Find a gain K that places the eigenvalues of \bar{A} where desired
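The trial-and-error loop amounts to an eigenvalue check on \bar{A} = A - BK. A sketch with a hypothetical open-loop-unstable system and one candidate gain:

```python
import numpy as np

A = np.array([[0.0, 1.0], [2.0, -1.0]])   # open-loop eigenvalues {1, -2}: unstable
B = np.array([[0.0], [1.0]])
K = np.array([[6.0, 2.0]])                # candidate gain, proposed by trial and error

Abar = A - B @ K                          # closed-loop state matrix
eigs = np.linalg.eigvals(Abar)
print(eigs)                               # all real parts negative -> stable closed loop
assert np.all(eigs.real < 0)
```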

State Regulation via Eigenvalue Placement

LTI system:                  \dot{x}(t) = A x(t) + B u(t)
Linear state feedback:       u(t) = -K x(t)
Feedback gain:               K \in \mathbb{R}^{m \times n}
Closed-loop state matrix:    \bar{A} = A - BK

Eigenvalue-Placement Objective

Select K such that \Lambda(A - BK) = \{ \lambda_1^o, \lambda_2^o, \lambda_3^o, \ldots, \lambda_n^o \}, where \{ \lambda_1^o, \lambda_2^o, \lambda_3^o, \ldots, \lambda_n^o \} is a user-specified eigenvalue set.

Ackermann's Formula

Step 1: Construct the controllability matrix

    Q_c := \begin{bmatrix} b & Ab & A^2 b & \cdots & A^{n-1} b \end{bmatrix} \in \mathbb{R}^{n \times n}

Step 2: Specify the set of desired closed-loop eigenvalues

    \{ \lambda_1^o, \lambda_2^o, \lambda_3^o, \ldots, \lambda_n^o \}

Step 3: Find the coefficients \{ \alpha_0^o, \alpha_1^o, \alpha_2^o, \ldots, \alpha_{n-1}^o \} of the desired characteristic polynomial

    \det(\lambda I - (A - b k^T)) = (\lambda - \lambda_1^o)(\lambda - \lambda_2^o)(\lambda - \lambda_3^o) \cdots (\lambda - \lambda_n^o)
                                  = \lambda^n + \alpha_{n-1}^o \lambda^{n-1} + \cdots + \alpha_3^o \lambda^3 + \alpha_2^o \lambda^2 + \alpha_1^o \lambda + \alpha_0^o

Step 4: Define the polynomial matrix

    P(A) = A^n + \alpha_{n-1}^o A^{n-1} + \cdots + \alpha_3^o A^3 + \alpha_2^o A^2 + \alpha_1^o A + \alpha_0^o I \in \mathbb{R}^{n \times n}

Step 5: Find the feedback gain using Ackermann's formula

    k^T = \mathrm{LastRow}( Q_c^{-1} P(A) )

or, equivalently,

    k^T = [\, 0 \;\; 0 \;\; \cdots \;\; 1 \,] \, Q_c^{-1} P(A)

ACKERMANN'S FORMULA FOR EIGENVALUE PLACEMENT
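The five steps can be assembled into a short function. A sketch for the single-input case (the matrices in the usage example are hypothetical, not from the slides):

```python
import numpy as np

def ackermann(A, b, desired_eigs):
    """Ackermann's formula for single-input eigenvalue placement (sketch).
    Returns k^T such that eig(A - b k^T) = desired_eigs, assuming (A, b)
    is controllable so that Qc is invertible."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float).reshape(-1, 1)
    n = A.shape[0]
    # Step 1: controllability matrix Qc = [b  Ab  ...  A^{n-1} b]
    Qc = np.hstack([np.linalg.matrix_power(A, i) @ b for i in range(n)])
    # Steps 2-3: coefficients of the desired characteristic polynomial
    alpha = np.poly(desired_eigs)           # [1, a_{n-1}, ..., a_0]
    # Step 4: polynomial matrix P(A) = A^n + a_{n-1} A^{n-1} + ... + a_0 I
    P = sum(alpha[i] * np.linalg.matrix_power(A, n - i) for i in range(n + 1))
    # Step 5: k^T = [0 ... 0 1] Qc^{-1} P(A)
    e_last = np.zeros((1, n))
    e_last[0, -1] = 1.0
    return e_last @ np.linalg.inv(Qc) @ P

A = np.array([[0.0, 1.0], [2.0, -1.0]])
b = np.array([0.0, 1.0])
kT = ackermann(A, b, [-2.0, -3.0])
print(np.linalg.eigvals(A - b.reshape(-1, 1) @ kT))   # -> {-2, -3}
```

For large n, `scipy.signal.place_poles` is the numerically robust alternative, for the reasons discussed in the remarks below.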

Remarks

The concept of controllability plays a crucial role: the inverse matrix Q_c^{-1} \in \mathbb{R}^{n \times n} exists because rank(Q_c) = n.

Extension to multiple-input systems (p > 1)

If the system is controllable, then one input can be used to place all the eigenvalues. The remaining inputs may be set to zero through the control law:

    u(t) = -K x(t) = -\begin{bmatrix} k^T \\ 0 \\ 0 \end{bmatrix} x(t)

Numerical problems with Ackermann's formula

Ackermann's formula produces unreliable numerical results for large systems (typically when n > 10) because it has a propensity for propagating and magnifying the small truncation errors made by digital computers.

Tracking via State Feedback

LTI system:            \dot{x}(t) = A x(t) + B u(t)
State feedback:        u(t) = -K x(t) - K_r r(t)
Feedback gain:         K \in \mathbb{R}^{m \times n}
Feedforward gain:      K_r \in \mathbb{R}^{m \times n}
Control objective:     x(t) \to r(t)    (tracking)

Closed-loop system:

    \dot{x}(t) = (A - BK) x(t) - B K_r r(t)
    \dot{x}(t) = \bar{A} x(t) - B K_r r(t)

Closed-loop state matrix:    \bar{A} = A - BK

[Block diagram: the reference r passes through the feedforward gain K_r, and the state x is fed back through K; both enter the summing junction that forms u, which drives the plant \dot{x} = A x + B u.]

Design Criteria

Specification of the feedback gain K (use regulation results)

- Criterion No. 1: \bar{A} = A - BK is a stable matrix
- Criterion No. 2: meet desired time-domain specs

Specification of the feedforward gain K_r

- Criterion No. 3: zero steady-state offset, where

    steady-state offset := \lim_{t \to \infty} [\, r(t) - x(t) \,]

VIII - Feedback Control © Oscar D. Crisalle 2005

Class of admissible set points r(t) \in \mathbb{R}^n

- Bounded (does not become infinitely large)
- Constant final value:    \lim_{t \to \infty} r(t) = r_s

Necessary conditions for existence of zero offset for arbitrary admissible set points

(i)  The state matrix A \in \mathbb{R}^{n \times n} is nonsingular
(ii) The input matrix B \in \mathbb{R}^{n \times p} is full row-rank (i.e., p \ge n)

Implications of the necessary conditions

(i)  \Rightarrow there exists A^{-1}
(ii) \Rightarrow there exists the right inverse

    B^{-R} := \begin{cases} B^{-1} & \text{if } p = n \\ B^T (B B^T)^{-1} & \text{if } p > n \end{cases}

OFFSET-FREE TRACKING THEOREM. If the necessary conditions (i) and (ii) are satisfied, and the feedback gain matrix K is specified such that the eigenvalues of \bar{A} = A - BK lie strictly in the open left-half plane, then the state-feedback control law

    u(t) = -K x(t) - B^{-R} (A - BK) r(t)

ensures that, for every admissible set point,

    \lim_{t \to \infty} [\, r(t) - x(t) \,] = 0

Closed-loop representation

[Block diagram: same structure as before, with the feedforward gain K_r realized as B^{-R}(A - BK).]

Remark. Offset-free behavior is obtained through the design

    K_r = B^{-R} (A - BK)
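The design K_r = B^{-R}(A - BK) can be verified numerically. A sketch for the square case p = n (so B^{-R} = B^{-1}), with hypothetical A, B, and stabilizing K:

```python
import numpy as np

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.eye(2)                             # square input matrix: B^{-R} = B^{-1}
K = np.eye(2)                             # stabilizing gain: eig(A - BK) = {-2, -3}

Abar = A - B @ K                          # closed-loop state matrix
B_R = np.linalg.inv(B)                    # right inverse of B (square case)
Kr = B_R @ (A - B @ K)                    # offset-free feedforward gain

rs = np.array([1.0, -0.5])                # constant set point
# Closed loop: xdot = Abar x - B Kr r; at steady state 0 = Abar xs - B Kr rs
xs = np.linalg.solve(Abar, B @ Kr @ rs)
print(xs)                                 # equals rs: zero steady-state offset
assert np.allclose(xs, rs)
```

This is exactly the cancellation used in the proof below: B B^{-R} = I, so the steady-state equation collapses to (A - BK) x_s = (A - BK) r_s.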

PROOF OF THE THEOREM.

Since \bar{A} = A - BK makes the closed loop asymptotically stable, a final steady state x_s is attained, characterized by dx/dt = 0:

    x(t) \to x_s,\; r(t) \to r_s    \Rightarrow    u(t) \to -K x_s - B^{-R} (A - BK) r_s

Closed-loop system:

    \dot{x}(t) = A x(t) + B [\, -K x(t) - B^{-R} (A - BK) r(t) \,]

At steady state:

    0 = A x_s + B [\, -K x_s - B^{-R} (A - BK) r_s \,]
    0 = (A - BK) x_s - B B^{-R} (A - BK) r_s
    0 = (A - BK) x_s - I (A - BK) r_s

Since (A - BK) is invertible (the stabilizing gain K ensures that (A - BK) has no eigenvalue at zero):

    (A - BK) x_s = (A - BK) r_s
    x_s = (A - BK)^{-1} (A - BK) r_s
    x_s = r_s

Hence r_s - x_s = 0, and using the definitions of the steady-state values r_s and x_s it follows that

    \lim_{t \to \infty} [\, r(t) - x(t) \,] = 0