LabVIEW Control Design and Simulation Module: Algorithm References
This document contains an index of Control Design VIs that use specific
algorithms to calculate the output of the VI. These algorithms include ones
derived by National Instruments as well as published algorithms. Table 1-1
lists the VIs of the LabVIEW Control Design and Simulation Module that use
special algorithms to calculate the VI outputs, organized by palette. The
first column contains the name of the palette, the second column contains
the name of the VI, and the third column provides the name of the algorithm
and links to the sections that contain the algorithm derivation.
The derivations of the algorithms are located in the sections that follow
Table 1-1. Each section also contains references to the published algorithms
that National Instruments used to implement the Control Design VIs.
This document also contains the derivations of the Lyapunov and Sylvester
equations and the Riccati equations.
Table 1-1. Control Design and Simulation Module VIs and their algorithm references, by palette.

Dynamic Characteristics
    CD Covariance Response VI: Lyapunov Equation Solver; Discrete Lyapunov Equation Solver; Staircase
    CD DC Gain VI
    CD Norm VI
    CD Parametric Time Response VI
    CD Pole-Zero Map VI: Staircase; Transmission Zeros (State-Space)
    CD Root-Locus VI

Frequency Response
    CD All Margins VI
    CD Bandwidth VI
    CD Bode VI
    CD Gain and Phase Margins VI
    CD Nichols VI
    CD Nyquist VI
    CD Singular Values VI

Model Construction
    CD Construct Special Model VI: Convert State-Space Model to Transfer Function Model; First-Order-Hold; Pade Approximation of Delay for a SISO Transfer Function Model; Staircase; Tustin's Transformations; Zero-Order-Hold
    CD Construct Special Model VI (PID Academic)
    CD Construct Special Model VI (PID Parallel)
    CD Construct Special Model VI (PID Series)
    CD Draw Zero-Pole-Gain Equation VI

Model Conversion
    CD Convert Continuous to Discrete VI
    CD Convert Discrete to Continuous VI
    CD Convert Discrete to Discrete VI
    CD Convert to State-Space Model VI: Staircase
    CD Convert to Transfer Function Model VI
    CD Convert to Zero-Pole-Gain Model VI: Staircase

Model Interconnections
    CD Append VI: Staircase
    CD Feedback VI
    CD Parallel VI
    CD Series VI

Model Reduction
    CD Minimal Realization VI: Staircase; Canonical State-Space Realization; Minimal State-Space Realization
    CD Model Order Reduction VI

State Feedback Design
    CD Ackermann VI: Ackermann
    CD Kalman Gain VI
    CD Linear Quadratic Regulator VI
    CD Pole Placement VI
    CD State Estimator VI
    CD State-Space Controller VI

State-Space Model Analysis
    CD Balance State-Space Model (Grammians) VI: Lyapunov Equation Solver; Discrete Lyapunov Equation Solver; Grammians; Balancing
    CD Controllability Staircase VI: Staircase
    CD Grammians VI
    CD Observability Staircase VI: Staircase

Time Response
    CD Impulse Response VI: Linear Response; Staircase; Zero-Order-Hold
    CD Initial Response VI: Linear Response; Staircase; Zero-Order-Hold
    CD Linear Simulation VI: First-Order-Hold; Linear Simulation; Staircase; Zero-Order-Hold
    CD Step Response VI: Linear Response; Staircase; Zero-Order-Hold

Stochastic Systems
    CD Correlated Gaussian Random Noise VI

Implementation
    CD Current Observer VI: Corrector / Predictor
    CD Predictive Observer VI
    CD Discrete Recursive Kalman Corrector / Predictor VI

Implementation (Simulation Module)
    CD Continuous Observer VI: Continuous Observer (Point-by-Point)
    CD Continuous Recursive Kalman Filter VI: Continuous-Time Recursive Kalman-Bucy Filter; Adjust Stochastic State-Space System with Non-Zero-Mean Noise Excitation
Pole Placement
Assume the pair (A, B) is controllable, and let K be the controller gain
that places the poles, Eig(A − BK), in the locations you specify.
Then there exists a similarity transformation T such that

    T^{-1}(A − BK)T = Λ

where Λ contains the user-defined poles along the diagonal. Rearranging
the terms, you get the following equations:

    AT − BKT = TΛ
    AT − TΛ = BKT

When K is unknown, set KT equal to G. You get the following Sylvester
equation:

    AT − TΛ = BG

Note that T must be invertible. Therefore, (Λ, G) should be observable and

    Eig(Λ) ∩ Eig(A) = ∅

The CD Pole Placement VI is implemented such that G is random but
(Λ, G) is observable.
Reference:
Varga, A. Robust Pole Assignment via Sylvester Equation Based State
Feedback Parametrization. IEEE International Symposium on Computer
Aided Control Systems Design, CACSD'2000. Anchorage, AK, 2000.
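The Sylvester-based parametrization above can be sketched numerically. This is an illustrative Python/SciPy sketch, not the VI implementation: it assumes a random G (as the VI does), solves AT − TΛ = BG with `scipy.linalg.solve_sylvester`, and returns K = GT^{-1}.

```python
import numpy as np
from scipy.linalg import solve_sylvester

def place_poles_sylvester(A, B, poles, rng=None):
    """Sketch of Sylvester-equation pole assignment: pick a random G with
    (Lambda, G) observable, solve A T - T Lambda = B G, then K = G T^-1."""
    rng = np.random.default_rng(rng)
    Lam = np.diag(poles)
    G = rng.standard_normal((B.shape[1], A.shape[0]))
    # solve_sylvester solves A X + X B = Q, so pass -Lambda as the second term
    T = solve_sylvester(A, -Lam, B @ G)
    return G @ np.linalg.inv(T)

# Double integrator, desired poles at -1 and -2
A = np.array([[0., 1.], [0., 0.]])
B = np.array([[0.], [1.]])
K = place_poles_sylvester(A, B, [-1., -2.], rng=0)
```

If T turns out singular for a particular random G, the VI (and this sketch) can simply redraw G, since almost every G with distinct diagonal Λ gives an observable pair.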
Singular Values
Consider a system defined in terms of the state-space model and the
equivalent transfer function model, shown below:

    ẋ = Ax + Bu        X(s) = (sI − A)^{-1} B U(s)
    y = Cx + Du        Y(s) = CX(s) + DU(s)

    Y(s)/U(s) = C(sI − A)^{-1}B + D
First-Order-Hold
This algorithm implements what is called the triangle-hold equivalent of a
first-order hold. The effect of this hold is to extrapolate samples and
connect them in a straight line. In discrete form, this extrapolation is
non-causal. You derive the filter that does this extrapolation as follows.
Consider the following unit impulse response of the triangle hold: a ramp
that starts at time −T, changes the sign of its slope at time 0, and levels
off at time T. As shown in the figure, the ramp has a slope of 1/T, which
you can describe in the s-domain as follows:

    slope = 1/(Ts^2)

At time −T, the ramp starts: (e^{Ts}/s^2)(1/T).
At time 0, the ramp changes the sign of the slope: −2/(Ts^2).
At time T, the response levels off: (e^{−Ts}/s^2)(1/T).

The composite transfer function becomes

    H(s) = (e^{Ts} − 2 + e^{−Ts}) / (Ts^2)
Placing the first-order hold in series with the system transfer function G(s),
you get the following:

    Y(s) = H(s)·G(s)·U(s) = [(e^{Ts} − 2 + e^{−Ts})/(Ts^2)]·G(s)·U(s)

Group the exponential terms with the input by defining

    Ū(s) = (e^{Ts} − 2 + e^{−Ts})·U(s)
The remaining factor G(s)/(Ts^2) is realized by augmenting the state with a
ramp generator:

    d/dt [x; v; w] = [A  B  0; 0  0  I/T; 0  0  0]·[x; v; w] + [0; 0; 1]·u

where u represents the unit impulse function. This model is shown in the
following figure. Writing ξ = [x; v; w] and M, B̄ for the augmented matrices:

    ξ̇(t) = Mξ(t) + B̄u
    d/dt (e^{−Mt} ξ(t)) = e^{−Mt} B̄ u
    ξ[(K+1)T] = e^{MT} ξ[KT] + ∫_{KT}^{(K+1)T} e^{M[(K+1)T − t]} B̄ u dt
Because u is a unit impulse at t = KT, the sifting property

    ∫_{KT}^{(K+1)T} f(t)·δ(t − KT) dt = f(KT),  f(t) = 0 elsewhere

evaluates the integral, and the matrix exponential expands in a Taylor
series:

    e^{MT} = I + MT + (MT)^2/2! + … = Σ_{i=0}^{∞} (MT)^i / i!
The powers of M are

    M^0 = I;   M = [A  B  0; 0  0  I/T; 0  0  0];   M^2 = [A^2  AB  B/T; 0  0  0; 0  0  0];

    M^3 = [A^3  A^2B  AB/T; 0  0  0; 0  0  0];  …;  M^i = [A^i  A^{i−1}B  A^{i−2}B/T; 0  0  0; 0  0  0],  i ≥ 2
Substituting such terms into the Taylor series expansion results in the
following:

    e^{MT} = Σ_{i=0}^{∞} (MT)^i/i! = [Σ_{i=0} (AT)^i/i!    Γ1    Γ2; 0  I  I; 0  0  I]

where

    Φ = Σ_{i=0}^{∞} (AT)^i/i! = e^{AT}
    Γ1 = Σ_{i=1}^{∞} (A^{i−1} T^i / i!)·B
    Γ2 = Σ_{i=2}^{∞} (A^{i−2} T^{i−1} / i!)·B

which gives

    e^{MT} = [Φ  Γ1  Γ2; 0  I  I; 0  0  I]
Therefore

    [x(k+1); v(k+1); w(k+1)] = [Φ  Γ1  Γ2; 0  I  I; 0  0  I]·[x(k); v(k); w(k)]

    x(k+1) = Φx(k) + Γ1·v(k) + Γ2·w(k)

Following Franklin and Powell, take

    v(k) = u(k)
    w(k) = u(k+1) − u(k)

Substituting into the state-space model results in the following:

    x(k+1) = Φx(k) + Γ1·u(k) + Γ2·[u(k+1) − u(k)]
           = Φx(k) + [Γ1 − Γ2]·u(k) + Γ2·u(k+1)

Because the k+1 input term must not remain on the right-hand side of the
equation, you need the following definition:

    ξ(k) = x(k) − Γ2·u(k)

Evaluating the previous equation at k + 1 results in the following:

    ξ(k+1) = x(k+1) − Γ2·u(k+1) = Φx(k) + (Γ1 − Γ2)·u(k)
           = Φ[x(k) − Γ2·u(k)] + Φ·Γ2·u(k) + (Γ1 − Γ2)·u(k)
           = Φξ(k) + [Γ1 − Γ2 + Φ·Γ2]·u(k)
From this expression you can recognize the equivalent system matrices for
the state dynamics:

    A_d^{FOH} = Φ,    B_d^{FOH} = Γ1 − Γ2 + Φ·Γ2
    C_d^{FOH} = C,    D_d^{FOH} = C·Γ2 + D

Notice that because the change of variables moves all k+1 terms to the
left-hand side of the system equation, the initial conditions are affected
by the input at time t = 0 in the following manner:

    ξ(0) = [I  −Γ2]·[x(0); u(0)],    ICMAT = [I  −Γ2]

where ICMAT is used to calculate the initial conditions based on the new
set of states ξ.
Reference:
Franklin, G. F., J. D. Powell, and M. L. Workman. Digital Control of
Dynamic Systems, 3rd ed., p. 204. Reading, MA: Addison-Wesley, 1997.
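The triangle-hold (FOH) matrices Φ, Γ1, Γ2 above can all be read from one matrix exponential of the augmented matrix M. The following Python/SciPy sketch (illustrative, not the VI itself) builds that exponential and forms A_d, B_d, C_d, D_d as derived:

```python
import numpy as np
from scipy.linalg import expm

def foh_discretize(A, B, C, D, T):
    n, m = B.shape
    # augmented matrix M = [[A, B, 0], [0, 0, I/T], [0, 0, 0]];
    # expm(M T) contains Phi, Gamma1, Gamma2 in its first block row
    M = np.zeros((n + 2 * m, n + 2 * m))
    M[:n, :n] = A
    M[:n, n:n + m] = B
    M[n:n + m, n + m:] = np.eye(m) / T
    E = expm(M * T)
    Phi, G1, G2 = E[:n, :n], E[:n, n:n + m], E[:n, n + m:]
    Ad = Phi
    Bd = G1 - G2 + Phi @ G2   # B_d = Gamma1 - Gamma2 + Phi Gamma2
    Cd = C
    Dd = C @ G2 + D           # D_d = C Gamma2 + D
    return Ad, Bd, Cd, Dd
```

The same construction is used by `scipy.signal.cont2discrete(..., method='foh')`, which can serve as a cross-check.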
Linear Response
Linear Response for continuous systems: After applying a zero-order-hold
conversion from continuous to discrete, the following assumptions are
made about the input profile.
In the case of an impulse response, you look at the effect of the impulse
before the system has been converted to discrete form. Therefore, in a
continuous system,

    ẋ = Ax + Bu
    y = Cx + Du
Linear Simulation
Discrete System:
x(k + 1) = Ax(k) + Bu(k)
y(k) = Cx(k) + Du(k)
which makes integration straightforward after initial conditions are
defined.
Continuous System:
The system must be discretized, and the initial conditions must be adjusted
properly depending on the discretization method.
For Zero-Order-Hold: (Ac, Bc, Cc, Dc) → (Ad, Bd, Cd, Dd). The initial
conditions are not a function of u(0), and integration is identical to the
discrete-system case.
First-Order-Hold: Based on the algorithm in the First-Order-Hold section,
the initial conditions need to be re-evaluated based on the change of
variables

    ξ(k) = x(k) − Γ2·u(k)

which you can evaluate at t = 0:

    ξ(0) = [I  −Γ2]·[x(0); u(0)]
Unit Feedback
Consider the following continuous state-space model

    ẋ = Ax + Bu
    y = Cx + Du

for unit feedback in the SISO case, as shown in the following figure.
where u = r + y′ = r + y. Solving the output equation y = Cx + D(r + y)
for y gives

    (I − D)y = Cx + Dr

or y = C̄x + D̄r, where C̄ = (I − D)^{-1}C and D̄ = (I − D)^{-1}D.
Substituting the previous expressions in the state-space model dynamics
expression gives:

    ẋ = Ax + Bu = Ax + B(r + y)

where y is a function of the states and the reference r:

    ẋ = Ax + B(r + C̄x + D̄r)

Grouping common terms for the states and the reference:

    ẋ = (A + BC̄)x + B(I + D̄)r

In this manner the equivalent A and B matrices are identified:

    Ā = A + B(I − D)^{-1}C
    B̄ = B(I + (I − D)^{-1}D)
In summary, the system G in a positive unit-feedback loop is equivalent to
the model (Ā, B̄, C̄, D̄) driven by r. A degenerate case occurs when G is a
direct feedthrough, y = u: with u = r + y this forces r = 0, so r is not an
independent input but is identical to the null vector. Because r is not
chosen but fixed, in this case you cannot calculate the unit-feedback
transfer function.
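The closed-loop matrices derived above can be sketched in a few lines. This is an illustrative Python example (not the CD Feedback VI); it assumes (I − D) is invertible, i.e., no algebraic loop:

```python
import numpy as np

def unit_feedback(A, B, C, D):
    """Close a positive unit-feedback loop u = r + y around (A, B, C, D).
    Requires (I - D) to be invertible (no algebraic loop)."""
    p = C.shape[0]
    M = np.linalg.inv(np.eye(p) - D)
    Cbar = M @ C                      # y = Cbar x + Dbar r
    Dbar = M @ D
    Abar = A + B @ Cbar               # A + B (I - D)^-1 C
    Bbar = B @ (np.eye(B.shape[1]) + Dbar)
    return Abar, Bbar, Cbar, Dbar
```

As a sanity check, G(s) = 1/(s + 2) in a positive unit-feedback loop gives G/(1 − G) = 1/(s + 1).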
Grammians
Controller Continuous: By definition, the controllability Grammian is
given by the following expressions:

    Wc(0, tf) = ∫_0^{tf} e^{Aτ} B B^T e^{A^T τ} dτ

and

    Wc ≜ lim_{tf→∞} Wc(0, tf)

The easiest way to calculate the Grammian is by pre- and post-multiplying
Wc by A and A^T and adding the two terms, as illustrated below:
    A·Wc + Wc·A^T = A·∫_0^∞ e^{Aτ} BB^T e^{A^Tτ} dτ + ∫_0^∞ e^{Aτ} BB^T e^{A^Tτ} dτ·A^T

Notice that the right-hand side of the equation can be grouped in the
following way:

    ∫_0^∞ d/dτ (e^{Aτ} BB^T e^{A^Tτ}) dτ = [e^{Aτ} BB^T e^{A^Tτ}]_0^∞ = −BB^T

Substituting the end values into the expression above and considering that
for stable systems the exponential goes to zero at infinity, you get the
following expression:

    A·Wc + Wc·A^T + B·B^T = 0
which is solved using the Lyapunov equation solver.
Observer Continuous: By definition, the observability Grammian is given by
the following expressions:

    Wo(0, tf) = ∫_0^{tf} e^{A^Tτ} C^T C e^{Aτ} dτ,    Wo ≜ lim_{tf→∞} Wo(0, tf)

Proceeding the same way gives

    A^T·Wo + Wo·A + C^T·C = 0

which also is solved using the Lyapunov equation solver.
Controller Discrete: The following expression defines the controllability
Grammian for discrete systems:

    Wc(n) = Σ_{k=0}^{n−1} A^k BB^T (A^T)^k,    Wc ≜ lim_{n→∞} Σ_{k=0}^{n−1} A^k BB^T (A^T)^k

Pre- and post-multiplying by A and A^T:

    A·Wc·A^T = Σ_{k=0}^{∞} A^{k+1} BB^T (A^T)^{k+1} = Σ_{j=1}^{∞} A^j BB^T (A^T)^j = Wc − BB^T

so

    A·Wc·A^T − Wc + BB^T = 0
Observer Discrete: similarly,

    Wo(n) = Σ_{k=0}^{n−1} (A^T)^k C^T C A^k,    Wo ≜ lim_{n→∞} Σ_{k=0}^{n−1} (A^T)^k C^T C A^k

and

    A^T·Wo·A − Wo + C^T·C = 0
The input term can be factored as BB^T = U s^2 U^T. Defining the
transformed quantities W̄c = U^T Wc U and Ā = U^T A U, the continuous
equation becomes

    Ā·W̄c + W̄c·Ā^T + s^2 = 0    (Continuous Lyapunov)

because Wc is symmetric. For Controller Discrete:

    A·Wc·A^T − Wc + U s^2 U^T = 0

which transforms to

    Ā·W̄c·Ā^T − W̄c + s^2 = 0    (Discrete Lyapunov)

Note: Wc = U·W̄c·U^T.
Reference:
Kailath, T. Linear Systems, 1st ed., p. 609. Englewood Cliffs, NJ:
Prentice-Hall, 1980.
Balancing
The CD Balance State-Space Models (Grammians) VI has only one entry
per column or row in the transformation matrix T.
The transformation matrix therefore has the form of a scaled permutation,
for example

    T = diag(σ1, σ2, σ3, σ4)

and the scaling is accepted only while the ratio σ/σmin remains within the
specified tolerance.
Reference:
Alan J. Laub, Michael T. Heath, Chris C. Paige, and Robert C. Ward.
Computation of System Balancing Transformations and Other Applications
of Simultaneous Diagonalization Algorithms. IEEE Transactions on
Automatic Control, vol. AC-32, no. 2, 1987.
Tustin's Transformations
The Model Conversion VIs use Tustin's method to convert models from
discrete to continuous and from continuous to discrete. The following
section describes the algorithm used to implement the discrete to
continuous conversion.
Discrete to Continuous
The Tustin discretization of (A, B, C, D) with sample period T is

    Φ = (I + AT/2)(I − AT/2)^{-1}

Rearranging:

    Φ(I − AT/2) = I + AT/2
    (Φ − I) = (Φ + I)(T/2)A
    A = (Φ + I)^{-1}(Φ − I)(2/T)

Note that this implies AT/2 = (Φ + I)^{-1}(Φ − I). For the input matrix,

    Γ = √T (I − AT/2)^{-1} B
    B = (1/√T)(I − AT/2)Γ = (1/√T)(I − (Φ + I)^{-1}(Φ − I))Γ

For the output matrix,

    H = √T C (I − AT/2)^{-1}
    C = (1/√T) H (I − AT/2) = (1/√T) H (I − (Φ + I)^{-1}(Φ − I))

For the feedthrough,

    J = D + C(I − AT/2)^{-1} B (T/2)
    D = J − C(I − AT/2)^{-1} B (T/2)
    D = J − (H/2)(I − (Φ + I)^{-1}(Φ − I))Γ
Reference:
Franklin, G. F., J. D. Powell, and M. L. Workman. Digital Control of
Dynamic Systems, 3rd ed., p. 200. Reading, MA: Addison-Wesley, 1997.
The continuous to discrete conversion for Tustin's method uses the
algorithms presented in Digital Control of Dynamic Systems.
A reordering of the states x̄ = Tx transforms the system matrix as
Ā = TAT^{-1}; for example, reordering the states as x̄ = (x3, x1, x2)
moves the entries of a row of A into positions such as (a23, a21, a22).
The following procedure explains how to obtain T.
where e = number of states to eliminate and n = total number of states.
Consider T initialized as the identity matrix, and a counter index i
initialized at 1.
Partition the states into a selected set x_s and an eliminated set x_e:

    y = [C_s  C_e]·[x_s; x_e] + Du
    0 = A_es·x_s + A_ee·x_e + B_e·u

Solving for x_e at steady state:

    x_e^{ss} = −A_ee^{-1}[A_es·x_s + B_e·u]

Substituting in the dynamics of the selected set of states, ẋ_s:

    ẋ_s = A_ss·x_s − A_se·A_ee^{-1}[A_es·x_s + B_e·u] + B_s·u

where now a new set of A, B, C, and D matrices is identified, as
illustrated below:

    Ā = A_ss − A_se·A_ee^{-1}·A_es,    B̄ = B_s − A_se·A_ee^{-1}·B_e
    C̄ = C_s − C_e·A_ee^{-1}·A_es,     D̄ = D − C_e·A_ee^{-1}·B_e
Reference:
Ogata, K. Discrete-Time Control Systems. Englewood Cliffs, NJ:
Prentice-Hall, 1987.
The observability matrix is

    O = [C; CA; ⋮; CA^{n−1}]
Zero-Order-Hold
Continuous to Discrete: Consider the state dynamics equation

    ẋ = Ax + Bu

Pre-multiply the expression above by the exponential term e^{−At}:

    e^{−At}ẋ − e^{−At}Ax = e^{−At}Bu
    d/dt (e^{−At} x) = e^{−At} B u
Integration of the previous expression between KT and (K + 1)T gives

    e^{−A(K+1)T}·x[(K+1)T] − e^{−AKT}·x[KT] = ∫_{KT}^{(K+1)T} e^{−At}·B·u(t) dt

With the change of variables t = τ + KT (so dτ = dt and u(τ + KT) = u(t)):

    ∫_{KT}^{(K+1)T} e^{−At}·B·u(t) dt = ∫_0^T e^{−A(τ+KT)}·B·u(τ + KT) dτ
                                      = e^{−AKT}·∫_0^T e^{−Aτ}·B·u(τ + KT) dτ

which leads to

    x[(K+1)T] = e^{AT}·x[KT] + e^{AT}·∫_0^T e^{−Aτ}·B·u(τ + KT) dτ
              = e^{AT}·x[KT] + ∫_0^T e^{A(T−τ)}·B·u(τ + KT) dτ
              = e^{AT}·x[KT] + ∫_{KT}^{(K+1)T} e^{A[(K+1)T − t]}·B·u(t) dt
Zero-Order-Hold
Consider the case where the input u(t) remains constant in the
[KT, (K+1)T] interval:

    u(t) = constant = u(KT)

Then

    x[(K+1)T] = e^{AT}·x[KT] + ∫_{KT}^{(K+1)T} e^{A[(K+1)T − t]} dt·B·u(KT)
              = e^{AT}·x[KT] − [A^{-1}·e^{A[(K+1)T − t]}]_{KT}^{(K+1)T}·B·u(KT)
              = e^{AT}·x[KT] + (e^{AT} − I)·A^{-1}·B·u(KT)

This last expression allows you to identify the system matrices A and B for
the zero-order-hold conversion:

    A_d^{ZOH} = e^{AT},    B_d^{ZOH} = (e^{AT} − I)·A^{-1}·B
The same matrices follow from the Taylor series of an augmented matrix
M = [A  B; 0  0]:

    e^{MT} = I + MT + (MT)^2/2! + … = Σ_{i=0}^{∞} (MT)^i / i!

    M^i = [A^i  A^{i−1}B; 0  0],  i ≥ 1

Because the matrix summation is equivalent to the summation of each of its
elements, you get the following expression:

    e^{MT} = [Σ_{i=0} (AT)^i/i!    Σ_{i=1} (A^{i−1} T^i/i!)·B; 0  I]
           = [e^{AT}    (e^{AT} − I)·A^{-1}·B; 0  I]
           = [A_d^{ZOH}    B_d^{ZOH}; 0  I]
Note: the output equation discretizes directly,

    y = Cx + Du  →  y(K) = Cx(K) + Du(K)

Initial Conditions: For the initial condition, as the zero-order hold makes
the response causal, you get the following:

    x_d^{ZOH}(0) = x(0) + 0·u(0),    ICMAT = [I  0]
In summary,

    A_d^{ZOH} = e^{A_c T},    B_d^{ZOH} = (e^{A_c T} − I)·A_c^{-1}·B_c
    C_d^{ZOH} = C,            D_d^{ZOH} = D
When going from discrete to continuous, you solve for the continuous set
of system matrices, as shown in the following:

    A_c = ln(A_d^{ZOH}) / T
    B_c = A_c·(e^{A_c T} − I)^{-1}·B_d^{ZOH} = A_c·(A_d^{ZOH} − I)^{-1}·B_d^{ZOH}
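The augmented-matrix form of the ZOH conversion is convenient in code because it needs no inverse of A. A minimal Python/SciPy sketch (illustrative, not the Model Conversion VI):

```python
import numpy as np
from scipy.linalg import expm

def zoh_discretize(A, B, T):
    # expm([[A, B], [0, 0]] * T) = [[Ad, Bd], [0, I]]; valid even when A
    # is singular and (e^{AT} - I) A^{-1} B cannot be formed directly
    n, m = B.shape
    M = np.zeros((n + m, n + m))
    M[:n, :n] = A
    M[:n, n:] = B
    E = expm(M * T)
    return E[:n, :n], E[:n, n:]
```

For example, the double integrator (a singular A) discretizes without any special casing, and the result matches `scipy.signal.cont2discrete(..., method='zoh')`.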
State-Space Norm
The CD Norm VI calculates two types of norms: the 2-norm and the
infinity-norm.
2-norm
For stable systems, the following procedure outlines the steps for
computing the 2-norm. For continuous systems, when you solve
AX + XA^T + BB^T = 0 for X, you get the following norm and frequency:

    ‖G‖₂ = sqrt(Trace(CXC^T)),    Frequency = NaN

For discrete systems, the feedthrough term contributes:

    ‖G‖₂ = sqrt(Trace(CXC^T + DD^T))
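The continuous 2-norm computation above is a Lyapunov solve plus a trace. An illustrative Python sketch (function name is this example's own):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def h2_norm_continuous(A, B, C):
    # solve A X + X A^T + B B^T = 0, then ||G||_2 = sqrt(trace(C X C^T))
    X = solve_continuous_lyapunov(A, -B @ B.T)
    return float(np.sqrt(np.trace(C @ X @ C.T)))
```

For G(s) = 1/(s + 1) the controllability Grammian is X = 1/2, giving the known 2-norm sqrt(1/2).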
Infinity-norm
The procedure for computing the infinity-norm is different for SISO
systems and MIMO systems.
SISO Systems
For SISO systems, the CD Norm VI converts the state-space model to a
transfer function model, creates a frequency vector (a vector of ωi), and
computes the norm according to the following formula:

    ‖G‖∞ = max_i |G(jωi)|

MIMO Systems
Create a frequency vector (a vector of ωi). For each ωi, compute the
USV^T (singular value) decomposition of H(jωi), and use it to compute the
norm according to the formula:

    ‖G‖∞ = max_i σmax(H(jωi))
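The MIMO frequency-gridding idea above can be sketched as follows. This Python example is illustrative only; a fixed grid gives a lower bound on the true infinity-norm, which is why the VI refines its grid:

```python
import numpy as np

def hinf_norm_grid(A, B, C, D, freqs):
    # Grid-based estimate: max over frequencies of the largest singular
    # value of H(jw) = C (jw I - A)^{-1} B + D
    n = A.shape[0]
    best = 0.0
    for w in freqs:
        H = C @ np.linalg.solve(1j * w * np.eye(n) - A, B) + D
        best = max(best, np.linalg.svd(H, compute_uv=False)[0])
    return best
```

For a first-order lag the peak gain sits at low frequency, so a log-spaced grid reaching small ω recovers it accurately.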
References:
R. H. Bartels and G. W. Stewart. Algorithm 432: Solution of the matrix
equation AX + XB = C. Communications of the ACM, September 1972.
G. H. Golub, S. Nash, and C. Van Loan. A Hessenberg-Schur Method for the
Problem AX + XB = C. IEEE Transactions on Automatic Control, December
1979.
The solver reduces the problem to an equation of the form

    A1·X + X·A1^H = C1

where C1 = C1^H.
Reference:
S. J. Hammarling. Numerical solution of the stable, non-negative definite
Lyapunov equation. IMA J. Numerical Analysis, vol. 2, pp. 303-323, July
1982.
From the Euler-Lagrange equations, λ̇ = −∂H/∂x and ∂H/∂u = 0, the state
and costate satisfy the Hamiltonian system

    [ẋ(t); λ̇(t)] = [A  −BR^{-1}B^T; −Q  −A^T]·[x(t); λ(t)]

with terminal condition λ(tf) = Sf·x(tf). Let

    V = [V11  V12; V21  V22]

be the eigenvector matrix of the Hamiltonian, with J− and J+ collecting
the stable and unstable eigenvalues. Because e^{J+(t − tf)} → 0 as
tf → ∞, the solution is carried by the remaining modes:

    X(t) = V12·e^{J−(t − tf)}·Z
    λ(t) = V22·e^{J−(t − tf)}·Z

where Z collects the boundary-condition terms [I; Sf]. As tf → ∞, S(t)
becomes

    S(t) = λ(t)·X^{-1}(t) = V22·e^{J−(t − tf)}·Z·[V12·e^{J−(t − tf)}·Z]^{-1}

Therefore S0 = V22·V12^{-1}.
Numerical solutions involve computing this algorithm on the modal matrix
of H for the stable eigenvalues.
References:
A. J. Laub. A Schur method for solving algebraic Riccati equations. IEEE
Transactions on Automatic Control, vol. AC-24, no. 6, pp. 913-921,
December 1979.
Uy-Loi Ly. Linear Multivariable Control, Course Notes, University of
Washington, Dec 31, 1996.
The Riccati equation with the cross-weighting term S is

    A^T X + XA + Q − SR^{-1}B^T X − XBR^{-1}S^T − XBR^{-1}B^T X − SR^{-1}S^T = 0

By applying the following transformations,

    Ā = A − BR^{-1}S^T
    M = BR^{-1}B^T
    Q̄ = Q − SR^{-1}S^T

the equation reduces to the standard form Ā^T X + XĀ − XMX + Q̄ = 0.
For the discrete case, with H = Q − SR^{-1}S^T and G = G1·G2^{-1}·G1^T,
the equation reduces to the symplectic matrix

    Z = [F + G·F^{-T}·H    −G·F^{-T}; −F^{-T}·H    F^{-T}]

Then this algorithm uses a basis for the stable eigenspace of Z to compute
the desired Riccati solution, using the same procedure as in the
Hamiltonian Reduction Algorithm for Solving Riccati Equations section.
References:
T. Pappas, A. J. Laub, and N. R. Sandell. On the numerical solution of the
discrete-time algebraic Riccati equation. IEEE Transactions on Automatic
Control, vol. AC-25, no. 4, pp. 631-641, Aug 1980.
Consider an m-output, l-input transfer function matrix:

    [y1; y2; ⋮; ym] = [G11 … G1l; G21 … G2l; ⋮; Gm1 … Gml]·[u1; u2; ⋮; ul]

To convert the whole transfer function model into a state-space model,
consider the first output y1:

    y1 = Σ_{i=1}^{l} G1i·ui = Σ_{i=1}^{l} y1i

where y1i = G1i·ui. You can get the controllable canonical realization of
G1i, given by:

    ẋ1i = A1i·x1i + [0; ⋮; 0; 1]·ui
    y1i = C1i·x1i

Because y1 = Σ_{i=1}^{l} y1i, the realizations are stacked with
x1 = [x11; x12; ⋮; x1l]:

    ẋ1 = diag(A11, A12, …, A1l)·x1 + B1·[u1; ⋮; ul] = A1·x1 + B1·u
    y1 = [C11  C12  …  C1l]·x1 = C1·x1

where B1 is block diagonal with the input columns [0 … 0 1]^T of each
subsystem. Because all outputs share the same set of inputs, the
additional outputs are appended in the following way:

    [ẋ1; ẋ2; ⋮; ẋm] = diag(A1, A2, …, Am)·[x1; x2; ⋮; xm] + [B1; B2; ⋮; Bm]·[u1; ⋮; ul]
    [y1; y2; ⋮; ym] = diag(C1, C2, …, Cm)·[x1; x2; ⋮; xm]
    N = [I_n  0; 0  0]
When M and N are square matrices and the dimensions match, this problem
is solved directly by using the Analysis VIs. When the dimensions of the
matrices do not match, the algorithm creates matrices M1 and M2 by using
random numbers to pad the matrix with a smaller dimension.
Then, if a transmission zero corresponds to the original system, it has to
be (with probability close to 1) a zero of both M1 and M2.
The algorithm picks the zeros based on the following criteria where
z1 = set of zeros of M1
z2 = set of zeros of M2
For each element in z1, select the closest element in z2 and set the
distance as d. If d is less than a threshold proportional to the norm of M,
the element is a transmission zero of M.
References:
A.J. Laub and B. C. Moore. Calculation of transmission zeros using QZ
techniques. Tech report ESL-P-802.
A. S. Hodel. Computation of system zeros with balancing. Linear Algebra
and its Applications, 188-189: 423-436, 1993.
The Pade approximation of the delay e^{−Ts} uses the polynomials

    Npq(z) = Σ_{k=0}^{p} [(p + q − k)!·p!] / [(p + q)!·k!·(p − k)!] · z^k

and

    Dpq(z) = Σ_{k=0}^{q} [(p + q − k)!·q!] / [(p + q)!·k!·(q − k)!] · (−z)^k

evaluated at z = −Ts, so the numerator terms become

    Σ_{k=0}^{p} [(p + q − k)!·p!] / [(p + q)!·k!·(p − k)!] · (−Ts)^k

where, for p = q,

    k = 0:  N = 1
    k = 1:  N = −(1/2)·Ts
    etc.

and p (= q) is defined as the order of the polynomial.
The function sequentially computes the terms and takes the sum of products
(from k = 0 to p). Finally, the denominator is made monic and incorporated
back into the model.
Reference:
G. H. Golub and C. F. Van Loan. Matrix Computations, 3rd ed.,
pp. 572-573. Baltimore, MD: Johns Hopkins University Press, 1996.
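The Pade coefficient formulas above translate directly into code. This Python sketch (illustrative; the function name is this example's own) returns polynomial coefficients in s, highest power first, with the denominator made monic as described:

```python
import math
import numpy as np

def pade_delay(T, p, q):
    """Coefficients (highest power of s first) of the (p, q) Pade
    approximation N(s)/D(s) of e^{-Ts}, denominator made monic."""
    num = [math.factorial(p + q - k) * math.factorial(p)
           / (math.factorial(p + q) * math.factorial(k) * math.factorial(p - k))
           * (-T) ** k for k in range(p + 1)]          # N_pq(z) at z = -T s
    den = [math.factorial(p + q - k) * math.factorial(q)
           / (math.factorial(p + q) * math.factorial(k) * math.factorial(q - k))
           * T ** k for k in range(q + 1)]             # D_pq(z) at z = -T s
    num = np.array(num[::-1]); den = np.array(den[::-1])
    return num / den[0], den / den[0]
```

For p = q = 1 this reproduces the familiar first-order approximation e^{−Ts} ≈ (1 − Ts/2)/(1 + Ts/2).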
Once you know the final gain value, you create a grid of gain values based
on a geometric series and iteratively refine this grid until the plot is
estimated to be visually correct, using specialized heuristics based on
interpoint distances. Lastly, you fill in the appropriate infinity and
special points for the plots.
References:
A. M. Krall and R. Fornaro. An Algorithm for Generating Root Locus
Diagrams. Communications of the ACM, vol. 10, no. 3, pp. 186-188,
Mar 1967.
T. O. e Silva. Automatic Generation of Root Locus Plots. Revista do
DETUA, vol. 2, no. 3, pp. 273-278, Sept 1998.
Repeat the operation for each input-output pair of the MIMO system.
The gain margin gm is evaluated at the frequency where the phase
equals −180 degrees.
Staircase
Reference:
Web site of the Math Department of Escuela Politecnica Nacional, Ecuador,
Staircase Algorithm.
http://epn.edu.ec/~fc/dpto_mat/pdf/staircase.pdf
Kalman Gain
The CD Kalman Gain VI uses the derivation found in the following
references.
Kwakernaak, H. & Sivan, R., Linear Optimal Control Systems.
Wiley-Interscience, 1972.
Ogata, K. Discrete-Time Control Systems. Englewood Cliffs, NJ:
Prentice-Hall, 1987.
Ackermann
The CD Ackermann VI uses the derivation found in the following
reference.
Ogata, K. Discrete-Time Control Systems. Englewood Cliffs, NJ:
Prentice-Hall, 1995.
Solution
Let's first focus on the problem of generating random samples of a single
Gaussian random vector x with mean mx and covariance matrix Cxx. Then we
will generalize the solution to generating random samples of the two
vectors X and Y.
Define z = [z1; z2; ⋮; zn] with

    mz = 0
    Czz = Cov{z} = E{zz^T} − mz·mz^T = I

that is, z ~ N(0, I).
Define the affine transformation

    x = Az + b

where A ∈ R^{n×n} and b ∈ R^n. Since an affine transformation of a
Gaussian random vector yields a Gaussian random vector, x is
Gaussian-distributed. We will find the matrix A and the vector b that
generate x such that x ~ N(mx, Cxx).

    E{x} = E{Az + b} = A·E{z} + b = b
    Cov{x} = Cov{Az + b}
           = E{[Az + b][Az + b]^T} − E{Az + b}·E{Az + b}^T
           = A·E{zz^T}·A^T + A·E{z}·b^T + b·E{z}^T·A^T + bb^T − [A·E{z} + b][A·E{z} + b]^T
           = A·Czz·A^T + bb^T − bb^T
           = A·A^T

so Cxx = AA^T. Using the eigendecomposition Cxx = V·Λ·V^T, with
Λ = diag(λ1, λ2, …, λn), choose

    A = V·Λ^{1/2}

so that A·A^T = V·Λ^{1/2}·Λ^{1/2}·V^T = V·Λ·V^T = Cxx.
For the joint vector w = [x; y] with mean m = [mx; my]:

    Cov{w} = E{ww^T} − E{w}·E{w}^T
           = [E{xx^T}  E{xy^T}; E{yx^T}  E{yy^T}] − [mx·mx^T  mx·my^T; my·mx^T  my·my^T]

    Cww = [Cxx  Cxy; Cxy^T  Cyy]
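The sampling recipe above (factor the covariance, transform white noise, shift by the mean) can be sketched as follows. This Python example is illustrative, not the CD Correlated Gaussian Random Noise VI:

```python
import numpy as np

def correlated_gaussian(m_x, C_xx, n_samples, seed=None):
    """Draw samples of x ~ N(m_x, C_xx) as x = A z + m_x with z ~ N(0, I),
    A = V Lambda^(1/2) from the eigendecomposition C_xx = V Lambda V^T."""
    rng = np.random.default_rng(seed)
    lam, V = np.linalg.eigh(C_xx)
    A = V @ np.diag(np.sqrt(np.clip(lam, 0.0, None)))
    z = rng.standard_normal((len(m_x), n_samples))
    return (A @ z).T + m_x
```

The sample mean and sample covariance of a large batch should approach the requested moments.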
At time 0, x̂(0) is already available; at each time k, x̂(k) is already
available. The continuous observer has the structure

    dx̂(t)/dt = A·x̂(t) + B·u(t) + L·[y(t) − ŷ(t)]
    ŷ(t) = C·x̂(t) + D·u(t)
    x̂(t0) = initial condition
Problem Statement
Given the continuous-time (C.T.) Linear Time-Varying (LTV) stochastic
state-space system

    ẋ(t) = A(t)x(t) + B(t)u(t) + G(t)w(t)
    y(t) = C(t)x(t) + D(t)u(t) + H(t)w(t) + v(t)

The matrices A(t) ∈ R^{n×n}, B(t) ∈ R^{n×r}, G(t) ∈ R^{n×q},
C(t) ∈ R^{m×n}, D(t) ∈ R^{m×r}, H(t) ∈ R^{m×q} are deterministic and
known for all t.
The disturbance vectors w(t) and v(t) are assumed to be white (i.e.,
zero-mean and temporally uncorrelated) random processes with the
following first- and second-order statistics:

    E{w(t)} = 0
    cov{w(t), w(τ)} = E{w(t)·w^T(τ)} = Q(t)·δ(t − τ),    Q(t) = Q^T(t) ≥ 0
    E{v(t)} = 0
    cov{v(t), v(τ)} = E{v(t)·v^T(τ)} = R(t)·δ(t − τ),    R(t) = R^T(t) > 0
    cov{w(t), v(τ)} = E{w(t)·v^T(τ)} = N(t)·δ(t − τ)
Solution
The best linear estimator x̂(t) of the state vector x(t) in terms of the
measurements Y_t has the following dynamical structure:

    dx̂(t)/dt = A(t)x̂(t) + B(t)u(t) + L(t)·[y(t) − ŷ(t)]
    ŷ(t) = C(t)x̂(t) + D(t)u(t)

with gain

    L(t) = [P(t)C^T(t) + G(t)Q(t)H^T(t) + G(t)N(t)]
           · [H(t)Q(t)H^T(t) + H(t)N(t) + N^T(t)H^T(t) + R(t)]^{-1}

and error covariance P(t) propagated by

    dP(t)/dt = A(t)P(t) + P(t)A^T(t) + G(t)Q(t)G^T(t)
             − [P(t)C^T(t) + G(t)Q(t)H^T(t) + G(t)N(t)]
             · [H(t)Q(t)H^T(t) + H(t)N(t) + N^T(t)H^T(t) + R(t)]^{-1}
             · [C(t)P(t) + H(t)Q(t)G^T(t) + N^T(t)G^T(t)]
Estimator Error:

    e(t) = x(t) − x̂(t)

Error Dynamics:

    ė(t) = ẋ(t) − dx̂(t)/dt
         = A·x(t) + B·u(t) + G·w(t) − [A·x̂(t) + B·u(t) + L(t)·[y(t) − C·x̂(t) − D·u(t)]]
         = A·e(t) + G·w(t) − L(t)·[C·x(t) + D·u(t) + H·w(t) + v(t) − C·x̂(t) − D·u(t)]
         = A·e(t) + G·w(t) − L(t)·[C·e(t) + H·w(t) + v(t)]

    ė(t) = [A − LC]·e(t) + [G − LH]·w(t) − L·v(t)

    e(t) = Φ(t, t0)·e(t0) + ∫_{t0}^{t} Φ(t, τ)·[(G − LH)·w(τ) − L·v(τ)] dτ
The error covariance is

    P(t) = E{e(t)·e^T(t)}
         = E{[Φ(t, t0)e(t0) + ∫_{t0}^{t} Φ(t, τ)[(G − LH)w(τ) − Lv(τ)]dτ]
           ·[Φ(t, t0)e(t0) + ∫_{t0}^{t} Φ(t, s)[(G − LH)w(s) − Lv(s)]ds]^T}
         = Φ(t, t0)·E{e(t0)e^T(t0)}·Φ^T(t, t0)
         + (cross terms between e(t0) and the noise, which vanish because
            the noises are zero-mean and uncorrelated with e(t0))
         + ∫_{t0}^{t}∫_{t0}^{t} Φ(t, τ)·E{[(G − LH)w(τ) − Lv(τ)][(G − LH)w(s) − Lv(s)]^T}·Φ^T(t, s) dτ ds

Using the white-noise correlations E{w(τ)w^T(s)} = Q(τ)δ(τ − s),
E{v(τ)v^T(s)} = R(τ)δ(τ − s), and E{w(τ)v^T(s)} = N(τ)δ(τ − s), the double
integral collapses to a single integral:

    P(t) = Φ(t, t0)·P(t0)·Φ^T(t, t0)
         + ∫_{t0}^{t} Φ(t, τ)·[(G − LH)Q(τ)(G − LH)^T − (G − LH)N(τ)L^T
           − LN^T(τ)(G − LH)^T + LR(τ)L^T]·Φ^T(t, τ) dτ
recall (Leibniz rule):

    d/dt ∫_{g(t)}^{h(t)} f(x, t) dx = ∫_{g(t)}^{h(t)} ∂f(x, t)/∂t dx
                                     + f(h(t), t)·dh(t)/dt − f(g(t), t)·dg(t)/dt

denote:

    Z = (G − LH)·Q(τ)·(G − LH)^T − (G − LH)·N(τ)·L^T − L·N^T(τ)·(G − LH)^T + L·R(τ)·L^T

Differentiating P(t), and using Φ(t, t) = I and dt0/dt = 0:

    dP(t)/dt = Φ̇(t, t0)·P(t0)·Φ^T(t, t0) + Φ(t, t0)·P(t0)·Φ̇^T(t, t0)
             + ∫_{t0}^{t} [Φ̇(t, τ)·Z·Φ^T(t, τ) + Φ(t, τ)·Z·Φ̇^T(t, τ)] dτ + Z

but

    ė(t) = [A − LC]·e(t) + [G − LH]·w(t) − L·v(t)

so Φ(t, t0) is the state transition matrix for this differential equation.
    Φ̇(t, t0) = [A − LC]·Φ(t, t0)

Substituting:

    dP(t)/dt = [A − LC]·[Φ(t, t0)·P(t0)·Φ^T(t, t0) + ∫_{t0}^{t} Φ(t, τ)·Z·Φ^T(t, τ) dτ]
             + [Φ(t, t0)·P(t0)·Φ^T(t, t0) + ∫_{t0}^{t} Φ(t, τ)·Z·Φ^T(t, τ) dτ]·[A − LC]^T + Z

The bracketed quantity is P(t), so

    dP(t)/dt = [A − LC]·P(t) + P(t)·[A − LC]^T + Z

with Z = [G − LH]·Q·[G − LH]^T − [G − LH]·N·L^T − L·N^T·[G − LH]^T + L·R·L^T.
The scalar error variance relates to P(t) through the trace:

    E{e^T(t)·e(t)} = E{tr[e(t)·e^T(t)]} = tr[E{e(t)·e^T(t)}] = tr[P(t)]
ni.com
max
L( t)
{ tr [ P ( t ) ] }
{ tr } [ P ( t ) ]
L( t)
T
AP LCP + PA PC L
T
T
T
T
+ GQG GQH L LHQG + LHQH L
{ tr } [ P ( t ) ]
=
L GNL T + LHNL T LN T G T = LN T H T L T
L(t)
+ LRL
. { tr [ XA T ] } = A
x
. { tr [ X T A ] } = A
x
. { tr [ ABA T ] } = AB + AB T
x
1
T
T T
{ tr [ LCP ] } = ( CP ) = P C
L
T T
T
T
T
{ tr [ PC L ] } =
{ tr [ L CP ] } = PC
L
L
T T
T
T
T
{ tr [ GQH L ] } =
{ tr [ L GQH ] } = GQH
L
L
T
T T
T T
4 { tr [ LHQG ] } = ( HQG ) = GQ H
L
T
T T
= LHQH + L ( HQH )
T T
5 { tr [ LHQH L ] }
T
T T
L
= LHQH + LHQ H
6
T
T
{ tr [ GNL ] } =
{ tr [ L GN ] } = GN
L
L
T
T
T T
7 { tr [ LHNL ] } = LHN + L ( HN ) = LHN + LN H
L
63
T T
T T T
{ tr [ LN G ] } = ( N G ) = GN
L
T
10
T T
T
T
{ tr [ LRL ] } = LR + LR
L
now,
T
= LN H + L ( N H )
T T T
{ tr [ LN H L ] }
T T
L
= LN H + LHN
Setting ∂/∂L tr[dP(t)/dt] = 0 and collecting the terms above:

    −2·PC^T − 2·GQH^T − 2·GN + 2·L·[HQH^T + HN + N^TH^T + R] = 0
    L·[HQH^T + HN + N^TH^T + R] = PC^T + GQH^T + GN

    L(t) = [PC^T + GQH^T + GN]·[HQH^T + HN + N^TH^T + R]^{-1}
Substituting L(t) back into dP/dt = [A − LC]P + P[A − LC]^T + Z and
grouping terms:

    dP(t)/dt = AP + PA^T + GQG^T − L·[CP + HQG^T + N^TG^T]
             − [PC^T + GQH^T + GN]·L^T + L·[HQH^T + HN + N^TH^T + R]·L^T

Because L·[HQH^T + HN + N^TH^T + R] = PC^T + GQH^T + GN, the last two
terms cancel, leaving

    dP(t)/dt = AP + PA^T + GQG^T
             − [PC^T + GQH^T + GN]·[HQH^T + HN + N^TH^T + R]^{-1}·[CP + HQG^T + N^TG^T]
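The gain and covariance expressions above are easy to check numerically. This Python sketch (illustrative, external to the VIs; function names are this example's own) evaluates L(t) and the right-hand side of the covariance equation; with H = 0 and N = 0 the steady state reduces to the familiar filter Riccati equation:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def kalman_bucy_gain(P, A, C, G, H, Q, R, N):
    # L = [P C^T + G Q H^T + G N][H Q H^T + H N + N^T H^T + R]^-1
    S = H @ Q @ H.T + H @ N + N.T @ H.T + R
    K = P @ C.T + G @ Q @ H.T + G @ N
    return np.linalg.solve(S.T, K.T).T

def riccati_rhs(P, A, C, G, H, Q, R, N):
    # dP/dt = A P + P A^T + G Q G^T - K S^-1 K^T
    S = H @ Q @ H.T + H @ N + N.T @ H.T + R
    K = P @ C.T + G @ Q @ H.T + G @ N
    return A @ P + P @ A.T + G @ Q @ G.T - K @ np.linalg.solve(S, K.T)
```

With H = 0 and N = 0, the steady-state P solves the dual CARE, so `scipy.linalg.solve_continuous_are(A.T, C.T, G Q G^T, R)` can be used as a reference.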
    y_k = C_k·x_k + D_k·u_k + H_k·w_k + v_k

Need to find the n-step-ahead Kalman Predictor, i.e., find the state
estimate x̂*_{k+n|k} = E*{x_{k+n} | Y_k}, given the knowledge of the
one-step-ahead predicted state x̂_{k+1|k} and {A_i}_{i=k+1}^{k+n−1}.

Solution:
Define

    Φ_{k,j} = A_{k−1}·A_{k−2}···A_j,    Φ_{j,j} = I

Then, we may re-write the state difference equation as:

    x_{k+1} = Φ_{k+1,0}·x_0 + Σ_{i=0}^{k} [Φ_{k+1,i+1}·(B_i·u_i) + Φ_{k+1,i+1}·(G_i·w_i)]

    x_{k+n} = Φ_{k+n,k+1}·x_{k+1} + Σ_{i=k+1}^{k+n−1} [Φ_{k+n,i+1}·(B_i·u_i) + Φ_{k+n,i+1}·(G_i·w_i)]

Now:

    x̂*_{k+n|k} = E*{x_{k+n} | Y_k}
               = Φ_{k+n,k+1}·E*{x_{k+1} | Y_k}
               + Σ_{i=k+1}^{k+n−1} [Φ_{k+n,i+1}·B_i·E*{u_i | Y_k} + Φ_{k+n,i+1}·G_i·E*{w_i | Y_k}]

but

    E*{u_i | Y_k} = u_i, since u_i is a deterministic input vector,
    E*{w_i | Y_k} = 0 for k+1 ≤ i ≤ k+n−1,

since the sequence Y_k = {y_0, y_1, …, y_k} depends only (and linearly) on
{w_i}_{i=0}^{k} and is therefore uncorrelated with {w_i}_{i=k+1}^{∞}.
Hence

    x̂_{k+n|k} = Φ_{k+n,k+1}·x̂_{k+1|k} + Σ_{i=k+1}^{k+n−1} Φ_{k+n,i+1}·B_i·u_i

For a time-invariant system,

    Φ_{k+n,k+1} = A^{(k+n−1)−(k+1)+1} = A^{n−1}
    Φ_{k+n,i+1} = A^{(k+n−1)−(i+1)+1} = A^{k+n−i−1}

so

    x̂_{k+n|k} = A^{n−1}·x̂_{k+1|k} + Σ_{i=k+1}^{k+n−1} A^{k+n−i−1}·B·u_i
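The time-invariant n-step prediction formula can be sketched directly. This Python example is illustrative (the helper name and argument layout are this example's own), and it can be checked against simply iterating the noise-free model:

```python
import numpy as np

def predict_n_steps(A, B, x_hat_k1, u_future, n):
    """x_hat_{k+n|k} = A^(n-1) x_hat_{k+1|k} + sum A^(n-2-j) B u_{k+1+j};
    u_future[j] is the known deterministic input u_{k+1+j}, j = 0..n-2."""
    x = np.linalg.matrix_power(A, n - 1) @ x_hat_k1
    for j, u in enumerate(u_future[:n - 1]):
        x = x + np.linalg.matrix_power(A, n - 2 - j) @ (B @ u)
    return x
```

Iterating x_{m+1} = A x_m + B u_m from x̂_{k+1|k} with the same inputs must give the identical result, which is the defining property of the predictor.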
    Equation     Continuous             Discrete
    Lyapunov     AX + XA^T + Q = 0      AXA^T − X + Q = 0
    Sylvester    AX + XB + C = 0        AXB − X + C = 0
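The continuous Sylvester and discrete Lyapunov equations in the table can be solved with standard library routines. An illustrative Python/SciPy snippet (external to the LabVIEW solvers):

```python
import numpy as np
from scipy.linalg import solve_sylvester, solve_discrete_lyapunov

A = np.array([[1., 2.], [0., 3.]])
Bm = np.array([[4., 0.], [1., 5.]])
Cm = np.array([[1., 0.], [2., 1.]])
# Continuous Sylvester  A X + X B + C = 0  ->  solve A X + X B = -C
X = solve_sylvester(A, Bm, -Cm)

Ad = np.array([[0.5, 0.1], [0., 0.4]])
Qd = np.eye(2)
# Discrete Lyapunov  A X A^T - X + Q = 0
Xd = solve_discrete_lyapunov(Ad, Qd)
```

The Sylvester equation has a unique solution here because no eigenvalue of A is the negative of an eigenvalue of B.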
5. Construct F = U^T C V.
Laub, Alan J., Paul Van Dooren, and R. V. Patel (eds). Numerical Linear
Algebra Techniques for Systems and Control. Piscataway, NJ: IEEE Press,
1994.
Varga, A. Robust Pole Assignment via Sylvester Equation Based State
Feedback Parametrization. IEEE International Symposium on Computer Aided
Control System Design, CACSD'2000. Anchorage, Alaska, 2000.
Solve X = U·Y·V^T.
You can apply the same methodology to the discrete version of the
Lyapunov equation.
Riccati Equations
The symmetric n x n algebraic Riccati equation for the continuous-time case is A^T X + XA - XMX + Q = 0.
The solution of this equation is based on the reduction of a Hamiltonian matrix, defined by the following equation:

H = [  A   -M   ]
    [ -Q   -A^T ]

Let V be the orthogonal matrix of Schur vectors of H, ordered so that the columns [V_12; V_22] span the stable invariant subspace, and partition it into n x n blocks:

V = [ V_11  V_12 ]
    [ V_21  V_22 ]

You then can express the steady-state solution of the Riccati equation with the following equation:

X = V_22 V_12^{-1}
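The Schur-vector method above can be sketched as follows (illustrative, not NI's code). Note one assumption: here the stable eigenvalues are ordered first, so the stable subspace is [V_11; V_21] and X = V_21 V_11^{-1}; the document's block indices (V_22, V_12) correspond to the opposite ordering of the same construction.

```python
import numpy as np
from scipy.linalg import schur, solve_continuous_are

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
R = np.eye(1)
M = B @ np.linalg.inv(R) @ B.T     # M = B R^{-1} B^T
Q = np.eye(2)

# Hamiltonian matrix H = [[A, -M], [-Q, -A^T]]
H = np.block([[A, -M], [-Q, -A.T]])

# Ordered real Schur form: left-half-plane (stable) eigenvalues first
T, V, sdim = schur(H, sort='lhp')
n = A.shape[0]
V11, V21 = V[:n, :n], V[n:, :n]
X = V21 @ np.linalg.inv(V11)       # steady-state Riccati solution

# Cross-check against SciPy's CARE solver
X_ref = solve_continuous_are(A, B, Q, R)
assert np.allclose(X, X_ref, atol=1e-8)
```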
For the discrete-time case, the algebraic Riccati equation is

A^T XA - X - A^T XB (B^T XB + R)^{-1} B^T XA + Q = 0

and its solution is based on the reduction of the corresponding symplectic matrix (with G = BR^{-1}B^T):

H = [ A + GA^{-T}Q   -GA^{-T} ]
    [ -A^{-T}Q        A^{-T}  ]

When a cross-weighting term S is present, the discrete Riccati equation can be written in terms of the modified matrices A-bar = A - BR^{-1}S^T and Q-bar = Q - SR^{-1}S^T:

(A - BR^{-1}S^T)^T X (A - BR^{-1}S^T)
  - (A - BR^{-1}S^T)^T XB (B^T XB + R)^{-1} B^T X (A - BR^{-1}S^T)
  + Q - SR^{-1}S^T = X
Problem 1: ZOH discretization with input delay.

x'(t) = Ax(t) + Bu(t - tau)
y(t) = Cx(t) + Du(t - tau)

tau = (l - m)T,   l = 1, 2, 3, ...,   0 <= m < 1

We will consider the case where l = 1, 0 <= m < 1. If l > 1, the (l - 1) delay will be set to the discrete model independently after the discretization is achieved.

x(kT + T) = Phi(T)x(kT) + Gamma_1 u(kT - lT) + Gamma_2 u(kT - lT + T)

Phi(T) = e^{AT}

Gamma_1 = e^{AmT} Int_0^{T-mT} e^{A eta} d eta B

Gamma_2 = Int_0^{mT} e^{A eta} d eta B

Define

Phi(alpha) = e^{A alpha}
Gamma(alpha) = Int_0^{alpha} e^{A eta} d eta B

so that

Gamma_1 = Phi(mT) Gamma(T - mT)
Gamma_2 = Gamma(mT)

Define the state vector xi(k) = u(k - 1), and take T = 1 for simplicity:

[ x(k+1)  ]   [ Phi(T)  Gamma_1 ] [ x(k)  ]   [ Gamma_2 ]
[ xi(k+1) ] = [   0        0    ] [ xi(k) ] + [    I    ] u(k)

x-bar(k + 1) = A_d x-bar(k) + B_d u(k)

Now, looking at the output equation, we can note that for l = 1 and 0 <= m < 1, the delay tau is equivalent to a delay of less than one sampling time, so

y(k) = Cx(k) + Du(k - 1)

y(k) = [ C  D ] [ x(k); xi(k) ] + [0] u(k)

y(k) = C_d x-bar(k) + D_d u(k)

A_d = [ Phi(T)  Gamma_1 ]     B_d = [ Gamma_2 ]     C_d = [ C  D ]     D_d = 0
      [   0        0    ]           [    I    ]
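The matrices Phi(T), Gamma_1, and Gamma_2 defined above can be computed with the standard matrix-exponential trick, Gamma(a) = Int_0^a e^{As} ds B = top-right block of expm([[A, B], [0, 0]] * a). The sketch below (assumed helper names, not NI's code) builds the augmented discrete system for the fractional-delay case l = 1:

```python
import numpy as np
from scipy.linalg import expm

def gamma(A, B, a):
    """Gamma(a) = integral_0^a e^{A s} ds B, via the block-expm identity."""
    n, r = A.shape[0], B.shape[1]
    M = np.zeros((n + r, n + r))
    M[:n, :n] = A
    M[:n, n:] = B
    return expm(M * a)[:n, n:]

def zoh_with_delay(A, B, C, D, T, m):
    """Discretize x' = Ax + Bu(t - (1 - m)T), 0 <= m < 1, per the derivation above."""
    n, r = A.shape[0], B.shape[1]
    Phi = expm(A * T)
    G1 = expm(A * m * T) @ gamma(A, B, T - m * T)   # Gamma_1 = Phi(mT) Gamma(T - mT)
    G2 = gamma(A, B, m * T)                          # Gamma_2 = Gamma(mT)
    Ad = np.block([[Phi, G1], [np.zeros((r, n)), np.zeros((r, r))]])
    Bd = np.vstack([G2, np.eye(r)])
    Cd = np.hstack([C, D])
    Dd = np.zeros((C.shape[0], r))
    return Ad, Bd, Cd, Dd
```

For m = 0 this degenerates to a full one-sample input delay (Gamma_2 = 0), as expected from the derivation.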
Problem 2: ZOH discretization with vectorized input delay.

x'(t) = Ax(t) + Bu(t)
y(t) = Cx(t) + Du(t)

where each input channel carries its own delay:

x'(t) = Ax(t) + [ b_1  b_2  ...  b_r ] [ u_1(t - tau_1); u_2(t - tau_2); ...; u_r(t - tau_r) ]

tau_1 = (l_1 - m_1)T
tau_2 = (l_2 - m_2)T
...
tau_r = (l_r - m_r)T

l_i = 1, 2, 3, ...,   0 <= m_i < 1

Phi(T) = e^{AT}

Gamma_1 = [ Gamma_{1,1}  Gamma_{1,2}  ...  Gamma_{1,r} ]

where Gamma_{1,i} = Phi(m_i T) Gamma_i(T - m_i T), with

Phi(m_i T) = e^{A m_i T}
Gamma_i(T - m_i T) = Int_0^{T - m_i T} e^{A eta} d eta b_i

Gamma_2 = [ Gamma_{2,1}  Gamma_{2,2}  ...  Gamma_{2,r} ]

where Gamma_{2,i} = Gamma_i(m_i T) = Int_0^{m_i T} e^{A eta} d eta b_i

With the augmented state xi(k) = u(k - 1):

[ x(k+1)  ]   [ Phi(T)  Gamma_1 ] [ x(k)  ]   [ Gamma_2 ]
[ xi(k+1) ] = [   0        0    ] [ xi(k) ] + [    I    ] u(k)

y(k) = [ C  D ] [ x(k); xi(k) ] + [0] u(k)
Problem 3: ZOH discretization with output delay.

x'(t) = Ax(t) + Bu(t)
y(t + sigma) = Cx(t) + Du(t)   <=>   y(t) = Cx(t - sigma) + Du(t - sigma)

sigma = (j - q)T,   j = 1, 2, 3, ...,   0 <= q < 1

If j > 1, the (j - 1) delay will be set to the discrete model independently after the discretization is achieved.

x(t) = e^{A(t - t_0)} x(t_0) + Int_{t_0}^{t} e^{A(t - eta)} B u(eta) d eta

With t_0 = kT, t = kT + T, and u(eta) = u(kT) for eta in [kT, kT + T):

x(kT + T) = e^{AT} x(kT) + Int_0^{T} e^{A eta} d eta B u(kT)

x(k + 1) = Phi(T) x(k) + Gamma u(k)

For the output equation (taking j = 1):

y(t) = Cx(t - sigma) + Du(t - sigma)
y(kT) = Cx(kT - (1 - q)T) + Du(kT - (1 - q)T)
      = Cx[T(k - (1 - q))] + Du[T(k - (1 - q))]

but x(t) = e^{A(t - t_0)} x(t_0) + Int_{t_0}^{t} e^{A(t - eta)} B u(eta) d eta, so with t_0 = (k - 1)T and t = kT - (1 - q)T:

x[T(k - (1 - q))] = e^{A[kT - (1 - q)T - (k - 1)T]} x[(k - 1)T] + Int_{(k-1)T}^{kT-(1-q)T} e^{A[kT - (1 - q)T - eta]} B u(eta) d eta

The exponent of the first factor is kT - T + qT - kT + T = qT. For the integral, substitute lambda = (k - 1 + q)T - eta, d lambda = -d eta:

at eta = (k - 1)T:       lambda = (k - 1 + q)T - (k - 1)T = qT
at eta = (k - 1 + q)T:   lambda = 0

x[T(k - (1 - q))] = e^{AqT} x[(k - 1)T] + Int_0^{qT} e^{A lambda} d lambda B u[(k - 1)T]

x[T(k - (1 - q))] = Phi(qT) x(k - 1) + Gamma(qT) u(k - 1)

Note that u[T(k - (1 - q))] = u(k - 1 + q) = u(k - 1): with the ZOH, the input signal u(t) is sampled at time-step kT and the value of the sample is held through time-step (k + 1)T. So if we delay the signal by (k - 1) and then advance it by q, obtaining u(k - 1 + q), we are still capturing the value of the signal at time-step k - 1, i.e., u(k - 1).

Therefore

y(k) = C Phi(qT) x(k - 1) + [C Gamma(qT) + D] u(k - 1)

or, shifted one step,

y(k + 1) = C Phi(qT) x(k) + [C Gamma(qT) + D] u(k)

Appending y to the state vector gives the discrete-equivalent system:

[ x(k+1) ]   [ Phi(T)     0 ] [ x(k) ]   [ Gamma(T)         ]
[ y(k+1) ] = [ C Phi(qT)  0 ] [ y(k) ] + [ C Gamma(qT) + D  ] u(k)

y(k) = [ 0  I ] [ x(k); y(k) ] + [0] u(k)

with

A_d = [ Phi(T)     0 ]    B_d = [ Gamma(T)        ]    C_d = [ 0  I ]    D_d = 0
      [ C Phi(qT)  0 ]          [ C Gamma(qT) + D ]
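The augmented matrices above can be sketched directly from the derivation (illustrative, assumed helper names, not NI's code); y is appended to the state so that y(k+1) = C Phi(qT) x(k) + [C Gamma(qT) + D] u(k):

```python
import numpy as np
from scipy.linalg import expm

def gamma(A, B, a):
    """Gamma(a) = integral_0^a e^{A s} ds B."""
    n, r = A.shape[0], B.shape[1]
    M = np.zeros((n + r, n + r))
    M[:n, :n] = A
    M[:n, n:] = B
    return expm(M * a)[:n, n:]

def zoh_output_delay(A, B, C, D, T, q):
    """Discretize x' = Ax + Bu with output delay (1 - q)T, 0 <= q < 1."""
    n, r, p = A.shape[0], B.shape[1], C.shape[0]
    Phi, Gam = expm(A * T), gamma(A, B, T)
    Cq = C @ expm(A * q * T)              # C Phi(qT)
    Dq = C @ gamma(A, B, q * T) + D       # C Gamma(qT) + D
    Ad = np.block([[Phi, np.zeros((n, p))], [Cq, np.zeros((p, p))]])
    Bd = np.vstack([Gam, Dq])
    Cd = np.hstack([np.zeros((p, n)), np.eye(p)])
    Dd = np.zeros((p, r))
    return Ad, Bd, Cd, Dd
```

For q = 0 (a full one-sample output delay) the matrices reduce to Cq = C and Dq = D, i.e., the output state simply replays last step's undelayed output.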
Problem 4: ZOH discretization with vectorized output delay.

y(t + sigma) = Cx(t) + Du(t)

with a separate delay sigma_i on each output channel:

[ y_1(t + sigma_1) ]   [ C_1 ]          [ D_1 ]
[ y_2(t + sigma_2) ] = [ C_2 ] x(t)  +  [ D_2 ] u(t)
[       ...        ]   [ ... ]          [ ... ]
[ y_m(t + sigma_m) ]   [ C_m ]          [ D_m ]

where C_i and D_i denote the i-th rows of C and D. Written out componentwise:

y_1(t + sigma_1) = C_11 x_1(t) + C_12 x_2(t) + ... + C_1n x_n(t) + D_11 u_1(t) + D_12 u_2(t) + ... + D_1r u_r(t)
y_2(t + sigma_2) = C_21 x_1(t) + C_22 x_2(t) + ... + C_2n x_n(t) + D_21 u_1(t) + D_22 u_2(t) + ... + D_2r u_r(t)
...
y_m(t + sigma_m) = C_m1 x_1(t) + C_m2 x_2(t) + ... + C_mn x_n(t) + D_m1 u_1(t) + D_m2 u_2(t) + ... + D_mr u_r(t)

Equivalently, shifting each channel:

y_i(t) = C_i x(t - sigma_i) + D_i u(t - sigma_i),   i = 1, 2, ..., m

Define

sigma_i = (j_i - q_i)T,   j_i = 1, 2, 3, ...,   0 <= q_i < 1,   i = 1, 2, ..., m

If j_i > 1, the (j_i - 1) delay will be set to the discrete model at output channel i, independently after the discretization is achieved.

Now (taking j_i = 1),

y_i(kT) = C_i x[kT - (1 - q_i)T] + D_i u[kT - (1 - q_i)T]

But x(t) = e^{A(t - t_0)} x(t_0) + Int_{t_0}^{t} e^{A(t - eta)} B u(eta) d eta, so with t_0 = (k - 1)T and t = kT - (1 - q_i)T,

x[T(k - (1 - q_i))] = e^{A[kT - (1 - q_i)T - (k - 1)T]} x[(k - 1)T] + Int_{(k-1)T}^{kT-(1-q_i)T} e^{A[kT - (1 - q_i)T - eta]} B u(eta) d eta

Also, u[T(k - (1 - q_i))] = u(k - 1 + q_i) = u(k - 1).

Following the same derivation as in case (3), we get

x[k - (1 - q_i)] = Phi(q_i T) x(k - 1) + Gamma(q_i T) u(k - 1)

so

[ y_1(k) ]   [ C_1[Phi(q_1 T)x(k-1) + Gamma(q_1 T)u(k-1)] ]   [ D_1 u(k-1) ]
[ y_2(k) ] = [ C_2[Phi(q_2 T)x(k-1) + Gamma(q_2 T)u(k-1)] ] + [ D_2 u(k-1) ]
[   ...  ]   [                     ...                    ]   [     ...    ]
[ y_m(k) ]   [ C_m[Phi(q_m T)x(k-1) + Gamma(q_m T)u(k-1)] ]   [ D_m u(k-1) ]

           [ C_1 Phi(q_1 T) ]            [ C_1 Gamma(q_1 T) + D_1 ]
         = [ C_2 Phi(q_2 T) ] x(k-1)  +  [ C_2 Gamma(q_2 T) + D_2 ] u(k-1)
           [       ...      ]            [           ...          ]
           [ C_m Phi(q_m T) ]            [ C_m Gamma(q_m T) + D_m ]

or, shifted one step,

y(k + 1) = C' x(k) + D' u(k)

with

C' = [ C_1 Phi(q_1 T); C_2 Phi(q_2 T); ...; C_m Phi(q_m T) ]
D' = [ C_1 Gamma(q_1 T) + D_1; C_2 Gamma(q_2 T) + D_2; ...; C_m Gamma(q_m T) + D_m ]

Appending y to the state vector gives the discrete-equivalent system:

[ x(k+1) ]   [ Phi(T)  0 ] [ x(k) ]   [ Gamma(T) ]
[ y(k+1) ] = [   C'    0 ] [ y(k) ] + [    D'    ] u(k)

y(k) = [ 0  I ] [ x(k); y(k) ] + [0] u(k)
Problem 5: ZOH discretization with input and output delays.

y(t) = Cx(t - sigma) + Du(t - (tau + sigma))   <=>   y(t + sigma) = Cx(t) + Du(t - tau)

sigma = (j - q)T,   j = 1, 2, ...,   0 <= q < 1
tau = (l - m)T,     l = 1, 2, ...,   0 <= m < 1

If l > 1 and/or j > 1, the (l - 1) and/or (j - 1) delay will be set to the model independently after the discretization is achieved.

1. State equation: Following the same derivations as in case (1), we find that

x(k + 1) = Phi(T)x(k) + Gamma_1 u(k - 1) + Gamma_2 u(k)

Gamma_1 = Phi(mT) Gamma(T - mT)
Gamma_2 = Gamma(mT)
xi(k) = u(k - 1)

2. Observation equation: Following the same derivations as in case (3), we find that

y(kT) = Cx[kT - (1 - q)T] + Du[kT - (1 - m + 1 - q)T]
      = Cx[(k - 1 + q)T]*  + Du[(k - 2 + m + q)T]**

* x[(k - 1 + q)T] = ?

Recall,

x(t) = e^{A(t - t_0)} x(t_0) + Int_{t_0}^{t} e^{A(t - eta)} B u(eta - tau) d eta

with t_0 = (k - 1)T and t = kT - (1 - q)T:

x[(k - 1 + q)T] = e^{A[kT - T + qT - kT + T]} x[(k - 1)T] + Int_{(k-1)T}^{kT-(1-q)T} e^{A[kT - T + qT - eta]} B u(eta - (1 - m)T) d eta
               = e^{AqT} x[(k - 1)T] + I

Substituting lambda = kT - T + qT - eta (d lambda = -d eta):

at eta = (k - 1)T:        lambda = kT - T + qT - kT + T = qT
at eta = kT - (1 - q)T:   lambda = kT - T + qT - kT + T - qT = 0

I = Int_0^{qT} e^{A lambda} B u[kT - T + qT - lambda - (1 - m)T] d lambda = Int_0^{qT} e^{A lambda} B u[(k - 2 + q + m)T - lambda] d lambda

at lambda = 0:    the argument is (k - 2 + q + m)T
at lambda = qT:   the argument is (k - 2 + m)T

Note that

0 <= q, m < 1   implies   0 <= q + m < 2

case 1: 0 <= m + q < 1. The argument stays inside [(k - 2)T, (k - 1)T), so

I = Int_0^{qT} e^{A lambda} d lambda B u[(k - 2)T] = Gamma(qT) u[(k - 2)T]

case 2: 1 <= m + q < 2. The argument crosses (k - 1)T at lambda = (q + m - 1)T, so

I = Int_0^{(q+m-1)T} e^{A lambda} d lambda B u[(k - 1)T] + Int_{(q+m-1)T}^{qT} e^{A lambda} d lambda B u[(k - 2)T]

For the second integral, substitute lambda-bar = lambda - (q + m - 1)T:

at lambda = (q + m - 1)T:   lambda-bar = 0
at lambda = qT:             lambda-bar = qT - qT - mT + T = (1 - m)T

Int_{(q+m-1)T}^{qT} e^{A lambda} d lambda B = e^{A(q+m-1)T} Int_0^{(1-m)T} e^{A lambda-bar} d lambda-bar B = Phi[(q + m - 1)T] Gamma[(1 - m)T]

Therefore,

x[(k - 1 + q)T] = Phi(qT) x[(k - 1)T] + I

I = Gamma(qT) u[(k - 2)T],                                                                       0 <= m + q < 1
I = Gamma[(q + m - 1)T] u[(k - 1)T] + Phi[(q + m - 1)T] Gamma[(1 - m)T] u[(k - 2)T],             1 <= m + q < 2

** u[(k - 2 + q + m)T] = ?

case 1: 0 <= m + q < 1:   u[(k - 2 + m + q)T] = u[(k - 2)T]
case 2: 1 <= m + q < 2:   u[(k - 2 + m + q)T] = u[(k - 1)T]

case 1: 0 <= m + q < 1

y(kT) = C[Phi(qT) x[(k - 1)T] + Gamma(qT) u[(k - 2)T]] + D u[(k - 2)T]
y(kT) = C Phi(qT) x[(k - 1)T] + [C Gamma(qT) + D] u[(k - 2)T]

case 2: 1 <= m + q < 2

y(kT) = C Phi(qT) x[(k - 1)T] + [C Gamma[(q + m - 1)T] + D] u[(k - 1)T]
        + C Phi[(q + m - 1)T] Gamma[(1 - m)T] u[(k - 2)T]

Shifted one step:

y(k + 1) = C Phi(qT) x(k) + [C Gamma(qT) + D] u(k - 1),                                                           0 <= m + q < 1
y(k + 1) = C Phi(qT) x(k) + [C Gamma[(q + m - 1)T] + D] u(k) + C Phi[(q + m - 1)T] Gamma[(1 - m)T] u(k - 1),     1 <= m + q < 2

With the augmented state [x(k); xi(k); y(k)], xi(k) = u(k - 1):

case 1: 0 <= m + q < 1

[ x(k+1)  ]   [ Phi(T)      Gamma_1            0 ] [ x(k)  ]   [ Gamma_2 ]
[ xi(k+1) ] = [   0            0               0 ] [ xi(k) ] + [    I    ] u(k)
[ y(k+1)  ]   [ C Phi(qT)   C Gamma(qT) + D    0 ] [ y(k)  ]   [    0    ]

y(k) = [ 0  0  I ] [ x(k); xi(k); y(k) ] + [0] u(k)

case 2: 1 <= m + q < 2

[ x(k+1)  ]   [ Phi(T)      Gamma_1                                0 ] [ x(k)  ]   [ Gamma_2                  ]
[ xi(k+1) ] = [   0            0                                   0 ] [ xi(k) ] + [    I                     ] u(k)
[ y(k+1)  ]   [ C Phi(qT)   C Phi[(q+m-1)T] Gamma[(1-m)T]          0 ] [ y(k)  ]   [ C Gamma[(q+m-1)T] + D    ]

y(k) = [ 0  0  I ] [ x(k); xi(k); y(k) ] + [0] u(k)
Problem 6: ZoH discretization with vectorized input and output delays.

x'(t) = Ax(t) + Bu(t)
y(t + sigma) = Cx(t) + Du(t)

with per-channel input delays

tau_1 = (l_1 - P_1)T, tau_2 = (l_2 - P_2)T, ..., tau_r = (l_r - P_r)T,   l_j = 1, 2, ...,   0 <= P_j < 1,   j = 1, 2, ..., r

and per-channel output delays

sigma_1 = (h_1 - q_1)T, sigma_2 = (h_2 - q_2)T, ..., sigma_m = (h_m - q_m)T,   h_i = 1, 2, ...,   0 <= q_i < 1,   i = 1, 2, ..., m

1. State equation: Following case (2),

x(k + 1) = Phi(T)x(k) + Gamma_1 xi(k) + Gamma_2 u(k),   xi(k) = u(k - 1)

Gamma_1 = [ Gamma_{1,1}  Gamma_{1,2}  ...  Gamma_{1,r} ],   Gamma_{1,j} = Phi(P_j T) Gamma_j(T - P_j T),   j = 1, 2, ..., r

Phi(P_j T) = e^{A P_j T},   Gamma_j(T - P_j T) = Int_0^{T - P_j T} e^{A eta} d eta b_j

Gamma_2 = [ Gamma_{2,1}  Gamma_{2,2}  ...  Gamma_{2,r} ],   Gamma_{2,j} = Gamma_j(P_j T) = Int_0^{P_j T} e^{A eta} d eta b_j

2. Observation equation: Componentwise,

y_i(t + sigma_i) = c_{i1} x_1(t) + c_{i2} x_2(t) + ... + c_{in} x_n(t) + d_{i1} u_1(t - tau_1) + ... + d_{ir} u_r(t - tau_r),   i = 1, ..., m

or, shifting each output channel,

y_i(t) = C_i x(t - sigma_i) + D_i u(t - tau - sigma_i)

where C_i and D_i are the i-th rows of C and D, and u(t - tau - sigma_i) denotes the vector [u_1(t - tau_1 - sigma_i); ...; u_r(t - tau_r - sigma_i)]. Sampling (with h_i = 1):

y_i(kT) = C_i x[kT - (1 - q_i)T]* + D_i u[kT - (1 - P)T - (1 - q_i)T]**

* x[kT - (1 - q_i)T] = ?,   i = 1, 2, ..., m

Recall x(t) = e^{A(t - t_0)} x(t_0) + Int_{t_0}^{t} e^{A(t - eta)} B u(eta) d eta; with t_0 = (k - 1)T and t = kT - (1 - q_i)T, and changing variables lambda = kT - T + q_i T - eta:

at eta = (k - 1)T:          lambda = q_i T
at eta = kT - (1 - q_i)T:   lambda = 0

x[(k - 1 + q_i)T] = e^{A q_i T} x[(k - 1)T] + Int_0^{q_i T} e^{A lambda} B u[kT - T + q_i T - lambda - (1 - P)T] d lambda
                 = Phi(q_i T) x[(k - 1)T] + I_i

Expanding B u[.] across the input channels,

I_i = I_{i,1} + I_{i,2} + ... + I_{i,r}

I_{i,j} = Int_0^{q_i T} e^{A lambda} b_j u_j[(k - 2 + q_i + P_j)T - lambda] d lambda,   j = 1, 2, ..., r,   i = 1, 2, ..., m

As in case (5), each integral splits according to whether the argument crosses a sample instant:

I_{i,j} = Gamma_j(q_i T) u_j[(k - 2)T],                                                                              0 <= P_j + q_i < 1
I_{i,j} = Gamma_j[(q_i + P_j - 1)T] u_j[(k - 1)T] + Phi[(q_i + P_j - 1)T] Gamma_j[(1 - P_j)T] u_j[(k - 2)T],        1 <= P_j + q_i < 2

Collecting terms,

x[(k - 1 + q_i)T] = Phi(q_i T) x[(k - 1)T] + Gamma_{i,3} u[(k - 2)T] + Gamma_{i,4} u[(k - 1)T]

Gamma_{i,3} = [ Gamma_{i,3,1}  Gamma_{i,3,2}  ...  Gamma_{i,3,r} ]

Gamma_{i,3,j} = Gamma_j(q_i T),                                  0 <= P_j + q_i < 1
Gamma_{i,3,j} = Phi[(q_i + P_j - 1)T] Gamma_j[(1 - P_j)T],       1 <= P_j + q_i < 2

Gamma_{i,4} = [ Gamma_{i,4,1}  Gamma_{i,4,2}  ...  Gamma_{i,4,r} ]

Gamma_{i,4,j} = 0,                              0 <= P_j + q_i < 1
Gamma_{i,4,j} = Gamma_j[(q_i + P_j - 1)T],      1 <= P_j + q_i < 2

** u[kT - (1 - P)T - (1 - q_i)T] = ?,   i = 1, 2, ..., m

Componentwise, u_j[kT - (2 - P_j - q_i)T]:

Case 1: 0 <= P_j + q_i < 1:   u_j[kT - (2 - P_j - q_i)T] = u_j[(k - 2)T]
Case 2: 1 <= P_j + q_i < 2:   u_j[kT - (2 - P_j - q_i)T] = u_j[(k - 1)T]

so

u[kT - (1 - P)T - (1 - q_i)T] = V_i u[(k - 2)T] + W_i u[(k - 1)T]

where V_i = diag(V_{i,11}, V_{i,22}, ..., V_{i,rr}) and W_i = diag(W_{i,11}, W_{i,22}, ..., W_{i,rr}), with

V_{i,jj} = 1,  W_{i,jj} = 0,   0 <= P_j + q_i < 1
V_{i,jj} = 0,  W_{i,jj} = 1,   1 <= P_j + q_i < 2,   j = 1, 2, ..., r

Finally,

y_i(kT) = C_i [Phi(q_i T) x[(k - 1)T] + Gamma_{i,3} u[(k - 2)T] + Gamma_{i,4} u[(k - 1)T]] + D_i [V_i u[(k - 2)T] + W_i u[(k - 1)T]]

Stacking the m output channels:

y(kT) = C' x[(k - 1)T] + D'_1 u[(k - 2)T] + D'_2 u[(k - 1)T]

C'   = [ C_1 Phi(q_1 T); C_2 Phi(q_2 T); ...; C_m Phi(q_m T) ]
D'_1 = [ C_1 Gamma_{1,3} + D_1 V_1; C_2 Gamma_{2,3} + D_2 V_2; ...; C_m Gamma_{m,3} + D_m V_m ]
D'_2 = [ C_1 Gamma_{1,4} + D_1 W_1; C_2 Gamma_{2,4} + D_2 W_2; ...; C_m Gamma_{m,4} + D_m W_m ]

Discrete-Equivalent System (noting that xi(k) = u(k - 1), so D'_1 multiplies xi(k)):

[ x(k+1)  ]   [ Phi(T)  Gamma_1  0 ] [ x(k)  ]   [ Gamma_2 ]
[ xi(k+1) ] = [   0        0     0 ] [ xi(k) ] + [    I    ] u(k)
[ y(k+1)  ]   [   C'      D'_1   0 ] [ y(k)  ]   [   D'_2  ]

y(k) = [ 0  0  I ] [ x(k); xi(k); y(k) ] + [0] u(k)
Note
[1] Jerry Mendel, Lessons in Estimation Theory for Signal Processing, Communications, and Control, Prentice Hall, 1995.

Problem Statement
Given the discrete-time LTV stochastic state-space system:

x(k + 1) = A(k)x(k) + B(k)u(k) + G(k)w(k)
y(k) = C(k)x(k) + D(k)u(k) + H(k)w(k) + v(k)

where the noise sequences {w(k)} and {v(k)} are non-zero-mean, i.e.

E{w(k)} = m_w(k) != 0
E{v(k)} = m_v(k) != 0

Solution
Define the noise sequence:

w_1(k) = w(k) - m_w(k)

E{w_1(k)} = E{w(k)} - m_w(k) = 0

so w_1(k) is a zero-mean noise sequence. Then

x(k + 1) = A(k)x(k) + B(k)u(k) + G(k)w_1(k) + G(k)m_w(k)

Define u_1(k) = B(k)u(k) + G(k)m_w(k):

x(k + 1) = A(k)x(k) + u_1(k) + G(k)w_1(k)

Define the noise sequence:

v_1(k) = v(k) - m_v(k)

E{v_1(k)} = E{v(k)} - m_v(k) = 0

so v_1(k) is a zero-mean noise sequence. Then

y(k) = C(k)x(k) + D(k)u(k) + H(k)w_1(k) + H(k)m_w(k) + v_1(k) + m_v(k)

Define y_1(k) = y(k) - D(k)u(k) - H(k)m_w(k) - m_v(k):

y_1(k) = C(k)x(k) + H(k)w_1(k) + v_1(k)
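The mean-removal transformation above is exact: absorbing G(k)m_w(k) into the deterministic input leaves the state trajectory unchanged. A quick numerical check (illustrative only, with an arbitrary example system):

```python
import numpy as np

rng = np.random.default_rng(1)
A, B, G = np.diag([0.9, 0.5]), np.ones((2, 1)), np.eye(2)
mw = np.array([0.3, -0.2])           # known non-zero noise mean
x = x1 = np.zeros(2)
for k in range(20):
    u = np.array([np.sin(k)])
    w = rng.standard_normal(2) + mw  # E{w} = mw != 0
    w1 = w - mw                      # zero-mean noise sequence
    u1 = B @ u + G @ mw              # deterministic part absorbs the mean
    x_next = A @ x + B @ u + G @ w   # original recursion
    x1_next = A @ x1 + u1 + G @ w1   # transformed recursion
    assert np.allclose(x_next, x1_next)
    x, x1 = x_next, x1_next
```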
Problem Statement
Given the discrete-time stochastic state-space system:

x_{k+1} = A_k x_k + B_k u_k + G_k w_k
y_k = C_k x_k + D_k u_k + H_k w_k + v_k

where the noise sequences are zero-mean,

E{w_k} = 0 for all k
E{v_k} = 0 for all k

with second-order statistics given in terms of the Kronecker delta (delta_{kj} = 1 for k = j, delta_{kj} = 0 for k != j):

E{w_k w_j^T} = Q_k delta_{kj},   E{v_k v_j^T} = R_k delta_{kj},   E{w_k v_j^T} = N_k delta_{kj}

We need to find, for every time-step k, the best linear predicted and filtered state estimates of the vector x_k (denoted as x-hat_{k+1|k} and x-hat_{k|k}, respectively) in terms of the measurement sequence

Y_k = {y_i}, i = 0, ..., k = {y_0, y_1, ..., y_k}

the sequence of measurements up to and including time k.

Recall that the best linear estimate of a random vector x in terms of a random vector y is defined as:

x-hat = E*{x|y} = m_x + Sigma_{xy} Sigma_{yy}^{-1} [y - m_y]

m_x = E{x},   m_y = E{y},   Sigma_{xy} = Cov{x, y},   Sigma_{yy} = Cov{y, y}

Denote:

x-hat_{k+1|k} = E*{x_{k+1}|Y_k}   best linear predicted state estimate
x-hat_{k|k} = E*{x_k|Y_k}         best linear filtered state estimate
x-tilde_{k|k-1} = x_k - x-hat_{k|k-1}   estimation error
Sigma_{k|k-1} = Cov{x-tilde_{k|k-1}, x-tilde_{k|k-1}}   covariance of estimation error

Solution:
The best linear estimator x-hat_{k+1|k} of the state vector x_{k+1} in terms of Y_k is:

x-hat_{k+1|k} = A_k x-hat_{k|k-1} + B_k u_k + L_k [y_k - C_k x-hat_{k|k-1} - D_k u_k]

L_k = [A_k Sigma_{k|k-1} C_k^T + G_k Q_k H_k^T + G_k N_k] [C_k Sigma_{k|k-1} C_k^T + H_k Q_k H_k^T + H_k N_k + N_k^T H_k^T + R_k]^{-1}

I.C.: x-hat_{0|-1} = E{x_0}

Sigma_{k+1|k} = Cov{x-tilde_{k+1|k}, x-tilde_{k+1|k}}
= [A_k Sigma_{k|k-1} A_k^T + G_k Q_k G_k^T]
  - [A_k Sigma_{k|k-1} C_k^T + G_k Q_k H_k^T + G_k N_k]
    [C_k Sigma_{k|k-1} C_k^T + H_k Q_k H_k^T + H_k N_k + N_k^T H_k^T + R_k]^{-1}
    [A_k Sigma_{k|k-1} C_k^T + G_k Q_k H_k^T + G_k N_k]^T

I.C.: Sigma_{0|-1} = Cov{x-tilde_{0|-1}, x-tilde_{0|-1}} = Pi_0 >= 0

The filter gain is

M_k = [Sigma_{k|k-1} C_k^T] [C_k Sigma_{k|k-1} C_k^T + H_k Q_k H_k^T + H_k N_k + N_k^T H_k^T + R_k]^{-1}

and the filtered covariance is

Sigma_{k|k} = Cov{x-tilde_{k|k}, x-tilde_{k|k}} = Sigma_{k|k-1} - M_k [C_k Sigma_{k|k-1}]
Derivation:

y-hat_{k|k-1} = E*{y_k|Y_{k-1}} = E*{[C_k x_k + D_k u_k + H_k w_k + v_k]|Y_{k-1}}

1: E*{w_k|Y_{k-1}} = E*{w_k|y_0, y_1, ..., y_{k-1}} = 0, and likewise E*{v_k|Y_{k-1}} = 0, since w_k and v_k are uncorrelated with the measurements {y_i}, i = 0, ..., k-1.

2: The innovation is

y-tilde_{k|k-1} = y_k - y-hat_{k|k-1}
= C_k x_k + D_k u_k + H_k w_k + v_k - C_k x-hat_{k|k-1} - D_k u_k
= C_k x-tilde_{k|k-1} + H_k w_k + v_k

v_k is uncorrelated with x_k, x-hat_{k|k-1}, x-tilde_{k|k-1};
w_k is uncorrelated with x_k, x-hat_{k|k-1}, x-tilde_{k|k-1} -- in particular, w_k is uncorrelated with x_k, since x_k = A_{k-1} x_{k-1} + B_{k-1} u_{k-1} + G_{k-1} w_{k-1} depends only on noise samples up to k - 1.

Cov{y-tilde_{k|k-1}, y-tilde_{k|k-1}} = E{[C_k x-tilde_{k|k-1} + H_k w_k + v_k][C_k x-tilde_{k|k-1} + H_k w_k + v_k]^T}
= E{C_k x-tilde x-tilde^T C_k^T + C_k x-tilde w_k^T H_k^T + C_k x-tilde v_k^T
   + H_k w_k x-tilde^T C_k^T + H_k w_k w_k^T H_k^T + H_k w_k v_k^T
   + v_k x-tilde^T C_k^T + v_k w_k^T H_k^T + v_k v_k^T}

Cov{y-tilde_{k|k-1}, y-tilde_{k|k-1}} = C_k Sigma_{k|k-1} C_k^T + H_k Q_k H_k^T + H_k N_k + N_k^T H_k^T + R_k

Recall the updating formula:

x-hat_{k+1} = E*{x|y_0, y_1, ..., y_k, y_{k+1}} = x-hat_k + E*{x-tilde_k|y-tilde_{k+1|k}}

where:

x-hat_k = E*{x|y_0, y_1, ..., y_k}
x-tilde_k = x - x-hat_k
y-tilde_{k+1|k} = y_{k+1} - E*{y_{k+1}|y_0, y_1, ..., y_k}

Applying it to the predictor:

x-hat_{k+1|k} = E*{x_{k+1}|y_0, y_1, ..., y_k} = x-hat_{k+1|k-1} + E*{x-tilde_{k+1|k-1}|y-tilde_{k|k-1}} = (i) + (ii)

i: x-hat_{k+1|k-1} = E*{x_{k+1}|Y_{k-1}} = E*{[A_k x_k + B_k u_k + G_k w_k]|Y_{k-1}}
= A_k E*{x_k|Y_{k-1}} + B_k E*{u_k|Y_{k-1}} + G_k E*{w_k|Y_{k-1}}

x-hat_{k+1|k-1} = A_k x-hat_{k|k-1} + B_k u_k

ii: E*{x-tilde_{k+1|k-1}|y-tilde_{k|k-1}} = E{x-tilde_{k+1|k-1}} + Cov{x-tilde_{k+1|k-1}, y-tilde_{k|k-1}} Cov{y-tilde_{k|k-1}, y-tilde_{k|k-1}}^{-1} [y-tilde_{k|k-1} - E{y-tilde_{k|k-1}}]

but E{x-tilde_{k+1|k-1}} = E{x_{k+1} - x-hat_{k+1|k-1}}
= E{x_{k+1} - A_k x-hat_{k|k-1} - B_k u_k}
= E{A_k x_k + B_k u_k + G_k w_k - A_k x-hat_{k|k-1} - B_k u_k}
= A_k E{x-tilde_{k|k-1}} + G_k E{w_k} = 0

so

E*{x-tilde_{k+1|k-1}|y-tilde_{k|k-1}} = Cov{x-tilde_{k+1|k-1}, y-tilde_{k|k-1}} Cov{y-tilde_{k|k-1}, y-tilde_{k|k-1}}^{-1} y-tilde_{k|k-1}

1: Cov{x-tilde_{k+1|k-1}, y-tilde_{k|k-1}} = Cov{[x_{k+1} - x-hat_{k+1|k-1}], y-tilde_{k|k-1}} = (1.a) - (1.b)

1.a: Cov{x_{k+1}, y-tilde_{k|k-1}} = Cov{x_{k+1}, x-tilde_{k|k-1}} C_k^T + Cov{x_{k+1}, w_k} H_k^T + Cov{x_{k+1}, v_k}
= A_k Cov{x_k, x-tilde_{k|k-1}} C_k^T + B_k Cov{u_k, x-tilde_{k|k-1}} C_k^T + G_k Cov{w_k, x-tilde_{k|k-1}} C_k^T
  + Cov{A_k x_k + B_k u_k + G_k w_k, w_k} H_k^T + Cov{A_k x_k + B_k u_k + G_k w_k, v_k}
= A_k Cov{x_k, x-tilde_{k|k-1}} C_k^T + B_k Cov{u_k, x-tilde_{k|k-1}} C_k^T + G_k Cov{w_k, x-tilde_{k|k-1}} C_k^T + G_k Q_k H_k^T + G_k N_k

1.b: Cov{x-hat_{k+1|k-1}, y-tilde_{k|k-1}} = Cov{A_k x-hat_{k|k-1} + B_k u_k, C_k x-tilde_{k|k-1} + H_k w_k + v_k}
= A_k Cov{x-hat_{k|k-1}, x-tilde_{k|k-1}} C_k^T
  + B_k Cov{u_k, x-tilde_{k|k-1}} C_k^T
  + A_k Cov{x-hat_{k|k-1}, w_k} H_k^T
  + B_k Cov{u_k, w_k} H_k^T
  + A_k Cov{x-hat_{k|k-1}, v_k}
  + B_k Cov{u_k, v_k}
= A_k Cov{x-hat_{k|k-1}, x-tilde_{k|k-1}} C_k^T + B_k Cov{u_k, x-tilde_{k|k-1}} C_k^T + A_k Cov{x-hat_{k|k-1}, w_k} H_k^T

(the terms involving u_k vanish because u_k is deterministic; x-hat_{k|k-1} is uncorrelated with v_k)

1: Cov{x-tilde_{k+1|k-1}, y-tilde_{k|k-1}} = (1.a) - (1.b)
= A_k Cov{x_k, x-tilde_{k|k-1}} C_k^T
  + B_k Cov{u_k, x-tilde_{k|k-1}} C_k^T
  + G_k Cov{w_k, x-tilde_{k|k-1}} C_k^T
  + G_k Q_k H_k^T + G_k N_k
  - A_k Cov{x-hat_{k|k-1}, x-tilde_{k|k-1}} C_k^T
  - B_k Cov{u_k, x-tilde_{k|k-1}} C_k^T
  - A_k Cov{x-hat_{k|k-1}, w_k} H_k^T
= A_k Cov{x-tilde_{k|k-1}, x-tilde_{k|k-1}} C_k^T
  + G_k Cov{w_k, x-tilde_{k|k-1}} C_k^T
  + G_k Q_k H_k^T + G_k N_k
  - A_k Cov{x-hat_{k|k-1}, w_k} H_k^T

and since w_k is uncorrelated with x-tilde_{k|k-1} and with x-hat_{k|k-1}:

Cov{x-tilde_{k+1|k-1}, y-tilde_{k|k-1}} = A_k Sigma_{k|k-1} C_k^T + G_k Q_k H_k^T + G_k N_k

2: Cov{y-tilde_{k|k-1}, y-tilde_{k|k-1}} = C_k Sigma_{k|k-1} C_k^T + H_k Q_k H_k^T + H_k N_k + N_k^T H_k^T + R_k

3: y-tilde_{k|k-1} = y_k - y-hat_{k|k-1} = y_k - C_k x-hat_{k|k-1} - D_k u_k

Combining (i) and (ii):

x-hat_{k+1|k} = A_k x-hat_{k|k-1} + B_k u_k
  + [A_k Sigma_{k|k-1} C_k^T + G_k Q_k H_k^T + G_k N_k]
    [C_k Sigma_{k|k-1} C_k^T + H_k Q_k H_k^T + H_k N_k + N_k^T H_k^T + R_k]^{-1}
    [y_k - C_k x-hat_{k|k-1} - D_k u_k]

L_k = [A_k Sigma_{k|k-1} C_k^T + G_k Q_k H_k^T + G_k N_k] [C_k Sigma_{k|k-1} C_k^T + H_k Q_k H_k^T + H_k N_k + N_k^T H_k^T + R_k]^{-1}

Now, let's derive the covariance of estimation error evolution equation.
Recall the updating formula:

Cov{x-tilde_{k+1}, x-tilde_{k+1}} = Cov{x-tilde_k, x-tilde_k} - Cov{x-tilde_k, y-tilde_{k+1|k}} Cov{y-tilde_{k+1|k}, y-tilde_{k+1|k}}^{-1} Cov{x-tilde_k, y-tilde_{k+1|k}}^T

Cov{x-tilde_{k+1|k}, x-tilde_{k+1|k}} = Cov{x-tilde_{k+1|k-1}, x-tilde_{k+1|k-1}}*
  - Cov{x-tilde_{k+1|k-1}, y-tilde_{k|k-1}} Cov{y-tilde_{k|k-1}, y-tilde_{k|k-1}}^{-1} Cov{x-tilde_{k+1|k-1}, y-tilde_{k|k-1}}^T

* Cov{x-tilde_{k+1|k-1}, x-tilde_{k+1|k-1}} = ?

But

x-tilde_{k+1|k-1} = x_{k+1} - x-hat_{k+1|k-1}
= A_k x_k + B_k u_k + G_k w_k - A_k x-hat_{k|k-1} - B_k u_k
= A_k x-tilde_{k|k-1} + G_k w_k

Cov{x-tilde_{k+1|k-1}, x-tilde_{k+1|k-1}} = Cov{[A_k x-tilde_{k|k-1} + G_k w_k], [A_k x-tilde_{k|k-1} + G_k w_k]}
= A_k E{x-tilde x-tilde^T} A_k^T
  + A_k E{x-tilde w_k^T} G_k^T
  + G_k E{w_k x-tilde^T} A_k^T
  + G_k E{w_k w_k^T} G_k^T

* Cov{x-tilde_{k+1|k-1}, x-tilde_{k+1|k-1}} = A_k Sigma_{k|k-1} A_k^T + G_k Q_k G_k^T

Sigma_{k+1|k} = Cov{x-tilde_{k+1|k}, x-tilde_{k+1|k}}
= [A_k Sigma_{k|k-1} A_k^T + G_k Q_k G_k^T]
  - [A_k Sigma_{k|k-1} C_k^T + G_k Q_k H_k^T + G_k N_k]
    [C_k Sigma_{k|k-1} C_k^T + H_k Q_k H_k^T + H_k N_k + N_k^T H_k^T + R_k]^{-1}
    [A_k Sigma_{k|k-1} C_k^T + G_k Q_k H_k^T + G_k N_k]^T

For the filtered estimate, apply the updating formula with x = x_k:

x-hat_{k|k} = x-hat_{k|k-1} + E*{x-tilde_{k|k-1}|y-tilde_{k|k-1}}**

** E*{x-tilde_{k|k-1}|y-tilde_{k|k-1}} = E{x-tilde_{k|k-1}} + Cov{x-tilde_{k|k-1}, y-tilde_{k|k-1}} Cov{y-tilde_{k|k-1}, y-tilde_{k|k-1}}^{-1} [y-tilde_{k|k-1} - E{y-tilde_{k|k-1}}]
= Cov{x-tilde_{k|k-1}, y-tilde_{k|k-1}} Cov{y-tilde_{k|k-1}, y-tilde_{k|k-1}}^{-1} y-tilde_{k|k-1}

4: Cov{x-tilde_{k|k-1}, y-tilde_{k|k-1}} = E{x-tilde_{k|k-1} [y_k - y-hat_{k|k-1}]^T}
= E{x-tilde_{k|k-1} [C_k x-tilde_{k|k-1} + H_k w_k + v_k]^T}
= E{x-tilde x-tilde^T} C_k^T + E{x-tilde w_k^T} H_k^T + E{x-tilde v_k^T}
= Sigma_{k|k-1} C_k^T

Therefore:

x-hat_{k|k} = x-hat_{k|k-1} + Sigma_{k|k-1} C_k^T [C_k Sigma_{k|k-1} C_k^T + H_k Q_k H_k^T + H_k N_k + N_k^T H_k^T + R_k]^{-1} [y_k - C_k x-hat_{k|k-1} - D_k u_k]

M_k = Sigma_{k|k-1} C_k^T [C_k Sigma_{k|k-1} C_k^T + H_k Q_k H_k^T + H_k N_k + N_k^T H_k^T + R_k]^{-1}

Now, let's find the covariance matrix of the current filtered state estimation error.
Recall the updating formula:

Cov{x-tilde_{k+1}, x-tilde_{k+1}} = Cov{x-tilde_k, x-tilde_k} - Cov{x-tilde_k, y-tilde_{k+1|k}} Cov{y-tilde_{k+1|k}, y-tilde_{k+1|k}}^{-1} Cov{y-tilde_{k+1|k}, x-tilde_k}

Sigma_{k|k} = Cov{x-tilde_{k|k}, x-tilde_{k|k}}
= Cov{x-tilde_{k|k-1}, x-tilde_{k|k-1}} - Cov{x-tilde_{k|k-1}, y-tilde_{k|k-1}} Cov{y-tilde_{k|k-1}, y-tilde_{k|k-1}}^{-1} Cov{y-tilde_{k|k-1}, x-tilde_{k|k-1}}

Sigma_{k|k} = Sigma_{k|k-1} - [Sigma_{k|k-1} C_k^T] [C_k Sigma_{k|k-1} C_k^T + H_k Q_k H_k^T + H_k N_k + N_k^T H_k^T + R_k]^{-1} [Sigma_{k|k-1} C_k^T]^T

Note: The preceding derivations assume familiarity with basic concepts of estimation theory and linear algebra. Much of the derivation follows the developments outlined in reference [1] above.
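One combined predictor/filter step using the gains L_k and M_k derived above can be sketched as follows (illustrative, not NI's implementation; time-invariant matrices for brevity), for the system x_{k+1} = A x_k + B u_k + G w_k, y_k = C x_k + D u_k + H w_k + v_k with E{w w^T} = Q, E{v v^T} = R, E{w v^T} = N:

```python
import numpy as np

def kalman_step(A, B, C, D, G, H, Q, R, N, x_pred, P_pred, u, y):
    """Given x_pred = xhat_{k|k-1} and P_pred = Sigma_{k|k-1}, return the
    filtered estimate/covariance at k and the prediction for k+1."""
    # Innovation and its covariance: C Sigma C^T + HQH^T + HN + N^T H^T + R
    S = C @ P_pred @ C.T + H @ Q @ H.T + H @ N + N.T @ H.T + R
    innov = y - C @ x_pred - D @ u
    # Filter gain M_k and filtered quantities
    M = P_pred @ C.T @ np.linalg.inv(S)
    x_filt = x_pred + M @ innov
    P_filt = P_pred - M @ C @ P_pred
    # Predictor gain L_k and one-step prediction
    W = A @ P_pred @ C.T + G @ Q @ H.T + G @ N
    L = W @ np.linalg.inv(S)
    x_next = A @ x_pred + B @ u + L @ innov
    P_next = A @ P_pred @ A.T + G @ Q @ G.T - L @ W.T
    return x_filt, P_filt, x_next, P_next
```

With H = 0 and N = 0 this reduces to the standard Kalman filter recursion.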
The continuous algebraic Riccati equation with a cross-weighting term N is

A-bar^T X + X A-bar - X BR^{-1}B^T X + Q - NR^{-1}N^T = 0,   A-bar = A - BR^{-1}N^T

Assumptions
The following assumptions will be assumed to be true:

R = R^T > 0
The positive-definiteness assumption is needed since R needs to be non-singular for R^{-1} to exist.

Q - NR^{-1}N^T >= 0
This condition is needed since the term Q - NR^{-1}N^T plays the role of H >= 0 in the more general CARE structure given by:

A^T X + XA - XGX + H = 0
H = H^T >= 0
G = G^T >= 0

Another way of looking at this condition is as follows: the block matrix

[ Q    N ]
[ N^T  R ]

must be >= 0; since det[R] != 0, this holds exactly when the Schur complement det[Q - NR^{-1}N^T] condition Q - NR^{-1}N^T >= 0 is satisfied.

Q = Q^T

With A-bar = A - BR^{-1}N^T, G = BR^{-1}B^T, and H-bar = Q - NR^{-1}N^T, the associated Hamiltonian matrix is

H = [ A - BR^{-1}N^T        -BR^{-1}B^T           ]   [ A-bar    -G       ]
    [ -(Q - NR^{-1}N^T)     -(A - BR^{-1}N^T)^T   ] = [ -H-bar   -A-bar^T ]

Algorithm
General Outline
1. Form the matrix pencil L - lambda*M, with M = I_{2n x 2n} and

L = [ A - BR^{-1}N^T      -BR^{-1}B^T          ]
    [ -(Q - NR^{-1}N^T)   -(A - BR^{-1}N^T)^T  ]

2. Reduce the pencil to generalized Schur form, L-bar - lambda*M-bar, with M-bar upper-triangular and L-bar quasi-upper-triangular, re-ordering so that the stable generalized eigenvalues appear first.

3. Partition the accumulated orthogonal transformation into n x n blocks:

U = [ U_11  U_12 ]
    [ U_21  U_22 ]

The solution is X = U_21 U_11^{-1}, the feedback gain is

K = R^{-1} [B^T X + N^T]

and the residual of the computed solution is

Residual = A^T X + XA - [XB + N] R^{-1} [XB + N]^T + Q

Algorithm adopted from:
[1] Arnold, W. F. III and Laub, A. J., "Generalized Eigenproblem Algorithms and Software for Algebraic Riccati Equations," Proc. IEEE, vol. 72, no. 12, pp. 1746-1754, Dec. 1984.
[2] Laub, A. J., "A Schur Method for Solving Algebraic Riccati Equations," IEEE Transactions on Automatic Control, vol. AC-24, no. 6, pp. 913-921, Dec. 1979.
[3] John Bay, Fundamentals of Linear State Space Systems, McGraw-Hill, 1999.
The discrete algebraic Riccati equation (DARE) with cross-weighting term N is

X = A^T XA - [A^T XB + N][B^T XB + R]^{-1}[A^T XB + N]^T + Q

or, in terms of A-bar = A - BR^{-1}N^T,

X = A-bar^T X A-bar - A-bar^T XB [B^T XB + R]^{-1} B^T X A-bar - NR^{-1}N^T + Q

Assumptions:
The following assumptions will be assumed true:

R = R^T > 0
Q - NR^{-1}N^T >= 0, i.e., the block matrix [Q N; N^T R] is >= 0 with det R != 0
Q = Q^T

(the same conditions as in the continuous case, with H = H^T >= 0 and G = G^T >= 0 in the general structure)

Algorithm:
General Outline:
1. Form the matrix pencil L - lambda*M, with

L = [ A - BR^{-1}N^T       0 ]     M = [ I    BR^{-1}B^T           ]
    [ -(Q - NR^{-1}N^T)    I ]         [ 0    (A - BR^{-1}N^T)^T   ]

2. Reduce the pencil to generalized Schur form,

V [L - lambda*M] U = L-bar - lambda*M-bar

with M-bar upper-triangular and L-bar quasi-upper-triangular.

3. The matrix U can be partitioned into the blocks

U = [ U_11  U_12 ]
    [ U_21  U_22 ]

and with certain re-ordering to ensure that the eigenvalues are partitioned into those inside the unit circle and those outside the unit circle (refer to the reference papers for details). Then, taking the output matrix

Z = [ Z_11  Z_12 ]
    [ Z_21  Z_22 ]

Solution: X = Z_21 Z_11^{-1}

The gain matrix is defined as:

K = [B^T XB + R]^{-1}[B^T XA + N^T]

The closed-loop spectrum can be derived as follows:

lambda = eig(A - BK) = closed-loop spectrum

where ||X|| = norm of the computed Riccati solution, and

Residual = norm of the DARE that we achieve upon substituting X for its solution
         = ||A^T XA - X - [A^T XB + N][B^T XB + R]^{-1}[A^T XB + N]^T + Q||

Algorithm adapted from:
Arnold, W. F. III and Laub, A. J., "Generalized Eigenproblem Algorithms and Software for Algebraic Riccati Equations," Proc. IEEE, vol. 72, no. 12, pp. 1746-1754, Dec. 1984.
Laub, A. J., "A Schur Method for Solving Algebraic Riccati Equations," IEEE Transactions on Automatic Control, vol. AC-24, no. 6, pp. 913-921, Dec. 1979.
Bay, John, Fundamentals of Linear State Space Systems, McGraw-Hill, 1999.
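The DARE form above, including the cross-weighting term N, can be checked against SciPy's solver (a sketch with an arbitrary example system; SciPy passes the cross term via the `s` argument):

```python
import numpy as np
from scipy.linalg import solve_discrete_are

A = np.array([[1.0, 0.1], [0.0, 0.9]])
B = np.array([[0.0], [0.1]])
Q = np.eye(2)
R = np.array([[1.0]])
N = np.array([[0.1], [0.0]])

X = solve_discrete_are(A, B, Q, R, s=N)

# Residual of A^T X A - X - [A^T X B + N][B^T X B + R]^{-1}[A^T X B + N]^T + Q
W = A.T @ X @ B + N
res = A.T @ X @ A - X - W @ np.linalg.inv(B.T @ X @ B + R) @ W.T + Q
assert np.linalg.norm(res) < 1e-8

# Gain and closed-loop spectrum: all eigenvalues of A - BK inside the unit circle
K = np.linalg.inv(B.T @ X @ B + R) @ (B.T @ X @ A + N.T)
assert np.all(np.abs(np.linalg.eigvals(A - B @ K)) < 1.0)
```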
Problem: given a symmetric positive semi-definite matrix A, find a factor B such that A = B^T B.

Proposed solution:
Take the singular value decomposition of A:

A = USV^T

For symmetric positive semi-definite A, U = V, so A = USU^T, and we may take

B = S^{1/2} U^T

so that B^T B = U S^{1/2} S^{1/2} U^T = USU^T = A.

Notes:
The FRF is similar to the Cholesky decomposition; however, the Cholesky decomposition restricts the input matrix to be positive-definite, whereas we may be interested in finding the FRF for a positive semi-definite matrix.
The FRF is similar to a matrix square root; however, if we use the matrix square root decomposition such that A = A^{1/2} A^{1/2}, we are inducing more computational load, since the matrix square root VI uses the eigenvalue decomposition, which is more expensive than the SVD.
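The SVD-based factorization described above is a one-liner in practice (a sketch, with an arbitrary example matrix): for symmetric positive semi-definite A, B = S^{1/2} U^T satisfies B^T B = A, even when A is singular.

```python
import numpy as np

def frf(A):
    """Factor a symmetric PSD matrix as A = B^T B via the SVD."""
    U, s, Vt = np.linalg.svd(A)       # for symmetric PSD A, U equals V
    return np.sqrt(s)[:, None] * U.T  # B = S^{1/2} U^T

M = np.array([[1.0, 2.0], [0.0, 1.0], [1.0, 1.0]])
A = M.T @ M                           # symmetric PSD example
B = frf(A)
assert np.allclose(B.T @ B, A)
```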
National Instruments, NI, ni.com, and LabVIEW are trademarks of National Instruments Corporation. Refer to the Terms of Use section on ni.com/legal for more information about National Instruments trademarks. Other product and company names mentioned herein are trademarks or trade names of their respective companies. For patents covering National Instruments products/technology, refer to the appropriate location: Help»Patents in your software, the patents.txt file on your media, or the National Instruments Patent Notice at ni.com/patents.
© 2004-2009 National Instruments Corporation. All rights reserved.