
Digital and Optimal Control Data Book

prepared by Themistoklis Charalambous and Ammar Khan

2017 version

Contents
1  Table of Laplace Transforms

2  Table of Z-transforms

3  Continuous-time control
   3.1  Stability of the Closed-loop System
   3.2  Routh-Hurwitz Stability Criteria
   3.3  Nyquist Stability Criterion
   3.4  Root Locus
   3.5  Bode Diagrams
   3.6  Solution of continuous-time LQ problem using Dynamic Programming

4  Discrete-time control
   4.1  State-space representation
   4.2  Transfer Function
   4.3  Discretization
        4.3.1  Backward difference method
        4.3.2  Forward difference method
        4.3.3  Bilinear or Tustin method
        4.3.4  Impulse-invariance method
        4.3.5  Step-invariance method
   4.4  Nyquist criterion for sampling frequency
   4.5  Gain and phase margins
   4.6  Stability criteria for discrete-time systems
        4.6.1  Jury's stability criterion
        4.6.2  Triangle rule
        4.6.3  Nyquist stability criterion
   4.7  Controllability and observability matrices
   4.8  Canonical forms
        4.8.1  Controllable canonical form
        4.8.2  Observable canonical form
   4.9  Difference Equation and Pulse Transfer Function from Canonical Form
   4.10 State controller
   4.11 State observer
   4.12 Luenberger Observer
   4.13 Servo controller
   4.14 Disturbance models
        4.14.1  Concepts of stochastic processes
        4.14.2  Properties of stochastic variables
        4.14.3  Covariances and spectral densities
        4.14.4  Covariance of white noise
        4.14.5  Noise process in I/O form
        4.14.6  Noise process in state space form
   4.15 Optimal Design Methods: A Polynomial Approach
        4.15.1  Optimal Predictor
        4.15.2  Minimum Variance Control
   4.16 Optimal Design Methods: A State-Space Approach
        4.16.1  Solution of discrete-time LQ problem using Dynamic Programming
        4.16.2  Kalman Filter
   4.17 Discrete PID controller

5  Mathematical Preliminaries
   5.1  Quadratic formula
   5.2  Geometric series
   5.3  Partial fractions
   5.4  Matrix Algebra
        5.4.1  Determinant of a matrix
        5.4.2  Adjoint of a matrix
        5.4.3  Inverse of a matrix
        5.4.4  The Cayley-Hamilton theorem
        5.4.5  Eigenvalues of a matrix function

1 Table of Laplace Transforms

Waveform g(t) (defined for t ≥ 0)              Laplace transform G(s) = L{g(t)} = ∫_{0^-}^∞ g(t) e^{-st} dt

δ(t) (impulse)                                 1
u(t) (unit step)                               1/s
t^n                                            n!/s^{n+1}
e^{-at}                                        1/(s + a)
sin(ω_0 t)                                     ω_0/(s^2 + ω_0^2)
cos(ω_0 t)                                     s/(s^2 + ω_0^2)
sinh(ω_0 t)                                    ω_0/(s^2 − ω_0^2)
cosh(ω_0 t)                                    s/(s^2 − ω_0^2)
e^{-at}[A cos(ω_0 t) + B sin(ω_0 t)]           (A(s + a) + Bω_0)/((s + a)^2 + ω_0^2)
e^{-at} g(t)                                   G(s + a)                        (shift in s)
g(t − τ)u(t − τ), where τ ≥ 0                  e^{-sτ} G(s)                    (shift in t)
t g(t)                                         −(d/ds) G(s)
dg/dt (differentiation)                        sG(s) − g(0)
d^n g/dt^n                                     s^n G(s) − s^{n−1} g(0) − s^{n−2} (dg/dt)|_0 − ··· − (d^{n−1}g/dt^{n−1})|_0
∫_0^t g(τ) dτ (integration)                    G(s)/s
g_1(t) ∗ g_2(t) = ∫_0^t g_1(t − τ)g_2(τ) dτ    G_1(s) G_2(s)                   (convolution)
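The symbolic entries above can be spot-checked with sympy; a minimal sketch (the two test functions are arbitrary choices, not part of the table):

    import sympy as sp

    t, s = sp.symbols('t s', positive=True)
    a, w0 = sp.symbols('a omega_0', positive=True)
    # e^{-at} -> 1/(s + a)
    print(sp.laplace_transform(sp.exp(-a * t), t, s, noconds=True))
    # sin(w0 t) -> w0/(s^2 + w0^2)
    print(sp.laplace_transform(sp.sin(w0 * t), t, s, noconds=True))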

2 Table of Z-transforms

Sequence g_k, k = 0, 1, 2, ...                         z-transform G(z) = Σ_{k=0}^∞ g_k z^{-k}

1 (unit step)                                          1/(1 − z^{-1})
kT                                                     T z^{-1}/(1 − z^{-1})^2
(k + m − 1)!/(k!(m − 1)!)                              1/(1 − z^{-1})^m
e^{-akT}                                               1/(1 − e^{-aT} z^{-1})
sin(ω_0 kT)                                            sin(ω_0 T) z^{-1}/(1 − 2 cos(ω_0 T) z^{-1} + z^{-2})
cos(ω_0 kT)                                            (1 − cos(ω_0 T) z^{-1})/(1 − 2 cos(ω_0 T) z^{-1} + z^{-2})
(r^{k−1}/sin(ω_0 T)) [r sin(ω_0 (k+1)T) − a sin(ω_0 kT)]    (1 − a z^{-1})/(1 − 2r cos(ω_0 T) z^{-1} + r^2 z^{-2})
r^k [A cos(ω_0 kT) + B sin(ω_0 kT)]                    (A + r z^{-1}(B sin(ω_0 T) − A cos(ω_0 T)))/(1 − 2r cos(ω_0 T) z^{-1} + r^2 z^{-2})
r^k g_k                                                G(r^{-1} z)
g_{k+1}                                                zG(z) − z g_0
g_{k−1}                                                z^{-1} G(z) + g_{−1}
g_{k+m}                                                z^m G(z) − z^m g_0 − ··· − z g_{m−1}
g_{k−m}                                                z^{-m} G(z) + z^{-(m−1)} g_{−1} + ··· + g_{−m}

g_0 = lim_{z→∞} G(z)                                   (initial value theorem)
lim_{k→∞} g_k = lim_{z→1} (z − 1)G(z)                  (final value theorem, valid when the poles of (z − 1)G(z) are inside the unit circle)
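Any pair in the table can be checked numerically by comparing the truncated series Σ g_k z^{-k} against the closed form at a test point where the series converges; a sketch for the e^{-akT} entry (the numbers are arbitrary):

    import numpy as np

    a, T, z = 0.5, 0.1, 1.3                 # arbitrary test point with |e^{-aT}/z| < 1
    k = np.arange(2000)
    series = np.sum(np.exp(-a * k * T) * z**(-k.astype(float)))
    closed = 1.0 / (1.0 - np.exp(-a * T) / z)
    print(np.isclose(series, closed))       # True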

3 Continuous-time control
[Block diagram: the reference r̄(s) enters a summing junction Σ; the error ē(s) drives the controller K(s) and then the plant G(s), producing the output ȳ(s); the feedback path H(s) returns z̄(s) to the summing junction.]

z̄(s) = H(s)G(s)K(s) ē(s) = L(s) ē(s),

where L(s) = H(s)G(s)K(s) is called the Return Ratio.

ȳ(s) = [G(s)K(s)/(1 + H(s)G(s)K(s))] r̄(s) = [G(s)K(s)/(1 + L(s))] r̄(s),

where G(s)K(s)/(1 + L(s)) is the closed-loop transfer function relating y and r.

3.1 Stability of the Closed-loop System


The closed-loop system is stable if the roots of the characteristic equation, 1 + L(s) = 0, have negative
real parts.

3.2 Routh-Hurwitz Stability Criteria


The roots of the polynomial a_n s^n + a_{n−1} s^{n−1} + ··· + a_0, with a_0 > 0, have negative real parts:

for n = 2, if and only if all a_i > 0;
for n = 3, if and only if all a_i > 0 and a_1 a_2 > a_0 a_3;
for n = 4, if and only if all a_i > 0 and a_1 a_2 a_3 > a_0 a_3^2 + a_4 a_1^2.

(Further relationships exist for n > 4.)
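These low-order conditions are easy to apply mechanically; a minimal sketch (the function name and the coefficient ordering a = [a_0, ..., a_n] are our own conventions):

    def hurwitz_low_order(a):
        """True if all roots of a[n] s^n + ... + a[1] s + a[0] have negative real parts (n <= 4)."""
        n = len(a) - 1
        if any(ai <= 0 for ai in a):
            return False                   # all coefficients must be positive
        if n <= 2:
            return True
        if n == 3:
            return a[1] * a[2] > a[0] * a[3]
        if n == 4:
            return a[1] * a[2] * a[3] > a[0] * a[3]**2 + a[4] * a[1]**2
        raise NotImplementedError("n > 4: build the full Routh array")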

For the following conditions it is convenient to write L(s) = k g(s), an explicit function of the gain k.

3.3 Nyquist Stability Criterion


For a stable closed-loop system, the full Nyquist plot of g(s), for s = jω and −∞ < ω < ∞, should
encircle the (− k1 , j0) point as many times as there are poles of g(s) (i.e. open-loop poles) in the right
half of the s-plane. The encirclements, for the path traced by increasing ω, are counted positive in a
counterclockwise direction.

3.4 Root Locus


The roots of 1 + kg(s) = 0, the closed loop poles, trace loci as k varies from 0 to ∞, starting at the
open-loop poles and ending at the open-loop zeros or at infinite distances.
All sections of the real axis with an odd number of poles and zeros to their right are sections of the root
locus (even number of poles and zeros to their right if k < 0).
At the breakaway points (coincident roots): dg/ds = 0.

Angle condition: ∠g(s) = (2m + 1)π if k > 0 (∠g(s) = 2mπ if k < 0), where m is an integer.

Magnitude condition: |g(s)| = 1/k.

Asymptotes: If g(s) has P poles and Z zeros, the asymptotes of the loci as k → ∞ are straight lines at angles (2m + 1)π/(P − Z) to the real axis if k > 0 (2mπ/(P − Z) if k < 0).
Their point of intersection σ with the real axis is given by:

σ = [Σ(poles of g(s)) − Σ(zeros of g(s))] / (P − Z)
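The loci can also be traced numerically: the closed-loop poles are the roots of den(s) + k·num(s) = 0. A sketch for a hypothetical g(s) = 1/(s(s + 1)(s + 2)), whose asymptote centroid by the formula above is σ = −1:

    import numpy as np

    num = np.array([1.0])                   # numerator of g(s), hypothetical
    den = np.array([1.0, 3.0, 2.0, 0.0])    # s (s + 1) (s + 2)
    for k in np.logspace(-2, 2, 9):
        # 1 + k g(s) = 0  <=>  den(s) + k num(s) = 0 (pad num to den's length)
        poly = den + k * np.r_[np.zeros(len(den) - len(num)), num]
        print(k, np.roots(poly))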

3.5 Bode Diagrams


Bode diagram of (1 + sT):

[Gain: low-frequency asymptote at 0 dB; high-frequency asymptote of slope +20 dB/decade; the asymptotes meet at ω = 1/T, where the true gain is +3 dB. Phase: low-frequency asymptote 0° up to ω = 0.1/T; approximately linear through 45° at ω = 1/T; high-frequency asymptote 90° from ω = 10/T. Frequency axis from 0.01/T to 100/T rad/s.]

Bode diagram of 1/(1 + 2ζsT + s^2 T^2) for ζ = 0.2, 0.4, 0.6, 0.8, 1.0:

[Gain: 0 dB at low frequency for all ζ; near ω = 1/T the curve peaks at about +8 dB ≈ 1/(2ζ) for ζ = 0.2 and sits at about −6 dB for ζ = 1; high-frequency slope −40 dB/decade. Phase: 0° at low frequency, −90° at ω = 1/T for all ζ, approaching −180° at high frequency; the transition is sharper for small ζ. Frequency axis from 0.01/T to 100/T rad/s.]
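The curves summarised above are reproduced by scipy; a sketch with T = 1 (an arbitrary normalisation):

    import numpy as np
    from scipy import signal

    T = 1.0
    for zeta in (0.2, 0.4, 0.6, 0.8, 1.0):
        sys = signal.TransferFunction([1.0], [T**2, 2 * zeta * T, 1.0])
        w, mag, phase = signal.bode(sys, w=np.logspace(-2, 2, 500) / T)
        # mag is in dB, phase in degrees; the peak near w = 1/T is roughly 1/(2*zeta)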

3.6 Solution of continuous-time LQ problem using Dynamic Programming


Given the process model:

ẋ(t) = Ax(t) + Bu(t),   t ≥ t_0,   x(t_0) is given

Criterion:

J(t_0) = (1/2) x(t_f)^T S(t_f) x(t_f) + (1/2) ∫_{t_0}^{t_f} [x^T(t) Q x(t) + u^T(t) R u(t)] dt

(S(t_f) ≥ 0, Q ≥ 0, and R > 0)

The general solution is:

−Ṡ(t) = A^T S(t) + S(t)A − S(t)BR^{-1}B^T S(t) + Q,   t ≤ t_f,   boundary condition S(t_f)
K(t) = R^{-1}B^T S(t)
u(t) = −K(t)x(t)
J*(t_0) = (1/2) x(t_0)^T S(t_0) x(t_0)
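For the infinite-horizon case (Ṡ = 0), S is the stationary solution of the algebraic Riccati equation, which scipy solves directly; a sketch on a hypothetical double integrator:

    import numpy as np
    from scipy.linalg import solve_continuous_are

    A = np.array([[0.0, 1.0], [0.0, 0.0]])   # hypothetical plant
    B = np.array([[0.0], [1.0]])
    Q = np.eye(2)
    R = np.array([[1.0]])
    S = solve_continuous_are(A, B, Q, R)     # stationary Riccati solution
    K = np.linalg.solve(R, B.T @ S)          # K = R^{-1} B^T S; u = -K x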

4 Discrete-time control
4.1 State-space representation
Continuous-time state-space representation is given by

System Equation:  ẋ(t) = Ax(t) + Bu(t),   x(0) = x_0        dim(x) = n
Output Equation:  y(t) = Cx(t) + Du(t)                      dim(u) = r,  dim(y) = p

With periodic zero-order-hold (ZOH) sampling (sampling period h), the corresponding discrete-time
system is represented as

System Equation: x(kh + h) = Φx(kh) + Γu(kh),


Output Equation: y(kh) = Cx(kh) + Du(kh).

Equivalently, more compactly the discrete-time system can be written as

System Equation: x[k + 1] = Φx[k] + Γu[k],


Output Equation: y[k] = Cx[k] + Du[k],

where

Φ = e^{Ah}
Γ = (∫_0^h e^{As} ds) B

The state transition matrix is:

e^{At} = L^{-1}{(sI − A)^{-1}}
       = I + tA + (1/2)t^2 A^2 + (1/6)t^3 A^3 + ··· = Σ_{n=0}^∞ (t^n/n!) A^n

Characteristic equation χ(z):

χ(z) ≜ det(zI − Φ) = |zI − Φ| = 0

Mapping of poles (eigenvalues):

λ_i(Φ) = e^{λ_i(A) h}
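Φ, Γ, and the pole mapping above can be checked numerically with scipy's ZOH discretization; a sketch on a hypothetical second-order system:

    import numpy as np
    from scipy.signal import cont2discrete

    A = np.array([[0.0, 1.0], [-1.0, -0.5]])  # hypothetical continuous-time system
    B = np.array([[0.0], [1.0]])
    C, D = np.eye(2), np.zeros((2, 1))
    h = 0.1
    Phi, Gam, *_ = cont2discrete((A, B, C, D), h, method='zoh')
    # eigenvalues map as lambda_i(Phi) = exp(lambda_i(A) h)
    print(np.allclose(np.sort(np.linalg.eigvals(Phi)),
                      np.sort(np.exp(np.linalg.eigvals(A) * h))))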

4.2 Transfer Function


The discrete-time transfer function from the discrete-time state-space representation is given by

G(z) = C(zI − Φ)^{-1} Γ + D

4.3 Discretization
The Laplace transfer function G(s) can be approximated by a discrete-time pulse-transfer function G(z) by the following methods (the sampling period is h):

4.3.1 Backward difference method



G(z) = G(s)|_{s = (1 − z^{-1})/h}

4.3.2 Forward difference method



G(z) = G(s)|_{s = (z − 1)/h}

4.3.3 Bilinear or Tustin method



G(z) = G(s)|_{s = (2/h)(z − 1)/(z + 1)}

4.3.4 Impulse-invariance method

G(z) = Z{ L^{-1}(G(s)) |_{t=kh} }

4.3.5 Step-invariance method

G(z) = ((z − 1)/z) · Z{ L^{-1}(G(s)/s) |_{t=kh} }
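scipy.signal.cont2discrete implements these rules (method names 'backward_diff', 'euler' for the forward difference, 'bilinear' for Tustin, 'impulse', and 'zoh', which coincides with step invariance); a sketch comparing them on a hypothetical G(s) = 1/(s + 1):

    from scipy.signal import cont2discrete

    num, den, h = [1.0], [1.0, 1.0], 0.1      # G(s) = 1/(s + 1), hypothetical
    for method in ('backward_diff', 'euler', 'bilinear', 'impulse', 'zoh'):
        numd, dend, _ = cont2discrete((num, den), h, method=method)
        print(method, numd.ravel(), dend)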

4.4 Nyquist criterion for sampling frequency


Suppose x_c(t) is a low-pass signal with X_c(jω) = 0 for all |ω| > ω_0.

The continuous-time counterpart x_c(t) of a sampled signal x_c(kh) can only be uniquely determined if the sampling angular frequency is at least twice ω_0, i.e.,

ω_s = 2π/T_s > 2ω_0.

The minimum sampling angular frequency for which the inequality holds is called the Nyquist angular frequency.

4.5 Gain and phase margins


The gain and phase margins are defined in the same way for continuous- and discrete-time systems.

The open-loop pulse-transfer function is H(z), with frequency response H(e^{jωh}).

ω_0 is the lowest frequency where:

arg H(e^{jω_0 h}) = −π

ω_c is the lowest frequency where:

|H(e^{jω_c h})| = 1

Gain margin:

A_marg = 1/|H(e^{jω_0 h})|

Phase margin:

φ_marg = π + arg H(e^{jω_c h})
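A rough numerical sketch of these definitions: evaluate H(e^{jωh}) on a dense grid and read off the first crossings (the loop transfer function below is hypothetical, and a careful implementation should verify that each crossing actually exists):

    import numpy as np

    h = 0.1                                    # sampling period, assumed
    num, den = [0.2, 0.1], [1.0, -1.5, 0.6]    # H(z), hypothetical
    w = np.linspace(1e-3, np.pi / h, 100000)
    z = np.exp(1j * w * h)
    H = np.polyval(num, z) / np.polyval(den, z)
    mag, ph = np.abs(H), np.unwrap(np.angle(H))
    i0 = np.argmax(ph <= -np.pi)               # first phase crossover (argmax is 0 if none)
    ic = np.argmax(mag <= 1.0)                 # first gain crossover
    print("gain margin :", 1.0 / mag[i0])
    print("phase margin:", np.pi + ph[ic], "rad")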

4.6 Stability criteria for discrete-time systems


4.6.1 Jury’s stability criterion
Using the characteristic polynomial

A(z) = a_0 z^n + a_1 z^{n−1} + ··· + a_n,   a_0 > 0,

Jury's test checks whether all poles of the system are inside the unit circle, i.e., whether the system is asymptotically stable. Form the following table of 2n + 1 rows (the superscript indexes the reduction stage, not a power):

a_0^n          a_1^n          ···   a_{n−1}^n       a_n^n
a_n^n          a_{n−1}^n      ···   a_1^n           a_0^n         b_n = a_n^n / a_0^n
a_0^{n−1}      a_1^{n−1}      ···   a_{n−1}^{n−1}
a_{n−1}^{n−1}  a_{n−2}^{n−1}  ···   a_0^{n−1}                     b_{n−1} = a_{n−1}^{n−1} / a_0^{n−1}
⋮
a_0^0

The first row holds the coefficients themselves (a_0^n = a_0, a_1^n = a_1, ..., a_n^n = a_n), every even row repeats the row above it in reverse order, and each reduction step is

a_i^{k−1} = a_i^k − b_k a_{k−i}^k,     b_k = a_k^k / a_0^k.

All roots of A(z) are inside the unit circle if and only if all the leading entries a_0^k, k = 0, 1, ..., n − 1, are positive.
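The recursion above translates directly into code; a minimal sketch (the function name and the coefficient ordering a = [a_0, ..., a_n] are our own conventions):

    import numpy as np

    def jury_stable(a, tol=1e-12):
        """True if all roots of a[0] z^n + ... + a[n] lie inside the unit circle."""
        a = np.asarray(a, dtype=float)
        if a[0] < 0:
            a = -a                              # normalise so that a_0 > 0
        while len(a) > 1:
            if a[0] <= tol:                     # every leading entry a_0^k must be positive
                return False
            b = a[-1] / a[0]                    # b_k = a_k^k / a_0^k
            a = a[:-1] - b * a[::-1][:-1]       # a_i^{k-1} = a_i^k - b_k a_{k-i}^k
        return a[0] > tol                       # final entry a_0^0

    print(jury_stable([1.0, -0.5, 0.06]))       # roots 0.2, 0.3 -> True
    print(jury_stable([1.0, -3.0, 2.0]))        # roots 1, 2   -> False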

4.6.2 Triangle rule


A system with the characteristic equation:

A(z) = z^2 + a_1 z + a_2

is stable if the parameters a_1 and a_2 fulfil:

a_2 < 1
a_2 > −a_1 − 1
a_2 > a_1 − 1


4.6.3 Nyquist stability criterion


The closed-loop system

[Unity-feedback loop: the reference R(z) enters the summing junction Σ; the open-loop pulse-transfer function L(z) sits in the forward path and produces the output Y(z).]

will be stable if (and only if) the number of counterclockwise encirclements N of the point −1 by L(e^{jω}), as ω increases from 0 to 2π, is equal to

N =Z −P

where

Z : number of zeros of the characteristic equation 1 + L(z) = 0 outside the unit circle
P : number of poles of the characteristic equation 1 + L(z) = 0 outside the unit circle

The zeros of the characteristic equation (i.e., closed-loop poles) determine the stability of the system
so that if the characteristic equation has zeros outside the unit circle, then the closed loop system is
unstable. The stability criterion is thus obtained by setting Z = 0 and by demanding that the Nyquist
curve encircles the point −1 P times counterclockwise.

4.7 Controllability and observability matrices


Controllability matrix Wc and observability matrix Wo are defined as:

W_c = [Γ   ΦΓ   ···   Φ^{n−1}Γ]

      ⎡ C        ⎤
      ⎢ CΦ       ⎥
W_o = ⎢ ⋮        ⎥
      ⎣ CΦ^{n−1} ⎦

A system is reachable if and only if matrix Wc has rank n. A system is observable if and only if matrix
Wo has rank n, where n is the order of system.
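Both matrices and the rank tests are one-liners in numpy; a sketch on a hypothetical second-order system:

    import numpy as np

    Phi = np.array([[0.9, 0.1], [0.0, 0.8]])   # hypothetical
    Gam = np.array([[0.0], [1.0]])
    C = np.array([[1.0, 0.0]])
    n = Phi.shape[0]
    Wc = np.hstack([np.linalg.matrix_power(Phi, k) @ Gam for k in range(n)])
    Wo = np.vstack([C @ np.linalg.matrix_power(Phi, k) for k in range(n)])
    print("reachable :", np.linalg.matrix_rank(Wc) == n)
    print("observable:", np.linalg.matrix_rank(Wo) == n)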

4.8 Canonical forms


4.8.1 Controllable canonical form
   
           ⎡ −a_1  −a_2  ···  −a_{n−1}  −a_n ⎤        ⎡ 1 ⎤
           ⎢  1     0    ···    0        0   ⎥        ⎢ 0 ⎥
z[k + 1] = ⎢  0     1    ···    0        0   ⎥ z[k] + ⎢ 0 ⎥ u[k]
           ⎢  ⋮     ⋮    ⋱      ⋮        ⋮   ⎥        ⎢ ⋮ ⎥
           ⎣  0     0    ···    1        0   ⎦        ⎣ 0 ⎦

y[k] = [b_1  b_2  ···  b_n] z[k]

The transformation matrix to the controllable canonical form is T = W̃_c W_c^{-1}, where W̃_c is the controllability matrix of the controllable canonical form.

4.8.2 Observable canonical form


   
           ⎡ −a_1      1  0  ···  0 ⎤        ⎡ b_1     ⎤
           ⎢ −a_2      0  1  ···  0 ⎥        ⎢ b_2     ⎥
z[k + 1] = ⎢   ⋮       ⋮  ⋮  ⋱   ⋮  ⎥ z[k] + ⎢  ⋮      ⎥ u[k]
           ⎢ −a_{n−1}  0  0  ···  1 ⎥        ⎢ b_{n−1} ⎥
           ⎣ −a_n      0  0  ···  0 ⎦        ⎣ b_n     ⎦

y[k] = [1  0  0  ···  0] z[k]

The transformation matrix to the observable canonical form is T = W̃_o^{-1} W_o, where W̃_o is the observability matrix of the observable canonical form.

4.9 Difference Equation and Pulse Transfer Function from Canonical Form
Representation of a system in controllable and observable canonical forms is discussed in the previous
section. To represent the system as a difference equation directly from canonical form:

y[k + n_a] + a_1 y[k + n_a − 1] + ··· + a_{n_a} y[k] = b_0 u[k + n_b] + ··· + b_{n_b} u[k]

To represent the system in terms of transfer function directly from canonical form:
G(z) = (b_0 z^{n_b} + b_1 z^{n_b − 1} + ··· + b_{n_b}) / (z^{n_a} + a_1 z^{n_a − 1} + ··· + a_{n_a})

4.10 State controller


State space representation of a system:

x[k + 1] = Φx[k] + Γu[k]


y[k] = Cx[k]

State controller:

u[k] = −Lx[k]

The characteristic equation of the closed-loop system is obtained from:

|zI − Φ_cl| = |zI − Φ + ΓL| = 0
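The gain L that places the closed-loop poles can be computed with scipy; a sketch (the system matrices and pole locations are hypothetical):

    import numpy as np
    from scipy.signal import place_poles

    Phi = np.array([[1.0, 0.1], [0.0, 1.0]])   # hypothetical
    Gam = np.array([[0.005], [0.1]])
    L = place_poles(Phi, Gam, [0.7, 0.8]).gain_matrix
    print(np.linalg.eigvals(Phi - Gam @ L))    # ~ [0.7, 0.8]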

4.11 State observer


State observer equation:

x̂[k + 1] = Φx̂[k] + Γu[k] + K ỹ[k]
x̂[k + 1] = Φx̂[k] + Γu[k] + K(y[k] − ŷ[k])
x̂[k + 1] = (Φ − KC)x̂[k] + Γu[k] + Ky[k]

The model for the error dynamics is given by:

x̃[k + 1] = x[k + 1] − x̂[k + 1]
x̃[k + 1] = (Φ − KC)x̃[k] = Φ_o x̃[k]

For pole placement:

|zI − Φ_o| = |zI − Φ + KC| = 0



4.12 Luenberger Observer


From the State Observer section, we know the observer error is given by:

x̃[k + 1] = x[k + 1] − x̂[k + 1]
          = (Φx[k] + Γu[k]) − ((I − KC)(Φx̂[k] + Γu[k]) + Ky[k + 1])
          = ···
          = (Φ − KCΦ)x̃[k] = Φ_o x̃[k]

Additionally it holds that:

ỹ[k + 1] = Cx̃[k + 1] = C(Φ − KCΦ)x̃[k]
         = (CΦ − CKCΦ)x̃[k] = (I − CK)CΦx̃[k]

By choosing K such that I − CK = 0, the estimation error is eliminated:

ỹ[k + 1] = 0  ⇒  y[k + 1] − Cx̂[k + 1] = 0  ⇒  y[k] = Cx̂[k]

This reduced-order estimator is known as the Luenberger observer.

4.13 Servo controller

If the system has to track a changing reference signal, we use the servo controller:

u[k] = L_c y_ref[k] − L x̂[k]

where y_ref[k] is the reference signal.
The overall system becomes:

⎡ x[k + 1] ⎤   ⎡ Φ − ΓL    ΓL    ⎤ ⎡ x[k] ⎤   ⎡ ΓL_c ⎤
⎢          ⎥ = ⎢                 ⎥ ⎢      ⎥ + ⎢      ⎥ y_ref[k]
⎣ x̃[k + 1] ⎦   ⎣   0      Φ − KC ⎦ ⎣ x̃[k] ⎦   ⎣  0   ⎦

y[k] = [C  0] [x[k];  x̃[k]]

4.14 Disturbance models


4.14.1 Concepts of stochastic processes
Definition of a stochastic variable x by a density function p:

∫_{−∞}^{∞} p(x) dx = 1,        E{x} = ∫_{−∞}^{∞} x p(x) dx

var{x} = E{(x − E{x})^2} = ∫_{−∞}^{∞} (x − E{x})^2 p(x) dx

For vectors:

m(t) = E{x(t)}
var{x(t)} = E{(x(t) − m(t))(x(t) − m(t))^T}
          = E{(x(t) − E{x(t)})(x(t) − E{x(t)})^T}

4.14.2 Properties of stochastic variables


The basic properties of expectation value and variance are:
(a is constant, x and y are stochastic variables)

E{ax} = aE{x}
E{x + y} = E{x} + E{y}
E{a} = a
var{ax} = a2 var{x}
var{a} = 0

Additionally, if x and y are independent of each other:

E{xy} = E{x}E{y}
var{x + y} = var{x} + var{y}

4.14.3 Covariances and spectral densities


Covariance functions (autocovariance and cross-covariance):

r_x(τ) = r_xx(τ) = cov{x(t + τ), x(t)} = E{(x(t + τ) − m(t + τ))(x(t) − m(t))^T}

r_xy(τ) = cov{x(t + τ), y(t)} = E{(x(t + τ) − m_x(t + τ))(y(t) − m_y(t))^T}

Spectral densities (autospectral and cross-spectral density):

φ_xx(ω) = (1/2π) Σ_{k=−∞}^{∞} r_xx(k) e^{−ikω},        r_xx(k) = ∫_{−π}^{π} e^{ikω} φ_xx(ω) dω

φ_xy(ω) = (1/2π) Σ_{k=−∞}^{∞} r_xy(k) e^{−ikω},        r_xy(k) = ∫_{−π}^{π} e^{ikω} φ_xy(ω) dω

Variance determined from the autocovariance:

var{x(t)} = r_xx(0)

4.14.4 Covariance of white noise


White noise:

r(τ) = σ^2 for τ = 0,    r(τ) = 0 for τ ≠ 0

φ(ω) = σ^2 / (2π)


4.14.5 Noise process in I/O form


A stationary stochastic process can be described as white noise u filtered through a system with pulse-transfer function H(z).

Mean:

m_y = H(1) m_u

Spectral density:

φ_y(ω) = H(e^{iω}) φ_u(ω) H^T(e^{−iω})

Cross-spectral density:

φ_yu(ω) = H(e^{iω}) φ_u(ω)

4.14.6 Noise process in state space form


For a stochastic process, the state-space form becomes:

x[k + 1] = Φx[k] + v[k]
y[k] = Cx[k]

For the mean value:

m[k + 1] = Φm[k],   m(0) = m_0
m_y[k] = Cm[k]

For the covariances:

r_xx(k + τ, k) = Φ^τ P[k],   τ ≥ 0
r_yy(k + τ, k) = C r_xx(k + τ, k) C^T
r_yx(k + τ, k) = C r_xx(k + τ, k)
P[k + 1] = ΦP[k]Φ^T + R_1,   P[0] = R_0
P[k] = cov{x[k]} = E{x̃[k] x̃^T[k]},   where x̃[k] = x[k] − m[k]

4.15 Optimal Design Methods: A Polynomial Approach


4.15.1 Optimal Predictor
The optimal m-step predictor is:

ŷ[k + m|k] = (G*(q^{-1}) / C*(q^{-1})) y[k] = (q G(q) / C(q)) y[k]

where the polynomials F and G are the quotient and the remainder when dividing q^{m−1} C by A, i.e.,

q^{m−1} C(q) = F(q) A(q) + G(q).

The polynomial F is monic of degree m − 1 and G is of degree less than n:

F(q) = q^{m−1} + f_1 q^{m−2} + ··· + f_{m−1}
G(q) = g_0 q^{n−1} + g_1 q^{n−2} + ··· + g_{n−1}

The variance of the prediction error can be calculated from the terms of F:

var{ỹ[k + m|k]} = (1 + f_1^2 + ··· + f_{m−1}^2) var{e[k]}
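F and G come out of one polynomial division; a sketch with hypothetical A(q) and C(q):

    import numpy as np

    A = np.array([1.0, -1.5, 0.7])                    # A(q), hypothetical
    C = np.array([1.0, 0.5, 0.1])                     # C(q), hypothetical
    m = 2
    qC = np.polymul(np.r_[1.0, np.zeros(m - 1)], C)   # q^{m-1} C(q)
    F, G = np.polydiv(qC, A)                          # quotient F (monic, deg m-1), remainder G
    var_ratio = np.sum(F**2)                          # 1 + f_1^2 + ... + f_{m-1}^2
    print(F, G, var_ratio)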

4.15.2 Minimum Variance Control

The minimum variance controller for systems with stable inverses is:

u[k] = −(G*(q^{-1}) / (B*(q^{-1}) F*(q^{-1}))) y[k] = −(G(q) / (B(q) F(q))) y[k]

where the polynomials F and G are the quotient and the remainder when dividing q^{d−1} C by A, i.e.,

q^{d−1} C(q) = F(q) A(q) + G(q).

The polynomial F is monic of degree d − 1 and G is of degree less than n:

F(q) = q^{d−1} + f_1 q^{d−2} + ··· + f_{d−1}
G(q) = g_0 q^{n−1} + g_1 q^{n−2} + ··· + g_{n−1}

where d is the pole excess of the system:

d = deg A − deg B

The variance of the output signal can be calculated in terms of F:

var{y[k]} = (1 + f_1^2 + ··· + f_{d−1}^2) var{e[k]}

4.16 Optimal Design Methods: A State-Space Approach


4.16.1 Solution of discrete-time LQ problem using Dynamic Programming

Given the process model:

x_{k+1} = A x_k + B u_k,   x_0 is given

Criterion:

J_i = (1/2) x_N^T S_N x_N + (1/2) Σ_{k=i}^{N−1} (x_k^T Q x_k + u_k^T R u_k)

(S_N ≥ 0, Q ≥ 0, and R > 0 are symmetric)

The general solution is:

K_k = (B^T S_{k+1} B + R)^{-1} B^T S_{k+1} A
u*_k = −K_k x_k
S_k = (A − B K_k)^T S_{k+1} (A − B K_k) + Q + K_k^T R K_k    (Riccati equation)
J*_k = (1/2) x_k^T S_k x_k

The Riccati equation can be written in a way independent of K_k:

S_k = A^T [S_{k+1} − S_{k+1} B (B^T S_{k+1} B + R)^{-1} B^T S_{k+1}] A + Q
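The backward recursion is a few lines of numpy; a sketch with hypothetical matrices:

    import numpy as np

    A = np.array([[1.0, 0.1], [0.0, 1.0]])              # hypothetical
    B = np.array([[0.005], [0.1]])
    Q, R, S = np.eye(2), np.array([[0.1]]), np.eye(2)   # S initialised to S_N
    gains = []
    for _ in range(50):                                 # k = N-1, ..., 0
        K = np.linalg.solve(B.T @ S @ B + R, B.T @ S @ A)
        S = (A - B @ K).T @ S @ (A - B @ K) + Q + K.T @ R @ K
        gains.append(K)
    gains.reverse()                                     # gains[k] = K_k; apply u_k = -K_k x_k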



4.16.2 Kalman Filter


Given the linear model:

x[k + 1] = Φx[k] + Γu[k] + v[k]
y[k] = Cx[k] + e[k]

where v and e are discrete-time Gaussian white-noise processes with zero mean and

E{vv^T} = R_1,   E{ve^T} = R_12,   E{ee^T} = R_2,   i.e.,   cov{[v; e]} = [R_1  R_12;  R_12^T  R_2]

The initial state x[0] is assumed to be Gaussian distributed, i.e., x[0] ∼ N(m_0, R_0). Then, the Kalman filter is given by

x̂[k + 1|k] = Φx̂[k|k − 1] + Γu[k] + K[k](y[k] − Cx̂[k|k − 1])
K[k] = (ΦP[k]C^T + R_12)(CP[k]C^T + R_2)^{-1}
P[k + 1] = ΦP[k]Φ^T + R_1 − (ΦP[k]C^T + R_12)(CP[k]C^T + R_2)^{-1}(CP[k]Φ^T + R_12^T)

where P[k] is the covariance of the estimation error, P[0] = R_0, x̂[k + 1|k] is the estimated state, and K[k] is the Kalman gain.
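One time step of the filter, written out directly from the equations above (a sketch; all matrices are supplied by the caller):

    import numpy as np

    def kalman_step(xhat, P, u, y, Phi, Gam, C, R1, R2, R12):
        """Map (x̂[k|k-1], P[k]) to (x̂[k+1|k], P[k+1])."""
        S = C @ P @ C.T + R2                          # innovation covariance
        K = (Phi @ P @ C.T + R12) @ np.linalg.inv(S)  # Kalman gain K[k]
        xhat_next = Phi @ xhat + Gam @ u + K @ (y - C @ xhat)
        P_next = Phi @ P @ Phi.T + R1 - K @ (C @ P @ Phi.T + R12.T)
        return xhat_next, P_next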

4.17 Discrete PID controller


The correspondence between the proportional, integral, and derivative blocks of a continuous-time PID controller and its discrete counterpart is as follows:

P(t) = K_p e(t)                    ⟹   P(kh) = K_p e(kh)

I(t) = K_i ∫_{−∞}^{t} e(τ) dτ      ⟹   I(kh) = K_i Σ_{n=−∞}^{k−1} e(nh) h = K_i h Σ_{n=−∞}^{k−1} e(nh)

D(t) = K_d de(t)/dt                ⟹   D(kh) = (K_d/h)(e(kh) − e(kh − h)) = (K_d/h) Δe(kh)

The control input in terms of the error signal then becomes:

u(kh) = K_p e(kh) + K_i h Σ_{n=−∞}^{k−1} e(nh) + (K_d/h) Δe(kh)

U(z) = [K_p + K_i h/(z − 1) + (K_d/h)(z − 1)/z] E(z)

The continuous-discrete correspondence of the controller components for a practical PID controller is as follows:

P_m(s) = K_p (bY_ref(s) − Y(s))          ⟹   P_m(z) = K_p (bY_ref(z) − Y(z))

I(s) = (K_i/s)(Y_ref(s) − Y(s))          ⟹   I(z) = (K_i h/(z − 1))(Y_ref(z) − Y(z))

D_m(s) = −(K_d s / (1 + (K_d/N)s)) Y(s)  ⟹   D_m(z) = D_m(s)|_{s = (z−1)/(zh)} = −(K_d (z − 1)/(zh) / (1 + (K_d/N)(z − 1)/(zh))) Y(z)
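A minimal sketch of the positional form above as a stateful controller (the class and attribute names are our own):

    class DiscretePID:
        def __init__(self, Kp, Ki, Kd, h):
            self.Kp, self.Ki, self.Kd, self.h = Kp, Ki, Kd, h
            self.esum = 0.0                 # sum of e(nh) for n <= k-1
            self.e_prev = 0.0

        def update(self, e):
            """u(kh) = Kp e + Ki h sum(e) + (Kd/h)(e - e_prev)."""
            u = (self.Kp * e + self.Ki * self.h * self.esum
                 + (self.Kd / self.h) * (e - self.e_prev))
            self.esum += e                  # the sum runs up to k-1, so add e after use
            self.e_prev = e
            return u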

5 Mathematical Preliminaries
5.1 Quadratic formula
The general quadratic equation is

ax^2 + bx + c = 0,

where x is the unknown, while a, b, and c are constants with a ≠ 0. With the above parameterization, the quadratic formula is:

x = (−b ± √(b^2 − 4ac)) / (2a).

5.2 Geometric series


The general form of a geometric progression is given by

a, ar, ar^2, ar^3, ...

The geometric series is the sum of the terms of the geometric progression and is thus given by

a + ar + ar^2 + ar^3 + ...

For r ≠ 1,    Σ_{k=0}^{n} ar^k = a + ar + ar^2 + ··· + ar^n = a (1 − r^{n+1})/(1 − r)

For r ≠ 1,    Σ_{k=m}^{n} ar^k = a (r^m − r^{n+1})/(1 − r)

For |r| < 1,  Σ_{k=0}^{∞} ar^k = a + ar + ar^2 + ··· = a/(1 − r)

5.3 Partial fractions

Y(z) = P(z) / ((z − r_1)(z − r_2) ··· (z − r_n)) ≡ K_1/(z − r_1) + K_2/(z − r_2) + ··· + K_n/(z − r_n),   r_i ≠ r_j for i ≠ j

Y(z) = P(z) / ((z − r)^q (z − r_1) ··· (z − r_n)) ≡ C_q/(z − r)^q + C_{q−1}/(z − r)^{q−1} + ··· + C_1/(z − r) + K_1/(z − r_1) + ··· + K_n/(z − r_n)

Heaviside method:

K_i = lim_{z→r_i} {(z − r_i) Y(z)},   i = 1, 2, ..., n

C_q = lim_{z→r} {(z − r)^q Y(z)}

C_{q−1} = lim_{z→r} { d/dz [(z − r)^q Y(z)] }

⋮

C_{q−k} = lim_{z→r} { (1/k!) d^k/dz^k [(z − r)^q Y(z)] }

5.4 Matrix Algebra


5.4.1 Determinant of a matrix

Determinant of a 2 × 2 matrix. For a matrix:

A = ⎡ a_11  a_12 ⎤
    ⎣ a_21  a_22 ⎦

the determinant of A is given by:

det(A) = |A| = a_11 a_22 − a_21 a_12

Determinant of a 3 × 3 matrix. For a matrix:

    ⎡ a_11  a_12  a_13 ⎤
A = ⎢ a_21  a_22  a_23 ⎥
    ⎣ a_31  a_32  a_33 ⎦

the determinant of A is given by:

det(A) = |A| = a_11 (a_22 a_33 − a_23 a_32) − a_12 (a_21 a_33 − a_23 a_31) + a_13 (a_21 a_32 − a_22 a_31)

5.4.2 Adjoint of a matrix

Adjoint of a 2 × 2 matrix. For a matrix:

A = ⎡ a_11  a_12 ⎤
    ⎣ a_21  a_22 ⎦

the adjoint of A is given by:

adj(A) = ⎡  a_22  −a_12 ⎤
         ⎣ −a_21   a_11 ⎦

Adjoint of a 3 × 3 matrix. For a matrix:

    ⎡ a_11  a_12  a_13 ⎤
A = ⎢ a_21  a_22  a_23 ⎥
    ⎣ a_31  a_32  a_33 ⎦

the cofactor matrix C of A has entries C_ij = (−1)^{i+j} M_ij, where the minor M_ij is the determinant of the 2 × 2 matrix obtained by deleting row i and column j of A. For example,

C_11 = +(a_22 a_33 − a_23 a_32),   C_12 = −(a_21 a_33 − a_23 a_31),   C_13 = +(a_21 a_32 − a_22 a_31)

The adjoint of A is the transpose of the cofactor matrix:

adj(A) = C^T

5.4.3 Inverse of a matrix


Inverse of a matrix A of any order is given by:

inv(A) = A^{-1} = adj(A) / det(A)

5.4.4 The Cayley-Hamilton theorem


Let

λ^n + a_1 λ^{n−1} + a_2 λ^{n−2} + ··· + a_n = 0

be the characteristic polynomial of a square matrix M. Then M satisfies

M^n + a_1 M^{n−1} + a_2 M^{n−2} + ··· + a_n I = 0
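A quick numerical check of the theorem (the 4 × 4 random matrix is arbitrary):

    import numpy as np

    M = np.random.randn(4, 4)
    coeffs = np.poly(M)                  # [1, a_1, ..., a_n] of det(lambda I - M)
    acc = np.zeros_like(M)
    for c in coeffs:                     # Horner evaluation of M^n + a_1 M^{n-1} + ... + a_n I
        acc = acc @ M + c * np.eye(4)
    print(np.allclose(acc, 0.0))         # True up to round-off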

5.4.5 Eigenvalues of a matrix function


If f(M) is a polynomial in M and v_i is an eigenvector of M associated with eigenvalue λ_i, then:

f(M) v_i = f(λ_i) v_i
