
Disturbance models

The existence of disturbances is one of the key reasons why a control system is needed.

Classification of disturbances:
- Input (process), output (measurement) and system disturbances
- Deterministic and stochastic disturbances

Management of disturbances:
- Reduction by a disturbance source
- Compensation, control
- Measurement of disturbances
- Estimation and prediction of disturbances
Disturbances
Classification of disturbances:
- Input (process), output (measurement) and system disturbances
- Deterministic and stochastic disturbances

[Figure: block diagram of a process with control input and measured output, showing where the input, system and output disturbances enter, together with example disturbance signals.]
Reduction by a disturbance source

Different methods depending on the application area:
- Buffer vessel in the process industry
- Better positioning of the measurement sensor
- Better sampling
- Sensor changes
Reduction of disturbance consequences

By local feedback (e.g. valves):
- Cascade control

[Figure: cascade control, in which a local controller inside the loop attenuates the disturbance acting on the process input u(t) before it affects the output y(t).]

By feedforward control:
- A compensator feeds the measured disturbance forward to the control signal u(t), in addition to the feedback controller acting on the reference yREF(t)
- Needs a measurement of the disturbance

[Figure: feedforward control structure with controller, compensator and process.]
Reduction by prediction

Estimation of disturbances is one possibility. Three different cases are considered, depending on whether the estimate concerns a past, current or future value of the signal:

- Smoothing: $\hat{x}((k-m)h \,|\, kh)$, an estimate of a past value
- Filtering: $\hat{x}(kh \,|\, kh)$, an estimate of the current value
- Prediction: $\hat{x}((k+m)h \,|\, kh)$, an estimate of a future value

Stochastic disturbance models

The stochastic process x is a function of two variables, x(t, ω). If the variable ω is fixed (ω = ω0), a time function or realization x(t, ω0) = x_{ω0}(t) is obtained. If the time point t is fixed (t = t0), a stochastic variable x(t0, ω) = x_{t0}(ω) is obtained.

[Figure: several realizations x(t, ω1), ..., x(t, ω4) of a stochastic process; at a fixed time t1 the values across realizations define the distribution F(ξ; t1).]
Stochastic disturbance models

The distribution function of the stochastic process is defined as follows (P is probability):

$$F(\xi_1, \xi_2, \dots, \xi_n;\; t_1, t_2, \dots, t_n) = P\{x(t_1) \le \xi_1,\; x(t_2) \le \xi_2,\; \dots,\; x(t_n) \le \xi_n\}$$

The mean value or expected value of x is defined as

$$m(t) = \bar{x}(t) = E\{x(t)\} = \int_{-\infty}^{\infty} \xi \, dF(\xi; t)$$

and the variance as

$$\sigma_x^2(t) = \operatorname{var}\{x(t)\} = E\{(x(t) - m(t))^2\} = E\{(x(t) - E\{x(t)\})^2\} = \int_{-\infty}^{\infty} (\xi - m(t))^2 \, dF(\xi; t)$$
Stochastic disturbance models

Sometimes the derivative of F, the density function p, is used. It is scaled so that

$$\int_{-\infty}^{\infty} p(x) \, dx = 1$$

Expected value or mean value:

$$E\{x\} = \int_{-\infty}^{\infty} x \, p(x) \, dx$$

Variance:

$$\operatorname{var}\{x\} = E\{(x - E\{x\})^2\} = \int_{-\infty}^{\infty} (x - E\{x\})^2 \, p(x) \, dx$$

The above concepts are extended to vector functions; the variance is extended to covariance.
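As a quick numerical sanity check of these definitions, the integrals can be approximated on a grid. The Gaussian density used here is an illustrative choice, not taken from the slides.

```python
import numpy as np

# Approximate the density-function definitions numerically for a Gaussian
# density (illustrative choice: mean 1.5, standard deviation 0.7).
m_true, s_true = 1.5, 0.7
x = np.linspace(m_true - 8*s_true, m_true + 8*s_true, 20001)
dx = x[1] - x[0]
p = np.exp(-(x - m_true)**2 / (2*s_true**2)) / (s_true*np.sqrt(2*np.pi))

total = np.sum(p) * dx                  # integral of p(x) dx, should be ~1
mean = np.sum(x * p) * dx               # E{x} = integral of x p(x) dx
var = np.sum((x - mean)**2 * p) * dx    # var{x} = integral of (x-E{x})^2 p(x) dx
```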
Stochastic disturbance models, example

Take an example of a stochastic process and its control. Consider an example process of a motor. Now the disturbance is a random variable with normal distribution. The same simulation is run without and with compensation.
Stochastic disturbance models, example

Without compensation there are considerable (periodic) disturbances and the output variance is large. With compensation the variance can be made much smaller and more Gaussian-like.

[Figure: the disturbance signal (left); the output without compensation (right, upper) and with compensation (right, lower).]

Stochastic disturbance models, example

Histograms for the control (top), disturbance (middle) and output (bottom); to the right: compensated, to the left: no compensation. The control is about the same in both cases (statistically the same), but different outputs follow.

[Figure: histograms of control, disturbance and output with and without compensation.]
Stochastic disturbance models

Calculation rules for mean value and variance (a is a constant, x and y are stochastic variables):

$$E\{ax\} = aE\{x\}$$
$$E\{x + y\} = E\{x\} + E\{y\}$$
$$E\{a\} = a$$
$$\operatorname{var}\{ax\} = a^2 \operatorname{var}\{x\}$$
$$\operatorname{var}\{a\} = 0$$

If x and y are independent, then

$$E\{xy\} = E\{x\}E\{y\}$$
$$\operatorname{var}\{x + y\} = \operatorname{var}\{x\} + \operatorname{var}\{y\}$$
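These rules can be checked with a small Monte-Carlo experiment (a sketch; the constants and distributions below are arbitrary illustrations, and the independence rules hold only approximately for a finite sample).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
a = 3.0                               # a constant
x = rng.normal(1.0, 2.0, n)           # two independent stochastic variables
y = rng.normal(-0.5, 1.5, n)

# E{ax} = aE{x} and var{ax} = a^2 var{x} hold sample-by-sample
d_mean = np.mean(a*x) - a*np.mean(x)
d_var = np.var(a*x) - a**2*np.var(x)

# For independent x and y these hold only in the limit of large n
d_prod = np.mean(x*y) - np.mean(x)*np.mean(y)
d_sum = np.var(x + y) - (np.var(x) + np.var(y))
```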
Stochastic disturbance models

The definitions for mean value and variance can be extended to vector variables:

$$m(t) = E\{x(t)\}$$

$$\operatorname{var}\{x(t)\} = E\{(x(t) - m(t))(x(t) - m(t))^T\} = E\{(x(t) - E\{x(t)\})(x(t) - E\{x(t)\})^T\}$$

The covariance function is defined as follows:

$$r_{xx}(s, t) = \operatorname{cov}\{x(s), x(t)\} = E\{(x(s) - m(s))(x(t) - m(t))^T\} = \int\!\!\int (\xi_1 - m(s))(\xi_2 - m(t))^T \, dF(\xi_1, \xi_2;\, s, t)$$
Stochastic disturbance models

A stochastic process is stationary if the distributions of {x(t1), x(t2), ..., x(tn)} and {x(t1+τ), x(t2+τ), ..., x(tn+τ)} are identical for all values of τ, n, t1, t2, ..., tn. The process is weakly stationary if the two first moments of the distributions (mean value and covariance) are the same for all values of t. The value of the covariance function then depends only on the time difference s - t:

$$r_{xx}(s, t) = r_{xx}(s - t) = r_{xx}(\tau) = \operatorname{cov}\{x(t + \tau), x(t)\}$$

The concept of covariance is also extended to pairs of different signals (cross-covariance); for a single signal it is called autocovariance.
Stochastic disturbance models

Autocovariance or covariance:

$$r_x(\tau) = r_{xx}(\tau) = \operatorname{cov}\{x(t + \tau), x(t)\}$$

Cross-covariance:

$$r_{xy}(\tau) = \operatorname{cov}\{x(t + \tau), y(t)\}$$

The Fourier transform of the covariance function is the spectral density (function).

Autospectral density:

$$\phi_{xx}(\omega) = \frac{1}{2\pi} \sum_{k=-\infty}^{\infty} r_{xx}(k) \, e^{-ik\omega}, \qquad r_{xx}(k) = \int_{-\pi}^{\pi} e^{ik\omega} \, \phi_{xx}(\omega) \, d\omega$$

Cross-spectral density:

$$\phi_{xy}(\omega) = \frac{1}{2\pi} \sum_{k=-\infty}^{\infty} r_{xy}(k) \, e^{-ik\omega}, \qquad r_{xy}(k) = \int_{-\pi}^{\pi} e^{ik\omega} \, \phi_{xy}(\omega) \, d\omega$$
Stochastic disturbance models

From the definition it is immediate that variance is in fact the autocovariance at τ = 0:

$$r_{xx}(0) = \operatorname{cov}\{x(t), x(t)\} = E\{(x(t) - m(t))(x(t) - m(t))^T\} = \operatorname{var}\{x(t)\}$$

The largest value of the autocovariance is the variance, and the autocovariance function is symmetric:

$$|r_{xx}(\tau)| \le r_{xx}(0), \qquad r_{xx}(\tau) = r_{xx}(-\tau)$$

The correlation function is the normed covariance function:

$$\rho_x(\tau) = \frac{r_{xx}(\tau)}{r_{xx}(0)} = \frac{r_{xx}(\tau)}{\sigma_x^2}$$
Stochastic disturbance models
Both the covariance functions and spectral densities play
an important role in identification theory, specifically in non-
parametric identification.
The autocovariance function describes how the signal
correlates with itself at different time instants. A big positive
covariance value means strong correlation, zero means no
correlation and a negative value means negative
correlation.
The cross-covariance function describes how a signal correlates with another signal. By using cross-covariance it is possible to investigate e.g. which signals in a large system correlate with each other.
Stochastic disturbance models

The autospectral density describes the power of the signal in a specified frequency range. The integral (area)

$$2 \int_{\omega_1}^{\omega_2} \phi_{xx}(\omega) \, d\omega$$

describes the power of the signal in the range [ω1, ω2]. The whole area under the spectrum is proportional to the variance.

Generally, autocovariance and autospectral density are characteristic of a signal, whereas cross-covariance and cross-spectral densities describe characteristics of the system (input-output).
Stochastic disturbance models

Estimates of the proposed concepts can be calculated directly from data. For example:

Expected value:

$$E\{x\} \approx \frac{1}{N} \sum_{i=1}^{N} x(i)$$

Covariance:

$$r_{xy}(\tau) \approx \frac{1}{N} \sum_{i=1}^{N} (x(i + \tau) - m_x(i + \tau))(y(i) - m_y(i))^T$$
White noise in a discrete system

By white noise a totally random disturbance sequence is meant. At each time instant the signal value is a random variable without any correlation to any other signal, or to itself at any other time instant.

Autocovariance:

$$r(\tau) = \begin{cases} \sigma^2, & \tau = 0 \\ 0, & \tau \ne 0 \end{cases}$$

Autospectral density:

$$\phi(\omega) = \frac{\sigma^2}{2\pi}$$

[Figure: the autocovariance r(τ) is a single spike at τ = 0, and the spectral density φ(ω) is flat over all frequencies.]
Coloured noise in a discrete system

All needed stochastic signals can be generated by filtering white noise, so that the covariance "spreads" (a sample correlates with previous values and values to come) and certain frequencies are weighted more in the spectrum (the signal is more powerful at certain frequencies). In other words, the signal becomes coloured noise instead of pure white noise.

[Figure: an autocovariance r(τ) spread over several lags, and a spectral density φ(ω) that is no longer flat.]
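As a sketch of the idea, white noise can be coloured with a simple first-order filter y(k) = a·y(k-1) + e(k); the coefficient a = 0.9 is an illustrative choice. The resulting autocovariance decays as a^|τ| instead of being a single spike at τ = 0.

```python
import numpy as np

rng = np.random.default_rng(0)
a = 0.9
e = rng.normal(0.0, 1.0, 200_000)   # white noise input
y = np.empty_like(e)
y[0] = e[0]
for k in range(1, len(e)):
    y[k] = a*y[k-1] + e[k]          # filtering "spreads" the covariance

def autocov(x, tau):
    N = len(x) - tau
    m = x.mean()
    return np.sum((x[tau:tau+N] - m) * (x[:N] - m)) / N

# Theory for this filter: r(0) = 1/(1-a^2) and r(tau)/r(0) = a^|tau|
r0, r1 = autocov(y, 0), autocov(y, 1)
```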
The ARMAX model

The filter gives a good picture of the coloured noise it generates. Let e(k) be zero-mean white noise. The following model classes are considered.

MA process (moving average):

$$y(k) = e(k) + b_1 e(k-1) + \dots + b_n e(k-n) = (1 + b_1 q^{-1} + \dots + b_n q^{-n}) \, e(k)$$

AR process (autoregressive):

$$y(k) = -a_1 y(k-1) - \dots - a_n y(k-n) + e(k)$$
$$\Rightarrow\; y(k) + a_1 y(k-1) + \dots + a_n y(k-n) = (1 + a_1 q^{-1} + \dots + a_n q^{-n}) \, y(k) = e(k)$$

The ARMAX model

ARMA process (autoregressive moving average):

$$y(k) + a_1 y(k-1) + \dots + a_n y(k-n) = e(k) + b_1 e(k-1) + \dots + b_n e(k-n)$$
$$\Rightarrow\; (1 + a_1 q^{-1} + \dots + a_n q^{-n}) \, y(k) = (1 + b_1 q^{-1} + \dots + b_n q^{-n}) \, e(k)$$

ARMAX process (autoregressive moving average with exogenous variable):

$$y(k) = -a_1 y(k-1) - \dots - a_n y(k-n) + b_0 u(k-d) + b_1 u(k-d-1) + \dots + b_m u(k-d-m) + e(k) + c_1 e(k-1) + \dots + c_n e(k-n)$$
$$\Rightarrow\; (1 + a_1 q^{-1} + \dots + a_n q^{-n}) \, y(k) = (b_0 + b_1 q^{-1} + \dots + b_m q^{-m}) \, u(k-d) + (1 + c_1 q^{-1} + \dots + c_n q^{-n}) \, e(k)$$
The ARMAX model
Several modifications of the ARMAX-structure exist. ARX-
processes are ordinary difference equations without noise,
which have been discussed in previous chapters. ARIMAX
is an ARMAX-process with integration (autoregressive
integrating moving average with exogenous variable) and
NARMAX is a non-linear ARMAX (nonlinear autoregressive
moving average with exogenous variable).
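As a concrete sketch, an ARMA(1,1) process from the classes above can be simulated directly from its difference equation (the coefficient values below are illustrative, not from the slides):

```python
import numpy as np

# ARMA(1,1): y(k) + a1*y(k-1) = e(k) + b1*e(k-1)
rng = np.random.default_rng(42)
a1, b1 = -0.5, 0.3
N = 100_000
e = rng.normal(0.0, 1.0, N)          # zero-mean white noise
y = np.zeros(N)
for k in range(1, N):
    y[k] = -a1*y[k-1] + e[k] + b1*e[k-1]

# Stationary variance of ARMA(1,1) with phi = -a1, theta = b1:
# (1 + theta^2 + 2*phi*theta) / (1 - phi^2)
var_theory = (1 + b1**2 + 2*(-a1)*b1) / (1 - a1**2)
```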
Stochastic difference equations

Consider the representation

$$x(k+1) = \Phi x(k) + v(k)$$

v(k) is an independent zero-mean random variable with covariance R1 (it does not correlate with x, nor with itself at any other time instant); v(k) is therefore white noise. Suppose that the initial state has the mean m0 and covariance R0. Consider the behaviour of m(k) = E{x(k)} as a function of time. Taking expectations on both sides,

$$E\{x(k+1)\} = E\{\Phi x(k) + v(k)\} = \Phi E\{x(k)\} + E\{v(k)\} = \Phi E\{x(k)\}$$
$$\Rightarrow\; m(k+1) = \Phi m(k), \qquad m(0) = m_0$$
Stochastic difference equations

The mean value behaves exactly according to the system dynamics.

As for the covariance function, use the new variable

$$\tilde{x}(k) = x(k) - m(k)$$

For the state covariance P(k),

$$P(k) = \operatorname{cov}\{x(k), x(k)\} = E\{\tilde{x}(k)\tilde{x}^T(k)\}$$

Calculate the outer product of x̃(k+1) for the covariance matrix:

$$\tilde{x}(k+1)\tilde{x}^T(k+1) = (\Phi\tilde{x}(k) + v(k))(\Phi\tilde{x}(k) + v(k))^T = (\Phi\tilde{x}(k) + v(k))(\tilde{x}^T(k)\Phi^T + v^T(k))$$
$$= \Phi\tilde{x}(k)\tilde{x}^T(k)\Phi^T + \Phi\tilde{x}(k)v^T(k) + v(k)\tilde{x}^T(k)\Phi^T + v(k)v^T(k)$$
Stochastic difference equations

Take expectations on both sides:

$$E\{\tilde{x}(k+1)\tilde{x}^T(k+1)\} = \Phi E\{\tilde{x}(k)\tilde{x}^T(k)\}\Phi^T + \Phi E\{\tilde{x}(k)v^T(k)\} + E\{v(k)\tilde{x}^T(k)\}\Phi^T + E\{v(k)v^T(k)\}$$
$$\Rightarrow\; P(k+1) = \Phi P(k)\Phi^T + 0 + 0 + R_1 = \Phi P(k)\Phi^T + R_1$$

A dynamic equation for the covariance is obtained:

$$P(k+1) = \Phi P(k)\Phi^T + R_1, \qquad P(0) = R_0$$

Consider the state autocovariance for different values of τ. For example, if τ has the value 1:

$$r_{xx}(k+1, k) = E\{\tilde{x}(k+1)\tilde{x}^T(k)\} = E\{(\Phi\tilde{x}(k) + v(k))\,\tilde{x}^T(k)\}$$
$$= E\{\Phi\tilde{x}(k)\tilde{x}^T(k) + v(k)\tilde{x}^T(k)\} = \Phi P(k) + 0 = \Phi P(k)$$
Stochastic difference equations

By repeating this for any value of τ:

$$r_{xx}(k+\tau, k) = \Phi^{\tau} P(k), \qquad \tau \ge 0$$

If the output equation is y(k) = Cx(k), it follows that

$$m_y(k) = C m(k)$$
$$r_{yy}(k+\tau, k) = C \, r_{xx}(k+\tau, k) \, C^T$$
$$r_{yx}(k+\tau, k) = C \, r_{xx}(k+\tau, k)$$
Stochastic difference equations

Summarizing the results, for the process

$$x(k+1) = \Phi x(k) + v(k), \qquad y(k) = C x(k)$$

Mean value:

$$m(k+1) = \Phi m(k), \quad m(0) = m_0, \qquad m_y(k) = C m(k)$$

Covariance:

$$r_{xx}(k+\tau, k) = \Phi^{\tau} P(k), \quad \tau \ge 0$$
$$r_{yy}(k+\tau, k) = C \, r_{xx}(k+\tau, k) \, C^T$$
$$r_{yx}(k+\tau, k) = C \, r_{xx}(k+\tau, k)$$
$$P(k+1) = \Phi P(k)\Phi^T + R_1, \quad P(0) = R_0$$
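Iterating these recursions for a scalar system shows the covariance converging to its stationary value r1/(1-Φ²); the numbers below are illustrative.

```python
# Scalar case of the summary above: x(k+1) = phi*x(k) + v(k), y(k) = x(k)
phi, r1 = 0.8, 1.0
m0, r0 = 2.0, 0.5

m, P = m0, r0
for _ in range(200):
    m = phi * m                  # m(k+1) = phi * m(k)
    P = phi * P * phi + r1       # P(k+1) = phi * P(k) * phi^T + R1

P_stat = r1 / (1 - phi**2)       # stationary covariance for |phi| < 1
```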
Stochastic difference equations, example

Consider the process

$$x(k+1) = a x(k) + v(k)$$

v(k) is white noise with covariance r1. The initial state has the mean value m0 and autocovariance r0 at time instant k0.

For the mean value:

$$m(k+1) = a \, m(k), \quad m(k_0) = m_0 \;\Rightarrow\; m(k) = a^{k-k_0} m_0$$

For the covariance:

$$P(k+1) = a^2 P(k) + r_1, \quad P(k_0) = r_0 \;\Rightarrow\; P(k) = a^{2(k-k_0)} r_0 + \frac{1 - a^{2(k-k_0)}}{1 - a^2} \, r_1$$

$$r_x(l, k) = \begin{cases} a^{l-k} P(k), & l \ge k \\ a^{k-l} P(l), & l < k \end{cases}$$
Stochastic difference equations, example

Assume that the process is stable (|a| < 1) and has been running for a long period of time (k0 → -∞). Then it follows that

$$m(k) = 0, \qquad P(k) = \frac{r_1}{1 - a^2}, \qquad r_x(l, k) = r_x(\tau) = \frac{r_1 a^{|\tau|}}{1 - a^2}$$

Assume that additional independent zero-mean white noise e(k) with covariance r2 is added to the output, y(k) = x(k) + e(k). The output covariance then becomes

$$r_y(\tau) = \begin{cases} \dfrac{r_1}{1 - a^2} + r_2, & \tau = 0 \\[2mm] \dfrac{r_1 a^{|\tau|}}{1 - a^2}, & \tau \ne 0 \end{cases}$$
Stochastic difference equations, example

The Fourier transform (do it!) gives the autospectral density of the signal:

$$\phi_y(\omega) = \frac{1}{2\pi}\left(\frac{r_1}{(e^{i\omega} - a)(e^{-i\omega} - a)} + r_2\right) = \frac{1}{2\pi}\left(\frac{r_1}{1 + a^2 - 2a\cos\omega} + r_2\right)$$

Now the covariance and spectral density can be plotted (a = 0.5).

[Figure: autocovariance (left) and autospectral density (right); the upper curve is with measurement noise, the lower without.]
Stochastic I/O-models

An I/O-disturbance model gives coloured noise when white noise is filtered through a dynamic system. The signal u(k) has the mean m_u(k) and covariance r_u.

u(k) → H(z) → y(k)

The pulse response or weighting function is h(k). The I/O-description can be written in the form

$$y(k) = \sum_{l=-\infty}^{k} h(k-l) \, u(l) = \sum_{n=0}^{\infty} h(n) \, u(k-n)$$

Take expectations on both sides.

Stochastic I/O-models

$$m_y(k) = E\{y(k)\} = E\left\{\sum_{n=0}^{\infty} h(n) \, u(k-n)\right\} = \sum_{n=0}^{\infty} h(n) \, E\{u(k-n)\} = \sum_{n=0}^{\infty} h(n) \, m_u(k-n)$$

The mean of y is obtained by sending the mean of u through the dynamic process. For determining the covariance, note that

$$y(k) - m_y(k) = \sum_{n=0}^{\infty} h(n) \left(u(k-n) - m_u(k-n)\right)$$

The covariance equations can thus be written by assuming that the mean value is zero.
Stochastic I/O-models

$$r_y(\tau) = E\{y(k+\tau) \, y^T(k)\} = E\left\{\left(\sum_{n=0}^{\infty} h(n) \, u(k+\tau-n)\right)\left(\sum_{l=0}^{\infty} h(l) \, u(k-l)\right)^T\right\}$$
$$= \sum_{n=0}^{\infty}\sum_{l=0}^{\infty} h(n) \, E\{u(k+\tau-n) \, u^T(k-l)\} \, h^T(l) = \sum_{n=0}^{\infty}\sum_{l=0}^{\infty} h(n) \, r_u(\tau + l - n) \, h^T(l)$$

A similar result is obtained for the cross-covariance:

$$r_{yu}(\tau) = E\{y(k+\tau) \, u^T(k)\} = E\left\{\left(\sum_{n=0}^{\infty} h(n) \, u(k+\tau-n)\right) u^T(k)\right\}$$
$$= \sum_{n=0}^{\infty} h(n) \, E\{u(k+\tau-n) \, u^T(k)\} = \sum_{n=0}^{\infty} h(n) \, r_u(\tau - n)$$
Stochastic I/O-models

By using spectral densities, simpler forms are obtained:

$$\phi_y(\omega) = \phi_{yy}(\omega) = \frac{1}{2\pi}\sum_{n=-\infty}^{\infty} r_y(n) \, e^{-in\omega} = \frac{1}{2\pi}\sum_{n=-\infty}^{\infty} e^{-in\omega} \sum_{k=0}^{\infty}\sum_{l=0}^{\infty} h(k) \, r_u(n + l - k) \, h^T(l)$$
$$= \frac{1}{2\pi}\sum_{k=0}^{\infty}\sum_{n=-\infty}^{\infty}\sum_{l=0}^{\infty} e^{-ik\omega} h(k) \; e^{-i(n+l-k)\omega} r_u(n+l-k) \; e^{il\omega} h^T(l)$$
$$= \left(\sum_{k=0}^{\infty} e^{-ik\omega} h(k)\right)\left(\frac{1}{2\pi}\sum_{n=-\infty}^{\infty} e^{-in\omega} r_u(n)\right)\left(\sum_{l=0}^{\infty} e^{il\omega} h^T(l)\right)$$
Stochastic I/O-models

For the pulse transfer function and weighting function,

$$H(z) = \sum_{k=0}^{\infty} z^{-k} \, h(k)$$

By substituting into the formula for the spectral density,

$$\phi_y(\omega) = H(e^{i\omega}) \, \phi_u(\omega) \, H^T(e^{-i\omega})$$

Correspondingly for the cross-spectral density:

$$\phi_{yu}(\omega) = \frac{1}{2\pi}\sum_{n=-\infty}^{\infty} e^{-in\omega} \, r_{yu}(n) = \frac{1}{2\pi}\sum_{n=-\infty}^{\infty} e^{-in\omega} \sum_{k=0}^{\infty} h(k) \, r_u(n-k)$$
$$= \sum_{k=0}^{\infty} e^{-ik\omega} h(k) \; \frac{1}{2\pi}\sum_{n=-\infty}^{\infty} e^{-i(n-k)\omega} \, r_u(n-k) = H(e^{i\omega}) \, \phi_u(\omega)$$
Stochastic I/O-models

Summarizing: a stochastic process is given by a filter H(z).

Mean:

$$m_y = H(1) \, m_u$$

Spectral density:

$$\phi_y(\omega) = H(e^{i\omega}) \, \phi_u(\omega) \, H^T(e^{-i\omega})$$

Cross-spectral density:

$$\phi_{yu}(\omega) = H(e^{i\omega}) \, \phi_u(\omega)$$
Stochastic I/O-models, example

Consider the same example as previously, but now using the new formulas:

$$x(k+1) = a x(k) + v(k) \;\Rightarrow\; H(z) = \frac{1}{z - a}$$

v(k) is white noise with spectral density

$$\phi_v(\omega) = \frac{r_1}{2\pi}$$

The spectral density of x(k):

$$\phi_x(\omega) = H(e^{i\omega}) \, \phi_v(\omega) \, H^T(e^{-i\omega}) = \frac{r_1}{2\pi (e^{i\omega} - a)(e^{-i\omega} - a)} = \frac{r_1}{2\pi (1 + a^2 - a e^{-i\omega} - a e^{i\omega})} = \frac{r_1}{2\pi (1 + a^2 - 2a\cos\omega)}$$
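The closed form can be checked by evaluating H(e^{iω}) on a frequency grid (a = 0.5 and r1 = 1 are illustrative values; note that H(e^{-iω}) is the complex conjugate of H(e^{iω}) for a real-coefficient filter):

```python
import numpy as np

a, r1 = 0.5, 1.0
w = np.linspace(-np.pi, np.pi, 1001)
H = 1.0 / (np.exp(1j*w) - a)                        # H(e^{iw}) for H(z) = 1/(z-a)
phi_via_H = (H * (r1/(2*np.pi)) * np.conj(H)).real  # H(e^{iw}) phi_v H(e^{-iw})
phi_closed = r1 / (2*np.pi*(1 + a**2 - 2*a*np.cos(w)))
err = np.max(np.abs(phi_via_H - phi_closed))
```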
Stochastic I/O-models, example

Because x(k) and e(k) are independent, their spectral densities can be summed. For y(k) = x(k) + e(k):

$$\phi_y(\omega) = \phi_x(\omega) + \phi_e(\omega) = \frac{r_1}{2\pi (1 + a^2 - 2a\cos\omega)} + \frac{r_2}{2\pi} = \frac{1}{2\pi}\left(\frac{r_1}{1 + a^2 - 2a\cos\omega} + r_2\right)$$

The same result is obtained as previously with the state-space approach.
Spectral factorization

Now consider a given output spectral density and develop a filter by which it is generated using white noise as input. Move back to the z-domain by the transformation z = e^{iω}. By the formula for the spectral density, the function in the z-domain is

$$F(z) = \frac{1}{2\pi} H(z) \, H(z^{-1})$$

If z_i is a zero of H(z), then z_i^{-1} is a zero of H(z^{-1}). The same holds for the poles, so the zeros and poles of F are mapped symmetrically with respect to the unit circle.
Spectral factorization

Based on this, it is straightforward to find an H that corresponds to the given spectral density. First the poles and zeros of F are determined. They always exist as pairs whose product is 1:

$$z_i z_j = 1, \qquad p_i p_j = 1$$

Take those poles and zeros which are inside the unit disc:

$$H(z) = K \, \frac{\prod_i (z - z_i)}{\prod_j (z - p_j)}$$
Example

Earlier, the process discussed had the spectral density

$$\phi_y(\omega) = \phi_x(\omega) + \phi_e(\omega) = \frac{1}{2\pi}\left(\frac{r_1}{1 + a^2 - 2a\cos\omega} + r_2\right)$$

which can be written in the form (with z = e^{iω})

$$\phi_y(\omega) = \frac{1}{2\pi}\left(\frac{r_1}{(z - a)(z^{-1} - a)} + r_2\right) = \frac{1}{2\pi} \cdot \frac{r_1 + r_2(1 + a^2) - r_2 a (z + z^{-1})}{(z - a)(z^{-1} - a)}$$

The denominator is already in factorized form. Let us factorize the numerator also.
Example

Set

$$\lambda^2 (z - b)(z^{-1} - b) = r_1 + r_2(1 + a^2) - r_2 a (z + z^{-1})$$

By setting the coefficients of the powers of z equal:

$$z^0: \quad \lambda^2 (1 + b^2) = r_1 + r_2(1 + a^2)$$
$$z^1: \quad \lambda^2 b = r_2 a$$

we get (after calculations)

$$b = \frac{r_1 + r_2(1 + a^2) - \sqrt{\left(r_1 + r_2(1 - a)^2\right)\left(r_1 + r_2(1 + a)^2\right)}}{2 a r_2}$$

which is the root inside the unit circle.
Example

$$\lambda^2 = \frac{1}{2}\left(r_1 + r_2(1 + a^2) + \sqrt{\left(r_1 + r_2(1 - a)^2\right)\left(r_1 + r_2(1 + a)^2\right)}\right)$$

So the spectral density can be written as

$$\phi_y(\omega) = \frac{\lambda^2 (z - b)(z^{-1} - b)}{2\pi (z - a)(z^{-1} - a)}, \qquad a \text{ and } b \text{ inside the unit disc}$$

The process can be generated by sending white noise through the filter

$$H(z) = \frac{z - b}{z - a}$$
Example

That corresponds to the process

$$y(k+1) = a \, y(k) + e(k+1) - b \, e(k)$$

in which e(k) is zero-mean white noise with variance λ².
