
CONTENT

1. The random process
1.1 Characteristics of a random process
2. Synthesis of an optimal stabilization system for a stable object with ideal measurements of its output coordinates and random impacts
3. Calculations
4. Conclusion
REFERENCES

1. The random process

A random process is a process whose behavior is not completely predictable and
can be characterized by statistical laws.
Examples of random processes:
- daily stream flow;
- hourly rainfall of storm events;
- stock indices.
A random variable is a mapping that assigns outcomes of a random experiment to
real numbers. The occurrence of an outcome follows a certain probability
distribution; therefore, a random variable is completely characterized by its
probability density function (PDF).
A stochastic process (random process, or random function) can be defined as a
family of random variables {X(t), t ∊ T}. The variables are indexed by the
parameter t, which belongs to the set T, called the index set or parameter set.

- When T = {…, −1, 0, 1, …} or T = {0, 1, 2, …}, the stochastic process is referred to as a discrete-parameter or discrete-time process.
- When T = {t : 0 ≤ t < ∞} or T = {t : −∞ < t < ∞}, the process is referred to as a continuous-parameter or continuous-time process.
The set of possible values which the random variables X(t), t ∊ T, may assume is
called the state space of the process.
The term “stochastic process” appears mostly in statistical textbooks, whereas
the term “random process” is frequently used in books on engineering
applications.
A continuous-time stochastic process {X(t), t ∊ T} is said to have independent
increments if for all choices of t_0 < t_1 < … < t_n the n random variables
X(t_1) − X(t_0), X(t_2) − X(t_1), …, X(t_n) − X(t_{n−1}) are independent. The
process is said to have stationary independent increments if, in addition,
X(t_i + s) − X(t_{i−1} + s) has the same distribution as X(t_i) − X(t_{i−1})
for all t_i, t_{i−1} ∊ T and s > 0.

A random variable X can be assigned a number x(ω) based on the outcome ω of the
random experiment. Similarly, a random process {X(t), t ∊ T} can assume values
{X(t, ω), t ∊ T} depending on the outcome of a random experiment. Each possible
{X(t, ω), t ∊ T} is called a realization of the random process {X(t), t ∊ T}.
The totality of all realizations is called the ensemble of the random process.
1.1 Characteristics of a random process

- Mean function:

m_X(t) = E[X(t)] = ∫_{−∞}^{+∞} x f_{X(t)}(x) dx        (1.1)

m_X(t) is a function of time. It specifies the average behavior of X(t) over time.

In the case where the mean of X(t) does not depend on t, we have:

m_X(t) = E[X(t)] = m_X (a constant)        (1.2)


- Variance function:

It can be obtained as:

Var(X(t)) = E[{X(t) − m_X(t)}^2] = C_X(t, t)        (1.3)

Var(X(t)) is a function of time and is always non-negative.

- Auto-correlation function:

R_X(t_1, t_2) is defined as the correlation between the two time samples X(t_1) and X(t_2):

R_X(t_1, t_2) = E[X(t_1) X(t_2)]        (1.4)

Properties:
1. In general, R_X(t_1, t_2) depends on both t_1 and t_2.
2. For real processes, R_X(t_1, t_2) is symmetric:
R_X(t_1, t_2) = R_X(t_2, t_1)        (1.5)
3. For any t, t_1 and t_2:
R_X(t, t) = E[X^2(t)] ≥ 0
|R_X(t_1, t_2)| ≤ √(E[X^2(t_1)] E[X^2(t_2)])        (1.6)

A process with E[X^2(t)] < ∞ for all t is called a second-order process.

- Auto-covariance function:

C_X(t_1, t_2) is defined as the covariance between the two time samples X(t_1) and X(t_2):

C_X(t_1, t_2) = E[{X(t_1) − m_X(t_1)}{X(t_2) − m_X(t_2)}] = R_X(t_1, t_2) − m_X(t_1) m_X(t_2)        (1.7)

- Correlation coefficient function:

ρ_X(t_1, t_2) = C_X(t_1, t_2) / (√C_X(t_1, t_1) √C_X(t_2, t_2))        (1.8)

ρ_X(t_1, t_2) is a function of the times t_1 and t_2. It is also symmetric.

For multiple random processes one can define the:
- cross-covariance and
- cross-correlation functions.
Their joint behavior is completely specified by the joint distributions for all
combinations of their time samples. Some simpler functions can be used to
partially specify the joint behavior.
Consider two random processes X(t) and Y(t).
- Cross-correlation function:
R_{X,Y}(t_1, t_2) = E[X(t_1) Y(t_2)]        (1.9)
- If R_{X,Y}(t_1, t_2) = 0 for all t_1 and t_2, the processes X(t) and Y(t) are orthogonal.
- Unlike the auto-correlation function, the cross-correlation function is not necessarily symmetric:
R_{X,Y}(t_1, t_2) ≠ R_{X,Y}(t_2, t_1)        (1.10)
- Cross-covariance function:
C_{X,Y}(t_1, t_2) = E[{X(t_1) − m_X(t_1)}{Y(t_2) − m_Y(t_2)}] = R_{X,Y}(t_1, t_2) − m_X(t_1) m_Y(t_2)        (1.11)
- If C_{X,Y}(t_1, t_2) = 0 for all t_1 and t_2, the processes X(t) and Y(t) are uncorrelated.
Two processes X(t) and Y(t) are independent if any two vectors of time samples,
one from each process, are independent.
- If X(t) and Y(t) are independent, then they are uncorrelated:
C_{X,Y}(t_1, t_2) = 0 for any t_1, t_2 (the reverse is not always true).
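The characteristics above can be estimated from an ensemble of realizations by averaging across the ensemble. The sketch below is a minimal illustration in Python; the example process X(t) = A·cos(0.2t) + noise and all numbers are hypothetical, chosen only to exercise equations (1.1), (1.4), (1.7) and the properties (1.5)–(1.6).

```python
import numpy as np

rng = np.random.default_rng(0)
n_real, n_t = 5000, 50          # ensemble size, number of time samples

# Hypothetical example process: X(t) = A*cos(0.2*t) + noise, A ~ N(0,1)
t = np.arange(n_t)
A = rng.standard_normal((n_real, 1))
X = A * np.cos(0.2 * t) + 0.1 * rng.standard_normal((n_real, n_t))

# Mean function m_X(t): average across the ensemble (eq. 1.1)
m = X.mean(axis=0)

# Auto-correlation R_X(t1, t2) = E[X(t1) X(t2)] (eq. 1.4), estimated on a grid
R = X.T @ X / n_real

# Auto-covariance C_X(t1, t2) = R_X(t1, t2) - m(t1) m(t2) (eq. 1.7)
C = R - np.outer(m, m)

# Property (1.5): symmetry of the auto-correlation matrix
assert np.allclose(R, R.T)

# Property (1.6): Cauchy-Schwarz bound |R(t1,t2)| <= sqrt(E[X^2(t1)] E[X^2(t2)])
d = np.sqrt(np.diag(R))
assert np.all(np.abs(R) <= np.outer(d, d) + 1e-9)
```

Note how the estimate of C mirrors (1.7): the sample second moment minus the outer product of the sample mean function.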

The concept of white noise


An important type of stochastic signal is the so-called white noise signal (or
process). It is called “white” because, in some sense, white noise contains
equally much of all frequency components, analogously to white light, which
contains all colours.
White noise has zero mean value:
m_X = 0        (1.12)
There is no covariance or relation between sample values at different time
indexes; hence the auto-covariance is zero for all lags L except L = 0. Thus,
the auto-covariance is the pulse function shown in Fig. 1.1.

Fig. 1.1 White noise has an auto-correlation function like a pulse function.


Mathematically, the auto-covariance function of white noise is:
R_x(L) = Var(x) δ(L) = σ^2 δ(L) = V δ(L)        (1.13)
Here the short-hand symbol V has been introduced for the variance, and δ(L) is
the unit pulse, defined as follows:

δ(L) = 1 when L = 0;  δ(L) = 0 when L ≠ 0        (1.14)

White noise is an important signal in estimation theory because the random
noise which is always present in measurements can be represented by white noise.
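A minimal numerical check of (1.13): for a generated white-noise sequence, the sample auto-covariance should be close to V at lag 0 and close to zero at all other lags. The variance V = 2 and the sample size are hypothetical values chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 200_000
V = 2.0                                  # variance sigma^2 of the white noise
x = rng.normal(0.0, np.sqrt(V), N)

# Sample auto-covariance at lags L = 0..5; per eq. (1.13) it should be V*delta(L)
lags = np.arange(6)
c = np.array([np.dot(x[:N - L], x[L:]) / (N - L) for L in lags])

assert abs(c[0] - V) < 0.05              # lag 0: close to the variance V
assert np.all(np.abs(c[1:]) < 0.05)      # other lags: close to zero
```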
The concept of power spectral density

The power spectral density (PSD) describes how the power (or variance) of a
time series is distributed over frequency.

Properties of the power spectral density:

a. S_X(f) is real and even:
S_X(f) = S_X(−f)        (1.15)
b. The area under S_X(f) is the average power of X(t):
∫_{−∞}^{+∞} S_X(f) df = R_X(0) = E[X^2(t)]        (1.16)
c. S_X(f) is an average power density; hence the average power of X(t) in the
frequency band [f_1, f_2] is
∫_{−f_2}^{−f_1} S_X(f) df + ∫_{f_1}^{f_2} S_X(f) df = 2 ∫_{f_1}^{f_2} S_X(f) df        (1.17)
d. S_X(f) is nonnegative: S_X(f) ≥ 0 for all f.
e. In general, any function S(f) that is real, even, nonnegative and has finite
area can be a PSD function.
Mathematically, the PSD is defined as the Fourier transform (FT) of the
autocorrelation sequence of the time series.
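Properties (a), (b) and (d) can be checked numerically on a periodogram estimate of the PSD. The sketch below is a minimal illustration on white Gaussian noise with unit sampling rate; the periodogram scaling used here is a common convention, not taken from the text.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 1 << 14
x = rng.standard_normal(N)

# Periodogram estimate of the PSD (two-sided, unit sampling rate)
X = np.fft.fft(x)
S = np.abs(X) ** 2 / N

# Property (d): the PSD is nonnegative
assert np.all(S >= 0.0)

# Property (a): for a real signal the PSD is even, S(f) = S(-f)
assert np.allclose(S[1:], S[1:][::-1])

# Property (b): the area under S equals the average power E[X^2] = R_X(0).
# Discretely this is the discrete Parseval identity (bin width 1/N):
power = S.sum() / N
assert abs(power - np.mean(x**2)) < 1e-9
```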

Fourier Transform
The Fourier transform is closely related to the Laplace transform. It relies on
the assumption that any time-domain signal can be decomposed into a sum of
sinusoids (sines and cosines). Under this assumption, the Fourier transform
converts a time-domain signal into its frequency-domain representation as a
function of the radial frequency ω.

The Fourier transform of a signal f(t) is defined as follows:

F(jω) = ∫_{−∞}^{+∞} f(t) e^{−jωt} dt        (1.18)
It is possible to show that the Fourier transform is equivalent to the Laplace
transform when the following condition is true:

s = jω,

then

F(s) = ∫_{−∞}^{+∞} f(t) e^{−st} dt        (1.19)
The formulas (1.18) and (1.19) represent the direct Fourier transform.
The inverse Fourier transform is defined as follows:

f(t) = (1/2π) ∫_{−∞}^{+∞} F(jω) e^{jωt} dω        (1.20)

The Fourier transform exists for all functions that satisfy the following condition:

∫_{−∞}^{+∞} |f(t)| dt < ∞        (1.21)
The spectral density function of a stochastic (random) process is the Fourier
transform of its covariance function:

S_xx(s) = ∫_{−∞}^{+∞} R_xx(τ) e^{−sτ} dτ        (1.22)

In the same way, the cospectral density function of the random processes x(t)
and y(t) can be evaluated.
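Definition (1.22) can be checked numerically for a covariance function whose transform is known in closed form. The sketch below assumes an exponential auto-covariance R_xx(τ) = σ² e^{−|τ|/τ_0} (a hypothetical choice, with illustrative values of σ² and τ_0), whose spectral density is 2σ²τ_0 / (1 + ω²τ_0²).

```python
import numpy as np

# Hypothetical illustration values
sigma2, tau0 = 1.0, 0.5
tau = np.linspace(-50.0, 50.0, 200_001)
R = sigma2 * np.exp(-np.abs(tau) / tau0)     # exponential auto-covariance

# Numerical Fourier transform of R; since R is even, only the cosine part remains
for w in (0.0, 1.0, 3.0):
    S_num = np.trapz(R * np.cos(w * tau), tau)
    S_ref = 2.0 * sigma2 * tau0 / (1.0 + (w * tau0) ** 2)   # closed form
    assert abs(S_num - S_ref) < 1e-3
```

This exponential/Lorentzian pair is the classic first-order example; first-order rational spectral densities of a similar shape appear later in Table 3.2.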
2. Synthesis of an optimal stabilization system for a stable object with ideal
measurements of its output coordinates and random impacts

Instability of the object to be stabilized imposes serious limitations on the
optimal structure of the regulator that is to be synthesized. When the object
is stable, the synthesis task is easy to solve.
Let the motion of the object to be stabilized be described by the well-known
system of equations:
Px = Mu + ψ,        (2.1)
where the determinant |P| is Hurwitz, and the vector ψ is an n-dimensional
centered random process with a known spectral density matrix S_ψψ.


[Block diagram: the control u passes through M and P^{−1} to produce the output x of the stabilization plant.]

Fig. 2.1 Structure of the stabilization system


Assume that the output vector x is measured by an “ideal” measurer and fed to
the regulator, which is placed in the feedback path of the object and has a
matrix of transfer functions W that is to be found. The equation of the
regulator is:

u = Wx.        (2.2)

The matrix W can be written, with the help of the algorithm of unilateral
imposition of poles, as:

W = W_10^{−1} W_1 = W_2 W_20^{−1},        (2.3)

where the matrices W_10, W_20, W_1 and W_2 are polynomial, and |W_10| = |W_20|.
Let us denote by F_x the matrix of closed-loop transfer functions from the
disturbance input ψ to the output x, and by F_u the matrix of transfer
functions from the input ψ to the output u. Substituting the vector u from
(2.2) into the object equation (2.1), after a simple transformation we get:

F_x = (P − MW)^{−1}        (2.4)

F_u = W (P − MW)^{−1}        (2.5)

Substituting the notation (2.3) into the matrices (2.4) and (2.5), we rewrite
them as follows:

F_x = W_20 (P W_20 − M W_2)^{−1} = W_20 F_0^{−1}        (2.6)

F_u = W_2 (P W_20 − M W_2)^{−1} = W_2 F_0^{−1}        (2.7)

As is known, the closed-loop system is stable if the determinant of the matrix
F_0 = P W_20 − M W_2, i.e. |F_0|, satisfies the Hurwitz criterion.

The task of the synthesis is to find such a structure of the matrix W, in the
class of fractional-rational functions, that stabilizes the closed-loop system
and at the same time delivers the minimum of the functional:

e = (1/j) ∫_{−j∞}^{j∞} tr[R S'_xx + C S'_uu] ds        (2.8)

where S'_xx, S'_uu are the matrices of spectral densities of the vectors x and
u, and R and C are positive definite symmetric weighting matrices.

The vectors x and u are equal to

x = F_x ψ,  u = F_u ψ.

Then the object equation (2.1) can be rewritten as

P F_x − M F_u = E_n        (2.9)

We call this equation the connection between the functions F_x and F_u. The
functional becomes:

e = (1/j) ∫_{−j∞}^{j∞} tr[(F_x* R F_x + F_u* C F_u) S'_ψψ] ds        (2.10)

(here and below the subscript asterisk denotes the Hermitian conjugate).

where S'_ψψ is the transposed matrix of the spectral density of the
perturbation ψ.

Since the object to be stabilized is stable, using the connection between the
matrices F_x and F_u we can rewrite the function F_x as

F_x = P^{−1} (M F_u + E_n)        (2.11)

In accordance with the above equation, the system functional can be written as
follows:

e = (1/j) ∫_{−j∞}^{j∞} tr[({P^{−1}(M F_u + E_n)}* R P^{−1}(M F_u + E_n) + F_u* C F_u) S'_ψψ] ds        (2.12)

The task of finding the optimal structure of the matrix F_u is equivalent to
the task of minimizing the functional (2.12) in the class of physically
realizable fractional-rational matrices F_u. Let us minimize the functional
(2.12) using the Wiener–Kolmogorov method. Expanding the integrand of (2.12)
gives

e = (1/j) ∫_{−j∞}^{j∞} tr[(F_u* (M* P*^{−1} R P^{−1} M + C) F_u + F_u* M* P*^{−1} R P^{−1} + P*^{−1} R P^{−1} M F_u + P*^{−1} R P^{−1}) S'_ψψ] ds        (2.13)

Let us assign

S'_ψψ = D D*,

M* P*^{−1} R P^{−1} M + C = Γ* Γ,

T = Γ*^{−1} M* P*^{−1} R P^{−1} D = T_+ + T_0 + T_−,

where D and Γ are the results of spectral factorization, and T_+, T_0 and T_−
denote the parts of the separation of T with poles in the left half-plane, the
polynomial part, and the part with poles in the right half-plane, respectively.

Using the transformations done above, the functional (2.13) can be brought to
the form

e = (1/j) ∫_{−j∞}^{j∞} tr[(Γ F_u D + T_0 + T_+)(Γ F_u D + T_0 + T_+)*] ds + e_0,        (2.14)

where e_0 collects the terms that do not depend on the physically realizable
part of F_u.

Setting the variation of (2.14) to zero gives the condition

Γ F_u D + T_0 + T_+ = 0.        (2.15)

Hence the optimal structure of the matrix F_u, determined by the input
information, has the following form:

F_u = −Γ^{−1} (T_0 + T_+) D^{−1}.        (2.16)
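The synthesis chain above (spectral factorization of Γ*Γ, then separation and (2.16)) can be partially illustrated in the scalar case. The sketch below performs only the factorization Γ*Γ = M* P*^{−1} R P^{−1} M + C numerically, assuming the Section 3 data reads P(s) = (s+1)(s+3), M(s) = 2(s+2), R = 0.25, C = 1 (these readings of the garbled table are assumptions); it is a minimal sketch, not the full controller derivation.

```python
import numpy as np

# Assumed plant data from Section 3: P(s) = (s+1)(s+3), M(s) = 2(s+2)
P = np.polynomial.Polynomial.fromroots([-1.0, -3.0])
M = 2.0 * np.polynomial.Polynomial.fromroots([-2.0])

def conj(p):
    """p(-s): flip the sign of the odd-degree coefficients."""
    c = p.coef.copy()
    c[1::2] *= -1
    return np.polynomial.Polynomial(c)

# Numerator of Gamma_* Gamma = (R M_* M + C P_* P) / (P_* P), with R=0.25, C=1
num = 0.25 * conj(M) * M + 1.0 * conj(P) * P

# Spectral factorization: Gamma's numerator collects the left-half-plane roots
roots = num.roots()
lhp = roots[roots.real < 0]
Gamma_num = np.polynomial.Polynomial.fromroots(lhp)

# Check: Gamma_num(s) * Gamma_num(-s) reproduces the factored polynomial
recon = Gamma_num * conj(Gamma_num)
assert np.allclose(recon.coef, num.coef)
```

Since P is Hurwitz, the scalar spectral factor is simply Γ(s) = Gamma_num(s) / P(s).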

3. Calculations
“Optimal Stabilization System Design”
It is necessary to perform a synthesis of a stabilization system for a dynamic
plant under external disturbances with known spectral density characteristics.
The block scheme of the stabilization system is given in Fig. 3.1.
Task:
1. Estimate the performance of the system under the external disturbances
enclosed with negative unit feedback by applying the performance criterion (3.1).
2. Carry out the synthesis of the optimal stabilization system and estimate its
performance by applying criterion (3.1); check the constraint equation (3.2).
3. Build a step response for the error of the stabilization system.
4. Build a Bode plot for the defined optimal controller W.
5. Build a surface that reflects the dependence of the stabilization system
error on the external intensities ψ (which are approximated with S_ψψ), namely,
e / σ_ψ^2 = f(S_ψψ(τ_0)).
6. Compare the results of the optimal system and the system enclosed with unit
feedback. Draw conclusions.


[Block diagram: the control u passes through M and P^{−1} to produce the output x of the stabilization plant.]

Fig. 3.1 Structure of the stabilization system


Initial data are given in Table 3.1 – Table 3.3:
Table 3.1
Variant 1:  P = (s + 1)(s + 3);  M = 2(s + 2);  R = 0.25;  C = 1.

The external disturbances are approximated with the spectral densities S_ψψ
given in Table 3.2; the values of the time constant are given in Table 3.3.

Table 3.2
Spectral densities approximation
Variants 1–4:   S_ψψ = σ_ψ^2 / (π (τ_0 s + 1))
Variants 5–8:   S_ψψ = 2 σ_ψ^2 / π
Variants 9–12:  S_ψψ = σ_ψ^2 / (π |τ_0 s + 1|^2)

Table 3.3
Time constants
Variant 1:  τ_0 = 1.0 sec

To analyse the dependence of the stabilization system error on the external
intensities that are approximated with the spectral densities S_ψψ, assume that
the time constant varies in the range τ_0 = [0.1, 1, 10] sec.
The performance criterion is given below:

e = (1/j) ∫_{−j∞}^{j∞} tr[(F_x* R F_x + F_u* C F_u) S'_ψψ] ds,        (3.1)

where F_x, F_u are the matrices of transfer functions of the closed-loop system
from ψ to x and from ψ to u, respectively, and C, R are positive definite
symmetric weighting matrices.

The constraint equation that establishes the relation between the matrices F_x
and F_u is given as follows:
P F_x − M F_u = E_n,        (3.2)

where E_n is an identity matrix of appropriate dimension.


The optimal structure of the stabilization system controller W is defined as
follows:

W = F_u F_x^{−1}
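Task items 1 and 3 can be sketched numerically with scipy.signal. The sketch below is a minimal illustration; it assumes the Table 3.1 data reads P(s) = (s+1)(s+3) and M(s) = 2(s+2), and that "negative unit feedback" means W = −1, so that by (2.4) the closed loop from ψ to x is F_x = (P + M)^{−1}. These readings are assumptions, not confirmed by the text.

```python
import numpy as np
from scipy import signal

# Assumed Table 3.1 data: P(s) = s^2 + 4s + 3, M(s) = 2s + 4
P = np.array([1.0, 4.0, 3.0])
M = np.array([2.0, 4.0])

# Unit negative feedback u = -x gives F_x = (P + M)^{-1} = 1 / (s^2 + 6s + 7)
den = P.copy()
den[-M.size:] += M
Fx = signal.TransferFunction([1.0], den)

# Stability check: all poles of the closed loop must lie in the left half-plane
poles = np.roots(den)
assert np.all(poles.real < 0)

# Step response of the stabilization error x
t = np.linspace(0.0, 10.0, 500)
t, y = signal.step(Fx, T=t)
assert abs(y[-1] - 1.0 / den[-1]) < 1e-3   # settles at the static gain 1/7
```

A Bode plot of a candidate controller (task item 4) can be obtained the same way via `signal.bode` on the corresponding `TransferFunction`.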

