Lecture 14
Stochastic Processes
Introduction
Let ξ denote the random outcome of an experiment. To every such outcome suppose a waveform X(t, ξ) is assigned. The collection of such waveforms {X(t, ξ)} forms a stochastic process. The set {ξ_k} and the time index t can be continuous or discrete (countably infinite or finite) as well.

[Fig. 14.1: an ensemble of waveforms X(t, ξ_k), one per outcome ξ_k, plotted against t.]

For fixed ξ_i ∈ S (the set of all experimental outcomes), X(t, ξ_i) is a specific time function. For fixed t = t1,

X1 = X(t1, ξ_i)

is a random variable. The ensemble of all such realizations X(t, ξ) over time represents the stochastic process X(t) (see Fig. 14.1). For example,

X(t) = a cos(ω0 t + φ),

where φ is a uniformly distributed random variable in (0, 2π), represents a stochastic process. Stochastic processes are everywhere: Brownian motion, stock market fluctuations, and various queuing systems all represent stochastic phenomena.

PILLAI/Cha
If X(t) is a stochastic process, then for fixed t, X(t) represents a random variable. Its distribution function is given by

F_X(x, t) = P{X(t) ≤ x}.   (14-1)

Notice that F_X(x, t) depends on t, since for a different t we obtain a different random variable. Further,

f_X(x, t) = ∂F_X(x, t)/∂x   (14-2)

represents the first-order probability density function of the process X(t).
For t = t1 and t = t2, X(t) represents two different random variables X1 = X(t1) and X2 = X(t2) respectively. Their joint distribution is given by

F_X(x1, x2, t1, t2) = P{X(t1) ≤ x1, X(t2) ≤ x2}   (14-3)

and

f_X(x1, x2, t1, t2) = ∂²F_X(x1, x2, t1, t2) / (∂x1 ∂x2)   (14-4)

represents the second-order density function of the process X(t).
Properties: The autocorrelation function R_XX(t1, t2) = E{X(t1) X*(t2)} is non-negative definite. In particular, for z = ∫_{−T}^{T} X(t) dt we have

E[|z|²] = ∫_{−T}^{T} ∫_{−T}^{T} E{X(t1) X*(t2)} dt1 dt2 = ∫_{−T}^{T} ∫_{−T}^{T} R_XX(t1, t2) dt1 dt2 ≥ 0.   (14-10)
Example 14.2:

X(t) = a cos(ω0 t + φ),  φ ~ U(0, 2π).   (14-11)

This gives

μ_X(t) = E{X(t)} = a E{cos(ω0 t + φ)} = a cos ω0 t E{cos φ} − a sin ω0 t E{sin φ} = 0,   (14-12)

since E{cos φ} = (1/2π) ∫_0^{2π} cos φ dφ = 0 = E{sin φ}. Similarly,

R_XX(t1, t2) = a² E{cos(ω0 t1 + φ) cos(ω0 t2 + φ)}
             = (a²/2) E{cos ω0(t1 − t2) + cos(ω0(t1 + t2) + 2φ)}
             = (a²/2) cos ω0(t1 − t2).   (14-13)
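As a quick numerical check of (14-12) and (14-13), one can average over an ensemble of random phases. A minimal sketch using NumPy; the amplitude, frequency, and the two time instants are arbitrary illustrative choices, not values from the text:

```python
import numpy as np

rng = np.random.default_rng(0)
a, w0 = 2.0, 3.0                 # amplitude and angular frequency (arbitrary choices)
N = 200_000                      # ensemble size
phi = rng.uniform(0.0, 2 * np.pi, size=N)   # phi ~ U(0, 2*pi), as in (14-11)

t1, t2 = 0.7, 1.9                # two arbitrary time instants
x1 = a * np.cos(w0 * t1 + phi)   # ensemble of X(t1) values
x2 = a * np.cos(w0 * t2 + phi)   # ensemble of X(t2) values

mean_hat = x1.mean()                              # estimate of E{X(t1)}; (14-12) says 0
R_hat = (x1 * x2).mean()                          # estimate of R_XX(t1, t2)
R_theory = (a**2 / 2) * np.cos(w0 * (t1 - t2))    # (14-13)
print(mean_hat, R_hat, R_theory)
```

Both estimates should agree with (14-12) and (14-13) up to Monte Carlo error of order 1/√N.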
Stationary Stochastic Processes
Stationary processes exhibit statistical properties that are invariant to shifts of the time index. Thus, for example, second-order
stationarity implies that the statistical properties of the pairs
{X(t1) , X(t2) } and {X(t1+c) , X(t2+c)} are the same for any c.
Similarly first-order stationarity implies that the statistical properties
of X(ti) and X(ti+c) are the same for any c.
In strict terms, the statistical properties are governed by the
joint probability density function. Hence a process is nth-order
Strict-Sense Stationary (S.S.S) if
f_X(x1, x2, …, xn; t1, t2, …, tn) ≡ f_X(x1, x2, …, xn; t1+c, t2+c, …, tn+c)   (14-14)

for any c, where the left side represents the joint density function of the random variables X1 = X(t1), X2 = X(t2), …, Xn = X(tn), and the right side corresponds to the joint density function of the random variables X1′ = X(t1+c), X2′ = X(t2+c), …, Xn′ = X(tn+c). A process X(t) is said to be strict-sense stationary if (14-14) is true for all ti, i = 1, 2, …, n, for every n = 1, 2, …, and for any c.
For a first-order strict-sense stationary process, from (14-14) we have

f_X(x, t) ≡ f_X(x, t+c)   (14-15)

for any c, and hence the above integral reduces to

σ_z² = (1/(2T)²) ∫_{−2T}^{2T} R_XX(τ)(2T − |τ|) dτ = (1/2T) ∫_{−2T}^{2T} R_XX(τ)(1 − |τ|/2T) dτ.   (14-24)
Systems with Stochastic Inputs
A deterministic system¹ transforms each input waveform X(t, ξ_i) into an output waveform Y(t, ξ_i) = T[X(t, ξ_i)] by operating only on the time variable t. Thus a set of realizations at the input corresponding to a process X(t) generates a new set of realizations {Y(t, ξ)} at the output associated with a new process Y(t).

[Fig. 14.3: an input realization X(t, ξ_i) applied to the system T[·], producing the output realization Y(t, ξ_i).]

Our goal is to study the output process statistics in terms of the input process statistics and the system function.

¹ A stochastic system, on the other hand, operates on both the variables t and ξ.
Deterministic Systems
Linear Time-Invariant (LTI) systems: for an LTI system with impulse response h(t), the output is

Y(t) = h(t) * X(t) = ∫_{−∞}^{∞} h(t−τ) X(τ) dτ = ∫_{−∞}^{∞} h(τ) X(t−τ) dτ.
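On a discrete grid the convolution integral becomes a Riemann sum. The sketch below filters one input realization; the exponential impulse response and cosine input are illustrative assumptions, not taken from the text:

```python
import numpy as np

dt = 0.01                                # grid step for the Riemann-sum approximation
tau = np.arange(0.0, 8.0, dt)
h = np.exp(-tau)                         # assumed causal impulse response h(t) = e^{-t}, t >= 0
t = np.arange(0.0, 20.0, dt)
x = np.cos(2 * np.pi * t)                # one (deterministic) input realization

# Y(t) = integral of h(tau) X(t - tau) d tau, approximated by a discrete convolution
y = np.convolve(x, h)[: len(t)] * dt

# after the transient, the amplitude should approach |H(j*2*pi)| = 1/sqrt(1 + (2*pi)^2)
amp = np.abs(y[len(t) // 2 :]).max()
print(amp, 1 / np.sqrt(1 + (2 * np.pi) ** 2))
```

The same `np.convolve` call applies unchanged to a noisy realization of a stochastic input.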
Memoryless Systems: The output Y(t) in this case depends only on the present value of the input X(t), i.e.,

Y(t) = g{X(t)}.   (14-25)

Theorem: If X(t) is a zero-mean stationary Gaussian process and Y(t) = g{X(t)}, where g(·) is a memoryless nonlinearity, then R_XY(τ) = η R_XX(τ), where η = E{g′(X)}.

Proof:

R_XY(τ) = E{X(t+τ) Y(t)} = E[X(t+τ) g{X(t)}] = ∫∫ x1 g(x2) f_{X1X2}(x1, x2) dx1 dx2,   (14-27)

where X1 = X(t+τ) and X2 = X(t) are zero-mean jointly Gaussian with covariance matrix

A = [ R_XX(0)   R_XX(τ)
      R_XX(τ)   R_XX(0) ] = LL*,
where L is an upper-triangular factor matrix with positive diagonal entries, i.e.,

L = [ l11  l12
      0    l22 ].

Consider the transformation

Z = L⁻¹X = (Z1, Z2)ᵀ,  z = L⁻¹x = (z1, z2)ᵀ,

so that

E{ZZ*} = L⁻¹ E{XX*} (L*)⁻¹ = L⁻¹ A (L*)⁻¹ = I,

i.e., Z1 and Z2 are zero-mean, unit-variance independent Gaussian random variables. Moreover,

x* A⁻¹ x = z* L* A⁻¹ L z = z* z = z1² + z2².

The Jacobian of the transformation is given by

|J| = |L⁻¹| = |A|^{−1/2}.
Hence substituting these into (14-27), we obtain

R_XY(τ) = ∫∫ (l11 z1 + l12 z2) g(l22 z2) (1/2π) e^{−z1²/2} e^{−z2²/2} dz1 dz2
        = l11 ∫ z1 f_{z1}(z1) dz1 ∫ g(l22 z2) f_{z2}(z2) dz2 + l12 ∫ z2 g(l22 z2) f_{z2}(z2) dz2,

where f_{zi}(zi) = (1/√(2π)) e^{−zi²/2}. The first term vanishes since E{Z1} = 0, and with the substitution u = l22 z2 the second term becomes

R_XY(τ) = (l12/l22²) ∫ u g(u) (1/√(2π)) e^{−u²/(2 l22²)} du.

Let f_U(u) = (1/(√(2π) l22)) e^{−u²/(2 l22²)}, the N(0, l22²) density. Since

df_U(u)/du = −(u/l22²) f_U(u),

we have u f_U(u) du = −l22² df_U(u), and integration by parts gives

R_XY(τ) = (l12/l22) ∫ u g(u) f_U(u) du = l12 l22 ∫ g′(u) f_U(u) du.

But from A = LL*, l22² = R_XX(0) and l12 l22 = R_XX(τ), so that U ~ N(0, R_XX(0)) has the same distribution as X(t). Hence

R_XY(τ) = R_XX(τ) ∫ g′(u) f_U(u) du = η R_XX(τ),  η = E{g′(X)},

which completes the proof.
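This result (often called Bussgang's theorem) is easy to verify by Monte Carlo. The sketch below uses the cubic nonlinearity g(x) = x³, for which η = E{g′(X)} = 3 R_XX(0); the specific correlation values are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 400_000
r0, rtau = 1.0, 0.6              # assumed R_XX(0) and R_XX(tau) at some fixed lag tau
cov = [[r0, rtau], [rtau, r0]]   # covariance matrix A of (X(t+tau), X(t))
x1, x2 = rng.multivariate_normal([0.0, 0.0], cov, size=N).T

g = lambda u: u**3               # memoryless nonlinearity
eta = 3.0 * r0                   # eta = E{g'(X)} = E{3 X^2} = 3 R_XX(0)

R_XY_hat = (x1 * g(x2)).mean()   # estimate of E{X(t+tau) g[X(t)]}
print(R_XY_hat, eta * rtau)      # theorem: R_XY(tau) = eta * R_XX(tau)
```

Any smooth g(·) with a computable E{g′(X)} can be substituted in the same way.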
Linear Systems: for an LTI system L[·] with impulse response h(t), an arbitrary input realization X(t) produces the output (see Fig. 14.6)

Y(t) = ∫_{−∞}^{∞} h(t−τ) X(τ) dτ = ∫_{−∞}^{∞} h(τ) X(t−τ) dτ.   (14-31)

Eq. (14-31) follows by expressing X(t) as

X(t) = ∫_{−∞}^{∞} X(τ) δ(t−τ) dτ   (14-32)

and applying (14-28) and (14-30) to Y(t) = L{X(t)}. Thus

Y(t) = L{X(t)} = L{ ∫_{−∞}^{∞} X(τ) δ(t−τ) dτ }
     = ∫_{−∞}^{∞} X(τ) L{δ(t−τ)} dτ    (by linearity)
     = ∫_{−∞}^{∞} X(τ) h(t−τ) dτ = ∫_{−∞}^{∞} h(τ) X(t−τ) dτ.    (by time-invariance)   (14-33)
Output Statistics: Using (14-33), the mean of the output process is given by

μ_Y(t) = E{Y(t)} = E{ ∫_{−∞}^{∞} X(τ) h(t−τ) dτ } = ∫_{−∞}^{∞} μ_X(τ) h(t−τ) dτ = μ_X(t) * h(t).   (14-34)

[Fig. 14.7: (a) μ_X(t) → h(t) → μ_Y(t); (b) R_XX(t1, t2) → h*(t2) → R_XY(t1, t2) → h(t1) → R_YY(t1, t2).]
In particular if X(t) is wide-sense stationary, then we have μ_X(t) ≡ μ_X, so that from (14-34)

μ_Y(t) = μ_X ∫_{−∞}^{∞} h(τ) dτ = μ_X c, a constant.   (14-38)

[Fig. 14.8: (a) a wide-sense stationary process X(t) into an LTI system h(t) gives a wide-sense stationary output Y(t); (b) a strict-sense stationary input gives a strict-sense stationary output (see Text for proof); (c) a Gaussian input (also stationary) into a linear system gives a Gaussian output (also stationary).]
White Noise Process: W(t) is said to be a white noise process if

R_WW(t1, t2) = q(t1) δ(t1 − t2),   (14-42)

i.e., E[W(t1) W*(t2)] = 0 unless t1 = t2. W(t) is said to be wide-sense stationary (w.s.s.) white noise if E[W(t)] = constant and

R_WW(t1, t2) = q δ(t1 − t2) = q δ(τ).   (14-43)

If w.s.s. white noise is applied to an LTI system with impulse response h(t), the output autocorrelation becomes R_YY(τ) = q ρ(τ), where

ρ(τ) = h(τ) ⊗ h*(−τ) = ∫_{−∞}^{∞} h(α+τ) h*(α) dα.   (14-46)
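A discrete-time sketch of this filtered-white-noise result: unit-strength white noise through a short FIR filter, whose output autocorrelation should match q·Σ_k h(k) h(k+n), the discrete analogue of (14-46). The filter taps are an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(2)
q = 1.0                                   # white-noise strength: R_WW(n) = q*delta(n)
w = rng.normal(0.0, np.sqrt(q), 500_000)  # discrete w.s.s. white noise
h = np.array([1.0, 0.5, 0.25])            # assumed FIR impulse response
y = np.convolve(w, h)[: len(w)]           # filtered output

# discrete analogue of (14-46): rho(n) = sum_k h(k) h(k+n)
rho0 = np.dot(h, h)                       # rho(0) = 1 + 0.25 + 0.0625 = 1.3125
rho1 = np.dot(h[:-1], h[1:])              # rho(1) = 1*0.5 + 0.5*0.25 = 0.625

R0_hat = (y * y).mean()                   # estimate of R_YY(0) = q*rho(0)
R1_hat = (y[:-1] * y[1:]).mean()          # estimate of R_YY(1) = q*rho(1)
print(R0_hat, q * rho0, R1_hat, q * rho1)
```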
The probability of an upcrossing of the zero level in (t, t+Δt) is given by (see Fig. 14.11)

P ≈ ∫_0^{∞} ∫_{−x2Δt}^{0} f_{X1X2}(x1, x2) dx1 dx2 = ∫_0^{∞} f_{X2}(x2) dx2 ∫_{−x2Δt}^{0} f_{X1}(x1) dx1,   (14-53)

where X1 = X(t) and X2 = X′(t) are independent random variables for a stationary Gaussian process X(t).
Carrying out the integrals with X1 ~ N(0, R_XX(0)) and X2 ~ N(0, −R_XX″(0)), this reduces to

P ≈ (Δt/2π) √(−R_XX″(0)/R_XX(0))   (14-55)

[where we have made use of (5-78), Text]. There is an equal probability for downcrossings, and hence the total probability for crossing the zero line in an interval (t, t+Δt) equals μ0 Δt, where

μ0 = (1/π) √(−R_XX″(0)/R_XX(0)) ≥ 0.   (14-56)

It follows that in a long interval T, there will be approximately μ0 T crossings of the mean value. If −R_XX″(0) is large, then the autocorrelation function R_XX(τ) decays more rapidly as τ moves away from zero, implying a large random variation around the origin (mean value) for X(t), and the likelihood of zero crossings should increase with increase in −R_XX″(0), agreeing with (14-56).
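For the random-phase cosine of Example 14.2, R_XX(τ) = (a²/2) cos ω0τ, so −R_XX″(0)/R_XX(0) = ω0² and (14-56) predicts μ0 = ω0/π crossings per unit time. A minimal sketch counting sign changes on a fine grid (parameter values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
a, w0 = 1.0, 2 * np.pi            # with w0 = 2*pi, (14-56) gives mu0 = w0/pi = 2
T = 1000.0
t = np.arange(0.0, T, 1e-3)
phi = rng.uniform(0.0, 2 * np.pi)
x = a * np.cos(w0 * t + phi)      # one realization of Example 14.2

crossings = np.count_nonzero(np.diff(np.sign(x)))  # sign changes = zero crossings
rate = crossings / T              # empirical crossings per unit time
print(rate, w0 / np.pi)
```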
Discrete Time Stochastic Processes: A discrete-time stochastic process Xn = X(nT) is a sequence of random variables. The mean, autocorrelation and auto-covariance functions of a discrete-time process are given by

μ_n = E{X(nT)},   (14-57)

R(n1, n2) = E{X(n1 T) X*(n2 T)}   (14-58)

and

C(n1, n2) = R(n1, n2) − μ_{n1} μ*_{n2},   (14-59)

respectively. As before, the strict-sense stationarity and wide-sense stationarity definitions apply here also. For example, X(nT) is wide-sense stationary if

E{X(nT)} = μ, a constant,   (14-60)

and

E[X{(k+n)T} X*{kT}] = R(n) = r_n = r*_{−n},   (14-61)
i.e., R(n1, n2) = R(n1 − n2) = R*(n2 − n1). The positive-definite property of the autocorrelation sequence in (14-8) can be expressed in terms of certain Hermitian-Toeplitz matrices as follows:

Theorem: A sequence {r_n} forms an autocorrelation sequence of a wide-sense stationary stochastic process if and only if every Hermitian-Toeplitz matrix T_n given by

T_n = [ r0       r1       r2    …   rn
        r1*      r0       r1    …   r(n−1)
        ⋮        ⋮        ⋮         ⋮
        rn*   r(n−1)*     …   r1*   r0 ]  = T_n*   (14-62)

is non-negative (positive) definite for n = 0, 1, 2, ….

Proof: Let a = [a0, a1, …, an]^T represent an arbitrary constant vector. Then

a* T_n a = Σ_{i=0}^{n} Σ_{k=0}^{n} a_i* a_k r_{k−i} = E{ | Σ_{k=0}^{n} a_k X(kT) |² } ≥ 0.   (14-64)

From (14-64), if X(nT) is a wide-sense stationary stochastic process then T_n is a non-negative definite matrix for every n = 0, 1, 2, …. Similarly the converse also follows from (14-64). (See section 9.4, Text.)
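The theorem can be checked numerically for a known valid autocorrelation sequence. Here r_k = a^{|k|}, the AR(1) sequence of (14-77) with an arbitrarily chosen a, whose Toeplitz matrices should have no negative eigenvalues:

```python
import numpy as np

a, n = 0.8, 6
r = a ** np.arange(n + 1)        # r_k = a^{|k|}: a valid autocorrelation sequence
# build the (n+1) x (n+1) Hermitian-Toeplitz matrix T_n of (14-62)
T = np.array([[r[abs(i - k)] for k in range(n + 1)] for i in range(n + 1)])

eigs = np.linalg.eigvalsh(T)     # real eigenvalues of the symmetric matrix
print(eigs.min())                # non-negative, as the theorem requires
```

Replacing r with a sequence that is not an autocorrelation (e.g., r = [1, 0.9, 0.9, …]) would produce a negative eigenvalue, illustrating the "only if" direction.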
or

H(z) = Σ_{k=0}^{∞} h(k) z^{−k} = X(z)/W(z) = (b0 + b1 z^{−1} + b2 z^{−2} + … + bq z^{−q}) / (1 + a1 z^{−1} + a2 z^{−2} + … + ap z^{−p}) ≜ B(z)/A(z)   (14-70)

represents the transfer function of the associated system response {h(n)} in Fig. 14.12, so that

X(n) = Σ_{k=0}^{∞} h(n−k) W(k).   (14-71)
Notice that the transfer function H(z) in (14-70) is rational with p poles
and q zeros that determine the model order of the underlying system.
From (14-68), the output undergoes regression over p of its previous
values, and at the same time a moving average based on W(n), W(n−1), …, W(n−q) of the input over (q + 1) values is added to it, thus
generating an Auto Regressive Moving Average (ARMA (p, q))
process X(n). Generally the input {W(n)} represents a sequence of uncorrelated random variables of zero mean and constant variance σ_W², so that

R_WW(n) = σ_W² δ(n).   (14-72)
If in addition, {W(n)} is normally distributed then the output {X(n)}
also represents a strict-sense stationary normal process.
If q = 0, then (14-68) represents an AR(p) process (all-pole process), and if p = 0, then (14-68) represents an MA(q) process (all-zero process). Next, we shall discuss AR(1) and AR(2) processes through explicit calculations.
AR(1) process: An AR(1) process has the form (see (14-68))

X(n) = a X(n−1) + W(n),   (14-73)

and from (14-70) the corresponding system transfer function is

H(z) = 1/(1 − a z^{−1}) = Σ_{n=0}^{∞} a^n z^{−n},  |a| < 1,   (14-74)

so that h(n) = a^n, n ≥ 0. The normalized output autocorrelation is given by

ρ_X(n) = R_XX(n)/R_XX(0) = a^{|n|},  |n| ≥ 0.   (14-77)
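A minimal simulation sketch of (14-73) and (14-77); the value of a and the noise seed are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(4)
a, N = 0.7, 500_000
w = rng.normal(size=N)           # unit-variance white input W(n)
x = np.empty(N)
x[0] = w[0]
for n in range(1, N):
    x[n] = a * x[n - 1] + w[n]   # X(n) = a X(n-1) + W(n), eq. (14-73)
x = x[1000:]                     # discard the start-up transient

r0 = (x * x).mean()
rho = np.array([(x[:-k] * x[k:]).mean() / r0 for k in range(1, 4)])
print(rho, [a, a**2, a**3])      # (14-77): rho(n) = a^{|n|}
```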
It is instructive to compare the AR(1) model above with the one obtained by superimposing a random component on it, which may be an error term associated with observing a first-order AR process X(n). Thus

Y(n) = X(n) + V(n),   (14-78)

where X(n) ~ AR(1) as in (14-73), and V(n) is an uncorrelated random sequence with zero mean and variance σ_V² that is also uncorrelated with {W(n)}. In that case

ρ_Y(n) = R_YY(n)/R_YY(0) = { 1,          n = 0
                             c a^{|n|},  n = ±1, ±2, …   (14-80)

where

c = σ_W² / (σ_W² + σ_V²(1 − a²)) < 1.   (14-81)
Eqs. (14-77) and (14-80) demonstrate the effect of superimposing an error sequence on an AR(1) model. For non-zero lags, the autocorrelation of the observed sequence {Y(n)} is reduced by a constant factor compared to the original process {X(n)}. From (14-78), the superimposed error sequence V(n) only affects the zero-lag term R_YY(0); the normalizations ρ_X(0) = ρ_Y(0) = 1 are unchanged.
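The reduction factor c in (14-80)-(14-81) can likewise be checked by simulation; σ_W², σ_V², and a below are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(5)
a, sw2, sv2, N = 0.7, 1.0, 0.5, 500_000
w = rng.normal(0.0, np.sqrt(sw2), N)
x = np.empty(N)
x[0] = w[0]
for n in range(1, N):
    x[n] = a * x[n - 1] + w[n]            # X(n) ~ AR(1), eq. (14-73)
y = x + rng.normal(0.0, np.sqrt(sv2), N)  # Y(n) = X(n) + V(n), eq. (14-78)
y = y[1000:]                              # discard the start-up transient

c = sw2 / (sw2 + sv2 * (1 - a**2))        # reduction factor, eq. (14-81)
r0 = (y * y).mean()
rho1 = (y[:-1] * y[1:]).mean() / r0       # estimate of rho_Y(1)
print(rho1, c * a)                        # (14-80): rho_Y(1) = c * a
```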
With h(n) = b1 λ1^n + b2 λ2^n, n ≥ 0 (see (14-85)), the output autocorrelation becomes

R_XX(n) = σ_W² Σ_{k=0}^{∞} h*(n+k) h(k)
        = σ_W² [ |b1|² (λ1*)^n / (1 − |λ1|²) + b1* b2 (λ1*)^n / (1 − λ1* λ2)
                 + b1 b2* (λ2*)^n / (1 − λ1 λ2*) + |b2|² (λ2*)^n / (1 − |λ2|²) ],   (14-89)
where we have made use of (14-85). From (14-89), the normalized output autocorrelations may be expressed as

ρ_X(n) = R_XX(n)/R_XX(0) = c1 (λ1*)^n + c2 (λ2*)^n,   (14-90)

where c1 and c2 are appropriate constants.
Damped Exponentials: When the second-order system in (14-83)-(14-85) is real and corresponds to a damped exponential response, the poles are complex conjugates, which gives a1² − 4a2 < 0 in (14-83). Thus

λ1 = r e^{jθ},  λ2 = λ1*,  r ≤ 1.   (14-91)

In that case c1 = c2* = c e^{jα} in (14-90), so that the normalized correlations there reduce to

ρ_X(n) = 2 Re{c1 (λ1*)^n} = 2c r^n cos(nθ − α).   (14-92)

But from (14-86),

λ1 + λ2 = 2r cos θ = −a1,  r² = a2 ≤ 1,   (14-93)
and hence 2r sin θ = √(4a2 − a1²) ≥ 0, which gives

tan θ = −√(4a2 − a1²)/a1.   (14-94)

Also from (14-88),

ρ_X(1) = −a1 ρ_X(0) − a2 ρ_X(−1) = −a1 − a2 ρ_X(1),

so that

ρ_X(1) = −a1/(1 + a2) = 2c r cos(θ − α),   (14-95)

where the latter form is obtained from (14-92) with n = 1. But ρ_X(0) = 1 in (14-92) gives

2c cos α = 1,  or  c = 1/(2 cos α).   (14-96)

Substituting (14-96) into (14-92) and (14-95), we obtain the normalized output autocorrelations to be
ρ_X(n) = a2^{n/2} cos(nθ − α)/cos α,  a2 ≤ 1,   (14-97)

where α satisfies

cos(θ − α)/cos α = (−a1/(1 + a2)) · (1/√a2).   (14-98)

Thus the normalized autocorrelations of a damped second-order system with real coefficients subject to random uncorrelated impulses satisfy (14-97).
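A sketch checking (14-97) against a simulated AR(2) process with complex poles; a1 = −1 and a2 = 0.5 are arbitrary values satisfying a1² − 4a2 < 0:

```python
import numpy as np

rng = np.random.default_rng(6)
a1, a2, N = -1.0, 0.5, 400_000            # a1^2 - 4 a2 = -1 < 0: complex-conjugate poles
w = rng.normal(size=N)
x = np.zeros(N)
for n in range(2, N):
    x[n] = -a1 * x[n - 1] - a2 * x[n - 2] + w[n]   # AR(2) recursion from (14-68)
x = x[1000:]                              # discard the start-up transient

r0 = (x * x).mean()
rho_hat = np.array([(x[:-k] * x[k:]).mean() / r0 for k in range(1, 5)])

# theta from (14-93)/(14-94), alpha from (14-98), then rho(n) from (14-97)
theta = np.arctan2(np.sqrt(4 * a2 - a1**2), -a1)
rhs = -a1 / ((1 + a2) * np.sqrt(a2))      # right-hand side of (14-98)
alpha = np.arctan((rhs - np.cos(theta)) / np.sin(theta))
n = np.arange(1, 5)
rho_theory = a2 ** (n / 2) * np.cos(n * theta - alpha) / np.cos(alpha)
print(rho_hat, rho_theory)
```

For these values θ = π/4, and ρ(1) from the formula coincides with the Yule-Walker value −a1/(1 + a2).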
Consider the Hankel matrices formed from the impulse response:

H_n = [ h0      h1      h2     …  hn
        h1      h2      h3     …  h(n+1)
        ⋮       ⋮       ⋮         ⋮
        hn   h(n+1)  h(n+2)    …  h(2n) ],   (14-100)

i.e., in the case of rational systems, for all sufficiently large n the Hankel matrices H_n in (14-100) all have the same rank. Further, the input w(n) is uncorrelated with the past outputs:

E{w(n) x*(n−k)} = 0,  k ≥ 1.   (14-106)
Together with (14-68), we obtain

r_i = E{x(n) x*(n−i)}
    = −Σ_{k=1}^{p} a_k E{x(n−k) x*(n−i)} + Σ_{k=0}^{q} b_k E{w(n−k) x*(n−i)}
    = −Σ_{k=1}^{p} a_k r_{i−k} + Σ_{k=0}^{q} b_k E{w(n−k) x*(n−i)},   (14-107)

so that, using (14-106),

Σ_{k=1}^{p} a_k r_{i−k} + r_i ≠ 0,  i ≤ q,   (14-108)

and

Σ_{k=1}^{p} a_k r_{i−k} + r_i = 0,  i ≥ q + 1,   (14-109)
where

D_k = [ r0      r1      r2     …  rk
        r1      r2      r3     …  r(k+1)
        ⋮       ⋮       ⋮         ⋮
        rk   r(k+1)  r(k+2)    …  r(2k) ].   (14-111)