Random Processes
7.1 Correlated Random Variables
A random variable X takes on numerical values as the result of an experiment. Suppose that the experiment also produces another random variable, Y. What can we say about the relationship between X and Y?
One of the best ways to visualize the possible relationship is to plot the
(X, Y ) pair that is produced by several trials of the experiment. An example
of correlated samples is shown in Figure 7.1. The points fall within a somewhat elliptical contour, slanting downward, and centered at approximately
(4,0). The points were created with a random number generator using a
correlation coefficient of ρ = -0.5, E[X] = 4, and E[Y] = 0. The mean values
are the coordinates of the cluster center. The negative correlation coefficient
indicates that an increase in X above its mean value generally corresponds
to a decrease in Y below its mean value. This tendency makes it possible to
make predictions about the value that one variable will take given the value
of the other, something which can be useful in many settings.
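A scatter plot like Figure 7.1 is easy to reproduce. Below is a minimal Python sketch (not from the original text) that draws samples with the stated parameters ρ = -0.5, E[X] = 4, E[Y] = 0; the unit variances are an assumption, since the text does not specify σ_x and σ_y.

import numpy as np

rng = np.random.default_rng(0)
rho, mx, my, sx, sy = -0.5, 4.0, 0.0, 1.0, 1.0   # sx, sy assumed
cov = [[sx**2, rho*sx*sy],                        # covariance matrix of (X, Y)
       [rho*sx*sy, sy**2]]
xy = rng.multivariate_normal([mx, my], cov, size=500)
print(np.corrcoef(xy[:, 0], xy[:, 1])[0, 1])      # sample correlation, near -0.5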
The joint behavior of X and Y is fully captured in the joint probability
distribution. If the random variables are continuous then it is appropriate to
use a probability density function, fXY (x, y). We will presume that the pdf
is known or can be estimated. Computation of the usual expected values is
then straightforward.
E[X^m Y^n] = \iint x^m y^n f_{XY}(x, y)\,dx\,dy \qquad (7.1)
Figure 7.1: Scatter plot of random variables X and Y. These random variables have a correlation coefficient of ρ = -0.5.
7.1.1 Covariance Function
The covariance of X and Y is

C(X, Y) = E[(X - \mu_X)(Y - \mu_Y)] = \iint (x - \mu_X)(y - \mu_Y) f_{XY}(x, y)\,dx\,dy \qquad (7.2)

Expanding the product shows that

C(X, Y) = E[XY] - \mu_X \mu_Y \qquad (7.3)

The correlation coefficient is the covariance normalized by the standard deviations,

\rho = \frac{C(X, Y)}{\sigma_X \sigma_Y} \qquad (7.4)

It lies in the range -1 \le \rho \le 1, and its sign indicates whether the variables are
correlated or anti-correlated.
7.1.2 Autocorrelation Function
The correlation of X and Y is¹

R(X, Y) = E[XY] = \iint x y f_{XY}(x, y)\,dx\,dy \qquad (7.5)

Unlike the covariance, it retains the mean values in the calculation. The random variables are orthogonal if R(X, Y) = 0.
7.1.3 Jointly Gaussian Random Variables
Random variables X and Y are jointly Gaussian if their joint density has the form

f_{XY}(x, y) = \frac{1}{2\pi \sigma_x \sigma_y \sqrt{1-\rho^2}} \exp\left( -\frac{1}{2(1-\rho^2)} \left[ \frac{(x-\mu_x)^2}{\sigma_x^2} - \frac{2\rho(x-\mu_x)(y-\mu_y)}{\sigma_x \sigma_y} + \frac{(y-\mu_y)^2}{\sigma_y^2} \right] \right) \qquad (7.6)
The contours of equal probability are ellipses, as shown in Figure 7.2. The probability changes much more rapidly along the minor axis of the ellipses than along the major axis. The orientation of the elliptical contours is along the line y = x if ρ > 0 and along the line y = -x if ρ < 0. The contours are circles, and the variables are uncorrelated, if ρ = 0. The center of the ellipse is (μ_x, μ_y).
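Equation 7.6 can be checked numerically. The following sketch implements the density directly; evaluating it on a grid reproduces the elliptical contours of Figure 7.2. The particular parameter values are placeholders chosen to match Figure 7.1.

import numpy as np

def bivariate_gaussian_pdf(x, y, mx, my, sx, sy, rho):
    # Equation 7.6: jointly Gaussian density with correlation coefficient rho
    q = ((x - mx)**2 / sx**2
         - 2*rho*(x - mx)*(y - my) / (sx*sy)
         + (y - my)**2 / sy**2)
    return np.exp(-q / (2*(1 - rho**2))) / (2*np.pi*sx*sy*np.sqrt(1 - rho**2))

# evaluate on a grid; level sets of f are ellipses centered at (mx, my)
xs, ys = np.meshgrid(np.linspace(0, 8, 201), np.linspace(-4, 4, 201))
f = bivariate_gaussian_pdf(xs, ys, 4.0, 0.0, 1.0, 1.0, -0.5)
print(f.sum() * (8/200) * (8/200))   # Riemann sum over the grid, close to 1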
7.2 Linear Estimation
It is often the case that one would like to estimate or predict the value of
one random variable based on an observation of the other. If the random
variables are correlated then this should yield a better result, on the average,
than just guessing. We will see this is indeed the case.
¹Be careful not to confuse the term autocorrelation function with the correlation coefficient.
Suppose we form the linear estimate

\hat{Y} = aX + b \qquad (7.7)

and choose the parameters a and b to minimize the mean-squared error

\varepsilon = E[(Y - \hat{Y})^2] \qquad (7.8)

Setting the partial derivatives with respect to a and b to zero,

\frac{\partial \varepsilon}{\partial a} = -2E\left[(Y - \hat{Y})\frac{\partial \hat{Y}}{\partial a}\right] = -2E[(Y - \hat{Y})X] = 0 \qquad (7.9)

\frac{\partial \varepsilon}{\partial b} = -2E\left[(Y - \hat{Y})\frac{\partial \hat{Y}}{\partial b}\right] = -2E[(Y - \hat{Y})] = 0 \qquad (7.10)
Expanding these conditions gives the pair of equations

E[XY] = aE[X^2] + bE[X] \qquad (7.11)

E[Y] = aE[X] + b \qquad (7.12)
These can be solved for a and b in terms of the expected values. The expected values can themselves be estimated from the sample set.

a = \frac{\mathrm{cov}(X, Y)}{\mathrm{var}(X)} \qquad (7.13)

b = E[Y] - \frac{\mathrm{cov}(X, Y)}{\mathrm{var}(X)} E[X] \qquad (7.14)

Substituting into \hat{Y} = aX + b gives the estimator

\hat{Y} = E[Y] + \frac{\mathrm{cov}(X, Y)}{\mathrm{var}(X)} (X - E[X]) \qquad (7.15)
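Equations 7.13–7.15 translate directly into a sample-based estimator. A minimal sketch, with synthetic data standing in for an observed sample set:

import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(4.0, 1.0, 10000)
y = -0.5*(x - 4.0) + rng.normal(0.0, np.sqrt(0.75), 10000)   # rho = -0.5

# Equations 7.13 and 7.14 with expected values replaced by sample estimates
a = np.cov(x, y)[0, 1] / np.var(x)
b = np.mean(y) - a*np.mean(x)
y_hat = a*x + b                # Equation 7.15: linear estimate of Y given X
print(a, b)                    # a near -0.5, b near 2.0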
7.3 Random Processes

A random process X(t, e) assigns a function of time to each outcome e of an underlying random experiment.² For a fixed time t_1, X(t_1, e) is an ordinary random variable.
7.3.1 Averages
Because X(t1 , e) is a random variable that represents the set of samples across
the ensemble at time t1 , we can make use of all of the concepts that have
been developed for random variables.
Moments
For example, if it has a probability density function³ f_X(x; t_1), then the moments are

m_n(t_1) = E[X^n(t_1)] = \int x^n f_X(x; t_1)\,dx \qquad (7.16)
²We will often suppress the display of the variable e and write X(t) for a continuous-time RP and X[n] or X_n for a discrete-time RP. Always go back to the basic definition when you need to be sure that an idea is clear.
³We will do the development in terms of random variables in which X has a continuous range of values. If X has a discrete set of possible values, then the integrals are replaced by sums, etc.
We need the notation f_X(x; t_1) because it is very possible that the probability density will depend upon the time the samples are taken. The mean value is \mu_X(t_1) = m_1(t_1), which can be a function of time. The central moments are

E[(X(t_1) - \mu_X(t_1))^n] = \int (x - \mu_X(t_1))^n f_X(x; t_1)\,dx \qquad (7.17)
Correlation
The numbers X(t_1, e) and X(t_2, e) are samples from the same time function at different times. This is a pair of random variables, which we could write conveniently as a doublet (X_1, X_2). It is described by a joint probability density function⁴ f(x_1, x_2; t_1, t_2). The notation includes the times because the result surely can depend on when the samples are taken.

From the joint density function one can compute the marginal densities, conditional probabilities, and other quantities that may be of interest. Measures of particular interest are the correlation and the covariance. The covariance is⁵

C(t_1, t_2) = E[(X_1 - \mu_1)(X_2 - \mu_2)] = \iint (x_1 - \mu_1)(x_2 - \mu_2) f(x_1, x_2; t_1, t_2)\,dx_1\,dx_2 \qquad (7.18)
The correlation function is⁶

R(t_1, t_2) = E[X_1 X_2] = \iint x_1 x_2 f(x_1, x_2; t_1, t_2)\,dx_1\,dx_2 \qquad (7.19)

so that the covariance and correlation functions are related by

C(t_1, t_2) = R(t_1, t_2) - \mu_1 \mu_2 \qquad (7.20)

Note that both the covariance and correlation functions are symmetric in t_1 and t_2: C(t_1, t_2) = C(t_2, t_1) and R(t_1, t_2) = R(t_2, t_1). The average power in the process at time t is represented by

R(t, t) = E[X^2(t)] \qquad (7.21)

and C(t, t) represents the power in the fluctuation about the mean value.
⁴Here we will not use subscripts on the function, as in f_{X_1 X_2}(x_1, x_2; t_1, t_2), to avoid becoming overly baroque. The subscripts are implied by the arguments. Here we see a reason why this is such a common practice in probability publications.
⁵Although not explicitly shown, the mean values can be functions of t_1 and t_2.
⁶This correlation function is called the autocorrelation when it is necessary to clearly indicate that the variable is being correlated with itself rather than with another variable, which is called the cross-correlation.
As an example, let X(t) be a Poisson counting process with average event rate λ, so that the probability of observing n events in an interval of length t is

P[X(t) = n] = \frac{(\lambda t)^n e^{-\lambda t}}{n!} \qquad (7.22)

The mean and variance are

E[X(t)] = \lambda t, \quad \mathrm{var}[X(t)] = \lambda t \qquad (7.23)

from which we readily find that the average power in the function X(t) is

E[X^2(t)] = \lambda t + \lambda^2 t^2 \qquad (7.24)

To find the autocorrelation, assume t_2 \ge t_1 and use the fact that the increment X(t_2) - X(t_1) is independent of X(t_1):

R(t_1, t_2) = E[X(t_1)(X(t_2) - X(t_1))] + E[X^2(t_1)] \qquad (7.25)

= \lambda t_1 \cdot \lambda (t_2 - t_1) + \lambda t_1 + \lambda^2 t_1^2 \qquad (7.26)

= \lambda t_1 + \lambda^2 t_1 t_2 \quad \text{for } t_2 \ge t_1 \qquad (7.27)

Because R(t_1, t_2) = R(t_2, t_1) we can get the result for the case t_1 \ge t_2 by interchanging variables. The final result is

R(t_1, t_2) = \begin{cases} \lambda t_2 + \lambda^2 t_1 t_2, & t_1 \ge t_2 \\ \lambda t_1 + \lambda^2 t_1 t_2, & t_2 \ge t_1 \end{cases} \qquad (7.28)
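Equation 7.28 can be verified by simulation. The sketch below samples X(t_1) and X(t_2) jointly, using the independent-increments property of the Poisson process; the rate and times are arbitrary choices.

import numpy as np

rng = np.random.default_rng(2)
lam, t1, t2, n = 1.5, 1.0, 2.5, 200000
x1 = rng.poisson(lam*t1, n)               # X(t1)
x2 = x1 + rng.poisson(lam*(t2 - t1), n)   # X(t2) = X(t1) + independent increment
print(np.mean(x1*x2))                      # simulated R(t1, t2)
print(lam*t1 + lam**2*t1*t2)               # Equation 7.27, since t2 >= t1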
The random telegraph signal is a related process that takes on only the values ±1, switching sign at the event times of a Poisson process of rate λ, with the initial condition X(0) = 1. An example is shown in Figure 7.4.

Figure 7.4: An example of a telegraph signal. The zero crossings are described by a Poisson process, with an average rate of λ = 1 per second.
Because X(0) = 1, the value at time t is determined by whether the number of sign changes in (0, t) is even or odd:

P[X(t) = 1] = e^{-\lambda t} \cosh \lambda t \qquad (7.29)

P[X(t) = -1] = e^{-\lambda t} \sinh \lambda t \qquad (7.30)

The expected value is then

E[X(t)] = P[X(t) = 1] - P[X(t) = -1] \qquad (7.31)

= e^{-\lambda t}(\cosh \lambda t - \sinh \lambda t) = e^{-2\lambda t} \qquad (7.32)
Note that the expected value decays toward x = 0 for large t. That is because the influence of knowing the value at t = 0 decays exponentially. The autocorrelation function is computed by finding R(t_1, t_2) = E[X(t_1)X(t_2)]. Let x_0 = 1 and x_1 = -1 denote the two values that X can attain. For the moment assume that t_2 \ge t_1. Then

R(t_1, t_2) = \sum_{j=0}^{1} \sum_{k=0}^{1} x_j x_k\, P[X(t_1) = x_j]\, P[X(t_2) = x_k \mid X(t_1) = x_j] \qquad (7.33)

The first term in each product is given above. To find the conditional probabilities we take note of the fact that the number of sign changes in the interval (t_1, t_2) obeys the same Poisson law. Hence, in a manner that is similar to the analysis above,

P[X(t_2) = 1 \mid X(t_1) = 1] = P[X(t_2) = -1 \mid X(t_1) = -1] = e^{-\lambda(t_2 - t_1)} \cosh \lambda(t_2 - t_1) \qquad (7.34)

P[X(t_2) = -1 \mid X(t_1) = 1] = P[X(t_2) = 1 \mid X(t_1) = -1] = e^{-\lambda(t_2 - t_1)} \sinh \lambda(t_2 - t_1) \qquad (7.35)
Hence

R(t_1, t_2) = e^{-2\lambda(t_2 - t_1)} \quad \text{for } t_2 \ge t_1 \qquad (7.36)

Interchanging variables for the case t_1 \ge t_2 and combining the results,

R(t_1, t_2) = e^{-2\lambda|t_2 - t_1|} \qquad (7.37)
The autocorrelation for the telegraph signal depends only upon the time difference, not the location of the time interval. We will see soon that this is a very important characteristic of stationary random processes. We can now remove condition (3) on the telegraph process. Let Y(t) = AX(t), where A is a random variable independent of X that takes on the values ±1 with equal probability. Then Y(0) will equal ±1 with equal probability, and the telegraph process will no longer have the restriction of being positive at t = 0. Since A and X are independent, the autocorrelation for Y(t) is given by

E[Y(t_1)Y(t_2)] = E[A^2]\,E[X(t_1)X(t_2)] = e^{-2\lambda|t_2 - t_1|} \qquad (7.38)

since E[A^2] = 1.
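Equation 7.38 admits the same kind of check. Since Y(t_1)Y(t_2) = (-1)^N, where N is the Poisson-distributed number of sign changes in the interval, the sample average of (-1)^N should approach e^{-2λτ}. A minimal sketch:

import numpy as np

rng = np.random.default_rng(3)
lam, tau, n = 1.0, 0.7, 200000
flips = rng.poisson(lam*tau, n)     # sign changes in an interval of length tau
print(np.mean((-1.0)**flips))       # simulated autocorrelation at lag tau
print(np.exp(-2*lam*tau))           # Equation 7.38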
7.3.2 Stationary Random Processes
The random telegraph is one example of a process that has at least some statistics that are independent of time. Random processes whose statistics do not depend on time are called stationary. In general, random processes can have joint statistics of any order. If the process is stationary, they are independent of time shift. We will explain this by building up the description.

The first-order statistics are described by the cumulative distribution function F(x; t). If the process is stationary then the distribution functions at times t = t_1 and t = t_2 will be identical. An example of a random process is shown in Figure 7.5. At any particular time, F(x; t) = P[X(t) \le x] is just the cumulative distribution function of the random variable X(t). The distribution function is independent of time for a stationary process. Given the distribution function, we can compute other first-order statistics such as the probability density.
The second-order statistics are described by the joint statistics of the random variables X(t_1) and X(t_2). The second-order cumulative distribution function is

F(x_1, x_2; t_1, t_2) = P[X(t_1) \le x_1, X(t_2) \le x_2] \qquad (7.39)

from which one may compute the density

f(x_1, x_2; t_1, t_2) = \frac{\partial^2 F(x_1, x_2; t_1, t_2)}{\partial x_1\, \partial x_2} \qquad (7.40)

When the process is stationary these functions depend on the time separation \tau = t_2 - t_1 but not on the location of the interval.
A process whose mean value is constant and whose autocorrelation function depends only on the time difference \tau = t_2 - t_1 is called wide-sense stationary (wss). For a wss process we write the autocorrelation as

R(\tau) = E[X(t)X(t + \tau)] \qquad (7.42)

The autocovariance is then

C(\tau) = E[(X(t) - \mu)(X(t + \tau) - \mu)] = R(\tau) - \mu^2 \qquad (7.43)

C(0) = E[(X(t) - \mu)^2] = \sigma^2 \qquad (7.44)
Two random processes X(t) and Y(t) are called jointly wide-sense stationary if each is wss and their cross-correlation depends only on \tau = t_2 - t_1. Then

R_{xy}(\tau) = E[X(t)Y(t + \tau)] \qquad (7.45)

C_{xy}(\tau) = R_{xy}(\tau) - \mu_x \mu_y \qquad (7.46)
7.3.3 Linear Systems
It is often the case that we need to describe the output of a system when the
input is a random process. This is generally the case with all communication
systems and sensor systems. This is clearly a very large class of systems.
Here we will deal only with linear systems because they lend themselves to
general results. Nonlinear systems must be treated with special analytical
tools on a case-by-case basis.
Let X(t, e) be a random process. For the moment we show the outcome e of the underlying random experiment that selects a particular function of time. An operation can be done on the time function to produce an output

Y(t, e) = L[X(t, e)] \qquad (7.47)

When L is a shift-invariant linear filter with impulse response h(t), the output is given by the convolution

Y(t) = \int h(s) X(t - s)\,ds \qquad (7.48)
Mean Value

The following result holds for any linear system, whether or not it is time-invariant, and whether or not the input process is stationary:

E[L X(t)] = L E[X(t)] = L[\mu_X(t)] \qquad (7.50)

When the process is stationary we find \mu_y = L[\mu_x], which is just the response to a constant of value \mu_x.
Output Autocorrelation

We would like to know the autocorrelation function R_{yy}(\tau) of the output of a linear system when its input is a wss random process. When the input is wss and the system is time-invariant, the output is also wss.
Let us first find the cross-correlation function

R_{xy}(\tau) = E[X(t)Y(t + \tau)]
= \int E[X(t)X(t + \tau - s)]\, h(s)\,ds
= \int R_{xx}(\tau - s)\, h(s)\,ds \qquad (7.51)
The cross-correlation R_{xy}(\tau) is the convolution of the autocorrelation function R_{xx}(\tau) with the impulse response of the system, a very important result in its own right.
Now multiply through Equation 7.48 by Y(t) and take averages:

R_{yy}(\tau) = L\,E[X(t + \tau)Y(t)] = L\,R_{xy}(\tau) \qquad (7.52)

Applying the shift-invariant linear filter as the operator,

R_{yy}(\tau) = \iint R_{xx}(\tau - s_1 + s_2)\, h(s_1)\, h(s_2)\,ds_1\,ds_2 \qquad (7.53)
An important special case is white noise, for which R_{xx}(\tau) = \sigma_w^2 \delta(\tau). The cross-correlation is then

R_{xy}(\tau) = \int \sigma_w^2 \delta(\tau - s)\, h(s)\,ds = \sigma_w^2\, h(\tau) \qquad (7.54)

Substituting into Equation 7.53, the output autocorrelation for a white-noise input reduces to

R_{yy}(\tau) = \sigma_w^2 \int h(s)\, h(s + \tau)\,ds \qquad (7.56)
Figure 7.6: Ensemble of outputs of a digital filter with white noise input and filter equation y[n] = x[n] + 0.9 y[n-1], which represents a low-pass digital filter.
Example 7.3.4 Discrete Low-pass Filter. Let X[n] be a discrete random process with autocorrelation function R_{xx}[m] = E[X[n]X[n + m]] = \sigma_x^2 \delta[m]. Consider the random process Y that is represented by the difference equation

Y[n] = X[n] + a Y[n - 1] \qquad (7.57)

where a is a constant. This difference equation represents a digital filter with impulse response

h[n] = a^n\, \mathrm{step}[n] \qquad (7.58)

Then the cross-correlation function is given by the difference equation

R_{xy}[m] = E[Y[n]X[n - m]] = \sigma_x^2 \delta[m] + a R_{xy}[m - 1] \qquad (7.59)

This can be solved recursively to find R_{xy}[m] = a^m R_{xy}[0] for m \ge 0. This function is bounded if |a| < 1.
The autocorrelation function of the output is

R_{yy}[m] = \sigma_x^2 \sum_{n=0}^{\infty} a^n a^{n+m} = \frac{\sigma_x^2\, a^{|m|}}{1 - a^2} \qquad (7.60)
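A short simulation of Example 7.3.4 confirms Equation 7.60. The sketch below runs the filter on Gaussian white noise (any white input with the stated autocorrelation would do):

import numpy as np

rng = np.random.default_rng(4)
a, sigma_x, n = 0.9, 1.0, 200000
x = rng.normal(0.0, sigma_x, n)
y = np.zeros(n)
for k in range(1, n):
    y[k] = x[k] + a*y[k-1]             # Equation 7.57: Y[n] = X[n] + a Y[n-1]

m = 5                                   # lag at which to check the result
print(np.mean(y[:-m]*y[m:]))            # time-average estimate of Ryy[m]
print(sigma_x**2 * a**m / (1 - a**2))   # Equation 7.60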
7.3.4 Ergodic Random Processes
Suppose that we observe a single sample function X(t, e_0) and compute its average over the interval [-T, T]:

X_T(e_0) = \frac{1}{2T} \int_{-T}^{T} X(t, e_0)\,dt \qquad (7.61)
The answer is a random variable for any value of T. The mean value of such measures (computed across the ensemble!) is

E[X_T(e)] = \frac{1}{2T} \int_{-T}^{T} E[X(t, e)]\,dt = \frac{1}{2T} \int_{-T}^{T} \mu_x\,dt = \mu_x \qquad (7.62)
We expect (hope) that the variation will decrease with increasing T, and the law of large numbers leads us to expect a decrease in the variance in proportion to 1/T. Do we also expect that we would get the same result with a different sample function, say X(t, e_1)? If the process is ergodic then we will measure the same mean value using the time-average approach and the measured mean will converge on \mu_x. A process with this property is called mean-ergodic.
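The convergence is easy to see numerically. A minimal sketch, using an i.i.d. Gaussian sequence (which is trivially mean-ergodic) as the sample function:

import numpy as np

rng = np.random.default_rng(5)
x = rng.normal(2.0, 1.0, 100000)      # one sample function, mu_x = 2
for size in (100, 10000, 100000):
    print(size, x[:size].mean())      # time averages converge on mu_x = 2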
The covariance of X(t, e) is C(t_1, t_2) = E[X(t_1, e)X(t_2, e)] - \mu_x^2. The random process is mean-ergodic if and only if⁷

\lim_{T \to \infty} \frac{1}{4T^2} \iint_{-T}^{T} C(t_1, t_2)\,dt_1\,dt_2 = 0 \qquad (7.63)
As a corollary, a wss process is mean-ergodic if and only if the autocovariance C(\tau) = R(\tau) - \mu^2 is such that

\lim_{T \to \infty} \frac{1}{2T} \int_{-T}^{T} C(\tau) \left(1 - \frac{|\tau|}{2T}\right) d\tau = 0 \qquad (7.64)
A sufficient condition for a process to be mean-ergodic is that it be wss and

\int_{-\infty}^{\infty} |C(\tau)|\,d\tau < \infty \qquad (7.65)
Distribution-Ergodic Processes
If a process is mean-ergodic then the mean value can be found as the time average over a single sample function from the random process. But what if we want to measure the probability distribution? How and when might we do that? It turns out that the cumulative distribution function can be found from a sample function by a simple circuit.
Figure 7.7: The fraction of the time that a random process is lower than a threshold level x is equal to the cumulative distribution function F(x) = P[X \le x]. The fraction of the area that is shaded in the lower figure equals F(x).
Recall that the cumulative distribution function is

F(x) = P[X(t) \le x] \qquad (7.66)

Define the indicator process

Y_x(t) = \begin{cases} 1, & X(t) \le x \\ 0, & X(t) > x \end{cases} \qquad (7.67)
This is a random process in its own right. It has the property that, for any t,

E[Y_x(t)] = F(x) \qquad (7.68)

If Y_x(t) is mean-ergodic then we can measure its expected value by computing the time average \langle Y_x \rangle. The fraction of the area under a sample Y_x(t) waveform will equal the value of F(x). By changing x and measuring the area, we find F(x). This is illustrated in Figure 7.7. A process X(t) is distribution-ergodic if the related process Y_x(t) is mean-ergodic for every value of x.
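The threshold construction of Figure 7.7 amounts to one line of code: the fraction of time a sample function spends at or below x estimates F(x). A sketch using a Gaussian sample function (an assumption, for illustration):

import numpy as np

rng = np.random.default_rng(6)
xt = rng.normal(0.0, 1.0, 100000)    # sample function of a stationary process
x = 0.5                               # threshold level
yx = (xt <= x).astype(float)          # indicator process Y_x(t), Equation 7.67
print(yx.mean())                      # time average; near Phi(0.5), about 0.6915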
Correlation-Ergodic Processes
We would often like to find the autocorrelation function of a random process. If X(t) is wss then R_{xx}(\tau) = E[X(t)X(t + \tau)]. To compute this average over time, construct the function

Z_\tau(t) = X(t)X(t + \tau) \qquad (7.69)

This is a random process with the property that R_{xx}(\tau) = E[Z_\tau(t)]. If Z_\tau(t) is mean-ergodic then we can compute the average over time for any sample function of the random process. Let us denote this average by a script R to differentiate it from the ensemble average R:

\mathcal{R}_{xx}(\tau) = \langle Z_\tau \rangle = \lim_{T \to \infty} \frac{1}{2T} \int_{-T}^{T} X(t)X(t + \tau)\,dt \qquad (7.70)
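In discrete time, Equation 7.70 becomes a sliding product average over a finite record. A compact sketch:

import numpy as np

def time_average_autocorr(x, max_lag):
    # discrete form of Equation 7.70: average of x[n]*x[n+m] over n
    n = len(x)
    return np.array([np.mean(x[:n-m] * x[m:]) for m in range(max_lag + 1)])

rng = np.random.default_rng(7)
x = rng.normal(0.0, 1.0, 50000)
print(time_average_autocorr(x, 3))    # near [1, 0, 0, 0] for white noise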
When we say that a stationary random process is ergodic we mean that
any average that we may want to find can be computed from any sample
function of the process by an appropriate average over time. We will make
use of the ergodic assumption when we examine the spectrum of random
processes.