Chapter 5: Random Processes
Dr. Tanja Karp
5.1 Random Processes
5.1.1 Definition
[Figure: a random experiment with sample space S maps each outcome ξi onto a sample function X(t, ξi); at fixed times t1 and t2, the values X(t1) and X(t2) are random variables.]
Random Process:
A random process X(t) describes the mapping of a random experiment with sample space S onto an ensemble of sample functions X(t, ξi). For each fixed point in time t1, X(t1) is a random variable.

Example: Rolling a Die
Random variable: X = i if number i is on top of the die.
Random process: Y(t) = X cos(ω0 t).

Example: Tossing a Coin N Times
Random variable: Xn = 0 if the nth result is heads, Xn = 1 if the nth result is tails.
Random process: Y(t) = Σ_{n=1}^{N} Xn rect(t − n + 0.5).
[Figure: sample functions Y(t, ξ1), Y(t, ξ2), …, Y(t, ξ2^N) of the coin-tossing process.]
5.1.2 Ensemble Averages

For each time instance of a random process, the average value, variance, etc. can be calculated across all sample functions X(t, ξi).

Expected Value E{X(t)}:
For a random process X(t) with probability density function f_{X(t)}(x), the expected value E{X(t)} = m_X(t) is given by:

E{X(t)} = ∫ x f_{X(t)}(x) dx = m_X(t)

Variance σ_X²(t):

σ_X²(t) = ∫ |x − m_X(t)|² f_{X(t)}(x) dx
Stationarity:
For a stationary random process the probability density function is independent of time t; the expected value and the variance are therefore also constant over time:

f_{X(t)}(x) = f_{X(t+t0)}(x) for all t, t0
m_X(t) = m_X(t + t0) = m_X
σ_X(t) = σ_X(t + t0) = σ_X

5.1.3 Time Averages and Ergodicity
So far, the average value and the variance of a random process X(t) were calculated based on the probability density function f_{X(t)}(x). In practical experiments, however, the probability density function of a random process is often unknown, and in many cases only one sample function X(t, ξi) is available. It is then preferable to average over time instead of taking the ensemble average.
Average Value m_X(ξi):

m_X(ξi) = lim_{T→∞} (1/T) ∫_{−T/2}^{T/2} X(t, ξi) dt

Variance σ_X²(ξi):

σ_X²(ξi) = lim_{T→∞} (1/T) ∫_{−T/2}^{T/2} (X(t, ξi) − m_X(ξi))² dt
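These time averages can be approximated numerically by replacing the limit with a finite observation window T and the integrals with sums. A minimal sketch, using an arbitrary illustrative sample function x(t) = 1.5 + 0.5 sin(2πt) (not an example from the text):

```python
import numpy as np

# Approximate the time average m_x(xi_i) and variance sigma_x^2(xi_i)
# of one sample function by finite-T Riemann sums.
T = 1000.0
dt = 0.01
t = np.arange(-T / 2, T / 2, dt)
x = 1.5 + 0.5 * np.sin(2 * np.pi * t)     # illustrative sample function

m_x = np.mean(x)                  # ~ (1/T) * integral of x(t) dt
var_x = np.mean((x - m_x) ** 2)   # ~ (1/T) * integral of (x - m_x)^2 dt

print(round(m_x, 3))    # close to 1.5: the sinusoid averages out
print(round(var_x, 3))  # close to 0.5^2 / 2 = 0.125
```

The sinusoidal part contributes nothing to the mean over whole periods, and its mean-square value is half its squared amplitude, so the estimates converge to 1.5 and 0.125 as T grows.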
[Figure: the same ensemble of sample functions X(t, ξi); the ensemble average is taken vertically across all sample functions at a fixed time t1, the time average horizontally along one sample function.]
Ergodicity:
A stationary random process X(t) is called ergodic if the time averages of each sample function X(t, ξi) converge to the corresponding ensemble averages with probability one.

In practical applications ergodicity is often assumed, since only one sample function is available and the ensemble averages therefore cannot be taken.
Example 1:
Random process: X(t) = A cos(ω0 t)
A: discrete random variable with P(A = 1) = P(A = 2) = 0.5
ω0: constant

Ensemble average:

m_X(t) = E{A} cos(ω0 t) = 1.5 cos(ω0 t)

The ensemble average depends on t, so the process is not stationary and hence not ergodic.
Example 2:
Random process: X(t) = A
A: discrete random variable with P(A = 1) = P(A = 2) = 0.5

Ensemble average:

m_X = E{A} = 1 · 0.5 + 2 · 0.5 = 1.5

Time averages:

m_X(ξ1) = lim_{T→∞} (1/T) ∫_{−T/2}^{T/2} X(t, ξ1) dt = lim_{T→∞} (1/T) ∫_{−T/2}^{T/2} 1 dt = 1

m_X(ξ2) = lim_{T→∞} (1/T) ∫_{−T/2}^{T/2} X(t, ξ2) dt = lim_{T→∞} (1/T) ∫_{−T/2}^{T/2} 2 dt = 2

The time averages taken for different sample functions are not identical to the ensemble average; the random process is thus not ergodic.
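Example 2 can be checked numerically: every sample function of X(t) = A is a constant, so its time average equals its own amplitude and never the ensemble mean. A minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

# X(t) = A with P(A = 1) = P(A = 2) = 0.5: each sample function is constant,
# so its time average is its own amplitude (1 or 2), while the ensemble
# average across sample functions is E{A} = 1.5.
n_functions = 10000
A = rng.choice([1.0, 2.0], size=n_functions)  # one amplitude per sample function

time_averages = A            # time average of a constant function is the constant
ensemble_average = A.mean()  # average across sample functions at any fixed t

print(round(ensemble_average, 2))           # close to 1.5
print(sorted(set(time_averages.tolist())))  # [1.0, 2.0] -> never 1.5: not ergodic
```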
[Figure: sample functions Y(t, ξ1), Y(t, ξ2), …, Y(t, ξ2^N) of the coin-tossing process and their ensemble average.]
5.2 Correlation Functions and Power Spectral Density
5.2.1 Autocorrelation and Autocovariance Functions
[Figure: ensemble of sample functions X(t, ξi) with two time instants t1 and t2 marked.]
Autocorrelation Function:

R_XX(t1, t2) = E{X(t1) X(t2)} = ∫∫ x1 x2 f_{X(t1)X(t2)}(x1, x2) dx1 dx2

Autocovariance Function:

C_XX(t1, t2) = E{(X(t1) − m_X(t1))(X(t2) − m_X(t2))} = R_XX(t1, t2) − m_X(t1) m_X(t2)

For a stationary random process, the autocorrelation function depends only on the time difference τ = t2 − t1. Since the average value is then a constant, the autocovariance function is given by:

C_XX(τ) = R_XX(τ) − m_X²
For an ergodic process, both functions can also be obtained as time averages from a single sample function:

R_XX(τ) = lim_{T→∞} (1/T) ∫_{−T/2}^{T/2} X_T(t, ξi) X_T(t + τ, ξi) dt

C_XX(τ) = lim_{T→∞} (1/T) ∫_{−T/2}^{T/2} (X_T(t, ξi) − m_X)(X_T(t + τ, ξi) − m_X) dt

X_T(t, ξi): sample function of the random process X(t), windowed to be of length T (starting at −T/2, ending at T/2).
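The time-average autocorrelation can be estimated from one finite record by averaging lagged products. A minimal sketch, using illustrative i.i.d. noise with mean 1 (a stand-in process, not an example from the text):

```python
import numpy as np

rng = np.random.default_rng(1)

# Estimate R_xx(lag) of one sample function by the windowed time average
# (1/T) * integral of x_T(t) x_T(t + tau) dt, discretized as a mean of
# lagged products over one finite record.
n = 200000
x = 1.0 + rng.standard_normal(n)   # mean m_x = 1, variance 1

def acf(x, lag):
    # time-average estimate of R_xx(lag) from a single finite record
    return np.mean(x[: len(x) - lag] * x[lag:])

R0 = acf(x, 0)   # R_xx(0) = E{x^2} = m_x^2 + variance = 2
R5 = acf(x, 5)   # independent samples: R_xx(5) = m_x^2 = 1
print(round(R0, 1), round(R5, 1))
```

Note that R_xx(lag) tends to m_x² rather than 0 at large lags; subtracting m_x² gives the autocovariance C_xx(lag).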
5.2.2 Power Spectral Density

Motivation: The autocorrelation function describes a stationary random process in the time domain; its Fourier transform, the power spectral density S_XX(f), describes how the power of the process is distributed over frequency:

S_XX(f) = ∫ R_XX(τ) e^{−j2πfτ} dτ

Inverse transform:

R_XX(τ) = ∫ S_XX(f) e^{j2πfτ} df

Properties:

S_XX(f) = S_XX(−f),  S_XX(f) ≥ 0,  Im{S_XX(f)} = 0
With x_T(t) denoting a sample function windowed to length T and X_T(f) its Fourier transform:

S_xx(f) = ∫ R_xx(τ) e^{−j2πfτ} dτ
        = lim_{T→∞} (1/T) ∫_{−T/2}^{T/2} x_T(t) [ ∫ x_T(t + τ) e^{−j2πfτ} dτ ] dt

The inner integral is the Fourier transform of the shifted signal, X_T(f) e^{j2πft}, so that:

S_xx(f) = lim_{T→∞} (1/T) X_T(f) ∫_{−T/2}^{T/2} x_T(t) e^{j2πft} dt

Since x_T(t) is real, the remaining integral equals X_T(−f) = X_T*(f), and therefore:

S_xx(f) = lim_{T→∞} X_T(f) X_T*(f) / T = lim_{T→∞} |X_T(f)|² / T
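The result S_xx(f) = lim |X_T(f)|²/T suggests a practical estimator: average the periodogram |X_T(f)|²/T over many records. A minimal discrete-time sketch (the record length and variance are illustrative choices): for zero-mean white noise with variance σ², the averaged periodogram should be flat at the level σ².

```python
import numpy as np

rng = np.random.default_rng(2)

# Average the periodogram |X_T(f)|^2 / T over many independent records.
# Discrete-time stand-in: white noise with variance sigma^2 has a flat
# spectrum at the level sigma^2.
sigma2 = 4.0
n = 1024          # record length (plays the role of T, in samples)
n_records = 400

psd = np.zeros(n)
for _ in range(n_records):
    x = np.sqrt(sigma2) * rng.standard_normal(n)
    X = np.fft.fft(x)
    psd += np.abs(X) ** 2 / n   # one periodogram
psd /= n_records

print(round(psd.mean(), 1))  # close to sigma^2 = 4.0
```

A single periodogram does not converge as T grows (its variance stays of the order of S_xx² per bin); averaging over records, as above, is what makes the estimate consistent.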
5.2.3 Autocorrelation and Spectral Densities of Deterministic Signals

Power Signals:
The autocorrelation function and the power spectral density can also be calculated for a deterministic power signal x(t); the signal simply takes the place of the random process. With x_T(t) = x(t) rect(t/T), we obtain for the autocorrelation function:

R_xx(τ) = lim_{T→∞} (1/T) ∫_{−T/2}^{T/2} x_T(t) x_T(t + τ) dt

The power of the signal is:

P = R_xx(0) = lim_{T→∞} (1/T) ∫_{−T/2}^{T/2} x_T²(t) dt = ∫ S_xx(f) df
Energy Signals:
For an energy signal x(t), an energy autocorrelation function can be defined as:

R^E_xx(τ) = ∫ x(t) x(t + τ) dt = x(τ) * x(−τ)

Applying the Fourier transform to the energy autocorrelation function, we obtain the energy spectral density:

S^E_xx(f) = ∫ R^E_xx(τ) e^{−j2πfτ} dτ = X(f) X*(f) = |X(f)|²

The energy of the signal is:

E = R^E_xx(0) = ∫ x²(t) dt = ∫ S^E_xx(f) df = ∫ |X(f)|² df
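The energy identity E = ∫ x²(t) dt = ∫ |X(f)|² df can be checked numerically via the discrete Parseval identity, approximating the continuous spectrum by X(f_k) ≈ Δt · FFT(x). The unit rectangular pulse used here is one concrete illustrative case:

```python
import numpy as np

# Check E = integral x^2 dt = integral |X(f)|^2 df on a sampled grid.
dt = 1e-3
t = np.arange(-2, 2, dt)
x = np.where(np.abs(t) < 0.5, 1.0, 0.0)   # unit rectangular pulse, E = 1

E_time = np.sum(x ** 2) * dt              # energy in the time domain

X = dt * np.fft.fft(x)                    # approximate continuous-time spectrum
df = 1.0 / (len(t) * dt)                  # frequency-bin spacing
E_freq = np.sum(np.abs(X) ** 2) * df      # energy in the frequency domain

print(round(E_time, 3), round(E_freq, 3)) # both close to 1
```

With this scaling (X = Δt·FFT, df = 1/(N·Δt)) the two sums agree exactly up to floating-point rounding, mirroring the continuous-time Parseval relation.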
5.2.4 Examples

Example: Sinusoid with Random Phase

x(t) = A sin(ω0 t + θ)
A, ω0: constant values
θ: random variable, uniformly distributed over [0, 2π) with probability density function f_θ(x) = 1/(2π)

Average value (ensemble average):

m_x(t) = ∫ A sin(ω0 t + x) f_θ(x) dx = (1/(2π)) ∫_0^{2π} A sin(ω0 t + x) dx = 0
Autocorrelation function:

R_xx(τ) = E{A sin(ω0 t + θ) A sin(ω0 (t + τ) + θ)} = (A²/2) cos(ω0 τ)
Each sample function has the form x(t, ξi) = A sin(ω0 t + θi).

Average value (time average):

m_x(ξi) = lim_{T→∞} (1/T) ∫_{−T/2}^{T/2} x(t, ξi) dt = lim_{T→∞} (1/T) ∫_{−T/2}^{T/2} A sin(ω0 t + θi) dt = 0 = m_x
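The ensemble-average autocorrelation R_xx(τ) = (A²/2) cos(ω0 τ) can be verified by drawing many random phases and averaging the product at a fixed t and lag (the values t = 0.3 and τ = 0.1 are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(3)

# Ensemble-average estimate of R_xx(tau) for x(t) = A sin(w0 t + theta),
# theta uniform on [0, 2*pi).  Theory: R_xx(tau) = (A^2 / 2) cos(w0 tau).
A, w0 = 1.0, 2 * np.pi
n_trials = 200000
theta = rng.uniform(0, 2 * np.pi, n_trials)

t, tau = 0.3, 0.1   # arbitrary time instant and lag
R_est = np.mean(A * np.sin(w0 * t + theta) * A * np.sin(w0 * (t + tau) + theta))
R_theory = (A ** 2 / 2) * np.cos(w0 * tau)

print(round(R_est, 2), round(R_theory, 2))  # both close to 0.5*cos(0.2*pi) ~ 0.40
```

That the estimate does not depend on the chosen t (only on τ) reflects the stationarity of the random-phase sinusoid.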
[Figure: five sample functions x(t, ξi) for A = 1, ω0 = 2π, together with the autocorrelation function R_xx(τ) and the power spectral density S_xx(f).]
Example: Binary Random Signal

x(t, ξi) takes the values 0 and 1 with equal probability; each value is held for one bit duration Tb.

Average value (time average), with n1 denoting the number of one-bits among N bits:

m_x(ξi) = lim_{N→∞} (1/(N Tb)) [n1 · 1 · Tb + (N − n1) · 0 · Tb] = lim_{N→∞} n1/N = 0.5

Autocorrelation function:

R_xx(τ) = lim_{T→∞} (1/T) ∫_{−T/2}^{T/2} x(t, ξ1) x(t + τ, ξ1) dt = 0.25 (1 + Λ(τ/Tb))

where Λ(τ/Tb) is the triangular pulse of width 2Tb.
23
x(t,1)
Sample Function
1.5
1
0.5
0
0.5
0.75
0.5
0
10
t/T
12
14
16
18
20
0.25
x(t+0.3Tb,1)
x(t+Tb,1)
1.5
1
0.5
0
0.5
x(t+1.5Tb,1)
1.5
1
0.5
0
0.5
0
5
0
/Tb
10
12
14
16
18
20
0.75
0.5
0
10
12
14
16
18
20
0.25
10
t/Tb
12
14
16
18
20
0
4
0
f Tb
24
For the unit rectangular pulse f(t) = rect(t), the energy autocorrelation function is the triangular pulse:

R^E_ff(τ) = ∫ f(t) f(t + τ) dt = Λ(τ)

and the corresponding energy spectral density is:

S^E_ff(f) = F{R^E_ff(τ)} = sinc²(f)

[Figure: the triangular autocorrelation function R_xx(τ) and the corresponding power spectral density S_xx(f).]
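The triangle shape of the rectangular pulse's energy autocorrelation can be verified by direct numerical integration of the lagged product, ∫ f(t) f(t+τ) dt = 1 − |τ| for |τ| < 1:

```python
import numpy as np

# Numerical energy autocorrelation of the unit rectangular pulse:
# R_ff(tau) = integral of f(t) f(t + tau) dt = Lambda(tau) = 1 - |tau|.
dt = 1e-4
t = np.arange(-1, 1, dt)
f = (np.abs(t) < 0.5).astype(float)   # unit rectangular pulse

def R_ff(tau):
    # shift by a whole number of samples and integrate the overlap
    shift = int(round(tau / dt))
    return np.sum(f[: len(f) - shift] * f[shift:]) * dt

for tau in (0.0, 0.25, 0.5):
    print(round(R_ff(tau), 2))   # ~1.0, ~0.75, ~0.5: the triangle Lambda(tau)
```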
White Noise:
A random process n(t) is called white noise if its power spectral density is constant over all frequencies:

S_nn(f) = N0/2 for all f

Autocorrelation function:

R_nn(τ) = F⁻¹{S_nn(f)} = (N0/2) δ(τ)

Since only the first and second moments of the process are known, the probability density function cannot be uniquely determined. In the case of a Gaussian probability density function, the process is called white Gaussian noise. If white Gaussian noise is added to a signal, we denote it as additive white Gaussian noise (AWGN).
[Figure: a sample function n(t) of white noise, the autocorrelation function R_nn(τ) = (N0/2) δ(τ), and the constant power spectral density S_nn(f) = N0/2.]
5.3 Random Processes and LTI Systems
5.3.1 Average Value of the Output Signal

Excitation of an LTI system with impulse response h(t) by a sample function x(t, ξi) of a stationary random process x(t):

x(t, ξi) → [ LTI system h(t) ] → y(t, ξi)

Average value of the output:

m_y = E{y(t)} = E{x(t) * h(t)} = E{ ∫ x(t − τ) h(τ) dτ }
    = ∫ E{x(t − τ)} h(τ) dτ = m_x ∫ h(τ) e^{−j2π·0·τ} dτ = m_x H(0)

The mean of the output equals the mean of the input multiplied by the DC gain H(0) of the system.
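The relation m_y = m_x H(0) is easy to check in discrete time: for an FIR filter, H(0) is simply the sum of the taps. The taps and the input mean below are arbitrary illustrative values:

```python
import numpy as np

rng = np.random.default_rng(5)

# Check m_y = m_x * H(0): pass noise with mean m_x through an FIR filter;
# the output mean should equal m_x times the DC gain H(0) = sum of taps.
h = np.array([0.5, 1.0, -0.25, 0.75])    # illustrative filter taps
m_x = 2.0
x = m_x + rng.standard_normal(500000)    # stationary input, mean m_x

y = np.convolve(x, h, mode="valid")      # y(t) = sum_k h(k) x(t - k)
H0 = h.sum()                             # DC gain H(0)

print(round(y.mean(), 2), round(m_x * H0, 2))  # both close to 2.0 * 2.0 = 4.0
```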
5.3.2 Autocorrelation Function of the Output Signal

R_yy(τ) = E{y(t) y(t + τ)} = E{ ∫ h(α) x(t − α) dα · ∫ h(β) x(t + τ − β) dβ }
        = ∫∫ h(α) h(β) R_xx(τ + α − β) dα dβ

Substituting β = α + γ:

R_yy(τ) = ∫ [ ∫ h(α) h(α + γ) dα ] R_xx(τ − γ) dγ = R^E_hh(τ) * R_xx(τ)

R^E_hh(τ) = ∫ h(α) h(α + τ) dα: energy autocorrelation function of the system impulse response.
5.3.3 Power Spectral Density of the Output Signal

The energy autocorrelation function of the impulse response is a convolution:

R^E_hh(τ) = h(τ) * h(−τ)

With the Fourier correspondences h(τ) ∘−∙ H(f) and h(−τ) ∘−∙ H*(f), the power spectral density of the output becomes:

S_yy(f) = H(f) H*(f) S_xx(f) = |H(f)|² S_xx(f)

In terms of the second-order statistics, the LTI system thus acts as a system with transfer function |H(f)|²:

R_xx(τ), S_xx(f) → [ |H(f)|² ] → R_yy(τ), S_yy(f)
Example: White Noise and Ideal Lowpass Filter

H_LP(f) = 1 for |f| < fc, 0 otherwise

Output noise power:

P_no = ∫ S_no no(f) df = ∫_{−fc}^{fc} (N0/2) df = N0 fc

[Figure: the lowpass transfer function H_LP(f), the input power spectral density S_ni ni(f) = N0/2, and the output power spectral density S_no no(f), which equals N0/2 for |f| < fc and zero otherwise.]
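A discrete-time sketch of this example: white noise has a flat spectrum, so an ideal lowpass that keeps a fraction of the band keeps the same fraction of the power, mirroring P_no = N0 fc. The cutoff at 10% of the band is an illustrative choice:

```python
import numpy as np

rng = np.random.default_rng(6)

# Ideal lowpass on white noise: the output power is the flat spectral level
# integrated over (-fc, fc).  Discrete stand-in: unit-variance white noise,
# keeping 10% of the band should keep 10% of the power.
n = 1 << 16
x = rng.standard_normal(n)          # input power E{x^2} = 1

X = np.fft.rfft(x)
f = np.fft.rfftfreq(n)              # normalized frequencies 0 .. 0.5
X[f > 0.05] = 0.0                   # ideal lowpass with fc = 0.05
y = np.fft.irfft(X, n)

P_in = np.mean(x ** 2)
P_out = np.mean(y ** 2)
print(round(P_out / P_in, 2))       # close to 0.05 / 0.5 = 0.1
```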
5.3.4 Cross-Correlation Function

The autocorrelation function describes the statistical properties of two random variables X1 and X2 taken from the same random process at times t1 and t2, respectively: X1 = X(t1) and X2 = X(t2).

The cross-correlation function describes the statistical properties of two random variables X1 and Y2 taken from two different random processes X(t) and Y(t) (here the input and output of an LTI system) at times t1 and t2, respectively, such that X1 = X(t1) and Y2 = Y(t2). For stationary processes it is defined as:

R_XY(τ) = E{X(t) Y(t + τ)}

Two random processes X(t) and Y(t) are called uncorrelated if

R_XY(t1, t2) = E{X(t1) Y(t2)} = E{X(t1)} E{Y(t2)} = m_X(t1) m_Y(t2)

They are called orthogonal if

R_XY(t1, t2) = 0 for all t1, t2.
For the input and output of the LTI system:

R_xy(τ) = E{ x(t) ∫ h(α) x(t + τ − α) dα }
        = ∫ h(α) E{x(t) x(t + τ − α)} dα = ∫ h(α) R_xx(τ − α) dα = h(τ) * R_xx(τ)
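The relation R_xy(τ) = h(τ) * R_xx(τ) has a well-known consequence that is easy to check numerically: for zero-mean white input with variance σ², R_xx(τ) = σ² δ(τ), so the input-output cross-correlation reproduces the impulse response scaled by σ². The taps below are arbitrary illustrative values:

```python
import numpy as np

rng = np.random.default_rng(7)

# For white input, R_xy(tau) = h(tau) * sigma^2 * delta(tau) = sigma^2 * h(tau):
# estimating R_xy from data should recover the filter taps.
h = np.array([1.0, 0.5, -0.25])          # illustrative impulse response
x = rng.standard_normal(400000)          # white input, sigma^2 = 1
y = np.convolve(x, h)[: len(x)]          # y(t) = sum_k h(k) x(t - k)

# time-average estimate of R_xy(k) = E{x(t) y(t + k)} for k = 0, 1, 2
R_xy = [np.mean(x[: len(x) - k] * y[k:]) for k in range(3)]
print([round(r, 2) for r in R_xy])       # close to [1.0, 0.5, -0.25]
```

This is the idea behind system identification by cross-correlation: exciting an unknown LTI system with (approximately) white noise and cross-correlating input and output yields an estimate of h(τ).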