5.1 REVIEW OF PROBABILITY AND RANDOM VARIABLES


5.1.1 Sample Space, Events, and Probability
The fundamental concept in any probabilistic model is the concept of a random
experiment, which is any experiment whose outcome cannot be predicted with certainty.
A random experiment has certain outcomes, which are the elementary results of the
experiment. In the flipping of a coin, head and tail are the possible outcomes. In
throwing a die, 1, 2, 3, 4, 5, and 6 are the possible outcomes. The set of all possible
outcomes is called the sample space and is denoted by Ω. Outcomes are denoted by ω,
and each lies in Ω, i.e., ω ∈ Ω.
Events are subsets of the sample space; in other words, an event is a collection
of outcomes. For instance, in throwing a die, the event "the outcome is odd" consists
of the outcomes 1, 3, and 5. Events are disjoint if their intersection is empty. For
instance, in throwing a die, the events "the outcome is odd" and "the outcome is
divisible by 4" are disjoint.

We define a probability P as a set function assigning nonnegative values to all
events E such that the following conditions are satisfied:

1. 0 ≤ P(E) ≤ 1 for all events E.
2. P(Ω) = 1.
3. If the events E1, E2, E3, . . . are disjoint, then P(E1 ∪ E2 ∪ E3 ∪ · · ·) = P(E1) + P(E2) + P(E3) + · · ·.

Some of the most important properties that follow from these conditions are as follows:

1. P(E^c) = 1 − P(E), where E^c denotes the complement of the event E.
2. P(∅) = 0.
3. P(E1 ∪ E2) = P(E1) + P(E2) − P(E1 ∩ E2).
4. If E1 ⊆ E2, then P(E1) ≤ P(E2).
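These conditions and properties can be checked directly on the die-throwing experiment above. The short Python sketch below is illustrative only; the event names (odd, div4) are our labels, and the exact rational arithmetic comes from the standard fractions module.

```python
from fractions import Fraction

# Fair-die experiment from the text: six equally likely outcomes.
omega = frozenset({1, 2, 3, 4, 5, 6})

def P(event):
    """Probability of an event, i.e., a subset of the sample space."""
    return Fraction(len(event), len(omega))

odd = frozenset({1, 3, 5})          # "the outcome is odd"
div4 = frozenset({4})               # "the outcome is divisible by 4" (our label)

assert P(omega) == 1                          # P(Omega) = 1
assert odd.isdisjoint(div4)                   # the two events are disjoint
assert P(odd | div4) == P(odd) + P(div4)      # additivity for disjoint events
assert P(omega - odd) == 1 - P(odd)           # complement rule
even = omega - odd
assert P(odd | even) == P(odd) + P(even) - P(odd & even)  # inclusion-exclusion
```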

5.1.2 Conditional Probability


The conditional probability of the event E1, given the event E2, is defined by

P(E1 | E2) = P(E1 ∩ E2) / P(E2),

provided that P(E2) > 0. If it happens that P(E1 | E2) = P(E1), the events E1 and E2 are said to be independent. For independent events, P(E1 ∩ E2) = P(E1)P(E2).

5.1.3 Random Variables


A random variable is a mapping from the sample space to the set of real numbers.
In other words, a random variable is an assignment of real numbers to the outcomes
of a random experiment. A schematic diagram representing a random variable is
given in Figure 5.1.

Figure 5.1 A random variable as a mapping from Ω to R.

The cumulative distribution function, or CDF, of a random variable X is defined as

FX(x) = P({ω ∈ Ω : X(ω) ≤ x}),

which can be simply written as

FX(x) = P(X ≤ x)

and has the following properties:

1. 0 ≤ FX(x) ≤ 1.
2. FX(x) is nondecreasing.
3. FX(x) → 0 as x → −∞, and FX(x) → 1 as x → +∞.
4. FX(x) is continuous from the right.
5. P(a < X ≤ b) = FX(b) − FX(a).

Figure 5.2 The CDF for a discrete random variable.

Figure 5.3 The CDF for a continuous random variable.

The probability density function, or PDF, of a continuous random variable X is defined
as the derivative of its CDF. It is denoted by fX(x), i.e.,

fX(x) = d FX(x) / dx.

The basic properties of the PDF are as follows:

1. fX(x) ≥ 0.
2. ∫_{−∞}^{∞} fX(x) dx = 1.
3. P(a < X ≤ b) = ∫_a^b fX(x) dx.
4. FX(x) = ∫_{−∞}^{x} fX(u) du.

Important Random Variables. In communications, the most commonly used


random variables are the following:
Uniform random variable. This is a continuous random variable taking values
between a and b with equal probabilities for intervals of equal length. The density
function is given by

fX(x) = 1/(b − a) for a ≤ x ≤ b, and 0 otherwise.

The phase of a received sinusoid is a common example; it is usually modeled as a uniform random variable between 0 and 2π.


Gaussian or normal random variable. The Gaussian, or normal, random variable is
a continuous random variable described by the density function

fX(x) = (1/√(2πσ²)) e^{−(x−m)²/(2σ²)}.

A Gaussian random variable with mean m and variance σ² is denoted by N(m, σ²).
The random variable N(0, 1) is usually called standard normal.
The Gaussian random variable is the most important and frequently encountered
random variable in communications. The reason is that thermal noise, which is the
major source of noise in communication systems, has a Gaussian distribution.

Assuming that X is a standard normal random variable, we define the function Q(x) as
P(X > x). The Q-function is given by the relation

Q(x) = ∫_x^∞ (1/√(2π)) e^{−t²/2} dt.

Figure 5.7 The PDF for the uniform random variable.

Figure 5.8 The PDF for the Gaussian random variable.

Figure 5.9 The Q-function as the area under the tail of a standard normal random variable.

It is easy to see that Q(x) satisfies the following relations:

Q(−x) = 1 − Q(x),
Q(0) = 1/2,
Q(∞) = 0.

Table 5.1 gives the values of this function for various values of x.

TABLE 5.1 TABLE OF THE Q FUNCTION


For an N(m, σ²) random variable, a simple change of variable in the integral that
computes P(X > x) results in

P(X > x) = Q((x − m)/σ).


Example 5.1.6
X is a Gaussian random variable with mean 1 and variance 4. Find the probability that
X is between 5 and 7.
Solution We have m = 1 and σ = √4 = 2. Thus,

P(5 < X < 7) = Q((5 − 1)/2) − Q((7 − 1)/2) = Q(2) − Q(3) ≈ 0.0214.
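Numerically, the Q-function can be evaluated through the complementary error function via Q(x) = ½ erfc(x/√2). The sketch below checks the result of this example; the helper names Q and tail are ours, chosen for illustration.

```python
import math

def Q(x):
    """Tail probability of a standard normal random variable, Q(x) = P(X > x)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def tail(x, m, sigma):
    """P(X > x) for X ~ N(m, sigma^2), using P(X > x) = Q((x - m)/sigma)."""
    return Q((x - m) / sigma)

# Example 5.1.6: X ~ N(1, 4), so m = 1 and sigma = 2.
p = tail(5, 1, 2) - tail(7, 1, 2)    # P(5 < X < 7) = Q(2) - Q(3)
assert abs(p - 0.0214) < 1e-4
assert abs(Q(0) - 0.5) < 1e-12       # Q(0) = 1/2
```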

Example 5.1.7
Assuming X is a Gaussian random variable with m = 0 and σ = 1, find the probability
density function of the random variable Y given by Y = aX + b.
Solution In this case, g(x) = ax + b; therefore, g′(x) = a. The equation ax + b = y
has only one solution, which is given by x = (y − b)/a. Using these results, we obtain

fY(y) = (1/|a|) fX((y − b)/a) = (1/√(2πa²)) e^{−(y−b)²/(2a²)}.

It is observed that Y is a Gaussian random variable N(b, a²).


Example 5.1.8
Assume X is an N(3, 6) random variable. Find the density function of Y = −2X + 3.
Solution We know Y is a Gaussian random variable with mean m = −2 · 3 + 3 = −3
and variance σ² = (−2)² · 6 = 24. Therefore, Y is an N(−3, 24) random variable and

fY(y) = (1/√(48π)) e^{−(y+3)²/48}.

From this example, we arrive at the important conclusion that a linear function of a
Gaussian random variable is itself a Gaussian random variable.


Statistical Averages. The mean, expected value, or expectation of the random variable
X is defined as

E(X) = ∫_{−∞}^{∞} x fX(x) dx

and is also denoted by mX.


Note that E(X) is just a real number. In general, the nth moment of a random
variable X is defined as

E(X^n) = ∫_{−∞}^{∞} x^n fX(x) dx.

The variance of X is defined as VAR(X) = E[(X − mX)²]. The relation for the variance can be written as

VAR(X) = E(X²) − (E(X))².

For any constant c, the following relations hold:

1. E(cX) = cE(X).
2. E(c) = c.
3. E(X + c) = E(X) + c.


It is also easy to verify that the variance has the following properties:

1. VAR(c) = 0.
2. VAR(cX) = c² VAR(X).
3. VAR(X + c) = VAR(X).
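These moment and variance properties can be verified exactly for a simple discrete random variable such as the outcome of a fair die. The Python sketch below is illustrative only; it uses exact rational arithmetic so every identity holds without rounding error.

```python
from fractions import Fraction

# X = outcome of a fair die, with P(X = k) = 1/6 for k = 1, ..., 6.
pmf = {k: Fraction(1, 6) for k in range(1, 7)}

def E(g):
    """Expectation of g(X) for the discrete distribution above."""
    return sum(p * g(x) for x, p in pmf.items())

mean = E(lambda x: x)                    # E(X) = 7/2
var = E(lambda x: x * x) - mean**2       # VAR(X) = E(X^2) - (E(X))^2 = 35/12
assert mean == Fraction(7, 2)
assert var == Fraction(35, 12)

# Properties for a constant c.
c = 4
assert E(lambda x: c * x) == c * mean                         # E(cX) = cE(X)
assert E(lambda x: (c * x)**2) - (c * mean)**2 == c**2 * var  # VAR(cX) = c^2 VAR(X)
assert E(lambda x: (x + c)**2) - (mean + c)**2 == var         # VAR(X + c) = VAR(X)
```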

5.2 RANDOM PROCESSES: BASIC CONCEPTS


Example 5.2.1
Assume that we have a signal generator that can generate one of the six possible
sinusoids. The amplitude of all sinusoids is one, and the phase for all of them is zero,
but the frequencies can be 100, 200, . . . , 600 Hz. We throw a die, and depending on its
outcome, which we denote by F, we generate a sinusoid whose frequency is 100 times
what the die shows (100F). This means that each of the six possible signals will be
realized with equal probability. This is an example of a random process. This random
process can be defined as X(t) = cos(2π · 100Ft).
Example 5.2.2
Assume that we uniformly choose a phase Θ between 0 and 2π and generate a sinusoid
with a fixed amplitude and frequency but with a random phase Θ. In this case, the
random process is X(t) = A cos(2πf0t + Θ), where A and f0 denote the fixed amplitude
and frequency and Θ denotes the random phase. Some sample functions for this random
process are shown in Figure 5.11.


Figure 5.11 Sample functions of the random process given in Example 5.2.2.

Figure 5.13 Sample functions of a random process.

For each outcome ωi in the sample space Ω, there exists a deterministic time function
x(t; ωi), which is called a sample function or a realization of the random process. At
each time instant t0 and for each ωi ∈ Ω, we have the number x(t0; ωi). For the
different outcomes (ωi's) at a fixed time t0, the numbers x(t0; ωi) constitute a random
variable denoted by X(t0). In other words, at any time instant, the value of a random
process is a random variable.


Example 5.2.4
In Example 5.2.1, determine the values of the random variable X(0.001).
Solution The possible values are cos(0.2π), cos(0.4π), . . . , cos(1.2π), and each has
probability 1/6.

Example 5.2.5
Let Ω denote the sample space corresponding to the random experiment of throwing a
die. Obviously, in this case Ω = {1, 2, 3, 4, 5, 6}. For all ωi, let x(t; ωi) = ωi e^{−t} u−1(t)
denote a random process, where u−1(t) is the unit step function. Then X(1) is a random
variable taking the values e^{−1}, 2e^{−1}, . . . , 6e^{−1}, and each has probability 1/6.
Sample functions of this random process are shown in Figure 5.14.

Figure 5.14 Sample functions of Example 5.2.5.


5.2.1 Statistical Averages


We know that, at any time instant t0, the random process at that time, i.e., X(t0), is an
ordinary random variable; it has a density function, and we can find its mean and its
variance at that point. Obviously, both mean and variance are ordinary deterministic
numbers, but they depend on the time t0. That is, at time t1, the density function, and
thus the mean and the variance, of X(t1) will generally be different from those of X(t0).
Definition 5.2.1. The mean, or expectation, of the random process X(t) is a
deterministic function of time denoted by mX(t) that at each time instant t0 equals the
mean of the random variable X(t0). That is, mX(t) = E[X(t)] for all t .

Figure 5.15 The mean of a random process.



Since at any t0 the random variable X(t0) is well defined with a probability density
function fX(t0)(x), we have

mX(t0) = E[X(t0)] = ∫_{−∞}^{∞} x fX(t0)(x) dx.

Example 5.2.7
The mean of the random process in Example 5.2.2 is obtained by noting that the density
function of the phase is

fΘ(θ) = 1/(2π) for 0 ≤ θ < 2π, and 0 otherwise.

Hence,

mX(t) = E[A cos(2πf0t + Θ)] = ∫_0^{2π} A cos(2πf0t + θ) (1/(2π)) dθ = 0.

We observe that, in this case, mX(t) is independent of t.
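A quick Monte Carlo sketch of this result (the values A = 2 and f0 = 100 Hz are our choices): averaging A cos(2πf0t + Θ) over many random phases gives approximately zero at every fixed t.

```python
import numpy as np

rng = np.random.default_rng(1)
A, f0 = 2.0, 100.0                                      # assumed amplitude and frequency
theta = rng.uniform(0.0, 2.0 * np.pi, size=500_000)     # Theta ~ uniform on [0, 2*pi)

# The ensemble mean of A*cos(2*pi*f0*t + Theta) should be 0 at every fixed t.
for t in (0.0, 0.0013, 0.01):
    m_hat = np.mean(A * np.cos(2.0 * np.pi * f0 * t + theta))
    assert abs(m_hat) < 0.01
```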


Another statistical average that plays a very important role in our study of
random processes is the autocorrelation function. The autocorrelation function
is especially important because it completely describes the power spectral
density and the power content of a large class of random processes.


Definition 5.2.2. The autocorrelation function of the random process X(t),


denoted by RX(t1, t2), is defined by RX(t1, t2) = E[X(t1)X(t2)].
From this definition, it is clear that RX(t1, t2) is a deterministic function of the two
variables t1 and t2 given by

RX(t1, t2) = ∫∫ x1 x2 fX(t1),X(t2)(x1, x2) dx1 dx2.

Example 5.2.8
The autocorrelation function of the random process in Example 5.2.2 is

RX(t1, t2) = E[A cos(2πf0t1 + Θ) · A cos(2πf0t2 + Θ)] = (A²/2) cos(2πf0(t1 − t2)),

where we have used

E[cos(2πf0(t1 + t2) + 2Θ)] = (1/(2π)) ∫_0^{2π} cos(2πf0(t1 + t2) + 2θ) dθ = 0.
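The autocorrelation of the random-phase sinusoid can also be checked by simulation; in the sketch below (A, f0, and the two time instants are our choices), the sample average of X(t1)X(t2) over many realizations approaches (A²/2) cos(2πf0(t1 − t2)).

```python
import numpy as np

rng = np.random.default_rng(2)
A, f0 = 1.0, 50.0                                       # assumed amplitude and frequency
theta = rng.uniform(0.0, 2.0 * np.pi, size=500_000)

t1, t2 = 0.004, 0.001
x1 = A * np.cos(2.0 * np.pi * f0 * t1 + theta)          # X(t1) across realizations
x2 = A * np.cos(2.0 * np.pi * f0 * t2 + theta)          # X(t2) across realizations

r_hat = np.mean(x1 * x2)                                # estimate of E[X(t1)X(t2)]
r_true = (A**2 / 2.0) * np.cos(2.0 * np.pi * f0 * (t1 - t2))
assert abs(r_hat - r_true) < 0.01
```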


5.2.2 Wide-Sense Stationary Processes


It can happen that some of the properties of this random variable are independent of
time. Depending on which properties are independent of time, different notions of
stationarity can be defined. One of the most useful notions of stationarity is the notion
of wide-sense stationary random processes.
Definition 5.2.3. A process X(t) is wide-sense stationary (WSS) if the following
conditions are satisfied:
1. mX(t) = E[X(t)] is independent of t .
2. RX(t1, t2) depends only on the time difference τ = t1 − t2 and not on t1 and t2
individually.
Hereafter, we will use the term stationary as shorthand for wide-sense stationary
processes, and their mean and autocorrelation will be denoted by mX and RX(τ).
Example 5.2.10
For the random process in Example 5.2.2, we have already seen that mX = 0 and that
RX(t1, t2) = (A²/2) cos(2πf0(t1 − t2)) depends only on τ = t1 − t2. Therefore, the
process is WSS, with RX(τ) = (A²/2) cos(2πf0τ).


Example 5.2.11
Let the random process Y(t) be similar to the random process X(t) defined in Example
5.2.2, but assume that Θ is uniformly distributed between 0 and π. In this case,

mY(t) = ∫_0^{π} A cos(2πf0t + θ) (1/π) dθ = −(2A/π) sin(2πf0t).

Since mY(t) is not independent of t, the process Y(t) is not stationary.


From the definition of the autocorrelation function, it follows that RX(t1, t2) = RX(t2,
t1). This means that if the process is stationary, we have RX(τ) = RX(−τ); i.e., the
autocorrelation function is an even function in stationary processes.


5.2.4 Random Processes and Linear Systems


We assume that a stationary process X(t) is the input to a linear time-invariant
system with impulse response h(t) and that the output process is denoted by Y(t), as
shown in Figure 5.16.

Figure 5.16 A random process passing through a linear time-invariant system.


We next demonstrate that if a stationary process X(t) with mean mX and autocorrelation
function RX(τ) is passed through a linear time-invariant system with impulse response
h(t), the input and output processes X(t) and Y(t) will be jointly stationary with

mY = mX ∫_{−∞}^{∞} h(t) dt,

RY(τ) = RX(τ) ★ h(τ) ★ h(−τ),

RXY(τ) = RX(τ) ★ h(−τ).

By using the convolution integral to relate the output Y(t) to the input X(t), i.e.,
Y(t) = ∫_{−∞}^{∞} X(s)h(t − s) ds, we have

mY(t) = E[Y(t)] = ∫_{−∞}^{∞} E[X(s)]h(t − s) ds = mX ∫_{−∞}^{∞} h(s) ds.

This proves that mY is independent of t.


The cross-correlation function between the output and the input is

RXY(t1, t2) = E[X(t1)Y(t2)] = ∫_{−∞}^{∞} RX(t1 − s)h(t2 − s) ds
= ∫_{−∞}^{∞} RX(u)h(u − (t1 − t2)) du = RX(τ) ★ h(−τ), where τ = t1 − t2.


This shows that RXY(t1, t2) depends only on τ = t1 − t2.


5.2.5 Power Spectral Density of Stationary Processes
A useful function that determines the distribution of the power of the random process at
different frequencies is the power spectral density or power spectrum of the random
process. The power spectral density of a random process X(t) is denoted by SX(f ), and
denotes the strength of the power in the random process as a function of frequency. The
unit for power spectral density is Watts/Hz.
Theorem [WienerKhinchin] For a stationary random process X(t), the power
spectral density is the Fourier transform of the autocorrelation function, i.e.,

Example 5.2.15
For the stationary random process in Example 5.2.2, we had

RX(τ) = (A²/2) cos(2πf0τ).

Hence,

SX(f) = (A²/4)[δ(f − f0) + δ(f + f0)].

The power spectral density is shown in Figure 5.17. All the power content of the
process is located at f0 and −f0. This is expected because the sample functions of this
process are sinusoids with their power at those frequencies.

Figure 5.17 Power spectral density of the random process of Example 5.2.15.


The power content, or simply the power, of a random process is the sum of the powers
at all frequencies in that random process. In order to find the total power, we have to
integrate the power spectral density over all frequencies.

Since SX(f) is the Fourier transform of RX(τ), RX(τ) is the inverse Fourier transform
of SX(f). Therefore, we can write

RX(τ) = ∫_{−∞}^{∞} SX(f) e^{j2πfτ} df.

Substituting τ = 0 into this relation yields

RX(0) = ∫_{−∞}^{∞} SX(f) df.

Comparing this with Equation (5.2.10), we conclude that

PX = E[X²(t)] = RX(0) = ∫_{−∞}^{∞} SX(f) df.


Example 5.2.17
Find the power in the process given in Example 5.2.15.
Solution We can use either the relation

PX = ∫_{−∞}^{∞} SX(f) df = A²/4 + A²/4 = A²/2

or the relation

PX = RX(0) = (A²/2) cos(2πf0τ) evaluated at τ = 0, which again gives A²/2.
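The power PX = E[X²(t)] = A²/2 can also be estimated by averaging X²(t) over many realizations of the random phase; a small Monte Carlo sketch (A, f0, and t below are our choices):

```python
import numpy as np

rng = np.random.default_rng(3)
A, f0, t = 3.0, 100.0, 0.007                            # assumed values
theta = rng.uniform(0.0, 2.0 * np.pi, size=500_000)
x = A * np.cos(2.0 * np.pi * f0 * t + theta)            # X(t) across realizations

p_hat = np.mean(x**2)                                   # power = E[X^2(t)] = R_X(0)
assert abs(p_hat - A**2 / 2.0) < 0.05                   # true power is A^2/2 = 4.5
```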


Power Spectra in LTI Systems. We have seen that when a stationary random process
with mean mX and autocorrelation function RX(τ) passes through a linear time-invariant
system with impulse response h(t), the output process will also be stationary with mean

mY = mX ∫_{−∞}^{∞} h(t) dt

and autocorrelation

RY(τ) = RX(τ) ★ h(τ) ★ h(−τ).

Translating these relations into the frequency domain is straightforward. By noting
that F[h(−τ)] = H*(f) and ∫_{−∞}^{∞} h(t) dt = H(0), we can compute the Fourier
transform of both sides of these relations to obtain

mY = mX H(0)

and

SY(f) = SX(f)|H(f)|².
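The relation SY(f) = SX(f)|H(f)|² can be illustrated in discrete time by averaging periodograms of filtered white noise; the moving-average filter and every name below are our choices, so this is a sketch rather than part of the text.

```python
import numpy as np

rng = np.random.default_rng(4)
sigma2 = 1.0                                   # white input: S_X(f) = sigma2
h = np.array([0.25, 0.25, 0.25, 0.25])         # a simple moving-average filter

n_seg, seg_len = 2000, 256
psd = np.zeros(seg_len)
for _ in range(n_seg):
    x = rng.normal(0.0, np.sqrt(sigma2), size=seg_len + len(h) - 1)
    y = np.convolve(x, h, mode="valid")            # filtered output segment
    psd += np.abs(np.fft.fft(y))**2 / seg_len      # periodogram of the segment
psd /= n_seg                                       # averaged periodogram ~ S_Y(f)

H = np.fft.fft(h, seg_len)                         # filter frequency response
expected = sigma2 * np.abs(H)**2                   # S_Y(f) = S_X(f) |H(f)|^2
# Compare at low frequencies, where |H(f)| is not close to zero.
assert np.allclose(psd[:20], expected[:20], rtol=0.2, atol=0.01)
```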


We can also define a frequency-domain relation for the cross-correlation function.
Let us define the cross-spectral density SXY(f) as the Fourier transform of RXY(τ).
Then

SXY(f) = SX(f)H*(f),

and since RYX(τ) = RXY(−τ), we have

SYX(f) = S*XY(f) = SX(f)H(f).

Figure 5.18 shows how these quantities are related.


Example 5.2.18
If the process in Example 5.2.2 passes through a differentiator, we have H(f) = j2πf;
therefore,

SY(f) = 4π²f² SX(f) = π²f0²A²[δ(f − f0) + δ(f + f0)]

and

SXY(f) = SX(f)(−j2πf) = −j(πA²f0/2)[δ(f − f0) − δ(f + f0)].

Figure 5.18 Input-output relations for the power spectral density and the cross-spectral
density.

5.3 GAUSSIAN AND WHITE PROCESSES


5.3.1 Gaussian Processes
Assume that X(t) is a stationary Gaussian random process. We know that at any time
instant t0, the random variable X(t0) is Gaussian, and at any two time instants t1 and t2,
the random variables (X(t1), X(t2)) are distributed according to a two-dimensional
jointly Gaussian distribution.
Example 5.3.1
Let X(t) be a zero-mean stationary Gaussian random process with a given power
spectral density SX(f). Determine the probability density function of the random
variable X(3).
Solution Since X(t) is a Gaussian random process, the probability density function
of the random variable X(t) at any value of t is Gaussian. Therefore, X(3) ∼ N(m, σ²).
Now we need to find m and σ². Since the process is zero mean, at any time instant
t we have E[X(t)] = 0; this means m = E[X(3)] = 0. To find the variance, we note
that

σ² = E[X²(3)] = RX(0),

where, in the last step, we have used the fact that the process is stationary and hence
E[X(t1)X(t2)] = RX(t1 − t2). But from Equation (5.2.12), we have

RX(0) = ∫_{−∞}^{∞} SX(f) df,

which for the given power spectral density evaluates to 5000. Therefore, X(3) ∼ N(0, 5000), and the density function for X(3) is

fX(3)(x) = (1/√(10⁴π)) e^{−x²/10⁴}.


5.3.2 White Processes


The term white process is used to denote processes in which all frequency components
appear with equal power, i.e., the power spectral density is a constant for all
frequencies. This parallels the notion of white light, in which all colors exist.
Definition 5.3.3. A process X(t) is called a white process if it has a flat spectral density,
i.e., if SX(f ) is a constant for all f .
In practice, the importance of white processes stems from the fact that thermal noise
can be closely modeled as a white process over a wide range of frequencies. Figure 5.19
shows the power spectrum of a white process.

Figure 5.19 Power spectrum of a white process.

Figure 5.20 Power spectrum of thermal noise.


If we find the power content of a white process using SX(f) = C, a constant, we will
have

PX = ∫_{−∞}^{∞} C df = ∞.

Obviously, no real physical process can have infinite power; a true white process is only a convenient idealization.

The thermal noise power spectrum is shown in Figure 5.20, where k is Boltzmann's
constant (equal to 1.38 × 10⁻²³ J/K) and T denotes the temperature in kelvins. The
quantity kT is usually denoted by N0; therefore, the power spectral density of thermal
noise is usually given as Sn(f) = N0/2.
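For instance, at a reference temperature of T = 290 K (a common assumption, not stated in the text), N0 = kT is about 4 × 10⁻²¹ W/Hz, which is the familiar −174 dBm/Hz figure. A minimal sketch using the exact SI value of Boltzmann's constant:

```python
import math

k = 1.380649e-23          # Boltzmann's constant, J/K (exact SI value)
T = 290.0                 # reference temperature in kelvins (our assumption)

N0 = k * T                # N0 = kT
Sn = N0 / 2.0             # two-sided power spectral density S_n(f) = N0/2

assert abs(N0 - 4.0e-21) < 1e-22                      # about 4 x 10^-21 W/Hz
N0_dBm_per_Hz = 10.0 * math.log10(N0 / 1e-3)          # same density in dBm/Hz
assert abs(N0_dBm_per_Hz - (-174.0)) < 0.2
```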


Looking at the autocorrelation function for a white process, we see that

RX(τ) = F⁻¹[C] = Cδ(τ).

This shows that for all τ ≠ 0, we have RX(τ) = 0. Thus, if we sample a white process
at two points t1 and t2 (t1 ≠ t2), the resulting random variables will be uncorrelated.
If the random process is white and also Gaussian, any pair of random variables
X(t1), X(t2), where t1 ≠ t2, will also be independent (recall that for jointly Gaussian
random variables, uncorrelatedness and independence are equivalent).
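In discrete time, this uncorrelatedness is easy to observe: drawing independent Gaussian samples per realization, the sample correlation at two distinct times is near zero, while at equal times it equals the power. An illustrative sketch (sizes and indices are our choices):

```python
import numpy as np

rng = np.random.default_rng(5)

# 100,000 realizations of an 8-sample discrete-time white Gaussian noise process.
n = rng.normal(0.0, 1.0, size=(100_000, 8))

x1, x2 = n[:, 2], n[:, 6]            # samples at two distinct times t1 != t2
corr = np.mean(x1 * x2)              # estimate of E[X(t1) X(t2)]

assert abs(corr) < 0.02              # uncorrelated: the true value is 0
assert abs(np.mean(x1**2) - 1.0) < 0.02   # at t1 = t2 the correlation is the power
```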

Properties of the Thermal Noise. The thermal noise that we will use in
subsequent chapters is assumed to have the following properties:
1. Thermal noise is a stationary process.
2. Thermal noise is a zero-mean process.
3. Thermal noise is a Gaussian process.

4. Thermal noise is a white process with a power spectral density Sn(f) = N0/2.


It is clear that the power spectral density of thermal noise increases with increasing
ambient temperature; therefore, keeping electric circuits cool makes their noise
level low.
5.3.3 Filtered Noise Processes
In many cases, the white noise generated in one stage of a system is filtered by the
next stage; in the following stage, we therefore encounter filtered noise, which is a
bandpass process. Suppose a white noise process n(t) is filtered by an ideal bandpass
filter with bandwidth W and center frequency fc, and let X(t) denote the filter output,
so that X(f) = H1(f)N(f).


For example, one such filter can have a transfer function of the form

H1(f) = 1 for |f − fc| ≤ W/2 or |f + fc| ≤ W/2, and 0 otherwise.

Figure 5.21 shows a plot of the transfer function of this filter.

Figure 5.21 Filter transfer function H1(f).


Since thermal noise is white and Gaussian, the filtered thermal noise will be Gaussian
but not white. The power spectral density of the filtered noise will be

SX(f) = Sn(f)|H(f)|² = (N0/2)|H(f)|² = (N0/2)H(f),

where we have used the fact that for ideal filters |H(f)|² = H(f).
