
**A First Course in Digital Communications**

Ha H. Nguyen and E. Shwedyk

February 2009


Chapter 3: Probability, Random Variables, Random Processes

Introduction

The main objective of a communication system is the transfer of information over a channel.

A message signal is best modeled by a random signal: any signal that conveys information must have some uncertainty in it, otherwise its transmission is of no interest.

Two types of imperfections in a communication channel:
- Deterministic imperfections, such as linear and nonlinear distortions, inter-symbol interference, etc.
- Nondeterministic imperfections, such as the addition of noise, interference, multipath fading, etc.

We are concerned with the methods used to describe and characterize a random signal, generally referred to as a random process (also commonly called a stochastic process).

In essence, a random process is a random variable evolving in time.


Sample Space and Probability

Random experiment: its outcome, for some reason, cannot be predicted with certainty. Examples: throwing a die, flipping a coin, and drawing a card from a deck.

Sample space: the set of all possible outcomes, denoted by Ω. Outcomes are denoted by ω's, and each ω lies in Ω, i.e., ω ∈ Ω. A sample space can be discrete or continuous.

Events are subsets of the sample space for which measures of their occurrences, called probabilities, can be defined or determined.


Three Axioms of Probability

For a discrete sample space Ω, define a probability measure P on Ω as a set function that assigns nonnegative values to all events, denoted by E, in Ω such that the following conditions are satisfied:

Axiom 1: 0 ≤ P(E) ≤ 1 for all E ⊆ Ω (on a % scale probability ranges from 0 to 100%; despite popular sports lore, it is impossible to give more than 100%).

Axiom 2: P(Ω) = 1 (when an experiment is conducted there has to be an outcome).

Axiom 3: For mutually exclusive events¹ E_1, E_2, E_3, ..., we have P(⋃_{i=1}^{∞} E_i) = ∑_{i=1}^{∞} P(E_i).

¹The events E_1, E_2, E_3, ... are mutually exclusive if E_i ∩ E_j = ∅ for all i ≠ j, where ∅ is the null set.


Important Properties of the Probability Measure

1. P(E^c) = 1 − P(E), where E^c denotes the complement of E. This property implies that P(E^c) + P(E) = 1, i.e., something has to happen.

2. P(∅) = 0 (again, something has to happen).

3. P(E_1 ∪ E_2) = P(E_1) + P(E_2) − P(E_1 ∩ E_2). Note that if two events E_1 and E_2 are mutually exclusive then P(E_1 ∪ E_2) = P(E_1) + P(E_2); otherwise the nonzero common probability P(E_1 ∩ E_2) needs to be subtracted off.

4. If E_1 ⊆ E_2 then P(E_1) ≤ P(E_2). This says that if event E_1 is contained in E_2, then the occurrence of E_1 means E_2 has occurred, but the converse is not true.
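Property 3 is easy to sanity-check by brute force. Below is a minimal Python sketch (not from the slides; the fair-die events are invented for illustration) that enumerates a discrete sample space and confirms the inclusion-exclusion formula.

```python
# Check P(E1 ∪ E2) = P(E1) + P(E2) − P(E1 ∩ E2) on a fair die by enumeration.
from fractions import Fraction

omega = set(range(1, 7))                     # sample space of a fair die
P = lambda E: Fraction(len(E), len(omega))   # equally likely outcomes

E1 = {2, 4, 6}                               # "outcome is even"
E2 = {4, 5, 6}                               # "outcome is at least 4"

lhs = P(E1 | E2)
rhs = P(E1) + P(E2) - P(E1 & E2)
print(lhs, rhs)                              # both 2/3, confirming property 3
```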


Conditional Probability

We observe or are told that event E_1 has occurred but are actually interested in event E_2: knowledge that E_1 has occurred changes the probability of E_2 occurring.

If it was P(E_2) before, it now becomes P(E_2|E_1), the probability of E_2 occurring given that event E_1 has occurred.

This conditional probability is given by

P(E_2|E_1) = P(E_2 ∩ E_1)/P(E_1) if P(E_1) ≠ 0, and 0 otherwise.

If P(E_2|E_1) = P(E_2), or equivalently P(E_2 ∩ E_1) = P(E_1)P(E_2), then E_1 and E_2 are said to be statistically independent.

Bayes' rule:

P(E_2|E_1) = P(E_1|E_2)P(E_2)/P(E_1).


Total Probability Theorem

The events {E_i}_{i=1}^{n} partition the sample space Ω if:

(i) ⋃_{i=1}^{n} E_i = Ω  (1a)

(ii) E_i ∩ E_j = ∅ for all 1 ≤ i, j ≤ n and i ≠ j  (1b)

If for an event A we have the conditional probabilities {P(A|E_i)}_{i=1}^{n}, then P(A) can be obtained as

P(A) = ∑_{i=1}^{n} P(E_i)P(A|E_i).

Bayes' rule:

P(E_i|A) = P(A|E_i)P(E_i)/P(A) = P(A|E_i)P(E_i) / ∑_{j=1}^{n} P(A|E_j)P(E_j).
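As a concrete illustration in the spirit of this course, the sketch below (hypothetical numbers) applies the total probability theorem and Bayes' rule to a binary source observed through a noisy channel: E_0 and E_1 (bit 0 or bit 1 sent) partition Ω, and A is the event that a 0 is received.

```python
# Total probability and Bayes' rule for a binary source over a noisy channel.
p_E = {0: 0.6, 1: 0.4}            # priors P(E_i): assumed source statistics
p_A_given_E = {0: 0.9, 1: 0.1}    # P(A|E_i): assumed crossover probability 0.1

# Total probability: P(A) = sum_i P(E_i) P(A|E_i)
p_A = sum(p_E[i] * p_A_given_E[i] for i in (0, 1))

# Bayes' rule: P(E_i|A) = P(A|E_i) P(E_i) / P(A)
posterior = {i: p_A_given_E[i] * p_E[i] / p_A for i in (0, 1)}

print(p_A)            # 0.6*0.9 + 0.4*0.1 = 0.58
print(posterior[0])   # 0.54/0.58 ≈ 0.931: probability a 0 was sent, given a 0 received
```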


Random Variables

[Figure: a random variable maps each outcome ω ∈ Ω to a real number x(ω) ∈ R]

A random variable is a mapping from the sample space Ω to the set of real numbers.

We shall denote random variables by boldface, i.e., x, y, etc., while individual or specific values of the mapping x are denoted by x(ω).


Cumulative Distribution Function (cdf)

The cdf gives a complete description of the random variable. It is defined as:

F_x(x) = P(ω ∈ Ω : x(ω) ≤ x) = P(x ≤ x).

The cdf has the following properties:

1. 0 ≤ F_x(x) ≤ 1 (this follows from Axiom 1 of the probability measure).
2. F_x(x) is nondecreasing: F_x(x_1) ≤ F_x(x_2) if x_1 ≤ x_2 (this is because the event x(ω) ≤ x_1 is contained in the event x(ω) ≤ x_2).
3. F_x(−∞) = 0 and F_x(+∞) = 1 (x(ω) ≤ −∞ is the empty set, hence an impossible event, while x(ω) ≤ ∞ is the whole sample space, i.e., a certain event).
4. P(a < x ≤ b) = F_x(b) − F_x(a).


Typical Plots of cdf I

A random variable can be discrete, continuous or mixed.

[Figure (a): a typical cdf F_x(x)]


Typical Plots of cdf II

[Figures (b) and (c): further typical cdfs F_x(x)]


Probability Density Function (pdf)

The pdf is defined as the derivative of the cdf:

f_x(x) = dF_x(x)/dx.

It follows that:

P(x_1 ≤ x ≤ x_2) = P(x ≤ x_2) − P(x ≤ x_1) = F_x(x_2) − F_x(x_1) = ∫_{x_1}^{x_2} f_x(x)dx.

Basic properties of the pdf:

1. f_x(x) ≥ 0.
2. ∫_{−∞}^{∞} f_x(x)dx = 1.
3. In general, P(x ∈ A) = ∫_A f_x(x)dx.

For discrete random variables, it is more common to define the probability mass function (pmf): p_i = P(x = x_i). Note that, for all i, one has p_i ≥ 0 and ∑_i p_i = 1.
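A quick numerical check of these relations for a Gaussian pdf is sketched below (example parameters assumed). It uses the standard identity F_x(x) = (1/2)[1 + erf((x − µ)/(σ√2))] and compares P(x_1 ≤ x ≤ x_2) = F_x(x_2) − F_x(x_1) against a midpoint Riemann sum of the pdf.

```python
# Verify P(x1 <= x <= x2) = F(x2) - F(x1) = integral of the pdf over [x1, x2].
import math

mu, sigma = 1.0, 2.0              # assumed example parameters

def pdf(x):
    return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / math.sqrt(2 * math.pi * sigma ** 2)

def cdf(x):                       # Gaussian cdf via the error function
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

x1, x2, n = -1.0, 3.0, 100_000    # here [x1, x2] = [mu - sigma, mu + sigma]
dx = (x2 - x1) / n
riemann = sum(pdf(x1 + (k + 0.5) * dx) for k in range(n)) * dx

print(cdf(x2) - cdf(x1))          # ≈ 0.6827
print(riemann)                    # matches to several decimals
```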


Bernoulli Random Variable

[Figure: the Bernoulli pdf f_x(x), with mass 1 − p at x = 0 and mass p at x = 1, and the corresponding staircase cdf F_x(x)]

A discrete random variable that takes two values 1 and 0 with probabilities p and 1 − p.

A good model for a binary data source whose output is 1 or 0. It can also be used to model the channel errors.


Binomial Random Variable

[Figure: a binomial pmf f_x(x)]

A discrete random variable that gives the number of 1's in a sequence of n independent Bernoulli trials.

f_x(x) = ∑_{k=0}^{n} (n choose k) p^k (1 − p)^{n−k} δ(x − k),  where (n choose k) = n!/(k!(n − k)!).
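A short sketch of this pmf in Python (n and p are example values); the weights are nonnegative and sum to one, as required of any pmf.

```python
# Binomial pmf weights p_k = C(n,k) p^k (1-p)^(n-k).
from math import comb

n, p = 6, 0.3
pmf = [comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)]

print(pmf[2])                     # C(6,2) 0.3^2 0.7^4 ≈ 0.3241
print(sum(pmf))                   # 1.0 (up to floating-point rounding)
```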


Uniform Random Variable

[Figure: the uniform pdf f_x(x) of height 1/(b − a) on [a, b], and the corresponding ramp cdf F_x(x)]

A continuous random variable that takes values between a and b with equal probabilities over intervals of equal length.

The phase of a received sinusoidal carrier is usually modeled as a uniform random variable between 0 and 2π. Quantization error is also typically modeled as uniform.


Gaussian (or Normal) Random Variable

[Figure: the Gaussian pdf f_x(x), with peak 1/√(2πσ²) at x = µ, and the corresponding cdf F_x(x), which passes through 1/2 at x = µ]

A continuous random variable whose pdf is:

f_x(x) = (1/√(2πσ²)) exp(−(x − µ)²/(2σ²)),

where µ and σ² are parameters. Usually denoted as N(µ, σ²).

This is the most important and frequently encountered random variable in communications.


Functions of A Random Variable

The function y = g(x) is itself a random variable.

From the definition, the cdf of y can be written as

F_y(y) = P(ω ∈ Ω : g(x(ω)) ≤ y).

Assume that for all y, the equation g(x) = y has a countable number of solutions, and at each solution point dg(x)/dx exists and is nonzero. Then the pdf of y = g(x) is:

f_y(y) = ∑_i f_x(x_i) / |dg(x)/dx|_{x=x_i},

where {x_i} are the solutions of g(x) = y.

A linear function of a Gaussian random variable is itself a Gaussian random variable.
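The transformation formula can be checked by simulation. Below is a sketch (assumed example: y = x² with x ~ N(0, 1)); for y > 0 the equation g(x) = y has the two solutions x = ±√y with |dg/dx| = 2√y, so the formula gives f_y(y) = [f_x(√y) + f_x(−√y)]/(2√y).

```python
# Compare the transformed pdf of y = x^2 against a Monte Carlo estimate.
import math
import random

def f_x(x):                        # standard Gaussian pdf
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def f_y(y):                        # pdf of y = x^2 from the transformation formula
    r = math.sqrt(y)
    return (f_x(r) + f_x(-r)) / (2 * r)

# P(a <= y <= b) from the formula (Riemann sum) versus simulation.
a, b, n = 0.5, 2.0, 200_000
dx = (b - a) / 1000
analytic = sum(f_y(a + (k + 0.5) * dx) for k in range(1000)) * dx
simulated = sum(a <= random.gauss(0, 1) ** 2 <= b for _ in range(n)) / n
print(analytic, simulated)         # both ≈ 0.32
```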


Expectation of Random Variables I

Statistical averages, or moments, play an important role in the characterization of the random variable.

The expected value (also called the mean value, or first moment) of the random variable x is defined as

m_x = E{x} ≡ ∫_{−∞}^{∞} x f_x(x)dx,

where E denotes the statistical expectation operator.

In general, the nth moment of x is defined as

E{x^n} ≡ ∫_{−∞}^{∞} x^n f_x(x)dx.

For n = 2, E{x²} is known as the mean-squared value of the random variable.


Expectation of Random Variables II

The nth central moment of the random variable x is:

E{y} = E{(x − m_x)^n} = ∫_{−∞}^{∞} (x − m_x)^n f_x(x)dx.

When n = 2 the central moment is called the variance, commonly denoted as σ_x²:

σ_x² = var(x) = E{(x − m_x)²} = ∫_{−∞}^{∞} (x − m_x)² f_x(x)dx.

The variance provides a measure of the variable's "randomness".

The mean and variance of a random variable give a partial description of its pdf.


Expectation of Random Variables III

Relationship between the variance and the first and second moments:

σ_x² = E{x²} − [E{x}]² = E{x²} − m_x².

An electrical engineering interpretation: the AC power equals total power minus DC power.

The square root of the variance is known as the standard deviation, and can be interpreted as the root-mean-squared (RMS) value of the AC component.
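The identity and its power interpretation can be seen on simulated data; the sketch below assumes x ~ N(3, 4), so the DC power is 9, the AC power is 4, and the total power is 13.

```python
# Sample check of sigma_x^2 = E{x^2} - m_x^2 (AC power = total - DC power).
import random

xs = [random.gauss(3.0, 2.0) for _ in range(100_000)]  # assumed N(3, 4)

m = sum(xs) / len(xs)                       # mean (DC value)
msv = sum(x * x for x in xs) / len(xs)      # mean-squared value (total power)
var = msv - m * m                           # variance (AC power)

print(m, msv, var)      # ≈ 3, ≈ 13, ≈ 4  (13 = 9 + 4)
print(var ** 0.5)       # RMS value of the AC component ≈ 2
```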


The Gaussian Random Variable

[Figure (a): a muscle (EMG) signal, signal amplitude (volts) versus t (sec)]


[Figure (b): histogram of the signal amplitudes with Gaussian and Laplacian pdf fits, f_x(x) (1/volts) versus x (volts)]

f_x(x) = (1/√(2πσ_x²)) e^{−(x − m_x)²/(2σ_x²)}  (Gaussian)

f_x(x) = (a/2) e^{−a|x|}  (Laplacian)


Gaussian Distribution (Univariate)

[Figure: Gaussian pdfs f_x(x) for σ_x = 1, 2, 5]

| Range (±kσ_x)                    | k = 1 | k = 2 | k = 3 | k = 4 |
|----------------------------------|-------|-------|-------|-------|
| P(m_x − kσ_x < x ≤ m_x + kσ_x)   | 0.683 | 0.955 | 0.997 | 0.999 |
| Error probability                | 10⁻³  | 10⁻⁴  | 10⁻⁶  | 10⁻⁸  |
| Distance from the mean (in σ_x)  | 3.09  | 3.72  | 4.75  | 5.61  |
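The first row of the table follows directly from the Gaussian cdf: P(m_x − kσ_x < x ≤ m_x + kσ_x) = erf(k/√2). A two-line Python check:

```python
# Reproduce the ±k·sigma probabilities of the table.
import math

for k in (1, 2, 3, 4):
    print(k, math.erf(k / math.sqrt(2.0)))
# 1 0.6827, 2 0.9545, 3 0.9973, 4 0.99994 -- matching 0.683, 0.955, 0.997, 0.999
```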


Multiple Random Variables I

Often encountered when dealing with combined experiments or repeated trials of a single experiment.

Multiple random variables are basically multidimensional functions defined on a sample space of a combined experiment.

Let x and y be two random variables defined on the same sample space Ω. The joint cumulative distribution function is defined as

F_{x,y}(x, y) = P(x ≤ x, y ≤ y).

Similarly, the joint probability density function is:

f_{x,y}(x, y) = ∂²F_{x,y}(x, y)/(∂x∂y).


Multiple Random Variables II

When the joint pdf is integrated over one of the variables, one obtains the pdf of the other variable, called the marginal pdf:

∫_{−∞}^{∞} f_{x,y}(x, y)dx = f_y(y),  ∫_{−∞}^{∞} f_{x,y}(x, y)dy = f_x(x).

Note that:

∫_{−∞}^{∞} ∫_{−∞}^{∞} f_{x,y}(x, y)dxdy = F(∞, ∞) = 1,

F_{x,y}(−∞, −∞) = F_{x,y}(−∞, y) = F_{x,y}(x, −∞) = 0.


Multiple Random Variables III

The conditional pdf of the random variable y, given that the value of the random variable x is equal to x, is defined as

f_y(y|x) = f_{x,y}(x, y)/f_x(x) if f_x(x) ≠ 0, and 0 otherwise.

Two random variables x and y are statistically independent if and only if

f_y(y|x) = f_y(y), or equivalently f_{x,y}(x, y) = f_x(x)f_y(y).

The joint moment is defined as

E{x^j y^k} = ∫_{−∞}^{∞} ∫_{−∞}^{∞} x^j y^k f_{x,y}(x, y)dxdy.


Multiple Random Variables IV

The joint central moment is

E{(x − m_x)^j (y − m_y)^k} = ∫_{−∞}^{∞} ∫_{−∞}^{∞} (x − m_x)^j (y − m_y)^k f_{x,y}(x, y)dxdy,

where m_x = E{x} and m_y = E{y}.

The most important moments are

E{xy} ≡ ∫_{−∞}^{∞} ∫_{−∞}^{∞} xy f_{x,y}(x, y)dxdy  (correlation),

cov{x, y} ≡ E{(x − m_x)(y − m_y)} = E{xy} − m_x m_y  (covariance).


Multiple Random Variables V

Let σ_x² and σ_y² be the variances of x and y. The covariance normalized w.r.t. σ_x σ_y is called the correlation coefficient:

ρ_{x,y} = cov{x, y}/(σ_x σ_y).

ρ_{x,y} indicates the degree of linear dependence between two random variables.

It can be shown that |ρ_{x,y}| ≤ 1.

ρ_{x,y} = ±1 implies an increasing/decreasing linear relationship.

If ρ_{x,y} = 0, x and y are said to be uncorrelated.

It is easy to verify that if x and y are independent, then ρ_{x,y} = 0: independence implies lack of correlation.

However, lack of correlation (no linear relationship) does not in general imply statistical independence.


Examples of Uncorrelated Dependent Random Variables

Example 1: Let x be a discrete random variable that takes on {−1, 0, 1} with probabilities {1/4, 1/2, 1/4}, respectively. The random variables y = x³ and z = x² are uncorrelated but dependent.

Example 2: Let x be a uniform random variable over [−1, 1]. Then the random variables y = x and z = x² are uncorrelated but dependent.

Example 3: Let x be a Gaussian random variable with zero mean and unit variance (standard normal distribution). The random variables y = x and z = |x| are uncorrelated but dependent.

Example 4: Let u and v be two random variables (discrete or continuous) with the same probability density function. Then x = u − v and y = u + v are uncorrelated dependent random variables.


Example 1

x ∈ {−1, 0, 1} with probabilities {1/4, 1/2, 1/4}
⇒ y = x³ ∈ {−1, 0, 1} with probabilities {1/4, 1/2, 1/4}
⇒ z = x² ∈ {0, 1} with probabilities {1/2, 1/2}

m_y = (−1)(1/4) + (0)(1/2) + (1)(1/4) = 0;  m_z = (0)(1/2) + (1)(1/2) = 1/2.

The joint pmf (similar to pdf) of y and z:

P(y = −1, z = 0) = 0
P(y = −1, z = 1) = P(x = −1) = 1/4
P(y = 0, z = 0) = P(x = 0) = 1/2
P(y = 0, z = 1) = 0
P(y = 1, z = 0) = 0
P(y = 1, z = 1) = P(x = 1) = 1/4

Therefore, E{yz} = (−1)(1)(1/4) + (0)(0)(1/2) + (1)(1)(1/4) = 0
⇒ cov{y, z} = E{yz} − m_y m_z = 0 − (0)(1/2) = 0!

Yet y and z are clearly dependent: for instance, P(y = 1, z = 1) = 1/4 ≠ P(y = 1)P(z = 1) = 1/8.
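The whole computation can be verified by enumerating the joint pmf in Python (a direct transcription of the example, nothing assumed):

```python
# Example 1: y = x^3 and z = x^2 are uncorrelated yet dependent.
from fractions import Fraction as F

p_x = {-1: F(1, 4), 0: F(1, 2), 1: F(1, 4)}
joint = {}                                   # joint pmf of (y, z)
for x, p in p_x.items():
    key = (x ** 3, x ** 2)
    joint[key] = joint.get(key, 0) + p

m_y = sum(y * p for (y, _), p in joint.items())
m_z = sum(z * p for (_, z), p in joint.items())
E_yz = sum(y * z * p for (y, z), p in joint.items())
print(E_yz - m_y * m_z)                      # 0 -> uncorrelated

p_y1 = sum(p for (y, _), p in joint.items() if y == 1)   # 1/4
p_z1 = sum(p for (_, z), p in joint.items() if z == 1)   # 1/2
print(joint[(1, 1)], p_y1 * p_z1)            # 1/4 vs 1/8 -> dependent
```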


Jointly Gaussian Distribution (Bivariate)

f_{x,y}(x, y) = 1/(2πσ_x σ_y √(1 − ρ_{x,y}²)) × exp{ −1/(2(1 − ρ_{x,y}²)) [ (x − m_x)²/σ_x² − 2ρ_{x,y}(x − m_x)(y − m_y)/(σ_x σ_y) + (y − m_y)²/σ_y² ] },

where m_x, m_y are the means and σ_x², σ_y² the variances.

ρ_{x,y} is indeed the correlation coefficient.

The marginal densities are Gaussian: f_x(x) ∼ N(m_x, σ_x²) and f_y(y) ∼ N(m_y, σ_y²).

When ρ_{x,y} = 0 → f_{x,y}(x, y) = f_x(x)f_y(y) → the random variables x and y are statistically independent.

Uncorrelatedness means that jointly Gaussian random variables are statistically independent. The converse is not true.

A weighted sum of two jointly Gaussian random variables is also Gaussian.


[Figure: joint pdf f_{x,y}(x, y) and its contours for σ_x = σ_y = 1 and ρ_{x,y} = 0]

[Figure: joint pdf f_{x,y}(x, y), a cross-section, and contours for σ_x = σ_y = 1 and ρ_{x,y} = 0.3]

[Figure: joint pdf f_{x,y}(x, y), a cross-section, and contours for σ_x = σ_y = 1 and ρ_{x,y} = 0.7]

[Figure: joint pdf f_{x,y}(x, y) and its contours for σ_x = σ_y = 1 and ρ_{x,y} = 0.95]

Multivariate Gaussian pdf

Define the vector x = [x_1, x_2, ..., x_n], the vector of the means m = [m_1, m_2, ..., m_n], and the n × n covariance matrix C with C_{i,j} = cov(x_i, x_j) = E{(x_i − m_i)(x_j − m_j)}.

The random variables {x_i}_{i=1}^{n} are jointly Gaussian if:

f_{x_1,x_2,...,x_n}(x_1, x_2, ..., x_n) = (1/√((2π)^n det(C))) exp{ −(1/2)(x − m)C^{−1}(x − m)^⊤ }.

If C is diagonal (i.e., the random variables {x_i}_{i=1}^{n} are all uncorrelated), the joint pdf is a product of the marginal pdfs: uncorrelatedness implies statistical independence for multiple Gaussian random variables.
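A small numpy sketch of this density (example numbers assumed): with a diagonal C, evaluating the joint pdf and the product of the univariate marginals at the same point gives the same value, illustrating the independence claim.

```python
# Evaluate the n-variate Gaussian pdf; diagonal C factors into marginals.
import numpy as np

def gaussian_pdf(x, m, C):
    n = len(m)
    d = x - m
    norm = np.sqrt((2 * np.pi) ** n * np.linalg.det(C))
    return np.exp(-0.5 * d @ np.linalg.inv(C) @ d) / norm

m = np.array([1.0, -2.0])                    # example means
C = np.diag([4.0, 9.0])                      # diagonal covariance (uncorrelated)
x = np.array([0.5, -1.0])

joint = gaussian_pdf(x, m, C)
marg = np.prod([gaussian_pdf(x[i:i+1], m[i:i+1], C[i:i+1, i:i+1]) for i in range(2)])
print(joint, marg)                           # equal: uncorrelated Gaussians are independent
```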


Random Processes I

[Figure: an ensemble of time functions x_1(t, ω_1), x_2(t, ω_2), ..., x_M(t, ω_M); fixing a time instant t_k gives the random variable x(t_k, ω)]

A random process is a mapping from a sample space to a set of time functions.


Random Processes II

Ensemble: the set of possible time functions that one sees. Denote this set by x(t), where the time functions x_1(t, ω_1), x_2(t, ω_2), x_3(t, ω_3), ... are specific members of the ensemble.

At any time instant, t = t_k, we have a random variable x(t_k).

At any two time instants, say t_1 and t_2, we have two different random variables x(t_1) and x(t_2). Any relationship between them is described by the joint pdf f_{x(t_1),x(t_2)}(x_1, x_2; t_1, t_2).

A complete description of the random process is determined by the joint pdf f_{x(t_1),x(t_2),...,x(t_N)}(x_1, x_2, ..., x_N; t_1, t_2, ..., t_N).

The most important joint pdfs are the first-order pdf f_{x(t)}(x; t) and the second-order pdf f_{x(t_1),x(t_2)}(x_1, x_2; t_1, t_2).


Examples of Random Processes I

[Figures: (a) thermal noise and (b) uniform phase, sample functions versus t]


Examples of Random Processes II

[Figures: (c) Rayleigh fading process and (d) binary random data taking values ±V with bit duration T_b]


Classiﬁcation of Random Processes

Based on whether its statistics change with time, a process is non-stationary or stationary.

Different levels of stationarity:

- Strictly stationary: the joint pdf of any order is independent of a shift in time.

- Nth-order stationarity: the joint pdf does not depend on the time shift, but depends on the time spacings:

f_{x(t_1),x(t_2),...,x(t_N)}(x_1, x_2, ..., x_N; t_1, t_2, ..., t_N) = f_{x(t_1+t),x(t_2+t),...,x(t_N+t)}(x_1, x_2, ..., x_N; t_1 + t, t_2 + t, ..., t_N + t).

- First- and second-order stationarity:

f_{x(t_1)}(x; t_1) = f_{x(t_1+t)}(x; t_1 + t) = f_{x(t)}(x),

f_{x(t_1),x(t_2)}(x_1, x_2; t_1, t_2) = f_{x(t_1+t),x(t_2+t)}(x_1, x_2; t_1 + t, t_2 + t) = f_{x(t_1),x(t_2)}(x_1, x_2; τ),  τ = t_2 − t_1.


Statistical Averages or Joint Moments

Consider N random variables x(t_1), x(t_2), ..., x(t_N). The joint moments of these random variables are

E{x^{k_1}(t_1) x^{k_2}(t_2) ··· x^{k_N}(t_N)} = ∫_{x_1=−∞}^{∞} ··· ∫_{x_N=−∞}^{∞} x_1^{k_1} x_2^{k_2} ··· x_N^{k_N} f_{x(t_1),x(t_2),...,x(t_N)}(x_1, x_2, ..., x_N; t_1, t_2, ..., t_N) dx_1 dx_2 ... dx_N,

for all integers k_j ≥ 1 and N ≥ 1.

We shall only consider the first- and second-order moments, i.e., E{x(t)}, E{x²(t)} and E{x(t_1)x(t_2)}. They are the mean value, the mean-squared value and the (auto)correlation.


Mean Value or the First Moment

The mean value of the process at time t is

m_x(t) = E{x(t)} = ∫_{−∞}^{∞} x f_{x(t)}(x; t)dx.

The average is across the ensemble, and if the pdf varies with time then the mean value is a (deterministic) function of time.

If the process is stationary then the mean is independent of t, i.e., a constant:

m_x = E{x(t)} = ∫_{−∞}^{∞} x f_x(x)dx.


Mean-Squared Value or the Second Moment

This is defined as

MSV_x(t) = E{x²(t)} = ∫_{−∞}^{∞} x² f_{x(t)}(x; t)dx  (non-stationary),

MSV_x = E{x²(t)} = ∫_{−∞}^{∞} x² f_x(x)dx  (stationary).

The second central moment (or the variance) is:

σ_x²(t) = E{[x(t) − m_x(t)]²} = MSV_x(t) − m_x²(t)  (non-stationary),

σ_x² = E{[x(t) − m_x]²} = MSV_x − m_x²  (stationary).


Correlation

The autocorrelation function completely describes the power spectral density of the random process.

It is defined as the correlation between the two random variables x_1 = x(t_1) and x_2 = x(t_2):

R_x(t_1, t_2) = E{x(t_1)x(t_2)} = ∫_{x_1=−∞}^{∞} ∫_{x_2=−∞}^{∞} x_1 x_2 f_{x_1,x_2}(x_1, x_2; t_1, t_2)dx_1 dx_2.

For a stationary process:

R_x(τ) = E{x(t)x(t + τ)} = ∫_{x_1=−∞}^{∞} ∫_{x_2=−∞}^{∞} x_1 x_2 f_{x_1,x_2}(x_1, x_2; τ)dx_1 dx_2.

Wide-sense stationary (WSS) process: E{x(t)} = m_x for any t, and R_x(t_1, t_2) = R_x(τ) for τ = t_2 − t_1.


Properties of the Autocorrelation Function

1. R_x(τ) = R_x(−τ). It is an even function of τ because the same set of product values is averaged across the ensemble, regardless of the direction of translation.

2. |R_x(τ)| ≤ R_x(0). The maximum always occurs at τ = 0, though there may be other values of τ for which it is as big. Further, R_x(0) is the mean-squared value of the random process.

3. If for some τ_0 we have R_x(τ_0) = R_x(0), then for all integers k, R_x(kτ_0) = R_x(0).

4. If m_x ≠ 0 then R_x(τ) will have a constant component equal to m_x².

5. Autocorrelation functions cannot have an arbitrary shape. The restriction on the shape arises from the fact that the Fourier transform of an autocorrelation function must be greater than or equal to zero, i.e., F{R_x(τ)} ≥ 0.


Power Spectral Density of a Random Process (I)

Taking the Fourier transform of the random process does not work.

[Figure: the time-domain ensemble x_1(t, ω_1), ..., x_M(t, ω_M) and the corresponding frequency-domain ensemble of magnitude spectra |X_1(f, ω_1)|, ..., |X_M(f, ω_M)|]


Power Spectral Density of a Random Process (II)

We need to determine how the average power of the process is distributed in frequency.

Define a truncated process:

x_T(t) = x(t) for −T ≤ t ≤ T, and 0 otherwise.

Consider the Fourier transform of this truncated process:

X_T(f) = ∫_{−∞}^{∞} x_T(t)e^{−j2πft}dt.

Average the energy over the total time, 2T:

P = (1/2T) ∫_{−T}^{T} x_T²(t)dt = (1/2T) ∫_{−∞}^{∞} |X_T(f)|² df  (watts).


Power Spectral Density of a Random Process (III)

Find the average value of P:

E{P} = E{(1/2T) ∫_{−T}^{T} x_T²(t)dt} = E{(1/2T) ∫_{−∞}^{∞} |X_T(f)|² df}.

Take the limit as T → ∞:

lim_{T→∞} (1/2T) ∫_{−T}^{T} E{x_T²(t)}dt = lim_{T→∞} (1/2T) ∫_{−∞}^{∞} E{|X_T(f)|²}df.

It follows that

MSV_x = lim_{T→∞} (1/2T) ∫_{−T}^{T} E{x_T²(t)}dt = ∫_{−∞}^{∞} lim_{T→∞} (E{|X_T(f)|²}/2T) df  (watts).


Power Spectral Density of a Random Process (IV)

Finally,

S_x(f) = lim_{T→∞} E{|X_T(f)|²}/(2T)  (watts/Hz)

is the power spectral density of the process.

It can be shown that the power spectral density and the autocorrelation function are a Fourier transform pair:

R_x(τ) ←→ S_x(f) = ∫_{τ=−∞}^{∞} R_x(τ)e^{−j2πfτ}dτ.
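This transform pair can be checked numerically. The sketch below (example parameters assumed) takes the exponential autocorrelation R(τ) = e^{−|τ|/t_0}, whose known transform is S(f) = 2t_0/(1 + (2πf t_0)²), and compares the closed form against a direct Riemann sum of the transform integral.

```python
# Numerical check of the R(tau) <-> S(f) Fourier-transform pair.
import cmath
import math

t0, f = 0.5, 0.8                       # example parameters

R = lambda tau: math.exp(-abs(tau) / t0)
analytic = 2 * t0 / (1 + (2 * math.pi * f * t0) ** 2)

dtau, T = 1e-3, 20.0                   # integrate over [-T, T]; R is negligible beyond
n = int(2 * T / dtau)
num = sum(R(-T + k * dtau) * cmath.exp(-2j * math.pi * f * (-T + k * dtau))
          for k in range(n)) * dtau

print(analytic, num.real)              # both ≈ 0.137
```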


Time Averaging and Ergodicity

An ergodic process is one where any member of the ensemble exhibits the same statistical behavior as that of the whole ensemble.

All time averages on a single ensemble member are equal to the corresponding ensemble average:

E{x^n(t)} = ∫_{−∞}^{∞} x^n f_x(x)dx = lim_{T→∞} (1/2T) ∫_{−T}^{T} [x_k(t, ω_k)]^n dt,  ∀ n, k.

For an ergodic process: to measure various statistical averages, it is sufficient to look at only one realization of the process and find the corresponding time average.

For a process to be ergodic it must be stationary. The converse is not true.


Examples of Random Processes

(Example 3.4) x(t) = A cos(2πf_0 t + Θ), where Θ is a random variable uniformly distributed on [0, 2π]. This process is both stationary and ergodic.

(Example 3.5) x(t) = x, where x is a random variable uniformly distributed on [−A, A], where A > 0. This process is WSS, but not ergodic.

(Example 3.6) x(t) = A cos(2πf_0 t + Θ), where A is a zero-mean random variable with variance σ_A², and Θ is uniform on [0, 2π]. Furthermore, A and Θ are statistically independent. This process is not ergodic, but strictly stationary.
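Ergodicity in Example 3.4 can be illustrated by simulation: for a single realization (one draw of Θ), the time-averaged mean and autocorrelation should approach the ensemble values 0 and (A²/2)cos(2πf_0 τ). A sketch with assumed amplitude and frequency:

```python
# Time averages of one realization of x(t) = A cos(2*pi*f0*t + Theta).
import math
import random

A, f0 = 2.0, 5.0                         # assumed amplitude and frequency
theta = random.uniform(0, 2 * math.pi)   # one realization = one draw of Theta
dt, N = 1e-4, 200_000                    # 20 s observation on a fine time grid

x = [A * math.cos(2 * math.pi * f0 * k * dt + theta) for k in range(N)]

mean_t = sum(x) / N
tau_steps = 250                          # tau = 0.025 s
R_t = sum(x[k] * x[k + tau_steps] for k in range(N - tau_steps)) / (N - tau_steps)

print(mean_t)                            # ≈ 0, the ensemble mean
print(R_t, (A**2 / 2) * math.cos(2 * math.pi * f0 * tau_steps * dt))  # ≈ equal
```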


Random Processes and LTI Systems

[Figure: a linear, time-invariant (LTI) system with impulse response h(t) ←→ H(f); the input x(t) is characterized by m_x and R_x(τ) ←→ S_x(f), the output y(t) by m_y and R_y(τ) ←→ S_y(f), with cross-correlation R_{x,y}(τ)]

m_y = E{y(t)} = E{∫_{−∞}^{∞} h(λ)x(t − λ)dλ} = m_x H(0),

S_y(f) = |H(f)|² S_x(f),

R_y(τ) = h(τ) ∗ h(−τ) ∗ R_x(τ).


Thermal Noise in Communication Systems

A natural noise source is thermal noise, whose amplitude statistics are well modeled as Gaussian with zero mean.

The autocorrelation and PSD are well modeled as:

R_w(τ) = kθG e^{−|τ|/t_0}/t_0  (watts),

S_w(f) = 2kθG/(1 + (2πf t_0)²)  (watts/Hz),

where k = 1.38 × 10⁻²³ joule/°K is Boltzmann's constant, G is the conductance of the resistor (mhos), θ is the temperature in degrees Kelvin, and t_0 is the statistical average of time intervals between collisions of free electrons in the resistor (on the order of 10⁻¹² sec).


[Figure: (a) power spectral density S_w(f) (watts/Hz) versus f (GHz), and (b) autocorrelation R_w(τ) (watts) versus τ (pico-sec), comparing thermal noise with white noise of PSD N_0/2 and autocorrelation (N_0/2)δ(τ)]


The noise PSD is approximately flat over the frequency range of 0 to 10 GHz ⇒ let the spectrum be flat from 0 to ∞:

S_w(f) = N_0/2  (watts/Hz),

where N_0 = 4kθG is a constant.

Noise that has a uniform spectrum over the entire frequency range is referred to as white noise.

The autocorrelation of white noise is

R_w(τ) = (N_0/2)δ(τ)  (watts).

Since R_w(τ) = 0 for τ ≠ 0, any two different samples of white noise, no matter how close in time they are taken, are uncorrelated.

Since the noise samples of white noise are uncorrelated, if the noise is both white and Gaussian (for example, thermal noise) then the noise samples are also independent.


Example

Suppose that a (WSS) white noise process, x(t), of zero mean and power spectral density N_0/2 is applied to the input of the filter shown below.

(a) Find and sketch the power spectral density and autocorrelation function of the random process y(t) at the output of the filter.

(b) What are the mean and variance of the output process y(t)?

[Figure: an RL filter with input x(t) and output y(t) taken across R, so that H(f) = R/(R + j2πfL)]


H(f) = R/(R + j2πfL) = 1/(1 + j2πfL/R).

S_y(f) = (N_0/2) · 1/(1 + (2πL/R)² f²) ←→ R_y(τ) = (N_0 R/(4L)) e^{−(R/L)|τ|}.

[Figure: S_y(f) (watts/Hz), peaking at N_0/2 at f = 0, and R_y(τ) (watts), a double-sided exponential peaking at N_0 R/(4L) at τ = 0]

Since H(0) = 1 and m_x = 0, the output mean is m_y = m_x H(0) = 0, and the output variance is σ_y² = R_y(0) − m_y² = N_0 R/(4L).
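As a numerical cross-check of part (b), the sketch below (component values are assumed for illustration) integrates S_y(f) over frequency and compares the result with the closed form N_0 R/(4L) = R_y(0):

```python
# Output variance of the RL filter: integral of S_y(f) vs. N0*R/(4L).
import math

N0, R, L = 2e-8, 100.0, 1e-3            # assumed values: watts/Hz, ohms, henries

S_y = lambda f: (N0 / 2) / (1 + (2 * math.pi * L * f / R) ** 2)

F, df = 10e6, 100.0                      # integrate S_y over [-F, F]
n = int(2 * F / df)
var_numeric = sum(S_y(-F + (k + 0.5) * df) for k in range(n)) * df

print(var_numeric, N0 * R / (4 * L))     # both ≈ 5e-4 watts (and the mean is m_y = 0)
```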
