
EEN 443
Communication Systems II
Class Notes – Chapter 8
Random Signals and Noise

Dr. Jad Atallah
Spring Semester - 2021

Introduction

• Random signals in one form or another are encountered in every practical communications system:
  – Information
  – Noise
• Noise may be defined as any unwanted signal interfering with or distorting the signal being communicated.
Notre Dame University-Louaize - EEN 443 Communication Systems II - Spring 2021 2

Probability and Random Variables

• A random experiment has the following properties:
  – On any trial of the experiment, the outcome is unpredictable.
  – For a large number of trials of the experiment, the outcomes exhibit statistical regularity. That is, a definite average pattern of outcomes is observed if the experiment is repeated a large number of times.

Relative-Frequency Approach

• Suppose that in n trials of the experiment, event A occurs n_A times. We may then assign the ratio n_A/n to the event A. This ratio is called the relative frequency of the event A:

  0 ≤ n_A/n ≤ 1

• The experiment exhibits statistical regularity if, for any sequence of n trials, the relative frequency n_A/n converges to a limit as n becomes large. We define this limit to be the probability of event A:

  P[A] = lim_{n→∞} (n_A / n)
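The convergence of the relative frequency to the probability is easy to see numerically. A minimal Python sketch (not part of the original notes; the fair-coin event and trial counts are illustrative):

```python
import random

# Estimate P[A] for the event A = "a fair coin shows heads" from the
# relative frequency n_A / n over an increasing number of trials.
random.seed(0)

def relative_frequency(n):
    n_A = sum(random.random() < 0.5 for _ in range(n))  # occurrences of A
    return n_A / n

# The relative frequency settles near the true probability 0.5 as n grows.
estimates = {n: relative_frequency(n) for n in (100, 10_000, 1_000_000)}
```

The spread of the estimates around 0.5 shrinks roughly as 1/√n, which is the statistical regularity the slide describes.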
Axioms of Probability

• With each possible outcome of the experiment, we associate a point called the sample point, which we denote by s_k.
• A probability system consists of the triple:
  1. A sample space S of elementary events (outcomes).
  2. A class H of events that are subsets of S.
  3. A probability measure P[A] assigned to each event A in the class H, which has the following properties:
     (i) P[S] = 1
     (ii) 0 ≤ P[A] ≤ 1
     (iii) If A ∪ B is the union of two mutually exclusive events in the class H, then

       P[A ∪ B] = P[A] + P[B]

Random Variables

• We use the expression random variable to describe the process of assigning a number to the outcome of a random experiment.
• If the outcome of the experiment is s, we denote the random variable as X(s) or just X.
• We denote a particular outcome of a random experiment by x; that is, X(s_k) = x.
• Random variables may be either discrete or continuous.

Random Variables II

• Relationship between sample space, random variables, and probability.

Random Variables III

• For a discrete-valued random variable, the probability mass function describes the probability of each possible value of the random variable.
• Define the probability mass function of a Bernoulli random variable as

  P[X = x] = 1 − p,  x = 0
             p,      x = 1
             0,      otherwise
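A short illustration of the Bernoulli pmf (a sketch, not part of the original notes; the value p = 0.3 is assumed): sampling the variable and checking the empirical frequencies against P[X = 1] = p and P[X = 0] = 1 − p.

```python
import random

# Sample a Bernoulli random variable with parameter p and compare the
# empirical frequencies against the pmf: P[X=1] = p, P[X=0] = 1 - p.
random.seed(1)
p = 0.3
N = 200_000
samples = [1 if random.random() < p else 0 for _ in range(N)]

freq_1 = samples.count(1) / N   # should approach p
freq_0 = samples.count(0) / N   # should approach 1 - p
```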
Distribution Functions

• The probability distribution function F_X(x) is defined as

  F_X(x) = P[X ≤ x]

• The probability distribution function has two basic properties:
  1. It is bounded between zero and one.
  2. It is a monotone non-decreasing function of x.

Distribution Functions II

• If X is a continuous-valued random variable and F_X(x) is differentiable with respect to x, then the probability density function, denoted by f_X(x), is defined as

  f_X(x) = ∂F_X(x)/∂x

Distribution Functions III

• The probability density function has three basic properties:
  1. Since the distribution function is monotone non-decreasing, it follows that the density function is nonnegative for all values of x.
  2. The distribution function may be recovered from the density function by integration, as shown by

     F_X(x) = ∫_{−∞}^{x} f_X(s) ds

  3. Property 2, together with F_X(∞) = 1, implies that the total area under the curve of the density function is unity.

Distribution Functions IV

• The probability distribution function of the Bernoulli random variable is given by

  F_X(x) = 0,      x < 0
           1 − p,  0 ≤ x < 1
           1,      x ≥ 1

• A uniformly distributed random variable has the probability density function

  f_X(x) = 1/(b − a),  a ≤ x ≤ b
           0,          otherwise
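As a sketch of the uniform case (the interval [2, 6] and the evaluation point are chosen arbitrarily, not taken from the notes), the empirical distribution function can be compared with the closed form F_X(x) = (x − a)/(b − a):

```python
import random

# For X uniform on [a, b], F_X(x) = (x - a)/(b - a) for a <= x <= b.
# Compare the empirical distribution function with this closed form.
random.seed(2)
a, b = 2.0, 6.0
N = 100_000
samples = [random.uniform(a, b) for _ in range(N)]

def empirical_cdf(x):
    # Fraction of samples at or below x, an estimate of P[X <= x]
    return sum(s <= x for s in samples) / N

x0 = 3.0
analytic = (x0 - a) / (b - a)    # 0.25
empirical = empirical_cdf(x0)
```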
Several Random Variables

• We define the joint distribution function F_{X,Y}(x, y) as the probability that the random variable X is less than or equal to a specified value x and that the random variable Y is less than or equal to a specified value y:

  F_{X,Y}(x, y) = P[X ≤ x, Y ≤ y]

Several Random Variables II

• Suppose that the joint distribution function F_{X,Y}(x, y) is continuous everywhere, and that the partial derivative

  f_{X,Y}(x, y) = ∂²F_{X,Y}(x, y) / (∂x ∂y)

  exists and is continuous everywhere. We call the function f_{X,Y}(x, y) the joint probability density function of the random variables X and Y.

Several Random Variables III

• Two random variables X and Y are statistically independent if the outcome of X does not affect the outcome of Y.
• In this case, the joint probability factors as

  P[X ∈ A, Y ∈ B] = P[X ∈ A] P[Y ∈ B]

  or, equivalently,

  P[X, Y] = P[X] P[Y]

• Additionally,

  F_{X,Y}(x, y) = F_X(x) F_Y(y)

Conditional Probability

• Let P[Y|X] denote the probability mass function of Y given that X has occurred. This is called the conditional probability of Y given X, which is defined as

  P[Y|X] = P[X, Y] / P[X]

• Also,

  P[X|Y] = P[X, Y] / P[Y]

• Random variables X and Y are said to be statistically independent if

  P[X|Y] = P[X]
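The defining ratio P[Y|X] = P[X, Y] / P[X] can be checked by simulation. A small sketch with two assumed dependent bits (the 0.9 agreement probability is illustrative, not from the notes):

```python
import random

# Check P[Y|X] = P[X, Y] / P[X] empirically for two dependent bits:
# X is a fair bit and Y = X with probability 0.9, flipped otherwise.
random.seed(3)
N = 100_000
joint_11 = x_1 = 0
for _ in range(N):
    x = random.random() < 0.5
    y = x if random.random() < 0.9 else (not x)
    x_1 += x
    joint_11 += x and y

p_x1 = x_1 / N            # estimate of P[X = 1]
p_x1_y1 = joint_11 / N    # estimate of P[X = 1, Y = 1]
cond = p_x1_y1 / p_x1     # estimate of P[Y = 1 | X = 1]; true value 0.9
```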
Expectation

• We use simple statistical averages such as the mean and variance.
• The statistical averages are denoted by E[g(X)] for the expected value of a function g(·) of the random variable X.
• Shorthand notation for the mean: μ_X.
• For a discrete random variable,

  μ_X = E[X] = Σ_x x P[X = x]

Expectation II

• For a continuous random variable with a density function f_X(x),

  E[X] = ∫_{−∞}^{∞} x f_X(x) dx

• For N observations of the random variable, we use the estimator

  μ̂_X = (1/N) Σ_{n=1}^{N} x_n

• If we consider X as a random variable representing observations of the voltage of a random signal, then the mean value of X represents the average voltage or dc offset.
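The estimator μ̂_X and its dc-offset interpretation can be sketched as follows (the 1.5 V level and the noise spread are assumed values for illustration):

```python
import random

# Estimate the mean (dc offset) of a noisy voltage: a 1.5 V dc level
# plus zero-mean Gaussian noise, using the estimator (1/N) * sum(x_n).
random.seed(4)
dc = 1.5
N = 50_000
observations = [dc + random.gauss(0.0, 0.4) for _ in range(N)]

mean_hat = sum(observations) / N   # should approach the dc offset of 1.5 V
```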

Variance

• The variance is a measure of the spread of the probability distribution about the mean.
• For discrete random variables, the variance is

  σ_X² = Var(X) = E[(X − μ_X)²] = Σ_x (x − μ_X)² P[X = x]

• For a continuous random variable with a density function f_X(x),

  σ_X² = ∫_{−∞}^{∞} (x − μ_X)² f_X(x) dx

Variance II

• For N observations of the random variable, we may estimate the variance as

  σ̂_X² = (1/(N − 1)) Σ_{n=1}^{N} (x_n − μ̂_X)²

• If we consider X as a random variable representing observations of the voltage of a random signal, then the variance represents the ac power of the signal.
• The second moment, E[X²], is also called the mean-square value of the random signal, and it physically represents the total power of the signal.
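A quick numerical check of these power interpretations (the dc level and noise spread are illustrative values): the total power E[X²] should split into dc power μ² plus ac power σ².

```python
import random

# For a voltage X = dc + noise: the variance estimates the ac power, the
# mean-square value E[X^2] the total power, and total ≈ dc^2 + ac power.
random.seed(5)
dc, sigma = 2.0, 0.5
N = 100_000
x = [dc + random.gauss(0.0, sigma) for _ in range(N)]

mean_hat = sum(x) / N
var_hat = sum((v - mean_hat) ** 2 for v in x) / (N - 1)  # ac power ≈ sigma^2
msq_hat = sum(v * v for v in x) / N                      # total ≈ dc^2 + sigma^2
```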
Covariance

• The covariance of two random variables is given by

  Cov(X, Y) = E[(X − μ_X)(Y − μ_Y)]

  or

  Cov(X, Y) = E[XY] − μ_X μ_Y

• If the two random variables are continuous with joint density function f_{X,Y}(x, y), then

  E[XY] = ∫_{−∞}^{∞} ∫_{−∞}^{∞} x y f_{X,Y}(x, y) dx dy

• If the two variables are independent, then

  E[XY] = E[X] E[Y]  ⇒  Cov(X, Y) = 0

Transformation of Random Variables

• If Y = g(X) is a one-to-one transformation of the random variable X to the random variable Y, then the distribution function of Y is given by

  F_Y(y) = F_X(g⁻¹(y))

  where g⁻¹(y) denotes the functional inverse of g(y).
• Also, if X has a density f_X(x), then Y has the density

  f_Y(y) = f_X(g⁻¹(y)) |dg⁻¹(y)/dy|

Transformation of Random Variables: Example

• If a random variable X with distribution F_X(x) is transformed to

  Y = aX + b

  with a > 0, then

  F_Y(y) = F_X((y − b)/a)

Gaussian Random Variables

• A Gaussian random variable is a continuous random variable with the density function

  f_X(x) = (1/(σ_X √(2π))) exp(−(x − μ_X)² / (2σ_X²))

  where the random variable X has a mean μ_X and a variance σ_X².
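The linear-transformation result can be verified by simulation; in this sketch X is uniform on [0, 1], so F_X is known in closed form (the values of a and b are arbitrary):

```python
import random

# For Y = a*X + b with a > 0, F_Y(y) = F_X((y - b)/a).  Verify empirically
# with X uniform on [0, 1], where F_X(x) = x on that interval.
random.seed(6)
a, b = 3.0, 1.0
N = 100_000
xs = [random.random() for _ in range(N)]
ys = [a * x + b for x in xs]

y0 = 2.5
F_Y_empirical = sum(y <= y0 for y in ys) / N
F_X_at_inverse = (y0 - b) / a    # F_X((y0 - b)/a) = 0.5 for uniform X
```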
Properties of a Gaussian Random Variable

1. A Gaussian random variable is completely characterized by its mean and variance.
2. A Gaussian random variable plus a constant is another Gaussian random variable with the mean adjusted by the constant.
3. A Gaussian random variable multiplied by a constant is another Gaussian random variable where both the mean and variance are affected by the constant.
4. The sum of two independent Gaussian random variables is also a Gaussian random variable.
5. The weighted sum of N independent Gaussian random variables is a Gaussian random variable.
6. If two Gaussian random variables have zero covariance (uncorrelated), they are also independent.

Normalized Gaussian Random Variable

• A normalized Gaussian random variable has a zero mean, μ_X = 0, and a unity variance, σ_X² = 1.
• In this case, its density function is

  f_X(x) = (1/√(2π)) exp(−x²/2),  −∞ < x < ∞

• Its distribution function is

  F_X(x) = (1/√(2π)) ∫_{−∞}^{x} exp(−s²/2) ds

Q-function

• The Q-function is defined as

  Q(x) = (1/√(2π)) ∫_{x}^{∞} exp(−s²/2) ds = 1 − F_X(x)

• The Q-function is the complement of the normalized Gaussian distribution function.

The Central Limit Theorem

• Let X_1, X_2, ..., X_n be a set of random variables with the following properties:
  1. The X_k, with k = 1, 2, 3, ..., n, are statistically independent.
  2. The X_k all have the same probability density function.
  3. Both the mean and the variance exist for each X_k.
• We do not assume that the density function of the X_k is Gaussian.
• Let Y be a new random variable defined as

  Y = Σ_{k=1}^{n} X_k
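In practice the Q-function is usually evaluated through the complementary error function; the identity Q(x) = ½ erfc(x/√2) follows directly from the definition above by a change of variable. A minimal sketch using the standard library:

```python
import math

# Q(x) = 0.5 * erfc(x / sqrt(2)): the tail probability of a normalized
# Gaussian random variable above x.
def Q(x):
    return 0.5 * math.erfc(x / math.sqrt(2.0))

q0 = Q(0.0)   # half the area of the normalized Gaussian lies above 0
q3 = Q(3.0)   # a small tail probability, on the order of 1e-3
```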
The Central Limit Theorem II

• The normalized random variable is

  Z = (Y − E[Y]) / σ_Y

• As n becomes large,

  F_Z(z) → (1/√(2π)) ∫_{−∞}^{z} exp(−s²/2) ds

• Therefore, the normalized distribution of the sum of independent, identically distributed random variables approaches a Gaussian distribution as the number of random variables increases, regardless of the individual distributions.

Random Processes

• In communications, the received time-varying signal is random in nature.
• Random processes represent the formal mathematical model of these random signals.
• Random processes have the following properties:
  1. Random processes are a function of time.
  2. Random processes are random in the sense that it is not possible to predict exactly what waveform will be observed in the future.
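The theorem is easy to visualize numerically. In this sketch (parameter values are illustrative) each Y is a sum of n = 30 uniform variables, which are clearly non-Gaussian individually, yet the normalized sum behaves like a standard Gaussian: roughly 68% of the Z values fall within one standard deviation of zero.

```python
import random
import statistics

# Sum n i.i.d. uniform(0,1) variables and normalize; by the central limit
# theorem the result should behave like a normalized Gaussian.
random.seed(7)
n, trials = 30, 20_000
mean_y = n * 0.5
sigma_y = (n / 12.0) ** 0.5      # variance of one uniform(0,1) term is 1/12

zs = []
for _ in range(trials):
    y = sum(random.random() for _ in range(n))
    zs.append((y - mean_y) / sigma_y)

within_one_sigma = sum(abs(z) <= 1.0 for z in zs) / trials  # near 0.683
```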

Random Processes II

• We assign to each sample point s a function of time with the label

  X(t, s),  −T < t < T

  where 2T is the total observation period.
• For a fixed sample point s_j, the function X(t, s_j) is called a realization or a sample function.

Random Processes III

• To simplify the notation, we denote this sample function as

  x_j(t) = X(t, s_j)
Random Processes IV

• With a random variable, the outcome of a random experiment is mapped to a real number.
• With a random process, the outcome of a random experiment is mapped to a waveform that is a function of time.
• At any point t_k in the observation window, the possible outcomes of a random process can be represented as a random variable.
• The family of all such random variables, indexed by the time variable t, forms the random process.
• To restrict the range of possible random processes, we need two technical conditions: stationarity and ergodicity.

Stationary Random Processes

• If the statistical characterization of a process is independent of the time at which the observations occur, then the process is said to be stationary.
• If

  F_{X(t_1+τ)}(x) = F_{X(t_1)}(x)

  for all t_1 and τ, we say the process is stationary to the first order. In this case, the distribution function is independent of time.
• Additionally, its mean and variance are also independent of time.

Stationary Random Processes II

• If we sample the random process at different points in time and if

  F_{X(t_1+τ), X(t_2+τ)}(x_1, x_2) = F_{X(t_1), X(t_2)}(x_1, x_2)

  we say the process is stationary to the second order.
• If the equivalence between distribution functions holds for all time shifts τ, all possible orders (even above 2), and all observation times t, then the process is called strictly stationary.

Correlation of Random Processes

• Sometimes, samples of the process at different times may be correlated.
• The covariance of two random variables X(t_1) and X(t_2) is

  Cov(X(t_1), X(t_2)) = E[X(t_1) X(t_2)] − μ_{X(t_1)} μ_{X(t_2)}

• The autocorrelation function is E[X(t_1) X(t_2)], defined (allowing for complex-valued processes) as

  R_X(t, s) = E[X(t) X*(s)]
Correlation of Random Processes II

• If X(t) is stationary to the second order or higher, then

  R_X(t, s) = R_X(t − s)

• For many applications, we often only require that:
  1. The mean of the random process is a constant independent of time: E[X(t)] = μ_X for all t.
  2. The autocorrelation of the random process only depends upon the time difference, for all t and τ:

     E[X(t) X*(t + τ)] = R_X(τ)

• If a random process has these two properties, then we say it is wide-sense stationary or weakly stationary.

Properties of the Autocorrelation Function

• The autocorrelation function of a wide-sense stationary random process has the following properties for a real-valued process:
  1. Power of a Wide-Sense Stationary Process: The second moment or mean-square value of a real-valued random process is given by

     R_X(0) = E[X(t)²]

  2. Symmetry: The autocorrelation of a real-valued wide-sense stationary process has even symmetry:

     R_X(τ) = R_X(−τ)

Properties of the Autocorrelation Function II

3. Maximum Value: The autocorrelation of a real-valued wide-sense stationary process is maximum at the origin:

   R_X(0) ≥ |R_X(τ)|

Ergodicity

• The ensemble averages, estimated from N sample functions, are given by

  E[X(t_k)] ≈ (1/N) Σ_{j=1}^{N} x_j(t_k)

  and

  E[X²(t_k)] ≈ (1/N) Σ_{j=1}^{N} x_j²(t_k)

• If the process is wide-sense stationary, then the mean value and second moment computed by these two equations do not depend upon the time t_k.
Ergodicity II

• The time average of only one continuous sample function x(t) drawn from the process is given by

  ε[x] = lim_{T→∞} (1/(2T)) ∫_{−T}^{T} x(t) dt

• The time-autocorrelation of the sample function x(t) is given by

  R_x(τ) = lim_{T→∞} (1/(2T)) ∫_{−T}^{T} x(t) x(t + τ) dt

Ergodicity III

• Random processes for which various time averages of the sample functions may be used to approximate the corresponding ensemble averages or expectations are said to be ergodic.
• Wide-sense stationary processes are assumed to be ergodic, in which case time averages and expectations can be used interchangeably.
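Ergodicity in action: for the random-phase sinusoid (a standard textbook example, not taken from these notes; the frequency and sampling step are assumed values), a single sample function suffices to recover the ensemble autocorrelation R_X(τ) = ½ cos(2πfτ) by time averaging.

```python
import math
import random

# Time-average autocorrelation of one sample function of the random-phase
# sinusoid X(t) = cos(2*pi*f*t + Theta), estimated from discrete samples.
random.seed(8)
f, dt, K = 5.0, 1e-3, 200_000
theta = random.uniform(0.0, 2.0 * math.pi)
x = [math.cos(2.0 * math.pi * f * k * dt + theta) for k in range(K)]

def time_autocorr(lag):
    # Sample-mean approximation of the time-average of x(t) x(t + tau)
    return sum(x[k] * x[k + lag] for k in range(K - lag)) / (K - lag)

r0 = time_autocorr(0)                         # near 0.5 = R_X(0)
r_half_period = time_autocorr(int(0.1 / dt))  # tau = half a period, near -0.5
```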

Spectra of Random Signals

• Define the Fourier transform of a sample function x_T(t) as

  ξ_T(f) = ∫_{−∞}^{∞} x_T(t) e^{−j2πft} dt

• The Fourier transform converts a family of random variables X(t), indexed by parameter t, to a new family of random variables Ξ_T(f), indexed by parameter f.

Spectra of Random Signals II

• The power spectral density of a random process X(t) is

  S_X(f) = lim_{T→∞} (1/(2T)) E[|Ξ_T(f)|²]

• The process in this case should be wide-sense stationary.
Properties of the Power Spectral Density

• The same Wiener-Khintchine relations that apply to deterministic processes also apply to wide-sense stationary random processes.
• Other properties of a wide-sense stationary process:
  1. Mean-Square Value:

     E[X²(t)] = ∫_{−∞}^{∞} S_X(f) df

  2. Nonnegativity:

     S_X(f) ≥ 0, for all f

  3. Symmetry:

     S_X(f) = S_X(−f)

  4. Filtering Random Processes (through a linear filter):

     S_Y(f) = |H(f)|² S_X(f)
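Property 4 has a simple discrete-time consequence: for white input of variance σ², integrating |H(f)|² S_X(f) over frequency (Parseval's theorem) predicts an output power of σ² Σ h_k². A sketch with an arbitrary FIR filter h (the taps are assumed values):

```python
import random

# Filtering a random process: S_Y(f) = |H(f)|^2 S_X(f).  For discrete
# white noise through an FIR filter h, the output power should equal
# sigma^2 * sum(h_k^2) by Parseval's theorem.
random.seed(9)
sigma = 1.0
h = [0.5, 0.3, 0.2]              # illustrative FIR filter taps
N = 200_000
x = [random.gauss(0.0, sigma) for _ in range(N)]

# Convolve (valid region only) and measure the output power E[Y^2]
y = [sum(h[k] * x[n - k] for k in range(len(h))) for n in range(len(h) - 1, N)]
power_out = sum(v * v for v in y) / len(y)

predicted = sigma ** 2 * sum(hk ** 2 for hk in h)   # 0.38 for these taps
```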

Gaussian Processes

• A random process X(t), with t taking values in the set T, is said to be a Gaussian process if, for any integer k and any subset {t_1, t_2, ..., t_k} of T, the k random variables {X(t_1), X(t_2), ..., X(t_k)} are jointly Gaussian distributed; that is, their joint density has a Gaussian shape.

Properties of a Gaussian Process

1. If a Gaussian process is wide-sense stationary, then it is also stationary in the strict sense.
2. If a Gaussian process is applied to a stable linear filter, then the random process Y(t) produced at the output of the filter is also Gaussian.
3. If integration is defined in the mean-square sense, then we may interchange the order of the operations of integration and expectation with a Gaussian random process. For example:

   E[Y(t)] = E[∫_0^t h(t − s) X(s) ds] = ∫_0^t h(t − s) E[X(s)] ds = μ_Y(t)
White Noise

• The power spectral density of white noise is independent of frequency.
• The power spectral density of a white noise process W(t) is

  S_W(f) = N_0 / 2

• The dimensions of N_0 are watts per hertz.
• White noise has no dc power.

White Noise II

• The autocorrelation of white noise is given by

  R_W(τ) = (N_0 / 2) δ(τ)

• Any two different samples of white noise, no matter how close together in time they are taken, are uncorrelated.
• The effect of white noise is observed only after passing through a system of finite bandwidth.
• As long as the bandwidth of a noise process at the input of a system is appreciably larger than that of the system itself, we may model the noise process as white.
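A discrete-time sketch of this decorrelation property (unit noise power assumed for illustration): the sample autocorrelation is essentially zero at any nonzero lag, while the zero-lag value equals the noise power.

```python
import random

# Discrete-time white noise: distinct samples are uncorrelated, so the
# sample autocorrelation at a nonzero lag should be near zero while the
# zero-lag value equals the noise power (taken as 1 here).
random.seed(10)
N = 200_000
w = [random.gauss(0.0, 1.0) for _ in range(N)]

r0 = sum(v * v for v in w) / N                             # near 1 (the power)
r1 = sum(w[k] * w[k + 1] for k in range(N - 1)) / (N - 1)  # near 0
```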

Narrowband Noise

• The noise process appearing at the output of a narrowband filter is called narrowband noise.
• Narrowband noise can be represented mathematically using in-phase and quadrature components.
• We may represent N(t) in the form

  N(t) = N_I(t) cos(2πf_c t) − N_Q(t) sin(2πf_c t)

Narrowband Noise II

1. The in-phase component N_I(t) and quadrature component N_Q(t) of narrowband noise N(t) have zero mean.
2. If the narrowband noise N(t) is Gaussian, then its in-phase and quadrature components are Gaussian.
3. If the narrowband noise N(t) is stationary, then its in-phase and quadrature components are stationary.
4. Both components N_I(t) and N_Q(t) have the same power spectral density, which is related to the power spectral density of the narrowband noise as in

   S_{N_I}(f) = S_{N_Q}(f) = S_N(f − f_c) + S_N(f + f_c),  −B ≤ f ≤ B
                             0,                            otherwise

5. N_I(t) and N_Q(t) have the same variance as N(t).
Narrowband Noise III

(figure)

Noise-Equivalent Bandwidth: Low-Pass Filter

• Suppose a white noise source with spectrum S_W(f) = N_0/2 is connected to the input of an arbitrary filter of transfer function H(f).
• The average output noise power is

  P_N = N_0 B_N |H(0)|²

• The noise-equivalent bandwidth for a low-pass filter is

  B_N = (∫_0^∞ |H(f)|² df) / |H(0)|²

Noise-Equivalent Bandwidth: Band-Pass Filter

• In the case of a band-pass filter, the average output noise power is

  P_N = N_0 B_N |H(f_c)|²

• The noise-equivalent bandwidth is

  B_N = (∫_0^∞ |H(f)|² df) / |H(f_c)|²

Noise-Equivalent Bandwidth: Graphically

• Graphically, B_N is the bandwidth of an ideal rectangular filter that has the same midband gain as H(f) and passes the same average noise power.

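As a worked example (the first-order RC response and f0 = 1 kHz are assumed values, not from the notes), numerically integrating |H(f)|² reproduces the known result B_N = (π/2) f0 for this filter, noticeably wider than the 3-dB bandwidth f0 itself:

```python
import math

# Noise-equivalent bandwidth of a first-order RC low-pass filter,
# |H(f)|^2 = 1 / (1 + (f/f0)^2), computed by numerical integration.
f0 = 1000.0                      # 3-dB bandwidth in Hz (assumed value)

def h_sq(f):
    return 1.0 / (1.0 + (f / f0) ** 2)

# Trapezoidal integration of |H(f)|^2 from 0 out to a frequency where the
# response is negligible; H(0) = 1, so normalization is trivial here.
df, f_max = 1.0, 500_000.0
n = int(f_max / df)
area = df * (0.5 * h_sq(0.0)
             + sum(h_sq(k * df) for k in range(1, n))
             + 0.5 * h_sq(f_max))
B_N = area / h_sq(0.0)           # close to (pi/2) * f0
```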
