February 2009
[Figure: a digital message and the corresponding BASK (binary amplitude-shift keying) signal, plotted versus time t over 0 ≤ t ≤ 5]
A First Course in Digital Communications 2/75
Chapter 3: Probability, Random Variables, Random Processes
[Figure: (a)-(d) example signals x(t), continuous in time and sampled with period Ts]
[Figure: pulse regeneration along the propagation distance: original pulse, some distortion, degraded, severely degraded, regenerated]
[Figure: (a) block diagram of a communication system: Source (User), Transmitter, Channel, Receiver, Sink (User); (b) expanded transmitter and receiver]
Note: the "Synchronization" block is only present in a digital system.
Introduction
The main objective of a communication system is the transfer
of information over a channel.
A message signal is best modeled by a random signal: any
signal that conveys information must have some uncertainty in
it; otherwise its transmission is of no interest.
Two types of imperfections in a communication channel:
Deterministic imperfection, such as linear and nonlinear
distortions, inter-symbol interference, etc.
Nondeterministic imperfection, such as addition of noise,
interference, multipath fading, etc.
We are concerned with the methods used to describe and
characterize a random signal, generally referred to as a
random process (also commonly called stochastic process).
In essence, a random process is a random variable evolving in
time.
The events E1, E2, E3, ... are mutually exclusive if Ei ∩ Ej = ∅ for all
i ≠ j, where ∅ is the null set.
Conditional Probability
We observe or are told that event E1 has occurred but are
actually interested in event E2: knowledge that E1 has
occurred changes the probability of E2 occurring.
If it was P(E2) before, it now becomes P(E2|E1), the
probability of E2 occurring given that event E1 has occurred.
This conditional probability is given by

P(E2|E1) = P(E2 ∩ E1)/P(E1), if P(E1) ≠ 0,
P(E2|E1) = 0, otherwise.
Bayes' rule:

P(Ei|A) = P(A|Ei)P(Ei)/P(A) = P(A|Ei)P(Ei) / Σ_{j=1}^{n} P(A|Ej)P(Ej).
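As a quick numeric illustration of Bayes' rule (the priors P(Ei) and likelihoods P(A|Ei) below are hypothetical, chosen only for illustration):

```python
# Bayes' rule sketch: three mutually exclusive events E1, E2, E3.
# The priors and likelihoods are made-up numbers for illustration.
priors = [0.5, 0.3, 0.2]          # P(Ei)
likelihoods = [0.9, 0.5, 0.1]     # P(A|Ei)

# Total probability: P(A) = sum_j P(A|Ej) P(Ej)
p_a = sum(l * p for l, p in zip(likelihoods, priors))

# Posteriors: P(Ei|A) = P(A|Ei) P(Ei) / P(A)
posteriors = [l * p / p_a for l, p in zip(likelihoods, priors)]

print(p_a)          # P(A) = 0.62
print(posteriors)   # the posteriors sum to 1
```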
Random Variables
[Figure: a random variable as a mapping from sample points ω1, ω2, ω3, ω4 in the sample space Ω to real numbers x(ω1), x(ω2), x(ω3), x(ω4) on the real line R]
[Figure: (a)-(c) typical cdfs Fx(x), each rising to 1.0: (a) a discrete random variable, (b) a mixed random variable, (c) a continuous random variable]
[Figure: pdf fx(x) and cdf Fx(x) of a Bernoulli random variable, with probability mass (1 − p) at x = 0 and p at x = 1]
[Figure: stem plot of a binomial pmf for x = 0, ..., 6, with values up to about 0.30]
A binomial random variable is a discrete random variable that gives the
number of 1's in a sequence of n independent Bernoulli trials.
fx(x) = Σ_{k=0}^{n} C(n, k) p^k (1 − p)^(n−k) δ(x − k), where C(n, k) = n!/(k!(n − k)!).
[Figure: pdf fx(x) = 1/(b − a) on [a, b] and the corresponding cdf Fx(x) of a uniform random variable]
[Figure: pdf fx(x) and cdf Fx(x) of a Gaussian random variable with mean µ; the cdf equals 1/2 at x = µ]
If y = g(x), where the equation y = g(x) has roots x_i, the pdf of y is

fy(y) = Σ_i fx(x_i) / |dg(x)/dx|_{x = x_i}.
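A sketch of this root formula for y = g(x) = x^2 with x a standard Gaussian (the choice of g and the Monte Carlo check are illustrative, not from the text):

```python
import math, random

random.seed(1)

def fx(x):
    """pdf of the standard Gaussian x."""
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def fy(y):
    """pdf of y = g(x) = x^2 via the root formula:
    roots x_i = +/- sqrt(y), and |dg/dx| = 2*sqrt(y) at each root."""
    r = math.sqrt(y)
    return fx(r) / (2 * r) + fx(-r) / (2 * r)

# Sanity check against the known closed form e^(-y/2) / sqrt(2*pi*y)
y0 = 0.5
closed = math.exp(-y0 / 2) / math.sqrt(2 * math.pi * y0)

# Monte Carlo check: P(y < 1) should equal P(-1 < x < 1) = erf(1/sqrt(2))
samples = [random.gauss(0.0, 1.0) ** 2 for _ in range(200_000)]
p_mc = sum(1 for y in samples if y < 1.0) / len(samples)
p_exact = math.erf(1.0 / math.sqrt(2.0))

print(abs(fy(y0) - closed))   # ~0
print(p_mc, p_exact)          # both close to 0.6827
```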
[Figure: a sample realization of the noise voltage x(t), in volts, over 0 ≤ t ≤ 1 sec]
[Figure: histogram estimate of the pdf fx(x) (1/volts) of the noise amplitude x (volts)]
fx(x) = (1/√(2πσx^2)) e^(−(x − mx)^2/(2σx^2))   (Gaussian)

fx(x) = (a/2) e^(−a|x|)   (Laplacian)
[Figure: plots of pdfs fx(x) over −15 ≤ x ≤ 15]
Example
Solution
fx(x) = (1/(√(2π)σ)) e^(−(x − µ)^2/(2σ^2)) = (with µ = 0) (1/(√(2π)σ)) e^(−x^2/(2σ^2)).
The Q-function is defined as:

Q(t) = (1/√(2π)) ∫_{t}^{∞} e^(−λ^2/2) dλ.
The probability that the noise voltage exceeds some level A is

P(x > A) = ∫_{A}^{∞} fx(x) dx
         = ∫_{A}^{∞} (1/(√(2π)σ)) e^(−x^2/(2σ^2)) dx
         = (with λ = x/σ, dλ = dx/σ) (1/√(2π)) ∫_{A/σ}^{∞} e^(−λ^2/2) dλ
         = Q(A/σ).
Given that the noise voltage is negative, the probability that the noise
voltage exceeds some level A < 0 is:
P(x > A|x < 0) = P(x > A, x < 0)/P(x < 0) = P(A < x < 0)/P(x < 0)
              = [Q(A/σ) − Q(0)]/Q(0) = [Q(A/σ) − 1/2]/(1/2) = 2Q(A/σ) − 1.
[Figure: the conditional pdf fx(x|x < 0), with peak 2/√(2πσ^2) and zero for x > 0 (the level A < 0 is marked), compared with the unconditional pdf fx(x), with peak 1/√(2πσ^2)]
P(x > −10^−4) = Q(−10^−4/√(10^−8)) = Q(−1) = 0.8413,

P(x > −10^−4|x < 0) = 2Q(−1) − 1 = 0.6827.
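These two numbers can be checked with the standard identity Q(t) = (1/2) erfc(t/√2) (a minimal Python sketch; erfc is in the standard-library math module):

```python
import math

def Q(t):
    """Gaussian tail probability, Q(t) = 0.5 * erfc(t / sqrt(2))."""
    return 0.5 * math.erfc(t / math.sqrt(2.0))

sigma = 1e-4          # so sigma^2 = 1e-8, as in the example
A = -1e-4

p1 = Q(A / sigma)            # P(x > A) = Q(A/sigma) = Q(-1)
p2 = 2 * Q(A / sigma) - 1    # P(x > A | x < 0) = 2*Q(A/sigma) - 1

print(round(p1, 4), round(p2, 4))  # 0.8413 0.6827
```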
Example
The Matlab function randn generates samples of a zero-mean,
unit-variance Gaussian random variable. Generate a vector of L = 10^6
samples according to a zero-mean Gaussian random variable x with
variance σ^2 = 10^−8. This can be done as follows:
x = sigma*randn(1,L);
(a) Based on the sample vector x, verify the mean and variance of
random variable x.
(b) Based on the sample vector x, compute the “probability” that
x > A, where A = −10^−4. Compare the result with that
found in Question 2-(c).
(c) Using the Matlab command hist, plot the histogram of
sample vector x with 100 bins. Next obtain and plot the pdf
from the histogram. Then also plot on the same figure the
theoretical Gaussian pdf (note the value of the variance for
the pdf). Do the histogram pdf and theoretical pdf fit well?
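A Python equivalent of parts (a) and (b), using the standard-library random module in place of randn (the histogram part is omitted; this is a sketch, not the Matlab solution itself):

```python
import math, random

random.seed(0)
L = 10**6
sigma = 1e-4                      # sigma^2 = 1e-8
x = [random.gauss(0.0, sigma) for _ in range(L)]

# (a) sample mean and sample variance of the vector x
mean_x = sum(x) / L
var_x = sum((v - mean_x) ** 2 for v in x) / L

# (b) empirical P(x > A) for A = -1e-4, versus the theoretical Q(-1)
A = -1e-4
p_emp = sum(1 for v in x if v > A) / L
p_theory = 0.5 * math.erfc((A / sigma) / math.sqrt(2.0))

print(mean_x, var_x)      # close to 0 and 1e-8
print(p_emp, p_theory)    # both close to 0.8413
```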
Solution
% mean_x = -3.7696e-008
% variance_x = 1.0028e-008
% P = 0.8410
pl(1)=plot(x_center,hist_pdf,'color','blue','linestyle','--','linewidth',2);
hold on;
[Figure: pdf obtained from the 100-bin histogram (dashed) and the true Gaussian pdf fx(x), plotted versus x (scale 10^−4); the two curves fit well]
fx,y(x, y) = ∂^2 Fx,y(x, y)/(∂x ∂y).

When the joint pdf is integrated over one of the variables, one
obtains the pdf of the other variable, called the marginal pdf:

∫_{−∞}^{∞} fx,y(x, y) dx = fy(y),
∫_{−∞}^{∞} fx,y(x, y) dy = fx(x).

Note that:

∫_{−∞}^{∞} ∫_{−∞}^{∞} fx,y(x, y) dx dy = Fx,y(∞, ∞) = 1,
Fx,y(−∞, −∞) = Fx,y(−∞, y) = Fx,y(x, −∞) = 0.
ρx,y = cov{x, y}/(σx σy).
Example 1
x ∈ {−1, 0, 1} with probabilities {1/4, 1/2, 1/4}
⇒ y = x3 ∈ {−1, 0, 1} with probabilities {1/4, 1/2, 1/4}
⇒ z = x2 ∈ {0, 1} with probabilities {1/2, 1/2}
my = (−1)(1/4) + (0)(1/2) + (1)(1/4) = 0; mz = (0)(1/2) + (1)(1/2) = 1/2.
The joint pmf (similar to pdf) of y and z:
P(y = −1, z = 0) = 0
P(y = −1, z = 1) = P(x = −1) = 1/4
P(y = 0, z = 0) = P(x = 0) = 1/2
P(y = 0, z = 1) = 0
P(y = 1, z = 0) = 0
P(y = 1, z = 1) = P(x = 1) = 1/4
Therefore, E{yz} = (−1)(1)(1/4) + (0)(0)(1/2) + (1)(1)(1/4) = 0
⇒ cov{y, z} = E{yz} − my mz = 0 − (0)(1/2) = 0!
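A short enumeration check of this example in Python; it also shows that y and z, though uncorrelated, are not independent:

```python
# pmf of x on {-1, 0, 1}, as in the example
px = {-1: 0.25, 0: 0.5, 1: 0.25}

# y = x^3 and z = x^2 are deterministic functions of x, so expectations
# can be computed directly from the pmf of x.
E = lambda f: sum(f(x) * p for x, p in px.items())

m_y = E(lambda x: x ** 3)
m_z = E(lambda x: x ** 2)
E_yz = E(lambda x: x ** 5)        # E{yz} = E{x^3 * x^2}
cov_yz = E_yz - m_y * m_z

print(m_y, m_z, E_yz, cov_yz)     # 0, 1/2, 0, 0 -> uncorrelated

# ...yet y and z are dependent: P(y=1, z=1) != P(y=1) * P(z=1)
p_joint = sum(p for x, p in px.items() if x ** 3 == 1 and x ** 2 == 1)
p_prod = sum(p for x, p in px.items() if x ** 3 == 1) * \
         sum(p for x, p in px.items() if x ** 2 == 1)
print(p_joint, p_prod)            # 0.25 versus 0.125
```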
[Figure: joint pdf fx,y(x, y) of two jointly Gaussian random variables and a cross-section of the pdf, for ρx,y = 0, 0.30, 0.70 and 0.95; as ρx,y increases, the pdf concentrates along the line y = x]
Define the vector x→ = [x1, x2, ..., xn], the vector of means
m→ = [m1, m2, ..., mn], and the n × n covariance matrix C
with Ci,j = cov(xi, xj) = E{(xi − mi)(xj − mj)}.
The random variables {xi}, i = 1, ..., n, are jointly Gaussian if:

fx1,x2,...,xn(x1, x2, ..., xn) = (1/√(det(C)(2π)^n)) exp(−(1/2)(x→ − m→) C^(−1) (x→ − m→)^T).
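A minimal sketch of this pdf for the special case n = 2, zero means, unit variances and correlation coefficient ρ, where C = [[1, ρ], [ρ, 1]] (an illustrative special case, not general-n code):

```python
import math

def gauss2_pdf(x1, x2, rho):
    """Jointly Gaussian pdf for n = 2, zero means, unit variances,
    covariance matrix C = [[1, rho], [rho, 1]].
    Here det(C) = 1 - rho^2 and the quadratic form
    (x - m) C^{-1} (x - m)^T expands to
    (x1^2 - 2*rho*x1*x2 + x2^2) / (1 - rho^2)."""
    det_c = 1.0 - rho * rho
    q = (x1 * x1 - 2 * rho * x1 * x2 + x2 * x2) / det_c
    return math.exp(-q / 2) / (2 * math.pi * math.sqrt(det_c))

def gauss1_pdf(x):
    """Standard Gaussian marginal pdf."""
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

# With rho = 0 the joint pdf factors into the product of the marginals
print(gauss2_pdf(0.3, -0.8, 0.0))
print(gauss1_pdf(0.3) * gauss1_pdf(-0.8))  # same value
```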
Random Processes I
[Figure: a random process maps each outcome ωj of the sample space to a sample function xj(t, ωj); the ensemble x1(t, ω1), x2(t, ω2), ..., xM(t, ωM), viewed at a fixed time tk, gives the random variable x(tk, ω), a real number]
Random Processes II
[Figure: four sample functions of a random binary waveform with bit interval Tb]
The average is across the ensemble and if the pdf varies with
time then the mean value is a (deterministic) function of time.
If the process is stationary then the mean is independent of t
or a constant:
mx = E{x(t)} = ∫_{−∞}^{∞} x fx(x) dx.
The mean-squared value (MSV) is defined as

MSVx(t) = E{x^2(t)} = ∫_{−∞}^{∞} x^2 fx(t)(x; t) dx (non-stationary),

MSVx = E{x^2(t)} = ∫_{−∞}^{∞} x^2 fx(x) dx (stationary).
Correlation
The autocorrelation function completely describes the power
spectral density of the random process.
Defined as the correlation between the two random variables
x1 = x(t1 ) and x2 = x(t2 ):
Rx(t1, t2) = E{x(t1)x(t2)}
           = ∫_{−∞}^{∞} ∫_{−∞}^{∞} x1 x2 fx1,x2(x1, x2; t1, t2) dx1 dx2.
[Figure: time-limited sample functions x1(t, ω1), x2(t, ω2), ..., xM(t, ωM) and the magnitude spectra |X1(f, ω1)|, |X2(f, ω2)|, ..., |XM(f, ωM)| of their Fourier transforms]
It follows that

MSVx = lim_{T→∞} (1/(2T)) ∫_{−T}^{T} E{xT^2(t)} dt
     = ∫_{−∞}^{∞} lim_{T→∞} [E{|XT(f)|^2}/(2T)] df (watts).
Finally,

Sx(f) = lim_{T→∞} E{|XT(f)|^2}/(2T) (watts/Hz),

is the power spectral density of the process.
It can be shown that the power spectral density and the
autocorrelation function are a Fourier transform pair:
Rx(τ) ←→ Sx(f) = ∫_{−∞}^{∞} Rx(τ) e^(−j2πfτ) dτ.
[Figure: an LTI system with input x(t), characterized by mx and Rx(τ) ←→ Sx(f), output y(t), characterized by my and Ry(τ) ←→ Sy(f), and cross-correlation Rx,y(τ)]
my = E{y(t)} = E{∫_{−∞}^{∞} h(λ)x(t − λ) dλ} = mx H(0)

Sy(f) = |H(f)|^2 Sx(f)

Ry(τ) = h(τ) ∗ h(−τ) ∗ Rx(τ).
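A discrete-time sketch of the output-power relation Ry(0) = ∫ |H(f)|^2 Sx(f) df: for unit-variance white noise through an FIR filter h, the output power reduces to Σ h[k]^2 by Parseval's relation (the filter taps below are illustrative):

```python
import random

random.seed(2)

# Unit-variance white noise through an FIR filter h[n].
# For white input, R_y(0) = sigma_x^2 * sum(h[k]^2), which is the
# discrete Parseval form of R_y(0) = integral of |H(f)|^2 S_x(f) df.
h = [0.5, 0.3, 0.2]
N = 200_000
x = [random.gauss(0.0, 1.0) for _ in range(N + len(h))]

# Convolution output (steady-state samples only)
y = [sum(h[k] * x[n - k] for k in range(len(h)))
     for n in range(len(h), len(h) + N)]

var_y = sum(v * v for v in y) / N
theory = sum(c * c for c in h)      # 0.25 + 0.09 + 0.04 = 0.38

print(var_y, theory)                # both close to 0.38
```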
Rw(τ) = (kθG/t0) e^(−|τ|/t0) (watts),

Sw(f) = 2kθG/(1 + (2πf t0)^2) (watts/Hz).
[Figure: (a) PSD Sw(f) (f in GHz) and (b) autocorrelation Rw(τ) (watts, τ in pico-sec) of thermal noise, compared with white noise, for which Sw(f) = N0/2 and Rw(τ) = (N0/2)δ(τ)]
Example
[Figure: RL lowpass circuit with input x(t) and output y(t) taken across the resistor R]

H(f) = R/(R + j2πfL) = 1/(1 + j2πfL/R).

Sy(f) = (N0/2) · 1/(1 + (2πfL/R)^2) ←→ Ry(τ) = (N0 R/(4L)) e^(−(R/L)|τ|).
[Figure: output PSD Sy(f) (watts/Hz) versus f (Hz), with peak N0/2 at f = 0, and autocorrelation Ry(τ) (watts) versus τ (sec), with peak N0 R/(4L) at τ = 0]
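A numeric sanity check that the total output power ∫ Sy(f) df equals Ry(0) = N0 R/(4L) (the component values R, L and N0 below are made up for illustration, not from the text):

```python
import math

# Illustrative values: R = 1000 ohms, L = 1e-3 H, N0 = 1e-12 W/Hz
R, L, N0 = 1000.0, 1e-3, 1e-12

def Sy(f):
    """Output PSD of the RL example: (N0/2) / (1 + (2*pi*f*L/R)^2)."""
    return (N0 / 2) / (1 + (2 * math.pi * f * L / R) ** 2)

# Integrate Sy over a wide band with the midpoint rule; the total power
# should approach Ry(0) = N0 * R / (4 * L) as the band widens.
F, n = 40e6, 800_000
df = 2 * F / n
power = sum(Sy(-F + (k + 0.5) * df) for k in range(n)) * df

print(power, N0 * R / (4 * L))  # both close to 2.5e-7 W
```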