Communication Theory
Spring 2003
[Figure: Venn diagram of mutually exclusive events A and B, i.e. A and B don't overlap.]
Joint and Conditional Probability
Joint probability is the probability that both
A and B occur:
P(A,B) = P(A∩B).
Conditional probability is the probability that
A will occur given that B has occurred:
P(A|B) = P(A,B)/P(B) and P(B|A) = P(A,B)/P(A)
Bayes’ theorem:
P(A,B) = P(A)P(B|A) = P(B)P(A|B)
P(B|A) = P(A|B)P(B)/P(A) and P(A|B) = P(B|A)P(A)/P(B)
Statistical Independence
Events A and B are statistically independent
if P(A,B) = P(A)P(B).
If A and B are independent, then:
• P(A|B) = P(A) and P(B|A) = P(B).
Example:
• Flip a coin; call the result A ∈ {heads, tails}.
• Flip it again; call the result B ∈ {heads, tails}.
• P{A=heads, B=tails} = 0.25.
• P{A=heads}P{B=tails} = (0.5)(0.5) = 0.25
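A quick Monte Carlo check of this example, as a minimal Python sketch (the trial count and seed are arbitrary choices, not course material):

import random

random.seed(1)
N = 100_000
count = 0
for _ in range(N):
    a_heads = random.random() < 0.5    # first flip: True means heads
    b_heads = random.random() < 0.5    # second flip, independent of the first
    if a_heads and not b_heads:        # event {A = heads, B = tails}
        count += 1

print(count / N)    # approaches P{A=heads} P{B=tails} = 0.25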
Random Variables
A random variable X(s) is a real-valued
function of the underlying event space s∈S.
Typically, we just denote it as X.
• i.e. we suppress the dependence on s (it is assumed).
Random variables (R.V.’s) can be either
discrete or continuous:
A discrete R.V. can only take on a countable
number of values.
• Example: The number of students in a class.
A continuous R.V. can take on a continuous
range of values.
• Example: The voltage across a resistor.
Cumulative Distribution Function
Abbreviated CDF.
Also called Probability Distribution Function.
Definition: F_X(x) = P[X ≤ x]
Properties:
F(x) is monotonically nondecreasing.
F(-∞) = 0
F(∞ ) = 1
P[ a < X ≤ b] = F(b) - F(a)
The CDF completely defines the random
variable, but is cumbersome to work with.
Instead, we will use the pdf …
Probability Density Function
Abbreviated pdf.
Definition:
p_X(x) = d F_X(x) / dx
Properties:
p(x) ≥ 0
∫_{−∞}^{∞} p_X(x) dx = 1
Interpretation:
∫_a^b p_X(x) dx = P[a < X ≤ b] = F_X(b) − F_X(a)
Example: the outcome of a fair die roll.
[Figure: the CDF is a staircase of unit step functions at x = 1, …, 6.]
The pdf is:
p_X(x) = (1/6) Σ_{i=1}^{6} δ(x − i)
[Figure: the pdf is a train of Dirac delta functions of weight 1/6 at x = 1, …, 6.]
Expected Values
Sometimes the pdf is unknown or cumbersome to
specify.
Expected values are a shorthand way of describing
a random variable.
The most important examples are:
Mean: E[X] = m_X = ∫_{−∞}^{∞} x p(x) dx
Variance: σ_X² = E[(X − m_X)²] = ∫_{−∞}^{∞} (x − m_X)² p(x) dx = E[X²] − m_X²
The expectation operator works with any function
Y=g(X).
E[Y] = E[g(X)] = ∫_{−∞}^{∞} g(x) p(x) dx
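As a sketch of how these integrals can be checked numerically (using the uniform pdf from the example below; the grid size is an arbitrary choice):

import numpy as np

x = np.linspace(0.0, 10.0, 100_001)
dx = x[1] - x[0]
p = np.full_like(x, 0.1)                # uniform pdf: p(x) = 1/10 on [0, 10]

m = np.sum(x * p) * dx                  # mean: integral of x p(x) dx, about 5.0
var = np.sum((x - m)**2 * p) * dx       # variance, about 100/12 = 8.33
print(m, var)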
Uniform Random Variables
The uniform random variable is the most
basic type of continuous R.V.
The pdf of a uniform R.V. is constant over a
finite range and zero elsewhere:
p_X(x) = 1/A for m − A/2 ≤ x ≤ m + A/2, and 0 elsewhere
[Figure: rectangular pdf of height 1/A centered at m, extending from m − A/2 to m + A/2.]
Example
Consider a uniform random variable with pdf:
p(x) = 1/10 for 0 ≤ x ≤ 10
p(x) = 0 elsewhere
(For a discrete R.V., the analogous calculation sums the pmf: Σ_{x=a}^{b} p[x] = P[a ≤ X ≤ b].)
Binary Distribution
A binary or Bernoulli random variable has
the following pmf:
p[x] = 1 − p for x = 0, p for x = 1
Mean: m_X = p
Variance: σ_X² = p(1 − p)
Binomial Distribution
Let Y = Σ_{i=1}^{n} X_i, where {X_i, i = 1, …, n} are i.i.d. Bernoulli random variables.
Then:
p_Y[k] = C(n, k) p^k (1 − p)^(n−k)
where: C(n, k) = n! / (k!(n − k)!)
Mean: m_Y = np
Variance: σ_Y² = np(1 − p)
Example
Suppose we transmit a 31 bit sequence (code
word).
We use an error correcting code capable of
correcting 3 errors.
The probability that any individual bit in the
code word is received in error is p=.001.
What is the probability that the code word is
incorrectly decoded?
i.e. Probability that more than 3 bits are in error.
Example
Parameters: n=31, p=0.001, and t=3
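A direct evaluation of this binomial tail, as a minimal Python sketch (the script simply computes 1 minus the probability of 3 or fewer errors):

from math import comb

n, p, t = 31, 0.001, 3
P_correct = sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(t + 1))
P_error = 1.0 - P_correct
print(P_error)    # about 3e-8, dominated by the C(31,4) p^4 term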
Pairs of Random Variables
We often need to consider a pair (X,Y) of RVs
joint CDF: F_{X,Y}(x, y) = P[X ≤ x, Y ≤ y] = P[{X ≤ x} ∩ {Y ≤ y}]
joint pdf: p_{X,Y}(x, y) = ∂²F_{X,Y}(x, y) / ∂x∂y
marginal pdfs:
p_X(x) = ∫_{−∞}^{∞} p_{X,Y}(x, y) dy    p_Y(y) = ∫_{−∞}^{∞} p_{X,Y}(x, y) dx
Conditional pdfs:
p_X(x|y) = p_{X,Y}(x, y) / p_Y(y)    p_Y(y|x) = p_{X,Y}(x, y) / p_X(x)
Bayes rule:
p_X(x|y) = p_Y(y|x) p_X(x) / p_Y(y)    p_Y(y|x) = p_X(x|y) p_Y(y) / p_X(x)
Independence and Joint Moments
X and Y are independent if:
p_{X,Y}(x, y) = p_X(x) p_Y(y)
Correlation: E[XY] = ∫∫ x y p(x, y) dx dy
Covariance matrix of a random vector X: M = E[(X − m_X)(X − m_X)ᵀ]
• If M is diagonal, then X is uncorrelated.
Linear Transformation: Y = AX
• m_Y = A m_X
• M_Y = A M_X Aᵀ
Central Limit Theorem
Let [X1,X2, …Xn] be a vector of n independent and
identically distributed (i.i.d.) random variables, and
let Y = Σ_{i=1}^{n} X_i.
Then as n → ∞, the distribution of Y (suitably normalized) approaches a Gaussian distribution.
This is the Central Limit Theorem.
For a Gaussian R.V. with mean m and standard deviation σ:
F_X(a) = P[X ≤ a] = Q((m − a)/σ)
Approximation for large z
Most Q function tables only go up to z = 4 or z = 6.
For z > 4 a good approximation is:
Q(z) ≈ (1/(z√(2π))) e^(−z²/2)
Q-Function and Overbound
[Plot: Q(z) and its exponential overbound on a log scale, 10⁻⁶ to 10¹, for 0 ≤ z ≤ 4.5.]
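A quick numerical comparison of the exact Q function (computed via the complementary error function, Q(z) = erfc(z/√2)/2) and the large-z approximation; the test points are arbitrary:

from math import erfc, exp, pi, sqrt

def Q(z):
    return 0.5 * erfc(z / sqrt(2.0))    # exact tail of the unit Gaussian

for z in (2.0, 4.0, 6.0):
    approx = exp(-z * z / 2.0) / (z * sqrt(2.0 * pi))
    print(z, Q(z), approx)
# At z = 4 the approximation is already within about 6 percent.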
EE 561
Communication Theory
Spring 2003
[Figure: a random variable maps the outcomes heads and tails to points (e.g. 1 and 0) on the real line X.]
Random Processes
A random process maps the outcome of a random experiment to a signal (a function of time). The signal associated with a particular outcome is called a sample function; the set of all sample functions is called the ensemble.
[Figure: sample space S (e.g. heads, tails) mapping to the sample functions of the ensemble.]
If a process is strict-sense stationary, then it is also wide-sense stationary. For a WSS process the autocorrelation depends only on the time difference:
φ(t₁, t₂) = φ(τ), where τ = t₂ − t₁
Properties of the Autocorrelation
Function
If x(t) is Wide Sense Stationary, then its
autocorrelation function has the following
properties:
φ(0) = E{x(t)²}   (this is the second moment)
φ(0) ≥ |φ(τ)|
Examples:
Which of the following are valid ACF’s?
Power Spectral Density
Power Spectral Density (PSD) is a measure of a random
process’ power content per unit frequency.
Denoted Φ (f).
Units of W/Hz.
Φ(f) is a nonnegative function.
For real-valued processes, Φ(f) is an even function.
The total power of the process is found by:
P = ∫_{−∞}^{∞} Φ(f) df
The power within a bandwidth B is found by:
P = ∫_B Φ(f) df
Wiener-Khintchine Theorem
We can easily find the PSD of a WSS random process.
Wiener-Khintchine theorem:
If x(t) is a wide sense stationary random process, then:
Φ(f) = F{φ(τ)} = ∫_{−∞}^{∞} φ(τ) e^(−j2πfτ) dτ
i.e. the PSD is the Fourier Transform of the ACF.
Example:
Find the PSD of a WSS R.P. with the triangular autocorrelation:
φ(τ) = Λ(τ/T) = 1 − |τ|/T for |τ| ≤ T, and 0 for |τ| > T
Taking the Fourier transform of the triangle gives Φ(f) = T sinc²(fT).
White Gaussian Noise
A process is Gaussian if any n samples placed into a
vector form a Gaussian vector.
If a Gaussian process is WSS then it is SSS.
A process is white if the following hold:
WSS.
zero-mean, i.e. mx(t) = 0.
Flat PSD, i.e. Φ (f) = constant.
A white Gaussian noise process:
Is Gaussian.
Is white.
• The PSD is Φ (f) =N0/2
• N0/2 is called the two-sided noise spectral density.
Since it is WSS+Gaussian, then it is also SSS.
Linear Systems
The output of a linear time invariant (LTI) system is
found by convolution.
[Diagram: x(t) → h(t) → y(t)]
y(t) = x(t) ∗ h(t)
Y(f) = X(f) H(f)
However, if the input to the system is a random
process, we can’t find X(f).
Solution: use power spectral densities:
Φ_y(f) = Φ_x(f) |H(f)|²
This implies that the output of a LTI system is WSS if the
input is WSS.
Example
A white Gaussian noise process with PSD Φ(f) = N₀/2 = 10⁻⁵ W/Hz is passed through an ideal lowpass filter with cutoff at 1 kHz.
Compute the noise power at the filter output.
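A quick worked check, assuming the ideal filter passes |f| ≤ B = 1 kHz:
P = ∫_{−B}^{B} Φ(f) df = (N₀/2)(2B) = (10⁻⁵)(2000) = 0.02 W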
Ergodicity
A random process is said to be ergodic if it is ergodic
in the mean and ergodic in correlation:
Ergodic in the mean: m_x = E{x(t)} = ⟨x(t)⟩
where ⟨·⟩ is the time average operator: ⟨g(t)⟩ = lim_{T→∞} (1/T) ∫_{−T/2}^{T/2} g(t) dt
Ergodic in the correlation: φ_x(τ) = E{x(t) x(t + τ)} = ⟨x(t) x(t + τ)⟩
In order for a random process to be ergodic, it must
first be Wide Sense Stationary.
If a R.P. is ergodic, then we can compute power three
different ways:
From any sample function:
P_x = lim_{T→∞} (1/T) ∫_{−T/2}^{T/2} |x(t)|² dt = ⟨|x(t)|²⟩
From the autocorrelation:
P_x = φ_x(0)
From the Power Spectral Density:
P_x = ∫_{−∞}^{∞} Φ_x(f) df
Cross-correlation
If we have two random processes x(t) and y(t) we can
define a cross-correlation function:
φ_xy(t₁, t₂) = E{x(t₁) y(t₂)} = ∫_{−∞}^{∞} ∫_{−∞}^{∞} x_{t₁} y_{t₂} p(x_{t₁}, y_{t₂}) dx_{t₁} dy_{t₂}
[Block diagram, DPCM transmitter: the analog input signal is sampled; the prediction filter output is subtracted (Σ, −); the difference is quantized and encoded to form the DPCM signal, which enters the digital communications channel. Receiver: the decoder output is added (Σ, +) to a prediction filter output and passed through a DAC to give the analog output signal.]
DPCM Issues
The linear prediction filter is usually just a
feedforward (FIR) filter.
The filter coefficients must be periodically transmitted.
In adaptive differential pulse-code modulation
(ADPCM), the quantization levels can be changed on
the fly.
Helpful if the input pdf changes over time (nonstationary).
Used in DECT (Digital European Cordless Telephone).
Delta modulation is a special case of DPCM where there are only two quantization levels.
Only the zero-crossings of the signal need to be known.
While DPCM works well on speech, it does not work
well for modem signals.
Modem signals are uncorrelated.
Tradeoff: Voice Quality versus Bit Rate
The bit rate produced by the voice coder can be reduced, at a price: increased hardware complexity and reduced perceived speech quality.
[Figure: Mean Opinion Score (MOS), from Unsatisfactory (1) to Excellent (5), versus bit rate from 1.2 to 64 kbps. Waveform coders occupy the high-rate end and reach toll quality (MOS ≈ 4); vocoders occupy the low-rate end with Fair-to-Poor quality.]
• Vocoders are where the big savings (in terms of bit rate) comes from.
Vocoder standards
Vocoding is the single most important technology
enabling digital cell phones.
RPE-LTP
Regular Pulse Excited Long Term Prediction.
Used in GSM (European Digital Cellular)
13 kbps.
VSELP
Vector Sum Excited Linear Predictive Coder.
Used in USDC, IS-136 (US Digital Cellular).
8 kbps.
QCELP
Qualcomm Code Excited Linear Predictive Coder.
Used in IS-95. (US Spread Spectrum Cellular)
Variable bit rate (full, half, quarter, eighth)
Original full rate was 9.6 kbps.
Revised standard (QCELP13) uses 14.4 kbps.
Preview of Next Week
[Block diagram of the complete digital communication system: analog input signal → Sample → Quantize → Source Encoder (direct digital input enters here) → Encryption → Channel Encoder → Modulator → channel → Demodulator → Equalizer → Channel Decoder → Decryption → Source Decoder → D/A Conversion → analog output signal. Baseband processing covers source coding, channel coding, etc. We have been looking at the source-coding part of the system (Part 1 of 4).]
[Diagram, transmitter: code bits (symbols) at the symbol rate enter a pulse-shaping filter (baseband section); the result multiplies cos(2πf_c t) and drives the antenna (RF section).]
Modulation
Modulation shifts the spectrum of a baseband signal so that it becomes a bandpass signal.
A bandpass signal has non-negligible spectrum
only about some carrier frequency fc >> 0
Note: the bandwidth of a bandpass signal is the range
of positive frequencies for which the spectrum is non-
negligible.
Unless otherwise specified, the bandwidth of a
bandpass signal is twice the bandwidth of the baseband
signal used to create it.
[Figure: a baseband spectrum of bandwidth B becomes a bandpass spectrum of bandwidth 2B.]
Modulation
Common digital modulation techniques use the
data value to modify the amplitude, phase, or
frequency of the carrier.
Amplitude: On-off keying (OOK)
• 1 ⇒ A cos(2π fct)
• 0 ⇒0
More generally, this is called amplitude shift keying (ASK).
Phase: Phase shift keying (PSK)
• 1 ⇒ A cos(2π fct)
• 0 ⇒ A cos(2π fct + π ) = - A cos(2π fct)
Frequency: Frequency shift keying (FSK)
• 1 ⇒ A cos(2π f1t)
• 0 ⇒ A cos(2π f2t)
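A minimal Python sketch generating one bit interval of each waveform (the carrier frequencies, amplitude, and sample rate are illustrative assumptions, not from the course):

import numpy as np

fs, fc, f1, f2, A = 8000.0, 1000.0, 1000.0, 1200.0, 1.0
t = np.arange(0, 0.01, 1.0 / fs)                 # one 10 ms bit interval

def ook(bit):                                    # amplitude: carrier on or off
    return A * np.cos(2 * np.pi * fc * t) if bit else np.zeros_like(t)

def bpsk(bit):                                   # phase: 0 or pi
    return A * np.cos(2 * np.pi * fc * t + (0.0 if bit else np.pi))

def bfsk(bit):                                   # frequency: f1 or f2
    return A * np.cos(2 * np.pi * (f1 if bit else f2) * t)

print(ook(1)[:3], bpsk(0)[:3], bfsk(1)[:3])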
EE 561
Communication Theory
Spring 2003
Announcements
Homework #2 is due today.
I’ll post solutions over the weekend.
Including solutions to all problems from chapter 3.
Computer assignment #1 is due next week.
See webpage for details.
Today: Vector representation of signals
Sections 4.1-4.2
Review/Preview
[Block diagram of the complete digital communication system, as in the previous lecture.]
Bandpass signals can be written in envelope-phase or quadrature form:
a(t) = √(x²(t) + y²(t))    x(t) = a(t) cos θ(t)
θ(t) = tan⁻¹[y(t)/x(t)]    y(t) = a(t) sin θ(t)
Complex lowpass equivalent: s_l(t) = x(t) + j y(t)
Key Points
With these alternative representations, we can
consider bandpass signals independently from their
carrier frequency.
The idea of quadrature notation sets up a coordinate
system for looking at common modulation types.
Idea: plot in 2-dimensional space.
• x axis is the in-phase component.
• y axis is the quadrature component.
Called a signal constellation diagram.
Example Signal Constellation
Diagram: BPSK
x(t) ∈ {−1, +1}
y(t) ∈ {0}
[Figure: BPSK constellation, two points on the in-phase axis.]
Example Signal Constellation
Diagram: QPSK
x(t) ∈ {−1, +1}
y(t) ∈ {−1, +1}
[Figure: QPSK constellation, four points, one per quadrant.]
[Figure: QAM constellation.]
[Figure: received signal, constellation points scattered by noise.]
A New Way of Viewing Modulation
The quadrature way of viewing modulation is very
convenient for some modulation types.
QAM and M-PSK.
We will examine an even more general way of looking at
modulation by using signal spaces.
We can study any modulation type.
By choosing an appropriate set of axes for our signal
constellation, we will be able to:
Design modulation types which have desirable properties.
Construct optimal receivers for a given modulation type.
Analyze the performance of modulation types using very general
techniques.
First, we must review vector spaces …
Vector Spaces
An n-dimensional vector v = (v₁, v₂, …, vₙ) consists of n scalar components {v₁, v₂, …, vₙ}.
The norm (length) of a vector v is given by:
‖v‖ = √( Σ_{i=1}^{n} v_i² )
Example vectors:
v₁ = [0, 0]ᵀ   v₂ = [0, 1]ᵀ   v₃ = [1, 0]ᵀ   v₄ = [1, 1]ᵀ
Which of the following is a complete basis?
• e₁ = [1, 0]ᵀ, e₂ = [−1, 0]ᵀ
• e₁ = [1, 0]ᵀ, e₂ = [0, 1]ᵀ
• e₁ = [1, 1]ᵀ, e₂ = [1, −1]ᵀ
Orthonormal Basis
Two vectors vi and vj are orthogonal if
vi ⋅ v j = 0
A basis is orthonormal if:
All basis vectors are orthogonal to one-another.
All basis vectors are normalized.
Example: Complete
Orthonormal Basis
Which of the following is a complete orthonormal
basis?
• e₁ = [1, 0]ᵀ, e₂ = [0, 1]ᵀ
• e₁ = [1, 1]ᵀ, e₂ = [1, −1]ᵀ
• e₁ = [1/2, 1/2]ᵀ, e₂ = [1/2, −1/2]ᵀ
• e₁ = [1, 1]ᵀ, e₂ = [−1, 0]ᵀ
• e₁ = [1/2, 1/2]ᵀ, e₂ = [−1, 0]ᵀ
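A sketch of how the quiz can be checked with numpy: test each candidate pair for orthogonality, unit norm, and completeness (rank 2); the candidate values follow the list above:

import numpy as np

candidates = [
    (np.array([1.0, 0.0]), np.array([0.0, 1.0])),
    (np.array([1.0, 1.0]), np.array([1.0, -1.0])),
    (np.array([0.5, 0.5]), np.array([0.5, -0.5])),
    (np.array([1.0, 1.0]), np.array([-1.0, 0.0])),
]

for e1, e2 in candidates:
    orthogonal = np.isclose(e1 @ e2, 0.0)
    normalized = np.isclose(np.linalg.norm(e1), 1.0) and np.isclose(np.linalg.norm(e2), 1.0)
    complete = np.linalg.matrix_rank(np.column_stack([e1, e2])) == 2
    print(e1, e2, orthogonal, normalized, complete)
# Only the pair [1,0], [0,1] passes all three tests as written.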
EE 561
Communication Theory
Spring 2003
Announcements
HW #3 is due on Monday.
“sigspace.m” is on web-page.
Review
[Block diagram of the complete digital communication system, as in the previous lectures.]
Noise autocorrelation: φ_nn(τ) = (N₀/2) δ(τ)
Signal space representation:
r(t) = Σ_{k=1}^{K} r_k f_k(t) + n′(t)
where the f_k(t) are orthogonal basis functions for s(t), and n′(t) is the part of the noise orthogonal to them (disregard it).
r_k = ∫₀ᵀ r(t) f_k(t) dt = ∫₀ᵀ s(t) f_k(t) dt + ∫₀ᵀ n(t) f_k(t) dt = s_{m,k} + n_k
Receiver Overview
[Diagram: r(t) → Front end → r → Back end → ŝ]
Front end options: correlation receiver, matched-filter receiver.
Back end options: MAP receiver, ML receiver.
Front End Design #1:
Bank of Correlators
[Diagram: r(t) is multiplied by each basis function f_k(t), k = 1, …, K, and integrated over [0, T]:
r_k = ∫₀ᵀ r(t) f_k(t) dt
producing the vector r = [r₁, …, r_K]ᵀ.]
Front End Design #2:
Bank of Matched Filters
[Diagram: r(t) is passed through a bank of matched filters h_k(t) = f_k(T − t), k = 1, …, K, each sampled at t = T to give r_k, producing r = [r₁, …, r_K]ᵀ.]
MAP Decision Rule
ŝ = arg max_{s_m ∈ S} p_m p(r | s_m)
Substitute the conditional pdf of r given s_m (vector Gaussian):
ŝ = arg max_{s_m} p_m (πN₀)^(−K/2) exp[ −(1/N₀) Σ_{k=1}^{K} (r_k − s_{m,k})² ]
Take the natural log, using ln(xʸ) = y ln(x) and ln(exp(x)) = x:
ŝ = arg max_{s_m} { ln(p_m) − (K/2) ln(πN₀) − (1/N₀) Σ_{k=1}^{K} (r_k − s_{m,k})² }
(pulling 1/N₀ out of the summation)
MAP Decision Rule
(Continued)
Square the term in the summation: (r_k − s_{m,k})² = r_k² − 2 s_{m,k} r_k + s_{m,k}²
Drop the terms common to all s_m (the r_k² terms and the K ln(πN₀) term):
ŝ = arg max_{s_m} { ln(p_m) + (2/N₀) Σ_{k=1}^{K} s_{m,k} r_k − (1/N₀) Σ_{k=1}^{K} s_{m,k}² }
Use the definition of signal energy, E_m = ∫₀ᵀ s_m²(t) dt = Σ_{k=1}^{K} s_{m,k}²:
ŝ = arg max_{s_m} { (N₀/2) ln(p_m) + Σ_{k=1}^{K} s_{m,k} r_k − E_m/2 }
Receiver structure: form z = S r, where
S = [ s_{1,1} … s_{1,K} ; … ; s_{M,1} … s_{M,K} ]   so that   z_m = Σ_{k=1}^{K} s_{m,k} r_k
add the bias (N₀/2) ln(p_m) − E_m/2 to each z_m, and choose the largest.
ML Decision Rule
ML is simply MAP with p_m = 1/M:
ŝ = arg max_{s_m ∈ S} p(r | s_m)
ŝ = arg max_{s_m} { Σ_{k=1}^{K} s_{m,k} r_k − E_m/2 }
ŝ = arg max_{s_m} { z_m − E_m/2 }
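A minimal Python sketch of these back ends, using a hypothetical QPSK-like signal matrix S and arbitrary N₀ and priors:

import numpy as np

S = np.array([[1, 1], [1, -1], [-1, 1], [-1, -1]], float)   # signal matrix (assumed)
E = np.sum(S**2, axis=1)                                    # per-signal energies E_m
p = np.array([0.25, 0.25, 0.25, 0.25])                      # priors p_m (assumed)
N0 = 1.0                                                    # noise level (assumed)

def map_decide(r):
    z = S @ r                                               # z = S r
    return np.argmax((N0 / 2) * np.log(p) + z - E / 2)      # MAP metric

def ml_decide(r):
    return np.argmax(S @ r - E / 2)                         # ML: equal priors

r = np.array([0.9, -1.1])            # a noisy received vector near s_2 = [1, -1]
print(map_decide(r), ml_decide(r))   # both choose index 1, i.e. s_2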
Back End Design #2:
ML Decision Rule
If pm’s are unknown or all equal, then use the
ML (maximum likelihood) decision rule:
[Diagram: r → matrix multiply z = S r, where S = [s_{1,1} … s_{1,K} ; … ; s_{M,1} … s_{M,K}]; subtract E_m/2 from each z_m; choose the largest to output ŝ.]
Example
Start with the following signal set:
[Figure: four signals s₁(t), s₂(t), s₃(t), s₄(t), each defined on 0 ≤ t ≤ 2.]
[Diagram: r(t) is correlated against each s_m(t) over [0, T]; the bias (N₀/2) ln(p_m) is added to each output z_m; choose the largest.]
Signal Space Representation
Note: the previous receiver is not an efficient
implementation.
4 correlators were used.
Could we use fewer correlators?
• We can answer this by using the concept of signal
space!
Using the following basis functions:
[Figure: f₁(t) and f₂(t), each defined on 0 ≤ t ≤ 2.]
we can draw the signal space diagram.
A More Efficient MAP Receiver
[Diagram: r(t) is correlated against f₁(t) and f₂(t) over [0, T] to produce r₁ and r₂; a matrix multiply z = S r forms
z₁ = r₁ + r₂,  z₂ = r₁ − r₂,  z₃ = −r₁ + r₂,  z₄ = −r₁ − r₂
the bias (N₀/2) ln(p_m) is added to each; choose the largest.]
S = [ s₁ᵀ ; s₂ᵀ ; s₃ᵀ ; s₄ᵀ ] = [ 1 1 ; 1 −1 ; −1 1 ; −1 −1 ]
The ML Receiver
[Diagram: same front end; r(t) is correlated against f₁(t) and f₂(t) to produce r₁ and r₂; z = S r forms z₁ = r₁ + r₂, z₂ = r₁ − r₂, z₃ = −r₁ + r₂, z₄ = −r₁ − r₂; no bias terms are needed; choose the largest.]
S = [ 1 1 ; 1 −1 ; −1 1 ; −1 −1 ]
Decision Regions
The decision regions can be shown on the
signal space diagram.
Example: Assume pm = ¼ for m={1,2,3,4}
Thus MAP and ML rules are the same.
Average Energy Per Bit
The energy of the mᵗʰ signal (symbol) is:
E_m = Σ_{k=1}^{K} s²_{m,k}
The average energy per symbol is:
E_s = E[E_m] = Σ_{m=1}^{M} p_m E_m
log₂M is the number of bits per symbol.
Thus the average energy per bit is:
E_b = E_s / log₂M
Eb allows for a fair comparison of the energy
efficiencies of different signal sets.
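A short sketch of these definitions for the QPSK-like signal set used earlier (equiprobable symbols assumed):

import numpy as np

S = np.array([[1, 1], [1, -1], [-1, 1], [-1, -1]], float)
p = np.full(4, 0.25)                       # equiprobable symbols

E_m = np.sum(S**2, axis=1)                 # per-symbol energy: all 2.0 here
Es = np.sum(p * E_m)                       # average energy per symbol = 2.0
Eb = Es / np.log2(len(S))                  # log2(4) = 2 bits/symbol, so Eb = 1.0
print(E_m, Es, Eb)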
[Plot: signal space and decision regions.]
Example: QPSK with
Unequal Probabilities
sigspace( [1 1 .3; 1 -1 .3; -1 1 .3; -1 -1 .1], 2 )
[Plot: Signal Space and Decision Regions.]
Example: Extreme Case of
Unequal Probabilities
sigspace( [.5 .5 .3; .5 -.5 .3; -.5 .5 .3; -.5 -.5 .1], -6 )
[Plot: Signal Space and Decision Regions.]
Example: Unequal Signal Energy
sigspace( [1 1; 2 2; 3 3; 4 4], 10)
[Plot: Signal Space and Decision Regions.]
Example: 16-QAM
sigspace( [0.5 0.5; 1.5 0.5; 0.5 1.5; 1.5 1.5; ...
-0.5 0.5; -1.5 0.5; -0.5 1.5; -1.5 1.5; ...
0.5 -0.5; 1.5 -0.5; 0.5 -1.5; 1.5 -1.5; ...
-0.5 -0.5; -1.5 -0.5; -0.5 -1.5; -1.5 -1.5], 10 )
[Plot: the 16-QAM signal space and decision regions.]
EE 561
Communication Theory
Spring 2003
Review:
r_k = ∫₀ᵀ r(t) f_k(t) dt = ∫₀ᵀ s(t) f_k(t) dt + ∫₀ᵀ n(t) f_k(t) dt = s_{m,k} + n_k
MAP rule: ŝ = arg max_{s_m} { (N₀/2) ln(p_m) + Σ_{k=1}^{K} s_{m,k} r_k − E_m/2 }
ML rule: ŝ = arg max_{s_m} { Σ_{k=1}^{K} s_{m,k} r_k − E_m/2 }
Error Probability
Symbol error probability:
P_s = Pr[ŝ ≠ s]
= Σ_{m=1}^{M} p_m Pr[ŝ ≠ s_m | s = s_m]   (from the total probability theorem)
where p_m is the probability that s_m was sent, and the probability of error given that s_m was sent is:
Pr[ŝ ≠ s_m | s = s_m] = Pr[r ∉ R_m | s = s_m] = 1 − Pr[r ∈ R_m | s = s_m] = 1 − ∫_{R_m} p(r | s_m) dr
QPSK signal set:
s₁(t) = √(2P) cos(2πf_c t)    s₂(t) = √(2P) sin(2πf_c t)
s₃(t) = −√(2P) cos(2πf_c t)    s₄(t) = −√(2P) sin(2πf_c t)    all for 0 < t < T
Using Gram-Schmidt orthonormalization we find two basis functions:
f₁(t) = √(2/T) cos(2πf_c t)    f₂(t) = √(2/T) sin(2πf_c t)    for 0 < t < T
Now:
P = (1/T) ∫₀ᵀ s²(t) dt = E_s/T = 2E_b/T
QPSK: Signal Space Representation
Signal vectors:
s₁ = [√E_s, 0]ᵀ   s₂ = [0, √E_s]ᵀ   s₃ = [−√E_s, 0]ᵀ   s₄ = [0, −√E_s]ᵀ
[Figure: the four QPSK points on the axes at ±√E_s, with decision regions R₁ through R₄.]
QPSK: Coordinate Rotation
The analysis is easier if we rotate coordinates by 45°:
s₁ = [√(E_s/2), √(E_s/2)]ᵀ    s₂ = [−√(E_s/2), √(E_s/2)]ᵀ
s₃ = [−√(E_s/2), −√(E_s/2)]ᵀ    s₄ = [√(E_s/2), −√(E_s/2)]ᵀ
[Figure: after rotation, the decision regions R₁ through R₄ are the four quadrants.]
QPSK: Conditional Error Probability
Pr[ŝ ≠ s₁ | s = s₁] = 1 − ∫_{R₁} p(r | s₁) dr
= 1 − ∫₀^∞ ∫₀^∞ (1/(πN₀)) exp{ −(1/N₀)[ (r₁ − √(E_s/2))² + (r₂ − √(E_s/2))² ] } dr₁ dr₂
= 1 − [ ∫₀^∞ (1/√(πN₀)) exp{ −(r₁ − √(E_s/2))²/N₀ } dr₁ ] [ ∫₀^∞ (1/√(πN₀)) exp{ −(r₂ − √(E_s/2))²/N₀ } dr₂ ]
= 1 − [1 − Q(√(E_s/N₀))]²
= 2 Q(√(E_s/N₀)) − [Q(√(E_s/N₀))]²
QPSK: Symbol Error Probability
From symmetry:
Pr[ŝ ≠ s_m | s = s_m] = 2 Q(√(E_s/N₀)) − [Q(√(E_s/N₀))]²
Thus:
P_s = Pr[ŝ ≠ s] = Σ_{m=1}^{M} p_m Pr[ŝ ≠ s_m | s = s_m]
= 2 Q(√(E_s/N₀)) − [Q(√(E_s/N₀))]²
= 2 Q(√(2E_b/N₀)) − [Q(√(2E_b/N₀))]²
QPSK: Bit Error Probability
Assume Gray mapping (adjacent symbols differ in one bit, e.g. 01 and 00):
P_b = P_s/2 = Q(√(2E_b/N₀)) − (1/2)[Q(√(2E_b/N₀))]² ≈ Q(√(2E_b/N₀))
Setting up the union bound, the exact conditional error probability is a sum over the other decision regions:
Pr[ŝ ≠ s_i | s = s_i] = Σ_{j≠i} Pr[ŝ = s_j | s = s_i] = Σ_{j≠i} ∫_{R_j} p(r | s_i) dr
[Figure: the rotated QPSK constellation with decision regions R₁ through R₄.]
A Bound on Probability
We can bound this probability by integrating over a region with just one boundary:
[Figure: ignore the presence of s₂ and s₃; then we pick s₄ over s₁ whenever z₄ ≥ z₁. The two signals are separated by d_{i,j}, with the threshold at d_{i,j}/2, so Pr[z_j ≥ z_i] = Q( d_{i,j} / √(2N₀) ).]
The Union Bound
Putting it all together:
Pr[ŝ ≠ s_i | s = s_i] = Σ_{j≠i} ∫_{R_j} p(r | s_i) dr ≤ Σ_{j≠i} Pr[z_j ≥ z_i] ≤ Σ_{j≠i} Q( d_{i,j} / √(2N₀) )
And the symbol error probability becomes:
P_s = Σ_{i=1}^{M} p_i Pr[ŝ ≠ s_i | s = s_i] ≤ Σ_{i=1}^{M} p_i Σ_{j≠i} Q( d_{i,j} / √(2N₀) )
The Improved Union Bound counts only the regions that border R_i. Consider the exact calculation:
Pr[ŝ ≠ s₁ | s = s₁] = Σ_{j=2}^{4} ∫_{R_j} p(r | s₁) dr
[Figure, Example: QPSK. The quarter-plane opposite s₁ has already been accounted for by the integrals over the bordering regions, so it need not be counted again.]
Comparison
[Plot: Performance of QPSK, BER versus E_b/N₀ (0 to 6 dB), for the union bound, the improved union bound, and the exact expression; the improved bound tracks the exact curve closely.]
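A numeric sketch reproducing the comparison, using the exact QPSK expression, the improved union bound over bordering regions, and the full union bound (the E_b/N₀ test points are arbitrary):

from math import erfc, sqrt

def Q(z):
    return 0.5 * erfc(z / sqrt(2.0))

for EbNo_dB in (0, 2, 4, 6):
    g = 10 ** (EbNo_dB / 10.0)                   # Eb/N0 as a ratio (Es = 2 Eb)
    exact = 2 * Q(sqrt(2 * g)) - Q(sqrt(2 * g))**2
    improved = 2 * Q(sqrt(2 * g))                # bordering regions only
    union = improved + Q(2 * sqrt(g))            # adds the diagonal neighbor
    print(EbNo_dB, exact, improved, union)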
Mid-Semester!
Right now, we are at the midpoint of this
class.
EE 561
Communication Theory
Spring 2003
Union bound review:
P_s ≤ Σ_{i=1}^{M} Pr[s = s_i] Σ_{j≠i} Q( d_{i,j} / √(2N₀) )
where the distance between signals is:
d_{i,j} = ‖s_i − s_j‖ = √( Σ_{k=1}^{K} (s_{i,k} − s_{j,k})² )
Improved Union Bound: only count the regions that border R_i, A_i = { j : R_j borders R_i }.
Consider the exact calculation:
Pr[ŝ ≠ s₁ | s = s₁] = Σ_{j=2}^{4} ∫_{R_j} p(r | s₁) dr
[Figure, Example: QPSK. The region opposite s₁ has been accounted for already by the bordering regions.]
Categories of Digital Modulation
Digital Modulation can be classified as:
PAM: pulse amplitude modulation
PSK: phase shift keying
QAM: quadrature amplitude modulation
Orthogonal signaling
Biorthogonal signaling
Simplex signaling
We have already defined and analyzed some of
these.
For completeness, we will now define and analyze all
of these formats.
Note: the definition and performance depend only on the geometry of the signal constellation, not on the underlying basis functions.
PAM
Pulse Amplitude Modulation
K = 1 dimension.
Usually, the M signals are equally spaced along the line: d_{i,j} = d_min (a constant) for |j − i| = 1, and symmetric about the origin:
s_i = (2i − 1 − M)(d_min/2)
[Figure: Example, 8-PAM; points s₁ … s₈ spaced d_min apart, symmetric about 0.]
Performance of PAM
Using the Improved Union bound:
P_s ≤ Σ_{i=1}^{M} Pr[s = s_i] Σ_{j∈A_i} Q( d_{i,j} / √(2N₀) )
≤ (1/M) Σ_{i=1}^{M} Σ_{j∈A_i} Q( d_{i,j} / √(2N₀) )   (assuming equiprobable signaling)
Therefore, since the M − 2 interior points have two neighbors each and the 2 end points have one:
P_s = Σ_{i=1}^{M} p_i P[ŝ ≠ s_i | s_i] = (1/M) Σ_{i=1}^{M} P[ŝ ≠ s_i | s_i]
≤ (1/M) [ 2(M − 2) Q( d_min / √(2N₀) ) + 2 Q( d_min / √(2N₀) ) ]
≤ (2(M − 1)/M) Q( d_min / √(2N₀) )
Performance of PAM
Using the Improved Union bound:
P_s ≤ (2(M − 1)/M) Q( d_min / √(2N₀) )
The energy of the iᵗʰ signal is:
E_i = [ (2i − 1 − M)(d_min/2) ]²
PAM
Average Energy per Symbol
E_s = Σ_{i=1}^{M} p_i E_i
= (1/M) Σ_{i=1}^{M} (2i − 1 − M)² (d_min/2)²   (assuming equiprobable signaling)
= (d_min/2)² (2/M) Σ_{n=1}^{M/2} (2n − 1)²   (from symmetry)
Using the arithmetic series Σ_{n=1}^{N} (2n − 1)² = (1/3) N (4N² − 1) with N = M/2:
E_s = (d_min/2)² (2/M) (1/3) (M/2) (M² − 1) = d_min² (M² − 1) / 12
Performance of PAM
Solve for d_min: d_min = √( 12 E_s / (M² − 1) )
Substitute into the union bound expression:
P_s ≤ (2(M − 1)/M) Q( d_min / √(2N₀) )
≤ (2(M − 1)/M) Q( √( 6 E_s / ((M² − 1) N₀) ) )
≤ (2(M − 1)/M) Q( √( 6 log₂M E_b / ((M² − 1) N₀) ) )   Eq. (5.2-46)
With Gray mapping, only 1 bit error will normally occur when there is a symbol error, so:
P_b ≈ (2(M − 1)/(M log₂M)) Q( √( 6 log₂M E_b / ((M² − 1) N₀) ) )
Performance Comparison: PAM
Performance gets worse as M increases.
[Plot: BER versus E_b/N₀ (0 to 20 dB) for 2-, 4-, 8-, 16-, and 32-PAM.]
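A sketch that tabulates the P_b approximation above rather than plotting it (the E_b/N₀ grid is an arbitrary choice):

from math import erfc, sqrt, log2

def Q(z):
    return 0.5 * erfc(z / sqrt(2.0))

for M in (2, 4, 8, 16, 32):
    k = log2(M)
    row = []
    for EbNo_dB in (6, 10, 14, 18):
        g = 10 ** (EbNo_dB / 10.0)
        Pb = 2 * (M - 1) / (M * k) * Q(sqrt(6 * k * g / (M**2 - 1)))
        row.append(f"{Pb:.2e}")
    print(M, row)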
MPSK
M-ary Phase Shift Keying
K = 2 dimensions.
Signals equally spaced along a circle of radius √E_s:
s_i = [ √E_s cos(2π(i − 1)/M), √E_s sin(2π(i − 1)/M) ]ᵀ
[Figure: Example, 8-PSK constellation.]
MPSK: Distances
Distance between two adjacent points (separated by an angle of 2π/M radians on a circle of radius √E_s):
Use the law of cosines, c² = a² + b² − 2ab cos C:
d_{ij}² = 2E_s − 2E_s cos(2π/M) = 2E_s [1 − cos(2π/M)]
Using 1 − cos(2θ) = 2 sin²(θ):
d_{ij}² = 4E_s sin²(π/M)
d_{ij} = 2√E_s sin(π/M)
Performance of M-PSK
Symbol error probability (M > 2):
P_s ≤ 2 Q( √(2E_s/N₀) sin(π/M) )
≤ 2 Q( √(2 log₂M E_b/N₀) sin(π/M) )
Performance Comparison:
M-PSK
Performance gets worse as M increases.
[Plot: BER versus E_b/N₀ (0 to 20 dB) for 4-, 8-, 16-, 32-, and 64-PSK.]
QAM
Quadrature Amplitude Modulation
K = 2 dimensions
Points can be placed anywhere on the plane.
Actual Ps depends on the geometry.
Example: 16-QAM.
[Figure: 4×4 grid of points spaced d_min apart. The 4 corner points have 2 neighbors each, the 8 edge points have 3 neighbors each, and the 4 interior points have 4 neighbors each.]
P_s = (1/16) Σ_{i=1}^{M} P[ŝ ≠ s_i | s_i]
≤ (1/16) [ 4·2·Q( d_min/√(2N₀) ) + 8·3·Q( d_min/√(2N₀) ) + 4·4·Q( d_min/√(2N₀) ) ]
≤ (48/16) Q( d_min/√(2N₀) )
≤ 3 Q( d_min/√(2N₀) )
Performance of Example
QAM Constellation
We would like to express P_s in terms of E_b. However, as with PAM, the energy of the M signals is not the same:
Corner points: E_i = (3d_min/2)² + (3d_min/2)² = (9/2) d_min²
Edge points: E_i = (d_min/2)² + (3d_min/2)² = (5/2) d_min²
Interior points: E_i = (d_min/2)² + (d_min/2)² = (1/2) d_min²
QAM
Average Energy per Symbol
Average energy per symbol (assuming equiprobable signaling):
E_s = Σ_{i=1}^{16} p_i E_i = (1/16) [ 4·(9/2) d_min² + 8·(5/2) d_min² + 4·(1/2) d_min² ]
= (40/16) d_min²
= (5/2) d_min²
Performance of Example
QAM Constellation
Substitute d_min² = (2/5) E_s into the union bound expression:
P_s ≤ 3 Q( d_min / √(2N₀) )
≤ 3 Q( √( E_s / (5N₀) ) )
≤ 3 Q( √( 4E_b / (5N₀) ) )
[Plot: symbol error probability versus E_b/N₀ (0 to 20 dB) for 16-PAM, 16-PSK, and 16-QAM.]
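A sketch tabulating the three 16-ary union bounds compared in the plot (all assume equiprobable signaling and E_s = 4E_b):

from math import erfc, sqrt, sin, pi

def Q(z):
    return 0.5 * erfc(z / sqrt(2.0))

M, k = 16, 4                                     # log2(16) = 4 bits per symbol
for EbNo_dB in (8, 12, 16, 20):
    g = 10 ** (EbNo_dB / 10.0)
    pam = 2 * (M - 1) / M * Q(sqrt(6 * k * g / (M**2 - 1)))
    psk = 2 * Q(sqrt(2 * k * g) * sin(pi / M))
    qam = 3 * Q(sqrt(4 * g / 5))
    print(EbNo_dB, f"{pam:.2e}", f"{psk:.2e}", f"{qam:.2e}")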
EE 561
Communication Theory
Spring 2003
Union bound review:
P_s ≤ Σ_{i=1}^{M} Pr[s = s_i] Σ_{j≠i} Q( d_{i,j} / √(2N₀) )
Improved bound: only sum over A_i = { j : R_j borders R_i }.
PAM
Pulse Amplitude Modulation
Definition:
K = 1 dimension.
Signals are equally spaced and symmetric about
the origin:
s_i = (2i − 1 − M)(d_min/2)
[Figure: Example, 8-PAM; points s₁ … s₈ spaced d_min apart about 0.]
Performance:
P_s ≤ (2(M − 1)/M) Q( √( 6 log₂M E_b / ((M² − 1) N₀) ) )
P_b ≈ (2(M − 1)/(M log₂M)) Q( √( 6 log₂M E_b / ((M² − 1) N₀) ) )
MPSK
M-ary Phase Shift Keying
Definition:
K = 2 dimensions.
Signals equally spaced along a circle of radius √E_s:
s_i = [ √E_s cos(2π(i − 1)/M), √E_s sin(2π(i − 1)/M) ]ᵀ
[Figure: Example, 8-PSK.]
Performance:
P_s ≤ 2 Q( √(2 log₂M E_b/N₀) sin(π/M) )
P_b ≈ (2/log₂M) Q( √(2 log₂M E_b/N₀) sin(π/M) )
QAM
Quadrature Amplitude Modulation
Definition:
K = 2 dimensions
Points can be placed anywhere on the plane.
Neighboring points are normally distance dmin apart.
Constellation normally takes on a “box” or “cross” shape.
Performance:
Depends on geometry. In general, when p_i = 1/M:
P_s ≤ (1/M) [ N₂·2·Q( d_min/√(2N₀) ) + N₃·3·Q( d_min/√(2N₀) ) + N₄·4·Q( d_min/√(2N₀) ) ]
where N₂, N₃, N₄ are the numbers of points with 2, 3, and 4 neighbors, respectively.
[Figure: Example, 16-QAM.]
Assuming equiprobable signaling, the average symbol energy is E_s = (1/M) Σ_{i=1}^{M} ‖s_i‖².
Solve the above to get d_min = f(E_s) and plug into the expression for P_s.
Bit error probability is difficult to determine, because it depends on the exact mapping of bits to symbols.
Orthogonal signaling uses K = M dimensions, with one signal of energy E_s along each axis:
s₁ = [√E_s, 0, 0]ᵀ   s₂ = [0, √E_s, 0]ᵀ   s₃ = [0, 0, √E_s]ᵀ
[Figure: Example, 3-FSK; three orthogonal signal vectors of length √E_s.]
Performance of
Orthogonal Signaling
Distances: the signal points are equidistant:
d_{ij} = √(2E_s) for all i ≠ j
Union bound:
P_s ≤ (M − 1) Q( √(E_s/N₀) ) = (M − 1) Q( √( E_b log₂M / N₀ ) )
[Plot: BER versus E_b/N₀ (0 to 16 dB) for 2-, 4-, 8-, 16-, 32-, and 64-FSK; unlike PAM and PSK, performance improves as M increases.]
Limits of FSK
As M → ∞, then Ps → 0 provided that:
E_b/N₀ > ln 2 ≈ −1.59 dB   Eq. (5.2-30)
Simplex signaling.
Biorthogonal Signaling
K=M/2 dimensions
M is even.
First M/2 signals are orthogonal:
s_i(t) = √E_s f_i(t) for 1 ≤ i ≤ M/2
Remaining M/2 signals are the negatives:
s_i(t) = −√E_s f_{i−M/2}(t) for M/2 + 1 ≤ i ≤ M
Since it uses half as many dimensions as orthogonal signaling, the bandwidth requirement is halved:
W ≥ K R_s / 2 = K R_b / (2 log₂M) = M R_b / (4 log₂M)
Example Biorthogonal Signal Set
Biorthogonal signal set for M=6.
[Figure: biorthogonal signal set for M = 6; three orthogonal axes with points at ±√E_s on each.]
Performance of Biorthogonal Signals
Compute the distances:
d_{i,j} = 0 for i = j
d_{i,j} = 2√E_s for |i − j| = M/2
d_{i,j} = √(2E_s) otherwise
Union Bound:
P_s ≤ (M − 2) Q( √(E_s/N₀) ) + Q( √(2E_s/N₀) )
≈ (M − 2) Q( √( E_b log₂M / N₀ ) )   (keeping the dominant term)
Simplex signaling achieves the same performance as orthogonal signaling with the signal energy scaled by M/(M − 1). Therefore,
P_s ≤ (M − 1) Q( √( E_s′ M / ((M − 1) N₀) ) )
≤ (M − 1) Q( √( M E_b log₂M / ((M − 1) N₀) ) )
BPSK and QPSK
Categorize the following:
BPSK
MPSK?
QAM?
Orthogonal?
Biorthogonal?
Simplex?
QPSK
MPSK?
QAM?
Orthogonal?
Biorthogonal?
Simplex?