
ECE 141 Lecture 7:

Optimal Detection of
Memoryless Digital
Modulation
Symbol Detection Problem, Optimal Decision Rules, BER Performance Analysis

ECE 141: DIGITAL COMMUNICATIONS 1


Review: Digital Modulation
[Block diagram: groups of $L$ bits ($10\ldots1$, $00\ldots1$, $01\ldots0$, …) → Modulator → $s_m(t)$]

❑ A series of bits is mapped to different symbols.


❑ $L$ bits per symbol creates a symbol space with $2^L$ elements.
❑ A trade-off between bandwidth and power is seen when
evaluating the reliability of different signaling schemes.
❑ Adding memory to the modulation removes abrupt
transitions in the signal which reduces signal side-lobes.



Symbol Detection Problem
The symbol mapper maps the
bits to a symbol in the
constellation set

Additive Noise Channel:

$m \rightarrow \mathbf{s}_m, \qquad \mathbf{r} = \mathbf{s}_m + \mathbf{n} \rightarrow \text{Detection} \rightarrow g(\mathbf{r}) = \hat{m}$

The Symbol Detection Problem is a special case of HYPOTHESIS TESTING:

Bayesian Inference: we infer the parameter from what we can observe.

Parameter Space ($m \in S$) → Statistical Experiment → Observation Space ($\mathbf{r} \sim P_\theta$) → Decision Making ($g(\cdot)$) → $g(\mathbf{r}) = \hat{m} \in \hat{S}$


Optimal Detection
Parameter Space ($m \in S$) → Statistical Experiment → Observation Space ($\mathbf{r} \sim P_\theta$) → Decision Making ($g(\cdot)$) → $g(\mathbf{r}) = \hat{m} \in \hat{S}$

❑ For any given $m \in S$, we measure the performance of our decision-making
algorithm by its probability of error, defined as:

$P_e = P(\hat{m} \neq m)$

❑ GOAL: Find the decision rule that minimizes the average probability of
error:

$g_{opt} = \arg\min_g P(\hat{m} \neq m)$


Bayesian Formulation for
Optimal Detection
Parameter Space ($m \in S$) → Statistical Experiment → Observation Space ($\mathbf{r} \sim P_\theta$) → Decision Making ($g(\cdot)$) → $g(\mathbf{r}) = \hat{m} \in \hat{S}$

❑ The probability of error can be written in terms of the probability of a
correct decision:

$P_e = 1 - P_c$

❑ Given that $\mathbf{r}$ is received, a correct decision occurs when the
transmitted symbol $m$ is identified:

$P_c = P(\hat{m} = m) = p(\mathbf{s}_m \mid \mathbf{r})$

❑ The quantity $p(\mathbf{s}_m \mid \mathbf{r})$ is called the posterior
probability.


Maximum A Posteriori (MAP)
Decision Rule
❑ Maximizing $P_c$ minimizes $P_e$, which is the condition for an optimal
decision:

$\hat{m} = g_{opt}(\mathbf{r}) = \arg\max_m p(\mathbf{s}_m \mid \mathbf{r})$

❑ Maximizing the posterior probability is called the maximum a posteriori
(MAP) rule.
❑ Note: given an observation $\mathbf{r}$, the receiver must guess the
correct symbol $m$. To maximize the chance of guessing the correct symbol,
the posterior probability must be maximized.
❑ Note: the prior probability $p(\mathbf{s}_m)$ is the probability that
symbol $\mathbf{s}_m$ is sent, prior to any observation.


MAP Estimation Example
❑ Consider $N$ scalar observations under i.i.d. Gaussian noise:

$y_i = x + z_i, \qquad i = 1, 2, \ldots, N$

❑ The parameter and the noise are both normally distributed:

$f_X(x) = \frac{1}{\sqrt{2\pi}\,\sigma_x} \exp\left(-\frac{x^2}{2\sigma_x^2}\right), \qquad f_{Z_i}(z_i) = \frac{1}{\sqrt{2\pi}\,\sigma_z} \exp\left(-\frac{z_i^2}{2\sigma_z^2}\right)$

❑ Conditioned on $x$, the observations are independent, so the conditional
probability of $\mathbf{y}$ given $x$ is:

$f_{\mathbf{Y}|X}(\mathbf{y} \mid x) = \prod_{i=1}^{N} \frac{1}{\sqrt{2\pi}\,\sigma_z} \exp\left(-\frac{(y_i - x)^2}{2\sigma_z^2}\right) = \frac{1}{(2\pi\sigma_z^2)^{N/2}} \exp\left(-\frac{1}{2\sigma_z^2} \sum_{i=1}^{N} (y_i - x)^2\right)$


MAP Estimation Example
To solve for the unknown parameter using the MAP decision rule, we need the
posterior probability:

$f_{X|\mathbf{Y}}(x \mid \mathbf{y}) = \frac{f_{\mathbf{Y}|X}(\mathbf{y} \mid x)\, f_X(x)}{f_{\mathbf{Y}}(\mathbf{y})} = c(\mathbf{y}) \exp\left(-\frac{1}{2}\left[\frac{x^2}{\sigma_x^2} + \frac{1}{\sigma_z^2}\sum_{i=1}^{N}(y_i - x)^2\right]\right)$

where $c(\mathbf{y})$ collects the normalization factors that do not depend
on $x$.

NOTE: the logarithm is monotonic, so maximizing the log-posterior maximizes
the posterior:

$\ln f_{X|\mathbf{Y}}(x \mid \mathbf{y}) = \ln c(\mathbf{y}) - \frac{1}{2}\left[\frac{x^2}{\sigma_x^2} + \frac{1}{\sigma_z^2}\sum_{i=1}^{N}(y_i - x)^2\right]$

Setting the derivative with respect to $x$ to zero:

$\frac{\partial \ln f_{X|\mathbf{Y}}(x \mid \mathbf{y})}{\partial x} = -\frac{x}{\sigma_x^2} + \frac{1}{\sigma_z^2}\sum_{i=1}^{N}(y_i - x) = 0 \;\;\rightarrow\;\; \hat{x} = \frac{\sigma_x^2}{(\sigma_z^2/N) + \sigma_x^2}\left(\frac{1}{N}\sum_{i=1}^{N} y_i\right)$
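As a sanity check on the closed-form MAP estimate, here is a minimal numerical sketch assuming the same setup (Gaussian prior on the mean, i.i.d. Gaussian noise); the values of the variances and the sample size are illustrative, not from the slides.

```python
import numpy as np

# Assumed setup: prior x ~ N(0, sx^2), i.i.d. noise z_i ~ N(0, sz^2).
rng = np.random.default_rng(0)
sx, sz, N = 2.0, 1.0, 50
x_true = rng.normal(0.0, sx)
y = x_true + rng.normal(0.0, sz, size=N)

# Closed-form MAP estimate: the sample mean shrunk toward the prior mean (0).
x_map = sx**2 / (sz**2 / N + sx**2) * y.mean()

# Cross-check: maximize the log-posterior (up to the constant ln c(y)) on a grid.
grid = np.linspace(-10.0, 10.0, 20001)
log_post = -grid**2 / (2 * sx**2) \
           - ((y[:, None] - grid[None, :]) ** 2).sum(axis=0) / (2 * sz**2)
x_grid = grid[np.argmax(log_post)]
```

The shrinkage factor $\sigma_x^2 / (\sigma_z^2/N + \sigma_x^2)$ is less than 1, so the MAP estimate always pulls the sample mean toward the prior mean.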


Maximum Likelihood (ML)
Decision Rule
❑ The prior distribution is often uniform, i.e., there is no preference on
the hidden parameter. Then:

$\hat{m} = \arg\max_m p(\mathbf{s}_m \mid \mathbf{r}) = \arg\max_m \frac{p(\mathbf{r} \mid \mathbf{s}_m)\, p(\mathbf{s}_m)}{p(\mathbf{r})} = \arg\max_m p(\mathbf{r} \mid \mathbf{s}_m)$

since $p(\mathbf{s}_m)$ and $p(\mathbf{r})$ are constant with $m$.

❑ $p(\mathbf{r} \mid \mathbf{s}_m)$ is called the likelihood of $\mathbf{r}$
given symbol $\mathbf{s}_m$.
❑ In this special case, the MAP rule becomes the ML rule.
❑ The ML rule is widely used in statistical communication theory, largely
because ignoring the prior probability makes it less demanding in terms of
computational complexity.


ML Decision Rule for Signal
Detection
❑ Consider vector detection of transmitted symbols under i.i.d. Gaussian
noise:

$\mathbf{r} = \mathbf{s}_m + \mathbf{n} \rightarrow \text{Detection} \rightarrow \hat{m} = g(\mathbf{r}), \qquad \mathbf{s}_m \in \{\mathbf{s}_1, \mathbf{s}_2, \ldots, \mathbf{s}_M\}$

❑ Note that $\mathbf{n} = \mathbf{r} - \mathbf{s}_m$ and $\mathbf{n}$ is
Gaussian.
❑ Likelihood function:

$p(\mathbf{r} \mid \mathbf{s}_m) = p(\mathbf{n}) = \frac{1}{(\pi N_0)^{N/2}} \exp\left(-\frac{\|\mathbf{r} - \mathbf{s}_m\|^2}{N_0}\right)$

❑ Take the natural logarithm for simplicity:

$\ln p(\mathbf{r} \mid \mathbf{s}_m) = -\frac{N}{2}\ln(\pi N_0) - \frac{\|\mathbf{r} - \mathbf{s}_m\|^2}{N_0}$


ML Decision Rule for Signal
Detection
❑ Examining the natural logarithm:

$\ln p(\mathbf{r} \mid \mathbf{s}_m) = -\frac{N}{2}\ln(\pi N_0) - \frac{\|\mathbf{r} - \mathbf{s}_m\|^2}{N_0}$

The first term is constant w.r.t. $\mathbf{s}_m$ and has no effect on the
maximization.

❑ Maximizing the likelihood function:

$\arg\max_m p(\mathbf{r} \mid \mathbf{s}_m) = \arg\max_m \ln p(\mathbf{r} \mid \mathbf{s}_m) = \arg\min_m \|\mathbf{r} - \mathbf{s}_m\|^2$

❑ To maximize the likelihood function, select the symbol $\mathbf{s}_m$ that
is closest to the received signal $\mathbf{r}$: the minimum Euclidean
distance rule.
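The minimum-distance rule can be sketched in a few lines; the 4-point constellation below is an illustrative assumption, not one from the slides.

```python
import numpy as np

# Illustrative 2-D constellation with four symbol points.
constellation = np.array([[1.0, 1.0], [-1.0, 1.0], [-1.0, -1.0], [1.0, -1.0]])

def ml_detect(r, points):
    """ML detection in AWGN: return the index m minimizing ||r - s_m||^2."""
    return int(np.argmin(((points - r) ** 2).sum(axis=1)))

m_hat = ml_detect(np.array([0.9, 1.2]), constellation)  # nearest to point 0
```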


Scalar Detection using ML
Decision Rule
[Figure: the conditional densities $f_{Y|X}(y \mid x = x_A)$ and $f_{Y|X}(y \mid x = x_B)$, with the ML decision boundary at $x = (x_A + x_B)/2$]

❑ ML Rule: select $x_A$ if $f_{Y|X}(y \mid x = x_A) > f_{Y|X}(y \mid x = x_B)$.
❑ Log-likelihood ratio (LLR) as a measure of certainty:

$LLR_{x_A,x_B}(y) = \ln \frac{f_{Y|X}(y \mid x = x_B)}{f_{Y|X}(y \mid x = x_A)}$

with $\hat{x} = x_B$ if $LLR_{x_A,x_B}(y) > 0$, $\hat{x} = x_A$ if
$LLR_{x_A,x_B}(y) < 0$, and the decision uncertain if
$LLR_{x_A,x_B}(y) \approx 0$.
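For Gaussian likelihoods with a common variance, the LLR reduces to a difference of squared distances. A minimal sketch (the values of xA, xB, and s2 are illustrative):

```python
def llr(y, xA, xB, s2):
    """ln f(y | x = xB) - ln f(y | x = xA) for Gaussian N(x, s2) likelihoods."""
    return ((y - xA) ** 2 - (y - xB) ** 2) / (2.0 * s2)

xA, xB, s2 = -1.0, 1.0, 0.5
# Sign of the LLR gives the decision: positive -> xB, negative -> xA,
# near zero -> uncertain.
```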


ML Decision Region
[Figure: three constellation points $\mathbf{x}_1, \mathbf{x}_2, \mathbf{x}_3$ in the $(\psi_1(t), \psi_2(t))$ plane with decision regions $D_1, D_2, D_3$]

❑ Since the ML decision is equivalent to the minimum-distance criterion, we
can define each decision region as the set of all points closest to a signal
point (NEAREST NEIGHBOR).
❑ Decision boundaries are orthogonal to the straight lines (dashed red)
connecting the constellation points.
❑ For the ML decision, the boundaries cut the dashed red lines midway.
❑ For the MAP decision, the prior probabilities affect the placement of the
boundaries.


Other Decision Region
Examples (Using ML Rule)



Binary Vector Detection
Binary vector detection can be reduced to (scalar) binary PAM detection through projection!



MAP Decision Rule for Signal
Detection
❑ Consider vector detection of transmitted symbols under i.i.d. Gaussian
noise:

$\mathbf{r} = \mathbf{s}_m + \mathbf{n} \rightarrow \text{Detection} \rightarrow \hat{m} = g(\mathbf{r}), \qquad \mathbf{s}_m \in \{\mathbf{s}_1, \mathbf{s}_2, \ldots, \mathbf{s}_M\}$

❑ Likelihood function:

$p(\mathbf{r} \mid \mathbf{s}_m) = p(\mathbf{n}) = \frac{1}{(\pi N_0)^{N/2}} \exp\left(-\frac{\|\mathbf{r} - \mathbf{s}_m\|^2}{N_0}\right)$


MAP Decision Rule for Signal
Detection
❑ The posterior probability of $\mathbf{s}_m$ can be expressed as:

$p(\mathbf{s}_m \mid \mathbf{r}) = \frac{p(\mathbf{r} \mid \mathbf{s}_m)\, p(\mathbf{s}_m)}{p(\mathbf{r})}$

where $p(\mathbf{r})$ is constant with $m$ and can be ignored.

❑ Substitute the likelihood:

$p(\mathbf{s}_m \mid \mathbf{r}) \propto p(\mathbf{s}_m)\, \frac{1}{(\pi N_0)^{N/2}} \exp\left(-\frac{\|\mathbf{r} - \mathbf{s}_m\|^2}{N_0}\right)$

❑ Taking the logarithm:

$\ln p(\mathbf{s}_m \mid \mathbf{r}) = \ln p(\mathbf{s}_m) - \frac{N}{2}\ln(\pi N_0) - \frac{\|\mathbf{r} - \mathbf{s}_m\|^2}{N_0} + \text{const.}$

❑ MAP decision rule as a (biased) minimum-distance rule:

$\arg\max_m p(\mathbf{s}_m \mid \mathbf{r}) = \arg\min_m \left[\|\mathbf{r} - \mathbf{s}_m\|^2 - N_0 \ln p(\mathbf{s}_m)\right]$
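The biased minimum-distance metric can be sketched directly; the one-dimensional two-point constellation, the priors, and N0 below are illustrative assumptions.

```python
import numpy as np

# Illustrative setup: two symbols at -1 and +1 with unequal priors.
points = np.array([[-1.0], [1.0]])
priors = np.array([0.9, 0.1])
N0 = 2.0

def map_detect(r, points, priors, N0):
    """MAP detection: minimize ||r - s_m||^2 - N0 * ln p(s_m)."""
    metric = ((points - r) ** 2).sum(axis=1) - N0 * np.log(priors)
    return int(np.argmin(metric))

# Slightly past the midpoint, MAP still picks the much more likely symbol,
# while the ML rule (equal priors) picks the nearer one.
m_map = map_detect(np.array([0.1]), points, priors, N0)
m_ml = map_detect(np.array([0.1]), points, np.array([0.5, 0.5]), N0)
```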


MAP Decision Rule for Signal
Detection
❑ Consider the two-symbol case with symbols $\mathbf{s}_A$ and $\mathbf{s}_B$.
❑ Choose $\mathbf{s}_A$ if:

$\|\mathbf{r} - \mathbf{s}_A\|^2 - N_0 \ln p(\mathbf{s}_A) < \|\mathbf{r} - \mathbf{s}_B\|^2 - N_0 \ln p(\mathbf{s}_B)$

$\|\mathbf{r} - \mathbf{s}_A\|^2 - \|\mathbf{r} - \mathbf{s}_B\|^2 < N_0 \ln \frac{p(\mathbf{s}_A)}{p(\mathbf{s}_B)}$

$\left(\|\mathbf{r} - \mathbf{s}_A\| + \|\mathbf{r} - \mathbf{s}_B\|\right)\left(\|\mathbf{r} - \mathbf{s}_A\| - \|\mathbf{r} - \mathbf{s}_B\|\right) < N_0 \ln \frac{p(\mathbf{s}_A)}{p(\mathbf{s}_B)}$

❑ One scenario: $\mathbf{r}$ lies on the segment between $\mathbf{s}_A$ and
$\mathbf{s}_B$, so that $d_{Ar} + d_{Br} = d_{AB}$. Then:

$d_{AB}\left(2 d_{Ar} - d_{AB}\right) < N_0 \ln \frac{p(\mathbf{s}_A)}{p(\mathbf{s}_B)} \;\;\rightarrow\;\; d_{Ar} < \frac{d_{AB}}{2} + \frac{N_0}{2 d_{AB}} \ln \frac{p(\mathbf{s}_A)}{p(\mathbf{s}_B)}$


Scalar Detection using MAP
Decision Rule
[Figure: weighted conditional densities $p(\mathbf{r} \mid \mathbf{s}_A)\, p(\mathbf{s}_A)$ and $p(\mathbf{r} \mid \mathbf{s}_B)\, p(\mathbf{s}_B)$; the ML boundary sits at $d_{AB}/2$, the MAP boundary at $d_{AB}/2 + \Delta_{AB}$]

❑ For MAP detection, we take the prior probabilities into account.
❑ Rationale: if the symbols are not equally likely, the detector should
favor the symbol that is more likely to occur → a shift in the decision
boundary.
❑ Placement of the decision boundary (measured from $\mathbf{s}_A$):

$\frac{d_{AB}}{2} + \Delta_{AB}, \qquad \Delta_{AB} = \frac{N_0}{2 d_{AB}} \ln \frac{p(\mathbf{s}_A)}{p(\mathbf{s}_B)}$


ML and MAP
❑ ML is a special case of MAP wherein the symbols are assumed to be
equiprobable a priori.
❑ The consequence of equiprobable symbols is that the decision boundary lies
exactly midway between two symbols. The perpendicular distance between the
boundary and a symbol is $d_{min}/2$.
❑ For MAP, the a priori probabilities must be known to optimize the decision
boundary.
❑ How do we compare the performance of a decision rule and, by extension, a
symbol mapping scheme?




Preliminaries: The Q-Function

❑ The Q-function is the tail probability of the standard Gaussian
distribution:

$Q(x) = \Pr\{\mathcal{N}(0,1) > x\} = \int_x^{\infty} \frac{1}{\sqrt{2\pi}} \exp\left(-\frac{t^2}{2}\right) dt$

❑ Important properties:
▪ $Q(x)$ is a decreasing function
▪ $Q(0) = 1/2$
▪ $Q(\infty) = 0$, $Q(-\infty) = 1$
▪ $Q(x) + Q(-x) = 1$
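In code, the Q-function is usually computed through the complementary error function, using the identity $Q(x) = \frac{1}{2}\,\mathrm{erfc}(x/\sqrt{2})$:

```python
import math

def Q(x):
    """Tail probability of the standard Gaussian: Q(x) = 0.5 * erfc(x / sqrt(2))."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))
```

The properties listed above (decreasing, $Q(0) = 1/2$, $Q(x) + Q(-x) = 1$) follow directly from this form.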


Preliminaries: Bounds on the
Q-function
❑ Consider the change of variable $z = x - t$:

$Q(x) = \int_x^{\infty} \frac{1}{\sqrt{2\pi}} \exp\left(-\frac{t^2}{2}\right) dt = \frac{1}{\sqrt{2\pi}} \exp\left(-\frac{x^2}{2}\right) \int_{-\infty}^{0} \exp(xz) \exp\left(-\frac{z^2}{2}\right) dz$

❑ Expand the second exponential as a series:

$\exp\left(-\frac{z^2}{2}\right) = 1 - \frac{z^2/2}{1!} + \frac{(z^2/2)^2}{2!} - \cdots + (-1)^n \frac{(z^2/2)^n}{n!} + \cdots$

❑ Perform another change of variable ($v = -xz$) and consider the following
integral (this is the Gamma function evaluated at integers):

$\int_0^{\infty} v^n \exp(-v)\, dv = n!$


Preliminaries: Bounds on the
Q-function
❑ For $x > 0$ and using $(n+1)$ terms of the series, we can recast the
Q-function as follows:

$Q(x) \approx \frac{1}{\sqrt{2\pi}\,x} \exp\left(-\frac{x^2}{2}\right) \left[1 - \frac{1}{x^2} + \frac{1 \cdot 3}{x^4} - \cdots + (-1)^n \frac{1 \cdot 3 \cdot 5 \cdots (2n-1)}{x^{2n}}\right]$

❑ We can deduce upper and lower bounds on the Q-function:

$\frac{1}{\sqrt{2\pi}\,x} \exp\left(-\frac{x^2}{2}\right) \left(1 - \frac{1}{x^2}\right) < Q(x) < \frac{1}{\sqrt{2\pi}\,x} \exp\left(-\frac{x^2}{2}\right)$

❑ For very large $x$, a second bound can be obtained by ignoring the
$x^{-1}$ factor:

$Q(x) \leq \frac{1}{2} \exp\left(-\frac{x^2}{2}\right)$

BOTTOM LINE: the Q-function behaves like an exponential decaying at rate
$-x^2/2$.
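The bounds above can be checked numerically at a few sample points $x > 1$ (the sample points themselves are arbitrary):

```python
import math

def Q(x):
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def tight_upper(x):
    # (1 / (sqrt(2*pi) * x)) * exp(-x^2 / 2)
    return math.exp(-x * x / 2) / (math.sqrt(2 * math.pi) * x)

def tight_lower(x):
    # tight upper bound scaled by (1 - 1/x^2)
    return tight_upper(x) * (1 - 1 / x**2)

def loose_upper(x):
    # 0.5 * exp(-x^2 / 2)
    return 0.5 * math.exp(-x * x / 2)

checks = [tight_lower(x) < Q(x) < tight_upper(x) <= loose_upper(x)
          for x in (1.5, 2.0, 3.0, 5.0)]
```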


Preliminaries: Measure of
Efficiency
❑ Spectral Efficiency (SE)
▪ Refers to the information rate $R_b$ (bits/s) that can be transmitted over
a finite bandwidth $B$ (Hz). That is:

$SE = \frac{R_b}{B}$

▪ Units of SE: bits/s/Hz
❑ Energy Efficiency (EE)
▪ Defined as the number of bits transmitted per Joule of energy. That is:

$EE = \frac{R_b}{P}$

▪ Units of EE: bits/Joule


Preliminaries: Signal-to-Noise
Ratio
❑ The Signal-to-Noise Ratio (SNR) is an indicator of how good the AWGN
channel is:

$SNR = \frac{\text{average symbol energy}}{\text{total noise variance}} = \frac{P}{\sigma_n^2} = \frac{E_b}{N_0} \cdot \frac{R_b}{B}$

❑ The SNR per bit, $E_b/N_0$, is a fair way of comparing schemes with
different constellation sets.

❑ For binary PAM signals with points at $\pm\sqrt{E_b}$ and noise
distributed as $\mathcal{N}(0, N_0/2)$:
▪ An error occurs if the received signal falls on the wrong side of the
decision boundary.
▪ The probability that the received signal falls in that region is:

$P_e = Q\left(\frac{\sqrt{E_b}}{\sqrt{N_0/2}}\right) = Q\left(\sqrt{\frac{2 E_b}{N_0}}\right)$
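A Monte Carlo sketch of binary PAM over AWGN, comparing the simulated error rate against $Q(\sqrt{2E_b/N_0})$; the values of Eb, N0, and the sample size are illustrative.

```python
import math
import numpy as np

rng = np.random.default_rng(1)
Eb, N0, n = 1.0, 0.5, 200_000
bits = rng.integers(0, 2, n)
s = math.sqrt(Eb) * (2 * bits - 1)            # map 0 -> -sqrt(Eb), 1 -> +sqrt(Eb)
r = s + rng.normal(0.0, math.sqrt(N0 / 2), n)  # AWGN with variance N0/2
ber_sim = float(np.mean((r > 0).astype(int) != bits))
# Theory: Pe = Q(sqrt(2*Eb/N0)), with Q(x) = 0.5*erfc(x/sqrt(2))
ber_theory = 0.5 * math.erfc(math.sqrt(2 * Eb / N0) / math.sqrt(2.0))
```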



Error Analysis of 4-PAM Signal

[Constellation: 00, 01, 11, 10 at $-3\sqrt{E_b}, -\sqrt{E_b}, +\sqrt{E_b}, +3\sqrt{E_b}$]

❑ The two outer symbols (00, 10) each have one nearest neighbor; the two
inner symbols (01, 11) each have two:

$P_e(ML; 00) = P_e(ML; 10) = Q\left(\sqrt{\frac{2E_b}{N_0}}\right), \qquad P_e(ML; 01) = P_e(ML; 11) = 2\,Q\left(\sqrt{\frac{2E_b}{N_0}}\right)$

❑ Averaging over the four equally likely symbols:

$P_e(ML) = \frac{1 + 2 + 2 + 1}{4}\, Q\left(\sqrt{\frac{2E_b}{N_0}}\right) = \frac{3}{2}\, Q\left(\sqrt{\frac{2E_b}{N_0}}\right) \leq \frac{3}{4} \exp\left(-\frac{E_b}{N_0}\right) \quad \text{at high SNR}$
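A Monte Carlo sketch of 4-PAM with levels $\pm\sqrt{E_b}, \pm 3\sqrt{E_b}$ and nearest-level ML detection, checked against the averaged $\frac{3}{2}Q(\sqrt{2E_b/N_0})$; Eb, N0, and the sample size are illustrative.

```python
import math
import numpy as np

rng = np.random.default_rng(2)
Eb, N0, n = 1.0, 0.25, 200_000
levels = math.sqrt(Eb) * np.array([-3.0, -1.0, 1.0, 3.0])
idx = rng.integers(0, 4, n)
r = levels[idx] + rng.normal(0.0, math.sqrt(N0 / 2), n)
# ML detection: pick the nearest level.
det = np.abs(r[:, None] - levels[None, :]).argmin(axis=1)
ser_sim = float(np.mean(det != idx))
Qv = 0.5 * math.erfc(math.sqrt(2 * Eb / N0) / math.sqrt(2.0))
ser_theory = 1.5 * Qv
```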


Error Analysis of QPSK/4-QAM Signal

[Constellation: 00, 01, 11, 10 at $(\pm\sqrt{E_b}, \pm\sqrt{E_b})$ in the I-Q plane]

❑ QPSK can be treated as two independent PAM transmissions, since the noise
on the I and Q branches is independent.
❑ Probability of successful detection (e.g., symbol 10 at
$(+\sqrt{E_b}, -\sqrt{E_b})$ is sent):

$P(c \mid 10) = \Pr\{v_1 > 0,\, v_2 < 0\} = \Pr\{v_1 > 0\} \Pr\{v_2 < 0\} = (1 - Q)^2, \qquad Q \triangleq Q\left(\sqrt{\frac{2E_b}{N_0}}\right)$

❑ Probability of error:

$P_e = 1 - P(c \mid 10) = 1 - (1 - Q)^2 = 2Q - Q^2 \approx 2\,Q\left(\sqrt{\frac{2E_b}{N_0}}\right) \quad \text{at high SNR}$
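A Monte Carlo sketch treating QPSK/4-QAM as two independent binary PAM branches, checked against $2Q - Q^2$; the parameter values are illustrative.

```python
import math
import numpy as np

rng = np.random.default_rng(3)
Eb, N0, n = 1.0, 0.5, 200_000
b = rng.integers(0, 2, (n, 2))                  # (I bit, Q bit) per symbol
s = math.sqrt(Eb) * (2 * b - 1)                 # +-sqrt(Eb) on each axis
r = s + rng.normal(0.0, math.sqrt(N0 / 2), (n, 2))
det = (r > 0).astype(int)
# Symbol error if either branch is decided wrongly.
ser_sim = float(np.mean((det != b).any(axis=1)))
Qv = 0.5 * math.erfc(math.sqrt(2 * Eb / N0) / math.sqrt(2.0))
ser_theory = 2 * Qv - Qv**2
```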


4-QAM vs. Binary PAM
❑ 4-QAM symbol error probability: $P_e \approx 2\,Q\left(\sqrt{\frac{2E_b}{N_0}}\right)$
❑ 2-PAM symbol error probability (points at $\pm\sqrt{E_b}$): $P_e = Q\left(\sqrt{\frac{2E_b}{N_0}}\right)$
❑ 4-QAM transmits 2 bits/symbol while 2-PAM transmits 1 bit/symbol (i.e.,
4-QAM is more spectrally efficient).
❑ 4-QAM and 2-PAM have the same energy efficiency (you can verify ☺).


4-QAM vs. 4-PAM
❑ 4-QAM symbol error probability: $P_e \approx 2\,Q\left(\sqrt{\frac{2E_b}{N_0}}\right)$
❑ 4-PAM symbol error probability: $P_e = \frac{3}{2}\,Q\left(\sqrt{\frac{2E_b}{N_0}}\right)$
❑ 4-QAM is 2.5x more energy efficient than 4-PAM:
▪ 4-QAM average Tx energy (points at $(\pm\sqrt{E_b}, \pm\sqrt{E_b})$):

$E_{QPSK} = \frac{2E_b + 2E_b + 2E_b + 2E_b}{4} = 2E_b$

▪ 4-PAM average Tx energy (points at $\pm\sqrt{E_b}, \pm 3\sqrt{E_b}$):

$E_{4\text{-}PAM} = \frac{9E_b + E_b + E_b + 9E_b}{4} = 5E_b$
Error Analysis of 16-QAM Signal

[Constellation: 16-QAM with 4-bit labels on a 4x4 grid in the I-Q plane]

❑ The exact performance is hard to compute.
❑ Alternative solution: use the pairwise error probability to find bounds on
the performance.
▪ Suppose symbol $x_i$ is transmitted. An error occurs if, for some
$l \neq i$:

$\|y - x_i\|^2 > \|y - x_l\|^2$

▪ The probability of symbol error can be expressed as:

$P(\varepsilon \mid x = x_i) = P\left(\bigcup_{l=1,\, l \neq i}^{M} \left\{\|y - x_i\|^2 > \|y - x_l\|^2\right\}\right)$

▪ Apply the union bound: the probability of a union is at most the sum of
the individual probabilities (with equality when the events are mutually
exclusive):

$P(\varepsilon \mid x = x_i) \leq \sum_{l \neq i} P(\hat{x} = x_l \mid x = x_i)$


Error Analysis of 16-QAM Signal

❑ The pairwise error probability can be written as:

$P(\hat{x} = x_l \mid x = x_i) = Q\left(\frac{\|x_i - x_l\|/2}{\sqrt{N_0/2}}\right) = Q\left(\frac{\|x_i - x_l\|}{\sqrt{2N_0}}\right) \leq Q\left(\frac{d_{min}}{\sqrt{2N_0}}\right)$

where $d_{min}$ is the smallest distance between any pair of points in the
constellation.

❑ Summing the $(M-1)$ pairwise terms of the union bound:

$P(\varepsilon \mid x = x_i) \leq (M - 1)\, Q\left(\frac{d_{min}}{\sqrt{2N_0}}\right)$


Error Analysis of 16-QAM Signal

❑ Finally, we can bound the symbol error probability as:

$\frac{1}{M}\, Q\left(\frac{d_{min}}{\sqrt{2N_0}}\right) \leq P(\varepsilon) \leq (M - 1)\, Q\left(\frac{d_{min}}{\sqrt{2N_0}}\right)$

❑ At high SNR, the bound becomes tight.

[Figure: symbol error probability vs. SNR, an example of an error curve]
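The bounds can be evaluated numerically; the values of d_min and N0 below are illustrative assumptions, not taken from a specific 16-QAM normalization.

```python
import math

def Q(x):
    return 0.5 * math.erfc(x / math.sqrt(2.0))

# Illustrative parameters for a 16-point constellation.
M, d_min, N0 = 16, 2.0, 0.1
arg = d_min / math.sqrt(2 * N0)
p_lower = Q(arg) / M              # lower bound on P(error)
p_upper = (M - 1) * Q(arg)        # union upper bound on P(error)
```

The two bounds differ only by the constant factor $M(M-1)$, so on a log scale they are parallel curves with the same $-d_{min}^2/(4N_0)$ exponential decay.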


Error Analysis of M-PSK
[Constellation: M-PSK points on a circle of radius $\sqrt{E}$ in the I-Q plane]

❑ Note that for PSK the information is encoded only in the phase.
❑ The distance between two adjacent points can be expressed as:

$d = d_{min} = 2\sqrt{E}\, \sin\left(\frac{\pi}{M}\right)$

❑ Plugging this value of $d_{min}$ into the union bound approximation, we
get:

$\frac{1}{M}\, Q\left(\sqrt{\frac{2E}{N_0}} \sin\frac{\pi}{M}\right) \leq P(\varepsilon) \leq (M - 1)\, Q\left(\sqrt{\frac{2E}{N_0}} \sin\frac{\pi}{M}\right)$
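A sketch of the M-PSK union bound using $d_{min} = 2\sqrt{E}\sin(\pi/M)$; the values of E and N0 are illustrative.

```python
import math

def Q(x):
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def psk_union_bound(M, E, N0):
    """Union upper bound (M-1)*Q(sqrt(2E/N0)*sin(pi/M)) on M-PSK symbol error."""
    return (M - 1) * Q(math.sqrt(2 * E / N0) * math.sin(math.pi / M))

E, N0 = 1.0, 0.1
p4 = psk_union_bound(4, E, N0)
p8 = psk_union_bound(8, E, N0)
```

Increasing M shrinks $\sin(\pi/M)$, so at fixed energy the points crowd together and the error bound grows: this is the bandwidth-power trade-off from the review slide in action.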


Error Analysis of M-FSK
[Figure: orthogonal basis functions $\psi_1(t), \psi_2(t), \psi_3(t), \psi_4(t)$, one axis per symbol]

❑ The $k$-th symbol places all of its energy on the $k$-th basis function:

$\mathbf{s}_k = [0, \ldots, 0, \sqrt{E}, 0, \ldots, 0]^T \quad (\sqrt{E} \text{ in the } k\text{-th element})$

❑ The energy of every symbol is $E$.
❑ Received signal:

$\mathbf{r} = \mathbf{s}_k + \mathbf{n} = [n_1, \ldots, \sqrt{E} + n_k, \ldots, n_M]^T = [r_1, \ldots, r_M]^T$
Optimal Decision for M-FSK
❑ Since the signals are orthogonal to each other, the dot product can be
used for the optimal decision:

$\hat{m} = \arg\max_m \mathbf{r}^T \mathbf{s}_m = \arg\max_m r_m$

❑ The problem reduces to finding the largest received-signal component. If
symbol $\mathbf{s}_1 = [\sqrt{E}, 0, \ldots, 0]^T$ is transmitted:

$r_m = \begin{cases} \sqrt{E} + n_1, & m = 1 \\ n_m, & m \neq 1 \end{cases}$


Optimal Decision for M-FSK
❑ A correct decision occurs only if:

$\hat{m} = \arg\max_m r_m = 1, \quad \text{i.e.,} \quad r_1 > r_2,\; r_1 > r_3,\; \ldots,\; r_1 > r_M$

❑ The minimum distance between FSK symbols:

$d_{min} = \sqrt{2E} = \sqrt{2 (\log_2 M)\, E_b}$

❑ The bit error probability is bounded by:

$\frac{M}{2(M-1)}\, Q\left(\sqrt{(\log_2 M)\, \frac{E_b}{N_0}}\right) \leq P_b(e) \leq \frac{M}{2}\, Q\left(\sqrt{(\log_2 M)\, \frac{E_b}{N_0}}\right)$

❑ Note that as $M$ increases, $P_b(e)$ decreases: the opposite of what
happens for QAM, ASK, and PSK.
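A Monte Carlo sketch of M-FSK detection by picking the largest received component $r_m$, with the union bound on the symbol error rate for comparison; M, E, N0, and the sample size are illustrative.

```python
import math
import numpy as np

rng = np.random.default_rng(4)
M, E, N0, n = 4, 4.0, 0.5, 50_000
sent = rng.integers(0, M, n)
r = rng.normal(0.0, math.sqrt(N0 / 2), (n, M))   # noise on all M components
r[np.arange(n), sent] += math.sqrt(E)            # add sqrt(E) on the sent tone
det = r.argmax(axis=1)                           # largest-component decision
ser_sim = float(np.mean(det != sent))
# Union bound with d_min = sqrt(2E): P(error) <= (M-1) * Q(sqrt(E/N0))
ub = (M - 1) * 0.5 * math.erfc(math.sqrt(E / N0) / math.sqrt(2.0))
```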

