
Digital Communications

Chapter 4. Optimum Receivers for AWGN Channels


H. F. Francis Lu

2014 Digital Communications. Chap. 04

Ver. 2014.11.10

H. F. Francis Lu


4.1 Waveform and Vector Channel Models

System View

AWGN: Additive White Gaussian Noise

$S_n(f) = \frac{N_0}{2}$ Watt/Hz

$r(t)$, $s_m(t)$, and $n(t)$ are bandpass real-valued signals

Assumption

$r(t) = s_m(t) + n(t)$

Definition 1 (Optimality)
Given $r(t)$, estimate $m$ (or $s_m(t)$) such that the error probability is minimized.

Models for Analysis

Signal demodulator: Vectorization
$r(t) \to [r_1, r_2, \ldots, r_N]$

Detector: minimize the probability of error in the above functional block
$[r_1, r_2, \ldots, r_N] \to \hat{m}$

Waveform to Vector

The bandpass AWGN channel is defined as
$r(t) = s_m(t) + n(t)$
where $n(t)$ is a WSS Gaussian random process with
$R_n(\tau) = \frac{N_0}{2}\delta(\tau)$.

Let $\{e_i(t), 1 \le i \le N\}$ be a spanning set of orthonormal functions for the signals $\{s_m(t), 1 \le m \le M\}$; then define
$n_i = \langle n(t), e_i(t)\rangle = \int n(t)e_i(t)\,dt$

$n_i = \langle n(t), e_i(t)\rangle = \int n(t)e_i(t)\,dt$

Mean of $n_i$:
$E n_i = E\int n(t)e_i(t)\,dt = 0$

Variance of $n_i$ and cross-correlation:
$E n_i n_j = E\Big[\int n(t)e_i(t)\,dt \int n(\tau)e_j(\tau)\,d\tau\Big] = \frac{N_0}{2}\iint \delta(t-\tau)e_i(t)e_j(\tau)\,dt\,d\tau = \frac{N_0}{2}\delta_K(i-j).$

So we have
$n(t) = \sum_{i=1}^{N} n_i e_i(t) + \tilde{n}(t)$

Why $\tilde{n}(t)$? Because $\{e_i(t), i = 1, 2, \ldots, N\}$ is not complete.

But $\tilde{n}(t)$ is independent of $\sum_{i=1}^{N} n_i e_i(t)$.

Proof:
$E\,\tilde{n}(t)\Big(\sum_{i=1}^{N} n_i e_i(\tau)\Big) = E\Big(n(t) - \sum_{j=1}^{N} n_j e_j(t)\Big)\Big(\sum_{i=1}^{N} n_i e_i(\tau)\Big)$
$= \sum_i E[n(t)n_i]\, e_i(\tau) - \sum_{i,j} [E n_j n_i]\, e_j(t) e_i(\tau)$
$= \sum_i E\Big[n(t)\int n(s)e_i(s)\,ds\Big] e_i(\tau) - \sum_{i,j} \frac{N_0}{2}\delta_K(i-j)\, e_j(t) e_i(\tau)$
$= \sum_i \int [E\,n(t)n(s)]\, e_i(s)\,ds\; e_i(\tau) - \frac{N_0}{2}\sum_i e_i(t) e_i(\tau)$
$= \sum_i \int \frac{N_0}{2}\delta(t-s) e_i(s)\,ds\; e_i(\tau) - \frac{N_0}{2}\sum_i e_i(t) e_i(\tau)$
$= \frac{N_0}{2}\sum_i e_i(t) e_i(\tau) - \frac{N_0}{2}\sum_i e_i(t) e_i(\tau) = 0$

Thus define
$r_i(u) = \langle r(u,t), e_i(t)\rangle = \int r(u,t)e_i(t)\,dt$
$s_{m,i} = \langle s_m(t), e_i(t)\rangle = \int s_m(t)e_i(t)\,dt$
$n_i(u) = \langle n(u,t), e_i(t)\rangle = \int n(u,t)e_i(t)\,dt$
and we can rewrite the channel model as
$\sum_{i=1}^{N} r_i(u)e_i(t) + \tilde{r}(u,t) = \sum_{i=1}^{N} s_{m,i}e_i(t) + \sum_{i=1}^{N} n_i(u)e_i(t) + \tilde{n}(u,t)$

$\tilde{r}(t)$ and $\tilde{n}(t)$ are orthogonal to the signal $s_m(t)$, so they do not affect the detection of $s_m(t)$; in fact $\tilde{r}(u,t) = \tilde{n}(u,t)$.

Define
$\boldsymbol{r} = [\,r_1 \cdots r_N\,]$
$\boldsymbol{s}_m = [\,s_{m,1} \cdots s_{m,N}\,]$
$\boldsymbol{n} = [\,n_1 \cdots n_N\,]$
so we can rewrite the waveform channel
$r(t) = s_m(t) + n(t)$
as a vector channel
$\boldsymbol{r} = \boldsymbol{s}_m + \boldsymbol{n}.$

Summary

Using a signal space with orthonormal functions $e_1(t), \ldots, e_N(t)$ we can rewrite the waveform model
$r(t) = s_m(t) + n(t)$
as
$[\,r_1 \cdots r_N\,] = [\,s_{m,1} \cdots s_{m,N}\,] + [\,n_1 \cdots n_N\,]$
with
$E n_i^2 = E\Big(\int n(t)e_i(t)\,dt\Big)^2 = \frac{N_0}{2}$

Gaussian Random Process

For a WSS (strictly stationary) white Gaussian random process with power spectral density
$S_n(f) = \frac{N_0}{2}$
the autocorrelation function is
$R_n(\tau) = \frac{N_0}{2}\delta(\tau)$

Taking any orthonormal function $e_i(t)$ we see
$\int R_n(t,s)e_i(s)\,ds = \int R_n(t-s)e_i(s)\,ds = \frac{N_0}{2}e_i(t)$
i.e. $e_i(t)$ is an eigenfunction of $R_n(\tau)$ with eigenvalue $\frac{N_0}{2}$

Now it follows from the Karhunen-Loève expansion theorem that, setting
$\boldsymbol{n} = [\,n_1 \cdots n_N\,]$
we have
$E\boldsymbol{n} = \boldsymbol{0}$ and $E\boldsymbol{n}^\top\boldsymbol{n} = \frac{N_0}{2}I_N$

The joint probability density function (pdf) of $\boldsymbol{n}$ is given by
$f(\boldsymbol{n}) = \Big(\frac{1}{\sqrt{\pi N_0}}\Big)^N \exp\Big(-\frac{\|\boldsymbol{n}\|^2}{N_0}\Big)$

Decision Function

M possible transmitted signals
$s_1(t), \ldots, s_M(t) \;\leftrightarrow\; \boldsymbol{s}_1, \ldots, \boldsymbol{s}_M$

Channel output
$\boldsymbol{r} = \boldsymbol{s}_m + \boldsymbol{n}$

The decision function
$g: \mathbb{R}^N \to \{1, 2, \ldots, M\}$ by $\boldsymbol{r} \mapsto \hat{m}$

Decision Region

Given the decision function $g: \mathbb{R}^N \to \{1, \ldots, M\}$, we can define the decision region
$D_m = \{\boldsymbol{r} \in \mathbb{R}^N : g(\boldsymbol{r}) = m\}$

The error probability of $g$ (symbol error probability, SER) is
$P_e = \sum_{m=1}^{M} P_m \Pr\{g(\boldsymbol{r}) \ne m \mid \boldsymbol{s}_m \text{ sent}\} = \sum_{m=1}^{M} P_m \sum_{m' \ne m} \int_{D_{m'}} f(\boldsymbol{r}\mid\boldsymbol{s}_m)\,d\boldsymbol{r}$
where $P_m = \Pr\{\boldsymbol{s}_m \text{ sent}\}$

Optimal Decision Function

Recall the decision function $g$ induces the decision region
$D_m = \{\boldsymbol{r} \in \mathbb{R}^N : g(\boldsymbol{r}) = m\}$

The probability of correct decision associated with $g$ is
$P_c = \sum_{m=1}^{M} \Pr\{\boldsymbol{s}_m \text{ sent}\}\int_{D_m} f(\boldsymbol{r}\mid\boldsymbol{s}_m)\,d\boldsymbol{r} = \sum_{m=1}^{M} \int_{D_m} \Pr\{\boldsymbol{s}_m \text{ sent}\mid\boldsymbol{r}\}\, f(\boldsymbol{r})\,d\boldsymbol{r}$

It implies the optimal decision (minimal $P_e$ and maximal $P_c$) is the Maximum a posteriori probability (MAP) decision
$g_{\text{opt}}(\boldsymbol{r}) = \arg\max_m \Pr\{\boldsymbol{s}_m \mid \boldsymbol{r}\}$

Maximum Likelihood Receiver

From Bayes rule we have
$\Pr\{\boldsymbol{s}_m \mid \boldsymbol{r}\} = \frac{f(\boldsymbol{r}\mid\boldsymbol{s}_m)\Pr\{\boldsymbol{s}_m\}}{f(\boldsymbol{r})}$

Hence
$g_{\text{MAP}}(\boldsymbol{r}) = \arg\max_m \Pr\{\boldsymbol{s}_m\}\, f(\boldsymbol{r}\mid\boldsymbol{s}_m)$

If the $\boldsymbol{s}_m$ are equally likely, $\Pr\{\boldsymbol{s}_m\} = \frac{1}{M}$, then
$g_{\text{ML}}(\boldsymbol{r}) = \arg\max_m f(\boldsymbol{r}\mid\boldsymbol{s}_m)$

This is the Maximum Likelihood Receiver.

Bit Level Decision

The MAP decision rule
$g_{\text{MAP}}(\boldsymbol{r}) = \arg\max_m \Pr\{\boldsymbol{s}_m\mid\boldsymbol{r}\}$
can be extended to bit-level decisions.

M-ary signaling gives $k = \log_2 M$ bits.
For the $i$th bit $b_i \in \{0,1\}$, the a posteriori probability of $b_i = \ell$ is
$\Pr\{b_i = \ell \mid \boldsymbol{r}\} = \sum_{\boldsymbol{s}_m : b_i = \ell} \Pr\{\boldsymbol{s}_m\mid\boldsymbol{r}\}$

The MAP rule for $b_i$ is
$g_{\text{MAP},i}(\boldsymbol{r}) = \arg\max_{\ell} \sum_{\boldsymbol{s}_m : b_i = \ell} \Pr\{\boldsymbol{s}_m\mid\boldsymbol{r}\}$

Bit Error Probability (BER)

$g_{\text{MAP},i}(\boldsymbol{r}) = \arg\max_{\ell} \sum_{\boldsymbol{s}_m : b_i = \ell} \Pr\{\boldsymbol{s}_m\mid\boldsymbol{r}\}$

The decision region of $b_i$ is
$B_{i,0} = \{\boldsymbol{r} \in \mathbb{R}^N : g_{\text{MAP},i}(\boldsymbol{r}) = 0\}$
$B_{i,1} = \mathbb{R}^N \setminus B_{i,0} = \{\boldsymbol{r} \in \mathbb{R}^N : g_{\text{MAP},i}(\boldsymbol{r}) = 1\}$

The error probability of $b_i$ is
$P_{b,i} = \sum_{\ell=0}^{1} \sum_{\boldsymbol{s}_m : b_i = \ell} \Pr\{\boldsymbol{s}_m\} \int_{B_{i,1-\ell}} f(\boldsymbol{r}\mid\boldsymbol{s}_m)\,d\boldsymbol{r}$

BER vs SER

The average bit error probability (BER) is
$P_b = \frac{1}{k}\sum_{i=1}^{k} P_{b,i}$

Theorem 1
$P_b \le P_e \le k P_b$

Example

Example 1
Consider two equiprobable signals $\boldsymbol{s}_1 = [0\ 0]$ and $\boldsymbol{s}_2 = [1\ 1]$ sent through an additive noisy channel with $\boldsymbol{n} = [n_1\ n_2]$ with joint pdf
$f(\boldsymbol{n}) = \begin{cases} \exp(-n_1 - n_2), & \text{if } n_1, n_2 \ge 0 \\ 0, & \text{otherwise.} \end{cases}$
Find the MAP rule and $P_e$.

Sol.
Since $P\{\boldsymbol{s}_1\} = P\{\boldsymbol{s}_2\} = \frac{1}{2}$, the MAP and ML rules coincide.
Given $\boldsymbol{r} = \boldsymbol{s} + \boldsymbol{n}$, we choose
$\Pr\{\boldsymbol{s}_1\mid\boldsymbol{r}\} \underset{\boldsymbol{s}_2}{\overset{\boldsymbol{s}_1}{\gtrless}} \Pr\{\boldsymbol{s}_2\mid\boldsymbol{r}\} \iff f(\boldsymbol{r}\mid\boldsymbol{s}_1) \underset{\boldsymbol{s}_2}{\overset{\boldsymbol{s}_1}{\gtrless}} f(\boldsymbol{r}\mid\boldsymbol{s}_2)$
$\iff e^{-(r_1-0)-(r_2-0)}\,\mathbb{1}(r_1 \ge 0, r_2 \ge 0) \underset{\boldsymbol{s}_2}{\overset{\boldsymbol{s}_1}{\gtrless}} e^{-(r_1-1)-(r_2-1)}\,\mathbb{1}(r_1 \ge 1, r_2 \ge 1)$

Hence
$D_2 = \{\boldsymbol{r} : r_1, r_2 \ge 1\}$ and $D_1 = \mathbb{R}^2 \setminus D_2$
$P_{e,1} = \Pr\{\boldsymbol{s}_1\}\int_{D_2} f(\boldsymbol{r}\mid\boldsymbol{s}_1)\,d\boldsymbol{r} = \frac{1}{2}\int_1^\infty\!\!\int_1^\infty e^{-n_1-n_2}\,dn_1\,dn_2 = \frac{1}{2}e^{-2}$
$P_{e,2} = \Pr\{\boldsymbol{s}_2\}\int_{D_1} f(\boldsymbol{r}\mid\boldsymbol{s}_2)\,d\boldsymbol{r} = 0$

Z-Channel

The channel acts like a Z-channel: $\boldsymbol{s}_1 \to \boldsymbol{s}_1$ with probability $1 - e^{-2}$; $\boldsymbol{s}_1 \to \boldsymbol{s}_2$ with probability $e^{-2}$; $\boldsymbol{s}_2 \to \boldsymbol{s}_2$ with probability $1$.

$P_{e,1} = \Pr\{\boldsymbol{s}_1\}\, e^{-2}$
$P_{e,2} = 0$
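A quick Monte Carlo sanity check of Example 1 (a sketch; the variable names and trial count are ours): when $\boldsymbol{s}_1 = [0\ 0]$ is sent, the received vector is the exponential noise itself, and the MAP rule errs exactly when both components exceed 1, which should occur with probability $e^{-2}$.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials = 200_000

# Draw the noise n = (n1, n2) with joint pdf exp(-n1 - n2), n1, n2 >= 0
n = rng.exponential(scale=1.0, size=(n_trials, 2))

# Send s1 = [0, 0]: r = n; the MAP rule decides s2 iff r1 >= 1 and r2 >= 1
r = n
decide_s2 = (r[:, 0] >= 1) & (r[:, 1] >= 1)
p_err_given_s1 = decide_s2.mean()

print(p_err_given_s1, np.exp(-2))  # both close to e^-2 ≈ 0.1353
```

Multiplying by $\Pr\{\boldsymbol{s}_1\} = \frac{1}{2}$ recovers $P_{e,1} = \frac{1}{2}e^{-2}$.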

Sufficient Statistics

Assume that, given $\boldsymbol{s}_m$ transmitted, we receive $\boldsymbol{r} = (\boldsymbol{r}_1, \boldsymbol{r}_2)$ with
$f(\boldsymbol{r}\mid\boldsymbol{s}_m) = f(\boldsymbol{r}_1, \boldsymbol{r}_2\mid\boldsymbol{s}_m) = f(\boldsymbol{r}_1\mid\boldsymbol{s}_m)\, f(\boldsymbol{r}_2\mid\boldsymbol{r}_1)$
i.e. $\boldsymbol{s}_m \to \boldsymbol{r}_1 \to \boldsymbol{r}_2$ forms a Markov chain.

Theorem 2
Under the above assumption, the optimal decision can be made without $\boldsymbol{r}_2$. (Therefore $\boldsymbol{r}_1$ is called a sufficient statistic and $\boldsymbol{r}_2$ the irrelevant data for detection of $\boldsymbol{s}_m$.)

Proof:
$g_{\text{opt}}(\boldsymbol{r}) = \arg\max_m \Pr\{\boldsymbol{s}_m\mid\boldsymbol{r}\} = \arg\max_m \Pr\{\boldsymbol{s}_m\}\, f(\boldsymbol{r}\mid\boldsymbol{s}_m)$
$= \arg\max_m \Pr\{\boldsymbol{s}_m\}\, f(\boldsymbol{r}_1\mid\boldsymbol{s}_m)\, f(\boldsymbol{r}_2\mid\boldsymbol{r}_1)$
$= \arg\max_m \Pr\{\boldsymbol{s}_m\}\, f(\boldsymbol{r}_1\mid\boldsymbol{s}_m)$

Preprocessing

$\boldsymbol{s}_m \to$ Channel $\to \boldsymbol{r} \to$ Preprocessing $G \to \boldsymbol{\rho} \to$ Detector

Assume $G(\boldsymbol{r}) = \boldsymbol{\rho}$ is a bijection. Then
$g_{\text{opt}}(\boldsymbol{r}, \boldsymbol{\rho}) = \arg\max_m \Pr\{\boldsymbol{s}_m\mid\boldsymbol{r},\boldsymbol{\rho}\}$
$= \arg\max_m \Pr\{\boldsymbol{s}_m\}\, f(\boldsymbol{r},\boldsymbol{\rho}\mid\boldsymbol{s}_m)$
$= \arg\max_m \Pr\{\boldsymbol{s}_m\}\, f(\boldsymbol{r}\mid\boldsymbol{s}_m)\, f(\boldsymbol{\rho}\mid\boldsymbol{r})$
$= \arg\max_m \Pr\{\boldsymbol{s}_m\}\, f(\boldsymbol{r}\mid\boldsymbol{s}_m)$
$= \arg\max_m \Pr\{\boldsymbol{s}_m\}\, f(\boldsymbol{\rho}\mid\boldsymbol{s}_m)$
$= g_{\text{opt}}(\boldsymbol{\rho}) = g_{\text{opt}}(\boldsymbol{r})$

$\boldsymbol{s}_m \to$ Channel $\to \boldsymbol{r} \to$ Preprocessing $G \to \boldsymbol{\rho} \to$ Detector

Conversely, $G(\boldsymbol{r}) = \boldsymbol{\rho}$ could be a many-to-one mapping. Then
$g_{\text{opt}}(\boldsymbol{r},\boldsymbol{\rho}) = \arg\max_m \Pr\{\boldsymbol{s}_m\mid\boldsymbol{r},\boldsymbol{\rho}\}$
$= \arg\max_m \Pr\{\boldsymbol{s}_m\}\, f(\boldsymbol{r},\boldsymbol{\rho}\mid\boldsymbol{s}_m)$
$= \arg\max_m \Pr\{\boldsymbol{s}_m\}\, f(\boldsymbol{r}\mid\boldsymbol{s}_m)\, f(\boldsymbol{\rho}\mid\boldsymbol{r})$
$= \arg\max_m \Pr\{\boldsymbol{s}_m\}\, f(\boldsymbol{r}\mid\boldsymbol{s}_m)$ (independent of $\boldsymbol{\rho}$)

while
$g_{\text{subopt}}(\boldsymbol{\rho}) = \arg\max_m \Pr\{\boldsymbol{s}_m\}\, f(\boldsymbol{\rho}\mid\boldsymbol{s}_m)$
$= \arg\max_m \Pr\{\boldsymbol{s}_m\} \int f(\boldsymbol{r},\boldsymbol{\rho}\mid\boldsymbol{s}_m)\,d\boldsymbol{r}$
$= \arg\max_m \Pr\{\boldsymbol{s}_m\} \int f(\boldsymbol{r}\mid\boldsymbol{s}_m)\, f(\boldsymbol{\rho}\mid\boldsymbol{r})\,d\boldsymbol{r}$

In general, $g_{\text{opt}}(\boldsymbol{r},\boldsymbol{\rho})$ gives a smaller error rate than $g_{\text{subopt}}(\boldsymbol{\rho})$.
They have equal performance only when the preprocessing $G$ is a bijection.
By preprocessing, the data can be put in a more useful form (e.g., simpler to implement), but the error rate can never be reduced!

$\boldsymbol{s}_m \to$ Channel $\to \boldsymbol{r} \to$ Preprocessing $G \to \boldsymbol{\rho} \to$ Detector

An interpretation of sufficient statistics is the following:

Theorem 3 (Shannon Channel Coding Theorem)
Given the channel, the maximal rate $R$ for reliable communication is
$R \le I(\boldsymbol{s}_m; \boldsymbol{r})$
and with preprocessing it is
$R \le I(\boldsymbol{s}_m; \boldsymbol{\rho})$

Data Processing Inequality

$\boldsymbol{s}_m \to$ Channel $\to \boldsymbol{r} \to$ Preprocessing $G \to \boldsymbol{\rho} \to$ Detector

Theorem 4 (Data Processing Inequality)
$I(\boldsymbol{s}_m; \boldsymbol{\rho}) \le I(\boldsymbol{s}_m; \boldsymbol{r})$
and equality holds iff $G$ is a bijection.

Thus if $G$ is not a bijection (true in general) we have
$R \le I(\boldsymbol{s}_m; \boldsymbol{\rho}_\ell) < \cdots < I(\boldsymbol{s}_m; \boldsymbol{\rho}_1) < I(\boldsymbol{s}_m; \boldsymbol{r})$
meaning more processing of $\boldsymbol{r}$ would only decrease the maximal possible rate $R$.

Sec. 4.2-1 Optimal Detection for Vector AWGN Channel

Recap

Using a signal space with orthonormal functions $e_1(t), \ldots, e_N(t)$ we can rewrite the waveform model
$r(t) = s_m(t) + n(t)$
as
$[\,r_1 \cdots r_N\,] = [\,s_{m,1} \cdots s_{m,N}\,] + [\,n_1 \cdots n_N\,]$
with
$E n_i^2 = E\Big(\int n(t)e_i(t)\,dt\Big)^2 = \frac{N_0}{2}$

The joint probability density function (pdf) of $\boldsymbol{n}$ is given by
$f(\boldsymbol{n}) = \Big(\frac{1}{\sqrt{\pi N_0}}\Big)^N \exp\Big(-\frac{\|\boldsymbol{n}\|^2}{N_0}\Big)$

$\arg\max_m [P_m f(\boldsymbol{r}\mid\boldsymbol{s}_m)]$
$= \arg\max_m [P_m f(\boldsymbol{n} = \boldsymbol{r} - \boldsymbol{s}_m)]$
$= \arg\max_m P_m \Big(\frac{1}{\sqrt{\pi N_0}}\Big)^N \exp\Big(-\frac{\|\boldsymbol{r}-\boldsymbol{s}_m\|^2}{N_0}\Big)$
$= \arg\max_m \Big[\ln P_m - \frac{\|\boldsymbol{r}-\boldsymbol{s}_m\|^2}{N_0}\Big]$
$= \arg\max_m \Big[\frac{N_0}{2}\ln P_m - \frac{1}{2}\|\boldsymbol{r}-\boldsymbol{s}_m\|^2\Big]$
$= \arg\max_m \Big[\frac{N_0}{2}\ln P_m - \frac{1}{2}E_m + \boldsymbol{r}\cdot\boldsymbol{s}_m\Big]$

Theorem 5 (MAP Decision Rule)
$\hat{m} = \arg\max_m \Big[\frac{N_0}{2}\ln P_m - \frac{1}{2}E_m + \boldsymbol{r}\cdot\boldsymbol{s}_m\Big]$

Theorem 6 (ML Decision Rule)
If $P_m = \frac{1}{M}$, the ML decision rule is
$\hat{m} = \arg\max_m \Big[\frac{N_0}{2}\ln P_m - \frac{1}{2}\|\boldsymbol{r}-\boldsymbol{s}_m\|^2\Big] = \arg\min_m \|\boldsymbol{r}-\boldsymbol{s}_m\|$
also known as the minimum-distance decision rule.
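Theorems 5 and 6 can be sketched directly in code (a minimal illustration; the constellation and received point below are our own example values):

```python
import numpy as np

def map_detect(r, S, priors, N0):
    """MAP metric from Theorem 5: (N0/2) ln P_m - E_m/2 + r . s_m."""
    S = np.asarray(S, dtype=float)
    E = np.sum(S**2, axis=1)                  # symbol energies E_m
    metric = 0.5 * N0 * np.log(priors) - 0.5 * E + S @ np.asarray(r, dtype=float)
    return int(np.argmax(metric))

def ml_detect(r, S):
    """Minimum-distance (ML) rule: argmin_m ||r - s_m||."""
    S = np.asarray(S, dtype=float)
    d2 = np.sum((S - np.asarray(r, dtype=float))**2, axis=1)
    return int(np.argmin(d2))

# Four equiprobable, equal-energy points; both rules must then agree
S = [[1, 0], [0, 1], [-1, 0], [0, -1]]
r = [0.9, 0.2]
print(ml_detect(r, S), map_detect(r, S, np.full(4, 0.25), N0=1.0))  # 0 0
```

With equal priors and equal energies the MAP metric reduces to the correlation $\boldsymbol{r}\cdot\boldsymbol{s}_m$, which is why the two outputs coincide.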

$\hat{m} = \arg\max_m \Big[\frac{N_0}{2}\ln P_m - \frac{1}{2}E_m + \boldsymbol{r}\cdot\boldsymbol{s}_m\Big]$

When the signals are equally likely and of equal energy, i.e.
$P_m = \frac{1}{M}$ and $\|\boldsymbol{s}_m\|^2 = E$,
the ML decision rule simplifies to
$\hat{m} = \arg\max_m \boldsymbol{r}\cdot\boldsymbol{s}_m$

This is called the Correlation Rule, since
$\boldsymbol{r}\cdot\boldsymbol{s}_m = \int r(t)s_m(t)\,dt$

Decision Region

From the MAP rule, the decision region $D_m$ is
$D_m = \Big\{\boldsymbol{r}\in\mathbb{R}^N : \frac{N_0}{2}\ln P_m - \frac{1}{2}E_m + \boldsymbol{r}\cdot\boldsymbol{s}_m > \frac{N_0}{2}\ln P_{m'} - \frac{1}{2}E_{m'} + \boldsymbol{r}\cdot\boldsymbol{s}_{m'}, \text{ for all } m' \ne m\Big\}$

From the ML rule with equal energies, the decision region $D_m$ is
$D_m = \{\boldsymbol{r}\in\mathbb{R}^N : \|\boldsymbol{r}-\boldsymbol{s}_m\| < \|\boldsymbol{r}-\boldsymbol{s}_{m'}\| \text{ for all } m' \ne m\}$

Example
Signal space diagram for the ML decision maker for one kind of signal assignment [figure]

Example
Erroneous decision region for the 5th signal [figure]

Example
Alternative signal space assignment [figure]

Example
Erroneous decision region for the 5th signal [figure]

There are two factors that affect the error probability:

1. The Euclidean distances among signal vectors.
   Generally speaking, the larger the Euclidean distances among signal vectors, the smaller the error probability.

2. The positions of the signal vectors.
   The former two signal space diagrams have the same pairwise Euclidean distances among signal vectors!

Waveform Channel

For the bandpass waveform channel the MAP and ML decision rules can be rewritten as, for MAP,
$\hat{m} = \arg\max_m \Big[\frac{N_0}{2}\ln P_m + \int r(t)s_m(t)\,dt - \frac{1}{2}\int s_m^2(t)\,dt\Big]$
and for ML,
$\hat{m} = \arg\max_m \Big[\int r(t)s_m(t)\,dt - \frac{1}{2}\int s_m^2(t)\,dt\Big] = \arg\min_m \int \big(r(t) - s_m(t)\big)^2\,dt$

Optimal Detection for Binary Antipodal Signaling

Consider
$s_1(t) = s(t)$ and $s_2(t) = -s(t)$
$\Pr\{s_1(t)\} = p$ and $\Pr\{s_2(t)\} = 1 - p$

Let $s_1$ and $s_2$ be the signal space representations of $s_1(t)$ and $s_2(t)$ using $e(t) = \frac{s(t)}{\|s(t)\|}$.
Then we have $s_1 = \sqrt{E_s}$ and $s_2 = -\sqrt{E_s}$ with $E_s = E_b$.

The decision region for $s_1$ is
$D_1 = \Big\{r : r\sqrt{E_b} + \frac{N_0}{2}\ln p - \frac{E_b}{2} > -r\sqrt{E_b} + \frac{N_0}{2}\ln(1-p) - \frac{E_b}{2}\Big\} = \Big\{r : r > \underbrace{\frac{N_0}{4\sqrt{E_b}}\ln\frac{1-p}{p}}_{=\,r_{\text{th}}}\Big\}$

Threshold Detection

To detect binary antipodal signaling,
$g_{\text{opt}}(r) = \begin{cases} 1, & \text{if } r > r_{\text{th}} \\ 2, & \text{if } r < r_{\text{th}} \end{cases}$
where $r_{\text{th}} = \frac{N_0}{4\sqrt{E_b}}\ln\frac{1-p}{p}$

Error Probability of Binary Antipodal Signaling

$P_e = \sum_{m=1}^{2}\sum_{m'\ne m} \int_{D_{m'}} f(r\mid s_m)\,dr$
$= p\int_{D_2} f(r\mid s = \sqrt{E_b})\,dr + (1-p)\int_{D_1} f(r\mid s = -\sqrt{E_b})\,dr$
$= p\int_{-\infty}^{r_{\text{th}}} f(r\mid s = \sqrt{E_b})\,dr + (1-p)\int_{r_{\text{th}}}^{\infty} f(r\mid s = -\sqrt{E_b})\,dr$
$= p\Pr\{\mathcal{N}(\sqrt{E_b}, \tfrac{N_0}{2}) < r_{\text{th}}\} + (1-p)\Pr\{\mathcal{N}(-\sqrt{E_b}, \tfrac{N_0}{2}) > r_{\text{th}}\}$
$= pQ\Big(\frac{\sqrt{E_b} - r_{\text{th}}}{\sqrt{N_0/2}}\Big) + (1-p)Q\Big(\frac{\sqrt{E_b} + r_{\text{th}}}{\sqrt{N_0/2}}\Big)$
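The threshold rule and the closed-form $P_e$ above can be cross-checked by simulation (a sketch; the values of $p$, $E_b$, $N_0$ and the trial count are our own choices):

```python
import numpy as np
from math import erfc, log, sqrt

def Q(x):
    # Gaussian tail probability Q(x) = 0.5 * erfc(x / sqrt(2))
    return 0.5 * erfc(x / sqrt(2))

p, Eb, N0 = 0.3, 1.0, 0.5
rth = N0 / (4 * sqrt(Eb)) * log((1 - p) / p)   # MAP threshold
sigma = sqrt(N0 / 2)

# Closed form: Pe = p Q((sqrt(Eb)-rth)/sigma) + (1-p) Q((sqrt(Eb)+rth)/sigma)
pe_theory = p * Q((sqrt(Eb) - rth) / sigma) + (1 - p) * Q((sqrt(Eb) + rth) / sigma)

rng = np.random.default_rng(1)
n_trials = 400_000
sent_s1 = rng.random(n_trials) < p             # True -> s1 = +sqrt(Eb)
s = np.where(sent_s1, sqrt(Eb), -sqrt(Eb))
r = s + rng.normal(0.0, sigma, n_trials)
decide_s1 = r > rth                            # threshold detector
pe_sim = np.mean(decide_s1 != sent_s1)
print(round(pe_theory, 4), round(pe_sim, 4))   # the two should agree closely
```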

ML Decision for Binary Antipodal Signaling

In ML detection we have $p = 1 - p = \frac{1}{2}$, so
$g_{\text{ML}}(r) = \begin{cases} 1, & \text{if } r > r_{\text{th}} \\ \text{tie}, & \text{if } r = r_{\text{th}} \\ 2, & \text{if } r < r_{\text{th}} \end{cases}$
where $r_{\text{th}} = \frac{N_0}{4\sqrt{E_b}}\ln\frac{1-p}{p} = 0$
and with $r_{\text{th}} = 0$,
$P_e = pQ\Big(\frac{\sqrt{E_b}-r_{\text{th}}}{\sqrt{N_0/2}}\Big) + (1-p)Q\Big(\frac{\sqrt{E_b}+r_{\text{th}}}{\sqrt{N_0/2}}\Big) = Q\Big(\sqrt{\frac{2E_b}{N_0}}\Big)$

[Figure: $P_e$ versus $E_b/N_0$ (dB) for binary antipodal signaling]

Binary Equal-Probable Signaling in General

Consider a pair of real binary signals $x_1(t)$ and $x_2(t)$ with $\Pr\{x_1(t)\} = \Pr\{x_2(t)\} = \frac{1}{2}$.
Assume in general that $x_1(t)$ is linearly independent of $x_2(t)$, so we need two orthonormal basis functions $e_1(t)$ and $e_2(t)$.
Signal space representations: $x_1(t) \leftrightarrow \boldsymbol{x}_1$ and $x_2(t) \leftrightarrow \boldsymbol{x}_2$.
The ML decision region for $\boldsymbol{x}_1$ is
$D_1 = \{\boldsymbol{r}\in\mathbb{R}^2 : \|\boldsymbol{r}-\boldsymbol{x}_1\| < \|\boldsymbol{r}-\boldsymbol{x}_2\|\}$
i.e. the minimum-distance decision rule.

Computing the error probability concerns the following integrals:
$\int_{D_1} f(\boldsymbol{r}\mid\boldsymbol{x}_2)\,d\boldsymbol{r}$ and $\int_{D_2} f(\boldsymbol{r}\mid\boldsymbol{x}_1)\,d\boldsymbol{r}$

For $D_1$, given $\boldsymbol{r} = \boldsymbol{x}_2 + \boldsymbol{n}$ we have
$\|\boldsymbol{r}-\boldsymbol{x}_1\| < \|\boldsymbol{r}-\boldsymbol{x}_2\|$
$\iff \|\boldsymbol{x}_2 - \boldsymbol{x}_1 + \boldsymbol{n}\| < \|\boldsymbol{n}\|$
$\iff \|\boldsymbol{x}_2-\boldsymbol{x}_1+\boldsymbol{n}\|^2 < \|\boldsymbol{n}\|^2$
$\iff \|\boldsymbol{x}_2-\boldsymbol{x}_1\|^2 + 2\underbrace{(\boldsymbol{x}_2-\boldsymbol{x}_1)\cdot\boldsymbol{n}}_{=\,z} < 0$

Recall $\boldsymbol{n}$ is Gaussian with $K_{\boldsymbol{n}} = \frac{N_0}{2}I_2$; hence $z = (\boldsymbol{x}_2-\boldsymbol{x}_1)\cdot\boldsymbol{n}$ is Gaussian with variance
$E(\boldsymbol{x}_2-\boldsymbol{x}_1)^\top \boldsymbol{n}\boldsymbol{n}^\top(\boldsymbol{x}_2-\boldsymbol{x}_1) = \frac{N_0}{2}\|\boldsymbol{x}_2-\boldsymbol{x}_1\|^2$

Setting
$d_{12}^2 = \|\boldsymbol{x}_2 - \boldsymbol{x}_1\|^2$
we get
$\int_{D_1} f(\boldsymbol{r}\mid\boldsymbol{x}_2)\,d\boldsymbol{r} = \Pr\Big\{z \sim \mathcal{N}\big(0, \tfrac{N_0}{2}d_{12}^2\big) < -\tfrac{1}{2}d_{12}^2\Big\} = Q\Big(\frac{\frac{1}{2}d_{12}^2}{d_{12}\sqrt{N_0/2}}\Big) = Q\Big(\sqrt{\frac{d_{12}^2}{2N_0}}\Big)$

Note
$d_{12}^2 = \|\boldsymbol{x}_2-\boldsymbol{x}_1\|^2 = \int \big(x_2(t) - x_1(t)\big)^2\,dt$

Example 2 (Binary Antipodal)

In this case we have $x_1 = \sqrt{E_b}$ and $x_2 = -\sqrt{E_b}$, so
$d_{12}^2 = (2\sqrt{E_b})^2 = 4E_b$
Hence
$\int_{D_1} f(r\mid x_2)\,dr = Q\Big(\sqrt{\frac{d_{12}^2}{2N_0}}\Big) = Q\Big(\sqrt{\frac{2E_b}{N_0}}\Big)$
Similarly it can be shown that
$\int_{D_2} f(r\mid x_1)\,dr = Q\Big(\sqrt{\frac{d_{12}^2}{2N_0}}\Big) = Q\Big(\sqrt{\frac{2E_b}{N_0}}\Big) = P_b$

Example 3 (Binary Orthogonal)

In this case we have $\boldsymbol{x}_1 = (\sqrt{E_b}, 0)$ and $\boldsymbol{x}_2 = (0, \sqrt{E_b})$, so
$d_{12}^2 = (\sqrt{2E_b})^2 = 2E_b$
Hence
$\int_{D_1} f(\boldsymbol{r}\mid\boldsymbol{x}_2)\,d\boldsymbol{r} = Q\Big(\sqrt{\frac{d_{12}^2}{2N_0}}\Big) = Q\Big(\sqrt{\frac{E_b}{N_0}}\Big)$
Similarly it can be shown that
$\int_{D_2} f(\boldsymbol{r}\mid\boldsymbol{x}_1)\,d\boldsymbol{r} = Q\Big(\sqrt{\frac{d_{12}^2}{2N_0}}\Big) = Q\Big(\sqrt{\frac{E_b}{N_0}}\Big) = P_b$

Binary Antipodal vs Orthogonal

For binary antipodal,
$P_{b,\text{BPSK}} = Q\Big(\sqrt{\frac{2E_b}{N_0}}\Big)$
and for binary orthogonal,
$P_{b,\text{BFSK}} = Q\Big(\sqrt{\frac{E_b}{N_0}}\Big)$
so we see that BPSK is 3 dB better than BFSK in error performance.
The term $\frac{E_b}{N_0}$ is commonly referred to as SNR/bit.
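The 3 dB claim follows from the Q-function arguments alone: BFSK needs exactly twice the $E_b/N_0$ to reach the same argument as BPSK. A short numerical check (the SNR grid is our own choice):

```python
from math import erfc, sqrt, log10

def Q(x):
    # Gaussian tail probability Q(x) = 0.5 * erfc(x / sqrt(2))
    return 0.5 * erfc(x / sqrt(2))

# BER of binary antipodal (BPSK) vs binary orthogonal (BFSK) at the same Eb/N0
for db in (0, 4, 8):
    x = 10 ** (db / 10)                  # Eb/N0 in linear scale
    print(db, Q(sqrt(2 * x)), Q(sqrt(x)))

# BFSK needs a factor-of-2 larger Eb/N0 to match BPSK's Q argument:
print(10 * log10(2))                     # the "3 dB" gap, about 3.01 dB
```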

[Figure: $P_e$ versus $E_b/N_0$ (dB) for BPSK and BFSK]

Implementation of Optimal Receiver for AWGN Channels

Recall that the optimal decision rule is
$g_{\text{opt}}(\boldsymbol{r}) = \arg\max_m \Big[\frac{N_0}{2}\ln P_m - \frac{E_m}{2} + \boldsymbol{r}\cdot\boldsymbol{s}_m\Big]$
where we note that
$\boldsymbol{r}\cdot\boldsymbol{s}_m = \int_0^T r(t)s_m(t)\,dt$
This suggests a Correlation Receiver.

Correlation Receiver

[Figure: correlation receiver block diagram]

Matched Filter Receiver

$\boldsymbol{r}\cdot\boldsymbol{s}_m = \int_0^T r(t)s_m(t)\,dt$

On the other hand, we could define a filter $h$ with impulse response
$h(t) = s_m(T - t)$
such that
$r(t)*h(t)\big|_{t=T} = \int r(\tau)h(T-\tau)\,d\tau = \int_0^T r(\tau)s_m(\tau)\,d\tau$
This gives the Matched Filter Receiver.

Matched Filter Receiver

[Figure: matched filter receiver block diagram]
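The identity $r(t)*h(t)|_{t=T} = \int_0^T r(\tau)s_m(\tau)\,d\tau$ has an exact discrete-time analogue: filtering with the time-reversed signal and sampling at the last index reproduces the correlation sum. A small sketch (sample count and noise level are our own choices):

```python
import numpy as np

rng = np.random.default_rng(2)
N = 64
s = rng.normal(size=N)            # signal waveform samples over one symbol
r = s + 0.3 * rng.normal(size=N)  # received = signal + noise

h = s[::-1]                       # matched filter: time-reversed signal
y = np.convolve(r, h)             # full convolution
sample_at_T = y[N - 1]            # output sampled at the symbol end

corr = float(np.dot(r, s))        # direct correlation receiver output
print(np.isclose(sample_at_T, corr))  # True
```

This is why the correlation receiver and the matched filter receiver are two implementations of the same statistic.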

Optimality of Matched Filter

Assume we use a filter $h(t)$ to process the incoming signal $r(t) = s(t) + n(t)$:
$y(t) = h(t)*r(t) = h(t)*s(t) + h(t)*n(t) = h(t)*s(t) + z(t)$

Hence the noiseless signal at $t = T$ is
$h(t)*s(t)\big|_{t=T} = \int H(f)S(f)e^{j2\pi fT}\,df$

The total noise variance of $z(t = T)$ is
$\sigma_z^2 = \frac{N_0}{2}\int |H(f)|^2\,df$

Optimality of Matched Filter

Thus the output SNR of the filter is
$\text{SNR} = \frac{\big|\int H(f)S(f)e^{j2\pi fT}\,df\big|^2}{\frac{N_0}{2}\int |H(f)|^2\,df} \le \frac{\int |H(f)|^2\,df \int \big|S(f)e^{j2\pi fT}\big|^2\,df}{\frac{N_0}{2}\int |H(f)|^2\,df} = \frac{2}{N_0}\int |S(f)|^2\,df$

The Cauchy-Schwarz inequality holds with equality iff
$H(f) = S^*(f)e^{-j2\pi fT}$
$\iff h(t) = \mathcal{F}^{-1}\{S^*(f)e^{-j2\pi fT}\}(t) = s^*(T - t)$

Example: 4-ary biorthogonal signals

Consider the 4-ary biorthogonal signals built from $\{s_1(t), s_2(t)\}$.

Let $\Big\{\phi_1(t) = \frac{s_1(t)}{\|s_1(t)\|},\ \phi_2(t) = \frac{s_2(t)}{\|s_2(t)\|}\Big\}$ be the orthonormal basis.

Then the matched filters are $h_i(t) = s_i(T - t)$, $i = 1, 2$. [figure]

Channel symbols

$s_1(t) = A\sqrt{\tfrac{T}{2}}\,\phi_1(t) \leftrightarrow \big[\,A\sqrt{\tfrac{T}{2}},\ 0\,\big]$
$s_2(t) = A\sqrt{\tfrac{T}{2}}\,\phi_2(t) \leftrightarrow \big[\,0,\ A\sqrt{\tfrac{T}{2}}\,\big]$
$s_3(t) = -s_1(t)$ and $s_4(t) = -s_2(t)$.

When $s(t) = s_1(t)$,
$y_{1s}(t) = \int h_1(\tau)s_1(t-\tau)\,d\tau = \frac{1}{2}A^2T\Big(\frac{T/2 - |t - T|}{T/2}\Big)\mathbb{1}\{|t - T| \le T/2\}$

$y_{2s}(t) = \int h_2(\tau)s_1(t-\tau)\,d\tau = \frac{1}{2}A^2T\Big(\frac{T/2 - |t - T/2|}{T/2}\Big)\mathbb{1}\{|t - T/2| \le T/2\}$

Thus $y_{1s}(T) = \frac{1}{2}A^2T$ and $y_{2s}(T) = 0$.

$\text{SNR}_O = \frac{\frac{1}{2}A^2T}{2\cdot\frac{N_0}{2}} = \frac{A^2T}{2N_0}.$

Sec. 4.2-3 A Union Bound on the Probability of Error of Maximum Likelihood Detection

Recall that for ML decoding
$P_e = \frac{1}{M}\sum_{m=1}^{M}\sum_{m'\ne m}\int_{D_{m'}} f(\boldsymbol{r}\mid\boldsymbol{s}_m)\,d\boldsymbol{r}$
where the ML decoding rule is
$g_{\text{ML}}(\boldsymbol{r}) = \arg\max_m f(\boldsymbol{r}\mid\boldsymbol{s}_m)$
and
$D_m = \{\boldsymbol{r} : f(\boldsymbol{r}\mid\boldsymbol{s}_m) > f(\boldsymbol{r}\mid\boldsymbol{s}_k) \text{ for all } k \ne m\}$
It should be noted that $D_m \cap D_{m'} = \emptyset$ if $m \ne m'$.

$D_m = \{\boldsymbol{r} : f(\boldsymbol{r}\mid\boldsymbol{s}_m) > f(\boldsymbol{r}\mid\boldsymbol{s}_k) \text{ for all } k \ne m\}$

Define the error event $E_{mm'}$:
$E_{mm'} = \{\boldsymbol{r} : f(\boldsymbol{r}\mid\boldsymbol{s}_m) < f(\boldsymbol{r}\mid\boldsymbol{s}_{m'})\}$
Clearly $D_{m'} \subseteq E_{mm'}$ and $D_m^c = \bigcup_{m'\ne m} D_{m'} \subseteq \bigcup_{m'\ne m} E_{mm'}$

Now we see
$P_e = \frac{1}{M}\sum_{m=1}^{M}\sum_{m'\ne m}\int_{D_{m'}} f(\boldsymbol{r}\mid\boldsymbol{s}_m)\,d\boldsymbol{r}$
$= \frac{1}{M}\sum_{m=1}^{M}\Pr\{\boldsymbol{r}\notin D_m \mid \boldsymbol{s}_m\}$
$= \frac{1}{M}\sum_{m=1}^{M}\Pr\Big\{\bigcup_{m'\ne m} E_{mm'} \,\Big|\, \boldsymbol{s}_m\Big\}$
$\le \frac{1}{M}\sum_{m=1}^{M}\sum_{m'\ne m}\Pr\{E_{mm'}\mid\boldsymbol{s}_m\}$

Pairwise Error Probability for the AWGN Channel

$E_{mm'} = \{\boldsymbol{r} : f(\boldsymbol{r}\mid\boldsymbol{s}_m) < f(\boldsymbol{r}\mid\boldsymbol{s}_{m'})\}$
For the AWGN channel,
$\Pr\{E_{mm'}\mid\boldsymbol{s}_m\} = \int_{E_{mm'}} f(\boldsymbol{r}\mid\boldsymbol{s}_m)\,d\boldsymbol{r} = Q\Big(\frac{d_{m,m'}}{\sqrt{2N_0}}\Big)$
where $d_{m,m'} = \|\boldsymbol{s}_m - \boldsymbol{s}_{m'}\|$.

As $Q(x) \le \frac{1}{2}e^{-x^2/2}$ for $x \ge 0$,
$P_e \le \frac{1}{2M}\sum_{m=1}^{M}\sum_{m'\ne m}\exp\Big(-\frac{d_{m,m'}^2}{4N_0}\Big)$
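The pairwise union bound is easy to evaluate for any finite constellation. A sketch comparing it against the exact QPSK SER, which is known in closed form (the constellation and $N_0$ below are our own example values):

```python
import numpy as np
from math import erfc, sqrt

def Q(x):
    return 0.5 * erfc(x / sqrt(2))

def union_bound(S, N0):
    """Union bound on SER: (1/M) sum_m sum_{m'!=m} Q(d_{m,m'} / sqrt(2 N0))."""
    S = np.asarray(S, dtype=float)
    M = len(S)
    total = 0.0
    for m in range(M):
        for mp in range(M):
            if mp != m:
                d = np.linalg.norm(S[m] - S[mp])
                total += Q(d / sqrt(2 * N0))
    return total / M

# QPSK with unit symbol energy; exact SER = 2Q(g) - Q(g)^2, g = d_min/sqrt(2 N0)
S = [[1, 0], [0, 1], [-1, 0], [0, -1]]
N0 = 0.2
g = sqrt(2) / sqrt(2 * N0)
exact = 2 * Q(g) - Q(g) ** 2
print(exact <= union_bound(S, N0))  # True: the bound dominates the exact SER
```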

Minimum Distance Bound

$P_e \le \frac{1}{2M}\sum_{m=1}^{M}\sum_{m'\ne m}\exp\Big(-\frac{d_{m,m'}^2}{4N_0}\Big)$

Define
$d_{\min} = \min_{m\ne m'} d_{m,m'} = \min_{m\ne m'}\|\boldsymbol{s}_m - \boldsymbol{s}_{m'}\|$
Then
$\exp\Big(-\frac{d_{m,m'}^2}{4N_0}\Big) \le \exp\Big(-\frac{d_{\min}^2}{4N_0}\Big)$
and
$P_e \le (M-1)\,Q\Big(\frac{d_{\min}}{\sqrt{2N_0}}\Big) \le \frac{M-1}{2}\exp\Big(-\frac{d_{\min}^2}{4N_0}\Big)$

Lower Bound on Pe

$P_e = \frac{1}{M}\sum_{m=1}^{M}\Pr\Big\{\bigcup_{m'\ne m} E_{mm'}\,\Big|\,\boldsymbol{s}_m\Big\}$
$\ge \frac{1}{M}\sum_{m=1}^{M}\max_{m'\ne m}\Pr\{E_{mm'}\mid\boldsymbol{s}_m\} = \frac{1}{M}\sum_{m=1}^{M}\max_{m'\ne m} Q\Big(\frac{d_{m,m'}}{\sqrt{2N_0}}\Big)$
$\ge \frac{1}{M}Q\Big(\frac{d_{\min}}{\sqrt{2N_0}}\Big)\underbrace{\sum_{m=1}^{M}\mathbb{1}\big(\exists\, m' : d_{m,m'} = d_{\min}\big)}_{=\,N_{\min}}$
$= \frac{N_{\min}}{M}Q\Big(\frac{d_{\min}}{\sqrt{2N_0}}\Big)$

Sec. 4.3 Optimal Detection and Error Probability for Bandlimited Signaling

ASK or PAM Signaling

Let $d_{\min}$ be the minimum distance between adjacent PAM constellation points.
Consider the signal constellation
$S = \Big\{\pm\frac{1}{2}d_{\min},\ \pm\frac{3}{2}d_{\min},\ \ldots,\ \pm\frac{M-1}{2}d_{\min}\Big\}$
The average bit signal energy is
$E_b = \frac{1}{\log_2 M}E_S\,s^2 = \frac{M^2-1}{12\log_2 M}\,d_{\min}^2$

There are two types of error events. Assume $n$ is a zero-mean Gaussian random variable with variance $\frac{N_0}{2}$.

Inner points, with error probability $P_e^i$:
$P_e^i = \Pr\Big\{|n| > \frac{d_{\min}}{2}\Big\} = 2Q\Big(\frac{d_{\min}}{\sqrt{2N_0}}\Big)$

Outer points, with error probability $P_e^o$: only one side causes errors:
$P_e^o = \Pr\Big\{n > \frac{d_{\min}}{2}\Big\} = Q\Big(\frac{d_{\min}}{\sqrt{2N_0}}\Big)$

Symbol Error Probability of PAM

The symbol error probability is given by
$P_e = \frac{1}{M}\sum_{m=1}^{M}\Pr\{\text{error}\mid m \text{ sent}\}$
$= \frac{1}{M}\Big[2(M-2)Q\Big(\frac{d_{\min}}{\sqrt{2N_0}}\Big) + 2Q\Big(\frac{d_{\min}}{\sqrt{2N_0}}\Big)\Big]$
$= \frac{2(M-1)}{M}Q\Big(\frac{d_{\min}}{\sqrt{2N_0}}\Big)$
$= \frac{2(M-1)}{M}Q\Big(\sqrt{\frac{6\log_2 M}{M^2-1}\frac{E_b}{N_0}}\Big)$
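The closed-form PAM SER is one line of code, and it degenerates to the BPSK result for $M = 2$ (a sketch; the SNR value below is our own choice):

```python
from math import erfc, sqrt, log2

def Q(x):
    return 0.5 * erfc(x / sqrt(2))

def pam_ser(M, ebn0_db):
    """SER of M-ary PAM: 2(M-1)/M * Q( sqrt(6 log2 M / (M^2-1) * Eb/N0) )."""
    x = 10 ** (ebn0_db / 10)
    return 2 * (M - 1) / M * Q(sqrt(6 * log2(M) / (M**2 - 1) * x))

# Larger M at fixed Eb/N0 costs performance; M = 2 reduces to Q(sqrt(2 Eb/N0))
print(pam_ser(2, 8) < pam_ser(4, 8) < pam_ser(8, 8))  # True
```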

Efficiency

$P_e = \frac{2(M-1)}{M}Q\Big(\sqrt{\frac{6\log_2 M}{M^2-1}\frac{E_b}{N_0}}\Big)$

To increase the rate by 1 bit, we need to double $M$.
To keep the same $P_e$ we need $E_b$ to quadruple.
Increasing the rate by 1 bit increases $E_b$ by 6 dB.

PAM Performance

[Figure: $P_e$ versus $E_b/N_0$ (dB) for M-ary PAM]

PSK Signaling

The signal constellation for M-ary PSK is
$S = \Big\{\boldsymbol{s}_k = \sqrt{E}\Big(\cos\frac{2\pi k}{M},\ \sin\frac{2\pi k}{M}\Big) : k = 0, 1, \ldots, M-1\Big\}$

By symmetry we can assume $\boldsymbol{s}_0 = (\sqrt{E}, 0)$ was transmitted.
The received signal vector $\boldsymbol{r}$ is
$\boldsymbol{r} = (\sqrt{E} + n_1,\ n_2)$

Assuming a Gaussian random process with $R_n(\tau) = \frac{N_0}{2}\delta(\tau)$,
$f(n_1, n_2) = \frac{1}{\pi N_0}\exp\Big(-\frac{n_1^2 + n_2^2}{N_0}\Big)$

Thus we have
$f(\boldsymbol{r} = (r_1, r_2)\mid\boldsymbol{s}_0) = \frac{1}{\pi N_0}\exp\Big(-\frac{(r_1 - \sqrt{E})^2 + r_2^2}{N_0}\Big)$

Define $V = \sqrt{r_1^2 + r_2^2}$ and $\Theta = \arctan\frac{r_2}{r_1}$; then
$f(v, \theta\mid\boldsymbol{s}_0) = \frac{v}{\pi N_0}\exp\Big(-\frac{v^2 + E - 2\sqrt{E}\,v\cos\theta}{N_0}\Big)$

The decision region for $\boldsymbol{s}_0$ is
$D_0 = \Big\{\boldsymbol{r} : -\frac{\pi}{M} < \Theta < \frac{\pi}{M}\Big\}$

The probability of an erroneous decision given $\boldsymbol{s}_0$ is
$\Pr\{\text{error}\mid\boldsymbol{s}_0\} = 1 - \Pr\{\text{correct}\mid\boldsymbol{s}_0\}$
$= 1 - \iint_{D_0} f(v,\theta\mid\boldsymbol{s}_0)\,dv\,d\theta$
$= 1 - \int_{-\pi/M}^{\pi/M}\int_0^\infty \frac{v}{\pi N_0}\exp\Big(-\frac{v^2 + E - 2\sqrt{E}\,v\cos\theta}{N_0}\Big)\,dv\,d\theta$

But the above does not have a closed form unless $M = 2, 4$.

When $M = 2$, binary PSK is antipodal:
$P_e = Q\Big(\sqrt{\frac{2E_b}{N_0}}\Big)$

When $M = 4$, it is QPSK:
$P_e = 1 - \Big[1 - Q\Big(\sqrt{\frac{2E_b}{N_0}}\Big)\Big]^2$

Efficiency of PSK

For large $M$ we can approximate $P_e$ as
$P_e \approx 2Q\Big(\sqrt{\frac{2\pi^2\log_2 M}{M^2}\frac{E_b}{N_0}}\Big) \qquad (1)$

To increase the rate by 1 bit, we need to double $M$.
To keep the same $P_e$ we need $E_b$ to quadruple.
Increasing the rate by 1 bit increases $E_b$ by 6 dB.

PSK Performance

[Figure: $P_e$ versus $E_b/N_0$ (dB) for M-ary PSK]

M-ary QAM Signaling

$M$ is usually a square number, $M = N^2$.
M-ary QAM is composed of two independent N-ary PAM constellations:
$S_{\text{PAM}} = \Big\{\pm\frac{1}{2}d_{\min},\ \pm\frac{3}{2}d_{\min},\ \ldots,\ \pm\frac{N-1}{2}d_{\min}\Big\}$
$S_{\text{QAM}} = \{(x, y) : x, y \in S_{\text{PAM}}\}$

Thus for M-ary QAM we have
$E_b = \frac{M-1}{6\log_2 M}\,d_{\min}^2$

$S_{M\text{-QAM}} = S_{N\text{-PAM}} \times S_{N\text{-PAM}}$

Hence
$P_{e,M\text{-QAM}} = 1 - \big[1 - P_{e,N\text{-PAM}}\big]^2$
Since
$P_{e,N\text{-PAM}} = 2\Big(1 - \frac{1}{N}\Big)Q\Big(\frac{d_{\min}}{\sqrt{2N_0}}\Big) \le 2Q\Big(\frac{d_{\min}}{\sqrt{2N_0}}\Big)$
we have
$P_{e,M\text{-QAM}} \le 4Q\Big(\frac{d_{\min}}{\sqrt{2N_0}}\Big) = 4Q\Big(\sqrt{\frac{3\log_2 M}{M-1}\frac{E_b}{N_0}}\Big)$

Efficiency of QAM

$P_{e,M\text{-QAM}} \le 4Q\Big(\sqrt{\frac{3\log_2 M}{M-1}\frac{E_b}{N_0}}\Big)$

To increase the rate by 1 bit, we need to double $M$.
To keep the same $P_e$ we need to double $E_b$.
Increasing the rate by 1 bit increases $E_b$ by 3 dB.
QAM is more power efficient than PAM and PSK.
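The PSK/QAM power gap can be estimated by inverting the two error expressions for the $E_b/N_0$ that achieves a target error rate (a sketch; the target rate, the bisection routine, and the use of the large-M PSK approximation for $M = 16$ are our own assumptions):

```python
from math import erfc, sqrt, log2, log10, pi

def Q(x):
    return 0.5 * erfc(x / sqrt(2))

def qam_ser_bound(M, ebn0):
    # Pe,MQAM <= 4 Q( sqrt(3 log2 M / (M-1) * Eb/N0) )
    return 4 * Q(sqrt(3 * log2(M) / (M - 1) * ebn0))

def psk_ser_approx(M, ebn0):
    # Large-M approximation Pe ~ 2 Q( sqrt(2 pi^2 log2 M / M^2 * Eb/N0) )
    return 2 * Q(sqrt(2 * pi**2 * log2(M) / M**2 * ebn0))

def required_ebn0_db(ser_fn, M, target=1e-5):
    # Geometric bisection on Eb/N0; the SER is decreasing in Eb/N0
    lo, hi = 1e-2, 1e6
    for _ in range(200):
        mid = sqrt(lo * hi)
        if ser_fn(M, mid) > target:
            lo = mid
        else:
            hi = mid
    return 10 * log10(hi)

gap_16 = required_ebn0_db(psk_ser_approx, 16) - required_ebn0_db(qam_ser_bound, 16)
print(round(gap_16, 1))  # close to the ~4 dB gap quoted for 16PSK vs 16QAM
```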

QAM Performance

[Figure: $P_e$ versus $E_b/N_0$ (dB) for M-ary QAM]

[Figure: $P_e$ versus $E_b/N_0$ (dB) comparing M-ary PSK and M-ary QAM]

4PSK = 4QAM
16PSK is 4 dB poorer than 16QAM
32PSK performs 7 dB poorer than 32QAM
64PSK performs 10 dB poorer than 64QAM

Demodulation of PAM, QAM, and PSK

We need two orthonormal basis functions:
$e_1(t) = \sqrt{\frac{2}{T}}\,g(t)\cos(2\pi f_c t)\,[u(t) - u(t-T)]$
$e_2(t) = -\sqrt{\frac{2}{T}}\,g(t)\sin(2\pi f_c t)\,[u(t) - u(t-T)]$
where
$g(t)$ is the pulse shaping function with $\|g(t)\| = 1$
$T$ is the symbol duration

Transmission of PAM Signals

We use the constellation set
$S_{\text{PAM}} = \Big\{\pm\frac{1}{2}d_{\min},\ \pm\frac{3}{2}d_{\min},\ \ldots,\ \pm\frac{M-1}{2}d_{\min}\Big\}$
where
$d_{\min} = \sqrt{\frac{12\log_2 M}{M^2-1}E_b}$

Hence the M-ary PAM waveforms are
$S_{\text{PAM}}(t) = \Big\{\pm\frac{d_{\min}}{2}e_1(t),\ \pm\frac{3d_{\min}}{2}e_1(t),\ \ldots,\ \pm\frac{(M-1)d_{\min}}{2}e_1(t)\Big\}$

Alternative Description of PAM Transmission

$S_{\text{PAM}} = \Big\{\pm\frac{1}{2}d_{\min},\ \pm\frac{3}{2}d_{\min},\ \ldots,\ \pm\frac{M-1}{2}d_{\min}\Big\}$
Define the set of baseband PAM waveforms
$S_{\text{PAM},\ell}(t) = \Big\{s_{\ell,m}(t) = s_m\sqrt{\tfrac{2}{T}}\,g(t)\,[u(t) - u(t-T)] : s_m \in S_{\text{PAM}}\Big\}$
then the bandpass signals are
$S_{\text{PAM}}(t) = \{\operatorname{Re}[s_{\ell,m}(t)e^{j2\pi f_c t}] : s_{\ell,m}(t) \in S_{\text{PAM},\ell}(t)\}$

Demodulation and Detection of PAM

Assuming $s_m(t) \in S_{\text{PAM}}(t)$ was transmitted, the received signal is
$r(t) = s_m(t) + n(t)$
Define
$r = \langle r(t), e_1(t)\rangle = \int_0^T r(t)e_1(t)\,dt$
The MAP rule is
$\hat{s}_m = \arg\max_{s_m\in S_{\text{PAM}}}\Big[r\,s_m + \frac{N_0}{2}\ln P_m - \frac{1}{2}s_m^2\Big]$
where $P_m = \Pr\{s_m\}$.
The above can also be implemented by a correlation or matched filter receiver.

Transmission of PSK Signals

The signal constellation of M-ary PSK is
$S_{\text{PSK}} = \Big\{\boldsymbol{s}_k = \sqrt{E}\Big[\cos\frac{2\pi k}{M},\ \sin\frac{2\pi k}{M}\Big] : k \in \mathbb{Z}_M\Big\}$

Hence the M-ary PSK waveforms are
$S_{\text{PSK}}(t) = \{s_m(t) = (\boldsymbol{s}_k)_1 e_1(t) + (\boldsymbol{s}_k)_2 e_2(t) : k \in \mathbb{Z}_M\}$

Alternative Description of Transmission of PSK

$S_{\text{PSK},\ell} = \Big\{s_{\ell,m} = \sqrt{2E}\,e^{j\frac{2\pi m}{M}} : m \in \mathbb{Z}_M\Big\}$
and the lowpass equivalent basis function is
$e_\ell(t) = \frac{1}{\sqrt{T}}\,g(t)\,[u(t) - u(t-T)]$
The set of baseband PSK waveforms is
$S_{\text{PSK},\ell}(t) = \{s_{\ell,m}(t) = s_{\ell,m}\,e_\ell(t) : s_{\ell,m} \in S_{\text{PSK},\ell}\}$
Then the bandpass PSK waveforms are
$S_{\text{PSK}}(t) = \{\operatorname{Re}[s_{\ell,m}(t)e^{j2\pi f_c t}] : s_{\ell,m}(t) \in S_{\text{PSK},\ell}(t)\}$

Demodulation and Detection of PSK Signals

Given $s_m(t) \in S_{\text{PSK}}(t)$ was transmitted, the bandpass received signal is
$r(t) = s_m(t) + n(t)$
Let $r_\ell(t)$ be the lowpass equivalent signal
$r_\ell(t) = [r(t) + j\hat{r}(t)]\,e^{-j2\pi f_c t} = s_{\ell,m}(t) + n_\ell(t)$
Then compute
$\langle r_\ell(t), e_\ell(t)\rangle = \int_0^T r_\ell(t)e_\ell^*(t)\,dt = r_{\ell,x} + j\,r_{\ell,y} = r_\ell$

Baseband MAP rule:
$\hat{s}_{\ell,m} = \arg\max_{s_{\ell,m}\in S_{\text{PSK},\ell}}\big\{\operatorname{Re}\{r_\ell\, s_{\ell,m}^*\} + N_0\ln P_m - E\big\}$

Transmission of QAM

The signal constellation of M-ary QAM with $M = N^2$ is
$S_{\text{PAM}} = \Big\{\pm\frac{1}{2}d_{\min},\ \pm\frac{3}{2}d_{\min},\ \ldots,\ \pm\frac{N-1}{2}d_{\min}\Big\}$
$S_{\text{QAM}} = \{x + jy : x, y \in S_{\text{PAM}}\}$
where
$d_{\min} = \sqrt{\frac{6\log_2 M}{M-1}E_b}$
Hence the M-ary QAM waveforms are
$S_{\text{QAM}}(t) = \{x\,e_1(t) + y\,e_2(t) : x, y \in S_{\text{PAM}}\}$

Demodulation of QAM is similar to that of PSK.

General Theory for Baseband MAP Detection

Let $S_\ell = \{\boldsymbol{s}_{\ell,1}, \ldots, \boldsymbol{s}_{\ell,M}\} \subset \mathbb{C}^N$ be a signal constellation with respect to the baseband orthonormal basis functions $\{\phi_{\ell,i}(t) : i = 1, 2, \ldots, N\}$ (dimension $N$).
The baseband equivalent signals are
$s_{\ell,m}(t) = \sum_{i=1}^{N} s_{\ell,m,i}\,\phi_{\ell,i}(t)$
where $\boldsymbol{s}_{\ell,m} = [s_{\ell,m,1} \cdots s_{\ell,m,N}]$.

The corresponding bandpass signals are
$s_m(t) = \operatorname{Re}\{s_{\ell,m}(t)\,e^{j2\pi f_c t}\}$
Note
$E_m = \|s_m(t)\|^2 = \frac{1}{2}\|s_{\ell,m}(t)\|^2 = \frac{1}{2}\|\boldsymbol{s}_{\ell,m}\|^2 = \frac{1}{2}E_{\ell,m}$

Given $s_m(t)$ was transmitted, the bandpass received signal is
$r(t) = s_m(t) + n(t)$
Let $r_\ell(t)$ be the lowpass equivalent signal
$r_\ell(t) = [r(t) + j\hat{r}(t)]\,e^{-j2\pi f_c t} = s_{\ell,m}(t) + n_\ell(t)$
Set
$r_{\ell,i} = \langle r_\ell(t), \phi_{\ell,i}(t)\rangle = \int r_\ell(t)\phi_{\ell,i}^*(t)\,dt$
$n_{\ell,i} = \langle n_\ell(t), \phi_{\ell,i}(t)\rangle = \int n_\ell(t)\phi_{\ell,i}^*(t)\,dt$
Hence we have
$\boldsymbol{r}_\ell = \boldsymbol{s}_{\ell,m} + \boldsymbol{n}_\ell$

$n_{\ell,i} = \langle n_\ell(t), \phi_{\ell,i}(t)\rangle = \int n_\ell(t)\phi_{\ell,i}^*(t)\,dt$

Recall that
$S_{n_\ell}(f) = 4S_n(f + f_c) = 2N_0$
It follows that $\boldsymbol{n}_\ell \sim \mathcal{CN}(\boldsymbol{0}, 2N_0 I_N)$ and
$f(\boldsymbol{n}_\ell) = \frac{1}{(2\pi N_0)^N}\exp\Big(-\frac{\|\boldsymbol{n}_\ell\|^2}{2N_0}\Big)$

Baseband equivalent MAP detection:
$\hat{\boldsymbol{s}}_{\ell,m} = \arg\max_{\boldsymbol{s}_{\ell,m}\in S_\ell}\Big[\ln P_m - \frac{\|\boldsymbol{r}_\ell - \boldsymbol{s}_{\ell,m}\|^2}{2N_0}\Big]$
$= \arg\max_{\boldsymbol{s}_{\ell,m}\in S_\ell}\Big\{N_0\ln P_m + \operatorname{Re}\{\boldsymbol{r}_\ell\cdot\boldsymbol{s}_{\ell,m}^*\} - \frac{1}{2}\|\boldsymbol{s}_{\ell,m}\|^2\Big\}$
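The baseband MAP metric above carries over to complex vectors with no structural change. A minimal sketch using the complex-baseband QPSK constellation $\sqrt{2E}\,e^{j2\pi m/M}$ (the value of $E$, $N_0$, and the test point are our own choices):

```python
import numpy as np

def baseband_map(r_l, S_l, priors, N0):
    """Metric N0 ln P_m + Re{r_l . conj(s_m)} - ||s_m||^2 / 2 over complex vectors."""
    S_l = np.asarray(S_l, dtype=complex)
    r_l = np.asarray(r_l, dtype=complex)
    metric = (N0 * np.log(priors)
              + (S_l.conj() @ r_l).real
              - 0.5 * np.sum(np.abs(S_l) ** 2, axis=1))
    return int(np.argmax(metric))

# Equiprobable QPSK in complex baseband: s_{l,m} = sqrt(2E) exp(j 2 pi m / M)
M, E = 4, 1.0
S_l = [[np.sqrt(2 * E) * np.exp(2j * np.pi * m / M)] for m in range(M)]
r_l = [np.sqrt(2 * E) * np.exp(2j * np.pi * 1 / M + 0.2j)]  # near symbol m = 1
print(baseband_map(r_l, S_l, np.full(M, 1 / M), N0=1.0))  # 1
```

With equal priors and equal energies the metric again reduces to the (complex) correlation $\operatorname{Re}\{\boldsymbol{r}_\ell\cdot\boldsymbol{s}_{\ell,m}^*\}$.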

Sec. 4.4 Optimal Detection and Error Probability for Power Limited Signaling

FSK Signaling

The signal constellation of M-ary FSK signaling is
$S_{\text{FSK}} = \{\boldsymbol{s}_1 = [\sqrt{E}, 0, \ldots, 0],\ \ldots,\ \boldsymbol{s}_M = [0, \ldots, 0, \sqrt{E}]\}$
Given $\boldsymbol{s}_1$ transmitted, the received signal vector is
$\boldsymbol{r} = \boldsymbol{s}_1 + \boldsymbol{n}$
with
$r_1 = \sqrt{E} + n_1,\quad r_2 = n_2,\quad \ldots,\quad r_M = n_M$

Assuming the signals $\boldsymbol{s}_m$ are equiprobable, the MAP decision is
$\hat{\boldsymbol{s}}_m = \arg\max_{\boldsymbol{s}_m\in S_{\text{FSK}}} \boldsymbol{r}\cdot\boldsymbol{s}_m$
Hence given $\boldsymbol{s}_1$ transmitted, the decision is correct iff
$\boldsymbol{r}\cdot\boldsymbol{s}_1 = E + \sqrt{E}\,n_1 > \boldsymbol{r}\cdot\boldsymbol{s}_m = \sqrt{E}\,n_m, \quad 2 \le m \le M$
It means
$\Pr\{\text{Correct}\mid\boldsymbol{s}_1\} = \Pr\{\sqrt{E} + n_1 > n_2,\ \ldots,\ \sqrt{E} + n_1 > n_M\}$
By symmetry, we have
$\Pr\{\text{Correct}\} = \Pr\{\sqrt{E} + n_1 > n_2,\ \ldots,\ \sqrt{E} + n_1 > n_M\}$

$\Pr\{\text{Correct}\}$
$= \Pr\{\sqrt{E} + n_1 > n_2,\ \ldots,\ \sqrt{E} + n_1 > n_M\}$
$= \int \Pr\{\sqrt{E} + n_1 > n_2,\ \ldots,\ \sqrt{E} + n_1 > n_M \mid n_1\}\, f(n_1)\,dn_1$
$= \int \Big[1 - Q\Big(\frac{\sqrt{E} + n_1}{\sqrt{N_0/2}}\Big)\Big]^{M-1} f(n_1)\,dn_1$

Then $P_e = 1 - \Pr\{\text{Correct}\}$.

To find $P_b$, note
$\Pr\{\boldsymbol{s}_2 \text{ detected}\mid\boldsymbol{s}_1\} = \Pr\{\sqrt{E} + n_1 < n_2,\ n_3 < n_2,\ \ldots,\ n_M < n_2\}$
$= \cdots = \Pr\{\sqrt{E} + n_1 < n_M,\ n_2 < n_M,\ \ldots,\ n_{M-1} < n_M\}$
$= \Pr\{\boldsymbol{s}_M \text{ detected}\mid\boldsymbol{s}_1\} = \frac{P_e}{M-1}$

Assume $k = \log_2 M \in \mathbb{Z}$. Since
$\Pr\{\text{bits } i_1, \ldots, i_n \text{ in error}\mid\boldsymbol{s}_1\} = \frac{P_e}{M-1}$
for all $n = 1, \ldots, k$ and for all distinct $i_1, \ldots, i_n$, we have
$P_b = \frac{1}{k}\sum_{n=1}^{k}\binom{k}{n}\, n\,\Pr\{\text{a specific } n \text{ bits in error}\mid\boldsymbol{s}_1\}$
$= \frac{1}{k}\, k\, 2^{k-1}\,\frac{P_e}{M-1} = \frac{M}{2}\,\frac{P_e}{M-1} \approx \frac{P_e}{2}$

Error Probability Upper Bound

$P_e = 1 - \Pr\{\sqrt{E} + n_1 > n_2,\ \ldots,\ \sqrt{E} + n_1 > n_M\}$
$= E_{n_1}\Pr\Big\{\bigcup_{m=2}^{M}\{n_m : \sqrt{E} + n_1 < n_m\}\,\Big|\,n_1\Big\}$
$\le \sum_{m=2}^{M} E_{n_1}\Pr\{n_m : \sqrt{E} + n_1 < n_m \mid n_1\}$
$= (M-1)\Pr\{\sqrt{E} + n_1 < n_2\}$
$= (M-1)\,Q\Big(\sqrt{\frac{E}{N_0}}\Big)$
$< \frac{M-1}{2}\exp\Big(-\frac{E}{2N_0}\Big)$
$< 2^k \exp\Big(-\frac{kE_b}{2N_0}\Big) = \exp\Big(-\frac{k}{2}\Big(\frac{E_b}{N_0} - 2\ln 2\Big)\Big)$

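The chain of bounds above can be checked numerically; a sketch with N0 normalized to 1 (our choice):

```python
import math
from statistics import NormalDist

def Q(x):
    return NormalDist().cdf(-x)

def fsk_pe_bounds(k, Eb_over_N0):
    # Successive upper bounds on Pe for M = 2^k orthogonal signals at
    # symbol energy E = k·Eb (N0 = 1):
    #   union:    (M−1)·Q(√(E/N0))
    #   chernoff: ((M−1)/2)·exp(−E/2N0), from Q(x) ≤ ½·e^{−x²/2}
    #   exponent: exp(−(k/2)(Eb/N0 − 2·ln 2)), from (M−1)/2 < 2^k
    M = 2 ** k
    E = k * Eb_over_N0
    union = (M - 1) * Q(math.sqrt(E))
    chernoff = (M - 1) / 2 * math.exp(-E / 2)
    exponent = math.exp(-(k / 2) * (Eb_over_N0 - 2 * math.log(2)))
    return union, chernoff, exponent
```

Each bound is looser than the previous one, and the final exponent shrinks with k whenever Eb/N0 > 2 ln 2.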
Power Limited Regime

Pb ≤ (1/2) exp( −(k/2)(Eb/N0 − 2 ln 2) )

When k → ∞, we have

Eb/N0 > 2 ln 2 ≈ 1.42 dB  ⇒  Pb → 0

Pushing it Further
With more sophisticated methods it can be shown that

Pb ≤ { 2 exp( −(k/2)(Eb/N0 − 2 ln 2) ),          Eb/N0 ≥ 4 ln 2
     { exp( −k (√(Eb/N0) − √(ln 2))² ),          ln 2 ≤ Eb/N0 ≤ 4 ln 2

Hence we only need

Eb/N0 > ln 2 ≈ −1.6 dB  ⇒  Pb → 0  (as k → ∞)

The −1.6 dB is also the Shannon limit for the AWGN channel.
FSK achieves the AWGN Shannon Channel Capacity,
but in the power limited (infinite bandwidth) regime

Orthogonal Signaling: FSK Performance

[Figure: Pb versus Eb/N0 (dB) for M-ary FSK]

Simplex Signaling
The M-ary Simplex Signaling can be obtained from M-ary FSK by

S_SP = { x − E[S_FSK] : x ∈ S_FSK }

with the resulting energy

E_SP = ((M − 1)/M) E_FSK

Thus the error probability of simplex signaling at energy E equals that of FSK with energy scaled by M/(M − 1):

Pe = 1 − ∫ [ 1 − Q( (n_1 + √(M/(M−1)) √E) / √(N0/2) ) ]^{M−1} f(n_1) dn_1

Biorthogonal Signaling

S_BO = { ±[√E, 0, …, 0]ᵀ, …, ±[0, …, 0, √E]ᵀ }

Given s_1 = [√E, 0, …, 0]ᵀ transmitted, correct decision calls for

E + √E n_1 ≥ −(E + √E n_1)  ⇔  n_1 ≥ −√E

and

E + √E n_1 ≥ |√E n_m|,   2 ≤ m ≤ M/2

Hence

√E + n_1 ≥ |n_m|,   2 ≤ m ≤ M/2,   and   n_1 ≥ −√E


√E + n_1 ≥ |n_m|, 2 ≤ m ≤ M/2,   n_1 ≥ −√E

⇒  Pc = ∫_{−√E}^{∞} [ 1 − 2Q( (n_1 + √E) / √(N0/2) ) ]^{M/2 − 1} f(n_1) dn_1

and

Pe = 1 − Pc

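The biorthogonal Pc integral is also easy to evaluate numerically; a sketch with N0 = 1 (our normalization), again with a trapezoidal rule of our choosing:

```python
import math
from statistics import NormalDist

def Q(x):
    return NormalDist().cdf(-x)

def pe_biorthogonal(M, E_over_N0, n_steps=4000):
    # Pc = ∫_{−√E}^{∞} [1 − 2Q((n1+√E)/√(N0/2))]^(M/2−1) f(n1) dn1, N0 = 1.
    E = E_over_N0
    sigma = math.sqrt(0.5)               # √(N0/2) with N0 = 1
    lo, hi = -math.sqrt(E), 8 * sigma    # lower limit −√E from the slide
    h = (hi - lo) / n_steps
    pc = 0.0
    for i in range(n_steps + 1):
        n1 = lo + i * h
        w = h if 0 < i < n_steps else h / 2
        f = math.exp(-n1 * n1 / (2 * sigma**2)) / (sigma * math.sqrt(2 * math.pi))
        pc += w * (1 - 2 * Q((n1 + math.sqrt(E)) / sigma)) ** (M // 2 - 1) * f
    return 1 - pc
```

For M = 2 the constellation degenerates to antipodal (BPSK) signaling, so the integral should reproduce Q(√(2E/N0)).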
Biorthogonal Signaling Performance

[Figure: Pe versus Eb/N0 (dB) for M-ary biorthogonal signaling]

Sec. 4.6 Comparison of Digital Signaling Methods

In general, signals that are Time-Limited to [−T/2, T/2] and Band-Limited to [−W, W] do not exist
However, we could relax the condition:

∫_{−W}^{W} |X_ℓ(f)|² df ≥ (1 − ε) ∫_{−∞}^{∞} |X_ℓ(f)|² df

for some small out-of-band ratio ε.

Theorem 7 (Prolate Spheroidal Functions)
For a real signal x(t) with support in time [−T/2, T/2] and ε-bandlimited to W, there exists a set of real orthonormal signals {φ_j(t), 1 ≤ j ≤ N} such that

∫ | x(t) − Σ_{j=1}^{N} ⟨x(t), φ_j(t)⟩ φ_j(t) |² dt ≤ ε′ ∫ |X(f)|² df

where ε′ = 12ε and N = 2WT + 1.

Why N = 2WT + 1
For signals with bandwidth W, the Nyquist rate is 2W for perfect reconstruction
You get 2W samples / second, i.e., 2W degrees of freedom
For time duration T, you get overall 2WT samples, i.e., 2WT degrees of freedom

But be warned that these Nyquist sample functions do not constitute the set {φ_j(t)} (not ε-bandlimited)

Bandwidth Efficiency
N ≈ 2WT
Since rate R = (1/T) log2 M, we have for M-ary real signaling

R/W = (log2 M / T) · (2T/N) = (2 log2 M) / N

where
  2 log2 M relates to constellation size M
  N is the dimensionality of the constellation

R/W is called Bandwidth Efficiency.

Band-limited vs Power-limited
For PAM, N = 1, hence

(R/W)_PAM = 2 log2 M > 1

For PSK and QAM, N = 2, hence

(R/W)_PSK = (R/W)_QAM = log2 M > 1

For M-ary FSK, N = M, hence

(R/W)_FSK = (2 log2 M)/M ≤ 1

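The three cases above follow from a single formula, R/W = 2 log2(M)/N; a one-function sketch (the dictionary of dimensionalities is our encoding of the slide):

```python
import math

def bandwidth_efficiency(scheme, M):
    # R/W = 2·log2(M)/N, where N is the dimensionality of the constellation:
    # N = 1 for PAM, N = 2 for PSK/QAM, N = M for orthogonal FSK.
    dims = {"PAM": 1, "PSK": 2, "QAM": 2, "FSK": M}
    return 2 * math.log2(M) / dims[scheme]
```

PAM/PSK/QAM gain efficiency as M grows (band-limited regime), while FSK loses it (power-limited regime).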
Shannon Channel Coding Theorem

The Shannon Channel Coding Theorem for Bandlimited Channels states:
Theorem 8
Given an average power constraint P over bandwidth W, the maximal number of bits per channel use (i.e., per dimension of signal space) that can be sent over the AWGN channel reliably is

C = (1/2) log2( 1 + (P/2W) / (N0/2) )  bits / channel use

Note: We will come back to this theorem later.

C = (1/2) log2( 1 + (P/2W) / (N0/2) )  bits / channel use

Thus for N = 2WT channel uses,

log2 M = RT ≤ NC = WT log2( 1 + P/(N0 W) )
⇒  R/W ≤ log2( 1 + P/(N0 W) )

On the other hand,

Eb = E / log2 M = PT / log2 M = P/R   ⇒   P = R Eb

We have

R/W ≤ log2( 1 + (Eb/N0)(R/W) )

Shannon Limit for Power-Limited Channels

R/W ≤ log2( 1 + (Eb/N0)(R/W) )

In general we have

Eb/N0 ≥ (2^{R/W} − 1) / (R/W)

For Power-Limited Channels we have 0 < R/W ≤ 1, hence

lim_{R/W → 0} (2^{R/W} − 1)/(R/W) = ln 2

and ln 2 ≈ −1.59 dB is called the Shannon Limit for digital communication over AWGN channels.

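The limit above is quick to confirm numerically; a sketch (function names are ours):

```python
import math

def min_ebn0(r_over_w):
    # Minimum Eb/N0 (linear scale) for reliable communication at spectral
    # efficiency R/W, from R/W ≤ log2(1 + (Eb/N0)·(R/W)).
    return (2 ** r_over_w - 1) / r_over_w

def to_db(x):
    return 10 * math.log10(x)
```

As R/W → 0 the required Eb/N0 decreases toward ln 2 ≈ 0.693, i.e., about −1.59 dB.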

Craig Representation of Gaussian Q Functions

Derivation of Craig Representation

Q(x) = ∫_x^{∞} (1/√(2π)) e^{−t²/2} dt

Let X and Y be i.i.d. Gaussian Random Variables ~ N(0, 1)

Q(x) = Pr{X ≥ x} = Pr{X ≥ x, −∞ < Y < ∞}

with

f(x, y) = (1/2π) exp( −(x² + y²)/2 )

Set

R = √(X² + Y²),   Θ = arctan(Y/X)

Then

f(r, θ) = (1/2π) · r exp(−r²/2) = f(θ) f(r)

For x ≥ 0,

Q(x) = Pr{X ≥ x, −∞ < Y < ∞}
     = 2 ∫_0^{π/2} Pr{ R ≥ x/cos(θ) } f(θ) dθ
     = 2 ∫_0^{π/2} (1/2π) ∫_{x/cos(θ)}^{∞} r e^{−r²/2} dr dθ
     = (1/π) ∫_0^{π/2} exp( −x²/(2 cos² θ) ) dθ

Setting θ′ = π/2 − θ, it can be easily shown that

Q(x) = (1/π) ∫_0^{π/2} exp( −x²/(2 cos² θ) ) dθ = (1/π) ∫_0^{π/2} exp( −x²/(2 sin² θ) ) dθ

Compared with the original definition of Q(x), the above form is easier in terms of numerical integration (finite limits and a bounded integrand).

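To illustrate that claim, here is a minimal sketch evaluating the Craig form with a midpoint rule (rule and step count are our choices):

```python
import math
from statistics import NormalDist

def q_craig(x, n_steps=2000):
    # Craig form: Q(x) = (1/π) ∫_0^{π/2} exp(−x²/(2·sin²θ)) dθ, valid for x ≥ 0.
    # The integrand vanishes smoothly as θ → 0, so a plain midpoint rule works.
    h = (math.pi / 2) / n_steps
    total = 0.0
    for i in range(n_steps):
        theta = (i + 0.5) * h
        total += math.exp(-x * x / (2 * math.sin(theta) ** 2))
    return total * h / math.pi
```

The result can be checked against the standard-normal cdf.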
Upper Bound of Q(x)

Q(x) = (1/π) ∫_0^{π/2} exp( −x²/(2 sin² θ) ) dθ
     ≤ (1/π) ∫_0^{π/2} exp( −x²/2 ) dθ
     = (1/2) exp( −x²/2 )

The inequality holds only for x ≥ 0.

Error Performance of M-ary PSK

Following the same polar-coordinate derivation, the symbol error probability of M-ary PSK admits the Craig-type integral

Pe = (1/π) ∫_0^{π(M−1)/M} exp( −(E/N0) · sin²(π/M) / sin²(θ) ) dθ

   < ((M − 1)/M) exp( −(E/N0) sin²(π/M) )

   = ((M − 1)/M) exp( −(Eb log2 M / N0) sin²(π/M) )

Compare the above bound to (1).

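The integral and its bound can be checked against each other numerically; a sketch (midpoint rule is our choice):

```python
import math
from statistics import NormalDist

def pe_mpsk(M, Es_over_N0, n_steps=4000):
    # M-PSK symbol error probability via the Craig-type integral
    # Pe = (1/π) ∫_0^{π(M−1)/M} exp(−(Es/N0)·sin²(π/M)/sin²θ) dθ
    upper = math.pi * (M - 1) / M
    h = upper / n_steps
    a = Es_over_N0 * math.sin(math.pi / M) ** 2
    total = 0.0
    for i in range(n_steps):
        theta = (i + 0.5) * h
        total += math.exp(-a / math.sin(theta) ** 2)
    return total * h / math.pi

def pe_mpsk_bound(M, Es_over_N0):
    # Bound: Pe < ((M−1)/M)·exp(−(Es/N0)·sin²(π/M)), since sin²θ ≤ 1.
    return (M - 1) / M * math.exp(-Es_over_N0 * math.sin(math.pi / M) ** 2)
```

For M = 2 the integral reduces to Q(√(2Es/N0)), the exact BPSK result.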
Sec. 4.5 Optimal Noncoherent Detection

Basic Theory
So far we have assumed that all communication is well synchronized and r(t) = sm(t) + n(t)
However, in practice the signal sm(t) could be delayed or impaired by sources other than noise
We use a vector parameter θ to capture such impairments. We reformulate the received signal as

r(t) = sm(t; θ) + n(t)

θ is a random vector, and hence sm(t; θ) is a random process.

r(t) = sm(t; θ) + n(t)

We could model θ with a certain joint pdf
We could assume sm(t; θ) is WSS
Let {e_1(τ), …, e_N(τ)} be a set of eigenfunctions of R_s(τ)
Then by the Karhunen-Loève expansion theorem we could again vectorize the waveform channel to get

r = s_{m,θ} + n

MAP in Presence of Uncertainty

r = s_{m,θ} + n

m̂ = arg max_m Pr{ s_m | r } = arg max_m Pm f(r | s_m)
  = arg max_m Pm ∫ f(r | s_m, θ) f(θ) dθ
  = arg max_m Pm ∫ f_n(r − s_{m,θ}) f(θ) dθ

The error probability is

Pe = Σ_{m=1}^{M} Pm ∫_{∪_{m′≠m} D_{m′}} ∫ f_n(r − s_{m,θ}) f(θ) dθ dr

Example
r(t) = A sm(t) + n(t)
where A is a nonnegative continuous random variable.
sm(t) is binary antipodal: s_1(t) = s(t) and s_2(t) = −s(t).
Rewrite the above in vector form

r = A s_m + n

Then D_1 is

D_1 = { r : ∫_0^{∞} e^{−(r − a√Eb)²/N0} f(a) da > ∫_0^{∞} e^{−(r + a√Eb)²/N0} f(a) da }

where ‖s(t)‖² = Es = Eb.

∫_0^{∞} e^{−(r − a√Eb)²/N0} f(a) da > ∫_0^{∞} e^{−(r + a√Eb)²/N0} f(a) da
⇔ ∫_0^{∞} [ e^{−(r − a√Eb)²/N0} − e^{−(r + a√Eb)²/N0} ] f(a) da > 0
⇔ ∫_0^{∞} e^{−(r² + a²Eb)/N0} [ e^{2ra√Eb/N0} − e^{−2ra√Eb/N0} ] f(a) da > 0
⇔ r > 0

Hence

D_1 = { r : r > 0 }

The error probability is

Pb = ∫_0^{∞} [ ∫_{−∞}^{0} (1/√(πN0)) exp( −(r − a√Eb)²/N0 ) dr ] f(a) da
   = E[ Q( A √(2Eb/N0) ) ]

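The averaged error probability E[Q(A√(2Eb/N0))] depends on the distribution of A, which the slides leave unspecified. As an illustration only, the sketch below takes A Rayleigh-distributed with E[A²] = 1 (a common fading model; this choice is ours), for which a closed form is known:

```python
import math, random
from statistics import NormalDist

def Q(x):
    return NormalDist().cdf(-x)

def pb_random_amplitude(eb_over_n0, n_samples=100_000, seed=1):
    # Monte-Carlo estimate of Pb = E[ Q(A·√(2·Eb/N0)) ] with A Rayleigh,
    # E[A²] = 1 (illustrative assumption, not from the slides).
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        a = math.sqrt(-math.log(rng.random()))   # Rayleigh sample: A² ~ Exp(1)
        total += Q(a * math.sqrt(2 * eb_over_n0))
    return total / n_samples

def pb_rayleigh_exact(eb_over_n0):
    # Known closed form for this fading model: ½(1 − √(γ/(1+γ))), γ = Eb/N0.
    g = eb_over_n0
    return 0.5 * (1 - math.sqrt(g / (1 + g)))
```

The Monte-Carlo estimate and the closed form should agree to within sampling error.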
Noncoherent Detection of Carrier Modulated Signals

Noncoherent Due to Uncertainty of Time Delay

Recall the bandpass signal sm(t)

sm(t) = Re{ s_ℓ,m(t) e^{j2πfc t} }

Assume the received signal is delayed by td ≪ T (due to mis-sync., propagation, etc.)

r(t) = sm(t − td) + n(t)
     = Re{ s_ℓ,m(t − td) e^{−j2πfc td} e^{j2πfc t} } + n(t)

Note td ≪ T while fc is large; we have s_ℓ,m(t − td) ≈ s_ℓ,m(t), but the phase φ = −2πfc td (mod 2π) is not negligible

Hence

r_ℓ(t) = s_ℓ,m(t) e^{jφ} + n_ℓ(t)    ⇒    r_ℓ = s_ℓ,m e^{jφ} + n_ℓ

We can assume φ uniform over [0, 2π). The MAP Rule is

m̂ = arg max_m Pm (1/2π) ∫_0^{2π} f_{n_ℓ}( r_ℓ − s_ℓ,m e^{jφ} ) dφ
  = arg max_m Pm (1/2π) ∫_0^{2π} (1/(2πN0)^N) exp( −‖r_ℓ − s_ℓ,m e^{jφ}‖² / (2N0) ) dφ
  = arg max_m Pm e^{−Em/N0} (1/2π) ∫_0^{2π} exp( Re[ r_ℓ · s*_ℓ,m e^{−jφ} ] / N0 ) dφ
  = arg max_m Pm e^{−Em/N0} (1/2π) ∫_0^{2π} exp( |r_ℓ · s*_ℓ,m| cos(θ_m − φ) / N0 ) dφ
  = arg max_m Pm e^{−Em/N0} I_0( |r_ℓ · s*_ℓ,m| / N0 )

where Em = ‖sm(t)‖² = (1/2)‖s_ℓ,m(t)‖² = (1/2)‖s_ℓ,m‖².

g_MAP(r_ℓ) = arg max_m Pm e^{−Em/N0} I_0( |r_ℓ · s*_ℓ,m| / N0 )

For equal-energy and equiprobable signals, the above simplifies to

m̂ = arg max_m |r_ℓ · s*_ℓ,m| = arg max_m | ∫ r_ℓ(t) s*_ℓ,m(t) dt |    (2)

since I_0(x) is strictly increasing for x ≥ 0.

(2) is referred to as the Envelope Detector

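The key property of the envelope statistic |r_ℓ · s*_ℓ,m| is its invariance to the unknown common phase on r_ℓ, which the coherent statistic Re[r_ℓ · s*_ℓ,m] lacks. A two-line sketch makes this concrete (vectors as Python complex lists; the example values in the test are arbitrary):

```python
import cmath

def envelope_metric(r, s):
    # Noncoherent statistic |r_ℓ · s*_ℓ,m|: unchanged by r → r·e^{jφ}.
    return abs(sum(ri * si.conjugate() for ri, si in zip(r, s)))

def coherent_metric(r, s):
    # Coherent statistic Re[r_ℓ · s*_ℓ,m]: degraded by an unknown phase.
    return sum((ri * si.conjugate()).real for ri, si in zip(r, s))
```

Rotating the received vector by any phase leaves the envelope metric untouched while shrinking the coherent one.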
Compare with Coherent Detection

r_ℓ(t) = s_ℓ,m(t) + n_ℓ(t)    ⇒    r_ℓ = s_ℓ,m + n_ℓ

The MAP rule is

m̂ = arg max_m Pm f_{n_ℓ}( r_ℓ − s_ℓ,m )
  = arg max_m ( Pm/(2πN0)^N ) exp( −‖r_ℓ − s_ℓ,m‖² / (2N0) )
  = arg max_m Re[ r_ℓ · s*_ℓ,m ]

for equiprobable and equal-energy signals.

Main Result
Theorem 9 (Carrier Modulated Signals)
For carrier modulated signals with baseband equivalent received signal r_ℓ over AWGN

r_ℓ = s_ℓ,m e^{jφ} + n_ℓ

Coherent ML Detection (φ known):

m̂ = arg max_m Re[ r_ℓ · s*_ℓ,m e^{−jφ} ]

NonCoherent ML Detection (φ unknown, uniform):

m̂ = arg max_m |r_ℓ · s*_ℓ,m|

for equiprobable and equal-energy signals.

Noncoherent Detection of FSK Modulated Signals

Recall that baseband M-ary FSK Orthogonal Modulation is given by

s_ℓ,m(t) = √(2Es) g(t) e^{j2π(m−1)Δf t}

and the bandpass

sm(t) = Re[ s_ℓ,m(t) e^{j2πfc t} ] = √(2Es) g(t) cos( 2πfc t + 2π(m−1)Δf t )

for 1 ≤ m ≤ M.
Given s_ℓ,m(t) transmitted, the delayed received signal is

r_ℓ(t) = s_ℓ,m(t) e^{jφ} + n_ℓ(t)

NonCoherent Detection of FSK

r_ℓ(t) = s_ℓ,m(t) e^{jφ} + n_ℓ(t)

The NonCoherent ML Detection computes

∫ r_ℓ(t) s*_ℓ,m′(t) dt = ∫ ( s_ℓ,m(t) e^{jφ} + n_ℓ(t) ) s*_ℓ,m′(t) dt
                       = e^{jφ} ∫ s_ℓ,m(t) s*_ℓ,m′(t) dt + ∫ n_ℓ(t) s*_ℓ,m′(t) dt

e^{jφ} ∫ s_ℓ,m(t) s*_ℓ,m′(t) dt + ∫ n_ℓ(t) s*_ℓ,m′(t) dt

Assuming g(t) = √(1/T) [u(t) − u(t − T)],

∫ s_ℓ,m(t) s*_ℓ,m′(t) dt = (2Es/T) ∫_0^T e^{j2π(m−m′)Δf t} dt
                         = (2Es/T) · ( e^{j2π(m−m′)Δf T} − 1 ) / ( j2π(m−m′)Δf )
                         = 2Es e^{jπ(m−m′)Δf T} sinc[ (m−m′)Δf T ]

Hence if Δf = k/T,

e^{jφ} ∫ s_ℓ,m(t) s*_ℓ,m′(t) dt = 2Es e^{jφ} δ_K(m − m′)

(δ_K denotes the Kronecker delta)

Coherent Detection of FSK

r_ℓ(t) = s_ℓ,m(t) + n_ℓ(t)

m̂ = arg max_{m′} Re[ r_ℓ · s*_ℓ,m′ ]
  = arg max_{m′} Re[ ∫ r_ℓ(t) s*_ℓ,m′(t) dt ]
  = arg max_{m′} Re[ ∫ s_ℓ,m(t) s*_ℓ,m′(t) dt + ∫ n_ℓ(t) s*_ℓ,m′(t) dt ]

with signal part

Re[ 2Es e^{jπ(m−m′)Δf T} sinc( (m−m′)Δf T ) ] = 2Es cos( π(m−m′)Δf T ) sinc( (m−m′)Δf T )

Here we need only Δf = k/(2T) such that

Re[ ∫ s_ℓ,m(t) s*_ℓ,m′(t) dt ] = 2Es δ_K(m − m′)

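The tone-spacing claims above can be checked directly from the correlation formula; a sketch with Es = 1 normalization (our choice):

```python
import cmath, math

def fsk_correlation(m, mp, delta_f_T, Es=1.0):
    # ⟨s_ℓ,m, s_ℓ,m′⟩ = 2·Es·e^{jπ(m−m′)ΔfT}·sinc((m−m′)ΔfT),
    # with sinc(x) = sin(πx)/(πx).
    d = (m - mp) * delta_f_T
    sinc = 1.0 if d == 0 else math.sin(math.pi * d) / (math.pi * d)
    return 2 * Es * cmath.exp(1j * math.pi * d) * sinc
```

At Δf·T = 1 the full complex correlation vanishes (what noncoherent detection needs), while at Δf·T = 1/2 only its real part vanishes, which suffices for coherent detection.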
Error Probability of Orthogonal Signaling with NonCoherent Detection

For M-ary Orthogonal Signaling with symbol energy Es, the lowpass equivalent has signal constellation

s_ℓ,1 = ( √(2Es), 0, …, 0 )
s_ℓ,2 = ( 0, √(2Es), …, 0 )
⋮
s_ℓ,M = ( 0, 0, …, √(2Es) )

Given s_ℓ,1 transmitted, the delayed received signal is

r_ℓ = e^{jφ} s_ℓ,1 + n_ℓ

The NonCoherent ML computes

s^H_ℓ,1 r_ℓ = 2Es e^{jφ} + s^H_ℓ,1 n_ℓ
s^H_ℓ,m r_ℓ = s^H_ℓ,m n_ℓ,    2 ≤ m ≤ M

Recall n_ℓ is a complex Gaussian random vector with

K_{n_ℓ} = E[ n_ℓ n^H_ℓ ] = 2N0 I_M

Hence s^H_ℓ,m n_ℓ is a circularly symmetric complex Gaussian random variable with

E[ s^H_ℓ,m n_ℓ n^H_ℓ s_ℓ,m ] = 2N0 · 2Es = 4Es N0

s^H_ℓ,1 r_ℓ = 2Es e^{jφ} + s^H_ℓ,1 n_ℓ
s^H_ℓ,m r_ℓ = s^H_ℓ,m n_ℓ,    2 ≤ m ≤ M

Thus

Re[ s^H_ℓ,1 r_ℓ ] ~ N( 2Es cos φ, 2Es N0 )
Im[ s^H_ℓ,1 r_ℓ ] ~ N( 2Es sin φ, 2Es N0 )
Re[ s^H_ℓ,m r_ℓ ] ~ N( 0, 2Es N0 )
Im[ s^H_ℓ,m r_ℓ ] ~ N( 0, 2Es N0 )

Re[ s^H_ℓ,1 r_ℓ ] ~ N(2Es cos φ, 2Es N0),   Im[ s^H_ℓ,1 r_ℓ ] ~ N(2Es sin φ, 2Es N0)

Define R_1 = |s^H_ℓ,1 r_ℓ| (Rician r.v.); then we have

f_{R1}(r_1) = (r_1/σ²) I_0( s r_1/σ² ) e^{−(r_1² + s²)/(2σ²)},    r_1 > 0

where σ² = 2Es N0 and s = 2Es.

Define R_m = |s^H_ℓ,m r_ℓ| (Rayleigh r.v.), m ≥ 2; then we have

f_{Rm}(r_m) = (r_m/σ²) e^{−r_m²/(2σ²)},    r_m > 0

f_{R1}(r_1) = (r_1/σ²) I_0(s r_1/σ²) e^{−(r_1²+s²)/(2σ²)},   f_{Rm}(r_m) = (r_m/σ²) e^{−r_m²/(2σ²)}

Pc = Pr{ R_2 < R_1, …, R_M < R_1 }
   = ∫_0^∞ Pr{ R_2 < r_1, …, R_M < r_1 | R_1 = r_1 } f_{R1}(r_1) dr_1
   = ∫_0^∞ [ ∫_0^{r_1} f_{Rm}(r_m) dr_m ]^{M−1} f_{R1}(r_1) dr_1
   = ∫_0^∞ [ 1 − e^{−r_1²/(2σ²)} ]^{M−1} f_{R1}(r_1) dr_1
   = Σ_{n=0}^{M−1} (M−1 choose n)(−1)^n ∫_0^∞ e^{−n r_1²/(2σ²)} (r_1/σ²) I_0(s r_1/σ²) e^{−(r_1²+s²)/(2σ²)} dr_1
   = Σ_{n=0}^{M−1} (M−1 choose n)(−1)^n ∫_0^∞ (r_1/σ²) I_0(s r_1/σ²) e^{−((n+1) r_1² + s²)/(2σ²)} dr_1

Pc = Σ_{n=0}^{M−1} (M−1 choose n)(−1)^n ∫_0^∞ (r_1/σ²) I_0(s r_1/σ²) e^{−((n+1)r_1² + s²)/(2σ²)} dr_1

Setting

s′ = s/√(n+1),    r = r_1 √(n+1)

gives

∫_0^∞ (r_1/σ²) I_0(s r_1/σ²) e^{−((n+1)r_1² + s²)/(2σ²)} dr_1
  = (1/(n+1)) e^{−(n/(n+1)) s²/(2σ²)} ∫_0^∞ (r/σ²) I_0(s′ r/σ²) e^{−(r² + s′²)/(2σ²)} dr
  = (1/(n+1)) e^{−(n/(n+1)) s²/(2σ²)}

since the remaining integrand is a Rician pdf (integrating to 1). Hence, with s²/(2σ²) = Es/N0,

Pc = Σ_{n=0}^{M−1} ((−1)^n/(n+1)) (M−1 choose n) e^{−(n/(n+1)) Es/N0}
   = 1 + Σ_{n=1}^{M−1} ((−1)^n/(n+1)) (M−1 choose n) e^{−(n/(n+1)) Es/N0}

Pc = 1 + Σ_{n=1}^{M−1} ((−1)^n/(n+1)) (M−1 choose n) e^{−(n/(n+1)) Es/N0}

Thus

Pe = 1 − Pc = Σ_{n=1}^{M−1} ((−1)^{n+1}/(n+1)) (M−1 choose n) e^{−(n/(n+1)) (Eb log2 M)/N0}

BFSK
For M = 2, the above shows

Pe = (1/2) e^{−Eb/(2N0)} > Q( √(Eb/N0) )

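The closed-form sum above is easy to code directly; a sketch (Es/N0 passed on a linear scale):

```python
import math
from math import comb

def pe_noncoherent_fsk(M, Es_over_N0):
    # Pe = Σ_{n=1}^{M−1} ((−1)^{n+1}/(n+1))·C(M−1,n)·exp(−(n/(n+1))·Es/N0)
    return sum((-1) ** (n + 1) / (n + 1) * comb(M - 1, n)
               * math.exp(-(n / (n + 1)) * Es_over_N0)
               for n in range(1, M))
```

For M = 2 the sum reduces to the binary noncoherent FSK result ½·e^{−Es/2N0}.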
Performance of Binary DPSK

For BDPSK, the two lowpass equivalent signals are

s_ℓ,1 = ( √(2Es), √(2Es) )
s_ℓ,2 = ( √(2Es), −√(2Es) )

where
  The first coordinate represents the previous transmission
  The second coordinate represents the current transmission
  Es = Eb is the bandpass bit energy (2Es for the baseband equivalent)

The received signal given s_ℓ,1 is

r_ℓ = e^{jφ} s_ℓ,1 + n_ℓ

Note ⟨s_ℓ,1, s_ℓ,2⟩ = 0.

r_ℓ = e^{jφ} s_ℓ,1 + n_ℓ

Re[ s^H_ℓ,1 r_ℓ ] ~ N( 4Es cos φ, 4Es N0 )
Im[ s^H_ℓ,1 r_ℓ ] ~ N( 4Es sin φ, 4Es N0 )
Re[ s^H_ℓ,2 r_ℓ ] ~ N( 0, 4Es N0 )
Im[ s^H_ℓ,2 r_ℓ ] ~ N( 0, 4Es N0 )

As

⟨s_ℓ,1, s_ℓ,2⟩ = 0,   ⟨s_ℓ,1, s_ℓ,1⟩ = ⟨s_ℓ,2, s_ℓ,2⟩ = 4Es

following the same analysis as FSK, we see for BDPSK

Pe,BDPSK = 1 − Pc = (1/2) e^{−2Es/(2N0)} = (1/2) e^{−Eb/N0}

But for coherent detection of BPSK we have

Pe,BPSK = Q( √(2Eb/N0) )

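The SNR penalty of DBPSK over coherent BPSK at a given target error rate can be computed by inverting both error formulas; a bisection sketch (the search interval and iteration count are our choices):

```python
import math
from statistics import NormalDist

def Q(x):
    return NormalDist().cdf(-x)

def ebn0_for_ber(pb_target, pe_func, lo=0.0, hi=30.0):
    # Bisect for the linear Eb/N0 at which the monotone-decreasing
    # error-rate function pe_func hits pb_target.
    for _ in range(100):
        mid = (lo + hi) / 2
        if pe_func(mid) > pb_target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def dbpsk_loss_db(pb_target):
    # SNR penalty of DBPSK (Pe = ½·e^{−Eb/N0}) over coherent BPSK
    # (Pe = Q(√(2·Eb/N0))) at a target bit error rate.
    g_d = ebn0_for_ber(pb_target, lambda g: 0.5 * math.exp(-g))
    g_c = ebn0_for_ber(pb_target, lambda g: Q(math.sqrt(2 * g)))
    return 10 * math.log10(g_d / g_c)
```

The penalty shrinks as the target error rate decreases, consistent with the small loss quoted for the performance figure.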
DBPSK Performance

[Figure: Pb versus Eb/N0 (dB) for DBPSK and coherent BPSK; Loss < 0.8 dB]

Maximum Likelihood Sequence Estimators: Viterbi Algorithm

Problem Formulation
Given a sequence of received vectors y_{t1}, …, y_{tn} at time instants t1, …, tn, determine the most likely transmitted sequence of vectors x_{t1}, …, x_{tn}.
Formulate this as a MAP problem:

{ x̂_{t1}, …, x̂_{tn} } = arg max_{x_{t1},…,x_{tn}} Pr{ x_{t1}, …, x_{tn} | y_{t1}, …, y_{tn} }
                        = arg max_{x_{t1},…,x_{tn}} f( y_{t1}, …, y_{tn} | x_{t1}, …, x_{tn} ) Pr{ x_{t1}, …, x_{tn} }

Further Assumptions: Memoryless Channel

arg max_{x_{t1},…,x_{tn}} f( y_{t1}, …, y_{tn} | x_{t1}, …, x_{tn} ) Pr{ x_{t1}, …, x_{tn} }

We may assume the channel is memoryless and

y_{ti} = x_{ti} + n_{ti}

for some noise vector n_{ti} at time instant ti

This means

f( y_{t1}, …, y_{tn} | x_{t1}, …, x_{tn} ) = Π_{i=1}^{n} f( y_{ti} | x_{ti} )

Further Assumptions: Markov Source

arg max_{x_{t1},…,x_{tn}} [ Π_{i=1}^{n} f( y_{ti} | x_{ti} ) ] Pr{ x_{t1}, …, x_{tn} }

We assume any realization of the sequence {x_{t1}, …, x_{tn}} can be formulated as an output of a first-order Markov chain with
1. A state space S of N states {S_1, …, S_N}
2. State transition probabilities Pr{S_j | S_i} = p_ij
3. An output function O(S_i, S_j) = x

A unique sequence of states {S_{t0}, S_{t1}, …, S_{tn}}, if it exists, is such that

x_{ti} = O( S_{ti−1}, S_{ti} ),    i = 1, 2, …, n

where S_{t0} is the initial state and Pr{S_{t0}} = 1.

arg max_{x_{t1},…,x_{tn}} [ Π_{i=1}^{n} f( y_{ti} | x_{ti} ) ] Pr{ x_{t1}, …, x_{tn} }

Given any event of sequence {x_{t1}, …, x_{tn}} with Pr{x_{t1}, …, x_{tn}} > 0, we can find a sequence of states {S_{t0}, S_{t1}, …, S_{tn}} such that

x_{ti} = O( S_{ti−1}, S_{ti} ),    i = 1, 2, …, n

Hence we have

Pr{ x_{t1}, …, x_{tn} } = Pr{ S_{t0}, S_{t1}, …, S_{tn} } = Pr{S_{t0}} Π_{i=1}^{n} Pr{ S_{ti} | S_{ti−1} }

Then we can rewrite the original MAP problem as

arg max_{S_{t1},…,S_{tn}} Π_{i=1}^{n} [ f( y_{ti} | x_{ti} = O(S_{ti−1}, S_{ti}) ) Pr{ S_{ti} | S_{ti−1} } ]

Dynamic Programming

arg max_{S_{t1},…,S_{tn}} Π_{i=1}^{n} [ f( y_{ti} | x_{ti} = O(S_{ti−1}, S_{ti}) ) Pr{ S_{ti} | S_{ti−1} } ]

Rewrite the above as

max_{S_{t1},…,S_{tn}} Π_{i=1}^{n} [ f( y_{ti} | O(S_{ti−1}, S_{ti}) ) Pr{ S_{ti} | S_{ti−1} } ]
= max_{S_{tn}} [ max_{S_{tn−1}} f( y_{tn} | O(S_{tn−1}, S_{tn}) ) Pr{ S_{tn} | S_{tn−1} }
    [ max_{S_{tn−2}} f( y_{tn−1} | O(S_{tn−2}, S_{tn−1}) ) Pr{ S_{tn−1} | S_{tn−2} }
      [ ⋯ [ max_{S_{t1}} f( y_{t2} | O(S_{t1}, S_{t2}) ) Pr{ S_{t2} | S_{t1} } · f( y_{t1} | O(S_{t0}, S_{t1}) ) Pr{ S_{t1} | S_{t0} } ] ⋯ ] ] ]

max_{S_{t1}} f( y_{t2} | O(S_{t1}, S_{t2}) ) Pr{ S_{t2} | S_{t1} } · f( y_{t1} | O(S_{t0}, S_{t1}) ) Pr{ S_{t1} | S_{t0} }

Note
  This is a function of S_{t2} only
  Given any S_{t2}, there is at least one state S_{t1} that maximizes the objective function
  If more than one choice of S_{t1} maximizes the objective function, picking an arbitrary one does not change the value.

max_{S_{t1}} f( y_{t2} | O(S_{t1}, S_{t2}) ) Pr{ S_{t2} | S_{t1} } · f( y_{t1} | O(S_{t0}, S_{t1}) ) Pr{ S_{t1} | S_{t0} }

Hence we define for all (S_{t1}, S_{t2}) ∈ S²

1. The Branch Metric Function

   B_{t0,t1}(S_{t0}, S_{t1}) = f( y_{t1} | O(S_{t0}, S_{t1}) ) Pr{ S_{t1} | S_{t0} }
   B_{t1,t2}(S_{t1}, S_{t2}) = f( y_{t2} | O(S_{t1}, S_{t2}) ) Pr{ S_{t2} | S_{t1} }

2. The State Metric Function

   Λ_{t1}(S_{t1}) = B_{t0,t1}(S_{t0}, S_{t1})
   Λ_{t2}(S_{t2}) = max_{S_{t1} ∈ S} B_{t1,t2}(S_{t1}, S_{t2}) Λ_{t1}(S_{t1})

3. The Survival Path Function

   P_{t2}(S_{t2}) = arg max_{S_{t1} ∈ S} B_{t1,t2}(S_{t1}, S_{t2}) Λ_{t1}(S_{t1}) = Ŝ_{t1}

Having obtained Λ_{t2}(S_{t2}) and P_{t2}(S_{t2}), the MAP problem now reduces to

max_{S_{tn}} [ max_{S_{tn−1}} f( y_{tn} | O(S_{tn−1}, S_{tn}) ) Pr{ S_{tn} | S_{tn−1} }
  [ ⋯ [ max_{S_{t2}} f( y_{t3} | O(S_{t2}, S_{t3}) ) Pr{ S_{t3} | S_{t2} } Λ_{t2}(S_{t2}) ] ⋯ ] ]

So we define for every (S_{t2}, S_{t3}) ∈ S²

B_{t2,t3}(S_{t2}, S_{t3}) = f( y_{t3} | O(S_{t2}, S_{t3}) ) Pr{ S_{t3} | S_{t2} }
Λ_{t3}(S_{t3}) = max_{S_{t2}} B_{t2,t3}(S_{t2}, S_{t3}) Λ_{t2}(S_{t2})
P_{t3}(S_{t3}) = arg max_{S_{t2}} B_{t2,t3}(S_{t2}, S_{t3}) Λ_{t2}(S_{t2}) = Ŝ_{t2}

Recursive Programming

arg max_{S_{t1},…,S_{tn}} Π_{i=1}^{n} [ f( y_{ti} | x_{ti} = O(S_{ti−1}, S_{ti}) ) Pr{ S_{ti} | S_{ti−1} } ]

Continuing in this fashion, in general we have for i = 2, 3, …, n

B_{ti−1,ti}(S_{ti−1}, S_{ti}) = f( y_{ti} | O(S_{ti−1}, S_{ti}) ) Pr{ S_{ti} | S_{ti−1} }
Λ_{ti}(S_{ti}) = max_{S_{ti−1}} B_{ti−1,ti}(S_{ti−1}, S_{ti}) Λ_{ti−1}(S_{ti−1})
P_{ti}(S_{ti}) = arg max_{S_{ti−1} ∈ S} B_{ti−1,ti}(S_{ti−1}, S_{ti}) Λ_{ti−1}(S_{ti−1}) = Ŝ_{ti−1}

Finally we have

max_{S_{t1},…,S_{tn}} Π_{i=1}^{n} [ f( y_{ti} | x_{ti} = O(S_{ti−1}, S_{ti}) ) Pr{ S_{ti} | S_{ti−1} } ] = max_{S_{tn}} Λ_{tn}(S_{tn})

Viterbi Algorithm: Initial Stage

Input: Channel Observations y_{t1}, …, y_{tn}
Output: MAP Estimates x̂_{t1}, …, x̂_{tn}

Initializing
1: for all S_{t1} ∈ S do
2:   B_{t0,t1}(S_{t0}, S_{t1}) = f( y_{t1} | O(S_{t0}, S_{t1}) ) Pr{ S_{t1} | S_{t0} }
3:   Λ_{t1}(S_{t1}) = B_{t0,t1}(S_{t0}, S_{t1})
4: end for

Viterbi Algorithm: Recursive Stage

1: for i = 2 to n do
2:   for all S_{ti} ∈ S do
3:     for all S_{ti−1} ∈ S do
4:       B_{ti−1,ti}(S_{ti−1}, S_{ti}) = f( y_{ti} | O(S_{ti−1}, S_{ti}) ) Pr{ S_{ti} | S_{ti−1} }
5:     end for
6:     Λ_{ti}(S_{ti}) = max_{S_{ti−1}} B_{ti−1,ti}(S_{ti−1}, S_{ti}) Λ_{ti−1}(S_{ti−1})
7:     P_{ti}(S_{ti}) = arg max_{S_{ti−1} ∈ S} B_{ti−1,ti}(S_{ti−1}, S_{ti}) Λ_{ti−1}(S_{ti−1})
8:   end for
9: end for

Viterbi Algorithm: Trace-Back Stage and Output

1: Ŝ_{tn} = arg max_{S_{tn}} Λ_{tn}(S_{tn})
2: for i = n downto 1 do
3:   Ŝ_{ti−1} = P_{ti}(Ŝ_{ti})
4:   x̂_{ti} = O( Ŝ_{ti−1}, Ŝ_{ti} )
5: end for

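The three stages above fit in a short Python function; a minimal log-domain sketch (the state names, transition table, and Gaussian emission used in the example are illustrative choices of ours, not from the slides):

```python
import math

def viterbi(obs, states, trans, out, emit, init_state):
    # MAP sequence estimation on a first-order Markov chain.
    #   obs:   observations y_{t1..tn}
    #   trans: trans[s][s2] = Pr{s2 | s}
    #   out:   out[(s, s2)] = channel input produced by transition s -> s2
    #   emit:  emit(y, x) = f(y | x), the memoryless channel likelihood
    # Metrics accumulate in the log domain for numerical stability; a tiny
    # floor stands in for log(0) on forbidden transitions.
    def branch(s, s2, y):
        return (math.log(trans[s].get(s2, 0) or 1e-300)
                + math.log(emit(y, out[(s, s2)])))
    # Initial stage
    metric = {s: branch(init_state, s, obs[0]) for s in states}
    paths = {s: [s] for s in states}
    # Recursive stage: keep one survivor per state
    for y in obs[1:]:
        new_metric, new_paths = {}, {}
        for s2 in states:
            best = max(states, key=lambda s: metric[s] + branch(s, s2, y))
            new_metric[s2] = metric[best] + branch(best, s2, y)
            new_paths[s2] = paths[best] + [s2]
        metric, paths = new_metric, new_paths
    # Trace-back stage: follow the best terminal state's survivor path
    end = max(states, key=lambda s: metric[s])
    seq = [init_state] + paths[end]
    return [out[(a, b)] for a, b in zip(seq, seq[1:])]
```

With uniform transition probabilities the algorithm reduces to symbol-by-symbol detection; with "sticky" transitions it can override an unreliable observation, which is exactly the gain from exploiting source memory.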
Advantage of Viterbi Algorithm

The original MAP problem

arg max_{S_{t1},…,S_{tn}} Π_{i=1}^{n} [ f( y_{ti} | x_{ti} = O(S_{ti−1}, S_{ti}) ) Pr{ S_{ti} | S_{ti−1} } ]

has exponential complexity O(|S|^n)
The Viterbi algorithm has linear complexity O(n |S|²)
Many Communication Problems can be formulated as a 1st order Markov Chain
1. Demodulation of CPM
2. Demodulation of Differential Encoding
3. Decoding of Convolutional Codes
4. Estimation of Correlated Channels

It is easy to generalize to higher-order Markov chains

Further Improvement: Baum-Welch Algorithm

arg max_{S_{t1},…,S_{tn}} Π_{i=1}^{n} [ f( y_{ti} | x_{ti} = O(S_{ti−1}, S_{ti}) ) Pr{ S_{ti} | S_{ti−1} } ]

The Viterbi Algorithm provides the best estimate of the sequence x̂_{t1}, …, x̂_{tn}, but this is not the best one can do.
The best MAP estimate of each individual x_{ti} is the following:

x̂_{ti} = arg max_x Σ_{ S_{t1},…,S_{tn} : O(S_{ti−1}, S_{ti}) = x } Π_{i′=1}^{n} [ f( y_{ti′} | x_{ti′} = O(S_{ti′−1}, S_{ti′}) ) Pr{ S_{ti′} | S_{ti′−1} } ]

This can be solved by another dynamic programming method, known as the Baum-Welch algorithm (or BCJR)
It has complexity ≈ 2 × Viterbi Algorithm
