
ICAI – Máster MIT

Sistemas de comunicación I

Chapter 1:
Basic signalling in AWGN
channels

Javier Matanza
W. Warzanskyj
Luis Cucala

Geometric representation of signals


Signal and symbol
[Figure: amplitude (e.g. volts) vs. time t; the signal is a concatenation of symbols of duration T: Symbol 1 on [0, T], Symbol 2 on [T, 2T], ..., Symbol k on [(k-1)T, kT].]

• Signal or message: a concatenation of symbols, each symbol having a "symbol duration", or "symbol period", T
• There will be M different symbols. The ensemble of the M symbols is the symbol alphabet
• A symbol can represent one or more bits. M is usually a power of 2, so that $M = 2^n$, with $n \equiv$ number of bits per symbol
• Signal or message S represented as a vector whose components are symbols:
  $S = [s_1(t),\ s_2(t),\ s_3(t),\ \ldots,\ s_k(t)]$
  a concatenation of k symbols, drawn from the set of M possible symbols
Symbol as a vector (I)
• Basic idea: represent any of the M symbols $s_i(t)$, $i \in [1, M]$, as a linear combination of N orthonormal basis functions $\phi_j(t)$, $j \in [1, N]$, with $N \le M$:

  symbol $s_i(t) = \sum_{j=1}^{N} s_{ij}\,\phi_j(t) = s_{i1}\,\phi_1(t) + s_{i2}\,\phi_2(t) + \ldots + s_{iN}\,\phi_N(t)$
• Basis functions are defined in the interval [0, T] and must have finite energy $E_{\phi_j}$:
  $E_{\phi_j} = \int_{-\infty}^{\infty} \phi_j^2(t)\,dt = \int_0^T \phi_j^2(t)\,dt < \infty$
• Basis functions must form an orthonormal basis of a vector space:
  vector space scalar product: $\int_0^T \phi_j(t)\cdot\phi_k^*(t)\,dt = \delta_{jk} = \begin{cases} 1, & j = k \\ 0, & j \neq k \end{cases}$

Key takeaway: the set of functions $s_i(t)$, $t \in [0, T]$, with the + operation, constitutes a vector space of dimension N. The $s_i(t)$ are the vectors of this vector space, and the $\phi_j(t)$ are its basis.
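A quick numerical check of the orthonormality condition can help fix ideas. The minimal sketch below is an illustration added here (not part of the original slides); it assumes T = 1 and uses the cosine pair that appears later in the examples of this chapter, approximating the scalar products with a Riemann sum.

```python
import numpy as np

T = 1.0                                      # symbol period (assumed for the check)
t = np.linspace(0.0, T, 100_000, endpoint=False)
dt = t[1] - t[0]

# Candidate orthonormal basis (same pair used in Example 1 of this chapter)
phi1 = np.sqrt(2.0 / T) * np.cos(2 * np.pi * t / T + np.pi / 4)
phi2 = np.sqrt(2.0 / T) * np.cos(2 * np.pi * t / T - np.pi / 4)

def inner(f, g):
    """Approximate the scalar product  int_0^T f(t) g(t) dt  by a Riemann sum."""
    return np.sum(f * g) * dt

print(inner(phi1, phi1))   # ~ 1  (unit energy)
print(inner(phi2, phi2))   # ~ 1  (unit energy)
print(inner(phi1, phi2))   # ~ 0  (orthogonal)
```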
Symbol as a vector (II)
• The coefficients that multiply the basis functions are the basis function amplitudes (the vector space coordinates). There are K possible amplitude values, or levels*.
• A symbol is a vector and will be expressed by means of its coefficients, which can then be arranged as a column matrix whose components are the basis signal amplitudes (the coefficients)

$s_i(t) \;\rightarrow\;$ amplitudes (coordinates) of the vector in the orthonormal basis:
$\bar{s}_i = \begin{bmatrix} s_{i1} \\ s_{i2} \\ \vdots \\ s_{iN} \end{bmatrix} = [s_{i1}\ s_{i2}\ \ldots\ s_{iN}]^T, \qquad s_{ij} \in \{A_1, \ldots, A_K\}$

* Note: we are used to vector spaces where the coefficients can take any value, $s_{ij} \in \mathbb{R}$. However, in digital communications we will have to resort to a limited number of values, $s_{ij} \in \{A_1, \ldots, A_K\}$.

Symbol as a vector — example: message $\{\bar{s}_2,\ \bar{s}_1\}$
Basis:
$\phi_1(t) = \sqrt{\dfrac{2}{T}}\cdot\cos\!\left(\dfrac{2\pi t}{T} + \dfrac{\pi}{4}\right), \qquad \phi_2(t) = \sqrt{\dfrac{2}{T}}\cdot\cos\!\left(\dfrac{2\pi t}{T} - \dfrac{\pi}{4}\right)$
Basis coordinates:
Symbol $\bar{s}_2 = \begin{bmatrix} -1 \\ 1 \end{bmatrix}$,  Symbol $\bar{s}_1 = \begin{bmatrix} 1 \\ -1 \end{bmatrix}$

Definition of modulation
• To modulate is to build a symbol from its amplitudes, the coefficients that
multiply the basis functions. Thus, symbols are also called modulated
signals.

[Block diagram: bits from the coder enter a serial-to-parallel converter that outputs the coefficients $s_{i1}, s_{i2}, \ldots, s_{iN}$; each coefficient multiplies its basis function $\phi_1(t), \phi_2(t), \ldots, \phi_N(t)$, and the products are summed into the symbol $s_i(t)$.]
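A minimal sketch of this modulator in Python/NumPy (an illustration added here, not code from the course): the coefficient vector produced by the serial-to-parallel stage weights the sampled basis functions and the weighted waveforms are summed into $s_i(t)$. The basis chosen below is the N = 2 cosine pair of Example 1 with T = 1 (assumptions); any orthonormal set would do.

```python
import numpy as np

T = 1.0
t = np.linspace(0.0, T, 1000, endpoint=False)

# Orthonormal basis sampled on [0, T) (Example 1 basis, assumed here)
basis = np.stack([
    np.sqrt(2 / T) * np.cos(2 * np.pi * t / T + np.pi / 4),   # phi_1(t)
    np.sqrt(2 / T) * np.cos(2 * np.pi * t / T - np.pi / 4),   # phi_2(t)
])

def modulate(coeffs, basis):
    """Build the symbol waveform s_i(t) = sum_j s_ij * phi_j(t)."""
    coeffs = np.asarray(coeffs, dtype=float)
    return coeffs @ basis            # weighted sum of the basis waveforms

s1 = modulate([1, -1], basis)        # symbol with coordinates [1, -1]
s2 = modulate([-1, 1], basis)        # symbol with coordinates [-1, 1]
print(s1[:5], s2[:5])                # s2(t) = -s1(t): antipodal waveforms
```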

Some concepts
• Bits per symbol: number of information bits represented by a symbol:
  Bits/symbol $= \log_2 M \;\le\; N\,\log_2 K$   ($K$ possible coefficient amplitude values, so at most $K^N$ symbols)

• Constellation: set of the alphabet's M vectors (column matrices) that correspond to each one of the M symbols. The vectors have dimension N.
  $\bar{s} = \{\bar{s}_i\} = \left\{ \begin{bmatrix} s_{11} \\ \vdots \\ s_{1N} \end{bmatrix}, \begin{bmatrix} s_{21} \\ \vdots \\ s_{2N} \end{bmatrix}, \ldots, \begin{bmatrix} s_{M1} \\ \vdots \\ s_{MN} \end{bmatrix} \right\}$
• Symbol energy: if the modulator uses an orthonormal basis, its value is
  $E_{s_i} = \|\bar{s}_i\|^2 = \sum_{j=1}^{N} s_{ij}^2$
• Mean constellation energy: the energy of each symbol averaged over its probability of appearance:
  $E_s = E\!\left[\|\bar{s}\|^2\right] = \sum_{i=1}^{M} \|\bar{s}_i\|^2 \cdot P(\bar{s} = \bar{s}_i)$

• Symbol mean power:
  $P_s = \dfrac{E_s}{T}$,   $T$: symbol duration
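These three definitions translate directly into a few lines of NumPy. The sketch below is illustrative: the 4-symbol constellation, the equal probabilities and the symbol duration are assumed values, not data from the slides.

```python
import numpy as np

# Assumed example constellation: M = 4 symbols, N = 2 coordinates each
constellation = np.array([[-1.0, -1.0],
                          [-1.0,  1.0],
                          [ 1.0, -1.0],
                          [ 1.0,  1.0]])
probs = np.full(4, 0.25)          # P(s = s_i), assumed equiprobable
T = 1e-3                          # symbol duration in seconds (assumed)

E_si = np.sum(constellation**2, axis=1)   # E_si = ||s_i||^2 = sum_j s_ij^2
E_s = np.sum(E_si * probs)                # mean constellation energy
P_s = E_s / T                             # symbol mean power

print(E_si)   # [2. 2. 2. 2.]
print(E_s)    # 2.0
print(P_s)    # 2000.0 (with the assumed T)
```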



Examples
• Example 1: N = 2, M = 2
  a) Check that it is an orthonormal basis.
  b) Give the expression of the modulated signals (symbols) $s_1(t)$, $s_2(t)$, the symbol energy and the mean constellation energy.
  Basis: $\phi_1(t) = \sqrt{\dfrac{2}{T}}\cdot\cos\!\left(\dfrac{2\pi t}{T} + \dfrac{\pi}{4}\right)$,  $\phi_2(t) = \sqrt{\dfrac{2}{T}}\cdot\cos\!\left(\dfrac{2\pi t}{T} - \dfrac{\pi}{4}\right)$
  Symbols: $\bar{s}_1 = \begin{bmatrix} 1 \\ -1 \end{bmatrix}$,  $\bar{s}_2 = \begin{bmatrix} -1 \\ 1 \end{bmatrix}$
  There are 2 symbols in the alphabet, hence 1 bit/symbol.

• Example 2: N = 1, M = 4
  a) Represent the modulated signals $s_1(t)$, $s_2(t)$, $s_3(t)$, $s_4(t)$.
  Basis: $\phi_1(t) = \dfrac{1}{\sqrt{T}}\cdot\mathrm{sinc}\!\left(\dfrac{t}{T}\right)$
  Symbols: $\bar{s}_1 = -3$, $\bar{s}_2 = -1$, $\bar{s}_3 = 1$, $\bar{s}_4 = 3$
  There are 4 symbols in the alphabet, hence 2 bits/symbol.

Examples
• Example 1: N = 2, M = 2
  Basis: $\phi_1(t) = \sqrt{\dfrac{2}{T}}\cdot\cos\!\left(\dfrac{2\pi t}{T} + \dfrac{\pi}{4}\right)$,  $\phi_2(t) = \sqrt{\dfrac{2}{T}}\cdot\cos\!\left(\dfrac{2\pi t}{T} - \dfrac{\pi}{4}\right)$
  Symbols: $\bar{s}_1 = \begin{bmatrix} 1 \\ -1 \end{bmatrix}$,  $\bar{s}_2 = \begin{bmatrix} -1 \\ 1 \end{bmatrix}$
  a) Check that it is an orthonormal basis.
  b) Expression for the modulated signals $s_1(t)$, $s_2(t)$.
$\dfrac{1}{T}\int_0^T 2\cos\!\left(\dfrac{2\pi t}{T}+\dfrac{\pi}{4}\right)\cdot\cos\!\left(\dfrac{2\pi t}{T}-\dfrac{\pi}{4}\right)dt = \dfrac{1}{T}\int_0^T \left(\cos\dfrac{4\pi t}{T} + \cos\dfrac{\pi}{2}\right)dt = 0$
$\dfrac{1}{T}\int_0^T 2\cos\!\left(\dfrac{2\pi t}{T}+\dfrac{\pi}{4}\right)\cdot\cos\!\left(\dfrac{2\pi t}{T}+\dfrac{\pi}{4}\right)dt = \dfrac{1}{T}\int_0^T \left(\cos\!\left(\dfrac{4\pi t}{T}+\dfrac{\pi}{2}\right) + 1\right)dt = 1$
⇒ the basis is orthonormal

$s_1(t) = \sqrt{\dfrac{2}{T}}\cdot\cos\!\left(\dfrac{2\pi t}{T}+\dfrac{\pi}{4}\right) - \sqrt{\dfrac{2}{T}}\cdot\cos\!\left(\dfrac{2\pi t}{T}-\dfrac{\pi}{4}\right) = -2\sqrt{\dfrac{2}{T}}\cdot\sin\!\left(\dfrac{2\pi t}{T}\right)\cdot\sin\!\left(\dfrac{\pi}{4}\right) = -\dfrac{2}{\sqrt{T}}\cdot\sin\!\left(\dfrac{2\pi t}{T}\right)$
$s_2(t) = +\dfrac{2}{\sqrt{T}}\cdot\sin\!\left(\dfrac{2\pi t}{T}\right)$

$E_{s_1} = E_{s_2} = 1^2 + (-1)^2 = 2$
$E_s = \sum_{i=1}^{M} \|\bar{s}_i\|^2 \cdot P(\bar{s} = \bar{s}_i) = 2\cdot 0.5 + 2\cdot 0.5 = 2$
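The closed-form result above can be checked numerically. This small sketch (added for illustration, with T = 1 assumed) builds $s_1(t) = \phi_1(t) - \phi_2(t)$ from samples and compares it with $-(2/\sqrt{T})\sin(2\pi t/T)$; its energy matches the value 2 computed above.

```python
import numpy as np

T = 1.0
t = np.linspace(0.0, T, 100_000, endpoint=False)
dt = t[1] - t[0]

phi1 = np.sqrt(2 / T) * np.cos(2 * np.pi * t / T + np.pi / 4)
phi2 = np.sqrt(2 / T) * np.cos(2 * np.pi * t / T - np.pi / 4)

s1 = phi1 - phi2                                   # coordinates [1, -1]
closed_form = -(2 / np.sqrt(T)) * np.sin(2 * np.pi * t / T)

print(np.max(np.abs(s1 - closed_form)))            # ~ 0: the identity holds
print(np.sum(s1**2) * dt)                          # ~ 2: symbol energy E_s1
```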

Examples
• Example 2: N = 1, M = 4
  Basis: $\phi_1(t) = \dfrac{1}{\sqrt{T}}\cdot\mathrm{sinc}\!\left(\dfrac{t}{T}\right)$
  Symbols: $\bar{s}_1 = -3$, $\bar{s}_2 = -1$, $\bar{s}_3 = 1$, $\bar{s}_4 = 3$
  a) Represent the modulated signals $s_1(t)$, $s_2(t)$, $s_3(t)$, $s_4(t)$: each one is the sinc pulse $\phi_1(t)$ scaled by $-3$, $-1$, $+1$ or $+3$.
  (Note: there are no practical circuits that generate ideal sinc pulses.)

Appendix

Gram-Schmidt orthogonalization procedure


Gram-Schmidt Orthogonalization Procedure

• Basic idea: given a set of modulated waveforms (symbols), find an orthonormal basis and the representation (coordinates) of the waveforms in it.
• Remember from Algebra: if N < M, the set of M symbols is not a basis of the vector space but a generating system (the symbols are not linearly independent).
• Let's assume we have a set of M different modulated signals (symbols) $s_i(t)$, i = 1, 2, …, M.
• We seek a set of N ≤ M orthonormal functions $\phi_j(t)$, j = 1, 2, …, N, to be used as a basis.
• Procedure:
  1. Construct the first basis function by normalizing the first waveform:
     $\phi_1(t) = \dfrac{s_1(t)}{\sqrt{E_{s_1}}}, \qquad E_{s_1} = \int_{-\infty}^{\infty} s_1^2(t)\,dt$
Gram-Schmidt Orthogonalization Procedure (II)

2. Construct the second basis function, perpendicular to the first one, using signal $s_2(t)$:
   $d_2(t) = s_2(t) - c_{21}\cdot\phi_1(t)$
   [Figure: $s_2(t)$ decomposed into its projection $c_{21}$ along $\phi_1(t)$ and the orthogonal component $d_2(t)$, which lies along $\phi_2(t)$.]

Gram-Schmidt Orthogonalization Procedure (II)

$c_{21} = \langle s_2(t), \phi_1(t)\rangle = \int_{-\infty}^{\infty} s_2(t)\cdot\phi_1(t)\,dt$  ← projection of $s_2(t)$ onto $\phi_1(t)$ (scalar product)
$d_2(t) = s_2(t) - c_{21}\cdot\phi_1(t)$  ← orthogonal to $\phi_1(t)$, but not unitary
$\phi_2(t) = \dfrac{d_2(t)}{\sqrt{\int_0^T d_2^2(t)\,dt}}$  ← normalization

In general:
$\phi_i(t) = \dfrac{d_i(t)}{\sqrt{\int_0^T d_i^2(t)\,dt}}, \qquad c_{ij} = \langle s_i(t), \phi_j(t)\rangle = \int_0^T s_i(t)\cdot\phi_j(t)\,dt \quad (j = 1 \ldots i-1)$
$d_i(t) = s_i(t) - \sum_{j=1}^{i-1} c_{ij}\cdot\phi_j(t)$
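The whole recursion fits in a short function when the waveforms are handled as sampled vectors. The sketch below is an illustration (not the course's code): it projects, subtracts, normalizes, and discards directions whose residual energy is negligible (linearly dependent waveforms). Applied to sampled versions of the rectangular waveforms of the examples that follow, it reproduces the basis functions and coordinates derived by hand.

```python
import numpy as np

def gram_schmidt(signals, dt, tol=1e-9):
    """Orthonormalize sampled waveforms.

    signals: array (M, L) with the M waveforms sampled every dt seconds.
    Returns (basis, coords): basis (N, L) with N <= M orthonormal functions,
    coords (M, N) with the coefficients s_ij = <s_i, phi_j>.
    """
    basis = []
    for s in signals:
        d = s.astype(float)
        for phi in basis:
            c = np.sum(s * phi) * dt           # c_ij = int s_i(t) phi_j(t) dt
            d = d - c * phi                    # remove the projection
        energy = np.sum(d**2) * dt             # int d_i^2(t) dt
        if energy > tol:                       # skip linearly dependent signals
            basis.append(d / np.sqrt(energy))  # phi_i(t) = d_i(t)/sqrt(energy)
    basis = np.array(basis)
    coords = signals @ basis.T * dt            # s_ij = int s_i(t) phi_j(t) dt
    return basis, coords
```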

Gram-Schmidt Orthogonalization Procedure (III)

• Example 1: two signals defined on [0, 2]:
  $s_1(t) = 1$ for $0 \le t < 2$;   $s_2(t) = 1$ for $0 \le t < 1$ and $-1$ for $1 \le t < 2$
• Example 2: four signals defined on [0, 3]:
  $s_1(t) = 1$ for $0 \le t < 2$, 0 elsewhere;   $s_2(t) = 1$ for $0 \le t < 1$, $-1$ for $1 \le t < 2$, 0 elsewhere;
  $s_3(t) = -1$ for $0 \le t < 1$, $+1$ for $1 \le t < 3$;   $s_4(t) = 1$ for $0 \le t < 3$
Find an orthonormal basis to represent the signals.


Gram-Schmidt Orthogonalization Procedure (III)
• Example 1:
  $E_{s_1} = \int_{-\infty}^{\infty} s_1^2(t)\,dt = \int_0^2 1\,dt = 2$
  $\phi_1(t) = \dfrac{s_1(t)}{\sqrt{E_{s_1}}} = \dfrac{1}{\sqrt{2}}, \quad 0 \le t < 2$
  $c_{21} = \langle s_2(t), \phi_1(t)\rangle = \int_0^1 \dfrac{1}{\sqrt{2}}\,dt + \int_1^2 (-1)\dfrac{1}{\sqrt{2}}\,dt = 0$
  $d_2(t) = s_2(t) - c_{21}\cdot\phi_1(t) = \begin{cases} 1, & 0 \le t < 1 \\ -1, & 1 \le t < 2 \end{cases}, \qquad \int_0^2 d_2^2(t)\,dt = \int_0^1 1\,dt + \int_1^2 1\,dt = 2$
  $\phi_2(t) = \dfrac{d_2(t)}{\sqrt{\int_0^2 d_2^2(t)\,dt}} = \begin{cases} \dfrac{1}{\sqrt{2}}, & 0 \le t < 1 \\ -\dfrac{1}{\sqrt{2}}, & 1 \le t < 2 \end{cases}$
  The symbol coordinates in that basis are: $\bar{s}_1 = \begin{bmatrix} \sqrt{2} \\ 0 \end{bmatrix}, \quad \bar{s}_2 = \begin{bmatrix} 0 \\ \sqrt{2} \end{bmatrix}$
Gram-Schmidt Orthogonalization Procedure (III)
• Example 2 (signals $s_1(t)$ … $s_4(t)$ as defined above)
  Caution! The four signals are not linearly independent: $s_4(t) = s_1(t) + s_2(t) + s_3(t)$.
  Therefore the vector space dimension is 3, $N = 3$: we will need only 3 basis functions $\phi_j(t)$.

Gram-Schmidt Orthogonalization Procedure (III)
• Example 2:
  $E_{s_1} = \int_{-\infty}^{\infty} s_1^2(t)\,dt = \int_0^2 1\,dt = 2, \qquad \phi_1(t) = \dfrac{s_1(t)}{\sqrt{E_{s_1}}} = \begin{cases} \dfrac{1}{\sqrt{2}}, & 0 \le t < 2 \\ 0, & 2 \le t < 3 \end{cases}$
  $c_{21} = \langle s_2(t), \phi_1(t)\rangle = \int_0^1 \dfrac{1}{\sqrt{2}}\,dt + \int_1^2 (-1)\dfrac{1}{\sqrt{2}}\,dt = 0$
  $d_2(t) = s_2(t) - c_{21}\cdot\phi_1(t) = \begin{cases} 1, & 0 \le t < 1 \\ -1, & 1 \le t < 2 \\ 0, & 2 \le t < 3 \end{cases} \;\Rightarrow\; \phi_2(t) = \dfrac{d_2(t)}{\sqrt{\int_0^3 d_2^2(t)\,dt}} = \begin{cases} \dfrac{1}{\sqrt{2}}, & 0 \le t < 1 \\ -\dfrac{1}{\sqrt{2}}, & 1 \le t < 2 \\ 0, & 2 \le t < 3 \end{cases}$
  $c_{31} = \langle s_3(t), \phi_1(t)\rangle = \int_0^1 (-1)\dfrac{1}{\sqrt{2}}\,dt + \int_1^2 (+1)\dfrac{1}{\sqrt{2}}\,dt = 0, \qquad c_{32} = \langle s_3(t), \phi_2(t)\rangle = \int_0^1 (-1)\dfrac{1}{\sqrt{2}}\,dt + \int_1^2 (+1)\left(-\dfrac{1}{\sqrt{2}}\right)dt = -\sqrt{2}$
  $d_3(t) = s_3(t) - c_{31}\cdot\phi_1(t) - c_{32}\cdot\phi_2(t) = \begin{cases} 0, & 0 \le t < 2 \\ 1, & 2 \le t < 3 \end{cases} \;\Rightarrow\; \phi_3(t) = \dfrac{d_3(t)}{\sqrt{\int_0^3 d_3^2(t)\,dt}} = \begin{cases} 0, & 0 \le t < 2 \\ 1, & 2 \le t < 3 \end{cases}$
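Because the four waveforms are piecewise constant on the unit intervals [0,1), [1,2), [2,3), three samples per signal (dt = 1) represent them exactly, which makes the result easy to verify. A standalone sketch (illustrative, not course code) reproducing the computation and confirming that the resulting basis is orthonormal:

```python
import numpy as np

dt = 1.0   # the signals are constant on each unit interval [0,1), [1,2), [2,3)
s = np.array([[ 1,  1, 0],     # s1(t)
              [ 1, -1, 0],     # s2(t)
              [-1,  1, 1],     # s3(t)
              [ 1,  1, 1]],    # s4(t) = s1 + s2 + s3
             dtype=float)

basis = []
for si in s:
    d = si.copy()
    for phi in basis:
        d -= (np.sum(si * phi) * dt) * phi       # subtract every projection
    energy = np.sum(d**2) * dt
    if energy > 1e-9:                            # s4 produces no new direction
        basis.append(d / np.sqrt(energy))
basis = np.array(basis)

print(basis)                     # rows: phi1, phi2, phi3
print(basis @ basis.T * dt)      # ~ identity: the basis is orthonormal
print(s @ basis.T * dt)          # coordinates of s1..s4 in the basis
```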

Gram-Schmidt Orthogonalization Procedure (IV)
• Still pending: Once you have the basis, what are the amplitudes (coordinates)
corresponding to each one of the modulated signals (symbols)?

• The $s_{ij}$ (coordinates in the orthonormal basis) corresponding to each symbol can be computed with:
  $s_{ij} = \int_{-\infty}^{\infty} s_i(t)\cdot\phi_j(t)\,dt$
  This is a scalar product, and the $s_{ij}$ are called Fourier coefficients.
• Moreover, if the basis is orthonormal, the energy of the modulated signal (symbol) can be calculated in the time domain or by means of the Fourier coefficients (Parseval's theorem):
  $E_{s_i} = \|\bar{s}_i\|^2 = \int_{-\infty}^{\infty} s_i^2(t)\,dt = \sum_{j=1}^{N} s_{ij}^2 = \|s_i\|^2$
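A short numerical check of both formulas (illustrative; the N = 2 cosine basis, T = 1 and the coefficients [3, −2] are arbitrary assumptions): the coefficients recovered by projection match the ones used to build the signal, and the time-domain energy equals the sum of squared coefficients.

```python
import numpy as np

T = 1.0
t = np.linspace(0.0, T, 100_000, endpoint=False)
dt = t[1] - t[0]

basis = np.stack([
    np.sqrt(2 / T) * np.cos(2 * np.pi * t / T + np.pi / 4),
    np.sqrt(2 / T) * np.cos(2 * np.pi * t / T - np.pi / 4),
])
coeffs = np.array([3.0, -2.0])        # arbitrary coordinates s_i1, s_i2
s_i = coeffs @ basis                  # s_i(t) = sum_j s_ij phi_j(t)

recovered = basis @ s_i * dt          # s_ij = int s_i(t) phi_j(t) dt
E_time = np.sum(s_i**2) * dt          # int s_i^2(t) dt
E_coeff = np.sum(coeffs**2)           # sum_j s_ij^2 (Parseval)

print(recovered)                      # ~ [ 3. -2.]
print(E_time, E_coeff)                # both ~ 13
```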


Demodulation
• To modulate is to build a symbol from its amplitudes or coefficients, the
coefficients that multiply the basis functions
• Demodulation is to obtain the amplitudes, or coefficients, from the received
symbol
• The output of the demodulator is an N element vector
• There are two equivalent procedures to demodulate:
• With correlators
• With matched filters
• If the channel does not modify the transmitted symbol, apart from a scaling factor, the demodulated (received) symbol vector $\bar{s}_i^{\,RX}$ is the same as the modulated (transmitted) symbol vector $\bar{s}_i^{\,TX}$
• If the channel distorts the symbol shape, the distortion must be undone
after the demodulation. This process is called equalization
• If at reception the symbol is contaminated with noise, the transmitted
symbol vector must be estimated from the vector resulting from the
demodulation. This process is called detection
Correlation demodulator
• With correlators, to demodulate is to simultaneously perform the following
operations (i.e. calculate the coordinates in the orthonormal basis):
$\int_0^T s_i^{RX}(t)\,\phi_j(t)\,dt, \quad j = 1 \ldots N, \qquad \text{where}\ \ s_i^{RX}(t) = \sum_{j=1}^{N} s_{ij}\,\phi_j(t)$
[Block diagram: the coefficients $s_{i1}, s_{i2}, \ldots, s_{iN}$ from the coder multiply $\phi_1(t), \ldots, \phi_N(t)$ and are summed into $s_i^{TX}(t)$; after the channel, $s_i^{RX}(t)$ feeds a bank of N correlators (multiplication by $\phi_j(t)$ followed by integration over [0, T]) whose outputs are the coordinates. Example shown: $s_{i1} = 1$, $s_{i2} = 0$, $s_{iN} = 0$, exploiting that the $\phi_j(t)$ are orthogonal.]

• The operation is synchronous: the integration starts and ends at the beginning and end of the symbol period. (Two oscillators are coherent if both of them have the same frequency and phase.)
• This demodulation is also known as coherent demodulation.
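On sampled signals the correlator bank is one matrix product. The sketch below is illustrative (Example 1 basis and an ideal, noiseless channel assumed): it modulates a coefficient vector and recovers the coordinates by integrating the product with each basis function over the symbol period.

```python
import numpy as np

T = 1.0
t = np.linspace(0.0, T, 100_000, endpoint=False)
dt = t[1] - t[0]

# Orthonormal basis (Example 1 pair, assumed)
basis = np.stack([
    np.sqrt(2 / T) * np.cos(2 * np.pi * t / T + np.pi / 4),
    np.sqrt(2 / T) * np.cos(2 * np.pi * t / T - np.pi / 4),
])

s_tx = np.array([1.0, -1.0])          # transmitted coefficients s_i1, s_i2
tx_waveform = s_tx @ basis            # modulator: s_i^TX(t)
rx_waveform = tx_waveform             # ideal channel (no distortion, no noise)

# Correlator bank: y_ij = int_0^T s_i^RX(t) phi_j(t) dt,  j = 1..N
s_rx = basis @ rx_waveform * dt
print(s_rx)                           # ~ [ 1. -1.]: the transmitted vector
```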
Correlation demodulation with noise (I)
• If the symbol is corrupted with additive white Gaussian noise (AWGN) $n(t)$ with these characteristics:
  • Gaussian probability density function:
    $f_N(n) = \dfrac{1}{\sqrt{2\pi\sigma_n^2}}\cdot e^{-\frac{n^2}{2\sigma_n^2}}$
    mean $E[n] = 0$,  variance $E[n^2] = \sigma_n^2$
    [Figure: zero-mean Gaussian pdf.]

  • Power spectral density (double-sided) constant with frequency: $G_n(f) = \dfrac{N_0}{2}$
  • Autocorrelation: samples are uncorrelated:
    $R_n(\tau) = E[n(t)\cdot n(t-\tau)] = \mathcal{F}^{-1}\{G_n(f)\} = \dfrac{N_0}{2}\,\delta(\tau)$

• The output from correlator $j$ is:
  $y_{ij} = \int_0^T \left[s_i(\tau) + n(\tau)\right]\phi_j(\tau)\,d\tau = \int_0^T \left[s_{ij}\,\phi_j(\tau) + n(\tau)\right]\phi_j(\tau)\,d\tau = \underbrace{s_{ij}}_{\text{signal}} + \underbrace{\int_0^T n(\tau)\cdot\phi_j(\tau)\,d\tau}_{\text{noise term}}$
Correlation demodulation with noise (II)
• The correlator output is the transmitted coefficient 𝒔𝒊𝒋 plus a noise term 𝒚′𝒊𝒋
$y'_{ij} = \int_0^T n(\tau)\cdot\phi_j(\tau)\,d\tau$

whose mean value and power (variance) are:
$E[y'_{ij}] = \int_0^T E[n(\tau)]\cdot\phi_j(\tau)\,d\tau = 0 \qquad (\text{since } E[n(\tau)] = 0)$
$E[y'^{\,2}_{ij}] = E\!\left[\int_0^T n(\tau)\,\phi_j(\tau)\,d\tau \int_0^T n(\tau')\,\phi_j(\tau')\,d\tau'\right] = \iint \phi_j(\tau)\,\phi_j(\tau')\cdot E[n(\tau)\,n(\tau')]\,d\tau\,d\tau' =$
$= \iint \phi_j(\tau)\,\phi_j(\tau')\,R_n(\tau, \tau')\,d\tau\,d\tau' \stackrel{n\ \text{stationary}}{=} \iint \phi_j(\tau)\,\phi_j(\tau')\,R_n(\tau - \tau')\,d\tau\,d\tau' =$
$= \iint \phi_j(\tau)\,\phi_j(\tau')\cdot\dfrac{N_0}{2}\,\delta(\tau - \tau')\,d\tau\,d\tau' = \dfrac{N_0}{2}\int_0^T \phi_j^2(\tau)\,d\tau = \dfrac{N_0}{2}$

• The noise term is Gaussian, with zero mean and power $\dfrac{N_0}{2}$
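The N0/2 result can be reproduced by simulation. In discrete time, white noise with double-sided PSD N0/2 is approximated by independent Gaussian samples of variance N0/(2·dt); the sketch below (illustrative, with assumed values of T and N0) feeds many noise realizations to one correlator and checks that the output noise term has zero mean and variance close to N0/2.

```python
import numpy as np

rng = np.random.default_rng(0)

T, N0 = 1.0, 0.2
t = np.linspace(0.0, T, 1000, endpoint=False)
dt = t[1] - t[0]
phi = np.sqrt(2 / T) * np.cos(2 * np.pi * t / T + np.pi / 4)   # unit-energy basis function

trials = 5000
# Discrete-time approximation of white noise with PSD N0/2: variance N0/(2*dt) per sample
noise = rng.normal(0.0, np.sqrt(N0 / (2 * dt)), size=(trials, t.size))

y_noise = noise @ phi * dt      # noise term at the correlator output, one value per trial
print(y_noise.mean())           # ~ 0
print(y_noise.var())            # ~ N0/2 = 0.1
```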
Demodulation with matched filter
Note: this demodulation is not coherent. Matched filters are also used for synchronization.
• An alternative to the correlators is the matched filter bank, a bank of filters whose impulse response is $h_j(t) = \phi_j(T - t)$, $j = 1 \ldots N$
  [Figure: a basis function $\phi_j(t)$ of amplitude A on [0, T] and its matched filter $h_j(t)$, its time-reversed replica on [0, T].]
• To demodulate is to simultaneously perform the convolutions $s_i(t) * h_j(t)$, $j = 1 \ldots N$, evaluated at $t = T$

• The operation is synchronous: the convolution starts and ends at the beginning and end of the symbol period.
  $s_i(t) * h_j(t)\Big|_{t=T} = \int_0^T s_i(\tau)\,\phi_j(T - t + \tau)\,d\tau\,\Big|_{t=T} = \int_0^T s_i(\tau)\,\phi_j(\tau)\,d\tau$

• The output is identical to that of the correlator. All conclusions hold.
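The equivalence can be seen numerically: convolving the received waveform with $h_j(t) = \phi_j(T - t)$ and taking the sample corresponding to $t = T$ gives the same coefficient as the correlator. The sketch below is illustrative (Example 1 basis, T = 1, noiseless reception assumed).

```python
import numpy as np

T = 1.0
t = np.linspace(0.0, T, 2000, endpoint=False)
dt = t[1] - t[0]

phi1 = np.sqrt(2 / T) * np.cos(2 * np.pi * t / T + np.pi / 4)
phi2 = np.sqrt(2 / T) * np.cos(2 * np.pi * t / T - np.pi / 4)
rx = 1.0 * phi1 + (-1.0) * phi2        # received symbol, coordinates [1, -1]

h1 = phi1[::-1]                        # matched filter h_1(t) = phi_1(T - t)

# Correlator output: int_0^T rx(t) phi_1(t) dt
corr_out = np.sum(rx * phi1) * dt

# Matched filter output: sample of the full convolution corresponding to t = T
# (index len(t)-1 in the discrete approximation)
mf_out = np.convolve(rx, h1)[len(t) - 1] * dt

print(corr_out, mf_out)                # both ~ 1.0 = s_i1
```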


Demodulation with matched filter. Details
[Figure: $\phi_j(t)$ and its matched filter $h_j(t) = \phi_j(T - t)$, both of amplitude A on [0, T]; $h_j(t - \tau)$ sampled at $t = T$ overlaps $\phi_j(\tau)$ exactly.]
If $t = T$, the basis is orthonormal and $s_i(t) = \sum_{j=1}^{N} s_{ij}\,\phi_j(t)$, then, with no noise, the branch $k$ output of the matched filter bank $h_1(t), \ldots, h_N(t)$ will be:
$s_i(t) * h_k(t)\Big|_{t=T} = \sum_{j=1}^{N} s_{ij} \int_0^T \phi_j(\tau)\,h_k(T - \tau)\,d\tau = \sum_{j=1}^{N} s_{ij} \int_0^T \phi_j(\tau)\,\phi_k(\tau)\,d\tau = s_{ik}$
(each integral equals 1 for $j = k$ and 0 for $j \neq k$)
Detection (I)
• If there were no noise, the output of the demodulator would be a replica of the transmitted symbol, an N-element vector $[s_{i1}\ s_{i2}\ \ldots\ s_{iN}]^T$

• But since there is noise, the received vector $[y_{i1}\ y_{i2}\ \ldots\ y_{iN}]^T$ must be detected, i.e. the transmitted symbol must be estimated from the demodulator output. For this estimation, we consider:
1. The outputs from the correlators are statistically independent; the
estimation of the transmitted symbol becomes the estimation of N
different values
In other words: with coherent demodulation, transmission of a symbol is
equivalent to the transmission of N independent 1-D sub-symbols
2. The pdf of the signal at the output of the correlator, $f(y_j)$, is not known, but the conditional pdf given that $s_{ij}$ was sent is:
  $f(y_{ij} \mid s_{ij}) = \dfrac{1}{\sqrt{2\pi}\,\sigma}\, e^{-\frac{(y_{ij} - s_{ij})^2}{2\sigma^2}}, \qquad \sigma^2 = \dfrac{N_0}{2}$
Detection (II)
• It is proved (Appendix) that if $y_j$ is the output of correlator $j$, the most likely transmitted coefficient is the one that fulfills:
  $\mathrm{dist}(y_{ij}, s_{ij}) = |y_{ij} - s_{ij}|$ is minimum, over all possible values of $s_{ij}$
• And, if $\bar{y}$ is the demodulator's output vector, the most likely transmitted symbol is the one that fulfills:
  $\mathrm{dist}(\bar{y}, \bar{s}_i) = \|\bar{y} - \bar{s}_i\|$ is minimum, over all the alphabet symbols $\bar{s}_i$
• Example: $\bar{s} = \{\bar{s}_i\} = \{\bar{s}_1, \bar{s}_2, \bar{s}_3, \bar{s}_4\} = \{[-1, -1],\ [-1, 1],\ [1, -1],\ [1, 1]\}$, received $\bar{y} = [-0.8,\ 0.3]$
  The most likely transmitted symbol is $\bar{s}_2$, because it is the one at the minimum distance from $\bar{y}$.
  [Figure: the four constellation points $s_1 \ldots s_4$ at the corners of a square, with $y$ closest to $s_2$.]
Example of error probability
• A basic example: bipolar baseband transmission
  N = 1, basis function: $\phi_1(t) = \Pi(t)$ (unit rectangular pulse on $[-1/2,\ 1/2]$)
  Symbol coefficients: $s_1 = +\sqrt{E_b}$, $s_2 = -\sqrt{E_b}$;  $E_b$: bit energy
  Symbols: $s_1(t) = +\sqrt{E_b}\,\Pi(t)$, $s_2(t) = -\sqrt{E_b}\,\Pi(t)$
  $+\sqrt{E_b} = \int_{-1/2}^{1/2} s_1(t)\,\phi_1(t)\,dt, \qquad -\sqrt{E_b} = \int_{-1/2}^{1/2} s_2(t)\,\phi_1(t)\,dt$
  [Figure: the two antipodal waveforms $s_1(t) = \sqrt{E_b}\,\Pi(t)$ and $s_2(t) = -\sqrt{E_b}\,\Pi(t)$.]
• Error probability (the conditional probability $P(\tilde{s}_2 \mid s_1)$ is the probability of deciding $s_2$ given that $s_1$ was transmitted, and vice versa):
  $P_e = P(s_1)\cdot P(\tilde{s}_2 \mid s_1) + P(s_2)\cdot P(\tilde{s}_1 \mid s_2) = P(\tilde{s}_2 \mid s_1)$
  if the symbols are equiprobable and $P(\tilde{s}_2 \mid s_1) = P(\tilde{s}_1 \mid s_2)$, with $\sigma^2 = \dfrac{N_0}{2}$:
  $P_e = P\!\left(n(t) > \sqrt{E_b}\right) = \int_{\sqrt{E_b}}^{\infty} \dfrac{1}{\sqrt{2\pi}\,\sigma}\, e^{-\frac{n^2}{2\sigma^2}}\,dn = \int_{\sqrt{E_b}/\sigma}^{\infty} \dfrac{1}{\sqrt{2\pi}}\, e^{-\frac{u^2}{2}}\,du = \dfrac{1}{2}\,\mathrm{erfc}\!\left(\sqrt{\dfrac{E_b}{N_0}}\right) = Q\!\left(\sqrt{\dfrac{2E_b}{N_0}}\right)$
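A Monte-Carlo check of this closed form (illustrative; $E_b = 1$ and a few Eb/N0 points are assumed choices): antipodal coefficients $\pm\sqrt{E_b}$ plus Gaussian noise of variance N0/2, detected by sign, give an error rate close to $Q(\sqrt{2E_b/N_0})$.

```python
import numpy as np
from math import erfc, sqrt

rng = np.random.default_rng(1)
Eb = 1.0
n_bits = 1_000_000

for EbN0_dB in (0, 2, 4, 6):
    N0 = Eb / 10**(EbN0_dB / 10)
    bits = rng.integers(0, 2, n_bits)
    tx = np.where(bits == 1, +np.sqrt(Eb), -np.sqrt(Eb))     # s1 = +sqrt(Eb), s2 = -sqrt(Eb)
    y = tx + rng.normal(0.0, np.sqrt(N0 / 2), n_bits)        # correlator output, sigma^2 = N0/2
    bits_hat = (y > 0).astype(int)                           # minimum-distance (sign) detector
    p_sim = np.mean(bits_hat != bits)
    p_theory = 0.5 * erfc(sqrt(Eb / N0))                     # = Q(sqrt(2 Eb/N0))
    print(EbN0_dB, p_sim, p_theory)
```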

Appendix

Maximum likelihood detection


Joint probability of demodulated symbol
• The output of the demodulator branch $j$ is a random signal $y_j$ whose conditional probability function, when $s_{ij}$ was transmitted, is:
  $f(y_j \mid s_{ij}) = \dfrac{1}{\sqrt{2\pi}\,\sigma}\, e^{-\frac{(y_j - s_{ij})^2}{2\sigma^2}}, \qquad \sigma^2 = \dfrac{N_0}{2}$

• Since the demodulator outputs are statistically independent, the conditional probability function of the whole demodulated symbol, if symbol $s_i$ was transmitted, is the joint probability function:
  $f(y_1, y_2, \ldots, y_N \mid \bar{s}_i) = \dfrac{1}{\sqrt{2\pi}\,\sigma}\,e^{-\frac{(y_1 - s_{i1})^2}{2\sigma^2}} \cdot \dfrac{1}{\sqrt{2\pi}\,\sigma}\,e^{-\frac{(y_2 - s_{i2})^2}{2\sigma^2}} \cdots \dfrac{1}{\sqrt{2\pi}\,\sigma}\,e^{-\frac{(y_N - s_{iN})^2}{2\sigma^2}} =$
  $= \dfrac{1}{(\pi N_0)^{N/2}}\cdot e^{-\frac{\sum_{j=1}^{N}(y_j - s_{ij})^2}{N_0}} = \dfrac{1}{(\pi N_0)^{N/2}}\cdot e^{-\frac{d(\bar{y},\,\bar{s}_i)^2}{N_0}}, \qquad i = 1, 2, \ldots, M$
  where $d(\bar{y}, \bar{s}_i)$ is the Euclidean distance between vectors $\bar{y}$ and $\bar{s}_i$.
  Note: for simplicity, from now on we write $y_j$ instead of $y_{ij}$ and assume we refer to symbol $i$.
Optimum Detection (I)
• Optimum detection seeks to maximize the probability of making a correct
decision, which is equivalent to minimizing the error probability
$P_e \equiv$ Error Probability $\triangleq P(\tilde{s}_j(t) \mid s_i(t))$: $s_i(t)$ was transmitted, but the demodulator output is $\tilde{s}_j(t)$
• Optimum detection is based on the computation of the Posterior Probability

Posterior Probability $\triangleq P(\bar{s}_i \mid \bar{y}) \equiv P(s_i(t)$ was transmitted given that $y(t)$ is received$)$

• Therefore, among the M possible $s_i(t)$ we seek to find the one which maximizes $P(\bar{s}_i \mid \bar{y})$. That $s_i(t)$ will be the most likely transmitted symbol $\hat{s}_i(t)$:
  $\hat{s}_i(t) \triangleq \arg\max_{\forall i} P(\bar{s}_i \mid \bar{y})$
  This decision criterion is known as the Maximum A Posteriori Probability (MAP) criterion.

Optimum Detection - MAP criterion
• Simple example with two symbols: $s_i(t) \in \{s_1(t), s_2(t)\}$ → Channel → $y(t)$ → compute $P(\bar{s}_1 \mid \bar{y})$ and $P(\bar{s}_2 \mid \bar{y})$ → decide $\hat{s}_i = \bar{s}_1$ or $\hat{s}_i = \bar{s}_2$
• The receiver should compute $P(\bar{s}_1 \mid \bar{y})$ and $P(\bar{s}_2 \mid \bar{y})$ and, depending on which one is larger, choose $\hat{s}_i = \bar{s}_1$ or $\hat{s}_i = \bar{s}_2$

• However, neither $P(\bar{s}_1 \mid \bar{y})$ nor $P(\bar{s}_2 \mid \bar{y})$ is available at the receiver.
• Solution: use Bayes' theorem, $P(\bar{s}_i \mid \bar{y}) = \dfrac{f(\bar{y} \mid \bar{s}_i)\cdot P(\bar{s}_i)}{f(\bar{y})}$

• Since $f(\bar{y})$ is the same for all $s_i(t)$, the MAP criterion becomes:
  Maximum A Posteriori Probability (MAP) criterion: $\hat{s}_i \triangleq \arg\max_{\forall i} f(\bar{y} \mid \bar{s}_i)\cdot P(\bar{s}_i)$

• A further simplification can be made if we assume the same probability for all symbols, i.e. $P(\bar{s}_i) = 1/M$:
  $\hat{s}_i \triangleq \arg\max_{\forall i} f(\bar{y} \mid \bar{s}_i)$
Optimum detection - ML criterion
• $f(\bar{y} \mid \bar{s}_i)$ is commonly known as the likelihood function, and its logarithm as the log-likelihood function:
  $\log f(\bar{y} \mid \bar{s}_i) = -\dfrac{N}{2}\cdot\log(\pi N_0) - \dfrac{1}{N_0}\,\bigl(d(\bar{y}, \bar{s}_i)\bigr)^2$
• The MAP criterion is transformed into the LOG-MAP criterion:
  $\hat{s}_i \triangleq \arg\max_{\forall i} f(\bar{y} \mid \bar{s}_i)\cdot P(\bar{s}_i) = \arg\max_{\forall i} \left[ N_0\,\log P(\bar{s}_i) - \bigl(d(\bar{y}, \bar{s}_i)\bigr)^2 \right]$

and, if symbols are equiprobable, into the log-maximum-likelihood criterion, LOG-ML:
  $\hat{s}_i \triangleq \arg\min_{\forall i} \bigl(d(\bar{y}, \bar{s}_i)\bigr)^2$
which means that the most likely symbol to have been transmitted is the
one nearest to the demodulated symbol
[Figure: constellation points $s_1 \ldots s_4$; the demodulated symbol $y$ lies nearest to the most likely symbol.]
Scrambling
Better performance and electronic simplicity: scrambling avoids long runs of equal bits and reduces the low-frequency content (otherwise big capacitors would be needed, as they must supply more energy).
• In transmission, symbols should be equiprobable. If the bits at the source are not, they are randomized by scrambling, to make them equiprobable

[1 1 1 1 1 1 0 0 0 0 0 . . .] <XOR> [1 0 0 1 0 1 0 0 1 1 0 . . .] = [0 1 1 0 1 0 0 0 1 1 0 . . .]

Input sequence random scrambling sequence scrambled sequence

• Scrambling is achieved by an XOR operation with a random sequence; descrambling, by XORing the scrambled sequence with the same random sequence
[0 1 1 0 1 0 0 0 1 1 0 . . . ] <XOR> [1 0 0 1 0 1 0 0 1 1 0 . . . .] = [1 1 1 1 1 1 0 0 0 0 0 . . .]

Scrambled sequence random scrambling sequence Input sequence
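A sketch of the scrambler/descrambler in Python (illustrative; the scrambling sequence below is the one from the example above, while in practice it comes from a pseudo-random generator known to both ends): scrambling and descrambling are the same XOR operation, so applying it twice recovers the input.

```python
import numpy as np

data     = np.array([1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0])   # input sequence (long runs of equal bits)
scramble = np.array([1, 0, 0, 1, 0, 1, 0, 0, 1, 1, 0])   # scrambling sequence (from the example)

scrambled   = data ^ scramble        # transmitter: XOR with the scrambling sequence
descrambled = scrambled ^ scramble   # receiver: XOR again with the same sequence

print(scrambled)                             # [0 1 1 0 1 0 0 0 1 1 0]
print(np.array_equal(descrambled, data))     # True: the input is recovered
```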


Code-division multiple access (CDMA): used in military communications and 3G.
- It makes the transmitter difficult to localize.
- It spreads the signal in the frequency domain.
- Low mean power spectral density.
- If the signal is below the noise level, it is not detectable.

Notes on error probability
• $P_e$: mean probability of detecting a symbol erroneously
• Optimum detection is usually assumed. With optimum detection, the noise variance after demodulation is $\dfrac{N_0}{2}$, the AWGN double-sided spectral density before the demodulator
• $P_e$ depends on the constellation $\{\bar{s}_i\}$ and the noise power ($N_0$)
• General equation for $P_e$:
  $P_e = \sum_{i=1}^{M} P(s_i(t)) \cdot P(\tilde{s}_j(t) \mid s_i(t))$
• Two basic theorems:
  • Rotation-invariant behavior: if all symbols in a constellation are rotated, the same $P_e$ will hold.
  • Translation-invariant behavior: if all symbols in a constellation are translated, the same $P_e$ will hold.
• Signal-to-Noise Ratio before demodulation:
  $SNR = \dfrac{P_s}{P_N} = \dfrac{E_s/T}{N_0\,B}$,   $B$: receiver bandwidth
