Figure: Block diagram of a digital communication system. Transmitter chain (CODEC and MODEM): anti-aliasing filter, ADC/sampling, PCM encoder, source coder, encryption, error-control coder, multiplexing, line coding/pulse shaping, modulation, multiple accessing. Receiver chain: multiple accessing, demodulation, matched filtering, equalisation, decision circuit, error-control decoding, demultiplexing, deciphering, source decoding, PCM decoder, DAC, audio-frequency amplifier, detection.
Figure: Sampling of an analog input. The signal s(t) is sampled at rate fs = 1/Ts; the sampled spectrum Vs(f) repeats at multiples of fs. If fs is too low, the replicas overlap and "aliasing" distortion occurs; with fs > 2fA(max), an ideal low-pass filter recovers the original spectrum.
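The aliasing condition can be checked numerically: a tone above fs/2 produces exactly the same sample values as its alias below fs/2. A minimal sketch (the 3 Hz and 1 Hz frequencies are illustrative choices, not from the slides):

```python
import numpy as np

fs = 4.0                                       # sampling rate (Hz), below Nyquist for 3 Hz
n = np.arange(16)                              # sample indices
tone_3hz = np.cos(2 * np.pi * 3.0 * n / fs)    # f0 = 3 Hz > fs/2 -> will alias
tone_1hz = np.cos(2 * np.pi * 1.0 * n / fs)    # alias frequency |f0 - fs| = 1 Hz

# The two sample sequences are identical: aliasing has occurred.
print(np.allclose(tone_3hz, tone_1hz))  # True
```

With fs raised above 2 x 3 Hz = 6 Hz, the two sequences would no longer coincide.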
Figure: (a) Unit rectangular pulse Π(t), supported on −1/2 ≤ t ≤ 1/2; (b) Fourier transform of the unit rectangular pulse centred on t = 0.
Figure: Message spectrum M(f) of bandwidth W (peak M(0)), and the modulated spectrum with components of height ½M(0) centred on ±fc, occupying fc − W to fc + W. Here $f_c \gg W$ and the transmission bandwidth is $B_T = 2W$.
$$S(f) = \frac{A_c}{2}\big[\delta(f - f_c) + \delta(f + f_c)\big] + \frac{1}{2}\big[M(f - f_c) + M(f + f_c)\big]$$
$$H(\mathcal{G}) = \sum_{k=0}^{K-1} p_k \log_2\frac{1}{p_k}, \qquad p_k:\ \text{probability of event } s_k \in \mathcal{G}$$
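As a quick numerical check of this definition, the entropy can be evaluated directly from a probability vector (the example distributions are illustrative):

```python
import math

def entropy(probs):
    """H = sum of p_k * log2(1/p_k); events with p_k = 0 contribute nothing."""
    return sum(p * math.log2(1.0 / p) for p in probs if p > 0)

# Uniform source over 4 symbols: H = log2(4) = 2 bits/symbol
print(entropy([0.25, 0.25, 0.25, 0.25]))
# A skewed source carries less information per symbol
print(entropy([0.5, 0.25, 0.125, 0.125]))  # 1.75 bits/symbol
```

The uniform distribution maximizes H for a given alphabet size, consistent with the formula.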
Hanoi, 2017
Khoa Điện tử - Viễn thông (Faculty of Electronics and Telecommunications), Digital Communications and Coding
Trường Đại học Công nghệ, ĐHQGHN (VNU University of Engineering and Technology), Truyền thông số và mã hóa
Content
3. Matched Filter
After passing through the channel, the signal still has a finite spectrum, which means it is unlimited in the time domain:
$$W = \frac{R_b}{2} = \frac{1}{2T_b}, \qquad p(t) = \frac{\sin(2\pi W t)}{2\pi W t} = \mathrm{sinc}(2Wt)$$
Figure: The sinc pulse p(t) and a train of such pulses. The zero crossings fall on the sampling instants, which are spaced by the signaling interval Tb.
Nyquist’s criterion
Ideal root
Figure: A signal g(t) plus noise, x(t), passes through a linear time-invariant filter of impulse response h(t); the output y(t) is sampled at time t = T to give y(T).
$$E[n^2(t)] = \int_{-\infty}^{\infty} S_N(f)\,df = \frac{N_0}{2}\int_{-\infty}^{\infty} |H(f)|^2\,df$$
$$\eta = \frac{\left|\displaystyle\int_{-\infty}^{\infty} H(f)G(f)\exp(j2\pi fT)\,df\right|^2}{\dfrac{N_0}{2}\displaystyle\int_{-\infty}^{\infty} |H(f)|^2\,df}$$
$$\left|\int_{-\infty}^{\infty} \phi_1(x)\phi_2(x)\,dx\right|^2 \le \int_{-\infty}^{\infty} |\phi_1(x)|^2\,dx \int_{-\infty}^{\infty} |\phi_2(x)|^2\,dx$$
$$\left|\int_{-\infty}^{\infty} H(f)G(f)\exp(j2\pi fT)\,df\right|^2 \le \int_{-\infty}^{\infty}|H(f)|^2\,df \int_{-\infty}^{\infty}|G(f)|^2\,df$$
$$\eta \le \frac{2}{N_0}\int_{-\infty}^{\infty}|G(f)|^2\,df, \qquad \eta_{\max} = \frac{2}{N_0}\int_{-\infty}^{\infty}|G(f)|^2\,df$$
The impulse response of the matched filter is
$$h_{opt}(t) = k\int_{-\infty}^{\infty} G(-f)\exp\big(-j2\pi f(T - t)\big)\,df = k\int_{-\infty}^{\infty} G(f)\exp\big(j2\pi f(T - t)\big)\,df = k\,g(T - t)$$
The output spectrum and the output sample at t = T are
$$G_0(f) = H_{opt}(f)\,G(f) = k\,|G(f)|^2\exp(-j2\pi fT)$$
$$g_0(T) = \int_{-\infty}^{\infty} G_0(f)\exp(j2\pi fT)\,df = k\int_{-\infty}^{\infty} |G(f)|^2\,df$$
By the energy theorem,
$$E = \int_{-\infty}^{\infty} g^2(t)\,dt = \int_{-\infty}^{\infty} |G(f)|^2\,df \quad\Rightarrow\quad g_0(T) = kE$$
$$E[n^2(t)] = \frac{k^2 N_0}{2}\int_{-\infty}^{\infty} |G(f)|^2\,df = \frac{k^2 N_0 E}{2}$$
$$\eta_{\max} = \frac{(kE)^2}{k^2 N_0 E/2} = \frac{2E}{N_0}$$
Figure: (a) Rectangular pulse of amplitude A and duration T, with energy E = A²T. (b) Matched-filter output g0(t), peaking at kA²T at t = T. (c) Output of the integrate-and-dump circuit, reaching AT at t = T.
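The peak value quoted in the figure can be reproduced numerically: convolving a rectangular pulse with its matched filter (taking k = 1) gives an output that peaks at the pulse energy A²T at t = T. A minimal discrete-time sketch:

```python
import numpy as np

A, T, N = 2.0, 1.0, 1000        # pulse amplitude, duration, number of samples
dt = T / N
g = A * np.ones(N)              # rectangular pulse g(t), 0 <= t < T
h = g[::-1]                     # matched filter h(t) = k*g(T - t), with k = 1

y = np.convolve(g, h) * dt      # filter output (discrete approximation)
print(y.max())                  # peak ~ A^2 * T = pulse energy E = 4.0
print(int(np.argmax(y)))        # peak index N - 1, i.e. at t ~ T
```

The triangular output shape also matches panel (b): the convolution of two equal-length rectangles.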
Figure: Two equivalent receivers. Left: y(t) = s(t) + n(t) passes through the matched filter h(t) = s(T − t), and the output v(t) is sampled at t = T. Right: y(t) is multiplied by s(t) and integrated over 0 ≤ t ≤ T (correlator), and v′(t) is sampled at t = T.
$$f_Y(y|0) = \frac{1}{\sqrt{\pi N_0/T_b}}\exp\!\left(-\frac{(y+A)^2}{N_0/T_b}\right), \qquad f_Y(y|1) = \frac{1}{\sqrt{\pi N_0/T_b}}\exp\!\left(-\frac{(y-A)^2}{N_0/T_b}\right)$$
With the change of variable
$$z = \frac{y+A}{\sqrt{N_0/T_b}}$$
$$p_{10} = \frac{1}{\sqrt{\pi}}\int_{(A+\lambda)/\sqrt{N_0/T_b}}^{\infty} \exp(-z^2)\,dz = \frac{1}{2}\,\mathrm{erfc}\!\left(\frac{A+\lambda}{\sqrt{N_0/T_b}}\right)$$
Definition:
$$\mathrm{erfc}(u) = \frac{2}{\sqrt{\pi}}\int_u^{\infty} \exp(-z^2)\,dz < \frac{\exp(-u^2)}{\sqrt{\pi}\,u}$$
Analogously,
$$p_{01} = \frac{1}{\sqrt{\pi}}\int_{(A-\lambda)/\sqrt{N_0/T_b}}^{\infty} \exp(-z^2)\,dz = \frac{1}{2}\,\mathrm{erfc}\!\left(\frac{A-\lambda}{\sqrt{N_0/T_b}}\right)$$
With $E_b = A^2 T_b$,
$$P_e = \frac{1}{2}\,\mathrm{erfc}\!\left(\sqrt{\frac{E_b}{N_0}}\right), \qquad P_e < \frac{\exp(-E_b/N_0)}{2\sqrt{\pi E_b/N_0}}$$
Approximation:
$$P_e \cong \frac{\exp(-z)}{2\sqrt{\pi z}}, \quad z \gg 1, \qquad z = \frac{A^2 T_b}{N_0} = \frac{E_b}{N_0}$$
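The formula Pe = ½ erfc(√(Eb/N0)) can be checked against a Monte Carlo simulation of polar (±A) signaling in AWGN. A sketch, with Eb normalized to 1 and an arbitrary 6 dB operating point:

```python
import math
import numpy as np

rng = np.random.default_rng(0)

def ber_theory(ebn0):
    """Pe = (1/2) * erfc(sqrt(Eb/N0)) for polar signaling with matched-filter reception."""
    return 0.5 * math.erfc(math.sqrt(ebn0))

def ber_monte_carlo(ebn0, n_bits=200_000):
    bits = rng.integers(0, 2, n_bits)
    symbols = 2.0 * bits - 1.0                         # 0 -> -1, 1 -> +1, so Eb = 1
    sigma = math.sqrt(1.0 / (2.0 * ebn0))              # noise variance N0/2 with N0 = 1/(Eb/N0)
    decisions = (symbols + sigma * rng.normal(size=n_bits) > 0).astype(int)
    return np.mean(decisions != bits)

ebn0 = 10 ** (6 / 10)                                  # Eb/N0 = 6 dB
print(ber_theory(ebn0), ber_monte_carlo(ebn0))         # both near 2.4e-3
```

The simulated and theoretical error rates agree to within the Monte Carlo noise of a 200 000-bit run.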
How do we estimate the bit error probability when the signal passes through a noisy channel?
$$x_1 = \int_0^{T_b} x(t)\phi_1(t)\,dt$$
$$f_{X_1}(x_1|0) = \frac{1}{\sqrt{\pi N_0}}\exp\!\left(-\frac{1}{N_0}(x_1 - s_{21})^2\right) = \frac{1}{\sqrt{\pi N_0}}\exp\!\left(-\frac{1}{N_0}\big(x_1 + \sqrt{E_b}\big)^2\right)$$
𝜙𝑖 𝑡 = cos 2𝜋𝑓𝑖 𝑡 , 0 ≤ 𝑡 ≤ 𝑇𝑏
𝑇𝑏
0 elsewhere
Take the basic functions
0
𝑠1 = 𝐸𝑏 𝑠2 =
0 𝐸𝑏
Express by the basic functions
Figure: Signal-space diagram for the binary FSK system. The diagram also includes two inserts showing example waveforms of the two signals s1(t) and s2(t).
Figure: Block diagrams of (a) the binary FSK transmitter and (b) the coherent binary FSK receiver.
$$P_e = \frac{1}{2}\,\mathrm{erfc}\!\left(\sqrt{\frac{E_b}{2N_0}}\right)$$
For comparison:
$$P_e^{BPSK} = \frac{1}{2}\,\mathrm{erfc}\!\left(\sqrt{\frac{E_b}{N_0}}\right), \qquad P_e^{BFSK} = \frac{1}{2}\,\mathrm{erfc}\!\left(\sqrt{\frac{E_b}{2N_0}}\right)$$
$$\int_0^T \phi_i(t)\phi_j(t)\,dt = \delta_{ij} = \begin{cases}1, & i = j\\ 0, & i \ne j\end{cases}$$
$$s_i(t) = \sum_{j=1}^{N} s_{ij}\phi_j(t), \quad 0 \le t \le T, \quad i = 1,2,\dots,M$$
$$s_{ij} = \int_0^T s_i(t)\phi_j(t)\,dt, \quad i = 1,2,\dots,M, \quad j = 1,2,\dots,N$$
Solution
We let v1(t) = s1(t) and compute
$$\phi_1(t) = \frac{v_1(t)}{\|v_1\|} = 1, \quad 0 \le t \le 1$$
Next, we compute
$$\langle s_2, \phi_1\rangle = \int_0^1 1\cdot\cos(2\pi t)\,dt = 0$$
and we set $v_2(t) = s_2(t) - \langle s_2, \phi_1\rangle\phi_1(t) = \cos(2\pi t)$, $0 \le t \le 1$. The second orthonormal function is found by normalizing v2(t).
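The two Gram-Schmidt steps can be verified numerically on a discretized [0, 1] grid (taking s2(t) = cos(2πt), as the integral in the example suggests):

```python
import numpy as np

dt = 1e-4
t = np.arange(0.0, 1.0, dt)                 # grid on [0, 1)
inner = lambda f, g: np.sum(f * g) * dt     # rectangular-rule inner product

s1 = np.ones_like(t)                        # s1(t) = 1
s2 = np.cos(2 * np.pi * t)                  # s2(t) = cos(2*pi*t) (assumed)

phi1 = s1 / np.sqrt(inner(s1, s1))          # phi1 = v1 / ||v1|| = 1
c = inner(s2, phi1)                         # <s2, phi1>, should be ~0
v2 = s2 - c * phi1                          # remove the projection onto phi1
phi2 = v2 / np.sqrt(inner(v2, v2))          # second orthonormal function

print(abs(c) < 1e-9)                        # True: s2 is already orthogonal to phi1
print(abs(inner(phi2, phi2) - 1.0) < 1e-9)  # True: phi2 has unit energy
```

Since the projection coefficient is zero, phi2 is just cos(2πt) scaled to unit energy, i.e. √2 cos(2πt).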
$$y_j(t) = \int_{-\infty}^{\infty} x(\tau)\,h_j(t - \tau)\,d\tau$$
If we set $h_j(t) = \phi_j(T - t)$ (matched filter), then
$$y_j(t) = \int_{-\infty}^{\infty} x(\tau)\,\phi_j(T - t + \tau)\,d\tau$$
Sampling this output at time $t = T$:
$$y_j(T) = \int_{-\infty}^{\infty} x(\tau)\,\phi_j(\tau)\,d\tau$$
The result is the same as the correlator output.
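The equivalence can be confirmed for arbitrary discrete signals: convolving with the time-reversed basis function and sampling at t = T gives exactly the correlator output. A sketch with random test vectors:

```python
import numpy as np

T_samples = 8
rng = np.random.default_rng(1)
x = rng.normal(size=T_samples)             # received signal on [0, T)
phi = rng.normal(size=T_samples)           # basis function phi_j on [0, T)

# Correlator: integrate x(t) * phi_j(t) over the symbol interval
corr = np.sum(x * phi)

# Matched filter h_j(t) = phi_j(T - t), output sampled at t = T
h = phi[::-1]
mf_out = np.convolve(x, h)[T_samples - 1]  # index corresponding to t = T

print(bool(np.isclose(corr, mf_out)))      # True: the two receivers are equivalent
```

This is why a bank of correlators and a bank of matched filters sampled at t = T are interchangeable receiver front ends.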
$$E_i = \sum_{j=1}^{N} s_{ij}^2 = \|\mathbf{s}_i\|^2$$
When the $p_k$ are all equal, the decision rule is called the maximum likelihood (ML) rule. For an AWGN channel,
$$l(m_i) = -\frac{1}{N_0}\sum_{j=1}^{N}(x_j - s_{ij})^2, \quad i = 1,2,\dots,M$$
which attains its maximum when $\sum_{j=1}^{N}(x_j - s_{ij})^2$ is minimum, i.e., when the distance is minimum.
In practice,
$$\sum_{j=1}^{N}(x_j - s_{kj})^2 = \sum_{j=1}^{N}x_j^2 - 2\sum_{j=1}^{N}x_j s_{kj} + \sum_{j=1}^{N}s_{kj}^2$$
The observation vector $\mathbf{x}$ lies in region $Z_i$ if $\sum_{j=1}^{N} x_j s_{kj} - \frac{1}{2}E_k$ is maximum for $k = i$.
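The decision metric Σ xj·skj − ½Ek is easy to evaluate for a small constellation. A sketch with an illustrative 4-point (QPSK-like) constellation, not taken from the slides:

```python
import numpy as np

# Illustrative N = 2, M = 4 constellation
s = np.array([[1.0, 1.0], [-1.0, 1.0], [-1.0, -1.0], [1.0, -1.0]])
E = np.sum(s**2, axis=1)          # symbol energies E_k

x = np.array([0.9, 1.2])          # observation vector: a noisy version of s[0]

metric = s @ x - 0.5 * E          # sum_j x_j * s_kj - E_k / 2, for every k
decision = int(np.argmax(metric))
print(decision)                   # 0: the closest message point wins
```

Because all four symbols here have equal energy, maximizing the correlation term alone would give the same answer; the −½Ek correction matters when symbol energies differ, as in QAM.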
Figure: Illustrating the partitioning of the observation space into decision regions
for the case when 𝑁 = 2 and 𝑀 = 4; it is assumed that the 𝑀 transmitted symbols are equally likely
$$P_e = 1 - \frac{1}{M}\sum_{i=1}^{M}\int_{Z_i} f_{\mathbf{X}}(\mathbf{x}|m_i)\,d\mathbf{x}$$
$$s_i(t) = \sum_{j=1}^{N} s_{ij}\phi_j(t), \quad 0 \le t < T$$
$$x(t) = \sum_{j=1}^{N}(s_{ij} + n_j)\phi_j(t) + n_r(t) = \sum_{j=1}^{N} r_j\phi_j(t) + n_r(t)$$
where $r_j = s_{ij} + n_j$ and $n_r(t) = n(t) - \sum_{j=1}^{N} n_j\phi_j(t)$.
$$H_1:\ y(t) = \sqrt{2E/T}\,\big[G_1\cos(w_1 t) + G_2\sin(w_1 t)\big] + n(t)$$
$$H_2:\ y(t) = \sqrt{2E/T}\,\big[G_1\cos(w_2 t) + G_2\sin(w_2 t)\big] + n(t), \quad 0 \le t \le T$$
with the basis functions
$$\phi_1(t) = \sqrt{2/T}\cos(w_1 t), \quad \phi_2(t) = \sqrt{2/T}\sin(w_1 t), \quad \phi_3(t) = \sqrt{2/T}\cos(w_2 t), \quad \phi_4(t) = \sqrt{2/T}\sin(w_2 t), \quad 0 \le t \le T$$
$$s_i(t) = A\cos\big(2\pi(f_c + (i-1)\Delta f)t\big), \quad 0 \le t \le T_s$$
where $\Delta f = \dfrac{m}{2T_s}$, $m$ an integer.
Applying the Gram-Schmidt procedure, choose
$$v_1(t) = s_1(t) = A\cos(2\pi f_c t), \quad 0 \le t \le T_s$$
$$\|v_1\|^2 = \int_0^{T_s} A^2\cos^2(2\pi f_c t)\,dt = \frac{A^2 T_s}{2}$$
$$\phi_1(t) = \frac{v_1(t)}{\|v_1\|} = \sqrt{\frac{2}{T_s}}\cos(2\pi f_c t), \quad 0 \le t \le T_s$$
It can be shown that $\langle s_2, \phi_1\rangle = 0$ if $\Delta f = \dfrac{m}{2T_s}$, so that the second orthonormal function is
$$\phi_2(t) = \sqrt{\frac{2}{T_s}}\cos\big(2\pi(f_c + \Delta f)t\big), \quad 0 \le t \le T_s$$
Similarly for the $M - 2$ other orthonormal functions up to $\phi_M(t)$. Thus the number of orthonormal functions is the same as the number of possible signals, and each signal can be written in terms of its own orthonormal function as
$$s_i(t) = \sqrt{E_s}\,\phi_i(t)$$
$$Z_i = \int_0^{T_s} y(t)\phi_i(t)\,dt, \qquad \text{where } y(t) = s_i(t) + n(t)$$
If $s_l(t)$ is transmitted, the decision rule becomes
$$d^2 = \sum_{j=1}^{M}\big(Z_j - \sqrt{E_s}\,\delta_{lj}\big)^2 = \text{minimum over } l = 1,2,\dots,M$$
Taking the square root and writing the sum out, this can be expressed as
$$d = \sqrt{Z_1^2 + Z_2^2 + \dots + \big(Z_l - \sqrt{E_s}\big)^2 + \dots + Z_M^2} = \text{minimum}$$
or, expanding the square,
$$d^2 = \sum_{j=1}^{M} Z_j^2 + E_s - 2\sqrt{E_s}\,Z_l = \text{minimum}$$
Since the sum over $j$ and $E_s$ are independent of $l$, $d^2$ can be minimized with respect to $l$ by choosing as the possible transmitted signal the one that maximizes the last term; that is, the decision rule becomes: choose the possible transmitted signal $s_l(t)$ such that
$$\sqrt{E_s}\,Z_l = \text{maximum}, \qquad\text{or}\qquad Z_l = \int_0^{T_s} y(t)\phi_l(t)\,dt = \text{maximum with respect to } l$$
1. Conceptual Model
2. Coherent Techniques
4. Comparison
5. Revision
$$\mathbf{s}_i = \begin{bmatrix}\sqrt{E}\cos\big((2i-1)\dfrac{\pi}{4}\big)\\[6pt] -\sqrt{E}\sin\big((2i-1)\dfrac{\pi}{4}\big)\end{bmatrix}$$
$$P' = \frac{1}{2}\,\mathrm{erfc}\!\left(\sqrt{\frac{E/2}{N_0}}\right) = \frac{1}{2}\,\mathrm{erfc}\!\left(\sqrt{\frac{E}{2N_0}}\right)$$
Total error probability:
$$P_c = (1 - P')^2 = \left[1 - \frac{1}{2}\,\mathrm{erfc}\!\left(\sqrt{\frac{E}{2N_0}}\right)\right]^2 = 1 - \mathrm{erfc}\!\left(\sqrt{\frac{E}{2N_0}}\right) + \frac{1}{4}\,\mathrm{erfc}^2\!\left(\sqrt{\frac{E}{2N_0}}\right)$$
$$P_e = 1 - P_c = \mathrm{erfc}\!\left(\sqrt{\frac{E}{2N_0}}\right) - \frac{1}{4}\,\mathrm{erfc}^2\!\left(\sqrt{\frac{E}{2N_0}}\right) \approx \mathrm{erfc}\!\left(\sqrt{\frac{E}{2N_0}}\right)$$
Figure: The time-offset waveforms that are applied to the in-phase and quadrature arms of an OQPSK modulator. Notice that a half-symbol offset is used.
Dibit-to-phase mapping: 00 → −3π/4; 10 → −π/4.
$$s(t) = s_1\phi_1(t) + s_2\phi_2(t), \quad 0 \le t \le T_b$$
$$\phi_1(t) = \sqrt{\frac{2}{T_b}}\cos\!\left(\frac{\pi}{2T_b}t\right)\cos(2\pi f_c t), \qquad \phi_2(t) = \sqrt{\frac{2}{T_b}}\sin\!\left(\frac{\pi}{2T_b}t\right)\sin(2\pi f_c t), \quad 0 \le t \le T_b$$
$$s_2 = \int_0^{2T_b} s(t)\phi_2(t)\,dt = -\sqrt{E_b}\,\sin[\theta(T_b)], \quad 0 \le t \le 2T_b$$

Transmitted bit | θ(0) | θ(Tb) | s1 | s2
0 | 0 | −π/2 | +√Eb | +√Eb
1 | π | −π/2 | −√Eb | +√Eb
0 | π | +π/2 | −√Eb | −√Eb
1 | 0 | +π/2 | +√Eb | −√Eb
Principles
Each arriving waveform may have a phase offset relative to the local carrier, but the difference between two successive waveforms is coherent in nature. The subtraction is carried out as a comparison between successive symbols.
Differential encoding example:
d(k−1):                      1 1 0 1 1 0 1 1
Differentially coded d(k):  1 1 0 1 1 0 1 1 1
Transmitted phase (rad):    0 0 π 0 0 π 0 0 0
Content
2. M-PSK
3. M-QAM
4. Synchronization
5. Revision
$$x(t) = \sqrt{\frac{2E}{T}}\cos(2\pi f_i t + \theta) + w(t), \quad 0 \le t \le T, \quad i = 1,2$$
Figure: Output of matched filter for a rectangular RF wave: (a) 𝜃 = 0° and (b) 𝜃 = 180°
$$f_{X_{I2}X_{Q2}}(x_{I2}, x_{Q2})\,dx_{I2}\,dx_{Q2} = \frac{l_2}{\pi N_0}\exp\!\left(-\frac{l_2^2}{N_0}\right)dl_2\,d\beta, \qquad f_B(\beta) = \frac{1}{2\pi}, \quad -\pi < \beta \le \pi$$
$$f_{L_2}(l_2) = \begin{cases}\dfrac{2l_2}{N_0}\exp\!\left(-\dfrac{l_2^2}{N_0}\right), & l_2 \ge 0\\[4pt] 0, & \text{elsewhere}\end{cases} \qquad l_2 = \sqrt{x_{I2}^2 + x_{Q2}^2}$$
$$\int_{-\infty}^{\infty}\exp\!\left(-\frac{2}{N_0}\Big(x_{I1} - \sqrt{E/2}\Big)^2\right)dx_{I1} = \sqrt{\frac{\pi N_0}{2}}, \qquad \int_{-\infty}^{\infty}\exp\!\left(-\frac{2x_{Q1}^2}{N_0}\right)dx_{Q1} = \sqrt{\frac{\pi N_0}{2}}$$
Finally:
$$P(\text{error}) = \iint P(\text{error}|x_{I1}, x_{Q1})\,f_{X_{I1}X_{Q1}}(x_{I1}, x_{Q1})\,dx_{I1}\,dx_{Q1}$$
$$= \frac{1}{\pi N_0}\exp\!\left(-\frac{E}{2N_0}\right)\int_{-\infty}^{\infty}\exp\!\left(-\frac{2}{N_0}\Big(x_{I1} - \sqrt{E/2}\Big)^2\right)dx_{I1} \times \int_{-\infty}^{\infty}\exp\!\left(-\frac{2x_{Q1}^2}{N_0}\right)dx_{Q1}$$
$$= \frac{1}{\pi N_0}\exp\!\left(-\frac{E}{2N_0}\right)\times\sqrt{\frac{\pi N_0}{2}}\times\sqrt{\frac{\pi N_0}{2}} = \frac{1}{2}\exp\!\left(-\frac{E}{2N_0}\right)$$
Basis functions:
$$\phi_1(t) = \sqrt{\frac{2}{T}}\cos(2\pi f_c t), \qquad \phi_2(t) = \sqrt{\frac{2}{T}}\sin(2\pi f_c t), \quad 0 \le t \le T$$
$$P_e \approx \mathrm{erfc}\!\left(\sqrt{\frac{E}{N_0}}\sin\frac{\pi}{M}\right)$$
$$B = \frac{2}{T}, \qquad B = \frac{2R_b}{\log_2 M}, \qquad \rho = \frac{R_b}{B} = \frac{\log_2 M}{2}$$
M | Relative bandwidth (1/log2 M) | Increase in average power (dB)
4 | 0.5 | 0.34
8 | 0.333 | 3.91
16 | 0.25 | 8.52
32 | 0.2 | 13.52
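The Pe approximation above shows where the growing power penalty comes from: sin(π/M) shrinks as M grows, so a larger E/N0 is needed for the same error rate. A sketch at an arbitrary 12 dB operating point:

```python
import math

def mpsk_pe(es_n0, M):
    """Symbol-error approximation Pe ~ erfc(sqrt(Es/N0) * sin(pi/M)) for M-PSK."""
    return math.erfc(math.sqrt(es_n0) * math.sin(math.pi / M))

es_n0 = 10 ** (12 / 10)          # Es/N0 = 12 dB (illustrative operating point)
for M in (4, 8, 16, 32):
    print(M, mpsk_pe(es_n0, M))  # Pe grows steadily with M at fixed Es/N0
```

Each doubling of M costs several dB of E/N0 to hold Pe constant, consistent with the trend in the table.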
Figure: (a) Signal-space diagram of M-ary QAM for M = 16; the message points in each quadrant are identified with Gray-encoded quadbits. (b) Signal-space diagram of the corresponding 4-PAM signal.
$$s_k(t) = \sqrt{\frac{2E_0}{T}}\,a_k\cos(2\pi f_c t) + \sqrt{\frac{2E_0}{T}}\,b_k\sin(2\pi f_c t), \quad 0 \le t \le T, \quad k = 0, \pm 1, \pm 2, \dots$$
with $L = \sqrt{M}$.
The waveform coordinates form the $L \times L$ array
$$\{a_i, b_i\} = \begin{bmatrix}(-L+1,\,L-1) & (-L+3,\,L-1) & \cdots & (L-1,\,L-1)\\ (-L+1,\,L-3) & (-L+3,\,L-3) & \cdots & (L-1,\,L-3)\\ \vdots & \vdots & & \vdots\\ (-L+1,\,-L+1) & (-L+3,\,-L+1) & \cdots & (L-1,\,-L+1)\end{bmatrix}$$
$$P_e = 1 - P_c = 1 - (1 - P_e')^2 \approx 2P_e', \qquad P_e \approx 2\left(1 - \frac{1}{\sqrt{M}}\right)\mathrm{erfc}\!\left(\sqrt{\frac{E_0}{N_0}}\right)$$
$$E_{av} = 2\,\frac{2E_0}{L}\sum_{i=1}^{L/2}(2i-1)^2 = \frac{2(L^2-1)E_0}{3} = \frac{2(M-1)E_0}{3}$$
$$P_e \approx 2\left(1 - \frac{1}{\sqrt{M}}\right)\mathrm{erfc}\!\left(\sqrt{\frac{3E_{av}}{2(M-1)N_0}}\right)$$
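Both forms of Eav can be cross-checked for 16-QAM (L = 4), together with the resulting Pe approximation; the 16 dB operating point is an illustrative choice:

```python
import math

M, e0 = 16, 1.0
L = int(math.sqrt(M))

# Direct sum over the L/2 positive amplitude levels (1, 3, ..., L-1):
eav_sum = 2 * (2 * e0 / L) * sum((2 * i - 1) ** 2 for i in range(1, L // 2 + 1))
# Closed form 2(M-1)E0/3:
eav_closed = 2 * (M - 1) * e0 / 3
print(eav_sum, eav_closed)          # 10.0 10.0 for 16-QAM

def qam_pe(eav_n0, M):
    """Pe ~ 2(1 - 1/sqrt(M)) erfc(sqrt(3*Eav / (2(M-1)N0))) for square M-QAM."""
    return 2 * (1 - 1 / math.sqrt(M)) * math.erfc(
        math.sqrt(3 * eav_n0 / (2 * (M - 1))))

print(qam_pe(10 ** (16 / 10), M))   # symbol error rate at Eav/N0 = 16 dB
```

The agreement of the two Eav values confirms the closed form obtained from the sum.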
$$x(t) = \sqrt{\frac{2E}{T}}\cos\big(2\pi f_c(t - \tau_c) + \alpha_k\big)\,g(t - \tau_g) + w(t) = \sqrt{\frac{2E}{T}}\cos(2\pi f_c t + \theta + \alpha_k)\,g(t - \tau_g) + w(t)$$
$$\mathbf{x}(\tau) = \begin{bmatrix}x_1(\tau)\\ x_2(\tau)\end{bmatrix}, \qquad x_i(\tau) = \int_{\tau}^{T+\tau} x(t)\phi_i(t)\,dt, \quad i = 1,2$$
$$\phi_1(t) = \sqrt{\frac{2}{T}}\cos(2\pi f_c t), \qquad \phi_2(t) = \sqrt{\frac{2}{T}}\sin(2\pi f_c t), \quad \tau \le t \le T + \tau$$
At the receiver: $\mathbf{x}_k(\tau) = \mathbf{s}(a_k, \theta, \tau) + \mathbf{w}$, with $w_i = \int_{\tau}^{T+\tau} w(t)\phi_i(t)\,dt$, $i = 1,2$.
$$f_{\mathbf{X}}(\mathbf{x}|a_k, \theta, \tau) = \frac{1}{\pi N_0}\exp\!\left(-\frac{1}{N_0}\big\|\mathbf{x}_k(\tau) - \mathbf{s}(a_k, \theta, \tau)\big\|^2\right)$$
When $a_k = 0$:
$$f_{\mathbf{X}}(\mathbf{x}|a_k = 0) = \frac{1}{\pi N_0}\exp\!\left(-\frac{1}{N_0}\|\mathbf{x}_k(\tau)\|^2\right)$$
In M-PSK, $\|\mathbf{s}(a_k, \theta, \tau)\|$ is constant.
$$L(a_k, \theta, \tau) = \frac{f_{\mathbf{X}}(\mathbf{x}|a_k, \theta, \tau)}{f_{\mathbf{X}}(\mathbf{x}|a_k = 0)} = \exp\!\left(\frac{2}{N_0}\,\mathbf{x}_k^T(\tau)\,\mathbf{s}(a_k, \theta, \tau) - \frac{1}{N_0}\big\|\mathbf{s}(a_k, \theta, \tau)\big\|^2\right)$$
$$l(\theta) = \frac{2\sqrt{E}}{N_0}\sum_{k=0}^{L_0-1}\big[x_{1,k}\cos(\alpha_k + \theta) + x_{2,k}\sin(\alpha_k + \theta)\big]$$
$$= \frac{2\sqrt{E}}{N_0}\sum_{k=0}^{L_0-1}\big[(x_{1,k}\cos\alpha_k + x_{2,k}\sin\alpha_k)\cos\theta - (x_{1,k}\sin\alpha_k - x_{2,k}\cos\alpha_k)\sin\theta\big]$$
$$e[n] = \mathrm{Im}\{a_k^* x_k\}, \qquad \theta[n+1] = \theta[n] + \gamma\,e[n]$$
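This update rule is a first-order decision-directed phase tracking loop. A noiseless sketch, with the current estimate removed before the error is formed (the unit-energy QPSK symbols, γ, and true phase are illustrative assumptions):

```python
import numpy as np

true_theta = 0.3      # unknown carrier phase offset (rad), illustrative
gamma = 0.05          # loop gain, illustrative
theta = 0.0           # initial phase estimate

# Unit-energy QPSK symbols a_k (any data pattern works in the noiseless case)
symbols = np.exp(1j * (np.pi / 4 + (np.pi / 2) * (np.arange(400) % 4)))

for a_k in symbols:
    x_k = a_k * np.exp(1j * true_theta)                 # noiseless received sample
    # Error detector: Im{a_k^* x_k e^{-j*theta}} = sin(true_theta - theta) here
    e = np.imag(np.conj(a_k) * x_k * np.exp(-1j * theta))
    theta = theta + gamma * e                           # theta[n+1] = theta[n] + gamma*e[n]

print(round(float(theta), 6))   # ~0.3: the loop locks onto the carrier phase
```

For small phase error the detector output is approximately linear in (true_theta − theta), so the loop converges geometrically at rate (1 − γ).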
Due to:
$$\frac{2}{N_0}\,\mathbf{x}_k^T(\tau)\,\mathbf{s}(a_k, \theta, \tau) = \frac{2\sqrt{E}}{N_0}\,\mathrm{Re}\big\{a_k^* x_k e^{-j\theta}\big\}$$
$$= \frac{2\sqrt{E}}{N_0}\,\mathrm{Re}\Big\{|a_k|\,|x_k(\tau)|\exp\!\big(j[\arg x_k(\tau) - \arg a_k - \theta]\big)\Big\}$$
$$= \frac{2\sqrt{E}}{N_0}\,|a_k|\,|x_k(\tau)|\cos\big(\arg x_k(\tau) - \arg a_k - \theta\big)$$
So: $\varphi = \arg x_k(\tau) - \arg a_k - \theta$
$$L_{av}(a_k, \tau) = \frac{1}{2\pi}\int_0^{2\pi}\exp\!\left(\frac{2\sqrt{E}}{N_0}|a_k||x_k(\tau)|\cos\big(\arg x_k(\tau) - \arg a_k - \theta\big)\right)d\theta$$
$$= \frac{1}{2\pi}\int_{-\arg x_k(\tau) + \arg a_k}^{2\pi - \arg x_k(\tau) + \arg a_k}\exp\!\left(\frac{2\sqrt{E}}{N_0}|a_k||x_k(\tau)|\cos\varphi\right)d\varphi$$
Content
1. Introduction
2. Repetition Code
5. Revision
Assume $p = 10^{-3}$. Consider a repetition code:
At the transmitter: 111 substitutes for 1; 000 substitutes for 0.
At the receiver: hard decision by majority vote.
$$P_{e1} = \Pr(001) + \Pr(010) + \Pr(100) + \Pr(000) = 3p^2(1-p) + p^3 = 3\cdot 10^{-6}(1 - 10^{-3}) + 10^{-9} \approx 3\times 10^{-6}$$
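The arithmetic is worth confirming:

```python
p = 1e-3   # raw bit error probability of the channel

# Majority decoding of the 3-repetition code fails when 2 or 3 of the
# transmitted copies are flipped:
pe1 = 3 * p**2 * (1 - p) + p**3
print(pe1)   # ~2.998e-06, i.e. about 3e-6
```

The decoded error rate drops from 10⁻³ to roughly 3x10⁻⁶, at the cost of tripling the transmission rate.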
Figure 1: Repetition coding packs points inefficiently in the high-dimensional signal space.
Figure: The number of noise spheres that can be packed into the y-sphere yields the maximum number of codewords that can be reliably distinguished.
Systematic codes
Note:
The all-zero sequence is a codeword in every linear block code
Any linear block code can be put in a systematic form
Figure: Encoder for a systematic block code. The input u enters the message register; modulo-2 adders form the parity bits v0, v1, v2 in the parity register.
$$\mathbf{s} = (1\,1\,0\,0\,0\,1\,0)\,\mathbf{H}^T = (1\,1\,0\,0\,0\,1\,0)\begin{bmatrix}1&0&0\\0&1&0\\0&0&1\\1&1&0\\0&1&1\\1&1&1\\1&0&1\end{bmatrix} = (0\,0\,1)$$
The syndrome is the same as the second row of $\mathbf{H}^T$, which means the error is at the second bit. $\mathbf{s}$ is called the syndrome.
Figure: Syndrome decoder. The received bits r0 … r6 are combined by modulo-2 adders to form the syndrome, which selects the error pattern e0 … e6; adding the error pattern (modulo-2) to the received word gives the corrected output.
$$p_e \le \sum_{m=t+1}^{n} P(m, n), \qquad P(m, n) = C_n^m\,p^m(1-p)^{n-m}$$
For high signal-to-noise ratio, i.e., small values of $p$:
$$p_e \approx C_n^{t+1}\,p^{t+1}(1-p)^{n-t-1}$$
If $p = 10^{-2}$ in a BSC:
$$p_e \approx C_7^2\,p^2(1-p)^5$$
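For the (7, 4) Hamming code (t = 1) at p = 10⁻², both the full bound and its leading-term approximation can be computed directly:

```python
from math import comb

n, t, p = 7, 1, 1e-2   # (7,4) Hamming code corrects t = 1 error per block

# Full bound: probability of t+1 or more channel errors in a block of n bits
pe_bound = sum(comb(n, m) * p**m * (1 - p)**(n - m) for m in range(t + 1, n + 1))
# Leading-term approximation for small p
pe_approx = comb(n, t + 1) * p**(t + 1) * (1 - p)**(n - t - 1)

print(pe_bound, pe_approx)   # both about 2e-3
```

At this p, the m = 2 term dominates the sum, which is why the single-term approximation is already within a few percent of the full bound.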
Due to $\mathbf{G}\mathbf{H}^T = \mathbf{0}$
$$c(X) = c_0 + c_1 X + \dots + c_{n-1}X^{n-1}$$
$$\Rightarrow\ \frac{X^{n-k}m(X)}{g(X)} = a(X) + \frac{b(X)}{g(X)}$$
$b(X)$ is the remainder left over after dividing $X^{n-k}m(X)$ by $g(X)$.
$$X^{n-k}m(X) = X^3 + X^6, \qquad a(X) = X + X^3, \qquad b(X) = X + X^2$$
$$c(X) = b(X) + X^{n-k}m(X) = X + X^2 + X^3 + X^6$$
Or:
$$\frac{X^3 + X^6}{1 + X + X^3} = X + X^3 + \frac{X + X^2}{1 + X + X^3}$$
The codeword is 0111001. Here 011 are the parity-check bits. The codeword is exactly the same as the Hamming codeword shown in the table.
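The whole encoding step, multiply by X^(n−k), divide by g(X), append the remainder, can be reproduced with GF(2) polynomials stored as integer bit masks (bit i holding the coefficient of X^i); a sketch:

```python
def gf2_divmod(num, den):
    """Divide GF(2) polynomials given as bit masks (bit i = coeff of X^i)."""
    q = 0
    while num.bit_length() >= den.bit_length():
        shift = num.bit_length() - den.bit_length()
        q ^= 1 << shift            # record quotient term X^shift
        num ^= den << shift        # subtract (XOR) the shifted divisor
    return q, num                  # quotient a(X), remainder b(X)

m = 0b1001          # m(X) = 1 + X^3  (message 1001)
g = 0b1011          # g(X) = 1 + X + X^3
n_k = 3             # n - k = 3 parity bits

shifted = m << n_k                 # X^(n-k) m(X) = X^3 + X^6
a, b = gf2_divmod(shifted, g)
c = shifted ^ b                    # c(X) = b(X) + X^(n-k) m(X)

print(format(b, '03b'))  # 110: b(X) = X + X^2 (printed high bit first)
print(format(c, '07b'))  # 1001110: c(X) = X + X^2 + X^3 + X^6
```

Reading the codeword coefficients from c0 up to c6 gives 0111001, matching the example above.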
Figure: Cyclic-code encoder built from flip-flops and a modulo-2 adder; the message bits are shifted in, and the register contents form the parity bits of the codeword. Register contents by shift: shift 1, input bit 1, register 110; shift 2, input bit 0, register 011.
Received word:
$$r(X) = r_0 + r_1 X + \dots + r_{n-1}X^{n-1}$$
Figure: Syndrome calculator; the received bits are shifted into a feedback register built from flip-flops and a modulo-2 adder.
1. Description
2. Expressed by polynomials
5. Viterbi algorithm
$$g^{(i)}(D) = g_0^{(i)} + g_1^{(i)}D + g_2^{(i)}D^2 + \dots + g_M^{(i)}D^M$$
with generator polynomials $g^{(1)}(D), g^{(2)}(D), \dots, g^{(n)}(D)$.
$$c^{(1)}(D) = g^{(1)}(D)\,m(D) = (1 + D + D^2)(1 + D^3 + D^4) = 1 + D + D^2 + D^3 + D^6$$
$$c^{(2)}(D) = g^{(2)}(D)\,m(D) = (1 + D^2)(1 + D^3 + D^4) = 1 + D^2 + D^3 + D^4 + D^5 + D^6$$
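Both products can be verified with a small GF(2) polynomial multiplier over bit masks (bit i = coefficient of D^i):

```python
def gf2_mul(a, b):
    """Multiply two GF(2) polynomials stored as bit masks (bit i = coeff of D^i)."""
    out = 0
    i = 0
    while (b >> i) != 0:
        if (b >> i) & 1:        # term D^i present in b
            out ^= a << i       # add (XOR) a(D) * D^i
        i += 1
    return out

g1 = 0b111      # g1(D) = 1 + D + D^2
g2 = 0b101      # g2(D) = 1 + D^2
m = 0b11001     # m(D) = 1 + D^3 + D^4

print(format(gf2_mul(g1, m), 'b'))  # 1001111: 1 + D + D^2 + D^3 + D^6
print(format(gf2_mul(g2, m), 'b'))  # 1111101: 1 + D^2 + D^3 + D^4 + D^5 + D^6
```

Like terms cancel in pairs under XOR, which is exactly the modulo-2 coefficient arithmetic used above.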
$$p(r_i|c_i) = \begin{cases}p, & \text{if } r_i \ne c_i\\ 1 - p, & \text{if } r_i = c_i\end{cases}$$
Assume $r$ differs from $c$ at $d$ positions. Then
$$\log p(r|c) = d\log p + (N - d)\log(1 - p) = d\log\frac{p}{1-p} + N\log(1-p)$$
The second term is constant, so we choose the $c$ closest to $r$ in Hamming distance. ML decoding is therefore equivalent to minimum-distance decoding, which is the basis of the Viterbi algorithm.
At $j = 2$ there are 4 branches.
$$b = D^2 L\,a_0 + L\,c, \qquad c = DL\,b + DL\,d, \qquad d = DL\,b + DL\,d, \qquad a_1 = D^2 L\,c$$