
ICAI – Máster MIT

Sistemas de comunicación I

Chapter 5:
Channel coding

Javier Matanza
W. Warzanskyj
Luis Cucala
Introduction
Coding and error control
– From the Shannon theorem point of view, to code is to assign an input message
(a set of information bits) to an output message called a codeword. If the
codewords are different enough, the information bits can be recovered on
reception from the noise-corrupted codewords
– The codeword is longer than the set of information bits, which means it
includes redundancy that helps reduce the number of errors
Information bits (message i)                Codeword (message i)
d_i = [d_i0, d_i1, …, d_i(k−1)]  →  Coder  →  c_i = [c_i0, c_i1, …, c_i(n−1)]
(k bits)                         k < n        (n bits)

A message d_i of k bits is coded into a codeword c_i of n bits, k < n.


The code rate is R_c = k/n. Code naming: C(n, k)
Coding and error control: an example and
some definitions
An example: even-parity code C(3,2), code rate R_c = k/n = 2/3
• A redundancy bit (parity bit), equal to the modulo-2 sum of the message's two bits, is added at the end of the message:
c_i0 = d_i0
c_i1 = d_i1
c_i2 = d_i0 ⊕ d_i1
d_0 = [0 1] → c_0 = [0 1 1]
d_1 = [1 1] → c_1 = [1 1 0]
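As a minimal sketch, the C(3,2) encoder in Python (the function name parity_encode is ours, not from any standard library):

```python
def parity_encode(d):
    # C(3,2) even-parity encoder: copy the two message bits
    # and append their modulo-2 sum (XOR) as the parity bit
    return [d[0], d[1], d[0] ^ d[1]]

print(parity_encode([0, 1]))  # -> [0, 1, 1]
print(parity_encode([1, 1]))  # -> [1, 1, 0]
```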

– Definition of linear code: the sum* of two codewords is also a codeword
10110 ⊕ 11000 = 01110 (all three are codewords)
*sum ≜ modulo-2 addition or XOR; we are assuming binary codes, so there are only two possible values

– Definition of systematic code: the information message is included, unmodified, in the codeword

Block coding
(Reed–Solomon codes, Hamming codes, Hadamard
codes, Expander codes, Golay codes, Reed–Muller
codes…)
Block coding
– Coding is performed on blocks of bits in a memoryless way, that is, the coding
of one block is independent of the coding of previous blocks
[Figure: the 2^k possible messages d_0 … d_(2^k−1), of k bits each, form the k-dimension vector space K; an injective linear application g (a one-to-one correspondence) maps them onto 2^k codewords c_0 … c_(2^k−1), of n bits each, inside the n-dimension vector space N of all 2^n n-bit words; the remaining 2^n − 2^k words are not used]

There are 2^n possible words in a binary block code of length n. From these 2^n
words, we can select a subset 𝒞 of M = 2^k codewords (this is the codebook).
Thus, a block of k information bits is mapped into a codeword of length n
selected from the subset of M codewords
Linear block code
– A code is a linear block code when it is both a block code and linear, and it is
completely defined by a Generator Matrix, 𝐺(𝑘, 𝑛)

G ≜ [ g_11  g_12  …  g_1n
      g_21  g_22  …  g_2n
       ⋮     ⋮    ⋱   ⋮
      g_k1  g_k2  …  g_kn ]

note: this is the matrix of the linear application g: K → N (k input bits, n output bits)

– For a given message d, the codeword c is calculated as:

c = d · G
(1 × n) = (1 × k) · (k × n)

– An example based on the previous even-parity code:
c_i0 = d_i0
c_i1 = d_i1
c_i2 = d_i0 ⊕ d_i1   (redundancy or parity bit; d_i0, d_i1 are the original message)

d_1 = [0 1],   G = [ 1 0 1
                     0 1 1 ]   ⟹   c_1 = d_1 · G = [0 1 1]
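A minimal numpy sketch of the same product, c = d·G over GF(2) (the helper name encode is ours):

```python
import numpy as np

G = np.array([[1, 0, 1],
              [0, 1, 1]])          # generator matrix of the C(3,2) even-parity code

def encode(d, G):
    # codeword = message times generator matrix, with all sums taken modulo 2
    return (np.array(d) @ G) % 2

print(encode([0, 1], G))  # -> [0 1 1]
```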

Systematic linear block codes
– A code is systematic when the generator matrix has the following form:

G = [ I_k | P ],   e.g.   G = [ 1 0 | 1
                                0 1 | 1 ]

where I_k is the identity matrix (k × k) and P is the parity matrix (k × (n−k))

– Every linear block code has a systematic form. It can be obtained by
substituting rows with linear combinations of other rows until a systematic
form is reached (the matrices are row equivalent)
– We use a systematic form because codeword computation is easier:

c = d · G = d · [I_k | P] = [d | r],   where r = d · P
It is easier to compute d · P than d · G

Systematic example: c_1 = [0 1] · G = [0 1 | 1] = [data | parity]; in a non-systematic code the data bits do not appear explicitly in the codeword
Parity check matrix and Syndrome
– The parity check matrix H is defined from a systematic generator matrix:

G = [ I_k | P ]   →   H^T = [ P ; I_(n−k) ]
(P, of size k × (n−k), stacked on top of I_(n−k); H^T is n × (n−k))

– The parity check matrix is used to determine whether a received word y is a valid
codeword or not. For this purpose, the syndrome s is defined:

syndrome of y ≜ s = y · H^T
(1 × (n−k)) = (1 × n) · (n × (n−k))

• If the received word is a codeword, its syndrome is an (n − k)-zeros vector:

if y = c ∈ 𝒞:   s = c · H^T = d · [I_k | P] · [ P ; I_(n−k) ] = d · (P ⊕ P) = d · 0 = [0]
(modulo-2 addition; we are assuming binary codes)

• If the received word is not a codeword, then y = c + e, where e is the error:

s = (c + e) · H^T = c · H^T + e · H^T = 0 + e · H^T = e · H^T

The syndrome of a codeword with errors is the syndrome of the error
Example: parity check and syndrome
C(n, k) = C(6, 4), R_c = 4/6. Generator matrix (4 input bits = rows, 6 output bits = columns):

G = [ 1 0 0 0 | 1 1
      0 1 0 0 | 1 0
      0 0 1 0 | 0 1
      0 0 0 1 | 1 0 ]  =  [ I_k | P ],  with I_k of size k × k and P of size k × (n−k)

Parity check matrix transpose, of size n × (n−k):

H^T = [ P ; I_(n−k) ] = [ 1 1
                          1 0
                          0 1
                          1 0
                          1 0
                          0 1 ]

Let's check the syndrome s = y · H^T for two received words:

y_1 = [1 0 0 0 1 1]:   s = [1 0 0 0 1 1] · H^T = [0 0]   →   y_1 ∈ 𝒞, it is a codeword
y_2 = [1 1 0 0 1 1]:   s = [1 1 0 0 1 1] · H^T = [1 0]   →   y_2 ∉ 𝒞, not a codeword

detail (first syndrome bit): for y_1, 1·1 ⊕ 1·1 = 0; for y_2, 1·1 ⊕ 1·1 ⊕ 1·1 = 1
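A small numpy sketch reproducing this C(6,4) check (matrices taken from this slide; function name ours):

```python
import numpy as np

G = np.array([[1, 0, 0, 0, 1, 1],
              [0, 1, 0, 0, 1, 0],
              [0, 0, 1, 0, 0, 1],
              [0, 0, 0, 1, 1, 0]])        # G = [I_k | P]
P = G[:, 4:]
Ht = np.vstack([P, np.eye(2, dtype=int)])  # H^T = [P ; I_(n-k)], size n x (n-k)

def syndrome(y, Ht):
    # s = y * H^T over GF(2); an all-zeros syndrome means y is a valid codeword
    return (np.array(y) @ Ht) % 2

print(syndrome([1, 0, 0, 0, 1, 1], Ht))  # -> [0 0], codeword
print(syndrome([1, 1, 0, 0, 1, 1], Ht))  # -> [1 0], not a codeword
```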

Distance between codewords

Hamming distance
– Hamming weight w(c_i) of a codeword: number of 1's in the codeword c_i
– Hamming distance between two codewords, d(c_i, c_j): number of elements at
which the two codewords differ:

d(c_i, c_j) = Σ_(m=1..n) (c_im ⊕ c_jm) = w(c_i ⊕ c_j)
(⊕ ≜ modulo-2 addition or XOR)
– A codeword’s Hamming weight is equal to its distance to the zero codeword 𝟎

– Minimum distance of a codebook, d_min: minimum of the set of Hamming
distances between any two codewords in the codebook:

d_min = min d(c_i, c_j)   ∀ c_i, c_j ∈ 𝒞, i ≠ j

[Figure: the 2^k k-bit messages map onto n-bit codewords; d_min is the minimum distance between codewords, with non-codeword n-bit words lying in between]
Minimum distance d_min properties
- The minimum distance is the weight of the nonzero codeword whose weight is
minimum: d_min = w(c_m)

d_min = min w(c_i ⊕ c_j) ∀ c_i, c_j ∈ 𝒞, i ≠ j; since the code is linear, c_i ⊕ c_j is itself a codeword, so d_min = w(c_m ⊕ 0) = w(c_m)

where c_m is the minimum-weight nonzero codeword

- d_min ≜ minimum number of rows in H^T that are linearly dependent

Let w(c_m) be minimal, with c_m · H^T = 0 (c_m is a codeword) → the sum of the rows of H^T whose indices are those of the nonzero elements of c_m is 0 → the number of those rows is d_min

- 1 ≤ d_min ≤ n − k + 1 (the Singleton bound)
• 1 ≤ d_min by definition: there must be at least two different codewords
• The last n − k rows of H^T form I_(n−k), an identity submatrix whose rows are a
basis: any other row of H^T is a linear combination of these n − k rows. Hence any
given row together with the rows of I_(n−k) is linearly dependent, which means
that d_min cannot be greater than n − k + 1
Practical computation of d_min
– d_min ≜ minimum number of rows in H^T that are linearly dependent
– Example, d_min for this parity check matrix:

H^T = [ 1 1
        0 1
        1 0
        1 0
        0 1 ]   →   d_min = 2

if a minimum-weight codeword is c_m = [0 0 1 1 0], then [0 0 1 1 0] · H^T = [0 0]
(rows 3 and 4 are equal, so they sum to zero)

- Shortcuts:
• If there are two equal rows, d_min = 2
• If one row is a zero vector, d_min = 1
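Since for a linear code d_min equals the minimum weight of a nonzero codeword, it can also be found by brute force for small codes. A sketch, reused on the C(6,4) code from the syndrome example:

```python
import itertools
import numpy as np

def d_min(G):
    # minimum distance of a linear code = minimum Hamming weight
    # over all nonzero codewords c = d*G (mod 2)
    k = G.shape[0]
    weights = [int(((np.array(d) @ G) % 2).sum())
               for d in itertools.product([0, 1], repeat=k) if any(d)]
    return min(weights)

G = np.array([[1, 0, 0, 0, 1, 1],
              [0, 1, 0, 0, 1, 0],
              [0, 0, 1, 0, 0, 1],
              [0, 0, 0, 1, 1, 0]])
print(d_min(G))  # -> 2 (its H^T has two equal rows)
```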
d_min in error correction
– From d_min, the maximum number of correctable erroneous bits, e_c, can be
calculated

[Figure: each codeword c_i is surrounded by a sphere of radius e_c; any non-codeword within a sphere is decoded into that sphere's codeword; e_c is proportional to the largest sphere radius that does not contact any other sphere; this is only a 2-dimension representation of the n-dimension vector space; d_min is the minimum distance between two codewords]

– There are two cases to evaluate:
1. If d_min is odd   →   2 · e_c + 1 = d_min
2. If d_min is even  →   2 · e_c + 2 = d_min

– It is always possible to correct t = ⌊(d_min − 1)/2⌋ erroneous bits in a codeword
(⌊·⌋ rounds down to the nearest integer)
– Conclusion: any coding scheme tries to maximize d_min
d_min in error detection
– Detection is possible if the error does not turn a transmitted codeword into
a different valid codeword
– Detection is always possible for e = d_min − 1 or fewer erroneous bits in the
same codeword

[Figure: two codewords c_i and c_j at distance d_min; up to e_max = d_min − 1 errors cannot reach another valid codeword]

– The number of errors that can be corrected is lower than the number of
errors that can be detected
– Error detection is used by Automatic Repeat-reQuest (ARQ) procedures for
retransmission of erroneous codewords (when errors are detected but not corrected)

Burst errors and interleaving

Burst errors and interleaving
– Most error-correcting codes are designed to correct random, isolated
errors, as is the case in AWGN channels
– However, depending on the transmission medium, the errors may occur in
bursts (e.g., fading channels, impulsive noise sources)

[Figure: the same bit stream, 1 0 0 1 0 0 1 0 1, shown with isolated errors scattered across it vs. an error burst hitting consecutive bits]

– The usual way of dealing with error bursts is the use of an interleaver block.
Nonetheless, there are some specific codes for dealing with burst errors
Interleaving for burst errors
- An interleaver shuffles the positions of the coded bits among several codewords
- Objective: convert the burst pattern into a more scattered pattern
- The receiver undoes the interleaving before decoding

Coded stream (codewords 1, 2, 3):    a1 b1 c1 d1 | a2 b2 c2 d2 | a3 b3 c3 d3
Interleaved stream (transmitted):    a1 a2 a3 b1 b2 b3 c1 c2 c3 d1 d2 d3
Received interleaved stream:         a burst of errors hits consecutive bits, e.g. b1 b2 b3
De-interleaved stream (decoder input): a1 b1 c1 d1 a2 b2 c2 d2 a3 b3 c3 d3, where the
burst now appears as isolated errors, one per codeword
Interleaving for burst errors
– Transmission diagram when using "interleaving + coding":
Encoder → Interleaver → Modulator → Channel → Demodulator → De-interleaver → Decoder
– The interleaver can be implemented using an m × n matrix. Bit write is performed
column-wise and bit read is performed row-wise; de-interleaving does the opposite.
The encoded input bits fill the matrix by columns, and the modulator reads it by rows:

[ 1    m+1   2m+1  …  m·n−m+1
  2    m+2   2m+2  …  m·n−m+2
  ⋮     ⋮     ⋮          ⋮
  m    2m    3m    …  m·n     ]   (m rows, n columns)
Interleaving example
bits_in = [b1 b2 b3 b4 b5 b6 b7 b8 b9 b10 b11 b12],   m = 4, n = 3
– 1st step: write bits_in in an m × n matrix form, column by column:
[ b1 b5 b9
  b2 b6 b10
  b3 b7 b11
  b4 b8 b12 ]
– 2nd step: transpose the matrix:
[ b1 b2 b3 b4
  b5 b6 b7 b8
  b9 b10 b11 b12 ]
– 3rd step: obtain bits_out reading as it was written, column by column:
bits_out = [b1 b5 b9 b2 b6 b10 b3 b7 b11 b4 b8 b12]
– The 2nd and 3rd steps are equivalent to reading the first matrix by rows
(the previous procedure is Matlab-optimized)
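A numpy sketch of this write-by-columns / read-by-rows interleaver and its inverse (function names ours):

```python
import numpy as np

def interleave(bits, m, n):
    # write the m*n bits column-wise into an m x n matrix, then read row-wise
    M = np.array(bits).reshape(n, m).T   # columns of M are consecutive m-bit chunks
    return M.flatten().tolist()

def deinterleave(bits, m, n):
    # inverse operation: write row-wise, read column-wise
    M = np.array(bits).reshape(m, n)
    return M.T.flatten().tolist()

bits_in = list(range(1, 13))             # stands for b1 ... b12
out = interleave(bits_in, m=4, n=3)      # [1, 5, 9, 2, 6, 10, 3, 7, 11, 4, 8, 12]
assert deinterleave(out, m=4, n=3) == bits_in
```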
Interleaving analysis
Assume m = 15, n = 8 and a C(15,11) linear block code with d_min = 3
- What is the size of the interleaver matrix? Represent it
- How many errors per codeword can be corrected?
- What is the effect of an 8-error burst, with and without interleaving?

Convolutional codes

Introduction to Convolutional Codes
- Convolutional codes are linear, i.e., the sum of two codewords is another
codeword, but they are not block codes: they have memory
- In its simplest form, for every input bit, n output bits k_1, k_2, … k_n are
generated, using K shift registers and n modulo-2 adders

[Figure: the input bit feeds a chain of K shift registers (z^(−1) delay elements) storing values from previous inputs; n modulo-2 adders tap the input and the register outputs to produce the n output bits k_1 … k_n]

- L: constraint length ("memory"). Equal to K if the input is not fed to any
adder, equal to K + 1 if it is, as in the previous figure
Representations for Convolutional Codes
– A convolutional code given by C(n, k, L) can be represented with the
following methods:
• Digital model
• State-transition diagram
• Trellis diagram
– The three representations will be explained with a code example, C(2,1,3), where:
n: output bits per input message (equal to the number of adders)
k: number of bits in the input message (input bit granularity)
L: constraint length ("memory", K or K + 1)
Digital model representation
– The digital model represents the encoder as a convolution process
C(2,1,3), generator polynomials g_1 = (1,0,1) and g_2 = (0,1,1):

[Figure: the input bit k[n] enters a shift register storing the previous inputs k[n−1] and k[n−2]; the taps selected by g_1 (k[n] and k[n−2]) feed one modulo-2 adder, giving k_1[n], and the taps selected by g_2 (k[n−1] and k[n−2]) feed the other, giving k_2[n]]

- The state of the encoder is the set of values stored in the registers
• Initial values for the registers must be pre-defined (usually set to 0)
• Every new input might change the state of the encoder
- g_1, g_2: generator polynomials (all additions are modulo 2):

k_1[n] = g_1[0]·k[n] + g_1[1]·k[n−1] + g_1[2]·k[n−2] = (g_1 ∗ k)[n] = k[n] ⊕ k[n−2]
k_2[n] = g_2[0]·k[n] + g_2[1]·k[n−1] + g_2[2]·k[n−2] = (g_2 ∗ k)[n] = k[n−1] ⊕ k[n−2]
Digital model representation example
– Input k[n] = [1 1 0 1 0 1 1], calculate the output

k_1[n] = k[n] ⊕ k[n−2] = [(1⊕0) (1⊕0) (0⊕1) (1⊕1) (0⊕0) (1⊕1) (1⊕0)] = [1 1 1 0 0 0 1]
k_2[n] = k[n−1] ⊕ k[n−2] = [(0⊕0) (1⊕0) (1⊕1) (0⊕1) (1⊕0) (0⊕1) (1⊕0)] = [0 1 0 1 1 1 1]
output[n] = [10, 11, 10, 01, 01, 01, 11]

calculation detail at n = 3 (input k[3] = 0; the registers hold k[2] = 1 and k[1] = 1):
k_1[3] = k[3] ⊕ k[3−2] = 0 ⊕ 1 = 1
k_2[3] = k[3−1] ⊕ k[3−2] = 1 ⊕ 1 = 0
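A minimal Python sketch of this C(2,1,3) encoder, with g_1 = (1,0,1) and g_2 = (0,1,1) (function name ours):

```python
def conv_encode(bits):
    # C(2,1,3) encoder, g1 = (1,0,1), g2 = (0,1,1)
    s1 = s2 = 0                      # shift register: k[n-1], k[n-2], reset to 0
    out = []
    for b in bits:
        out += [b ^ s2, s1 ^ s2]     # k1[n] = k[n] xor k[n-2]; k2[n] = k[n-1] xor k[n-2]
        s1, s2 = b, s1               # shift the register
    return out

print(conv_encode([1, 1, 0, 1, 0, 1, 1]))
# -> [1,0, 1,1, 1,0, 0,1, 0,1, 0,1, 1,1], i.e. 10 11 10 01 01 01 11
```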
State transition diagram representation
– It is a finite-state machine representation
- Transitions between states are represented by a line
- Each transition is induced by a new input
- Each output depends on the current state and the current input (if it is a
Mealy machine), or only on the current state (Moore machine)
- The machine states are the register contents (k[n−1], k[n−2]); each transition
is labeled input / output (input length 1, output length 2):

state 00:   0 / 00 → 00      1 / 10 → 10
state 01:   0 / 11 → 00      1 / 01 → 10
state 10:   0 / 01 → 01      1 / 11 → 11
state 11:   0 / 10 → 01      1 / 00 → 11

The machine's number of states is 2^K = 4 (K = 2 registers)
Trellis diagram representation
– It is the same idea as the State Transition Diagram, but using a time axis
• States are on the vertical axis and time on the horizontal one
• Transitions between states are represented by a line; each line color
represents a different input: 0's are in one color and 1's in another
• Output labels are attached to each line

[Trellis diagram: the states 00, 10, 01, 11 repeated along the time axis, with the valid transitions and their output labels drawn between consecutive time steps]
Trellis diagram example
Calculate the output for this input: k = 1 1 0 1 0 1 1

[Trellis diagram: the path followed through the states for this input, with the output labels of the traversed branches]

input bits: 1 1 0 1 0 1 1
output sequence: 10 11 10 01 01 01 11
Decoding convolutional codes: Viterbi algorithm
- Convolutional codes are designed for correcting errors. Decoding is performed
with the Viterbi algorithm
- From the initial decoder state, [0], all possible paths are considered. When
two paths collide, the most probable one, i.e., the one that implies fewer
errors, is selected (maximum likelihood approach)
- Decoding steps, shown graphically with an example in the following slides:
1. Draw a trellis grid with all existing states and all valid transitions from
state i to state (i + 1)
2. At each transition, defined by a received group of n bits, label each
transition path with the number of errors it implies when compared to the
received group, and label the states at (i + 1) with their cumulative
number of errors
3. At all states where more than one path arrives, keep only the path with
the lowest cumulative error
4. Choose the path with the smallest cumulative error
Viterbi algorithm
Example: initial state
coded sequence: c = 10 11 10 01 01 01 11
received sequence with two errors at Rx: c_r = 11 11 10 01 01 00 11

[Trellis diagram: the received pairs 11 11 10 01 01 00 11 written above the grid; for every state, the possible outputs and the possible transitions between states are drawn]

Branch metric example for the first received pair, 11: with respect to a branch
labeled 10 there is one error; with respect to a branch labeled 00 there are two errors
Viterbi algorithm
Example: first pair of received bits, 11

[Trellis diagram: branch metrics for the first transition; cumulative errors — state 00: 2, state 10: 1]
Viterbi algorithm
Example: second pair of received bits, 11

[Trellis diagram: branch metrics for the second transition; cumulative errors — 00: 4, 10: 3, 01: 2, 11: 1]
Viterbi algorithm
Example: third pair of received bits, 10
(where more than one path arrives at a state, keep only the path with the lower cumulative error)

[Trellis diagram: surviving cumulative errors — 00: 3, 10: 4, 01: 1, 11: 2]
Viterbi algorithm
Example: 4th pair of received bits, 01

[Trellis diagram: surviving cumulative errors — 00: 2, 10: 1, 01: 4, 11: 3]
Viterbi algorithm
Example: 5th pair of received bits, 01

[Trellis diagram: surviving cumulative errors — 00: 3, 10: 4, 01: 1, 11: 2]
Viterbi algorithm
Example: 6th pair of received bits, 00

[Trellis diagram: surviving cumulative errors — 00: 3, 10: 2, 01: 3, 11: 2]
Viterbi algorithm
Example: final pair of received bits, 11

[Trellis diagram: surviving cumulative errors — 00: 3, 10: 4, 01: 3, 11: 2]
Viterbi algorithm
Example: decoded codeword
Choose the path with the smallest cumulative error: it ends in state 11 with
cumulative error 2

[Trellis diagram: the winning path highlighted; the input associated with each of its branches (grey = 1, blue = 0) gives the decoded bits]

Grey → 1, Grey → 1, Blue → 0, Grey → 1, Blue → 0, Grey → 1, Grey → 1
Viterbi algorithm example
The coder generated the coded sequence c based on message k. However, due to
errors on reception, the coded sequence c_r is received:

k = 1101011
c = 10 11 10 01 01 01 11
c_r = 11 11 10 01 01 00 11

[Trellis diagram: complete decoding of c_r; choose the path with the smallest cumulative error]

decoded = 1101011 → no errors!
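A compact hard-decision Viterbi sketch for this C(2,1,3) code, following the four steps above (state = (k[n−1], k[n−2]); names ours):

```python
def viterbi_decode(coded):
    # hard-decision Viterbi for C(2,1,3), g1 = (1,0,1), g2 = (0,1,1)
    pairs = [tuple(coded[i:i + 2]) for i in range(0, len(coded), 2)]
    INF = float('inf')
    states = [(0, 0), (0, 1), (1, 0), (1, 1)]          # (k[n-1], k[n-2])
    metric = {s: (0 if s == (0, 0) else INF) for s in states}
    path = {s: [] for s in states}                     # surviving input bits per state
    for r in pairs:
        new_metric = {s: INF for s in states}
        new_path = {s: [] for s in states}
        for (s1, s2), m in metric.items():
            if m == INF:
                continue
            for b in (0, 1):                           # hypothesised input bit
                out = (b ^ s2, s1 ^ s2)                # branch output
                nxt = (b, s1)                          # next state
                cost = m + (out[0] ^ r[0]) + (out[1] ^ r[1])  # Hamming branch metric
                if cost < new_metric[nxt]:             # keep the lowest cumulative error
                    new_metric[nxt] = cost
                    new_path[nxt] = path[(s1, s2)] + [b]
        metric, path = new_metric, new_path
    best = min(metric, key=metric.get)                 # smallest cumulative error
    return path[best], metric[best]

c_r = [1,1, 1,1, 1,0, 0,1, 0,1, 0,0, 1,1]
print(viterbi_decode(c_r))  # -> ([1, 1, 0, 1, 0, 1, 1], 2): two errors corrected
```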

High performance codes

Better codes based on simpler ones
- Simple codes can be combined in order to create a more powerful (robust) code
- Simple codes can be concatenated, in series or in parallel, with their outputs
combined
- In concatenated codes, typically an inner code corrects some isolated errors
(e.g., a block or convolutional code) and an outer code is robust against long
sequences of errors (e.g., Reed–Solomon)
• The coding rate of the combination is R_cc = R_outer · R_inner
• d_min is also multiplied: the correcting properties are enhanced
• The design of the interleaver also plays an important role

Transmit chain: Info source → Outer coder → Interleaver → Inner coder → Mod → MUX + DAC → channel (+ n(t)) → ADC + deMUX → Demod → Inner decoder → De-interleaver → Outer decoder → Info sink
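For instance, with the rates of the example on the next slide — an inner convolutional code with R_inner = 1/2 and an outer LDPC code with R_outer = 0.8 — the combined rate is R_cc = 0.8 · 0.5 = 0.4.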
Performance of concatenated codes (example)

[Plot: BER (from 10^0 down to 10^−6, log scale) vs. SNR in dB (−20 to 50); the horizontal gap between the uncoded and the coded curve is the coding gain]

Inner = convolutional (R = 1/2)
Interleaver = 64 OFDM symbols (PRIME)
Outer = LDPC (R = 0.8)
Codes and the Shannon capacity limit
- The maximum capacity (bit rate) that can be sent without errors through a
noisy channel is bounded by the Shannon limit:

C (bit/s) = B (Hz) · log2(1 + S/N)

e.g., S/N = 1 → C = B
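A tiny Python sketch of this formula (the function name shannon_capacity is ours):

```python
import math

def shannon_capacity(bandwidth_hz, snr_linear):
    # C = B * log2(1 + S/N), with S/N given as a linear ratio (not dB)
    return bandwidth_hz * math.log2(1 + snr_linear)

print(shannon_capacity(1e6, 1))      # S/N = 1 (0 dB):  C = B = 1.0 Mbit/s
print(shannon_capacity(1e6, 10**3))  # S/N = 30 dB: C ~ 9.97 Mbit/s
```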

- The Shannon theorem specifies that for error-free transmission within its
bound it is necessary to use codes, but it does not indicate which codes
- They must be long, with the distance between codewords proportional to the S/N
- Some codes are better than others
- With the best available convolutional codes, it is possible to get within 3 dB
of the Shannon limit
- Better and more complex codes are necessary to get closer to the Shannon limit
- With the so-called Turbo Codes and LDPC (Low-Density Parity-Check) codes it
is possible to get almost within 0.5 dB of the Shannon limit (in practice within
0.8 dB)
Turbo coding
– Turbo coding uses Parallel Concatenated Convolutional Coding (PCCC)
• Decoding is beyond the scope of this course

Encoder: two coded bits per input bit. The input bits feed RSCC #1 directly,
and RSCC #2 through an interleaver; the two coded streams are combined, with
optional puncturing

RSCC ≜ Recursive Systematic Convolutional Code

[Figure: example RSCC — a shift register with feedback through modulo-2 adders, producing the systematic output c_1 and the parity output c_2]
Low Density Parity Check Codes (LDPC)
- Today they can be considered the best codes: they can get close to the
Shannon limit, like some turbo codes, but unlike turbo codes they are easy to
implement
- They are characterized by a parity-check matrix H = [P^T | I] that is sparse, i.e.,
the number of ones is much lower than the number of zeros
- In a regular LDPC code, all rows have the same number of ones, k, and all columns
have the same number of ones, j. The code rate is then R = 1 − j/k (when the rows
of H are linearly independent). At least j − 1 rows must be linearly independent
- It is decoded using specialized algorithms, beyond the scope of this course

Appendix: soft decoding


Soft decoding
– Soft decoding: decoding where the input is not what comes from the
detector, but what comes from the demodulator
• The decoding input is not a sequence of 1's and 0's, but the voltages
(converted into numbers) that come from the demodulator
• Distances are not integer numbers, but finite-precision real numbers
– For example, with respect to a parity check example:
• If 101 is sent (1 V, 0 V, 1 V) but 0.9 V, 0.6 V, 0.8 V is received, the detector would
provide 111 (assuming a 0.5 V threshold). The decoder would detect the error,
but cannot determine which bit is erroneous
• Given that the valid codewords are 000, 011, 101, 110, soft decoding calculates the
distance to these codewords, and the shortest distance is to codeword 101, the right one:
d = √((1 − 0.9)² + (0 − 0.6)² + (1 − 0.8)²) = √0.41 ≈ 0.64
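A numpy sketch of this soft minimum-distance decision (names ours):

```python
import numpy as np

codewords = np.array([[0, 0, 0], [0, 1, 1], [1, 0, 1], [1, 1, 0]])  # even-parity C(3,2)
r = np.array([0.9, 0.6, 0.8])      # voltages from the demodulator (1 V = logical 1)

d = np.sqrt(((codewords - r) ** 2).sum(axis=1))  # Euclidean distance to each codeword
print(d.round(2))                  # -> [1.35 1.   0.64 0.9 ]
print(codewords[np.argmin(d)])     # -> [1 0 1], the transmitted codeword
```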
