Convolutional codes
• Convolutional codes offer an approach to error control
coding substantially different from that of block codes.
– A convolutional encoder:
• encodes the entire data stream into a single codeword,
• does not need to segment the data stream into blocks of
fixed size,
• is a machine with memory.
• This fundamental difference in approach imparts a
different nature to the design and evaluation of the
code.
– Block codes are based on algebraic/combinatorial
techniques.
– Convolutional codes are based on construction
techniques.
Convolutional codes
[Figure: a rate ½ convolutional encoder, with register contents and
output pairs (u1, u2) shown at times t3 and t4.]
For a finite message of L bits, the encoder appends K − 1 tail bits,
so the effective code rate is
    R_eff = L / (n(L + K − 1)) < R_c
Encoder representation
Vector representation:
    g1 = (111)
    g2 = (101)
[Figure: the encoder drawn with message m feeding the shift register;
the top modulo-2 adder produces u1, the bottom adder produces u2.]
Encoder representation
Message sequence: m = (101)
Polynomial representation:
    m(X) = 1 + X^2
    g1(X) = 1 + X + X^2
    g2(X) = 1 + X^2
In more detail:
    m(X)g1(X) = (1 + X^2)(1 + X + X^2) = 1 + X + X^3 + X^4
    m(X)g2(X) = (1 + X^2)(1 + X^2) = 1 + X^4
    m(X)g1(X) = 1 + X + 0·X^2 + X^3 + X^4
    m(X)g2(X) = 1 + 0·X + 0·X^2 + 0·X^3 + X^4
    U(X) = (1,1) + (1,0)X + (0,0)X^2 + (1,0)X^3 + (1,1)X^4
    U = 11 10 00 10 11
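The polynomial encoding above can be sketched as GF(2) polynomial multiplication followed by interleaving. This is a minimal illustration; the function names (`gf2_conv`, `encode`) are mine, not from the slides.

```python
def gf2_conv(a, b):
    """Multiply two GF(2) polynomials given as coefficient lists (lowest degree first)."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] ^= ai & bj   # addition over GF(2) is XOR
    return out

def encode(m, g1, g2):
    """Interleave m(X)g1(X) and m(X)g2(X) into the codeword U."""
    c1, c2 = gf2_conv(m, g1), gf2_conv(m, g2)
    return [(a, b) for a, b in zip(c1, c2)]

m  = [1, 0, 1]   # m(X)  = 1 + X^2
g1 = [1, 1, 1]   # g1(X) = 1 + X + X^2
g2 = [1, 0, 1]   # g2(X) = 1 + X^2
U = encode(m, g1, g2)
print(U)  # [(1, 1), (1, 0), (0, 0), (1, 0), (1, 1)]  ->  U = 11 10 00 10 11
```

This reproduces the slide's result U = 11 10 00 10 11.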
State diagram
A finite-state machine can occupy only a finite
number of states.
State of a machine: the smallest amount of
information that, together with the current input to
the machine, determines the output of the
machine.
In a convolutional encoder, the state is
represented by the contents of the memory.
Hence, there are 2^(K−1) states, where K is the
constraint length.
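The 2^(K−1) = 4 states of the K = 3 encoder and their transitions can be enumerated directly. A small sketch (the `step` function is my own naming; the slides present this as a state diagram):

```python
def step(state, u):
    """state = (s1, s2): the two most recent past input bits; u: current input.
    Returns (output pair, next state) for the g1 = 111, g2 = 101 encoder."""
    s1, s2 = state
    v1 = u ^ s1 ^ s2   # g1 = 111: taps on input and both memory cells
    v2 = u ^ s2        # g2 = 101: taps on input and second memory cell
    return (v1, v2), (u, s1)

for state in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    for u in (0, 1):
        out, nxt = step(state, u)
        print(f"state {state}, input {u} -> output {out}, next state {nxt}")
```

Each printed row corresponds to one arrow of the state diagram (and to one branch label input/output in the trellis).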
State diagram
A state diagram is a way to represent the encoder.
[Figure: state diagram of the rate ½, K = 3 encoder.]
Trellis
[Figure: trellis of the encoder over times t1 … t6, with branches
labeled input/output (0/00, 1/11, 0/11, 0/10, 1/01, 0/01, 1/00).
For input bits 1 0 1 followed by tail bits 0 0, the output bits are
11 10 00 10 11.]
Block diagram of the DCS
[Figure: block diagram of the digital communication system, showing
the codeword sequence passing through the channel.]
• In hard decision:
– The demodulator makes a firm or hard decision on
whether a “1” or a “0” was transmitted, and provides no
other information to the decoder, such as the
reliability of the decision.
• In soft decision:
– The demodulator provides the decoder with some
side information together with the decision. The
side information gives the decoder a
measure of confidence in the decision.
Soft and hard decision decoding
• ML hard-decision decoding rule:
– Choose the path in the trellis with minimum Hamming
distance from the received sequence.
(ML = maximum likelihood)
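The two decision styles lead to two different branch metrics. A minimal sketch (the function names and the antipodal ±1 mapping are my assumptions, not from the slides): hard decision compares sliced bits by Hamming distance, soft decision compares the raw channel samples by squared Euclidean distance.

```python
def hard_metric(received_bits, branch_bits):
    """Hamming distance between the sliced bits and the branch output."""
    return sum(r != b for r, b in zip(received_bits, branch_bits))

def soft_metric(received_samples, branch_bits):
    """Squared Euclidean distance to the branch output, with bit 1 -> +1, bit 0 -> -1."""
    symbols = [1.0 if b else -1.0 for b in branch_bits]
    return sum((r - s) ** 2 for r, s in zip(received_samples, symbols))

samples = [0.8, -0.1]                           # second sample is unreliable
sliced = [1 if s > 0 else 0 for s in samples]   # hard decisions: [1, 0]
print(hard_metric(sliced, [1, 1]))              # 1
print(round(soft_metric(samples, [1, 1]), 2))   # 1.25
```

The soft metric retains the fact that the second sample (−0.1) was barely on the “0” side; the hard metric discards that reliability information.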
Decoding of Convolutional Codes
• Maximum likelihood decoding of convolutional codes
– Finding the code branch in the trellis that was most likely
transmitted
– Based on calculating code Hamming distances for each branch
forming the encoded word
– Assume that the information symbols applied to an AWGN
channel are equally likely and independent. Then, for a received
sequence y and a code sequence x,
    p(y | x) = ∏_j p(y_j | x_j)
• The most likely path through the trellis maximizes this
metric.
• Since probabilities are often small numbers, the logarithm
is often taken of both sides, yielding the path metric
    ln p(y | x) = Σ_j ln p(y_j | x_j)
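On a binary symmetric channel with crossover probability p < ½, maximizing this log-likelihood sum is equivalent to minimizing Hamming distance, which is why the hard-decision rule above uses distance. A small check (the candidate codewords are illustrative, not from the slides):

```python
import math

def log_likelihood(y, x, p):
    """ln p(y|x) = sum_j ln p(y_j|x_j) on a BSC with crossover probability p."""
    return sum(math.log(p if yj != xj else 1 - p) for yj, xj in zip(y, x))

def hamming(y, x):
    return sum(yj != xj for yj, xj in zip(y, x))

y = [1, 1, 0, 0, 1]
candidates = [[1, 1, 0, 0, 0], [0, 0, 0, 0, 0], [1, 1, 1, 0, 0]]
p = 0.1
best_ll = max(candidates, key=lambda x: log_likelihood(y, x, p))
best_hd = min(candidates, key=lambda x: hamming(y, x))
print(best_ll == best_hd)  # True
```

This works because ln p(y|x) = d·ln p + (n − d)·ln(1 − p) depends on x only through the Hamming distance d, and is decreasing in d when p < ½.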
Example of Exhaustive Maximum
Likelihood Detection
[Figure: all candidate paths through the trellis, with the decoder
outputs listed for each path if that path is selected.]
Received sequence: 10 01 10 11 00
Hamming distance of the selected path: 5
Example of Hard-decision
Viterbi decoding
• Label all branches with the branch metric (Hamming distance
between the received pair and the branch output).
• At each stage i = 1, …, 6: add the branch metrics to the path
metrics of the surviving paths, compare the paths entering each
state, and discard (marked X) the path with the larger metric.
[Figures: six trellis snapshots for i = 1 … 6, showing the
accumulated state and path metrics and the branches discarded at
each step.]
• Result:
    m = (101)    U = (11 10 00 10 11)    Z = (11 00 00 10 11)
    m̂ = (101)
The single channel error (in the second received pair) is corrected.
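The add-compare-select iteration shown in the snapshots can be written as a minimal hard-decision Viterbi decoder for this K = 3, rate ½ code. The function and variable names are mine; the slides only show the trellis diagrams.

```python
def encode_step(state, u):
    """One encoder step for g1 = 111, g2 = 101: (output pair, next state)."""
    s1, s2 = state
    return (u ^ s1 ^ s2, u ^ s2), (u, s1)

def viterbi(received):
    """received: list of (bit, bit) pairs; returns the ML input sequence."""
    states = [(0, 0), (0, 1), (1, 0), (1, 1)]
    # Only the all-zero state is a valid starting state.
    metric = {s: (0 if s == (0, 0) else float("inf")) for s in states}
    paths = {s: [] for s in states}
    for z in received:
        new_metric = {s: float("inf") for s in states}
        new_paths = {}
        for s in states:
            if metric[s] == float("inf"):
                continue
            for u in (0, 1):
                out, nxt = encode_step(s, u)
                m = metric[s] + (out[0] != z[0]) + (out[1] != z[1])
                if m < new_metric[nxt]:          # add-compare-select
                    new_metric[nxt] = m
                    new_paths[nxt] = paths[s] + [u]
        metric, paths = new_metric, new_paths
    return paths[(0, 0)]   # the tail bits force the encoder back to state 00

Z = [(1, 1), (0, 0), (0, 0), (1, 0), (1, 1)]     # one channel error
decoded = viterbi(Z)
print(decoded[:3])  # [1, 0, 1] -> m_hat = (101); the last two bits are the tail
```

Running it on the received sequence of the example recovers m̂ = (101), matching the slide's result.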
Example of Hard-decision
Viterbi decoding
• Now let the received sequence be Z = (11 10 11 10 01), which
contains three channel errors.
[Figures: six trellis snapshots for i = 1 … 6, showing the branch
metrics, the accumulated state and path metrics, and the branches
discarded at each step.]
• Result:
    m = (101)    U = (11 10 00 10 11)    Z = (11 10 11 10 01)
    m̂ = (100)
Three channel errors exceed the error-correcting capability of the
code, so the decoder output is wrong.
Free distance of
Convolutional Codes
Since the code is linear, the minimum distance of the
code is the minimum distance between each of the
codewords and the all-zero codeword.
This is the minimum distance in the set of all arbitrarily
long paths along the trellis that diverge from and
remerge with the all-zero path.
It is called the minimum free distance, or the free
distance of the code, denoted by d_free or d_f.
Free distance …
[Figure: trellis showing the all-zero path and the minimum-weight
path that diverges from and remerges with it; the branch Hamming
weights along the diverging path are 2, 1, 2, giving d_f = 5.]
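For this small code the free distance can be checked by brute force: encode every nonzero message up to a modest length (with K − 1 tail zeros appended) and take the minimum codeword Hamming weight. This is a sketch, not an efficient search; names are mine.

```python
from itertools import product

def encode(msg):
    """Encode with g1 = 111, g2 = 101, appending K - 1 = 2 tail zeros."""
    state, out = (0, 0), []
    for u in list(msg) + [0, 0]:
        s1, s2 = state
        out += [u ^ s1 ^ s2, u ^ s2]
        state = (u, s1)
    return out

# Minimum Hamming weight over all nonzero messages of length 1..7.
d_free = min(
    sum(encode(msg))
    for n in range(1, 8)
    for msg in product([0, 1], repeat=n)
    if any(msg)
)
print(d_free)  # 5
```

The minimum is attained by the single-1 message, whose codeword 11 10 11 traces exactly the weight 2 + 1 + 2 = 5 path of the figure.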
Performance bounds …
• Error-correcting capability: t = ⌊(d_f − 1)/2⌋
• The coding gain is upper bounded by
    G ≤ 10 log10(R_c d_f) dB
[Figure: set partitioning of the 8-PSK constellation (points 0 … 7)
into subsets B0, B1 and then C0, C1, C2, C3; the intra-subset minimum
distance grows at each level, from d0 for the full set to d1 = √2
within B0, B1 and d2 = 2 within C0 … C3.]
4-State Trellis
[Figure: four-state trellis for coded 8-PSK, with waveform numbers
0, 1, 2, 3, 4, 5, 6, 7 labeling the transitions. The states are
assigned the subset pairs (C0, C1), (C2, C3), (C1, C0), (C3, C2),
i.e. the transition waveform sets 0 4 2 6, 1 5 3 7, 2 6 0 4, 3 7 1 5.]
Mapping Waveforms
to Trellis Transitions
• If k bits are to be encoded per modulation interval, the trellis must
allow for 2^k possible transitions from each state to a successor state.
• More than one transition may occur between pairs of states.
• All waveforms should occur with equal frequency and with a fair
amount of regularity and symmetry.
• Transitions originating from the same state are assigned waveforms
either from subset B0 or from subset B1 – never a mixture of the two.
• Transitions joining at the same state are assigned waveforms
either from subset B0 or from subset B1 – never a mixture of the two.
• Parallel transitions are assigned waveforms from a single subset C0,
C1, C2, or C3 – never a mixture.
Free Distance
• Minimum free distance = the minimum distance in the set of
all arbitrarily long paths that diverge from and remerge with
the all-zero path.
• Systematic-code free distance < non-systematic-code free
distance (for the same rate and constraint length).
• Error-correcting capability: t = ⌊(d_f − 1)/2⌋
Coding Gain
• Soft-decision ML decoding:
– P_e ≈ Q(d_f / 2σ) for moderate to high SNR,
where σ² is the noise variance.
Coding gain for 8-PSK 4-State
trellis
• Candidate error event paths:
– Path through subsets with distances d1, d0, d1:
d² = d1² + d0² + d1² = 2 + 0.585 + 2 = 4.585, so d = √4.585 ≈ 2.14
– Parallel transition: d² = 4, so d = 2
– Hence d_f = min(2.14, 2) = 2, and the gain over uncoded 4-PSK
is G = 10 log10(4/2) = 3 dB.
• Coded 8-state 8-PSK (states 0426, 1537, 4062, 5173, 2604, 3715,
6240, 7351):
– No parallel transitions; the shortest error event gives
d_f² = d1² + d0² + d1² = 2 + 0.585 + 2 = 4.585
– Reference (uncoded 4-PSK): d_ref² = 2
– G (dB) = 10 log10(d_f² / d_ref²) = 10 log10(4.585/2) ≈ 3.6 dB
• A 16-state trellis achieves a 4.1 dB coding gain over uncoded 4-PSK.
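The distance arithmetic above can be reproduced for a unit-radius 8-PSK constellation, where d_k is the minimum distance at partition level k (d0 for the full set, d1 within the B subsets, d2 within the C subsets):

```python
import math

d0_sq = 4 * math.sin(math.pi / 8) ** 2   # = 2 - sqrt(2) ≈ 0.586
d1_sq = 2.0                              # QPSK subset minimum distance squared
d2_sq = 4.0                              # antipodal pair (diameter) squared

# Shortest error event of the 8-state code: d_f^2 = d1^2 + d0^2 + d1^2
df_sq = d1_sq + d0_sq + d1_sq            # ≈ 4.586
dref_sq = 2.0                            # uncoded 4-PSK reference
G = 10 * math.log10(df_sq / dref_sq)
print(round(df_sq, 3), round(G, 1))      # 4.586 3.6
```

Note the slides round d0² to 0.585; the exact value is 2 − √2 ≈ 0.5858, and either rounding gives the same 3.6 dB gain.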