
UNIT-IV

Convolutional codes
• Convolutional codes offer an approach to error control
coding substantially different from that of block codes.
– A convolutional encoder:
• encodes the entire data stream into a single codeword,
• does not need to segment the data stream into blocks of
fixed size,
• is a machine with memory.
• This fundamental difference in approach imparts a
different nature to the design and evaluation of the
code.
– Block codes are based on algebraic/combinatorial
techniques.
– Convolutional codes are based on construction
techniques.
Convolutional codes

• A convolutional code is specified by three
parameters: (n, k, K) or (k/n, K)

– Rc = k/n is the coding rate, determining the
number of data bits per coded word.

– K is the constraint length of the encoder,
where the encoder has K-1 memory elements.
A Rate ½ Convolutional encoder

[Figure: rate-1/2 convolutional encoder. The input data bits m are shifted
through a register; u1 (first coded bit) and u2 (second coded bit) are
modulo-2 sums of register taps and together form the output branch word
(u1, u2).]
A Rate ½ Convolutional encoder
Message sequence: m = (101)

Time   Register contents   Output branch word (u1 u2)
t1     100                 1 1
t2     010                 1 0
t3     101                 0 0
t4     010                 1 0
A Rate ½ Convolutional encoder

Time   Register contents   Output branch word (u1 u2)
t5     001                 1 1
t6     000                 0 0

m = (101)  →  Encoder  →  U = (11 10 00 10 11)


Effective code rate
• Initialize the memory before encoding the first bit (all
zeros).
• Clear out the memory after encoding the last bit (all
zeros): a tail of K-1 zero-bits is appended to the data bits.

Data + Tail  →  Encoder  →  Codeword

• Effective code rate:
– L is the number of data bits and k = 1 is assumed:

R_eff = L / [n(L + K - 1)]  <  Rc
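For the running example (L = 3 data bits, K = 3, n = 2), a quick check of the effective rate (variable names are illustrative):

```python
L, K, n = 3, 3, 2                        # data bits, constraint length, outputs per input bit
R_eff = L / (n * (L + K - 1))            # the K-1 tail zeros are also encoded
print(R_eff)                             # 0.3, which is less than Rc = 0.5
```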
Encoder representation
• Vector representation:

g1 = (111)   (connections forming u1)
g2 = (101)   (connections forming u2)
Encoder representation
Message sequence: m = (101)

• Impulse response representation: the output generated by a single
"1" bit as it moves through the register.

Register contents   Branch word (u1 u2)
100                 1 1
010                 1 0
001                 1 1

Input sequence:  1 0 0
Output sequence: 11 10 11

Input m    Output
1          11 10 11
0             00 00 00
1                11 10 11
Modulo-2 sum:  11 10 00 10 11
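The same result can be computed as a modulo-2 superposition of shifted impulse responses, one copy per "1" in the message; a minimal sketch (variable names are illustrative):

```python
impulse = [(1, 1), (1, 0), (1, 1)]          # encoder impulse response: 11 10 11
msg = [1, 0, 1]

out = [[0, 0] for _ in range(len(msg) + len(impulse) - 1)]
for i, m in enumerate(msg):
    if m == 1:                              # superpose a shifted copy of the impulse response
        for j, (a, b) in enumerate(impulse):
            out[i + j][0] ^= a
            out[i + j][1] ^= b

print(out)   # [[1, 1], [1, 0], [0, 0], [1, 0], [1, 1]]  ->  11 10 00 10 11
```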
Encoder representation

• Polynomial representation:

g1(X) = g0^(1) + g1^(1)·X + g2^(1)·X^2 = 1 + X + X^2
g2(X) = g0^(2) + g1^(2)·X + g2^(2)·X^2 = 1 + X^2

• The output sequence is found as follows:

U(X) = m(X)·g1(X) interlaced with m(X)·g2(X)

Encoder representation

In more detail:

m(X)·g1(X) = (1 + X^2)(1 + X + X^2) = 1 + X + X^3 + X^4
m(X)·g2(X) = (1 + X^2)(1 + X^2)     = 1 + X^4

m(X)·g1(X) = 1 + X + 0·X^2 + X^3 + X^4
m(X)·g2(X) = 1 + 0·X + 0·X^2 + 0·X^3 + X^4

U(X) = (1,1) + (1,0)X + (0,0)X^2 + (1,0)X^3 + (1,1)X^4
U = 11 10 00 10 11
State diagram
• A finite-state machine only encounters a finite
number of states.
• State of a machine: the smallest amount of
information that, together with a current input to
the machine, can predict the output of the
machine.
• In a convolutional encoder, the state is
represented by the content of the memory.
• Hence, there are 2^(K-1) states, where K is the
constraint length.
State diagram
• A state diagram is a way to represent the
encoder.

• A state diagram contains all the states and all
possible transitions between them.

• Only two transitions initiate from each state.

• Only two transitions end up in each state.
State diagram
[Figure: state diagram with states S0 = 00, S1 = 01, S2 = 10, S3 = 11;
each branch is labelled input/output (branch word), e.g. 0/00 for the
S0 → S0 self-loop and 1/11 for S0 → S2.]

Current state   Input   Next state   Output
S0 = 00         0       S0           00
                1       S2           11
S1 = 01         0       S0           11
                1       S2           00
S2 = 10         0       S1           10
                1       S3           01
S3 = 11         0       S1           01
                1       S3           10
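The table above can be generated mechanically from the generator vectors; a small sketch (the loop structure and names are illustrative):

```python
# Enumerate the 2^(K-1) = 4 states (state = two previous inputs, most recent first).
for state in range(4):
    s1, s2 = (state >> 1) & 1, state & 1            # S0=00, S1=01, S2=10, S3=11
    for b in (0, 1):
        u1 = b ^ s1 ^ s2                            # g1 = (1 1 1)
        u2 = b ^ s2                                 # g2 = (1 0 1)
        nxt = (b << 1) | s1                         # shift the new input into the register
        print(f"S{state} = {s1}{s2}   input {b} -> S{nxt}   output {u1}{u2}")
```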
Trellis Representation
• Trellis diagram is an extension of the state diagram
that shows the passage of time.
[Figure: one trellis section from time ti to ti+1 with states S0 = 00,
S2 = 10, S1 = 01, S3 = 11; each branch is labelled input/output
(0/00, 1/11, 0/11, 1/00, 0/10, 1/01, 0/01, 1/10).]
Trellis
Input bits (with tail bits): 1 0 1 0 0

[Figure: five trellis sections, t1 to t6, with every branch labelled
input/output.]

Output bits: 11 10 00 10 11
Trellis

Input bits (with tail bits): 1 0 1 0 0
Output bits: 11 10 00 10 11

[Figure: the same trellis, t1 to t6, with all branch labels shown.]
Trellis
Input bits (with tail bits): 1 0 1 0 0
Output bits: 11 10 00 10 11

[Figure: the same trellis with the encoding path for m = (101) and the
tail highlighted: S0 -(1/11)-> S2 -(0/10)-> S1 -(1/00)-> S2 -(0/10)-> S1
-(0/11)-> S0; only input-0 branches appear in the last two (tail)
sections.]
Block diagram of the DCS

Information source → Rate 1/n convolutional encoder → Modulator → Channel
→ Demodulator → Rate 1/n convolutional decoder → Information sink

Input sequence:     m = (m1, m2, ..., mi, ...)
Codeword sequence:  U = G(m) = (U1, U2, U3, ..., Ui, ...)
Received sequence:  Z = (Z1, Z2, Z3, ..., Zi, ...)
Decoded sequence:   m̂ = (m̂1, m̂2, ..., m̂i, ...)
Soft and hard decision decoding

• In hard decision:
– The demodulator makes a firm or hard decision
whether “1” or “0” is transmitted and provides no
other information for the decoder such as the
reliability of the decision.

• In Soft decision:
– The demodulator provides the decoder with some
side information together with the decision. The
side information provides the decoder with a
measure of confidence for the decision.
Soft and hard decision decoding
• ML hard-decision decoding rule:
– Choose the path in the trellis with minimum Hamming
distance from the received sequence.

• ML soft-decision decoding rule:
– Choose the path in the trellis with minimum Euclidean
distance from the received sequence.

ML = Maximum Likelihood
Decoding of Convolutional Codes
• Maximum likelihood decoding of convolutional codes
– Finding the code branch in the trellis that was most likely
transmitted
– Based on calculating the Hamming distance of each branch
forming the encoded word
– Assume that the information symbols applied to an AWGN
channel are equally likely and independent.

Non-erroneous (transmitted) code:  x = x0 x1 x2 ... xj ...
Received code:                     y = y0 y1 ... yj ...
Decoder (= distance calculation & comparison) outputs the estimate of x.
Decoding of Convolutional Codes
• The probability of receiving y given the transmitted sequence x is

p(y, x) = ∏_j p(yj | xj)

• The most likely path through the trellis maximizes this metric.
• Since probabilities are often small numbers, ln(·) is often taken
of both sides, yielding

ln p(y, x^m) = Σ_j ln p(yj | xj^m)
Example of Exhaustive Maximum
Likelihood Detection

• Assume a three-bit message is transmitted
[encoded by the (2,1,2) convolutional encoder]. To
clear the decoder, two zero-bits are appended after the
message. Thus 5 bits are encoded, resulting in 10 code
bits. Assume the channel error probability is p = 0.1.
After the channel, 10 01 10 11 00 is received
(including some errors). What comes out of the
decoder, i.e. what was most likely the transmitted
code and what were the respective message bits?
Example of Exhaustive Maximum
Likelihood Detection

[Figure: trellis showing all candidate paths (states vs. time) and the
decoder output that would result if each path were selected.]
Example of Exhaustive
Maximum Likelihood Detection

Received sequence:  10 01 10 11 00
All-zero sequence:  00 00 00 00 00
Hamming distance:   5

Path metric = 5·(-2.30) + 5·(-0.11) = -12.05

ln[p(0|0)] = ln[p(1|1)] = ln(0.9) = -0.11
ln[p(1|0)] = ln[p(0|1)] = ln(0.1) = -2.30
Example of Exhaustive
Maximum Likelihood Detection

For the most likely path:
correct bits: 1+1+2+2+2 = 8;  8·(-0.11) = -0.88
false bits:   1+1+0+0+0 = 2;  2·(-2.30) = -4.60
Total path metric: -5.48
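Both path metrics can be reproduced in a couple of lines (a quick check; the slide's per-bit values -0.11 and -2.30 are rounded, so the unrounded results differ slightly):

```python
import math

ln_ok, ln_err = math.log(0.9), math.log(0.1)        # p = 0.1

all_zero = 5 * ln_ok + 5 * ln_err                   # 5 agreements, 5 disagreements
best     = 8 * ln_ok + 2 * ln_err                   # 8 agreements, 2 disagreements
print(round(all_zero, 2), round(best, 2))           # -12.04 -5.45 (slide: -12.05, -5.48)
```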
The Viterbi Algorithm

• The Viterbi algorithm performs ML decoding:
– it finds the path through the trellis with the largest metric,
– it processes the demodulator outputs in an iterative
manner,
– at each step in the trellis, it compares the metrics
of all paths entering each state, and keeps only
the path with the largest metric, called the
survivor, together with its metric,
– it proceeds in the trellis by eliminating the least
likely paths.
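A compact hard-decision Viterbi decoder for the K = 3, rate-1/2 example code is sketched below. It keeps the minimum-Hamming-distance survivor at each state (equivalent to keeping the largest log-likelihood metric); the function name, data layout, and tail handling are illustrative assumptions, not from the slides.

```python
def viterbi_decode(received_pairs):
    """Hard-decision Viterbi decoding for the K=3, rate-1/2 code (g1=111, g2=101).

    received_pairs: list of (bit, bit) branch words; the last K-1 branches are
    assumed to come from tail zeros, so trace-back ends in state 00.
    """
    INF = float("inf")
    n_states = 4                                       # 2^(K-1)
    metrics = [0] + [INF] * (n_states - 1)             # start in state 00
    paths = [[] for _ in range(n_states)]              # survivor input histories

    for r1, r2 in received_pairs:
        new_metrics = [INF] * n_states
        new_paths = [None] * n_states
        for state in range(n_states):
            if metrics[state] == INF:
                continue
            s1, s2 = (state >> 1) & 1, state & 1
            for b in (0, 1):
                u1, u2 = b ^ s1 ^ s2, b ^ s2           # expected branch word
                nxt = (b << 1) | s1
                metric = metrics[state] + (u1 != r1) + (u2 != r2)
                if metric < new_metrics[nxt]:          # keep the survivor only
                    new_metrics[nxt] = metric
                    new_paths[nxt] = paths[state] + [b]
        metrics, paths = new_metrics, new_paths

    return paths[0][:-2], metrics[0]                   # strip the K-1 tail bits
```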
Example of Hard decision
Viterbi decoding
m  (101)
U  (11 10 00 10 11) Z  (11 00 00 10 11 )
0/00 0/00 0/00 0/00 0/00
1/11 1/11 1/11
0/11 0/11 0/11
0/10 1/00
0/10 0/10
1/01 1/01
0/01 0/01

t1 t2 t3 t4 t5 t6
Example of Hard decision
Viterbi decoding-cont’d
• Label all branches with the branch metric (Hamming distance between
the branch word and the corresponding received branch word).

i = 1

[Figure: trellis with branch metrics; for the first received word 11,
the branch 0/00 gets metric 2 and the branch 1/11 gets metric 0.]
Example of Hard decision
Viterbi decoding-cont’d
i = 2

[Figure: state metrics (accumulated path metrics) after the first
section (received 11): S0 = 2, S2 = 0.]
Example of Hard decision
Viterbi decoding-cont’d
i = 3

[Figure: path metrics after the second section (received 00):
S0 = 2, S2 = 4, S1 = 1, S3 = 1.]
Example of Hard decision
Viterbi decoding-cont’d
i = 4

[Figure: at each state the two entering paths are compared and the
larger-metric path is discarded (marked X). Surviving metrics after the
third section (received 00): S0 = 2, S2 = 1, S1 = 2, S3 = 2.]
Example of Hard decision
Viterbi decoding-cont’d
i = 5

[Figure: fourth section (received 10, first tail bit, only input-0
branches remain): surviving metrics S0 = 3, S1 = 1.]
Example of Hard decision
Viterbi decoding-cont’d
i = 6

[Figure: fifth section (received 11, second tail bit): the single
survivor ends at S0 with final path metric 1, matching the single
channel error.]
Example of Hard decision
Viterbi decoding-cont’d
m  (101) Z  (11 00 00 10 11)
U  (11 10 00 10 11) ˆ
m  ( 101 )

0 2 2 0 2 0 2 1 3 2 1
X
0 2 2 X
0 4 X 1
2 1 0
0
1 1X 2 0 1
1
1
1 2X
1 1 X 2
1
t1 t2 t3 t4 t5 t6
Example of Hard decision
Viterbi decoding
m  (101)
U  (11 10 00 10 11) Z  (11 10 11 10 01 )
0/00 0/00 0/00 0/00 0/00
1/11 1/11 1/11
0/11 0/11 0/11
0/10 1/00
0/10 0/10
1/01 1/01
0/01 0/01

t1 t2 t3 t4 t5 t6
Example of Hard decision
Viterbi decoding-cont’d
i = 1

[Figure: every branch labelled with its Hamming distance to the
corresponding received branch word of Z = (11 10 11 10 01).]
Example of Hard decision
Viterbi decoding-cont’d

i = 2

[Figure: path metrics after the first section (received 11):
S0 = 2, S2 = 0.]
Example of Hard decision
Viterbi decoding-cont’d

i = 3

[Figure: path metrics after the second section (received 10):
S0 = 3, S2 = 3, S1 = 0, S3 = 2.]
Example of Hard decision
Viterbi decoding-cont’d

i = 4

[Figure: surviving metrics after the third section (received 11):
S0 = 0, S2 = 2, S1 = 3, S3 = 3; discarded paths are marked X.]
Example of Hard decision
Viterbi decoding-cont’d

i = 5

[Figure: fourth section (received 10, first tail bit, only input-0
branches remain): surviving metrics S0 = 1, S1 = 2.]
Example of Hard decision
Viterbi decoding-cont’d

i = 6

[Figure: fifth section (received 01, second tail bit): the single
survivor ends at S0 with final path metric 2.]
Example of Hard decision
Viterbi decoding-cont’d
m  (101) Z  (11 10 11 10 01)
U  (11 10 00 10 11) ˆ  (100)
m
0 2 2
1 3
2 0
1 1
1 2

0 1 0
0 3 2
0 1 1
1 2
0 0
0 3 2
1
2 2
2 1 3
1
t1 t2 t3 t4 t5 t6
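Running the decoder sketched after the Viterbi-algorithm slide on both received sequences reproduces these two outcomes (assuming that viterbi_decode sketch):

```python
Z1 = [(1, 1), (0, 0), (0, 0), (1, 0), (1, 1)]   # one channel error
Z2 = [(1, 1), (1, 0), (1, 1), (1, 0), (0, 1)]   # three channel errors

print(viterbi_decode(Z1))   # ([1, 0, 1], 1)  -> error corrected
print(viterbi_decode(Z2))   # ([1, 0, 0], 2)  -> decoding error
```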
Free distance of
Convolutional Codes
• Since the code is linear, the minimum distance of the
code is the minimum distance between each of the
codewords and the all-zero codeword.
• This is the minimum distance in the set of all arbitrarily
long paths along the trellis that diverge from and remerge
with the all-zero path.
• It is called the minimum free distance, or the free
distance of the code, denoted by d_free or d_f.
Free distance …
[Figure: trellis showing the all-zero path and the minimum-weight path
that diverges from and remerges with it; the Hamming weights of the
branches along the diverging path are 2, 1, 2, so d_f = 5.]
Performance bounds …

• The error-correction capability of a convolutional code is
given by

t = ⌊(d_f - 1) / 2⌋

• The coding gain is upper bounded by

coding gain ≤ 10 log10(Rc · d_f)
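For the example code (Rc = 1/2, d_f = 5) these bounds evaluate as follows (a quick check; names are illustrative):

```python
import math

Rc, d_f = 1 / 2, 5
t = (d_f - 1) // 2                          # error-correction capability
gain_bound_db = 10 * math.log10(Rc * d_f)   # upper bound on coding gain
print(t, round(gain_bound_db, 2))           # 2 3.98
```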


How to end up decoding?

• In the previous example it was assumed that the
register was finally flushed with zeros, thus finding the
minimum-distance path.
• In practice, with long codewords, zeroing requires
feeding a long sequence of zeros at the end of the
message bits: this wastes channel capacity and
introduces delay.
• To avoid this, path memory truncation is applied:
– It has been found experimentally that a negligible
increase in error rate occurs when truncating at 5L.
– Note that this also introduces a delay of 5L.
Trellis Coded Modulation
TCM
What is TCM?
• TCM = combined coding and modulation
scheme that uses redundant non-binary
modulation in combination with a finite-state
machine
• Current input + current state → output
(restricted set of possible outputs → free distance)
• Characterized by a trellis diagram
• Decoded with the Viterbi algorithm
TCM - Procedure
Set Partitioning
Set Partitioning – Example of 8-PSK

[Figure: successive partitioning of the unit-energy 8-PSK constellation
(points 0-7). The full set A0 has minimum distance d0 = 2 sin(π/8) = 0.765;
the first partition gives subsets B0 and B1 with d1 = √2; the second
partition gives subsets C0, C1, C2, C3 with d2 = 2.]
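The three partition distances follow from the geometry of a unit-energy M-PSK constellation, whose minimum distance is 2 sin(π/M); a quick check (names are illustrative):

```python
import math

d0 = 2 * math.sin(math.pi / 8)   # full 8-PSK set A0
d1 = 2 * math.sin(math.pi / 4)   # 4-point subsets B0, B1 (QPSK spacing) = sqrt(2)
d2 = 2 * math.sin(math.pi / 2)   # 2-point subsets C0..C3 (antipodal) = 2
print(round(d0, 3), round(d1, 3), round(d2, 3))   # 0.765 1.414 2.0
```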
4-State Trellis

0, 1, 2, 3, 4, 5, 6, 7 = waveform numbers

[Figure: 4-state trellis for coded 8-PSK. The states are labelled with the
subset pairs (C0, C1) = (04, 26), (C2, C3) = (15, 37), (C1, C0) = (26, 04)
and (C3, C2) = (37, 15); each pair of parallel transitions carries the two
waveforms of one C-subset.]
Mapping Waveforms
to Trellis Transitions
• If k bits are to be encoded per modulation interval, the trellis must
allow 2^k possible transitions from each state to a successor state,
• More than one transition may occur between pairs of states,
• All waveforms should occur with equal frequency and with a fair
amount of regularity and symmetry,
• Transitions originating from the same state are assigned waveforms
either from subset B0 or B1 – never a mixture between them,
• Transitions joining into the same state are assigned waveforms
either from subset B0 or B1 – never a mixture between them,
• Parallel transitions are assigned waveforms either from subset C0 or
C1 or C2 or C3 – never a mixture between them.
Free Distance
• Minimum free distance = minimum distance in the set of
all arbitrarily long paths that diverge from and remerge with
the all-zero path.
• Systematic code free distance < non-systematic code free
distance.
• Error-correcting capability: t = ⌊(d_f - 1)/2⌋
Coding Gain
• Soft-decision ML decoding
– Pe ≈ Q(d_f / 2) for moderate to high SNR

• Asymptotic coding gain G
– G (dB) = 20 log10(d_f / d_ref)
  or
  G (dB) = 10 log10(d_f^2 / d_ref^2)

• Alternatively, for high SNR and a given error probability
– G (dB) = (Eb/N0)_U (dB) - (Eb/N0)_C (dB)

• TCM goal: achieve d_f > d_ref at the same information rate,
bandwidth and power.
4-State Trellis (repeated for reference)

[Figure: the same 4-state coded 8-PSK trellis as above, with states
(C0, C1) = (04, 26), (C2, C3) = (15, 37), (C1, C0) = (26, 04),
(C3, C2) = (37, 15).]
Coding gain for 8-PSK 4-State
trellis
• Candidate error event paths:
– Path through waveforms 2, 1, 2:
  d^2 = d1^2 + d0^2 + d1^2 = 2 + 0.585 + 2 = 4.585
  d = sqrt(4.585) ≈ 2.14
– Parallel transition (waveform 4):
  d = d2 = 2

• d_f = min(2.14, 2) = 2,   d_ref = √2 (uncoded 4-PSK)

• G (dB) = 10 log10(d_f^2 / d_ref^2) = 10 log10(4 / 2) = 3 dB
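Both the 4-state and the 8-state gains can be reproduced from the partition distances (a quick check under the same assumptions; variable names are illustrative):

```python
import math

d0_sq = (2 * math.sin(math.pi / 8)) ** 2     # 0.586
d1_sq, d2_sq = 2.0, 4.0
d_ref_sq = 2.0                               # uncoded 4-PSK reference

# 4-state code: free distance is limited by the parallel transition (d2).
df_sq_4state = min(d1_sq + d0_sq + d1_sq, d2_sq)     # = 4.0
# 8-state code: no parallel transitions; the shortest error event dominates.
df_sq_8state = d1_sq + d0_sq + d1_sq                 # = 4.586

print(round(10 * math.log10(df_sq_4state / d_ref_sq), 1))   # 3.0 dB
print(round(10 * math.log10(df_sq_8state / d_ref_sq), 1))   # 3.6 dB
```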


Coded 8-PSK, 8-State trellis

[Figure: 8-state trellis for coded 8-PSK; each state is labelled with a
sequence of four waveform numbers (0426, 1537, 4062, 5173, 2604, 3715,
6240, 7351).]

• Shortest error event:
  d^2 = d1^2 + d0^2 + d1^2 = 2 + 0.585 + 2 = 4.585
  d_f = √4.585,   d_ref = √2

• G (dB) = 10 log10(d_f^2 / d_ref^2) = 3.6 dB

• A 16-state trellis gives 4.1 dB coding gain over uncoded 4-PSK.
