
Module 4

Introduction to different types of coding schemes


Convolutional Codes
• Overview
– Parameters
– Encoding
– State Diagram
– Trellis Diagram
– Decoding
• Viterbi Algorithm
Convolutional codes

• In this coding scheme, smaller blocks of uncoded data, called information frames, are used.
• Each frame contains just a few symbols, or even a single symbol.
• The information frames are encoded into codeword frames.
• A single information frame is not used on its own to obtain a codeword.
• Instead, the current information frame together with the previous m information frames is used to obtain a single codeword frame.
• This implies that such codes have memory.
• Codes with this property are termed tree codes.
• Convolutional codes are an important subclass of tree codes.
• Convolutional codes make decisions based on past information.
Convolutional codes

[Figure: encoder block diagram. An incoming bit stream (e.g. 110010101011) is
divided into information frames of k symbols each; the encoder (shift-register
memory of constraint length K = mk plus a logic circuit) produces an n-symbol
codeword frame for each information frame.]
Convolutional codes

• Assume an infinitely long stream of incoming symbols that is first broken
up into segments of k symbols each.
• Each segment is an information frame.
• The encoder consists of MEMORY (Shift register) and a logic circuit.
• The memory of encoder can store m information frames.
• Each time a new frame arrives, it is shifted into the shift register and
the oldest frame is discarded
• At the end of any frame time, the encoder has m most recent
information frames in its memory which corresponds to a total of mk
information symbols.
• For every information frame of k symbols that comes in, the encoder
generates a codeword frame of n symbols.

Convolutional Codes
• A message of any length is converted into a single continuous ‘codeword’.
• In block codes, the block of n bits generated by the encoder in a
particular time slot depends only on the block of k message bits in that slot.
• A convolutional encoder has memory: its n outputs at any time depend on
the current k inputs and on the m previous input blocks.
• Typically described by 3 parameters:
– n= no. of bits produced at encoder output at each time unit
– k= no. of bits input to encoder at each time unit
– Rate r=k/n
• m = memory of encoder
= no. of prev. input blocks used to generate each output
(or)
• K = constraint length
= mk
Encoding
• Linear conv. encoders can be implemented with feed-forward shift
registers:
• Example:
k = 1; n = 2; Rate ½; m = 3 convolutional code: a (2,1,3) (n,k,m) code

[Figure: shift register with cells m, m1, m2 (the current bit and the two
previous bits); modulo-2 adders form the outputs x1 and x2.]

• Encoders can be viewed as FIR digital filters or finite state machines
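The FIR-filter view can be made concrete: each output stream is the modulo-2 convolution of the message with a generator sequence. A minimal sketch (not from the slides), assuming the generators g1 = 111 and g2 = 101 used in the encoding example that follows:

```python
def conv_output(msg, g):
    # Modulo-2 convolution of the message bits with one generator sequence.
    n = len(msg) + len(g) - 1
    return [sum(msg[i] * g[k - i]
                for i in range(len(msg)) if 0 <= k - i < len(g)) % 2
            for k in range(n)]

msg = [1, 1, 1, 0]
x1 = conv_output(msg, [1, 1, 1])  # g1 taps m, m1, m2 -> x1 = m + m1 + m2
x2 = conv_output(msg, [1, 0, 1])  # g2 taps m, m2     -> x2 = m + m2
# The first len(msg) samples match the encoder outputs frame by frame.
print(x1[:4], x2[:4])  # -> [1, 0, 1, 0] [1, 1, 0, 1]
```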


Encoding Example

The register holds the current message bit m and the two previous message
bits m1 and m2. The outputs are

x1 = m + m1 + m2
x2 = m + m2

(modulo-2 sums). A switch reads the two outputs alternately, so the output
codeword frame is x1 x2 x1 x2 x1 x2 …

For the input sequence 1110:

m  m1 m2 | x1 x2
0  0  0  |  0  0
1  0  0  |  1  1
1  1  0  |  0  1
1  1  1  |  1  0
0  1  1  |  0  1

Output codeword frames: 11 01 10 01
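The table above can be checked with a short shift-register sketch (illustrative only; the function name is ours):

```python
def conv_encode(bits):
    """(2,1,3) encoder: x1 = m + m1 + m2, x2 = m + m2 (modulo 2)."""
    m1 = m2 = 0                  # register starts in the all-zeros state
    out = []
    for m in bits:
        out += [(m + m1 + m2) % 2, (m + m2) % 2]
        m1, m2 = m, m1           # shift the new bit into the register
    return out

print(conv_encode([1, 1, 1, 0]))  # -> [1, 1, 0, 1, 1, 0, 0, 1]  (11 01 10 01)
```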
STATE TRANSITION DIAGRAM

The state is the pair of previous message bits (m1 m2):

States: a = 00, b = 01, c = 10, d = 11

Outputs: x1 = m + m1 + m2, x2 = m + m2

m2 m1 m | x1 x2
0  0  0 |  0  0
0  0  1 |  1  1
0  1  0 |  1  0
0  1  1 |  0  1
1  0  0 |  1  1
1  0  1 |  0  0
1  1  0 |  0  1
1  1  1 |  1  0

Each input frame moves the encoder from its previous/current state to the
current/next state. The transitions, labelled input/output:

a --0/00--> a     a --1/11--> c
b --0/11--> a     b --1/00--> c
c --0/10--> b     c --1/01--> d
d --0/01--> b     d --1/10--> d

[Figure: the corresponding state diagram with states a, b, c, d and the
branch labels above.]
Exercise: Draw the encoder and mention its dimensions; also generate the
codeword frame for the input 0101.

[Figure: shift register with cells m, m1, m2 and three outputs X1/g1, X2/g2,
X3/g3 (a rate-1/3 encoder).]
CODE TREE DIAGRAM

[Figure: code tree. Starting from state 00, each input bit (0 = upper branch,
1 = lower branch) is labelled with its output frame x1 x2. The output table is
the same as in the state-transition diagram, with x1 = m + m1 + m2 and
x2 = m + m2.]
Trellis Diagram
• A trellis diagram is a state diagram that has been expanded to show the
passage of time:
Trellis Diagram
• Every possible codeword is represented by a single unique path
through the trellis
• Every codeword starts and stops in S0, the all-zeros state
[Figure: trellis diagram. The four states a = 00, b = 01, c = 10, d = 11 are
drawn at each time step, with branches labelled input/output (0/00, 1/11,
0/11, 1/00, 0/10, 1/01, 0/01, 1/10) exactly as in the state-transition
diagram.]
Message: 10110; Codeword: 11 10 00 01 01

[Figure: the path traced through the trellis by the message 10110,
a → c → b → c → d → b, with branch outputs 11, 10, 00, 01, 01.]
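The codeword for a message can be reproduced by walking the state-transition table directly (a sketch; the dictionaries simply transcribe the branch labels a --1/11--> c, etc.):

```python
# Next state and output for each state and input bit, from the state diagram.
NEXT = {"a": ("a", "c"), "b": ("a", "c"), "c": ("b", "d"), "d": ("b", "d")}
OUT = {"a": ("00", "11"), "b": ("11", "00"), "c": ("10", "01"), "d": ("01", "10")}

def encode_via_trellis(bits):
    state, code = "a", ""        # every codeword starts in the all-zeros state
    for m in bits:
        code += OUT[state][m]
        state = NEXT[state][m]
    return code

print(encode_via_trellis([1, 0, 1, 1, 0]))  # -> 1110000101
```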
Decoding
• Several algorithms exist to decode convolutional codes: trellis
decoders, sequential (Fano) decoders, stack decoders, etc.
• Only one algorithm is optimal in terms of minimizing the probability
of error – the Viterbi algorithm
• The Viterbi algorithm is a maximum likelihood (ML) decoder
– ML decoders maximize
p( r | y’ )=p( received word | estimate of code word )
– For uniform distribution, it also maximizes
p( estimate of code word | received word)

• For a memoryless channel,

p(r \mid y') = \prod_{i=0}^{L+m-1} \prod_{j=0}^{n-1} p\big(r_i^{(j)} \mid y_i'^{(j)}\big)

• This is called the likelihood function of y'.
Decoding: Viterbi Algorithm
• Usually, the log-likelihood function is used to simplify implementation:

\log p(r \mid y') = \sum_{i=0}^{L+m-1} \sum_{j=0}^{n-1} \log p\big(r_i^{(j)} \mid y_i'^{(j)}\big)

• The log-probabilities \log p(r_i^{(j)} \mid y_i'^{(j)}), known as bit
metrics, are usually converted into small integers.
• The ML algorithm chooses the y' that maximizes p(r | y').
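For a binary symmetric channel the bit metrics take only two values, log(1-p) and log(p), so maximizing the likelihood is the same as minimizing Hamming distance. A quick numerical check (p = 0.1 and the candidate words are assumed values for illustration):

```python
import math

p = 0.1  # assumed BSC crossover probability (p < 0.5)

def log_likelihood(r, y):
    # Sum of bit metrics log p(r_i | y_i) over the received word.
    return sum(math.log(1 - p) if ri == yi else math.log(p)
               for ri, yi in zip(r, y))

def hamming(r, y):
    return sum(ri != yi for ri, yi in zip(r, y))

r = [0, 1, 1, 0, 1]
candidates = [[0, 0, 0, 0, 0], [0, 1, 1, 0, 0], [1, 1, 1, 1, 1]]
best_ml = max(candidates, key=lambda y: log_likelihood(r, y))
best_hd = min(candidates, key=lambda y: hamming(r, y))
print(best_ml == best_hd)  # -> True: both criteria pick the same word
```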

Decoding: Viterbi Algorithm


• The Viterbi algorithm finds the ML codeword by using the code
trellis:

Viterbi Algorithm
• Let the node corresponding to state Sj at time t be denoted by Sj,t
• Each node in the trellis is assigned a metric value V( Sj,t )
• The metric values are computed by
1. Let V( S0,0 ) =0 and t = 1
2. At time t, compute the partial path metrics for all paths entering each node.
3. Set V( Sk,t ) equal to the best partial path metric entering the node
corresponding to state Sk at time t. Ties can be broken by flipping a coin.
Nonsurviving branches are deleted from the trellis.
4. If t<L+m, increment t and return to step 2.
Step 1: Trellis encoder (the output table x1 x2 for each state and input)
Step 2: Trellis diagram
Step 3: Match the weightage (branch metrics) of the received bits with
respect to the trellis diagram

Match the bit sequence with the trellis diagram.

Received sequence: 01 10 11 00 00

[Figure: trellis annotated with the accumulated partial path metrics at each
node; the survivor path has total Hamming metric 1.]

Decoded sequence: 11 10 11 00 00
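The steps above can be sketched as a hard-decision Viterbi decoder for this (2,1,3) code (a minimal illustration, not the slides' own code; Hamming branch metrics, survivors kept as message lists, ties broken arbitrarily):

```python
def step(state, m):
    # Encoder transition: state = (m1, m2); x1 = m + m1 + m2, x2 = m + m2.
    m1, m2 = state
    return (m, m1), ((m + m1 + m2) % 2, (m + m2) % 2)

def viterbi(received):
    # received: list of (r1, r2) frames; hard-decision Hamming metrics.
    survivors = {(0, 0): (0, [])}          # state -> (path metric, message)
    for r in received:
        new = {}
        for s, (cost, msg) in survivors.items():
            for m in (0, 1):
                ns, out = step(s, m)
                c = cost + (out[0] != r[0]) + (out[1] != r[1])
                if ns not in new or c < new[ns][0]:
                    new[ns] = (c, msg + [m])   # keep the better partial path
        survivors = new
    return min(survivors.values(), key=lambda t: t[0])[1]

rx = [(0, 1), (1, 0), (1, 1), (0, 0), (0, 0)]   # received 01 10 11 00 00
print(viterbi(rx))  # -> [1, 0, 0, 0, 0]; re-encoding gives 11 10 11 00 00
```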

Conv. Codes in Wireless Communications


• Why are convolutional codes often used in wireless communication
systems?
– Block codes typically have algebraic decoders.
• These decoders operate on hard decisions (0’s and 1’s, or their equivalents)
– Convolutional decoders can use soft-decision decoding.
• Decoder inputs are continuous-valued (or quantized); for instance, the
correlator output for BPSK
• Soft-decision decoding gives a 2-3 dB gain over hard-decision decoding
(2.2 dB for coherent communication)
Thank you
