
Chapter 10 Convolutional Codes

Dr. Chih-Peng Li

Table of Contents
10.1 Encoding of Convolutional Codes
10.2 Structural Properties of Convolutional Codes
10.3 Distance Properties of Convolutional Codes

Convolutional Codes
Convolutional codes differ from block codes in that the encoder contains memory, and the n encoder outputs at any time unit depend not only on the k inputs at that time unit but also on the m previous input blocks. An (n, k, m) convolutional code can be implemented with a k-input, n-output linear sequential circuit with input memory m.
Typically, n and k are small integers with k<n, but the memory order m must be made large to achieve low error probabilities.

In the important special case when k=1, the information sequence is not divided into blocks and can be processed continuously. Convolutional codes were first introduced by Elias in 1955 as an alternative to block codes.

Convolutional Codes
Shortly thereafter, Wozencraft proposed sequential decoding as an efficient decoding scheme for convolutional codes, and experimental studies soon began to appear. In 1963, Massey proposed a less efficient but simpler-to-implement decoding method called threshold decoding. Then in 1967, Viterbi proposed a maximum likelihood decoding scheme that was relatively easy to implement for codes with small memory orders. This scheme, called Viterbi decoding, together with improved versions of sequential decoding, led to the application of convolutional codes to deep-space and satellite communication in the early 1970s.

Convolutional Code
A convolutional code is generated by passing the information sequence to be transmitted through a linear finite-state shift register. In general, the shift register consists of K (k-bit) stages and n linear algebraic function generators.

Convolutional Code
Convolutional codes:
k = number of bits shifted into the encoder at one time (k = 1 is usually used)
n = number of encoder output bits corresponding to the k information bits
Rc = k/n = code rate
K = constraint length (encoder memory)
Each encoded bit is a function of the present input bits and their past values. Note that the definition of constraint length here is the same as Shu Lin's, while the shift-register representation is different.

Encoding of Convolutional Code


Example 1:
Consider the binary convolutional encoder with constraint length K=3, k=1, and n=3. The generators are: g1=[100], g2=[101], and g3=[111]. The generators are more conveniently given in octal form as (4,5,7).

Encoding of Convolutional Code


Example 2:
Consider a rate 2/3 convolutional encoder. The generators are: g1=[1011], g2=[1101], and g3=[1010]. In octal form, these generators are (13, 15, 12).

Representations of Convolutional Code


There are three alternative methods that are often used to describe a convolutional code:
Tree diagram
Trellis diagram
State diagram

Representations of Convolutional Code


Tree diagram
Note that the tree diagram in the figure repeats itself after the third stage. This is consistent with the fact that the constraint length is K=3: the output sequence at each stage is determined by the input bit and the two previous input bits. In other words, we may say that the 3-bit output sequence for each input bit is determined by the input bit and the four possible states of the shift register, denoted a=00, b=01, c=10, and d=11. (Figure: tree diagram for the rate 1/3, K=3 convolutional code.)

Representations of Convolutional Code


Trellis diagram

Trellis diagram for rate 1/3, K=3 convolutional code.



Representations of Convolutional Code


State diagram
State transitions (next state for input 0 / input 1):
a → a / c
b → a / c
c → b / d
d → b / d

State diagram for rate 1/3, K=3 convolutional code.



Representations of Convolutional Code


Example: K=2, k=2, n=3 convolutional code
Tree diagram


Representations of Convolutional Code


Example: K=2, k=2, n=3 convolutional code
Trellis diagram


Representations of Convolutional Code


Example: K=2, k=2, n=3 convolutional code
State diagram


Representations of Convolutional Code


In general, we state that a rate k/n, constraint length K convolutional code is characterized by 2^k branches emanating from each node of the tree diagram. The trellis and the state diagrams each have 2^(k(K-1)) possible states. There are 2^k branches entering each state and 2^k branches leaving each state.
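As a quick sanity check on these counts, a short script (illustrative only; the helper names are my own) reproduces them for the two example codes of this section:

```python
# 2^(k(K-1)) states in the trellis/state diagram, and 2^k branches
# entering and leaving each state, for a rate k/n code with
# constraint length K (as defined in this chapter).
def state_count(k, K):
    return 2 ** (k * (K - 1))

def branches_per_state(k):
    return 2 ** k

print(state_count(1, 3), branches_per_state(1))  # rate 1/3, K=3 code: 4 states, 2 branches
print(state_count(2, 2), branches_per_state(2))  # k=2, K=2 code: 4 states, 4 branches
```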


10.1 Encoding of Convolutional Codes


Example: A (2, 1, 3) binary convolutional codes:

the encoder consists of an m = 3-stage shift register together with n = 2 modulo-2 adders and a multiplexer for serializing the encoder outputs.
The mod-2 adders can be implemented as EXCLUSIVE-OR gates.

Since mod-2 addition is a linear operation, the encoder is a linear feedforward shift register. All convolutional encoders can be implemented using a linear feedforward shift register of this type.

10.1 Encoding of Convolutional Codes


The information sequence u = (u_0, u_1, u_2, ...) enters the encoder one bit at a time. Since the encoder is a linear system, the two encoder output sequences v^(1) = (v_0^(1), v_1^(1), v_2^(1), ...) and v^(2) = (v_0^(2), v_1^(2), v_2^(2), ...) can be obtained as the convolution of the input sequence u with the two encoder impulse responses. The impulse responses are obtained by letting u = (1 0 0 ...) and observing the two output sequences. Since the encoder has an m-time-unit memory, the impulse responses can last at most m+1 time units and are written as:

g^(1) = (g_0^(1), g_1^(1), ..., g_m^(1))
g^(2) = (g_0^(2), g_1^(2), ..., g_m^(2))


10.1 Encoding of Convolutional Codes


The encoder of the binary (2, 1, 3) code has

g^(1) = (1 0 1 1)
g^(2) = (1 1 1 1)

The impulse responses g^(1) and g^(2) are called the generator sequences of the code. The encoding equations can now be written as

v^(1) = u * g^(1)
v^(2) = u * g^(2)

where * denotes discrete convolution and all operations are modulo-2. The convolution operation implies that for all l >= 0,

v_l^(j) = Σ_{i=0}^{m} u_{l-i} g_i^(j) = u_l g_0^(j) + u_{l-1} g_1^(j) + ... + u_{l-m} g_m^(j),   j = 1, 2,

where u_{l-i} = 0 for all l < i.



10.1 Encoding of Convolutional Codes


Hence, for the encoder of the binary (2, 1, 3) code,

v_l^(1) = u_l + u_{l-2} + u_{l-3}
v_l^(2) = u_l + u_{l-1} + u_{l-2} + u_{l-3}

as can easily be verified by direct inspection of the encoding circuit. After encoding, the two output sequences are multiplexed into a single sequence, called the code word, for transmission over the channel. The code word is given by

v = (v_0^(1) v_0^(2), v_1^(1) v_1^(2), v_2^(1) v_2^(2), ...).


10.1 Encoding of Convolutional Codes


Example 10.1
Let the information sequence u = (1 0 1 1 1). Then the output sequences are

v^(1) = (1 0 1 1 1) * (1 0 1 1) = (1 0 0 0 0 0 0 1)
v^(2) = (1 0 1 1 1) * (1 1 1 1) = (1 1 0 1 1 1 0 1)

and the code word is

v = (1 1, 0 1, 0 0, 0 1, 0 1, 0 1, 0 0, 1 1).
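Example 10.1 is easy to check in a few lines of code. The following sketch (the function name is my own) implements the mod-2 discrete convolution defined above and multiplexes the two output sequences:

```python
def conv_mod2(u, g):
    """Discrete convolution of binary sequences u and g over GF(2)."""
    v = [0] * (len(u) + len(g) - 1)
    for l in range(len(v)):
        for i, gi in enumerate(g):
            if 0 <= l - i < len(u):
                v[l] ^= u[l - i] & gi
    return v

u = [1, 0, 1, 1, 1]
v1 = conv_mod2(u, [1, 0, 1, 1])   # g(1) -> (1 0 0 0 0 0 0 1)
v2 = conv_mod2(u, [1, 1, 1, 1])   # g(2) -> (1 1 0 1 1 1 0 1)
# Multiplex the two output sequences into the code word
codeword = [b for pair in zip(v1, v2) for b in pair]
```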


10.1 Encoding of Convolutional Codes


If the generator sequences g^(1) and g^(2) are interlaced and then arranged in the matrix

G = | g_0^(1)g_0^(2)  g_1^(1)g_1^(2)  g_2^(1)g_2^(2)  ...  g_m^(1)g_m^(2)
    |                 g_0^(1)g_0^(2)  g_1^(1)g_1^(2)  ...  g_{m-1}^(1)g_{m-1}^(2)  g_m^(1)g_m^(2)
    |                                 g_0^(1)g_0^(2)  ...  g_{m-2}^(1)g_{m-2}^(2)  g_{m-1}^(1)g_{m-1}^(2)  g_m^(1)g_m^(2)
    |                                                 ...

where the blank areas are all zeros, the encoding equations can be rewritten in matrix form as v = uG. G is called the generator matrix of the code. Note that each row of G is identical to the preceding row but shifted n = 2 places to the right, and that G is a semi-infinite matrix, corresponding to the fact that the information sequence u is of arbitrary length.

10.1 Encoding of Convolutional Codes


If u has finite length L, then G has L rows and 2(m+L) columns, and v has length 2(m + L).

Example 10.2
If u = (1 0 1 1 1), then

v = uG = (1 0 1 1 1) | 11 01 11 11
                     |    11 01 11 11
                     |       11 01 11 11
                     |          11 01 11 11
                     |             11 01 11 11
  = (1 1, 0 1, 0 0, 0 1, 0 1, 0 1, 0 0, 1 1),

which agrees with our previous calculation using discrete convolution.
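The same code word can be reproduced by building the truncated generator matrix explicitly. This sketch (helper names are my own) follows the construction above for an (n, 1, m) code with an information sequence of length L, so G has L rows and n(m + L) columns:

```python
def generator_matrix(g_list, L):
    """Truncated G for an (n, 1, m) code: row i is the interlaced
    generator sequences shifted i blocks (n places) to the right."""
    n, m = len(g_list), len(g_list[0]) - 1
    cols = n * (m + L)
    G = []
    for i in range(L):
        row = [0] * cols
        for t in range(m + 1):      # time index into the generator sequences
            for j in range(n):      # output index
                row[n * (i + t) + j] = g_list[j][t]
        G.append(row)
    return G

def encode(u, G):
    """v = uG over GF(2): XOR the rows of G selected by the 1s in u."""
    v = [0] * len(G[0])
    for ui, row in zip(u, G):
        if ui:
            v = [a ^ b for a, b in zip(v, row)]
    return v

G = generator_matrix([[1, 0, 1, 1], [1, 1, 1, 1]], L=5)
v = encode([1, 0, 1, 1, 1], G)
# v -> (1 1, 0 1, 0 0, 0 1, 0 1, 0 1, 0 0, 1 1), as in Example 10.2
```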

10.1 Encoding of Convolutional Codes


Consider a (3, 2, 1) convolutional code. Since k = 2, the encoder consists of two m = 1-stage shift registers together with n = 3 modulo-2 adders and two multiplexers.


10.1 Encoding of Convolutional Codes


The information sequence enters the encoder k = 2 bits at a time and can be written as

u = (u_0^(1) u_0^(2), u_1^(1) u_1^(2), u_2^(1) u_2^(2), ...)

or as the two input sequences

u^(1) = (u_0^(1), u_1^(1), u_2^(1), ...)
u^(2) = (u_0^(2), u_1^(2), u_2^(2), ...)

There are three generator sequences corresponding to each input sequence. Let g_i^(j) = (g_{i,0}^(j), g_{i,1}^(j), ..., g_{i,m}^(j)) represent the generator sequence corresponding to input i and output j. The generator sequences of the (3, 2, 1) convolutional code are

g_1^(1) = (1 1),  g_2^(1) = (0 1),
g_1^(2) = (0 1),  g_2^(2) = (1 0),
g_1^(3) = (1 1),  g_2^(3) = (1 0).

10.1 Encoding of Convolutional Codes


And the encoding equations can be written as

v^(1) = u^(1) * g_1^(1) + u^(2) * g_2^(1)
v^(2) = u^(1) * g_1^(2) + u^(2) * g_2^(2)
v^(3) = u^(1) * g_1^(3) + u^(2) * g_2^(3)

The convolution operation implies that

v_l^(1) = u_l^(1) + u_{l-1}^(1) + u_{l-1}^(2)
v_l^(2) = u_l^(2) + u_{l-1}^(1)
v_l^(3) = u_l^(1) + u_l^(2) + u_{l-1}^(1)

After multiplexing, the code word is given by

v = (v_0^(1) v_0^(2) v_0^(3), v_1^(1) v_1^(2) v_1^(3), v_2^(1) v_2^(2) v_2^(3), ...).

10.1 Encoding of Convolutional Codes


Example 10.3
If u(1) = (1 0 1) and u(2) = (1 1 0), then

v^(1) = (1 0 1) * (1 1) + (1 1 0) * (0 1) = (1 0 0 1)
v^(2) = (1 0 1) * (0 1) + (1 1 0) * (1 0) = (1 0 0 1)
v^(3) = (1 0 1) * (1 1) + (1 1 0) * (1 0) = (0 0 1 1)
and

v = (1 1 0, 0 0 0 , 0 0 1, 1 1 1).
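Example 10.3 can be verified with the same mod-2 convolution, now summing the contributions of the two input sequences (a sketch; the helper names are my own, and the generator sequences are those listed above):

```python
def conv_mod2(u, g):
    """Discrete convolution of binary sequences over GF(2)."""
    v = [0] * (len(u) + len(g) - 1)
    for l in range(len(v)):
        for i, gi in enumerate(g):
            if gi and 0 <= l - i < len(u):
                v[l] ^= u[l - i]
    return v

def add_mod2(a, b):
    return [x ^ y for x, y in zip(a, b)]

u1, u2 = [1, 0, 1], [1, 1, 0]
# g_i^(j): generator for input i, output j, as listed above
v1 = add_mod2(conv_mod2(u1, [1, 1]), conv_mod2(u2, [0, 1]))  # (1 0 0 1)
v2 = add_mod2(conv_mod2(u1, [0, 1]), conv_mod2(u2, [1, 0]))  # (1 0 0 1)
v3 = add_mod2(conv_mod2(u1, [1, 1]), conv_mod2(u2, [1, 0]))  # (0 0 1 1)
codeword = [b for triple in zip(v1, v2, v3) for b in triple]
# -> (1 1 0, 0 0 0, 0 0 1, 1 1 1)
```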


10.1 Encoding of Convolutional Codes


The generator matrix of a (3, 2, m) code is

G = | g_{1,0}^(1)g_{1,0}^(2)g_{1,0}^(3)  g_{1,1}^(1)g_{1,1}^(2)g_{1,1}^(3)  ...  g_{1,m}^(1)g_{1,m}^(2)g_{1,m}^(3)
    | g_{2,0}^(1)g_{2,0}^(2)g_{2,0}^(3)  g_{2,1}^(1)g_{2,1}^(2)g_{2,1}^(3)  ...  g_{2,m}^(1)g_{2,m}^(2)g_{2,m}^(3)
    |                 g_{1,0}^(1)g_{1,0}^(2)g_{1,0}^(3)  ...  g_{1,m-1}^(1)g_{1,m-1}^(2)g_{1,m-1}^(3)  g_{1,m}^(1)g_{1,m}^(2)g_{1,m}^(3)
    |                 g_{2,0}^(1)g_{2,0}^(2)g_{2,0}^(3)  ...  g_{2,m-1}^(1)g_{2,m-1}^(2)g_{2,m-1}^(3)  g_{2,m}^(1)g_{2,m}^(2)g_{2,m}^(3)
    |                                 ...

and the encoding equations in matrix form are again given by v = uG. Note that each set of k = 2 rows of G is identical to the preceding set of rows but shifted n = 3 places to the right.


10.1 Encoding of Convolutional Codes


Example 10.4
If u^(1) = (1 0 1) and u^(2) = (1 1 0), then u = (1 1, 0 1, 1 0) and

v = uG = (1 1, 0 1, 1 0) | 101 111
                         | 011 100
                         |     101 111
                         |     011 100
                         |         101 111
                         |         011 100
  = (1 1 0, 0 0 0, 0 0 1, 1 1 1),

which agrees with our previous calculation using discrete convolution.


10.1 Encoding of Convolutional Codes


In the general case, the encoder contains k shift registers, not all of which must have the same length. If K_i is the length of the ith shift register, then the encoder memory order m is defined as

m = max_{1<=i<=k} K_i

An example of a (4, 3, 2) convolutional encoder in which the shift register lengths are 0, 1, and 2.


10.1 Encoding of Convolutional Codes


The constraint length is defined as n_A = n(m+1). Since each information bit remains in the encoder for up to m+1 time units, and during each time unit can affect any of the n encoder outputs, n_A can be interpreted as the maximum number of encoder outputs that can be affected by a single information bit. For example, the constraint lengths of the (2,1,3), (3,2,1), and (4,3,2) convolutional codes are 8, 6, and 12, respectively.
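The three values quoted above follow directly from n_A = n(m + 1); a minimal check (the function name is my own):

```python
def constraint_length(n, m):
    # n_A = n(m + 1): the maximum number of encoder outputs that can be
    # affected by a single information bit
    return n * (m + 1)

print([constraint_length(n, m) for (n, k, m) in [(2, 1, 3), (3, 2, 1), (4, 3, 2)]])
# the (2,1,3), (3,2,1), and (4,3,2) codes give n_A = 8, 6, and 12
```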


10.1 Encoding of Convolutional Codes


In the general case of an (n, k, m) code, the generator matrix is

G = | G_0  G_1  G_2  ...  G_m
    |      G_0  G_1  ...  G_{m-1}  G_m
    |           G_0  ...  G_{m-2}  G_{m-1}  G_m
    |                ...

where each G_l is a k x n submatrix whose entries are

G_l = | g_{1,l}^(1)  g_{1,l}^(2)  ...  g_{1,l}^(n)
      | g_{2,l}^(1)  g_{2,l}^(2)  ...  g_{2,l}^(n)
      |      ...          ...          ...
      | g_{k,l}^(1)  g_{k,l}^(2)  ...  g_{k,l}^(n)

Note that each set of k rows of G is identical to the previous set of rows but shifted n places to the right.

10.1 Encoding of Convolutional Codes


For an information sequence

u = (u_0, u_1, ...) = (u_0^(1)u_0^(2)...u_0^(k), u_1^(1)u_1^(2)...u_1^(k), ...),

the code word

v = (v_0, v_1, ...) = (v_0^(1)v_0^(2)...v_0^(n), v_1^(1)v_1^(2)...v_1^(n), ...)

is given by v = uG. Since the code word v is a linear combination of rows of the generator matrix G, an (n, k, m) convolutional code is a linear code.


10.1 Encoding of Convolutional Codes


A convolutional encoder generates n encoded bits for each k information bits, and R = k/n is called the code rate. For a finite-length information sequence of kL bits, the corresponding code word has length n(L + m), where the final nm outputs are generated after the last nonzero information block has entered the encoder. Viewing a convolutional code as a linear block code with generator matrix G, the block code rate is given by kL/n(L + m), the ratio of the number of information bits to the length of the code word. If L >> m, then L/(L + m) ≈ 1, and the block code rate and convolutional code rate are approximately equal.

10.1 Encoding of Convolutional Codes


If L were small, however, the ratio kL/n(L + m), which is the effective rate of information transmission, would be reduced below the code rate by a fractional amount

(k/n - kL/n(L + m)) / (k/n) = m/(L + m)

called the fractional rate loss. To keep the fractional rate loss small, L is always assumed to be much larger than m.

Example 10.5
For the (2, 1, 3) convolutional code with L = 5, the fractional rate loss is 3/8 = 37.5%. However, if the length of the information sequence is L = 1000, the fractional rate loss is only 3/1003 ≈ 0.3%.
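Both numbers in Example 10.5 come straight from m/(L + m); a minimal check (the function name is my own):

```python
from fractions import Fraction

def fractional_rate_loss(m, L):
    # (k/n - kL/(n(L + m))) / (k/n) = m/(L + m)
    return Fraction(m, L + m)

print(fractional_rate_loss(3, 5))     # 3/8, i.e. 37.5%
print(fractional_rate_loss(3, 1000))  # 3/1003, roughly 0.3%
```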

10.1 Encoding of Convolutional Codes


In a linear system, time-domain operations involving convolution can be replaced by more convenient transform-domain operations involving polynomial multiplication. Since a convolutional encoder is a linear system, each sequence in the encoding equations can be replaced by a corresponding polynomial, and the convolution operation replaced by polynomial multiplication. In the polynomial representation of a binary sequence, the sequence itself is represented by the coefficients of the polynomial. For example, for a (2, 1, m) code, the encoding equations become

v^(1)(D) = u(D) g^(1)(D)
v^(2)(D) = u(D) g^(2)(D),

where u(D) = u_0 + u_1 D + u_2 D^2 + ... is the information sequence.

10.1 Encoding of Convolutional Codes


The encoded sequences are
v^(1)(D) = v_0^(1) + v_1^(1) D + v_2^(1) D^2 + ...
v^(2)(D) = v_0^(2) + v_1^(2) D + v_2^(2) D^2 + ...

The generator polynomials of the code are

g^(1)(D) = g_0^(1) + g_1^(1) D + ... + g_m^(1) D^m
g^(2)(D) = g_0^(2) + g_1^(2) D + ... + g_m^(2) D^m

and all operations are modulo-2. After multiplexing, the code word becomes

v(D) = v^(1)(D^2) + D v^(2)(D^2).

The indeterminate D can be interpreted as a delay operator, the power of D denoting the number of time units a bit is delayed with respect to the initial bit.


10.1 Encoding of Convolutional Codes


Example 10.6
For the previous (2, 1, 3) convolutional code, the generator polynomials are g^(1)(D) = 1 + D^2 + D^3 and g^(2)(D) = 1 + D + D^2 + D^3. For the information sequence u(D) = 1 + D^2 + D^3 + D^4, the encoding equations are

v^(1)(D) = (1 + D^2 + D^3 + D^4)(1 + D^2 + D^3) = 1 + D^7
v^(2)(D) = (1 + D^2 + D^3 + D^4)(1 + D + D^2 + D^3) = 1 + D + D^3 + D^4 + D^5 + D^7,

and the code word is

v(D) = v^(1)(D^2) + D v^(2)(D^2) = 1 + D + D^3 + D^7 + D^9 + D^11 + D^14 + D^15.


Note that the result is the same as previously computed using convolution and matrix multiplication.
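The transform-domain computation of Example 10.6 can be checked with polynomials held as GF(2) coefficient lists (a sketch; the helper names are my own):

```python
def poly_mul_gf2(a, b):
    """Multiply two binary polynomials given as coefficient lists
    (index = power of D), with mod-2 coefficient arithmetic."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] ^= ai & bj
    return out

def interleave(v1, v2):
    """v(D) = v1(D^2) + D*v2(D^2)."""
    out = [0] * (2 * max(len(v1), len(v2)))
    for i, c in enumerate(v1):
        out[2 * i] ^= c
    for i, c in enumerate(v2):
        out[2 * i + 1] ^= c
    return out

u = [1, 0, 1, 1, 1]        # u(D) = 1 + D^2 + D^3 + D^4
g1 = [1, 0, 1, 1]          # g(1)(D) = 1 + D^2 + D^3
g2 = [1, 1, 1, 1]          # g(2)(D) = 1 + D + D^2 + D^3
v1 = poly_mul_gf2(u, g1)   # 1 + D^7
v2 = poly_mul_gf2(u, g2)   # 1 + D + D^3 + D^4 + D^5 + D^7
v = interleave(v1, v2)     # 1 + D + D^3 + D^7 + D^9 + D^11 + D^14 + D^15
```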

10.1 Encoding of Convolutional Codes


The generator polynomials of an encoder can be determined directly from its circuit diagram. Since each shift register stage represents a one-time-unit delay, the sequence of connections (a 1 representing a connection and a 0 no connection) from a shift register to an output is the sequence of coefficients in the corresponding generator polynomial. Since the last stage of the shift register in an (n, 1) code must be connected to at least one output, at least one of the generator polynomials must have degree equal to the shift register length m, that is,

m = max_{1<=j<=n} deg g^(j)(D)


10.1 Encoding of Convolutional Codes


In an (n, k) code where k>1, there are n generator polynomials for each of the k inputs. Each set of n generators represents the connections from one of the shift registers to the n outputs. The length Ki of the ith shift register is given by

K_i = max_{1<=j<=n} deg g_i^(j)(D),   1 <= i <= k,

where g_i^(j)(D) is the generator polynomial relating the ith input to the jth output, and the encoder memory order m is

m = max_{1<=i<=k} K_i = max_{1<=i<=k, 1<=j<=n} deg g_i^(j)(D).


10.1 Encoding of Convolutional Codes


Since the encoder is a linear system, if u^(i)(D) denotes the ith input sequence and v^(j)(D) the jth output sequence, the generator polynomial g_i^(j)(D) can be interpreted as the encoder transfer function relating input i to output j. As with any k-input, n-output linear system, there are a total of kn transfer functions. These can be represented by the k x n transfer function matrix

G(D) = | g_1^(1)(D)  g_1^(2)(D)  ...  g_1^(n)(D)
       | g_2^(1)(D)  g_2^(2)(D)  ...  g_2^(n)(D)
       |     ...         ...          ...
       | g_k^(1)(D)  g_k^(2)(D)  ...  g_k^(n)(D)

10.1 Encoding of Convolutional Codes


Using the transfer function matrix, the encoding equation for an (n, k, m) code can be expressed as

V (D ) = U(D )G (D )
where U(D) = [u^(1)(D), u^(2)(D), ..., u^(k)(D)] is the k-tuple of input sequences and V(D) = [v^(1)(D), v^(2)(D), ..., v^(n)(D)] is the n-tuple of output sequences. After multiplexing, the code word becomes

v(D) = v^(1)(D^n) + D v^(2)(D^n) + ... + D^(n-1) v^(n)(D^n).


10.1 Encoding of Convolutional Codes


Example 10.7
For the previous (3, 2, 1) convolutional code,

G(D) = | 1+D   D   1+D |
       |  D    1    1  |

For the input sequences u^(1)(D) = 1 + D^2 and u^(2)(D) = 1 + D, the encoding equations are

V(D) = [v^(1)(D), v^(2)(D), v^(3)(D)] = [1 + D^2, 1 + D] G(D) = [1 + D^3, 1 + D^3, D^2 + D^3]

and the code word is

v(D) = v^(1)(D^3) + D v^(2)(D^3) + D^2 v^(3)(D^3) = (1 + D^9) + (1 + D^9)D + (D^6 + D^9)D^2 = 1 + D + D^8 + D^9 + D^10 + D^11

10.1 Encoding of Convolutional Codes


Then, we can find a means of representing the code word v(D) directly in terms of the input sequences. A little algebraic manipulation yields

v(D) = Σ_{i=1}^{k} u^(i)(D^n) g_i(D)

where

g_i(D) = g_i^(1)(D^n) + D g_i^(2)(D^n) + ... + D^(n-1) g_i^(n)(D^n),   1 <= i <= k,

is a composite generator polynomial relating the ith input sequence to v(D).


10.1 Encoding of Convolutional Codes Example 10.8


For the previous (2, 1, 3) convolutional code, the composite generator polynomial is

g(D) = g^(1)(D^2) + D g^(2)(D^2) = 1 + D + D^3 + D^4 + D^5 + D^6 + D^7

and for u(D) = 1 + D^2 + D^3 + D^4, the code word is

v(D) = u(D^2) g(D)
     = (1 + D^4 + D^6 + D^8)(1 + D + D^3 + D^4 + D^5 + D^6 + D^7)
     = 1 + D + D^3 + D^7 + D^9 + D^11 + D^14 + D^15,

again agreeing with previous calculations.

10.2 Structural Properties of Convolutional Codes


Since a convolutional encoder is a sequential circuit, its operation can be described by a state diagram. The state of the encoder is defined as its shift register contents. For an (n, k, m) code with k > 1, the ith shift register contains K_i previous information bits. Defining K = Σ_{i=1}^{k} K_i as the total encoder memory, the encoder state at time unit l is the binary K-tuple of inputs

(u_{l-1}^(1) u_{l-2}^(1) ... u_{l-K_1}^(1)   u_{l-1}^(2) u_{l-2}^(2) ... u_{l-K_2}^(2)   ...   u_{l-1}^(k) u_{l-2}^(k) ... u_{l-K_k}^(k))

and there are a total of 2^K different possible states.


10.2 Structural Properties of Convolutional Codes


For an (n, 1, m) code, K = K_1 = m and the encoder state at time unit l is simply (u_{l-1} u_{l-2} ... u_{l-m}). Each new block of k inputs causes a transition to a new state. There are 2^k branches leaving each state, one corresponding to each different input block. Note that for an (n, 1, m) code, there are only two branches leaving each state. Each branch is labeled with the k inputs causing the transition (u_l^(1) u_l^(2) ... u_l^(k)) and the n corresponding outputs (v_l^(1) v_l^(2) ... v_l^(n)). The states are labeled S_0, S_1, ..., S_{2^K - 1}, where by convention S_i represents the state whose binary K-tuple representation b_0, b_1, ..., b_{K-1} is equivalent to the integer

i = b_0 2^0 + b_1 2^1 + ... + b_{K-1} 2^{K-1}

10.2 Structural Properties of Convolutional Codes


Assuming that the encoder is initially in state S_0 (the all-zero state), the code word corresponding to any given information sequence can be obtained by following the path through the state diagram and noting the corresponding outputs on the branch labels. Following the last nonzero information block, the encoder is returned to state S_0 by a sequence of m all-zero blocks appended to the information sequence.


10.2 Structural Properties of Convolutional Codes


Encoder state diagram of a (2, 1, 3) code

If u = (1 1 1 0 1), the code word v = (1 1, 1 0, 0 1, 0 1, 1 1, 1 0, 1 1, 1 1)
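The same code word can be obtained by simulating the shift register directly. This sketch of the (2, 1, 3) encoder (the function name is my own) uses the encoding equations v_l^(1) = u_l + u_{l-2} + u_{l-3} and v_l^(2) = u_l + u_{l-1} + u_{l-2} + u_{l-3} derived earlier, and appends m = 3 zero blocks to return the encoder to state S0:

```python
def encode_213(u, m=3):
    """Shift-register encoder for the (2,1,3) code with g(1) = (1011), g(2) = (1111)."""
    state = [0, 0, 0]                 # (u_{l-1}, u_{l-2}, u_{l-3}), i.e. state S0
    out = []
    for bit in u + [0] * m:           # m all-zero blocks drive the encoder back to S0
        v1 = bit ^ state[1] ^ state[2]             # u_l + u_{l-2} + u_{l-3}
        v2 = bit ^ state[0] ^ state[1] ^ state[2]  # u_l + u_{l-1} + u_{l-2} + u_{l-3}
        out += [v1, v2]
        state = [bit] + state[:2]     # shift the new bit in
    return out

v = encode_213([1, 1, 1, 0, 1])
# -> (1 1, 1 0, 0 1, 0 1, 1 1, 1 0, 1 1, 1 1)
```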



10.2 Structural Properties of Convolutional Codes


Encoder state diagram of a (3, 2, 1) code


10.2 Structural Properties of Convolutional Codes


The state diagram can be modified to provide a complete description of the Hamming weights of all nonzero code words (i.e., a weight distribution function for the code). State S_0 is split into an initial state and a final state, the self-loop around state S_0 is deleted, and each branch is labeled with a branch gain X^i, where i is the weight of the n encoded bits on that branch. Each path connecting the initial state to the final state represents a nonzero code word that diverges from and remerges with state S_0 exactly once. The path gain is the product of the branch gains along a path, and the weight of the associated code word is the power of X in the path gain.

10.2 Structural Properties of Convolutional Codes Modified encoder state diagram of a (2, 1, 3) code.

The path representing the state sequence S0 S1 S3 S7 S6 S5 S2 S4 S0 has path gain X^2 · X · X · X · X^2 · X · X^2 · X^2 = X^12.

10.2 Structural Properties of Convolutional Codes Modified encoder state diagram of a (3, 2, 1) code.

The path representing the state sequence S0 S1 S3 S2 S0 has path gain X^2 · X^1 · X^0 · X^1 = X^4.

10.2 Structural Properties of Convolutional Codes


The weight distribution function of a code can be determined by considering the modified state diagram as a signal flow graph and applying Mason's gain formula to compute its generating function

T(X) = Σ_i A_i X^i,

where A_i is the number of code words of weight i. In a signal flow graph, a path connecting the initial state to the final state which does not go through any state twice is called a forward path. A closed path starting at any state and returning to that state without going through any other state twice is called a loop.


10.2 Structural Properties of Convolutional Codes


Let C_i be the gain of the ith loop. A set of loops is nontouching if no state belongs to more than one loop in the set. Let {i} be the set of all loops, {i', j'} be the set of all pairs of nontouching loops, {i'', j'', l''} be the set of all triples of nontouching loops, and so on. Then define

Δ = 1 - Σ_i C_i + Σ_{i',j'} C_i' C_j' - Σ_{i'',j'',l''} C_i'' C_j'' C_l'' + ... ,

where Σ_i C_i is the sum of the loop gains, Σ_{i',j'} C_i' C_j' is the product of the loop gains of two nontouching loops summed over all pairs of nontouching loops, and Σ_{i'',j'',l''} C_i'' C_j'' C_l'' is the product of the loop gains of three nontouching loops summed over all triples of nontouching loops.

10.2 Structural Properties of Convolutional Codes


And Δ_i is defined exactly like Δ, but only for that portion of the graph not touching the ith forward path; that is, all states along the ith forward path, together with all branches connected to these states, are removed from the graph when computing Δ_i. Mason's formula for computing the generating function T(X) of a graph can now be stated as

T(X) = Σ_i F_i Δ_i / Δ,

where the sum in the numerator is over all forward paths and F_i is the gain of the ith forward path.


10.2 Structural Properties of Convolutional Codes


Example (2,1,3) Code: There are 11 loops in the modified encoder state diagram.
Loop 1: S1 S3 S7 S6 S5 S2 S4 S1   (C1 = X^8)
Loop 2: S1 S3 S7 S6 S4 S1   (C2 = X^3)
Loop 3: S1 S3 S6 S5 S2 S4 S1   (C3 = X^7)
Loop 4: S1 S3 S6 S4 S1   (C4 = X^2)
Loop 5: S1 S2 S5 S3 S7 S6 S4 S1   (C5 = X^4)
Loop 6: S1 S2 S5 S3 S6 S4 S1   (C6 = X^3)
Loop 7: S1 S2 S4 S1   (C7 = X^3)
Loop 8: S2 S5 S2   (C8 = X)
Loop 9: S3 S7 S6 S5 S3   (C9 = X^5)
Loop 10: S3 S6 S5 S3   (C10 = X^4)
Loop 11: S7 S7   (C11 = X)

10.2 Structural Properties of Convolutional Codes Example (2,1,3) Code: (cont.)

There are 10 pairs of nontouching loops:
Loop pair 1: (loop 2, loop 8)   (C2 C8 = X^4)
Loop pair 2: (loop 3, loop 11)   (C3 C11 = X^8)
Loop pair 3: (loop 4, loop 8)   (C4 C8 = X^3)
Loop pair 4: (loop 4, loop 11)   (C4 C11 = X^3)
Loop pair 5: (loop 6, loop 11)   (C6 C11 = X^4)
Loop pair 6: (loop 7, loop 9)   (C7 C9 = X^8)
Loop pair 7: (loop 7, loop 10)   (C7 C10 = X^7)
Loop pair 8: (loop 7, loop 11)   (C7 C11 = X^4)
Loop pair 9: (loop 8, loop 11)   (C8 C11 = X^2)
Loop pair 10: (loop 10, loop 11)   (C10 C11 = X^5)

10.2 Structural Properties of Convolutional Codes Example (2,1,3) Code: (cont.)

There are two triples of nontouching loops:
Loop triple 1: (loop 4, loop 8, loop 11)   (C4 C8 C11 = X^4)
Loop triple 2: (loop 7, loop 10, loop 11)   (C7 C10 C11 = X^8)

There are no other sets of nontouching loops. Therefore,

Δ = 1 - (X^8 + X^3 + X^7 + X^2 + X^4 + X^3 + X^3 + X + X^5 + X^4 + X)
  + (X^4 + X^8 + X^3 + X^3 + X^4 + X^8 + X^7 + X^4 + X^2 + X^5) - (X^4 + X^8)
  = 1 - 2X - X^3


10.2 Structural Properties of Convolutional Codes Example (2,1,3) Code: (cont.)

There are seven forward paths in this state diagram:
Forward path 1: S0 S1 S3 S7 S6 S5 S2 S4 S0   (F1 = X^12)
Forward path 2: S0 S1 S3 S7 S6 S4 S0   (F2 = X^7)
Forward path 3: S0 S1 S3 S6 S5 S2 S4 S0   (F3 = X^11)
Forward path 4: S0 S1 S3 S6 S4 S0   (F4 = X^6)
Forward path 5: S0 S1 S2 S5 S3 S7 S6 S4 S0   (F5 = X^8)
Forward path 6: S0 S1 S2 S5 S3 S6 S4 S0   (F6 = X^7)
Forward path 7: S0 S1 S2 S4 S0   (F7 = X^7).

10.2 Structural Properties of Convolutional Codes Example (2,1,3) Code : (cont.)


Forward paths 1 and 5 touch all states in the graph, and hence the subgraph not touching these paths contains no states. Therefore, Δ1 = Δ5 = 1. The subgraph not touching forward paths 3 and 6 contains only the self-loop at S7, so Δ3 = Δ6 = 1 - X.

10.2 Structural Properties of Convolutional Codes Example (2,1,3) Code : (cont.)


The subgraph not touching forward path 2 contains only states S2 and S5 (loop 8), so Δ2 = 1 - X. The subgraph not touching forward path 4 contains loops 8 and 11, which are nontouching, so Δ4 = 1 - (X + X) + X^2 = 1 - 2X + X^2.

10.2 Structural Properties of Convolutional Codes Example (2,1,3) Code : (cont.)


The subgraph not touching forward path 7 contains loops 9, 10, and 11, with loops 10 and 11 nontouching, so Δ7 = 1 - (X + X^4 + X^5) + X^5 = 1 - X - X^4.

10.2 Structural Properties of Convolutional Codes Example (2,1,3) Code : (cont.)


The generating function for this graph is then given by
T(X) = [X^12 · 1 + X^7(1 - X) + X^11(1 - X) + X^6(1 - 2X + X^2) + X^8 · 1 + X^7(1 - X) + X^7(1 - X - X^4)] / (1 - 2X - X^3)
     = (X^6 + X^7 - X^8) / (1 - 2X - X^3)
     = X^6 + 3X^7 + 5X^8 + 11X^9 + 25X^10 + ...

T(X) provides a complete description of the weight distribution of all nonzero code words that diverge from and remerge with state S0 exactly once. In this case, there is one such code word of weight 6, three of weight 7, five of weight 8, and so on.
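The series expansion of T(X) = (X^6 + X^7 - X^8)/(1 - 2X - X^3) can be checked by long division of power series. A sketch (integer coefficients, since the generating function is over the integers rather than GF(2); the function name is my own):

```python
def series_coeffs(num, den, terms):
    """First `terms` power-series coefficients of num(X)/den(X); den[0] must be 1."""
    q = []
    rem = list(num) + [0] * terms
    for n in range(terms):
        c = rem[n]
        q.append(c)
        for i, d in enumerate(den):   # subtract c * X^n * den(X) from the remainder
            if n + i < len(rem):
                rem[n + i] -= c * d
    return q

num = [0, 0, 0, 0, 0, 0, 1, 1, -1]   # X^6 + X^7 - X^8
den = [1, -2, 0, -1]                 # 1 - 2X - X^3
print(series_coeffs(num, den, 11))
# coefficients of X^0..X^10: one code word of weight 6, three of weight 7,
# five of weight 8, eleven of weight 9, twenty-five of weight 10
```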

10.2 Structural Properties of Convolutional Codes Example (3,2,1) Code : (cont.)


There are eight loops, six pairs of nontouching loops, and one triple of nontouching loops in the previous modified encoder state diagram of the (3, 2, 1) code, and

Δ = 1 - (X^2 + X^4 + X^3 + X + X^2 + X + X^2 + X^3) + (X^6 + X^2 + X^4 + X^3 + X^4 + X^5) - X^6
  = 1 - 2X - 2X^2 - X^3 + X^4 + X^5.


10.2 Structural Properties of Convolutional Codes Example (3,2,1) Code: (cont.)

There are 15 forward paths in this graph, and

Σ_i F_i Δ_i = X^5(1 - X - X^2 - X^3 + X^5) + X^4(1 - X^2) + X^6 · 1 + X^5(1 - X^3) + X^4 · 1
  + X^3(1 - X - X^2) + X^6(1 - X^2) + X^6 · 1 + X^5(1 - X) + X^8 · 1
  + X^4(1 - X - X^2 - X^3 + X^4) + X^7(1 - X^3) + X^6 · 1 + X^3(1 - X) + X^6 · 1
  = 2X^3 + X^4 + X^5 + X^6 - X^7 - X^8.

Hence, the generating function is

T(X) = (2X^3 + X^4 + X^5 + X^6 - X^7 - X^8) / (1 - 2X - 2X^2 - X^3 + X^4 + X^5) = 2X^3 + 5X^4 + 15X^5 + ...

This code contains two nonzero code words of weight 3, five of weight 4, fifteen of weight 5, and so on.

10.2 Structural Properties of Convolutional Codes


Additional information about the structure of a code can be obtained using the same procedure. If the modified state diagram is augmented by labeling each branch corresponding to a nonzero information block with Y j, where j is the weight of the k information bits on the branch, and labeling every branch with Z, the generating function is given by

T(X, Y, Z) = Σ_{i,j,l} A_{i,j,l} X^i Y^j Z^l.

The coefficient A_{i,j,l} denotes the number of code words with weight i, whose associated information sequence has weight j, and whose length is l branches.

10.2 Structural Properties of Convolutional Codes The augmented state diagram for the (2, 1, 3) code.


10.2 Structural Properties of Convolutional Codes Example (2,1,3) Code:


For the graph of the augmented state diagram of the (2, 1, 3) code, we have:

Δ = 1 - (X^8Y^4Z^7 + X^3Y^3Z^5 + X^7Y^3Z^6 + X^2Y^2Z^4 + X^4Y^4Z^7 + X^3Y^3Z^6 + X^3YZ^3 + XYZ^2 + X^5Y^3Z^4 + X^4Y^2Z^3 + XYZ)
  + (X^4Y^4Z^7 + X^8Y^4Z^7 + X^3Y^3Z^6 + X^3Y^3Z^5 + X^4Y^4Z^7 + X^8Y^4Z^7 + X^7Y^3Z^6 + X^4Y^2Z^4 + X^2Y^2Z^3 + X^5Y^3Z^4)
  - (X^4Y^4Z^7 + X^8Y^4Z^7)
  = 1 - XY(Z + Z^2) - X^2Y^2(Z^4 - Z^3) - X^3YZ^3 - X^4Y^2(Z^3 - Z^4)


10.2 Structural Properties of Convolutional Codes Example (2,1,3) Code: (cont.)


Σ_i F_i Δ_i = X^12Y^4Z^8 · 1 + X^7Y^3Z^6(1 - XYZ^2) + X^11Y^3Z^7(1 - XYZ)
  + X^6Y^2Z^5(1 - XY(Z + Z^2) + X^2Y^2Z^3) + X^8Y^4Z^8 · 1 + X^7Y^3Z^7(1 - XYZ)
  + X^7YZ^4(1 - XYZ - X^4Y^2Z^3)
  = X^6Y^2Z^5 + X^7YZ^4 - X^8Y^2Z^5

Hence the generating function is

T(X, Y, Z) = (X^6Y^2Z^5 + X^7YZ^4 - X^8Y^2Z^5) / Δ
  = X^6Y^2Z^5 + X^7(YZ^4 + Y^3Z^6 + Y^3Z^7) + X^8(Y^2Z^6 + Y^4Z^7 + Y^4Z^8 + 2Y^4Z^9) + ...

10.2 Structural Properties of Convolutional Codes Example (2,1,3) Code: (cont.)


This implies that the code word of weight 6 has length 5 branches and an information sequence of weight 2; one code word of weight 7 has length 4 branches and information sequence weight 1, another has length 6 branches and information sequence weight 3, and the third has length 7 branches and information sequence weight 3; and so on.


The Transfer Function of a Convolutional Code


The state diagram can be used to obtain the distance property of a convolutional code. Without loss of generality, we assume that the all-zero code sequence is the input to the encoder.


The Transfer Function of a Convolutional Code


First, we label the branches of the state diagram as either D^0 = 1, D, D^2, or D^3, where the exponent of D denotes the Hamming distance between the sequence of output bits corresponding to each branch and the sequence of output bits corresponding to the all-zero branch. The self-loop at node a can be eliminated, since it contributes nothing to the distance properties of a code sequence relative to the all-zero code sequence. Furthermore, node a is split into two nodes, one of which represents the input and the other the output of the state diagram.


The Transfer Function of a Convolutional Code


Using the modified state diagram, we can obtain four state equations:

X_c = D^3 X_a + D X_b
X_b = D X_c + D X_d
X_d = D^2 X_c + D^2 X_d
X_e = D^2 X_b

The transfer function for the code is defined as T(D) = X_e / X_a. By solving the state equations, we obtain:

T(D) = D^6 / (1 - 2D^2) = D^6 + 2D^8 + 4D^10 + 8D^12 + ... = Σ_{d=6}^{∞} a_d D^d

where

a_d = 2^((d-6)/2) (even d),  a_d = 0 (odd d)

The Transfer Function of a Convolutional Code


The transfer function can be used to provide more detailed information than just the distance of the various paths. Suppose we introduce a factor N into all branch transitions caused by the input bit 1. Furthermore, we introduce a factor of J into each branch of the state diagram so that the exponent of J will serve as a counting variable to indicate the number of branches in any given path from node a to node e.



The state equations for the modified state diagram are:

X_c = JND^3 X_a + JND X_b
X_b = JD X_c + JD X_d
X_d = JND^2 X_c + JND^2 X_d
X_e = JD^2 X_b

Solving these equations for the ratio X_e / X_a, we obtain the transfer function:

T(D, N, J) = J^3 N D^6 / (1 - JND^2(1 + J))
           = J^3ND^6 + J^4N^2D^8 + J^5N^2D^8 + J^5N^3D^10 + 2J^6N^3D^10 + J^7N^3D^10 + ...


The exponent of the factor J indicates the length of the path that merges with the all-zero path for the first time. The exponent of the factor N indicates the number of 1s in the information sequence for that path. The exponent of D indicates the distance of the sequence of encoded bits for that path from the all-zero sequence.
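The expansion of T(D, N, J) can be reproduced mechanically by summing the geometric series J^3ND^6 · Σ_k [JND^2(1+J)]^k. The sketch below tracks monomials as exponent triples (J, N, D); the representation is ours, not from the text.

```python
from collections import Counter

def mul(p, q):
    """Multiply two polynomials in (J, N, D); keys are exponent triples."""
    out = Counter()
    for a, c1 in p.items():
        for b, c2 in q.items():
            out[tuple(x + y for x, y in zip(a, b))] += c1 * c2
    return out

# monomial key = (exponent of J, exponent of N, exponent of D)
x = Counter({(1, 1, 2): 1, (2, 1, 2): 1})   # J N D^2 (1 + J)
head = Counter({(3, 1, 6): 1})              # J^3 N D^6
term, total = Counter({(0, 0, 0): 1}), Counter()
for _ in range(4):                          # first few powers of x
    total += mul(head, term)
    term = mul(term, x)

# terms up to D^10 match the expansion in the text:
# J^3ND^6, J^4N^2D^8, J^5N^2D^8, J^5N^3D^10, 2J^6N^3D^10, J^7N^3D^10
print(sorted((k, c) for k, c in total.items() if k[2] <= 10))
```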

Reference: John G. Proakis, Digital Communications, 4th ed., McGraw-Hill, 2001, pp. 477-482.


10.2 Structural Properties of Convolutional Codes


An important subclass of convolutional codes is the class of systematic codes. In a systematic code, the first k output sequences are exact replicas of the k input sequences, i.e., v^(i) = u^(i), i = 1, 2, ..., k, and the generator sequences satisfy

g_i^(j) = 1 if j = i,  and  g_i^(j) = 0 if j ≠ i,   i = 1, 2, ..., k.



The generator matrix is given by

    | I P_0   0 P_1   0 P_2   ...   0 P_m                        |
G = |         I P_0   0 P_1   ...   0 P_{m-1}   0 P_m            |
    |                 I P_0   ...   0 P_{m-2}   0 P_{m-1}  0 P_m |
    |                                  ...                       |

where I is the k × k identity matrix, 0 is the k × k all-zero matrix, and P_l is the k × (n - k) matrix

      | g_{1,l}^(k+1)   g_{1,l}^(k+2)   ...   g_{1,l}^(n) |
P_l = | g_{2,l}^(k+1)   g_{2,l}^(k+2)   ...   g_{2,l}^(n) |
      |      ...             ...                  ...     |
      | g_{k,l}^(k+1)   g_{k,l}^(k+2)   ...   g_{k,l}^(n) |



The transfer function matrix becomes

       | 1 0 ... 0   g_1^(k+1)(D) ... g_1^(n)(D) |
G(D) = | 0 1 ... 0   g_2^(k+1)(D) ... g_2^(n)(D) |
       | ...         ...                         |
       | 0 0 ... 1   g_k^(k+1)(D) ... g_k^(n)(D) |

Since the first k output sequences equal the input sequences, they are called information sequences, and the last n - k output sequences are called parity sequences. Note that whereas in general kn generator sequences must be specified to define an (n, k, m) convolutional code, only k(n - k) sequences must be specified to define a systematic code. Systematic codes thus form a subclass of the set of all possible codes.

Example: (2, 1, 3) Systematic Code


Consider the (2, 1, 3) systematic code with generator sequences g^(1) = (1 0 0 0) and g^(2) = (1 1 0 1). The generator matrix is

    | 1 1   0 1   0 0   0 1                     |
G = |       1 1   0 1   0 0   0 1               |
    |             1 1   0 1   0 0   0 1         |
    |                        ...                |

The (2, 1, 3) systematic code.





The transfer function matrix is

G(D) = [1   1 + D + D^3].

For an input sequence u(D) = 1 + D^2 + D^3, the information sequence is

v^(1)(D) = u(D) g^(1)(D) = (1 + D^2 + D^3) · 1 = 1 + D^2 + D^3

and the parity sequence is

v^(2)(D) = u(D) g^(2)(D) = (1 + D^2 + D^3)(1 + D + D^3)
         = 1 + D + D^2 + D^3 + D^4 + D^5 + D^6
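The parity computation above is an ordinary polynomial product with coefficients reduced mod 2. A minimal sketch (names and representation are ours, not from the text), with polynomials stored as sets of exponents:

```python
def gf2_mul(a, b):
    """Multiply two GF(2) polynomials represented as sets of exponents."""
    out = set()
    for i in a:
        for j in b:
            out ^= {i + j}   # symmetric difference toggles membership = add coefficients mod 2
    return out

u  = {0, 2, 3}    # u(D)     = 1 + D^2 + D^3
g2 = {0, 1, 3}    # g^(2)(D) = 1 + D + D^3
print(sorted(gf2_mul(u, g2)))   # [0, 1, 2, 3, 4, 5, 6] -> 1 + D + ... + D^6
```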



One advantage of systematic codes is that encoding is somewhat simpler than for nonsystematic codes because less hardware is required. For an (n, k, m) systematic code with k > n - k, there exists a modified encoding circuit that normally requires fewer than K shift register stages.

The (2, 1, 3) systematic code requires only one modulo-2 adder with three inputs.

Example: (3, 2, 2) Systematic Code


Consider the (3, 2, 2) systematic code with transfer function matrix

G(D) = | 1  0  1 + D + D^2 |
       | 0  1  1 + D^2     |

The straightforward realization of the encoder requires a total of K = K_1 + K_2 = 4 shift register stages.



Since the information sequences are given by v^(1)(D) = u^(1)(D) and v^(2)(D) = u^(2)(D), and the parity sequence is given by

v^(3)(D) = u^(1)(D) g_1^(3)(D) + u^(2)(D) g_2^(3)(D),

the (3, 2, 2) systematic encoder requires only two stages of encoder memory rather than four.


A complete discussion of the minimal encoder memory required to realize a convolutional code is given by Forney. In most cases the straightforward realization requiring K stages of shift register memory is most efficient. In the case of an (n, k, m) systematic code with k > n - k, a simpler realization usually exists. Another advantage of systematic codes is that no inverting circuit is needed to recover the information sequence from the code word.



Nonsystematic codes, on the other hand, require an inverter to recover the information sequence; that is, an n × k matrix G^-1(D) must exist such that G(D)G^-1(D) = I D^l for some l ≥ 0, where I is the k × k identity matrix. Since V(D) = U(D)G(D), we obtain

V(D)G^-1(D) = U(D)G(D)G^-1(D) = U(D)D^l,

and the information sequence can be recovered with an l-time-unit delay from the code word by letting V(D) be the input to the n-input, k-output linear sequential circuit whose transfer function matrix is G^-1(D).



For an (n, 1, m) code, a transfer function matrix G(D) has a feedforward inverse G^-1(D) of delay l if and only if

GCD[g^(1)(D), g^(2)(D), ..., g^(n)(D)] = D^l

for some l ≥ 0, where GCD denotes the greatest common divisor.

For an (n, k, m) code with k > 1, let Δ_i(D), i = 1, 2, ..., (n choose k), be the determinants of the (n choose k) distinct k × k submatrices of the transfer function matrix G(D). A feedforward inverse of delay l exists if and only if

GCD[Δ_i(D) : i = 1, 2, ..., (n choose k)] = D^l

for some l ≥ 0.
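The GCD test is easy to run mechanically with the Euclidean algorithm over GF(2)[D]. The sketch below stores a polynomial as an integer whose bit i is the coefficient of D^i; names and representation are ours, not from the text.

```python
def gf2_divmod(a, b):
    """Divide GF(2) polynomial a by b (ints, bit i = coefficient of D^i)."""
    q = 0
    while b and a.bit_length() >= b.bit_length():
        shift = a.bit_length() - b.bit_length()
        q ^= 1 << shift       # subtract (XOR) the shifted divisor
        a ^= b << shift
    return q, a

def gf2_gcd(a, b):
    """Euclidean algorithm over GF(2)[D]."""
    while b:
        a, b = b, gf2_divmod(a, b)[1]
    return a

# The (2,1,3) code of the example below: 1 + D^2 + D^3 and 1 + D + D^2 + D^3
print(gf2_gcd(0b1101, 0b1111))      # 1 -> GCD = D^0, a feedforward inverse exists
# The (2,1,2) code with 1 + D and 1 + D^2 discussed later in this section
print(bin(gf2_gcd(0b011, 0b101)))   # 0b11 -> GCD = 1 + D, not a power of D
```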

Example: (2, 1, 3) Code


For the (2, 1, 3) code,

GCD[1 + D^2 + D^3, 1 + D + D^2 + D^3] = 1 = D^0,

and the transfer function matrix

G^-1(D) = | 1 + D + D^2 |
          | D + D^2     |

provides the required inverse of delay 0 [i.e., G(D)G^-1(D) = 1]. The implementation of the inverse is shown below.
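The identity G(D)G^-1(D) = 1 can be verified directly by multiplying and summing polynomials over GF(2); the sketch below uses sets of exponents (representation ours, not from the text).

```python
def gf2_mul(a, b):
    """GF(2) polynomial product; operands are sets of exponents."""
    out = set()
    for i in a:
        for j in b:
            out ^= {i + j}   # toggling membership adds coefficients mod 2
    return out

g1,  g2  = {0, 2, 3}, {0, 1, 2, 3}   # 1 + D^2 + D^3,  1 + D + D^2 + D^3
iv1, iv2 = {0, 1, 2}, {1, 2}         # 1 + D + D^2,    D + D^2
check = gf2_mul(g1, iv1) ^ gf2_mul(g2, iv2)   # GF(2) addition is XOR of the sets
print(sorted(check))   # [0] -> G(D) G^-1(D) = 1, i.e., delay l = 0
```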


Example: (3, 2, 1) Code


For the (3, 2, 1) code, the 2 × 2 submatrices of G(D) yield determinants 1 + D + D^2, 1 + D^2, and 1. Since

GCD[1 + D + D^2, 1 + D^2, 1] = 1,

there exists a feedforward inverse of delay 0. The required transfer function matrix is given by:

G^-1(D) = | 0    0     |
          | 1    1 + D |
          | 1    D     |


To understand what happens when a feedforward inverse does not exist, it is best to consider an example. For the (2, 1, 2) code with g^(1)(D) = 1 + D and g^(2)(D) = 1 + D^2,

GCD[1 + D, 1 + D^2] = 1 + D,

and a feedforward inverse does not exist. If the information sequence is u(D) = 1/(1 + D) = 1 + D + D^2 + ..., the output sequences are v^(1)(D) = 1 and v^(2)(D) = 1 + D; that is, the code word contains only three nonzero bits even though the information sequence has infinite weight. If this code word is transmitted over a BSC, and the three nonzero bits are changed to zeros by the channel noise, the received sequence will be all zeros.
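This effect can be observed numerically by truncating the infinite information sequence at N terms (N = 64 here, an arbitrary truncation of ours) and multiplying by each generator over GF(2):

```python
def gf2_mul(a, b):
    """GF(2) polynomial product; operands are sets of exponents."""
    out = set()
    for i in a:
        for j in b:
            out ^= {i + j}
    return out

N = 64
u  = set(range(N))                               # 1 + D + D^2 + ... ~ 1/(1+D), truncated
v1 = {e for e in gf2_mul(u, {0, 1}) if e < N}    # u(D)(1 + D), truncated
v2 = {e for e in gf2_mul(u, {0, 2}) if e < N}    # u(D)(1 + D^2), truncated
print(sorted(v1), sorted(v2))   # [0] [0, 1] -> only three nonzero code bits in total
```

The telescoping product u(D)(1 + D) leaves only the constant term, matching v^(1)(D) = 1 and v^(2)(D) = 1 + D from the text.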



An MLD will then produce the all-zero code word as its estimate, since this is a valid code word and it agrees exactly with the received sequence. The estimated information sequence will be û(D) = 0, implying an infinite number of decoding errors caused by a finite number (only three in this case) of channel errors. Clearly, this is a very undesirable circumstance; the code is said to be subject to catastrophic error propagation and is called a catastrophic code.

GCD[g^(1)(D), g^(2)(D), ..., g^(n)(D)] = D^l  and  GCD[Δ_i(D) : i = 1, 2, ..., (n choose k)] = D^l

are necessary and sufficient conditions for a code to be noncatastrophic.



Any code for which a feedforward inverse exists is noncatastrophic. Another advantage of systematic codes is that they are always noncatastrophic. A code is catastrophic if and only if the state diagram contains a loop of zero weight other than the self-loop around the state S0. Note that the self-loop around the state S3 has zero weight.

Figure 10.11: State diagram of a (2, 1, 2) catastrophic code.

In choosing nonsystematic codes for use in a communication system, it is important to avoid the selection of catastrophic codes. Only a fraction 1/(2^n - 1) of (n, 1, m) nonsystematic codes are catastrophic. A similar result for (n, k, m) codes with k > 1 is still lacking.


10.3 Distance Properties of Convolutional Codes


The performance of a convolutional code depends on the decoding algorithm employed and the distance properties of the code. The most important distance measure for convolutional codes is the minimum free distance dfree, defined as
d_free ≜ min{ d(v', v'') : u' ≠ u'' },

where v' and v'' are the code words corresponding to the information sequences u' and u'', respectively. In the equation above, it is assumed that if u' and u'' are of different lengths, zeros are added to the shorter sequence so that their corresponding code words have equal lengths.


d_free is the minimum distance between any two code words in the code. Since a convolutional code is a linear code,

d_free = min{ w(v' + v'') : u' ≠ u'' } = min{ w(v) : u ≠ 0 } = min{ w(uG) : u ≠ 0 },

where v is the code word corresponding to the information sequence u. Hence, d_free is the weight of the minimum-weight code word of any length produced by a nonzero information sequence.



Also, it is the minimum weight of all paths in the state diagram that diverge from and remerge with the all-zero state S0, and it is the lowest power of X in the code-generating function T(X). For example, d_free = 6 for the (2, 1, 3) code of Example 10.10(a), and d_free = 3 for the (3, 2, 1) code of Example 10.10(b).

Another important distance measure for convolutional codes is the column distance function (CDF). Letting

[v]_i = (v_0^(1) v_0^(2) ... v_0^(n), v_1^(1) v_1^(2) ... v_1^(n), ..., v_i^(1) v_i^(2) ... v_i^(n))

denote the ith truncation of the code word v, and

[u]_i = (u_0^(1) u_0^(2) ... u_0^(k), u_1^(1) u_1^(2) ... u_1^(k), ..., u_i^(1) u_i^(2) ... u_i^(k))

denote the ith truncation of the information sequence u.



The column distance function of order i, d_i, is defined as

d_i ≜ min{ d([v']_i, [v'']_i) : [u']_0 ≠ [u'']_0 }
    = min{ w([v]_i) : [u]_0 ≠ 0 },

where v is the code word corresponding to the information sequence u. Hence, d_i is the weight of the minimum-weight code word over the first (i + 1) time units whose initial information block is nonzero. In terms of the generator matrix of the code,

[v]_i = [u]_i [G]_i


[G]_i is a k(i + 1) × n(i + 1) submatrix of G with the form

        | G_0  G_1  ...  G_i     |
[G]_i = |      G_0  ...  G_{i-1} |          i ≤ m,
        |            ...   ...   |
        |                G_0     |

or

        | G_0  G_1  ...  G_m                        |
[G]_i = |      G_0  ...  G_{m-1}  G_m               |
        |             ...                ...        |
        |                 G_0  G_1  ...  G_m        |    i > m.
        |                       ...         ...     |
        |                                G_0        |



Then

d_i = min{ w([u]_i [G]_i) : [u]_0 ≠ 0 }

is seen to depend only on the first n(i + 1) columns of G; this accounts for the name column distance function. The definition implies that d_i cannot decrease with increasing i (i.e., it is a monotonically nondecreasing function of i). The complete CDF of the (2, 1, 16) code with

G(D) = [1 + D + D^2 + D^5 + D^6 + D^8 + D^13 + D^16,  1 + D^3 + D^4 + D^7 + D^9 + D^10 + D^11 + D^12 + D^14 + D^15 + D^16]

is shown in the figure on the following page.


Column distance function of a (2,1,16) code



Two cases are of specific interest: i = m and i → ∞. For i = m, d_m is called the minimum distance of a convolutional code and will also be denoted d_min. From the definition d_i = min{ w([u]_i [G]_i) : [u]_0 ≠ 0 }, we see that d_min represents the minimum-weight code word over the first constraint length whose initial information block is nonzero. For the (2, 1, 16) convolutional code, d_min = d_16 = 8.

For i → ∞, lim_{i→∞} d_i is the weight of the minimum-weight code word of any length whose first information block is nonzero. Comparing the definitions of lim_{i→∞} d_i and d_free, it can be shown that for noncatastrophic codes

lim_{i→∞} d_i = d_free



d_i eventually reaches d_free and then increases no more. This usually happens within three to four constraint lengths (i.e., when i reaches 3m or 4m). The above equation is not necessarily true for catastrophic codes. Take as an example the (2, 1, 2) catastrophic code whose state diagram is shown in Figure 10.11. For this code, d_0 = 2 and d_1 = d_2 = ... = lim_{i→∞} d_i = 3, since the truncated information sequence [u]_i = (1, 1, 1, ..., 1) always produces the truncated code word [v]_i = (1 1, 0 1, 0 0, 0 0, ..., 0 0), even in the limit as i → ∞. Note that all paths in the state diagram that diverge from and remerge with the all-zero state S0 have weight at least 4, and hence d_free = 4.
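The CDF values quoted above can be checked by dynamic programming over the encoder states, keeping the minimum accumulated output weight of any path whose first input bit is 1. This is an illustrative sketch for rate-1/2 feedforward encoders; function names are ours, not from the text.

```python
def cdf(g1, g2, imax):
    """Column distance function d_0 .. d_imax of a rate-1/2 feedforward
    encoder; g1, g2 are tap tuples (g_0, ..., g_m)."""
    m = len(g1) - 1

    def branch(u, state):
        bits = (u,) + state
        v1 = sum(b & t for b, t in zip(bits, g1)) & 1
        v2 = sum(b & t for b, t in zip(bits, g2)) & 1
        return v1 + v2, bits[:m]          # (output weight, next state)

    w0, s0 = branch(1, (0,) * m)          # force a nonzero initial block: u_0 = 1
    dist = {s0: w0}                       # min path weight reaching each state
    d = [w0]
    for _ in range(imax):
        nxt = {}
        for s, w in dist.items():
            for u in (0, 1):
                bw, ns = branch(u, s)
                if nxt.get(ns, 10**9) > w + bw:
                    nxt[ns] = w + bw
        dist = nxt
        d.append(min(dist.values()))
    return d

# Catastrophic (2,1,2) code: g^(1) = 1 + D, g^(2) = 1 + D^2
print(cdf((1, 1, 0), (1, 0, 1), 6))   # [2, 3, 3, 3, 3, 3, 3]
```

The output matches d_0 = 2 and d_1 = d_2 = ... = 3 for the catastrophic code; the zero-weight self-loop keeps the CDF stuck below d_free = 4.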


Hence, we have a situation in which lim_{i→∞} d_i = 3 < d_free = 4. It is characteristic of catastrophic codes that an infinite-weight information sequence produces a finite-weight code word. In some cases, as in the example above, this code word can have weight less than the free distance of the code. This is due to the zero-weight loop in the state diagram. In other words, an information sequence that cycles around this zero-weight loop forever will itself pick up infinite weight without adding to the weight of the code word.



In a noncatastrophic code, which contains no zero-weight loop other than the self-loop around the all-zero state S0, all infinite-weight information sequences must generate infinite-weight code words, and the minimum-weight code word always has finite length. Unfortunately, the information sequence that produces the minimum-weight code word may be quite long in some cases, thereby making the calculation of d_free a rather formidable task. The best achievable d_free for a convolutional code with a given rate and encoder memory has not been determined exactly. Upper and lower bounds on d_free for the best code have been obtained using a random coding approach.


A comparison of the bounds for nonsystematic codes with the bounds for systematic codes implies that more free distance is available with nonsystematic codes of a given rate and encoder memory than with systematic codes. This observation is verified by the code construction results presented in succeeding chapters, and has important consequences when a code with large dfree must be selected for use with either Viterbi or sequential decoding. Heller (1968) derived the upper bound on the minimum free distance of a rate 1/n convolutional code:

d_free ≤ min over l ≥ 1 of  ⌊ (2^(l-1) / (2^l - 1)) (K + l - 1) n ⌋
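The bound is straightforward to evaluate numerically. Treating K as the constraint length, so that the (2, 1, 3) code corresponds to K = m + 1 = 4 (an assumption about the convention used here), a sketch:

```python
from math import floor

def heller_bound(n, K, lmax=20):
    """Heller upper bound on d_free for a rate-1/n code of constraint length K.
    lmax truncates the minimization; the minimum occurs at small l."""
    return min(floor(2 ** (l - 1) / (2 ** l - 1) * (K + l - 1) * n)
               for l in range(1, lmax + 1))

print(heller_bound(2, 4))   # 6 -> the (2,1,3) code with d_free = 6 meets the bound
print(heller_bound(3, 3))   # 8
```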

Maximum Free Distance Codes

Tables of maximum free distance codes (not reproduced here) are given for rates 1/3, 1/4, 1/5, 1/6, 1/7, 1/8, 2/3, k/5, 3/4, 3/8, and k/7.
