List of Figures

1.1 Implementation of encoder
1.2 Implementation of encoder using feedback
Controller canonical form
Observer canonical form
State transition diagram for encoder
Modified state transition diagram for encoder
Trellis diagram for encoder
Chapter 1
Basics of Convolutional Coding.
Contents

1.1 Relating . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
1.7 Catastrophic Encoders . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
1.1 Relating

The info-bits enter the encoder in $k$ parallel streams $u^{(1)}, \ldots, u^{(k)}$, which we arrange in the interleaved format

$$u = \bigl(u_0^{(1)}, \ldots, u_0^{(k)},\; u_1^{(1)}, \ldots, u_1^{(k)},\; u_2^{(1)}, \ldots\bigr). \qquad (1.1)$$

Each output stream is the sum of the convolutions of the input streams with the corresponding impulse responses:

$$v_t^{(j)} = \sum_{i=1}^{k} \sum_{l=0}^{m} g_l^{(i,j)} u_{t-l}^{(i)}. \qquad (1.2)$$

Remember that the encoder having memory of order $m$ implies that the longest shift register in the encoder has length $m$. If some shift registers are shorter than $m$, we may extend them to have length $m$ by assuming that the coefficients associated with those extra taps are zero (zero padding). That way we ensure that all the impulse responses will have $m+1$ taps, without changing their outputs (encoded bits).

Now writing out the convolution (1.2) for successive time instants yields:

$$\bigl(v_0^{(j)}, v_1^{(j)}, v_2^{(j)}, \ldots\bigr) = \sum_{i=1}^{k} \bigl(u_0^{(i)}, u_1^{(i)}, \ldots\bigr)
\begin{pmatrix}
g_0^{(i,j)} & g_1^{(i,j)} & \cdots & g_m^{(i,j)} & & \\
& g_0^{(i,j)} & g_1^{(i,j)} & \cdots & g_m^{(i,j)} & \\
& & \ddots & & & \ddots
\end{pmatrix}$$
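The convolution (1.2) can be sketched directly in code. Below is a minimal rate-$1/n$ sketch in Python; the function name and the example taps ($1+D+D^2$ and $1+D^2$) are illustrative choices, not taken from the text.

```python
def conv_encode(u, generators):
    """Encode info bits u with a rate-1/n feedforward convolutional encoder.

    generators: list of impulse responses, each a list of taps g_0..g_m
    (shorter responses are zero-padded to a common length m+1, as in the text).
    Returns one n-bit output tuple per input bit.
    """
    m = max(len(g) for g in generators) - 1
    gens = [g + [0] * (m + 1 - len(g)) for g in generators]  # zero padding
    out = []
    for t in range(len(u)):
        symbol = []
        for g in gens:
            # v_t^{(j)} = sum_{l=0}^{m} u_{t-l} * g_l  (mod 2)
            v = 0
            for l in range(m + 1):
                if t - l >= 0:
                    v ^= u[t - l] & g[l]
            symbol.append(v)
        out.append(tuple(symbol))
    return out

# illustrative example: g1 = 1+D+D^2, g2 = 1+D^2
print(conv_encode([1, 0, 1, 1], [[1, 1, 1], [1, 0, 1]]))
# -> [(1, 1), (1, 0), (0, 0), (0, 1)]
```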
Now since we want to have the input bits in the interleaved format (1.1), we need to reorder the elements in the above expression. Collecting, for each delay $l$, the coefficients of all input/output pairs into the $k \times n$ block matrix $G_l$ with $[G_l]_{i,j} = g_l^{(i,j)}$, the interleaved output sequence $v = (v_0^{(1)}, \ldots, v_0^{(n)}, v_1^{(1)}, \ldots)$ can be written as

$$v = uG,$$

where the semi-infinite generator matrix $G$ has the form

$$G = \begin{pmatrix}
G_0 & G_1 & \cdots & G_m & & \\
& G_0 & G_1 & \cdots & G_m & \\
& & \ddots & \ddots & & \ddots
\end{pmatrix}.$$
We keep the lower case for the transformed quantity, as we want to keep the upper case letters for matrices. The D-transform is the same as the z-transform after substituting $z^{-1}$ with $D$. All the properties of the well-known z-transform apply to the D-transform as well. Therefore (1.2) is written as

$$v^{(j)}(D) = \sum_{i=1}^{k} u^{(i)}(D)\, g^{(i,j)}(D).$$

We can now express the transformed output (encoded) bit sequences compactly as

$$v(D) = u(D)\, G(D),$$

where $G(D)$ is the $k \times n$ matrix with entries $g^{(i,j)}(D)$.
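The statement that multiplication in the D-domain equals convolution in the time domain can be checked with a few lines of code. A small sketch of GF(2) polynomial multiplication in Python (the function name and example polynomials are illustrative):

```python
def gf2_poly_mul(a, b):
    """Multiply two GF(2) polynomials, coefficient lists with low order first."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] ^= ai & bj   # addition over GF(2) is XOR
    return out

u = [1, 0, 1, 1]        # u(D) = 1 + D^2 + D^3
g = [1, 1, 1]           # g(D) = 1 + D + D^2
v = gf2_poly_mul(u, g)  # v(D) = u(D) g(D)
print(v)                # -> [1, 1, 0, 0, 0, 1], i.e. v(D) = 1 + D + D^5
```

The first four coefficients of $v(D)$ agree with the first four encoded bits produced by the time-domain convolution of the same pair, as expected.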
We assume we are given a generator matrix for a convolutional code, either in the form of the impulse responses or of the transfer functions, and we wish to design a structure, using delay elements, that will implement the appropriate encoding. There are $k \cdot n$ impulse responses $g^{(i,j)}$, associated with the $i$-th input and the $j$-th output. Each of these impulse responses will have (or can be made to have) length $m+1$ (memory $m$). Let us focus on the $j$-th output: it can be produced (and therefore any other output as well), provided that the outputs of the delay elements are connected with the appropriate set of coefficients from the appropriate impulse response. Therefore, although $k \cdot n$ impulse responses are required to specify the convolutional encoder, only $k$ shift registers of $m$ taps each are necessary. This implies that the number of input symbols that are stored at every time instant in the encoder will be $km$, and therefore the number of states of the encoder will be $2^{km}$.
A convolutional encoder is called systematic if the info-bits appear unaltered among the encoded bits. There are many ways that this may happen: for instance, the info-bits may appear in different positions within each block of $n$ output bits. So even when considering the first $n$ output bits, there are several possibilities to have the input bits appear among them. The important thing is to have outputs that reflect exactly what the input bit streams are. Since such a reordering of the outputs is always possible, we usually refer to a systematic encoder in a standard form, where the info-bits occupy the first $k$ positions of each output block. From the above definition of a systematic encoder, it follows that $k$ columns of the generator matrix form an identity submatrix. If we want the first output to be the second and the second to be the first, this is achieved by multiplying with a permutation matrix. This indicates that there may be several systematic encoders (even in the standard form) for the same code. Note also that since $k$ of the outputs simply reproduce the inputs, fewer polynomials are required to specify a systematic encoder than a general one.
When an entry of the generator matrix is a rational function of $D$, its power series expansion has infinitely many nonzero coefficients. This implies that direct translation into a feedforward circuit (tapped delay line) is impossible, as an infinite number of delay elements would be required, and an infinite time would also have to pass before obtaining the encoded output bit. However, if we rewrite the input-output relation in recursive form, then by recalling some of the canonical implementations of IIR filters we have the structure of figure 1.4.
Figure 1.1: Implementation of encoder using feedback.

It is to be noted that the non-systematic encoder can be implemented with the same number of delay elements (2) but without a feedback branch, as shown in figure 1.4.
Each delay element can store any one of two values; with $km$ delay elements, there will therefore be $2^{km}$ possible states. Transition from one state to the other is due to the incoming info-bits. At time $t$, $k$ info-bits are entering the encoder, causing it to transition to a new state, and at the same time the output of the encoder is $n$ coded bits. Transition from one state to the other is through a branch that is labeled in the form info-bits/coded-bits.

Consider now a particular state, and ask how many incoming branches exist. We give an explanation which will also provide us with an additional result. Let us imagine the delay elements of the encoder as lying on a $k \times m$ array, where a row of the array holds the $m$ delay elements of one shift register. Consider first the $k$ delay elements of the first column of the array, then the next $k$ delay elements of the second column of the array, etc., until we consider the last $k$ delay elements of the $m$-th column of the array, as shown in figure 1.5. At time $t$ the state of the encoder is determined by the contents of these delay elements. Then at time $t+1$ the encoder will transit to the state obtained by shifting every row one position and inserting the $k$ new info-bits into the first column; the contents of the last column are shifted out and do not affect the new state. Now assume that the encoder at time $t$ was at a state that differs from the previous one only in the contents of the last column.
Figure 1.3.

Then, if at time $t$ the same info-bits are entering the encoder, the state to which it will transit will be the same state as before. Therefore two states that differ only in the contents of the last column of the array transit, under the same info-bits, to the same next state: every state has exactly $2^k$ incoming branches.
At the same time, the ordering of the delay elements of the convolutional encoder that is suggested in the right part of figure 1.5 provides a good convention for naming the states of the system. We will use the convention $S_i$ to denote the state when the content of the delay elements, read in that ordering as a binary number, equals the integer $i$. This is the labeling we will use for both the state transition diagram and the trellis diagram, in the format of figure 1.5.

Applying the above to the encoder yields the state transition diagram of figure 1.5. We can also construct the trellis diagram of the same encoder, based on the state transition diagram of figure 1.5, as follows:
Figure 1.5: State transition diagram for encoder. States 00, 01, 10, 11; branches labeled info-bit/coded-bits (0/00, 1/11, 0/11, 1/00, 0/01, 1/10, 0/10, 1/01).
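The transitions shown in such a diagram can be generated programmatically. A sketch in Python, assuming the common memory-2 encoder with generators $1+D+D^2$ and $1+D^2$ (whether its bit-ordering convention matches the figure's is an assumption):

```python
def transitions(generators, m):
    """Build the state transition table of a rate-1/n feedforward encoder.

    State = contents of the m delay elements, most recent input first,
    read as an integer (the naming convention S_i of the text).
    Returns {(state, input_bit): (next_state, output_bits)}.
    """
    table = {}
    for s in range(2 ** m):
        reg = [(s >> (m - 1 - i)) & 1 for i in range(m)]  # delay-element contents
        for u in (0, 1):
            window = [u] + reg                # u_t, u_{t-1}, ..., u_{t-m}
            out = []
            for g in generators:
                v = 0
                for l, tap in enumerate(g):
                    v ^= window[l] & tap      # tap-weighted XOR
                out.append(v)
            next_reg = [u] + reg[:-1]         # shift the register
            ns = 0
            for b in next_reg:
                ns = (ns << 1) | b
            table[(s, u)] = (ns, tuple(out))
    return table

# illustrative encoder: g1 = 1+D+D^2, g2 = 1+D^2
for (s, u), (ns, out) in sorted(transitions([[1, 1, 1], [1, 0, 1]], 2).items()):
    print(f"state {s:02b} --{u}/{out[0]}{out[1]}--> {ns:02b}")
```

For this encoder the table reproduces transitions such as $00 \xrightarrow{1/11} 10$, as in the diagram.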
Trellis diagram for the encoder: the states 00, 10, 01, 11 are repeated at successive time instants, and the branches between them carry the labels info-bit/coded-bits (0/00, 1/11, 0/01, 1/10, 0/11, 1/00, 0/10, 1/01).
When we study the Viterbi algorithm, we will see an algorithmic way to determine the free distance of a convolutional code.
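One such algorithmic way (not necessarily the one intended above) is a shortest-path search over the state graph, where each branch is weighted by the Hamming weight of its output bits. A sketch in Python, again assuming the illustrative memory-2 encoder with generators $1+D+D^2$ and $1+D^2$:

```python
import heapq

def build_table(gens, m):
    """(state, input) -> (next_state, output Hamming weight)."""
    table = {}
    for s in range(2 ** m):
        reg = [(s >> (m - 1 - i)) & 1 for i in range(m)]
        for u in (0, 1):
            window = [u] + reg
            w = 0
            for g in gens:
                v = 0
                for l, tap in enumerate(g):
                    v ^= window[l] & tap
                w += v
            ns = int("".join(map(str, [u] + reg[:-1])), 2)
            table[(s, u)] = (ns, w)
    return table

def free_distance(gens, m):
    """Dijkstra on the state graph: lightest path that leaves the all-zero
    state (first input forced to 1) and merges back into it."""
    table = build_table(gens, m)
    start, w0 = table[(0, 1)]
    heap, best = [(w0, start)], {start: w0}
    while heap:
        w, s = heapq.heappop(heap)
        if s == 0:
            return w
        for u in (0, 1):
            ns, bw = table[(s, u)]
            if w + bw < best.get(ns, float("inf")):
                best[ns] = w + bw
                heapq.heappush(heap, (w + bw, ns))
    return None

# the illustrative encoder has the well-known free distance 5
print(free_distance([[1, 1, 1], [1, 0, 1]], 2))  # -> 5
```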
So far we don't see anything strange. However, coding is used to protect information from errors, so assume that, during the transmission of the encoded bit stream through a noisy channel, we receive the sequence

000

where the underbar denotes a bit that has been affected by the noisy channel (flipped). The above received sequence suggests that 3 errors occurred during transmission.
Since the received sequence has a Hamming weight of 3, the all-zero codeword will be selected as the most likely transmitted codeword. If the first encoder was used, we see that the detected info word and the transmitted info word differ in only one position. However, if the second encoder was used, the transmitted info word will differ from the detected info word in an infinite number of positions, although only a finite number of channel errors (3) occurred. This is highly undesirable in transmission systems, and such an encoder is said to be catastrophic.

Catastrophic encoders are to be avoided, and several criteria have been developed in order to identify them. A very popular one is by Massey and Sain and provides a necessary and sufficient condition for an encoder to be non-catastrophic.
THEOREM 1
If $G(D)$ is a rational generator matrix for a convolutional code, then the following conditions are equivalent. If $G(D)$ satisfies any one of them (it will therefore satisfy all the others), the encoder will be non-catastrophic.

1. No infinite Hamming weight input sequence produces a finite Hamming weight codeword (this is equivalent to: all infinite Hamming weight input sequences produce infinite Hamming weight codewords).

2. Let $q(D)$ be the least common multiple of the denominators in the entries of $G(D)$. Define $G'(D) = q(D)\,G(D)$ (i.e. clear the denominators; transform all entries to polynomials). Let $\Delta(D)$ be the greatest common divisor of all $k \times k$ minors of $G'(D)$. When the ratio of $\Delta(D)$ to the corresponding power of $q(D)$ is reduced to lowest terms (no more common factors), yielding a numerator $\Delta'(D)$, then $\Delta'(D) = D^s$ for some $s \geq 0$. (Note that if $G(D)$ is a polynomial matrix, then $\Delta'(D) = \Delta(D)$, since all denominators equal 1.)
3.
EXAMPLE 1
Consider the encoder whose generator matrix has polynomial entries; therefore $q(D) = 1$. The greatest common divisor of all minors of $G(D)$ is not of the form $D^s$, violating condition 2, so the encoder is catastrophic.
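For a rate-1/2 polynomial encoder $(g_1(D), g_2(D))$, the $1 \times 1$ minors are just the two entries, so the Massey-Sain test reduces to a polynomial gcd over GF(2). A sketch (polynomials stored as bitmasks, low-order bit = constant term; the example generators are illustrative):

```python
def deg(p):           # degree of a GF(2) polynomial stored as a bitmask
    return p.bit_length() - 1

def gf2_mod(a, b):    # remainder of a divided by b over GF(2)
    while a and deg(a) >= deg(b):
        a ^= b << (deg(a) - deg(b))
    return a

def gf2_gcd(a, b):    # Euclidean algorithm over GF(2)[D]
    while b:
        a, b = b, gf2_mod(a, b)
    return a

def is_catastrophic_rate_half(g1, g2):
    """A rate-1/2 polynomial encoder (g1, g2) is catastrophic iff
    gcd(g1, g2) is not a power of D, i.e. not a monomial."""
    g = gf2_gcd(g1, g2)
    return g & (g - 1) != 0   # more than one bit set -> not D^s

# classic catastrophic pair: g1 = 1+D, g2 = 1+D^2 = (1+D)^2, gcd = 1+D
print(is_catastrophic_rate_half(0b11, 0b101))   # -> True
# the (1+D+D^2, 1+D^2) pair has gcd 1
print(is_catastrophic_rate_half(0b111, 0b101))  # -> False
```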
COROLLARY 1
Thus, by condition 3, the encoder is non-catastrophic.
To have both polynomials of the same order, we complete the order-deficient polynomial with appropriate zero coefficients, such as to have the same order. More specifically we have:

This rational entry corresponds to the transfer function of a linear time-invariant (LTI) system and therefore can be implemented in many possible ways. Of particular interest are two implementations: the controller canonical form and the observer canonical form.

The controller canonical form is illustrated in figure 1.9. This implementation has the adders out of the delay-element path. A dual implementation of the above, which has the adders in the delay-element path, is known as the observer canonical form and is also illustrated in figure 1.9.
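The controller canonical form with feedback can be sketched as a short recursion: one register holds past values of the feedback signal, the denominator taps drive the feedback, and the numerator taps form the parity bit. Below, a hypothetical rate-1/2 systematic encoder with transfer function $(1,\; (1+D^2)/(1+D+D^2))$; this particular rational entry is an assumed example, not taken from the text.

```python
def systematic_feedback_encode(u, num, den):
    """Rate-1/2 systematic encoder (1, num(D)/den(D)), den[0] must be 1.

    Controller canonical form: reg holds past values a_{t-1}, ..., a_{t-m}
    of the feedback signal a_t.
    """
    m = max(len(num), len(den)) - 1
    num = num + [0] * (m + 1 - len(num))
    den = den + [0] * (m + 1 - len(den))
    reg = [0] * m
    out = []
    for ut in u:
        # feedback: a_t = u_t + den[1] a_{t-1} + ... + den[m] a_{t-m}  (mod 2)
        at = ut
        for l in range(1, m + 1):
            at ^= den[l] & reg[l - 1]
        # parity: v_t = num[0] a_t + num[1] a_{t-1} + ... + num[m] a_{t-m}
        vt = num[0] & at
        for l in range(1, m + 1):
            vt ^= num[l] & reg[l - 1]
        out.append((ut, vt))          # systematic bit, parity bit
        reg = [at] + reg[:-1]
    return out

# impulse response of the parity branch = series expansion of (1+D^2)/(1+D+D^2)
print(systematic_feedback_encode([1, 0, 0, 0], [1, 0, 1], [1, 1, 1]))
# -> [(1, 1), (0, 1), (0, 1), (0, 0)]
```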
DEFINITION 1
The McMillan degree of an encoder is the minimal number of delay elements required to implement it.

DEFINITION 2
An encoder for a code is called minimal if its McMillan degree is the smallest that is possible: if we consider all encoders that generate the same code and find the minimum of their McMillan degrees, then the encoder with that minimal McMillan degree is called minimal.

Question:
Given an encoder, how can we identify if it is minimal? If it is not, how can we find a minimal encoder?

Some answers (many more details in pp. 97-113 of Schlegel's book):
COROLLARY 2
The weight enumerating function of the code is

$$T(W) = \sum_{w} A_w W^w, \qquad (1.3)$$

where $w$ is the Hamming weight of a codeword and $A_w$ is the number of codewords of weight $w$.
3. We consider the set of forward paths. A forward path starts at the initial state and terminates at the final state, without passing from any state more than one time. The gain of each forward path is denoted as $F_i$.

4. We consider the set of cycles (loops). A cycle is a path that starts from any state and terminates at that state, without passing through any other state more than one time. We denote the gain of cycle $i$ as $C_i$.

5. We consider the set of pairs of non-touching cycles. Two cycles are non-touching if they don't have any states in common. Assuming that the cycles $i$ and $j$ are non-touching, the gain of the pair is defined as $C_i C_j$.

6. Similarly we consider the set of triplets of non-touching cycles and define the gain of the triplet $(i, j, k)$ as $C_i C_j C_k$.

7. Then

$$T = \frac{1}{\Delta} \sum_i F_i \Delta_i, \qquad (1.4)$$

where

$$\Delta = 1 - \sum_i C_i + \sum_{i,j} C_i C_j - \sum_{i,j,k} C_i C_j C_k + \cdots$$

(the sums running over single cycles, non-touching pairs, non-touching triplets, etc.), and $\Delta_i$ is defined as $\Delta$, but for the portion of the graph that remains after we erase the states associated with the $i$-th forward path, and the branches originating from or merging into those states.
EXAMPLE 3
Applying the above to the encoder yields the following graph. Examining the above state diagram we found that:

Path 1:
Modified state transition diagram for the encoder, with branches labeled info-bit/coded-bits (0/00, 1/11, 0/01, 1/10, 0/11, 1/00, 0/10, 1/01).
Path 6:
Cycle 1:
Computation of $T(W)$ yields

$$T(W) = W^6 + 3W^7 + 5W^8 + 11W^9 + \cdots \qquad (1.5)$$

Thus the code has free distance 6, with one path achieving it. In addition there are 3 paths of weight 7, 5 paths of weight 8, 11 paths of weight 9, etc.
The above example focused on computing the WEF of a convolutional code. On several occasions, additional information about the structure of the convolutional code is required. For instance, we may be interested in the weight of the info-bit stream that produces the codeword of weight 7, and in how many branches (successive jumps from one state to another) it consists of. This leads to a generalization of the previously described Mason rule. The only modification to Mason's rule is the labeling of the branches in the state transition diagram: we now label each branch as $W^{w} I^{i} L$, where $w$ is the output weight of the branch and $i$ its input weight, instead of just $W^{w}$.
Computation now yields

$$T(W, I, L) = \cdots + W^8 \bigl(I^2 L^6 + I^4 L^7 + I^4 L^8 + 2\, I^4 L^9\bigr) + \cdots \qquad (1.6)$$
Comparing (1.5) and (1.6) we see that the convolutional code of example 3 has 5 codewords of weight 8, out of which 1 has input weight 2 and consists of 6 branches, 1 has input weight 4 and consists of 7 branches, 1 has input weight 4 and consists of 8 branches, and 2 have input weight 4 and consist of 9 branches. Therefore we have much more detailed information about the structure of that code.
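These per-codeword statistics can also be obtained by direct path enumeration instead of Mason's rule: walk all detours that leave the all-zero state and merge back, tallying the exponents $(w, i, l)$ of $W$, $I$, $L$. A sketch, again using the illustrative memory-2 encoder with generators $1+D+D^2$ and $1+D^2$ (a different code from example 3, so the counts differ from (1.6)):

```python
from collections import Counter

def iowef_counts(gens, m, max_len):
    """Tally (output weight w, input weight i, length l in branches) over all
    detours of at most max_len branches from the zero state back to it."""
    def step(s, u):                       # one encoder transition
        reg = [(s >> (m - 1 - i)) & 1 for i in range(m)]
        window = [u] + reg
        w = 0
        for g in gens:
            v = 0
            for l, tap in enumerate(g):
                v ^= window[l] & tap
            w += v
        ns = 0
        for b in [u] + reg[:-1]:
            ns = (ns << 1) | b
        return ns, w

    counts = Counter()
    s0, w0 = step(0, 1)                   # first branch must carry a 1
    frontier = {s0: Counter({(w0, 1): 1})}
    for length in range(2, max_len + 1):
        nxt = {}
        for s, paths in frontier.items():
            for u in (0, 1):
                ns, bw = step(s, u)
                for (w, i), c in paths.items():
                    key = (w + bw, i + u)
                    if ns == 0:           # merged back: record the detour
                        counts[(key[0], key[1], length)] += c
                    else:
                        nxt.setdefault(ns, Counter())[key] += c
        frontier = nxt
    return counts

for (w, i, l), c in sorted(iowef_counts([[1, 1, 1], [1, 0, 1]], 2, 8).items()):
    print(f"{c} path(s): output weight {w}, input weight {i}, {l} branches")
```

For this encoder the lowest-weight entry is a single detour of output weight 5, input weight 1 and 3 branches, i.e. the $W^5 I L^3$ term of its $T(W, I, L)$.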
Assume that the info word $u$ is encoded into the codeword $v$. The codeword is transmitted through a Discrete Memoryless Channel (DMC) and the sequence $r$ is received at the demodulator's output.

The Maximum A-Posteriori Probability (MAP) criterion for detection (decoding) of convolutional codes implies that the codeword $\hat{v}$ is detected iff

$$P(\hat{v} \mid r) \geq P(v \mid r) \quad \text{for all possible sequences } v.$$

In other words, the detected codeword is the one that is most probable given the received sequence.
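The MAP rule can be made concrete with a brute-force decoder. For a binary symmetric channel with crossover probability $p < 1/2$ and equiprobable info words, maximizing $P(v \mid r)$ reduces to minimizing the Hamming distance between $r$ and the candidate codewords. A sketch (encoder taps and test vectors are illustrative; brute force is only feasible for very short info words):

```python
from itertools import product

def encode(u, gens, m):
    """Feedforward rate-1/n encoder (same convolution as before), flat output."""
    out = []
    reg = [0] * m
    for ut in u:
        window = [ut] + reg
        for g in gens:
            v = 0
            for l, tap in enumerate(g):
                v ^= window[l] & tap
            out.append(v)
        reg = [ut] + reg[:-1]
    return out

def ml_decode_bsc(r, gens, m, k):
    """Brute-force decoding over a BSC with p < 1/2: with equiprobable info
    words the MAP rule reduces to minimum Hamming distance."""
    best_u, best_d = None, None
    for u in product((0, 1), repeat=k):
        v = encode(list(u), gens, m)
        d = sum(a != b for a, b in zip(v, r))
        if best_d is None or d < best_d:
            best_u, best_d = u, d
    return best_u, best_d

gens, m = [[1, 1, 1], [1, 0, 1]], 2     # illustrative encoder
u = [1, 0, 1, 1, 0, 0]                  # last m = 2 zeros terminate the trellis
v = encode(u, gens, m)
r = v[:]
r[3] ^= 1                               # one channel error
print(ml_decode_bsc(r, gens, m, len(u)))
# -> ((1, 0, 1, 1, 0, 0), 1): the transmitted info word, at distance 1
```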