Figure 10.24 Illustration of the operations involved in the four possible state transitions. Each panel shows the input bit, the stored bit, and the resulting encoded bits, labeled input/encoded bits: (a) 0/00, (b) 1/11, (c) 0/01, (d) 1/10.
To continue the background material for the example, we need to bring in a mapper that
transforms the encoded signal into a form suitable for transmission over the AWGN channel.
To this end, consider the simple example of binary PSK as the mapper. We may then express
the SNR at the channel output (i.e., receiver input) as follows (see Problem 10.35):
SNR_(channel output) = E_s/N_0 = r·(E_b/N_0)   (10.113)
Haykin_ch10_pp3.fm Page 642 Friday, January 4, 2013 5:03 PM
where E_b is the signal energy per message bit applied to the encoder input, and r is the code
rate of the convolutional encoder. Thus, for SNR = 1/2 (that is, -3.01 dB) and r = 3/8, the
required E_b/N_0 is 4/3.
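As a quick numeric check of (10.113), the sketch below plugs in the example's values (SNR = 1/2, code rate r = 3/8) and recovers both the decibel figure and the required E_b/N_0:

```python
# Numerical check of (10.113): SNR at the channel output equals r * Eb/N0,
# so the required Eb/N0 is SNR / r. Values are those of the text's example.
import math

snr = 0.5        # SNR at the channel output
r = 3 / 8        # code rate of the convolutional encoder

eb_n0 = snr / r                    # required Eb/N0
snr_db = 10 * math.log10(snr)      # the same SNR expressed in decibels

print(eb_n0)    # 1.333... = 4/3
print(snr_db)   # approximately -3.01 dB
```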
In transmitting the coded vector c over the AWGN environment, it is assumed that the
received signal vector, normalized with respect to sqrt(E_s), is given by

r = (+0.8, +0.1; +1.0, -0.5; -1.8, +1.1; +1.6, -1.6)

where the four pairs, delimited by semicolons, are r_0, r_1, r_2, and r_3.
The received vector r is included at the top of the trellis diagram in Figure 10.23b.
We are now fully prepared to proceed with decoding the received vector r using the
max-log-MAP algorithm described next, assuming the message bits are equally likely.
Computation of the Decoded Message Vector
To set the stage for this computation, we find it convenient to reproduce the following
equations, starting with the formula for the log-domain transition metrics:

γ*_j(s′, s) = (1/2) L_c r_j^T c_j,   j = 0, 1, ..., K - 1   (10.114)
Then, for the log-domain forward metrics:

α*_{j+1}(s) = max over s′ of [γ*_j(s′, s) + α*_j(s′)],   j = 0, 1, ..., K - 1   (10.115)

and, for the log-domain backward metrics:

β*_j(s′) = max over s of [γ*_j(s′, s) + β*_{j+1}(s)],   j = K - 1, ..., 1, 0
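The two recursions can be sketched compactly as follows; the function names and the tuple layout used to describe the trellis (previous state, next state, input bit, transition metric) are illustrative choices, not notation from the text:

```python
# Sketch of the max-log forward and backward recursions for a small trellis.
# trans[j] lists the valid transitions at step j as tuples
# (s_prev, s_next, input_bit, gamma); states are integers 0..num_states-1.

NEG = float("-inf")

def forward_metrics(trans, num_states, K):
    """alpha*_{j+1}(s) = max over s' of [gamma*_j(s', s) + alpha*_j(s')]."""
    alpha = [[0.0] * num_states]              # alpha*_0(s) = 0 for all s
    for j in range(K):
        nxt = [NEG] * num_states
        for s_prev, s_next, _, g in trans[j]:
            nxt[s_next] = max(nxt[s_next], alpha[j][s_prev] + g)
        alpha.append(nxt)
    return alpha

def backward_metrics(trans, num_states, K):
    """beta*_j(s') = max over s of [gamma*_j(s', s) + beta*_{j+1}(s)]."""
    beta = [None] * K + [[0.0] * num_states]  # beta*_K(s) = 0 for all s
    for j in range(K - 1, -1, -1):
        cur = [NEG] * num_states
        for s_prev, s_next, _, g in trans[j]:
            cur[s_prev] = max(cur[s_prev], g + beta[j + 1][s_next])
        beta[j] = cur
    return beta
```

Unreachable states simply keep the metric negative infinity, which the max operation then ignores.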
MATLAB code was used to perform the computation, starting with the initial
conditions for the forward and backward metrics, α*_0(s) and β*_K(s), defined in (10.79)
and (10.80), respectively. The results of the computation are summarized as follows:
1. Log-domain transition metrics

Gamma 0:  γ*_0(S_0, S_0) = -0.9,  γ*_0(S_0, S_1) = 0.9

Gamma 1:  γ*_1(S_0, S_0) = -0.5,  γ*_1(S_1, S_0) = 1.5,  γ*_1(S_0, S_1) = 0.5,  γ*_1(S_1, S_1) = -1.5

Gamma 2:  γ*_2(S_0, S_0) = 0.7,  γ*_2(S_1, S_0) = -2.9,  γ*_2(S_0, S_1) = -0.7,  γ*_2(S_1, S_1) = 2.9

Gamma 3:  γ*_3(S_0, S_0) = 0,  γ*_3(S_1, S_0) = 3.2
2. Log-domain forward metrics

Alpha 0:  α*_0(S_0) = 0,  α*_0(S_1) = 0

Alpha 1:  α*_1(S_0) = -0.9,  α*_1(S_1) = 0.9

Alpha 2:  α*_2(S_0) = 2.4,  α*_2(S_1) = -0.4
3. Log-domain backward metrics

Beta K:  β*_K(S_0) = 0,  β*_K(S_1) = 0

Beta 3:  β*_3(S_0) = 0,  β*_3(S_1) = 3.2

Beta 2:  β*_2(S_0) = 2.5,  β*_2(S_1) = 6.1

Beta 1:  β*_1(S_0) = 6.6,  β*_1(S_1) = 4.6
4. A posteriori L-values

L_p(m_0) = -0.2
L_p(m_1) = 0.2        (10.118)
L_p(m_2) = -0.8
5. Final decision

The decoded version of the original message vector is

m̂ = (-1, +1, -1)        (10.119)
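The entire computation can be reproduced with a short script. The sketch below is in Python rather than the MATLAB mentioned earlier; the trellis transitions follow Figure 10.24, and the channel reliability value L_c = 2 is an assumption implied by the listed transition-metric values:

```python
# Sketch reproducing the worked example end to end: transition metrics per
# (10.114), forward/backward max-log recursions, a posteriori L-values, and
# the final hard decision. States: S0 = 0, S1 = 1; bits map 0 -> -1, 1 -> +1.

r = [(0.8, 0.1), (1.0, -0.5), (-1.8, 1.1), (1.6, -1.6)]  # received pairs
Lc = 2.0            # channel reliability (implied by the gamma values)
NEG = float("-inf")

# (s_prev, s_next, input_bit, bipolar code bits) per Figure 10.24
edges = [(0, 0, 0, (-1, -1)), (0, 1, 1, (1, 1)),
         (1, 1, 0, (-1, 1)), (1, 0, 1, (1, -1))]
trellis = [
    [e for e in edges if e[0] == 0],   # j = 0: encoder starts in S0
    edges, edges,                      # j = 1, 2: all transitions valid
    [e for e in edges if e[1] == 0],   # j = 3: termination back to S0
]

def gamma(j, c):
    # gamma*_j(s', s) = (1/2) * Lc * r_j^T c_j, per (10.114)
    return 0.5 * Lc * (r[j][0] * c[0] + r[j][1] * c[1])

K = len(r)
alpha = [[0.0, 0.0]] + [[NEG, NEG] for _ in range(K)]
for j in range(K):
    for sp, sn, _, c in trellis[j]:
        alpha[j + 1][sn] = max(alpha[j + 1][sn], alpha[j][sp] + gamma(j, c))

beta = [[NEG, NEG] for _ in range(K)] + [[0.0, 0.0]]
for j in range(K - 1, -1, -1):
    for sp, sn, _, c in trellis[j]:
        beta[j][sp] = max(beta[j][sp], gamma(j, c) + beta[j + 1][sn])

L = []
for j in range(3):   # L-values for the three message bits only
    best = {0: NEG, 1: NEG}
    for sp, sn, bit, c in trellis[j]:
        best[bit] = max(best[bit], alpha[j][sp] + gamma(j, c) + beta[j + 1][sn])
    L.append(best[1] - best[0])

decision = [1 if v > 0 else -1 for v in L]
print([round(v, 1) for v in L])   # [-0.2, 0.2, -0.8], matching (10.118)
print(decision)                   # [-1, 1, -1], matching (10.119)
```

Running the script reproduces every intermediate quantity listed above, including the forward metrics (e.g., α*_2(S_0) = 2.4) and backward metrics (e.g., β*_1(S_0) = 6.6).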
Traditionally, the design of good codes has been tackled by constructing codes with a great
deal of algebraic structure, for which there are feasible decoding schemes. Such an
approach is exemplified by the linear block codes, cyclic codes, and convolutional codes
discussed in preceding sections of this chapter. The difficulty with these traditional codes
is that, in an effort to approach the theoretical limit for Shannon’s channel capacity, we
need to increase the codeword length of a linear block code or the constraint length of a
convolutional code, which, in turn, causes the computational complexity of a maximum
likelihood or maximum a posteriori decoder to increase exponentially. Ultimately, we
reach a point where complexity of the decoder is so high that it becomes physically
impractical.
Ironically enough, in his 1948 paper, Shannon showed that the “average” performance
of a randomly chosen ensemble of codes results in an exponentially decreasing decoding
error probability with increasing block length. Unfortunately, as with his coding theorem,
Shannon did not provide guidance on how to construct such random-like codes.
Turbo Encoder
As mentioned in the preceding section, the use of a good code with random-like properties is
basic to turbo coding. In the first successful implementation of turbo codes,11 Berrou et al.
achieved this design objective by using concatenated codes. The original idea of
concatenated codes was conceived by Forney (1966). To be more specific, concatenated
codes can be of two types: parallel or serial. The type of concatenated codes used by
Berrou et al. was of the parallel type, which is discussed in this section. Discussion of the
serial type of concatenated codes will be taken up in Section 10.16.
Figure 10.25 Block diagram of the most basic form of turbo encoder. The message bits are transmitted directly as systematic bits and applied to Encoder 1, which produces one set of parity-check bits; an interleaved version of the message bits is applied to Encoder 2, which produces a second set of parity-check bits; systematic and parity-check bits together form the output code bits.
Figure 10.25 depicts the most basic form of a turbo code generator that consists of two
constituent systematic encoders, which are concatenated by means of an interleaver.
The interleaver is an input–output mapping device that permutes the ordering of a
sequence of symbols from a fixed alphabet in a completely deterministic manner; that is, it
takes the symbols at the input and produces identical symbols at the output but in a different
temporal order. Turbo codes use a pseudo-random interleaver, which operates only on the
systematic (i.e., message) bits. (Interleavers are discussed in Appendix F.) The size of the
interleaver used in turbo codes is typically very large, on the order of several thousand bits.
There are two reasons for the use of an interleaver in a turbo code:
1. The interleaver ties together errors that are easily made in one half of the turbo code
to errors that are exceptionally unlikely to occur in the other half; this is indeed one
reason why the turbo code performs better than a traditional code.
2. The interleaver provides robust performance with respect to mismatched decoding, a
problem that arises when the channel statistics are not known or have been
incorrectly specified.
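The deterministic input-output mapping described above can be sketched as a seeded permutation; the interleaver size, seed value, and function names below are illustrative, not values from the text (practical turbo interleavers are far larger):

```python
# Sketch of a pseudo-random interleaver: a fixed, seeded permutation that
# reorders the systematic bits deterministically. The decoder uses the
# inverse permutation to restore the original ordering.
import random

def make_interleaver(n, seed=1234):
    """Build a pseudo-random permutation of size n and its inverse."""
    perm = list(range(n))
    random.Random(seed).shuffle(perm)   # fixed seed -> fully deterministic
    inv = [0] * n
    for i, p in enumerate(perm):
        inv[p] = i
    return perm, inv

def interleave(bits, perm):
    """Output the same symbols in permuted temporal order."""
    return [bits[p] for p in perm]

perm, inv = make_interleaver(8)
bits = [1, 0, 1, 1, 0, 0, 1, 0]
scrambled = interleave(bits, perm)       # fed to the second encoder
restored = interleave(scrambled, inv)    # deinterleaving recovers the input
```

Because the permutation is fixed and known at both ends, the interleaver adds random-like structure to the code without adding any uncertainty for the decoder.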
Ordinarily, but not necessarily, the same code is used for both constituent encoders in
Figure 10.25. The constituent codes recommended for turbo codes are short constraint-
length RSC codes. The reason for making the convolutional codes recursive (i.e., feeding
one or more of the tap outputs in the shift register back to the input) is to make the internal
state of the shift register depend on past outputs. This affects the behavior of the error
patterns, with the result that a better performance of the overall coding strategy is attained.
Consider, then, the RSC encoder whose generator is given by

G(D) = [1, 1/(1 + D)]

The input sequence of bits has length K = 4, made up of three message bits and one
termination bit. (This RSC encoder was discussed previously in Section 10.9.) The input
vector is given by

m = (m_0, m_1, m_2, m_3)
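For this generator, the feedback term 1/(1 + D) makes the new stored bit equal to the input bit XOR the previous stored bit, and the parity-check bit equals that new stored bit. A minimal sketch (the function name and the bit-pair output convention are illustrative) reproduces exactly the four transitions of Figure 10.24:

```python
# Sketch of the rate-1/2 RSC encoder with G(D) = [1, 1/(1+D)]: one storage
# bit, systematic output equal to the input, and parity fed back through
# 1/(1+D). The state update s <- m XOR s yields the transitions 0/00, 1/11,
# 0/01, 1/10 of Figure 10.24.

def rsc_encode(message_bits):
    s = 0                       # stored bit, encoder starts in state S0
    out = []
    for m in message_bits:
        p = m ^ s               # feedback: new stored bit = input XOR stored
        out.append((m, p))      # (systematic bit, parity-check bit)
        s = p
    # termination bit equals the current stored bit, driving s back to 0
    out.append((s, 0))
    return out
```

For example, encoding the message bits (0, 1, 0) visits, in order, the transitions (a), (b), (c), and (d) of Figure 10.24, the last being the termination step.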