10.10 Illustrative Procedure for MAP Decoding in the Log-Domain

[Figure 10.24: four state-transition diagrams of the two-state RSC encoder, one per panel; each panel shows the input bit, the stored bit, and the resulting encoded bits. The panel labels, read as input bit/encoded bits, are (a) 0/00, (b) 1/11, (c) 0/01, (d) 1/10.]

Figure 10.24 Illustration of the operations involved in the four possible state transitions.
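In programming terms, these four transitions amount to a small lookup table. The following minimal Python sketch (our illustration, not from the text) tabulates them as (stored bit, input bit) mapped to (encoded bits, next stored bit); the next-state entries are the ones consistent with the trellis used later in the example, and the input bits fed in at the bottom are hypothetical:

# The four transitions of Figure 10.24, keyed by (stored bit, input bit);
# each maps to (encoded bits, next stored bit)
TRANSITIONS = {
    (0, 0): ("00", 0),  # (a) 0/00
    (0, 1): ("11", 1),  # (b) 1/11
    (1, 0): ("01", 1),  # (c) 0/01
    (1, 1): ("10", 0),  # (d) 1/10
}

state = 0
for bit in (1, 0, 1):            # hypothetical input bits, for illustration
    out, state = TRANSITIONS[(state, bit)]
    print(bit, "->", out)        # prints 1 -> 11, 0 -> 01, 1 -> 10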

To continue the background material for the example, we need to bring in a mapper that
transforms the encoded signal into a form suitable for transmission over the AWGN channel.
To this end, consider the simple example of binary PSK as the mapper. We may then express
the SNR at the channel output (i.e., receiver input) as follows (see Problem 10.35):
$$\text{SNR}_{\text{channel output}} = \frac{E_s}{N_0} = r\left(\frac{E_b}{N_0}\right) \tag{10.113}$$

where $E_b$ is the signal energy per message bit applied to the encoder input and $r$ is the code rate of the convolutional encoder. Thus, for SNR = 1/2 (that is, $-3.01$ dB) and $r = 3/8$, the required $E_b/N_0$ is $(1/2)/(3/8) = 4/3$.
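As a quick numerical check (ours, not part of the text) of how these figures follow from (10.113):

import math

# Rearranging (10.113): Eb/N0 = SNR / r
snr = 1 / 2                       # Es/N0 at the channel output
r = 3 / 8                         # code rate of the convolutional encoder
print(snr / r)                    # 1.333..., i.e., Eb/N0 = 4/3
print(10 * math.log10(snr))       # -3.0103 dB, matching the text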
In transmitting the coded vector $\mathbf{c}$ over the AWGN environment, it is assumed that the received signal vector, normalized with respect to $\sqrt{E_s}$, is given by

$$\mathbf{r} = (\underbrace{+0.8, +0.1}_{\mathbf{r}_0};\;\underbrace{+1.0, -0.5}_{\mathbf{r}_1};\;\underbrace{-1.8, +1.1}_{\mathbf{r}_2};\;\underbrace{+1.6, -1.6}_{\mathbf{r}_3})$$
The received vector r is included at the top of the trellis diagram in Figure 10.23b.
We are now fully prepared to proceed with decoding the received vector r using the
max-log-MAP algorithm described next, assuming the message bits are equally likely.
Computation of the Decoded Message Vector
To set the stage for this computation, we find it convenient to reproduce the following equations, starting with the formula for the log-domain transition metrics:
$$\gamma_j^*(s',s) = \frac{1}{2} L_c \mathbf{r}_j \mathbf{c}_j^T, \qquad j = 0, 1, \ldots, K-1 \tag{10.114}$$

Then for the log-domain forward metrics:

$$\alpha_{j+1}^*(s) \approx \max_{s' \in \sigma_j} \left[\gamma_j^*(s',s) + \alpha_j^*(s')\right], \qquad j = 0, 1, \ldots, K-1 \tag{10.115}$$

Next, for the log-domain backward metrics:

$$\beta_j^*(s') = \max_{s \in \sigma_{j+1}} \left[\gamma_j^*(s',s) + \beta_{j+1}^*(s)\right] \tag{10.116}$$

And finally for computation of the a posteriori L-values:

$$L_p(m_j) = \max_{(s',s) \in \Sigma_j^+} \left[\beta_{j+1}^*(s) + \gamma_j^*(s',s) + \alpha_j^*(s')\right] - \max_{(s',s) \in \Sigma_j^-} \left[\beta_{j+1}^*(s) + \gamma_j^*(s',s) + \alpha_j^*(s')\right] \tag{10.117}$$
MATLAB code was used to perform the computation, starting with the initial conditions for the forward and backward metrics, $\alpha_0^*(s)$ and $\beta_K^*(s)$, defined in (10.79) and (10.80), respectively. The results of the computation are summarized as follows:
1. Log-domain transition metrics

Gamma 0:
$\gamma_0^*(S_0, S_0) = -0.9$
$\gamma_0^*(S_0, S_1) = 0.9$

Gamma 1:
$\gamma_1^*(S_0, S_0) = -0.5$
$\gamma_1^*(S_1, S_0) = 1.5$
$\gamma_1^*(S_0, S_1) = 0.5$
$\gamma_1^*(S_1, S_1) = -1.5$
Gamma 2:
$\gamma_2^*(S_0, S_0) = 0.7$
$\gamma_2^*(S_1, S_0) = -2.9$
$\gamma_2^*(S_0, S_1) = -0.7$
$\gamma_2^*(S_1, S_1) = 2.9$

Gamma 3:
$\gamma_3^*(S_0, S_0) = 0$
$\gamma_3^*(S_1, S_0) = 3.2$
2. Log-domain forward metrics

Alpha 0:
$\alpha_0^*(S_0) = 0$
$\alpha_0^*(S_1) = 0$

Alpha 1:
$\alpha_1^*(S_0) = -0.9$
$\alpha_1^*(S_1) = 0.9$

Alpha 2:
$\alpha_2^*(S_0) = 2.4$
$\alpha_2^*(S_1) = -0.4$
3. Log-domain backward metrics

Beta K:
$\beta_K^*(S_0) = 0$
$\beta_K^*(S_1) = 0$

Beta 3:
$\beta_3^*(S_0) = 0$
$\beta_3^*(S_1) = 3.2$

Beta 2:
$\beta_2^*(S_0) = 2.5$
$\beta_2^*(S_1) = 6.1$

Beta 1:
$\beta_1^*(S_0) = 6.6$
$\beta_1^*(S_1) = 4.6$

4. A posteriori L-values

$$L_p(m_0) = -0.2, \qquad L_p(m_1) = +0.2, \qquad L_p(m_2) = -0.8 \tag{10.118}$$

5. Final decision

Decoded version of the original message vector:

$$\hat{\mathbf{m}} = (-1, +1, -1) \tag{10.119}$$

In binary form, we may equivalently write

$$\hat{\mathbf{m}} = (0, 1, 0)$$
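The book's computation was carried out in MATLAB; as a cross-check, the following self-contained Python sketch (ours, not the book's program) reproduces every metric and L-value above. It assumes the channel reliability $L_c = 4E_s/N_0 = 2$, which is consistent with the Gamma values listed in step 1; the known start and end in state $S_0$ are enforced through the initializations.

import numpy as np

# Received pairs r_j, normalized with respect to sqrt(Es)
r = np.array([[+0.8, +0.1], [+1.0, -0.5], [-1.8, +1.1], [+1.6, -1.6]])
Lc = 2.0                         # channel reliability, 4*Es/N0 with Es/N0 = 1/2

# Trellis of the two-state RSC encoder (Figure 10.24):
# (previous state s', next state s, message bit, bipolar code bits c_j)
transitions = [
    (0, 0, -1, (-1, -1)),        # (a) 0/00
    (0, 1, +1, (+1, +1)),        # (b) 1/11
    (1, 1, -1, (-1, +1)),        # (c) 0/01
    (1, 0, +1, (+1, -1)),        # (d) 1/10
]

K = 4                            # three message bits plus one termination bit
NEG = -1e9                       # stands in for minus infinity

def gamma(j, c):
    # Log-domain transition metric (10.114): 0.5 * Lc * (r_j . c_j)
    return 0.5 * Lc * (r[j, 0] * c[0] + r[j, 1] * c[1])

# Forward recursion (10.115); the trellis starts in state S0, which we
# enforce by initializing the S1 entry to "minus infinity"
alpha = np.full((K + 1, 2), NEG)
alpha[0, 0] = 0.0
for j in range(K):
    for sp, s, m, c in transitions:
        alpha[j + 1, s] = max(alpha[j + 1, s], alpha[j, sp] + gamma(j, c))

# Backward recursion (10.116); the trellis terminates in state S0
beta = np.full((K + 1, 2), NEG)
beta[K, 0] = 0.0
for j in range(K - 1, -1, -1):
    for sp, s, m, c in transitions:
        beta[j, sp] = max(beta[j, sp], beta[j + 1, s] + gamma(j, c))

# A posteriori L-values (10.117): best +1 transition minus best -1 transition
for j in range(3):               # the three message bits only
    best = {+1: NEG, -1: NEG}
    for sp, s, m, c in transitions:
        best[m] = max(best[m], alpha[j, sp] + gamma(j, c) + beta[j + 1, s])
    Lp = best[+1] - best[-1]
    print(f"Lp(m{j}) = {Lp:+.1f} -> bit {1 if Lp > 0 else 0}")

The printed values are $-0.2$, $+0.2$, and $-0.8$, in agreement with (10.118), and the sign decisions yield $\hat{\mathbf{m}} = (0, 1, 0)$.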

Two Final Remarks on Example 7


1. In arriving at the decoded output of (10.119) we made use of the termination bit, $m_3$. Although $m_3$ is not a message bit, the same procedure was used to calculate its a posteriori L-value. Lin and Costello (2004) showed that this kind of calculation is a necessary requirement in the iterative decoding of turbo codes. Specifically, with the turbo decoder consisting of two stages, "soft-output" a posteriori L-values computed by one stage are passed as a priori inputs to the second stage.
2. In Example 7, we focused attention on the application of the max-log-MAP algorithm to decode the rate-3/8 RSC code produced by the two-state encoder of Figure 10.23a. The procedure described herein applies equally well to the log-MAP algorithm with no approximations. In Problem 10.34 at the end of the chapter, the objective is to show that the corresponding decoded output is (+1, +1, -1), which is different from that of Example 7. Naturally, in arriving at this new result, the calculations are somewhat more demanding but more accurate in the final decision-making.
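As a side note on remark 2: the exact log-MAP algorithm replaces every max above with the Jacobian logarithm max*, which adds a bounded correction term. A one-function Python sketch (ours) of the difference:

import math

def max_star(a, b):
    # Jacobian logarithm used by the exact log-MAP algorithm:
    # max*(a, b) = ln(e^a + e^b) = max(a, b) + ln(1 + e^(-|a - b|))
    return max(a, b) + math.log1p(math.exp(-abs(a - b)))

# max-log-MAP keeps only max(a, b); the dropped correction term,
# at most ln 2, is the source of the small loss in accuracy
print(max_star(2.4, -0.4))       # 2.459..., versus max(2.4, -0.4) = 2.4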

10.11 New Generation of Probabilistic Compound Codes

Traditionally, the design of good codes has been tackled by constructing codes with a great
deal of algebraic structure, for which there are feasible decoding schemes. Such an
approach is exemplified by the linear block codes, cyclic codes, and convolutional codes
discussed in preceding sections of this chapter. The difficulty with these traditional codes
is that, in an effort to approach the theoretical limit for Shannon’s channel capacity, we
need to increase the codeword length of a linear block code or the constraint length of a
convolutional code, which, in turn, causes the computational complexity of a maximum-likelihood or maximum a posteriori decoder to increase exponentially. Ultimately, we reach a point where the complexity of the decoder is so high that it becomes physically impractical.
Ironically enough, in his 1948 paper, Shannon showed that the "average" performance of a randomly chosen ensemble of codes results in a probability of decoding error that decreases exponentially with increasing block length. Unfortunately, as with his coding theorem, Shannon did not provide guidance on how to construct such randomly chosen codes.

The Turbo Revolution Followed by LDPC Rediscovery


Interest in the use of randomly chosen codes was essentially dormant for a long time until
the new idea of turbo coding was described by Berrou et al. (1993); that idea was based on
two design initiatives:
1. The design of a good code, the construction of which is characterized by random-
like properties.
2. The iterative design of a decoder that makes use of soft-output values by exploiting
the maximum a posteriori decoding algorithm due to Bahl et al. (1974).
By exploiting these two ideas, it was experimentally demonstrated that turbo coding can
approach the Shannon limit at a computational cost that would have been infeasible with
traditional algebraic codes. Therefore, it can be said that the invention of turbo coding
deserves to be ranked among the major technical achievements in the design of
communication systems in the 20th century.
What is also remarkable is the fact that the discovery of turbo coding and iterative decoding rekindled theoretical as well as practical interest in some prior work by Gallager (1962, 1963) on LDPC codes. These codes also possess the information-processing power
to approach the Shannon limit in their own individual ways. The important point to note
here is the fact that both turbo codes and LDPC codes are capable of approaching the
Shannon limit at a similar level of computational complexity, provided that they both have
a sufficiently long codeword. Specifically, turbo codes require a long turbo interleaver,
whereas LDPC codes require a longer codeword at a given code rate (Hanzo, 2012).
We thus have two basic classes of probabilistic compound coding techniques: turbo
codes and LDPC codes, which complement each other in the following sense:
Turbo encoders are simple to design but the decoding algorithm can be
demanding. In contrast, LDPC encoders are relatively complex but they are
simple to decode.
With these introductory remarks, the stage is set for the study of turbo codes in Section
10.12, followed by LDPC codes in Section 10.14.

10.12 Turbo Codes

Turbo Encoder
As mentioned in the preceding section, the use of a good code with random-like properties is
basic to turbo coding. In the first successful implementation of turbo codes, Berrou et al.
achieved this design objective by using concatenated codes. The original idea of
concatenated codes was conceived by Forney (1966). To be more specific, concatenated
codes can be of two types: parallel or serial. The type of concatenated codes used by
Berrou et al. was of the parallel type, which is discussed in this section. Discussion of the
serial type of concatenated codes will be taken up in Section 10.16.

[Figure 10.25: the message bits drive Encoder 1 directly and Encoder 2 through an interleaver; the systematic (message) bits and the parity-check bits produced by the two encoders are combined to form the output code bits.]

Figure 10.25 Block diagram of turbo encoder of the parallel type.

Figure 10.25 depicts the most basic form of a turbo code generator that consists of two
constituent systematic encoders, which are concatenated by means of an interleaver.
The interleaver is an input–output mapping device that permutes the ordering of a
sequence of symbols from a fixed alphabet in a completely deterministic manner; that is, it
takes the symbols at the input and produces identical symbols at the output but in a different
temporal order. Turbo codes use a pseudo-random interleaver, which operates only on the
systematic (i.e., message) bits. (Interleavers are discussed in Appendix F.) The size of the
interleaver used in turbo codes is typically very large, on the order of several thousand bits.
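As an illustration of such a deterministic permutation, the following minimal Python sketch (ours; practical turbo interleavers are designed far more carefully) builds a seeded pseudo-random interleaver and applies it:

import random

def make_interleaver(n, seed=0):
    # A fixed, pseudo-random permutation of n positions; the seed makes
    # the mapping completely deterministic, as required for decoding
    perm = list(range(n))
    random.Random(seed).shuffle(perm)
    return perm

def interleave(bits, perm):
    # Same symbols out as in, only in a different temporal order
    return [bits[p] for p in perm]

perm = make_interleaver(8)
print(interleave([0, 1, 1, 0, 1, 0, 0, 1], perm))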
There are two reasons for the use of an interleaver in a turbo code:
1. The interleaver ties errors that are easily made in one half of the turbo code to errors that are exceptionally unlikely to occur in the other half; this is indeed one reason why the turbo code performs better than a traditional code.
2. The interleaver provides robust performance with respect to mismatched decoding, a
problem that arises when the channel statistics are not known or have been
incorrectly specified.
Ordinarily, but not necessarily, the same code is used for both constituent encoders in
Figure 10.25. The constituent codes recommended for turbo codes are short constraint-
length RSC codes. The reason for making the convolutional codes recursive (i.e., feeding
one or more of the tap outputs in the shift register back to the input) is to make the internal
state of the shift register depend on past outputs. This affects the behavior of the error
patterns, with the result that a better performance of the overall coding strategy is attained.

EXAMPLE 8 Two-State Turbo Encoder


Figure 10.26 shows the block diagram of a specific turbo encoder using an identical pair of
two-state RSC constituent encoders. The generator matrix of each constituent encoder is
given by

G  D  =  1 -------------
1
 1 + D
The input sequence of bits has length K = 4 , made up of three message bits and one
termination bit. (This RSC encoder was discussed previously in Section 10.9.) The input
vector is given by
m =  m 0 m 1 m 2 m 3 
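A minimal Python sketch of this constituent encoder follows, assuming the realization in which the single stored bit is the previous parity bit, so that $p_j = m_j \oplus p_{j-1}$, consistent with the transitions of Figure 10.24; the message bits passed in at the bottom are hypothetical, for illustration only:

def rsc_encode(message_bits):
    # Two-state RSC encoder with generator matrix G(D) = [1, 1/(1+D)]:
    # each input produces the systematic bit m_j and the parity bit
    # p_j = m_j XOR p_{j-1}
    state = 0                    # stored bit, initially zero
    coded = []
    for m in message_bits:
        p = m ^ state            # feedback: parity depends on the past parity
        coded.append((m, p))     # (systematic bit, parity-check bit)
        state = p
    t = state                    # termination bit: drives the encoder back
    coded.append((t, t ^ state)) # to the all-zero state (parity 0)
    return coded

# Hypothetical three-bit message plus the computed termination bit
print(rsc_encode([0, 1, 0]))     # [(0, 0), (1, 1), (0, 1), (1, 0)]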
