
Haykin_ch10_pp3.fm Page 659 Tuesday, January 8, 2013 1:00 PM

10.13 EXIT Charts 659

[Figure 10.33: four panels (a)-(d), each plotting extrinsic information I2 against I1 on axes spanning 0 to 1, for BPSK modulation in an AWGN channel.]

Figure 10.33 (a) Extrinsic information transfer characteristic of decoder 1; (b) Extrinsic information
transfer characteristic of decoder 2; (c) Input-output extrinsic transfer characteristic of the two
constituent decoders working together; (d) EXIT chart, including the staircase (shown dashed)
embracing the extrinsic information transfer characteristics of both constituent decoders.

Iteration 2: message bit, m2

Decoder 1: I1^(2)(m2) defines I2^(2)(m2).
Decoder 2: I2^(2)(m2) initiates I1^(3)(m3) for iteration 3.

Iteration 3: message bit, m3

Decoder 1: I1^(3)(m3) defines I2^(3)(m3).
Decoder 2: I2^(3)(m3) initiates I1^(4)(m4) for iteration 4.

Iteration 4: message bit, m4

Decoder 1: I1^(4)(m4) defines I2^(4)(m4).
Decoder 2: I2^(4)(m4) initiates I1^(5)(m5) for iteration 5.

and so on.
Proceeding in the way just described, we may construct the EXIT chart illustrated in Figure
10.33d, which embodies a trajectory that moves from one constituent decoder to the other in
the form of a staircase. Specifically, the extrinsic information transfer curve from constituent
decoder 1 to constituent decoder 2 proceeds in a horizontal manner and, by the same token,
the extrinsic information transfer curve from constituent decoder 2 to constituent decoder 1
proceeds in a vertical manner. Hereafter, construction of the sequence of extrinsic
information transfer curves from one constituent decoder to another is called the staircase-
shaped extrinsic information transfer trajectory between constituent decoders 1 and 2.
Examination of the EXIT chart depicted in Figure 10.33d prompts us to make the
following two observations:
a. Provided that the SNR at the channel output is sufficiently high, then the extrinsic
information transfer curve of constituent decoder 1 stays above the straight line
I1 = I2, while the corresponding extrinsic information transfer curve of constituent
decoder 2 stays below this line. It follows, therefore, that an open tunnel exists
between the extrinsic information transfer curves of the two constituent decoders.
Under this scenario, the turbo-decoding algorithm converges to a stable solution for
the prescribed Eb/N0.
b. The estimates of extrinsic information in the turbo-decoding algorithm continually
become more reliable from one iteration to the next as the stable solution is
approached.
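The staircase construction just described is easy to simulate once the two transfer characteristics are available as functions. The sketch below uses hypothetical closed-form curves as stand-ins for measured ones (the curve shapes and function names are illustrative assumptions, not from the text); it alternates between the two decoders and traces the trajectory toward the (1, 1) point.

```python
def staircase_trajectory(f1, f2, n_iter=20):
    """Trace the staircase EXIT trajectory: f1 maps I1 -> I2 (extrinsic
    transfer of decoder 1), f2 maps I2 -> I1 (extrinsic transfer of
    decoder 2). Returns the visited (I1, I2) points."""
    points = [(0.0, 0.0)]
    i1 = 0.0
    for _ in range(n_iter):
        i2 = f1(i1)              # decoder 1 produces extrinsic information
        points.append((i1, i2))
        i1 = f2(i2)              # decoder 2 feeds extrinsic information back
        points.append((i1, i2))
    return points

# Hypothetical monotone transfer curves standing in for measured ones;
# decoder 1's curve stays above I2 = I1 (decoder 2's, once mirrored in
# the chart, stays below it), so the tunnel is open.
f1 = lambda x: min(1.0, 0.3 + 0.75 * x)
f2 = lambda x: min(1.0, 0.3 + 0.75 * x)

trajectory = staircase_trajectory(f1, f2)
print("final point:", trajectory[-1])    # reaches (1.0, 1.0): tunnel open
```

With these toy curves each full iteration strictly increases I1, so the trajectory climbs the open tunnel and converges to the (1, 1) point, mirroring observations a and b above.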
If, however, in contrast to the picture depicted in Figure 10.33d, no open tunnel exists
between the extrinsic information transfer curves of constituent decoders 1 and 2 when the
prescribed Eb/N0 is relatively low, then the turbo-decoding algorithm fails to converge
(i.e., the turbo-decoding algorithm is unstable). This behavior is illustrated in the EXIT
chart of Figure 10.34 where the SNR has been reduced compared to that in Figure 10.33.

[Figure 10.34: a single EXIT chart plotting Ie against Ia on axes spanning 0 to 1, for BPSK modulation in an AWGN channel.]

Figure 10.34 EXIT chart demonstrating nonconvergent behavior of the turbo decoder when the Eb/N0 is reduced compared to that in Figure 10.33d.

The stage is now set for us to introduce the following statement:


The Eb/N0 threshold of a turbo-decoding algorithm is that smallest value
of Eb/N0 for which an open tunnel exists in the EXIT chart.
It is the graphical simplicity of this important statement that makes the EXIT chart such a
useful practical tool in the design of iterative decoding algorithms.
Moreover, if it turns out that the EXIT curves of the two constituent decoders do not
intersect before the (1, 1)-point of perfect convergence and the staircase-shaped decoding
trajectory also succeeds in reaching this critical point, then a vanishingly low BER is
expected (Hanzo, 2012).
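The threshold definition lends itself to a simple numerical test: the tunnel is open exactly when one full round through the two decoders strictly increases the mutual information at every point before (1, 1). The sketch below applies this test to a toy one-parameter family of transfer curves, where the parameter a is a hypothetical stand-in for the effect of Eb/N0 (real curves would come from (10.133) or from measurement), and bisects for the smallest parameter giving an open tunnel.

```python
import numpy as np

def tunnel_open(f1, f2):
    """Open tunnel: one full iteration (decoder 1 then decoder 2) must
    strictly increase the mutual information everywhere before (1, 1)."""
    xs = np.linspace(0.0, 0.9995, 2000)
    return all(f2(f1(x)) > x for x in xs)

def make_curve(a):
    # Toy transfer curve; a larger a mimics a higher Eb/N0.
    return lambda x: a + (1.0 - a) * x * x

def threshold(tol=1e-4):
    """Bisect for the smallest parameter a that leaves the tunnel open,
    playing the role of the Eb/N0 threshold for this toy family."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        f = make_curve(mid)
        if tunnel_open(f, f):
            hi = mid
        else:
            lo = mid
    return hi

print("toy threshold:", threshold())   # analytically a = 0.5 for this family
```

For this family one can check by hand that the curve touches the line I1 = I2 below the (1, 1) point whenever a < 0.5, so the bisection settles at a ≈ 0.5; the same search over Eb/N0 with measured curves yields the threshold defined above.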

Approximate Gaussian Model


For an approximate model needed to display the underlying dynamics of iterative
decoding algorithms, the first step is to assume that the a priori L-values for message bit
mj, namely La(mj), constitute independent Gaussian random variables. With mj = ±1,
La(mj) assumes a variance σa² and a mean value (σa²/2)mj. Equivalently, we may express
the statistical dependence of La on mj as follows:

    La(mj) = (σa²/2)mj + na    (10.129)

where na is the sample value of a zero-mean Gaussian random variable with variance σa².
The approximate Gaussian model just described is motivated by the following two
points (Lin and Costello, 2003):

a. For an AWGN channel with soft (i.e., unquantized) output, the log-likelihood ratio
(L-value), denoted by La(mj | rj(0)), of a transmitted message bit mj given the received
signal rj(0) may be modeled as follows (see Problem 10.36):

    La(mj | rj(0)) = Lc rj(0) + La(mj)    (10.130)

where Lc = 4(Es/N0) is the channel reliability factor defined in (10.91) and La(mj) is
the a priori L-value of message bit mj. The point to note here is that the product
terms Lc rj(0) for varying j are independent Gaussian random variables with variance
2Lc and mean Lc.

b. Extensive Monte Carlo simulations of the a posteriori extrinsic L-values, Le(mj), for
a constituent decoder with large block length appear to support the Gaussian-model
assumption of (10.129); see Wiberg et al. (1999).
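Both (10.129) and (10.130) are straightforward to check by simulation. The sketch below (sample size, seed, and the chosen Es/N0 are illustrative assumptions, not from the text) draws a priori L-values per (10.129) and channel L-values Lc·rj(0) per (10.130), then verifies the stated means and variances.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
m = rng.choice([-1.0, 1.0], size=n)          # equiprobable message bits

# A priori L-values per (10.129): La = (sigma_a**2 / 2) * mj + na.
sigma_a = 2.0
La = (sigma_a**2 / 2.0) * m + rng.normal(0.0, sigma_a, size=n)

# Channel L-values per (10.130): Lc = 4 * Es/N0; for unit-energy BPSK the
# matching noise variance is 2 / Lc, so Lc * r has variance 2 * Lc.
Es_N0 = 0.5                                   # illustrative value
Lc = 4.0 * Es_N0
r = m + rng.normal(0.0, np.sqrt(2.0 / Lc), size=n)
Lch = Lc * r

# Conditioned on mj = +1: mean sigma_a**2/2, variance sigma_a**2 for La;
# mean Lc, variance 2*Lc for the channel L-values.
print(La[m == 1].mean(), La[m == 1].var())
print(Lch[m == 1].mean(), Lch[m == 1].var())
```

Note that both families of L-values satisfy the same "variance equals twice the mean" relationship, which is the consistency property exploited later in the averaging method.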
Accordingly, using the Gaussian approximation of (10.129), we may express the
conditional probability density function of the a priori L-value as follows:

    fLa(ξ | mj) = (1/(√(2π)σa)) exp(−(ξ − mjσa²/2)²/(2σa²))    (10.131)

where ξ is a dummy variable representing a sample value of La(mj). Note also that ξ is
continuous whereas, of course, mj is discrete. It follows that in formulating the mutual
information between the message bit mj = ±1 and the a priori L-value La(mj) we have a binary

input AWGN channel to deal with; such a channel was discussed previously in Example 5
of Chapter 5 on information theory. Building on the results of that example, we may
express the first desired mutual information, denoted by I1(mj; La), as follows:

    I1(mj; La) = (1/2) Σ(mj = −1, +1) ∫_{−∞}^{+∞} fLa(ξ | mj) log2[2fLa(ξ | mj) / (fLa(ξ | mj = −1) + fLa(ξ | mj = +1))] dξ    (10.132)

where the summation accounts for the binary nature of the information bit mj and the
integral accounts for the continuous nature of La. Using (10.131) and (10.132) and
manipulating the results, we get (ten Brink, 2001):
    I1(mj; La) = 1 − ∫_{−∞}^{+∞} [exp(−(ξ − mjσa²/2)²/(2σa²)) / (√(2π)σa)] log2[1 + exp(−ξ)] dξ    (10.133)

which, as expected, depends solely on the variance σa². To emphasize this fact, we define
the new function

    𝒥(σa) := I1(mj; La)    (10.134)

with the following two limiting values:

    lim(σa → 0) 𝒥(σa) = 0

and

    lim(σa → ∞) 𝒥(σa) = 1

In other words, we have

    0 ≤ I1(mj; La) ≤ 1    (10.135)

Moreover, 𝒥(σa) increases monotonically with increasing σa, which means that if the
value of the mutual information I1(mj; La) is given, then the corresponding value of σa is
uniquely determined by the inverse formula

    σa = 𝒥⁻¹(I1)    (10.136)

and, with it, the corresponding Gaussian random variable La(mj) defined in (10.129) is
obtained.
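The function 𝒥(σa) of (10.133)-(10.134) has no closed form, but it is straightforward to evaluate by numerical integration and to invert by bisection using its monotonicity. The sketch below does both for mj = +1; the grid sizes, integration limits, and bisection bounds are pragmatic choices, not from the text.

```python
import numpy as np

def J(sigma, n=20001):
    """Evaluate (10.133) for mj = +1 by numerical integration:
    J(sigma) = 1 - E[log2(1 + exp(-xi))], with xi ~ N(sigma^2/2, sigma^2)."""
    if sigma <= 0.0:
        return 0.0
    xi_max = sigma**2 / 2.0 + 10.0 * sigma      # covers the Gaussian mass
    xi = np.linspace(-xi_max, xi_max, n)
    pdf = np.exp(-(xi - sigma**2 / 2.0) ** 2 / (2.0 * sigma**2)) \
          / (np.sqrt(2.0 * np.pi) * sigma)
    # log2(1 + exp(-xi)) computed via logaddexp for numerical stability
    integrand = pdf * np.logaddexp(0.0, -xi) / np.log(2.0)
    return 1.0 - np.sum(integrand) * (xi[1] - xi[0])

def J_inv(I, lo=1e-6, hi=50.0, tol=1e-6):
    """Invert J by bisection, exploiting its monotonicity as in (10.136)."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if J(mid) < I:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

As a check, J(σa) rises monotonically from 0 toward 1, consistent with the two limiting values above, and J(J_inv(I)) reproduces I; J_inv is exactly what Step 1 of the histogram method (described next) needs to set the variance of the a priori input.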
Referring back to (10.128), we note that for us to construct the EXIT chart we also need
to know the second mutual information between the message bit mj and the a posteriori
extrinsic L-value Lp(mj). To this end, we may build on the formula of (10.132) to write

    I2(mj; Lp) = (1/2) Σ(mj = −1, +1) ∫_{−∞}^{+∞} fLp(ξ | mj) log2[2fLp(ξ | mj) / (fLp(ξ | mj = −1) + fLp(ξ | mj = +1))] dξ    (10.137)

where, in a manner similar to the a priori mutual information I1(mj; La), we also have

    0 ≤ I2(mj; Lp) ≤ 1    (10.138)

Accordingly, with the two mutual informations I1(mj; La) and I2(mj; Lp) at hand, we
may go on to compute the EXIT chart for an iterative decoding algorithm by merely
focusing on a single constituent decoder in the turbo-decoding algorithm.
The next issue to be considered is how to perform this computation, which we now
address.

Histogram Method for Computing the EXIT Chart


For turbo codes having long interleavers, the approximate Gaussian model of (10.129) is
good enough for practical purposes. Hence, we may use this model to formulate the
traditional histogram method, described in ten Brink (2001), to compute the EXIT chart.
Specifically, (10.137) is used to compute the mutual information I2(mj; Lp) for a
prescribed Eb/N0. To this end, Monte Carlo simulation (i.e., histogram measurements) is
used to compute the required probability density function, fLp(ξ | mj), on which no
Gaussian assumption can be imposed for obvious reasons. Computation of this probability
density function is central to the EXIT chart, which may proceed in a step-by-step manner
for a prescribed Eb/N0, as follows:
Step 1: Apply the independent Gaussian random variable defined in (10.129) to constituent
decoder 1 in the turbo decoder. The corresponding value of the mutual information I1(mj; La)
is obtained by choosing the variance σa² in accordance with (10.129).
Step 2: Using Monte Carlo simulation, compute the probability density function
fLp(ξ | mj). Hence, compute the second mutual information I2(mj; Lp), and with it a certain
point on the extrinsic information transfer curve of constituent decoder 1 is determined.
Step 3: Continue Steps 1 and 2 until we have sufficient points to construct the extrinsic
information transfer curve of constituent decoder 1.
Step 4: Construct the extrinsic information transfer curve of constituent decoder 2 as the
mirror image, about the straight line I1 = I2, of the curve for constituent decoder 1
computed in Step 3.
Step 5: Construct the EXIT chart for the turbo decoder by combining the extrinsic
information transfer curves of constituent decoders 1 and 2.
Step 6: Starting with some prescribed initial condition, for example I1(m1) = 0 for
message bit m1, construct the staircase extrinsic information transfer trajectory between
constituent decoders 1 and 2.
A desirable feature of the histogram method for computing the EXIT chart is the fact that,
except for the approximate Gaussian model of (10.129), there are no other assumptions
needed for the computations involved in Steps 1 through 6.
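Steps 2 and 3 hinge on estimating the mutual information (10.137) from measured L-values. A minimal sketch of that histogram computation is given below; for testing it is fed synthetic L-values drawn from the Gaussian model (10.129) rather than from an actual BCJR decoder, and the bin count, sample size, and seed are arbitrary assumptions.

```python
import numpy as np

def mutual_info_histogram(L, m, bins=200):
    """Estimate I(mj; L) per (10.137): histogram the L-values separately
    for mj = +1 and mj = -1 to approximate the conditional pdfs, then
    evaluate the sum-plus-integral numerically over the bins."""
    edges = np.linspace(L.min(), L.max(), bins + 1)
    f_plus, _ = np.histogram(L[m == 1], bins=edges, density=True)
    f_minus, _ = np.histogram(L[m == -1], bins=edges, density=True)
    dxi = edges[1] - edges[0]
    info = 0.0
    for f in (f_plus, f_minus):
        mask = f > 0                      # skip empty bins of this class
        ratio = 2.0 * f[mask] / (f_plus[mask] + f_minus[mask])
        info += 0.5 * np.sum(f[mask] * np.log2(ratio)) * dxi
    return info

# Synthetic L-values from the Gaussian model (10.129), standing in for
# the decoder output that Step 2 would actually measure.
rng = np.random.default_rng(7)
m = rng.choice([-1.0, 1.0], size=400_000)
sigma = 2.0
L = (sigma**2 / 2.0) * m + rng.normal(0.0, sigma, size=m.size)
print("histogram estimate:", mutual_info_histogram(L, m))
```

In the real procedure this estimator is applied to the a posteriori extrinsic L-values of constituent decoder 1 at each chosen input I1, producing one point of the transfer curve per run, exactly as Steps 1 through 3 describe.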

Averaging Method for Computing EXIT Charts


An alternative to the histogram method for computing EXIT charts is the so-called
averaging method.
As a reminder, the basic issue in computing an EXIT chart is to measure the mutual
information between the information bits, mj, at the turbo encoder input in the transmitter
Haykin_ch10_pp3.fm Page 664 Friday, January 4, 2013 5:03 PM

664 Chapter 10 Error-Control Coding

and the corresponding L-values produced at the output of the BCJR decoder in the
receiver. Due to the inherently nonlinear input–output characteristic of the BCJR
decoder, the underlying distribution of the L-values is not only unknown but also highly
likely to be non-Gaussian, thereby complicating the measurement. To get around this
difficulty, we may invoke the ergodic theorem, which was discussed previously in Chapter 4
on stochastic processes. As explained therein, under certain conditions, it is feasible to
replace the operation of ensemble averaging (i.e., expectation) with time averaging. Thus, in
proceeding along this ergodic path, we have a new nonlinear transformation, where the time
average of a large set of L-value samples, available at the output of the BCJR decoder,
provides an estimate of the desired mutual information; moreover, it does so without
requiring knowledge of the original data (i.e., the mj). It is for this reason that the second
method of computing EXIT charts is called the averaging method.15
Just as the use of a single constituent decoder suffices for computing an EXIT chart in
the histogram method, the same decoding scenario applies equally well to the averaging
method. It is with this point in mind that the underlying scheme for the averaging method
is as depicted in Figure 10.35. Most importantly, this scheme is designed in such a way
that the following requirements are satisfied:
1. Implementations of channel estimation, carrier recovery, modulation, and
demodulation are all perfect.
2. The turbo decoder is perfectly synchronized with the turbo encoder.
3. The BCJR algorithm or exact equivalent (i.e., the log-MAP algorithm) is used to
optimize the turbo decoder.
Moreover, the following analytic correspondences between the constituent encoder 1 at
the top of Figure 10.35 and the turbo decoder 2 at the bottom of the figure are carefully
noted: the code vectors c(0), c(1), and termination vector t(1) in the encoder map onto the a
posteriori L-values Lp(r(0)), Lp(r(1)), and Lp(z(1)) in the decoder, respectively.
It can therefore be justifiably argued that in light of these rigorous requirements, the
underlying algorithm for the averaging method is well designed and therefore trustworthy
in the following sense: in the course of computing the EXIT chart, the algorithm trusts
what the computed L-values actually say; that is, they do not under- or over-represent their
confidence in the message bits. This important characteristic of the averaging method is to
be contrasted against the histogram method. Indeed, it is for this reason that the histogram
method compares the L-values against the true values of the message bits, hence requiring
knowledge of them.
In summary, we may say that trustworthy L-values are those L-values that satisfy the
consistency condition. A simple way of testing this condition is the following: use the
averaging and histogram methods to compute the mutual information from the same set of
L-values. If both methods yield the same value for the mutual information, then the
consistency condition is satisfied (Maunder, 2012).
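Under the consistency condition, the time average just described takes a particularly simple form: each L-value contributes the binary entropy of its own "soft bit" 1/(1 + e^|L|), so no knowledge of the message bits is needed. The sketch below is one common formulation of such an averaging estimator, stated here under the consistency assumption rather than taken verbatim from the text; it also implements the comparison test, agreeing with a data-aided time average for consistent L-values and disagreeing once the L-values are made overconfident by scaling.

```python
import numpy as np

def mutual_info_averaging(L):
    """Averaging-method estimate of I(mj; L): a time average of the
    binary entropy of each sample's soft bit, requiring no knowledge of
    the message bits. Valid only under the consistency condition."""
    a = np.abs(L)
    # Hb(1/(1+e^a)) = [log(1+e^a) - a*sigmoid(a)] / log 2, computed stably
    hb = (np.logaddexp(0.0, a) - a / (1.0 + np.exp(-a))) / np.log(2.0)
    return 1.0 - np.mean(hb)

def mutual_info_data_aided(L, m):
    """Time average that does use the message bits, for comparison."""
    return 1.0 - np.mean(np.logaddexp(0.0, -m * L)) / np.log(2.0)

# Consistent L-values from the Gaussian model (10.129): the two estimates agree.
rng = np.random.default_rng(3)
m = rng.choice([-1.0, 1.0], size=300_000)
sigma = 1.5
L = (sigma**2 / 2.0) * m + rng.normal(0.0, sigma, size=m.size)
print(mutual_info_averaging(L), mutual_info_data_aided(L, m))

# Overconfident (inconsistent) L-values: the two estimates now disagree,
# which is exactly what the consistency test detects.
print(mutual_info_averaging(3.0 * L), mutual_info_data_aided(3.0 * L, m))
```

The agreement in the first case and the gap in the second illustrate the consistency test described above: trustworthy L-values neither under- nor over-represent their confidence, and only for such L-values do the two computations coincide.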

Procedure for Measuring the EXIT Chart


Referring back to the scheme of Figure 10.35, the demultiplexer outputs denoted by r(0),
r(1), and z(1) represent the L-values corresponding to the encoder outputs c(0), c(1), and t(1),
respectively. Thus, following the way in which the turbo decoder of Figure 10.33b was
described, the internally generated input applied to BCJR decoder 1 assumes exactly the
