Figure 4.1: A concatenated SOVA decoder where y represents the received channel
values, u represents the hard decision output values, and L represents
the associated reliability values.
Figure 4.2: Example of survivor and competing paths for reliability estimation at
time t [Ber93a].
In Figure 4.2, a 4-state trellis diagram is shown. The solid line indicates the survivor path
(assumed here to be part of the final ML path) and the dashed line indicates the
competing (concurrent) path at time t for state 1. For brevity, survivor and
competing paths for other nodes are not shown. The label S1,t represents state 1 at time
t, and the labels {0,1} on each branch indicate the estimated binary decision for
that path. The survivor path for this node is assigned an accumulated metric Vs(S1,t) and
the competing path for this node is assigned an accumulated metric Vc(S1,t). The
fundamental information for assigning a reliability value L(t) to node S1,t's survivor path
is the absolute difference between the two accumulated metrics, L(t) = |Vs(S1,t) − Vc(S1,t)|
[Ber93a]. The greater this difference, the more reliable the survivor path. For this
reliability calculation, it is assumed that the survivor accumulated metric is always
better than the competing accumulated metric. Furthermore, to reduce complexity, the
reliability values only need to be calculated for the ML survivor path (assumed known
for now); they are unnecessary for the other survivor paths since those paths will be
discarded later.
Figure 4.3 illustrates a problem with using the absolute difference between
accumulated survivor and competing metrics as a measure of the reliability of the
decision.
[Figure 4.3 trellis: at state 0, Vs(S0,t−4)=10 and Vc(S0,t−4)=20 give L(t−4)=10, while Vs(S0,t−2)=50 and Vc(S0,t−2)=75 give L(t−2)=25; at state 1, Vc(S1,t)=100.]
Figure 4.3: Example that shows the weakness of reliability assignment using metric
values directly.
To improve the reliability values of the survivor path, a traceback operation to
update the reliability values has been suggested [Hag89], [Ber93a]. This updating
procedure is integrated into the Viterbi algorithm as follows [Hag89]:
For node Sk,t in the trellis diagram (corresponding to state k at time t),
1. Store L(t) = |Vs(Sk,t) − Vc(Sk,t)|. (Other papers use a different symbol for this
quantity.) If there is more than one competing path, then multiple reliability
values must be calculated and the smallest of them is set to L(t).
2. Initialize the reliability value of Sk,t to +∞ (most reliable).
3. Compare the survivor and competing paths at Sk,t and store the memorization
levels (MEMs) where the estimated binary decisions of the two paths differ.
4. Update the reliability values at these MEMs with the following procedure:
a. Find the lowest MEM > 0, denoted MEMlow, whose reliability value
has not been updated.
b. Update MEMlow's reliability value L(t−MEMlow) by assigning the
lowest reliability value between MEM = 0 and MEM = MEMlow.
Continuing from the example, the opposite bit estimates between the survivor and
competing paths for S1,t are located and stored as MEM = {0, 2, 4}. With this MEM
information, the reliability updating process is accomplished as shown in Figure 4.4 and
Figure 4.5. In Figure 4.4, the first reliability update is shown. The lowest MEM > 0
whose reliability value has not been updated is determined to be MEMlow = 2. The lowest
reliability value between MEM = 0 and MEM = MEMlow = 2 is found to be L(t) = 0. Thus, the
associated reliability value is updated from L(t−2) = 25 to L(t−2) = L(t) = 0. The next lowest
MEM > 0 whose reliability value has not been updated is determined to be MEMlow = 4.
The lowest reliability value between MEM = 0 and MEM = MEMlow = 4 is found to be
L(t) = L(t−2) = 0. Thus, the associated reliability value is updated from L(t−4) = 10 to
L(t−4) = L(t) = L(t−2) = 0. Figure 4.5 shows the second reliability update.
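The traceback update above can be sketched in a few lines of Python. The function name and list representation are illustrative; the numbers replay the example values from Figure 4.3.

```python
def update_reliabilities(L, diff_mems):
    """Trace-back reliability update for one node (steps 4a-4b above).

    L[m] is the reliability at memorization level MEM = m, so L[0] is L(t),
    L[1] is L(t-1), and so on.  diff_mems lists the MEMs where the survivor
    and competing bit decisions differ.
    """
    for m in sorted(d for d in diff_mems if d > 0):
        # Assign the lowest reliability value between MEM = 0 and MEM = m.
        L[m] = min(L[: m + 1])
    return L

# Values from the example: L(t)=0, L(t-1)=300, L(t-2)=25, L(t-3)=200,
# L(t-4)=10, L(t-5)=100, with bit differences at MEM = {0, 2, 4}.
print(update_reliabilities([0, 300, 25, 200, 10, 100], {0, 2, 4}))
# -> [0, 300, 0, 200, 0, 100]: L(t-2) drops to 0, then L(t-4) drops to 0
```

Because the minimum is taken over MEM = 0 through MEM = m inclusive, an earlier update (such as L(t−2) becoming 0) propagates into later updates, exactly as in the worked example.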
Fuhua Huang Chapter 4. Iterative Turbo Code Decoder 41
[Figure 4.4: first reliability update. Along the survivor path, L(t)=0, L(t−1)=300, L(t−2)=25, L(t−3)=200, L(t−4)=10, L(t−5)=100; L(t−2) is updated from 25 to 0.]
[Figure 4.5: second reliability update; L(t−4) is updated from 10 to 0.]
It has been suggested that the final reliability values should be normalized or
logarithmically compressed before passing to the next concatenated decoder to offset
possible defects of this updating operation [Ber93a].
[Figure 4.6: system model — the information bits u are channel encoded as x, transmitted over the channel, received as y, and passed to the channel decoder, which outputs the estimate û.]
Clearly from Table 4.2, as L(u) increases toward +∞, the probability of u = +1 also
increases. Furthermore, as L(u) decreases toward −∞, the probability of u = −1 increases.
As can be seen, L(u) provides a measure of reliability for u. This will be exploited for
SOVA decoding as described later in the chapter.
The probability of the sum of two binary random variables, say P(u1 ⊕ u2 = +1), is
found from
$$P(u_1 \oplus u_2 = +1) = P(u_1 = +1)\,P(u_2 = +1) + P(u_1 = -1)\,P(u_2 = -1) \qquad (4.3)$$
With the following relation,
$$P(u = -1) = 1 - P(u = +1) \qquad (4.4)$$
the probability P(u1 ⊕ u2 = +1) becomes
$$P(u_1 \oplus u_2 = +1) = P(u_1 = +1)\,P(u_2 = +1) + \bigl(1 - P(u_1 = +1)\bigr)\bigl(1 - P(u_2 = +1)\bigr) \qquad (4.5)$$
Using the following relation shown in [Hag96],
$$P(u = +1) = \frac{e^{L(u)}}{1 + e^{L(u)}} \qquad (4.6)$$
it can be shown that
$$P(u_1 \oplus u_2 = +1) = \frac{1 + e^{L(u_1)}\,e^{L(u_2)}}{\bigl(1 + e^{L(u_1)}\bigr)\bigl(1 + e^{L(u_2)}\bigr)} \qquad (4.7)$$
The probability P(u1 ⊕ u2 = −1) can then be calculated as
$$P(u_1 \oplus u_2 = -1) = 1 - P(u_1 \oplus u_2 = +1) \qquad (4.8)$$
$$= \frac{e^{L(u_1)} + e^{L(u_2)}}{\bigl(1 + e^{L(u_1)}\bigr)\bigl(1 + e^{L(u_2)}\bigr)} \qquad (4.9)$$
From the definition of the log-likelihood ratio (4.1), it follows directly that
$$L(u_1 \oplus u_2) = \ln\frac{P(u_1 \oplus u_2 = +1)}{P(u_1 \oplus u_2 = -1)} \qquad (4.10)$$
Using (4.7) and (4.9), L(u1 ⊕ u2) is found to be
$$L(u_1 \oplus u_2) = \ln\frac{1 + e^{L(u_1)}\,e^{L(u_2)}}{e^{L(u_1)} + e^{L(u_2)}} \qquad (4.11)$$
This result is approximated in [Hag94] as
$$L(u_1 \oplus u_2) \approx \operatorname{sign}\bigl(L(u_1)\bigr)\,\operatorname{sign}\bigl(L(u_2)\bigr)\,\min\bigl(|L(u_1)|,\,|L(u_2)|\bigr) \qquad (4.12)$$
Table 4.3 shows the accuracy of this approximation compared to the exact solution.
From Table 4.3, it can be seen that the deviation between the exact and the approximated
solutions becomes larger as the reliability of the decision approaches zero for both
variables.
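The exact rule (4.11) and the min-sum approximation (4.12) are easy to compare numerically. The sketch below (function names are illustrative) shows the gap shrinking as the reliabilities grow, in line with the observation about Table 4.3.

```python
import math

def boxplus_exact(L1, L2):
    # Eq. (4.11): ln((1 + e^{L1} e^{L2}) / (e^{L1} + e^{L2}))
    return math.log((1.0 + math.exp(L1) * math.exp(L2))
                    / (math.exp(L1) + math.exp(L2)))

def boxplus_approx(L1, L2):
    # Eq. (4.12): sign(L1) * sign(L2) * min(|L1|, |L2|)
    return math.copysign(1.0, L1) * math.copysign(1.0, L2) \
        * min(abs(L1), abs(L2))

for L1, L2 in [(0.5, 0.5), (1.0, 2.0), (5.0, 8.0)]:
    print(L1, L2, round(boxplus_exact(L1, L2), 3), boxplus_approx(L1, L2))
```

For (0.5, 0.5) the exact value is about 0.12 against an approximation of 0.5, while for (5, 8) the exact value is about 4.95 against 5: the approximation is loose for unreliable inputs and tight for reliable ones.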
$$\sum_{j=1}^{J}{}^{\boxplus}\, L(u_j) = L\Bigl(\sum_{j=1}^{J}{}^{\oplus}\, u_j\Bigr) \qquad \text{(following 4.13)} \qquad (4.17)$$
$$= \ln\frac{P\Bigl(\sum_{j=1}^{J}{}^{\oplus}\, u_j = +1\Bigr)}{P\Bigl(\sum_{j=1}^{J}{}^{\oplus}\, u_j = -1\Bigr)} \qquad \text{(following 4.10)} \qquad (4.18)$$
$$= \ln\frac{\prod_{j=1}^{J}\bigl(e^{L(u_j)} + 1\bigr) + \prod_{j=1}^{J}\bigl(e^{L(u_j)} - 1\bigr)}{\prod_{j=1}^{J}\bigl(e^{L(u_j)} + 1\bigr) - \prod_{j=1}^{J}\bigl(e^{L(u_j)} - 1\bigr)} \qquad (4.19)$$
$$\sum_{j=1}^{J}{}^{\boxplus}\, L(u_j) = L\Bigl(\sum_{j=1}^{J}{}^{\oplus}\, u_j\Bigr) \qquad (4.23)$$
$$\approx \prod_{j=1}^{J} \operatorname{sign}\bigl(L(u_j)\bigr)\cdot \min_{j=1,\ldots,J}\bigl|L(u_j)\bigr| \qquad \text{(following 4.12)} \qquad (4.24)$$
From the system model in Figure 4.6, the information bit u is mapped to the
encoded bits x. The encoded bits x are transmitted over the channel and received as y.
From this system model, the log-likelihood ratio of x conditioned on y is calculated as
$$L(x \mid y) = \ln\frac{P(x = +1 \mid y)}{P(x = -1 \mid y)} \qquad (4.25)$$
By using Bayes' theorem, this log-likelihood ratio is equivalent to
$$L(x \mid y) = \ln\frac{p(y \mid x = +1)\,P(x = +1)}{p(y \mid x = -1)\,P(x = -1)} \qquad (4.26)$$
$$= \ln\frac{p(y \mid x = +1)}{p(y \mid x = -1)} + \ln\frac{P(x = +1)}{P(x = -1)} \qquad (4.27)$$
The channel model is assumed to be flat fading with Gaussian noise. By using the
Gaussian pdf f(z),
$$f(z) = \frac{1}{\sqrt{2\pi\sigma^2}}\, e^{-\frac{(z-m)^2}{2\sigma^2}} \qquad (4.28)$$
where m is the mean and σ² is the variance, it can be shown that
$$\ln\frac{p(y \mid x = +1)}{p(y \mid x = -1)} = \ln\frac{e^{-\frac{E_b}{N_o}(y - a)^2}}{e^{-\frac{E_b}{N_o}(y + a)^2}} \qquad (4.29)$$
$$= \ln\frac{e^{2\frac{E_b}{N_o} a y}}{e^{-2\frac{E_b}{N_o} a y}} \qquad (4.30)$$
$$= 4\,\frac{E_b}{N_o}\, a\, y \qquad (4.31)$$
where Eb/No is the signal-to-noise ratio per bit (directly related to the noise variance) and a
is the fading amplitude. For a nonfading Gaussian channel, a = 1.
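Equation (4.31) says that the received values enter the decoder scaled by the channel reliability Lc = 4a(Eb/No). A minimal sketch, with hypothetical received values:

```python
def channel_reliability(Eb_No, a=1.0):
    # Eq. (4.31): Lc = 4 * a * Eb/No; a = 1 for a nonfading Gaussian channel
    return 4.0 * a * Eb_No

# With Eb/No = 1, this gives the weighting Lc = 4 used in the worked
# example later in the chapter.
Lc = channel_reliability(1.0)
y = [1, -1, 1, 0, -1]             # hypothetical received values; 0 = erasure
print([Lc * yi for yi in y])      # [4.0, -4.0, 4.0, 0.0, -4.0]
```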
The SOVA component decoder estimates the information sequence using one of
the two encoded streams produced by the turbo code encoder. Figure 4.7 shows the
inputs and outputs of the SOVA component decoder.
[Figure 4.7: SOVA component decoder — inputs L(u) and Lc y, outputs û and L(û).]
The SOVA component decoder processes the (log-likelihood ratio) inputs L(u) and Lc y,
where L(u) is the sequence of a-priori values for the information sequence u and Lc y is the
weighted received sequence. The sequence y is received from the channel, whereas the
sequence L(u) is produced by and obtained from the preceding SOVA component decoder.
If there is no preceding SOVA component decoder, then there are no a-priori values and
the L(u) sequence is initialized to the all-zero sequence. A similar concept is also
shown at the beginning of the chapter in Figure 4.1. The SOVA component decoder
produces û and L(û) as outputs, where û is the estimated information sequence and
L(û) is the associated log-likelihood ratio (soft or L-value) sequence.
The SOVA component decoder operates similarly to the Viterbi decoder, except that
the ML sequence is found by using a modified metric. This modified metric, which
incorporates the a-priori value, is derived below.
The fundamental Viterbi algorithm searches for the state sequence S(m), or equivalently the
information sequence u(m), that maximizes the a-posteriori probability P(S(m)|y). For
binary (k = 1) trellises, m can be either 1 or 2 to denote the survivor and the competing
paths, respectively. By using Bayes' theorem, the a-posteriori probability can be
expressed as
$$P(\mathbf{S}^{(m)} \mid \mathbf{y}) = p(\mathbf{y} \mid \mathbf{S}^{(m)})\,\frac{P(\mathbf{S}^{(m)})}{p(\mathbf{y})} \qquad (4.34)$$
Since the received sequence y is fixed for metric computation and does not depend on m,
p(y) can be discarded. Thus, the maximization reduces to
$$\max_m\; p(\mathbf{y} \mid \mathbf{S}^{(m)})\,P(\mathbf{S}^{(m)}) \qquad (4.35)$$
The probability of a state sequence terminating at time t is P(St). This probability can be
calculated as
$$P(\mathbf{S}_t) = P(\mathbf{S}_{t-1})\,P(S_t) \qquad (4.36)$$
$$= P(\mathbf{S}_{t-1})\,P(u_t) \qquad (4.37)$$
where P(St) and P(ut) denote the probability of the state and of the bit at time t, respectively
(the state transition at time t is determined by the input bit ut). The maximization can then
be expanded to
$$\max_m\; p(\mathbf{y} \mid \mathbf{S}^{(m)})\,P(\mathbf{S}^{(m)}) = \max_m\; \prod_{i=0}^{t} p(\mathbf{y}_i \mid S^{(m)}_{i-1},\, S^{(m)}_i)\; P(\mathbf{S}^{(m)}_t) \qquad (4.38)$$
where $(S^{(m)}_{i-1}, S^{(m)}_i)$ denotes the state transition between time i−1 and time i and $\mathbf{y}_i$ denotes
the associated received channel values for the state transition.
After substituting and rearranging,
$$\max_m\; p(\mathbf{y} \mid \mathbf{S}^{(m)})\,P(\mathbf{S}^{(m)}) = \max_m\; P(\mathbf{S}^{(m)}_{t-1})\,\prod_{i=0}^{t-1} p(\mathbf{y}_i \mid S^{(m)}_{i-1},\, S^{(m)}_i)\; P(u^{(m)}_t)\; p(\mathbf{y}_t \mid S^{(m)}_{t-1},\, S^{(m)}_t) \qquad (4.39)$$
Note that
$$p(\mathbf{y}_t \mid S^{(m)}_{t-1},\, S^{(m)}_t) = \prod_{j=1}^{N} p(y_{t,j} \mid x^{(m)}_{t,j}) \qquad (4.40)$$
As seen from (4.47) and (4.48), the SOVA metric incorporates values from the past
metric, the channel reliability, and the source reliability (a-priori value).
Figure 4.8 shows the source reliability as used in the SOVA metric computation.
[Figure 4.8: trellis segment with states Sa and Sb between time t−1 and time t; each branch adds (+1)(L(ut)) or (−1)(L(ut)) to the metric according to its information bit.]
Figure 4.8 shows a trellis diagram with two states Sa and Sb and a transition period
between time t−1 and time t. The solid lines indicate transitions that produce an
information bit ut = +1 and the dashed lines indicate transitions that produce an
information bit ut = −1. The source reliability L(ut), which may be either a positive or a
negative value, comes from the preceding SOVA component decoder. The add-on value is
incorporated into the SOVA metric to provide a more reliable decision on the estimated
information bit. For example, if L(ut) is a large positive number, then it is
relatively more difficult to change the estimated bit decision from +1 to −1 between
decoding stages (based on assigning max_m{M_t^(m)} to the survivor path). However, if L(ut)
is a small positive number, then it is relatively easier to change the estimated bit
decision from +1 to −1 between decoding stages. Thus, L(ut) acts like a buffer that
discourages the decoder from choosing the opposite bit decision to that of the preceding decoder.
[Figure 4.9: balance between channel reliability Lc and source reliability L(u).]
As illustrated in Figure 4.9, the balance between channel and source reliability is very
important for the SOVA metric. This does not mean that the channel and source
reliability values should have the same magnitude, but rather that their relative values
should reflect the channel and source conditions. For instance, if the channel is very
good, Lc will be larger than L(u) and decoding relies mostly on the received channel
values. However, if the channel is very bad, decoding relies mostly on the a-priori
information L(u). If this balance is not achieved, catastrophic effects may result and
degrade the performance of the channel decoder.
Figure 4.10: Example of SOVA survivor and competing paths for reliability
estimation.
The probability of path m at time t and the SOVA metric are stated in [Hag94] to
be related as
$$P(\text{path}(m)) = P(\mathbf{S}^{(m)}_t) \qquad (4.50)$$
$$= e^{M^{(m)}_t / 2} \qquad (4.51)$$
At time t, let us suppose that the survivor metric of a node is denoted as $M^{(1)}_t$ and the
competing metric is denoted as $M^{(2)}_t$. Thus, the probability of selecting the correct
survivor path is
$$P(\text{correct}) = \frac{P(\text{path}(1))}{P(\text{path}(1)) + P(\text{path}(2))} \qquad (4.52)$$
$$= \frac{e^{M^{(1)}_t / 2}}{e^{M^{(1)}_t / 2} + e^{M^{(2)}_t / 2}} \qquad (4.53)$$
$$= \frac{e^{\Delta^0_t}}{1 + e^{\Delta^0_t}} \qquad (4.54)$$
where $\Delta^0_t = (M^{(1)}_t - M^{(2)}_t)/2$. The reliability of this path decision is then
$$\log\frac{P(\text{correct})}{1 - P(\text{correct})} = \log\frac{e^{\Delta^0_t} / (1 + e^{\Delta^0_t})}{1 / (1 + e^{\Delta^0_t})} \qquad (4.55)$$
$$= \Delta^0_t \qquad (4.56)$$
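The relation between the metric difference and the probability of a correct path decision in (4.52)-(4.56) can be checked numerically; the metric values below are hypothetical.

```python
import math

def prob_correct(M1, M2):
    # Eqs. (4.52)-(4.54): with delta = (M1 - M2) / 2,
    # P(correct) = e^delta / (1 + e^delta)
    delta = (M1 - M2) / 2.0
    return math.exp(delta) / (1.0 + math.exp(delta))

M1, M2 = 20.0, 12.0                 # hypothetical survivor/competing metrics
p = prob_correct(M1, M2)
log_odds = math.log(p / (1.0 - p))  # eqs. (4.55)-(4.56)
print(round(p, 4), log_odds)        # the log-odds recovers delta = 4.0
```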
The reliability values along the survivor path for a particular node at time t are
denoted as L(t−MEM), where MEM = 0, …, t. For this node at time t, if the bit on the survivor
path at MEM = k (or equivalently at time t−k) is the same as the associated bit on the
competing path, then there would be no bit error if the competing path were chosen. Thus,
the reliability value at this bit position remains unchanged. However, if the bits on the
survivor and competing paths differ at MEM = k, then there is a bit error. The reliability value
at this bit error position must then be updated using the same updating procedure
described at the beginning of the chapter. As shown in Figure 4.10, reliability updates are
required for MEM = 2 and MEM = 4.
The soft-output Viterbi algorithm (along with its reliability updating procedure)
can be implemented as follows:
1. (a) Initialize time t = 0.
(b) Initialize $M^{(m)}_0 = 0$ only for the zero state in the trellis diagram and all other
states to −∞.
2. (a) Set time t = t + 1.
(b) Compute the metric
$$M^{(m)}_t = M^{(m)}_{t-1} + u^{(m)}_t L_c y_{t,1} + \sum_{j=2}^{N} x^{(m)}_{t,j} L_c y_{t,j} + u^{(m)}_t L(u_t)$$
[Figure 4.11: SOVA decoder implementation — the inputs feed a first SOVA decoder directly and, through shift registers, a second SOVA decoder.]
The SOVA decoder inputs L(u) and Lc y (the a-priori values and the weighted received
values, respectively) and outputs û and L(û) (the estimated bit decisions and their
associated soft or L-values, respectively). This implementation of the SOVA decoder is
composed of two separate SOVA decoders. The first SOVA decoder computes the
metrics for the ML path only and does not compute (suppresses) the reliability values.
The shift registers are used to buffer the inputs while the first SOVA decoder is
processing the ML path. The second SOVA decoder (with the knowledge of the ML
path) recomputes the ML path and also calculates and updates the reliability values. As
can be seen, this implementation method reduces the complexity of the updating process:
instead of keeping track of and updating 2^M survivor paths (one per trellis state), only the
ML path needs to be processed.
[Figure 4.12: iterative SOVA turbo code decoder — the received streams, weighted by the channel reliability 4Eb/No, are buffered in CS registers and parallel shift registers and feed the SOVA component decoders, with SOVA1 producing L1(u') and the extrinsic value Le1(u').]
The turbo code decoder processes the received channel bits on a frame basis. As shown
in Figure 4.12, the received channel bits are demultiplexed into the systematic stream y1
and two parity check streams y2 and y3 from component encoders 1 and 2, respectively.
These bits are weighted by the channel reliability value and loaded onto the CS registers.
The registers shown in the figure are used as buffers to store sequences until they are
needed. The switches are placed in the open position to prevent bits from the next
frame from being processed until the present frame has been processed.
The SOVA component decoder produces the soft or L-value L(u't) for the
estimated bit u't (for time t). The soft or L-value L(u't) can be decomposed into three
distinct terms, as stated in [Hag94]:
$$L(u'_t) = L(u_t) + L_c y_{t,1} + L_e(u'_t) \qquad (4.59)$$
L(ut) is the a-priori value and is produced by the preceding SOVA component decoder.
Lc y_{t,1} is the weighted received systematic channel value. Le(u't) is the extrinsic value
produced by the present SOVA component decoder. The information that is passed
between SOVA component decoders is the extrinsic value
$$L_e(u'_t) = L(u'_t) - L(u_t) - L_c y_{t,1} \qquad (4.60)$$
Figure 4.12 shows that the turbo code decoder is a closed-loop serial
concatenation of SOVA component decoders. In this closed-loop decoding scheme, each
of the SOVA component decoders estimates the information sequence using a different
weighted parity check stream. The turbo code decoder further applies iterative
decoding to obtain more dependable reliability/a-priori estimates from the two
different weighted parity check streams, with the aim of better decoding performance.
The iterative turbo code decoding algorithm for the nth iteration is as follows:
1. The SOVA1 decoder inputs the sequences 4(Eb/No)y1 (systematic), 4(Eb/No)y2 (parity
check), and Le2(u') and outputs the sequence L1(u'). For the first iteration, the sequence
Le2(u') = 0 because there are no initial a-priori values (no extrinsic values from
SOVA2).
2. The extrinsic information from SOVA1 is obtained by
Le1(u') = L1(u') − Le2(u') − Lc y1, where Lc = 4Eb/No.
3. The sequences 4(Eb/No)y1 and Le1(u') are interleaved and denoted as I{4(Eb/No)y1}
and I{Le1(u')}.
4. The SOVA2 decoder inputs the sequences I{4(Eb/No)y1} (systematic), 4(Eb/No)y3
(parity check; this stream was already interleaved by the turbo code encoder), and
I{Le1(u')} (a-priori information) and outputs the sequences I{L2(u')} and I{u'}.
5. The extrinsic information from SOVA2 is obtained by
I{Le2(u')} = I{L2(u')} − I{Le1(u')} − I{Lc y1}.
6. The sequences I{Le2(u')} and I{u'} are deinterleaved and denoted as Le2(u') and
u'. Le2(u') is fed back to SOVA1 as a-priori information for the next iteration,
and u' is the estimated bit output for the nth iteration.
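The six steps above can be sketched as a loop. The sova() stub below only preserves the interface (it sums its three soft inputs per bit rather than searching a trellis), and the permutation and received sequences are illustrative, not taken from the example.

```python
def interleave(seq, perm):
    return [seq[i] for i in perm]

def deinterleave(seq, perm):
    out = [0.0] * len(seq)
    for k, i in enumerate(perm):
        out[i] = seq[k]
    return out

def sova(sys_in, parity_in, apriori):
    # Placeholder for a real SOVA component decoder: same interface
    # (three soft input sequences, one soft output sequence), but it
    # just sums the inputs per bit instead of searching a trellis.
    return [s + p + a for s, p, a in zip(sys_in, parity_in, apriori)]

def turbo_iterate(Lc_y1, Lc_y2, Lc_y3, perm, n_iters=2):
    Le2 = [0.0] * len(Lc_y1)        # no a-priori values initially
    for _ in range(n_iters):
        L1 = sova(Lc_y1, Lc_y2, Le2)                          # step 1
        Le1 = [l - e - y for l, e, y in zip(L1, Le2, Lc_y1)]  # step 2
        I_y1, I_Le1 = interleave(Lc_y1, perm), interleave(Le1, perm)  # step 3
        I_L2 = sova(I_y1, Lc_y3, I_Le1)                       # step 4
        I_Le2 = [l - e - y for l, e, y in zip(I_L2, I_Le1, I_y1)]  # step 5
        Le2 = deinterleave(I_Le2, perm)                       # step 6: feedback
        u_hat = [1 if l > 0 else -1 for l in deinterleave(I_L2, perm)]
    return u_hat, Le2

perm = [4, 3, 2, 1, 0]              # a size-5 reversal interleaver
u_hat, Le2 = turbo_iterate([4, -4, 4, 4, -4],   # hypothetical Lc*y1
                           [0, 4, -4, 4, 4],    # hypothetical Lc*y2
                           [4, 4, -4, -4, 4],   # hypothetical Lc*y3
                           perm)
print(u_hat)
```

With the summing stub, the extrinsic subtraction cancels the systematic and a-priori terms exactly, so the fed-back values equal the deinterleaved parity inputs; a real SOVA decoder would instead refine them each iteration.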
The SOVA component decoder and the SOVA iterative turbo code decoder are
both complicated. An example is shown below to aid in the understanding of these
decoders. Figure 4.13 shows the turbo code encoder structure used in the example.
[Figure 4.13: turbo code encoder for the example — the input frame in shift register 1 (size L) feeds RSC encoder 1 (outputs x1, x2) and, through the size-L interleaver and shift register 2 (size L), recursive encoder 2 (output x3).]
In Figure 4.13, the systematic bit stream (with its termination bits) is associated with
recursive encoder 2. The input information bit stream u is loaded into shift register 1 to
form a data frame. This data frame is passed to the interleaver and its output is fed into
shift register 2. The two component encoders then encode their respective inputs. The
output encoded bit streams x1, x2, and x3 are multiplexed together to form a single
transmission bit stream. Figure 4.14 shows the RSC component encoder used in the
example.
[Figure 4.14: RSC component encoder for the example — input u enters an adder followed by two delay elements (D D); a switch selects position A or B; the outputs are the systematic bit x1 and the parity bit x2.]
In Figure 4.14, the switch is set to position A for encoding the input sequence and
to position B for terminating the trellis. Figure 4.15 shows the state diagram
of the RSC component encoder.
[Figure 4.15: four-state diagram with transitions labeled u / x1 x2 in the {0,1} domain.]
Figure 4.15: State diagram of the RSC component encoder for the example.
The encoded bits need to be mapped from the {0,1} domain to the {−1,+1} domain for
transmission, and Figure 4.16 shows this modified state diagram.
[Figure 4.16: four-state diagram with branch labels u / x1 x2 mapped to the {−1,+1} domain.]
Figure 4.16: Transmission state diagram of the RSC component encoder for the
example.
For the example, the input sequence is u = {01101}. The interleaver (of size L = 5)
reverses the input sequence, so for this input the interleaver
outputs I{u} = {10110}. From Figure 4.13 and Figure 4.14, the encoded sequences are
x1 = {1011010}, x2 = {0100010}, and x3 = {1100110}. Coincidentally, the encoded
sequences have the same tail bits (10). The encoded sequences are mapped to the {−1,+1}
domain for transmission. The corresponding received sequences y1, y2, and y3 match the
transmitted sequences except for channel errors, and the first symbols of y2 and y3 are
received as 0: a 0 represents an erasure (the reception of a signal whose corresponding
symbol value is in doubt) [Wic95]. Assuming Eb/No = 1, the weighted received sequences
are Lc y1 = 4y1, Lc y2 = 4y2, and Lc y3 = 4y3, with entries of magnitude 4 (or 0 for the
erasures).
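The {0,1} to {−1,+1} mapping is a one-liner; the sign convention 0 → −1, 1 → +1 used here is an assumption, since the chapter does not state which bit maps to which sign.

```python
def to_antipodal(bits):
    # Assumed mapping: 0 -> -1, 1 -> +1 (the thesis maps {0,1} to {-1,+1};
    # the choice of which bit maps to which sign is an assumption here)
    return [2 * b - 1 for b in bits]

# The example's encoded sequence x1 = {1011010}:
print(to_antipodal([1, 0, 1, 1, 0, 1, 0]))  # [1, -1, 1, 1, -1, 1, -1]
```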
Turbo code decoding is shown below for the first (initial) decoding iteration.
From the transmission state diagram shown in Figure 4.16, the trellis legend (state
transition diagram) is obtained and is shown in Figure 4.17. The trellis legend is required
for decoding the RSC component codes.
Figure 4.17: Trellis legend (state transition diagram) of the RSC component
encoder for the example. The first coded bit is the systematic bit
and the second coded bit is the parity check bit.
[Figure 4.18 diagram: the weighted streams y1, y2, and y3 pass through CS registers and parallel shift registers to SOVA1 and SOVA2, with interleaving (I) and deinterleaving (I−1) between the decoders. CS = circular shift register, I = interleaver, I−1 = deinterleaver.]
Figure 4.18: SOVA iterative turbo code decoder for the example.
The SOVA1 component decoder is used to decode the RSC1 code. The SOVA1
component decoder's input sequences are I−1{Lc y1}, Lc y2, and Le2(u'); the systematic
sequence Lc y1 is deinterleaved to decode the RSC1 code. The entries of I−1{Lc y1} and
Lc y2 have magnitude 4 (with a leading erasure 0 in Lc y2), and Le2(u') = {0 0 0 0 0 0 0}
(no a-priori knowledge). The SOVA1 component decoder is implemented with two separate
SOVA decoders (Figure 4.11). The first SOVA decoder (of the SOVA1 component
decoder) computes the SOVA metric (4.48) for the ML path. (Note that the notation
Le(u) from the iterative turbo code decoder is equivalent to the notation L(u) from the
SOVA metric.) Figure 4.19 shows the first SOVA decoder's (of the SOVA1 component
decoder) ML path.
[Figure 4.19 trellis: partial path metrics over times 0-7, with metric ties (TIE) marked; the bold metrics trace the ML path.]
Metric used: $M^{(m)}_t = M^{(m)}_{t-1} + u^{(m)}_t\, I^{-1}\{L_c y_{t,1}\} + x^{(m)}_{t,2} L_c y_{t,2} + u^{(m)}_t L_{e2}(u'_t)$
Inputs: I−1{Lc y1} (magnitude-4 entries), Lc y2 (erasure 0 followed by magnitude-4 entries), Le2(u') = {0 0 0 0 0 0 0}.
Figure 4.19: The first SOVA decoder's (of the SOVA1 component decoder) ML path.
In Figure 4.19, the bold partial path metrics correspond to the ML path. Survivor paths
are represented by bold solid lines and competing paths are represented by simple solid
lines. For metric ties, the first branch is always chosen. Table 4.4 shows the survivor
(larger) and competing (smaller) partial path metrics for the trellis diagram in Figure 4.19.
Table 4.4: Survivor and Competing Partial Path Metrics (survivor / competing) for the
Trellis Diagram in Figure 4.19
State 00 (Times 0-7): 0; 4; 4; 4/4; 4/12; 12/4; 4/44; 52/12
State 01: 4; 20/12; 4/4; 36/4; 12/20
State 10: 4; 12; 4/4; 12/28; 12/4
State 11: 4; 4/4; 4/4; 20/12
With the knowledge of the ML path, the second SOVA decoder (of the SOVA1
component decoder) recomputes the SOVA metric and also calculates and updates the
reliability values. From the trellis legend (Figure 4.17) and the ML path (Figure 4.19),
the SOVA1 component decoder produces the estimated bit sequence û and the state
sequence s = {00, 10, 01, 10, 01, 00, 00}. Table 4.5, obtained from Figure 4.19, shows
the competing paths (bit and state sequences) for the reliability updates.
Table 4.5: SOVA1's Competing Paths (Bit / State Sequences) for Reliability Updates
(entries run consecutively up to the given time index)
Time 3: 1/10, 1/11, 1/01
Time 4: 1/00, 1/00, 1/00, 1/10
Time 5: 1/00, 1/10, 1/11, 1/11, 1/01
Time 6: 1/00, 1/10, 1/01, 1/00, 1/00, 1/00
Time 7: 1/00, 1/10, 1/01, 1/10, 1/11, 1/01, 1/00
Table 4.6, obtained from Figure 4.19, Table 4.4, and Table 4.5, shows the calculated and
updated (in bold) reliability values.
Table 4.6: SOVA1's Calculated and Updated (In Bold) Reliability Values
(entries run consecutively up to the given time index)
Time 3: 16, 16, 16
Time 4: 16, 16, 16, 20
Time 5: 16, 16, 16, 20, 20
Time 6: 16, 16, 16, 20, 20, 24
Time 7: 16, 16, 16, 20, 20, 24, 32
From Table 4.6, the SOVA1 component decoder produces the final reliability sequence
{16, 16, 16, 20, 20, 24, 32}, and outputs the soft or L-value sequence L1(u') with these
magnitudes. The extrinsic value sequence, obtained by subtracting SOVA1's inputs from
the soft or L-value sequence, is Le1(u') = L1(u') − I−1{Lc y1} − Le2(u'), with magnitudes
{12, 12, 12, 16, 16, 20, 28}.
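The extrinsic subtraction (4.60) can be checked against the magnitudes above (the printed signs were lost in reproduction, so the all-positive values here are magnitudes only):

```python
def extrinsic(L_out, sys_in, apriori):
    # Eq. (4.60): subtract the decoder's soft inputs from its soft output
    return [l - s - a for l, s, a in zip(L_out, sys_in, apriori)]

L1  = [16, 16, 16, 20, 20, 24, 32]   # SOVA1 soft-output magnitudes
sysv = [4] * 7                       # deinterleaved Lc*y1 magnitudes
La  = [0] * 7                        # Le2(u') = 0 on the first iteration
print(extrinsic(L1, sysv, La))       # [12, 12, 12, 16, 16, 20, 28]
```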
The SOVA2 component decoder is used to decode the RSC2 code. The SOVA2
component decoder's input sequences are Lc y1, Lc y3, and I{Le1(u')}; the extrinsic value
sequence Le1(u') is interleaved to decode the RSC2 code. The sequence I{Le1(u')} has
magnitudes {16, 16, 12, 12, 12, 20, 28} (only the L = 5 data values are reordered by the
interleaver; the two tail values are unchanged). The SOVA2 component decoder is
implemented with two separate SOVA decoders (Figure 4.11). The first SOVA decoder
(of the SOVA2 component decoder) computes the SOVA metric for the ML path, shown
in Figure 4.20.
[Figure 4.20 trellis: partial path metrics over times 0-7; the bold metrics trace the ML path.]
Metric used: $M^{(m)}_t = M^{(m)}_{t-1} + u^{(m)}_t L_c y_{t,1} + x^{(m)}_{t,2} L_c y_{t,3} + u^{(m)}_t\, I\{L_{e1}(u'_t)\}$
Inputs: Lc y1 (magnitude-4 entries), Lc y3 (erasure 0 followed by magnitude-4 entries), I{Le1(u')} with magnitudes {16, 16, 12, 12, 12, 20, 28}.
Figure 4.20: The first SOVA decoder's (of the SOVA2 component decoder) ML path.
In Figure 4.20, the bold partial path metrics correspond to the ML path. Survivor paths
are represented by bold solid lines and competing paths by plain solid lines. Table 4.7
shows the survivor (larger) and competing (smaller) partial path metrics for the trellis
diagram in Figure 4.20.
With the knowledge of the ML path, the second SOVA decoder (of the SOVA2
component decoder) recomputes the SOVA metric and also calculates and updates the
reliability values. From the trellis legend (Figure 4.17) and the ML path (Figure 4.20),
the SOVA2 component decoder produces the estimated bit sequence I{û} and the state
sequence I{s} = {10, 11, 11, 11, 01, 00, 00}. Table 4.8, obtained from Figure 4.20,
shows the competing paths (bit and state sequences) for the reliability updates.
Table 4.8: SOVA2's Competing Paths (Bit / State Sequences) for Reliability Updates
(entries run consecutively up to the given time index)
Time 3: 1/00, 1/10, 1/11
Time 4: 1/00, 1/00, 1/10, 1/11
Time 5: 1/10, 1/01, 1/00, 1/10, 1/01
Time 6: 1/10, 1/11, 1/01, 1/00, 1/00, 1/00
Time 7: 1/10, 1/11, 1/11, 1/01, 1/10, 1/01, 1/00
Table 4.9, obtained from Figure 4.20, Table 4.7, and Table 4.8, shows the calculated and
updated (in bold) reliability values.
Table 4.9: SOVA2's Calculated and Updated (In Bold) Reliability Values
(entries run consecutively up to the given time index)
Time 3: 60, 60, 60
Time 4: 48, 60, 60, 48
Time 5: 48, 48, 60, 48, 52
Time 6: 48, 48, 48, 48, 52, 56
Time 7: 48, 48, 48, 52, 52, 56, 76