Turbo Codes: Coding and Communication Laboratory
Reference
1. Lin and Costello, Error Control Coding, Chapter 16
2. Moon, Error Correction Coding, Chapter 14
3. C. Heegard and S. B. Wicker, Turbo Coding
4. B. Vucetic and J. Yuan, Turbo Codes
Introduction
(37, 21, 65536) turbo code with $G(D) = \left[1,\ \dfrac{1+D^4}{1+D+D^2+D^3+D^4}\right]$
Figure B: Conventional convolutional encoder with $r = 1/2$ and $K = 3$.
$G = [1,\ g_2/g_1]$
Trellis termination
For encoding the input sequence, the switch is set to position A; for terminating the trellis, the switch is set to position B.
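As a concrete illustration of this switch behaviour, the following Python sketch (my own illustration, not code from the slides; the polynomial arguments are only example values) encodes a rate-1/2 recursive systematic code $G = [1,\ g_2/g_1]$: position A feeds the data bits, and position B feeds the feedback value so that the register input is zero and the encoder returns to the all-zero state.

```python
def rsc_encode(u_bits, g1=(1, 1, 1), g2=(1, 0, 1)):
    """Rate-1/2 RSC encoder, G = [1, g2/g1].
    g1, g2 are tap tuples (g_0, ..., g_m) with g1[0] = 1 (example values only)."""
    m = len(g1) - 1
    state = [0] * m                      # shift register, state[0] = newest bit
    sys_out, par_out = [], []

    def feedback():
        fb = 0
        for i in range(m):
            fb ^= g1[i + 1] & state[i]
        return fb

    def clock(u):
        a = u ^ feedback()               # register input after the feedback adder
        v = g2[0] & a
        for i in range(m):
            v ^= g2[i + 1] & state[i]
        state.insert(0, a)
        state.pop()
        sys_out.append(u)
        par_out.append(v)

    for u in u_bits:                     # switch in position A: data bits
        clock(u)
    for _ in range(m):                   # switch in position B: u = feedback, so a = 0
        clock(feedback())
    return sys_out, par_out

print(rsc_encode([1, 0, 1, 1]))          # systematic and parity sequences, with termination
```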
Figure E: Nonrecursive $r = 1/2$ and $K = 2$ convolutional encoder with input and output sequences.
Figure F: Recursive $r = 1/2$ and $K = 2$ convolutional encoder of Figure E with input and output sequences.
$T(D) = \dfrac{D^3}{1 - D}$, where N and J are neglected.
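Expanding this transfer function as a geometric series makes the weight spectrum explicit (a standard manipulation added here for completeness, not an additional result from the slides):
$$T(D) = \frac{D^3}{1-D} = D^3\,(1 + D + D^2 + \cdots) = D^3 + D^4 + D^5 + \cdots,$$
i.e., there is exactly one codeword of each weight $d \ge 3$, so the minimum free distance is $d_{\text{free}} = 3$.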
– The two codes have the same minimum free distance and can
be described by the same trellis structure.
– These two codes have different bit error rates, because the BER depends on the input–output correspondence of the encoders [b].
– It has been shown that the BER of a recursive convolutional code is lower than that of the corresponding nonrecursive convolutional code at low signal-to-noise ratios $E_b/N_0$ [c].
[b] S. Benedetto and G. Montorsi, “Unveiling Turbo Codes: Some Results on Parallel Concatenated Coding Schemes,” IEEE Transactions on Information Theory, Vol. 42, No. 2, pp. 409–428, March 1996.
[c] C. Berrou and A. Glavieux, “Near Optimum Error Correcting Coding and Decoding: Turbo-Codes,” IEEE Transactions on Communications, Vol. 44, No. 10, pp. 1261–1271, Oct. 1996.
Concatenation of codes
Interleaver design
Block interleaver
Circular-Shifting interleaver
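The two interleaver types named above can be sketched in a few lines of Python (a minimal illustration under the usual definitions, not code from the slides; the parameter names are mine):

```python
import numpy as np

def block_interleave(x, rows, cols):
    """Block interleaver: write the sequence row by row, read it out column by column."""
    assert len(x) == rows * cols
    return np.asarray(x).reshape(rows, cols).T.reshape(-1)

def circular_shift_interleave(x, a, s=0):
    """Circular-shifting interleaver: pi(i) = (a*i + s) mod N, with gcd(a, N) = 1."""
    N = len(x)
    assert np.gcd(a, N) == 1
    return np.asarray(x)[[(a * i + s) % N for i in range(N)]]
```

For a turbo code the same permutation is used at the encoder, and its inverse is needed inside the iterative decoder to deinterleave the extrinsic information.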
• At each time unit $l$, three output values are received from the channel: one for the information bit $u_l = v_l^{(0)}$, denoted by $r_l^{(0)}$, and two for the parity bits $v_l^{(1)}$ and $v_l^{(2)}$, denoted by $r_l^{(1)}$ and $r_l^{(2)}$; the $3K$-dimensional received vector is denoted by
$$\mathbf{r} = \left(r_0^{(0)} r_0^{(1)} r_0^{(2)},\ r_1^{(0)} r_1^{(1)} r_1^{(2)},\ \ldots,\ r_{K-1}^{(0)} r_{K-1}^{(1)} r_{K-1}^{(2)}\right).$$
• The mapping $0 \to -1$ and $1 \to +1$ is assumed.
where $E_s/N_0$ is the channel SNR, and $u_l$ and $r_l^{(0)}$ have both been normalized by a factor of $\sqrt{E_s}$.
(0) (1)
• The received soft channel L-valued Lc rl for ul and Lc rl for
(1)
vl enter decoder 1 along with a prior L–values of the
(1) (2)
information bits La (ul ) = Le (ul ).
• Subtracting the term in brackets, namely $L_c r_l^{(0)} + L_e^{(2)}(u_l)$, removes the effect of the current information bit $u_l$ from $L^{(1)}(u_l)$, leaving only the effect of the parity constraint, thus providing an independent estimate of the information bit $u_l$ to decoder 2 in addition to the received soft channel L-values at time $l$.
• The (properly interleaved) received soft channel L-value $L_c r_l^{(0)}$ for $u_l$ and the soft channel L-value $L_c r_l^{(2)}$ for $v_l^{(2)}$ enter decoder 2, along with the (properly interleaved) a priori L-values of the information bits $L_a^{(2)}(u_l) = L_e^{(1)}(u_l)$.
• Subtracting the term in brackets, namely $L_c r_l^{(0)} + L_e^{(1)}(u_l)$, removes the effect of the current information bit $u_l$ from $L^{(2)}(u_l)$, leaving only the effect of the parity constraint, thus providing an independent estimate of the information bit $u_l$ to decoder 1 in addition to the received soft channel L-values at time $l$.
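The extrinsic-information exchange described in the last four bullets can be outlined in code. The sketch below is only an illustration of the message flow under my own naming conventions; siso_decode stands in for any soft-in/soft-out (BCJR-type) decoder returning a posteriori L-values, and is assumed rather than defined here.

```python
import numpy as np

def turbo_iterations(Lc_r0, Lc_r1, Lc_r2, perm, siso_decode, n_iter=10):
    """Lc_r0, Lc_r1, Lc_r2: channel L-values (numpy arrays) for the systematic stream
    and the two parity streams; perm: interleaver permutation (index array);
    siso_decode(Lsys, Lpar, La) -> a posteriori L-values L(u_l)."""
    Lc_r0 = np.asarray(Lc_r0, float)
    K = len(Lc_r0)
    Le2 = np.zeros(K)                      # decoder 2 extrinsic values, start at zero
    inv = np.argsort(perm)                 # deinterleaver indices
    for _ in range(n_iter):
        # Decoder 1: a priori input is decoder 2's (deinterleaved) extrinsic output
        L1 = siso_decode(Lc_r0, Lc_r1, Le2)
        Le1 = L1 - (Lc_r0 + Le2)           # subtract the bracketed term
        # Decoder 2: everything related to u_l is interleaved first
        L2 = siso_decode(Lc_r0[perm], Lc_r2, Le1[perm])
        Le2 = (L2 - (Lc_r0[perm] + Le1[perm]))[inv]   # deinterleave back
    # Final hard decisions on the combined L-values
    return np.where(Lc_r0 + Le1 + Le2 >= 0, 1, 0)
```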
The received channel L-values are given by
$$L_c r_l^{(j)} = 4\left(\frac{E_s}{N_0}\right) r_l^{(j)} = r_l^{(j)}, \qquad l = 0, 1, 2, 3,\ \ j = 0, 1, 2,$$
since $4(E_s/N_0) = 1$ in this example. Again for purposes of illustration, a set of particular received channel L-values is given in Figure R(c).
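Computing these channel L-values from the received samples is a one-liner; the snippet below is only a sketch (es_n0 is my name for the linear, not dB, SNR value).

```python
import numpy as np

def channel_llr(r, es_n0):
    """Channel L-values for BPSK on an AWGN channel: Lc * r = 4 (Es/N0) * r."""
    return 4.0 * es_n0 * np.asarray(r)

# With Es/N0 = 1/4 as above, Lc = 1 and the L-values equal the received samples.
print(channel_llr([0.8, -0.3], 0.25))   # -> [ 0.8 -0.3 ]
```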
where
– $s'$ represents a state at time $l$ (denoted by $s' \in \sigma_l$),
– $s$ represents a state at time $l + 1$ (denoted by $s \in \sigma_{l+1}$),
– the sums are over all state pairs $(s', s)$ for which $u_l = +1$ or $-1$, respectively.
where the max* function is defined by $\max{}^*(x, y) \equiv \ln(e^x + e^y) = \max(x, y) + \ln\!\left(1 + e^{-|x - y|}\right)$ and the initial conditions are $\alpha_0^*(S_0) = \beta_4^*(S_0) = 0$ and $\alpha_0^*(S_1) = \beta_4^*(S_1) = -\infty$.
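A direct implementation of this max* operation (an illustrative sketch, not code from the slides):

```python
import math

def max_star(x, y):
    """Jacobian logarithm: ln(e^x + e^y) = max(x, y) + ln(1 + exp(-|x - y|))."""
    return max(x, y) + math.log1p(math.exp(-abs(x - y)))

print(max_star(1.0, 1.0))    # 1 + ln 2 ~= 1.693
print(max_star(5.0, -5.0))   # ~= 5.0000454
```

The correction term vanishes when $|x - y|$ is large, which is why max* is often approximated by a plain max (the max-log-MAP decoder), as in the numerical example later in this section.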
Computation of $L(u_0)$
where $L_e(u_0) \equiv L_c r_{p_0} + \beta_1^*(S_1) - \beta_1^*(S_0)$ represents the extrinsic a posteriori (output) L-value of $u_0$.
Computation of $L(u_1)$
$$
\begin{aligned}
L(u_1) ={} & \ln\big[p(s'=S_0, s=S_1, \mathbf{r}) + p(s'=S_1, s=S_0, \mathbf{r})\big] \\
& - \ln\big[p(s'=S_0, s=S_0, \mathbf{r}) + p(s'=S_1, s=S_1, \mathbf{r})\big] \\
={} & \max{}^*\big\{\big[\alpha_1^*(S_0) + \gamma_1^*(s'=S_0, s=S_1) + \beta_2^*(S_1)\big],\ \big[\alpha_1^*(S_1) + \gamma_1^*(s'=S_1, s=S_0) + \beta_2^*(S_0)\big]\big\} \\
& - \max{}^*\big\{\big[\alpha_1^*(S_0) + \gamma_1^*(s'=S_0, s=S_0) + \beta_2^*(S_0)\big],\ \big[\alpha_1^*(S_1) + \gamma_1^*(s'=S_1, s=S_1) + \beta_2^*(S_1)\big]\big\} \\
={} & \max{}^*\big\{\big[+\tfrac{1}{2}[L_a(u_1) + L_c r_{u_1}] + \tfrac{1}{2}L_c r_{p_1} + \alpha_1^*(S_0) + \beta_2^*(S_1)\big],\ \big[+\tfrac{1}{2}[L_a(u_1) + L_c r_{u_1}] - \tfrac{1}{2}L_c r_{p_1} + \alpha_1^*(S_1) + \beta_2^*(S_0)\big]\big\} \\
& - \max{}^*\big\{\big[-\tfrac{1}{2}[L_a(u_1) + L_c r_{u_1}] - \tfrac{1}{2}L_c r_{p_1} + \alpha_1^*(S_0) + \beta_2^*(S_0)\big],\ \big[-\tfrac{1}{2}[L_a(u_1) + L_c r_{u_1}] + \tfrac{1}{2}L_c r_{p_1} + \alpha_1^*(S_1) + \beta_2^*(S_1)\big]\big\} \\
={} & \big\{+\tfrac{1}{2}[L_a(u_1) + L_c r_{u_1}]\big\} - \big\{-\tfrac{1}{2}[L_a(u_1) + L_c r_{u_1}]\big\} \\
& + \max{}^*\big\{\big[+\tfrac{1}{2}L_c r_{p_1} + \alpha_1^*(S_0) + \beta_2^*(S_1)\big],\ \big[-\tfrac{1}{2}L_c r_{p_1} + \alpha_1^*(S_1) + \beta_2^*(S_0)\big]\big\} \\
& - \max{}^*\big\{\big[-\tfrac{1}{2}L_c r_{p_1} + \alpha_1^*(S_0) + \beta_2^*(S_0)\big],\ \big[+\tfrac{1}{2}L_c r_{p_1} + \alpha_1^*(S_1) + \beta_2^*(S_1)\big]\big\} \\
& \qquad \text{(using } \max{}^*(w+x,\ w+y) \equiv w + \max{}^*(x, y)\text{)} \\
={} & L_c r_{u_1} + L_a(u_1) + L_e(u_1)
\end{aligned}
$$
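The shift identity invoked in the last step follows directly from the definition of max* (a one-line check, added here for completeness):
$$\max{}^*(w+x,\ w+y) = \ln\!\left(e^{w+x} + e^{w+y}\right) = \ln\!\left(e^{w}\,(e^{x} + e^{y})\right) = w + \ln\!\left(e^{x} + e^{y}\right) = w + \max{}^*(x, y).$$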
Similarly, $L(u_2) = L_c r_{u_2} + L_a(u_2) + L_e(u_2)$, where
$$
\begin{aligned}
L_e(u_2) ={} & \max{}^*\big\{\big[+\tfrac{1}{2}L_c r_{p_2} + \alpha_2^*(S_0) + \beta_3^*(S_1)\big],\ \big[-\tfrac{1}{2}L_c r_{p_2} + \alpha_2^*(S_1) + \beta_3^*(S_0)\big]\big\} \\
& - \max{}^*\big\{\big[-\tfrac{1}{2}L_c r_{p_2} + \alpha_2^*(S_0) + \beta_3^*(S_0)\big],\ \big[+\tfrac{1}{2}L_c r_{p_2} + \alpha_2^*(S_1) + \beta_3^*(S_1)\big]\big\}
\end{aligned}
$$
and
$$L(u_3) = L_c r_{u_3} + L_a(u_3) + L_e(u_3),$$
where
$$L_e(u_3) = \big[-\tfrac{1}{2}L_c r_{p_3} + \alpha_3^*(S_1) + \beta_4^*(S_0)\big] - \big[-\tfrac{1}{2}L_c r_{p_3} + \alpha_3^*(S_0) + \beta_4^*(S_0)\big] = \alpha_3^*(S_1) - \alpha_3^*(S_0).$$
• We now need expressions for the terms $\alpha_1^*(S_0)$, $\alpha_1^*(S_1)$, $\alpha_2^*(S_0)$, $\alpha_2^*(S_1)$, $\alpha_3^*(S_0)$, $\alpha_3^*(S_1)$, $\beta_1^*(S_0)$, $\beta_1^*(S_1)$, $\beta_2^*(S_0)$, $\beta_2^*(S_1)$, $\beta_3^*(S_0)$, and $\beta_3^*(S_1)$ that are used to calculate the extrinsic a posteriori L-values $L_e(u_l)$, $l = 0, 1, 2, 3$.
• We use the shorthand notation $L_{u_l} = L_c r_{u_l} + L_a(u_l)$ and $L_{p_l} = L_c r_{p_l}$, $l = 0, 1, 2, 3$, for the intrinsic information-bit L-values and the parity-bit L-values, respectively.
$$
\begin{aligned}
\alpha_1^*(S_0) &= -\tfrac{1}{2}\,(L_{u_0} + L_{p_0}) \\
\alpha_1^*(S_1) &= +\tfrac{1}{2}\,(L_{u_0} + L_{p_0})
\end{aligned}
$$
$$
\begin{aligned}
\alpha_2^*(S_0) &= \max{}^*\big\{\big[-\tfrac{1}{2}(L_{u_1} + L_{p_1}) + \alpha_1^*(S_0)\big],\ \big[+\tfrac{1}{2}(L_{u_1} - L_{p_1}) + \alpha_1^*(S_1)\big]\big\} \\
\alpha_2^*(S_1) &= \max{}^*\big\{\big[+\tfrac{1}{2}(L_{u_1} + L_{p_1}) + \alpha_1^*(S_0)\big],\ \big[-\tfrac{1}{2}(L_{u_1} - L_{p_1}) + \alpha_1^*(S_1)\big]\big\} \\
\alpha_3^*(S_0) &= \max{}^*\big\{\big[-\tfrac{1}{2}(L_{u_2} + L_{p_2}) + \alpha_2^*(S_0)\big],\ \big[+\tfrac{1}{2}(L_{u_2} - L_{p_2}) + \alpha_2^*(S_1)\big]\big\} \\
\alpha_3^*(S_1) &= \max{}^*\big\{\big[+\tfrac{1}{2}(L_{u_2} + L_{p_2}) + \alpha_2^*(S_0)\big],\ \big[-\tfrac{1}{2}(L_{u_2} - L_{p_2}) + \alpha_2^*(S_1)\big]\big\}
\end{aligned}
$$
and
$$L_e(u_3) = \alpha_3^*(S_1) - \alpha_3^*(S_0).$$
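The forward recursion above maps directly onto code. The sketch below is my own illustration for this two-state example (array names Lu, Lp follow the shorthand just introduced), using the max* routine shown earlier:

```python
import math

def max_star(x, y):
    return max(x, y) + math.log1p(math.exp(-abs(x - y)))

def forward_recursion(Lu, Lp):
    """Forward metrics (alpha*_l(S0), alpha*_l(S1)) for the two-state trellis.
    Lu[l] = Lc*r_ul + La(ul) (intrinsic L-value), Lp[l] = Lc*r_pl (parity L-value)."""
    a0, a1 = 0.0, -math.inf                 # alpha*_0(S0) = 0, alpha*_0(S1) = -inf
    alphas = [(a0, a1)]
    for l in range(len(Lu) - 1):            # compute alpha*_{l+1} from alpha*_l
        n0 = max_star(-0.5 * (Lu[l] + Lp[l]) + a0, +0.5 * (Lu[l] - Lp[l]) + a1)
        n1 = max_star(+0.5 * (Lu[l] + Lp[l]) + a0, -0.5 * (Lu[l] - Lp[l]) + a1)
        a0, a1 = n0, n1
        alphas.append((a0, a1))
    return alphas
```

The backward metrics $\beta_l^*$ follow from the same branch structure run in the reverse direction, starting from the termination conditions $\beta_4^*(S_0) = 0$, $\beta_4^*(S_1) = -\infty$.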
$$
\begin{aligned}
\alpha_2^*(S_0) &\approx \max\{-0.70,\ 1.20\} = 1.20 \\
\alpha_2^*(S_1) &\approx \max\{-0.20,\ -0.30\} = -0.20 \\
\alpha_3^*(S_0) &\approx \max\big\{\big[-\tfrac{1}{2}(-1.8 + 1.1) + 1.20\big],\ \big[+\tfrac{1}{2}(-1.8 - 1.1) - 0.20\big]\big\} = \max\{1.55,\ -1.65\} = 1.55 \\
\alpha_3^*(S_1) &\approx \max\big\{\big[+\tfrac{1}{2}(-1.8 + 1.1) + 1.20\big],\ \big[-\tfrac{1}{2}(-1.8 - 1.1) - 0.20\big]\big\} = \max\{0.85,\ 1.25\} = 1.25
\end{aligned}
$$
$$L_e^{(1)}(u_2) \approx +1.4 \quad \text{and} \quad L_e^{(1)}(u_3) \approx -0.3$$
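These two $\alpha_3^*$ values are easy to check with the max-log (plain max) approximation used above; the labels below are inferred from the formulas in the previous step.

```python
# Quick check of alpha*_3 using the values appearing above
a2_S0, a2_S1 = 1.20, -0.20      # alpha*_2 values from the previous step
Lu2, Lp2 = -1.8, 1.1            # intrinsic and parity L-values at l = 2

a3_S0 = max(-0.5 * (Lu2 + Lp2) + a2_S0, +0.5 * (Lu2 - Lp2) + a2_S1)
a3_S1 = max(+0.5 * (Lu2 + Lp2) + a2_S0, -0.5 * (Lu2 - Lp2) + a2_S1)
print(a3_S0, a3_S1)             # 1.55 1.25, matching the values above
```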
- Now, using the facts that $L_{a(i)}^{(1)}(u_l) = L_{e(i-1)}^{(2)}(u_l)$ and $L_{a(i)}^{(2)}(u_l) = L_{e(i)}^{(1)}(u_l)$, and letting $Q(u_l)$ and $P(u_l)$ represent the a posteriori probability distributions at the outputs of decoders 1 and 2, respectively, we can write
$$L_{(i)}^{(Q)}(u_l) = L_c r_{u_l} + L_{e(i-1)}^{(P)}(u_l) + L_{e(i)}^{(Q)}(u_l)$$
and
$$L_{(i)}^{(P)}(u_l) = L_c r_{u_l} + L_{e(i)}^{(Q)}(u_l) + L_{e(i)}^{(P)}(u_l).$$
- We can write the difference in the two soft outputs as
$$L_{(i)}^{(P)}(u_l) - L_{(i)}^{(Q)}(u_l) = L_{e(i)}^{(P)}(u_l) - L_{e(i-1)}^{(P)}(u_l) \stackrel{\Delta}{=} \Delta L_{e(i)}^{(P)}(u_l);$$
that is, $\Delta L_{e(i)}^{(P)}(u_l)$ represents the difference in the extrinsic a posteriori L-values of decoder 2 in two successive iterations.
and
$$\big|L_{(i)}^{(Q)}(u_l)\big| = \operatorname{sgn}\!\big[L_{(i)}^{(Q)}(u_l)\big]\, L_{(i)}^{(Q)}(u_l) = \hat{u}_l^{(i)}\, L_{(i)}^{(Q)}(u_l),$$
- We now use the facts that once decoding has converged, the magnitudes of the a posteriori L-values are large; that is,
$$\big|L_{(i)}^{(P)}(u_l)\big| \gg 0 \quad \text{and} \quad \big|L_{(i)}^{(Q)}(u_l)\big| \gg 0,$$
- Noting that the magnitude of $\Delta L_{e(i)}^{(P)}(u_l)$ will be smaller than 1 when decoding converges, we can approximate the term $e^{-\hat{u}_l^{(i)} \Delta L_{e(i)}^{(P)}(u_l)}$ using the first two terms of its series expansion as follows:
$$e^{-\hat{u}_l^{(i)} \Delta L_{e(i)}^{(P)}(u_l)} \approx 1 - \hat{u}_l^{(i)}\, \Delta L_{e(i)}^{(P)}(u_l),$$
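The quantities just defined are the ingredients of an iteration stopping rule. As one possible simplified illustration (not the exact criterion developed from this expansion), decoding can be halted once the extrinsic updates of decoder 2 become negligible relative to the a posteriori magnitudes:

```python
import numpy as np

def should_stop(Le_P_prev, Le_P_curr, L_Q_curr, threshold=1e-3):
    """Simplified convergence check (illustrative only): stop when the change in
    decoder 2's extrinsic L-values is tiny compared with the a posteriori magnitudes."""
    delta = np.abs(np.asarray(Le_P_curr) - np.asarray(Le_P_prev))   # |Delta Le^(P)(ul)|
    return np.sum(delta) < threshold * np.sum(np.abs(L_Q_curr))
```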
Q&A