
Universitatea Tehnica Cluj-Napoca

8th April 2005

Turbo Codes: Principles and Applications
Catherine Douillard
ENST Bretagne
CNRS TAMCIC UMR 2872

Catherine.Douillard@enst-bretagne.fr

Turbo Codes: Contents


• Concatenated codes

• The turbo encoder

• Turbo decoding

• Performance

• Recent improvements in turbo coding

• Conclusion

Concatenated Codes

Concatenated Codes : Divide and Conquer


[Figure: Encoder 1 (outer: algebraic (RS) or convolutional) → π (interleaver) → Encoder 2 (inner: convolutional) → channel → Decoder 1 (inner) → π⁻¹ (de-interleaver) → Decoder 2 (outer)]

• First proposed by Forney in 1966 [1]


• Divide complexity between inner and outer decoding
• π and π⁻¹: de-interleaving breaks up error bursts at the inner decoder output
• A standard in many communication systems: CCSDS, DVB-S, …
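The burst-breaking role of the interleaver pair can be sketched with a small block interleaver (dimensions are illustrative, not from any standard): data are written row-wise and read column-wise, so consecutive channel errors land far apart after de-interleaving.

```python
# M columns, N rows: write row-wise, read column-wise (k = M*N)
M, N = 8, 6
k = M * N

def interleave(seq):
    # read column by column from the row-wise written matrix
    return [seq[r * M + c] for c in range(M) for r in range(N)]

def deinterleave(seq):
    # inverse mapping: write column-wise, read row-wise
    out = [0] * k
    idx = 0
    for c in range(M):
        for r in range(N):
            out[r * M + c] = seq[idx]
            idx += 1
    return out

# a burst of 4 consecutive errors in the interleaved (channel) domain...
burst = [1 if 10 <= i < 14 else 0 for i in range(k)]
spread = deinterleave(burst)
# ...is scattered into isolated errors after de-interleaving
positions = [i for i, e in enumerate(spread) if e]
```

With these dimensions the four adjacent errors end up separated by several positions, which is exactly what the outer decoder needs.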

Concatenated Codes: Performance Improvements
The performance of concatenated codes suffers from the fact that the inner decoder only provides "hard" (0 or 1) decisions to the outer decoder.

Step 1: Soft-decision decoding (~1.5 dB gain)


• Soft-Output Viterbi Algorithm (SOVA): Battail (1987) [2] and Hagenauer (1989) [3]
• Maximum A Posteriori (MAP) or BCJR algorithm (1974) [4]

Step 2: "Turbo" decoding: soft iterative decoding (<1 dB from channel capacity)
• Turbo Codes (Parallel Concatenation of Convolutional Codes): Berrou and Glavieux (1993)
• Turbo Product Codes: Pyndiah (1995)
• Iterative decoding of Serially Concatenated Convolutional Codes: Benedetto et al. (1996) …

Iterative ("Turbo") Decoding of Concatenated Codes: Principle (I)
- Based on a Soft Input Soft Output (SISO) elementary decoder
- Iterative decoding process (turbo effect)

[Figure: the received samples xi and y1i feed Decoder 1 (SISO); through π and π⁻¹, Decoder 2 (SISO) processes the interleaved systematic samples together with y2i and delivers the decision d̂i]

Channel model (antipodal mapping, Gaussian noise samples ni, n1i, n2i):
xi = (2Xi − 1) + ni  (Xi = di)
y1i = (2Y1i − 1) + n1i
y2i = (2Y2i − 1) + n2i

Iterative ("Turbo") Decoding of Concatenated Codes: Principle (II)
- Based on a Soft Input Soft Output (SISO) elementary decoder
- Iterative decoding process (turbo effect)

First iteration:

[Figure: Decoder 1 (SISO) processes xi and y1i, and Decoder 2 (SISO) processes its interleaved output together with y2i; subtracting the decoder input from its output yields the extrinsic information Zi]

Z: extrinsic information

Iterative ("Turbo") Decoding of Concatenated Codes: Principle (III)

Iteration p:

[Figure: at iteration p, the de-interleaved extrinsic information Zi from iteration p−1 is added to the inputs xi and y1i of Decoder 1 (SISO); Decoder 2 (SISO) then produces the new extrinsic information Zi for iteration p and the decision d̂i]
Parallel Concatenated Codes
[Figure: k data bits feed Code 1 directly (producing r1 redundancy bits) and Code 2 through the interleaver π (producing r2 redundancy bits)]

Global code rate: k/(k + r1 + r2)

• Parallel concatenation of two (or more) systematic encoders separated by interleavers
• Component codes can be algebraic or convolutional codes
• Historical turbo codes: code 1 and code 2 are two identical Recursive Systematic Convolutional (RSC) codes

Serial Concatenated Codes

[Figure: k data bits → Code 1 (rate k/p, p output bits, of which p − k are redundancy) → π → Code 2 (rate p/n, n output bits, of which n − p are redundancy)]

Global rate: k/n

• Component codes can be algebraic or convolutional codes
• SCCC: Serial Concatenation of Convolutional Codes
• Hybrid (serial/parallel) schemes can also be investigated

Parallel Concatenation vs Serial Concatenation:
Performance Comparison under Iterative Decoding
[Figure: BER vs Eb/N0 (dB), down to 10⁻⁸. The uncoded reference is BER = (1/2)·erfc(√(Eb/N0)), with erfc(x) = (2/√π) ∫ₓ^∞ exp(−t²) dt. Typical parallel concatenation behaviour: early convergence ("waterfall" region) followed by a flatter "error floor" region; typical serial concatenation behaviour: later convergence but a lower error floor. The asymptotic gain, reached beyond the convergence abscissa, is Ga = 10·log(R·dmin)]
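The uncoded reference curve quoted in the figure can be checked numerically; `math.erfc` is the standard complementary error function, so BER = (1/2)·erfc(√(Eb/N0)) is a one-liner.

```python
import math

def uncoded_ber(ebn0_db):
    """Uncoded BPSK/QPSK bit error rate: BER = 1/2 * erfc(sqrt(Eb/N0))."""
    ebn0 = 10 ** (ebn0_db / 10)          # dB -> linear
    return 0.5 * math.erfc(math.sqrt(ebn0))
```

Around Eb/N0 = 9.6 dB this evaluates to roughly 10⁻⁵, matching the familiar uncoded operating point against which coding gains are measured.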

Convolutional Turbo Codes

Turbo Codes Encoder

The turbo encoder involves a parallel concatenation of at least two elementary Recursive Systematic Convolutional (RSC) codes separated by an interleaver (π).

[Figure: data di → systematic output Xi and Code 1 → redundancy Y1i; di → π → Code 2 → redundancy Y2i. Natural coding rate: 1/3]

Parallel concatenation of two RSC codes, or "convolutional" turbo code

Recursive Systematic
Convolutional (RSC) Codes

Systematic Convolutional (SC) codes

[Figure: two rate-1/2 encoders with two memory cells (D D). NSC (Non-Systematic Convolutional, Elias code): both outputs Xi and Yi are parity bits. SC (Systematic Convolutional): Xi = di is transmitted unchanged, together with one parity bit Yi]

SC codes: Trellis diagram

[Figure: 4-state trellis sections for the NSC and SC encoders, branches labelled XiYi (solid: di = 0, dashed: di = 1). Free distance: df = 5 for the NSC code, df = 4 for the SC code]
SC codes: Bit Error Rate

[Figure: BER vs Eb/N0 (0 to 12 dB) for the uncoded, SC and NSC cases: the NSC code wins at high SNR thanks to its larger free distance, while the SC code behaves better at very low SNR]

Is there any convolutional code able to combine
• the advantageous df of non-systematic codes
• the good behaviour of systematic codes at very low SNRs?

The answer is: yes.

Recursive Systematic Convolutional (RSC) codes

[Figure: the NSC encoder with generators [(1+D²), (1+D+D²)] and the two RSC encoders derived from it, [1, (1+D+D²)/(1+D²)] and [1, (1+D²)/(1+D+D²)]: the systematic output is Xi = di and the parity Yi is produced by a feedback (recursive) shift register]
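One of the RSC encoders above, [1, (1+D²)/(1+D+D²)], can be sketched as a short shift-register routine; the recursion bit a is fed back through 1+D+D², and the parity taps are 1+D².

```python
def rsc_encode(bits):
    """Rate-1/2 RSC encoder [1, (1+D^2)/(1+D+D^2)]: systematic + parity."""
    s1 = s2 = 0                      # shift register contents (a_{i-1}, a_{i-2})
    systematic, parity = [], []
    for d in bits:
        a = d ^ s1 ^ s2              # feedback polynomial 1 + D + D^2
        parity.append(a ^ s2)        # parity polynomial   1 + D^2
        systematic.append(d)
        s1, s2 = a, s1               # shift the register
    return systematic, parity

x, y = rsc_encode([1, 0, 1, 1, 0])
```

The systematic branch simply copies the input (Xi = di), so all the coding work sits in the parity sequence.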

RSC codes: Trellis diagram

[Figure: 4-state trellis sections for the NSC and RSC encoders, branches labelled XiYi (solid: di = 0, dashed: di = 1). Both codes have the same free distance df = 5]

RSC codes: Bit Error Rate

[Figure: BER vs Eb/N0 (0 to 12 dB) for the uncoded, SC, NSC and RSC cases: the RSC code combines the df of the NSC code with the good low-SNR behaviour of the SC code]

The permutation

Turbo Codes Permutation

The 1st function of Π: to compensate for the vulnerability of a decoder to errors occurring in bursts.

[Figure: turbo encoder (Code 1 and, behind π, Code 2) and the corresponding decoder (Decoder 1 and, behind π, Decoder 2)]

Channel model:
xi = (2Xi − 1) + ni  (Xi = di)
y1i = (2Y1i − 1) + n1i
y2i = (2Y2i − 1) + n2i

Turbo Codes Permutation

The 2nd function of Π: the way Π is defined governs the behaviour at low BER (< 10⁻⁵).

[Figure: turbo encoder, data di → Code 1 (Y1i) and, through π, Code 2 (Y2i)]

The fundamental aim of the permutation: if the direct sequence is RTZ (return to zero), minimize the probability that the permuted sequence is also RTZ, and vice-versa.

Turbo Codes Permutation

[Figure: turbo encoder, data di → Code 1 (Y1i) and, through π, Code 2 (Y2i)]

Codeword distance: d = w(X1 … Xk) + w(Y1,1 … Y1,k) + w(Y2,1 … Y2,k)

Asymptotic bit error rate:
BERa ≈ (1/2)·(wmin/k)·erfc(√(R·dmin·Eb/N0))

The fundamental aim of the permutation: if the direct sequence is RTZ, minimize the probability that the permuted sequence is also RTZ, and vice-versa.
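The asymptotic BER estimate above is easy to evaluate; the code parameters below (rate, dmin, wmin, k) are illustrative placeholders, not taken from any particular turbo code.

```python
import math

def asymptotic_ber(ebn0_db, rate, d_min, w_min, k):
    """Asymptotic estimate BER_a ~ (1/2)(w_min/k) erfc(sqrt(R*d_min*Eb/N0))."""
    ebn0 = 10 ** (ebn0_db / 10)          # dB -> linear
    return 0.5 * (w_min / k) * math.erfc(math.sqrt(rate * d_min * ebn0))

# doubling d_min at fixed Eb/N0 pushes the error floor down
floor_low_dmin = asymptotic_ber(3.0, 0.5, 10, 2, 1000)
floor_high_dmin = asymptotic_ber(3.0, 0.5, 20, 2, 1000)
```

This makes the role of the permutation concrete: for fixed wmin and k, only a larger dmin lowers the floor, which is why the interleaver design targets the minimum distance.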

Permutation: Regular

[Figure: the k data are written row-wise into an N-row × M-column memory (addresses 0 to k−1 in natural order) and read column-wise]

Condition: k = M·N

Permutation: Regular

Another representation: data in natural order at addresses i = 0 … k−1 and data in interleaved order at addresses j = 0 … k−1 are linked by the circular relation

i = Π(j) = P·j mod k,  for j = 0 … k−1

where P and k are relatively prime integers. Thus Π(0) = 0, Π(1) = P, Π(2) = 2P, Π(3) = 3P, …
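The circular relation above is a two-line function; the coprimality condition is exactly what makes it a bijection (the values of k and P below are illustrative).

```python
import math

def regular_permutation(k, P):
    """Regular permutation i = Pi(j) = P*j mod k; bijective iff gcd(P, k) = 1."""
    assert math.gcd(P, k) == 1, "P and k must be relatively prime"
    return [(P * j) % k for j in range(k)]

perm = regular_permutation(48, 11)
```

Sorting the output and recovering 0 … k−1 confirms that every address is hit exactly once.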

Permutation: Regular

Regular permutation is appropriate for input weight 2.

[Figure: a weight-2 RTZ pattern (two "1"s separated by the encoder period, here 7) written in one row of the N×M matrix]

k → ∞ ⇒ d(w=2) → ∞

Permutation: Regular

Regular permutation is appropriate for input weight 3.

[Figure: a weight-3 RTZ pattern (e.g. 0110100000…, period 7) written in one row of the N×M matrix]

k → ∞ ⇒ d(w=3) → ∞

Permutation: Regular

Regular permutation is not appropriate for input weight 4.

[Figure: a rectangular weight-4 RTZ pattern, one weight-2 pair per row in two rows separated by a multiple of the period: both the direct and the permuted sequences are RTZ]

k → ∞ ⇒ d(w=4) is limited

Permutation: Regular

Regular permutation is not appropriate for other values of w.

[Figure: a composite RTZ pattern built from weight-3 rows placed in several rows of the matrix]

So, let us introduce disorder, but not in any manner!

A good permutation must ensure:
(1) maximum scattering of data
(2) maximum disorder in the permuted sequence
(these two conditions are in conflict)

Permutation: pseudo-random (= controlled disorder)

Everyone has their own tricks. For instance, the turbo code permutation of DVB-RCS/RCT (ETSI EN 301 790 and 301 958):

Built from a regular permutation:
i = Π(j) = (P·j + Q) mod N,  for j = 0 … N−1
where P and N are relatively prime integers, so Π(0) = Q, Π(1) = P+Q, Π(2) = 2P+Q, Π(3) = 3P+Q, …

Introduction of controlled disorder (here with maximum degree 4):
j mod 4 = 0 ⇒ Q = P1
j mod 4 = 1 ⇒ Q = P2
j mod 4 = 2 ⇒ Q = P3
j mod 4 = 3 ⇒ Q = P4
where P1 … P4 are small integers depending on the block size.
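The recipe can be sketched as follows. The offsets P1 … P4 below are illustrative placeholders, not the values tabulated in ETSI EN 301 790; as a simple sufficient condition for the map to stay one-to-one, they are chosen as multiples of 4 with P ≡ 1 mod 4 and N divisible by 4.

```python
import math

def dvb_rcs_like_permutation(N, P, offsets):
    """Regular permutation P*j mod N plus a 4-periodic offset Q = offsets[j % 4]."""
    assert math.gcd(P, N) == 1
    return [(P * j + offsets[j % 4]) % N for j in range(N)]

# illustrative parameters only (N, P, P1..P4 are NOT the standard's values)
perm = dvb_rcs_like_permutation(64, 9, offsets=(0, 8, 4, 12))
```

In the standard itself the parameters are tabulated per block size precisely so that this disorder is "controlled": the permutation stays bijective while breaking the regularity that limits d(w=4).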

Permutation

In conclusion: there is no ideal permutation for the time being (does it exist?).

Turbo Decoding

Soft-In Soft-Out (SISO) algorithms

Turbo decoding SISO algorithms

Soft-Input Soft-Output (SISO) decoding algorithms are necessary for turbo decoding.

Two families of decoding algorithms:
• The Viterbi-based SISO algorithms [2][3]
• The MAP algorithm and its approximations [4]

Turbo decoding SISO algorithms: MAP

The MAP (Maximum A Posteriori) algorithm:

• Also known as the BCJR (Bahl, Cocke, Jelinek, Raviv), APP (A Posteriori Probability), or forward-backward algorithm
• Aim of the algorithm: computation of the Log-Likelihood Ratio (LLR) relative to data di:

Λ(di) = ln [ Pr{di = 1 | R1k} / Pr{di = 0 | R1k} ]

Sign ⇒ hard decision; magnitude ⇒ reliability.

R1k is the noisy received sequence: R1k = {R1, …, Rk} with Ri = (xi, yi).

Turbo decoding SISO algorithms: MAP

• The principle of the MAP algorithm: process separately the data available between steps 1 and i and between steps i+1 and k.
⇒ Introduction of the forward state probabilities αi(m) and backward state probabilities βi(m), i = 1 … k:

αi(m) = Pr{Si = m, R1i}
βi(m) = Pr{Ri+1k | Si = m}

Turbo decoding SISO algorithms: MAP

• One can show that:

Λ(di) = ln [ Σm Σm′ γ1(Ri, m′, m) αi−1(m′) βi(m) / Σm Σm′ γ0(Ri, m′, m) αi−1(m′) βi(m) ]

where γj(Ri, m′, m) represents the branch likelihood between states m′ and m when the received symbol is Ri at step i.

Turbo decoding SISO algorithms: MAP

Example (4-state trellis, states 0 to 3, solid branches di = 0, dashed branches di = 1):

λi(2) for di = 0: γ0(Ri, 1, 2) αi−1(1) βi(2)
λi(2) for di = 1: γ1(Ri, 0, 2) αi−1(0) βi(2)

Λ(di) = ln [ γ1(Ri,1,0) αi−1(1) βi(0) + γ1(Ri,2,1) αi−1(2) βi(1) + γ1(Ri,0,2) αi−1(0) βi(2) + γ1(Ri,3,3) αi−1(3) βi(3) ]
      − ln [ γ0(Ri,0,0) αi−1(0) βi(0) + γ0(Ri,3,1) αi−1(3) βi(1) + γ0(Ri,1,2) αi−1(1) βi(2) + γ0(Ri,2,3) αi−1(2) βi(3) ]

Turbo decoding SISO algorithms: MAP

• Forward recursion: computation of αi(m), recursively from

αi(m) = Σm′ Σj αi−1(m′) γj(Ri, m′, m)

All αi(m) are computed during the forward recursion through the trellis from the initial values α0(m).

• Backward recursion: computation of βi(m′), recursively from

βi(m′) = Σm Σj βi+1(m) γj(Ri+1, m′, m)

All βi(m′) are computed during the backward recursion through the trellis from the final values βk(m).

Turbo decoding SISO algorithms: MAP

• Branch likelihoods: computation of γj(Ri, m′, m).

If there is no transition between m′ and m in the trellis, or if the transition is not labelled with di = j:
⇒ γj(Ri, m′, m) = 0

Otherwise, for a transmission over a Gaussian channel:

γj(Ri, m′, m) = Pr{di = j} × (1/(2πσ²)) × exp( −‖Ri − Ci‖² / (2σ²) )

where Ci is the expected symbol along the branch from state m′ to state m and Ri is the received symbol.
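The three ingredients above (branch likelihoods, forward and backward recursions, LLR) fit in a short toy decoder. This is a didactic sketch for the 4-state RSC code [1, (1+D²)/(1+D+D²)] in the probability domain with per-step normalisation, uniform a priori Pr{di = j} = 1/2 and the constant 1/(2πσ²) dropped (both cancel in the LLR), and an unterminated trellis; it is not the log-domain implementation used in practice.

```python
import math

def step(m, d):
    """Trellis transition: returns (next state, (X, Y) branch labels)."""
    s1, s2 = m >> 1, m & 1
    a = d ^ s1 ^ s2                      # feedback 1 + D + D^2
    return (a << 1) | s1, (d, a ^ s2)    # parity   1 + D^2

def bcjr_llrs(received, sigma):
    k, n_states = len(received), 4

    def gamma(r, labels):                # branch likelihood, Gaussian channel
        x, y = labels
        cx, cy = 2 * x - 1, 2 * y - 1    # BPSK mapping 0/1 -> -1/+1
        return math.exp(-((r[0] - cx) ** 2 + (r[1] - cy) ** 2) / (2 * sigma ** 2))

    # forward recursion from the zero state
    alpha = [[0.0] * n_states for _ in range(k + 1)]
    alpha[0][0] = 1.0
    for i in range(k):
        for m in range(n_states):
            for d in (0, 1):
                nxt, lab = step(m, d)
                alpha[i + 1][nxt] += alpha[i][m] * gamma(received[i], lab)
        norm = sum(alpha[i + 1])
        alpha[i + 1] = [a / norm for a in alpha[i + 1]]

    # backward recursion; unterminated trellis: all final states equiprobable
    beta = [[1.0 / n_states] * n_states for _ in range(k + 1)]
    for i in range(k - 1, -1, -1):
        beta[i] = [0.0] * n_states
        for m in range(n_states):
            for d in (0, 1):
                nxt, lab = step(m, d)
                beta[i][m] += beta[i + 1][nxt] * gamma(received[i], lab)
        norm = sum(beta[i])
        beta[i] = [b / norm for b in beta[i]]

    # LLR(d_i): sum over d = 1 branches against sum over d = 0 branches
    llrs = []
    for i in range(k):
        num = den = 1e-300               # guard against log(0)
        for m in range(n_states):
            for d in (0, 1):
                nxt, lab = step(m, d)
                p = alpha[i][m] * gamma(received[i], lab) * beta[i + 1][nxt]
                if d:
                    num += p
                else:
                    den += p
        llrs.append(math.log(num / den))
    return llrs

# noiseless check: encode a short block, then decode it
data = [1, 0, 1, 1, 0]
m, rx = 0, []
for d in data:
    m, (x, y) = step(m, d)
    rx.append((2 * x - 1, 2 * y - 1))
llrs = bcjr_llrs(rx, sigma=1.0)
```

On a noiseless block the sign of each LLR reproduces the data, and the magnitudes give the reliabilities described on the earlier slide.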

Turbo decoding SISO algorithms: Log-MAP and Max-Log-MAP

The MAP algorithm in the logarithmic domain (Log-MAP) [12]:

Multiplications ⇒ additions
Exponentials in branch probabilities disappear
Additions ⇒ ?

Solution 1: Log-MAP

ln(eᵃ + eᵇ) = max*(a, b) = max(a, b) + ln(1 + e^(−|a−b|))

The term ln(1 + e^(−|a−b|)) is precomputed and stored in a lookup table.

☺ Performance = MAP
☹ Knowledge of σ is necessary
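The two log-domain substitutes for ln(eᵃ + eᵇ) compare directly; the correction term is exact (Jacobian logarithm), while Max-Log-MAP simply drops it.

```python
import math

def max_star(a, b):
    """Log-MAP: max plus the exact correction term ln(1 + e^-|a-b|)."""
    return max(a, b) + math.log1p(math.exp(-abs(a - b)))

def max_log(a, b):
    """Max-Log-MAP: the correction term is dropped."""
    return max(a, b)

exact = math.log(math.exp(1.2) + math.exp(0.7))
```

In hardware the correction term is the lookup-table entry mentioned above; `max_log` is what remains when the table is removed, which is where the few tenths of a dB are lost.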

Turbo decoding SISO algorithms: Log-MAP and Max-Log-MAP

The MAP algorithm in the logarithmic domain (Max-Log-MAP) [12][13]:

Solution 2: Max-Log-MAP

ln(eᵃ + eᵇ) ≈ max(a, b)

☹ Performance < MAP (a few tenths of a dB)
☺ Knowledge of σ is not necessary

Turbo decoding SISO algorithms: Log-MAP and Max-Log-MAP

• In the log domain, one can show that Λ(di) can be written as:

Λ(di) = xi + Zi

[Figure: a SISO decoder with inputs xi and yi and output Λ(di)]

Zi is the extrinsic information provided by the decoder.

Turbo decoding: The turbo principle

The turbo decoder structure:

[Figure: Decoder 1 (SISO) processes x1i, y1i and the de-interleaved extrinsic information Z2 from Decoder 2; Decoder 2 (SISO) processes the interleaved systematic samples x2i, y2i and the extrinsic information Z1 from Decoder 1. Each extrinsic term is obtained by subtracting the decoder's inputs from its output, and the decision d̂i is taken after de-interleaving]

Through extrinsic information, each decoder uses both redundancies.

Fundamental principle: a decoder must not reuse a piece of information provided by itself.
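The bookkeeping behind that principle can be shown on a toy example. Here each "parity" is just an independent noisy repetition of the data bit, so the exact SISO output is a plain sum of LLRs; the `siso` helper and the LLR values are illustrative, not part of a real decoder. The point is the subtraction: the extrinsic output is the a posteriori LLR minus everything the decoder received as input, so no decoder feeds its own information back to itself.

```python
def siso(intrinsic, a_priori, parity_llr):
    """Toy SISO for a repetition 'parity': exact a posteriori is a sum of LLRs."""
    a_posteriori = intrinsic + a_priori + parity_llr
    extrinsic = a_posteriori - intrinsic - a_priori   # keep only NEW information
    return extrinsic

x, y1, y2 = 0.8, -0.3, 1.1        # channel LLRs (illustrative values)
z2 = 0.0                          # no a priori at the first iteration
for _ in range(3):                # the turbo loop
    z1 = siso(x, z2, y1)          # extrinsic from decoder 1
    z2 = siso(x, z1, y2)          # extrinsic from decoder 2
decision = x + z1 + z2            # final LLR: intrinsic + both extrinsics
```

Because each extrinsic term carries only the redundancy its decoder actually observed, the final LLR equals x + y1 + y2: every observation is counted exactly once, never twice.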

Turbo codes

Performance

Turbo Codes Performance

[Figure (ICC'93): BER vs Eb/N0 (dB), Gaussian channel, QPSK modulation, code rate 1/2, MAP decoding, 256×256 interleaving. The encoder is the parallel concatenation of two 16-state RSC codes (generators 37, 21 in octal). Curves for iterations #1, #2, #3, #6 and #18: each iteration moves the waterfall closer to capacity, reaching BER = 10⁻⁵ near 0.7 dB at the 18th iteration]

Theoretical limit (binary-input channel): 0.2 dB

Recent improvements in
turbo codes

Circular Recursive Systematic
Convolutional (CRSC) Codes

Improvements: Trellis termination

In many applications, block coding is required: how to transform a convolutional code into a block code?

Inserting tail bits (encoder state returns to 0):
☺ easy to implement
☹ the transmission of ν additional bits is needed
☹ initial and final states are singular states

Adopting circular encoding (= tail-biting) [9][10]:
☺ coding rate remains unchanged
☺ the trellis can be regarded as a circle without any singularity
☹ a precoding step is necessary
Improvements: Circular encoding

[Figure: an 8-state recursive encoder (registers D D D) and its trellis, states 000 to 111, for the data block 0 1 1 0 1 1 0 1]

Improvements: Circular encoding

[Figure: the 8-state recursive encoder (registers D D D) and its trellis, states 000 to 111]

Improvements: Circular encoding

It can be shown that, provided the data block length k is not a multiple of the LFSR period Penc:
• For a given information block, there is one and only one state Sc (the circulation state) such that Sc = S0 = Sk
• Sc can be easily computed from Sc = f(k mod Penc, Sk0), where Sk0 is the final encoder state when encoding starts from state 0

Improvements: Circular encoding

• Pre-coding: set the encoder to 0 (= 000); encode the information block and record the final state Sk = Sk0; compute Sc
• Set the encoder to Sc
• Encode

[Figure: the 8-state trellis for the data block 0 1 1 0 1 1 0 1, encoded from the circulation state]
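The defining property of the circulation state can be sketched with a small encoder. The standard computes Sc from the precomputed table f(k mod Penc, Sk0); as an illustration only, the sketch below (using the 4-state recursion 1+D+D², whose LFSR period is Penc = 3, rather than the slide's 8-state encoder) finds Sc by brute force as the unique state the encoder returns to after the whole block.

```python
def next_state(m, d):
    s1, s2 = m >> 1, m & 1
    a = d ^ s1 ^ s2                  # feedback 1 + D + D^2 (4-state RSC)
    return (a << 1) | s1

def final_state(start, bits):
    m = start
    for d in bits:
        m = next_state(m, d)
    return m

def circulation_state(bits):
    # Sc is the unique state with final_state(Sc) == Sc; uniqueness holds
    # because len(bits) is not a multiple of the LFSR period Penc
    candidates = [m for m in range(4) if final_state(m, bits) == m]
    assert len(candidates) == 1
    return candidates[0]

block = [1, 0, 1, 1, 0, 1, 0]        # k = 7, not a multiple of Penc = 3
sc = circulation_state(block)
```

Encoding the block a second time from Sc then closes the trellis on itself, which is exactly the tail-biting condition S0 = Sk without any extra transmitted bits.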

Decoding CRSC codes

Double-binary Turbo Codes

Improvements: Double-binary codes

[Figure: binary turbo code (data bits → C1 and, through Π, → C2; parities Y1, Y2; R = 1/3, two rate-1/2 component codes) vs double-binary turbo code (data couples (A, B) → C1 and, through Π, → C2; R = 1/2, two rate-2/3 component codes)]

Qualitative Behaviour of Concatenated Codes under Iterative Decoding

[Figure: BER vs Eb/N0 (dB), QPSK, AWGN channel, target BER 10⁻⁸. The uncoded reference is (1/2)·erfc(√(Eb/N0)), with erfc(x) = (2/√π) ∫ₓ^∞ exp(−t²) dt. One family of curves shows bad convergence but a high asymptotic gain; the other shows good convergence but a low asymptotic gain. Ga = 10·log(R·dmin)]

Improvements: Double-binary codes

Contribution to the convergence properties of TCs:

[Figure: with a binary code, an erroneous path in one component decoder can lock error patterns over the k steps of its trellis; with a double-binary code each component trellis spans only k/2 steps, which decreases the correlation between the two decoders when decoding]

Improvements: Double-binary codes

Contribution to the minimum distance: intra-symbol permutation. (A, B) becomes (B, A) before vertical encoding, once out of two.

[Figure: examples of low-weight rectangular error patterns in the permutation matrix: the patterns shown no longer exist once couples are periodically inverted, and only a few error patterns remain]

Thanks to this technique, the minimum distances of turbo codes are increased.

Improvements: Double-binary codes

Other advantages of double-binary codes:
• Latency divided by 2 (encoding and decoding)
• Data rate doubled (intrinsic parallelism)
• Less degradation when going from MAP to Max-Log-MAP

Improvements: Double-binary codes

Example of performance:

[Figure: BER vs Eb/N0 (dB) at iterations #1, #5, #10, #15 and #20. QPSK modulation, R = 1/2 (binary-input channel), AWGN channel, duo-binary 8-state turbo code, large block size, MAP algorithm, no quantization. The final curve converges about 0.35 dB away from the Shannon limit: the residual loss in turbo decoding]

Conclusion

Convolutional Turbo Codes: Current standards

Application           | Turbo code          | Rates
CCSDS                 | 16-state, binary    | 1/6, 1/4, 1/3, 1/2
UMTS (data), CDMA2000 | 8-state, binary     | 1/4, 1/3, 1/2
DVB-RCS               | 8-state, duo-binary | 1/3 up to 6/7
DVB-RCT               | 8-state, duo-binary | 1/2, 3/4
INMARSAT (M4)         | 16-state, binary    | 1/2
Eutelsat (Skyplex)    | 8-state, duo-binary | 4/5, 6/7

Convolutional Turbo Codes: Example of performance

[Figure: FER vs Eb/N0 (dB) for duo-binary turbo codes with 8-state and 16-state components at R = 1/2, 2/3 and 3/4. QPSK modulation, AWGN channel, 1504-bit data blocks, Max-Log-MAP algorithm, 4-bit quantization, 8 iterations. The theoretical limits for R = 1/2, 2/3 and 3/4 are marked]

Turbo codes and modulations

[Figure: FER vs Eb/N0 (dB) for QPSK, 8-PSK, 16-QAM and 64-QAM with 8-state and 16-state turbo codes; the gaps to the theoretical limits range from about 0.6 dB to 1.7 dB. Gaussian channel, 188-byte blocks, R = 2/3, Max-Log-MAP decoding, 8 iterations]

Turbo codes: hot topics

Reduction of decoding complexity (decreasing the number of iterations, …)
Search for robust permutations
Methods for quick computation of dmin
Design of analogue turbo decoders
Optimisation of turbo-coded systems for non-Gaussian channels

Turbo Communications:
• Turbo demodulation
• Turbo equalisation
• Turbo detection
• Turbo synchronisation
• Source turbo decoding

[Figure: the "turbo processor": a probabilistic processor with four ports, exchanging local intrinsic information, local extrinsic information and shared intrinsic information with neighbouring processors]

Turbo codes: References

[1] G. D. Forney, Jr., Concatenated Codes, MIT Press, 1966.
[2] G. Battail, "Pondération des symboles décodés par l'algorithme de Viterbi," Ann. Télécomm., vol. 42, no. 1-2, pp. 31-38, Jan. 1987.
[3] J. Hagenauer and P. Hoeher, "A Viterbi algorithm with soft-decision outputs and its applications," in Proc. IEEE Globecom'89, Dallas, Texas, Nov. 1989, pp. 4711-4717.
[4] L. R. Bahl, J. Cocke, F. Jelinek, and J. Raviv, "Optimal decoding of linear codes for minimizing symbol error rate," IEEE Trans. Inform. Theory, vol. IT-20, pp. 284-287, 1974.
[5] C. Berrou, A. Glavieux, and P. Thitimajshima, "Near Shannon limit error-correcting coding and decoding: Turbo-codes," Proc. ICC'93, Geneva, Switzerland, May 1993, pp. 1064-1070.
[6] C. Berrou and A. Glavieux, "Near optimum error correcting coding and decoding: Turbo codes," IEEE Trans. Commun., vol. 44, no. 10, pp. 1261-1271, Oct. 1996.
[7] R. Pyndiah, "Near optimum decoding of product codes: Block turbo codes," IEEE Trans. Commun., vol. 46, no. 8, pp. 1003-1010, Aug. 1998.
[8] S. Benedetto, D. Divsalar, G. Montorsi, and F. Pollara, "Serial concatenation of interleaved codes: Performance analysis, design and iterative decoding," IEEE Trans. Inform. Theory, vol. 44, no. 3, pp. 909-926, May 1998.

Turbo codes: References

[9] C. Weiss, C. Bettstetter, and S. Riedel, "Code construction and decoding of parallel concatenated tail-biting codes," IEEE Trans. Inform. Theory, vol. 47, no. 1, pp. 366-386, Jan. 2001.
[10] C. Berrou, C. Douillard, and M. Jézéquel, "Multiple parallel concatenation of circular recursive convolutional (CRSC) codes," Annals of Telecommun., vol. 54, no. 3-4, pp. 166-172, 1999.
[11] C. Berrou, P. Adde, E. Angui, and S. Faudeil, "A low complexity soft-output Viterbi decoder architecture," Proc. ICC'93, Geneva, Switzerland, May 1993, pp. 737-740.
[12] P. Robertson, E. Villebrun, and P. Hoeher, "A comparison of optimal and sub-optimal MAP decoding algorithms operating in the log domain," Proc. ICC'95, Seattle, WA, 1995, pp. 1009-1013.
[13] A. J. Viterbi, "An intuitive justification and a simplified implementation of the MAP decoder for convolutional codes," IEEE Journal on Selected Areas in Comm., vol. 16, no. 2, Feb. 1998.
[14] D. Divsalar, S. Dolinar, and F. Pollara, "Iterative turbo decoder analysis based on density evolution," IEEE JSAC, vol. 19, no. 5, pp. 891-907, May 2001.
[15] S. ten Brink, "Convergence behavior of iteratively decoded parallel concatenated codes," IEEE Trans. Commun., vol. 49, pp. 1727-1737, Oct. 2001.

General information about Turbo Codes
• C. Heegard and S. B. Wicker, Turbo Coding, Kluwer Academic Publishers, 1999
• B. Vucetic and J. Yuan, Turbo Codes: Principles and Applications, Kluwer Academic Publishers, 2000
• C. Schlegel, Trellis Coding, IEEE Press, 1997
• B. J. Frey, Graphical Models for Machine Learning and Digital Communication, MIT Press, 1998
• R. Johannesson and K. Sh. Zigangirov, Fundamentals of Convolutional Coding, IEEE Press, 1999
• L. Hanzo et al., Adaptive Wireless Transceivers, John Wiley & Sons, 2002
• L. Hanzo et al., Turbo Coding, Turbo Equalisation and Space-Time Coding for Transmission over Fading Channels, John Wiley & Sons, 2002
• IEEE Communications Magazine, vol. 41, no. 8, August 2003: Capacity approaching codes, iterative decoding, and their applications
• Web site: http://www-turbo.enst-bretagne.fr/2emesymposium/presenta/turbosit.htm is a good starting point
