
ESE576: Project

Analysis of Turbo and Convolutional Codes in BPSK-AWGN Systems
Charles Jeon, Edin Kadric
April 29, 2012
1 Introduction
There have been tremendous advancements in telecommunication systems since Claude E. Shannon published his landmark paper, A Mathematical Theory of Communication, in 1948. From Shannon's Channel Coding Theorem, we know that there exists a coding scheme for a discrete memoryless channel (DMC) such that all rates below capacity $C$ are achievable. Formally speaking, for every rate $R < C$, there exists a sequence of $(2^{nR}, n)$ codes with maximum probability of error $\lambda^{(n)} \to 0$. Hence, we know that there exist block codes that allow transmission with low probability of error for sufficiently large block length. As a result, much research has been devoted to finding coding schemes that achieve a low probability of error while remaining simple enough that efficient encoding and decoding are possible. This project focuses on the more recent developments in coding theory, Turbo Codes and Convolutional Codes, and their performance in BPSK-AWGN systems.
2 Communication
For this project, we use Binary Phase-Shift Keying (BPSK) for communication over an Additive White Gaussian Noise (AWGN) channel. Note that the BPSK system is equivalent to a 2-PAM system.

Figure 1: BPSK System (signal points at $+\sqrt{E_b}$ and $-\sqrt{E_b}$)
If we assign $+\sqrt{E_b}$ to $x_i = 1$ and $-\sqrt{E_b}$ to $x_i = 0$, and assuming that the noise is zero-mean AWGN with power spectral density $N_0/2$, the probability of error is given as

$$P_e = \int_{\sqrt{E_b}}^{\infty} \frac{1}{\sqrt{2\pi (N_0/2)}} \exp\left(-\frac{x^2}{N_0}\right) dx = \int_{\sqrt{2E_b/N_0}}^{\infty} \frac{1}{\sqrt{2\pi}} \exp\left(-\frac{u^2}{2}\right) du = Q\left(\sqrt{\frac{2E_b}{N_0}}\right)$$

where we usually denote $\mathrm{SNR} = E_b/N_0$.
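As a quick numerical illustration, the Python sketch below evaluates $Q(\sqrt{2E_b/N_0})$ and compares it against a short Monte-Carlo run of the BPSK-AWGN system; the function names and the unit-energy assumption ($E_b = 1$) are our own.

```python
import numpy as np
from scipy.special import erfc

def q_func(x):
    # Q(x) written via the complementary error function
    return 0.5 * erfc(x / np.sqrt(2.0))

def uncoded_bpsk_ber(ebn0_db, n_bits=200_000, seed=0):
    # Monte-Carlo estimate of the uncoded BPSK error rate in AWGN (Eb = 1)
    rng = np.random.default_rng(seed)
    ebn0 = 10.0 ** (ebn0_db / 10.0)
    bits = rng.integers(0, 2, n_bits)
    symbols = 2.0 * bits - 1.0                                    # 0 -> -sqrt(Eb), 1 -> +sqrt(Eb)
    noise = rng.normal(0.0, np.sqrt(1.0 / (2.0 * ebn0)), n_bits)  # noise variance N0/2
    return np.mean(((symbols + noise) > 0.0) != bits)

for ebn0_db in (0, 2, 4, 6, 8):
    theory = q_func(np.sqrt(2.0 * 10.0 ** (ebn0_db / 10.0)))
    print(ebn0_db, theory, uncoded_bpsk_ber(ebn0_db))
```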
3 Block Coding
The simplest form of block coding is to add redundancy to our codes by repetition. Although this achieves an arbitrarily low probability of error, the rate of the code decreases significantly with the block length. Therefore, while such a code is simple, it is not a very useful coding scheme for communication. Instead of repeating codes, we can rearrange our codes in an intelligent fashion so that the extra bits can be used to determine the existence of errors. Hamming Codes are the simplest example of these error-detecting codes, coding schemes targeted at detecting errors and correcting them. We present the simplest version, the (7, 4, 3) Hamming Code, in this section. We consider binary codes of length 7, along with the following matrix, whose columns form the set of all non-zero binary vectors of length 3.
$$H = \begin{bmatrix} 0 & 0 & 0 & 1 & 1 & 1 & 1 \\ 0 & 1 & 1 & 0 & 0 & 1 & 1 \\ 1 & 0 & 1 & 0 & 1 & 0 & 1 \end{bmatrix}$$
Using the rank-nullity theorem, the null space of $H$ has dimension 4, which translates into $2^4 = 16$ codewords, such as 0111100. Denote these 16 codewords as $c_i$, where $i = 0, \ldots, 15$. Since these belong to the null space of $H$, we have $Hc_i = 0$. Now let $e_j$ denote the vector that is 0 everywhere and 1 in the $j$-th position. Assuming a single bit error, a codeword corrupted at position $j$ is defined as $r = c_i + e_j$. Multiplying by the matrix $H$,

$$Hr = H\left(c_i + e_j\right) = He_j$$

Hence, assuming a single error $e_j$, the syndrome $Hr = He_j$ equals the $j$-th column of $H$, which identifies the error position and allows us to recover $c_i$ from $r$. For longer blocks we can use larger Hamming Codes, such as (15, 11, 3) or (31, 26, 3), and to correct more errors we can use more powerful codes such as Reed-Solomon (RS) or Bose-Chaudhuri-Hocquenghem (BCH) codes.
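To make the syndrome argument concrete, here is a minimal sketch (Python; the helper name and test vector are our own) of single-error correction with the (7, 4, 3) code defined by the matrix $H$ above: the syndrome $Hr$ matches the column of $H$ at the error position.

```python
import numpy as np

# Parity-check matrix from Section 3: the columns are all non-zero binary vectors of length 3
H = np.array([[0, 0, 0, 1, 1, 1, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [1, 0, 1, 0, 1, 0, 1]])

def correct_single_error(r):
    # Syndrome Hr (mod 2); a non-zero syndrome matches exactly one column of H
    syndrome = H @ r % 2
    corrected = r.copy()
    if syndrome.any():
        for j in range(H.shape[1]):
            if np.array_equal(H[:, j], syndrome):
                corrected[j] ^= 1              # flip the identified bit
                break
    return corrected

codeword = np.array([0, 1, 1, 1, 1, 0, 0])     # the codeword 0111100 from the text
received = codeword.copy()
received[4] ^= 1                               # introduce a single bit error
assert np.array_equal(correct_single_error(received), codeword)
```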
4 Convolutional Coding
Instead of appending extra bits to detect errors, as block codes do, we present here another method, convolutional codes. Although the complexity of the coding scheme increases, it turns out that with good decoding algorithms the performance is substantially better than that of block codes. The basic idea behind convolutional coding is that, by convolving the data stream with itself in a clever way, we obtain output sequences that cannot arise from a valid encoding; observing such a sequence can only be due to the presence of noise. Consider the diagram below.
Figure 2: Recursive Systematic Convolutional Code (RSC) with Rate R = 1/2 (input $x_i$, outputs $x_i$ and $z_i$, registers $D_1$, $D_2$, $D_3$)
For each interval $i$, we produce two outputs, $x_i$ and $z_i$; the three registers $D_1$, $D_2$, and $D_3$ give a total of 8 states. With this system, the coded sequence can only follow certain paths through the 8 states, which we explain in more detail below.
4.1 Trellis Diagram
In our case, RSC coding with $R = 1/2$, we have a total of 8 states, each with the possibility of $x_i \in \{0, 1\}$ at time $i$. With each block storing a delayed version of $x$, we use block-diagram algebra to solve for $x_i$ and $z_i$.
Noting that addition is modulo 2,

$$z_i = x_i + \left(D_2 + D_3\right) + D_1 + D_3 = x_i + D_1 + D_2$$

with the shift registers updated as

$$D_1 \leftarrow x_i + D_2 + D_3, \qquad D_2 \leftarrow D_1, \qquad D_3 \leftarrow D_2$$
Assuming that the registers are cleared to 0 before sending data, we build a diagram that progresses with different $x_i$ at different times. The Trellis Diagram in the figure below shows the different possible paths, where a solid line denotes 0 and a dashed line denotes 1. After building a diagram of length $N$, we observe our output values, mixed with additive white Gaussian noise, and try to recover the original sequence.
Figure 3: Trellis Diagram over the 8 states $(D_1, D_2, D_3)$
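The register equations above translate directly into a small encoder. The following Python sketch (our own helper, without trellis termination) produces the systematic and parity streams for the rate-1/2 RSC code of Figure 2.

```python
def rsc_encode(bits):
    # Rate-1/2 RSC encoder following Section 4.1; mod-2 addition is XOR
    d1 = d2 = d3 = 0                       # registers cleared before sending data
    systematic, parity = [], []
    for x in bits:
        feedback = x ^ d2 ^ d3             # value shifted into D1
        z = feedback ^ d1 ^ d3             # equals x + D1 + D2 (mod 2)
        systematic.append(x)
        parity.append(z)
        d1, d2, d3 = feedback, d1, d2      # shift the registers
    return systematic, parity

print(rsc_encode([1, 0, 1, 1, 0]))         # -> systematic and parity sequences
```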
4.2 Decoding
After transmission, we process the data to retrieve the original data stream. While there are dierent decoding
algorithms, we concentrate on Viterbi decoding in this project. There are two Viterbi Algorithms, Soft and
Hard Decision Decoding:
4.2.1 Hard Viterbi Decoding
The Hard Viterbi Decoding Algorithm compares the distance between the received symbols and all of the possible channel symbol pairs that could have been received. The metric we use is the Hamming distance, which is simply the number of bits that differ between the received pair and a candidate channel symbol pair. For illustration, consider the case at $t = 0$. Since the shift registers are all 0, we have $z_i = x_i$. Therefore, we either have 00 or 11. If we decide that we received 00 when 11 was sent, we have accumulated a Hamming distance of 2 at $t = 0$. Following this process until the end of the output, we choose the path with the minimal accumulated Hamming distance.
4.2.2 Soft Viterbi Decoding
Soft Viterbi Decoding does not use Hamming distance as its metric to compute the most likely path; it instead uses Euclidean distance on the unquantized received values. While this increases the computational complexity of decoding, we obtain better performance than with the Hard Viterbi Decoding Algorithm. For example, received values of 0.1 and 3 would both be hard-decoded to 1, although 3 corresponds to 1 with a much higher probability, which is exactly what the soft algorithm captures.
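A compact sketch of soft-decision Viterbi decoding over this 8-state trellis is shown below (Python; the names, the 0/1 to -1/+1 mapping, and the lack of trellis termination and traceback optimizations are our simplifying assumptions). Hard decoding follows the same structure with the squared Euclidean metric replaced by the Hamming distance between sign decisions and the expected bits.

```python
import numpy as np

def rsc_step(state, x):
    # One transition of the Section 4 RSC code: returns (next_state, parity bit)
    d1, d2, d3 = state
    feedback = x ^ d2 ^ d3
    return (feedback, d1, d2), feedback ^ d1 ^ d3

def soft_viterbi(ys, yp):
    # ys, yp: noisy systematic / parity samples with mapping 0 -> -1, 1 -> +1
    states = [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]
    cost = {s: np.inf for s in states}
    cost[(0, 0, 0)] = 0.0                                 # registers start cleared
    paths = {s: [] for s in states}
    for t in range(len(ys)):
        new_cost = {s: np.inf for s in states}
        new_paths = {}
        for s in states:
            if cost[s] == np.inf:
                continue
            for x in (0, 1):
                nxt, z = rsc_step(s, x)
                # accumulated squared Euclidean distance to the expected +/-1 pair
                metric = (ys[t] - (2 * x - 1)) ** 2 + (yp[t] - (2 * z - 1)) ** 2
                if cost[s] + metric < new_cost[nxt]:
                    new_cost[nxt] = cost[s] + metric
                    new_paths[nxt] = paths[s] + [x]
        cost, paths = new_cost, new_paths
    best = min(cost, key=cost.get)                        # survivor with the smallest metric
    return paths[best]
```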
5 Turbo Codes
Prior to the development of Turbo Codes, the best practical codes were based on a combination of Reed-Solomon error-correcting codes and Convolutional Codes decoded with the Viterbi Algorithm. However, in 1993, Berrou, Glavieux and Thitimajshima stunned the coding theory community by introducing Turbo Codes, which achieve near-Shannon-limit performance with moderate complexity. The simple idea behind Turbo Codes is to use parallel sets of convolutional codes, shuffled by a special interleaver function.
Figure 4: Recursive Systematic Convolutional Code (RSC) Turbo Encoder (Encoder 1 produces $x_i$, $z_i$; Encoder 2 operates on the interleaved stream and produces $z'_i$)
5.1 Interleaver Function
The internal interleaver function is simply a function that shuffles the indices of the data stream. The function is usually expressed in the following form:

$$\Pi(i) = \left(f_1 i + f_2 i^2\right) \bmod K$$

where $K$ denotes the block size, and $f_1$ and $f_2$ are parameters. Interleaving the data helps spread possible burst errors between the two decoders. Some examples of parameters are shown below:
K     f1   f2       K     f1   f2       K     f1   f2       K      f1   f2
40    3    10       200   13   50       416   25   52       672    43   252
96    11   24       248   33   62       464   247  58       784    25   98
144   17   108      296   19   74       504   55   84       912    29   114
192   23   48       344   193  86       576   65   96       1008   55   84

Table 1: Turbo Code Internal Interleaver Parameters
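A direct sketch of the interleaver (Python; the function name is ours) is a one-liner; for the standardized parameter pairs in Table 1 the mapping is a permutation of the block indices.

```python
def qpp_interleaver(K, f1, f2):
    # Quadratic permutation polynomial: pi(i) = (f1*i + f2*i^2) mod K
    return [(f1 * i + f2 * i * i) % K for i in range(K)]

perm = qpp_interleaver(200, 13, 50)          # the K = 200 row of Table 1
assert sorted(perm) == list(range(200))      # valid permutation of the indices
```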
5.2 Decoding
After encoding our data with the interleaver as in the figure above, we perform multiple iterations of decoding to obtain the original signal.
5.2.1 Iterative Decoding
We use several iterations to decode the received data, following the figure below.
Figure 5: Turbo Decoding Principle [3]
Unlike Convolutional Codes, Turbo Codes use several decoding iterations to obtain a more probable sequence, reducing the probability of error. The idea behind the Turbo decoder is that it uses parity, systematic, and extrinsic log-likelihood ratios to make a better guess, feeding the previously computed data back into the next iteration. To compute the extrinsic log-likelihood at each iteration, we use a Soft-Input-Soft-Output (SISO) decoding algorithm such as the Soft-Output Viterbi Algorithm (SOVA). Here we use a slight variation explained in the following section.
5.2.2 Soft-Input-Soft-Output Decoding: Max-Log-APP Algorithm
While there are different decoding algorithms, we focus on the Bahl, Cocke, Jelinek, and Raviv (BCJR) algorithm. The BCJR Algorithm is commonly known as Maximum A Posteriori (MAP) decoding. Since the MAP Algorithm is computationally quite complex, we use a logarithmic version of it to reduce the complexity while retaining performance. For this project, we used the Max-Log-BCJR Algorithm (also known as Max-Log-APP), whose equations are derived in Section 8.
6 Results
For this section, we perform BER simulations of randomly generated data passing through an AWGN channel for various SNR values. We simulate the system for the Soft Viterbi Algorithm and for Turbo coding with various numbers of iterations to compare the performance. Throughout this section, the parameters are the ones shown in Table 2 below. To ensure reliability, we run simulations until we have at least 100 (20) packet errors and 10000 (1000) trials for Viterbi coding (Turbo coding). Note that if we used a different block length, the interleaver function would be modified to match the block length $K$.

Input Block Length       K = 200 bits
Interleaver Function     $\Pi(i) = (13i + 50i^2) \bmod K$
Minimum Errors           100 (Viterbi), 20 (Turbo)
Minimum Trials           10000 (Viterbi), 1000 (Turbo)

Table 2: Viterbi and Turbo Code Parameters
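The stopping rule in Table 2 amounts to a small driver loop. The sketch below (Python; run_packet is a placeholder standing in for the encode/channel/decode chain and is assumed to return the number of bit errors in one packet) keeps simulating until both the minimum error count and the minimum number of trials are met.

```python
def simulate_ber(run_packet, min_errors, min_trials, block_len=200):
    # Accumulate statistics until both reliability thresholds from Table 2 are reached
    bit_errors = packet_errors = trials = 0
    while packet_errors < min_errors or trials < min_trials:
        errs = run_packet(block_len)         # bit errors in one simulated packet
        bit_errors += errs
        packet_errors += (errs > 0)
        trials += 1
    return bit_errors / (trials * block_len)
```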
6.1 Convolutional Code using Soft-Output-Viterbi-Algorithm (SOVA)
We use the $R = 1/2$ convolutional code shown in Section 4 to encode the data. At the receiver we use Soft Viterbi decoding to recover the original data stream from the errors introduced by the AWGN channel. The resulting BER performance is compared with an uncoded system.
Figure 6: R = 1/2 Convolutional Code (RSC) with Soft Viterbi Algorithm (BER vs. $E_b/N_0$ in dB, Viterbi vs. uncoded)
6.2 Turbo Code using Max-Log-APP Algorithm
We use the $R = 1/3$ turbo code shown in Section 5 to encode the data. For Turbo decoding, we use the Max-Log-APP Algorithm to recover the data. The first figure shows the variation in BER for different numbers of iterations. The second figure compares the overall performance of Viterbi, Turbo with 6 iterations, and the uncoded system.
Figure 7: R = 1/3 Turbo Code with Max-Log-APP and Different Iterations (BER vs. $E_b/N_0$ in dB for 1-6 iterations and uncoded)
Figure 8: BER Performance of Viterbi and Turbo Codes with 6 Iterations (BER vs. $E_b/N_0$ in dB)
7 Conclusion
As stated in the introduction, prior to Turbo Codes, Convolutional Codes alone were a revolutionary scheme for communication. Although Hard Viterbi decoding already gives much better performance than an uncoded scheme, Soft Viterbi decoding does better, as explained in Section 4. As our simulation suggests, at a BER of $10^{-4}$ we obtain a gain of 3 dB by using Soft Viterbi coding over an uncoded system. However, floating-point operations are impractical in hardware implementations, so hard decoding may be preferred to soft decoding due to its low complexity and hardware efficiency. In practice, industry standards typically use soft decoding with a quantization of 3-4 bits to obtain good performance while keeping the implementation complexity low.
While Convolutional Codes alone are an efficient coding scheme with satisfying performance at a mild level of complexity, we have seen that Turbo Codes outperform them. We observe a huge increase in performance when increasing the number of iterations at first, but after three iterations the gain becomes minimal. Since increasing the number of iterations requires extra computation, we use 6 iterations to guarantee good performance with moderate computation. As with the introduction of Convolutional Codes, the simulations show a huge gain in BER performance from Turbo coding. At an SNR of 2 dB, the uncoded and Viterbi coding schemes give BER performance of around $10^{-1}$ and $10^{-2}$ respectively, but Turbo coding gives a performance between $10^{-3}$ and $10^{-4}$, a significant improvement. We note that we would observe an even better BER performance for SNRs greater than 3 dB, as the BER curves fall at a faster rate, but we exclude those points from our report because of long simulation times.
In 1963, Robert G. Gallager introduced Low-Density Parity-Check (LDPC) Codes, but at the time they were not used, since they were impractical to implement. However, after Turbo Codes were introduced, LDPC Codes were rediscovered and found to be another class of capacity-approaching codes. Recent advances in LDPC Codes have shown better performance than Turbo Codes in some instances. Further analysis could be made as part of this project by simulating the performance of LDPC codes and comparing it to Turbo codes.
8 Appendix
8.1 Derivation of the Max-Log-APP Algorithm equations
Let us denote by $P(X_t = +1)$ the probability that the symbol $X_t$ at time $t$ equals $+1$. Then $P(X_t = -1) = 1 - P(X_t = +1)$. If we denote by $r$ the sequence observed at the receiver, the log-likelihood ratio gives the posterior information for the symbol $X_t$,
$$L(X_t) = \log\left(\frac{P(X_t = +1 \mid r)}{P(X_t = -1 \mid r)}\right)$$

The basic intuition is that if $P(X_t = +1 \mid r) > P(X_t = -1 \mid r)$, then $L(X_t) > 0$. The greater the difference, i.e. $L(X_t) \gg 0$, the more likely it is that the guess is correct. The same logic applies for $L(X_t) < 0$.
Now, if we denote by $S_t^m$ the event that the trellis is in state $m$ at time $t$, our log-likelihood is

$$L(X_t) = \log\left(\frac{\sum_{(m',m) \in B_{+1}} P\left(S_t^{m'}, S_{t+1}^{m} \mid r\right)}{\sum_{(m',m) \in B_{-1}} P\left(S_t^{m'}, S_{t+1}^{m} \mid r\right)}\right) \qquad (1)$$

where $B_{+1}$ ($B_{-1}$) is the set of all pairs of states $(m', m)$ that correspond to a transition caused by an input of $+1$ ($-1$). Notice that in our project we have a total of 8 states and each state goes to two other states, of which only one transition corresponds to $+1$ ($-1$). Let us define the following metrics,

$$\alpha\left(S_t^{m'}\right) = P\left(S_t^{m'}, r_0^{t-1}\right), \qquad \beta\left(S_{t+1}^{m}\right) = P\left(r_{t+1}^{N-1} \mid S_{t+1}^{m}\right), \qquad \gamma\left(S_t^{m'}, S_{t+1}^{m}\right) = P\left(S_{t+1}^{m}, r_t \mid S_t^{m'}\right)$$
where $\alpha(S_t^{m'})$ denotes the probability of being in state $m'$ at time $t$ and having observed the sequence $r_0^{t-1}$ between time 0 and $t-1$. Notice that this is a forward state metric. Similarly, $\beta(S_{t+1}^{m})$ is a backward state metric, which denotes the probability of observing the sequence $r_{t+1}^{N-1}$ between time $t+1$ and $N-1$, given that the state at time $t+1$ is $m$. Lastly, $\gamma(S_t^{m'}, S_{t+1}^{m})$ is a branch metric that denotes the probability of taking the particular branch between states $m'$ and $m$ at time $t$. Hence, using Bayes' Rule,


$$P\left(S_t^{m'}, S_{t+1}^{m}, r\right) = \alpha\left(S_t^{m'}\right)\,\beta\left(S_{t+1}^{m}\right)\,\gamma\left(S_t^{m'}, S_{t+1}^{m}\right)$$

giving the probability of a particular branch being taken, given the structure of the trellis diagram and the sequence observed before and after the branch. Thus, Equation (1) becomes
$$L(X_t) = \log\left(\frac{\sum_{(m',m) \in B_{+1}} P\left(S_t^{m'}, S_{t+1}^{m} \mid r\right)}{\sum_{(m',m) \in B_{-1}} P\left(S_t^{m'}, S_{t+1}^{m} \mid r\right)}\right) = \log\left(\frac{\sum_{(m',m) \in B_{+1}} \alpha\left(S_t^{m'}\right) \beta\left(S_{t+1}^{m}\right) \gamma\left(S_t^{m'}, S_{t+1}^{m}\right)}{\sum_{(m',m) \in B_{-1}} \alpha\left(S_t^{m'}\right) \beta\left(S_{t+1}^{m}\right) \gamma\left(S_t^{m'}, S_{t+1}^{m}\right)}\right) \qquad (2)$$
Notice that $\alpha$ and $\beta$ can be computed recursively, based on the adjacent value at time $t-1$ for $\alpha$ and at time $t+1$ for $\beta$, together with the current branch metric.
With $S$ denoting the set of all states,

$$\alpha\left(S_{t+1}^{m}\right) = P\left(S_{t+1}^{m}, r_0^{t}\right) = \sum_{S_t^{m'} \in S} P\left(S_t^{m'}, S_{t+1}^{m}, r_0^{t}\right) = \sum_{S_t^{m'} \in S} P\left(S_{t+1}^{m}, r_t \mid S_t^{m'}, r_0^{t-1}\right) P\left(S_t^{m'}, r_0^{t-1}\right)$$
$$= \sum_{S_t^{m'} \in S} P\left(S_{t+1}^{m}, r_t \mid S_t^{m'}\right) P\left(S_t^{m'}, r_0^{t-1}\right) = \sum_{S_t^{m'} \in S} \gamma\left(S_t^{m'}, S_{t+1}^{m}\right) \alpha\left(S_t^{m'}\right) \qquad (3)$$
Similarly,

$$\beta\left(S_t^{m'}\right) = \sum_{S_{t+1}^{m} \in S} P\left(S_{t+1}^{m}, r_t^{N-1} \mid S_t^{m'}\right) = \sum_{S_{t+1}^{m} \in S} \frac{P\left(S_{t+1}^{m}, r_t, r_{t+1}^{N-1}, S_t^{m'}\right)}{P\left(S_t^{m'}\right)}$$
$$= \sum_{S_{t+1}^{m} \in S} P\left(r_{t+1}^{N-1} \mid S_{t+1}^{m}, r_t, S_t^{m'}\right) P\left(S_{t+1}^{m}, r_t \mid S_t^{m'}\right) = \sum_{S_{t+1}^{m} \in S} P\left(r_{t+1}^{N-1} \mid S_{t+1}^{m}\right) P\left(S_{t+1}^{m}, r_t \mid S_t^{m'}\right)$$
$$= \sum_{S_{t+1}^{m} \in S} \beta\left(S_{t+1}^{m}\right) \gamma\left(S_t^{m'}, S_{t+1}^{m}\right) \qquad (4)$$
Combining Equation (2) with (3) and (4),

$$L(X_t) = \log\left(\frac{\sum_{(m',m) \in B_{+1}} \alpha\left(S_t^{m'}\right) \beta\left(S_{t+1}^{m}\right) \gamma\left(S_t^{m'}, S_{t+1}^{m}\right)}{\sum_{(m',m) \in B_{-1}} \alpha\left(S_t^{m'}\right) \beta\left(S_{t+1}^{m}\right) \gamma\left(S_t^{m'}, S_{t+1}^{m}\right)}\right)$$
$$= \log\left(\frac{\sum_{(m',m) \in B_{+1}} \exp\left\{\log \alpha\left(S_t^{m'}\right) + \log \beta\left(S_{t+1}^{m}\right) + \log \gamma\left(S_t^{m'}, S_{t+1}^{m}\right)\right\}}{\sum_{(m',m) \in B_{-1}} \exp\left\{\log \alpha\left(S_t^{m'}\right) + \log \beta\left(S_{t+1}^{m}\right) + \log \gamma\left(S_t^{m'}, S_{t+1}^{m}\right)\right\}}\right)$$
$$= \log\left(\sum_{(m',m) \in B_{+1}} \exp\left\{A_t + B_{t+1} + G_t\right\}\right) - \log\left(\sum_{(m',m) \in B_{-1}} \exp\left\{A_t + B_{t+1} + G_t\right\}\right) \qquad (5)$$
where $A_t \equiv \log \alpha\left(S_t^{m'}\right)$, $B_{t+1} \equiv \log \beta\left(S_{t+1}^{m}\right)$, and $G_t \equiv \log \gamma\left(S_t^{m'}, S_{t+1}^{m}\right)$. Notice that

$$A_{t+1} = \log\left(\sum_{S_t^{m'} \in S} \exp\left\{A_t + G_t\right\}\right) \qquad (6)$$

$$B_t = \log\left(\sum_{S_{t+1}^{m} \in S} \exp\left\{B_{t+1} + G_t\right\}\right) \qquad (7)$$
Now, we use the Log-MAP identity,

$$\log\left(e^{x_1} + e^{x_2}\right) = \max\left(x_1, x_2\right) + \log\left(1 + e^{-|x_1 - x_2|}\right)$$
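A short numerical check of this identity and of the Max-Log approximation used next (Python; max_star is our own name for the Jacobian logarithm):

```python
import math

def max_star(x1, x2):
    # Exact Jacobian logarithm: log(exp(x1) + exp(x2))
    return max(x1, x2) + math.log1p(math.exp(-abs(x1 - x2)))

x1, x2 = -1.3, 2.4
print(math.log(math.exp(x1) + math.exp(x2)))   # exact value
print(max_star(x1, x2))                        # identical, via the identity
print(max(x1, x2))                             # Max-Log approximation (drops the correction term)
```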
In the Max-Log approximation, the correction term $\log\left(1 + e^{-|x_1 - x_2|}\right)$ is small and is ignored. Hence, equations (5), (6) and (7) simplify to

$$A_{t+1} = \max_{S_t^{m'} \in S}\left(A_t + G_t\right)$$

$$B_t = \max_{S_{t+1}^{m} \in S}\left(B_{t+1} + G_t\right)$$
Finally, we have

$$L(X_t) = \max_{(m',m) \in B_{+1}}\left(A_t + B_{t+1} + G_t\right) - \max_{(m',m) \in B_{-1}}\left(A_t + B_{t+1} + G_t\right)$$
Now, we observe that $A_t$ can be obtained by a forward recursion, while $B_t$ can be obtained by a backward recursion; $G_t$ can be computed from the received data. Assuming that our trellis diagram starts and terminates in the same state, say state 0, at time $t = 0$ we have $\alpha\left(S_0^0\right) = 1$ and thus

$$A_0 = \log 1 = 0$$
Then, since $\alpha\left(S_0^k\right) = 0$ for all $k \neq 0$, the corresponding initial metrics are $\log 0 = -\infty$ for those states; these are the initial conditions. Similarly, assuming that our trellis has length $N$, at $t = N$ we have $\beta\left(S_N^0\right) = 1$ and thus

$$B_N = \log 1 = 0$$

Then, since $\beta\left(S_N^k\right) = 0$ for all $k \neq 0$, the corresponding final metrics are $\log 0 = -\infty$ for those states.
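Putting the recursions and boundary conditions together, a compact Max-Log forward/backward pass might look as follows (Python sketch; gammas[t] is an assumed dictionary mapping each valid transition (m, m_next) at time t to its branch metric $G_t$, and NEG_INF stands in for $\log 0$).

```python
NEG_INF = float("-inf")

def max_log_forward_backward(gammas, n_states, N):
    # A[t][m] and B[t][m] hold the Max-Log forward / backward metrics of equations (6)-(7)
    A = [[NEG_INF] * n_states for _ in range(N + 1)]
    B = [[NEG_INF] * n_states for _ in range(N + 1)]
    A[0][0] = 0.0                              # trellis starts in state 0
    B[N][0] = 0.0                              # trellis terminates in state 0
    for t in range(N):                         # forward: A_{t+1} = max(A_t + G_t)
        for (m, m_next), g in gammas[t].items():
            A[t + 1][m_next] = max(A[t + 1][m_next], A[t][m] + g)
    for t in range(N - 1, -1, -1):             # backward: B_t = max(B_{t+1} + G_t)
        for (m, m_next), g in gammas[t].items():
            B[t][m] = max(B[t][m], B[t + 1][m_next] + g)
    return A, B
```

The log-likelihood $L(X_t)$ is then the difference between the largest $A_t + B_{t+1} + G_t$ over the $B_{+1}$ transitions and over the $B_{-1}$ transitions, as derived above.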
8.2 Iterative Decoding with Max-Log-APP Algorithm
In Section 8.1, we derived expressions for the log-likelihood $L(X_t)$ based on forward and backward trellis values. In this section, we simplify the equations to ease practical implementation, rewriting them in terms of three log-likelihood ratio contributions: the systematic, parity and extrinsic log-likelihood ratios. This makes it easier to integrate the algorithm into an iterative scheme such as the Turbo decoder of Figure 5.
$$L(X_t) = \log\left(\frac{P(X_t = +1)}{P(X_t = -1)}\right) = \log\left(\frac{P(X_t = +1)}{1 - P(X_t = +1)}\right)$$

which implies that

$$\exp\left\{L(X_t)\right\} = \frac{P(X_t = +1)}{1 - P(X_t = +1)}$$

Hence

$$P(X_t = +1) = \frac{e^{L(X_t)}}{1 + e^{L(X_t)}} = \frac{e^{L(X_t)/2}}{1 + e^{L(X_t)}}\, e^{L(X_t)/2}$$
Suppose we define $D_t = e^{L(X_t)/2} / \left(1 + e^{L(X_t)}\right)$. We show that $D_t$ is independent of $X_t$:

$$D_t = \frac{e^{L(X_t)/2}}{1 + e^{L(X_t)}} = \frac{\sqrt{P(X_t = +1)/P(X_t = -1)}}{1 + P(X_t = +1)/P(X_t = -1)} = \sqrt{\frac{P(X_t = +1)}{P(X_t = -1)}} \cdot \frac{P(X_t = -1)}{P(X_t = -1) + P(X_t = +1)} = \sqrt{P(X_t = -1)\, P(X_t = +1)}$$

Hence, by symmetry, $D_t$ does not depend on whether $X_t$ is $+1$ or $-1$. Rearranging, $P(X_t = +1) = D_t\, e^{L(X_t)/2}$.
Simple algebra leads to

$$P(X_t = -1) = 1 - P(X_t = +1) = 1 - \frac{e^{L(X_t)}}{1 + e^{L(X_t)}} = \frac{1}{1 + e^{L(X_t)}} = D_t\, e^{-L(X_t)/2}$$

With a slight abuse of notation,

$$P(X_t) = D_t\, e^{X_t L(X_t)/2} \qquad (8)$$
where $X_t$ is either $+1$ or $-1$. Now, recall from above that $G_t(s', s) \equiv \log \gamma_t(s', s)$. In the case of an AWGN channel,

$$G_t(s', s) = \log\left(\gamma_t(s', s)\right) = \log\left(P(X_t) \exp\left\{-\frac{\|y_t - c_t\|^2}{2\sigma^2}\right\}\right) \qquad (9)$$

where $\sigma^2$ denotes the variance of the noise, $y_t$ denotes the array of received values, and $c_t$ denotes the array of expected symbols for that particular transition. In our case, each array consists of two components: the systematic bit $S$ and the parity bit $P$. The systematic bit $S$ is the bit to be transmitted, and the parity bit $P$ is its convolved version. Therefore,
$$\|y_t - c_t\|^2 = \left(y_t^s - c_t^s\right)^2 + \left(y_t^p - c_t^p\right)^2 = \left(y_t^{s\,2} + y_t^{p\,2}\right) + \left(c_t^{s\,2} + c_t^{p\,2}\right) - 2\left(y_t^s c_t^s + y_t^p c_t^p\right) = \|y_t\|^2 + \|c_t\|^2 - 2\left(y_t \cdot c_t\right) = E_t - 2\left(y_t \cdot c_t\right) \qquad (10)$$

where $E_t$ is independent of the structure of the current branch. Combining Equations (8), (9), and (10),

$$G_t(s', s) = \log\left(D_t\, e^{X_t L(X_t)/2} \exp\left\{-\frac{E_t - 2\left(y_t \cdot c_t\right)}{2\sigma^2}\right\}\right) = \log\left(D_t\, e^{-E_t/2\sigma^2}\right) + \log\left(e^{X_t L(X_t)/2}\right) + \log\left(e^{\left(y_t \cdot c_t\right)/\sigma^2}\right)$$
$$= \log\left(D_t\, e^{-E_t/2\sigma^2}\right) + \frac{X_t L(X_t)}{2} + \frac{y_t \cdot c_t}{\sigma^2}$$
Now, noting that $D_t$, $E_t$ and $\sigma^2$ do not depend on the specific branch for which the log-likelihood is calculated, we can disregard the first term, since we are maximizing the log-likelihood as in Section 8.1. Hence,
$$G_t(s', s) \propto \frac{y_t^s c_t^s}{\sigma^2} + \frac{y_t^p c_t^p}{\sigma^2} + \frac{X_t L(X_t)}{2} = \frac{1}{2}\left(L_S + L_P + L_E\right)$$
where $L_S$ and $L_P$ denote the systematic information and parity information respectively. Finally, $L_E = X_t L(X_t)$ is the extrinsic information, obtained from previous iterations of the Max-Log-BCJR Algorithm. Note that this term equals 0 if the algorithm is used on its own. For Turbo codes, however, only the first iteration has $L_E = 0$; afterwards, the output log-likelihoods of one decoder are used as the $L_E$ of the next iteration. The principal idea is to improve the $L_E$ estimates at every iteration.
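In an implementation, the branch metric thus reduces to the three contributions above; a sketch (Python, our own names, using the 0/1 to -1/+1 mapping and $c_t^s = X_t$):

```python
def branch_metric(y_s, y_p, c_s, c_p, llr_extrinsic, sigma2):
    # G_t(s', s) up to branch-independent terms: 0.5 * (L_S + L_P + L_E)
    l_s = 2.0 * y_s * c_s / sigma2        # systematic contribution L_S
    l_p = 2.0 * y_p * c_p / sigma2        # parity contribution L_P
    l_e = c_s * llr_extrinsic             # extrinsic term X_t * L(X_t); zero on the first iteration
    return 0.5 * (l_s + l_p + l_e)
```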
References
[1] H. Dawid, O. Joeressen, and H. Meyr, "Viterbi Decoders: High Performance Algorithms and Architectures," available online.
[2] G. D. Forney and D. J. Costello, "Channel coding: The road to channel capacity," Proc. IEEE, vol. 95, no. 6, pp. 1150-1177, 2007.
[3] E. Boutillon, C. Douillard, and G. Montorsi, "Iterative decoding of concatenated convolutional codes: Implementation issues," Proc. IEEE, vol. 95, no. 6, pp. 1201-1227, 2007.
[4] C. Berrou, A. Glavieux, and P. Thitimajshima, "Near Shannon Limit Error-Correcting Coding and Decoding: Turbo Codes," Proc. IEEE Int. Conf. on Communications, Geneva, Switzerland, pp. 1064-1070, 1993.
[5] 3rd Generation Partnership Project; Technical Specification Group Radio Access Network; Evolved Universal Terrestrial Radio Access (E-UTRA); Multiplexing and channel coding (Release 10), 3GPP TS 36.212, Rev. 10.3.0, Sept. 2011.
[6] D. J. C. MacKay and R. M. Neal, "Near Shannon limit performance of low-density parity-check codes," Electron. Lett., vol. 32, pp. 1645-1646, 1996.
