
TURBO CODES

ABSTRACT
During the transmission of data from transmitter to receiver, information is lost in the
communication channel due to noise. This loss is measured in terms of bit error rate (BER),
and several decoding algorithms and modulation techniques are used to minimize it. Turbo
codes are among the most powerful error control codes currently available; they achieve low
BERs at signal-to-noise ratios (SNR) very close to the Shannon limit. Nevertheless, the
performance of the code depends strongly on the particular decoding algorithm used at the
receiver. In this sense, the choice of decoding algorithm involves a trade-off between the
gain introduced by the code and the complexity of the decoding process.

INTRODUCTION
Concatenated coding schemes were first proposed by Forney as a method for
achieving large coding gains by combining two or more relatively simple building-block or
component codes (sometimes called constituent codes). The resulting codes had the error-
correction capability of much longer codes, and they were endowed with a structure that
permitted relatively easy to moderately complex decoding. A serial concatenation of codes is
most often used for power-limited systems such as transmitters on deep-space probes. The
most popular of these schemes consists of a Reed-Solomon outer (applied first, removed last)
code followed by a convolutional inner (applied last, removed first) code.
A turbo code can be thought of as a refinement of the concatenated encoding structure plus an
iterative algorithm for decoding the associated code sequence. Turbo codes were first
introduced in 1993 by Berrou, Glavieux, and Thitimajshima, and reported in [4], where a
scheme is described that achieves a bit-error probability of 10^-5 using a rate 1/2 code over
an additive white Gaussian noise channel with BPSK modulation at an Eb/N0 of 0.7 dB. The
codes are constructed by using two or more component codes on different interleaved versions
of the same information sequence. Whereas for conventional codes the final step at the decoder
yields hard-decision decoded bits (or, more generally, decoded symbols), for a concatenated
scheme such as a turbo code to work properly, the decoding algorithm should not limit itself
to passing hard decisions among the decoders. To best exploit the information learned from
each decoder, the decoding algorithm must effect an exchange of soft decisions rather than
hard decisions. For a system with two component codes, the concept behind turbo decoding is
to pass soft decisions from the output of one decoder to the input of the other decoder, and to
iterate this process several times so as to produce more reliable decisions.
Channel Coding
The task of channel coding is to encode the information sent over a communication channel
in such a way that in the presence of channel noise, errors can be detected and/or corrected.
We distinguish between two coding methods:

• Backward error correction (BEC) requires only error detection: if an error is detected, the
sender is requested to retransmit the message. While this method is simple and sets lower
requirements on the code's error-correcting properties, it requires duplex communication and
causes undesirable delays in transmission.

• Forward error correction (FEC) requires that the decoder also be capable of correcting a
certain number of errors, i.e. capable of locating the positions where the errors occurred.
Since FEC codes require only simplex communication, they are especially attractive in
wireless communication systems, helping to improve the energy efficiency of the system. In
the rest of this paper we deal with binary FEC codes only.
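
To make the distinction concrete, here is a minimal Python sketch (illustrative only; these
toy codes are not discussed in this paper) contrasting detection, which suffices for BEC,
with correction, as required by FEC:

def parity_encode(bits):
    # Append one even-parity bit: detection only (under BEC, a failed check
    # would trigger a retransmission request).
    return bits + [sum(bits) % 2]

def parity_check(word):
    # True if the received word passes the even-parity check.
    return sum(word) % 2 == 0

def repetition_encode(bits):
    # Repeat every bit three times: a rate-1/3 FEC code that can correct
    # one flipped bit per group of three.
    return [b for b in bits for _ in range(3)]

def repetition_decode(word):
    # Majority vote over each group of three locates and corrects one flip.
    return [1 if sum(word[i:i+3]) >= 2 else 0 for i in range(0, len(word), 3)]

msg = [1, 0, 1, 1]
received = parity_encode(msg)
received[2] ^= 1                        # the channel flips one bit
print(parity_check(received))           # False: error detected, but not located

coded = repetition_encode(msg)
coded[4] ^= 1                           # the channel flips one bit
print(repetition_decode(coded) == msg)  # True: corrected without retransmission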

Next, we briefly recall the concept of conventional convolutional codes. Convolutional codes
differ from block codes in the sense that they do not break the message stream into fixed-size
blocks. Instead, redundancy is added continuously to the whole stream. The encoder keeps M
previous input bits in memory. Each output bit of the encoder then depends on the current
input bit as well as the M stored bits. Figure 1 depicts a sample convolutional encoder. The
encoder produces two output bits for every input bit, defined by the modulo-2 equations

y1,i = xi + xi−1 + xi−3,

y2,i = xi + xi−2 + xi−3.

For this encoder, M = 3, since the ith output bits depend on input bit i as well as the three
previous bits i − 1, i − 2, i − 3. The encoder is nonsystematic, since the input bits do not
appear explicitly in its output. An important parameter of a channel code is the code rate. If
the input size (or message size) of the encoder is k bits and the output size (the code word
size) is n bits, then the ratio k/n is called the code rate r. Since our sample convolutional
encoder produces two output bits for every input bit, its rate is 1/2. The code rate expresses
the amount of redundancy in the code: the lower the rate, the more redundant the code.
Finally, the Hamming weight, or simply the weight, of a code word is the number of
non-zero symbols in the code word. In the case of binary codes, dealt with in this paper, the
weight of a code word is the number of ones in the word.
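
As a sanity check on these definitions, the following short Python sketch (a minimal
illustration, not code from the paper) implements exactly the encoder defined by the two
equations above and prints the rate and weight of a sample code word:

def conv_encode(bits):
    # Rate-1/2 nonsystematic convolutional encoder:
    #   y1_i = x_i XOR x_{i-1} XOR x_{i-3}
    #   y2_i = x_i XOR x_{i-2} XOR x_{i-3}
    # The shift register holds the M = 3 previous inputs, zero-initialized.
    m = [0, 0, 0]                # m[0] = x_{i-1}, m[1] = x_{i-2}, m[2] = x_{i-3}
    out = []
    for x in bits:
        y1 = x ^ m[0] ^ m[2]     # x_i + x_{i-1} + x_{i-3} (mod 2)
        y2 = x ^ m[1] ^ m[2]     # x_i + x_{i-2} + x_{i-3} (mod 2)
        out += [y1, y2]          # two output bits per input bit
        m = [x, m[0], m[1]]      # shift the register
    return out

msg = [1, 0, 1, 1, 0]
codeword = conv_encode(msg)
print(len(msg) / len(codeword))  # code rate k/n = 0.5
print(sum(codeword))             # Hamming weight: number of ones in the word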

A Need for Better Codes


Designing a channel code is always a trade-off between energy efficiency and
bandwidth efficiency. Codes with a lower rate (i.e. greater redundancy) can usually correct
more errors. If more errors can be corrected, the communication system can operate with a
lower transmit power, transmit over longer distances, tolerate more interference, use smaller
antennas and transmit at a higher data rate. These properties make the code energy efficient.
On the other hand, low-rate codes have a large overhead and hence consume more bandwidth.
Also, decoding complexity grows exponentially with code length, and long (low-rate) codes
place high computational demands on conventional decoders. According to Viterbi, this is
the central problem of channel coding: encoding is easy but decoding is hard. For every
combination of bandwidth (W), channel type, signal power (S) and received noise power (N),
there is a theoretical upper limit on the data transmission rate R for which error-free data
transmission is possible. This limit is called the channel capacity or Shannon capacity (after
Claude Shannon, who introduced the notion in 1948). For additive white Gaussian noise
channels, the formula is

C = W log2(1 + S/N),

where C is the channel capacity in bits per second.
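
Evaluated in Python, this limit is a one-liner; the bandwidth and SNR below are arbitrary
example values, not figures from the paper:

import math

def shannon_capacity(bandwidth_hz, snr_linear):
    # C = W * log2(1 + S/N): AWGN channel capacity in bits per second.
    return bandwidth_hz * math.log2(1 + snr_linear)

# Example: a 1 MHz channel at an SNR of 10 dB (linear factor 10).
print(shannon_capacity(1e6, 10.0))   # ~3.46 Mbit/s upper limit on error-free rate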

TURBO ENCODER:
There are different types of turbo encoder structures:

• Concatenated encoding

– Serial concatenated codes

– Parallel concatenated codes

• Recursive convolutional encoding

Concatenated encoding
• Sometimes a single error-correcting code is not good enough for error protection.

• Concatenating two or more codes results in more powerful codes.

• Types of concatenated codes:

1. Serial concatenated codes:

• Shannon showed that large block-length random codes achieve channel capacity.

• Only a small number of low-weight input sequences should be mapped to low-weight output
sequences.

• Make the code appear random, while maintaining enough structure to permit decoding.

• The interleaver ensures that the probability that both encoders have inputs that cause
low-weight outputs is very low (a small interleaver sketch follows this list).
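
As a rough illustration of that last point, here is a sketch of a fixed pseudo-random block
interleaver in Python. The block size and seed are arbitrary choices for the example, not
values from the paper; what matters is that encoder and decoder share the same permutation:

import random

BLOCK = 8
perm = list(range(BLOCK))
random.Random(42).shuffle(perm)      # seeded so both ends derive the same permutation

def interleave(bits):
    # The second encoder sees a scrambled version of the same data, so a
    # pattern that yields a low-weight output in one encoder rarely does
    # so in the other.
    return [bits[p] for p in perm]

def deinterleave(bits):
    out = [0] * BLOCK
    for i, p in enumerate(perm):
        out[p] = bits[i]
    return out

block = [1, 1, 0, 0, 0, 0, 0, 0]     # a low-weight, clustered input pattern
print(interleave(block))             # the ones are dispersed across the block
print(deinterleave(interleave(block)) == block)  # the mapping is invertible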

2. Parallel concatenated codes:

One systematic bit and two parity bits are generated for each message bit (an encoder sketch
follows the list below).

Recursive convolutional encoder


• An RSC encoder can be constructed from a standard convolutional encoder by feeding back
one of its outputs.

• In a coded system, performance is dominated by low-weight code words.

• A good code produces low-weight outputs with low probability.

• An RSC encoder produces low-weight outputs with low probability.
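
The sketch below combines these ideas into a toy rate-1/3 parallel concatenated (turbo)
encoder in Python. The RSC generator polynomials (1, 5/7 in octal) and the fixed 8-bit
permutation are assumptions chosen for illustration, not parameters from this paper:

PERM = [3, 7, 0, 5, 2, 6, 1, 4]      # example interleaver for 8-bit blocks

def interleave(bits):
    return [bits[p] for p in PERM]

def rsc_encode(bits):
    # Recursive systematic convolutional (RSC) encoder with generators
    # (1, 5/7) octal: feedback 1+D+D^2, feedforward 1+D^2. Feeding the
    # register contents back makes it recursive; the input bits themselves
    # form the systematic stream, so only the parity stream is computed here.
    s1, s2 = 0, 0
    parity = []
    for x in bits:
        a = x ^ s1 ^ s2              # feedback sum drives the shift register
        parity.append(a ^ s2)        # feedforward tap yields the parity bit
        s1, s2 = a, s1               # shift the register
    return parity

def turbo_encode(bits):
    # Parallel concatenation: the systematic stream plus one parity stream
    # per component encoder, the second computed on the interleaved input.
    return bits, rsc_encode(bits), rsc_encode(interleave(bits))

systematic, parity1, parity2 = turbo_encode([1, 0, 1, 1, 0, 0, 1, 0])
print(systematic, parity1, parity2)  # three streams -> overall rate 1/3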

TURBO DECODER

• Turbo codes get their name because the decoder uses feedback, like a turbo engine.

• Each component decoder estimates the a posteriori probability (MAP) of each data bit.

• Decoding continues for a set number of iterations.

• Performance generally improves from iteration to iteration, but follows a law of
diminishing returns.

• Information exchanged by the decoders must not be strongly correlated with the systematic
information or with earlier exchanges; the skeleton below illustrates this exchange.
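
The loop structure can be sketched in Python. This is only a structural skeleton under strong
assumptions: the component decoder below is a trivial stand-in rather than a real MAP/BCJR
decoder, and the parity observations are omitted entirely; the point is the flow of extrinsic
information between the two decoders:

def component_decode(channel_llr, prior_llr):
    # Stand-in for a soft-in/soft-out (MAP/BCJR) decoder. The 1.5 factor
    # merely pretends the omitted parity stream mildly confirms the channel
    # values; a real decoder would derive its gain from the code trellis.
    return [1.5 * c + a for c, a in zip(channel_llr, prior_llr)]

def turbo_decode(sys_llr, perm, iterations=8):
    # Log-likelihood ratio (LLR) convention: positive LLR means bit 0 is
    # more likely. `perm` is the interleaver shared with the encoder.
    n = len(perm)
    inv = [0] * n
    for i, p in enumerate(perm):
        inv[p] = i                                   # inverse permutation
    prior = [0.0] * n                                # no a priori info at start
    for _ in range(iterations):                      # fixed iteration count
        post1 = component_decode(sys_llr, prior)
        # subtract the inputs so only the NEW (extrinsic) part is passed on
        ext1 = [o - c - a for o, c, a in zip(post1, sys_llr, prior)]
        sys_i = [sys_llr[p] for p in perm]           # decoder 2 works on the
        ext1_i = [ext1[p] for p in perm]             # interleaved ordering
        post2 = component_decode(sys_i, ext1_i)
        ext2 = [o - c - a for o, c, a in zip(post2, sys_i, ext1_i)]
        prior = [ext2[inv[j]] for j in range(n)]     # deinterleave for decoder 1
    post = [c + a for c, a in zip(sys_llr, prior)]   # final a posteriori LLRs
    return [0 if llr > 0 else 1 for llr in post]     # hard decisions at the end

perm = [3, 7, 0, 5, 2, 6, 1, 4]                      # example 8-bit interleaver
print(turbo_decode([2.1, -1.4, 0.3, 1.8, -2.2, 0.9, -0.5, 1.1], perm))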
APPLICATIONS:

• Wireless multimedia

– Data: use large frame sizes (low BER, but long latency)

– Voice: use small frame sizes (short latency, but higher BER)

• Combined equalization and error-correction decoding

• Combined multiuser detection and error-correction decoding

ADVANTAGES & DISADVANTAGES:

Advantages:

– Remarkable power efficiency in AWGN and flat-fading channels at moderately low BER.

– Design trade-offs suitable for delivery of multimedia services.

Disadvantages:

– Long latency.

– Poor performance at very low BER.

– Because turbo codes operate at very low SNR, channel estimation and tracking are a
critical issue.

CONCLUSION:

We can conclude that:

• Turbo codes approach the theoretical (Shannon) limit with only a small gap.

• They gave rise to new codes such as low-density parity-check (LDPC) codes.

• Decoding delay still needs improvement.

REFERENCES:

[1] University of South Australia, Institute for Telecommunications Research, Turbo coding
research group. http://www.itr.unisa.edu.au/~steven/turbo/.

[2] S.A. Barbulescu and S.S. Pietrobon. Turbo codes: A tutorial on a new class of powerful
error correction coding schemes. Part I: Code structures and interleaver design. J. Elec.
and Electron. Eng., Australia, 19:129–142, September 1999.

[3] Hagenauer, Joachim, et al. "Iterative Decoding of Binary Block and Convolutional Codes"
(PDF). Archived from the original (PDF) on 11 June 2013. Retrieved 2014-03-20.

[4] Berrou, Claude; Glavieux, Alain; Thitimajshima, Punya. Near Shannon Limit
Error-Correcting Coding and Decoding: Turbo Codes. Retrieved 11 February 2010.

[5] Berrou, Claude. The ten-year-old turbo codes are entering into service. Bretagne,
France. Retrieved 11 February 2010.
