Turbo Codes

Emilia Käsper ekasper[at]tcs.hut.fi

This paper is an introductory tutorial on turbo codes, a new technique of error-correction coding developed in the 1990s. The reader is expected to be familiar with the basic concepts of channel coding, although we briefly and informally review the most important terms. The paper starts with a short overview of channel coding, and the reader is reminded of the concept of convolutional encoding. Bottlenecks of the traditional approach are described and the motivation behind turbo codes is explained. After examining the turbo code design in more detail, the reasons behind their efficiency are brought out. Finally, a real-world example of the turbo code used in the third-generation Universal Mobile Telecommunications System (UMTS) is presented. The paper is mainly based on two excellent tutorials by Valenti and Sun [6], and Barbulescu and Pietrobon [2]. The scope of this paper does not cover implementation-specific issues such as decoder architecture, modulation techniques and the like. For a follow-up on these topics, the interested reader is referred to Part II of the second tutorial [3].

Channel Coding
The task of channel coding is to encode the information sent over a communication channel in such a way that, in the presence of channel noise, errors can be detected and/or corrected. We distinguish between two coding methods:

• Backward error correction (BEC) requires only error detection: if an error is detected, the sender is requested to retransmit the message. While this method is simple and sets lower requirements on the code's error-correcting properties, it on the other hand requires duplex communication and causes undesirable delays in transmission.

• Forward error correction (FEC) requires that the decoder should also be capable of correcting a certain number of errors, i.e. it should be capable of locating the positions where the errors occurred. Since FEC codes require only simplex communication, they are especially attractive in wireless communication systems, helping to improve the energy efficiency of the system. In the rest of this paper we deal with binary FEC codes only.

An important parameter of a channel code is the code rate. If the input size (or message size) of the encoder is k bits and the output size (the code word size) is n bits, then the ratio k/n is called the code rate r. Code rate expresses the amount of redundancy in the code: the lower the rate, the more redundant the code.

Next, we briefly recall the concept of conventional convolutional codes. Convolutional codes differ from block codes in the sense that they do not break the message stream into fixed-size blocks. Instead, redundancy is added continuously to the whole stream. The encoder keeps M previous input bits in memory, and each output bit then depends on the current input bit as well as the M stored bits.

Figure 1 depicts a sample convolutional encoder. The encoder produces two output bits per every input bit, defined by the equations (additions modulo 2)

y1,i = xi + xi−1 + xi−3,
y2,i = xi + xi−2 + xi−3.

Since the ith bits of the output depend on input bit i as well as the three previous bits i−1, i−2, i−3, we have M = 3 for this encoder. Since the encoder produces two output bits for every input bit, its rate is 1/2. The encoder is nonsystematic, since the input bits do not appear explicitly in its output.

Figure 1: A convolutional encoder
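As a sanity check on the equations above, the Figure 1 encoder can be sketched in a few lines of Python (my own illustration, not code from the paper):

```python
def conv_encode(bits):
    """Rate-1/2 nonsystematic convolutional encoder of Figure 1 (memory M = 3)."""
    s = [0, 0, 0]                          # shift register: x_{i-1}, x_{i-2}, x_{i-3}
    out = []
    for x in bits:
        out.append((x + s[0] + s[2]) % 2)  # y1,i = xi + xi-1 + xi-3
        out.append((x + s[1] + s[2]) % 2)  # y2,i = xi + xi-2 + xi-3
        s = [x, s[0], s[1]]                # shift the register
    return out

print(conv_encode([1, 0, 0, 0]))  # impulse response: [1, 1, 1, 0, 0, 1, 1, 1]
```

Feeding in a single 1 followed by zeros reads the tap positions back out of the two output streams, confirming that each output bit depends on the current input and the M = 3 stored bits.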

A Need for Better Codes

Designing a channel code is always a tradeoff between energy efficiency and bandwidth efficiency. Codes with lower rate (i.e. bigger redundancy) can usually correct more errors. If more errors can be corrected, the communication system can operate with a lower transmit power, transmit over longer distances, tolerate more interference, use smaller antennas and transmit at a higher data rate. These properties make the code energy efficient. On the other hand, low-rate codes have a large overhead and are hence heavier on bandwidth consumption. Also, decoding complexity grows exponentially with code length, and long (low-rate) codes set high computational requirements to conventional decoders. According to Viterbi, this is the central problem of channel coding: encoding is easy but decoding is hard [7].

For every combination of bandwidth (W), channel type, signal power (S) and received noise power (N), there is a theoretical upper limit on the data transmission rate R for which error-free data transmission is possible. This limit is called channel capacity, or Shannon capacity (after Claude Shannon, who introduced the notion in 1948). For additive white Gaussian noise channels, the formula is

R < W log2(1 + S/N) [bits/second].

In practical settings there is, of course, no such thing as an ideal error-free channel. Instead, error-free data transmission is interpreted in the sense that the bit error probability can be brought down to an arbitrarily small constant. The bit error probability, or bit error rate (BER), used in benchmarking is often chosen to be 10^-5 or 10^-6. Now, if the transmission rate, the bandwidth and the noise power are fixed, we get a lower bound on the amount of energy that must be expended to convey one bit of information. Hence, Shannon capacity sets a limit to the energy efficiency of a code.

Although Shannon developed his theory already in the 1940s, several decades later the code designs were unable to come close to the theoretical bound. Even in the beginning of the 1990s, the gap between the theoretical bound and practical implementations was still at best about 3dB.

(The Hamming weight, or simply the weight, of a code word is the number of non-zero symbols in the code word; in the case of binary codes, it is the number of ones in the word.)
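The capacity formula above, and the decibel bookkeeping used in the discussion that follows, can both be checked with a short sketch; the bandwidth and signal-to-noise figures here are invented purely for illustration:

```python
import math

def shannon_capacity(w_hz, snr):
    """AWGN channel capacity: R < W * log2(1 + S/N), in bits per second."""
    return w_hz * math.log2(1 + snr)

def energy_increase_db(e, e_ref):
    """Relative energy increase in decibels: 10 * log10(E / E_ref)."""
    return 10 * math.log10(e / e_ref)

print(shannon_capacity(1e6, 7))            # a 1 MHz channel at S/N = 7: 3000000.0
print(round(energy_increase_db(2, 1), 2))  # a twofold energy increase: 3.01 (~3dB)
```

The second function makes the 3dB figure concrete: a gap of about 3dB to the Shannon bound corresponds to spending roughly twice the theoretically required energy per bit.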

(A decibel is a relative measure: if E is the actual energy and Eref is the theoretical lower bound, then the relative energy increase in decibels is 10 log10(E/Eref). Since log10 2 ≈ 0.3, a twofold relative energy increase equals 3dB.) This means that practical codes required about twice as much energy as the theoretically predicted minimum. Hence, new codes were sought that would allow for easier decoding.

One way of making the task of the decoder easier is using a code with mostly high-weight code words. High-weight code words, i.e. code words containing more ones and fewer zeros, can be distinguished more easily. Another strategy involves combining simple codes in a parallel fashion, so that each part of the code can be decoded separately with less complex decoders and each decoder can gain from information exchange with the others. This is called the divide-and-conquer strategy. Keeping these design methods in mind, we are now ready to introduce the concept of turbo codes.

Turbo Codes: Encoding with Interleaving

The first turbo code, based on convolutional encoding, was introduced in 1993 by Berrou et al. [4]. Since then, several schemes have been proposed and the term "turbo codes" has been generalized to cover block codes as well as convolutional codes. Simply put, a turbo code is formed from the parallel concatenation of two codes separated by an interleaver. The generic design of a turbo code is depicted in Figure 2. Although the general concept allows for free choice of the encoders and the interleaver, most designs follow the ideas presented in [4]:

• The two encoders used are normally identical;
• The code is in a systematic form, i.e. the input bits also occur in the output (see Figure 2);
• The interleaver reads the bits in a pseudo-random order.

The choice of the interleaver is a crucial part in the turbo code design.
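The parallel concatenation of Figure 2 is easy to express in code. In this sketch the constituent encoder is a deliberately trivial stand-in with memory M = 1, invented for illustration, not one of the encoders discussed in the text:

```python
def toy_encoder(bits):
    """Stand-in constituent encoder: each output bit is the XOR of the
    current and previous input bit (memory M = 1)."""
    prev, out = 0, []
    for x in bits:
        out.append(x ^ prev)
        prev = x
    return out

def generic_turbo_encode(bits, interleave):
    """Figure 2's structure: the systematic stream plus the outputs of two
    identical encoders, the second one fed the interleaved input."""
    return list(bits), toy_encoder(bits), toy_encoder(interleave(bits))

sys_out, out1, out2 = generic_turbo_encode([1, 1, 0, 1], lambda b: b[::-1])
print(sys_out, out1, out2)
```

Even if one encoder happens to produce a low-weight output for a particular message, the interleaved input usually prevents the other encoder from doing the same.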

Figure 2: The generic turbo encoder

The task of the interleaver is to "scramble" bits in a (pseudo-)random, albeit predetermined, fashion. This serves two purposes. Firstly, if the input to the second encoder is interleaved, its output is usually quite different from the output of the first encoder. This means that even if one of the output code words has low weight, the other usually does not, and there is a smaller chance of producing an output with very low weight. Higher weight, as we saw above, is beneficial for the performance of the decoder. Secondly, since the code is a parallel concatenation of two codes, the divide-and-conquer strategy can be employed for decoding. If the input to the second decoder is scrambled, its output will also be different, or "uncorrelated", from the output of the first decoder. This means that the corresponding two decoders will gain more from information exchange.

We now briefly review some interleaver design ideas, stressing that the list is by no means complete. The first three designs are illustrated in Figure 3 with a sample input size of 15 bits.

1. A "row-column" interleaver: data is written row-wise and read column-wise. While very simple, it also provides little randomness.

2. A "helical" interleaver: data is written row-wise and read diagonally.

3. An "odd-even" interleaver: first, the bits are left uninterleaved and encoded, but only the odd-positioned coded bits are stored. Then, the bits are scrambled and encoded, but now only the even-positioned coded bits are stored. Odd-even encoders can be used when the second encoder produces one output bit per one input bit.

4. A pseudo-random interleaver defined by a pseudo-random number generator or a look-up table.

Figure 3: Interleaver designs (row-column, helical, and odd-even outputs for the 15-bit sample input)
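Designs 1 and 4 are simple enough to sketch directly. The 3 x 5 grid below matches the 15-bit example of Figure 3, and the fixed seed is my stand-in for the shared look-up table:

```python
import random

def row_column_interleave(bits, rows, cols):
    """Write bits row-wise into a rows x cols grid, read them out column-wise."""
    assert len(bits) == rows * cols
    return [bits[r * cols + c] for c in range(cols) for r in range(rows)]

def pseudo_random_interleave(bits, seed=1):
    """A pseudo-random yet predetermined permutation: encoder and decoder
    share the seed, so both can reproduce (and invert) the scrambling."""
    perm = list(range(len(bits)))
    random.Random(seed).shuffle(perm)
    return [bits[i] for i in perm]

data = list(range(1, 16))                 # stands for x1 ... x15
print(row_column_interleave(data, 3, 5))  # [1, 6, 11, 2, 7, 12, 3, 8, 13, ...]
```

The row-column output reproduces the column-wise reading order of Figure 3; the pseudo-random variant trades that regularity for better scrambling.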

There is no such thing as a universally best interleaver. For short block sizes, the odd-even interleaver has been found to outperform the pseudo-random interleaver, and vice versa. The choice of the interleaver has a key part in the success of the code, and the best choice depends on the code design. In practice, the code rate usually varies between 1/2 and 1/3. For further reading, several articles on interleaver design can be found for example at [1].

Turbo Codes: Some Notes on Decoding

In the traditional decoding approach, the demodulator makes a "hard" decision on each received symbol, i.e. each bit is assigned the value 1 or 0, and passes to the error control decoder a discrete value, either a 0 or a 1. The disadvantage of this approach is that while the value of some bits is determined with greater certainty than that of others, the decoder cannot make use of this information.

A soft-in-soft-out (SISO) decoder receives as input a "soft" (i.e. real) value of the signal. The decoder then outputs for each data bit an estimate expressing the probability that the transmitted data bit was equal to one. In the case of turbo codes, there are two decoders, one for the output of each encoder. Both decoders provide estimates of the same set of data bits, albeit in a different order. If all intermediate values in the decoding process are soft values, the decoders can gain greatly from exchanging information, after appropriate reordering of the values. Information exchange can be iterated a number of times to enhance performance. At each round, the decoders re-evaluate their estimates using information from the other decoder, and only in the final stage will hard decisions be made, i.e. each bit is assigned the value 1 or 0. Such decoders, although more difficult to implement, are essential in the design of turbo codes.

Turbo Codes: Performance

We have seen that conventional codes left a 3dB gap between theory and practice. After bringing out the arguments for the efficiency of turbo codes, one clearly wants to ask: how efficient are they? Already the first rate-1/3 code proposed in 1993 made a huge improvement: the gap between Shannon's limit and implementation practice was only 0.7dB, giving a less than 1.2-fold overhead. (In the authors' measurements, the allowed bit error rate BER was 10^-5.)

In [2], a thorough comparison between convolutional codes and turbo codes is given. Let the allowed bit error rate be 10^-6. For code rate 1/2, the relative increase in energy consumption is then 4.80dB for convolutional codes, and 0.98dB for turbo codes.
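The difference between hard and soft demodulation can be made concrete with a small sketch. It assumes BPSK over an AWGN channel, where the standard log-likelihood ratio per bit is 2r/sigma^2; neither this formula nor the sample values come from the paper:

```python
def hard_decisions(received):
    """Hard demodulation: every sample collapses to 0 or 1, discarding
    any reliability information."""
    return [1 if r > 0 else 0 for r in received]

def soft_values(received, noise_var=1.0):
    """Soft demodulation for BPSK (+1 -> bit 1, -1 -> bit 0) on an AWGN
    channel: the log-likelihood ratio 2r / sigma^2 keeps both the decision
    (its sign) and the confidence (its magnitude)."""
    return [2.0 * r / noise_var for r in received]

rx = [0.9, 0.1, -1.2]            # noisy received samples
print(hard_decisions(rx))        # [1, 1, 0] -- 0.1 looks as "certain" as 0.9
print(soft_values(rx))           # [1.8, 0.2, -2.4] -- 0.1 is clearly unreliable
```

A SISO decoder consumes and produces such soft values, which is exactly what lets the two constituent decoders exchange useful reliability information.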

For code rate 1/3, the respective numbers are 4.28dB and -0.12dB. (Although the relative value is negative, it does not actually violate Shannon's limit: the negative value is due to the fact that we allow for a small error, whereas Shannon's capacity applies for perfect error-free transmission.) It can also be noticed that turbo codes gain significantly more from lowering the code rate than conventional convolutional codes do.

The UMTS Turbo Code

The UMTS turbo encoder closely follows the design ideas presented in the original 1993 paper [4]. The starting building block of the encoder is the simple convolutional encoder depicted in Figure 1. This encoder is used twice, once without interleaving and once with the use of an interleaver, exactly as described above. In order to obtain a systematic code, desirable for better decoding, the following modifications are made to the design. Firstly, a systematic output is added to the encoder. Secondly, the second output from each of the two encoders is fed back to the corresponding encoder's input. The resulting turbo encoder, depicted in Figure 4, is a rate-1/3 encoder, since for each input bit it produces one systematic output bit and two parity bits. Details on the interleaver design can be found in the corresponding specification [5].

Figure 4: The UMTS turbo encoder
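A structural sketch of this construction in Python. The taps below are my reading of Figure 1 with the second output fed back as described; trellis termination and the real UMTS permutation are omitted, so TS 25.212 [5] remains the normative reference:

```python
def rsc_encode(bits):
    """One recursive systematic constituent encoder: Figure 1's second output
    (taps 1 + D^2 + D^3) is fed back into the shift register, and the first
    output (taps 1 + D + D^3) becomes the parity stream."""
    s = [0, 0, 0]
    parity = []
    for x in bits:
        w = (x + s[1] + s[2]) % 2             # feedback into the register
        parity.append((w + s[0] + s[2]) % 2)  # parity output
        s = [w, s[0], s[1]]
    return parity

def umts_style_turbo_encode(bits, interleave):
    """Rate-1/3 output: the systematic bits plus two parity streams, the
    second one computed on the interleaved input."""
    return list(bits), rsc_encode(bits), rsc_encode(interleave(bits))

# Toy run with a stand-in interleaver (NOT the UMTS permutation):
sys_out, par1, par2 = umts_style_turbo_encode([1, 0, 1, 1], lambda b: b[::-1])
print(sys_out, par1, par2)
```

Note how feeding one output back makes the constituent encoder recursive while the added systematic stream keeps the overall code systematic, the two modifications named in the text.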

As a comparison, the GSM system uses conventional convolutional encoding in combination with block codes. The code rate varies with the type of input; in the case of a speech signal, it is 260/456 < 1/2.

Conclusions

Turbo codes are a recent development in the field of forward-error-correction channel coding. The codes make use of three simple ideas: parallel concatenation of codes to allow simpler decoding; interleaving to provide better weight distribution; and soft decoding to enhance decoder decisions and maximize the gain from decoder interaction. While earlier conventional codes performed, in terms of energy efficiency or, equivalently, channel capacity, at least twice as bad as the theoretical bound suggested, turbo codes immediately achieved performance results in the near range of the theoretically best values, giving a less than 1.2-fold overhead. Since the first proposed design in 1993, research in the field of turbo codes has produced even better results. Nowadays, turbo codes are used in many commercial applications, including both third-generation cellular systems, UMTS and cdma2000.

References

[1] University of South Australia, Institute for Telecommunications Research. Turbo coding research group. http://www.itr.unisa.edu.au/~steven/turbo/.

[2] S. A. Barbulescu and S. S. Pietrobon. Turbo codes: A tutorial on a new class of powerful error correction coding schemes. Part I: Code structures and interleaver design. J. Elec. and Electron. Eng., Australia, 19:129–142, September 1999.

[3] S. A. Barbulescu and S. S. Pietrobon. Turbo codes: A tutorial on a new class of powerful error correction coding schemes. Part II: Decoder design and performance. J. Elec. and Electron. Eng., Australia, 19:143–152, September 1999.

[4] C. Berrou, A. Glavieux, and P. Thitimajshima. Near Shannon limit error-correcting coding and decoding: Turbo codes. In Proceedings of the IEEE International Conference on Communications, Geneva, Switzerland, May 1993.

[5] Third Generation Partnership Project (3GPP). Multiplexing and Channel Coding (FDD). TS 25.212, Version 6.4.0, March 2005.

[6] M. C. Valenti and J. Sun. Turbo codes. In F. Dowla, editor, Handbook of RF and Wireless Technologies, pages 375–400. Newnes, 2004.

[7] A. J. Viterbi. Shannon theory, concatenated codes and turbo coding, 1998. http://occs.donvblack.com/viterbi/index.htm.
