
The theoretical limits for coding

The setting of Shannon theorems


[Block diagram] Source (Rs information bits/s) → Encoder, rate rc (Re coded bits/s) → Serial-to-parallel converter (1:m) → Modulator, M = 2^m (Rm channel symbols/s) → Channel with additive noise, bandwidth B = N·Rm/2, where N is the dimensionality of the signal space. The cascade of encoder, serial-to-parallel converter and modulator is the codulator, with rate r.

The rates are related by

$$r_c = \frac{R_s}{R_e}, \qquad r = \frac{R_s}{R_m} = \frac{m R_s}{R_e} = m\, r_c$$
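As a quick numerical check of these rate relations, here is a minimal Python sketch; the source rate, encoder rate and modulation order are illustrative values, not taken from the slides:

```python
# Rate bookkeeping along the chain: source -> encoder -> serial/parallel -> modulator
# Re = Rs/rc coded bits/s, Rm = Re/m channel symbols/s, r = Rs/Rm = m*rc
Rs, rc, m = 1.0e6, 0.5, 2      # e.g. 1 Mbit/s source, rate-1/2 encoder, 4-ary modulation
Re = Rs / rc                   # coded bits/s out of the encoder
Rm = Re / m                    # channel symbols/s out of the modulator
r = Rs / Rm                    # codulator rate, information bits per channel symbol
print(f"Re = {Re:.0f} bit/s, Rm = {Rm:.0f} symb/s, r = {r} = m*rc = {m * rc}")
```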

The theoretical limits for coding


Channel capacity theorem (Shannon, 1948)
For any discrete-input memoryless channel, there exists a block codulator handling blocks of k information bits with rate r (information bits per channel symbol) for which the word error probability with maximum-likelihood decoding is bounded by

$$P_w(e) < 2^{-n_m E(r)}$$

where n_m = k/r is the number of channel symbols per information block, and E(r) is a convex-∪, decreasing, positive function of r for

$$0 \le r < C$$

C being the channel capacity measured in bits/(channel symbol) (or bits/channel use)
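As an illustration, the bound can be inverted to estimate how long a block is needed for a target word error probability. The value of E(r) below is a hypothetical placeholder, since the slides do not give its functional form:

```python
import math

# Pw(e) < 2**(-nm*E(r))  =>  the bound is below a target when nm >= -log2(target)/E(r)
E_r = 0.1                        # hypothetical error exponent at the chosen rate r
for pw_target in (1e-3, 1e-6, 1e-9):
    nm = math.ceil(-math.log2(pw_target) / E_r)
    print(f"Pw(e) < {pw_target:g} guaranteed once nm >= {nm} channel symbols")
```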

The theoretical limits for coding


From the channel coding theorem, three ways to improve performance:

1. Decrease the codulator rate r

[Figure: the error exponent E(r) versus the rate, with two rates r2 < r1 marked; lowering the rate from r1 to r2 increases E(r)]

For a given source rate, decreasing r means increasing the transmission rate to the channel, and hence increasing the required channel bandwidth
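A small sketch of this bandwidth expansion, using B = N·Rm/2 = N·Rs/(2r) from the rate relations above (the source rate is an illustrative assumption):

```python
# Bandwidth required for a fixed source rate Rs as the codulator rate r decreases
Rs, N = 1.0e6, 1                 # e.g. 1 Mbit/s source, unidimensional signal space
for r in (1.0, 0.5, 0.25):
    B = N * Rs / (2.0 * r)       # B = N*Rm/2 with Rm = Rs/r
    print(f"r = {r:<5} ->  B = {B / 1e6:.1f} MHz")
```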

The theoretical limits for coding


2. Increase the channel capacity C

[Figure: the error exponent E(r) for increasing values of the channel capacity C]

For a given codulator rate r, increasing C increases E(r). However, to increase the channel capacity

$$C = \frac{1}{2}\log_2\!\left(1 + \frac{P}{N_0 B}\right)$$

for a given channel and a given bandwidth, we need to increase the signal power P
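A short sketch of how steeply the required power grows with the target capacity, obtained by inverting the capacity formula above (noise density and bandwidth are set to arbitrary illustrative units):

```python
# From C = 0.5*log2(1 + P/(N0*B)) it follows that P = N0*B*(2**(2*C) - 1)
N0, B = 1.0, 1.0                 # illustrative noise density and bandwidth
for C in (0.5, 1.0, 2.0, 4.0):
    P = N0 * B * (2.0 ** (2.0 * C) - 1.0)
    print(f"C = {C} bit/symbol requires P = {P:.1f}  (power grows exponentially with C)")
```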

The theoretical limits for coding


3. Keep E(r) fixed, so that both the bandwidth and the power remain the same, and increase the code word length n_m (in channel symbols). Performance improves, at the expense of decoding complexity. In fact, for randomly chosen codes and ML decoding, the decoding complexity D is of the order of the number of codewords,

$$D \approx 2^{n_m r}$$

so that

$$P_w(e) < 2^{-n_m E(r)} = D^{-E(r)/r}$$

and the error probability decreases only algebraically with the decoding complexity.
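The algebraic trade-off can be made concrete with a small sketch; the rate and the value of E(r) are hypothetical placeholders, since the slides do not specify them:

```python
# For randomly chosen codes under ML decoding, D ~ 2**(nm*r) codewords,
# so Pw(e) < 2**(-nm*E(r)) = D**(-E(r)/r): only an algebraic decay in D
r, E_r = 0.5, 0.1                # hypothetical codulator rate and error exponent
for nm in (100, 200, 400):
    D = 2.0 ** (nm * r)          # decoding complexity ~ number of codewords
    pw_bound = D ** (-E_r / r)   # identical to 2**(-nm*E_r)
    print(f"nm = {nm}:  D ~ {D:.2e},  Pw(e) < {pw_bound:.2e}")
```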

The theoretical limits for coding


Practical implications of the Shannon theorem

The theorem proves the existence of good codes; it does not say how to construct them

(Folk theorem: all codes are good, except those that we know of)

Problem 1. Finding good codes

From the proof of the theorem, we learn that a randomly-chosen code will turn out to be good with high probability; however, its decoding complexity is proportional to the number of code words

Problem 2. Finding more practical decoding methods

The theoretical limits for coding


Channel capacity converse theorem

From the converse to the coding theorem, we can derive the inequality

$$\frac{C}{r} \ge 1 - H_b(e)$$

where

$$H_b(e) = -P_b(e)\log_2 P_b(e) - [1 - P_b(e)]\log_2[1 - P_b(e)]$$

is the entropy of a binary source with probabilities Pb(e) and 1 − Pb(e), and Pb(e) is the bit error probability
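A minimal helper implementing this binary entropy function (vectorized, with the usual convention Hb(0) = Hb(1) = 0), useful for evaluating the converse bound numerically:

```python
import numpy as np

def binary_entropy(p):
    """Hb(p) = -p*log2(p) - (1-p)*log2(1-p), with Hb(0) = Hb(1) = 0."""
    p = np.asarray(p, dtype=float)
    out = np.zeros_like(p)
    mask = (p > 0) & (p < 1)
    q = p[mask]
    out[mask] = -q * np.log2(q) - (1 - q) * np.log2(1 - q)
    return out

print(binary_entropy([1e-5, 0.01, 0.11, 0.5]))   # -> approx. [0.00018, 0.081, 0.50, 1.0]
```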

The theoretical limits for coding


Channel capacity bound

The unconstrained additive Gaussian channel, with capacity

$$C = \frac{1}{2}\log_2\!\left(1 + \frac{2 r E_b}{N N_0}\right) \quad \text{bits/symbol}$$

yields

$$r \le \frac{1}{2\,[1 - H_b(e)]}\log_2\!\left(1 + \frac{2 r E_b}{N N_0}\right) \qquad (1)$$

which permits finding the pairs (Pb(e), Eb/N0) satisfying (1) with equality
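Solving (1) with equality in closed form gives the minimum Eb/N0 for each pair (r, Pb(e)); a short sketch, assuming the unidimensional case N = 1 by default:

```python
import numpy as np

def hb(p):
    """Binary entropy function, in bits."""
    return 0.0 if p in (0.0, 1.0) else -p * np.log2(p) - (1 - p) * np.log2(1 - p)

def min_ebn0_db(r, pbe, N=1):
    """Smallest Eb/N0 (in dB) satisfying (1) with equality:
    solve r = log2(1 + 2*r*Eb/(N*N0)) / (2*(1 - Hb)) for Eb/N0."""
    ebn0 = N * (2.0 ** (2.0 * r * (1.0 - hb(pbe))) - 1.0) / (2.0 * r)
    return 10.0 * np.log10(ebn0)

print(min_ebn0_db(r=0.001, pbe=1e-9))   # -> about -1.59 dB, the r -> 0 Shannon limit
print(min_ebn0_db(r=0.5,   pbe=1e-5))   # -> about 0 dB for a rate-1/2 codulator, N = 1
```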

The theoretical limits for coding

Channel capacity bounds for the unconstrained additive Gaussian channel

[Figure: bit error probability as a function of the minimum required signal-to-noise ratio, for various codulator rates down to r = 0, in the case of a unidimensional signal space (N = 1)]

The theoretical limits for coding

Sphere-packing bound: Signal-to-noise ratio versus block size


The search for good codes


For 45 years, coding and information theorists looked for channel codes yielding performance close to that predicted by Shannon. They invented several classes of codes offering good performance:
- Block (memoryless) codes, such as BCH and Reed-Solomon codes
- Convolutional (with memory) codes
- Concatenated codes, a mixture of the two above

In the best cases, those codes still required a signal-to-noise ratio Eb/N0 about 2.5 dB above the Shannon limit


Parallel concatenated codes: Structure


The breakthrough appeared in 1993 (ICC '93, Geneva, Switzerland):

Parallel Concatenated Convolutional Codes (PCCC), i.e., turbo codes

[Figure: PCCC encoder structure, built from constituent codes (CC)]


The search for good codes


The breakthrough appeared in 1993 (ICC '93, Geneva, Switzerland):
- Rate-1/2 turbo code
- 16-state constituent encoders: systematic recursive convolutional encoders
- Interleaver size N = 65,536
- Bit error probability of 10^-5 at 0.7 dB
- Shannon limit at 0.5 dB
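For concreteness, here is a minimal Python sketch of a PCCC encoder. It is not the 1993 code: it uses tiny 4-state recursive systematic constituent encoders with generators (1, 5/7) in octal, a toy random interleaver, and it omits trellis termination and puncturing, so its rate is 1/3 rather than 1/2:

```python
import random

def rsc_encode(bits, state=(0, 0)):
    """4-state recursive systematic convolutional encoder, generators (1, 5/7) octal.
    Returns only the parity sequence (the systematic bits are the input itself)."""
    s1, s2 = state
    parity = []
    for u in bits:
        a = u ^ s1 ^ s2           # feedback node: 1 + D + D^2 (octal 7)
        parity.append(a ^ s2)     # feedforward:   1 + D^2     (octal 5)
        s1, s2 = a, s1
    return parity

def turbo_encode(bits, interleaver):
    """Rate-1/3 parallel concatenation of two identical RSC encoders."""
    p1 = rsc_encode(bits)                                # parity on the natural order
    p2 = rsc_encode([bits[i] for i in interleaver])      # parity on the permuted order
    return [(u, a, b) for u, a, b in zip(bits, p1, p2)]  # (systematic, parity1, parity2)

K = 16                                    # toy interleaver size (65,536 in the 1993 code)
interleaver = random.sample(range(K), K)  # random permutation of the information bits
info = [random.randint(0, 1) for _ in range(K)]
print(turbo_encode(info, interleaver))
```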


Main characteristics of turbo codes


Maximum-likelihood (optimal) decoding is impossible in practice, since it would require the Viterbi algorithm applied to a finite-state machine with 2^N states (of the order of 2^16,000 in the case of the original 1993 code!)

The decoding algorithm actually used is iterative, with complexity growing linearly with the code word block size

Convergence of the algorithm is an issue:
- There are no general conditions that guarantee convergence
- When it converges, it is almost impossible to predict whether it will converge to a code word or to something different (pseudo code words)


LDPC codes
Shortly after the invention of turbo codes (1996), there was the rediscovery of a class of codes studied by Gallager in his PhD thesis (1960): the so-called Low-Density Parity-Check (LDPC) codes

Endowed with an iterative decoding algorithm borrowed from turbo codes and control theory (the message-passing algorithm), LDPC codes were shown to yield performance very similar to that of turbo codes

The best result known so far is a very long LDPC code as close to the Shannon limit as 0.00004 dB
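As a rough illustration of iterative decoding on a parity-check matrix, here is a hard-decision bit-flipping decoder, a much-simplified relative of the soft message-passing algorithm; the small (7,4) Hamming parity-check matrix stands in for a genuinely sparse LDPC matrix:

```python
import numpy as np

# Parity-check matrix of the (7,4) Hamming code, used here as a tiny stand-in
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def bit_flip_decode(r, H, max_iter=20):
    """Gallager-style hard-decision bit flipping (simplified message passing)."""
    c = r.copy()
    for _ in range(max_iter):
        syndrome = H @ c % 2
        if not syndrome.any():                     # all parity checks satisfied
            return c
        unsatisfied = syndrome @ H                 # unsatisfied checks per bit
        c[unsatisfied == unsatisfied.max()] ^= 1   # flip the most suspicious bit(s)
    return c

received = np.array([1, 0, 1, 0, 0, 1, 0])   # codeword 1011010 with its 4th bit flipped
print(bit_flip_decode(received, H))          # -> [1 0 1 1 0 1 0]
```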

