
Marc URO

CONTENTS

INTRODUCTION

INFORMATION MEASURE
  SELF-INFORMATION, UNCERTAINTY
  ENTROPY

SOURCE CODING
  ENGLISH LANGUAGE
  ENTROPY OF A SOURCE
  ENTROPY RATE
  THE SOURCE CODING PROBLEM
  ELEMENTARY PROPERTIES OF CODES
  SOURCE CODING THEOREM

COMPRESSION ALGORITHMS
  Shannon-Fano algorithm
  Huffman algorithm
  LZ78 algorithm
  LZW algorithm

COMMUNICATION CHANNELS
  CHANNEL CAPACITY
  THE NOISY CHANNEL THEOREM

ERROR CORRECTING CODES
  CONSTRUCTION
  NEAREST NEIGHBOUR DECODING
  LINEAR CODES
  GENERATOR MATRIX
  PARITY-CHECK MATRIX

EXERCISES
  INFORMATION MEASURE EXERCISES
  SOURCE CODING EXERCISES
  COMMUNICATION CHANNEL EXERCISES
  ERROR CORRECTING CODES EXERCISES

SOLUTIONS
  INFORMATION MEASURE SOLUTIONS
  SOURCE CODING SOLUTIONS
  COMMUNICATION CHANNEL SOLUTIONS
  ERROR CORRECTING CODES SOLUTIONS

BIBLIOGRAPHY

INDEX


INTRODUCTION

Most scientists agree that information theory began in 1948 with Shannon's famous article. In that paper, he provided answers to the following questions: What is "information" and how can it be measured? What are the fundamental limits on the storage and the transmission of information? The answers were both satisfying and surprising, and moreover striking in their ability to reduce complicated problems to simple analytical forms. Since then, information theory has kept on designing devices that reach or approach these limits. Here are two examples which illustrate the results obtained with information theory methods when storing or transmitting information.

TRANSMISSION OF A FACSIMILE

The page to be transmitted consists of dots represented by binary digits ("1" for a black dot and "0" for a white dot). Its dimensions are 8.5 × 11 inches. The resolution is 200 dots per inch in each direction, that is to say 4 × 10^4 dots per square inch. Consequently, the number of binary digits needed to represent this page is

8.5 × 11 × 4 × 10^4 = 3.74 Mbits

With a modem at the rate of 14.4 kbps, the transmission of the page takes 4 minutes and 20 seconds. Thanks to coding techniques (run length coding, Huffman coding), the time of transmission is reduced to 17 seconds!
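As a quick numerical check of these figures, here is a minimal Python sketch (the 14.4 kbps modem rate and the 17-second compressed time are the values quoted above):

```python
# Quick check of the facsimile example.
width_in, height_in, dpi = 8.5, 11, 200

bits = width_in * height_in * dpi**2          # one bit per dot
print(f"page size: {bits / 1e6:.2f} Mbits")   # ~3.74 Mbits

modem_rate = 14_400                           # bits per second
t = bits / modem_rate
print(f"uncompressed transmission: {t / 60:.0f} min {t % 60:.0f} s")  # ~4 min 20 s

# A 17 s transmission at the same rate corresponds to a compression ratio of about 15:1.
print(f"implied compression ratio: {bits / (17 * modem_rate):.1f}:1")
```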

STORAGE OF MP3 AUDIO FILES

MP3 stands for Moving Picture Experts Group 1 layer 3. It is a standard for compressed audio files based on psychoacoustic models of human hearing. By means of time and frequency masking, bandwidth limiting and Huffman coding, it allows one to reduce the amount of information needed to represent an audio signal.

Let us consider a stereo analog music signal. For CD quality, the left and right channels are sampled at 44.1 kHz and the samples are quantized to 16 bits. One second of stereo music in CD format therefore generates:

44.1 × 10^3 × 16 × 2 = 1.411 Mbits

By using the MP3 encoding algorithm, this value drops to 128 kbits without perceptible loss of sound quality, as the human ear cannot distinguish between the original sound and the coded sound. Eventually, one minute of stereo music requires:

128 × 10^3 × 60 / 8 ≈ 1 Mbyte

A CD-ROM, which has a capacity of 650 Mbytes, can therefore store more than 10 hours of MP3 stereo music.

DOWNLOADING MP3 FILES

Over an analogue telephone line

An analogue telephone line is made of a pair of copper wires whose bandwidth is limited to B = 4 kHz. Such a line transmits analogue signals with SNR (Signal to Noise Ratio) ≈ 30 dB and can be modelled by a memoryless additive Gaussian noise channel. Information theory allows us to compute its capacity (in bits/sec):

C = B log2(1 + snr)  with  SNR = 10 log10(snr)

To make a long story short, the capacity is the maximum bit rate at which we can transmit information with an arbitrarily small probability of error, provided appropriate means are used.

Then, we obtain:

C ≈ 33,800 bits/sec ≈ 4 kbytes/sec

Hence, downloading a 3 minute MP3 song with a V90 standard modem takes about:

3 × 10^3 / 4 = 750 sec = 12 minutes and 30 seconds

At busy hours, the downloading speed may drop to 1 kbyte/sec and the downloading time can reach 50 minutes!

Over a digital line

As telephone signals are band limited to 4 kHz, the sampling frequency is 8 kHz and 8 bits are used to quantize each sample. The bit rate is then:

8 × 8 = 64 kbits/sec = 8 kbytes/sec

Thus, the downloading speed is twice as high as in the case of analogue lines, and it takes 6 minutes and 15 seconds to download a 3 minute MP3 song.

With ADSL (Asymmetric Digital Subscriber Line) modem technology

Using this technology requires a USB (Universal Serial Bus) or an Ethernet modem. It consists of splitting the available bandwidth into three channels:
- a high speed downstream channel
- a medium speed upstream channel
- a POTS (Plain Old Telephone Service) channel
The main advantage lies in the fact that you can use your phone and be connected to the internet at the same time. With a 512 kbits/sec modem, the downloading stream rises to 60 kbytes/sec. Downloading a 3 minute MP3 song then takes only:

3 × 10^3 / 60 = 50 seconds
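A small script reproducing the arithmetic of the MP3 example (a sketch only; the bandwidth, SNR and speed figures are those quoted above, and the song size is taken as 1 Mbyte per minute of MP3 audio):

```python
import math

# CD audio versus MP3 (figures quoted in the text).
cd_bits_per_sec = 44_100 * 16 * 2          # ≈ 1.411 Mbits per second
mp3_bits_per_sec = 128_000
print(f"CD: {cd_bits_per_sec / 1e6:.3f} Mbit/s, MP3: {mp3_bits_per_sec / 1e6:.3f} Mbit/s")

song_kbytes = 3 * 1000                     # 3 minute song ≈ 1 Mbyte per minute of MP3

# Capacity of the analogue line, modelled as an additive Gaussian noise channel.
B = 4_000                                  # bandwidth in Hz
snr = 10 ** (30 / 10)                      # SNR = 30 dB
C = B * math.log2(1 + snr)
print(f"C ≈ {C:.0f} bits/s")               # ≈ 40 kbit/s with these round figures; the
                                           # ≈ 33.8 kbit/s quoted in the text corresponds
                                           # to a narrower effective bandwidth (≈ 3.4 kHz)

# Download times at the speeds quoted in the text.
for name, kbytes_per_sec in [("analogue modem", 4), ("digital line", 8), ("ADSL", 60)]:
    seconds = song_kbytes / kbytes_per_sec
    print(f"{name}: {seconds / 60:.2f} minutes")
```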

SHANNON PARADIGM

Transmitting a message from a transmitter to a receiver can be sketched as follows:

information source → source encoder → channel encoder → channel → channel decoder → source decoder → destination

This model, known as the "Shannon paradigm", is general and applies to a great variety of situations.

- An information source is a device which randomly delivers symbols from an alphabet. As an example, a PC (Personal Computer) connected to the internet is an information source which produces binary digits from the binary alphabet {0, 1}.
- A source encoder allows one to represent the data of the source more compactly by eliminating redundancy: it aims to reduce the data rate.
- A channel encoder adds redundancy to protect the transmitted signal against transmission errors.
- A channel is a system which links a transmitter to a receiver. It includes signalling equipment and a pair of copper wires, a coaxial cable or an optical fibre, among other possibilities. Given a received output symbol, you cannot be sure which input symbol has been sent, due to the presence of random ambient noise and the imperfections of the signalling process.
- Source and channel decoders are the converse of source and channel encoders.

There is a duality between "source coding" and "channel coding", as the former tends to reduce the data rate while the latter raises it.

The course will be divided into 4 parts:
- "information measure"
- "source coding"
- "communication channel"
- "error correcting codes"

The first chapter introduces some definitions to do with information content and entropy.

In the second chapter, we will answer three questions: Given an information source, is it possible to reduce its data rate? If so, by how much can the data rate be reduced? How can we achieve data compression without loss of information?

In the "communication channel" chapter, we will define the capacity of a channel as its ability to convey information. We will learn to compute the capacity in simple situations. Shannon's noisy coding theorem will help us answer this fundamental question: Under what conditions can the data of an information source be transmitted reliably?

The chapter entitled "Error correcting codes" deals with the redundancy added to the signal to correct transmission errors. At first sight, correcting errors may seem amazing, as the added symbols (the redundancy) can be corrupted too. Nevertheless, in our attempt to recover the transmitted signal from the received signal, we will learn to detect and correct errors, provided there are not too many of them.


INFORMATION MEASURE

We begin by introducing definitions to measure information in the case of events. Afterwards, we will extend these notions to random variables.

SELF-INFORMATION, UNCERTAINTY

When trying to work out the information content of an event, we encounter a difficulty linked with the subjectivity of the information which is effectively brought to us when the event occurs. To overcome this problem, Shannon pursued the idea of defining the information content h(E) of an event E as a function which depends solely on the probability P{E}. He added the following axioms:

- h(E) must be a decreasing function of P{E}: the more likely an event is, the less information its occurrence brings to us.
- h(E) = 0 if P{E} = 1, since if we are certain (there is no doubt) that E will occur, we get no information from its outcome.
- h(E ∩ F) = h(E) + h(F) if E and F are independent.

The only function satisfying the above axioms is the logarithmic function:

h(E) = log(1/P{E}) = −log P{E}

As h(E) = log(1/P{E}) represents a measure, it is expressed in different units according to the chosen base of logarithm:

logarithm base 2: bit or Sh (Shannon)
logarithm base e: natural unit or neper
logarithm base 3: trit
logarithm base 10: decimal digit

The outcome (or not) of E involves an experiment. Before (respectively after) this experiment, we will think of h(E) as the uncertainty (respectively the self-information) associated with its outcome.

Example
Let us consider a pack of 32 playing cards, one of which is drawn at random. Calculate the amount of uncertainty of the event E = {the card drawn is the king of hearts}.
As each card has the same probability of being chosen, we have P{E} = 1/32 and we get

h(E) = log2 32 = log2 2^5 = 5 bits

Since h(E) is an integer, we can easily interpret the result: 5 bits are required to specify one playing card among the 32 cards: one bit for the colour (red or black), one bit for the suit (hearts or clubs if the colour is red, diamonds or spades if the colour is black), and so on. At each stage, we divide the set of remaining cards into two subsets having the same number of elements: we proceed by dichotomy.
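A minimal Python sketch of these definitions (the probability is the one of the card example above):

```python
import math

def self_information(p, base=2):
    """Uncertainty h(E) = -log P{E} of an event with probability p."""
    return -math.log(p, base)

# Drawing the king of hearts from a 32-card pack.
print(self_information(1 / 32))          # 5.0 bits
# The same quantity expressed in other units.
print(self_information(1 / 32, math.e))  # ~3.47 natural units
print(self_information(1 / 32, 10))      # ~1.51 decimal digits
```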

As the uncertainty of an event E depends on the probability of E, we can also define the uncertainty of E knowing that another event F has occurred, by using the conditional probability P{E/F}:

h(E/F) = −log P{E/F}  with  P{E/F} = P{E ∩ F}/P{F}

Example
The setting is the same as in the previous example. What is the amount of uncertainty of E = {the card drawn is the king of hearts} knowing F = {the card drawn is a heart}?
We have:

P{E/F} = P{E ∩ F}/P{F} = P{E}/P{F}  since E ⊂ F

and since P{E} = 1/32 and P{F} = 1/4, we obtain

h(E/F) = −log2 P{E/F} = log2(P{F}/P{E}) = log2(32/4) = log2 2^3 = 3 bits

Interpretation: the fact that F has occurred determines two bits: one for the colour (red) and one for the suit (hearts). Consequently, specifying one card, whatever it is, requires only 5 − 2 = 3 bits. The uncertainty of E has been reduced thanks to the knowledge of F.

Using the definition of h(E/F) allows us to express h(E ∩ F):

h(E ∩ F) = −log P{E ∩ F} = −log[P{E/F} × P{F}] = −log P{E/F} − log P{F}

Hence,

h(E ∩ F) = h(E/F) + h(F)

By symmetry, as E ∩ F = F ∩ E, it follows that:

h(E ∩ F) = h(F ∩ E) = h(F/E) + h(E)

If E and F are independent, then

P{E ∩ F} = P{E} × P{F},  P{E/F} = P{E},  P{F/E} = P{F}

Accordingly,

h(E ∩ F) = h(E) + h(F)  (this is one of the axioms we took into account to define the uncertainty)
h(E/F) = h(E)
h(F/E) = h(F)

In the example of the previous page, we observed that the knowledge of F reduced the uncertainty of E. This leads us to introduce the amount of information provided by F about E, i_{F→E}, as the reduction in the uncertainty of E due to the knowledge of F:

i_{F→E} = h(E) − h(E/F)

Substituting for h(E/F) from h(E ∩ F) = h(E/F) + h(F) into the previous definition, we get:

i_{F→E} = h(E) − (h(E ∩ F) − h(F)) = h(E) + h(F) − h(E ∩ F)

As the above expression is symmetric with respect to E and F, we obtain

i_{F→E} = i_{E→F}

This quantity (i_{F→E} or i_{E→F}) will be denoted by i(E, F) and is called the "mutual information between E and F".

Let us return to the previous example. The mutual information between E and F is:

i(E, F) = h(E) − h(E/F) = 5 − 3 = 2 bits

From this example, we may hastily deduce that i(E, F) > 0 for all E and F. However, we have to be very careful, as this property is true if and only if h(E/F) < h(E), that is to say if P{E/F} > P{E}. Otherwise, we have h(E/F) > h(E) and i(E, F) < 0.

Example
Two playing cards are simultaneously drawn from a pack of 32 cards. Let E (respectively F) be the event {at least one of the two drawn cards is red} (respectively {the king of spades is one of the two drawn cards}). What is the amount of mutual information between E and F?
We have:

i(E, F) = h(E) − h(E/F)

in which

P{E} = (16/32) × (15/31) + 2 × (16/32) × (16/31) = 47/62

and

P{E/F} = 16/31

Thus,

i(E, F) = log2(62/47) − log2(31/16) ≈ −0.5546 bit

In this case, knowing that F has occurred makes E less likely, and the mutual information is negative.
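A quick numerical check of this last example (a sketch; the probabilities are those derived above):

```python
import math
from fractions import Fraction

def h(p):
    """Uncertainty -log2(p) in bits."""
    return -math.log2(p)

# Two cards drawn from 32 (16 red, 16 black).
p_E = Fraction(16, 32) * Fraction(15, 31) + 2 * Fraction(16, 32) * Fraction(16, 31)
# Given the king of spades is one of the two cards, E occurs iff the other card is red.
p_E_given_F = Fraction(16, 31)

i_EF = h(p_E) - h(p_E_given_F)
print(p_E)              # 47/62
print(round(i_EF, 4))   # ≈ -0.5546 bits
```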

ENTROPY

Example
A random experiment consists of drawing one card from a pack of 32 playing cards. Let X be the discrete random variable defined as:

{X = 3}      ⇔ {the drawn card is red}
{X = 7}      ⇔ {the drawn card is a spade}
{X = log π}  ⇔ {the drawn card is a club}

We can calculate the uncertainty associated with each of the three outcomes:

P{X = 3} = 1/2      ⇒ h(X = 3) = log2 2 = 1 bit
P{X = 7} = 1/4      ⇒ h(X = 7) = log2 4 = 2 bits
P{X = log π} = 1/4  ⇒ h(X = log π) = log2 4 = 2 bits

Then, the average uncertainty is

1 × 1/2 + 2 × 1/4 × 2 = 1.5 bits

This means that the average number of bits required to represent the possible values of X is 1.5.

In more general terms, the entropy H(X) of a discrete random variable X taking values in {x1, x2, ..., xn} with pi = P{X = xi} is the average uncertainty in the outcomes {X = x1}, {X = x2}, ..., {X = xn}:

H(X) = −Σ_{i=1}^{n} pi log pi

We note the following:
- n can be infinite.

- H(X) depends only on the probability distribution of X, not on the actual values taken by X.
- If n is finite, the maximum of H(X) is achieved if and only if X is uniformly distributed over its values (i.e. pi = 1/n for all i ∈ {1, ..., n}). Then, we have H(X) = log n.
- A more formal interpretation of H(X) consists in considering H(X) as the expected value of Y = log(1/p(X)), with p(X) = Σ_{i=1}^{n} pi 1I{X = xi}, where 1I{X = xi} is the indicator function of {X = xi} = {ω / X(ω) = xi}, i.e. 1I{X = xi}(ω) = 1 if X(ω) = xi and 0 otherwise.

Example
A discrete random variable X takes its values in {0, 1}. The probability distribution is given by:

P{X = 1} = p = 1 − P{X = 0}

Calculate the entropy of X.
By applying the definition, we get:

H(X) = −p log2 p − (1 − p) log2(1 − p) = H2(p)

H2(p) is called the binary entropy function.
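A small sketch of the entropy definition and of the binary entropy function H2(p):

```python
import math

def entropy(probs, base=2):
    """H(X) = -sum p_i log p_i, ignoring zero-probability outcomes."""
    return -sum(p * math.log(p, base) for p in probs if p > 0)

def h2(p):
    """Binary entropy function H2(p)."""
    return entropy([p, 1 - p])

print(entropy([0.5, 0.25, 0.25]))   # 1.5 bits (card example above)
print(h2(0.5))                      # 1.0, the maximum
print(h2(0.1), h2(0.9))             # equal: H2(p) = H2(1 - p)
```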

Sketching H2(p) versus p gives the following graph:

[Graph: H2(p) versus p, rising from 0 at p = 0 to a maximum of 1 at p = 1/2, then falling back to 0 at p = 1.]

From the graph, we observe the following:
- H2(p) = 0 when p = 0 or p = 1 (there is no uncertainty).
- The maximum is achieved when p = 1/2.
- H2(p) is a concave function which satisfies H2(p) = H2(1 − p) for all p ∈ [0, 1]. This means that H2(p) is symmetric with respect to the vertical line p = 1/2.

We now extend the notions related to events to random variables. Let X (respectively Y) be a discrete random variable taking on values in {x1, x2, ..., xn} (respectively {y1, y2, ..., ym}), with pi. = P{X = xi}, p.j = P{Y = yj} and pij = P{X = xi ∩ Y = yj}.

As the entropy depends only on the probability distribution, it is natural to define the joint entropy of the pair (X, Y) as

H(X, Y) = −Σ_{i=1}^{n} Σ_{j=1}^{m} pij log pij

Proceeding by analogy, we define:

- The conditional entropy H(X/Y):

H(X/Y) = Σ_{j=1}^{m} p.j H(X/Y = yj)

where H(X/Y = yj) = −Σ_{i=1}^{n} P{X = xi / Y = yj} log P{X = xi / Y = yj}

From this, we have:

H(X/Y) = −Σ_{i=1}^{n} Σ_{j=1}^{m} pij log P{X = xi / Y = yj}

- The average mutual information I(X, Y), which is the reduction in the entropy of X due to the knowledge of Y:

I(X, Y) = H(X) − H(X/Y) = H(Y) − H(Y/X)

I(X, Y) may be expressed as the expected value of the random variable I = log[P(X, Y)/(P(X)P(Y))]. Then, we can rewrite:

I(X, Y) = E[I] = Σ_{i=1}^{n} Σ_{j=1}^{m} pij log[pij/(pi. p.j)]

Some elementary calculations show that:
- H(X, Y) = H(X) + H(Y/X) = H(Y) + H(X/Y)
- H(X, Y) = H(X) + H(Y) − I(X, Y)
- I(X, Y) ≥ 0
- H(X/Y) ≤ H(X): conditional entropy is never larger than entropy.

The relationship between entropy and mutual information is sketched in the Venn diagram below:

[Venn diagram: two overlapping regions of areas H(X) and H(Y); the parts outside the overlap are H(X/Y) and H(Y/X), and the intersection is I(X, Y).]

In the case of independent random variables, the previous relations simplify to:

H(X/Y = yj) = H(X),  H(X/Y) = H(X),  I(X, Y) = 0,  H(X, Y) = H(X) + H(Y)
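These quantities are easy to compute from a joint distribution. Here is a minimal sketch (the joint table used is an arbitrary illustrative example, not taken from the course):

```python
import math

def entropy(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Joint distribution p_ij of a pair (X, Y): rows indexed by X, columns by Y.
p = [[1/4, 1/8],
     [1/8, 1/2]]

px = [sum(row) for row in p]                       # marginal distribution of X
py = [sum(col) for col in zip(*p)]                 # marginal distribution of Y

H_XY = entropy([pij for row in p for pij in row])  # joint entropy
H_X, H_Y = entropy(px), entropy(py)
I_XY = H_X + H_Y - H_XY                            # average mutual information

print(H_X, H_Y, H_XY, I_XY)
assert I_XY >= -1e-12                              # I(X, Y) is never negative
```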

Example
In the game of mastermind, player A chooses an ordered sequence of four pieces which is concealed from player B. The pieces are of the same shape and may be of different colours. Six colours are available, so that the chosen sequence may consist of one, two, three or four colours. Player B has to guess the sequence by submitting ordered sequences of four pieces. After considering the combination put forth by B, player A tells player B the number of pieces in the correct position and the number of pieces in the wrong position, but without indicating which pieces or positions are correct.

1. What is the average amount of uncertainty in the sequence chosen by player A?
2. The first sequence submitted by player B consists of four pieces of the same colour. What is the average amount of uncertainty in the unknown sequence (the one chosen by player A) resolved by the answer given by player A to this first submitted sequence?

[Figure: four positions numbered 1 to 4, each holding a piece of some colour C.]

Solution
1. Let X be a discrete random variable taking on 1,296 different values according to the chosen sequence, the number of possible sequences being 6^4 = 1,296. Since any sequence has the same probability of being chosen, we consider X uniformly distributed over its 1,296 values. The entropy of X is the answer to the first question; the average uncertainty in the unknown sequence is:

H(X) = log2 1,296 ≈ 10.34 bits

Another way to solve the problem consists in counting the number of bits needed to specify one ordered sequence. There are four positions and, for each one, log2 6 bits are required to specify the colour. On the whole, we need 4 × log2 6 = log2 6^4 ≈ 10.34 bits, which is identical to the previous result.

2. Let us represent the possible answers of player A by a discrete random variable Y. The reduction in the average uncertainty of the unknown sequence resolved by the answer given by player A is nothing but the mutual information between X and Y, which can be written as:

I(X, Y) = H(X) − H(X/Y) = H(Y) − H(Y/X)

Since knowing X implies knowing Y (if the sequence chosen by player A is known, there is no doubt about the answer to be given by player A), we have:

H(Y/X) = 0

Consequently,

I(X, Y) = H(Y)

Let us evaluate the probability distribution of Y. First, we have to notice that, since the submitted sequence consists of four pieces of the same colour, player A cannot indicate that some pieces are in the wrong position. Accordingly, there are 5 possible answers, according to the number of pieces in the correct position. Let us denote {Y = j} = {j pieces are in the correct position}, j = 0, 1, 2, 3, 4.

- "Four pieces are in the correct position": this means that the unknown sequence is the submitted sequence. The corresponding probability is P{Y = 4} = 1/1,296.
- "Three pieces are in the correct position": let us suppose we have numbered the four positions 1, 2, 3, 4. If the three pieces in the right position are in positions 1, 2 and 3, then there are 6 − 1 = 5 different possible colours in position 4, which yields 5 possible sequences. Proceeding this way for the three other choices of the correct positions, we get:

P{Y = 3} = 4 × 5/1,296 = 20/1,296

Similar calculations lead to:

P{Y = 2} = 150/1,296
P{Y = 1} = 500/1,296

P{Y = 0} = 625/1,296

Eventually, we obtain:

H(Y) = −(1/1,296) log2(1/1,296) − (20/1,296) log2(20/1,296) − (150/1,296) log2(150/1,296) − (500/1,296) log2(500/1,296) − (625/1,296) log2(625/1,296)

Then,

I(X, Y) = H(Y) ≈ 1.5 bits
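A short script reproducing this computation (the counts are those derived above):

```python
import math
from math import comb

def entropy(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Hidden sequence: 4 positions, 6 colours. Guess: four pieces of a single colour c.
# Y = number of positions of the hidden sequence equal to c.
total = 6 ** 4
counts = [comb(4, j) * 5 ** (4 - j) for j in range(5)]   # j = 0 .. 4 correct positions
print(counts, sum(counts))          # [625, 500, 150, 20, 1], 1296

H_Y = entropy([c / total for c in counts])
print(round(H_Y, 2))                # ≈ 1.5 bits resolved by the first answer
```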

SOURCE CODING

In this chapter, we will introduce the notion of entropy for an information source. We will be concerned with encoding the outcomes of a source so that we can recover the original data by using a minimum number of letters (for instance bits). This will lead us to study some elementary properties of codes. The source coding theorem will exhibit the entropy of a source as the fundamental limit in data compression. Coding procedures often used in practice, such as Huffman coding and the Lempel-Ziv-Welch algorithm, will also be described.

ENGLISH LANGUAGE

An information source is a device which delivers symbols (or letters) randomly from a set of symbols (or letters) called an alphabet. The successive symbols are chosen according to their probabilities in relation to the previous symbols. The best examples are natural written languages such as the English language. Considering a 27 symbol alphabet (the 26 letters and the space), Shannon studied different models of the English language. The simulations below are those of Shannon's original paper.

- Zero-order letter model
The symbols are chosen independently from each other and are equally likely. We can think of a box from which pieces of paper associated with letters are drawn, with the same number of pieces of paper for each letter. A random drawing may yield a sequence such as the following:

XFOML RXKHRJFFJUJ ZLPWCFWKCYJ FFJEYVKCQSGHYD QPAAMKBZAACIBZLHJQD

This compares to the result of monkeys strumming unintelligently on typewriters.

- First-order letter model
The symbols are still independent, and their numbers in the box are distributed according to their actual frequencies in the English language. The result may look like this:

OCRO HLI RGWR NMIELWIS EU LL NBNESEBYA TH EEI ALHENHTTPA OOBTTVA NAH BRL

- Second-order letter model
The samples drawn consist of couples of letters, and the numbers of pieces of paper match the frequencies of these couples in the English language. There are 27 boxes (A, B, ..., Z, space), each box containing pieces of paper associated with couples of letters sharing the same first letter. For instance, in the box containing the couples beginning with "A", the number of "AR" will be twice as great as the number of "AL" if we assume "AR" is twice as frequent as "AL" in the English language. The first letter of the sequence, let us say "O", is drawn from the box described in the first-order letter model. Then, the next letter is obtained by drawing a piece of paper from the box containing the couples beginning with "O". Let us suppose we got "ON". We then take out a couple from the "N" box, let us suppose "N space", and so on. The result may appear as:

ON IE ANTSOUTINYS ARE T INCTORE ST BE S DEAMY ACHIN D ILONASIVE TUCOOWE AT TEASONARE FUSO TIZIN ANDY TOBE SEACE CTISBE

- Third-order letter model
We take into account the probabilities of units consisting of three successive letters, to obtain:

IN NO IST LAT WHEY CRATICT FROURE BIRS GROCID PONDENOME OF DEMONSTURES OF THE REPTAGIN IS REGOACTIONA OF CRE

We observe that English words begin to appear.

- First-order word model
In the next stages, we jump to word units. The successive words are chosen independently from each other, according to their frequencies in the English language:

REPRESENTING AND SPEEDILY IS AN GOOD APT OR COME CAN DIFFERENT NATURAL HERE HE THE A IN CAME THE TO OF TO EXPERT GRAY COME TO FURNISHES THE LINE MESSAGE HAD BE THESE

- Second-order word model
In addition to the previous conditions, we take into account the word transition probabilities:

THE HEAD AND IN FRONTAL ATTACK ON AN ENGLISH WRITER THAT THE CHARACTER OF THIS POINT IS THEREFORE ANOTHER METHOD FOR THE LETTERS THAT THE TIME OF WHO EVER TOLD THE PROBLEM FOR AN UNEXPECTED

The more sophisticated the model is, the more the simulations approach understandable English text. This illustrates that, although the sequence of letters or words in English is potentially random, certain sequences of words or letters are far more likely to occur than others, and that natural language displays transition probabilities that do not reveal themselves when monkeys strum on typewriters.

ENTROPY OF A SOURCE

In this course, we will limit ourselves to discrete stationary sources U, i.e. to discrete random processes {U1, U2, ...} (the successive outputs of the source U) taking on values in the same set of symbols and whose joint probability distributions are invariant under a translation of the time origin. The simplest case is the discrete memoryless source: U1, U2, ... are independent random variables with the same probability distribution.

To take into account the memory of a source U (if the successive outputs of U are dependent), we define the entropy of U, H∞(U), as:

H∞(U) = lim_{L→+∞} H(U_L / U_{L−1}, U_{L−2}, ..., U_1)

Another way to estimate H∞(U) consists in calculating the limit of

H_L(U) = H(U_L, U_{L−1}, ..., U_1) / L

when L increases indefinitely. It turns out that this amounts to the same thing, since

lim_{L→+∞} H_L(U) = lim_{L→+∞} H(U_L / U_{L−1}, ..., U_1)

- In the special case of a memoryless source, we have H∞(U) = H(U_L).
- An experiment carried out on the book "Jefferson the Virginian" by Dumas Malone resulted in 1.34 bits for the entropy of the English language.

- For a first-order Markov chain,

H∞(U) = H(U_L / U_{L−1})

Example
Let us consider a Markov chain U taking on values in {0, 1, 2} whose transition graph is sketched below:

[Transition graph: from state 0, go to 0 or to 1, each with probability 1/2; from state 1, go to 1 or to 2, each with probability 1/2; from state 2, go to 0 with probability 1.]

The transition matrix (rows and columns indexed by the states 0, 1, 2) is:

T = [ 1/2  1/2   0
       0   1/2  1/2
       1    0    0 ]

As there is only one class of recurrent states, U is stationary and the limiting-state probabilities x, y and z satisfy:

(x, y, z) = (x, y, z) × T    (1)

with x = P{U = 0}, y = P{U = 1} and z = P{U = 2}.

Solving equation (1), together with x + y + z = 1, for x, y and z yields:

x = y = 2/5,  z = 1/5

As the first row of the transition matrix corresponds to the probability distribution of U_L knowing U_{L−1} = 0, we get:

H(U_L / U_{L−1} = 0) = −(1/2) log2(1/2) − (1/2) log2(1/2) = H2(1/2) = 1 bit

Proceeding the same way for the remaining two rows gives:

H(U_L / U_{L−1} = 1) = H2(1/2) = 1 bit
H(U_L / U_{L−1} = 2) = 0 bit

By applying the formula H(U_L / U_{L−1}) = Σ_{i=0}^{2} P{U_{L−1} = i} × H(U_L / U_{L−1} = i), we obtain:

H(U_L / U_{L−1}) = (2/5) × 1 + (2/5) × 1 + (1/5) × 0 = 4/5 = 0.8 bit

So, the entropy per symbol of U is:

H∞(U) = 0.8 bit

Due to the memory of the source, this value (0.8 bit) is almost twice as small as the maximum entropy of a ternary memoryless source (log2 3 ≈ 1.585 bits).
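A short sketch computing the stationary distribution and the entropy rate of this Markov source (the transition matrix is the one above):

```python
import numpy as np

# Transition matrix of the ternary Markov source (rows/columns = states 0, 1, 2).
T = np.array([[0.5, 0.5, 0.0],
              [0.0, 0.5, 0.5],
              [1.0, 0.0, 0.0]])

# Stationary distribution: left eigenvector of T for eigenvalue 1, normalised.
w, v = np.linalg.eig(T.T)
pi = np.real(v[:, np.argmin(np.abs(w - 1))])
pi = pi / pi.sum()
print(pi)                                  # [0.4, 0.4, 0.2] = (2/5, 2/5, 1/5)

def row_entropy(row):
    return -sum(p * np.log2(p) for p in row if p > 0)

H_inf = sum(pi[i] * row_entropy(T[i]) for i in range(3))
print(H_inf)                               # 0.8 bit per symbol
```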

- Let U be an information source with memory (the successive outputs of U are dependent). Assuming U can only take on a finite number of values N, we define the redundancy of U as:

r = 1 − H∞(U)/H_MAX   with  H_MAX = log N

where H∞(U) and H_MAX are expressed in the same unit (with the same base of logarithm).

- For a memoryless source U with equally likely values, there is no redundancy and r = 0.

- In the case of the English language, the zero-order letter model (27 equally likely symbols) gives the maximum entropy log2 27 ≈ 4.75 bits. The estimated entropy being 1.34 bits, the redundancy is:

r = 1 − 1.34/4.75 ≈ 72%

A possible interpretation is: when choosing letters to write a comprehensible English text, approximately 28% of the letters can be chosen freely, whereas the remaining 72% are dictated by the rules and structure of the language.

ENTROPY RATE

So far, we have been considering the entropy per symbol of a source without taking into account the symbol rate of the source. However, the amount of information delivered by a source over a certain time depends on the symbol rate. This leads us to introduce the entropy rate of a source, H'(U), as:

H'(U) = H∞(U) × D_U

where D_U is the symbol rate (in symbols per second).

Accordingly, H'(U) may be interpreted as the average amount of information delivered by the source in one second. This quantity is useful when data are to be transmitted from a transmitter to a receiver over a communication channel.

THE SOURCE CODING PROBLEM

Example
Let U be a memoryless quaternary source taking values in {A, B, C, D} with probabilities 1/2, 1/4, 1/8 and 1/8. 1,000 outputs of U are to be stored in the form of a file of binary digits, and one seeks to reduce the file to its smallest possible size.

First solution
There are 4 = 2^2 symbols to be encoded. Thus, each of them can be associated with a word of two binary digits as follows:

A → "00"   B → "01"   C → "10"   D → "11"

All the codewords having the same number of bits, i.e. the same length, this code is said to be a fixed length code. The size of the file is:

1,000 × 2 = 2,000 bits

(2 bits are used to represent each symbol.)

Second solution
The different symbols do not occur with the same probabilities. Therefore, we can think of a code which assigns shorter words to more frequent symbols, such as:

A → "1"   B → "01"   C → "000"   D → "001"

This code is said to be a variable length code, as the codewords do not have the same length. From the weak law of large numbers, we deduce that in the sequence of 1,000 symbols there are roughly:

1,000 × 1/2 = 500 symbols of type "A"
1,000 × 1/4 = 250 symbols of type "B"
1,000 × 1/8 = 125 symbols of type "C"
1,000 × 1/8 = 125 symbols of type "D"

Hence, the size of the file reduces to

500 × 1 + 250 × 2 + 125 × 3 + 125 × 3 = 1,750 bits

The data have been compressed without loss of information (each symbol can be recovered reliably), and the file is (2,000 − 1,750)/2,000 = 12.5% smaller than it was with the previous solution.
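A minimal sketch contrasting the two solutions (the codes are those defined above; the random draw is only for illustration):

```python
import random

random.seed(0)
symbols, probs = "ABCD", [1/2, 1/4, 1/8, 1/8]
data = random.choices(symbols, probs, k=1000)

fixed = {"A": "00", "B": "01", "C": "10", "D": "11"}
variable = {"A": "1", "B": "01", "C": "000", "D": "001"}

for name, code in [("fixed length", fixed), ("variable length", variable)]:
    encoded = "".join(code[s] for s in data)
    print(f"{name}: {len(encoded)} bits")   # ~2000 bits versus ~1750 bits

# The variable length code is a prefix code, so the file can be decoded
# unambiguously by reading it bit by bit.
```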

On average, 1,750/1,000 = 1.75 bits are necessary to represent one symbol. If the symbol rate of U were 1,000 quaternary symbols per second, using the first code would result in a bit rate of 2,000 bits/sec; with the second code, the bit rate would reduce to 1,750 bits/sec.

We are now faced with three questions:
- Given an information source, is it possible to compress its data?
- If so, what is the minimum average number of bits necessary to represent one symbol?
- How do we design algorithms to achieve effective compression of the data?

These three questions constitute the source coding problem.

Example
Let us continue the previous example, but with equally likely symbols A, B, C and D. We will show that there is no suitable code more efficient than the fixed length code of the "first solution". By "more efficient", we mean that the average number of bits used to represent one symbol is smaller. Let us consider a variable length code with:

n_A the length of the codeword associated with symbol "A"
n_B the length of the codeword associated with symbol "B"
n_C the length of the codeword associated with symbol "C"
n_D the length of the codeword associated with symbol "D"

As the symbols are equally likely, the weak law of large numbers tells us that a long sequence of n quaternary symbols contains roughly n/4 symbols of each type. With the variable length code, the number of bits needed to represent such a sequence is therefore about

n1 = n_A × n/4 + n_B × n/4 + n_C × n/4 + n_D × n/4 = (n_A + n_B + n_C + n_D) × n/4

whereas, by encoding the sequence with the fixed length code of the "first solution", the number of bits is

n2 = (2 + 2 + 2 + 2) × n/4 = 2n

If we want to satisfy n1 < n2, we can think of taking n_A = 1 and n_B = n_C = n_D = 2 (it does not matter which symbol is chosen to have the codeword of length 1), the codewords associated with B, C, D having to be different (otherwise we could not distinguish between two different symbols). For instance:

A → "1"   B → "10"   C → "00"   D → "01"

Let "10001" be an encoded sequence. Two interpretations are possible: "ACD" or "BCA". Such a code is not suitable to recover the data unambiguously. Consequently, it is not possible to design a more efficient code than the fixed length code of length 2. This is due to the fact that the probability distribution over the set of symbols {A, B, C, D} is uniform.

ELEMENTARY PROPERTIES OF CODES

A code C is a set of words c, called codewords, which result from the juxtaposition of symbols (letters) extracted from a code alphabet. We will denote by b the size of the code alphabet. The number of symbols n(c) which make up a codeword c is its length. The most common codes are binary codes, i.e. codes whose code alphabet is {0, 1}.

Example
ASCII stands for American Standard Code for Information Interchange. In anticipation of the spread of communications and data processing technologies, the American Standards Association designed the ASCII code in 1963. It consists of 2^7 = 128 binary codewords, all having the same length (7). Originally intended to represent the whole set of characters of a typewriter, it also had to be used with teletypes, hence some special characters (the first ones listed in the table below) are now somewhat obscure. Later on, additional and non-printing characters were added to meet new demands. This gave birth to the extended ASCII code, a 2^8 = 256 fixed length binary code whose first 128 characters are common with the ASCII code. Nowadays, keyboards still communicate with computers using ASCII codes, and when a document is saved in "plain text", its characters are encoded with ASCII codes.

ASCII CODE TABLE

The 128 seven-bit codewords 0000000 to 1111111 are assigned, in increasing order, as follows:

0000000-0011111: the 32 control characters NUL (null), SOH (start of header), STX (start of text), ETX (end of text), EOT (end of transmission), ENQ (enquiry), ACK (acknowledgment), BEL (bell), BS (backspace), HT (horizontal tab), LF (line feed), VT (vertical tab), FF (form feed), CR (carriage return), SO (shift out), SI (shift in), DLE (data link escape), DC1 to DC4 (device controls), NAK (negative acknowledgement), SYN (synchronous idle), ETB (end of transmission block), CAN (cancel), EM (end of medium), SUB (substitute), ESC (escape), FS (file separator), GS (group separator), RS (record separator), US (unit separator).

0100000-0111111: SP (space) ! " # $ % & ' ( ) * + , - . / 0 1 2 3 4 5 6 7 8 9 : ; < = > ?

1000000-1011111: @ A B C D E F G H I J K L M N O P Q R S T U V W X Y Z [ \ ] ^ _

1100000-1111111: ` a b c d e f g h i j k l m n o p q r s t u v w x y z { | } ~ DEL (delete)

(This table has been extracted from http://www.neurophys.wisc.edu/www/comp/docs/ascii.html)

Example
Another famous code is the Morse code. Invented by Samuel Morse in the 1840's, it allows the letters of the alphabet {a, b, ..., z, "full stop", "comma", "space", ...} to be sent as short electrical signals (dots) and long electrical signals (dashes). There are different lapses of time between words, between the letters of a same word, and between the dots and dashes within a letter. The value of the unit of time (the dit) depends on the speed of the operator. Within a letter, the space between dots and dashes is equal to one dit; between two characters of a word, the space is equal to three dits; the space between two words is equal to seven dits. Consequently, the Morse code is a ternary code with code alphabet {dot, dash, dit (unit of time)}. Morse code differs from ASCII code in the sense that shorter words are assigned to more frequent letters. On April 15, 1912, the Titanic used the international distress call SOS "...---..." (sent in the correct way, as one single Morse symbol).

MORSE CODE TABLE

A .-    B -...  C -.-.  D -..   E .     F ..-.  G --.   H ....  I ..
J .---  K -.-   L .-..  M --    N -.    O ---   P .--.  Q --.-  R .-.
S ...   T -     U ..-   V ...-  W .--   X -..-  Y -.--  Z --..

0 -----  1 .----  2 ..---  3 ...--  4 ....-  5 .....  6 -....  7 --...  8 ---..  9 ----.

. (full stop) .-.-.-   , (comma) --..--   ? (question mark) ..--..   - (hyphen) -....-   / (slash) -..-.

error ........   + (end of message) .-.-.   end of contact ...-.-   SOS (international distress call) ...---...

(source: http://www.wikipedia.org/wiki/Morse_code)

A code is said to be uniquely decipherable if any sequence of codewords can be interpreted in only one way.

Examples
- {1, 11} is not uniquely decipherable, as the sequence "1111" can be interpreted as "1" "11" "1", or "1" "1" "11", or ...
- {1, 10} is uniquely decipherable, although we sometimes need to consider two symbols at a time to decipher the successive codewords. For instance, in the sequence 1110, we need to know whether the second symbol is "0" or "1" before interpreting the first symbol; in the sequence 11011, the first codeword is "1" since the following symbol is "1", whereas the second codeword is "10" since the third symbol is "0", and so on. This is due to the fact that a codeword ("1") is the beginning of another codeword ("10").

It motivates the following definitions.

An instantaneous code is a code in which any sequence of codewords can be interpreted codeword by codeword, as soon as the codewords are received.

Examples
{1, 10} is not instantaneous; {0, 10} is instantaneous.

A code is a prefix code if and only if no codeword is the beginning of another codeword.

Example
{1, 01, 000, 001} is a prefix code.
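A small helper for checking the prefix property on a given code (a sketch; the codes tested are the ones discussed above):

```python
from itertools import combinations

def is_prefix_code(codewords):
    """True if no codeword is the beginning of another codeword."""
    return not any(a.startswith(b) or b.startswith(a)
                   for a, b in combinations(codewords, 2))

print(is_prefix_code(["1", "10"]))                 # False: "1" begins "10"
print(is_prefix_code(["0", "10"]))                 # True
print(is_prefix_code(["1", "01", "000", "001"]))   # True
```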

A prefix code is an instantaneous code, and the converse is true: an instantaneous code is a prefix code. A prefix code is uniquely decipherable, but some uniquely decipherable codes do not have the prefix property. Recovering the original data calls for designing uniquely decipherable codes, so it may seem restrictive to limit ourselves to prefix codes. However, McMillan's theorem will show us that we can limit our attention to prefix codes without loss of generality.

Kraft's theorem states the condition which the lengths of the codewords must meet for a prefix code to exist.

Kraft's theorem
There exists a b-ary prefix code {c1, c2, ..., cm} (the size of the code alphabet is b) with lengths n(c1), n(c2), ..., n(cm) if and only if:

Σ_{k=1}^{m} b^(−n(ck)) ≤ 1

This inequality is known as the Kraft inequality.

Example
Let us consider C = {10, 11, 000, 101, 111, 1100, 1101}. This code is not a prefix code, as the codeword "11" is the beginning of the codewords "111", "1100" and "1101". Nevertheless, it satisfies the Kraft inequality:

Σ_{c∈C} 2^(−n(c)) = 2 × 2^(−2) + 3 × 2^(−3) + 2 × 2^(−4) = 1/2 + 3/8 + 2/16 = 1

According to Kraft's theorem, there therefore exists an equivalent binary prefix code with two codewords of length 2, three codewords of length 3 and two codewords of length 4.
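A tiny sketch of the Kraft inequality check for this example:

```python
def kraft_sum(codewords, b=2):
    """Sum of b^(-length) over the codewords."""
    return sum(b ** -len(c) for c in codewords)

C = ["10", "11", "000", "101", "111", "1100", "1101"]
print(kraft_sum(C))          # 1.0 -> a prefix code with these lengths exists

# The equivalent prefix code obtained from the code tree has the same sum.
prefix_C = ["01", "10", "000", "001", "111", "1100", "1101"]
print(kraft_sum(prefix_C))   # 1.0
```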

To build such a code, we can use a tree made of a root, nodes, branches and leaves:
- At level 0, there is only one node (the root).
- At level 1, there are at most b nodes.
- At level 2, there are at most b^2 nodes.
- And so on.
A node at level i has one parent at level i−1 and at most b children at level i+1; the root (level 0) has no parent. The terminal nodes (the nodes with no children) are called leaves.

A codeword is represented by a sequence of branches going from the root down to a leaf; its length is equal to the level of its leaf. To construct a prefix code, we have to make sure that no sequence of branches associated with a codeword is included in any sequence of branches associated with another codeword. In other words, no codeword is an ancestor of any other codeword.

In the example, the construction of a prefix code with a binary code tree requires that we construct:
- two leaves at level 2 for the two codewords of length 2,
- three leaves at level 3 for the three codewords of length 3,
- two leaves at level 4 for the two codewords of length 4.

[Figure: a binary code tree of depth 4; each branch is labelled 0 or 1, and the leaves at levels 2, 3 and 4 correspond to the codewords of lengths 2, 3 and 4.]

Eventually, we obtain the codewords listed in the table below:

codewords: 01, 10, 000, 001, 111, 1100, 1101

McMillan's theorem
A uniquely decipherable code satisfies the Kraft inequality.

Taking Kraft's theorem into account, this means that any uniquely decipherable code can be associated with an equivalent prefix code. By "equivalent", we mean "having the same length distribution".

SOURCE CODING THEOREM

This theorem states the limits which apply to the coding of the outputs of a source. Let U be a stationary discrete source and b the size of the code alphabet. To take into account the memory of U, we can consider the Lth extension of U: it is a source whose outputs are juxtapositions of L consecutive symbols delivered by U.

Example
Let U be a ternary source taking values in {0, 1, 2}. If U is memoryless, the second extension of U consists of the symbols taken two at a time: {00, 01, 02, 10, 11, 12, 20, 21, 22}. Assuming instead that the memory of U does not allow a "1" to follow a "0" or a "2", the second extension is {00, 02, 10, 11, 12, 20, 22}, as the symbols "01" and "21" cannot occur.

To measure the ability of a code to compress information, we define the average number of code symbols used to represent one source symbol as:

n(L) = n_L / L = (1/L) Σ_i p_i n(c_i)

where the c_i are the codewords assigned to the source words of the Lth extension of U, p_i = P{c_i}, and n_L = Σ_i p_i n(c_i) is the average length of the codewords. The smaller n(L) is, the more efficient the code is.

The source coding theorem consists of the two following statements:

- Any uniquely decipherable code used to encode the source words of the Lth extension of a stationary source U satisfies:

n(L) = n_L / L ≥ H_L(U) / log b

- It is possible to encode the source words of the Lth extension of a stationary source U with a prefix code in such a way that:

n(L) = n_L / L < H_L(U) / log b + 1/L

where H_L(U) and log b are expressed in the same base.

Comments:
- If L tends to +∞, the former inequality becomes

n(∞) ≥ H∞(U) / log b

and, as H_L(U) is a decreasing function of L, H∞(U)/log b appears as the ultimate compression limit: we cannot find a uniquely decipherable code with n(L) smaller than H∞(U)/log b.

- Taking the limit as L tends to +∞ in the second statement: for L large enough, n(L) = n_L/L < H_L(U)/log b + 1/L, and since lim_{L→+∞} 1/L = 0 (∀ε > 0, ∃L0 such that ∀L > L0, 1/L < ε), we obtain:

n(∞) < H∞(U)/log b + ε

This means that we can achieve a prefix code to encode the outputs of U in such a way that the average number of code symbols per source symbol is arbitrarily close to H∞(U)/log b.

This leads us to pose the question: is it possible to find a prefix code which satisfies n(L) = H∞(U)/log b exactly? Such a code exists, provided that the length of each codeword c_i is equal to the self-information of its occurrence expressed in base b, i.e. if n(c_i) = −log_b P{c_i} for all i. To meet this condition, each P{c_i} must be of the form b^(−m) with m ∈ IN, otherwise −log_b P{c_i} would not be an integer. The code is then said to be optimum.

This property provides a justification, a posteriori, of the definition of entropy: expressing all logarithms in base 2 and taking b = 2, the entropy can be interpreted as the minimum average number of bits required to represent one source symbol.

Example
U is a memoryless source taking values in {A, B, C, D, E, F, G} with probabilities 1/3, 1/9, 1/9, 1/9, 1/9, 1/9, 1/9. The outputs of U are to be encoded with the ternary code alphabet {0, 1, 2}. As the probabilities are negative powers of 3, which is the size of the code alphabet, we will assign codewords to the seven source symbols in such a way that the length of each codeword is equal to the self-information (expressed in trits) of the corresponding source symbol.

source symbol   self-information (in trits)   length of the codeword
A               log3 3 = 1                    1
B               log3 9 = 2                    2
C               log3 9 = 2                    2
D               log3 9 = 2                    2
E               log3 9 = 2                    2
F               log3 9 = 2                    2
G               log3 9 = 2                    2

Let us calculate Σ_i b^(−n(c_i)) = 3^(−1) + 6 × 3^(−2) = 1. The Kraft inequality is satisfied. Consequently, there exists a prefix code with codewords having the length distribution {1, 2, 2, 2, 2, 2, 2}. To construct this code, we can use a ternary tree, with nodes having three children, as the size of the code alphabet is 3.

[Figure: a ternary code tree of depth 2; A is a leaf at level 1, and B, C, D, E, F, G are leaves at level 2.]

This yields, for instance, the following codewords:

source symbols   A   B    C    D    E    F    G
codewords        1   00   01   02   20   21   22

The average length of the codewords is:

n = 1 × 1/3 + 6 × 2 × 1/9 = 5/3

and the limit given by the source coding theorem is:

H_1(U)/log3 3 = H∞(U) = −(1/3) log3(1/3) − 6 × (1/9) log3(1/9) = 1/3 + 6 × 2/9 = 5/3 trits

(here we had to express H∞(U) in trits, since all logarithms must be expressed in the same base). Hence, we have n = H_1(U) and the code is optimum.
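A quick check that this ternary code is optimum (probabilities and codewords as in the tables above):

```python
import math

probs = {"A": 1/3, "B": 1/9, "C": 1/9, "D": 1/9, "E": 1/9, "F": 1/9, "G": 1/9}
code = {"A": "1", "B": "00", "C": "01", "D": "02", "E": "20", "F": "21", "G": "22"}

avg_len = sum(p * len(code[s]) for s, p in probs.items())
H_trits = -sum(p * math.log(p, 3) for p in probs.values())

print(avg_len, H_trits)   # both equal 5/3 trits per source symbol
```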

- Let U be a source taking N different values and let D_U be its symbol rate, expressed in N-ary symbols per second. If we encode the outputs of U with a fixed length b-ary code, the number of b-ary symbols needed to represent one value is the smallest integer greater than or equal to log_b N, denoted ⌈log_b N⌉; this results in a b-ary symbol rate equal to D_U × ⌈log_b N⌉. The source coding theorem states that we can find a prefix code whose average b-ary symbol rate is arbitrarily close to D_U × H∞(U) (logarithms expressed in base b). Hence, if H∞(U) < log_b N (i.e. U has redundancy), there exists a prefix code with a b-ary symbol rate smaller than D_U × log_b N, and therefore smaller than D_U × ⌈log_b N⌉, since we can bring the b-ary symbol rate as close to D_U × H∞(U) as desired.

Consequently, the source coding theorem answers the first two questions of the source coding problem.

Questions
- Given an information source, is it possible to compress its data?
- If so, what is the minimum average number of code symbols necessary to represent one source symbol?

Answers
- The compression, which results in a reduction in the symbol rate, is possible as long as H∞(U) < log_b N.
- The minimum average number of code symbols required to represent one source symbol is H∞(U) (expressed in base b).

Example
A binary source U is described by a Markov chain whose state transition graph is sketched below:

[Transition graph: from state 0, go to 0 or to 1, each with probability 1/2; from state 1, go to 0 with probability 1.]

The transition matrix (rows and columns indexed by the states 0 and 1) is:

T = [ 1/2  1/2
       1    0  ]

There is only one class of recurrent states, hence U is stationary and the limiting-state probabilities x = P{U = 0} and y = P{U = 1} satisfy:

(x, y) = (x, y) × T  with  x + y = 1
⇔  x = x/2 + y,  y = x/2,  x + y = 1

Solving this system for x and y, we obtain:

x = 2/3  and  y = 1/3

Interpreting the two rows of the transition matrix, we get:

H(U_n / U_{n−1} = 0) = H2(1/2) = 1 bit
H(U_n / U_{n−1} = 1) = H2(1) = 0 bit

Eventually, we have:

H∞(U) = H(U_n / U_{n−1}) = (2/3) × 1 + (1/3) × 0 = 2/3 ≈ 0.66 bit

The maximum entropy for a binary source is log2 2 = 1 bit. As H∞(U) ≈ 0.66 bit < 1 bit, U has redundancy and its data can be compressed. To take into account the memory of U, let us consider its 2nd extension, which consists of the source words "00", "01" and "10" ("11" is not listed, since a "1" cannot follow a "1"). Their probabilities are:

P{"00"} = P{U_{n−1} = 0 ∩ U_n = 0} = P{U_n = 0 / U_{n−1} = 0} × P{U_{n−1} = 0} = (1/2) × (2/3) = 1/3
P{"01"} = P{U_{n−1} = 0 ∩ U_n = 1} = P{U_n = 1 / U_{n−1} = 0} × P{U_{n−1} = 0} = (1/2) × (2/3) = 1/3

P{"10"} = P{U_{n−1} = 1 ∩ U_n = 0} = P{U_n = 0 / U_{n−1} = 1} × P{U_{n−1} = 1} = 1 × (1/3) = 1/3

With a ternary code alphabet {0, 1, 2}, we can construct the following code:

"00" → 0    "01" → 1    "10" → 2

Then, we have n_2 = 1 and n(2) = 1/2. One has to be cautious here: the second extension of U is not memoryless (since "10" cannot follow "01"), so, although the probability distribution is uniform (P{"00"} = P{"01"} = P{"10"}), the entropy of the second extension of U is smaller than 1 trit and is not equal to n_2: the code is not optimum.

With a binary code alphabet {0, 1}, we can think of this prefix code:

"00" → 0    "01" → 10    "10" → 11

In this case, the average number of bits necessary to represent one source symbol is:

n(2) = n_2 / 2 = [1 × 1/3 + 2 × (1/3 + 1/3)] / 2 ≈ 0.83

Using this code results in a reduction of the source symbol rate equal to 1 − 0.83 = 17%, the maximum being 1 − 0.66 = 34%.

COMPRESSION ALGORITHMS

This section develops a systematic construction of binary codes compressing the data of a source.

SHANNON-FANO ALGORITHM

The Shannon-Fano encoding scheme is based on the principle that each code bit, which can be described by a random variable, must have a maximum entropy. We should note that the Shannon-Fano encoding scheme does not always provide the best code, as the optimisation is achieved binary digit by binary digit, but not on the whole of the digits which constitute the codewords.

First step
We list the symbols in order of decreasing probability, for instance from top to bottom.

Second step
We divide the whole set of source symbols into two subsets, each one containing only consecutive symbols of the list, in such a way that the probabilities of the two subsets are as close as possible. Then we assign "1" (respectively "0") to the symbols of the top (respectively bottom) subset.

Third step
We apply the process of the previous step to the subsets containing at least two symbols. The algorithm ends when only subsets with one symbol are left. This amounts to constructing a binary tree from the root to the leaves. The successive binary digits assigned to the subsets are arranged from left to right to form the codewords.

Example
Let U be a memoryless source taking values in {A, B, C, D, E, F, G} with respective probabilities {0.4, 0.2, 0.15, 0.1, 0.05, 0.05, 0.05}. The entropy of U is:

H∞(U) = −0.4 log2 0.4 − 0.2 log2 0.2 − 0.15 log2 0.15 − 0.1 log2 0.1 − 3 × 0.05 log2 0.05 ≈ 2.38 bits

The maximum entropy of a source taking on 7 values is log2 7 ≈ 2.81 bits.

Consequently, U has redundancy and its data can be compressed.

With a fixed-length code
The length n has to be chosen as the smallest integer satisfying 2^n ≥ 7, so we obtain n = 3.

With a Shannon-Fano code
Let us display the successive steps of the Shannon-Fano encoding in the table below:

symbols   probabilities   1st step   2nd step   3rd step   4th step   codewords
A         0.4             1          1                                11
B         0.2             1          0                                10
C         0.15            0          1          1                     011
D         0.1             0          1          0                     010
E         0.05            0          0          1          1          0011
F         0.05            0          0          1          0          0010
G         0.05            0          0          0                     000

The average number of bits required to represent one source symbol is:

n = 2 × (0.4 + 0.2) + 3 × (0.15 + 0.1 + 0.05) + 4 × (0.05 + 0.05) = 2.5

Compared to the fixed-length code, this Shannon-Fano code results in a reduction of the symbol rate of (3 − 2.5)/3 ≈ 16%.
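The splitting procedure lends itself to a short recursive implementation. The sketch below is an illustration added to these notes (not part of the original text); ties between equally balanced splits are broken towards the larger top subset so as to reproduce the table above.

```python
def shannon_fano(symbols):
    """symbols: list of (symbol, probability) pairs, sorted by decreasing probability."""
    if len(symbols) == 1:
        return {symbols[0][0]: ""}
    total = sum(p for _, p in symbols)
    # Split point making the two subset probabilities as close as possible
    # (ties broken towards the larger top subset).
    split = min(range(1, len(symbols)),
                key=lambda i: (abs(2 * sum(p for _, p in symbols[:i]) - total), -i))
    codes = {}
    for subset, bit in ((symbols[:split], "1"), (symbols[split:], "0")):
        for sym, suffix in shannon_fano(subset).items():
            codes[sym] = bit + suffix
    return codes

probs = [("A", 0.4), ("B", 0.2), ("C", 0.15), ("D", 0.1),
         ("E", 0.05), ("F", 0.05), ("G", 0.05)]
print(shannon_fano(probs))  # A→11, B→10, C→011, D→010, E→0011, F→0010, G→000
```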

Then.05) D (0.2) A (0.15) II 0 (0.05) E (0. A and B are removed from the list and replaced by the node. the corresponding node is the root of the binary tree.4) V 1 VI 1 .source coding ____________________________________________________________ 49 First step We arrange the source symbols on a row in order of increasing probability from left to right. Example Let us return to the source of the previous example. Applying the above algorithm results in the following binary tree : (1) 0 (0.25) III 0 (0.6) 0 (0. We combine A and B together with two branches into a node which replaces A and B with probability assignment equal to PA + PB .1) 1 1 (0. Third step We apply the procedure of the second step until the probability assignment is equal to 1.15) 1 B (0.35) IV 0 C (0. Second step Let us denote A and B the two source symbols of lowest probabilities PA and PB in the list of the source words.05) 1 F (0.1) I 0 G (0.

Consequently, reading the tree from the root to each leaf, we obtain the following code:

symbols        A     B     C     D     E      F       G
probabilities  0.4   0.2   0.15  0.1   0.05   0.05    0.05
codewords      1     011   010   001   0001   00001   00000

The average length of the codewords is:

n = 1 × 0.4 + 3 × (0.2 + 0.15 + 0.1) + 4 × 0.05 + 5 × (0.05 + 0.05) = 2.45

Comments

- When the source words to be encoded have the same length, the Huffman code is the most efficient among the uniquely decipherable codes. According to this comment, a Huffman code always satisfies the conditions stated in the source coding theorem.

- Applying the Shannon-Fano or Huffman algorithms requires knowledge of the probability distribution of the source words. In practice, the probabilities of the source words are unknown but, as a result of the weak law of large numbers, they may be estimated by the relative frequencies of the source word outcomes in the message. As the receiver does not know these estimates, they have to be sent with the encoded data to allow the message to be decoded; consequently, the efficiency of the code is somewhat reduced.
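The greedy merging procedure can be implemented with a priority queue. The sketch below is an illustration added to these notes (not part of the original text); because of ties between equal probabilities, the codewords it prints may differ from the tree above, but the average length is the same.

```python
import heapq
from itertools import count

def huffman(probs):
    tiebreak = count()                      # keeps heap entries comparable
    heap = [(p, next(tiebreak), {s: ""}) for s, p in probs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        p0, _, c0 = heapq.heappop(heap)     # lowest probability  → bit "0"
        p1, _, c1 = heapq.heappop(heap)     # next lowest         → bit "1"
        merged = {s: "0" + w for s, w in c0.items()}
        merged.update({s: "1" + w for s, w in c1.items()})
        heapq.heappush(heap, (p0 + p1, next(tiebreak), merged))
    return heap[0][2]

probs = {"A": 0.4, "B": 0.2, "C": 0.15, "D": 0.1, "E": 0.05, "F": 0.05, "G": 0.05}
code = huffman(probs)
print(code)
print(sum(probs[s] * len(w) for s, w in code.items()))   # average length = 2.45
```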

- Huffman coding is implemented in the Joint Photographic Experts Group (JPEG) standard to compress images. The algorithm can be sketched as follows:

source image → 8x8 blocks → DCT → quantizer → entropy encoder → compressed image

The source image is divided into 8x8 pixel input blocks. Each input block can be regarded as a function of the two spatial dimensions x and y. Calculating its Discrete Cosine Transform (DCT), which is similar to the Fourier transform, results in an 8x8 output block containing 64 elements arranged in rows and columns. The term located in the top left-hand corner is called the "DC" coefficient and the remaining 63 terms the "AC" coefficients; the "DC" coefficient is a measure of the average value of the 64 pixels of the input block.

The elements of the 8x8 output blocks are quantized with a number of bits that depends on their location in the block: more bits are allocated to the elements near the top left-hand corner. Quantization is lossy.

After quantization, the "DC" coefficients are encoded by difference, as there is often a strong correlation between the "DC" coefficients of adjacent 8x8 blocks. To facilitate the entropy coding procedure, the "AC" coefficients are ordered into a "zigzag" sequence running from AC1, next to the DC term, down to AC63 in the bottom right-hand corner.

Then each nonzero coefficient is represented by two symbols, symbol-1 and symbol-2. Symbol-1 consists of two numbers: the number of consecutive zero coefficients preceding the nonzero coefficient in the zigzag sequence (RUNLENGTH), and the number of bits used to encode the value of the amplitude of the nonzero coefficient (SIZE). Symbol-2 is a signed integer equal to the amplitude of the nonzero coefficient (AMPLITUDE).

If there are more than 15 consecutive zero coefficients, symbol-1 is represented by (15, 0). For both "DC" and "AC" coefficients, symbol-1 is encoded with a Huffman code, whereas symbol-2 is encoded by a variable-length integer code whose codeword lengths (in bits) must satisfy:

codeword length (bits)   amplitude
1                        −1, 1
2                        −3…−2, 2…3
3                        −7…−4, 4…7
4                        −15…−8, 8…15
5                        −31…−16, 16…31
6                        −63…−32, 32…63
7                        −127…−64, 64…127
8                        −255…−128, 128…255
9                        −511…−256, 256…511
10                       −1023…−512, 512…1023

(source: "The JPEG Still Picture Compression Standard", Gregory K. Wallace, Multimedia Engineering, Digital Equipment Corporation, Maynard, Massachusetts)

LZ 78 ALGORITHM

In 1978, Jacob Ziv and Abraham Lempel wrote an article entitled "Compression of Individual Sequences via Variable Rate Coding" in the IEEE Transactions on Information Theory, describing a compression algorithm known as the LZ 78 algorithm. It consists of constructing a dictionary as the message is being read, character by character. At the beginning, the only string in the dictionary is the empty string "" in position 0. If the juxtaposition P ⊕ c of the preceding string P with the last read character c is in the dictionary, the algorithm passes to the next character. Otherwise, the couple (position of P in the dictionary, character c) is sent and the string P ⊕ c is added to the dictionary.
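The dictionary construction just described can be summarised in a few lines of code. The following sketch is an illustration added to these notes (not part of the original text); the convention used to flush a final incomplete phrase is one possible choice.

```python
def lz78_encode(message):
    dictionary = {"": 0}          # position 0 holds the empty string
    output, P = [], ""
    for c in message:
        if P + c in dictionary:   # extend the current phrase
            P += c
        else:                     # emit (position of P, c) and store the new phrase
            output.append((dictionary[P], c))
            dictionary[P + c] = len(dictionary)
            P = ""
    if P:                         # flush a final phrase already in the dictionary
        output.append((dictionary[P[:-1]], P[-1]))
    return output

msg = "IF STU CHEWS SHOES. SHOULD STU CHOOSE THE SHOES HE CHEWS ?"
print(len(lz78_encode(msg.replace(" ", "^"))))   # 33 couples for the tongue twister below
```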

Example
The message to be transmitted is the tongue twister:

IF^STU^CHEWS^SHOES.^SHOULD^STU^CHOOSE^THE^SHOES^HE^CHEWS^?

("^" represents the space). The successive steps of the LZ 78 algorithm are described in the table below:

position   string added to the dictionary   emitted couple
0          ""                               –
1          I                                (0, I)
2          F                                (0, F)
3          ^                                (0, ^)
4          S                                (0, S)
5          T                                (0, T)
6          U                                (0, U)
7          ^C                               (3, C)
8          H                                (0, H)
9          E                                (0, E)
10         W                                (0, W)
11         S^                               (4, ^)
12         SH                               (4, H)
13         O                                (0, O)
14         ES                               (9, S)
15         .                                (0, .)
16         ^S                               (3, S)
17         HO                               (8, O)
18         UL                               (6, L)
19         D                                (0, D)
20         ^ST                              (16, T)
21         U^                               (6, ^)
22         C                                (0, C)
23         HOO                              (17, O)
24         SE                               (4, E)
25         ^T                               (3, T)
26         HE                               (8, E)
27         ^SH                              (16, H)
28         OE                               (13, E)
29         S^H                              (11, H)
30         E^                               (9, ^)
31         CH                               (22, H)
32         EW                               (9, W)
33         S^?                              (11, ?)

If string positions are encoded with one byte, the dictionary can only contain 2⁸ = 256 strings, which is not enough for long messages; with two bytes to represent string positions, 2⁸ × 2⁸ = 65536 strings may be stored in the dictionary. With two bytes per position, the number of bytes used to encode the message is 33 × (2 + 1) = 99 bytes, whereas transmitting the message by encoding the successive letters into extended ASCII code would result in a file containing 60 bytes. The longer the message, the more efficient the algorithm.

LZW ALGORITHM

In 1984, Terry Welch published "A Technique for High Performance Data Compression" in IEEE Computer. The algorithm described in this paper is an improvement of LZ 78, and it is now called the LZW algorithm. In LZW, only the address of P is transmitted. Before starting the algorithm, the dictionary contains all the strings of length one; if the symbols are transmitted with extended ASCII, the dictionary is initialised with its 256 one-character strings. As soon as the juxtaposition P ⊕ c is not in the dictionary, the string P ⊕ c is added to the dictionary and the character c is used to initialise the next string.

Example
Let us return to the same message. By using the ASCII code table (page 30) to initialise the dictionary and applying the algorithm, we obtain a table giving, for each step, the position of the new dictionary entry, the string it contains and the emitted position. The first new entries are:

new entry   string   emitted position
256         IF       position of I
257         F^       position of F
258         ^S       position of ^
259         ST       position of S
260         TU       position of T
…           …        …

and the construction continues in the same way (the later entries include strings such as ^SH, HOO, ^SHOE, HE^C, EWS, …) until the whole message has been read.

The indexes in the dictionary may be coded with a fixed number of bits, but the algorithm is more efficient with a variable number of bits: at the beginning 9 bits are used, until 256 new entries have been added, then 10 bits, then 11 bits, and so on.
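As an added illustration (not part of the original text), here is a compact LZW encoder corresponding to the description above; it assumes the dictionary is initialised with the 256 extended-ASCII one-character strings and does not implement the variable-width packing of the indexes.

```python
def lzw_encode(message):
    dictionary = {chr(i): i for i in range(256)}
    output, P = [], ""
    for c in message:
        if P + c in dictionary:
            P += c                               # keep extending the current string
        else:
            output.append(dictionary[P])         # transmit only the address of P
            dictionary[P + c] = len(dictionary)  # store the new string P ⊕ c
            P = c                                # c initialises the next string
    if P:
        output.append(dictionary[P])
    return output

codes = lzw_encode("IF^STU^CHEWS^SHOES.^SHOULD^STU^CHOOSE^THE^SHOES^HE^CHEWS^?")
print(len(codes), max(codes))   # number of emitted indexes and largest index used
```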

To compare the performances of some codes, compression rates have been calculated after applying different algorithms to the same 6 MB set of files, divided into three parts (text files, binary files and graphic files), with

compression rate = 1 − (size of the compressed file) / (size of the original file)

These files have been extracted from Dr Dobb's Journal, February 1991 (source: Mark Nelson). The table compares the compression rates obtained on the graphic, text and binary files, and on average, by the Huffman code, the adaptive Huffman code and the LZW algorithm with indexes coded on a fixed 12 bits, a variable number of bits up to 12, and a variable number of bits up to 15; the measured rates range from about 15% to about 59% depending on the algorithm and on the file type.


COMMUNICATION CHANNELS

This chapter deals with the transmission of information. First, we will consider the transfer of data in terms of information theory. Then, we will state the noisy channel theorem.

CHANNEL CAPACITY

Example
Let us consider a baseband digital communication system made of the following chain: a binary source (A), a transmit filter (B), the channel (C), a receive filter (D), a sampler (E) and a decision device delivering the estimated symbols (F).

- binary source: a binary memoryless source with alphabet {V, −V}, the two symbols being equally likely.

- transmit filter: its transfer function determines the shape of the power spectrum of the signal to transmit.

- channel: optical fibres, pairs of wires and coaxial cables are channels used to link a transmitter to a distant receiver.

- receive filter: used to select the bandwidth of the transmitted signal (generally matched to the transmit filter to optimise the signal-to-noise ratio).

- sampler: converts the filtered received signal into a discrete-time signal at a sample rate equal to the baud rate of the symbols emitted by the source.

- decision device: as the symbols are symmetric, equally likely, and the noise process has an even probability density, the decision device compares the input sample amplitude to the threshold "0": if it is greater (respectively smaller) than "0", the estimated symbol is V (respectively −V). We will justify this decision rule further on.

The nature, discrete or continuous, of the input and output of a channel depends on the devices it includes. If we consider the devices between A and E, the resulting device is a channel with a discrete input and a continuous output; between B and E, both the input and the output are continuous.

Let us denote by a_k the random variable such that:

{a_k = V} = {the k-th symbol emitted by the source is V}
{a_k = −V} = {the k-th symbol emitted by the source is −V}

and by B(t) the zero-mean Gaussian random process modelling the noise (its power spectral density is N0/2 for any value of the frequency). Let (1/T) ΠT(t) be the impulse response of the transmit and receive filters.

Considering the channel as a perfect channel (i.e. its impulse response is δ(t)), we obtain the signal in D:

YD = [Σk ak (1/T) ΠT(t − kT) + B(t)] ∗ (1/T) ΠT(t)
YD = Σk ak δ(t − kT) ∗ (1/T) ΠT(t) ∗ (1/T) ΠT(t) + B(t) ∗ (1/T) ΠT(t)
YD = Σk ak Λ2T(t − kT) + B(t) ∗ (1/T) ΠT(t)

In order not to have intersymbol interference, we have to sample at the instants (nT), n ∈ Z, as shown in the figure below, which sketches Σk ak Λ2T(t − kT) for the symbol sequence a0 = V, a1 = −V, a2 = −V, a3 = V, a4 = V, a5 = V at the sampling instants 0, T, 2T, 3T, 4T, 5T.

Let B'(t) be B(t) ∗ (1/T) ΠT(t), where Hr(f) denotes the transfer function of the receive filter. B'(t) is Gaussian, its mean is zero and its variance σ² can be calculated as its power:

σ² = ∫ SB'(f) df = ∫ SB(f) |Hr(f)|² df = (N0/2) ∫ |Hr(f)|² df = N0/2

After sampling at nT, we obtain:

YE = an + B'(nT)

Then we can think of a decision rule consisting of choosing the more likely of the two hypotheses (an = −V or an = V) based on the observation YE. This is known as the maximum likelihood decision. In other terms, we have to compare the probability densities of YE knowing an: if fYE/an=−V(y) > (respectively <) fYE/an=V(y), then the estimated symbol is −V (respectively V).

As fYE/an=−V(y) (respectively fYE/an=V(y)) is the density of a Gaussian random variable with mean −V (respectively V) and variance N0/2, the two densities cross at y = 0: "−V" is estimated for y < 0 and "V" is estimated for y > 0, which justifies the threshold decision rule announced earlier.

Hence:

P{"V" is estimated / "−V" is sent} = P{N(−V, N0/2) > 0} = P{N(0, 1) > V √(2/N0)} = p

and, by symmetry:

P{"−V" is estimated / "V" is sent} = P{"V" is estimated / "−V" is sent} = p

Then the model corresponding to the transmission chain between A and F can be sketched as a two-input, two-output channel: input −V goes to output −V with probability 1 − p and to output V with probability p, and symmetrically for input V. Such a channel is called a Binary Symmetric Channel with error probability p.

In this course, we will limit ourselves to discrete memoryless channels, i.e. channels whose input and output alphabets are finite and for which the output symbol at a certain time depends statistically only on the most recent input symbol. A channel can be specified by an input alphabet {x1, x2, …, xn}, an output alphabet {y1, y2, …, ym} and a transition probability distribution:

pij = P{Y = yj / X = xi}   ∀ (i, j) ∈ [1, n] × [1, m]

The transition probability distribution can be expressed as a transition probability matrix.

Example
Returning to the preceding example, we have the transition probability matrix (rows indexed by the input X, columns by the output Y):

          −V      V
  −V     1−p      p
   V      p      1−p

As we will attempt to recover the input symbol from the output symbol, we can consider the average mutual information between X and Y:

I(X, Y) = H(X) − H(X/Y) = H(Y) − H(Y/X)

This quantity depends on the input probability distribution p(X); it is not intrinsic to the channel itself. Accordingly, we define the capacity C as follows:

C = Max over p(X) of I(X, Y)

Examples

A noiseless channel
Consider a channel with input alphabet {A, B, C, D} and output alphabet {A', B', C', D'}, each input being connected to its own output with probability 1. We have:

I(X, Y) = H(X) − H(X/Y)

The occurrence of Y uniquely specifies the input X. Consequently, H(X/Y) = 0 and

C = Max over p(X) of H(X)

As X is a random variable taking on 4 different values, the maximum of H(X) is log2 4 bits. This value is achieved for a uniform probability distribution on the input alphabet. Finally, we get:

C = log2 4 = 2 bits

A noisy channel
Consider now a channel with inputs {A, B, C, D} and outputs {A', B'}, where A and B are both connected to A' with probability 1, and C and D are both connected to B' with probability 1.

I(X, Y) = H(Y) − H(Y/X)

In this case, knowing X, the input value uniquely determines the output value: there is no uncertainty on Y and H(Y/X) = 0. Then we could think of C = 1 bit, as the maximum entropy of Y should be 1 bit. However, we are not sure there exists an input probability distribution such that the corresponding output probability distribution is uniform. Accordingly, we have to carry out the following calculations. Let us denote:

pA = P{X = A}   pB = P{X = B}   pC = P{X = C}   pD = P{X = D}

Considering the possible transitions from the input to the output, we have:

P{Y = A'} = P{X = A ∩ Y = A'} + P{X = B ∩ Y = A'}
P{Y = A'} = P{X = A} × P{Y = A' / X = A} + P{X = B} × P{Y = A' / X = B}
P{Y = A'} = pA + pB

As Y can only take two values, we deduce:

P{Y = B'} = 1 − (pA + pB)

And:

H(Y) = H2(pA + pB)

The maximum of H2(pA + pB), 1 bit, is achieved when pA + pB = 1/2. Thus, the capacity is 1 bit.
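The capacity of a simple channel can also be estimated numerically by evaluating I(X, Y) over a grid of input distributions. The sketch below is an illustration added to these notes (not part of the original text); it is applied to a binary symmetric channel with an arbitrarily chosen crossover probability.

```python
import numpy as np

def H(p):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def mutual_information(px, T):
    """px: input distribution, T[i, j] = P{Y = yj / X = xi}."""
    py = px @ T                                   # output distribution
    HY_given_X = np.sum(px * np.array([H(row) for row in T]))
    return H(py) - HY_given_X                     # I(X;Y) = H(Y) - H(Y/X)

p = 0.1                                           # crossover probability (arbitrary)
T = np.array([[1 - p, p], [p, 1 - p]])            # binary symmetric channel
qs = np.linspace(0.001, 0.999, 999)
capacity = max(mutual_information(np.array([q, 1 - q]), T) for q in qs)
print(capacity, 1 - H([p, 1 - p]))                # both ≈ 0.531 bit
```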

when a channel is symmetric (to be defined below).1 0.1 0.5 0. Comment Usually the probability transition matrices related to the subsets are not stochastic matrices as some output symbols are missing. the probability transition matrix has the following two properties each row is a permutation of each remaining row. we mean a matrix for which each row sum equals 1.communication channels ___________________________________________________ 67 Computing the capacity of a channel may be tricky as it consists of finding the maximum of a function of (n − 1) variables if the input alphabet contains n symbols. there is no difficulty in calculating the capacity.3 X . By stochastic matrix.1 0.3 B 0.3 C 0.1 0.1 0. Nevertheless.1 0. each column (if there are more than one) is a permutation of each remaining column. Example 1 Let us consider a channel with the probability transition matrix : D E F G Y A 0. Definition A channel is said to be symmetric if the set of output symbols can be partitioned into subsets in such a way that for each subset.5 0.5 0.

Example 2 Let the probability transition matrix be : 0.6 0.3 As not one of the three columns has the same value on its three rows.3 0.1 0. there is no partition containing one input symbol for which the symmetry properties are met. Calculating the capacity of a symmetric channel is easy by applying the following theorem : Theorem For a symmetric channel. The two probability transition matrices are : 0.5 0.5 T= 0. E.3 T1 = 0. the channel is not symmetric. .1 0.2 0.1 0.1 0.3 Each of them meets the required properties to make the channel symmetric.5 0.4 0.5 0. F} and {G}. the capacity is achieved for a uniform input probability distribution.68 ___________________________________________________ communication channels The output symbols {D. F.1 0. Neither does the global probability transition matrix meet the properties.1 0.1 and T2 = 0.3 0. E. Consequently.1 0. G} can be partitioned into two subsets {D.5 0.

the capacity is 1 achieved for P{X = 0}= P{X = 1}= 2 . Thus.communication channels ___________________________________________________ 69 Example 1 Let us consider a Binary Symmetric Channel : 1-p 0 p 0 p 1-p 1 1 The probability transition matrix is : 0 1 Y 0 1-p p 1 p 1-p X This matrix meets the requirements to make the channel symmetric.

we obtain : C = 1 − H 2 (p ) . we have : H (Y / X = 0 ) = − p × log p − ( − p )× log( − p ) = H 2 (p ) 1 1 H (Y / X = 1) = − p × log p − ( − p )× log( − p ) = H 2 (p ) 1 1 H (Y / X ) = 1 1 × H 2 ( p ) + × H 2 ( p ) = H 2 (p ) 2 2 Finally. Y ) = H (Y ) − H (Y / X ) P{ = 0}= P{ = 0 X = 0}+ P{ = 0 X = 1} Y Y Y P{ = 0}= P{X = 0}× P{ = 0 / X = 0}+ P{X = 1}× P{ = 0 / X = 1} Y Y Y P{ = 0}= Y 1 1 1 × ( − p )+ × p = 1 2 2 2 1 and H (Y ) = 1 bit 2 Thus.70 ___________________________________________________ communication channels I (X . P{ = 1}= Y Interpreting the rows of the probability transition matrix. C can be sketched as a function of p : 1 C(p) H2(p) 0 1/2 1 p -H2(p) .

Comments

- For p = 1/2, C = 0: knowing the output symbol does not provide any information about the input symbol, as the input and output random variables become independent.

- p = 1/2 is an axis of symmetry for C(p): we have C(p) = C(1 − p), since changing p into (1 − p) amounts to permuting the output symbols.

- The cases p = 1 or p = 0 can be easily interpreted: knowing the output symbol implies knowing the input symbol, which may be represented by one bit.

Example 2
The Binary Erasure Channel. Let us return to the example of page 57. We can think of a decision device such that the estimated symbol is V (respectively −V) if the input sample amplitude is greater than αV (respectively smaller than −αV); otherwise the estimated symbol is ε (an erasure symbol, i.e. neither −V nor V). The two thresholds −αV and αV thus delimit a central "no decision" region between the two Gaussian densities fYE/an=−V(y) and fYE/an=V(y).

If αV is chosen such that P{YE < −αV / ak = V} and P{YE > αV / ak = −V} are negligible, then the channel can be sketched as follows: input −V goes to output −V with probability 1 − p and to the erasure symbol ε with probability p; input V goes to output V with probability 1 − p and to ε with probability p; with

p = P{N(−V, N0/2) > −αV} = P{N(0, 1) > (1 − α) V √(2/N0)}

After receiving ε, we may consider that the transmitted symbol is lost, or ask the transmitter to re-send the symbol until the decision device delivers "−V" or "V".

Let us write the probability transition matrix (rows indexed by the input, columns by the outputs −V, ε, V):

          −V      ε      V
  −V     1−p      p      0
   V      0       p     1−p

The set of output symbols can be partitioned into {−V, V} and {ε}. The two probability transition matrices associated with these subsets meet the requirements to make the channel symmetric. The capacity is therefore I(X, Y) computed with a uniform input probability distribution.

I(X, Y) = H(Y) − H(Y/X)

P{Y = −V} = P{X = −V ∩ Y = −V} = P{X = −V} × P{Y = −V / X = −V} = (1/2) × (1 − p)

By symmetry, we have:

P{Y = V} = (1/2) × (1 − p)

Then we deduce:

P{Y = ε} = 1 − 2 × (1/2) × (1 − p) = p

H(Y) = −2 × (1/2)(1 − p) log [(1/2)(1 − p)] − p log p
H(Y) = −(1 − p)(−1 + log(1 − p)) − p log p
H(Y) = (1 − p) + H2(p)

Interpreting the rows of the probability transition matrix, we obtain:

H(Y/X = −V) = H(Y/X = V) = H2(p)

so that H(Y/X) = H2(p). Eventually, we get:

C = (1 − p) bit

For p = 0, the channel is the noiseless channel whose capacity is one bit: when −V (respectively V) is transmitted, −V (respectively V) is estimated.

THE NOISY CHANNEL THEOREM

Let S be an information source whose entropy per symbol is H∞(S) and whose symbol rate is DS. We want to transmit reliably the outcomes of S over a channel of capacity per use C at a symbol rate DC. Is it possible? The answer is given by the noisy channel theorem:

If the entropy rate is smaller than the capacity per time unit, i.e. if

H∞'(S) = H∞(S) × DS < C × DC = C'

(entropy and capacity must be expressed in the same unit), then, ∀ ε > 0, there exists a code to transmit the outcomes of S over the channel in such a way that, after decoding, we have P{error} < ε. In other words, it is possible to transmit the outcomes of S over the channel with an arbitrarily low probability of error, provided appropriate means are used.

This theorem is the most important result in information theory; a posteriori, it justifies the definition of capacity as the ability of the channel to transmit information reliably.

Comments

- Unlike the source coding theorem, the noisy channel theorem does not state how to construct the code; we only know that such a code exists.

- What is surprising is that we can transmit as reliably as desired over a noisy channel.

Example
Let S be a memoryless binary source such that P{S = 0} = 0.98 and P{S = 1} = 0.02.

The symbol rate of S is DS = 600 Kbits/sec. To link the emitter to the receiver, we have a binary symmetric channel with crossover probability p = 10⁻³; its maximum symbol rate is DC = 450 Kbits/sec. We will try to answer the following questions:

- Is it possible to transmit the outcomes of the source over the channel with an arbitrarily low probability of error?

To reduce the probability of error due to the noisy channel, we can think of using the repetition code of length 3, consisting of repeating the information digit twice: each information digit is encoded into a codeword made of three digits, the information digit plus two check digits identical to the information digit. As a decision rule, we can decide that "0" has been emitted if the received codeword contains at least two "0"s, and "1" otherwise.

- Which source coding algorithm must we use to be able to implement the repetition code without loss of information?

To answer the first question, we have to know whether the condition of the noisy channel theorem is satisfied or not. S being memoryless, we have:

H∞(S) = H2(0.02) = 0.1414 bit

Thus, the entropy rate is:

H∞'(S) = H∞(S) × DS = 0.1414 × 600 × 10³ = 84864 bits/sec

The capacity (per use) of the channel is:

C = 1 − H2(10⁻³) = 0.9886 bit

and the capacity per time unit:

C' = C × DC = 0.9886 × 450 × 10³ = 444870 bits/sec

As H∞'(S) < C', the answer to the first question is "yes".

The initial symbol rate of S being greater than the maximum symbol rate of the channel, we cannot connect the output of S directly to the input of the channel: we have to reduce the symbol rate of S by applying a compression algorithm. Taking into account the repetition code we want to use to transmit the outcomes of S over the channel, if DS' denotes the symbol rate of the compressed source, we have to satisfy:

1/DS' ≥ 3 × 1/DC    i.e.    DS' ≤ DC/3 = 450/3 = 150 Kbits/sec = DS/4 = 0.25 × DS

Hence, the source coding should result in an average number of code bits per source bit smaller than 0.25. From the source coding theorem, we know that there exists a prefix code applied to the Lth extension of S satisfying:

HL(S) ≤ n < HL(S) + 1/L

As S is a memoryless source, we have HL(S) = H∞(S) = 0.1414 bit.

Thus, we have to choose L so that

0.1414 ≤ n < 0.1414 + 1/L

To make sure that n < 0.25, we need 0.1414 + 1/L < 0.25, i.e.

L > 1/(0.25 − 0.1414) ≈ 9.2

Conclusion
Encoding the 10th extension of S by a Huffman code will result in a reduction in the symbol rate of at least 75%. Then the repetition code of length 3 can be implemented to transmit the outcomes of S' (the compressed source) over the channel.
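The two numerical conditions used in this example are easy to check with a few lines of code; the sketch below is an illustration added to these notes, not part of the original text.

```python
from math import log2

def H2(p):
    """Binary entropy function, in bits."""
    return -p * log2(p) - (1 - p) * log2(1 - p)

Hs, Ds = H2(0.02), 600e3          # source entropy (bit/symbol) and symbol rate
C, Dc = 1 - H2(1e-3), 450e3       # BSC capacity per use and channel symbol rate
print(Hs * Ds < C * Dc)           # True: the noisy channel theorem condition holds
print(1 / (0.25 - Hs))            # ≈ 9.2, so the 10th extension of S is enough
```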


1} of a binary memoryless source with P{1"}= q and P{ 0"}= 1 − q have to 0 " " be transmitted over a binary symmetric channel whose the probability of error is p. We will calculate the average probability of error in both cases. the output). Y) denote the input (respectively.79 ERROR CORRECTING CODES Example The outcomes { . We think of two ways to transmit the digits : directly (without coding) and using a repetition code consisting of sending three times the same information bit. Letting X (respectively. - without coding Let ε be the event that an error occurs. we have P{ }= P{ X = 1 Y = 0)* (X = 0 Y = 1)} ε ( P{ }= P{X = 1 Y = 0}+ P{X = 0 Y = 1} ε and P{X = 1 Y = 0}= P{X = 1} { = 0 / X = 1}= qp PY P{X = 0 Y = 1}= P{X = 0} { = 1 / X = 0}= ( − q )p PY 1 P{ }= qp + ( − q )p = p ε 1 .

001. we have : at ε = {"0" emitted ) E}* {"1" emitted ) E} ( ( P{ }= P{"0" emitted ) E}+ P{"1" emitted ) E} ε ( ( P{ }= P{ 0" emitted} { /"0" emitted}+ P{1" emitted} { /"1" emitted} ε " PE " PE P{ }= ( − q ) C 32 p 2 ( − p ) + p 3 + q C 32 p 2 ( − p ) + p 3 ε 1 1 1 ( ) ( ) P{ }= 3 p 2 ( − p )+ p 3 = −2 p 3 + 3 p 2 1 ε To compare the performances. 100. The decoding rule (majority logic) consists of deciding “ 0” has been emitted if the received word contains at least two “ 0” . 101. Setting E = { least two bits are corrupted}. 010. 011. 111. otherwise “ 1” is decided. the possible received words are : 000. we may sketch P{ } versus p for both cases (without coding ε and using the repetition code) : P{H} 1 without coding 1/2 with the repetition code 0 1/2 1 p . 110.80 ______________________________________________________ error correcting codes - with the repetition code There are two codewords : "0" → "000" "1" → "111" As some errors may occur while transmitting.

the outputs “ 0” and “ 1” have to be exchanged (otherwise corrupted 2 bits are more frequent than correct bits). the received word contains one “ 1” and two “ 0” or one “ 0” and two “ 1” . this error is corrected by applying the decision rule (majority logic). the probability of error resulting from using the repetition code is 2 1 smaller while in the range < p < 1 it is greater. codeword of length n = m+k a1 a2 am am+1 am+k m information digits k check digits . When three errors occur. To clear up this paradox. Error correction If the received word contains one error. If one or two errors occur. By “ systematic binary codes” . 2 As long as p < - Error detection This repetition code can detect one or two errors. This repetition code is said to be one-error-correcting. one only has to 2 1 notice that for p > . Then. However. The code is said to be two-error-detecting. which information bits appear directly in the codeword. the received word is a codeword and it is similar to the case where there is no error. the probability of error becomes p ’= 1 − p 1 with p ’< . CONSTRUCTION Without loss of generality. we will limit ourselves to systematic binary codes. it is not possible to know the exact number of errors (1 or 2).error correcting codes ______________________________________________________ 81 Comments 1 . we mean codes with a binary alphabet and whose codewords consist of information bits and check bits in such a way that check bits are linear combinations of information bits.

Given y and c. let us suppose they differ in l positions. Then.82 ______________________________________________________ error correcting codes Example The repetition code is a systematic code with m = 1 and k = 2 . a1 is the information digit The two check digits a 2 and a3 satisfy : a 2 = a1 a3 = a1 NEAREST NEIGHBOUR DECODING We will assume the codewords are transmitted over a binary symmetric channel of probability 1 of error p p < . 2 The received word is denoted y and we have to decide which codeword (c) has been sent. we will apply maximum likelihood decoding which consists of finding the codeword c such as P{y received / c emitted} is as great as possible. To work out a solution to this problem. we may consider : log(g (l )) = l log p + (n − l )log( − p ) 1 Differentiating g (l ) with respect to l gives : p dg (l ) = log p − log( − p ) = log 1 dl 1− p . we have : P{ received / c emitted}= p l ( − p ) y 1 n −l = g (l ) To make the calculations easier. the most natural method is maximum a posteriori decoding. implementing this algorithm requires the knowledge of the a priori probability distribution. However. Consequently. for the received y.

1} . as a word can be associated with a vector. the repetition code is a linear code since C = { 000" .error correcting codes ______________________________________________________ 83 p< p 1 d log g (l ) < 1 and <0 implies 1− p 2 dl Consequently log(g (l )) is a decreasing function of l and as log(g (l )) is an increasing function of g (l ). n The 2 n elements of V are the possible received words. The sum of two vectors is obtained by adding (binary addition) 0 the components in the same position: the addition table for each element follows the “ modulo2” rule: 0 + 0 = 0. Each codeword is its own opposite as 0 + 0 = 1 + 1 = 0 . V is a vector space of 0 dimension n over K = { . maximum likelihood decoding is equivalent to minimum distance decoding. C is said to be a linear code if C is a subspace of V. we have to choose the codeword c closest to the received word y. 0 3 001 101 111 011 C 000 100 110 010 . To summarise. Also. LINEAR CODES Linear codes have advantages over non linear codes: coding and decoding are easier to implement. the maximum of g (l ) is achieved for l minimum.1}.1} consisting of n-tuples of binary elements. Accordingly. g (l ) is a decreasing function of l. "111"} is a subspace of " dimension 2 of V = { . 1 + 1 = 0 . 1 + 0 = 0 + 1 = 1. Let us consider V = { . For instance.

v ). d (u. as we shall see. v ). w("101") = 2 w("000") = 0 Hamming distance The Hamming distance from u to v. v ) = 3 since the two vectors differ in three positions. it is a group and we have : d m = inf d (u . is the number of positions in which u and v differ. since V is a vector space. Some definitions : The vectors u and v are possible received words which are elements of V. Indeed. Minimum distance The minimum distance d m of a code C is the minimum distance between distinct codewords of C. Weight The weight of u. Let u and v be respectively “ 01011” and “ 00110” . v ) is the weight of u + v . w(u ). d (u. Consequently. v ) = inf w(u + v ) = inf* w(x ) u ≠v u ≠v x∈V Minimum distance = minimum weight. is the number of “ 1” in u. the two following properties can easily be established: . d (u. And. the more powerful the code in terms of error detecting and error correcting.84 ______________________________________________________ error correcting codes V corresponds to the 8 vertices of the cube whereas C is represented by two of these vertices. the greater the minimum distance. w(u + v ) = w("01101") = 3 = d (u . once the all-zero codeword is removed The minimum distance is a fundamental parameter and. with simple geometric considerations. Moreover.

the received word is 2 not a codeword but the errors are corrected and C 0 is decided. Then no error is corrected since C1 is decided. If the codeword the closest to the 2 received word is C 0 . To make this clear. Then. whatever the number of supposed errors corrected.all the errors are corrected although their number is greater than d −1 int m . Otherwise a codeword distinct from C 0 is decided : some errors can be 2 corrected but others may be added too. we never know the number of errors which actually occurred and the decided codeword may not be the right one. Some errors occur which result in a codeword C1 received. d − 1 The number of errors is greater than int m . d − 1 The number of errors is smaller than or equal to int m .error correcting codes ______________________________________________________ 85 Error detecting ability A linear code C of minimum distance d m is able to detect a maximum of d m − 1 errors. there exists a matrix G such : 0 C = { ∈ V / u = Gv ∀ v ∈ V } u . Let us suppose the codeword C 0 has been sent. Error correcting ability d − 1 A linear code C of minimum distance d m is able to correct a maximum of int m errors. C 0 is received and decided. As C is a subspace of V = { . There are several possibilities No error occurs. we will examine the different possible situations. 2 We should keep in mind that. - - GENERATOR MATRIX Let C be a linear code whose codewords consist of m information digits followed by k check m digits (n = m + k ).1} .

a 3 are the three information bits. a 2 . a5 . a1 . The check bits a 4 . Example m = k = 3 . a 6 satisfy : 1 1 0 a1 a5 = a2 + a3 = 0 1 1 a2 a6 = a1 + a3 1 0 1 a3 The generator matrix is : 1 0 0 G= 1 0 1 1 1 0 0 1 1 P 0 1 0 0 0 1 a4 = a1 + a2 Id3 .86 ______________________________________________________ error correcting codes As the first m digits of u are identical to the m digits of v. G has the following structure : m columns Im G= P n rows The (n − m ) rows of the submatrix P express the linear combinations corresponding to the check digits.

error correcting codes ______________________________________________________ 87

After multiplying the generator matrix by the 2 3 = 8 vectors consisting of the three information digits, we obtain the codewords :

source words 000 001 010 011 100 101 110 111

codewords 000000 001011 010110 011101 100101 101110 110011 111000

The minimum weight of the codewords is 3. Therefore, the minimum distance is 3 and this code is 2-error-detecting and one-error-correcting.

PARITY-CHECK MATRIX

Implementing maximum likelihood decoding involves finding the codeword closest to the received word. Let us consider a systematic linear code C and its generator matrix G, an n-row, m-column matrix made of the identity matrix Im on top of the submatrix P:

        Im
G  =
        P

Then the orthogonal complement of C,

C⊥ = { v ∈ V / vᵀ u = 0  ∀ u ∈ C },

is a linear code and its generator matrix is:


        Pᵀ
H  =
        In−m

an n-row, (n − m)-column matrix made of Pᵀ on top of the identity matrix In−m.

In addition, here is an important property useful for decoding: a necessary and sufficient condition for a received word y to be a codeword is that it verifies Hᵀ y = 0.

Syndrome
The syndrome S(y) associated with the received word y is defined as S(y) = Hᵀ y.

Let us suppose the sent codeword is c and the corresponding received word y. We have y = c + e where e is the error vector. The syndrome then takes the form

S(y) = S(c + e) = S(c) + S(e) = S(e), since c is a codeword.

This equality can be interpreted as follows : The syndrome of a received word depends only on the actual error. This property will help us when decoding.

Minimum distance decoding process
y is the received word. If S(y) = 0, y is a codeword and y is decided.

If S(y) ≠ 0, we have to find a codeword c such that d(y, c) is minimum. As y is not a codeword, we can write y = c + z, i.e. z = y + c (each vector is its own opposite), and S(z) = S(y + c) = S(y). Now, from z = y + c, we deduce w(z) = w(y + c) = d(y, c). As such, finding the codeword c closest to y is the same as finding a vector z of minimum weight satisfying S(z) = S(y). The codeword c is then given by c = z + y.


In practice, a decoding table is constructed. It contains all the syndrome values associated with the minimum weight sequences.

Example
Let us return to the preceding code C, with generator matrix G:

        1 0 0
        0 1 0
G  =    0 0 1
        1 1 0
        0 1 1
        1 0 1

The generator matrix of the orthogonal complement of C is:

        1 0 1
        1 1 0
H  =    0 1 1
        1 0 0
        0 1 0
        0 0 1

The parity-check matrix is:

         1 1 0 1 0 0
Hᵀ  =    0 1 1 0 1 0
         1 0 1 0 0 1

The dimension of Hᵀ is 3 × 6 and the sequences z have 6 components; consequently, the syndromes are vectors with 3 components, and there are 2³ = 8 different possible values for the syndrome. The syndrome 000 is associated with the sequence 000000. By multiplying Hᵀ by the sequences z of weight one, we obtain:

S(000001) = 001
S(000010) = 010
S(000100) = 100
S(001000) = 011
S(010000) = 110
S(100000) = 101

There is 8 − 6 − 1 = 1 remaining value (111). We may associate this value with a sequence of weight equal to 2, for instance S(100010) = 111.

Decoding table

syndrome value    sequence z (minimum weight)
000               000000
001               000001
010               000010
011               001000
100               000100
101               100000
110               010000
111               100010 (for instance)

Using this decoding table allows us to correct one error (wherever it is) and two errors if they are located in the first and fifth positions.
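Syndrome decoding with this table can be summarised in a few lines; the sketch below is an illustration added to these notes (not part of the original text) and uses the parity-check matrix and the decoding table above.

```python
import numpy as np

H_T = np.array([[1, 1, 0, 1, 0, 0],
                [0, 1, 1, 0, 1, 0],
                [1, 0, 1, 0, 0, 1]])          # parity-check matrix of the code

table = {"000": "000000", "001": "000001", "010": "000010", "011": "001000",
         "100": "000100", "101": "100000", "110": "010000", "111": "100010"}

def decode(y):
    y = np.array([int(b) for b in y])
    syndrome = "".join(map(str, H_T @ y % 2))
    z = np.array([int(b) for b in table[syndrome]])   # most likely error pattern
    return "".join(map(str, (y + z) % 2))             # corrected codeword

print(decode("011101"))   # a codeword: returned unchanged
print(decode("011001"))   # one error (fourth bit): corrected back to 011101
```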

Are the events E and F independent? EXERCISE 2 Evaluate. What is the amount of information provided by E = {at least one of the two drawn cards is a diamond} about F = {there is exactly one heart among the two drawn cards}? 3.91 EXERCISES INFORMATION MEASURE EXERCISES EXERCISE 1 Two cards are simultaneously drawn at random from a pack of 52 playing cards. Calculate the uncertainty of the events : . and {X = a 2 } similarly denote a ball coming to rest in black.{at least one of the two drawn cards is a heart} 2. 1. EXERCISE 3 (after Robert Gallager) Let {X = a1 } denote the event that the ball in a roulette game comes to rest in a red compartment. in two different ways. .{the king of hearts is one of the two drawn cards} . the exact number of bits required to describe four cards simultaneously drawn from a pack of 32 playing cards.

A is lighter than B. describe the procedure for which the average information provided by each weighing is maximum. Extend this result to the case of m weighings. The balance is a weighing machine. A and B have the same weight. 3 Y = b2 . 3. What is the average uncertainty associated with finding the counterfeit coin? A Roman balance is available on which two groups of coins A and B may be compared. What is the maximum average information provided by a weighing towards finding the counterfeit coin? When do we come across such a situation? 4. the croupier expects to use his inside knowledge to gain a tidy sum for his retirement. Y = b1 . By communicating this knowledge to an accomplice. Let us suppose a procedure has been worked out to find the counterfeit coin with at most m weighings. Assuming P{X = a1 / Y = b1 }= P{X = a 2 / Y = b2 }= . with one of them counterfeit: it is lighter than the others. . After years of patient study. Express the average information provided by a weighing towards finding the counterfeit coin in terms of the average uncertainty associated with the weighing.92 ________________________________________________________________ exercises 1 . We now consider that the number of counterfeit coins is unknown : 0. The weight f of a counterfeit coin is smaller than the weight v of the other coins. Each weighing results in three possibilities : A is heavier than B. What is the smallest value of m as function of n? When n is a power of 3. … or n. 2. indicates a red prediction and a blink. EXERCISE 4 Suppose we have n coins. 1. 1. 2 The croupier of the roulette table has developed a scheme to defraud the house. 4 calculate the average information provided by Y about X. he has learned to partially predict the coulour that will turn up by observing the path of the ball up to the last instant that bets may be placed. We suppose that P{X = a1 }= P{X = a 2 }= Let Y denote the croupier’ s signal : a cough. 5. indicates a black prediction.

and the actual weather. T. n . Construct a Huffman code for the values of X. Why? 2. The student explains the situation and applies for the weatherman’ s job. Compare the average length of the codewords to the entropy of X. 2 2 2 1. M. actual prediction no rain rain no rain 5/8 3/16 rain 1/16 1/8 1. Let us suppose a procedure has been worked out to find the counterfeit coin (s) in at most m weighings. How can we explain such a result? . who is an information theorist. of 1.000 days in a computer file. A clever student notices that he could be right more frequently than the weatherman by always predicting no rain.exercises ________________________________________________________________ 93 - Demonstrate that a weighing of coins allows us to know the number of counterfeit coins among the weighed coins.. the numbers indicating the relative frequency of the indicated event... x n } with probabilities . x 2 . 2.. How many bits are required? 3. The weatherman’ s boss wants to store the predictions.T). Using a Huffman code for the value pairs (M. 2 . but the weatherman’ s boss.. the size of the file? EXERCISE 2 1 1 1 Let X be a random variable taking values in {x1 .. What is the minimum value of m as function of n? SOURCE CODING EXERCISES EXERCISE 1 (after Robert Gallager) The weatherman’ s record in a given city is given in the table below.. what is.. turns him down. approximately.

. What is the average number of questions of an optimum procedure? What is the first question of the optimum procedure? ... We call an optimum procedure any set of successive questions which allows player B to determine S in a minimum average number of questions. 1.. Construct the code for 1 1 1 1 1 1 1 1 . .. We suppose: a . the sum of the two faces is denoted S.. Does this code satisfy the Kraft inequality? 2.. → 100. . Player B has to guess the number S by asking questions whose answers must be “ yes” or “ no” . a . . Compare the average length of the codewords to the entropy of U.. How can we explain this result? EXERCISE 4 Player A rolls a pair of fair dice. → 0100. → 10100. . ai . Show that n satisfies the double inequality : H (U ) ≤ n < H (U )+ 1 with H (U ) the entropy of U Application 3. ∀ k > j ≥ 1 P{ k }≤ P{ j } a a Let us define Qi = ∑ P{ k } ∀ k > 1 and Q1 = 0 associated with the message a i . and then 8 4 2 truncating this expansion to the first ni digits. 2.. Construct a Huffman code to encode the possible values of S. a 2 . .e.94 ________________________________________________________________ exercises EXERCISE 3 (after Robert Gallager) Let U be a discrete memoryless source taking on values in { 1 . U 1. a k =1 i −1 The codeword assigned to message a i is formed by finding the “ decimal” expansion of 5 1 1 Qi < 1 in the binary system i.. . ... Let n denote the average length of the codewords.... a . After the throw.. where ni is the integer equal to or just larger than the self-information of the event { = a i } expressed in bits.} with probabilities a P{ 1 } P{ 2 } P{ i } .. 4 4 8 8 16 16 16 16 a source U taking 8 values with probabilities 4.

EXERCISE 6 A memoryless source S delivers symbols A. 2. the entropy rate of U. If U delivers 3. Let U denote the binary source obtained by the preceding coding of S. 1/16. Calculate the entropy of U. Construct a Huffman code for the 7 values of S. 1.1/4 and 1/4 . .000 ternary symbols per second. B. 1/16. Compare the average number of bits used to represent one value of S to the entropy of S. 1/16. D. calculate. C.1/4. 2. EXERCISE 5 A ternary information source U is represented by a Markov chain whose state transition graph is sketched below : 1/2 0 1 1/2 2 1/2 1 1/2 1. in bits per second. E.exercises ________________________________________________________________ 95 - Calculate the average information provided by player A about the number S when answering the first question of an optimum procedure. Calculate the resulting average number of bits used to represent one ternary symbol of U. Construct a Huffman code for the second extension of U. F and G with probabilities 1/16.

Calculate β M and the corresponding optimum probability distribution (p)..2}. it can be shown that the the information rate β (p ) transmitted over the telephone line is maximum when we have: p n = 2 − β M l (n ) ∀ n ∈ IN * with β M = Max β (p ) p Now we will suppose that ∀ n ∈ IN * l (n ) = n seconds. in seconds. 2. n2 . Let S be a binary memoryless source whose entropy is maximum. What can be said about the memory of U? 5... By generalising the preceding results. p2 .. By applying the law of large numbers.96 ________________________________________________________________ exercises 3. is received. Let l (n ) denote the lapse of time necessary to transmit the integer n. If p were the uniform distribution over { . We have to transmit the outcomes of S over the telephone line. the average duration in transmitting one bit of S? COMMUNICATION CHANNEL EXERCISES EXERCISE 1 Calculate the capacity of the channels : . 1.).. 3.. calculate the probability distribution of U. what would be the value of the information 1 rate? Compare with the value given by the preceding question. 4. What is. EXERCISE 7 (after David MacKay) A poverty-stricken student communicates for free with a friend using a telephone by selecting a positive integer n and making the friend’ s phone ring n times then hanging up in the middle of the nth ring. in seconds. This process is repeated so that a string of symbols n1 . Setting pn = P{ integer n is emitted} ∀ n ∈ IN * and p = (p1 . Construct a prefix code so that the preceding procedure will achieve a maximum information rate transmitted. give a signification to an optimum source coding. the average duration in transmitting a codeword? What is.

Calculate the capacity of the two channels : A B C 1 1-p channel 1 1 p E E q 1-q channel 2 C B D D 1 A .exercises ________________________________________________________________ 97 1 A 1 B 1 C C B A 1 A B C D 1/3 1 D 1 2/3 A B C A B 1/2 1 1/2 1/2 A B C C 1/2 EXERCISE 2 1.

provided appropriate means are used? 1-p p B p p 1-p C D E A EXERCISE 4 Let S be a memoryless source taking on 8 equally likely values.001. Its symbol rate is 1.98 ________________________________________________________________ exercises 2.000 bits per second. Consider transmitting the outcomes of a binary source S over this channel. 2. If we want to transmit directly (without coding) the outcomes of S. Calculate its capacity. Suppose the source symbols equally likely. The outcomes of S are to be transmitted over a binary symmetric channel of crossover probability equal to 0. Depending on DS . how many source symbols must be taken at a time? 3. the source symbol rate. Is it possible to transmit the outcomes of S with an arbitrarily low probability of error? . Calculate the probability for a source word to be received without error. Let us suppose S is memoryless. The maximum channel symbol rate is 3. 4.000 symbols per second. EXERCISE 3 We consider the channel below : 1-p A B 1-p p C D 1. what is the maximum channel symbol rate DU so that the probability of error is arbitrarily low. Calculate the capacity of the channel obtained when the outputs D and E of channel 1 are connected with the inputs D and E of channel 2.

d .01 and the channel 3 3 9 9 9 symbol rate equal to 5. Deduce from the preceding questions a procedure to connect the source to the channel so that the probability of error is zero. Calculate the capacity of the channel : A B C D E F p 1-p 1-p p 1-p 1-p 1-p p p p 1-p p A B C D E F Consider transmitting the outcomes of a source S over the channel. . Is this code optimum? Why or why not? 5. We suppose p = 0. c. . . b. Is this code uniquely decipherable? 4. What is the maximum source symbol rate if we want to transmit S over the channel with an arbitrarily low probability of error? We encode the possible values of S by a ternary code such as : a→0 b →1 c → 20 d → 21 e → 22 3. Calculate the average length of the codewords.000 (6-ary) symbols per second. The possible values of S 1 1 1 1 1 a are { . 2. .exercises ________________________________________________________________ 99 EXERCISE 5 1. e} with probabilities . .

Give the generator matrix and the parity-check matrix.100 _______________________________________________________________ exercises ERROR CORRECTING CODE EXERCISES EXERCISE 1 Let us consider the following code : source words 00 01 10 11 codewords 00000 01101 10111 11010 1. Is it possible to construct a decoding table to correct one error on any of the check bits? . Calculate the probability of error by codeword when using the decoding table. EXERCISE 2 We consider the systematic linear code : source words 000 001 0?? 011 100 101 110 111 codewords ?00?? 00101 010?? ??1?1 1001? 101?1 1100? 111?? 1. Is it possible to construct a decoding table able to correct one error on any of the three information bits? 4. 2. Construct a decoding table associated with the code. 3. Calculate the generator matrix and the parity check matrix. Complete the table by replacing the question marks with the correct bits. We suppose the codewords are transmitted over a binary symmetric channel of crossover probability p. 2.

what is the smallest acceptable value of the minimum distance? 5. . A binary symmetric channel of crossover probability equal to 0. Is it possible to transmit the outcomes of S over the channel with an arbitrarily low probability of error? SOURCE CODING 2. What is the average symbol rate of the binary source which results from this code? How many check bits have to be juxtaposed to one information digit so that the symbol rate over the channel is equal to 280 Kbits/sec? CHANNEL CODING 4. 1. Construct a Huffman code for the third extension of S. We consider a code that meets the preceding conditions.02 at a symbol rate of 300 Kbits/sec. 6. Construct the corresponding generator matrix. The second extension of the binary source obtained after coding S is to be encoded by a systematic linear code whose codewords consist of two information bits and three check digits. Construct a decoding table. List the codewords by assigning to each codeword its weight.exercises _______________________________________________________________ 101 EXERCISE 3 A memoryless binary source S delivers “ 0” and “ 1” with probabilities 0. If we want the code to correct one error by codeword. 7. How many patterns of two errors can be corrected? 8.05 and whose the maximum symbol rate is 280 Kbits/sec is available. Consider reducing the symbol rate of S by at least 50% by encoding the outputs of S with a Huffman code. What is the minimum extension of S to be encoded to meet such conditions? 3. Compare the probability of error resulting from using the decoding table to the probability of error when transmitting directly (without coding) the source words.98 and 0.

.

2. No EXERCISE 2 15. log 2 3 bits. I (X i . log 2 (n + 1) .103 SOLUTIONS INFORMATION MEASURE SOLUTIONS EXERCISE 1 1. –0. X i : result of the ith weighing and X random variable taking on n values with a uniform probability distribution. log 3 n n 5.18 bit. X ) = H (X i ) 3.188 bit EXERCISE 4 1. 4.134 bits EXERCISE 3 0.404 bit 3. 4. log 2 n 2. At the first weighing when n is a multiple of 3.70 bits and 1.

n = H ∞ (S ) 2. The student does not provide any information about the actual weather. 3. Yes. Approximately 1. n = H (U ) EXERCISE 4 2. 4. 0. n = H (X ) EXERCISE 3 1. 2 Kbits/sec 2. H ∞ (U ) = 1 bit U U 3.98 bit.3 questions.562 bits EXERCISE 2 2. U is memoryless.166 EXERCISE 6 1. 3. P{ = 0}= P{ = 1}= 1 2 4. At least 696 + 896 = 1. 2. EXERCISE 5 1. . 1.592 bits.104 _______________________________________________________________ exercises SOURCE CODING SOLUTIONS EXERCISE 1 1.

one second. 2( − p )bit 1 2.666 bit/sec 3. 0. (1 − p ) DS 2( − p ) 1 EXERCISE 4 No EXERCISE 5 1. C1 = C 2 = 1 bit 2.exercises _______________________________________________________________ 105 EXERCISE 7 1. β M = 1 bit/sec and p n = 2 − n 2. log 2 6 − H 2 (p ) 2. 5. ∀ n ≥1 TRANSMISSION CHANNELS SOLUTIONS EXERCISE 1 C1 = C 2 = 1. Two bits at a time. 3. C1 EXERCISE 3 1. 4.23 bit EXERCISE 2 1.585 bit and C 3 = 1.936 symbols/sec .

3. Yes. By linking 0 with A, 1 with C and 2 with E.
4. 3

ERROR CORRECTING CODES SOLUTIONS

EXERCISE 1

1. G = [ 1 0 1 1 1 ]
       [ 0 1 1 0 1 ]

   H^T = [ 1 1 1 ]
         [ 1 0 1 ]
         [ 1 0 0 ]
         [ 0 1 0 ]
         [ 0 0 1 ]

   Decoding table:

   syndromes   minimum weight sequences   number of errors
   000         00000                      0
   001         00001                      1
   010         00010                      1
   011         00011                      2
   100         00100                      1
   101         01000                      1
   110         00110                      2
   111         10000                      1

2. P{error} = 1 - [(1 - p)^5 + 5p(1 - p)^4 + 2p^2(1 - p)^3]
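As a consistency check (not part of the original solution), the sketch below re-derives the syndromes used in the Exercise 1 decoding table; the matrix H is the 3 x 5 transpose of the H^T written above, and the coset leaders are the minimum weight sequences from the table.

```python
# Consistency check for the Exercise 1 solution above.
# H is the 3 x 5 parity-check matrix whose transpose H^T is written above.
H = [
    [1, 1, 1, 0, 0],
    [1, 0, 0, 1, 0],
    [1, 1, 0, 0, 1],
]

codewords = ["00000", "01101", "10111", "11010"]

# Decoding table: syndrome -> minimum weight sequence (coset leader).
leaders = {
    "000": "00000", "001": "00001", "010": "00010", "011": "00011",
    "100": "00100", "101": "01000", "110": "00110", "111": "10000",
}

def syndrome(word):
    """Syndrome H . word^T over GF(2), returned as a bit string."""
    bits = [int(b) for b in word]
    return "".join(str(sum(h * b for h, b in zip(row, bits)) % 2) for row in H)

assert all(syndrome(c) == "000" for c in codewords)        # codewords are valid
assert all(syndrome(e) == s for s, e in leaders.items())   # table is consistent
print("parity-check matrix and decoding table agree")
```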

EXERCISE 2

1. source words   codewords
   000            00000
   001            00101
   010            01010
   011            01111
   100            10010
   101            10111
   110            11000
   111            11101

2. G = [ 1 0 0 1 0 ]
       [ 0 1 0 1 0 ]
       [ 0 0 1 0 1 ]

   H^T = [ 1 0 ]
         [ 1 0 ]
         [ 0 1 ]
         [ 1 0 ]
         [ 0 1 ]

3. Yes.

EXERCISE 3

5. codewords   weight
   00000       0
   01011       3
   10110       3
   11101       4

   G = [ 1 0 1 1 0 ]
       [ 0 1 0 1 1 ]

6. Decoding table:

   syndromes   minimum weight sequences
   000         00000
   001         00001
   010         00010
   011         01000
   100         00100
   101         11000
   110         10000
   111         10001

7. One pattern of two errors can be corrected.

8. P{error with coding} = 0.018
   P{error without coding} = 0.097
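The two figures in answer 8 follow from the decoding table of answer 6: a codeword is decoded correctly exactly when the channel produces one of the listed coset leaders as the error pattern. A small Python sketch of that computation, with p = 0.05 as stated in the exercise (an illustration, not the book's own derivation):

```python
p = 0.05  # crossover probability of the binary symmetric channel

# Coset leaders of the decoding table in answer 6 (correctable error patterns).
leaders = ["00000", "00001", "00010", "01000", "00100", "11000", "10000", "10001"]

def pattern_probability(e, p):
    """Probability that the channel produces exactly the error pattern e."""
    w = e.count("1")
    return p ** w * (1 - p) ** (len(e) - w)

p_error_with_coding = 1 - sum(pattern_probability(e, p) for e in leaders)

# Without coding, a two-bit source word is wrong as soon as one bit flips.
p_error_without_coding = 1 - (1 - p) ** 2

print(f"with coding   : {p_error_with_coding:.3f}")     # about 0.018
print(f"without coding: {p_error_without_coding:.3f}")  # about 0.097
```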

BIBLIOGRAPHY

Thomas M. Cover, Joy A. Thomas, Elements of Information Theory (John Wiley & Sons)
Robert G. Gallager, Information Theory and Reliable Communication (John Wiley & Sons)
David J. C. MacKay, Information Theory, Inference and Learning Algorithms
Mark Nelson, La Compression de Données (Dunod)
John G. Proakis, Masoud Salehi, Communication Systems Engineering (McGraw-Hill)


INDEX

ASCII code, Average mutual information, Binary erasure channel, Binary symmetric channel, Capacity, Compression rate, Conditional entropy, Decoding table, Entropy (of a random variable), Entropy (of a source), Entropy rate, Extension of a source, Generator matrix, Hamming distance, Huffman algorithm, Information source, Instantaneous (-code), JPEG, Kraft inequality, Kraft theorem, Linear code, LZ 78 algorithm, LZW algorithm, Mac Millan theorem, Mastermind (game of -), Minimum distance, Minimum distance decoding, Morse code, Nearest neighbour decoding, Noisy channel theorem, Optimum (-code), Parity-check matrix, Prefix (-code), Repetition (-code), Self-information, Shannon-Fano algorithm, Shannon paradigm, Source coding problem, Source coding theorem, Symmetric channel, Syndrome, Uncertainty, Weight
