84  Chapter 2  Channels and Channel Capacity

The numbers labeling the arcs in this graph are transition probabilities. Find the transition probability matrix for this channel.

2.1.2: A DMC is described by the transition probability matrix

            | .9   .05  .05  0   |
    P_Y|C = | .05  .9   0    .05 |
            | .05  0    .9   .05 |
            | 0    .05  .05  .9  |

a) What is the cardinality of the input alphabet C?
b) What is the cardinality of the output symbol alphabet Y?
c) Draw a graphical representation of this DMC, and label the transition probabilities.

2.1.3: A DMC has the graphical representation shown below.

[Figure: two-input channel transition graph; the arc labels include .85, .10, and .05.]

a) Does this channel represent a system using hard-decision or soft-decision decoding? Explain.
b) If it is desired to have c_0 -> y_0 and c_1 -> y_3, what is the probability of this occurring if the source symbols are equally probable?

2.1.4: For the channel of Exercise 2.1.3, what is the entropy of Y if the input symbols are equally probable?

2.1.5: Calculate the mutual information for the channel of Exercise 2.1.4.

2.1.6: The channel of Exercise 2.1.4 is modified to use hard-decision decoding. The resulting graphical representation of the new channel is
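Exercises 2.1.4 and 2.1.5 come down to computing H(Y) and I(C; Y) from a transition matrix and an input distribution. A minimal sketch of that calculation in Python; since the exercise figures are not reproduced here, it uses the (possibly misread) matrix of Exercise 2.1.2 as a stand-in, and the function names are mine:

```python
import numpy as np

# Transition matrix as reconstructed for Exercise 2.1.2; row i holds P(Y = y_j | C = c_i).
P = np.array([
    [0.90, 0.05, 0.05, 0.00],
    [0.05, 0.90, 0.00, 0.05],
    [0.05, 0.00, 0.90, 0.05],
    [0.00, 0.05, 0.05, 0.90],
])

def entropy(p):
    """Entropy in bits; 0 * log 0 is taken as 0."""
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def mutual_information(P, pc):
    """I(C; Y) = H(Y) - H(Y|C) for input distribution pc and rows P[i] = P(y | c_i)."""
    py = pc @ P                                  # output distribution
    h_y = entropy(py)
    h_y_given_c = np.sum(pc * np.array([entropy(row) for row in P]))
    return h_y - h_y_given_c

pc = np.full(4, 0.25)                            # equally probable inputs
print(entropy(pc @ P))                           # H(Y)
print(mutual_information(P, pc))                 # I(C; Y)
```

The same two functions answer 2.1.4 and 2.1.5 for the actual figure once its transition probabilities are substituted into `P`.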
[Figure: binary channel with correct-transition probability .99 and crossover probability .01.]

Exercises 85

For equally probable input symbols,
a) Calculate Q_Y.
b) Calculate H(Y).
c) Calculate I(C; Y), and compare this with the result of Exercise 2.1.5.

2.2.1: Write a computer program to implement the Arimoto-Blahut algorithm. Test your program for the channels in Example 2.2.1.

2.2.2: Using the Arimoto-Blahut algorithm, find the channel capacity and the P_C that achieves this capacity for a channel with transition probability matrix

            | .6  .3 |
    P_Y|C = | .3  .1 |
            | .1  .6 |

2.2.3: Repeat Exercise 2.2.2 for a channel with transition probability matrix

            | .6  .1 |
    P_Y|C = | .3  .3 |
            | .1  .6 |

2.2.4: Determine whether the following channels are symmetric:

[Figure: channel diagrams for parts (a) and (b).]

2.2.5: Find the channel capacity and the input probability distribution that achieves this capacity for the channels in Exercise 2.2.4.

2.2.6: Plot the capacity of the BSC as a function of crossover probability.

86  Chapter 2  Channels and Channel Capacity

2.3.1: In a communication system with an optimum receiver operating over a BSC, the crossover probability of the channel is a function of the signal-to-noise ratio γ. Communication theory gives this crossover probability as

    p = Q(√(2γ)),

where the error rate function Q is defined as

    Q(x) = (1/√(2π)) ∫_x^∞ e^(−t²/2) dt.

It is also common practice to express the signal-to-noise ratio in decibels:

    γ_dB = 10·log₁₀(γ).

a) Plot p on a logarithmic scale as a function of γ_dB from 0 dB to 15 dB.
b) Plot the channel capacity for the BSC as a function of γ_dB from 0 dB to 15 dB.
c) Plot the cutoff rate for the BSC as a function of γ_dB from 0 dB to 15 dB.

2.3.2: Binary phase-shift keying (BPSK) is a modulation method that produces a binary symmetric channel.
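Exercise 2.2.1 asks for an Arimoto-Blahut program. One possible sketch is below; the function name and tolerances are my own, and since the channels of Example 2.2.1 are not reproduced here, a BSC (whose capacity 1 − H(p) is known in closed form) serves as the sanity check:

```python
import numpy as np

def arimoto_blahut(P, tol=1e-9, max_iter=10_000):
    """Capacity (bits/use) of a DMC with rows P[i] = P(y | c_i),
    via Arimoto-Blahut alternating maximization."""
    n = P.shape[0]
    pc = np.full(n, 1.0 / n)               # start from the uniform input distribution
    for _ in range(max_iter):
        q = pc @ P                         # output distribution induced by pc
        # D(P(.|c_i) || q) for each input symbol, in bits
        with np.errstate(divide="ignore", invalid="ignore"):
            logratio = np.where(P > 0, np.log2(P / q), 0.0)
        d = np.sum(P * logratio, axis=1)
        cap_lo = pc @ d                    # lower bound on capacity
        cap_hi = d.max()                   # upper bound on capacity
        if cap_hi - cap_lo < tol:
            break
        pc = pc * np.exp2(d)               # multiplicative update
        pc /= pc.sum()
    return cap_lo, pc

# Sanity check against a BSC with crossover probability 0.1.
p = 0.1
bsc = np.array([[1 - p, p], [p, 1 - p]])
C, pc = arimoto_blahut(bsc)
```

For the BSC the routine should return the closed-form capacity 1 − H(p) with a uniform maximizing input distribution; the matrices of Exercises 2.2.2 and 2.2.3 can then be passed in directly.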
Assuming the transmitter can transmit 1000 binary symbols per second and the receiver is optimum (see Exercise 2.3.1), what is the theoretical maximum number of information bits per second that can be transmitted if the signal-to-noise ratio at the receiver is 5 dB?

2.3.3: For the communication channel of Exercise 2.3.2, estimate the number of information bits per second that can be transmitted over a practical communication system.

2.3.4: You are given a modem capable of transmitting 56,000 binary symbols per second and a BSC which makes an error every 100 symbols (on the average). What is the maximum error-free information rate you can expect from a practical encoder/decoder using this system?

2.4.1: You are given an information source having a four-symbol alphabet A = {start, 0, 1, stop}. The source obeys the following rules:
1. a "start" symbol must always be followed by either a "0" or a "1" symbol;
2. the probability A will emit a "0" symbol at any given time is always equal to the probability A will emit a "1" symbol;
3. the probability that a "0" or a "1" symbol will be followed by a "stop" symbol is 0.1;
4. a "stop" symbol is always followed by a "start" symbol.
Draw a state diagram for this source, label its transition probabilities, and find the steady-state probabilities of each state.

2.4.2: Find the entropy rate for the source in Exercise 2.4.1.

2.4.3: For the source of Exercise 2.4.1, calculate H(A) assuming the steady-state symbol probabilities, and compare this with the entropy rate from Exercise 2.4.2.

2.4.4: The design of computer systems often takes advantage of the property of "locality." This property says that the probability that the next data or instruction needed by the computer is largest for data or instructions located in memory near the current data or instruction, and that this probability decreases rapidly for data or instructions located farther away. Use information theory to qualitatively explain how computers might be able to take advantage of locality without any significant loss in computer performance.

Exercises 87

2.4.5: In Exercise 1.2.12, the entropy of the letters in English text was found to be 4.08 bits per letter if it was assumed that the letters occur independently of each other. Text files stored in a computer system are normally stored using an 8-bit ASCII representation. When the Lempel-Ziv algorithm is used to compress large text files, it is often found to achieve more than a 2:1 compression ratio. Explain how this is possible.

2.5.1: High-level computer languages are often advertised as being "machine independent." Give some example situations where this might be true and some examples where it might be false. Use the data-processing inequality to justify your arguments.

2.6.1: The dicode channel is an LTI channel characterized by impulse response h_0 = 1, h_1 = −1, h_k = 0 for k ∉ {0, 1}. Find the transition probability matrix for the dicode channel assuming a discrete memoryless source with alphabet A = {−1, +1} and H(A) = 1.

2.6.2: Find the entropy rate and the output symbol probabilities for the dicode channel defined in Exercise 2.6.1.

2.6.3: Given a discrete memoryless source with alphabet A = {−1, +1} having symbol probabilities Pr(−1) = p and Pr(+1) = 1 − p = q, and given the dicode channel of Exercise 2.6.1, let state S_0 correspond to a_{k−1} = −1 and state S_1 correspond to a_{k−1} = +1. Show that, in the steady state, the state probabilities are p_0 = p and p_1 = q, and that R = H(A).

2.7.1: Show that φ_xx(−m) = φ_xx(m).
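The steady-state and entropy-rate calculations of Exercises 2.4.1 and 2.4.2 can be checked numerically. The transition matrix below encodes one reading of the stated rules: from "0" or "1" the next symbol is "stop" with probability 0.1, and otherwise "0" or "1" with probability 0.45 each; that 0.45/0.45 split is an interpretation of rules 2 and 3, not spelled out in the exercise.

```python
import numpy as np

# States of the source in Exercise 2.4.1: start, "0", "1", stop (row-stochastic T).
states = ["start", "0", "1", "stop"]
T = np.array([
    [0.00, 0.50, 0.50, 0.00],   # start -> "0" or "1", equally likely (rules 1, 2)
    [0.00, 0.45, 0.45, 0.10],   # "0"  -> stop w.p. 0.1, else "0"/"1" equally (assumed split)
    [0.00, 0.45, 0.45, 0.10],   # "1"  -> same as "0"
    [1.00, 0.00, 0.00, 0.00],   # stop -> start (rule 4)
])

def stationary(T, iters=5000):
    """Steady-state distribution by power iteration on the row-stochastic T."""
    pi = np.full(T.shape[0], 1.0 / T.shape[0])
    for _ in range(iters):
        pi = pi @ T
    return pi

def entropy_rate(T, pi):
    """H = sum_i pi_i * H(row i of T), in bits per symbol."""
    h = 0.0
    for pi_i, row in zip(pi, T):
        nz = row[row > 0]
        h -= pi_i * np.sum(nz * np.log2(nz))
    return h

pi = stationary(T)
H = entropy_rate(T, pi)
```

Under this reading, "start" and "stop" share the same steady-state probability, as rule 4 and the 0.1 stopping probability force equal flow through both states.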
2.7.2: Find the power spectrum of a discrete sequence x_k with autocorrelation φ_xx(m) = exp(−0.25|m|).

2.7.3: Find the power spectrum of a discrete sequence x_k with autocorrelation φ_xx(m) = (0.1)^|m|.

2.7.4: Given a discrete memoryless source with alphabet A = {−1, +1}, H(A) = 1, and a channel with impulse response

    h_k = 1 for k = 0,   h_k = −1 for k = 1,   h_k = 0 otherwise,

find the power spectrum of the channel output.

2.8.1: Given an NRZI encoder defined by B_k = c_k ⊕ B_{k−1}, with B_{−1} = 0, find the encoded output sequence for an input sequence c_k = (111010000).

2.8.2: The NRZI-encoded sequence from Exercise 2.8.1 is applied to another identical NRZI encoder. Find the output sequence of the second encoder.

2.8.3: The NRZI-encoded output sequence from Exercise 2.8.1 is applied to a decoder w_k = B_k ⊕ B_{k−1}. Find the output sequence of the decoder.

2.8.4: Draw the state diagram and the trellis diagram for the NRZI encoder of Exercise 2.8.1 and determine its connection matrix.

2.8.5: For the NRZI encoder of Exercise 2.8.1, find the number of possible 3-bit output sequences assuming B_{−1} = 0.

2.8.6: Find the capacity of the NRZI encoder of Exercise 2.8.1.

88  Chapter 2  Channels and Channel Capacity

2.8.7: Find the capacity of the dicode channel of Exercise 2.6.1 assuming a binary input.

2.9.1: Find the state transition matrix for a (d, k) = (1, 2) sequence and verify the channel capacity given in Table 2.9.

2.9.2: The capacity of a (d, k) = (1, 2) sequence is 0.4057. Specify an encoder for a rate 1/3, (d, k) = (1, 2) code.

2.9.3: Calculate and plot the power spectrum for a maxentropic (d, k) = (1, 2) sequence.
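The NRZI recursions of Exercises 2.8.1 and 2.8.3 are easy to mechanize, which makes a quick check of the hand-worked sequences possible. A small sketch (the function names are mine; the recursions themselves are the ones given in the exercises):

```python
def nrzi_encode(bits, b_init=0):
    """NRZI encoder of Exercise 2.8.1: B_k = c_k XOR B_{k-1}, with B_{-1} = b_init."""
    out, b = [], b_init
    for c in bits:
        b = c ^ b
        out.append(b)
    return out

def nrzi_decode(bits, b_init=0):
    """Matching decoder of Exercise 2.8.3: w_k = B_k XOR B_{k-1}."""
    out, prev = [], b_init
    for b in bits:
        out.append(b ^ prev)
        prev = b
    return out

c = [1, 1, 1, 0, 1, 0, 0, 0, 0]   # input sequence from Exercise 2.8.1
B = nrzi_encode(c)                # -> [1, 0, 1, 1, 0, 0, 0, 0, 0]
assert nrzi_decode(B) == c        # decoder inverts the encoder
```

Feeding `B` through `nrzi_encode` a second time reproduces the cascade of Exercise 2.8.2.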

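For Exercise 2.9.1, one way to verify the quoted capacity is to take log₂ of the largest eigenvalue of the state transition matrix. A sketch under the usual runlength-state construction (the state labeling is mine, and the matrix is my reconstruction for the (d, k) = (1, 2) constraint rather than a copy of Table 2.9):

```python
import numpy as np

# State transition (adjacency) matrix for a (d,k) = (1,2) runlength-limited
# sequence. States count the 0s emitted since the last 1: from state 0 a 0 is
# forced (d = 1), and from state 2 a 1 is forced (k = 2).
A = np.array([
    [0, 1, 0],   # state 0: just sent a 1, must send a 0 -> state 1
    [1, 0, 1],   # state 1: may send a 1 (-> state 0) or a 0 (-> state 2)
    [1, 0, 0],   # state 2: two 0s in a row, must send a 1 -> state 0
])

# Shannon capacity of the constrained sequence: C = log2(lambda_max).
lam = np.abs(np.linalg.eigvals(A)).max()
C = np.log2(lam)
print(C)   # should be close to the 0.4057 quoted in Exercise 2.9.2
```

Here λ_max is the real root of λ³ = λ + 1, so C = log₂ λ_max ≈ 0.4057, consistent with the figure quoted in Exercise 2.9.2.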