SCHAUM'S OUTLINES

Theory and Problems of
ANALOG AND DIGITAL COMMUNICATIONS
Second Edition

HWEI P. HSU, Ph.D.

Schaum's Outline Series
McGraw-Hill

333 fully solved problems
New chapter on signals and spectra
Updated information on information theory and channel capacity
130 supplementary problems
Use with these courses: Communications Theory and Practice; Computer Engineering; Principles of Communication Engineering; Communications Engineering; Electrical Engineering

Hwei P. Hsu received his B.S. from National Taiwan University and M.S. and Ph.D. from Case Institute of Technology. He has published several books, including Schaum's Outline of Signals and Systems and Schaum's Outline of Probability, Random Variables, and Random Processes.

Copyright © 2003, 1993 by The McGraw-Hill Companies, Inc. All rights reserved. Printed in the United States of America. Except as permitted under the Copyright Act of 1976, no part of this publication may be reproduced or distributed in any form or by any means, or stored in a data base or retrieval system, without the prior written permission of the publisher.

ISBN 0-07-140228-4

PREFACE

The objective of this book, like that of its previous edition, is to provide an introduction to the basic principles of analog and digital communications for advanced undergraduates in electrical or computer engineering. The assumed background is circuit analysis; prior exposure to a signals and systems course is helpful. The book can be used as a self-contained textbook or for self-study. Each topic is introduced in a chapter with numerous solved problems. The solved problems constitute an integral part of the text.

In this edition, the first two chapters deal with basic tools. These tools are then applied in the next three chapters to analog communication systems, including sampling and digital transmission of analog signals. Probability, random variables, and random processes are introduced in Chapters 6 and 7 and applied in the chapters that follow. The effect of noise in analog communication systems is covered in Chapter 8. Chapter 9 treats the effect of noise in digital communication and the subject of optimum reception. Information theory and source coding are treated in Chapter 10. A distinctive feature of this edition is the new chapter, Chapter 11, on error control coding, including linear block codes, cyclic codes, and convolutional codes.
Hwei P. Hsu
Montville, NJ

CONTENTS

CHAPTER 1 Signals and Spectra: 1.1 Introduction. 1.2 Fourier Series and Discrete Spectra. 1.3 Fourier Transforms and Continuous Spectra. 1.4 Properties of Fourier Transform. 1.5 Fourier Transforms of Power Signals.
CHAPTER 2 Signal Transmission and Filtering: 2.1 Introduction. 2.2 Impulse Response and Frequency Response. 2.3 Filter Characteristics of LTI Systems. 2.4 Transmission of Signals Through LTI Systems. 2.5 Filters. 2.6 Quadrature Filters and Hilbert Transforms.
CHAPTER 3 Amplitude Modulation: 3.1 Introduction. 3.2 Amplitude Modulation. 3.3 Double-Sideband Modulation. 3.4 Ordinary Amplitude Modulation. 3.5 Single-Sideband Modulation. 3.6 Vestigial-Sideband Modulation. 3.7 Frequency Translation and Mixing. 3.8 Frequency-Division Multiplexing.
CHAPTER 4 Angle Modulation: 4.1 Introduction. 4.2 Angle Modulation and Instantaneous Frequency. 4.3 Phase and Frequency Modulation. 4.4 Fourier Spectra of Angle-Modulated Signals. 4.5 Narrowband Angle Modulation. 4.6 Sinusoidal (or Tone) Modulation. 4.7 Bandwidth of Angle-Modulated Signals. 4.8 Generation of Angle-Modulated Signals. 4.9 Demodulation of Angle-Modulated Signals.
CHAPTER 5 Digital Transmission of Analog Signals: 5.1 Introduction. 5.2 Pulse Code Modulation. 5.3 Sampling Theorem. 5.4 Sampling. 5.5 Pulse Amplitude Modulation. 5.6 Quantizing. 5.7 Encoding. 5.8 Bandwidth Requirements of PCM. 5.9 Delta Modulation. 5.10 Signaling Format. 5.11 Time-Division Multiplexing. 5.12 Bandwidth Requirements for TDM. 5.13 Pulse Shaping and Intersymbol Interference. 5.14 Digital Carrier Modulation Systems.
CHAPTER 6 Probability and Random Variables: 6.1 Introduction. 6.2 Probability. 6.3 Random Variables. 6.4 Two-Dimensional Random Variables. 6.5 Functions of Random Variables. 6.6 Statistical Averages. 6.7 Special Distributions.
CHAPTER 7 Random Processes: 7.1 Introduction. 7.2 Definitions and Notations of Random Processes. 7.3 Statistics of Random Processes. 7.4 Correlations and Power Spectral Densities. 7.5 Transmission of Random Processes Through Linear Systems. 7.6 Special Classes of Random Processes.
CHAPTER 8 Noise in Analog Communication Systems: 8.1 Introduction. 8.2 Additive Noise and Signal-to-Noise Ratio. 8.3 Noise in Baseband Communication Systems. 8.4 Noise in Amplitude Modulation Systems. 8.5 Noise in Angle Modulation Systems.
CHAPTER 9 Optimum Detection: 9.1 Introduction. 9.2 Binary Signal Detection and Hypothesis Testing. 9.3 Probability of Error and Maximum Likelihood Detector. 9.4 Optimum Detection. 9.5 Error Probability Performance of Binary Transmission Systems.
CHAPTER 10 Information Theory and Source Coding: 10.1 Introduction. 10.2 Measure of Information. 10.3 Discrete Memoryless Channels. 10.4 Mutual Information. 10.5 Channel Capacity. 10.6 Additive White Gaussian Noise Channel. 10.7 Source Coding. 10.8 Entropy Coding.
CHAPTER 11 Error Control Coding: 11.1 Introduction. 11.2 Channel Coding. 11.3 Block Codes. 11.4 Linear Block Codes. 11.5 Cyclic Codes. 11.6 Convolutional Codes. 11.7 Decoding of Convolutional Codes.
APPENDIX A Fourier Transform. APPENDIX B Bessel Functions J_n(β). APPENDIX C The Complementary Error Function Q(z). INDEX.

CHAPTER 1

SIGNALS AND SPECTRA

1.1 INTRODUCTION

In this chapter we review the basics of signals in the frequency domain. The frequency-domain representation of a signal is called its spectrum. The spectral analysis of signals using Fourier series and Fourier transforms is one of the fundamental methods of communication engineering.
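The chapter develops these spectra analytically. As a quick numerical companion (an illustrative sketch, not part of the original text; the sampling rate, record length, test frequency, and detection threshold are all assumed values), the short Python program below estimates the amplitude spectrum of a sinusoid with the FFT and locates its spectral lines:

```python
import numpy as np

# Estimate the amplitude spectrum of x(t) = cos(2*pi*10*t) from 1 s of samples.
# A single real tone should show two lines, at +10 Hz and -10 Hz, each of
# height 1/2: the discrete (line) spectrum discussed in Sec. 1.2.
fs = 1000.0                                   # sampling rate, Hz (assumed)
t = np.arange(0.0, 1.0, 1.0 / fs)
x = np.cos(2 * np.pi * 10.0 * t)

X = np.fft.fftshift(np.fft.fft(x)) / len(x)   # scaled DFT coefficients
f = np.fft.fftshift(np.fft.fftfreq(len(x), 1.0 / fs))

print(f[np.abs(X) > 0.25])                    # -> [-10.  10.]
```

The conjugate pair of lines of height 1/2 matches the even amplitude spectrum of a real signal described in Sec. 1.2.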
1.2 FOURIER SERIES AND DISCRETE SPECTRA

A. Complex Exponential Fourier Series:

Let x(t) be a periodic signal with fundamental period T_0. Then we define the complex exponential Fourier series of x(t) as

x(t) = Σ_{n=-∞}^{∞} c_n e^{jnω_0 t}    ω_0 = 2π/T_0   (1.1)

where

c_n = (1/T_0) ∫_{t_0}^{t_0+T_0} x(t) e^{-jnω_0 t} dt   (1.2)

for arbitrary t_0. Setting t_0 = -T_0/2, we have

c_n = (1/T_0) ∫_{-T_0/2}^{T_0/2} x(t) e^{-jnω_0 t} dt   (1.3)

The c_n are called the Fourier coefficients of x(t). These are, in general, complex numbers and can be expressed as

c_n = |c_n| e^{jθ_n}   (1.4)

where |c_n| is the amplitude and θ_n is the phase angle of c_n.

B. Frequency Spectra:

A plot of |c_n| versus the angular frequency ω = 2πf is called the amplitude spectrum of the periodic signal x(t). A plot of θ_n versus ω is called the phase spectrum of x(t). These are referred to as frequency spectra of x(t). Since the index n assumes only integers, the frequency spectra of a periodic signal exist only at the discrete frequencies nω_0. These are therefore referred to as discrete frequency spectra or line spectra. If the periodic signal x(t) is a real function of time, then

c_{-n} = c_n* = |c_n| e^{-jθ_n}   (1.5)

This means that, for a real periodic signal, the positive and negative coefficients are conjugate, that is,

|c_{-n}| = |c_n|    θ_{-n} = -θ_n   (1.6)

Hence, the amplitude spectrum is an even function of ω, and the phase spectrum is an odd function of ω.

C. Power Content of a Periodic Signal and Parseval's Theorem:

The power content of a periodic signal x(t) is defined as the mean square value over a period:

P = (1/T_0) ∫_{-T_0/2}^{T_0/2} |x(t)|² dt   (1.7)

Parseval's theorem for the Fourier series states that if x(t) is a periodic signal with period T_0, then

(1/T_0) ∫_{-T_0/2}^{T_0/2} |x(t)|² dt = Σ_{n=-∞}^{∞} |c_n|²   (1.8)

1.3 FOURIER TRANSFORMS AND CONTINUOUS SPECTRA

To generalize the Fourier series representation (1.1) to a representation valid for nonperiodic signals in the frequency domain, we introduce the Fourier transform.

A. Definition:

Let x(t) be a nonperiodic signal. Then the Fourier transform of x(t), symbolized by F, is defined by

X(ω) = F[x(t)] = ∫_{-∞}^{∞} x(t) e^{-jωt} dt   (1.9)

The inverse Fourier transform of X(ω), symbolized by F⁻¹, is defined by

x(t) = F⁻¹[X(ω)] = (1/2π) ∫_{-∞}^{∞} X(ω) e^{jωt} dω   (1.10)

Equations (1.9) and (1.10) are often called the Fourier transform pair, denoted by

x(t) ↔ X(ω)

B. Frequency Spectra:

In general, the Fourier transform X(ω) is a complex function of angular frequency ω, so that we may express it in the form

X(ω) = |X(ω)| e^{jθ(ω)}   (1.11)

where |X(ω)| is called the continuous amplitude spectrum of x(t), and θ(ω) is called the continuous phase spectrum of x(t). Here, the spectrum is referred to as a continuous spectrum because both the amplitude and phase of X(ω) are functions of continuous frequency ω. If x(t) is a real function of time, we have

X(-ω) = X*(ω)   (1.12)

or

|X(-ω)| = |X(ω)|    θ(-ω) = -θ(ω)   (1.13)

Thus, just as for the complex Fourier series, the amplitude spectrum |X(ω)| is an even function of ω, and the phase spectrum θ(ω) is an odd function of ω.

C. Energy Content of a Signal and Parseval's Theorem:

The normalized energy content E of a signal x(t) is defined as

E = ∫_{-∞}^{∞} |x(t)|² dt   (1.14)

If E is finite (E < ∞), then we call x(t) an energy signal. If E = ∞, then we define the normalized average power P by

P = lim_{T→∞} (1/T) ∫_{-T/2}^{T/2} |x(t)|² dt   (1.15)

If P is finite (P < ∞), then x(t) is referred to as a power signal. Note that a periodic signal is a power signal if its energy content per period is finite. Parseval's theorem for the Fourier transform states that if x(t) is an energy signal, then

∫_{-∞}^{∞} |x(t)|² dt = (1/2π) ∫_{-∞}^{∞} |X(ω)|² dω   (1.16)

1.4 PROPERTIES OF FOURIER TRANSFORM

1. Linearity (Superposition):

a_1 x_1(t) + a_2 x_2(t) ↔ a_1 X_1(ω) + a_2 X_2(ω)   (1.17)

2.
Time Shifting: x(E= Ig) + X(opeF" (L18) Equation (1.18) shows that the effect of a shift in the time domain is simply to add a linear term —ato to the original phase spectrum 60). 3. Frequency Shifting: xine Xo a9) (119) 4. Seating: Lipo , san (2) (1.20) Equation (1.20) implies that time compression of a signal (a > 1) results in its spectral expansion and that time expansion of the signal (a <1) results in its spectral compression 5. Time-Reversal: a1) + XC@) (21) 4 SIGNALS AND SPECTRA ICHAP.1 6. Duality: X(t) + 2ax(-w) (1.22) 7. Differentiation: Time differentiation: rO= Sx ~joX@) (1.23) Equation (/.23) shows that the effect of differentiation in the time domain is the multiplication of X() Dy jo in the frequency domain Frequency differentiation: "= 4 y Ginx + X"@) = 7 Xo) (1.24) 8. Integration: If X(0) = 0, then d 1 x(a) dt + Xo) 1.25 fio ro (1.23) Equation (/.25) shows that the effect of integration in the time domain is the division of X(«) by join the frequency domain, assuming that X(0) = 0. Note that by definition (1.9) x= [7 stndt (1.26) The more general case pertaining to X(0) # 0 is considered in See. 1.5 (Eq. (1.42)} 9% Convolution: The convolution of two signals x;(1) and x,(1), denoted by x,(1) * x2(0), is a new signal x(#) defined by x(1) = (0) # x30) = J ayaa de (27 Then x() (0 > X(O)XA(W) (1.28) Equation (/.28) is referred to as the time convolution theorem, and it states that the convolution in the time domain becomes multiplication in the frequency domain (Prob. 1.18), Note that the operation of convolution is commutative, that is, 24(1) * x2 = x29 * x1() 10, Muhiplication: HOnO EXO) * XA) (129) Equation (1.29) is often referred to as the frequency convolution theorem. Thus, the multiplication in the time domain becomes convolution in the frequency domain. CHAP. 1]. SIGNALS AND SPECTRA 5 1.5 FOURIER TRANSFORMS OF POWER SIGNALS To find the Fourier transform of a periodic signal or a power signal, we need to introduce the unit impulse function. A. Impulse Function: The unit impulse function, also known as the Dirac delta function, 6(¢), is not an ordinary function and is defined in terms of the following process: fe HOS dt = G0) (130) where (2) is any regular function continuous at = 0. Equation (/.30) and all subsequent expressions will also apply to the frequency-domain impulse 5(«) by replacing ¢ by o. ‘Note that Eq, (/.30) is a symbolic expression and should not be considered an ordinary Riemann integral. In this sense, 6(0) is often called a generalized function and (2) is known as a testing function. AA different class of testing function will define a different generalized function, Similarly, the delayed delta function 5(¢ — t9) is defined by Jf e000 ar = 600 ast) where $(i) is any regular function continuous at 1 = to Some additional properties of 6(¢) are X(D4(~ fy) = x(t) 0) (1.32) if x(0) is continuous at t= to (DIE) = (OSH, 0.33) if x(4) is continuous at t = 0 oat) qo a#0 (34) 5-0 = 5) (1.35) (0 * 60-9) = X(t) (1.36) x(t) * (0) = x() (137) Note that Eq. (J.35) can be obtained by setting a= ~1 in Eq. (1.34). Equations (1.33) and (1.37) are the special cases of Eqs. (1.32) and (1.36), respectively, for to = 0. An alternative definition of 6(2) is provided by the following two conditions: f 8- W)dt= N de — ay =28/T (a7) Comparing Fig. 1-4 with Fig. 1-3(a), we see that »(0) = x(1 ~ a). Changing ¢to ¢ ~ avin Bq. (J.J) we have x= 0) = SF eer =F eget! 
‘Thus, we get sin noe 4, = G6 em uss) From Eq, (/.48) we see that the magnitude spectrum of a signal that is shifted in time remains the same land the effect of a time shift on signal is to introduce a phase shift in its phase spectrum that is a linear function of a. Consider the signal (0) = sin pe Find the complex Fourier series of x(#) and plot its frequency spectra. We could use Eq, (1.3) to compute the Fourier coefficients, but for this case it is simpler to use Eules's formula and identify by inspection the Fourier coefficients. Now we can express x(t) a8 CHAP. 1] SIGNALS AND SPECTRA 9 1 a a ed xO = sin ogt = E (ee) me Lem Le \ Thus, Joh JP? 6, = 0 for né +1 or 4 z att) = snot 7 A 2 Oy Fig. 1-5 sin opt and its spectrum 15. Find the complex Fourier series for the signal Mf) = cos wyt + sin? yt Again using Euler's formulas, we have x0 = Hee emt [ye =e oT 1 row 1 tet 24¢ =} jenn 2+ Tihs, we find that e) = hey 1.6. If xs(0) and x2(¢) are periodic signals with period 7 and their complex Fourier series expressions are show that the signal x(t) = x(f)x2(0) is periodic with the same period T and can be expressed as x= LY cer where cy is given by 10 SIGNALS AND SPECTRA [CHAP. 1 n= > deen (1.49) att D= (tt Dat + D = xia ‘Thus, x(0) is periodic with period 7. Let x0= Fee ay = 2E Then ( s eto rom 72 1pm HEM dt = Af a lneQem ae 12 2 1 ! “ahr 5 af) sare va] $ aoe ie 1.7. Let x4(1) and x2(2) be the two periodic signals of Prob. 1.6. Show that 1 pra : FraMOnOdt= ¥ hen «0 Equation (/.50) is known as Parseval’s formula From Prob. 1.6 and Eq. (/.49) we have Ler mt 66 = Ff py OO d= Dee Setting n = 0 in the above expression, we obtain 1 pte 2 3 pariOncodt= ¥ deer = Yeon FOURIER TRANSFORMS AND CONTINUOUS SPECTRA 18. Consider the signal (Fig. 1-6(@)] x)= e™ult) a>O (51) where u(t) is the unit step function defined by Lf) 0 ‘ wo= fy 125 2) Find the Fourier transform of x (#) and sketch its magnitude and phase spectra. From Eq. (J.9), we have * avon gy =! 0 atjo ia The amplitude and phase spectra of x(1) are CHAP. 1] SIGNALS AND SPECTRA a [Xo } IX@)I Va + of? which are sketched in Fig. 1-6(6) and (c). 1.9. Find the Fourier transform of the signal (Fig. 1-7(a)] Signal x(#) can be rewritten as sonetafS 129 en (o) = : Me™ dt ime vat HOt dp Bs nope [eens [ - _ = flea tea Hence, we get Fig. 17 R SIGNALS AND SPECTRA (CHAP. 1 ol ., 24 oat 195 ere a ‘The Fourier transform X (o) of x() is shown in Fig. 1-708) 1.10, Find the Fourier transform of the rectangular pulse signal x(0) (Fig. 1-8(a)] defined by _ fl Wea wo= r0={) Id>a@ see Mo) =| pale" at = =u «s7) ‘The Fourier transform of p(!) is shown in Fig. 1-8(6) aw 1 “0a 7 7 @ Fig. 18 LAL. Applying the inverse Fourier transform (Eq. (/.70)] to Eq. (1.57) with 1 = 0, show that * sin aw n a>0 5 im 1a dy {% a (158) Substituting Eq, (1.57) into Eg, (1.10) we have = Bef Beta Lf” SOA From Eq, (1.56) or Fig. 1-8(@), we get [_B2w-mm=2 020 When a < 0, we obtain [22 aemo=- aso since sin(~6) = ~sin 8, 1.12, Given Xo) (eo) find x(0). From Eg, (/.10) and using Euler's identity, we have enn 0 Ef edn EP Le ra andy ) oJ aja“ 8 J yo CHAP. 1 SIGNALS AND SPECTRA 13 Since cos w/a is an odd function of c, the first integral of the third term in the preceding expression is zero. ‘Then by Eq. (/.58) we obtain ‘ ct) 3H ut) (1.59) where sgn (t) (sign function or signwm function) is defined by (Fig. 1-9) wn =f, 22 «aa0) From the preceding result we have the Fourier transform pair: sen 5 61) Fig. 
1-9 Signum function PROPERTIES OF FOURIER TRANSFORM 1.13, 14, Verify the duality property (1.22), that is, XO) 2nx(-w) From the inverse Fourier transform definition (J.10), we have Fi stentao = 209 Changing 1 to ~1, we obtain [xoemtin-20e0 Now interchanging ¢ and o, we get f Ne dt = Iex(-w) Since FXO} = f (peat wwe conclude that XQ) + 2nx(-o) Find the Fourier transform of the signal (Fig. 1-10(a)} =e (162) From the result of Prob. 1.10, we have 4 SIGNALS AND SPECTRA F ipso] = 2sin aw ‘Now from the duality property of the Fourier transform (J.22), we have [CHAP. 1 2 [sin at] = 274-0) sina] _ 1 gf? Thus, xo) = ¥ [2] = 59 [Fain at] = 2 (163) where 72(0) is defined by [see Eq. (1.56) and Fig. 1-10(6)] aft lol o sin at" then x) * S ()* x) ifa>o, From Prob. 1.14, we have CHAP. 1] SIGNALS AND SPECTRA 7 sinat yu fl lol we Hence, x1 BE x) ifa>o, 1.22, If x X(0) and x20) + Xo) show that f. x(oxiodt = 2 f. X(oXy(-odeo (70) From the frequency convolution theorem (J.29), we have Flx(oriol= Ef VXxo~ Aa That is, 1 Plmonoema=ZP xaxe-na Setting w= 0, we get 1 ; , fl ntontoa = 5 [7 ncaa By changing the dummy variable of integration, we get f “Om(0dt = Ff Merced 1.23. Prove Parseval’s theorem Eq, (/.16) for a real x(0). If x(0 is real, then from Eq, (1.65) of Prob. 1.17 we have Xe) = Xo) (in Eq, (1.70) of Prob. 1.22, we have Then, setting (1) = x2(0) * pxcobdr= [7 txorar= 3 [" x@x-ordo e EI ine Ff MOr ero =F" Mora FOURIER TRANSFORMS OF POWER SIGNALS 1.24, Verify property (1.32) (2)5(t~ fo) = x(40)5(0~ fo) ‘The proof will be based on the following equivalence property: 18 SIGNALS AND SPECTRA (CHAP. 1 Let ¢,(0) and g,() be generalized functions. Then the equivalence property states that g(t) = gu(0) if and only if Psa [" scoona a7) for all test functions 4(0), If x(0) is continuous at 1 = fo, then Jose -sordtode = [7 90~ orsto dod = soe) = xo [90 motodr= fF Iatose= mnétoat for all (2). Hence, by the equivalence property (J.7/), we conclude that (B(0= 19) = xC)6E= 49) 1.25. Verify Eq, (1.36), that is, (1) # H(t fo) = x(t f0) According to the commutative property of convolution and the definition (1.31) of a(0), we have (0) # (t= fp) = 5(t= ) #10) = f. 3er— te)x(t— ede = x(¢~ Dleas, = XC) 1.26, Find the complex Fourier series of the unit impulse train 5;(#) shown in Fig. 1-12(a) and defined by 64) = Y 6e-nt) (72) = a ee From Eq. (J.3), the coefficients cy are 107? 5 inemntar a1 snerma oa 5), eroermdr = EI aoe Ie Thus, sxin= ¥ hehe 173) 1.27. Find the Fourier transform of the following signals: eb) =e (©) x) =e08 ant @ x (@) Applying the frequency-shifting property (1.19) to Eq, (7.41), we get eb os 2nd 0) my CHAP. 1} SIGNALS AND SPECTRA 19 ay 0 ay doy @ ) Fig. 1-12 Unit impulse train and its magnitude spectrum (®) From Eq, (1.74) it follows that oP oe 2500 +) (75) (© From Buler’s formula we have 08 ont = Ker" “uty ‘Thus, using Eqs, (.74) and (J.75) and the linear property (1.17), we get £08 aigt 4 W5(w = 0p) + RBL+ 2) (76) 1.28. Find the Fourier transform of a periodic signal x(¢) with period 7. We express x() as Taking the Fourier transform of both sides, and wsing Eq. (1.74), we obtain Kw) = 2n F e,6(0-n09) wm Note thatthe Fourier transform ofa period signal consists ofa sequence of equidistant impulses located at the harmonic frequencies of the signa 1.29, Find the Fourier transform of the periodic train of unit impulses 57() (Fig. 1-13(a)] 50 = ¥ su-nD) From Eq, (J.73) of Prob. 1.26, the complex Fourier series of (0) is given by cl Qn 510= 5 berm oy a Using Eq. (1.77) of Prob. 
1.28, F101 =F Ho-noy) = 06 Y H(O-M09) = oy.60) oS o~n0») 78) 20 SIGNALS AND SPECTRA (CHAP. 1 3,t0) 8. (0) —T 0 1 oF 1° wg 0 wy Dey ° @ ry Fig. 113 Unit impulse train and its Fourier transform ‘Thus, the Fourier transform of a unit impulse train is also a similar impulse train [see Fig. 1-13(6)]. 1.30. Find the Fourier transform of the unit step function u(). ‘As shown in Fig. 1-14, u() can be expressed as 1 1 m= 545800 Note that J is the even component of u(t) and {sgn isthe odd component of u(). Thus, F (WO) =§ FU +4F lsen(0) which, using Eqs. (142) and (1.61), becomes # tu) = n5(0) +1 (179) jo Sent) Pay ' pam 1 3 1 = + a ° 7 ° 7 0 : Fig. 1-14 Unit step function and its even and odd components 1.31. Show that xO) * i) = f x(e)de and find its Fourier transform. By definition (1.27) of the convolution, x(e)de xox= aos since ui-9 =f) St Next, by the time convolution theorem (J.28) and Eq. (J.79), we obtain Ff sod] = cose +2] = AXIO) +7 MO) = RAOHLO) + Mo) CHAP. 1] SIGNALS AND SPECTRA 2 1.32, 1.33. 134, 1.38. since X(o)5(@) = X(0)5() by Eq. (1.33) [see Eq. (1.42)] Use the frequency convolution theorem (/.29) to derive the modulation theorem (1.64). (See Prob. 1.15.) From Eq, (1.76), we have coset + 5(w— 09) + 25(0 + 09) By the frequency convolution theorem (/.29), 1 1 x(Q)e08 gf = 3 Mw) + [xB(O~ cy) + 25(0 + 09)] = 3 KUO ~ Op) +5 XO + 0) The last equality follows from Eg. (1.36). Let x(1) be a periodic signal with fundamental period To and let x(2) be the segment of x(0) between —To/2 <1 0). Ans, 1a + jo)? Prove the frequency convolution theorem (7.29), that is 1 12200 > 7 Kile) + Xu) Hint: Apply duality property (7.22) to the time convolution theorem (J.28) Show that if x) + Xa) a"x(0) en 2%) = Gay" MO) th =a > Gay'X@) Hint: Repeat time-differentiation theorem (I.23). CHAP. 1] SIGNALS AND SPECTRA 144, 145. Find the Fourier transform of sin ox. Ans, —jnd{o — 0%) + fr8(w + 0) Find the Fourier transform of 8(¢ ~ ta) 2B SIGNAL TRANSMISSION AND FILTERING DUCTION transmission is a process whereby a message (or information-bearing) signal is transmitted munication channel. Signal filtering purposefully alters the spectral content of the signal so transmission and reception can be achieved. Many communication channels, as well as e modeled as a linear time-invariant system. In this chapter we review the basics of a linear int system in frequency domain. RESPONSE AND FREQUENCY RESPONSE is a mathematical model of a physical process that relates the input signal (ource or al) to the output signal (response signal). ind (1) be the input and output signals, respectively, of a system, Then the system is apping of x() into y(1). Symbolically, this is expressed as HO = FIO] ut) operator that produces output (¢) from input x(1), as illustrated in Fig. 2-1. satisfies the following two conditions, then the system is called a linear system. Fix) + x20) = FL Ol+ Flex] = WO + yx) i) nals x4(t) and xs(0) F lax(t)] = aF [x(1)] = ay() 23) nals x(¢) and scalar a. ) is the additivity property and condition (2.3) is the homogeneity property. 4 CHAP. 2} SIGNAL TRANSMISSION AND FILTERING 25 Aw, ‘System yw) Fig. 24 Operator representation of a system If the system satisfies the following condition, then the system is called a time-invariant or fixed system: Txt t0)] = YE t0) en where fy is any real constant. Equation (2.4) indicates that the delayed input gives delayed output, If the system is linear and time-invariant, then the system is called a linear time-invariant (LT1) system, B. 
Impulse Response: ‘The impulse response h(t) of an LTI system is defined to be the response of the system when the input is 6(2, that is, WM) = F160) ex ‘The function A) is arbitrary, and it need not be zero for ¢ < 0. If Mi) =0 fort<0 eo then the system is called causal. C. Response to an Arbitrary Input: ‘The response y() of an LTI system to an arbitrary input x(¢) can be expressed as the convolution of x(0) and the impulse response h(t) of the system, that is, HO = (0 # AC) = j Ax()A(E = Dd an Since the convolution is commutative, we also can express the output as yo = Ma) (= [boxer de (28) D. Frequency Response: Applying the time convolution theorem of the Fourier transform (J.28) to Eq. (2.7), we obtain Y(@) = XH) 29) where X(o) = F[x(), Yo) = FO), and Ho) = FO). And H(o) is referred to as the frequency response (or transfer function) of the system. Thus Ho) = Fh) = 3 (2.10) The relationships represented by Eqs. (2.5), (2.7), and (2.9) are depicted in Fig. 2-2. By taking the inverse Fourier transform of Eq. (2.9), the output becomes 1 jor, yO = 52 | KoMoye™do ip ‘Thus, we see that cither the impulse response A(é) or the frequency response H() completely char- acterizes the LTI system. 26 SIGNAL TRANSMISSION AND FILTERING Icnap. 2 1 Hw ! | a) TM x LS yey a xknne te) | | Xw) Yow) = XwIHw) Fig. 2-2. Relationships between inputs and outputs in an LTI system 2.3. FILTER CHARACTERISTICS OF LTI SYSTEMS The frequency response H(@) is a characteristic property of an LTI system. It is, in general, a complex quantity, that is, Ho) = |Hole™ (2.12) In the case of an LTI system with a real-valued impulse response h(t), H(c) exhibits conjugate symmetry (Eq. (/.65)], that is, H(-o) = H"(o) (2.13) which means that 1H(-a)| = 1H(@)|_—_ (cw) = -0,(@) 1h That is, the amplitude |(«)) is an even function of frequency, whereas the phase 6,(w) is an odd function of frequency. Let Yo) =lYole* xo) =1X@le™ Rewriting Eq. (2.9), we have [role = [Xe Hoyle = X@MLA(@yle"07H ep Thus, we have 1Y(@)| = X@onllH@)| (2.16) 6,(@) = 0,(0) + (0) ain Note that the amplitude spectrum of the output signal is given by the product of the amplitude spectrum of the input signal and the amplitude of the frequency response. The phase spectrum of the output is given by the sum of the phase spectrum of the input and the phase of the frequency response. Therefore, an LTI system acts as a filter on the input signal. Here the word filter is used to denote a system that exhibits some sort of frequency-selective behavior. “24 ‘TRANSMISSION OF SIGNALS THROUGH LTI SYSTEMS A. Distortionless Transmission: For distortionless transmission through a system, we require that the exact input signal shape be reproduced at the output. Therefore, if x(¢) is the.input signal, the required output is WO) = Kx(t~ 1g) (2.18) where 1 is the time delay and K is a gain constant. This is illustrated in Fig. 2-3(a) and (6). Taking the Fourier transform of both sides of Eq. (2.18), we get CHAP. 2] SIGNAL TRANSMISSION AND FILTERING 2 0 re 04 : ° > Input signal (0) 1H) versus xo al) 0% ath 7 o > ©) Ovipot signal C “oes Slope = -te G4) 046) versus Fig. 2-3 Distortionless transmission ¥(o) = Ke"*X(o) (219) From Eq. (2.9), we see that for distortionless transmission the system must have Ho) = |H(o)le* = Ker (2.20) That is, the amplitude of H(~) must be constant over the entire frequency range, and the phase of H(o) must be linear with frequency. This is illustrated in Fig. 2-3(c) and (@). 
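Before turning to the types of distortion, Eqs. (2.18) to (2.20) can be checked numerically. The sketch below is illustrative only (the gain K = 2, delay t_d = 0.1 s, sampling rate, and two-tone test signal are assumed values); it applies the distortionless frequency response H(ω) = Ke^{-jωt_d} through the FFT, per Eq. (2.9), and confirms that the output is a scaled, delayed copy of the input:

```python
import numpy as np

# A system with constant gain and linear phase, H(w) = K exp(-j w td) per
# Eq. (2.20), should reproduce the input as y(t) = K x(t - td), Eq. (2.18).
fs = 1000.0                                  # sampling rate, Hz (assumed)
t = np.arange(0.0, 1.0, 1.0 / fs)
x = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 13 * t)

K, td = 2.0, 0.1                             # channel gain and delay (assumed)
w = 2 * np.pi * np.fft.fftfreq(len(t), 1.0 / fs)
H = K * np.exp(-1j * w * td)                 # Eq. (2.20)

y = np.fft.ifft(np.fft.fft(x) * H).real     # Eq. (2.9): Y(w) = X(w)H(w)
n = int(td * fs)
print(np.max(np.abs(y[n:] - K * x[:-n])))   # ~0, so y(t) = 2 x(t - 0.1)
```

Because the test tones complete whole cycles within the record, the FFT's circular convolution coincides with true convolution here; for arbitrary signals one would zero-pad first.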
B, Amplitude Distortion and Phase Distortion: / When the amplitude spectrum |H()| of thé system is not constant within the frequency band of interest, the frequency components of the input signal are transmitted with different amounts of gain or attenuation. This effect is called amplitude distortion. ‘When the phase spectrum 6,(«) of the system is not linear with frequency, the output signal ‘has a different waveform from the input signal because of different delays in passing through the system for different frequency components of the input signal. This form of distortion is called phase distortion. 2.5 FILTERS A. Ideal Filters: By definition, an ideal filter has the characteristics of distortionless transmission over one or more specified frequency band and has zero response at all other frequencies. ‘An ideal bandpass filter (BPF) is defined by _ fem fore, < lol 0,0, = 02. An ideal bandstop filter (BSF) or notch filter is defined by _fo @, 0 , HO) { meen (2.23) Since e*/*? = +f, H(w) can be rewritten as H(@) = ~jsen () (2.2 ‘The corresponding impulse response /i(#) can be obtained as (Prob. 2.15) mot (225) a Hilbert Transform: Let a signal x(?) be the input to a quadrature filter (Fig. 2-6). Then by Eq. (2.6) the output WO = x( * ho) = x(t) * (1x2) will be defined as the Hilbert transform of x(¢), denoted by %(1). Thus, s=xgetat{” © me k (2.26) The Fourier transform of £(¢) is given by X(@) = H@)X(@) = [=i sgn (A) (2.27) Phase shifter, x0, waft EO) Fig. 2-6 —n/2 rad phase shifter Solved Problems IMPULSE RESPONSE AND FREQUENCY RESPONSE 2.1. For each of the following systems, determine whether the system is linear. (FIO = x9 €08 wt (6) F{x(0] = [4 + x(0)] cos wt @ FUxs( +2201 = Lo + m2(0} 008 ovet = (00s act + x10 608 @.t FUx(Ol+ FLO] Fax] = [ax(O]o08 w= a CO) Hence, the system represented by (a) is linear. 30 22, 23. SIGNAL TRANSMISSION AND FILTERING [CHAP. 2 © FU + x9] = [A+ (+ a2(Oe0s ot FF MOI+ TIO] A + xy(D]CO8 Oct +A + xx(0)}C08 at = A+ HC) + y(0]e08 ot Thus, the system represented by (6) is not linear. The system also does not satisy the homogeneity condition (2.3). Note that the system represented by (a) is called the balanced modulator for DSB (double-sideband) signals in amplitude modulation (see Sec. 33), and the system represented by (b)is the generator for ordinary ‘AM (amplitude modulation) signals (see Sec. 3.4). Consider a system with input x(/) and output (0) given by VW) = x5) =x) Y 6t= nT) (a) Is this system linear? (6) Is this system time-invariant? (@_ Let x(0) = xi(0) + x9(0). Then H0) = I + HAMAD = BAO + (D510 2H +9260 Similarly, let x(2) = xi). Then WO = fea (N50) = aly (057(0] = a94(0 ‘Thus, the system is linear. © sto =e) Then no= ¥ moryse-n7) =F coEat)secarre Fara) =f) 6) [Next let the input be Then vo= > sin (En) 50-n7) = o4»('-) T Thus, the system is not time-invariant. Note that this system is known as the ideal sampler (see Sec. 5.4) Derive Eq. (2.7), that is, HO) = (9 # HC) -f x(a) h(t 1) de where y(#) and x(#) are the output and input, respectively, of an LTI system whose impulse response is h(). If the system is time-invariant, then from Eq. (2.4) we have F18~ N= ht) Now, from definition (1.31) of (1 ~ to), we can express x(t) as CHAP. 2] SIGNAL TRANSMISSION AND FILTERING 31 24, 25. 2.6. x= x80 ae ‘Then from the linearity of the J operator, we obtain r= sis0o)= |” stor te—olee= [7 sone ae ‘The response of an LTI system to a unit step function u(t) is called the unit step response of the system and is denoted by a(1). 
Show that a(t) can be obtained as a= [heya (2.28) and if the system is causal, then a(t) f, A(t) dt (2.29) From Ea. (2), a) = Wo) # ue) = Meta) lice a es wwe have Hae For a causal system, since A(s) = 0 for ¢ <0, a = fe Io) de Let an LTI system with impulse response h() be represented by an operator 7. If F(x] = Ax (2.30) then 2.is called the eigenvalue of F and x(¢) is called the associated eigenfunction of J. Show that the frequency response H(o) = F[h()] is the eigenvalue of the LTI system and e! is the associated eigenfunction. Using Eq. (2.8), we have -[fLuoome} Thus, we see that H(c) is the eigenvalue of the LTT system and is the associated eigenfunction, sie) = [7 ore ae ~ 23D a = Hoel" Consider the RC network shown in Fig. 2-7 (a). Find the frequency response () and the impulse response f(!) of the RC network. Using the principle of voltage divider, we can obtain the frequency response H() by inspection: 32 SIGNAL TRANSMISSION AND FILTERING (CHAP. 2 __MGeQ a yt MO"FET [G00 ~joRCF1 Jose *~ RC ae Next, fom Prob. 1.8 we obtain 1 meena te lOy MO AO = Re at) (233) which is sketched in Fig. 2.7(6). mn b=] 4 (a) RC nerwork (0) impulse response of the RC network 0 : Fig. 2-7 2.7. Consider the simple RC circuit shown in Fig. 2-7(a). Find the unit step response a(). From Eq. (2.33) we have 1 = eno, WO = Re "att ‘Thus, by Eq, (2.29) the unit step response is, t= fm de = ge fe) = [lean ea 2.8. Redo Prob. 2.7 with frequency response and Fourier inversion technique. Now x() = u(t). Thus, by Eq. (1.79) ¥(o) = x50) + jo Next, by Eq. (2.32) « 1 joa *"RC Ho) Thus, by Eq. (2.9) we obtain xo) = xeon) = [so +7 ](% in + 1 = 15) + ig jora In the last step we have used the property (J.33) of 6 function and the partial fraction expansion technique. ‘Taking the inverse Fourier transform of Y(o), we obtain WO = a) = ult) et) = (Le ul) = 1/(RO) FILTER CHARACTERISTICS OF LINEAR SYSTEMS 2 2.9. Show that the RC network of Prob. 2.6 (a)] is a low-pass filter. Also find its 3-4B bandwidth Ws as. CHAP. 2} SIGNAL TRANSMISSION AND FILTERING 33 From Prob. 2.6, the frequency response H(o) is given by 1 1 Mo) = TiaRC~ T¥jo]on where wy = 1/(RC). Writing Ho) = \Manle we have and @\(@)=~tan-! 2 1+ (jon? & ‘The amplitude spectrum |H(a)| and phase spectrum 9(o) are plotted in Fig, 2-8. From Fig, 2-8 we see that the RC network of Fig. 2-1) is a low-pass filter When @ = a = 1/(RC), |H(o)| = 1/-V2. Thus, la@)| 1 Wren = = RE Hw) 1 elf =e 0 w= IRC) @ Fig. 28 2.10. The rise time t, of the low-pass RC filter of Fig. 2-7(a) is defined as the time required for a unit step response fo go from 10 to 90% of its final value. Show that 03s fas |(2nRC) is the 3-dB bandwidth (in hertz) of the filter. where fs an = Ws an/(2n) = From Eq, (2.34) of Prob. 2.7, the unit step response of the low-pass RC filter is found to be ay = (end hich is sketched in Fig. 2-9. By definition of the rise time, non WO 9,1 erHFO = 9.9 AO = 9,1 where an) = eho 9 Dividing the first equation by the second equation on the right-hand side, we obtain gOrWMAR = 9 34 SIGNAL TRANSMISSION AND FILTERING (CHAP. 2 2.197 _ 038 2mhaw Haw and t= - = RCIn9 = 219TRC= Fig. 29 2.11. Show that the RL network shown in Fig. 2-10(a) is a high-pass filter. In a manner similar to Prob. 2.6, we can obtain the frequency response H(«) by inspection: Ho) = 2 _JOL/R)___Kofen) LR RtjoL” T+ja(LJR) T+ jojo The magnitude of H(«)is plotted in Fig. 2-10(6). Itis seen that the RL network of Fig. 2-10(a) is a high-pass filter. 
235) tHe) x0 Fan) xo 0 ° (@) RL network (6) Magnitude of frequency response ofthe RE network Fig. 2.10 FILTERS 2.12. Find the impulse response /i(t) of the ideal LPF with cutoff frequency a. The frequency response of an ideal LPF with cutoff frequency a, is given by « for lal sa, be found by taking the inverse Fourier transform of Eq. (2.36), which yields CHAP. 2] SIGNAL TRANSMISSION AND FILTERING 35 sin (t= ta) n= t) ‘The impulse response /.pp(0) is shown in Fig. 2-11(b). Note that hy pe(t) # 0 For ¢ <0. Thus, the ideal LPF is ‘not a causal system, Iypr() = (237) reo af) @) ) Fig. 2-11 Frequency response and impulse response of an ideal LPF 2.13. Consider the system shown in Fig. 2-12(a). Find the impulse response /() and its frequency response (0). From Fig. 2-12(a) we can write the relationship between input x(0) and output y(0) as MO = xQ—XE-T) ‘Thus, by definition (2.5) we got A(t) = H(t) — 5(t~ T) 5 ; NW. 36 SIGNAL TRANSMISSION AND FILTERING [CHAP. 2 Using Eq. (7.40) and property (18) we obtain phoren/2 PT = HTT Ho) ‘The magnitude of H(w) is shown in Fig. 2-12(6). The system is known as comb filter. 2.14, A multipath transmission occurs when a transmitted signal arrives at the receiver by two or more paths of different delays. A simple model for a multipath communication channel is illustrated in Fig. 2-13(a) (@)_ Find the frequency system function H(o) for this channel and plot |H(«)| for « = 1 and 0.5. () To compensate for the channel-induced distortion, an equalization filter is often utilized. Ideally, the frequency system function of the equalization filter should be 1 Hao) = = HG A tapped delay-line ot transversal filter, as shown in Fig. 2-13(b), is commonly used to ap- proximate this equalization filter. Find the values for ay, a2,...,ay, assumingt = Tanda< 1. a” A wo) Delay r ext 7) (a) Model for multipath transmission Delay Dey} Delay T T xo . © 0 (8) Tapped delaytine iter Fig. 2413, @ 3) = x() Faxte—9) ‘Taking the Fourier transform of both sides, we have Yo) = Xo) +2€*" Xo) = (1 +e *)X(@) By Eq. (2.10), _ Xo) Mod = Foy CHAP. 2} SIGNAL TRANSMISSION AND FILTERING 7 Using Euler's identity for e*” gives Ho) = 1+ 200s0r— jxsinox Thus, MH(@)| = [0 + 2eoson)* + (asinoe? |!” = [0 +e + 2200800]! 2+ enon =n [(@)| = (1.25 + cose)" When a= |H@)| When « Amplitude spectra (M(@)) for «= 1 and «= 4 are plotted in Fig. 2-14. lw) a=05 (6) From Fig, 2-13(6), we have 20 = Y agte-G- DT ‘Taking the Fourier transform of both sides gives Z@)= Ye" Yo) ‘Thus, the frequency response H,,{) of the transversal filter is given by te) . “jody Heo ae Wo Fay Hay tage Ph aye hs payeODT 1 1 Xow Bae) Ho) 1+0e"7 Using wxePePdees tel wwe can express Hax(o) as mac 4 et ee 38 SIGNAL TRANSMISSION AND FILTERING ICHAP. 2 T and |a| < 1, we have a, =1, a) =a, as = 0°... ay = (a) QUADRATURE FILTERS AND HILBERT TRANSFORMS 215. 2.16. 247. 2.18. Verify Eq. (2.25), From Eq, (1.61) we have 2 sent) 7 Applying duality property (/.22), we obtain from which we get 1 seo) Show that a signal x(0) and its Hilbert transform (0) have the same amplitude spectrum, By Eq. (2.27) Since |~ j sgn{o)| = 1, we obtain |r) -j sen(oo||X(o)] = [X@)] Show that if #(0 is the Hilbert transform of x(1), then the Hilbert transform of £(1) is —x(1), that is, R= -20 (2.38) Let x) = Xo) Then by Eq. (227) AG) + Fo) = Ef sem X(o) and 40 + 2) = Ei sen@F X@) = -K) since [— j sen(o)}? = Alsea(a)}? = -1. Therefore, we conclude that %() = —x(0) Let x(#) be a real signal. 
Show that x(1) and its Hilbert transform £(#) are orthogonal, that is, f_xosma (239) Using Bq, (1.70), we have sosnd = [" xok-o)do FLxostna=$. [7 xertcor ae If (0 is real, then by Bq, (1.12) we have R(-eo) = [7 sgn(—co)|X(-e») = j sen(o)X* (co) CHAP. 2] SIGNAL TRANSMISSION AND FILTERING 39 ms, [Lantod-2f emeomtoxriod Ef senonatort ao since the integrand in the last integral is an odd function of 219. Let x(0) = cos wot. Find £0), From Eq, (/.76) H18(0 ~ 09) + to +05) Then Xo) ~jn|8(@~ co) + 5(o + on) 8@00) = PALBO~ 0) Ho + 09)] Thus, by the result of Prob. 1.44, we obtain sinogt Note that £¢ oos(ay'=$) = ino 2.20. Let x(1) = 5(0. (@ Find (0. (8) Use the result of (a) to confirm that (Prob. 2.15) =Jsen(o) (@) By definition (2.26) and Eq, (1.37) we obtain wyesneLat HO = He T= mt (®) From Eq, (1.40) 5 + Kw) = ‘Then, by Eq. (2.27) we have Xo) = =jsgn (@)X(w) = =i sen (@) Thus, we conclude that a @) tesa Supplementary Problems 2.21, Consider the system whose input-output relation is given by the linear equation xO= x) +b ‘where x(¢) and y(0) are input and output of the system, respectively, and a and b are constants. Is this system linear? Ans. No 222. 2.23, 224, 225, 2.26. 227. 2.28, SIGNAL TRANSMISSION AND FILTERING [CHAP. 2 Consider the system that is represented by FIX = XO where x*(4) is the complex conjugate of x(t). Is this system linear? Ans. No A system is called BIBO stable if every bounded input produces a bounded output. Show that the system is stable if its impulse response is absolutely integrable, that is, J_tncotae <= Hint: Take the absolute value of both sides of Eq. (2.8) and use the fact that [x(t — 2)| < K, Consider the simple RC circuit shown in Fig. 2-7(a). Find the output y(0) when the input x() = p.(t) (See Bq, (1.36). mm De + a) +o ult — a) Find the frequency response H(w) of the network shown in Fig. 2-15, and show that the network is a high- pass filter. Ans, Hw) = ~LCa*{( ~ LCo? + joRC) Fig. 2415 Find the frequency response H(«) of the network obtained by interchanging C and L in Fig. 2.15, and show that the network is a low-pass filter. Ans, Ho) = WW ~ Leo" + joRo) Determine the impulse response and the 3-dB bandwidth of the filter whose frequency response is Heo) = 10/(«? + 100). Ans. Wit)= $e", Wy ox = 6.44 radians per second (rad/s) A gaussian filter is a linear system whose frequency response is given by Ho) Calculate (a) the 3-4B bandwidth 13 ap and (8) the equivalent bandwidth Way defined by gf, CHAP. 2] SIGNAL TRANSMISSION AND FILTERING 4 229, 2.30, 2.32. 1. Woy mol LH(@| de> 09 0.886 Ans. (@) Ween = (0) Wg = @ ~ ® G A Butterworth low-pass filter has || = i+ @/oo" where 7 is the number of reactive components (ie., inductors or capacitors). (a) Show that as n—+ 00, |H(w)| approaches the characteristics of the ideal low-pass filter, shown in Fig. 2-I1(@) with a = © (b) Find m so that |H(c)/ is constant to within 1 dB over the frequency range of || 800. Ans, (a) Note that (ey {5 eee ) 2 If the unit impulse response of a causal LTI system contains no impulse at the origin, then show that with Hw) = A(o) + iB) A(o) and B(e) satisfy the following equations: 1p Ba) tJ.o-7 a AG) aJ.o-7 Ao) dh Blo) = These equations are known as the Hilbert transform pair. Hint: Let WO) = ht) + ho(®) and use the causality of (2) to show that h(t) = h(olsen(0), helt) = heolsen(0) Show that J [x(oF dt = - (eOP ar Hint: Use Eq. (2.27) and apply Parseval’s theorem (1.16) for the Fourier transform. Show that x) = 80 * ( Hint: Use Ba, (2.27) 42 SIGNAL TRANSMISSION AND FILTERING ICHAP. 
2 2.33, Let (a) x(0) = sin eof; (6) x(0) = mm C08 wet; (€) x(0) = mit) sin ext. Find £0. Hint: Use Eq. (2.27) Ans. (@) 8) = ~c0s gt; (0) 8) = m(_) sin ene (€) 8(D = ~m(0) cost AMPLITUDE MODULATION DUCTION ansmission of an information-bearing signal (or the message signal) over a bandpass ition channel, such as a telephone line or a satellite channel, usually requires a shift of the frequencies contained in the signal to another frequency range suitable for transmission. the signal frequency range is accomplished by modulation. Modulation is defined as the which some characteristic of a carrier signal is varied in accordance with a modulating the message signal is referred to as the modulating signal, and the result of modulation is as the modulated signal. sic types of analog modulation are continuous-wave (CW) modulation and pulse | In continuous-wave modulation, a sinusoidal signal A.cos(o,t +) is used as a carrier a general modulated carrier signal can be represented mathematically as KAD) = A(o08 [ot + $0] @, = Def BD ax {or fe = 2-/(2n)] is known as the carrier frequency. And A(?) and (0 are called the amplitude and phase angle of the carrier, respectively. When A(?) is linearly related to the mit), the result is amplitude modulation. If (0) ot its derivative is linearly related to have phase or frequency modulation. Collectively, phase and frequency modulation are angle modulation, which is discussed in Chap. 4. lation, a periodic train of short pulses act as the carrier signal )E MODULATION le modulation, the modulated carrier is represented by [setting $(#) = 0 in Eq. (3.1) generality), x) = A cosa.t G2, articr amplitude A(t) is linearly related to the message signal m(i). Amplitude sometimes referred to as linear modulation. Depending on the nature of the spectral ween m{z) and A(2), we have the following types of amplitude modulation schemes: (DSB) modulation, ordinary amplitude modulation (AM), single-sideband (SSB) Ind. vestigial-sideband (VSB) modulation, B 44 AMPLITUDE MODULATION [CHAP. 3 3.3 DOUBLE-SIDEBAND MODULATION DSB modulation results when A() is proportional to the message signal m(t), that is, Xpswlt) = m(s)c0s wt G3) where we assumed that the constant of proportionality is 1. Equation (3.3) indicates that DSB modulation is simply the multiplication of a carrier, cost, by the message signal m(q). By application of the modulation theorem, Eq, (1.64), the spectrum of a DSB signal is given by Xpsal) M(w ~ a1.) +4M(o +.) GA) A. Generation of DSB Signals: ‘The process of DSB modulation is illustrated in Fig. 3-1 (a). The time-domain waveforms are shown in Fig. 3-1 (b) and () for an assumed message signal. The frequency-domain representations of m(¢) and Xpsa() are shown in Fig. 3-1 (d) and (¢) for an assumed M(o) having bandwidth «yy. The spectra M(o-o,) and M(o + a,) are the message spectrum translated to @ =, and @ = —w,, respectively. The part of the spectrum that lies above © is called the upper sideband, and the part below a. is called the lower sideband. The spectral range occupied by the message signal is called the baseband, and thus the ‘message signal is often referred to as the baseband signal. As seen in Fig, 3-1(e), the spectrum of xpsp(t) has no identifiable carrier in it. Thus, this type of modulation is also known as double-sideband suppressed-carrier (DSB-SC) modulation. The carrier frequency «, is normally much higher than the bandwidth cy, of the message signal m(1); that is 0, > Wye. ti —_ |sideband is cons Ieee a Fig. 
3-1 Double-sideband modulation

B. Demodulation of DSB Signals:

Recovery of the message signal from the modulated signal is called demodulation, or detection. The message signal m(t) can be recovered from the modulated signal x_DSB(t) by multiplying x_DSB(t) by a local carrier and using a low-pass filter (LPF) on the product signal, as shown in Fig. 3-2 (see Prob. 3.1). The basic difficulty associated with DSB modulation is that for demodulation, the receiver must generate a local carrier that is in phase and frequency synchronism with the incoming carrier (Probs. 3.2 and 3.3). This type of demodulation is known as synchronous demodulation or coherent detection.

Fig. 3-2 Synchronous demodulator

3.4 ORDINARY AMPLITUDE MODULATION

An ordinary amplitude-modulated signal is generated by adding a large carrier signal to the DSB signal. The ordinary AM signal (or simply AM signal) has the form

x_AM(t) = m(t) cos ω_c t + A cos ω_c t = [A + m(t)] cos ω_c t   (3.5)

The spectrum of x_AM(t) is given by

X_AM(ω) = ½M(ω - ω_c) + ½M(ω + ω_c) + πA[δ(ω - ω_c) + δ(ω + ω_c)]   (3.6)

An example of an AM signal, in both the time domain and the frequency domain, is shown in Fig. 3-3.

Fig. 3-3 Amplitude modulation

A. Demodulation of AM Signals:

The advantage of AM over DSB modulation is that a very simple scheme, known as envelope detection, can be used for demodulation if sufficient carrier power is transmitted. In Eq. (3.5), if A is large enough, the envelope (amplitude) of the modulated waveform, given by A + m(t), will be proportional to m(t). Demodulation in this case simply reduces to the detection of the envelope of a modulated carrier with no dependence on the exact phase or frequency of the carrier. If A is not large enough, then the envelope of x_AM(t) is not always proportional to m(t), as illustrated in Fig. 3-4. Thus, the condition for demodulation of AM by an envelope detector is

A + m(t) > 0  for all t   (3.7)

or

A ≥ |min {m(t)}|   (3.8)

where min {m(t)} is the minimum value of m(t).

Fig. 3-4 AM signal and its envelope

B. Modulation Index:

The modulation index μ for AM is defined as

μ = |min {m(t)}| / A   (3.9)

From Eq. (3.8), the condition for demodulation of AM by an envelope detector can be expressed as

μ ≤ 1   (3.10)

When μ > 1, the carrier is said to be overmodulated, resulting in envelope distortion.

C. Envelope Detector:

Figure 3-5(a) shows the simplest form of an envelope detector, consisting of a diode and a resistor-capacitor combination. The operation of the envelope detector is as follows. During the positive half-cycle of the input signal, the diode is forward-biased and the capacitor C charges up rapidly to the peak value of the input signal. As the input signal falls below its maximum, the diode turns off. This is followed by a slow discharge of the capacitor through resistor R until the next positive half-cycle, when the input signal becomes greater than the capacitor voltage and the diode turns on again. The capacitor charges to the new peak value, and the process is repeated. For proper operation of the envelope detector, the discharge time constant RC must be chosen properly (see Prob. 3.6). In practice, satisfactory operation requires that 1/f_c ≪ RC ≪ 1/f_M, where f_M is the message signal bandwidth.

Fig. 3-5 Envelope detector for AM
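As a numerical companion to Secs. 3.4A to 3.4C (an illustrative sketch with assumed message, carrier, and sampling parameters; it is not the book's circuit), the program below builds an AM wave per Eq. (3.5) with μ = 0.5 and recovers the envelope A + m(t). The diode/RC detector of Fig. 3-5 is replaced here by the magnitude of the analytic signal, computed with scipy.signal.hilbert:

```python
import numpy as np
from scipy.signal import hilbert

# AM wave per Eq. (3.5): x(t) = [A + m(t)] cos(wc t), with mu = 0.5 <= 1 so
# that Eq. (3.10) holds and envelope detection is distortion-free.
fs = 100_000.0                                # sampling rate, Hz (assumed)
t = np.arange(0.0, 0.02, 1.0 / fs)
m = 0.5 * np.cos(2 * np.pi * 200.0 * t)       # message tone, 200 Hz (assumed)
A = 1.0                                       # carrier amplitude -> mu = 0.5
x_am = (A + m) * np.cos(2 * np.pi * 10_000.0 * t)   # carrier at 10 kHz

envelope = np.abs(hilbert(x_am))              # ideal envelope-detector stand-in
err = np.abs(envelope - (A + m))[200:-200]    # ignore record-edge effects
print(err.max())                              # small: envelope ~ A + m(t)
```

A physical diode/RC detector also adds ripple at the carrier rate, which is why the text requires 1/f_c ≪ RC ≪ 1/f_M; the analytic-signal shortcut used here sidesteps that ripple entirely.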
3.5 SINGLE-SIDEBAND MODULATION

Ordinary AM modulation and DSB modulation waste bandwidth because they both require a transmission bandwidth equal to twice the message bandwidth (see Figs. 3-1 and 3-3). Since either the upper sideband or the lower sideband contains the complete information of the message signal, only one sideband is necessary for information transmission. When only one sideband is transmitted, the modulation is referred to as single-sideband (SSB) modulation. Figure 3-6 illustrates the spectra of DSB and SSB signals. The benefit of SSB modulation is the reduced bandwidth requirement, but the principal disadvantages are the cost and complexity of its implementation.

Fig. 3-6 Spectra of DSB and SSB signals

A. Generation of SSB Signals:

1. Frequency Discrimination Method: The straightforward way to generate an SSB signal is to generate a DSB signal first and then suppress one of the sidebands by filtering. This is known as the frequency discrimination method, and the process is illustrated in Fig. 3-7. In practice, this method is not easy because the filter must have sharp cutoff characteristics.

Fig. 3-7 SSB generation using a bandpass filter

2. Phase-Shift Method: Another method for generating an SSB signal, known as the phase-shift method, is illustrated in Fig. 3-8. The box marked -π/2 is a π/2 phase shifter that delays the phase of every frequency component by π/2. An ideal phase shifter is almost impossible to implement exactly, but we can approximate it over a finite frequency band. If we let m̂(t) be the output of the -π/2 phase shifter due to the input m(t) (see Sec. 2.6), then from Fig. 3-8 the SSB signal x_SSB(t) can be represented by

x_SSB(t) = m(t) cos ω_c t ∓ m̂(t) sin ω_c t   (3.11)

The difference represents the upper-sideband SSB signal, and the sum represents the lower-sideband SSB signal. (See Prob. 3.8.)

Fig. 3-8 SSB generation using phase shifters

B. Demodulation of SSB Signals:

Demodulation of SSB signals can be achieved easily by using the coherent detector as used in DSB demodulation, that is, by multiplying x_SSB(t) by a local carrier and passing the resulting signal through a low-pass filter. This is illustrated in Fig. 3-9.

Fig. 3-9 Synchronous demodulation of SSB signals

3.6 VESTIGIAL-SIDEBAND MODULATION

Vestigial-sideband (VSB) modulation is a compromise between SSB and DSB modulations. In this modulation scheme, one sideband is passed almost completely, whereas just a trace, or vestige, of the other sideband is retained. The typical bandwidth required to transmit a VSB signal is about 1.25 times that of SSB. VSB is used for transmission of the video signal in commercial television broadcasting.

A. Generation of VSB Signals:

A VSB signal can be generated by passing a DSB signal through a sideband-shaping filter (or vestigial filter), as shown in Fig. 3-10(a). Figure 3-10(b) to (e) illustrates the spectrum of a VSB signal x_VSB(t) in relation to that of the message signal m(t), assuming that the lower sideband is transformed to a vestigial sideband.

Fig. 3-10 VSB modulation

B. Demodulation of VSB Signals:

For VSB signals, m(t) can be recovered by synchronous or coherent demodulation (see Fig.
3-11); this determines the requirements of the frequency response H(w). It can be shown that for distortionless recovery of mii), itis required that H(o+ 0.) +H{@—0,) = constant for lal Oy. (@)_ Show that if the detector output is to follow the envelope of xay4(0, it is required that at any time to AC. 1008 Opt) 1 ( Hsin Opto ) aon FeazOm RC T+ 1.608 «tf, (8) Show that if the detector output is to follow the envelope at all times, it is required that 20 (@ Figure 3-18 shows the envelope of xqy(0) and the output of the detector (the voltage across the capacitor of Fig. 3-5). Assume that the capacitor discharge from the peak value Ey = A(I-+ [HOOS yf) at fp = 0. Then the voltage v.(2) across the capacitor of Fig. 3-5 is given by ve) = Bye RO (3.22) The interval between two successive cartier peaks is 1/f,=2a/, and RC> 1/o,. This means that the time constant RC is much larger than the interval between two successive carrier peaks, Therefore, »,(0) can be approximated by v= B(1-Z5) ay Thus, if the »,(0) is to follow the envelope of xans(0) it is required that at any time 4 (ps0 atolt gp) 1+ por ont Ro i) e29 Now if 0. which is sketched in Fig, 3-20 (6). We see that X.(0) is an upper-sideband SSB signal In a similar manner, taking the Fourier transform of Eq. (3.27), we have 1 Alo) = 7 M(O~ WII —sgn(- 0) +5Ml@+ O91 +5890 +20] ap Since csanw-o,) =f2 &<% I-seno~= {5 9 See and ptay={2 C2 7% we have 0 lol > 0 X(0)=| M@-0) — o<0, (332) Mota) 0>~W, which is sketched in Fig. 3-20 (¢). We see that ~,(0) is a lower-sideband SSB signal, Mw) o Xho) © Fig. 3:20 60 AMPLITUDE MODULATION (CHAP. 3 3.9. Show that an SSB signal can be demodulated by the synchronous detector of Fig. 3-21 (a) by sketching the spectrum of the signal at each point and (6) by the time-domain expression of the signals at each point. Xssx(0) ao wo Fig. 321 Synchronous detector (@ Let M(o>, the spectrum of the message m(), be as shown in Fig. 3-22 (a) Also assume that x35y(t)is a lower-sideband SSB signal and its spectrum is Xgsp(0), as shown in Fig. 3-22 (b). Multiplication by 05 ct shifts the spectrum s5p(@) to + @, and We obtain D(a», the spectrum of ct) [Fig. 3-22 ()) After low-pass filtering, we obtain ¥(q) = $M(o), the spectrum of y(0) [Fig. 3-22 (d)]. Thus, we obtain 0 = Jn), which is proportional to mc (6) From Eq, (3.11), xs99(0) can be expressed as gga (0) = m(d) cos «nt F MAC) sinert Thus, (1) = Xsgu(1) C08 w-t = mit) cos*c~t F M(t) sin.«o,tcos wt Ynoyh + 008 2ent) (9 sin 20, (d+ 4m) c08 20. $0 sin 20,¢ Hence, after low-pass filtering, we obtain BEE) = nto) 3.10. Show that the system depicted in Fig. 3-23 can be used to demodulate an SSB signal. From Eq. (3.26) of Prob. 3.8, the upper-sideband SSB signal x.(0) is, Xe(0) = {008 t= HCD sin ot ‘Then from Prob.2.33, we have $,(0) = meDSiN 2,1 + MEN}COS C-t From Fig. 3.23, we have HO = xe.008 021-4 £(9 sin Ot = m(oy(costent + sin?) = m0) In a similar manner, the lower-sideband SSB signal x,(0) is X(t) = mi) 008 o,f +AA(siN gt and & 1) = m(D sineo.t—riKt)c08 ot Again, from Fig. 3-23 we obtain CHAP. 3] AMPLITUDE MODULATION 6 | Mow) sow 0 G (a) Xssolw) 2a, 0 a 2u ° © Yow) = {Mtw) 0 ° w Fig. 3-22 vo (A008 0.4 + &,(t)sin ot (coset + sin?e,t) = (0) Note that Fig, 3.23 is exactly the same as Fig. 3-8 except in the addition at the summing junction for both AU) 008 ant 20 HAO sin ot Fig. 323 Phase-shift SSB demodulator AMPLITUDE MODULATION [CHAP. 3 VESTIGIAL-SIDEBAND MODULATION 3. 32 Show that for distortionless demodulation of a VSB signal using the synchronous detector of Fig. 
3-11, the frequency response H(~) of the VSB filter of Fig. 3-10 (a) must satisfy Eq. (3.12), that is, Ho +0,)+H@-o,) =constant for lolfe 540 < fe<1600 fro f= 435 where both f- and fio are expressed in kilohertz. Thus, fro =fe+ 455 When f, = S40 kHz, we get fio = 995 kHz; and when f. = 1600 kHz, we get fzo = 2055 kHz. 4 AMPLITUDE MODULATION (CHAP. 3 Thus, the required tuning range of the local oscillator is 995—2055 kHz. (i) When fro 1, there will be many sideband lines. Figure 4-2 shows the amplitude spectra of angle-modulated signals for several values of . 4.7 BANDWIDTH OF ANGLE-MODULATED SIGNALS A, Sinusoidal Modulation: From Fig. 4-2 and Table B-l we see that the bandwidth of the angle-modulated signal with sinusoidal modulation depends on f and 0, In fact, it can be shown that 98 percent of the normalized total signal power is contained in the bandwidth Wy ~ B+ Vm (4.25) nR ANGLE MODULATION [CHAP. 4 i fs re | re Pa aE itt titel uth Ltd tia. Fig. 42 Amplitude spectra of sinusoidally modulated FM signals (fixed) When f < 1, the signal is an NB angle-modulated signal and its bandwidth is approximately equal to 2p. Usually a value of < 0.2 is taken to be sufficient to satisfy this condition. All the bandwidths can be expressed in hertz (Hz) simply by replacing Aw with Af and cy. with fy. Arbitrary Modulation: For an angle-modulated signal with an arbitrary modulating signal m() band-limited to @s, radians per second (rad/s), we define the deviation ratio D as maximum frequency deviation _ Ac ? bandwidth of mo Ou (4.26) The deviation ratio D plays the same role for arbitrary modulation as the modulation index f plays for sinusoidal modulation. Replacing f by D and «wy by oy, in Eq, (4.25), we have We = UD+ Dow 4.27) ‘This expression for bandwidth is generally referred to as Carson's rule. If D<1, the bandwidth is approximately 20, and the signal is known as a narrowband (NB) angle-modulated signal (see Seo. 4.4). If D > 1, the bandwidth is approximately 2Doryy = 2Ao, which is twice the peak frequency deviation. Such a signal is called a wideband (WB) angle-modulated signal, 4.8 GENERATION OF ANGLE-MODULATED SIGNALS A. Narrowband Angle-Modulated Signals: ‘The generation of narrowband angle-modulated signals is easily accomplished in view of Eq. (4.16) or Eqs. (4.17) and (4.18). This is illustrated in Fig. 4-3. B. Wideband Angle-Modulated Signals: There are two methods of generating wideband (WB) angle-modulated signals; the indirect method and the direct method. CHAP. 4] ANGLE MODULATION B mn) mn) A een) j «}G A008 ex () NBFM Fig. 43 Generation of narrowband angle-modulated signals Indirect Method: In this method, an NB angle-modulated signal is produced first (see Fig. 4-3) and then converted to a WB angle-modulated signal by using frequency multipliers (Fig. 4-4). The frequency multiplier ‘multiplies the argument of the input sinusoid by n. Thus, if the input of a frequency multiplier is x(1) = Acos logt + $0] (4.28) then the output of the frequency multiplier is HO) = Acos [net + nH) 4.29) NB we signal [Frequency |__sien=! x) Lenltipties | yory Xn Fig. 44 Frequency multiplier [NBFM signal WBEM signal Se Seg 2 My ah Ah=nah mo Fi 240) Fequency NOFM |" multiplier BPF Local oscillator Fig. 45 NB-to-WB conversion 14 ANGLE MODULATION (CHAP. 4 Use of frequency multiplication normally increases the carrier frequency to an impractically high value. To avoid this, a frequency conversion (using a mixer or DSB modulator) is necessary (Fig.4-5) to shift the spectrum, 2. 
Direct Method: In the direct method of generating an FM signal, the modulating signal directly controls the carrier frequency. A common method used for generating FM directly is to vary the inductance or capacitance of a tuned electric oscillator. Any oscillator whose frequency is controlled by the modulating-signal voltage is called a voltage-controlled oscillator (VCO). The main advantage of direct FM is that large frequency deviations are possible, and thus less frequency multiplication is required. The major disadvantage is that the carrier frequency tends to drift, and so additional circuitry is required for frequency stabilization.

4.9 DEMODULATION OF ANGLE-MODULATED SIGNALS

Demodulation of an FM signal requires a system that produces an output proportional to the instantaneous frequency deviation of the input signal. Such a system is called a frequency discriminator. If the input to an ideal discriminator is an angle-modulated signal

x_c(t) = A \cos[\omega_c t + \phi(t)]

then the output of the discriminator is

y_d(t) = k_d \frac{d\phi(t)}{dt}    (4.30)

where k_d is the discriminator sensitivity. For FM, \phi(t) is given by [Eq. (4.8)]

\phi(t) = k_f \int_{-\infty}^{t} m(\lambda)\, d\lambda

so that Eq. (4.30) becomes

y_d(t) = k_d k_f m(t)    (4.31)

The characteristics of an ideal frequency discriminator are shown in Fig. 4-6.

[Fig. 4-6: Characteristics of an ideal frequency discriminator: output voltage versus input frequency, a straight line of slope k_d]

The frequency discriminator also can be used to demodulate PM signals. For PM, \phi(t) is given by [Eq. (4.6)]

\phi(t) = k_p m(t)

Then y_d(t), given by Eq. (4.30), becomes

y_d(t) = k_d k_p \frac{dm(t)}{dt}    (4.32)

Integration of the discriminator output yields a signal that is proportional to m(t). A demodulator for PM can therefore be implemented as an FM demodulator followed by an integrator.

A simple approximation to the ideal discriminator is an ideal differentiator followed by an envelope detector (Fig. 4-7). If the input to the differentiator is

x_c(t) = A \cos[\omega_c t + \phi(t)]

then the output of the differentiator is

x_d(t) = -A\left[\omega_c + \frac{d\phi(t)}{dt}\right] \sin[\omega_c t + \phi(t)]    (4.33)

The signal x_d(t) is both amplitude- and angle-modulated. The envelope of x_d(t) is A[\omega_c + d\phi(t)/dt]. Since \omega_i = \omega_c + d\phi(t)/dt [Eq. (4.4)], the output of the envelope detector is

y_d(t) = A\omega_i    (4.34)

which is the instantaneous frequency of x_c(t).

[Fig. 4-7: Frequency discriminator: a differentiator followed by an envelope detector]

There are many other techniques that can be used to implement a frequency discriminator. (See Probs. 4.20, 4.21, and 4.22.)

Solved Problems

INSTANTANEOUS FREQUENCY

4.1. Determine the instantaneous frequency in hertz of each of the following signals:

(a) 10 \cos(200\pi t + \pi/3)
(b) 10 \cos(20\pi t + \pi t^2)
(c) \cos 200\pi t \cos(5 \sin 2\pi t) + \sin 200\pi t \sin(5 \sin 2\pi t)

(a) \theta(t) = 200\pi t + \pi/3, so \omega_i = d\theta/dt = 200\pi = 2\pi(100). The instantaneous frequency of the signal is 100 Hz, which is constant.

(b) \theta(t) = 20\pi t + \pi t^2, so \omega_i = d\theta/dt = 20\pi + 2\pi t = 2\pi(10 + t). The instantaneous frequency of the signal is 10 Hz at t = 0 and increases linearly at a rate of 1 Hz/s.

(c) \cos 200\pi t \cos(5 \sin 2\pi t) + \sin 200\pi t \sin(5 \sin 2\pi t) = \cos(200\pi t - 5 \sin 2\pi t), so \theta(t) = 200\pi t - 5 \sin 2\pi t and \omega_i = d\theta/dt = 200\pi - 10\pi \cos 2\pi t = 2\pi(100 - 5 \cos 2\pi t). The instantaneous frequency of the signal is 95 Hz at t = 0 and oscillates sinusoidally between 95 and 105 Hz.
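The instantaneous-frequency results of Prob. 4.1 are easy to check numerically. The short Python sketch below (not part of the original text; the analysis sampling rate is an arbitrary choice) differentiates the phase of the signal in part (c) and reproduces the 95- to 105-Hz swing:

```python
import numpy as np

f_samp = 100_000                        # analysis sampling rate (arbitrary choice), Hz
t = np.arange(0.0, 1.0, 1.0 / f_samp)

# Phase of the signal in part (c): theta(t) = 200*pi*t - 5*sin(2*pi*t)
theta = 200 * np.pi * t - 5 * np.sin(2 * np.pi * t)

# Instantaneous frequency f_i = (1/2pi) * d(theta)/dt, by numerical differentiation
f_i = np.gradient(theta, t) / (2 * np.pi)

print(f"f_i at t = 0: {f_i[0]:.2f} Hz")                        # about 95 Hz
print(f"f_i range   : {f_i.min():.2f} to {f_i.max():.2f} Hz")  # about 95 to 105 Hz
```

The same two-line derivative check works for any angle-modulated signal whose phase is known in closed form.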
Consider an angle-modulated signal x<(t) = 10cos [(108)t + S sin 2n(10*)F] Find the maximum phase deviation and the maximum frequency deviation, Comparing the given x,(0) with Eq, (4.1), we have (0) = cage + G(t) = (108 yn + S sin 2n(10?)t and (0) = Ssin 2m(10"9¢ Now #) = 52n}(10"}c08 210 \t Thus, the maximum phase deviation is 16Olmax = Srad and the maximum frequency deviation is ‘Aor = 16 lean = 5(2m)(10°) rad/s or Af = SkHz PHASE MODULATION AND FREQUENCY MODULATION 43. An angle-modulated signal is described by x(t) = 100s [2n(10°)1 + 0.1 sin (10°}} (@) Considering x,(2) as a PM signal with k, = 10, find m(0). (b) Considering x,(#) as an FM signal with ky 10m, find m0. @ xp (t) = Ae0s loge + kgm] = Locos [2m 108+ Lome) 10cos (2x(10°)¢ + 0.1 sin (10° ye] Thus, n(e) = 0.01 sin (10%) ) t= Acoe[ oem] = 10cos [2n(10°)r + 0.1 sin (10?)az} Assuming (0) = dy 608 (1 CHAP. 4] ANGLE MODULATION 1 we get Loe f madi tna, {cos oat sin (10?)at = 0.1 sin (10°)nr 100 Thus, dy = 10, and ‘m(2) = 10 cos (10")nt 44, Let my(f) and m(®) be two message signals, and let x,,(1) and x,,(0) be the modulated signals corresponding to m(t) and mg(¢), respectively. (@ Show that if the modulation is DSB (AM), then m() +m(2) will produce a modulated signal equal to x,,(0) +, (0). (This is why AM is sometimes referred to as a linear modulation.) - (@)_ Show that if the modulation is PM, then the modulated signal produced by my(t)+m(0) will not be x,,(0) + xq (0; that is, superposition does not apply to angle-modulated signals. (This is why angle modulation is sometimes referred to as a nonlinear modulation.) (@) For DSB (AM), from Eq. (2.3) we have m0) + Xe\() = mx(008 Ot m0) > x, (0) + m0) —* el (0) = my{0)c0s ot mC) +ms(0)1008 oct = mi(0) C08 wt + m() 608 at Fal +a(0) Hence, DSB (AM) modulation is a linear modulation (©) For PM, by Eq, (4.9) we have my() = X40 = Ao0s lot + kyr] y(t) Xgl) = Aoos le.t+ ym] a(t) F mal) + XO) = A.c08 {ot + kglomy(d) + my) HF alO+ eK Hence, PM is a nonlinear modulation, 45. Derive Eq. (4.24). In a sinusoidal angle modulation, the modulated signal (Eq. (4.23)] XCD = A008 (eat + Bsin Om) ccan be expressed as Re(elel#$i Only (4.35) xelt The function e/#5!1 nm! is clearly a periodic function with period 7, series representation = 2n/op. It therefore has a Fourier Snont = F gto By Eq. (/.3), the Fourier coefficients ¢, ean be found to be Cg (71 ee Bef ettnct eos Setting w,¢— x, we have 8 ANGLE MODULATION (CHAP. 4 if ahh extsineenng, = Fah dx = 1B) where J,(B) is the Bessel function of the first kind of order » and argument f (see App. B). Thus, eb Sin Ont = Iy(prelmnt (4.36) Substituting Ba, (4.36) into Eq, (4.35), we obtain x)= A ref > som] =A rf > saute) “aking the real part yields x= AS 1B) 008 (+ moht 4.6. Find the normalized average power in an angle-modulated signal with sinusoidal modulation. From Eq. (4.24), an angle-modulated signal with a single-tone modulation can be expressed as sd = ¥ AbyG)008 (0, + nO)! The normalized average power in x,() is given by 37) nce > 40 (4.38) FOURIER SPECTRA OF ANGLE-MODULATED SIGNALS 4.7. A carrier is angle-modulated by the sum of two sinusoids Xe(t) = Acos (co,t + By sin ct + Bz sin @t) (4.39) where o, and @, are not harmonically related. Find the spectrum of x,(0. In a manner similar to Prob. 4.5, x,(#) can be expressed as sel) = A Refel/etts8in 2 F445 9 colt) = ARe(e“e a (440) = aRecem eB sinontesesin oat Using Ba. (4.36), we have emsinoyt $F pope ehsinort = F J¢pyem™ Substituting these expressions into Eq. 
(4.40) and taking the real part, we obtain XD=A YY ABrn(B2)008 (oe + nea, + mooy)t (44D) CHAP. 4] ANGLE MODULATION 9 From Eq, (4.41) we see that the spectrum of x.(0) consists of four categories (1) the carrie line; (2) sideband lines at c+ na, due to one tone alone; (3) sideband lines at «, + mo due to the other tone alone; and (4) sideband lines at a. + noo, + may due to the nonlinear property of angle modulation. 48. Ina tone-modulated angle modulation, the modulated signal x,(2) is [see Eq. (4.23)] Xe(1) = cos (wet + SiN Opt) When 8 < 1, we have NB angle modulation (@)_ Find the spectrum of this NB angle-modulated signal, (®) Compare the result with that of a tone-modulated AM signal. (©) Discuss the similarities and differences by drawing their phasor representations, @ Hell) = AOS (Wet + Pi mt) 08 «£008 (f sin @y)— Asin w,tsin (Bin yf) when P< 1, we ean write 608 sin Ot) = 1 sin Bsin ogi) ~ Bsinoyt ‘Then the NB signal can be approximated by Awad) ~ Acos «= Asin ogtsin c.f BA (4.42) = teosos #4 costo, 44 tt Note that Eq. (4.42) also can be easily obtained from Eq, (4.16) by letting $(0) = isin et From Eq. (4.42) we see that the spectrum of xype(t) consists of a cartier line and a pair of side lines at (6) The preceding result is almost identical to the situation for a tone-modulated AM signal given by (see Prob. 3.4) Xam) = AOS 12f + HA COS Op COS Ot 4.43) - nA = og) +44 008 e+ ty) Ao0s 1761 +9 08 (0, ~ ag) +S 008 (04 + Wy) wher jis the modulation index for AM Compazon of Boe. (442) end (40) shows that he main dfesence between NB angle nodlaton sod AM i the poe several ofthe loc cband component CO) By oars Ba (029) ood BSN Ol — 1+ iB sin Opt for fh <1 9, (42) canbe writen in phasor form as sync) = Re[ Ae + sin on] je, ao = of some(s Badu Boms)] : Simla, Ba, 4) canbe wten in phasor frm a8 xan = Re[ Ae + 1008 nt] oy] = Ref ae™(1 +E erent 80 ANGLE MODULATION [CHAP. 4 By taking the term Ae as the reference, the phasor representations of Eqs. (4.44) and (4.45) are shown in Fig. 48. From Fig. 4-8, the difference between Eqs. (4.44) and (4.45) is obvious. In NB angle modulation, the modulation is added in quadrature with the carrier, which results in phase fluctuation with little amplitude change. In the AM case, the modulation is added in phase with the carrier, producing. amplitude fluctuation with no phase deviation. a aa g Z (a) NBFM wave ©) AM wave Fig. 448 Phase representation BANDWIDTH OF ANGLE-MODULATED SIGNALS 49. 4.10, Given the angle-modulated signal x(t) = 10cos (210% + 200 cos 2710°1) what is its bandwidth? ‘The instantaneous frequency is 9; = 2n(108) — 4x10) sin 2n(10% 9 So Aw = 4n(10°), om Ae _ 4n(10°) _ iy ~ Bacio) = 8 By Ea. (4. {p= UP+ Voy = 8.04R(10°) rad/s Since B > 1, Wy 2A = Sn(10%) rad/s or fy = 400kHz ‘A 20-megahertz (MHz) carrier is frequency-modulated by a sinusoidal signal such that the maximum frequency deviation is 100 kHz. Determine the modulation index and the approximate bandwidth of the FM signal if the frequency of the modulating signal is (a)1 kHz, (B)100 kHz, and (c)500 KIlz, Af = 100 KHz, f, = 20MHz> fy lfm (a) With fy, = 1 kHz, f= 100. This is a WBFM signal, and fp ~2A/—= 200 kHz. () With fj, = 100 kHz, 6 = 1. Thus, by Eq. (4.25), For sinusoidal modulation, B = In ~ 2B + Wp = 400 KHz. (© With f, = 500kHz, = 0.2. This is an NBFM signal, and fy ~ 2 = 1000 kHz = 1 MHz CHAP. 4] ANGLE MODULATION 81 4.11. 4.12, 4.13. 
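As an aside, spectra such as those of Fig. 4-2 and bandwidth estimates such as Eq. (4.25) can be computed directly from the Bessel coefficients of Eq. (4.24). The Python sketch below (illustrative only; the values of A, beta, and f_m are arbitrary) lists the sideband amplitudes A J_n(beta) and confirms that the band 2(beta + 1) f_m captures at least 98 percent of the power:

```python
import numpy as np
from scipy.special import jv            # Bessel function of the first kind, J_n(beta)

A, beta, f_m = 1.0, 5.0, 1_000.0        # arbitrary carrier amplitude, index, tone frequency

n = np.arange(-30, 31)                  # sideband orders, taken well beyond beta
amps = A * jv(n, beta)                  # line amplitudes A*J_n(beta) at f_c + n*f_m, Eq. (4.24)

total_power = 0.5 * A ** 2              # Eq. (4.38): sum over all n of (A*J_n)^2 / 2
n_keep = int(np.ceil(beta + 1))         # Eq. (4.25) keeps sidebands with |n| <= beta + 1
in_band = 0.5 * np.sum(amps[np.abs(n) <= n_keep] ** 2)

print(f"bandwidth 2*(beta+1)*f_m = {2 * (beta + 1) * f_m / 1e3:.0f} kHz")
print(f"power fraction inside it = {in_band / total_power:.4f}")   # about 0.99 here
```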
Consider an angle-modulated signal xO) Assume PM and fy, = 1 KHz, Calculate the modulation index and find the bandwidth when (4) is doubled and (b) fy, is decreased by one-half, Ocos (wet + 3iN Opt) Kew) = A.08 [ogt+ kyl] = 1008 (a_t+ 3sin Oy!) Thus, m()= ay sin yt, and pull) = 10608 (01+ kg SD Of) From Eq. (4.21) ot Ea. (4.23), B= hyd =3 We see that the value of fis independent of fa. By Eq. (4.25), when fy = I kHz, Sa = UB+ Wg = 8KHZ (@) When fy is doubled, 3, fe = 2KHz, and fp = 2G +1)2 = 16kHz (®) When fy is decreased by one-half, 6 = 3, fy = 0.5kHz, and Sa = 23 + 10.5) = 4kH2 Repeat Prob. 4.11 when FM is assumed. ar = cos («7+ 3sin qf) xf maul] Thus, m() = ay COS yt and Oink anu = 10008 (0,1 + sin ont) From Eq. (6.21) or Eq. (4.23), inky _ aks __ dinky Im 2x10) ‘We see that the value of fis inversely proportional to fy. Thus, by Eq. (4.25), when fy = 1 kHz, fs =2B-+1) fy = 28+ Il = BkHz (a) When fay is doubled, 312, Sie kHz, and fa=26-+ 0 fy =2(3+ 1)2 = 10Kt2 (0) When fy is decreased by one-hal, = 6, fa = 0.5KHz, and fo = 2B + fy = U6 + 10.5) = THe, A carrier is frequency-modulated with a sinusoidal signal of 2 kHz, resulting in a maximum frequency deviation of 5 kHz. (a) Find the bandwidth of the modulated signal. () The amplitude of the modulating sinusoid is increased by a factor of 3, and its frequency is lowered to 1 kHz. Find the maximum frequency deviation and the bandwidth of the new modulated signal. 82 414, ANGLE MODULATION (CHAP. 4 (@ From Bq. (4.21), kydm _ Af _ S(10?) B Om Im 20) By Eq, (4.25), the bandwidth is fy = 28 +1) fy = 22.5 + 1)2 = KH (B) Let 6; be the new modulation index. Then Kan _ ky iam _ gr b= = 66 = 62.5) = 15 Thus, AS= Br Son = (SKI) = 15kH2 Sa = 261 +1) fon = 2405 + 11) = 32H In addition to Carson's rule (4.27), the following formula is often used to estimate the bandwidth of an FM signal: Wy~%UD+20y forD>2 = 2nfy and fy, is the highest frequency of the signal in hertz. Compute the bandwidth, using this formula, and compare it to the bandwidth, using Carson’s rule for the FM signal with A= 7SkHz and fy = 15 kHz. Note that commercial FM broadcast stations in the United States are limited to a maximum frequeney deviation of 75 kHz, and modulating frequencies typically cover 50 Hz t0 15 kHz. Using Eq. (4.26) with oy = 2x fig, where fy = 15kHz, we have AS_ 7800) _ ~~ 1500 ‘and by using the given formula, the bandwidth is Ja = XD + 2fy = 20KHe, Using Carson's rule, Eq. (4.27), we see that the bandwidth is fo = UD + Vfy = 180KH2 Note: High-quality FM radios require a bandwidth of at least 200 kHz. Thus, it seems that Carson's rule underestimates the bandwidth. GENERATION OF ANGLE-MODULATED SIGNALS 4.15, Consider the frequency multiplier of Fig. 4-4 and an NBFM signal 2xarm() = A008 (Wet + Bin Opt) with 6 <0.5 and f, = 200 kHz. Let f,, range from 50 Hz to 15 kHz, and let the maximum frequency deviation A fat the output be 75 kHz. Find the required frequency multiplication n and the maximum allowed frequency deviation at the input. From Eq. (4.22), 8 = Afifn- Thus, _ 100% Bose = Tsd0) CHAP. 4] ANGLE MODULATION 83 4.16. 4.17. If, =0.5, where fj is the input p, then the required frequency multiplication is Bruax _ 1500 _ rm Bose = = 3000 The maximum allowed frequency deviation at the input, denoted Afi, is Af _ 750°) af == BP = 25 z A block diagram of an indirect (Armstrong) FM transmitter is shown in Fig. 4-9. 
Compute the maximum frequency deviation A of the output of the FM transmitter and the carrier frequency feif fy = 200 kHz, fio = 108MHz, Af, = 25Hz,m = 64, and m, = 48 AS= AAI) 15) (64) (48) Hz = 76.8 kHz. fa = mfr = (64) (200) (10°) = 12.8(10°) Hz = 12.8 MHz hap = oy §226 Mbz A= fy fio = (128 = 108) (105)He { ne Me “Thus, when fs = 23.6MHlz then ny fy = (48) (23.6) = 1132.8 MHz When fy = MHz, then = Myf = (48)(2) = 96 MHz =e Frequency Frequeney |_*0? sauhipier 1 | rmltpier fi fami |b fh oh of ho Fig. 49 Block diagram of an indirect FM transmitter In an Armstrong-type FM generator of Fig. 4-9 (Prob. 4.16), the crystal oscillator frequency is 200 kHz, The maximum phase deviation is limited to 0.2 to avoid distortion. Let f,, range from 50 Hz to 15 kHz, The carrier frequency at the output is 108 MHz, and the maximum frequency deviation is 75 kHz. Select multiplier and mixer oscillator frequencies. Referring to Fig. 4-9, we have Afi = Bln = (0.2)(50) = 10H Af _ 7500) af 10 fe= mf = m,2)(10) Az = 7500 = myn, Assuming down conversion, we have fhe = Thus, = nib sone 10800) 220) ne 84 ANGLE MODULATION (CHAP. 4 Letting n = 150, we obtain m=50 and fo =9.28MHz 4.18. A given angle-modulated signal has a maximum frequency deviation of $0 Hz for an input sinusoid of unit amplitude and a frequency of 120 Hz. Determine the required frequency multiplication factor to produce a maximum frequency deviation of 20 kHz when the input sinusoid has unit amplitude and a frequency of 240 Hz and the angle modulation used is (a) PM and (b) FM. (@) From Eqs. (4.21) and (4.22) we see that in sinusoidal PM, the maximum frequency deviation Ais proportional to fy. Thus, i= Gs0= one Hence, (©) Again from Eqs. (4.21) and (4.22) we see that in sinusoidal FM, the maximum frequency deviation A f is independent off. Thus, n= Sf _ 20005) _ “Bh 30 400 4.19. Atlow carrier frequencies it may be possible to generate an FM signal by varying the capacitance of a parallel resonant circuit. Show that the output x,(t) of the tuned circuit shown in Fig. 4-10 is an FM signal if the capacitance has a time dependence of the form © = Cy kro, “ femal ir 1 oscillator a cn “" Fig. 4.10 If we assume km) is small and slowly varying, then the output frequency «, of the oscillator is given by \ 1 Lyk 8 ~ Tete” Teele] Since |(k/Ca)m(0 < 1, we can use the approximation 1a (=a 143s , k and obtain om alts td mo] = a+b CHAP, 4.20, 421. 4 ANGLE MODULATION 85 where o. VEG Thus, by Bq. (412), x:() is an FM signal ‘An FM signal xeu() = Acos | at+ &f m(2). a is applied to the system shown in Fig. 4-11 consisting of a high-pass RC filter and an envelope detector. Assume that oRC < 1 in the frequency band occupied by xpyx(¢). Determine the output signal y(1), assuming that klm(2)| t1) denote the times associated with two adjacent zero crossings of xpy(®) (Fig. 4-13). If then show that where Ar= 4-4 Let xn(0) = A008 00) we fom out f mie Fig. 4.13 Zero crossings of an FM signal CHAP. 4] ANGLE MODULATION 87 4.23. Let #; and (ty > &) be the times associated with two adjacent zero crossings, that is, eu (tr) = Xena(t) = 0 Then exaust f mana (42) ~ (6) = ‘The bandwidth of the message m(‘) is assumed much less than the bandwidth of the modulated signal ‘Then m() is essentially constant over the interval [t;, 9], and we have [o. + km] = Thus, by Eq. (4.12), + kym = or kyrn(o) where At= 4-4) ‘The result of Prob. 4.22 indicates that m(t) can be recovered by counting the zero crossings in xpm(). Let N denote the number of zero crossings in time 7. 
Show that if T satisfies the condition 1 Let fy, 2 f4.--denote the times of zero crossings and T; are N zero crossings in ‘Assume that there T=Tythhts + +Ty From the result of Prob. 4.22, we have oF Rm This is true for T3, T3,.--; that is, Fp PNB ood Thus +Ty=T= ae Hence, we obtain m= oy or Kemi) = bmn = Pf ‘The condition 1/f, < T ensures that within T there will be some zero crossings, and the condition T & I fag offers no excessive averaging (or smoothing) of m(t). 88 424, 425, 426. 421. ANGLE MODULATION [CHAP. 4 Supplementary Problems ‘An angle-modulated signal is given by x4(1) = $008 [2x(10%)t + 0.2.cos 200zt} Can you identify whether x.) is @ PM or an FM signal? Ans, No. It can be either a PM or an FM signal The frequency multiplier is @ nonlinear device followed by a bandpass filter, as shown in Fig, 414. Suppose that the nonlincar device is an ideal square-law device with input-output characteristics eof) = aei() Find the output (2) if the input is an FM signal given by eft) = Acos (,t+ Bsin Wt) eo | eft 4 Nonlinear |_! 17 eevice BPF yo Fig. 4-14 Ans. y(0) = A'cos 2exct-+2Bsin cq), where A! = Jad, This result indicates that a square-law device can bbe used as a frequency doubler. ‘Assume that the 10.8-MHz signal in Fig. 4-15 is derived from the 200-kHz oscillator (multiplication by 54) and that the 200-KE1z oscillator drift is 0.1 Hz. xem(t) Frequency Frequency Naf} —+ Tract | —-@)— [ati x 6 x 4 Frequency Y rultiplier 200 kHz lea 108 MHz Fig. 415 (@) Find the drift in the 10.8-MHz signal () Find the drift in the carrier of the resulting FM signal. Ans, (@) * 5A He, (B) 48 Hz ‘A given FM signal has a maximum frequency deviation of 25 Hz for 2 modulating sinusoid of unit amplitude and a frequency of 100 kHz. Find the required value of frequency multiplication n to produce a maximum, frequency deviation of 20 kHz when the modulating sinusoid has unit amplitude and a frequency of 200 Hz, Ans. = 800 CHAP. 4] ANGLE MODULATION 89 428. A block diagram of a typical FM receiver, covering the broadcast range of 88 to 108 MHz, is shown in Fig. 4-16, The IF amplifier frequency is 10.7 MHz The limiter is used to remove the amplitude fluctuations caused by channel imperfection. The FM receiver is tuned to a carrier frequency of 100 MHz seal) RF F amplifier ampli. Frequency eiscriminator Limiter ‘Audio amplifier Loudspeaker Fig. 416 FM receiver (@) A 10-Hz audio signal frequency modulates a 100-MHz carrier, producing B=0.2. Find the bandwidths required for the RF and IF amplifiers and for the audio amplifier. (b) Repeat (a) if B Ans, (a) RF and IF amplifiers: 24 kHz; audio amplifier: 10 kHz (6) RF and IF amplifiers: 120 kHz; audio amplifier: 10 kHz. DIGITAL TRANSMISSION OF ANALOG SIGNALS DUCTION nd in the design of new communication systems has been toward increasing the use of niques. Digital communications offer several important advantages compared to analog tions, for example, higher performance, greater versatility, and higher security. smit analog message signals, such as voice and video signals, by digital means, the signal nverted to a digital signal. This process is known as the analog-ro-digital conversion, oF referred as digital pulse modulation. Two important techniques of analog-to-digital re pulse code modufation (PCM) and delta modulation (DM), DE MODULATION ial processes of PCM are sampling, quantizing, and encoding, as shown in Fig. 5-1. s the process in which a continuous-time signal is sampled by measuring its amplitude ints. 
Representing the sampled values of the amplitude by a finite set of levels is called quantizing; designating each quantized level by a code is called encoding. Thus, sampling converts a continuous-time signal to a discrete-time signal, quantizing converts a continuous-amplitude sample to a discrete-amplitude sample, and together sampling and quantizing transform an analog signal into a digital signal. The quantizing and encoding operations are usually performed in the same circuit, which is called an analog-to-digital (A/D) converter; the combined use of quantizing and encoding distinguishes PCM from analog pulse modulation techniques. In the following sections, we discuss the operations of sampling, quantizing, and encoding.

[Fig. 5-1: Pulse code modulation. m(t) passes through a sampler, a quantizer, and an encoder to produce the PCM signal.]

5.3 SAMPLING THEOREM

Digital transmission of analog signals is possible by virtue of the sampling theorem, and the sampling operation is performed in accordance with the sampling theorem.

A. Band-Limited Signals:

A band-limited signal is a signal m(t) for which the Fourier transform of m(t) is identically zero above a certain frequency \omega_M:

m(t) \leftrightarrow M(\omega) = 0 \quad \text{for } |\omega| > \omega_M = 2\pi f_M    (5.1)

B. Sampling Theorem:

If a signal m(t) is a real-valued band-limited signal satisfying condition (5.1), then m(t) can be uniquely determined from its values m(nT_s) sampled at uniform intervals T_s [\le 1/(2 f_M)]. In fact, m(t) is given by

m(t) = \sum_{n=-\infty}^{\infty} m(nT_s)\, \frac{\sin \omega_M (t - nT_s)}{\omega_M (t - nT_s)}    (5.2)

We refer to T_s as the sampling period and to its reciprocal f_s = 1/T_s as the sampling rate. Thus, the sampling theorem states that a band-limited signal which has no frequency components higher than f_M hertz can be recovered completely from a set of samples taken at the rate of f_s (\ge 2 f_M) samples per second.

The preceding sampling theorem is often called the uniform sampling theorem for baseband or low-pass signals. The minimum sampling rate, 2 f_M samples per second, is called the Nyquist rate; its reciprocal, 1/(2 f_M) (measured in seconds), is called the Nyquist interval. For the proof of the sampling theorem, see Prob. 5.2.

The requirement that the sampling rate be equal to or greater than twice the highest frequency applies to baseband or low-pass signals. However, when bandpass signals are to be sampled, lower sampling rates can sometimes be used (see Prob. 5.7).

5.4 SAMPLING

A. Instantaneous Sampling:

Suppose we sample an arbitrary signal m(t) [Fig. 5-2(a)] instantaneously and at a uniform rate, once every T_s seconds. Then we obtain an infinite sequence of samples {m(nT_s)}, where n takes on all possible integer values. This ideal form of sampling is called instantaneous sampling.

B. Ideal Sampled Signal:

Multiplication of m(t) by a unit impulse train \delta_{T_s}(t) [Fig. 5-2(b), Eq. (1.72)] yields

m_s(t) = m(t)\,\delta_{T_s}(t) = \sum_{n=-\infty}^{\infty} m(nT_s)\,\delta(t - nT_s)    (5.3)

[Fig. 5-2: Ideal signal sampling. Fig. 5-3: Natural sampling.]

The signal m_s(t) [Fig. 5-2(c)] is referred to as the ideal sampled signal.

C. Practical Sampling:

1. Natural Sampling: Although instantaneous sampling is a convenient model, a more practical way of sampling a band-limited analog signal m(t) is performed by high-speed switching circuits. An equivalent circuit employing a mechanical switch and the resulting sampled signal are shown in Fig. 5-3(a) and (b), respectively. The sampled signal x_{ns}(t) can be written as

x_{ns}(t) = m(t)\, x_p(t)    (5.4)

where x_p(t) is the periodic train of rectangular pulses with period T_s, and each rectangular pulse in x_p(t) has width d and unit amplitude.
CHAP, 5] DIGITAL TRANSMISSION OF ANALOG SIGNALS 93 The sampling here is termed natural sampling, since the top of each pulse in x,(t) retains the shape of its corresponding analog segment during the pulse interval. The effect of the finite width of the sampling pulses is investigated in Prob. 5.9. 2, Flat-Top Sampling: The simplest and thus most popular practical sampling method is actually performed by a functional block termed the sample-and-hold (S/H) circuit [Fig. 5-4(a)]. This circuit produces a flat-top sampled signal x,(0) [Fig. 5-4(b)]. The effect of the flat-top sampling is discussed in Prob. 5.11 Sampling switch yn) — S | mi) chon mee IE ch G | ‘| z w © Fig. $4 Flattop sampling 5.5 PULSE AMPLITUDE MODULATION The signal x,(1) depicted in Fig. 5-4(6) represents a pulse amplitude-modulated signal. In pulse amplitude modulation (PAM), the carrier signal consists of a periodic train of rectangular pulses, and the amplitudes of rectangular pulses vary with the instantaneous sampled values of an analog message signal. Note that the carrier frequency (that is, the pulse repetition frequency) is the same as the sampling rate The PAM signal x,(¢) can be expressed as x= > maT, pe~nT,) GS) where p(t) is a rectangular pulse of unit amplitude and duration d, defined as 1 oi<$ n= ns 6.0) 0 otherwise It can be shown (Prob. 5.10) that x,(#) can be expressed as the convolution of m,(t), the instantaneously sampled signal, and the rectangular pulse p(0), that is, x0 = m0 * WO (5.7) 5.6 QUANTIZING A. Uniform Quantizing: An example of the quantizing operation is shown in Fig. 5-5. We assume that the amplitude of ‘m(0) is confined to the range (-m,.mi,). As illustrated in Fig. 5-5(a), this range is divided in L zones, cach of step size A, given by (5.8) 94. DIGITAL TRANSMISSION OF ANALOG SIGNALS ICHAP. 5 A sample amplitude value is approximated by the midpoint of the interval in which it lies. The input-output characteristics of a uniform quantizer are shown in Fig. 5-5(b). w ely i 7 ' Input Oy Fig. 5 Uniform quantizing B. Quantizing Noise: The difference between the input and output signals of the quantizer becomes the quantizing error, ot quantizing noise. It is apparent that with a random input signal, the quantizing error q. varies randomly within the interval hag 5.9) pee ( CHAP. 5] DIGITAL TRANSMISSION OF ANALOG SIGNALS 95 ‘Assuming that the error is equally likely to lie anywhere in the range (~A/2,A/2), the mean-square quantizing error (@) given by (see Chap. 6) a2 2 xf eae=F (5.10) Substituting Eq. (5.8) into Eq. (5.10), we have mi (t)=36 (SID C. Nomuniform Quantizing and Companding: For many classes of signals the uniform quantizing is not efficient. For example, in speech communication it is found (statistically) that smaller amplitudes predominate in speech and that larger amplitudes are relatively rare. The uniform quantizing scheme is thus wasteful for speech signals, many of the quantizing levels are rarely used. An efficient scheme is to employ a nonuniform quantizing method in which smaller steps for small amplitudes are used (Fig. 5-6). Quantizing levels Fig. 6 Nonuniform quantizing The same result can be achieved by first compressing signal samples and then using a uniform quantizing. The input-output characteristics of a compressor are shown in Fig. 5-7. The horizontal axis is the normalized input signal (that is, m/m,), and the vertical axis is the output signal y. 
The compressor maps an input signal increment \Delta m into a larger increment \Delta y for small input signals and a smaller increment for large input signals. Hence, by applying the compressed signal to a uniform quantizer, a given interval \Delta m contains a larger number of steps (that is, a smaller step size) when m is small.

A particular form of compression law that is used in practice (in North America and Japan) is the so-called \mu law, defined by

y = \operatorname{sgn}(m)\, \frac{\ln\left(1 + \mu\,|m/m_p|\right)}{\ln(1 + \mu)}    (5.12)

where \mu is a positive constant and

\operatorname{sgn}(m) = \begin{cases} +1 & m > 0 \\ -1 & m < 0 \end{cases}

Another compression law that is used in practice (in Europe) is the so-called A law, defined by

y = \operatorname{sgn}(m) \begin{cases} \dfrac{A\,|m/m_p|}{1 + \ln A} & 0 \le |m/m_p| \le \dfrac{1}{A} \\ \dfrac{1 + \ln\left(A\,|m/m_p|\right)}{1 + \ln A} & \dfrac{1}{A} \le |m/m_p| \le 1 \end{cases}    (5.13)

[Fig. 5-7: Characteristics of a compressor, comparing uniform and nonuniform quantizing]

For the \mu law, \mu = 255 is used in digital telephone systems in North America. For the A law, A = 87.6 is used in European systems. These values are selected to obtain a nearly constant output signal-to-quantizing-noise ratio over an input signal power dynamic range of 40 decibels (dB).

To restore the signal samples to their correct relative level, an expander with a characteristic complementary to that of the compressor is used in the receiver. The combination of compression and expansion is called companding.

5.7 ENCODING

An encoder in PCM translates the quantized sample into a code number. Usually the code number is converted to its representation as a binary sequence. The binary sequence is converted to a sequential string of pulses for transmission (see Sec. 5.10). In this case the system is referred to as binary PCM.

The essential features of binary PCM are shown in Fig. 5-8. Assume that an analog signal m(t) is confined to the range -4 to +4 volts (V). The step size \Delta is set to 1 V. Thus, eight quantizing levels are employed; these are located at -3.5, -2.5, ..., +3.5 V. We assign the code number 0 to the level at -3.5 V, the code number 1 to the level at -2.5 V, and so on, until the level at +3.5 V, which is assigned the code number 7. Each code number has its binary code representation, ranging from 000 for code number 0 to 111 for code number 7. Each sample of m(t) is assigned to the quantizing level closest to the sampled value.

[Fig. 5-8: PCM. A waveform with its sampled values, quantized values, code numbers, and 3-bit binary codes.]

5.8 BANDWIDTH REQUIREMENTS OF PCM

Suppose that in a binary PCM, L quantizing levels are used, satisfying

L = 2^n \quad \text{or} \quad n = \log_2 L    (5.14)
The output of the comparator is dt) = Asgn [e() = {A a a (17) eo where m(t) is the message signal and *i(t) is @ referenc 98 DIGITAL TRANSMISSION OF ANALOG SIGNALS [CHAP. 5 Comparator me) Fom(e) (a Alo Slope overload wo Fig. $9 Delta modulation Then the output of the delta modulator is pat) = Asan [e(] Y 6¢—nT,) . (5.18) =A DY san fetnT))5e-nT.) Thus, the output of the delta modulator is a series of impulses, each having positive or negative polarity depending on the sign of e() at the sampling instants. Integrating xpy(t), we obtain Y Asenfetn7,)] (5.19) m0 which is a staircase approximation of m(é), as shown in Fig. 5-9(b). B. Demodulation: Demodulation of DM is accomplished by integrating xpy4(t) to form the staircase approximation ‘mm(0) and then passing through a low-pass filter to eliminate the discrete jumps in #(0). ‘Small step size is desirable to reproduce the input waveform accurately. However, small step size must be accompanied by a fast sampling rate to avoid slope overload, as shown in Fig. 5-9(b). TTo avoid slope overload, we require that 4 amc) a oa CHAP. 5] DIGITAL TRANSMISSION OF ANALOG SIGNALS 99 C. Quantizing Error: We assume that the quantizing error in DM is equally likely to lie anywhere in the interval (CA,A), so that the mean-square quantizing error is (@)= al, dq, = (5.21) D. Adaptive Delta Modulation: ‘The delta modulation discussed so far suffers from one serious disadvantage: The dynamic range of amplitude of m(¢) is too small because of the threshold and overload effects. This problem can be ‘overcome by making a delta modulator adaptive. In adaptive DM, the step size A is varied according to the level of the input signal. For example, in Fig. 5-9 (6), when the signal m(t) is falling rapidly, slope overload occurs. Ifthe step size is increased during this period, the overload could be avoided On the other hand, if the slope of m() is small, reduction of step size will reduce the threshold level as well as the quantizing noise. The use of an adaptive delta modulator requires that the receiver be adaptive also, so that the step size at the receiver changes to match the change in A at the modulator. 5.10 SIGNALING FORMAT Digital data (a sequence of binary digits) can be transmitted by various pulse waveforms Sometimes these pulse waveforms have been called line codes. Figure 5-10 shows a number of signal formats for transmission of binary data 1011000. o 1 to 0 0 1 Binary data \ TO tare Bipolar NRZ Unipolar RZ. Bipolar RZ. CT pope i Split-phase T— (Manchester) tf Fig. 510 Binary signaling formats 100 DIGITAL TRANSMI ON OF ANALOG SIGNALS [cHAP. 5 (a) Unipolar Nonreturn-to-Zero (NRZ) Signaling: Symbol | is represented by transmitting a pulse of constant amplitude for the entire duration of the bit interval, and symbol 0 is represented by no pulse. NRZ indicates that the assigned amplitude level is maintained throughout the entire bit interval, (6) Bipolar NRZ. ling: Symbols | and 0 are represented by pulses of equal positive and negative amplitudes. In either case, the assigned pulse amplitude level is maintained throughout the bit interval (©) Unipolar Return-to-Zero (RZ) Signaling: ‘Symbol 1 is represented by a positive pulse that returns to zero before the end of the bit interval, and symbol 0 is represented by the absence of pulse. (@) Bipolar RZ. Signal Positive and negative pulses of equal amplitude are used for symbols ! and 0, respectively. 
In either case, the pulse returns to 0 before the end of the bit interval (Alternate Mark Inversion (AMI) RZ Signaling: Positive and negative pulses (of equal amplitude) are used alternately for symbol 1, and no pulse is used for symbol 0. In either case the pulse returns to 0 before the end of the bit interval, (D Split-Phase (Manchester) Signaling: ‘Symbol 1 is represented by a positive pulse followed by a negative pulse, with both pulses being of equal amplitude and half-bit duration; for symbol 0, the polarities of these pulses are reversed. Many other signaling formats and variations are discussed in the literature. There are many formats because the channel characteristics vary from application to application. For example, if the channel is alternating current (ac) coupled, a format with a large de component should not be chosen. Some of the important parameters to consider in selecting a signaling format are the spectral characteristics, immunity of the format to noise, bit synchronization capability, cost and complexity of implementation, and other factors that vary with the application. 5.11 TIME-DIVIS ION MULTIPLEXING ‘Time-division multiplexing (TDM) is one of the many applications of the principle of sampling in communication systems. TDM is commonly used to simultaneously transmit several different signals over a single channel. Figure S-11(a) illustrates the scheme of TDM. Each input message signal is first restricted in bandwidth by a low-pass filter, to remove frequencies that are not essential to an adequate signal representation, The low-pass filter outputs are then applied to a commutator that is ustally implemented by using electronic switching circuitry. Each signal is sampled at the Nyquist rate or higher. Usually 1.1 times the Nyquist rate is employed in practice to avoid aliasing (Prob. 5.2). The samples are interleaved, and a single composite signal consisting of all the interleaved pulses is transmitted over the channel. Figure 5-11(b) shows time-division multiplexing of two PAM signals. At the receiving end of the channel, the composite signal is demultiplexed by using a second commutator whose output is distributed to the appropriate low-pass filter for demodulation. Proper operation of this system depends on proper synchronization between the two commutators. Tf all message signals have equal bandwidths, then the samples are transmitted sequentially, as shown in Fig. 5-11(a). If the sampled data signals have unequal bandwidths, more samples must be transmitted per unit time from the wideband signals. This is easily accomplished if the bandwidths are harmonically related. (See Prob. 5.31.) CHAP. 5} DIGITAL TRANSMISSION OF ANALOG SIGNALS 101 mi) —=[ Le pL min Synchronized mg —| J mate YEA ‘Communication a channel Commutator Commutator mt) —>[_ LPP Ler [= mat Transmitter Receiver a) Switching in time-division multiplexing mo (b) Time-division multiplexing of two signals ig. 5-11 Time-civision multiplexing Although Fig. 5-II refers to TDM of PAM signals, the same general concepts apply to TDM of PCM or any other pulse signals. TDM is widely used in telephony, telemetry, radio, and data processing. The Bell system, for example, time-multiplexes 24 PCM signals on a telephone channel in its TI system. In telephony, both frequency-division multiplexing (FDM) and TDM are jointly used to permit many hundreds of different conversations to utilize the same microwave link. 
5.12 BANDWIDTH REQUIREMENTS FOR TDM First, let us define T’as the time spacing between adjacent samples in the time-multiplexed signal waveform (ee Fig. 5-11(0)) If all input signals have the same bandwidths f,, and are sampled equally, then 7'= 7,/n [where m is the number of input signals and T, = 1/f, < 1/2f,)] is the sampling interval for cach signal, Assuming the resultant time-multiplexed signal is a low-pass signal of bandwidth ‘From: then the required minimum sampling rate is 2f;py, 1 oT Ths, fom tea ghelysa ne Ya wz Equation (5.22) shows that the minimum required bandwidth for TDM transmission is proportional to the message signal bandwidth and the number of the multiplexed signals. 5.13, PULSE SHAPING AND INTERSYMBOL INTERFERENCE A. Intersymbol Interference: In discussing digital signal transmission so far, we have shown the digital pulse as rectangular and have assumed the transmission channel to be linear and distortionless. In practice, however, channels have a limited bandwidth, and hence transmitted pulses tend to be “spread” during 102 DIGITAL TRANSMISSION OF ANALOG SIGNALS (CHAP. 5 transmission. This pulse spreading or dispersion causes overlap of pulses into adjacent time slots, as shown in Fig. 5-12. The signal overlap may result in an error at the receiver. This phenomenon of pulse overlap and the resultant difficulty of discriminating between symbols at the receiver are termed intersymbol interference (ISI). Fig. 12 Intersymbol interference in digital transmission B. Pulse Shaping: ‘One method of controlling ISI is to shape the transmitted pulses properly. One pulse shape that produces zero ISI is given by [Fig. 5-13(a)] 1 sin(at/T,) WO aT, (5.23) This is the impulse response of an ideal low-pass filter whose frequency response is shown in Fig. 5-130), Ko , Hw) r, 1 d+oW wt (-aW<|ol so-m)| Mo) * (ane) Using Eq. (1.36), we obtain Le $32 Mo) = 7, 3, Mo-no) (5.32) Note that M,(c) will repeat periodically without overlap as long as «, > 2cay, of 2/T, = 20y, that is, mo | Mow o 1 on c ry we sve from Eg. (5.34) that Mo) =*-Mjo) for jol oy ‘Then, by Eq, (1.70) of Prob. 1.22, that is, [_noxcoa= we obtain male, [eodioa= 2% (Zfemrernay Hence, we condude that J toducods = Tu If m(2) is band-limited, that is, M(w) = 0 for || > ey, then show that [Ltmora = 7, mony? (541) where T, = x/oy. Using Eqs. (5.39) and (5.40), we have fC mora f° [ S moro] x mayo ft sss mt meer [ sooo] ie = mor] ¥ matotbe)=7, 5 wmorye x > me nba] PA i ime Find the Nyquist rate and the Nyquist interval for each of the following signals’ (a) m(t)=5 cos 1000n¢ cos 4000x¢ (8) mo) = Sin 20a (©) may = (Sin. 20020)? 110 5.7. 58. DIGITAL TRANSMISSION OF ANALOG SIGNALS [CHAP. 5 @ ‘m(®) = Se0s 1000x1 cos 4000nt 2,5(c0s 3000xt + cos 5000!) Thus, m(0) is band-limited with fy = 2500 Hz, Hence, the Nyquist rate is 5000 Hz, and the Nyquist interval is 1/5000 s = 0.2 ms. (b) From the result of Prob. 1.14, we have sinat yf ala seers dy false ‘Thus, we see that m(t) is a band-limited signal with fy = 100 Hz. Hence, the Nyquist rate is 200 Hz, and the Nyquist interval is 1/200 s. (©) From the frequency convolution theorem (7.54), we find that the signal m(¢) is also band-limited and its bandwidth is twice that of the signal of part (b), that is, 200 Hz. Thus, the Nyquist rate is 400 Hz and the Nyquist interval is 1/400 s. 
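The interpolation idea behind Eq. (5.2), which underlies Probs. 5.1 to 5.6, can also be exercised numerically. The Python sketch below (illustrative only; the tone frequency and sampling rate are arbitrary choices satisfying f_s > 2 f_M) uses the kernel sinc((t - nT_s)/T_s), the form Eq. (5.2) takes for a general sampling rate f_s >= 2 f_M, and reduces to Eq. (5.2) exactly at the Nyquist rate:

```python
import numpy as np

f_M = 100.0                       # highest frequency in m(t), in Hz
f_s = 250.0                       # sampling rate, above the Nyquist rate 2*f_M = 200 Hz
T_s = 1.0 / f_s

def m(t):
    return np.cos(2 * np.pi * f_M * t)   # a band-limited test signal

n = np.arange(-500, 501)          # a finite window of sample indices
samples = m(n * T_s)

# Interpolation sum, with np.sinc(x) = sin(pi*x)/(pi*x):
# m(t) = sum_n m(n*T_s) * sinc((t - n*T_s)/T_s)
t = np.linspace(-0.01, 0.01, 41)
m_hat = np.array([np.sum(samples * np.sinc((tk - n * T_s) / T_s)) for tk in t])

print(f"max reconstruction error: {np.max(np.abs(m_hat - m(t))):.1e}")
# small; limited only by truncating the infinite sum
```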
The bandpass sampling theorem states that if'a bandpass signal m(¢) has a spectrum of bandwidth nf) and an upper frequency limit «,(= 2nf,), then m(i) can be recovered from m,(i) by bandpass filtering if f, = 2f,/k, where k is the largest integer not exceeding /,/fp. All higher sampling rates are not necessarily usable unless they exceed 2f, Consider the bandpass signal m(2) with a spectrum shown in Fig. 5-20. Check the bandpass sampling theorem by sketching the spectrum of the ideally sampled signal m,(s) when f, = 25, 45, and 50 kHz. Indicate if and how the signal can be recovered. Mp) 735-15 0 1s 25 f (ki) Fig. $20 Bandpass signal spectrum From Fig, 5-20, f,=25 kHz and fy=10 kHz, Then fi/fy=25 and k = 2. Hence, we have fk = 25 ke For f, = 25 kHz: From Fig. 5-21(a), we se that m() can be recovered from the sampled signal by using a bandpass filter over Jf $f 25kH2 with 1OkH2=/, < 15 kHz 5 kHz: From Fig. 5-21(b) itis not possible to recover mi) by filtering. $0 kHz: From Fig. 5-21(c), we see that m(2) can be recovered by using a low-pass filter with cutoff frequency f= 25 kElz, Given the signal ‘m() = 10cos 2000nt cos 8000xt (@ What is the minimum sampling rate based on the low-pass uniform sampling theorem? (b) Repeat (a) based on the bandpass sampling theorem. @ mt) = 100s 2000z2 cos 800% cos 6000n + Sos 100002 fu = 5000 Hz. = 5 kHz = Yfy = 10 kez, CHAP. 5] 59. DIGITAL TRANSMISSION OF ANALOG SIGNALS ui =50) 0 “fs 25 3540 50 60 f (kHz) fy @ = 45 KH My 00 (ke) Fig. 5-21 (©) fy=fae = 5 kHz and fy = (S—3) = 2 kHz fe fy Based on the bandpass sampling theorem, S+k=2 ‘Show that if the sampling rate is equal to or greater than twice the highest message frequency, the ‘message m(/) can be recovered from the natural sampled signal x,,(2) by low-pass filtering. ‘As shown in Eq, (5.4) and Fig. 5-22, the natural sampled signal (0) is equal to mi) multiplied by « rectangular pulse train x(0) whose Fourier series is given by (see Prob. 1.2) 30= > oe a d sin (n0,d/2) where OF nod? Then asl) = m(O)xQ(0) = mt) Y cye™"! = SY cyantpel™* G42) Hence, by the frequency-shifting property of the Fourier transform (J.9), we have uo) = Mono) (5.43) Equation (5.43) indicates that the spectrum X,,(o) consists of a weighted version of the spectrum M(«) centered on integer multiples of the sampling frequency. The spectral component at «= no, is multiplied 2 5.10. SAL. DIGITAL TRANSMISSION OF ANALOG SIGNALS [CHAP. 5 mio) rs) 0 7 Tey 0 oy w @ co) Fig. $22 Natural sampling by cy. It is clear from Fig. 5-22(/) that if «2, = 2ay, then m(0) can be recovered from x(t) by low-pass filtering of (0). Find the spectrum of x,(#) (Eq. (5.7)] in terms of the spectrum of the message M(w) and discuss the distortion in the recovered waveform if flat-top sampling is used Using Eq, (1.57) of Prob.1.10, we have sin (od/2) ptt = Peo) = dod?) 4H) Next, applying the convolution theorem (J.28) to Eq. (5.7) and using Eq, (5.22), we obtain Ko) — Mlo)Po) = FS Mlw= no.) Po) 64 Figure 5.23 illustrates a graphical interpretation of Eq. (5.45), with the assumed M(). We see that flat- top sampling is equivalent to passing an ideal sampled signal through a filter having the frequency response H(o) = P(a). The high-frequency roll-off characteristic of P(w) acts as a low-pass filter and attenuates the ‘upper portion of the message spectrum. This loss of high-frequency content is known as the aperture effect. The larger the pulse duration of aperture d, the larger the effect. 
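The magnitude of this aperture effect is easy to tabulate. The Python sketch below (illustrative only; a telephony-like sampling rate and message bandwidth are assumed) evaluates the rolloff of P(\omega) at the top of the message band for several ratios d/T_s, anticipating the d/T_s <= 0.1 rule of thumb stated next:

```python
import numpy as np

f_s, f_M = 8_000.0, 3_400.0      # assumed sampling rate and message bandwidth, Hz
T_s = 1.0 / f_s

for ratio in (0.1, 0.5, 1.0):    # flat-top pulse width d as a fraction of T_s
    d = ratio * T_s
    droop = np.sinc(f_M * d)     # P(2*pi*f_M)/P(0); np.sinc(x) = sin(pi*x)/(pi*x)
    print(f"d/T_s = {ratio:3.1f}: aperture droop at f_M = {20 * np.log10(droop):6.2f} dB")
```

For d/T_s = 0.1 the droop is a few hundredths of a decibel, while at d/T_s = 1 it approaches 3 dB, which is why the effect is negligible only for narrow sampling pulses.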
Since T,, does not depend on d, the ratio dj, isa measure of the flatness of P(c) within the bandwidth of the low-pass filter. In practice, this aperture effect can be neglected as long as d/T, = 0.1 Verify Eq. (5.7), that is, (0 = mC * PO CHAP. 5] DIGITAL TRANSMISSION OF ANALOG SIGNALS 113 Mw) ty 0 oy, ae « re Xo) oT Pw) NN y g & § i 8 al Fig. 5-23 Aperture effect in flat-top sampling From Eq. (5.3), m= >) mnToe—nT,) Then using Eq, (1.36), we obtain mA) * Pt) = Y maT )E- nT, epl) E mento » 5ee-n7, Y met pe—ay = x10) QUANTIZING 5.12. A binary channel with bit rate R, = 36 000 bits per second (bj) is available for PCM voice transmission. Find appropriate values of the sampling rate f,, the quantizing level L, and the binary digits n, assuming fyy = 3.2 kHz. Since we require 2 Yfy = 6400 and nf, = Ry = 36000 Rs 36000 then a 56 114 5.13. 5.14, 5.15. DIGITAL TRANSMISSION OF ANALOG SIGNALS (CHAP. 5 Son=5,L=25=32, and ssi = 2600 apo = 72 ke An analog signal is quantized and transmitted by using a PCM system. If each sample at the receiving end of the system must be known to within +0.5 percent of the peak-to-peak full-scale value, how many binary digits must each sample contain? Let 2m, be the peak-to-peak value ofthe signal. The peak error is then 0.005(2m,) = 0.0lm, and the peak-to-peak error is 2(0.01m,) = 0.02m, (the maximum step size A). Thus, from EQ. (5.8) the required number of quantizing levels is 2m, Ime ae X Toam, ~ 1005” Hence, the number of binary digits needed for each sample is n= 7. ‘An analog signal is sampled at the Nyquist rate f, and quantized into L levels. Find the time duration t of 1 b of the binary-encoded signal. Let n be the number of bits per sample. Then by Eq. (5.14) logs L1 ‘where {log: L] indicates the next higher integer to be taken if logy Lis not an integer value; nf, binary pulses ‘must be transmitted per second. Thus, qr nf, (og L where 7, is the Nyquist interval ‘The output signal-to-quantizing-noise ratinSNR), in a PCM system is defined as the ratio of average signal power to average quantizing noise power. For a full-scale sinusoidal modulating signal with amplitude 4, show that (SNR), (%) =5L (5.46) Ss or =10 toe (oa : where Z is the number of quantizing levels, ) = 1.76 +20 logh, (547) ‘The peak-to-peak excursion of the quantizer input is 24, From Eq. (5.8), the quantizer step size is 24 T Then from Eq, (5.10) or (5.11), the average quantizing noise power is yw at 9», =(8)=5=55 ‘The output signal-to-quantizing-noise ratio of a PCM system for a full-scale test tone is therefore xw,~ (5) =e CHAP. 5] DIGITAL TRANSMISSION OF ANALOG SIGNALS us 5.16. 5.17. Expressing this in decibels, we have s (2), 1nee(Q) -1200 see PCM system, the output signal-to-quantizing-noise ratio is to be held to a minimum. of 40 dB. Determine the number of required levels, and find the corresponding output signal-to- quantizing-noise ratio. Ina binary PCM system, L = 2", where n is the number of binary digits. Then Eq. (5.47) becomes (2) -inees0br 176260 a oe ww 8-10 an (8), ~ 1000 a Thus, from Eq. (5.46), 81.6] = 82 and the number of binary digits » is log, 82) = [6.36] = 7 ‘Then the number of levels required is L = 2” = 128, and the corresponding output signal-to-quantizing- noise ratio is (x) = 1.16 +6.02%7= 439 4B Ne Note: Equation (5.48) indicates that each bit in the code word of a binary PCM system contributes 6 4B to the output signal-to-quantizing-noise ratio. 
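This 6-dB-per-bit behavior is easy to confirm by direct simulation. The Python sketch below (illustrative only; it assumes a full-scale sinusoid and a midrise uniform quantizer) measures the output signal-to-quantizing-noise ratio and compares it with Eq. (5.48):

```python
import numpy as np

def quantize_uniform(x, n_bits, mp=1.0):
    """Midrise uniform quantizer on (-mp, mp) with L = 2**n_bits levels [Eq. (5.8)]."""
    L = 2 ** n_bits
    delta = 2 * mp / L
    levels = -mp + delta / 2 + delta * np.arange(L)          # midpoints of the L zones
    idx = np.clip(np.floor((x + mp) / delta).astype(int), 0, L - 1)
    return levels[idx]

t = np.linspace(0, 1, 100_000, endpoint=False)
m = np.cos(2 * np.pi * 5 * t)                                # full-scale test tone, A = mp

for n in (4, 8, 12):
    q = quantize_uniform(m, n)
    snr = 10 * np.log10(np.mean(m ** 2) / np.mean((m - q) ** 2))
    print(f"n = {n:2d} bits: SNR = {snr:5.1f} dB vs 1.76 + 6.02n = {1.76 + 6.02 * n:5.1f} dB")
```

The measured values closely track Eq. (5.48), gaining about 6 dB for each added bit.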
This is called the 6 dB rule ‘A compact disc (CD) recording system samples each of two stereo signals with a 16-bit analog-to- digital converter (ADC) at 44.1 kbjs, (@) Determine the output signal-to-quantizing-noise ratio for a full-scale sinusoid, (8). The bit stream of digitized data is augmented by the addition of error-correcting bits, clock extraction bits, and display and control bit fields. These additional bits represent 100 percent overhead. Determine the output bit rate of the CD recording system. (©) The CD can record an hour's worth of music. Determine the number of bits recorded on a cp. (@_ Fora comparison, a high-grade collegiate dictionary may contain 1500 pages, 2 columns per page, 100 lines per column, 8 words per line, 6 letters per word, and 7 b per letter on average. Determine the number of bits required to describe the dictionary, and estimate the number of comparable books that can be stored on a CD. (a) From Eq. (5.48), (x) = 1,76 + 6.02% 16 = 98.0848 ‘The very high SNR of the disk has the effect of increasing the dynamic range of recording, resulting in the excellent clarity of sound from a CD. 116 5.18, DIGITAL TRANSMISSION OF ANALOG SIGNALS [cHaP. 5 (The input bit rate is 2(44.1)(10°}(16) = 1.411(10°) b/s = 1.411 Mb/s Including the additional 100 percent overhead, the output bitrate is 2(1.411 108) b/s = 2.822 Mb/s (©) The number of bits recorded on a CD is 2.822(10°)(3600) = 10.16(10") b = 10.16 gigabits(GB) (@ The number of bits required to describe the dictionary is 1500(2)(100(8)(6)(7) = 100.8(10%)b = 100.8 Mb Including the additional 100 percent overhead, then, Thus, a disc contains the equivalent of about 50 comparable-books storage capacity. (@) Plot the 1 law compression characteristic for = 255. ©) If m, = 20V and 256 quantizing levels are employed, what is the voltage between levels when there is no compression? For «= 255, what is the smallest and what is the largest effective separation between levels? (a) From Eq, (5.12), for = 255 we have In(1 +255Ix/) 1n256 where x= m/mp. The plot of the slaw compression characteristic for ye = 255 is shown in Fig. 5-24. (®) With no compression (that is, a uniform quantizing), from Eq. (5.8) the step size A is lc 5. 24 The u law characteristics for x= 255 Fig. CHAP. 5] DIGITAL TRANSMI 5.19, 5.20. |ON OF ANALOG SIGNALS 7 ‘With compression (that is, a nonuniform quantizing), the smallest effective separation between levels will be the one closest to the origin, and the largest effective separation between levels will be the one closest to bet = 1 Let 1; be the value of x corresponding to y = 1/127, that is, In(l 10256 arid Solving for |x|, we obtain byl = 1.75007) ‘Thus, the smallest effective separation between levels is given by nin = miplsil = 20(1.75)10~) = 3.510%) V = 3.5m¥ Next, let x127 be the value of x corresponding to y= 1= 1/127, that is, In + 288leizr) _ 126 Tn256 Solving for [xian], we obtain Thus, the largest effective separation between levels is given by Sonus = m1 — bez) = 20(1 = 0.957) = 0.86 V ‘When a 1 law compander is used in PCM, the output signal-to-quantizing-noise ratio for w= 1 is, approximated by s 31? ah : 5.49 (x), fn +P aay Derive the 6 dB rule for = 255. Ss aL? 10 | = 10log————_, oa ) Find +r Ss 3b a (ea sn a x 2" = 6.02n-— 5, Which is the 6 dB rule for = 255, Consider an audio signal with spectral components limited to the frequency band of 300 to 3300 Hz. APCM signal is generated with a sampling rate of 8000 samples/s. The required output signal-to- quantizing-noise ratio is 30 dB. 
(@) What is the minimum number of uniform quantizing levels needed, and what is the minimum number of bits per sample needed? (6) Calculate the minimum system bandwidth required. (©) Repeat parts (a) and (6) when a y law compander is used with p= (a) Using Eq. (5.47), we have 118 DIGITAL TRANSMISSION OF ANALOG SIGNALS ICHAP. 5 togt= 100-170 = 142252512 Thus, the minimum number of uniform quantizing levels needed is 26 = (log: L) = (log, 26] = [4.7] = 56 per sample The minimum number of bits per sample is 5 (6) From Eq, (5.15), the minimum required system bandwidth is m5 From = 5f, = 3 (8000) = 20000 Hz = 20kH (0 Using Eq, (5.50), (x) = 2log - 10.130 2,005 + L > 101.2 ete eae: sow = Ym 0) = meno = 28 DELTA MODULATION 5.21. Consider a sinusoidal signal m(1) = Acos @»t applied to a delta modulator with step size A. Show that slope overload distortion will occur if aA(& ae) 632) where f, = 1/T, is the sampling frequency. mi) = Ae08 ont PO —Aagsin ont From Eq. (5.20), to avoid the slope overload, we require that fe = Ago Ase Thus, if A > A/(«47,), slope overload distortion will occur. 5.22. For a sinusoidal modulating signal M1) = ACOs Ot Oy = hin show that the maximum output signal-to-quantizing-noise ratio in a DM system under the assumption of no slope overload is given by CHAR. 5 ISSION OF ANALOG SIGNALS us Ss 3f2 (SNR), = (= a 5.53) (i). fila ee where f, = 1/T, is the sampling rate and fy is the cutoff frequency of a low-pass filter at the output end of the receiver. From Eq, (5.52), for ne-slope-overload condition, we must have ae if ) nT, 28 ‘Thus, the maximum permissible value of the output signal power equals G58) From Eq, (5.2/,), the mean-square quantizing error, or the quantizing noise power, is (@2) = A?/3. Let the bandwidth of a postreconstruction low-pass filter at the output end of the receiver be fy > fm and fy fy ‘Then, assuming that the quantizing noise power P, is uniformly distributed over the frequency band up t0 J, the output quantizing noise power within the bandwidth fir is aa ' n= (8) a Combining Eqs. (5.54) and (5.55), we see that the maximum output signal-to-quantizing-noise ratio is DH fa Sar 5.23. Determine the output SNR in a DM system for a I-kHz sinusoid, sampled at 32 kHz, without slope overload, and followed by a 4-kHz postreconstruction filter. From Eq, (5.53), we obtain 31200)" ENR) BOY AIO) 11,3 =249 4B 5.24, The data rate for Prob. 5.23 is 32 kbjs, which is the same bit rate obtained by sampling at 8 kHz with 4 b per sample in a PCM system. Find the average output SNR of a 4-b PCM quantizer for the sampling of a full-scale sinusoid with f, = 8 kHz, and compare it with the result of Prob. 5.23, From Ea. (5.48), we have (SNR)p gp = 1.76 + 6.02(4) = 25.844B ‘Comparing this result with that of Prob, 5.23, we conclude that for all the simplicity of DM, it does not perform as well as even a 4-b PCM. 5.25. A DM system is designed to operate at 3 times the Nyquist rate for a signal with a 3-kKHz bandwidth. The quantizing step size is 250 mV. (@) Determine the maximum amplitude of a I-kHz input sinusoid for which the delta modulator does not show slope overload. () Determine the postfiltered output signal-to-quantizing-noise ratio for the signal of part (a) @ I(t) = Ao0s yt = Acos 2(10°)t Km seomy0 = AenK00) 120 DIGITAL TRANSMISSION OF ANALOG SIGNALS (CHAP. 5 By Eq, (5.52), the maximum allowable amplitude of the input sinusoid is Aa 250 =A = Bg = 2. 
sey aya0) = 162m Ons Om Foy" Oca (®) From Eq, (5.53), and assuming that the cutoff frequency of the low-pass filter is fq, we have Ss = 216=23.5 4B xm=( ) OKO)! 80) SIGNALING FORMATS 5.26. 5.27. 5.28. Consider the binary sequence 0100101. Draw the waveforms for the following signaling formats, (@ Unipolar NRZ signaling format (®) Bipolar RZ signaling format (©) AMI alternate mark inversion) RZ signaling format See Fig, $26. Ce Binary sequence or | [i pose naz a . Bipolar RZ @ LL ____ nm AMER Fig. $25 Discuss the advantages and disadvantages of the three signaling formats illustrated in Fig. 5-25 of Prob. 5.26. The unipolar NRZ signaling format, although conceptually simple, has disadvantages: There are no pulse ansitions for long sequences of Os or 1s, which are necessary If one wishes t0 extract timing oF synchronizing information; and there is no way to detect when and ifan error has occurred from the received pulse sequence, ‘The bipolar RZ signaling format guarantees the availability of timing information, but there is no error detection capability, The AMT RZ signaling format has an error detection property; if two sequential pulses (ignoring intervening 0s) are detected with the same polarity, it is evident that an error has occurred, However, to guarantee the availability of timing information, it is necessary to restrict the allowable number of consecutive Os, Consider a binary sequence Jong sequence of 1s followed by a single 0 and then a long sequence of 1s. Draw the waveforms for this sequence, using the following signaling formats: CHAP. 5] DIGITAL TRANSMISSION OF ANALOG SIGNALS 121 (@) Unipolar NRZ signaling (®) Bipolar NRZ signaling (o) AMI RZ signaling (® Split-phase (Manchester) signaling See Fig. 5-26. Por bb OF Binary sequence wo CU A one 5.29. The AMI RZ signaling waveform representing the binary sequence 0100101011 is transmitted over a noisy channel. The received waveform is shown in Fig. 5-27, which contains a single error. Locate the position of this error, and justify your answer. ‘The error is located at the bit position 7 (as indicated in Fig, 5-27), where we have a negative pulse. This bit isin error, because with the AMI signaling format, positive and negative pulses (of equal amplitude) are used alternatively for symbol 1, and no pulse is used for symbol 0. The pulse in position 7 representing the third digit 1 in the data stream should have had positive polarity 122 DIGITAL TRANSMISSION OF ANALOG (CHAP. 5 TIM ‘-DIVISION MULTIPLEXING 5.30. Two analog signals m,(t) and m,(t) are to be transmitted over a common channel by means of time-division multiplexing. The highest frequency of m,(t) is 3 kHz, and that of ma(t) is 3.5 kHz, What is the minimum value of the permissible sampling rate? The highest-frequency component of the composite signal my(t) + mad) is 3.5 kHz, Hence, the minimum sampling rate is 23500) = 7000 samples/s 531. A signal (0) is band-limited to 3.6 kHz, and three other signals—mp(i), m3(1), and m(t)—are band-limited to 1.2 KHz cach. These signals are to be transmitted by means of time-division multiplexing (a) Set up a scheme for accomplishing this multiplexing requirement, with each signal sampled at its Nyquist rate. () What must be the speed of the commutator (in samples per second)? (© Ithe commutator output is quantized with L = 1024 and the result is binary-coded, what is the output bit rate? (@ Determine the minimum transmission bandwidth of the channel. 
TIME-DIVISION MULTIPLEXING

5.30. Two analog signals m_1(t) and m_2(t) are to be transmitted over a common channel by means of time-division multiplexing. The highest frequency of m_1(t) is 3 kHz, and that of m_2(t) is 3.5 kHz. What is the minimum value of the permissible sampling rate?

The highest-frequency component of the composite signal m_1(t) + m_2(t) is 3.5 kHz. Hence, the minimum sampling rate is

2(3500) = 7000 samples/s

5.31. A signal m_1(t) is band-limited to 3.6 kHz, and three other signals—m_2(t), m_3(t), and m_4(t)—are band-limited to 1.2 kHz each. These signals are to be transmitted by means of time-division multiplexing. (a) Set up a scheme for accomplishing this multiplexing requirement, with each signal sampled at its Nyquist rate. (b) What must be the speed of the commutator (in samples per second)? (c) If the commutator output is quantized with L = 1024 and the result is binary-coded, what is the output bit rate? (d) Determine the minimum transmission bandwidth of the channel.

(a)

Message   Bandwidth   Nyquist rate
m_1(t)    3.6 kHz     7.2 kHz
m_2(t)    1.2 kHz     2.4 kHz
m_3(t)    1.2 kHz     2.4 kHz
m_4(t)    1.2 kHz     2.4 kHz

If the sampling commutator rotates at the rate of 2400 rotations per second, then in one rotation we obtain one sample from each of m_2(t), m_3(t), and m_4(t) and three samples from m_1(t). This means that the commutator must have at least six poles connected to the signals, as shown in Fig. 5-28. [Fig. 5-28: time-division multiplexing scheme—commutator followed by quantizer and encoder]

(b) m_1(t) has 7200 samples/s, and m_2(t), m_3(t), and m_4(t) each have 2400 samples/s. Hence, there is a total of 14 400 samples/s.

(c) L = 1024 = 2^10, so n = 10. Thus, the output bit rate is 10(14 400) = 144 kb/s.

(d) The minimum channel bandwidth is

f_B = (7.2 + 2.4 + 2.4 + 2.4)/2 = 7.2 kHz

5.32. The T1 carrier system used in digital telephony multiplexes 24 voice channels based on 8-b PCM. Each voice signal is usually put through a low-pass filter with a cutoff frequency of about 3.4 kHz. The filtered voice signal is sampled at 8 kHz. In addition, a single bit is added at the end of the frame for the purpose of synchronization. Calculate (a) the duration of each bit, (b) the resultant transmission rate, and (c) the minimum required transmission bandwidth (Nyquist bandwidth).

(a) With a sampling rate of 8 kHz, each frame of the multiplexed signal occupies a period of

1/8000 = 0.000125 s = 125 microseconds (us)

Since each frame consists of twenty-four 8-b words plus a single synchronizing bit, it contains a total of

24(8) + 1 = 193 b

Thus, the duration of each bit is

T_b = 125/193 = 0.6477 us

(b) The resultant transmission rate is

1/T_b = 1.544 Mb/s

(c) From Eq. (5.22), the minimum required transmission bandwidth is

f_B = 1/(2T_b) = 772 kHz

PULSE SHAPING AND INTERSYMBOL INTERFERENCE

5.33. Show that a pulse-shape function h(t), with Fourier transform given by H(omega), that satisfies the criterion

sum over k of H(omega + 2 pi k/T) = T        |omega| <= pi/T        (5.56)

produces zero intersymbol interference, that is, h(nT) = 1 for n = 0 and h(nT) = 0 for n != 0.

By the inverse Fourier transform,

h(nT) = (1/2 pi) integral from -infinity to infinity of H(omega) e^{j omega n T} d omega = (1/2 pi) sum over k of the integral from (2k-1) pi/T to (2k+1) pi/T of H(omega) e^{j omega n T} d omega

Changing the variable of integration to u = omega - 2 pi k/T in the kth integral, and assuming that the integration and summation can be interchanged, we have

h(nT) = (1/2 pi) integral from -pi/T to pi/T of [sum over k of H(u + 2 pi k/T)] e^{j u n T} du

since e^{j 2 pi k n} = 1. Finally, if Eq. (5.56) is satisfied, then

h(nT) = (T/2 pi) integral from -pi/T to pi/T of e^{j u n T} du = sin (n pi)/(n pi) = 1 for n = 0, 0 for n != 0

which verifies that h(t) with a Fourier transform H(omega) satisfying criterion (5.56) produces zero ISI.

5.34. A certain telephone line bandwidth is 3.5 kHz. Calculate the data rate (in b/s) that can be transmitted if we use binary signaling with raised-cosine pulses and a roll-off factor alpha = 0.25.

Using Eq. (5.28), we see that the data rate is

R = 1/T = 2 f_B/(1 + alpha) = 2(3500)/(1 + 0.25) = 5600 b/s

5.35. A communication channel of bandwidth 75 kHz is required to transmit binary data at a rate of 0.1 Mb/s using raised-cosine pulses. Determine the roll-off factor alpha.

Here f_B = 75 kHz = 75(10^3) Hz and R = 1/T = 0.1 Mb/s = 10^5 b/s. Using Eq. (5.27), we have

1 + alpha = 2 f_B/R = 2(75)(10^3)/10^5 = 1.5

Hence, we obtain alpha = 0.5.
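The raised-cosine pulses of Probs. 5.34 and 5.35 satisfy the zero-ISI criterion (5.56) by construction. The Python sketch below (ours, not from the text; the function name and the normalized frequency grid are assumptions, and alpha > 0 is assumed) builds the standard raised-cosine Nyquist spectrum and numerically verifies that its shifted replicas sum to T over |f| <= 1/(2T).

import numpy as np

def raised_cosine_spectrum(f, T, alpha):
    """Standard raised-cosine Nyquist spectrum H(f), frequencies in Hz."""
    f = np.abs(f)
    f1, f2 = (1 - alpha) / (2 * T), (1 + alpha) / (2 * T)
    H = np.where(f <= f1, T, 0.0)                       # flat portion
    mid = (f > f1) & (f <= f2)                          # cosine roll-off portion
    H = np.where(mid, (T / 2) * (1 + np.cos(np.pi * T / alpha * (f - f1))), H)
    return H

T, alpha = 1.0, 0.25
f = np.linspace(-0.5 / T, 0.5 / T, 101)                 # |f| <= 1/(2T)
total = sum(raised_cosine_spectrum(f + k / T, T, alpha) for k in (-1, 0, 1))
print(np.allclose(total, T))                            # -> True: criterion (5.56) holds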
5.36. In a certain telemetry system, eight message signals having 2-kHz bandwidth each are time-division multiplexed using a binary PCM. The error in sampling amplitude cannot be greater than 1 percent of the peak amplitude. Determine the minimum transmission bandwidth required if raised-cosine pulses with roll-off factor alpha = 0.2 are used. The sampling rate must be at least 25 percent above the Nyquist rate.

From Eq. (5.9), the maximum quantizing error must satisfy

epsilon_max = Delta/2 = m_p/L <= 0.01 m_p

Hence, L >= 100, and we choose L = 128 = 2^7. The number of bits per sample required is 7. Since the Nyquist sampling rate is 2 f_m = 4000 samples/s, the sampling rate for each signal is

f_s = 1.25(4000) = 5000 samples/s

There are eight time-division multiplexed signals, requiring a total of

8(5000) = 40 000 samples/s

Since each sample is encoded by 7 bits, the resultant bit rate is

R = 7(40 000) = 280 kb/s

From Eq. (5.27), the minimum transmission bandwidth required is

f_B = (1 + alpha)R/2 = 1.2(280)/2 = 168 kHz

Supplementary Problems

5.37. If m(t) is a band-limited signal, show that

integral from -infinity to infinity of m(t) phi_n(t) dt = T_s m(nT_s)

where phi_n(t) is the function defined in Eq. (5.39) of Prob. 5.4.
Hint: Use the orthogonality property (5.40) of phi_n(t).

5.38. The signals m_1(t) = 10 cos 100 pi t and m_2(t) = 10 cos 50 pi t are both sampled with f_s = 75 Hz. Show that the two sequences of samples so obtained are identical.
Hint: Take the Fourier transforms of the ideally sampled signals m_1s(t) and m_2s(t).
Note: This problem indicates that by undersampling m_1(t) and oversampling m_2(t) appropriately, their sampled versions can be made identical.

5.39. A signal

m(t) = 6 cos 200 pi t + 2 cos 320 pi t

is ideally sampled at f_s = 300 Hz. If the sampled signal is passed through an ideal low-pass filter with a cutoff frequency of 250 Hz, what frequency components will appear in the output?

Ans. 100-, 140-, 160-, and 200-Hz components

5.40. A duration-limited signal is a time function m(t) for which

m(t) = 0 for |t| > T

Let M(omega) = F[m(t)]. Show that M(omega) can be uniquely determined from its values M(n pi/T) at a series of equidistant points spaced pi/T apart. In fact, M(omega) is given by

M(omega) = sum over n of M(n pi/T) [sin (omega T - n pi)]/(omega T - n pi)

This is known as the sampling theorem in the frequency domain.
Hint: Interchange the roles of t and omega in the sampling theorem proof (Prob. 5.2).

5.41. The American Standard Code for Information Interchange (ASCII) has 128 characters that are binary-coded. If a computer generates 100 000 characters per second, determine
(a) The number of bits required per character
(b) The data rate (or bit rate) R_b required to transmit the computer output

Ans. (a) 7 b per character; (b) R_b = 0.7 Mb/s

5.42. A PCM system uses a uniform quantizer followed by a 7-b binary encoder. The bit rate of the system is 50 Mb/s. What is the maximum message bandwidth for which system operation is satisfactory?

Ans. 3.57 MHz

5.43. Consider binary PCM transmission of a video signal with f_s = 10 MHz. Calculate the signaling rate needed to achieve (SNR)_o = 45 dB.

Ans. 80 Mb/s

5.44. Show that in a PCM system, the output signal-to-quantizing-noise ratio can be expressed as

(SNR)_o = 3(2)^{2 f_B/f_M}

where f_B is the channel bandwidth and f_M is the message bandwidth.
Hint: Use Eqs. (5.14), (5.15), and (5.46).

5.45. The bandwidth of a TV video plus audio signal is 4.5 MHz. If this signal is converted to PCM with 1024 quantizing levels, determine the bit rate of the resulting PCM signal. Assume that the signal is sampled at a rate 20 percent above the Nyquist rate.

Ans. 108 Mb/s

5.46. A commonly used value for the A-law compander is A = 87.6. If m_p = 20 V and 256 quantizing levels are employed, what is the smallest and what is the largest effective separation between levels?

Ans. Delta_min = 9.8 mV, Delta_max = 0.84 V

5.47. Given the binary sequence 1101110, draw the transmitted pulse waveform for (a) AMI RZ signaling format and (b) split-phase (Manchester) signaling format.
Hint: See Fig. 5-10.

5.48. A given DM system operates with a sampling rate f_s and a fixed step size Delta.
If the input to the system is m() = at fort >0 determine the value of « for which slope overload occurs. Ans. Mf, Consider a DM system whose receiver does not include a low-pass filter, as in Prob. 5.22. Show that under the assumption of no slope overload distortion, the maximum output signal-to-quantizing-noise ratio increases by 6 dB when the sampling rate is doubled. What is the improvement that results from the use of a low-pass filter at the receiver output? Ans. 9-dB improvement Twenty-four voice signals are sampled uniformly and then time-division-multiplexed. The sampling operation uses flat-top samples with I-us duration. The multiplexing operation includes provision for synchronization by adding an extra pulse of appropriate amplitude and 1-ns duration. The highest frequency component of each voice signal is 3.4 kHz. (@ Assuming a sampling rate of & kHz, calculate the spacing between successive pulses of the multiplexed signal, (®) Repeat (a), assuming the use of Nyquist rate sampling, Ans. (A) 4 ps, (B) 5.68 ps. Five telemetry signals, each of bandwidth 1 kHz, are to be transmitted by binary PCM with TDM. The ‘maximum tolerable error in sampling amplitude is 0.5 percent of the peak signal amplitude. The signals are sampled at least 20 percent above the Nyquist rate. Framing and synchronization require an additional 0.5 percent extra bits, Determine the minimum transmission data rate and the minimum required bandwidth for the TDM transmission, Ans, Ry = 964.8 kb|s, frow = 482.9 kHz Ina certain telemetry system, there are four analog signals: my(t), m(), m3(0, and m0. The bandwidth of (1) is 3.6 kHz, but the bandwidths of the remaining signals are 1.5 kHz each, Set up a suitable scheme for accomplishing the time-division multiplexing of these signals, Ans. Use the same scheme as the one depicted in Fig. 5-28 of Prob. 5.31 with the commutator speed raised to 3000 rotations per second. (Qiy 14am) PROBABILITY AND RANDOM VARIABLES DUCTION far we have discussed the transmission of deterministic signals over a channel, and we lemphasized the central role played by the concept of “randomness” in communication. random means unpredictable. If the receiver at the end of a channel knew in advance the itput from the originating source, there would be no need for communication. So there is ess in the message source. Moreover, transmitted signals are always accompanied by juced in the system. These noise waveforms are also unpredictable. The objective of is to present the mathematical background essential for further study of communi- dy of probability, any process of observation is referred to as an experiment. The results ition are called the outcomes of the experiment. An experiment is called a random its outcome cannot be predicted. Typical examples of a random experiment are the roll ss of a coin, drawing a card from a deck, of selecting a message signal for transmission essa ges, and Events: Il possible outcomes of a random experiment is called the sample space S. An clement sample point. Each outcome of a random experiment corresponds to a sample point. lled a subset of B, denoted by Ac Bif every clement of is also an element of B. Any imple space S is called an event. A sample point of S is often referred to as an 1. Note that the sample space S is the subset of itself, that is, Sc S, Since S is the set of tcomes, it is often called the certain event. 128 CHAP. 6] PROBABILITY AND RANDOM VARIABLE: 129 C. Algebra of Events: 1. 
The complement of event A, denoted A, is the event containing all sample points in S but not in 4. 2. The union of events A and B, denoted A UB, is the event containing all sample points in either A or B or both. 3. The intersection of events A and B, denoted 4 B, is the event containing all sample points in both 4 and B. 4, The event containing no sample point is called the nu event, denoted @. Thus @ corresponds to an impossible event. 5, Two events A and B are called mutually exclusive or disjoint if they contain no common sample point, that is, 4 B= @. By the preceding set of definitions, we obtain the following identities: S=0 o=s SUA=S SAA= AVA=S ANA=8 A=A D. Probabilities of Events: An assignment of real numbers to the events defined on $ is known as the probability measure. In the axiomatic definition, the probability P(A) of the event 4 is a real number assigned to 4 that satisfies the following three axioms: Axiom I: PIA)>0 6D Axiom 2: RS) =1 6.2) Axiom 3: PAU B)= P(A) + PB) if ANB=2 63) With the preceding axioms, the following useful properties of probability can be obtained (Probs. 6.1— 64): 1, PA)=1- PCA) (4) 2 P@)=0 (65) 3. P(A)SP(B) if ACB (6.6) 4 PASI (7) 5. PAU B)= P(A) + P(B)- PAB) (68) Note that Property 4 can be easily derived from axiom 2 and property 3. Since A cS, we have PAS S)=1 Thus, combining with axiom 1, we obtain 0=P(AVS1 (6.9) Property 5 implies that PAU B) = PA) + PB) 6.10) since P(A 9 B) 0 by axiom 1 One can also define P(A) intuitively, in terms of relative frequency. Suppose that a random experiment is repeated n times. If an event A occurs ny times, then its probability P(A) is defined as PCA) = lim “4 6.11) Note that this limit may not exist. 130 PROBABILITY AND RANDOM VARIABLES [CHAP. 6 E. Equally Likely Events: Consider a finite sample space S with finite elements SH Uyhayeee sad where 2's are elementary events. Let P(2,) = py. Then 1. 0=p<1 i=1,2,...40 2 Laant. + P= 1 (6.12) ai 3. If A= U Ay where is a collection of subscripts, then PA= > pad=Yp. (6.13) ro fa ‘When all elementary events 4, (i= 1,2,...,) are equally likely events, that is r=n= then from Eq. (6.12), we have sen (6.14) and Paya (6.15) where n(4) is the number of outcomes belonging to event A and n is the number of sample points in S. F, Conditional Probability: ‘The conditional probability of an event A given the event B, denoted by P(AIB), is defined as PAN B) (A| B) = (B) > 0 (6.16 pulp) =" Pw) (6.16) where P(A 9 B) is the joint probability of A and B. Similarly, PAN B) pal ="4°R pay >0 6.17 Bly =A PA > 617) is the conditional probability of an event B given event 4. From Eqs. (6.16) and (6.17) we have P(A B) = P(A|B)P(B) = P(B\A)P(A) (6.18) Equation (6.18) is often quite useful in computing the joint probability of events. From Eq. (6.18) we can obtain the following Bayes rule: _ PIA PCA) PUAIB) = Fey (6.19) G. Independent Events: Two events A and B are said to be (statistically) independent if PUAIB) = P(A) and P(BIA) = PCB) (6.20) This, together with Eq. (6.19), implies that for two statistically independent events PLAN B) = PLA)PCB) (6.21) CHAP. 6] PROBABILITY AND RANDOM VARIABLES 131 We may also extend the definition of independence to more than two events. The events Ay, Agy-++y4q are independent if and only if for every subset {4),.Ain,..-)4q} (2=k PBO A) => PBIADPA), (6.24) a which is known as the total probability of event B (Prob. 6.13). Let A = A; in Eq. (6.19); using Eq. (6.24) we obtain PEBIAD P(A |B) = ———— & PBIADPCAD. 
ai (6.25) Note that the terms on the right-hand side are all conditioned on events A,, while that on the left is conditioned on B. Equation (6.25) is sometimes referred to as Bayes’ theorem. 6.3 RANDOM VARIABLES A, Random Variables: Consider a random experiment with sample space S. A random variable X(2) is a single-valued real function that assigns a real number called the value of X(J) to each sample point 2 of S. Often we use a single letter X for this function in place of X(2) and use r.v, to denote the random variable. A schematic diagram representing a r.v. is given in Fig. 6-1 xa ® Fig. 6-1 Random variable X as a function The sample space S is termed the domain of the r.v. X, and the collection of all numbers [values of X()] is termed the range of the r.v. X. Thus, the range of is a certain subset of the set of all real numbers and it is usually denoted by Ry. Note that two or more different sample points might give the same value of X(2), but two different numbers in the range cannot be assigned to the same sample point. The r.v. X induces a probability measure on the teal line as follows: PX =x) = PA X= PX a)=1- (6.29) PX <= FAO) = lim b-e (6.30) C, Discrete Random Variables and Probability Mass Functions: Let X be a discrete r.v. with cdf Fy(x). Then Fy(x) is a staircase function (see Fig. 6-2), and Fy(x). changes values only in jumps (at most 2 countable number of them) and is constant between jumps, Fw Fig. 62 Suppose that the jumps in Fy(x) of a discrete rv. X occur at the points x),x.,..., where the sequence may be either finite or countably infinite, and we assume x;< x; if i < j. Then Fy(ai) ~ Fu(siy) = POS x) — PWS au) = P= x) (631) Let x(x) = POX = 2) (6.32) The function py(x) is called the probability mass function (pmf) of the discrete r.v. X. Properties of px) 1. O=pyx)y=l i= 1,2,.. (6.33a) 2 pexy=0 ifxéx(i=1,2,...) (6.33b) 3. Lex) =1 (6.33c) CHAP. 6} PROBABILITY AND RANDOM VARIABLES 133 The cdf Fy(x) of a discrete rv. ¥ can be obtained by Fr) = PX SD = Y pyr (6.34) D. Continuous Random Variables and Probability Density Functions: Let ¥ be a rv. with edf Fy(x). Then Fy(x) is continuous and also has a derivative dFy(x)/de that exists everywhere except at possibly a finite number of points and is piecewise continuous. Thus, if X is a continuous r.v., then (Prob. 6.22) PX=Y=0 (6.35) In most applications, the rv. is either discrete or continuous. But if the cdf Fy(x) of a r.v. X possesses both features of discrete and continuous r.v.s, then the r.v. X is called the mixed rw. Let dE yx) a) = (6.36 Sieg = (6.36) The function f(x) is called the probability density function (pdf) of the continuous r.v. X. Properties of fl) L foo =0 (6.374) 2. fRofrevide (6.37b) 3. f(x) is piecewise continuous, 4. Pa 0) if its pmf is given by ‘ Pxlk) = POX = k) =e k=0, 1... (688) The corresponding cdf of X is Fy) nex P(A), then P(BIA) > PCB), 16 r(Aia) =P SOP)» rp, then PAB) > PADRE), Thus PAN) PAY PBA) = 6.11, Let A and B be events in a sample space S. Show that if A and B are independent, then so are (@) A and B and (6) A and B. (@ From Eq. (6.100) (Prob, 6.6), we have P(A) = PUA B) + PAB) Since A and B are independent, using Eqs. (6.2/) and (6.4), we obtain PMB) = P(A) PAB) = PCA) = PLAYPLB) = PALL = PB) = PAP) Thus, by Eq. (6.21), A and B are independent. (6) Interchanging 4 and B in Eq, (6.102), we obtain PBA A) = PB) A) (6.102) which indicates that and B are independent. 6.12. Let A and B be events defined in a sample space S. 
Show that if both P(4) and P(B) are nonzero, then events 4 and B cannot be both mutually exclusive and independent. Let A and 8 be mutually exclusive events and P(A) #0, P(B) #0. Then PAB) = P@)=0 and PCA)P(B) #0. Therefore PAA B)# PAB) That is, A and B are not independent. 144 PROBABILITY AND RANDOM VARIABLES [CHAP. 6 6.13. Verify Ea. (6.24), Since B/S = B [and using Eq. (6.23)}, we have BOS=BA(AUALU + + U Ay) = BOA)UBOADY ++ BO Ay) Now the events B- 4g (k= 1, 2, ..., M) are mutually exclusive, as seen from the Venn diagram of Fig. 6-8. ‘Then by axiom 3 of the probability definition and Eq. (6.18), we obtain PB)= PBS) = ¥ PBA) = Y PBA PAD KG \ ~ o 6.14, In a binary communication system (Fig. 6-9), a 0 or 1 is transmitted, Because of channel noise, a 0 can be received as a 1 and vice versa. Let mp and m, denote the events of transmitting 0 and 1, respectively. Let rp and r, denote the events of receiving 0 and 1, respectively. Let P(m) = 0.5, Port) =p = 0.1, and Porglrm, 02 Pom) Poriim) Fig. 69 Binary communication system (@) Find P(r) and P(r). (8) If a 0 was received, what is the probability that a 0 was sent? (0) If a 1 was received, what is the probability that a 1 was sent? (@) Calculate the probability of error P,. (e) Calculate the probability that the transmitted signal is correctly read at the receiver. (a) From Fig. 69, we have Pom) = 1= Pom) = 1-05 = 0.5 Plralmg) = 1 = P(rsinm) = 1 =p = 1—0.1=09 Porjlm) = 1- Perm) = 1-g= 1-0.2508 CHAP. 6] PROBABILITY AND RANDOM VARIABLES 145 6.15. Using Eq, (6.24), we obtain Per) Per Perl) Pm) + Plralms)PCm: Porylp) Pom) + Ports )PCrm (©) Using Bayes’ rule (6.19), we have 0.9(0.5) + 0.2(0.5) = 0.55 0.1€0.5) + 0.80.5) = 0.45 Plone) Pll Plrrglta) = Piro) (© Similarly, Pom,)Perilr) _ (0.50.8) oe Po) (OS @ PCribmg) Png) + Plrolm,)Plon;) = 0.1(0-5) + 0.2(0.5) = 0.15 (© The probability that the transmitted signal is correctly read at the receiver is P, = Pirolimg) Pema) + Peril Pm) = 0.9(0.5) + 0.8(0.5) = 0.85 Note that the probability of error P, is Consider @ binary communication system of Fig. 6-9 with P(ralma) = 0.9, P(r;lm;) = 0.6. To decide which of the message was sent from an observed response ry oF rj, we utilize the following, criterion: If ro is received: Decide my if Plmolro) > POmlro) Decide m, if P(omlro) > POmolro) If r is received: Decide my if Pomglr,) > Pm |r) Decide my if Pomln,) > Plmlrs) This criterion is known as the maximum a posteriori probability (MAP) decision criterion (see Prob. 9.1). (@ Find the range of Pom) for which the MAP criterion prescribes that we decide mig if ro is received. () Find the range of P(mp) for which the MAP criterion prescribes that we decide m if ry is received. (©) Find the range of P(mg) for which the MAP criterion prescribes that we decide my no matter what is received. (@)_ Find the range of P(mg) for which the MAP criterion prescribes that we decide m, no matter what is received, (@) Since P(rylma) = Piralrig) and P(rglm,) = 1— Ptrylm), we have Prana) = 0.9 Piri) =0.1 Plrylm)=0.6 Plrolm) = 0.4 By Eq. (6.24), we obtain lrg) = Plralg)Plong) + Plrabm;)Plom,) = 0.9 Ping) + 0.4{1 — Pom] = 0.5 Porm) + 0.4 ‘Using Bayes rule (6.19), we have 146 PROBABILITY AND RANDOM VARIABLES (CHAP. 
6 Pordn)POm) __ 09 Pim) Pel) = Ra) OS Pim) +04 Pomlra) = Plain) Pm) _ 0.4L — Pmo)) _ 0.4-0.4 Plrmo) Pere) 0.5 Plog) + 04 0.5 Pom) +04 Now by the MAP decision rule, we decide mg if rq is received if P(m|ro) > P(m|ro), that is, 0.9P(m) __ 0.4-0.4P(m) 0.5P(mg) +0.4 ~ 0.5 P(r) + 0.4 « 09Rm) > 04=04P%m) oF 138) >04 or Pim > 4 =031 ‘Thus, the range of P(m) for which the MAP criterion prescribes that we decide my if ra is received is 031 < Pim) <1 (©) Similarly, we have POr,) = Poralme)PCmig) + P(r lm )PCm,) = 0.1 Pom) + 0.611 ~ Pom)] = ~0.5 P(g) + 0.6 04Pem) =O5P Cm) +06 = Perla) Perms) Pooley Fr PorglmyPomy) 0.611 ~ Pony) 0.6P(m) Por) -O.SPlrig) + 0.6 =0.5Plm) + 08 Now by the MAP decision rule, we deside my ifr is received if P(m;[r;) > Pool), that is, 0.6-0.6P0m) _0.1P¢m) =0.5P (mg) +08 ” =O-SPlomg) +06 Pomlri) or 046-0.6Pt0) > 0.1 Pim) oF 0.6>0.9PI) or Pom) < 2% = 086 ‘Thus, the range of Pm) for which the MAP criterion prescribes that we decide my i ris received is 0 Pom) < 0.86 (0) From the result of b) we see that the range of Pn) for which we decide m ifr is received is Pio) > 086 Combining withthe result of (c), the range of Pm) for which we decide mg no matter what is received is given by 0.86 < Pom) = 1 (@ Similarly, from the result of (a) we see that the range of Pimp) for which we decide m ifr is received is Pom) < 0.31 Combining with the result of (6), the range of P(mp) for which we decide m; no matter what is received is sven by 05 Pm) < 031 6.16. Consider an experiment consisting of the observation of six successive pulse positions on a communication link. Suppose that at each of the six possible pulse positions there can be a positive pulse, a negative pulse, or no pulse. Suppose also that the individual experiments that determine the kind of pulse at each possible position are independent. Let us denote the event that the ith pulse is positive by {x; = +1}, that itis negative by {x, = -1}, and that it is zero by (x; = 0}. ‘Assume that P(x; = +1) POX, (@ Find the probability that all pulses are positive. -I)=9=03 for CHAP. PROBABILITY AND RANDOM VARIABLES 147 (6) Find the probability that the first three pulses are positive, the next two are zero, and the last is negative, (a) Since the individual experiments are independent, by Eq. (6.22) the probability that all pulses are positive is given by Ploy = 40 G2 = HD+ AG = FD) = Ply = HD PO = HD+ + + Poe = + = p= 04 = 0.0041 (8) From the given assumptions, we have Pex, = 9) ‘Thus, the probability that the first three pulses are positive, the next two are zero, and the last is negative is given by -p-9=03 Ploy = F062 = HIN Gs = HDA Ga = HAG = HAG = —DI = Posy = + PO = +1) = +DP Gu = OPES = OPCs = =D) = P= p~ aa = 040.370.3) = 0.0017 RANDOM VARIABLES 6.17. 6.18. 6.19. A binary source generates digits | and 0 randomly with probabilities 0.6 and 0.4, respectively. (@ What is the probability that two Is and three 0s will occur in a five-digit sequence? (6) What is the probability that at least three Is will occur in a five-digit sequence? (@ Let X’be the random variable denoting the number of 1s generated in a five-digit sequence. Since there are only two possible outcomes (1 or 0) and the probability of generating 1 is constant and there are five digits, it is clear that X has a binomial distribution described by Eq. (6.85) with n = 5 and k = 2. Hence, the probability that two 1s and three 0s will occur in a five-digit sequence is nx=2)=(3)oo%04? 
=o (6) The robe hat at sine wl orn a Beg ee PUX=3)=1-P\X=2) where raxs2)=¥ ({)ooto9* =0317 ene, P= 3) = 10317 = 0.683 Let X be a binomial r.v. with parameters (1,p). Show that py(k) given by Eq. (6.85) satisfies Eq, (6.330). Recall thatthe binomial expansion formula is given by BY otyek (tbo = d (ie Thus, by Eq. (6.85), S ({)eta-pt = 041-97 dex & Show that when n is very large (n>>k) and p very small (p <1), the binomial distribution [Eq. (6.85)] can be approximated by the following Poisson distribution (Eq. (6.88)|: * 148 PROBABILITY AND RANDOM VARIABLES (CHAP. 6 PIX = y= cote 6.103) From Eq. (6.85) - see ~ 1) = = RED 6.108 ul? ) = D pkg When n> k and p< 1, then nn= 1s (nk + nen sna Ga lnpn et gmc net Substituting these slatons into Eq (6.104), we obtain Panky ew 6.20. A noisy transmission channel has a per-digit error probability p, = 0.01 (@) Calculate the probability of more than one error in 10 reccived digits. (©) Repeat (a), using the Poisson approximation, Eq. (6.103). (@ Let X’be a binomial random variable denoting the number of errors in 10 received digits. Then using Eq. (6.85) we obtain PU> I= = P(X =0)- P(X = 1) -(Jooirom”- ("Joan oss = 0.0042 (Using Eq. (6.103) with np, = 10(0.01) = 0.1, we have ee nue ye OR gana! a 0.0047 6.21. Let X be a Poisson rv. with parameter #. Show that py(k) given by Eq. (6.88) satisfies Eq. (6.33¢). By Eq. (6.88), 6.22. Verify Eq. (6.35). From Eqs. (6.6) and (6.28), we have P= POH XS) OO Fe 0) for any ¢ 0. As Fy(x) is continuous, the right-hand side of the preceding expression approaches 0 as ¢ — 0. ‘Thus, PX =x) =0. 623. The pdf of a random variable is given by k asx 0 (6.106) ‘A random variable X with the pdf given by Eq. (6.106) is called an exponential random variable with parameter a. All manufactured devices and machines fail to work sooner or later. If the failure rate is constant, the time to failure 7'is modeled as an exponential random variable. Suppose that a particular class of computer memory chips has been found to have the exponential failure law of Eq. (6.106) in hours. (@) Measurements show that the probability that the time to failure exceeds 10* hours (h) for chips in the given class is e~!(~0.368). Calculate the value of parameter a for this case. (6) Using the value of parameter a determined in part (a), calculate the time fy such that the probability is 0.05 that the time to faifure is less than i (2) Using qs. (6.38) and (6.106), we se that the distribution function of Tis given by FAO [fonds = 0-0 Now PU > 10) =1- T= 10°) 21 Fy105 = 1-0-6) = from which we obtain () We want Flto) = PCTS) = 0.05 atte Hence, 1-e 150 6.26. 6.27. 6.28, PROBABILITY AND RANDOM VARIABLES (CHAP. 6 or eI = 0.95 from which we obtain wy =-10'In0.95 = 513 ‘The joint paf of X and Y is given by Fa 3) = ke udu) where a and b are positive constants. Determine the value of constant k. ‘The value of k is determined by Eq. (6.496), that is, Jo, [testes =a [2 Pesta y =i f° omtae ema = Hence, k = ab, The joint pdf of X and Y is given by Sev) = ape F PP oudy) (a) Find the marginal pdf's fx(x) and fy()). (b) Are X and ¥ independent? (@) By Eas. (6.500) and (6.506), we have fy) = J foes vdym [sve Puaydy yey — we? Since fyy(x,y) is symmetric with respect to x and y, interchanging x and y, we obtain. fr) = ye F Puy) (©) Since frrx,») = fe(2Vfvt), we conclude that and Y are independent. 
The random variables ¥ and Y are said to be jointly normal random variables if their joint pdf is given by Fry) = Bro yoy = pF orbs 6.07 [222 ery y ox Ge Mey Ite (a) Find the marginal pdf's of ¥ and Y. (6) Show that ¥ and Y are independent when p = 0. (a) By Eq. (6.50a) the marginal pdf of ¥ is Gute) = J fark ry By completing the square in the exponent of Eq. (6.107), we obtain CHAP. 6] PROBABILITY AND RANDOM VARIABLES Ist 2 Pray = Py? ex -2622)'] beef ET fo Tia. Se consi [» = y= oe vo} fo ‘Comparing the integrand with Eq. (6.91), we see that the integrand is a normal pdf with mean hy + PLO wy) or and variance ole) Thus, the integral must be unity, and we obtain 1 SW exp[—Ce~ 20%] 6.108) Ina similar manner, the marginal pdf of ¥ is L100= [fares yids = eal -0-- 2a] (6.109) Pray (©) When p=0, Eq. (6.107) reduces to food ssf (28) +22) T} at 152) bam] fxr) Hence, X and Y are independent. FUNCTIONS OF RANDOM VARIABLE: 6.29, If Xis Nu:o2), then show that Z = (X'—4)/o is a standard normal r.v.; that is, N(0; 1) Th dois rays ean = o(XHs:) = parszesw = [encore By the change of variable y = (x y/o (that is x= oy + 1), we obtain Fae) = Pa<2)= | and dE (fz) Sae= which indicates that Z = NO; D. 152 6.30. PROBABILITY AND RANDOM VARIABLES ICHAP. 6 Verity Eq. (6.57). ‘Assume that y = g(x) is a continuous monotonically increasing function (Fig. 6-10(@)). ‘Then it has an inverse that we denote by x= 7") = hi). Then Fr) = POY $1) = PX = hy) = Fy] (6.110) and a4 r= 4, fr) = GF) = FF HOM Applying the chain rule of differentiation to this expression yields a Fx) = flO HO) which can be written as SOIR KOE =O (6.111) If y = g(x) is monotonically decreasing (Fig. 6-10(6)], then Fy) = POS)) = PIX > hy} = 1— FG] (6112) and y= By =f LO)" ZF) = “hoo x= hy) (13) Combining Eqs. (6.1/1) and (6.113), we obtain 6.31. (a) () Fig. 6-10 — penltl = pay fh Foy = feof = stron HE hich is valid for any continuous monotone (increasing or decreasing) function y = go Let Y= 2¥+ 3. Ifa random variable X is uniformly distributed over [-1,2], find fy(y). From Eq. (6.105) (Prob, 6.28), we have Fav=f ‘The equation y = g(x) = 2x+3 has a single solution x; = (— 3)/2, the range of y is (1,7), and g(x) = ‘Thus, ~1 =x) <2 and by Eq. (6.58) fo risx<2 0 otherwise CHAP. 6] PROBABILITY AND RANDOM VARIABLES. 153, Sr=ev={b ncwise otherwise 632. Let ¥= aX +b. Show that if ¥= Nquso"), then ¥= Nequ +b; 0°). ‘The equation y = g(x) = ax +b has a single solution x; = (y—5)/a, and g'(x) = a. The range of y is (C+, ), Hence, by Eq. (6.58) relate no= gh) 11) Since ¥ = N(u; 0"), by Eq. (6.91) fils) = Fo ox0] —5e3 00? ] (6.15) Hence, by Eq. (6.114) fs) =| 3 (*- ) 1 = Baale | 28a 1 1 sa aeee | (6.116) which is the pdf of N(qu +6; a%o?). Thus, if ¥ = N(u;o"), then ¥ = Nau + 6;a"o*). 633, Let Y= X?. Find fy») if ¥ = NOD. If y <0, then the equation y = x" has no real solutions; hence, fy() If y > 0, then y =x? has two solutions may na—W Now, y= g(x) = 2? and g!(x) = 2x. Hence, by Ea. (6.58) 1 fr) ZUR + FW) (6.117) Since ¥= N(O;1) from Eq, (6.91), we have Lyn jy(x) =e (6.11 Fi) = Fe (6.118) Since f(x) is an even function from Eq. (6.117), we have 1 1 ~ fro =f uy PQ) 6.119 Fu) = FPaJIMO) = eeu) (6.119) 634, The input to a noisy communication channel is a binary random variable X with P(X =0) = P(X = 1) =}. The output of channel Z is given by X+ Y, where Y is the additive noise introduced by the channel. Assuming that X and Y are independent and Y = N(0;/), find the density function of Z. Using Eqs. 
(6.24) and (6.26), we have F,(2) = PZ <2) = PZ 21K = P(X = 0) + ZS AIK = DPW =D Since Z=NX+Y PZ <2X=0)= PX + YSAlX= = PY $2) = Fy) Similarly, PZ =a =1)= P+ "AN = PY < 2-1) = Fy(e= 1) 154 6.38, 6.36. PROBABILITY AND RANDOM VARIABLES Hence, Since ¥ = NO;1), and Consider the transformation Z=aX+bY¥ W=cxX+d¥ Find the joint density function fzy(z, w) in terms of fy(x,9)- If ad~ be #0, then the system axtby=z extdy=w hhas one and only one solution: e y= yetnw wer Since (Eq. (6.68)} a & ronal Hl-[f fart jax Oy Eq. (6.67) yields Sowtew) = Tada bay + Bor ye +) Let Z= X+ ¥. Find the pdf of Z if ¥ and Y are independent random variables. We introduce an auxiliary random variable W, defined by a & sso-ffs B)-[b ter Eq. (6.67) yields for by setting a = =d=1 and c=0in Eq. (6.122) Lawl, 9) = fye—,w) Hence, by Eq. (6.50a), we obtain [CHAP. 6 (6.120) (6.121) (6.122) (6.123) CHAP. 6] PROBABILITY AND RANDOM VARIABLES 155 637. 6.38. ster [fete [Pile mode co If X and Y are independent, then fe) = [7 fue wyfdon an 6.25) Which is the convolution of functions fy(x) and fy). ‘Suppose that Y and Y are independent normalized normal random variables. Find the pdf of X+Y. ‘The pdf's of X and ¥ are w= lett Ler See) Te fro) Ta ‘Then, by Eq, (6.125), we have fete) = [fale wyflon an -[ewrrd oo 2m vin foal oe “le “Eo oa “keh nf tal Let w= Vw =2/V2. Then Lael ft FO NF) Fk Since the integrand is the pdf of N(0;1), the integral is equal to unity, and we obtain 1 LO = RE eel (6.126) which is the pdf of NO; v2). Thus, Z, is a normal random variable with zero mean and variance V2, Consider the transformation R=V¥+¥ © Find fgo(r,0) in terms of fyy(x,9). (6.127) We assume that r = 0 and 0 <0 2n, With this assumption, the system feepar tavit=o hhas a single solution: x=rcos@ y=rsin @ Since [Eq. (6.68)} 156 6.39, PROBABILITY AND RANDOM VARIABLES ax ax] Br Bo|_|cos 8 —rsin 0 Fon = sin 8 reos 0 Eq. (6.67) yields Sn0l0s0) = tfeylro0s 8, rsin 8) A voltage V is a function of time't and is given by Vit) = Xeos wt + ¥sin ot in which @ is a constant angular frequency and X= (@ Show that ¥/t) may be written as vO Roos (wt- ©) (©) Find the density functions of R and ©, and show that R and are independent. @ Ven) = Xo0s wt + Ysin ot =v ¥( X08 cot + at snc) NR Re = VIP F(c0s © cos wt + sin © sin ot) Reos (wi) whe BaABTP ont One!! (b) Since X= ¥= N(O; 0%) and are independent, from Eqs. (6.51) and (6.91) L fn =e Viza® 1 = cerAyoe) Thus, using the result of Prob, 6.38 (Eq. (6.128)), we have Suet, 8) = tfyv(reos 8, rsin 6) reine) Using Eqs. (6.50a) and (6.506), we obtain -*/@64) f dg = Le Fiae fai0)= [1 fro -{* r, Ode "He PIO) dy 1 FoM= fc Saale Oc 2no* Jc 4 Qn and Srolt 0) = fafa) Hence, R and @ are independent. ‘Note that © is a uniform random variable, and R is called a Rayleigh random variable. STATISTICAL AVERAGES 6.40. The random variable X takes the values 0 and 1 with probabilities » and f Find the mean and the variance of X. Using Bqs. (6.69) and (6.70), we have (CHAP. 6 (6.128) (6.129) = N(; 0?) and they are independent. (6.130) (6.131) (6.132) (6.133) (6.134) (6.135) 4, respectively. cHaP.6] PROBABILITY AND RANDOM VARIABLES 187 ber = EU) = 0G) + 108) = B EL] =F @) + P= BP From Ea. (6.77) of = EDP ]- LX? = f= B= a8 6.41. Binary data are transmitted over a noisy communication channel in a block of 16 binary digits. The probability that a received digit is in error due to channel noise is 0.01. 
Assume that the errors occurring in various digit positions within a block are independent, (a) Find the mean (average number of) errors per block, () Find the variance of the number of errors per block. (c) Find the probability that the number of errors per block is greater than or equal to 4. (@)_ Let X be the random variable representing the number of errors per block. Then X has @ binomial distribution with n= 16 and p = 0.01, By Eg. (6.87) the average number of errors per block is. EX) = np = (160.01) = 0.16 () By Eq, (687) Ge = mpl = p) = (16)(0.01)(0.99) = 0.158 © P= 4)=1-PX<3) Using Bq, (686), we have ; nvea~¥ (!8)oorosn} 094s Hence, PIX = 4) = 1- 0.986 = 014 642 Verify Ea. (6.90), By Eq. (6.69) mea zini= Sine =n=04 Seg Similarly, Bur p)= F Kk-prar=H=0404 Fe a & &e Se et eS ES or ED? = X= EL? )- B(x) = BLP \-a= Thus, EIR )=0 +a (6.136) ‘Then using Eq. (6.77) gives oF = EWN I-ELMY? = +a) =o 6.43. Verify Eq. (6.95). Substituting Eq. (6.91) into Eq. (6.69), we have we = EWN [lscertn ae 158 PROBABILITY AND RANDOM VARIABLES [CHAP. 6 Changing the variable of integration to y= (x—)/a, we have Bula Fel or ne"ay =” yey £ Len, Fel to tf” peerta The first integral is zero, since its integrand is an odd function. The second integral is unity, since its integrand is the pdf of N(0; 1). Thus, by = BUX) = 6 From property 2 of fy(x) (Eq. (6.376)], we have [ert ace ve (6.137) Differentiating with respect to «we obtain f SW? ot OM dy = STR ‘Multiplying both sides by o?/V/2r, we have Fag | meee = RICE 6.44, Let ¥= N(0;0). Show that m= Ewi={} Le3eesG= Dot (6.138) X= NO07) = f(x) Fe ete) The odd moments max of X are 0 because f(—x) = fx(x). Differentiating the identity feten ie 6.139) ‘times with respect to a when m= 2k, we obtain fiten my = EL] = Setting «= 1/(20”), we have [meres vine: 163 k= Io 6.45, Verify Eq. (6.72). Let fy(x,») be the joint density function of ¥ and ¥. Then using Eq. (6.71), we have ere n= ff ort vet dey = [i Fate ana+ [ [atic aay CHAP. 6] PROBABILITY AND RANDOM VARIABLES 159 6.46. 6.47. 6.48, Using Eqs. (6.50a) and (6.69), we have Ff vtiven ani f° [7 freon ay] as = [7 et de = £00 Ina similar manner, we have [ fotirenn acts = f° wfion y= 200 Thus, BUX+ N= BLN +E If X and ¥ are independent, then show that EUXY|= EINE) (6.140) and Elg(Og2) = Ele e200) (6.141) CX and ¥ are independent, then by Eqs. (6.51) and (6.72) we have evan = f° f sfscarvon dey = [ie as |” ono = eee Similarly, Big anni = |" fF sicanoriervio) day = [7 escrvson ax [” es0vvon y= Ble C0) Ele) Find the covariance of ¥ and ¥ if (a) they are independent and (6) Y is related to X by aX +b, (@) If ¥ and ¥ are independent, then by Eqs. (6.81) and (6.140) oxy = ELXY— EXIELY) ME(Y)- ELEY = ® E[XY) = E[XaX + 6)) = aE 1X7) + bELX) = aE X71 + by E(Y¥] = EfaX +6] = aE(X] +b = apy +b E(XY)—E(X) E(Y) = aE [X°] + bux urlany + 6) = a(E(X")— 13) = act (6.143) (6.142) wy Thus, oxy Note that the result of (@) states that if V and Y are independent, then they are uncorrelated. But the converse is not necessarily true. (See Prob. 6.49.) Let Z = aX + bY, where a and b are arbitrary constants. Show that if X and Y are independent, then oh = doy + boy (6.144) 160 6.49, PROBABILITY AND RANDOM VARIABLES (CHAP. 6 By Eqs. (6.72) and (6.73), biz = E[Z] = ElaX + BY) = aX) + BELY| = any + buy By Eq. (6.75) oe [2-12] = Efex +61) - cur + buy} = Effex ny) + 60"~ uy = @E[A~ ny?) + 2abB Kw dO ny) + PE wy) = Pa + 2abE UX uyN(¥— wy) + Poy, (6.145) Since and ¥ are independent, by Eq. 
(6.142) E\(X= wy Y= py) = EX ~ py) BLY py] = 0 Hence, = doh +60} Let X and Y be defined by X= cos® and Y= sin@ where © is a random variable uniformly distributed over (0, 2r] (@) Show that ¥ and ¥ are uncorrelated. () Show that X and ¥ are not independent. (@) From Eq. (6.105) sao {i o 0, Pure a) se (6.146) where uy = E[X1. This is known as the Markov inequality From Eq. (6.376) pix a)= [fc ds Since fil) = 0 for x <0, me BUX = [xen axe J” sfx da Hence, eo) de = Peay For any € > 0, show that % PUX- ul 2083 (6.147) where ty = E[X] and 6% is the variance of X. This is known as the Chebyshev inequality. From Eq. (6.370) Pix-ml=o=[" nto de+ |” gerae= Ff fooae By Eq. (6.75) “ouside f ommrherde=e f flnde nine bein Hence fy ave or PUX— pl FOS Let X and Y be real random variables with finite second moments, Show that (EY)? s ELLE] (6.148) This is known as the Cauchy-Sehwarz inequality. Because the mean-square value of a random variable can never be negative, E(X—a¥F1=0 for any value of a. Expanding this, we obtain EUP]- 228 (XY) +E] = 0 Choose a value of a for which the left-hand side of this inequality is minimum a 2XN1 “EA 162 6.53, 654, 635. 656. 657. 658. PROBABILITY AND RANDOM VARIABLES ICHAP. 6 which results in the inequality ppt EO? eu FP =o or (EUXY? = EL?) EL] Verify Eq. (6.84). From the Cauchy-Schwarz inequality Eq. (6.148) we have {ELC Y= wyyIP BLK wy EL y)?) or oy S OYoy The 2, - or ny =a Sl PY Got from which it follows that lox = Supplementary Problems For any three events 4, B, and C, show that PABUO) = P(A) + PB) + P(C)- ANB) =PBOO= RCO A)+ PAN BAO, Hint: Write AU BU C= AU(BUO) and then apply Eq. (6.8). Given that P(A) = 75, find (a) P(A WB); (6) P(A OB); and (c) PAB) 9, B)=0.8, PA B)= Ans, (a) 0.95; (b) 0.15; (€) 0.08. Show that if events 4 and B are independent, then PA AB) = PA)PB) Hint: Use Eq. (6.102) and the relation T=A@oDVaOB) Let A and B be events defined in a sample space S. Show that if both P(A) and P(B) are nonzero, then the events A and B cannot be both mutually exclusive and independent. Hint: Show that condition (6.21) will not hold A certain computer becomes inoperable if two components A and B both fail. The probability that A fails is 0.01, and the probability that B fails is 0.005. However, the probability that B fails increases by a factor of 3 if A has failed. (@) Calculate the probability that the computer becomes inoperable. (®) Find the probability that 4 will fail if B has failed. Ans. (a) 0.00015 (6) 0.03 CHAP. 6] PROBABILITY AND RANDOM VARIABLES 163 659. 661. 6.82. 6.3. 6.64, A certain binary PCM system transmits the two binary states Y= +1, ¥=—1 with equal probability, However, because of channel noise, the receiver makes recognition errors. Also, as a result of path distortion, the receiver may lose necessary signal strength to make any decision. Thus, there are three possible receiver states: Y= +1, ¥=0, and ¥=—1, where ¥ = 0 corresponds to “loss of signal." Assume that POY = -IX= 41) = 0.1, POY = HX = 1) = 02, and PLY = OLY = +1) = PY = OLY = =1) = 0.05. (@) Find the probabilities P= +1), PO” =—1), and POY = 0). (®) Find the probabilities P(X = +1IY = +1) and POW =-11¥ = —D. Ans. (a) POY = +1) = 0.525, P= =1 () P= 411Y = +1) =081, POX ‘Suppose 10 000 digits are transmitted over a noisy channel having per-digit error probability p = 5x 10-*. 
Find the probability that there will be no more than two-digit errors Ans, 09856 Show that Eq, (6.97) does in fact define «true probability density; in particular, show that J faoo acm “Hint: Make & change of variable [= (= 1/0] and show that 1- [PP w= Vin which can be proved by evaluating P by using the polar coordinates. A noisy resistor produces a voltage V,(1). At ¢= 1), the noise level X= Vq(ty) is known to be a gaussian random variable with density ‘Compute the probability that [1 > ka for k Ans, P(X| > 0) = 03173, PAX] > 20) = 0455, (|X| > 30) = 0.0027 ‘Consider the transformation ¥ = 1/X. (@) Find fy) in terms of fx. 0) Whe) = 3 find fro Ans. (@) Seo) = Sh 3) Ilan) eee Note that ¥ and Y are known as Cauchy random variables. ® f= [Let X and ¥ be two independent random variables with filo) = ae“ ulx)—foly) = feu) Find the density function of Z= X+ ¥. 164 6.065. 6.6. 697. 6.68. 6.69. 6.70, 6m. PROBABILITY AND RANDOM VARIABLES ICHAP. 6 Let X be a random variable uniformly distributed over (a, 6]. Find the mean and the variance of X. bray Ans. wy = Let (&, 1 be bivariate ev. IW and Fare independent, show that X'and Y are uneorelaed Hint Use as. (6.78) 204 (65.51 Let (&,¥) bea bivariate withthe joint pt fa =PBZ PP we nye Show that Xand ¥are not independent but are uncorrelated. Hint: Use Bas. (650) and (6.9) Given the random variable X with mean wy and oj, find the linear transformation Y= a¥ + such that hy =0 and o — Define random variables Z and W by Xtay W=X-a¥ where a is a real number. Determine a such that Z and W are orthogonal. ‘The moment generating function of 1 is defined by MxQ)= Be} = | flsdeds where J is a real variable. Then m= EX" k=12 o where wo) = 2, (@) Find the moment generating function of X uniformly distributed over (a,b) (Using the result of (a), find ELX], EX), and EL}. P Ans. (a) =a) OFM =HO+W, BWI =KW + ud +e), BUA = HO + Pat be Fay Show that if ¥ and ¥ are zero-mean jointly normal random variables, then EW Y| = EWP EL?) +26 ery? Hint: Use the moment generating function of X and Y given by MyyGiyda) = Ele) - EEE (fewest me RANDOM PROCESSES ODUCTION 1s for random message signals and noise encountered in communication systems are developed in er. Random signals cannot be explicitly described prior to their occurrence, and noises cannot be by deterministic functions of time. However, when observed over a long period, a random signal may exhibit certain regularities that can be described in terms of probabilities and statistical uch a model, in the form of a probabilistic description of a collection of functions of times, is andom process. IONS AND NOTATIONS OF RANDOM PROCESSES a random experiment with outcomes }. and a sample space S. If to every outcome 4 € S we -valued time function X(t, 2), we create a random (or stochastic) process. A random process refore a function of two parameters, the time f and the outcome 2.. For a specific 2, say, 2, we time function X(t, 2,) = x(t). This time function is called a sample function. The totality of all ions is called an ensemble, For a specific time 1, X(jy A) =X; denotes a random variable, 4) and fixed A= 4), X(G,4) = (6) is a number. indom process is sometimes defined as a family of random variables indexed by the parameter is called the index ser. | illustrates the concepts of the sample space of the random experiment, outcomes of the sociated sample functions, and random variables resulting from taking two measurements of ctions. ywing we use the notation X(t) to represent X(¢, 2). 
'S OF RANDOM PROCESSES Expressions: random process X(t). For a particular time f,, X(¢,) =X; is a rancom variable, and its tion Fy(%;t) is defined as Fy(xist,) = PAX(,) a» Then (7.65) 2m apt ‘And Sye(@) and Ryx(2) of band-limited white noise are shown in Fig. 7-4. Note that the term white or band-limited white refers to the spectral shape of the process X(t) only, and these terms do not imply that the distribution associated with X(*) is gaussian | Sets) Regt) a oe z Ter Owe = 5 o Fig. 7-4 Band-limited white noise D. Narrowband Random Process: ‘Suppose that X(°) is a WSS process with zero mean and its power spectral density Syx(c) is nonzero only in some narrow frequency band of width 2W that is very small compared to a center frequency ey as shown in Fig. 7-5. Then the process X(t) is called a narrowband random process. In many communication systems, a narrowband process (or noise) is produced when white noise (or broadband noise) is passed through a narrowband linear filter. When a sample function of the narrowband process is viewed on an oscilloscope, the observed waveform appears as a sinusoid of random amplitude and phase. For this reason, the narrowband noise X(t) is conveniently represented by the expression X() = V(Ne0s [act + $O) (7.66) CHAP. 7] RANDOM PROCESSES 175 Sxxlo) 0 cS . om w Fig. 7.5. Narrowband random process where Vit) and (0) are random processes that are called the envelope function and phase function, respectively. Equation (7.66) can be rewritten as X() = Vi eos H(0) cos «1 ~ V(sin P(A)sin «wt (7.67) = X-(0)08 cet — X,()sin wet where X,(9 = Videos (0) (in-phase component) (7.68a) X,() = Vsin G2) (quadrature component) (7.686) Vin = yX20) + X20) (7.694) = tan SO P(t) = tan’ XO (7.69b) Equation (7.67) is called the quadrature representation of X(t). Given X(t), its in-phase component X,(t) and quadrature component X,(t) can be obtained by using the arrangement shown in Fig. 7-6. Low-pass “0 fiter ce xo) 208 ot Low-pass xe filter ss ~2sin wet Fig. 7-6 Properties of X(@) and X,(0: 1, X:(# and X,(0) have identical power spectral densities related to that of X(2) by §. (ow) = S¢(@) = {S(O — Oe) + Sx $0.) lol = W Sx,(@) = (0) = {9 otherwise ae 176 RANDOM PROCESSES (CHAP. 7 2. X.(0 and X,(1) have the same means and variances as X(t) Hy, = Hx, = Hx = 0 (7.71) OR, = 0%, = OF (7.72) 3. X.(0) and X,(#) are uncorrelated with each other: ELX(OX,()] = 0 (7.73) 4. IF X() is gaussian, then X.(0) and X,(0) are also gaussian, 5. IFX(0) is gaussian, then for fixed but arbitrary 1, V(0) isa random variable with Rayleigh distribution and ‘(0 is a random variable uniformly distributed over {0,22}. (See Prob. 6.39.) Solved Problems RANDOM PROCESSES AND STATISTICS OF RANDOM PROCESSES 7.1. Consider a random process X(#) given by xO where A and co are constants and @ is a uniform random variable over [~x, 7]. Show that X(2) is WSS. \cos (wt + ©) (7.74) From Eq, (6.105) (Prob. 6.23), we have a <0= fa) =} 2n “0 =" 0 otherwise Thus, 1x60 = EIXCO= J Acos or + Bo(®)a AP swurehatno ea Rutt) = EKOXO +O) re Hf cm ort conten 9 + oat Are HEP Meas or co aor20+ woo a = cos wt (7.76) Since the mean of X() is a constant and the autocorrelation of X(t) is a function of time difference only, we conclude that X(0) is WSS. [Note that Ryx(2) is periodic with the period To = 2n/«. A WSS random process is called periodic if its autocorrelation is periodic. 7.2. Consider a random process X() given by X(t) = Acos (cot + 0) (7.77) where © and @ are constants and A is a random variable. 
Determine whether X(t) is WSS. cHap. 7] RANDOM PROCESSES 7 E(X(t)] = ElAcos (wt + 8)) 08 (ot + EIA] (7.78) Which indicates thatthe mean of X() is not constant unless EA] = 0. Ret +) = KON + 9) = E{AP cos (wt + O) cos foxt +2) + ON) Moos wt + 00s 2a +26 + x)]E[A"] (7.79) ‘Thus, we see thatthe autocorrelation of X(t) is not a function of the time difference + only, and the process X(t) is not WSS, 73. Consider a random process ¥(t) defined by ro= J) xeoas (7.80) where X(#) is given by X( = Acos oot (781) where @ is constant and A = N{0;07]. (@)_ Determine the pdf of ¥(¢) at t = 1. @) Is Y) WSS? sin cf ot @ via)= f Acos oxae = a) Then from the result of Prob. 6.32, we see that Yi.) is a gaussian random variable with EWG) = et eal =0 (783) and 784) Hence, by Eq, (6.91), the paf of Y(t) is 1 ret Si = Fe erie (785) (®) From Eqs. (7.83) and (7.84), the mean and variance of ¥(1) depend on time s(,), so Y() is not WSS. 7.4. Consider a random process X(t) given by X(#) = Acos wt + Bsin ot (7.86) where o is constant and A and B are random variables. (@) Show that the condition EIA] = E(B] = 0 (7.87) is necessary for X(t) to be stationary. () Show that X(0 is WSS if and only if the random variables A and B are uncorrelated with equal variance, that is, EUAB] 0 (7.88) 178 RANDOM PROCESSES [CHAP. 7 and E{A?] = E(B?] =o’ (7.89) (2) y(t) = BLX() = ElAloos ot + B{B]sin oot must be independent of + for X() to be stationary. This is possible only if y(t) = 0, that is, FIA) = EIB) =0 (®)_IFX@) is WSS, then from Eq. (A7.17) ax) = 2%) |= Rx) = oh =8 but xo=A ad x2 Thus, Ela") = EIB] = 0% Using the preceding result, we obtain Rll, +) = EKOXC +O] ETA cos wt + B sin w1)[A £08 ot + 2) +B sin oxt + 2) = o%cos ox + EIAB]sin 2a +1) (7.90) ‘hich will be a function of « only if E[AB] = 0. Conversely, if E(AB] = Oand £{A?] = E[B?] = o?, then from the result of part (a) and Eq. (7.90), we have Bx) = 0 Ralt,t+2) = ae0s or = Ryx(2) Hence, X(0) is WSS. 7.5. A random process X(t) is said to be covariance-stationary if the covariance of X(t) depends only on the time difference t = fy ~1,, that is, Calt,t + = Cx 791) Let X(t) be given by X() = (A+ Deos t+ Bsin t where A and B are independent random variables for which FIA] = E[B]=0 and FTA’) = E{B*}=1 Show dat X() is uot WSS, but it is covariance-stationary. ELX(0)] = E[(A + 1) 008 ¢+ Bsin )) BUA + 10s 1+ ELB)sin + = cost Bx ‘which depends on 1. Thus, X(@) cannot be WSS. Rexltys fe) = EXO IKE] = BIA + 1)e0s t, + Bsin t][(A + 1e0s #2 + Bsin 41) H+ fost t+ EBM sin sin EA + Bos sin sn 0s) CHAP. 7) RANDOM PROCESSES 179 Now afta + 0] = ea? +2841) = 2} + 2A +1 EQ(A + 1)B] = E(AB] + EIB) = E{AIELB] + E(B] = 0 EP) =1 ‘Substituting these values into the expression of Ry(t,f2), we obtain Rett td 2c0s f, cos ty + sin fy sin fy = 08 (fy) + 608 1, 608 fy From Eq, (7.9), we have Cael 2) = Ritts) — malta) = cS (f2~ 1) + E08 M1608 fy = 608 £008 fp = c0s (=) Taus, X(0) is covariance-stationary. 7.6. Show that the process X(#) defined in Eq. (7.74) (Prob. 7.1) is ergodic in both the mean and the autocorrelation, From Eq. (7.20), we have 5=600)= Jind | i Avos (one T A [ha =F [cos (or dr =0 (7.92) TJ where Ty = 2n/ From Bq. (7.21), we have Rex() = Watt + 0) = jn} me orton 49-40 = J Nos onscodton 428 on A oe oss) HG) = BIX(O = (st) = Rag (0) = ELXOXE + 1] = WOKE +9) = Rag) Hence, by definitions (7,24) and (7.25), we conclude that X(0) is ergodic in both the mean and the autocorrelation, CORRELATIONS AND POWER SPECTRA. 7.7. 
Show that if X(#) is WSS, then Fx +) - XP] = AR) — Rex (7.94) where Ryx(t) is the autocorrelation of X(). Using the linesrity of E (the expectation operator) and Eqs. (7.15) and (7.17), we have 180 RANDOM PROCESSES (CHAP. 7 E[X0+ 2x0 + OX] +X°0] = afta + 0] 220+ OKO EL (0) — 2Rax(2) + Real) [Rex (O) — Rex] efxe + 9-x00P 7.8. Let X() be a WSS random process. Verify Eqs. (7.26) and (7.27), that is, @ Ryx(—1) = Rex) o [Rx] = Rex(O) (@) From Bq. (7.15) Rex() = FIXOXC +91 Setting 1-2 =, we have Reels) = EX = OX) = EXOXE -2)) = Ry) ® Am *x¢+oF]=0 oe Ate =xoKe+9 +P e+0]=0 or Axo] = 22KOXe+ 0) +AXC+9]=0 or 2Ryx(0) + WRex(t) =O Hence, Ryx(O)™ [Rex 7.9. Show that the power spectrum of a (real) random process X(0) is real, and verify Eq, (7.39), that is, Sual-0) = Sx(o) From Eq, (7.36) and by expanding the exponential, we have Salo) = J" Rates = Fh Rarer orn on de 798) = fF tetoces antes” Ratonin onde Since Re(2) = Ry) I. (7-26 th iaginay term in Ey (7.95) hen vanishes and we obtain 7 (7.96) Syl) = f. Ryy(2)c0s ordre Which indicates that Se) is eal Since the cosine is an even funtion ofits arguments, that is cos (0) = 60s wt, it follows that Suto) = Sato) which indicates that the power spectrum of (0) is an even function of frequency. CHAP. 7] RANDOM PROCESSES 181 7.10, 7A. A class of modulated random signal ¥() is defined by ¥(#) = AX(t)cos (w.t + ©) (7.97) where X(0) is the random message signal and Acos (,t + @) is the cartier. The random message signal _X( is a zero-mean stationary random process with autocorrelation Ryx(t) and power spectrum Syx(0). ‘The carrier amplitude A and the frequency «, are constants, and phase © is a random variable uniformly distributed over [0,2n]. Assuming that X(t) and © are independent, find the mean, autocorrelation, and power spectrum of Y(t). yl®) = ELV) = EIAX() 0s (01 + )) AEIX(O)Efe0s (o.t + ®)] = 0 since X(t) and © are independent and EIX()] = 0. Ryltst +2) = EYOME+ 9] = BAX OX + 100s (oa + eos [o.(t-+) + OT] Faxoxe + DIE le0s w-t +005 (200,t + «ct + 20)] & Re)c08 cnt = Rl) 738) Since the mean of ¥(A is a constant and the autocorrelation of Y(0) depends only on the time difference r, ¥(d) is WSS. Thus, Srr(0) = F IRyO) = FF Rex(2)e08 04] By Eqs. (7.36) and (/.76) FRx) = Sl) F (008 st) = 1500 ~ 0) + AB + 0.) “Then, using the frequency convolution theorem (1.29) and Eq, (1.36), we obtain Sm(0) = £ Sq(0) # (rd(@- 04) + FO +0) A iSqo-0,) + Salo+ 0) 79) Consider a random process X(j) that assumes the values A with equal probability. A typical sample function of X(¢) is shown in Fig. 7-7. The average number of polarity switches (zero crossings) per unit xo | UU Fig. 7-7 Telegraph signal 182 RANDOM PROCESSES (CHAP. 7 time is a. The probability Of having exactly k crossings in time + is given by the Poisson distribution (Eq, 6.88) 5 (an i where Z is the random variable representing the number of zero crossings. The process X(f) is known as the elegraph signal. Find the autocorrelation and the power spectrum of X(t). PZ=b (7.100) 1f¢ is any positive time interval, then PROX+ OL = AP(X() and X(-+ 9) have same signs] + CAYPIXG and X(C+ 2) have different signs] = A°PIZ even in (1+ 9] = A°PLZ odd in (1 +9] Rett +0 ay nat a (oot ey net a ee wen Fey en FN pene atom tom which indicates that the autocorrelation depends only on the time difference *. By Eq, (7.26), the complete solution that includes + < 0 is given by Rx) = (7.102) which is sketched in Fig. 7-8(a) Taking the Fourier transform of both sides of Eq. (7.102), we see that the power spectrum of X() is (Ea. 
(55) fe fe(o) = A 7.103 Sato) = 4 ay ea which is sketched in Fig. 7.8). me Sexo) 3 4 Seen o Ps Fig. 7-8 7.12. Consider a random binary process X() consisting of a random sequence of binary symbols 1 and 0. A typical sample function of X() is shown in Fig. 7-9. It is assumed that 1, The symbols 1 and 0 are represented by pulses of amplitude +A and ~A V, respectively, and duration 7, s. 2, The two symbols 1 and 0 are equally likely, and the presence of a 1 or 0 in any one interval is independent of the presence in all other intervals. CHAP. 7] RANDOM PROCESSES 183 3. The pulse sequence is not synchronized, so that the starting time ty of the first pulse after ¢ = 0 is ‘equally likely to be anywhere between 0 to Ty. That is, tg is the sample value of a random variable T, uniformly distributed over (0, Ts) ne “A Fig. 7-9 Random binary signal Find the autocorrelation and power spectrum of X(t) ‘The random binary process X(P) can be represented by x= 5 Agee) (7.104) where {4} is a sequence of independent random variables with P[A, = A] = PLA, amplitude pulse of duration 7, and Ty is a random variable uniformly distributed over (0, 7} » pC) is a unit ald) = EXO) = FA) = A +30) = 0 7.105) Let, > th When fy > Th then and t,t fl in ferent pulse intervals (Fig. 7-10 ()] and the random variables X(t) and X() are therefore independent. We ths have IX )K(e)] = ELRCANIETKC) = 0 7.106) When 24 o Fig. 7-20 CHAP. 7] RANDOM PROCESSES 199 136. 731. 1.38. 7.239, Which is plotted in Fig. 7-20(6). From Fig. 7-20(a) 1 EXO] FJ, Saloon = bof on Law 2 From Fig. 7-206) 2, 1 w 1 AZO} = IO] = 5.0) [nde = ZEMeW) = 2nd Hence, EXO) X20] HO] = en2W) = 2B 7.161) Supplementary Problems Consider a random process X(#) defined by X( = cos OF where Q is a random variable uniformly distributed over [0,c]. Determine whether X(0 is stationary. Ans, Nonstationary Hint: Examine specific sample functions of X() for different frequencies, say, Q = n/2, m, and 2e. Consider the random process X() defined by X(@) = Acos or Where @ is a constant and 4 is a random variable uniformly distributed over (0, 1). Find the autocorrelation and autocovariance of X(). Ans. Realist Caux(ta fa) = 73008 1,608 fy Let X(9 be a WSS random process with autoeorrelation Ryx(s) = Ae" Find the second moment of the random vasisble ¥ = X(5)~ X(2) Ans, 2A(1-€*) Let X() be a zero-mean WSS random process with autocorelaton Ryx(). A random variable Y is formed by integrating 40): if Hf oa Find the mean and variance of ¥. TJo Ans. jy = Rex) 200 RANDOM PROC! [CHAP. 7 740. A sample function of a random telegraph signal X() is shown in Fig. 7-21. This signal makes independent random shifts between two equally likely values, A and 0. The number of shifts per unit time is governed by the Poisson distribution with parameter 0. x if Fig. 7-21 (a) Find the autocorrelation and the power spectrum of X(). (©) Find the rms value of X(0. a 2 Mt eh Selo a Tbe Sela) =F OOF Ans, (a) B(o) +A Rex! A 2 TAL. Suppose that X(0) is @ gaussian process with By = 2 Ryyls) = Se Find the probability that X(4) = 1 Ans. 0.159 7.42. The output of a filter is given by ¥() = X@41)-XU- where X() is a WSS process with power spectrum Syx(o) and Tis constant. Find the power spectrum of ¥() Ans, Spa) = 4sin *@TSy(0) 743. Let X() is the Hilbert transform of a WSS process X(0). Show that Ryg(O) = ELXOK)] = 0 Hint, Use relation (b) of Prob, 7.20 and definition (2.26). 7.44, When a metallic resistor R is at temperature T, random electron motion produces a noise voltage V(0) atthe open- circuited terminals. 
This voltage $V(t)$ is known as thermal noise. Its power spectrum $S_{VV}(\omega)$ is practically constant for $f \le 10^{12}$ Hz and is given by

$S_{VV}(\omega) = 2kTR$

where $k$ = Boltzmann constant $= 1.37(10^{-23})$ joules per kelvin (J/K), $T$ = absolute temperature, kelvins (K), and $R$ = resistance, ohms ($\Omega$).

Calculate the thermal noise voltage (rms value) across the simple RC circuit shown in Fig. 7-22 with $R = 1$ kilohm (k$\Omega$), $C = 1$ microfarad ($\mu$F), at $T = 27^{\circ}$C.

Ans. $\sqrt{kT/C} \approx 0.064\ \mu V$

Chapter 8

NOISE IN ANALOG COMMUNICATION SYSTEMS

8.1 INTRODUCTION

The presence of noise degrades the performance of communication systems. The extent to which noise affects the performance of a communication system is measured by the output signal-to-noise ratio or by the probability of error. In this chapter, we review the effect of noise on the performance of analog communication systems. The signal-to-noise ratio is used to measure the performance of analog communication systems.

In the following analysis, we will be mainly concerned with the additive noise that accompanies the signal at the input to the receiver.

8.2 ADDITIVE NOISE AND SIGNAL-TO-NOISE RATIO

A schematic of a communication system is shown in Fig. 8-1. It is assumed that the message at the transmitter input is modeled by the random process $X(t)$, that the channel introduces no distortion other than additive random noise, and that the receiver is a linear system. At the receiver input, we have a signal mixed with noise. The signal and the noise power at the receiver input are $S_i$ and $N_i$, respectively. Since the receiver is linear, the receiver output $Y_o(t)$ can be written as

$Y_o(t) = X_o(t) + n_o(t)$   (8.1)

where $X_o(t)$ and $n_o(t)$ are the signal and noise components at the receiver output, respectively.

Fig. 8-1  A communication system model

We make two further assumptions about the additive noise:

1. The noise is zero-mean white gaussian with power spectral density $S_{nn}(\omega) = \eta/2$.
2. The noise is uncorrelated with $X_o(t)$.

With the preceding assumptions, we have

$E[Y_o^2(t)] = E[X_o^2(t)] + E[n_o^2(t)] = S_o + N_o$   (8.2)

where $S_o = E[X_o^2(t)]$ and $N_o = E[n_o^2(t)]$ are the average signal and noise powers at the receiver output, respectively. The output signal-to-noise ratio $(S/N)_o$ is defined as

$\left(\dfrac{S}{N}\right)_o = \dfrac{S_o}{N_o} = \dfrac{E[X_o^2(t)]}{E[n_o^2(t)]}$   (8.3)

In analog communication systems, the quality of the received signal is determined by this parameter. Note that this ratio is meaningful only when Eq. (8.1) holds.

8.3 NOISE IN BASEBAND COMMUNICATION SYSTEMS

In baseband communication systems, the signal is transmitted directly without any modulation. The results obtained for baseband systems serve as a basis for comparison with other systems. Figure 8-2 shows a simple analog baseband system. For a baseband system, the receiver is a low-pass filter that passes the message while reducing the noise at the output. Obviously, the filter should reject all noise frequency components that fall outside the message band. We assume that the low-pass filter is ideal with bandwidth $W$ $(= 2\pi B)$.

Fig. 8-2  A baseband system

It is assumed that the message signal $X(t)$ is a zero-mean ergodic random process band-limited to $W$ with power spectral density $S_{XX}(\omega)$. The channel is assumed to be distortionless over the message band, so that

$X_o(t) = X(t - t_d)$   (8.4)

where $t_d$ is the time delay of the system. The average output signal power $S_o$ is

$S_o = E[X_o^2(t)] = E[X^2(t - t_d)] = \dfrac{1}{2\pi}\int_{-W}^{W} S_{XX}(\omega)\,d\omega = S_X = S_i$   (8.5)

where $S_X$ is the average signal power and $S_i$ is the signal power at the input of the receiver. The average output noise power $N_o$ is
$N_o = E[n_o^2(t)] = \dfrac{1}{2\pi}\int_{-W}^{W} S_{nn}(\omega)\,d\omega$   (8.6)

For the case of additive white noise, $S_{nn}(\omega) = \eta/2$, and

$N_o = \dfrac{1}{2\pi}\int_{-W}^{W} \dfrac{\eta}{2}\,d\omega = \dfrac{\eta W}{2\pi} = \eta B$   (8.7)

The white-noise assumption simplifies the calculations and preserves the essential aspects of the analysis. The output signal-to-noise ratio is

$\left(\dfrac{S}{N}\right)_o = \dfrac{S_o}{N_o} = \dfrac{S_i}{\eta B}$

Let

$\gamma = \dfrac{S_i}{\eta B}$   (8.8)

Then

$\left(\dfrac{S}{N}\right)_o = \gamma$   (8.9)

The parameter $\gamma$ is directly proportional to $S_i$. Hence, comparing various systems for the output SNR for a given $S_i$ is the same as comparing these systems for the output SNR for a given $\gamma$.

8.4 NOISE IN AMPLITUDE MODULATION SYSTEMS

A block diagram of a continuous-wave (CW) communication system is shown in Fig. 8-3. The receiver front end (RF/IF stages) is modeled as an ideal bandpass filter with a bandwidth $2W$ centered at $\omega_c$. This bandpass filter, also known as a predetection filter, limits the amount of noise outside the band that reaches the detector ("out-of-band noise"). Predetection bandpass filtering produces

$Y_i(t) = X_c(t) + n_i(t)$   (8.10)

where $n_i(t)$ is the narrowband noise, which can be expressed as [Eq. (7.67)]

$n_i(t) = n_c(t)\cos\omega_c t - n_s(t)\sin\omega_c t$   (8.11)

If the power spectral density of $n_i(t)$ is $\eta/2$, then (Prob. 7.35)

$E[n_i^2(t)] = E[n_c^2(t)] = E[n_s^2(t)] = 2\eta B$   (8.12)

Fig. 8-3  A CW communication system

A. Synchronous Detection:

1. DSB Systems:

In a DSB system, the transmitted signal $X_c(t)$ has the form¹

$X_c(t) = A_c X(t)\cos\omega_c t$   (8.13)

The demodulator portion of the DSB system is shown in Fig. 8-4. The input of the receiver is

$Y_i(t) = A_c X(t)\cos\omega_c t + n_i(t) = [A_c X(t) + n_c(t)]\cos\omega_c t - n_s(t)\sin\omega_c t$   (8.14)

Fig. 8-4  DSB demodulation

Multiplying $Y_i(t)$ by $2\cos\omega_c t$ and using a low-pass filter, we obtain

$Y_o(t) = A_c X(t) + n_c(t) = X_o(t) + n_o(t)$   (8.15)

where $X_o(t) = A_c X(t)$ and $n_o(t) = n_c(t)$. We see that the output signal and noise are additive and that the quadrature noise component $n_s(t)$ has been rejected by the demodulator. Now

$S_o = E[X_o^2(t)] = E[A_c^2 X^2(t)] = A_c^2 E[X^2(t)] = A_c^2 S_X$   (8.16a)

$N_o = E[n_o^2(t)] = E[n_c^2(t)] = 2\eta B$   (8.16b)

and the output SNR is

$\left(\dfrac{S}{N}\right)_o = \dfrac{A_c^2 S_X}{2\eta B}$   (8.17)

The input signal power $S_i$ is given by

$S_i = E[X_c^2(t)] = \dfrac{1}{2}A_c^2 S_X$   (8.18)

Thus, from Eqs. (8.7) and (8.8) we obtain

$\left(\dfrac{S}{N}\right)_o = \dfrac{S_i}{\eta B} = \gamma$   (8.19)

which indicates that insofar as noise is concerned, DSB with ideal synchronous detection has the same performance as a baseband system.

¹ We need a random phase $\Theta$ in the carrier to make $X_c(t)$ stationary (see Prob. 7.10). The random phase, which is independent of $n_i(t)$, does not affect the final results; hence, it is neglected here.

The SNR at the input of the detector is

$\left(\dfrac{S}{N}\right)_i = \dfrac{S_i}{N_i} = \dfrac{S_i}{2\eta B} = \dfrac{\gamma}{2}$   (8.20)

and

$\dfrac{(S/N)_o}{(S/N)_i} = 2$   (8.21)

The ratio $(S/N)_o/(S/N)_i$ is known as the detector gain and is often used as a figure of merit for the demodulator.

2. SSB Systems:

Similar calculations for an SSB system using synchronous detection yield the same noise performance as for a baseband or DSB system (Prob. 8.4).
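The chain of results in Eqs. (8.16) through (8.19) is easy to check numerically. The following sketch evaluates both sides of Eq. (8.19); every parameter value in it ($A_c$, $S_X$, $\eta$, $B$) is assumed for illustration only and is not taken from the text.

```python
# Numeric check of Eqs. (8.16)-(8.19) for DSB with synchronous detection.
# All parameter values are illustrative assumptions.
A_c = 2.0      # carrier amplitude, V
S_X = 0.5      # message power E[X^2(t)], W
eta = 1e-9     # white-noise density eta (S_nn = eta/2), W/Hz
B = 4e3        # message bandwidth, Hz

S_o = A_c**2 * S_X            # Eq. (8.16a): output signal power
N_o = 2 * eta * B             # Eq. (8.16b): output noise power E[n_c^2]
S_i = A_c**2 * S_X / 2        # Eq. (8.18): transmitted (input) signal power
gamma = S_i / (eta * B)       # Eq. (8.8): baseband reference SNR

print(S_o / N_o)              # (S/N)_o from Eq. (8.17)
print(gamma)                  # equals the line above, confirming Eq. (8.19)
```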
3. AM Systems:

In ordinary AM (or simply AM) systems, AM signals can be demodulated by a synchronous detector or by an envelope detector. The modulated signal in an AM system has the form

$X_c(t) = A_c[1 + \mu X(t)]\cos\omega_c t$   (8.22)

where $\mu$ is the modulation index of the AM signal, $\mu \le 1$, and $|X(t)| \le 1$.

If the synchronous detector includes an ideal dc suppressor, the receiver output $Y_o(t)$ will be (Fig. 8-4)

$Y_o(t) = A_c\mu X(t) + n_c(t) = X_o(t) + n_o(t)$   (8.23)

where $X_o(t) = A_c\mu X(t)$ and $n_o(t) = n_c(t)$. Hence,

$\left(\dfrac{S}{N}\right)_o = \dfrac{A_c^2\mu^2 S_X}{2\eta B}$   (8.24)

The input signal power $S_i$ is

$S_i = \dfrac{1}{2}E\{A_c^2[1 + \mu X(t)]^2\}$

Since $X(t)$ is assumed to have a zero mean,

$S_i = \dfrac{A_c^2}{2}(1 + \mu^2 S_X)$   (8.25)

Thus,

$\left(\dfrac{S}{N}\right)_o = \dfrac{\mu^2 S_X}{1 + \mu^2 S_X}\,\dfrac{S_i}{\eta B} = \dfrac{\mu^2 S_X}{1 + \mu^2 S_X}\,\gamma$   (8.26)

Because $\mu^2 S_X \le 1$, we have

$\left(\dfrac{S}{N}\right)_o \le \dfrac{\gamma}{2}$   (8.27)

which indicates that the output SNR in AM is at least 3 dB worse than that in DSB and SSB systems.

B. Envelope Detection and Threshold Effect:

An ordinary AM signal is usually demodulated by envelope detection. With reference to Fig. 8-3, the input to the detector is

$Y_i(t) = X_c(t) + n_i(t) = \{A_c[1 + \mu X(t)] + n_c(t)\}\cos\omega_c t - n_s(t)\sin\omega_c t$   (8.28)

We can analyze the effect of the noise by considering a phasor representation of $Y_i(t)$:

$Y_i(t) = \mathrm{Re}[V(t)e^{j\omega_c t}]$   (8.29)

where

$V(t) = A_c[1 + \mu X(t)] + n_c(t) + jn_s(t)$   (8.30)

The phasor diagram is shown in Fig. 8-5. From Fig. 8-5, we see that $Y_i(t)$ can be written as

$Y_i(t) = V_e(t)\cos[\omega_c t + \phi(t)]$   (8.31)

where

$V_e(t) = \left(\{A_c[1 + \mu X(t)] + n_c(t)\}^2 + n_s^2(t)\right)^{1/2}$   (8.32a)

$\phi(t) = \tan^{-1}\dfrac{n_s(t)}{A_c[1 + \mu X(t)] + n_c(t)}$   (8.32b)

Fig. 8-5  Phasor diagram for AM when $(S/N)_i \gg 1$

1. Large-SNR (Signal Dominance) Case:

When $(S/N)_i \gg 1$, $A_c[1 + \mu X(t)] \gg n_c(t)$ and $n_s(t)$ for almost all $t$. Under this condition, the envelope $V_e(t)$ can be approximated by

$V_e(t) \approx A_c[1 + \mu X(t)] + n_c(t)$   (8.33)

An ideal envelope detector reproduces the envelope $V_e(t)$ minus its dc component, so

$Y_o(t) = A_c\mu X(t) + n_c(t)$   (8.34)

which is identical to the output of a synchronous detector [Eq. (8.23)]. The output SNR is then as given in Eq. (8.26), that is,

$\left(\dfrac{S}{N}\right)_o = \dfrac{\mu^2 S_X}{1 + \mu^2 S_X}\,\gamma$   (8.35)

Therefore, for AM, when $(S/N)_i \gg 1$, the performance of the envelope detector is identical to that of the synchronous detector.

2. Small-SNR (Noise Dominance) Case:

When $(S/N)_i \ll 1$, the envelope of the resultant signal is primarily dominated by the envelope of the noise signal (Fig. 8-6). From the phasor diagram of Fig. 8-6, the envelope of the resultant signal is approximated by

$V_e(t) \approx V_n(t) + A_c[1 + \mu X(t)]\cos\phi_n(t)$   (8.36)

where $V_n(t)$ and $\phi_n(t)$ are the envelope and the phase of the noise $n_i(t)$. Equation (8.36) indicates that the output contains no term proportional to $X(t)$ and that the noise is multiplicative. The signal $X(t)$ is multiplied by noise in the form of $\cos\phi_n(t)$, which is random. Thus, the message signal $X(t)$ is badly mutilated, and its information has been lost. Under these circumstances, it is meaningless to talk about output SNR. The loss or mutilation of the message at low predetection SNR is called the threshold effect. The name comes about because there is some value of $(S/N)_i$ above which signal distortion due to noise is negligible and below which system performance deteriorates rapidly. The threshold occurs when $(S/N)_i$ is about 10 dB or less (Prob. 8.6).

Fig. 8-6  Phasor diagram for AM when $(S/N)_i \ll 1$
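Equation (8.35) makes the penalty of Eq. (8.27) concrete. The following sketch, assuming tone modulation so that $S_X = 1/2$ (consistent with Prob. 8.5), compares AM envelope detection against the DSB/baseband reference $\gamma$ for a few modulation indices:

```python
import math

# Output SNR of AM relative to gamma, Eq. (8.35):
# (S/N)_o = mu^2 S_X / (1 + mu^2 S_X) * gamma.
# Tone modulation X(t) = cos(w_m t) is assumed, so S_X = 1/2.
S_X = 0.5
for mu in (0.5, 0.8, 1.0):
    ratio = mu**2 * S_X / (1 + mu**2 * S_X)   # (S/N)_o / gamma
    print(mu, ratio, 10 * math.log10(ratio))  # shortfall vs. DSB/SSB, in dB
# Even at 100 percent modulation (mu = 1) the ratio is 1/3, about -4.8 dB.
```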
8.5 NOISE IN ANGLE MODULATION SYSTEMS

With reference to Fig. 8-3, in angle modulation systems the transmitted signal $X_c(t)$ has the form

$X_c(t) = A_c\cos[\omega_c t + \phi(t)]$   (8.37)

where

$\phi(t) = k_p X(t)$ for PM,  $\phi(t) = k_f\int^t X(\tau)\,d\tau$ for FM   (8.38)

Figure 8-7 shows a model for the angle demodulation system. The predetection filter bandwidth $B_T$ is approximately $2(D + 1)B$, where $D$ is the deviation ratio and $B$ is the bandwidth of the message signal [Eq. (4.27)]. The detector input is

$Y_i(t) = X_c(t) + n_i(t) = A_c\cos[\omega_c t + \phi(t)] + n_i(t)$   (8.39)

Fig. 8-7  Angle demodulation system

The carrier amplitude remains constant; therefore,

$S_i = E[X_c^2(t)] = \dfrac{A_c^2}{2}$   (8.40a)

$N_i = \eta B_T$   (8.40b)

Hence,

$\left(\dfrac{S}{N}\right)_i = \dfrac{A_c^2}{2\eta B_T}$   (8.41)

which is independent of $X(t)$. The $(S/N)_i$ of Eq. (8.41) is often called the carrier-to-noise ratio (CNR).

Because $n_i(t)$ is narrowband, we write

$n_i(t) = v_n(t)\cos[\omega_c t + \phi_n(t)]$   (8.42)

where $v_n(t)$ is Rayleigh-distributed and $\phi_n(t)$ is uniformly distributed in $(0, 2\pi)$ (Prob. 6.39). Then $Y_i(t)$ can be written as

$Y_i(t) = V_e(t)\cos[\omega_c t + \theta(t)]$   (8.43)

where

$V_e(t) = \{[A_c\cos\phi + v_n(t)\cos\phi_n(t)]^2 + [A_c\sin\phi + v_n(t)\sin\phi_n(t)]^2\}^{1/2}$

$\theta(t) = \tan^{-1}\dfrac{A_c\sin\phi + v_n(t)\sin\phi_n(t)}{A_c\cos\phi + v_n(t)\cos\phi_n(t)}$

The limiter suppresses any amplitude variation $V_e(t)$. Hence, in angle modulation, SNRs are derived from consideration of $\theta(t)$ only. The expression for $\theta(t)$ is too complicated for analysis without some simplification. The detector is assumed to be ideal. The output of the detector is

$y_d(t) = \theta(t)$ for PM,  $y_d(t) = \dfrac{d\theta(t)}{dt}$ for FM   (8.44)

A. Signal Dominance Case:

A phasor diagram (Fig. 8-8) for this case is obtained from

$Y_i(t) = \mathrm{Re}[V(t)e^{j\omega_c t}]$   (8.45)

where

$V(t) = A_c e^{j\phi(t)} + v_n(t)e^{j\phi_n(t)}$   (8.46)

and $v_n(t) \ll A_c$ for almost all $t$. From Fig. 8-8 the length $L$ of arc $AB$ is

$L = V_e(t)[\theta(t) - \phi(t)]$   (8.47)

and

$V_e(t) \approx A_c + v_n(t)\cos[\phi_n(t) - \phi(t)] \approx A_c, \qquad L \approx v_n(t)\sin[\phi_n(t) - \phi(t)]$

Fig. 8-8  Phasor diagram for angle modulation

Hence, from Eq. (8.47), we obtain

$\theta(t) \approx \phi(t) + \dfrac{v_n(t)}{A_c}\sin[\phi_n(t) - \phi(t)]$   (8.48)

For the purpose of computing an output SNR, replacing $\phi_n(t) - \phi(t)$ with $\phi_n(t)$ will not affect the result.² Thus, we write

$\theta(t) \approx \phi(t) + \dfrac{v_n(t)}{A_c}\sin\phi_n(t) = \phi(t) + \dfrac{n_s(t)}{A_c}$   (8.49)

From Eqs. (8.44) and (8.38) the detector output is

$Y_o(t) = \theta(t) = k_p X(t) + \dfrac{n_s(t)}{A_c}$ for PM   (8.50)

$Y_o(t) = k_f X(t) + \dfrac{1}{A_c}\dfrac{dn_s(t)}{dt}$ for FM   (8.51)

² This is based on the observation that, in the sense of ensemble averages, $\phi_n - \phi$ differs from $\phi_n$ only by a shift of the mean value.

B. $(S/N)_o$ in PM Systems:

From Eq. (8.50)

$S_o = E[k_p^2 X^2(t)] = k_p^2 E[X^2(t)] = k_p^2 S_X$   (8.52)

$N_o = E\left[\left(\dfrac{n_s(t)}{A_c}\right)^2\right] = \dfrac{1}{A_c^2}E[n_s^2(t)] = \dfrac{2\eta B}{A_c^2}$   (8.53)

Hence,

$\left(\dfrac{S}{N}\right)_o = \dfrac{A_c^2 k_p^2 S_X}{2\eta B}$   (8.54)

From Eqs. (8.8) and (8.40a)

$\gamma = \dfrac{S_i}{\eta B} = \dfrac{A_c^2}{2\eta B}$   (8.55)

and then Eq. (8.54) can be expressed as

$\left(\dfrac{S}{N}\right)_o = k_p^2 S_X\,\gamma$   (8.56)

C. $(S/N)_o$ in FM Systems:

From Eq. (8.51)

$S_o = E[k_f^2 X^2(t)] = k_f^2 S_X$   (8.57)

$N_o = E\left[\left(\dfrac{1}{A_c}\dfrac{dn_s(t)}{dt}\right)^2\right] = \dfrac{1}{A_c^2}E\left[\left(\dfrac{dn_s(t)}{dt}\right)^2\right]$   (8.58)

By using Eq. (7.133) (Prob. 7.23), the power spectral density of $n_s'(t)$ is given by

$S_{n_s'n_s'}(\omega) = \omega^2 S_{n_s n_s}(\omega) = \omega^2\eta$ for $|\omega| \le W\ (= 2\pi B)$, 0 otherwise   (8.59)

Then

$N_o = \dfrac{1}{A_c^2}\,\dfrac{1}{2\pi}\int_{-W}^{W}\omega^2\eta\,d\omega = \dfrac{\eta W^3}{3\pi A_c^2}$   (8.60)

$\left(\dfrac{S}{N}\right)_o = \dfrac{3A_c^2 k_f^2 S_X}{2\eta B W^2}$   (8.61)

Using Eq. (8.55), we can express Eq. (8.61) as

$\left(\dfrac{S}{N}\right)_o = 3\left(\dfrac{k_f}{W}\right)^2 S_X\,\gamma$   (8.62)

Since $\Delta\omega = |k_f X(t)|_{\max} = k_f$ $[|X(t)|_{\max} = 1]$, Eq. (8.62) can be rewritten as

$\left(\dfrac{S}{N}\right)_o = 3\left(\dfrac{\Delta\omega}{W}\right)^2 S_X\,\gamma = 3D^2 S_X\,\gamma$   (8.63)

where $D$ is the deviation ratio, defined in Eq. (4.26). Equation (8.60) indicates that the output noise power is inversely proportional to the mean carrier power $A_c^2/2$ in FM. This effect of a decrease in output noise power as the carrier power increases is called noise quieting.

D. Threshold Effects in Angle Modulation Systems:

When $A_c^2 \ll E[v_n^2(t)]$, the resulting phasor [Eq. (8.46)] is dominated by the term $v_n(t)e^{j\phi_n(t)}$. For this case, the phase of the detector input is

$\theta(t) \approx \phi_n(t) + \dfrac{A_c}{v_n(t)}\sin[\phi(t) - \phi_n(t)]$   (8.64)

The noise now dominates, and the message signal has been corrupted by the noise beyond the possibility of recovery, an effect similar to the one we observed in an AM system using envelope detection. This threshold effect is illustrated in Fig. 8-9. If $A_c^2 \gg E[v_n^2(t)]$, we may expect $v_n(t) < A_c$ most of the time, and the tip of the resultant phasor $V(t)$ traces randomly around the end of the carrier phasor, as illustrated in Fig. 8-9(a). The variation in phase is relatively small; however, if the noise becomes large enough, the tip of the resultant phasor $V(t)$ may move away from the endpoint of the carrier phasor and may even occasionally encircle the origin [Fig. 8-9(b)], where the phase of $Y_i(t)$ rapidly changes by $2\pi$ [Fig. 8-10(a)].
The output of ‘the frequency discriminator then will contain a spike every time an encirclement occurs. This is illustrated in Fig. 8-10(5). . 212 NOISE IN ANALOG COMMUNICATION SYSTEMS ICHAP. 8 Locus of ¥¢0) Locus of Yt") ABS Elio) AP El) @ Oy Fig. 89 Threshold effect in FM system an o Fig. 810 Solved Problems NOISE IN BASEBAND SYSTEMS 8.1. Consider an analog baseband communication system with additive white noise. The transmission channel is assumed to be distortionless, and the power spectral density of white noise 1/2 is 10° watt per hertz (W/Hz). The signal to be transmitted is an audio signal with 4-kHz bandwidth. At the receiver end, an RC low-pass filter with a 3-dB bandwidth of 8 kHz is used to limit the noise power at the output. Calculate the output noise power. From Prob. 2.9 the frequency response H(«) of an RC low-pass filter with a 3-B bandwidth of 8 KHz is siven by 1 Ho) = te + jaja CHAP. 8] NOISE IN ANALOG COMMUNICATION SYSTEMS, 213 where gy = 2n(8000). Using Eqs. (7.40) and (7.53), we see that the output noise power N, is THF deo nip 1 do 22n)_.1+ @/ooy roy = 4210 \2n)8) 0 )W = 25.2 pW 82. Consider an analog baseband communication system with additive white noise having power spectral density 7/2 and a distorting channel having the frequency response 1 YO ew The distortion is equalized by a receiver filter having the frequency response 1 Os |ol=W Heg(oo) = 4 0 otherwise Obtain an expression for the output SNR. Lycee cae S.= Fe) _HMLOP HP Srx(o)do 1h s = LY stores (6.55) NOISE IN AMPLITUDE MODULATION § A DSB signal with additive white noise is demodulated by a synchronous detector (Fig. 8-4) with a phase error 6. Taking the local oscillator signal to be 2cos (wt + ¢), show that (3), yeos 7" (8.66) where 7 is defined by Eq. (8.8). From Eq. (8.14) YO = (AX +. (]008 0.4 ~ nA SiN Ot Multiplying ¥;(0 by 2cos (a,1-+ @) and using a low-pass filter, we obtain Y,(0) = AX(e0s § +n(Oeos 6 +n,(Osin d = XO + no) where X= AeX(d008 6 ned) = neitoos $+ n,Osin d 214 NOISE IN ANALOG COMMUNICATION SYSTEMS (CHAP. 8 So Se [30] = APeos*oz[ °C] = A2(c08 74) Sy E[ei0] = £[rdens0n29 + losin 26-+ ndondosin 26] E[eo eos? + efaze]sin6 = E[rito | c0s*6 + sin 26) = EL RO] since E{n(0) ns] = 0 (Eq. (7.73)} Thus, by Eqs. (6.17), (8.18), and (8.19), we obtain ‘S\ _ ABS, cos? _ LAls, 8) _ Ae HS 52 (a. 2B BS where is {Eq. (8.8)) js 1" 84. Show that the performance of an SSB system using synchronous detection is equivalent to the performance of both DSB and baseband systems. ‘The SSB signal X.(1) can be expressed as [Eq. (3.1/)] X= A. [XOe08 2.1 + KeYsin ot] (867) where X(¢) denotes the Hilbert transform of X(¢). Note that Eq. (8.67) represents the lower-sideband SSB signal and that the minimum bandwidth of the predetection filter is W for a single-band signal, Refer to Fig. 8-5; the input of the receiver is Y= Xm, TAM) + nO] 008 «66 + [AX — (0) sin wt 8.68) Synchronous detection rejects the quadrature components of both the signal and noise, yielding Y(t) = AHO +H = XO +060) 6.09) where X= AMO and (0) = elt) ‘The output signal power is = £[40]= #e[vo]= 425, The power spectral density of n,(0) i illustrated in Fig. 8-11 for the case of lower-sideband SSB. The output noise power is x, =e bo ‘Thus, the output SNR is (8.70) ‘The input signal power is s=e[t0]= belo] + 5e[eo]} = de Po] = a5, CHAP. 8} NOISE IN ANALOG COMMUNICATION SYSTEMS, 215 0 “Ww w (a) Saul) + ba) “wo Ww = w ul since [Prob. 7.43 and Eq. (7.128) of Prob. 
7.20] e[roto]=0 an eLCo] =e [Po] We obtain 71 al 7) Which indicates that insofar as noise is concerned, SSB with ideal synchronous detection has the same performance as both DSB and baseband systems. 8.5. Assuming sinusoidal modulation, show that in an AM system with envelope detection the output SNR is given by Ss we Ss) 7: where u is the modulation index for AM. For sinusoidal modulation XO = 005 ont So sy=e[Po]=4 x 2 Thus, from Bq, (6.35), we obtain ‘S) wSy (i) Teese ( 6.73) 216 8.6. 87. NOISE IN ANALOG COMMUNICATION SYSTEMS (CHAP. 8 ‘The threshold level (or value) in an AM system with envelope detection is usually defined as that value of (S/N); for which V, 0 875) vitere N= Eft] =28 and PW, > A, -[.. fi An) ty = fier ae feta AON) Now PU, 3B > 3 or B>o4T In Sec. 4.7 we mentioned that a value of f < 0.2is considered to define an FM signal to be narrowband, We conclude that the narrowband FM offers no improvement in SNR over AM. ‘Note that based on the preceding noise consideration, f= 0.5 is often considered as defining roughly the transition from narrowband FM to wideband FM. An audio signal X(0) is to be transmitted over a radio frequency (RF) channel with additive white noise. It is required that the output SNR be greater than 40 dB. Assume the following characteristics for X(0) and the channel: ELX@]=0 IX(I=1 Sy= ELMO] =} B= 15 kHz Power spectral density of white noise n/2 = 10-" W/Hz Power loss in channel = 50 dB Calculate the transmission bandwidth By and the required average transmitted power Sy for (@) DSB modulation (6) AM with 100 percent modulation and envelope detection (0) PM with ky =3 (@)_ FM with a deviation ratio D (@) For DSB modulation By = 2B = 30kHz CHAP. 8] NOISE IN ANALOG COMMUNICATION SYSTEMS 224 From Bas. (68) and (6.19), we have S, Sate @ 7B IP HagaOH ~~ eB) or 52300) Hence, Sp = S(10*) = 3000W = 3kW (®) For AM with «= 1 and envelope detection From Eq. (8.35), we have Hence, Sp =9kW (©) For PM with k, = 3, from Eqs. (4.21) and (4.25), Br-+2ky + 1)B= 120 kHz From Eq, (6.56), we have (2 = 880 = 03) saemepaay? a2y0? or 8.2 }u0yw Hence, © Sp= S105) = 667W @ For FM with D =, from Eq. (4.27) Br = XD + 1)B = 180kHz. From Eq, (8.63), we have S) = apis, = 46% om (3) 20's 38) wo 5. ) 200 VIHTO) = 207 or = 500%) W Hence, Sp= S10") = 80W 8.14 The result of Prob. 8.6 indicates that the threshold level for AM is equivalent to the input SNR (S/N), = 10. Assume this conclusion is also valid for FM. (a) Find the output SNR at the threshold level for FM (assuming sinusoidal modulation). (6) Find the modulation index that produces (S/N), = 30 dB at the threshold. G.-C) (a) 86 (@ From Eq, (8.62), we have 22 NOISE IN ANALOG COMMUNICATION SYSTEMS (CHAP. 8 For sinusoidal modulation Thus, Eq. (8.86) can be rewritten as Using Eg. (8.41), we obtain Se According to Carson’s rule (4.25), Thus, we obtain Br=2R+ DB (§),-2°6+0(8), an Setting (S/N); = 10, we see that the output SNR at the threshold level is o Solving for p, we obtain (2) -2riv+0 «39 s Yee G., 30p%(B + 1) = 10°¢= 30aB) 8.15. In commercial FM broadcasting, a scheme known as preemphasis/deemphasis filtering is used to improve the output SNR. In this scheme, the high-frequency component in the input signal is R, 200g Fp Wi “ It Ry : 1 . “ o 16 (2) Preemphasis iter and is frequency response 20g Poco Rx “ At 0 , y tes © EC (©) Deemphass filter and its frequency response CHAP. 
emphasized at the transmitter before the noise is introduced, and at the output of the FM demodulator the inverse operation is performed to deemphasize the high-frequency components. The circuits shown in Fig. 8-15 are used as preemphasis (PE) and deemphasis (DE) filters. The standard value for the time constant is $R_1C = 75\ \mu s$, and $R_2C \gg R_1C$. Calculate the noise improvement factor $\Gamma$ defined by $\Gamma = N_o/N_o'$, where $N_o$ is the output noise power without preemphasis/deemphasis filters and $N_o'$ is the output noise power when these filters are used.

The frequency response of the preemphasis filter is

$H_{PE}(\omega) = K\left(1 + \dfrac{j\omega}{\omega_1}\right)$

where $K = R_2/(R_1 + R_2)$ and $\omega_1 = 1/(R_1C)$.

Hypothesis $H_1$ is chosen if $z > \lambda$, and hypothesis $H_2$ is chosen if $z < \lambda$. If $z = \lambda$, the decision can be an arbitrary one.

9.3 PROBABILITY OF ERROR AND MAXIMUM LIKELIHOOD DETECTOR

A. Probability of Error:

For the binary signal detection system, there are two ways in which errors can occur. That is, given that signal $s_1(t)$ was transmitted, an error results if hypothesis $H_2$ is chosen; or given that signal $s_2(t)$ was transmitted, an error results if hypothesis $H_1$ is chosen. Thus, the probability of error $P_e$ is expressed as [Eq. (6.24)]

$P_e = P(H_2|s_1)P(s_1) + P(H_1|s_2)P(s_2)$   (9.5)

where $P(s_1)$ and $P(s_2)$ are the a priori probabilities that $s_1(t)$ and $s_2(t)$, respectively, are transmitted. When symbols 1 and 0 occur with equal probability, that is, $P(s_1) = P(s_2) = \frac{1}{2}$,

$P_e = \frac{1}{2}[P(H_2|s_1) + P(H_1|s_2)]$   (9.6)

B. Maximum Likelihood Detector:

A popular criterion for choosing the threshold $\lambda$ of Eq. (9.4) is based on minimizing the probability of error of Eq. (9.5). The computation for this minimum error value of $\lambda = \lambda_0$ starts with forming the following likelihood ratio test (Prob. 9.1):

$\Lambda(z) = \dfrac{f(z|s_1)}{f(z|s_2)} \underset{H_2}{\overset{H_1}{\gtrless}} \dfrac{P(s_2)}{P(s_1)}$   (9.7)

where $f(z|s_i)$ is the conditional pdf known as the likelihood of $s_i$. The ratio $\Lambda(z)$ is known as the likelihood ratio. Equation (9.7) states that we should choose hypothesis $H_1$ if the likelihood ratio $\Lambda(z)$ is greater than the ratio of the a priori probabilities. If $P(s_1) = P(s_2)$, Eq. (9.7) reduces to

$\Lambda(z) \underset{H_2}{\overset{H_1}{\gtrless}} 1$   (9.8a)

or

$f(z|s_1) \underset{H_2}{\overset{H_1}{\gtrless}} f(z|s_2)$   (9.8b)

If $P(s_1) = P(s_2)$ and the likelihoods $f(z|s_i)$ $(i = 1, 2)$ are symmetric, then Eq. (9.7) yields the criterion (Prob. 9.2)

$z \underset{H_2}{\overset{H_1}{\gtrless}} \lambda_0$   (9.9)

where

$\lambda_0 = \dfrac{a_1 + a_2}{2}$   (9.10)

It can be shown that the threshold $\lambda_0$ represented by Eq. (9.10) is the optimum threshold for minimizing the probability of error (Prob. 9.3). The criterion of Eq. (9.9) is known as the minimum error criterion. A detector that minimizes the error probability (for the case where the signal classes are equally likely) is also known as a maximum likelihood detector.

C. Probability of Error with Gaussian Noise:

The pdf of the gaussian random noise $n_o$ in Eq. (9.3b) is [Eq. (6.91)]

$f(n_o) = \dfrac{1}{\sqrt{2\pi\sigma_o^2}}\,e^{-n_o^2/(2\sigma_o^2)}$   (9.11)

Then

$f(z|s_1) = \dfrac{1}{\sqrt{2\pi\sigma_o^2}}\,e^{-(z - a_1)^2/(2\sigma_o^2)}$   (9.12a)

$f(z|s_2) = \dfrac{1}{\sqrt{2\pi\sigma_o^2}}\,e^{-(z - a_2)^2/(2\sigma_o^2)}$   (9.12b)

which are illustrated in Fig. 9-2.

Fig. 9-2  Conditional pdfs

Now

$P(H_1|s_2) = \int_{\lambda_0}^{\infty} f(z|s_2)\,dz$   (9.13a)

$P(H_2|s_1) = \int_{-\infty}^{\lambda_0} f(z|s_1)\,dz$   (9.13b)

Because of the symmetry of $f(z|s_i)$, Eq. (9.6) reduces to

$P_e = P(H_2|s_1) = P(H_1|s_2)$   (9.14)

Thus, the probability of error $P_e$ is numerically equal to the area under the "tail" of either likelihood function $f(z|s_1)$ or $f(z|s_2)$ falling on the "incorrect" side of the threshold:

$P_e = \int_{\lambda_0}^{\infty} f(z|s_2)\,dz$   (9.15)

where $\lambda_0 = (a_1 + a_2)/2$ is the optimum threshold [Eq. (9.10)]. Using Eq. (9.12b) and letting $u = (z - a_2)/\sigma_o$, so that $\sigma_o\,du = dz$, we have

$P_e = \dfrac{1}{\sqrt{2\pi}}\int_{(a_1 - a_2)/(2\sigma_o)}^{\infty} e^{-u^2/2}\,du = Q\!\left(\dfrac{a_1 - a_2}{2\sigma_o}\right)$   (9.16)

where $Q(z)$ is the complementary error function, or the Q function, defined in Eq. (6.93). The values of the Q function are tabulated in App. C.
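The Q function of Eq. (9.16) can be evaluated directly from the complementary error function, since $Q(z) = \frac{1}{2}\,\mathrm{erfc}(z/\sqrt{2})$. The sketch below, with assumed values for $a_1$, $a_2$, and $\sigma_o$, reproduces the kind of entry tabulated in App. C:

```python
import math

def Q(z):
    # Q function of Eq. (6.93), via the complementary error function
    return 0.5 * math.erfc(z / math.sqrt(2.0))

# Illustrative (assumed) detector statistics for Eq. (9.16)
a1, a2 = 1.0, -1.0     # means of z under s1 and s2
sigma_o = 0.3          # rms output noise

z0 = (a1 - a2) / (2 * sigma_o)
print(z0, Q(z0))       # P_e = Q((a1 - a2) / (2 sigma_o))
```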
9.4 OPTIMUM DETECTION

In this section we consider optimizing the linear filter in the receiver (block 1) of Fig. 9-1 by minimizing the probability of error.

A. The Matched Filter:

A matched filter is a linear filter designed to provide the maximum output SNR for a given transmitted signal. Consider that a known signal $s(t)$ plus AWGN $n(t)$ is the input to an LTI filter followed by a sampler, as shown in Fig. 9-1. Let $a(t)$ be the output signal component of the filter. Then, at $t = T$, we have

$\left(\dfrac{S}{N}\right)_o = \dfrac{a^2(T)}{\sigma_o^2}$   (9.17)

We wish to find the filter frequency response $H_0(\omega)$ that maximizes Eq. (9.17). It can be shown that (Prob. 9.7)

$\dfrac{a^2(T)}{\sigma_o^2} \le \dfrac{2}{\eta}\,\dfrac{1}{2\pi}\int_{-\infty}^{\infty}|S(\omega)|^2\,d\omega = \dfrac{2E}{\eta}$   (9.18)

where $S(\omega) = \mathcal{F}[s(t)]$, $\eta/2$ is the power spectral density of the input noise, and $E$ is the energy of the input signal $s(t)$. Note that the right-hand side of this inequality does not depend on $H(\omega)$ but only on the input signal energy and the power spectral density of the noise. Thus,

$\left(\dfrac{S}{N}\right)_{o\,\max} = \dfrac{2E}{\eta}$   (9.19)

The equality in Eq. (9.18) holds only if the optimum filter frequency response $H_0(\omega)$ is employed such that (Prob. 9.7)

$H(\omega) = H_0(\omega) = S^*(\omega)e^{-j\omega T}$   (9.20)

where * denotes the complex conjugate. The impulse response $h(t)$ of this optimum filter is [see Eqs. (1.18) and (1.22)]

$h(t) = s(T - t), \quad 0 \le t \le T$   (9.21)

If $\omega_1 T \gg 1$, $\omega_2 T \gg 1$, and $(\omega_2 - \omega_1)T \gg 1$, then the probability of error $P_e$ is (Prob. 9.17)

$P_e = Q\!\left(\sqrt{\dfrac{E_b}{\eta}}\right)$   (9.37)

where $E_b = A^2T/2$ is the average signal energy per bit.

Solved Problems

PROBABILITY OF ERROR AND MAXIMUM LIKELIHOOD DETECTOR

9.1. Derive the likelihood ratio test given by Eq. (9.7).

A reasonable receiver decision rule is to choose hypothesis $H_1$ if the a posteriori probability $P(s_1|z)$ is greater than the a posteriori probability $P(s_2|z)$; otherwise, we should choose hypothesis $H_2$ (see Prob. 6.15). Hence,

$P(s_1|z) \underset{H_2}{\overset{H_1}{\gtrless}} P(s_2|z)$   (9.39)

or

$\dfrac{P(s_1|z)}{P(s_2|z)} \underset{H_2}{\overset{H_1}{\gtrless}} 1$   (9.40)

The decision criterion of Eq. (9.40) is called the maximum a posteriori (MAP) criterion. Expressing Bayes' rule [Eq. (6.19)] for a continuous conditional pdf, we have

$P(s_i|z) = \dfrac{f(z|s_i)P(s_i)}{f(z)}$   (9.41)

where $f(z|s_i)$ is the conditional pdf of the received sample $z$ conditioned on the signal class $s_i$. Thus, by using Eq. (9.41), Eq. (9.39) yields

$f(z|s_1)P(s_1) \underset{H_2}{\overset{H_1}{\gtrless}} f(z|s_2)P(s_2)$   (9.42)

or

$\Lambda(z) = \dfrac{f(z|s_1)}{f(z|s_2)} \underset{H_2}{\overset{H_1}{\gtrless}} \dfrac{P(s_2)}{P(s_1)}$

9.2. Derive Eq. (9.9), that is, $z \gtrless \lambda_0 = (a_1 + a_2)/2$.

Using Eqs. (9.12a) and (9.12b), we see that the likelihood ratio $\Lambda(z)$ defined in Eq. (9.7) can be written as

$\Lambda(z) = \dfrac{e^{-(z - a_1)^2/(2\sigma_o^2)}}{e^{-(z - a_2)^2/(2\sigma_o^2)}} = \exp\!\left[\dfrac{(a_1 - a_2)z}{\sigma_o^2} - \dfrac{a_1^2 - a_2^2}{2\sigma_o^2}\right]$

Hence, the likelihood ratio test (9.7) can be expressed as

$\exp\!\left[\dfrac{(a_1 - a_2)z}{\sigma_o^2} - \dfrac{a_1^2 - a_2^2}{2\sigma_o^2}\right] \underset{H_2}{\overset{H_1}{\gtrless}} \dfrac{P(s_2)}{P(s_1)}$   (9.43)

The inequality relationship of Eq. (9.43) is preserved for any monotonically increasing (or decreasing) transformation. Taking the natural logarithm of both sides, we obtain

$\dfrac{(a_1 - a_2)z}{\sigma_o^2} - \dfrac{a_1^2 - a_2^2}{2\sigma_o^2} \underset{H_2}{\overset{H_1}{\gtrless}} \ln\dfrac{P(s_2)}{P(s_1)}$   (9.44)

When $P(s_1) = P(s_2)$, $\ln[P(s_2)/P(s_1)] = 0$, and substituting into Eq. (9.44) yields

$z \underset{H_2}{\overset{H_1}{\gtrless}} \lambda_0 = \dfrac{a_1 + a_2}{2}$   (9.45)

9.3. Verify that the threshold $\lambda_0$ given by Eq. (9.45) yields the minimum probability of error when $P(s_1) = P(s_2)$.

Assume that the threshold is set at $\lambda$ (Fig. 9-5). From Eq. (9.5)

$P_e = P(H_2|s_1)P(s_1) + P(H_1|s_2)P(s_2) = P(s_1)\int_{-\infty}^{\lambda} f(z|s_1)\,dz + P(s_2)\int_{\lambda}^{\infty} f(z|s_2)\,dz$

$= P(s_1)\int_{-\infty}^{\lambda} f(z|s_1)\,dz + P(s_2)\left[1 - \int_{-\infty}^{\lambda} f(z|s_2)\,dz\right]$

$= P(s_2) + \int_{-\infty}^{\lambda}[P(s_1)f(z|s_1) - P(s_2)f(z|s_2)]\,dz$   (9.46)

Fig. 9-5

To find the threshold $\lambda$ that minimizes $P_e$, we set $dP_e/d\lambda = 0$, which yields

$P(s_1)f(\lambda|s_1) = P(s_2)f(\lambda|s_2)$  or  $\dfrac{f(\lambda|s_1)}{f(\lambda|s_2)} = \dfrac{P(s_2)}{P(s_1)}$   (9.47)

Using Eqs. (9.12a) and (9.12b), we obtain

$\dfrac{e^{-(\lambda - a_1)^2/(2\sigma_o^2)}}{e^{-(\lambda - a_2)^2/(2\sigma_o^2)}} = \dfrac{P(s_2)}{P(s_1)}$

from which we obtain

$\lambda_0 = \dfrac{\sigma_o^2}{a_1 - a_2}\ln\dfrac{P(s_2)}{P(s_1)} + \dfrac{a_1 + a_2}{2}$   (9.48)

When $P(s_1) = P(s_2)$, Eq. (9.48) reduces to $\lambda_0 = (a_1 + a_2)/2$.
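The minimization in Prob. 9.3 can also be checked by brute force. The sketch below, using assumed values for $a_1$, $a_2$, $\sigma_o$, and $P(s_1)$, sweeps the threshold $\lambda$, evaluates $P_e$ from Eq. (9.46), and confirms that the minimizer matches Eq. (9.48):

```python
import math

def Q(z):
    # tail probability of the standard gaussian
    return 0.5 * math.erfc(z / math.sqrt(2.0))

# Assumed statistics: a1 = +1, a2 = -1, sigma_o = 0.5, P(s1) = 0.7
a1, a2, s, P1 = 1.0, -1.0, 0.5, 0.7

def Pe(lam):
    # Eq. (9.46): P_e = P(s1) P(z < lam | s1) + P(s2) P(z > lam | s2)
    return P1 * Q((a1 - lam) / s) + (1 - P1) * Q((lam - a2) / s)

lam0 = s**2 / (a1 - a2) * math.log((1 - P1) / P1) + (a1 + a2) / 2  # Eq. (9.48)
grid = [-1 + 0.001 * k for k in range(2001)]
best = min(grid, key=Pe)
print(lam0, best)        # the brute-force minimizer agrees with Eq. (9.48)
print(Pe(lam0))
```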
9.4. A bipolar binary signal $s_i(t)$ is a $+A$-V or $-A$-V pulse during the interval $(0, T)$. The linear filter shown in Fig. 9-1 is an integrator, as shown in Fig. 9-6. (This detector is known as an integrate-and-dump detector.) Assuming $\sigma_o^2 = 0.1$, determine the optimum detection threshold $\lambda_0$ if the a priori probabilities are (a) $P(s_1) = 0.5$; (b) $P(s_1) = 0.7$; (c) $P(s_1) = 0.2$.

Fig. 9-6  Integrate-and-dump detector

The received signal $r(t)$ is

$r(t) = s_i(t) + n(t)$, where $s_1(t) = +A$ and $s_2(t) = -A$ for $0 \le t \le T$

The output of the integrator at the end of a signaling interval is

$z(T) = \int_0^T r(t)\,dt = \int_0^T [s_i(t) + n(t)]\,dt = a_i + n_o$   (9.49)

where

$a_1 = AT, \qquad a_2 = -AT$   (9.50)

and $n_o$ is a random variable defined by

$n_o = \int_0^T n(t)\,dt$   (9.51)

(a) $P(s_1) = P(s_2) = 0.5$. From Eqs. (9.10) and (9.50), the optimum threshold is

$\lambda_0 = \dfrac{a_1 + a_2}{2} = \dfrac{AT + (-AT)}{2} = 0$

(b) $P(s_1) = 0.7$, $P(s_2) = 0.3$. From Eqs. (9.48) and (9.50), the optimum threshold is

$\lambda_0 = \dfrac{0.1}{2AT}\ln\dfrac{0.3}{0.7} = -\dfrac{0.042}{AT}$

(c) $P(s_1) = 0.2$, $P(s_2) = 0.8$. From Eqs. (9.48) and (9.50), the optimum threshold is

$\lambda_0 = \dfrac{0.1}{2AT}\ln\dfrac{0.8}{0.2} = \dfrac{0.07}{AT}$

9.5. Show that the probability of error $P_e$ in the binary system of Prob. 9.4 can be expressed as

$P_e = Q\!\left(\sqrt{\dfrac{2A^2T}{\eta}}\right)$   (9.52)

From Eq. (9.51), since $n_o$ is obtained by a linear operation on a sample function from a gaussian process, it is a gaussian random variable:

$E[n_o] = E\left[\int_0^T n(t)\,dt\right] = \int_0^T E[n(t)]\,dt = 0$   (9.53)

since $n(t)$ has zero mean.

$\sigma_{n_o}^2 = E[n_o^2] = E\left[\int_0^T\!\!\int_0^T n(t)n(u)\,dt\,du\right] = \int_0^T\!\!\int_0^T E[n(t)n(u)]\,dt\,du$

Since $n(t)$ is white noise, we have [Eq. (7.63)] $E[n(t)n(u)] = \frac{\eta}{2}\delta(t - u)$, and

$\sigma_{n_o}^2 = \dfrac{\eta}{2}\int_0^T\!\!\int_0^T \delta(t - u)\,dt\,du = \dfrac{\eta T}{2}$   (9.54)

Thus, using Eqs. (9.16) and (9.50), we obtain

$P_e = Q\!\left(\dfrac{a_1 - a_2}{2\sigma_{n_o}}\right) = Q\!\left(\dfrac{2AT}{2\sqrt{\eta T/2}}\right) = Q\!\left(\sqrt{\dfrac{2A^2T}{\eta}}\right)$

9.6. In the binary system of Prob. 9.4, $P(s_1) = P(s_2) = \frac{1}{2}$, $\eta/2 = 10^{-9}$ W/Hz, $A = 10$ mV, and the transmission rate of data (bit rate) is $10^4$ b/s.

(a) Find the probability of error $P_e$.
(b) If the bit rate is increased to $10^5$ b/s, what value of $A$ is needed to attain the same $P_e$ as in part (a)?

(a) $\dfrac{2A^2T}{\eta} = \dfrac{A^2T}{\eta/2} = \dfrac{(0.01)^2(10^{-4})}{10^{-9}} = 10$

Hence, from Eq. (9.52) and Table C-1, we obtain

$P_e = Q(\sqrt{10}) = 7.8(10^{-4})$

(b) $\dfrac{A^2T}{\eta/2} = \dfrac{A^2(10^{-5})}{10^{-9}} = 10$

Solving for $A$, we obtain $A = 31.62(10^{-3})$ V $= 31.62$ mV.

THE MATCHED FILTER AND CORRELATOR

9.7. Derive Eq. (9.18), that is,

$\dfrac{a^2(T)}{\sigma_o^2} \le \dfrac{2E}{\eta}$

where $E$ is the energy of the input signal $s(t)$ and $\eta/2$ is the power spectral density of the input noise $n(t)$ (Fig. 9-1).

Let $H(\omega)$ be the frequency response of the linear filter, and let $a(t)$ be the output signal of the filter. Then by Eq. (2.11)

$a(T) = \dfrac{1}{2\pi}\int_{-\infty}^{\infty} H(\omega)S(\omega)e^{j\omega T}\,d\omega$   (9.55)

where $S(\omega) = \mathcal{F}[s(t)]$. The output noise power $N_o$ is [by Eq. (7.55)]

$N_o = E[n_o^2(t)] = \dfrac{\eta}{2}\,\dfrac{1}{2\pi}\int_{-\infty}^{\infty}|H(\omega)|^2\,d\omega$   (9.56)

Thus, the output SNR is

$\left(\dfrac{S}{N}\right)_o = \dfrac{\left|\frac{1}{2\pi}\int_{-\infty}^{\infty}H(\omega)S(\omega)e^{j\omega T}\,d\omega\right|^2}{\frac{\eta}{2}\,\frac{1}{2\pi}\int_{-\infty}^{\infty}|H(\omega)|^2\,d\omega}$   (9.57)

We can examine the preceding SNR by applying the Schwarz inequality, which states that

$\left|\dfrac{1}{2\pi}\int_{-\infty}^{\infty}f_1(\omega)f_2(\omega)\,d\omega\right|^2 \le \dfrac{1}{2\pi}\int_{-\infty}^{\infty}|f_1(\omega)|^2\,d\omega\ \dfrac{1}{2\pi}\int_{-\infty}^{\infty}|f_2(\omega)|^2\,d\omega$   (9.58)

The equality holds if

$f_1(\omega) = k f_2^*(\omega)$   (9.59)

where $k$ is an arbitrary real number and * denotes the complex conjugate. If we set

$f_1(\omega) = H(\omega), \qquad f_2(\omega) = S(\omega)e^{j\omega T}$

we can write

$\left|\dfrac{1}{2\pi}\int_{-\infty}^{\infty}H(\omega)S(\omega)e^{j\omega T}\,d\omega\right|^2 \le \dfrac{1}{2\pi}\int_{-\infty}^{\infty}|H(\omega)|^2\,d\omega\ \dfrac{1}{2\pi}\int_{-\infty}^{\infty}|S(\omega)|^2\,d\omega$   (9.60)

Substituting Eq. (9.60) into Eq. (9.57), we obtain

$\left(\dfrac{S}{N}\right)_o \le \dfrac{2}{\eta}\,\dfrac{1}{2\pi}\int_{-\infty}^{\infty}|S(\omega)|^2\,d\omega = \dfrac{2E}{\eta}$

where

$E = \dfrac{1}{2\pi}\int_{-\infty}^{\infty}|S(\omega)|^2\,d\omega$

which is the energy of the input signal $s(t)$.

9.8. Find the output of the matched filter and determine the maximum value of $(S/N)_o$ if the input $s(t)$ is a rectangular pulse of amplitude $A$ and duration $T$.

Fig. 9-7

For the given $s(t)$ [Fig. 9-7(a)], the impulse response $h(t)$ of the matched filter is [Eq. (9.27)]

$h(t) = s(T - t) = s(t)$

which is exactly the same as the input $s(t)$ [Fig. 9-7(b)].
Thus, the output 2(¢) is at oss Oa = Fars 2AT T= s27 0 otherwise which is plotted in Fig. 9-7(0) Note that 2(7) = 4°T is the maximum value of 2(0). From Eq. (9.19) the maximum value of (S/N), is 238 OPTIMUM DETECTION [CHAP. 9 Gh where Leeda [laa ar = (8)_-247 ea 99. Repeat Prob. 9.8 if a simple RC filter [Fig. 9-8(a)] is used instead of the matched filter. Mo 1 j 1 RC ww 1 EO) | 0 7 0 ror 7: (a) i} wo co) Fig. 9-8 From Prob. 2.6 the impulse response f(#) and the frequency response H(o) of the RC filter are given by = Lee, HO) = peed “Ho=77h T+ joRC ‘Then the output =(0) is given by 0 1<0 2) = 9) * HD) = 4} AC eC osrsT Mme TO ATR which is plotted in Fig 9-8(c). Note that the maximum value of =(2) is reached at ¢ = T and AT)= Ad ETO) ‘The average output noise power is =evmm-2 a N= Eva0o) eB TROP ARC 0.02) 2D) _ 4dr ~eHOH Thus, Qe Seer 005) We now maximize (S/N), with respect to RC. Letting x = T/(RC) and 1-eF sa - caer fo) = BET * we obtain or 142 CHAP. 9] OPTIMUM DETECTION 239 Solving for x, we obtain (0.68, Note that by using a simple RC filter, the maximum output SNR is reduced by a factor of 0.815, or about (0.89 4B from that of the matched filter. 9.10. Find the optimum filter frequency response Ho(w) that maximizes the output SNR when the input noise is not a white noise. Let H(@) be the frequency response of a linear filter. Let Sns(w) be the power spectrum of the input noise, Proceeding as in Prob. 9.7, we see that the output signal of the filter at time =T is i ne an=2 f° mos do ‘The output noise power is N= FUEO)= 3” Sa(OlHaP do ‘Then the output SNR is “12 We dof? , (2) U/2r fe Heysia)e! dof (065) Fee Su OH do To find the frequency response H(w) that maximizes Eq. (9.65), we apply Schwarz inequality Eq. (9.58). Setting Si@) = VSm(o) Ha) fro) wwe can write vor clare do [” XP gy [Fmeorsioree™ dof f° spielen? dof Ber ao a) ‘The equality holds if Iw iw) < KO Sao) Heo) = VS) (9.67) Substituting Eq. (9.66) into Eq. (9.65), we obtain wy? (3), «i is ot doy (9.68) ‘The maximum value of (S/:V), occurs when the equality holds in Eq. (9.68) or Eq. (9.65), Thus, from Eq, (9.67) the optimum filter frequency response Ho(~) is SOx Sal An optimum filter given by Ea. (9.69) is called a matched filter for colored noise Hoo) =k 0.59) 9.11, Referring to Eqs. (9.26) and (9.27), we have the output SNR of a matched filter receiver as ‘SY 2Ey_ 2/7, (), A FJ, uo or ae (9.70) 240 OPTIMUM DETECTION (CHAP. 9 Now suppose that we want s1(#) and s(t) to have the same signal energy. Show that the optimum choice of (i) is sx) = (0) 9.71) and the resultant output SNR is ={['soa-% 9.72) i 7 where E is the signal energy. From Eq. (9.70) (i). (9.73) Using Schwarz inequality (9.58), we obtain < {[izou i ao d= E [fscrioa The equality holds when 0 = kD Equal energy requirement implies k = +1, and maximizing (S/N), requires k= ~1. Hence 8) = 99 Substituting this relation in Eq, (9.73), we obtain 1 ()-#+4ffaoe- ERROR PROBABILITY PERFORMANCE OF BINARY TRANSMISSION SYSTEMS 9.12. Derive Eq. (9.32), that is, — Pa P.= al, of| 7 neta o< WO ana Deter ‘Then by Bgs. (9.27) and (9.28) we obtain £,=[{0-s0oF = [len a= aer CHAP. 9} OPTIMUM DETECTION 241 9.13. 9.14, 9.15. where the average energy per bit is E, = 427. Note that Eq. (9.32) is the same as Eq. (9.52) obtained in Prob. 95. A bipolar binary signal 51) is a + 1-V or ~1-V’ pulse during the interval (0, 7). Additive white noise with power spectral density /2= 10 W/Hz is added to the signal. Determine the maximum bit rate that can be sent with a bit error probability of P, < 10. 
P -of <10 (x) = 104 x= 3.71 By Eq. (9.32) From Table C-1 Hence, par_ faayr Vor Y3005 from which T=G.1PU0% 13.7600) s ‘Thus, the maximum bit rate R is 26(10") b/s = 7.26 kb/s Derive Eq. (9.36), that is, From Eq. (9.35) n=Acorot — 06reT sto [20 Sfewcu o 1, oT > 1, and (@ ~03)T > 1, then Thus, Supplementary Problems A binary communication system transmits signals 5{1)(i = 1,2). The receiver test statistic 2(7) is xT) =aj+n, where a; = +1 and a aiven by = and ny is uniformly distributed, yielding the conditional density functions ls) } -o1e:<19 ai =f fals.) fb otherwise 9<7<01 otherwise falss) = 1%, Find the probability of error P, for the case of equally likely signals, using an optimum decision threshold. Ans, P= 0.05 ‘A binary communication system transmits signals s,()(i= 1,2) with equal probability. The receiver test statistic 2(7) is AT) = af) +n{T) where a,(T) = +1 and a,(7) = —1 and n,(T7) is a zero-mean gaussian random variable with variance 0.1 (a) Determine the optimum decision rule. (b) Calculate the probability of error. 9.20, 921, 922. 923. 9.24, OPTIMUM DETECTION [CHAP. 9 dns. (920 Pe A binary communication system transmits signals s()(i= 1,2) with the probabilities P(sy) = 0.75 and P(sq) = 0.25. The receiver test statistic 2(7) is AT) = af) +n(T) sao where a)(T) = 1 and a,(T) = 1 and n,(T) is a zero-mean gaussian random variable with variance 0.1 (@) Determine the optimum decision rule (6) Calculate the probability of error. Hint; Use Eq. (9.48) of Prob. 9.3 and Eq, (9.5) ‘Ans. (a) 22081, (6) P, = 0.0883 ‘Compute the matched filter output over (0,7) to the pulse waveform _fet osisT oO iG otherwise Ans, Tsinht Derive Eqs. (9.30) and (9.34), Hint: Use Eqs. (9.27) and (0.28) It is required (o transmit 2.08 Mb/s with an error probability of P, = 10“. The channel noise power spectrum is /2= 10" W/Hz. Determine the signal power required at the receiver input, using polar signaling, Ans, 0.47 mW -A cost are coherently detected Assume that the noise power In a PSK system, the received waveforms 5)(0) = Acosw.t and s:() = with a matched filter. The value of 4 is 20 mV, and the bit rate is 1 MI spectral density /2= 10°!" W/H, Find the probability of error P,. Ans, P.= 3.910) Chapter 10 FORMATION THEORY AND SOURCE CODING RODUCTION tation theory provides a quantitative measure of the information contained in message \d allows us to determine the capacity of a communication system to transfer this bn from source to destination. In this chapter we briefly explore some basic ideas involved in ation theory and source coding. {URE OF INFORMATION jrmation source is an object that produces an event, the outcome of which is selected at ording to a probability distribution, A practical source in a communication system is a produces messages, and it can be either analog or discrete. In this chapter we dcal mainly rete sources, since analog sources can be transformed to discrete sources through the use 1d quantization techniques, described in Chap. 5. A discrete information source is a only a finite set of symbols as possible outputs. The set of source symbols is called the abet, and the elements of the set are called symbols or letters. jon sources can be classified as having memory or being memoryless. A source with for which a current symbol depends on the previous symbols. 
A memoryless source is, each symbol produced is independent of the previous symbols, ‘memoryless source (DMS) can be characterized by the list of the symbols, the signment to these symbols, and the specification of the rate of generating these symbols ‘ontent of a Discrete Memoryless Source: t of information contained in an event is closely related to its uncertainty. Messages Bwledge of high probability of occurrence convey relatively litle information. We note nt is certain (that is, the event occurs with probability 1), it conveys zero information, 245 246 INFORMATION THEORY AND SOURCE CODING [cHaP. 10 Thus, a mathematical measure of information should be @ function of the probability of the outcome and should satisfy the following axioms: 1. Information should be proportional to the uncertainty of an outcome. 2. Information contained in ‘independent outcomes should add. 1. Information Content of a Symbol: Consider a DMS, denoted by X, with alphabet {21,.%9,...,%m}: The information content of a symbol yi, denoted by I(x), is defined by Hex) log, Pox) (10.1) °8 Pox) where P(x) is the probability of occurrence of symbol x,. Note that (x,) satisfies the following properties! Kx) =0 for Pex) (10.2) Kx) 20 (10.3) Hex) > Mx) if Px) < Posy 0.4) Kxix)) = Mx) + Hog) if xyand x are independent (0.5) The unit of A(x) is the bit (binary unit) if b = 2, Hartley or decit if b= 10, and nat (natural unit) if =e. It is standard to use 6 = 2. Here the unit bit (abbreviated “b”) is a measure of information content and is not to be confused with the term bit meaning “binary digit.” The conversion of these units to other units can be achieved by the following relationships. Ina_loga lo824 = i93 ~ toed (10.6) 2. Average Information or Entropy: In a practical communication system, we usually transmit long sequences of symbols from an information source. Thus, we are more interested in the average information that a source produces than the information content of a single symbol. The mean value of I(x,) over the alphabet of source X with m different symbols is given by H(X) = ELK) = > POD) a 0.7) = -> P(x) log: Pox) b/symbol a ‘The quantity H(X) is called the entropy of source X. It is a measure of the average information content er source symbol. The source entropy H(X) can be considered as the average amount of uncertainty within source X that is resolved by use of the alphabet. Note that for a binary source ¥ that generates independent symbols 0 and 1 with equal probability, the source entropy H(X) is WX flog} }logs} = 1 b/symbol (08) The source entropy H(X) satisfies the following relation: 0= MX) =log;m (10.9) where m is the size (number of symbols) of the alphabet of source X (Prob.10.4). The lower bound corresponds to no uncertainty, which occurs when one symbol has probability P(x) =1 while CHAP. 10) INFORMATION THEORY AND SOURCE 247 P(x) =0. for j#i, so X emits the same symbol x; all the time, The upper bound corresponds to the maximum uncertainty which occurs when P(x;) = 1/m for all i, that is, when all symbols are equally likely to be emitted by X. 3. Information Rate: If the time rate at which source X emits symbols is r (symbols/s), the information rate R of the source is given by R=rH(X) b/s (10.10) 103 DISCRETE MEMORY A. Channel Representation: CHANNELS A communication channel is the path or medium through which the symbols flow to the receiver. A discrete memoryless channel (DMC) is a statistical model with an input X and an output Y (Fig. 10-1). 
During each unit of the time (signaling interval), the channel accepts an input symbol from , and in response it generates an output symbol from ¥. The channel is “discrete” when the alphabets of ¥ and Y are both finite. It is “memoryless” when the current output depends on only the current input and not on any of the previous inputs. ne rm ae yn int oo Fig. 10-1 Discrete memoryless channel A diagram of a DMC with m inputs and n outputs is illustrated in Fig. 10-1. The input X consists of input symbols 21,X9,--.%n- The a priori probabilities of these source symbols P(x.) are assumed to be known, The output ¥ consists of output symbols y,.72,.~-.). Each possible input-to-output path is indicated along with a conditional probability P(|x;), where P(yj|x) is the conditional probability of obtaining output y, given that the input is x;, and is called a chanmel transition probability B. Channel Matrix: ‘A channel is completely specified by the complete set of transition probabilities. Accordingly, the channel of Fig. 10-1 is often specified by the matrix of transition probabilities (P(/1X)], given by Pola) POr|x1) Praha) payny= | Pork®) Poale) = Pou) cnn) PEIN) PCIe) Pan) The matrix (P(YIX)] is called the channel matrix. Since each input to the channel results in some output, each row of the channel matrix must sum to unity, that is, > Polya fi 1 forall (0.12) 248 INFORMATION THEORY AND SOURCE CODING (CHAP. 10 Now, if the input probabilities P(¥) are represented by the row matrix [POM = [Pon) Pos) Pp] (0.13) and the output probabilities PCY) are represented by the row matrix [POM = [PO PO») Pow] (0.14) then (PO) = [PONPAW)] 10.15) If P(X) is represented as a diagonal matrix Po) 0 0 tPoow=| 9 PG ° (10.16) 0 0 P(%p1). then (PO, ¥)] = [PO POO) (10.17) where the (i,j) element of matrix [P(X, ¥)] has the form P(x, y;). The matrix [PCX, Y)] is known as the joint probability matrix, and the element P(x;,¥,) is the joint probability of transmitting x and receiving y, C. Special Channel 1. Lossless Channel: A channel described by a channel matrix with only one nonzero element in cach column is called a lossless channel. An example of a lossless channel is shown in Fig. 10-2, and the corresponding channel matrix is shown in Eq. (10.18) (10.18) tonne Fig. 10-2 Lossless channel It can be shown that in the lossless channel no source information is lost in transmission. [See Eq, (10.35) and Prob. 10.10.) 2. Deterministic Channel: ‘A channel described by a channel matrix with only one nonzero element in each row is called a deterministic chanel. An example of a deterministic channel is shown in Fig. 10-3, and the corresponding channel matrix is shown in Eq. (10.19), CHAP. 10} INFORMATION THEORY AND SOURCE CODING 249 Fig. 10-3. Deterministic channel 0 0 Lrayi= | 01 1 0 0 0 0 (10.19) 0 1 Note that since each row has only one nonzero element, this element must be unity by Eq. (10.12). ‘Thus, when a given source symbol is sent in the deterministic channel, itis clear which output symbol will be received. 3. Noiseless Channel: ‘A channel is called noiseless if itis both lossless and deterministic. A noiseless channel is shown in Fig. 10-4. The channel matrix has only one element in each row and in each column, and this element is unity. Note that the input and output alphabets are of the same size; that is, m = 1 for the noiseless channel. I q¢@——____-— Yn Fig. 10-4 Noiseless channel 4. Binary Symmetric Channel: ‘The binary symmetric channel (BSC) is defined by the channel diagram shown in Fig. 
C. Special Channels:

1. Lossless Channel:

A channel described by a channel matrix with only one nonzero element in each column is called a lossless channel. An example of a lossless channel is shown in Fig. 10-2, and the corresponding channel matrix is shown in Eq. (10.18):

$$[P(Y|X)] = \begin{bmatrix} \frac{3}{4} & \frac{1}{4} & 0 & 0 & 0 \\ 0 & 0 & \frac{1}{3} & \frac{2}{3} & 0 \\ 0 & 0 & 0 & 0 & 1 \end{bmatrix} \quad (10.18)$$

Fig. 10-2  Lossless channel

It can be shown that in the lossless channel no source information is lost in transmission. [See Eq. (10.35) and Prob. 10.10.]

2. Deterministic Channel:

A channel described by a channel matrix with only one nonzero element in each row is called a deterministic channel. An example of a deterministic channel is shown in Fig. 10-3, and the corresponding channel matrix is shown in Eq. (10.19):

$$[P(Y|X)] = \begin{bmatrix} 1 & 0 & 0 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} \quad (10.19)$$

Fig. 10-3  Deterministic channel

Note that since each row has only one nonzero element, this element must be unity by Eq. (10.12). Thus, when a given source symbol is sent in the deterministic channel, it is clear which output symbol will be received.

3. Noiseless Channel:

A channel is called noiseless if it is both lossless and deterministic. A noiseless channel is shown in Fig. 10-4. The channel matrix has only one element in each row and in each column, and this element is unity. Note that the input and output alphabets are of the same size; that is, $m = n$ for the noiseless channel.

Fig. 10-4  Noiseless channel

4. Binary Symmetric Channel:

The binary symmetric channel (BSC) is defined by the channel diagram shown in Fig. 10-5, and its channel matrix is given by

$$[P(Y|X)] = \begin{bmatrix} 1 - p & p \\ p & 1 - p \end{bmatrix} \quad (10.20)$$

The channel has two inputs $(x_1 = 0, x_2 = 1)$ and two outputs $(y_1 = 0, y_2 = 1)$. The channel is symmetric because the probability of receiving a 1 if a 0 is sent is the same as the probability of receiving a 0 if a 1 is sent. This common transition probability is denoted by $p$. (See Prob. 10.35.)

Fig. 10-5  Binary symmetric channel

10.4 MUTUAL INFORMATION

A. Conditional and Joint Entropies:

Using the input probabilities $P(x_i)$, output probabilities $P(y_j)$, transition probabilities $P(y_j|x_i)$, and joint probabilities $P(x_i, y_j)$, we can define the following various entropy functions for a channel with $m$ inputs and $n$ outputs:

$H(X) = -\sum_{i=1}^{m} P(x_i)\log_2 P(x_i)$   (10.21)

$H(Y) = -\sum_{j=1}^{n} P(y_j)\log_2 P(y_j)$   (10.22)

$H(X|Y) = -\sum_{j=1}^{n}\sum_{i=1}^{m} P(x_i, y_j)\log_2 P(x_i|y_j)$   (10.23)

$H(Y|X) = -\sum_{j=1}^{n}\sum_{i=1}^{m} P(x_i, y_j)\log_2 P(y_j|x_i)$   (10.24)

$H(X, Y) = -\sum_{j=1}^{n}\sum_{i=1}^{m} P(x_i, y_j)\log_2 P(x_i, y_j)$   (10.25)

These entropies can be interpreted as follows: $H(X)$ is the average uncertainty of the channel input, and $H(Y)$ is the average uncertainty of the channel output. The conditional entropy $H(X|Y)$ is a measure of the average uncertainty remaining about the channel input after the channel output has been observed; $H(X|Y)$ is sometimes called the equivocation of $X$ with respect to $Y$. The conditional entropy $H(Y|X)$ is the average uncertainty of the channel output given that $X$ was transmitted. The joint entropy $H(X, Y)$ is the average uncertainty of the communication channel as a whole. Two useful relationships among the above entropies are

$H(X, Y) = H(X|Y) + H(Y)$   (10.26)

$H(X, Y) = H(Y|X) + H(X)$   (10.27)

B. Mutual Information:

The mutual information $I(X; Y)$ of a channel is defined by

$I(X; Y) = H(X) - H(X|Y)$  b/symbol   (10.28)

Since $H(X)$ represents the uncertainty about the channel input before the channel output is observed and $H(X|Y)$ represents the uncertainty about the channel input after the channel output is observed, the mutual information $I(X; Y)$ represents the uncertainty about the channel input that is resolved by observing the channel output.

Properties of $I(X; Y)$:

1. $I(X; Y) = I(Y; X)$   (10.29)
2. $I(X; Y) \ge 0$   (10.30)
3. $I(X; Y) = H(Y) - H(Y|X)$   (10.31)
4. $I(X; Y) = H(X) + H(Y) - H(X, Y)$   (10.32)
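All of these quantities follow mechanically from the joint probability matrix. A minimal sketch, again assuming the binary channel of Prob. 10.7 with equiprobable inputs (and strictly positive probabilities throughout, so the logarithms are defined), verifies Eqs. (10.28) and (10.32):

```python
import numpy as np

def entropies(PX, PYgX):
    """Entropy bookkeeping for a DMC, Eqs. (10.21)-(10.25)."""
    PXY = np.diag(PX) @ PYgX                 # joint matrix, Eq. (10.17)
    PY = PX @ PYgX                           # output probabilities, Eq. (10.15)
    HX = -np.sum(PX * np.log2(PX))
    HY = -np.sum(PY * np.log2(PY))
    HXY = -np.sum(PXY * np.log2(PXY))        # joint entropy H(X,Y)
    HXgY = -np.sum(PXY * np.log2(PXY / PY))  # H(X|Y): P(x|y) = P(x,y)/P(y)
    return HX, HY, HXY, HXgY

PX = np.array([0.5, 0.5])                    # assumed equiprobable inputs
PYgX = np.array([[0.9, 0.1], [0.2, 0.8]])    # channel of Prob. 10.7

HX, HY, HXY, HXgY = entropies(PX, PYgX)
print(HX - HXgY)                             # I(X;Y), Eq. (10.28)
print(HX + HY - HXY)                         # same value, Eq. (10.32)
```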
10.5 CHANNEL CAPACITY

A. Channel Capacity per Symbol $C_s$:

The channel capacity per symbol of a DMC is defined as

$C_s = \max_{\{P(x_i)\}} I(X; Y)$  b/symbol   (10.33)

where the maximization is over all possible input probability distributions $\{P(x_i)\}$ on $X$. Note that the channel capacity $C_s$ is a function of only the channel transition probabilities that define the channel.

B. Channel Capacity per Second $C$:

If $r$ symbols are being transmitted per second, then the maximum rate of transmission of information per second is $rC_s$. This is the channel capacity per second and is denoted by $C$ (b/s):

$C = rC_s$  b/s   (10.34)

C. Capacities of Special Channels:

1. Lossless Channel:

For a lossless channel, $H(X|Y) = 0$ (Prob. 10.10) and

$I(X; Y) = H(X)$   (10.35)

Thus, the mutual information (information transfer) is equal to the input (source) entropy, and no source information is lost in transmission. Consequently, the channel capacity per symbol is

$C_s = \max_{\{P(x_i)\}} H(X) = \log_2 m$   (10.36)

where $m$ is the number of symbols in $X$.

2. Deterministic Channel:

For a deterministic channel, $H(Y|X) = 0$ for all input distributions $P(x_i)$, and

$I(X; Y) = H(Y)$   (10.37)

Thus, the information transfer is equal to the output entropy. The channel capacity per symbol is

$C_s = \max_{\{P(x_i)\}} H(Y) = \log_2 n$   (10.38)

where $n$ is the number of symbols in $Y$.

3. Noiseless Channel:

Since a noiseless channel is both lossless and deterministic, we have

$I(X; Y) = H(X) = H(Y)$   (10.39)

and the channel capacity per symbol is

$C_s = \log_2 m = \log_2 n$   (10.40)

4. Binary Symmetric Channel:

For the BSC of Fig. 10-5, the mutual information is (Prob. 10.16)

$I(X; Y) = H(Y) + p\log_2 p + (1 - p)\log_2(1 - p)$   (10.41)

and the channel capacity per symbol is

$C_s = 1 + p\log_2 p + (1 - p)\log_2(1 - p)$   (10.42)

10.6 ADDITIVE WHITE GAUSSIAN NOISE CHANNEL

In a continuous channel an information source produces a continuous signal $x(t)$. The set of possible signals is considered as an ensemble of waveforms generated by some ergodic random process. It is further assumed that $x(t)$ has a finite bandwidth so that $x(t)$ is completely characterized by its periodic sample values. Thus, at any sampling instant, the collection of possible sample values constitutes a continuous random variable $X$ described by its probability density function $f_X(x)$.

A. Differential Entropy:

The average amount of information per sample value of $x(t)$ is measured by

$H(X) = -\int_{-\infty}^{\infty} f_X(x)\log_2 f_X(x)\,dx$  b/sample   (10.43)

The entropy $H(X)$ defined by Eq. (10.43) is known as the differential entropy of $X$. The average mutual information in a continuous channel is defined (by analogy with the discrete case) as

$I(X; Y) = H(X) - H(X|Y)$  or  $I(X; Y) = H(Y) - H(Y|X)$

where

$H(Y) = -\int_{-\infty}^{\infty} f_Y(y)\log_2 f_Y(y)\,dy$   (10.44)

$H(X|Y) = -\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} f_{XY}(x, y)\log_2 f_X(x|y)\,dx\,dy$   (10.45a)

$H(Y|X) = -\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} f_{XY}(x, y)\log_2 f_Y(y|x)\,dx\,dy$   (10.45b)

B. Additive White Gaussian Noise Channel:

In an additive white gaussian noise (AWGN) channel, the channel output $Y$ is given by

$Y = X + n$   (10.46)

where $X$ is the channel input and $n$ is an additive band-limited white gaussian noise with zero mean and variance $\sigma^2$.

The capacity $C_s$ of an AWGN channel is given by

$C_s = \max_{f_X(x)} I(X; Y) = \dfrac{1}{2}\log_2\left(1 + \dfrac{S}{N}\right)$  b/sample   (10.47)

where $S/N$ is the signal-to-noise ratio at the channel output. If the channel bandwidth $B$ Hz is fixed, then the output $y(t)$ is also a band-limited signal completely characterized by its periodic sample values taken at the Nyquist rate $2B$ samples/s. Then the capacity $C$ (b/s) of the AWGN channel is given by

$C = 2BC_s = B\log_2\left(1 + \dfrac{S}{N}\right)$  b/s   (10.48)

Equation (10.48) is known as the Shannon-Hartley law. The Shannon-Hartley law underscores the fundamental role of bandwidth and signal-to-noise ratio in communication. It also shows that we can exchange increased bandwidth for decreased signal power (Prob. 10.24) for a system with a given capacity $C$.
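The bandwidth-power exchange implied by Eq. (10.48) is easy to see numerically. The sketch below, with an assumed 4-kHz channel at 30 dB SNR, computes $C$ and then solves for the (much smaller) SNR that sustains the same capacity at twice the bandwidth:

```python
import math

def capacity(B, snr):
    """Shannon-Hartley law, Eq. (10.48): C = B log2(1 + S/N)."""
    return B * math.log2(1 + snr)

B, snr = 4000.0, 1000.0        # assumed: 4-kHz channel at 30 dB SNR
C = capacity(B, snr)
print(C)                       # about 39.9 kb/s

# Doubling the bandwidth lets a far smaller SNR give the same C.
snr2 = 2 ** (C / (2 * B)) - 1
print(snr2, capacity(2 * B, snr2))   # ~30.6 (14.9 dB) yields the same C
```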
The code efficiency n is defined as 254 INFORMATION THEORY AND SOURCE CODING [CHAP. 10 (10.50) where Zmin is the minimum possible value of L. When 1 approaches unity, the code is said to be efficient. ‘The code redundancy y is defined as (10.51) . Source Coding Theorem: The source coding theorem states that for a DMS X with entropy H(X), the average code word length L per symbol is bounded as L>HQ) (10.52) and further, L can be made as close to H(X’) as desired for some suitably chosen code. Thus, With Li, = HX), the code efficiency can be rewritten as =H ==> (10.53) C. Classification of Codes: Classification of codes is best illustrated by an example. Consider Table 10-1 where a source of size 4 has been encoded in binary codes with symbol 0 and 1 Table 10-1 Binary Codes x; | Godel ] Code2 [| Codes [ Codes [ Codes | Code 6 x Cy 00 0 0 0 1 x a o1 1 wo | oo o Xs 0 1 | 00 ho ou oor x un 1 u um oun 0001 J. Fixed-Length Codes: A fixed-length code is one whose code word length is fixed, Code 1 and code 2 of Table 10-1 are fixed-length codes with length 2. 2. Variable-Length Codes: A variable-length code is one whose code word length is not fixed. All codes of Table 10-1 except codes 1 and 2 are variable-length codes. 3. Distinet Codes: A code is distinct if each code word is distinguishable from other code words. All codes of Table 10-1 except code I are distinct codes—notice the codes for x, and x 4. Prefix-Free Codes: ‘A code in which no code word can be formed by adding code symbols to another code word is called a prefix-free code. Thus, in a prefix-free code no code word is a prefix of another. Codes 2, 4, and 6 of Table 10-1 are prefix-free codes. CHAP. 10] INFORMATION THEORY AND SOURCE CODING 255 5. Uniquely Decodable Codes: A distinct code is uniquely decodable if the original source sequence can be reconstructed perfectly from the encoded binary sequence. Note that code 3 of Table 10-1 is not a uniquely decodable code. For example, the binary sequence 1001 may correspond to the source sequences x9x3X3 OF XpX1X1X9. A sufficient condition to ensure that a code is uniquely decodable is that no code word is a prefix of another. Thus, the prefix-free codes 2, 4, and 6 are uniquely decodable codes. Note that the prefix-free condition is not a necessary condition for unique decodability. For example, code 5 of Table 10-1 does not satisfy the prefix-free condition, and yet it is uniquely decodable since the bit 0 indicates the beginning of each code word of the code. 6. Instantaneous Codes: A uniquely decodable code is called an instantaneous code if the end of any code word is recognizable without examining subsequent code symbols. The instantaneous codes have the property previously mentioned that no code word is a prefix of another code word. For this reason, prefix-free codes are sometimes called instantaneous codes 7. Optimal Codes: A code is said to be optimal if it is instantaneous and has minimum average length L for a given source with a given probability assignment for the source symbols. D. Kraft Inequality: with alphabet {x,} (= 1,2, code word corresponding to x; is n,. ‘A necessary and sufficient condition for the existence of an instantaneous binary code is +m). Assume that the length of the assigned binary k-Sr"s1 (10.54) ai which is known as the Kraft inequality. Note that the Kraft inequality assures us of the existence of an instantaneously decodable code with code word lengths that satisfy the inequality. 
But it does not show us how to obtain these code words, nor does it say that any code that satisfies the inequality is automatically uniquely decodable (Prob. 10.27) 108 ENTROPY CODING The design of a variable-length code such that its average code word length approaches the entropy of the DMS is often referred to as entropy coding. In this section we present two examples of entropy coding. A. Shannon-Fano Coding: An efficient code can be obtained by the following simple procedure, known as ‘Shannon-Fano algorithm: 1. List the source symbols in order of decreasing probability. 2. Partition the set into two sets that are as close to equiprobable as possible, and assign 0 to the upper set and 1 to the lower set. 256 INFORMATION THEORY AND SOURCE CODING (CHAP. 10 3. Continue this process, each time partitioning the sets with as nearly equal probabilities as possible until further partitioning is not possible. An example of Shannon-Fano encoding is shown in Table 10-2. Note that in Shannon-Fano encoding the ambiguity may arise in the choice of approximately equiprobable sets. (See Prob. 10.33.) Table 10-2 Shannon-Fano Encoding x_[ Pao [ sept | sep? [steps | Step | Codes x ox To 0 | 0 x | 0.25 0 1 o xs | 020 I o 10 x | oom | or 1 ° 10 xs | 0.08 1 1 1 | 0 m0 xe | 005 i 1 1 I un H(X) = 2.36b/symbol L=2.38b/symbol n= HX)/L = 0.99 B. Huffman Encoding: In general, Huffman encoding results in an optimum code. Thus, itis the code that has the highest efficiency (Prob. 10.34). The Huffman encoding procedure is as follows: 1. List the source symbols in order of decreasing probability. 2. Combine the probabilities of the two symbols having the lowest probabilities, and reorder the resultant probabilities; this step is called reduction 1. The same procedure is repeated until there are two ordered probabilities remaining, 3. Start encoding with the last reduction, which consists of exactly two ordered probabilities. Assign as the first digit in the code words for all the source symbols associated with the first probability; assign 1 to the second probability. 4. Now go back and assign 0 and 1 to the second digit for the two probabilities that were combined in the previous reduction step, retaining all assignments made in Step 3. 5. Keep regressing this way until the first column is reached. ‘An example of Huffman encoding is shown in Table 10-3. H(X) = 2.36b/symbol L=2.38b/symbol n= 0.99 CHAP. 10] INFORMATION THEORY AND SOURCE CODING 257 PG) Cade © © © T ° n 030 —2 030 030 oas oss a o a 00s 02s 025 030 ous — 0 a 0.20 - 0.20 n 025 025: a1 101 100 x on ous, 020 in %s 0.08 012 ‘01 me 00s 1001 Table 10-3 Huffman Encoding Solved Problems MEASURE OF INFORMATION 10.1. 10.2, Verify Eq. (10.5), that is, Hox) = Hox) + Ix) if x; and x are independent If-x, and x, are independent, then by Eq. (6.21) POxiay) = POS)PCS) By Eq. (10.1) 1 EPEC 1 oe Rt M8 = Hex) +105) 1 es) = boa = A.DMS X has four symbols xy, 2,3, x4 with probabilities P(x) = 0.4, P(x») = 0.3, Pa) = P(x) = 0.1 (@) Calculate H(X). (®) Find the amount of information contained in the messages x1x2%jx5 and x4xX3%5%., and compare with the H(X) obtained in part (a). @ HO) =~ Pes loss{ PCs) “04 log, 0.4~0.3]og; 03/~0.2l0g: 0.2~0.1 logy 0.1 1.85b/symbot oy Px, x3435) = (0.4Y(0.3)(0.40.2) = 0.0096 T,x2%1%3) = ~ log: 0.0096 = 6.70 b/symbol 258 10.3. 10.4. INFORMATION THEORY AND SOURCE CODING IcHaP. 
Solved Problems

MEASURE OF INFORMATION

10.1. Verify Eq. (10.5), that is,

I(x_i x_j) = I(x_i) + I(x_j)    if x_i and x_j are independent

If x_i and x_j are independent, then by Eq. (6.21)

P(x_i x_j) = P(x_i)P(x_j)

By Eq. (10.1)

I(x_i x_j) = log [1 / P(x_i x_j)] = log [1 / (P(x_i)P(x_j))] = log [1 / P(x_i)] + log [1 / P(x_j)] = I(x_i) + I(x_j)

10.2. A DMS X has four symbols x1, x2, x3, x4 with probabilities P(x1) = 0.4, P(x2) = 0.3, P(x3) = 0.2, and P(x4) = 0.1.

(a) Calculate H(X).
(b) Find the amount of information contained in the messages x1x2x1x3 and x4x3x3x2, and compare with the H(X) obtained in part (a).

(a) H(X) = −Σ_{i=1}^{4} P(x_i) log2 P(x_i)
= −0.4 log2 0.4 − 0.3 log2 0.3 − 0.2 log2 0.2 − 0.1 log2 0.1
= 1.85 b/symbol

(b) P(x1x2x1x3) = (0.4)(0.3)(0.4)(0.2) = 0.0096
I(x1x2x1x3) = −log2 0.0096 = 6.70 b
Thus, I(x1x2x1x3) < 7.4 [= 4H(X)] b
P(x4x3x3x2) = (0.1)(0.2)²(0.3) = 0.0012
I(x4x3x3x2) = −log2 0.0012 = 9.70 b
Thus, I(x4x3x3x2) > 7.4 [= 4H(X)] b

10.3. Consider a binary memoryless source X with two symbols x1 and x2. Show that H(X) is maximum when both x1 and x2 are equiprobable.

Let P(x1) = α. Then P(x2) = 1 − α, and

H(X) = −α log2 α − (1 − α) log2 (1 − α)    (10.55)

dH(X)/dα = d/dα [−α log2 α − (1 − α) log2 (1 − α)]

Using the relation

d/dy (log_b y) = (1/y) log_b e

we obtain

dH(X)/dα = −log2 α + log2 (1 − α) = log2 [(1 − α)/α]

The maximum value of H(X) requires that

dH(X)/dα = 0

that is,

(1 − α)/α = 1 → α = 1/2

Note that H(X) = 0 when α = 0 or 1. When P(x1) = P(x2) = 1/2, H(X) is maximum and is given by

H(X) = (1/2) log2 2 + (1/2) log2 2 = 1 b/symbol    (10.56)

10.4. Verify Eq. (10.9), that is,

0 ≤ H(X) ≤ log2 m

where m is the size of the alphabet of X.

Proof of the lower bound: Since 0 ≤ P(x_i) ≤ 1,

1/P(x_i) ≥ 1  and  log2 [1/P(x_i)] ≥ 0

Then it follows that

P(x_i) log2 [1/P(x_i)] ≥ 0

Thus,

H(X) = Σ_{i=1}^{m} P(x_i) log2 [1/P(x_i)] ≥ 0    (10.57)

Next, we note that P(x_i) log2 [1/P(x_i)] = 0 if and only if P(x_i) = 0 or 1. Since

Σ_{i=1}^{m} P(x_i) = 1

when P(x_i) = 1, then P(x_j) = 0 for j ≠ i. Thus, only in this case is H(X) = 0.

Proof of the upper bound: Consider two probability distributions {P(x_i) = P_i} and {Q(x_i) = Q_i} on the alphabet {x_i}, i = 1, 2, ..., m, such that

Σ_{i=1}^{m} P_i = 1  and  Σ_{i=1}^{m} Q_i = 1    (10.58)

Using Eq. (10.6), we have

Σ_{i=1}^{m} P_i log2 (Q_i/P_i) = (1/ln 2) Σ_{i=1}^{m} P_i ln (Q_i/P_i)

Next, using the inequality

ln α ≤ α − 1,  α ≥ 0    (10.59)

we obtain

Σ_{i=1}^{m} P_i ln (Q_i/P_i) ≤ Σ_{i=1}^{m} P_i (Q_i/P_i − 1) = Σ_{i=1}^{m} Q_i − Σ_{i=1}^{m} P_i = 0    (10.60)

by using Eq. (10.58). Thus,

Σ_{i=1}^{m} P_i log2 (Q_i/P_i) ≤ 0    (10.61)

where the equality holds only if Q_i = P_i for all i. Setting

Q_i = 1/m,  i = 1, 2, ..., m    (10.62)

we obtain

Σ_{i=1}^{m} P_i log2 (Q_i/P_i) = Σ_{i=1}^{m} P_i log2 (1/P_i) − Σ_{i=1}^{m} P_i log2 m = H(X) − log2 m ≤ 0    (10.63)

Hence, H(X) ≤ log2 m, and the equality holds only if the symbols in X are equiprobable, as in Eq. (10.62).

10.5. A high-resolution black-and-white TV picture consists of about 2 × 10⁶ picture elements and 16 different brightness levels. Pictures are repeated at the rate of 32 per second. All picture elements are assumed to be independent, and all levels have equal likelihood of occurrence. Calculate the average rate of information conveyed by this TV picture source.

H(X) = −Σ_{i=1}^{16} (1/16) log2 (1/16) = 4 b/element
r = 2(10⁶)(32) = 64(10⁶) elements/s

Hence, by Eq. (10.10)

R = rH(X) = 64(10⁶)(4) = 256(10⁶) b/s = 256 Mb/s

10.6. Consider a telegraph source having two symbols, dot and dash. The dot duration is 0.2 s. The dash duration is 3 times the dot duration. The probability of the dot's occurring is twice that of the dash, and the time between symbols is 0.2 s. Calculate the information rate of the telegraph source.

P(dot) = 2P(dash)  and  P(dot) + P(dash) = 3P(dash) = 1

Thus, P(dash) = 1/3 and P(dot) = 2/3. By Eq. (10.7)

H(X) = −P(dot) log2 P(dot) − P(dash) log2 P(dash) = 0.667(0.585) + 0.333(1.585) = 0.92 b/symbol

t_dot = 0.2 s    t_dash = 0.6 s    t_space = 0.2 s

Thus, the average time per symbol is

T_s = P(dot) t_dot + P(dash) t_dash + t_space = 0.5333 s/symbol

and the average symbol rate is

r = 1/T_s = 1.875 symbols/s

Thus, the average information rate of the telegraph source is

R = rH(X) = 1.875(0.92) = 1.725 b/s

DISCRETE MEMORYLESS CHANNELS

10.7. Consider the binary channel shown in Fig. 10-7. (See Prob. 6.14.)

Fig. 10-7 Binary channel [P(y1|x1) = 0.9, P(y2|x1) = 0.1, P(y1|x2) = 0.2, P(y2|x2) = 0.8]

(a) Find the channel matrix of the channel.
(b) Find P(y1) and P(y2) when P(x1) = P(x2) = 0.5.
(c) Find the joint probabilities P(x1, y2) and P(x2, y1) when P(x1) = P(x2) = 0.5.

(a) Using Eq. (10.11), we see the channel matrix is given by

[P(y|x)] = [P(y1|x1) P(y2|x1); P(y1|x2) P(y2|x2)] = [0.9 0.1; 0.2 0.8]

(b) Using Eqs. (10.13), (10.14), and (10.15), we obtain

[P(Y)] = [P(X)][P(y|x)] = [0.5 0.5][0.9 0.1; 0.2 0.8] = [0.55 0.45] = [P(y1) P(y2)]

Hence, P(y1) = 0.55 and P(y2) = 0.45.

(c) Using Eqs.
(10.16) and (10.17), we obtain [Px n= PanPOw)) _fos 0 ]fo9 o1 “Lo os}lo2 os _fo4s 0.05] _PPea.n) Ply) o1 04 Posy) Posasd2) Hence, P(s1,y2) = 0.05 and P(x.) = 0-1 10.8. Two binary channels of Prob. 10.7 are connected in cascade, as shown in Fig. 10-8. (@)_ Find the overall channel matrix of the resultant channel, and draw the resultant equivalent channel diagram, (®) Find PG) and P(e) when P(x,) = P(x) = 0.5. (@ By Eq, (10.15) (ea = oop] [PZ] = (PONPCZLYY) = POOPOOIPZ)] = [POOIPZIX)] Thus, feom Fig. 1058 [Pzix)] = Poway) 09 0.1][09 01] _[0s3 017 “Lo2 os |Lo2 os} L034 066 ‘The resultant equivalent channel diagram is shown in Fig. 10-9. © P@n = PEO] ae 031[°8 ea Al 34 66 |= E0585. 0415] Hence, P(zi) = 0.585 and P(za) = 0.415. 262 INFORMATION THEORY AND SOURCE CODING [CHAR. 10 * 0.66 . Fig. 109 10.9. A channel has the following channel matrix: =['77 2 0 (pony =[ ee ral (10.64) (@) Draw the channel diagram. (®) If the source has equally likely outputs, compute the probabilities associated with the channel outputs for p = 0.2. (@) The channel diagram is shown in Fig. 10-10. Note that the channel represented by Eq. (10.64) (see Fig. 10-10) is known as the binary erasure channel. The binary erasure channel has two inputs x; = 0 and x; = I and three outputs y; =0,y2 =e, and yy = 1, where ¢ indicates an erasure; that is, the output is in doubt, and it should be erased. tee mao ? neve P aot not i= Fig. 10-10 Binary erasure channel (© By Bq, (10.15) yi=s osf°* 02% (Pan) = I aaa =[04 02 04] Thus Ply) = 0.4, PU2) = 0.2, and POs) = 0.4 MUTUAL INFORMATION 10.10. For a lossless channel show that HAY, (10.65) ‘When we observe the output y, ina lossless channel (Fig. 10-2), it isclear which x, was transmitted, that is, Poxily) = 0 oF 1 (10.66) Now by Eq. (10.23) CHAP. 10] INFORMATION THEORY AND SOURCE CODING HMY) =—> Y Pex, ¥) low Ply) =-> PO) > Poly) log, Poly) Oe Note that all the term: Ox log: 0. Hence, we conclude that for a lossless channel HY) =0 263 (10.67) the inner summation are zero because they are in the form of 1x log: 1 or 10.11. Consider a noiseless channel with m input symbols and m output symbols (Fig. 10-4). Show that H(X) =H) and HK) =0 For a noiseless channel the transition probabilities are Poy) = fo ay Hence Posnyd= Posispreny ={O f3 and POY =F POI = Pos Thus, by Eqs. (/0.7) and (10.72) HY) =~ Pop log PO) =-S Pexplog Pex) = HX) Next, by Eqs. (10.24), (10.70), and (10.71) Hox) =-¥ ¥ Poxnyloes Poyly Aa = PES tog, Posts) oe > Pexdlogs 1 =0 10.12, Verify Eq. (10.26), that is, HX, ¥) = HXY) + HOY) From Eqs. (6.16 ) and (624) Posy) = Poly PO) and Frey = Py (10.68) (10.69) (10.70) (10.71) (10,72) 264 INFORMATION THEORY AND SOURCE CODING [CHAP. 10 So by Eq. (0.25) and using Eqs. (J0.22) and (10.23), we have EF Moswypog Pais) y Eres log[ Pesily PL) -¥ F Poin log Pesky) fi HX,Y)= =i Srosppoeeey ale = AMY) -> Poylog Py) fi = HY) +H), 10.13. Show that the mutual information 1(X; ¥) of the channel described by Eq. (10.11 ), with the input probabilities P(x), i=1,2,...,m, and the output probabilities PQ), j= 1,2,...m% can be expressed as 1261) =F ¥ Por yplog LD (10.73) a4 Pex) By Eq, (10.28) AX, Y) = H(X)- HX) Using Eqs. (10.21) and (10.28), we obtain ran = 5 reioes Pept YE rosnyplows Poly & D” Bi Bt a x [s rw] bona eat +3 Frey) lows Posty fi aa => Dd Povifloe | & & px, yo pn =z 2” nd DOB Bry 10.14. Verify Eq. (10.29), that is WX Y)= 1X) Using Eq. (0.73), we can express I(¥; X’) as 1X) = 3 ¥ Pop xdlows ae (10.74 ae Now, by Eq. (6.19) PLpxd = Posy) and Poylsd _ Peulyy Poy) Pox) CHAP. 
10] INFORMATION THEORY AND SOURCE CODING ‘Thus, comparing Eqs. (10.74) with (10.73), we conclude that 40) = 1X) 10.15. Verify Eq. (10.30), that is, yao From Eq, (10.73) and using the relation log(a/b) = ~log(b/a), we have 7 -s3 slog Poe. HAY = 2D Pe lowe Using Bayes’ rule (Eq. (6.19)}, we have Pou) _ PSDP) Po) Pon) ‘Then by using Eq, (10.6), Eq, (10.75) can be rewritten as irene PCx)PO) WY) Bid DP ae G9) Using the inequality (10.59), that is, Ina $row] Since Po) > Ply) = (I) = a 5/3 res] = Smo Equation (0.77) reduces to 1061) <0 or WYN) =O 10.16, Consider a BSC (Fig. 10-5) with P(x) =a. (@_ Show that the mutual information 1(¥; ¥) is given by 1X; ¥) = H(Y) + plogs p + (1 ~p) logo( ~ p) (®) Calculate 10%; ¥) for «= 0.5 and p= 0.1. (6) Repeat (b) for 2 =0.5 and p = 0.5, and comment on the result. 265 (0.75) (10.76) 0.77) (10.78) 266 INFORMATION THEORY AND SOURCE CODING. (CHAP. 10 Figure 10-11 shows the diagram of the BSC with associated input probabilities, I-p Fig. 10-11 (@) Using Eqs. (10.16), (10.17), and (10.20), we have _f«= 0 [i-p » wmarr[e 2 2] a(1—p) op PO ri) POH ya) (-a)p (1-a)(l-p) Poa.n) PO Then by Ba. (10.24 ) HX) = — PCs, 91) Tog2 Pride) ~ Po, 92) 1oB PCalv2) = PO, Toge PLY be2) — PCa, Tog2 PCa) =-a(1 =p) log.(1~p)~2plog: p = (1=a)plogs p~ (1 ~2)(1 = p)loga(t =p) ‘plogs p~(1~ p)logs(1 —p) 10.79) Henee, by Ea. (10.32) 10 Y) = HOY) = HOY) = HON + plogsp + (1 p)loss( ~p) (®) When 2=05 and p=0.1, by Eq. (10.15) [pn] =(05 0.5] a oa |= los 05} HCY) = ~ PCy logs Pv) ~ PCD 108 PC) = 0.5 log 0.5—0.510g;0.5= log: p+ (1 —p)log: (1p) = 0.1 1og20.1 + 0.9 10g; 0.9 =~0.469 Thus, 1X; 1) = 10.469 = 0531 (0 When «= 05 and p= 05, (pay) = [05 oss os |=l05 05) HY) =1 CHAP. 10) * INFORMATION THEORY AND SOURCE CODING 267 P logs p+(1~p)logs (I =p) = 0.5 log; 0.5 +0.5 log: 0.5 “1 Thus, WX Y)=1-1=0 Note that in this case (p = 0.5) no information is being transmitted at all. An equally acceptable decision could be made by dispensing with the channel entirely and “flipping a coin” at the receiver, When 1; ¥) = 0, the channel is said to be useless. CHANNEL CAPACITY 10.17. Verify Eq. (10.36 ), that is, c of a lossless channel and m is the number of symbols in . log, where C, is the channel caps For a lossless channel (Eq. (10.65), Prob. 10.10] HY) = Then by Eg, (10.28) WX; ¥) = HO) = HOY) = HO) 10.80) Hence, by Eqs. (10.33) and (10.9) max IXY) = max HOO = logs m BRO) = max HOD) = Ios, 10.18. Verify Eq. (10.42), that is, 1+ plogs p+ (1p) log; (-p) where C, is the channel capacity of a BSC (Fig. 10-10), By Eq, (10.78) (Prob. 10.16) the mutual information CX; Y) of a BSC is given by 1X; Y) = HCY) + plog: p+ (1~p)log: (1 -p) which is maximum when H(Y) is maximum. Since the channel output is binary, H(Y) is maximum when each output has a probability of 0.5 and is achieved for equally likely inputs [Eq. (J0.9)]. For this case H(Y)=1, and the channel capacity is C= man 1X5 ¥) = 1+ p logs p+ (1 ~p) logs lp) 10.19. Find the channel capacity of the binary erasure channel of Fig. 10-12 (Prob. 10.9). Let Posy) = a Then P(x) = 1a By Eq. (10.64) I-p Piyea % n ° Ys 2 " ” I-p Fig. 10-12 268, INFORMATION THEORY AND SOURCE CODING [CHAP. 10 [parxy) luo , =F Pala) Hote} op POnka) POalsa) POsln) By Eq. (10.15) I-p p 0 (PO) = le ial na aa = [al =p) p A~ ay —p)} (PO) Pox POs] By Eq. (10.17) (PI 2 Oi po [o sc.[" 7 2] _fui-p 0 -[%0” Zar a-or-v] Poy) Pony PCs.93) Posi) PO2sy2) Pld) In addition, from Egs. 
(10.22) and (10.24) we can calculate HO) =-¥ PL) logs PCY) (1 ~ p) logs a1 ~ p)~p log: p~ (1 —aX1 =p) logs (4-21 = p)] = (= p)i-a log; «(1 a) log, (1 9)] =p logy p~ (1p) logs (1-7) 08) MO) =- YY Pony) lows Pt = ~a(1 — p) loge (1 —p)—ap logs p = (1~a)p loge p— (1 —a(1 ~p) loge (1 —p) -p log, p~(1~p) log: (1p) 082) Thus, by Eqs. (10.37) and (10.55) 1X; ¥) = HOY) - HONX) = (1=p)l-a logs (1-2) log; (1-2) (=pHe) (1083) And by Eqs. (10.33) and (10.56) C= max 0G Y) = max (1 p)HX) = (1~p) max HOD) Baye) = a OHO) = CE 084) ADDITIVE WHITE GAUSSIAN NOISE CHANNEL. 10.20. Find the differential entropy H(X) of the uniformly distributed random variable X with probability density function osx C, error-free transmission is not possible, (©) The required S/N ratio can be found by = 10" og. (1-45) > 840% S) or ton (1+5)=8 So 95645 = or 145 = = 2565 = 255 (= 24.1 dB) ‘Thus, the required S/N 1 (@) The required bandwidth B can be found by jo must be greater than or equal (0 24.1 dB for error-free transmission, C= B log, (1 + 100) = 810") 80%) Ped + 1.2(0)Hz = 12kHz and the required bandwidth of the channel must be greater than or equal to 12 kHz. 2m INFORMATION THEORY AND SOURCE CODING. [CHAP. 10 SOURCE CODING 10.25. Consider a DMS X with two symbols x; and x2 and P(x) = 0.9, Pls are encoded as follows (Table 10-4) 0.1. Symbols x; and x2 ‘Table 10-4 ep | ea =] 08 pe m|o | 1 Find the efficiency and the redundancy 7 of this code, By Eq, (10.49) the average code length L per symbol is 12 rein = 094+ @.)= 16 By Eq. (10.7) HX) =-5 Posy toss Pos) 0.9 log: 0.9 ~0.1 logy 0.1 = 0.469 b/symbol Thus, by Eq. (10.53) the code efficiency 1 is 0.469 = 46.9% By Eq, (10.51) the code redundaney 7 is = 0.531 = 53.1% 10.26. The second-order extension of the DMS X of Prob, 10.25, denoted by X?, is formed by taking the source symbols two at a time. The coding of this extension is shown in Table 10-5. Find the efficiency 1 and the redundaney 7 of this extension code. Table 10-8 a Peay [Code 081 0 0.09 10 0.09 no | oo my L=¥ Plan, = 0810) +0092) +0096) +0016) 29 b/symbol The entropy of the second-order extension of X, H(X*), is given by CHAP. 10] INFORMATION THEORY AND SOURCE CODING 23 H(X)=-Y PCG) logs Pla) ==0.81 log; 0.81 ~0.09 logs 0.09-0.09 logs 0.09 ~0.01 log: 0.01 = 0.938 b/symbol Therefore, the code efficiency 1 is H(X?) _ 0.938 7 Sa Fay = OT = 12.7% and the code redundancy 9 is = 0273 = 27.3% 10.27. Consider a DMS X with symbols ;, i=1,2,3,4. Table 10-6 lists four possible binary codes. Table 10-6 x | CodeA | Code | Codec | Coded x | 00 0 0 0 % 1 wo | ou 100 x 10 nm | 100 10 x ul uo | _10 un (@ Show that all codes except code B satisfy the Kraft inequality. (®) Show that codes 4 and D are uniquely decodable but codes B and C are not uniquely decodable. (@) From Eq, (10.54 ) we obtain the following: For code A: For code B: For code C: For code D: keymady All codes except code # satisfy the Kraft inequality (©) Codes A and D are prefis-ree codes. They are therefore uniquely decodable. Code B does not satisfy the Kraft inequality, and itis not uniquely decodable. Although code C does satisfy the Kraft inequality, itis not uniquely decodable. This can be seen by the following example: Given the binary sequence 0110110. This sequence may correspond to the source sequences x,x2x1%4 OF XX4Xu 274 10.28. 10.29. INFORMATION THEORY AND SOURCE CODING. [CHAP. 10 Verify Eq. (10.52), that is, L> HX) where L is the average code word length per symbol and H(X ) is the source entropy. From Eq. (10.61) (Prob. 
10.4), we have IS <0 Pi FP, tow: where the equality holds only if Q; = P,. Let (10.97) where (10.98) hich sd n a (10.5. (10.99) ant S rim 2 = $ roe hn toe) = Srna Sem don Se, 09 = H)-L-top K=O . From the Kraft inequality (10.54 ) we have log, K=0 (10.101) Thus, H(X)—L = log; K=0 (10.102) 5 Ls mp The equality holds when K'= 1 and P;= Q, Let X be a DMS with symbols x, and corresponding probabilities P(x)) = P;, #= 1,2,....m. Show that for the optimum source encoding we require that 1 (10.103) ant a, (n.0 where n;is the length of the code word corresponding to x, and /; is the information content of x; From the result of Prob. 10.28, the optimum source encoding with L= H(X) requires K=1 and j. Thus, by Eqs. (10.98) and (10.97) (10.103) (10.106) CHAP, 10.30. 10.31, 10) INFORMATION THEORY AND SOURCE CODING 215 Hews nas t= me! Note that Eq. (10.104) implies the following commonsense principle: Symbols that occur with high probability should be assigned shorter code words than symbols that occur with low probability. Consider a DMS X with symbols x; and corresponding probabilities P(x;) = Pj, i= 1,2,..-5m Let 7, be the length of the code word for x; such that log + Plog: Pi + 1) (10.118) a as Srila: P+ =-¥ Pitogs P+ SP, Now = HO) +1 ‘Thus, Eq. (10.114 ) reduces to HQ) SL= HU) +1 276 INFORMATION THEORY AND SOURCE CODING (CHAP. 10 ENTROPY CODING 10.32. A DMS X has four symbols x;,.2,x3, and x, with P(x;) = 4, P(r.) =}, and P(x) = POx) = £ Construct a Shannon-Fano code for X; show that this code has the optimum property that n; = Icx,) and that the code efficiency is 100 percent. The Shannon-Fano code is constructed as follows (see Table 10-7: Table 10-7, x | Po) [ Step! | Step? | Step3 | Code x i 0 0 x 4 1 0 10 x t 1 1 o 110 x i oto 1 Mm -lon }=1=m Ky) ==k 8 5 1 Tosa) = ~log 1 wlog 5 = Me,) =~ log, HX) = ¥ Pexylcx) Lyatayet (1) + 5Q)+ 58) +5@)= 175 _¢ . ley ata ata = L=Y Pm = 5432+ g@+5@ = 1.75 10.33. A DMS X has five equally likely symbols. @ o oO @ Construct a Shannon-Fano code for X, and calculate the efficiency of the code. Construct another Shannon-Fano code and compare the results, Repeat for the Huffman code and compare the results, A Shannon-Fano code [by choosing two approximately equiprobable (0.4 versus 0.6) sets) is constructed as follows (see Table 10-8) Table 10-8 x | Po) | Sep! | Step2 | Step3 | Code x | 02 0 0 LJ x | 02 0 1 a xs | 02 1 ° 10 x | 02 1 1 o 110 xs _|_02 1 1 1 i CHAP. 10) INFORMATION THEORY AND SOURCE CODING 2 H(X) = — PC) logy P(x) = 5(-0.2 logy 0.2) = 2.32 L= Porn = 02242424343) =24 The eftciney m is HO) _232_ 9 geo = HO 2 2 9.967 = 96.7% (®) Another Shannon-Fano code [by choosing another two approximately equiprobable (0.6 versus 0.4) sets] is constructed as follows (see Table 10-9) Table 10-9 x | A) | Step! | Siep2 | Step3 | Code x | 02 0 0 00 nm | 02 0 1 o o10 4 x | 02 0 1 1 on x | 02 1 0 10 xs | 02 L 1 ul L= > Pom = 0.22 43434242) a Since the average code word length is the same as that for the code of part (a), the efficiency is the same. (6) The Huffiman code is constructed as follows (see Table 10-10) La Pexyny = 02043434242 Since the average code word length is the same as that for the Shannon-Fano code, the efliciency is also the same. Table 10-10 Pie) Cale n 02 —& oa— 000 on 02 02 on 502 o2 0 xm 02 02: on 4 2—— i 278 INFORMATION THEORY AND SOURCE CODING (CHAP. 
10 10.34, A DMS X has five symbols x1,x2,5,¥4, and x5 with P(x) = 0.4, P(x) = 0.19, ors) = 0.16, Plxg) = 0.15, and PCs) = 0.1 (@ Construct a Shannon-Fano code for X, and calculate the efficiency of the code. (b)_ Repeat for the Huffman code and compare the results (a) The Shannon-Fano code is constructed as Follows (see Table 10-11) Table 10-11 » | Pe Sep! | Sep? | Swp3 | Code a | oa ° ° 00 a | oss ° 1 a1 5 | 016 1 ° 0 x | ous 1 Hi a 0 «| oo 1 1 1 m1 mx) = -¥ Pes logs Psd = 04 log; 0.4 -0.19 logy 0.19 ~ 0.16 log: 0.16 0.15 log, 0.15 0.1 log, 0.1 =tis L= > Pex, % = 0.4(2) + 0.19(2) + 0.16(2) + 0.1503) + 0.103) (®) The Huffinan code is constructed as follows (see Table 10-12): L=¥ Pen = 0.A(1) + (0.19 +0.16-+ 0.15 + 0.143) = 2.2 Table 10-12 = Pe) ease T 7 7 5 94 ——~___0s —~____os nox —@ 025 —" 03s he oot 000 06 0.9 02s a 10 ay 04S 016s Ol ° oi CHAP. 10] INFORMATION THEORY AND SOURCE CODING. 219 HX) _ 2.15 eer 22 917 = 91.1% ‘The average code word length of the Huffman code is shorter than that of the Shannon-Fano code, and, thus the efficiency is higher than that of the Shannon-Fano code. Supplementary Problems 10.35, Consider a source X that produces five symbols with probabilities $4,475, and 4. Determine the source entropy HX). Ans. 1.875 bisymbol 10,36, Calculate the average information content in the English language, assuming that each of the 26 characters in the alphabet occurs with equal probability Ans. 4.7 bjcharacter 10.37. Two BSCs are connected in cascade, as shown in Fig. 10-13, 08 Ys 7 Fig. 10-13 (@) Find the channel matrix of the resultant channel () Find P(e) and PE) if P(x) = 06 and Pox) = 0.4. 0620.38 ams) [055 002 ) Ply) =0.524, Pee,) = 0.476 10.38. Consider the DMC shown in Fig. 10-14 (@) Find the outpat probabilities if P(x) = $ and Ps.) = Pox) = (6) Find the output entropy H(Y). Ans. (a) PCy) = 1/24, PC () 1.58 bjsymbol = 17/48, and PCs) = 17/48 10.39. Verify Eq. (10.32), that is, WX) = HX) + HY) HX, Y) Hint: Use Bgs. (10.28) and (10.26). 280 10.40, tot. 10.42. 10.43, INFORMATION THEORY AND SOURCE CODING ICHAP. 10 Fig. 10-14 Show that HUX, ¥)= HOC) + HC) with equality if and only if X and Y are independent. Hint: Use Bas. (10.30) and (10.32) ‘Show that for a deterministic channel HOX)=0 Hint: Use Bq, (10.24 ), and note that for a deterministic channel P(x) ate either 0 oF 1 Consider a channel with an input Xand an output Y. Show that if ¥ and Y are statistically independent, then HX) = HX) and 1%: Y) = 0. Hint: Use Eqs. (6.51) and (6.52) in Eqs. (10.23) and (10.28). ‘A channel is described by the following channel matri. (@) Draw the channel diagram. (6) Find the channel capacity. 40 ot Ans. (@) See Fig. 10-15, () 1 bjsymbot : 5 ” ” 4), ' Fig. 10-15 CHAP. 10.44, 10.48, 10.46. 10.47, 10.48, 10.49, 10.50, 1051. 10) INFORMATION THEORY AND SOURCE CODING 281 Let be a random variable with probability density function fy(2), and let ¥ = a+ 5, where a and b are constants. Find H(¥) in terms of F(X), Ans, HY) = H(X) + loga Find the differential entropy HX) of a gaussian random variable X with zero mean and variance o}. Ans. H(X) =} log, Qneay) Consider an AWGN channel described by Eq. (10.46), that is, Y=Xtn where Yand ¥ are the channel input and output, respectively, and n is an additive white gaussian noise with zero mean and variance ¢3. Find the average mutual information /(X; Y) when the channel input is gaussian with zero mean and variance 0. 
Ans. I(X; Y) = (1/2) log2 (1 + σ_X²/σ_n²)

10.47. Calculate the capacity of an AWGN channel with a bandwidth of 1 MHz and an S/N ratio of 40 dB.

Ans. 13.29 Mb/s

10.48. Consider a DMS X with m equiprobable symbols x_i, i = 1, 2, ..., m.

(a) Show that the use of a fixed-length code for the representation of x_i is most efficient.
(b) Let n0 be the fixed code word length. Show that if n0 = log2 m, then the code efficiency is 100 percent.

Hint: Use Eqs. (10.49) and (10.52).

10.49. Construct a Huffman code for the DMS X of Prob. 10.32, and show that the code is an optimum code.

Ans. Symbols:  x1  x2  x3   x4
     Code:     0   10  110  111

10.50. A DMS X has five symbols x1, x2, x3, x4, and x5 with respective probabilities 0.2, 0.15, 0.05, 0.1, and 0.5.

(a) Construct a Shannon-Fano code for X, and calculate the code efficiency.
(b) Repeat (a) for the Huffman code.

Ans. (a) Symbols:  x1  x2   x3    x4    x5
         Code:     10  110  1111  1110  0
     Code efficiency η = 98.6 percent.
     (b) Symbols:  x1  x2   x3    x4    x5
         Code:     11  100  1011  1010  0
     Code efficiency η = 98.6 percent.

10.51. Show that the Kraft inequality is satisfied by the codes of Prob. 10.33.

Hint: Use Eq. (10.54).

ERROR CONTROL CODING

11.1 INTRODUCTION

In this chapter we treat the subject of designing codes for the reliable transmission of digital information over a noisy channel. Codes can either correct or merely detect errors, depending on the amount of redundancy contained in the code. Codes that can detect errors are called error-detecting codes, and codes that can correct errors are known as error-correcting codes. There are many different error control codes. These are classified into block codes and convolutional codes. The codes described in this chapter are binary codes, for which the alphabet consists of only two elements, 0 and 1; the set {0, 1} is known as the binary field (Sec. 11.4).

11.2 CHANNEL CODING

The block diagram for channel coding is shown in Fig. 11-1. The binary message sequence at the input of the channel encoder may be the output of a source encoder or the output of a source directly. The channel encoder introduces systematic redundancy into the data stream by adding bits to the message bits in such a way as to facilitate the detection and correction of bit errors in the original binary message sequence at the receiver. The channel decoder in the receiver exploits the redundancy to decide which message bits were actually transmitted. The combined objective of the channel encoder and decoder is to minimize the effect of channel noise.

Fig. 11-1 Channel coding: binary message sequence → channel encoder → coded sequence → discrete channel → received sequence → channel decoder → decoded sequence

A. Channel Coding Theorem:

The channel coding theorem for a DMC is stated as follows: Given a DMS X with entropy H(X) bits/symbol and a DMC with capacity C_s bits/symbol, if H(X) ≤ C_s, there exists a coding scheme for which the source output can be transmitted over the channel with an arbitrarily small probability of error. Conversely, if H(X) > C_s, it is not possible to transmit information over the channel with an arbitrarily small probability of error.

Note that the channel coding theorem only asserts the existence of codes, but it does not tell us how to construct these codes.
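To make the benefit of controlled redundancy concrete before turning to block codes, the following sketch (an illustration added here, not part of the original text) computes the decoded bit error probability of an n-fold repetition code with majority-vote decoding on a binary symmetric channel with crossover probability p; Prob. 11.3 works a case of this kind by hand.

```python
# Illustrative sketch (not from the original text): bit error probability of
# an n-fold repetition code with majority-vote decoding over a BSC with
# crossover probability p. Decoding fails when more than half of the n
# copies of a bit are flipped by the channel.
from math import comb

def repetition_error_prob(n, p):
    t = (n - 1) // 2   # majority vote corrects up to t errors (n odd)
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(t + 1, n + 1))

for n in (1, 3, 5):
    print(f"n = {n}: P_e = {repetition_error_prob(n, 0.01):.3e}")
```

With p = 0.01 the error probability falls from 10⁻² (uncoded) to about 3.0 × 10⁻⁴ for n = 3 and 9.9 × 10⁻⁶ for n = 5, but the code rate is only 1/n; the channel coding theorem asserts that far better trade-offs between rate and reliability exist.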
11.3 BLOCK CODES

A block code is a code having all its words of the same length. In block codes the binary message or data sequence is divided into sequential blocks each k bits long, and each k-bit block is converted into an n-bit block, where n > k. The resultant block code is called an (n, k) block code. The k-bit blocks form 2^k distinct message sequences referred to as k-tuples. The n-bit blocks can form as many as 2^n distinct sequences referred to as n-tuples. The set of all n-tuples defined over K = {0, 1} is denoted by K^n. The channel encoder performs a mapping

T: U → V

where U is a set of binary data words of length k and V is a set of binary code words of length n, with n > k. Each of the 2^k data words is mapped to a unique code word. The ratio k/n is called the code rate.

11.4 LINEAR BLOCK CODES

A. Binary Field:

The set K = {0, 1} is a binary field. The binary field has two operations, addition and multiplication, such that the results of all operations are in K. The rules of addition and multiplication are as follows:

Addition:        0 ⊕ 0 = 0    1 ⊕ 1 = 0    0 ⊕ 1 = 1 ⊕ 0 = 1
Multiplication:  0 · 0 = 0    1 · 1 = 1    0 · 1 = 1 · 0 = 0

B. Linear Codes:

Let a = (a1, a2, ..., an) and b = (b1, b2, ..., bn) be two code words in a code C. The sum of a and b, denoted by a ⊕ b, is defined by (a1 ⊕ b1, a2 ⊕ b2, ..., an ⊕ bn). A code C is called linear if the sum of two code words is also a code word in C. A linear code C must contain the zero code word 0 = (0, 0, ..., 0), since a ⊕ a = 0.

C. Hamming Weight and Distance:

Let c be a code word of length n. The Hamming weight of c, denoted by w(c), is the number of 1's in c. Let a and b be code words of length n. The Hamming distance between a and b, denoted by d(a, b), is the number of positions in which a and b differ. Thus, the Hamming weight of a code word c is the Hamming distance between c and 0, that is,

w(c) = d(c, 0)    (11.2)

Similarly, the Hamming distance can be written in terms of Hamming weight as

d(a, b) = w(a ⊕ b)    (11.3)

D. Minimum Distance:

The minimum distance d_min of a linear code C is defined as the smallest Hamming distance between any pair of code words in C. From the closure property of linear codes (the sum, modulo 2, of two code words is also a code word) we can derive the following theorem (Prob. 11.9).

THEOREM 11.1
The minimum distance d_min of a linear code C is the smallest Hamming weight of the nonzero code words in C.

E. Error Detection and Correction Capabilities:

The minimum distance d_min of a linear code C is an important parameter of C. It determines the error detection and correction capabilities of C. This is stated in the following theorems.

THEOREM 11.2
A linear code C of minimum distance d_min can detect up to t errors if and only if

d_min ≥ t + 1    (11.4)

THEOREM 11.3
A linear code C of minimum distance d_min can correct up to t errors if and only if

d_min ≥ 2t + 1    (11.5)

Equation (11.5) can be illustrated geometrically by Fig. 11-2. In Fig. 11-2 two Hamming spheres, each of radius t, are constructed around the points that represent code words c_i and c_j. Figure 11-2(a) depicts the case where the two spheres are disjoint, that is, d(c_i, c_j) ≥ 2t + 1. For this case, if the code word c_i is transmitted, the received word is r, and d(c_i, r) ≤ t, then it is clear that the decoder will choose c_i, since it is the code word closest to the received word r. On the other hand, Fig. 11-2(b) depicts the case where the two spheres intersect, that is, d(c_i, c_j) ≤ 2t. In this case, we see that if the code word c_i is transmitted, there exists a received word r such that d(c_i, r) = t, and yet r is as close to c_j as it is to c_i. Thus, the decoder may choose c_j, which is incorrect.

Fig. 11-2 Hamming distance: (a) d(c_i, c_j) ≥ 2t + 1; (b) d(c_i, c_j) ≤ 2t
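Hamming weight, Hamming distance, and the minimum distance of Theorem 11.1 translate directly into code. The following sketch is an illustration added here, not part of the original text; the code words are those of the (6, 3) linear code tabulated later in Prob. 11.23 (Table 11-6).

```python
# Illustrative sketch (not from the original text): Hamming weight w(c),
# Hamming distance d(a, b) = w(a XOR b), and the minimum distance d_min of a
# linear code, computed per Theorem 11.1 as the smallest weight among the
# nonzero code words.

def weight(c):
    return sum(c)

def distance(a, b):
    return sum(ai ^ bi for ai, bi in zip(a, b))

def d_min(code):
    return min(weight(c) for c in code if any(c))

# The eight code words of the (6, 3) code of Prob. 11.23 (Table 11-6):
code = [
    (0, 0, 0, 0, 0, 0), (0, 0, 1, 1, 1, 0), (0, 1, 0, 0, 1, 1), (0, 1, 1, 1, 0, 1),
    (1, 0, 0, 1, 1, 1), (1, 0, 1, 0, 0, 1), (1, 1, 0, 1, 0, 0), (1, 1, 1, 0, 1, 0),
]
print("d(c2, c3) =", distance(code[1], code[2]))   # 4
print("d_min     =", d_min(code))   # 3, so the code corrects t = 1 error
```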
F. Generator Matrix:

In an (n, k) linear block code C, we define a code vector c and a data (or message) vector d as follows:

c = (c1, c2, ..., cn)    d = (d1, d2, ..., dk)

If the data bits appear in specified locations of c, then the code C is called systematic. Otherwise, it is called nonsystematic. Here we assume that the first k bits of c are the data bits and the last (n − k) bits are the parity-check bits formed by linear combinations of the data bits, that is,

c_{k+1} = p11 d1 ⊕ p12 d2 ⊕ ... ⊕ p1k dk
c_{k+2} = p21 d1 ⊕ p22 d2 ⊕ ... ⊕ p2k dk
  ...
c_{k+m} = pm1 d1 ⊕ pm2 d2 ⊕ ... ⊕ pmk dk    (11.6)

where m = n − k. Equation (11.6) can be written in matrix form as

c = dG    (11.7)

where

G = [I_k  P^T]    (11.8)

where I_k is the kth-order identity matrix and P^T is the transpose of the matrix P given by

P = [p11 p12 ... p1k; p21 p22 ... p2k; ...; pm1 pm2 ... pmk]    (11.9)

The k × n matrix G is called the generator matrix. Note that a generator matrix for C must have k rows and n columns, and it must have rank k; that is, the k rows of G are linearly independent.

G. Parity-Check Matrix:

Let H denote an m × n matrix defined by

H = [P  I_m]    (11.10)

where m = n − k and I_m is the mth-order identity matrix. Then

H^T = [P^T; I_m]    (11.11)

Using Eqs. (11.8) and (11.11), we have

GH^T = P^T ⊕ P^T = O    (11.12)

where O denotes the k × m zero matrix. Now by Eqs. (11.7) and (11.12), we have

cH^T = dGH^T = 0    (11.13)

where 0 denotes the 1 × m zero vector.

The matrix H is called the parity-check matrix of C. Note that the rank of H is m = n − k and that the rows of H are linearly independent. The minimum distance d_min of a linear block code C is closely related to the structure of the parity-check matrix H of C. This is stated in the following theorem (Prob. 11.22).

THEOREM 11.4
The minimum distance d_min of a linear block code C is equal to the minimum number of rows of H^T that sum to 0, where H^T is the transpose of the parity-check matrix H of C.
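The generator and parity-check machinery above can be exercised numerically. The following sketch is an illustration added here, not part of the original text; it uses the P matrix of the (7, 4) Hamming code of Prob. 11.24, with NumPy serving only to carry out the modulo-2 matrix products.

```python
# Illustrative sketch (not from the original text): build G = [I_k  P^T] and
# H = [P  I_m] for the (7, 4) Hamming code of Prob. 11.24, then verify
# Eqs. (11.7), (11.12), and (11.13) with arithmetic modulo 2.
import numpy as np

P = np.array([[1, 0, 1, 1],
              [1, 1, 0, 1],
              [0, 1, 1, 1]])
m, k = P.shape                                  # m = 3 parity bits, k = 4 data bits

G = np.hstack([np.eye(k, dtype=int), P.T])      # Eq. (11.8)
H = np.hstack([P, np.eye(m, dtype=int)])        # Eq. (11.10)

assert not (G @ H.T % 2).any()                  # G H^T = O, Eq. (11.12)

d = np.array([1, 0, 1, 1])
c = d @ G % 2                                   # c = dG, Eq. (11.7)
print("code word:", c)                          # [1 0 1 1 1 0 0], as in Table 11-7
print("c H^T    :", c @ H.T % 2)                # [0 0 0], Eq. (11.13)
```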
H. Syndrome Decoding:

Let r denote the received word of length n when code word c of length n was sent over a noisy channel. Then r = c ⊕ e, where e is called the error pattern. Note that e = r ⊕ c. Consider first the case of a single error in the ith position. Then we can represent e by

e = (0 ⋯ 0 1 0 ⋯ 0)    (1 in the ith position)    (11.14)

Next, we evaluate rH^T and obtain

rH^T = (c ⊕ e)H^T = cH^T ⊕ eH^T = eH^T = s    (11.15)

in view of Eq. (11.13); s is called the syndrome of r. Thus, using s and noting that eH^T is the ith row of H^T, we can identify the error position by comparing s to the rows of H^T. Decoding by this simple comparison method is called syndrome decoding. Note that not all error patterns can be correctly decoded by syndrome decoding. The zero syndrome indicates that r is a code word and is presumably correct.

With syndrome decoding, an (n, k) linear block code can correct up to t errors per code word if n and k satisfy the following Hamming bound:

2^(n−k) ≥ Σ_{i=0}^{t} C(n, i)    (11.16)

where C(n, i) = n! / [i!(n − i)!]. A block code for which the equality holds in Eq. (11.16) is known as a perfect code. Single-error-correcting perfect codes are called Hamming codes. Note that the Hamming bound is necessary but not sufficient for the construction of a t-error-correcting linear block code.

11.5 CYCLIC CODES

A. Definition:

Let c = (c0, c1, ..., c_{n−1}) be a code word in a code C. The cyclic shift σ(c) of c is defined by

σ(c) = (c_{n−1}, c0, c1, ..., c_{n−2})    (11.17a)

and a second cyclic shift produces

σ²(c) = σ{σ(c)} = (c_{n−2}, c_{n−1}, c0, ..., c_{n−3})    (11.17b)

Note that σⁿ(c) = c. A code C is said to be a cyclic code if the cyclic shift of each code word is also a code word. Cyclic codes are a subclass of linear block codes.

B. Code Polynomials:

We define a code polynomial c(x) corresponding to c as

c(x) = c0 + c1 x + ... + c_{n−1} x^(n−1)    (11.18)

Note that c(x) is a polynomial of degree at most (n − 1) over K = {0, 1}, where the coefficients c0, c1, ..., c_{n−1} are elements of K. Thus, a code C of length n can be represented as a set of polynomials over K of degree at most n − 1. The polynomials over K are added and multiplied in the usual fashion, except that since 1 ⊕ 1 = 0, we have x^k ⊕ x^k = 0. The set of all polynomials over K is denoted by K[x].

Let f(x) and h(x) be in K[x] with h(x) ≠ 0. Then there exist unique polynomials q(x) and r(x) in K[x] such that

f(x) = q(x)h(x) + r(x)    (11.19)

with r(x) = 0 or degree [r(x)] < degree [h(x)]. The polynomial q(x) is called the quotient, and r(x) is called the remainder. The procedure for finding q(x) and r(x) is the familiar long-division process with modulo-2 arithmetic among the coefficients. We say that f(x) modulo h(x) is r(x) if r(x) is the remainder when f(x) is divided by h(x), and we write

r(x) = f(x) mod h(x)    (11.20)

If a(x) mod h(x) = r(x) = b(x) mod h(x), then we say that a(x) and b(x) are equivalent modulo h(x), denoted by

a(x) = b(x) mod h(x)    (11.21)

Note that

x^n = 1 mod (1 + x^n)    (11.22)

C. Generator Polynomial:

The code polynomial c^(1)(x) corresponding to σ(c) [Eq. (11.17a)] is

c^(1)(x) = c_{n−1} + c0 x + c1 x² + ... + c_{n−2} x^(n−1)    (11.23)

Now from Eq. (11.18)

x c(x) = c0 x + c1 x² + ... + c_{n−1} x^n

Using Eq. (11.22), we have

c^(1)(x) = x c(x) mod (1 + x^n)    (11.24)

In a similar fashion, we get

c^(i)(x) = x^i c(x) mod (1 + x^n)    (11.25)

THEOREM 11.5
If g(x) is a polynomial of degree (n − k) that is a factor of 1 + x^n, then g(x) generates an (n, k) cyclic code C in which the code polynomial c(x) for a data word d = (d0, d1, ..., d_{k−1}) is generated by

c(x) = d(x)g(x)    (11.26)

where d(x) = d0 + d1 x + ... + d_{k−1} x^(k−1) is the data polynomial corresponding to the data word.

The polynomial g(x) is called the generator polynomial (Prob. 11.27). Note that the resultant cyclic code C is, in general, not systematic. To construct a systematic cyclic code, see Prob. 11.35.

D. Parity-Check Polynomial:

Let h(x) be a polynomial of degree k that is also a factor of 1 + x^n. Then h(x) is the parity-check polynomial of an (n, k) cyclic code C. The parity-check polynomial h(x) and the generator polynomial g(x) of C are related by

g(x)h(x) = 0 mod (1 + x^n)    (11.27a)

or

g(x)h(x) = 1 + x^n    (11.27b)

E. Syndrome Polynomial:

Let r(x) be the received word polynomial when the code word polynomial c(x) was sent. Then

r(x) = c(x) + e(x)    (11.28)

where e(x) is the error polynomial. The syndrome polynomial s(x) is defined by

s(x) = r(x) mod g(x)    (11.29)

Assuming g(x) has degree n − k, then s(x) will have degree less than n − k and will correspond to a binary word s of length n − k. Since c(x) = d(x)g(x), we have

s(x) = e(x) mod g(x)    (11.30)

Thus, the syndrome polynomial depends only on the error. Note that if degree [e(x)] < degree [g(x)], then e(x) = e(x) mod g(x) and thus s(x) = e(x). The following theorem specifies the condition for the existence of a unique syndrome polynomial (Prob. 11.32).

THEOREM 11.6
Let C be a cyclic code with minimum distance d_min. Every error polynomial of weight less than d_min/2 has a unique syndrome polynomial.

From the preceding, the error-correction problem is to find the unique e(x) with the least weight satisfying Eq. (11.30). (Note that the weight of a polynomial is the number of nonzero coefficients in the polynomial.) This can be done as follows: For each correctable e(x), compute and tabulate s(x), as shown in Table 11-1. The table is called the syndrome evaluator table. A decoder finds the error polynomial e(x) by computing s(x) from r(x) and then finding s(x) in the syndrome evaluator table, thereby finding the corresponding e(x) (see Prob. 11.34).

Table 11-1 Syndrome Evaluator Table
e(x)       s(x)
1          1 mod g(x)
x          x mod g(x)
x²         x² mod g(x)
1 + x      (1 + x) mod g(x)
1 + x²     (1 + x²) mod g(x)
...        ...
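The syndrome evaluator procedure can be carried out with a few lines of polynomial arithmetic. The following sketch is an illustration added here, not part of the original text; polynomials over K are stored as Python integers with bit i holding the coefficient of x^i, and the numbers reproduce Prob. 11.34 for the (7, 4) cyclic code with g(x) = 1 + x + x³, assuming a single error.

```python
# Illustrative sketch (not from the original text): syndrome decoding of a
# cyclic code by polynomial division modulo 2. Bit i of an integer holds the
# coefficient of x^i.

def poly_mod(f, g):
    """Remainder of f(x) divided by g(x), coefficients modulo 2."""
    dg = g.bit_length() - 1
    while f.bit_length() > dg:
        f ^= g << (f.bit_length() - 1 - dg)   # subtract (XOR) a shifted g(x)
    return f

n, g = 7, 0b1011              # g(x) = 1 + x + x^3
# Syndrome evaluator table (Table 11-1) for single-error patterns e(x) = x^i:
table = {poly_mod(1 << i, g): 1 << i for i in range(n)}

r = 0b1100111                 # r = (1 1 1 0 0 1 1), r(x) = 1 + x + x^2 + x^5 + x^6
s = poly_mod(r, g)            # s(x) = 1 + x^2
c = r if s == 0 else r ^ table[s]   # c(x) = r(x) + e(x), here e(x) = x^6
print(bin(c))                 # 0b100111, i.e., c = (1 1 1 0 0 1 0)
```

Dividing the corrected polynomial by g(x) returns the quotient d(x) = 1 + x², that is, the data word (1 0 1 0), in agreement with Prob. 11.34.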
F. Implementation:

One of the advantages of cyclic codes is their easier implementation. Cyclic encoders can be implemented by shift registers. These devices consist of m registers (or delay elements) and a "clock" that controls the shifting of the data contained in the registers.

Fig. 11-3 m-stage shift register

An m-stage shift register, shown in Fig. 11-3, is a shift register with m registers. The output of an m-stage shift register is a linear combination of the contents of the registers and can be described using coefficients g0, g1, ..., g_{m−1} with g_i ∈ K = {0, 1}. Let c_t be the output at time t; then

c_t = g0 X0(t) + g1 X1(t) + ... + g_{m−1} X_{m−1}(t)    (11.31)

where X_i(t) is the value of the contents of register X_i at time t. Given a fixed generator polynomial g(x) of degree n − k for an (n, k) linear cyclic code C, we can build an (n − k + 1)-stage shift register with generator g(x) to implement polynomial encoding of the data polynomial d(x).

Polynomial division (and thus polynomial decoding for cyclic codes) can be implemented by devices known as feedback shift registers. A feedback shift register is a shift register with the output fed back into the shift register, as shown in Fig. 11-4.

Fig. 11-4 Feedback shift register

Feeding r(x) = r0 + r1 x + ... + r_{n−1} x^(n−1) into a feedback shift register with g(x), with high-order coefficients fed in first (i.e., r_{n−1}, r_{n−2}, ..., r0), is equivalent to dividing r(x) by g(x). The output after n clock ticks will be the quotient (high-order coefficients first), and the contents of the registers will be the remainder (high-order digits to the right).

G. Generator Matrix:

There exist many generator matrices G for linear cyclic codes. The simplest is the matrix in which the rows are the code words corresponding to the generator polynomial g(x) and its first k − 1 cyclic shifts:

G = [g(x); x g(x); ...; x^(k−1) g(x)]    (11.32)

Note that the resulting generator matrix G is, in general, not in the form shown in Eq. (11.8). In order to obtain a generator matrix for a systematic cyclic code, one can apply elementary row operations to the matrix of Eq. (11.32) and transform it to an equivalent matrix that is in the form shown in Eq. (11.8) (see Prob. 11.41).

H. Special Cyclic Codes:

1. Golay Codes:

The (23, 12) Golay code is the only known three-error-correcting binary perfect code. The minimum distance of the code is 7. The (23, 12) Golay code is generated either by the polynomial

g1(x) = 1 + x² + x⁴ + x⁵ + x⁶ + x¹⁰ + x¹¹    (11.33)

or by the polynomial

g2(x) = 1 + x + x⁵ + x⁶ + x⁷ + x⁹ + x¹¹    (11.34)

Note that

1 + x²³ = (1 + x) g1(x) g2(x)    (11.35)

2. Bose-Chaudhuri-Hocquenghem (BCH) Codes:

The BCH codes are the most efficient error-correcting cyclic codes known. The BCH codes offer flexibility in the choice of code parameters, that is, block length and code rate. The most common BCH codes are characterized as follows: For any positive integers m (≥ 3) and t, there exists a binary BCH code with block length n = 2^m − 1, number of parity-check bits n − k ≤ mt, and minimum distance d_min ≥ 2t + 1.

Solved Problems

... = C(5, 3)(0.01)³(0.99)² + C(5, 4)(0.01)⁴(0.99) + C(5, 5)(0.01)⁵ = 9.85(10⁻⁶) ≈ 10⁻⁵

For n = 7, t = 3, and p = 0.01,

P = C(7, 4)(0.01)⁴(0.99)³ + C(7, 5)(0.01)⁵(0.99)² + C(7, 6)(0.01)⁶(0.99) + C(7, 7)(0.01)⁷ = 3.416(10⁻⁷)

11.3. In a communication channel encoder, every data bit is repeated five times, and at the receiver a majority vote decides the value of each data bit. If the uncoded bit error probability p is 10⁻², calculate the coded bit error probability when using this best-of-five code.

A coding error will be made if three or more of the repetitions are received in error.
Thhus, the coded bit exon probity 9, => (Fra- ~ (Sarp 0-109 10% Ina single-parity-check code, a single parity bit is appended to a block of k data bits (did, The single parity bit cy is chosen so that the code word satisfies the even parity rule: ed e+ +d eq=0 (a) For k = 3, set up all possible code words in the (4,3) code. (8) Which error patterns can the (4,3) code detect? (©) Compute the probability of an undetected symbol error, assuming that all symbol errors are independent and that the probability of symbol error is p = 0.01. (@) There are 2* = 2° = 8 possible code words, which are shown in Table 11-2. 298 us. 11.6, 17. ERROR CONTROL CODING [CHAP. 11 Table 11-2 Data code | Parity bit_| Code word | 000 0 0000 or 1 oon 010 1 | oor on o | ono 100 1 1001 01 6 1010 110 ° 1100 m 1 un (©) The (4, 3) code is capable of detecting all single-error and triple-error patterns. (0) The probability of an undetected symbol error Py. is equal to the probability that two or four errors occur anywhere in a code word I= pr + (3 4 =6P—p +p = 6(0.01)7(0.99)" + (0.01)* = 5.88(10~) Show that C = {0 00, 1 1 1} isa linear code, Since all four of the sums 0008000=000) dine@oH=aI) AIDS O00= C1) CINSUIN= 000) are in C, C is a linear code. Show that C r code. {000,001,101} is not a line Since the sum (0 0 1) @ (1.0 1) = (1 0 0) is not in C, Cis not a linear code. Show that the Hamming distance measure has the following properties () dla,b)=0 with =0 if and only if a=b (1.400) 2) dla,b) = atb,a) (11.406) (3) dla,e) = da,b) + dib,c) (11.400) Prom Eq. (1.2) dab) — (a ® b). Ifa 7 b, then w(a ® b) > 0, and if a — b, thon aDb— a® and w(a ® a) = w(0) = 0. Thus, property (1) is verified. Property (2) follows since a ® b dla,e) waee => la-el where aj, cj are elements of K = {0, 1} and m is the code length. Similarly la~64 db, ©) = w(b Be) Sled 5 lero Now Ka,- 6) = (1 = lab) + le,~ Bl CHAP. 11) ERROR CONTROL CODING 299 Tins, Sle el< Lla-sl+ Sle or dae) = dab) + abe) which is property (3). 11.8. Consider the following code vectors: e=[1 001 0] 2=[0 110 1] e@=[1 100 1] (@) Find dey, €2), dle, €3), and d{eo, es). (6) Show that Ces, €2) + Alea, €3) = dley,€) (@ From Eq, (/1.3) we obtain ae,,e2) =m) Be) wl 1 1 1 1 dey, €5) =e Bes) =w[0 1 0 1 1]=3 es 63) =w(e; Bes) =wl1 0 1 0 0} o Ae, €2)+d(e3,¢5) = 5 +223 =dle,¢5) 11.9, Prove Theorem 11.1, that is, the minimum distance of a linear block code is the smallest ‘Hamming weight of the nonzero code vectors in the code. From Eg, (11.3) ale 6) =W6e; 86) Then dig = min de,,¢) = min w(e, © ¢) aan Hence, by definition of linear code, Eq. (1/.41) becomes sig = min (0) a2 in = mip wt tay 11.10. Verify Theorem 11.2, that is, a linear code C of minimum distance dmin can detect up to ¢ errors if and only if dpyin = t+ 1. Let e be a nonzero error pattern with W(e) = = dig 1 and let ¢ be a code word in a linear code C. Then lee Be) = weBESE) = we) < dyin Thus, r = €@ eis not in C. Therefore, C can detect e; that is, C.can detect up t0 £ (= dain ~1) F808 if lm + 1. Proof of “only if” partis left for exercise. IL.A1. Verify Theorem 11.3; that is, a linear code C of minimum distance dyn €an correct up to f errors if'and only if dig = 20 + 1 Let e be a nonzero error pattern with we) = 1 = Kis D) 300 ERROR CONTROL CODING (CHAP. 11 and let @ and b be code words in a linear code C with a # b. By Eq. 
(11.400) , dha,a 6) + dab e,b) > cha, b) = don db, 6) + dla e,a) = aba) = dain Now w(e) = d (a @ ea), and 20(6) + 1 = (dgin ~ 1) +1 = dy Thus, we have ib, a6) +(e) = 2w(e) + 1 Then th, €) > we) +1 = daa e+ and we have da,aoe S(1) “There are a total of 2" ~* syndromes, including the alL0 syndrome. Bach syndrome corresponds toa specific crror pattern, The number of possible tuple erors in an mbit code word is equal to the number of ways of choosing bits out of namely, (" orn. el mater fl se ror tes ct 0) we 1th masinu unter fs nano at Theor fa a ck i capable of correcting up to ferrors, the total number of syndromes must not be less than the total number of all possible error patterns. Thus, we must have 31) CHAP. 11] ERROR CONTROL CODING 305 11.20. Consider a single-error-correcting code for 11 data bits (@ How many check bits are required? (6) Find a parity-check matrix H for this code. @ By Eq, (11.16) tn =3(1)-(3)+(1)- P>1+m—m>4 Thus, at least 4 check bits are required. (6) By Eq. (1.11) w[] For a single-error-correcting condition, itis required that the first 11 rows of matrix HT be unique. They must also differ from the last 4 rows containing a single 1 in each row and cannot include an al-0 With this requirement, a parity-check matrix H (transpose of H") for the (15, 11) code is given by 11000 1o11000 a[i oo 0 0 To 10 1 1 o1 1 1 1 1 110100 I 110010 0 110001 1 10 o1 11.21. Show that if ¢; and ¢ are two code vectors in an (n, k) linear block code, then their sum is also a code vector. Since all code vectors ¢ must satisfy Eq. (1/.13), we have eH =0 and Gli” =0 Then (26) =o OG HT =040=0 ary which indicates that ¢; & g, is also a code vector. 11.22. Prove Theorem 11.4; that is, the minimum distance of a linear block code is equal to the minimum number of rows of H™ that sum to 0. By Ea, (11.13) cH” The product eH" is a linear combination of rows of H”. (See Prob. 11.18.) Hence, the minimum number of rows of H’ that can be added to produce 0 is aero which equals din by Eq. (11.42). 11.23. Consider the (6, 3) code of Prob. 11.13. (a) Show that dig = 3 and that it can correct a single error. (®) Using the minimum distance decoding rule, repeat part (¢) of Prob. 11.13. 306 ERROR CONTROL CODING [CHAP. 11 (@) From part (6) of Prob. 11.13, the code vectors and their Hamming weights are listed in Table 11-6 Since dy is the smallest Hamming weight of the nonzero code vectors in the code, dain = 3. Then by Eq. (//.3) dyig = 32241 Which is satisfied by = 1, Hence, the code can correct a single error. Table 11-6 © ‘Hamming weight 1 = (000000) 0 = (001110) 3 = 10011) 3 e@=D1110n 4 es=[l0011 i) 4 e=[101001) 3 e=[110100) 3 e=1111010) 4 (6) The received vector is r= {01 0.11 I}. Then by Bq. (1.3) are) =4 aleses) dlr,e,) = 3. aleses)=5 dhe) = 1 deve) =3 dls,eg) = 2 dleses)=4 ‘The minimum distance between the code vectors and the received vector is e,03) Thus, we conclude that the code vector ¢s = [0 1.0.01 I] was sent and the data bits are 010 [which is the same result obtained in part (e) of Prob, 11.13] 11.24, Consider a (7, 4) linear block code with the parity-check matrix H given by 1011100 H=|1 101010 o1riroot (@) Construct code words for this (7, 4) code. (6) Show that this code is a Hamming code. (©) Iustrate the relation between the minimum distance and the structure of the parity-check matrix Hf by considering the code word 0101100. (@) By Eqs, (11.10) and (11.8) the generating matrix G for this code is ooo1r1o0 1000 o101 oo1d 1 0 1 0 1 0 1 CHAP. 
11] : ERROR CONTROL CODING 307 Table 11-7 Data word | Code word | Hamming weight ‘0000 ‘0000000 0 001 coon | 4 010 010101 3 oot oono10 | 3 100 100011 3 ono1 101100 3 o110 o110n0 4 ou 111001 4 1000 | 1000110 3 1001 1001001, 3 110 | 010011 4 1011 1011100 4 1100 1100101 4 iol Lio1010 4 110 1110000 3 nu muy 7 With k = 4, there are 2* = 16 distinet data words, which are listed in Table 11-7. For a given data ‘word, the corresponding code word is obtained by using Eq, (11.7). The resultant code words are listed in Table 11-7, (®) Table 11-7 also lists the Hamming weights of all code words. Since the smallest of the Hamming ‘weights for the nonzero code words is 3, we have dyin = 3. Thus, by Eq. (/1.5) the code can correct a single error. Next, n= 7 and k = 4, and we have 2° = 2? = X()-G)+ Go a Thus, the equality holds for the Hamming bound of Eq. (/.16), and the code is a Hamming code. (c) With code vector ¢ = [0 1 0 1 1 00), the matrix multiplication defined by Eq. (11.13) indicates that the second, fourth, and fifth rows of matrix H” yield [o 1 tell 1 1Jef1 0 o]=[0 0 9} Similar calculations for the remaining 14 nonzero code vectors indicate that the smallest number of rows in H” that sums to O is 3, which is equal to dyin (Theorem 11.4). CYCLIC CODES 11.25, Show that the code C= {000.0,0101,1010,111 1} isa linear cyclic code. 0101 @ 1010 = 1111 0101 @ 1111 = 1010 10106 1111 = 101 and C contains 00.0 0, Thus, C is linear. Next, (0000) = 0000, o(0101)= 1010, (1010) = 0101, (1111) = 1111 Since o(@) is also in C for each ¢ in C,C is cyclic, Thus, C is a linear cyclic code, 308 11.26. 11.27. 11.28, 11.29. ERROR CONTROL CODING (CHAP. 11 Show that the code C= {00 0, 100,011, 11 1) is not cyclic. The cyclic shift of e = 100 is a(100) = 010 which is not in C. Thus, C is not eyelic. Verify Theorem 11.5. Consider k polynomial’s g(x),xg(x),....x*~'g(x), which all have degree n—1 or less. Let o(x) be any linear combination of these polynomials, (2) = dg) + dhyrgla) + + + + + de-ix 90) = dogo where x) = dy + dy ts ss be yx! Then degree of [c(x)} dis~ Thus, the right-hand side of Eg. (/1.49) is 0, and consequently, e,(x) = ex). 11.33, Construct the syndrome evaluator table for the (7, 4) cyclic code of Prob. 11.31. By Eq, (11.30) the syndrome polynomial s(x) is s(x) = e(x) mod g(x) where e(2) is the most likely error polynomial. Then the syndrome evaluator table is given by Table 11-8 11.34. In the (7, 4) eyclic code of Prob. 11.31, sequence (1 1 1 0.0 1 1) is received. Find the data word sent, MOON + rQ)= lt xtat ext 38 By Eq, (11.29) the syndrome polynomial s(x) is s(x) = r(x) mod g(x) = (x9 ba? +x? +841) mod GP +x+ =e +1= 14.7 ‘Assuming only one error occurred, then from syndrome evaluation Table 11-8 (Prob. 11.33) we have sQ)=l4x + ex)=x* ‘Thus, ox) = 163) + eX) = L$? 42° 310 ERROR CONTROL CODING ICHAP. 11 Table 11-8 Syndrome Evaluator Table [9 3) 1 Tmod (Il +x 4x)=1 x x mod (l+x+x)=x xe 2? mod (1 + x + 8) = 27 x“ mod (l¢x+e)=14x xs at mod (1+ x-+x)= xt? Mo | mod txts ltt? af x® mod (1+ x 430) = 14.27 tex | Lt xmod (dtxt x)= 14x lee | 14tmod (+x sy tt? mi daa tat Thus, the data word is (1 0 1 0). = 1010 11.35. Let g(x) be the generator polynomial of a cyclic code C. Find a scheme for encoding the data sequence (dy,dj,..-, dk) into an (1, k) systematic code C. Let ¢ be a code word of length » with the following structure. = Cas biyooes Prckets dandy ssn) where 6; (i= 0,L,...0— k= 1) are (nm ~ k) parity bits. 
Let day = dy ty x dea BE) = by $B) Ppt ‘Then the code polynomial e(x) can be expressed as (x) = B(x) + Fae) Since g(x) is a factor of efx), we have (8) = a(a)g(x) = B(x) + x(x) ot wkd) a) Fd) 94D 1150 we Fa) ry Equation (11.50) shows that B(x) is the remainder of x"~*dx)/g(x), Thus, we can state the encoding procedure fora systematic cyclic code as follows 1. Multiply ax) by x". 2. Divide x"*a(x) by g(x) and obtain remainder (x). 3. Add (x) + x"*d(x) to form the code polynomial e(x). 11,36, Consider a (7, 4) cyclic code with generator polynomial g(x) = 1 +x + x3. Let data word 01 0). Find the corresponding systematic code word. n=1k= k= 3, and dx) = 1+ CHAP. 11] ERROR CONTROL CODING 3 ‘Now, by the procedure stated in Prob. (11.35), we have wd) = St A= +5 x¥doy te ay ge) eel” *SeeeT and Thus, (a) = 3° and of) = B(x) + Fal) = 2 ba x8 Hence, the corresponding code word ise = (0.0 1 1 0 10). 11.37. Consider the cyclic code C of Prob. 11.25. Find a generator matrix G for C. C={0000,0101,10101 111} 4x2, Hence, n= 4 and k= 2, and From Prob. (11.28), the generator polynomial for C is g(x) = g@=lte 1010 xgQextx O10 Lore [toi] By Eq, (11.32), we have Note that C is a (4, 2) systematic cyclic code. 11.38. Let C be a (7, 4) cyclic code with g(x) = 1 +.x-+ 2°, Find a generator matrix G for Cand find the code word for d = (1.0 10) = 4, we have Since n= 7, gay=l+x+x° 1101000 xgay—xtetx¢ 0110100 Seat te tes = O10 Seix)= tate’ = ooortoL Then by Eq, (17.32), we have ast) Ford = [1010] o19 = 110019 It 1 00 0 Thus, we obtained the code word (1 1 1.00 1 0), which is the same result obtained in Prob. 11.31 11.39. Consider an (n, k) cyclic code C with generating polynomial g(x). Find a procedure for construction of (n ~ k) Xn parity-check matrix H. Let A(x) be the parity-check polynomial of C. Then by Eq. (11.27a) g(xh(x) = 0 mod Lx” 312 ERROR CONTROL CODING [CHAP. 11 which indicates that g(x) and A(x) are orthogonal (mod 14-2"). Furthermore, x¢(x) and h(x) are also orthogonal for all i and j, However, the vectors corresponding to g(x) and f(x) are orthogonal oaly if the ordered clements of one of these vectors are reversed (see Prob. 11.64). The same statement applies to the vectors corresponding to x'g(x) and wh(x), Thus, we can construct (1 — k) x n parity-check matrix H in which rows are the code words corresponding to h(x) and its first n — k — 1 cyclic shifts in reversed order. Now, if Bo) = ig yc eo big aE gE gy Iga It (152) then its reversed order is given by XK) = faa Hak to ANE gam gale 8 + hy (1.53) 11.40, Find the parity-check polynomial h(x) and the parity-check matrix H for the (7, 4) cyclic code C with g(x) = 1+ xx. By Eqs. (/1.27b) and (11.48), the parity-check polynomial f(x) is Wx) = (1+ $7 $e) = Dx to tt Now, by Eq, (1.53), we have MH = txt tx® OOIOIL Thus, according to the procedure stated in Prob. 11.38, the parity-check matrix His given by ooO10TId [a bottd :] ans ro1110 0, Using Eqs. (11.51) and (11.54), is easily verified that GH? = 0, where 0 is 4 x 3 zero matrix, 11.41. Consider a (7, 4) eyclie code C with g(x) = 1 + x + x°. Using the result of Prob. 11.38 [Eq. (17.5)}, construct a (7, 4) systematic cyclic code generator matrix. From Eq. (1/.51), we have 1101000 o11o0100 oo11010 ooo11oOL Applying elementary row operations on the preceding matrix G, we can obtain the following equivalent matrices Gy and Gy Gs 1 o Q G= 1 0 t 1 t 0 110 old tid 104 o } 1 0 1 1 1 0 1 1 1 o 000 0 Generating matrix G; is in the form of Eq. 
(/1.8) and it will generate a systematic (7, 4 eyclic code C where data bits will appear in the first 4 bits of code words, whereas G; will generate a systematic (7, 4) eyelic code C where data bits will appear in the last 4 bits of code words. 0 11.42. Using the result of Prob. 11.41, redo Prob. 11.36. a=(1 0 1 0) Using G2 of Prob. 11.41, we have CHAP. 11) ERROR CONTROL CODING lo 1 c=dG t 1 I 1 00 100 10 0 o1 313, =(0 01101 0) Thus, e= (0.01 10.1 0), which is the same result obtained in Prob. 11.36, CONVOLUTIONAL CODES 11.43, Consider the convolutional encoder shown in Fig. 11-11. @ @) @ o 11.44 (a) 0) o> yy Fig. 1-11 Find the impulse response of the encoder. Using the impulse response, determine the output code word for input data d = (1 0 1), input XX 1orot o 010 Thus, we have impulse response 11 0 1. input output 1 1 Ol 0 00 00 1 1 ol 1 nL modulo 2 sum: 11 01 1 Thus, the output code word is (1101110 1. ol Sketch the state diagram, the tree diagram, and the trellis diagram for the convolutional encoder of Fig. 11-11 Find the free distance of this convolutional code. i uf boy ; 12 % a0 Su NS. betes © 314 ERROR CONTROL CODING (CHAP. 11 (@) There are two states a= 0 and = 1. The state diagram, the tree diagram, and the trellis diagram are sketched in Fig. 11-12 (a), (6), and (c), respectively (®) From the trellis diagram [Fig. 11-12 (@)]}, we observe that path path metric aba 3 abba 4 abbba 5S Thus, the free distance of this eode is 3 11.45 (a) Find the generator matrix G for the convolutional encoder of Fig. 11-1] (®) Using the generator matrix G of (a), find the output code word for the input data d. = (1.0 1). (@) Using the impulse response of the encoder (I 1 0 1) (Prob. 11.43), the generator matrix G for the convolutional encoder of Fig. 11-11 can take the form of staggered impulse response as follows: 1101 1101 1 o1 where ements nt shown ae Os. The suture of G consis of large number of rows, with each tow sideatipped ou ight by 2 (eunter of ouput tein) cements bu olbervise deta to he one above The number of rows in G may extend indfntly depending onthe length of apt aoe 11010000 : so-woufe drier. ooootiet Thus, the output code word is ¢ = (1 1.01110 1), which is the same result obtained in Prob. 11.43. 11.46, Consider the convolutional encoder of Fig. 11-6. Using the state diagram of Fig. 11-7, find the output code word ¢ for the message sequence d = ( 1 0 1) followed by 2 zeros to flush the register. From Fig. 11-7 with input sequence 1 0 1 00, and remembering a solid line for input 0 and a dashed line for 1, we obtain the path a b ¢ 6c a with the output sequence e = (110100011 1). 11.47, Consider the convolutional encoder of Fig. 11-6. Assuming that a S-bit message sequence (followed by zeros) is transmitted. Using the state diagram, find the message sequence when the received sequence r= (110100100110110000...). Using the state diagram of Fig. 11-7, starting state a and decoding the received sequence 2 bits at atime, 11,0 1,00, 10,01,10,11,00,00..., we obtaina path ab cbddc aa, Noting 0 is represented by a solid line and 1 by a dashed line, we obtain the decoded sequence as 1.01 1 1000 .... Since the message sequence is a Sbit sequence, we decide the message sequence as (101 I 1). 11.48, Consider the convolutional encoder shown in Fig. 11-13. (a) Find the impulse response of the encoder. (b) Find the output code word if the input sequence is all I’s (11111 1...). (©) Discuss the result of (6). CHAP. 11} ERROR CONTROL CODING 315 Fig. 
11413, O ‘input Xo xy xy uy vp 1 ofofi|{t o }o]ifofi1)o o f{oltof:rfofi Thus, the impulse response is (11 10.01). © inp output 1 rr | 10 | ot 1 11] 10 | on 1 11 | 10 | on 1 en modulo 2sum | 11 | o1 | oo | oo | 00 | 00 Thus, the output code word is e = (1 1.0100.0.000... all zeros). (0) The output code word has weight 3, and it is closer to the all-zero sequence that is the output code word associated with the all-zero input data word. Furthermore, if the following channel error sequence occurs e=1101000000...=¢ then the received sequence is all zeros, and any decoder would decide in favor of the allzero information sequence. Thus, error event of finite weight and duration produces an infinite number of decoding errors, Note that the preceding infinite number of decoding errors is called a catastrophic error, and such an encoder is called catastrophic encoder. 11.49, Consider the catastrophic encoder of Fig. 11-13 (Prob. 11.48). (@_ Find the polynomial representation of the encoder and discuss the significance. (0) Sketch the state diagram of the encoder and discuss its significance. (@ From Fig. 11-13 the generator polynomial g)(x) for the upper connection and go(x) for the lower ‘connection are given by g@=t+x gC) 316 ERROR CONTROL CODING (CHAP. 11 Note that g(x) and g(x) have in common the polynomial 1+ x, since 1 +7 = (14+ x)(1 + x). This is the condition for catastrophic error propagation. The condition for catastrophic error propagation is that the generator polynomials have a common factor of at least degree one. (8) The state diagram for the encoder of Fig, 11-13 is sketched in Fig, 11-14. Fig. 11-14 Note that at state d a nonzero input causes the encoder to break back to itself with zero output. Thus, catastrophic error can occur. It is seen that all 1s input sequence will travel the path a bd dd. producing output sequence of (1101000000...) 11.50, Consider that the convolutional encoder of Fig. 11-6 is used over a binary symmetric channel’ (BSC). Assume that the initial encoder state is the 0 0 state. At the output of the BSC, the received sequence is = (110000011 1 rest all 0). Find the maximum likelihood path through the trellis diagram and determine the first 5 decoded data bits LAS Decoder trellis diagram Figure 11-15 shows the trellis diagram with branch metric, which is the Hamming distance between the branch output word and received sequence. According to Viterbi decoding, we proceed as follows: First, cumulative path metric of a given path at time ¢; is computed. If any two paths in the trellis merge to a single state, one of two having greater metric will be eliminated, Should metrics of the two entering paths be of equal value, one path is chosen by flipping a coin. This process is repeated at time ‘1. Figure [1-16 shows this process, and the surviving path into each state is shown in Fig, 11-16 (d), (f), and (h). Finally, at time ts we decide the most likely path abchea (with metric 1), as shown in Fig. 11-16 (h). From Fig. 11-9, the corresponding code word is e= (1 10100011 1) and the first 5 data bits are 10100, CHAP. 11] ERROR CONTROL CODING 317 b= 106 en01e dee ® wo Fig. 11-16 Selection of survivor paths 318 1151. 1182, 1153, 1154. 1155, 1156, 1187. ERROR CONTROL CODING (CHAP. 11 Supplementary Problems Consider a (6, 3) linear block code with the parity-check matrix JT given by 1o11o0o H=|0 11010 r110o01 (a) Find the generator matrix G. (6) Find the code word for the data bit 101 Ans. (a) 10 o1 G=]o10011 ooriidi (6) 101010 For the (7.4) Hamming code of Prob. 
Supplementary Problems

11.51. Consider a (6, 3) linear block code with the parity-check matrix H given by

H = [1 0 1 1 0 0
     0 1 1 0 1 0
     1 1 1 0 0 1]

(a) Find the generator matrix G. (b) Find the code word for the data bits (1 0 1).

Ans. (a)
G = [1 0 0 1 0 1
     0 1 0 0 1 1
     0 0 1 1 1 1]
(b) (1 0 1 0 1 0)

11.52. For the (7, 4) Hamming code of Prob. 11.24, decode the received word (0 1 1 1 1 0 0). Ans. (0 1 0 1)

11.53. Consider an (n, k) linear block code with generator matrix G and parity-check matrix H. The (n, n − k) code generated by H is called the dual code of the (n, k) code. Show that G is the parity-check matrix for the dual code.
Hint: Take the transpose of Eq. (11.12).

11.54. Show that all code vectors of an (n, k) linear block code are orthogonal to the code vectors of its dual code. (The vector x is orthogonal to c if xc^T = 0, where c^T is the transpose of c.)
Hint: Use Eq. (11.12).

11.55. Find the dual code of the (7, 4) Hamming code of Prob. 11.24 and find d_min of this dual code.
Ans. The dual code is a (7, 3) code consisting of the all-zero word and seven code words of weight 4; d_min = 4.

11.56. A code consists of the code words 1101000, 0111001, 0011010, 1001011, 1011100, and 0001101. If 1101011 is received, what is the decoded code word? Ans. 1001011

11.57. Show that for all (n, k) linear block codes, d_min ≤ n − k + 1.
Hint: Apply Theorem 11.4 to show that the rank of the matrix H is at least d_min − 1.

11.58. Show that the cyclic shift operator σ defined in Eq. (11.17a) is a linear operator.
Hint: Show that σ(a ⊕ b) = σ(a) ⊕ σ(b) and σ(αa) = ασ(a), α ∈ K.

11.59. Consider a cyclic code C with n = 3. (a) Find the (3, 1) cyclic code C1. (b) Find the (3, 2) cyclic code C2.
Ans. (a) C1 = {000, 111} (b) C2 = {000, 011, 110, 101}

11.60. Let c(x) = c0 + c1x + ... + c_{r−1}x^{r−1} + x^r be the nonzero code polynomial of minimum degree in an (n, k) cyclic code C. Show that the constant term c0 must be equal to 1.
Hint: Assume c0 = 0; then show the contradiction of the assumption of minimum degree by cyclically shifting c(x).

11.61. Show that 1 + x^n = (1 + x)(1 + x + x^2 + ... + x^{n−1}).
Hint: Expand the right-hand side and use the fact that x^i ⊕ x^i = 0.

11.62. Consider a (6, 2) systematic cyclic code. (a) Find a generator matrix G for this code. (b) Find the minimum distance d_min.
Ans. (a) One choice is g(x) = 1 + x^2 + x^4, giving
G = [1 0 1 0 1 0
     0 1 0 1 0 1]
(b) d_min = 3 for this code

11.63. Consider a (7, 4) cyclic code with g(x) = 1 + x^2 + x^3. (a) Encode the data word (1 0 0 1). (b) Find the data word corresponding to the code word (1 1 1 0 1 0 0).
Ans. (a) (1 0 1 0 0 1 1) (b) (1 1 0 0)

11.64. Let a(x) = a0 + a1x + ... + a_{n−1}x^{n−1} and b(x) = b0 + b1x + ... + b_{n−1}x^{n−1}. Show that if a(x)b(x) = 0 mod (1 + x^n), then the vector corresponding to a(x) is orthogonal to the vector corresponding to b(x) with the order of its components reversed, and to every cyclic shift of this vector.
Hint: Find the coefficient of x^i in a(x)b(x) mod (1 + x^n).

11.65. Consider the convolutional encoder of Fig. 11-6. Find the encoded sequence for the input sequence (1 0 0 1 1) by polynomial representation.
Ans. (1 1 0 1 1 1 1 1 1 0 1 0 1 1)

11.66. Consider the convolutional encoder of Fig. 11-6. Using the trellis diagram of Fig. 11-9, find the output sequence for the input sequence 1 1 1 followed by two zeros to flush the register.
Ans. (1 1 1 0 0 1 1 0 1 1)

11.67. Consider the convolutional encoder shown in Fig. 11-17.
(a) Find the impulse response of the encoder.
(b) Find the output sequence if the input sequence is (1 1 0 0 1 0 1).
(c) Sketch the state diagram and the trellis diagram.
(d) Find the free distance d_free of the code.

[Fig. 11-17]

Ans. (a) (0 1 1 1 0 1 1 1 1)
(b) (0 1 1 1 1 0 0 1 0 1 1 1 0 1 1 1 0 1 1 0 0 1 0 1 1 1 1)
(c) Hint: See Figs. 11-7 and 11-9.
(d) d_free = 7

11.68. Consider the convolutional encoder shown in Fig. 11-18.
(a) Sketch the state diagram.
(b) Determine whether the code is catastrophic or noncatastrophic.

[Fig. 11-18]

Ans. (a) Hint: See Fig. 11-14. (b) Catastrophic
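For Prob. 11.51, the passage from H = [P^T | I_{n−k}] to G = [I_k | P] can be checked numerically: every row of G must satisfy all the parity checks, i.e., GH^T = 0 (mod 2). A minimal sketch (variable names are mine):

import numpy as np

H = np.array([[1, 0, 1, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 1, 1, 0, 0, 1]])

n, k = H.shape[1], H.shape[1] - H.shape[0]       # n = 6, k = 3
P = H[:, :k].T                                   # H = [P^T | I_{n-k}]  ->  P
G = np.hstack([np.eye(k, dtype=int), P])         # G = [I_k | P]

print(G)
print(G @ H.T % 2)                    # all zeros: rows of G pass every parity check
print(np.array([1, 0, 1]) @ G % 2)    # -> [1 0 1 0 1 0], the answer to part (b)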
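Prob. 11.56 is nearest-neighbor (minimum Hamming distance) decoding, which the one-liner below reproduces; the code is illustrative only:

codewords = ["1101000", "0111001", "0011010", "1001011", "1011100", "0001101"]
r = "1101011"

dist = lambda a, b: sum(x != y for x, y in zip(a, b))   # Hamming distance
best = min(codewords, key=lambda c: dist(c, r))
print(best, dist(best, r))      # -> 1001011 1  (the unique code word at distance 1)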
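The answers to Prob. 11.63 follow from polynomial arithmetic modulo 2: encoding by c(x) = d(x)g(x) and decoding by dividing c(x) by g(x). A sketch with polynomials as bit masks (bit i is the coefficient of x^i; helper names are mine):

def gf2_mul(a, b):
    """Carry-less (GF(2)[x]) product of two polynomials."""
    p = 0
    while b:
        if b & 1:
            p ^= a
        a, b = a << 1, b >> 1
    return p

def gf2_divmod(a, b):
    """Quotient and remainder of GF(2)[x] division."""
    q = 0
    while a and a.bit_length() >= b.bit_length():
        shift = a.bit_length() - b.bit_length()
        q |= 1 << shift
        a ^= b << shift
    return q, a

g = 0b1101                 # g(x) = 1 + x^2 + x^3
d = 0b1001                 # d(x) = 1 + x^3, i.e., data word (1 0 0 1)
c = gf2_mul(d, g)
print(format(c, "07b")[::-1])        # -> 1010011: code word of part (a), low bit first

q, rem = gf2_divmod(0b0010111, g)    # code word (1 1 1 0 1 0 0), written low bit first
print(format(q, "04b")[::-1], rem)   # -> 1100 0: data word (1 1 0 0), remainder 0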
Appendix A: FOURIER TRANSFORM

X(ω) = F[x(t)] = ∫_{−∞}^{∞} x(t) e^{−jωt} dt

x(t) = F^{−1}[X(ω)] = (1/2π) ∫_{−∞}^{∞} X(ω) e^{jωt} dω

∫_{−∞}^{∞} |x(t)|^2 dt = (1/2π) ∫_{−∞}^{∞} |X(ω)|^2 dω

Table A-1  Properties of the Fourier Transform

Property                  | x(t)                   | X(ω)
Linearity                 | a1 x1(t) + a2 x2(t)    | a1 X1(ω) + a2 X2(ω)
Time shifting             | x(t − t0)              | X(ω) e^{−jωt0}
Time scaling              | x(at)                  | (1/|a|) X(ω/a)
Time reversal             | x(−t)                  | X(−ω)
Duality                   | X(t)                   | 2π x(−ω)
Frequency shifting        | x(t) e^{jω0 t}         | X(ω − ω0)
Modulation                | x(t) cos ω0 t          | (1/2)[X(ω − ω0) + X(ω + ω0)]
Time differentiation      | x′(t)                  | jω X(ω)
Frequency differentiation | (−jt) x(t)             | X′(ω)
Integration               | ∫_{−∞}^{t} x(τ) dτ     | π X(0) δ(ω) + X(ω)/(jω)
Convolution               | x1(t) * x2(t)          | X1(ω) X2(ω)
Multiplication            | x1(t) x2(t)            | (1/2π) X1(ω) * X2(ω)

Table A-2  Some Fourier Transform Pairs

x(t)                                        | X(ω)
δ(t)                                        | 1
δ(t − t0)                                   | e^{−jωt0}
1                                           | 2π δ(ω)
e^{jω0 t}                                   | 2π δ(ω − ω0)
cos ω0 t                                    | π[δ(ω − ω0) + δ(ω + ω0)]
sin ω0 t                                    | −jπ[δ(ω − ω0) − δ(ω + ω0)]
u(t)                                        | π δ(ω) + 1/(jω)
sgn t                                       | 2/(jω)
1/(πt)                                      | −j sgn(ω)
e^{−at} u(t), a > 0                         | 1/(jω + a)
t e^{−at} u(t), a > 0                       | 1/(jω + a)^2
e^{−a|t|}, a > 0                            | 2a/(ω^2 + a^2)
e^{−at^2}, a > 0                            | √(π/a) e^{−ω^2/(4a)}
p_a(t) = 1 for |t| < a/2, 0 otherwise       | a sin(ωa/2)/(ωa/2)
(sin at)/(πt)                               | p_{2a}(ω)
1 − |t|/a for |t| ≤ a, 0 otherwise          | a [sin(ωa/2)/(ωa/2)]^2
Σ_n δ(t − nT)                               | ω0 Σ_n δ(ω − nω0), ω0 = 2π/T

Appendix B: BESSEL FUNCTIONS Jn(β)

Bessel functions of the first kind of order n and argument β.

Generating function and definition:

e^{jβ sin φ} = Σ_{n=−∞}^{∞} Jn(β) e^{jnφ}

Jn(β) = (1/2π) ∫_{−π}^{π} e^{j(β sin φ − nφ)} dφ

Properties of Jn(β):

J_{−n}(β) = (−1)^n Jn(β) = Jn(−β)
J_{n−1}(β) + J_{n+1}(β) = (2n/β) Jn(β)
Σ_{n=−∞}^{∞} Jn^2(β) = 1

Table B-1  Selected Values of Jn(β)

n\β | 0.1   | 0.2   | 0.5   | 1     | 2     | 5      | 8      | 10
0   | 0.997 | 0.990 | 0.938 | 0.765 | 0.224 | −0.178 | 0.172  | −0.246
1   | 0.050 | 0.100 | 0.242 | 0.440 | 0.577 | −0.328 | 0.235  | 0.043
2   | 0.001 | 0.005 | 0.031 | 0.115 | 0.353 | 0.047  | −0.113 | 0.255
3   |       |       | 0.003 | 0.020 | 0.129 | 0.365  | −0.291 | 0.058
4   |       |       |       | 0.002 | 0.034 | 0.391  | −0.105 | −0.220
5   |       |       |       |       | 0.007 | 0.261  | 0.186  | −0.234
6   |       |       |       |       | 0.001 | 0.131  | 0.338  | −0.014
7   |       |       |       |       |       | 0.053  | 0.321  | 0.217
8   |       |       |       |       |       | 0.018  | 0.223  | 0.318
9   |       |       |       |       |       | 0.006  | 0.126  | 0.292
10  |       |       |       |       |       | 0.001  | 0.061  | 0.207
11  |       |       |       |       |       |        | 0.026  | 0.123
12  |       |       |       |       |       |        | 0.010  | 0.063
13  |       |       |       |       |       |        | 0.003  | 0.029
14  |       |       |       |       |       |        | 0.001  | 0.012
15  |       |       |       |       |       |        |        | 0.005
16  |       |       |       |       |       |        |        | 0.002

Appendix C: THE COMPLEMENTARY ERROR FUNCTION Q(z)

Q(z) = (1/√(2π)) ∫_z^{∞} e^{−ξ^2/2} dξ

Table C-1  Q(z)

z    | Q(z)   | z    | Q(z)   | z    | Q(z)   | z    | Q(z)
0.00 | 0.5000 | 1.00 | 0.1587 | 2.00 | 0.0228 | 3.00 | 0.00135
0.05 | 0.4801 | 1.05 | 0.1469 | 2.05 | 0.0202 | 3.05 | 0.00114
0.10 | 0.4602 | 1.10 | 0.1357 | 2.10 | 0.0179 | 3.10 | 0.00097
0.15 | 0.4404 | 1.15 | 0.1251 | 2.15 | 0.0158 | 3.15 | 0.00082
0.20 | 0.4207 | 1.20 | 0.1151 | 2.20 | 0.0139 | 3.20 | 0.00069
0.25 | 0.4013 | 1.25 | 0.1056 | 2.25 | 0.0122 | 3.25 | 0.00058
0.30 | 0.3821 | 1.30 | 0.0968 | 2.30 | 0.0107 | 3.30 | 0.00048
0.35 | 0.3632 | 1.35 | 0.0885 | 2.35 | 0.0094 | 3.35 | 0.00040
0.40 | 0.3446 | 1.40 | 0.0808 | 2.40 | 0.0082 | 3.40 | 0.00034
0.45 | 0.3264 | 1.45 | 0.0735 | 2.45 | 0.0071 | 3.45 | 0.00028
0.50 | 0.3085 | 1.50 | 0.0668 | 2.50 | 0.0062 | 3.50 | 0.00023
0.55 | 0.2912 | 1.55 | 0.0606 | 2.55 | 0.0054 | 3.55 | 0.00019
0.60 | 0.2743 | 1.60 | 0.0548 | 2.60 | 0.0047 | 3.60 | 0.00016
0.65 | 0.2578 | 1.65 | 0.0495 | 2.65 | 0.0040 | 3.65 | 0.00013
0.70 | 0.2420 | 1.70 | 0.0446 | 2.70 | 0.0035 | 3.70 | 0.00011
0.75 | 0.2266 | 1.75 | 0.0401 | 2.75 | 0.0030 | 3.75 | 0.00009
0.80 | 0.2119 | 1.80 | 0.0359 | 2.80 | 0.0026 | 3.80 | 0.00007
0.85 | 0.1977 | 1.85 | 0.0322 | 2.85 | 0.0022 | 3.85 | 0.00006
0.90 | 0.1841 | 1.90 | 0.0287 | 2.90 | 0.0019 | 3.90 | 0.00005
0.95 | 0.1711 | 1.95 | 0.0256 | 2.95 | 0.0016 | 3.95 | 0.00004
     |        |      |        |      |        | 4.00 | 0.00003

For large arguments, Q(z) ≈ 10^{−5}, 10^{−6}, 10^{−7}, 10^{−8} at z = 4.25, 4.75, 5.20, 5.60, respectively.
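The entries of Table A-2 can be spot-checked numerically. The sketch below evaluates the Fourier integral of x(t) = e^{−at} u(t) by quadrature and compares it with the tabulated 1/(a + jω); the parameter value and sample frequencies are mine:

import numpy as np
from scipy.integrate import quad

a = 2.0
for w in (0.0, 1.0, 5.0):
    # X(w) = integral over t >= 0 of e^{-at} e^{-jwt}; real and imaginary parts
    re, _ = quad(lambda t: np.exp(-a*t) * np.cos(w*t), 0, np.inf)
    im, _ = quad(lambda t: -np.exp(-a*t) * np.sin(w*t), 0, np.inf)
    print(complex(re, im), 1 / (a + 1j*w))    # the two columns agree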
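Similarly, Table B-1 follows from the integral definition in Appendix B, using the equivalent real form Jn(β) = (1/π) ∫_0^π cos(β sin φ − nφ) dφ. A sketch, with scipy's jv used only as a cross-check:

import numpy as np
from scipy.integrate import quad
from scipy.special import jv

def J(n, beta):
    """Jn(beta) from the real integral form of the Appendix B definition."""
    val, _ = quad(lambda phi: np.cos(beta*np.sin(phi) - n*phi), 0, np.pi)
    return val / np.pi

print(round(J(0, 1), 3), round(J(1, 2), 3), round(J(3, 5), 3))
# -> 0.765 0.577 0.365, matching the J0(1), J1(2), and J3(5) entries of Table B-1
print(jv(3, 5))   # cross-check: 0.3648...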
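Table C-1 can be regenerated from the complementary error function, since Q(z) = (1/2) erfc(z/√2). A minimal sketch:

from math import erfc, sqrt

def Q(z):
    """Gaussian tail probability: Q(z) = 0.5 * erfc(z / sqrt(2))."""
    return 0.5 * erfc(z / sqrt(2))

for z in (0.0, 1.0, 2.0, 3.0, 4.0):
    print(z, round(Q(z), 5))
# -> 0.5, 0.15866, 0.02275, 0.00135, 3e-05, in agreement with Table C-1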
INDEX

[Alphabetical subject index with page references, pp. 327-330, running from Adaptive delta modulation through Zero ISI.]
Master analog and digital communications with Schaum's, the high-performance study guide. It will help you cut study time, hone problem-solving skills, and achieve your personal best on exams and projects!

Students love Schaum's Outlines because they produce results. Each year, hundreds of thousands of students improve their test scores and final grades with these indispensable study guides. If you don't have a lot of time but want to excel in class, this book helps you:

+ Brush up before tests
+ Find answers fast
+ Study quickly and more effectively
+ Get the big picture without spending hours poring over lengthy textbooks

Schaum's Outlines give you the information your teachers expect you to know in a handy and succinct format, without overwhelming you with unnecessary details. You get a complete overview of the subject, plus plenty of practice exercises to test your skill. Compatible with any classroom text, Schaum's lets you study at your own pace and reminds you of all the important facts you need to remember, fast! And Schaum's is so complete it's the perfect tool for preparing for graduate or professional exams!

Inside, you will find:

+ 333 detailed problems with step-by-step solutions
+ A new chapter on error control coding
+ Easy-to-follow explanations of signal modulation, transmission, and filtering
+ Understandable explanations of all topics, with accompanying sample problems and solutions

If you want top grades and a thorough understanding of analog and digital communications, this powerful study tool is the best tutor you can have!

Chapters include: Signals and Spectra • Signal Transmission and Filtering • Amplitude Modulation • Angle Modulation • Digital Transmission of Analog Signals • Probability and Random Variables • Random Processes • Noise in Analog Communication Systems • Optimum Detection • Information Theory and Source Coding • Error Control Coding • Appendixes: Fourier Transform • Bessel Functions Jn(β) • The Complementary Error Function Q(z)

Visit us at www.schaums.com

$16.95 USA / $26.95 CAN / £13.99 UK

Schaum's Outlines: over 30 million sold.
