Solutions Manual for Communication Systems, 4th Edition
Simon Haykin, McMaster University, Canada

Preface

This Manual is written to accompany the fourth edition of my book on Communication Systems. It consists of the following:

* Detailed solutions to all the problems in Chapters 1 to 10 of the book
* MATLAB codes and representative results for the computer experiments in Chapters 1, 2, 3, 4, 6, 7, 9 and 10

I would like to express my thanks to my graduate student, Mathini Sellathurai, for her help in solving some of the problems and writing the above-mentioned MATLAB codes. I am also grateful to my technical coordinator, Lola Brooks, for typing the solutions to new problems and preparing the manuscript for the Manual.

Simon Haykin
Ancaster
April 29, 2000

CHAPTER 1

Problem 1.1

As an illustration, three particular sample functions of the random process X(t), corresponding to F = W/4, W/2, and W, are plotted below:

[Figure: the sample functions sin(2π(W/4)t), sin(2π(W/2)t), and sin(2πWt), each plotted over −2/W ≤ t ≤ 2/W.]

To show that X(t) is nonstationary, we need only observe that every waveform illustrated above is zero at t = 0, positive for 0 < t < 1/2W, and negative for −1/2W < t < 0. Thus, the probability density function of the random variable X(t1) obtained by sampling X(t) at t1 = 1/4W is identically zero for negative arguments, whereas the probability density function of the random variable X(−t1) obtained by sampling X(t) at −t1 = −1/4W is nonzero only for negative arguments. Clearly, therefore,

f_{X(t1)}(x) ≠ f_{X(−t1)}(x)

and the random process X(t) is nonstationary.

Problem 1.2

X(t) = A cos(2πf_c t)

Therefore,

X(t1) = A cos(2πf_c t1)

Since the amplitude A is uniformly distributed, we may write

f_{X(t1)}(x) = 1/cos(2πf_c t1) for 0 ≤ x ≤ cos(2πf_c t1), and 0 otherwise

Similarly, we may write

X(t2) = A cos(2πf_c t2)

and

f_{X(t2)}(x) = 1/cos(2πf_c t2) for 0 ≤ x ≤ cos(2πf_c t2), and 0 otherwise

Since f_{X(t1)}(x) ≠ f_{X(t2)}(x) in general, the random process X(t) is nonstationary.

The covariance of Z(t1) and Z(t2) is

Cov[Z(t1)Z(t2)] = cos[2π(t1 − t2)]   (1)

Since every weighted sum of samples of the process Z(t) is Gaussian, it follows that Z(t) is a Gaussian process.
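The nonstationarity claimed in Problem 1.2 can also be checked numerically. The following is a Python sketch (not part of the manual's MATLAB material); it assumes A is uniform on [0, 1], which is what the pdf above implies, and picks arbitrary values for f_c and the two observation times. The ensemble mean E[X(t)] = cos(2πf_c t)/2 then visibly depends on the observation time:

```python
import math
import random

def sample_mean_X(t, fc=1.0, n=200_000, seed=1):
    """Monte Carlo estimate of E[X(t)] for X(t) = A cos(2*pi*fc*t), A ~ U(0,1)."""
    rng = random.Random(seed)
    c = math.cos(2 * math.pi * fc * t)
    return sum(rng.random() * c for _ in range(n)) / n

fc = 1.0
t1, t2 = 0.05, 0.10                      # two hypothetical observation times
m1, m2 = sample_mean_X(t1, fc), sample_mean_X(t2, fc)
# Theory: E[X(t)] = cos(2*pi*fc*t)/2, which depends on t, so X(t) is nonstationary
print(m1, math.cos(2 * math.pi * fc * t1) / 2)
print(m2, math.cos(2 * math.pi * fc * t2) / 2)
```

A time-dependent first moment is already enough to rule out stationarity; no higher-order statistics are needed.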
Furthermore, we note that

E[Z²(t1)] = 1

This result is obtained by putting t1 = t2 in Eq. (1). Similarly,

E[Z²(t2)] = 1

Therefore, the correlation coefficient of Z(t1) and Z(t2) is

ρ = Cov[Z(t1)Z(t2)] / (σ_{Z(t1)} σ_{Z(t2)}) = cos[2π(t1 − t2)]

Hence, the joint probability density function of Z(t1) and Z(t2) is

f_{Z(t1),Z(t2)}(z1, z2) = C exp[−Q(z1, z2)]

where

C = 1 / (2π √(1 − cos²[2π(t1 − t2)])) = 1 / (2π |sin[2π(t1 − t2)]|)

Q(z1, z2) = (z1² − 2 z1 z2 cos[2π(t1 − t2)] + z2²) / (2 sin²[2π(t1 − t2)])

(b) We note that the covariance of Z(t1) and Z(t2) depends only on the time difference t1 − t2. The process Z(t) is therefore wide-sense stationary. Since it is Gaussian, it is also strictly stationary.

Problem 1.5

(a) Let X(t) = A + Y(t), where A is a constant and Y(t) is a zero-mean random process. The autocorrelation function of X(t) is

R_X(τ) = E[X(t+τ) X(t)]
       = E{[A + Y(t+τ)][A + Y(t)]}
       = E[A² + A Y(t+τ) + A Y(t) + Y(t+τ) Y(t)]
       = A² + R_Y(τ)

which shows that R_X(τ) contains a constant component equal to A².

(b) Let X(t) = A_c cos(2πf_c t + θ) + Z(t), where A_c cos(2πf_c t + θ) represents the sinusoidal component of X(t) and θ is a random phase variable. The autocorrelation function of X(t) is

R_X(τ) = E[X(t+τ) X(t)]
       = E{[A_c cos(2πf_c t + 2πf_c τ + θ) + Z(t+τ)] [A_c cos(2πf_c t + θ) + Z(t)]}
       = E[A_c² cos(2πf_c t + 2πf_c τ + θ) cos(2πf_c t + θ)]
         + E[Z(t+τ) A_c cos(2πf_c t + θ)]
         + E[A_c cos(2πf_c t + 2πf_c τ + θ) Z(t)]
         + E[Z(t+τ) Z(t)]
       = (A_c²/2) cos(2πf_c τ) + R_Z(τ)

which shows that R_X(τ) contains a sinusoidal component of the same frequency as X(t).

Problem 1.6

(a) We note that the distribution function of X(t) is

F_{X(t)}(x) = 0 for x < 0; 1/2 for 0 ≤ x < A; 1 for x ≥ A

and the corresponding probability density function is

f_{X(t)}(x) = (1/2) δ(x) + (1/2) δ(x − A)

[Figure: the staircase distribution function, rising from 0 to 1/2 at x = 0 and from 1/2 to 1 at x = A, and the corresponding pair of delta functions of weight 1/2 at x = 0 and x = A.]

(b) By ensemble-averaging, we have

E[X(t)] = ∫ x f_{X(t)}(x) dx = ∫ x [(1/2) δ(x) + (1/2) δ(x − A)] dx = A/2

The autocorrelation function of X(t) is

R_X(τ) = E[X(t+τ) X(t)]

Define the square function Sq_{T0}(t) as the unit-amplitude square wave of period T0 shown below:

[Figure: Sq_{T0}(t), a square wave alternating between 0 and 1 with period T0.]

Then, we may write

R_X(τ) = E[A² Sq_{T0}(t − t_d + τ) Sq_{T0}(t − t_d)]
       = (A²/T0) ∫ from 0 to T0 of Sq_{T0}(t − t_d + τ) Sq_{T0}(t − t_d) dt_d
       = (A²/2)(1 − 2|τ|/T0) for |τ| ≤ T0/2

where t_d is the random delay, uniformly distributed over [0, T0].
Since the square wave is periodic with period T0, R_X(τ) must also be periodic with period T0.

(c) On a time-averaging basis, we note by inspection of Fig. P1.6 that the mean is

⟨x(t)⟩ = A/2

Next, the time-averaged autocorrelation function

⟨x(t+τ) x(t)⟩ = (1/T0) ∫ from −T0/2 to T0/2 of x(t+τ) x(t) dt

has its maximum value of A²/2 at τ = 0, and decreases linearly to zero at τ = T0/2:

⟨x(t+τ) x(t)⟩ = (A²/2)(1 − 2|τ|/T0) for |τ| ≤ T0/2

Again, the autocorrelation function must be periodic with period T0.

(d) We note that the ensemble-averaging and time-averaging procedures yield the same results for the mean and autocorrelation functions. Therefore, X(t) is ergodic in both the mean and the autocorrelation function. Since ergodicity implies wide-sense stationarity, it follows that X(t) must be wide-sense stationary.

Problem 1.7

(a) For |τ| > T, the random variables X(t) and X(t+τ) occur in different pulse intervals and are therefore independent. Thus,

E[X(t) X(t+τ)] = E[X(t)] E[X(t+τ)] for |τ| > T

Since both amplitudes are equally likely, we have E[X(t)] = E[X(t+τ)] = A/2. Therefore,

R_X(τ) = A²/4 for |τ| > T

For |τ| < T, the random variables occur in the same pulse interval if t_d ≤ T − |τ|.

(a) Form the non-negative quantity

E{[X(t+τ) + Y(t)]²} = E[X²(t+τ) + 2X(t+τ)Y(t) + Y²(t)]
                    = E[X²(t+τ)] + 2E[X(t+τ)Y(t)] + E[Y²(t)]
                    = R_X(0) + 2R_XY(τ) + R_Y(0)

Hence,

R_X(0) + 2R_XY(τ) + R_Y(0) ≥ 0

Repeating the argument with X(t+τ) − Y(t) gives R_X(0) − 2R_XY(τ) + R_Y(0) ≥ 0. Combining the two inequalities,

|R_XY(τ)| ≤ (1/2)[R_X(0) + R_Y(0)]

(b) Since R_YX(τ) = R_XY(−τ), we have

R_YX(τ) = ∫ h(u) R_X(−τ + u) du

Since R_X(τ) is an even function of τ, R_X(−τ + u) = R_X(τ − u), so that

R_YX(τ) = ∫ h(u) R_X(τ − u) du

(c) If X(t) is a white-noise process with zero mean and power spectral density N0/2, we may write

R_X(τ) = (N0/2) δ(τ)

Therefore,

R_YX(τ) = (N0/2) ∫ h(u) δ(τ − u) du

Using the sifting property of the delta function:

R_YX(τ) = (N0/2) h(τ)

That is,

h(τ) = (2/N0) R_YX(τ)

This means that we may measure the impulse response of the filter by applying white noise of power spectral density N0/2 to the filter input, cross-correlating the filter output with the filter input, and then multiplying the result by 2/N0.
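The cross-correlation measurement in part (c) above is easy to demonstrate in discrete time. The following Python sketch (not from the manual; the filter taps and sample sizes are made up) uses unit-variance white noise, for which E[y[t] x[t−k]] = h[k] is the discrete counterpart of R_YX(τ) = (N0/2) h(τ):

```python
import random

def estimate_impulse_response(h_true, n=100_000, seed=7):
    """Estimate FIR taps by cross-correlating the filter output with its
    white-noise input: E[y[t] x[t-k]] = sigma^2 * h[k], the discrete
    analogue of R_YX(tau) = (N0/2) h(tau)."""
    rng = random.Random(seed)
    x = [rng.gauss(0.0, 1.0) for _ in range(n)]      # unit-variance white noise
    L = len(h_true)
    y = [sum(h_true[k] * x[t - k] for k in range(L)) for t in range(L, n)]
    est = []
    for k in range(L):
        # empirical cross-correlation <y[t] x[t-k]>; since sigma^2 = 1,
        # this directly estimates h[k]
        r = sum(y[i] * x[i + L - k] for i in range(len(y))) / len(y)
        est.append(r)
    return est

h_true = [0.5, -0.3, 0.2]    # hypothetical filter taps
print(estimate_impulse_response(h_true))
```

For a general input level, one would divide the measured cross-correlation by the input variance (the 2/N0 scaling in the continuous-time result).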
Problem 1.12

(a) The power spectral density consists of two components:

(1) A delta function δ(f) at the origin, whose inverse Fourier transform is one.
(2) A triangular component of unit amplitude and width 2f0, centered at the origin; the inverse Fourier transform of this component is f0 sinc²(f0 τ).

Therefore, the autocorrelation function of X(t) is

R_X(τ) = 1 + f0 sinc²(f0 τ)

[Figure: R_X(τ), a sinc²-shaped pulse of height f0 riding on a constant level of one, with nulls at multiples of 1/f0.]

(b) Since R_X(τ) contains a constant component of amplitude 1, it follows that the dc power contained in X(t) is 1.

(c) The mean-square value of X(t) is given by

E[X²(t)] = R_X(0) = 1 + f0

The ac power contained in X(t) is therefore equal to f0.

(d) If the sampling rate is f0/n, where n is an integer, the samples are uncorrelated. They are not, however, statistically independent. They would be statistically independent if X(t) were a Gaussian process.

Problem 1.13

The autocorrelation function of n2(t) is

R_{n2}(t1, t2) = E[n2(t1) n2(t2)]
= E{[n_I(t1) cos(2πf_c t1 + θ) − n_Q(t1) sin(2πf_c t1 + θ)]
   × [n_I(t2) cos(2πf_c t2 + θ) − n_Q(t2) sin(2πf_c t2 + θ)]}
= E{n_I(t1) n_I(t2) cos(2πf_c t1 + θ) cos(2πf_c t2 + θ)
   − n_I(t1) n_Q(t2) cos(2πf_c t1 + θ) sin(2πf_c t2 + θ)
   − n_Q(t1) n_I(t2) sin(2πf_c t1 + θ) cos(2πf_c t2 + θ)
   + n_Q(t1) n_Q(t2) sin(2πf_c t1 + θ) sin(2πf_c t2 + θ)}
= E{n_I(t1) n_I(t2) cos[2πf_c(t1 − t2)] − n_I(t1) n_Q(t2) sin[2πf_c(t1 + t2) + 2θ]}
= E[n_I(t1) n_I(t2)] cos[2πf_c(t1 − t2)] − E[n_I(t1) n_Q(t2)] E{sin[2πf_c(t1 + t2) + 2θ]}

Since θ is a uniformly distributed random variable, the second term is zero, giving

R_{n2}(t1, t2) = R_{n_I}(t1, t2) cos[2πf_c(t1 − t2)]

Since n_I(t) is stationary, we find that in terms of τ = t1 − t2,

R_{n2}(τ) = R_{n_I}(τ) cos(2πf_c τ)

Taking the Fourier transforms of both sides of this relation:

S_{n2}(f) = (1/2)[S_{n_I}(f − f_c) + S_{n_I}(f + f_c)]

With S_{n_I}(f) as defined in Fig.
P1.13, we find that S_{n2}(f) is as shown below:

[Figure: S_{n2}(f), consisting of two frequency-shifted copies of S_{n_I}(f) centered at ±f_c, each scaled by 1/2.]

Problem 1.14

The power spectral density of the random telegraph wave is

S_X(f) = ∫ R_X(τ) exp(−j2πfτ) dτ
       = ∫ from −∞ to 0 of exp(2ντ) exp(−j2πfτ) dτ + ∫ from 0 to ∞ of exp(−2ντ) exp(−j2πfτ) dτ
       = 1/(2ν − j2πf) + 1/(2ν + j2πf)
       = ν/(ν² + π²f²)

The transfer function of the RC filter is

H(f) = 1/(1 + j2πfRC)

Therefore, the power spectral density of the filter output is

S_Y(f) = |H(f)|² S_X(f) = ν / [(ν² + π²f²)(1 + (2πfRC)²)]

To determine the autocorrelation function of the filter output, we first expand S_Y(f) in partial fractions as follows:

S_Y(f) = [1/(1 − 4ν²R²C²)] { ν/(ν² + π²f²) − 2νRC (1/2RC) / [(1/2RC)² + π²f²] }

Recognizing the Fourier-transform pairs

exp(−2ν|τ|) ⇌ ν/(ν² + π²f²)
exp(−|τ|/RC) ⇌ (1/2RC) / [(1/2RC)² + π²f²]

we obtain the desired result:

R_Y(τ) = [1/(1 − 4ν²R²C²)] [exp(−2ν|τ|) − 2νRC exp(−|τ|/RC)]

Problem 1.15

We are given

y(t) = ∫ from t−T to t of x(τ) dτ

For x(t) = δ(t), the impulse response of this running integrator is, by definition,

h(t) = ∫ from t−T to t of δ(τ) dτ
     = 1 for t − T ≤ 0 ≤ t, or equivalently, 0 ≤ t ≤ T

Next, we note that R_X(τ) is related to the power spectral density by

R_X(τ) = ∫ S_X(f) cos(2πfτ) df   (2)

Therefore, comparing Eqs. (1) and (2), we deduce that the power spectral density of X(t) is

S_X(f) = (A²/2) f_F(f)

When the frequency assumes a constant value, f_c (say), we have

f_F(f) = (1/2) δ(f − f_c) + (1/2) δ(f + f_c)

and, correspondingly,

S_X(f) = (A²/4)[δ(f − f_c) + δ(f + f_c)]

Problem 1.18

Let σ_X² denote the variance of the random variable X_k obtained by observing the random process X(t) at time t_k. The variance σ_X² is related to the mean-square value of X_k as follows:

σ_X² = E[X_k²] − μ_X²

where μ_X = E[X_k]. Since the process X(t) has zero mean, it follows that

σ_X² = E[X_k²]

Next we note that

E[X_k²] = ∫ S_X(f) df

We may therefore define the variance σ_X² as the total area under the power spectral density S_X(f):

σ_X² = ∫ S_X(f) df   (1)

Thus, with the mean μ_X = 0 and the variance σ_X² defined by Eq.
(1), we may express the probability density function of X_k as follows:

f_{X_k}(x) = [1/(√(2π) σ_X)] exp(−x²/2σ_X²)

Problem 1.19

The input-output relation of a full-wave rectifier is defined by

Y(t) = |X(t)| = X(t) for X(t) ≥ 0, and −X(t) for X(t) < 0

The probability density function of the random variable X(t1), obtained by observing the input random process at time t1, is defined by

f_{X(t1)}(x) = [1/(√(2π) σ_X)] exp(−x²/2σ_X²)

To find the probability density function of the random variable Y(t1), obtained by observing the output random process, we need an expression for the inverse relation defining X(t1) in terms of Y(t1). We note that a given value of Y(t1) corresponds to two values of X(t1), of equal magnitude and opposite sign. We may therefore write

X(t1) = −Y(t1) for X(t1) < 0
X(t1) = +Y(t1) for X(t1) > 0

In both cases, we have

|dx/dy| = 1

The probability density function of Y(t1) is therefore given by

f_{Y(t1)}(y) = 2 f_{X(t1)}(y) = [2/(√(2π) σ_X)] exp(−y²/2σ_X²) for y ≥ 0, and 0 otherwise

[Figure: f_{Y(t1)}(y), a one-sided Gaussian-shaped curve with peak value 2/(√(2π) σ_X) ≈ 0.798/σ_X at y = 0.]

Problem 1.20

(a) The probability density function of the random variable Y(t1), obtained by observing the rectifier output Y(t) at time t1, is

f_{Y(t1)}(y) = [2/(√(2π) σ_X)] exp(−y²/2σ_X²) for y ≥ 0, and 0 otherwise

Z(t2) is given by

Z(t2) = ∫ X(t2 − u) h2(u) du

The expected value of Z(t2) is

m_Z = E[Z(t2)] = H2(0) m_X

where

H2(0) = ∫ h2(u) du

The covariance of Y(t1) and Z(t2) is

Cov[Y(t1) Z(t2)] = E{[Y(t1) − m_Y][Z(t2) − m_Z]}
= E{∫∫ [X(t1 − τ) − m_X][X(t2 − u) − m_X] h1(τ) h2(u) dτ du}
= ∫∫ E{[X(t1 − τ) − m_X][X(t2 − u) − m_X]} h1(τ) h2(u) dτ du
= ∫∫ C_X(t1 − t2 − τ + u) h1(τ) h2(u) dτ du

where C_X(τ) is the autocovariance function of X(t). Next, we note that the variance of Y(t1) is

σ_Y² = E{[Y(t1) − m_Y]²} = ∫∫ C_X(τ − u) h1(τ) h1(u) dτ du

and the variance of Z(t2) is
σ_Z² = E{[Z(t2) − m_Z]²} = ∫∫ C_X(τ − u) h2(τ) h2(u) dτ du

The correlation coefficient of Y(t1) and Z(t2) is

ρ = Cov[Y(t1) Z(t2)] / (σ_Y σ_Z)

Since X(t) is a Gaussian process, it follows that Y(t1) and Z(t2) are jointly Gaussian with a probability density function given by

f_{Y(t1),Z(t2)}(y, z) = C exp[−Q(y, z)]

where

C = 1 / (2π σ_Y σ_Z √(1 − ρ²))

Q(y, z) = [1/(2(1 − ρ²))] [ (y − m_Y)²/σ_Y² − 2ρ(y − m_Y)(z − m_Z)/(σ_Y σ_Z) + (z − m_Z)²/σ_Z² ]

(b) The random variables Y(t1) and Z(t2) are uncorrelated if and only if their covariance is zero. Since Y(t) and Z(t) are jointly Gaussian processes, it follows that Y(t1) and Z(t2) are statistically independent if Cov[Y(t1)Z(t2)] is zero. Therefore, the necessary and sufficient condition for Y(t1) and Z(t2) to be statistically independent is

∫∫ C_X(t1 − t2 − τ + u) h1(τ) h2(u) dτ du = 0 for all choices of t1 and t2.

Problem 1.22

(a) The filter output is

Y(t) = ∫ h(τ) X(t − τ) dτ = ∫ from 0 to T of X(t − τ) dτ

Then, the sample value of Y(t) at t = T equals

Y = ∫ from 0 to T of X(u) du

The mean of Y is therefore

E[Y] = E[∫ from 0 to T of X(u) du] = ∫ from 0 to T of E[X(u)] du = 0

The variance of Y is

σ_Y² = E[Y²] − (E[Y])² = R_Y(0) = ∫ S_Y(f) df = ∫ S_X(f) |H(f)|² df

where

H(f) = ∫ h(t) exp(−j2πft) dt
     = ∫ from 0 to T of exp(−j2πft) dt
     = [1/(j2πf)][1 − exp(−j2πfT)]
     = T sinc(fT) exp(−jπfT)

Therefore,

σ_Y² = T² ∫ S_X(f) sinc²(fT) df

(b) Since the filter input is Gaussian, it follows that Y is also Gaussian. Hence, the probability density function of Y is

f_Y(y) = [1/(√(2π) σ_Y)] exp(−y²/2σ_Y²)

where σ_Y² is defined above.

Problem 1.23

(a) The power spectral density of the noise at the filter output is given by

S_Y(f) = (N0/2) (2πfL/R)² / [1 + (2πfL/R)²] = (N0/2) {1 − 1/[1 + (2πfL/R)²]}

The autocorrelation function of the filter output is therefore

R_Y(τ) = (N0/2) δ(τ) − (N0 R/4L) exp(−R|τ|/L)

The mean of the filter output is equal to H(0) times the mean of the filter input. The process at the filter input has zero mean. Moreover, the value H(0) of the filter's transfer function H(f) is zero. It follows therefore that the filter output also has a zero mean.
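Returning to Problem 1.22: when the input is white with PSD N0/2, the variance reduces to σ_Y² = (N0/2) T² ∫ sinc²(fT) df = (N0/2) T, since ∫ sinc²(fT) df = 1/T by Parseval's theorem. A small simulation confirms this. This is a Python sketch, not from the manual; the values of N0, T, and the step size are arbitrary choices, and discrete samples of variance (N0/2)/dt stand in for the white process:

```python
import random

def integrator_variance(N0, T, dt=0.005, trials=10_000, seed=11):
    """Simulate Y = integral over (0, T) of X(u) du for white noise X of
    two-sided PSD N0/2, approximated by independent Gaussian samples of
    variance (N0/2)/dt, and return the empirical variance of Y."""
    rng = random.Random(seed)
    steps = int(T / dt)
    sigma = (N0 / (2 * dt)) ** 0.5
    acc = 0.0
    for _ in range(trials):
        y = sum(rng.gauss(0.0, sigma) for _ in range(steps)) * dt
        acc += y * y
    return acc / trials          # estimate of sigma_Y^2 (E[Y] = 0)

N0, T = 2.0, 0.5                 # hypothetical values
print(integrator_variance(N0, T), (N0 / 2) * T)   # theory: sigma_Y^2 = (N0/2) T
```

The agreement improves as dt shrinks and the number of trials grows, at the usual Monte Carlo rate.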
The mean-square value of the filter output is equal to R_Y(0). With zero mean, it follows therefore that the variance of the filter output is

σ_Y² = R_Y(0)

Since R_Y(τ) contains a delta function δ(τ) centered on τ = 0, we find that, in theory, σ_Y² is infinitely large.

Problem 1.24

(a) The noise equivalent bandwidth is

B_N = (1/|H(0)|²) ∫ from 0 to ∞ of |H(f)|² df
    = ∫ from 0 to ∞ of df / [1 + (f/f0)^{2n}]
    = (πf0/2n) / sin(π/2n)

(b) When the filter order n approaches infinity, we have

lim as n → ∞ of B_N = f0

Problem 1.25

The process X(t) defined by

X(t) = Σ over k of h(t − τ_k)

where h(t − τ_k) is a current pulse at time τ_k, is stationary for the following simple reason: there is no distinguishing origin of time.

Problem 1.26

(a) Let S1(f) denote the power spectral density of the noise at the first filter output. The dependence of S1(f) on frequency is illustrated below:

[Figure: S1(f), a bandpass spectrum of height N0/2 centered on ±f_c.]

Let S2(f) denote the power spectral density of the noise at the mixer output. Then, we may write

S2(f) = (1/4)[S1(f − f_c) + S1(f + f_c)]

[Figure: S2(f), the two frequency-shifted copies of S1(f), overlapping near the origin.]

The power spectral density of the noise n(t) at the second filter output is therefore defined by

S_N(f) = N0/4 for |f| ≤ B, and 0 otherwise

The autocorrelation function of the noise n(t) is

R_N(τ) = (N0 B/2) sinc(2Bτ)

(b) The mean value of the noise at the system output is zero. Hence, the variance and mean-square value of this noise are the same. Now, the total area under S_N(f) is equal to (N0/4)(2B) = N0 B/2. The variance of the noise at the system output is therefore N0 B/2.

(c) The maximum rate at which n(t) can be sampled for the resulting samples to be uncorrelated is 2B samples per second.

Problem 1.27

(a) The autocorrelation function of the filter output is

R_Y(τ) = ∫∫ h(τ1) h(τ2) R_X(τ − τ1 + τ2) dτ1 dτ2

Since R_X(τ) = (N0/2) δ(τ), we find that the impulse response h(t) of the filter must satisfy the condition:

R_Y(τ) = (N0/2) ∫∫ h(τ1) h(τ2)
δ(τ − τ1 + τ2) dτ1 dτ2 = (N0/2) ∫ h(τ + τ2) h(τ2) dτ2

(b) For the filter output to have a power spectral density equal to S_Y(f), we have to choose the transfer function H(f) of the filter such that

S_Y(f) = (N0/2) |H(f)|²

or

|H(f)| = √(2 S_Y(f)/N0)

Problem 1.28

(a) Consider the part of the analyzer in Fig. 1.19 defining the in-phase component n_I(t), reproduced here as Fig. 1:

[Figure 1: the narrowband noise n(t) applied to a multiplier driven by 2cos(2πf_c t), followed by a low-pass filter producing n_I(t).]

For the multiplier output, we have

v(t) = 2 n(t) cos(2πf_c t)

Applying Eq. (1.55) in the textbook, we therefore get

S_V(f) = S_N(f − f_c) + S_N(f + f_c)

Passing v(t) through an ideal low-pass filter of bandwidth B, defined as one-half the bandwidth of the narrowband noise n(t), we obtain

S_{N_I}(f) = S_V(f) for −B ≤ f ≤ B, and 0 otherwise
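The closed-form noise equivalent bandwidth of Problem 1.24 can be verified by direct numerical integration of the Butterworth magnitude response. This is a Python sketch, not part of the manual's MATLAB material; the grid spacing and integration cutoff are arbitrary choices:

```python
import math

def butterworth_bn_numeric(f0, n, df=1e-3, fmax=200.0):
    """Numerically integrate |H(f)|^2 = 1/(1 + (f/f0)^(2n)) over f >= 0
    by the midpoint rule (since |H(0)| = 1, this is B_N directly)."""
    total, f = 0.0, df / 2
    while f < fmax:
        total += df / (1.0 + (f / f0) ** (2 * n))
        f += df
    return total

def butterworth_bn_closed(f0, n):
    """Closed form from Problem 1.24: B_N = (pi f0 / 2n) / sin(pi / 2n)."""
    return (math.pi * f0 / (2 * n)) / math.sin(math.pi / (2 * n))

f0 = 1.0
for n in (1, 2, 5, 50):
    print(n, butterworth_bn_numeric(f0, n), butterworth_bn_closed(f0, n))
# As n grows, B_N approaches f0, the bandwidth of the ideal low-pass limit.
```

The n = 1 case reproduces the familiar result B_N = πf0/2 for a single-pole filter, and the printed values illustrate part (b): the noise equivalent bandwidth tends to f0 as the filter order increases.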
