Solutions Manual


Communication Systems

4th Edition

Simon Haykin

McMaster University, Canada


This Manual is written to accompany the fourth edition of my book on Communication Systems. It consists of the following:

Detailed solutions to all the problems in Chapters 1 to 10 of the book

MATLAB codes and representative results for the computer experiments in Chapters 1, 2, 3, 4, 6, 7, 9, and 10

I would like to express my thanks to my graduate student, Mathini Sellathurai, for her help in solving some of the problems and writing the above-mentioned MATLAB codes. I am also grateful to my technical coordinator, Lola Brooks, for typing the solutions to new problems and preparing the manuscript for the Manual.

Simon Haykin
Ancaster
April 29, 2000


Problem 1.1

As an illustration, three particular sample functions of the random process X(t), corresponding to F = W/4, W/2, and W, are plotted below:

[Figure: sample functions sin(2πWt), sin(2π(W/2)t), and sin(2π(W/4)t)]

To show that X(t) is nonstationary, we need only observe that every waveform illustrated above is zero at t = 0, positive for 0 < t < 1/2W, and negative for -1/2W < t < 0. Thus, the probability density function of the random variable X(t1) obtained by sampling X(t) at t1 = 1/4W is identically zero for negative argument, whereas the probability density function of the random variable X(t2) obtained by sampling X(t) at t2 = -1/4W is nonzero only for negative arguments. Clearly, therefore,

f_X(t1)(x) ≠ f_X(t2)(x)

and the random process X(t) is nonstationary.


Problem 1.2

X(t) = A cos(2πf_c t)

X_i = A cos(2πf_c t_i)

Since the amplitude A is uniformly distributed over the interval (0, 1), we may write

f_X_i(x_i) = 1/cos(2πf_c t_i),   0 ≤ x_i ≤ cos(2πf_c t_i)
           = 0,                  otherwise

Similarly, we may write

X_{i+τ} = A cos[2πf_c(t_i + τ)]

f_X_{i+τ}(x) = 1/cos[2πf_c(t_i + τ)],   0 ≤ x ≤ cos[2πf_c(t_i + τ)]
             = 0,                       otherwise

We thus see that f_X_i(x1) ≠ f_X_{i+τ}(x2), and so the process X(t) is nonstationary.

Problem 1.3

(a) The integrator output at time t is

Y(t) = ∫_0^t X(τ) dτ
     = A ∫_0^t cos(2πf_c τ) dτ
     = (A/2πf_c) sin(2πf_c t)

Hence

E[Y(t)] = [sin(2πf_c t)/2πf_c] E[A] = 0

Var[Y(t)] = [sin²(2πf_c t)/(2πf_c)²] Var[A]
          = [sin²(2πf_c t)/(2πf_c)²] σ_A²      (1)

Y(t) is Gaussian-distributed, and so we may express its probability density function as

f_Y(t)(y) = [1/√(2π) σ_Y(t)] exp[-y²/2σ_Y²(t)]

where σ_Y²(t) is the time-varying variance defined by Eq. (1).

(b) From Eq. (1) we note that the variance of Y(t) depends on time t, and so Y(t) is nonstationary.

(c) For a random process to be ergodic it has to be stationary. Since Y(t) is nonstationary, it follows that it is not ergodic.
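The time dependence of the variance in Eq. (1) can be checked numerically. The sketch below is a NumPy aside (not part of the manual); the carrier frequency and the zero-mean, unit-variance Gaussian amplitude are assumed values chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
fc = 1.0                              # assumed carrier frequency (Hz)
A = rng.normal(0.0, 1.0, 200_000)     # amplitude A: zero mean, unit variance (assumption)

def Y(t):
    # integrator output Y(t) = A sin(2*pi*fc*t) / (2*pi*fc)
    return A * np.sin(2 * np.pi * fc * t) / (2 * np.pi * fc)

v1 = Y(1 / (8 * fc)).var()   # theory: sin^2(pi/4) / (2*pi*fc)^2
v2 = Y(1 / (4 * fc)).var()   # theory: sin^2(pi/2) / (2*pi*fc)^2
```

The two estimates differ by a factor of about two, confirming that Var[Y(t)] depends on t, i.e., Y(t) is nonstationary.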

Problem 1.4

(a) The expected value of Z(t1) is

E[Z(t1)] = cos(2πt1) E[X] + sin(2πt1) E[Y]

Since E[X] = E[Y] = 0, we deduce that

E[Z(t1)] = 0

Similarly, we find that E[Z(t2)] = 0. Next, we note that

Cov[Z(t1)Z(t2)] = E[Z(t1)Z(t2)]
= E{[X cos(2πt1) + Y sin(2πt1)][X cos(2πt2) + Y sin(2πt2)]}
= cos(2πt1)cos(2πt2) E[X²]
  + [cos(2πt1)sin(2πt2) + sin(2πt1)cos(2πt2)] E[XY]
  + sin(2πt1)sin(2πt2) E[Y²]

Noting that

E[X²] = σ_X² + {E[X]}² = 1
E[Y²] = σ_Y² + {E[Y]}² = 1
E[XY] = 0

we obtain

Cov[Z(t1)Z(t2)] = cos(2πt1)cos(2πt2) + sin(2πt1)sin(2πt2)
                = cos[2π(t1 - t2)]      (1)

Since every weighted sum of the samples of the process Z(t) is Gaussian, it follows that Z(t) is a Gaussian process. Furthermore, we note that

σ²_Z(t1) = E[Z²(t1)] = 1

This result is obtained by putting t1 = t2 in Eq. (1). Similarly,

σ²_Z(t2) = E[Z²(t2)] = 1

Therefore, the correlation coefficient of Z(t1) and Z(t2) is

ρ = Cov[Z(t1)Z(t2)] / [σ_Z(t1) σ_Z(t2)] = cos[2π(t1 - t2)]

Hence, the joint probability density function of Z(t1) and Z(t2) is

f_Z(t1),Z(t2)(z1, z2) = C exp[-(z1² - 2ρ z1 z2 + z2²) / 2(1 - ρ²)]

where

C = 1 / (2π √(1 - cos²[2π(t1 - t2)]))

(b) We note that the covariance of Z(t1) and Z(t2) depends only on the time difference t1 - t2. The process Z(t) is therefore wide-sense stationary. Since it is Gaussian, it is also strictly stationary.
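Eq. (1) is easy to verify by simulation. The following NumPy sketch is an aside, not part of the manual; the sample times and sample size are arbitrary choices. It draws X and Y as independent standard Gaussians and estimates Cov[Z(t1)Z(t2)]:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 400_000
X = rng.normal(size=n)   # E[X] = 0, unit variance
Y = rng.normal(size=n)   # E[Y] = 0, unit variance, independent of X

def Z(t):
    return X * np.cos(2 * np.pi * t) + Y * np.sin(2 * np.pi * t)

t1, t2 = 0.3, 0.1                        # arbitrary sampling instants
cov_est = np.mean(Z(t1) * Z(t2))         # means are zero, so this estimates the covariance
cov_theory = np.cos(2 * np.pi * (t1 - t2))
```

The estimate agrees with cos[2π(t1 - t2)] to within Monte Carlo error.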

Problem 1.5

(a) Let

X(t) = A + Y(t)

where A is a constant and Y(t) is a zero-mean random process. The autocorrelation function of X(t) is

R_X(τ) = E[X(t+τ) X(t)]
       = E{[A + Y(t+τ)][A + Y(t)]}
       = E[A² + A Y(t+τ) + A Y(t) + Y(t+τ)Y(t)]
       = A² + R_Y(τ)

which shows that R_X(τ) contains a constant component equal to A².

(b) Let

X(t) = A_c cos(2πf_c t + θ) + Z(t)

where A_c cos(2πf_c t + θ) represents the sinusoidal component of X(t) and θ is a random phase variable. The autocorrelation function of X(t) is

R_X(τ) = E[X(t+τ) X(t)]
= E{[A_c cos(2πf_c t + 2πf_c τ + θ) + Z(t+τ)][A_c cos(2πf_c t + θ) + Z(t)]}
= E[A_c² cos(2πf_c t + 2πf_c τ + θ) cos(2πf_c t + θ)]
  + E[Z(t+τ) A_c cos(2πf_c t + θ)]
  + E[A_c cos(2πf_c t + 2πf_c τ + θ) Z(t)]
  + E[Z(t+τ) Z(t)]
= (A_c²/2) cos(2πf_c τ) + R_Z(τ)

which shows that R_X(τ) contains a sinusoidal component of the same frequency as X(t).
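The persistence of the sinusoidal component in R_X(τ) can be seen numerically. Below is a NumPy aside (not part of the manual); the sampling rate, carrier frequency, amplitude, and unit-variance white noise are all assumed values. It time-averages lag products of one realization of A_c cos(2πf_c t + θ) + Z(t):

```python
import numpy as np

rng = np.random.default_rng(2)
fs, fc, Ac, N = 1000.0, 50.0, 1.0, 200_000   # assumed parameters
t = np.arange(N) / fs
theta = rng.uniform(-np.pi, np.pi)
x = Ac * np.cos(2 * np.pi * fc * t + theta) + rng.normal(0.0, 1.0, N)

def R(k):
    # time-averaged autocorrelation at a lag of k samples (tau = k/fs)
    return np.mean(x[k:] * x[:N - k])

R_full = R(20)   # tau = one carrier period: expect (Ac^2/2) cos(2*pi) = +0.5
R_half = R(10)   # tau = half a period:      expect (Ac^2/2) cos(pi)  = -0.5
```

Even though the noise power equals the signal power here, the sinusoidal component of the autocorrelation survives the averaging.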

Problem l.6

(a) We note that the distribution function of X(t) is

F_X(t)(x) = 0,     x < 0
          = 1/2,   0 ≤ x < A
          = 1,     x ≥ A

and the corresponding probability density function is

f_X(t)(x) = (1/2)δ(x) + (1/2)δ(x - A)

which are illustrated below:

[Figure: F_X(t)(x), a staircase rising from 0 to 1/2 at x = 0 and to 1.0 at x = A; f_X(t)(x), delta functions of weight 1/2 at x = 0 and x = A]
(b) By ensemble-averaging, we have


E[X(t)] = ∫ x f_X(t)(x) dx
        = ∫ x [(1/2)δ(x) + (1/2)δ(x - A)] dx
        = A/2

The autocorrelation function of X(t) is

R_X(τ) = E[X(t+τ) X(t)]

Define the square function Sq_T0(t) as the square wave shown below:

[Figure: Sq_T0(t), a square wave of unit amplitude and period T0]

Then, we may write

R_X(τ) = A² ∫ Sq_T0(t - t_d + τ) Sq_T0(t - t_d) f_T_d(t_d) dt_d
       = (A²/2)(1 - 2|τ|/T0),   |τ| ≤ T0/2

Since the wave is periodic with period T0, R_X(τ) must also be periodic with period T0.

(c) On a time-averaging basis, we note by inspection of Fig. P1.6 that the mean is

<x(t)> = A/2

Next, the autocorrelation function

<x(t+τ) x(t)> = (1/T0) ∫ x(t+τ) x(t) dt

has its maximum value of A²/2 at τ = 0, and decreases linearly to zero at τ = T0/2. Therefore,

<x(t+τ) x(t)> = (A²/2)(1 - 2|τ|/T0),   |τ| ≤ T0/2

Again, the autocorrelation must be periodic with period T0.

(d) We note that the ensemble-averaging and time-averaging procedures yield the same set of results for the mean and autocorrelation functions. Therefore, X(t) is ergodic in both the mean and the autocorrelation function. Since ergodicity implies wide-sense stationarity, it follows that X(t) must be wide-sense stationary.

Problem 1.7

(a) For |τ| > T, the random variables X(t) and X(t+τ) occur in different pulse intervals and are therefore independent. Thus,

E[X(t) X(t+τ)] = E[X(t)] E[X(t+τ)],   |τ| > T

Since both amplitudes (0 and A) are equally likely, we have E[X(t)] = E[X(t+τ)] = A/2. Therefore, for |τ| > T,

R_X(τ) = A²/4

For |τ| ≤ T, the random variables occur in the same pulse interval if t_d < T - |τ|. If they do occur in the same pulse interval,

E[X(t) X(t+τ)] = (1/2)A² + (1/2)·0² = A²/2

We thus have the conditional expectation:

E[X(t) X(t+τ)] = A²/2,   t_d < T - |τ|
               = A²/4,   otherwise

Averaging over t_d, which is uniformly distributed over [0, T], we get

R_X(τ) = (A²/2)(1 - |τ|/T) + (A²/4)(|τ|/T)
       = (A²/4)(1 - |τ|/T) + A²/4,   |τ| ≤ T

(b) The power spectral density is the Fourier transform of the autocorrelation function. The Fourier transform of the triangular function

g(τ) = 1 - |τ|/T,   |τ| ≤ T
     = 0,           otherwise

is given by

G(f) = T sinc²(fT)

Therefore,

S_X(f) = (A²/4) δ(f) + (A²T/4) sinc²(fT)

We next note that

∫ (A²/4) δ(f) df = A²/4

∫ (A²T/4) sinc²(fT) df = A²/4

It follows therefore that half the power is in the dc component.
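The half-and-half power split can be confirmed by integrating the continuous part of S_X(f) numerically. A NumPy aside (not part of the manual; the values of A and T are arbitrary):

```python
import numpy as np

A, T = 2.0, 1.0                       # assumed amplitude and pulse duration
df = 0.001
f = np.arange(-200.0, 200.0, df)      # wide enough that the sinc^2 tail is negligible
S_cont = (A**2 * T / 4) * np.sinc(f * T) ** 2   # np.sinc(x) = sin(pi x)/(pi x)
P_cont = np.sum(S_cont) * df          # power in the continuous component
P_dc = A**2 / 4                       # power in the dc impulse (A^2/4) delta(f)
```

The numerically integrated continuous power matches the dc power A²/4, as the solution states.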

Problem 1.8


Yet) = gp(t) + X(t) + 13/2

and gp(t) and X(t) are uncorrelatedJf,e.YI

Cy(or) = C (or) + Cx(or)


where G (L) is the autocovariance gp

autocovariance of the random component.

3/2, removing the dc component. Ggp(L)

of the periodic component and eX(L) is the C-y(L) is the plot in figure pi ~B shifted down by and eX(L) are plotted below:


c,. (c;) 91'




Both gp(t) and X(t) have zero mean, ove: yo.._3e

(a) The power of the periodic canponent g (t) is therefore,

A p

1 T 0/2 2 - (: 1
:rJ gp(t) dt - ~ (0) = 2"
o -TO/2 ~
o.-s e: 'Co.._j€
( b) The power of the random component X(t) is
E [X2 (t)] = Gx(O) = 1 Problem 1.9

(a) The cross-correlation function is R_XY(τ) = E[X(t+τ) Y(t)]. Replacing τ with -τ:

R_XY(-τ) = E[X(t-τ) Y(t)]

Next, replacing t-τ with t, we get

R_XY(-τ) = E[Y(t+τ) X(t)] = R_YX(τ)

(b) Form the non-negative quantity

E[{X(t+τ) ± Y(t)}²] = E[X²(t+τ) ± 2X(t+τ)Y(t) + Y²(t)]
                    = E[X²(t+τ)] ± 2E[X(t+τ)Y(t)] + E[Y²(t)]
                    = R_X(0) ± 2R_XY(τ) + R_Y(0) ≥ 0

Hence,

|R_XY(τ)| ≤ (1/2)[R_X(0) + R_Y(0)]

Problem 1.10

(a) The cascade connection of the two filters is equivalent to a filter with impulse response

h(t) = ∫ h1(u) h2(t-u) du

The autocorrelation function of Y(t) is given by

R_Y(τ) = ∫∫ h(τ1) h(τ2) R_X(τ - τ1 + τ2) dτ1 dτ2

(b) The cross-correlation function of V(t) and Y(t) is

R_VY(τ) = E[V(t+τ) Y(t)]

The Y(t) and V(t) are related by

Y(t) = ∫ V(λ) h2(t-λ) dλ

Therefore,

R_VY(τ) = E[V(t+τ) ∫ V(λ) h2(t-λ) dλ]
        = ∫ h2(t-λ) E[V(t+τ) V(λ)] dλ
        = ∫ h2(t-λ) R_V(t+τ-λ) dλ

Substituting λ for t-λ:

R_VY(τ) = ∫ h2(λ) R_V(τ+λ) dλ

The autocorrelation function R_V(τ) is related to the given R_X(τ) by

R_V(τ) = ∫∫ h1(τ1) h1(τ2) R_X(τ - τ1 + τ2) dτ1 dτ2

Problem 1.11

(a) The cross-correlation function R_YX(τ) is

R_YX(τ) = E[Y(t+τ) X(t)]

The Y(t) and X(t) are related by

Y(t) = ∫ X(u) h(t-u) du

Therefore,

R_YX(τ) = E[∫ X(u) X(t) h(t+τ-u) du]
        = ∫ h(t+τ-u) E[X(u) X(t)] du
        = ∫ h(t+τ-u) R_X(u-t) du

Replacing t+τ-u by u:

R_YX(τ) = ∫ h(u) R_X(τ-u) du

(b) Since R_XY(τ) = R_YX(-τ), we have

R_XY(τ) = ∫ h(u) R_X(-τ-u) du

Since R_X(τ) is an even function of τ:

R_XY(τ) = ∫ h(u) R_X(τ+u) du

Replacing u by -u:

R_XY(τ) = ∫ h(-u) R_X(τ-u) du

(c) If X(t) is a white noise process with zero mean and power spectral density N0/2, we may write

R_YX(τ) = (N0/2) ∫ h(u) δ(τ-u) du

Using the sifting property of the delta function:

R_YX(τ) = (N0/2) h(τ)

That is,

h(τ) = (2/N0) R_YX(τ)

This means that we may measure the impulse response of the filter by applying a white noise of power spectral density N0/2 to the filter input, cross-correlating the filter output with the input, and then multiplying the result by 2/N0.
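The measurement idea in part (c) carries over directly to discrete time. The NumPy sketch below is an aside (not part of the manual); the FIR response h is a made-up example. It drives a filter with white noise and recovers h from the input-output cross-correlation:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 500_000
N0_over_2 = 1.0                                # white-noise power per sample (assumption)
x = rng.normal(0.0, np.sqrt(N0_over_2), n)     # discrete-time white noise input
h = np.array([1.0, 0.5, 0.25, 0.125])          # hypothetical impulse response
y = np.convolve(x, h)[:n]                      # filter output

# R_YX(k) = (N0/2) h(k)  =>  h_hat(k) = (2/N0) R_YX(k)
h_hat = np.array([np.mean(y[k:] * x[:n - k]) for k in range(len(h))]) / N0_over_2
```

The estimate h_hat matches the assumed taps to within Monte Carlo error, which is exactly the white-noise identification method described in the solution.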

Problem 1.12

(a) The power spectral density consists of two components:

(1) A delta function δ(f) at the origin, whose inverse Fourier transform is one.

(2) A triangular component of unit amplitude and width 2f0, centered at the origin; the inverse Fourier transform of this component is f0 sinc²(f0 τ).

Therefore, the autocorrelation function of X(t) is

R_X(τ) = 1 + f0 sinc²(f0 τ)

which is sketched below:

[Figure: R_X(τ), peak value 1 + f0 at τ = 0, decaying to the constant level 1]

(b) Since R_X(τ) contains a constant component of amplitude 1, it follows that the dc power contained in X(t) is 1.

(c) The mean-square value of X(t) is given by

E[X²(t)] = R_X(0) = 1 + f0

The ac power contained in X(t) is therefore equal to f0.

(d) If the sampling rate is f0/n, where n is an integer, the samples are uncorrelated. They are not, however, statistically independent. They would be statistically independent if X(t) were a Gaussian process.

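Part (d) rests on the zeros of the sinc² term: the autocovariance C_X(τ) = f0 sinc²(f0 τ) vanishes exactly at τ = n/f0. A small NumPy check (an aside, not part of the manual; f0 = 5 is an arbitrary value):

```python
import numpy as np

f0 = 5.0   # assumed value

def C(tau):
    # autocovariance of X(t): the ac part of R_X(tau)
    return f0 * np.sinc(f0 * tau) ** 2   # np.sinc(x) = sin(pi x)/(pi x)

sample_taus = np.arange(1, 6) / f0   # sampling intervals n/f0, n = 1..5
cov_at_samples = C(sample_taus)      # all zero: samples are uncorrelated
ac_power = C(0.0)                    # equals f0, the ac power of X(t)
```

Samples spaced n/f0 apart fall on the nulls of the sinc² function, so their covariance is zero even though, for a non-Gaussian process, they need not be independent.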
Problem 1.13

The autocorrelation function of n2(t) is

R_N2(t1, t2) = E[n2(t1) n2(t2)]
= E{[n1(t1) cos(2πf_c t1 + θ) - n1(t1) sin(2πf_c t1 + θ)]
  · [n1(t2) cos(2πf_c t2 + θ) - n1(t2) sin(2πf_c t2 + θ)]}
= E[n1(t1) n1(t2) cos(2πf_c t1 + θ) cos(2πf_c t2 + θ)
  - n1(t1) n1(t2) cos(2πf_c t1 + θ) sin(2πf_c t2 + θ)
  - n1(t1) n1(t2) sin(2πf_c t1 + θ) cos(2πf_c t2 + θ)
  + n1(t1) n1(t2) sin(2πf_c t1 + θ) sin(2πf_c t2 + θ)]
= E{n1(t1) n1(t2) cos[2πf_c(t1 - t2)] - n1(t1) n1(t2) sin[2πf_c(t1 + t2) + 2θ]}
= E[n1(t1) n1(t2)] cos[2πf_c(t1 - t2)] - E[n1(t1) n1(t2)] · E{sin[2πf_c(t1 + t2) + 2θ]}

Since θ is a uniformly distributed random variable, the second term is zero, giving

R_N2(t1, t2) = E[n1(t1) n1(t2)] cos[2πf_c(t1 - t2)]

Since n1(t) is stationary, we find that in terms of τ = t1 - t2:

R_N2(τ) = R_N1(τ) cos(2πf_c τ)

Taking the Fourier transforms of both sides of this relation:

S_N2(f) = (1/2)[S_N1(f + f_c) + S_N1(f - f_c)]

With S_N1(f) as defined in Fig. P1.13, we find that S_N2(f) is as shown below:

[Figure: S_N2(f), replicas of S_N1(f) shifted to ±f_c and scaled by 1/2]





Problem 1.14

The power spectral density of the random telegraph wave is

S_X(f) = ∫ R_X(τ) exp(-j2πfτ) dτ
       = ∫_{-∞}^{0} exp(2ντ) exp(-j2πfτ) dτ + ∫_{0}^{∞} exp(-2ντ) exp(-j2πfτ) dτ
       = 1/[2(ν - jπf)] + 1/[2(ν + jπf)]
       = ν/(ν² + π²f²)

The transfer function of the filter is

H(f) = 1/(1 + j2πfRC)

Therefore, the power spectral density of the filter output is

S_Y(f) = ν / {[1 + (2πfRC)²](ν² + π²f²)}

To determine the autocorrelation function of the filter output, we first expand S_Y(f) in partial fractions as follows:

S_Y(f) = [ν/(1 - 4R²C²ν²)] [1/(ν² + π²f²) - 4R²C²/(1 + 4π²f²R²C²)]

Recognizing the Fourier-transform pair

exp(-2ν|τ|) ⇌ ν/(ν² + π²f²)

we obtain the desired result:

R_Y(τ) = [ν/(1 - 4R²C²ν²)] [(1/ν) exp(-2ν|τ|) - 2RC exp(-|τ|/RC)]

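The closed form S_X(f) = ν/(ν² + π²f²) can be checked by transforming R_X(τ) = exp(-2ν|τ|) numerically. A NumPy aside (not part of the manual; ν = 1 is an assumed value):

```python
import numpy as np

v = 1.0                                # assumed transition-rate parameter nu
dtau = 0.001
tau = np.arange(-50.0, 50.0, dtau)     # exp(-2*nu*50) is negligible at the edges
Rx = np.exp(-2.0 * v * np.abs(tau))

def S_numeric(f):
    # numerical Fourier transform of the real, even autocorrelation
    return np.sum(Rx * np.cos(2 * np.pi * f * tau)) * dtau

def S_closed(f):
    return v / (v**2 + np.pi**2 * f**2)
```

The numerical transform reproduces the Lorentzian shape of the closed form across frequencies.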

Problem 1.15

We are given

Y(t) = ∫_{t-T}^{t} x(τ) dτ

For x(t) = δ(t), the impulse response of this running integrator is, by definition,

h(t) = ∫_{t-T}^{t} δ(τ) dτ
     = 1 for t-T ≤ 0 ≤ t or, equivalently, 0 ≤ t ≤ T

Correspondingly, the frequency response of the running integrator is

H(f) = ∫_{-∞}^{∞} h(t) exp(-j2πft) dt
     = ∫_0^T exp(-j2πft) dt
     = (1/j2πf)[1 - exp(-j2πfT)]
     = T sinc(fT) exp(-jπfT)

Hence the power spectral density S_Y(f) is defined in terms of the power spectral density S_X(f) as follows:

S_Y(f) = |H(f)|² S_X(f) = T² sinc²(fT) S_X(f)

Problem 1.16

We are given a filter with the impulse response

h(t) = a exp(-at),   0 ≤ t ≤ T
     = 0,            otherwise

The frequency response of the filter is therefore

H(f) = ∫ h(t) exp(-j2πft) dt
     = ∫_0^T a exp(-at) exp(-j2πft) dt
     = a ∫_0^T exp[-(a + j2πf)t] dt
     = [a/(a + j2πf)][1 - exp(-(a + j2πf)T)]
     = [a/(a + j2πf)][1 - exp(-aT)(cos(2πfT) - j sin(2πfT))]

The squared magnitude response is

|H(f)|² = [a²/(a² + 4π²f²)][(1 - exp(-aT) cos(2πfT))² + (exp(-aT) sin(2πfT))²]
        = [a²/(a² + 4π²f²)][1 - 2exp(-aT) cos(2πfT) + exp(-2aT)(cos²(2πfT) + sin²(2πfT))]
        = [a²/(a² + 4π²f²)][1 - 2exp(-aT) cos(2πfT) + exp(-2aT)]

Correspondingly, we may write

S_Y(f) = [a²/(a² + 4π²f²)][1 - 2exp(-aT) cos(2πfT) + exp(-2aT)] S_X(f)

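The squared magnitude response derived above can be verified against a direct numerical Fourier transform of h(t). A NumPy aside (not part of the manual; a = 2 and T = 1 are assumed values):

```python
import numpy as np

a, T = 2.0, 1.0                 # assumed filter parameters
dt = 1e-5
t = np.arange(0.0, T, dt)
h = a * np.exp(-a * t)          # truncated exponential impulse response

def H2_numeric(f):
    # |H(f)|^2 from a direct Riemann-sum Fourier transform of h(t)
    H = np.sum(h * np.exp(-2j * np.pi * f * t)) * dt
    return abs(H) ** 2

def H2_closed(f):
    return (a**2 / (a**2 + 4 * np.pi**2 * f**2)) * (
        1 - 2 * np.exp(-a * T) * np.cos(2 * np.pi * f * T) + np.exp(-2 * a * T))
```

The two agree at every test frequency, confirming the algebra above.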

Problem 1.17

The autocorrelation function of X(t) is

R_X(τ) = E[X(t+τ) X(t)]
       = A² E[cos(2πFt + 2πFτ - Θ) cos(2πFt - Θ)]
       = (A²/2) E[cos(4πFt + 2πFτ - 2Θ) + cos(2πFτ)]

Averaging over Θ, and noting that Θ is uniformly distributed over 2π radians, we get

R_X(τ) = (A²/2) E[cos(2πFτ)]
       = (A²/2) ∫ p_F(ν) cos(2πντ) dν      (1)

where p_F(ν) is the probability density function of the frequency F. Next, we note that R_X(τ) is related to the power spectral density by

R_X(τ) = ∫ S_X(f) cos(2πfτ) df      (2)

Therefore, comparing Eqs. (1) and (2), we deduce that the power spectral density of X(t) is

S_X(f) = (A²/4)[p_F(f) + p_F(-f)]

When the frequency assumes a constant value, f_c (say), we have p_F(f) = δ(f - f_c) and, correspondingly,

S_X(f) = (A²/4)[δ(f - f_c) + δ(f + f_c)]
Problem 1.18

Let σ_X² denote the variance of the random variable X_k obtained by observing the random process X(t) at time t_k. The variance σ_X² is related to the mean-square value of X_k as follows:

σ_X² = E[X_k²] - μ_X²

where μ_X = E[X_k]. Since the process X(t) has zero mean, it follows that

σ_X² = E[X_k²]

Next we note that

E[X_k²] = ∫_{-∞}^{∞} S_X(f) df

We may therefore define the variance σ_X² as the total area under the power spectral density S_X(f):

σ_X² = ∫_{-∞}^{∞} S_X(f) df      (1)

Thus, with the mean μ_X = 0 and the variance σ_X² defined by Eq. (1), we may express the probability density function of X_k as follows:

f_X_k(x) = [1/√(2πσ_X²)] exp(-x²/2σ_X²)

Problem 1.19

The input-output relation of a full-wave rectifier is defined by Y(t) = |X(t)|. The probability density function of the random variable X(t_k), obtained by observing the input random process at time t_k, is defined by

f_X(t_k)(x) = [1/√(2π)σ] exp(-x²/2σ²)

To find the probability density function of the random variable Y(t_k), obtained by observing the output random process, we need an expression for the inverse relation defining X(t_k) in terms of Y(t_k). We note that a given value of Y(t_k) corresponds to two values of X(t_k), of equal magnitude and opposite sign. We may therefore write

X(t_k) = -Y(t_k),   X(t_k) < 0
X(t_k) = Y(t_k),    X(t_k) > 0

In both cases, we have

|dX(t_k)/dY(t_k)| = 1

The probability density function of Y(t_k) is therefore given by

f_Y(t_k)(y) = √(2/π) (1/σ) exp(-y²/2σ²),   y ≥ 0
            = 0,                            y < 0

which is illustrated below:

[Figure: f_Y(t_k)(y), zero for y < 0, with peak value √(2/π)/σ ≈ 0.798/σ at y = 0]

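The rectifier density can be checked by simulation. A NumPy aside (not part of the manual; σ = 1.5 is an arbitrary choice) compares the sample mean of |X| with the value σ√(2/π) implied by the density above:

```python
import numpy as np

rng = np.random.default_rng(5)
sigma = 1.5                                # assumed input standard deviation
x = rng.normal(0.0, sigma, 400_000)        # Gaussian input X(t_k)
y = np.abs(x)                              # full-wave rectifier output Y(t_k)

mean_est = y.mean()
mean_theory = sigma * np.sqrt(2.0 / np.pi)   # integral of y f_Y(y) dy
```

The sample mean agrees with the folded-Gaussian mean to within Monte Carlo error.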

Problem 1.20

(a) The probability density function of the random variable Y(t_k), obtained by observing the rectifier output Y(t) = X²(t) at time t_k, is

f_Y(t_k)(y) = [1/√(2πy) σ_X] exp(-y/2σ_X²),   y ≥ 0
            = 0,                               y < 0

where

σ_X² = E[X²(t_k)] - {E[X(t_k)]}² = E[X²(t_k)] = R_X(0)

The mean value of Y(t_k) is therefore

E[Y(t_k)] = ∫_0^∞ y f_Y(t_k)(y) dy
          = [1/√(2π)σ_X] ∫_0^∞ √y exp(-y/2σ_X²) dy

Putting y = σ_X² u², we may rewrite Eq. (1) as

E[Y(t_k)] = [2σ_X²/√(2π)] ∫_0^∞ u² exp(-u²/2) du = σ_X² = R_X(0)      (1)

(b) The autocorrelation function of Y(t) is

R_Y(τ) = E[Y(t+τ) Y(t)]

Since Y(t) = X²(t), we have

R_Y(τ) = E[X²(t+τ) X²(t)]      (2)

The X(t_k+τ) and X(t_k) are jointly Gaussian with the joint probability density function

f(x1, x2) = [1/(2πσ_X²√(1 - ρ_X²(τ)))] exp{-[x1² - 2ρ_X(τ) x1 x2 + x2²] / 2σ_X²[1 - ρ_X²(τ)]}

where σ_X² = R_X(0) and

ρ_X(τ) = Cov[X(t_k+τ) X(t_k)]/σ_X² = R_X(τ)/R_X(0)

Evaluating the expectation in Eq. (2) with this joint density, and using the Gaussian moments

∫_{-∞}^{∞} exp(-u²/2) du = √(2π)
∫_{-∞}^{∞} u exp(-u²/2) du = 0
∫_{-∞}^{∞} u² exp(-u²/2) du = √(2π)
E[X⁴(t_k)] = 3σ_X⁴

we obtain

R_Y(τ) = 3σ_X⁴ ρ_X²(τ) + σ_X⁴[1 - ρ_X²(τ)]
       = σ_X⁴[1 + 2ρ_X²(τ)]

With ρ_X(τ) = R_X(τ)/R_X(0), we obtain

R_Y(τ) = R_X²(0)[1 + 2R_X²(τ)/R_X²(0)]
       = R_X²(0) + 2R_X²(τ)

The autocovariance function of Y(t) is therefore

C_Y(τ) = R_Y(τ) - {E[Y(t_k)]}²
       = R_X²(0) + 2R_X²(τ) - R_X²(0)
       = 2R_X²(τ)
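The result R_Y(τ) = R_X²(0) + 2R_X²(τ) for the square-law device is easy to confirm by simulation. The NumPy aside below (not part of the manual) builds a Gaussian process as a two-tap moving average of white noise, so that R_X(0) = 1 and R_X(1) = 1/2 by construction:

```python
import numpy as np

rng = np.random.default_rng(6)
n = 1_000_000
w = rng.normal(size=n + 1)
x = (w[1:] + w[:-1]) / np.sqrt(2.0)   # Gaussian process; R_X(0) = 1, R_X(1) = 0.5
y = x ** 2                            # square-law device output

Ry1_est = np.mean(y[1:] * y[:-1])      # R_Y at unit lag
Ry1_theory = 1.0**2 + 2 * 0.5**2       # R_X(0)^2 + 2 R_X(1)^2 = 1.5
```

The simulated lag-one autocorrelation of the squared process matches the closed form to within Monte Carlo error.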

Problem 1.21

(a) The random variable Y(t1), obtained by observing the filter output of impulse response h1(t) at time t1, is given by

Y(t1) = ∫ X(t1-τ) h1(τ) dτ

The expected value of Y(t1) is

μ_Y = E[Y(t1)] = μ_X H1(0)

where μ_X is the mean of X(t) and

H1(0) = ∫ h1(τ) dτ

is the zero-frequency response of the first filter. The random variable Z(t2), obtained by observing the filter output of impulse response h2(t) at time t2, is given by

Z(t2) = ∫ X(t2-u) h2(u) du

The expected value of Z(t2) is

μ_Z = E[Z(t2)] = μ_X H2(0)

where

H2(0) = ∫ h2(u) du

The covariance of Y(t1) and Z(t2) is

Cov[Y(t1)Z(t2)] = E[(Y(t1) - μ_Y)(Z(t2) - μ_Z)]
= E[∫∫ (X(t1-τ) - μ_X)(X(t2-u) - μ_X) h1(τ) h2(u) dτ du]
= ∫∫ E[(X(t1-τ) - μ_X)(X(t2-u) - μ_X)] h1(τ) h2(u) dτ du
= ∫∫ C_X(t1-t2-τ+u) h1(τ) h2(u) dτ du

where C_X(τ) is the autocovariance function of X(t). Next, we note that the variance of Y(t1) is

σ_Y² = E[(Y(t1) - μ_Y)²] = ∫∫ C_X(τ-u) h1(τ) h1(u) dτ du

and the variance of Z(t2) is

σ_Z² = E[(Z(t2) - μ_Z)²] = ∫∫ C_X(τ-u) h2(τ) h2(u) dτ du

The correlation coefficient of Y(t1) and Z(t2) is

ρ = Cov[Y(t1)Z(t2)] / (σ_Y σ_Z)

Since X(t) is a Gaussian process, it follows that Y(t1) and Z(t2) are jointly Gaussian with a probability density function given by the standard bivariate Gaussian form with means μ_Y and μ_Z, variances σ_Y² and σ_Z², and correlation coefficient ρ.

(b) The random variables Y(t1) and Z(t2) are uncorrelated if and only if their covariance is zero. Since Y(t) and Z(t) are jointly Gaussian processes, it follows that Y(t1) and Z(t2) are statistically independent if Cov[Y(t1)Z(t2)] is zero. Therefore, the necessary and sufficient condition for Y(t1) and Z(t2) to be statistically independent is that

∫∫ C_X(t1-t2-τ+u) h1(τ) h2(u) dτ du = 0

for all choices of t1 and t2.


Problem 1.22

(a) The filter output is

Y(t) = ∫ h(τ) X(t-τ) dτ

Put T-τ = u. Then, the sample value of Y(t) at t = T equals

Y = (1/T) ∫_0^T X(u) du

The mean of Y is therefore

E[Y] = E[(1/T) ∫_0^T X(u) du]
     = (1/T) ∫_0^T E[X(u)] du
     = 0

The variance of Y is

σ_Y² = E[Y²] - {E[Y]}² = R_Y(0)
     = ∫ S_Y(f) df
     = ∫ S_X(f) |H(f)|² df

where

H(f) = ∫ h(t) exp(-j2πft) dt
     = (1/T) ∫_0^T exp(-j2πft) dt
     = (1/j2πfT)[1 - exp(-j2πfT)]
     = sinc(fT) exp(-jπfT)

Therefore,

σ_Y² = ∫ S_X(f) sinc²(fT) df

(b) Since the filter input is Gaussian, it follows that Y is also Gaussian. Hence, the probability density function of Y is

f_Y(y) = [1/√(2π)σ_Y] exp(-y²/2σ_Y²)

where σ_Y² is defined above.

Problem 1.23

(a) The power spectral density of the noise at the filter output is given by

S_N(f) = |j2πfL / (R + j2πfL)|² (N0/2)
       = (N0/2) · 4π²f²L² / (R² + 4π²f²L²)

The autocorrelation function of the filter output is the inverse Fourier transform of S_N(f).

(b) The mean of the filter output is equal to H(0) times the mean of the filter input. The process at the filter input has zero mean. The value H(0) of the filter's transfer function H(f) is zero. It follows therefore that the filter output also has a zero mean.

The mean-square value of the filter output is equal to R_N(0). With zero mean, it follows therefore that the variance of the filter output is

σ_N² = R_N(0)

Since R_N(τ) contains a delta function δ(τ) centered on τ = 0, we find that, in theory, σ_N² is infinitely large.


Problem 1.24

(a) The noise equivalent bandwidth is

B_N = [1/|H(0)|²] ∫_0^∞ |H(f)|² df
    = ∫_0^∞ df / [1 + (f/f0)^(2n)]
    = f0 (π/2n) / sin(π/2n)
    = f0 / sinc(1/2n)

(b) When the filter order n approaches infinity, we have

B_N = f0 lim_{n→∞} 1/sinc(1/2n) = f0

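The closed form B_N = f0/sinc(1/2n) can be checked against direct numerical integration of the Butterworth response. A NumPy aside (not part of the manual; f0 = 1 is an assumed value):

```python
import numpy as np

f0 = 1.0   # assumed 3-dB cutoff frequency

def BN_numeric(n, f_max=2000.0, df=0.001):
    # direct numerical integration of |H(f)|^2 = 1/(1 + (f/f0)^(2n))
    f = np.arange(0.0, f_max, df)
    return np.sum(1.0 / (1.0 + (f / f0) ** (2 * n))) * df

def BN_closed(n):
    return f0 / np.sinc(1.0 / (2 * n))   # np.sinc(x) = sin(pi x)/(pi x)
```

As the order n grows, BN_closed(n) tends to f0, the brick-wall value, consistent with part (b).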

Problem 1.25

The process X(t) defined by

X(t) = Σ_k h(t - τ_k)

where h(t - τ_k) is a current pulse at time τ_k, is stationary for the following simple reason: there is no distinguishing origin of time.


Problem 1.26

(a) Let S1(f) denote the power spectral density of the noise at the first filter output. The dependence of S1(f) on frequency is illustrated below:

[Figure: S1(f), of height N0/2, occupying a band of width 2B centered on each of ±f_c]

Let S2(f) denote the power spectral density of the noise at the mixer output. Then, we may write

S2(f) = (1/4)[S1(f - f_c) + S1(f + f_c)]

which is illustrated below:

[Figure: S2(f), with components centered on 0 and ±2f_c]

The power spectral density of the noise n(t) at the second filter output is therefore defined by

S0(f) = N0/4,   |f| ≤ B
      = 0,      otherwise

The autocorrelation function of the noise n(t) is

R_n(τ) = (N0 B/2) sinc(2Bτ)

(b) The mean value of the noise at the system output is zero. Hence, the variance and mean-square value of this noise are the same. Now, the total area under S0(f) is equal to (N0/4)(2B) = N0B/2. The variance of the noise at the system output is therefore N0B/2.

(c) The maximum rate at which n(t) can be sampled for the resulting samples to be uncorrelated is 2B samples per second.


Problem 1.27

(a) The autocorrelation function of the filter output is

R_X(τ) = ∫∫ h(τ1) h(τ2) R_W(τ - τ1 + τ2) dτ1 dτ2

Since R_W(τ) = (N0/2)δ(τ), we find that the impulse response h(t) of the filter must satisfy the condition:

R_X(τ) = (N0/2) ∫∫ h(τ1) h(τ2) δ(τ - τ1 + τ2) dτ1 dτ2
       = (N0/2) ∫ h(τ + τ2) h(τ2) dτ2

(b) For the filter output to have a power spectral density equal to S_X(f), we have to choose the transfer function H(f) of the filter such that

S_X(f) = (N0/2) |H(f)|²

that is,

|H(f)| = √(2 S_X(f)/N0)

Problem 1.28

(a) Consider the part of the analyzer in Fig. 1.19 defining the in-phase component n_I(t), reproduced here as Fig. 1:

[Figure 1: narrowband noise n(t) multiplied by 2cos(2πf_c t) to produce v(t), followed by an ideal low-pass filter producing n_I(t)]

For the multiplier output, we have

v(t) = 2n(t) cos(2πf_c t)

Applying Eq. (1.55) in the textbook, we therefore get

S_V(f) = S_N(f - f_c) + S_N(f + f_c)

Passing v(t) through an ideal low-pass filter of bandwidth B, defined as one-half the bandwidth of the narrowband noise n(t), we obtain

S_N_I(f) = S_V(f) for -B ≤ f ≤ B, 0 otherwise
         = S_N(f - f_c) + S_N(f + f_c) for -B ≤ f ≤ B, 0 otherwise      (1)

For the quadrature component, we have the system shown in Fig. 2:

[Figure 2: narrowband noise n(t) multiplied by -2sin(2πf_c t) to produce u(t), followed by an ideal low-pass filter producing n_Q(t)]

The multiplier output u(t) is given by

u(t) = -2n(t) sin(2πf_c t)

Proceeding as before, we obtain

S_N_Q(f) = S_U(f) for -B ≤ f ≤ B, 0 otherwise
         = S_N(f - f_c) + S_N(f + f_c) for -B ≤ f ≤ B, 0 otherwise      (2)

Accordingly, from Eqs. (1) and (2) we have

S_N_I(f) = S_N_Q(f)

(b) Applying Eq. (1.78) of the textbook to Figs. 1 and 2, we obtain

S_N_I_N_Q(f) = H(f) S_VU(f)      (3)

where H(f) is the frequency response of the ideal low-pass filter:

H(f) = 1 for -B ≤ f ≤ B, 0 otherwise

Applying Eq. (1.23) of the textbook to the problem at hand:

R_VU(τ) = 2R_N(τ) sin(2πf_c τ) = (1/j) R_N(τ)[exp(j2πf_c τ) - exp(-j2πf_c τ)]

Applying the Fourier transform to both sides of this relation:

S_VU(f) = (1/j)[S_N(f - f_c) - S_N(f + f_c)]      (4)

Substituting Eq. (4) into (3):

S_N_I_N_Q(f) = (1/j)[S_N(f - f_c) - S_N(f + f_c)] for -B ≤ f ≤ B, 0 otherwise

which is the desired result.

Problem 1.29

If the power spectral density S_N(f) of the narrowband noise n(t) is symmetric about the midband frequency f_c, we then have

S_N(f - f_c) = S_N(f + f_c)

From part (b) of Problem 1.28, the cross-spectral densities between the in-phase noise component n_I(t) and quadrature noise component n_Q(t) are then zero for all frequencies:

S_N_I_N_Q(f) = 0 for all f

This, in turn, means that the cross-correlation functions R_N_I_N_Q(τ) and R_N_Q_N_I(τ) are both zero, that is,

E[N_I(t_k + τ) N_Q(t_k)] = 0 for all τ

which states that the random variables N_I(t_k + τ) and N_Q(t_k), obtained by observing n_I(t) at time t_k + τ and observing n_Q(t) at time t_k, are orthogonal for all τ.

If the narrowband noise n(t) is Gaussian, with zero mean (by virtue of the narrowband nature of n(t)), then it follows that both N_I(t_k + τ) and N_Q(t_k) are also Gaussian with zero mean. We thus conclude the following:

N_I(t_k + τ) and N_Q(t_k) are orthogonal and, having zero mean, they are also uncorrelated.

Being Gaussian and uncorrelated, N_I(t_k + τ) and N_Q(t_k) are therefore statistically independent. That is, the in-phase noise component n_I(t) and quadrature noise component n_Q(t) are statistically independent.


Problem 1.30

(a) The power spectral density of the in-phase component or quadrature component is defined by

S_N_I(f) = S_N_Q(f) = S_N(f - 5) + S_N(f + 5),   -B < f < B

We note that, for -2 < f < 2, S_N(f + 5) and S_N(f - 5) are as shown below:

[Figure: S_N(f + 5) and S_N(f - 5), the given spectrum shifted down and up in frequency by 5]

We thus find that S_N_I(f) = S_N_Q(f) is as shown below:

[Figure: S_N_I(f) = S_N_Q(f), of height 2.0 over the band -2 < f < 2]

(b) The cross-spectral density S_N_I_N_Q(f) is defined by

S_N_I_N_Q(f) = (1/j)[S_N(f - 5) - S_N(f + 5)],   -B < f < B

We therefore find that (1/j) S_N_I_N_Q(f) is as shown below:

[Figure: (1/j) S_N_I_N_Q(f), an odd function of f]

Next, we note that

S_N_Q_N_I(f) = S_N_I_N_Q(-f)

We thus find that (1/j) S_N_Q_N_I(f) is as shown below:

[Figure: (1/j) S_N_Q_N_I(f), the mirror image of the previous sketch]
Problem 1.31

(a) Express the noise n(t) in terms of its in-phase and quadrature components as follows:

n(t) = n_I(t) cos(2πf_c t) - n_Q(t) sin(2πf_c t)

The envelope of n(t) is

r(t) = √(n_I²(t) + n_Q²(t))

which is Rayleigh-distributed. That is,

f_R(r) = (r/σ²) exp(-r²/2σ²),   r ≥ 0
       = 0,                     otherwise

To evaluate the variance σ², we note that the power spectral density of n_I(t) or n_Q(t) is

S_N_I(f) = S_N_Q(f) = N0,   -B ≤ f ≤ B

Since the mean of n(t) is zero, we find that

σ² = 2N0B

Hence,

f_R(r) = (r/2N0B) exp(-r²/4N0B),   r ≥ 0

(b) The mean value of the envelope is equal to σ√(π/2) = √(πN0B), and its variance is equal to (2 - π/2)σ² = 0.858 N0B.


Problem 1.32

Autocorrelation of a Sinusoidal Wave Plus White Gaussian Noise

In this computer experiment, we study the statistical characterization of a random process X(t) consisting of a sinusoidal wave component A cos(2πf_c t + Θ) and a white Gaussian noise process W(t) of zero mean and power spectral density N0/2. That is, we have

X(t) = A cos(2πf_c t + Θ) + W(t)      (1)

where Θ is a uniformly distributed random variable over the interval (-π, π). Clearly, the two components of the process X(t) are independent. The autocorrelation function of X(t) is therefore the sum of the individual autocorrelation functions of the signal (sinusoidal wave) component and the noise component, as shown by

R_X(τ) = (A²/2) cos(2πf_c τ) + (N0/2) δ(τ)      (2)


This equation shows that for |τ| > 0, the autocorrelation function R_X(τ) has the same sinusoidal waveform as the signal component. We may generalize this result by stating that the presence of a periodic signal component corrupted by additive white noise can be detected by computing the autocorrelation function of the composite process X(t).

The purpose of the experiment described here is to perform this computation using two different methods: (a) ensemble averaging, and (b) time averaging. The signal of interest consists of a sinusoidal signal of frequency f_c = 0.002 and phase θ = -π/2, truncated to a finite duration T = 1000; the amplitude A of the sinusoidal signal is set to √2 to give unit average power. A particular realization x(t) of the random process X(t) consists of this sinusoidal signal and additive white Gaussian noise; the power spectral density of the noise for this realization is (N0/2) = 1000. The original sinusoid is barely recognizable in x(t).

(a) For ensemble-average computation of the autocorrelation function, we may proceed as follows:

• Compute the product x(t + τ)x(t) for some fixed time t and specified time shift τ, where x(t) is a particular realization of the random process X(t).

• Repeat the computation of the product x(t + τ)x(t) for M independent realizations (i.e., sample functions) of the random process X(t).

• Compute the average of these computations over M.

• Repeat this sequence of computations for different values of τ.

The results of this computation are plotted in Fig. 1 for M = 50 realizations. The picture portrayed here is in perfect agreement with the theory defined by Eq. (2). The important point to note here is that the ensemble-averaging process yields a clean estimate of the true autocorrelation function R_X(τ) of the random process X(t). Moreover, the presence of the sinusoidal signal is clearly visible in the plot of R_X(τ) versus τ.

(b) For the time-average estimation of the autocorrelation function of the process X(t), we invoke ergodicity and use the formula

R_X(τ) = lim_{T→∞} R_x(τ, T)      (3)

where R_x(τ, T) is the time-averaged autocorrelation function:

R_x(τ, T) = (1/2T) ∫_{-T}^{T} x(t + τ) x(t) dt      (4)

The x(t) in Eq. (4) is a particular realization of the process X(t), and 2T is the total observation interval. Define the time-windowed function

x_T(t) = x(t),   -T ≤ t ≤ T
       = 0,      otherwise      (5)

We may then rewrite Eq. (4) as

R_x(τ, T) = (1/2T) ∫_{-∞}^{∞} x_T(t + τ) x_T(t) dt      (6)


For a specified time shift τ, we may compute R_x(τ, T) directly using Eq. (6). However, from a computational viewpoint, it is more efficient to use an indirect method based on Fourier transformation. First, we note from Eq. (6) that the time-averaged autocorrelation function R_x(τ, T) may be viewed as a scaled form of convolution in the τ-domain, as follows:

R_x(τ, T) = (1/2T) x_T(τ) ★ x_T(-τ)      (7)

where the star denotes convolution and x_T(τ) is simply the time-windowed function x_T(t) with t replaced by τ. Let X_T(f) denote the Fourier transform of x_T(τ); note that X_T(f) is the same as the Fourier transform X(f, T). Since convolution in the τ-domain is transformed into multiplication in the frequency domain, we have the Fourier-transform pair:

R_x(τ, T) ⇌ |X_T(f)|²/2T      (8)

The parameter |X_T(f)|²/2T is recognized as the periodogram of the process X(t). Equation (8) is a mathematical description of the correlation theorem, which may be formally stated as follows: the time-averaged autocorrelation function of a sample function pertaining to a random process and the periodogram, based on that sample function, constitute a Fourier-transform pair.

We are now ready to describe the indirect method for computing the time-averaged autocorrelation function R_x(τ, T):

• Compute the Fourier transform X_T(f) of the time-windowed function x_T(τ).

• Compute the periodogram |X_T(f)|²/2T.

• Compute the inverse Fourier transform of |X_T(f)|²/2T.

To perform these calculations on a digital computer, the customary procedure is to use the fast Fourier transform (FFT) algorithm. With x_T(τ) uniformly sampled, the computational procedure described herein yields the desired values of R_x(τ, T) for τ = 0, Δ, 2Δ, ..., (N-1)Δ, where Δ is the sampling period and N is the total number of samples used in the computation. Figure 2 presents the results obtained in the time-averaging approach of "estimating" the autocorrelation function R_X(τ) using the indirect method for the same set of parameters as those used for the ensemble-averaged results of Fig. 1. The symbol R̂_X(τ) is used to emphasize the fact that the computation described here results in an "estimate" of the autocorrelation function R_X(τ). The results presented in Fig. 2 are for a signal-to-noise ratio of +10 dB.




On the basis of the results presented in Figures 1 and 2 we may make the following observations:

• The ensemble-averaging and time-averaging approaches yield similar results for the autocorrelation function R_X(τ), signifying the fact that the random process X(t) described herein is indeed ergodic.

• The indirect time-averaging approach, based on the FFT algorithm, provides an efficient method for the estimation of R_X(τ) using a digital computer.

• As the SNR is increased, the numerical accuracy of the estimation is improved, which is intuitively satisfying.


Problem 1.32

Matlab codes

% Problem 1.32a CS: Haykin
% Ensemble-averaged autocorrelation
% M. Sellathurai

clear all
A = sqrt(2);
N = 1000; M = 50; SNRdb = 0;
e_corrf_f = zeros(1,N);
f_c = 2/N;
t = 0:1:N-1;

for trial = 1:M
  % signal
  s = A*cos(2*pi*f_c*t);
  % noise
  snr = 10^(SNRdb/10);
  wn = randn(1,length(s))/sqrt(snr)/sqrt(2);
  % signal plus noise
  s = s + wn;
  % autocorrelation
  [e_corrf] = en_corr(s, s, N);
  % ensemble-averaged autocorrelation
  e_corrf_f = e_corrf_f + e_corrf;
end

% prints
plot(-500:500-1, e_corrf_f/M);
xlabel('\tau')
ylabel('R_X(\tau)')


% Problem 1.32b CS: Haykin
% Time-averaged estimation of autocorrelation
% M. Sellathurai

clear all
A = sqrt(2);
N = 1000; SNRdb = 0;
f_c = 2/N;
t = 0:1:N-1;

% signal
s = cos(2*pi*f_c*t);
% noise
snr = 10^(SNRdb/10);
wn = randn(1,length(s))/sqrt(snr)/sqrt(2);
% signal plus noise
s = s + wn;

% time-averaged autocorrelation
[e_corrf] = time_corr(s, N);

% prints
plot(-500:500-1, e_corrf);
xlabel('\tau')
ylabel('R_X(\tau)')


function [corrf] = en_corr(u, v, N)
% function to compute the autocorrelation/cross-correlation
% ensemble average
% used in Problem 1.32, CS: Haykin
% M. Sellathurai, 10 June 1999.

tt = length(u);
for m = 0:tt-1
    shifted_u = [u(m+1:tt) u(1:m)];
    corr(m+1) = sum(v.*shifted_u)/(N/2);
end
corr1 = fliplr(corr);
corrf = [corr1(501:tt) corr(1:500)];


function [corrf] = time_corr(s, N)
% function to compute the autocorrelation/cross-correlation
% time average
% used in Problem 1.32, CS: Haykin
% M. Sellathurai, 10 June 1999.

X = fft(s);
X1 = fftshift((abs(X).^2)/(N/2));
corrf = fftshift(abs(ifft(X1)));


Answer to Problem 1.32







Figure 1: Ensemble averaging





Figure 2: Time averaging


Problem 1.33

Matlab codes

% Problem 1.33 CS: Haykin
% multipath channel
% M. Sellathurai

clear all
Nf = 0; Xf = 0;   % initializing counters
N = 10000;        % number of samples
M = 2; P = 10;
a = 1;            % line-of-sight component

for i = 1:P
    A = sqrt(randn(N,M).^2 + randn(N,M).^2);
    xi = A.*cos(cos(rand(N,M)*2*pi) + rand(N,M)*2*pi);  % in-phase component
    xq = A.*sin(cos(rand(N,M)*2*pi) + rand(N,M)*2*pi);  % quadrature component
    xi = sum(xi');
    xq = sum(xq');
    % Rayleigh/Rician fading envelope
    ra = sqrt((xi+a).^2 + xq.^2);
    [h, X] = hist(ra, 50);
    Nf = Nf + h;
    Xf = Xf + X;
end

Nf = Nf/P;
Xf = Xf/P;

% print
plot(Xf, Nf/(sum(Xf.*Nf)/20))
xlabel('v')



Answer to Problem 1.33


Figure 1: Rician distribution
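The shape of the simulated histogram can be checked against known envelope statistics. The following Python/NumPy sketch (the line-of-sight amplitude a = 1 and unit per-component standard deviation are illustrative assumptions, chosen to mirror the script above) compares the Rician envelope with the Rayleigh envelope obtained when the line-of-sight term is absent:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200_000
a = 1.0        # assumed line-of-sight amplitude
sigma = 1.0    # assumed per-component standard deviation

# Envelope of (LOS phasor + circular Gaussian scatter): Rician distributed
xi = rng.normal(0.0, sigma, N) + a
xq = rng.normal(0.0, sigma, N)
r_rician = np.hypot(xi, xq)

# With no line-of-sight term the envelope reduces to Rayleigh,
# whose mean is sigma * sqrt(pi / 2)
r_rayleigh = np.hypot(rng.normal(0.0, sigma, N), rng.normal(0.0, sigma, N))
```

The Rayleigh sample mean lands near σ√(π/2) ≈ 1.2533, and the Rician mean exceeds it, reflecting the bias introduced by the direct path.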




Continuous-wave Modulation

Problem 2.1

(a) Let the input voltage v_i consist of a sinusoidal wave of frequency f_c/2 (i.e., half the desired carrier frequency) and the message signal m(t):

v_i = A_c cos(πf_c t) + m(t)

Then, the output current i_o is

i_o = a_1 v_i + a_3 v_i³
    = a_1[A_c cos(πf_c t) + m(t)] + a_3[A_c cos(πf_c t) + m(t)]³
    = a_1[A_c cos(πf_c t) + m(t)] + (1/4)a_3 A_c³[cos(3πf_c t) + 3cos(πf_c t)]
      + (3/2)a_3 A_c² m(t)[1 + cos(2πf_c t)] + 3a_3 A_c m²(t)cos(πf_c t) + a_3 m³(t)

Assume that m(t) occupies the frequency interval −W ≤ f ≤ W. Then, the amplitude spectrum of the output current i_o is as shown below.


[Amplitude spectrum |I_o(f)|: components centered at 0, f_c/2, f_c, and 3f_c/2]

From this diagram we see that in order to extract a DSB-SC wave with carrier frequency f_c from i_o, we need a band-pass filter with mid-band frequency f_c and bandwidth 2W, which satisfy the requirement

f_c/2 + 2W < f_c − W

that is,

f_c > 6W

Therefore, to use the given nonlinear device as a product modulator, we may use the following configuration:

[Block diagram: A_c cos(πf_c t) and m(t) applied to the nonlinear device, followed by a band-pass filter of mid-band frequency f_c; output (3/2)a_3 A_c² m(t)cos(2πf_c t)]

(b) To generate an AM wave with carrier frequency f_c, we require a sinusoidal component of frequency f_c to be added to the DSB-SC wave generated in the manner described above. To achieve this requirement, we may use the following configuration involving a pair of the nonlinear devices and a pair of identical band-pass filters.

[Block diagram: two identical nonlinear device/band-pass filter branches, each driven by A_c cos(πf_c t); the upper branch input is m(t) and the lower branch input is the dc offset A_0; the branch outputs are summed to form the AM wave]


The resulting AM wave is therefore (3/2)a_3 A_c²[A_0 + m(t)]cos(2πf_c t). Thus, the choice of the dc level A_0 at the input of the lower branch controls the percentage modulation of the AM wave.

Problem 2.2

Consider the square-law characteristic:

v_2(t) = a_1 v_1(t) + a_2 v_1²(t)    (1)

where a_1 and a_2 are constants. Let

v_1(t) = A_c cos(2πf_c t) + m(t)    (2)

Therefore, substituting Eq. (2) into (1) and expanding terms:

v_2(t) = a_1 A_c[1 + (2a_2/a_1)m(t)]cos(2πf_c t)
         + a_1 m(t) + a_2 m²(t) + a_2 A_c² cos²(2πf_c t)    (3)


The first term in Eq. (3) is the desired AM signal with k_a = 2a_2/a_1. The remaining three terms are unwanted terms that are removed by filtering.

Let the modulating wave m(t) be limited to the band −W ≤ f ≤ W, as in Fig. 1(a). Then, from Eq. (3) we find that the amplitude spectrum |V_2(f)| is as shown in Fig. 1(b). It follows therefore that the unwanted terms may be removed from v_2(t) by designing the tuned filter at the modulator output of Fig. P2.2 to have a mid-band frequency f_c and bandwidth 2W, which satisfy the requirement that f_c > 3W.




[Figure 1: (a) message spectrum M(f), limited to −W ≤ f ≤ W; (b) amplitude spectrum |V_2(f)|]
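The spectral bookkeeping of Eq. (3) can be verified numerically. The following Python/NumPy sketch (not the manual's MATLAB; the single-tone message and the values A_c = 1, A_m = 0.5, a_1 = 1, a_2 = 0.25 are illustrative assumptions) checks that the square-law device places the carrier at f_c and sidebands of amplitude a_2 A_c A_m at f_c ± f_m:

```python
import numpy as np

N = 1024
t = np.arange(N) / N          # one second sampled at 1024 Hz (integer-bin tones)
fc, fm = 100, 4               # assumed carrier and message frequencies (Hz)
Ac, Am = 1.0, 0.5
a1, a2 = 1.0, 0.25

m = Am * np.cos(2 * np.pi * fm * t)
v1 = Ac * np.cos(2 * np.pi * fc * t) + m
v2 = a1 * v1 + a2 * v1 ** 2   # square-law characteristic of Eq. (1)

# amplitude of each cosine component of v2
spec = np.abs(np.fft.rfft(v2)) * 2 / N
```

The sideband-to-carrier ratio read off the spectrum equals (a_2/a_1)A_m = k_a A_m/2, consistent with the amplitude sensitivity k_a = 2a_2/a_1 derived above.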


Problem 2.3

The generation of an AM wave may be accomplished using various devices; here we describe one such device called a switching modulator. Details of this modulator are shown in Fig. P2.3a, where it is assumed that the carrier wave c(t) applied to the diode is large in amplitude, so that it swings right across the characteristic curve of the diode. We assume that the diode acts as an ideal switch, that is, it presents zero impedance when it is forward-biased [corresponding to c(t) > 0]. We may thus approximate the transfer characteristic of the diode-load resistor combination by a piecewise-linear characteristic, as shown in Fig. P2.3b. Accordingly, for an input voltage v_1(t) consisting of the sum of the carrier and the message signal:

v_1(t) = A_c cos(2πf_c t) + m(t)    (1)

where |m(t)| << A_c, the resulting load voltage v_2(t) is

v_2(t) = { v_1(t),  c(t) > 0
         { 0,       c(t) < 0    (2)


That is, the load voltage v_2(t) varies periodically between the values v_1(t) and zero at a rate equal to the carrier frequency f_c. In this way, by assuming a modulating wave that is weak compared with the carrier wave, we have effectively replaced the nonlinear behavior of the diode by an approximately equivalent piecewise-linear time-varying operation.

We may express Eq. (2) mathematically as

v_2(t) ≈ [A_c cos(2πf_c t) + m(t)]g_{T_0}(t)    (3)

where g_{T_0}(t) is a periodic pulse train of duty cycle equal to one-half and period T_0 = 1/f_c, as in Fig. 1. Representing this g_{T_0}(t) by its Fourier series, we have

g_{T_0}(t) = 1/2 + (2/π) Σ_{n=1}^{∞} [(−1)^{n−1}/(2n − 1)] cos[2πf_c t(2n − 1)]    (4)



Therefore, substituting Eq. (4) in (3), we find that the load voltage v_2(t) consists of the sum of two components:

1. The component

(A_c/2)[1 + (4/(πA_c))m(t)]cos(2πf_c t)

which is the desired AM wave with amplitude sensitivity k_a = 4/(πA_c). The switching modulator is therefore made more sensitive by reducing the carrier amplitude A_c; however, it must be maintained large enough to make the diode act like an ideal switch.

2. Unwanted components, the spectrum of which contains delta functions at 0, ±2f_c, ±4f_c, and so on, and which occupy frequency intervals of width 2W centered at 0, ±3f_c, ±5f_c, and so on, where W is the message bandwidth.

Fig. 1: Periodic pulse train

The unwanted terms are removed from the load voltage v_2(t) by means of a band-pass filter with mid-band frequency f_c and bandwidth 2W, provided that f_c > 2W. This latter condition ensures that the frequency separations between the desired AM wave and the unwanted components are large enough for the band-pass filter to suppress the unwanted components.
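The Fourier series in Eq. (4) can be checked numerically. A Python/NumPy sketch (unit period and the sample count are assumptions for illustration) computes the cosine coefficients of a 50%-duty pulse train centered on the origin:

```python
import numpy as np

N = 4096
t = np.arange(N) / N                          # one period, T0 = 1
g = ((t < 0.25) | (t >= 0.75)).astype(float)  # 50% duty pulse, centered on t = 0

def a(k):
    # cosine-series coefficient of g(t) at harmonic k
    return 2 * np.mean(g * np.cos(2 * np.pi * k * t))
```

Consistent with Eq. (4), the dc level is 1/2, the fundamental coefficient is 2/π, even harmonics vanish, and the third harmonic is −2/(3π).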


Problem 2.4

(a) The envelope detector output is

v(t) = A_c|1 + μ cos(2πf_m t)|

which is illustrated below for the case when μ = 2.

[Plot of v(t) versus t: a rectified waveform of peak value 3A_c with zeros at t = ±1/(3f_m), ±2/(3f_m), ...]

We see that v(t) is periodic with a period equal to 1/f_m, and an even function of t, and so we may express v(t) in the form:

v(t) = a_0 + 2 Σ_{n=1}^{∞} a_n cos(2πnf_m t)



a_0 = 2f_m ∫_0^{1/(2f_m)} v(t) dt
    = 2A_c f_m ∫_0^{1/(3f_m)} [1 + 2cos(2πf_m t)] dt + 2A_c f_m ∫_{1/(3f_m)}^{1/(2f_m)} [−1 − 2cos(2πf_m t)] dt
    = A_c/3 + (4A_c/π)sin(2π/3)    (1)



a_n = 2f_m ∫_0^{1/(2f_m)} v(t)cos(2πnf_m t) dt
    = 2A_c f_m ∫_0^{1/(3f_m)} [1 + 2cos(2πf_m t)]cos(2πnf_m t) dt
      + 2A_c f_m ∫_{1/(3f_m)}^{1/(2f_m)} [−1 − 2cos(2πf_m t)]cos(2πnf_m t) dt
    = (A_c/(nπ))[2sin(2πn/3) − sin(nπ)]
      + (A_c/((n + 1)π)){2sin[(2π/3)(n + 1)] − sin[π(n + 1)]}
      + (A_c/((n − 1)π)){2sin[(2π/3)(n − 1)] − sin[π(n − 1)]}    (2)

For n = 0, Eq. (2) reduces to that shown in Eq. (1).

(b) For n = 1, Eq. (2) yields

a_1 = A_c(1/3 + √3/(2π))

For n = 2, it yields

a_2 = √3 A_c/(2π)

Therefore, the ratio of second-harmonic amplitude to fundamental amplitude in v(t) is

a_2/a_1 = (√3/(2π))/(1/3 + √3/(2π)) ≈ 0.45





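The harmonic content of the over-modulated envelope can be checked numerically. A Python/NumPy sketch (A_c = 1 assumed; harmonic(n) returns the full cosine amplitude, i.e., 2a_n in the notation above, so the ratio is unaffected):

```python
import numpy as np

N = 1 << 16
theta = 2 * np.pi * np.arange(N) / N
v = np.abs(1.0 + 2.0 * np.cos(theta))     # envelope for mu = 2, A_c = 1

def harmonic(n):
    # amplitude of the n-th cosine harmonic of v
    return 2 * np.mean(v * np.cos(n * theta))

ratio = harmonic(2) / harmonic(1)
```

The computed second-to-fundamental ratio agrees with the closed-form value (√3/π)/(√3/π + 2/3) ≈ 0.45.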

Problem 2.5

(a) The demodulation of an AM wave can be accomplished using various devices; here, we describe a simple and yet highly effective device known as the envelope detector. Some version of this demodulator is used in almost all commercial AM radio receivers. For it to function properly, however, the AM wave has to be narrow-band, which requires that the carrier frequency be large compared to the message bandwidth. Moreover, the percentage modulation must be less than 100 percent.

An envelope detector of the series type is shown in Fig. P2.5, which consists of a diode and a resistor-capacitor (RC) filter. The operation of this envelope detector is as follows. On a positive half-cycle of the input signal, the diode is forward-biased and the capacitor C charges up rapidly to the peak value of the input signal. When the input signal falls below this value, the diode becomes reverse-biased and the capacitor C discharges slowly through the load resistor R_l. The discharging process continues until the next positive half-cycle. When the input signal becomes greater than the voltage across the capacitor, the diode conducts again and the process is repeated. We assume that the diode is ideal, presenting resistance r_f to current flow in the forward-biased region and infinite resistance in the reverse-biased region. We further assume that the AM wave applied to the envelope detector is supplied by a voltage source of internal impedance R_s. The charging time constant (r_f + R_s)C must be short compared with the carrier period 1/f_c, that is

(r_f + R_s)C << 1/f_c

so that the capacitor C charges rapidly and thereby follows the applied voltage up to the positive peak when the diode is conducting.

(b) The discharging time constant R_lC must be long enough to ensure that the capacitor discharges slowly through the load resistor R_l between positive peaks of the carrier wave, but not so long that the capacitor voltage will not discharge at the maximum rate of change of the modulating wave, that is

1/f_c << R_lC << 1/W


where W is the message bandwidth. The result is that the capacitor voltage or detector output is nearly the same as the envelope of the AM wave.


Problem 2.6


(a) With v_1(t) equal to the AM wave A_c[1 + k_a m(t)]cos(2πf_c t), the output of the square-law device is

v_2(t) = a_1 v_1(t) + a_2 v_1²(t)
       = a_1 A_c[1 + k_a m(t)]cos(2πf_c t)
         + (1/2)a_2 A_c²[1 + 2k_a m(t) + k_a² m²(t)][1 + cos(4πf_c t)]

(b) The desired signal, namely a_2 A_c² k_a m(t), is due to the a_2 v_1²(t) term; hence the name "square-law detection". This component can be extracted by means of a low-pass filter. It is not the only contribution within the baseband spectrum, however, because the term (1/2)a_2 A_c² k_a² m²(t) also gives rise to baseband frequency components. The ratio of wanted signal to distortion is 2/(k_a m(t)). To make this ratio large, the percentage modulation, that is, |k_a m(t)|, should be kept small compared with unity.
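The relative sizes of the wanted and distortion terms can be read directly off a spectrum. A Python/NumPy sketch (single-tone message and the values f_c = 400 Hz, f_m = 5 Hz, k_a = 0.4, a_1 = a_2 = A_c = 1 are illustrative assumptions):

```python
import numpy as np

N = 2048
t = np.arange(N) / N
fc, fm = 400, 5
Ac, ka = 1.0, 0.4
a1, a2 = 1.0, 1.0

m = np.cos(2 * np.pi * fm * t)
s = Ac * (1 + ka * m) * np.cos(2 * np.pi * fc * t)   # AM wave, applied as v1
v2 = a1 * s + a2 * s ** 2                            # square-law device

spec = np.abs(np.fft.rfft(v2)) * 2 / N
# wanted baseband term a2*Ac^2*ka*m(t)       -> amplitude a2*Ac^2*ka at fm
# distortion term (1/2)*a2*Ac^2*ka^2*m^2(t)  -> amplitude a2*Ac^2*ka^2/4 at 2*fm
```

The second-harmonic distortion sits a factor k_a/4 below the wanted tone, consistent with keeping |k_a m(t)| small.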


Problem 2.7

The squarer output is

v_1(t) = A_c²[1 + k_a m(t)]² cos²(2πf_c t)

The amplitude spectrum of v_1(t) is therefore as follows, assuming that m(t) is limited to the interval −W ≤ f ≤ W:

Since f_c > 2W, we find that 2f_c − 2W > 2W. Therefore, by choosing the cutoff frequency of the low-pass filter greater than 2W, but less than 2f_c − 2W, we obtain the output

v_2(t) = (A_c²/2)[1 + k_a m(t)]²

Hence, the square-rooter output is

v_3(t) = (A_c/√2)[1 + k_a m(t)]

which, except for the dc component A_c/√2, is proportional to the message signal m(t).


Problem 2.8

(a) For f_c = 1.25 kHz, the spectra of the message signal m(t), the product modulator output s(t), and the coherent detector output v(t) are as follows, respectively:




[Spectra M(f), S(f), and V(f) for f_c = 1.25 kHz]

(b) For the case when f_c = 0.75 kHz, the respective spectra are as follows:

[Spectra M(f), S(f), and V(f) for f_c = 0.75 kHz, showing overlap of the sidebands]




To avoid sideband overlap, the carrier frequency f_c must be greater than or equal to 1 kHz. The lowest carrier frequency is therefore 1 kHz for each sideband of the modulated wave s(t) to be uniquely determined by m(t).

Problem 2.9

The two AM modulator outputs are

s_1(t) = A_c[1 + k_a m(t)]cos(2πf_c t)
s_2(t) = A_c[1 − k_a m(t)]cos(2πf_c t)

Subtracting s_2(t) from s_1(t):

s(t) = s_1(t) − s_2(t)
     = 2k_a A_c m(t)cos(2πf_c t)

which represents a DSB-SC modulated wave.


Problem 2.10

(a) Multiplying the incoming DSB-SC wave m(t)cos[2π(f_c + Δf)t] by the local oscillator output cos(2πf_c t) gives:

v(t) = m(t)cos[2π(f_c + Δf)t]cos(2πf_c t)
     = (1/2)m(t){cos(2πΔf t) + cos[2π(2f_c + Δf)t]}

Low-pass filtering leaves:

v_0(t) = (1/2)m(t)cos(2πΔf t)

Thus the output signal is the message signal modulated by a sinusoid of frequency Δf.

If m(t) = cos(2πf_m t), then

v_0(t) = (1/2)cos(2πf_m t)cos(2πΔf t) = (1/4){cos[2π(f_m + Δf)t] + cos[2π(f_m − Δf)t]}
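The beating at Δf can be demonstrated directly. A Python/NumPy sketch (all frequencies are illustrative assumptions; an ideal brick-wall low-pass filter is implemented by zeroing FFT bins):

```python
import numpy as np

N = 4096
t = np.arange(N) / N
fc, df, fm = 512, 8, 32      # assumed carrier, frequency error, message tone (Hz)

m = np.cos(2 * np.pi * fm * t)
s = m * np.cos(2 * np.pi * fc * t)            # DSB-SC wave
v = s * np.cos(2 * np.pi * (fc + df) * t)     # detuned local oscillator product

# ideal low-pass filter: zero every bin above 100 Hz
V = np.fft.rfft(v)
V[101:] = 0
v0 = np.fft.irfft(V, N)

expected = 0.5 * m * np.cos(2 * np.pi * df * t)   # message beating at df
```

The filtered output matches (1/2)m(t)cos(2πΔf t) to machine precision, i.e., the recovered message is amplitude-modulated by the frequency error.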

Problem 2.11


(a) The multiplier output is

y(t) = A_c² m²(t)cos²(2πf_c t)
     = (A_c²/2)[1 + cos(4πf_c t)]m²(t)

Therefore, the spectrum of the multiplier output is

Y(f) = (A_c²/2) ∫_{−∞}^{∞} M(λ)M(f − λ) dλ
       + (A_c²/4)[∫_{−∞}^{∞} M(λ)M(f − 2f_c − λ) dλ + ∫_{−∞}^{∞} M(λ)M(f + 2f_c − λ) dλ]

where M(f) = F[m(t)].

(b) At f = 2f_c, we have

Y(2f_c) = (A_c²/2) ∫_{−∞}^{∞} M(λ)M(2f_c − λ) dλ
          + (A_c²/4)[∫_{−∞}^{∞} M(λ)M(−λ) dλ + ∫_{−∞}^{∞} M(λ)M(4f_c − λ) dλ]    (1)

Since M(−λ) = M*(λ), we may write

Y(2f_c) = (A_c²/2) ∫_{−∞}^{∞} M(λ)M(2f_c − λ) dλ
          + (A_c²/4)[∫_{−∞}^{∞} |M(λ)|² dλ + ∫_{−∞}^{∞} M(λ)M(4f_c − λ) dλ]




With m(t) limited to −W < f < W and f_c > W, we find that the first and third integrals reduce to zero, and so we may simplify Eq. (1) as follows:

Y(2f_c) = (A_c²/4)E

where E is the signal energy (by Rayleigh's energy theorem). Similarly, we find that

Y(−2f_c) = (A_c²/4)E

The band-pass filter output, in the frequency domain, is therefore defined by

V(f) ≈ (A_c²/4)E Δf [δ(f − 2f_c) + δ(f + 2f_c)]

The corresponding output signal is

v(t) ≈ (A_c²/2)E Δf cos(4πf_c t)

Problem 2.12

The multiplexed signal is

s(t) = A_c m_1(t)cos(2πf_c t) + A_c m_2(t)sin(2πf_c t)

where M_1(f) = F[m_1(t)] and M_2(f) = F[m_2(t)]. The spectrum of the received signal is

R(f) = H(f)S(f)
     = (A_c/2)H(f)[M_1(f − f_c) + M_1(f + f_c) + (1/j)M_2(f − f_c) − (1/j)M_2(f + f_c)]

To recover m_1(t), we multiply r(t), the inverse Fourier transform of R(f), by cos(2πf_c t) and then pass the resulting output through a low-pass filter, producing a signal with the following spectrum:

V_1(f) = (A_c/4)[H(f − f_c) + H(f + f_c)]M_1(f) + (A_c/4j)[H(f + f_c) − H(f − f_c)]M_2(f)    (1)

The condition H(f_c + f) = H*(f_c − f) is equivalent to H(f + f_c) = H(f − f_c); this follows from the fact that for a real-valued impulse response h(t), we have H(−f) = H*(f). Hence, substituting this condition in Eq. (1), we find that the low-pass filter output has a spectrum equal to (A_c/2)H(f − f_c)M_1(f).

Similarly, to recover m_2(t), we multiply r(t) by sin(2πf_c t) and then pass the resulting signal through a low-pass filter. In this case, we get an output with a spectrum equal to (A_c/2)H(f − f_c)M_2(f).



Problem 2.13

When the local carriers have a phase error φ, we may write

cos(2πf_c t + φ) = cos(2πf_c t)cos φ − sin(2πf_c t)sin φ

In this case, we find that by multiplying the received signal r(t) by cos(2πf_c t + φ) and passing the resulting output through a low-pass filter, the corresponding low-pass filter output in the receiver has a spectrum equal to (A_c/2)H(f − f_c)[cos φ M_1(f) − sin φ M_2(f)]. This indicates that there is cross-talk at the demodulator outputs.


Problem 2.14

The transmitted signal is given by

(a) The envelope detection of s(t) yields

To minimize the distortion in the envelope detector output due to the quadrature component, we choose the DC offset V_0 to be large. We may then approximate v_1(t) as

v_1(t) ≈ A_c[V_0 + m_l(t) + m_r(t)]

which, except for the DC component A_c V_0, is proportional to the sum m_l(t) + m_r(t).

(b) For coherent detection at the receiver, we need a replica of the carrier A_c cos(2πf_c t). This requirement can be satisfied by passing the received signal s(t) through a narrow-band filter of mid-band frequency f_c. However, to extract the difference m_l(t) − m_r(t), we need sin(2πf_c t), which is obtained by passing the narrow-band filter output through a 90°-phase shifter. Then, multiplying s(t) by sin(2πf_c t) and low-pass filtering, we obtain a signal proportional to m_l(t) − m_r(t).

(c) To recover the original loudspeaker signals m_l(t) and m_r(t), we proceed as follows:

Equalize the outputs of the envelope detector and coherent detector.

Pass the equalized outputs through an audio demixer to produce m_l(t) and m_r(t).


Problem 2.15

To ensure 50 percent modulation, k_a = 1, in which case we get

(a) s(t) = A_c[1 + 1/(1 + t²)]cos(2πf_c t)

(b) s(t) = A_c m(t)cos(2πf_c t)


(c) s(t) = (A_c/2)[m(t)cos(2πf_c t) − m̂(t)sin(2πf_c t)]

(d) s(t) = (A_c/2)[(1/(1 + t²))cos(2πf_c t) + (t/(1 + t²))sin(2πf_c t)]

As an aid to the sketching of the modulated signals in (c) and (d), the envelope of either SSB wave is

a(t) = (A_c/2)[m²(t) + m̂²(t)]^{1/2} = (A_c/2)(1 + t²)^{−1/2}

Plots of the modulated signals in (a) to (d) are presented in Fig. 1 on the next page.


[Figure 1: waveforms of the modulated signals in (a) to (d), plotted over −10 ≤ t ≤ 10]


Problem 2.16

Consider first the modulated signal

s(t) = (1/2)m(t)cos(2πf_c t) − (1/2)m̂(t)sin(2πf_c t)    (1)


Let S(f) = F[s(t)], M(f) = F[m(t)], and M̂(f) = F[m̂(t)], where m̂(t) is the Hilbert transform of the message signal m(t). Then, applying the Fourier transform to Eq. (1), we obtain

S(f) = (1/4)[M(f − f_c) + M(f + f_c)] − (1/4j)[M̂(f − f_c) − M̂(f + f_c)]    (2)


From the definition of the Hilbert transform, we have

M̂(f) = −j sgn(f)M(f)

where sgn(f) is the signum function. Equivalently, we may write

−(1/j)M̂(f − f_c) = sgn(f − f_c)M(f − f_c)
−(1/j)M̂(f + f_c) = sgn(f + f_c)M(f + f_c)

(i) From the definition of the signum function, we note the following for f > 0 and f > f_c:

sgn(f − f_c) = sgn(f + f_c) = +1

Correspondingly, Eq. (2) reduces to

S(f) = (1/4)[M(f − f_c) + M(f + f_c)] + (1/4)[M(f − f_c) − M(f + f_c)]
     = (1/2)M(f − f_c)

In words, we may thus state that, except for a scaling factor, the spectrum of the modulated signal s(t) defined in Eq. (1) is the same as that of the DSB-SC modulated signal for f > f_c.

(ii) For f > 0 and f < f_c, we have

sgn(f − f_c) = −1,   sgn(f + f_c) = +1

Correspondingly, Eq. (2) reduces to

S(f) = (1/4)[M(f − f_c) + M(f + f_c)] − (1/4)[M(f − f_c) + M(f + f_c)]
     = 0

In words, we may now state that for f < f_c, the modulated signal s(t) defined in Eq. (1) is zero.

Combining the results for parts (i) and (ii), the modulated signal s(t) of Eq. (1) represents a single sideband modulated signal containing only the upper sideband. This result was derived for f > 0; it also holds for f < 0, the proof for which is left as an exercise for the reader.

Following a procedure similar to that described above, we may show that the modulated signal

s(t) = (1/2)m(t)cos(2πf_c t) + (1/2)m̂(t)sin(2πf_c t)


represents a single sideband modulated signal containing only the lower sideband.
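The sideband cancellation derived above can be confirmed numerically. A Python/NumPy sketch (single-tone message assumed; the Hilbert transform is implemented in the frequency domain per M̂(f) = −j sgn(f)M(f)):

```python
import numpy as np

def hilbert_ft(x):
    # Hilbert transform via the FFT: multiply positive frequencies
    # by -j and negative frequencies by +j (len(x) assumed even).
    N = len(x)
    X = np.fft.fft(x)
    sgn = np.zeros(N)
    sgn[1:N // 2] = 1.0
    sgn[N // 2 + 1:] = -1.0
    return np.fft.ifft(-1j * sgn * X).real

N = 1024
t = np.arange(N) / N
fm, fc = 10, 100                       # assumed message tone and carrier (Hz)
m = np.cos(2 * np.pi * fm * t)
mh = hilbert_ft(m)                     # Hilbert transform of the message

s_usb = 0.5 * m * np.cos(2 * np.pi * fc * t) - 0.5 * mh * np.sin(2 * np.pi * fc * t)
s_lsb = 0.5 * m * np.cos(2 * np.pi * fc * t) + 0.5 * mh * np.sin(2 * np.pi * fc * t)

usb = np.abs(np.fft.rfft(s_usb)) * 2 / N
lsb = np.abs(np.fft.rfft(s_lsb)) * 2 / N
```

The minus-sign combination leaves energy only at f_c + f_m (upper sideband), and the plus-sign combination only at f_c − f_m (lower sideband), exactly as parts (i) and (ii) predict.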


Problem 2.17

An error Δf in the frequency of the local oscillator used in the demodulation of an SSB signal, measured with respect to the carrier frequency f_c, gives rise to distortion in the demodulated signal. Let the local oscillator output be denoted by A_c′ cos[2π(f_c + Δf)t]. The resulting demodulated signal is given by (for the case when the upper sideband only is transmitted)

v_o(t) = (1/4)A_c A_c′[m(t)cos(2πΔf t) + m̂(t)sin(2πΔf t)]

This demodulated signal represents an SSB wave corresponding to a carrier frequency Δf.

The effect of frequency error Δf in the local oscillator may be interpreted as follows:

(a) If the SSB wave s(t) contains the upper sideband and the frequency error Δf is positive, or equivalently if s(t) contains the lower sideband and Δf is negative, then the frequency components of the demodulated signal v_o(t) are shifted inward by the amount Δf compared with the baseband signal m(t), as illustrated in Fig. 1(b).

(b) If the incoming SSB wave s(t) contains the lower sideband and the frequency error Δf is positive, or equivalently if s(t) contains the upper sideband and Δf is negative, then the frequency components of the demodulated signal v_o(t) are shifted outward by the amount Δf compared with the baseband signal m(t). This is illustrated in Fig. 1(c) for the case of a baseband signal (e.g., voice signal) with an energy gap occupying the interval −f_a < f < f_a, shown in part (a) of the figure.



[Fig. 1: (a) baseband spectrum with energy gap −f_a < f < f_a; (b) demodulated spectrum with components shifted inward by Δf; (c) demodulated spectrum with components shifted outward by Δf]





Problem 2.18

(a) The spectrum of the message signal is illustrated below:

[sketch of M(f)]

Correspondingly, the output of the upper first product modulator has the following spectrum:

[sketch of (1/2)M(f − f_a) + (1/2)M(f + f_a)]

The output of the lower first product modulator has the spectrum:

The output of the upper low-pass filter has the spectrum:

[sketch]

The output of the lower low-pass filter has the spectrum:

[sketch of (1/2j)M(f − f_b) term and its image]

The output of the upper second product modulator has the spectrum:

[sketch]

The output of the lower second product modulator has the spectrum:


Adding the two second product modulator outputs, their upper sidebands add constructively while their lower sidebands cancel each other.

(c) To modify the modulator to transmit only the lower sideband, a single sign change is required in one of the channels. For example, the lower first product modulator could multiply the message signal by −sin(2πf_0 t). Then, the upper sideband would be cancelled and the lower one transmitted.


Problem 2.19


[Block diagram: m(t) → product modulator → v_1(t) → high-pass filter → v_2(t) → product modulator → v_3(t) → filter → s(t), with cos(2πf_c t) applied to the first product modulator and cos[2π(f_c + f_b)t] applied to the second]

(a) The first product modulator output is

v_1(t) = m(t)cos(2πf_c t)

The second product modulator output is

v_3(t) = v_2(t)cos[2π(f_c + f_b)t]

The amplitude spectra of m(t), v_1(t), v_2(t), v_3(t), and s(t) are illustrated on the next page.

We may express the voice signal m(t) as

m(t) = (1/2)[m_+(t) + m_−(t)]

where m_+(t) is the pre-envelope of m(t), and m_−(t) = m_+*(t) is its complex conjugate. The Fourier transforms of m_+(t) and m_−(t) are defined by (see Appendix 2)

M_+(f) = { 2M(f), f > 0
         { 0,     f < 0

M_−(f) = { 0,     f > 0
         { 2M(f), f < 0

Comparing the spectrum of s(t) with that of m(t), we see that s(t) may be expressed in terms of m_+(t) and m_−(t) as follows:

s(t) = (1/4)m_+(t)exp(−j2πf_b t) + (1/4)m_−(t)exp(j2πf_b t)
     = (1/4)[m(t) + jm̂(t)]exp(−j2πf_b t) + (1/4)[m(t) − jm̂(t)]exp(j2πf_b t)
     = (1/2)m(t)cos(2πf_b t) + (1/2)m̂(t)sin(2πf_b t)

(b) With s(t) as input, the first product modulator output is

v_1(t) = s(t)cos(2πf_c t)



[Amplitude spectra |M(f)|, |V_1(f)|, |V_2(f)|, |V_3(f)|, and |S(f)|]


Problem 2.20

(a) Consider the system described in Fig. la, where u(t) denotes the product modulator output, as shown by

[Figure 1(a): message signal m(t) → product modulator → u(t) → band-pass filter → modulated signal s(t)]

[Figure 1(b): modulated signal s(t) → product modulator → v(t) → low-pass filter → demodulated signal v_o(t)]


Figure 1: (a) Filtering scheme for processing sidebands. (b) Coherent detector for recovering the message signal.

Let H(j) denote the transfer function of the filter following the product modulator. The spectrum of the modulated signal set) produced by passing u(t) through the filter is given by

S(f) = U(f)H(f)


= (A_c/2)[M(f − f_c) + M(f + f_c)]H(f)    (1)


where M(j) is the Fourier transform of the message signal met). The problem we wish to address is to determine the particular H(j) required to produce a modulated signal set) with desired spectral characteristics, such that the original message signal met) may be recovered from set) by coherent detection.

The first step in the coherent detection process involves multiplying the modulated signal s(t) by a locally generated sinusoidal wave A_c′ cos(2πf_c t), which is synchronous with the carrier wave A_c cos(2πf_c t) in both frequency and phase, as in Fig. 1b. We may thus write


v(t) = A_c′ cos(2πf_c t)s(t)

Transforming this relation into the frequency domain gives the Fourier transform of vet) as


V(f) = (A_c′/2)[S(f − f_c) + S(f + f_c)]    (2)


Therefore, substitution of Eq. (1) in (2) yields

V(f) = (A_c A_c′/4)M(f)[H(f − f_c) + H(f + f_c)]
       + (A_c A_c′/4)[M(f − 2f_c)H(f − f_c) + M(f + 2f_c)H(f + f_c)]    (3)


(b) The high-frequency components of vet) represented by the second term in Eq. (3) are removed by the low-pass filter in Fig. 1 b to produce an output v oCt), the spectrum of which is given by the remaining components:

V_o(f) = (A_c A_c′/4)M(f)[H(f − f_c) + H(f + f_c)]    (4)


For a distortionless reproduction of the original baseband signal met) at the coherent detector output, we require Vo(f) to be a scaled version of M(f). This means, therefore, that the transfer

function H(f) must satisfy the condition

H(f − f_c) + H(f + f_c) = 2H(f_c)    (5)


where H(f_c), the value of H(f) at f = f_c, is a constant. When the message (baseband) spectrum M(f) is zero outside the frequency range −W ≤ f ≤ W, we need only satisfy Eq. (5) for values of f in this interval. Also, to simplify the exposition, we set H(f_c) = 1/2. We thus require that H(f) satisfies the condition:

H(f − f_c) + H(f + f_c) = 1,   −W ≤ f ≤ W    (6)

Under the condition described in Eq. (6), we find from Eq. (4) that the coherent detector output in Fig. 1b is given by

v_o(t) = (A_c A_c′/4)m(t)    (7)



Equation (1) defines the spectrum of the modulated signal s(t). Recognizing that s(t) is a band-pass signal, we may formulate its time-domain description in terms of in-phase and quadrature components. In particular, s(t) may be expressed in the canonical form

s(t) = s_I(t)cos(2πf_c t) − s_Q(t)sin(2πf_c t)    (8)

where s_I(t) is the in-phase component of s(t), and s_Q(t) is its quadrature component. To determine s_I(t), we note that its Fourier transform is related to the Fourier transform of s(t) as follows:

S_I(f) = { S(f − f_c) + S(f + f_c),  −W ≤ f ≤ W
         { 0,                        elsewhere    (9)

Hence, substituting Eq. (1) in (9), we find that the Fourier transform of s_I(t) is given by

S_I(f) = (A_c/2)M(f)[H(f − f_c) + H(f + f_c)]
       = (A_c/2)M(f)    (10)

where, in the second line, we have made use of the condition in Eq. (6) imposed on H(f). From Eq. (10) we readily see that the in-phase component of the modulated signal s(t) is defined by

s_I(t) = (A_c/2)m(t)    (11)

which, except for a scaling factor, is the same as the original message signal m(t).

To determine the quadrature component s_Q(t) of the modulated signal s(t), we recognize that its Fourier transform is defined in terms of the Fourier transform of s(t) as follows:

S_Q(f) = { j[S(f − f_c) − S(f + f_c)],  −W ≤ f ≤ W
         { 0,                          elsewhere    (12)

Therefore, substituting Eq. (1) in (12), we get

S_Q(f) = (jA_c/2)M(f)[H(f − f_c) − H(f + f_c)]    (13)

This equation suggests that we may generate sQ(t), except for a scaling factor, by passing the message signal met) through a new filter whose transfer function is related to that of the filter in Fig. 1a as follows:

H_Q(f) = j[H(f − f_c) − H(f + f_c)]    (14)



Let m′(t) denote the output of this filter produced in response to the input m(t). Hence, we may express the quadrature component of the modulated signal s(t) as

s_Q(t) = (A_c/2)m′(t)    (15)

Accordingly, substituting Eqs. (11) and (15) in (8), we find that s(t) may be written in the canonical form

s(t) = (A_c/2)m(t)cos(2πf_c t) − (A_c/2)m′(t)sin(2πf_c t)    (16)

There are two important points to note here:

1. The in-phase component s_I(t) is completely independent of the transfer function H(f) of the band-pass filter involved in the generation of the modulated wave s(t) in Fig. 1a, so long as it satisfies the condition of Eq. (6).

2. The spectral modification attributed to the transfer function H(f) is confined solely to the quadrature component s_Q(t).

The role of the quadrature component is merely to interfere with the in-phase component, so as to reduce or eliminate power in one of the sidebands of the modulated signal set), depending on the application of interest.


Problem 2.21

(a) Expanding s(t), we get

s(t) = (a/2)A_m A_c cos(2πf_c t)cos(2πf_m t)
     − (a/2)A_m A_c sin(2πf_c t)sin(2πf_m t)
     + ((1 − a)/2)A_m A_c cos(2πf_c t)cos(2πf_m t)
     + ((1 − a)/2)A_m A_c sin(2πf_c t)sin(2πf_m t)
     = (1/2)A_m A_c cos(2πf_c t)cos(2πf_m t)
       + (1/2)A_m A_c (1 − 2a)sin(2πf_c t)sin(2πf_m t)

Therefore, the quadrature component is:

s_Q(t) = −(1/2)A_m A_c (1 − 2a)sin(2πf_m t)

(b) After adding the carrier, the signal will be:

s(t) = A_c[1 + (A_m/2)cos(2πf_m t)]cos(2πf_c t) + (A_m A_c/2)(1 − 2a)sin(2πf_m t)sin(2πf_c t)

The envelope equals

a(t) = A_c[1 + (A_m/2)cos(2πf_m t)] d(t)

where d(t) is the distortion, defined by

d(t) = {1 + [(A_m/2)(1 − 2a)sin(2πf_m t)/(1 + (A_m/2)cos(2πf_m t))]²}^{1/2}

(c) d(t) is greatest when a = 0.


Problem 2.22

Consider an incoming narrow-band signal of bandwidth 10 kHz, and mid-band frequency which may lie in the range 0.535-1.605 M Hz. It is required to translate this signal to a fixed frequency band centered at 0.455 MHz. The problem is to determine the range of tuning that must be provided in the local oscillator.

Let f_c denote the mid-band frequency of the incoming signal, and f_l denote the local oscillator frequency. Then we may write

0.535 ≤ f_c ≤ 1.605,   f_l = f_c − 0.455

where both f_c and f_l are expressed in MHz. When f_c = 0.535 MHz, we get f_l = 0.08 MHz; and when f_c = 1.605 MHz, we get f_l = 1.15 MHz. Thus the required range of tuning of the local oscillator is 0.08-1.15 MHz.
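The endpoint arithmetic is trivial to confirm. A plain-Python check (low-side injection f_l = f_c − f_IF is assumed, as in the solution above):

```python
f_if = 0.455                 # fixed intermediate frequency, MHz
band = (0.535, 1.605)        # incoming mid-band frequency range, MHz

# local oscillator must track f_l = f_c - f_if across the band
f_l_low = band[0] - f_if
f_l_high = band[1] - f_if
```

This reproduces the 0.08-1.15 MHz tuning range quoted above.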


Problem 2.23

Let s(t) denote the multiplier output, as shown by

s(t) = A g(t)cos(2πf_c t)

where f_c lies in the range f_0 to f_0 + W. The amplitude spectra of s(t) and g(t) are related as follows:

[Sketches of |G(f)|, with peak value |G(0)| at f = 0, and |S(f)|, with peak value (A/2)|G(0)| at f = ±f_c]


With v(t) denoting the band-pass filter output, we thus find that the Fourier transform of v(t) is approximately given by

V(f) ≈ (1/2)A G(f_c − f_0),   f_0 − Δf/2 ≤ |f| ≤ f_0 + Δf/2

The rms meter output is therefore (by using Rayleigh's energy theorem)

V_rms = [∫_{−∞}^{∞} v²(t) dt]^{1/2}
      = [∫_{−∞}^{∞} |V(f)|² df]^{1/2}
      = [(A²/2)|G(f_c − f_0)|² Δf]^{1/2}

Problem 2.24

For the PM case,

s(t) = A_c cos[2πf_c t + k_p m(t)]

The angle equals

θ_i(t) = 2πf_c t + k_p m(t)

The instantaneous frequency,

f_i(t) = f_c + (A k_p)/(2πT_0) − (A k_p/2π) Σ_n δ(t − nT_0)

is equal to f_c + A k_p/(2πT_0) except at the instants that the message drops. At these instants, the phase shifts by −k_p A radians.

For the FM case,

f_i(t) = f_c + k_f m(t)


Problem 2.25


The instantaneous frequency at the mixer output is as shown below:


The presence of negative frequency merely indicates that the phasor representing the difference frequency at the mixer output has reversed its direction of rotation.

Let N denote the number of beat cycles in one period. Then, noting that N is equal to the shaded area shown above, we deduce that

Since f_0 τ << 1, we have
Therefore, the number of beat cycles counted over one second is equal to


Problem 2.26

The instantaneous frequency of the modulated wave s(t) is as shown below:

[Plot of f_i(t): equal to f_c + Δf for −T/2 ≤ t ≤ T/2, and equal to f_c elsewhere]

We may thus express s(t) as follows:

s(t) = { cos(2πf_c t),          t < −T/2
       { cos[2π(f_c + Δf)t],   −T/2 ≤ t ≤ T/2
       { cos(2πf_c t),          t > T/2

The Fourier transform of s(t) is therefore

S(f) = ∫_{−∞}^{−T/2} cos(2πf_c t)exp(−j2πft) dt
       + ∫_{−T/2}^{T/2} cos[2π(f_c + Δf)t]exp(−j2πft) dt
       + ∫_{T/2}^{∞} cos(2πf_c t)exp(−j2πft) dt

     = ∫_{−∞}^{∞} cos(2πf_c t)exp(−j2πft) dt
       + ∫_{−T/2}^{T/2} {cos[2π(f_c + Δf)t] − cos(2πf_c t)}exp(−j2πft) dt    (1)

The second term of Eq. (1) is recognized as the difference between the Fourier transforms of two RF pulses of unit amplitude, one having a frequency equal to f_c + Δf and the other having a frequency equal to f_c. Hence, assuming that f_c T >> 1, we may express S(f) as follows:

S(f) ≈ (1/2)δ(f − f_c) + (T/2)sinc[T(f − f_c − Δf)] − (T/2)sinc[T(f − f_c)],   f > 0

Problem 2.27

For SSB modulation, the modulated wave is

s(t) = (A_c/2)[m(t)cos(2πf_c t) ∓ m̂(t)sin(2πf_c t)]

the minus sign applying when transmitting the upper sideband and the plus sign applying when transmitting the lower one. Regardless of the sign, the envelope is

a(t) = (A_c/2)[m²(t) + m̂²(t)]^{1/2}

(a) For upper sideband transmission, the angle is

θ_i(t) = 2πf_c t + tan⁻¹[m̂(t)/m(t)]

The instantaneous frequency is

f_i(t) = (1/2π)(dθ_i/dt)
       = f_c + (1/2π)·[m(t)m̂′(t) − m̂(t)m′(t)]/[m²(t) + m̂²(t)]

where the prime denotes differentiation with respect to time.

(b) For lower sideband transmission, we have

f_i(t) = f_c − (1/2π)·[m(t)m̂′(t) − m̂(t)m′(t)]/[m²(t) + m̂²(t)]

Problem 2.28


(a) The envelope of the FM wave s(t) is

a(t) = A_c[1 + β² sin²(2πf_m t)]^{1/2}

The maximum value of the envelope is

a_max = A_c(1 + β²)^{1/2}

and its minimum value is

a_min = A_c

Therefore,

a_max/a_min = (1 + β²)^{1/2}

This ratio is shown plotted below for 0 ≤ β ≤ 0.3:












(b) Expressing s(t) in terms of its frequency components:

s(t) = A_c cos(2πf_c t) + (βA_c/2)cos[2π(f_c + f_m)t] − (βA_c/2)cos[2π(f_c − f_m)t]

The mean power of s(t) is therefore

P = (A_c²/2)(1 + β²/2)

The mean power of the unmodulated carrier is

P_c = A_c²/2

The ratio P/P_c = 1 + β²/2 is shown plotted below for 0 ≤ β ≤ 0.3:










(c) The angle θ_i(t), expressed in terms of the in-phase component s_I(t) and the quadrature component s_Q(t), is:

θ_i(t) = 2πf_c t + tan⁻¹[s_Q(t)/s_I(t)]
       = 2πf_c t + tan⁻¹[β sin(2πf_m t)]

Since tan⁻¹(x) = x − x³/3 + ..., we have

θ_i(t) ≈ 2πf_c t + (β − β³/4)sin(2πf_m t) + (β³/12)sin(6πf_m t)

The harmonic distortion is the power ratio of the third and first harmonics:

D = [(β³/12)/(β − β³/4)]² ≈ β⁴/144

Problem 2.29

(a) The phase-modulated wave is

s(t) = A_c cos[2πf_c t + k_p A_m cos(2πf_m t)]
     = A_c cos[2πf_c t + β_p cos(2πf_m t)]
     = A_c cos(2πf_c t)cos[β_p cos(2πf_m t)] − A_c sin(2πf_c t)sin[β_p cos(2πf_m t)]    (1)


If β_p ≤ 0.5, then

cos[β_p cos(2πf_m t)] ≈ 1
sin[β_p cos(2πf_m t)] ≈ β_p cos(2πf_m t)

Hence, we may rewrite Eq. (1) as

s(t) ≈ A_c cos(2πf_c t) − β_p A_c sin(2πf_c t)cos(2πf_m t)
     = A_c cos(2πf_c t) − (1/2)β_p A_c sin[2π(f_c + f_m)t] − (1/2)β_p A_c sin[2π(f_c − f_m)t]    (2)

.The.spectrum of set) is therefore


= -21 A [0 (f-f ) + 0 (f+f )]

c c c

(b) The phasor diagram for s(t) is deduced from Eq. (2) to be as follows:

[Phasor diagram: a carrier phasor Ac, with upper and lower side-frequency phasors of length βpAc/2 rotating at angular rates ±2πfm relative to the carrier; their resultant lies in quadrature with the carrier.]

The corresponding phasor diagram for the narrow-band FM wave is as follows:

[Phasor diagram: a carrier phasor Ac with upper and lower side-frequency phasors whose resultant also lies in quadrature with the carrier.]

Comparing these two phasor diagrams, we see that, except for a phase difference, the narrow-band PM and FM waves are of exactly the same form.
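A quick numerical check of the narrow-band approximation in Eq. (2): for βp = 0.1 the worst-case error is on the order of Ac·βp²/2 (the parameter values below are illustrative).

```python
import numpy as np

# Compare the exact PM wave with its narrow-band (carrier plus two
# side-frequency) approximation of Eq. (2).
Ac, fc, fm, beta_p = 1.0, 50.0, 2.0, 0.1
t = np.linspace(0.0, 1.0, 200_001)

exact = Ac * np.cos(2 * np.pi * fc * t + beta_p * np.cos(2 * np.pi * fm * t))
approx = (Ac * np.cos(2 * np.pi * fc * t)
          - beta_p * Ac * np.sin(2 * np.pi * fc * t) * np.cos(2 * np.pi * fm * t))

print(np.max(np.abs(exact - approx)))   # on the order of Ac*beta_p**2/2
```

The error comes from replacing cos[βp cos(2πfmt)] by 1, and shrinks quadratically as βp decreases.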

Problem 2.30

The phase-modulated wave is

s(t) = Ac cos[2πfct + βp cos(2πfmt)]

The complex envelope of s(t) is

s̃(t) = Ac exp[jβp cos(2πfmt)]

Expressing s̃(t) in the form of a complex Fourier series, we have

s̃(t) = Σ_{n=-∞}^{∞} cn exp(j2πnfmt)

where

cn = fm ∫_{-1/(2fm)}^{1/(2fm)} s̃(t) exp(-j2πnfmt) dt

   = Ac fm ∫_{-1/(2fm)}^{1/(2fm)} exp[jβp cos(2πfmt) - j2πnfmt] dt    (1)

Let 2πfmt = π/2 - φ.

Then, we may rewrite Eq. (1) as

cn = -(Ac/2π) exp(-jnπ/2) ∫_{3π/2}^{-π/2} exp[jβp sin(φ) + jnφ] dφ

The integrand is periodic with respect to φ, with a period of 2π. Hence, we may rewrite this expression as

cn = (Ac/2π) exp(-jnπ/2) ∫_{-π}^{π} exp[jβp sin(φ) + jnφ] dφ

However, from the definition of the Bessel function of the first kind of order n,

Jn(x) = (1/2π) ∫_{-π}^{π} exp(jx sinφ - jnφ) dφ

it follows that

cn = Ac exp(-jnπ/2) J_{-n}(βp)


We may thus express the PM wave s(t) as

s(t) = Re[s̃(t) exp(j2πfct)]

     = Ac Re[ Σ_{n=-∞}^{∞} J_{-n}(βp) exp(-jnπ/2) exp(j2πnfmt) exp(j2πfct) ]

     = Ac Σ_{n=-∞}^{∞} J_{-n}(βp) cos[2π(fc + nfm)t - nπ/2]

The band-pass filter only passes the carrier, the first upper side-frequency, and the first lower side-frequency, so that the resulting output is

so(t) = Ac J0(βp) cos(2πfct) + Ac J_{-1}(βp) cos[2π(fc + fm)t - π/2]
        + Ac J1(βp) cos[2π(fc - fm)t + π/2]

      = Ac J0(βp) cos(2πfct) + Ac J_{-1}(βp) sin[2π(fc + fm)t] - Ac J1(βp) sin[2π(fc - fm)t]

Since J_{-1}(βp) = -J1(βp), this becomes

so(t) = Ac J0(βp) cos(2πfct) - Ac J1(βp){sin[2π(fc + fm)t] + sin[2π(fc - fm)t]}

      = Ac J0(βp) cos(2πfct) - 2Ac J1(βp) cos(2πfmt) sin(2πfct)

Writing so(t) = a(t) cos[2πfct + φ(t)], the envelope of so(t) equals

a(t) = Ac[J0²(βp) + 4J1²(βp) cos²(2πfmt)]^(1/2)

The phase of so(t) is

φ(t) = tan⁻¹[ (2J1(βp)/J0(βp)) cos(2πfmt) ]

The instantaneous frequency of so(t) is

fi(t) = fc + (1/2π) dφ(t)/dt
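The Fourier coefficients derived above can be confirmed by evaluating the coefficient integral numerically and comparing with Ac exp(-jnπ/2) J_{-n}(βp); this is a Python sketch using scipy, with βp = 2 as an arbitrary test value.

```python
import numpy as np
from scipy.special import jv

# Evaluate cn = fm * integral of env(t)*exp(-j*2*pi*n*fm*t) over one period
# of the complex envelope, and compare with Ac*exp(-j*n*pi/2)*J_{-n}(beta_p).
Ac, fm, beta_p = 1.0, 1.0, 2.0
t = np.linspace(-0.5 / fm, 0.5 / fm, 200_001)
dt = t[1] - t[0]
env = Ac * np.exp(1j * beta_p * np.cos(2 * np.pi * fm * t))

errs = []
for n in (-2, -1, 0, 1, 2):
    cn = fm * np.sum(env * np.exp(-2j * np.pi * n * fm * t)) * dt
    predicted = Ac * np.exp(-1j * n * np.pi / 2) * jv(-n, beta_p)
    errs.append(abs(cn - predicted))
print(max(errs))     # residual is numerical-integration error only
```

Because the integrand is smooth and periodic, the uniform Riemann sum converges very quickly, so the agreement is essentially to machine precision apart from the endpoint sample.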

Problem 2.31

(a) From Table A4.1, we find (by interpolation) that J0(β) is zero for

β = 2.405,
β = 5.520,
β = 8.654,
β = 11.792,
and so on.

(b) The modulation index is

β = Δf/fm = kfAm/fm

Since J0(β) = 0 for the first time when β = 2.405, we deduce that

kf = 2.405 × 10³/2 ≈ 1.2 × 10³ hertz/volt

Next, we note that J0(β) = 0 for the second time when β = 5.52. Hence, the corresponding value of Am for which the carrier component is reduced to zero is

Am = βfm/kf = 5.52 × 10³/(1.2 × 10³) ≈ 4.59 volts
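Modern libraries return the zeros of J0 directly, so the table interpolation can be bypassed. The Python sketch below recomputes kf and Am from the exact zeros (2.405, 5.520, ...), assuming fm = 1 kHz and Am = 2 V as implied by the solution's arithmetic.

```python
from scipy.special import jn_zeros

# Zeros of J0 (beta = 2.405, 5.520, 8.654, 11.792, ...), then kf from the
# first carrier null and Am for the second null.  fm = 1 kHz and Am = 2 V
# are assumed from the arithmetic of the solution above.
beta_zeros = jn_zeros(0, 4)
fm, Am = 1.0e3, 2.0
kf = beta_zeros[0] * fm / Am         # hertz/volt
Am2 = beta_zeros[1] * fm / kf        # volts, carrier null the second time
print(kf, Am2)
```

The computed kf is about 1.2 × 10³ Hz/V and the second-null amplitude about 4.59 V.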

Problem 2.32

For β = 1, we have

J0(1) = 0.765
J1(1) = 0.440
J2(1) = 0.115

Therefore, the band-pass filter output is (assuming a carrier amplitude of 1 volt)

s(t) = 0.765 cos(2πfct)
       + 0.44 {cos[2π(fc + fm)t] - cos[2π(fc - fm)t]}
       + 0.115 {cos[2π(fc + 2fm)t] + cos[2π(fc - 2fm)t]}

and the amplitude spectrum (for positive frequencies) is



[Amplitude spectrum for f > 0: lines of weight 0.3825 at fc, 0.22 at fc ± fm, and 0.0575 at fc ± 2fm.]
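The Bessel values and the spectral-line weights (half the cosine amplitudes) can be reproduced in Python with scipy:

```python
from scipy.special import jv

# Problem 2.32: Bessel-function values for beta = 1 and the corresponding
# positive-frequency spectral-line weights (half the cosine amplitudes).
beta = 1.0
J0, J1, J2 = jv(0, beta), jv(1, beta), jv(2, beta)
print(J0, J1, J2)              # close to the tabulated 0.765, 0.440, 0.115
print(J0 / 2, J1 / 2, J2 / 2)  # line weights at fc, fc +/- fm, fc +/- 2fm
```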