Solutions Manual
for
Communication Systems
4th Edition
Simon Haykin
McMaster University, Canada
Preface
This Manual is written to accompany the fourth edition of my book on Communication Systems. It consists of the following:
• Detailed solutions to all the problems in Chapters 1 to 10 of the book
• MATLAB codes and representative results for the computer experiments in Chapters 1, 2, 3, 4, 6, 7, 9, and 10
I would like to express my thanks to my graduate student, Mathini Sellathurai, for her help in solving some of the problems and writing the above-mentioned MATLAB codes. I am also grateful to my technical coordinator, Lola Brooks, for typing the solutions to new problems and preparing the manuscript for the Manual.
Simon Haykin
Ancaster
April 29, 2000
CHAPTER 1
Problem 1.1
As an illustration, three particular sample functions of the random process X(t), corresponding to F = W/4, W/2, and W, are plotted below:

[Figure: the sample functions sin(2π(W/4)t), sin(2π(W/2)t), and sin(2πWt), each plotted versus t]

To show that X(t) is nonstationary, we need only observe that every waveform illustrated above is zero at t = 0, positive for 0 < t < 1/2W, and negative for -1/2W < t < 0. Thus, the probability density function of the random variable X(t1) obtained by sampling X(t) at t1 = 1/4W is identically zero for negative argument, whereas the probability density function of the random variable X(t2) obtained by sampling X(t) at t2 = -1/4W is nonzero only for negative arguments. Clearly, therefore,

f_X(t1)(x) ≠ f_X(t2)(x)

and the random process X(t) is nonstationary.
Problem 1.2
X(t) = A cos(2πf_c t)

Therefore,

X_i = A cos(2πf_c t_i)

Since the amplitude A is uniformly distributed, we may write

f_Xi(x_i) = 1/cos(2πf_c t_i),   0 ≤ x_i ≤ cos(2πf_c t_i)
          = 0,                  otherwise

[Figure: rectangular probability density function of height 1/cos(2πf_c t_i) over the interval (0, cos(2πf_c t_i))]

Similarly, we may write

X_{i+τ} = A cos[2πf_c(t_i + τ)]

and

f_{X_{i+τ}}(x_2) = 1/cos[2πf_c(t_i + τ)],   0 ≤ x_2 ≤ cos[2πf_c(t_i + τ)]
                 = 0,                       otherwise

We thus see that f_Xi(x_1) ≠ f_{X_{i+τ}}(x_2), and so the process X(t) is nonstationary.
Problem 1.3
(a) The integrator output at time t is

Y(t) = ∫_0^t X(τ) dτ
     = A ∫_0^t cos(2πf_c τ) dτ
     = (A/(2πf_c)) sin(2πf_c t)

Therefore,

E[Y(t)] = (sin(2πf_c t)/(2πf_c)) E[A] = 0

Var[Y(t)] = (sin²(2πf_c t)/(2πf_c)²) Var[A]
          = (sin²(2πf_c t)/(2πf_c)²) σ_A²     (1)

Y(t) is Gaussian-distributed, and so we may express its probability density function as

f_Y(t)(y) = (1/(√(2π) σ_Y(t))) exp(-y²/(2σ_Y²(t)))

where σ_Y²(t) = Var[Y(t)].

(b) From Eq. (1) we note that the variance of Y(t) depends on time t, and so Y(t) is nonstationary.

(c) For a random process to be ergodic it has to be stationary. Since Y(t) is nonstationary, it follows that it is not ergodic.
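As a numerical sanity check on the time-dependent variance in Eq. (1), the following Python sketch estimates Var[Y(t)] by Monte Carlo and compares it with sin²(2πf_c t)σ_A²/(2πf_c)². The values f_c = 1 and σ_A = 1, and the choice of a zero-mean Gaussian amplitude A, are illustrative assumptions made only for this check.

```python
import math
import random

f_c = 1.0        # assumed carrier frequency (illustrative)
sigma_A = 1.0    # assumed standard deviation of the zero-mean amplitude A

def var_Y(t, trials=20000, seed=0):
    # Y(t) = A*sin(2*pi*f_c*t)/(2*pi*f_c); estimate Var[Y(t)] over many draws of A
    rng = random.Random(seed)
    scale = math.sin(2*math.pi*f_c*t)/(2*math.pi*f_c)
    samples = [rng.gauss(0.0, sigma_A)*scale for _ in range(trials)]
    mean = sum(samples)/trials
    return sum((s - mean)**2 for s in samples)/trials

def var_theory(t):
    # Eq. (1): Var[Y(t)] = sin^2(2*pi*f_c*t)/(2*pi*f_c)^2 * sigma_A^2
    return (math.sin(2*math.pi*f_c*t)**2)/(2*math.pi*f_c)**2 * sigma_A**2
```

The estimate tracks the theoretical curve, and in particular vanishes wherever sin(2πf_c t) = 0, confirming the time dependence that rules out stationarity.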
Problem 1.4
(a) The expected value of Z(t1) is

E[Z(t1)] = cos(2πt1) E[X] + sin(2πt1) E[Y]

Since E[X] = E[Y] = 0, we deduce that

E[Z(t1)] = 0

Similarly, we find that

E[Z(t2)] = 0

Next, we note that

Cov[Z(t1)Z(t2)] = E[Z(t1)Z(t2)]
= E{[X cos(2πt1) + Y sin(2πt1)][X cos(2πt2) + Y sin(2πt2)]}
= cos(2πt1)cos(2πt2) E[X²] + [cos(2πt1)sin(2πt2) + sin(2πt1)cos(2πt2)] E[XY] + sin(2πt1)sin(2πt2) E[Y²]

Noting that

E[X²] = σ_X² + {E[X]}² = 1
E[Y²] = σ_Y² + {E[Y]}² = 1
E[XY] = 0

we obtain

Cov[Z(t1)Z(t2)] = cos(2πt1)cos(2πt2) + sin(2πt1)sin(2πt2)
                = cos[2π(t1 - t2)]     (1)

Since every weighted sum of samples of the process Z(t) is Gaussian, it follows that Z(t) is a Gaussian process. Furthermore, we note that

σ²_Z(t1) = E[Z²(t1)] = 1

This result is obtained by putting t1 = t2 in Eq. (1). Similarly,

σ²_Z(t2) = E[Z²(t2)] = 1

Therefore, the correlation coefficient of Z(t1) and Z(t2) is

ρ = Cov[Z(t1)Z(t2)]/(σ_Z(t1) σ_Z(t2)) = cos[2π(t1 - t2)]

Hence, the joint probability density function of Z(t1) and Z(t2) is

f_{Z(t1),Z(t2)}(z1, z2) = C exp[-(z1² - 2ρ z1 z2 + z2²)/(2(1 - ρ²))]

where

C = 1/(2π√(1 - cos²[2π(t1 - t2)]))

(b) We note that the covariance of Z(t1) and Z(t2) depends only on the time difference t1 - t2. The process Z(t) is therefore wide-sense stationary. Since it is Gaussian, it is also strictly stationary.
Problem 1.5
(a) Let

X(t) = A + Y(t)

where A is a constant and Y(t) is a zero-mean random process. The autocorrelation function of X(t) is

R_X(τ) = E[X(t+τ)X(t)]
       = E{[A + Y(t+τ)][A + Y(t)]}
       = E[A² + A Y(t+τ) + A Y(t) + Y(t+τ)Y(t)]
       = A² + R_Y(τ)

which shows that R_X(τ) contains a constant component equal to A².

(b) Let

X(t) = A_c cos(2πf_c t + θ) + Z(t)

where A_c cos(2πf_c t + θ) represents the sinusoidal component of X(t) and θ is a random phase variable. The autocorrelation function of X(t) is

R_X(τ) = E[X(t+τ)X(t)]
       = E{[A_c cos(2πf_c t + 2πf_c τ + θ) + Z(t+τ)][A_c cos(2πf_c t + θ) + Z(t)]}
       = E[A_c² cos(2πf_c t + 2πf_c τ + θ) cos(2πf_c t + θ)]
         + E[Z(t+τ) A_c cos(2πf_c t + θ)]
         + E[A_c cos(2πf_c t + 2πf_c τ + θ) Z(t)]
         + E[Z(t+τ)Z(t)]
       = (A_c²/2) cos(2πf_c τ) + R_Z(τ)

which shows that R_X(τ) contains a sinusoidal component of the same frequency as X(t).
Problem 1.6
(a) We note that the distribution function of X(t) is

F_X(t)(x) = 0,    x < 0
          = 1/2,  0 ≤ x < A
          = 1,    x ≥ A

and the corresponding probability density function is

f_X(t)(x) = (1/2)δ(x) + (1/2)δ(x - A)

which are illustrated below:

[Figure: F_X(t)(x) is a staircase rising from 0 to 1/2 at x = 0 and from 1/2 to 1 at x = A; f_X(t)(x) consists of two delta functions of weight 1/2 located at x = 0 and x = A]

(b) By ensemble-averaging, we have

E[X(t)] = ∫_{-∞}^{∞} x f_X(t)(x) dx
        = ∫_{-∞}^{∞} x [(1/2)δ(x) + (1/2)δ(x - A)] dx
        = A/2

The autocorrelation function of X(t) is

R_X(τ) = E[X(t+τ)X(t)]

Define the square function Sq_{T0}(t) as the square wave shown below:

[Figure: Sq_{T0}(t) equals 1.0 for 0 ≤ t < T0/2 and 0 for T0/2 ≤ t < T0, repeating with period T0]

Then, we may write

R_X(τ) = A² ∫_{-∞}^{∞} Sq_{T0}(t - t_d + τ) Sq_{T0}(t - t_d) f_{T_d}(t_d) dt_d
       = (A²/2)(1 - 2|τ|/T0),   |τ| ≤ T0/2

Since the wave is periodic with period T0, R_X(τ) must also be periodic with period T0.

(c) On a time-averaging basis, we note by inspection of Fig. P1.6 that the mean is

<x(t)> = A/2

Next, the autocorrelation function

<x(t+τ)x(t)>

has its maximum value of A²/2 at τ = 0, and decreases linearly to zero at τ = T0/2. Therefore,

<x(t+τ)x(t)> = (A²/2)(1 - 2|τ|/T0),   |τ| ≤ T0/2

Again, the autocorrelation must be periodic with period T0.

(d) We note that the ensemble-averaging and time-averaging procedures yield the same set of results for the mean and autocorrelation functions. Therefore, X(t) is ergodic in both the mean and the autocorrelation function. Since ergodicity implies wide-sense stationarity, it follows that X(t) must be wide-sense stationary.
Problem 1.7
(a) For |τ| > T, the random variables X(t) and X(t+τ) occur in different pulse intervals and are therefore independent. Thus,

E[X(t)X(t+τ)] = E[X(t)] E[X(t+τ)],   |τ| > T

Since both amplitudes (0 and A) are equally likely, we have E[X(t)] = E[X(t+τ)] = A/2. Therefore, for |τ| > T,

R_X(τ) = A²/4

For |τ| ≤ T, the random variables occur in the same pulse interval if t_d < T - |τ|. If they do occur in the same pulse interval,

E[X(t)X(t+τ)] = (1/2)A² + (1/2)0² = A²/2

We thus have a conditional expectation:

E[X(t)X(t+τ)] = A²/2,  t_d < T - |τ|
              = A²/4,  otherwise

Averaging over t_d, we get

R_X(τ) = (A²/4)(1 - |τ|/T) + A²/4,   |τ| ≤ T

(b) The power spectral density is the Fourier transform of the autocorrelation function. The Fourier transform of

g(τ) = (A²/4)(1 - |τ|/T),  |τ| ≤ T
     = 0,                   otherwise

is given by

G(f) = (A²T/4) sinc²(fT)

Therefore,

S_X(f) = (A²/4)δ(f) + (A²T/4) sinc²(fT)

We next note that

∫_{-∞}^{∞} (A²/4)δ(f) df = A²/4

∫_{-∞}^{∞} (A²T/4) sinc²(fT) df = A²/4

It follows therefore that half the power is in the dc component.
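The split of the power into equal dc and ac parts can be checked numerically: the continuous term (A²T/4)sinc²(fT) integrates to A²/4 because ∫ T sinc²(fT) df = 1. A Python sketch, in which A = 2 and T = 1 are illustrative assumptions:

```python
import math

def sinc(x):
    # sinc(x) = sin(pi*x)/(pi*x), with sinc(0) = 1
    return 1.0 if x == 0.0 else math.sin(math.pi*x)/(math.pi*x)

A, T = 2.0, 1.0           # illustrative amplitude and pulse duration (assumed)
F, steps = 200.0, 400000  # integrate over (-F, F) by the midpoint rule
df = 2*F/steps
# area under the continuous term (A^2*T/4)*sinc^2(f*T)
ac_power = sum((A**2*T/4)*sinc((-F + (k + 0.5)*df)*T)**2 for k in range(steps))*df
# area under the (A^2/4)*delta(f) term
dc_power = A**2/4
```

The numerical integral agrees with A²/4 up to the small truncation error of the finite integration band.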
Problem 1.8
Since

Y(t) = g_p(t) + X(t)

and g_p(t) and X(t) are uncorrelated,

C_Y(τ) = C_gp(τ) + C_X(τ)

where C_gp(τ) is the autocovariance of the periodic component and C_X(τ) is the autocovariance of the random component. C_Y(τ) is the plot in Fig. P1.8 shifted down by 3/2, removing the dc component. C_gp(τ) and C_X(τ) are plotted below:

[Figure: C_gp(τ) is a periodic triangular wave of peak value 0.5; C_X(τ) is a single triangular pulse of unit peak value centered on τ = 0]

Both g_p(t) and X(t) have zero mean.

(a) The power of the periodic component g_p(t) is therefore

(1/T0) ∫_{-T0/2}^{T0/2} g_p²(t) dt = C_gp(0) = 1/2

(b) The power of the random component X(t) is

E[X²(t)] = C_X(0) = 1

Problem 1.9

(a) By definition, the cross-correlation function is

R_XY(τ) = E[X(t+τ)Y(t)]

Replacing τ with -τ:

R_XY(-τ) = E[X(t-τ)Y(t)]

Next, replacing t - τ with t, we get

R_XY(-τ) = E[Y(t+τ)X(t)] = R_YX(τ)

(b) Form the nonnegative quantity

E[{X(t+τ) ± Y(t)}²] = E[X²(t+τ) ± 2X(t+τ)Y(t) + Y²(t)]
                    = E[X²(t+τ)] ± 2E[X(t+τ)Y(t)] + E[Y²(t)]
                    = R_X(0) ± 2R_XY(τ) + R_Y(0)

Hence,

R_X(0) ± 2R_XY(τ) + R_Y(0) ≥ 0

or

|R_XY(τ)| ≤ (1/2)[R_X(0) + R_Y(0)]
Problem 1.10
(a) The cascade connection of the two filters is equivalent to a filter with impulse response

h(t) = ∫_{-∞}^{∞} h1(u) h2(t - u) du

The autocorrelation function of Y(t) is given by

R_Y(τ) = ∫_{-∞}^{∞} ∫_{-∞}^{∞} h(τ1) h(τ2) R_X(τ - τ1 + τ2) dτ1 dτ2

(b) The cross-correlation function of V(t) and Y(t) is

R_VY(τ) = E[V(t+τ)Y(t)]

The Y(t) and V(t+τ) are related by

Y(t) = ∫_{-∞}^{∞} V(λ) h2(t - λ) dλ

Therefore,

R_VY(τ) = E[V(t+τ) ∫_{-∞}^{∞} V(λ) h2(t - λ) dλ]
        = ∫_{-∞}^{∞} h2(t - λ) E[V(t+τ)V(λ)] dλ
        = ∫_{-∞}^{∞} h2(t - λ) R_V(t + τ - λ) dλ

Substituting λ for t - λ:

R_VY(τ) = ∫_{-∞}^{∞} h2(λ) R_V(τ + λ) dλ

The autocorrelation function R_V(τ) is related to the given R_X(τ) by

R_V(τ) = ∫_{-∞}^{∞} ∫_{-∞}^{∞} h1(τ1) h1(τ2) R_X(τ - τ1 + τ2) dτ1 dτ2
Problem 1.11
(a) The cross-correlation function R_YX(τ) is

R_YX(τ) = E[Y(t+τ)X(t)]

The Y(t) and X(t) are related by

Y(t) = ∫_{-∞}^{∞} X(u) h(t - u) du

Therefore,

R_YX(τ) = E[∫_{-∞}^{∞} X(u)X(t) h(t + τ - u) du]
        = ∫_{-∞}^{∞} h(t + τ - u) E[X(u)X(t)] du
        = ∫_{-∞}^{∞} h(t + τ - u) R_X(u - t) du

Replacing t + τ - u by u:

R_YX(τ) = ∫_{-∞}^{∞} h(u) R_X(τ - u) du

(b) Since R_XY(τ) = R_YX(-τ), we have

R_XY(τ) = ∫_{-∞}^{∞} h(u) R_X(-τ - u) du

Since R_X(τ) is an even function of τ:

R_XY(τ) = ∫_{-∞}^{∞} h(u) R_X(τ + u) du

Replacing u by -u:

R_XY(τ) = ∫_{-∞}^{∞} h(-u) R_X(τ - u) du

(c) If X(t) is a white noise process with zero mean and power spectral density N0/2, we may write

R_X(τ) = (N0/2) δ(τ)

Therefore,

R_YX(τ) = (N0/2) ∫_{-∞}^{∞} h(u) δ(τ - u) du

Using the sifting property of the delta function:

R_YX(τ) = (N0/2) h(τ)

That is,

h(τ) = (2/N0) R_YX(τ)

This means that we may measure the impulse response of the filter by applying a white noise of power spectral density N0/2 to the filter input, cross-correlating the filter output with the input, and then multiplying the result by 2/N0.
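The measurement procedure in part (c) can be illustrated in discrete time: drive a short FIR filter with unit-variance white noise (the discrete analogue of N0/2 = 1) and cross-correlate the output with the input; the cross-correlation then recovers the filter taps. The taps used below are illustrative assumptions, not values from the problem.

```python
import random

h = [1.0, 0.5, 0.25]      # illustrative impulse response to be "measured"
N = 200000
rng = random.Random(1)
x = [rng.gauss(0.0, 1.0) for _ in range(N)]   # white-noise input, R_X(k) = delta(k)
# filter output y(n) = sum_k h(k) x(n-k)
y = [sum(h[k]*x[n-k] for k in range(len(h)) if n >= k) for n in range(N)]

def R_yx(tau):
    # sample estimate of R_YX(tau) = E[y(n+tau) x(n)], which should equal h(tau)
    return sum(y[n+tau]*x[n] for n in range(N - tau))/(N - tau)

h_est = [R_yx(k) for k in range(len(h))]
```

With 2 × 10⁵ samples the estimated taps match the true ones to within a few thousandths, exactly as the analytical result h(τ) = (2/N0) R_YX(τ) predicts.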
Problem 1.12
(a) The power spectral density consists of two components:

(1) A delta function δ(f) at the origin, whose inverse Fourier transform is one.

(2) A triangular component of unit amplitude and width 2f0, centered at the origin; the inverse Fourier transform of this component is f0 sinc²(f0 τ).

Therefore, the autocorrelation function of X(t) is

R_X(τ) = 1 + f0 sinc²(f0 τ)

which is sketched below:

[Figure: R_X(τ) peaks at the value 1 + f0 at τ = 0 and decays to a constant level of 1]

(b) Since R_X(τ) contains a constant component of amplitude 1, it follows that the dc power contained in X(t) is 1.

(c) The mean-square value of X(t) is given by

E[X²(t)] = R_X(0) = 1 + f0

The ac power contained in X(t) is therefore equal to f0.

(d) If the sampling rate is f0/n, where n is an integer, the samples are uncorrelated. They are not, however, statistically independent. They would be statistically independent if X(t) were a Gaussian process.
Problem 1.13
The autocorrelation function of n2(t) is

R_N2(t1, t2) = E[n2(t1) n2(t2)]
= E{[n1(t1)cos(2πf_c t1 + θ) - n1(t1)sin(2πf_c t1 + θ)] · [n1(t2)cos(2πf_c t2 + θ) - n1(t2)sin(2πf_c t2 + θ)]}
= E[n1(t1)n1(t2)cos(2πf_c t1 + θ)cos(2πf_c t2 + θ)
  - n1(t1)n1(t2)cos(2πf_c t1 + θ)sin(2πf_c t2 + θ)
  - n1(t1)n1(t2)sin(2πf_c t1 + θ)cos(2πf_c t2 + θ)
  + n1(t1)n1(t2)sin(2πf_c t1 + θ)sin(2πf_c t2 + θ)]
= E{n1(t1)n1(t2)cos[2πf_c(t1 - t2)]} - E{n1(t1)n1(t2)sin[2πf_c(t1 + t2) + 2θ]}
= E[n1(t1)n1(t2)]cos[2πf_c(t1 - t2)] - E[n1(t1)n1(t2)] · E{sin[2πf_c(t1 + t2) + 2θ]}

Since θ is a uniformly distributed random variable, the second term is zero, giving

R_N2(t1, t2) = R_N1(t1, t2) cos[2πf_c(t1 - t2)]

Since n1(t) is stationary, we find that in terms of τ = t1 - t2:

R_N2(τ) = R_N1(τ) cos(2πf_c τ)

Taking the Fourier transforms of both sides of this relation:

S_N2(f) = (1/2)[S_N1(f + f_c) + S_N1(f - f_c)]

With S_N1(f) as defined in Fig. P1.13, we find that S_N2(f) is as shown below:

[Figure: S_N2(f) consists of two spectral lobes of height a/2 and width 2W, centered at f = -f_c and f = +f_c]
Problem 1.14
The power spectral density of the random telegraph wave is

S_X(f) = ∫_{-∞}^{∞} R_X(τ) exp(-j2πfτ) dτ
       = ∫_{-∞}^{0} exp(2ντ) exp(-j2πfτ) dτ + ∫_{0}^{∞} exp(-2ντ) exp(-j2πfτ) dτ
       = 1/(2(ν - jπf)) + 1/(2(ν + jπf))
       = ν/(ν² + π²f²)

The transfer function of the filter is

H(f) = 1/(1 + j2πfRC)

Therefore, the power spectral density of the filter output is

S_Y(f) = |H(f)|² S_X(f) = ν/{[1 + (2πfRC)²](ν² + π²f²)}

To determine the autocorrelation function of the filter output, we first expand S_Y(f) in partial fractions. Recognizing that

exp(-2ν|τ|) ⇌ ν/(ν² + π²f²)

we obtain the desired result:

R_Y(τ) = [ν/(1 - 4R²C²ν²)] [(1/ν) exp(-2ν|τ|) - 2RC exp(-|τ|/RC)]
Problem 1.15
We are given

Y(t) = ∫_{t-T}^{t} X(τ) dτ

For x(t) = δ(t), the impulse response of this running integrator is, by definition,

h(t) = ∫_{t-T}^{t} δ(τ) dτ
     = 1 for t - T ≤ 0 ≤ t or, equivalently, 0 ≤ t ≤ T

Correspondingly, the frequency response of the running integrator is

H(f) = ∫_{-∞}^{∞} h(t) exp(-j2πft) dt
     = ∫_{0}^{T} exp(-j2πft) dt
     = (1/(j2πf)) [1 - exp(-j2πfT)]
     = T sinc(fT) exp(-jπfT)

Hence the power spectral density S_Y(f) is defined in terms of the power spectral density S_X(f) as follows:

S_Y(f) = |H(f)|² S_X(f) = T² sinc²(fT) S_X(f)
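The closed-form frequency response H(f) = T sinc(fT) exp(-jπfT) can be spot-checked against a direct numerical Fourier transform of the rectangular impulse response; in this Python sketch, T = 2 is an illustrative assumption.

```python
import cmath
import math

T = 2.0   # illustrative integration window (assumed)

def H_numeric(f, steps=20000):
    # midpoint-rule Fourier transform of h(t) = 1 on [0, T]
    dt = T/steps
    return sum(cmath.exp(-2j*math.pi*f*(k + 0.5)*dt)*dt for k in range(steps))

def H_closed(f):
    # T*sinc(f*T)*exp(-j*pi*f*T), with sinc(x) = sin(pi*x)/(pi*x)
    if f == 0.0:
        return complex(T)
    return T*(math.sin(math.pi*f*T)/(math.pi*f*T))*cmath.exp(-1j*math.pi*f*T)
```

The two agree to numerical precision, including the dc value H(0) = T.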
Problem 1.16
We are given a filter with the impulse response

h(t) = a exp(-at),  0 ≤ t ≤ T
     = 0,           otherwise

The frequency response of the filter is therefore

H(f) = ∫_{-∞}^{∞} h(t) exp(-j2πft) dt
     = ∫_{0}^{T} a exp(-at) exp(-j2πft) dt
     = a ∫_{0}^{T} exp(-(a + j2πf)t) dt
     = (a/(a + j2πf)) [1 - exp(-(a + j2πf)T)]
     = (a/(a + j2πf)) [1 - e^{-aT}(cos(2πfT) - j sin(2πfT))]

The squared magnitude response is

|H(f)|² = (a²/(a² + 4π²f²)) [(1 - e^{-aT}cos(2πfT))² + (e^{-aT}sin(2πfT))²]
        = (a²/(a² + 4π²f²)) [1 - 2e^{-aT}cos(2πfT) + e^{-2aT}(cos²(2πfT) + sin²(2πfT))]
        = (a²/(a² + 4π²f²)) [1 - 2e^{-aT}cos(2πfT) + e^{-2aT}]

Correspondingly, we may write

S_Y(f) = (a²/(a² + 4π²f²)) [1 - 2e^{-aT}cos(2πfT) + e^{-2aT}] S_X(f)
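The squared magnitude response derived above can be spot-checked the same way, by numerically transforming h(t) = a·exp(-at) on [0, T]; here a = 1.5 and T = 2 are illustrative assumptions.

```python
import cmath
import math

a, T = 1.5, 2.0   # illustrative filter parameters (assumed)

def H_numeric(f, steps=40000):
    # midpoint-rule Fourier transform of h(t) = a*exp(-a*t) on [0, T]
    dt = T/steps
    return sum(a*math.exp(-a*(k + 0.5)*dt)*cmath.exp(-2j*math.pi*f*(k + 0.5)*dt)*dt
               for k in range(steps))

def H2_closed(f):
    # (a^2/(a^2 + 4*pi^2*f^2)) * [1 - 2*e^{-aT}*cos(2*pi*f*T) + e^{-2aT}]
    return (a**2/(a**2 + 4*math.pi**2*f**2)) * \
           (1 - 2*math.exp(-a*T)*math.cos(2*math.pi*f*T) + math.exp(-2*a*T))
```

Evaluating |H_numeric(f)|² at a few frequencies reproduces the closed-form expression.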
Problem 1.17
The autocorrelation function of X(t) is

R_X(τ) = E[X(t+τ)X(t)]
       = A² E[cos(2πFt + 2πFτ + Θ) cos(2πFt + Θ)]
       = (A²/2) E[cos(4πFt + 2πFτ + 2Θ) + cos(2πFτ)]

Averaging over Θ, and noting that Θ is uniformly distributed over 2π radians, we get

R_X(τ) = (A²/2) E[cos(2πFτ)]     (1)

where the remaining expectation is over the random frequency F. Next, we note that R_X(τ) is related to the power spectral density by

R_X(τ) = ∫_{-∞}^{∞} S_X(f) cos(2πfτ) df     (2)

Therefore, comparing Eqs. (1) and (2), we deduce that the power spectral density of X(t) is

S_X(f) = (A²/4)[f_F(f) + f_F(-f)]

where f_F(f) is the probability density function of the frequency F. When the frequency assumes a constant value, f_c (say), we have f_F(f) = δ(f - f_c) and, correspondingly,

S_X(f) = (A²/4)[δ(f - f_c) + δ(f + f_c)]
Problem 1.18
Let σ_X² denote the variance of the random variable X_k obtained by observing the random process X(t) at time t_k. The variance σ_X² is related to the mean-square value of X_k as follows:

σ_X² = E[X_k²] - μ_X²

where μ_X = E[X_k]. Since the process X(t) has zero mean, it follows that

σ_X² = E[X_k²]

Next we note that

E[X_k²] = ∫_{-∞}^{∞} S_X(f) df

We may therefore define the variance σ_X² as the total area under the power spectral density S_X(f):

σ_X² = ∫_{-∞}^{∞} S_X(f) df     (1)

Thus with the mean μ_X = 0 and the variance σ_X² defined by Eq. (1), we may express the probability density function of X_k as follows:

f_{X_k}(x) = (1/√(2πσ_X²)) exp(-x²/(2σ_X²))
Problem 1.19
The input-output relation of a full-wave rectifier is defined by

Y(t) = |X(t)|

The probability density function of the random variable X(t_k), obtained by observing the input random process at time t_k, is defined by

f_{X(t_k)}(x) = (1/√(2πσ²)) exp(-x²/(2σ²))

To find the probability density function of the random variable Y(t_k), obtained by observing the output random process, we need an expression for the inverse relation defining X(t_k) in terms of Y(t_k). We note that a given value of Y(t_k) corresponds to two values of X(t_k), of equal magnitude and opposite sign. We may therefore write

X(t_k) = -Y(t_k),  X(t_k) < 0
X(t_k) = Y(t_k),   X(t_k) > 0

In both cases, we have

|dX(t_k)/dY(t_k)| = 1

The probability density function of Y(t_k) is therefore given by

f_{Y(t_k)}(y) = f_{X(t_k)}(-y) + f_{X(t_k)}(y)
             = √(2/π) (1/σ) exp(-y²/(2σ²))

We may therefore write

f_{Y(t_k)}(y) = √(2/π) (1/σ) exp(-y²/(2σ²)),  y ≥ 0
             = 0,                              y < 0

which is illustrated below:

[Figure: f_{Y(t_k)}(y) starts at the value 0.798/σ at y = 0 and decays to zero with increasing y]
Problem 1.20
(a) The probability density function of the random variable Y(t_k), obtained by observing the output Y(t) = X²(t) at time t_k, is

f_{Y(t_k)}(y) = (1/√(2πσ_X² y)) exp(-y/(2σ_X²)),  y ≥ 0
             = 0,                                  y < 0

where

σ_X² = E[X²(t_k)] = R_X(0)

The mean value of Y(t_k) is therefore

E[Y(t_k)] = (1/√(2πσ_X²)) ∫_0^∞ √y exp(-y/(2σ_X²)) dy     (1)

Put √y = σ_X u. Then, we may rewrite Eq. (1) as

E[Y(t_k)] = (2σ_X²/√(2π)) ∫_0^∞ u² exp(-u²/2) du = σ_X²

(b) The autocorrelation function of Y(t) is

R_Y(τ) = E[Y(t+τ)Y(t)]

Since Y(t) = X²(t), we have

R_Y(τ) = E[X²(t+τ)X²(t)]     (2)

The X(t_k+τ) and X(t_k) are jointly Gaussian with a joint probability density function defined by

f_{X(t_k+τ),X(t_k)}(x1, x2) = (1/(2πσ_X²√(1 - ρ_X²(τ)))) exp[-(x1² - 2ρ_X(τ)x1x2 + x2²)/(2σ_X²(1 - ρ_X²(τ)))]

where σ_X² = R_X(0) and

ρ_X(τ) = Cov[X(t_k+τ)X(t_k)]/σ_X² = R_X(τ)/R_X(0)

Rewrite Eq. (2) in the form

R_Y(τ) = (1/(2πσ_X²√(1 - ρ_X²(τ)))) ∫_{-∞}^{∞} x2² exp(-x2²/(2σ_X²)) g(x2) dx2     (3)

where

g(x2) = ∫_{-∞}^{∞} x1² exp[-(x1 - ρ_X(τ)x2)²/(2σ_X²(1 - ρ_X²(τ)))] dx1

Let u = (x1 - ρ_X(τ)x2)/(σ_X√(1 - ρ_X²(τ))). Then, we may express g(x2) in the form

g(x2) = σ_X√(1 - ρ_X²(τ)) ∫_{-∞}^{∞} [σ_X√(1 - ρ_X²(τ))u + ρ_X(τ)x2]² exp(-u²/2) du

However, we note that

∫_{-∞}^{∞} exp(-u²/2) du = √(2π)
∫_{-∞}^{∞} u exp(-u²/2) du = 0
∫_{-∞}^{∞} u² exp(-u²/2) du = √(2π)

Hence,

g(x2) = √(2π) σ_X√(1 - ρ_X²(τ)) [σ_X²(1 - ρ_X²(τ)) + ρ_X²(τ)x2²]

Thus, from Eq. (3):

R_Y(τ) = (1/(√(2π)σ_X)) ∫_{-∞}^{∞} x2² [σ_X²(1 - ρ_X²(τ)) + ρ_X²(τ)x2²] exp(-x2²/(2σ_X²)) dx2

Using the results

∫_{-∞}^{∞} x2² exp(-x2²/(2σ_X²)) dx2 = √(2π) σ_X³
∫_{-∞}^{∞} x2⁴ exp(-x2²/(2σ_X²)) dx2 = 3√(2π) σ_X⁵

we obtain

R_Y(τ) = 3σ_X⁴ ρ_X²(τ) + σ_X⁴ [1 - ρ_X²(τ)]
       = σ_X⁴ [1 + 2ρ_X²(τ)]

With

ρ_X(τ) = R_X(τ)/R_X(0)

and σ_X² = R_X(0), we obtain

R_Y(τ) = R_X²(0) [1 + 2R_X²(τ)/R_X²(0)]
       = R_X²(0) + 2R_X²(τ)

The autocovariance function of Y(t) is therefore

C_Y(τ) = R_Y(τ) - {E[Y(t_k)]}²
       = R_X²(0) + 2R_X²(τ) - R_X²(0)
       = 2R_X²(τ)
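The Gaussian moment identity underlying this result, E[X1²X2²] = σ_X⁴[1 + 2ρ²] for jointly Gaussian zero-mean X1 and X2, can be confirmed by Monte Carlo; unit variance and ρ = 0.6 below are illustrative choices for the check, not problem values.

```python
import math
import random

rho = 0.6         # illustrative value of rho_X(tau); sigma_X = 1 assumed
N = 400000
rng = random.Random(2)
acc = 0.0
for _ in range(N):
    u = rng.gauss(0.0, 1.0)
    v = rng.gauss(0.0, 1.0)
    x1 = u                                    # plays the role of X(t_k + tau)
    x2 = rho*u + math.sqrt(1 - rho**2)*v      # correlated with x1 at level rho
    acc += (x1**2)*(x2**2)
moment_est = acc/N   # estimates E[X1^2 X2^2]; theory gives 1 + 2*rho^2
```

The sample moment lands on 1 + 2ρ² = 1.72 to within Monte Carlo error, matching R_Y(τ) = σ_X⁴[1 + 2ρ_X²(τ)].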
Problem 1.21
(a) The random variable Y(t1) obtained by observing the filter output of impulse response h1(t), at time t1, is given by

Y(t1) = ∫_{-∞}^{∞} X(t1 - τ) h1(τ) dτ

The expected value of Y(t1) is

μ_Y1 = E[Y(t1)] = μ_X H1(0)

where μ_X is the mean of X(t) and

H1(0) = ∫_{-∞}^{∞} h1(τ) dτ

The random variable Z(t2) obtained by observing the filter output of impulse response h2(t), at time t2, is given by

Z(t2) = ∫_{-∞}^{∞} X(t2 - u) h2(u) du

The expected value of Z(t2) is

μ_Z2 = E[Z(t2)] = μ_X H2(0)

where

H2(0) = ∫_{-∞}^{∞} h2(u) du

The covariance of Y(t1) and Z(t2) is

Cov[Y(t1)Z(t2)] = E[(Y(t1) - μ_Y1)(Z(t2) - μ_Z2)]
= E[∫_{-∞}^{∞} ∫_{-∞}^{∞} (X(t1-τ) - μ_X)(X(t2-u) - μ_X) h1(τ) h2(u) dτ du]
= ∫_{-∞}^{∞} ∫_{-∞}^{∞} E[(X(t1-τ) - μ_X)(X(t2-u) - μ_X)] h1(τ) h2(u) dτ du
= ∫_{-∞}^{∞} ∫_{-∞}^{∞} C_X(t1 - t2 - τ + u) h1(τ) h2(u) dτ du

where C_X(τ) is the autocovariance function of X(t). Next, we note that the variance of Y(t1) is

σ²_Y1 = E[(Y(t1) - μ_Y1)²] = ∫_{-∞}^{∞} ∫_{-∞}^{∞} C_X(τ - u) h1(τ) h1(u) dτ du

and the variance of Z(t2) is

σ²_Z2 = E[(Z(t2) - μ_Z2)²] = ∫_{-∞}^{∞} ∫_{-∞}^{∞} C_X(τ - u) h2(τ) h2(u) dτ du

The correlation coefficient of Y(t1) and Z(t2) is

ρ = Cov[Y(t1)Z(t2)]/(σ_Y1 σ_Z2)

Since X(t) is a Gaussian process, it follows that Y(t1) and Z(t2) are jointly Gaussian with a probability density function given by

f_{Y(t1),Z(t2)}(y, z) = (1/(2πσ_Y1σ_Z2√(1-ρ²))) exp{-(1/(2(1-ρ²)))[(y-μ_Y1)²/σ²_Y1 - 2ρ(y-μ_Y1)(z-μ_Z2)/(σ_Y1σ_Z2) + (z-μ_Z2)²/σ²_Z2]}

(b) The random variables Y(t1) and Z(t2) are uncorrelated if and only if their covariance is zero. Since Y(t) and Z(t) are jointly Gaussian processes, it follows that Y(t1) and Z(t2) are statistically independent if Cov[Y(t1)Z(t2)] is zero. Therefore, the necessary and sufficient condition for Y(t1) and Z(t2) to be statistically independent is that

∫_{-∞}^{∞} ∫_{-∞}^{∞} C_X(t1 - t2 - τ + u) h1(τ) h2(u) dτ du = 0

for all choices of t1 and t2.
Problem 1.22
(a) The filter output is

Y(t) = ∫_{-∞}^{∞} h(τ) X(t - τ) dτ

With h(τ) = 1/T for 0 ≤ τ ≤ T and zero otherwise, put T - τ = u. Then the sample value of Y(t) at t = T equals

Y = (1/T) ∫_0^T X(u) du

The mean of Y is therefore

E[Y] = E[(1/T) ∫_0^T X(u) du]
     = (1/T) ∫_0^T E[X(u)] du
     = 0

The variance of Y is

σ_Y² = E[Y²] - {E[Y]}² = R_Y(0)
     = ∫_{-∞}^{∞} S_Y(f) df
     = ∫_{-∞}^{∞} S_X(f) |H(f)|² df

But

H(f) = ∫_{-∞}^{∞} h(t) exp(-j2πft) dt
     = (1/T) ∫_0^T exp(-j2πft) dt
     = (1/(j2πfT)) [1 - exp(-j2πfT)]
     = sinc(fT) exp(-jπfT)

Therefore,

σ_Y² = ∫_{-∞}^{∞} S_X(f) sinc²(fT) df

(b) Since the filter input is Gaussian, it follows that Y is also Gaussian. Hence, the probability density function of Y is

f_Y(y) = (1/(√(2π)σ_Y)) exp(-y²/(2σ_Y²))

where σ_Y² is defined above.

Problem 1.23
(a) The power spectral density of the noise at the filter output is given by

S_N(f) = (N0/2) |j2πfL / (R + j2πfL)|²
       = (N0/2) · 4π²f²L² / (R² + 4π²f²L²)

The autocorrelation function of the filter output is therefore

R_N(τ) = (N0/2)[δ(τ) - (R/(2L)) exp(-R|τ|/L)]

(b) The mean of the filter output is equal to H(0) times the mean of the filter input. The process at the filter input has zero mean. The value H(0) of the filter's transfer function H(f) is zero. It follows therefore that the filter output also has a zero mean.

The mean-square value of the filter output is equal to R_N(0). With zero mean, it follows therefore that the variance of the filter output is

σ_N² = R_N(0)

Since R_N(τ) contains a delta function δ(τ) centered on τ = 0, we find that, in theory, σ_N² is infinitely large.
Problem 1.24
(a) The noise equivalent bandwidth is

W_N = (1/|H(0)|²) ∫_0^∞ |H(f)|² df
    = ∫_0^∞ df/(1 + (f/f0)^{2n})
    = (πf0/2n)/sin(π/2n)
    = f0 / sinc(1/2n)

(b) When the filter order n approaches infinity, we have

W_N = f0 lim_{n→∞} 1/sinc(1/2n) = f0
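The closed-form W_N can be checked by integrating |H(f)|² numerically; n = 2 and f0 = 1 below are illustrative choices.

```python
import math

def wn_numeric(n, f0=1.0, fmax=100.0, steps=200000):
    # midpoint-rule integral of |H(f)|^2 = 1/(1 + (f/f0)^(2n)) over f >= 0
    df = fmax/steps
    return sum(df/(1 + ((k + 0.5)*df/f0)**(2*n)) for k in range(steps))

def wn_closed(n, f0=1.0):
    # W_N = (pi*f0/2n)/sin(pi/2n)
    x = math.pi/(2*n)
    return f0*x/math.sin(x)
```

For n = 1 the closed form reduces to πf0/2, the familiar result for a single-pole filter, and as n grows W_N falls toward f0.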
Problem 1.25
The process X(t) defined by

X(t) = Σ_{k=-∞}^{∞} h(t - τ_k)

where h(t - τ_k) is a current pulse at time τ_k, is stationary for the following simple reason: there is no distinguishing origin of time.
Problem 1.26
(a) Let S1(f) denote the power spectral density of the noise at the first filter output. The dependence of S1(f) on frequency is illustrated below:

[Figure: S1(f) consists of two rectangular lobes of height N0/2 and width 2B, centered at f = -f_c and f = +f_c]

Let S2(f) denote the power spectral density of the noise at the mixer output. Then, we may write

S2(f) = (1/4)[S1(f - f_c) + S1(f + f_c)]

which is illustrated below:

[Figure: S2(f) has a lobe of height N0/4 centered at f = 0 and lobes of height N0/8 and width 2B centered at f = -2f_c and f = +2f_c]

The power spectral density of the noise n(t) at the second filter output is therefore defined by

S_0(f) = N0/4,  -B ≤ f ≤ B
       = 0,     otherwise

The autocorrelation function of the noise n(t) is

R_0(τ) = (N0 B/2) sinc(2Bτ)

(b) The mean value of the noise at the system output is zero. Hence, the variance and mean-square value of this noise are the same. Now, the total area under S_0(f) is equal to (N0/4)(2B) = N0B/2. The variance of the noise at the system output is therefore N0B/2.

(c) The maximum rate at which n(t) can be sampled for the resulting samples to be uncorrelated is 2B samples per second.
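Part (c) holds because R_0(τ) = (N0B/2) sinc(2Bτ) vanishes at every nonzero integer multiple of 1/2B, so samples taken 1/2B seconds apart are uncorrelated. A quick numerical confirmation, in which N0 = 2 and B = 5 are illustrative values:

```python
import math

def sinc(x):
    # sinc(x) = sin(pi*x)/(pi*x), with sinc(0) = 1
    return 1.0 if x == 0.0 else math.sin(math.pi*x)/(math.pi*x)

N0, B = 2.0, 5.0   # illustrative noise density and bandwidth (assumed)

def R0(tau):
    # output-noise autocorrelation (N0*B/2)*sinc(2*B*tau)
    return (N0*B/2)*sinc(2*B*tau)

# correlation between samples spaced k/(2B) seconds apart, k = 1, 2, 3, 4
sample_corrs = [R0(k/(2*B)) for k in range(1, 5)]
```

R0(0) gives the noise variance N0B/2, while every sample spacing of 1/2B lands exactly on a zero of the sinc function.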
Problem 1.27
(a) The autocorrelation function of the filter output is

R_X(τ) = ∫_{-∞}^{∞} ∫_{-∞}^{∞} h(τ1) h(τ2) R_W(τ - τ1 + τ2) dτ1 dτ2

Since R_W(τ) = (N0/2) δ(τ), we find that the impulse response h(t) of the filter must satisfy the condition:

R_X(τ) = (N0/2) ∫_{-∞}^{∞} ∫_{-∞}^{∞} h(τ1) h(τ2) δ(τ - τ1 + τ2) dτ1 dτ2
       = (N0/2) ∫_{-∞}^{∞} h(τ + τ2) h(τ2) dτ2

(b) For the filter output to have a power spectral density equal to S_X(f), we have to choose the transfer function H(f) of the filter such that

S_X(f) = (N0/2) |H(f)|²

or

|H(f)| = √(2 S_X(f)/N0)
Problem 1.28
(a) Consider the part of the analyzer in Fig. 1.19 defining the in-phase component n_I(t), reproduced here as Fig. 1:

[Figure 1: narrowband noise n(t) multiplied by 2cos(2πf_c t) to produce v(t), which is then lowpass filtered]

For the multiplier output, we have

v(t) = 2n(t)cos(2πf_c t)

Applying Eq. (1.55) in the textbook, we therefore get

S_V(f) = S_N(f - f_c) + S_N(f + f_c)

Passing v(t) through an ideal lowpass filter of bandwidth B, defined as one-half the bandwidth of the narrowband noise n(t), we obtain

S_{N_I}(f) = S_V(f),                       for -B ≤ f ≤ B
           = 0,                            otherwise
           = S_N(f - f_c) + S_N(f + f_c),  for -B ≤ f ≤ B     (1)

For the quadrature component, we have the system shown in Fig. 2:

[Figure 2: narrowband noise n(t) multiplied by 2sin(2πf_c t) to produce u(t), which is then lowpass filtered]

The multiplier output u(t) is given by

u(t) = 2n(t)sin(2πf_c t)

Hence,

S_U(f) = S_N(f - f_c) + S_N(f + f_c)

and

S_{N_Q}(f) = S_U(f),                       for -B ≤ f ≤ B
           = 0,                            otherwise
           = S_N(f - f_c) + S_N(f + f_c),  for -B ≤ f ≤ B     (2)

Accordingly, from Eqs. (1) and (2) we have

S_{N_I}(f) = S_{N_Q}(f)

(b) Applying Eq. (1.78) of the textbook to Figs. 1 and 2, we obtain

S_{N_I N_Q}(f) = |H(f)|² S_{VU}(f)     (3)

where

|H(f)| = 1,  for -B ≤ f ≤ B
       = 0,  otherwise

Applying Eq. (1.23) of the textbook to the problem at hand:

R_{VU}(τ) = 2R_N(τ)sin(2πf_c τ) = (1/j) R_N(τ)(e^{j2πf_c τ} - e^{-j2πf_c τ})

Applying the Fourier transform to both sides of this relation:

S_{VU}(f) = (1/j)[S_N(f - f_c) - S_N(f + f_c)]     (4)

Substituting Eq. (4) into (3):

S_{N_I N_Q}(f) = (1/j)[S_N(f - f_c) - S_N(f + f_c)],  for -B ≤ f ≤ B
              = 0,                                     otherwise

which is the desired result.
Problem 1.29
If the power spectral density S_N(f) of the narrowband noise n(t) is symmetric about the midband frequency f_c, we then have

S_N(f - f_c) = S_N(f + f_c),  for -B ≤ f ≤ B

From part (b) of Problem 1.28, the cross-spectral densities between the in-phase noise component n_I(t) and quadrature noise component n_Q(t) are zero for all frequencies:

S_{N_I N_Q}(f) = 0  for all f

This, in turn, means that the cross-correlation functions R_{N_I N_Q}(τ) and R_{N_Q N_I}(τ) are both zero, that is,

R_{N_I N_Q}(τ) = R_{N_Q N_I}(τ) = 0  for all τ

which states that the random variables N_I(t_k + τ) and N_Q(t_k), obtained by observing n_I(t) at time t_k + τ and observing n_Q(t) at time t_k, are orthogonal for all τ.

If the narrowband noise n(t) is Gaussian, with zero mean (by virtue of the narrowband nature of n(t)), then it follows that both N_I(t_k + τ) and N_Q(t_k) are also Gaussian with zero mean. We thus conclude the following:

• N_I(t_k + τ) and N_Q(t_k) are both uncorrelated.
• Being Gaussian and uncorrelated, N_I(t_k + τ) and N_Q(t_k) are therefore statistically independent.

That is, the in-phase noise component n_I(t) and quadrature noise component n_Q(t) are statistically independent.
Problem 1.30
(a) The power spectral density of the in-phase component or quadrature component is defined by

S_{N_I}(f) = S_{N_Q}(f) = S_N(f - f_c) + S_N(f + f_c),  -B < f < B
           = 0,                                          otherwise

With f_c = 5 and B = 2, we note that, for -2 < f < 2, the S_N(f + 5) and S_N(f - 5) are as shown below:

[Figure: sketches of S_N(f + 5) and S_N(f - 5), each of unit peak value, over the interval -2 < f < 2]

We thus find that S_{N_I}(f) = S_{N_Q}(f) is as shown below:

[Figure: sketch of S_{N_I}(f) = S_{N_Q}(f), of peak value 2.0, over the interval -2 < f < 2]

(b) The cross-spectral density S_{N_I N_Q}(f) is defined by

S_{N_I N_Q}(f) = (1/j)[S_N(f - f_c) - S_N(f + f_c)],  -B < f < B
              = 0,                                     otherwise

We therefore find that j S_{N_I N_Q}(f) is as shown below:

[Figure: sketch of j S_{N_I N_Q}(f), of peak value 0.5, over the interval -2 < f < 2]

Next, we note that

S_{N_Q N_I}(f) = -S_{N_I N_Q}(f)

We thus find that j S_{N_Q N_I}(f) is as shown below:

[Figure: sketch of j S_{N_Q N_I}(f), the negative of the preceding sketch]
Problem 1.31
(a) Express the noise n(t) in terms of its in-phase and quadrature components as follows:

n(t) = n_I(t) cos(2πf_c t) - n_Q(t) sin(2πf_c t)

The envelope of n(t) is

r(t) = √(n_I²(t) + n_Q²(t))

which is Rayleigh-distributed. That is,

f_R(r) = (r/σ²) exp(-r²/(2σ²)),  r ≥ 0
       = 0,                      otherwise

To evaluate the variance σ², we note that the power spectral density of n_I(t) or n_Q(t) is

[Figure: S_{N_I}(f) = S_{N_Q}(f) equals N0 over the interval -B ≤ f ≤ B]

Since the mean of n(t) is zero, we find that

σ² = 2N0B

Therefore,

f_R(r) = (r/(2N0B)) exp(-r²/(4N0B)),  r ≥ 0
       = 0,                            otherwise

(b) The mean value of the envelope is equal to √(πN0B), and its variance is equal to 0.858 N0B.
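The quoted mean and variance of the Rayleigh envelope follow from σ² = 2N0B (mean σ√(π/2) = √(πN0B), variance (2 - π/2)σ² ≈ 0.858 N0B), and can be confirmed by simulation; N0 = 1 and B = 5 below are illustrative values.

```python
import math
import random

N0, B = 1.0, 5.0            # illustrative noise density and bandwidth (assumed)
sigma = math.sqrt(2*N0*B)   # per-component standard deviation: sigma^2 = 2*N0*B
rng = random.Random(4)
N = 200000
# envelope of two independent zero-mean Gaussian quadrature components
env = [math.hypot(rng.gauss(0.0, sigma), rng.gauss(0.0, sigma)) for _ in range(N)]
mean_est = sum(env)/N
var_est = sum((r - mean_est)**2 for r in env)/N
# theory: mean = sqrt(pi*N0*B), variance ~= 0.858*N0*B
```

The sample mean and variance agree with √(πN0B) and 0.858 N0B to within Monte Carlo error.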
Problem 1.32
Autocorrelation of a Sinusoidal Wave Plus White Gaussian Noise
In this computer experiment, we study the statistical characterization of a random process X(t) consisting of a sinusoidal wave component A cos(2πf_c t + Θ) and a white Gaussian noise process W(t) of zero mean and power spectral density N0/2. That is, we have

X(t) = A cos(2πf_c t + Θ) + W(t)     (1)

where Θ is a uniformly distributed random variable over the interval (-π, π). Clearly, the two components of the process X(t) are independent. The autocorrelation function of X(t) is therefore the sum of the individual autocorrelation functions of the signal (sinusoidal wave) component and the noise component, as shown by

R_X(τ) = (A²/2) cos(2πf_c τ) + (N0/2) δ(τ)     (2)

This equation shows that for |τ| > 0, the autocorrelation function R_X(τ) has the same sinusoidal waveform as the signal component. We may generalize this result by stating that the presence of a periodic signal component corrupted by additive white noise can be detected by computing the autocorrelation function of the composite process X(t).

The purpose of the experiment described here is to perform this computation using two different methods: (a) ensemble averaging, and (b) time averaging. The signal of interest consists of a sinusoidal signal of frequency f_c = 0.002 and phase θ = -π/2, truncated to a finite duration T = 1000; the amplitude A of the sinusoidal signal is set to √2 to give unit average power. A particular realization x(t) of the random process X(t) consists of this sinusoidal signal and additive white Gaussian noise; the power spectral density of the noise for this realization is N0/2 = 1000. The original sinusoid is barely recognizable in x(t).
(a) For ensemble-average computation of the autocorrelation function, we may proceed as follows:

• Compute the product x(t + τ)x(t) for some fixed time t and specified time shift τ, where x(t) is a particular realization of the random process X(t).
• Repeat the computation of the product x(t + τ)x(t) for M independent realizations (i.e., sample functions) of the random process X(t).
• Compute the average of these computations over M.
• Repeat this sequence of computations for different values of τ.
The results of this computation are plotted in Fig. 1 for M = 50 realizations. The picture portrayed here is in perfect agreement with the theory defined by Eq. (2). The important point to note here is that the ensemble-averaging process yields a clean estimate of the true autocorrelation function R_X(τ) of the random process X(t). Moreover, the presence of the sinusoidal signal is clearly visible in the plot of R_X(τ) versus τ.
(b) For the time-average estimation of the autocorrelation function of the process X(t), we invoke ergodicity and use the formula

R_X(τ) = lim_{T→∞} R_X(τ, T)     (3)

where R_X(τ, T) is the time-averaged autocorrelation function:

R_X(τ, T) = (1/2T) ∫_{-T}^{T} x(t + τ) x(t) dt     (4)

The x(t) in Eq. (4) is a particular realization of the process X(t), and 2T is the total observation interval. Define the time-windowed function

x_T(t) = x(t),  -T ≤ t ≤ T
       = 0,     otherwise     (5)

We may then rewrite Eq. (4) as

R_X(τ, T) = (1/2T) ∫_{-∞}^{∞} x_T(t + τ) x_T(t) dt     (6)

For a specified time shift τ, we may compute R_X(τ, T) directly using Eq. (6). However, from a computational viewpoint, it is more efficient to use an indirect method based on Fourier transformation. First, we note from Eq. (6) that the time-averaged autocorrelation function R_X(τ, T) may be viewed as a scaled form of convolution in the τ-domain, as follows:

R_X(τ, T) = (1/2T) x_T(-τ) ★ x_T(τ)     (7)

where the star denotes convolution and x_T(τ) is simply the time-windowed function x_T(t) with t replaced by τ. Let X_T(f) denote the Fourier transform of x_T(τ); note that X_T(f) is the same as the Fourier transform X(f, T). Since convolution in the τ-domain is transformed into multiplication in the frequency domain, we have the Fourier-transform pair:

R_X(τ, T) ⇌ |X_T(f)|²/2T     (8)
The parameter |X_T(f)|²/2T is recognized as the periodogram of the process X(t). Equation (8) is a mathematical description of the correlation theorem, which may be formally stated as follows: the time-averaged autocorrelation function of a sample function pertaining to a random process and its periodogram, based on that sample function, constitute a Fourier-transform pair.

We are now ready to describe the indirect method for computing the time-averaged autocorrelation function R_X(τ, T):
• Compute the Fourier transform X_T(f) of the time-windowed function x_T(τ).
• Compute the periodogram |X_T(f)|²/2T.
• Compute the inverse Fourier transform of |X_T(f)|²/2T.
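The three steps can be sketched compactly. The snippet below uses Python with a plain DFT (rather than MATLAB's fft) so that it is self-contained, and uses a scaled-down sinusoid-plus-noise record rather than the experiment's actual parameters; all signal values here are illustrative assumptions.

```python
import cmath
import math
import random

N = 64
rng = random.Random(3)
# short record: unit-power sinusoid (4 cycles in the window) plus weak noise
x = [math.sqrt(2)*math.cos(2*math.pi*4*n/N) + 0.1*rng.gauss(0.0, 1.0) for n in range(N)]

def dft(seq):
    M = len(seq)
    return [sum(seq[n]*cmath.exp(-2j*math.pi*k*n/M) for n in range(M)) for k in range(M)]

X = dft(x)                                   # step 1: Fourier transform
P = [abs(Xk)**2/N for Xk in X]               # step 2: periodogram
R = [sum(P[k]*cmath.exp(2j*math.pi*k*m/N) for k in range(N)).real/N
     for m in range(N)]                      # step 3: inverse transform
# R[m] estimates the (circular) time-averaged autocorrelation; R[0] is the average power
```

By Parseval's theorem R[0] equals the record's mean-square value, and the sinusoid's cosine shape survives in R[m] just as the correlation theorem predicts.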
To perform these calculations on a digital computer, the customary procedure is to use the fast Fourier transform (FFT) algorithm. With x_T(τ) uniformly sampled, the computational procedure described herein yields the desired values of R_X(τ, T) for τ = 0, δ, 2δ, ..., (N - 1)δ, where δ is the sampling period and N is the total number of samples used in the computation. Figure 2 presents the results obtained in the time-averaging approach of "estimating" the autocorrelation function R_X(τ) using the indirect method for the same set of parameters as those used for the ensemble-averaged results of Fig. 1. The symbol R̂_X(τ) is used to emphasize the fact that the computation described here results in an "estimate" of the autocorrelation function R_X(τ). The results presented in Fig. 2 are for a signal-to-noise ratio of +10 dB, which is defined by

SNR = (average power of the sinusoidal signal component)/(average power of the noise)     (9)
On the basis of the results presented in Figures 1 and 2 we may make the following observations:
• The ensemble-averaging and time-averaging approaches yield similar results for the autocorrelation function R_X(τ), signifying the fact that the random process X(t) described herein is indeed ergodic.
• The indirect time-averaging approach, based on the FFT algorithm, provides an efficient method for the estimation of R_X(τ) using a digital computer.
• As the SNR is increased, the numerical accuracy of the estimation is improved, which is intuitively satisfying.
Problem 1.32
Matlab codes
% Problem 1.32a CS: Haykin
% Ensemble-averaged autocorrelation
% M. Sellathurai

clear all
A=sqrt(2);
N=1000; M=50; SNRdb=10;  % M = 50 realizations, SNR = +10 dB (per the text)
e_corrf_f=zeros(1,1000);
f_c=2/N;
t=0:1:N-1;
for trial=1:M
   % signal
   s=A*cos(2*pi*f_c*t);
   % noise
   snr = 10^(SNRdb/10);
   wn = (randn(1,length(s)))/sqrt(snr)/sqrt(2);
   % signal plus noise
   s=s+wn;
   % autocorrelation
   [e_corrf]=en_corr(s,s,N);
   % ensemble-averaged autocorrelation
   e_corrf_f=e_corrf_f+e_corrf;
end
% plots
plot(-500:500-1,e_corrf_f/M);
xlabel('\tau')
ylabel('R_X(\tau)')
% Problem 1.32b CS: Haykin
% Time-averaged estimation of autocorrelation
% M. Sellathurai

clear all
A=sqrt(2);
N=1000; SNRdb=10;
f_c=2/N;
t=0:1:N-1;
% signal
s=A*cos(2*pi*f_c*t);
% noise
snr = 10^(SNRdb/10);
wn = (randn(1,length(s)))/sqrt(snr)/sqrt(2);
% signal plus noise
s=s+wn;
% time-averaged autocorrelation
[e_corrf]=time_corr(s,N);
% plots
plot(-500:500-1,e_corrf);
xlabel('\tau')
ylabel('R_X(\tau)')
function [corrf]=en_corr(u,v,N)
% function to compute the autocorrelation/cross-correlation
% ensemble average
% used in problem 1.32, CS: Haykin
% M. Sellathurai, 10 June 1999.

max_cross_corr=0; tt=length(u);
for m=0:tt-1
  shifted_u=[u(m+1:tt) u(1:m)];
  corr(m+1)=(sum(v.*shifted_u))/(N/2);
  if (abs(corr)>max_cross_corr)
    max_cross_corr=abs(corr);
  end
end
corr1=fliplr(corr);
corrf=[corr1(501:tt) corr(1:500)];
46
function [corrf]=time_corr(s,N)
% function to compute the autocorrelation/cross-correlation
% time average
% used in problem 1.32, CS: Haykin
% M. Sellathurai, 10 June 1999.

X=fft(s);
X1=fftshift((abs(X).^2)/(N/2));
corrf=fftshift(abs(ifft(X1)));
47
Answer to Problem 1.32
[Plot of the estimated autocorrelation R_X(τ) versus lag τ, for −500 ≤ τ ≤ 500.]
Figure 1: Ensemble averaging
[Plot of the estimated autocorrelation R_X(τ) versus lag τ, for −500 ≤ τ ≤ 500.]
Figure 2: Time averaging
48
Problem 1.33
Matlab codes
% Problem 1.33 CS: Haykin
% multipath channel
% M. Sellathurai

clear all
Nf=0; Xf=0;   % initializing counters
N=10000;      % number of samples
M=2; P=10;
a=1;          % line-of-sight component

for i=1:P
  A=sqrt(randn(N,M).^2 + randn(N,M).^2);
  xi=A.*cos(cos(rand(N,M)*2*pi) + rand(N,M)*2*pi);  % in-phase component
  xq=A.*sin(cos(rand(N,M)*2*pi) + rand(N,M)*2*pi);  % quadrature-phase component
  xi=(sum(xi'));
  xq=(sum(xq'));
  ra=sqrt((xi+a).^2 + xq.^2);
  % Rayleigh, Rician fading
  [h X]=hist(ra,50);
  Nf=Nf+h; Xf=Xf+X;
end

Nf=Nf/P; Xf=Xf/P;

% print
plot(Xf,Nf/(sum(Xf.*Nf)/20))
xlabel('v')
ylabel('f_v(v)')
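The mechanism behind this experiment can be sketched compactly in Python (an illustrative model, not the Manual's MATLAB: a fixed line-of-sight amplitude a plus a zero-mean complex Gaussian scatter term yields a Rician envelope; the seed, sample count, and scatter variance are assumptions):

```python
import numpy as np

# Rician envelope: line-of-sight amplitude a plus complex Gaussian scatter.
rng = np.random.default_rng(1)           # assumed seed
N = 100_000
a = 1.0                                  # line-of-sight component
sigma = 1.0 / np.sqrt(2.0)               # assumed scatter std per quadrature

xi = a + sigma * rng.standard_normal(N)  # in-phase component
xq = sigma * rng.standard_normal(N)      # quadrature component
v = np.sqrt(xi**2 + xq**2)               # Rician-distributed envelope

# Setting a = 0 in the same construction gives a Rayleigh envelope instead
print(round(float(v.mean()), 2))
```

A normalized histogram of v approximates the Rician density f_v(v) shown in the Answer to Problem 1.33.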
49
Answer to Problem 1.33
[Plot of f_v(v) versus v.]
Figure 1: Rician distribution
50
CHAPTER 2
Continuouswave Modulation
Problem 2.1
(a) Let the input voltage v_i consist of a sinusoidal wave of frequency f_c/2 (i.e., half the desired carrier frequency) and the message signal m(t):

v_i = A_c cos(πf_c t) + m(t)

Then, the output current i_o is

i_o = a_1 v_i + a_3 v_i³
    = a_1[A_c cos(πf_c t) + m(t)] + a_3[A_c cos(πf_c t) + m(t)]³
    = a_1[A_c cos(πf_c t) + m(t)] + (1/4)a_3 A_c³[cos(3πf_c t) + 3cos(πf_c t)]
      + (3/2)a_3 A_c² m(t)[1 + cos(2πf_c t)] + 3a_3 A_c cos(πf_c t)m²(t) + a_3 m³(t)
Assume that m(t) occupies the frequency interval −W < f < W. Then, the amplitude spectrum of the output current i_o is as shown below.

[Sketch of |I_o(f)|: a baseband portion extending up to 3W, components centered on ±f_c/2 of widths up to 4W, a component of width 2W centered on ±f_c, and components at ±3f_c/2.]
51
From this diagram we see that in order to extract a DSB-SC wave with carrier frequency f_c from i_o, we need a bandpass filter with midband frequency f_c and bandwidth 2W, which satisfy the requirement

f_c − W > (f_c/2) + 2W

that is, f_c > 6W.
Therefore, to use the given nonlinear device as a product modulator, we may use the following configuration:

[Block diagram: inputs A_c cos(πf_c t) and m(t) → nonlinear device → BPF → output (3/2)a_3 A_c² m(t)cos(2πf_c t)]
(b) To generate an AM wave with carrier frequency f_c, we require a sinusoidal component of frequency f_c to be added to the DSB-SC wave generated in the manner described above. To achieve this requirement, we may use the following configuration involving a pair of the nonlinear devices and a pair of identical bandpass filters:

[Block diagram: upper branch with inputs A_c cos(πf_c t) and m(t) → nonlinear device → BPF; lower branch with inputs A_c cos(πf_c t) and the dc level A_0 → nonlinear device → BPF; the two BPF outputs are summed to produce the AM wave.]
52
The resulting AM wave is therefore (3/2)a_3 A_c²[A_0 + m(t)]cos(2πf_c t). Thus, the choice of the dc level A_0 at the input of the lower branch controls the percentage modulation of the AM wave.
Problem 2.2
Consider the square-law characteristic:

v_2(t) = a_1 v_1(t) + a_2 v_1²(t)    (1)

where a_1 and a_2 are constants. Let

v_1(t) = A_c cos(2πf_c t) + m(t)    (2)

Therefore, substituting Eq. (2) into (1) and expanding terms:

v_2(t) = a_1 A_c[1 + (2a_2/a_1)m(t)]cos(2πf_c t)
         + a_1 m(t) + a_2 m²(t) + a_2 A_c² cos²(2πf_c t)    (3)

The first term in Eq. (3) is the desired AM signal with k_a = 2a_2/a_1. The remaining three terms are unwanted terms that are removed by filtering.

Let the modulating wave m(t) be limited to the band −W ≤ f ≤ W, as in Fig. 1(a). Then, from Eq. (3) we find that the amplitude spectrum |V_2(f)| is as shown in Fig. 1(b). It follows therefore that the unwanted terms may be removed from v_2(t) by designing the tuned filter at the modulator output of Fig. P2.2 to have a midband frequency f_c and bandwidth 2W, which satisfy the requirement that f_c > 3W.
[Fig. 1: (a) message spectrum |M(f)|, confined to −W ≤ f ≤ W; (b) amplitude spectrum |V_2(f)|.]
Figure 1
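The claim that the first term of Eq. (3) is the only component near f_c can be checked numerically. In the following Python sketch (the values of a_1, a_2, A_c and the tone frequencies are assumptions chosen for illustration), v_2(t) is band-pass filtered around f_c and compared with a_1 A_c[1 + (2a_2/a_1)m(t)]cos(2πf_c t):

```python
import numpy as np

# Numeric check of the square-law modulator (illustrative, assumed values):
# v2 = a1*v1 + a2*v1**2 with v1 = Ac*cos(2*pi*fc*t) + m(t) should contain
# the AM component a1*Ac*[1 + (2*a2/a1)*m(t)]*cos(2*pi*fc*t) near fc.
a1, a2, Ac = 1.0, 0.2, 2.0
N = 4096
t = np.arange(N) / N
fc, fm = 400.0, 20.0                 # carrier and message tone (fc > 3W)
m = 0.5 * np.cos(2 * np.pi * fm * t)

v1 = Ac * np.cos(2 * np.pi * fc * t) + m
v2 = a1 * v1 + a2 * v1**2

# Band-pass "filter" around fc via the FFT: keep |f - fc| <= 2*fm
V2 = np.fft.fft(v2)
f = np.fft.fftfreq(N, d=1 / N)
V2[np.abs(np.abs(f) - fc) > 2 * fm] = 0.0
band = np.real(np.fft.ifft(V2))

expected = a1 * Ac * (1 + (2 * a2 / a1) * m) * np.cos(2 * np.pi * fc * t)
print(np.max(np.abs(band - expected)))
```

Because all tones fall on exact FFT bins here, the residual is at the level of floating-point round-off.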
53
Problem 2.3
The generation of an AM wave may be accomplished using various devices; here we describe one such device called a switching modulator. Details of this modulator are shown in Fig. P2.3a, where it is assumed that the carrier wave c(t) applied to the diode is large in amplitude, so that it swings right across the characteristic curve of the diode. We assume that the diode acts as an ideal switch, that is, it presents zero impedance when it is forward-biased [corresponding to c(t) > 0]. We may thus approximate the transfer characteristic of the diode-load resistor combination by a piecewise-linear characteristic, as shown in Fig. P2.3b. Accordingly, for an input voltage v_1(t) consisting of the sum of the carrier and the message signal:

v_1(t) = A_c cos(2πf_c t) + m(t)    (1)

where |m(t)| << A_c, the resulting load voltage v_2(t) is

v_2(t) = { v_1(t),  c(t) > 0
         { 0,       c(t) < 0    (2)
That is, the load voltage v_2(t) varies periodically between the values v_1(t) and zero at a rate equal to the carrier frequency f_c. In this way, by assuming a modulating wave that is weak compared with the carrier wave, we have effectively replaced the nonlinear behavior of the diode by an approximately equivalent piecewise-linear time-varying operation.
We may express Eq. (2) mathematically as

v_2(t) ≈ [A_c cos(2πf_c t) + m(t)] g_{T0}(t)    (3)

where g_{T0}(t) is a periodic pulse train of duty cycle equal to one-half and period T_0 = 1/f_c, as in Fig. 1. Representing this g_{T0}(t) by its Fourier series, we have

g_{T0}(t) = 1/2 + (2/π) Σ_{n=1}^∞ [(−1)^{n−1}/(2n − 1)] cos[2πf_c t(2n − 1)]    (4)

Therefore, substituting Eq. (4) in (3), we find that the load voltage v_2(t) consists of the sum of two components:

1. The component

(A_c/2)[1 + (4/(πA_c))m(t)]cos(2πf_c t)
which is the desired AM wave with amplitude sensitivity k_a = 4/(πA_c). The switching modulator is therefore made more sensitive by reducing the carrier amplitude A_c; however, it must be maintained large enough to make the diode act like an ideal switch.
2. Unwanted components, the spectrum of which contains delta functions at 0, ±2f_c, ±4f_c, and so on, and which occupy frequency intervals of width 2W centered at 0, ±3f_c, ±5f_c, and so on, where W is the message bandwidth.
Fig. 1: Periodic pulse train
The unwanted terms are removed from the load voltage v_2(t) by means of a bandpass filter with midband frequency f_c and bandwidth 2W, provided that f_c > 2W. This latter condition ensures that the frequency separations between the desired AM wave and the unwanted components are large enough for the bandpass filter to suppress the unwanted components.
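The Fourier-series coefficients of the half-duty pulse train quoted in Eq. (4) can be verified numerically. This Python sketch (the grid size is an assumption) checks that the dc coefficient is 1/2, the odd-harmonic magnitudes fall off as 1/(2n − 1), and the even harmonics vanish:

```python
import numpy as np

# Half-duty pulse train with unit period (f_c = 1), sampled on a uniform grid
N = 1024
t = np.arange(N) / N
g = (np.cos(2 * np.pi * t) > 0).astype(float)
G = np.fft.fft(g) / N            # complex exponential Fourier coefficients

# Eq. (4) predicts c_0 = 1/2, |c_1| = 1/pi, c_2 = 0, |c_3| = 1/(3*pi)
print(round(G[0].real, 2), round(abs(G[1]), 2))
```

The complex-exponential coefficient of magnitude 1/π corresponds to the cosine amplitude 2/π appearing in Eq. (4).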
55
Problem 2.4
(a) The envelope detector output is

v(t) = A_c|1 + μ cos(2πf_m t)|

which is illustrated below for the case when μ = 2.

[Sketch of v(t) for μ = 2: periodic with period 1/f_m, equal to 3A_c at t = 0, and zero at t = ±1/(3f_m).]
1 f In
We see that v(t) is periodic with period 1/f_m, and an even function of t, and so we may express v(t) in the form

v(t) = a_0 + 2 Σ_{n=1}^∞ a_n cos(2πn f_m t)

where

a_0 = 2f_m ∫_0^{1/2f_m} v(t)dt
    = 2A_c f_m ∫_0^{1/3f_m} [1 + 2cos(2πf_m t)]dt − 2A_c f_m ∫_{1/3f_m}^{1/2f_m} [1 + 2cos(2πf_m t)]dt
    = A_c/3 + (4A_c/π)sin(2π/3)    (1)

a_n = 2f_m ∫_0^{1/2f_m} v(t)cos(2πn f_m t)dt
    = 2A_c f_m ∫_0^{1/3f_m} [1 + 2cos(2πf_m t)]cos(2πn f_m t)dt
      − 2A_c f_m ∫_{1/3f_m}^{1/2f_m} [1 + 2cos(2πf_m t)]cos(2πn f_m t)dt
    = (A_c/nπ)[2 sin(2nπ/3) − sin(nπ)]
      + (A_c/(n+1)π){2 sin[(2π/3)(n+1)] − sin[π(n+1)]}
      + (A_c/(n−1)π){2 sin[(2π/3)(n−1)] − sin[π(n−1)]}    (2)

For n = 0, Eq. (2) reduces (in the limit) to that shown in Eq. (1).

(b) For n = 1, Eq. (2) yields

a_1 = A_c(√3/(2π) + 1/3)

For n = 2, it yields

a_2 = √3 A_c/(2π)

Therefore, the ratio of second-harmonic amplitude to fundamental amplitude in v(t) is

a_2/a_1 = 3√3/(3√3 + 2π) ≈ 0.452
57
Problem 2.5
(a) The demodulation of an AM wave can be accomplished using various devices; here, we describe a simple and yet highly effective device known as the envelope detector. Some version of this demodulator is used in almost all commercial AM radio receivers. For it to function properly, however, the AM wave has to be narrowband, which requires that the carrier frequency be large compared to the message bandwidth. Moreover, the percentage modulation must be less than 100 percent.
An envelope detector of the series type is shown in Fig. P2.5, which consists of a diode and a resistor-capacitor (RC) filter. The operation of this envelope detector is as follows. On a positive half-cycle of the input signal, the diode is forward-biased and the capacitor C charges up rapidly to the peak value of the input signal. When the input signal falls below this value, the diode becomes reverse-biased and the capacitor C discharges slowly through the load resistor R_l. The discharging process continues until the next positive half-cycle. When the input signal becomes greater than the voltage across the capacitor, the diode conducts again and the process is repeated. We assume that the diode is ideal, presenting resistance r_f to current flow in the forward-biased region and infinite resistance in the reverse-biased region. We further assume that the AM wave applied to the envelope detector is supplied by a voltage source of internal impedance R_s. The charging time constant (r_f + R_s)C must be short compared with the carrier period 1/f_c, that is,

(r_f + R_s)C << 1/f_c    (1)

so that the capacitor C charges rapidly and thereby follows the applied voltage up to the positive peak when the diode is conducting.

(b) The discharging time constant R_l C must be long enough to ensure that the capacitor discharges slowly through the load resistor R_l between positive peaks of the carrier wave, but not so long that the capacitor voltage will not discharge at the maximum rate of change of the modulating wave, that is,

1/f_c << R_l C << 1/W    (2)

where W is the message bandwidth. The result is that the capacitor voltage or detector output is nearly the same as the envelope of the AM wave.
58
Problem 2.6
Let

v_1(t) = A_c[1 + k_a m(t)]cos(2πf_c t)

(a) Then the output of the square-law device is

v_2(t) = a_1 v_1(t) + a_2 v_1²(t)
       = a_1 A_c[1 + k_a m(t)]cos(2πf_c t)
         + (1/2)a_2 A_c²[1 + 2k_a m(t) + k_a² m²(t)][1 + cos(4πf_c t)]

(b) The desired signal, namely a_2 A_c² k_a m(t), is due to the a_2 v_1²(t) term; hence the name "square-law detection". This component can be extracted by means of a low-pass filter. This is not the only contribution within the baseband spectrum, however, because the term (1/2)a_2 A_c² k_a² m²(t) will give rise to a plurality of similar frequency components. The ratio of wanted signal to distortion is 2/(k_a m(t)). To make this ratio large, the percentage modulation, that is, |k_a m(t)|, should be kept small compared with unity.
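The baseband bookkeeping above can be confirmed numerically. This Python sketch (the tone message and device constants are assumed for illustration) low-pass filters v_2(t) and checks that, apart from dc, it equals the wanted term a_2 A_c² k_a m(t) plus the distortion term (1/2)a_2 A_c² k_a² m²(t):

```python
import numpy as np

# Illustrative check of square-law detection: low-pass filtering
# v2 = a1*v1 + a2*v1**2 of an AM wave leaves a term proportional to m(t)
# plus a distortion term in m(t)**2 that shrinks as ka*|m(t)| is reduced.
a1, a2, Ac, ka = 1.0, 0.3, 1.0, 0.1
N = 4096
t = np.arange(N) / N
fc, fm = 400.0, 10.0
m = np.cos(2 * np.pi * fm * t)

v1 = Ac * (1 + ka * m) * np.cos(2 * np.pi * fc * t)
v2 = a1 * v1 + a2 * v1**2

# Low-pass via the FFT: keep |f| <= 2*fm, then remove the dc bias
V2 = np.fft.fft(v2)
f = np.fft.fftfreq(N, d=1 / N)
V2[np.abs(f) > 2 * fm] = 0.0
base = np.real(np.fft.ifft(V2))
base -= base.mean()

wanted = a2 * Ac**2 * ka * m
distortion = 0.5 * a2 * Ac**2 * ka**2 * (m**2 - np.mean(m**2))
print(np.max(np.abs(base - (wanted + distortion))))
```

With the small value of k_a used here, the distortion term is two orders of magnitude below the wanted term, consistent with the 2/(k_a m(t)) ratio.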
59
Problem 2.7
The squarer output is

v_1(t) = A_c²[1 + k_a m(t)]² cos²(2πf_c t)

The amplitude spectrum of v_1(t) consists of a baseband portion extending up to 2W and a portion centered on 2f_c, assuming that m(t) is limited to the interval −W < f < W. Since f_c > 2W, we find that 2f_c − 2W > 2W. Therefore, by choosing the cutoff frequency of the low-pass filter greater than 2W but less than 2f_c − 2W, we obtain the output

v_2(t) = (A_c²/2)[1 + k_a m(t)]²

Hence, the square-rooter output is

v_3(t) = (A_c/√2)[1 + k_a m(t)]

which, except for the dc component A_c/√2, is proportional to the message signal m(t).
Problem 2.8
(a) For f_c = 1.25 kHz, the spectra of the message signal m(t), the product modulator output s(t), and the coherent detector output v(t) are as follows, respectively:

[Sketches: M(f) occupying −1 ≤ f ≤ 1 kHz; S(f) with sidebands centered on ±1.25 kHz; V(f) occupying −1 ≤ f ≤ 1 kHz.]

(b) For the case when f_c = 0.75 kHz, the respective spectra are as follows:

[Sketches: M(f); S(f) with overlapping sidebands centered on ±0.75 kHz; V(f).]
To avoid sideband overlap, the carrier frequency f_c must be greater than or equal to 1 kHz. The lowest carrier frequency is therefore 1 kHz for each sideband of the modulated wave s(t) to be uniquely determined by m(t).
Problem 2.9
The two AM modulator outputs are
s_1(t) = A_c[1 + k_a m(t)]cos(2πf_c t)
s_2(t) = A_c[1 − k_a m(t)]cos(2πf_c t)

Subtracting s_2(t) from s_1(t):

s(t) = s_1(t) − s_2(t)
     = 2k_a A_c m(t)cos(2πf_c t)

which represents a DSB-SC modulated wave.
62
Problem 2.10
(a) Multiplying the signal by the local oscillator gives:

A_c m(t)cos(2πf_c t)cos[2π(f_c + Δf)t]
  = (A_c/2)m(t){cos(2πΔf t) + cos[2π(2f_c + Δf)t]}

Low-pass filtering leaves:

(A_c/2)m(t)cos(2πΔf t)

Thus the output signal is the message signal modulated by a sinusoid of frequency Δf.

(b) If m(t) = cos(2πf_m t), the output becomes

(A_c/2)cos(2πf_m t)cos(2πΔf t) = (A_c/4){cos[2π(f_m + Δf)t] + cos[2π(f_m − Δf)t]}
Problem 2.11
(a) The multiplier output is

y(t) = (A_c²/2)[1 + cos(4πf_c t)]m²(t)

Therefore, the spectrum of the multiplier output is

Y(f) = (A_c²/2)∫_{−∞}^{∞} M(λ)M(f − λ)dλ
       + (A_c²/4)[∫_{−∞}^{∞} M(λ)M(f − 2f_c − λ)dλ + ∫_{−∞}^{∞} M(λ)M(f + 2f_c − λ)dλ]

where M(f) = F[m(t)].

(b) At f = 2f_c, we have

Y(2f_c) = (A_c²/2)∫_{−∞}^{∞} M(λ)M(2f_c − λ)dλ
          + (A_c²/4)[∫_{−∞}^{∞} M(λ)M(−λ)dλ + ∫_{−∞}^{∞} M(λ)M(4f_c − λ)dλ]

Since M(−λ) = M*(λ), we may write

Y(2f_c) = (A_c²/2)∫_{−∞}^{∞} M(λ)M(2f_c − λ)dλ
          + (A_c²/4)[∫_{−∞}^{∞} |M(λ)|²dλ + ∫_{−∞}^{∞} M(λ)M(4f_c − λ)dλ]    (1)

With m(t) limited to −W < f < W and f_c > W, we find that the first and third integrals reduce to zero, and so we may simplify Eq. (1) as follows:

Y(2f_c) = (A_c²/4)E

where E is the signal energy (by Rayleigh's energy theorem). Similarly, we find that Y(−2f_c) = (A_c²/4)E.

The bandpass filter output, in the frequency domain, is therefore defined by

V(f) ≈ (A_c²/4)E Δf[δ(f − 2f_c) + δ(f + 2f_c)]

where Δf is the filter bandwidth. Hence,

v(t) ≈ (A_c²/2)E Δf cos(4πf_c t)
Problem 2.12
The multiplexed signal is

s(t) = A_c[m_1(t)cos(2πf_c t) + m_2(t)sin(2πf_c t)]

Therefore,

S(f) = (A_c/2)[M_1(f − f_c) + M_1(f + f_c)] + (A_c/2j)[M_2(f − f_c) − M_2(f + f_c)]

where M_1(f) = F[m_1(t)] and M_2(f) = F[m_2(t)]. The spectrum of the received signal is therefore

R(f) = H(f)S(f)
     = (A_c/2)H(f)[M_1(f − f_c) + M_1(f + f_c)] + (A_c/2j)H(f)[M_2(f − f_c) − M_2(f + f_c)]

To recover m_1(t), we multiply r(t), the inverse Fourier transform of R(f), by cos(2πf_c t) and then pass the resulting output through a low-pass filter, producing a signal with the following spectrum:

(A_c/4)[H(f − f_c) + H(f + f_c)]M_1(f) + (A_c/4j)[H(f − f_c) − H(f + f_c)]M_2(f)    (1)

The condition H(f_c + f) = H*(f_c − f) is equivalent to H(f + f_c) = H(f − f_c); this follows from the fact that for a real-valued impulse response h(t), we have H(−f) = H*(f). Hence, substituting this condition in Eq. (1), we find that the low-pass filter output has a spectrum equal to (A_c/2)H(f − f_c)M_1(f).

Similarly, to recover m_2(t), we multiply r(t) by sin(2πf_c t), and then pass the resulting signal through a low-pass filter. In this case, we get an output with a spectrum equal to (A_c/2)H(f − f_c)M_2(f).
Problem 2.13
When the local carriers have a phase error φ, we may write

cos(2πf_c t + φ) = cos(2πf_c t)cos φ − sin(2πf_c t)sin φ

In this case, we find that by multiplying the received signal r(t) by cos(2πf_c t + φ) and passing the resulting output through a low-pass filter, the corresponding low-pass filter output in the receiver has a spectrum equal to (A_c/2)H(f − f_c)[cos φ M_1(f) − sin φ M_2(f)]. This indicates that there is crosstalk at the demodulator outputs.
66
Problem 2.14
The transmitted signal is given by
(a) The envelope detection of s(t) yields
To minimize the distortion in the envelope detector output due to the quadrature component, we choose the DC offset V_0 to be large. We may then approximate v_1(t) as a signal which, except for the DC component A_c V_0, is proportional to the sum m_l(t) + m_r(t).

(b) For coherent detection at the receiver, we need a replica of the carrier A_c cos(2πf_c t). This requirement can be satisfied by passing the received signal s(t) through a narrowband filter of midband frequency f_c. However, to extract the difference m_l(t) − m_r(t), we need sin(2πf_c t), which is obtained by passing the narrowband filter output through a 90°-phase shifter. Then, multiplying s(t) by sin(2πf_c t) and low-pass filtering, we obtain a signal proportional to m_l(t) − m_r(t).
(c) To recover the original loudspeaker signals m_l(t) and m_r(t), we proceed as follows:

• Equalize the outputs of the envelope detector and coherent detector.
• Pass the equalized outputs through an audio de-mixer to produce m_l(t) and m_r(t).
67
Problem 2.15
(a) To ensure 50 percent modulation, k_a = 1 (the message m(t) = t/(1 + t²) has a peak value of 1/2), in which case we get

s(t) = A_c[1 + t/(1 + t²)]cos(2πf_c t)

(b) s(t) = A_c m(t)cos(2πf_c t) = A_c [t/(1 + t²)]cos(2πf_c t)

(c) s(t) = (A_c/2)[m(t)cos(2πf_c t) − m̂(t)sin(2πf_c t)]
         = (A_c/2)[(t/(1 + t²))cos(2πf_c t) + (1/(1 + t²))sin(2πf_c t)]

(d) s(t) = (A_c/2)[m(t)cos(2πf_c t) + m̂(t)sin(2πf_c t)]
         = (A_c/2)[(t/(1 + t²))cos(2πf_c t) − (1/(1 + t²))sin(2πf_c t)]

where m̂(t) = −1/(1 + t²) is the Hilbert transform of m(t). As an aid to the sketching of the modulated signals in (c) and (d), the envelope of either SSB wave is

(A_c/2)[m²(t) + m̂²(t)]^{1/2} = (A_c/2)/(1 + t²)^{1/2}
Plots of the modulated signals in (a) to (d) are presented in Fig. 1 on the next page.
68
[Figure 1: plots of the modulated signals in (a) to (d) versus time, for −10 ≤ t ≤ 10.]
69
Problem 2.16
Consider first the modulated signal
s(t) = (1/2)m(t)cos(2πf_c t) − (1/2)m̂(t)sin(2πf_c t)
(1)
Let S(f) = F[s(t)], M(f) = F[m(t)], and M̂(f) = F[m̂(t)], where m̂(t) is the Hilbert transform of the message signal m(t). Then applying the Fourier transform to Eq. (1), we obtain

S(f) = (1/4)[M(f − f_c) + M(f + f_c)] − (1/4j)[M̂(f − f_c) − M̂(f + f_c)]    (2)
(2)
From the definition of the Hilbert transform, we have

M̂(f) = −j sgn(f)M(f)

where sgn(f) is the signum function. Equivalently, we may write

M̂(f − f_c) = −j sgn(f − f_c)M(f − f_c)
M̂(f + f_c) = −j sgn(f + f_c)M(f + f_c)
(i) From the definition of the signum function, we note the following for f > 0 and f > f_c:

sgn(f − f_c) = sgn(f + f_c) = +1

Correspondingly, Eq. (2) reduces to

S(f) = (1/4)[M(f − f_c) + M(f + f_c)] + (1/4)[M(f − f_c) − M(f + f_c)]
     = (1/2)M(f − f_c)
In words, we may thus state that, except for a scaling factor, the spectrum of the modulated signal s(t) defined in Eq. (1) is the same as that of the DSB-SC modulated signal for f > f_c.

(ii) For f > 0 and f < f_c, we have

sgn(f − f_c) = −1,  sgn(f + f_c) = +1

Correspondingly, Eq. (2) reduces to

S(f) = 0

In words, we may now state that for f < f_c, the modulated signal s(t) defined in Eq. (1) is zero.

Combining the results for parts (i) and (ii), the modulated signal s(t) of Eq. (1) represents a single sideband modulated signal containing only the upper sideband. This result was derived for f > 0. It also holds for f < 0, the proof for which is left as an exercise for the reader.
Following a procedure similar to that described above, we may show that the modulated signal
s(t) = (1/2)m(t)cos(2πf_c t) + (1/2)m̂(t)sin(2πf_c t)
(3)
represents a single sideband modulated signal containing only the lower sideband.
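The sideband bookkeeping in parts (i) and (ii) can be verified numerically with a Python sketch (the amplitude scaling is dropped; the FFT-based Hilbert transform and the single-tone message are assumptions of the illustration):

```python
import numpy as np

# Illustrative check of the phasing method (numpy-only Hilbert transform):
# s(t) = m(t)cos(2*pi*fc*t) - m_hat(t)sin(2*pi*fc*t) keeps only the upper
# sideband; the '+' sign would keep only the lower sideband instead.
N = 4096
t = np.arange(N) / N
fc, fm = 500.0, 40.0
m = np.cos(2 * np.pi * fm * t)

# Hilbert transform via the frequency domain: M_hat(f) = -j*sgn(f)*M(f)
M = np.fft.fft(m)
f = np.fft.fftfreq(N, d=1 / N)
m_hat = np.real(np.fft.ifft(-1j * np.sign(f) * M))

s_usb = m * np.cos(2 * np.pi * fc * t) - m_hat * np.sin(2 * np.pi * fc * t)
S = np.abs(np.fft.fft(s_usb)) / N

upper = S[int(fc + fm)]      # upper side-frequency survives
lower = S[int(fc - fm)]      # lower side-frequency is suppressed
print(upper, lower)
```

For the cosine message used here, m̂(t) = sin(2πf_m t), and the modulated wave collapses to a single tone at f_c + f_m.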
71
Problem 2.17
An error Δf in the frequency of the local oscillator in the demodulation of an SSB signal, measured with respect to the carrier frequency f_c, gives rise to distortion in the demodulated signal. Let the local oscillator output be denoted by A'_c cos[2π(f_c + Δf)t]. The resulting demodulated signal is given by (for the case when only the upper sideband is transmitted)

v_o(t) = (1/4)A_c A'_c[m(t)cos(2πΔf t) + m̂(t)sin(2πΔf t)]

This demodulated signal represents an SSB wave corresponding to a carrier frequency Δf. The effect of the frequency error Δf in the local oscillator may be interpreted as follows:

(a) If the SSB wave s(t) contains the upper sideband and the frequency error Δf is positive, or equivalently if s(t) contains the lower sideband and Δf is negative, then the frequency components of the demodulated signal v_o(t) are shifted inward by the amount Δf compared with the baseband signal m(t), as illustrated in Fig. 1(b).

(b) If the incoming SSB wave s(t) contains the lower sideband and the frequency error Δf is positive, or equivalently if s(t) contains the upper sideband and Δf is negative, then the frequency components of the demodulated signal v_o(t) are shifted outward by the amount Δf compared with the baseband signal m(t). This is illustrated in Fig. 1(c) for the case of a baseband signal (e.g., a voice signal) with an energy gap occupying the interval −f_a < f < f_a, shown in part (a) of the figure.
72
[Fig. 1: (a) message spectrum M(f) with an energy gap around f = 0; (b) demodulated spectrum with components shifted inward by Δf; (c) demodulated spectrum with components shifted outward by Δf.]

Fig. 1
Problem 2.18
(a), (b) The spectrum of the message signal is illustrated below:

[Sketch of M(f).]

Correspondingly, the output of the upper first product modulator has the following spectrum:

[Sketch.]

The output of the lower first product modulator has the spectrum:

[Sketch.]

The output of the upper low-pass filter has the spectrum:

[Sketch.]

The output of the lower low-pass filter has the spectrum:

[Sketch.]

The output of the upper second product modulator has the spectrum:

[Sketch.]

The output of the lower second product modulator has the spectrum:

[Sketch.]

Adding the two second product modulator outputs, their upper sidebands add constructively while their lower sidebands cancel each other.

(c) To modify the modulator to transmit only the lower sideband, a single sign change is required in one of the channels. For example, the lower first product modulator could multiply the message signal by sin(2πf_0 t). Then the upper sideband would be cancelled and the lower one transmitted.
75
Problem 2.19
[Block diagram: m(t) → product modulator → v_1(t) → high-pass filter → v_2(t) → product modulator → v_3(t) → low-pass filter → s(t); the first product modulator is driven by cos(2πf_c t) and the second by cos[2π(f_c + f_b)t].]

(a) The first product modulator output is

v_1(t) = m(t)cos(2πf_c t)

The second product modulator output is

v_3(t) = v_2(t)cos[2π(f_c + f_b)t]

The amplitude spectra of m(t), v_1(t), v_2(t), v_3(t) and s(t) are illustrated on the next page. We may express the voice signal m(t) as

m(t) = (1/2)[m_+(t) + m_−(t)]

where m_+(t) is the pre-envelope of m(t), and m_−(t) = m_+*(t) is its complex conjugate. The Fourier transforms of m_+(t) and m_−(t) are defined by (see Appendix 2)

M_+(f) = { 2M(f),  f > 0
         { 0,      f < 0

M_−(f) = { 0,      f > 0
         { 2M(f),  f < 0

Comparing the spectrum of s(t) with that of m(t), we see that s(t) may be expressed in terms of m_+(t) and m_−(t) as follows:

s(t) = (1/4)m_+(t)exp(j2πf_b t) + (1/4)m_−(t)exp(−j2πf_b t)
     = (1/4)[m(t) + jm̂(t)]exp(j2πf_b t) + (1/4)[m(t) − jm̂(t)]exp(−j2πf_b t)
     = (1/2)m(t)cos(2πf_b t) − (1/2)m̂(t)sin(2πf_b t)

(b) With s(t) as input, the first product modulator output is

v_1(t) = s(t)cos(2πf_c t)
76
[Sketches of the amplitude spectra |M(f)|, |V_1(f)|, |V_2(f)|, |V_3(f)|, and |S(f)|.]
77
Problem 2.20
(a) Consider the system described in Fig. la, where u(t) denotes the product modulator output, as shown by
u(t) = A_c m(t)cos(2πf_c t)

[Block diagrams: (a) m(t) → product modulator → u(t) → bandpass filter H(f) → modulated signal s(t); (b) s(t) → product modulator → v(t) → low-pass filter → demodulated signal v_o(t).]
Figure 1: (a) Filtering scheme for processing sidebands. (b) Coherent detector for recovering the message signal.
Let H(f) denote the transfer function of the filter following the product modulator. The spectrum of the modulated signal s(t) produced by passing u(t) through the filter is given by
S(f) = U(f)H(f)
     = (A_c/2)[M(f − f_c) + M(f + f_c)]H(f)    (1)
where M(f) is the Fourier transform of the message signal m(t). The problem we wish to address is to determine the particular H(f) required to produce a modulated signal s(t) with desired spectral characteristics, such that the original message signal m(t) may be recovered from s(t) by coherent detection.
The first step in the coherent detection process involves multiplying the modulated signal s(t) by a locally generated sinusoidal wave A'_c cos(2πf_c t), which is synchronous with the carrier wave A_c cos(2πf_c t) in both frequency and phase, as in Fig. 1b. We may thus write
78
v(t) = A'_c cos(2πf_c t)s(t)

Transforming this relation into the frequency domain gives the Fourier transform of v(t) as

V(f) = (A'_c/2)[S(f − f_c) + S(f + f_c)]    (2)
Therefore, substitution of Eq. (1) in (2) yields
Therefore, substitution of Eq. (1) in (2) yields

V(f) = (A_c A'_c/4)M(f)[H(f − f_c) + H(f + f_c)]
       + (A_c A'_c/4)[M(f − 2f_c)H(f − f_c) + M(f + 2f_c)H(f + f_c)]    (3)
(b) The high-frequency components of v(t) represented by the second term in Eq. (3) are removed by the low-pass filter in Fig. 1b to produce an output v_o(t), the spectrum of which is given by the remaining components:

V_o(f) = (A_c A'_c/4)M(f)[H(f − f_c) + H(f + f_c)]    (4)
For a distortionless reproduction of the original baseband signal met) at the coherent detector output, we require Vo(f) to be a scaled version of M(f). This means, therefore, that the transfer
function H(f) must satisfy the condition
H(f − f_c) + H(f + f_c) = 2H(f_c)
(5)
where H(f_c), the value of H(f) at f = f_c, is a constant. When the message (baseband) spectrum M(f) is zero outside the frequency range −W ≤ f ≤ W, we need only satisfy Eq. (5) for values of f in this interval. Also, to simplify the exposition, we set H(f_c) = 1/2. We thus require that H(f) satisfies the condition

H(f − f_c) + H(f + f_c) = 1,  −W ≤ f ≤ W    (6)
Under the condition described in Eq. (6), we find from Eq. (4) that the coherent detector output in Fig. Ib is given by
Under the condition described in Eq. (6), we find from Eq. (4) that the coherent detector output in Fig. 1b is given by

v_o(t) = (A_c A'_c/4)m(t)    (7)
79
Equation (1) defines the spectrum of the modulated signal s(t). Recognizing that s(t) is a bandpass signal, we may formulate its time-domain description in terms of in-phase and quadrature components. In particular, s(t) may be expressed in the canonical form

s(t) = s_I(t)cos(2πf_c t) − s_Q(t)sin(2πf_c t)    (8)
where s_I(t) is the in-phase component of s(t), and s_Q(t) is its quadrature component. To determine s_I(t), we note that its Fourier transform is related to the Fourier transform of s(t) as follows:
S_I(f) = { S(f − f_c) + S(f + f_c),  −W ≤ f ≤ W
         { 0,                        elsewhere    (9)

Hence, substituting Eq. (1) in (9), we find that the Fourier transform of s_I(t) is given by

S_I(f) = (A_c/2)M(f)[H(f − f_c) + H(f + f_c)]
       = (A_c/2)M(f),  −W ≤ f ≤ W    (10)

where, in the second line, we have made use of the condition in Eq. (6) imposed on H(f). From Eq. (10) we readily see that the in-phase component of the modulated signal s(t) is defined by

s_I(t) = (A_c/2)m(t)    (11)
which, except for a scaling factor, is the same as the original message signal met).
To determine the quadrature component s_Q(t) of the modulated signal s(t), we recognize that its Fourier transform is defined in terms of the Fourier transform of s(t) as follows:
S_Q(f) = { j[S(f − f_c) − S(f + f_c)],  −W ≤ f ≤ W
         { 0,                           elsewhere    (12)

Therefore, substituting Eq. (1) in (12), we get

S_Q(f) = j(A_c/2)M(f)[H(f − f_c) − H(f + f_c)]    (13)
80
This equation suggests that we may generate s_Q(t), except for a scaling factor, by passing the message signal m(t) through a new filter whose transfer function is related to that of the filter in Fig. 1a as follows:
H_Q(f) = j[H(f − f_c) − H(f + f_c)],  −W ≤ f ≤ W    (14)
Let m'(t) denote the output of this filter produced in response to the input m(t). Hence, we may express the quadrature component of the modulated signal s(t) as

s_Q(t) = (A_c/2)m'(t)    (15)

Accordingly, substituting Eqs. (11) and (15) in (8), we find that s(t) may be written in the canonical form

s(t) = (A_c/2)m(t)cos(2πf_c t) − (A_c/2)m'(t)sin(2πf_c t)    (16)
There are two important points to note here:
1. The in-phase component s_I(t) is completely independent of the transfer function H(f) of the bandpass filter involved in the generation of the modulated wave s(t) in Fig. 1a, so long as it satisfies the condition of Eq. (6).
2. The spectral modification attributed to the transfer function H(f) is confined solely to the quadrature component s_Q(t).
The role of the quadrature component is merely to interfere with the inphase component, so as to reduce or eliminate power in one of the sidebands of the modulated signal set), depending on the application of interest.
81
Problem 2.21
(a) Expanding s(t), we get

s(t) = (1/2)a A_m A_c cos(2πf_c t)cos(2πf_m t) − (1/2)a A_m A_c sin(2πf_c t)sin(2πf_m t)
       + (1/2)(1 − a)A_m A_c cos(2πf_c t)cos(2πf_m t) + (1/2)(1 − a)A_m A_c sin(2πf_c t)sin(2πf_m t)
     = (1/2)A_m A_c cos(2πf_c t)cos(2πf_m t) + (1/2)A_m A_c(1 − 2a)sin(2πf_c t)sin(2πf_m t)

Therefore, the quadrature component is

−(1/2)A_m A_c(1 − 2a)sin(2πf_m t)

(b) After adding the carrier, the signal becomes

s(t) = A_c[1 + (1/2)A_m cos(2πf_m t)]cos(2πf_c t) + (1/2)A_m A_c(1 − 2a)sin(2πf_c t)sin(2πf_m t)

The envelope equals

A_c[1 + (1/2)A_m cos(2πf_m t)]d(t)

where d(t) is the distortion term.

(c) d(t) is greatest when a = 0.
82
Problem 2.22
Consider an incoming narrowband signal of bandwidth 10 kHz, and midband frequency which may lie in the range 0.535 to 1.605 MHz. It is required to translate this signal to a fixed frequency band centered at 0.455 MHz. The problem is to determine the range of tuning that must be provided in the local oscillator.

Let f_c denote the midband frequency of the incoming signal, and f_l denote the local oscillator frequency. Then we may write

0.535 < f_c < 1.605

and

f_l = f_c − 0.455

where both f_c and f_l are expressed in MHz. When f_c = 0.535 MHz, we get f_l = 0.08 MHz; and when f_c = 1.605 MHz, we get f_l = 1.15 MHz. Thus the required range of tuning of the local oscillator is 0.08 to 1.15 MHz.
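The tuning-range arithmetic is short enough to check directly (the values are taken from the problem statement):

```python
# Local-oscillator tuning range: f_l = f_c - f_IF, with the incoming
# carrier f_c in 0.535-1.605 MHz and intermediate frequency 0.455 MHz.
f_if = 0.455
f_c_low, f_c_high = 0.535, 1.605

f_l_low = f_c_low - f_if      # lowest local-oscillator frequency, MHz
f_l_high = f_c_high - f_if    # highest local-oscillator frequency, MHz
print(round(f_l_low, 2), round(f_l_high, 2))
```

Note that the tuning ratio of the oscillator (about 14:1) is much larger than that of the incoming carrier (about 3:1), which is why the sum frequency f_c + 0.455 MHz is often preferred in practice.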
83
Problem 2.23
Let s(t) denote the multiplier output, as shown by

s(t) = A g(t)cos(2πf_c t)

where f_c lies in the range f_0 to f_0 + W. The amplitude spectra of s(t) and g(t) are related as follows:

[Sketches of |G(f)|, with |G(0)| marked, and of |S(f)|, which is approximately (A/2)|G(0)| in height and centered on ±f_c, for f_0 ≤ f_c ≤ f_0 + W.]

With v(t) denoting the bandpass filter output, we thus find that the Fourier transform of v(t) is approximately given by

V(f) ≈ (1/2)A G(f_c − f_0),  f_0 − Δf/2 ≤ |f| ≤ f_0 + Δf/2

The rms meter output is therefore (by using Rayleigh's energy theorem)

V_rms = [∫_{−∞}^{∞} v²(t)dt]^{1/2}
      = [∫_{−∞}^{∞} |V(f)|²df]^{1/2}
      = A|G(f_c − f_0)|(Δf/2)^{1/2}
Problem 2.24
For the PM case,

s(t) = A_c cos[2πf_c t + k_p m(t)]

The angle equals

θ_i(t) = 2πf_c t + k_p m(t)

The instantaneous frequency,

f_i(t) = f_c + (A k_p)/(2πT_0) − (A k_p/2π) Σ_n δ(t − nT_0)

is equal to f_c + A k_p/(2πT_0) except at the instants when the message signal has discontinuities. At these instants, the phase shifts by k_p A radians.

For the FM case, f_i(t) = f_c + k_f m(t).

[Sketches of the PM and FM waves s(t) versus time.]
Problem 2.25
The instantaneous frequency of the mixer output is as shown below:

[Sketch of the difference frequency versus time.]

The presence of negative frequency merely indicates that the phasor representing the difference frequency at the mixer output has reversed its direction of rotation.
Let N denote the number of beat cycles in one period. Then, noting that N is equal to the shaded area shown above, we deduce that
Since f_0 T << 1, we have
Therefore, the number of beat cycles counted over one second is equal to
86
Problem 2.26
The instantaneous frequency of the modulated wave s(t) is as shown below:

[Sketch of f_i(t): equal to f_c for |t| > T/2, and to f_c + Δf for −T/2 ≤ t ≤ T/2.]

We may thus express s(t) as follows:

s(t) = { cos(2πf_c t),          t < −T/2
       { cos[2π(f_c + Δf)t],    −T/2 ≤ t ≤ T/2
       { cos(2πf_c t),          t > T/2

The Fourier transform of s(t) is therefore

S(f) = ∫_{−∞}^{∞} cos(2πf_c t)exp(−j2πft)dt
       + ∫_{−T/2}^{T/2} {cos[2π(f_c + Δf)t] − cos(2πf_c t)}exp(−j2πft)dt    (1)
87
The second term of Eq. (1) is recognized as the difference between the Fourier transforms of two RF pulses of unit amplitude, one having a frequency equal to f_c + Δf and the other a frequency equal to f_c. Hence, assuming that f_c T >> 1, we may express S(f) as follows:

S(f) ≈ (1/2)δ(f − f_c) + (T/2)sinc[T(f − f_c − Δf)] − (T/2)sinc[T(f − f_c)],  f > 0
Problem 2.27
For SSB modulation, the modulated wave is

s(t) = (A_c/2)[m(t)cos(2πf_c t) ∓ m̂(t)sin(2πf_c t)]

the minus sign applying when transmitting the upper sideband and the plus sign applying when transmitting the lower one. Regardless of the sign, the envelope is

a(t) = (A_c/2)[m²(t) + m̂²(t)]^{1/2}

(a) For upper sideband transmission, the angle is

θ_i(t) = 2πf_c t + tan⁻¹[m̂(t)/m(t)]

The instantaneous frequency is

f_i(t) = f_c + (1/2π) · [m(t)m̂'(t) − m̂(t)m'(t)]/[m²(t) + m̂²(t)]

where the prime denotes the time derivative.

(b) For lower sideband transmission, we have

θ_i(t) = 2πf_c t − tan⁻¹[m̂(t)/m(t)]

and

f_i(t) = f_c − (1/2π) · [m(t)m̂'(t) − m̂(t)m'(t)]/[m²(t) + m̂²(t)]
Problem 2.28
(a) The envelope of the FM wave s(t) is

a(t) = A_c[1 + β² sin²(2πf_m t)]^{1/2}

The maximum value of the envelope is

a_max = A_c(1 + β²)^{1/2}

and its minimum value is

a_min = A_c

Therefore,

a_max/a_min = (1 + β²)^{1/2}

This ratio is shown plotted below for 0 ≤ β ≤ 0.3:

[Plot of a_max/a_min versus β, rising from 1 at β = 0 through 1.005, 1.02, and 1.044 at β = 0.1, 0.2, and 0.3.]
(b) Expressing s(t) in terms of its frequency components:

s(t) = A_c cos(2πf_c t) + (βA_c/2)cos[2π(f_c + f_m)t] − (βA_c/2)cos[2π(f_c − f_m)t]

The mean power of s(t) is therefore

P = A_c²/2 + β²A_c²/4

The mean power of the unmodulated carrier is

P_c = A_c²/2

Therefore,

P/P_c = 1 + β²/2

which is shown plotted below for 0 ≤ β ≤ 0.3:

[Plot of P/P_c versus β, rising from 1 at β = 0 through 1.005, 1.02, and 1.045 at β = 0.1, 0.2, and 0.3.]
90
(c) The angle θ_i(t), expressed in terms of the in-phase component s_I(t) and the quadrature component s_Q(t), is

θ_i(t) = 2πf_c t + tan⁻¹[s_Q(t)/s_I(t)]
       = 2πf_c t + tan⁻¹[β sin(2πf_m t)]

Since tan⁻¹(x) = x − x³/3 + ..., the harmonic distortion is the power ratio of the third and first harmonics:
Problem 2.29
(a) The phase-modulated wave is

s(t) = A_c cos[2πf_c t + k_p A_m cos(2πf_m t)]
     = A_c cos[2πf_c t + β_p cos(2πf_m t)]
     = A_c cos(2πf_c t)cos[β_p cos(2πf_m t)] − A_c sin(2πf_c t)sin[β_p cos(2πf_m t)]    (1)

If β_p ≤ 0.5, then

cos[β_p cos(2πf_m t)] ≈ 1
sin[β_p cos(2πf_m t)] ≈ β_p cos(2πf_m t)

Hence, we may rewrite Eq. (1) as

s(t) ≈ A_c cos(2πf_c t) − β_p A_c sin(2πf_c t)cos(2πf_m t)
     = A_c cos(2πf_c t) − (1/2)β_p A_c sin[2π(f_c + f_m)t] − (1/2)β_p A_c sin[2π(f_c − f_m)t]    (2)

The spectrum of s(t) is therefore

S(f) = (A_c/2)[δ(f − f_c) + δ(f + f_c)]
       − (β_p A_c/4j)[δ(f − f_c − f_m) − δ(f + f_c + f_m) + δ(f − f_c + f_m) − δ(f + f_c − f_m)]
(b) The phasor diagram for s(t) is deduced from Eq. (2) to be as follows:

[Phasor diagram for the narrowband PM wave: carrier phasor, with upper and lower side-frequency phasors each of length (1/2)β_p A_c, rotating at ±2πf_m t and combining along the direction perpendicular to the carrier.]

The corresponding phasor diagram for the narrowband FM wave is as follows:

[Phasor diagram for the narrowband FM wave, of the same form as that of the PM wave except for a phase difference.]
Comparing these two phasor diagrams, we see that, except .. for a phase difference, the narrowband PM and FM waves are of exactly the same form. /"
Problem 2.30
The phase-modulated wave is

    s(t) = Ac cos[2πfct + βp cos(2πfmt)]

The complex envelope of s(t) is

    s̃(t) = Ac exp[jβp cos(2πfmt)]

Expressing s̃(t) in the form of a complex Fourier series, we have

    s̃(t) = Σ (n = −∞ to ∞) cn exp(j2πnfmt)

where

    cn = fm ∫ (−1/2fm to 1/2fm) s̃(t) exp(−j2πnfmt) dt
       = Acfm ∫ (−1/2fm to 1/2fm) exp[jβp cos(2πfmt) − j2πnfmt] dt      (1)
Let 2πfmt = π/2 − φ. Then we may rewrite Eq. (1) as

    cn = (Ac/2π) exp(−jnπ/2) ∫ (−π/2 to 3π/2) exp[jβp sin φ + jnφ] dφ

The integrand is periodic with respect to φ with a period of 2π. Hence, we may rewrite this expression as

    cn = (Ac/2π) exp(−jnπ/2) ∫ (−π to π) exp[jβp sin φ + jnφ] dφ

However, from the definition of the Bessel function of the first kind of order n, we have

    Jn(x) = (1/2π) ∫ (−π to π) exp(jx sin φ − jnφ) dφ
Therefore,

    cn = Ac J−n(βp) exp(−jnπ/2)

We may thus express the PM wave s(t) as

    s(t) = Re[s̃(t) exp(j2πfct)]
         = Ac Re[ Σ (n = −∞ to ∞) J−n(βp) exp(−jnπ/2) exp(j2πnfmt) exp(j2πfct) ]
         = Ac Σ (n = −∞ to ∞) J−n(βp) cos[2π(fc + nfm)t − nπ/2]

The bandpass filter only passes the carrier, the first upper side-frequency, and the first lower side-frequency, so that the resulting output is

    so(t) = Ac J0(βp) cos(2πfct) + Ac J−1(βp) cos[2π(fc + fm)t − π/2]
            + Ac J1(βp) cos[2π(fc − fm)t + π/2]
          = Ac J0(βp) cos(2πfct) + Ac J−1(βp) sin[2π(fc + fm)t]
            − Ac J1(βp) sin[2π(fc − fm)t]

But

    J−1(βp) = −J1(βp)

Therefore,

    so(t) = Ac J0(βp) cos(2πfct) − Ac J1(βp){sin[2π(fc + fm)t] + sin[2π(fc − fm)t]}
          = Ac J0(βp) cos(2πfct) − 2Ac J1(βp) cos(2πfmt) sin(2πfct)

The envelope of so(t) equals

    a(t) = Ac[J0²(βp) + 4J1²(βp) cos²(2πfmt)]^(1/2)
The phase of so(t) is

    φ(t) = tan⁻¹[(2J1(βp)/J0(βp)) cos(2πfmt)]

The instantaneous frequency of so(t) is

    fi(t) = fc + (1/2π) dφ(t)/dt
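The key intermediate result, cn = Ac J−n(βp) exp(−jnπ/2), can be verified numerically by comparing the FFT coefficients of the complex envelope against the Bessel function computed from its integral definition (Python sketch for illustration; βp, the grid sizes, and the helper J are assumptions of this check, not part of the solution):

```python
import numpy as np

# Check that the Fourier coefficients of the complex envelope
# s~(t) = Ac*exp(j*beta_p*cos(2*pi*fm*t)) are c_n = Ac*exp(-j*n*pi/2)*J_{-n}(beta_p).
Ac, beta_p, N = 1.0, 0.5, 4096
t = np.arange(N) / N  # one period with fm = 1 Hz
env = Ac * np.exp(1j * beta_p * np.cos(2 * np.pi * t))

def J(n, x, M=20_000):
    # Bessel function of the first kind via its integral definition:
    # J_n(x) = (1/2pi) * integral_{-pi}^{pi} exp(j*x*sin(phi) - j*n*phi) dphi.
    phi = -np.pi + 2 * np.pi * np.arange(M) / M
    return np.mean(np.exp(1j * x * np.sin(phi) - 1j * n * phi)).real

c = np.fft.fft(env) / N  # c[n] holds c_n (negative n wrap around, mod N)
for n in range(-3, 4):
    predicted = Ac * np.exp(-1j * n * np.pi / 2) * J(-n, beta_p)
    assert abs(c[n % N] - predicted) < 1e-8
```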
Problem 2.31
(a) From Table A4.1, we find (by interpolation) that J0(β) is zero for

    β = 2.44, 5.52, 8.65, 11.8, and so on.
(b) The modulation index is

    β = Δf/fm = kfAm/fm

Therefore,

    βfm = kfAm

Since J0(β) = 0 for the first time when β = 2.44, we deduce that

    kf = (2.44 × 10³)/2 = 1.22 × 10³ hertz/volt

Next, we note that J0(β) = 0 for the second time when β = 5.52. Hence, the corresponding value of Am for which the carrier component is reduced to zero is

    Am = βfm/kf = (5.52 × 10³)/(1.22 × 10³) = 4.52 volts
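The arithmetic above can be restated as a short check (Python for illustration; it simply re-derives kf and Am from the carrier-null condition β = kf·Am/fm and the quoted zeros of J0):

```python
# Carrier nulls of an FM wave occur where J0(beta) = 0, i.e. beta = 2.44, 5.52, ...
# From the solution above, fm = 1 kHz and the first null occurs at Am = 2 V.
fm = 1.0e3            # modulation frequency, Hz
kf = 2.44 * fm / 2.0  # first null: beta = 2.44 when Am = 2 V
assert abs(kf - 1.22e3) < 1e-9

Am2 = 5.52 * fm / kf  # amplitude giving the second null, beta = 5.52
assert round(Am2, 2) == 4.52  # volts
```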
Problem 2.32
For β = 1, we have

    J0(1) = 0.765
    J1(1) = 0.44
    J2(1) = 0.115

Therefore, the bandpass filter output is (assuming a carrier amplitude of 1 volt)

    so(t) = 0.765 cos(2πfct)
            + 0.44 {cos[2π(fc + fm)t] − cos[2π(fc − fm)t]}
            + 0.115 {cos[2π(fc + 2fm)t] + cos[2π(fc − 2fm)t]}

and the amplitude spectrum (for positive frequencies) is

    [Spectrum sketch: lines of height 0.3825 at f = fc, 0.22 at f = fc ± fm, and 0.058 at f = fc ± 2fm.]
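The tabulated Bessel values can be confirmed from the integral definition used in Problem 2.30 (Python sketch for illustration; the helper J and its grid size are assumptions of this check):

```python
import numpy as np

def J(n, x, M=20_000):
    # Bessel function of the first kind from its integral definition:
    # J_n(x) = (1/2pi) * integral_{-pi}^{pi} exp(j*x*sin(phi) - j*n*phi) dphi.
    phi = -np.pi + 2 * np.pi * np.arange(M) / M
    return np.mean(np.exp(1j * x * np.sin(phi) - 1j * n * phi)).real

# Values used above, rounded to the precision quoted in the solution.
assert round(J(0, 1.0), 3) == 0.765
assert round(J(1, 1.0), 2) == 0.44
assert round(J(2, 1.0), 3) == 0.115
```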